Compare commits

...

139 Commits

Author SHA1 Message Date
Waleed
2dbc7fdddf v0.6.47: files focusing, documentation, opus 4.7 2026-04-16 12:57:12 -07:00
Waleed
e16c8e6f70 fix(ui): stop terminal auto-select from stealing copilot input focus (#4201) 2026-04-16 12:44:26 -07:00
Waleed
9f41736d50 fix(misc): remove duplicate docs page, update claude opus 4.7 (#4200)
* fix(misc): remove duplicate docs page, update claude opus 4.7

* fix(docs): consolidate duplicate docs and fix SDK API signatures

- Remove duplicate custom-tools page (custom-tools/index.mdx → tools/custom-tools.mdx is canonical)
- Remove comparison table from custom-tools per product preference
- Fix permissions inconsistency: delete now requires Admin across all docs
- Consolidate sdks/ into api-reference/ (sdks/ directory deleted)
- Fix Python SDK docs: correct param is `input`, not `input_data`
- Fix TypeScript SDK docs: correct signature is executeWorkflow(id, input, options), not the options-object form (see the sketch after this list)
- Add FAQ sections to both SDK reference pages
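
A minimal sketch of the corrected call shape. Only the positional executeWorkflow(id, input, options) signature comes from the fix above; the package name, client setup, and option fields are illustrative assumptions:

```ts
import { SimClient } from 'sim-sdk' // package name is an assumption

const client = new SimClient({ apiKey: process.env.SIM_API_KEY ?? '' })

const result = await client.executeWorkflow(
  'wf_123',           // workflow id
  { city: 'Paris' },  // input — the Python SDK's matching param is `input`, not `input_data`
  { timeout: 30_000 } // options come last; the timeout field is illustrative
)
```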

* fix(docs): update SDKs card links from /sdks to /api-reference

* fix(docs): update /sdks references to /api-reference in llms.txt files
2026-04-16 12:33:25 -07:00
Waleed
147ac89672 feat(docs): fill documentation gaps across platform features (#4110)
* feat(docs): fill documentation gaps across platform features

* fix(docs): address PR review comments on chat OTP cookies and MCP env var placeholders

* fix(docs): replace smart quotes with straight quotes in JSX attributes

* update(docs): update mcp, custom tools, and variables docs

* Fix grammar

* mothership docs, tags, connectors, api, chat deploy, etc

* more info

* more

* feat(docs): auto-generate per-provider trigger documentation

Extends scripts/generate-docs.ts to produce one MDX page per trigger
provider (39 pages) in apps/docs/content/docs/en/triggers/. The 5
hand-written pages (index, start, schedule, webhook, rss) are never
touched.

Key additions to the generation script:
- resolveConstVariable() resolves module-level const spreads so
  providers like Vercel that build outputs from const variables (not
  just functions) are fully documented
- resolveTriggerBuilderFunction() extended to expand variable spreads
  (...varName) in addition to function-call spreads (...fn())
- groupTriggersByProvider() deduplicates v1/v2 trigger variants by
  name, keeping the highest-versioned one per provider
- writeIconMapping() adds bare-name aliases for versioned block types
  (github_v2 → github, fireflies_v2 → fireflies, etc.) so
  BlockInfoCard resolves icons for all 39 trigger providers
- extractTriggerConfigFields() filters readOnly display blocks (webhook
  URL displays, sample payloads, curl examples) from config tables

Each generated page includes: BlockInfoCard with correct icon/color,
trigger count, polling note where applicable, Configuration table, and
Output table for every trigger. No "Type:" lines.
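
A minimal sketch of the v1/v2 dedup rule in groupTriggersByProvider(): keep the highest-versioned variant per trigger name. The types and field names are assumptions; only the highest-version-wins behavior comes from the message:

```ts
interface TriggerDef {
  name: string      // e.g. 'github'
  blockType: string // e.g. 'github_v2'
}

function versionOf(blockType: string): number {
  const match = blockType.match(/_v(\d+)$/)
  return match ? Number(match[1]) : 1 // unversioned block types count as v1
}

function dedupeByHighestVersion(triggers: TriggerDef[]): TriggerDef[] {
  const best = new Map<string, TriggerDef>()
  for (const trigger of triggers) {
    const current = best.get(trigger.name)
    if (!current || versionOf(trigger.blockType) > versionOf(current.blockType)) {
      best.set(trigger.name, trigger)
    }
  }
  return [...best.values()]
}
```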

* refactor(docs): align trigger docs structure with tools docs

- Use ### `trigger_id` headings (matching ### `tool_id` in tools docs)
- Wrap all trigger sections under a ## Triggers header
- Rename Configuration/Output to #### level (matching #### Input/Output)
- Use Parameter column header to match tools docs table style
- Map UI widget types to semantic types: short-input/long-input/dropdown
  → string, switch → boolean, slider → number, oauth-input → string

* refactor(docs): use human-readable names for trigger section headings

Trigger IDs are internal identifiers; users scan by name. Switch from
### `trigger_id` to ### Trigger Name for cleaner sidebar navigation
and better readability.

* fix(docs): resolve subBlock builder functions for all trigger Config sections

Extends generate-docs.ts to parse subBlock builder functions so all 15
providers previously missing Configuration sections now generate them.

Handles three patterns:
- `buildTriggerSubBlocks({extraFields: buildX(...)})` — extracts extra
  fields from the call site and resolves them from the provider's utils.ts
- `return [...]` — direct array return (Attio, Confluence, etc.)
- `blocks.push(...)` — imperative push pattern (Linear, Ashby)

Also resolves const-reference field IDs (SCREAMING_CASE) by searching
the webhook provider constants cache, fixing Gong's `gongJwtPublicKeyPem`
field which was previously unresolvable. Adds title-as-description fallback
for OAuth credential fields that have no explicit description.

* fix(docs): correctly destructure nested implicit-object trigger outputs

Fixes a parser bug where output fields with no top-level `type` key but
child fields each having their own `type`/`description` were incorrectly
parsed. The `type:` and `description:` regex matches were not
depth-aware, so values from nested children bled into the parent field.

Changes:
- Add `isAtDepthZero()` helper for brace-depth-aware regex matching
- Fix `parseFieldContent` to only match `type:` at brace depth 0
- Fix `extractDescription` to only match `description:` at brace depth 0
- Add implicit-object fallback: when no top-level `type` exists but child
  fields have their own types, treat as `object` with `properties`
- Regenerate all affected trigger docs (Cal.com payload, Linear data,
  Jira issue.fields, Ashby application, Greenhouse candidate, etc.)
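
A minimal sketch of the depth-aware matching described above: a `type:` match only belongs to the parent field when it sits at brace depth 0 of the field body. The helper name comes from the commit; the field-body format is an assumption:

```ts
function isAtDepthZero(source: string, index: number): boolean {
  let depth = 0
  for (let i = 0; i < index; i++) {
    if (source[i] === '{') depth++
    else if (source[i] === '}') depth--
  }
  return depth === 0
}

function topLevelType(fieldBody: string): string | undefined {
  for (const match of fieldBody.matchAll(/type:\s*'([^']+)'/g)) {
    if (isAtDepthZero(fieldBody, match.index ?? 0)) return match[1]
  }
  // No top-level type: the implicit-object fallback treats the field as an
  // `object` whose child fields become `properties`.
  return undefined
}
```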

* chore(docs): update static trigger and start page images

* feat(providers): add claude-opus-4-7 model with adaptive thinking support

* Add workflow version screenshots

* Add function block screenshots

---------

Co-authored-by: Theodore Li <theo@sim.ai>
2026-04-16 11:51:49 -07:00
Theodore Li
4cdc941490 fix(ui): fix focusing bugs while editing files (#4197) 2026-04-16 14:21:10 -04:00
Waleed
387cc977fa v0.6.46: mothership queueing, web vitals 2026-04-16 00:12:50 -07:00
Waleed
0464a57601 fix(ui): posthog guard, dynamic import loading, compact variant, rebase cleanup (#4196)
* v0.6.29: login improvements, posthog telemetry (#4026)

* feat(posthog): Add tracking on mothership abort (#4023)

Co-authored-by: Theodore Li <theo@sim.ai>

* fix(login): fix captcha headers for manual login (#4025)

* fix(signup): fix turnstile key loading

* fix(login): fix captcha header passing

* Catch user already exists, remove login form captcha

* fix(ui): posthog guard, dynamic import loading, compact variant, rebase cleanup

---------

Co-authored-by: Theodore Li <theodoreqili@gmail.com>
2026-04-15 23:52:57 -07:00
Emir Karabeg
23ccd4a50c improvement(landing): optimize core web vitals and accessibility (#4193)
* improvement(landing): optimize core web vitals and accessibility

Code-split AuthModal and DemoRequestModal via next/dynamic across 7 landing
components to move auth-client bundle (~150-250KB) out of the initial JS payload.
Replace useSession import in navbar with direct SessionContext read to avoid
pulling the entire better-auth client into the landing page bundle. Add immutable
cache header for content-hashed _next/static assets. Defer PostHog session
recording until user identification to avoid loading the recorder (~80KB) on
anonymous visits. Fix accessibility issues flagged by Lighthouse: add missing
aria-label on preview submit button, add inert to aria-hidden ReactFlow wrapper,
set decorative alt on logos inside labeled links, disambiguate duplicate footer
API links.
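
A minimal sketch of the next/dynamic code-splitting described above. The component path is illustrative; `ssr: false` keeps the auth-client bundle out of the initial JS payload and loads it only when the modal is opened:

```ts
import dynamic from 'next/dynamic'

const AuthModal = dynamic(
  () => import('@/components/auth-modal').then((mod) => mod.AuthModal),
  { ssr: false } // excluded from the server render and the initial client bundle
)
```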

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(posthog): guard startSessionRecording against repeated calls on refetch

The effect fires on every session reload (e.g., subscription upgrade).
Calling startSessionRecording() while already recording fragments the
session in the analytics dashboard. Add sessionRecordingStarted() guard
so recording only starts once per page lifecycle.
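
A minimal sketch of the guard, using the posthog-js method named above; the identify step is illustrative:

```ts
import posthog from 'posthog-js'

function startRecordingOnce(userId: string) {
  posthog.identify(userId)
  // Without the check, a session reload (e.g. subscription upgrade) restarts
  // recording and fragments the session in the analytics dashboard.
  if (!posthog.sessionRecordingStarted()) {
    posthog.startSessionRecording()
  }
}
```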

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(config): remove redundant _next/static cache header

Next.js already sets Cache-Control: public, max-age=31536000, immutable
on _next/static assets natively and this cannot be overridden. The custom
rule was redundant on Vercel and conflicted with the extension-based rule
on self-hosted deployments due to last-match-wins ordering.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-15 19:56:16 -07:00
Theodore Li
ba6bc91681 fix(ui): fix attachment logic on queued mothership messages (#4191)
* fix(ui): fix attachment logic on queued mothership messages

* Add focus after hitting pencil button for queued message

* fix copilot layout
2026-04-15 21:42:48 -04:00
Vikhyath Mondreti
c0bc62c592 Merge pull request #4190 from simstudioai/staging
v0.6.46: mothership streaming fixes, brightdata integration
2026-04-15 17:28:28 -07:00
Vikhyath Mondreti
377712c9f3 fix(mothership): chat stream structuring + logs resource post fix (#4189)
* fix(mothership): chat streaming structure

* fix logs resource thinking bug

* address comments

* address comments
2026-04-15 16:43:22 -07:00
Waleed
6dddc3f796 fix(brightdata): fix async Discover API, echo-back fields, and registry ordering (#4188)
* fix(brightdata): use params for echo-back fields in transformResponse

transformResponse receives params as its second argument. Use it to
return the original url, query, snapshotId, and searchEngine values
instead of hardcoding null or extracting from response data that may
not contain them.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(brightdata): handle async Discover API with polling

The Bright Data Discover API is asynchronous — POST /discover returns
a task_id, and results must be polled via GET /discover?task_id=...
The previous implementation incorrectly treated it as synchronous,
always returning empty results.

Uses postProcess (matching Firecrawl crawl pattern) to poll every 3s
with a 120s timeout until status is "done".
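
A minimal sketch of that polling loop. The endpoint URL and response shape are assumptions; the 3s interval comes from the message, and the hardcoded 120s default is what a later commit replaces with the platform execution timeout:

```ts
async function pollDiscover(taskId: string, timeoutMs = 120_000): Promise<unknown> {
  const deadline = Date.now() + timeoutMs
  while (Date.now() < deadline) {
    const res = await fetch(
      `https://api.brightdata.com/datasets/discover?task_id=${encodeURIComponent(taskId)}`
      // auth headers omitted for brevity
    )
    const body = (await res.json()) as { status?: string; results?: unknown }
    if (body.status === 'done') return body.results
    await new Promise((resolve) => setTimeout(resolve, 3_000)) // poll every 3s
  }
  throw new Error(`Discover task ${taskId} did not complete within ${timeoutMs}ms`)
}
```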

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(brightdata): alphabetize block registry entry

Move box before brandfetch/brightdata to maintain alphabetical ordering.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* lint

* fix(brightdata): return error objects instead of throwing in postProcess

The executor wraps postProcess in try-catch and falls back to the
intermediate transformResponse result on error, which has success: true
with empty results. Throwing errors would silently return empty results.

Match Firecrawl's pattern: return { ...result, success: false, error }
instead of throwing. Also add taskId to BrightDataDiscoverResponse type
to eliminate unsafe casts.
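
A minimal sketch of that return-instead-of-throw pattern (pollDiscover is the helper sketched above; the result shape is an assumption):

```ts
type DiscoverResult = { success: boolean; taskId: string; results?: unknown; error?: string }

const DEFAULT_EXECUTION_TIMEOUT_MS = 300_000 // platform default; higher on paid plans

async function postProcess(intermediate: DiscoverResult): Promise<DiscoverResult> {
  try {
    const results = await pollDiscover(intermediate.taskId, DEFAULT_EXECUTION_TIMEOUT_MS)
    return { ...intermediate, success: true, results }
  } catch (error) {
    // Return a failed result rather than throwing: the executor's try-catch
    // would otherwise fall back to the success-shaped intermediate result,
    // silently surfacing empty results as a success.
    return {
      ...intermediate,
      success: false,
      error: error instanceof Error ? error.message : String(error),
    }
  }
}
```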

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(brightdata): use platform execution timeout for Discover polling

Replace hardcoded 120s timeout with DEFAULT_EXECUTION_TIMEOUT_MS to
match Firecrawl and other async polling tools. Respects platform-
configured limits (300s free, 3000s paid).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-15 16:20:39 -07:00
Siddharth Ganesan
010435c53b v0.6.45: superagent, csp, brightdata integration, gemini response format, logs performance improvements
fix(csp): add missing analytics domains, remove unsafe-eval, fix workspace CSP gap (#4179)
fix(landing): return 404 for invalid dynamic route slugs (#4182)
improvement(seo): optimize sitemaps, robots.txt, and core web vitals across sim and docs (#4170)
fix(gemini): support structured output with tools on Gemini 3 models (#4184)
feat(brightdata): add Bright Data integration with 8 tools (#4183)
fix(mothership): fix superagent credentials (#4185)
fix(logs): close sidebar when selected log disappears from filtered list; cleanup (#4186)
2026-04-15 13:20:27 -07:00
Waleed
cd8c5bd0b8 fix(logs): close sidebar when selected log disappears from filtered list + cleanup (#4186)
Derive sidebar open state from selection validity instead of using a
separate useEffect. Also removes unnecessary useMemo/useCallback in
non-memo'd components, replaces useEffect with render-time reset in
dashboard, fixes CSS tokens, and adds hierarchical query key factory.
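
A minimal sketch of deriving open state from selection validity (names are assumptions):

```ts
interface LogEntry { id: string }

// Derived on every render — no useEffect to keep in sync. The sidebar closes
// on the same render that filters the selected log out of the list.
function deriveSidebar(logs: LogEntry[], selectedLogId: string | null) {
  const selectedLog = logs.find((log) => log.id === selectedLogId)
  return { selectedLog, isSidebarOpen: selectedLog !== undefined }
}
```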

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-15 12:58:22 -07:00
Siddharth Ganesan
f0285adc38 fix(mothership): fix superagent credentials (#4185)
* Fix

* Fix ajv csp issue

* Lint
2026-04-15 12:52:02 -07:00
Waleed
a39dc158cf feat(brightdata): add Bright Data integration with 8 tools (#4183)
* feat(brightdata): add Bright Data integration with 8 tools

Add complete Bright Data integration supporting Web Unlocker, SERP API,
Discover API, and Web Scraper dataset operations. Includes scrape URL,
SERP search, discover, sync scrape, scrape dataset, snapshot status,
download snapshot, and cancel snapshot tools.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(brightdata): address PR review feedback

- Fix truncated "Download Snapshot" description in integrations.json and docs
- Map engine-specific query params (num/count/numdoc, hl/setLang/lang/kl,
  gl/cc/lr) per search engine instead of using Google-specific params for all
- Attempt to parse snapshot_id from cancel/download response bodies instead
  of hardcoding null

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* lint

* fix(agiloft): change bgColor to white; fix docs truncation

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(brightdata): avoid inner quotes in description to fix docs generation

The docs generator regex truncates at inner quotes. Reword the
download_snapshot description to avoid embedded double quotes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(brightdata): disable incompatible DuckDuckGo and Yandex URL params

DuckDuckGo kl expects region-language format (us-en) and Yandex lr
expects numeric region IDs (213), not plain two-letter codes. Disable
these URL-level params since Bright Data normalizes localization through
the body-level country param.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-15 12:47:02 -07:00
Waleed
05c1c5b1f6 fix(gemini): support structured output with tools on Gemini 3 models (#4184)
* v0.6.29: login improvements, posthog telemetry (#4026)

* feat(posthog): Add tracking on mothership abort (#4023)

Co-authored-by: Theodore Li <theo@sim.ai>

* fix(login): fix captcha headers for manual login (#4025)

* fix(signup): fix turnstile key loading

* fix(login): fix captcha header passing

* Catch user already exists, remove login form captcha

* fix(gemini): support structured output with tools on Gemini 3 models

* fix(home): remove duplicate handleStopGeneration declaration

* refactor(gemini): use prefix-based Gemini 3 model detection

---------

Co-authored-by: Theodore Li <theodoreqili@gmail.com>
2026-04-15 12:21:39 -07:00
Emir Karabeg
5274efd8f9 improvement(seo): optimize sitemaps, robots.txt, and core web vitals across sim and docs (#4170)
* improvement(seo): optimize sitemaps and robots.txt across sim and docs

- Add missing pages to sim sitemap: blog author pages, academy catalog and course pages
- Fix 6x duplicate URL bug in docs sitemap by deduplicating with source.getLanguages()
- Convert docs sitemap from route handler to Next.js metadata convention with native hreflang
- Add x-default hreflang alternate for docs multi-language pages (sketched after this list)
- Remove changeFrequency and priority fields (Google ignores both)
- Fix inaccurate lastModified timestamps — derive from real content dates, omit when unknown
- Consolidate 20+ redundant per-bot robots rules into single wildcard entry
- Add /form/ and /credential-account/ to sim robots disallow list
- Reference image sitemap in sim robots.txt
- Remove deprecated host directive from sim robots
- Move disallow rules before allow in docs robots for crawler compatibility
- Extract hardcoded docs baseUrl to env variable with production fallback
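
A minimal sketch of the sitemap as a Next.js metadata route (app/sitemap.ts) with native hreflang alternates. The URLs and locales are illustrative; x-default support as described in the message assumes a Next version that accepts it:

```ts
import type { MetadataRoute } from 'next'

const DOCS_BASE_URL = process.env.DOCS_BASE_URL ?? 'https://docs.sim.ai'

export default function sitemap(): MetadataRoute.Sitemap {
  return [
    {
      url: `${DOCS_BASE_URL}/en/triggers`,
      alternates: {
        languages: {
          'x-default': `${DOCS_BASE_URL}/en/triggers`,
          en: `${DOCS_BASE_URL}/en/triggers`,
          fr: `${DOCS_BASE_URL}/fr/triggers`,
        },
      },
      // changeFrequency and priority intentionally omitted — Google ignores both
    },
  ]
}
```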

* fix(seo): remove homepage new Date(), guard latestModelDate empty array

* improvement(seo): consolidate DOCS_BASE_URL, optimize core web vitals

Extract hardcoded https://docs.sim.ai into shared DOCS_BASE_URL constant
in lib/urls.ts and replace all 20+ instances across layouts, metadata,
structured data, LLM manifest, sitemap, and robots files. Remove
OneDollarStats analytics script and tighten CSP for improved core web vitals.

* fix: removed onedollarstats from bun lock

* fix(seo): guard per-provider Math.max, consolidate docs robots to single wildcard
2026-04-15 12:13:30 -07:00
Waleed
0b36c8bcb6 fix(landing): return 404 for invalid dynamic route slugs (#4182)
* v0.6.29: login improvements, posthog telemetry (#4026)

* feat(posthog): Add tracking on mothership abort (#4023)

Co-authored-by: Theodore Li <theo@sim.ai>

* fix(login): fix captcha headers for manual login (#4025)

* fix(signup): fix turnstile key loading

* fix(login): fix captcha header passing

* Catch user already exists, remove login form captcha

* fix(landing): return 404 for invalid dynamic route slugs

Add `dynamicParams = false` to all landing page dynamic routes so
Next.js returns a proper 404 instead of a client-side exception for
slugs not in generateStaticParams.
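
The fix in route-file form — `dynamicParams` and `generateStaticParams` are standard App Router exports; the route path and slugs are illustrative (e.g. app/blog/[slug]/page.tsx):

```ts
export const dynamicParams = false // unknown slugs get a proper 404, not a client exception

export async function generateStaticParams() {
  return [{ slug: 'intro' }, { slug: 'changelog' }]
}
```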

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(home): remove duplicate handleStopGeneration declaration

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Theodore Li <theodoreqili@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-15 11:48:39 -07:00
Waleed
842aa2c254 fix(csp): add missing analytics domains, remove unsafe-eval, fix workspace CSP gap (#4179) 2026-04-15 10:04:53 -07:00
Waleed
46ffc4904e v0.6.44: streamdown, mothership intelligence, excel extension 2026-04-14 22:13:57 -07:00
Waleed
ff71a07e8f improvement(ui): rename user-facing "execution" to "run" (#4176)
* v0.6.29: login improvements, posthog telemetry (#4026)

* feat(posthog): Add tracking on mothership abort (#4023)

Co-authored-by: Theodore Li <theo@sim.ai>

* fix(login): fix captcha headers for manual login (#4025)

* fix(signup): fix turnstile key loading

* fix(login): fix captcha header passing

* Catch user already exists, remove login form captcha

* improvement(ui): rename user-facing "execution" to "run"

* fix(mothership): remove duplicate handleStopGeneration declaration

* chore: remove verbose comment in cancel route

* fix(ui): missed execution → run renames in search suggestions and error fallback

---------

Co-authored-by: Theodore Li <theodoreqili@gmail.com>
2026-04-14 21:49:20 -07:00
Waleed
22d4639f13 refactor(microsoft-excel): export GRAPH_ID_PATTERN and deduplicate validation (#4174)
* refactor(microsoft-excel): export GRAPH_ID_PATTERN and reuse across routes

Export the shared regex pattern from utils.ts and import it in files/route.ts
and drives/route.ts instead of duplicating the inline pattern. Also reorders
the TSDoc comment to sit above getItemBasePath where it belongs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* lint

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-14 21:34:54 -07:00
Waleed
80095788fc feat(microsoft-excel): add SharePoint drive support for Excel integration (#4162)
* feat(microsoft-excel): add SharePoint drive support for Excel integration

* fix(microsoft-excel): address PR review comments

- Validate siteId/driveId format in drives route to prevent path traversal
- Use direct single-drive endpoint for fetchById instead of filtering full list
- Fix dependsOn on sheet/spreadsheet selectors so driveId flows into context
- Fix NextRequest type in drives route for build compatibility

* fix(microsoft-excel): validate driveId in files route

Add regex validation for driveId query param in the Microsoft OAuth
files route to prevent path traversal, matching the drives route.

* fix(microsoft-excel): unblock OneDrive users and validate driveId in sheets route

- Add credential to any[] arrays so OneDrive users (no drive selected)
  still pass the dependsOn gate while driveSelector remains in the
  dependency list for context flow to SharePoint users
- Add /^[\w-]+$/ validation for driveId in sheets API route

* fix(microsoft-excel): validate driveId in getItemBasePath utility

Add regex validation for driveId at the shared utility level to prevent
path traversal through the tool execution path, which bypasses the
API route validators.

* fix(microsoft-excel): use centralized input validation

Replace inline regex validation with platform validators from
@/lib/core/security/input-validation:
- validateSharePointSiteId for siteId in drives route
- validateAlphanumericId for driveId in drives, sheets, files routes
  and getItemBasePath utility

* lint

* improvement(microsoft-excel): add File Source dropdown to control SharePoint visibility

Replace always-visible optional SharePoint fields with a File Source
dropdown (OneDrive/SharePoint) that conditionally shows site and drive
selectors. OneDrive users see zero extra fields (default). SharePoint
users switch the dropdown and get the full cascade.

* fix(microsoft-excel): fix canonical param test failures

Make fileSource dropdown mode:'both' so it appears in basic and advanced
modes. Add condition to manualDriveId to match driveSelector's condition,
satisfying the canonical pair consistency test.

* fix(microsoft-excel): address PR review feedback for SharePoint drive support

- Clear stale driveId/siteId/spreadsheetId when fileSource changes by adding
  fileSource to dependsOn arrays for siteSelector, driveSelector, and
  spreadsheetId selectors
- Reorder manualDriveId before manualSpreadsheetId in advanced mode for
  logical top-down flow
- Validate spreadsheetId with validateMicrosoftGraphId in getItemBasePath()
  and sheets route to close injection vector (uses permissive validator that
  accepts ! chars in OneDrive item IDs)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(microsoft-excel): use validateMicrosoftGraphId for driveId validation

SharePoint drive IDs use the format b!<base64-string> which contains !
characters rejected by validateAlphanumericId. Switch all driveId
validation to validateMicrosoftGraphId which blocks path traversal and
control characters while accepting valid Microsoft Graph identifiers.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(microsoft-excel): use validatePathSegment with strict pattern for driveId/spreadsheetId

Replace validateMicrosoftGraphId with validatePathSegment using a custom
pattern ^[a-zA-Z0-9!_-]+$ for all URL-interpolated IDs. validatePathSegment
blocks /, \, path traversal, and null bytes before checking the pattern,
preventing URL-modifying characters like ?, #, & from altering the Graph
API endpoint. The pattern allows ! for SharePoint b!<base64> drive IDs.
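
A hedged sketch of the ordering described above; the real validator lives in @/lib/core/security/input-validation, so this is an assumption-based illustration, not its implementation:

```ts
// Reject separators, traversal, and null bytes before the pattern check, so
// URL-modifying characters can never reach the Graph API endpoint string.
function validatePathSegment(value: string, pattern: RegExp): boolean {
  if (value.includes('/') || value.includes('\\')) return false
  if (value.includes('..') || value.includes('\0')) return false
  return pattern.test(value)
}

// Allows SharePoint b!<base64> drive IDs while rejecting ?, #, & and friends.
const GRAPH_SEGMENT = /^[a-zA-Z0-9!_-]+$/
validatePathSegment('b!3a9xZQ-abc_123', GRAPH_SEGMENT) // → true
validatePathSegment('me/drive?select=x', GRAPH_SEGMENT) // → false
```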

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* lint

* fix(microsoft-excel): reorder driveId before spreadsheetId in v1 block

Move driveId subBlock before manualSpreadsheetId in the legacy v1 block
to match the logical top-down flow (Drive ID → Spreadsheet ID), consistent
with the v2 block ordering.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(microsoft-excel): clear manualDriveId when fileSource changes

Add dependsOn: ['fileSource'] to manualDriveId so its value is cleared
when switching from SharePoint back to OneDrive. Without this, the stale
driveId would still be serialized and forwarded to getItemBasePath,
routing through the SharePoint drive path instead of me/drive.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor(microsoft-excel): use getItemBasePath in sheets route to remove duplication

Replace inline URL construction and validation logic with the shared
getItemBasePath utility, eliminating duplicated GRAPH_ID_PATTERN regex
and conditional URL building.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* lint

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-14 21:10:55 -07:00
Waleed
61b33e5978 fix(blocks): correct required field validation for Jira and Confluence blocks (#4172)
* fix(blocks): correct required field validation for Jira and Confluence blocks

Jira: summary is only required for create (not update), projectId is not required for update (API uses issueKey). Confluence: title and content are required for page creation, title is required for blog post creation — all enforced by backend validation.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(blocks): remove projectId dependsOn gate for update fields, require content for blog post creation

Jira: Remove dependsOn projectId from shared write/update fields — projectId is not required for update so the gate would disable all update fields when no project is selected. Write-only fields (issueType, parentIssue, reporter) retain the gate since projectId is required for create.

Confluence V2: Add create_blogpost to content required condition — backend Zod schema enforces content for blog post creation.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* lint

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-14 20:44:47 -07:00
Siddharth Ganesan
29fbad2874 fix(mothership): fix intelligence regression (#4171) 2026-04-14 20:01:44 -07:00
Emir Karabeg
e281ca0dac fix(ui): align PlayOutline icon with filled Play shape (#4169)
The PlayOutline icon had a non-standard viewBox and mismatched path,
causing it to render at an inconsistent size and shape compared to the
filled Play icon and other action bar icons.
2026-04-14 19:38:00 -07:00
Emir Karabeg
cbf0a139ed fix(seo): correct canonical URLs, compress oversized images, add cache headers (#4168)
* fix(seo): correct canonical URLs, compress oversized images, add cache headers

- Replace all hardcoded https://sim.ai with https://www.sim.ai via SITE_URL constant
- Migrate models, integrations, and homepage metadata from getBaseUrl() to SITE_URL
- Compress 6 blog/landing images from 2.6MB to 300KB total
- Convert mothership cover from PNG to JPEG (1.1MB → 99KB)
- Add Cache-Control headers for static assets (1d max-age, 7d stale-while-revalidate)
- Add SEO regression test scanning all public pages for canonical URL violations

* fix(seo): replace hardcoded URLs with SITE_URL, broaden test detection

- Replace hardcoded https://www.sim.ai with SITE_URL in academy, changelog.xml, and whitelabeling
- Broaden getBaseUrl() detection in SEO test to match any variable name assignment
- Add ee/whitelabeling/metadata.ts to SEO test scan scope
2026-04-14 16:42:32 -07:00
Theodore Li
751eeaccd4 fix(ui): resource tab fixes, add search to workspace modal (#4166)
* fix(ui): fix resource switching logic, multi select delete

* Allow cmd+click on workspace menu

* Add search bar to workspace modal

* address greptile comments

* fix resource tab scroll
2026-04-14 19:06:51 -04:00
Emir Karabeg
1bf2d95813 improvement(ui): delegate streaming animation to Streamdown component (#4163)
* improvement(ui): delegate streaming animation to Streamdown component

Remove custom useStreamingText hook and useThrottledValue indirection
in favor of Streamdown's built-in streaming props. This eliminates the
manual character-by-character reveal logic (setInterval, easing, chase
factor) and lets the library handle animation natively, reducing
complexity and improving consistency across Mothership and chat.

* improvement(ui): inline passthrough wrapper, add hydration guard

- Inline EnhancedMarkdownRenderer which became a trivial passthrough
  after removing useThrottledValue
- Add hydration guard to MarkdownRenderer to prevent replaying the
  entrance animation when mounting mid-stream with existing content

* improvement: removed chat animation

* improvement(ui): remove hardcoded fade-in animations from special tags

Remove animate-stream-fade-in from OptionsDisplay, CredentialDisplay,
MothershipErrorDisplay, and UsageUpgradeDisplay. These components
re-render after streaming ends, causing a visible flash as the
opacity animation replays. PendingTagIndicator retains its animation
since it only renders during active streaming.

* fix(ui): use streaming mode for Streamdown during active streams

mode='static' disables Remend (auto-closing incomplete markdown),
incremental block splitting, and React Transitions. Switch to
streaming mode while isStreaming is true so partial markdown renders
correctly, without re-adding animation props.
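
A sketch based only on the props named in this message — `mode` and its 'streaming'/'static' values come from the commit; everything else about the Streamdown API here is an assumption:

```tsx
import { Streamdown } from 'streamdown'

export function MessageBody({ content, isStreaming }: { content: string; isStreaming: boolean }) {
  // 'streaming' enables Remend (auto-closing incomplete markdown) and
  // incremental block splitting; 'static' disables them once the stream ends.
  return <Streamdown mode={isStreaming ? 'streaming' : 'static'}>{content}</Streamdown>
}
```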
2026-04-14 15:31:39 -07:00
Waleed
3a1b1a8032 v0.6.43: mothership billing idempotency, env var resolution fixes 2026-04-14 15:22:32 -07:00
Waleed
3d6660ba4d feat(jira): support raw ADF in description and environment fields (#4164)
* fix(security): resolve ReDoS vulnerability in function execute tag pattern

Simplified regex to eliminate overlapping quantifiers that caused exponential
backtracking on malformed input without closing delimiter.

* feat(jira): support raw ADF document objects in description and environment fields

Add toAdf() helper that passes through ADF objects as-is or wraps plain
text in a single-paragraph ADF doc. Update write and update routes to
use it, replacing inline ADF wrapping. Update Zod schema to accept
string or object for description. Fully backward compatible — plain
text still works, but callers can now pass rich ADF with expand nodes,
tables, code blocks, etc.
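
A minimal sketch of the toAdf() helper described above. The doc/paragraph/text shape is Atlassian's documented ADF format; the type guard is a simplification (later commits extend it to wrap partial ADF nodes and parse JSON-stringified ADF):

```ts
type AdfDoc = { type: 'doc'; version: number; content: unknown[] }

function toAdf(value: string | AdfDoc): AdfDoc {
  // Pass rich ADF documents through untouched.
  if (typeof value === 'object' && value.type === 'doc') return value
  // Wrap plain text in a single-paragraph ADF doc — backward compatible.
  return {
    type: 'doc',
    version: 1,
    content: [{ type: 'paragraph', content: [{ type: 'text', text: String(value) }] }],
  }
}
```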

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(jira): handle partial ADF nodes and non-ADF objects in toAdf()

Wrap partial ADF nodes (type + content but not doc) in a doc envelope.
Fall back to JSON.stringify for non-ADF objects instead of String()
which produces [object Object].

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* lint

* fix(jira): handle JSON-stringified ADF in toAdf() for variable resolution

The executor's formatValueForBlock() JSON.stringify's object values when
resolving <Block.output> references. This means an ADF object from an
upstream Agent block arrives at the route as a JSON string. toAdf() now
detects JSON strings containing valid ADF documents or nodes and parses
them back, ensuring rich formatting is preserved through the pipeline.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* lint changes

* fix(jira): update environment Zod schema to accept ADF objects

Match the description field schema change — environment also passes
through toAdf() so its Zod schema must accept objects too.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* updated lockfile

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-14 15:18:18 -07:00
Waleed
48e174b21f fix(google-drive): add auto export format and validate against Drive API docs (#4161)
* fix(google-drive): add auto export format and Azure storage debug logging

* chore: remove Azure storage debug logging

* fix(google-drive): use status-based fallback instead of string matching for export errors

* fix(google-drive): validate export formats against Drive API docs, remove fallback

* fix(google-drive): use value function for dropdown default

* fix(google-drive): add text/markdown to valid export formats for Google Docs

* fix(google-drive): correct ODS MIME type for Sheets export format
2026-04-14 15:09:36 -07:00
Vikhyath Mondreti
7529a75ac0 fix(triggers): env var resolution in provider configs (#4160)
* fix(triggers): env var resolution in provider configs

* throw on errored resolution
2026-04-14 14:30:26 -07:00
Theodore Li
6b2e83bf58 fix(billing): add idempotency to billing (#4157)
* fix(billing): add idempotency to billing

* Only release redis lock if billed
2026-04-14 17:15:54 -04:00
Waleed
fc07922536 v0.6.42: mothership nested file reads, search modal improvements 2026-04-14 13:07:50 -07:00
Siddharth Ganesan
367415f649 fix(mothership): tool path for nested folders (#4158) 2026-04-14 13:03:07 -07:00
Siddharth Ganesan
ff2e369c20 fix(mothership): fix workflow vfs reads (#4156)
* v0.6.29: login improvements, posthog telemetry (#4026)

* feat(posthog): Add tracking on mothership abort (#4023)

Co-authored-by: Theodore Li <theo@sim.ai>

* fix(login): fix captcha headers for manual login (#4025)

* fix(signup): fix turnstile key loading

* fix(login): fix captcha header passing

* Catch user already exists, remove login form captcha

* fix build error

* improvement(mothership): new agent loop (#3920)

* feat(transport): replace shared chat transport with mothership-stream module

* improvement(contracts): regenerate contracts from go

* feat(tools): add tool catalog codegen from go tool contracts

* feat(tools): add tool-executor dispatch framework for sim side tool routing

* feat(orchestrator): rewrite tool dispatch with catalog-driven executor and simplified resume loop

* feat(orchestrator): checkpoint resume flow

* refactor(copilot): consolidate orchestrator into request/ layer

* refactor(mothership): reorganize lib/copilot into structured subdirectories

* refactor(mothership): canonical transcript layer, dead code cleanup, type consolidation

* refactor(mothership): rebase onto latest staging

* refactor(mothership): rename request continue to lifecycle

* feat(trace): add initial version of request traces

* improvement(stream): batch stream from redis

* fix(resume): fix the resume checkpoint

* fix(resume): fix resume client tool

* fix(subagents): subagent resume should join on existing subagent text block

* improvement(reconnect): harden reconnect logic

* fix(superagent): fix superagent integration tools

* improvement(stream): improve stream perf

* Rebase with origin dev

* fix(tests): fix failing test

* fix(build): fix type errors

* fix(build): fix build errors

* fix(build): fix type errors

* feat(mothership): add cli execution

* fix(mothership): fix function execute tests

* Force redeploy

* feat(mothership): add docx support

* feat(mothership): append

* Add deps

* improvement(mothership): docs

* File types

* Add client retry logic

* Fix stream reconnect

* Eager tool streaming

* Fix client side tools

* Security

* Fix shell var injection

* Remove auto injected tasks

* Fix 10mb tool response limit

* Fix trailing leak

* Remove dead tools

* file/folder tools

* Folder tools

* Hide function code inline

* Don't show internal tool result reads

* Fix spacing

* Auth vfs

* Empty folders should show in vfs

* Fix run workflow

* change to node runtime

* revert back to bun runtime

* Fix

* Appends

* Remove debug logs

* Patch

* Fix patch tool

* Temp

* Checkpoint

* File writes

* Fix

* Remove tool truncation limits

* Bad hook

* replace react markdown with streamdown

* Checkpoint

* fix code block

* fix stream persistence

* temp

* Fix file tools

* tool joining

* cleanup subagent + streaming issues

* streamed text change

* Tool display intents

* Fix dev

* Fix tests

* Fix dev

* Speed up dev ci

* Add req id

* Fix persistence

* Tool call names

* fix payload accesses

* Fix name

* fix snapshot crash bug

* fix

* Fix

* remove worker code

* Clickable resources

* Options ordering

* Folder vfs

* Restore and mass delete tools

* Fix

* lint

* Update request tracing and skills and handlers

* Fix editable

* fix type error

* Html code

* fix(chat): make inline code inherit parent font size in markdown headers

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* improved autolayout

* durable stream for files

* one more fix

* POSSIBLE BREAKAGE: SCROLLING

* Fixes

* Fixes

* Lint fix

* fix(resource): fix resource view disappearing on ats (#4103)

Co-authored-by: Theodore Li <theo@sim.ai>

* Fixes

* feat(mothership): add execution logs as a resource type

Adds `log` as a first-class mothership resource type so copilot can open
and display workflow execution logs as tabs alongside workflows, tables,
files, and knowledge bases.

- Add `log` to MothershipResourceType, all Zod enums, and VALID_RESOURCE_TYPES
- Register log in RESOURCE_REGISTRY (Library icon) and RESOURCE_INVALIDATORS
- Add EmbeddedLog and EmbeddedLogActions components in resource-content
- Export WorkflowOutputSection from log-details for reuse in EmbeddedLog
- Add log resolution branch in open_resource handler via new getLogById service
- Include log id in get_workflow_logs response and extract resources from output
- Exclude log from manual add-resource dropdown (enters via copilot tools only)
- Regenerate copilot contracts after adding log to open_resource Go enum

* Fix perf and message queueing

* Fix abort

* fix(ui): don't delete resource on clearing from context, set resource closed on new task (#4113)

Co-authored-by: Theodore Li <theo@sim.ai>

* improvement(mothership): structure sim side typing

* address comments

* reactive text editor tweaks

* Fix file read and tool call name persistence bug

* Fix code stream + create file opening resource

* fix use chat race + headless trace issues

* Fix type issue

* Fix mothership block req lifecycle

* Fix build

* Move copy reqid

* Fix

* fix(ui): fix resource tag transition from home to task (#4132)

Co-authored-by: Theodore Li <theo@sim.ai>

* Fix persistence

* Clean code, fix bugs

* Fix

* Fixes

---------

Co-authored-by: Waleed <walif6@gmail.com>
Co-authored-by: Theodore Li <theodoreqili@gmail.com>
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Theodore Li <theo@sim.ai>
2026-04-14 12:11:53 -07:00
Theodore Li
64cdab24f7 fix(ui): handle long file paths and names in search modal (#4155)
* fix(ui): handle long file paths and names in search modal

* Handle long subfolder names

* fix memo
2026-04-14 13:04:08 -04:00
Waleed
3838b6e892 v0.6.41: webhooks fix, workers removal 2026-04-14 08:44:39 -07:00
Waleed
a51333aa2f fix(webhooks): non-polling webhook executions silently dropped after BullMQ removal (#4153) 2026-04-14 08:43:17 -07:00
Waleed
0ac05397eb v0.6.40: mothership tool loop, new skills, agiloft, STS, IAM integrations, jira forms endpoints 2026-04-13 22:26:19 -07:00
Waleed
8a8bc1b0e6 fix(posthog): set email and name on person profile at signup (#4152) 2026-04-13 22:19:09 -07:00
Waleed
48d5101151 fix(ci): replace dynamic secret access with explicit secret references (#4151)
* fix(ci): replace dynamic secret access with explicit secret references

Resolves CodeQL "Excessive Secrets Exposure" warning by replacing
secrets[matrix.ecr_repo_secret] with conditional expressions that
reference only the specific secrets needed.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(ci): add explicit ECR_REALTIME guard and use env block for secret injection

- Prevent silent fallthrough to ECR_REALTIME for unrecognized secret keys
- Move build-amd64 secret resolution to env: block matching build-dev pattern

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-13 22:10:52 -07:00
Waleed
9c1b0bc15f improvement(ui): remove anti-patterns, fix follow-up auto-scroll, move CopyCodeButton to emcn (#4148)
* improvement(ui): restore smooth streaming animation, fix follow-up auto-scroll, move CopyCodeButton to emcn

* fix(ui): restore delayed animation, handle tilde fences, fix follow-up scroll root cause

* fix(ui): extract useStreamingReveal to followup, keep cleanup changes

* fix(ui): restore hydratedStreamingRef for reconnect path order-of-ops

* fix(ui): restore full hydratedStreamingRef effect for reconnect path

* fix(ui): use hover-hover prefix on CopyCodeButton callers to correctly override ghost variant

* fix(logs): remove destructive color from cancel execution menu item

* feat(logs): optimistic cancelling status on cancel execution

* feat(logs): allow cancellation of pending (paused) executions

* fix(hitl): cancel paused executions directly in DB

Paused HITL executions are idle in the DB — they don't poll Redis or
run in-process, so the existing cancel signals had no effect. The DB
status stayed 'pending', causing the optimistic 'cancelling' update to
revert on refetch.

- Add PauseResumeManager.cancelPausedExecution: atomically sets
  paused_executions.status and workflow_execution_logs.status to
  'cancelled' inside a FOR UPDATE transaction
- Guard enqueueOrStartResume against resuming a cancelled execution
- Include pausedCancelled in the cancel route success check
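
A hedged sketch of the atomic cancellation, expressed as raw SQL inside a transaction. Table and column names come from the message; the drizzle-style query runner and exact statements are assumptions about the real PauseResumeManager.cancelPausedExecution:

```ts
import { sql } from 'drizzle-orm'

async function cancelPausedExecution(db: any, executionId: string) {
  await db.transaction(async (tx: any) => {
    // Lock the paused row so a concurrent resume cannot race the cancel.
    await tx.execute(sql`
      SELECT id FROM paused_executions WHERE execution_id = ${executionId} FOR UPDATE
    `)
    await tx.execute(sql`
      UPDATE paused_executions SET status = 'cancelled' WHERE execution_id = ${executionId}
    `)
    // ended_at keeps the logs API "running" filter (isNull(endedAt)) honest.
    await tx.execute(sql`
      UPDATE workflow_execution_logs
      SET status = 'cancelled', ended_at = now()
      WHERE execution_id = ${executionId}
    `)
  })
}
```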

* upgrade turbo

* test(hitl): update cancel route tests for paused execution cancellation

- Mock PauseResumeManager.cancelPausedExecution to prevent DB calls
- Add pausedCancelled to all expected response objects
- Add test for HITL paused execution cancellation path
- Add missing auth/authz tests
- Switch to vi.hoisted pattern for all mocks

* fix(hitl): set endedAt when cancelling paused execution

Without endedAt, the logs API running filter (isNull(endedAt)) would
keep cancelled paused executions in the running view indefinitely.

* fix(hitl): emit execution:cancelled event to canvas when cancelling paused execution

Paused HITL executions have no active SSE stream, so the canvas never
received the cancellation event. Now writes execution:cancelled to the
event buffer and updates the stream meta so the canvas reconnect path
picks it up and shows 'Execution Cancelled'.

* fix(hitl): isolate cancelPausedExecution failure from successful cancellation

Wrap cancelPausedExecution in try/catch so a DB error does not mask
a prior successful Redis or in-process cancellation. Also move the
resource-collapse side effect in home.tsx to a useEffect to avoid the
stale closure on the resources array.

* fix(hitl): add .catch() to fire-and-forget event buffer calls in cancel route
2026-04-13 22:04:13 -07:00
Waleed
0e6ada4bdb fix(security): resolve ReDoS vulnerability in function execute tag pattern (#4149)
* fix(security): resolve ReDoS vulnerability in function execute tag pattern

Simplified regex to eliminate overlapping quantifiers that caused exponential
backtracking on malformed input without closing delimiter.

* fix(security): exclude trailing-dot refs and hoist tag pattern to module level

* fix(security): align tag pattern with codebase standard [^<>]+ pattern

Matches createReferencePattern() from reference-validation.ts used by the
core executor. Invalid refs handled gracefully by resolveBlockReference.

* refactor(security): use createReferencePattern() instead of inline regex
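
A minimal sketch of the final shape: a module-level pattern built on a linear [^<>]+ character class, which cannot backtrack exponentially on input missing a closing delimiter. The factory name comes from the commit; its internals here are an assumption:

```ts
function createReferencePattern(): RegExp {
  // [^<>]+ has no overlapping quantifiers, so matching is linear in input size.
  return /<([^<>]+)>/g
}

// Hoisted once at module level rather than rebuilt per call.
const TAG_PATTERN = createReferencePattern()

function extractReferences(code: string): string[] {
  // matchAll clones the global regex internally, so reuse is safe.
  return [...code.matchAll(TAG_PATTERN)].map((m) => m[1])
}
```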
2026-04-13 20:34:25 -07:00
Waleed
85fda999b5 fix(block-card): webhook URL never hydrates due to namespaced subBlock ID (#4150)
getTrigger() namespaces condition-gated subBlock IDs (e.g. webhookUrlDisplay
→ webhookUrlDisplay_github_release_published). The block card's useMemo was
checking for an exact match on 'webhookUrlDisplay', which never matched.
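
A minimal sketch of the fix: accept the namespaced variants instead of requiring an exact ID match. The ID names come from the commit; the surrounding subBlock shape is an assumption:

```ts
const isWebhookUrlDisplay = (subBlockId: string) =>
  subBlockId === 'webhookUrlDisplay' || subBlockId.startsWith('webhookUrlDisplay_')

isWebhookUrlDisplay('webhookUrlDisplay_github_release_published') // → true
```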

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-13 20:19:24 -07:00
Vikhyath Mondreti
a4da8beb20 improvement(docs): remove references to concurrency control (#4147) 2026-04-13 19:17:11 -07:00
Siddharth Ganesan
bd9dcf1ec0 fix(mothership): revert to deployment and set env var tools (#4141)
* fix build error

* improvement(mothership): new agent loop (#3920)

* feat(transport): replace shared chat transport with mothership-stream module

* improvement(contracts): regenerate contracts from go

* feat(tools): add tool catalog codegen from go tool contracts

* feat(tools): add tool-executor dispatch framework for sim side tool routing

* feat(orchestrator): rewrite tool dispatch with catalog-driven executor and simplified resume loop

* feat(orchestrator): checkpoint resume flow

* refactor(copilot): consolidate orchestrator into request/ layer

* refactor(mothership): reorganize lib/copilot into structured subdirectories

* refactor(mothership): canonical transcript layer, dead code cleanup, type consolidation

* refactor(mothership): rebase onto latest staging

* refactor(mothership): rename request continue to lifecycle

* feat(trace): add initial version of request traces

* improvement(stream): batch stream from redis

* fix(resume): fix the resume checkpoint

* fix(resume): fix resume client tool

* fix(subagents): subagent resume should join on existing subagent text block

* improvement(reconnect): harden reconnect logic

* fix(superagent): fix superagent integration tools

* improvement(stream): improve stream perf

* Rebase with origin dev

* fix(tests): fix failing test

* fix(build): fix type errors

* fix(build): fix build errors

* fix(build): fix type errors

* feat(mothership): add cli execution

* fix(mothership): fix function execute tests

* Force redeploy

* feat(mothership): add docx support

* feat(mothership): append

* Add deps

* improvement(mothership): docs

* File types

* Add client retry logic

* Fix stream reconnect

* Eager tool streaming

* Fix client side tools

* Security

* Fix shell var injection

* Remove auto injected tasks

* Fix 10mb tool response limit

* Fix trailing leak

* Remove dead tools

* file/folder tools

* Folder tools

* Hide function code inline

* Don't show internal tool result reads

* Fix spacing

* Auth vfs

* Empty folders should show in vfs

* Fix run workflow

* change to node runtime

* revert back to bun runtime

* Fix

* Appends

* Remove debug logs

* Patch

* Fix patch tool

* Temp

* Checkpoint

* File writes

* Fix

* Remove tool truncation limits

* Bad hook

* replace react markdown with streamdown

* Checkpoint

* fix code block

* fix stream persistence

* temp

* Fix file tools

* tool joining

* cleanup subagent + streaming issues

* streamed text change

* Tool display intents

* Fix dev

* Fix tests

* Fix dev

* Speed up dev ci

* Add req id

* Fix persistence

* Tool call names

* fix payload accesses

* Fix name

* fix snapshot crash bug

* fix

* Fix

* remove worker code

* Clickable resources

* Options ordering

* Folder vfs

* Restore and mass delete tools

* Fix

* lint

* Update request tracing and skills and handlers

* Fix editable

* fix type error

* Html code

* fix(chat): make inline code inherit parent font size in markdown headers

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* improved autolayout

* durable stream for files

* one more fix

* POSSIBLE BREAKAGE: SCROLLING

* Fixes

* Fixes

* Lint fix

* fix(resource): fix resource view disappearing on ats (#4103)

Co-authored-by: Theodore Li <theo@sim.ai>

* Fixes

* feat(mothership): add execution logs as a resource type

Adds `log` as a first-class mothership resource type so copilot can open
and display workflow execution logs as tabs alongside workflows, tables,
files, and knowledge bases.

- Add `log` to MothershipResourceType, all Zod enums, and VALID_RESOURCE_TYPES
- Register log in RESOURCE_REGISTRY (Library icon) and RESOURCE_INVALIDATORS
- Add EmbeddedLog and EmbeddedLogActions components in resource-content
- Export WorkflowOutputSection from log-details for reuse in EmbeddedLog
- Add log resolution branch in open_resource handler via new getLogById service
- Include log id in get_workflow_logs response and extract resources from output
- Exclude log from manual add-resource dropdown (enters via copilot tools only)
- Regenerate copilot contracts after adding log to open_resource Go enum

* Fix perf and message queueing

* Fix abort

* fix(ui): don't delete resource on clearing from context, set resource closed on new task (#4113)

Co-authored-by: Theodore Li <theo@sim.ai>

* improvement(mothership): structure sim side typing

* address comments

* reactive text editor tweaks

* Fix file read and tool call name persistence bug

* Fix code stream + create file opening resource

* fix use chat race + headless trace issues

* Fix type issue

* Fix mothership block req lifecycle

* Fix build

* Move copy reqid

* Fix

* fix(ui): fix resource tag transition from home to task (#4132)

Co-authored-by: Theodore Li <theo@sim.ai>

* Fix persistence

* Clean code, fix bugs

* Fix

---------

Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
Co-authored-by: Waleed Latif <walif6@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Theodore Li <theo@sim.ai>
Co-authored-by: Theodore Li <theodoreqili@gmail.com>
2026-04-13 19:16:58 -07:00
Waleed
6ce299bb23 feat(jsm): add all Forms API endpoints for jira (#4142)
* feat(jsm): add all Forms API endpoints for two-step form workflow

* removed types

* fix(jsm): handle 204 No Content on action endpoints and reject array answers

* fix(jsm): validate formIds is an array in copy_forms route and block

* fix(jsm): add formTemplateId validation and conditional required on formAnswers
2026-04-13 19:03:36 -07:00
Theodore Li
c75d7b9ddc fix(ui): fix home button not working until stream ends (#4145)
Co-authored-by: Theodore Li <theo@sim.ai>
2026-04-13 21:55:55 -04:00
Vikhyath Mondreti
c71ae49da0 chore(copilot): streaming paths reviewer group (#4144)
* chore(copilot): streaming paths reviewer group

* narrow scope
2026-04-13 18:28:18 -07:00
Theodore Li
d6dc9f73cd fix(ui): fix flash between home and new chat (#4143)
Co-authored-by: Theodore Li <theo@sim.ai>
2026-04-13 21:18:37 -04:00
Theodore Li
6587afb97e fix(ci): Increase build application memory (#4140)
Co-authored-by: Theodore Li <theo@sim.ai>
2026-04-13 20:35:47 -04:00
Waleed
e23557fdfe feat(aws): add IAM and STS integrations (#4137)
* feat(aws): add IAM and STS integrations

* fix(sts): address PR review comments

- Fix CrowdStrike tags to include "security" (unintended removal)
- Standardize STS tool versions to '1.0.0' (matching IAM convention)
- Add range validation to durationSeconds in Zod schemas

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* icon

* lint

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-13 16:53:15 -07:00
Siddharth Ganesan
0abcc6e813 improvement(mothership): restructured stream, tool structures, code typing, file write/patch/append tools, timing issues (#4090)
* fix build error

* improvement(mothership): new agent loop (#3920)

* feat(transport): replace shared chat transport with mothership-stream module

* improvement(contracts): regenerate contracts from go

* feat(tools): add tool catalog codegen from go tool contracts

* feat(tools): add tool-executor dispatch framework for sim side tool routing

* feat(orchestrator): rewrite tool dispatch with catalog-driven executor and simplified resume loop

* feat(orchestrator): checkpoint resume flow

* refactor(copilot): consolidate orchestrator into request/ layer

* refactor(mothership): reorganize lib/copilot into structured subdirectories

* refactor(mothership): canonical transcript layer, dead code cleanup, type consolidation

* refactor(mothership): rebase onto latest staging

* refactor(mothership): rename request continue to lifecycle

* feat(trace): add initial version of request traces

* improvement(stream): batch stream from redis

* fix(resume): fix the resume checkpoint

* fix(resume): fix resume client tool

* fix(subagents): subagent resume should join on existing subagent text block

* improvement(reconnect): harden reconnect logic

* fix(superagent): fix superagent integration tools

* improvement(stream): improve stream perf

* Rebase with origin dev

* fix(tests): fix failing test

* fix(build): fix type errors

* fix(build): fix build errors

* fix(build): fix type errors

* feat(mothership): add cli execution

* fix(mothership): fix function execute tests

* Force redeploy

* feat(mothership): add docx support

* feat(mothership): append

* Add deps

* improvement(mothership): docs

* File types

* Add client retry logic

* Fix stream reconnect

* Eager tool streaming

* Fix client side tools

* Security

* Fix shell var injection

* Remove auto injected tasks

* Fix 10mb tool response limit

* Fix trailing leak

* Remove dead tools

* file/folder tools

* Folder tools

* Hide function code inline

* Don't show internal tool result reads

* Fix spacing

* Auth vfs

* Empty folders should show in vfs

* Fix run workflow

* change to node runtime

* revert back to bun runtime

* Fix

* Appends

* Remove debug logs

* Patch

* Fix patch tool

* Temp

* Checkpoint

* File writes

* Fix

* Remove tool truncation limits

* Bad hook

* replace react markdown with streamdown

* Checkpoint

* fix code block

* fix stream persistence

* temp

* Fix file tools

* tool joining

* cleanup subagent + streaming issues

* streamed text change

* Tool display intents

* Fix dev

* Fix tests

* Fix dev

* Speed up dev ci

* Add req id

* Fix persistence

* Tool call names

* fix payload accesses

* Fix name

* fix snapshot crash bug

* fix

* Fix

* remove worker code

* Clickable resources

* Options ordering

* Folder vfs

* Restore and mass delete tools

* Fix

* lint

* Update request tracing and skills and handlers

* Fix editable

* fix type error

* Html code

* fix(chat): make inline code inherit parent font size in markdown headers

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* improved autolayout

* durable stream for files

* one more fix

* POSSIBLE BREAKAGE: SCROLLING

* Fixes

* Fixes

* Lint fix

* fix(resource): fix resource view disappearing on ats (#4103)

Co-authored-by: Theodore Li <theo@sim.ai>

* Fixes

* feat(mothership): add execution logs as a resource type

Adds `log` as a first-class mothership resource type so copilot can open
and display workflow execution logs as tabs alongside workflows, tables,
files, and knowledge bases.

- Add `log` to MothershipResourceType, all Zod enums, and VALID_RESOURCE_TYPES
- Register log in RESOURCE_REGISTRY (Library icon) and RESOURCE_INVALIDATORS
- Add EmbeddedLog and EmbeddedLogActions components in resource-content
- Export WorkflowOutputSection from log-details for reuse in EmbeddedLog
- Add log resolution branch in open_resource handler via new getLogById service
- Include log id in get_workflow_logs response and extract resources from output
- Exclude log from manual add-resource dropdown (enters via copilot tools only)
- Regenerate copilot contracts after adding log to open_resource Go enum

* Fix perf and message queueing

* Fix abort

* fix(ui): don't delete resource on clearing from context, set resource closed on new task (#4113)

Co-authored-by: Theodore Li <theo@sim.ai>

* improvement(mothership): structure sim side typing

* address comments

* reactive text editor tweaks

* Fix file read and tool call name persistence bug

* Fix code stream + create file opening resource

* fix use chat race + headless trace issues

* Fix type issue

* Fix mothership block req lifecycle

* Fix build

* Move copy reqid

* Fix

* fix(ui): fix resource tag transition from home to task (#4132)

Co-authored-by: Theodore Li <theo@sim.ai>

* Fix persistence

---------

Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
Co-authored-by: Waleed Latif <walif6@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Theodore Li <theo@sim.ai>
Co-authored-by: Theodore Li <theodoreqili@gmail.com>
2026-04-13 16:46:35 -07:00
Theodore Li
d238052fe8 feat(ui): show folder path in search modal (#4138)
* feat(ui): show folder path in search modal

* Switch truncate folder over workspace name

---------

Co-authored-by: Theodore Li <theo@sim.ai>
2026-04-13 19:41:21 -04:00
Theodore Li
c0db9de07b fix(ui): Focus first text input by default (#4134)
* Auto-focus input boxes for modals and copilot

* Fix focus in emcn modal

* Fix integrations manager focus

* Change modal tabs to auto focus on first text input

* Auto-focus mothership task chats

---------

Co-authored-by: Theodore Li <theo@sim.ai>
2026-04-13 19:40:19 -04:00
Waleed
7491d70a67 feat(workspaces): add workspace logo upload (#4136)
* feat(workspaces): add workspace logo upload

* feat(workspaces): add workspace logo upload

* fix(workspaces): validate logoUrl accepts only paths or HTTPS URLs

* fix(workspaces): add admin authorization, audit log, and posthog event for workspace logo uploads

* lint

* fix: add WebP support and use refs pattern in useProfilePictureUpload

- Add image/webp to ACCEPTED_IMAGE_TYPES in useProfilePictureUpload
- Add image/webp to file input accept attributes in whitelabeling settings
- Refactor useProfilePictureUpload to use refs for onUpload, onError, and
  currentImage callbacks, matching the established codebase pattern

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: restore cloudwatch/cloudformation files from staging

These files were accidentally regressed during rebase conflict resolution,
reverting changes from #4027. Restoring to staging versions.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: add workspace_logo_uploaded to PostHogEventMap

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: separate workspaceId ref sync to prevent overwrite on re-render

Split the ref sync useEffect so workspaceIdRef only updates when the
workspaceId prop changes, not when onUpload/onError callbacks get new
references. Prevents setTargetWorkspaceId from being overwritten by
a re-render before the file upload completes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: use Pick type for workspace dropdown in knowledge header

The shared Workspace type requires ownerId and other fields that aren't
available from the workspaces API response mapping. Use a Pick type to
accurately represent the subset of fields actually constructed.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor: replace raw fetch with useWorkspacesQuery in knowledge header

Remove useState + useEffect + fetch anti-pattern for loading workspaces.
Use useWorkspacesQuery from React Query with inline filter for write/admin
permissions. Eliminates ~30 lines of manual state management, any casts,
and the Pick type workaround.
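
A sketch of the replacement pattern; `useWorkspacesQuery` is named in the message, while the workspace shape and permission field are assumptions:

```ts
import { useMemo } from 'react'

// Assumed shapes; the real hook and types live in the app's query layer.
interface WorkspaceSummary {
  id: string
  name: string
  permission: 'read' | 'write' | 'admin'
}
declare function useWorkspacesQuery(): { data?: WorkspaceSummary[]; isLoading: boolean }

// One React Query hook plus an inline permission filter replaces the
// useState + useEffect + fetch pattern and its manual loading/error state.
function useWritableWorkspaces() {
  const { data, isLoading } = useWorkspacesQuery()
  const workspaces = useMemo(
    () => (data ?? []).filter((w) => w.permission === 'write' || w.permission === 'admin'),
    [data]
  )
  return { workspaces, isLoading }
}
```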

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-13 15:54:21 -07:00
Waleed
4375f9921a fix(atlassian): unify error message extraction across all routes (#4135)
* fix(atlassian): unify error message extraction across all Jira, JSM, and Confluence routes

Add parseAtlassianErrorMessage() to jira/utils.ts as single source of truth for
parsing all 5 Atlassian error formats. Update 51 proxy routes (18 JSM, 5 Jira,
28 Confluence) to use it instead of hardcoded generic errors. Remove dead
errorExtractor field from 95 Atlassian tool files — the compat loop in
extractErrorMessage() already handles all formats without it. Consolidate
duplicate parseJsmErrorMessage into a re-export from the shared utility.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: address PR review comments from Bugbot

- Remove debug logger.info for formAnswers in JSM request route
- Restore user-friendly spaceId error message in Confluence create-page route
- Restore details field in Jira write and update route error responses

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor: remove re-exports from jsm/utils and import directly from source

Remove re-exports of getJiraCloudId, parseAtlassianErrorMessage, and
parseJsmErrorMessage from jsm/utils.ts. Update all 21 JSM routes to
import directly from @/tools/jira/utils per CLAUDE.md import rules.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* regen docs

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-13 15:03:20 -07:00
Waleed
fb4fb9e869 feat(agiloft): add Agiloft CLM integration with token-based auth (#4133)
* feat(agiloft): add Agiloft CLM integration with token-based auth

Add 12 tools (CRUD, search, select, saved search, attachments, lock),
block, icon, docs, and internal API route for file attachments.
Uses EWLogin/EWLogout for short-lived Bearer tokens — credentials
are never embedded in API request URLs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(agiloft): address PR review feedback

- Add HTTPS enforcement guard to agiloftLogin to prevent plaintext credential transit
- Add null guard on data.output in attach_file transformResponse
- Change empty AgiloftSavedSearchParams interface to type alias

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(agiloft): add SSRF protection via DNS validation on instanceUrl

Validates user-supplied instanceUrl against private/reserved IP ranges
using validateUrlWithDNS before making any outbound requests. Uses dynamic
import to avoid bundling Node.js dns module in client-side code.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(agiloft): fix SSRF protection to avoid client bundle breakage

Replace dynamic import of input-validation.server (which Turbopack traces
into the client bundle) with client-safe validateExternalUrl in utils.ts.
Add full DNS-level SSRF validation via validateUrlWithDNS in the attach
API route (server-only file). This matches the Okta pattern for
directExecution tools and the textract pattern for API routes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(agiloft): use DELETE method for EWRemoveAttachment endpoint

The remove_attachment tool was incorrectly using GET instead of DELETE
for the Agiloft EWRemoveAttachment endpoint, which would cause removals
to fail at runtime.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(agiloft): correct HTTP methods and parameter names per Agiloft API docs

- EWRemoveAttachment uses GET, not DELETE (revert incorrect change)
- EWRetrieve uses `filePosition` parameter, not `position`
- EWAttach uses PUT, not POST

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-13 14:36:50 -07:00
Waleed
5ab85c6930 feat(workspaces): add recency-based workspace switching and redirect (#4131)
* feat(workspaces): add recency-based workspace switching and redirect

* fix(workspaces): skip prune when workspace list is empty on mount
2026-04-13 14:10:29 -07:00
Waleed
eba48e815f feat(logs): add cancel execution to log row context menu (#4130)
* feat(logs): add cancel execution to log row context menu

* lint

* fix(logs): check success response and use targeted cache invalidation
2026-04-13 12:05:37 -07:00
Waleed
cd7e413607 chore(skills): add code quality review skills and cleanup command (#4129)
* chore(skills): add code quality review skills and cleanup command

* chore(skills): fix emcn design review with verified codebase patterns
2026-04-13 11:33:12 -07:00
Waleed
cfe55914c9 fix(navbar): eliminate auth button flash using useSyncExternalStore (#4127)
* fix(navbar): eliminate auth button flash using useSyncExternalStore

* fix(navbar): add inert and fix aria-hidden on auth button containers
2026-04-13 11:05:27 -07:00
Waleed
e3d0e74cc4 v0.6.39: billing fixes, tools audit, landing fix 2026-04-12 22:32:14 -07:00
Waleed
ffda34442b fix(models): fix mobile overflow and hide cost bars on small screens (#4125) 2026-04-12 22:26:57 -07:00
Vikhyath Mondreti
cd3e24b79b feat(crowdstrike): add tools + validate whatsapp, shopify, trello (#4123)
* feat(crowdstrike): add tools + validate whatsapp, shopify, trello

* address comment

* remove tools when unsure about docs shape

* address comments

* fix build
2026-04-12 16:53:39 -07:00
Vikhyath Mondreti
6d2deb1b33 chore(skills): reinforce skill to not guess integration outputs (#4122) 2026-04-12 14:35:20 -07:00
Vikhyath Mondreti
10341ae4a5 fix(billing): unblock on payment success (#4121) 2026-04-12 12:12:23 -07:00
Waleed
8b57476957 v0.6.38: models page 2026-04-12 01:30:17 -07:00
Waleed
6ef40c5b21 fix(models): exclude reseller providers from model catalog pages (#4117)
* fix(models): exclude reseller providers from model catalog pages

Reseller providers like OpenRouter, Fireworks, Azure, Vertex, and Bedrock
are aggregators that proxy other providers' models. Their model detail
pages were generating broken links. Filter them out of
MODEL_PROVIDERS_WITH_CATALOGS so they don't generate static pages or
appear as clickable entries in the model directory.
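
Roughly the shape of that filter, assuming an `isReseller` flag on the provider definitions (such a flag is described in the models/integrations entry further down this log):

```ts
// Sketch: derive the catalog list by excluding aggregators. Provider shape
// and the isReseller flag are assumptions for illustration.
interface CatalogProvider {
  id: string
  name: string
  isReseller?: boolean
}

declare const MODEL_CATALOG_PROVIDERS: CatalogProvider[]

// Only first-party providers get static model pages and directory entries.
const MODEL_PROVIDERS_WITH_CATALOGS = MODEL_CATALOG_PROVIDERS.filter(
  (p) => !p.isReseller
)
```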

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(models): use filtered catalog for JSON-LD structured data

Switch flatModels in page.tsx from MODEL_CATALOG_PROVIDERS to
MODEL_PROVIDERS_WITH_CATALOGS so the Schema.org ItemList excludes
reseller models, matching TOTAL_MODELS and avoiding broken URLs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-12 01:28:27 -07:00
Waleed
4309d0619a v0.6.37: audit logs page, isolated-vm worker rotation, permission groups ui 2026-04-11 20:50:50 -07:00
Waleed
85f1d96859 feat(ee): enterprise feature flags, permission group platform controls, audit logs ui, delete account (#4115)
* feat(ee): enterprise feature flags, permission group platform controls, audit logs ui, delete account

* fix(settings): improve sidebar skeleton fidelity and fix credit purchase org cache invalidation

- Bump skeleton icon and text from 16/14px to 24px to better match real nav item visual weight
- Add orgId support to usePurchaseCredits so org billing/subscription caches are invalidated on credit purchase, matching the pattern used by useUpgradeSubscription
- Polish ColorInput in whitelabeling settings with auto-prefix and select-on-focus UX

* revert(settings): remove delete account feature

* fix(settings): address pr review — atomic autoAddNewMembers, extract query hook, fix types and signal forwarding

* chore(helm): add CREDENTIAL_SETS_ENABLED to values.yaml

* fix(access-control): dynamic platform category columns, atomic permission group delete

* fix(access-control): restore triggers section in blocks tab

* fix(access-control): merge triggers into tools section in blocks tab

* upgrade turbo

* fix(access-control): fix Select All state when config has stale blacklisted provider IDs

* fix(access-control): derive platform Select All from features list; revert turbo schema version

* fix(access-control): fix blocks Select All check, filter empty platform columns

* revert(settings): restore original skeleton icon and text sizes
2026-04-11 20:41:37 -07:00
Emir Karabeg
bc31710c1c improvement(landing): rebrand to AI workspace, add auth modal, harden PostHog tracking (#4116)
* improvement: seo, geo, signup, posthog

* fix(landing): address PR review issues and convention violations

- Fix auth modal race condition: show loading state instead of redirecting when provider status hasn't loaded yet
- Fix auth modal HTTP error caching: reject non-200 responses so they aren't permanently cached
- Replace <img> with next/image <Image> in auth modal
- Use cn() instead of template literal class concatenation in hero, footer-cta
- Remove commented-out dead code in footer, landing, sitemap
- Remove unused arrow property from FooterItem interface
- Convert relative imports to absolute in integrations/[slug]/page
- Remove no-op sanitizedName variable in signup form
- Remove unnecessary async from llms-full.txt route
- Remove extraneous non-TSDoc comment in auth modal

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* style(landing): apply linter formatting fixes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(landing): second pass — fix remaining code quality issues

- auth-modal: add @sim/logger, log social sign-in errors instead of swallowing silently
- auth-modal: extract duplicated social button classes into SOCIAL_BTN constant
- auth-modal: remove unused isProduction from ProviderStatus interface
- auth-modal: memoize getBrandConfig() call
- footer: remove stale arrow destructuring left after interface cleanup, use cn() throughout
- footer-cta: replace inline styles on submit button with Tailwind classes via cn()
- footer-cta: replace caretColor inline style with caret-white utility
- templates: fix incorrect section value 'landing_preview' → 'templates' for PostHog tracking
- events: add 'templates' to landing_cta_clicked section union
- integrations: replace "canvas" with "workflow builder" per constitution rules
- llms-full: replace "canvas" terminology with "visual builder"/"workflow builder"

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(landing): point Mothership and Workflows footer links to docs root

These docs pages don't exist yet — link to docs.sim.ai until they are published.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(landing): complete rebrand in blog fallback description

Remove "workflows" from the non-tagged blog meta description to
align with the AI workspace rebrand across the rest of the PR.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(landing): strip isProduction from provider response and handle late-resolve redirect

- Destructure only githubAvailable/googleAvailable from getOAuthProviderStatus
  so isProduction is not leaked to unauthenticated callers.
- Add useEffect to redirect away from the modal if provider status resolves
  after the modal is already open and no social providers are configured.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(landing): align auth modal with login/signup page logic

- Add SSO button when NEXT_PUBLIC_SSO_ENABLED is set
- Gate "Continue with email" behind EMAIL_PASSWORD_SIGNUP_ENABLED
- Expose registrationDisabled from /api/auth/providers and hide
  the "Sign up" toggle when registration is disabled
- Simplify skip-modal logic: redirect to full page when no social
  providers or SSO are available (hasModalContent)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(landing): force login view when registration is disabled

When a CTA passes defaultView='signup' but registration is disabled,
the modal now opens in login mode instead of showing "Create free
account" with social buttons that would fail on the backend.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* lint

* fix(landing): correct signup view when registrationDisabled loads late

When the user opens the modal before providerStatus resolves and
registrationDisabled comes back true, the view was stuck on 'signup'.
Now the late-resolve useEffect also forces the view to 'login'.
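
A sketch of that guard, with assumed component-local names and status shape:

```ts
import { useEffect } from 'react'

// Sketch: force the login view if registrationDisabled resolves to true
// after the modal opened in signup mode.
function useForceLoginWhenRegistrationDisabled(
  registrationDisabled: boolean | undefined,
  view: 'login' | 'signup',
  setView: (v: 'login' | 'signup') => void
) {
  useEffect(() => {
    if (registrationDisabled && view === 'signup') setView('login')
  }, [registrationDisabled, view, setView])
}
```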

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(landing): add click tracking to integration page CTAs

Create IntegrationCtaButton client component that wraps AuthModal
and fires trackLandingCta on click, matching the pattern used by
every other landing section CTA.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(landing): prevent mobile auth modal from unmounting on open

Remove setMobileMenuOpen(false) from mobile AuthModal button onClick
handlers. Closing the mobile menu unmounts the AuthModal before it
can open. The modal overlay or page redirect makes the menu
irrelevant without needing to explicitly close it.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Waleed Latif <walif6@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-11 20:37:18 -07:00
Waleed
30c5e82ab0 feat(ee): add enterprise audit logs settings page (#4111)
* feat(ee): add enterprise audit logs settings page with server-side search

Add a new audit logs page under enterprise settings that displays all
actions captured via recordAudit. Includes server-side search, resource
type filtering, date range selection, and cursor-based pagination.

- Add internal API route (app/api/audit-logs) with session auth
- Extract shared query logic (buildFilterConditions, buildOrgScopeCondition,
  queryAuditLogs) into app/api/v1/audit-logs/query.ts
- Refactor v1 and admin audit log routes to use shared query module
- Add React Query hook with useInfiniteQuery and cursor pagination
- Add audit logs UI with debounced search, combobox filters, expandable rows
- Gate behind requiresHosted + requiresEnterprise navigation flags
- Place all enterprise audit log code in ee/audit-logs/

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* lint

* fix(ee): fix build error and address PR review comments

- Fix import path: @/lib/utils → @/lib/core/utils/cn
- Guard against empty orgMemberIds array in buildOrgScopeCondition
- Skip debounce effect on mount when search is already synced

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* lint

* fix(ee): fix type error with unknown metadata in JSX expression

Use ternary instead of && chain to prevent unknown type from being
returned as ReactNode.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(ee): align skeleton filter width with actual component layout

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* lint

* feat(audit): add audit logging for passwords, credentials, and schedules

- Add PASSWORD_RESET_REQUESTED audit on forget-password with user lookup
- Add CREDENTIAL_CREATED/UPDATED/DELETED audit on credential CRUD routes
  with metadata (credentialType, providerId, updatedFields, envKey)
- Add SCHEDULE_CREATED audit on schedule creation with cron/timezone metadata
- Fix SCHEDULE_DELETED (was incorrectly using SCHEDULE_UPDATED for deletes)
- Enhance existing schedule update/disable/reactivate audit with structured
  metadata (operation, updatedFields, sourceType, previousStatus)
- Add CREDENTIAL resource type and Credential filter option to audit logs UI
- Enhance password reset completed description with user email

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(audit): align metadata with established recordAudit patterns

- Add actorName/actorEmail to all new credential and schedule audit calls
  to match the established pattern (e.g., api-keys, byok-keys, knowledge)
- Add resourceId and resourceName to forget-password audit call
- Enhance forget-password description with user email

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(testing): sync audit mock with new AuditAction and AuditResourceType entries

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor(audit-logs): derive resource type filter from AuditResourceType

Instead of maintaining a separate hardcoded list, the filter dropdown
now derives its options directly from the AuditResourceType const object.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat(audit): enrich all recordAudit calls with structured metadata

- Move resource type filter options to ee/audit-logs/constants.ts
  (derived from AuditResourceType, no separate list to maintain)
- Remove export from internal cursor helpers in query.ts
- Add 5 new AuditAction entries: BYOK_KEY_UPDATED, ENVIRONMENT_DELETED,
  INVITATION_RESENT, WORKSPACE_UPDATED, ORG_INVITATION_RESENT
- Enrich ~80 recordAudit calls across the codebase with structured
  metadata (knowledge bases, connectors, documents, workspaces, members,
  invitations, workflows, deployments, templates, MCP servers, credential
  sets, organizations, permission groups, files, tables, notifications,
  copilot operations)
- Sync audit mock with all new entries

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(audit): remove redundant metadata fields duplicating top-level audit fields

Remove metadata entries that duplicate resourceName, workspaceId, or
other top-level recordAudit fields. Also remove noisy fileNames arrays
from bulk document upload audits (kept fileCount).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(audit): split audit types from server-only log module

Extract AuditAction, AuditResourceType, and their types into
lib/audit/types.ts (client-safe, no @sim/db dependency). The
server-only recordAudit stays in log.ts and re-exports the types
for backwards compatibility. constants.ts now imports from types.ts
directly, breaking the postgres -> tls client bundle chain.
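
The shape of that split, sketched; action names and string values here are illustrative:

```ts
// lib/audit/types.ts — client-safe: plain const objects and types, no DB import.
export const AuditAction = {
  CREDENTIAL_CREATED: 'credential.created',
  CREDENTIAL_DELETED: 'credential.deleted',
  // ...remaining actions
} as const
export type AuditAction = (typeof AuditAction)[keyof typeof AuditAction]

// lib/audit/log.ts — server-only: owns recordAudit and re-exports the types
// so existing imports keep working:
//   export * from './types'
//   export async function recordAudit(entry: AuditEntry): Promise<void> { ... }
```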

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(audit): escape LIKE wildcards in audit log search query

Escape %, _, and \ characters in the search parameter before embedding
in the LIKE pattern to prevent unintended broad matches.
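
A minimal sketch of the escaping, assuming backslash is the LIKE escape character (the Postgres default):

```ts
// Escape LIKE/ILIKE metacharacters before embedding user input in a pattern.
// Backslash must be escaped first so it doesn't double-escape the others.
function escapeLikePattern(input: string): string {
  return input.replace(/\\/g, '\\\\').replace(/%/g, '\\%').replace(/_/g, '\\_')
}

// e.g. `%${escapeLikePattern(search)}%` now matches the literal search text only
```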

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(audit): use actual deletedCount in bulk API key revoke description

The description was using keys.length (requested count) instead of
deletedCount (actual count), which could differ if some keys didn't
exist.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(audit-logs): fix OAuth label displaying as "Oauth" in filter dropdown

ACRONYMS set stored 'OAuth' but lookup used toUpperCase() producing
'OAUTH' which never matched. Now store all acronyms uppercase and use
a display override map for special casing like OAuth.
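
Sketched, with illustrative set contents:

```ts
// Membership check stays uppercase so lookups after toUpperCase() always
// match; special display forms live in a separate override map.
const ACRONYMS = new Set(['API', 'SSO', 'MCP', 'OAUTH'])
const DISPLAY_OVERRIDES: Record<string, string> = { OAUTH: 'OAuth' }

function formatLabelWord(word: string): string {
  const upper = word.toUpperCase()
  if (ACRONYMS.has(upper)) return DISPLAY_OVERRIDES[upper] ?? upper
  return word.charAt(0).toUpperCase() + word.slice(1).toLowerCase()
}
```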

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-11 16:15:48 -07:00
Waleed
6a4f5f2074 fix(trigger): handle Drive rate limits, 410 page token expiry, and clean up comments (#4112)
* fix(trigger): handle Drive rate limits, 410 page token expiry, and clean up comments

* fix(trigger): treat Drive rate limits as success to preserve failure budget

* fix(trigger): distinguish Drive 403 rate limits from permission errors, preserve knownFileIds on 410 re-seed
2026-04-11 15:04:08 -07:00
Waleed
74d0a47525 fix(trigger): fix Google Sheets trigger header detection and row index tracking (#4109)
* fix(trigger): auto-detect header row and rename lastKnownRowCount to lastIndexChecked

- Replace hardcoded !1:1 header fetch with detectHeaderRow(), which scans
  the first 10 rows and returns the first non-empty row as headers. This
  fixes row: null / headers: [] when a sheet has blank rows or a title row
  above the actual column headers (e.g. headers in row 3); see the sketch after this list.
- Rename lastKnownRowCount → lastIndexChecked in GoogleSheetsWebhookConfig
  and all usage sites to clarify that the value is a row index pointer, not
  a total count.
- Remove config parameter from processRows() since it was unused after the
  includeHeaders flag was removed.
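
A sketch of the `detectHeaderRow()` scan from the first bullet; the row and return shapes are assumptions:

```ts
// Return the first non-empty row within the first 10 rows as the header row.
function detectHeaderRow(rows: string[][]): { headers: string[]; headerRowIndex: number } | null {
  const limit = Math.min(rows.length, 10)
  for (let i = 0; i < limit; i++) {
    const row = rows[i]
    if (row.some((cell) => cell.trim() !== '')) {
      return { headers: row, headerRowIndex: i }
    }
  }
  return null // no header found in the scanned window
}
```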

* fix(trigger): combine sheet state fetch, skip header/blank rows from data emission

- Replace separate getDataRowCount() + detectHeaderRow() with a single
  fetchSheetState() call that returns rowCount, headers, and headerRowIndex
  from one A:Z fetch. Saves one Sheets API round-trip per poll cycle when
  new rows are detected.
- Use headerRowIndex to compute adjustedStartRow, preventing the header row
  (and any blank rows above it) from being emitted as data events when
  lastIndexChecked was seeded from an empty sheet.
- Handle the edge case where the entire batch falls within the header/blank
  window by advancing the pointer and returning early without fetching rows.
- Skip empty rows (row.length === 0) in processRows rather than firing a
  workflow run with no meaningful data.

* fix(trigger): preserve lastModifiedTime when remaining rows exist after header skip

When all rows in a batch fall within the header/blank window (adjustedStartRow
> endRow), the early return was unconditionally updating lastModifiedTime to the
current value. If there were additional rows beyond the batch cap, the next
Drive pre-check would see an unchanged modifiedTime and skip polling entirely,
leaving those rows unprocessed. Mirror the hasRemainingOrFailed pattern from the
normal processing path.
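
A sketch of the mirrored guard; names follow the messages above, shapes are assumed:

```ts
interface SheetsPollState {
  lastIndexChecked: number
  lastModifiedTime?: string
}

// Only advance lastModifiedTime when no rows remain past the batch cap and
// nothing failed, so the next Drive pre-check still sees a change and polls.
function advanceState(
  state: SheetsPollState,
  newPointer: number,
  currentModifiedTime: string,
  hasRemainingOrFailed: boolean
): SheetsPollState {
  return {
    lastIndexChecked: newPointer,
    lastModifiedTime: hasRemainingOrFailed ? state.lastModifiedTime : currentModifiedTime,
  }
}
```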

* chore(trigger): remove verbose inline comments from google-sheets poller

* fix(trigger): revert to full-width A:Z fetch for correct row count and consistent column scope

* fix(trigger): don't count skipped empty rows as processed
2026-04-11 12:08:15 -07:00
Waleed
c8525852d4 chore(triggers): deprecate trigger-save subblock (#4107)
* chore(triggers): deprecate trigger-save subblock

Remove the defunct triggerSave subblock from all 102 trigger definitions,
the SubBlockType union, SYSTEM_SUBBLOCK_IDS, tool params, and command
templates. Retain the backwards-compat filter in getTrigger() for any
legacy stored data.

* fix(triggers): remove leftover no-op blocks.push() in linear utils

* chore(triggers): remove orphaned triggerId property and stale comments
2026-04-11 11:41:23 -07:00
Waleed
20cc0185bf fix(execution): fix isolated-vm memory leak and add worker recycling (#4108)
* fix(execution): fix isolated-vm memory leak and add worker recycling

* fix(execution): mirror retirement check in send-failure path and fix pool sizing

* chore(execution): remove verbose comments from isolated-vm changes

* fix(execution): apply retiring-worker exclusion to drainQueue pool size check

* fix(execution): increment lifetimeExecutions on parent-side timeout
2026-04-11 11:22:50 -07:00
Waleed
cbfab1ceaa v0.6.36: new chunkers, sockets state machine, google sheets/drive/calendar triggers, docs updates, integrations/models pages improvements 2026-04-10 21:58:16 -07:00
Waleed
1acafe8763 feat(knowledge): add token, sentence, recursive, and regex chunkers (#4102)
* feat(knowledge): add token, sentence, recursive, and regex chunkers

* fix(chunkers): standardize token estimation and use emcn dropdown

- Refactor all existing chunkers (Text, JsonYaml, StructuredData, Docs) to use shared utils
- Fix inconsistent token estimation (JsonYaml used tiktoken, StructuredData used /3 ratio)
- Fix DocsChunker operator precedence bug and hard-coded 300-token limit
- Fix JsonYamlChunker isStructuredData false positive on plain strings
- Add MAX_DEPTH recursion guard to JsonYamlChunker
- Replace @/components/ui/select with emcn DropdownMenu in strategy selector

* fix(chunkers): address research audit findings

- Expand RecursiveChunker recipes: markdown adds horizontal rules, code
  fences, blockquotes; code adds const/let/var/if/for/while/switch/return
- RecursiveChunker fallback uses splitAtWordBoundaries instead of char slicing
- RegexChunker ReDoS test uses adversarial strings (repeated chars, spaces)
- SentenceChunker abbreviation list adds St/Rev/Gen/No/Fig/Vol/months
  and single-capital-letter lookbehind
- Add overlap < maxSize validation in Zod schema and UI form
- Add pattern max length (500) validation in Zod schema
- Fix StructuredDataChunker footer grammar

* fix(chunkers): fix remaining audit issues across all chunkers

- DocsChunker: extract headers from cleaned content (not raw markdown)
  to fix position mismatch between header positions and chunk positions
- DocsChunker: strip export statements and JSX expressions in cleanContent
- DocsChunker: fix table merge dedup using equality instead of includes
- JsonYamlChunker: preserve path breadcrumbs when nested value fits in
  one chunk, matching LangChain RecursiveJsonSplitter behavior
- StructuredDataChunker: detect 2-column CSV (lowered threshold from >2
  to >=1) and use 20% relative tolerance instead of absolute +/-2
- TokenChunker: use sliding window overlap (matching LangChain/Chonkie)
  where chunks stay within chunkSize instead of exceeding it
- utils: splitAtWordBoundaries accepts optional stepChars for sliding
  window overlap; addOverlap uses newline join instead of space

* chore(chunkers): lint formatting

* updated styling

* fix(chunkers): audit fixes and comprehensive tests

- Fix SentenceChunker regex: lookbehinds now include the period to correctly handle abbreviations (Mr., Dr., etc.), initials (J.K.), and decimals
- Fix RegexChunker ReDoS: reset lastIndex between adversarial test iterations, add poisoned-suffix test strings
- Fix DocsChunker: skip code blocks during table boundary detection to prevent false positives from pipe characters
- Fix JsonYamlChunker: oversized primitive leaf values now fall back to text chunking instead of emitting a single chunk
- Fix TokenChunker: pass 0 to buildChunks for overlap metadata since sliding window handles overlap inherently
- Add defensive guard in splitAtWordBoundaries to prevent infinite loops if step is 0
- Add tests for utils, TokenChunker, SentenceChunker, RecursiveChunker, RegexChunker (236 total tests, 0 failures)
- Fix existing test expectations for updated footer format and isStructuredData behavior

* chore(chunkers): remove unnecessary comments and dead code

Strip 445 lines of redundant TSDoc, math calculation comments,
implementation rationale notes, and assertion-restating comments
across all chunker source and test files.

* fix(chunkers): address PR review comments

- Fix regex fallback path: use sliding window for overlap instead of
  passing chunkOverlap to buildChunks without prepended overlap text
- Fix misleading strategy label: "Text (hierarchical splitting)" →
  "Text (word boundary splitting)"

* fix(chunkers): use consistent overlap pattern in regex fallback

Use addOverlap + buildChunks(chunks, overlap) in the regex fallback
path to match the main path and all other chunkers (TextChunker,
RecursiveChunker). The sliding window approach was inconsistent.

* fix(chunkers): prevent content loss in word boundary splitting

When splitAtWordBoundaries snaps end back to a word boundary, advance
pos from end (not pos + step) in non-overlapping mode. The step-based
advancement is preserved for the sliding window case (TokenChunker).
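
A sketch of the advancement rule; the signature follows the surrounding messages, the internals are assumptions:

```ts
// Split text at word boundaries. In sliding-window mode (stepChars set),
// pos advances by the fixed step; otherwise pos resumes at the snapped
// boundary so text between `end` and pos + maxChars is never skipped.
function splitAtWordBoundaries(text: string, maxChars: number, stepChars?: number): string[] {
  const parts: string[] = []
  let pos = 0
  while (pos < text.length) {
    let end = Math.min(pos + maxChars, text.length)
    if (end < text.length) {
      const lastSpace = text.lastIndexOf(' ', end)
      if (lastSpace > pos) end = lastSpace // snap back to a word boundary
    }
    parts.push(text.slice(pos, end).trim())
    const next = stepChars !== undefined ? pos + stepChars : end
    pos = Math.max(next, pos + 1) // defensive guard: always make progress
  }
  return parts
}
```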

* fix(chunkers): restore structured data token ratio and overlap joiner

- Restore /3 token estimation for StructuredDataChunker (structured data
  is denser than prose, ~3 chars/token vs ~4)
- Change addOverlap joiner from \n to space to match original TextChunker
  behavior

* lint

* fix(chunkers): fall back to character-level overlap in sentence chunker

When no complete sentence fits within the overlap budget,
fall back to character-level word-boundary overlap from the
previous group's text. This ensures buildChunks metadata is
always correct.

* fix(chunkers): fix log message and add missing month abbreviations

- Fix regex fallback log: "character splitting" → "word-boundary splitting"
- Add Jun and Jul to sentence chunker abbreviation list

* lint

* fix(chunkers): restore structured data detection threshold to > 2

avgCount >= 1 was too permissive — prose with consistent comma usage
would be misclassified as CSV. Restore original > 2 threshold while
keeping the improved proportional tolerance.

* fix(chunkers): pass chunkOverlap to buildChunks in TokenChunker

* fix(chunkers): restore separator-as-joiner pattern in splitRecursively

Separator was unconditionally prepended to parts after the first,
leaving leading punctuation on chunks after a boundary reset.

* feat(knowledge): add JSONL file support for knowledge base uploads

Parses JSON Lines files by splitting on newlines and converting to a
JSON array, which then flows through the existing JsonYamlChunker.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-10 21:33:29 -07:00
Emir Karabeg
c1d788ce94 improvement(integrations, models): ui/ux (#4105)
* improvement(integrations, models): ui/ux

* fix(models, integrations): dedup ChevronArrow/provider colors, fix UTC date rendering

- Extract PROVIDER_COLORS and getProviderColor to model-colors.ts to eliminate
  identical definitions in model-comparison-charts and model-timeline-chart
- Remove duplicate private ChevronArrow from integration-card; import the
  exported one from model-primitives instead
- Add timeZone: 'UTC' to formatShortDate so ISO date-only strings (parsed as
  UTC midnight) render the correct calendar day in all timezones
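
A sketch of that fix; locale and format options are illustrative:

```ts
// A date-only ISO string like '2026-04-10' parses as UTC midnight, so
// formatting it in a western timezone without timeZone: 'UTC' would show
// the previous calendar day.
function formatShortDate(isoDate: string): string {
  return new Date(isoDate).toLocaleDateString('en-US', {
    month: 'short',
    day: 'numeric',
    year: 'numeric',
    timeZone: 'UTC',
  })
}

formatShortDate('2026-04-10') // 'Apr 10, 2026' in every local timezone
```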

* refactor(models): rename model-colors.ts to consts.ts

* improvement(models): derive provider colors/resellers from definitions, reorient FAQs to agent builder

Dynamic data:
- Add `color` and `isReseller` fields to ProviderDefinition interface
- Move brand colors for all 10 providers into their definitions
- Mark 6 reseller providers (Azure, Bedrock, Vertex, OpenRouter, Fireworks)
- consts.ts now derives color map from MODEL_CATALOG_PROVIDERS
- model-comparison-charts derives RESELLER_PROVIDERS from catalog
- Fix deepseek name: Deepseek → DeepSeek; remove now-redundant
  PROVIDER_NAME_OVERRIDES and getProviderDisplayName from utils
- Add color/isReseller fields to CatalogProvider; clean up duplicate
  providerDisplayName in searchText array

FAQs:
- Replace all 4 main-page FAQs with 5 agent-builder-oriented ones
  covering model selection, context windows, pricing, tool use, and
  how to use models in a Sim agent workflow
- buildProviderFaqs: add conditional tool use FAQ per provider
- buildModelFaqs: add bestFor FAQ (conditional on field presence);
  improve context window answer to explain agent implications;
  tighten capabilities answer wording

* chore(models): remove model-colors.ts (superseded by consts.ts)

* update footer

---------

Co-authored-by: waleed <walif6@gmail.com>
2026-04-10 20:46:44 -07:00
Vikhyath Mondreti
bad78ccb59 improvement(sockets): workflow switching state machine (#4104)
* improvement(sockets): workflow switching state machine

* address comments
2026-04-10 19:06:10 -07:00
Waleed
8bbca9ba05 fix(trigger): fix polling trigger config defaults, row count, clock-skew, and stale config clearing (#4101)
* fix(trigger): fix polling trigger config defaults, row count, clock-skew, and stale config clearing

* fix(deploy): track first-pass fills to prevent stale baseConfig bypassing required-field validation

Use a dedicated `filledSubBlockIds` Set populated during the first pass so the second-pass skip guard is based solely on live `getConfigValue` results, not on stale entries spread from `baseConfig` (`triggerConfig`).

* fix(trigger): prevent calendar cursor regression when all events are filtered client-side
2026-04-10 17:41:36 -07:00
Theodore Li
34f77e00bc update(doc): Update hosted key/byok section (#4098)
* fix(doc): Update byok docs section

* Update cost page with new byok providers

* Add translated sections

---------

Co-authored-by: Theodore Li <theo@sim.ai>
2026-04-10 17:48:40 -04:00
Waleed
fb5ebd3bed fix(ui): support Tab key to select items in tag, env-var, and resource dropdowns (#4096)
* fix(ui): support Tab key to select items in tag, env-var, and resource dropdowns

* fix(ui): support Tab key to select items in tag, env-var, and resource dropdowns

* fix(ui): guard Tab selection against Shift+Tab and undefined index
2026-04-10 14:30:09 -07:00
Waleed
2e85361ed6 fix(tools): use OAuth-compatible URL for JSM Forms API (#4099)
The Forms API has a different base URL for OAuth vs Basic Auth.
Per Atlassian support, OAuth requires the /ex/jira/{cloudId}/forms
pattern, not /jira/forms/cloud/{cloudId} which only works with
Basic Auth. This was causing 401 Unauthorized errors.
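
Sketched as a helper; the two path shapes come from the message above, the host is an assumption:

```ts
// Pick the Forms API base by auth type — illustrative, not the exact route code.
function jsmFormsBaseUrl(cloudId: string, auth: 'oauth' | 'basic'): string {
  return auth === 'oauth'
    ? `https://api.atlassian.com/ex/jira/${cloudId}/forms` // OAuth-only pattern
    : `https://api.atlassian.com/jira/forms/cloud/${cloudId}` // Basic Auth only
}
```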

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-10 14:28:29 -07:00
Waleed
59de6bbb43 fix(trigger): show selector display names on canvas for trigger file/sheet selectors (#4097)
* fix(trigger): show selector display names on canvas for trigger file/sheet selectors

* fix(trigger): use isNonEmptyValue in canonical member scan to match visibility contract
2026-04-10 14:24:44 -07:00
Waleed
2b9fb19899 fix(trigger): resolve dependsOn for trigger-mode subblocks sharing canonical groups with block subblocks (#4095) 2026-04-10 12:50:04 -07:00
Theodore Li
266bc2141d feat(ui): allow multiselect in resource tabs (#4094)
* feat(ui): allow multiselect in resource tabs

* Fix bugs with deselection

* Try catch resource tab deletion independently

* Fix chat switch selection

* Default to null active id

---------

Co-authored-by: Theodore Li <theo@sim.ai>
2026-04-10 15:20:01 -04:00
Waleed
6099683e5a feat(trigger): add Google Sheets, Drive, and Calendar polling triggers (#4081)
* feat(trigger): add Google Sheets, Drive, and Calendar polling triggers

Add polling triggers for Google Sheets (new rows), Google Drive (file
changes via changes.list API), and Google Calendar (event updates via
updatedMin). Each includes OAuth credential support, configurable
filters (event type, MIME type, folder, search term, render options),
idempotency, and first-poll seeding. Wire triggers into block configs
and regenerate integrations.json. Update add-trigger skill with polling
instructions and versioned block wiring guidance.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(polling): address PR review feedback for Google polling triggers

- Fix Drive cursor stall: use nextPageToken as resume point when
  breaking early from pagination instead of re-using the original token
- Eliminate redundant Drive API call in Sheets poller by returning
  modifiedTime from the pre-check function
- Add 403/429 rate-limit handling to Sheets API calls matching the
  Calendar handler pattern
- Remove unused changeType field from DriveChangeEntry interface
- Rename triggers/google_drive to triggers/google-drive for consistency

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(polling): fix Drive pre-check never activating in Sheets poller

isDriveFileUnchanged short-circuited when lastModifiedTime was
undefined, never calling the Drive API — so currentModifiedTime
was never populated, creating a permanent chicken-and-egg loop.
Now always calls the Drive API and returns the modifiedTime
regardless of whether there's a previous value to compare against.
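
A sketch of the fixed pre-check; `fetchDriveModifiedTime` is a hypothetical helper standing in for the Drive files.get call:

```ts
// Always fetch modifiedTime so the stored value gets seeded, and only
// compare when a previous value actually exists.
async function checkDriveFile(
  fileId: string,
  lastModifiedTime: string | undefined,
  fetchDriveModifiedTime: (fileId: string) => Promise<string>
): Promise<{ unchanged: boolean; modifiedTime: string }> {
  const modifiedTime = await fetchDriveModifiedTime(fileId)
  const unchanged = lastModifiedTime !== undefined && modifiedTime === lastModifiedTime
  return { unchanged, modifiedTime } // caller persists modifiedTime either way
}
```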

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore(lint): fix import ordering in triggers registry

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(polling): address PR review feedback for Google polling handlers

- Fix fetchHeaderRow to throw on 403/429 rate limits instead of silently
  returning empty headers (prevents rows from being processed without
  headers and lastKnownRowCount from advancing past them permanently)
- Fix Drive pagination to avoid advancing resume cursor past sliced
  changes (prevents permanent change loss when allChanges > maxFiles)
- Remove unused logger import from Google Drive trigger config

* fix(polling): prevent data loss on partial row failures and harden idempotency key

- Sheets: only advance lastKnownRowCount by processedCount when there
  are failures, so failed rows are retried on the next poll cycle
  (idempotency deduplicates already-processed rows on re-fetch)
- Drive: add fallback for change.time in idempotency key to prevent
  key collisions if the field is ever absent from the API response

* fix(polling): remove unused variable and preserve lastModifiedTime on Drive API failure

- Remove unused `now` variable from Google Drive polling handler
- Preserve stored lastModifiedTime when Drive API pre-check fails
  (previously wrote undefined, disabling the optimization until the
  next successful Drive API call)

* fix(polling): don't advance state when all events fail across sheets, calendar, drive handlers

* fix(polling): retry failed idempotency keys, fix drive cursor overshoot, fix calendar inclusive updatedMin

* fix(polling): revert calendar timestamp on any failure, not just all-fail

* fix(polling): revert drive cursor on any failure, not just all-fail

* feat(triggers): add canonical selector toggle to google polling triggers

- Add 'trigger-advanced' mode to SubBlockConfig so canonical pairs work in trigger mode
- Fix buildCanonicalIndex: trigger-mode subblocks don't overwrite non-trigger basicId, deduplicate advancedIds from block spreads
- Update editor, subblock layout, and trigger config aggregation to include trigger-advanced subblocks
- Replace dropdown+fetchOptions in Calendar/Sheets/Drive pollers with file-selector (basic) + short-input (advanced) canonical pairs
- Add canonicalParamId: 'oauthCredential' to triggerCredentials for selector context resolution
- Update polling handlers to read canonical fallbacks (calendarId||manualCalendarId, etc.)

* test(blocks): handle trigger-advanced mode in canonical validation tests

* fix(triggers): handle trigger-advanced mode in deploy, preview, params, and copilot

* fix(polling): use position-only idempotency key for sheets rows

* fix(polling): don't advance calendar timestamp to client clock on empty poll

* fix(polling): remove extraneous comment from calendar poller

* fix(polling): drive cursor stall on full page, calendar latestUpdated past filtered events

* fix(polling): advance calendar cursor past fully-filtered event batches

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-09 23:43:28 -07:00
Waleed
4f40c4ce3e v0.6.35: additional jira fields, HITL docs, logs cleanup efficiency 2026-04-09 22:53:05 -07:00
Waleed
3efbd1d612 fix(agent): include model in structured response output (#4092)
* fix(agent): include model in structured response output

* fix(agent): update test expectation for model in structured response
2026-04-09 22:50:26 -07:00
Waleed
04c1f8e475 feat(tools): add fields parameter to Jira search block (#4091)
* feat(tools): add fields parameter to Jira search block

Expose the Jira REST API `fields` parameter on the search operation,
allowing users to specify which fields to return per issue. This reduces
response payload size by 10-15x, preventing 10MB workflow state limit
errors for users with high ticket volume.
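
Roughly what the plumbing looks like; the request-body shape is an assumption, and the explicit `f: string` annotation mirrors the follow-up commit below:

```ts
// Forward an optional comma-separated `fields` param to the Jira search body.
function buildSearchBody(params: { jql: string; fields?: string }) {
  return {
    jql: params.jql,
    ...(params.fields
      ? { fields: params.fields.split(',').map((f: string) => f.trim()) }
      : {}),
  }
}
```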

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* style(tools): remove redundant type annotation in fields map callback

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tools): restore type annotation for implicit any in params callback

The params object is untyped, so TypeScript cannot infer the string
element type from .split() — the explicit annotation is required.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-09 22:45:18 -07:00
Waleed
476669fd55 docs(openapi): add Human in the Loop section to API reference sidebar (#4089)
Add the generated human-in-the-loop group to the docs navigation
and create meta.json listing all HITL operation IDs so endpoints
render in the API reference.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-09 18:53:46 -07:00
Theodore Li
4074109362 fix(log): log cleanup sql query (#4087)
* fix(log): log cleanup sql query

* perf(log): use startedAt index for cleanup query filter

Switch cleanup WHERE clause from createdAt to startedAt to leverage
the existing composite index (workspaceId, startedAt), converting a
full table scan to an index range scan. Also remove explanatory comment.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Theodore Li <theo@sim.ai>
Co-authored-by: Waleed Latif <walif6@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-09 18:04:15 -07:00
Waleed
171485d3b6 fix(tools): handle all Atlassian error formats in parseJsmErrorMessage (#4088)
Update parseJsmErrorMessage to extract errors from all Atlassian API
response formats: errorMessage (JSM), errorMessages array (Jira),
errors[].title RFC 7807 (Confluence/Forms), field-level errors object,
and message (gateway). Remove redundant prefix wrapping so the raw
error message surfaces cleanly through the extractor.
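
A sketch covering the five formats in probing order; not the exact implementation:

```ts
function parseJsmErrorMessage(body: any): string | undefined {
  if (typeof body?.errorMessage === 'string') return body.errorMessage // JSM
  if (Array.isArray(body?.errorMessages) && body.errorMessages.length)
    return body.errorMessages.join('; ') // Jira
  if (Array.isArray(body?.errors) && body.errors[0]?.title)
    return body.errors[0].title // RFC 7807 (Confluence/Forms)
  if (body?.errors && typeof body.errors === 'object')
    return Object.entries(body.errors)
      .map(([field, msg]) => `${field}: ${msg}`)
      .join('; ') // Jira field-level errors object
  if (typeof body?.message === 'string') return body.message // gateway
  return undefined
}
```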
2026-04-09 17:08:19 -07:00
Waleed
d33acf426d v0.6.34: trigger.dev fixes, CI speedup, atlassian error extractor 2026-04-09 15:31:13 -07:00
Waleed
bce638dd75 fix(tools): add Atlassian error extractor to all Jira, JSM, and Confluence tools (#4085)
* fix(tools): add Atlassian error extractor to all Jira, JSM, and Confluence tools

Wire up the existing `atlassian-errors` error extractor to all 95 Atlassian
tool configs so the executor surfaces meaningful error messages instead of
generic status codes. Also fix the extractor itself to handle all three
Atlassian error response formats: `errorMessage` (JSM), `errorMessages`
array (Jira), and `message` (Confluence).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore(tools): lint formatting fix for error extractor

* fix(tools): handle all Atlassian error formats in error extractor

Add RFC 7807 errors[].title format (Confluence v2, Forms/ProForma API)
and Jira field-level errors object to the atlassian-errors extractor.

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-09 15:18:34 -07:00
Waleed
05b5588a7b improvement(ci): parallelize Docker builds and fix test timeouts (#4083)
* improvement(ci): parallelize Docker builds with tests and remove duplicate turbo install

* fix(test): use SecureFetchResponse shape in mock instead of standard Response
2026-04-09 15:18:19 -07:00
Waleed
32bdf3cfa5 fix(trigger): use @react-email/render v2 to fix renderToPipeableStream error (#4084) 2026-04-09 14:46:57 -07:00
Waleed
12deb0f5b4 chore(ci): bump actions/checkout to v6 and dorny/paths-filter to v4 (#4082)
* chore(ci): bump actions/checkout to v6 and dorny/paths-filter to v4

* fix(ci): mock secureFetchWithPinnedIP in tools tests to prevent timeouts

* lint
2026-04-09 14:33:11 -07:00
Waleed
3c8bb4076c v0.6.33: polling improvements, jsm forms tools, credentials reactquery invalidation, HITL docs 2026-04-09 14:03:38 -07:00
Waleed
c393791f04 docs(openapi): add Human in the Loop API endpoints (#4079)
* docs(openapi): add Human in the Loop API endpoints

Add HITL pause/resume endpoints to the OpenAPI spec covering
the full workflow pause lifecycle: listing paused executions,
inspecting pause details, and resuming with input.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* docs(openapi): add 403 and 500 responses to HITL endpoints

Address PR review feedback: add missing 403 Forbidden response
to all HITL endpoints (from validateWorkflowAccess), and 500
responses to resume endpoints that have explicit error paths.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* lint

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-09 14:01:09 -07:00
Waleed
fc3e762b1f feat(trigger): add ServiceNow webhook triggers (#4077)
* feat(trigger): add ServiceNow webhook triggers

* fix(trigger): add webhook secret field and remove non-TSDoc comment

Add webhookSecret field to ServiceNow triggers (matching Salesforce pattern)
so users are prompted to protect the webhook endpoint. Update setup
instructions to include Authorization header in the Business Rule example.
Remove non-TSDoc inline comment in the block config.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat(trigger): add ServiceNow provider handler with event matching

Add dedicated ServiceNow webhook provider handler with:
- verifyAuth: validates webhookSecret via Bearer token or X-Sim-Webhook-Secret
- matchEvent: filters events by trigger type and table name using
  isServiceNowEventMatch utility (matching Salesforce/GitHub pattern)

The event matcher handles incident created/updated and change request
created/updated triggers with table name enforcement and event type
normalization. The generic webhook trigger passes through all events
but still respects the optional table name filter.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* lint

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-09 13:59:07 -07:00
Waleed
70f04c003b feat(jsm): add ProForma/JSM Forms discovery tools (#4078)
* feat(jsm): add ProForma/JSM Forms discovery tools

Add three new tools for discovering and inspecting JSM Forms (ProForma) templates
and their structure, enabling dynamic form-based workflows:

- jsm_get_form_templates: List form templates in a project with request type bindings
- jsm_get_form_structure: Get full form design (questions, layout, conditions, sections)
- jsm_get_issue_forms: List forms attached to an issue with submission status

All endpoints validated against the official Atlassian Forms REST API OpenAPI spec.
Uses the Forms Cloud API base URL (jira/forms/cloud/{cloudId}) with X-ExperimentalApi header.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(jsm): add input validation and extract shared error parser

- Add validateJiraIssueKey for projectIdOrKey in templates and structure routes
- Add validateJiraCloudId for formId (UUID) in structure route
- Extract parseJsmErrorMessage to shared utils.ts (was duplicated across 3 routes)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore(jsm): remove unused FORM_QUESTION_PROPERTIES constant

Dead code — the get_form_structure tool passes the raw design object
through as JSON, so this output constant had no consumers.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-09 13:58:41 -07:00
Waleed
7bd271ae5b fix(credentials): add cross-cache invalidation for oauth credential queries (#4076) 2026-04-09 11:32:08 -07:00
Waleed
8e222fa369 improvement(polling): fix correctness and efficiency across all polling handlers (#4067)
* improvement(polling): fix correctness and efficiency across all polling handlers

- Gmail: paginate history API, add historyTypes filter, differentiate 403/429,
  fetch fresh historyId on fallback to break 404 retry loop
- Outlook: follow @odata.nextLink pagination, use fetchWithRetry for all Graph
  calls, fix $top alignment, skip folder filter on partial resolution failure,
  remove Content-Type from GET requests
- RSS: add conditional GET (ETag/If-None-Match), raise GUID cap to 500, fix 304
  ETag capture per RFC 9111, align GUID tracking with idempotency fallback key
- IMAP: single connection reuse, UIDVALIDITY tracking per mailbox, advance UID
  only on successful fetch, fix messageFlagsAdd range type, remove cross-mailbox
  legacy UID fallback
- Dispatch polling via trigger.dev task with per-provider concurrency key;
  fall back to synchronous Redis-locked polling for self-hosted

* fix(rss): align idempotency key GUID fallback with tracking/filter guard

* removed comments

* fix(imap): clear stale UID when UIDVALIDITY changes during state merge

* fix(rss): skip items with no identifiable GUID to avoid idempotency key collisions

* fix(schedules): convert dynamic import of getWorkflowById to static import

* fix(imap): preserve fresh UID after UIDVALIDITY reset in state merge

* improvement(polling): remove trigger.dev dispatch, use synchronous Redis-locked polling

* fix(polling): decouple outlook page size from total email cap so pagination works
2026-04-09 11:22:38 -07:00
Waleed
b67c068817 improvement(deploy): improve auto-generated version descriptions (#4075)
* improvement(deploy): improve auto-generated version descriptions

* fix(deploy): address PR review - log dropdown errors, populate first-deploy details

* lint
2026-04-09 10:51:46 -07:00
Waleed
d778b3d35b fix(trigger): add @react-email/components to additionalPackages (#4068) 2026-04-08 23:26:30 -07:00
Vikhyath Mondreti
dc7d876a34 improvement(release): address comments (#4069) 2026-04-08 23:22:18 -07:00
Waleed
f8f3758649 v0.6.32: BYOK fixes, ui improvements, cloudwatch tools, jsm tools extension 2026-04-08 22:31:21 -07:00
Waleed
db230785d3 fix(jsm): improve create request error handling, add form-based submission support (#4066)
* fix(jsm): improve create request error handling, add form-based submission support

* refactor(jsm): extract parseJsmErrorMessage helper to deduplicate error handling

* fix(jsm): remove required on summary for advanced mode, add JSON.parse error handling

* fix(jsm): include description in requestFieldValues gate for form-only requests
2026-04-08 22:17:01 -07:00
Vikhyath Mondreti
9fbe514dbd fix(hitl): resume workflow output async (#4065) 2026-04-08 19:31:18 -07:00
Theodore Li
139213ef45 feat(block): Add cloudwatch publish operation (#4027)
* feat(block): Add cloudwatch publish operation

* fix(integrations): validate and fix cloudwatch, cloudformation, athena conventions

- Update tool version strings from '1.0' to '1.0.0' across all three integrations
- Add missing `export * from './types'` barrel re-exports (cloudwatch, cloudformation)
- Add docsLink, wandConfig timestamps, mode: 'advanced' on optional fields (cloudwatch)
- Add dropdown defaults, ZodError handling, docs intro section (cloudwatch)
- Add mode: 'advanced' on limit field (cloudformation)
- Alphabetize registry entries (cloudwatch, cloudformation)
- Fix athena docs maxResults range (1-999)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(cloudwatch): complete put_metric_data unit dropdown, add missing outputs, fix JSON error handling

- Add all 27 valid CloudWatch StandardUnit values to metricUnit dropdown (was 13)
- Add missing block outputs for put_metric_data: success, namespace, metricName, value, unit
- Add try-catch around dimensions JSON.parse in put-metric-data route for proper 400 errors

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(cloudwatch): fix DescribeAlarms returning only MetricAlarm when "All Types" selected

Per AWS docs, omitting AlarmTypes returns only MetricAlarm. Now explicitly
sends both MetricAlarm and CompositeAlarm when no filter is selected.

Also fix dimensions JSON parse errors returning 500 instead of 400 in
get-metric-statistics route.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(cloudwatch): validate dimensions JSON at Zod schema level

Move dimensions validation from runtime try-catch to Zod refinement,
catching malformed JSON and arrays at schema validation time (400)
instead of runtime (500). Also rejects JSON arrays that would produce
meaningless numeric dimension names.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(cloudwatch): reject non-numeric metricValue instead of silently publishing 0

Add NaN guard in block config and .finite() refinement in Zod schema
so "abc" → NaN is caught at both layers instead of coercing to 0.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(cloudwatch): use Number.isFinite to also reject Infinity in block config

Aligns block-level validation with route's Zod .finite() refinement so
Infinity/-Infinity are caught at the block config layer, not just the API.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Theodore Li <teddy@zenobiapay.com>
Co-authored-by: Waleed Latif <walif6@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-08 19:02:24 -07:00
Vikhyath Mondreti
a8468a6056 fix(hitl): async resume (#4064)
* fix(hitl): async resume

* fix
2026-04-08 18:46:16 -07:00
Vikhyath Mondreti
3e85218142 improvement(hitl): streaming, async support + update docs (#4058)
* improvement(hitl): support streaming, async, update docs

* update docs

* fix tests

* fix abort signal passthrough

* module level const

* fix form route

* address comments

* fix build
2026-04-08 17:36:33 -07:00
Vikhyath Mondreti
c5cc336847 fix(subscription-state): remove dead code, change token route check (#4062)
* fix(subscription-state): remove dead code, change token route check

* update tests

* remove mock

* improve ux past usage limit
2026-04-08 17:17:32 -07:00
Theodore Li
5f33432dc2 fix(billing): Skip billing on streamed workflows with byok (#4056)
* fix(billing): skip billing on streamed workflows with byok

* Simplify logic

* Address comments, skip tokenization billing fallback

* Fix tool usage billing for streamed outputs

* fix(webhook): throw webhook errors as 4xxs (#4050)

* fix(webhook): throw webhook errors as 4xxs

* Fix shadowing body var

---------

Co-authored-by: Theodore Li <theo@sim.ai>

* feat(enterprise): cloud whitelabeling for enterprise orgs (#4047)

* feat(enterprise): cloud whitelabeling for enterprise orgs

* fix(enterprise): scope enterprise plan check to target org in whitelabel PUT

* fix(enterprise): use isOrganizationOnEnterprisePlan for org-scoped enterprise check

* fix(enterprise): allow clearing whitelabel fields and guard against empty update result

* fix(enterprise): remove webp from logo accept attribute to match upload hook validation

* improvement(billing): use isBillingEnabled instead of isProd for plan gate bypasses

* fix(enterprise): show whitelabeling nav item when billing is enabled on non-hosted environments

* fix(enterprise): accept relative paths for logoUrl since upload API returns /api/files/serve/ paths

* fix(whitelabeling): prevent logo flash on refresh by hiding logo while branding loads

* fix(whitelabeling): wire hover color through CSS token on tertiary buttons

* fix(whitelabeling): show sim logo by default, only replace when org logo loads

* fix(whitelabeling): cache org logo url in localstorage to eliminate flash on repeat visits

* feat(whitelabeling): add wordmark support with drag/drop upload

* updated turbo

* fix(whitelabeling): defer localstorage read to effect to prevent hydration mismatch

* fix(whitelabeling): use layout effect for cache read to eliminate logo flash before paint

* fix(whitelabeling): cache theme css to eliminate color flash before org settings resolve

* fix(whitelabeling): deduplicate HEX_COLOR_REGEX into lib/branding and remove mutation from useCallback deps

* fix(whitelabeling): use cookie-based SSR cache to eliminate brand flash on all page loads

* fix(whitelabeling): use !orgSettings condition to fix SSR brand cache injection

React Query returns isLoading: false with data: undefined during SSR, so the
previous brandingLoading condition was always false on the server — initialCache
was never injected into brandConfig. Changing to !orgSettings correctly applies
the cookie cache both during SSR and while the client-side query loads, eliminating
the logo flash on hard refresh.
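
The condition change, sketched with illustrative names and shapes:

```ts
interface OrgSettings { logoUrl?: string; wordmarkUrl?: string }
interface BrandConfig { logoUrl?: string; wordmarkUrl?: string }

// React Query reports isLoading: false with data: undefined during SSR, so
// a loading flag never fires on the server; `!orgSettings` covers both SSR
// and the client-side loading window.
function resolveBrand(
  orgSettings: OrgSettings | undefined,
  initialCache: BrandConfig
): BrandConfig {
  return !orgSettings
    ? initialCache // cookie-seeded cache during SSR and while the query loads
    : { logoUrl: orgSettings.logoUrl, wordmarkUrl: orgSettings.wordmarkUrl }
}
```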

* fix(editor): stop highlighting start.input as blue when block is not connected to starter (#4054)

* fix: merge subblock values in auto-layout to prevent losing router context (#4055)

Auto-layout was reading from getWorkflowState() without merging subblock
store values, then persisting stale subblock data to the database. This
caused runtime-edited values (e.g. router_v2 context) to be overwritten
with their initial/empty values whenever auto-layout was triggered.

* fix(whitelabeling): eliminate logo flash by fetching org settings server-side (#4057)

* fix(whitelabeling): eliminate logo flash by fetching org settings server-side

* improvement(whitelabeling): add SVG support for logo and wordmark uploads

* skelly in workspace header

* remove dead code

* fix(whitelabeling): hydration error, SVG support, skeleton shimmer, dead code removal

* fix(whitelabeling): blob preview dep cycle and missing color fallback

* fix(whitelabeling): use brand-accent as color fallback when workspace color is undefined

* chore(whitelabeling): inline hasOrgBrand

---------

Co-authored-by: Theodore Li <theo@sim.ai>
2026-04-08 19:24:04 -04:00
Theodore Li
c83349200c fix(error): catch socket auth error as 4xx (#4059)
* fix(error): catch socket auth error as 4xx

* Switch to type guard

---------

Co-authored-by: Theodore Li <theo@sim.ai>
2026-04-08 19:07:30 -04:00
waleed
1856635927 fix(whitelabeling): cast activeOrganizationId on session for TS build 2026-04-08 15:54:51 -07:00
Waleed
91ce55e547 fix(whitelabeling): eliminate logo flash by fetching org settings server-side (#4057)
* fix(whitelabeling): eliminate logo flash by fetching org settings server-side

* improvement(whitelabeling): add SVG support for logo and wordmark uploads

* skelly in workspace header

* remove dead code

* fix(whitelabeling): hydration error, SVG support, skeleton shimmer, dead code removal

* fix(whitelabeling): blob preview dep cycle and missing color fallback

* fix(whitelabeling): use brand-accent as color fallback when workspace color is undefined

* chore(whitelabeling): inline hasOrgBrand
2026-04-08 14:07:31 -07:00
Waleed
694f4a5895 fix: merge subblock values in auto-layout to prevent losing router context (#4055)
Auto-layout was reading from getWorkflowState() without merging subblock
store values, then persisting stale subblock data to the database. This
caused runtime-edited values (e.g. router_v2 context) to be overwritten
with their initial/empty values whenever auto-layout was triggered.
2026-04-08 13:25:15 -07:00
Waleed
cf233bb497 v0.6.31: elevenlabs voice, trigger.dev fixes, cloud whitelabeling for enterprises 2026-04-08 12:57:13 -07:00
Waleed
4700590e64 fix(editor): stop highlighting start.input as blue when block is not connected to starter (#4054) 2026-04-08 12:51:13 -07:00
Waleed
1189400167 feat(enterprise): cloud whitelabeling for enterprise orgs (#4047)
* feat(enterprise): cloud whitelabeling for enterprise orgs

* fix(enterprise): scope enterprise plan check to target org in whitelabel PUT

* fix(enterprise): use isOrganizationOnEnterprisePlan for org-scoped enterprise check

* fix(enterprise): allow clearing whitelabel fields and guard against empty update result

* fix(enterprise): remove webp from logo accept attribute to match upload hook validation

* improvement(billing): use isBillingEnabled instead of isProd for plan gate bypasses

* fix(enterprise): show whitelabeling nav item when billing is enabled on non-hosted environments

* fix(enterprise): accept relative paths for logoUrl since upload API returns /api/files/serve/ paths

* fix(whitelabeling): prevent logo flash on refresh by hiding logo while branding loads

* fix(whitelabeling): wire hover color through CSS token on tertiary buttons

* fix(whitelabeling): show sim logo by default, only replace when org logo loads

* fix(whitelabeling): cache org logo url in localstorage to eliminate flash on repeat visits

* feat(whitelabeling): add wordmark support with drag/drop upload

* updated turbo

* fix(whitelabeling): defer localstorage read to effect to prevent hydration mismatch

* fix(whitelabeling): use layout effect for cache read to eliminate logo flash before paint

* fix(whitelabeling): cache theme css to eliminate color flash before org settings resolve

* fix(whitelabeling): deduplicate HEX_COLOR_REGEX into lib/branding and remove mutation from useCallback deps

* fix(whitelabeling): use cookie-based SSR cache to eliminate brand flash on all page loads

* fix(whitelabeling): use !orgSettings condition to fix SSR brand cache injection

React Query returns isLoading: false with data: undefined during SSR, so the
previous brandingLoading condition was always false on the server — initialCache
was never injected into brandConfig. Changing to !orgSettings correctly applies
the cookie cache both during SSR and while the client-side query loads, eliminating
the logo flash on hard refresh.
2026-04-08 12:33:26 -07:00
Theodore Li
621aa65b91 fix(webhook): throw webhook errors as 4xxs (#4050)
* fix(webhook): throw webhook errors as 4xxs

* Fix shadowing body var

---------

Co-authored-by: Theodore Li <theo@sim.ai>
2026-04-08 15:30:12 -04:00
Waleed
c21876ab40 fix(trigger): add react-dom and react-email to additionalPackages (#4052) 2026-04-08 11:39:06 -07:00
Theodore Li
a1173ee712 debug(log): Add logging on socket token error (#4051)
Co-authored-by: Theodore Li <theo@sim.ai>
2026-04-08 14:36:02 -04:00
Waleed
579d240cee fix(parallel): remove broken node-counting completion + resolver claim cross-block (#4045)
* fix(parallel): remove broken node-counting completion in parallel blocks

* fix resolver claim

---------

Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
2026-04-08 11:05:23 -07:00
Waleed
d7da35ba0b v0.6.30: slack trigger enhancements, connectors performance improvements, secrets performance, polling refactors, drag resources in mothership 2026-04-08 01:00:43 -07:00
Theodore Li
d6ec115348 v0.6.29: login improvements, posthog telemetry (#4026)
* feat(posthog): Add tracking on mothership abort (#4023)

Co-authored-by: Theodore Li <theo@sim.ai>

* fix(login): fix captcha headers for manual login  (#4025)

* fix(signup): fix turnstile key loading

* fix(login): fix captcha header passing

* Catch user already exists, remove login form captcha
2026-04-07 19:11:31 -04:00
Waleed
3f508e445f v0.6.28: new docs, delete confirmation standardization, dagster integration, signup method feature flags, SSO improvements 2026-04-07 14:26:42 -07:00
Waleed
316bc8cdcc v0.6.27: new triggers, mothership improvements, files archive, queueing improvements, posthog, secrets mutations 2026-04-06 22:15:29 -07:00
Waleed
d889f32697 v0.6.26: ui improvements, multiple response blocks, docx previews, ollama fix 2026-04-05 12:33:24 -07:00
Waleed
28af223a9f v0.6.25: cloudwatch, cloudformation, live kb sync, linear fixes, posthog upgrade 2026-04-04 18:39:28 -07:00
Waleed
a54dcbe949 v0.6.24: copilot feedback wiring, captcha fixes 2026-04-04 12:52:05 -07:00
Waleed
0b9019d9a2 v0.6.23: MCP fixes, remove local state in favor of server state, mothership workflow edits via sockets, ui improvements 2026-04-03 23:30:26 -07:00
1418 changed files with 140924 additions and 33451 deletions

View File

@@ -14,6 +14,20 @@ When the user asks you to create a block:
2. Configure all subBlocks with proper types, conditions, and dependencies
3. Wire up tools correctly
## Hard Rule: No Guessed Tool Outputs
Blocks depend on tool outputs. If the underlying tool response schema is not documented or live-verified, you MUST tell the user instead of guessing block outputs.
- Do NOT invent block outputs for undocumented tool responses
- Do NOT describe unknown JSON shapes as if they were confirmed
- Do NOT wire fields into the block just because they seem likely to exist
If the tool outputs are not known, do one of these instead:
1. Ask the user for sample tool responses
2. Ask the user for test credentials so the tool responses can be verified
3. Limit the block to operations whose outputs are documented
4. Leave uncertain outputs out and explicitly tell the user what remains unknown
## Block Configuration Structure
```typescript
@@ -575,6 +589,8 @@ Use `type: 'json'` with a descriptive string when:
- It represents a list/array of items
- The shape varies by operation
If the output shape is unknown because the underlying tool response is undocumented, you MUST tell the user and stop. Unknown is not the same as variable. Never guess block outputs.
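For illustration, a minimal sketch of such an output, with a hypothetical field name and an assumed `outputs` schema syntax (verify against a real block before copying):
```typescript
// Hypothetical variable-shape output: declared as 'json' with a descriptive
// string because the item structure changes per operation.
outputs: {
  results: {
    type: 'json',
    description: 'Records returned by the selected operation; item shape varies by operation',
  },
},
```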
## V2 Block Pattern
When creating V2 blocks (alongside legacy V1):
@@ -829,3 +845,4 @@ After creating the block, you MUST validate it against every tool it references:
- Type coercions in `tools.config.params` for any params that need conversion (Number(), Boolean(), JSON.parse())
3. **Verify block outputs** cover the key fields returned by all tools
4. **Verify conditions** — each subBlock should only show for the operations that actually use it
5. **If any tool outputs are still unknown**, explicitly tell the user instead of guessing block outputs
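As a sketch of the type-coercion check in step 2, assuming a function-style `tools.config.params` and hypothetical param names (verify the real signature before copying):
```typescript
// UI subBlocks typically emit strings, so coerce before the tool call.
tools: {
  config: {
    params: (params: Record<string, unknown>) => ({
      ...params,
      limit: params.limit !== undefined ? Number(params.limit) : undefined, // numeric input arrives as a string
      includeArchived: Boolean(params.includeArchived), // switch value normalized to a boolean
      filters: params.filters ? JSON.parse(params.filters as string) : undefined, // JSON text from a long-input
    }),
  },
},
```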

View File

@@ -15,6 +15,21 @@ When the user asks you to create a connector:
3. Create the connector directory and config
4. Register it in the connector registry
## Hard Rule: No Guessed Response Or Document Schemas
If the service docs do not clearly show the document list response, document fetch response, pagination shape, or metadata fields, you MUST tell the user instead of guessing.
- Do NOT invent document fields
- Do NOT guess pagination cursors or next-page fields
- Do NOT infer metadata/tag mappings from unrelated endpoints
- Do NOT fabricate `ExternalDocument` content structure from partial docs
If the source schema is unknown, do one of these instead:
1. Ask the user for sample API responses
2. Ask the user for test credentials so you can verify live payloads
3. Implement only the documented parts of the connector
4. Leave the connector incomplete and explicitly say which fields remain unknown
## Directory Structure
Create files in `apps/sim/connectors/{service}/`:
@@ -92,6 +107,8 @@ export const {service}Connector: ConnectorConfig = {
}
```
Only map fields in `listDocuments`, `getDocument`, `validateConfig`, and `mapTags` when the source payload shape is documented or live-verified. If not, tell the user and stop rather than guessing.
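A minimal sketch of that discipline, assuming a hypothetical service whose docs only specify `id` and `title` on list items; the endpoint, response wrapper, and returned fields here are illustrative, not from a real connector:
```typescript
// Maps only the two documented fields. Pagination and tag mappings are
// omitted because this hypothetical service does not document them;
// that gap is reported to the user instead of guessed.
async function listDocuments(apiKey: string) {
  const res = await fetch('https://api.example.com/v1/documents', {
    headers: { Authorization: `Bearer ${apiKey}` },
  })
  const body = (await res.json()) as { documents: Array<{ id: string; title: string }> }
  return body.documents.map((doc) => ({
    id: doc.id,
    title: doc.title,
  }))
}
```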
### API key connector example
```typescript

View File

@@ -29,6 +29,21 @@ Before writing any code:
- Required vs optional parameters
- Response structures
### Hard Rule: No Guessed Response Schemas
If the official docs do not clearly show the response JSON shape for an endpoint, you MUST stop and tell the user exactly which outputs are unknown.
- Do NOT guess response field names
- Do NOT infer nested JSON paths from related endpoints
- Do NOT invent output properties just because they seem likely
- Do NOT implement `transformResponse` against unverified payload shapes
If response schemas are missing or incomplete, do one of the following before proceeding:
1. Ask the user for sample responses
2. Ask the user for test credentials so you can verify the live payload
3. Reduce the scope to only endpoints whose response shapes are documented
4. Leave the tool unimplemented and explicitly report why
## Step 2: Create Tools
### Directory Structure
@@ -103,6 +118,7 @@ export const {service}{Action}Tool: ToolConfig<Params, Response> = {
- Set `optional: true` for outputs that may not exist
- Never output raw JSON dumps; extract meaningful, individual fields
- When using `type: 'json'` and you know the object shape, define `properties` with the inner fields so downstream consumers know the structure. Only use bare `type: 'json'` when the shape is truly dynamic
- If you do not know the response JSON shape from docs or verified examples, you MUST tell the user and stop. Never guess outputs or response mappings.
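For example, a sketch of a documented object output; the field names are hypothetical and the exact output-schema syntax is assumed:
```typescript
outputs: {
  channel: {
    type: 'json',
    description: 'The created channel',
    properties: {
      id: { type: 'string', description: 'Channel ID' },
      name: { type: 'string', description: 'Channel name' },
      topic: { type: 'string', description: 'Channel topic', optional: true }, // may not exist
    },
  },
},
```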
## Step 3: Create Block
@@ -450,6 +466,8 @@ If creating V2 versions (API-aligned outputs):
- [ ] Verified block subBlocks cover all required tool params with correct conditions
- [ ] Verified block outputs match what the tools actually return
- [ ] Verified `tools.config.params` correctly maps and coerces all param types
- [ ] Verified every tool output and `transformResponse` path against documented or live-verified JSON responses
- [ ] If any response schema remained unknown, explicitly told the user instead of guessing
## Example Command

View File

@@ -14,6 +14,21 @@ When the user asks you to create tools for a service:
2. Create the tools directory structure
3. Generate properly typed tool configurations
## Hard Rule: No Guessed Response Schemas
If the docs do not clearly show the response JSON for a tool, you MUST tell the user exactly which outputs are unknown and stop short of guessing.
- Do NOT invent response field names
- Do NOT infer nested paths from nearby endpoints
- Do NOT guess array item shapes
- Do NOT write `transformResponse` against unverified payloads
If the response shape is unknown, do one of these instead:
1. Ask the user for sample responses
2. Ask the user for test credentials so you can verify live responses
3. Implement only the endpoints whose outputs are documented
4. Leave the tool unimplemented and explicitly say why
## Directory Structure
Create files in `apps/sim/tools/{service}/`:
@@ -187,6 +202,8 @@ items: {
Only use bare `type: 'json'` without `properties` when the shape is truly dynamic or unknown.
If the response shape is unknown because the docs do not provide it, you MUST tell the user and stop. Unknown is not the same as dynamic. Never guess outputs.
## Critical Rules for transformResponse
### Handle Nullable Fields
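A minimal sketch of the rule, assuming a hypothetical endpoint documented as returning `{ user: { name, email? }, items? }`; the `{ success, output }` return wrapper is an assumption:
```typescript
transformResponse: async (response: Response) => {
  const data = await response.json()
  return {
    success: true,
    output: {
      name: data.user.name,
      email: data.user.email ?? null, // documented as optional: default to null, not ''
      items: data.items ?? [], // optional array: default to []
    },
  }
},
```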
@@ -441,7 +458,9 @@ After creating all tools, you MUST validate every tool before finishing:
- All output fields match what the API actually returns
- No fields are missing from outputs that the API provides
- No extra fields are defined in outputs that the API doesn't return
- Every output field and JSON path is backed by docs or live-verified sample responses
3. **Verify consistency** across tools:
- Shared types in `types.ts` match all tools that use them
- Tool IDs in the barrel export match the tool file definitions
- Error handling is consistent (error checks, meaningful messages)
4. **If any response schema is still unknown**, explicitly tell the user instead of guessing

View File

@@ -14,6 +14,21 @@ You are an expert at creating webhook triggers for Sim. You understand the trigg
3. Create a provider handler if custom auth, formatting, or subscriptions are needed
4. Register triggers and connect them to the block
## Hard Rule: No Guessed Webhook Payload Schemas
If the service docs do not clearly show the webhook payload JSON for an event, you MUST tell the user instead of guessing trigger outputs or `formatInput` mappings.
- Do NOT invent payload field names
- Do NOT guess nested event object paths
- Do NOT infer output fields from the UI or marketing docs
- Do NOT write `formatInput` against unverified webhook bodies
If the payload shape is unknown, do one of these instead:
1. Ask the user for sample webhook payloads
2. Ask the user for a test webhook source so you can inspect a real event
3. Implement only the event registration/setup portions whose payloads are documented
4. Leave the trigger unimplemented and explicitly say which payload fields are unknown
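For orientation, a sketch of a `formatInput` that maps only documented fields; the payload shape is hypothetical:
```typescript
// Maps the two documented fields and nothing else; undocumented fields
// are reported to the user rather than guessed.
export function formatInput(payload: { event: { id: string; author?: { name?: string } } }) {
  return {
    input: {
      eventId: payload.event.id,
      authorName: payload.event.author?.name ?? null, // null for missing optional fields
    },
  }
}
```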
## Directory Structure
```

View File

@@ -0,0 +1,25 @@
---
name: cleanup
description: Run all code quality skills in sequence — effects, memo, callbacks, state, React Query, and emcn design review
---
# Cleanup
Arguments:
- scope: what to review (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Steps
Run each of these skills in order on the specified scope, passing through the scope and fix arguments. After each skill completes, move to the next. Do not skip any.
1. `/you-might-not-need-an-effect $ARGUMENTS`
2. `/you-might-not-need-a-memo $ARGUMENTS`
3. `/you-might-not-need-a-callback $ARGUMENTS`
4. `/you-might-not-need-state $ARGUMENTS`
5. `/react-query-best-practices $ARGUMENTS`
6. `/emcn-design-review $ARGUMENTS`
After all skills have run, output a summary of what was found and fixed (or proposed) across all six passes.

View File

@@ -0,0 +1,335 @@
---
name: emcn-design-review
description: Review UI code for alignment with the emcn design system — components, tokens, patterns, and conventions
---
# EMCN Design Review
Arguments:
- scope: what to review (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses **emcn**, a custom component library built on Radix UI primitives with CVA (class-variance-authority) variants and CSS variable design tokens. All UI must use emcn components and tokens — never raw HTML elements or hardcoded colors.
## Steps
1. Read the emcn barrel export at `apps/sim/components/emcn/components/index.ts` to know what's available
2. Read `apps/sim/app/_styles/globals.css` for the full set of CSS variable tokens
3. Analyze the specified scope against every rule below
4. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.
---
## Imports
- Import components from `@/components/emcn`, never from subpaths
- Import icons from `@/components/emcn/icons` or `lucide-react`
- Import `cn` from `@/lib/core/utils/cn` for conditional class merging
- Import app-specific wrappers (Select, VerifiedBadge) from `@/components/ui`
```tsx
// Good
import { Button, Modal, Badge } from '@/components/emcn'
// Bad
import { Button } from '@/components/emcn/components/button/button'
```
---
## Design Tokens (CSS Variables)
Never use raw color values. Always use CSS variable tokens via Tailwind arbitrary values: `text-[var(--text-primary)]`, not `text-gray-500` or `#333`. The CSS variable pattern is canonical (1,700+ uses) — do not use Tailwind semantic classes like `text-muted-foreground`.
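For example:
```tsx
// Good
<p className="text-[var(--text-secondary)]">Last synced 5 minutes ago</p>

// Bad
<p className="text-muted-foreground">Last synced 5 minutes ago</p>
<p className="text-gray-500">Last synced 5 minutes ago</p>
```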
### Text hierarchy
| Token | Use |
|-------|-----|
| `text-[var(--text-primary)]` | Main content text |
| `text-[var(--text-secondary)]` | Secondary/supporting text |
| `text-[var(--text-tertiary)]` | Tertiary text |
| `text-[var(--text-muted)]` | Disabled, placeholder text |
| `text-[var(--text-icon)]` | Icon tinting |
| `text-[var(--text-inverse)]` | Text on dark backgrounds |
| `text-[var(--text-error)]` | Error/warning messages |
### Surfaces (elevation)
| Token | Use |
|-------|-----|
| `bg-[var(--bg)]` | Page background |
| `bg-[var(--surface-2)]` through `bg-[var(--surface-7)]` | Increasing elevation |
| `bg-[var(--surface-hover)]` | Hover state backgrounds |
| `bg-[var(--surface-active)]` | Active/selected backgrounds |
### Borders
| Token | Use |
|-------|-----|
| `border-[var(--border)]` | Default borders |
| `border-[var(--border-1)]` | Stronger borders (inputs, cards) |
| `border-[var(--border-muted)]` | Subtle dividers |
### Status
| Token | Use |
|-------|-----|
| `--success` | Success states |
| `--error` | Error states |
| `--caution` | Warning states |
### Brand
| Token | Use |
|-------|-----|
| `--brand-secondary` | Brand color |
| `--brand-accent` | Accent/CTA color |
### Shadows
Use shadow tokens, never raw box-shadow values:
- `shadow-subtle`, `shadow-medium`, `shadow-overlay`
- `shadow-kbd`, `shadow-card`
### Z-Index
Use z-index tokens for layering:
- `z-[var(--z-dropdown)]` (100), `z-[var(--z-modal)]` (200), `z-[var(--z-popover)]` (300), `z-[var(--z-tooltip)]` (400), `z-[var(--z-toast)]` (500)
---
## Component Usage Rules
### Buttons
Available variants: `default`, `primary`, `destructive`, `ghost`, `outline`, `active`, `secondary`, `tertiary`, `subtle`, `ghost-secondary`, `3d`
| Action type | Variant | Frequency |
|-------------|---------|-----------|
| Toolbar, icon-only, utility actions | `ghost` | Most common (28%) |
| Primary action (create, save, submit) | `primary` | Very common (24%) |
| Cancel, close, secondary action | `default` | Common |
| Delete, remove, destructive action | `destructive` | Targeted use only |
| Active/selected state | `active` | Targeted use only |
| Toggle, mode switch | `outline` | Moderate |
Sizes: `sm` (compact, 32% of buttons) or `md` (default, used when no size specified). Never create custom button styles — use an existing variant.
Buttons without an explicit variant prop get `default` styling. This is acceptable for cancel/secondary actions.
### Modals (Dialogs)
Use `Modal` + subcomponents. Never build custom dialog overlays.
```tsx
<Modal open={open} onOpenChange={setOpen}>
<ModalContent size="sm">
<ModalHeader>Title</ModalHeader>
<ModalBody>Content</ModalBody>
<ModalFooter>
<Button variant="default" onClick={() => setOpen(false)}>Cancel</Button>
<Button variant="primary" onClick={handleSubmit}>Save</Button>
</ModalFooter>
</ModalContent>
</Modal>
```
Modal sizes by frequency: `sm` (440px, most common — confirmations and simple dialogs), `md` (500px, forms), `lg` (600px, content-heavy), `xl` (800px, rare), `full` (1200px, rare).
Footer buttons: Cancel on left (`variant="default"`), primary action on right. This pattern is followed 100% across the codebase.
### Delete/Remove Confirmations
Always use Modal with `size="sm"`. The established pattern:
```tsx
<Modal open={open} onOpenChange={setOpen}>
<ModalContent size="sm">
<ModalHeader>Delete {itemType}</ModalHeader>
<ModalBody>
<p>Description of consequences</p>
<p className="text-[var(--text-error)]">Warning about irreversibility</p>
</ModalBody>
<ModalFooter>
<Button variant="default" onClick={() => setOpen(false)}>Cancel</Button>
<Button variant="destructive" onClick={handleDelete} disabled={isDeleting}>
Delete
</Button>
</ModalFooter>
</ModalContent>
</Modal>
```
Rules:
- Title: "Delete {ItemType}" or "Remove {ItemType}" (use "Remove" for membership/association changes)
- Include consequence description
- Use `text-[var(--text-error)]` for warning text when the action is irreversible
- `variant="destructive"` for the action button (100% compliance)
- `variant="default"` for cancel (100% compliance)
- Cancel left, destructive right (100% compliance)
- For high-risk deletes (workspaces), require typing the name to confirm
- Include recovery info if soft-delete: "You can restore it from Recently Deleted in Settings"
### Toast Notifications
Use the imperative `toast` API from `@/components/emcn`. Never build custom notification UI.
```tsx
import { toast } from '@/components/emcn'
toast.success('Item saved')
toast.error('Something went wrong')
toast.success('Deleted', { action: { label: 'Undo', onClick: handleUndo } })
```
Variants: `default`, `success`, `error`. Auto-dismiss after 5s. Supports optional action buttons with callbacks.
### Badges
Use semantic color variants for status:
| Status | Variant | Usage |
|--------|---------|-------|
| Error, failed, disconnected | `red` | Most common (15 uses) |
| Metadata, roles, auth types, scopes | `gray-secondary` | Very common (12 uses) |
| Type annotations (TS types, field types) | `type` | Very common (12 uses) |
| Success, active, enabled, running | `green` | Common (7 uses) |
| Neutral, default, unknown | `gray` | Common (6 uses) |
| Outline, parameters, public | `outline` | Moderate (6 uses) |
| Warning, processing | `amber` | Moderate (5 uses) |
| Paused, warning | `orange` | Occasional |
| Info, queued | `blue` | Occasional |
| Data types (arrays) | `purple` | Occasional |
| Generic with border | `default` | Occasional |
Use `dot` prop for status indicators (19 instances in codebase). `icon` prop is available but rarely used.
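A sketch of typical usage, assuming the variant name is passed through a `variant` prop as with the other CVA-based components:
```tsx
<Badge variant="green" dot>Running</Badge>
<Badge variant="red" dot>Failed</Badge>
<Badge variant="gray-secondary">Admin</Badge>
```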
### Tooltips
Use `Tooltip` from emcn with namespace pattern:
```tsx
<Tooltip.Root>
<Tooltip.Trigger asChild>
<Button variant="ghost">{icon}</Button>
</Tooltip.Trigger>
<Tooltip.Content>Helpful text</Tooltip.Content>
</Tooltip.Root>
```
Use tooltips for icon-only buttons and truncated text. Don't tooltip self-explanatory elements.
### Popovers
Use for filters, option menus, and nested navigation:
```tsx
<Popover open={open} onOpenChange={setOpen} size="sm">
<PopoverTrigger asChild>
<Button variant="ghost">Trigger</Button>
</PopoverTrigger>
<PopoverContent side="bottom" align="end" minWidth={160}>
<PopoverSection>Section Title</PopoverSection>
<PopoverItem active={isActive} onClick={handleClick}>
Item Label
</PopoverItem>
<PopoverDivider />
</PopoverContent>
</Popover>
```
### Dropdown Menus
Use for context menus and action menus:
```tsx
<DropdownMenu>
<DropdownMenuTrigger asChild>
<Button variant="ghost">
<MoreHorizontal className="h-[14px] w-[14px]" />
</Button>
</DropdownMenuTrigger>
<DropdownMenuContent align="end">
<DropdownMenuItem onClick={handleEdit}>Edit</DropdownMenuItem>
<DropdownMenuSeparator />
<DropdownMenuItem onClick={handleDelete} className="text-[var(--text-error)]">
Delete
</DropdownMenuItem>
</DropdownMenuContent>
</DropdownMenu>
```
Destructive items go last, after a separator, in error color.
### Forms
Use `FormField` wrapper for labeled inputs:
```tsx
<FormField label="Name" htmlFor="name" error={errors.name} optional>
<Input id="name" value={name} onChange={e => setName(e.target.value)} />
</FormField>
```
Rules:
- Use `Input` from emcn, never raw `<input>` (exception: hidden file inputs)
- Use `Textarea` from emcn, never raw `<textarea>`
- Use `FormField` for label + input + error layout
- Mark optional fields with `optional` prop
- Show errors inline below the input
- Use `Combobox` for searchable selects
- Use `TagInput` for multi-value inputs
### Loading States
Use `Skeleton` for content placeholders:
```tsx
<Skeleton className="h-5 w-[200px] rounded-md" />
```
Rules:
- Mirror the actual UI structure with skeletons
- Match exact dimensions of the final content
- Use `rounded-md` to match component radius
- Stack multiple skeletons for lists
### Icons
Standard sizing — `h-[14px] w-[14px]` is the dominant pattern (400+ uses):
```tsx
<Icon className="h-[14px] w-[14px] text-[var(--text-icon)]" />
```
Size scale by frequency:
1. `h-[14px] w-[14px]` — default for inline icons (most common)
2. `h-[16px] w-[16px]` — slightly larger inline icons
3. `h-3 w-3` (12px) — compact/tight spaces
4. `h-4 w-4` (16px) — Tailwind equivalent, also common
5. `h-3.5 w-3.5` (14px) — Tailwind equivalent of 14px
6. `h-5 w-5` (20px) — larger icons, section headers
Use `text-[var(--text-icon)]` for icon color (113+ uses in codebase).
---
## Styling Rules
1. **Use `cn()` for conditional classes**: `cn('base', condition && 'conditional')` — never template literal concatenation like `` `base ${condition ? 'active' : ''}` ``
2. **Inline styles**: Avoid. Exception: dynamic values that can't be expressed as Tailwind classes (e.g., `style={{ width: dynamicVar }}` or CSS variable references). Never use inline styles for colors or static values.
3. **Never hardcode colors**: Use CSS variable tokens. Never `text-gray-500`, `bg-red-100`, `#fff`, or `rgb()`. Always `text-[var(--text-*)]`, `bg-[var(--surface-*)]`, etc.
4. **Never use Tailwind semantic color classes**: Use `text-[var(--text-muted)]` not `text-muted-foreground`. The CSS variable pattern is canonical.
5. **Never use global styles**: Keep all styling local to components
6. **Hover states**: Use `hover-hover:` pseudo-class for hover-capable devices
7. **Transitions**: Use `transition-colors` for color changes, `transition-colors duration-100` for fast hover
8. **Border radius**: `rounded-lg` (large cards), `rounded-md` (medium), `rounded-sm` (small), `rounded-xs` (tiny)
9. **Typography**: Use semantic sizes — `text-small` (13px), `text-caption` (12px), `text-xs` (11px), `text-micro` (10px)
10. **Font weight**: Use `font-medium` for emphasis, avoid `font-bold` unless for headings
11. **Spacing**: Use Tailwind gap/padding utilities. Common patterns: `gap-2`, `gap-3`, `px-4 py-2.5`
---
## Anti-patterns to flag
- Raw HTML `<button>` instead of Button component (exception: inside Radix primitives)
- Raw HTML `<input>` instead of Input component (exception: hidden file inputs, read-only checkboxes in markdown)
- Hardcoded Tailwind default colors (`text-gray-*`, `bg-red-*`, `text-blue-*`)
- Hex values in className (`bg-[#fff]`, `text-[#333]`)
- Tailwind semantic classes (`text-muted-foreground`) instead of CSS variables (`text-[var(--text-muted)]`)
- Custom modal/dialog implementations instead of `Modal`
- Custom toast/notification implementations instead of `toast`
- Inline styles for colors or static values (dynamic values are acceptable)
- Template literal className concatenation instead of `cn()`
- Wrong button variant for the action type
- Missing loading/skeleton states
- Missing error states on forms
- Importing from emcn subpaths instead of barrel export
- Using arbitrary z-index (`z-50`, `z-[9999]`) instead of z-index tokens
- Custom shadows instead of shadow tokens
- Icon sizes that don't follow the established scale (default to `h-[14px] w-[14px]`)

View File

@@ -0,0 +1,54 @@
---
name: react-query-best-practices
description: Audit React Query usage for best practices — key factories, staleTime, mutations, and server state ownership
---
# React Query Best Practices
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/hooks/queries/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses React Query (TanStack Query) as the single source of truth for all server state. All query hooks live in `hooks/queries/`. Zustand is used only for client-only UI state. Server data must never be duplicated into useState or Zustand outside of mutation callbacks that coordinate cross-store state.
## References
Read these before analyzing:
1. https://tkdodo.eu/blog/practical-react-query — foundational defaults, custom hooks, avoiding local state copies
2. https://tkdodo.eu/blog/effective-react-query-keys — key factory pattern, hierarchical keys, fuzzy invalidation
3. https://tkdodo.eu/blog/react-query-as-a-state-manager — React Query IS your server state manager
## Rules to enforce
### Query key factories
- Every file in `hooks/queries/` must have a hierarchical key factory with an `all` root key
- Keys must include intermediate plural keys (`lists`, `details`) for prefix invalidation
- Key factories are colocated with their query hooks, not in a global keys file
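A minimal sketch of the factory shape these rules describe, for a hypothetical `folders` entity:
```typescript
export const folderKeys = {
  all: ['folders'] as const,
  lists: () => [...folderKeys.all, 'list'] as const,
  list: (workspaceId: string) => [...folderKeys.lists(), workspaceId] as const,
  details: () => [...folderKeys.all, 'detail'] as const,
  detail: (id: string) => [...folderKeys.details(), id] as const,
}
```
With this shape, invalidating on `folderKeys.lists()` fuzzily matches every list query without touching detail queries.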
### Query hooks
- Every `queryFn` must forward `signal` for request cancellation
- Every query must have an explicit `staleTime` (default 0 is almost never correct)
- `keepPreviousData` / `placeholderData` only on variable-key queries (where params change), never on static keys
- Use `enabled` to prevent queries from running without required params
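A sketch of a hook that satisfies all four rules; the entity and the `fetchFolders` fetcher are hypothetical:
```typescript
import { useQuery } from '@tanstack/react-query'

export function useFolders(workspaceId: string | undefined) {
  return useQuery({
    queryKey: folderKeys.list(workspaceId ?? ''),
    // Forward signal so in-flight requests cancel on unmount or key change
    queryFn: ({ signal }) => fetchFolders(workspaceId!, signal),
    enabled: Boolean(workspaceId), // never run without the required param
    staleTime: 30_000, // explicit, deliberate staleTime
  })
}
```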
### Mutations
- Use `onSettled` (not `onSuccess`) for cache reconciliation — it fires on both success and error
- For optimistic updates: save previous data in `onMutate`, roll back in `onError`
- Use targeted invalidation (`entityKeys.lists()`) not broad (`entityKeys.all`) when possible
- Don't include mutation objects in `useCallback` deps — `.mutate()` is stable
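A sketch of an optimistic update wired per these rules, reusing the hypothetical `folderKeys` factory and a hypothetical `renameFolder` API call:
```typescript
import { useMutation, useQueryClient } from '@tanstack/react-query'

function useRenameFolder() {
  const queryClient = useQueryClient()
  return useMutation({
    mutationFn: renameFolder,
    onMutate: async (vars: { id: string; name: string }) => {
      await queryClient.cancelQueries({ queryKey: folderKeys.detail(vars.id) })
      const previous = queryClient.getQueryData(folderKeys.detail(vars.id))
      queryClient.setQueryData(folderKeys.detail(vars.id), (old: any) => ({ ...old, name: vars.name }))
      return { previous } // saved for rollback
    },
    onError: (_err, vars, ctx) => {
      queryClient.setQueryData(folderKeys.detail(vars.id), ctx?.previous) // roll back
    },
    onSettled: (_data, _err, vars) => {
      // Targeted invalidation; fires on both success and error
      queryClient.invalidateQueries({ queryKey: folderKeys.detail(vars.id) })
      queryClient.invalidateQueries({ queryKey: folderKeys.lists() })
    },
  })
}
```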
### Server state ownership
- Never copy query data into useState. Use query data directly in components.
- Never copy query data into Zustand stores (exception: mutation callbacks that coordinate cross-store state like temp ID replacement)
- The query cache is not a local state manager — `setQueryData` is for optimistic updates only
- Forms are the one deliberate exception: copy server data into local form state with `staleTime: Infinity`
## Steps
1. Read the references above to understand the guidelines
2. Analyze the specified scope against the rules listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.

View File

@@ -52,6 +52,20 @@ Fetch the official API docs for the service. This is the **source of truth** for
Use Context7 (resolve-library-id → query-docs) or WebFetch to retrieve documentation. If both fail, note which claims are based on training knowledge vs verified docs.
### Hard Rule: No Guessed Source Schemas
If the service docs do not clearly show document list responses, document fetch responses, metadata fields, or pagination shapes, you MUST tell the user instead of guessing.
- Do NOT infer document fields from unrelated endpoints
- Do NOT guess pagination cursors or response wrappers
- Do NOT assume metadata keys that are not documented
- Do NOT treat probable shapes as validated
If a schema is unknown, validation must explicitly recommend:
1. sample API responses,
2. live test credentials, or
3. trimming the connector to only documented fields.
## Step 3: Validate API Endpoints
For **every** API call in the connector (`listDocuments`, `getDocument`, `validateConfig`, and any helper functions), verify against the API docs:
@@ -93,6 +107,7 @@ For **every** API call in the connector (`listDocuments`, `getDocument`, `valida
- [ ] Field names extracted match what the API actually returns
- [ ] Nullable fields are handled with `?? null` or `|| undefined`
- [ ] Error responses are checked before accessing data fields
- [ ] Every extracted field and pagination value is backed by official docs or live-verified sample payloads
## Step 4: Validate OAuth Scopes (if OAuth connector)
@@ -304,6 +319,7 @@ After fixing, confirm:
1. `bun run lint` passes
2. TypeScript compiles clean
3. Re-read all modified files to verify fixes are correct
4. Any remaining unknown source schemas were explicitly reported to the user instead of guessed
## Checklist Summary

View File

@@ -41,6 +41,20 @@ Fetch the official API docs for the service. This is the **source of truth** for
- Pagination patterns (which param name, which response field)
- Rate limits and error formats
### Hard Rule: No Guessed Response Schemas
If the official docs do not clearly show the response JSON shape for an endpoint, you MUST tell the user instead of guessing.
- Do NOT assume field names from nearby endpoints
- Do NOT infer nested JSON paths without evidence
- Do NOT treat "likely" fields as confirmed outputs
- Do NOT accept implementation guesses as valid just because they are defensive
If a response schema is unknown, the validation must explicitly call that out and require:
1. sample responses from the user,
2. live test credentials for verification, or
3. trimming the tool/block down to only documented fields.
## Step 3: Validate Tools
For **every** tool file, check:
@@ -81,6 +95,7 @@ For **every** tool file, check:
- [ ] All optional arrays use `?? []`
- [ ] Error cases are handled: checks for missing/empty data and returns meaningful error
- [ ] Does NOT do raw JSON dumps — extracts meaningful, individual fields
- [ ] Every extracted field is backed by official docs or live-verified sample payloads
### Outputs
- [ ] All output fields match what the API actually returns
@@ -267,6 +282,7 @@ After fixing, confirm:
1. `bun run lint` passes with no fixes needed
2. TypeScript compiles clean (no type errors)
3. Re-read all modified files to verify fixes are correct
4. Any remaining unknown response schemas were explicitly reported to the user instead of guessed
## Checklist Summary

View File

@@ -44,6 +44,20 @@ Fetch the service's official webhook documentation. This is the **source of trut
- Webhook subscription API (create/delete endpoints, if applicable)
- Retry behavior and delivery guarantees
### Hard Rule: No Guessed Webhook Payload Schemas
If the official docs do not clearly show the webhook payload JSON for an event, you MUST tell the user instead of guessing.
- Do NOT invent payload field names
- Do NOT infer nested payload paths without evidence
- Do NOT treat likely event shapes as verified
- Do NOT accept `formatInput` mappings that are not backed by docs or live payloads
If a payload schema is unknown, validation must explicitly recommend:
1. sample webhook payloads,
2. a live test webhook source, or
3. trimming the trigger to only documented outputs.
## Step 3: Validate Trigger Definitions
### utils.ts
@@ -93,6 +107,7 @@ Fetch the service's official webhook documentation. This is the **source of trut
- [ ] Nested output paths exist at the correct depth (e.g., `resource.id` actually has `resource: { id: ... }`)
- [ ] `null` is used for missing optional fields (not empty strings or empty objects)
- [ ] Returns `{ input: { ... } }` — not a bare object
- [ ] Every mapped payload field is backed by official docs or live-verified webhook payloads
### Idempotency
- [ ] `extractIdempotencyId` returns a stable, unique key per delivery
@@ -195,6 +210,7 @@ After fixing, confirm:
1. `bun run type-check` passes
2. Re-read all modified files to verify fixes are correct
3. Provider handler tests pass (if they exist): `bun test {service}`
4. Any remaining unknown webhook payload schemas were explicitly reported to the user instead of guessed
## Checklist Summary

View File

@@ -0,0 +1,51 @@
---
name: you-might-not-need-a-callback
description: Analyze and fix useCallback anti-patterns in your code
---
# You Might Not Need a Callback
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## References
Read before analyzing:
1. https://react.dev/reference/react/useCallback — official docs on when useCallback is actually needed
## When useCallback IS needed
- Passing a callback to a child wrapped in `React.memo` (to preserve referential equality)
- The callback is a dependency of another hook (`useEffect`, `useMemo`)
- The callback is used in a custom hook that documents referential stability requirements
## Anti-patterns to detect
1. **useCallback on functions not passed as props or deps**: If the function is only called within the same component and isn't in any dependency array, useCallback adds overhead for no benefit. Just declare the function normally.
2. **useCallback with exhaustive deps that change every render**: If the dependency array includes values that change on every render, useCallback recalculates every time. The memoization is wasted. Either stabilize the deps (use refs) or remove the useCallback.
3. **useCallback on event handlers passed to native elements**: `<button onClick={handleClick}>` — native elements don't benefit from stable references. Only child components wrapped in React.memo do.
4. **useCallback wrapping a function that creates new objects/arrays**: If the callback returns `{ ...newObj }` or `[...newArr]`, memoizing the callback doesn't prevent the child from re-rendering due to new return values. The memoization is at the wrong level.
5. **useCallback with an empty dep array when deps are needed**: Stale closures — the callback captures outdated values. Either add proper deps or use refs for values that shouldn't trigger re-creation.
6. **Pairing useCallback with React.memo unnecessarily**: If the child component is cheap to render, neither useCallback nor React.memo adds value. Only optimize when you've measured a performance problem.
7. **useCallback in custom hooks that don't need stable references**: Not every hook return needs to be memoized. Only stabilize callbacks when consumers depend on referential equality.
## Codebase-specific notes
This codebase uses a ref pattern for stable callbacks in hooks:
```tsx
const idRef = useRef(id)
useEffect(() => { idRef.current = id }, [id])
const fetchData = useCallback(async () => {
// use idRef.current instead of id
}, []) // empty deps because refs are used
```
This pattern is correct — don't flag it as an anti-pattern.
## Steps
1. Read the reference above
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.

View File

@@ -0,0 +1,33 @@
---
name: you-might-not-need-a-memo
description: Analyze and fix useMemo/React.memo anti-patterns in your code
---
# You Might Not Need a Memo
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## References
Read before analyzing:
1. https://overreacted.io/before-you-memo/ — two techniques to avoid memo entirely
## Anti-patterns to detect
1. **Wrapping a slow component in React.memo when state can be moved down**: If a component re-renders because of state it doesn't use, move that state into a smaller child component instead of memoizing. The slow component stops re-rendering without memo.
2. **Wrapping in React.memo when children can be lifted up**: If a parent owns state that changes frequently, extract the stateful part and pass the expensive subtree as `children`. Children passed as props don't re-render when the parent's state changes.
3. **useMemo on cheap computations**: Filtering or mapping a small array, string concatenation, simple arithmetic — these don't need memoization. Only memoize when you've measured a performance problem.
4. **useMemo with constantly-changing deps**: If the dependency array changes on every render, useMemo does nothing — it recalculates every time. Fix the deps or remove the memo.
5. **useMemo to create objects/arrays passed as props**: Instead of memoizing to prevent child re-renders, consider whether the child even needs referential stability. If the child doesn't use React.memo or pass it to a dep array, the memo is wasted.
6. **React.memo on components that always receive new props**: If the parent always passes new objects, arrays, or callbacks, React.memo's shallow comparison always fails. Fix the parent instead of memoizing the child.
7. **useMemo for derived state**: If you're computing a value from props or state, just compute it inline during render. React renders are fast. `const fullName = first + ' ' + last` doesn't need useMemo.
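As a sketch of the move-state-down technique from the reference (component names hypothetical):
```tsx
import { useState } from 'react'

// Before: typing re-renders <ExpensiveTree /> even though it ignores `query`.
function PageBefore() {
  const [query, setQuery] = useState('')
  return (
    <>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      <ExpensiveTree />
    </>
  )
}

// After: the state moves down, so typing only re-renders <SearchBox />.
function PageAfter() {
  return (
    <>
      <SearchBox />
      <ExpensiveTree />
    </>
  )
}

function SearchBox() {
  const [query, setQuery] = useState('')
  return <input value={query} onChange={(e) => setQuery(e.target.value)} />
}
```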
## Steps
1. Read the reference above to understand the two core techniques (move state down, lift content up)
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.

View File

@@ -0,0 +1,38 @@
---
name: you-might-not-need-state
description: Analyze and fix unnecessary useState, derived state, and server-state-in-local-state anti-patterns
---
# You Might Not Need State
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses React Query for all server state and Zustand for client-only global state. useState should only be used for ephemeral UI concerns (open/closed, hover, local form input). Server data should never be copied into useState or Zustand — React Query is the single source of truth.
## References
Read these before analyzing:
1. https://react.dev/learn/choosing-the-state-structure — 5 principles for structuring state
2. https://tkdodo.eu/blog/dont-over-use-state — never store derived/computed values in state
3. https://tkdodo.eu/blog/putting-props-to-use-state — never mirror props into state via useEffect
## Anti-patterns to detect
1. **Derived state stored in useState**: If a value can be computed from props, other state, or query data, compute it inline during render instead of storing it in state.
2. **Server state copied into useState**: Never `useState` + `useEffect` to sync React Query data into local state. Use query data directly. The only exception is forms where users edit server data.
3. **Props mirrored into state**: Never `useState(prop)` + `useEffect(() => setState(prop))`. Use the prop directly, or use a key to reset component state.
4. **Chained useEffect state updates**: Never chain Effects that set state to trigger other Effects. Calculate all derived values in the event handler or inline during render.
5. **Storing objects when an ID suffices**: Store `selectedId` not a copy of the selected object. Derive the object: `items.find(i => i.id === selectedId)`.
6. **State that duplicates Zustand or React Query**: If the data already lives in a store or query cache, don't create a parallel useState.
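A sketch of anti-patterns 1 and 5 and the fix, with a hypothetical `Item` type:
```tsx
import { useState } from 'react'

type Item = { id: string; name: string }

function ItemPicker({ items }: { items: Item[] }) {
  // Anti-pattern: const [selected, setSelected] = useState<Item | null>(null)
  // Fix: store only the ID and derive the object during render.
  const [selectedId, setSelectedId] = useState<string | null>(null)
  const selected = items.find((i) => i.id === selectedId) ?? null
  return (
    <ul>
      {items.map((item) => (
        <li key={item.id} onClick={() => setSelectedId(item.id)}>
          {item.name} {selected?.id === item.id ? '(selected)' : ''}
        </li>
      ))}
    </ul>
  )
}
```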
## Steps
1. Read the references above to understand the guidelines
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.

View File

@@ -1,17 +1,17 @@
---
description: Create webhook triggers for a Sim integration using the generic trigger builder
description: Create webhook or polling triggers for a Sim integration
argument-hint: <service-name>
---
# Add Trigger
You are an expert at creating webhook triggers for Sim. You understand the trigger system, the generic `buildTriggerSubBlocks` helper, and how triggers connect to blocks.
You are an expert at creating webhook and polling triggers for Sim. You understand the trigger system, the generic `buildTriggerSubBlocks` helper, polling infrastructure, and how triggers connect to blocks.
## Your Task
1. Research what webhook events the service supports
2. Create the trigger files using the generic builder
3. Create a provider handler if custom auth, formatting, or subscriptions are needed
1. Research what webhook events the service supports — if the service lacks reliable webhooks, use polling
2. Create the trigger files using the generic builder (webhook) or manual config (polling)
3. Create a provider handler (webhook) or polling handler (polling)
4. Register triggers and connect them to the block
## Directory Structure
@@ -146,23 +146,37 @@ export const TRIGGER_REGISTRY: TriggerRegistry = {
### Block file (`apps/sim/blocks/blocks/{service}.ts`)
Wire triggers into the block so the trigger UI appears and `generate-docs.ts` discovers them. Two changes are needed:
1. **Spread trigger subBlocks** at the end of the block's `subBlocks` array
2. **Add `triggers` property** after `outputs` with `enabled: true` and `available: [...]`
```typescript
import { getTrigger } from '@/triggers'
export const {Service}Block: BlockConfig = {
// ...
triggers: {
enabled: true,
available: ['{service}_event_a', '{service}_event_b'],
},
subBlocks: [
// Regular tool subBlocks first...
...getTrigger('{service}_event_a').subBlocks,
...getTrigger('{service}_event_b').subBlocks,
],
// ... tools, inputs, outputs ...
triggers: {
enabled: true,
available: ['{service}_event_a', '{service}_event_b'],
},
}
```
**Versioned blocks (V1 + V2):** Many integrations have a hidden V1 block and a visible V2 block. Where you add the trigger wiring depends on how V2 inherits from V1:
- **V2 uses `...V1Block` spread** (e.g., Google Calendar): Add trigger to V1 — V2 inherits both `subBlocks` and `triggers` automatically.
- **V2 defines its own `subBlocks`** (e.g., Google Sheets): Add trigger to V2 (the visible block). V1 is hidden and doesn't need it.
- **Single block, no V2** (e.g., Google Drive): Add trigger directly.
`generate-docs.ts` deduplicates by base type (first match wins). If V1 is processed first without triggers, the V2 triggers won't appear in `integrations.json`. Always verify by checking the output after running the script.
## Provider Handler
All provider-specific webhook logic lives in a single handler file: `apps/sim/lib/webhooks/providers/{service}.ts`.
@@ -327,6 +341,121 @@ export function buildOutputs(): Record<string, TriggerOutput> {
}
```
## Polling Triggers
Use polling when the service lacks reliable webhooks (e.g., Google Sheets, Google Drive, Google Calendar, Gmail, RSS, IMAP). Polling triggers do NOT use `buildTriggerSubBlocks` — they define subBlocks manually.
### Directory Structure
```
apps/sim/triggers/{service}/
├── index.ts # Barrel export
└── poller.ts # TriggerConfig with polling: true
apps/sim/lib/webhooks/polling/
└── {service}.ts # PollingProviderHandler implementation
```
### Polling Handler (`apps/sim/lib/webhooks/polling/{service}.ts`)
```typescript
import { pollingIdempotency } from '@/lib/core/idempotency/service'
import type { PollingProviderHandler, PollWebhookContext } from '@/lib/webhooks/polling/types'
import { markWebhookFailed, markWebhookSuccess, resolveOAuthCredential, updateWebhookProviderConfig } from '@/lib/webhooks/polling/utils'
import { processPolledWebhookEvent } from '@/lib/webhooks/processor'
export const {service}PollingHandler: PollingProviderHandler = {
provider: '{service}',
label: '{Service}',
async pollWebhook(ctx: PollWebhookContext): Promise<'success' | 'failure'> {
const { webhookData, workflowData, requestId, logger } = ctx
const webhookId = webhookData.id
try {
// For OAuth services:
const accessToken = await resolveOAuthCredential(webhookData, '{service}', requestId, logger)
const config = webhookData.providerConfig as unknown as {Service}WebhookConfig
// First poll: seed state, emit nothing
if (!config.lastCheckedTimestamp) {
await updateWebhookProviderConfig(webhookId, { lastCheckedTimestamp: new Date().toISOString() }, logger)
await markWebhookSuccess(webhookId, logger)
return 'success'
}
// Fetch changes since last poll, process with idempotency
// ...
await markWebhookSuccess(webhookId, logger)
return 'success'
} catch (error) {
logger.error(`[${requestId}] Error processing {service} webhook ${webhookId}:`, error)
await markWebhookFailed(webhookId, logger)
return 'failure'
}
},
}
```
**Key patterns:**
- First poll seeds state and emits nothing (avoids flooding with existing data)
- Use `pollingIdempotency.executeWithIdempotency(provider, key, callback)` for dedup
- Use `processPolledWebhookEvent(webhookData, workflowData, payload, requestId)` to fire the workflow
- Use `updateWebhookProviderConfig(webhookId, partialConfig, logger)` for read-merge-write on state
- Use the latest server-side timestamp from API responses (not wall clock) to avoid clock skew
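A sketch of the fetch-and-process step elided in the handler above, combining these helpers; the `fetchChangesSince` call and the change fields are hypothetical:
```typescript
// Inside pollWebhook, after the first-poll seeding branch:
const changes = await fetchChangesSince(accessToken, config.lastCheckedTimestamp)
for (const change of changes) {
  // Dedup on a stable per-event key, then fire the workflow
  await pollingIdempotency.executeWithIdempotency('{service}', change.id, async () => {
    await processPolledWebhookEvent(webhookData, workflowData, { event: change }, requestId)
  })
}
// Advance state with the newest server-side timestamp, not the wall clock
const newest = changes.at(-1)?.serverTimestamp ?? config.lastCheckedTimestamp
await updateWebhookProviderConfig(webhookId, { lastCheckedTimestamp: newest }, logger)
```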
### Trigger Config (`apps/sim/triggers/{service}/poller.ts`)
```typescript
import { {Service}Icon } from '@/components/icons'
import type { TriggerConfig } from '@/triggers/types'
export const {service}PollingTrigger: TriggerConfig = {
id: '{service}_poller',
name: '{Service} Trigger',
provider: '{service}',
description: 'Triggers when ...',
version: '1.0.0',
icon: {Service}Icon,
polling: true, // REQUIRED — routes to polling infrastructure
subBlocks: [
{ id: 'triggerCredentials', type: 'oauth-input', title: 'Credentials', serviceId: '{service}', requiredScopes: [], required: true, mode: 'trigger', supportsCredentialSets: true },
// ... service-specific config fields (dropdowns, inputs, switches) ...
{ id: 'triggerInstructions', type: 'text', title: 'Setup Instructions', hideFromPreview: true, mode: 'trigger', defaultValue: '...' },
],
outputs: {
// Must match the payload shape from processPolledWebhookEvent
},
}
```
### Registration (3 places)
1. **`apps/sim/triggers/constants.ts`** — add provider to `POLLING_PROVIDERS` Set
2. **`apps/sim/lib/webhooks/polling/registry.ts`** — import handler, add to `POLLING_HANDLERS`
3. **`apps/sim/triggers/registry.ts`** — import trigger config, add to `TRIGGER_REGISTRY`
### Helm Cron Job
Add to `helm/sim/values.yaml` under the existing polling cron jobs:
```yaml
{service}WebhookPoll:
schedule: "*/1 * * * *"
concurrencyPolicy: Forbid
url: "http://sim:3000/api/webhooks/poll/{service}"
```
### Reference Implementations
- Simple: `apps/sim/lib/webhooks/polling/rss.ts` + `apps/sim/triggers/rss/poller.ts`
- Complex (OAuth, attachments): `apps/sim/lib/webhooks/polling/gmail.ts` + `apps/sim/triggers/gmail/poller.ts`
- Cursor-based (changes API): `apps/sim/lib/webhooks/polling/google-drive.ts`
- Timestamp-based: `apps/sim/lib/webhooks/polling/google-calendar.ts`
## Checklist
### Trigger Definition
@@ -352,7 +481,17 @@ export function buildOutputs(): Record<string, TriggerOutput> {
- [ ] NO changes to `route.ts`, `provider-subscriptions.ts`, or `deploy.ts`
- [ ] API key field uses `password: true`
### Polling Trigger (if applicable)
- [ ] Handler implements `PollingProviderHandler` at `lib/webhooks/polling/{service}.ts`
- [ ] Trigger config has `polling: true` and defines subBlocks manually (no `buildTriggerSubBlocks`)
- [ ] Provider string matches across: trigger config, handler, `POLLING_PROVIDERS`, polling registry
- [ ] First poll seeds state and emits nothing
- [ ] Added provider to `POLLING_PROVIDERS` in `triggers/constants.ts`
- [ ] Added handler to `POLLING_HANDLERS` in `lib/webhooks/polling/registry.ts`
- [ ] Added cron job to `helm/sim/values.yaml`
- [ ] Payload shape matches trigger `outputs` schema
### Testing
- [ ] `bun run type-check` passes
- [ ] Manually verify `formatInput` output keys match trigger `outputs` keys
- [ ] Manually verify output keys match trigger `outputs` keys
- [ ] Trigger UI shows correctly in the block

View File

@@ -0,0 +1,25 @@
---
description: Run all code quality skills in sequence — effects, memo, callbacks, state, React Query, and emcn design review
argument-hint: [scope] [fix=true|false]
---
# Cleanup
Arguments:
- scope: what to review (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Steps
Run each of these skills in order on the specified scope, passing through the scope and fix arguments. After each skill completes, move to the next. Do not skip any.
1. `/you-might-not-need-an-effect $ARGUMENTS`
2. `/you-might-not-need-a-memo $ARGUMENTS`
3. `/you-might-not-need-a-callback $ARGUMENTS`
4. `/you-might-not-need-state $ARGUMENTS`
5. `/react-query-best-practices $ARGUMENTS`
6. `/emcn-design-review $ARGUMENTS`
After all skills have run, output a summary of what was found and fixed (or proposed) across all six passes.

View File

@@ -0,0 +1,79 @@
---
description: Review UI code for alignment with the emcn design system — components, tokens, patterns, and conventions
argument-hint: [scope] [fix=true|false]
---
# EMCN Design Review
Arguments:
- scope: what to review (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses **emcn**, a custom component library built on Radix UI primitives with CVA variants and CSS variable design tokens. All UI must use emcn components and tokens.
## Steps
1. Read the emcn barrel export at `apps/sim/components/emcn/components/index.ts` to know what's available
2. Read `apps/sim/app/_styles/globals.css` for CSS variable tokens
3. Analyze the specified scope against every rule below
4. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.
---
## Imports
- Import from `@/components/emcn` barrel, never subpaths
- Icons from `@/components/emcn/icons` or `lucide-react`
- Use `cn` from `@/lib/core/utils/cn` for conditional classes
## Design Tokens
Use CSS variable pattern (`text-[var(--text-primary)]`), never Tailwind semantics (`text-muted-foreground`) or hardcoded colors (`text-gray-500`, `#333`).
**Text**: `--text-primary`, `--text-secondary`, `--text-tertiary`, `--text-muted`, `--text-icon`, `--text-inverse`, `--text-error`
**Surfaces**: `--bg`, `--surface-2` through `--surface-7`, `--surface-hover`, `--surface-active`
**Borders**: `--border`, `--border-1`, `--border-muted`
**Z-Index**: `--z-dropdown` (100), `--z-modal` (200), `--z-popover` (300), `--z-tooltip` (400), `--z-toast` (500)
**Shadows**: `shadow-subtle`, `shadow-medium`, `shadow-overlay`, `shadow-card`
## Buttons
| Action | Variant |
|--------|---------|
| Toolbar, icon-only | `ghost` (most common, 28%) |
| Create, save, submit | `primary` (24%) |
| Cancel, close | `default` |
| Delete, remove | `destructive` |
| Selected state | `active` |
| Toggle | `outline` |
## Delete/Remove Confirmations
Modal `size="sm"`, title "Delete/Remove {ItemType}", `variant="destructive"` action button, `variant="default"` cancel. Cancel left, action right (100% compliance). Use `text-[var(--text-error)]` for irreversible warnings.
## Toast
`toast.success()`, `toast.error()`, `toast()` from `@/components/emcn`. Never custom notification UI.
## Badges
`red`=error/failed, `gray-secondary`=metadata/roles, `type`=type annotations, `green`=success/active, `gray`=neutral, `amber`=processing, `orange`=paused, `blue`=info. Use `dot` prop for status indicators.
## Icons
Default: `h-[14px] w-[14px]` (400+ uses). Color: `text-[var(--text-icon)]`. Scale: 14px > 16px > 12px > 20px.
## Anti-patterns to flag
- Raw `<button>`/`<input>` instead of emcn components
- Hardcoded colors (`text-gray-*`, `#hex`, `rgb()`)
- Tailwind semantics (`text-muted-foreground`) instead of CSS variables
- Template literal className instead of `cn()`
- Inline styles for colors/static values (dynamic values OK)
- Importing from emcn subpaths instead of barrel
- Arbitrary z-index instead of tokens
- Wrong button variant for action type

View File

@@ -0,0 +1,54 @@
---
description: Audit React Query usage for best practices — key factories, staleTime, mutations, and server state ownership
argument-hint: [scope] [fix=true|false]
---
# React Query Best Practices
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/hooks/queries/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses React Query (TanStack Query) as the single source of truth for all server state. All query hooks live in `hooks/queries/`. Zustand is used only for client-only UI state. Server data must never be duplicated into useState or Zustand outside of mutation callbacks that coordinate cross-store state.
## References
Read these before analyzing:
1. https://tkdodo.eu/blog/practical-react-query — foundational defaults, custom hooks, avoiding local state copies
2. https://tkdodo.eu/blog/effective-react-query-keys — key factory pattern, hierarchical keys, fuzzy invalidation
3. https://tkdodo.eu/blog/react-query-as-a-state-manager — React Query IS your server state manager
## Rules to enforce
### Query key factories
- Every file in `hooks/queries/` must have a hierarchical key factory with an `all` root key
- Keys must include intermediate plural keys (`lists`, `details`) for prefix invalidation
- Key factories are colocated with their query hooks, not in a global keys file
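A minimal factory satisfying these rules might look like the following sketch; the `workflowKeys` name and filter shape are illustrative:
```typescript
// Hypothetical key factory, colocated with its query hooks in hooks/queries/
export const workflowKeys = {
  all: ['workflows'] as const,
  lists: () => [...workflowKeys.all, 'list'] as const,
  list: (filters: { folderId?: string }) => [...workflowKeys.lists(), filters] as const,
  details: () => [...workflowKeys.all, 'detail'] as const,
  detail: (id: string) => [...workflowKeys.details(), id] as const,
}
// Prefix invalidation: invalidateQueries({ queryKey: workflowKeys.lists() })
// hits every list variant without touching detail queries.
```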
### Query hooks
- Every `queryFn` must forward `signal` for request cancellation
- Every query must have an explicit `staleTime` (default 0 is almost never correct)
- `keepPreviousData` / `placeholderData` only on variable-key queries (where params change), never on static keys
- Use `enabled` to prevent queries from running without required params
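Put together, a hook following these rules might look like this sketch, reusing the hypothetical `workflowKeys` factory above; the endpoint and `staleTime` value are placeholders:
```typescript
import { useQuery } from '@tanstack/react-query'

// Hypothetical detail hook: forwards signal, explicit staleTime, gated on id
export function useWorkflow(id: string | undefined) {
  return useQuery({
    queryKey: workflowKeys.detail(id ?? ''),
    queryFn: async ({ signal }) => {
      const res = await fetch(`/api/workflows/${id}`, { signal })
      if (!res.ok) throw new Error(`Failed to fetch workflow ${id}`)
      return res.json()
    },
    staleTime: 30_000, // explicit: the default of 0 marks data stale immediately
    enabled: Boolean(id), // never fire without the required param
  })
}
```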
### Mutations
- Use `onSettled` (not `onSuccess`) for cache reconciliation — it fires on both success and error
- For optimistic updates: save previous data in `onMutate`, roll back in `onError`
- Use targeted invalidation (`entityKeys.lists()`) not broad (`entityKeys.all`) when possible
- Don't include mutation objects in `useCallback` deps — `.mutate()` is stable
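A sketch of the optimistic-update shape these rules describe; the mutation endpoint and data shape are illustrative:
```typescript
import { useMutation, useQueryClient } from '@tanstack/react-query'

// Hypothetical rename mutation: snapshot in onMutate, roll back in onError,
// reconcile in onSettled with targeted invalidation.
export function useRenameWorkflow() {
  const queryClient = useQueryClient()
  return useMutation({
    mutationFn: ({ id, name }: { id: string; name: string }) =>
      fetch(`/api/workflows/${id}`, { method: 'PATCH', body: JSON.stringify({ name }) }),
    onMutate: async ({ id, name }) => {
      await queryClient.cancelQueries({ queryKey: workflowKeys.detail(id) })
      const previous = queryClient.getQueryData(workflowKeys.detail(id))
      queryClient.setQueryData(workflowKeys.detail(id), (old: { name: string } | undefined) =>
        old ? { ...old, name } : old
      )
      return { previous }
    },
    onError: (_err, { id }, context) => {
      queryClient.setQueryData(workflowKeys.detail(id), context?.previous)
    },
    onSettled: (_data, _err, { id }) => {
      queryClient.invalidateQueries({ queryKey: workflowKeys.detail(id) })
    },
  })
}
```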
### Server state ownership
- Never copy query data into useState. Use query data directly in components.
- Never copy query data into Zustand stores (exception: mutation callbacks that coordinate cross-store state like temp ID replacement)
- The query cache is not a local state manager — `setQueryData` is for optimistic updates only
- Forms are the one deliberate exception: copy server data into local form state, and give the backing query `staleTime: Infinity` so background refetches don't clobber in-progress edits
## Steps
1. Read the references above to understand the guidelines
2. Analyze the specified scope against the rules listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.

View File

@@ -0,0 +1,35 @@
---
description: Analyze and fix useCallback anti-patterns in your code
argument-hint: [scope] [fix=true|false]
---
# You Might Not Need a Callback
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## References
Read before analyzing:
1. https://react.dev/reference/react/useCallback — official docs on when useCallback is actually needed
## Anti-patterns to detect
1. **useCallback on functions not passed as props or deps**: No benefit if only called within the same component.
2. **useCallback with deps that change every render**: Memoization is wasted.
3. **useCallback on handlers passed to native elements**: `<button onClick={fn}>` doesn't benefit from stable references.
4. **useCallback wrapping functions that return new objects/arrays**: Memoization at the wrong level.
5. **useCallback with empty deps when deps are needed**: Stale closures.
6. **Pairing useCallback + React.memo unnecessarily**: Only optimize when you've measured a problem.
7. **useCallback in hooks that don't need stable references**: Not every hook return needs memoization.
Note: This codebase uses a ref pattern for stable callbacks (`useRef` + empty deps). That pattern is correct — don't flag it.
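For reference, a sketch of that ref pattern; the hook name is illustrative:
```typescript
import { useCallback, useRef } from 'react'

// Hypothetical stable-callback hook: the ref is kept current on every render,
// while the returned function identity never changes (empty deps), so callers
// get a stable reference without stale closures.
function useStableCallback<A extends unknown[], R>(fn: (...args: A) => R) {
  const fnRef = useRef(fn)
  fnRef.current = fn
  return useCallback((...args: A) => fnRef.current(...args), [])
}
```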
## Steps
1. Read the reference above
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.

View File

@@ -0,0 +1,33 @@
---
description: Analyze and fix useMemo/React.memo anti-patterns in your code
argument-hint: [scope] [fix=true|false]
---
# You Might Not Need a Memo
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## References
Read before analyzing:
1. https://overreacted.io/before-you-memo/ — two techniques to avoid memo entirely
## Anti-patterns to detect
1. **State can be moved down instead of memoizing**: Move state into a smaller child so the slow component stops re-rendering without memo.
2. **Children can be lifted up**: Extract stateful part, pass expensive subtree as `children` — children as props don't re-render when parent state changes (see the sketch after this list).
3. **useMemo on cheap computations**: Small array filters, string concat, arithmetic don't need memoization.
4. **useMemo with constantly-changing deps**: Deps change every render = useMemo does nothing.
5. **useMemo to stabilize props for non-memoized children**: If the child isn't wrapped in React.memo, stable references don't matter.
6. **React.memo on components that always receive new props**: Fix the parent instead.
7. **useMemo for derived state**: Just compute inline during render.
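A sketch of pattern 2, lifting the stateful part out and passing the expensive subtree as `children`; component names are illustrative:
```tsx
import { useState, type ReactNode } from 'react'

// The parent creates <ExpensiveTree /> once and passes it in as children.
// Its element identity is stable, so it skips re-rendering when `color`
// changes inside ColorPicker. No memo needed.
function ColorPicker({ children }: { children: ReactNode }) {
  const [color, setColor] = useState('#33c482')
  return (
    <div style={{ color }}>
      <input value={color} onChange={(e) => setColor(e.target.value)} />
      {children}
    </div>
  )
}

// Usage: <ColorPicker><ExpensiveTree /></ColorPicker>
```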
## Steps
1. Read the reference above
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.

View File

@@ -0,0 +1,38 @@
---
description: Analyze and fix unnecessary useState, derived state, and server-state-in-local-state anti-patterns
argument-hint: [scope] [fix=true|false]
---
# You Might Not Need State
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses React Query for all server state and Zustand for client-only global state. useState should only be used for ephemeral UI concerns (open/closed, hover, local form input). Server data should never be copied into useState or Zustand — React Query is the single source of truth.
## References
Read these before analyzing:
1. https://react.dev/learn/choosing-the-state-structure — 5 principles for structuring state
2. https://tkdodo.eu/blog/dont-over-use-state — never store derived/computed values in state
3. https://tkdodo.eu/blog/putting-props-to-use-state — never mirror props into state via useEffect
## Anti-patterns to detect
1. **Derived state stored in useState**: If a value can be computed from props, other state, or query data, compute it inline during render instead of storing it in state.
2. **Server state copied into useState**: Never `useState` + `useEffect` to sync React Query data into local state. Use query data directly. The only exception is forms where users edit server data.
3. **Props mirrored into state**: Never `useState(prop)` + `useEffect(() => setState(prop))`. Use the prop directly, or use a key to reset component state.
4. **Chained useEffect state updates**: Never chain Effects that set state to trigger other Effects. Calculate all derived values in the event handler or inline during render.
5. **Storing objects when an ID suffices**: Store `selectedId` not a copy of the selected object. Derive the object: `items.find(i => i.id === selectedId)` (sketched below).
6. **State that duplicates Zustand or React Query**: If the data already lives in a store or query cache, don't create a parallel useState.
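A sketch of patterns 1 and 5 combined; the types and data source are illustrative:
```tsx
import { useState } from 'react'

type Item = { id: string; name: string }

function ItemPicker({ items }: { items: Item[] }) {
  // Store only the ID; `items` stays the single source of truth
  const [selectedId, setSelectedId] = useState<string | null>(null)
  // Derive the object during render, so there is no copy to keep in sync
  const selected = items.find((i) => i.id === selectedId)
  return (
    <div>
      <p>{selected ? selected.name : 'Nothing selected'}</p>
      <ul>
        {items.map((i) => (
          <li key={i.id} onClick={() => setSelectedId(i.id)}>
            {i.name}
          </li>
        ))}
      </ul>
    </div>
  )
}
```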
## Steps
1. Read the references above to understand the guidelines
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.

View File

@@ -0,0 +1,71 @@
# Sim — Language & Positioning
When editing user-facing copy (landing pages, docs, metadata, marketing), follow these rules.
## Identity
Sim is the **AI workspace** where teams build and run AI agents. Not a workflow tool, not an agent framework, not an automation platform.
**Short definition:** Sim is the open-source AI workspace where teams build, deploy, and manage AI agents.
**Full definition:** Sim is the open-source AI workspace where teams build, deploy, and manage AI agents. Connect 1,000+ integrations and every major LLM to create agents that automate real work — visually, conversationally, or with code.
## Audience
**Primary:** Teams building AI agents for their organization — IT, operations, and technical teams who need governance, security, lifecycle management, and collaboration.
**Secondary:** Individual builders and developers who care about speed, flexibility, and open source.
## Required Language
| Concept | Use | Never use |
|---------|-----|-----------|
| The product | "AI workspace" | "workflow tool", "automation platform", "agent framework" |
| Building | "build agents", "create agents" | "create workflows" (unless describing the workflow module specifically) |
| Visual builder | "workflow builder" or "visual builder" | "canvas", "graph editor" |
| Mothership | "Mothership" (capitalized) | "chat", "AI assistant", "copilot" |
| Deployment | "deploy", "ship" | "publish", "activate" |
| Audience | "teams", "builders" | "users", "customers" (in marketing copy) |
| What agents do | "automate real work" | "automate tasks", "automate workflows" |
| Our advantage | "open-source AI workspace" | "open-source platform" |
## Tone
- **Direct.** Short sentences. Active voice. Lead with what it does.
- **Concrete.** Name specific things — "Slack bots, compliance agents, data pipelines" — not abstractions.
- **Confident, not loud.** No exclamation marks or superlatives.
- **Simple.** If a 16-year-old can't understand the sentence, rewrite it.
## Claim Hierarchy
When describing Sim, always lead with the most differentiated claim:
1. **What it is:** "The AI workspace for teams"
2. **What you do:** "Build, deploy, and manage AI agents"
3. **How:** "Visually, conversationally, or with code"
4. **Scale:** "1,000+ integrations, every major LLM"
5. **Trust:** "Open source. SOC 2. Trusted by 100,000+ builders."
## Module Descriptions
| Module | One-liner |
|--------|-----------|
| **Mothership** | Your AI command center. Build and manage everything in natural language. |
| **Workflows** | The visual builder. Connect blocks, models, and integrations into agent logic. |
| **Knowledge Base** | Your agents' memory. Upload docs, sync sources, build vector databases. |
| **Tables** | A database, built in. Store, query, and wire structured data into agent runs. |
| **Files** | Upload, create, and share. One store for your team and every agent. |
| **Logs** | Full visibility, every run. Trace execution block by block. |
## What We Never Say
- Never call Sim "just a workflow tool"
- Never compare only on integration count — we win on AI-native capabilities
- Never use "no-code" as the primary descriptor — say "visually, conversationally, or with code"
- Never promise unshipped features
- Never use jargon ("RAG", "vector database", "MCP") without plain-English explanation on public pages
- Avoid "agentic workforce" as a primary term — use "AI agents"
## Vision
Sim becomes the default environment where teams build AI agents — not a tool you visit for one task, but a workspace you live in. Workflows are one module; Mothership is another. The workspace is the constant; the interface adapts.

View File

@@ -1,12 +1,12 @@
# Add Trigger
You are an expert at creating webhook triggers for Sim. You understand the trigger system, the generic `buildTriggerSubBlocks` helper, and how triggers connect to blocks.
You are an expert at creating webhook and polling triggers for Sim. You understand the trigger system, the generic `buildTriggerSubBlocks` helper, polling infrastructure, and how triggers connect to blocks.
## Your Task
1. Research what webhook events the service supports
2. Create the trigger files using the generic builder
3. Create a provider handler if custom auth, formatting, or subscriptions are needed
1. Research what webhook events the service supports — if the service lacks reliable webhooks, use polling
2. Create the trigger files using the generic builder (webhook) or manual config (polling)
3. Create a provider handler (webhook) or polling handler (polling)
4. Register triggers and connect them to the block
## Directory Structure
@@ -141,23 +141,37 @@ export const TRIGGER_REGISTRY: TriggerRegistry = {
### Block file (`apps/sim/blocks/blocks/{service}.ts`)
Wire triggers into the block so the trigger UI appears and `generate-docs.ts` discovers them. Two changes are needed:
1. **Spread trigger subBlocks** at the end of the block's `subBlocks` array
2. **Add `triggers` property** after `outputs` with `enabled: true` and `available: [...]`
```typescript
import { getTrigger } from '@/triggers'
export const {Service}Block: BlockConfig = {
// ...
triggers: {
enabled: true,
available: ['{service}_event_a', '{service}_event_b'],
},
subBlocks: [
// Regular tool subBlocks first...
...getTrigger('{service}_event_a').subBlocks,
...getTrigger('{service}_event_b').subBlocks,
],
// ... tools, inputs, outputs ...
triggers: {
enabled: true,
available: ['{service}_event_a', '{service}_event_b'],
},
}
```
**Versioned blocks (V1 + V2):** Many integrations have a hidden V1 block and a visible V2 block. Where you add the trigger wiring depends on how V2 inherits from V1:
- **V2 uses `...V1Block` spread** (e.g., Google Calendar): Add trigger to V1 — V2 inherits both `subBlocks` and `triggers` automatically.
- **V2 defines its own `subBlocks`** (e.g., Google Sheets): Add trigger to V2 (the visible block). V1 is hidden and doesn't need it.
- **Single block, no V2** (e.g., Google Drive): Add trigger directly.
`generate-docs.ts` deduplicates by base type (first match wins). If V1 is processed first without triggers, the V2 triggers won't appear in `integrations.json`. Always verify by checking the output after running the script.
## Provider Handler
All provider-specific webhook logic lives in a single handler file: `apps/sim/lib/webhooks/providers/{service}.ts`.
@@ -322,6 +336,121 @@ export function buildOutputs(): Record<string, TriggerOutput> {
}
```
## Polling Triggers
Use polling when the service lacks reliable webhooks (e.g., Google Sheets, Google Drive, Google Calendar, Gmail, RSS, IMAP). Polling triggers do NOT use `buildTriggerSubBlocks` — they define subBlocks manually.
### Directory Structure
```
apps/sim/triggers/{service}/
├── index.ts # Barrel export
└── poller.ts # TriggerConfig with polling: true
apps/sim/lib/webhooks/polling/
└── {service}.ts # PollingProviderHandler implementation
```
### Polling Handler (`apps/sim/lib/webhooks/polling/{service}.ts`)
```typescript
import { pollingIdempotency } from '@/lib/core/idempotency/service'
import type { PollingProviderHandler, PollWebhookContext } from '@/lib/webhooks/polling/types'
import { markWebhookFailed, markWebhookSuccess, resolveOAuthCredential, updateWebhookProviderConfig } from '@/lib/webhooks/polling/utils'
import { processPolledWebhookEvent } from '@/lib/webhooks/processor'
export const {service}PollingHandler: PollingProviderHandler = {
provider: '{service}',
label: '{Service}',
async pollWebhook(ctx: PollWebhookContext): Promise<'success' | 'failure'> {
const { webhookData, workflowData, requestId, logger } = ctx
const webhookId = webhookData.id
try {
// For OAuth services:
const accessToken = await resolveOAuthCredential(webhookData, '{service}', requestId, logger)
const config = webhookData.providerConfig as unknown as {Service}WebhookConfig
// First poll: seed state, emit nothing
if (!config.lastCheckedTimestamp) {
await updateWebhookProviderConfig(webhookId, { lastCheckedTimestamp: new Date().toISOString() }, logger)
await markWebhookSuccess(webhookId, logger)
return 'success'
}
// Fetch changes since last poll, process with idempotency
// ...
await markWebhookSuccess(webhookId, logger)
return 'success'
} catch (error) {
logger.error(`[${requestId}] Error processing {service} webhook ${webhookId}:`, error)
await markWebhookFailed(webhookId, logger)
return 'failure'
}
},
}
```
**Key patterns:**
- First poll seeds state and emits nothing (avoids flooding with existing data)
- Use `pollingIdempotency.executeWithIdempotency(provider, key, callback)` for dedup
- Use `processPolledWebhookEvent(webhookData, workflowData, payload, requestId)` to fire the workflow
- Use `updateWebhookProviderConfig(webhookId, partialConfig, logger)` for read-merge-write on state
- Use the latest server-side timestamp from API responses (not wall clock) to avoid clock skew
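A hedged sketch of the elided "fetch changes" section, built only from the helper signatures listed above; `fetchChangesSince` and the `change` fields are placeholders for the service's API:
```typescript
// Inside pollWebhook, after the first-poll guard:
const changes = await fetchChangesSince(accessToken, config.lastCheckedTimestamp)
let latest = config.lastCheckedTimestamp
for (const change of changes) {
  // One idempotency key per change per webhook, so re-polls never double-fire
  await pollingIdempotency.executeWithIdempotency(
    '{service}',
    `${webhookId}:${change.id}`,
    () => processPolledWebhookEvent(webhookData, workflowData, change, requestId)
  )
  if (change.updatedAt > latest) latest = change.updatedAt
}
// Advance state using the server-side timestamp, not new Date(), to avoid clock skew
await updateWebhookProviderConfig(webhookId, { lastCheckedTimestamp: latest }, logger)
```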
### Trigger Config (`apps/sim/triggers/{service}/poller.ts`)
```typescript
import { {Service}Icon } from '@/components/icons'
import type { TriggerConfig } from '@/triggers/types'
export const {service}PollingTrigger: TriggerConfig = {
id: '{service}_poller',
name: '{Service} Trigger',
provider: '{service}',
description: 'Triggers when ...',
version: '1.0.0',
icon: {Service}Icon,
polling: true, // REQUIRED — routes to polling infrastructure
subBlocks: [
{ id: 'triggerCredentials', type: 'oauth-input', title: 'Credentials', serviceId: '{service}', requiredScopes: [], required: true, mode: 'trigger', supportsCredentialSets: true },
// ... service-specific config fields (dropdowns, inputs, switches) ...
{ id: 'triggerInstructions', type: 'text', title: 'Setup Instructions', hideFromPreview: true, mode: 'trigger', defaultValue: '...' },
],
outputs: {
// Must match the payload shape from processPolledWebhookEvent
},
}
```
### Registration (3 places)
1. **`apps/sim/triggers/constants.ts`** — add provider to `POLLING_PROVIDERS` Set
2. **`apps/sim/lib/webhooks/polling/registry.ts`** — import handler, add to `POLLING_HANDLERS`
3. **`apps/sim/triggers/registry.ts`** — import trigger config, add to `TRIGGER_REGISTRY`
### Helm Cron Job
Add to `helm/sim/values.yaml` under the existing polling cron jobs:
```yaml
{service}WebhookPoll:
schedule: "*/1 * * * *"
concurrencyPolicy: Forbid
url: "http://sim:3000/api/webhooks/poll/{service}"
```
### Reference Implementations
- Simple: `apps/sim/lib/webhooks/polling/rss.ts` + `apps/sim/triggers/rss/poller.ts`
- Complex (OAuth, attachments): `apps/sim/lib/webhooks/polling/gmail.ts` + `apps/sim/triggers/gmail/poller.ts`
- Cursor-based (changes API): `apps/sim/lib/webhooks/polling/google-drive.ts`
- Timestamp-based: `apps/sim/lib/webhooks/polling/google-calendar.ts`
## Checklist
### Trigger Definition
@@ -347,7 +476,17 @@ export function buildOutputs(): Record<string, TriggerOutput> {
- [ ] NO changes to `route.ts`, `provider-subscriptions.ts`, or `deploy.ts`
- [ ] API key field uses `password: true`
### Polling Trigger (if applicable)
- [ ] Handler implements `PollingProviderHandler` at `lib/webhooks/polling/{service}.ts`
- [ ] Trigger config has `polling: true` and defines subBlocks manually (no `buildTriggerSubBlocks`)
- [ ] Provider string matches across: trigger config, handler, `POLLING_PROVIDERS`, polling registry
- [ ] First poll seeds state and emits nothing
- [ ] Added provider to `POLLING_PROVIDERS` in `triggers/constants.ts`
- [ ] Added handler to `POLLING_HANDLERS` in `lib/webhooks/polling/registry.ts`
- [ ] Added cron job to `helm/sim/values.yaml`
- [ ] Payload shape matches trigger `outputs` schema
### Testing
- [ ] `bun run type-check` passes
- [ ] Manually verify `formatInput` output keys match trigger `outputs` keys
- [ ] Manually verify output keys match trigger `outputs` keys
- [ ] Trigger UI shows correctly in the block

View File

@@ -0,0 +1,20 @@
# Cleanup
Arguments:
- scope: what to review (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Steps
Run each of these skills in order on the specified scope, passing through the scope and fix arguments. After each skill completes, move to the next. Do not skip any.
1. `/you-might-not-need-an-effect $ARGUMENTS`
2. `/you-might-not-need-a-memo $ARGUMENTS`
3. `/you-might-not-need-a-callback $ARGUMENTS`
4. `/you-might-not-need-state $ARGUMENTS`
5. `/react-query-best-practices $ARGUMENTS`
6. `/emcn-design-review $ARGUMENTS`
After all skills have run, output a summary of what was found and fixed (or proposed) across all six passes.

View File

@@ -0,0 +1,74 @@
# EMCN Design Review
Arguments:
- scope: what to review (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses **emcn**, a custom component library built on Radix UI primitives with CVA variants and CSS variable design tokens. All UI must use emcn components and tokens.
## Steps
1. Read the emcn barrel export at `apps/sim/components/emcn/components/index.ts` to know what's available
2. Read `apps/sim/app/_styles/globals.css` for CSS variable tokens
3. Analyze the specified scope against every rule below
4. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.
---
## Imports
- Import from `@/components/emcn` barrel, never subpaths
- Icons from `@/components/emcn/icons` or `lucide-react`
- Use `cn` from `@/lib/core/utils/cn` for conditional classes
## Design Tokens
Use the CSS variable pattern (`text-[var(--text-primary)]`), never Tailwind semantic classes (`text-muted-foreground`) or hardcoded colors (`text-gray-500`, `#333`).
**Text**: `--text-primary`, `--text-secondary`, `--text-tertiary`, `--text-muted`, `--text-icon`, `--text-inverse`, `--text-error`
**Surfaces**: `--bg`, `--surface-2` through `--surface-7`, `--surface-hover`, `--surface-active`
**Borders**: `--border`, `--border-1`, `--border-muted`
**Z-Index**: `--z-dropdown` (100), `--z-modal` (200), `--z-popover` (300), `--z-tooltip` (400), `--z-toast` (500)
**Shadows**: `shadow-subtle`, `shadow-medium`, `shadow-overlay`, `shadow-card`
## Buttons
| Action | Variant |
|--------|---------|
| Toolbar, icon-only | `ghost` (most common, 28% of button usages) |
| Create, save, submit | `primary` (24%) |
| Cancel, close | `default` |
| Delete, remove | `destructive` |
| Selected state | `active` |
| Toggle | `outline` |
## Delete/Remove Confirmations
Modal `size="sm"`, title "Delete/Remove {ItemType}", `variant="destructive"` action button, `variant="default"` cancel. Cancel left, action right (100% compliance). Use `text-[var(--text-error)]` for irreversible warnings.
## Toast
`toast.success()`, `toast.error()`, `toast()` from `@/components/emcn`. Never custom notification UI.
## Badges
`red`=error/failed, `gray-secondary`=metadata/roles, `type`=type annotations, `green`=success/active, `gray`=neutral, `amber`=processing, `orange`=paused, `blue`=info. Use `dot` prop for status indicators.
## Icons
Default: `h-[14px] w-[14px]` (400+ uses). Color: `text-[var(--text-icon)]`. Size scale (most-used first): 14px > 16px > 12px > 20px.
## Anti-patterns to flag
- Raw `<button>`/`<input>` instead of emcn components
- Hardcoded colors (`text-gray-*`, `#hex`, `rgb()`)
- Tailwind semantic classes (`text-muted-foreground`) instead of CSS variable tokens
- Template literal className instead of `cn()`
- Inline styles for colors/static values (dynamic values OK)
- Importing from emcn subpaths instead of barrel
- Arbitrary z-index instead of tokens
- Wrong button variant for action type

View File

@@ -0,0 +1,49 @@
# React Query Best Practices
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/hooks/queries/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses React Query (TanStack Query) as the single source of truth for all server state. All query hooks live in `hooks/queries/`. Zustand is used only for client-only UI state. Server data must never be duplicated into useState or Zustand outside of mutation callbacks that coordinate cross-store state.
## References
Read these before analyzing:
1. https://tkdodo.eu/blog/practical-react-query — foundational defaults, custom hooks, avoiding local state copies
2. https://tkdodo.eu/blog/effective-react-query-keys — key factory pattern, hierarchical keys, fuzzy invalidation
3. https://tkdodo.eu/blog/react-query-as-a-state-manager — React Query IS your server state manager
## Rules to enforce
### Query key factories
- Every file in `hooks/queries/` must have a hierarchical key factory with an `all` root key
- Keys must include intermediate plural keys (`lists`, `details`) for prefix invalidation
- Key factories are colocated with their query hooks, not in a global keys file
### Query hooks
- Every `queryFn` must forward `signal` for request cancellation
- Every query must have an explicit `staleTime` (default 0 is almost never correct)
- `keepPreviousData` / `placeholderData` only on variable-key queries (where params change), never on static keys
- Use `enabled` to prevent queries from running without required params
### Mutations
- Use `onSettled` (not `onSuccess`) for cache reconciliation — it fires on both success and error
- For optimistic updates: save previous data in `onMutate`, roll back in `onError`
- Use targeted invalidation (`entityKeys.lists()`) not broad (`entityKeys.all`) when possible
- Don't include mutation objects in `useCallback` deps — `.mutate()` is stable
### Server state ownership
- Never copy query data into useState. Use query data directly in components.
- Never copy query data into Zustand stores (exception: mutation callbacks that coordinate cross-store state like temp ID replacement)
- The query cache is not a local state manager — `setQueryData` is for optimistic updates only
- Forms are the one deliberate exception: copy server data into local form state, and give the backing query `staleTime: Infinity` so background refetches don't clobber in-progress edits
## Steps
1. Read the references above to understand the guidelines
2. Analyze the specified scope against the rules listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.

View File

@@ -0,0 +1,30 @@
# You Might Not Need a Callback
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## References
Read before analyzing:
1. https://react.dev/reference/react/useCallback — official docs on when useCallback is actually needed
## Anti-patterns to detect
1. **useCallback on functions not passed as props or deps**: No benefit if only called within the same component.
2. **useCallback with deps that change every render**: Memoization is wasted.
3. **useCallback on handlers passed to native elements**: `<button onClick={fn}>` doesn't benefit from stable references.
4. **useCallback wrapping functions that return new objects/arrays**: Memoization at the wrong level.
5. **useCallback with empty deps when deps are needed**: Stale closures.
6. **Pairing useCallback + React.memo unnecessarily**: Only optimize when you've measured a problem.
7. **useCallback in hooks that don't need stable references**: Not every hook return needs memoization.
Note: This codebase uses a ref pattern for stable callbacks (`useRef` + empty deps). That pattern is correct — don't flag it.
## Steps
1. Read the reference above
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.

View File

@@ -0,0 +1,28 @@
# You Might Not Need a Memo
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## References
Read before analyzing:
1. https://overreacted.io/before-you-memo/ — two techniques to avoid memo entirely
## Anti-patterns to detect
1. **State can be moved down instead of memoizing**: Move state into a smaller child so the slow component stops re-rendering without memo.
2. **Children can be lifted up**: Extract stateful part, pass expensive subtree as `children` — children as props don't re-render when parent state changes.
3. **useMemo on cheap computations**: Small array filters, string concat, arithmetic don't need memoization.
4. **useMemo with constantly-changing deps**: Deps change every render = useMemo does nothing.
5. **useMemo to stabilize props for non-memoized children**: If the child isn't wrapped in React.memo, stable references don't matter.
6. **React.memo on components that always receive new props**: Fix the parent instead.
7. **useMemo for derived state**: Just compute inline during render.
## Steps
1. Read the reference above
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.

View File

@@ -0,0 +1,33 @@
# You Might Not Need State
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses React Query for all server state and Zustand for client-only global state. useState should only be used for ephemeral UI concerns (open/closed, hover, local form input). Server data should never be copied into useState or Zustand — React Query is the single source of truth.
## References
Read these before analyzing:
1. https://react.dev/learn/choosing-the-state-structure — 5 principles for structuring state
2. https://tkdodo.eu/blog/dont-over-use-state — never store derived/computed values in state
3. https://tkdodo.eu/blog/putting-props-to-use-state — never mirror props into state via useEffect
## Anti-patterns to detect
1. **Derived state stored in useState**: If a value can be computed from props, other state, or query data, compute it inline during render instead of storing it in state.
2. **Server state copied into useState**: Never `useState` + `useEffect` to sync React Query data into local state. Use query data directly. The only exception is forms where users edit server data.
3. **Props mirrored into state**: Never `useState(prop)` + `useEffect(() => setState(prop))`. Use the prop directly, or use a key to reset component state.
4. **Chained useEffect state updates**: Never chain Effects that set state to trigger other Effects. Calculate all derived values in the event handler or inline during render.
5. **Storing objects when an ID suffices**: Store `selectedId` not a copy of the selected object. Derive the object: `items.find(i => i.id === selectedId)`.
6. **State that duplicates Zustand or React Query**: If the data already lives in a store or query cache, don't create a parallel useState.
## Steps
1. Read the references above to understand the guidelines
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.

View File

@@ -0,0 +1,76 @@
---
description: Sim product language, positioning, and tone guidelines
globs: ["apps/sim/app/(landing)/**", "apps/sim/app/(home)/**", "apps/docs/**", "apps/sim/app/manifest.ts", "apps/sim/app/sitemap.ts", "apps/sim/app/robots.ts", "apps/sim/app/llms.txt/**", "apps/sim/app/llms-full.txt/**", "apps/sim/app/(landing)/**/structured-data*", "apps/docs/**/structured-data*", "**/metadata*", "**/seo*"]
---
# Sim — Language & Positioning
When editing user-facing copy (landing pages, docs, metadata, marketing), follow these rules.
## Identity
Sim is the **AI workspace** where teams build and run AI agents. Not a workflow tool, not an agent framework, not an automation platform.
**Short definition:** Sim is the open-source AI workspace where teams build, deploy, and manage AI agents.
**Full definition:** Sim is the open-source AI workspace where teams build, deploy, and manage AI agents. Connect 1,000+ integrations and every major LLM to create agents that automate real work — visually, conversationally, or with code.
## Audience
**Primary:** Teams building AI agents for their organization — IT, operations, and technical teams who need governance, security, lifecycle management, and collaboration.
**Secondary:** Individual builders and developers who care about speed, flexibility, and open source.
## Required Language
| Concept | Use | Never use |
|---------|-----|-----------|
| The product | "AI workspace" | "workflow tool", "automation platform", "agent framework" |
| Building | "build agents", "create agents" | "create workflows" (unless describing the workflow module specifically) |
| Visual builder | "workflow builder" or "visual builder" | "canvas", "graph editor" |
| Mothership | "Mothership" (capitalized) | "chat", "AI assistant", "copilot" |
| Deployment | "deploy", "ship" | "publish", "activate" |
| Audience | "teams", "builders" | "users", "customers" (in marketing copy) |
| What agents do | "automate real work" | "automate tasks", "automate workflows" |
| Our advantage | "open-source AI workspace" | "open-source platform" |
## Tone
- **Direct.** Short sentences. Active voice. Lead with what it does.
- **Concrete.** Name specific things — "Slack bots, compliance agents, data pipelines" — not abstractions.
- **Confident, not loud.** No exclamation marks or superlatives.
- **Simple.** If a 16-year-old can't understand the sentence, rewrite it.
## Claim Hierarchy
When describing Sim, always lead with the most differentiated claim:
1. **What it is:** "The AI workspace for teams"
2. **What you do:** "Build, deploy, and manage AI agents"
3. **How:** "Visually, conversationally, or with code"
4. **Scale:** "1,000+ integrations, every major LLM"
5. **Trust:** "Open source. SOC 2. Trusted by 100,000+ builders."
## Module Descriptions
| Module | One-liner |
|--------|-----------|
| **Mothership** | Your AI command center. Build and manage everything in natural language. |
| **Workflows** | The visual builder. Connect blocks, models, and integrations into agent logic. |
| **Knowledge Base** | Your agents' memory. Upload docs, sync sources, build vector databases. |
| **Tables** | A database, built in. Store, query, and wire structured data into agent runs. |
| **Files** | Upload, create, and share. One store for your team and every agent. |
| **Logs** | Full visibility, every run. Trace execution block by block. |
## What We Never Say
- Never call Sim "just a workflow tool"
- Never compare only on integration count — we win on AI-native capabilities
- Never use "no-code" as the primary descriptor — say "visually, conversationally, or with code"
- Never promise unshipped features
- Never use jargon ("RAG", "vector database", "MCP") without plain-English explanation on public pages
- Avoid "agentic workforce" as a primary term — use "AI agents"
## Vision
Sim becomes the default environment where teams build AI agents — not a tool you visit for one task, but a workspace you live in. Workflows are one module; Mothership is another. The workspace is the constant; the interface adapts.

.github/CODEOWNERS
View File

@@ -0,0 +1,28 @@
# Copilot/Mothership chat streaming entrypoints and replay surfaces.
/apps/sim/app/api/copilot/chat/ @simstudioai/mothership
/apps/sim/app/api/copilot/confirm/ @simstudioai/mothership
/apps/sim/app/api/copilot/chats/ @simstudioai/mothership
/apps/sim/app/api/mothership/chat/ @simstudioai/mothership
/apps/sim/app/api/mothership/chats/ @simstudioai/mothership
/apps/sim/app/api/mothership/execute/ @simstudioai/mothership
/apps/sim/app/api/v1/copilot/chat/ @simstudioai/mothership
# Server-side stream orchestration, persistence, and protocol.
/apps/sim/lib/copilot/chat/ @simstudioai/mothership
/apps/sim/lib/copilot/async-runs/ @simstudioai/mothership
/apps/sim/lib/copilot/request/ @simstudioai/mothership
/apps/sim/lib/copilot/generated/ @simstudioai/mothership
/apps/sim/lib/copilot/constants.ts @simstudioai/mothership
/apps/sim/lib/core/utils/sse.ts @simstudioai/mothership
# Stream-time tool execution, confirmations, resource persistence, and handlers.
/apps/sim/lib/copilot/tool-executor/ @simstudioai/mothership
/apps/sim/lib/copilot/tools/ @simstudioai/mothership
/apps/sim/lib/copilot/persistence/ @simstudioai/mothership
/apps/sim/lib/copilot/resources/ @simstudioai/mothership
# Client-side stream consumption, hydration, and reconnect.
/apps/sim/app/workspace/*/home/hooks/index.ts @simstudioai/mothership
/apps/sim/app/workspace/*/home/hooks/use-chat.ts @simstudioai/mothership
/apps/sim/app/workspace/*/home/hooks/use-file-preview-sessions.ts @simstudioai/mothership
/apps/sim/hooks/queries/tasks.ts @simstudioai/mothership

View File

@@ -16,6 +16,7 @@ permissions:
jobs:
test-build:
name: Test and Build
if: github.ref != 'refs/heads/dev' || github.event_name == 'pull_request'
uses: ./.github/workflows/test-build.yml
secrets: inherit
@@ -45,11 +46,72 @@ jobs:
echo " Not a release commit"
fi
# Build AMD64 images and push to ECR immediately (+ GHCR for main)
# Dev: build all 3 images for ECR only (no GHCR, no ARM64)
build-dev:
name: Build Dev ECR
needs: [detect-version]
if: github.event_name == 'push' && github.ref == 'refs/heads/dev'
runs-on: blacksmith-8vcpu-ubuntu-2404
permissions:
contents: read
id-token: write
strategy:
fail-fast: false
matrix:
include:
- dockerfile: ./docker/app.Dockerfile
ecr_repo_secret: ECR_APP
- dockerfile: ./docker/db.Dockerfile
ecr_repo_secret: ECR_MIGRATIONS
- dockerfile: ./docker/realtime.Dockerfile
ecr_repo_secret: ECR_REALTIME
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v4
with:
role-to-assume: ${{ secrets.DEV_AWS_ROLE_TO_ASSUME }}
aws-region: ${{ secrets.DEV_AWS_REGION }}
- name: Login to Amazon ECR
id: login-ecr
uses: aws-actions/amazon-ecr-login@v2
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Docker Buildx
uses: useblacksmith/setup-docker-builder@v1
- name: Resolve ECR repo name
id: ecr-repo
run: echo "name=$ECR_REPO" >> $GITHUB_OUTPUT
env:
ECR_REPO: ${{ matrix.ecr_repo_secret == 'ECR_APP' && secrets.ECR_APP || matrix.ecr_repo_secret == 'ECR_MIGRATIONS' && secrets.ECR_MIGRATIONS || matrix.ecr_repo_secret == 'ECR_REALTIME' && secrets.ECR_REALTIME || '' }}
- name: Build and push
uses: useblacksmith/build-push-action@v2
with:
context: .
file: ${{ matrix.dockerfile }}
platforms: linux/amd64
push: true
tags: ${{ steps.login-ecr.outputs.registry }}/${{ steps.ecr-repo.outputs.name }}:dev
provenance: false
sbom: false
# Main/staging: build AMD64 images and push to ECR + GHCR
build-amd64:
name: Build AMD64
needs: [test-build, detect-version]
if: github.event_name == 'push' && (github.ref == 'refs/heads/main' || github.ref == 'refs/heads/staging' || github.ref == 'refs/heads/dev')
if: >-
github.event_name == 'push' &&
(github.ref == 'refs/heads/main' || github.ref == 'refs/heads/staging')
runs-on: blacksmith-8vcpu-ubuntu-2404
permissions:
contents: read
@@ -70,13 +132,13 @@ jobs:
ecr_repo_secret: ECR_REALTIME
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v6
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v4
with:
role-to-assume: ${{ github.ref == 'refs/heads/main' && secrets.AWS_ROLE_TO_ASSUME || github.ref == 'refs/heads/dev' && secrets.DEV_AWS_ROLE_TO_ASSUME || secrets.STAGING_AWS_ROLE_TO_ASSUME }}
aws-region: ${{ github.ref == 'refs/heads/main' && secrets.AWS_REGION || github.ref == 'refs/heads/dev' && secrets.DEV_AWS_REGION || secrets.STAGING_AWS_REGION }}
role-to-assume: ${{ github.ref == 'refs/heads/main' && secrets.AWS_ROLE_TO_ASSUME || secrets.STAGING_AWS_ROLE_TO_ASSUME }}
aws-region: ${{ github.ref == 'refs/heads/main' && secrets.AWS_REGION || secrets.STAGING_AWS_REGION }}
- name: Login to Amazon ECR
id: login-ecr
@@ -99,33 +161,33 @@ jobs:
- name: Set up Docker Buildx
uses: useblacksmith/setup-docker-builder@v1
- name: Resolve ECR repo name
id: ecr-repo
run: echo "name=$ECR_REPO" >> $GITHUB_OUTPUT
env:
ECR_REPO: ${{ matrix.ecr_repo_secret == 'ECR_APP' && secrets.ECR_APP || matrix.ecr_repo_secret == 'ECR_MIGRATIONS' && secrets.ECR_MIGRATIONS || matrix.ecr_repo_secret == 'ECR_REALTIME' && secrets.ECR_REALTIME || '' }}
- name: Generate tags
id: meta
run: |
ECR_REGISTRY="${{ steps.login-ecr.outputs.registry }}"
ECR_REPO="${{ secrets[matrix.ecr_repo_secret] }}"
ECR_REPO="${{ steps.ecr-repo.outputs.name }}"
GHCR_IMAGE="${{ matrix.ghcr_image }}"
# ECR tags (always build for ECR)
if [ "${{ github.ref }}" = "refs/heads/main" ]; then
ECR_TAG="latest"
elif [ "${{ github.ref }}" = "refs/heads/dev" ]; then
ECR_TAG="dev"
else
ECR_TAG="staging"
fi
ECR_IMAGE="${ECR_REGISTRY}/${ECR_REPO}:${ECR_TAG}"
# Build tags list
TAGS="${ECR_IMAGE}"
# Add GHCR tags only for main branch
if [ "${{ github.ref }}" = "refs/heads/main" ]; then
GHCR_AMD64="${GHCR_IMAGE}:latest-amd64"
GHCR_SHA="${GHCR_IMAGE}:${{ github.sha }}-amd64"
TAGS="${TAGS},$GHCR_AMD64,$GHCR_SHA"
# Add version tag if this is a release commit
if [ "${{ needs.detect-version.outputs.is_release }}" = "true" ]; then
VERSION="${{ needs.detect-version.outputs.version }}"
GHCR_VERSION="${GHCR_IMAGE}:${VERSION}-amd64"
@@ -150,7 +212,7 @@ jobs:
# Build ARM64 images for GHCR (main branch only, runs in parallel)
build-ghcr-arm64:
name: Build ARM64 (GHCR Only)
needs: [test-build, detect-version]
needs: [detect-version]
runs-on: blacksmith-8vcpu-ubuntu-2404-arm
if: github.event_name == 'push' && github.ref == 'refs/heads/main'
permissions:
@@ -169,7 +231,7 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v6
- name: Login to GHCR
uses: docker/login-action@v3
@@ -256,6 +318,14 @@ jobs:
docker manifest push "${IMAGE_BASE}:${VERSION}"
fi
# Run database migrations for dev
migrate-dev:
name: Migrate Dev DB
needs: [build-dev]
if: github.event_name == 'push' && github.ref == 'refs/heads/dev'
uses: ./.github/workflows/migrations.yml
secrets: inherit
# Check if docs changed
check-docs-changes:
name: Check Docs Changes
@@ -264,10 +334,10 @@ jobs:
outputs:
docs_changed: ${{ steps.filter.outputs.docs }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
with:
fetch-depth: 2 # Need at least 2 commits to detect changes
- uses: dorny/paths-filter@v3
- uses: dorny/paths-filter@v4
id: filter
with:
filters: |
@@ -294,7 +364,7 @@ jobs:
contents: write
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
fetch-depth: 0

View File

@@ -15,7 +15,7 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v6
- name: Setup Bun
uses: oven-sh/setup-bun@v2

View File

@@ -14,7 +14,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
ref: staging
token: ${{ secrets.GH_PAT }}
@@ -115,7 +115,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
ref: staging

View File

@@ -31,7 +31,7 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v6
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v4
@@ -117,7 +117,7 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v6
- name: Login to GHCR
uses: docker/login-action@v3

View File

@@ -14,7 +14,7 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v6
- name: Setup Bun
uses: oven-sh/setup-bun@v2
@@ -38,5 +38,5 @@ jobs:
- name: Apply migrations
working-directory: ./packages/db
env:
DATABASE_URL: ${{ github.ref == 'refs/heads/main' && secrets.DATABASE_URL || secrets.STAGING_DATABASE_URL }}
DATABASE_URL: ${{ github.ref == 'refs/heads/main' && secrets.DATABASE_URL || github.ref == 'refs/heads/dev' && secrets.DEV_DATABASE_URL || secrets.STAGING_DATABASE_URL }}
run: bunx drizzle-kit migrate --config=./drizzle.config.ts

View File

@@ -14,7 +14,7 @@ jobs:
runs-on: blacksmith-4vcpu-ubuntu-2404
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v6
- name: Setup Bun
uses: oven-sh/setup-bun@v2

View File

@@ -14,7 +14,7 @@ jobs:
runs-on: blacksmith-4vcpu-ubuntu-2404
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v6
- name: Setup Python
uses: actions/setup-python@v5

View File

@@ -14,7 +14,7 @@ jobs:
runs-on: blacksmith-4vcpu-ubuntu-2404
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v6
- name: Setup Bun
uses: oven-sh/setup-bun@v2

View File

@@ -14,7 +14,7 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v6
- name: Setup Bun
uses: oven-sh/setup-bun@v2
@@ -105,7 +105,7 @@ jobs:
- name: Run tests with coverage
env:
NODE_OPTIONS: '--no-warnings'
NODE_OPTIONS: '--no-warnings --max-old-space-size=8192'
NEXT_PUBLIC_APP_URL: 'https://www.sim.ai'
DATABASE_URL: 'postgresql://postgres:postgres@localhost:5432/simstudio'
ENCRYPTION_KEY: '7cf672e460e430c1fba707575c2b0e2ad5a99dddf9b7b7e3b5646e630861db1c' # dummy key for CI only
@@ -127,7 +127,7 @@ jobs:
- name: Build application
env:
NODE_OPTIONS: '--no-warnings'
NODE_OPTIONS: '--no-warnings --max-old-space-size=8192'
NEXT_PUBLIC_APP_URL: 'https://www.sim.ai'
DATABASE_URL: 'postgresql://postgres:postgres@localhost:5432/simstudio'
STRIPE_SECRET_KEY: 'dummy_key_for_ci_only'

View File

@@ -74,10 +74,6 @@ docker compose -f docker-compose.prod.yml up -d
Open [http://localhost:3000](http://localhost:3000)
#### Background worker note
The Docker Compose stack starts a dedicated worker container by default. If `REDIS_URL` is not configured, the worker will start, log that it is idle, and do no queue processing. This is expected. Queue-backed API, webhook, and schedule execution requires Redis; installs without Redis continue to use the inline execution path.
Sim also supports local models via [Ollama](https://ollama.ai) and [vLLM](https://docs.vllm.ai/) — see the [Docker self-hosting docs](https://docs.sim.ai/self-hosting/docker) for setup details.
### Self-hosted: Manual Setup
@@ -123,12 +119,10 @@ cd packages/db && bun run db:migrate
5. Start development servers:
```bash
bun run dev:full # Starts Next.js app, realtime socket server, and the BullMQ worker
bun run dev:full # Starts Next.js app and realtime socket server
```
If `REDIS_URL` is not configured, the worker will remain idle and execution continues inline.
Or run separately: `bun run dev` (Next.js), `cd apps/sim && bun run dev:sockets` (realtime), and `cd apps/sim && bun run worker` (BullMQ worker).
Or run separately: `bun run dev` (Next.js) and `cd apps/sim && bun run dev:sockets` (realtime).
## Copilot API Keys

View File

@@ -17,9 +17,10 @@ import { ResponseSection } from '@/components/ui/response-section'
import { i18n } from '@/lib/i18n'
import { getApiSpecContent, openapi } from '@/lib/openapi'
import { type PageData, source } from '@/lib/source'
import { DOCS_BASE_URL } from '@/lib/urls'
const SUPPORTED_LANGUAGES: Set<string> = new Set(i18n.languages)
const BASE_URL = 'https://docs.sim.ai'
const BASE_URL = DOCS_BASE_URL
const OG_LOCALE_MAP: Record<string, string> = {
en: 'en_US',
@@ -280,12 +281,12 @@ export async function generateMetadata(props: {
title: data.title,
description:
data.description ||
'Documentation for Sim — the open-source platform to build AI agents and run your agentic workforce.',
'Documentation for Sim — the open-source AI workspace where teams build, deploy, and manage AI agents.',
keywords: [
'AI agents',
'agentic workforce',
'AI agent platform',
'agentic workflows',
'AI workspace',
'AI agent builder',
'build AI agents',
'LLM orchestration',
'AI automation',
'knowledge base',
@@ -300,7 +301,7 @@ export async function generateMetadata(props: {
title: data.title,
description:
data.description ||
'Documentation for Sim — the open-source platform to build AI agents and run your agentic workforce.',
'Documentation for Sim — the open-source AI workspace where teams build, deploy, and manage AI agents.',
url: fullUrl,
siteName: 'Sim Documentation',
type: 'article',
@@ -322,7 +323,7 @@ export async function generateMetadata(props: {
title: data.title,
description:
data.description ||
'Documentation for Sim — the open-source platform to build AI agents and run your agentic workforce.',
'Documentation for Sim — the open-source AI workspace where teams build, deploy, and manage AI agents.',
images: [ogImageUrl],
creator: '@simdotai',
site: '@simdotai',

View File

@@ -3,7 +3,6 @@ import { defineI18nUI } from 'fumadocs-ui/i18n'
import { DocsLayout } from 'fumadocs-ui/layouts/docs'
import { RootProvider } from 'fumadocs-ui/provider/next'
import { Geist_Mono, Inter } from 'next/font/google'
import Script from 'next/script'
import {
SidebarFolder,
SidebarItem,
@@ -13,6 +12,7 @@ import { Navbar } from '@/components/navbar/navbar'
import { SimLogoFull } from '@/components/ui/sim-logo'
import { i18n } from '@/lib/i18n'
import { source } from '@/lib/source'
import { DOCS_BASE_URL } from '@/lib/urls'
import '../global.css'
const inter = Inter({
@@ -66,15 +66,15 @@ export default async function Layout({ children, params }: LayoutProps) {
'@type': 'WebSite',
name: 'Sim Documentation',
description:
'Documentation for Sim — the open-source platform to build AI agents and run your agentic workforce. Connect 1,000+ integrations and LLMs to deploy and orchestrate agentic workflows.',
url: 'https://docs.sim.ai',
'Documentation for Sim — the open-source AI workspace where teams build, deploy, and manage AI agents. Connect 1,000+ integrations and every major LLM.',
url: DOCS_BASE_URL,
publisher: {
'@type': 'Organization',
name: 'Sim',
url: 'https://sim.ai',
logo: {
'@type': 'ImageObject',
url: 'https://docs.sim.ai/static/logo.png',
url: `${DOCS_BASE_URL}/static/logo.png`,
},
},
inLanguage: lang,
@@ -82,7 +82,7 @@ export default async function Layout({ children, params }: LayoutProps) {
'@type': 'SearchAction',
target: {
'@type': 'EntryPoint',
urlTemplate: 'https://docs.sim.ai/api/search?q={search_term_string}',
urlTemplate: `${DOCS_BASE_URL}/api/search?q={search_term_string}`,
},
'query-input': 'required name=search_term_string',
},
@@ -101,7 +101,6 @@ export default async function Layout({ children, params }: LayoutProps) {
/>
</head>
<body className='flex min-h-screen flex-col font-sans'>
<Script src='https://assets.onedollarstats.com/stonks.js' strategy='lazyOnload' />
<RootProvider i18n={provider(lang)}>
<Navbar />
<DocsLayout

View File

@@ -1,4 +1,5 @@
import { DocsBody, DocsPage } from 'fumadocs-ui/page'
import { DocsPage } from 'fumadocs-ui/page'
import Link from 'next/link'
export const metadata = {
title: 'Page Not Found',
@@ -7,17 +8,21 @@ export const metadata = {
export default function NotFound() {
return (
<DocsPage>
<DocsBody>
<div className='flex min-h-[60vh] flex-col items-center justify-center text-center'>
<h1 className='mb-4 bg-gradient-to-b from-[#47d991] to-[#33c482] bg-clip-text font-bold text-8xl text-transparent'>
404
</h1>
<h2 className='mb-2 font-semibold text-2xl text-foreground'>Page Not Found</h2>
<p className='text-muted-foreground'>
The page you're looking for doesn't exist or has been moved.
</p>
</div>
</DocsBody>
<div className='flex min-h-[70vh] flex-col items-center justify-center gap-4 text-center'>
<h1 className='bg-gradient-to-b from-[#47d991] to-[#33c482] bg-clip-text font-bold text-8xl text-transparent'>
404
</h1>
<h2 className='font-semibold text-2xl text-foreground'>Page Not Found</h2>
<p className='text-muted-foreground'>
The page you're looking for doesn't exist or has been moved.
</p>
<Link
href='/'
className='ml-1 flex items-center rounded-[8px] bg-[#33c482] px-2.5 py-1.5 text-[13px] text-white transition-colors duration-200 hover:bg-[#2DAC72]'
>
Go home
</Link>
</div>
</DocsPage>
)
}

View File

@@ -1,5 +1,6 @@
import type { ReactNode } from 'react'
import type { Viewport } from 'next'
import { DOCS_BASE_URL } from '@/lib/urls'
export default function RootLayout({ children }: { children: ReactNode }) {
return children
@@ -12,31 +13,29 @@ export const viewport: Viewport = {
}
export const metadata = {
metadataBase: new URL('https://docs.sim.ai'),
metadataBase: new URL(DOCS_BASE_URL),
title: {
default: 'Sim Documentation — Build AI Agents & Run Your Agentic Workforce',
default: 'Sim Documentation — The AI Workspace for Teams',
template: '%s | Sim Docs',
},
description:
'Documentation for Sim — the open-source platform to build AI agents and run your agentic workforce. Connect 1,000+ integrations and LLMs to deploy and orchestrate agentic workflows.',
'Documentation for Sim — the open-source AI workspace where teams build, deploy, and manage AI agents. Connect 1,000+ integrations and every major LLM.',
applicationName: 'Sim Docs',
generator: 'Next.js',
referrer: 'origin-when-cross-origin' as const,
keywords: [
'AI workspace',
'AI agent builder',
'AI agents',
'agentic workforce',
'AI agent platform',
'build AI agents',
'open-source AI agents',
'agentic workflows',
'LLM orchestration',
'AI integrations',
'knowledge base',
'AI automation',
'workflow builder',
'AI workflow orchestration',
'visual workflow builder',
'enterprise AI',
'AI agent deployment',
'intelligent automation',
'AI tools',
],
authors: [{ name: 'Sim Team', url: 'https://sim.ai' }],
@@ -63,14 +62,14 @@ export const metadata = {
type: 'website',
locale: 'en_US',
alternateLocale: ['es_ES', 'fr_FR', 'de_DE', 'ja_JP', 'zh_CN'],
url: 'https://docs.sim.ai',
url: DOCS_BASE_URL,
siteName: 'Sim Documentation',
title: 'Sim Documentation — Build AI Agents & Run Your Agentic Workforce',
title: 'Sim Documentation — The AI Workspace for Teams',
description:
'Documentation for Sim — the open-source platform to build AI agents and run your agentic workforce. Connect 1,000+ integrations and LLMs to deploy and orchestrate agentic workflows.',
'Documentation for Sim — the open-source AI workspace where teams build, deploy, and manage AI agents. Connect 1,000+ integrations and every major LLM.',
images: [
{
url: 'https://docs.sim.ai/api/og?title=Sim%20Documentation',
url: `${DOCS_BASE_URL}/api/og?title=Sim%20Documentation`,
width: 1200,
height: 630,
alt: 'Sim Documentation',
@@ -79,12 +78,12 @@ export const metadata = {
},
twitter: {
card: 'summary_large_image',
title: 'Sim Documentation — Build AI Agents & Run Your Agentic Workforce',
title: 'Sim Documentation — The AI Workspace for Teams',
description:
'Documentation for Sim — the open-source platform to build AI agents and run your agentic workforce. Connect 1,000+ integrations and LLMs to deploy and orchestrate agentic workflows.',
'Documentation for Sim — the open-source AI workspace where teams build, deploy, and manage AI agents. Connect 1,000+ integrations and every major LLM.',
creator: '@simdotai',
site: '@simdotai',
images: ['https://docs.sim.ai/api/og?title=Sim%20Documentation'],
images: [`${DOCS_BASE_URL}/api/og?title=Sim%20Documentation`],
},
robots: {
index: true,
@@ -98,15 +97,15 @@ export const metadata = {
},
},
alternates: {
canonical: 'https://docs.sim.ai',
canonical: DOCS_BASE_URL,
languages: {
'x-default': 'https://docs.sim.ai',
en: 'https://docs.sim.ai',
es: 'https://docs.sim.ai/es',
fr: 'https://docs.sim.ai/fr',
de: 'https://docs.sim.ai/de',
ja: 'https://docs.sim.ai/ja',
zh: 'https://docs.sim.ai/zh',
'x-default': DOCS_BASE_URL,
en: DOCS_BASE_URL,
es: `${DOCS_BASE_URL}/es`,
fr: `${DOCS_BASE_URL}/fr`,
de: `${DOCS_BASE_URL}/de`,
ja: `${DOCS_BASE_URL}/ja`,
zh: `${DOCS_BASE_URL}/zh`,
},
},
}

View File

@@ -1,9 +1,10 @@
import { source } from '@/lib/source'
import { DOCS_BASE_URL } from '@/lib/urls'
export const revalidate = false
export async function GET() {
const baseUrl = 'https://docs.sim.ai'
const baseUrl = DOCS_BASE_URL
try {
const pages = source.getPages().filter((page) => {
@@ -37,9 +38,9 @@ export async function GET() {
const manifest = `# Sim Documentation
> The open-source platform to build AI agents and run your agentic workforce.
> The open-source AI workspace where teams build, deploy, and manage AI agents.
Sim is the open-source platform to build AI agents and run your agentic workforce. Connect 1,000+ integrations and LLMs to deploy and orchestrate agentic workflows. Create agents, workflows, knowledge bases, tables, and docs. Trusted by over 100,000 builders.
Sim is the open-source AI workspace where teams build, deploy, and manage AI agents. Connect 1,000+ integrations and every major LLM to create agents that automate real work — visually, conversationally, or with code. Trusted by over 100,000 builders.
## Documentation Overview
@@ -61,7 +62,7 @@ ${Object.entries(sections)
- Full documentation content: ${baseUrl}/llms-full.txt
- Individual page content: ${baseUrl}/llms.mdx/[page-path]
- API documentation: ${baseUrl}/sdks/
- API documentation: ${baseUrl}/api-reference/
- Tool integrations: ${baseUrl}/tools/
## Statistics

View File

@@ -1,70 +1,18 @@
import { DOCS_BASE_URL } from '@/lib/urls'
export const revalidate = false
export async function GET() {
const baseUrl = 'https://docs.sim.ai'
const baseUrl = DOCS_BASE_URL
const robotsTxt = `# Robots.txt for Sim Documentation
User-agent: *
Allow: /
# Search engine crawlers
User-agent: Googlebot
Allow: /
User-agent: Bingbot
Allow: /
User-agent: Slurp
Allow: /
User-agent: DuckDuckBot
Allow: /
User-agent: Baiduspider
Allow: /
User-agent: YandexBot
Allow: /
# AI and LLM crawlers - explicitly allowed for documentation indexing
User-agent: GPTBot
Allow: /
User-agent: ChatGPT-User
Allow: /
User-agent: CCBot
Allow: /
User-agent: anthropic-ai
Allow: /
User-agent: Claude-Web
Allow: /
User-agent: Applebot
Allow: /
User-agent: PerplexityBot
Allow: /
User-agent: Diffbot
Allow: /
User-agent: FacebookBot
Allow: /
User-agent: cohere-ai
Allow: /
# Disallow admin and internal paths (if any exist)
Disallow: /.next/
Disallow: /api/internal/
Disallow: /_next/static/
Disallow: /admin/
# Allow but don't prioritize these
Allow: /
Allow: /api/search
Allow: /llms.txt
Allow: /llms-full.txt
@@ -73,23 +21,12 @@ Allow: /llms.mdx/
# Sitemaps
Sitemap: ${baseUrl}/sitemap.xml
# Crawl delay for aggressive bots (optional)
# Crawl-delay: 1
# Additional resources for AI indexing
# See https://github.com/AnswerDotAI/llms-txt for more info
# LLM-friendly content:
# Manifest: ${baseUrl}/llms.txt
# Full content: ${baseUrl}/llms-full.txt
# Individual pages: ${baseUrl}/llms.mdx/[page-path]
# Multi-language documentation available at:
# ${baseUrl}/en - English
# ${baseUrl}/es - Español
# ${baseUrl}/fr - Français
# ${baseUrl}/de - Deutsch
# ${baseUrl}/ja - 日本語
# ${baseUrl}/zh - 简体中文`
# Individual pages: ${baseUrl}/llms.mdx/[page-path]`
return new Response(robotsTxt, {
headers: {

42
apps/docs/app/sitemap.ts Normal file
View File

@@ -0,0 +1,42 @@
import type { MetadataRoute } from 'next'
import { i18n } from '@/lib/i18n'
import { source } from '@/lib/source'
import { DOCS_BASE_URL } from '@/lib/urls'
export const revalidate = 3600
export default function sitemap(): MetadataRoute.Sitemap {
const baseUrl = DOCS_BASE_URL
const languages = source.getLanguages()
const pagesBySlug = new Map<string, Map<string, string>>()
for (const { language, pages } of languages) {
for (const page of pages) {
const key = page.slugs.join('/')
if (!pagesBySlug.has(key)) {
pagesBySlug.set(key, new Map())
}
pagesBySlug.get(key)!.set(language, `${baseUrl}${page.url}`)
}
}
const entries: MetadataRoute.Sitemap = []
for (const [, localeMap] of pagesBySlug) {
const defaultUrl = localeMap.get(i18n.defaultLanguage)
if (!defaultUrl) continue
const langAlternates: Record<string, string> = {}
for (const [lang, url] of localeMap) {
langAlternates[lang] = url
}
langAlternates['x-default'] = defaultUrl
entries.push({
url: defaultUrl,
alternates: { languages: langAlternates },
})
}
return entries
}
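For reference, each entry this function emits pairs a default-language URL with per-language alternates. A hypothetical example of one emitted entry (URLs illustrative only):

```typescript
import type { MetadataRoute } from 'next'

// Illustrative only: the shape of one generated sitemap entry.
const exampleEntry: MetadataRoute.Sitemap[number] = {
  url: 'https://docs.sim.ai/introduction',
  alternates: {
    languages: {
      en: 'https://docs.sim.ai/introduction',
      es: 'https://docs.sim.ai/es/introduction',
      'x-default': 'https://docs.sim.ai/introduction',
    },
  },
}
```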

View File

@@ -1,62 +0,0 @@
import { i18n } from '@/lib/i18n'
import { source } from '@/lib/source'
export const revalidate = 3600
export async function GET() {
const baseUrl = 'https://docs.sim.ai'
const allPages = source.getPages()
const getPriority = (url: string): string => {
if (url === '/introduction' || url === '/') return '1.0'
if (url === '/getting-started') return '0.9'
if (url.match(/^\/[^/]+$/)) return '0.8'
if (url.includes('/sdks/') || url.includes('/tools/')) return '0.7'
return '0.6'
}
const urls = allPages
.flatMap((page) => {
const urlWithoutLang = page.url.replace(/^\/[a-z]{2}\//, '/')
return i18n.languages.map((lang) => {
const url =
lang === i18n.defaultLanguage
? `${baseUrl}${urlWithoutLang}`
: `${baseUrl}/${lang}${urlWithoutLang}`
return ` <url>
<loc>${url}</loc>
<priority>${getPriority(urlWithoutLang)}</priority>
${i18n.languages.length > 1 ? generateAlternateLinks(baseUrl, urlWithoutLang) : ''}
</url>`
})
})
.join('\n')
const sitemap = `<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:xhtml="http://www.w3.org/1999/xhtml">
${urls}
</urlset>`
return new Response(sitemap, {
headers: {
'Content-Type': 'application/xml',
'Cache-Control': 'public, max-age=3600, s-maxage=3600',
},
})
}
function generateAlternateLinks(baseUrl: string, urlWithoutLang: string): string {
const langLinks = i18n.languages
.map((lang) => {
const url =
lang === i18n.defaultLanguage
? `${baseUrl}${urlWithoutLang}`
: `${baseUrl}/${lang}${urlWithoutLang}`
return ` <xhtml:link rel="alternate" hreflang="${lang}" href="${url}" />`
})
.join('\n')
return `${langLinks}\n <xhtml:link rel="alternate" hreflang="x-default" href="${baseUrl}${urlWithoutLang}" />`
}

View File

@@ -28,6 +28,17 @@ export function AgentMailIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function CrowdStrikeIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 768 500' fill='none' xmlns='http://www.w3.org/2000/svg'>
<path
d='m152.8 23.6c-.8.8.3 4.4 1.3 4.4.5 0 .9.5.9 1.2 0 1.5 7.2 15.9 8.8 17.6.6.7 1.2 1.7 1.2 2.2 0 1.3 8.6 13.7 12.8 18.4 10 11.2 28.2 28.1 35.2 32.7 1.4.9 3.9 2.9 5.5 4.3 1.7 1.5 4.8 3.9 7 5.4s4.9 3.5 5.9 4.4c1.1 1 3.8 3 6 4.5 2.3 1.6 5 3.6 6 4.5 1.1 1 3.8 3 6 4.5 2.3 1.5 4.3 3 4.6 3.3s3.7 3 7.5 6c3.9 3 7.5 5.9 8.1 6.5.6.5 4.6 4.1 8.9 8 14.6 13.1 25.8 25.3 32.6 35.5 6.6 10 9.2 14.4 15.1 25.8 3.1 6.2 7.7 14.4 10 18.3 2.4 3.9 5.4 8.9 6.7 11.2s3 4.8 3.8 5.5c.7.7 1.3 1.8 1.3 2.3s.5 1.5 1 2.2c.6.7 5.3 7.7 10.6 15.7 16.9 25.6 40.1 46 62.9 55.1 10.8 4.3 33.4 6 63 4.7 20.6-.8 44.2-.2 48.3 1.3 1.3.5 4.2.9 6.5.9 2.3.1 6 .7 8.2 1.5s4.9 1.5 6 1.5 3.3.7 4.9 1.5c1.5.8 3.5 1.5 4.3 1.5 1.6 0 7.1 2.4 19.8 8.6 18.3 9.1 33.1 19.9 48.7 35.6 10.4 10.5 10.8 10.8 11.4 8.2.8-3.1-.2-13.7-1.5-16.1-.5-1-2-4.1-3.3-6.8-2.5-5.6-7.2-12.3-14.2-20.4-2.7-3.3-4.6-6.5-4.6-7.9 0-4.1-3.9-10.5-8.5-13.9-5.8-4.3-23.6-13.3-26.3-13.3-.5 0-2.3-.7-3.8-1.5-1.6-.8-3.7-1.5-4.7-1.5-.9 0-2.5-.4-3.5-.9-.9-.5-5.1-1.9-9.2-3.1-13.7-4.1-22.5-7.2-25.6-9.1-3.3-2-6.4-7.2-6.4-10.7 0-2.6 3.8-14.4 5-15.6.6-.6 1-1.7 1-2.5 0-.9.6-2.8 1.4-4.3.8-1.4 1.9-5.8 2.6-9.7 3.3-19.4-7.2-31.8-41-48.7-4.5-2.2-12.7-5.9-16.5-7.5-1.1-.4-4.1-1.7-6.7-2.8-2.6-1.2-5.4-2.1-6.2-2.1s-1.8-.5-2.1-1c-.3-.6-1.3-1-2.2-1-.8 0-2.9-.6-4.6-1.4-1.8-.8-10.4-3.8-19.2-6.6-8.8-2.9-16.7-5.6-17.6-6-.9-.5-3.4-1.2-5.5-1.6-2.2-.3-4.3-1-4.9-1.4-.5-.4-2.6-1.1-4.5-1.4-1.9-.4-4.4-1.1-5.5-1.6-1.1-.4-4-1.3-6.5-2-2.5-.6-6.3-1.6-8.5-2.1-2.2-.6-4.9-1.5-6-1.9-1.1-.5-3.6-1.2-5.5-1.6-1.9-.3-4.1-1-5-1.4-.8-.4-4.9-1.8-9-3s-8.2-2.5-9-2.9c-.9-.5-3.1-1.2-5-1.6s-3.9-1-4.5-1.4c-.5-.4-4.4-1.8-8.5-3.1-4.1-1.2-7.9-2.6-8.5-3-.5-.4-3.9-1.7-7.5-3s-6.9-2.7-7.4-3.2c-.6-.4-1.6-.8-2.4-.8-2 0-11.4-4.3-35.2-15.9-16.7-8.2-32.1-16.6-35.5-19.3-.5-.4-4.6-3.1-9-6s-8.4-5.6-9-6c-.5-.4-5.2-3.9-10.4-7.8-18.1-13.5-44.4-38.8-55.5-53.5-2.1-2.8-3.9-5.1-4-5.3-.2-.1-.5.1-.8.4zm447.2 303c10.2 3.4 13.5 6 15.9 12.1 2.4 5.9-1.6 7.3-6.5 2.2-1.6-1.7-4.5-4-6.4-5.2s-4.1-2.7-4.8-3.4-1.9-1.3-2.7-1.3c-1.3 0-2.5-2.1-2.5-4.6 0-1.8 1.4-1.8 7 .2zm-519-240c0 1.1 8.5 17.9 10 19.7.6.7 2.7 3.4 4.7 6.2 7.3 9.8 18.7 21.5 33.9 34.5 3.8 3.3 14.2 11.1 17.5 13.2 1.4.9 3.2 2.3 4 3 .8.8 3.2 2.5 5.4 3.8s4.2 2.7 4.5 3c.6.8 30.1 18.3 39.5 23.5 7.4 4.2 15.4 8.2 43.5 21.9 16.5 8.1 19.6 9.7 31.7 17 9.1 5.5 23.7 16.9 31 24.2 4.1 4.1 7.6 7.4 7.8 7.4.3 0-.1-1.1-.7-2.5s-1.5-2.5-2-2.5c-.4 0-.8-.6-.8-1.3 0-.8-.9-2.5-2-3.8s-2.3-2.9-2.7-3.4c-7.3-9.6-13.3-15.4-31.7-31-2.5-2.2-19-13.4-26.7-18.2-6.1-3.9-18.4-10.8-30.9-17.5-3-1.7-5.9-3.4-6.5-3.8-.9-.7-5.2-3-19.5-10.8-9-4.8-31.8-18.9-35.5-21.9-.5-.5-2.8-2-5-3.3s-4.4-2.8-5-3.2c-.5-.4-5.9-4.4-12-8.9-6-4.5-11.2-8.5-11.5-8.8-.3-.4-2.7-2.4-5.5-4.5-5.6-4.2-12.8-10.8-26.2-24-5.1-5-9.3-8.6-9.3-8zm113.6 179.1c-1 1 15.8 16.6 26.9 24.9 5.5 4.1 10.5 7.8 11 8.2 2.6 2 11.6 7.2 12.4 7.2.5 0 1.6.6 2.3 1.2.7.7 2.9 2 4.8 3 13.3 6.3 19 8.8 20.4 8.8.8 0 1.7.4 2 .8.8 1.3 32.3 11.2 35.8 11.2 1 0 2.6.4 3.6 1 .9.5 3.7 1.4 6.2 1.9 8.7 1.9 13.5 3.1 15.5 4 1.1.5 5.4 1.9 9.5 3.2s7.9 2.6 8.5 3.1c.5.4 1.5.8 2.3.8s2.8.6 4.5 1.4c16.4 7.1 20.8 8.8 21.4 8.3.3-.4-.7-1.7-2.3-2.9-2.5-2-6.9-5.9-16.4-14.8-1.5-1.4-4.2-3.8-6-5.4-5-4.3-26-19.9-30.5-22.6-2.2-1.3-4.2-2.7-4.5-3-.3-.4-1.2-1-2-1.4s-4.2-2.2-7.5-4.1c-6.2-3.6-18.9-9.9-26-12.9-2.2-.9-4.7-2.1-5.5-2.5-.9-.5-3-1.2-4.8-1.5-1.7-.4-3.4-1.2-3.7-1.7-.4-.5-1.6-.9-2.8-.9-2.2.1-2.2.1-.2 1.2 1.1.6 2.2 1.4 2.5 1.8.3.3 2.5 1.8 5 3.3 5.3 3.1 15 11.7 15 13.3 0 .6-.7 1.7-1.5 2.4-1.2 1-4.1.9-14.5-.4-7.2-.9-14.1-2.1-15.3-2.6-1.2-.4-4.7-1.6-7.7-2.5-15.6-4.7-47-22.1-56.1-31-.9-.8-1.9-1.2-2.3-.8z'
fill='currentColor'
/>
</svg>
)
}
export function SearchIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg
@@ -2076,6 +2087,21 @@ export function BrandfetchIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function BrightDataIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='54 93 22 52' fill='none' xmlns='http://www.w3.org/2000/svg'>
<path
d='M62 95.21c.19 2.16 1.85 3.24 2.82 4.74.25.38.48.11.67-.16.21-.31.6-1.21 1.15-1.28-.35 1.38-.04 3.15.16 4.45.49 3.05-1.22 5.64-4.07 6.18-3.38.65-6.22-2.21-5.6-5.62.23-1.24 1.37-2.5.77-3.7-.85-1.7.54-.52.79-.22 1.04 1.2 1.21.09 1.45-.55.24-.63.31-1.31.47-1.97.19-.77.55-1.4 1.39-1.87z'
fill='currentColor'
/>
<path
d='M66.70 123.37c0 3.69.04 7.38-.03 11.07-.02 1.04.31 1.48 1.32 1.49.29 0 .59.12.88.13.93.01 1.18.47 1.16 1.37-.05 2.19 0 2.19-2.24 2.19-3.48 0-6.96-.04-10.44.03-1.09.02-1.47-.33-1.3-1.36.02-.12.02-.26 0-.38-.28-1.39.39-1.96 1.7-1.9 1.36.06 1.76-.51 1.74-1.88-.09-5.17-.08-10.35 0-15.53.02-1.22-.32-1.87-1.52-2.17-.57-.14-1.47-.11-1.57-.85-.15-1.04-.05-2.11.01-3.17.02-.34.44-.35.73-.39 2.81-.39 5.63-.77 8.44-1.18.92-.14 1.15.2 1.14 1.09-.04 3.8-.02 7.62-.02 11.44z'
fill='currentColor'
/>
</svg>
)
}
export function BrowserUseIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg
@@ -3554,7 +3580,7 @@ export function FireworksIcon(props: SVGProps<SVGSVGElement>) {
>
<path
d='M314.333 110.167L255.98 251.729l-58.416-141.562h-37.459l64 154.75c5.23 12.854 17.771 21.312 31.646 21.312s26.417-8.437 31.646-21.27l64.396-154.792h-37.459zm24.917 215.666L446 216.583l-14.562-34.77-116.584 119.562c-9.708 9.958-12.541 24.833-7.146 37.646 5.292 12.73 17.792 21.083 31.584 21.083l.042.063L506 359.75l-14.562-34.77-152.146.853h-.042zM66 216.5l14.563-34.77 116.583 119.562a34.592 34.592 0 017.146 37.646C199 351.667 186.5 360.02 172.708 360.02l-166.666-.375-.042.042 14.563-34.771 152.145.875L66 216.5z'
fill='currentColor'
fill='#5019c5'
/>
</svg>
)
@@ -4614,6 +4640,42 @@ export function DynamoDBIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function IAMIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 80 80' xmlns='http://www.w3.org/2000/svg'>
<defs>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id='iamGradient'>
<stop stopColor='#BD0816' offset='0%' />
<stop stopColor='#FF5252' offset='100%' />
</linearGradient>
</defs>
<rect fill='url(#iamGradient)' width='80' height='80' />
<path
d='M14,59 L66,59 L66,21 L14,21 L14,59 Z M68,20 L68,60 C68,60.552 67.553,61 67,61 L13,61 C12.447,61 12,60.552 12,60 L12,20 C12,19.448 12.447,19 13,19 L67,19 C67.553,19 68,19.448 68,20 L68,20 Z M44,48 L59,48 L59,46 L44,46 L44,48 Z M57,42 L62,42 L62,40 L57,40 L57,42 Z M44,42 L52,42 L52,40 L44,40 L44,42 Z M29,46 C29,45.449 28.552,45 28,45 C27.448,45 27,45.449 27,46 C27,46.551 27.448,47 28,47 C28.552,47 29,46.551 29,46 L29,46 Z M31,46 C31,47.302 30.161,48.401 29,48.816 L29,51 L27,51 L27,48.815 C25.839,48.401 25,47.302 25,46 C25,44.346 26.346,43 28,43 C29.654,43 31,44.346 31,46 L31,46 Z M19,53.993 L36.994,54 L36.996,50 L33,50 L33,48 L36.996,48 L36.998,45 L33,45 L33,43 L36.999,43 L37,40.007 L19.006,40 L19,53.993 Z M22,38.001 L34,38.006 L34,31 C34.001,28.697 31.197,26.677 28,26.675 L27.996,26.675 C24.804,26.675 22.004,28.696 22.002,31 L22,38.001 Z M17,54.992 L17.006,39 C17.006,38.734 17.111,38.48 17.299,38.292 C17.486,38.105 17.741,38 18.006,38 L20,38.001 L20.002,31 C20.004,27.512 23.59,24.675 27.996,24.675 L28,24.675 C32.412,24.677 36.001,27.515 36,31 L36,38.007 L38,38.008 C38.553,38.008 39,38.456 39,39.008 L38.994,55 C38.994,55.266 38.889,55.52 38.701,55.708 C38.514,55.895 38.259,56 37.994,56 L18,55.992 C17.447,55.992 17,55.544 17,54.992 L17,54.992 Z M60,36 L62,36 L62,34 L60,34 L60,36 Z M44,36 L55,36 L55,34 L44,34 L44,36 Z'
fill='#FFFFFF'
/>
</svg>
)
}
export function STSIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 80 80' xmlns='http://www.w3.org/2000/svg'>
<defs>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id='stsGradient'>
<stop stopColor='#BD0816' offset='0%' />
<stop stopColor='#FF5252' offset='100%' />
</linearGradient>
</defs>
<rect fill='url(#stsGradient)' width='80' height='80' />
<path
d='M14,59 L66,59 L66,21 L14,21 L14,59 Z M68,20 L68,60 C68,60.552 67.553,61 67,61 L13,61 C12.447,61 12,60.552 12,60 L12,20 C12,19.448 12.447,19 13,19 L67,19 C67.553,19 68,19.448 68,20 L68,20 Z M44,48 L59,48 L59,46 L44,46 L44,48 Z M57,42 L62,42 L62,40 L57,40 L57,42 Z M44,42 L52,42 L52,40 L44,40 L44,42 Z M29,46 C29,45.449 28.552,45 28,45 C27.448,45 27,45.449 27,46 C27,46.551 27.448,47 28,47 C28.552,47 29,46.551 29,46 L29,46 Z M31,46 C31,47.302 30.161,48.401 29,48.816 L29,51 L27,51 L27,48.815 C25.839,48.401 25,47.302 25,46 C25,44.346 26.346,43 28,43 C29.654,43 31,44.346 31,46 L31,46 Z M19,53.993 L36.994,54 L36.996,50 L33,50 L33,48 L36.996,48 L36.998,45 L33,45 L33,43 L36.999,43 L37,40.007 L19.006,40 L19,53.993 Z M22,38.001 L34,38.006 L34,31 C34.001,28.697 31.197,26.677 28,26.675 L27.996,26.675 C24.804,26.675 22.004,28.696 22.002,31 L22,38.001 Z M17,54.992 L17.006,39 C17.006,38.734 17.111,38.48 17.299,38.292 C17.486,38.105 17.741,38 18.006,38 L20,38.001 L20.002,31 C20.004,27.512 23.59,24.675 27.996,24.675 L28,24.675 C32.412,24.677 36.001,27.515 36,31 L36,38.007 L38,38.008 C38.553,38.008 39,38.456 39,39.008 L38.994,55 C38.994,55.266 38.889,55.52 38.701,55.708 C38.514,55.895 38.259,56 37.994,56 L18,55.992 C17.447,55.992 17,55.544 17,54.992 L17,54.992 Z M60,36 L62,36 L62,34 L60,34 L60,36 Z M44,36 L55,36 L55,34 L44,34 L44,36 Z'
fill='#FFFFFF'
/>
</svg>
)
}
export function SecretsManagerIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 80 80' xmlns='http://www.w3.org/2000/svg'>
@@ -4824,6 +4886,17 @@ export function WordpressIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function AgiloftIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 47.3 47.2' xmlns='http://www.w3.org/2000/svg'>
<path d='M47.3,21.4H0v-4.3l4.3-4.2h43V21.4z' fill='#263A5C' />
<path d='M47.3,8.6H8.6L17.2,0h30.1V8.6z' fill='#001028' />
<path d='M0,25.7h47.3V30L43,34.4H0V25.7z' fill='#4A6587' />
<path d='M0,38.7h38.8l-8.6,8.5H0V38.7z' fill='#6D8DAF' />
</svg>
)
}
export function AhrefsIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1065 1300'>

View File

@@ -1,3 +1,5 @@
import { DOCS_BASE_URL } from '@/lib/urls'
interface StructuredDataProps {
title: string
description: string
@@ -15,7 +17,7 @@ export function StructuredData({
dateModified,
breadcrumb,
}: StructuredDataProps) {
const baseUrl = 'https://docs.sim.ai'
const baseUrl = DOCS_BASE_URL
const articleStructuredData = {
'@context': 'https://schema.org',
@@ -70,10 +72,11 @@ export function StructuredData({
'@context': 'https://schema.org',
'@type': 'SoftwareApplication',
name: 'Sim',
applicationCategory: 'DeveloperApplication',
applicationCategory: 'BusinessApplication',
applicationSubCategory: 'AI Workspace',
operatingSystem: 'Any',
description:
'Sim is the open-source platform to build AI agents and run your agentic workforce. Connect 1,000+ integrations and LLMs to deploy and orchestrate agentic workflows. Create agents, workflows, knowledge bases, tables, and docs.',
'Sim is the open-source AI workspace where teams build, deploy, and manage AI agents. Connect 1,000+ integrations and every major LLM to create agents that automate real work.',
url: baseUrl,
author: {
'@type': 'Organization',
@@ -84,8 +87,9 @@ export function StructuredData({
category: 'Developer Tools',
},
featureList: [
'AI agent creation',
'Agentic workflow orchestration',
'AI workspace for teams',
'Mothership — natural language agent creation',
'Visual workflow builder',
'1,000+ integrations',
'LLM orchestration (OpenAI, Anthropic, Google, xAI, Mistral, Perplexity)',
'Knowledge base creation',

View File

@@ -6,6 +6,7 @@ import type { ComponentType, SVGProps } from 'react'
import {
A2AIcon,
AgentMailIcon,
AgiloftIcon,
AhrefsIcon,
AirtableIcon,
AirweaveIcon,
@@ -22,6 +23,7 @@ import {
BoxCompanyIcon,
BrainIcon,
BrandfetchIcon,
BrightDataIcon,
BrowserUseIcon,
CalComIcon,
CalendlyIcon,
@@ -32,6 +34,7 @@ import {
CloudflareIcon,
CloudWatchIcon,
ConfluenceIcon,
CrowdStrikeIcon,
CursorIcon,
DagsterIcon,
DatabricksIcon,
@@ -87,6 +90,7 @@ import {
HubspotIcon,
HuggingFaceIcon,
HunterIOIcon,
IAMIcon,
ImageIcon,
IncidentioIcon,
InfisicalIcon,
@@ -161,6 +165,7 @@ import {
SmtpIcon,
SQSIcon,
SshIcon,
STSIcon,
STTIcon,
StagehandIcon,
StripeIcon,
@@ -196,6 +201,7 @@ type IconComponent = ComponentType<SVGProps<SVGSVGElement>>
export const blockTypeToIconMap: Record<string, IconComponent> = {
a2a: A2AIcon,
agentmail: AgentMailIcon,
agiloft: AgiloftIcon,
ahrefs: AhrefsIcon,
airtable: AirtableIcon,
airweave: AirweaveIcon,
@@ -210,6 +216,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
attio: AttioIcon,
box: BoxCompanyIcon,
brandfetch: BrandfetchIcon,
brightdata: BrightDataIcon,
browser_use: BrowserUseIcon,
calcom: CalComIcon,
calendly: CalendlyIcon,
@@ -219,7 +226,10 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
cloudflare: CloudflareIcon,
cloudformation: CloudFormationIcon,
cloudwatch: CloudWatchIcon,
confluence: ConfluenceIcon,
confluence_v2: ConfluenceIcon,
crowdstrike: CrowdStrikeIcon,
cursor: CursorIcon,
cursor_v2: CursorIcon,
dagster: DagsterIcon,
databricks: DatabricksIcon,
@@ -237,19 +247,25 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
enrich: EnrichSoIcon,
evernote: EvernoteIcon,
exa: ExaAIIcon,
extend: ExtendIcon,
extend_v2: ExtendIcon,
fathom: FathomIcon,
file: DocumentIcon,
file_v3: DocumentIcon,
firecrawl: FirecrawlIcon,
fireflies: FirefliesIcon,
fireflies_v2: FirefliesIcon,
gamma: GammaIcon,
github: GithubIcon,
github_v2: GithubIcon,
gitlab: GitLabIcon,
gmail: GmailIcon,
gmail_v2: GmailIcon,
gong: GongIcon,
google_ads: GoogleAdsIcon,
google_bigquery: GoogleBigQueryIcon,
google_books: GoogleBooksIcon,
google_calendar: GoogleCalendarIcon,
google_calendar_v2: GoogleCalendarIcon,
google_contacts: GoogleContactsIcon,
google_docs: GoogleDocsIcon,
@@ -260,7 +276,9 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
google_meet: GoogleMeetIcon,
google_pagespeed: GooglePagespeedIcon,
google_search: GoogleIcon,
google_sheets: GoogleSheetsIcon,
google_sheets_v2: GoogleSheetsIcon,
google_slides: GoogleSlidesIcon,
google_slides_v2: GoogleSlidesIcon,
google_tasks: GoogleTasksIcon,
google_translate: GoogleTranslateIcon,
@@ -274,20 +292,24 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
hubspot: HubspotIcon,
huggingface: HuggingFaceIcon,
hunter: HunterIOIcon,
iam: IAMIcon,
image_generator: ImageIcon,
imap: MailServerIcon,
incidentio: IncidentioIcon,
infisical: InfisicalIcon,
intercom: IntercomIcon,
intercom_v2: IntercomIcon,
jina: JinaAIIcon,
jira: JiraIcon,
jira_service_management: JiraServiceManagementIcon,
kalshi: KalshiIcon,
kalshi_v2: KalshiIcon,
ketch: KetchIcon,
knowledge: PackageSearchIcon,
langsmith: LangsmithIcon,
launchdarkly: LaunchDarklyIcon,
lemlist: LemlistIcon,
linear: LinearIcon,
linear_v2: LinearIcon,
linkedin: LinkedInIcon,
linkup: LinkupIcon,
@@ -299,13 +321,16 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
memory: BrainIcon,
microsoft_ad: AzureIcon,
microsoft_dataverse: MicrosoftDataverseIcon,
microsoft_excel: MicrosoftExcelIcon,
microsoft_excel_v2: MicrosoftExcelIcon,
microsoft_planner: MicrosoftPlannerIcon,
microsoft_teams: MicrosoftTeamsIcon,
mistral_parse: MistralIcon,
mistral_parse_v3: MistralIcon,
mongodb: MongoDBIcon,
mysql: MySQLIcon,
neo4j: Neo4jIcon,
notion: NotionIcon,
notion_v2: NotionIcon,
obsidian: ObsidianIcon,
okta: OktaIcon,
@@ -322,12 +347,14 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
postgresql: PostgresIcon,
posthog: PosthogIcon,
profound: ProfoundIcon,
pulse: PulseIcon,
pulse_v2: PulseIcon,
qdrant: QdrantIcon,
quiver: QuiverIcon,
rds: RDSIcon,
reddit: RedditIcon,
redis: RedisIcon,
reducto: ReductoIcon,
reducto_v2: ReductoIcon,
resend: ResendIcon,
revenuecat: RevenueCatIcon,
@@ -352,11 +379,14 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
ssh: SshIcon,
stagehand: StagehandIcon,
stripe: StripeIcon,
sts: STSIcon,
stt: STTIcon,
stt_v2: STTIcon,
supabase: SupabaseIcon,
tailscale: TailscaleIcon,
tavily: TavilyIcon,
telegram: TelegramIcon,
textract: TextractIcon,
textract_v2: TextractIcon,
tinybird: TinybirdIcon,
translate: TranslateIcon,
@@ -367,7 +397,9 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
typeform: TypeformIcon,
upstash: UpstashIcon,
vercel: VercelIcon,
video_generator: VideoIcon,
video_generator_v2: VideoIcon,
vision: EyeIcon,
vision_v2: EyeIcon,
wealthbox: WealthboxIcon,
webflow: WebflowIcon,
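Every `*_v2`/`*_v3` entry added above resolves to the same icon as its bare name. A hypothetical helper that derives such aliases automatically (illustration only; the repo may hard-code the entries exactly as shown in the diff):

```typescript
import type { ComponentType, SVGProps } from 'react'

type IconComponent = ComponentType<SVGProps<SVGSVGElement>>

// Hypothetical: add bare-name aliases for versioned block types,
// e.g. 'github_v2' -> 'github', without overwriting explicit entries.
function withBareAliases(map: Record<string, IconComponent>): Record<string, IconComponent> {
  const out: Record<string, IconComponent> = { ...map }
  for (const [key, icon] of Object.entries(map)) {
    const bare = key.replace(/_v\d+$/, '')
    if (bare !== key && !(bare in out)) out[bare] = icon
  }
  return out
}
```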

View File

@@ -21,7 +21,17 @@ Verwenden Sie Ihre eigenen API-Schlüssel für KI-Modellanbieter anstelle der ge
| OpenAI | Knowledge base embeddings, Agent block |
| Anthropic | Agent block |
| Google | Agent block |
| Mistral | Knowledge base OCR |
| Mistral | Knowledge base OCR, Agent block |
| Fireworks | Agent block |
| Firecrawl | Web scraping, crawling, search, and extraction |
| Exa | AI-powered search and research |
| Serper | Google Search API |
| Linkup | Web search and content retrieval |
| Parallel AI | Web search, extraction, and deep research |
| Perplexity | AI-powered chat and web search |
| Jina AI | Web reading and search |
| Google Cloud | Translate, Maps, PageSpeed, and Books APIs |
| Brandfetch | Brand assets, logos, colors, and company information |
### Setup

View File

@@ -105,9 +105,108 @@ Die Modellaufschlüsselung zeigt:
The prices shown reflect provider rates as of September 10, 2025. Check each provider's documentation for current pricing.
</Callout>
## Hosted Tool Pricing
When workflows use tool blocks with Sim's hosted API keys, costs are charged per operation. Use your own keys via BYOK to pay providers directly.
<Tabs items={['Firecrawl', 'Exa', 'Serper', 'Perplexity', 'Linkup', 'Parallel AI', 'Jina AI', 'Google Cloud', 'Brandfetch']}>
<Tab>
**Firecrawl** - Web scraping, crawling, search, and extraction
| Operation | Cost |
|-----------|------|
| Scrape | $0.001 per credit used |
| Crawl | $0.001 per credit used |
| Search | $0.001 per credit used |
| Extract | $0.001 per credit used |
| Map | $0.001 per credit used |
</Tab>
<Tab>
**Exa** - AI-powered search and research
| Operation | Cost |
|-----------|------|
| Search | Dynamic (returned by API) |
| Get Contents | Dynamic (returned by API) |
| Find Similar Links | Dynamic (returned by API) |
| Answer | Dynamic (returned by API) |
</Tab>
<Tab>
**Serper** - Google Search API
| Operation | Cost |
|-----------|------|
| Search (≤10 results) | $0.001 |
| Search (>10 results) | $0.002 |
</Tab>
<Tab>
**Perplexity** - AI-powered chat and web search
| Operation | Cost |
|-----------|------|
| Search | $0.005 per request |
| Chat | Token-based (varies by model) |
</Tab>
<Tab>
**Linkup** - Web search and content retrieval
| Operation | Cost |
|-----------|------|
| Standard search | ~$0.006 |
| Deep search | ~$0.055 |
</Tab>
<Tab>
**Parallel AI** - Web search, extraction, and deep research
| Operation | Cost |
|-----------|------|
| Search (≤10 results) | $0.005 |
| Search (>10 results) | $0.005 + $0.001 per additional result |
| Extract | $0.001 per URL |
| Deep Research | $0.005–$2.40 (varies by processor tier) |
</Tab>
<Tab>
**Jina AI** - Web reading and search
| Operation | Cost |
|-----------|------|
| Read URL | $0.20 per 1M tokens |
| Search | $0.20 per 1M tokens (minimum 10K tokens) |
</Tab>
<Tab>
**Google Cloud** - Translate, Maps, PageSpeed, and Books APIs
| Operation | Cost |
|-----------|------|
| Translate / Detect | $0.00002 per character |
| Maps (Geocode, Directions, Distance Matrix, Elevation, Timezone, Reverse Geocode, Geolocate, Validate Address) | $0.005 per request |
| Maps (Snap to Roads) | $0.01 per request |
| Maps (Place Details) | $0.017 per request |
| Maps (Places Search) | $0.032 per request |
| PageSpeed | Free |
| Books (Search, Details) | Free |
</Tab>
<Tab>
**Brandfetch** - Brand assets, logos, colors, and company information
| Operation | Cost |
|-----------|------|
| Search | Free |
| Get Brand | $0.04 per request |
</Tab>
</Tabs>
## Bring Your Own Key (BYOK)
You can use your own API keys for hosted models (OpenAI, Anthropic, Google, Mistral) under **Settings → BYOK** to pay base prices. Keys are encrypted and apply workspace-wide.
You can use your own API keys for supported providers (OpenAI, Anthropic, Google, Mistral, Fireworks, Firecrawl, Exa, Serper, Linkup, Parallel AI, Perplexity, Jina AI, Google Cloud, Brandfetch) under **Settings → BYOK** to pay base prices. Keys are encrypted and apply workspace-wide.
## Cost Optimization Strategies

View File

@@ -51,7 +51,7 @@ Willkommen bei Sim, einem visuellen Workflow-Builder für KI-Anwendungen. Erstel
<Card title="MCP-Integration" href="/mcp">
Externe Dienste mit dem Model Context Protocol verbinden
</Card>
<Card title="SDKs" href="/sdks">
<Card title="SDKs" href="/api-reference">
Integrate Sim into your applications
</Card>
</Cards>

View File

@@ -0,0 +1,9 @@
{
"pages": [
"listPausedExecutions",
"getPausedExecution",
"getPausedExecutionByResumePath",
"getPauseContext",
"resumeExecution"
]
}

View File

@@ -10,6 +10,7 @@
"typescript",
"---Endpoints---",
"(generated)/workflows",
"(generated)/human-in-the-loop",
"(generated)/logs",
"(generated)/usage",
"(generated)/audit-logs",

View File

@@ -65,14 +65,14 @@ Execute a workflow with optional input data.
```python
result = client.execute_workflow(
"workflow-id",
input_data={"message": "Hello, world!"},
input={"message": "Hello, world!"},
timeout=30.0 # 30 seconds
)
```
**Parameters:**
- `workflow_id` (str): The ID of the workflow to execute
- `input_data` (dict, optional): Input data to pass to the workflow
- `input` (dict, optional): Input data to pass to the workflow
- `timeout` (float, optional): Timeout in seconds (default: 30.0)
- `stream` (bool, optional): Enable streaming responses (default: False)
- `selected_outputs` (list[str], optional): Block outputs to stream in `blockName.attribute` format (e.g., `["agent1.content"]`)
@@ -144,7 +144,7 @@ Execute a workflow with automatic retry on rate limit errors using exponential b
```python
result = client.execute_with_retry(
"workflow-id",
input_data={"message": "Hello"},
input={"message": "Hello"},
timeout=30.0,
max_retries=3, # Maximum number of retries
initial_delay=1.0, # Initial delay in seconds
@@ -155,7 +155,7 @@ result = client.execute_with_retry(
**Parameters:**
- `workflow_id` (str): The ID of the workflow to execute
- `input_data` (dict, optional): Input data to pass to the workflow
- `input` (dict, optional): Input data to pass to the workflow
- `timeout` (float, optional): Timeout in seconds
- `stream` (bool, optional): Enable streaming responses
- `selected_outputs` (list, optional): Block outputs to stream
@@ -359,7 +359,7 @@ def run_workflow():
# Execute the workflow
result = client.execute_workflow(
"my-workflow-id",
input_data={
input={
"message": "Process this data",
"user_id": "12345"
}
@@ -488,7 +488,7 @@ def execute_async():
# Start async execution
result = client.execute_workflow(
"workflow-id",
input_data={"data": "large dataset"},
input={"data": "large dataset"},
async_execution=True # Execute asynchronously
)
@@ -533,7 +533,7 @@ def execute_with_retry_handling():
# Automatically retries on rate limit
result = client.execute_with_retry(
"workflow-id",
input_data={"message": "Process this"},
input={"message": "Process this"},
max_retries=5,
initial_delay=1.0,
max_delay=60.0,
@@ -615,7 +615,7 @@ def execute_with_streaming():
# Enable streaming for specific block outputs
result = client.execute_workflow(
"workflow-id",
input_data={"message": "Count to five"},
input={"message": "Count to five"},
stream=True,
selected_outputs=["agent1.content"] # Use blockName.attribute format
)
@@ -758,4 +758,15 @@ Configure the client using environment variables:
## License
Apache-2.0
Apache-2.0
import { FAQ } from '@/components/ui/faq'
<FAQ items={[
{ question: "Do I need to deploy a workflow before I can execute it via the SDK?", answer: "Yes. Workflows must be deployed before they can be executed through the SDK. You can use the validate_workflow() method to check whether a workflow is deployed and ready. If it returns False, deploy the workflow from the Sim UI first and create or select an API key during deployment." },
{ question: "What is the difference between sync and async execution?", answer: "Sync execution (the default) blocks until the workflow completes and returns the full result. Async execution (async_execution=True) returns immediately with a task ID that you can poll using get_job_status(). Use async mode for long-running workflows to avoid request timeouts. Async job statuses include queued, processing, completed, failed, and cancelled." },
{ question: "How does the SDK handle rate limiting?", answer: "The SDK provides built-in rate limiting support through the execute_with_retry() method. It uses exponential backoff (1s, 2s, 4s, 8s...) with 25% jitter to avoid thundering herd problems. If the API returns a retry-after header, that value is used instead. You can configure max_retries, initial_delay, max_delay, and backoff_multiplier. Use get_rate_limit_info() to check your current rate limit status." },
{ question: "Can I use the Python SDK as a context manager?", answer: "Yes. The SimStudioClient supports Python's context manager protocol. Use it with the 'with' statement to automatically close the underlying HTTP session when you are done, which is especially useful for scripts that create and discard client instances." },
{ question: "How do I handle different types of errors from the SDK?", answer: "The SDK raises SimStudioError with a code property for API-specific errors. Common error codes are UNAUTHORIZED (invalid API key), TIMEOUT (request timed out), RATE_LIMIT_EXCEEDED (too many requests), USAGE_LIMIT_EXCEEDED (billing limit reached), and EXECUTION_ERROR (workflow failed). Use the error code to implement targeted error handling and recovery logic." },
{ question: "How do I monitor my API usage and remaining quota?", answer: "Use the get_usage_limits() method to check your current usage. It returns sync and async rate limit details (limit, remaining, reset time, whether you are currently limited), plus your current period cost, usage limit, and plan tier. This lets you monitor consumption and alert before hitting limits." },
]} />

View File

@@ -78,16 +78,15 @@ new SimStudioClient(config: SimStudioConfig)
Execute a workflow with optional input data.
```typescript
const result = await client.executeWorkflow('workflow-id', {
input: { message: 'Hello, world!' },
const result = await client.executeWorkflow('workflow-id', { message: 'Hello, world!' }, {
timeout: 30000 // 30 seconds
});
```
**Parameters:**
- `workflowId` (string): The ID of the workflow to execute
- `input` (any, optional): Input data to pass to the workflow
- `options` (ExecutionOptions, optional):
- `input` (any): Input data to pass to the workflow
- `timeout` (number): Timeout in milliseconds (default: 30000)
- `stream` (boolean): Enable streaming responses (default: false)
- `selectedOutputs` (string[]): Block outputs to stream in `blockName.attribute` format (e.g., `["agent1.content"]`)
@@ -158,8 +157,7 @@ if (status.status === 'completed') {
Execute a workflow with automatic retry on rate limit errors using exponential backoff.
```typescript
const result = await client.executeWithRetry('workflow-id', {
input: { message: 'Hello' },
const result = await client.executeWithRetry('workflow-id', { message: 'Hello' }, {
timeout: 30000
}, {
maxRetries: 3, // Maximum number of retries
@@ -171,6 +169,7 @@ const result = await client.executeWithRetry('workflow-id', {
**Parameters:**
- `workflowId` (string): The ID of the workflow to execute
- `input` (any, optional): Input data to pass to the workflow
- `options` (ExecutionOptions, optional): Same as `executeWorkflow()`
- `retryOptions` (RetryOptions, optional):
- `maxRetries` (number): Maximum number of retries (default: 3)
@@ -389,10 +388,8 @@ async function runWorkflow() {
// Execute the workflow
const result = await client.executeWorkflow('my-workflow-id', {
input: {
message: 'Process this data',
userId: '12345'
}
});
if (result.success) {
@@ -508,8 +505,7 @@ app.post('/execute-workflow', async (req, res) => {
try {
const { workflowId, input } = req.body;
const result = await client.executeWorkflow(workflowId, {
input,
const result = await client.executeWorkflow(workflowId, input, {
timeout: 60000
});
@@ -555,8 +551,7 @@ export default async function handler(
try {
const { workflowId, input } = req.body;
const result = await client.executeWorkflow(workflowId, {
input,
const result = await client.executeWorkflow(workflowId, input, {
timeout: 30000
});
@@ -586,9 +581,7 @@ const client = new SimStudioClient({
async function executeClientSideWorkflow() {
try {
const result = await client.executeWorkflow('workflow-id', {
input: {
userInput: 'Hello from browser'
}
});
console.log('Workflow result:', result);
@@ -642,10 +635,8 @@ Alternatively, you can manually provide files using the URL format:
// Include files under the field name from your API trigger's input format
const result = await client.executeWorkflow('workflow-id', {
input: {
documents: files, // Must match your workflow's "files" field name
instructions: 'Analyze these documents'
}
});
console.log('Result:', result);
@@ -669,10 +660,8 @@ Alternatively, you can manually provide files using the URL format:
// Include files under the field name from your API trigger's input format
const result = await client.executeWorkflow('workflow-id', {
input: {
documents: [file], // Must match your workflow's "files" field name
query: 'Summarize this document'
}
});
```
</Tab>
@@ -712,8 +701,7 @@ export function useWorkflow(): UseWorkflowResult {
setResult(null);
try {
const workflowResult = await client.executeWorkflow(workflowId, {
input,
const workflowResult = await client.executeWorkflow(workflowId, input, {
timeout: 30000
});
setResult(workflowResult);
@@ -774,8 +762,7 @@ const client = new SimStudioClient({
async function executeAsync() {
try {
// Start async execution
const result = await client.executeWorkflow('workflow-id', {
input: { data: 'large dataset' },
const result = await client.executeWorkflow('workflow-id', { data: 'large dataset' }, {
async: true // Execute asynchronously
});
@@ -823,9 +810,7 @@ const client = new SimStudioClient({
async function executeWithRetryHandling() {
try {
// Automatically retries on rate limit
const result = await client.executeWithRetry('workflow-id', {
input: { message: 'Process this' }
}, {
const result = await client.executeWithRetry('workflow-id', { message: 'Process this' }, {}, {
maxRetries: 5,
initialDelay: 1000,
maxDelay: 60000,
@@ -908,8 +893,7 @@ const client = new SimStudioClient({
async function executeWithStreaming() {
try {
// Enable streaming for specific block outputs
const result = await client.executeWorkflow('workflow-id', {
input: { message: 'Count to five' },
const result = await client.executeWorkflow('workflow-id', { message: 'Count to five' }, {
stream: true,
selectedOutputs: ['agent1.content'] // Use blockName.attribute format
});
@@ -1033,3 +1017,14 @@ function StreamingWorkflow() {
## License
Apache-2.0
import { FAQ } from '@/components/ui/faq'
<FAQ items={[
{ question: "Do I need to deploy a workflow before I can execute it via the SDK?", answer: "Yes. Workflows must be deployed before they can be executed through the SDK. You can use the validateWorkflow() method to check whether a workflow is deployed and ready. If it returns false, deploy the workflow from the Sim UI first and create or select an API key during deployment." },
{ question: "What is the difference between sync and async execution?", answer: "Sync execution (the default) blocks until the workflow completes and returns the full result. Async execution returns immediately with a task ID that you can poll using getJobStatus(). Use async mode for long-running workflows to avoid request timeouts. Async job statuses include queued, processing, completed, failed, and cancelled." },
{ question: "How does streaming work with the SDK?", answer: "Enable streaming by setting stream: true and specifying selectedOutputs with block names and attributes in blockName.attribute format (e.g., ['agent1.content']). The response uses Server-Sent Events (SSE) format, sending incremental chunks as the workflow executes. Each chunk includes the blockId and the text content. A final done event includes the execution metadata." },
{ question: "How does the SDK handle rate limiting?", answer: "The SDK provides built-in rate limiting support through the executeWithRetry() method. It uses exponential backoff (1s, 2s, 4s, 8s...) with 25% jitter to avoid thundering herd problems. If the API returns a retry-after header, that value is used instead. You can configure maxRetries, initialDelay, maxDelay, and backoffMultiplier. Use getRateLimitInfo() to check your current rate limit status." },
{ question: "Is it safe to use the SDK in browser-side code?", answer: "You can use the SDK in the browser, but you should not expose your API key in client-side code. In production, use a backend proxy server to handle SDK calls, or use a public API key with limited permissions. The SDK works with both Node.js and browser environments, but sensitive keys should stay server-side." },
{ question: "How do I send files to a workflow through the SDK?", answer: "File objects are automatically detected and converted to base64 format. Include them in the input object under the field name that matches your workflow's API trigger input format. In the browser, pass File objects directly from file inputs. In Node.js, create File objects from buffers. You can also provide files as URL references with type, data, name, and mime fields." },
]} />
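For completeness, a minimal error-handling sketch built on the error codes the FAQs describe. Assumptions: the package name, and that the TypeScript SDK exposes a `SimStudioError` with a `code` property mirroring the Python SDK:

```typescript
import { SimStudioClient, SimStudioError } from 'simstudio-ts-sdk' // package name assumed

const client = new SimStudioClient({ apiKey: process.env.SIM_API_KEY! })

async function runSafely(workflowId: string, input: unknown) {
  try {
    // Corrected signature: executeWorkflow(id, input, options)
    return await client.executeWorkflow(workflowId, input, { timeout: 30000 })
  } catch (err) {
    if (err instanceof SimStudioError) {
      switch (err.code) {
        case 'RATE_LIMIT_EXCEEDED':
          // Back off, or switch to executeWithRetry() for built-in backoff.
          return client.executeWithRetry(workflowId, input, {}, { maxRetries: 3 })
        case 'TIMEOUT':
          // Raise the timeout option, or use async execution for long workflows.
          throw err
        default:
          // UNAUTHORIZED, USAGE_LIMIT_EXCEEDED, EXECUTION_ERROR, etc.
          throw err
      }
    }
    throw err
  }
}
```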

View File

@@ -2,10 +2,12 @@
title: Function
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'
The Function block executes custom JavaScript or TypeScript code in your workflows. Transform data, perform calculations, or implement custom logic.
The Function block executes custom JavaScript, TypeScript, or Python code in your workflows. Transform data, perform calculations, or implement custom logic.
<div className="flex justify-center">
<Image
@@ -41,6 +43,8 @@ Input → Function (Validate & Sanitize) → API (Save to Database)
### Example: Loyalty Score Calculator
<Tabs items={['JavaScript', 'Python']}>
<Tab value="JavaScript">
```javascript title="loyalty-calculator.js"
// Process customer data and calculate loyalty score
const { purchaseHistory, accountAge, supportTickets } = <agent>;
@@ -64,6 +68,120 @@ return {
metrics: { spendScore, frequencyScore, supportScore }
};
```
</Tab>
<Tab value="Python">
```python title="loyalty-calculator.py"
import json
# Reference outputs from other blocks using angle bracket syntax
data = json.loads('<agent>')
purchase_history = data["purchaseHistory"]
account_age = data["accountAge"]
support_tickets = data["supportTickets"]
# Calculate metrics
total_spent = sum(p["amount"] for p in purchase_history)
purchase_frequency = len(purchase_history) / (account_age / 365)
ticket_ratio = support_tickets["resolved"] / support_tickets["total"]
# Calculate loyalty score (0-100)
spend_score = min(total_spent / 1000 * 30, 30)
frequency_score = min(purchase_frequency * 20, 40)
support_score = ticket_ratio * 30
loyalty_score = round(spend_score + frequency_score + support_score)
tier = "Platinum" if loyalty_score >= 80 else "Gold" if loyalty_score >= 60 else "Silver"
result = {
"customer": data["name"],
"loyaltyScore": loyalty_score,
"loyaltyTier": tier,
"metrics": {
"spendScore": spend_score,
"frequencyScore": frequency_score,
"supportScore": support_score
}
}
print(json.dumps(result))
```
</Tab>
</Tabs>
## Python Support
The Function block supports Python as an alternative to JavaScript. Python code runs in a secure [E2B](https://e2b.dev) cloud sandbox.
<div className="flex justify-center">
<Image
src="/static/blocks/function-python.png"
alt="Function block with Python selected"
width={400}
height={500}
className="my-6"
/>
</div>
### Enabling Python
Select **Python** from the language dropdown in the Function block. Python execution requires E2B to be enabled on your Sim instance.
<Callout type="warn">
If you don't see Python as an option in the language dropdown, E2B is not enabled. This only applies to self-hosted instances — E2B is enabled by default on sim.ai.
</Callout>
<Callout type="info">
Python code always runs in the E2B sandbox, even for simple scripts without imports. This ensures a secure, isolated execution environment.
</Callout>
### Returning Results
In Python, print your result as JSON to stdout. The Function block captures stdout and makes it available via `<function.result>`:
```python title="example.py"
import json
data = {"status": "processed", "count": 42}
print(json.dumps(data))
```
### Available Libraries
The E2B sandbox includes the Python standard library (`json`, `re`, `datetime`, `math`, `os`, `collections`, etc.) and common packages like `matplotlib` for visualization. Charts generated with matplotlib are captured as images automatically.
<Callout type="info">
The exact set of pre-installed packages depends on the E2B sandbox configuration. If a package you need isn't available, consider calling an external API from your code instead.
</Callout>
### Matplotlib Charts
When your Python code generates matplotlib figures, they are automatically captured and returned as base64-encoded PNG images in the output:
```python title="chart.py"
import matplotlib.pyplot as plt
import json
data = json.loads('<api.data>')
plt.figure(figsize=(10, 6))
plt.bar(data["labels"], data["values"])
plt.title("Monthly Revenue")
plt.xlabel("Month")
plt.ylabel("Revenue ($)")
plt.savefig("chart.png")
plt.show()
```
{/* TODO: Screenshot of Python code execution output in the logs panel */}
### JavaScript vs. Python
| | JavaScript | Python |
|--|-----------|--------|
| **Execution** | Local VM (fast) or E2B sandbox (with imports) | Always E2B sandbox |
| **Returning results** | `return { ... }` | `print(json.dumps({ ... }))` |
| **HTTP requests** | `fetch()` built-in | `requests` or `httpx` |
| **Best for** | Quick transforms, JSON manipulation | Data science, charting, complex math |
## Best Practices

View File

@@ -78,7 +78,7 @@ Defines the fields approvers fill in when responding. This data becomes availabl
}
```
Access resume data in downstream blocks using `<blockId.resumeInput.fieldName>`.
Access resume data in downstream blocks using `<blockId.fieldName>`.
## Approval Methods
@@ -93,11 +93,12 @@ Access resume data in downstream blocks using `<blockId.resumeInput.fieldName>`.
<Tab>
### REST API
Programmatically resume workflows using the resume endpoint. The `contextId` is available from the block's `resumeEndpoint` output or from the paused execution detail.
Programmatically resume workflows using the resume endpoint. The `contextId` is available from the block's `resumeEndpoint` output or from the `_resume` object in the paused execution response.
```bash
POST /api/resume/{workflowId}/{executionId}/{contextId}
Content-Type: application/json
X-API-Key: your-api-key
{
"input": {
@@ -107,23 +108,56 @@ Access resume data in downstream blocks using `<blockId.resumeInput.fieldName>`.
}
```
The response includes a new `executionId` for the resumed execution:
The resume endpoint automatically respects the execution mode used in the original execute call:
- **Sync mode** (default) — The response waits for the remaining workflow to complete and returns the full result:
```json
{
"status": "started",
"success": true,
"status": "completed",
"executionId": "<resumeExecutionId>",
"message": "Resume execution started."
"output": { ... },
"metadata": { "duration": 1234, "startTime": "...", "endTime": "..." }
}
```
To poll execution progress after resuming, connect to the SSE stream:
If the resumed workflow hits another HITL block, the response returns `"status": "paused"` with new `_resume` URLs in the output.
```bash
GET /api/workflows/{workflowId}/executions/{resumeExecutionId}/stream
- **Stream mode** (`stream: true` on the original execute call) — The resume response streams SSE events with `selectedOutputs` chunks, just like the initial execution.
- **Async mode** (`X-Execution-Mode: async` on the original execute call) — The resume dispatches execution to a background worker and returns immediately with `202`, including a `jobId` and `statusUrl` for polling:
```json
{
"success": true,
"async": true,
"jobId": "<jobId>",
"executionId": "<resumeExecutionId>",
"message": "Resume execution queued",
"statusUrl": "/api/jobs/<jobId>"
}
```
Build custom approval UIs or integrate with existing systems.
#### Polling execution status
Poll the `statusUrl` from the async response to check when the resume completes:
```bash
GET /api/jobs/{jobId}
X-API-Key: your-api-key
```
Returns job status and, when completed, the full workflow output.
To check on a paused execution's pause points and resume links:
```bash
GET /api/resume/{workflowId}/{executionId}
X-API-Key: your-api-key
```
Returns the paused execution detail with all pause points, their statuses, and resume links. Returns `404` when the execution has completed and is no longer paused.
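A minimal sketch of the resume-and-poll flow using `fetch` (endpoint paths and status values from above; `BASE_URL` and the example input are placeholders):

```typescript
// Sketch: resume a paused execution, then poll the job if the original
// run used async mode. BASE_URL and the input fields are placeholders.
const BASE_URL = 'https://sim.ai'
const API_KEY = process.env.SIM_API_KEY!

async function resumeAndWait(workflowId: string, executionId: string, contextId: string) {
  const res = await fetch(`${BASE_URL}/api/resume/${workflowId}/${executionId}/${contextId}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'X-API-Key': API_KEY },
    body: JSON.stringify({ input: { approved: true, comments: 'Looks good' } }),
  })
  const body = await res.json()

  // Async mode: 202 with a statusUrl to poll until the job settles.
  if (res.status === 202 && body.statusUrl) {
    while (true) {
      await new Promise((r) => setTimeout(r, 2000)) // poll every 2s
      const jobRes = await fetch(`${BASE_URL}${body.statusUrl}`, {
        headers: { 'X-API-Key': API_KEY },
      })
      const job = await jobRes.json()
      if (job.status !== 'queued' && job.status !== 'processing') return job
    }
  }

  // Sync mode: the full result (or a new pause) comes back directly.
  return body
}
```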
</Tab>
<Tab>
### Webhook
@@ -132,6 +166,53 @@ Access resume data in downstream blocks using `<blockId.resumeInput.fieldName>`.
</Tab>
</Tabs>
## API Execute Behavior
When triggering a workflow via the execute API (`POST /api/workflows/{id}/execute`), HITL blocks cause the execution to pause and return the `_resume` data in the response:
<Tabs items={['Sync (JSON)', 'Stream (SSE)', 'Async']}>
<Tab>
The response includes the full pause data with resume URLs:
```json
{
"success": true,
"executionId": "<executionId>",
"output": {
"data": {
"operation": "human",
"_resume": {
"apiUrl": "/api/resume/{workflowId}/{executionId}/{contextId}",
"uiUrl": "/resume/{workflowId}/{executionId}",
"contextId": "<contextId>",
"executionId": "<executionId>",
"workflowId": "<workflowId>"
}
}
}
}
```
</Tab>
<Tab>
Blocks before the HITL stream their `selectedOutputs` normally. When execution pauses, the final SSE event includes `status: "paused"` and the `_resume` data:
```
data: {"blockId":"agent1","chunk":"streamed content..."}
data: {"event":"final","data":{"success":true,"output":{...,"_resume":{...}},"status":"paused"}}
data: "[DONE]"
```
On resume, blocks after the HITL stream their `selectedOutputs` the same way.
<Callout type="info">
HITL blocks are automatically excluded from the `selectedOutputs` dropdown since their data is always included in the pause response.
</Callout>
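A sketch of consuming that SSE stream with `fetch` (event field names taken from the sample above; the request body and buffering logic are illustrative):

```typescript
// Sketch: stream an execute call and stop when it pauses at a HITL block.
// URL, API key handling, and the request body are placeholders.
async function streamUntilPaused(url: string, apiKey: string) {
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'X-API-Key': apiKey },
    body: JSON.stringify({ stream: true, selectedOutputs: ['agent1.content'] }),
  })
  const reader = res.body!.getReader()
  const decoder = new TextDecoder()
  let buffer = ''
  while (true) {
    const { done, value } = await reader.read()
    if (done) return
    buffer += decoder.decode(value, { stream: true })
    const lines = buffer.split('\n')
    buffer = lines.pop()! // keep any partial line for the next chunk
    for (const line of lines) {
      if (!line.startsWith('data: ')) continue
      const payload = line.slice('data: '.length)
      if (payload === '"[DONE]"') return
      const event = JSON.parse(payload)
      if (event.chunk) process.stdout.write(event.chunk) // streamed block output
      if (event.event === 'final' && event.data?.status === 'paused') {
        console.log('\nResume URL:', event.data.output?._resume?.apiUrl)
      }
    }
  }
}
```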
</Tab>
<Tab>
Returns `202` immediately. Use the polling endpoint to check when the execution pauses.
</Tab>
</Tabs>
## Common Use Cases
**Content Approval** - Review AI-generated content before publishing
@@ -161,9 +242,9 @@ Agent (Generate) → Human in the Loop (QA) → Gmail (Send)
**`response`** - Display data shown to the approver (json)
**`submission`** - Form submission data from the approver (json)
**`submittedAt`** - ISO timestamp when the workflow was resumed
**`resumeInput.*`** - All fields defined in Resume Form become available after the workflow resumes
**`<fieldName>`** - All fields defined in Resume Form become available at the top level after the workflow resumes
Access using `<blockId.resumeInput.fieldName>`.
Access using `<blockId.fieldName>`.
## Example
@@ -187,7 +268,7 @@ Access using `<blockId.resumeInput.fieldName>`.
**Downstream Usage:**
```javascript
// Condition block
<approval1.resumeInput.approved> === true
<approval1.approved> === true
```
The example below shows the approval portal as seen by an approver after the workflow is paused. Approvers can review the data and provide inputs as part of resuming the workflow. The approval portal can be accessed directly via its unique URL, `<blockId.url>`.
@@ -204,7 +285,7 @@ The example below shows an approval portal as seen by an approver after the work
<FAQ items={[
{ question: "How long does the workflow stay paused?", answer: "The workflow pauses indefinitely until a human provides input through the approval portal, the REST API, or a webhook. There is no automatic timeout — it will wait until someone responds." },
{ question: "What notification channels can I use to alert approvers?", answer: "You can configure notifications through Slack, Gmail, Microsoft Teams, SMS (via Twilio), or custom webhooks. Include the approval URL in your notification message so approvers can access the portal directly." },
{ question: "How do I access the approver's input in downstream blocks?", answer: "Use the syntax <blockId.resumeInput.fieldName> to reference specific fields from the resume form. For example, if your block ID is 'approval1' and the form has an 'approved' field, use <approval1.resumeInput.approved>." },
{ question: "How do I access the approver's input in downstream blocks?", answer: "Use the syntax <blockId.fieldName> to reference specific fields from the resume form. For example, if your block name is 'approval1' and the form has an 'approved' field, use <approval1.approved>." },
{ question: "Can I chain multiple Human in the Loop blocks for multi-stage approvals?", answer: "Yes. You can place multiple Human in the Loop blocks in sequence to create multi-stage approval workflows. Each block pauses independently and can have its own notification configuration and resume form fields." },
{ question: "Can I resume the workflow programmatically without the portal?", answer: "Yes. Each block exposes a resume API endpoint that you can call with a POST request containing the form data as JSON. This lets you build custom approval UIs or integrate with existing systems like Jira or ServiceNow." },
{ question: "What outputs are available after the workflow resumes?", answer: "The block outputs include the approval portal URL, the resume API endpoint URL, the display data shown to the approver, the form submission data, the raw resume input, and an ISO timestamp of when the workflow was resumed." },

View File

@@ -1,225 +1,70 @@
---
title: Copilot
description: Your per-workflow AI assistant for building and editing workflows.
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Card, Cards } from 'fumadocs-ui/components/card'
import { Image } from '@/components/ui/image'
import { MessageCircle, Hammer, ListChecks, Zap, Globe, Paperclip, History, RotateCcw, Brain } from 'lucide-react'
import { FAQ } from '@/components/ui/faq'
Copilot is your in-editor assistant that helps you build and edit workflows. It can:
Copilot is the AI assistant built into every workflow editor. It is scoped to the workflow you have open — it reads the current structure, makes changes directly, and saves checkpoints so you can revert if needed.
- **Explain**: Answer questions about Sim and your current workflow
- **Guide**: Suggest edits and best practices
- **Build**: Add blocks, wire connections, and configure settings
- **Debug**: Analyze execution issues and optimize performance
For workspace-wide tasks (managing multiple workflows, running research, working with tables, scheduling jobs), use [Mothership](/mothership).
<Callout type="info">
Copilot is a Sim-managed service. For self-hosted deployments:
1. Go to [sim.ai](https://sim.ai) → Settings → Copilot and generate a Copilot API key
2. Set `COPILOT_API_KEY` in your self-hosted environment
Copilot is a Sim-managed service. For self-hosted deployments, go to [sim.ai](https://sim.ai) → Settings → Copilot, generate a Copilot API key, then set `COPILOT_API_KEY` in your self-hosted environment.
</Callout>
## Modes
{/* TODO: Screenshot of the workflow editor with the Copilot panel open on the right side — showing a conversation with a workflow change applied. Ideally shows a message from the user, a response from Copilot, and the checkpoint icon visible on the message. */}
Switch between modes using the mode selector at the bottom of the input area.
## What Copilot Can Do
<Cards>
<Card
title={
<span className="inline-flex items-center gap-2">
<MessageCircle className="h-4 w-4 text-muted-foreground" />
Ask
</span>
}
>
<div className="m-0 text-sm">
Q&A mode for explanations, guidance, and suggestions without making changes to your workflow.
</div>
</Card>
<Card
title={
<span className="inline-flex items-center gap-2">
<Hammer className="h-4 w-4 text-muted-foreground" />
Build
</span>
}
>
<div className="m-0 text-sm">
Workflow building mode. Copilot can add blocks, wire connections, edit configurations, and debug issues.
</div>
</Card>
<Card
title={
<span className="inline-flex items-center gap-2">
<ListChecks className="h-4 w-4 text-muted-foreground" />
Plan
</span>
}
>
<div className="m-0 text-sm">
Creates a step-by-step implementation plan for your workflow without making any changes. Helps you think through the approach before building.
</div>
</Card>
</Cards>
Copilot can read and modify the workflow you are currently editing:
## Models
- Add, configure, and connect blocks
- Edit existing block configurations
- Delete blocks and connections
- Debug failures by reading execution logs
- Answer questions about the workflow or how Sim works
Select your preferred AI model using the model selector at the bottom right of the input area.
## Chat History
**Available Models:**
- Claude 4.6 Opus (default), 4.5 Opus, Sonnet, Haiku
- GPT 5.2 Codex, Pro
- Gemini 3 Pro
Choose based on your needs: faster models for simple tasks, more capable models for complex workflows.
## Context Menu (@)
Use the `@` symbol to reference resources and give Copilot more context:
| Reference | Description |
|-----------|-------------|
| **Chats** | Previous copilot conversations |
| **Workflows** | Any workflow in your workspace |
| **Workflow Blocks** | Blocks in the current workflow |
| **Blocks** | Block types and templates |
| **Knowledge** | Uploaded documents and knowledge bases |
| **Docs** | Sim documentation |
| **Templates** | Workflow templates |
| **Logs** | Execution logs and results |
Type `@` in the input field to open the context menu, then search or browse to find what you need.
## Slash Commands (/)
Use slash commands for quick actions:
| Command | Description |
|---------|-------------|
| `/fast` | Fast mode execution |
| `/research` | Research and exploration mode |
| `/actions` | Execute agent actions |
**Web Commands:**
| Command | Description |
|---------|-------------|
| `/search` | Search the web |
| `/read` | Read a specific URL |
| `/scrape` | Scrape web page content |
| `/crawl` | Crawl multiple pages |
Type `/` in the input field to see available commands.
## Chat Management
### Starting a New Chat
Click the **+** button in the Copilot header to start a fresh conversation.
### Chat History
Click **History** to view previous conversations grouped by date. You can:
- Click a chat to resume it
- Delete chats you no longer need
### Editing Messages
Hover over any of your messages and click **Edit** to modify and resend it. This is useful for refining your prompts.
### Message Queue
If you send a message while Copilot is still responding, it gets queued. You can:
- View queued messages in the expandable queue panel
- Send a queued message immediately (aborts current response)
- Remove messages from the queue
Click **History** (clock icon) in the Copilot header to see past conversations for this workflow. Click any chat to resume it, or click **+** to start a new one.
## File Attachments
Click the attachment icon to upload files with your message. Supported file types include:
- Images (preview thumbnails shown)
- PDFs
- Text files, JSON, XML
- Other document formats
Click the attachment icon in the input to upload files alongside your message. Copilot can read images, PDFs, and text-based files as context.
Files are displayed as clickable thumbnails that open in a new tab.
## Checkpoints & Changes
When Copilot makes changes to your workflow, it saves a checkpoint of the previous state so you can revert if needed.
### Reverting Changes
Hover over a Copilot message and click the checkpoints icon to see the saved workflow states for that message, then click **Revert** on the state you want to restore. A confirmation dialog will warn that reverting cannot be undone.
### Accepting Changes
When Copilot proposes changes, you can:
- **Accept**: Apply the proposed changes (`Mod+Shift+Enter`)
- **Reject**: Dismiss the changes and keep your current workflow
## Thinking
For complex requests, Copilot may show its reasoning in an expandable thinking block before responding. Blocks auto-expand while Copilot is thinking and collapse once the response is complete; each block shows how long the thinking took and can be expanded or collapsed manually, helping you understand how Copilot arrived at its solution.
## Options Selection
When Copilot presents multiple options, you can select using:
| Control | Action |
|---------|--------|
| **1-9** | Select option by number |
| **Arrow Up/Down** | Navigate between options |
| **Enter** | Select highlighted option |
Selected options are highlighted; unselected options appear struck through.
## Keyboard Shortcuts
| Shortcut | Action |
|----------|--------|
| `@` | Open context menu |
| `/` | Open slash commands |
| `Arrow Up/Down` | Navigate menu items |
| `Enter` | Select menu item |
| `Esc` | Close menus |
| `Mod+Shift+Enter` | Accept Copilot changes |
## Usage Limits
Copilot usage is billed per token from the underlying LLM and counts toward your plan's credit usage. If you reach your usage limit, enable on-demand billing from Settings → Subscription to continue using Copilot beyond your plan's included credits.
<Callout type="info">
See the [Cost Calculation page](/execution/costs) for billing and plan details.
</Callout>
## Copilot MCP
You can use Copilot as an MCP server to build, test, and manage Sim workflows from external editors — Cursor, Claude Code, Claude Desktop, and VS Code.
### Generating a Copilot API Key
To connect to the Copilot MCP server, you need a **Copilot API key**:
1. Go to [sim.ai](https://sim.ai) and sign in
2. Navigate to **Settings** → **Copilot**
3. Click **Generate API Key**
4. Copy the key — it is only shown once
The key will look like `sk-sim-copilot-...`.
### Cursor
Add to `.cursor/mcp.json`:
```json
{
  "mcpServers": {
    "sim-copilot": {
      "url": "https://www.sim.ai/api/mcp/copilot",
      "headers": {
        "X-API-Key": "YOUR_COPILOT_API_KEY"
      }
    }
  }
}
```
Replace `YOUR_COPILOT_API_KEY` with the key you generated above.
### Claude Code
Run the following command to add the Copilot MCP server:
```bash
claude mcp add sim-copilot \
--transport http \
https://www.sim.ai/api/mcp/copilot \
--header "X-API-Key: YOUR_COPILOT_API_KEY"
```
Replace `YOUR_COPILOT_API_KEY` with your key.
### Claude Desktop
Claude Desktop requires [`mcp-remote`](https://www.npmjs.com/package/mcp-remote). Add to `~/Library/Application Support/Claude/claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "sim-copilot": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://www.sim.ai/api/mcp/copilot",
        "--header",
        "X-API-Key: YOUR_COPILOT_API_KEY"
      ]
    }
  }
}
```
Replace `YOUR_COPILOT_API_KEY` with your key.
### VS Code
Add to `settings.json` or `.vscode/settings.json`:
```json
{
  "mcp": {
    "servers": {
      "sim-copilot": {
        "type": "http",
        "url": "https://www.sim.ai/api/mcp/copilot",
        "headers": {
          "X-API-Key": "YOUR_COPILOT_API_KEY"
        }
      }
    }
  }
}
```
Replace `YOUR_COPILOT_API_KEY` with your key.
<Callout type="info">
For self-hosted deployments, replace `https://www.sim.ai` with your self-hosted Sim URL.
</Callout>
import { FAQ } from '@/components/ui/faq'
<FAQ items={[
{ question: "What is the difference between Ask, Build, and Plan mode?", answer: "Copilot has three modes. Ask mode is a read-only Q&A mode for explanations, guidance, and suggestions without making any changes to your workflow. Build mode allows Copilot to actively modify your workflow by adding blocks, wiring connections, editing configurations, and debugging issues. Plan mode creates a step-by-step implementation plan for your request without making any changes, so you can review the approach before committing. Use Ask when you want to learn or explore ideas, Plan when you want to see a proposed approach first, and Build when you want Copilot to make changes directly." },
{ question: "Does Copilot have access to my full workflow when answering questions?", answer: "Copilot has access to the workflow you are currently editing as context. You can also use the @ context menu to reference other workflows, previous chats, execution logs, knowledge bases, documentation, and templates to give Copilot additional context for your request." },
{ question: "How do I use Copilot from an external editor like Cursor or VS Code?", answer: "You can use Copilot as an MCP server from external editors. First, generate a Copilot API key from Settings > Copilot on sim.ai. Then add the MCP server configuration to your editor using the endpoint https://www.sim.ai/api/mcp/copilot with your API key in the X-API-Key header. Configuration examples are available for Cursor, Claude Code, Claude Desktop, and VS Code." },
{ question: "Can I revert changes that Copilot made to my workflow?", answer: "Yes. When Copilot makes changes in Build mode, it saves checkpoints of your workflow state. You can hover over a Copilot message and click the checkpoints icon to see saved states, then click Revert on any checkpoint to restore your workflow. Note that reverting cannot be undone, so review the checkpoint before confirming." },
{ question: "How does Copilot billing work?", answer: "Copilot usage is billed per token from the underlying LLM and counts toward your plan's credit usage. More capable models like Claude Opus cost more per token than lighter models like Haiku. If you reach your usage limit, you can enable on-demand billing from Settings > Subscription to continue using Copilot." },
{ question: "What do the slash commands like /research and /search do?", answer: "Slash commands trigger specialized behaviors. /fast enables fast mode execution, /research activates a research and exploration mode, and /actions executes agent actions. Web commands like /search, /read, /scrape, and /crawl let Copilot interact with the web to search for information, read URLs, scrape page content, or crawl multiple pages to gather context for your request." },
{ question: "How do I set up Copilot for a self-hosted deployment?", answer: "For self-hosted deployments, go to sim.ai > Settings > Copilot and generate a Copilot API key. Then set the COPILOT_API_KEY environment variable in your self-hosted environment. Copilot is a Sim-managed service, so the self-hosted instance communicates with Sim's servers to process requests." },
{ question: "How is Copilot different from Mothership?", answer: "Copilot is scoped to the workflow you have open — it reads and edits that workflow's blocks and connections. Mothership has access to your entire workspace and can build workflows, manage tables, run research, schedule jobs, and take actions across integrations." },
{ question: "Can Copilot access other workflows or workspace data?", answer: "Copilot is scoped to the current workflow. For tasks that span multiple workflows or require workspace-level context, use Mothership." },
{ question: "Can I revert changes Copilot made?", answer: "Yes. Copilot saves a checkpoint before each change. Hover over the message and click the checkpoints icon to see saved states, then click Revert to restore one. Reverting cannot be undone." },
{ question: "How does Copilot billing work?", answer: "Copilot usage is billed per token and counts toward your plan's credit usage. If you reach your limit, enable on-demand billing from Settings → Subscription." },
{ question: "How do I set up Copilot for a self-hosted deployment?", answer: "Go to sim.ai → Settings → Copilot and generate a Copilot API key. Set the COPILOT_API_KEY environment variable in your self-hosted environment. Copilot runs on Sim's infrastructure regardless of where you host the application." },
]} />

View File

@@ -1,203 +1,121 @@
---
title: Secrets
description: Manage API keys and environment variables for your workflows
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'
Secrets are key-value pairs that store sensitive data like API keys, tokens, and passwords. Instead of hardcoding values into your workflows, you store them as secrets and reference them by name at runtime.
## Getting Started
To manage secrets, open your workspace **Settings** and navigate to the **Secrets** tab.
<Image
src="/static/credentials/settings-secrets.png"
alt="Settings modal showing the Secrets tab with a list of saved credentials"
src="/static/secrets/secrets-list.png"
alt="Secrets tab showing Workspace and Personal sections with inline key-value rows"
width={700}
height={500}
/>
Secrets are organized into two sections:
- **Workspace** — shared with all members of your workspace
- **Personal** — private to you
### Adding a Secret
Type a key name (e.g. `OPENAI_API_KEY`) into the **Key** column and its value into the **Value** column in the last empty row. A new empty row appears automatically as you type. Existing values are masked by default.
When you're done, click **Save** to persist all changes.
<Callout type="info">
Keys must use only letters, numbers, and underscores — no spaces or special characters.
</Callout>
### Bulk Import
You can populate multiple secrets at once by pasting `.env`-style content into any key or value field. The parser supports standard `KEY=VALUE` pairs, `export KEY=VALUE`, quoted values, and inline comments.
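For example, pasting the following (placeholder values) populates three secrets at once:
```
# comments and blank lines are ignored
OPENAI_API_KEY=sk-...
export STRIPE_SECRET_KEY="sk_live_..."
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/... # inline comment
```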
### Editing and Deleting
Click directly into any key or value cell to edit it. To delete a secret, click the trash icon on its row and save.
## Using Secrets in Workflows
To reference a secret in any input field, type `{{` to open the variable dropdown. Your available secrets are listed grouped by scope (workspace, then personal).
<Image
src="/static/credentials/secret-dropdown.png"
alt="Typing {{ in a code block opens a dropdown showing available workspace secrets"
alt="Typing {{ in an input opens a dropdown showing available secrets"
width={400}
height={250}
/>
Select the secret you want to use. The reference appears highlighted in blue and is resolved to its actual value at runtime.
<Image
src="/static/credentials/secret-resolved.png"
alt="A resolved secret reference shown in blue text as {{OPENAI_API_KEY}}"
alt="A resolved secret reference shown as {{OPENAI_API_KEY}}"
width={400}
height={200}
/>
<Callout type="warn">
Secret values are never exposed in the workflow editor or execution logs — they are only resolved during execution.
</Callout>
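For example, an API block's `Authorization` header might combine static text with a secret reference (a hypothetical block configuration):
```
Authorization: Bearer {{OPENAI_API_KEY}}
```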
## Secret Details
Click **Details** on any secret row to open its detail view.
<Image
src="/static/credentials/create-oauth.png"
alt="Create Secret dialog with OAuth Account type selected, showing display name and provider dropdown"
width={500}
src="/static/secrets/secret-details.png"
alt="Secret details view showing Display Name, Description, and Members sections"
width={700}
height={400}
/>
From here you can:
- Edit the **Display Name** and **Description**
- Manage **Members** — invite teammates by email and assign them an **Admin** or **Member** role
Click **Save** to apply changes, or **Back** to return to the list.
## Workspace vs. Personal
Secrets can be scoped to your **workspace** (shared with your team) or kept **personal** (private to you).
| | Workspace | Personal |
|---|---|---|
| **Visibility** | All workspace members | Only you |
| **Use in workflows** | Any member can use | Only you can use |
| **Best for** | Production workflows, shared services | Testing, personal API keys |
| **Who can edit** | Workspace admins | Only you |
| **Auto-shared** | Yes — all members get access on creation | No — only you have access |
<Callout type="info">
When a workspace secret and a personal secret share the same key name, the **workspace secret takes precedence**.
</Callout>
### Resolution Order
When a workflow runs, secrets resolve in this order:
1. **Workspace secrets** are checked first
2. **Personal secrets** are used as a fallback — from the user who triggered the run (manual) or the workflow owner (automated runs via API, webhook, or schedule)
## Access Control
Each secret has role-based access control:
- **Admin** — can view, edit, delete, and manage who has access
- **Member** — can use the secret in workflows (read-only)
When you create a workspace secret, all current workspace members are automatically granted access. Personal secrets are only accessible to you by default.
### Sharing a Secret
To share a secret with specific team members:
1. Click **Details** on the secret
2. Invite members by email
3. Assign them an **Admin** or **Member** role
## Best Practices
- **Use workspace secrets for production** so workflows work regardless of who triggers them
- **Use personal secrets for development** to keep test keys separate
- **Name keys descriptively** — `STRIPE_SECRET_KEY` over `KEY1`
- **Never hardcode secrets** in workflow input fields — always use `{{KEY}}` references
<FAQ items={[
{ question: "Are my secrets encrypted at rest?", answer: "Yes. Secret values and OAuth tokens are encrypted before being stored in the database. The platform uses server-side encryption so that raw secret values are never persisted in plaintext. Secret values are also never exposed in the workflow editor, logs, or API responses." },
{ question: "What happens if both a workspace secret and a personal secret have the same key name?", answer: "The workspace secret takes precedence. During execution, the resolver checks workspace secrets first and uses personal secrets only as a fallback. This ensures that production workflows use the shared, team-managed value." },
{ question: "Are my secrets encrypted at rest?", answer: "Yes. Secret values are encrypted before being stored in the database using server-side encryption, so raw values are never persisted in plaintext. They are also never exposed in the workflow editor, logs, or API responses." },
{ question: "What happens if both a workspace secret and a personal secret have the same key name?", answer: "The workspace secret takes precedence. During execution, the resolver checks workspace secrets first and uses personal secrets only as a fallback. This ensures production workflows use the shared, team-managed value." },
{ question: "Who determines which personal secret is used for automated runs?", answer: "For manual runs, the personal secrets of the user who clicked Run are used as fallback. For automated runs triggered by API, webhook, or schedule, the personal secrets of the workflow owner are used instead." },
{ question: "Does Sim handle OAuth token refresh automatically?", answer: "Yes. When an OAuth token is used during execution, the platform checks whether the access token has expired and automatically refreshes it using the stored refresh token before making the API call. You do not need to handle token refresh manually." },
{ question: "Can I connect multiple OAuth accounts for the same provider?", answer: "Yes. You can connect multiple accounts per provider (for example, two separate Gmail accounts). Each block that requires OAuth lets you select which specific account to use from the credential dropdown. This is useful when different workflows or blocks need different permissions or identities." },
{ question: "What happens if I delete a credential that is used in a workflow?", answer: "If a block references a deleted credential, the workflow will fail at that block during execution because the credential cannot be resolved. Make sure to update any blocks that reference a credential before deleting it." },
{ question: "Can I import secrets from a .env file?", answer: "Yes. The bulk import feature lets you paste .env-style content in KEY=VALUE format. The parser supports quoted values, comments (lines starting with #), and blank lines. All imported secrets are created with the scope you choose (workspace or personal)." },
{ question: "Can I import secrets from a .env file?", answer: "Yes. Paste .env-style content (KEY=VALUE format) into any key or value field and the secrets will be auto-populated. The parser supports export KEY=VALUE, quoted values, and inline comments." },
{ question: "What happens if I delete a secret that is used in a workflow?", answer: "The workflow will fail at any block that references the deleted secret during execution because the value cannot be resolved. Update any references before deleting a secret." },
]} />

View File

@@ -1,5 +1,5 @@
{
"title": "Credentials",
"pages": ["index", "google-service-account"],
"title": "Secrets",
"pages": ["index"],
"defaultOpen": false
}

View File

@@ -69,6 +69,9 @@ For self-hosted deployments, enterprise features can be enabled via environment
| `ACCESS_CONTROL_ENABLED`, `NEXT_PUBLIC_ACCESS_CONTROL_ENABLED` | Permission groups for access restrictions |
| `SSO_ENABLED`, `NEXT_PUBLIC_SSO_ENABLED` | Single Sign-On with SAML/OIDC |
| `CREDENTIAL_SETS_ENABLED`, `NEXT_PUBLIC_CREDENTIAL_SETS_ENABLED` | Polling Groups for email triggers |
| `INBOX_ENABLED`, `NEXT_PUBLIC_INBOX_ENABLED` | Sim Mailer inbox for outbound email |
| `WHITELABELING_ENABLED`, `NEXT_PUBLIC_WHITELABELING_ENABLED` | Custom branding and white-labeling |
| `AUDIT_LOGS_ENABLED`, `NEXT_PUBLIC_AUDIT_LOGS_ENABLED` | Audit logging for compliance and monitoring |
| `DISABLE_INVITATIONS`, `NEXT_PUBLIC_DISABLE_INVITATIONS` | Globally disable workspace/organization invitations |
### Organization Management

View File

@@ -0,0 +1,343 @@
---
title: API Deployment
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'
Deploy your workflow as a REST API endpoint that any application can call directly. Supports synchronous, streaming, and asynchronous execution modes.
## Deploying a Workflow
Open your workflow and click **Deploy**. The **General** tab opens first and shows you the current deployment state:
<Image src="/static/api-deployment/api-versions.png" alt="General tab of the Workflow Deployment modal showing a live workflow preview, a Versions table with v2 (live) and v1, and Undeploy / Update buttons" width={800} height={500} />
The **General** tab contains:
- **Live Workflow** — a read-only minimap of the workflow snapshot that is currently deployed
- **Versions** — a table of every deployment you've published, showing version number, who deployed it, and when
- **Deploy / Update / Undeploy** — action buttons at the bottom right
Click **Deploy** to publish your workflow for the first time, or **Update** to push a new snapshot after making changes. The green dot next to a version indicates it is the currently live version.
Once deployed, your workflow is available at:
```
POST https://sim.ai/api/workflows/{workflow-id}/execute
```
<Callout type="info">
API executions always run against the active deployment snapshot. After changing your workflow on the canvas, click **Update** to publish a new version.
</Callout>
### Keeping Track of Changes
When you modify the workflow canvas after deploying, an **Update deployment** badge appears at the bottom of the screen as a reminder that your live version is out of date:
<Image src="/static/api-deployment/api-update-button.png" alt="Canvas toolbar showing the Update and Run buttons with an Update deployment tooltip" width={400} height={200} />
You can click the **Update** button directly from the canvas toolbar — you don't need to open the Deploy modal every time.
## Version Control
Every time you deploy or update, a new version is recorded in the Versions table. You can manage past versions using the context menu (⋮) next to any row:
<Image src="/static/api-deployment/api-versions-menu.png" alt="Versions table showing v2 (live) and v1 with a context menu open offering Rename, Add description, Promote to live, and Load deployment options" width={800} height={400} />
| Action | Description |
|--------|-------------|
| **Rename** | Give the version a human-readable name (e.g., "Added memory") |
| **Add description** | Attach a note describing what changed in this version |
| **Promote to live** | Make this older version the active one without re-deploying |
| **Load deployment** | Load the workflow snapshot from this version back onto your canvas |
**Promote to live** is useful for rolling back — if a new deployment has an issue, promote the previous version to restore the last known-good state instantly.
## Making API Calls
Switch to the **API** tab in the Deploy modal to see ready-to-use code for all three execution modes:
<Image src="/static/api-deployment/api-tab.png" alt="API tab showing cURL, Python, JavaScript, and TypeScript language options, with Run workflow, Run workflow (stream response), and Run workflow (async) code sections" width={800} height={500} />
The language selector at the top lets you switch between **cURL**, **Python**, **JavaScript**, and **TypeScript**. Each mode — synchronous, streaming, and async — has its own code block that you can copy directly. The code is pre-filled with your workflow ID and a masked version of your API key.
At the bottom of the tab, two buttons give you quick access to key settings:
- **Edit API Info** — set a description and choose between API key auth or public access
- **Generate API Key** — create a new API key scoped to your workspace
## Authentication
By default, API endpoints require an API key passed in the `x-api-key` header. Generate keys in **Settings → Sim Keys** or via the **Generate API Key** button in the API tab.
```bash
curl -X POST https://sim.ai/api/workflows/{workflow-id}/execute \
-H "Content-Type: application/json" \
-H "x-api-key: $SIM_API_KEY" \
-d '{ "input": "Hello" }'
```
### API Info and Public Access
Click **Edit API Info** to add a description and change the access mode:
<Image src="/static/api-deployment/api-info.png" alt="Edit API Info modal with a Description textarea and an Access section toggling between API Key and Public modes" width={800} height={400} />
| Access Mode | Description |
|-------------|-------------|
| **API Key** (default) | Requires a valid API key in the `x-api-key` header |
| **Public** | No authentication required — anyone with the URL can call the endpoint |
The **Description** field documents what the workflow API does. This is useful for teams, or when exposing the workflow to tools and services that surface API metadata.
<Callout type="warn">
Public endpoints can be called by anyone with the URL. Only use this for workflows that don't expose sensitive data or perform sensitive actions.
</Callout>
## Execution Modes
### Synchronous
The default mode. Send a request and wait for the complete response:
<Tabs items={['cURL', 'Python', 'TypeScript']}>
<Tab value="cURL">
```bash
curl -X POST https://sim.ai/api/workflows/{workflow-id}/execute \
-H "Content-Type: application/json" \
-H "x-api-key: $SIM_API_KEY" \
-d '{ "input": "Summarize this article" }'
```
</Tab>
<Tab value="Python">
```python
import requests, os
response = requests.post(
    "https://sim.ai/api/workflows/{workflow-id}/execute",
    headers={
        "Content-Type": "application/json",
        "x-api-key": os.environ["SIM_API_KEY"]
    },
    json={"input": "Summarize this article"}
)
print(response.json())
```
</Tab>
<Tab value="TypeScript">
```typescript
const response = await fetch('https://sim.ai/api/workflows/{workflow-id}/execute', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'x-api-key': process.env.SIM_API_KEY!
  },
  body: JSON.stringify({ input: 'Summarize this article' })
});
console.log(await response.json());
```
</Tab>
</Tabs>
### Streaming
Stream the response token-by-token as it is generated. Add `"stream": true` to your request body and specify which block output fields to stream using `selectedOutputs`.
Use the **Select outputs** dropdown in the API tab to choose which fields to stream:
<Image src="/static/api-deployment/api-select-outputs.png" alt="Select outputs dropdown open showing Agent 1 block with selectable output fields: content, model, tokens, toolCalls, providerTiming, cost" width={800} height={400} />
The dropdown groups available outputs by block. The most common choice is `content` from an Agent block, which streams the generated text. You can select fields from multiple blocks simultaneously.
The `selectedOutputs` values in the request body follow the format `blockName.field` (e.g., `agent_1.content`).
<Tabs items={['cURL', 'Python', 'TypeScript']}>
<Tab value="cURL">
```bash
curl -X POST https://sim.ai/api/workflows/{workflow-id}/execute \
-H "Content-Type: application/json" \
-H "x-api-key: $SIM_API_KEY" \
-d '{
"input": "Write a long essay",
"stream": true,
"selectedOutputs": ["agent_1.content"]
}'
```
</Tab>
<Tab value="Python">
```python
import requests, os
response = requests.post(
    "https://sim.ai/api/workflows/{workflow-id}/execute",
    headers={
        "Content-Type": "application/json",
        "x-api-key": os.environ["SIM_API_KEY"]
    },
    json={
        "input": "Write a long essay",
        "stream": True,
        "selectedOutputs": ["agent_1.content"]
    },
    stream=True
)
for line in response.iter_lines():
    if line:
        print(line.decode())
```
</Tab>
<Tab value="TypeScript">
```typescript
const response = await fetch('https://sim.ai/api/workflows/{workflow-id}/execute', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'x-api-key': process.env.SIM_API_KEY!
  },
  body: JSON.stringify({
    input: 'Write a long essay',
    stream: true,
    selectedOutputs: ['agent_1.content']
  })
});

const reader = response.body!.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log(decoder.decode(value));
}
```
</Tab>
</Tabs>
### Asynchronous
For long-running workflows, async mode returns a job ID immediately so you don't need to hold the connection open. Add the `X-Execution-Mode: async` header to your request. The API returns HTTP 202 with a job ID and status URL. Poll the status URL until the job completes.
<Tabs items={['Start Job', 'Check Status']}>
<Tab value="Start Job">
```bash
curl -X POST https://sim.ai/api/workflows/{workflow-id}/execute \
-H "Content-Type: application/json" \
-H "x-api-key: $SIM_API_KEY" \
-H "X-Execution-Mode: async" \
-d '{ "input": "Process this large dataset" }'
```
**Response** (HTTP 202):
```json
{
  "success": true,
  "async": true,
  "jobId": "run_abc123",
  "executionId": "exec_xyz",
  "message": "Workflow execution queued",
  "statusUrl": "https://sim.ai/api/jobs/run_abc123"
}
```
</Tab>
<Tab value="Check Status">
```bash
curl https://sim.ai/api/jobs/{jobId} \
-H "x-api-key: $SIM_API_KEY"
```
**While processing:**
```json
{
  "success": true,
  "taskId": "run_abc123",
  "status": "processing",
  "metadata": {
    "createdAt": "2025-09-10T12:00:00.000Z",
    "startedAt": "2025-09-10T12:00:01.000Z"
  },
  "estimatedDuration": 300000
}
```
**When completed:**
```json
{
  "success": true,
  "taskId": "run_abc123",
  "status": "completed",
  "metadata": {
    "createdAt": "2025-09-10T12:00:00.000Z",
    "startedAt": "2025-09-10T12:00:01.000Z",
    "completedAt": "2025-09-10T12:00:05.000Z",
    "duration": 4000
  },
  "output": { "result": "..." }
}
```
</Tab>
</Tabs>
#### Job Status Values
| Status | Description |
|--------|-------------|
| `queued` | Job is waiting to be picked up |
| `processing` | Workflow is actively executing |
| `completed` | Finished successfully — `output` field contains the result |
| `failed` | Execution failed — `error` field contains the message |
Poll the `statusUrl` from the initial response until the status is `completed` or `failed`.
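For reference, a minimal polling sketch in TypeScript, assuming the `statusUrl`, status values, and `output`/`error` fields shown above (the 5-second interval is an arbitrary choice):
```typescript
// Poll an async job's statusUrl until it reaches a terminal state.
// Assumes the response shape documented above; the interval is arbitrary.
async function waitForJob(statusUrl: string, apiKey: string): Promise<unknown> {
  while (true) {
    const res = await fetch(statusUrl, { headers: { 'x-api-key': apiKey } });
    const job = await res.json();
    if (job.status === 'completed') return job.output;
    if (job.status === 'failed') throw new Error(job.error);
    // still queued or processing; wait 5s before polling again
    await new Promise((resolve) => setTimeout(resolve, 5000));
  }
}
```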
#### Execution Time Limits
| Plan | Sync Limit | Async Limit |
|------|-----------|-------------|
| **Community** | 5 minutes | 90 minutes |
| **Pro / Max / Team / Enterprise** | 50 minutes | 90 minutes |
If a job exceeds its time limit, it is automatically marked as `failed`.
#### Job Retention
Completed and failed job results are retained for **24 hours**. After that, the status endpoint returns `404`. Retrieve and store results on your end if you need them longer.
#### Capacity Limits
If the execution queue is full, the API returns `503`:
```json
{
  "error": "Service temporarily at capacity",
  "retryAfterSeconds": 10
}
```
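A minimal retry sketch that honors `retryAfterSeconds`, assuming you want to resubmit rather than fail immediately (the retry cap is an arbitrary choice):
```typescript
// Retry an execute call while the queue is at capacity (HTTP 503),
// waiting the server-suggested retryAfterSeconds between attempts.
async function executeWithRetry(
  url: string,
  apiKey: string,
  body: unknown,
  maxRetries = 3
): Promise<Response> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'x-api-key': apiKey },
      body: JSON.stringify(body),
    });
    if (res.status !== 503) return res;
    const { retryAfterSeconds = 10 } = await res.json();
    await new Promise((resolve) => setTimeout(resolve, retryAfterSeconds * 1000));
  }
  throw new Error('Queue still at capacity after retries');
}
```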
<Callout type="info">
Async mode always runs against the deployed version. It does not support draft state, block overrides, or partial execution options like `runFromBlock` or `stopAfterBlockId`.
</Callout>
## API Key Management
Generate and manage API keys in **Settings → Sim Keys**:
- **Create** new keys for different applications or environments
- **Revoke** keys that are no longer needed
- Keys are scoped to your workspace
## Rate Limits
API calls are subject to rate limits based on your plan. Rate limit details are returned in response headers (`X-RateLimit-*`) and in the response body. Use async mode for high-volume or long-running workloads.
For detailed rate limit information and the logs/webhooks API, see [External API](/execution/api).
<FAQ items={[
{ question: "What is the difference between the General tab and the API tab?", answer: "The General tab manages your deployment lifecycle — deploying, updating, rolling back, and viewing version history. The API tab gives you ready-to-use code samples and lets you configure the endpoint's description and access mode." },
{ question: "Can I deploy the same workflow as both an API and a chat?", answer: "Yes. A workflow can be simultaneously deployed as an API, chat, MCP tool, and more. Each deployment type runs against the same active snapshot." },
{ question: "How do I choose between sync, streaming, and async?", answer: "Use sync for quick workflows that finish in seconds. Use streaming when you want to show progressive output to users as it's generated. Use async for long-running workflows where holding a connection open isn't practical." },
{ question: "How do I select multiple outputs for streaming?", answer: "Open the Select outputs dropdown in the API tab and check each output field you want to stream. You can choose fields from multiple blocks. The selected fields are reflected as an array in the selectedOutputs request body parameter." },
{ question: "How does Promote to live work?", answer: "Promote to live sets an older version as the active deployment without creating a new version. Subsequent API calls immediately run against the promoted snapshot. This is the fastest way to roll back to a previous state." },
{ question: "How long are async job results available?", answer: "Completed and failed job results are retained for 24 hours. After that, the status endpoint returns 404. Retrieve and store results on your end if you need them longer." },
{ question: "What happens if my API key is compromised?", answer: "Revoke the key immediately in Settings → Sim Keys and generate a new one. Revoked keys stop working instantly." },
]} />

View File

@@ -6,7 +6,7 @@ import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Video } from '@/components/ui/video'
Sim provides a comprehensive external API for querying workflow run logs and setting up webhooks for real-time notifications when workflows complete.
## Authentication
@@ -21,7 +21,7 @@ You can generate API keys from the Sim platform and navigate to **Settings**, th
## Logs API
All API responses include information about your workflow run limits and usage:
```json
"limits": {
@@ -48,11 +48,11 @@ All API responses include information about your workflow execution limits and u
}
```
**Note:** Rate limits use a token bucket algorithm. `remaining` can exceed `requestsPerMinute` up to `maxBurst` when you haven't used your full allowance recently, allowing for burst traffic. The rate limits in the response body are for workflow runs. The rate limits for calling this API endpoint are in the response headers (`X-RateLimit-*`).
### Query Logs
Query workflow run logs with extensive filtering options.
<Tabs items={['Request', 'Response']}>
<Tab value="Request">
@@ -70,11 +70,11 @@ Query workflow execution logs with extensive filtering options.
- `level` - Filter by level: `info`, `error`
- `startDate` - ISO timestamp for date range start
- `endDate` - ISO timestamp for date range end
- `executionId` - Exact run ID match
- `minDurationMs` - Minimum run duration in milliseconds
- `maxDurationMs` - Maximum run duration in milliseconds
- `minCost` - Minimum run cost
- `maxCost` - Maximum run cost
- `model` - Filter by AI model used
**Pagination:**
@@ -213,9 +213,9 @@ Retrieve detailed information about a specific log entry.
</Tab>
</Tabs>
### Get Run Details
Retrieve run details including the workflow state snapshot.
<Tabs items={['Request', 'Response']}>
<Tab value="Request">
@@ -248,7 +248,7 @@ Retrieve execution details including the workflow state snapshot.
## Notifications
Get real-time notifications when workflow runs complete via webhook, email, or Slack. Notifications are configured at the workspace level from the Logs page.
### Configuration
@@ -256,7 +256,7 @@ Configure notifications from the Logs page by clicking the menu button and selec
**Notification Channels:**
- **Webhook**: Send HTTP POST requests to your endpoint
- **Email**: Receive email notifications with run details
- **Slack**: Post messages to a Slack channel
**Workflow Selection:**
@@ -269,38 +269,38 @@ Configure notifications from the Logs page by clicking the menu button and selec
**Optional Data:**
- `includeFinalOutput`: Include the workflow's final output
- `includeTraceSpans`: Include detailed trace spans
- `includeRateLimits`: Include rate limit information (sync/async limits and remaining)
- `includeUsageData`: Include billing period usage and limits
### Alert Rules
Instead of receiving notifications for every run, configure alert rules to be notified only when issues are detected:
**Consecutive Failures**
- Alert after X consecutive failed runs (e.g., 3 failures in a row)
- Resets when a run succeeds
**Failure Rate**
- Alert when failure rate exceeds X% over the last Y hours
- Requires minimum 5 runs in the window
- Only triggers after the full time window has elapsed
**Latency Threshold**
- Alert when any run takes longer than X seconds
- Useful for catching slow or hanging workflows
**Latency Spike**
- Alert when a run is X% slower than the average
- Compares against the average duration over the configured time window
- Requires minimum 5 runs to establish baseline
**Cost Threshold**
- Alert when a single run costs more than $X
- Useful for catching expensive LLM calls
**No Activity**
- Alert when no runs occur within X hours
- Useful for monitoring scheduled workflows that should run regularly
**Error Count**
@@ -317,7 +317,7 @@ For webhooks, additional options are available:
### Payload Structure
When a workflow run completes, Sim sends the following payload (via webhook POST, email, or Slack):
```json
{
@@ -456,7 +456,7 @@ Failed webhook deliveries are retried with exponential backoff and jitter:
- Deliveries timeout after 30 seconds
<Callout type="info">
Webhook deliveries are processed asynchronously and don't affect workflow run performance.
</Callout>
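If you configure a webhook secret, verify each delivery before trusting it. A minimal Node.js sketch, assuming the signature scheme described in the FAQ below (a `sim-signature` header of the form `t={timestamp},v1={signature}`, where the signature is HMAC-SHA256 over `{timestamp}.{body}`):
```typescript
import { createHmac, timingSafeEqual } from 'crypto';

// Verify a "t={timestamp},v1={signature}" header against the raw request body.
function verifySimSignature(header: string, rawBody: string, secret: string): boolean {
  const parts = Object.fromEntries(
    header.split(',').map((pair) => pair.split('=') as [string, string])
  );
  if (!parts.t || !parts.v1) return false;
  const expected = createHmac('sha256', secret)
    .update(`${parts.t}.${rawBody}`)
    .digest('hex');
  const a = Buffer.from(expected);
  const b = Buffer.from(parts.v1);
  // constant-time comparison to avoid timing attacks
  return a.length === b.length && timingSafeEqual(a, b);
}
```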
## Best Practices
@@ -596,11 +596,11 @@ app.listen(3000, () => {
import { FAQ } from '@/components/ui/faq'
<FAQ items={[
{ question: "How do I trigger async execution via the API?", answer: "Set the X-Execution-Mode header to 'async' on your POST request to /api/workflows/{id}/execute. The API returns a 202 response with a jobId, executionId, and a statusUrl you can poll to check when the job completes. Async mode does not support draft state, workflow overrides, or selective output options." },
{ question: "How do I trigger an async run via the API?", answer: "Set the X-Execution-Mode header to 'async' on your POST request to /api/workflows/{id}/execute. The API returns a 202 response with a jobId, executionId, and a statusUrl you can poll to check when the job completes. Async mode does not support draft state, workflow overrides, or selective output options." },
{ question: "What authentication methods does the API support?", answer: "The API supports two authentication methods: API keys passed in the x-api-key header, and session-based authentication for logged-in users. API keys can be generated from Settings > Sim Keys in the platform. Workflows with public API access enabled can also be called without authentication." },
{ question: "How does the webhook retry policy work?", answer: "Failed webhook deliveries are retried up to 5 times with exponential backoff: 5 seconds, 15 seconds, 1 minute, 3 minutes, and 10 minutes, plus up to 10% jitter. Only HTTP 5xx and 429 responses trigger retries. Each delivery times out after 30 seconds." },
{ question: "What rate limits apply to the Logs API?", answer: "Rate limits use a token bucket algorithm. Free plans get 30 requests/minute with 60 burst capacity, Pro gets 100/200, Team gets 200/400, and Enterprise gets 500/1000. These are separate from workflow execution rate limits, which are shown in the response body." },
{ question: "What rate limits apply to the Logs API?", answer: "Rate limits use a token bucket algorithm. Free plans get 30 requests/minute with 60 burst capacity, Pro gets 100/200, Team gets 200/400, and Enterprise gets 500/1000. These are separate from workflow run rate limits, which are shown in the response body." },
{ question: "How do I verify that a webhook is from Sim?", answer: "Configure a webhook secret when setting up notifications. Sim signs each delivery with HMAC-SHA256 using the format 't={timestamp},v1={signature}' in the sim-signature header. Compute the HMAC of '{timestamp}.{body}' with your secret and compare it to the signature value." },
{ question: "What alert rules are available for notifications?", answer: "You can configure alerts for consecutive failures, failure rate thresholds, latency thresholds, latency spikes (percentage above average), cost thresholds, no-activity periods, and error counts within a time window. All alert types include a 1-hour cooldown to prevent notification spam." },
{ question: "Can I filter which executions trigger notifications?", answer: "Yes. You can filter notifications by specific workflows (or select all), log level (info or error), and trigger type (api, webhook, schedule, manual, chat). You can also choose whether to include final output, trace spans, rate limits, and usage data in the notification payload." },
{ question: "Can I filter which runs trigger notifications?", answer: "Yes. You can filter notifications by specific workflows (or select all), log level (info or error), and trigger type (api, webhook, schedule, manual, chat). You can also choose whether to include final output, trace spans, rate limits, and usage data in the notification payload." },
]} />

View File

@@ -6,7 +6,7 @@ import { Callout } from 'fumadocs-ui/components/callout'
import { Card, Cards } from 'fumadocs-ui/components/card'
import { Image } from '@/components/ui/image'
Understanding how workflows run in Sim is key to building efficient and reliable automations. The execution engine automatically handles dependencies, concurrency, and data flow to ensure your workflows run smoothly and predictably.
## How Workflows Execute
@@ -14,7 +14,7 @@ Sim's execution engine processes workflows intelligently by analyzing dependenci
### Concurrent Execution by Default
Multiple blocks run concurrently when they don't depend on each other. This dramatically improves performance without requiring manual configuration.
<Image
src="/static/execution/concurrency.png"
@@ -49,7 +49,7 @@ Workflows can branch in multiple directions using routing blocks. The execution
height={500}
/>
This workflow demonstrates how a run can follow different paths based on conditions or AI decisions, with each path running independently.
## Block Types
@@ -57,7 +57,7 @@ Sim provides different types of blocks that serve specific purposes in your work
<Cards>
<Card title="Triggers" href="/triggers">
**Starter blocks** initiate workflows and **Webhook blocks** respond to external events. Every workflow needs a trigger to begin a run.
</Card>
<Card title="Processing Blocks" href="/blocks">
@@ -73,37 +73,37 @@ Sim provides different types of blocks that serve specific purposes in your work
</Card>
</Cards>
All blocks run automatically based on their dependencies - you don't need to manually manage run order or timing.
## Run Monitoring
When workflows run, Sim provides real-time visibility into the process:
- **Live Block States**: See which blocks are currently running, completed, or failed
- **Run Logs**: Detailed logs appear in real-time showing inputs, outputs, and any errors
- **Performance Metrics**: Track run time and costs for each block
- **Path Visualization**: Understand which paths were taken through your workflow
<Callout type="info">
All run details are captured and available for review even after workflows complete, helping with debugging and optimization.
</Callout>
## Key Principles
Understanding these core principles will help you build better workflows:
1. **Dependency-Based Execution**: Blocks only run when all their dependencies have completed
2. **Automatic Parallelization**: Independent blocks run concurrently without configuration
3. **Smart Data Flow**: Outputs flow automatically to connected blocks
4. **Error Handling**: Failed blocks stop their run path but don't affect independent paths
5. **Response Blocks as Exit Points**: When a Response block runs, the entire workflow stops and the API response is sent immediately. Multiple Response blocks can exist on different branches — the first one to run wins
6. **State Persistence**: All block outputs and run details are preserved for debugging
7. **Cycle Protection**: Workflows that call other workflows (via Workflow blocks, MCP tools, or API blocks) are tracked with a call chain. If the chain exceeds 25 hops, the run is stopped to prevent infinite loops
## Next Steps
Now that you understand execution basics, explore:
- **[Block Types](/blocks)** - Learn about specific block capabilities
- **[Logging](/execution/logging)** - Monitor workflow runs and debug issues
- **[Cost Calculation](/execution/costs)** - Understand and optimize workflow costs
- **[Triggers](/triggers)** - Set up different ways to run your workflows

View File

@@ -0,0 +1,184 @@
---
title: Chat Deployment
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'
Deploy your workflow as a conversational chat interface that users can interact with via a shareable link or embedded widget. Chat supports multi-turn conversations, file uploads, and voice input.
<Image src="/static/chat/chat-live.png" alt="A deployed chat interface showing a conversation with Friendly Assistant" width={800} height={500} />
Every chat message triggers a fresh workflow execution, with the full conversation history passed in as context. Responses stream back to the user in real time.
<Callout type="info">
Chat executions run against your workflow's active deployment snapshot. Publish a new deployment after making canvas changes so the chat uses the updated version.
</Callout>
## Creating a Chat
Open your workflow, click **Deploy**, and select the **Chat** tab. You'll see the chat configuration panel:
<Image src="/static/chat/chat-deploy-config.png" alt="Chat deployment configuration panel showing URL, Title, Output, Access control, and Welcome message fields" width={800} height={500} />
Configure the following fields, then click **Launch Chat**:
| Field | Description |
|-------|-------------|
| **URL** | Slug that forms the public URL, e.g. `https://www.sim.ai/chat/your-slug`. Lowercase letters, numbers, and hyphens only. Must be unique across all workspaces. |
| **Title** | Display name shown in the chat header. |
| **Output** | Output fields from your workflow blocks returned as the chat response. At least one must be selected. |
| **Welcome Message** | Greeting shown before the user sends their first message. Defaults to `"Hi there! How can I help you today?"`. |
| **Access Control** | Controls who can access the chat. See [Access Control](#access-control) below. |
### Output Selection
<Image src="/static/chat/chat-deploy-output.png" alt="Output dropdown showing Agent 1 block with selectable fields: content, model, tokens, toolCalls, providerTiming, cost" width={800} height={400} />
The output dropdown groups available fields by block. For an Agent block, you can choose from `content`, `model`, `tokens`, `toolCalls`, `providerTiming`, and `cost`. In most cases, selecting `content` from the final Agent block is all you need — it streams the agent's text response directly to the user.
## Access Control
<Image src="/static/chat/chat-deploy-access-email.png" alt="Access control section with Email tab selected, showing an Allowed emails field with @sim.ai domain added" width={800} height={300} />
| Mode | Description |
|------|-------------|
| **Public** | Anyone with the link can chat — no authentication required |
| **Password** | Users must enter a password before they can start chatting |
| **Email** | Only specific email addresses or domains can access. Users verify with a 6-digit OTP sent to their email |
| **SSO** | OIDC-based single sign-on (enterprise only) |
**Email access:** Add individual addresses (`user@example.com`) or entire domains (`@example.com`) to the **Allowed emails** field. Users receive a one-time 6-digit OTP to their inbox — once verified, they can chat for the duration of their session.
**Password access:** A password field appears when this mode is selected. Share the password with users directly; they enter it before the conversation begins.
**SSO:** Uses OIDC to authenticate users through your identity provider. Available on enterprise plans.
## Sharing
### Direct Link
```
https://www.sim.ai/chat/your-slug
```
### Iframe
```html
<iframe
  src="https://www.sim.ai/chat/your-slug"
  width="100%"
  height="600"
  frameborder="0"
  title="Chat"
></iframe>
```
## API Submission
You can also send messages to a chat programmatically. Responses are streamed using server-sent events (SSE).
<Tabs items={['cURL', 'TypeScript']}>
<Tab value="cURL">
```bash
curl -X POST https://www.sim.ai/api/chat/your-slug \
-H "Content-Type: application/json" \
-d '{
"input": "Hello, I need help with my order",
"conversationId": "optional-conversation-id"
}'
```
</Tab>
<Tab value="TypeScript">
```typescript
const response = await fetch('https://www.sim.ai/api/chat/your-slug', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    input: 'Hello, I need help with my order',
    conversationId: 'optional-conversation-id'
  })
});

// Response is an SSE stream
const reader = response.body?.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader!.read();
  if (done) break;
  console.log(decoder.decode(value));
}
```
</Tab>
</Tabs>
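Rather than logging raw bytes, you will usually want just the text payloads. Below is a minimal sketch that accumulates `data:` payloads, assuming standard SSE framing (`data: ...` lines separated by blank lines); the exact event shape Sim emits may differ:

```typescript
// Minimal SSE accumulator. Assumes standard "data: ..." framing;
// adjust if the stream carries structured JSON events instead.
async function readChatStream(response: Response): Promise<string> {
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  let text = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // Events end at newlines; keep the last partial line buffered
    const lines = buffer.split('\n');
    buffer = lines.pop() ?? '';
    for (const line of lines) {
      if (line.startsWith('data: ')) text += line.slice('data: '.length);
    }
  }
  return text;
}
```

Buffering the trailing partial line avoids splitting a payload across network chunks.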
### With File Uploads
```bash
curl -X POST https://www.sim.ai/api/chat/your-slug \
  -H "Content-Type: application/json" \
  -d '{
    "input": "What does this document say?",
    "files": [{
      "name": "report.pdf",
      "type": "application/pdf",
      "size": 1048576,
      "data": "data:application/pdf;base64,..."
    }]
  }'
```
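If you build the `files` array programmatically, the `data` field is a standard base64 data URI. A Node.js sketch (the 20 MB file limit from the files docs is assumed to apply here as well):

```typescript
import { readFileSync } from 'node:fs';

// Build the base64 data URI expected by the "data" field
const bytes = readFileSync('report.pdf');
const file = {
  name: 'report.pdf',
  type: 'application/pdf',
  size: bytes.length,
  data: `data:application/pdf;base64,${bytes.toString('base64')}`,
};

await fetch('https://www.sim.ai/api/chat/your-slug', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ input: 'What does this document say?', files: [file] }),
});
```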
### Protected Chats
For password-protected chats, include the password in the request body:
```bash
curl -X POST https://www.sim.ai/api/chat/your-slug \
  -H "Content-Type: application/json" \
  -d '{ "password": "secret", "input": "Hello" }'
```
For email-protected chats, authenticate with OTP first:
```bash
# Step 1: Request OTP — sends a 6-digit code to the email address
curl -X POST https://www.sim.ai/api/chat/your-slug/otp \
  -H "Content-Type: application/json" \
  -d '{ "email": "allowed@example.com" }'

# Step 2: Verify OTP — save the Set-Cookie header for subsequent requests
curl -X PUT https://www.sim.ai/api/chat/your-slug/otp \
  -H "Content-Type: application/json" \
  -c cookies.txt \
  -d '{ "email": "allowed@example.com", "otp": "123456" }'

# Step 3: Send messages using the auth cookie from Step 2
curl -X POST https://www.sim.ai/api/chat/your-slug \
  -H "Content-Type: application/json" \
  -b cookies.txt \
  -d '{ "input": "Hello" }'
```
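The same flow works from a browser with `fetch`; this is a sketch assuming the endpoints behave exactly as in the cURL sequence above. `credentials: 'include'` makes the browser store the verification cookie from step 2 and resend it on later requests:

```typescript
const base = 'https://www.sim.ai/api/chat/your-slug';

// Step 1: request the 6-digit code
await fetch(`${base}/otp`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  credentials: 'include',
  body: JSON.stringify({ email: 'allowed@example.com' }),
});

// Step 2: verify the code; the browser stores the auth cookie
await fetch(`${base}/otp`, {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  credentials: 'include',
  body: JSON.stringify({ email: 'allowed@example.com', otp: '123456' }),
});

// Step 3: chat with the session cookie attached automatically
const response = await fetch(base, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  credentials: 'include',
  body: JSON.stringify({ input: 'Hello' }),
});
```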
## Troubleshooting
**Chat returns 403** — The deployment is inactive. Open the Deploy modal and re-deploy the workflow.
**"At least one output block is required"** — No output field is selected in the Output dropdown. Open the Deploy modal, go to the Chat tab, and select at least one output from a block.
**OTP email not arriving** — Confirm the email address is on the allowed list and check spam folders. OTP codes expire after 15 minutes and can be resent after a 30-second cooldown.
**Chat not loading in iframe** — Check that your site's Content Security Policy allows iframes from `sim.ai`.
**Responses not updating after workflow changes** — Chat uses the active deployment snapshot. Publish a new deployment from the Deploy modal to pick up your latest changes.
<FAQ items={[
{ question: "How is chat different from API deployment?", answer: "API deployment exposes your workflow as a REST endpoint for programmatic use. Chat wraps the workflow in a hosted conversational UI with streaming, file uploads, voice input, and access control — no application code required to use it." },
{ question: "Which output field should I select?", answer: "For workflows built around Agent blocks, select the content field from the final Agent block — this streams the agent's text response to the user. You can select multiple fields if your workflow produces structured output you want to expose." },
{ question: "How does conversation history work?", answer: "Each message triggers a new workflow execution. The full conversation history — all prior user messages and assistant responses — is passed as context so your workflow can maintain continuity across turns." },
{ question: "How does email OTP authentication work?", answer: "When a user opens an email-protected chat, they enter their email address. If it matches the allowed list, Sim sends a 6-digit OTP to that address. The user enters the code, and a session cookie is set for the duration of their visit." },
{ question: "Is there a message length limit?", answer: "There is no hard limit on message length. Very long messages may impact response time depending on your workflow's model context window." },
{ question: "Can I use chat with any workflow?", answer: "Yes, any workflow can be deployed as a chat. The chat sends the user's message as the workflow input and streams the selected block outputs back as the response." },
]} />

View File

@@ -6,7 +6,7 @@ import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Image } from '@/components/ui/image'
Sim automatically calculates costs for all workflow executions, providing transparent pricing based on AI model usage and execution charges. Understanding these costs helps you optimize workflows and manage your budget effectively.
Sim automatically calculates costs for all workflow runs, providing transparent pricing based on AI model usage and run charges. Understanding these costs helps you optimize workflows and manage your budget effectively.
## Credits
@@ -16,18 +16,18 @@ All plan limits, usage meters, and billing thresholds are displayed in credits t
## How Costs Are Calculated
Every workflow execution includes two cost components:
Every workflow run includes two cost components:
**Base Execution Charge**: 1 credit ($0.005) per execution
**Base Run Charge**: 1 credit ($0.005) per run
**AI Model Usage**: Variable cost based on token consumption
```javascript
modelCost = (inputTokens × inputPrice + outputTokens × outputPrice) / 1,000,000
totalCredits = baseExecutionCharge + modelCost × 200
totalCredits = baseRunCharge + modelCost × 200
```
<Callout type="info">
AI model prices are per million tokens. The calculation divides by 1,000,000 to get the actual cost. Workflows without AI blocks only incur the base execution charge.
AI model prices are per million tokens. The calculation divides by 1,000,000 to get the actual cost. Workflows without AI blocks only incur the base run charge.
</Callout>
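To make the arithmetic concrete, here is a small sketch with illustrative prices (assumed for the example, not Sim's actual rates; see the pricing tables below):

```typescript
// Worked example; prices are illustrative, not Sim's actual rates
const inputTokens = 2_000;
const outputTokens = 1_000;
const inputPrice = 2.5;   // $ per 1M input tokens (assumed)
const outputPrice = 10.0; // $ per 1M output tokens (assumed)

const modelCost = (inputTokens * inputPrice + outputTokens * outputPrice) / 1_000_000;
// (5,000 + 10,000) / 1,000,000 = $0.015

const baseRunCharge = 1; // credits
const totalCredits = baseRunCharge + modelCost * 200;
// 1 + 3 = 4 credits, i.e. $0.02 (before any hosted-key multiplier)
```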
## Model Breakdown in Logs
@@ -48,7 +48,7 @@ The model breakdown shows:
- **Token Usage**: Input and output token counts for each model
- **Cost Breakdown**: Individual costs per model and operation
- **Model Distribution**: Which models were used and how many times
- **Total Cost**: Aggregate cost for the entire workflow execution
- **Total Cost**: Aggregate cost for the entire workflow run
## Pricing Options
@@ -110,9 +110,108 @@ The model breakdown shows:
Pricing shown reflects rates as of September 10, 2025. Check provider documentation for current pricing.
</Callout>
## Hosted Tool Pricing
When workflows use tool blocks with Sim's hosted API keys, costs are charged per operation. Use your own keys via BYOK to pay providers directly instead.
<Tabs items={['Firecrawl', 'Exa', 'Serper', 'Perplexity', 'Linkup', 'Parallel AI', 'Jina AI', 'Google Cloud', 'Brandfetch']}>
<Tab>
**Firecrawl** - Web scraping, crawling, search, and extraction
| Operation | Cost |
|-----------|------|
| Scrape | $0.001 per credit used |
| Crawl | $0.001 per credit used |
| Search | $0.001 per credit used |
| Extract | $0.001 per credit used |
| Map | $0.001 per credit used |
</Tab>
<Tab>
**Exa** - AI-powered search and research
| Operation | Cost |
|-----------|------|
| Search | Dynamic (returned by API) |
| Get Contents | Dynamic (returned by API) |
| Find Similar Links | Dynamic (returned by API) |
| Answer | Dynamic (returned by API) |
</Tab>
<Tab>
**Serper** - Google search API
| Operation | Cost |
|-----------|------|
| Search (≤10 results) | $0.001 |
| Search (>10 results) | $0.002 |
</Tab>
<Tab>
**Perplexity** - AI-powered chat and web search
| Operation | Cost |
|-----------|------|
| Search | $0.005 per request |
| Chat | Token-based (varies by model) |
</Tab>
<Tab>
**Linkup** - Web search and content retrieval
| Operation | Cost |
|-----------|------|
| Standard search | ~$0.006 |
| Deep search | ~$0.055 |
</Tab>
<Tab>
**Parallel AI** - Web search, extraction, and deep research
| Operation | Cost |
|-----------|------|
| Search (≤10 results) | $0.005 |
| Search (>10 results) | $0.005 + $0.001 per additional result |
| Extract | $0.001 per URL |
| Deep Research | $0.005–$2.40 (varies by processor tier) |
</Tab>
<Tab>
**Jina AI** - Web reading and search
| Operation | Cost |
|-----------|------|
| Read URL | $0.20 per 1M tokens |
| Search | $0.20 per 1M tokens (minimum 10K tokens) |
</Tab>
<Tab>
**Google Cloud** - Translate, Maps, PageSpeed, and Books APIs
| Operation | Cost |
|-----------|------|
| Translate / Detect | $0.00002 per character |
| Maps (Geocode, Directions, Distance Matrix, Elevation, Timezone, Reverse Geocode, Geolocate, Validate Address) | $0.005 per request |
| Maps (Snap to Roads) | $0.01 per request |
| Maps (Place Details) | $0.017 per request |
| Maps (Places Search) | $0.032 per request |
| PageSpeed | Free |
| Books (Search, Details) | Free |
</Tab>
<Tab>
**Brandfetch** - Brand assets, logos, colors, and company info
| Operation | Cost |
|-----------|------|
| Search | Free |
| Get Brand | $0.04 per request |
</Tab>
</Tabs>
## Bring Your Own Key (BYOK)
Use your own API keys for AI model providers instead of Sim's hosted keys to pay base prices with no markup.
Use your own API keys for supported providers instead of Sim's hosted keys to pay base prices with no markup.
### Supported Providers
@@ -121,7 +220,17 @@ Use your own API keys for AI model providers instead of Sim's hosted keys to pay
| OpenAI | Knowledge Base embeddings, Agent block |
| Anthropic | Agent block |
| Google | Agent block |
| Mistral | Knowledge Base OCR |
| Mistral | Knowledge Base OCR, Agent block |
| Fireworks | Agent block |
| Firecrawl | Web scraping, crawling, search, and extraction |
| Exa | AI-powered search and research |
| Serper | Google search API |
| Linkup | Web search and content retrieval |
| Parallel AI | Web search, extraction, and deep research |
| Perplexity | AI-powered chat and web search |
| Jina AI | Web reading and search |
| Google Cloud | Translate, Maps, PageSpeed, and Books APIs |
| Brandfetch | Brand assets, logos, colors, and company info |
### Setup
@@ -152,20 +261,20 @@ Each voice session is billed when it starts. In deployed chat voice mode, each c
## Plans
Sim has two paid plan tiers **Pro** and **Max**. Either can be used individually or with a team. Team plans pool credits across all seats in the organization.
Sim has two paid plan tiers - **Pro** and **Max**. Either can be used individually or with a team. Team plans pool credits across all seats in the organization.
| Plan | Price | Credits Included | Daily Refresh |
|------|-------|------------------|---------------|
| **Community** | $0 | 1,000 (one-time) | |
| **Community** | $0 | 1,000 (one-time) | - |
| **Pro** | $25/mo | 6,000/mo | +50/day |
| **Max** | $100/mo | 25,000/mo | +200/day |
| **Enterprise** | Custom | Custom | |
| **Enterprise** | Custom | Custom | - |
To use Pro or Max with a team, select **Get For Team** in subscription settings and choose the tier and number of seats. Credits are pooled across the organization at the per-seat rate (e.g. Max for Teams with 3 seats = 75,000 credits/mo pooled).
### Daily Refresh Credits
Paid plans include a small daily credit allowance that does not count toward your plan limit. Each day, usage up to the daily refresh amount is excluded from billable usage. This allowance resets every 24 hours and does not carry over use it or lose it.
Paid plans include a small daily credit allowance that does not count toward your plan limit. Each day, usage up to the daily refresh amount is excluded from billable usage. This allowance resets every 24 hours and does not carry over - use it or lose it.
| Plan | Daily Refresh |
|------|---------------|
@@ -210,17 +319,6 @@ By default, your usage is capped at the credits included in your plan. To allow
Max (individual) shares the same rate limits as team plans. Team plans (Pro or Max for Teams) use the Max-tier rate limits.
### Concurrent Execution Limits
| Plan | Concurrent Executions |
|------|----------------------|
| **Free** | 5 |
| **Pro** | 50 |
| **Max / Team** | 200 |
| **Enterprise** | 200 (customizable) |
Concurrent execution limits control how many workflow executions can run simultaneously within a workspace. When the limit is reached, new executions are queued and admitted as running executions complete. Manual runs from the editor are not subject to these limits.
### File Storage
| Plan | Storage |
@@ -232,18 +330,18 @@ Concurrent execution limits control how many workflow executions can run simulta
Team plans (Pro or Max for Teams) use 500 GB.
### Execution Time Limits
### Run Time Limits
| Plan | Sync | Async |
|------|------|-------|
| **Free** | 5 minutes | 90 minutes |
| **Pro / Max / Team / Enterprise** | 50 minutes | 90 minutes |
**Sync executions** run immediately and return results directly. These are triggered via the API with `async: false` (default) or through the UI.
**Async executions** (triggered via API with `async: true`, webhooks, or schedules) run in the background.
**Sync runs** start immediately and return results directly. These are triggered via the API with `async: false` (default) or through the UI.
**Async runs** (triggered via API with `async: true`, webhooks, or schedules) run in the background.
<Callout type="info">
If a workflow exceeds its time limit, it will be terminated and marked as failed with a timeout error. Design long-running workflows to use async execution or break them into smaller workflows.
If a workflow exceeds its time limit, it will be terminated and marked as failed with a timeout error. Design long-running workflows to use async runs or break them into smaller workflows.
</Callout>
## Billing Model
@@ -252,7 +350,7 @@ Sim uses a **base subscription + overage** billing model:
### How It Works
**Pro Plan ($25/month 6,000 credits):**
**Pro Plan ($25/month - 6,000 credits):**
- Monthly subscription includes 6,000 credits of usage
- Usage under 6,000 credits → No additional charges
- Usage over 6,000 credits (with on-demand enabled) → Pay the overage at month end
@@ -343,6 +441,21 @@ curl -X GET -H "X-API-Key: YOUR_API_KEY" -H "Content-Type: application/json" htt
- `limit` is derived from individual limits (Free/Pro/Max) or pooled organization limits (Team/Enterprise)
- `plan` is the highest-priority active plan associated with your user
## Purchasing Additional Credits
Pro and Team plan users can buy additional credits at any time in **Settings → Subscription → Credit Balance**:
- **Range**: $10 to $1,000 per purchase
- **Conversion**: 1 credit = $0.005 (a $10 purchase adds 2,000 credits)
- **Availability**: Credits are added immediately after payment
- **Expiration**: Purchased credits do not expire
- **Refunds**: Purchases are non-refundable
- **Team plans**: Only organization owners and admins can purchase credits. Purchased credits are added to the team's shared pool.
<Callout type="info">
Enterprise users should contact support for credit adjustments.
</Callout>
## Cost Optimization Strategies
- **Model Selection**: Choose models based on task complexity. Simple tasks can use GPT-4.1-nano while complex reasoning might need o1 or Claude Opus.
@@ -354,18 +467,18 @@ curl -X GET -H "X-API-Key: YOUR_API_KEY" -H "Content-Type: application/json" htt
## Next Steps
- Review your current usage in [Settings → Subscription](https://sim.ai/settings/subscription)
- Learn about [Logging](/execution/logging) to track execution details
- Learn about [Logging](/execution/logging) to track run details
- Explore the [External API](/execution/api) for programmatic cost monitoring
- Check out [workflow optimization techniques](/blocks) to reduce costs
import { FAQ } from '@/components/ui/faq'
<FAQ items={[
{ question: "How much does a single workflow execution cost?", answer: "Every execution incurs a base charge of 1 credit ($0.005). On top of that, any AI model usage is billed based on token consumption. Workflows that do not use AI blocks only pay the base execution charge." },
{ question: "How much does a single workflow run cost?", answer: "Every run incurs a base charge of 1 credit ($0.005). On top of that, any AI model usage is billed based on token consumption. Workflows that do not use AI blocks only pay the base run charge." },
{ question: "What is the credit-to-dollar conversion rate?", answer: "1 credit equals $0.005. All plan limits, usage meters, and billing thresholds in the Sim UI are displayed in credits." },
{ question: "Do unused daily refresh credits carry over?", answer: "No. Daily refresh credits reset every 24 hours and do not accumulate. If you do not use them within the day, they are lost." },
{ question: "What happens when I exceed my plan's credit limit?", answer: "By default, your usage is capped at your plan's included credits and executions will stop. If you enable on-demand billing or manually raise your usage limit in Settings, you can continue running workflows and pay for the overage at the end of the billing period." },
{ question: "What happens when I exceed my plan's credit limit?", answer: "By default, your usage is capped at your plan's included credits and runs will stop. If you enable on-demand billing or manually raise your usage limit in Settings, you can continue running workflows and pay for the overage at the end of the billing period." },
{ question: "How does the 1.1x hosted model multiplier work?", answer: "When you use Sim's hosted API keys (instead of bringing your own), a 1.1x multiplier is applied to the base model pricing for Agent blocks. This covers infrastructure and API management costs. You can avoid this multiplier by using your own API keys via the BYOK feature." },
{ question: "Are there any free options for AI models?", answer: "Yes. If you run local models through Ollama or VLLM, there are no API costs for those model calls. You still pay the base execution charge of 1 credit per execution." },
{ question: "Are there any free options for AI models?", answer: "Yes. If you run local models through Ollama or VLLM, there are no API costs for those model calls. You still pay the base run charge of 1 credit per run." },
{ question: "When does threshold billing trigger?", answer: "When on-demand billing is enabled and your unbilled overage reaches $50, Sim automatically bills the full unbilled amount. This spreads large charges throughout the month instead of accumulating one large bill at period end." },
]} />

View File

@@ -156,7 +156,7 @@ Use `url` for direct downloads or `base64` for inline processing.
- **Dropbox** - Dropbox file operations
<Callout type="info">
Files are automatically available to downstream blocks. The execution engine handles all file transfer and format conversion.
Files are automatically available to downstream blocks. The engine handles all file transfer and format conversion.
</Callout>
## Best Practices
@@ -165,15 +165,15 @@ Use `url` for direct downloads or `base64` for inline processing.
2. **Check file types** - Ensure the file type matches what the receiving block expects. The Vision block needs images, the File block handles documents.
3. **Consider file size** - Large files increase execution time. For very large files, consider using storage blocks (S3, Supabase) for intermediate storage.
3. **Consider file size** - Large files increase run time. For very large files, consider using storage blocks (S3, Supabase) for intermediate storage.
import { FAQ } from '@/components/ui/faq'
<FAQ items={[
{ question: "What is the maximum file size for uploads?", answer: "The maximum file size for files processed during workflow execution is 20 MB. Files exceeding this limit will be rejected with an error indicating the actual file size. For larger files, use storage blocks like S3 or Supabase for intermediate storage." },
{ question: "What is the maximum file size for uploads?", answer: "The maximum file size for files processed during a workflow run is 20 MB. Files exceeding this limit will be rejected with an error indicating the actual file size. For larger files, use storage blocks like S3 or Supabase for intermediate storage." },
{ question: "What file input formats are supported via the API?", answer: "When triggering a workflow via API, you can send files as base64-encoded data (using a data URI with the format 'data:{mime};base64,{data}') or as a URL pointing to a publicly accessible file. In both cases, include the file name and MIME type in the request." },
{ question: "How are files passed between blocks internally?", answer: "Files are represented as standardized UserFile objects with name, url, base64, type, and size properties. Most blocks accept the full file object and extract what they need automatically, so you typically pass the entire object rather than individual properties." },
{ question: "Which blocks can output files?", answer: "Gmail outputs email attachments, Slack outputs downloaded files, TTS generates audio files, Video Generator and Image Generator produce media files. Storage blocks like S3, Supabase, Google Drive, and Dropbox can also retrieve files for use in downstream blocks." },
{ question: "Do I need to extract base64 or URL from file objects manually?", answer: "No. Most blocks accept the full file object and handle the format conversion automatically. Simply pass the entire file reference (e.g., <gmail.attachments[0]>) and the receiving block will extract the data it needs." },
{ question: "How do file fields work in the Start block's input format?", answer: "When you define a field with type 'file[]' in the Start block's input format, the execution engine automatically processes incoming file data (base64 or URL) and uploads it to storage, converting it into UserFile objects before the workflow runs." },
{ question: "How do file fields work in the Start block's input format?", answer: "When you define a field with type 'file[]' in the Start block's input format, the engine automatically processes incoming file data (base64 or URL) and uploads it to storage, converting it into UserFile objects before the workflow runs." },
]} />

View File

@@ -7,10 +7,10 @@ import { Card, Cards } from 'fumadocs-ui/components/card'
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'
Sim's execution engine brings your workflows to life by processing blocks in the correct order, managing data flow, and handling errors gracefully, so you can understand exactly how workflows are executed in Sim.
Sim's execution engine brings your workflows to life by processing blocks in the correct order, managing data flow, and handling errors gracefully. This page explains exactly how workflows run in Sim.
<Callout type="info">
Every workflow execution follows a deterministic path based on your block connections and logic, ensuring predictable and reliable results.
Every workflow run follows a deterministic path based on your block connections and logic, ensuring predictable and reliable results.
</Callout>
## Documentation Overview
@@ -22,33 +22,42 @@ Sim's execution engine brings your workflows to life by processing blocks in the
</Card>
<Card title="Logging" href="/execution/logging">
Monitor workflow executions with comprehensive logging and real-time visibility
Monitor workflow runs with comprehensive logging and real-time visibility
</Card>
<Card title="Cost Calculation" href="/execution/costs">
Understand how workflow execution costs are calculated and optimized
Understand how workflow run costs are calculated and optimized
</Card>
<Card title="External API" href="/execution/api">
Access execution logs and set up webhooks programmatically via REST API
Access run logs and set up webhooks programmatically via REST API
</Card>
<Card title="API Deployment" href="/execution/api-deployment">
Deploy your workflow as a REST API endpoint with sync, streaming, and async modes
</Card>
<Card title="Chat Deployment" href="/execution/chat">
Deploy your workflow as a conversational chat interface with streaming, file uploads, and voice
</Card>
</Cards>
## Key Concepts
### Topological Execution
Blocks execute in dependency order, similar to how a spreadsheet recalculates cells. The execution engine automatically determines which blocks can run based on completed dependencies.
Blocks run in dependency order, similar to how a spreadsheet recalculates cells. The execution engine automatically determines which blocks can run based on completed dependencies.
### Path Tracking
The engine actively tracks execution paths through your workflow. Router and Condition blocks dynamically update these paths, ensuring only relevant blocks execute.
The engine actively tracks run paths through your workflow. Router and Condition blocks dynamically update these paths, ensuring only relevant blocks run.
### Layer-Based Processing
Instead of executing blocks one-by-one, the engine identifies layers of blocks that can run in parallel, optimizing performance for complex workflows.
### Execution Context
Each workflow maintains a rich context during execution containing:
### Run Context
Each workflow maintains a rich context during a run containing:
- Block outputs and states
- Active execution paths
- Active run paths
- Loop and parallel iteration tracking
- Environment variables
- Routing decisions
@@ -56,23 +65,57 @@ Each workflow maintains a rich context during execution containing:
## Deployment Snapshots
API, Chat, Schedule, and Webhook executions run against the workflows active deployment snapshot. Manual runs from the editor execute the current draft canvas state, letting you test changes before deploying. Publish a new deployment whenever you change the canvas so every trigger uses the updated version.
API, Chat, Schedule, and Webhook runs use the workflow's active deployment snapshot. Manual runs from the editor use the current draft canvas state, letting you test changes before deploying. Publish a new deployment whenever you change the canvas so every trigger uses the updated version.
<div className='flex justify-center my-6'>
<div className="flex justify-center my-6">
<Image
src='/static/execution/deployment-versions.png'
alt='Deployment versions table'
src="/static/execution/deployment-versions.png"
alt="Deployment versions table"
width={500}
height={280}
className='rounded-xl border border-border shadow-sm'
className="rounded-xl border border-border shadow-sm"
/>
</div>
The Deploy modal keeps a full version history—inspect any snapshot, compare it against your draft, and promote or roll back with one click when you need to restore a prior release.
### Version History
## Programmatic Execution
The **General** tab in the Deploy modal shows a version history table for every deployment. Each row shows the version name, who deployed it, and when.
Execute workflows from your applications using our official SDKs:
<div className="flex justify-center">
<Image
src="/static/execution/deployment-versions-table.png"
alt="Version history table with multiple deployment versions"
width={600}
height={650}
className="my-6"
/>
</div>
From the version table you can:
- **Rename** a version to give it a meaningful label (e.g., "v2 — added error handling")
- **Add a description** with notes about what changed in that deployment
- **Promote to live** to roll back to an older version — this makes the selected version the active deployment without changing your draft canvas
- **Load into editor** to restore a previous version's workflow into the canvas for editing and redeploying
- **Preview a version** by selecting a row to view that version's workflow in the canvas preview, then toggle between **Live** and the selected version
<div className="flex justify-center">
<Image
src="/static/execution/deployment-version-preview.png"
alt="Previewing a selected deployment version"
width={600}
height={650}
className="my-6"
/>
</div>
<Callout type="info">
Promoting an old version takes effect immediately — all API, Chat, Schedule, and Webhook executions will use the promoted version. Your draft canvas is not affected.
</Callout>
## Programmatic Access
Run workflows from your applications using our official SDKs:
```bash
# TypeScript/JavaScript
@@ -107,21 +150,21 @@ const result = await client.executeWorkflow('workflow-id', {
- Use parallel execution for independent operations
- Cache results with Memory blocks when appropriate
### Monitor Executions
### Monitor Runs
- Review logs regularly to understand performance patterns
- Track costs for AI model usage
- Use workflow snapshots to debug issues
## What's Next?
Start with [Execution Basics](/execution/basics) to understand how workflows run, then explore [Logging](/execution/logging) to monitor your executions and [Cost Calculation](/execution/costs) to optimize your spending.
Start with [Execution Basics](/execution/basics) to understand how workflows run, then explore [Logging](/execution/logging) to monitor your runs and [Cost Calculation](/execution/costs) to optimize your spending.
<FAQ items={[
{ question: "What are the execution timeout limits?", answer: "Synchronous executions (API, chat) have a default timeout of 5 minutes on the Free plan and 50 minutes on Pro, Team, and Enterprise plans. Asynchronous executions (schedules, webhooks) allow up to 90 minutes across all plans. These limits are configurable by the platform administrator." },
{ question: "What are the run timeout limits?", answer: "Synchronous runs (API, chat) have a default timeout of 5 minutes on the Free plan and 50 minutes on Pro, Team, and Enterprise plans. Asynchronous runs (schedules, webhooks) allow up to 90 minutes across all plans. These limits are configurable by the platform administrator." },
{ question: "How does parallel execution work?", answer: "The engine identifies layers of blocks with no dependencies on each other and runs them concurrently. Within loops and parallel blocks, the engine supports up to 20 parallel branches by default and up to 1,000 loop iterations. Nested subflows (loops inside parallels, or vice versa) are supported up to 10 levels deep." },
{ question: "Can I cancel a running execution?", answer: "Yes. The engine supports cancellation through an abort signal mechanism. When you cancel an execution, the engine checks for cancellation between block executions (at roughly 500ms intervals when using Redis-backed cancellation). Any in-progress blocks complete, and the execution returns with a cancelled status." },
{ question: "What is a deployment snapshot?", answer: "A deployment snapshot is a frozen copy of your workflow at the time you click Deploy. Trigger-based executions (API, chat, schedule, webhook) run against the active snapshot, not your draft canvas. Manual runs from the editor execute the current draft canvas state, so you can test changes before deploying. You can view, compare, and roll back snapshots from the Deploy modal." },
{ question: "How are execution costs calculated?", answer: "Costs are tracked per block based on the AI model used. Each block log records input tokens, output tokens, and the computed cost using the model's pricing. The total workflow cost is the sum of all block-level costs for that execution. You can review costs in the execution logs." },
{ question: "What happens when a block fails during execution?", answer: "When a block throws an error, the engine captures the error message in the block log, finalizes any incomplete logs with timing data, and halts the execution with a failure status. If the failing block has an error output handle connected to another block, that error path is followed instead of halting entirely." },
{ question: "Can I re-run part of a workflow without starting from scratch?", answer: "Yes. The run-from-block feature lets you select a specific block and re-execute from that point. The engine computes which upstream blocks need to be re-run (the dirty set) and preserves cached outputs from blocks that have not changed, so only the affected portion of the workflow re-executes." },
{ question: "Can I cancel a running workflow?", answer: "Yes. The engine supports cancellation through an abort signal mechanism. When you cancel a run, the engine checks for cancellation between blocks (at roughly 500ms intervals when using Redis-backed cancellation). Any in-progress blocks complete, and the run returns with a cancelled status." },
{ question: "What is a deployment snapshot?", answer: "A deployment snapshot is a frozen copy of your workflow at the time you click Deploy. Trigger-based runs (API, chat, schedule, webhook) use the active snapshot, not your draft canvas. Manual runs from the editor use the current draft canvas state, so you can test changes before deploying. You can view, compare, and roll back snapshots from the Deploy modal." },
{ question: "How are run costs calculated?", answer: "Costs are tracked per block based on the AI model used. Each block log records input tokens, output tokens, and the computed cost using the model's pricing. The total workflow cost is the sum of all block-level costs for that run. You can review costs in the run logs." },
{ question: "What happens when a block fails during a run?", answer: "When a block throws an error, the engine captures the error message in the block log, finalizes any incomplete logs with timing data, and halts the run with a failure status. If the failing block has an error output handle connected to another block, that error path is followed instead of halting entirely." },
{ question: "Can I re-run part of a workflow without starting from scratch?", answer: "Yes. The run-from-block feature lets you select a specific block and re-run from that point. The engine computes which upstream blocks need to be re-run (the dirty set) and preserves cached outputs from blocks that have not changed, so only the affected portion of the workflow re-runs." },
]} />

View File

@@ -6,7 +6,7 @@ import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Image } from '@/components/ui/image'
Sim provides comprehensive logging for all workflow executions, giving you complete visibility into how your workflows run, what data flows through them, and where issues might occur.
Sim provides comprehensive logging for all workflow runs, giving you complete visibility into how your workflows behave, what data flows through them, and where issues might occur.
## Logging System
@@ -14,7 +14,7 @@ Sim offers two complementary logging interfaces to match different workflows and
### Real-Time Console
During manual or chat workflow execution, logs appear in real-time in the Console panel on the right side of the workflow editor:
During manual or chat workflow runs, logs appear in real-time in the Console panel on the right side of the workflow editor:
<div className="flex justify-center">
<Image
@@ -27,14 +27,14 @@ During manual or chat workflow execution, logs appear in real-time in the Consol
</div>
The console shows:
- Block execution progress with active block highlighting
- Block progress with active block highlighting
- Real-time outputs as blocks complete
- Execution timing for each block
- Timing for each block
- Success/error status indicators
### Logs Page
All workflow executions—whether triggered manually, via API, Chat, Schedule, or Webhook—are logged to the dedicated Logs page:
All workflow runs—whether triggered manually, via API, Chat, Schedule, or Webhook—are logged to the dedicated Logs page:
<div className="flex justify-center">
<Image
@@ -72,7 +72,7 @@ View the complete data flow for each block with tabs to switch between:
<Tabs items={['Output', 'Input']}>
<Tab>
**Output Tab** shows the block's execution result:
**Output Tab** shows the block's result:
- Structured data with JSON formatting
- Markdown rendering for AI-generated content
- Copy button for easy data extraction
@@ -87,17 +87,17 @@ View the complete data flow for each block with tabs to switch between:
</Tab>
</Tabs>
### Execution Timeline
### Run Timeline
For workflow-level logs, view detailed execution metrics:
For workflow-level logs, view detailed run metrics:
- Start and end timestamps
- Total workflow duration
- Individual block execution times
- Individual block run times
- Performance bottleneck identification
## Workflow Snapshots
For any logged execution, click "View Snapshot" to see the exact workflow state at execution time:
For any logged run, click "View Snapshot" to see the exact workflow state at the time of the run:
<div className="flex justify-center">
<Image
@@ -111,12 +111,12 @@ For any logged execution, click "View Snapshot" to see the exact workflow state
The snapshot provides:
- Frozen canvas showing the workflow structure
- Block states and connections as they were during execution
- Block states and connections as they were during the run
- Click any block to see its inputs and outputs
- Useful for debugging workflows that have since been modified
<Callout type="info">
Workflow snapshots are only available for executions after the enhanced logging system was introduced. Older migrated logs show a "Logged State Not Found" message.
Workflow snapshots are only available for runs after the enhanced logging system was introduced. Older migrated logs show a "Logged State Not Found" message.
</Callout>
## Log Retention
@@ -134,11 +134,11 @@ The snapshot provides:
### For Production
- Monitor the Logs page regularly for errors or performance issues
- Set up filters to focus on specific workflows or time periods
- Use live mode during critical deployments to watch executions in real-time
- Use live mode during critical deployments to watch runs in real-time
### For Debugging
- Always check the execution timeline to identify slow blocks
- Compare inputs between working and failing executions
- Always check the run timeline to identify slow blocks
- Compare inputs between working and failing runs
- Use workflow snapshots to see the exact state when issues occurred
## Next Steps
@@ -150,10 +150,10 @@ The snapshot provides:
import { FAQ } from '@/components/ui/faq'
<FAQ items={[
{ question: "How long are execution logs retained?", answer: "Free plans retain logs for 7 days — after that, logs are archived to cloud storage and deleted from the database. Pro, Team, and Enterprise plans retain logs indefinitely with no automatic cleanup." },
{ question: "What data is captured in each execution log?", answer: "Each log entry includes the execution ID, workflow ID, trigger type, start and end timestamps, total duration in milliseconds, cost breakdown (total cost, token counts, and per-model breakdowns), execution data with trace spans, final output, and any associated files. The log details sidebar lets you inspect block-level inputs and outputs." },
{ question: "How long are run logs retained?", answer: "Free plans retain logs for 7 days — after that, logs are archived to cloud storage and deleted from the database. Pro, Team, and Enterprise plans retain logs indefinitely with no automatic cleanup." },
{ question: "What data is captured in each run log?", answer: "Each log entry includes the run ID, workflow ID, trigger type, start and end timestamps, total duration in milliseconds, cost breakdown (total cost, token counts, and per-model breakdowns), run data with trace spans, final output, and any associated files. The log details sidebar lets you inspect block-level inputs and outputs." },
{ question: "Are API keys visible in the logs?", answer: "No. API keys and credentials are automatically redacted in the log input tab for security. You can safely inspect block inputs without exposing sensitive values." },
{ question: "What is a workflow snapshot?", answer: "A workflow snapshot is a frozen copy of the workflow's structure (blocks, connections, and configuration) captured at execution time. It lets you see the exact state of the workflow when a particular execution ran, which is useful for debugging workflows that have been modified since the execution." },
{ question: "Can I access logs programmatically?", answer: "Yes. The External API provides endpoints to query logs with filtering by workflow, time range, trigger type, duration, cost, and model. You can also set up webhook, email, or Slack notifications for real-time alerts when executions complete." },
{ question: "What does Live mode do on the Logs page?", answer: "Live mode automatically refreshes the Logs page in real-time so new execution entries appear as they are logged, without requiring manual page refreshes. This is useful during deployments or when monitoring active workflows." },
{ question: "What is a workflow snapshot?", answer: "A workflow snapshot is a frozen copy of the workflow's structure (blocks, connections, and configuration) captured at the time of a run. It lets you see the exact state of the workflow when a particular run happened, which is useful for debugging workflows that have been modified since." },
{ question: "Can I access logs programmatically?", answer: "Yes. The External API provides endpoints to query logs with filtering by workflow, time range, trigger type, duration, cost, and model. You can also set up webhook, email, or Slack notifications for real-time alerts when runs complete." },
{ question: "What does Live mode do on the Logs page?", answer: "Live mode automatically refreshes the Logs page in real-time so new log entries appear as they are recorded, without requiring manual page refreshes. This is useful during deployments or when monitoring active workflows." },
]} />

View File

@@ -1,3 +1,3 @@
{
"pages": ["index", "basics", "files", "api", "logging", "costs"]
"pages": ["index", "basics", "files", "api", "api-deployment", "chat", "logging", "costs"]
}

View File

@@ -170,17 +170,17 @@ Build, test, and refine workflows quickly with immediate feedback
## Next Steps
<Cards>
<Card title="Explore Workflow Blocks" href="/blocks">
Discover API, Function, Condition, and other workflow blocks
<Card title="Explore Blocks" href="/blocks">
Discover API, Function, Condition, and other blocks
</Card>
<Card title="Browse Integrations" href="/tools">
Connect 160+ services including Gmail, Slack, Notion, and more
Connect 1,000+ services including Gmail, Slack, Notion, and more
</Card>
<Card title="Add Custom Logic" href="/blocks/function">
Write custom functions for advanced data processing
</Card>
<Card title="Deploy Your Workflow" href="/execution">
Make your workflow accessible via REST API or webhooks
<Card title="Deploy Your Agent" href="/execution">
Make your agent accessible via REST API or webhooks
</Card>
</Cards>
@@ -188,7 +188,7 @@ Build, test, and refine workflows quickly with immediate feedback
**Need detailed explanations?** Visit the [Blocks documentation](/blocks) for comprehensive guides on each component.
**Looking for integrations?** Explore the [Tools documentation](/tools) to see all 160+ available integrations.
**Looking for integrations?** Explore the [Tools documentation](/tools) to see all 1,000+ available integrations.
**Ready to go live?** Learn about [Execution and Deployment](/execution) to make your workflows production-ready.
@@ -199,5 +199,5 @@ Build, test, and refine workflows quickly with immediate feedback
{ question: "Can I use a different AI model instead of GPT-4o?", answer: "Yes. The Agent block supports models from OpenAI, Anthropic, Google, Groq, Cerebras, DeepSeek, Mistral, xAI, and more. You can select any available model from the dropdown. If you self-host, you can also use local models through Ollama." },
{ question: "Can I import workflows from other tools?", answer: "Sim does not currently support importing workflows from other automation platforms. However, you can use the Copilot feature to describe what you want in natural language and have it build the workflow for you, which is often faster than manual recreation." },
{ question: "What if my workflow does not produce the expected output?", answer: "Use the Chat panel to test iteratively and inspect outputs from each block. You can click the dropdown to view different block outputs and pinpoint where the issue is. The execution logs (accessible from the Logs tab) show detailed information about each step including token usage, costs, and any errors." },
{ question: "Where do I go after completing this tutorial?", answer: "Explore the Blocks documentation to learn about Condition, Router, Function, and API blocks. Browse the Tools section to discover 160+ integrations you can add to your agents. When you are ready to deploy, check the Execution docs for REST API, webhook, and scheduled trigger options." },
{ question: "Where do I go after completing this tutorial?", answer: "Explore the Blocks documentation to learn about Condition, Router, Function, and API blocks. Browse the Tools section to discover 1,000+ integrations you can add to your agents. When you are ready to deploy, check the Execution docs for REST API, webhook, and scheduled trigger options." },
]} />

View File

@@ -6,7 +6,7 @@ import { Card, Cards } from 'fumadocs-ui/components/card'
# Sim Documentation
Welcome to Sim, a visual workflow builder for AI applications. Build powerful AI agents, automation workflows, and data processing pipelines by connecting blocks on a canvas.
Welcome to Sim, the open-source AI workspace where teams build, deploy, and manage AI agents. Create agents visually with the workflow builder, conversationally through Mothership, or programmatically with the API — connected to 1,000+ integrations and every major LLM.
## Quick Start
@@ -15,13 +15,13 @@ Welcome to Sim, a visual workflow builder for AI applications. Build powerful AI
Learn what you can build with Sim
</Card>
<Card title="Getting Started" href="/getting-started">
Create your first workflow in 10 minutes
Build your first agent in 10 minutes
</Card>
<Card title="Workflow Blocks" href="/blocks">
<Card title="Blocks" href="/blocks">
Learn about the building blocks
</Card>
<Card title="Tools & Integrations" href="/tools">
Explore 80+ built-in integrations
Explore 1,000+ integrations
</Card>
</Cards>
@@ -35,10 +35,10 @@ Welcome to Sim, a visual workflow builder for AI applications. Build powerful AI
Work with workflow and environment variables
</Card>
<Card title="Execution" href="/execution">
Monitor workflow runs and manage costs
Monitor agent runs and manage costs
</Card>
<Card title="Triggers" href="/triggers">
Start workflows via API, webhooks, or schedules
Start agents via API, webhooks, or schedules
</Card>
</Cards>
@@ -51,7 +51,7 @@ Welcome to Sim, a visual workflow builder for AI applications. Build powerful AI
<Card title="MCP Integration" href="/mcp">
Connect external services with Model Context Protocol
</Card>
<Card title="SDKs" href="/sdks">
<Card title="SDKs" href="/api-reference">
Integrate Sim into your applications
</Card>
</Cards>

View File

@@ -0,0 +1,140 @@
---
title: Integrations
description: Connect third-party services and OAuth accounts for your workflows
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'
Integrations are authenticated connections to third-party services like Gmail, Slack, GitHub, Dropbox, and more. Sim handles the OAuth flow, token storage, and automatic token refresh — you connect once and select the account in any block that needs it.
You can connect **multiple accounts per service** — for example, two separate Gmail accounts for different workflows.
## Managing Integrations
To manage integrations, open your workspace **Settings** and navigate to the **Integrations** tab.
<Image
src="/static/integrations/integrations-list.png"
alt="Integrations tab showing connected accounts with service icons, names, and Details/Disconnect buttons"
width={700}
height={500}
/>
The list shows all your connected accounts with the service icon, display name, and provider. Each entry has a **Details** button and a **Disconnect** button.
## Connecting an Account
Click **+ Connect** in the top right to open the connection modal.
<Image
src="/static/integrations/connect-service-picker.png"
alt="Connect Integration modal showing a searchable list of available services"
width={500}
height={400}
/>
Search for or select the service you want to connect, then fill in the connection details:
<Image
src="/static/integrations/connect-modal.png"
alt="Connect Gmail modal showing permissions requested, display name field, and description field"
width={500}
height={450}
/>
1. Review the **Permissions requested** — these are the scopes Sim will request from the provider
2. Enter a **Display name** to identify this connection (e.g. "Work Gmail" or "Marketing Slack")
3. Optionally add a **Description**
4. Click **Connect** and complete the authorization flow
## Using Integrations in Workflows
Blocks that require authentication (e.g. Gmail, Slack, Google Sheets) display a credential selector. Select the connected account you want that block to use.
<Image
src="/static/credentials/oauth-selector.png"
alt="Gmail block showing the account selector dropdown with connected accounts"
width={500}
height={350}
/>
You can also connect additional accounts directly from the block by selecting **Connect another [service] account** at the bottom of the dropdown.
<Callout type="info">
If a block requires an integration and none is selected, the workflow will fail at that step.
</Callout>
## Using a Credential ID
Each integration has a unique credential ID you can use to reference it dynamically. This is useful when you have multiple accounts for the same service and want to switch between them programmatically — for example, routing different workflow runs to different Gmail accounts based on a variable.
To copy a credential ID, open **Details** on any integration and click the clipboard icon next to the Display Name.
<Image
src="/static/integrations/copy-credential-id.png"
alt="Integration details showing the Copy credential ID tooltip on the clipboard icon next to the Display Name"
width={700}
height={150}
/>
In any block that requires an integration, click **Switch to manual ID** next to the credential selector to switch from the dropdown to a text field.
<Image
src="/static/integrations/switch-to-manual-id.png"
alt="Block showing the Switch to manual ID button next to the account selector"
width={500}
height={200}
/>
Paste or reference the credential ID in that field. You can use a `{{SECRET}}` reference or a block output variable to make it dynamic.
<Image
src="/static/integrations/manual-credential-id.png"
alt="Block showing the Enter credential ID text field after switching to manual mode"
width={500}
height={200}
/>
## Integration Details
Click **Details** on any integration to open its detail view.
<Image
src="/static/integrations/integration-details.png"
alt="Integration details view showing Display Name, Description, Members, Reconnect, and Disconnect"
width={700}
height={420}
/>
From here you can:
- Edit the **Display Name** and **Description**
- Manage **Members** — invite teammates by email and assign them an **Admin** or **Member** role
- **Reconnect** — re-authorize the connection if it has expired or if you need to update permissions
- **Disconnect** — remove the integration entirely
Click **Save** to apply changes, or **Back** to return to the list.
<Callout type="warn">
If you disconnect an integration that is used in a workflow, that workflow will fail at any block referencing it. Update blocks before disconnecting.
</Callout>
## Access Control
Each integration has role-based access:
- **Admin** — can view, edit, disconnect, reconnect, and manage member access
- **Member** — can use the integration in workflows (read-only)
When you connect an integration, you are automatically set as its Admin. You can share it with teammates from the Details view.
<FAQ items={[
{ question: "Does Sim handle OAuth token refresh automatically?", answer: "Yes. When an integration is used during execution, Sim checks whether the access token has expired and automatically refreshes it using the stored refresh token before making the API call. You do not need to handle token refresh manually." },
{ question: "Can I connect multiple accounts for the same service?", answer: "Yes. You can connect multiple accounts per service (for example, two separate Gmail accounts). Each block lets you select which account to use from the credential dropdown. This is useful when different workflows need different identities or permissions." },
{ question: "What is a credential ID and when should I use it?", answer: "Each integration has a unique credential ID that you can use instead of the dropdown selector. This lets you pass the credential dynamically — for example, from a variable or a previous block's output — so the same workflow can use different accounts depending on the context. Copy the ID from the Details view and use Switch to manual ID in any block to paste or reference it." },
{ question: "What happens if an OAuth token can no longer be refreshed?", answer: "If a refresh fails (e.g. the user revoked access or the refresh token expired), the workflow will fail at the block using that integration. Open Settings → Integrations, find the connection, and use the Reconnect button to re-authorize it." },
{ question: "Are OAuth tokens encrypted at rest?", answer: "Yes. OAuth tokens are encrypted before being stored in the database and are never exposed in the workflow editor, logs, or API responses." },
{ question: "What happens if I disconnect an integration that is used in a workflow?", answer: "Any block referencing the disconnected integration will fail at runtime. Make sure to update those blocks before disconnecting, or reconnect the integration to restore access." },
]} />

View File

@@ -0,0 +1,5 @@
{
"title": "Integrations",
"pages": ["index", "google-service-account"],
"defaultOpen": false
}

View File

@@ -8,7 +8,7 @@ import { Image } from '@/components/ui/image'
import { Video } from '@/components/ui/video'
import { FAQ } from '@/components/ui/faq'
Sim is an open-source visual workflow builder for building and deploying AI agent workflows. Design intelligent automation systems using a no-code interface—connect AI models, databases, APIs, and business tools through an intuitive drag-and-drop canvas. Whether you're building chatbots, automating business processes, or orchestrating complex data pipelines, Sim provides the tools to bring your AI workflows to life.
Sim is the open-source AI workspace where teams build, deploy, and manage AI agents. Create agents visually with the workflow builder, conversationally through Mothership, or programmatically with the API. Connect AI models, databases, APIs, and 1,000+ business tools to build agents that automate real work — from chatbots and compliance agents to data pipelines and ITSM automation.
<div className="flex justify-center">
<Image
@@ -40,8 +40,8 @@ Orchestrate complex multi-service interactions. Create unified API endpoints, im
## How It Works
**Visual Workflow Editor**
Design workflows using an intuitive drag-and-drop canvas. Connect AI models, databases, APIs, and third-party services through a visual, no-code interface that makes complex automation logic easy to understand and maintain.
**Visual Workflow Builder**
Design agent logic using an intuitive drag-and-drop canvas. Connect AI models, databases, APIs, and third-party services through a visual interface that makes complex automation easy to understand and maintain.
**Modular Block System**
Build with specialized components: processing blocks (AI agents, API calls, custom functions), logic blocks (conditional branching, loops, routers), and output blocks (responses, evaluators). Each block handles a specific task in your workflow.
@@ -58,7 +58,7 @@ Enable your team to build together. Multiple users can edit workflows simultaneo
## Integrations
Sim provides native integrations with 160+ services across multiple categories:
Sim provides native integrations with 1,000+ services across multiple categories:
- **AI Models**: OpenAI, Anthropic, Google Gemini, Groq, Cerebras, local models via Ollama or VLLM
- **Communication**: Gmail, Slack, Microsoft Teams, Telegram, WhatsApp
@@ -100,17 +100,17 @@ Deploy on your own infrastructure using Docker Compose or Kubernetes. Maintain c
## Next Steps
Ready to build your first AI workflow?
Ready to build your first AI agent?
<Cards>
<Card title="Getting Started" href="/getting-started">
Create your first workflow in 10 minutes
Build your first agent in 10 minutes
</Card>
<Card title="Workflow Blocks" href="/blocks">
<Card title="Blocks" href="/blocks">
Learn about the building blocks
</Card>
<Card title="Tools & Integrations" href="/tools">
Explore 160+ built-in integrations
Explore 1,000+ integrations
</Card>
<Card title="Team Permissions" href="/permissions/roles-and-permissions">
Set up workspace roles and permissions
@@ -121,9 +121,9 @@ Ready to build your first AI workflow?
{ question: "Is Sim free to use?", answer: "Sim offers a free Community plan with 1,000 one-time credits to get started. Paid plans start at $25/month (Pro) with 5,000 credits and go up to $100/month (Max) with 20,000 credits. Annual billing is available at a 15% discount. You can also self-host Sim for free on your own infrastructure." },
{ question: "Is Sim open source?", answer: "Yes. Sim is open source under the Apache 2.0 license. The full source code is available on GitHub and you can self-host it, contribute to development, or modify it for your own needs. Enterprise features (SSO, access control) have a separate license that requires a subscription for production use." },
{ question: "Which AI models and providers are supported?", answer: "Sim supports 15+ providers including OpenAI, Anthropic, Google Gemini, Groq, Cerebras, DeepSeek, Mistral, xAI, and OpenRouter. You can also run local models through Ollama or VLLM at no API cost. Bring Your Own Key (BYOK) is supported so you can use your own API keys at base provider pricing with no markup." },
{ question: "Do I need coding experience to use Sim?", answer: "No. Sim is a no-code visual builder where you design workflows by dragging blocks onto a canvas and connecting them. For advanced use cases, the Function block lets you write custom JavaScript, but it is entirely optional." },
{ question: "Do I need coding experience to use Sim?", answer: "No. Sim lets you build agents visually by dragging blocks onto a canvas and connecting them, or conversationally through Mothership using natural language. For advanced use cases, the Function block lets you write custom JavaScript, and the full API/SDK is available for programmatic access." },
{ question: "Can I self-host Sim?", answer: "Yes. Sim provides Docker Compose configurations for self-hosted deployments. The stack includes the Sim application, a PostgreSQL database with pgvector, and a realtime collaboration server. You can also integrate local AI models via Ollama for a fully offline setup." },
{ question: "Is there a limit on how many workflows I can create?", answer: "There is no limit on the number of workflows you can create on any plan. Usage limits apply to execution credits, rate limits, and file storage, which vary by plan tier." },
{ question: "What integrations are available?", answer: "Sim offers 160+ native integrations across categories including AI models, communication tools (Gmail, Slack, Teams, Telegram), productivity apps (Notion, Google Workspace, Airtable), development tools (GitHub, Jira, Linear), search services (Google Search, Perplexity, Exa), and databases (PostgreSQL, Supabase, Pinecone). For anything not built in, you can use the MCP (Model Context Protocol) support to connect custom services." },
{ question: "How does Sim compare to other workflow automation tools?", answer: "Sim is purpose-built for AI agent workflows rather than general task automation. It provides a visual canvas for orchestrating LLM-powered agents with built-in support for tool use, structured outputs, conditional branching, and real-time collaboration. The Copilot feature also lets you build and modify workflows using natural language." },
{ question: "What integrations are available?", answer: "Sim offers 1,000+ native integrations across categories including AI models, communication tools (Gmail, Slack, Teams, Telegram), productivity apps (Notion, Google Workspace, Airtable), development tools (GitHub, Jira, Linear), search services (Google Search, Perplexity, Exa), and databases (PostgreSQL, Supabase, Pinecone). For anything not built in, you can use the MCP (Model Context Protocol) support to connect custom services." },
{ question: "How does Sim compare to other AI agent builders?", answer: "Sim is an AI workspace — not just a workflow tool or an agent framework. It combines a visual workflow builder, Mothership for natural-language agent creation, knowledge bases, tables, and full observability in one environment. Teams build agents visually, conversationally, or with code, then deploy and manage them with enterprise governance, real-time collaboration, and staging-to-production workflows." },
]} />

View File

@@ -5,13 +5,16 @@ description: Automatically sync documents from external sources into your knowle
import { Callout } from 'fumadocs-ui/components/callout'
import { Step, Steps } from 'fumadocs-ui/components/steps'
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'
Connectors let you pull documents directly from external services into your knowledge base. Instead of manually uploading files, a connector continuously syncs content from sources like Notion, Google Drive, GitHub, Slack, and more — keeping your knowledge base up to date automatically.
Connectors continuously sync documents from external services into your knowledge base, so you never have to upload files manually. New content is added, changed content is re-processed, and deleted content is removed — all automatically.
## Available Connectors
Sim ships with 30 built-in connectors spanning productivity tools, cloud storage, development platforms, and more.
<Image src="/static/connectors/connectors-sources.png" alt="Connect Source picker showing a searchable list of available connectors including Airtable, Asana, Confluence, Discord, Dropbox, Evernote, Fireflies, GitHub, and Gmail" width={800} height={500} />
Sim ships with 30 built-in connectors:
| Category | Connectors |
|----------|-----------|
@@ -29,24 +32,25 @@ Sim ships with 30 built-in connectors spanning productivity tools, cloud storage
## Adding a Connector
From inside a knowledge base, click **+ New connector** in the top right to open the connector picker. Select a service, then complete the setup steps:
<Steps>
<Step>
### Select a source
Open a knowledge base and click **Add Connector**. You'll see the full list of available connectors — pick the service you want to sync from.
</Step>
<Step>
### Authenticate
Most connectors use **OAuth** — select an existing credential from the dropdown, or click **Connect new account** to authorize through the service's login flow. Tokens are refreshed automatically, so you won't need to re-authenticate unless you revoke access.
Most connectors use **OAuth** — select an existing credential from the dropdown or click **Connect new account** to authorize through the service. Tokens are refreshed automatically.
A few connectors (Evernote, Obsidian, Fireflies) use **API keys** instead. Paste your key or developer token directly, and it will be stored securely.
A few connectors use **API keys** instead:
| Connector | Where to get the key |
|-----------|---------------------|
| **Evernote** | Developer Token (starts with `S=`) from your Evernote account settings |
| **Obsidian** | Install the [Local REST API](https://github.com/coddingtonbear/obsidian-local-rest-api) plugin, then copy the key from its settings |
| **Fireflies** | Generate from the Integrations page in your Fireflies account |
<Callout type="info">
If you rotate an API key in the external service, you'll need to update it in Sim as well. OAuth tokens are refreshed automatically, but API keys are not.
If you rotate an API key in the external service, update it in Sim as well — OAuth tokens refresh automatically, but API keys do not.
</Callout>
</Step>
@@ -54,103 +58,135 @@ A few connectors (Evernote, Obsidian, Fireflies) use **API keys** instead. Paste
### Configure
Each connector has its own configuration fields that control what gets synced. Some examples:
Each connector has source-specific fields that control what gets synced. Examples:
- **Notion**: Choose between syncing an entire workspace, a specific database, or a single page tree
- **GitHub**: Specify a repository, branch, and optional file extension filter
- **Confluence**: Enter your Atlassian domain and optionally filter by space key or content type
- **Obsidian**: Provide your vault URL and optionally restrict to a folder path
- **Notion** — sync an entire workspace, a specific database, or a single page tree
- **GitHub** — specify a repository, branch, and optional file extension filter
- **Confluence** — enter your Atlassian domain and optionally filter by space key or content type
- **Obsidian** — provide your vault URL (`https://127.0.0.1:27124` by default) and optionally restrict to a folder path
- **Fireflies** — optionally filter by host email or cap the number of transcripts synced
All configuration is validated when you save — if a repository doesn't exist or a domain is unreachable, you'll get an immediate error.
Configuration is validated on save — if a repository doesn't exist or a domain is unreachable, you'll see an error immediately.
</Step>
<Step>
### Choose sync frequency
Select how often the connector should re-sync:
| Frequency | Description |
|-----------|-------------|
| Frequency | Notes |
|-----------|-------|
| Every hour | Best for fast-moving sources |
| Every 6 hours | Good balance for most use cases |
| Every 6 hours | Good balance for most sources |
| **Daily** (default) | Suitable for content that changes infrequently |
| Weekly | For stable, rarely-updated sources |
| Manual only | Sync only when you trigger it |
| Manual only | Sync only when you trigger it manually |
Sub-hourly frequencies require a Max or Enterprise plan.
</Step>
<Step>
### Configure metadata tags (optional)
If the connector supports metadata tags, you'll see checkboxes for each tag type (e.g., Labels, Last Modified, Notebook). All are enabled by default — uncheck any you don't need.
If the connector supports metadata tags, you'll see checkboxes for each available tag type (e.g., Labels, Last Modified, Notebook). All are enabled by default — uncheck any you don't need.
See the [Metadata Tags](#metadata-tags) section below for details.
Tag slots are shared across all documents in a knowledge base. See [Tags](/knowledgebase/tags) for details.
</Step>
<Step>
### Connect & Sync
Click **Connect & Sync** to save the connector and trigger the first sync immediately. Documents will begin appearing in your knowledge base as they are processed.
Click **Connect & Sync** to save the connector and trigger the first sync. Documents will start appearing as they're processed.
</Step>
</Steps>
## How Syncing Works
## Managing Connectors
On each sync, the connector fetches documents from the external service and compares them against what's already in your knowledge base. Only documents that have actually changed are reprocessed — new content is added, updated content is re-chunked and re-embedded, and documents that no longer exist in the source are removed.
Open **Connected Sources** from the knowledge base to see all active connectors. Each card shows the connector's status, the last sync time and document count, and the next scheduled sync:
This means syncing is efficient even for large document sets. A connector with thousands of documents will only do meaningful work when something changes.
<Image src="/static/connectors/connectors-sync-history.png" alt="Connected Sources panel showing a Google Docs connector with Active status, last sync details, and a sync history log with dated entries" width={800} height={450} />
### Handling Failures
The action buttons on each connector card:
If a single document fails to fetch (e.g., due to a permission issue or timeout), the sync continues with the remaining documents. The failed document will be retried on the next sync cycle.
| Button | Action |
|--------|--------|
| **↻** (Refresh) | Trigger a manual sync immediately. Unavailable while a sync is in progress or the connector is disabled; a 5-minute cooldown applies after each manual trigger |
| **⚙** (Settings) | Open the edit modal to change source config or sync frequency |
| **⏸ / ▶** (Pause / Resume) | Pause scheduled syncs without removing the connector. Resume works from both paused and disabled states |
| **🗑** (Delete) | Remove the connector. A confirmation modal appears with an option to also delete all synced documents |
| **⌄** (Chevron) | Expand to show sync history |
If an entire sync fails (e.g., the service is down or credentials expired), the connector automatically backs off and retries later. The backoff resets as soon as a sync succeeds.
### Editing a Connector
## Metadata Tags
Click the settings icon to open the edit modal. It has two tabs:
Connectors can automatically populate [tags](/docs/knowledgebase/tags) with metadata from the source, letting you filter documents in the Knowledge block based on information from the external service.
**Settings** — change any source-specific config fields (e.g., switch the GitHub branch) and update the sync frequency.
For example, a Notion connector might tag documents with their **Labels**, **Last Modified** date, and **Created** date. A GitHub connector might tag documents with their **Repository** and **File Path**. This metadata becomes available for [tag-based filtering](/docs/knowledgebase/tags) in your workflows.
**Documents** — browse all documents this connector has synced and manage exclusions (see [Excluding Documents](#excluding-documents) below).
### Opting Out
### Sync History
You can disable specific metadata tags during connector setup. Disabled tags won't be populated, leaving those tag slots available for other connectors or manual tagging.
Expand any connector card by clicking the chevron to see a log of recent syncs:
<Callout type="info">
Tag slots are shared across all documents in a knowledge base. If you have multiple connectors, each one's metadata tags draw from the same pool of available slots.
</Callout>
- Each row shows the date/time and a summary of what changed: **+N** (added, green), **~N** (updated, amber), **-N** (deleted, red), **!N** (failed, red), or **No changes**
- A spinner indicates a sync currently in progress
- Error rows show a red icon and the failure message
The log retains the most recent 10 sync runs.
## Excluding Documents
You can manually exclude specific documents from a connector's sync. Excluded documents are skipped on every subsequent sync, even if they change in the source. This is useful for filtering out templates, drafts, or other content you don't want in your knowledge base.
Sometimes a connector syncs documents you don't want in your knowledge base — drafts, templates, confidential pages, and so on. You can exclude them individually.
## Source Links
<Image src="/static/connectors/connectors-excluded.png" alt="Edit Google Docs modal showing the Documents tab with Active (37) and Excluded (0) filter buttons and a 'No excluded documents' message" width={800} height={450} />
Every synced document retains a link back to the original in the external service. This lets you trace any knowledge base document to its source — whether that's a Notion page, a GitHub file, a Confluence article, or a Slack conversation.
To exclude a document, open the connector's settings modal, go to the **Documents** tab, and click **Exclude** next to any document. Excluded documents are skipped on every subsequent sync even if the source content changes.
To reverse an exclusion, switch to the **Excluded** tab and click **Restore** — the document will be pulled in on the next sync.
## How Syncing Works
On each run the connector fetches documents from the source and compares them against what's already stored. Only changed documents are reprocessed — new content is added, updated content is re-chunked and re-embedded, deleted content is removed. A connector syncing thousands of documents will only do real work when something actually changes.
### Connector Status
| Status | Meaning |
|--------|---------|
| **Active** | Running normally on schedule |
| **Syncing** | A sync is currently in progress |
| **Paused** | Scheduled syncs are suspended; manual sync is still available |
| **Error** | The last sync failed; will retry on the next scheduled run with backoff |
| **Disabled** | Syncing has been suspended automatically after 10 consecutive failures |
A disabled connector requires intervention — either reconnect the OAuth account or use the Resume button to re-enable syncing.
### Handling Failures
If a single document fails (e.g., a permission issue or timeout), the sync continues and retries that document next time. If an entire sync fails, the connector backs off and retries with increasing delays. After 10 consecutive full-sync failures, the connector is automatically set to **Disabled** rather than retrying indefinitely.
## Metadata Tags
Connectors can auto-populate [tags](/knowledgebase/tags) with metadata from the source — for example, a Notion connector can tag documents with their Labels and Last Modified date; a GitHub connector can tag documents with Repository and File Path. These tags are then available for filtered search in the Knowledge block.
You can disable specific tag types during setup or at any time from the connector settings to free up tag slots for manual tagging or other connectors.
<Callout type="info">
Tag slots are shared across all documents in a knowledge base. If multiple connectors each populate tags, they draw from the same pool of 17 slots.
</Callout>
## Multiple Connectors
You can add multiple connectors to a single knowledge base. For example, you might sync internal documentation from Confluence alongside code from GitHub and meeting notes from Fireflies — all searchable together through the Knowledge block.
<Image src="/static/connectors/connectors-list.png" alt="Knowledge base document list showing synced Google Docs documents with Name, Size, Tokens, Chunks, Uploaded date, Status, and Tags columns" width={800} height={300} />
Each connector manages its own documents independently. Metadata tag slots are shared across the knowledge base, so keep an eye on slot usage if you're combining several connectors that each populate tags.
## Common Use Cases
- **Internal knowledge base**: Sync your team's Notion workspace and Confluence spaces so AI agents can answer questions about internal processes, policies, and documentation
- **Customer support**: Connect HubSpot or Salesforce alongside your help docs from WordPress or Google Docs to give support agents full context on customers and product information
- **Engineering assistant**: Sync a GitHub repository and Jira or Linear issues so an AI agent can reference code, specs, and ticket history when answering developer questions
- **Meeting intelligence**: Pull in Fireflies transcripts alongside Slack conversations to build a searchable archive of decisions and discussions
- **Research and notes**: Sync Evernote notebooks or an Obsidian vault to make your personal notes available to AI workflows
You can add as many connectors as you need to a single knowledge base. Each manages its own documents independently, and all content is searchable together through the Knowledge block. Keep tag slot usage in mind when combining connectors that each populate metadata tags.
<FAQ items={[
{ question: "How often do connectors sync?", answer: "You can choose from hourly, every 6 hours, daily (default), weekly, or manual-only sync frequencies. Each connector can have its own schedule." },
{ question: "What happens if a source document is deleted?", answer: "On the next sync, the connector detects that the document no longer exists in the source and removes it from your knowledge base automatically." },
{ question: "Can I connect multiple services to one knowledge base?", answer: "Yes. You can add as many connectors as you need to a single knowledge base. Each connector manages its documents independently." },
{ question: "Do I need to re-authenticate connectors?", answer: "OAuth-based connectors refresh tokens automatically. API key-based connectors (Evernote, Obsidian, Fireflies) need manual updates if you rotate the key." },
{ question: "What if a connector sync fails?", answer: "If a single document fails, the rest of the sync continues. If the entire sync fails (e.g., service is down), the connector backs off and retries automatically." },
{ question: "Can I exclude specific documents from syncing?", answer: "Yes. You can manually exclude documents from any connector. Excluded documents are skipped on every subsequent sync, even if they change in the source." },
{ question: "Do metadata tags count against a limit?", answer: "Tag slots are shared across all documents in a knowledge base. If you have multiple connectors, their metadata tags draw from the same pool of available slots." },
{ question: "How often do connectors sync?", answer: "You choose from hourly, every 6 hours, daily (default), weekly, or manual-only. Sub-hourly frequencies require a Max or Enterprise plan. Each connector has its own schedule." },
{ question: "What happens if a source document is deleted?", answer: "On the next sync the connector detects the document is gone and removes it from your knowledge base automatically." },
{ question: "What happens when I delete a connector?", answer: "The connector is removed and future syncs stop. You're given the option to also delete all documents that were synced by that connector. If you don't check that option, they stay in the knowledge base as-is." },
{ question: "What does the Disabled status mean?", answer: "After 10 consecutive full-sync failures, the connector is automatically disabled to stop retrying. Reconnect the OAuth account or click Resume to re-enable it." },
{ question: "Do metadata tags count against a limit?", answer: "Yes. Tag slots are shared across all documents in a knowledge base — 17 slots total. Multiple connectors draw from the same pool, so plan accordingly if several connectors each auto-populate tags." },
{ question: "Do I need to re-authenticate connectors?", answer: "OAuth connectors refresh tokens automatically. API key connectors (Evernote, Obsidian, Fireflies) need manual updates if you rotate the key in the external service." },
]} />

View File

@@ -2,117 +2,112 @@
title: Tags and Filtering
---
import { Video } from '@/components/ui/video'
import { Callout } from 'fumadocs-ui/components/callout'
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'
Tags provide a powerful way to organize your documents and create precise filtering for your vector searches. By combining tag-based filtering with semantic search, you can retrieve exactly the content you need from your knowledgebase.
Tags let you attach structured metadata to documents so the Knowledge block can filter results precisely — by department, date, priority, status, or any dimension you define.
## Adding Tags to Documents
## How Tags Work
You can add custom tags to any document in your knowledgebase to organize and categorize your content for easier retrieval.
Tags have two layers:
<div className="mx-auto w-full overflow-hidden rounded-lg">
<Video src="knowledgebase-tag.mp4" width={700} height={450} />
</div>
1. **Tag definitions** — created at the knowledge base level. A definition has a name (e.g., "Department") and a type (Text, Number, Date, or Boolean). Definitions are shared across all documents.
2. **Tag values** — set per document. Once a definition exists, you assign a value to it on each document that needs it (e.g., `Department = "engineering"`).
### Tag Management
- **Custom tags**: Create your own tag system that fits your workflow
- **Multiple tags per document**: Apply as many tags as needed to each document. Each knowledgebase has 17 tag slots total: 7 text, 5 number, 2 date, and 3 boolean slots, shared by all documents in the knowledgebase
- **Tag organization**: Group related documents with consistent tagging
## Tag Slots
### Tag Best Practices
- **Consistent naming**: Use standardized tag names across your documents
- **Descriptive tags**: Use clear, meaningful tag names
- **Regular cleanup**: Remove unused or outdated tags periodically
Each knowledge base has **17 tag slots** distributed across four types:
## Using Tags in Knowledge Blocks
| Type | Slots | Accepted values |
|------|-------|-----------------|
| **Text** | 7 | Any string — matching is case-insensitive |
| **Number** | 5 | Any valid number |
| **Date** | 2 | `YYYY-MM-DD` format |
| **Boolean** | 3 | `true` or `false` |
Tags become powerful when combined with the Knowledge block in your workflows. You can filter your searches to specific tagged content, ensuring your AI agents get the most relevant information.
The type dropdown in the creation form shows current slot usage for each type (e.g., `Text (0/7)` means none of the 7 text slots are in use yet).
<div className="mx-auto w-full overflow-hidden rounded-lg">
<Video src="knowledgebase-tag2.mp4" width={700} height={450} />
</div>
<Callout type="info">
Slots are shared across all documents and connectors in a knowledge base. Connectors that auto-populate metadata tags draw from the same pool. Plan your schema with that in mind.
</Callout>
## Defining Tags
Tag definitions live at the knowledge base level. To manage them, click the knowledge base name in the header to open the context menu and select **Tags**:
<Image src="/static/tags/tags-kb-menu.png" alt="Knowledge base header showing the dropdown menu with Rename, Tags, and Delete options" width={700} height={400} />
This opens the Tags modal, which lists all defined tags and shows how many documents each one is assigned to. Click **Add Tag** to define a new one:
<Image src="/static/tags/tags-create.png" alt="Tags modal showing 0 defined tags, a Tag Name input field, and a Type dropdown set to Text (0/7), with Cancel and Create Tag buttons" width={700} height={450} />
Enter a **Tag Name** and pick a **Type**, then click **Create Tag**. The name must be unique within the knowledge base. The type dropdown only shows types that still have available slots. Press Enter to submit or Escape to cancel.
To delete a tag definition, click the trash icon next to it. Deleting a definition removes the tag value from every document it was assigned to — the modal shows you which documents are affected before you confirm.
Clicking any existing tag definition opens a dialog showing all documents that have a value set for it, along with their current tag values.
## Setting Tag Values on Documents
Once a definition exists, you assign values document by document. Right-click any document (or click the `…` menu) to open the document context menu, then select **Tags**.
This opens the tag panel for that document, where you can set a value for each defined tag.
## Viewing Tags in the Document List
The **Tags** column in the document list shows the current tag values for each document at a glance. Documents with no tags assigned show an empty placeholder:
<Image src="/static/tags/tags-document-list.png" alt="Knowledge base document list showing Name, Size, Tokens, Chunks, Uploaded, Status, and Tags columns — Document1.txt shows no tags while Document2.txt shows the value 'Waleed'" width={900} height={200} />
Use the **Filter** and **Sort** controls in the top right to narrow the list by tag values or sort by them.
## Using Tags in the Knowledge Block
In a workflow, open the Knowledge block and configure **Tag Filters** to restrict which documents are searched:
<Image src="/static/tags/tags-knowledge-block.png" alt="Knowledge block editor showing Operation: search, Knowledge Base: test, Search Query field (optional), Number of Results, and a Tag Filters section with Filter 1 containing Tag: Name, Operator: equals, and a Value field" width={900} height={500} />
Each filter has three parts:
- **Tag** — select a tag definition from the knowledge base
- **Operator** — depends on the tag type (see below)
- **Value** — the value to match against
Add as many filters as you need with the **+** button. Multiple filters are combined with AND logic — a document must match all filters to be included in the search.
### Operators by Type
| Type | Available operators |
|------|-------------------|
| **Text** | `equals`, `not equals`, `contains`, `does not contain`, `starts with`, `ends with` |
| **Number** | `equals`, `not equals`, `greater than`, `greater than or equal`, `less than`, `less than or equal`, `between` |
| **Date** | `equals`, `after`, `on or after`, `before`, `on or before`, `between` |
| **Boolean** | `is`, `is not` |
Tag values in filter fields can be static strings or workflow variable references (e.g., `<start.department>`), so filtering can adapt dynamically at runtime.
## Search Modes
The Knowledge block supports three different search modes depending on what you provide:
The Knowledge block behaves differently depending on what you provide:
### 1. Tag-Only Search
When you **only provide tags** (no search query):
- **Direct retrieval**: Fetches all documents that have the specified tags
- **No vector search**: Results are based purely on tag matching
- **Fast performance**: Quick retrieval without semantic processing
- **Exact matching**: Only documents with all specified tags are returned
| What you provide | Behavior |
|-----------------|-----------|
| **Tags only** (no search query) | Fetches all documents that match the tag filters — pure tag matching, no vector search |
| **Query only** (no tag filters) | Semantic vector search across all documents in the knowledge base |
| **Both tags and query** | Tag filters run first to narrow the document set, then vector search runs within that subset |
**Use case**: When you need all documents from a specific category or project
The combined mode is the most precise — tag filtering cuts down the candidate set cheaply before the more expensive vector similarity comparison runs.
### 2. Vector Search Only
When you **only provide a search query** (no tags):
- **Semantic search**: Finds content based on meaning and context
- **Full knowledgebase**: Searches across all documents
- **Relevance ranking**: Results ordered by semantic similarity
- **Natural language**: Use questions or phrases to find relevant content
## Connector-Populated Tags
**Use case**: When you need the most relevant content regardless of organization
Connectors can auto-populate tags with metadata from the source. A Notion connector might set **Last Modified** and **Labels**; a GitHub connector might set **Repository** and **File Path**. These work exactly like manually defined tags and are available in Knowledge block filters.
### 3. Combined Tag Filtering + Vector Search
When you **provide both tags and a search query**:
1. **First**: Filter documents to only those with the specified tags
2. **Then**: Perform vector search within that filtered subset
3. **Result**: Semantically relevant content from your tagged documents only
**Use case**: When you need relevant content from a specific category or project
### Search Configuration
#### Tag Filtering
- **Multiple tags**: Use multiple tags with AND or OR logic to control whether documents must match all or any of the specified tags
- **Tag combinations**: Mix different tag types for precise filtering
- **Case sensitivity**: Tag matching is case-insensitive
- **Partial matching**: Text fields support partial matching operators such as contains, starts_with, and ends_with in addition to exact matching
#### Vector Search Parameters
- **Query complexity**: Natural language questions work best
- **Result limits**: Configure how many chunks to retrieve
- **Relevance threshold**: Set minimum similarity scores
- **Context window**: Adjust chunk size for your use case
## Integration with Workflows
### Knowledge Block Configuration
1. **Select knowledgebase**: Choose which knowledgebase to search
2. **Add tags**: Specify filtering tags (optional)
3. **Enter query**: Add your search query (optional)
4. **Configure results**: Set number of chunks to retrieve
5. **Test search**: Preview results before using in workflow
### Dynamic Tag Usage
- **Variable tags**: Use workflow variables as tag values
- **Conditional filtering**: Apply different tags based on workflow logic
- **Context-aware search**: Adjust tags based on conversation context
- **Multi-step filtering**: Refine searches through workflow steps
### Performance Optimization
- **Efficient filtering**: Tag filtering happens before vector search for better performance
- **Caching**: Frequently used tag combinations are cached for speed
- **Parallel processing**: Multiple tag searches can run simultaneously
- **Resource management**: Automatic optimization of search resources
## Getting Started with Tags
1. **Plan your tag structure**: Decide on consistent naming conventions
2. **Start tagging**: Add relevant tags to your existing documents
3. **Test combinations**: Experiment with tag + search query combinations
4. **Integrate into workflows**: Use the Knowledge block with your tagging strategy
5. **Refine over time**: Adjust your tagging approach based on search results
Tags transform your knowledgebase from a simple document store into a precisely organized, searchable intelligence system that your AI workflows can navigate with surgical precision.
You can disable specific metadata tag types during connector setup or in connector settings to free up slots for manual use. See [Connectors](/knowledgebase/connectors) for details.
<FAQ items={[
{ question: "How many tag slots are available per knowledgebase?", answer: "Each knowledgebase supports up to 17 tag slots total across four field types: 7 text slots, 5 number slots, 2 date slots, and 3 boolean slots. These slots are shared across all documents in the knowledgebase." },
{ question: "What tag field types are supported?", answer: "Four field types are supported: text (free-form string values), number (numeric values), date (date values in YYYY-MM-DD format), and boolean (true/false values). Each type has its own pool of available slots." },
{ question: "Is tag matching case-sensitive?", answer: "No, tag matching is case-insensitive. You can use any capitalization when filtering by tags and it will match regardless of how the tag value was originally entered." },
{ question: "How does combined tag and vector search work?", answer: "When you provide both tags and a search query, tag filtering is applied first to narrow down the document set, then vector search runs within that filtered subset. This approach is more efficient because it reduces the number of vectors that need similarity comparison." },
{ question: "What is the default number of results returned from a knowledge search?", answer: "The default is 10 results. You can configure this with the topK parameter, which accepts values from 1 to 100." },
{ question: "What embedding model does Sim use for knowledge base search?", answer: "Sim uses OpenAI's text-embedding-3-small model with 1536 dimensions for generating document embeddings and performing vector similarity search." },
{ question: "How many tag slots are available?", answer: "17 total: 7 text, 5 number, 2 date, 3 boolean. These are shared across all documents and connectors in a knowledge base." },
{ question: "Can I rename a tag definition?", answer: "No. Tag definitions cannot be renamed after creation. Delete the old definition and create a new one with the correct name. Deleting will remove the tag value from all documents it was assigned to." },
{ question: "Is tag matching case-sensitive?", answer: "No. Text tag matching is case-insensitive — 'Engineering' and 'engineering' are treated the same." },
{ question: "Can tag filter values come from workflow variables?", answer: "Yes. Enter a variable reference like <start.department> as the filter value. It resolves to the actual value at runtime, so a single workflow can filter different documents on each run." },
{ question: "What happens to tag values when I delete a tag definition?", answer: "Deleting a definition removes the tag value from every document it was assigned to and frees the slot. The modal shows you which documents are affected before you confirm." },
]} />

View File

@@ -3,6 +3,7 @@ title: Deploy Workflows as MCP
description: Expose your workflows as MCP tools for external AI assistants and applications
---
import { Image } from '@/components/ui/image'
import { Video } from '@/components/ui/video'
import { Callout } from 'fumadocs-ui/components/callout'
import { FAQ } from '@/components/ui/faq'
@@ -18,10 +19,47 @@ MCP servers group your workflow tools together. Create and manage them in worksp
</div>
1. Navigate to **Settings → MCP Servers**
2. Click **Create Server**
<div className="flex justify-center">
<Image
src="/static/blocks/mcp-servers-settings.png"
alt="MCP Servers settings page"
width={700}
height={450}
className="my-6"
/>
</div>
2. Click **Add**
3. Enter a name and optional description
4. Copy the server URL for use in your MCP clients
5. View and manage all tools added to the server
4. Choose access: **API Key** (private, requires `X-API-Key` header) or **Public** (no authentication)
5. Optionally select deployed workflows to add as tools immediately
<div className="flex justify-center">
<Image
src="/static/blocks/mcp-server-add-modal.png"
alt="Add New MCP Server modal"
width={550}
height={380}
className="my-6"
/>
</div>
6. Click **Add Server**
7. Click **Details** to view the MCP server
<div className="flex justify-center">
<Image
src="/static/blocks/mcp-server-details.png"
alt="MCP Server details view"
width={700}
height={450}
className="my-6"
/>
</div>
8. Copy the server URL for use in your MCP clients
9. View and manage all tools added to the server
## Adding a Workflow as a Tool
@@ -33,9 +71,21 @@ Once your workflow is deployed, you can expose it as an MCP tool:
1. Open your deployed workflow
2. Click **Deploy** and go to the **MCP** tab
<div className="flex justify-center">
<Image
src="/static/blocks/mcp-deploy-modal.png"
alt="Workflow Deployment MCP tab"
width={380}
height={470}
className="my-6"
/>
</div>
3. Configure the tool name and description
4. Add descriptions for each parameter (helps AI understand inputs)
5. Select which MCP servers to add it to
6. Click **Save Tool**
<Callout type="info">
The workflow must be deployed before it can be added as an MCP tool.
@@ -54,9 +104,50 @@ Your workflow's input format fields become tool parameters. Add descriptions to
## Connecting MCP Clients
Use the server URL from settings to connect external applications:
Sim generates a ready-to-paste configuration for every supported client. To get it:
1. Navigate to **Settings → MCP Servers**
2. Click **Details** on your server
3. Under **MCP Client**, select your client — **Cursor**, **Claude Code**, **Claude Desktop**, **VS Code**, or **Sim**
4. Copy the configuration, replacing `$SIM_API_KEY` with your Sim API key
<div className="flex justify-center">
<Image
src="/static/blocks/mcp-client-config.png"
alt="MCP client configuration panel"
width={700}
height={450}
className="my-6"
/>
</div>
### Cursor
Cursor supports direct URL configuration. Add to your Cursor MCP settings (`.cursor/mcp.json`):
```json
{
"mcpServers": {
"my-sim-workflows": {
"url": "YOUR_SERVER_URL",
"headers": { "X-API-Key": "$SIM_API_KEY" }
}
}
}
```
Cursor also provides a one-click install button in the server detail view.
### Claude Code
Run this command in your terminal:
```bash
claude mcp add "my-sim-workflows" --url "YOUR_SERVER_URL" --header "X-API-Key:$SIM_API_KEY"
```
### Claude Desktop
Add to your Claude Desktop config (`~/Library/Application Support/Claude/claude_desktop_config.json`):
```json
@@ -64,17 +155,33 @@ Add to your Claude Desktop config (`~/Library/Application Support/Claude/claude_
"mcpServers": {
"my-sim-workflows": {
"command": "npx",
"args": ["-y", "mcp-remote", "YOUR_SERVER_URL"]
"args": ["-y", "mcp-remote", "YOUR_SERVER_URL", "--header", "X-API-Key:$SIM_API_KEY"]
}
}
}
```
### Cursor
Add the server URL in Cursor's MCP settings using the same mcp-remote pattern.
### VS Code
Add to your VS Code MCP settings (`.vscode/mcp.json`):
```json
{
"mcpServers": {
"my-sim-workflows": {
"command": "npx",
"args": ["-y", "mcp-remote", "YOUR_SERVER_URL", "--header", "X-API-Key:$SIM_API_KEY"]
}
}
}
```
<Callout type="info">
For public servers, omit the `X-API-Key` header and `--header` arguments. Public servers don't require authentication.
</Callout>
<Callout type="warn">
Include your API key header (`X-API-Key`) for authenticated access when using mcp-remote or other HTTP-based MCP transports.
`$SIM_API_KEY` is a placeholder. For Claude Desktop and VS Code configs, replace it with your actual API key since these clients don't expand environment variables in JSON config files. Claude Code and Cursor handle variable expansion natively.
</Callout>
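Before wiring up a client, you can sanity-check that the server is reachable and your key works by sending an MCP `initialize` request directly. A minimal curl sketch, assuming the server speaks MCP's streamable HTTP transport — `YOUR_SERVER_URL` is the URL copied from the server details, and the protocol version shown is an assumption that your server may negotiate differently:

```bash
# Send an MCP initialize request (JSON-RPC 2.0 over streamable HTTP).
# A successful response includes the server's name and negotiated protocol version.
curl -s -X POST "YOUR_SERVER_URL" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -H "X-API-Key: $SIM_API_KEY" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
      "protocolVersion": "2025-03-26",
      "capabilities": {},
      "clientInfo": { "name": "curl-check", "version": "0.0.1" }
    }
  }'
```

A 401 response typically means the key is missing or wrong; for public servers, drop the `X-API-Key` header.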
## Server Management

View File

@@ -18,27 +18,83 @@ MCP is an open standard that enables AI assistants to securely connect to extern
- Execute custom tools and scripts
- Maintain secure, controlled access to external resources
## Configuring MCP Servers
## Adding an MCP Server as a Tool
MCP servers provide collections of tools that your agents can use. Configure them in workspace settings:
MCP servers provide collections of tools that your agents can use.
<div className="mx-auto w-full overflow-hidden rounded-lg">
<Video src="mcp/settings-mcp-tools.mp4" width={700} height={450} />
</div>
1. Navigate to your workspace settings
2. Go to the **MCP Servers** section
3. Click **Add MCP Server**
4. Enter the server configuration details
5. Save the configuration
To add one:
1. Navigate to **Settings → MCP Tools**
<div className="flex justify-center">
<Image
src="/static/blocks/mcp-settings.png"
alt="MCP Tools settings page"
width={700}
height={450}
className="my-6"
/>
</div>
2. Click **Add** to open the configuration modal
3. Enter a **Server Name** and **Server URL**
4. Add any required **Headers** (e.g., API keys)
5. Click **Add MCP** to save
<div className="flex justify-center">
<Image
src="/static/blocks/mcp-add-modal.png"
alt="Add New MCP Server modal"
width={450}
height={290}
className="my-6"
/>
</div>
<Callout type="info">
You can also configure MCP servers directly from the toolbar in an Agent block for quick setup.
</Callout>
### Server Configuration Options
| Field | Description |
|-------|-------------|
| **Name** | Display name for the server |
| **URL** | The MCP server endpoint |
| **Transport** | Currently supports `streamable-http` |
| **Headers** | Key-value pairs for authentication or custom headers |
| **Timeout** | Connection timeout in milliseconds (default: 30,000) |
### Environment Variables in Configuration
Server URLs and headers support environment variable substitution using `{{VAR_NAME}}` syntax. This keeps sensitive values like API keys out of the server configuration.
```
URL: https://api.example.com/mcp
Authorization: Bearer {{MCP_API_TOKEN}}
```
When you type `{{` in the URL or header fields, a dropdown appears showing available workspace environment variables.
### Testing and Validation
Click **Test Connection** before saving to verify the server is reachable and discover available tools. The test response shows the number of tools found and the protocol version.
After saving, each server displays its available tools with parameter names, types, and required flags. If a server's tools change (e.g., after a server update), click **Refresh** to fetch the latest schemas. This automatically updates any agent blocks using those tools.
Tool validation badges appear on servers with issues — for example, if a tool was removed from the server but is still referenced in a workflow. Click the badge to see which workflows are affected.
### Domain Allowlisting
Self-hosted deployments can restrict which MCP server domains can be added by setting the `ALLOWED_MCP_DOMAINS` environment variable to a comma-separated list. When set, only servers on approved domains can be added; when unset, all domains are allowed.
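For example, a self-hosted deployment's environment file might pin MCP servers to approved domains like this (a sketch — the file location depends on your setup, and the domain names are illustrative):

```bash
# Restrict MCP servers to these domains (comma-separated).
# Domains here are illustrative; leave the variable unset to allow all domains.
ALLOWED_MCP_DOMAINS=mcp.internal.example.com,tools.example.com
```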
### Refresh Tools
Click **Refresh** on a server to fetch the latest tool schemas and automatically update any agent blocks using those tools with the new parameter definitions.
To auto-refresh an MCP tool already in use by an agent, go to **Settings → MCP Tools**, open the server's details, and click **Refresh**. This fetches the latest tool schemas and automatically updates any agent blocks using those tools with the new parameter definitions.
## Using MCP Tools in Agents
@@ -46,7 +102,7 @@ Once MCP servers are configured, their tools become available within your agent
<div className="flex justify-center">
<Image
src="/static/blocks/mcp-2.png"
src="/static/blocks/mcp-agent-dropdown.png"
alt="Using MCP Tool in Agent Block"
width={700}
height={450}
@@ -55,9 +111,25 @@ Once MCP servers are configured, their tools become available within your agent
</div>
1. Open an **Agent** block
2. In the **Tools** section, you'll see available MCP tools
3. Select the tools you want the agent to use
4. The agent can now access these tools during execution
2. In the **Tools** section, click **Add tool…**
3. Under **MCP Servers**, click a server to see its tools
<div className="flex justify-center">
<Image
src="/static/blocks/mcp-agent-tools.png"
alt="MCP tools list for a selected server"
width={400}
height={400}
className="my-6"
/>
</div>
4. Select individual tools, or choose **Use all N tools** to add every tool from that server
5. The agent can now access these tools during execution
<Callout type="info">
If you haven't configured a server yet, click **Add MCP Server** at the top of the dropdown to open the setup modal without leaving the block.
</Callout>
## Standalone MCP Tool Block
@@ -65,7 +137,7 @@ For more granular control, you can use the dedicated MCP Tool block to execute s
<div className="flex justify-center">
<Image
src="/static/blocks/mcp-3.png"
src="/static/blocks/mcp-tool-block.png"
alt="Standalone MCP Tool Block"
width={700}
height={450}
@@ -79,17 +151,14 @@ The MCP Tool block allows you to:
- Use the tool's output in subsequent workflow steps
- Chain multiple MCP tools together
### When to Use MCP Tool vs Agent
## When to Use MCP Tool vs Agent
**Use Agent with MCP tools when:**
- You want the AI to decide which tools to use
- You need complex reasoning about when and how to use tools
- You want natural language interaction with the tools
**Use MCP Tool block when:**
- You need deterministic tool execution
- You want to execute a specific tool with known parameters
- You're building structured workflows with predictable steps
| | **Agent with MCP tools** | **MCP Tool block** |
|---|---|---|
| **Execution** | AI decides which tools to call | Deterministic — runs the tool you pick |
| **Parameters** | AI chooses at runtime | You set them explicitly |
| **Best for** | Dynamic, conversational flows | Structured, repeatable steps |
| **Reasoning** | Handles complex multi-step logic | One tool, one call |
## Permission Requirements

View File

@@ -11,6 +11,7 @@
"tools",
"connections",
"---Features---",
"mothership",
"mcp",
"copilot",
"mailer",
@@ -18,6 +19,7 @@
"knowledgebase",
"tables",
"variables",
"integrations",
"credentials",
"---Platform---",
"execution",

View File

@@ -0,0 +1,121 @@
---
title: Files & Documents
description: Upload, create, edit, and generate files — documents, presentations, images, and more.
---
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'
Describe a document, presentation, image, or visualization and Mothership creates it — streaming the content live into the resource panel as it writes. Attach any file to your message and Mothership reads it, processes it, and saves it to your workspace.
{/* TODO: Screenshot of Mothership with the File Write subagent active — file content streaming into the resource panel in split or preview mode. Shows the live streaming preview experience as a document is being written. */}
## Uploading Files to the Workspace
Attach any file directly to your Mothership message — drag it into the input, paste it, or click the attachment icon. Mothership reads the file as context and saves it to your workspace.
{/* TODO: Screenshot of the Mothership input area showing a file attached — e.g., a PDF or image thumbnail visible in the input before sending. */}
Use this to:
- Hand Mothership a document and ask it to process, summarize, or extract data from it
- Upload a CSV and have it create a table from it
- Drop in a PDF and ask Mothership to turn it into a knowledge base document
- Attach a design mockup and ask Mothership to describe it or generate code from it
Uploaded files appear in the Files panel in the sidebar and are accessible to all workflows in the workspace. Mothership can also fetch a file directly from a URL and save it for you: "Download the JSON at [URL] and save it to the workspace."
## Creating Documents
Mothership can write any text-based file — markdown, plain text, code files, CSV, JSON, or any other format:
- "Write a technical spec for the new auth system as a markdown file"
- "Create a CSV of our test accounts with columns for name, email, and plan tier"
- "Write a Python script that calls our workflow API and processes the response"
- "Draft a postmortem for the outage last Tuesday and save it as a markdown file"
- "Write a personalized outbound email for Acme Corp based on their recent funding announcement"
- "Draft a weekly ops digest summarizing workflow run counts, errors, and top failures for the past 7 days"
Files are saved to your workspace and accessible from the Files panel in the sidebar.
## Editing Existing Files
Open a file using `@filename` or the **+** menu, then describe the change:
- "Update the pricing section to reflect the new tiers"
- "Refactor this Python script to use async/await"
- "Add a section on error handling to this spec"
- "Rewrite the introduction of this report to be more concise"
## Presentations
Mothership can generate `.pptx` files:
- "Create a pitch deck for Q3 review — 8 slides covering growth, retention, and roadmap"
- "Turn this research report into a 10-slide presentation"
- "Build a deck that walks through our API onboarding flow"
- "Build a battle card deck for our top 3 competitors — one slide each covering positioning, pricing, and how we win"
- "Create an account plan for Acme Corp — their priorities, our solution fit, and proposed next steps"
The file is saved to your workspace and can be downloaded.
{/* TODO: Screenshot of the resource panel with a generated .pptx file open or a download prompt visible, showing the file name and confirming it was saved to the workspace. */}
## Images
Mothership can generate images using AI, and can use an existing image as a reference to guide the output:
**Generating images:**
- "Generate a banner image for the new feature announcement — dark background, clean typography"
- "Create a diagram showing the data flow through our webhook pipeline"
- "Make a social card for the blog post with the title and author name"
**Using a reference image:**
- Attach an existing image to your message, then describe what you want: "Generate a new version of this banner with a blue color scheme instead of green"
- "Create a variation of this diagram with the boxes rearranged horizontally [attach image]"
{/* TODO: Screenshot of the resource panel showing a generated image open as a file tab — ideally with the image rendered in the viewer panel. */}
Generated images are saved as workspace files.
## Charts and Visualizations
Mothership can generate charts and data visualizations from data you describe or reference:
- "Plot the workflow run counts from the metrics table as a bar chart grouped by week"
- "Create a line chart of token usage over the past 30 days from this data [paste data]"
- "Generate a pie chart showing the distribution of lead sources from the leads table"
{/* TODO: Screenshot of a chart or visualization rendered in the resource panel as a file. */}
Visualizations are saved as files and rendered in the resource panel.
## Calculations & Data Processing
For one-off calculations and data transformations, describe what you need and Mothership runs it directly in the chat:
- "Parse this JSON and extract all records where status is 'failed'"
- "Calculate the p95 latency from these timing values: [paste values]"
- "Convert these Unix timestamps to ISO 8601"
- "Deduplicate this list of emails, case-insensitive"
Results come back directly in the chat. Ask Mothership to save the output as a file if you need it.
## File Viewer Modes
When a file opens in the resource panel, you can switch between three views:
{/* TODO: Screenshot of the file viewer in the resource panel showing the mode selector (editor/split/preview), ideally in split mode with a markdown file showing raw content on the left and rendered preview on the right. */}
| Mode | What it shows |
|------|--------------|
| **Editor** | Raw editable text |
| **Preview** | Rendered output (markdown, HTML) |
| **Split** | Editor and preview side by side |
<FAQ items={[
{ question: "Where are uploaded and generated files stored?", answer: "All files — uploaded, created, or generated — go to your workspace's Files section. They're accessible from the sidebar and can be referenced in any workflow." },
{ question: "Can I use files created in Mothership in workflows?", answer: "Yes. Workspace files can be referenced in workflows using the File block or by passing them as inputs." },
{ question: "What file types can I upload to Mothership?", answer: "You can attach images, PDFs, text files, JSON, XML, and other document formats directly to a Mothership message." },
{ question: "What can Mothership calculate or process?", answer: "Anything expressible as a short script — parsing JSON, number crunching, string transformations, deduplication, format conversions. Results come back in the chat." },
{ question: "Is there a file size limit for generated files?", answer: "There is no hard limit on generated file size, but very large files may take longer to stream." },
]} />

View File

@@ -0,0 +1,68 @@
---
title: Mothership
description: Your AI command center. Build and manage your entire workspace in natural language.
---
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'
Describe what you want and Mothership handles it. Build a workflow, run research, generate a presentation, query a table, schedule a recurring job, send a Slack message — Mothership knows your entire workspace and takes action directly.
{/* TODO: Screenshot or GIF of the full Mothership home page — chat pane on the left with a conversation in progress, resource panel on the right with a workflow or file tab open. Hero shot for the page. */}
## What You Can Do
| Area | What Mothership can do |
|------|-----------------------|
| **[Workflows](/mothership/workflows)** | Build, edit, run, debug, deploy, and organize workflows |
| **[Research](/mothership/research)** | Search the web, read pages, crawl sites, produce research reports |
| **[Files & Documents](/mothership/files)** | Upload, create, edit, and generate documents, presentations, and images |
| **[Tables](/mothership/tables)** | Create, query, update, and export workspace tables |
| **[Automation & Configuration](/mothership/tasks)** | Schedule jobs, take immediate actions, connect integrations, manage tools |
| **[Knowledge Bases](/mothership/knowledge)** | Create knowledge bases, add documents, and query content in plain language |
## How It Works
Mothership receives a snapshot of your entire workspace with every message — all workflows, tables, knowledge bases, files, credentials, jobs, and integrations. This is why you can refer to things by name without specifying IDs or paths:
- "Run the invoice workflow"
- "Add a row to the leads table"
- "Deploy the summarizer as a chat"
No configuration, no context-setting. Just describe what you want:
- "Build a lead enrichment workflow that scores inbound signups and writes the results to the leads table"
- "Research our top 5 competitors and save a battle card for each one"
- "Schedule a daily job that checks for new high-fit prospects and posts them to #outbound in Slack"
- "Create a workflow that takes a contract PDF, extracts the key terms, and emails a summary to legal"
For complex tasks, Mothership delegates to specialized subagents automatically. You'll see them appear as collapsible sections in the chat while they work — building, researching, writing files, executing actions.
{/* TODO: Screenshot of the Mothership chat showing a subagent section expanded mid-task — e.g., the Build or Research subagent actively working, with its collapsible header and steps visible in the thread. */}
## Adding Context
Bring any workspace object into the conversation via the **+** menu, `@`-mentions, or drag-and-drop from the sidebar. Mothership also opens resources automatically when it creates or modifies them.
{/* TODO: Screenshot of the resource panel with multiple tabs open — a workflow tab, a table tab, and a file tab — showing different resource types side by side. */}
| What to add | How it appears |
|-------------|---------------|
| **Workflow** | Interactive canvas in the resource panel |
| **Table** | Full table editor in the resource panel |
| **File** | File viewer with editor, split, and preview modes |
| **Knowledge Base** | Knowledge base management UI |
| **Folder** | Folder contents |
| **Past task** | A previous Mothership conversation |
## Layout
Mothership has two panes. On the left: the chat thread, where your messages and Mothership's responses appear. On the right: the resource panel, where workflows, tables, files, and knowledge bases open as tabs. The panel is resizable; tabs are draggable and closeable.
<FAQ items={[
{ question: "How is Mothership different from Copilot?", answer: "Copilot is scoped to a single workflow — it helps you build and edit that workflow. Mothership has access to your entire workspace and can build workflows, manage data, run research, schedule jobs, take actions across integrations, and more." },
{ question: "What model does Mothership use?", answer: "Mothership always uses Claude Opus 4.6. There is no model selector." },
{ question: "How do I reference an existing workflow or table?", answer: "Type @ followed by the name in the input, use the + menu, or drag the item from the sidebar into the chat." },
{ question: "How long can a Mothership task run?", answer: "Up to one hour. For tasks that exceed that, set up a scheduled job or break the work into steps." },
{ question: "Can Mothership work on multiple things at once?", answer: "Mothership processes one message at a time. You can queue messages — they will be processed in order." },
]} />

View File

@@ -0,0 +1,84 @@
---
title: Knowledge Bases
description: Create, populate, and query knowledge bases from Mothership.
---
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'
Create a knowledge base, add documents to it, and query it in plain language — all through conversation. Knowledge bases you create in Mothership are immediately available to Agent blocks in any workflow.
{/* TODO: Screenshot of Mothership with a knowledge base open in the resource panel — showing the knowledge base name, document list, and status of indexed documents. */}
## Creating Knowledge Bases
Describe the knowledge base and Mothership creates it:
- "Create a knowledge base called 'Product Docs'"
- "Set up a knowledge base for our support team — call it 'Support KB'"
- "Create a competitive intelligence knowledge base"
- "Create a knowledge base from our sales playbook and attach it to the outbound agent workflow"
- "Set up a customer success knowledge base — I'll add our onboarding guides and past case studies to it"
## Adding Documents
Add documents by attaching files to your message, pasting text, or pointing Mothership at a URL:
- "Add this PDF to the Product Docs knowledge base [attach file]"
- "Add the following text to the Support KB as a new document: [paste content]"
- "Fetch the page at [URL] and add it to the competitive intelligence knowledge base"
- "Add these three uploaded case studies to the customer success knowledge base"
Mothership processes and indexes each document automatically. Once indexed, the content is searchable by any Agent block that has the knowledge base attached.
{/* TODO: Screenshot of Mothership confirming a document was added and indexed — showing the document name and its indexed status in the knowledge base. */}
## Querying Knowledge Bases
Ask Mothership a question and it searches the specified knowledge base to answer:
- "What does the Product Docs knowledge base say about our refund policy?"
- "Search the Support KB for anything related to SSO setup errors"
- "What are the key differences between our Pro and Enterprise plans, based on the product docs?"
- "Find everything in the competitive intelligence knowledge base about [competitor]'s pricing"
## Connectors
For knowledge bases that should stay current automatically, connectors sync content from external services on a schedule — no manual uploads needed. New content is added, changed content is re-processed, and deleted content is removed on every run.
Connectors are configured through the knowledge base settings, not through Mothership chat. Once connected, all synced content is immediately searchable by Mothership and by any Agent block with the knowledge base attached.
Sim ships with 30 built-in connectors, including Notion, Google Drive, Slack, GitHub, Confluence, HubSpot, Salesforce, Gmail, and more.
Examples of what you can sync:
- **Notion** — sync a workspace, a database, or a specific page tree
- **Google Drive / Dropbox / OneDrive** — sync documents from cloud storage
- **GitHub** — sync a repository's markdown and code files
- **Slack** — sync channel history
- **Confluence / Jira** — sync your internal wiki or issue tracker
- **HubSpot / Salesforce** — sync CRM records into a searchable knowledge base
See [Connectors](/knowledgebase/connectors) for setup steps, sync frequency options, and managing connector status.
## Managing Knowledge Bases
List, inspect, and clean up knowledge bases in plain language:
- "What knowledge bases are in this workspace?"
- "How many documents are in the Support KB?"
- "Remove the outdated pricing doc from the Product Docs knowledge base"
- "Delete the old-competitive-intel knowledge base"
## Using Knowledge Bases in Workflows
Knowledge bases created in Mothership are immediately available to Agent blocks in any workflow. Attach a knowledge base to an Agent block and it will use semantic search to retrieve relevant content at runtime.
See [Knowledge Base](/knowledgebase) for full details on document processing settings, search configuration, and connector syncing.
<FAQ items={[
{ question: "How do I attach a knowledge base to a workflow?", answer: "Open the Agent block in the workflow editor, find the Knowledge Base setting, and select the knowledge base by name. Mothership can also do this for you: 'Attach the Product Docs knowledge base to the research agent block in the content pipeline workflow.'" },
{ question: "What file types can I add to a knowledge base?", answer: "PDFs, markdown files, plain text, and web pages fetched from URLs. Mothership handles the parsing and indexing automatically." },
{ question: "Can I add documents to a knowledge base from a workflow run?", answer: "Yes. The Knowledge Base write tool is available to Agent blocks and can be used to add documents programmatically during a workflow run." },
{ question: "How do I keep a knowledge base up to date?", answer: "Set up a connector to sync from an external source automatically — see Connectors above. For one-off updates, ask Mothership to add or replace the document directly." },
]} />

View File

@@ -0,0 +1,4 @@
{
"title": "Mothership",
"pages": ["index", "workflows", "research", "files", "tables", "tasks", "knowledge"]
}

View File

@@ -0,0 +1,43 @@
---
title: Research
description: Ask Mothership to research anything — it searches, reads, and synthesizes from the web.
---
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'
Ask Mothership to research anything and it figures out the best approach — searching the web, reading specific pages, crawling sites, looking up technical docs. Just describe what you want to know.
{/* TODO: Screenshot of the Research subagent section in the Mothership chat — expanded, showing it working through a research task with the final report or answer appearing. Ideally with a file tab open in the resource panel showing the output. */}
## Asking Questions
Ask anything — about a company, a competitor, a market, a technical question, or a specific URL:
- "What did Salesforce, HubSpot, and Gong each ship in the past 30 days? Summarize the key product updates."
- "What's Acme Corp's tech stack, recent hires, and open engineering roles?"
- "Find everything published about [competitor] in the past 90 days — press, product changes, job postings."
- "What are the current rate limits on the Anthropic API?"
- "Read [URL] and tell me what changed in this release"
- "What does Stripe's API say about handling webhooks with idempotency keys?"
- "Who are the main players in AI-powered revenue operations, and how do they differentiate?"
Mothership returns an answer directly in the chat. For anything that needs a longer written output, ask it to save the result as a file.
## Research Reports
When you need a structured, saved document rather than a chat answer, ask Mothership to write it up. It searches, reads, and cross-references multiple sources until it has enough to produce a full report. The output is saved as a file in your workspace and opened in the resource panel.
{/* TODO: Screenshot of a completed research report open in the resource panel as a file — showing a structured markdown document with sections, findings, and citations. */}
- "Research the top 10 AI SDR tools — pricing, features, positioning, and what customers say. Save as a competitive analysis."
- "Do a full market landscape for AI in healthcare diagnostics — major players, funding, use cases, and regulatory environment."
- "Research how our top 5 competitors handle multi-tenant auth — pricing, architecture, and any known vulnerabilities. Write it up as a report."
- "Find every public case study on AI agents in financial compliance from the past 2 years. Summarize the key outcomes and save as a markdown file."
- "Build a battle card for [competitor] — their positioning, pricing, strengths, weaknesses, and how we win against them."
<FAQ items={[
{ question: "Does Mothership have access to real-time information?", answer: "Yes. Mothership queries live internet data. Results reflect current information, not a training cutoff." },
{ question: "Can Mothership read pages that require login?", answer: "No. Mothership can only read publicly accessible pages." },
{ question: "Where are research reports saved?", answer: "Reports are saved as files in your workspace and opened in the resource panel. You can find them in the Files section of the sidebar." },
]} />

View File

@@ -0,0 +1,60 @@
---
title: Tables
description: Create, query, and manage workspace tables from Mothership.
---
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'
Create a table from a description or a CSV, query it in plain language, add or update rows, and export the results — all through conversation. Tables open in the resource panel when created or referenced.
{/* TODO: Screenshot of Mothership with a table open in the resource panel — ideally after a query or row operation, showing the table with data populated. */}
## Creating Tables
Describe the schema and Mothership creates the table:
- "Create a leads table with columns for name, email, company, status, and created date"
- "Create a table that matches the structure of this CSV [attach file]"
- "Set up an errors table with: id (text), message (text), workflow (text), timestamp (date), resolved (boolean)"
- "Create a prospect table for outbound — company, domain, employee count, industry, ICP score, and last contacted date"
- "Set up an enrichment results table to store output from the lead enrichment workflow: email, company, title, LinkedIn URL, fit score"
## Querying Data
Ask questions about table contents in plain language:
- "How many rows in the leads table have status 'qualified'?"
- "Show me all records from the past 7 days where score is above 0.8"
- "What are the top 5 most common error messages in the failures table?"
- "Are there any duplicate emails in the contacts table?"
- "How many prospects have an ICP score above 0.75 and haven't been contacted in the past 30 days?"
- "What's the conversion rate from 'contacted' to 'meeting booked' in the pipeline table this month?"
Mothership translates the question into a structured query and returns the results.
## Adding and Updating Rows
Add individual rows, bulk-update based on a condition, or delete records — all in plain language:
- "Add a row to the leads table: Acme Corp, jane@acme.com, status pending"
- "Mark all rows in the queue table as processed where created_at is before today"
- "Update the price column for all rows where tier is 'pro' to 49"
- "Delete all rows in the test_events table"
## Exporting
Export a full table or a filtered subset as a CSV. The file is saved to your workspace and can be downloaded or referenced in other workflows:
- "Export the leads table to a CSV"
- "Export all rows where status is 'closed' and save as a file"
## Using Tables in Workflows
Tables created in Mothership are immediately available in workflows via the [Table tool](/tools/table). Reference a table by name — no additional configuration needed.
<FAQ items={[
{ question: "Can Mothership join or combine data from multiple tables?", answer: "Yes. Describe the relationship and what you want — Mothership will query both tables and combine the results." },
{ question: "Is there a row limit?", answer: "There is no hard row limit set by Mothership. Performance for very large tables may vary." },
{ question: "Can I use a table created in Mothership as a workflow data source?", answer: "Yes. All workspace tables are accessible to the Table tool in any workflow." },
]} />

View File

@@ -0,0 +1,130 @@
---
title: Automation & Configuration
description: Schedule recurring jobs, take immediate actions, connect integrations, and configure your workspace.
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'
Mothership can act on your behalf right now — send a message, create an issue, call an API — or on a schedule, running a prompt automatically every hour, day, or week. It can also connect integrations, set environment variables, add MCP servers, and create custom tools.
## Scheduled Jobs
A scheduled job is a Mothership task that runs on a cron schedule. On each run, Mothership reads the current workspace state and executes the job's prompt as if you had just sent it.
{/* TODO: Screenshot of Mothership chat confirming a scheduled job was created — showing the job name, schedule, and what it will do. If there's a jobs list view in the sidebar, include that as a second screenshot here. */}
### Creating a Job
Describe the recurring task and how often it should run:
- "Every morning at 8am, check the leads table for new entries and post a summary to #sales in Slack"
- "Every Monday at 9am, pull last week's workflow run counts and write a report to the workspace"
- "Run the data sync workflow every 6 hours"
- "On the first of every month, export the billing table to CSV and email it to finance@example.com"
- "Every weekday at 7am, check for new funding announcements from companies in our ICP and post the top 5 to #market-intel in Slack"
- "Every Sunday night, run the lead enrichment workflow on all prospects added in the past week and update their scores in the table"
- "Daily at 6am, pull the previous day's workflow errors, summarize the top issues, and post to #eng-alerts"
Mothership sets the cron expression and stores the job prompt. The first run happens at the next scheduled time.
### Viewing Job Logs
- "Show me the last 5 runs of the weekly report job"
- "Did the sync job run successfully this morning?"
- "What did the Monday digest job do last week?"
Logs show run time, status (completed, failed), and a summary of what the agent did.
### Managing Jobs
- "Pause the morning summary job"
- "Change the sync job to run every 3 hours instead of 6"
- "Delete the onboarding digest job"
- "What scheduled jobs are currently active?"
## Taking Direct Action
For requests that should happen right now — without building a workflow — just ask. Mothership acts immediately using the credentials connected to your workspace.
{/* TODO: Screenshot of Mothership chat showing the "Taking action" subagent label active during a direct action — e.g., posting to Slack or sending an email. Shows the subagent inline in the chat thread. */}
| Request | What happens |
|---------|-------------|
| "Send a Slack message to #eng that the deploy finished" | Posts to Slack immediately |
| "Email the Q3 report to jane@example.com" | Sends via connected Gmail or Outlook |
| "Create a GitHub issue: auth tokens not rotating on logout" | Opens an issue in the specified repo |
| "Add a contact to HubSpot: Acme Corp, ceo@acme.com" | Creates the contact via HubSpot API |
| "Call the webhook at [URL] with this JSON payload" | Makes the HTTP request |
If an integration isn't connected, Mothership walks you through connecting it.
## Connecting Integrations
Mothership can connect new OAuth integrations and API credentials on demand:
- "Connect my Google account"
- "Add the Slack workspace for our team"
- "Set up GitHub with my personal access token"
{/* TODO: Screenshot of Mothership walking through connecting an integration — e.g., the Integration subagent active with an OAuth prompt or confirmation that a credential was connected. */}
Once connected, credentials are available to Mothership for direct actions and scheduled jobs, and to all workflows in the workspace.
<Callout type="info">
Connected credentials are shared across the workspace. Any workflow that uses the same integration will automatically use the same credential.
</Callout>
See [Credentials](/credentials) for managing connected accounts.
## Environment Variables
Environment variables are workspace-scoped values — API keys, connection strings, and configuration that workflows reference via `{{ENV_VAR}}` syntax rather than hardcoding. Set them once and every workflow in the workspace can use them.
- "Set the DATABASE_URL environment variable to 'postgres://...'"
- "Add an OPENAI_API_KEY environment variable"
- "Add a WEBHOOK_SECRET variable for the inbound webhook workflow"
- "Update the SCORING_API_URL variable to point to the new endpoint"
- "What environment variables are currently set?"
{/* TODO: Screenshot of Mothership confirming an environment variable was set — e.g., a response message showing the variable name was saved. */}
## MCP Servers
MCP (Model Context Protocol) servers expose tools from external services that Agent blocks can call inside workflows. Connecting an MCP server makes all of its tools available in the workflow editor's tool picker — no custom integration code required.
Mothership can add and manage MCP servers connected to your workspace:
- "Add the Stripe MCP server using my API key"
- "Remove the old analytics MCP server"
- "What MCP servers are connected to this workspace?"
- "Update the endpoint for the internal tools MCP server to [URL]"
Once added, MCP tools appear in the workflow editor's tool picker and can be called from any Agent block.
{/* TODO: Screenshot of Mothership confirming an MCP server was added or updated — showing the server name and its status. */}
## Custom Tools
Custom tools are single HTTP endpoints you define manually — useful for internal APIs and services that don't have a built-in Sim integration or an MCP server. Once created, they appear in the workflow editor alongside built-in tools and can be called from any Agent block.
Mothership can build custom tools from a description:
- "Create a custom tool that calls our internal scoring API at [URL] with a POST request and returns the score field"
- "Build a tool for our Zendesk instance that creates a ticket with a subject and body"
- "Create a tool that hits our internal enrichment API with a domain and returns company size, industry, and funding stage"
- "Add a tool that calls our CRM's REST API to look up a contact by email and return their account owner"
{/* TODO: Screenshot of Mothership with the Custom Tool subagent active — showing it building a tool definition. */}
<FAQ items={[
{ question: "What's the difference between a scheduled job and a deployed workflow?", answer: "A scheduled job runs a Mothership prompt on a cron schedule — Mothership decides what to do each time based on current workspace state. A deployed workflow runs a fixed, deterministic graph of blocks. Use jobs when you want Mothership to reason and adapt; use workflows when you want predictable, auditable execution." },
{ question: "Can a scheduled job trigger a workflow?", answer: "Yes. Include it in the job prompt: 'Run the invoice sync workflow and then post the results to Slack.'" },
{ question: "How do I know what integrations are connected?", answer: "Ask Mothership: 'What integrations are connected to this workspace?' or check Settings → Credentials." },
{ question: "Can direct actions be undone?", answer: "No. Actions like sending emails, posting to Slack, or creating records are immediate and irreversible. Confirm the details before asking Mothership to act." },
{ question: "Are environment variables visible to all workflows?", answer: "Yes. Environment variables are workspace-scoped and available to every workflow via {{ENV_VAR}} syntax." },
{ question: "What's the difference between MCP servers and custom tools?", answer: "MCP servers expose a set of tools from an external service over the MCP protocol — you connect an existing MCP-compatible server. Custom tools are single HTTP endpoints you define manually — useful for internal APIs that don't have an MCP server." },
]} />

View File

@@ -0,0 +1,122 @@
---
title: Workflows
description: Create, edit, run, debug, deploy, and organize workflows from Mothership.
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'
Describe a workflow and Mothership builds it. Reference an existing one by name and it edits it. No canvas navigation required — every change appears in the resource panel in real time.
{/* TODO: Screenshot of Mothership chat on the left with the Build subagent section visible, and a workflow open in the resource panel on the right. Shows the split-pane experience of building via natural language. */}
## Creating Workflows
Describe what the workflow should do — what triggers it, which steps it performs, which integrations it needs, and what it should return. Mothership builds it and opens the canvas in the resource panel.
- "Build a workflow that takes a URL, scrapes the page, summarizes it with Claude, and sends the summary to a Slack channel"
- "Create a workflow triggered by a webhook that extracts invoice data from a PDF and writes it to the billing table"
- "Build an outbound workflow: take a company name and domain, enrich it with firmographic data, score the fit, and draft a personalized cold email"
- "Create a lead enrichment workflow that takes an email from a form submission, looks up the company, and writes the enriched record to the leads table"
- "Build a customer onboarding workflow: when a new user signs up, send a welcome email, create a HubSpot contact, and post a notification to #new-customers in Slack"
## Editing Workflows
{/* TODO: Screenshot of Mothership with the Edit subagent active and a change applied to an open workflow — e.g., a new block added or a configuration updated, visible on the canvas in the resource panel. */}
Open an existing workflow with `@workflow-name` or the **+** menu, then describe the change. Mothership reads the current structure before modifying it — you don't need to explain what already exists.
- "Add a condition that routes to a different branch if the confidence score is below 0.7"
- "Replace the GPT-4o model with Claude Opus 4.6 on the summarizer block"
- "Add a Slack notification at the end that includes the output"
## Running Workflows
{/* TODO: Screenshot or GIF of Mothership running a workflow — showing the chat streaming execution output on the left while the workflow canvas in the resource panel highlights blocks as they execute in real time. */}
Ask Mothership to run a workflow and it handles the execution:
- "Run the data sync workflow"
- "Run the invoice processor with this PDF [attach file]"
- "Test the lead scoring workflow with these inputs: name=Acme, score=0.4"
Execution streams back to the chat. The workflow in the resource panel shows live block-by-block state.
## Reading Logs
Mothership can retrieve and interpret execution logs for any workflow in the workspace:
- "Show me the last 10 runs of the pipeline workflow"
- "Why did the invoice workflow fail yesterday?"
- "What did the extractor block return in the most recent run?"
Logs include per-block execution state, outputs, errors, and timing.
## Debugging
When a workflow fails, tell Mothership to debug it:
- "Debug the last failed run of the content pipeline"
- "The summarizer block is returning empty output — figure out why"
Mothership reads the failure logs, identifies the cause, applies a fix, and can re-run to confirm.
{/* TODO: Screenshot of the Debug subagent section in the Mothership chat showing it reading logs and applying a fix. */}
## Deploying
Mothership can deploy a workflow as any of the three deployment types:
| Deployment type | What it creates |
|----------------|----------------|
| **API** | A REST endpoint at `https://sim.ai/api/workflows/{id}/execute` |
| **Chat** | A hosted conversational interface with a shareable URL |
| **MCP tool** | An MCP server that exposes the workflow as a tool |
Ask: "Deploy the invoice workflow as an API and generate an API key."
Mothership can also roll back: "Revert the billing workflow to the version from last Tuesday."
See [API Deployment](/execution/api-deployment) and [Chat Deployment](/execution/chat) for full details on each deployment type.
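For reference, a workflow deployed as an API can be called with any HTTP client. Here is a minimal sketch in Python — the URL pattern comes from the table above, while `WORKFLOW_ID` and the input fields are placeholders that depend on your workflow:

```python
import os
import requests

# Call a workflow deployed as an API. The endpoint pattern is shown in the
# table above; WORKFLOW_ID and the JSON body are workflow-specific placeholders.
response = requests.post(
    "https://sim.ai/api/workflows/WORKFLOW_ID/execute",
    headers={
        "Content-Type": "application/json",
        "X-API-Key": os.getenv("SIM_API_KEY"),
    },
    json={"message": "Process this invoice"},
)
print(response.json())
```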
## Organizing Workflows
Mothership can create and manage folders to keep your workspace organized.
**Folders:**
- "Create a folder called 'Data Pipelines'"
- "Move the invoice workflow into the billing folder"
- "Move the billing folder inside the finance folder"
- "Delete the old-experiments folder"
**Renaming and moving:**
- "Rename the 'test_v2' workflow to 'lead-scorer'"
- "Move the summarizer workflow to the research folder"
{/* TODO: Screenshot showing Mothership confirming a folder or workflow organization action — e.g., a message confirming "Moved 'invoice-processor' into 'billing' folder" with the resource panel showing the folder open. */}
## Workflow Variables
Mothership can set global variables on a workflow — values accessible across all blocks in that workflow at runtime:
- "Set the API_ENDPOINT variable on the sync workflow to 'https://api.example.com/v2'"
- "Update the MAX_RETRIES variable on the pipeline workflow to 5"
Variables set this way are available via `<variable.VARIABLE_NAME>` syntax inside any block in the workflow.
## Deleting Workflows
- "Delete the old_api_prototype workflow"
- "Delete all workflows in the deprecated folder"
<Callout type="warn">
Workflow deletion is permanent. Deployed versions are also removed. There is no recycle bin.
</Callout>
<FAQ items={[
{ question: "Can Mothership edit a workflow while it's deployed?", answer: "Yes. Editing a workflow does not affect the live deployment. The deployed version is a snapshot — you need to ask Mothership to redeploy to push changes to production." },
{ question: "Can I run a workflow with specific inputs from Mothership?", answer: "Yes. Describe the inputs in your message and Mothership passes them to the workflow's start block." },
{ question: "How does Mothership know what my workflow does?", answer: "When you reference a workflow, Mothership loads its full structure — every block, connection, and configuration — before acting on it." },
{ question: "What happens to a workflow's deployment when I delete it?", answer: "The workflow and all its deployments are permanently removed. Any API endpoints, chat URLs, or MCP tools that pointed to that workflow will stop working." },
]} />

View File

@@ -1,4 +0,0 @@
{
"title": "SDKs",
"pages": ["python", "typescript"]
}

View File

@@ -1,772 +0,0 @@
---
title: Python
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Card, Cards } from 'fumadocs-ui/components/card'
import { Step, Steps } from 'fumadocs-ui/components/steps'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
The official Python SDK for Sim lets you execute workflows programmatically from your Python applications.
<Callout type="info">
The Python SDK supports Python 3.8+ with async execution support, automatic rate limiting with exponential backoff, and usage tracking.
</Callout>
## Installation
Install the SDK using pip:
```bash
pip install simstudio-sdk
```
## Quick Start
Here's a simple example to get you started:
```python
from simstudio import SimStudioClient
# Initialize the client
client = SimStudioClient(
    api_key="your-api-key-here",
    base_url="https://sim.ai"  # optional, defaults to https://sim.ai
)

# Execute a workflow
try:
    result = client.execute_workflow("workflow-id")
    print("Workflow executed successfully:", result)
except Exception as error:
    print("Workflow execution failed:", error)
```
## API Reference
### SimStudioClient
#### Constructor
```python
SimStudioClient(api_key: str, base_url: str = "https://sim.ai")
```
**Parameters:**
- `api_key` (str): Your Sim API key
- `base_url` (str, optional): Base URL for the Sim API
#### Methods
##### execute_workflow()
Execute a workflow with optional input data.
```python
result = client.execute_workflow(
    "workflow-id",
    input={"message": "Hello, world!"},
    timeout=30.0  # 30 seconds
)
```
**Parameters:**
- `workflow_id` (str): The ID of the workflow to execute
- `input` (dict, optional): Input data to pass to the workflow
- `timeout` (float, optional): Timeout in seconds (default: 30.0)
- `stream` (bool, optional): Enable streaming responses (default: False)
- `selected_outputs` (list[str], optional): Block outputs to stream in `blockName.attribute` format (e.g., `["agent1.content"]`)
- `async_execution` (bool, optional): Execute asynchronously (default: False)
**Returns:** `WorkflowExecutionResult | AsyncExecutionResult`
When `async_execution=True`, returns immediately with a task ID for polling. Otherwise, waits for completion.
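For instance, a minimal sketch of async mode (the full polling flow is shown in the Async Workflow Execution example below):

```python
# In async mode the call returns an AsyncExecutionResult immediately
result = client.execute_workflow("workflow-id", async_execution=True)
print(result.task_id)  # poll for completion with client.get_job_status(result.task_id)
```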
##### get_workflow_status()
Get the status of a workflow (deployment status, etc.).
```python
status = client.get_workflow_status("workflow-id")
print("Is deployed:", status.is_deployed)
```
**Parameters:**
- `workflow_id` (str): The ID of the workflow
**Returns:** `WorkflowStatus`
##### validate_workflow()
Validate that a workflow is ready for execution.
```python
is_ready = client.validate_workflow("workflow-id")

if is_ready:
    # Workflow is deployed and ready
    pass
```
**Parameters:**
- `workflow_id` (str): The ID of the workflow
**Returns:** `bool`
##### get_job_status()
Get the status of an async job execution.
```python
status = client.get_job_status("task-id-from-async-execution")
print("Status:", status["status"]) # 'queued', 'processing', 'completed', 'failed'
if status["status"] == "completed":
print("Output:", status["output"])
```
**Parameters:**
- `task_id` (str): The task ID returned from async execution
**Returns:** `Dict[str, Any]`
**Response fields:**
- `success` (bool): Whether the request was successful
- `taskId` (str): The task ID
- `status` (str): One of `'queued'`, `'processing'`, `'completed'`, `'failed'`, `'cancelled'`
- `metadata` (dict): Contains `startedAt`, `completedAt`, and `duration`
- `output` (any, optional): The workflow output (when completed)
- `error` (any, optional): Error details (when failed)
- `estimatedDuration` (int, optional): Estimated duration in milliseconds (when processing/queued)
##### execute_with_retry()
Execute a workflow with automatic retry on rate limit errors using exponential backoff.
```python
result = client.execute_with_retry(
    "workflow-id",
    input={"message": "Hello"},
    timeout=30.0,
    max_retries=3,           # Maximum number of retries
    initial_delay=1.0,       # Initial delay in seconds
    max_delay=30.0,          # Maximum delay in seconds
    backoff_multiplier=2.0   # Exponential backoff multiplier
)
```
**Parameters:**
- `workflow_id` (str): The ID of the workflow to execute
- `input` (dict, optional): Input data to pass to the workflow
- `timeout` (float, optional): Timeout in seconds
- `stream` (bool, optional): Enable streaming responses
- `selected_outputs` (list, optional): Block outputs to stream
- `async_execution` (bool, optional): Execute asynchronously
- `max_retries` (int, optional): Maximum number of retries (default: 3)
- `initial_delay` (float, optional): Initial delay in seconds (default: 1.0)
- `max_delay` (float, optional): Maximum delay in seconds (default: 30.0)
- `backoff_multiplier` (float, optional): Backoff multiplier (default: 2.0)
**Returns:** `WorkflowExecutionResult | AsyncExecutionResult`
The retry logic uses exponential backoff (1s → 2s → 4s → 8s...) with ±25% jitter to prevent thundering herd. If the API provides a `retry-after` header, it will be used instead.
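To make the schedule concrete, here is an illustrative sketch of the delay calculation described above — a restatement of the documented behavior, not the SDK's internal code:

```python
import random

def backoff_delay(attempt, initial_delay=1.0, max_delay=30.0, backoff_multiplier=2.0):
    """Delay before retry number `attempt` (0-based), per the schedule above."""
    delay = min(initial_delay * (backoff_multiplier ** attempt), max_delay)
    jitter = delay * random.uniform(-0.25, 0.25)  # +/-25% jitter
    return max(0.0, delay + jitter)

# attempt 0 -> ~1s, attempt 1 -> ~2s, attempt 2 -> ~4s, ... capped at max_delay
```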
##### get_rate_limit_info()
Get the current rate limit information from the last API response.
```python
from datetime import datetime

rate_limit_info = client.get_rate_limit_info()
if rate_limit_info:
    print("Limit:", rate_limit_info.limit)
    print("Remaining:", rate_limit_info.remaining)
    print("Reset:", datetime.fromtimestamp(rate_limit_info.reset))
```
**Returns:** `RateLimitInfo | None`
##### get_usage_limits()
Get current usage limits and quota information for your account.
```python
limits = client.get_usage_limits()
print("Sync requests remaining:", limits.rate_limit["sync"]["remaining"])
print("Async requests remaining:", limits.rate_limit["async"]["remaining"])
print("Current period cost:", limits.usage["currentPeriodCost"])
print("Plan:", limits.usage["plan"])
```
**Returns:** `UsageLimits`
**Response structure:**
```python
{
    "success": bool,
    "rateLimit": {
        "sync": {
            "isLimited": bool,
            "limit": int,
            "remaining": int,
            "resetAt": str
        },
        "async": {
            "isLimited": bool,
            "limit": int,
            "remaining": int,
            "resetAt": str
        },
        "authType": str  # 'api' or 'manual'
    },
    "usage": {
        "currentPeriodCost": float,
        "limit": float,
        "plan": str  # e.g., 'free', 'pro'
    }
}
```
##### set_api_key()
Update the API key.
```python
client.set_api_key("new-api-key")
```
##### set_base_url()
Update the base URL.
```python
client.set_base_url("https://my-custom-domain.com")
```
##### close()
Close the underlying HTTP session.
```python
client.close()
```
## Data Classes
### WorkflowExecutionResult
```python
@dataclass
class WorkflowExecutionResult:
    success: bool
    output: Optional[Any] = None
    error: Optional[str] = None
    logs: Optional[List[Any]] = None
    metadata: Optional[Dict[str, Any]] = None
    trace_spans: Optional[List[Any]] = None
    total_duration: Optional[float] = None
```
### AsyncExecutionResult
```python
@dataclass
class AsyncExecutionResult:
    success: bool
    task_id: str
    status: str  # 'queued'
    created_at: str
    links: Dict[str, str]  # e.g., {"status": "/api/jobs/{taskId}"}
```
### WorkflowStatus
```python
@dataclass
class WorkflowStatus:
    is_deployed: bool
    deployed_at: Optional[str] = None
    needs_redeployment: bool = False
```
### RateLimitInfo
```python
@dataclass
class RateLimitInfo:
    limit: int
    remaining: int
    reset: int
    retry_after: Optional[int] = None
```
### UsageLimits
```python
@dataclass
class UsageLimits:
    success: bool
    rate_limit: Dict[str, Any]
    usage: Dict[str, Any]
```
### SimStudioError
```python
class SimStudioError(Exception):
    def __init__(self, message: str, code: Optional[str] = None, status: Optional[int] = None):
        super().__init__(message)
        self.code = code
        self.status = status
```
**Common error codes:**
- `UNAUTHORIZED`: Invalid API key
- `TIMEOUT`: Request timed out
- `RATE_LIMIT_EXCEEDED`: Rate limit exceeded
- `USAGE_LIMIT_EXCEEDED`: Usage limit exceeded
- `EXECUTION_ERROR`: Workflow execution failed
## Examples
### Basic Workflow Execution
<Steps>
<Step title="Initialize the client">
Set up the SimStudioClient with your API key.
</Step>
<Step title="Validate the workflow">
Check if the workflow is deployed and ready for execution.
</Step>
<Step title="Execute the workflow">
Run the workflow with your input data.
</Step>
<Step title="Handle the result">
Process the execution result and handle any errors.
</Step>
</Steps>
```python
import os
from simstudio import SimStudioClient

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def run_workflow():
    try:
        # Check if workflow is ready
        is_ready = client.validate_workflow("my-workflow-id")
        if not is_ready:
            raise Exception("Workflow is not deployed or ready")

        # Execute the workflow
        result = client.execute_workflow(
            "my-workflow-id",
            input={
                "message": "Process this data",
                "user_id": "12345"
            }
        )

        if result.success:
            print("Output:", result.output)
            print("Duration:", result.metadata.get("duration") if result.metadata else None)
        else:
            print("Workflow failed:", result.error)
    except Exception as error:
        print("Error:", error)

run_workflow()
```
### Error Handling
Handle different types of errors that may occur during workflow execution:
```python
from simstudio import SimStudioClient, SimStudioError
import os

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def execute_with_error_handling():
    try:
        result = client.execute_workflow("workflow-id")
        return result
    except SimStudioError as error:
        if error.code == "UNAUTHORIZED":
            print("Invalid API key")
        elif error.code == "TIMEOUT":
            print("Workflow execution timed out")
        elif error.code == "USAGE_LIMIT_EXCEEDED":
            print("Usage limit exceeded")
        elif error.code == "INVALID_JSON":
            print("Invalid JSON in request body")
        else:
            print(f"Workflow error: {error}")
        raise
    except Exception as error:
        print(f"Unexpected error: {error}")
        raise
```
### Context Manager Usage
Use the client as a context manager to automatically handle resource cleanup:
```python
from simstudio import SimStudioClient
import os

# Using context manager to automatically close the session
with SimStudioClient(api_key=os.getenv("SIM_API_KEY")) as client:
    result = client.execute_workflow("workflow-id")
    print("Result:", result)
# Session is automatically closed here
```
### Batch Workflow Execution
Execute multiple workflows efficiently:
```python
from simstudio import SimStudioClient
import os

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def execute_workflows_batch(workflow_data_pairs):
    """Execute multiple workflows with different input data."""
    results = []

    for workflow_id, input_data in workflow_data_pairs:
        try:
            # Validate workflow before execution
            if not client.validate_workflow(workflow_id):
                print(f"Skipping {workflow_id}: not deployed")
                continue

            result = client.execute_workflow(workflow_id, input_data)
            results.append({
                "workflow_id": workflow_id,
                "success": result.success,
                "output": result.output,
                "error": result.error
            })
        except Exception as error:
            results.append({
                "workflow_id": workflow_id,
                "success": False,
                "error": str(error)
            })

    return results

# Example usage
workflows = [
    ("workflow-1", {"type": "analysis", "data": "sample1"}),
    ("workflow-2", {"type": "processing", "data": "sample2"}),
]

results = execute_workflows_batch(workflows)
for result in results:
    print(f"Workflow {result['workflow_id']}: {'Success' if result['success'] else 'Failed'}")
```
### Async Workflow Execution
Execute workflows asynchronously for long-running tasks:
```python
import os
import time
from simstudio import SimStudioClient

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def execute_async():
    try:
        # Start async execution
        result = client.execute_workflow(
            "workflow-id",
            input={"data": "large dataset"},
            async_execution=True  # Execute asynchronously
        )

        # Check if result is an async execution
        if hasattr(result, 'task_id'):
            print(f"Task ID: {result.task_id}")
            print(f"Status endpoint: {result.links['status']}")

            # Poll for completion
            status = client.get_job_status(result.task_id)
            while status["status"] in ["queued", "processing"]:
                print(f"Current status: {status['status']}")
                time.sleep(2)  # Wait 2 seconds
                status = client.get_job_status(result.task_id)

            if status["status"] == "completed":
                print("Workflow completed!")
                print(f"Output: {status['output']}")
                print(f"Duration: {status['metadata']['duration']}")
            else:
                print(f"Workflow failed: {status['error']}")
    except Exception as error:
        print(f"Error: {error}")

execute_async()
```
### Rate Limiting and Retry
Handle rate limits automatically with exponential backoff:
```python
import os
from simstudio import SimStudioClient, SimStudioError

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def execute_with_retry_handling():
    try:
        # Automatically retries on rate limit
        result = client.execute_with_retry(
            "workflow-id",
            input={"message": "Process this"},
            max_retries=5,
            initial_delay=1.0,
            max_delay=60.0,
            backoff_multiplier=2.0
        )
        print(f"Success: {result}")
    except SimStudioError as error:
        if error.code == "RATE_LIMIT_EXCEEDED":
            print("Rate limit exceeded after all retries")

            # Check rate limit info
            rate_limit_info = client.get_rate_limit_info()
            if rate_limit_info:
                from datetime import datetime
                reset_time = datetime.fromtimestamp(rate_limit_info.reset)
                print(f"Rate limit resets at: {reset_time}")

execute_with_retry_handling()
```
### Usage Monitoring
Monitor your account usage and limits:
```python
import os
from simstudio import SimStudioClient

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def check_usage():
    try:
        limits = client.get_usage_limits()

        print("=== Rate Limits ===")
        print("Sync requests:")
        print(f"  Limit: {limits.rate_limit['sync']['limit']}")
        print(f"  Remaining: {limits.rate_limit['sync']['remaining']}")
        print(f"  Resets at: {limits.rate_limit['sync']['resetAt']}")
        print(f"  Is limited: {limits.rate_limit['sync']['isLimited']}")

        print("\nAsync requests:")
        print(f"  Limit: {limits.rate_limit['async']['limit']}")
        print(f"  Remaining: {limits.rate_limit['async']['remaining']}")
        print(f"  Resets at: {limits.rate_limit['async']['resetAt']}")
        print(f"  Is limited: {limits.rate_limit['async']['isLimited']}")

        print("\n=== Usage ===")
        print(f"Current period cost: ${limits.usage['currentPeriodCost']:.2f}")
        print(f"Limit: ${limits.usage['limit']:.2f}")
        print(f"Plan: {limits.usage['plan']}")

        percent_used = (limits.usage['currentPeriodCost'] / limits.usage['limit']) * 100
        print(f"Usage: {percent_used:.1f}%")
        if percent_used > 80:
            print("⚠️ Warning: You are approaching your usage limit!")
    except Exception as error:
        print(f"Error checking usage: {error}")

check_usage()
```
### Streaming Workflow Execution
Execute workflows with real-time streaming responses:
```python
from simstudio import SimStudioClient
import os

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def execute_with_streaming():
    """Execute workflow with streaming enabled."""
    try:
        # Enable streaming for specific block outputs
        result = client.execute_workflow(
            "workflow-id",
            input={"message": "Count to five"},
            stream=True,
            selected_outputs=["agent1.content"]  # Use blockName.attribute format
        )
        print("Workflow result:", result)
    except Exception as error:
        print("Error:", error)

execute_with_streaming()
```
The streaming response follows the Server-Sent Events (SSE) format:
```
data: {"blockId":"7b7735b9-19e5-4bd6-818b-46aae2596e9f","chunk":"One"}
data: {"blockId":"7b7735b9-19e5-4bd6-818b-46aae2596e9f","chunk":", two"}
data: {"event":"done","success":true,"output":{},"metadata":{"duration":610}}
data: [DONE]
```
**Flask Streaming Example:**
```python
from flask import Flask, Response, stream_with_context
import requests
import json
import os

app = Flask(__name__)

@app.route('/stream-workflow')
def stream_workflow():
    """Stream workflow execution to the client."""
    def generate():
        response = requests.post(
            'https://sim.ai/api/workflows/WORKFLOW_ID/execute',
            headers={
                'Content-Type': 'application/json',
                'X-API-Key': os.getenv('SIM_API_KEY')
            },
            json={
                'message': 'Generate a story',
                'stream': True,
                'selectedOutputs': ['agent1.content']
            },
            stream=True
        )

        for line in response.iter_lines():
            if line:
                decoded_line = line.decode('utf-8')
                if decoded_line.startswith('data: '):
                    data = decoded_line[6:]  # Remove 'data: ' prefix
                    if data == '[DONE]':
                        break
                    try:
                        parsed = json.loads(data)
                        if 'chunk' in parsed:
                            yield f"data: {json.dumps(parsed)}\n\n"
                        elif parsed.get('event') == 'done':
                            yield f"data: {json.dumps(parsed)}\n\n"
                            print("Execution complete:", parsed.get('metadata'))
                    except json.JSONDecodeError:
                        pass

    return Response(
        stream_with_context(generate()),
        mimetype='text/event-stream'
    )

if __name__ == '__main__':
    app.run(debug=True)
```
### Environment Configuration
Configure the client using environment variables:
<Tabs items={['Development', 'Production']}>
<Tab value="Development">
```python
import os
from simstudio import SimStudioClient
# Development configuration
client = SimStudioClient(
    api_key=os.getenv("SIM_API_KEY"),
    base_url=os.getenv("SIM_BASE_URL", "https://sim.ai")
)
```
</Tab>
<Tab value="Production">
```python
import os
from simstudio import SimStudioClient
# Production configuration with error handling
api_key = os.getenv("SIM_API_KEY")
if not api_key:
    raise ValueError("SIM_API_KEY environment variable is required")

client = SimStudioClient(
    api_key=api_key,
    base_url=os.getenv("SIM_BASE_URL", "https://sim.ai")
)
```
</Tab>
</Tabs>
## Getting Your API Key
<Steps>
<Step title="Log in to Sim">
Navigate to [Sim](https://sim.ai) and log in to your account.
</Step>
<Step title="Open your workflow">
Navigate to the workflow you want to execute programmatically.
</Step>
<Step title="Deploy your workflow">
Click on "Deploy" to deploy your workflow if it hasn't been deployed yet.
</Step>
<Step title="Create or select an API key">
During the deployment process, select or create an API key.
</Step>
<Step title="Copy the API key">
Copy the API key to use in your Python application.
</Step>
</Steps>
## Requirements
- Python 3.8+
- requests >= 2.25.0
## License
Apache-2.0
import { FAQ } from '@/components/ui/faq'
<FAQ items={[
{ question: "Do I need to deploy a workflow before I can execute it via the SDK?", answer: "Yes. Workflows must be deployed before they can be executed through the SDK. You can use the validate_workflow() method to check whether a workflow is deployed and ready. If it returns False, deploy the workflow from the Sim UI first and create or select an API key during deployment." },
{ question: "What is the difference between sync and async execution?", answer: "Sync execution (the default) blocks until the workflow completes and returns the full result. Async execution (async_execution=True) returns immediately with a task ID that you can poll using get_job_status(). Use async mode for long-running workflows to avoid request timeouts. Async job statuses include queued, processing, completed, failed, and cancelled." },
{ question: "How does the SDK handle rate limiting?", answer: "The SDK provides built-in rate limiting support through the execute_with_retry() method. It uses exponential backoff (1s, 2s, 4s, 8s...) with 25% jitter to avoid thundering herd problems. If the API returns a retry-after header, that value is used instead. You can configure max_retries, initial_delay, max_delay, and backoff_multiplier. Use get_rate_limit_info() to check your current rate limit status." },
{ question: "Can I use the Python SDK as a context manager?", answer: "Yes. The SimStudioClient supports Python's context manager protocol. Use it with the 'with' statement to automatically close the underlying HTTP session when you are done, which is especially useful for scripts that create and discard client instances." },
{ question: "How do I handle different types of errors from the SDK?", answer: "The SDK raises SimStudioError with a code property for API-specific errors. Common error codes are UNAUTHORIZED (invalid API key), TIMEOUT (request timed out), RATE_LIMIT_EXCEEDED (too many requests), USAGE_LIMIT_EXCEEDED (billing limit reached), and EXECUTION_ERROR (workflow failed). Use the error code to implement targeted error handling and recovery logic." },
{ question: "How do I monitor my API usage and remaining quota?", answer: "Use the get_usage_limits() method to check your current usage. It returns sync and async rate limit details (limit, remaining, reset time, whether you are currently limited), plus your current period cost, usage limit, and plan tier. This lets you monitor consumption and alert before hitting limits." },
]} />

File diff suppressed because it is too large

View File

@@ -0,0 +1,332 @@
---
title: Agiloft
description: Manage records in Agiloft CLM
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="agiloft"
color="#FFFFFF"
/>
{/* MANUAL-CONTENT-START:intro */}
[Agiloft](https://www.agiloft.com/) is an enterprise contract lifecycle management (CLM) platform that helps organizations automate and manage contracts, agreements, and related business processes across any knowledge base.
With the Agiloft integration in Sim, you can:
- **Create records**: Add new records to any Agiloft table with custom field values
- **Read records**: Retrieve individual records by ID with optional field selection
- **Update records**: Modify existing record fields in any table
- **Delete records**: Remove records from your knowledge base
- **Search records**: Find records using Agiloft's query syntax with pagination support
- **Select records**: Query records using SQL WHERE clauses for advanced filtering
- **Saved searches**: List saved search definitions available for a table
- **Attach files**: Upload and attach files to record fields
- **Retrieve attachments**: Download attached files from record fields
- **Remove attachments**: Delete attached files from record fields by position
- **Attachment info**: Get metadata about all files attached to a record field
- **Lock records**: Check, acquire, or release locks on records for concurrent editing
In Sim, the Agiloft integration enables your agents to manage contracts and records programmatically as part of automated workflows. Agents can create and update records, search across tables, handle file attachments, and manage record locks — enabling intelligent contract lifecycle automation.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Integrate with Agiloft contract lifecycle management to create, read, update, delete, and search records. Supports file attachments, SQL-based selection, saved searches, and record locking across any table in your knowledge base.
## Tools
### `agiloft_attach_file`
Attach a file to a field in an Agiloft record.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `instanceUrl` | string | Yes | Agiloft instance URL \(e.g., https://mycompany.agiloft.com\) |
| `knowledgeBase` | string | Yes | Knowledge base name |
| `login` | string | Yes | Agiloft username |
| `password` | string | Yes | Agiloft password |
| `table` | string | Yes | Table name \(e.g., "contracts"\) |
| `recordId` | string | Yes | ID of the record to attach the file to |
| `fieldName` | string | Yes | Name of the attachment field |
| `file` | file | No | File to attach |
| `fileName` | string | No | Name to assign to the file \(defaults to original file name\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `recordId` | string | ID of the record the file was attached to |
| `fieldName` | string | Name of the field the file was attached to |
| `fileName` | string | Name of the attached file |
| `totalAttachments` | number | Total number of files attached in the field after the operation |
### `agiloft_attachment_info`
Get information about file attachments on a record field.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `instanceUrl` | string | Yes | Agiloft instance URL \(e.g., https://mycompany.agiloft.com\) |
| `knowledgeBase` | string | Yes | Knowledge base name |
| `login` | string | Yes | Agiloft username |
| `password` | string | Yes | Agiloft password |
| `table` | string | Yes | Table name \(e.g., "contracts"\) |
| `recordId` | string | Yes | ID of the record to check attachments on |
| `fieldName` | string | Yes | Name of the attachment field to inspect |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `attachments` | array | List of attachments with position, name, and size |
| ↳ `position` | number | Position index of the attachment in the field |
| ↳ `name` | string | File name of the attachment |
| ↳ `size` | number | File size in bytes |
| `totalCount` | number | Total number of attachments in the field |
### `agiloft_create_record`
Create a new record in an Agiloft table.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `instanceUrl` | string | Yes | Agiloft instance URL \(e.g., https://mycompany.agiloft.com\) |
| `knowledgeBase` | string | Yes | Knowledge base name |
| `login` | string | Yes | Agiloft username |
| `password` | string | Yes | Agiloft password |
| `table` | string | Yes | Table name \(e.g., "contracts", "contacts.employees"\) |
| `data` | string | Yes | Record field values as a JSON object \(e.g., \{"first_name": "John", "status": "Active"\}\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | ID of the created record |
| `fields` | json | Field values of the created record |
### `agiloft_delete_record`
Delete a record from an Agiloft table.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `instanceUrl` | string | Yes | Agiloft instance URL \(e.g., https://mycompany.agiloft.com\) |
| `knowledgeBase` | string | Yes | Knowledge base name |
| `login` | string | Yes | Agiloft username |
| `password` | string | Yes | Agiloft password |
| `table` | string | Yes | Table name \(e.g., "contracts", "contacts.employees"\) |
| `recordId` | string | Yes | ID of the record to delete |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | ID of the deleted record |
| `deleted` | boolean | Whether the record was successfully deleted |
### `agiloft_lock_record`
Lock, unlock, or check the lock status of an Agiloft record.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `instanceUrl` | string | Yes | Agiloft instance URL \(e.g., https://mycompany.agiloft.com\) |
| `knowledgeBase` | string | Yes | Knowledge base name |
| `login` | string | Yes | Agiloft username |
| `password` | string | Yes | Agiloft password |
| `table` | string | Yes | Table name \(e.g., "contracts"\) |
| `recordId` | string | Yes | ID of the record to lock, unlock, or check |
| `lockAction` | string | Yes | Action to perform: "lock", "unlock", or "check" |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Record ID |
| `lockStatus` | string | Lock status \(e.g., "LOCKED", "UNLOCKED"\) |
| `lockedBy` | string | Username of the user who locked the record |
| `lockExpiresInMinutes` | number | Minutes until the lock expires |
### `agiloft_read_record`
Read a record by ID from an Agiloft table.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `instanceUrl` | string | Yes | Agiloft instance URL \(e.g., https://mycompany.agiloft.com\) |
| `knowledgeBase` | string | Yes | Knowledge base name |
| `login` | string | Yes | Agiloft username |
| `password` | string | Yes | Agiloft password |
| `table` | string | Yes | Table name \(e.g., "contracts", "contacts.employees"\) |
| `recordId` | string | Yes | ID of the record to read |
| `fields` | string | No | Comma-separated list of field names to include in the response |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | ID of the record |
| `fields` | json | Field values of the record |
### `agiloft_remove_attachment`
Remove an attached file from a field in an Agiloft record.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `instanceUrl` | string | Yes | Agiloft instance URL \(e.g., https://mycompany.agiloft.com\) |
| `knowledgeBase` | string | Yes | Knowledge base name |
| `login` | string | Yes | Agiloft username |
| `password` | string | Yes | Agiloft password |
| `table` | string | Yes | Table name \(e.g., "contracts"\) |
| `recordId` | string | Yes | ID of the record containing the attachment |
| `fieldName` | string | Yes | Name of the attachment field |
| `position` | string | Yes | Position index of the file to remove \(starting from 0\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `recordId` | string | ID of the record |
| `fieldName` | string | Name of the attachment field |
| `remainingAttachments` | number | Number of attachments remaining in the field after removal |
### `agiloft_retrieve_attachment`
Download an attached file from an Agiloft record field.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `instanceUrl` | string | Yes | Agiloft instance URL \(e.g., https://mycompany.agiloft.com\) |
| `knowledgeBase` | string | Yes | Knowledge base name |
| `login` | string | Yes | Agiloft username |
| `password` | string | Yes | Agiloft password |
| `table` | string | Yes | Table name \(e.g., "contracts"\) |
| `recordId` | string | Yes | ID of the record containing the attachment |
| `fieldName` | string | Yes | Name of the attachment field |
| `position` | string | Yes | Position index of the file in the field \(starting from 0\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `file` | file | Downloaded attachment file |
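Note that `position` is 0-based and passed as a string. A sketch with the hypothetical `runTool` stand-in and an illustrative field name:

```ts
declare function runTool(id: string, input: Record<string, unknown>): Promise<any>;

// Download the first file attached to a contract's attachment field.
const { file } = await runTool("agiloft_retrieve_attachment", {
  instanceUrl: "https://mycompany.agiloft.com",
  knowledgeBase: "Demo",
  login: "api_user",
  password: process.env.AGILOFT_PASSWORD,
  table: "contracts",
  recordId: "1042",
  fieldName: "signed_copy", // illustrative attachment field name
  position: "0", // 0-based index, passed as a string
});
```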
### `agiloft_saved_search`
List saved searches defined for an Agiloft table.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `instanceUrl` | string | Yes | Agiloft instance URL \(e.g., https://mycompany.agiloft.com\) |
| `knowledgeBase` | string | Yes | Knowledge base name |
| `login` | string | Yes | Agiloft username |
| `password` | string | Yes | Agiloft password |
| `table` | string | Yes | Table name to list saved searches for \(e.g., "contracts"\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `searches` | array | List of saved searches for the table |
| ↳ `name` | string | Saved search name |
| ↳ `label` | string | Saved search display label |
| ↳ `id` | string | Saved search database identifier |
| ↳ `description` | string | Saved search description |
### `agiloft_search_records`
Search for records in an Agiloft table using a query.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `instanceUrl` | string | Yes | Agiloft instance URL \(e.g., https://mycompany.agiloft.com\) |
| `knowledgeBase` | string | Yes | Knowledge base name |
| `login` | string | Yes | Agiloft username |
| `password` | string | Yes | Agiloft password |
| `table` | string | Yes | Table name to search in \(e.g., "contracts", "contacts.employees"\) |
| `query` | string | Yes | Search query using Agiloft query syntax \(e.g., "status=\'Active\'" or "company_name~=\'Acme\'"\) |
| `fields` | string | No | Comma-separated list of field names to include in the results |
| `page` | string | No | Page number for paginated results \(starting from 0\) |
| `limit` | string | No | Maximum number of records to return per page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `records` | json | Array of matching records with their field values |
| `totalCount` | number | Total number of matching records |
| `page` | number | Current page number |
| `limit` | number | Records per page |
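Because `page` and `limit` are strings, pagination loops should stringify their counters. A sketch that pages through active contracts (hypothetical `runTool` stand-in):

```ts
declare function runTool(id: string, input: Record<string, unknown>): Promise<any>;

// Page through active contracts, 50 records at a time.
let page = 0;
let fetched = 0;
let total = Infinity;
while (fetched < total) {
  const res = await runTool("agiloft_search_records", {
    instanceUrl: "https://mycompany.agiloft.com",
    knowledgeBase: "Demo",
    login: "api_user",
    password: process.env.AGILOFT_PASSWORD,
    table: "contracts",
    query: "status='Active'", // Agiloft query syntax, as in the examples above
    fields: "id,summary,status",
    page: String(page),
    limit: "50",
  });
  if (res.records.length === 0) break; // guard against a short final page
  total = res.totalCount;
  fetched += res.records.length;
  page += 1;
}
```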
### `agiloft_select_records`
Select record IDs matching a SQL WHERE clause from an Agiloft table.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `instanceUrl` | string | Yes | Agiloft instance URL \(e.g., https://mycompany.agiloft.com\) |
| `knowledgeBase` | string | Yes | Knowledge base name |
| `login` | string | Yes | Agiloft username |
| `password` | string | Yes | Agiloft password |
| `table` | string | Yes | Table name \(e.g., "contracts", "contacts.employees"\) |
| `where` | string | Yes | SQL WHERE clause using database column names \(e.g., "summary like \'%new%\'" or "assigned_person=\'John Doe\'"\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `recordIds` | array | Array of record IDs matching the query |
| `totalCount` | number | Total number of matching records |
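Unlike `agiloft_search_records`, only IDs come back here, which keeps the payload small when you just need to fan out follow-up calls. A sketch (hypothetical `runTool` stand-in):

```ts
declare function runTool(id: string, input: Record<string, unknown>): Promise<any>;

// Fetch just the IDs of matching records, then act on them individually.
const { recordIds, totalCount } = await runTool("agiloft_select_records", {
  instanceUrl: "https://mycompany.agiloft.com",
  knowledgeBase: "Demo",
  login: "api_user",
  password: process.env.AGILOFT_PASSWORD,
  table: "contracts",
  where: "summary like '%renewal%'", // database column names, not display labels
});
console.log(`${totalCount} matching records:`, recordIds);
```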
### `agiloft_update_record`
Update an existing record in an Agiloft table.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `instanceUrl` | string | Yes | Agiloft instance URL \(e.g., https://mycompany.agiloft.com\) |
| `knowledgeBase` | string | Yes | Knowledge base name |
| `login` | string | Yes | Agiloft username |
| `password` | string | Yes | Agiloft password |
| `table` | string | Yes | Table name \(e.g., "contracts", "contacts.employees"\) |
| `recordId` | string | Yes | ID of the record to update |
| `data` | string | Yes | Updated field values as a JSON object \(e.g., \{"status": "Active", "priority": "High"\}\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | ID of the updated record |
| `fields` | json | Updated field values of the record |
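Since `data` is a JSON object passed as a string, building it with `JSON.stringify` avoids hand-quoting mistakes. A sketch with the hypothetical `runTool` stand-in:

```ts
declare function runTool(id: string, input: Record<string, unknown>): Promise<any>;

const updated = await runTool("agiloft_update_record", {
  instanceUrl: "https://mycompany.agiloft.com",
  knowledgeBase: "Demo",
  login: "api_user",
  password: process.env.AGILOFT_PASSWORD,
  table: "contracts",
  recordId: "1042",
  data: JSON.stringify({ status: "Active", priority: "High" }),
});
console.log(updated.fields); // field values after the update
```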


@@ -113,7 +113,7 @@ Retrieve the results of a completed Athena query execution
| `awsAccessKeyId` | string | Yes | AWS access key ID |
| `awsSecretAccessKey` | string | Yes | AWS secret access key |
| `queryExecutionId` | string | Yes | Query execution ID to get results for |
- | `maxResults` | number | No | Maximum number of rows to return \(1-1000\) |
+ | `maxResults` | number | No | Maximum number of rows to return \(1-999\) |
| `nextToken` | string | No | Pagination token from a previous request |
#### Output


@@ -0,0 +1,201 @@
---
title: Bright Data
description: Scrape websites, search engines, and extract structured data
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="brightdata"
color="#FFFFFF"
/>
## Usage Instructions
Integrate Bright Data into the workflow. Scrape any URL with Web Unlocker, search Google and other engines with SERP API, discover web content ranked by intent, or trigger pre-built scrapers for structured data extraction.
## Tools
### `brightdata_scrape_url`
Fetch content from any URL using Bright Data Web Unlocker. Bypasses anti-bot protections, CAPTCHAs, and IP blocks automatically.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Bright Data API token |
| `zone` | string | Yes | Web Unlocker zone name from your Bright Data dashboard \(e.g., "web_unlocker1"\) |
| `url` | string | Yes | The URL to scrape \(e.g., "https://example.com/page"\) |
| `format` | string | No | Response format: "raw" for HTML or "json" for parsed content. Defaults to "raw" |
| `country` | string | No | Two-letter country code for geo-targeting \(e.g., "us", "gb"\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | The scraped page content \(HTML or JSON depending on format\) |
| `url` | string | The URL that was scraped |
| `statusCode` | number | HTTP status code of the response |
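A minimal geo-targeted scrape, using the hypothetical `runTool` stand-in from the sketches above; the zone name comes from your Bright Data dashboard:

```ts
declare function runTool(id: string, input: Record<string, unknown>): Promise<any>;

const pageRes = await runTool("brightdata_scrape_url", {
  apiKey: process.env.BRIGHTDATA_API_KEY,
  zone: "web_unlocker1", // Web Unlocker zone name from the dashboard
  url: "https://example.com/page",
  format: "raw", // raw HTML; use "json" for parsed content
  country: "us", // optional geo-targeting
});
if (pageRes.statusCode === 200) {
  console.log(`Scraped ${pageRes.content.length} characters from ${pageRes.url}`);
}
```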
### `brightdata_serp_search`
Search Google, Bing, DuckDuckGo, or Yandex and get structured search results using Bright Data SERP API.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Bright Data API token |
| `zone` | string | Yes | SERP API zone name from your Bright Data dashboard \(e.g., "serp_api1"\) |
| `query` | string | Yes | The search query \(e.g., "best project management tools"\) |
| `searchEngine` | string | No | Search engine to use: "google", "bing", "duckduckgo", or "yandex". Defaults to "google" |
| `country` | string | No | Two-letter country code for localized results \(e.g., "us", "gb"\) |
| `language` | string | No | Two-letter language code \(e.g., "en", "es"\) |
| `numResults` | number | No | Number of results to return \(e.g., 10, 20\). Defaults to 10 |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `results` | array | Array of search results |
| ↳ `title` | string | Title of the search result |
| ↳ `url` | string | URL of the search result |
| ↳ `description` | string | Snippet or description of the result |
| ↳ `rank` | number | Position in search results |
| `query` | string | The search query that was executed |
| `searchEngine` | string | The search engine that was used |
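A sketch pulling the top 20 localized Google results (hypothetical `runTool` stand-in; zone name illustrative):

```ts
declare function runTool(id: string, input: Record<string, unknown>): Promise<any>;

const serp = await runTool("brightdata_serp_search", {
  apiKey: process.env.BRIGHTDATA_API_KEY,
  zone: "serp_api1",
  query: "best project management tools",
  searchEngine: "google", // or "bing", "duckduckgo", "yandex"
  country: "us",
  language: "en",
  numResults: 20,
});
for (const r of serp.results) {
  console.log(`${r.rank}. ${r.title} (${r.url})`);
}
```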
### `brightdata_discover`
AI-powered web discovery that finds and ranks results by intent. Returns up to 1,000 results with optional cleaned page content for RAG and verification.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Bright Data API token |
| `query` | string | Yes | The search query \(e.g., "competitor pricing changes enterprise plan"\) |
| `numResults` | number | No | Number of results to return, up to 1000. Defaults to 10 |
| `intent` | string | No | Describes what the agent is trying to accomplish, used to rank results by relevance \(e.g., "find official pricing pages and change notes"\) |
| `includeContent` | boolean | No | Whether to include cleaned page content in results |
| `format` | string | No | Response format: "json" or "markdown". Defaults to "json" |
| `language` | string | No | Search language code \(e.g., "en", "es", "fr"\). Defaults to "en" |
| `country` | string | No | Two-letter ISO country code for localized results \(e.g., "us", "gb"\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `results` | array | Array of discovered web results ranked by intent relevance |
| ↳ `url` | string | URL of the discovered page |
| ↳ `title` | string | Page title |
| ↳ `description` | string | Page description or snippet |
| ↳ `relevanceScore` | number | AI-calculated relevance score for intent-based ranking |
| ↳ `content` | string | Cleaned page content in the requested format \(when includeContent is true\) |
| `query` | string | The search query that was executed |
| `totalResults` | number | Total number of results returned |
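A sketch of intent-ranked discovery with cleaned content for RAG. The sort assumes a higher `relevanceScore` means more relevant; the helper is the same hypothetical `runTool` stand-in:

```ts
declare function runTool(id: string, input: Record<string, unknown>): Promise<any>;

const found = await runTool("brightdata_discover", {
  apiKey: process.env.BRIGHTDATA_API_KEY,
  query: "competitor pricing changes enterprise plan",
  intent: "find official pricing pages and change notes",
  numResults: 25, // up to 1000
  includeContent: true, // populates results[].content for RAG
  format: "markdown",
});

// Keep the five most relevant pages (assumes higher score = more relevant).
const top = [...found.results]
  .sort((a: any, b: any) => b.relevanceScore - a.relevanceScore)
  .slice(0, 5);
```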
### `brightdata_sync_scrape`
Scrape URLs synchronously using a Bright Data pre-built scraper and get structured results directly. Supports up to 20 URLs with a 1-minute timeout.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Bright Data API token |
| `datasetId` | string | Yes | Dataset scraper ID from your Bright Data dashboard \(e.g., "gd_l1viktl72bvl7bjuj0"\) |
| `urls` | string | Yes | JSON array of URL objects to scrape, up to 20 \(e.g., \[\{"url": "https://example.com/product"\}\]\) |
| `format` | string | No | Output format: "json", "ndjson", or "csv". Defaults to "json" |
| `includeErrors` | boolean | No | Whether to include error reports in results |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `data` | array | Array of scraped result objects with fields specific to the dataset scraper used |
| `snapshotId` | string | Snapshot ID returned if the request exceeded the 1-minute timeout and switched to async processing |
| `isAsync` | boolean | Whether the request fell back to async mode \(true means use snapshot ID to retrieve results\) |
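Because the tool can fall back to async mode, callers should branch on `isAsync`. A sketch (hypothetical `runTool` stand-in; dataset ID taken from the example above):

```ts
declare function runTool(id: string, input: Record<string, unknown>): Promise<any>;

const sync = await runTool("brightdata_sync_scrape", {
  apiKey: process.env.BRIGHTDATA_API_KEY,
  datasetId: "gd_l1viktl72bvl7bjuj0",
  urls: JSON.stringify([
    { url: "https://example.com/product/1" },
    { url: "https://example.com/product/2" },
  ]), // JSON array passed as a string, up to 20 URLs
});

if (sync.isAsync) {
  // Exceeded the 1-minute timeout: poll brightdata_snapshot_status with
  // sync.snapshotId, then fetch via brightdata_download_snapshot.
} else {
  console.log(`${sync.data.length} records returned synchronously`);
}
```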
### `brightdata_scrape_dataset`
Trigger a Bright Data pre-built scraper to extract structured data from URLs. Supports 660+ scrapers for platforms such as Amazon, LinkedIn, and Instagram.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Bright Data API token |
| `datasetId` | string | Yes | Dataset scraper ID from your Bright Data dashboard \(e.g., "gd_l1viktl72bvl7bjuj0"\) |
| `urls` | string | Yes | JSON array of URL objects to scrape \(e.g., \[\{"url": "https://example.com/product"\}\]\) |
| `format` | string | No | Output format: "json" or "csv". Defaults to "json" |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `snapshotId` | string | The snapshot ID to retrieve results later |
| `status` | string | Status of the scraping job \(e.g., "triggered", "running"\) |
### `brightdata_snapshot_status`
Check the progress of an async Bright Data scraping job. Returns status: starting, running, ready, or failed.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Bright Data API token |
| `snapshotId` | string | Yes | The snapshot ID returned when the collection was triggered \(e.g., "s_m4x7enmven8djfqak"\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `snapshotId` | string | The snapshot ID that was queried |
| `datasetId` | string | The dataset ID associated with this snapshot |
| `status` | string | Current status of the snapshot: "starting", "running", "ready", or "failed" |
### `brightdata_download_snapshot`
Download the results of a completed Bright Data scraping job using its snapshot ID. The snapshot must be in the "ready" state.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Bright Data API token |
| `snapshotId` | string | Yes | The snapshot ID returned when the collection was triggered \(e.g., "s_m4x7enmven8djfqak"\) |
| `format` | string | No | Output format: "json", "ndjson", "jsonl", or "csv". Defaults to "json" |
| `compress` | boolean | No | Whether to compress the results |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `data` | array | Array of scraped result records |
| `format` | string | The content type of the downloaded data |
| `snapshotId` | string | The snapshot ID that was downloaded |
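`brightdata_scrape_dataset`, `brightdata_snapshot_status`, and `brightdata_download_snapshot` form one async pipeline: trigger, poll until ready, download. A sketch of the full loop (hypothetical `runTool` stand-in; the 10-second poll interval is arbitrary):

```ts
declare function runTool(id: string, input: Record<string, unknown>): Promise<any>;
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

const common = { apiKey: process.env.BRIGHTDATA_API_KEY };

// 1. Trigger the pre-built scraper.
const job = await runTool("brightdata_scrape_dataset", {
  ...common,
  datasetId: "gd_l1viktl72bvl7bjuj0",
  urls: JSON.stringify([{ url: "https://example.com/product" }]),
});

// 2. Poll until the snapshot leaves "starting"/"running".
let status = "starting";
while (status === "starting" || status === "running") {
  await sleep(10_000);
  ({ status } = await runTool("brightdata_snapshot_status", {
    ...common,
    snapshotId: job.snapshotId,
  }));
}

// 3. Download the results once ready.
if (status === "ready") {
  const { data } = await runTool("brightdata_download_snapshot", {
    ...common,
    snapshotId: job.snapshotId,
    format: "json",
  });
  console.log(`${data.length} records downloaded`);
}
```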
### `brightdata_cancel_snapshot`
Cancel an active Bright Data scraping job using its snapshot ID. Terminates data collection in progress.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Bright Data API token |
| `snapshotId` | string | Yes | The snapshot ID of the collection to cancel \(e.g., "s_m4x7enmven8djfqak"\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `snapshotId` | string | The snapshot ID that was cancelled |
| `cancelled` | boolean | Whether the cancellation was successful |


@@ -10,6 +10,24 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
color="linear-gradient(45deg, #B0084D 0%, #FF4F8B 100%)"
/>
{/* MANUAL-CONTENT-START:intro */}
[AWS CloudWatch](https://aws.amazon.com/cloudwatch/) is a monitoring and observability service that provides data and actionable insights for AWS resources, applications, and services. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, giving you a unified view of your AWS environment.
With the CloudWatch integration, you can:
- **Query Logs (Insights)**: Run CloudWatch Log Insights queries against one or more log groups to analyze log data with a powerful query language
- **Describe Log Groups**: List available CloudWatch log groups in your account, optionally filtered by name prefix
- **Get Log Events**: Retrieve log events from a specific log stream within a log group
- **Describe Log Streams**: List log streams within a log group, ordered by last event time or filtered by name prefix
- **List Metrics**: Browse available CloudWatch metrics, optionally filtered by namespace, metric name, or recent activity
- **Get Metric Statistics**: Retrieve statistical data for a metric over a specified time range with configurable granularity
- **Publish Metric**: Publish custom metric data points to CloudWatch for your own application monitoring
- **Describe Alarms**: List and filter CloudWatch alarms by name prefix, state, or alarm type
In Sim, the CloudWatch integration enables your agents to monitor AWS infrastructure, analyze application logs, track custom metrics, and respond to alarm states as part of automated DevOps and SRE workflows. This is especially powerful when combined with other AWS integrations like CloudFormation and SNS for end-to-end infrastructure management.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Integrate AWS CloudWatch into workflows. Run Log Insights queries, list log groups and streams, retrieve log events, list metrics, get metric statistics, publish custom metrics, and monitor alarms. Requires an AWS access key and secret access key.
@@ -155,6 +173,34 @@ Get statistics for a CloudWatch metric over a time range
| `label` | string | Metric label |
| `datapoints` | array | Datapoints with timestamp and statistics values |
### `cloudwatch_put_metric_data`
Publish a custom metric data point to CloudWatch
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `awsRegion` | string | Yes | AWS region \(e.g., us-east-1\) |
| `awsAccessKeyId` | string | Yes | AWS access key ID |
| `awsSecretAccessKey` | string | Yes | AWS secret access key |
| `namespace` | string | Yes | Metric namespace \(e.g., Custom/MyApp\) |
| `metricName` | string | Yes | Name of the metric |
| `value` | number | Yes | Metric value to publish |
| `unit` | string | No | Unit of the metric \(e.g., Count, Seconds, Bytes\) |
| `dimensions` | string | No | JSON string of dimension name/value pairs |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Whether the metric was published successfully |
| `namespace` | string | Metric namespace |
| `metricName` | string | Metric name |
| `value` | number | Published metric value |
| `unit` | string | Metric unit |
| `timestamp` | string | Timestamp when the metric was published |
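A sketch publishing a custom latency metric (hypothetical `runTool` stand-in; the `dimensions` shape is assumed from the description above):

```ts
declare function runTool(id: string, input: Record<string, unknown>): Promise<any>;

await runTool("cloudwatch_put_metric_data", {
  awsRegion: "us-east-1",
  awsAccessKeyId: process.env.AWS_ACCESS_KEY_ID,
  awsSecretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  namespace: "Custom/MyApp",
  metricName: "CheckoutLatency",
  value: 412,
  unit: "Milliseconds", // a standard CloudWatch unit
  dimensions: JSON.stringify({ Environment: "production" }), // assumed name/value shape
});
```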
### `cloudwatch_describe_alarms`
List and filter CloudWatch alarms

Some files were not shown because too many files have changed in this diff.