* improvement(repo): restructure to narrow the scope of the realtime image
* improvements
* chore(repo): rebase fixes and quality improvements for realtime split
Addresses merge-time issues and gaps from the realtime app split:
- Retarget stale vi.mock paths to @sim/workflow-persistence/subblocks
- Restore README branding, fix AGENTS.md script reference
- Restore TSDoc on workflow-persistence subblocks helpers
- Use toError() from @sim/utils/errors in save.ts
- Add vitest config + local mocks so @sim/audit tests run standalone
- Move socket.io-client to devDependencies in apps/realtime
- Add missing package COPY steps to docker/app.Dockerfile
- Add check:boundaries/check:realtime-prune scripts and wire into CI
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* refactor(security): consolidate crypto primitives into @sim/security
Move general-purpose crypto primitives out of apps/sim into the
@sim/security package so both apps/sim and apps/realtime can share them.
@sim/security exports (all pure, dependency-free):
./compare safeCompare (constant-time HMAC-wrapped equality)
./encryption encrypt/decrypt (AES-256-GCM, iv:cipher:tag format)
./hash sha256Hex
./tokens generateSecureToken (base64url)
Migrate apps/sim call sites to use these + @sim/utils helpers:
crypto.randomUUID() -> generateId() from @sim/utils/id
createHash('sha256').digest -> sha256Hex
timingSafeEqual on hashed hex -> safeCompare
new Promise(setTimeout) -> sleep from @sim/utils/helpers
No behavior change: encryption format, digest output, and token
length are preserved exactly.
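The constant-time equality exported from `./compare` can be sketched as follows. This is an assumed implementation of the HMAC-wrapping trick, not the actual @sim/security source: HMAC-ing both inputs with a random ephemeral key equalizes their lengths, so `timingSafeEqual` never throws and the comparison leaks neither length nor content timing.

```typescript
import { createHmac, randomBytes, timingSafeEqual } from 'node:crypto'

// Hedged sketch of safeCompare: wrap both inputs in an HMAC keyed with a
// fresh random key, then compare the fixed-length digests in constant time.
function safeCompare(a: string, b: string): boolean {
  const key = randomBytes(32)
  const digest = (value: string) => createHmac('sha256', key).update(value).digest()
  return timingSafeEqual(digest(a), digest(b))
}
```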
* refactor(copilot): use toError in remaining otel/finalize sites
Replace the last two `error instanceof Error ? error : new Error(String(error))`
patterns with toError from @sim/utils/errors. Completes the sweep of clean
candidates — no behavior change.
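The helper the sweep adopts reduces to a one-liner; this is its assumed shape, matching the inline pattern it replaces:

```typescript
// Assumed shape of toError from @sim/utils/errors: pass Errors through,
// stringify everything else into a new Error.
function toError(value: unknown): Error {
  return value instanceof Error ? value : new Error(String(value))
}
```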
* refactor(security): consolidate HMAC-SHA256 primitives into @sim/security
Adds hmacSha256Hex and hmacSha256Base64 to @sim/security/hmac and migrates
15 webhook providers plus 5 other hot paths (deployment token signing,
outbound webhook requests, workspace notification delivery, notification
test route, Shopify OAuth callback) off bare `createHmac` calls. Secret
parameter accepts `string | Buffer` to cover base64-decoded Svix-style
secrets (Resend) and MS Teams' HMAC scheme. AWS SigV4 signing in S3 and
Textract tools intentionally retains direct `createHmac` usage — its
multi-step key derivation chain doesn't fit a generic helper.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
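A minimal sketch of the two helpers, assuming they are thin wrappers over `node:crypto`. The `string | Buffer` secret type is the part the Svix-style and MS Teams providers rely on:

```typescript
import { createHmac } from 'node:crypto'

// Hedged sketches of the assumed @sim/security/hmac exports.
function hmacSha256Hex(secret: string | Buffer, payload: string): string {
  return createHmac('sha256', secret).update(payload).digest('hex')
}

function hmacSha256Base64(secret: string | Buffer, payload: string): string {
  return createHmac('sha256', secret).update(payload).digest('base64')
}
```

A base64-decoded Svix secret is simply passed as a `Buffer` and yields the same signing path as a plain string key.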
* chore(packages): post-audit test + packaging polish
- Add safeCompare unit tests (identity, length mismatch, hex-nibble diff).
- Add Buffer-secret cases to hmac tests to lock in Svix/MS-Teams contract.
- Declare `reactflow` as a peerDependency on @sim/workflow-types — only used for type imports.
- Add a barrel export to @sim/workflow-persistence for consumers that prefer package-level imports; subpath exports retained.
- Document the data-field invariant in load.ts for loop/parallel subflow patching.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* chore(realtime): address PR review feedback
- Remove redundant SOCKET_PORT=3002 env from Dockerfile runner stage
(env.PORT already defaults to 3002 via zod schema).
- Reorder PORT fallback so an explicitly-set SOCKET_PORT wins over
the schema default for PORT; keeps SOCKET_PORT functional as an
override instead of dead code.
- Add dedicated type-check CI step for @sim/realtime so TS errors
surface pre-deploy (the Dockerfile runs source TS via Bun and has
no implicit build-time type check).
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* chore(realtime): remove unused SOCKET_PORT env var
SOCKET_PORT has lived in the socket server since the June 2025 refactor
but was never actually set in any deploy config — docker-compose.prod,
helm values/templates, .env.example, and docs all use PORT or the 3002
default exclusively. No self-hoster was ever pointed at SOCKET_PORT, so
removing it is safe.
Simplifies realtime port resolution to `env.PORT` (zod-validated with a
3002 default) and drops the orphaned sim-side schema entry.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
---------
Co-authored-by: Waleed Latif <walif6@gmail.com>
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
* fix(docs): update simstudio.ai URLs to sim.ai in SSO docs
* improvement(docs): remove plan defaults table from data retention docs
* improvement(docs): consolidate self-hosting info at bottom of enterprise docs
* improvement(docs): reduce callout and FAQ overuse in enterprise docs
* improvement(docs): restore FAQs and genuine-gotcha callouts
* fix(deps): bump drizzle-orm to 0.45.2 (GHSA-gpj5-g38j-94v9)
Resolves Dependabot alert #98. Drizzle ORM <0.45.2 improperly escaped
quoted SQL identifiers, allowing SQL injection via untrusted input
passed to APIs like sql.identifier() or .as().
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* chore(mcp): adopt native SDK types after @modelcontextprotocol/sdk 1.25.3 bump
Replace hand-written schema/annotation shapes with the SDK's exported
Tool, JSONRPCResultResponse, and Tool['annotations'] types so changes
upstream flow through automatically.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* refactor(types): use drizzle $inferSelect for row types
Replace hand-written interfaces that duplicated schema shape with
typeof table.$inferSelect aliases for webhook, workflow, and
workspaceFiles rows. Also simplify metadata insert/update to use
.returning() instead of field-by-field copies.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(uploads): fall through to INSERT if restore-deleted row races a hard delete
If a hard delete races between the initial SELECT and the restore UPDATE,
.returning() yields no row. Previously the function would return undefined
and silently violate the Promise<FileMetadataRecord> contract. Now the
function falls through to the INSERT path, which already handles
uniqueness races via the 23505 catch.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* chore(uploads): align metadata.ts with global standards
Replace dynamic uuid import with generateId() per @sim/utils/id
convention, narrow the error catch off `any`, and convert the inline
comment to TSDoc.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
* fix(aws): add validateAwsRegion to all AWS route schemas to prevent SSRF
* fix(validation): add mx and eu-isoe prefixes to validateAwsRegion regex
* test(validation): add mx-central-1, eu-isoe-west-1, and us-iso-west-1 region test cases
* fix(aws): eliminate double validateAwsRegion call and fix regex alternation order
- Replace double-call .refine() pattern with single-call + static message across all 61 AWS routes
- Reorder regex alternation to put longer prefixes first (eu-isoe before eu, us-isob/us-iso/us-gov before us) for engine-agnostic correctness
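The reordering matters for engines that commit to the first matching alternative instead of backtracking. An illustrative pattern (not the exact production regex) with longer prefixes placed first:

```typescript
// Illustrative only: us-gov/us-isob/us-iso before us, eu-isoe before eu,
// so first-match-wins engines still accept the partition regions.
const AWS_REGION = /^(us-gov|us-isob|us-iso|eu-isoe|us|eu|ap|ca|sa|me|af|il|mx|cn)-[a-z]+-\d+$/
```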
* improvement(contact): add Turnstile CAPTCHA, honeypot, and robustness fixes
- Add Cloudflare Turnstile with graceful degradation: when the widget
fails to load (ad blockers, iOS privacy, corporate DNS), submissions
fall through to a tighter rate-limit bucket rather than hard-blocking
- Add honeypot field to filter automated submissions without user impact
- Add separate CAPTCHA_UNAVAILABLE_RATE_LIMIT bucket (3/min) for the
no-captcha path so spam via ad-blocker bypass remains expensive
- Pass expectedHostname to verifyTurnstileToken to close cross-site
token reuse gap
- Add SITE_HOSTNAME as module-level constant (avoid URL parsing per req)
- Wire onExpire/onError/onUnsupported callbacks so token expiry during
slow form-filling falls back gracefully instead of showing a captcha error
- Add getResponsePromise(30_000) timeout to prevent indefinite hang on
network blips
- Add size: 'invisible' to Turnstile options (required for execute mode)
- Move turnstile.ts to lib/core/security/ alongside csp/encryption/input-validation
- Switch all CSS to --landing-* variables throughout contact form
- Move error display inline next to label with truncation in LandingField
- Add labelClassName prop to LandingField for context-specific overrides
- Simplify contact page to single-column max-w-[640px] layout
* fix(contact): fall through to no-captcha rate limit on Cloudflare transport errors
* chore(contact): remove extraneous comments from route
* fix(contact): remove forced min-height on success state, let content flow naturally
* fix(contact): cast CONTACT_TOPIC_OPTIONS to satisfy Combobox mutable type
* fix(contact): disable submit during CAPTCHA resolution window, add relative to form
* feat(integrations): add AWS SES, IAM Identity Center, and enhanced IAM/STS/CloudWatch/DynamoDB integrations
- Add AWS SES v2 integration with 9 operations (send email, templated, bulk, templates, account)
- Add AWS IAM Identity Center integration with 12 operations (account assignments, permission sets, users, groups)
- Add 3 new IAM tools: list-attached-role-policies, list-attached-user-policies, simulate-principal-policy
- Fix DynamoDB duplicate subBlock IDs, add operation-scoped field names, add subblock migrations
- Add authMode: AuthMode.ApiKey to DynamoDB block
- Fix CloudWatch routes: toError, client.destroy(), withRouteHandler, auth outside try
- Fix STS/DynamoDB/IAM routes: nullable Zod schemas, withRouteHandler adoption
- Fix Identity Center: list_instances pagination, list_groups instanceArn condition
- Add subblock migrations for renamed DynamoDB fields (key, filterExpression, etc.)
- Apply withRouteHandler to all new and existing AWS tool routes
* docs(ses): add manual intro section to SES docs
* fix(dynamodb): add legacy fallbacks in params for subblock migration compatibility
Workflows saved with the old shared IDs (key, filterExpression, etc.) that migrate
to get-scoped slots via subblock-migrations still work correctly on update/delete/scan/put
operations via fallback lookups in tools.config.params.
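The fallback lookup reduces to "new operation-scoped slot first, then the legacy shared ID." A hypothetical sketch (field names here are illustrative, not the exact production identifiers):

```typescript
// Hypothetical fallback: prefer the operation-scoped param written by
// migrated workflows, else fall back to the legacy shared ID.
function resolveParam(
  params: Record<string, string | undefined>,
  scoped: string,
  legacy: string
): string | undefined {
  return params[scoped] ?? params[legacy]
}
```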
* feat(contact): add contact page, migrate help/demo forms to useMutation (#4242)
* feat(contact): add contact page, migrate help/demo forms to useMutation
* improvement(contact): address greptile review feedback
- Map contact topic to help email type for accurate confirmation emails
- Drop Zod schema details from 400 response on public /api/contact
- Wire aria-describedby + aria-invalid in LandingField for both forms
- Reset helpMutation on modal reopen to match demo-request pattern
* improvement(landing): extract shared LandingField component
* fix(landing): resolve error-page crash on invalid /models and /integrations routes (#4243)
* fix(layout): use plain inline script for PublicEnvScript to set env before chunks eval on error pages
* fix(landing): handle runtime env race on error-page renders
React skips SSR on unhandled server errors and re-renders on the client
(see vercel/next.js#63980, #82456). Root-layout scripts — including the
runtime env script that populates window.__ENV — are inserted but not
executed on that client re-render, so any client module that reads env
at module evaluation crashes the render into a blank "Application error"
overlay instead of rendering the styled 404.
This replaces the earlier PublicEnvScript tweak with the architectural
fix:
- auth-client.ts: fall back to window.location.origin when getBaseUrl()
throws on the client. Auth endpoints are same-origin, so this is the
correct baseURL on the client. Server-side we still throw on genuine
misconfig.
- loading.tsx under /models/[provider], /models/[provider]/[model], and
/integrations/[slug]: establishes a Suspense boundary below the root
layout so a page-level notFound() no longer invalidates the layout's
SSR output (the fix endorsed by Next.js maintainers in #63980).
- layout.tsx: revert disableNextScript — the research showed this
doesn't actually fix error-page renders. The real fix is above.
* improvement(landing): use emcn Loader in scoped loading.tsx, trim auth-client comment
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
* fix(iam): correct MissingContextValues mapping in simulatePrincipalPolicy
* fix(aws): add conditionExpression migration fallback for DynamoDB delete, fix SES pageSize min
* fix(aws): deep validation fixes across SES, IAM, Identity Center, DynamoDB integrations
- IAM: replace non-existent StatementId with SourcePolicyType in simulatePrincipalPolicy
- IAM: add .int() constraint to list-users/roles/policies/groups Zod schemas
- IAM: remove redundant manual requestId from all 21 IAM route handlers
- SES: add .refine() body validation to create-template route
- SES: make bulk email destination templateData optional, only include ReplacementEmailContent when present
- SES: fix pageSize guard to if (pageSize != null) to correctly forward 0
- SES: add max(100) to list-templates pageSize, revert list-identities to min(0) per SDK
- STS: fix logger.error calls to use structured metadata pattern
- Identity Center: remove deprecated account.Status fallback, use account.State only
- DynamoDB: convert empty interface extends to type aliases, remove redundant error field, fix barrel to absolute imports
* regen docs
* fix(iam): add .int() constraint to maxSessionDuration in create-role route
* fix(ses): forward pageSize=0 correctly in listIdentities util
* fix(aws): add gradient background to IdentityCenterIcon, fix listTemplates pageSize guard
---------
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
* improvement(landing): scope navbar/footer shell to (shell) route group, align scoped 404s with root
Move integrations and models page routes into a `(shell)` route group so the Navbar+Footer layout wraps pages but not `not-found.tsx`. This lets scoped 404s render the same `<AuthBackground>` + Navbar treatment as the root `/` 404, instead of appearing inside the landing CTA footer.
Extract the shared 404 markup into `<NotFoundView>` so root, integrations, and models 404s share a single source of truth. Route URLs are unchanged — route groups are URL-transparent.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(landing): convert relative imports to absolute in integrations (shell) page
Build failed because the move into the (shell) route group invalidated relative `./components/...` and `./data/...` imports. CLAUDE.md mandates absolute imports throughout — switching these resolves the Turbopack build errors.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
* improvement(enterprise): slack wizard UI, enterprise docs, data retention updates
* improvement(docs): add enterprise screenshots to sso, access-control, whitelabeling pages
* form
* fix(enterprise): address PR review — h-full for recently-deleted, shared SettingRow, toast UX, stale form fix, emcn tokens
* fix(whitelabeling): scope drop zone to thumbnail only, not full upload row
* fix(whitelabeling): remove drop image text from drag overlay
* fix(config): add DATA_RETENTION_ENABLED to env schema to fix build type error
* fix(testing): add isDataRetentionEnabled to feature flags mock
* improvement(docs): remove redundant requirements section from data-retention page
* improvement(docs): remove requirements sections from all enterprise doc pages
* improvement(docs): add screenshot to audit-logs page
* fix(data-retention): bypass enterprise gate when billing is disabled for self-hosted
* improvement(knowledge): show selector with saved option in connector edit modal
* fix(kb-connectors): clear canonical siblings when non-canonical dep changes; share selector field
* refactor(kb-connectors): extract canonical-field logic into useConnectorConfigFields hook
* fix(kb-connectors): only merge changed fields into sourceConfig on edit save
Avoids writing spurious empty-string keys for untouched optional fields when
another field triggers a save.
* refactor(kb-connectors): tighten state primitives in modals
- edit modal: replace useMemo([]) + eslint-disable with useState lazy
initializer for initialSourceConfig — same mount-once semantics
without the escape hatch.
- add modal: drop useCallback on handleConnectNewAccount (no observer
saw the reference) and inline the one call site.
* fix(billing): close TOCTOU race in subscription transfer, centralize stripe test mocks
* more mocks
* fix(testing): provide complete Stripe.Event defaults in createMockStripeEvent
* fix(testing): make dbChainMock .for('update') chainable with .limit()
* fix(billing): gate subscription transfer noop behind membership check
Previously the 'already belongs to this organization' early return fired
before the org/member lookups, letting any authenticated caller probe
sub-to-org pairings without being a member of the target org. Move the
noop check after the admin/owner verification so unauthorized callers
hit the 403 first.
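The reordering can be sketched as a pure decision function (all names and status codes here are hypothetical, distilled from the description above):

```typescript
// Hypothetical sketch: authorization runs before the no-op short-circuit, so
// a non-member probing a sub-to-org pairing gets 403, never a confirming 200.
type TransferCtx = {
  callerIsOrgAdmin: boolean
  subscriptionOrgId: string | null
  targetOrgId: string
}

function transferStatus(ctx: TransferCtx): number {
  if (!ctx.callerIsOrgAdmin) return 403 // membership/role check first
  if (ctx.subscriptionOrgId === ctx.targetOrgId) return 200 // no-op only after auth
  return 201 // proceed with the transfer
}
```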
* fix(workday): validate tenantUrl to prevent SSRF in SOAP client
* fix(workday): use validation.sanitized in buildWsdlUrl
* fix(security): enforce URL validation across connectors, providers, auth
- Azure OpenAI/Anthropic: validate user-supplied azureEndpoint with validateUrlWithDNS to block SSRF to private IPs, localhost (in hosted mode), and dangerous ports.
- ServiceNow connector: enforce ServiceNow domain allowlist via validateServiceNowInstanceUrl before calling the instance URL.
- Obsidian connector: validate vaultUrl with validateUrlWithDNS and reuse the resolved IP via secureFetchWithPinnedIPAndRetry to block DNS rebinding between validation and request.
- Signup + verify flows: pass redirect/callbackUrl/redirectAfter and stored inviteRedirectUrl through validateCallbackUrl; drop unsafe values and log a warning.
- lib/knowledge/documents/utils.ts: add secureFetchWithPinnedIPAndRetry wrapper around secureFetchWithPinnedIP (used by Obsidian).
* fix(obsidian): use isomorphic SSRF validation to unblock client build
The Obsidian connector is reachable from client bundles via `connectors/registry.ts` (the knowledge UI reads metadata like `.icon`/`.name`). Importing `validateUrlWithDNS` / `secureFetchWithPinnedIP` from `input-validation.server` pulled `dns/promises`, `http`, `https`, `net` into client chunks, breaking the Turbopack build:
    Module not found: Can't resolve 'dns/promises'
      ./apps/sim/lib/core/security/input-validation.server.ts [Client Component Browser]
      ./apps/sim/connectors/obsidian/obsidian.ts [Client Component Browser]
      ./apps/sim/connectors/registry.ts [Client Component Browser]
Once that file polluted a browser context, Turbopack also failed to resolve the Node builtins in its legitimate server-route imports, cascading the error across App Routes and Server Components.
Fix: switch the Obsidian connector to the isomorphic `validateExternalUrl` + `fetchWithRetry` helpers, matching the pattern used by every other connector in the registry. This keeps the core SSRF protections:
- hosted Sim: blocks localhost, private IPs, HTTP (HTTPS enforced)
- self-hosted Sim: allows localhost + HTTP, still blocks non-loopback private IPs and dangerous ports (22, 25, 3306, 5432, 6379, 27017, 9200)
Drops the DNS-rebinding defense specifically (the IP-pinned fetch chain). The trade-off is acceptable because the vault URL is entered by the workspace admin — not arbitrary untrusted input — and hosted deployments already force the plugin to be exposed through a public URL (tunnel/port-forward), making rebinding a narrow threat.
Also reverts the `secureFetchWithPinnedIPAndRetry` wrapper in `lib/knowledge/documents/utils.ts` (no longer needed, and its `.server` import was the original source of the client-bundle pollution).
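The isomorphic policy above can be sketched with only the WHATWG URL API, which is what makes it safe for client bundles. This is a hedged approximation; the real `validateExternalUrl` presumably does fuller private-range handling:

```typescript
// Hedged sketch of the policy: hosted mode enforces HTTPS and blocks loopback;
// both modes block non-loopback private IP literals and dangerous ports.
const DANGEROUS_PORTS = new Set(['22', '25', '3306', '5432', '6379', '27017', '9200'])

function validateExternalUrl(raw: string, opts: { selfHosted: boolean }): boolean {
  let url: URL
  try {
    url = new URL(raw)
  } catch {
    return false
  }
  if (url.protocol !== 'https:' && url.protocol !== 'http:') return false
  const isLoopback = url.hostname === 'localhost' || url.hostname === '127.0.0.1'
  if (!opts.selfHosted && (url.protocol === 'http:' || isLoopback)) return false
  if (!isLoopback && /^(10\.|192\.168\.|172\.(1[6-9]|2\d|3[01])\.|169\.254\.)/.test(url.hostname)) return false
  if (url.port && DANGEROUS_PORTS.has(url.port)) return false
  return true
}
```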
* fix(servicenow): propagate URL validation errors in getDocument
Match listDocuments behavior — invalid instance URL should surface as a
configuration error rather than being swallowed into a "document not found"
null response during sync.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(obsidian): drop allowHttp to restore HTTPS enforcement in hosted mode
allowHttp: true permitted plaintext HTTP for all hosts in all deployment
modes, contradicting the documented policy. The default validateExternalUrl
behavior already allows http://localhost in self-hosted mode (the actual
Obsidian Local REST API use case) via the built-in carve-out, while correctly
rejecting HTTP for public hosts in hosted mode — which prevents leaking the
Bearer access token over plaintext.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
* improvement(codebase): migrate tests to dbChainMock, extract react-query hooks
Migrate 97 test files to centralized dbChainMock/dbChainMockFns helpers from
@sim/testing — removes hoisted chain-wiring boilerplate.
Extend dbChainMock to cover insert/update/delete/transaction/execute patterns.
Extract useGitHubStars and useVoiceSettings react-query hooks from inline fetches.
Centralize additional mocks (authMockFns, hybridAuthMockFns) and update docs.
* fix(github-stars): centralize fallback via initialData, remove stale constants
Move the placeholder star count into useGitHubStars as initialData with
initialDataUpdatedAt: 0 so `data` is always a narrowed string while still
refetching on mount. Fixes two Bugbot issues: stale '25.8k' in chat.tsx
(vs '27.8k' in navbar) and empty-string return in fetchGitHubStars that
bypassed `??` fallbacks in consumers.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(testing): wire dbChainMock.db to shared transaction and execute fns
dbChainMock.db.transaction was an inline vi.fn() separate from the exported
dbChainMockFns.transaction, so dbChainMockFns.transaction.mockResolvedValueOnce
and assertions silently targeted the wrong instance. dbChainMock.db also
omitted execute, so tests for any module that calls db.execute (logging-session,
table service, billing balance) would throw TypeError. Both mocks now reference
the module-level constants so overrides and resetDbChainMock affect the same fn.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(chat,testing): memoize welcome message and add selectDistinct to dbChainMock.db
Why:
- Welcome ChatMessage was rebuilt inline each render, producing a fresh
timestamp and new array identity — cascading to ChatMessageContainer
and VoiceInterface props on every tick.
- dbChainMockFns exports selectDistinct/selectDistinctOn but the
dbChainMock.db object omitted them, so tests that stub those builders
hit undefined on the mocked module.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(chat): re-attach scroll listener once container mounts
The scroll effect's empty dep array meant it ran only on the first
render, when `chatConfig` is still loading and the component returns
`<ChatLoadingState />` — so `messagesContainerRef.current` was null and
the listener was never attached. Depend on the gating conditions that
control which tree renders, so the effect re-runs once the real
container is in the DOM (and re-attaches when toggling in/out of voice
mode).
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(chat): reset chat state on identifier change via key prop
Keying `<ChatClient>` on `identifier` guarantees a full remount on
route transitions between chats, so `conversationId`, `messages`, and
every other piece of local state start fresh — no reset effect
required.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
* fix(settings): restore paste-to-destructure for workspace secrets, cleanup hooks and design tokens
Restores the env-var paste feature (KEY=VALUE → split rows, multi-line
→ multi rows) for workspace secrets that was lost when the unified
Credentials tab was split into Secrets and Integrations. Adds
`parseEnvVarLine`, `parseValidEnvVars`, and `handleWorkspacePaste` with
full support for export prefix, quoted values, inline comments, and
base64 false-positive guards. Also adds consistent value masking
(show on focus / mask on blur) to new workspace input rows.
Cleans up ~20 unnecessary `useCallback` wrappers, fixes a direct state
mutation in `handleSingleValuePaste`, moves `e.preventDefault()` inside
the `parsedVars.length > 0` guard, replaces all hardcoded hex colors
with CSS variable tokens, converts template-literal classNames to `cn()`,
and replaces raw `<button>` with emcn `Button`.
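A hypothetical reconstruction of `parseEnvVarLine`'s described behavior (the real helper lives in the settings code; names and exact semantics here are assumed): export prefix, quoted values, inline comments on unquoted values, and the length guard that keeps a lone quote from being stripped to an empty value.

```typescript
// Hedged sketch: parse one pasted KEY=VALUE line into a row, or null if the
// line is not a usable assignment (so plain pastes fall through untouched).
function parseEnvVarLine(line: string): { key: string; value: string } | null {
  const trimmed = line.trim().replace(/^export\s+/, '')
  const eq = trimmed.indexOf('=')
  if (eq <= 0) return null
  const key = trimmed.slice(0, eq).trim()
  if (!/^[A-Za-z_][A-Za-z0-9_]*$/.test(key)) return null
  let value = trimmed.slice(eq + 1).trim()
  const quoted = value.length >= 2 && /^(['"]).*\1$/.test(value) // length guard
  if (quoted) {
    value = value.slice(1, -1)
  } else {
    value = value.replace(/\s+#.*$/, '') // strip inline comment on unquoted values
  }
  if (!value) return null // KEY= lines produce no row rather than an empty secret
  return { key, value }
}
```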
* fix(settings): fix handlePaste silent swallow, quote-strip bug, and credential sync efficiency
- Move e.preventDefault() inside parsedVars guard in handlePaste so KEY= lines
don't silently discard input (mirrors handleWorkspacePaste fix from same PR)
- Add value.length >= 2 guard before quote-stripping to prevent single-char
values like KEY=\" from being stripped to empty and silently dropped
- Introduce createWorkspaceEnvCredentials and deleteWorkspaceEnvCredentials
for delta-aware credential sync (O(k) instead of O(n*m) for env var mutations)
- Fix createWorkspaceEnvCredentials early-return bug that skipped credential
record creation when workspace had zero members
- Update credentials/[id] DELETE to use deleteWorkspaceEnvCredentials instead
of full syncWorkspaceEnvCredentials
- Optimize syncWorkspaceEnvCredentials to fetch workspace+member IDs in parallel
once instead of once per credential
* fix(settings): normalize Windows line endings in paste handlers
* fix(settings): eliminate double-parse in handlePaste by inlining handleKeyValuePaste
* fix(billing): route scope by subscription referenceId, sync plan from Stripe, transfer storage on org join
Route every billing decision (usage limits, credits, storage, rate
limit, threshold billing, webhooks, UI permissions) through the
subscription's `referenceId` instead of plan-name heuristics. Fixes
the production state where a `pro_6000` subscription attached to an
organization was treated as personal Pro by display/edit code while
execution correctly enforced the org cap.
Scope
- Add `isOrgScopedSubscription(sub, userId)` (pure) and
`isSubscriptionOrgScoped(sub)` (async DB-backed) helpers. One is
used wherever a user perspective is available; the other in webhook
handlers that only have a subscription row.
- Replace plan-name scope checks in ~20 files: usage/limit readers,
credits balance + purchase, threshold billing, storage limits +
tracking, rate limiter, invoice + subscription webhooks, seat
management, membership join/leave, `switch-plan` admin gate,
admin credits/billing routes, copilot 402 handler, UI subscription
settings + permissions + sidebar indicator, React Query types.
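The pure helper presumably reduces to a `referenceId` comparison; this sketch assumes the row shape implied above (the real type comes from the schema):

```typescript
// Assumed shape: a subscription is org-scoped when its referenceId points at
// something other than the viewing user, replacing plan-name heuristics.
type Sub = { referenceId: string }

function isOrgScopedSubscription(sub: Sub, userId: string): boolean {
  return sub.referenceId !== userId
}
```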
Plan sync
- Add `syncSubscriptionPlan(subscriptionId, currentPlan, planFromStripe)`
called from `onSubscriptionComplete` and `onSubscriptionUpdate` so
the DB `plan` column heals on every Stripe event. Pro->Team upgrades
previously updated price, seats, and referenceId but left `plan`
stale — this is what produced the `pro_6000`-on-org row.
Priority + grace period
- `getHighestPrioritySubscription` now prefers org over personal
within each tier (Enterprise > Team > Pro, org > personal at each).
A user with a `cancelAtPeriodEnd` personal Pro who joins a paid org
routes pooled resources to the org through the grace window.
- `calculateSubscriptionOverage` personal-Pro branch reads user_stats
directly (bypassing priority) and bills only `proPeriodCostSnapshot`
when the user joined a paid org mid-cycle, so post-join org usage
isn't double-charged on the personal Pro's final invoice.
`resetUsageForSubscription` mirrors this: preserves
`currentPeriodCost` / `currentPeriodCopilotCost` when
`proPeriodCostSnapshot > 0` so the org's next cycle-close captures
post-join usage correctly.
Uniform base-price formula
- `basePrice × (seats ?? 1)` everywhere: `getOrgUsageLimit`,
`updateOrganizationUsageLimit`, `setUsageLimitForCredits`,
`calculateSubscriptionOverage`, threshold billing,
`syncSubscriptionUsageLimits`, `getOrganizationBillingData`.
Admin dashboard math now agrees with enforcement math.
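The uniform formula, sketched: personal subscriptions carry no seat count, so `seats ?? 1` lets one expression serve both scopes (names here mirror the description, not exact production signatures).

```typescript
// Single base-price expression shared by dashboard and enforcement math.
function baseSubscriptionPrice(basePrice: number, seats: number | null): number {
  return basePrice * (seats ?? 1)
}
```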
Storage transfer on join
- Invitation-accept flow moves `user_stats.storageUsedBytes` into
`organization.storageUsedBytes` inside the same transaction when
the org is paid.
- `syncSubscriptionUsageLimits` runs a bulk-backfill version so
members who joined before this fix, or orgs that upgraded from
free to paid after members joined, get pulled into the org pool
on the next subscription event. Idempotent.
UX polish
- Copilot 402 handler differentiates personal-scoped ("increase your
usage limit") from org-scoped ("ask an owner or admin to raise the
limit") while keeping the `increase_limit` action code the parser
already understands.
- Duplicate-subscription error on team upgrade names the existing
plan via `getDisplayPlanName`.
- Invitation-accept invalidates subscription + organization React
Query caches before redirect so settings doesn't flash the user's
pre-join personal view.
Dead code removal
- Remove unused `calculateUserOverage`, and the following fields on
`SubscriptionBillingData` / `getSimplifiedBillingSummary` that no
consumer in the monorepo read: `basePrice`, `overageAmount`,
`totalProjected`, `tierCredits`, `basePriceCredits`,
`currentUsageCredits`, `overageAmountCredits`, `totalProjectedCredits`,
`usageLimitCredits`, `currentCredits`, `limitCredits`,
`lastPeriodCostCredits`, `lastPeriodCopilotCostCredits`,
`copilotCostCredits`, and the `organizationData` subobject. Add
`metadata: unknown` to match what the server returns.
Notes for the triggering customer
- The `pro_6000`-on-org row self-heals on the next Stripe event via
`syncSubscriptionPlan`. For the one known customer, a direct
UPDATE is sufficient:
`UPDATE subscription SET plan='team_6000' WHERE id='aq2...' AND plan='pro_6000'`.
Made-with: Cursor
* fix tests
* address more comments
* progress
* harden further
* outbox service
* address comments
* address comment on check
* simplify
* cleanup code
* minor improvement
* fix(blocks): resolve variable display in mothership resource preview
Variables block showed empty assignments in the embedded workflow preview
because currentWorkflowId was read from URL params, which don't contain
workflowId in the mothership route. Fall back to activeWorkflowId from
the workflow registry.
* fix(blocks): narrow currentWorkflowId to string to satisfy strict null checks
* feat(tables): add column selection, missing keyboard shortcuts, and Sheets-aligned operations
Click column headers to select entire columns, shift-click to extend to
a column range. Delete, cut, and copy operations work on column
selections with full undo/redo support. Adds Home, End, Ctrl+Home,
Ctrl+End, PageUp, PageDown, Ctrl+Space, and all Shift variants.
Changes Ctrl+A to select all cells instead of checkbox rows. Column
header dropdown menu now opens on right-click instead of left-click.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): chevron opens dropdown, drag header to reorder columns
Split column header into label area (click to select, draggable for
reorder) and chevron button (click to open dropdown menu). Remove
the grip handle — dragging the header itself now reorders columns.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): full-column highlight during drag reorder
Replace the thin 2px line drop indicator with a full-column highlight
that spans the entire table height, matching Google Sheets behavior.
The insertion line is still shown at the drop edge for precision.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): handle drag reorder edge cases, dim source column
Suppress drop indicator when drag would result in no position change
(dragging onto self or adjacent no-op positions). Dim the source
column body cells during drag with a background overlay. Skip the
API call when the computed order is identical to the current order.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat(tables): add column reorder undo/redo, body drop targets, and escape cancel
Column drag-and-drop now supports dropping anywhere in a column (not just headers),
pressing Escape to cancel a drag, and full undo/redo integration for column reordering.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): merge partial updates in updateRow to prevent column data loss
When Mothership called updateRow directly (bypassing the PATCH API route),
it passed only the changed fields — which were written as the entire row,
wiping all other columns. Move the merge logic into updateRow itself so
all callers get correct partial-update semantics, and remove the now-redundant
pre-merge from both PATCH routes.
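The merge semantics described above can be sketched framework-free; `TableStore` and `updateRow` here are illustrative stand-ins, not sim's real persistence layer:

```typescript
// Hypothetical in-memory sketch of partial-update merge semantics: the merge
// lives inside updateRow itself, so every caller gets it for free.
type Row = Record<string, unknown>

class TableStore {
  private rows = new Map<string, Row>()

  insert(id: string, row: Row): void {
    this.rows.set(id, { ...row })
  }

  get(id: string): Row | undefined {
    return this.rows.get(id)
  }

  // Fields absent from `update` are preserved, explicit nulls are written
  // through, and a missing row is an error rather than a silent upsert.
  updateRow(id: string, update: Partial<Row>): Row {
    const existing = this.rows.get(id)
    if (!existing) throw new Error(`Row ${id} not found`)
    const merged = { ...existing, ...update }
    this.rows.set(id, merged)
    return merged
  }
}
```

With this shape, a caller passing only `{ qty: 3 }` can no longer wipe the other columns, which is the bug the follow-up tests cover.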
* test(tables): add updateRow partial merge tests
Covers the bug where partial updates wiped unmentioned columns — verifies
that fields not in the update payload are preserved, nulling a field works,
full-row updates are idempotent, and missing rows throw correctly.
* feat(tables): add delete-column undo/redo, rename metadata sync, and comprehensive row ID patching
- Delete column now captures column definition, cell data, order, and width for full undo/redo
- Column rename undo/redo now properly syncs columnWidths and columnOrder metadata
- patchRedoRowId/patchUndoRowId extended to handle all action types containing row IDs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): remove source column dimming during drag reorder
Only show the insertion line at the drop position, matching Google Sheets
behavior. Remove dragSourceBounds memo and isDragging prop.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): preserve selection on right-click, auto-resize on double-click, fix escape during drag
- Right-clicking within an existing selection now preserves it instead of
resetting to a single cell, so context menu operations apply to the full range
- Double-clicking a column border auto-resizes the column to fit its content
- Escape during column drag now immediately clears refs before state update,
preventing the dragend handler from executing the reorder
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): add aria-hidden value and aria-label for column header accessibility
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): tighten auto-resize padding to match Google Sheets
Reduce header padding from +48px to +36px (icon + cell padding) and cell
padding from +20px to +17px (cell padding + border) for a snug fit.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): clean drag ghost and clear selection on drag start
- Create a minimal custom drag image showing only the column name instead
of the browser's default ghost that includes adjacent columns/checkboxes
- Clear any existing cell/column selection when starting a column drag to
prevent stale highlights from persisting during reorder
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat(tables): add Shift+Space row selection and Ctrl+D fill down
Shift+Space now selects the entire row (all columns) instead of toggling
a checkbox, matching Google Sheets behavior. Ctrl+D copies the top cell's
value down through the selected range with full undo/redo support.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): show toast on incompatible column type change
The server validates type compatibility and returns a clear error message
(e.g. "3 row(s) have incompatible values"), but the client was silently
swallowing it. Now surfaces the error via toast notification. Also moved
the undo push to onSuccess so a failed type change doesn't pollute the
undo stack.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): scroll-into-view for selection focus, Home/End origin, delete-column undo timing
- Scroll-into-view now tracks selectionFocus (not just anchor), so
Shift+Arrow extending selection off-screen properly auto-scrolls
- Shift+Home/End now uses the current focus as origin (matching
Shift+Arrow behavior) instead of always using anchor
- Delete column undo entry is now pushed in onSuccess, preventing
a corrupted undo stack if the server rejects the deletion
- Dialog copy updated from "cannot be undone" to "You can undo this
action" since undo/redo is supported
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: resolve duplicate declarations from rebase against staging
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix file upload
* fix(tables): merge column widths on delete-column undo, try/finally for auto-resize
- Delete-column undo now reads current column widths via getColumnWidths
callback and merges the restored column's width into the full map,
preventing other columns' widths from being wiped
- Auto-resize measurement span is now wrapped in try/finally to ensure
DOM cleanup if an exception occurs during measurement
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: revert accidental home.tsx change from rebase conflict resolution
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): clear isColumnSelection on double-click and right-click, skip scroll for column select
- Clear isColumnSelection when double-clicking a cell to edit, preventing
the column selection effect from fighting with the editing state
- Clear isColumnSelection when right-clicking outside the current
selection, preventing stale column selection from re-expanding
- Skip scroll-into-view when isColumnSelection is true, preventing
the viewport from jumping to the bottom row when clicking a column header
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): remove inline font override in auto-resize, guard undefined columnOrder
- Remove `font:inherit` from measurement span inline style so Tailwind
classes (font-medium, text-small) control font properties for accurate
column width measurement
- Only include columnOrder in metadata update when defined, preventing
handleColumnRename from clearing a persisted column order when
columnOrderRef is null
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): capture columnRequired in delete-column undo for full restoration
The delete-column undo action captured columnUnique but not columnRequired,
so undoing a delete on a required column would silently drop the constraint.
Now captures and restores both constraints.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): restore width independently of order on delete-column undo, batch fill-down
- Column width restoration in delete-column undo no longer requires
previousOrder to be non-null — width is restored independently
- Ctrl+D fill-down now uses batchUpdateRef (single API call) instead
of calling mutateRef per row in a loop
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): multi-column delete, select-all cell model, cut flash, chevron alignment
- Multi-select delete: detect column selection range and delete all selected
columns sequentially with individual undo entries
- Select all (header checkbox): use cell selection model instead of checkbox
model for consistent highlighting
- Cut flash: batch cell clears into single mutation to prevent stale data
flashing from multiple onSettled invalidations
- Chevron alignment: adjust right padding from pr-2 to pr-2.5
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): restore column width locally on delete-column undo
Add onColumnWidthsChange callback to undo hook so restored column
widths update local component state, not just server metadata.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): prevent Ctrl+D bookmark dialog, batch Delete/Backspace mutations
- Move e.preventDefault() before early returns in Ctrl+D handler so
the browser bookmark dialog is always suppressed
- Replace per-row mutateRef calls with single batchUpdateRef call in
both Delete/Backspace handlers (checked rows and cell selection),
consistent with cut and fill-down
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): adjust column positions for multi-column delete undo
Capture original schema positions upfront and adjust each by the
count of previously-deleted columns with lower positions, so undo
restores columns at correct server-side positions in LIFO order.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): only multi-delete when clicked column is within selection
Check that the right-clicked column is within the selected column
range before using multi-column delete. If the click is outside the
selection, delete only the clicked column.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): prevent duplicate undo entry on column drag-drop
Clear dragColumnNameRef immediately in handleColumnDragEnd so the
second invocation (from dragend after drop already fired) is a no-op.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): clean up width on delete-column redo, suppress click during drag
- Redo path for delete-column now removes the column's width from
metadata and local state, preventing stale width entries
- Add didDragRef to ColumnHeaderMenu to suppress the click event
that fires after a drag operation, preventing selection flash
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): remove unstable mutation object from useCallback deps
deleteTableMutation is not referentially stable — only .mutateAsync()
is. Including the mutation object causes unnecessary callback recreation
on every mutation state change.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): fix auto-resize header padding, deduplicate rename metadata logic
Increase header text measurement padding from 36px to 57px to account
for the chevron dropdown button (pl-0.5 + 9px icon + pr-2.5) that
always occupies layout space. Prevents header text truncation on
auto-resize.
Deduplicate column rename metadata logic by having columnRename.onSave
call handleColumnRename instead of reimplementing the same width/order
transfer and metadata persist.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): log error on cell data restoration failure during undo
Add onError handler to the batchUpdateRowsMutation inside
delete-column undo so failures are logged instead of silently
swallowed. The column schema restores first, and the cell data
restoration is a separate async call that the outer try/catch
cannot intercept.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): address audit findings across table, undo hook, and store
- Add missing bounds check in handleCopy (c >= cols.length) matching
handleCut for defensive consistency
- Clear lastCheckboxRowRef in Ctrl+Space and Shift+Space to prevent
stale shift-click checkbox range after keyboard selection
- Fix stale snapshot race in patchRedoRowId/patchUndoRowId by reading
state inside the set() updater instead of via get() outside it
- Add metadata cleanup to create-column undo so column width is removed
from both local state and server, symmetric with delete-column redo
- Remove stale width key from columnWidths on column delete instead of
persisting orphaned entries
- Normalize undefined vs null in handleInlineSave change detection to
prevent unnecessary mutations when oldValue is undefined
- Use ghost.parentNode?.removeChild instead of document.body.removeChild
in drag ghost cleanup to prevent throw on component unmount
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
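The stale-snapshot fix above can be illustrated with a minimal store; the store shape, `set()` signature, and patch logic are illustrative, not sim's real undo store:

```typescript
// Reading state inside the set() updater (rather than capturing it via get()
// beforehand) means the patch always applies to the state at apply time, so
// a concurrent update between the read and the write can't be lost.
type UndoEntry = { rowId: string }
type State = { redoStack: UndoEntry[] }

function createStore(initial: State) {
  let state = initial
  return {
    get: () => state,
    set: (updater: (s: State) => State) => {
      state = updater(state)
    },
  }
}

const store = createStore({ redoStack: [{ rowId: 'temp-1' }] })

// Safe version: derive the next state from the argument set() passes in.
// Racy version (avoided): const snapshot = store.get(); ...mutate snapshot...
function patchRedoRowId(tempId: string, realId: string): void {
  store.set((s) => ({
    redoStack: s.redoStack.map((e) =>
      e.rowId === tempId ? { ...e, rowId: realId } : e
    ),
  }))
}
```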
* fix(tables): reset didDragRef in handleDragEnd to prevent stale flag
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(pdf): restore PDF previews by adding the missing preview endpoint and allowing same-origin blob URLs in the iframe CSP
* fixed
* add preview routes and tests
* follow Next.js route generation strategy
* fix(fireflies): support V2 webhook payload format for meetingId mapping
Fireflies V2 webhooks use snake_case field names (meeting_id, event,
client_reference_id) instead of camelCase (meetingId, eventType,
clientReferenceId). The formatInput handler now auto-detects V1 vs V2
payloads and maps fields correctly, fixing empty meetingId on V2 webhooks.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(fireflies): guard against NaN timestamp, use stricter V2 detection
Address PR review feedback:
- Use Number.isFinite guard to prevent NaN timestamp propagation
- Use AND instead of OR for V2 detection since both meeting_id and
event are required fields in every V2 payload
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
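A hedged sketch of the normalization described above — the snake_case/camelCase field names come from the commit message, but the function name and output shape are illustrative, not the real formatInput handler:

```typescript
interface NormalizedPayload {
  meetingId: string
  eventType: string
  clientReferenceId?: string
  timestamp?: number
}

function normalizeFirefliesPayload(raw: Record<string, unknown>): NormalizedPayload {
  // V2 payloads always carry both required snake_case fields, so detection
  // uses AND rather than OR (per the review fix above).
  const isV2 = 'meeting_id' in raw && 'event' in raw
  const ts = Number(raw.timestamp)
  return {
    meetingId: String((isV2 ? raw.meeting_id : raw.meetingId) ?? ''),
    eventType: String((isV2 ? raw.event : raw.eventType) ?? ''),
    clientReferenceId: (isV2 ? raw.client_reference_id : raw.clientReferenceId) as
      | string
      | undefined,
    // Number.isFinite guards against NaN propagating from a missing field.
    timestamp: Number.isFinite(ts) ? ts : undefined,
  }
}
```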
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(execution): run pptx/docx/pdf generation inside isolated-vm sandbox
Retires the legacy doc-worker.cjs / pptx-worker.cjs pipeline that ran user
DSL via node:vm + full require() in the same UID/PID namespace as the main
Next.js process. User code now runs inside the existing isolated-vm pool
(V8 isolate, no process / require / fs, no /proc/1/environ reachability).
Introduces a first-class SandboxTask abstraction under apps/sim/sandbox-tasks/
that mirrors apps/sim/background/ — one file per task, central typed
registry, kebab-case ids. Adding a new isolate task is one file plus one
registry entry.
Runtime additions in lib/execution/:
- task-mode execution in isolated-vm-worker.cjs: load pre-built library
bundles, run task bootstrap, run user code, run finalize, transfer
Uint8Array result as base64 via IPC
- named broker IPC bridge (generalizes the existing fetch bridge) with
args size, result size, and per-execution call caps
- cooperative AbortSignal support: cancel IPC disposes the isolate, pool
slot is freed, pending broker-call timers are swept
- compiled scripts + references explicitly released per execution
- isolate.isDisposed used for cancellation detection (no error-string
substring matching)
Library bundles (pptxgenjs, docx, pdf-lib) are built into isolate-safe
IIFE bundles by apps/sim/lib/execution/sandbox/bundles/build.ts and
committed; next.config.ts / trigger.config.ts / Dockerfile updated to
ship them instead of the deleted dist/*-worker.cjs artifacts.
Call sites migrated:
- app/api/workspaces/[id]/pptx/preview/route.ts
- app/api/files/serve/[...path]/route.ts (+ test mock)
- lib/copilot/tools/server/files/{workspace-file,edit-content}.ts
All pass owner key user:<userId> for per-user pool fairness + distributed
lease accounting.
Made-with: Cursor
* improvement(sandbox): delegate timers to Node, add phase timings + saturation logs
Follow-ups on top of the isolated-vm migration (da14027b2):
Timer delegation (laverdet/isolated-vm#136 recommended pattern):
- setTimeout / setInterval / clearTimeout / clearImmediate delegate to
Node's real timer heap via ivm.Reference. Real delays are honored;
clearTimeout actually cancels; ms is clamped to the script timeout
so callbacks can't fire after the isolate is disposed.
- Per-execution timer tracking + dispose-sweep in finally. Zero stale
callbacks post-dispose.
- unwrapPrimitive helper normalizes ivm.Reference-wrapped primitives
(arguments: { reference: true } applies uniformly to all args).
- _polyfills.ts shrinks from ~130 lines to the global->globalThis alias.
Timers / TextEncoder / TextDecoder / console all install per-execution
from the worker via ivm bridges.
AbortSignal race fix (pre-existing bug surfaced by the timer smoke):
- Listener is registered after await tryAcquireDistributedLease. If the
signal aborted during that ~200ms window (Redis down), AbortSignal
doesn't fire listeners registered after the fact — the abort was
silently missed. Now re-checks signal.aborted synchronously after
addEventListener.
Observability:
- executeTask returns IsolatedVMTaskTimings (setup, runtimeBootstrap,
bundles, brokerInstall, taskBootstrap, harden, userCode, finalize,
total) in every success + error path. run-task.ts logs these with
workspaceId + queueMs so 'which tenant is slow' is queryable.
- Pool saturation events now emit structured logger.warn with reason
codes: queue_full_global, queue_full_owner, queue_wait_timeout,
distributed_lease_limit. Matches the existing broker reject pattern.
Security policy:
- New .cursor/rules/sim-sandbox.mdc codifies the hard rules for the
worker process: no app credentials, all credentialed work goes
through host-side brokers, every broker scopes by workspaceId.
Pre-merge checklist for future changes to isolated-vm-worker.cjs.
Measured phase breakdown (local smoke, Redis down): pptx wall=~310ms
with bundles=~16ms, finalize=~83ms; docx ~290ms / 17ms / 70ms; pdf
~235ms / 17ms / 5ms. Bundle compilation is not the bottleneck —
library finalize is.
Made-with: Cursor
* fix(sandbox): thread AbortSignal into runSandboxTask at every call site
Three remaining callers of runSandboxTask were not threading a
cancellation signal, so a client disconnect mid-compile left the pool
slot occupied for the full 60s task timeout. Matching the pattern the
pptx-preview route already uses.
- apps/sim/app/api/files/serve/[...path]/route.ts — GET forwards
`request.signal` into handleLocalFile / handleCloudProxy, which
forward into compileDocumentIfNeeded, which forwards into
runSandboxTask.
- apps/sim/lib/copilot/tools/server/files/workspace-file.ts — passes
`context.abortSignal` (transport/user stop) into runSandboxTask.
- apps/sim/lib/copilot/tools/server/files/edit-content.ts — same.
Smoke: simulated client disconnect at t=1000ms during a task that would
otherwise have waited 10s. The pool slot unwinds at t=1002ms with
AbortError; previously would have sat 60s until the task-level timeout.
Made-with: Cursor
* chore(build): raise node heap to 8GB for next build type-check
Next.js's type-check worker OOMs at the default 4GB heap on Node 23 for
this project's type graph size. Bumps the heap to 8GB only for the
`next build` invocation inside `bun run build`.
Docker builds are unaffected — `next.config.ts` sets
`typescript.ignoreBuildErrors: true` when DOCKER_BUILD=1, which skips
the type-check pass entirely. This only fixes local `bun run build`.
No functional code changes.
Made-with: Cursor
* fix lint
* refactor(copilot): dedup getDocumentFormatInfo across copilot file tools
The same extension -> { formatName, sourceMime, taskId } mapping was
duplicated in workspace-file.ts and edit-content.ts. Any future format
or task-id change had to happen in two places.
Exports getDocumentFormatInfo + DocumentFormatInfo from workspace-file.ts
(which already owned the PPTX/DOCX/PDF source MIME constants) and
imports it in edit-content.ts. Same source-of-truth pattern the file
already uses for inferContentType.
Made-with: Cursor
* fix(sandbox): propagate empty-message broker/fetch errors
Both bridges in the isolate used truthiness to detect host-side errors:
if (response.error) throw new Error(response.error); // broker
if (result.error) throw new Error(result.error); // fetch
If a host handler ever threw `new Error('')`, err.message would be ''
(falsy), so { error: '' } was silently swallowed and the isolate saw
a successful null result. Existing call sites don't throw empty-message
errors, but the pattern was structurally unsafe.
Switch both to typeof check === 'string' and fall back to a default
message if the string is empty, so all host-reported errors propagate
into the isolate regardless of message content.
Made-with: Cursor
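A minimal sketch of the fixed check, with an illustrative response shape (the real bridge messages carry more fields):

```typescript
interface BridgeResponse {
  error?: string
  result?: unknown
}

// Detect a host-reported error by type, not truthiness, so { error: '' }
// still throws instead of being treated as a successful null result.
function unwrapBridgeResponse(response: BridgeResponse): unknown {
  if (typeof response.error === 'string') {
    // Fall back to a default message so an empty string still surfaces.
    throw new Error(response.error || 'Host handler reported an error')
  }
  return response.result ?? null
}
```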
* improvement(mothership): agent model dropdown validations, recommendation system
* mark a few more models
* remove regex-based checks
* remove dead code
* remove inherited reseller flags
* fix note
* address bugbot comments
* code cleanup
Replaces MutationObserver on document.documentElement (watching CSS variable
changes) + window resize listener with a ResizeObserver on the terminal element
itself. The terminal now measures its own rendered width directly, so it responds
correctly to all layout changes — sidebar, workflow panel, and mothership resize —
without indirect CSS variable plumbing or cross-component coupling.
* fix(docs): preserve gif playback position in lightbox and clean up ui components
- Capture currentTime on click and seek lightbox video to match using useLayoutEffect
- Convert lightboxStartTime from useState to useRef (no independent render needed)
- Apply same fix to ActionVideo in action-media.tsx
- Remove dead AnimatedBlocks component (zero imports)
- Fix language-dropdown to derive currentLang during render instead of mirroring into state via effect
- Replace template literals with cn() in faq.tsx and video.tsx
* fix(chat): prevent @-mention menu focus loss and stabilize render identity
Radix DropdownMenu's FocusScope was restoring focus from the search input
to the content root whenever registered menu items mounted or unmounted
inside the content, interrupting typing after a keystroke or two.
- Keep the default tree always mounted under `hidden` instead of swapping
subtrees when the filter activates.
- Render filtered results as plain <button role="menuitem"> so they do not
participate in Radix's menu Collection.
- Add activeIndex state with ArrowUp/Down/Enter keyboard nav, mouse-hover
sync, and scrollIntoView so the highlighted row stays visible and users
can see what Enter will select.
While tracing the cascade that compounded the bug:
- Hoist `select` in useWorkflowMap / useWorkspacesQuery / useFolderMap to
module scope so TanStack Query caches the select result across renders.
- Guard setSelectedContexts([]) with a functional updater that bails out
when already empty, preventing a fresh [] literal from invalidating
consumers that key on reference identity.
- Wrap WorkspaceHeader in React.memo so it bails out on parent renders
once its (now-stable) props are unchanged.
Made-with: Cursor
* remove extraneous comments
* cleanup
* fix(chat): apply same setState bail-out to clearContexts for consistency
Matches the invariant we already established for the message effect:
calling setSelectedContexts([]) against an already-empty array emits a
fresh [] reference (Object.is bails out are not reference-level), which
cascades through consumers that key on selectedContexts identity.
clearContexts is part of the hook's public API so callers can't know
whether the list is empty — make it safe for them.
Made-with: Cursor
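The bail-out above reduces to a setState-style functional updater; `clearContextsUpdater` is an illustrative name, not the hook's real API:

```typescript
type Context = { id: string }

// Returning the previous array when it is already empty preserves reference
// identity, so React's Object.is comparison skips the re-render instead of
// cascading a fresh [] through identity-keyed consumers.
const clearContextsUpdater = (prev: Context[]): Context[] =>
  prev.length === 0 ? prev : []
```

In the hook this would be used as `setSelectedContexts(clearContextsUpdater)` rather than `setSelectedContexts([])`.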
* improvement(utils): add shared utility functions and replace inline patterns
Add sleep, toError, safeJsonParse, isNonNull helpers and invariant/assertNever
assertions. Replace all inline implementations across the codebase with these
shared utilities for consistency. Zero behavioral changes.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
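Plausible shapes for two of the helpers named above; the real @sim/utils implementations may differ in detail:

```typescript
// Normalize any thrown value into an Error without double-wrapping.
function toError(value: unknown): Error {
  return value instanceof Error ? value : new Error(String(value))
}

// Promise-based delay replacing inline `new Promise(setTimeout)` patterns.
function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms))
}
```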
* fix(agiloft): remove import type from .server module to fix client bundle build
Turbopack resolves .server.ts modules even for type-only imports,
pulling dns/promises into client bundles. Define SecureFetchResponse
locally instead.
* fix(agiloft): revert to client-safe imports to fix build
The SSRF upgrade to input-validation.server introduced dns/promises
into client bundles via tools/registry.ts. Revert to the original
client-safe validateExternalUrl + fetch. The SSRF DNS-pinning upgrade
for agiloft directExecution should be done via API routes in a
separate PR.
* feat(agiloft): add API route for retrieve_attachment, matching established file patterns
Convert retrieve_attachment from directExecution to standard API route
pattern, consistent with Slack download and Google Drive download tools.
- Create /api/tools/agiloft/retrieve with DNS validation, auth lifecycle,
and base64 file response matching the { file: { name, mimeType, data,
size } } convention
- Update retrieve_attachment tool to use request/transformResponse
instead of directExecution, removing the dependency on
executeAgiloftRequest from the tool definition
- File output type: 'file' enables FileToolProcessor to store downloaded
files in execution filesystem automatically
* shopify
* fix(agiloft): add optional flag to nullable lock record block outputs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(agiloft): revert optional flag on block outputs — property only exists on tool outputs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* chore(utils): remove unused utilities (asserts, safeJsonParse, isNonNull)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* improvement(sidebar): interleave folders and workflows by sort order in all resource pickers
- Merge folder/workflow submenus into a single Workflows tree sorted by sortOrder in both the @ plus-menu and add-resource dropdowns
- Widen both dropdowns from 240px to 320px and remove type labels from search results
- Fix isOpen/onSwitch regression: WorkflowFolderTreeItems now forwards node.isOpen so already-open tabs are switched to rather than duplicated
- Apply same interleaved sortOrder ordering to the collapsed sidebar's root-level folder+workflow list
* fix(add-resource-dropdown): align sort tiebreaker with compareByOrder, document empty-folder omission
Use id.localeCompare as the sort tiebreaker in buildWorkflowFolderTree to match the sidebar's
compareByOrder fallback (sortOrder → id) instead of name. Add a comment clarifying that empty
folders are intentionally omitted from the tree view.
* chore: remove extraneous inline comment
* feat(monday): add full Monday.com integration with tools, block, triggers, and OAuth
Adds a comprehensive Monday.com integration:
- 13 tools: list/get boards, CRUD items, search, subitems, updates, groups, move, archive
- Block with operation dropdown, board/group selectors, OAuth credential, advanced mode
- 9 webhook triggers with auto-subscription lifecycle (create/delete via GraphQL API)
- OAuth config with 7 scopes (boards, updates, webhooks, me:read)
- Provider handler with challenge verification, formatInput, idempotency
- Docs, icon, selectors, and all registry wiring
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(monday): cast userId to string in deleteSubscription fallback
The DeleteSubscriptionContext type has userId as unknown, causing a
TypeScript error when passing it to getOAuthToken which expects string.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(monday): escape string params in GraphQL, align deleteSubscription with established patterns
- Use JSON.stringify() for groupId in get_items.ts (matches create_item.ts
and move_item_to_group.ts)
- Use JSON.stringify() for notificationUrl in webhook provider
- Remove non-standard getOAuthToken fallback in deleteSubscription to match
Airtable/Webflow pattern (credential resolution only, warn and return on failure)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(monday): sanitize columns JSON in search_items GraphQL query
Parse and re-stringify the columns param to ensure well-formed JSON
before interpolating into the GraphQL query, preventing injection
via malformed input.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(monday): validate all numeric IDs and sanitize columns in GraphQL queries
- Add sanitizeNumericId() helper to tools/monday/utils.ts for consistent
validation across all tool body builders
- Apply to all 13 instances of boardId, itemId, parentItemId interpolation
across 11 tool files, preventing GraphQL injection via crafted IDs
- Wrap JSON.parse in search_items.ts with try-catch for user-friendly
error on malformed column filter JSON
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
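Illustrative versions of the two helpers described above; the real implementations live in the Monday tool utilities and may differ:

```typescript
// Reject anything that is not purely digits before interpolating an ID into
// a GraphQL query string, closing the injection path via crafted IDs.
function sanitizeNumericId(id: string | number, label = 'id'): string {
  const s = String(id)
  if (!/^\d+$/.test(s)) {
    throw new Error(`Invalid ${label}: must be numeric`)
  }
  return s
}

// Coerce limit/page params to a bounded integer with a safe fallback.
function sanitizeLimit(value: unknown, max = 500, fallback = 25): number {
  const n = Math.trunc(Number(value))
  if (!Number.isFinite(n) || n < 1) return fallback
  return Math.min(n, max)
}
```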
* fix(monday): deduplicate numeric ID validation, sanitize limit/page params
- Refactor sanitizeNumericId to delegate to validateMondayNumericId
from input-validation.ts, eliminating duplicated regex logic
- Add sanitizeLimit helper for safe integer coercion with bounds
- Apply sanitizeLimit to limit/page params in list_boards, get_items,
and search_items for consistent validation across all GraphQL params
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(monday): align list_boards limit description with code (max 500)
The param description said "max 100" but sanitizeLimit caps at 500,
which is what Monday.com's API supports for boards. Updated both the
tool description and docs to say "max 500".
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(triggers): add Atlassian triggers for Jira, JSM, and Confluence
- Jira: add 9 new triggers (sprint created/started/closed, project created, version released, comment updated/deleted, worklog updated/deleted)
- JSM: add 5 triggers from scratch (request created/updated/commented/resolved, generic webhook)
- Confluence: add 7 new triggers (comment updated, attachment updated, page/blog restored, space removed, page permissions updated, user created)
- Add JSM webhook provider handler with HMAC validation and changelog-based event matching
- Add Atlassian webhook identifier to idempotency service for native dedup
- Add extractIdempotencyId to Confluence handler
- Fix Jira generic webhook to pass through full payload for non-issue events
- Fix output schemas: add description (ADF), updateAuthor, resolution, components, fixVersions, worklog timestamps, note emailAddress as Jira Server only
* fix(triggers): replace any with Record<string, unknown> in confluence extract functions
* lint
* fix(triggers): use comment.id in JSM idempotency, fix confluence type cast
JSM extractIdempotencyId now prioritizes comment.id over issue.id for
comment_created events, matching Jira's documented webhook payload
structure. Also fixes type cast for confluence extract function calls.
* fix(triggers): correct comment.body type to json, fix TriggerOutput description type
- JSM webhook comment.body changed from string to json (ADF format)
- Widened TriggerOutput.description to accept TriggerOutput objects,
removing unsafe `as unknown as string` casts for Jira description fields
* fix(export): preserve non-ASCII characters in exported workflow filenames
Previously, non-ASCII characters (such as Korean) in workflow names were
replaced by dashes during export because of a restrictive regex.
This update uses a Unicode-aware regex to allow letters and numbers
from any language while still sanitizing unsafe filesystem characters.
fixes #4119
Signed-off-by: JaeHyung Jang <jaehyung.jang@navercorp.com>
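The Unicode-aware sanitization described above might look like the following sketch (the function name and exact allowed character set are illustrative, not the actual implementation):

```typescript
// Hypothetical sketch: replace characters unsafe for filesystems while
// keeping letters and numbers from any script (\p{L} = letter, \p{N} = number).
function sanitizeFilename(name: string): string {
  return name.replace(/[^\p{L}\p{N}\-_. ]/gu, '-')
}

// Korean characters survive; reserved characters like "?" are replaced.
sanitizeFilename('워크플로 v2?') // → '워크플로 v2-'
```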
* improvement(logs): fix trigger badge wrapping, time range picker, status filters, and React anti-patterns
* chore(logs): remove dev mock logs
* fix(logs): prevent DatePicker onOpenChange from reverting time range after Apply
* fix(socket): sync deploy button state across collaborators
Broadcast workflow-deployed events via socket so all connected users
invalidate their deployment query cache when any user deploys, undeploys,
activates a version, or triggers a deploy through chat/form endpoints.
* fix(socket): check response status on deployment notification
Log a warning when the socket server returns a non-2xx status for
deployment notifications, matching the pattern in lifecycle.ts.
* improvement(config): consolidate socket server URL into getSocketServerUrl/getSocketUrl
Replace all inline `env.SOCKET_SERVER_URL || 'http://localhost:3002'` and
`getEnv('NEXT_PUBLIC_SOCKET_URL') || 'http://localhost:3002'` with centralized
utility functions in urls.ts, matching the getBaseUrl() pattern.
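A minimal sketch of such a centralized helper (names taken from the commit; the env plumbing is illustrative):

```typescript
// Illustrative: one shared fallback instead of inline defaults at every call site.
const DEFAULT_SOCKET_URL = 'http://localhost:3002'

function getSocketServerUrl(env: Record<string, string | undefined>): string {
  // Server-side socket URL with a single, shared localhost fallback.
  return env.SOCKET_SERVER_URL || DEFAULT_SOCKET_URL
}
```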
* improvement(config): consolidate Ollama URL and CSP socket/Ollama hardcodes
Add getOllamaUrl() to urls.ts and replace inline env.OLLAMA_URL fallbacks
in the provider and API route. Update CSP to use getSocketUrl(),
getOllamaUrl(), and a local toWebSocketUrl() helper instead of hardcoded
localhost strings.
* lint
* fix(tests): add missing mocks for new URL utility exports
Update lifecycle, async execute, and chat manage test mocks to include
getSocketServerUrl, getOllamaUrl, and notifySocketDeploymentChanged.
* fix(csp): remove urls.ts import to fix next.config.ts build
CSP is loaded by next.config.ts which transpiles outside the @/ alias
context. Use local constants instead of importing from urls.ts.
* fix(queries): invalidate chat and form status on deployment change
Add chatStatus and formStatus to invalidateDeploymentQueries so all
deployment-related queries refresh when any user deploys or undeploys.
* improvement(ui): remove React anti-patterns, fix CSP violations
* fix(ui): restore useMemo on existingKeys — it is observed by useAvailableResources
* improvement(ui): add RefreshCw icon, update Bell SVG, active state styling for header actions
* minor UI improvements
* feat(docs): fill documentation gaps across platform features
* fix(docs): address PR review comments on chat OTP cookies and MCP env var placeholders
* fix(docs): replace smart quotes with straight quotes in JSX attributes
* update(docs): update mcp, custom tools, and variables docs
* Fix grammar
* mothership docs, tags, connectors, api, chat deploy, etc
* more info
* more
* feat(docs): auto-generate per-provider trigger documentation
Extends scripts/generate-docs.ts to produce one MDX page per trigger
provider (39 pages) in apps/docs/content/docs/en/triggers/. The 5
hand-written pages (index, start, schedule, webhook, rss) are never
touched.
Key additions to the generation script:
- resolveConstVariable() resolves module-level const spreads so
providers like Vercel that build outputs from const variables (not
just functions) are fully documented
- resolveTriggerBuilderFunction() extended to expand variable spreads
(...varName) in addition to function-call spreads (...fn())
- groupTriggersByProvider() deduplicates v1/v2 trigger variants by
name, keeping the highest-versioned one per provider
- writeIconMapping() adds bare-name aliases for versioned block types
(github_v2 → github, fireflies_v2 → fireflies, etc.) so
BlockInfoCard resolves icons for all 39 trigger providers
- extractTriggerConfigFields() filters readOnly display blocks (webhook
URL displays, sample payloads, curl examples) from config tables
Each generated page includes: BlockInfoCard with correct icon/color,
trigger count, polling note where applicable, Configuration table, and
Output table for every trigger. No "Type:" lines.
* refactor(docs): align trigger docs structure with tools docs
- Use ### `trigger_id` headings (matching ### `tool_id` in tools docs)
- Wrap all trigger sections under a ## Triggers header
- Rename Configuration/Output to #### level (matching #### Input/Output)
- Use Parameter column header to match tools docs table style
- Map UI widget types to semantic types: short-input/long-input/dropdown
→ string, switch → boolean, slider → number, oauth-input → string
* refactor(docs): use human-readable names for trigger section headings
Trigger IDs are internal identifiers; users scan by name. Switch from
### `trigger_id` to ### Trigger Name for cleaner sidebar navigation
and better readability.
* fix(docs): resolve subBlock builder functions for all trigger Config sections
Extends generate-docs.ts to parse subBlock builder functions so all 15
providers previously missing Configuration sections now generate them.
Handles three patterns:
- `buildTriggerSubBlocks({extraFields: buildX(...)})` — extracts extra
fields from the call site and resolves them from the provider's utils.ts
- `return [...]` — direct array return (Attio, Confluence, etc.)
- `blocks.push(...)` — imperative push pattern (Linear, Ashby)
Also resolves const-reference field IDs (SCREAMING_CASE) by searching
the webhook provider constants cache, fixing Gong's `gongJwtPublicKeyPem`
field which was previously unresolvable. Adds title-as-description fallback
for OAuth credential fields that have no explicit description.
* fix(docs): correctly destructure nested implicit-object trigger outputs
Fixes a parser bug where output fields with no top-level `type` key but
child fields each having their own `type`/`description` were incorrectly
parsed. The `type:` and `description:` regex matches were not
depth-aware, so values from nested children bled into the parent field.
Changes:
- Add `isAtDepthZero()` helper for brace-depth-aware regex matching
- Fix `parseFieldContent` to only match `type:` at brace depth 0
- Fix `extractDescription` to only match `description:` at brace depth 0
- Add implicit-object fallback: when no top-level `type` exists but child
fields have their own types, treat as `object` with `properties`
- Regenerate all affected trigger docs (Cal.com payload, Linear data,
Jira issue.fields, Ashby application, Greenhouse candidate, etc.)
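The depth-aware matching described above can be sketched roughly like this (a simplified stand-in for the real parser, not its actual code):

```typescript
// Hypothetical sketch of brace-depth-aware matching: a match only belongs to
// the current field when it is not nested inside child braces.
function isAtDepthZero(content: string, index: number): boolean {
  let depth = 0
  for (let i = 0; i < index; i++) {
    if (content[i] === '{') depth++
    else if (content[i] === '}') depth--
  }
  return depth === 0
}

// The first "type:" is at depth 0; the one inside "{...}" is not, so it no
// longer bleeds into the parent field.
const src = 'type: object { type: string }'
isAtDepthZero(src, src.indexOf('type:')) // → true
isAtDepthZero(src, src.lastIndexOf('type:')) // → false
```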
* chore(docs): update static trigger and start page images
* feat(providers): add claude-opus-4-7 model with adaptive thinking support
* Add workflow version screenshots
* Add function block screenshots
---------
Co-authored-by: Theodore Li <theo@sim.ai>
* improvement(landing): optimize core web vitals and accessibility
Code-split AuthModal and DemoRequestModal via next/dynamic across 7 landing
components to move auth-client bundle (~150-250KB) out of the initial JS payload.
Replace useSession import in navbar with direct SessionContext read to avoid
pulling the entire better-auth client into the landing page bundle. Add immutable
cache header for content-hashed _next/static assets. Defer PostHog session
recording until user identification to avoid loading the recorder (~80KB) on
anonymous visits. Fix accessibility issues flagged by Lighthouse: add missing
aria-label on preview submit button, add inert to aria-hidden ReactFlow wrapper,
set decorative alt on logos inside labeled links, disambiguate duplicate footer
API links.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(posthog): guard startSessionRecording against repeated calls on refetch
The effect fires on every session reload (e.g., subscription upgrade).
Calling startSessionRecording() while already recording fragments the
session in the analytics dashboard. Add sessionRecordingStarted() guard
so recording only starts once per page lifecycle.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(config): remove redundant _next/static cache header
Next.js already sets Cache-Control: public, max-age=31536000, immutable
on _next/static assets natively and this cannot be overridden. The custom
rule was redundant on Vercel and conflicted with the extension-based rule
on self-hosted deployments due to last-match-wins ordering.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(brightdata): use params for echo-back fields in transformResponse
transformResponse receives params as its second argument. Use it to
return the original url, query, snapshotId, and searchEngine values
instead of hardcoding null or extracting from response data that may
not contain them.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(brightdata): handle async Discover API with polling
The Bright Data Discover API is asynchronous — POST /discover returns
a task_id, and results must be polled via GET /discover?task_id=...
The previous implementation incorrectly treated it as synchronous,
always returning empty results.
Uses postProcess (matching Firecrawl crawl pattern) to poll every 3s
with a 120s timeout until status is "done".
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
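The polling approach might look like the following sketch (interval and timeout values from the commit; `fetchStatus` is a hypothetical stand-in for the real GET /discover?task_id=... call):

```typescript
// Illustrative polling loop: check status every intervalMs until "done" or
// the deadline passes.
async function pollUntilDone(
  fetchStatus: () => Promise<{ status: string; data?: unknown }>,
  intervalMs = 3_000,
  timeoutMs = 120_000
) {
  const deadline = Date.now() + timeoutMs
  while (Date.now() < deadline) {
    const result = await fetchStatus()
    if (result.status === 'done') return result
    await new Promise((resolve) => setTimeout(resolve, intervalMs))
  }
  throw new Error('Discover polling timed out')
}
```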
* fix(brightdata): alphabetize block registry entry
Move box before brandfetch/brightdata to maintain alphabetical ordering.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* lint
* fix(brightdata): return error objects instead of throwing in postProcess
The executor wraps postProcess in try-catch and falls back to the
intermediate transformResponse result on error, which has success: true
with empty results. Throwing errors would silently return empty results.
Match Firecrawl's pattern: return { ...result, success: false, error }
instead of throwing. Also add taskId to BrightDataDiscoverResponse type
to eliminate unsafe casts.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(brightdata): use platform execution timeout for Discover polling
Replace hardcoded 120s timeout with DEFAULT_EXECUTION_TIMEOUT_MS to
match Firecrawl and other async polling tools. Respects platform-
configured limits (300s free, 3000s paid).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Derive sidebar open state from selection validity instead of using a
separate useEffect. Also removes unnecessary useMemo/useCallback in
non-memo'd components, replaces useEffect with render-time reset in
dashboard, fixes CSS tokens, and adds hierarchical query key factory.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
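A hierarchical query key factory of the kind mentioned above typically has this shape (key names here are illustrative, not the actual factory):

```typescript
// Illustrative: each more-specific key is built by spreading its parent, so
// invalidating a parent key matches every child key by prefix.
const logsKeys = {
  all: ['logs'] as const,
  lists: () => [...logsKeys.all, 'list'] as const,
  detail: (id: string) => [...logsKeys.all, 'detail', id] as const,
}

logsKeys.detail('42') // → ['logs', 'detail', '42']
```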
* feat(brightdata): add Bright Data integration with 8 tools
Add complete Bright Data integration supporting Web Unlocker, SERP API,
Discover API, and Web Scraper dataset operations. Includes scrape URL,
SERP search, discover, sync scrape, scrape dataset, snapshot status,
download snapshot, and cancel snapshot tools.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(brightdata): address PR review feedback
- Fix truncated "Download Snapshot" description in integrations.json and docs
- Map engine-specific query params (num/count/numdoc, hl/setLang/lang/kl,
gl/cc/lr) per search engine instead of using Google-specific params for all
- Attempt to parse snapshot_id from cancel/download response bodies instead
of hardcoding null
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* lint
* fix(agiloft): change bgColor to white; fix docs truncation
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(brightdata): avoid inner quotes in description to fix docs generation
The docs generator regex truncates at inner quotes. Reword the
download_snapshot description to avoid embedded double quotes.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(brightdata): disable incompatible DuckDuckGo and Yandex URL params
DuckDuckGo kl expects region-language format (us-en) and Yandex lr
expects numeric region IDs (213), not plain two-letter codes. Disable
these URL-level params since Bright Data normalizes localization through
the body-level country param.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* improvement(seo): optimize sitemaps and robots.txt across sim and docs
- Add missing pages to sim sitemap: blog author pages, academy catalog and course pages
- Fix 6x duplicate URL bug in docs sitemap by deduplicating with source.getLanguages()
- Convert docs sitemap from route handler to Next.js metadata convention with native hreflang
- Add x-default hreflang alternate for docs multi-language pages
- Remove changeFrequency and priority fields (Google ignores both)
- Fix inaccurate lastModified timestamps — derive from real content dates, omit when unknown
- Consolidate 20+ redundant per-bot robots rules into single wildcard entry
- Add /form/ and /credential-account/ to sim robots disallow list
- Reference image sitemap in sim robots.txt
- Remove deprecated host directive from sim robots
- Move disallow rules before allow in docs robots for crawler compatibility
- Extract hardcoded docs baseUrl to env variable with production fallback
* fix(seo): remove homepage new Date(), guard latestModelDate empty array
* improvement(seo): consolidate DOCS_BASE_URL, optimize core web vitals
Extract hardcoded https://docs.sim.ai into shared DOCS_BASE_URL constant
in lib/urls.ts and replace all 20+ instances across layouts, metadata,
structured data, LLM manifest, sitemap, and robots files. Remove
OneDollarStats analytics script and tighten CSP for improved core web vitals.
* fix: removed onedollarstats from bun lock
* fix(seo): guard per-provider Math.max, consolidate docs robots to single wildcard
* v0.6.29: login improvements, posthog telemetry (#4026)
* feat(posthog): Add tracking on mothership abort (#4023)
Co-authored-by: Theodore Li <theo@sim.ai>
* fix(login): fix captcha headers for manual login (#4025)
* fix(signup): fix turnstile key loading
* fix(login): fix captcha header passing
* Catch user already exists, remove login form captcha
* fix(landing): return 404 for invalid dynamic route slugs
Add `dynamicParams = false` to all landing page dynamic routes so
Next.js returns a proper 404 instead of a client-side exception for
slugs not in generateStaticParams.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(home): remove duplicate handleStopGeneration declaration
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Theodore Li <theodoreqili@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* refactor(microsoft-excel): export GRAPH_ID_PATTERN and reuse across routes
Export the shared regex pattern from utils.ts and import it in files/route.ts
and drives/route.ts instead of duplicating the inline pattern. Also reorders
the TSDoc comment to sit above getItemBasePath where it belongs.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* lint
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(microsoft-excel): add SharePoint drive support for Excel integration
* fix(microsoft-excel): address PR review comments
- Validate siteId/driveId format in drives route to prevent path traversal
- Use direct single-drive endpoint for fetchById instead of filtering full list
- Fix dependsOn on sheet/spreadsheet selectors so driveId flows into context
- Fix NextRequest type in drives route for build compatibility
* fix(microsoft-excel): validate driveId in files route
Add regex validation for driveId query param in the Microsoft OAuth
files route to prevent path traversal, matching the drives route.
* fix(microsoft-excel): unblock OneDrive users and validate driveId in sheets route
- Add credential to any[] arrays so OneDrive users (no drive selected)
still pass the dependsOn gate while driveSelector remains in the
dependency list for context flow to SharePoint users
- Add /^[\w-]+$/ validation for driveId in sheets API route
* fix(microsoft-excel): validate driveId in getItemBasePath utility
Add regex validation for driveId at the shared utility level to prevent
path traversal through the tool execution path, which bypasses the
API route validators.
* fix(microsoft-excel): use centralized input validation
Replace inline regex validation with platform validators from
@/lib/core/security/input-validation:
- validateSharePointSiteId for siteId in drives route
- validateAlphanumericId for driveId in drives, sheets, files routes
and getItemBasePath utility
* lint
* improvement(microsoft-excel): add File Source dropdown to control SharePoint visibility
Replace always-visible optional SharePoint fields with a File Source
dropdown (OneDrive/SharePoint) that conditionally shows site and drive
selectors. OneDrive users see zero extra fields (default). SharePoint
users switch the dropdown and get the full cascade.
* fix(microsoft-excel): fix canonical param test failures
Make fileSource dropdown mode:'both' so it appears in basic and advanced
modes. Add condition to manualDriveId to match driveSelector's condition,
satisfying the canonical pair consistency test.
* fix(microsoft-excel): address PR review feedback for SharePoint drive support
- Clear stale driveId/siteId/spreadsheetId when fileSource changes by adding
fileSource to dependsOn arrays for siteSelector, driveSelector, and
spreadsheetId selectors
- Reorder manualDriveId before manualSpreadsheetId in advanced mode for
logical top-down flow
- Validate spreadsheetId with validateMicrosoftGraphId in getItemBasePath()
and sheets route to close injection vector (uses permissive validator that
accepts ! chars in OneDrive item IDs)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(microsoft-excel): use validateMicrosoftGraphId for driveId validation
SharePoint drive IDs use the format b!<base64-string> which contains !
characters rejected by validateAlphanumericId. Switch all driveId
validation to validateMicrosoftGraphId which blocks path traversal and
control characters while accepting valid Microsoft Graph identifiers.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(microsoft-excel): use validatePathSegment with strict pattern for driveId/spreadsheetId
Replace validateMicrosoftGraphId with validatePathSegment using a custom
pattern ^[a-zA-Z0-9!_-]+$ for all URL-interpolated IDs. validatePathSegment
blocks /, \, path traversal, and null bytes before checking the pattern,
preventing URL-modifying characters like ?, #, & from altering the Graph
API endpoint. The pattern allows ! for SharePoint b!<base64> drive IDs.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
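The layered validation described above can be sketched as follows (the helper name is hypothetical; the pattern is the one from the commit):

```typescript
// Sketch: reject path separators, traversal, and null bytes first, then apply
// the strict pattern. "!" is allowed for SharePoint b!<base64> drive IDs.
const GRAPH_SEGMENT_PATTERN = /^[a-zA-Z0-9!_-]+$/

function isValidGraphPathSegment(value: string): boolean {
  if (value.includes('/') || value.includes('\\')) return false
  if (value.includes('..') || value.includes('\0')) return false
  return GRAPH_SEGMENT_PATTERN.test(value)
}

isValidGraphPathSegment('b!AbCd_123-xyz') // → true (SharePoint drive ID shape)
isValidGraphPathSegment('drive?select=x') // → false ("?" would alter the endpoint)
```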
* lint
* fix(microsoft-excel): reorder driveId before spreadsheetId in v1 block
Move driveId subBlock before manualSpreadsheetId in the legacy v1 block
to match the logical top-down flow (Drive ID → Spreadsheet ID), consistent
with the v2 block ordering.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(microsoft-excel): clear manualDriveId when fileSource changes
Add dependsOn: ['fileSource'] to manualDriveId so its value is cleared
when switching from SharePoint back to OneDrive. Without this, the stale
driveId would still be serialized and forwarded to getItemBasePath,
routing through the SharePoint drive path instead of me/drive.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor(microsoft-excel): use getItemBasePath in sheets route to remove duplication
Replace inline URL construction and validation logic with the shared
getItemBasePath utility, eliminating duplicated GRAPH_ID_PATTERN regex
and conditional URL building.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* lint
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(blocks): correct required field validation for Jira and Confluence blocks
Jira: summary is only required for create (not update), projectId is not required for update (API uses issueKey). Confluence: title and content are required for page creation, title is required for blog post creation — all enforced by backend validation.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(blocks): remove projectId dependsOn gate for update fields, require content for blog post creation
Jira: Remove dependsOn projectId from shared write/update fields — projectId is not required for update so the gate would disable all update fields when no project is selected. Write-only fields (issueType, parentIssue, reporter) retain the gate since projectId is required for create.
Confluence V2: Add create_blogpost to content required condition — backend Zod schema enforces content for blog post creation.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* lint
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
The PlayOutline icon had a non-standard viewBox and mismatched path,
causing it to render at an inconsistent size and shape compared to the
filled Play icon and other action bar icons.
* fix(seo): correct canonical URLs, compress oversized images, add cache headers
- Replace all hardcoded https://sim.ai with https://www.sim.ai via SITE_URL constant
- Migrate models, integrations, and homepage metadata from getBaseUrl() to SITE_URL
- Compress 6 blog/landing images from 2.6MB to 300KB total
- Convert mothership cover from PNG to JPEG (1.1MB → 99KB)
- Add Cache-Control headers for static assets (1d max-age, 7d stale-while-revalidate)
- Add SEO regression test scanning all public pages for canonical URL violations
* fix(seo): replace hardcoded URLs with SITE_URL, broaden test detection
- Replace hardcoded https://www.sim.ai with SITE_URL in academy, changelog.xml, and whitelabeling
- Broaden getBaseUrl() detection in SEO test to match any variable name assignment
- Add ee/whitelabeling/metadata.ts to SEO test scan scope
* improvement(ui): delegate streaming animation to Streamdown component
Remove custom useStreamingText hook and useThrottledValue indirection
in favor of Streamdown's built-in streaming props. This eliminates the
manual character-by-character reveal logic (setInterval, easing, chase
factor) and lets the library handle animation natively, reducing
complexity and improving consistency across Mothership and chat.
* improvement(ui): inline passthrough wrapper, add hydration guard
- Inline EnhancedMarkdownRenderer which became a trivial passthrough
after removing useThrottledValue
- Add hydration guard to MarkdownRenderer to prevent replaying the
entrance animation when mounting mid-stream with existing content
* improvement: removed chat animation
* improvement(ui): remove hardcoded fade-in animations from special tags
Remove animate-stream-fade-in from OptionsDisplay, CredentialDisplay,
MothershipErrorDisplay, and UsageUpgradeDisplay. These components
re-render after streaming ends, causing a visible flash as the
opacity animation replays. PendingTagIndicator retains its animation
since it only renders during active streaming.
* fix(ui): use streaming mode for Streamdown during active streams
mode='static' disables Remend (auto-closing incomplete markdown),
incremental block splitting, and React Transitions. Switch to
streaming mode while isStreaming is true so partial markdown renders
correctly, without re-adding animation props.
* fix(security): resolve ReDoS vulnerability in function execute tag pattern
Simplified regex to eliminate overlapping quantifiers that caused exponential
backtracking on malformed input without closing delimiter.
* feat(jira): support raw ADF document objects in description and environment fields
Add toAdf() helper that passes through ADF objects as-is or wraps plain
text in a single-paragraph ADF doc. Update write and update routes to
use it, replacing inline ADF wrapping. Update Zod schema to accept
string or object for description. Fully backward compatible — plain
text still works, but callers can now pass rich ADF with expand nodes,
tables, code blocks, etc.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(jira): handle partial ADF nodes and non-ADF objects in toAdf()
Wrap partial ADF nodes (type + content but not doc) in a doc envelope.
Fall back to JSON.stringify for non-ADF objects instead of String()
which produces [object Object].
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* lint
* fix(jira): handle JSON-stringified ADF in toAdf() for variable resolution
The executor's formatValueForBlock() JSON-stringifies object values when
resolving <Block.output> references. This means an ADF object from an
upstream Agent block arrives at the route as a JSON string. toAdf() now
detects JSON strings containing valid ADF documents or nodes and parses
them back, ensuring rich formatting is preserved through the pipeline.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
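Taken together, the toAdf() behavior these commits describe can be sketched roughly as below (types and structure are illustrative, not the exact implementation):

```typescript
// Rough sketch of the cumulative toAdf() behavior: pass through full ADF docs,
// wrap partial nodes in a doc envelope, parse JSON-stringified ADF back, and
// wrap plain text in a single-paragraph doc.
type AdfNode = { type: string; [key: string]: unknown }

function toAdf(value: string | AdfNode): AdfNode {
  if (typeof value === 'string') {
    // JSON-stringified ADF from upstream variable resolution: parse it back.
    try {
      const parsed = JSON.parse(value)
      if (parsed && typeof parsed === 'object' && parsed.type) return toAdf(parsed)
    } catch {
      // Not JSON: fall through and wrap as plain text.
    }
    return {
      type: 'doc',
      version: 1,
      content: [{ type: 'paragraph', content: [{ type: 'text', text: value }] }],
    }
  }
  if (value.type === 'doc') return value // full ADF document passes through as-is
  // Partial ADF node (type + content but no doc): wrap in a doc envelope.
  return { type: 'doc', version: 1, content: [value] }
}
```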
* lint changes
* fix(jira): update environment Zod schema to accept ADF objects
Match the description field schema change — environment also passes
through toAdf() so its Zod schema must accept objects too.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* updated bun.lockb
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(google-drive): add auto export format and Azure storage debug logging
* chore: remove Azure storage debug logging
* fix(google-drive): use status-based fallback instead of string matching for export errors
* fix(google-drive): validate export formats against Drive API docs, remove fallback
* fix(google-drive): use value function for dropdown default
* fix(google-drive): add text/markdown to valid export formats for Google Docs
* fix(google-drive): correct ODS MIME type for Sheets export format
* fix(ci): replace dynamic secret access with explicit secret references
Resolves CodeQL "Excessive Secrets Exposure" warning by replacing
secrets[matrix.ecr_repo_secret] with conditional expressions that
reference only the specific secrets needed.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(ci): add explicit ECR_REALTIME guard and use env block for secret injection
- Prevent silent fallthrough to ECR_REALTIME for unrecognized secret keys
- Move build-amd64 secret resolution to env: block matching build-dev pattern
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* improvement(ui): restore smooth streaming animation, fix follow-up auto-scroll, move CopyCodeButton to emcn
* fix(ui): restore delayed animation, handle tilde fences, fix follow-up scroll root cause
* fix(ui): extract useStreamingReveal to followup, keep cleanup changes
* fix(ui): restore hydratedStreamingRef for reconnect path order-of-ops
* fix(ui): restore full hydratedStreamingRef effect for reconnect path
* fix(ui): use hover-hover prefix on CopyCodeButton callers to correctly override ghost variant
* fix(logs): remove destructive color from cancel execution menu item
* feat(logs): optimistic cancelling status on cancel execution
* feat(logs): allow cancellation of pending (paused) executions
* fix(hitl): cancel paused executions directly in DB
Paused HITL executions are idle in the DB — they don't poll Redis or
run in-process, so the existing cancel signals had no effect. The DB
status stayed 'pending', causing the optimistic 'cancelling' update to
revert on refetch.
- Add PauseResumeManager.cancelPausedExecution: atomically sets
paused_executions.status and workflow_execution_logs.status to
'cancelled' inside a FOR UPDATE transaction
- Guard enqueueOrStartResume against resuming a cancelled execution
- Include pausedCancelled in the cancel route success check
* upgrade turbo
* test(hitl): update cancel route tests for paused execution cancellation
- Mock PauseResumeManager.cancelPausedExecution to prevent DB calls
- Add pausedCancelled to all expected response objects
- Add test for HITL paused execution cancellation path
- Add missing auth/authz tests
- Switch to vi.hoisted pattern for all mocks
* fix(hitl): set endedAt when cancelling paused execution
Without endedAt, the logs API running filter (isNull(endedAt)) would
keep cancelled paused executions in the running view indefinitely.
* fix(hitl): emit execution:cancelled event to canvas when cancelling paused execution
Paused HITL executions have no active SSE stream, so the canvas never
received the cancellation event. Now writes execution:cancelled to the
event buffer and updates the stream meta so the canvas reconnect path
picks it up and shows 'Execution Cancelled'.
* fix(hitl): isolate cancelPausedExecution failure from successful cancellation
Wrap cancelPausedExecution in try/catch so a DB error does not mask
a prior successful Redis or in-process cancellation. Also move the
resource-collapse side effect in home.tsx to a useEffect to avoid the
stale closure on the resources array.
* fix(hitl): add .catch() to fire-and-forget event buffer calls in cancel route
* fix(security): resolve ReDoS vulnerability in function execute tag pattern
Simplified regex to eliminate overlapping quantifiers that caused exponential
backtracking on malformed input without closing delimiter.
* fix(security): exclude trailing-dot refs and hoist tag pattern to module level
* fix(security): align tag pattern with codebase standard [^<>]+ pattern
Matches createReferencePattern() from reference-validation.ts used by the
core executor. Invalid refs handled gracefully by resolveBlockReference.
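As a hedged illustration of why the `[^<>]+` class avoids the blowup (the pattern shape is from the commit; the example input is made up):

```typescript
// A negated character class like [^<>]+ is linear: each character either
// matches once or fails once, so there is no overlapping-quantifier
// backtracking even when the closing ">" never appears. Nested-quantifier
// shapes such as (\w+(\s*\w+)*) can backtrack exponentially on such input.
const TAG_PATTERN = /<([^<>]+)>/g

const refs = [...'call <block.output> and <loop.index>'.matchAll(TAG_PATTERN)]
  .map((m) => m[1])
// refs → ['block.output', 'loop.index']
```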
* refactor(security): use createReferencePattern() instead of inline regex
getTrigger() namespaces condition-gated subBlock IDs (e.g. webhookUrlDisplay
→ webhookUrlDisplay_github_release_published). The block card's useMemo was
checking for an exact match on 'webhookUrlDisplay', which never matched.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(jsm): add all Forms API endpoints for two-step form workflow
* removed types
* fix(jsm): handle 204 No Content on action endpoints and reject array answers
* fix(jsm): validate formIds is an array in copy_forms route and block
* fix(jsm): add formTemplateId validation and conditional required on formAnswers
* feat(aws): add IAM and STS integrations
* fix(sts): address PR review comments
- Fix CrowdStrike tags to include "security" (unintended removal)
- Standardize STS tool versions to '1.0.0' (matching IAM convention)
- Add range validation to durationSeconds in Zod schemas
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* icon
* lint
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* Auto-focus input boxes for modals and copilot
* Fix focus in emcn modal
* Fix integrations manager focus
* Change modal tabs to auto focus on first text input
* Auto-focus mothership task chats
---------
Co-authored-by: Theodore Li <theo@sim.ai>
* feat(workspaces): add workspace logo upload
* feat(workspaces): add workspace logo upload
* fix(workspaces): validate logoUrl accepts only paths or HTTPS URLs
* fix(workspaces): add admin authorization, audit log, and posthog event for workspace logo uploads
* lint
* fix: add WebP support and use refs pattern in useProfilePictureUpload
- Add image/webp to ACCEPTED_IMAGE_TYPES in useProfilePictureUpload
- Add image/webp to file input accept attributes in whitelabeling settings
- Refactor useProfilePictureUpload to use refs for onUpload, onError, and
currentImage callbacks, matching the established codebase pattern
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: restore cloudwatch/cloudformation files from staging
These files were accidentally regressed during rebase conflict resolution,
reverting changes from #4027. Restoring to staging versions.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: add workspace_logo_uploaded to PostHogEventMap
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: separate workspaceId ref sync to prevent overwrite on re-render
Split the ref sync useEffect so workspaceIdRef only updates when the
workspaceId prop changes, not when onUpload/onError callbacks get new
references. Prevents setTargetWorkspaceId from being overwritten by
a re-render before the file upload completes.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: use Pick type for workspace dropdown in knowledge header
The shared Workspace type requires ownerId and other fields that aren't
available from the workspaces API response mapping. Use a Pick type to
accurately represent the subset of fields actually constructed.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor: replace raw fetch with useWorkspacesQuery in knowledge header
Remove useState + useEffect + fetch anti-pattern for loading workspaces.
Use useWorkspacesQuery from React Query with inline filter for write/admin
permissions. Eliminates ~30 lines of manual state management, any casts,
and the Pick type workaround.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(atlassian): unify error message extraction across all Jira, JSM, and Confluence routes
Add parseAtlassianErrorMessage() to jira/utils.ts as single source of truth for
parsing all 5 Atlassian error formats. Update 51 proxy routes (18 JSM, 5 Jira,
28 Confluence) to use it instead of hardcoded generic errors. Remove dead
errorExtractor field from 95 Atlassian tool files — the compat loop in
extractErrorMessage() already handles all formats without it. Consolidate
duplicate parseJsmErrorMessage into a re-export from the shared utility.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
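A single-source-of-truth extractor like the one described might look as follows. The commit does not list the five concrete Atlassian formats, so the shapes below (`errorMessages` array, `errors` map, `message`, `errorMessage`) are common Jira/Confluence response shapes used as assumptions, and the function name is taken from the commit:

```typescript
// Hedged sketch: walk the common Atlassian error-body shapes in priority
// order and return the first human-readable message found.
function parseAtlassianErrorMessage(body: unknown, fallback = 'Atlassian API error'): string {
  if (typeof body !== 'object' || body === null) return fallback
  const b = body as Record<string, unknown>
  // Jira-style: { errorMessages: ["..."] }
  if (Array.isArray(b.errorMessages) && b.errorMessages.length > 0) {
    return String(b.errorMessages[0])
  }
  // Field-level: { errors: { summary: "..." } }
  if (b.errors && typeof b.errors === 'object') {
    const values = Object.values(b.errors as Record<string, unknown>)
    if (values.length > 0) return String(values[0])
  }
  // Flat message variants
  if (typeof b.message === 'string' && b.message) return b.message
  if (typeof b.errorMessage === 'string' && b.errorMessage) return b.errorMessage
  return fallback
}
```

Routing every proxy route through one such helper is what makes the per-tool `errorExtractor` fields removable.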
* fix: address PR review comments from Bugbot
- Remove debug logger.info for formAnswers in JSM request route
- Restore user-friendly spaceId error message in Confluence create-page route
- Restore details field in Jira write and update route error responses
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor: remove re-exports from jsm/utils and import directly from source
Remove re-exports of getJiraCloudId, parseAtlassianErrorMessage, and
parseJsmErrorMessage from jsm/utils.ts. Update all 21 JSM routes to
import directly from @/tools/jira/utils per CLAUDE.md import rules.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* regen docs
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(agiloft): add Agiloft CLM integration with token-based auth
Add 12 tools (CRUD, search, select, saved search, attachments, lock),
block, icon, docs, and internal API route for file attachments.
Uses EWLogin/EWLogout for short-lived Bearer tokens — credentials
are never embedded in API request URLs.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(agiloft): address PR review feedback
- Add HTTPS enforcement guard to agiloftLogin to prevent plaintext credential transit
- Add null guard on data.output in attach_file transformResponse
- Change empty AgiloftSavedSearchParams interface to type alias
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(agiloft): add SSRF protection via DNS validation on instanceUrl
Validates user-supplied instanceUrl against private/reserved IP ranges
using validateUrlWithDNS before making any outbound requests. Uses dynamic
import to avoid bundling Node.js dns module in client-side code.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(agiloft): fix SSRF protection to avoid client bundle breakage
Replace dynamic import of input-validation.server (which Turbopack traces
into the client bundle) with client-safe validateExternalUrl in utils.ts.
Add full DNS-level SSRF validation via validateUrlWithDNS in the attach
API route (server-only file). This matches the Okta pattern for
directExecution tools and the textract pattern for API routes.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(agiloft): use DELETE method for EWRemoveAttachment endpoint
The remove_attachment tool was incorrectly using GET instead of DELETE
for the Agiloft EWRemoveAttachment endpoint, which would cause removals
to fail at runtime.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(agiloft): correct HTTP methods and parameter names per Agiloft API docs
- EWRemoveAttachment uses GET, not DELETE (revert incorrect change)
- EWRetrieve uses `filePosition` parameter, not `position`
- EWAttach uses PUT, not POST
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(models): exclude reseller providers from model catalog pages
Reseller providers like OpenRouter, Fireworks, Azure, Vertex, and Bedrock
are aggregators that proxy other providers' models. Their model detail
pages were generating broken links. Filter them out of
MODEL_PROVIDERS_WITH_CATALOGS so they don't generate static pages or
appear as clickable entries in the model directory.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(models): use filtered catalog for JSON-LD structured data
Switch flatModels in page.tsx from MODEL_CATALOG_PROVIDERS to
MODEL_PROVIDERS_WITH_CATALOGS so the Schema.org ItemList excludes
reseller models, matching TOTAL_MODELS and avoiding broken URLs.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(ee): enterprise feature flags, permission group platform controls, audit logs ui, delete account
* fix(settings): improve sidebar skeleton fidelity and fix credit purchase org cache invalidation
- Bump skeleton icon and text from 16/14px to 24px to better match real nav item visual weight
- Add orgId support to usePurchaseCredits so org billing/subscription caches are invalidated on credit purchase, matching the pattern used by useUpgradeSubscription
- Polish ColorInput in whitelabeling settings with auto-prefix and select-on-focus UX
* revert(settings): remove delete account feature
* fix(settings): address PR review — atomic autoAddNewMembers, extract query hook, fix types and signal forwarding
* chore(helm): add CREDENTIAL_SETS_ENABLED to values.yaml
* fix(access-control): dynamic platform category columns, atomic permission group delete
* fix(access-control): restore triggers section in blocks tab
* fix(access-control): merge triggers into tools section in blocks tab
* upgrade turbo
* fix(access-control): fix Select All state when config has stale blacklisted provider IDs
* fix(access-control): derive platform Select All from features list; revert turbo schema version
* fix(access-control): fix blocks Select All check, filter empty platform columns
* revert(settings): restore original skeleton icon and text sizes
* improvement: seo, geo, signup, posthog
* fix(landing): address PR review issues and convention violations
- Fix auth modal race condition: show loading state instead of redirecting when provider status hasn't loaded yet
- Fix auth modal HTTP error caching: reject non-200 responses so they aren't permanently cached
- Replace <img> with next/image <Image> in auth modal
- Use cn() instead of template literal class concatenation in hero, footer-cta
- Remove commented-out dead code in footer, landing, sitemap
- Remove unused arrow property from FooterItem interface
- Convert relative imports to absolute in integrations/[slug]/page
- Remove no-op sanitizedName variable in signup form
- Remove unnecessary async from llms-full.txt route
- Remove extraneous non-TSDoc comment in auth modal
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* style(landing): apply linter formatting fixes
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(landing): second pass — fix remaining code quality issues
- auth-modal: add @sim/logger, log social sign-in errors instead of swallowing silently
- auth-modal: extract duplicated social button classes into SOCIAL_BTN constant
- auth-modal: remove unused isProduction from ProviderStatus interface
- auth-modal: memoize getBrandConfig() call
- footer: remove stale arrow destructuring left after interface cleanup, use cn() throughout
- footer-cta: replace inline styles on submit button with Tailwind classes via cn()
- footer-cta: replace caretColor inline style with caret-white utility
- templates: fix incorrect section value 'landing_preview' → 'templates' for PostHog tracking
- events: add 'templates' to landing_cta_clicked section union
- integrations: replace "canvas" with "workflow builder" per constitution rules
- llms-full: replace "canvas" terminology with "visual builder"/"workflow builder"
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(landing): point Mothership and Workflows footer links to docs root
These docs pages don't exist yet — link to docs.sim.ai until they are published.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(landing): complete rebrand in blog fallback description
Remove "workflows" from the non-tagged blog meta description to
align with the AI workspace rebrand across the rest of the PR.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(landing): strip isProduction from provider response and handle late-resolve redirect
- Destructure only githubAvailable/googleAvailable from getOAuthProviderStatus
so isProduction is not leaked to unauthenticated callers.
- Add useEffect to redirect away from the modal if provider status resolves
after the modal is already open and no social providers are configured.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(landing): align auth modal with login/signup page logic
- Add SSO button when NEXT_PUBLIC_SSO_ENABLED is set
- Gate "Continue with email" behind EMAIL_PASSWORD_SIGNUP_ENABLED
- Expose registrationDisabled from /api/auth/providers and hide
the "Sign up" toggle when registration is disabled
- Simplify skip-modal logic: redirect to full page when no social
providers or SSO are available (hasModalContent)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(landing): force login view when registration is disabled
When a CTA passes defaultView='signup' but registration is disabled,
the modal now opens in login mode instead of showing "Create free
account" with social buttons that would fail on the backend.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
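The guard can be sketched as a tiny pure function (names are assumptions, not the actual component API): a CTA may request the signup view, but when registration is disabled the modal must open in login mode so users never see social buttons that would fail server-side.

```typescript
type AuthView = 'login' | 'signup'

// Force login mode whenever registration is disabled, regardless of the
// view a CTA requested.
function resolveInitialView(defaultView: AuthView, registrationDisabled: boolean): AuthView {
  return registrationDisabled ? 'login' : defaultView
}
```

The later "loads late" fix re-applies this same rule in a useEffect once the provider status resolves.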
* lint
* fix(landing): correct signup view when registrationDisabled loads late
When the user opens the modal before providerStatus resolves and
registrationDisabled comes back true, the view was stuck on 'signup'.
Now the late-resolve useEffect also forces the view to 'login'.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(landing): add click tracking to integration page CTAs
Create IntegrationCtaButton client component that wraps AuthModal
and fires trackLandingCta on click, matching the pattern used by
every other landing section CTA.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(landing): prevent mobile auth modal from unmounting on open
Remove setMobileMenuOpen(false) from the mobile AuthModal button onClick
handlers. Closing the mobile menu unmounted the AuthModal before it
could open; the modal overlay or page redirect makes explicitly closing
the menu unnecessary.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Waleed Latif <walif6@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(ee): add enterprise audit logs settings page with server-side search
Add a new audit logs page under enterprise settings that displays all
actions captured via recordAudit. Includes server-side search, resource
type filtering, date range selection, and cursor-based pagination.
- Add internal API route (app/api/audit-logs) with session auth
- Extract shared query logic (buildFilterConditions, buildOrgScopeCondition,
queryAuditLogs) into app/api/v1/audit-logs/query.ts
- Refactor v1 and admin audit log routes to use shared query module
- Add React Query hook with useInfiniteQuery and cursor pagination
- Add audit logs UI with debounced search, combobox filters, expandable rows
- Gate behind requiresHosted + requiresEnterprise navigation flags
- Place all enterprise audit log code in ee/audit-logs/
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* lint
* fix(ee): fix build error and address PR review comments
- Fix import path: @/lib/utils → @/lib/core/utils/cn
- Guard against empty orgMemberIds array in buildOrgScopeCondition
- Skip debounce effect on mount when search is already synced
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
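The empty-array guard matters because an `IN` clause over an empty list is either invalid SQL or matches everything, depending on the builder. A minimal sketch (the real code builds a Drizzle condition; this string form and the function shape are assumptions):

```typescript
// Short-circuit to a never-matching condition when the org has no
// members, instead of emitting an empty IN list.
function buildOrgScopeCondition(orgMemberIds: string[]): string {
  if (orgMemberIds.length === 0) return 'FALSE'
  // Quote and escape each id for the illustrative raw-SQL form.
  const list = orgMemberIds.map((id) => `'${id.replace(/'/g, "''")}'`).join(', ')
  return `user_id IN (${list})`
}
```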
* lint
* fix(ee): fix type error with unknown metadata in JSX expression
Use ternary instead of && chain to prevent unknown type from being
returned as ReactNode.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(ee): align skeleton filter width with actual component layout
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* lint
* feat(audit): add audit logging for passwords, credentials, and schedules
- Add PASSWORD_RESET_REQUESTED audit on forget-password with user lookup
- Add CREDENTIAL_CREATED/UPDATED/DELETED audit on credential CRUD routes
with metadata (credentialType, providerId, updatedFields, envKey)
- Add SCHEDULE_CREATED audit on schedule creation with cron/timezone metadata
- Fix SCHEDULE_DELETED (was incorrectly using SCHEDULE_UPDATED for deletes)
- Enhance existing schedule update/disable/reactivate audit with structured
metadata (operation, updatedFields, sourceType, previousStatus)
- Add CREDENTIAL resource type and Credential filter option to audit logs UI
- Enhance password reset completed description with user email
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(audit): align metadata with established recordAudit patterns
- Add actorName/actorEmail to all new credential and schedule audit calls
to match the established pattern (e.g., api-keys, byok-keys, knowledge)
- Add resourceId and resourceName to forget-password audit call
- Enhance forget-password description with user email
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(testing): sync audit mock with new AuditAction and AuditResourceType entries
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor(audit-logs): derive resource type filter from AuditResourceType
Instead of maintaining a separate hardcoded list, the filter dropdown
now derives its options directly from the AuditResourceType const object.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat(audit): enrich all recordAudit calls with structured metadata
- Move resource type filter options to ee/audit-logs/constants.ts
(derived from AuditResourceType, no separate list to maintain)
- Remove export from internal cursor helpers in query.ts
- Add 5 new AuditAction entries: BYOK_KEY_UPDATED, ENVIRONMENT_DELETED,
INVITATION_RESENT, WORKSPACE_UPDATED, ORG_INVITATION_RESENT
- Enrich ~80 recordAudit calls across the codebase with structured
metadata (knowledge bases, connectors, documents, workspaces, members,
invitations, workflows, deployments, templates, MCP servers, credential
sets, organizations, permission groups, files, tables, notifications,
copilot operations)
- Sync audit mock with all new entries
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(audit): remove redundant metadata fields duplicating top-level audit fields
Remove metadata entries that duplicate resourceName, workspaceId, or
other top-level recordAudit fields. Also remove noisy fileNames arrays
from bulk document upload audits (kept fileCount).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(audit): split audit types from server-only log module
Extract AuditAction, AuditResourceType, and their types into
lib/audit/types.ts (client-safe, no @sim/db dependency). The
server-only recordAudit stays in log.ts and re-exports the types
for backwards compatibility. constants.ts now imports from types.ts
directly, breaking the postgres -> tls client bundle chain.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(audit): escape LIKE wildcards in audit log search query
Escape %, _, and \ characters in the search parameter before embedding
in the LIKE pattern to prevent unintended broad matches.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
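The escaping step can be sketched as follows (function name is hypothetical): escape the escape character first so already-escaped sequences are not double-processed, then neutralize the two LIKE wildcards.

```typescript
// Escape LIKE wildcards so user input like "50%" matches literally
// rather than acting as a pattern. Backslash must be escaped first.
function escapeLike(input: string): string {
  return input.replace(/\\/g, '\\\\').replace(/%/g, '\\%').replace(/_/g, '\\_')
}
```

The escaped value is then safe to embed in a pattern such as `` `%${escaped}%` `` under the default `ESCAPE '\'` semantics.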
* fix(audit): use actual deletedCount in bulk API key revoke description
The description was using keys.length (requested count) instead of
deletedCount (actual count), which could differ if some keys didn't
exist.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(audit-logs): fix OAuth label displaying as "Oauth" in filter dropdown
The ACRONYMS set stored 'OAuth' but the lookup used toUpperCase(),
producing 'OAUTH', which never matched. Now all acronyms are stored
uppercase and a display override map handles special casing like OAuth.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
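A sketch of the fixed lookup (the acronym list beyond OAUTH and the function name are assumptions): store everything uppercase so the `toUpperCase()` key always matches, and keep mixed-case spellings in a separate override map.

```typescript
const ACRONYMS = new Set(['API', 'OAUTH', 'SSO'])
const DISPLAY_OVERRIDES: Record<string, string> = { OAUTH: 'OAuth' }

// Title-case a snake_case resource type, preserving acronym casing.
function formatResourceTypeLabel(resourceType: string): string {
  return resourceType
    .split('_')
    .map((word) => {
      const upper = word.toUpperCase()
      if (ACRONYMS.has(upper)) return DISPLAY_OVERRIDES[upper] ?? upper
      return word.charAt(0).toUpperCase() + word.slice(1).toLowerCase()
    })
    .join(' ')
}
```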
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(trigger): auto-detect header row and rename lastKnownRowCount to lastIndexChecked
- Replace hardcoded !1:1 header fetch with detectHeaderRow(), which scans
the first 10 rows and returns the first non-empty row as headers. This
fixes row: null / headers: [] when a sheet has blank rows or a title row
above the actual column headers (e.g. headers in row 3).
- Rename lastKnownRowCount → lastIndexChecked in GoogleSheetsWebhookConfig
and all usage sites to clarify that the value is a row index pointer, not
a total count.
- Remove config parameter from processRows() since it was unused after the
includeHeaders flag was removed.
* fix(trigger): combine sheet state fetch, skip header/blank rows from data emission
- Replace separate getDataRowCount() + detectHeaderRow() with a single
fetchSheetState() call that returns rowCount, headers, and headerRowIndex
from one A:Z fetch. Saves one Sheets API round-trip per poll cycle when
new rows are detected.
- Use headerRowIndex to compute adjustedStartRow, preventing the header row
(and any blank rows above it) from being emitted as data events when
lastIndexChecked was seeded from an empty sheet.
- Handle the edge case where the entire batch falls within the header/blank
window by advancing the pointer and returning early without fetching rows.
- Skip empty rows (row.length === 0) in processRows rather than firing a
workflow run with no meaningful data.
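The header-skip arithmetic above reduces to two small helpers (1-based row numbers and these names are assumptions): data can only start after the detected header row, and a batch that falls entirely inside the header/blank window should advance the pointer without fetching rows.

```typescript
// Clamp the poll start past the detected header row.
function computeAdjustedStartRow(startRow: number, headerRowIndex: number): number {
  return Math.max(startRow, headerRowIndex + 1)
}

// True when the whole batch lies within the header/blank window, i.e.
// there is nothing to emit and the caller should return early.
function batchIsOnlyHeaderWindow(adjustedStartRow: number, endRow: number): boolean {
  return adjustedStartRow > endRow
}
```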
* fix(trigger): preserve lastModifiedTime when remaining rows exist after header skip
When all rows in a batch fall within the header/blank window (adjustedStartRow
> endRow), the early return was unconditionally updating lastModifiedTime to the
current value. If there were additional rows beyond the batch cap, the next
Drive pre-check would see an unchanged modifiedTime and skip polling entirely,
leaving those rows unprocessed. Mirror the hasRemainingOrFailed pattern from the
normal processing path.
* chore(trigger): remove verbose inline comments from google-sheets poller
* fix(trigger): revert to full-width A:Z fetch for correct row count and consistent column scope
* fix(trigger): don't count skipped empty rows as processed
* chore(triggers): deprecate trigger-save subblock
Remove the defunct triggerSave subblock from all 102 trigger definitions,
the SubBlockType union, SYSTEM_SUBBLOCK_IDS, tool params, and command
templates. Retain the backwards-compat filter in getTrigger() for any
legacy stored data.
* fix(triggers): remove leftover no-op blocks.push() in linear utils
* chore(triggers): remove orphaned triggerId property and stale comments
* feat(knowledge): add token, sentence, recursive, and regex chunkers
* fix(chunkers): standardize token estimation and use emcn dropdown
- Refactor all existing chunkers (Text, JsonYaml, StructuredData, Docs) to use shared utils
- Fix inconsistent token estimation (JsonYaml used tiktoken, StructuredData used /3 ratio)
- Fix DocsChunker operator precedence bug and hard-coded 300-token limit
- Fix JsonYamlChunker isStructuredData false positive on plain strings
- Add MAX_DEPTH recursion guard to JsonYamlChunker
- Replace @/components/ui/select with emcn DropdownMenu in strategy selector
* fix(chunkers): address research audit findings
- Expand RecursiveChunker recipes: markdown adds horizontal rules, code
fences, blockquotes; code adds const/let/var/if/for/while/switch/return
- RecursiveChunker fallback uses splitAtWordBoundaries instead of char slicing
- RegexChunker ReDoS test uses adversarial strings (repeated chars, spaces)
- SentenceChunker abbreviation list adds St/Rev/Gen/No/Fig/Vol/months
and single-capital-letter lookbehind
- Add overlap < maxSize validation in Zod schema and UI form
- Add pattern max length (500) validation in Zod schema
- Fix StructuredDataChunker footer grammar
* fix(chunkers): fix remaining audit issues across all chunkers
- DocsChunker: extract headers from cleaned content (not raw markdown)
to fix position mismatch between header positions and chunk positions
- DocsChunker: strip export statements and JSX expressions in cleanContent
- DocsChunker: fix table merge dedup using equality instead of includes
- JsonYamlChunker: preserve path breadcrumbs when nested value fits in
one chunk, matching LangChain RecursiveJsonSplitter behavior
- StructuredDataChunker: detect 2-column CSV (lowered threshold from >2
to >=1) and use 20% relative tolerance instead of absolute +/-2
- TokenChunker: use sliding window overlap (matching LangChain/Chonkie)
where chunks stay within chunkSize instead of exceeding it
- utils: splitAtWordBoundaries accepts optional stepChars for sliding
window overlap; addOverlap uses newline join instead of space
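The sliding-window behavior can be sketched with a character-based stand-in for token estimation (the real chunker works on estimated tokens; this function is an illustration, not the actual implementation): each chunk stays within `chunkSize`, and consecutive chunks share `overlap` characters by advancing the window by `chunkSize - overlap`.

```typescript
// Sliding-window chunking: chunks never exceed chunkSize, and the
// Math.max(1, ...) guard prevents an infinite loop if overlap >= chunkSize.
function slidingWindowChunks(text: string, chunkSize: number, overlap: number): string[] {
  const step = Math.max(1, chunkSize - overlap)
  const chunks: string[] = []
  for (let pos = 0; pos < text.length; pos += step) {
    chunks.push(text.slice(pos, pos + chunkSize))
    if (pos + chunkSize >= text.length) break
  }
  return chunks
}
```

This is the same overlap model LangChain and Chonkie use, versus the earlier approach of prepending overlap text and letting chunks exceed the size limit.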
* chore(chunkers): lint formatting
* updated styling
* fix(chunkers): audit fixes and comprehensive tests
- Fix SentenceChunker regex: lookbehinds now include the period to correctly handle abbreviations (Mr., Dr., etc.), initials (J.K.), and decimals
- Fix RegexChunker ReDoS: reset lastIndex between adversarial test iterations, add poisoned-suffix test strings
- Fix DocsChunker: skip code blocks during table boundary detection to prevent false positives from pipe characters
- Fix JsonYamlChunker: oversized primitive leaf values now fall back to text chunking instead of emitting a single chunk
- Fix TokenChunker: pass 0 to buildChunks for overlap metadata since sliding window handles overlap inherently
- Add defensive guard in splitAtWordBoundaries to prevent infinite loops if step is 0
- Add tests for utils, TokenChunker, SentenceChunker, RecursiveChunker, RegexChunker (236 total tests, 0 failures)
- Fix existing test expectations for updated footer format and isStructuredData behavior
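The lookbehind fix for sentence boundaries can be illustrated as below. This is a hedged sketch: the abbreviation list is truncated relative to the real chunker's, and the exact regex is an assumption. Including the period inside the negative lookbehinds is what lets "Dr.", single-letter initials, and decimals avoid ending a sentence (decimals are safe because the split requires whitespace after the period).

```typescript
// Split on whitespace that follows sentence-ending punctuation, unless
// the punctuation belongs to a known abbreviation or a single initial.
const SENTENCE_BOUNDARY =
  /(?<=[.!?])(?<!\b(?:Mr|Mrs|Ms|Dr|St|Rev|Gen|No|Fig|Vol)\.)(?<![A-Z]\.)\s+/

function splitSentences(text: string): string[] {
  return text.split(SENTENCE_BOUNDARY).filter((s) => s.length > 0)
}
```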
* chore(chunkers): remove unnecessary comments and dead code
Strip 445 lines of redundant TSDoc, math calculation comments,
implementation rationale notes, and assertion-restating comments
across all chunker source and test files.
* fix(chunkers): address PR review comments
- Fix regex fallback path: use sliding window for overlap instead of
passing chunkOverlap to buildChunks without prepended overlap text
- Fix misleading strategy label: "Text (hierarchical splitting)" →
"Text (word boundary splitting)"
* fix(chunkers): use consistent overlap pattern in regex fallback
Use addOverlap + buildChunks(chunks, overlap) in the regex fallback
path to match the main path and all other chunkers (TextChunker,
RecursiveChunker). The sliding window approach was inconsistent.
* fix(chunkers): prevent content loss in word boundary splitting
When splitAtWordBoundaries snaps end back to a word boundary, advance
pos from end (not pos + step) in non-overlapping mode. The step-based
advancement is preserved for the sliding window case (TokenChunker).
* fix(chunkers): restore structured data token ratio and overlap joiner
- Restore /3 token estimation for StructuredDataChunker (structured data
is denser than prose, ~3 chars/token vs ~4)
- Change addOverlap joiner from \n to space to match original TextChunker
behavior
* lint
* fix(chunkers): fall back to character-level overlap in sentence chunker
When no complete sentence fits within the overlap budget,
fall back to character-level word-boundary overlap from the
previous group's text. This ensures buildChunks metadata is
always correct.
* fix(chunkers): fix log message and add missing month abbreviations
- Fix regex fallback log: "character splitting" → "word-boundary splitting"
- Add Jun and Jul to sentence chunker abbreviation list
* lint
* fix(chunkers): restore structured data detection threshold to > 2
avgCount >= 1 was too permissive — prose with consistent comma usage
would be misclassified as CSV. Restore original > 2 threshold while
keeping the improved proportional tolerance.
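The restored heuristic can be sketched as follows (the function name and surrounding logic are assumptions; the > 2 threshold and 20% relative tolerance are from the commits): text is treated as delimited data only when lines carry a consistently high delimiter count.

```typescript
// Average delimiter count per line must exceed 2, and every line must be
// within 20% of that average, for the text to count as delimited data.
function looksLikeDelimited(lines: string[], delimiter = ','): boolean {
  if (lines.length === 0) return false
  const counts = lines.map((line) => line.split(delimiter).length - 1)
  const avg = counts.reduce((a, b) => a + b, 0) / counts.length
  if (avg <= 2) return false
  return counts.every((c) => Math.abs(c - avg) <= avg * 0.2)
}
```

The relative tolerance replaces the earlier absolute ±2, which was too loose for narrow tables and too strict for wide ones.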
* fix(chunkers): pass chunkOverlap to buildChunks in TokenChunker
* fix(chunkers): restore separator-as-joiner pattern in splitRecursively
Separator was unconditionally prepended to parts after the first,
leaving leading punctuation on chunks after a boundary reset.
* feat(knowledge): add JSONL file support for knowledge base uploads
Parses JSON Lines files by splitting on newlines and converting to a
JSON array, which then flows through the existing JsonYamlChunker.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
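The JSONL conversion described above is small enough to sketch in full (function name is hypothetical): split on newlines, drop blanks, parse each line, and hand the resulting array to the existing JSON chunking path.

```typescript
// Convert JSON Lines content into a plain JSON array. Throws on any
// malformed line, surfacing the parse error to the upload flow.
function jsonlToArray(content: string): unknown[] {
  return content
    .split(/\r?\n/)
    .map((line) => line.trim())
    .filter((line) => line.length > 0)
    .map((line) => JSON.parse(line))
}
```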
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* improvement(integrations, models): ui/ux
* fix(models, integrations): dedup ChevronArrow/provider colors, fix UTC date rendering
- Extract PROVIDER_COLORS and getProviderColor to model-colors.ts to eliminate
identical definitions in model-comparison-charts and model-timeline-chart
- Remove duplicate private ChevronArrow from integration-card; import the
exported one from model-primitives instead
- Add timeZone: 'UTC' to formatShortDate so ISO date-only strings (parsed as
UTC midnight) render the correct calendar day in all timezones
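The timezone fix above comes down to pinning the formatter to UTC. A sketch (formatShortDate is the name from the commit; the locale and option set are assumptions): an ISO date-only string like "2024-03-01" parses as UTC midnight, so formatting it in a local timezone west of UTC renders the previous day.

```typescript
// Format an ISO date-only string as e.g. "Mar 1, 2024", pinned to UTC so
// the calendar day is stable in every timezone.
function formatShortDate(isoDate: string): string {
  return new Intl.DateTimeFormat('en-US', {
    month: 'short',
    day: 'numeric',
    year: 'numeric',
    timeZone: 'UTC',
  }).format(new Date(isoDate))
}
```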
* refactor(models): rename model-colors.ts to consts.ts
* improvement(models): derive provider colors/resellers from definitions, reorient FAQs to agent builder
Dynamic data:
- Add `color` and `isReseller` fields to ProviderDefinition interface
- Move brand colors for all 10 providers into their definitions
- Mark 6 reseller providers (Azure, Bedrock, Vertex, OpenRouter, Fireworks)
- consts.ts now derives color map from MODEL_CATALOG_PROVIDERS
- model-comparison-charts derives RESELLER_PROVIDERS from catalog
- Fix deepseek name: Deepseek → DeepSeek; remove now-redundant
PROVIDER_NAME_OVERRIDES and getProviderDisplayName from utils
- Add color/isReseller fields to CatalogProvider; clean up duplicate
providerDisplayName in searchText array
FAQs:
- Replace all 4 main-page FAQs with 5 agent-builder-oriented ones
covering model selection, context windows, pricing, tool use, and
how to use models in a Sim agent workflow
- buildProviderFaqs: add conditional tool use FAQ per provider
- buildModelFaqs: add bestFor FAQ (conditional on field presence);
improve context window answer to explain agent implications;
tighten capabilities answer wording
* chore(models): remove model-colors.ts (superseded by consts.ts)
* update footer
---------
Co-authored-by: waleed <walif6@gmail.com>
* fix(trigger): fix polling trigger config defaults, row count, clock-skew, and stale config clearing
* fix(deploy): track first-pass fills to prevent stale baseConfig bypassing required-field validation
Use a dedicated `filledSubBlockIds` Set populated during the first pass so the second-pass skip guard is based solely on live `getConfigValue` results, not on stale entries spread from `baseConfig` (`triggerConfig`).
* fix(trigger): prevent calendar cursor regression when all events are filtered client-side
* fix(ui): support Tab key to select items in tag, env-var, and resource dropdowns
* fix(ui): guard Tab selection against Shift+Tab and undefined index
The Forms API has a different base URL for OAuth vs Basic Auth.
Per Atlassian support, OAuth requires the /ex/jira/{cloudId}/forms
pattern, not /jira/forms/cloud/{cloudId} which only works with
Basic Auth. This was causing 401 Unauthorized errors.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(trigger): show selector display names on canvas for trigger file/sheet selectors
* fix(trigger): use isNonEmptyValue in canonical member scan to match visibility contract
* feat(trigger): add Google Sheets, Drive, and Calendar polling triggers
Add polling triggers for Google Sheets (new rows), Google Drive (file
changes via changes.list API), and Google Calendar (event updates via
updatedMin). Each includes OAuth credential support, configurable
filters (event type, MIME type, folder, search term, render options),
idempotency, and first-poll seeding. Wire triggers into block configs
and regenerate integrations.json. Update add-trigger skill with polling
instructions and versioned block wiring guidance.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(polling): address PR review feedback for Google polling triggers
- Fix Drive cursor stall: use nextPageToken as resume point when
breaking early from pagination instead of re-using the original token
- Eliminate redundant Drive API call in Sheets poller by returning
modifiedTime from the pre-check function
- Add 403/429 rate-limit handling to Sheets API calls matching the
Calendar handler pattern
- Remove unused changeType field from DriveChangeEntry interface
- Rename triggers/google_drive to triggers/google-drive for consistency
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(polling): fix Drive pre-check never activating in Sheets poller
isDriveFileUnchanged short-circuited when lastModifiedTime was
undefined, never calling the Drive API — so currentModifiedTime
was never populated, creating a permanent chicken-and-egg loop.
Now always calls the Drive API and returns the modifiedTime
regardless of whether there's a previous value to compare against.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* chore(lint): fix import ordering in triggers registry
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(polling): address PR review feedback for Google polling handlers
- Fix fetchHeaderRow to throw on 403/429 rate limits instead of silently
returning empty headers (prevents rows from being processed without
headers and lastKnownRowCount from advancing past them permanently)
- Fix Drive pagination to avoid advancing resume cursor past sliced
changes (prevents permanent change loss when allChanges > maxFiles)
- Remove unused logger import from Google Drive trigger config
* fix(polling): prevent data loss on partial row failures and harden idempotency key
- Sheets: only advance lastKnownRowCount by processedCount when there
are failures, so failed rows are retried on the next poll cycle
(idempotency deduplicates already-processed rows on re-fetch)
- Drive: add fallback for change.time in idempotency key to prevent
key collisions if the field is ever absent from the API response
* fix(polling): remove unused variable and preserve lastModifiedTime on Drive API failure
- Remove unused `now` variable from Google Drive polling handler
- Preserve stored lastModifiedTime when Drive API pre-check fails
(previously wrote undefined, disabling the optimization until the
next successful Drive API call)
* fix(polling): don't advance state when all events fail across sheets, calendar, drive handlers
* fix(polling): retry failed idempotency keys, fix drive cursor overshoot, fix calendar inclusive updatedMin
* fix(polling): revert calendar timestamp on any failure, not just all-fail
* fix(polling): revert drive cursor on any failure, not just all-fail
* feat(triggers): add canonical selector toggle to google polling triggers
- Add 'trigger-advanced' mode to SubBlockConfig so canonical pairs work in trigger mode
- Fix buildCanonicalIndex: trigger-mode subblocks don't overwrite non-trigger basicId, deduplicate advancedIds from block spreads
- Update editor, subblock layout, and trigger config aggregation to include trigger-advanced subblocks
- Replace dropdown+fetchOptions in Calendar/Sheets/Drive pollers with file-selector (basic) + short-input (advanced) canonical pairs
- Add canonicalParamId: 'oauthCredential' to triggerCredentials for selector context resolution
- Update polling handlers to read canonical fallbacks (calendarId||manualCalendarId, etc.)
* test(blocks): handle trigger-advanced mode in canonical validation tests
* fix(triggers): handle trigger-advanced mode in deploy, preview, params, and copilot
* fix(polling): use position-only idempotency key for sheets rows
* fix(polling): don't advance calendar timestamp to client clock on empty poll
* fix(polling): remove extraneous comment from calendar poller
* fix(polling): drive cursor stall on full page, calendar latestUpdated past filtered events
* fix(polling): advance calendar cursor past fully-filtered event batches
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(tools): add fields parameter to Jira search block
Expose the Jira REST API `fields` parameter on the search operation,
allowing users to specify which fields to return per issue. This reduces
response payload size by 10-15x, preventing 10MB workflow state limit
errors for users with high ticket volume.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* style(tools): remove redundant type annotation in fields map callback
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tools): restore type annotation for implicit any in params callback
The params object is untyped, so TypeScript cannot infer the string
element type from .split() — the explicit annotation is required.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add the generated human-in-the-loop group to the docs navigation
and create meta.json listing all HITL operation IDs so endpoints
render in the API reference.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(log): log cleanup sql query
* perf(log): use startedAt index for cleanup query filter
Switch cleanup WHERE clause from createdAt to startedAt to leverage
the existing composite index (workspaceId, startedAt), converting a
full table scan to an index range scan. Also remove explanatory comment.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Theodore Li <theo@sim.ai>
Co-authored-by: Waleed Latif <walif6@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Update parseJsmErrorMessage to extract errors from all Atlassian API
response formats: errorMessage (JSM), errorMessages array (Jira),
errors[].title RFC 7807 (Confluence/Forms), field-level errors object,
and message (gateway). Remove redundant prefix wrapping so the raw
error message surfaces cleanly through the extractor.
* fix(tools): add Atlassian error extractor to all Jira, JSM, and Confluence tools
Wire up the existing `atlassian-errors` error extractor to all 95 Atlassian
tool configs so the executor surfaces meaningful error messages instead of
generic status codes. Also fix the extractor itself to handle all three
Atlassian error response formats: `errorMessage` (JSM), `errorMessages`
array (Jira), and `message` (Confluence).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* chore(tools): lint formatting fix for error extractor
* fix(tools): handle all Atlassian error formats in error extractor
Add RFC 7807 errors[].title format (Confluence v2, Forms/ProForma API)
and Jira field-level errors object to the atlassian-errors extractor.
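The extraction order above can be sketched as a standalone function. This is an illustrative approximation, not the actual `atlassian-errors` extractor: the function name and exact response typing here are assumptions.

```typescript
// Hypothetical sketch of the Atlassian error-format handling described above.
type AtlassianErrorBody = {
  errorMessage?: string // JSM
  errorMessages?: string[] // Jira
  errors?: Array<{ title?: string }> | Record<string, string> // RFC 7807 / field-level
  message?: string // Confluence / gateway
}

function extractAtlassianError(body: AtlassianErrorBody): string | undefined {
  if (typeof body.errorMessage === 'string') return body.errorMessage
  if (Array.isArray(body.errorMessages) && body.errorMessages.length > 0) {
    return body.errorMessages.join('; ')
  }
  if (Array.isArray(body.errors)) {
    // RFC 7807 style: errors[].title (Confluence v2, Forms/ProForma)
    const titles = body.errors.map((e) => e.title).filter(Boolean)
    if (titles.length > 0) return titles.join('; ')
  } else if (body.errors && typeof body.errors === 'object') {
    // Jira field-level errors: { fieldName: "message" }
    const fields = Object.entries(body.errors).map(([field, msg]) => `${field}: ${msg}`)
    if (fields.length > 0) return fields.join('; ')
  }
  return typeof body.message === 'string' ? body.message : undefined
}
```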
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* improvement(ci): parallelize Docker builds with tests and remove duplicate turbo install
* fix(test): use SecureFetchResponse shape in mock instead of standard Response
* chore(ci): bump actions/checkout to v6 and dorny/paths-filter to v4
* fix(ci): mock secureFetchWithPinnedIP in tools tests to prevent timeouts
* lint
* docs(openapi): add Human in the Loop API endpoints
Add HITL pause/resume endpoints to the OpenAPI spec covering
the full workflow pause lifecycle: listing paused executions,
inspecting pause details, and resuming with input.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* docs(openapi): add 403 and 500 responses to HITL endpoints
Address PR review feedback: add missing 403 Forbidden response
to all HITL endpoints (from validateWorkflowAccess), and 500
responses to resume endpoints that have explicit error paths.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* lint
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(trigger): add ServiceNow webhook triggers
* fix(trigger): add webhook secret field and remove non-TSDoc comment
Add webhookSecret field to ServiceNow triggers (matching Salesforce pattern)
so users are prompted to protect the webhook endpoint. Update setup
instructions to include Authorization header in the Business Rule example.
Remove non-TSDoc inline comment in the block config.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat(trigger): add ServiceNow provider handler with event matching
Add dedicated ServiceNow webhook provider handler with:
- verifyAuth: validates webhookSecret via Bearer token or X-Sim-Webhook-Secret
- matchEvent: filters events by trigger type and table name using
isServiceNowEventMatch utility (matching Salesforce/GitHub pattern)
The event matcher handles incident created/updated and change request
created/updated triggers with table name enforcement and event type
normalization. The generic webhook trigger passes through all events
but still respects the optional table name filter.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* lint
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(jsm): add ProForma/JSM Forms discovery tools
Add three new tools for discovering and inspecting JSM Forms (ProForma) templates
and their structure, enabling dynamic form-based workflows:
- jsm_get_form_templates: List form templates in a project with request type bindings
- jsm_get_form_structure: Get full form design (questions, layout, conditions, sections)
- jsm_get_issue_forms: List forms attached to an issue with submission status
All endpoints validated against the official Atlassian Forms REST API OpenAPI spec.
Uses the Forms Cloud API base URL (jira/forms/cloud/{cloudId}) with X-ExperimentalApi header.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(jsm): add input validation and extract shared error parser
- Add validateJiraIssueKey for projectIdOrKey in templates and structure routes
- Add validateJiraCloudId for formId (UUID) in structure route
- Extract parseJsmErrorMessage to shared utils.ts (was duplicated across 3 routes)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* chore(jsm): remove unused FORM_QUESTION_PROPERTIES constant
Dead code — the get_form_structure tool passes the raw design object
through as JSON, so this output constant had no consumers.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* improvement(polling): fix correctness and efficiency across all polling handlers
- Gmail: paginate history API, add historyTypes filter, differentiate 403/429,
fetch fresh historyId on fallback to break 404 retry loop
- Outlook: follow @odata.nextLink pagination, use fetchWithRetry for all Graph
calls, fix $top alignment, skip folder filter on partial resolution failure,
remove Content-Type from GET requests
- RSS: add conditional GET (ETag/If-None-Match), raise GUID cap to 500, fix 304
ETag capture per RFC 9111, align GUID tracking with idempotency fallback key
- IMAP: single connection reuse, UIDVALIDITY tracking per mailbox, advance UID
only on successful fetch, fix messageFlagsAdd range type, remove cross-mailbox
legacy UID fallback
- Dispatch polling via trigger.dev task with per-provider concurrency key;
fall back to synchronous Redis-locked polling for self-hosted
* fix(rss): align idempotency key GUID fallback with tracking/filter guard
* removed comments
* fix(imap): clear stale UID when UIDVALIDITY changes during state merge
* fix(rss): skip items with no identifiable GUID to avoid idempotency key collisions
* fix(schedules): convert dynamic import of getWorkflowById to static import
* fix(imap): preserve fresh UID after UIDVALIDITY reset in state merge
* improvement(polling): remove trigger.dev dispatch, use synchronous Redis-locked polling
* fix(polling): decouple outlook page size from total email cap so pagination works
* feat(block): Add cloudwatch publish operation
* fix(integrations): validate and fix cloudwatch, cloudformation, athena conventions
- Update tool version strings from '1.0' to '1.0.0' across all three integrations
- Add missing `export * from './types'` barrel re-exports (cloudwatch, cloudformation)
- Add docsLink, wandConfig timestamps, mode: 'advanced' on optional fields (cloudwatch)
- Add dropdown defaults, ZodError handling, docs intro section (cloudwatch)
- Add mode: 'advanced' on limit field (cloudformation)
- Alphabetize registry entries (cloudwatch, cloudformation)
- Fix athena docs maxResults range (1-999)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(cloudwatch): complete put_metric_data unit dropdown, add missing outputs, fix JSON error handling
- Add all 27 valid CloudWatch StandardUnit values to metricUnit dropdown (was 13)
- Add missing block outputs for put_metric_data: success, namespace, metricName, value, unit
- Add try-catch around dimensions JSON.parse in put-metric-data route for proper 400 errors
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(cloudwatch): fix DescribeAlarms returning only MetricAlarm when "All Types" selected
Per AWS docs, omitting AlarmTypes returns only MetricAlarm. Now explicitly
sends both MetricAlarm and CompositeAlarm when no filter is selected.
Also fix dimensions JSON parse errors returning 500 instead of 400 in
get-metric-statistics route.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(cloudwatch): validate dimensions JSON at Zod schema level
Move dimensions validation from runtime try-catch to Zod refinement,
catching malformed JSON and arrays at schema validation time (400)
instead of runtime (500). Also rejects JSON arrays that would produce
meaningless numeric dimension names.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(cloudwatch): reject non-numeric metricValue instead of silently publishing 0
Add NaN guard in block config and .finite() refinement in Zod schema
so "abc" → NaN is caught at both layers instead of coercing to 0.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(cloudwatch): use Number.isFinite to also reject Infinity in block config
Aligns block-level validation with route's Zod .finite() refinement so
Infinity/-Infinity are caught at the block config layer, not just the API.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
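The two validation layers above (dimensions must be a JSON object, metric value must be finite) can be restated as plain predicates. The real code expresses the first as a Zod refinement; these standalone helpers are a sketch of the same logic.

```typescript
// Assumed restatement of the CloudWatch input checks described above.
function isValidDimensionsJson(raw: string): boolean {
  try {
    const parsed = JSON.parse(raw)
    // JSON arrays would produce meaningless numeric dimension names
    // ("0", "1", ...), so only plain objects are accepted.
    return typeof parsed === 'object' && parsed !== null && !Array.isArray(parsed)
  } catch {
    return false // malformed JSON → 400 at validation time, not 500 at runtime
  }
}

function isValidMetricValue(input: string): boolean {
  // Number.isFinite rejects NaN ("abc") and Infinity/-Infinity alike,
  // instead of letting a coerced 0 be silently published.
  return Number.isFinite(Number(input))
}
```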
---------
Co-authored-by: Theodore Li <teddy@zenobiapay.com>
Co-authored-by: Waleed Latif <walif6@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(billing): skip billing on streamed workflows with byok
* Simplify logic
* Address comments, skip tokenization billing fallback
* Fix tool usage billing for streamed outputs
* fix(webhook): throw webhook errors as 4xxs (#4050)
* fix(webhook): throw webhook errors as 4xxs
* Fix shadowing body var
---------
Co-authored-by: Theodore Li <theo@sim.ai>
* feat(enterprise): cloud whitelabeling for enterprise orgs (#4047)
* feat(enterprise): cloud whitelabeling for enterprise orgs
* fix(enterprise): scope enterprise plan check to target org in whitelabel PUT
* fix(enterprise): use isOrganizationOnEnterprisePlan for org-scoped enterprise check
* fix(enterprise): allow clearing whitelabel fields and guard against empty update result
* fix(enterprise): remove webp from logo accept attribute to match upload hook validation
* improvement(billing): use isBillingEnabled instead of isProd for plan gate bypasses
* fix(enterprise): show whitelabeling nav item when billing is enabled on non-hosted environments
* fix(enterprise): accept relative paths for logoUrl since upload API returns /api/files/serve/ paths
* fix(whitelabeling): prevent logo flash on refresh by hiding logo while branding loads
* fix(whitelabeling): wire hover color through CSS token on tertiary buttons
* fix(whitelabeling): show sim logo by default, only replace when org logo loads
* fix(whitelabeling): cache org logo url in localstorage to eliminate flash on repeat visits
* feat(whitelabeling): add wordmark support with drag/drop upload
* updated turbo
* fix(whitelabeling): defer localstorage read to effect to prevent hydration mismatch
* fix(whitelabeling): use layout effect for cache read to eliminate logo flash before paint
* fix(whitelabeling): cache theme css to eliminate color flash before org settings resolve
* fix(whitelabeling): deduplicate HEX_COLOR_REGEX into lib/branding and remove mutation from useCallback deps
* fix(whitelabeling): use cookie-based SSR cache to eliminate brand flash on all page loads
* fix(whitelabeling): use !orgSettings condition to fix SSR brand cache injection
React Query returns isLoading: false with data: undefined during SSR, so the
previous brandingLoading condition was always false on the server — initialCache
was never injected into brandConfig. Changing to !orgSettings correctly applies
the cookie cache both during SSR and while the client-side query loads, eliminating
the logo flash on hard refresh.
* fix(editor): stop highlighting start.input as blue when block is not connected to starter (#4054)
* fix: merge subblock values in auto-layout to prevent losing router context (#4055)
Auto-layout was reading from getWorkflowState() without merging subblock
store values, then persisting stale subblock data to the database. This
caused runtime-edited values (e.g. router_v2 context) to be overwritten
with their initial/empty values whenever auto-layout was triggered.
* fix(whitelabeling): eliminate logo flash by fetching org settings server-side (#4057)
* fix(whitelabeling): eliminate logo flash by fetching org settings server-side
* improvement(whitelabeling): add SVG support for logo and wordmark uploads
* add skeleton loader in workspace header

* remove dead code
* fix(whitelabeling): hydration error, SVG support, skeleton shimmer, dead code removal
* fix(whitelabeling): blob preview dep cycle and missing color fallback
* fix(whitelabeling): use brand-accent as color fallback when workspace color is undefined
* chore(whitelabeling): inline hasOrgBrand
---------
Co-authored-by: Theodore Li <theo@sim.ai>
* fix(kb): improve error logging when connector token resolution fails
The generic "Failed to obtain access token" error hid the actual root cause.
Now logs credentialId, userId, authMode, and provider to help diagnose
token refresh failures in trigger.dev.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat(kb): disable connectors after 10 consecutive sync failures
Connectors that fail 10 times in a row are set to 'disabled' status,
stopping the cron from scheduling further syncs. The UI shows an alert
triangle with a reconnect banner. Users can re-enable via the play
button or by reconnecting their account, which resets failures.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(kb): disable sync button for disabled connectors, use amber badge variant
Sync button should be disabled when connector is in disabled state to
guide users toward reconnecting first. Badge variant changed from red
to amber to match the warning banner styling.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(kb): address PR review comments for disabled connector feature
- Use `=== undefined` instead of falsy check for nextSyncAt to preserve
explicit null (manual sync only) when syncIntervalMinutes is 0
- Gate Reconnect button on serviceId/providerId so it only renders for
OAuth connectors; show appropriate copy for API key connectors
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(kb): move resolveAccessToken inside try/catch for circuit-breaker coverage
Token resolution failures (e.g. revoked OAuth tokens) were thrown before
the try/catch block, bypassing consecutiveFailures tracking entirely.
Also removes dead `if (refreshed)` guards at mid-sync refresh sites since
resolveAccessToken now always returns a string or throws.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(kb): remove dead interval branch when re-enabling connector
When `updates.nextSyncAt === undefined`, syncIntervalMinutes was not in
the request, so `parsed.data.syncIntervalMinutes` is always undefined.
Simplify to just schedule an immediate sync — the sync engine sets the
proper nextSyncAt based on the connector's DB interval after completion.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* improvement(kb): deferred content fetching and metadata-based hashes for connectors
* fix(kb): remove message count from outlook contentHash to prevent list/get divergence
* fix(kb): increase outlook getDocument message limit from 50 to 250
* fix(kb): skip outlook messages without conversationId to prevent broken stubs
* fix(kb): scope outlook getDocument to same folder as listDocuments to prevent hash divergence
* fix(kb): add missing connector sync cron job to Helm values
The connector sync endpoint existed but had no cron job configured to trigger it,
meaning scheduled syncs would never fire.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix: address PR review comments on staging release
- Add try/catch around clipboard.writeText() in CopyCodeButton
- Add missing folder and past_chat cases in resolveResourceFromContext
- Return 400 for ZodError instead of 500 in all 8 Athena API routes
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(api): return 400 for Zod validation errors across 27 API routes
Routes using z.parse() were returning 500 for ZodError (client input
validation failures). Added instanceof z.ZodError check to return 400
before the generic 500 handler, matching the established pattern used
by 115+ other routes.
Affected services: CloudWatch (7), CloudFormation (7), DynamoDB (6),
Slack (3), Outlook (2), OneDrive (1), Google Drive (1).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(api): add success:false to ZodError responses for consistency
7 routes used { success: false, error: ... } in their generic error
handler but our ZodError handler only returned { error: ... }. Aligned
the ZodError response shape to match.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
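The route pattern described above can be sketched as one error-mapping helper. The real routes check `error instanceof z.ZodError`; this sketch substitutes a name-based check so it does not depend on the zod package, which is an assumption for illustration only.

```typescript
// Approximation of the ZodError → 400 pattern with the aligned
// { success: false, error } response shape.
type ErrorResponse = { status: number; body: { success: false; error: string } }

function toErrorResponse(error: unknown): ErrorResponse {
  if (error instanceof Error && error.name === 'ZodError') {
    // Client input failed schema validation: a 400, not a generic 500.
    return { status: 400, body: { success: false, error: error.message } }
  }
  const message = error instanceof Error ? error.message : String(error)
  return { status: 500, body: { success: false, error: message } }
}
```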
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* refactor(polling): consolidate polling services into provider handler pattern
Eliminate self-POST anti-pattern and extract shared boilerplate from 4 polling
services into a clean handler registry mirroring lib/webhooks/providers/.
- Add processPolledWebhookEvent() to processor.ts for direct in-process webhook
execution, removing HTTP round-trips that caused Lambda 403/timeout errors
- Extract shared utilities (markWebhookFailed/Success, fetchActiveWebhooks,
runWithConcurrency, resolveOAuthCredential, updateWebhookProviderConfig)
- Create PollingProviderHandler interface with per-provider implementations
- Consolidate 4 identical route files into single dynamic [provider] route
- Standardize concurrency to 10 across all providers
- No infra changes needed — Helm cron paths resolve via dynamic route
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* polish(polling): extract lock TTL constant and remove unnecessary type casts
- Widen processPolledWebhookEvent body param to accept object, eliminating
`as unknown as Record<string, unknown>` double casts in all 4 handlers
- Extract LOCK_TTL_SECONDS constant in route, tying maxDuration and lock TTL
to a single value
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(polling): address PR review feedback
- Add archivedAt filters to fetchActiveWebhooks query, matching
findWebhookAndWorkflow in processor.ts to prevent polling archived
webhooks/workflows
- Move provider validation after auth check to prevent provider
enumeration by unauthenticated callers
- Fix inconsistent pollingIdempotency import path in outlook.ts to
match other handlers
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(polling): use literal for maxDuration segment config
Next.js requires segment config exports to be statically analyzable
literals. Using a variable reference caused build failure.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(admin): delete workspaces on ban
* Fix lint
* Wait until workspace deletion to return ban success
---------
Co-authored-by: Theodore Li <theo@sim.ai>
* feat(athena): add AWS Athena integration
* fix(athena): address PR review comments
- Fix variable shadowing: rename inner `data` to `rowData` in row mapper
- Fix first-page maxResults off-by-one: request maxResults+1 to compensate for header row
- Add missing runtime guard for queryString in create_named_query
- Move athena registry entries to correct alphabetical position
* fix(athena): alphabetize registry keys and add type re-exports
- Reorder athena_* registry keys to strict alphabetical order
- Add type re-exports from index.ts barrel
* fix(athena): cap maxResults at 999 to prevent overflow with header row adjustment
The +1 adjustment for the header row on first-page requests could
produce MaxResults=1001 when user requests 1000, exceeding the AWS
API hard cap of 1000.
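The cap interaction above is simple arithmetic: the first page requests one extra row to compensate for Athena's header row, so the user-facing cap must be 999 to stay within the AWS limit of 1000. A minimal sketch (function name assumed):

```typescript
const AWS_MAX_RESULTS = 1000
const USER_MAX_RESULTS = 999

// Hypothetical page-size calculation mirroring the fix described above.
function athenaPageSize(requested: number, isFirstPage: boolean): number {
  const capped = Math.min(requested, USER_MAX_RESULTS)
  // +1 on the first page only: Athena returns the header row there,
  // and it is stripped from the output.
  return isFirstPage ? capped + 1 : capped
}
```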
* feat(chat): drag workflows and folders from sidebar into chat input
* fix(chat): fix effectAllowed, stale atInsertPosRef, and drag-enter overlay for resource drags
* feat(chat): add task dragging and visible drag ghost for sidebar items
* feat(sidebar): add drag ghost with icons and task icon to context chips
* refactor(types): narrow ChatMessageContext.kind to ChatContextKind union and add workflowBorderColor utility
* feat(user-input): support Tab to select resource in mention dropdown
* fix(user-input): narrow ChatContext discriminated union before accessing workflowId
* fix(colors): overload workflowBorderColor to accept string | undefined
* fix(colors): simplify workflowBorderColor to single string | undefined signature
* fix(chat): remove resource panel tab when context mention is deleted from input
* fix(chat): use resource ID for context removal identity check
* fix(chat): add folder/task cases to resource resolver, task key to existingResourceKeys, and use workflowBorderColor in drag ghost
* revert(chat): remove folder/task from resolveResourceFromContext — no panel UI for these types
* fix(chat): add chatId to stored context types and workflow.color to drag callback deps
* fix(chat): guard chatId before adding task key to existingResourceKeys
* improvement(secrets): parallelize save mutations and add admin visibility for workspace secrets
* fix(secrets): sequence workspace upsert/delete to avoid read-modify-write race
* fix(secrets): use Promise.allSettled to ensure credential invalidation after all mutations settle
* feat(slack): add subtype field and signature verification to Slack trigger
* fix(slack): guard against NaN timestamp and align null/empty-string convention
* fix(docs): resolve missing tool outputs for spread-inherited V2 tools
* fix(docs): add word boundary to baseToolRegex to prevent false matches
* fix(docs): remove unnecessary case-insensitive flag from baseToolRegex
* feat(auth): add DISABLE_GOOGLE_AUTH and DISABLE_GITHUB_AUTH env vars
* fix(auth): also disable server-side OAuth provider registration when flags are set
* lint
* fix(modals): consistent text colors, copy, and workspace delete confirmation
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(modal): replace useEffect with render-time state reset
Replace useEffect anti-pattern for resetting confirmation text with
React's recommended "adjusting state during render" pattern. This
ensures stale text is never painted and avoids an extra render cycle.
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(knowledge): prevent navigation on context menu actions and widen tags modal
* fix(knowledge): guard onCopyId against navigation and use setTimeout for robustness
* refactor(knowledge): extract withActionGuard helper to deduplicate context menu guard
* fix(knowledge): wrap withActionGuard callback in try/finally to prevent stuck ref
* improvement(landing, blog): SEO and GEO optimization
* improvement(docs): ui/ux cleanup
* chore(blog): remove unused buildBlogJsonLd export and wordCount schema field
* fix(blog): stack related posts vertically on mobile and fill all suggestion slots
- Add flex-col sm:flex-row and matching border classes to related posts
nav for consistent mobile stacking with the main blog page
- Remove score > 0 filter in getRelatedPosts so it falls back to recent
posts when there aren't enough tag matches
- Align description text color with main page cards
The $contains filter operator builds an ILIKE pattern but does not
escape LIKE wildcard characters (%, _) in user-provided values.
This causes incorrect, over-broad query results when the search value
contains these characters. For example, filtering with
{ name: { $contains: "100%" } } matches any row where name
contains "100" followed by anything, not just the literal "100%".
Escape %, _, and \ in the value before interpolating into the ILIKE
pattern so that they match literally.
Co-authored-by: Waleed <walif6@gmail.com>
Co-authored-by: lawrence3699 <lawrence3699@users.noreply.github.com>
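The escaping fix above reduces to one small transform: escape the backslash first (it is the escape character itself), then `%` and `_`, before interpolating into the ILIKE pattern. A minimal sketch with assumed helper names:

```typescript
// Escape LIKE/ILIKE metacharacters so user input matches literally.
function escapeLikePattern(value: string): string {
  return value.replace(/\\/g, '\\\\').replace(/%/g, '\\%').replace(/_/g, '\\_')
}

// A $contains-style filter would then wrap the escaped value:
function containsPattern(value: string): string {
  return `%${escapeLikePattern(value)}%`
}
```

With this, `{ name: { $contains: "100%" } }` produces the pattern `%100\%%`, matching the literal string "100%" rather than "100" followed by anything.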
* fix(sso): default tokenEndpointAuthentication to client_secret_post
better-auth's SSO plugin does not URL-encode credentials before Base64
encoding in client_secret_basic mode (RFC 6749 §2.3.1). When the client
secret contains special characters (+, =, /), OIDC providers decode them
incorrectly, causing invalid_client errors.
Default to client_secret_post when tokenEndpointAuthentication is not
explicitly set to avoid this upstream encoding issue.
Fixes #3626
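The encoding hazard above is worth making concrete. RFC 6749 §2.3.1 requires the client id and secret to be form-url-encoded before Base64 encoding in `client_secret_basic` mode; `client_secret_post` sidesteps this by sending the secret in the form body. A sketch under that reading of the spec (helper names assumed, and `encodeURIComponent` stands in for form encoding):

```typescript
// Correct client_secret_basic header per RFC 6749 §2.3.1: url-encode
// BEFORE Base64. Skipping the encoding step is what breaks secrets
// containing +, =, or /.
function basicAuthHeader(clientId: string, clientSecret: string): string {
  const credentials = `${encodeURIComponent(clientId)}:${encodeURIComponent(clientSecret)}`
  return `Basic ${Buffer.from(credentials).toString('base64')}`
}

// client_secret_post avoids the ambiguity: standard form encoding applies.
function postAuthBody(clientId: string, clientSecret: string): URLSearchParams {
  return new URLSearchParams({ client_id: clientId, client_secret: clientSecret })
}
```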
* fix(sso): use nullish coalescing and add env var for tokenEndpointAuthentication
- Use ?? instead of || for semantic correctness
- Add SSO_OIDC_TOKEN_ENDPOINT_AUTH env var so users can explicitly
set client_secret_basic when their provider requires it
* docs(sso): add SSO_OIDC_TOKEN_ENDPOINT_AUTH to script usage comment
Signed-off-by: Mini Jeong <mini.jeong@navercorp.com>
* fix(sso): validate SSO_OIDC_TOKEN_ENDPOINT_AUTH env var value
Replace unsafe `as` type cast with runtime validation to ensure only
'client_secret_post' or 'client_secret_basic' are accepted. Invalid
values (typos, empty strings) now fall back to undefined, letting the
downstream ?? fallback apply correctly.
Signed-off-by: Mini Jeong <mini.jeong@navercorp.com>
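The runtime validation replacing the `as` cast can be sketched in a few lines (function name assumed; the env var name comes from the commits above):

```typescript
type TokenEndpointAuth = 'client_secret_post' | 'client_secret_basic'

// Hypothetical parser for SSO_OIDC_TOKEN_ENDPOINT_AUTH: only the two valid
// values pass; typos and empty strings fall through to undefined so the
// downstream `?? 'client_secret_post'` default still applies.
function parseTokenEndpointAuth(raw: string | undefined): TokenEndpointAuth | undefined {
  return raw === 'client_secret_post' || raw === 'client_secret_basic' ? raw : undefined
}
```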
---------
Signed-off-by: Mini Jeong <mini.jeong@navercorp.com>
* refactor(triggers): consolidate v2 Linear triggers into same files as v1
Move v2 trigger exports from separate _v2.ts files into their
corresponding v1 files, matching the block v2 convention where
LinearV2Block lives alongside LinearBlock in the same file.
* updated
* fix: restore staging registry entries accidentally removed
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* docs
* fix: restore integrations.json to staging version
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(generate-docs): extract all trigger configs from multi-export files
The buildTriggerRegistry function used a single regex exec per file,
which only captured the first TriggerConfig export. Files that export
both v1 and v2 triggers (consolidated same-file convention) had their
v2 triggers silently dropped from integrations.json.
Split each file into segments per export and parse each independently.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: restore staging linear handler and utils with teamId support
Restores the staging version of linear provider handler and trigger
utils that were accidentally regressed. Key restorations:
- teamId sub-block and allPublicTeams fallback in createSubscription
- Timestamp skew validation in verifyAuth
- actorType renaming in formatInput (avoids TriggerOutput collision)
- url field in formatInput and all output builders
- edited field in comment outputs
- externalId validation after webhook creation
- isLinearEventMatch returns false (not true) for unknown triggers
Adds extractIdempotencyId to the linear provider handler for webhook
deduplication support.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: restore non-Linear files accidentally modified
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor: remove redundant extractIdempotencyId from linear handler
The idempotency service already uses the Linear-Delivery header
(which Linear always sends) as the primary dedup key. The body-based
fallback was unnecessary defensive code.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* idempotency
* tests
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(secrets): restore unsaved-changes guard for settings tab navigation
- Add useSettingsDirtyStore (stores/settings/dirty) to track dirty state across the settings sidebar and section components
- Wire credentials-manager and integrations-manager to sync dirty state to the store and clean up on unmount; also reset store synchronously in handleDiscardAndNavigate
- Update settings-sidebar to check dirty state before tab switches and Back navigation, showing an Unsaved Changes dialog if needed
- Remove dead stores/settings/environment directory; move EnvironmentVariable type into lib/environment/api
* fix(teams): harden Microsoft content URL validation
- Add isMicrosoftContentUrl helper with typed allowlist covering SharePoint, OneDrive, and Teams CDN domains
- Replace loose substring checks in Teams webhook handler with parsed-hostname matching to prevent bypass via partial domain names
- Deduplicate OneDrive share-link detection into isOneDriveShareLink flag and use searchParams API instead of string splitting
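The substring-bypass risk the commit closes can be sketched like this: a check such as `url.includes('sharepoint.com')` passes for a URL that merely mentions the domain in its query string or uses a look-alike hostname, while matching the parsed hostname does not. The domain list below is illustrative, not the real allowlist.

```typescript
// Hypothetical allowlist; the real list covers SharePoint, OneDrive, and
// Teams CDN domains per the commit.
const MICROSOFT_CONTENT_DOMAINS = ['sharepoint.com', '1drv.ms', 'files.1drv.com']

function isMicrosoftContentUrl(raw: string): boolean {
  let url: URL
  try {
    url = new URL(raw) // parse rather than substring-match
  } catch {
    return false
  }
  if (url.protocol !== 'https:') return false
  // Exact apex match or a dot-delimited subdomain; 'notsharepoint.com' fails.
  return MICROSOFT_CONTENT_DOMAINS.some(
    (d) => url.hostname === d || url.hostname.endsWith(`.${d}`)
  )
}

const legit = isMicrosoftContentUrl('https://contoso.sharepoint.com/doc.docx')
const queryTrick = isMicrosoftContentUrl('https://evil.example/?x=sharepoint.com')
const lookAlike = isMicrosoftContentUrl('https://notsharepoint.com/doc.docx')
```

The `.${d}` suffix check is what prevents partial-domain bypasses while still accepting tenant subdomains.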
* fix(env): remove type re-exports from query file, drop keepPreviousData on static key
* fix(teams): remove smba.trafficmanager.net from Microsoft content allowlist
The subdomain check for smba.trafficmanager.net was unnecessary — Azure
Traffic Manager does not support nested subdomains of existing profiles,
but the pattern still raised a valid audit concern. Teams bot-framework
attachment URLs from this host fall through to the generic fetchWithDNSPinning
branch, which provides the same protection without the ambiguity.
* fix(secrets): guard active-tab re-click, restore keepPreviousData on workspace env query
* fix(teams): add 1drv.com apex to OneDrive share-link branch
1drv.com (apex) is a short-link domain functionally equivalent to
1drv.ms and requires share-token resolution, not direct fetch.
CDN subdomains (files.1drv.com) are unaffected — the exact-match
check leaves them on the direct-fetch path.
* fix(triggers): apply webhook audit follow-ups
Align the Greenhouse webhook matcher with provider conventions and clarify the Notion webhook secret setup text after the audit review.
Made-with: Cursor
* fix(webhooks): Salesforce provider handler, Zoom CRC and block wiring
Add salesforce WebhookProviderHandler with required shared secret auth,
matchEvent filtering, formatInput aligned to trigger outputs, and
idempotency keys. Require webhook secret and document JSON-only Flow
setup; enforce objectType when configured.
Zoom: pass raw body into URL validation signature check, try all active
webhooks on a path for secret match, add extractIdempotencyId, tighten
event matching for specialized triggers. Wire Zoom triggers into the
Zoom block. Extend handleChallenge with optional rawBody.
Register Salesforce pending verification probes for pre-save URL checks.
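Zoom's `endpoint.url_validation` challenge mentioned above works roughly like this (per Zoom's webhook docs): the endpoint echoes the `plainToken` back along with an HMAC-SHA256 hex digest of it keyed by the webhook's secret token. A minimal sketch, with illustrative names:

```typescript
import { createHmac } from 'node:crypto'

// Respond to Zoom's URL-validation challenge: { plainToken, encryptedToken }.
function handleZoomChallenge(plainToken: string, secretToken: string) {
  const encryptedToken = createHmac('sha256', secretToken)
    .update(plainToken)
    .digest('hex') // hex digest of the plain token, keyed by the secret
  return { plainToken, encryptedToken }
}

const res = handleZoomChallenge('abc123', 's3cret')
const again = handleZoomChallenge('abc123', 's3cret')
```

The commit additionally verifies `x-zm-signature` over the raw body before answering the challenge, so the endpoint cannot be used as an HMAC oracle.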
* fix(webhooks): harden Resend and Linear triggers (idempotency, auth, outputs)
- Dedupe Resend deliveries via svix-id and Linear via Linear-Delivery in idempotency keys
- Require Resend signing secret; validate createSubscription id and signing_secret
- Single source for Resend event maps in triggers/utils; fail closed on unknown trigger IDs
- Add raw event data to Resend trigger outputs and formatInput
- Linear: remove body-based idempotency key; timestamp skew after HMAC verify; format url and actorType
- Tighten isLinearEventMatch for unknown triggers; clarify generic webhook copy; fix header examples
- Add focused tests for idempotency headers and Linear matchEvent
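The header-based dedup above can be sketched as a tiny key builder: Resend deliveries carry a Svix `svix-id` header and Linear always sends `Linear-Delivery`, so the delivery ID becomes the idempotency key. The key format here is illustrative.

```typescript
// Build a dedup key from the provider's delivery-ID header.
// Headers are assumed lowercased, as Node's request objects normalize them.
function deliveryIdempotencyKey(
  provider: 'resend' | 'linear',
  headers: Record<string, string | undefined>
): string | null {
  const name = provider === 'resend' ? 'svix-id' : 'linear-delivery'
  const id = headers[name]
  return id ? `${provider}:${id}` : null
}

const resendKey = deliveryIdempotencyKey('resend', { 'svix-id': 'msg_123' })
const missing = deliveryIdempotencyKey('linear', {})
```

Returning `null` when the header is absent matches the Linear change above, which drops the body-based fallback entirely rather than guessing a key.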
* fix(webhooks): harden Vercel and Greenhouse trigger handlers
Require Vercel signing secret and validate x-vercel-signature; add
matchEvent with dynamic import, delivery idempotency, strict
createSubscription trigger IDs, and formatInput aligned to string IDs.
Greenhouse: dynamic import in matchEvent, strict unknown trigger IDs,
Greenhouse-Event-ID idempotency header, body fallback keys, clearer
optional secret copy. Update generic trigger wording and add tests.
* fix(gong): JWT verification, trigger UX, alignment script
- Optional RS256 verification when Gong JWT public key is configured (webhook_url + body_sha256 per Gong docs); URL secrecy when unset.
- Document that Gong rules filter calls; payload has no event type; add eventType + callId outputs for discoverability.
- Refactor Gong triggers to buildTriggerSubBlocks + shared JWT field; setup copy matches security model.
- Add check-trigger-alignment.ts (Gong bundled; extend PROVIDER_CHECKS for others) and update add-trigger guidance paths.
Made-with: Cursor
* fix(notion): align webhook lifecycle and outputs
Handle Notion verification requests safely, expose the documented webhook fields in the trigger contract, and update setup guidance so runtime data and user-facing configuration stay aligned.
Made-with: Cursor
* fix(webhooks): tighten remaining provider hardening
Close the remaining pre-merge caveats by tightening Salesforce, Zoom, and Linear behavior, and follow through on the deferred provider and tooling cleanup for Vercel, Greenhouse, Gong, and Notion.
Made-with: Cursor
* refactor(webhooks): move subscription helpers out of providers
Move provider subscription helpers alongside the subscription lifecycle module and add targeted TSDoc so the file placement matches the responsibility boundaries in the webhook architecture.
Made-with: Cursor
* fix(zoom): resolve env-backed secrets during validation
Use the same env-aware secret resolution path for Zoom endpoint validation as regular delivery verification so URL validation works correctly when the secret token is stored via env references.
Made-with: Cursor
* fix build
* consolidate tests
* refactor(salesforce): share payload object type parsing
Remove dead code in the Salesforce provider and move shared object-type extraction into a single helper so trigger matching and input shaping stay in sync.
Made-with: Cursor
* fix(webhooks): address remaining review follow-ups
Loosen Linear's replay window to better tolerate delayed retries and make Notion event mismatches return false consistently with the rest of the hardened providers.
Made-with: Cursor
* test(webhooks): separate Zoom coverage and clean Notion output shape
Move Zoom provider coverage into its own test file and strip undeclared Notion type fields from normalized output objects so the runtime shape better matches the trigger contract.
Made-with: Cursor
* feat(triggers): enrich Vercel and Greenhouse webhook output shapes
Document and pass through Vercel links, regions, deployment.meta, and
domain.delegated; add top-level Greenhouse applicationId, candidateId,
and jobId aligned with webhook common attributes. Extend alignment checker
for greenhouse, update provider docs, and add formatInput tests.
Made-with: Cursor
* feat(webhooks): enrich Resend trigger outputs; clarify Notion output docs
- Resend: expose broadcast_id, template_id, tags, and data_created_at from
payload data (per Resend webhook docs); keep alignment with formatInput.
- Add resend entry to check-trigger-alignment and unit test for formatInput.
- Notion: tighten output descriptions for authors, entity types, parent types,
attempt_number, and accessible_by per Notion webhooks event reference.
Made-with: Cursor
* feat(webhooks): enrich Zoom and Gong trigger output schemas
- Zoom: add formatInput passthrough, fix nested TriggerOutput shape (drop invalid `properties` wrappers), document host_email, join_url, agenda, status, meeting_type on recordings, participant duration, and alignment checker entry.
- Gong: flatten topics/highlights from callData.content in formatInput, extend metaData and trigger outputs per API docs, tests and alignment keys updated.
- Docs: add English webhook trigger sections for Zoom and Gong tools pages.
* feat(triggers): enrich Salesforce and Linear webhook output schemas
Salesforce: expose simEventType alongside eventType; pass OwnerId and
SystemModstamp on record lifecycle inputs; add AccountId/OwnerId for
Opportunity and AccountId/ContactId/OwnerId for Case. Align trigger
output docs with Flow JSON payloads and formatInput.
Linear: document actor email and profile url per official webhook
payload; add Comment data.edited from Linear's sample payload.
Tests: extend Salesforce formatInput coverage for new fields.
* remove from mdx
* chore(webhooks): expand trigger alignment coverage
Extend the trigger alignment checker to cover additional webhook providers so output contracts are verified across more of the recently added trigger surface.
Made-with: Cursor
* updated skills
* updated file naming semantics
* rename file
* feat(folders): soft-delete folders and show in Recently Deleted
Folders are now soft-deleted (archived) instead of permanently removed,
matching the existing pattern for workflows, tables, and knowledge bases.
Users can restore folders from Settings > Recently Deleted.
- Add `archivedAt` column to `workflowFolder` schema with index
- Change folder deletion to set `archivedAt` instead of hard-delete
- Add folder restore endpoint (POST /api/folders/[id]/restore)
- Batch-restore all workflows inside restored folders in one transaction
- Add scope filter to GET /api/folders (active/archived)
- Add Folders tab to Recently Deleted settings page
- Update delete modal messaging for restorable items
- Change "This action cannot be undone" styling to muted text
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(testing): add FOLDER_RESTORED to audit mock
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(folders): atomic restore transaction and scope to folder-deleted workflows
Address two review findings:
- Wrap entire folder restore in a single DB transaction to prevent
partial state if any step fails
- Only restore workflows archived within 5s of the folder's archivedAt,
so individually-deleted workflows are not silently un-deleted
- Add folder_restored to PostHog event map
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor(folders): simplify restore to remove hacky 5s time window
The 5-second time window for scoping which workflows to restore was
a fragile heuristic (magic number, race-prone, non-deterministic).
Restoring a folder now restores all archived workflows in it, matching
standard trash/recycle-bin behavior. Users can re-delete any workflow
they don't want after restore.
The single-transaction wrapping from the prior commit is kept — that
was a legitimate atomicity fix.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(db): regenerate folder soft-delete migration with drizzle-kit
Replace manually created migration with proper drizzle-kit generated
one that includes the snapshot file, fixing CI schema sync check.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* chore(db): fix migration metadata formatting
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(folders): scope restore to folder-deleted workflows via shared timestamp
Use a single timestamp across the entire folder deletion — folders,
workflows, schedules, webhooks, etc. all get the exact same archivedAt.
On restore, match workflows by exact archivedAt equality with the
folder's timestamp, so individually-deleted workflows are not
silently un-deleted.
- Add optional archivedAt to ArchiveWorkflowOptions (backwards-compatible)
- Pass shared timestamp through deleteFolderRecursively → archiveWorkflowsByIdsInWorkspace
- Filter restore with eq(workflow.archivedAt, folderArchivedAt) instead of isNotNull
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
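The shared-timestamp scoping can be sketched as a filter: every row archived in the folder deletion carries the exact same `archivedAt`, so restore matches on equality rather than `isNotNull`, and individually deleted workflows stay deleted. Types and data are illustrative; the real code expresses this as a Drizzle `eq()` condition.

```typescript
interface WorkflowRow {
  id: string
  archivedAt: Date | null
}

// Mirror of eq(workflow.archivedAt, folderArchivedAt): only rows deleted in
// the same operation as the folder are eligible for restore.
function workflowsToRestore(
  rows: WorkflowRow[],
  folderArchivedAt: Date
): WorkflowRow[] {
  return rows.filter(
    (w) =>
      w.archivedAt !== null &&
      w.archivedAt.getTime() === folderArchivedAt.getTime()
  )
}

const t = new Date('2024-01-01T00:00:00Z')
const rows: WorkflowRow[] = [
  { id: 'a', archivedAt: t },                                // deleted with folder
  { id: 'b', archivedAt: new Date('2023-12-25T00:00:00Z') }, // deleted individually
  { id: 'c', archivedAt: null },                             // still active
]
const restored = workflowsToRestore(rows, t)
```

Unlike the earlier 5-second window, exact equality is deterministic: either a row was archived in the same pass as the folder or it was not.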
* fix(workflows): clear folderId on restore when folder is archived or missing
When individually restoring a workflow from Recently Deleted, check if
its folder still exists and is active. If the folder is archived or
missing, clear folderId so the workflow appears at root instead of
being orphaned (invisible in sidebar).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(folders): format restoreFolderRecursively call to satisfy biome
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(folders): close remaining restore edge cases
Three issues caught by audit:
1. Child folder restore used isNotNull instead of timestamp matching,
so individually-deleted child folders would be incorrectly restored.
Now uses eq(archivedAt, folderArchivedAt) for both workflows AND
child folders — consistent and deterministic.
2. No workspace archived check — could restore a folder into an
archived workspace. Now checks getWorkspaceWithOwner, matching
the existing restoreWorkflow pattern.
3. Re-restoring an already-restored folder returned an error. Now
returns success with zero counts (idempotent).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(folders): add archivedAt to optimistic folder creation objects
Ensures optimistic folder objects include archivedAt: null for
consistency with the database schema shape.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(folders): handle missing parent folder during restore reparenting
If the parent folder row no longer exists (not just archived), the
restored folder now correctly gets reparented to root instead of
retaining a dangling parentId reference.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(subflows): make edges inside subflows directly clickable
Edges inside subflows defaulted to z-index 0, causing the subflow body
area (pointer-events: auto) to intercept clicks. Derive edge z-index
from the container's depth so edges sit just above their parent container
but below canvas blocks and child blocks.
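The layering fix can be sketched as a depth function: with edges pinned at z-index 0, the subflow body (`pointer-events: auto`) sat above them and swallowed clicks, so the edge's z-index is instead derived from its container's nesting depth. The step and offset constants below are illustrative, not the actual values.

```typescript
// Hypothetical layering: each nesting level reserves a band of z-indexes,
// with the edge placed just above its parent container's body but below
// the blocks (and child containers) rendered inside it.
const CONTAINER_Z_STEP = 10

function containerZIndex(depth: number): number {
  return depth * CONTAINER_Z_STEP
}

function edgeZIndex(containerDepth: number): number {
  return containerZIndex(containerDepth) + 1 // just above the container body
}

const rootEdge = edgeZIndex(0)
const nestedEdge = edgeZIndex(1)
```

The invariant that matters is ordering, not the exact numbers: for any depth `d`, `containerZIndex(d) < edgeZIndex(d) < containerZIndex(d + 1)`, so an edge is clickable inside its own container without covering child blocks.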
* Fix edge deletion in nested subflows
* Fix bug with multi selecting nested subblock
---------
Co-authored-by: Theodore Li <theo@sim.ai>
* feat(home): add folders to resource menu
* fix(home): add folder to API validation and dedup logic
* fix(home): add folder context processing and generic title dedup
* fix(home): add folder icon to mention chip overlay
* fix(home): add folder to AgentContextType and context persistence
* fix(home): add workspace scoping to folder resolver, fix folderId type and dedup
* user message
* fix(copilot): fix copilot running workflow stuck on 10mb error
* Use correct try catch
* Add const
* Strip only logs on payload too large
* Fix threshold
---------
Co-authored-by: Theodore Li <theo@sim.ai>
* Add credential prompting for google service accounts
* Add service account credential block prompting for google service account
* Revert requiredCredentials change
* Fix lint
---------
Co-authored-by: Theodore Li <theo@sim.ai>
* fix(signup): show multiple signup errors at once
* Fix reset password error formatting
* Remove dead code
* Fix unit tests
---------
Co-authored-by: Theodore Li <theo@sim.ai>
* feat(triggers): add Linear v2 triggers with automatic webhook registration
* fix(triggers): preserve specific Linear API error messages in catch block
* fix(triggers): check response.ok before JSON parsing, replace as any with as unknown
* fix linear subscription params
* fix build
---------
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
* feat(triggers): add Zoom webhook triggers with challenge-response and signature verification
Add 6 Zoom webhook triggers (meeting started/ended, participant joined/left, recording completed, generic webhook) with full Zoom protocol support including endpoint.url_validation challenge-response handling and x-zm-signature HMAC-SHA256 verification.
* fix(triggers): use webhook.isActive instead of non-existent deletedAt column
* fix(triggers): address PR review feedback for Zoom webhooks
- Add 30s timestamp freshness check to prevent replay attacks
- Return null from handleChallenge when no secret token found instead of responding with empty-key HMAC
- Remove all `as any` casts from output builder functions
* lint
* fix(triggers): harden Zoom webhook security per PR review
- verifyAuth now fails closed (401) when secretToken is missing
- handleChallenge DB query filters by provider='zoom' to avoid cross-provider leaks
- handleChallenge verifies x-zm-signature before responding to prevent HMAC oracle
* fix(triggers): rename type to meeting_type to avoid TriggerOutput type collision
* fix(triggers): make challenge signature verification mandatory, not optional
* fix(triggers): fail closed on unknown trigger IDs and update Zoom landing page data
- isZoomEventMatch now returns false for unrecognized trigger IDs
- Update integrations.json with 6 Zoom triggers
* fix(triggers): add missing id fields to Zoom trigger entries in integrations.json
* fix(triggers): increase Zoom timestamp tolerance to 300s per Zoom docs
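Putting the Zoom delivery checks above together: per Zoom's webhook docs, `x-zm-signature` carries `v0=` plus an HMAC-SHA256 hex digest over `v0:{timestamp}:{body}`, and the commit widens the freshness window to 300s. A minimal sketch, assuming this header format; names are illustrative:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto'

const ZOOM_TOLERANCE_SECONDS = 300 // per Zoom docs, as the commit notes

function verifyZoomSignature(
  rawBody: string,
  timestamp: string,
  signatureHeader: string,
  secretToken: string,
  nowSeconds = Math.floor(Date.now() / 1000)
): boolean {
  // Reject stale (or future-dated) deliveries to block replays.
  if (Math.abs(nowSeconds - Number(timestamp)) > ZOOM_TOLERANCE_SECONDS) {
    return false
  }
  const expected = `v0=${createHmac('sha256', secretToken)
    .update(`v0:${timestamp}:${rawBody}`)
    .digest('hex')}`
  const a = Buffer.from(expected)
  const b = Buffer.from(signatureHeader)
  // timingSafeEqual throws on length mismatch, so guard first.
  return a.length === b.length && timingSafeEqual(a, b)
}

const ts = String(Math.floor(Date.now() / 1000))
const body = '{"event":"meeting.started"}'
const sig = `v0=${createHmac('sha256', 's3cret')
  .update(`v0:${ts}:${body}`)
  .digest('hex')}`
const ok = verifyZoomSignature(body, ts, sig, 's3cret')
const stale = verifyZoomSignature(body, String(Number(ts) - 1000), sig, 's3cret')
```

Note the fail-closed behavior the earlier commits added: a missing `secretToken` returns 401 rather than skipping verification.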
* feat(triggers): add Vercel webhook triggers with automatic registration
* fix(triggers): add Vercel webhook signature verification and expand generic events
* fix(triggers): validate Vercel webhook ID before storing to prevent orphaned webhooks
* fix(triggers): add triggerId validation warning and JSON parse fallback for Vercel webhooks
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(triggers): add paramVisibility user-only to Vercel apiKey subblock
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(triggers): add Notion webhook triggers for all event types
Add 9 Notion webhook triggers covering the full event lifecycle:
- Page events: created, properties updated, content updated, deleted
- Database events: created, schema updated, deleted
- Comment events: created
- Generic webhook trigger (all events)
Implements provider handler with HMAC SHA-256 signature verification,
event filtering via matchEvent, and structured input formatting.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(triggers): resolve type field collision in Notion trigger outputs
Rename nested `type` fields to `entity_type`/`parent_type` to avoid
collision with processOutputField's leaf node detection which checks
`'type' in field`. Remove spread of author outputs into `authors`
array which was overwriting `type: 'array'`.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(triggers): clarify Notion webhook signing secret vs verification_token
Update placeholder and description to distinguish the signing secret
(used for HMAC-SHA256 signature verification) from the verification_token
(one-time challenge echoed during initial setup).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor(webhooks): use createHmacVerifier for Notion provider handler
Replace inline verifyAuth boilerplate with createHmacVerifier utility,
consistent with Linear, Ashby, Cal.com, Circleback, Confluence, and
Fireflies providers.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(triggers): add Greenhouse webhook triggers
Add 8 webhook triggers for Greenhouse ATS events:
- Candidate Hired, New Application, Stage Change, Rejected
- Offer Created, Job Created, Job Updated
- Generic Webhook (all events)
Includes event filtering via provider handler registry and output
schemas matching actual Greenhouse webhook payload structures.
* fix(triggers): address PR review feedback for Greenhouse triggers
- Fix rejection_reason.type key collision with mock payload generator
by renaming to reason_type
- Replace dynamic import with static import in matchEvent handler
- Add HMAC-SHA256 signature verification via createHmacVerifier
- Add secretKey extra field to all trigger subBlocks
- Extract shared buildJobPayload helper to deduplicate job outputs
* fix(triggers): align rejection_reason output with actual Greenhouse payload
Reverted reason_type rename — instead flattened rejection_reason to JSON
type since TriggerOutput's type?: string conflicts with nested type keys.
Also hardened processOutputField to check typeof type === 'string' before
treating an object as a leaf node, preventing this class of bug for future triggers.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
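The hardened leaf-node check can be sketched in a few lines: treating any object containing a `type` key as a schema leaf broke outputs whose payload legitimately nests a `type` field (Greenhouse's `rejection_reason.type`), so the check now also requires the value to be a string. The shape below is illustrative of the idea, not the actual `processOutputField` signature.

```typescript
// A schema leaf declares its type as a string ('string', 'json', ...).
// Payload data that happens to contain a `type` key holds an object there.
function isLeafField(field: Record<string, unknown>): boolean {
  return 'type' in field && typeof field.type === 'string'
}

const schemaLeaf = { type: 'string', description: 'Candidate name' }
const payloadObject = { type: { id: 1, name: 'Lacking skills' } }

const leafResult = isLeafField(schemaLeaf)
const payloadResult = isLeafField(payloadObject)
```

As the commit says, the `typeof` guard prevents this whole class of key-collision bug for future triggers instead of renaming fields one by one.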
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(triggers): add Gong webhook triggers for call events
* fix(triggers): reorder Gong trigger spread and dropdown options
* fix(triggers): resolve Biome lint errors in Gong trigger files
* json
* feat(triggers): add Resend webhook triggers with auto-registration
* fix(triggers): capture Resend signing secret and add Svix webhook verification
* fix(triggers): add paramVisibility, event-type filtering for Resend triggers
* fix(triggers): add Svix timestamp staleness check to prevent replay attacks
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(triggers): use Number.parseInt and Number.isNaN for lint compliance
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(integrations): add Sixtyfour AI integration
Add Sixtyfour AI integration with 4 tools: find_phone, find_email, enrich_lead, enrich_company. Includes block with operation dropdown, API key auth, conditional fields per operation, brand icon, and generated docs.
* fix(integrations): add error handling to sixtyfour tools
Wrap JSON.parse calls in try/catch for enrich_lead and enrich_company.
Add response.ok checks to all 4 tools' transformResponse.
* fix(integrations): use typed Record for leadStruct to fix spread type error
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* docs
* airweave docs link
* turbo update
* more inputs/outputs
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* refactor(webhooks): extract provider-specific logic into handler registry
* fix(webhooks): address PR review feedback
- Restore original fall-through behavior for generic requireAuth with no token
- Replace `any` params with proper types in processor helper functions
- Restore array-aware initializer in processTriggerFileOutputs
* fix(webhooks): fix build error from union type indexing in processTriggerFileOutputs
Cast array initializer to Record<string, unknown> to allow string indexing
while preserving array runtime semantics for the return value.
* fix(webhooks): return 401 when requireAuth is true but no token configured
If a user explicitly sets requireAuth: true, they expect auth to be enforced.
Returning 401 when no token is configured is the correct behavior — this is
an intentional improvement over the original code which silently allowed
unauthenticated access in this case.
* refactor(webhooks): move signature validators into provider handler files
Co-locate each validate*Signature function with its provider handler,
eliminating the circular dependency where handlers imported back from
utils.server.ts. validateJiraSignature is exported from jira.ts for
shared use by confluence.ts.
* refactor(webhooks): move challenge handlers into provider files
Move handleWhatsAppVerification to providers/whatsapp.ts and
handleSlackChallenge to providers/slack.ts. Update processor.ts
imports to point to provider files.
* refactor(webhooks): move fetchAndProcessAirtablePayloads into airtable handler
Co-locate the ~400-line Airtable payload processing function with its
provider handler. Remove AirtableChange interface from utils.server.ts.
* refactor(webhooks): extract polling config functions into polling-config.ts
Move configureGmailPolling, configureOutlookPolling, configureRssPolling,
and configureImapPolling out of utils.server.ts into a dedicated module.
Update imports in deploy.ts and webhooks/route.ts.
* refactor(webhooks): decompose formatWebhookInput into per-provider formatInput methods
Move all provider-specific input formatting from the monolithic formatWebhookInput
switch statement into each provider's handler file. Delete formatWebhookInput and
all its helper functions (fetchWithDNSPinning, formatTeamsGraphNotification, Slack
file helpers, convertSquareBracketsToTwiML) from utils.server.ts. Create new handler
files for gmail, outlook, rss, imap, and calendly providers. Update webhook-execution.ts
to use handler.formatInput as the primary path with raw body passthrough as fallback.
utils.server.ts reduced from ~1600 lines to ~370 lines containing only credential-sync
functions.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor(webhooks): decompose provider-subscriptions into handler registry pattern
Move all provider-specific subscription create/delete logic from the monolithic
provider-subscriptions.ts into individual provider handler files via new
createSubscription/deleteSubscription methods on WebhookProviderHandler.
Replace the two massive if-else dispatch chains (11 branches each) with simple
registry lookups via getProviderHandler(). provider-subscriptions.ts reduced
from 2,337 lines to 128 lines (orchestration only).
Also migrate polling configuration (gmail, outlook, rss, imap) into provider
handlers via configurePolling() method, and challenge/verification handling
(slack, whatsapp, teams) via handleChallenge() method. Delete polling-config.ts.
Create new handler files for fathom and lemlist providers. Extract shared
subscription utilities into subscription-utils.ts.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(webhooks): fix attio build error, restore imap field, remove demarcation comments
- Cast `body` to `Record<string, unknown>` in attio formatInput to fix
type error with extractor functions
- Restore `rejectUnauthorized` field in imap configurePolling for parity
- Remove `// ---` section demarcation comments from route.ts and airtable.ts
- Update add-trigger skill to reflect handler-based architecture
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(webhooks): remove unused imports from utils.server.ts after rebase
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(webhooks): remove duplicate generic file processing from webhook-execution
The generic provider's processInputFiles handler already handles file[] field
processing via the handler.processInputFiles call. The hardcoded block from
staging was incorrectly preserved during rebase, causing double processing.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(webhooks): validate auth token is set when requireAuth is enabled at deploy time
Rejects deployment with a clear error message if a generic webhook trigger
has requireAuth enabled but no authentication token configured, rather than
letting requests fail with 401 at runtime.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(webhooks): remove unintended rejectUnauthorized field from IMAP polling config
The refactored IMAP handler added a rejectUnauthorized field that was not
present in the original configureImapPolling function. This would default
to true for all existing IMAP webhooks, potentially breaking connections
to servers with self-signed certificates.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(webhooks): replace crypto.randomUUID() with generateId() in ashby handler
Per project coding standards, use generateId() from @/lib/core/utils/uuid
instead of crypto.randomUUID() directly.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor(webhooks): standardize logger names and remove any types from providers
- Standardize logger names to WebhookProvider:X pattern across 6 providers
(fathom, gmail, imap, lemlist, outlook, rss)
- Replace all `any` types in airtable handler with proper types:
- Add AirtableTableChanges interface for API response typing
- Change function params from `any` to `Record<string, unknown>`
- Change AirtableChange fields from Record<string, any> to Record<string, unknown>
- Change all catch blocks from `error: any` to `error: unknown`
- Change input object from `any` to `Record<string, unknown>`
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor(webhooks): remove remaining any types from deploy.ts
Replace 3 `catch (error: any)` with `catch (error: unknown)` and
1 `Record<string, any>` with `Record<string, unknown>`.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(blocks): resolve Ollama models incorrectly requiring API key in Docker
Server-side validation failed for Ollama models like mistral:latest because
the Zustand providers store is empty on the server and getProviderFromModel
misidentified them via regex pattern matching (e.g. mistral:latest matched
Mistral AI's /^mistral/ pattern).
Replace the hardcoded CLOUD_PROVIDER_PREFIXES list with existing data sources:
- Provider store (definitive on client, checks all provider buckets)
- getBaseModelProviders() from PROVIDER_DEFINITIONS (server-side static cloud model lookup)
- Slash convention for dynamic cloud providers (fireworks/, openrouter/, etc.)
- isOllamaConfigured feature flag using existing OLLAMA_URL env var
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor: remove getProviderFromModel regex fallback from API key validation
The fallback was the last piece of regex-based matching in the function and
only ran for self-hosted without OLLAMA_URL on the server — a path where
Ollama models cannot appear in the dropdown anyway.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* lint
* fix: handle vLLM models in store provider check
vLLM is a local model server like Ollama and should not require an API key.
Add vllm to the store provider check as a safety net for models that may
not have the vllm/ prefix.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(core): consolidate ID generation to prevent HTTP self-hosted crashes
crypto.randomUUID() requires a secure context (HTTPS) in browsers,
causing white-screen crashes on self-hosted HTTP deployments. This
replaces all direct usage of crypto.randomUUID(), nanoid, and the uuid
package with a central utility that falls back to crypto.getRandomValues()
which works in all contexts.
- Add generateId(), generateShortId(), isValidUuid() in @/lib/core/utils/uuid
- Replace crypto.randomUUID() imports across ~220 server + client files
- Replace nanoid imports with generateShortId()
- Replace uuid package validate with isValidUuid()
- Remove nanoid dependency from apps/sim and packages/testing
- Remove browser polyfill script from layout.tsx
- Update test mocks to target @/lib/core/utils/uuid
- Update CLAUDE.md, AGENTS.md, cursor rules, claude rules
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
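The fallback strategy above can be sketched as a v4 UUID built from `getRandomValues()`, which works outside secure contexts where `crypto.randomUUID()` throws. This is a minimal illustration of the technique, not the actual `@/lib/core/utils/uuid` implementation (which prefers `randomUUID()` when available).

```typescript
import { webcrypto } from 'node:crypto'

// Build a spec-shaped v4 UUID from 16 random bytes; getRandomValues() is
// available in plain-HTTP browser contexts where randomUUID() is not.
function generateId(): string {
  const bytes = new Uint8Array(16)
  webcrypto.getRandomValues(bytes)
  bytes[6] = (bytes[6] & 0x0f) | 0x40 // set version 4
  bytes[8] = (bytes[8] & 0x3f) | 0x80 // set RFC 4122 variant
  const hex = Array.from(bytes, (b) => b.toString(16).padStart(2, '0')).join('')
  return [
    hex.slice(0, 8),
    hex.slice(8, 12),
    hex.slice(12, 16),
    hex.slice(16, 20),
    hex.slice(20),
  ].join('-')
}

const id = generateId()
```

Centralizing this in one utility is what lets ~220 call sites drop `crypto.randomUUID()`, `nanoid`, and the `uuid` package at once.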
* update bun.lock
* fix(core): remove UUID_REGEX shim, use isValidUuid directly
* fix(core): remove deprecated uuid mock helpers that use vi.doMock
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(files): expand file editor to support more formats, add docx/xlsx preview
* lint
* fix(files): narrow fileData type for closure in docx/xlsx preview effects
* fix(files): address PR review — fix xlsx type, simplify error helper, tighten iframe sandbox
* add mothership read extensions
* fix(files): update upload test — js is now a supported extension
* fix(files): deduplicate code extensions, handle dotless filenames
* fix(files): lower xlsx preview row cap to 1k and type workbookRef properly
Reduces XLSX_MAX_ROWS from 10,000 to 1,000 to prevent browser sluggishness
on large spreadsheets. Types workbookRef with the proper xlsx.WorkBook
interface instead of unknown, removing the unsafe cast.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor(files): extract shared DataTable, isolate client-safe constants
- Move SUPPORTED_CODE_EXTENSIONS to validation-constants.ts so client
components no longer transitively import Node's `path` module
- Extract shared DataTable component used by both CsvPreview and
XlsxPreview, eliminating duplicated table markup
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor(validation): remove Node path import, use plain string extraction
Replace `import path from 'path'` with a simple `extractExtension` helper
that does `fileName.slice(fileName.lastIndexOf('.') + 1)`. This removes
the only Node module dependency from validation.ts, making it safe to
import from client components without pulling in a Node polyfill.
Deletes the unnecessary validation-constants.ts that was introduced as
a workaround — the constants now live back in validation.ts where they
belong.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
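The `extractExtension` helper described above can be sketched directly from the commit, with a guard for the dotless-filename case fixed earlier in this log (a bare `lastIndexOf('.') + 1` slice would return the whole name when no dot exists):

```typescript
// Dependency-free replacement for path.extname(), safe to import from
// client components. Returns '' for dotless names and leading-dot files.
function extractExtension(fileName: string): string {
  const dot = fileName.lastIndexOf('.')
  if (dot <= 0) return '' // 'Makefile' has no dot; '.env' has only a leading one
  return fileName.slice(dot + 1)
}

const pdf = extractExtension('report.pdf')
const none = extractExtension('Makefile')
const dotfile = extractExtension('.env')
```

Dropping the `path` import is the whole point: `validation.ts` becomes safe for client bundles without a Node polyfill, and the workaround constants file can be deleted.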
* lint
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(kb): fix Linear connector GraphQL type errors and tag slot reuse
* fix(kb): simplify tag slot reuse, revert Linear GraphQL types to String
Clean up newTagSlotMapping into direct assignment, remove unnecessary
comment, and revert ID! back to String! to match Linear SDK types.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(kb): use ID! type for Linear GraphQL filter variables
* fix(kb): verify field type when reusing existing tag slots
Add fieldType check to the tag slot reuse logic so a connector with
a matching displayName but different fieldType falls through to fresh
slot allocation instead of silently reusing an incompatible slot.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(kb): enable search on connector selector dropdowns
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(analytics): posthog audit — remove noise, add 10 new events
Remove task_marked_read (fires automatically on every task view).
Add workspace_id to task_message_sent for group analytics.
New events:
- search_result_selected: block/tool/trigger/workflow/table/file/
knowledge_base/workspace/task/page/docs with query_length
- workflow_imported: count + format (json/zip)
- workflow_exported: count + format (json/zip)
- folder_created / folder_deleted
- logs_filter_applied: status/workflow/folder/trigger/time
- knowledge_base_document_deleted
- scheduled_task_created / scheduled_task_deleted
* fix(analytics): use usePostHog + captureEvent in hooks, track custom date range
* fix(analytics): always fire scheduled_task_deleted regardless of workspaceId
* fix(analytics): correct format field logic and add missing useCallback deps
* feat(knowledge): add Live sync option to KB connector modal for Max/Enterprise users
Adds a "Live" (every 5 min) sync frequency option gated to Max and Enterprise plan users.
Includes client-side badge + disabled state, shared sync intervals constant, and server-side
plan validation on both POST and PATCH connector routes.
* fix(knowledge): record embedding usage cost for KB document processing
Adds billing tracking to the KB embedding pipeline, which was previously
generating OpenAI API calls with no cost recorded. Token counts are now
captured from the actual API response and recorded via recordUsage after
successful embedding insertion. BYOK workspaces are excluded from billing.
Applies to all execution paths: direct, BullMQ, and Trigger.dev.
* fix(knowledge): simplify embedding billing — use calculateCost, return modelName
- Use calculateCost() from @/providers/utils instead of inline formula, consistent
with how LLM billing works throughout the platform
- Return modelName from GenerateEmbeddingsResult so billing uses the actual model
(handles custom Azure deployments) instead of a hardcoded fallback string
- Fix docs-chunker.ts empty-path fallback to satisfy full GenerateEmbeddingsResult type
* fix(knowledge): remove dev bypass from hasLiveSyncAccess
* chore(knowledge): rename sync-intervals to consts, fix stale TSDoc comment
* improvement(knowledge): extract MaxBadge component, capture billing config once per document
* fix(knowledge): add knowledge-base to usage_log_source enum, fix docs-chunker type
* fix(knowledge): generate migration for knowledge-base usage_log_source enum value
* fix(knowledge): add knowledge-base to usage_log_source enum via drizzle-kit
* fix(knowledge): fix search embedding test mocks, parallelize billing lookups
* fix(knowledge): warn when embedding model has no pricing entry
* fix(knowledge): call checkAndBillOverageThreshold after embedding usage
* fix(envvars): restore workflowUserId fallback for scheduled execution env var resolution
* test(envvars): add coverage for env var user resolution branches
* fix(modals): center modals in visible content area accounting for sidebar and panel
* fix(modals): address pr feedback — comment clarity and document panel assumption
* fix(modals): remove open/close animation from modal content
* fix(modals): center modals in visible content area accounting for sidebar and panel
* fix(modals): address pr feedback — comment clarity and document panel assumption
* refactor(stores): consolidate variables stores into stores/variables/
Move variable data store from stores/panel/variables/ to stores/variables/
since the panel variables tab no longer exists. Rename the modal UI store
to useVariablesModalStore to eliminate naming collision with the data store.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: remove unused workflowId variable in deleteVariable
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(blocks): add Credential block
* fix(blocks): explicit workspaceId guard in credential handler, clarify hasOAuthSelection
* feat(credential): add list operation with type/provider filters
* feat(credential): restrict to OAuth only, remove env vars and service accounts
* docs(credential): update screenshots
* fix(credential): remove stale isServiceAccount dep from overlayContent memo
* fix(credential): filter to oauth-only in handleComboboxChange matchedCred lookup
* feat(email): send plain personal email on abandoned checkout
* feat(email): lower free tier warning to 80% and add credits exhausted email
* feat(email): use wordmark in email header instead of icon-only logo
* fix(email): restore accidentally deleted social icons in email footer
* fix(email): prevent double email for free users at 80%, fix subject line
* improvement(emails): extract shared plain email styles and proFeatures constant, fix double email on 100% usage
* fix(email): filter subscription-mode checkout, skip already-subscribed users, fix preview text
* fix(email): use notifications type for onboarding followup to respect unsubscribe preferences
* fix(email): use limit instead of currentUsage in credits exhausted email body
* fix(email): use notifications type for abandoned checkout, clarify crosses80 comment
* chore(email): rename _constants.ts to constants.ts
* fix(email): use isProPlan to catch org-level subscriptions in abandoned checkout guard
* fix(email): align onboarding followup delay to 5 days for email/password users
* Directly query db for custom tool id
* Switch back to inline imports
* Fix lint
* Fix test
* Fix greptile comments
* Fix lint
* Make userId and workspaceId required
* Add back nullable userId and workspaceId fields
---------
Co-authored-by: Theodore Li <theo@sim.ai>
* feat(email): send onboarding followup email 3 days after signup
* fix(email): add trigger guard, idempotency key, and shared task ID constant
* fix(email): increase onboarding followup delay from 3 to 5 days
* feat(rootly): expand Rootly integration from 14 to 27 tools
Add 13 new tools: delete_incident, get_alert, update_alert,
acknowledge_alert, resolve_alert, create_action_item, list_action_items,
list_users, list_on_calls, list_schedules, list_escalation_policies,
list_causes, list_playbooks. Includes tool files, types, registry,
block definition with subBlocks/conditions/params, and docs.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(rootly): handle 204 No Content response for delete_incident
DELETE /v1/incidents/{id} returns 204 with empty body. Avoid calling
response.json() on success — return success/message instead.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
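A sketch of the 204 handling (a hypothetical transform assuming a fetch-style response object, not Rootly's exact tool code):

```typescript
// Only call response.json() when a body exists; DELETE returns
// 204 No Content, so parsing would throw on the success path.
async function transformDeleteResponse(response: {
  status: number
  json: () => Promise<unknown>
}): Promise<{ success: boolean; output: Record<string, unknown> }> {
  if (response.status === 204) {
    return { success: true, output: { message: 'Incident deleted' } }
  }
  return { success: true, output: (await response.json()) as Record<string, unknown> }
}
```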
* fix(rootly): remove non-TSDoc comments, add empty body to acknowledge_alert
Remove all inline section comments from block definition per CLAUDE.md
guidelines. Add explicit empty JSON:API body to acknowledge_alert POST
to prevent potential 400 from servers expecting a body with Content-Type.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(rootly): send empty body on resolve_alert, guard assignedToUserId parse
resolve_alert now sends { data: {} } instead of undefined when no
optional params are provided, matching the acknowledge_alert fix.
create_action_item now validates assignedToUserId is numeric before
parseInt to avoid silent NaN coercion.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(rootly): extract on-call relationships from JSON:API relationships/included
On-call user, schedule, and escalation policy are exposed as JSON:API
relationships, not flat attributes. Now extracts IDs from
item.relationships and looks up names from the included array.
Adds ?include=user,schedule,escalation_policy to the request URL.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
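The relationship lookup described above can be sketched roughly like this — shapes and attribute names are illustrative JSON:API conventions, not Rootly's exact schema:

```typescript
// Resolve a JSON:API relationship to a human-readable name by
// following relationships.<rel>.data into the `included` array.
interface JsonApiResource {
  id: string
  type: string
  attributes?: Record<string, unknown>
  relationships?: Record<string, { data?: { id: string; type: string } | null }>
}

function resolveRelationshipName(
  item: JsonApiResource,
  rel: string,
  included: JsonApiResource[],
  nameAttr = 'name'
): string | null {
  const ref = item.relationships?.[rel]?.data
  if (!ref) return null
  const match = included.find((r) => r.id === ref.id && r.type === ref.type)
  return (match?.attributes?.[nameAttr] as string | undefined) ?? null
}
```

This is why the request needs `?include=user,schedule,escalation_policy`: without it, the `included` array is empty and only opaque IDs are recoverable.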
* fix(rootly): remove last non-TSDoc comment from block definition
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* docs
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(agentmail): add AgentMail integration with 21 tools
* fix(agentmail): clear stale to field when switching to reply_message operation
* fix(agentmail): guard messageId and label remappings with operation checks
* fix(agentmail): clean up subBlock titles
* fix(agentmail): guard replyTo and thread label remappings with operation checks
* fix(agentmail): guard inboxIdParam remapping with operation check
* fix(agentmail): guard permanent, replyAll, and draftInReplyTo with operation checks
* feat(rootly): add Rootly incident management integration with 14 tools
* fix(rootly): address PR review feedback - PATCH method, totalCount, environmentIds
- Changed update_incident HTTP method from PUT to PATCH per Rootly API spec
- Fixed totalCount in all 9 list tools to use data.meta?.total_count from API response
- Added missing updateEnvironmentIds subBlock and params mapping for update_incident
* fix(rootly): add id to PATCH body and unchanged option to update status dropdown
- Include incident id in JSON:API PATCH body per spec requirement
- Add 'Unchanged' empty option to updateStatus dropdown to avoid accidental overwrites
* icon update
* improvement(rootly): complete block-tool alignment and fix validation gaps
- Add missing get_incident output fields (private, shortUrl, closedAt)
- Add missing block subBlocks: createPrivate, alertStatus, alertExternalId, listAlertsServices
- Add pageNumber subBlocks for all 9 list operations
- Add teams/environments filter subBlocks for list_incidents and list_alerts
- Add environmentIds subBlock for create_alert
- Add empty default options to all optional dropdowns (createStatus, createKind, listIncidentsSort, eventVisibility)
- Wire all new subBlocks in tools.config.params and inputs
- Regenerate docs
* fix(rootly): align tools with OpenAPI spec
- list_incident_types: use filter[name] instead of unsupported filter[search]
- list_severities: add missing search param (filter[search])
- create_incident: title is optional per API (auto-generated if null)
- update_incident: add kind, private, labels, incidentTypeIds,
functionalityIds, cancellationMessage params
- create/update/list incidents: add scheduled, in_progress, completed
status values
- create_alert: fix status description (only open/triggered on create)
- add_incident_event: add updatedAt to response
- block: add matching subBlocks and params for all new tool fields
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(rootly): final validation fixes from OpenAPI spec audit
- update_incident: change PATCH to PUT per OpenAPI spec
- index.ts: add types re-export
- types.ts: fix id fields to string | null (matches ?? null runtime)
- block: add value initializers to 4 dropdowns missing them
- registry: fix alphabetical order (incident_types before incidents)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* reorg
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(rippling): expand Rippling integration from 16 to 86 tools
* fix(rippling): add required constraints on name and data subBlocks for create operations
* fix(rippling): add subblock ID migrations for removed legacy fields
* fix(docs): add MANUAL-CONTENT markers to tailscale docs and regenerate
* fix(rippling): add missing response fields to tool transforms
Add fields found missing by validation agents:
- list_companies: physical_address
- list/get_supergroups: sub_group_type, read_only, parent, mutually_exclusive_key, cumulatively_exhaustive_default, include_terminated
- list/get/create/update_custom_object: native_category_id, managed_package_install_id, owner_id
- list/get/create/update_custom_app: icon, pages
- list/get/create/update_custom_object_field: managed_package_install_id
* fix(rippling): add missing block outputs and required data conditions
- Add 17 missing collection output keys (titles, workLocations, supergroups, etc.)
- Add delete/bulk/report output keys (deleted, results, report_id, etc.)
- Mark data subBlock required for create_business_partner, create_custom_app,
and create_custom_object_field (all have required params via data JSON spread)
- Add optional: true to get_current_user work_email and company_id outputs
* fix(rippling): add missing supergroup fields and fix validation issues
- Add 5 missing supergroup fields (allow_non_employees, can_override_role_states, priority, is_invisible, ignore_prov_group_matching) to types, list, and get tools
- Fix ok fallback from true to false in supergroup inclusion/exclusion member update tools
- Fix truthy check to null check for description param in create_custom_object_field
* fix(rippling): add missing custom page fields and structured custom setting responses
- Add 5 missing CustomPage fields (components, actions, canvas_actions, variables, media) to types and all page tools
- Replace opaque data blob with structured field mapping in create/update custom setting transforms
- Fix secret_value type cast consistency in list_custom_settings
* fix(rippling): add missing response fields, fix truthy checks, and improve UX
- Add 9 missing Worker fields (location, gender, date_of_birth, race, ethnicity, citizenship, termination_details, custom_fields, country_fields)
- Add 5 missing User fields (name, emails, phone_numbers, addresses, photos)
- Add worker expandable field to GroupMember types and all 3 member list tools
- Add 5 optional params to trigger_report_run (includeObjectIds, includeTotalRows, formatDateFields, formatCurrencyFields, outputType)
- Fix truthy checks to null checks in create_department, create/update_work_location
- Fix customObjectId subBlock label to say "API Name" instead of "ID"
* update docs
* fix(rippling): fix truthy checks, add missing fields, and regenerate docs
- Replace all `if (params.x)` with `if (params.x != null)` across 30+ tool files to prevent empty string/false/zero suppression
- Add expandable `parent` and `department_hierarchy` fields to department tools
- Add expandable `parent` field to team tools
- Add `company` expandable field to get_current_user
- Add `addressType` param to create/update work location tools
- Fix `secret_value` output type from 'json' to 'string' in list_custom_settings
- Regenerate docs for all 86 tools from current definitions
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
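The truthy-to-null-check fix boils down to this pattern (a minimal illustration, not the repo's actual param mapping):

```typescript
// `if (value)` silently drops '', false, and 0 from the request body;
// `value != null` only filters out null and undefined.
function applyParam(body: Record<string, unknown>, key: string, value: unknown): void {
  if (value != null) body[key] = value
}
```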
* fix(rippling): add all remaining spec fields and regenerate docs
- Add 6 advanced params to create_custom_object_field: required, rqlDefinition,
formulaAttrMetas, section, derivedFieldFormula, derivedAggregatedField
- Add 6 advanced params to update_custom_object_field: required, rqlDefinition,
formulaAttrMetas, section, derivedFieldFormula, nameFieldDetails
- Add 4 record output fields to all custom object record tools: created_by,
last_modified_by, owner_role, system_updated_at
- Add cursor param to get_current_user
- Add __meta response field to get_report_run
- Regenerate docs for all 86 tools
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(rippling): align all tools with OpenAPI spec
- Add __meta to 14 GET-by-ID tools (MetaResponse pattern)
- Fix supergroup tools: add filter to list_supergroups, remove invalid
cursor from 4 list endpoints, revert update members to PATCH with
Operations body
- Fix query_custom_object_records: use query/limit/cursor body params,
return cursor instead of nextLink
- Fix bulk_create: use rows_to_write per spec
- Fix create/update record body wrappers with externalId support
- Update types.ts param interfaces and block config mappings
- Add limit param mapping with Number() conversion in block config
- Regenerate docs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(rippling): address PR review comments — add dedicated subBlocks, fix data duplication, expand externalId condition
- Add dedicated apiName, businessPartnerGroupId, workerId, dataType subBlocks so required params are no longer hidden behind opaque data JSON
- Narrow `data: item` in custom object record tools to only include dynamic fields, avoiding duplication of enumerated fields
- Expand externalId subBlock condition to include create/update custom object record operations
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(rippling): remove data JSON required for ops with dedicated subBlocks
create_business_partner, create_custom_app, and create_custom_object_field
now have dedicated subBlocks for their required params, so the data JSON
field is supplementary (not required) for those operations.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(rippling): use rest-destructuring for all custom object record data output
The spec uses additionalProperties for custom fields at the top level,
not a nested `data` sub-object. Use the same rest-destructuring pattern
across all 6 custom object record tools so `data` only contains dynamic
fields, not duplicates of enumerated standard fields.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(rippling): make update_custom_object_record data param optional in type
Matches the tool's `required: false` — users may update only external_id
without changing data.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(rippling): add dedicated streetAddress subBlock for create_work_location
streetAddress is required by the tool but had no dedicated subBlock —
users had to include it in the data JSON. Now has its own required
subBlock matching the pattern used by all other required params.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(rippling): add allOrNothing subBlock for bulk operations
The bulk create/update/delete tools accept an optional allOrNothing
boolean param, but it had no subBlock and no way to be passed through
the block UI. Added as an advanced-mode dropdown with boolean coercion.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(rippling): derive spreadOps from DATA_OPS to prevent divergence
Replace the hardcoded spreadOps array with a derivation from the
file-level DATA_OPS constant minus non-spread operations. This ensures
new create/update operations added to DATA_OPS automatically get
spread behavior without needing a second manual update.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
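The derivation can be sketched as below — the constant names come from the commit, but the actual operation lists here are illustrative:

```typescript
// spreadOps is derived from DATA_OPS rather than hardcoded, so new
// create/update operations pick up spread behavior automatically.
const DATA_OPS = ['create_record', 'update_record', 'bulk_create_records', 'query_records'] as const
const NON_SPREAD_OPS = new Set<string>(['bulk_create_records', 'query_records'])
const spreadOps = DATA_OPS.filter((op) => !NON_SPREAD_OPS.has(op))
```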
* updated
* fix(rippling): replace generic JSON outputs with specific fields per API spec
- Extract file_url, expires_at, output_type from report run result blob
- Rename bulk create/update outputs to createdRecords/updatedRecords
- Fix list_custom_settings output key mismatch (settings → customSettings)
- Make data optional for update_custom_object_record in block
- Update block outputs to match new tool output fields
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix landing
* restore FF
* fix(rippling): add wandConfig, clean titles, and migrate legacy operation values
- Remove "(JSON)" suffix from all subBlock titles
- Add wandConfig with AI prompts for filter, expand, orderBy, query, data, records, and dataType fields
- Add OPERATION_VALUE_MIGRATIONS to migrate old operation values (list_employees → list_workers, etc.) preventing runtime errors on saved workflows
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(rippling): fix grammar typos and revert unnecessary migration
- Fix "a object" → "an object" in update/delete object category descriptions
- Revert OPERATION_VALUE_MIGRATIONS (unnecessary for low-usage integration)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat(landing): add interactive workspace preview tabs
Adds Tables, Files, Knowledge Base, Logs, and Scheduled Tasks preview
components to the landing hero, with sidebar nav items that switch to each view.
* test updates
* refactor(landing): clean up code quality issues in preview components
- Replace widthMultiplier with explicit width on PreviewColumn
- Replace key={i} with key={Icon.name} in connectorIcons
- Scope --c-active CSS variable to sidebar container, eliminating hardcoded #363636 duplication
- Replace '- - -' fallback with em dash
- Type onSelectNav as (id: SidebarView) removing the unsafe cast
* fix(landing): use stable index key in connectorIcons to avoid minification breakage
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(auth): allow google service account
* Add gmail support for google services
* Refresh creds on typing in impersonated email
* Switch to adding subblock impersonateUserEmail conditionally
* Directly pass subblock for impersonateUserEmail
* Fix lint
* Update documentation for google service accounts
* Fix lint
* Address comments
* Remove hardcoded scopes, remove orphaned migration script
* Simplify subblocks for google service account
* Fix lint
* Fix build error
* Fix documentation scopes listed for google service accounts
* Fix issue with credential selector, remove bigquery and ad support
* create credentialCondition
* Shift conditional render out of subblock
* Simplify subblock values
* Fix security message
* Handle tool service accounts
* Address bugbot
* Fix lint
* Fix manual credential input not showing impersonate
* Fix tests
* Allow watching param id and subblock ids
* Fix bad test
---------
Co-authored-by: Theodore Li <theo@sim.ai>
* improvement(providers): audit and update all provider model definitions
* fix(providers): add maxOutputTokens to azure/o3 and azure/o4-mini
* fix(providers): move maxOutputTokens inside capabilities for azure models
* improvement(workflow): seed start block on server side
* add creating state machine for optimistic switch
* fix workspace switch
* address comments
* address error handling at correct level
* fix: allow Bedrock provider to use AWS SDK default credential chain
Remove hard requirement for explicit AWS credentials in Bedrock provider.
When access key and secret key are not provided, the AWS SDK automatically
falls back to its default credential chain (env vars, instance profile,
ECS task role, EKS IRSA, SSO).
Closes #3694
Signed-off-by: majiayu000 <1835304752@qq.com>
* fix: add partial credential guard for Bedrock provider
Reject configurations where only one of bedrockAccessKeyId or
bedrockSecretKey is provided, preventing silent fallback to the
default credential chain with a potentially different identity.
Add tests covering all credential configuration scenarios.
Signed-off-by: majiayu000 <1835304752@qq.com>
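The two commits above combine into roughly this credential-resolution logic (a hypothetical pure helper, not the provider's real code): both keys yield explicit credentials, neither defers to the AWS SDK default chain, and exactly one is rejected to avoid a silent identity switch.

```typescript
interface AwsCredentials {
  accessKeyId: string
  secretAccessKey: string
}

// Returning undefined lets the AWS SDK fall back to its default
// credential chain (env vars, instance profile, ECS task role, SSO).
function resolveBedrockCredentials(
  accessKeyId?: string,
  secretKey?: string
): AwsCredentials | undefined {
  if (accessKeyId && secretKey) return { accessKeyId, secretAccessKey: secretKey }
  if (!accessKeyId && !secretKey) return undefined
  throw new Error('Provide both bedrockAccessKeyId and bedrockSecretKey, or neither')
}
```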
* fix: clean up bedrock test lint and dead code
Remove unused config parameter and dead _lastConfig assignment
from mock factory. Break long mockReturnValue chain to satisfy
biome line-length rule.
Signed-off-by: majiayu000 <1835304752@qq.com>
* fix: address greptile review feedback on PR #3708
Use BedrockRuntimeClientConfig from SDK instead of inline type.
Add default return value for prepareToolsWithUsageControl mock.
Signed-off-by: majiayu000 <1835304752@qq.com>
* feat(providers): server-side credential hiding for Azure and Bedrock
* fix(providers): revert Bedrock credential fields to required with original placeholders
* fix(blocks): add hideWhenEnvSet to getProviderCredentialSubBlocks for Azure and Bedrock
* fix(agent): use getProviderCredentialSubBlocks() instead of duplicating credential subblocks
* fix(blocks): consolidate Vertex credential into shared factory with basic/advanced mode
* fix(types): resolve pre-existing TypeScript errors across auth, secrets, and copilot
* lint
* improvement(blocks): make Vertex AI project ID a password field
* fix(blocks): preserve vertexCredential subblock ID for backwards compatibility
* fix(blocks): follow canonicalParamId pattern correctly for vertex credential subblocks
* fix(blocks): keep vertexCredential subblock ID stable to preserve saved workflow state
* fix(blocks): add canonicalParamId to vertexCredential basic subblock to complete the swap pair
* fix types
* more types
---------
Signed-off-by: majiayu000 <1835304752@qq.com>
Co-authored-by: majiayu000 <1835304752@qq.com>
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
* fix: specify authTagLength in AES-GCM decipheriv calls
Fixes missing authTagLength parameter in createDecipheriv calls using
AES-256-GCM mode. Without explicit tag length specification, the
application may be tricked into accepting shorter authentication tags,
potentially allowing ciphertext spoofing.
CWE-310: Cryptographic Issues (gcm-no-tag-length)
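A round-trip sketch of the fixed pattern, using the `iv:cipher:tag` layout this repo's encryption format uses (function names and the 12-byte IV here are assumptions, not the repo's exact helpers):

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto'

// Both sides pass authTagLength explicitly, so a truncated auth tag
// is rejected instead of silently accepted.
function encrypt(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12)
  const cipher = createCipheriv('aes-256-gcm', key, iv, { authTagLength: 16 })
  const data = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()])
  return `${iv.toString('hex')}:${data.toString('hex')}:${cipher.getAuthTag().toString('hex')}`
}

function decrypt(payload: string, key: Buffer): string {
  const [ivHex, dataHex, tagHex] = payload.split(':')
  const decipher = createDecipheriv('aes-256-gcm', key, Buffer.from(ivHex, 'hex'), {
    authTagLength: 16,
  })
  decipher.setAuthTag(Buffer.from(tagHex, 'hex'))
  return Buffer.concat([
    decipher.update(Buffer.from(dataHex, 'hex')),
    decipher.final(),
  ]).toString('utf8')
}
```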
* fix: specify authTagLength on createCipheriv calls for AES-GCM consistency
Complements #3881 by adding explicit authTagLength: 16 to the encrypt
side as well, ensuring both cipher and decipher specify the tag length.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor: clean up crypto modules
- Fix error: any → error: unknown with proper type guard in encryption.ts
- Eliminate duplicate iv.toString('hex') calls in both encrypt functions
- Remove redundant string split in decryptApiKey (was splitting twice)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* new turborepo version
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Lakee Sivaraya <71339072+lakeesiv@users.noreply.github.com>
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: Siddharth Ganesan <33737564+Sg312@users.noreply.github.com>
Co-authored-by: NLmejiro <kuroda.k1021@gmail.com>
* improvement(triggers): add tags to all trigger.dev task invocations
* fix(triggers): prefix unused type param in buildTags
* fix(triggers): remove unused type param from buildTags
* feat(providers): add Fireworks AI provider integration
* fix(providers): remove unused logger and dead modelInfo from fireworks
* lint
* feat(providers): add Fireworks BYOK support and official icon
* fix(providers): add workspace membership check and remove shared fetch cache for fireworks models
* improvement(attio): validate integration, fix event bug, add missing tool and triggers
* fix(attio): wire new trigger extractors into dispatcher, trim targetUrl
Add extractAttioListData and extractAttioWorkspaceMemberData dispatch
branches in utils.server.ts so the four new triggers return correct
outputs instead of falling through to generic extraction.
Also add missing .trim() on targetUrl in update_webhook.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* improvement(workflows): replace Zustand workflow sync with React Query as single source of truth
* fix(workflows): address PR review feedback — sandbox execution, hydration deadlock, test mock, copy casing
* lint
* improvement(workflows): adopt skipToken over enabled+as-string for type-safe conditional queries
* improvement(workflows): remove dead complexity, fix mutation edge cases
- Throw on state PUT failure in useCreateWorkflow instead of swallowing
- Use Map for O(1) lookups in duplicate/export loops (3 hooks)
- Broaden invalidation scope in update/delete mutations to lists()
- Switch workflow-block to useWorkflowMap for direct ID lookup
- Consolidate use-workflow-operations to single useWorkflowMap hook
- Remove workspace transition guard (sync body, unreachable timeout)
- Make switchToWorkspace synchronous (remove async/try-catch/finally)
* fix(workflows): resolve cold-start deadlock on direct URL navigation
loadWorkflowState used hydration.workspaceId (null on cold start) to
look up the RQ cache, causing "Workflow not found" even when the
workflow exists in the DB. Now falls back to getWorkspaceIdFromUrl()
and skips the cache guard when the cache is empty (letting the API
fetch proceed).
Also removes the redundant isRegistryReady guard in workflow.tsx that
blocked setActiveWorkflow when hydration.workspaceId was null.
* fix(ui): prevent flash of empty state while workflows query is pending
Dashboard and EmbeddedWorkflow checked workflow list length before
the RQ query resolved, briefly showing "No workflows" or "Workflow
not found" on initial load. Now gates on isPending first.
* fix(workflows): address PR review — await description update, revert state PUT throw
- api-info-modal: use mutateAsync for description update so errors
are caught by the surrounding try/catch instead of silently swallowed
- useCreateWorkflow: revert state PUT to log-only — the workflow is
already created in the DB, throwing rolls back the optimistic entry
and makes it appear the creation failed when it actually succeeded
* move folders over to react query native, restructure passage of data
* pass signal correctly
* fix types
* fix workspace id
* address comment
* soft deletion occurring
---------
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
* feat(infra): add dev environment support
* fix(ci): push :dev ECR tag when building from dev branch
* fix(feature-flags): simplify isHosted subdomain check
* fix(ci,feature-flags): guard URL parse, fix dev AWS creds in images.yml
* improvement(ui): fix nav loading flash, skeleton mismatches, and React anti-patterns across resource pages
- Convert knowledge, files, tables, scheduled-tasks, and home page.tsx files from async server components to simple client re-exports, eliminating the loading.tsx flash on every navigation
- Add client-side permission redirects (usePermissionConfig) to knowledge, files, and tables components to replace server-side checks
- Fix knowledge loading.tsx skeleton column count (6→7) and tables loading.tsx (remove phantom checkbox column)
- Fix connector document live updates: use isConnectorSyncingOrPending instead of status === 'syncing' so polling activates immediately after connector creation
- Remove dead chunk-switch useEffect in ChunkEditor (redundant with key prop remount)
- Replace useState+useEffect debounce with useDebounce hook in document.tsx
- Replace useRef+useEffect URL init with lazy useState initializers in document.tsx and logs.tsx
- Make handleToggleEnabled optimistic in document.tsx (cache first, onError rollback)
- Replace mutate+new Promise wrapper with mutateAsync+try/catch in base.tsx
- Fix schedule-modal.tsx: replace 15-setter useEffect with useState lazy initializers + key prop remount; wrap parseCronToScheduleType in useMemo
- Fix logs search: eliminate mount-only useEffect with eslint-disable by passing initialQuery to useSearchState; parse query once via shared initialParsed state
- Add useWorkspaceFileRecord hook to workspace-files.ts; refactor FileViewer to self-fetch
- Fix value: any → value: string in useTagSelection and collaborativeSetTagSelection
- Fix knowledge-tag-filters.tsx: pass '' instead of null when filters are cleared (type safety)
* fix(kb): use active scope in useWorkspaceFileRecord to share cache with useWorkspaceFiles
* fix(logs,kb,tasks): lazy-init useRef for URL param, add cold-path docs to useWorkspaceFileRecord, document key remount requirement in ScheduleModal
* fix(files): redirect to files list when file record not found in viewer
* revert(files): remove useEffect redirect from file-viewer, keep simple null return
* fix(scheduled-tasks): correct useMemo dep from schedule?.cronExpression to schedule
* feat(logs): add copy link and deep link support for log entries
* fix(logs): move Link icon to emcn and handle clipboard rejections
* feat(notifications): use executionId deep-link for View Log URLs
Switch buildLogUrl from ?search= to ?executionId= so email and Slack
'View Log' buttons open the logs page with the specific execution
auto-selected and the details panel expanded.
* fix(knowledge): fix document processing stuck in processing state
* fix(knowledge): use Promise.allSettled for document dispatch and fix Copilot OAuth context
- Change Promise.all to Promise.allSettled in processDocumentsWithQueue so
one failed dispatch doesn't abort the entire batch
- Add writeOAuthReturnContext before showing LazyOAuthRequiredModal from
Copilot tools so useOAuthReturnForWorkflow can handle the return
- Add consumeOAuthReturnContext on modal close to clean up stale context
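The Promise.all to Promise.allSettled change can be sketched as below; `dispatchDocument` and the partition helper are illustrative stand-ins, not the actual queue code:

```typescript
type Outcome = { ok: string[]; failed: string[] }

/** Pair each id with its settled result; a single rejection never aborts the batch. */
function partitionSettled(ids: string[], results: PromiseSettledResult<unknown>[]): Outcome {
  const out: Outcome = { ok: [], failed: [] }
  results.forEach((r, i) => (r.status === 'fulfilled' ? out.ok : out.failed).push(ids[i]))
  return out
}

async function dispatchAll(
  ids: string[],
  dispatchDocument: (id: string) => Promise<void>
): Promise<Outcome> {
  // allSettled waits for every dispatch and reports per-document outcomes,
  // instead of rejecting the whole batch on the first failure like Promise.all.
  const results = await Promise.allSettled(ids.map((id) => dispatchDocument(id)))
  return partitionSettled(ids, results)
}
```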
* fix(knowledge): fix type error in useCredentialRefreshTriggers call
Pass empty string instead of undefined for connectorProviderId fallback
to match the hook's string parameter type.
* upgrade turbo
* fix(knowledge): fix type error in connectors-section useCredentialRefreshTriggers call
Same string narrowing fix as add-connector-modal — pass empty string
fallback for providerId.
* feat(logs): add copy link and deep link support for log entries
* fix(logs): fetch next page when deep linked log is beyond initial page
* fix(logs): move Link icon to emcn and handle clipboard rejections
* fix(logs): track isFetching reactively and drop empty-list early-return
- Remove the guard that prevented clearing the
pending ref when filters return no results
- Use isFetching directly in the condition and add it to
the effect deps so the effect re-triggers after a background refetch

* fix(logs): guard deep-link ref clear until query has succeeded
Only clear pendingExecutionIdRef when the query status is 'success',
preventing premature clearing before the initial fetch completes.
On mount, the query is disabled (isInitialized.current starts false),
so hasNextPage is false but no data has loaded yet — the ref was being
cleared in the same effect pass that set it.
* fix(logs): guard fetchNextPage call until query has succeeded
Add logsQuery.status === 'success' to the fetchNextPage branch so it
mirrors the clear branch. On mount the query is disabled (isFetching is
false, status is pending), causing the effect to call fetchNextPage()
before the query is initialized — now both branches require success.
* feat(profound): add Profound AI visibility and analytics integration
* fix(profound): fix import ordering and JSON formatting for CI lint
* fix(profound): gate metrics mapping on current operation to prevent stale overrides
* fix(profound): guard JSON.parse on filters, fix offset=0 falsy check, remove duplicate prompt_answers in FILTER_OPS
* lint
* fix(docs): fix import ordering and trailing newline for docs lint
* fix(scripts): sort generated imports to match Biome's organizeImports order
* fix(profound): use != null checks for limit param across all tools
* fix(profound): flatten block output type to 'json' to pass block validation test
* fix(profound): remove invalid 'required' field from block inputs (not part of ParamConfig)
* fix(profound): rename tool files from kebab-case to snake_case for docs generator compatibility
* lint
* fix(docs): let biome auto-fix import order, revert custom sort in generator
* fix(landing): fix import order in sim icon-mapping via biome
* fix(scripts): match Biome's exact import sort order in docs generator
* fix(generate-docs): produce Biome-compatible JSON output
The generator wrote multi-line arrays for short string arrays (like tags)
and omitted trailing newlines, causing Biome format check failures in CI.
Post-process integrations.json to collapse short arrays onto single lines
and add trailing newlines to both integrations.json and meta.json.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
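The post-processing step can be sketched roughly as follows; the regex, the 60-character width threshold, and the function name are assumptions about the approach, not the generator's exact code:

```typescript
/**
 * Collapse short multi-line string arrays (like "tags") onto one line and
 * guarantee a trailing newline, so generated JSON matches Biome's format.
 */
function collapseShortArrays(json: string, maxWidth = 60): string {
  const collapsed = json.replace(
    // Match a [ ... ] block whose items are all quoted strings, one per line.
    /\[\n(\s+"(?:[^"\\]|\\.)*"(?:,\n\s+"(?:[^"\\]|\\.)*")*)\n\s*\]/g,
    (match, items: string) => {
      const oneLine = `[${items.split(/,\n\s+/).join(', ').trim()}]`
      // Only collapse when the single-line form stays short.
      return oneLine.length <= maxWidth ? oneLine : match
    }
  )
  return collapsed.endsWith('\n') ? collapsed : `${collapsed}\n`
}
```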
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(logs): add additional metadata for workflow execution logs
* Revert "Feat(logs) upgrade mothership chat messages to error (#3772)"
This reverts commit 9d1b9763c5.
* Fix lint, address greptile comments
* improvement(sidebar): expand sidebar by hovering and clicking the edge (#3830)
* improvement(sidebar): expand sidebar by hovering and clicking the edge
* improvement(sidebar): add keyboard shortcuts for new workflow/task, center search modal, fix edge ARIA
* improvement(sidebar): use Tooltip.Shortcut for inline shortcut display
* fix(sidebar): change new workflow shortcut from Mod+Shift+W to Mod+Shift+P to avoid browser close-window conflict
* fix(hotkeys): fall back to event.code for international keyboard layout compatibility
* fix(sidebar): guard add-workflow shortcut with canEdit and isCreatingWorkflow checks
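The event.code fallback for international layouts can be sketched as below; the Mod+Shift+P shortcut matches the commit above, while the helper name and event shape are a minimal DOM KeyboardEvent subset chosen for illustration:

```typescript
interface KeyLike {
  key: string
  code: string
  metaKey: boolean
  ctrlKey: boolean
  shiftKey: boolean
}

/**
 * On some layouts event.key for a letter shortcut is a different character
 * (e.g. Greek or Cyrillic), so fall back to the physical-key code "KeyP".
 */
function matchesModShiftP(e: KeyLike): boolean {
  const mod = e.metaKey || e.ctrlKey
  if (!mod || !e.shiftKey) return false
  return e.key.toLowerCase() === 'p' || e.code === 'KeyP'
}
```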
* feat(ui): handle image paste (#3826)
* feat(ui): handle image paste
* Fix lint
* Fix type error
---------
Co-authored-by: Theodore Li <theo@sim.ai>
* feat(files): interactive markdown checkbox toggling in preview (#3829)
* feat(files): interactive markdown checkbox toggling in preview
* fix(files): handle ordered-list checkboxes and fix index drift
* lint
* fix(files): remove counter offset that prevented checkbox toggling
* fix(files): apply task-list styling to ordered lists too
* fix(files): render single pass when interactive to avoid index drift
* fix(files): move useMemo above conditional return to fix Rules of Hooks
* fix(files): pass content directly to preview when not streaming to avoid stale frame
* improvement(home): position @ mention popup at caret and fix icon consistency (#3831)
* improvement(home): position @ mention popup at caret and fix icon consistency
* fix(home): pin mirror div to document origin and guard button anchor
* chore(auth): restore hybrid.ts to staging
* improvement(ui): sidebar (#3832)
* Fix logger tests
* Add metadata to mothership logs
---------
Co-authored-by: Theodore Li <theo@sim.ai>
Co-authored-by: Waleed <walif6@gmail.com>
Co-authored-by: Theodore Li <theo@sim.ai>
* fix(sidebar): cmd+click opens in new tab, shift+click for range select
* comment cleanup
* fix(sidebar): drop stale metaKey param from workflow and task selection hooks
* feat(file-viewer): add pan and zoom to image preview
* fix(viewer): fix sort key mapping, disable load-more on sort, hide status dots when menu open
* fix(file-viewer): prevent scroll bleed and zoom button micro-pans
* fix(file-viewer): use exponential zoom formula to prevent zero/negative multiplier
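The exponential zoom fix can be sketched as below; the rate constant `k` and the clamp bounds are illustrative values, not the viewer's actual settings:

```typescript
/**
 * Multiplying by Math.exp(k * steps) keeps the scale strictly positive,
 * whereas a linear 1 + k * steps can hit zero or go negative after enough
 * zoom-out steps. Clamp to a sane range so repeated zooming stays usable.
 */
function zoomScale(base: number, steps: number, k = 0.2): number {
  const next = base * Math.exp(k * steps)
  return Math.min(8, Math.max(0.125, next))
}
```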
* improvement(tables): improve table filtering UX
- Replace popover filter with persistent inline panel below toolbar
- Add AND/OR toggle between filter rules (shown in Where label slot)
- Sync filter panel state from applied filter on open
- Show filter button active state when filter is applied or panel is open
- Use readable operator labels matching dropdown options
- Add Clear filters button (shown only when filter is active)
- Close filter panel when last rule is removed via X
- Fix empty gap rows appearing in filtered results by skipping position gap rendering when filter is active
- Add toggle mode to ResourceOptionsBar for inline panel pattern
- Memoize FilterRuleRow for perf, fix filterTags key collision, remove dead filterActiveCount prop
* fix(table-filter): use ref to stabilize handleRemove/handleApply callbacks
Reading rules via ref instead of closure eliminates rules from useCallback
dependency arrays, keeping callbacks stable across rule edits and preserving
the memo() benefit on FilterRuleRow.
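The rulesRef pattern described above can be sketched without React: the handler closes over a mutable ref box instead of the rules value, so its identity never changes while it always reads the latest rules. In the component the box is a `useRef` kept in sync on render; here it is a plain object, and the names are illustrative:

```typescript
type Rule = { field: string; op: string; value: string }

function createFilterHandlers(rulesRef: { current: Rule[] }) {
  // Created once; stable across rule edits because it reads through the ref,
  // so rules never appears in a dependency array.
  const handleApply = (): number => rulesRef.current.length
  return { handleApply }
}
```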
* improvement(tables,kb): remove hacky patterns, fix KB filter popover width
- Remove non-TSDoc comment from table-filter (rulesRef pattern is self-evident)
- Simplify SearchSection: remove setState-during-render anti-pattern; controlled
input binds directly to search.value/onChange (simpler and correct)
- Reduce KB filter popover from w-[320px] to w-[200px]; tag filter uses vertical
layout so narrow width works; Status-only case is now appropriately compact
* feat(knowledge): add sort and filter to KB list page
Sort dropdown: name, documents, tokens, created, last updated — pre-sorted
externally before passing rows to Resource. Active sort highlights the Sort
button; clear resets to default (created desc).
Filter popover: filter by connector status (All / With connectors /
Without connectors). Active filter shown as a removable tag in the toolbar.
* feat(files): add sort and filter to files list page
* feat(scheduled-tasks): add sort and filter to scheduled tasks page
* fix(table-filter): use explicit close handler instead of toggle
* improvement(files,knowledge): replace manual debounce with useDebounce hook and use type guards for file filtering
* fix(resource): prevent popover from inheriting anchor min-width
* feat(tables): add sort to tables list page
* feat(knowledge): add content and owner filters to KB list
* feat(scheduled-tasks): add status and health filters
* feat(files): add size and uploaded-by filters to files list
* feat(tables): add row count, owner, and column type filters
* improvement(scheduled-tasks): use combobox filter panel matching logs UI style
* improvement(knowledge): use combobox filter panel matching logs UI style
* improvement(files): use combobox filter panel matching logs UI style
Replaces button-list filters with Combobox-based multi-select sections for file type, size, and uploaded-by filters, aligning the panel with the logs page filter UI.
* improvement(tables): use combobox filter panel matching logs UI style
* feat(settings): add sort to recently deleted page
Add a sort dropdown next to the search bar allowing users to sort by deletion date (default, newest first), name (A–Z), or type (A–Z).
* feat(logs): add sort to logs page
* improvement(knowledge): upgrade document list filter to combobox style
* fix(resources): fix missing imports, memoization, and stale refs across resource pages
* improvement(tables): remove column type filter
* fix(resources): fix filter/sort correctness issues from audit
* fix(chunks): add server-side sort to document chunks API
Chunk sort was previously done client-side on a single page of
server-paginated data, which only reordered the current page.
Now sort params (sortBy, sortOrder) flow through the full stack:
types → service → API route → query hook → useDocumentChunks → document.tsx.
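The server-side sort can be illustrated with a minimal comparator: ordering the full result set before pagination means page N is a slice of the globally sorted list, not a locally re-sorted page. The field names below are hypothetical chunk columns:

```typescript
type Chunk = { index: number; tokenCount: number }
type SortOrder = 'asc' | 'desc'

/** Sort the full chunk set by a numeric column before applying pagination. */
function sortChunks(chunks: Chunk[], sortBy: keyof Chunk, sortOrder: SortOrder): Chunk[] {
  const dir = sortOrder === 'asc' ? 1 : -1
  return [...chunks].sort((a, b) => (a[sortBy] - b[sortBy]) * dir)
}
```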
* perf(resources): memoize filterContent JSX across all resource pages
Resource is wrapped in React.memo, so an unstable filterContent reference
on every parent re-render defeats the memo. Wrap filterContent in useMemo
with correct deps in all 6 pages (files, tables, scheduled-tasks, knowledge,
base, document).
* fix(resources): add missing sort options for all visible columns
Every column visible in a resource table should be sortable. Three pages
had visible columns with no sort support:
- files.tsx: add 'owner' sort (member name lookup)
- scheduled-tasks.tsx: add 'schedule' sort (localeCompare on description)
- knowledge.tsx: add 'connectors' (count) and 'owner' (member name) sorts
Also add 'members' to processedKBs deps in knowledge.tsx since owner
sort now reads member names inside the memo.
* whitelabeling updates, sidebar fixes, files bug fix
* increased type safety
* pr fixes
* fix(import): dedup workflow name (#3813)
* feat(concurrency): bullmq based concurrency control system (#3605)
* feat(concurrency): bullmq based queueing system
* fix bun lock
* remove manual execs off queues
* address comments
* fix legacy team limits
* cleanup enterprise typing code
* inline child triggers
* fix status check
* address more comments
* optimize reconciler scan
* remove dead code
* add to landing page
* Add load testing framework
* update bullmq
* fix
* fix headless path
---------
Co-authored-by: Theodore Li <teddy@zenobiapay.com>
* fix(linear): add default null for after cursor (#3814)
* fix(knowledge): reject non-alphanumeric file extensions from document names (#3816)
* fix(knowledge): reject non-alphanumeric file extensions from document names
* fix(knowledge): improve error message when extension is non-alphanumeric
* fix(security): SSRF, access control, and info disclosure (#3815)
* fix(security): scope copilot feedback GET endpoint to authenticated user
Add WHERE clause to filter feedback records by the authenticated user's
ID, preventing any authenticated user from reading all users' copilot
interactions, queries, and workflow YAML (IDOR / CWE-639).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(smtp): add SSRF validation and genericize network error messages
Prevent SSRF via user-controlled smtpHost by validating with
validateDatabaseHost before creating the nodemailer transporter.
Collapse distinct network error messages (ECONNREFUSED, ECONNRESET,
ETIMEDOUT) into a single generic message to prevent port-state leakage.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(security): add SSRF validation to SFTP/SSH and access control to workspace invitations
Add `validateDatabaseHost` checks to SFTP and SSH connection utilities to
block connections to private/reserved IPs and localhost, matching the
existing pattern used by all database tools. Add authorization check to
the workspace invitation GET endpoint so only the invitee or a workspace
admin can view invitation details.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(smtp): restore SMTP response code handling for post-connection errors
SMTP 4xx/5xx response codes are application-level errors (invalid
recipient, mailbox full, server error) unrelated to the SSRF hardening
goal. Restore response code differentiation and logging to preserve
actionable user-facing error messages.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(security): use session email directly instead of extra DB query
Addresses PR review feedback — align with the workspace invitation
route pattern by using session.user.email instead of re-fetching
from the database.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* lint
* fix(auth): revert lint autofix that broke hasExternalApiCredentials return type
Biome auto-fixed `return auth !== null && auth.startsWith(...)` to
`return auth?.startsWith(...)` which returns `boolean | undefined`,
not `boolean`, causing a TypeScript build failure.
* fix(smtp): pin resolved IP to prevent DNS rebinding (TOCTOU)
Use the pre-resolved IP from validateDatabaseHost instead of the
original hostname when creating the nodemailer transporter. Set
servername to the original hostname to preserve TLS SNI validation.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor(security): extract createPinnedLookup helper for DNS rebinding prevention
Extract reusable createPinnedLookup from secureFetchWithPinnedIP so
non-HTTP transports (SSH, SFTP, IMAP) can pin resolved IPs at the
socket level. SMTP route uses host+servername pinning instead since
nodemailer doesn't reliably pass lookup to both secure/plaintext paths.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
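The createPinnedLookup idea can be sketched as below: a custom lookup function handed to the socket layer so every connection goes to the already-validated IP, closing the rebinding window between validation and connect. The signature mirrors Node's dns lookup callback shape; this is a sketch of the approach, not the actual helper:

```typescript
type LookupCallback = (err: Error | null, address: string, family: number) => void

/** Return a lookup that always answers with the pre-validated address. */
function createPinnedLookup(resolvedIp: string, family: 4 | 6 = 4) {
  return (_hostname: string, _options: unknown, callback: LookupCallback): void => {
    // Never re-resolve: the hostname is ignored in favor of the pinned IP.
    callback(null, resolvedIp, family)
  }
}
```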
* fix(security): pin IMAP connections to validated resolved IP
Pass the resolved IP from validateDatabaseHost to ImapFlow as host,
with the original hostname as servername for TLS SNI verification.
Closes the DNS TOCTOU rebinding window.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* lint
* fix(auth): revert lint autofix on hasExternalApiCredentials return type
Also pin SFTP/SSH connections to validated resolved IP to prevent DNS rebinding.
* fix(security): short-circuit admin check when caller is invitee
Skip the hasWorkspaceAdminAccess DB query when the caller is already
the invitee, avoiding an unnecessary round-trip. Aligns with the org
invitation route pattern.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(worker): dockerfile + helm updates (#3818)
* fix(worker): dockerfile + helm updates
* address comments
* update dockerfile (#3819)
* fix dockerfile
* fix(security): pentest remediation — condition escaping, SSRF hardening, ReDoS protection (#3820)
* fix(executor): escape newline characters in condition expression strings
Unescaped newline/carriage-return characters in resolved string values
cause unterminated string literals in generated JS, crashing condition
evaluation with a SyntaxError.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
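The failure mode can be demonstrated directly: a raw newline inside a generated quoted literal is an unterminated string, while an escaped form compiles. Using JSON.stringify here is an assumption about the approach, not the executor's exact code:

```typescript
/**
 * Build a JS comparison expression from a resolved string value.
 * JSON.stringify escapes \n, \r, quotes, and backslashes, so the generated
 * source stays a single valid string literal.
 */
function toJsExpression(resolved: string): string {
  return `${JSON.stringify(resolved)} === "done"`
}
```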
* fix(security): prevent ReDoS in guardrails regex validation
Add safe-regex2 to reject catastrophic backtracking patterns before
execution and cap input length at 10k characters.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(security): SSRF localhost hardening and regex DoS protection
Block localhost/loopback URLs in hosted environments using isHosted flag
instead of allowHttp. Add safe-regex2 validation and input length limits
to regex guardrails to prevent catastrophic backtracking.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(security): validate regex syntax before safety check
Move new RegExp() before safe() so invalid patterns get a proper syntax
error instead of a misleading "catastrophic backtracking" message.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(security): address PR review feedback
- Hoist isLocalhost && isHosted guard to single early-return before
protocol checks, removing redundant duplicate block
- Move regex syntax validation (new RegExp) before safe-regex2 check
so invalid patterns get proper syntax error instead of misleading
"catastrophic backtracking" message
* fix(security): remove input length cap from regex validation
The 10k character cap would block legitimate guardrail checks on long
LLM outputs. Input length doesn't affect ReDoS risk — the safe-regex2
pattern check already prevents catastrophic backtracking.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tests): mock isHosted in input-validation and function-execute tests
Tests that assert self-hosted localhost behavior need isHosted=false,
which is not guaranteed in CI where NEXT_PUBLIC_APP_URL is set to the
hosted domain.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* improvement(worker): configuration defaults (#3821)
* improvement(worker): configuration defaults
* update readmes
* realtime curl import
* improvement(tour): remove auto-start, only trigger on explicit user action (#3823)
* fix(mcp): use correct modal for creating workflow MCP servers in deploy (#3822)
* fix(mcp): use correct modal for creating workflow MCP servers in deploy
* fix(mcp): show workflows field during loading and when empty
* mock course
* fix(db): use bigint for token counter columns in user_stats (#3755)
* mock course
* updates
* updated X handle for emir
* cleanup: audit and clean academy implementation
* fix(academy): add label to ValidationRule, fix quiz gating, simplify getRuleMessage
* cleanup: remove unnecessary comments across academy files
* refactor(academy): simplify abstractions and fix perf issues
* perf(academy): convert course detail page to server component with client island
* fix(academy): null-safe canAdvance, render exercise instructions, remove stale comments
* fix(academy): remove orphaned migration, fix getCourseById, clean up comments
- Delete 0181_academy_certificate.sql (orphaned duplicate not in journal)
- Add getCourseById() to content/index.ts; use it in certificates API
(was using getCourse which searches by slug, not stable id)
- Remove JSX comments from catalog page
- Remove redundant `passed` recomputation in LessonQuiz
* chore(db): regenerate academy_certificate migration with drizzle-kit
* chore: include blog mdx and components changes
* fix(blog): correct cn import path
* fix(academy): constrain progress bar to max-w-3xl with proper padding
* feat(academy): show back-to-course button on first lesson
* fix(academy): force dark theme on all /academy routes
* content(academy): rewrite sim-foundations course with full 6-module curriculum
* fix(academy): correct edge handles, quiz explanation, and starter mock outputs
- Fix Exercise 2 initial edge handles: 'starter-1-source'/'agent-1-target' → 'source'/'target' (React Flow actual IDs)
- Fix M1-L4 Q4 quiz explanation: remove non-existent Ctrl/Cmd+D and Alt+drag shortcuts
- Add starter mock output to all exercises so run animation shows feedback on the first block
* refine(academy): fix inaccurate content and improve exercise clarity
- Fix Exercise 3: replace hardcoded <agent-1.content> (invalid UUID-based ref) with reference picker instructions
- Fix M4 Quiz Q5: Loop block (subflow container) is correct answer, not the Workflow block
- Fix M4 Quiz Q4: clarify fan-out vs Parallel block distinction in explanation
- Fix M4-L2 video description: accurately describe Loop and Parallel subflow blocks
- Fix M2 Quiz Q3: make response format question conceptual rather than syntax-specific
- Improve Exercise 4 branching instructions: clarify top=true / bottom=false output handles
- Improve Final Project instructions: step-by-step numbered flow
* fix(academy): remove double border on quiz question cards
* fix(academy): single scroll container on lesson pages — remove nested flex scroll
* fix(academy): remove min-h-screen from root layout — fixes double scrollbar on lesson pages
* fix(academy): use fixed inset-0 on lesson page to eliminate document-level scrollbar
* fix(academy): replace sr-only radio/checkbox inputs with buttons to prevent scroll-on-focus; restore layout min-h-screen
* improvement(academy): polish, security hardening, and certificate claim UI
- Replace raw localStorage with BrowserStorage utility in local-progress
- Pre-compute slug/id Maps in content/index for O(1) course lookups
- Move blockMap construction into edge_exists branch only in validation
- Extract navBtnClass constant and MetaRow/formatDate helpers in UI
- Add rate limiting, server-side completion verification, audit logging, and nanoid cert numbers to certificate issuance endpoint
- Add useIssueCertificate mutation hook with completedLessonIds
- Wire certificate claim UI into CourseProgress: sign-in prompt, claim button with loading state, and post-issuance view with link to certificate page
- Fix lesson page scroll container and quiz scroll-on-focus bug
* fix(academy): validate condition branch handles in edge_exists rules
- Add sourceHandle field to edge_exists ValidationRule type
- Check sourceHandle in validation.ts when specified
- Require both condition-if and condition-else branches to be connected in the branching and final project exercises
* fix(academy): address PR review — isHosted regression, stuck isExecuting, revoked cert 500, certificate SSR
- Restore env-var-based isHosted check (was hardcoded true, breaking self-hosted deployments)
- Fix isExecuting stuck at true when mock run fails validation — set isMockRunningRef immediately and reset both flags on early exit
- Fix revoked/expired certificate causing 500 — any existing record (not just active) now returns 409 instead of falling through to INSERT
- Convert certificate verification page from client component to server component — direct DB fetch, notFound() on missing cert, generateMetadata for SEO/social previews
* fix(auth): restore hybrid.ts from staging to fix CI type error
* fix(academy): mark video lessons complete on visit and fix sign-in path
* fix(academy): replace useEffect+setState with lazy useState initializer in CourseProgress
* fix(academy): reset exerciseComplete on lesson navigation, remove unused useAcademyCertificate hook
* fix(academy): useState for slug-change reset, cache() for cert page, handleMockRunRef for stale closure
* fix(academy): replace shadcn theme vars with explicit hex in LessonVideo fallback
* fix(academy): reset completedRef on exercise change, conditional verified badge, multi-select empty guard
* fix(academy): type safety fixes — null metadata fallbacks, returning() guard, exhaustive union, empty catch
* fix(academy): reset ExerciseView completed banner on nav; fix CourseProgress hydration mismatch
* fix(lightbox): guard effect body with isOpen to prevent spurious overflow reset
* fix(academy): reset LessonQuiz state on lesson change to prevent stale answers persisting
* fix(academy): course not-found metadata title; try-finally guard in mock run loop
* fix(academy): type safety, cert persistence, regex guard, mixed-lesson video, shorts support
- Derive AcademyCertificate from db $inferSelect to prevent schema drift
- Add useCourseCertificate query hook; GET /api/academy/certificates now accepts courseId for authenticated lookup
- Use useCourseCertificate in CourseProgress so certificate state survives page refresh
- Guard new RegExp(valuePattern) in validation.ts with try/catch; log warn on invalid pattern
- Add logger.warn for custom validation rules so content authors are alerted
- Add YouTube Shorts URL support to LessonVideo (youtube.com/shorts/VIDEO_ID)
- Fix mixed-lesson video gap: render videoUrl above quiz when mixed has quiz but no exercise
- Add academy-scoped not-found.tsx with link back to /academy
* fix(academy): reset hintIndex when exercise changes
* chore: remove ban-spam-accounts script (wrong branch)
* fix(academy): enforce availableBlocks in toolbar; fix mixed exercise+quiz rendering
- Add useSandboxBlockConstraints context; SandboxCanvasProvider provides exerciseConfig.availableBlocks so the toolbar only shows permitted block types. Empty array hides all blocks (configure-only exercises); non-null array restricts to listed types; triggers always hidden in sandbox.
- Fix mixed lesson with both exerciseConfig and quizConfig: exercise renders first, quiz reveals after exercise completes (sequential pedagogy). canAdvance now requires both exerciseComplete && quizComplete when both are present.
* chore(academy): remove extraneous inline comments
* fix(academy): blank mixed lesson, quiz canAdvance flag, empty-array valueNotEmpty
* prep for merge
* chore(db): regenerate academy certificate migration after staging merge
* fix(academy): disable auto-connect in sandbox mode
* fix(academy): render video in mixed lesson with no exercise or quiz
* fix(academy): mark mixed video-only lessons complete; handle cert insert race
* fix(canvas): add sandbox and embedded to nodes useMemo deps
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Lakee Sivaraya <71339072+lakeesiv@users.noreply.github.com>
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: Siddharth Ganesan <33737564+Sg312@users.noreply.github.com>
Co-authored-by: Theodore Li <teddy@zenobiapay.com>
* fix(knowledge): give users choice to keep or delete documents when removing connector
* refactor(knowledge): clean up connector delete and extract shared extension validator
- Extract `isAlphanumericExtension` helper to deduplicate regex across parser-extension.ts and validation.ts
- Extract `closeDeleteModal` callback to eliminate 4x scattered state resets
- Add archivedAt/deletedAt filters to UPDATE query in keep-docs delete path
- Parallelize storage file cleanup and tag definition cleanup with Promise.all
- Deduplicate URL construction in delete connector hook
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor(knowledge): remove duplicate extension list from parser-extension
Use SUPPORTED_DOCUMENT_EXTENSIONS and isSupportedExtension from
validation.ts instead of maintaining a separate identical list.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(db): change document.connectorId FK from cascade to set null
The cascade behavior meant deleting a connector would always delete
its documents, contradicting the "keep documents" option. With set null,
the database automatically nullifies connectorId when a connector is
removed, and we only need explicit deletion when the user opts in.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
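The semantics of the FK change can be illustrated with an in-memory model: with ON DELETE SET NULL, deleting a connector nullifies `document.connectorId` instead of dropping the rows, so keeping documents is the default and deletion is opt-in. The row shape and function below are illustrative:

```typescript
type DocumentRow = { id: string; connectorId: string | null }

function deleteConnector(
  docs: DocumentRow[],
  connectorId: string,
  deleteDocuments: boolean
): DocumentRow[] {
  if (deleteDocuments) {
    // Opt-in path: explicitly delete the connector's documents.
    return docs.filter((d) => d.connectorId !== connectorId)
  }
  // SET NULL semantics: rows survive with the reference cleared.
  return docs.map((d) => (d.connectorId === connectorId ? { ...d, connectorId: null } : d))
}
```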
* chore(db): add migration metadata for connectorId FK change
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(knowledge): fix connector delete test and use URL-safe searchParams
Use `new URL(request.url).searchParams` instead of `request.nextUrl.searchParams`
for compatibility with test mocks. Add missing `connectorType` to test fixture.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* spacing
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(executor): escape newline characters in condition expression strings
Unescaped newline/carriage-return characters in resolved string values
cause unterminated string literals in generated JS, crashing condition
evaluation with a SyntaxError.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(security): prevent ReDoS in guardrails regex validation
Add safe-regex2 to reject catastrophic backtracking patterns before
execution and cap input length at 10k characters.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(security): SSRF localhost hardening and regex DoS protection
Block localhost/loopback URLs in hosted environments using isHosted flag
instead of allowHttp. Add safe-regex2 validation and input length limits
to regex guardrails to prevent catastrophic backtracking.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(security): validate regex syntax before safety check
Move new RegExp() before safe() so invalid patterns get a proper syntax
error instead of a misleading "catastrophic backtracking" message.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(security): address PR review feedback
- Hoist isLocalhost && isHosted guard to single early-return before
protocol checks, removing redundant duplicate block
- Move regex syntax validation (new RegExp) before safe-regex2 check
so invalid patterns get proper syntax error instead of misleading
"catastrophic backtracking" message
* fix(security): remove input length cap from regex validation
The 10k character cap would block legitimate guardrail checks on long
LLM outputs. Input length doesn't affect ReDoS risk — the safe-regex2
pattern check already prevents catastrophic backtracking.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tests): mock isHosted in input-validation and function-execute tests
Tests that assert self-hosted localhost behavior need isHosted=false,
which is not guaranteed in CI where NEXT_PUBLIC_APP_URL is set to the
hosted domain.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(security): scope copilot feedback GET endpoint to authenticated user
Add WHERE clause to filter feedback records by the authenticated user's
ID, preventing any authenticated user from reading all users' copilot
interactions, queries, and workflow YAML (IDOR / CWE-639).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(smtp): add SSRF validation and genericize network error messages
Prevent SSRF via user-controlled smtpHost by validating with
validateDatabaseHost before creating the nodemailer transporter.
Collapse distinct network error messages (ECONNREFUSED, ECONNRESET,
ETIMEDOUT) into a single generic message to prevent port-state leakage.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(security): add SSRF validation to SFTP/SSH and access control to workspace invitations
Add `validateDatabaseHost` checks to SFTP and SSH connection utilities to
block connections to private/reserved IPs and localhost, matching the
existing pattern used by all database tools. Add authorization check to
the workspace invitation GET endpoint so only the invitee or a workspace
admin can view invitation details.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(smtp): restore SMTP response code handling for post-connection errors
SMTP 4xx/5xx response codes are application-level errors (invalid
recipient, mailbox full, server error) unrelated to the SSRF hardening
goal. Restore response code differentiation and logging to preserve
actionable user-facing error messages.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(security): use session email directly instead of extra DB query
Addresses PR review feedback — align with the workspace invitation
route pattern by using session.user.email instead of re-fetching
from the database.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* lint
* fix(auth): revert lint autofix that broke hasExternalApiCredentials return type
Biome auto-fixed `return auth !== null && auth.startsWith(...)` to
`return auth?.startsWith(...)` which returns `boolean | undefined`,
not `boolean`, causing a TypeScript build failure.
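The type difference behind that build failure, in isolation (`'Bearer '` is an illustrative prefix, not necessarily the real one):

```typescript
// The Biome autofix form: optional chaining short-circuits to undefined when
// auth is null, so the return type widens to boolean | undefined.
function hasPrefixAutofixed(auth: string | null): boolean | undefined {
  return auth?.startsWith('Bearer ')
}

// The original form: the explicit null check keeps the result a plain boolean.
function hasPrefixCorrect(auth: string | null): boolean {
  return auth !== null && auth.startsWith('Bearer ')
}
```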
* fix(smtp): pin resolved IP to prevent DNS rebinding (TOCTOU)
Use the pre-resolved IP from validateDatabaseHost instead of the
original hostname when creating the nodemailer transporter. Set
servername to the original hostname to preserve TLS SNI validation.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor(security): extract createPinnedLookup helper for DNS rebinding prevention
Extract reusable createPinnedLookup from secureFetchWithPinnedIP so
non-HTTP transports (SSH, SFTP, IMAP) can pin resolved IPs at the
socket level. SMTP route uses host+servername pinning instead since
nodemailer doesn't reliably pass lookup to both secure/plaintext paths.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
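The host+servername pinning described above can be sketched as a plain options builder. This assumes a validator (like `validateDatabaseHost`) has already resolved the hostname to a vetted IP; the names and option shape here are illustrative, not the real module's API.

```typescript
interface PinnedTransportOptions {
  host: string                 // the validated IP, so no second DNS lookup happens
  port: number
  tls: { servername: string }  // TLS SNI/cert checks still run against the hostname
}

// Build connection options that pin the socket to the pre-resolved IP while
// preserving certificate validation for the original hostname. This closes the
// DNS rebinding (TOCTOU) window between validation and connection.
function buildPinnedOptions(
  hostname: string,
  resolvedIp: string,
  port: number
): PinnedTransportOptions {
  return {
    host: resolvedIp,
    port,
    tls: { servername: hostname },
  }
}
```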
* fix(security): pin IMAP connections to validated resolved IP
Pass the resolved IP from validateDatabaseHost to ImapFlow as host,
with the original hostname as servername for TLS SNI verification.
Closes the DNS TOCTOU rebinding window.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* lint
* fix(auth): revert lint autofix on hasExternalApiCredentials return type
Also pin SFTP/SSH connections to validated resolved IP to prevent DNS rebinding.
* fix(security): short-circuit admin check when caller is invitee
Skip the hasWorkspaceAdminAccess DB query when the caller is already
the invitee, avoiding an unnecessary round-trip. Aligns with the org
invitation route pattern.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(knowledge): scope sync/update state per-connector to prevent race conditions
* feat(knowledge): add connectors column to knowledge base list
* refactor(knowledge): extract set helpers, handleTogglePause, and filter-before-map
* refactor(knowledge): use onSettled for syncingIds cleanup, consistent with updatingIds
* feat(generic): add generic resource tab, refactor home structure, and UI polish
* revert hardcoded feature flag
* fix build
* styling consistency
* styling
* fix(auth): extract shared auth button class and align SSO primary style
- Extract AUTH_SUBMIT_BTN constant to (auth)/components/auth-button-classes.ts,
replacing 10 copy-pasted identical className strings across 7 files
- Update SSOLoginButton primary variant to use AUTH_SUBMIT_BTN instead of
hardcoded purple gradient, making it consistent with all other auth form
submit buttons
- Fix missing isEphemeralResource import in lib/copilot/resources.ts
(was re-exported but not available in local scope)
* fix(auth): replace inline button class in chat auth components with AUTH_SUBMIT_BTN
* fix send button hover state
* feat(search): add tables, files, knowledge bases, and jobs to cmd-k search
* fix(search): address PR feedback — drop files/jobs, add onSelect to memo
* fix(search): add files back with per-file deep links, keep jobs out
* fix(search): remove onSelect from memo comparator to match existing pattern
* fix(knowledge): enqueue connector docs per-batch to survive sync timeouts
* fix(connectors): convert all connectors to contentDeferred pattern and fix validation issues
All 10 connectors now use contentDeferred: true in listDocuments, returning
lightweight metadata stubs instead of downloading content during listing.
Content is fetched lazily via getDocument only for new/changed documents,
preventing Trigger.dev task timeouts on large syncs.
Connector-specific fixes from validation audit:
- Google Drive: metadata-based contentHash, orderBy for deterministic pagination,
precise maxFiles, byte-length size check with truncation warning
- OneDrive: metadata-based contentHash, orderBy for deterministic pagination
- SharePoint: metadata-based contentHash, byte-length size check
- Dropbox: metadata-based contentHash using content_hash field
- Notion: code/equation block extraction, empty page fallback to title,
reduced CHILD_PAGE_CONCURRENCY to 5, syncContext parameter
- Confluence: syncContext caching for cloudId, reduced label concurrency to 5
- Gmail: use joinTagArray for label tags
- Obsidian: syncRunId-based stub hash for forced re-fetch, mtime-based hash
in getDocument, .trim() on vaultUrl, lightweight validateConfig
- Evernote: retryOptions threaded through apiFindNotesMetadata and apiGetNote
- GitHub: added contentDeferred: false to getDocument, syncContext parameter
Infrastructure:
- sync-engine: added syncRunId to syncContext for Obsidian change detection
- confluence/utils: replaced raw fetch with fetchWithRetry, added retryOptions
- oauth: added supportsRefreshTokenRotation: false for Dropbox
- Updated add-connector and validate-connector skills with contentDeferred docs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(connectors): address PR review comments - metadata merge, retryOptions, UTF-8 safety
- Sync engine: merge metadata from getDocument during deferred hydration,
so Gmail/Obsidian/Confluence tags and metadata survive the stub→full transition
- Evernote: pass retryOptions {retries:3, backoff:500} from listDocuments and
getDocument callers into apiFindNotesMetadata and apiGetNote
- Google Drive + SharePoint: safe UTF-8 truncation that walks back to the last
complete character boundary instead of splitting multi-byte chars
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
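The safe UTF-8 truncation can be sketched as: cut at the byte limit, then walk back past any UTF-8 continuation bytes so a multi-byte character is never split. This is a generic sketch of the technique, not the connectors' exact helper.

```typescript
// Truncate a string to at most maxBytes of UTF-8 without splitting a
// multi-byte character. Continuation bytes have the form 0b10xxxxxx, so we
// step the cut point back until it no longer lands inside a character.
function truncateUtf8(text: string, maxBytes: number): string {
  const buf = Buffer.from(text, 'utf8')
  if (buf.length <= maxBytes) return text
  let end = maxBytes
  while (end > 0 && (buf[end] & 0b11000000) === 0b10000000) end--
  return buf.subarray(0, end).toString('utf8')
}
```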
* fix(evernote): use correct RetryOptions property names
maxRetries/initialDelayMs instead of retries/backoff to match the
RetryOptions interface from lib/knowledge/documents/utils.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(sync-engine): merge title from getDocument and skip unchanged docs after hydration
- Merge title from getDocument during deferred hydration so Gmail
documents get the email Subject header instead of the snippet text
- After hydration, compare the hydrated contentHash against the stored
DB hash — if they match, skip the update. This prevents Obsidian
(and any connector with a force-refresh stub hash) from re-uploading
and re-processing unchanged documents every sync
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(sync-engine): dedup externalIds, enable deletion reconciliation, merge sourceUrl
Three sync engine gaps identified during audit:
1. Duplicate externalId guard: if a connector returns the same externalId
across pages (pagination overlap), skip the second occurrence to prevent
unique constraint violations on add and double-uploads on update.
2. Deletion reconciliation: previously required explicit fullSync or
syncMode='full', meaning docs deleted from the source accumulated in
the KB forever. Now runs on all non-incremental syncs (which return
ALL docs). Includes a safety threshold: if >50% of existing docs
(and >5 docs) would be deleted, skip and warn — protects against
partial listing failures. Explicit fullSync bypasses the threshold.
3. sourceUrl merge: hydration now picks up sourceUrl from getDocument,
falling back to the stub's sourceUrl if getDocument doesn't set one.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
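Gaps 1 and 2 above can be illustrated in miniature. The 50%/5-doc threshold and the fullSync bypass mirror the commit text; everything else (shapes, names) is a simplified assumption.

```typescript
// Gap 1: skip repeated externalIds across pages so pagination overlap can't
// cause unique-constraint violations or double-uploads.
function dedupeByExternalId<T extends { externalId: string }>(docs: T[]): T[] {
  const seen = new Set<string>()
  return docs.filter((d) => {
    if (seen.has(d.externalId)) return false
    seen.add(d.externalId)
    return true
  })
}

// Gap 2: deletion reconciliation with a safety threshold. If more than half
// of existing docs (and more than 5) would be deleted, assume a partial
// listing and skip; an explicit fullSync bypasses the threshold.
function idsToDelete(existing: string[], listed: string[], fullSync: boolean): string[] {
  const listedSet = new Set(listed)
  const missing = existing.filter((id) => !listedSet.has(id))
  const overThreshold = missing.length > 5 && missing.length > existing.length / 2
  if (overThreshold && !fullSync) return [] // likely a partial listing: skip and warn
  return missing
}
```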
* lint
* fix(connectors): confluence version metadata fallback and google drive maxFiles guard
- Confluence: use `version?.number` directly (undefined) in metadata instead
  of `?? ''` (empty string), so `Number('')` evaluating to 0 can't slip past
  the NaN check in mapTags. Hash still uses `?? ''` for string interpolation.
- Google Drive: add early return when previouslyFetched >= maxFiles to prevent
effectivePageSize <= 0 which violates the API's pageSize requirement (1-1000).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(connectors): blogpost labels and capped listing deletion reconciliation
- Confluence: fetchLabelsForPages now tries both /pages/{id}/labels and
/blogposts/{id}/labels, preventing label loss when getDocument hydrates
blogpost content (previously returned empty labels on 404).
- Sync engine: skip deletion reconciliation when listing was capped
(maxFiles/maxThreads). Connectors signal this via syncContext.listingCapped.
Prevents incorrect deletion of docs beyond the cap that still exist in source.
fullSync override still forces deletion for explicit cleanup.
- Google Drive & Gmail: set syncContext.listingCapped = true when cap is hit.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(connectors): set syncContext.listingCapped in all connectors with caps
OneDrive, Dropbox, SharePoint, Confluence (v2 + CQL), and Notion (3 listing
functions) now set syncContext.listingCapped = true when their respective
maxFiles/maxPages limit is hit. Without this, the sync engine's deletion
reconciliation would run against an incomplete listing and incorrectly
hard-delete documents that exist in the source but fell outside the cap window.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(evernote): thread retryOptions through apiListTags and apiListNotebooks
All calls to apiListTags and apiListNotebooks in both listDocuments and
getDocument now pass retryOptions for consistent retry protection across
all Thrift RPC calls.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix: prevent auth bypass via user-controlled context query param in file serve
The /api/files/serve endpoint trusted a user-supplied `context` query
parameter to skip authentication. An attacker could append
`?context=profile-pictures` to any file URL and download files without
auth. Now the public access gate checks the key prefix instead of the
query param, and `og-images/` is added to `inferContextFromKey`.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
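A minimal sketch of the server-side inference described above, assuming only the two prefixes the commit names (the real `inferContextFromKey` likely knows more contexts):

```typescript
// Derive the access context from the server-controlled storage key prefix
// instead of trusting a user-supplied ?context query param. Unknown prefixes
// return null and fall through to authenticated access checks.
function inferContextFromKey(key: string): string | null {
  if (key.startsWith('profile-pictures/')) return 'profile-pictures'
  if (key.startsWith('og-images/')) return 'og-images'
  return null
}
```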
* fix: use randomized heredoc delimiter in SSH execute-script route
Prevents accidental heredoc termination if script content contains
the delimiter string on its own line.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
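The randomized delimiter idea, as a sketch (the wrapper command shown is illustrative; the route's actual shell invocation may differ):

```typescript
import { randomBytes } from 'node:crypto'

// Build a heredoc with a one-off random delimiter so user-controlled script
// content can't terminate it early. Quoting the delimiter also disables shell
// expansion inside the heredoc body.
function buildHeredoc(script: string): string {
  const delimiter = `EOF_${randomBytes(16).toString('hex')}`
  return `cat <<'${delimiter}'\n${script}\n${delimiter}`
}
```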
* fix: escape workingDirectory in SSH execute-command route
Use escapeShellArg() with single quotes for the workingDirectory
parameter, consistent with all other SSH routes (execute-script,
create-directory, delete-file, move-rename).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
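For reference, the common single-quote escaping scheme (the repo's `escapeShellArg` is assumed to do something equivalent): wrap the argument in single quotes and rewrite each embedded quote as close-quote, escaped quote, reopen-quote.

```typescript
// Escape an arbitrary string for safe use as a single shell argument.
// 'a'b' becomes 'a'\''b': the shell concatenates 'a', a literal ', and 'b'.
function escapeShellArg(arg: string): string {
  return `'${arg.replace(/'/g, "'\\''")}'`
}
```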
* fix: harden chat/form deployment auth (OTP brute-force, CSPRNG, HMAC tokens)
- Add brute-force protection to OTP verification with attempt tracking (CWE-307)
- Replace Math.random() with crypto.randomInt() for OTP generation (CWE-338)
- Replace unsigned Base64 auth tokens with HMAC-SHA256 signed tokens (CWE-327)
- Use shared isEmailAllowed utility in OTP route instead of inline duplicate
- Simplify Redis OTP update to single KEEPTTL call
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
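The CSPRNG and HMAC moves above can be sketched like this. Payload shape, secret handling, and the token delimiter are illustrative assumptions, not the route's real wire format.

```typescript
import { createHmac, randomInt, timingSafeEqual } from 'node:crypto'

// CWE-338 fix: OTP digits from crypto.randomInt (uniform, CSPRNG-backed),
// never Math.random().
function generateOtp(): string {
  return randomInt(0, 1_000_000).toString().padStart(6, '0')
}

// CWE-327 fix sketch: sign the payload with HMAC-SHA256 instead of shipping
// unsigned Base64, and verify with a constant-time comparison.
function signToken(payload: string, secret: string): string {
  const sig = createHmac('sha256', secret).update(payload).digest('base64url')
  return `${payload}.${sig}`
}

function verifyToken(token: string, secret: string): boolean {
  const i = token.lastIndexOf('.')
  if (i < 0) return false
  const expected = createHmac('sha256', secret).update(token.slice(0, i)).digest('base64url')
  const actual = token.slice(i + 1)
  if (expected.length !== actual.length) return false
  return timingSafeEqual(Buffer.from(expected), Buffer.from(actual))
}
```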
* fix: harden SSRF protections and input validation across API routes
Add DNS-based SSRF validation for MCP server URLs, secure OIDC discovery
with IP-pinned fetch, strengthen OTP/chat/form input validation, sanitize
1Password vault parameters, and tighten deployment security checks.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* lint
* fix(file-serve): remove user-controlled context param from authenticated path
The `?context` query param was still being passed to `handleCloudProxy`
in the authenticated code path, allowing any logged-in user to spoof
context as `profile-pictures` and bypass ownership checks in
`verifyFileAccess`. Now always use `inferContextFromKey` from the
server-controlled key prefix.
* fix: handle legacy OTP format in decodeOTPValue for deploy-time compat
Add guard for OTP values without colon separator (pre-deploy format)
to avoid misparse that would lock out users with in-flight OTPs.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(mcp): distinguish DNS resolution failures from SSRF policy blocks
DNS lookup failures now throw McpDnsResolutionError (502) instead of
McpSsrfError (403), so transient DNS hiccups surface as retryable
upstream errors rather than confusing permission rejections.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: make OTP attempt counting atomic to prevent TOCTOU race
Redis path: use Lua script for atomic read-increment-conditional-delete.
DB path: use optimistic locking (UPDATE WHERE value = currentValue) with
re-read fallback on conflict. Prevents concurrent wrong guesses from
each counting as a single attempt.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
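The DB-side optimistic-lock pattern can be shown with an in-memory stand-in. A real implementation issues a conditional `UPDATE ... WHERE value = expected` against the database; the compare-and-set class below only simulates that, and the retry loop is the part the commit adds.

```typescript
// In-memory stand-in for an optimistically locked counter column.
class AttemptStore {
  private value: number
  constructor(initial = 0) {
    this.value = initial
  }
  read(): number {
    return this.value
  }
  // Simulates UPDATE ... SET value = next WHERE value = expected.
  compareAndSet(expected: number, next: number): boolean {
    if (this.value !== expected) return false // another writer won the race
    this.value = next
    return true
  }
}

// Re-read and retry on conflict so concurrent wrong guesses each count as an
// attempt instead of being silently lost.
function incrementWithRetry(store: AttemptStore, maxRetries = 3): number | null {
  for (let i = 0; i < maxRetries; i++) {
    const current = store.read()
    if (store.compareAndSet(current, current + 1)) return current + 1
  }
  return null // persistent conflict: caller re-reads and decides
}
```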
* fix: check attempt count before OTP comparison to prevent bypass
Reject OTPs that have already reached max failed attempts before
comparing the code, closing a race window where a correct guess
could bypass brute-force protection.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: validate OIDC discovered endpoints against SSRF
The discovery URL itself was SSRF-validated, but endpoint URLs returned
in the discovery document (tokenEndpoint, userInfoEndpoint, jwksEndpoint)
were stored without validation. A malicious OIDC issuer on a public IP
could return internal network URLs in the discovery response.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: remove duplicate OIDC endpoint SSRF validation block
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: validate OIDC discovered endpoints and pin DNS for 1Password Connect
- SSRF-validate all endpoint URLs returned by OIDC discovery documents
before storing them (authorization, token, userinfo, jwks endpoints)
- Pin DNS resolution in 1Password Connect requests using
secureFetchWithPinnedIP to prevent TOCTOU DNS rebinding attacks
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* lint
* fix: replace KEEPTTL with TTL+EX for Redis <6.0 compat, add DB retry loop
- Lua script now reads TTL and uses SET...EX instead of KEEPTTL
- DB optimistic locking now retries up to 3 times on conflict
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: address review feedback on OTP atomicity and 1Password fetch
- Replace Redis KEEPTTL with TTL+SET EX for Redis <6.0 compatibility
- Add retry loop to DB optimistic lock path so concurrent OTP attempts
are actually counted instead of silently dropped
- Remove unreachable fallback fetch in 1Password Connect; make
validateConnectServerUrl return non-nullable string
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: treat Lua nil return as locked when OTP key is missing
When the Redis key is deleted/expired between getOTP and
incrementOTPAttempts, the Lua script returns nil. Handle this
as 'locked' instead of silently treating it as 'incremented'.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
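The Redis side, including the nil handling, might look roughly like this. The key layout (attempt count stored under the OTP key) and the exact return codes are assumptions; the real script also carries the code itself and the TTL-preserving `TTL` + `SET ... EX` dance described earlier.

```typescript
// Illustrative atomic read-increment-conditional-delete script. Returning nil
// when the key is gone lets the caller distinguish "expired/deleted" from a
// successful increment.
const INCREMENT_OTP_ATTEMPTS = `
local raw = redis.call('GET', KEYS[1])
if not raw then return nil end
local attempts = tonumber(raw) + 1
if attempts >= tonumber(ARGV[1]) then
  redis.call('DEL', KEYS[1])
  return -1
end
local ttl = redis.call('TTL', KEYS[1])
redis.call('SET', KEYS[1], attempts, 'EX', math.max(ttl, 1))
return attempts
`

// Map the script result to a caller-facing state. A nil return (node-redis
// surfaces it as null) must read as 'locked', never as a silent success.
function interpretResult(result: number | null): 'locked' | 'incremented' {
  if (result === null || result === -1) return 'locked'
  return 'incremented'
}
```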
* fix: handle Lua nil as locked OTP and add SSRF check to MCP env resolution
- Treat Redis Lua nil return (expired/deleted key) as 'locked' instead
of silently treating it as a successful increment
- Add validateMcpServerSsrf to MCP service resolveConfigEnvVars so
env-var URLs are SSRF-validated after resolution at execution time
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: narrow resolvedIP type guard instead of non-null assertion
Replace urlValidation.resolvedIP! with proper type narrowing by adding
!urlValidation.resolvedIP to the guard clause, so TypeScript can infer
the string type without a fragile assertion.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: bind auth tokens to deployment password for immediate revocation
Include a SHA-256 hash of the encrypted password in the HMAC-signed
token payload. Changing the deployment password now immediately
invalidates all existing auth cookies, restoring the pre-HMAC behavior.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: bind auth tokens to deployment password and remove resolvedIP non-null assertion
- Include SHA-256 hash of encryptedPassword in HMAC token payload so
changing a deployment's password immediately invalidates all sessions
- Pass encryptedPassword through setChatAuthCookie/setFormAuthCookie
and validateAuthToken at all call sites
- Replace non-null assertion on resolvedIP with proper narrowing guard
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: update test assertions for new encryptedPassword parameter
Tests now expect the encryptedPassword arg passed to validateAuthToken
and setDeploymentAuthCookie after the password-binding change.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: format long lines in chat/form test assertions
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: pass encryptedPassword through OTP route cookie generation
Select chat.password in PUT handler DB query and pass it to
setChatAuthCookie so OTP-issued tokens include the correct
password slot for subsequent validation.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(copilot): expand tool metadata, fix thinking text rendering, clean up display logic
* fix(copilot): guard null reasoning data, use ensureTextBlock for thinking end
* fix(copilot): restore displayTitle precedence so cancelled tools show 'Stopped by user'
* feat: skills import, MCP modal updates, wordmark icon, tool-input improvements
- Add skills import functionality (route + components + utils)
- Update MCP deploy modal
- Add Wordmark emcn icon + logo SVG assets
- Improve tool-input component
- Update README branding to new wordmark
- Add ban-spam-accounts admin script
* fix: resolve build error and audit findings from simplify review
- Add BUILT_IN_TOOL_TYPES export to blocks/utils.ts (was removed from
tool-input.tsx but never added to the new import target — caused build
error "Export BUILT_IN_TOOL_TYPES doesn't exist in target module")
- Export Wordmark from emcn icons barrel (index.ts)
- Derive isDragging from dragCounter in skill-import.tsx instead of
maintaining redundant state that could desync
- Replace manual AbortController/setTimeout with AbortSignal.timeout()
  in skills import API route (supported since Node 17.3; no manual cleanup needed)
- Use useId() for SVG gradient ID in wordmark.tsx to prevent duplicate
ID collisions if rendered multiple times on the same page
* fix(scripts): fix docs mismatch and N+1 query in ban-spam-accounts
- Fix comment: default pattern is @vapu.xyz, not @sharebot.net
- Replace per-user stats loop with a single aggregated JOIN query
* feat: wire wordmark into sidebar, fix credential selector modal dispatch
- Show Wordmark (icon + text) in the expanded sidebar instead of the
bare Sim icon; collapsed state keeps the small Sim icon unchanged
- Untrack scripts/ban-spam-accounts.ts (gitignored; one-off script)
- Credential selector: open OAuthRequiredModal inline instead of
navigating to Settings → Integrations (matches MCP/tool-input pattern)
- Credential selector: update billing import from getSubscriptionAccessState
to getSubscriptionStatus; drop writePendingCredentialCreateRequest and
useSettingsNavigation dependencies
* feat(misc): misc UX/UI improvements
* more random fixes
* more random fixes
* fix: address PR review findings from cursor bugbot
- settings-sidebar: use getSubscriptionAccessState instead of getSubscriptionStatus
so billingBlocked and status validity are checked; add requiresMax gating so
max-plan-only nav items (inbox) are hidden for lower-tier users
- credential-selector: same getSubscriptionAccessState migration for credential sets
visibility check
- mothership chats PATCH: change else if to if for isUnread so both title and
isUnread can be updated in a single request
- skills import: check Content-Length header before reading response body to avoid
loading oversized files into memory
* fix(skills): add ZIP file size guard before extraction
Checks file.size > 5 MB before calling extractSkillFromZip to prevent
zip bombs from exhausting browser memory at the client-side upload path.
* feat(settings-sidebar): show locked upsell items with plan badge
Sim Mailer (requiresMax) and Email Polling (requiresTeam) now always
appear in the settings sidebar when billing is enabled and the
deployment is hosted. If the user lacks the required plan they see a
small MAX / TEAM badge next to the label and are taken to the page
which already contains the upgrade prompt.
Enterprise (Access Control, SSO) and Team management stay hard-hidden
for lower tiers. Admin/superuser items stay truly hidden.
* fix(settings-sidebar): remove flex-1 from label span to fix text centering
* feat(settings-sidebar): remove team gate from email polling, keep only mailer max gate
* feat(subscription): billing details layout and Enterprise card improvements
- Move Enterprise plan card into the plan grid (auto-fit columns) instead
of a separate standalone section below billing details
- Refactor billing details section: remove outer border/background,
separate each row with top border + padding for cleaner separation
- Update button variants: Add Credits → active, Invoices → active
* fix(mothership): prevent lastSeenAt conflict when both title and isUnread are patched together
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(sidebar): prevent double-save race in flyout inline rename on Enter+blur
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(skills): normalize CRLF line endings before parsing SKILL.md frontmatter
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
* feat(log): enable info logs in staging and prod
* Upgrade info logs to error for message route
* Add to orchestrator, remove Helm shenanigans
* Fix lint
---------
Co-authored-by: Theodore Li <theo@sim.ai>
* fix(ui): add request a demo modal
* Remove dead code
* Remove footer modal
* Address greptile comments
* Sanitize CRLF characters from emails
* extract shared email header safety regex
Co-authored-by: Theodore Li <TheodoreSpeaks@users.noreply.github.com>
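A minimal version of the shared header-safety check: CR/LF in a user-supplied value is what enables SMTP header injection, so it is detected and stripped before the value reaches a mail header. The exported names are illustrative.

```typescript
// Matches any carriage return or line feed, the characters that would let
// user input start a new SMTP header line.
const EMAIL_HEADER_UNSAFE = /[\r\n]/

// Collapse CR/LF runs to a single space so the value stays one header line.
function sanitizeHeaderValue(value: string): string {
  return value.replace(/[\r\n]+/g, ' ').trim()
}
```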
* Use pricing CTA action for demo modal
Co-authored-by: Theodore Li <TheodoreSpeaks@users.noreply.github.com>
* fix demo request import ordering
Co-authored-by: Theodore Li <TheodoreSpeaks@users.noreply.github.com>
* merge staging and fix hubspot list formatting
Co-authored-by: Theodore Li <TheodoreSpeaks@users.noreply.github.com>
* fix(generate-docs): fix tool description extraction and simplify script
- Fix endsWith over-matching: basename === 'index.ts'/'types.ts' instead
of endsWith(), which was silently skipping valid tool files like
list_leave_types.ts, delete_index.ts, etc.
- Add extractSwitchCaseToolMapping() to resolve op ID → tool ID mismatches
where block switch statements map differently (e.g. HubSpot get_carts →
hubspot_list_carts)
- Fix double fs.readFileSync in writeIntegrationsJson — reuse existing
fileContent variable instead of re-reading the file
- Remove 5 dead functions superseded by *FromContent variants
- Simplify extractToolsAccessFromContent to use matchAll
- fix(upstash): replace template literal tool ID with explicit switch cases
* fix(generate-docs): restore extractIconName by aliasing to extractIconNameFromContent
* restore
* fix(demo-modal): reset form on open to prevent stale success state on reopen
* undo hardcoded feature flag
* fix(upstash): throw on unknown operation instead of silently falling back to get
---------
Co-authored-by: Theodore Li <teddy@zenobiapay.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: Theodore Li <TheodoreSpeaks@users.noreply.github.com>
Co-authored-by: waleed <walif6@gmail.com>
* feat(hubspot): add 27 CRM tools and fix OAuth scope mismatch
* lint
* fix(hubspot): switch marketing events to CRM Objects API and add HubSpotCrmObject base type
* chore(docs): fix import ordering and formatting lint errors
* feat(hubspot): wire all 27 new tools into block definition
* fix(hubspot): address review comments - schema mismatch, pagination, trim, descriptions
- Switch marketing event outputs to CRM envelope structure (id, properties, createdAt, updatedAt, archived) matching CRM Objects API
- Fix list_lists pagination: add offset param, map offset-based response to paging structure
- Add .trim() to contactId/companyId in pre-existing get/update tools
- Fix default limit descriptions (100 → 10) in list_contacts/list_companies
- Fix operator examples (CONTAINS → CONTAINS_TOKEN) in search_contacts/search_companies
- Remove unused params arg in get_users transformResponse
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(hubspot): revert to Marketing Events API and fix Lists pagination per API docs
Marketing Events:
- Revert from /crm/v3/objects/marketing_events back to /marketing/v3/marketing-events
- The Marketing Events API does NOT require appId for GET /marketing-events/{objectId}
- appId is only needed for the /events/{externalEventId} endpoint (which we don't use)
- Restore flat response schema (objectId, eventName, etc. at top level, not CRM envelope)
Lists:
- POST /crm/v3/lists/search uses offset-based pagination (not cursor-based)
- Response shape: { lists, hasMore, offset, total } — not { results, paging }
- Map offset → paging.next.after for consistent block interface
- Fix default count: 20 (not 25), max 500
- GET /crm/v3/lists/{listId} wraps response in { list: { ... } }
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(hubspot): final audit fixes verified against API docs
- Revert list_contacts/list_companies default limit back to 100 (confirmed by API docs)
- Add idProperty param to get_appointment.ts (was missing, inconsistent with update_appointment)
- Remove get_carts from idProperty block condition (carts don't support idProperty)
- Add get_lists to after block condition (pagination was inaccessible from UI)
- Add after pagination param to get_users.ts (was missing, users beyond first page unreachable)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(hubspot): return paging in get_users and add to block after condition
- Add paging output to get_users transformResponse and outputs
- Add get_users to block after subBlock condition so cursor is accessible from UI
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(hubspot): align total fallback with type definitions in search tools
Use `?? 0` instead of `?? null` for search tools where the type declares
`total: number`. Also declare `total` in list_lists metadata output schema.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(rippling): add Rippling HR integration with 19 tools
* fix(rippling): address PR review feedback
- Fix lint:check import ordering in icon-mapping.ts
- Build clean params object instead of spreading all UI fields to API
- Add try/catch around JSON.parse for users field
- Use != null guard for limit/offset to not drop 0 values
- Add missing tags to block config and integrations.json
* fix(rippling): guard startDate by operation and clarify totalCount descriptions
- Guard startDate/endDate with operation check to prevent candidateStartDate
from clobbering date filters on leave/activity operations
- Update totalCount output descriptions on paginated tools to clarify it
reflects page size, not total record count
* fix(rippling): use null-safe guard for groupVersion param
* fix(rippling): remove operation field from tool params payload
* fix(rippling): add input validation for action param and empty group update body
* fix(ui): fix kb id extraction logic for resource, sync tags
* Pass knowledge base id back on edit tag
---------
Co-authored-by: Theodore Li <theo@sim.ai>
- Add max-w-[260px] to Tooltip.Content so video previews don't blow out the tooltip size
- Replace cursor-help with cursor-default on info icons in settings
* improvement(tour): fix tour auto-start logic and standardize selectors
* fix(tour): address PR review comments
- Move autoStartAttempted.add() inside timer callback to prevent
blocking auto-start when tour first mounts while disabled
- Memoize setJoyrideRef with useCallback to prevent ref churn
- Remove unused joyrideRef
* feat(home): auth-aware landing page navigation
- Redirect authenticated users from / to /workspace via middleware (?home param bypasses)
- Show "Go to App" instead of "Log in / Get started" in navbar for authenticated users
- Logo links to /?home for authenticated users to stay in marketing context
- Settings "Home Page" button opens /?home
- Handle isPending session state to prevent CTA button flash
* lint
* fix(home): remove stale ?from=nav params in landing nav
* fix(home): preserve ?home param in nav links during session pending state
* lint
* feat: add product tour
* chore: updated modals
* chore: fix the tour
* chore: Tour Updates
* chore: fix review changes
* chore: fix review changes
* chore: fix review changes
* chore: fix review changes
* chore: fix review changes
* minor improvements
* chore(tour): address PR review comments
- Extract shared TourState, TourStateContext, mapPlacement, and TourTooltipAdapter
into tour-shared.tsx, eliminating ~100 lines of duplication between product-tour.tsx
and workflow-tour.tsx
- Fix stale closure in handleStartTour — add isOnWorkflowPage to useCallback deps
so Take a tour dispatches the correct event after navigation
* chore(tour): address remaining PR review comments
- Remove unused logger import and instance in product-tour.tsx
- Remove unused tour-tooltip-fade animation from tailwind config
- Remove unnecessary overflow-hidden wrapper around WorkflowTour
- Add border stroke to arrow SVG in tour-tooltip for visual consistency
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* chore(tour): address second round of PR review comments
- Remove unnecessary 'use client' from workflow layout (children are already client components)
- Fix ref guard timing issue in TourTooltipAdapter that could prevent Joyride from tracking tooltip on subsequent steps
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* chore(tour): extract shared Joyride config, fix popover arrow overflow
- Extract duplicated Joyride floaterProps/styles into getSharedJoyrideProps()
in tour-shared.tsx, parameterized by spotlightBorderRadius
- Fix showArrow disabling content scrolling in PopoverContent by wrapping
children in a scrollable div when arrow is visible
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* lint
* fix(tour): stop running tour when disabled becomes true
Prevents nav and workflow tours from overlapping. When a user navigates
to a workflow page while the nav tour is running, the disabled flag
now stops the nav tour instead of just suppressing auto-start.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tour): move auto-start flag into timer, fix truncate selector conflict
- Move hasAutoStarted flag inside setTimeout callback so it's only set
when the timer fires, allowing retry if disabled changes during delay
- Add data-popover-scroll attribute to showArrow scroll wrapper and
exclude it from the flex-1 truncate selector to prevent overflow
conflict
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tour): remove duplicate overlay on center-placed tour steps
Joyride's spotlight already renders a full-screen overlay via boxShadow.
The centered TourTooltip was adding its own bg-black/55 overlay on top,
causing double-darkened backgrounds. Removed the redundant overlay div.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor: move docs link from settings to help dropdown
The Docs link (https://docs.sim.ai) was buried in settings navigation.
Moved it to the Help dropdown in the sidebar for better discoverability.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Adithya Krishna <aadithya794@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(table): column drag-and-drop reorder
* fix(table): remove duplicate onDragEnd call from handleDrop
* fix(table): persist columnOrder on rename/delete and defer delete to onSuccess
* fix(table): prevent stale refs during column drag operations
Fix two bugs in column drag-and-drop:
1. Stale columnWidths ref during rename - compute updated widths inline
before passing to updateMetadata
2. Escape-cancelled drag still reorders - update dropTargetColumnNameRef
directly in handleColumnDragLeave to prevent handleColumnDragEnd from
reading stale ref value
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(table): insert column at correct side when anchor is unordered
When the anchor column isn't in columnOrder, add it first then insert
the new column relative to it, so 'right' insertions appear after the
anchor as expected.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(home): voice input text persistence bugs
* fix(home): gate setIsListening on startRecognition success
* fix(home): handle startRecognition failure in restartRecognition
* fix(home): reset speech prefix on submit while mic is active
* feat(tools): advanced fields for youtube, vercel; added cloudflare and dataverse tools (#3257)
* refactor(vercel): mark optional fields as advanced mode
Move optional/power-user fields behind the advanced toggle:
- List Deployments: project filter, target, state
- Create Deployment: project ID override, redeploy from, target
- List Projects: search
- Create/Update Project: framework, build/output/install commands
- Env Vars: variable type
- Webhooks: project IDs filter
- Checks: path, details URL
- Team Members: role filter
- All operations: team ID scope
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* style(youtube): mark optional params as advanced mode
Hide pagination, sort order, and filter fields behind the advanced
toggle for a cleaner default UX across all YouTube operations.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* added advanced fields for vercel and youtube, added cloudflare and dataverse block
* added desc for dataverse
* add more tools
* ack comment
* more
* ops
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(tables): added tables (#2867)
* updates
* required
* trashy table viewer
* updates
* updates
* filtering ui
* updates
* updates
* updates
* one input mode
* format
* fix lints
* improved errors
* updates
* updates
* changes
* doc strings
* breaking down file
* update comments with ai
* updates
* comments
* changes
* revert
* updates
* dedupe
* updates
* updates
* updates
* refactoring
* renames & refactors
* refactoring
* updates
* undo
* update db
* wand
* updates
* fix comments
* fixes
* simplify comments
* updates
* renames
* better comments
* validation
* updates
* updates
* updates
* fix sorting
* fix appearance
* updating prompt to make it user sort
* rm
* updates
* rename
* comments
* clean comments
* simplification
* updates
* updates
* refactor
* reduced type confusion
* undo
* rename
* undo changes
* undo
* simplify
* updates
* updates
* revert
* updates
* db updates
* type fix
* fix
* fix error handling
* updates
* docs
* docs
* updates
* rename
* dedupe
* revert
* uncook
* updates
* fix
* fix
* fix
* fix
* prepare merge
* readd migrations
* add back missed code
* migrate enrichment logic to general abstraction
* address bugbot concerns
* adhere to size limits for tables
* remove conflicting migration
* add back migrations
* fix tables auth
* fix permissive auth
* fix lint
* reran migrations
* migrate to use tanstack query for all server state
* update table-selector
* update names
* added tables to permission groups, updated subblock types
---------
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
Co-authored-by: waleed <walif6@gmail.com>
* fix(snapshot): changed insert to upsert when concurrent identical child workflows are running (#3259)
* fix(snapshot): changed insert to upsert when concurrent identical child workflows are running
* fixed ci tests failing
* fix(workflows): disallow duplicate workflow names at the same folder level (#3260)
* feat(tools): added redis, upstash, algolia, and revenuecat (#3261)
* feat(tools): added redis, upstash, algolia, and revenuecat
* ack comment
* feat(models): add gemini-3.1-pro-preview and update gemini-3-pro thinking levels (#3263)
* fix(audit-log): lazily resolve actor name/email when missing (#3262)
* fix(blocks): move type coercions from tools.config.tool to tools.config.params (#3264)
* fix(blocks): move type coercions from tools.config.tool to tools.config.params
Number() coercions in tools.config.tool ran at serialization time before
variable resolution, destroying dynamic references like <block.result.count>
by converting them to NaN/null. Moved all coercions to tools.config.params
which runs at execution time after variables are resolved.
Fixed in 15 blocks: exa, arxiv, sentry, incidentio, wikipedia, ahrefs,
posthog, elasticsearch, dropbox, hunter, lemlist, spotify, youtube, grafana,
parallel. Also added mode: 'advanced' to optional exa fields.
Closes #3258
* fix(blocks): address PR review — move remaining param mutations from tool() to params()
- Moved field mappings from tool() to params() in grafana, posthog,
lemlist, spotify, dropbox (same dynamic reference bug)
- Fixed parallel.ts excerpts/full_content boolean logic
- Fixed parallel.ts search_queries empty case (must set undefined)
- Fixed elasticsearch.ts timeout not included when already ends with 's'
- Restored dropbox.ts tool() switch for proper default fallback
* fix(blocks): restore field renames to tool() for serialization-time validation
Field renames (e.g. personalApiKey→apiKey) must be in tool() because
validateRequiredFieldsBeforeExecution calls selectToolId()→tool() then
checks renamed field names on params. Only type coercions (Number(),
boolean) stay in params() to avoid destroying dynamic variable references.
* improvement(resolver): resolve empty sentinel so unexecuted valid refs are not passed through to text inputs (#3266)
* fix(blocks): add required constraint for serviceDeskId in JSM block (#3268)
* fix(blocks): add required constraint for serviceDeskId in JSM block
* fix(blocks): rename custom field values to request field values in JSM create request
* fix(trigger): add isolated-vm support to trigger.dev container builds (#3269)
Scheduled workflow executions running in trigger.dev containers were
failing to spawn isolated-vm workers because the native module wasn't
available in the container. This caused loop condition evaluation to
silently fail and exit after one iteration.
- Add isolated-vm to build.external and additionalPackages in trigger config
- Include isolated-vm-worker.cjs via additionalFiles for child process spawning
- Add fallback path resolution for worker file in trigger.dev environment
* fix(tables): hide tables from sidebar and block registry (#3270)
* fix(tables): hide tables from sidebar and block registry
* fix(trigger): add isolated-vm support to trigger.dev container builds (#3269)
Scheduled workflow executions running in trigger.dev containers were
failing to spawn isolated-vm workers because the native module wasn't
available in the container. This caused loop condition evaluation to
silently fail and exit after one iteration.
- Add isolated-vm to build.external and additionalPackages in trigger config
- Include isolated-vm-worker.cjs via additionalFiles for child process spawning
- Add fallback path resolution for worker file in trigger.dev environment
* lint
* fix(trigger): update node version to align with main app (#3272)
* fix(build): fix corrupted sticky disk cache on blacksmith (#3273)
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Lakee Sivaraya <71339072+lakeesiv@users.noreply.github.com>
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
description: Create or update a Sim integration block with correct subBlocks, conditions, dependsOn, modes, canonicalParamId usage, outputs, and tool wiring. Use when working on `apps/sim/blocks/blocks/{service}.ts` or aligning a block with its tools.
---
# Add Block Skill
You are an expert at creating block configurations for Sim. You understand the serializer, subBlock types, conditions, dependsOn, modes, and all UI patterns.
## Your Task
When the user asks you to create a block:
1. Create the block file in `apps/sim/blocks/blocks/{service}.ts`
2. Configure all subBlocks with proper types, conditions, and dependencies
3. Wire up tools correctly
## Hard Rule: No Guessed Tool Outputs
Blocks depend on tool outputs. If the underlying tool response schema is not documented or live-verified, you MUST tell the user instead of guessing block outputs.
- Do NOT invent block outputs for undocumented tool responses
- Do NOT describe unknown JSON shapes as if they were confirmed
- Do NOT wire fields into the block just because they seem likely to exist
If the tool outputs are not known, do one of these instead:
1. Ask the user for sample tool responses
2. Ask the user for test credentials so the tool responses can be verified
3. Limit the block to operations whose outputs are documented
4. Leave uncertain outputs out and explicitly tell the user what remains unknown
serviceId: '{service}', // Must match OAuth provider service key
requiredScopes: getScopesForService('{service}'), // Import from @/lib/oauth/utils
placeholder: 'Select account',
required: true,
}
```
**Scopes:** Always use `getScopesForService(serviceId)` from `@/lib/oauth/utils` for `requiredScopes`. Never hardcode scope arrays — the single source of truth is `OAUTH_PROVIDERS` in `lib/oauth/oauth.ts`.
**Scope descriptions:** When adding a new OAuth provider, also add human-readable descriptions for all scopes in `SCOPE_DESCRIPTIONS` within `lib/oauth/utils.ts`.
- `'json-object'` - Raw JSON (adds "no markdown" instruction)
- `'json-schema'` - JSON Schema definitions
- `'sql-query'` - SQL statements
- `'timestamp'` - Adds current date/time context
## Tools Configuration
**Important:** `tools.config.tool` runs during serialization before variable resolution. Put `Number()` and other type coercions in `tools.config.params` instead, which runs at execution time after variables are resolved.
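As a sketch of that split (the tool ID and the `maxResults` field are hypothetical, not a real integration):

```typescript
// Hypothetical tools config. tool() runs at serialization time, before
// <block.result.count>-style references are resolved, so it must not
// coerce values. params() runs at execution time, after resolution.
const tools = {
  access: ['example_search'],
  config: {
    tool: () => 'example_search', // selection only — no value mutations here
    params: (params: Record<string, unknown>) => ({
      ...params,
      // Safe: by now maxResults is a resolved literal, not a reference
      maxResults:
        params.maxResults !== undefined && params.maxResults !== ''
          ? Number(params.maxResults)
          : undefined,
    }),
  },
}
```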
**Preferred:** Use tool names directly as dropdown option IDs to avoid switch cases:
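A minimal sketch of the pattern (the operation labels and tool IDs are illustrative):

```typescript
// Dropdown option IDs double as tool IDs, so tools.config.tool can
// return the selected operation directly instead of mapping through
// a switch statement.
const operationSubBlock = {
  id: 'operation',
  title: 'Operation',
  type: 'dropdown',
  options: [
    { label: 'Send Message', id: 'slack_send_message' },
    { label: 'List Channels', id: 'slack_list_channels' },
  ],
}

// No switch needed: the dropdown value IS the tool ID.
const selectTool = (params: { operation: string }) => params.operation
```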
When using `type: 'json'` and you know the object shape in advance, **describe the inner fields in the description** so downstream blocks know what properties are available. For well-known, stable objects, use nested output definitions instead:
```typescript
outputs: {
  // BAD: Opaque json with no info about what's inside
  plan: { type: 'json', description: 'Zone plan information' },
  // GOOD: Describe the known fields in the description
  plan: {
    type: 'json',
    description: 'Zone plan information (id, name, price, currency, frequency, is_subscribed)',
  },
  // BEST: Use nested output definition when the shape is stable and well-known
}
```
Use a nested output definition when:
- The object has a small, stable set of fields (< 10)
- Downstream blocks will commonly access specific properties
- The API response shape is well-documented and unlikely to change
Use `type: 'json'` with a descriptive string when:
- The object has many fields or a dynamic shape
- It represents a list/array of items
- The shape varies by operation
If the output shape is unknown because the underlying tool response is undocumented, you MUST tell the user and stop. Unknown is not the same as variable. Never guess block outputs.
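For the BEST case, a nested output definition might look like this. This is a sketch — the `properties` key follows the convention of defining inner fields for `type: 'json'` outputs, but the field names here are illustrative, so verify the exact schema shape against an existing block before copying:

```typescript
// Nested output definition: downstream blocks can see each inner field
// instead of an opaque json blob. Field names are illustrative.
const outputs = {
  plan: {
    type: 'json',
    description: 'Zone plan information',
    properties: {
      id: { type: 'string', description: 'Plan identifier' },
      name: { type: 'string', description: 'Plan name' },
      price: { type: 'number', description: 'Plan price' },
      currency: { type: 'string', description: 'Billing currency' },
    },
  },
}
```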
See the `/add-trigger` skill for creating triggers.
## Icon Requirement
If the icon doesn't already exist in `@/components/icons.tsx`, **do NOT search for it yourself**. After completing the block, ask the user to provide the SVG:
```
The block is complete, but I need an icon for {Service}.
Please provide the SVG and I'll convert it to a React component.
You can usually find this in the service's brand/press kit page, or copy it from their website.
```
## Advanced Mode for Optional Fields
Optional fields that are rarely used should be set to `mode: 'advanced'` so they don't clutter the basic UI. This includes:
mode: 'advanced', // Rarely used, hide from basic view
}
```
## WandConfig for Complex Inputs
Use `wandConfig` for fields that are hard to fill out manually, such as timestamps, comma-separated lists, and complex query strings. This gives users an AI-assisted input experience.
```typescript
// Timestamps - use generationType: 'timestamp' to inject current date context
{
  id: 'startTime',
  title: 'Start Time',
  type: 'short-input',
  mode: 'advanced',
  wandConfig: {
    enabled: true,
    prompt: 'Generate an ISO 8601 timestamp based on the user description. Return ONLY the timestamp string.',
    generationType: 'timestamp',
  },
}
// Comma-separated lists - simple prompt without generationType
{
  id: 'mediaIds',
  title: 'Media IDs',
  type: 'short-input',
  mode: 'advanced',
  wandConfig: {
    enabled: true,
    prompt: 'Generate a comma-separated list of media IDs. Return ONLY the comma-separated values.',
  },
}
```
## Naming Convention
All tool IDs referenced in `tools.access` and returned by `tools.config.tool` MUST use `snake_case` (e.g., `x_create_tweet`, `slack_send_message`). Never use camelCase or PascalCase.
## Checklist Before Finishing
- [ ] `integrationType` is set to the correct `IntegrationType` enum value
- [ ] `tags` array includes all applicable `IntegrationTag` values
- [ ] All subBlocks have `id`, `title` (except switch), and `type`
description: Add or update a Sim knowledge base connector for syncing documents from an external source, including auth mode, config fields, pagination, document mapping, tags, and registry wiring. Use when working in `apps/sim/connectors/{service}/` or adding a new external document source.
---
# Add Connector Skill
You are an expert at adding knowledge base connectors to Sim. A connector syncs documents from an external source (Confluence, Google Drive, Notion, etc.) into a knowledge base.
## Your Task
When the user asks you to create a connector:
1. Use Context7 or WebFetch to read the service's API documentation
2. Determine the auth mode: **OAuth** (if Sim already has an OAuth provider for the service) or **API key** (if the service uses API key / Bearer token auth)
3. Create the connector directory and config
4. Register it in the connector registry
## Hard Rule: No Guessed Response Or Document Schemas
If the service docs do not clearly show the document list response, document fetch response, pagination shape, or metadata fields, you MUST tell the user instead of guessing.
- Do NOT invent document fields
- Do NOT guess pagination cursors or next-page fields
- Do NOT infer metadata/tag mappings from unrelated endpoints
- Do NOT fabricate `ExternalDocument` content structure from partial docs
If the source schema is unknown, do one of these instead:
1. Ask the user for sample API responses
2. Ask the user for test credentials so you can verify live payloads
3. Implement only the documented parts of the connector
4. Leave the connector incomplete and explicitly say which fields remain unknown
## Directory Structure
Create files in `apps/sim/connectors/{service}/`:
```
connectors/{service}/
├── index.ts # Barrel export
└── {service}.ts # ConnectorConfig definition
```
## Authentication
Connectors use a discriminated union for auth config (`ConnectorAuthConfig` in `connectors/types.ts`):
### OAuth mode
For services with existing OAuth providers in `apps/sim/lib/oauth/types.ts`. The `provider` must match an `OAuthService`. The modal shows a credential picker and handles token refresh automatically.
### API key mode
For services that use API key / Bearer token auth. The modal shows a password input with the configured `label` and `placeholder`. The API key is encrypted at rest using AES-256-GCM and stored in a dedicated `encryptedApiKey` column on the connector record. The sync engine decrypts it automatically — connectors receive the raw access token in `listDocuments`, `getDocument`, and `validateConfig`.
// Optional: map source metadata to semantic tag keys (translated to slots by sync engine)
mapTags: (metadata) => {
  // Return Record<string, unknown> with keys matching tagDefinitions[].id
},
}
```
Only map fields in `listDocuments`, `getDocument`, `validateConfig`, and `mapTags` when the source payload shape is documented or live-verified. If not, tell the user and stop rather than guessing.
### API key connector example
```typescript
export const {service}Connector: ConnectorConfig = {
  id: '{service}',
  name: '{Service}',
  description: 'Sync documents from {Service} into your knowledge base',
  version: '1.0.0',
  icon: {Service}Icon,
  auth: {
    mode: 'apiKey',
    label: 'API Key', // Shown above the input field
    placeholder: 'Enter your {Service} API key', // Input placeholder
  },
  // ...
}
```
## Config Fields
The add-connector modal renders these automatically — no custom UI needed.
Three field types are supported: `short-input`, `dropdown`, and `selector`.
```typescript
// Text input
{
  id: 'domain',
  title: 'Domain',
  type: 'short-input',
  placeholder: 'yoursite.example.com',
  required: true,
}
// Dropdown (static options)
{
  id: 'contentType',
  title: 'Content Type',
  type: 'dropdown',
  required: false,
  options: [
    { label: 'Pages only', id: 'page' },
    { label: 'Blog posts only', id: 'blogpost' },
    { label: 'All content', id: 'all' },
  ],
}
```
## Dynamic Selectors (Canonical Pairs)
Use `type: 'selector'` to fetch options dynamically from the existing selector registry (`hooks/selectors/registry.ts`). Selectors are always paired with a manual fallback input using the **canonical pair** pattern — a `selector` field (basic mode) and a `short-input` field (advanced mode) linked by `canonicalParamId`.
The user sees a toggle button (ArrowLeftRight) to switch between the selector dropdown and manual text input. On submit, the modal resolves each canonical pair to the active mode's value, keyed by `canonicalParamId`.
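The resolution step can be sketched like this (illustrative logic, not the modal's actual implementation):

```typescript
type PairMode = 'basic' | 'advanced'

// Resolve one canonical pair to a single sourceConfig entry keyed by
// canonicalParamId, using whichever mode the user left active. Both the
// selector value and the manual value map to the same key.
function resolveCanonicalPair(
  canonicalParamId: string,
  activeMode: PairMode,
  values: { basic: string; advanced: string }
): Record<string, string> {
  const value = activeMode === 'basic' ? values.basic : values.advanced
  return { [canonicalParamId]: value }
}
```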
### Rules
1. **Every selector field MUST have a canonical pair** — a corresponding `short-input` (or `dropdown`) field with the same `canonicalParamId` and `mode: 'advanced'`.
2. **`required` must be set identically on both fields** in a pair. If the selector is required, the manual input must also be required.
3. **`canonicalParamId` must match the key the connector expects in `sourceConfig`** (e.g. `baseId`, `channel`, `teamId`). The advanced field's `id` should typically match `canonicalParamId`.
4. **`dependsOn` references the selector field's `id`**, not the `canonicalParamId`. The modal propagates dependency clearing across canonical siblings automatically — changing either field in a parent pair clears dependent children.
### Selector canonical pair example (Airtable base → table cascade)
```typescript
configFields: [
  // Base: selector (basic) + manual (advanced)
  {
    id: 'baseSelector',
    title: 'Base',
    type: 'selector',
    selectorKey: 'airtable.bases', // Must exist in hooks/selectors/registry.ts
    canonicalParamId: 'baseId',
    mode: 'basic',
    placeholder: 'Select a base',
    required: true,
  },
  {
    id: 'baseId',
    title: 'Base ID',
    type: 'short-input',
    canonicalParamId: 'baseId',
    mode: 'advanced',
    placeholder: 'e.g. appXXXXXXXXXXXXXX',
    required: true,
  },
  // Table: selector depends on base (basic) + manual (advanced)
  {
    id: 'tableSelector',
    title: 'Table',
    type: 'selector',
    selectorKey: 'airtable.tables',
    canonicalParamId: 'tableIdOrName',
    mode: 'basic',
    dependsOn: ['baseSelector'], // References the selector field ID
  },
]
```
### Selector with domain dependency (Jira/Confluence pattern)
When a selector depends on a plain `short-input` field (no canonical pair), `dependsOn` references that field's `id` directly. The `domain` field's value maps to `SelectorContext.domain` automatically via `SELECTOR_CONTEXT_FIELDS`.
```typescript
configFields: [
  {
    id: 'domain',
    title: 'Jira Domain',
    type: 'short-input',
    placeholder: 'yoursite.atlassian.net',
    required: true,
  },
  {
    id: 'projectSelector',
    title: 'Project',
    type: 'selector',
    selectorKey: 'jira.projects',
    canonicalParamId: 'projectKey',
    mode: 'basic',
    dependsOn: ['domain'],
    placeholder: 'Select a project',
    required: true,
  },
  {
    id: 'projectKey',
    title: 'Project Key',
    type: 'short-input',
    canonicalParamId: 'projectKey',
    mode: 'advanced',
    placeholder: 'e.g. ENG, PROJ',
    required: true,
  },
]
```
### How `dependsOn` maps to `SelectorContext`
The connector selector field builds a `SelectorContext` from dependency values. For the mapping to work, each dependency's `canonicalParamId` (or field `id` for non-canonical fields) must exist in `SELECTOR_CONTEXT_FIELDS` (`lib/workflows/subblocks/context.ts`):
| Selector key | Context fields | Options returned |
|---|---|---|
| `confluence.spaces` | credential, `domain` | Space key + name |
| `notion.databases` | credential | Database ID + name |
| `asana.workspaces` | credential | Workspace GID + name |
| `microsoft.teams` | credential | Team ID + name |
| `microsoft.channels` | credential, `teamId` | Channel ID + name |
| `webflow.sites` | credential | Site ID + name |
| `outlook.folders` | credential | Folder ID + name |
## ExternalDocument Shape
Every document returned from `listDocuments`/`getDocument` must include:
```typescript
{
  externalId: string // Source-specific unique ID
  title: string // Document title
  content: string // Extracted plain text (or '' if contentDeferred)
  contentDeferred?: boolean // true = content will be fetched via getDocument
  mimeType: 'text/plain' // Always text/plain (content is extracted)
  contentHash: string // Metadata-based hash for change detection
  sourceUrl?: string // Link back to original (stored on document record)
  metadata?: Record<string, unknown> // Source-specific data (fed to mapTags)
}
```
## Content Deferral (Required for file/content-download connectors)
**All connectors that require per-document API calls to fetch content MUST use `contentDeferred: true`.** This is the standard pattern — `listDocuments` returns lightweight metadata stubs, and content is fetched lazily by the sync engine via `getDocument` only for new/changed documents.
This pattern is critical for reliability: the sync engine processes documents in batches and enqueues each batch for processing immediately. If a sync times out, all previously-batched documents are already queued. Without deferral, content downloads during listing can exhaust the sync task's time budget before any documents are saved.
### When to use `contentDeferred: true`
- The service's list API does NOT return document content (only metadata)
- Content requires a separate download/export API call per document
### When NOT to use `contentDeferred`
- The list API already returns the full content inline (e.g., Slack messages, Reddit posts, HubSpot notes)
- No per-document API call is needed to get content
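A deferred stub returned from `listDocuments` might look like this (all values are illustrative):

```typescript
// Lightweight stub: content is empty and contentDeferred is set, so the
// sync engine fetches content lazily via getDocument, and only for
// documents it detects as new or changed.
const stub = {
  externalId: 'file-123', // source-specific ID (illustrative)
  title: 'Quarterly Report',
  content: '', // empty because contentDeferred is true
  contentDeferred: true,
  mimeType: 'text/plain' as const,
  contentHash: 'a1b2c3', // metadata-based hash (illustrative)
  sourceUrl: 'https://example.com/files/file-123',
}
```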
### Content Hash Strategy
Use a **metadata-based** `contentHash` — never a content-based hash. The hash must be derivable from the list response metadata alone, so the sync engine can detect changes without downloading content.
Good metadata hash sources:
- `modifiedTime` / `lastModifiedDateTime` — changes when file is edited
**Critical invariant:** The `contentHash` MUST be identical whether produced by `listDocuments` (stub) or `getDocument` (full doc). Both should use the same stub function to guarantee this.
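One way to satisfy the invariant is a single metadata-hash helper shared by both paths (a sketch — the metadata fields chosen here are illustrative):

```typescript
import { createHash } from 'node:crypto'

// Shared helper: both listDocuments (stub) and getDocument (full doc)
// derive contentHash from the same metadata fields, so the two hashes
// are guaranteed to match for the same document version.
interface StubMeta {
  externalId: string
  modifiedTime: string // changes whenever the source document is edited
}

function metadataContentHash(meta: StubMeta): string {
  return createHash('sha256')
    .update(`${meta.externalId}:${meta.modifiedTime}`)
    .digest('hex')
}
```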
All external API calls must use `fetchWithRetry` from `@/lib/knowledge/documents/utils` instead of raw `fetch()`. This provides exponential backoff with retries on 429/502/503/504 errors. It returns a standard `Response` — all `.ok`, `.json()`, `.text()` checks work unchanged.
For `validateConfig` (user-facing, called on save), pass `VALIDATE_RETRY_OPTIONS` to cap wait time at ~7s. Background operations (`listDocuments`, `getDocument`) use the built-in defaults (5 retries, ~31s max).
If `ExternalDocument.sourceUrl` is set, the sync engine stores it on the document record. Always construct the full URL (not a relative path).
## Sync Engine Behavior (Do Not Modify)
The sync engine (`lib/knowledge/connectors/sync-engine.ts`) is connector-agnostic. It:
1. Calls `listDocuments` with pagination until `hasMore` is false
2. Compares `contentHash` to detect new/changed/unchanged documents
3. Stores `sourceUrl` and calls `mapTags` on insert/update automatically
4. Handles soft-delete of removed documents
5. Resolves access tokens automatically — OAuth tokens are refreshed, API keys are decrypted from the `encryptedApiKey` column
You never need to modify the sync engine when adding a connector.
## Icon
The `icon` field on `ConnectorConfig` is used throughout the UI — in the connector list, the add-connector modal, and as the document icon in the knowledge base table (replacing the generic file type icon for connector-sourced documents). The icon is read from `CONNECTOR_REGISTRY[connectorType].icon` at runtime — no separate icon map to maintain.
If the service already has an icon in `apps/sim/components/icons.tsx` (from a tool integration), reuse it. Otherwise, ask the user to provide the SVG.
## Registering
Add one line to `apps/sim/connectors/registry.ts`:
description: Add a complete Sim integration from API docs, covering tools, block, icon, optional triggers, registrations, and integration conventions. Use when introducing a new service under `apps/sim/tools`, `apps/sim/blocks`, and `apps/sim/triggers`.
---
# Add Integration Skill
You are an expert at adding complete integrations to Sim. This skill orchestrates the full process of adding a new service integration.
## Overview
Adding an integration involves these steps in order:
1. **Research** - Read the service's API documentation
2. **Create Tools** - Build tool configurations for each API operation
3. **Create Block** - Build the block UI configuration
4. **Add Icon** - Add the service's brand icon
5. **Create Triggers** (optional) - If the service supports webhooks
6. **Register** - Register tools, block, and triggers in their registries
7. **Generate Docs** - Run the docs generation script
## Step 1: Research the API
Before writing any code:
1. Use Context7 to find official documentation: `mcp__plugin_context7_context7__resolve-library-id`
2. Or use WebFetch to read API docs directly
3. Identify:
- Authentication method (OAuth, API Key, both)
- Available operations (CRUD, search, etc.)
- Required vs optional parameters
- Response structures
### Hard Rule: No Guessed Response Schemas
If the official docs do not clearly show the response JSON shape for an endpoint, you MUST stop and tell the user exactly which outputs are unknown.
- Do NOT guess response field names
- Do NOT infer nested JSON paths from related endpoints
- Do NOT invent output properties just because they seem likely
- Do NOT implement `transformResponse` against unverified payload shapes
If response schemas are missing or incomplete, do one of the following before proceeding:
1. Ask the user for sample responses
2. Ask the user for test credentials so you can verify the live payload
3. Reduce the scope to only endpoints whose response shapes are documented
4. Leave the tool unimplemented and explicitly report why
- `visibility: 'user-only'` for API keys and user credentials
- `visibility: 'user-or-llm'` for operation parameters
- Always use `?? null` for nullable API response fields
- Always use `?? []` for optional array fields
- Set `optional: true` for outputs that may not exist
- Never output raw JSON dumps - extract meaningful fields
- When using `type: 'json'` and you know the object shape, define `properties` with the inner fields so downstream consumers know the structure. Only use bare `type: 'json'` when the shape is truly dynamic
- If you do not know the response JSON shape from docs or verified examples, you MUST tell the user and stop. Never guess outputs or response mappings.
- `canonicalParamId` must NOT match any subblock's `id` in the block
- `canonicalParamId` must be unique per operation/condition context
- Only use `canonicalParamId` to link basic/advanced alternatives for the same logical parameter
- `mode` only controls UI visibility, NOT serialization. Without `canonicalParamId`, both basic and advanced field values would be sent
- Every subblock `id` must be unique within the block. Duplicate IDs cause conflicts even with different conditions
- **Required consistency:** If one subblock in a canonical group has `required: true`, ALL subblocks in that group must have `required: true` (prevents bypassing validation by switching modes)
- **Inputs section:** Must list canonical param IDs (e.g., `fileId`), NOT raw subblock IDs (e.g., `fileSelector`, `manualFileId`)
- **Params function:** Must use canonical param IDs, NOT raw subblock IDs (raw IDs are deleted after canonical transformation)
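A sketch of an output transform following the nullable-field rules above (the response shape is invented for illustration, not a real service schema):

```typescript
// Hypothetical raw API response — not a real service schema.
interface RawUser {
  id: string
  email?: string | null
  tags?: string[]
}

const transformResponse = (data: RawUser) => ({
  success: true,
  output: {
    id: data.id,
    email: data.email ?? null, // nullable API field: coerce to null, never undefined
    tags: data.tags ?? [], // optional array field: coerce to empty array
  },
})
```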
- [ ] Secondary triggers do NOT have `includeDropdown`
- [ ] All triggers use `buildTriggerSubBlocks` helper
- [ ] Created `index.ts` barrel export
- [ ] Registered all triggers in `triggers/registry.ts`
### Docs
- [ ] Ran `bun run scripts/generate-docs.ts`
- [ ] Verified docs file created
### Final Validation (Required)
- [ ] Read every tool file and cross-referenced inputs/outputs against the API docs
- [ ] Verified block subBlocks cover all required tool params with correct conditions
- [ ] Verified block outputs match what the tools actually return
- [ ] Verified `tools.config.params` correctly maps and coerces all param types
- [ ] Verified every tool output and `transformResponse` path against documented or live-verified JSON responses
- [ ] If any response schema remained unknown, explicitly told the user instead of guessing
## Example Command
When the user asks to add an integration:
```
User: Add a Stripe integration
You: I'll add the Stripe integration. Let me:
1. First, research the Stripe API using Context7
2. Create the tools for key operations (payments, subscriptions, etc.)
3. Create the block with operation dropdown
4. Register everything
5. Generate docs
6. Ask you for the Stripe icon SVG
[Proceed with implementation...]
[After completing steps 1-5...]
I've completed the Stripe integration. Before I can add the icon, please provide the SVG for Stripe.
You can usually find this in the service's brand/press kit page, or copy it from their website.
Paste the SVG code here and I'll convert it to a React component.
```
## File Handling
When your integration handles file uploads or downloads, follow these patterns to work with `UserFile` objects consistently.
### What is a UserFile?
A `UserFile` is the standard file representation in Sim:
```typescript
interface UserFile {
  id: string // Unique identifier
  name: string // Original filename
  url: string // Presigned URL for download
  size: number // File size in bytes
  type: string // MIME type (e.g., 'application/pdf')
  base64?: string // Optional base64 content (if small file)
  key?: string // Internal storage key
  context?: object // Storage context metadata
}
```
### File Input Pattern (Uploads)
For tools that accept file uploads, **always route through an internal API endpoint** rather than calling external APIs directly. This ensures proper file content retrieval.
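A minimal sketch of this routing pattern, assuming a hypothetical internal endpoint and body field names (the real route and fields depend on your integration):

```typescript
// Sketch of building a request that routes a UserFile through an internal
// endpoint instead of calling the external API directly. The endpoint path
// and body field names here are hypothetical.
interface UserFile {
  id: string
  name: string
  url: string
  size: number
  type: string
}

function buildUploadRequest(file: UserFile, accessToken: string) {
  return {
    url: '/api/tools/example/upload', // hypothetical internal route
    method: 'POST' as const,
    body: {
      fileId: file.id, // the internal route resolves file content from storage
      fileName: file.name,
      accessToken,
    },
  }
}
```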
Optional fields that are rarely used should be set to `mode: 'advanced'` so they don't clutter the basic UI. Examples: pagination tokens, time range filters, sort order, max results, reply settings.
### WandConfig for Complex Inputs
Use `wandConfig` for fields that are hard to fill out manually:
- **Timestamps**: Use `generationType: 'timestamp'` to inject current date context into the AI prompt
- **JSON arrays**: Use `generationType: 'json-object'` for structured data
- **Complex queries**: Use a descriptive prompt explaining the expected format
```typescript
{
  id: 'startTime',
  title: 'Start Time',
  type: 'short-input',
  mode: 'advanced',
  wandConfig: {
    enabled: true,
    prompt: 'Generate an ISO 8601 timestamp. Return ONLY the timestamp string.',
    generationType: 'timestamp',
  },
}
```
### OAuth Scopes (Centralized System)
Scopes are maintained in a single source of truth and reused everywhere:
1. **Define scopes** in `lib/oauth/oauth.ts` under `OAUTH_PROVIDERS[provider].services[service].scopes`
2. **Add descriptions** in `SCOPE_DESCRIPTIONS` within `lib/oauth/utils.ts` for the OAuth modal UI
3. **Reference in auth.ts** using `getCanonicalScopesForProvider(providerId)` from `@/lib/oauth/utils`
4. **Reference in blocks** using `getScopesForService(serviceId)` from `@/lib/oauth/utils`
**Never hardcode scope arrays** in `auth.ts` or block `requiredScopes`. Always import from the centralized source.
1. **OAuth serviceId must match** - The `serviceId` in oauth-input must match the OAuth provider configuration
2. **All tool IDs MUST be snake_case** - `stripe_create_payment`, not `stripeCreatePayment`. This applies to tool `id` fields, registry keys, `tools.access` arrays, and `tools.config.tool` return values
3. **Block type is snake_case** - `type: 'stripe'`, not `type: 'Stripe'`
4. **Alphabetical ordering** - Keep imports and registry entries alphabetically sorted
5. **Required can be conditional** - Use `required: { field: 'op', value: 'create' }` instead of always true
6. **DependsOn clears options** - When a dependency changes, selector options are refetched
7. **Never pass Buffer directly to fetch** - Convert to `new Uint8Array(buffer)` for TypeScript compatibility
description: Create or update Sim tool configurations from service API docs, including typed params, request mapping, response transforms, outputs, and registry entries. Use when working in `apps/sim/tools/{service}/` or fixing tool definitions for an integration.
---
# Add Tools Skill
You are an expert at creating tool configurations for Sim integrations. Your job is to read API documentation and create properly structured tool files.
## Your Task
When the user asks you to create tools for a service:
1. Use Context7 or WebFetch to read the service's API documentation
2. Create the tools directory structure
3. Generate properly typed tool configurations
## Hard Rule: No Guessed Response Schemas
If the docs do not clearly show the response JSON for a tool, you MUST tell the user exactly which outputs are unknown and stop short of guessing.
- Do NOT invent response field names
- Do NOT infer nested paths from nearby endpoints
- Do NOT guess array item shapes
- Do NOT write `transformResponse` against unverified payloads
If the response shape is unknown, do one of these instead:
1. Ask the user for sample responses
2. Ask the user for test credentials so you can verify live responses
3. Implement only the endpoints whose outputs are documented
4. Leave the tool unimplemented and explicitly say why
## Directory Structure
Create files in `apps/sim/tools/{service}/`:
```
tools/{service}/
├── index.ts # Barrel export
├── types.ts # Parameter & response types
└── {action}.ts # Individual tool files (one per operation)
// Trim ID fields to prevent copy-paste whitespace errors:
// userId: params.userId?.trim(),
    }),
  },
  transformResponse: async (response: Response) => {
    const data = await response.json()
    return {
      success: true,
      output: {
        // Map API response to output
        // Use ?? null for nullable fields
        // Use ?? [] for optional arrays
      },
    }
  },
  outputs: {
    // Define each output field
  },
}
```
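Since the example above is abridged, here is a hedged end-to-end sketch of a single tool file. The service, endpoint, and field names are illustrative; the exact config type lives in the repo's tool type definitions.

```typescript
// Hypothetical tool file following the conventions in this skill:
// snake_case ID, explicit param visibility, nullable-safe transformResponse,
// and typed outputs. 'api.example.com' is a stand-in, not a real API.
export const exampleGetItemTool = {
  id: 'example_get_item', // snake_case tool ID
  name: 'Example Get Item',
  version: '1.0.0',
  params: {
    accessToken: { type: 'string', required: true, visibility: 'hidden' },
    itemId: { type: 'string', required: true, visibility: 'user-or-llm' },
  },
  request: {
    url: (params: { itemId: string }) =>
      `https://api.example.com/v1/items/${encodeURIComponent(params.itemId.trim())}`,
    method: 'GET',
    headers: (params: { accessToken: string }) => ({
      Authorization: `Bearer ${params.accessToken}`,
    }),
  },
  transformResponse: async (response: Response) => {
    const data = await response.json()
    return {
      success: true,
      output: {
        id: data.id,
        name: data.name,
        description: data.description ?? null, // nullable field
        tags: data.tags ?? [],                 // optional array
      },
    }
  },
  outputs: {
    id: { type: 'string', description: 'Item ID' },
    name: { type: 'string', description: 'Item name' },
    description: { type: 'string', description: 'Item description', optional: true },
    tags: { type: 'array', description: 'Item tags', items: { type: 'string' } },
  },
}
```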
## Critical Rules for Parameters
### Visibility Options
- `'hidden'` - System-injected (OAuth tokens, internal params). User never sees.
- `'user-only'` - User must provide (credentials, API keys, account-specific IDs)
- `'user-or-llm'` - User provides OR LLM can compute (search queries, content, filters; most params fall into this category)
### Parameter Types
- `'string'` - Text values
- `'number'` - Numeric values
- `'boolean'` - True/false
- `'json'` - Complex objects (NOT 'object', use 'json')
- `'file'` - Single file
- `'file[]'` - Multiple files
### Required vs Optional
- Always explicitly set `required: true` or `required: false`
- Optional params should have `required: false`
## Critical Rules for Outputs
### Output Types
- `'string'`, `'number'`, `'boolean'` - Primitives
- `'json'` - Complex objects (use this, NOT 'object')
- `'array'` - Arrays with `items` property
- `'object'` - Objects with `properties` property
### Optional Outputs
Add `optional: true` for fields that may not exist in the response:
```typescript
closedAt: {
  type: 'string',
  description: 'When the issue was closed',
  optional: true,
},
```
### Typed JSON Outputs
When using `type: 'json'` and you know the object shape in advance, **always define the inner structure** using `properties` so downstream consumers know what fields are available:
```typescript
// BAD: Opaque json with no info about what's inside
metadata: {
  type: 'json',
  description: 'Item metadata',
},

// GOOD: json with `properties` describing the known shape
metadata: {
  type: 'json',
  description: 'Item metadata',
  properties: {
    createdAt: { type: 'string', description: 'Creation timestamp' },
    updatedAt: { type: 'string', description: 'Last update timestamp' },
  },
},
```
Only use bare `type: 'json'` without `properties` when the shape is truly dynamic or unknown.
If the response shape is unknown because the docs do not provide it, you MUST tell the user and stop. Unknown is not the same as dynamic. Never guess outputs.
## Critical Rules for transformResponse
### Handle Nullable Fields
ALWAYS use `?? null` for fields that may be undefined:
```typescript
transformResponse: async (response: Response) => {
  const data = await response.json()
  return {
    success: true,
    output: {
      id: data.id,
      title: data.title,
      body: data.body ?? null,          // May be undefined
      assignee: data.assignee ?? null,  // May be undefined
      labels: data.labels ?? [],        // Default to empty array
      closedAt: data.closed_at ?? null, // May be undefined
    },
  }
}
```
### Never Output Raw JSON Dumps
DON'T do this:
```typescript
output: {
  data: data, // BAD - raw JSON dump
}
```
DO this instead - extract meaningful fields:
```typescript
output: {
  id: data.id,
  name: data.name,
  status: data.status,
  metadata: {
    createdAt: data.created_at,
    updatedAt: data.updated_at,
  },
}
```
## Types File Pattern
Create `types.ts` with interfaces for all params and responses:
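A hedged sketch of what such a `types.ts` can look like. The interface names and fields are illustrative; mirror the actual API's parameters and documented response shapes.

```typescript
// Hypothetical types.ts for a service integration: one params interface per
// tool, plus response interfaces matching the documented API output.
export interface ExampleGetItemParams {
  accessToken: string
  itemId: string
}

export interface ExampleGetItemResponse {
  success: boolean
  output: {
    id: string
    name: string
    description: string | null // nullable fields are typed as | null
    tags: string[]
  }
}
```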
2. Add to the `tools` object with snake_case keys (alphabetically):
```typescript
import { serviceActionTool } from '@/tools/{service}'

export const tools = {
  // ... existing tools ...
  {service}_{action}: serviceActionTool,
}
```
## Wiring Tools into the Block (Required)
After registering in `tools/registry.ts`, you MUST also update the block definition at `apps/sim/blocks/blocks/{service}.ts`. This is not optional — tools are only usable from the UI if they are wired into the block.
### 1. Add to `tools.access`
```typescript
tools: {
  access: [
    // existing tools...
    'service_new_action', // Add every new tool ID here
  ],
  config: { ... }
}
```
### 2. Add operation dropdown options
If the block uses an operation dropdown, add an option for each new tool:
```typescript
{
  id: 'operation',
  type: 'dropdown',
  options: [
    // existing options...
    { label: 'New Action', id: 'new_action' }, // id maps to what tools.config.tool returns
  ],
}
```
### 3. Add subBlocks for new tool params
For each new tool, add subBlocks covering all its required params (and optional ones where useful). Apply `condition` to show them only for the right operation, and mark required params with `required`:
```typescript
// Required param for new_action
{
  id: 'someParam',
  title: 'Some Param',
  type: 'short-input',
  placeholder: 'e.g., value',
  condition: { field: 'operation', value: 'new_action' },
  required: { field: 'operation', value: 'new_action' },
},
// Optional param — put in advanced mode
{
  id: 'optionalParam',
  title: 'Optional Param',
  type: 'short-input',
  condition: { field: 'operation', value: 'new_action' },
  mode: 'advanced',
},
```
### 4. Update `tools.config.tool`
Ensure the tool selector returns the correct tool ID for every new operation. The simplest pattern:
```typescript
tool: (params) => `service_${params.operation}`,
// If operation dropdown IDs already match tool IDs, this requires no change.
```
If the dropdown IDs differ from tool IDs, add explicit mappings:
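A hedged sketch of such a mapping, with illustrative operation and tool names, wrapped in a standalone function for clarity:

```typescript
// Explicit dropdown-ID to tool-ID mapping for when the two diverge.
// In the block config this body would be the `tool` function.
function selectTool(params: { operation: string }): string {
  const map: Record<string, string> = {
    send: 'service_send_message',
    fetch: 'service_get_messages',
  }
  const toolId = map[params.operation]
  if (!toolId) throw new Error(`Unknown operation: ${params.operation}`)
  return toolId
}
```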
- [ ] Operation dropdown has an option for each new tool
- [ ] SubBlocks cover all required params for each new tool
- [ ] SubBlocks have correct `condition` (only show for the right operation)
- [ ] Optional/rarely-used params set to `mode: 'advanced'`
- [ ] `tools.config.tool` returns correct ID for every new operation
- [ ] `tools.config.params` handles any ID remapping or type coercions
- [ ] New outputs added to block `outputs`
- [ ] New params added to block `inputs`
## V2 Tool Pattern
If creating V2 tools (API-aligned outputs), use `_v2` suffix:
- Tool ID: `{service}_{action}_v2`
- Variable name: `{action}V2Tool`
- Version: `'2.0.0'`
- Outputs: Flat, API-aligned (no content/metadata wrapper)
## Naming Convention
All tool IDs MUST use `snake_case`: `{service}_{action}` (e.g., `x_create_tweet`, `slack_send_message`). Never use camelCase or PascalCase for tool IDs.
## Checklist Before Finishing
- [ ] All tool IDs use snake_case
- [ ] All params have explicit `required: true` or `required: false`
- [ ] All params have appropriate `visibility`
- [ ] All nullable response fields use `?? null`
- [ ] All optional outputs have `optional: true`
- [ ] No raw JSON dumps in outputs
- [ ] Types file has all interfaces
- [ ] Index.ts exports all tools and re-exports types (`export * from './types'`)
description: Create or update Sim webhook triggers using the generic trigger builder, service-specific setup instructions, outputs, and registry wiring. Use when working in `apps/sim/triggers/{service}/` or adding webhook support to an integration.
---
# Add Trigger
You are an expert at creating webhook triggers for Sim. You understand the trigger system, the generic `buildTriggerSubBlocks` helper, and how triggers connect to blocks.
## Your Task
1. Research what webhook events the service supports
2. Create the trigger files using the generic builder
3. Create a provider handler if custom auth, formatting, or subscriptions are needed
4. Register triggers and connect them to the block
## Hard Rule: No Guessed Webhook Payload Schemas
If the service docs do not clearly show the webhook payload JSON for an event, you MUST tell the user instead of guessing trigger outputs or `formatInput` mappings.
- Do NOT invent payload field names
- Do NOT guess nested event object paths
- Do NOT infer output fields from the UI or marketing docs
- Do NOT write `formatInput` against unverified webhook bodies
If the payload shape is unknown, do one of these instead:
1. Ask the user for sample webhook payloads
2. Ask the user for a test webhook source so you can inspect a real event
3. Implement only the event registration/setup portions whose payloads are documented
4. Leave the trigger unimplemented and explicitly say which payload fields are unknown
## Directory Structure
```
apps/sim/triggers/{service}/
├── index.ts # Barrel exports
├── utils.ts # Service-specific helpers (options, instructions, extra fields, outputs)
```
If the service API supports programmatic webhook creation, implement `createSubscription` and `deleteSubscription` on the handler. The orchestration layer calls these automatically — **no code touches `route.ts`, `provider-subscriptions.ts`, or `deploy.ts`**.
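A hypothetical shape for these two methods; the real handler interface lives in the triggers provider types, so treat the names, signatures, and endpoint here as assumptions:

```typescript
// Sketch of programmatic webhook subscription management. The API URL,
// event names, and method signatures are illustrative assumptions.
const exampleHandler = {
  async createSubscription(webhookUrl: string, accessToken: string): Promise<string> {
    const res = await fetch('https://api.example.com/v1/webhooks', {
      method: 'POST',
      headers: { Authorization: `Bearer ${accessToken}`, 'Content-Type': 'application/json' },
      body: JSON.stringify({ url: webhookUrl, events: ['item.created'] }),
    })
    if (!res.ok) throw new Error(`Subscription failed: ${res.status}`)
    const data = await res.json()
    return data.id // subscription ID, stored so it can be deleted later
  },
  async deleteSubscription(subscriptionId: string, accessToken: string): Promise<void> {
    await fetch(`https://api.example.com/v1/webhooks/${subscriptionId}`, {
      method: 'DELETE',
      headers: { Authorization: `Bearer ${accessToken}` },
    })
  },
}
```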
description: Run all code quality skills in sequence — effects, memo, callbacks, state, React Query, and emcn design review
---
# Cleanup
Arguments:
- scope: what to review (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Steps
Run each of these skills in order on the specified scope, passing through the scope and fix arguments. After each skill completes, move to the next. Do not skip any.
1. `/you-might-not-need-an-effect $ARGUMENTS`
2. `/you-might-not-need-a-memo $ARGUMENTS`
3. `/you-might-not-need-a-callback $ARGUMENTS`
4. `/you-might-not-need-state $ARGUMENTS`
5. `/react-query-best-practices $ARGUMENTS`
6. `/emcn-design-review $ARGUMENTS`
After all skills have run, output a summary of what was found and fixed (or proposed) across all six passes.
description: Review UI code for alignment with the emcn design system — components, tokens, patterns, and conventions
---
# EMCN Design Review
Arguments:
- scope: what to review (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses **emcn**, a custom component library built on Radix UI primitives with CVA (class-variance-authority) variants and CSS variable design tokens. All UI must use emcn components and tokens — never raw HTML elements or hardcoded colors.
## Steps
1. Read the emcn barrel export at `apps/sim/components/emcn/components/index.ts` to know what's available
2. Read `apps/sim/app/_styles/globals.css` for the full set of CSS variable tokens
3. Analyze the specified scope against every rule below
4. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.
---
## Imports
- Import components from `@/components/emcn`, never from subpaths
- Import icons from `@/components/emcn/icons` or `lucide-react`
- Import `cn` from `@/lib/core/utils/cn` for conditional class merging
- Import app-specific wrappers (Select, VerifiedBadge) from `@/components/ui`
Never use raw color values. Always use CSS variable tokens via Tailwind arbitrary values: `text-[var(--text-primary)]`, not `text-gray-500` or `#333`. The CSS variable pattern is canonical (1,700+ uses) — do not use Tailwind semantic classes like `text-muted-foreground`.
### Text hierarchy
| Token | Use |
|-------|-----|
| `text-[var(--text-primary)]` | Main content text |
| `text-[var(--text-secondary)]` | Secondary/supporting text |
| `text-[var(--text-tertiary)]` | Tertiary text |
| `text-[var(--text-muted)]` | Disabled, placeholder text |
| `text-[var(--text-icon)]` | Icon tinting |
| `text-[var(--text-inverse)]` | Text on dark backgrounds |
Use `text-[var(--text-icon)]` for icon color (113+ uses in codebase).
---
## Styling Rules
1. **Use `cn()` for conditional classes**: `cn('base', condition && 'conditional')` — never template literal concatenation like `` `base ${condition ? 'active' : ''}` ``
2. **Inline styles**: Avoid. Exception: dynamic values that can't be expressed as Tailwind classes (e.g., `style={{ width: dynamicVar }}` or CSS variable references). Never use inline styles for colors or static values.
3. **Never hardcode colors**: Use CSS variable tokens. Never `text-gray-500`, `bg-red-100`, `#fff`, or `rgb()`. Always `text-[var(--text-*)]`, `bg-[var(--surface-*)]`, etc.
4. **Never use Tailwind semantic color classes**: Use `text-[var(--text-muted)]` not `text-muted-foreground`. The CSS variable pattern is canonical.
5. **Never use global styles**: Keep all styling local to components
6. **Hover states**: Use `hover-hover:` pseudo-class for hover-capable devices
7. **Transitions**: Use `transition-colors` for color changes, `transition-colors duration-100` for fast hover
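A small sketch combining rules 1 and 3: conditional classes via `cn()` built entirely from CSS variable tokens. The `cn` stand-in below is a minimal local version of the project's helper from `@/lib/core/utils/cn`.

```typescript
// Minimal stand-in for the project's cn() helper (class merging).
function cn(...classes: Array<string | false | undefined>): string {
  return classes.filter(Boolean).join(' ')
}

// Conditional token-based classes: secondary text by default,
// primary text when active. No hardcoded colors, no template literals.
function labelClasses(active: boolean): string {
  return cn(
    'text-[var(--text-secondary)] transition-colors',
    active && 'text-[var(--text-primary)]'
  )
}
```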
description: Audit React Query usage for best practices — key factories, staleTime, mutations, and server state ownership
---
# React Query Best Practices
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/hooks/queries/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses React Query (TanStack Query) as the single source of truth for all server state. All query hooks live in `hooks/queries/`. Zustand is used only for client-only UI state. Server data must never be duplicated into useState or Zustand outside of mutation callbacks that coordinate cross-store state.
## References
Read these before analyzing:
1. https://tkdodo.eu/blog/practical-react-query — foundational defaults, custom hooks, avoiding local state copies
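The query key factory pattern recommended in those references can be sketched as follows. The `workflowKeys` name and key segments are illustrative, not the codebase's actual factory:

```typescript
// Query key factory: keys go from broad to narrow so invalidation can
// target any level (all workflows, all lists, one list, one detail).
export const workflowKeys = {
  all: ['workflows'] as const,
  lists: () => [...workflowKeys.all, 'list'] as const,
  list: (workspaceId: string) => [...workflowKeys.lists(), workspaceId] as const,
  detail: (id: string) => [...workflowKeys.all, 'detail', id] as const,
}
```

Invalidating `workflowKeys.all` then covers every list and detail query at once, while `workflowKeys.list(workspaceId)` targets a single workspace.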
description: Audit an existing Sim knowledge base connector against the service API docs and repository conventions, then report and fix issues in auth, config fields, pagination, document mapping, tags, and registry entries. Use when validating or repairing code in `apps/sim/connectors/{service}/`.
---
# Validate Connector Skill
You are an expert auditor for Sim knowledge base connectors. Your job is to thoroughly validate that an existing connector is correct, complete, and follows all conventions.
## Your Task
When the user asks you to validate a connector:
1. Read the service's API documentation (via Context7 or WebFetch)
2. Read the connector implementation, OAuth config, and registry entries
3. Cross-reference everything against the API docs and Sim conventions
4. Report all issues found, grouped by severity (critical, warning, suggestion)
5. Fix all issues after reporting them
## Step 1: Gather All Files
Read **every** file for the connector — do not skip any:
Fetch the official API docs for the service. This is the **source of truth** for:
- Endpoint URLs, HTTP methods, and auth headers
- Required vs optional parameters
- Parameter types and allowed values
- Response shapes and field names
- Pagination patterns (cursor, offset, next token)
- Rate limits and error formats
- OAuth scopes and their meanings
Use Context7 (resolve-library-id → query-docs) or WebFetch to retrieve documentation. If both fail, note which claims are based on training knowledge vs verified docs.
### Hard Rule: No Guessed Source Schemas
If the service docs do not clearly show document list responses, document fetch responses, metadata fields, or pagination shapes, you MUST tell the user instead of guessing.
- Do NOT infer document fields from unrelated endpoints
- Do NOT guess pagination cursors or response wrappers
- Do NOT assume metadata keys that are not documented
- Do NOT treat probable shapes as validated
If a schema is unknown, validation must explicitly recommend:
1. sample API responses,
2. live test credentials, or
3. trimming the connector to only documented fields.
## Step 3: Validate API Endpoints
For **every** API call in the connector (`listDocuments`, `getDocument`, `validateConfig`, and any helper functions), verify against the API docs:
### URLs and Methods
- [ ] Base URL is correct for the service's API version
- [ ] Endpoint paths match the API docs exactly
- [ ] HTTP method is correct (GET, POST, PUT, PATCH, DELETE)
- [ ] Path parameters are correctly interpolated and URI-encoded where needed
- [ ] Query parameters use correct names and formats per the API docs
### Headers
- [ ] Authorization header uses the correct format:
- OAuth: `Authorization: Bearer ${accessToken}`
- API Key: correct header name per the service's docs
- [ ] `Content-Type` is set for POST/PUT/PATCH requests
- [ ] Any service-specific headers are present (e.g., `Notion-Version`, `Dropbox-API-Arg`)
- [ ] No headers are sent that the API doesn't support or silently ignores
### Request Bodies
- [ ] POST/PUT body fields match API parameter names exactly
- [ ] Required fields are always sent
- [ ] Optional fields are conditionally included (not sent as `null` or empty unless the API expects that)
- [ ] Field value types match API expectations (string vs number vs boolean)
### Input Sanitization
- [ ] User-controlled values interpolated into query strings are properly escaped:
- OData `$filter`: single quotes escaped with `''` (e.g., `externalId.replace(/'/g, "''")`)
- SOQL: single quotes escaped with `\'`
- GraphQL variables: passed as variables, not interpolated into query strings
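The escaping rules above can be sketched as small helpers (the function names are illustrative):

```typescript
// OData doubles single quotes inside string literals.
function escapeODataLiteral(value: string): string {
  return value.replace(/'/g, "''")
}

// SOQL backslash-escapes quotes; escape backslashes first so existing
// backslashes are not double-processed.
function escapeSoqlLiteral(value: string): string {
  return value.replace(/\\/g, '\\\\').replace(/'/g, "\\'")
}

// Safe interpolation into an OData $filter:
const filter = `externalId eq '${escapeODataLiteral("O'Brien")}'`
```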
Scopes must be correctly declared and sufficient for all API calls the connector makes.
### Connector requiredScopes
- [ ] `requiredScopes` in the connector's `auth` config lists all scopes needed by the connector
- [ ] Each scope in `requiredScopes` is a real, valid scope recognized by the service's API
- [ ] No invalid, deprecated, or made-up scopes are listed
- [ ] No unnecessary excess scopes beyond what the connector actually needs
### Scope Subset Validation (CRITICAL)
- [ ] Every scope in `requiredScopes` exists in the OAuth provider's `scopes` array in `lib/oauth/oauth.ts`
- [ ] Find the provider in `OAUTH_PROVIDERS[providerGroup].services[serviceId].scopes`
- [ ] Verify: `requiredScopes` ⊆ `OAUTH_PROVIDERS scopes` (every required scope is present in the provider config)
- [ ] If a required scope is NOT in the provider config, flag as **critical** — the connector will fail at runtime
### Scope Sufficiency
For each API endpoint the connector calls:
- [ ] Identify which scopes are required per the API docs
- [ ] Verify those scopes are included in the connector's `requiredScopes`
- [ ] If the connector calls endpoints requiring scopes not in `requiredScopes`, flag as **warning**
### Token Refresh Config
- [ ] Check the `getOAuthTokenRefreshConfig` function in `lib/oauth/oauth.ts` for this provider
- [ ] `useBasicAuth` matches the service's token exchange requirements
- [ ] `supportsRefreshTokenRotation` matches whether the service issues rotating refresh tokens
- [ ] Token endpoint URL is correct
## Step 5: Validate Pagination
### listDocuments Pagination
- [ ] Cursor/pagination parameter name matches the API docs
- [ ] Response pagination field is correctly extracted (e.g., `next_cursor`, `nextPageToken`, `@odata.nextLink`, `offset`)
- [ ] `hasMore` is correctly determined from the response
- [ ] `nextCursor` is correctly passed back for the next page
- [ ] `maxItems` / `maxRecords` cap is correctly applied across pages using `syncContext.totalDocsFetched`
- [ ] Page size is within the API's allowed range (not exceeding max page size)
- [ ] Last page precision: when a `maxItems` cap exists, the final page request uses `Math.min(PAGE_SIZE, remaining)` to avoid fetching more records than needed
- [ ] No off-by-one errors in pagination tracking
- [ ] The connector does NOT hit known API pagination limits silently (e.g., HubSpot search 10k cap)
### Pagination State Across Pages
- [ ] `syncContext` is used to cache state across pages (user names, field maps, instance URLs, portal IDs, etc.)
- [ ] Cached state in `syncContext` is correctly initialized on first page and reused on subsequent pages
## Step 6: Validate Data Transformation
### Content Deferral (CRITICAL)
Connectors that require per-document API calls to fetch content (file download, export, blocks fetch) MUST use `contentDeferred: true`. This is the standard pattern for reliability — without it, content downloads during listing can exhaust the sync task's time budget before any documents are saved.
- [ ] If the connector downloads content per-doc during `listDocuments`, it MUST use `contentDeferred: true` instead
- [ ] `listDocuments` returns lightweight stubs with `content: ''` and `contentDeferred: true`
- [ ] `getDocument` fetches actual content and returns the full document with `contentDeferred: false`
- [ ] A shared stub function (e.g., `fileToStub`) is used by both `listDocuments` and `getDocument` to guarantee `contentHash` consistency
- [ ] `contentHash` is metadata-based (e.g., `service:{id}:{modifiedTime}`), NOT content-based — it must be derivable from list metadata alone
- [ ] The `contentHash` is identical whether produced by `listDocuments` or `getDocument`
Connectors where the list API already returns content inline (e.g., Slack messages, Reddit posts) do NOT need `contentDeferred`.
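A hedged sketch of a shared stub builder satisfying the checks above. The `SourceFile` shape and the `example:` hash prefix are illustrative; the real `ExternalDocument` type comes from the connectors package:

```typescript
// Shared stub builder used by both listDocuments and getDocument so the
// contentHash is identical in both paths and derivable from list metadata.
interface SourceFile {
  id: string
  name: string
  modifiedTime: string
  webUrl: string
}

function fileToStub(file: SourceFile) {
  return {
    externalId: file.id,
    title: file.name || 'Untitled',
    content: '', // deferred: getDocument fills this in later
    contentDeferred: true,
    mimeType: 'text/plain',
    // Metadata-based hash, NOT content-based
    contentHash: `example:${file.id}:${file.modifiedTime}`,
    sourceUrl: file.webUrl,
    metadata: { modifiedTime: file.modifiedTime },
  }
}
```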
### ExternalDocument Construction
- [ ] `externalId` is a stable, unique identifier from the source API
- [ ] `title` is extracted from the correct field and has a sensible fallback (e.g., `'Untitled'`)
- [ ] `content` is plain text — HTML content is stripped using `htmlToPlainText` from `@/connectors/utils`
- [ ] `mimeType` is `'text/plain'`
- [ ] `contentHash` uses a metadata-based format (e.g., `service:{id}:{modifiedTime}`) for connectors with `contentDeferred: true`, or `computeContentHash` from `@/connectors/utils` for inline-content connectors
- [ ] `sourceUrl` is a valid, complete URL back to the original resource (not relative)
- [ ] `metadata` contains all fields referenced by `mapTags` and `tagDefinitions`
### Content Extraction
- [ ] Rich text / HTML fields are converted to plain text before indexing
- [ ] Important content is not silently dropped (e.g., nested blocks, table cells, code blocks)
- [ ] Content is not silently truncated without logging a warning
- [ ] Empty/blank documents are properly filtered out
- [ ] Size checks use `Buffer.byteLength(text, 'utf8')` not `text.length` when comparing against byte-based limits (e.g., `MAX_FILE_SIZE` in bytes)
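The byte-vs-character distinction in the last check matters because multi-byte characters make `text.length` an undercount against byte-based limits:

```typescript
// 'é' is 1 character but 2 bytes in UTF-8, so the two counts diverge.
const text = 'héllo'
const chars = text.length                     // character count: 5
const bytes = Buffer.byteLength(text, 'utf8') // byte count: 6
```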
## Step 7: Validate Tag Definitions and mapTags
### tagDefinitions
- [ ] Each `tagDefinition` has an `id`, `displayName`, and `fieldType`
- [ ] `fieldType` matches the actual data type: `'text'` for strings, `'number'` for numbers, `'date'` for dates, `'boolean'` for booleans
- [ ] Every `id` in `tagDefinitions` is returned by `mapTags`
- [ ] No `tagDefinition` references a field that `mapTags` never produces
### mapTags
- [ ] Return keys match `tagDefinition` `id` values exactly
- [ ] Date values are properly parsed using `parseTagDate` from `@/connectors/utils`
- [ ] Array values are properly joined using `joinTagArray` from `@/connectors/utils`
- [ ] Number values are validated (not `NaN`)
- [ ] Metadata field names accessed in `mapTags` match what `listDocuments`/`getDocument` store in `metadata`
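A sketch of a `mapTags` that satisfies these checks. `parseTagDate` and `joinTagArray` are the helpers named above; minimal local stand-ins are defined here so the sketch is self-contained, and the tag keys are illustrative:

```typescript
// Local stand-ins for @/connectors/utils helpers.
function parseTagDate(value: string | undefined): string | undefined {
  if (!value) return undefined
  const d = new Date(value)
  return Number.isNaN(d.getTime()) ? undefined : d.toISOString() // reject NaN dates
}

function joinTagArray(values: string[] | undefined): string | undefined {
  return values && values.length > 0 ? values.join(', ') : undefined
}

// Keys must match tagDefinition id values exactly, and the metadata
// fields accessed must match what listDocuments/getDocument stored.
function mapTags(metadata: Record<string, unknown>) {
  return {
    author: metadata.author as string | undefined,
    created_at: parseTagDate(metadata.createdAt as string | undefined),
    labels: joinTagArray(metadata.labels as string[] | undefined),
  }
}
```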
## Step 8: Validate Config Fields and Validation
### configFields
- [ ] Every field has `id`, `title`, `type`
- [ ] `required` is set explicitly (not omitted)
- [ ] Dropdown fields have `options` with `label` and `id` for each option
- [ ] Selector fields follow the canonical pair pattern:
- A `type: 'selector'` field with `selectorKey`, `canonicalParamId`, `mode: 'basic'`
- A `type: 'short-input'` field with the same `canonicalParamId`, `mode: 'advanced'`
- `required` is identical on both fields in the pair
- [ ] `selectorKey` values exist in the selector registry
- [ ] `dependsOn` references selector field `id` values, not `canonicalParamId`
### validateConfig
- [ ] Validates all required fields are present before making API calls
- [ ] Catches exceptions and returns user-friendly error messages
- [ ] Does NOT make expensive calls (full data listing, large queries)
## Step 9: Validate getDocument
- [ ] Fetches a single document by `externalId`
- [ ] Returns `null` for 404 / not found (does not throw)
- [ ] Returns the same `ExternalDocument` shape as `listDocuments`
- [ ] If `listDocuments` uses `contentDeferred: true`, `getDocument` MUST fetch actual content and return `contentDeferred: false`
- [ ] If `listDocuments` uses `contentDeferred: true`, `getDocument` MUST use the same stub function to ensure `contentHash` is identical
- [ ] Handles all content types that `listDocuments` can produce (e.g., if `listDocuments` returns both pages and blogposts, `getDocument` must handle both — not hardcode one endpoint)
- [ ] Forwards `syncContext` if it needs cached state (user names, field maps, etc.)
- [ ] Error handling is graceful (catches, logs, returns null or throws with context)
- [ ] Does not redundantly re-fetch data already included in the initial API response (e.g., if comments come back with the post, don't fetch them again separately)
## Step 10: Validate General Quality
### fetchWithRetry Usage
- [ ] All external API calls use `fetchWithRetry` from `@/lib/knowledge/documents/utils`
- [ ] No raw `fetch()` calls to external APIs
- [ ] `VALIDATE_RETRY_OPTIONS` used in `validateConfig`
- [ ] If `validateConfig` calls a shared helper (e.g., `linearGraphQL`, `resolveId`), that helper must accept and forward `retryOptions` to `fetchWithRetry`
- [ ] Default retry options used in `listDocuments`/`getDocument`
### API Efficiency
- [ ] APIs that support field selection (e.g., `$select`, `sysparm_fields`, `fields`) should request only the fields the connector needs — in both `listDocuments` AND `getDocument`
- [ ] No redundant API calls: if a helper already fetches data (e.g., site metadata), callers should reuse the result instead of making a second call for the same information
- [ ] Sequential per-item API calls (fetching details for each document in a loop) should be batched with `Promise.all` and a concurrency limit of 3-5
### Error Handling
- [ ] Individual document failures are caught and logged without aborting the sync
- [ ] API error responses include status codes in error messages
- [ ] No unhandled promise rejections in concurrent operations
### Concurrency
- [ ] Concurrent API calls use reasonable batch sizes (3-5 is typical)
- [ ] No unbounded `Promise.all` over large arrays
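A sketch of chunked per-item fetching with a small concurrency limit, matching the batching guidance above (the helper name is illustrative):

```typescript
// Runs fn over items in chunks of `limit` using Promise.all, so at most
// `limit` calls are in flight at once. Individual failures map to null
// instead of rejecting the whole batch (log them in real code).
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>
): Promise<Array<R | null>> {
  const results: Array<R | null> = []
  for (let i = 0; i < items.length; i += limit) {
    const chunk = items.slice(i, i + limit)
    const settled = await Promise.all(chunk.map((item) => fn(item).catch(() => null)))
    results.push(...settled)
  }
  return results
}
```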
### Logging
- [ ] Uses `createLogger` from `@sim/logger` (not `console.log`)
- [ ] Logs sync progress at `info` level
- [ ] Logs errors at `warn` or `error` level with context
### Registry
- [ ] Connector is exported from `connectors/{service}/index.ts`
- [ ] Connector is registered in `connectors/registry.ts`
- [ ] Registry key matches the connector's `id` field
## Step 11: Report and Fix
### Report Format
Group findings by severity:
**Critical** (will cause runtime errors, data loss, or auth failures):
- Wrong API endpoint URL or HTTP method
- Invalid or missing OAuth scopes (not in provider config)
- Incorrect response field mapping (accessing wrong path)
- SOQL/query fields that don't exist on the target object
- Pagination that silently hits undocumented API limits
- Missing error handling that would crash the sync
- `requiredScopes` not a subset of OAuth provider scopes
- Query/filter injection: user-controlled values interpolated into OData `$filter`, SOQL, or query strings without escaping
- Per-document content download in `listDocuments` without `contentDeferred: true` — causes sync timeouts for large document sets
- `contentHash` mismatch between `listDocuments` stub and `getDocument` return — causes unnecessary re-processing every sync
**Warning** (incorrect behavior, data quality issues, or convention violations):
- HTML content not stripped via `htmlToPlainText`
- `getDocument` not forwarding `syncContext`
- `getDocument` hardcoded to one content type when `listDocuments` returns multiple (e.g., only pages but not blogposts)
- Missing `tagDefinition` for metadata fields returned by `mapTags`
- Incorrect `useBasicAuth` or `supportsRefreshTokenRotation` in token refresh config
- Invalid scope names that the API doesn't recognize (even if silently ignored)
- Private resources excluded from name-based lookup despite scopes being available
- Silent data truncation without logging
- Size checks using `text.length` (character count) instead of `Buffer.byteLength` (byte count) for byte-based limits
- URL-type config fields not normalized (protocol prefix, trailing slashes cause API failures)
- `VALIDATE_RETRY_OPTIONS` not threaded through helper functions called by `validateConfig`
**Suggestion** (minor improvements):
- Missing incremental sync support despite API supporting it
- Overly broad scopes that could be narrowed (not wrong, but could be tighter)
- Source URL format could be more specific
- Missing `orderBy` for deterministic pagination
- Redundant API calls that could be cached in `syncContext`
- Sequential per-item API calls that could be batched with `Promise.all` (concurrency 3-5)
- API supports field selection but connector fetches all fields (e.g., missing `$select`, `sysparm_fields`, `fields`)
- `getDocument` re-fetches data already included in the initial API response (e.g., comments returned with post)
- Last page of pagination requests full `PAGE_SIZE` when fewer records remain (`Math.min(PAGE_SIZE, remaining)`)
### Fix All Issues
After reporting, fix every **critical** and **warning** issue. Apply **suggestions** where they don't add unnecessary complexity.
### Validation Output
After fixing, confirm:
1. `bun run lint` passes
2. TypeScript compiles clean
3. Re-read all modified files to verify fixes are correct
4. Any remaining unknown source schemas were explicitly reported to the user instead of guessed
description: Audit an existing Sim integration against the service API docs and repository conventions, then report and fix issues across tools, blocks, outputs, OAuth scopes, triggers, and registry entries. Use when validating or repairing a service integration under `apps/sim/tools`, `apps/sim/blocks`, or `apps/sim/triggers`.
---
# Validate Integration Skill
You are an expert auditor for Sim integrations. Your job is to thoroughly validate that an existing integration is correct, complete, and follows all conventions.
## Your Task
When the user asks you to validate an integration:
1. Read the service's API documentation (via WebFetch or Context7)
2. Read every tool, the block, and registry entries
3. Cross-reference everything against the API docs and Sim conventions
4. Report all issues found, grouped by severity (critical, warning, suggestion)
5. Fix all issues after reporting them
## Step 1: Gather All Files
Read **every** file for the integration — do not skip any:
```
apps/sim/tools/{service}/ # All tool files, types.ts, index.ts
```
- Credentials → `oauth-input` with correct `serviceId`
- [ ] Dropdown `value: () => 'default'` is set for dropdowns with a sensible default
### Advanced Mode
- [ ] Optional, rarely-used fields are set to `mode: 'advanced'`:
- Pagination tokens / next tokens
- Time range filters (start/end time)
- Sort order / direction options
- Max results / per page limits
- Reply settings / threading options
- Rarely used IDs (reply-to, quote-tweet, etc.)
- Exclude filters
- [ ] **Required** fields are NEVER set to `mode: 'advanced'`
- [ ] Fields that users fill in most of the time are NOT set to `mode: 'advanced'`
### WandConfig
- [ ] Timestamp fields have `wandConfig` with `generationType: 'timestamp'`
- [ ] Comma-separated list fields have `wandConfig` with a descriptive prompt
- [ ] Complex filter/query fields have `wandConfig` with format examples in the prompt
- [ ] All `wandConfig` prompts end with "Return ONLY the [format] - no explanations, no extra text."
- [ ] `wandConfig.placeholder` describes what to type in natural language
### Tools Config
- [ ] `tools.access` lists **every** tool ID the block can use — none missing
- [ ] `tools.config.tool` returns the correct tool ID for each operation
- [ ] Type coercions are in `tools.config.params` (runs at execution time), NOT in `tools.config.tool` (runs at serialization time before variable resolution)
- [ ] `tools.config.params` handles:
- `Number()` conversion for numeric params that come as strings from inputs
- `Boolean` / string-to-boolean conversion for toggle params
- Empty string → `undefined` conversion for optional dropdown values
- Any subBlock ID → tool param name remapping
- [ ] No `Number()`, `JSON.parse()`, or other coercions in `tools.config.tool` — these would destroy dynamic references like `<Block.output>`
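The coercion-placement rule above can be sketched as follows (tool IDs and param names are illustrative, not from a real integration):

```typescript
// tools.config.tool runs at serialization time — before references like
// <Block.output> are resolved — so it must only route to a tool ID.
// tools.config.params runs at execution time, so coercions are safe there.
const tools = {
  access: ['service_list', 'service_create'],
  config: {
    // Early: raw params may still contain unresolved references like '<Block.count>'
    tool: (params: Record<string, any>) =>
      params.operation === 'create' ? 'service_create' : 'service_list',
    // Execution time: values are resolved, so type coercions are safe here
    params: (params: Record<string, any>) => ({
      ...params,
      maxResults: params.maxResults ? Number(params.maxResults) : undefined,
      includeArchived: params.includeArchived === 'true' || params.includeArchived === true,
      folderId: params.folderId === '' ? undefined : params.folderId,
    }),
  },
}
```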
### Block Outputs
- [ ] Outputs cover the key fields returned by ALL tools (not just one operation)
description: Audit an existing Sim webhook trigger against the service's webhook API docs and repository conventions, then report and fix issues across trigger definitions, provider handler, output alignment, registration, and security. Use when validating or repairing a trigger under `apps/sim/triggers/{service}/` or `apps/sim/lib/webhooks/providers/{service}.ts`.
---
# Validate Trigger
You are an expert auditor for Sim webhook triggers. Your job is to validate that an existing trigger implementation is correct, complete, secure, and aligned across all layers.
## Your Task
1. Read the service's webhook/API documentation (via WebFetch)
2. Read every trigger file, provider handler, and registry entry
3. Cross-reference against the API docs and Sim conventions
4. Report all issues grouped by severity (critical, warning, suggestion)
5. Fix all issues after reporting them
## Step 1: Gather All Files
Read **every** file for the trigger — do not skip any:
```
apps/sim/triggers/{service}/ # All trigger files, utils.ts, index.ts
```
description: Analyze and fix useCallback anti-patterns in your code
---
# You Might Not Need a Callback
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## References
Read before analyzing:
1. https://react.dev/reference/react/useCallback — official docs on when useCallback is actually needed
## When useCallback IS needed
- Passing a callback to a child wrapped in `React.memo` (to preserve referential equality)
- The callback is a dependency of another hook (`useEffect`, `useMemo`)
- The callback is used in a custom hook that documents referential stability requirements
## Anti-patterns to detect
1. **useCallback on functions not passed as props or deps**: If the function is only called within the same component and isn't in any dependency array, useCallback adds overhead for no benefit. Just declare the function normally.
2. **useCallback with exhaustive deps that change every render**: If the dependency array includes values that change on every render, useCallback recalculates every time. The memoization is wasted. Either stabilize the deps (use refs) or remove the useCallback.
3. **useCallback on event handlers passed to native elements**: `<button onClick={handleClick}>` — native elements don't benefit from stable references. Only child components wrapped in React.memo do.
4. **useCallback wrapping a function that creates new objects/arrays**: If the callback returns `{ ...newObj }` or `[...newArr]`, memoizing the callback doesn't prevent the child from re-rendering due to new return values. The memoization is at the wrong level.
5. **useCallback with an empty dep array when deps are needed**: Stale closures — the callback captures outdated values. Either add proper deps or use refs for values that shouldn't trigger re-creation.
6. **Pairing useCallback with React.memo unnecessarily**: If the child component is cheap to render, neither useCallback nor React.memo adds value. Only optimize when you've measured a performance problem.
7. **useCallback in custom hooks that don't need stable references**: Not every hook return needs to be memoized. Only stabilize callbacks when consumers depend on referential equality.
## Codebase-specific notes
This codebase uses a ref pattern for stable callbacks in hooks:
```tsx
const idRef = useRef(id)
useEffect(() => { idRef.current = id }, [id])
const fetchData = useCallback(async () => {
  // use idRef.current instead of id
}, []) // empty deps because refs are used
```
This pattern is correct — don't flag it as an anti-pattern.
## Steps
1. Read the reference above
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.
description: Analyze and fix useMemo/React.memo anti-patterns in your code
---
# You Might Not Need a Memo
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## References
Read before analyzing:
1. https://overreacted.io/before-you-memo/ — two techniques to avoid memo entirely
## Anti-patterns to detect
1. **Wrapping a slow component in React.memo when state can be moved down**: If a component re-renders because of state it doesn't use, move that state into a smaller child component instead of memoizing. The slow component stops re-rendering without memo.
2. **Wrapping in React.memo when children can be lifted up**: If a parent owns state that changes frequently, extract the stateful part and pass the expensive subtree as `children`. Children passed as props don't re-render when the parent's state changes.
3. **useMemo on cheap computations**: Filtering or mapping a small array, string concatenation, simple arithmetic — these don't need memoization. Only memoize when you've measured a performance problem.
4. **useMemo with constantly-changing deps**: If the dependency array changes on every render, useMemo does nothing — it recalculates every time. Fix the deps or remove the memo.
5. **useMemo to create objects/arrays passed as props**: Instead of memoizing to prevent child re-renders, consider whether the child even needs referential stability. If the child doesn't use React.memo or pass it to a dep array, the memo is wasted.
6. **React.memo on components that always receive new props**: If the parent always passes new objects, arrays, or callbacks, React.memo's shallow comparison always fails. Fix the parent instead of memoizing the child.
7. **useMemo for derived state**: If you're computing a value from props or state, just compute it inline during render. React renders are fast. `const fullName = first + ' ' + last` doesn't need useMemo.
## Steps
1. Read the reference above to understand the two core techniques (move state down, lift content up)
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.
description: Analyze and fix unnecessary useState, derived state, and server-state-in-local-state anti-patterns
---
# You Might Not Need State
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses React Query for all server state and Zustand for client-only global state. useState should only be used for ephemeral UI concerns (open/closed, hover, local form input). Server data should never be copied into useState or Zustand — React Query is the single source of truth.
## References
Read these before analyzing:
1. https://react.dev/learn/choosing-the-state-structure — 5 principles for structuring state
2. https://tkdodo.eu/blog/dont-over-use-state — never store derived/computed values in state
3. https://tkdodo.eu/blog/putting-props-to-use-state — never mirror props into state via useEffect
## Anti-patterns to detect
1. **Derived state stored in useState**: If a value can be computed from props, other state, or query data, compute it inline during render instead of storing it in state.
2. **Server state copied into useState**: Never `useState` + `useEffect` to sync React Query data into local state. Use query data directly. The only exception is forms where users edit server data.
3. **Props mirrored into state**: Never `useState(prop)` + `useEffect(() => setState(prop))`. Use the prop directly, or use a key to reset component state.
4. **Chained useEffect state updates**: Never chain Effects that set state to trigger other Effects. Calculate all derived values in the event handler or inline during render.
5. **Storing objects when an ID suffices**: Store `selectedId` not a copy of the selected object. Derive the object: `items.find(i => i.id === selectedId)`.
6. **State that duplicates Zustand or React Query**: If the data already lives in a store or query cache, don't create a parallel useState.
## Steps
1. Read the references above to understand the guidelines
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.
Every document returned from `listDocuments`/`getDocument` must include:
```typescript
{
  externalId: string // Source-specific unique ID
  title: string // Document title
  content: string // Extracted plain text (or '' if contentDeferred)
  contentDeferred?: boolean // true = content will be fetched via getDocument
  mimeType: 'text/plain' // Always text/plain (content is extracted)
  contentHash: string // Metadata-based hash for change detection
  sourceUrl?: string // Link back to original (stored on document record)
  metadata?: Record<string, unknown> // Source-specific data (fed to mapTags)
}
```
## Content Deferral (Required for file/content-download connectors)
**All connectors that require per-document API calls to fetch content MUST use `contentDeferred: true`.** This is the standard pattern — `listDocuments` returns lightweight metadata stubs, and content is fetched lazily by the sync engine via `getDocument` only for new/changed documents.
This pattern is critical for reliability: the sync engine processes documents in batches and enqueues each batch for processing immediately. If a sync times out, all previously-batched documents are already queued. Without deferral, content downloads during listing can exhaust the sync task's time budget before any documents are saved.
### When to use `contentDeferred: true`
- The service's list API does NOT return document content (only metadata)
- Content requires a separate download/export API call per document
### When `contentDeferred` is NOT needed
- The list API already returns the full content inline (e.g., Slack messages, Reddit posts, HubSpot notes)
- No per-document API call is needed to get content
### Content Hash Strategy
Use a **metadata-based** `contentHash` — never a content-based hash. The hash must be derivable from the list response metadata alone, so the sync engine can detect changes without downloading content.
Good metadata hash sources:
-`modifiedTime` / `lastModifiedDateTime` — changes when file is edited
**Critical invariant:** The `contentHash` MUST be identical whether produced by `listDocuments` (stub) or `getDocument` (full doc). Both should use the same stub function to guarantee this.
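The invariant above can be sketched with a shared stub builder (names and fields are illustrative, not the codebase's actual helpers):

```typescript
import { createHash } from 'node:crypto'

// Hashing only list-response metadata guarantees the invariant: both
// listDocuments and getDocument call this and produce identical hashes
// without downloading content.
interface FileMeta {
  id: string
  name: string
  modifiedTime: string // e.g. '2024-01-15T10:30:00Z' from the list API
}

function buildDocumentStub(file: FileMeta) {
  const contentHash = createHash('sha256')
    .update(`${file.id}:${file.modifiedTime}`)
    .digest('hex')
  return {
    externalId: file.id,
    title: file.name,
    content: '', // deferred — fetched later via getDocument
    contentDeferred: true,
    mimeType: 'text/plain' as const,
    contentHash,
  }
}
```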
description: Add hosted API key support to a tool so Sim provides the key when users don't bring their own
argument-hint: <service-name>
---
# Adding Hosted Key Support to a Tool
When a tool has hosted key support, Sim provides its own API key if the user hasn't configured one (via BYOK or env var). Usage is metered and billed to the workspace.
## Step 2: Research the API's Pricing Model and Rate Limits
**Before writing any `getCost` or `rateLimit` code**, look up the service's official documentation for both pricing and rate limits. You need to understand:
### Pricing
1. **How the API charges** — per request, per credit, per token, per step, per minute, etc.
2. **Whether the API reports cost in its response** — look for fields like `creditsUsed`, `costDollars`, `tokensUsed`, or similar in the response body or headers
3. **Whether cost varies by endpoint/options** — some APIs charge more for certain features (e.g., Firecrawl charges 1 credit/page base but +4 for JSON format, +4 for enhanced mode)
4. **The dollar-per-unit rate** — what each credit/token/unit costs in dollars on our plan
### Rate Limits
1. **What rate limits the API enforces** — requests per minute/second, tokens per minute, concurrent requests, etc.
2. **Whether limits vary by plan tier** — free vs paid vs enterprise often have different ceilings
3. **Whether limits are per-key or per-account** — determines whether adding more hosted keys actually increases total throughput
4. **What the API returns when rate limited** — HTTP 429, `Retry-After` header, error body format, etc.
5. **Whether there are multiple dimensions** — some APIs limit both requests/min AND tokens/min independently
Search the API's docs/pricing page (use WebSearch/WebFetch). Capture the pricing model as a comment in `getCost` so future maintainers know the source of truth.
### Setting Our Rate Limits
Our rate limiter (`lib/core/rate-limiter/hosted-key/`) uses a token-bucket algorithm applied **per billing actor** (workspace). It supports two modes:
- **`per_request`** — simple; just `requestsPerMinute`. Good when the API charges flat per-request or cost doesn't vary much.
- **`custom`** — `requestsPerMinute` plus additional `dimensions` (e.g., `tokens`, `search_units`). Each dimension has its own `limitPerMinute` and an `extractUsage` function that reads actual usage from the response. Use when the API charges on a variable metric (tokens, credits) and you want to cap that metric too.
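A rough sketch of what the two modes look like as config objects (field names follow the description above; the real types live in `lib/core/rate-limiter/hosted-key/` and may differ):

```typescript
// per_request mode: a flat requests-per-minute cap
const perRequestLimit = {
  mode: 'per_request' as const,
  requestsPerMinute: 30,
}

// custom mode: RPM plus an extra metered dimension
const customLimit = {
  mode: 'custom' as const,
  requestsPerMinute: 30,
  dimensions: [
    {
      name: 'tokens',
      limitPerMinute: 100_000,
      // Read actual usage from the tool's response output
      extractUsage: (output: { usage?: { totalTokens?: number } }) =>
        output.usage?.totalTokens ?? 0,
    },
  ],
}
```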
When choosing values for `requestsPerMinute` and any dimension limits:
- **Stay well below the API's per-key limit** — our keys are shared across all workspaces. If the API allows 60 RPM per key and we have 3 keys, the global ceiling is ~180 RPM. Set the per-workspace limit low enough (e.g., 20-60 RPM) that many workspaces can coexist without collectively hitting the API's ceiling.
- **Account for key pooling** — our round-robin distributes requests across `N` hosted keys, so the effective API-side rate per key is `(total requests) / N`. But per-workspace limits are enforced *before* key selection, so they apply regardless of key count.
- **Prefer conservative defaults** — it's easy to raise limits later but hard to claw back after users depend on high throughput.
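The headroom arithmetic above, worked through with the example numbers (all values illustrative):

```typescript
const apiLimitPerKeyRpm = 60 // API's documented per-key ceiling
const hostedKeyCount = 3
const globalCeilingRpm = apiLimitPerKeyRpm * hostedKeyCount // pool-wide ceiling

// A conservative per-workspace cap leaves room for many workspaces to coexist
const perWorkspaceRpm = 20
const workspacesAtFullTilt = Math.floor(globalCeilingRpm / perWorkspaceRpm)

console.log(globalCeilingRpm, workspacesAtFullTilt) // 180 9
```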
## Step 3: Add `hosting` Config to the Tool
Add a `hosting` object to the tool's `ToolConfig`. This tells the execution layer how to acquire hosted keys, calculate cost, and rate-limit.
Keys use a numbered naming pattern driven by a count env var:
```
YOUR_SERVICE_API_KEY_COUNT=3
YOUR_SERVICE_API_KEY_1=sk-...
YOUR_SERVICE_API_KEY_2=sk-...
YOUR_SERVICE_API_KEY_3=sk-...
```
The `envKeyPrefix` value (`YOUR_SERVICE_API_KEY`) determines which env vars are read at runtime. Adding more keys only requires bumping the count and adding the new env var.
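A minimal sketch of how the numbered-key pattern can be read and pooled (function names are illustrative, not Sim's actual helpers):

```typescript
// Read YOUR_SERVICE_API_KEY_COUNT, then YOUR_SERVICE_API_KEY_1..N
function getHostedKeys(envKeyPrefix: string, env: Record<string, string | undefined>): string[] {
  const count = Number(env[`${envKeyPrefix}_COUNT`] ?? 0)
  const keys: string[] = []
  for (let i = 1; i <= count; i++) {
    const key = env[`${envKeyPrefix}_${i}`]
    if (key) keys.push(key) // tolerate gaps rather than crash
  }
  return keys
}

// Round-robin selection across the pool; per-workspace rate limits are
// enforced before this step, so they hold regardless of key count
let cursor = 0
function nextKey(keys: string[]): string {
  return keys[cursor++ % keys.length]
}
```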
### Pricing: Prefer API-Reported Cost
Always prefer using cost data returned by the API (e.g., `creditsUsed`, `costDollars`). This is the most accurate because it accounts for variable pricing tiers, feature modifiers, and plan-level discounts.
**When the API reports cost** — use it directly and throw if missing. When cost must be derived from documented pricing instead:
```typescript
hosting: {
  // ...
  getCost: ({ params }) => {
    // Serper: 1 credit for <=10 results, 2 credits for >10 — from https://serper.dev/pricing
    const credits = Number(params.num) > 10 ? 2 : 1
    return { cost: credits * 0.001, metadata: { credits } }
  },
},
```
**`getCost` must always throw** if it cannot determine cost. Never silently fall back to a default — this would hide billing inaccuracies.
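The throw-on-missing rule can be sketched as follows (the output shape and dollar rate are assumptions for illustration):

```typescript
// getCost reads API-reported usage from the tool's output and refuses to guess
function getCost(output: { creditsUsed?: number }) {
  if (typeof output.creditsUsed !== 'number') {
    // Never fall back to a default — that would silently hide billing inaccuracies
    throw new Error('creditsUsed missing from response; cannot determine cost')
  }
  return { cost: output.creditsUsed * 0.001, metadata: { credits: output.creditsUsed } }
}
```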
### Capturing Cost Data from the API
If the API returns cost info, capture it in `transformResponse` so `getCost` can read it from the output:
```typescript
transformResponse: async (response: Response) => {
  const data = await response.json()
  return {
    success: true,
    output: {
      results: data.results,
      creditsUsed: data.creditsUsed, // pass through for getCost
    },
  }
},
```
For async/polling tools, capture it in `postProcess` when the job completes:
```typescript
if (jobData.status === 'completed') {
  result.output = {
    data: jobData.data,
    creditsUsed: jobData.creditsUsed,
  }
}
```
## Step 4: Hide the API Key Field When Hosted
In the block config (`blocks/blocks/{service}.ts`), add `hideWhenHosted: true` to the API key subblock. This hides the field on hosted Sim since the platform provides the key:
```typescript
{
  id: 'apiKey',
  title: 'API Key',
  type: 'short-input',
  placeholder: 'Enter your API key',
  password: true,
  required: true,
  hideWhenHosted: true,
},
```
The visibility is controlled by `isSubBlockHidden()` in `lib/workflows/subblocks/visibility.ts`, which checks both the `isHosted` feature flag (`hideWhenHosted`) and optional env var conditions (`hideWhenEnvSet`).
### Excluding Specific Operations from Hosted Key Support
When a block has multiple operations but some operations should **not** use a hosted key (e.g., the underlying API is deprecated, unsupported, or too expensive), use the **duplicate apiKey subblock** pattern. This is the same pattern Exa uses for its `research` operation:
1. **Remove the `hosting` config** from the tool definition for that operation — it must not have a `hosting` object at all.
2. **Duplicate the `apiKey` subblock** in the block config with opposing conditions:
```typescript
// Illustrative sketch — exact condition syntax may differ from the real block config
// API Key — hidden when hosted, for operations with hosted key support
{
  id: 'apiKey',
  title: 'API Key',
  type: 'short-input',
  password: true,
  required: true,
  hideWhenHosted: true,
  condition: { field: 'operation', value: 'excluded_op', not: true },
},
// API Key — always visible for the excluded operation (no hosted key)
{
  id: 'apiKey',
  title: 'API Key',
  type: 'short-input',
  password: true,
  required: true,
  condition: { field: 'operation', value: 'excluded_op' },
},
```
Both subblocks share the same `id: 'apiKey'`, so the same value flows to the tool. The conditions ensure only one is visible at a time. The first has `hideWhenHosted: true` and shows for all hosted operations; the second has no `hideWhenHosted` and shows only for the excluded operation — meaning users must always provide their own key for that operation.
To exclude multiple operations, use an array: `{ field: 'operation', value: ['op_a', 'op_b'] }`.
## Step 5: Register the Service for BYOK
Add an entry to the `PROVIDERS` array in the BYOK settings component so users can bring their own key. You need the service icon from `components/icons.tsx`:
```typescript
{
  id: 'your_service',
  name: 'Your Service',
  icon: YourServiceIcon,
  description: 'What this service does',
  placeholder: 'Enter your API key',
},
```
## Step 6: Summarize Pricing and Throttling Comparison
After all code changes are complete, output a detailed summary to the user covering:
### What to include
1. **API's pricing model** — how the service charges (per token, per credit, per request, etc.), the specific rates found in docs, and whether the API reports cost in responses.
2. **Our `getCost` approach** — how we calculate cost, what fields we depend on, and any assumptions or estimates (especially when the API doesn't report exact dollar cost).
3. **API's rate limits** — the documented limits (RPM, TPM, concurrent, etc.), which plan tier they apply to, and whether they're per-key or per-account.
4. **Our `rateLimit` config** — what we set for `requestsPerMinute` (and dimensions if custom mode), why we chose those values, and how they compare to the API's limits.
5. **Key pooling impact** — how many hosted keys we expect, and how round-robin distribution affects the effective per-key rate at the API.
6. **Gaps or risks** — anything the API charges for that we don't meter, rate limit dimensions we chose not to enforce, or pricing that may be inaccurate due to variable model/tier costs.
### Format
Present this as a structured summary with clear headings. Example:
```
### Pricing
- **API charges**: $X per 1M tokens (input), $Y per 1M tokens (output) — varies by model
- **Response reports cost?**: No — only token counts in `usage` field
- **Our getCost**: Estimates cost at $Z per 1M total tokens based on median model pricing
- **Risk**: Actual cost varies by model; our estimate may over/undercharge for cheap/expensive models
description: Run all code quality skills in sequence — effects, memo, callbacks, state, React Query, and emcn design review
argument-hint: [scope] [fix=true|false]
---
# Cleanup
Arguments:
- scope: what to review (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Steps
Run each of these skills in order on the specified scope, passing through the scope and fix arguments. After each skill completes, move to the next. Do not skip any.
1. `/you-might-not-need-an-effect $ARGUMENTS`
2. `/you-might-not-need-a-memo $ARGUMENTS`
3. `/you-might-not-need-a-callback $ARGUMENTS`
4. `/you-might-not-need-state $ARGUMENTS`
5. `/react-query-best-practices $ARGUMENTS`
6. `/emcn-design-review $ARGUMENTS`
After all skills have run, output a summary of what was found and fixed (or proposed) across all six passes.
description: Spawn task agents to explore a given area of interest in the codebase
argument-hint: <area-of-interest>
---
Based on the given area of interest:
1. Explore the codebase for that area of interest, gathering general information such as keywords and an architecture overview.
2. Spawn n=10 task agents (unless specified otherwise) to dig deeper into the codebase for that area; make some of them out-of-the-box for variance.
3. Once the task agents are done, use the gathered information to do what the user wants.
If the user is in plan mode, use the information to create the plan.
description: Review UI code for alignment with the emcn design system — components, tokens, patterns, and conventions
argument-hint: [scope] [fix=true|false]
---
# EMCN Design Review
Arguments:
- scope: what to review (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses **emcn**, a custom component library built on Radix UI primitives with CVA variants and CSS variable design tokens. All UI must use emcn components and tokens.
## Steps
1. Read the emcn barrel export at `apps/sim/components/emcn/components/index.ts` to know what's available
2. Read `apps/sim/app/_styles/globals.css` for CSS variable tokens
3. Analyze the specified scope against every rule below
4. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.
---
## Imports
- Import from `@/components/emcn` barrel, never subpaths
- Icons from `@/components/emcn/icons` or `lucide-react`
- Use `cn` from `@/lib/core/utils/cn` for conditional classes
## Design Tokens
Use CSS variable pattern (`text-[var(--text-primary)]`), never Tailwind semantics (`text-muted-foreground`) or hardcoded colors (`text-gray-500`, `#333`).
## Delete Confirmation Modals
Modal `size="sm"`, title "Delete/Remove {ItemType}", `variant="destructive"` action button, `variant="default"` cancel. Cancel left, action right — always. Use `text-[var(--text-error)]` for irreversible warnings.
## Toast
`toast.success()`, `toast.error()`, `toast()` from `@/components/emcn`. Never custom notification UI.
## Badges
`red`=error/failed, `gray-secondary`=metadata/roles, `type`=type annotations, `green`=success/active, `gray`=neutral, `amber`=processing, `orange`=paused, `blue`=info. Use `dot` prop for status indicators.
description: Audit React Query usage for best practices — key factories, staleTime, mutations, and server state ownership
argument-hint: [scope] [fix=true|false]
---
# React Query Best Practices
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/hooks/queries/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses React Query (TanStack Query) as the single source of truth for all server state. All query hooks live in `hooks/queries/`. Zustand is used only for client-only UI state. Server data must never be duplicated into useState or Zustand outside of mutation callbacks that coordinate cross-store state.
## References
Read these before analyzing:
1. https://tkdodo.eu/blog/practical-react-query — foundational defaults, custom hooks, avoiding local state copies
description: Validate an existing Sim webhook trigger against provider API docs and repository conventions
argument-hint: <service-name> [api-docs-url]
---
# Validate Trigger
You are an expert auditor for Sim webhook triggers. Your job is to validate that an existing trigger implementation is correct, complete, secure, and aligned across all layers.
## Your Task
1. Read the service's webhook/API documentation (via WebFetch)
2. Read every trigger file, provider handler, and registry entry
3. Cross-reference against the API docs and Sim conventions
4. Report all issues grouped by severity (critical, warning, suggestion)
5. Fix all issues after reporting them
## Step 1: Gather All Files
Read **every** file for the trigger — do not skip any:
```
apps/sim/triggers/{service}/ # All trigger files, utils.ts, index.ts
```
description: Analyze and fix useCallback anti-patterns in your code
argument-hint: [scope] [fix=true|false]
---
# You Might Not Need a Callback
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## References
Read before analyzing:
1. https://react.dev/reference/react/useCallback — official docs on when useCallback is actually needed
## The one rule that matters
`useCallback` is only useful when **something observes the reference**. Ask: does anything care if this function gets a new identity on re-render?
Observers that care about reference stability:
- A `useEffect` that lists the function in its deps array
- A `useMemo` that lists the function in its deps array
- Another `useCallback` that lists the function in its deps array
- A child component wrapped in `React.memo` that receives the function as a prop
If none of those apply — if the function is only called inline, or passed to a non-memoized child, or assigned to a native element event — the reference is unobserved and `useCallback` adds overhead with zero benefit.
## Anti-patterns to detect
1. **No observer tracks the reference**: The function is only called inline in the same component, or passed to a non-memoized child, or used as a native element handler (`<button onClick={fn}>`). Nothing re-runs or bails out based on reference identity. Remove `useCallback`.
2. **useCallback with deps that change every render**: If a dep is a plain object/array created inline, or state that changes on every interaction, memoization buys nothing — the function gets a new identity anyway.
3. **useCallback on handlers passed only to native elements**: `<button onClick={fn}>` — React never does reference equality on native element props. No benefit.
4. **useCallback wrapping functions that return new objects/arrays**: Stable function identity, unstable return value — memoization is at the wrong level. Use `useMemo` on the return value instead, or restructure.
5. **useCallback with empty deps when deps are needed**: Stale closure — reads initial values forever. This is a correctness bug, not just a performance issue.
6. **Pairing useCallback + React.memo on trivially cheap renders**: If the child renders in < 1ms and re-renders rarely, the memo infrastructure costs more than it saves.
## Patterns that ARE correct — do not flag
- `useCallback` whose result is in a `useEffect` dep array — prevents the effect from re-running on every render
- `useCallback` whose result is in a `useMemo` dep array — prevents the memo from recomputing on every render
- `useCallback` whose result is a dep of another `useCallback` — stabilizes a callback chain
- `useCallback` passed to a `React.memo`-wrapped child — the whole point of the pattern
- This codebase's ref pattern: `useRef` + callback with empty deps that reads the ref inside — correct, do not flag
## Steps
1. Read the reference above
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.
description: Analyze and fix useMemo/React.memo anti-patterns in your code
argument-hint: [scope] [fix=true|false]
---
# You Might Not Need a Memo
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## References
Read before analyzing:
1. https://overreacted.io/before-you-memo/ — two techniques to avoid memo entirely
## Anti-patterns to detect
1. **State can be moved down instead of memoizing**: Move state into a smaller child so the slow component stops re-rendering without memo.
2. **Children can be lifted up**: Extract stateful part, pass expensive subtree as `children` — children as props don't re-render when parent state changes.
3. **useMemo on cheap computations**: Small array filters, string concat, arithmetic don't need memoization.
4. **useMemo with constantly-changing deps**: Deps change every render = useMemo does nothing.
5. **useMemo to stabilize props for non-memoized children**: If the child isn't wrapped in React.memo, stable references don't matter.
6. **React.memo on components that always receive new props**: Fix the parent instead.
7. **useMemo for derived state**: Just compute inline during render.
## Steps
1. Read the reference above
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.
description: Analyze and fix unnecessary useState, derived state, and server-state-in-local-state anti-patterns
argument-hint: [scope] [fix=true|false]
---
# You Might Not Need State
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses React Query for all server state and Zustand for client-only global state. useState should only be used for ephemeral UI concerns (open/closed, hover, local form input). Server data should never be copied into useState or Zustand — React Query is the single source of truth.
## References
Read these before analyzing:
1. https://react.dev/learn/choosing-the-state-structure — 5 principles for structuring state
2. https://tkdodo.eu/blog/dont-over-use-state — never store derived/computed values in state
3. https://tkdodo.eu/blog/putting-props-to-use-state — never mirror props into state via useEffect
## Anti-patterns to detect
1. **Derived state stored in useState**: If a value can be computed from props, other state, or query data, compute it inline during render instead of storing it in state.
2. **Server state copied into useState**: Never `useState` + `useEffect` to sync React Query data into local state. Use query data directly. The only exception is forms where users edit server data.
3. **Props mirrored into state**: Never `useState(prop)` + `useEffect(() => setState(prop))`. Use the prop directly, or use a key to reset component state.
4. **Chained useEffect state updates**: Never chain Effects that set state to trigger other Effects. Calculate all derived values in the event handler or inline during render.
5. **Storing objects when an ID suffices**: Store `selectedId` not a copy of the selected object. Derive the object: `items.find(i => i.id === selectedId)`.
6. **State that duplicates Zustand or React Query**: If the data already lives in a store or query cache, don't create a parallel useState.
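Anti-pattern 5 can be sketched without any React machinery: the selected object is derivable, so only the ID needs to live in state (the names below are illustrative):

```typescript
interface Item {
  id: string
  name: string
}

// Store only selectedId in state; derive the object during render.
// The derived value can never go stale when the items array changes.
function getSelected(items: Item[], selectedId: string | null): Item | undefined {
  return items.find((i) => i.id === selectedId)
}

const items: Item[] = [
  { id: 'a', name: 'Alpha' },
  { id: 'b', name: 'Beta' },
]
const selected = getSelected(items, 'b')
```

If `items` is replaced by fresh query data, the derived object follows automatically; a stored copy would need a sync Effect.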
## Steps
1. Read the references above to understand the guidelines
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.
When editing user-facing copy (landing pages, docs, metadata, marketing), follow these rules.
## Identity
Sim is the **AI workspace** where teams build and run AI agents. Not a workflow tool, not an agent framework, not an automation platform.
**Short definition:** Sim is the open-source AI workspace where teams build, deploy, and manage AI agents.
**Full definition:** Sim is the open-source AI workspace where teams build, deploy, and manage AI agents. Connect 1,000+ integrations and every major LLM to create agents that automate real work — visually, conversationally, or with code.
## Audience
**Primary:** Teams building AI agents for their organization — IT, operations, and technical teams who need governance, security, lifecycle management, and collaboration.
**Secondary:** Individual builders and developers who care about speed, flexibility, and open source.
| **Tables** | A database, built in. Store, query, and wire structured data into agent runs. |
| **Files** | Upload, create, and share. One store for your team and every agent. |
| **Logs** | Full visibility, every run. Trace execution block by block. |
## What We Never Say
- Never call Sim "just a workflow tool"
- Never compare only on integration count — we win on AI-native capabilities
- Never use "no-code" as the primary descriptor — say "visually, conversationally, or with code"
- Never promise unshipped features
- Never use jargon ("RAG", "vector database", "MCP") without plain-English explanation on public pages
- Avoid "agentic workforce" as a primary term — use "AI agents"
## Vision
Sim becomes the default environment where teams build AI agents — not a tool you visit for one task, but a workspace you live in. Workflows are one module; Mothership is another. The workspace is the constant; the interface adapts.
Import `createLogger` from `@sim/logger`. Use `logger.info`, `logger.warn`, `logger.error` instead of `console.log`. Inside API routes wrapped with `withRouteHandler`, loggers automatically include the request ID.
## API Route Handlers
All API route handlers must be wrapped with `withRouteHandler` from `@/lib/core/utils/with-route-handler`. Never export a bare `async function GET/POST/...` — always use `export const METHOD = withRouteHandler(...)`.
## Comments
Use TSDoc for documentation. No `====` separators. No non-TSDoc comments.
## Styling
Never update global styles. Keep all styling local to components.
## ID Generation
Never use `crypto.randomUUID()`, `nanoid`, or the `uuid` package directly. Use the utilities from `@sim/utils/id`:
- `generateId()` — UUID v4, use by default
- `generateShortId(size?)` — short URL-safe ID (default 21 chars), for compact identifiers
Both use `crypto.getRandomValues()` under the hood and work in all contexts including non-secure (HTTP) browsers.
All React Query hooks live in `hooks/queries/`. All server state must go through React Query — never use `useState` + `fetch` in components for data fetching or mutations.
## Query Key Factory
Every query file defines a hierarchical keys factory with an `all` root key and intermediate plural keys for prefix-level invalidation:
For optimistic mutations, use `onSettled` (not `onSuccess`) for cache reconciliation — `onSettled` fires on both success and error, ensuring the cache is always reconciled with the server.
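The reason `onSettled` is the right reconciliation hook can be shown with a toy mutation runner (not TanStack's implementation, just the lifecycle contract it guarantees):

```typescript
type MutationHandlers<T> = {
  onSuccess?: (data: T) => void
  onError?: (err: unknown) => void
  onSettled?: () => void // runs on BOTH outcomes; the reconciliation hook
}

// Toy model of the mutation lifecycle: onSuccess/onError are outcome-specific,
// onSettled always runs, which is why cache invalidation belongs there.
async function runMutation<T>(fn: () => Promise<T>, h: MutationHandlers<T>): Promise<void> {
  try {
    const data = await fn()
    h.onSuccess?.(data)
  } catch (err) {
    h.onError?.(err)
  } finally {
    h.onSettled?.()
  }
}
```

Invalidation placed in `onSuccess` would be skipped on error, leaving an optimistic update in the cache with no server reconciliation.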
For optimistic mutations syncing with Zustand, use `createOptimisticMutationHandlers` from `@/hooks/queries/utils/optimistic-mutation`.
## useCallback Dependencies
Never include mutation objects (e.g., `createEntity`) in `useCallback` dependency arrays — the mutation object is not referentially stable and changes on every state update. The `.mutate()` and `.mutateAsync()` functions are stable in TanStack Query v5.
### NEVER use `mockAuth()`, `mockConsoleLogger()`, or `setupCommonApiMocks()` from `@sim/testing`
These helpers internally use `vi.doMock()` which is slow. Use direct `vi.hoisted()` + `vi.mock()` instead.
### Mock heavy transitive dependencies
If a module under test imports `@/blocks` (200+ files), `@/tools/registry`, or other heavy modules, mock them:
```typescript
// Avoid real waits like: await new Promise(r => setTimeout(r, 1))
vi.useFakeTimers()
```
## Mock Pattern Reference
## Centralized Mocks (prefer over local declarations)
`@sim/testing` exports ready-to-use mock modules for common dependencies. Import and pass directly to `vi.mock()` — no `vi.hoisted()` boilerplate needed. Each paired `*MockFns` object exposes the underlying `vi.fn()`s for per-test overrides.
Only define a local `vi.mock('@/lib/auth', ...)` if the module under test consumes exports outside the centralized shape (e.g., `auth.api.verifyOneTimeToken`, `auth.api.resetPassword`).
- `db.transaction(cb)` calls cb with `dbChainMock.db`
`.for('update')` (Postgres row-level locking) is supported on `where`
builders. It returns a thenable with `.limit` / `.orderBy` / `.returning` /
`.groupBy` attached, so both `await .where().for('update')` (terminal) and
`await .where().for('update').limit(1)` (chained) work. Override the terminal
result with `dbChainMockFns.for.mockResolvedValueOnce([...])`; for the chained
form, mock the downstream terminal (e.g. `dbChainMockFns.limit.mockResolvedValueOnce([...])`).
All terminals default to `Promise.resolve([])`. Override per-test with `dbChainMockFns.<terminal>.mockResolvedValueOnce(...)`.
Use `resetDbChainMock()` in `beforeEach` only when tests replace wiring with `.mockReturnValue` / `.mockResolvedValue` (permanent). Tests using only `...Once` variants don't need it.
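The `.for('update')` thenable described above can be sketched like this (illustrative shapes, not the real `@sim/testing` mock):

```typescript
// A "thenable" is any object with a then() method: await works on it directly,
// while extra chainable terminals (limit, orderBy, ...) hang off the same object.
function makeForUpdateThenable<T>(
  terminal: () => Promise<T[]>,
  chained: { limit: (n: number) => Promise<T[]> },
) {
  return {
    then<R>(resolve: (rows: T[]) => R, reject?: (err: unknown) => R) {
      return terminal().then(resolve, reject)
    },
    ...chained,
  }
}

const rows = [{ id: 1 }]
const forUpdate = makeForUpdateThenable(
  async () => rows, // terminal result: await .where().for('update')
  { limit: async (_n: number) => rows.slice(0, 1) }, // chained terminal
)
```

This is why both `await q.for('update')` and `await q.for('update').limit(1)` work: the first resolves via `then`, the second via the attached terminal.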
You are an expert at creating block configurations for Sim. You understand the serializer, subBlock types, conditions, dependsOn, modes, and all UI patterns.
## Your Task
When the user asks you to create a block:
1. Create the block file in `apps/sim/blocks/blocks/{service}.ts`
2. Configure all subBlocks with proper types, conditions, and dependencies
  serviceId: '{service}', // Must match OAuth provider service key
  requiredScopes: getScopesForService('{service}'), // Import from @/lib/oauth/utils
  placeholder: 'Select account',
  required: true,
}
```
**Scopes:** Always use `getScopesForService(serviceId)` from `@/lib/oauth/utils` for `requiredScopes`. Never hardcode scope arrays — the single source of truth is `OAUTH_PROVIDERS` in `lib/oauth/oauth.ts`.
**Scope descriptions:** When adding a new OAuth provider, also add human-readable descriptions for all scopes in `SCOPE_DESCRIPTIONS` within `lib/oauth/utils.ts`.
- `'json-object'` - Raw JSON (adds "no markdown" instruction)
- `'json-schema'` - JSON Schema definitions
- `'sql-query'` - SQL statements
- `'timestamp'` - Adds current date/time context
## Tools Configuration
**Important:** `tools.config.tool` runs during serialization before variable resolution. Put `Number()` and other type coercions in `tools.config.params` instead, which runs at execution time after variables are resolved.
**Preferred:** Use tool names directly as dropdown option IDs to avoid switch cases:
When using `type: 'json'` and you know the object shape in advance, **describe the inner fields in the description** so downstream blocks know what properties are available. For well-known, stable objects, use nested output definitions instead:
```typescript
outputs: {
  // BAD: Opaque json with no info about what's inside
  plan: { type: 'json', description: 'Zone plan information' },

  // GOOD: Describe the known fields in the description
  plan: {
    type: 'json',
    description: 'Zone plan information (id, name, price, currency, frequency, is_subscribed)',
  },

  // BEST: Use nested output definition when the shape is stable and well-known
}
```
See the `/add-trigger` skill for creating triggers.
## Icon Requirement
If the icon doesn't already exist in `@/components/icons.tsx`, **do NOT search for it yourself**. After completing the block, ask the user to provide the SVG:
```
The block is complete, but I need an icon for {Service}.
Please provide the SVG and I'll convert it to a React component.
You can usually find this in the service's brand/press kit page, or copy it from their website.
```
## Advanced Mode for Optional Fields
Optional fields that are rarely used should be set to `mode: 'advanced'` so they don't clutter the basic UI. This includes:
```typescript
  mode: 'advanced', // Rarely used, hide from basic view
}
```
## WandConfig for Complex Inputs
Use `wandConfig` for fields that are hard to fill out manually, such as timestamps, comma-separated lists, and complex query strings. This gives users an AI-assisted input experience.
```typescript
// Timestamps - use generationType: 'timestamp' to inject current date context
{
  id: 'startTime',
  title: 'Start Time',
  type: 'short-input',
  mode: 'advanced',
  wandConfig: {
    enabled: true,
    prompt: 'Generate an ISO 8601 timestamp based on the user description. Return ONLY the timestamp string.',
    generationType: 'timestamp',
  },
}

// Comma-separated lists - simple prompt without generationType
{
  id: 'mediaIds',
  title: 'Media IDs',
  type: 'short-input',
  mode: 'advanced',
  wandConfig: {
    enabled: true,
    prompt: 'Generate a comma-separated list of media IDs. Return ONLY the comma-separated values.',
  },
}
```
## Naming Convention
All tool IDs referenced in `tools.access` and returned by `tools.config.tool` MUST use `snake_case` (e.g., `x_create_tweet`, `slack_send_message`). Never use camelCase or PascalCase.
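A regex capturing the convention (illustrative only; no such helper exists in the codebase):

```typescript
// snake_case tool IDs: lowercase alphanumeric segments joined by underscores,
// e.g. x_create_tweet, slack_send_message.
const TOOL_ID_PATTERN = /^[a-z0-9]+(_[a-z0-9]+)+$/

function isValidToolId(id: string): boolean {
  return TOOL_ID_PATTERN.test(id)
}
```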
## Checklist Before Finishing
- [ ] `integrationType` is set to the correct `IntegrationType` enum value
- [ ] `tags` array includes all applicable `IntegrationTag` values
- [ ] All subBlocks have `id`, `title` (except switch), and `type`
You are an expert at adding knowledge base connectors to Sim. A connector syncs documents from an external source (Confluence, Google Drive, Notion, etc.) into a knowledge base.
## Your Task
When the user asks you to create a connector:
1. Use Context7 or WebFetch to read the service's API documentation
2. Determine the auth mode: **OAuth** (if Sim already has an OAuth provider for the service) or **API key** (if the service uses API key / Bearer token auth)
3. Create the connector directory and config
4. Register it in the connector registry
## Directory Structure
Create files in `apps/sim/connectors/{service}/`:
```
connectors/{service}/
├── index.ts # Barrel export
└── {service}.ts # ConnectorConfig definition
```
## Authentication
Connectors use a discriminated union for auth config (`ConnectorAuthConfig` in `connectors/types.ts`):
For services with existing OAuth providers in `apps/sim/lib/oauth/types.ts`. The `provider` must match an `OAuthService`. The modal shows a credential picker and handles token refresh automatically.
### API key mode
For services that use API key / Bearer token auth. The modal shows a password input with the configured `label` and `placeholder`. The API key is encrypted at rest using AES-256-GCM and stored in a dedicated `encryptedApiKey` column on the connector record. The sync engine decrypts it automatically — connectors receive the raw access token in `listDocuments`, `getDocument`, and `validateConfig`.
The add-connector modal renders these automatically — no custom UI needed.
Three field types are supported: `short-input`, `dropdown`, and `selector`.
```typescript
// Text input
{
id:'domain',
title:'Domain',
type:'short-input',
placeholder:'yoursite.example.com',
required: true,
}
// Dropdown (static options)
{
id:'contentType',
title:'Content Type',
type:'dropdown',
required: false,
options:[
{label:'Pages only',id:'page'},
{label:'Blog posts only',id:'blogpost'},
{label:'All content',id:'all'},
],
}
```
## Dynamic Selectors (Canonical Pairs)
Use `type: 'selector'` to fetch options dynamically from the existing selector registry (`hooks/selectors/registry.ts`). Selectors are always paired with a manual fallback input using the **canonical pair** pattern — a `selector` field (basic mode) and a `short-input` field (advanced mode) linked by `canonicalParamId`.
The user sees a toggle button (ArrowLeftRight) to switch between the selector dropdown and manual text input. On submit, the modal resolves each canonical pair to the active mode's value, keyed by `canonicalParamId`.
### Rules
1. **Every selector field MUST have a canonical pair** — a corresponding `short-input` (or `dropdown`) field with the same `canonicalParamId` and `mode: 'advanced'`.
2. **`required` must be set identically on both fields** in a pair. If the selector is required, the manual input must also be required.
3. **`canonicalParamId` must match the key the connector expects in `sourceConfig`** (e.g. `baseId`, `channel`, `teamId`). The advanced field's `id` should typically match `canonicalParamId`.
4. **`dependsOn` references the selector field's `id`**, not the `canonicalParamId`. The modal propagates dependency clearing across canonical siblings automatically — changing either field in a parent pair clears dependent children.
### Selector canonical pair example (Airtable base → table cascade)
```typescript
configFields: [
  // Base: selector (basic) + manual (advanced)
  {
    id: 'baseSelector',
    title: 'Base',
    type: 'selector',
    selectorKey: 'airtable.bases', // Must exist in hooks/selectors/registry.ts
    canonicalParamId: 'baseId',
    mode: 'basic',
    placeholder: 'Select a base',
    required: true,
  },
  {
    id: 'baseId',
    title: 'Base ID',
    type: 'short-input',
    canonicalParamId: 'baseId',
    mode: 'advanced',
    placeholder: 'e.g. appXXXXXXXXXXXXXX',
    required: true,
  },
  // Table: selector depends on base (basic) + manual (advanced)
  {
    id: 'tableSelector',
    title: 'Table',
    type: 'selector',
    selectorKey: 'airtable.tables',
    canonicalParamId: 'tableIdOrName',
    mode: 'basic',
    dependsOn: ['baseSelector'], // References the selector field ID
    // ...
  },
]
```
### Selector with domain dependency (Jira/Confluence pattern)
When a selector depends on a plain `short-input` field (no canonical pair), `dependsOn` references that field's `id` directly. The `domain` field's value maps to `SelectorContext.domain` automatically via `SELECTOR_CONTEXT_FIELDS`.
```typescript
configFields: [
  {
    id: 'domain',
    title: 'Jira Domain',
    type: 'short-input',
    placeholder: 'yoursite.atlassian.net',
    required: true,
  },
  {
    id: 'projectSelector',
    title: 'Project',
    type: 'selector',
    selectorKey: 'jira.projects',
    canonicalParamId: 'projectKey',
    mode: 'basic',
    dependsOn: ['domain'],
    placeholder: 'Select a project',
    required: true,
  },
  {
    id: 'projectKey',
    title: 'Project Key',
    type: 'short-input',
    canonicalParamId: 'projectKey',
    mode: 'advanced',
    placeholder: 'e.g. ENG, PROJ',
    required: true,
  },
]
```
### How `dependsOn` maps to `SelectorContext`
The connector selector field builds a `SelectorContext` from dependency values. For the mapping to work, each dependency's `canonicalParamId` (or field `id` for non-canonical fields) must exist in `SELECTOR_CONTEXT_FIELDS` (`lib/workflows/subblocks/context.ts`):
| Selector key | Context dependencies | Option label |
| --- | --- | --- |
| `confluence.spaces` | credential, `domain` | Space key + name |
| `notion.databases` | credential | Database ID + name |
| `asana.workspaces` | credential | Workspace GID + name |
| `microsoft.teams` | credential | Team ID + name |
| `microsoft.channels` | credential, `teamId` | Channel ID + name |
| `webflow.sites` | credential | Site ID + name |
| `outlook.folders` | credential | Folder ID + name |
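A sketch of how dependency values become a `SelectorContext` (the mapping entries and shapes below are assumptions for illustration, not the real `SELECTOR_CONTEXT_FIELDS` table):

```typescript
// Hypothetical subset of the canonicalParamId -> SelectorContext key mapping.
const CONTEXT_FIELDS: Record<string, string> = {
  domain: 'domain',
  teamId: 'teamId',
  baseId: 'baseId',
}

function buildSelectorContext(depValues: Record<string, string | undefined>): Record<string, string> {
  const context: Record<string, string> = {}
  for (const [key, value] of Object.entries(depValues)) {
    const contextKey = CONTEXT_FIELDS[key]
    if (contextKey && value) context[contextKey] = value // unmapped keys are dropped
  }
  return context
}
```

This is why a dependency whose `canonicalParamId` (or field `id`) is missing from the mapping silently contributes nothing to the context.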
## ExternalDocument Shape
Every document returned from `listDocuments`/`getDocument` must include:
```typescript
{
  externalId: string // Source-specific unique ID
  title: string // Document title
  content: string // Extracted plain text (or '' if contentDeferred)
  contentDeferred?: boolean // true = content will be fetched via getDocument
  mimeType: 'text/plain' // Always text/plain (content is extracted)
  contentHash: string // Metadata-based hash for change detection
  sourceUrl?: string // Link back to original (stored on document record)
  metadata?: Record<string, unknown> // Source-specific data (fed to mapTags)
}
```
## Content Deferral (Required for file/content-download connectors)
**All connectors that require per-document API calls to fetch content MUST use `contentDeferred: true`.** This is the standard pattern — `listDocuments` returns lightweight metadata stubs, and content is fetched lazily by the sync engine via `getDocument` only for new/changed documents.
This pattern is critical for reliability: the sync engine processes documents in batches and enqueues each batch for processing immediately. If a sync times out, all previously-batched documents are already queued. Without deferral, content downloads during listing can exhaust the sync task's time budget before any documents are saved.
### When to use `contentDeferred: true`
- The service's list API does NOT return document content (only metadata)
- Content requires a separate download/export API call per document
### When NOT to use `contentDeferred`
- The list API already returns the full content inline (e.g., Slack messages, Reddit posts, HubSpot notes)
- No per-document API call is needed to get content
### Content Hash Strategy
Use a **metadata-based** `contentHash` — never a content-based hash. The hash must be derivable from the list response metadata alone, so the sync engine can detect changes without downloading content.
Good metadata hash sources:
- `modifiedTime` / `lastModifiedDateTime` — changes when the file is edited
**Critical invariant:** The `contentHash` MUST be identical whether produced by `listDocuments` (stub) or `getDocument` (full doc). Both should use the same stub function to guarantee this.
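One way to uphold the invariant is a single stub-hash function called from both paths (a sketch using `node:crypto`; the field names are illustrative):

```typescript
import { createHash } from 'node:crypto'

interface DocStubMeta {
  externalId: string
  modifiedTime: string
}

// Called by BOTH listDocuments (stub) and getDocument (full document),
// so the hash is metadata-derived and identical in either path.
function computeContentHash(meta: DocStubMeta): string {
  return createHash('sha256').update(`${meta.externalId}:${meta.modifiedTime}`).digest('hex')
}
```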
All external API calls must use `fetchWithRetry` from `@/lib/knowledge/documents/utils` instead of raw `fetch()`. This provides exponential backoff with retries on 429/502/503/504 errors. It returns a standard `Response` — all `.ok`, `.json()`, `.text()` checks work unchanged.
For `validateConfig` (user-facing, called on save), pass `VALIDATE_RETRY_OPTIONS` to cap wait time at ~7s. Background operations (`listDocuments`, `getDocument`) use the built-in defaults (5 retries, ~31s max).
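The retry behavior can be modeled as follows (a simplified stand-in; parameter names and defaults are assumptions, not the real `fetchWithRetry` signature):

```typescript
const RETRYABLE_STATUSES = new Set([429, 502, 503, 504])

// Simplified model: retry retryable statuses with exponential backoff,
// return the last response once retries are exhausted.
async function fetchWithRetrySketch(
  doFetch: () => Promise<{ status: number; ok: boolean }>,
  maxRetries = 5,
  baseDelayMs = 1000,
): Promise<{ status: number; ok: boolean }> {
  for (let attempt = 0; ; attempt++) {
    const res = await doFetch()
    if (!RETRYABLE_STATUSES.has(res.status) || attempt >= maxRetries) return res
    await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt)) // 1s, 2s, 4s, ...
  }
}
```

Non-retryable statuses (including 4xx client errors other than 429) return immediately, which is why the caller's normal `.ok` / `.json()` handling works unchanged.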
If `ExternalDocument.sourceUrl` is set, the sync engine stores it on the document record. Always construct the full URL (not a relative path).
## Sync Engine Behavior (Do Not Modify)
The sync engine (`lib/knowledge/connectors/sync-engine.ts`) is connector-agnostic. It:
1. Calls `listDocuments` with pagination until `hasMore` is false
2. Compares `contentHash` to detect new/changed/unchanged documents
3. Stores `sourceUrl` and calls `mapTags` on insert/update automatically
4. Handles soft-delete of removed documents
5. Resolves access tokens automatically — OAuth tokens are refreshed, API keys are decrypted from the `encryptedApiKey` column
You never need to modify the sync engine when adding a connector.
## Icon
The `icon` field on `ConnectorConfig` is used throughout the UI — in the connector list, the add-connector modal, and as the document icon in the knowledge base table (replacing the generic file type icon for connector-sourced documents). The icon is read from `CONNECTOR_REGISTRY[connectorType].icon` at runtime — no separate icon map to maintain.
If the service already has an icon in `apps/sim/components/icons.tsx` (from a tool integration), reuse it. Otherwise, ask the user to provide the SVG.
## Registering
Add one line to `apps/sim/connectors/registry.ts`:
When a tool has hosted key support, Sim provides its own API key if the user hasn't configured one (via BYOK or env var). Usage is metered and billed to the workspace.
## Step 2: Research the API's Pricing Model and Rate Limits
**Before writing any `getCost` or `rateLimit` code**, look up the service's official documentation for both pricing and rate limits. You need to understand:
### Pricing
1. **How the API charges** — per request, per credit, per token, per step, per minute, etc.
2. **Whether the API reports cost in its response** — look for fields like `creditsUsed`, `costDollars`, `tokensUsed`, or similar in the response body or headers
3. **Whether cost varies by endpoint/options** — some APIs charge more for certain features (e.g., Firecrawl charges 1 credit/page base but +4 for JSON format, +4 for enhanced mode)
4. **The dollar-per-unit rate** — what each credit/token/unit costs in dollars on our plan
### Rate Limits
1. **What rate limits the API enforces** — requests per minute/second, tokens per minute, concurrent requests, etc.
2. **Whether limits vary by plan tier** — free vs paid vs enterprise often have different ceilings
3. **Whether limits are per-key or per-account** — determines whether adding more hosted keys actually increases total throughput
4. **What the API returns when rate limited** — HTTP 429, `Retry-After` header, error body format, etc.
5. **Whether there are multiple dimensions** — some APIs limit both requests/min AND tokens/min independently
Search the API's docs/pricing page (use WebSearch/WebFetch). Capture the pricing model as a comment in `getCost` so future maintainers know the source of truth.
### Setting Our Rate Limits
Our rate limiter (`lib/core/rate-limiter/hosted-key/`) uses a token-bucket algorithm applied **per billing actor** (workspace). It supports two modes:
- **`per_request`** — simple; just `requestsPerMinute`. Good when the API charges flat per-request or cost doesn't vary much.
- **`custom`** — `requestsPerMinute` plus additional `dimensions` (e.g., `tokens`, `search_units`). Each dimension has its own `limitPerMinute` and an `extractUsage` function that reads actual usage from the response. Use when the API charges on a variable metric (tokens, credits) and you want to cap that metric too.
When choosing values for `requestsPerMinute` and any dimension limits:
- **Stay well below the API's per-key limit** — our keys are shared across all workspaces. If the API allows 60 RPM per key and we have 3 keys, the global ceiling is ~180 RPM. Set the per-workspace limit low enough (e.g., 20-60 RPM) that many workspaces can coexist without collectively hitting the API's ceiling.
- **Account for key pooling** — our round-robin distributes requests across `N` hosted keys, so the effective API-side rate per key is `(total requests) / N`. But per-workspace limits are enforced *before* key selection, so they apply regardless of key count.
- **Prefer conservative defaults** — it's easy to raise limits later but hard to claw back after users depend on high throughput.
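The ceiling arithmetic above as a worked example (all numbers illustrative):

```typescript
// Hypothetical numbers matching the example above.
const apiLimitPerKeyRpm = 60 // what the API allows per key
const hostedKeyCount = 3

// Round-robin spreads total traffic evenly across keys,
// so the global ceiling is per-key limit * key count.
const globalCeilingRpm = apiLimitPerKeyRpm * hostedKeyCount

// Effective API-side rate per key for a given total request rate.
function perKeyRpm(totalRpm: number, keys: number): number {
  return totalRpm / keys
}
```

A per-workspace limit of 20-60 RPM against a ~180 RPM global ceiling leaves room for several active workspaces before the API-side limit is in danger.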
## Step 3: Add `hosting` Config to the Tool
Add a `hosting` object to the tool's `ToolConfig`. This tells the execution layer how to acquire hosted keys, calculate cost, and rate-limit.
Keys use a numbered naming pattern driven by a count env var:
```
YOUR_SERVICE_API_KEY_COUNT=3
YOUR_SERVICE_API_KEY_1=sk-...
YOUR_SERVICE_API_KEY_2=sk-...
YOUR_SERVICE_API_KEY_3=sk-...
```
The `envKeyPrefix` value (`YOUR_SERVICE_API_KEY`) determines which env vars are read at runtime. Adding more keys only requires bumping the count and adding the new env var.
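Resolving the pool from those env vars can be sketched like this (a hypothetical helper; the real lookup lives in the hosted-key infrastructure):

```typescript
// Reads YOUR_SERVICE_API_KEY_COUNT, then YOUR_SERVICE_API_KEY_1..N.
function readHostedKeys(env: Record<string, string | undefined>, envKeyPrefix: string): string[] {
  const count = Number(env[`${envKeyPrefix}_COUNT`] ?? 0)
  const keys: string[] = []
  for (let i = 1; i <= count; i++) {
    const key = env[`${envKeyPrefix}_${i}`]
    if (key) keys.push(key)
  }
  return keys
}
```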
### Pricing: Prefer API-Reported Cost
Always prefer using cost data returned by the API (e.g., `creditsUsed`, `costDollars`). This is the most accurate because it accounts for variable pricing tiers, feature modifiers, and plan-level discounts.
**When the API reports cost** — use it directly and throw if missing. When the cost is instead deterministic from request parameters, compute it from the documented pricing, as in this Serper example:
```typescript
// Serper: 1 credit for <=10 results, 2 credits for >10 — from https://serper.dev/pricing
const credits = Number(params.num) > 10 ? 2 : 1
return { cost: credits * 0.001, metadata: { credits } }
},
},
```
**`getCost` must always throw** if it cannot determine cost. Never silently fall back to a default — this would hide billing inaccuracies.
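The throw-never-default rule, sketched (the field name and the $0.001/credit rate are illustrative):

```typescript
function getCostFromOutput(output: { creditsUsed?: number }): { cost: number; metadata: { credits: number } } {
  if (typeof output.creditsUsed !== 'number') {
    // Never fall back to a default: a silent guess would hide billing inaccuracies.
    throw new Error('creditsUsed missing from API response; cannot determine cost')
  }
  return { cost: output.creditsUsed * 0.001, metadata: { credits: output.creditsUsed } }
}
```

A thrown error surfaces the gap immediately; a default of 0 (or an average) would quietly under- or over-bill until someone audits the numbers.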
### Capturing Cost Data from the API
If the API returns cost info, capture it in `transformResponse` so `getCost` can read it from the output:
```typescript
transformResponse: async (response: Response) => {
  const data = await response.json()
  return {
    success: true,
    output: {
      results: data.results,
      creditsUsed: data.creditsUsed, // pass through for getCost
    },
  }
},
```
For async/polling tools, capture it in `postProcess` when the job completes:
```typescript
if (jobData.status === 'completed') {
  result.output = {
    data: jobData.data,
    creditsUsed: jobData.creditsUsed,
  }
}
```
## Step 4: Hide the API Key Field When Hosted
In the block config (`blocks/blocks/{service}.ts`), add `hideWhenHosted: true` to the API key subblock. This hides the field on hosted Sim since the platform provides the key:
```typescript
{
  id: 'apiKey',
  title: 'API Key',
  type: 'short-input',
  placeholder: 'Enter your API key',
  password: true,
  required: true,
  hideWhenHosted: true,
},
```
The visibility is controlled by `isSubBlockHidden()` in `lib/workflows/subblocks/visibility.ts`, which checks both the `isHosted` feature flag (`hideWhenHosted`) and optional env var conditions (`hideWhenEnvSet`).
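The visibility check can be modeled like this (assumed shapes; the real logic is `isSubBlockHidden()` in `lib/workflows/subblocks/visibility.ts`):

```typescript
interface SubBlockVisibilityConfig {
  hideWhenHosted?: boolean
  hideWhenEnvSet?: string // env var name; hidden when it has a value
}

function isSubBlockHiddenSketch(
  sub: SubBlockVisibilityConfig,
  isHosted: boolean,
  env: Record<string, string | undefined>,
): boolean {
  if (sub.hideWhenHosted && isHosted) return true
  if (sub.hideWhenEnvSet && env[sub.hideWhenEnvSet]) return true
  return false
}
```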
### Excluding Specific Operations from Hosted Key Support
When a block has multiple operations but some operations should **not** use a hosted key (e.g., the underlying API is deprecated, unsupported, or too expensive), use the **duplicate apiKey subblock** pattern. This is the same pattern Exa uses for its `research` operation:
1. **Remove the `hosting` config** from the tool definition for that operation — it must not have a `hosting` object at all.
2. **Duplicate the `apiKey` subblock** in the block config with opposing conditions:
```typescript
// API Key — hidden when hosted for operations with hosted key support
// ...
```
Both subblocks share the same `id: 'apiKey'`, so the same value flows to the tool. The conditions ensure only one is visible at a time. The first has `hideWhenHosted: true` and shows for all hosted operations; the second has no `hideWhenHosted` and shows only for the excluded operation — meaning users must always provide their own key for that operation.
To exclude multiple operations, use an array: `{ field: 'operation', value: ['op_a', 'op_b'] }`.
Add an entry to the `PROVIDERS` array in the BYOK settings component so users can bring their own key. You need the service icon from `components/icons.tsx`:
```typescript
{
  id: 'your_service',
  name: 'Your Service',
  icon: YourServiceIcon,
  description: 'What this service does',
  placeholder: 'Enter your API key',
},
```
## Step 6: Summarize Pricing and Throttling Comparison
After all code changes are complete, output a detailed summary to the user covering:
### What to include
1. **API's pricing model** — how the service charges (per token, per credit, per request, etc.), the specific rates found in docs, and whether the API reports cost in responses.
2. **Our `getCost` approach** — how we calculate cost, what fields we depend on, and any assumptions or estimates (especially when the API doesn't report exact dollar cost).
3. **API's rate limits** — the documented limits (RPM, TPM, concurrent, etc.), which plan tier they apply to, and whether they're per-key or per-account.
4. **Our `rateLimit` config** — what we set for `requestsPerMinute` (and dimensions if custom mode), why we chose those values, and how they compare to the API's limits.
5. **Key pooling impact** — how many hosted keys we expect, and how round-robin distribution affects the effective per-key rate at the API.
6. **Gaps or risks** — anything the API charges for that we don't meter, rate limit dimensions we chose not to enforce, or pricing that may be inaccurate due to variable model/tier costs.
### Format
Present this as a structured summary with clear headings. Example:
```
### Pricing
- **API charges**: $X per 1M tokens (input), $Y per 1M tokens (output) — varies by model
- **Response reports cost?**: No — only token counts in `usage` field
- **Our getCost**: Estimates cost at $Z per 1M total tokens based on median model pricing
- **Risk**: Actual cost varies by model; our estimate may over/undercharge for cheap/expensive models
- `visibility: 'user-only'` for API keys and user credentials
- `visibility: 'user-or-llm'` for operation parameters
- Always use `?? null` for nullable API response fields
- Always use `?? []` for optional array fields
- Set `optional: true` for outputs that may not exist
- Never output raw JSON dumps - extract meaningful fields
- When using `type: 'json'` and you know the object shape, define `properties` with the inner fields so downstream consumers know the structure. Only use bare `type: 'json'` when the shape is truly dynamic
- `canonicalParamId` must NOT match any subblock's `id` in the block
- `canonicalParamId` must be unique per operation/condition context
- Only use `canonicalParamId` to link basic/advanced alternatives for the same logical parameter
- `mode` only controls UI visibility, NOT serialization. Without `canonicalParamId`, both basic and advanced field values would be sent
- Every subblock `id` must be unique within the block. Duplicate IDs cause conflicts even with different conditions
- **Required consistency:** If one subblock in a canonical group has `required: true`, ALL subblocks in that group must have `required: true` (prevents bypassing validation by switching modes)
- **Inputs section:** Must list canonical param IDs (e.g., `fileId`), NOT raw subblock IDs (e.g., `fileSelector`, `manualFileId`)
- **Params function:** Must use canonical param IDs, NOT raw subblock IDs (raw IDs are deleted after canonical transformation)
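The canonical pair rules above can be sketched as follows (the subblock IDs and titles are illustrative, not from a real block): two subBlocks share a `canonicalParamId` that matches neither raw `id`, and `required` is identical on both.

```typescript
// Hypothetical basic/advanced pair for one logical parameter ('fileId').
// Neither subblock id matches the canonicalParamId, and required matches.
const subBlocks = [
  {
    id: 'fileSelector', // raw id, deleted after canonical transformation
    title: 'Select File',
    type: 'file-selector',
    canonicalParamId: 'fileId',
    mode: 'basic',
    required: true,
  },
  {
    id: 'manualFileId', // raw id for the advanced alternative
    title: 'File ID',
    type: 'short-input',
    canonicalParamId: 'fileId',
    mode: 'advanced',
    required: true,
  },
]
```

The inputs section and params function would then reference `fileId`, never `fileSelector` or `manualFileId`.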
- [ ] Secondary triggers do NOT have `includeDropdown`
- [ ] All triggers use `buildTriggerSubBlocks` helper
- [ ] Created `index.ts` barrel export
- [ ] Registered all triggers in `triggers/registry.ts`
### Docs
- [ ] Ran `bun run scripts/generate-docs.ts`
- [ ] Verified docs file created
### Final Validation (Required)
- [ ] Read every tool file and cross-referenced inputs/outputs against the API docs
- [ ] Verified block subBlocks cover all required tool params with correct conditions
- [ ] Verified block outputs match what the tools actually return
- [ ] Verified `tools.config.params` correctly maps and coerces all param types
## Example Command
When the user asks to add an integration:
```
User: Add a Stripe integration
You: I'll add the Stripe integration. Let me:
1. First, research the Stripe API using Context7
2. Create the tools for key operations (payments, subscriptions, etc.)
3. Create the block with operation dropdown
4. Register everything
5. Generate docs
6. Ask you for the Stripe icon SVG
[Proceed with implementation...]
[After completing steps 1-5...]
I've completed the Stripe integration. Before I can add the icon, please provide the SVG for Stripe.
You can usually find this in the service's brand/press kit page, or copy it from their website.
Paste the SVG code here and I'll convert it to a React component.
```
## File Handling
When your integration handles file uploads or downloads, follow these patterns to work with `UserFile` objects consistently.
### What is a UserFile?
A `UserFile` is the standard file representation in Sim:
```typescript
interface UserFile {
  id: string // Unique identifier
  name: string // Original filename
  url: string // Presigned URL for download
  size: number // File size in bytes
  type: string // MIME type (e.g., 'application/pdf')
  base64?: string // Optional base64 content (if small file)
  key?: string // Internal storage key
  context?: object // Storage context metadata
}
```
### File Input Pattern (Uploads)
For tools that accept file uploads, **always route through an internal API endpoint** rather than calling external APIs directly. This ensures proper file content retrieval.
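A rough sketch of this pattern (the endpoint path, service name, and body shape here are hypothetical, not Sim's actual internal API):

```typescript
// Minimal UserFile shape from the section above (only the fields used here)
interface UserFile {
  id: string
  name: string
  url: string
}

// Hypothetical request config: post to an internal proxy endpoint that
// resolves file content from storage, instead of the external API directly.
const request = {
  url: () => '/api/tools/myservice/upload', // internal endpoint (illustrative)
  method: 'POST' as const,
  body: (params: { file: UserFile }) => ({
    fileId: params.file.id, // the endpoint fetches content by id/key
    fileName: params.file.name,
  }),
}
```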
Optional fields that are rarely used should be set to `mode: 'advanced'` so they don't clutter the basic UI. Examples: pagination tokens, time range filters, sort order, max results, reply settings.
### WandConfig for Complex Inputs
Use `wandConfig` for fields that are hard to fill out manually:
- **Timestamps**: Use `generationType: 'timestamp'` to inject current date context into the AI prompt
- **JSON arrays**: Use `generationType: 'json-object'` for structured data
- **Complex queries**: Use a descriptive prompt explaining the expected format
```typescript
{
  id: 'startTime',
  title: 'Start Time',
  type: 'short-input',
  mode: 'advanced',
  wandConfig: {
    enabled: true,
    prompt: 'Generate an ISO 8601 timestamp. Return ONLY the timestamp string.',
    generationType: 'timestamp',
  },
}
```
### OAuth Scopes (Centralized System)
Scopes are maintained in a single source of truth and reused everywhere:
1. **Define scopes** in `lib/oauth/oauth.ts` under `OAUTH_PROVIDERS[provider].services[service].scopes`
2. **Add descriptions** in `SCOPE_DESCRIPTIONS` within `lib/oauth/utils.ts` for the OAuth modal UI
3. **Reference in auth.ts** using `getCanonicalScopesForProvider(providerId)` from `@/lib/oauth/utils`
4. **Reference in blocks** using `getScopesForService(serviceId)` from `@/lib/oauth/utils`
**Never hardcode scope arrays** in `auth.ts` or block `requiredScopes`. Always import from the centralized source.
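In a block definition this looks roughly like the following (the service id `'myservice'` is illustrative; `getScopesForService` is the centralized helper named above):

```typescript
import { getScopesForService } from '@/lib/oauth/utils'

// Credential subblock pulls scopes from the single source of truth —
// never a hardcoded string array.
const credentialSubBlock = {
  id: 'credential',
  title: 'Account',
  type: 'oauth-input',
  serviceId: 'myservice', // must match the OAuth provider configuration
  requiredScopes: getScopesForService('myservice'),
  required: true,
}
```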
1. **OAuth serviceId must match** - The `serviceId` in oauth-input must match the OAuth provider configuration
2. **All tool IDs MUST be snake_case** - `stripe_create_payment`, not `stripeCreatePayment`. This applies to tool `id` fields, registry keys, `tools.access` arrays, and `tools.config.tool` return values
3. **Block type is snake_case** - `type: 'stripe'`, not `type: 'Stripe'`
4. **Alphabetical ordering** - Keep imports and registry entries alphabetically sorted
5. **Required can be conditional** - Use `required: { field: 'op', value: 'create' }` instead of always true
6. **DependsOn clears options** - When a dependency changes, selector options are refetched
7. **Never pass Buffer directly to fetch** - Convert to `new Uint8Array(buffer)` for TypeScript compatibility
You are an expert at creating tool configurations for Sim integrations. Your job is to read API documentation and create properly structured tool files.
## Your Task
When the user asks you to create tools for a service:
1. Use Context7 or WebFetch to read the service's API documentation
2. Create the tools directory structure
3. Generate properly typed tool configurations
## Directory Structure
Create files in `apps/sim/tools/{service}/`:
```
tools/{service}/
├── index.ts # Barrel export
├── types.ts # Parameter & response types
└── {action}.ts # Individual tool files (one per operation)
// Trim ID fields to prevent copy-paste whitespace errors:
// userId: params.userId?.trim(),
}),
},
  transformResponse: async (response: Response) => {
    const data = await response.json()
    return {
      success: true,
      output: {
        // Map API response (data) to output
        // Use ?? null for nullable fields
        // Use ?? [] for optional arrays
      },
    }
  },
  outputs: {
    // Define each output field
  },
}
```
## Critical Rules for Parameters
### Visibility Options
- `'hidden'` - System-injected (OAuth tokens, internal params). User never sees.
- `'user-only'` - User must provide (credentials, API keys, account-specific IDs)
- `'user-or-llm'` - User provides OR LLM can compute (search queries, content, filters; most params fall into this category)
### Parameter Types
- `'string'` - Text values
- `'number'` - Numeric values
- `'boolean'` - True/false
- `'json'` - Complex objects (NOT 'object', use 'json')
- `'file'` - Single file
- `'file[]'` - Multiple files
### Required vs Optional
- Always explicitly set `required: true` or `required: false`
- Optional params should have `required: false`
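Putting the visibility, type, and required rules together, a typical parameter map might look like this (the field names are illustrative):

```typescript
// Illustrative tool params combining the rules above.
const params = {
  accessToken: {
    type: 'string',
    required: true,
    visibility: 'hidden', // system-injected OAuth token
    description: 'OAuth access token',
  },
  channelId: {
    type: 'string',
    required: true,
    visibility: 'user-only', // account-specific ID the user must supply
    description: 'Channel to post to',
  },
  message: {
    type: 'string',
    required: true,
    visibility: 'user-or-llm', // user provides OR the LLM can compute
    description: 'Message content',
  },
  maxResults: {
    type: 'number',
    required: false, // optional params are explicitly required: false
    visibility: 'user-or-llm',
    description: 'Maximum number of results to return',
  },
}
```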
## Critical Rules for Outputs
### Output Types
- `'string'`, `'number'`, `'boolean'` - Primitives
- `'json'` - Complex objects (use this, NOT 'object')
- `'array'` - Arrays with `items` property
- `'object'` - Objects with `properties` property
### Optional Outputs
Add `optional: true` for fields that may not exist in the response:
```typescript
closedAt: {
  type: 'string',
  description: 'When the issue was closed',
  optional: true,
},
```
### Typed JSON Outputs
When using `type: 'json'` and you know the object shape in advance, **always define the inner structure** using `properties` so downstream consumers know what fields are available:
```typescript
// BAD: Opaque json with no info about what's inside
data: { type: 'json', description: 'The response data' },
```
2. Add to the `tools` object with snake_case keys:
```typescript
import { serviceActionTool } from '@/tools/{service}'

export const tools = {
  // ... existing tools ...
  {service}_{action}: serviceActionTool,
}
```
## V2 Tool Pattern
If creating V2 tools (API-aligned outputs), use `_v2` suffix:
- Tool ID: `{service}_{action}_v2`
- Variable name: `{action}V2Tool`
- Version: `'2.0.0'`
- Outputs: Flat, API-aligned (no content/metadata wrapper)
## Naming Convention
All tool IDs MUST use `snake_case`: `{service}_{action}` (e.g., `x_create_tweet`, `slack_send_message`). Never use camelCase or PascalCase for tool IDs.
## Checklist Before Finishing
- [ ] All tool IDs use snake_case
- [ ] All params have explicit `required: true` or `required: false`
- [ ] All params have appropriate `visibility`
- [ ] All nullable response fields use `?? null`
- [ ] All optional outputs have `optional: true`
- [ ] No raw JSON dumps in outputs
- [ ] Types file has all interfaces
- [ ] Index.ts exports all tools
## Final Validation (Required)
After creating all tools, you MUST validate every tool before finishing:
1. **Read every tool file** you created — do not skip any
2. **Cross-reference with the API docs** to verify:
   - All required params are marked `required: true`
   - All optional params are marked `required: false`
   - Param types match the API (string, number, boolean, json)
   - Request URL, method, headers, and body match the API spec
   - `transformResponse` extracts the correct fields from the API response
   - All output fields match what the API actually returns
   - No fields are missing from outputs that the API provides
   - No extra fields are defined in outputs that the API doesn't return
3. **Verify consistency** across tools:
- Shared types in `types.ts` match all tools that use them
- Tool IDs in the barrel export match the tool file definitions
- Error handling is consistent (error checks, meaningful messages)
You are an expert at creating webhook and polling triggers for Sim. You understand the trigger system, the generic `buildTriggerSubBlocks` helper, polling infrastructure, and how triggers connect to blocks.
## Your Task
1. Research what webhook events the service supports — if the service lacks reliable webhooks, use polling
2. Create the trigger files using the generic builder (webhook) or manual config (polling)
3. Create a provider handler (webhook) or polling handler (polling)
4. Register triggers and connect them to the block
## Directory Structure
```
apps/sim/triggers/{service}/
├── index.ts # Barrel exports
├── utils.ts # Service-specific helpers (options, instructions, extra fields, outputs)
**Versioned blocks (V1 + V2):** Many integrations have a hidden V1 block and a visible V2 block. Where you add the trigger wiring depends on how V2 inherits from V1:
- **V2 uses `...V1Block` spread** (e.g., Google Calendar): Add trigger to V1 — V2 inherits both `subBlocks` and `triggers` automatically.
- **V2 defines its own `subBlocks`** (e.g., Google Sheets): Add trigger to V2 (the visible block). V1 is hidden and doesn't need it.
- **Single block, no V2** (e.g., Google Drive): Add trigger directly.
`generate-docs.ts` deduplicates by base type (first match wins). If V1 is processed first without triggers, the V2 triggers won't appear in `integrations.json`. Always verify by checking the output after running the script.
## Provider Handler
All provider-specific webhook logic lives in a single handler file: `apps/sim/lib/webhooks/providers/{service}.ts`.
If the service API supports programmatic webhook creation, implement `createSubscription` and `deleteSubscription` on the handler. The orchestration layer calls these automatically — **no code touches `route.ts`, `provider-subscriptions.ts`, or `deploy.ts`**.
Use polling when the service lacks reliable webhooks (e.g., Google Sheets, Google Drive, Google Calendar, Gmail, RSS, IMAP). Polling triggers do NOT use `buildTriggerSubBlocks` — they define subBlocks manually.
- scope: what to review (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Steps
Run each of these skills in order on the specified scope, passing through the scope and fix arguments. After each skill completes, move to the next. Do not skip any.
1. `/you-might-not-need-an-effect $ARGUMENTS`
2. `/you-might-not-need-a-memo $ARGUMENTS`
3. `/you-might-not-need-a-callback $ARGUMENTS`
4. `/you-might-not-need-state $ARGUMENTS`
5. `/react-query-best-practices $ARGUMENTS`
6. `/emcn-design-review $ARGUMENTS`
After all skills have run, output a summary of what was found and fixed (or proposed) across all six passes.
- scope: what to review (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses **emcn**, a custom component library built on Radix UI primitives with CVA variants and CSS variable design tokens. All UI must use emcn components and tokens.
## Steps
1. Read the emcn barrel export at `apps/sim/components/emcn/components/index.ts` to know what's available
2. Read `apps/sim/app/_styles/globals.css` for CSS variable tokens
3. Analyze the specified scope against every rule below
4. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.
---
## Imports
- Import from `@/components/emcn` barrel, never subpaths
- Icons from `@/components/emcn/icons` or `lucide-react`
- Use `cn` from `@/lib/core/utils/cn` for conditional classes
## Design Tokens
Use CSS variable pattern (`text-[var(--text-primary)]`), never Tailwind semantics (`text-muted-foreground`) or hardcoded colors (`text-gray-500`, `#333`).
## Delete Confirmation Modals
Modal `size="sm"`, title "Delete/Remove {ItemType}", `variant="destructive"` action button, `variant="default"` cancel. Cancel left, action right (100% compliance). Use `text-[var(--text-error)]` for irreversible warnings.
## Toast
`toast.success()`, `toast.error()`, `toast()` from `@/components/emcn`. Never custom notification UI.
## Badges
`red`=error/failed, `gray-secondary`=metadata/roles, `type`=type annotations, `green`=success/active, `gray`=neutral, `amber`=processing, `orange`=paused, `blue`=info. Use `dot` prop for status indicators.
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/hooks/queries/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses React Query (TanStack Query) as the single source of truth for all server state. All query hooks live in `hooks/queries/`. Zustand is used only for client-only UI state. Server data must never be duplicated into useState or Zustand outside of mutation callbacks that coordinate cross-store state.
## References
Read these before analyzing:
1. https://tkdodo.eu/blog/practical-react-query — foundational defaults, custom hooks, avoiding local state copies
You are an expert auditor for Sim knowledge base connectors. Your job is to thoroughly validate that an existing connector is correct, complete, and follows all conventions.
## Your Task
When the user asks you to validate a connector:
1. Read the service's API documentation (via Context7 or WebFetch)
2. Read the connector implementation, OAuth config, and registry entries
3. Cross-reference everything against the API docs and Sim conventions
4. Report all issues found, grouped by severity (critical, warning, suggestion)
5. Fix all issues after reporting them
## Step 1: Gather All Files
Read **every** file for the connector — do not skip any:
Fetch the official API docs for the service. This is the **source of truth** for:
- Endpoint URLs, HTTP methods, and auth headers
- Required vs optional parameters
- Parameter types and allowed values
- Response shapes and field names
- Pagination patterns (cursor, offset, next token)
- Rate limits and error formats
- OAuth scopes and their meanings
Use Context7 (resolve-library-id → query-docs) or WebFetch to retrieve documentation. If both fail, note which claims are based on training knowledge vs verified docs.
## Step 3: Validate API Endpoints
For **every** API call in the connector (`listDocuments`, `getDocument`, `validateConfig`, and any helper functions), verify against the API docs:
### URLs and Methods
- [ ] Base URL is correct for the service's API version
- [ ] Endpoint paths match the API docs exactly
- [ ] HTTP method is correct (GET, POST, PUT, PATCH, DELETE)
- [ ] Path parameters are correctly interpolated and URI-encoded where needed
- [ ] Query parameters use correct names and formats per the API docs
### Headers
- [ ] Authorization header uses the correct format:
- OAuth: `Authorization: Bearer ${accessToken}`
- API Key: correct header name per the service's docs
- [ ] `Content-Type` is set for POST/PUT/PATCH requests
- [ ] Any service-specific headers are present (e.g., `Notion-Version`, `Dropbox-API-Arg`)
- [ ] No headers are sent that the API doesn't support or silently ignores
### Request Bodies
- [ ] POST/PUT body fields match API parameter names exactly
- [ ] Required fields are always sent
- [ ] Optional fields are conditionally included (not sent as `null` or empty unless the API expects that)
- [ ] Field value types match API expectations (string vs number vs boolean)
### Input Sanitization
- [ ] User-controlled values interpolated into query strings are properly escaped:
- OData `$filter`: single quotes escaped with `''` (e.g., `externalId.replace(/'/g, "''")`)
- SOQL: single quotes escaped with `\'`
- GraphQL variables: passed as variables, not interpolated into query strings
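The OData case from the checklist above can be sketched as a small helper (the field name `externalId` is illustrative):

```typescript
// Escape single quotes per the OData convention ('' inside a quoted
// literal), then interpolate into the $filter expression.
function buildODataFilter(externalId: string): string {
  const escaped = externalId.replace(/'/g, "''")
  return `externalId eq '${escaped}'`
}
```

Without the `replace`, a value like `O'Brien` would terminate the string literal early and break (or inject into) the query.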
Scopes must be correctly declared and sufficient for all API calls the connector makes.
### Connector requiredScopes
- [ ] `requiredScopes` in the connector's `auth` config lists all scopes needed by the connector
- [ ] Each scope in `requiredScopes` is a real, valid scope recognized by the service's API
- [ ] No invalid, deprecated, or made-up scopes are listed
- [ ] No unnecessary excess scopes beyond what the connector actually needs
### Scope Subset Validation (CRITICAL)
- [ ] Every scope in `requiredScopes` exists in the OAuth provider's `scopes` array in `lib/oauth/oauth.ts`
- [ ] Find the provider in `OAUTH_PROVIDERS[providerGroup].services[serviceId].scopes`
- [ ] Verify: `requiredScopes` ⊆ `OAUTH_PROVIDERS scopes` (every required scope is present in the provider config)
- [ ] If a required scope is NOT in the provider config, flag as **critical** — the connector will fail at runtime
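The subset check itself is mechanical; conceptually it amounts to something like this (a sketch, not code from the repo):

```typescript
// Return every connector scope missing from the provider's configured
// scope list — a non-empty result is a critical finding.
function missingScopes(requiredScopes: string[], providerScopes: string[]): string[] {
  const provider = new Set(providerScopes)
  return requiredScopes.filter((scope) => !provider.has(scope))
}
```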
### Scope Sufficiency
For each API endpoint the connector calls:
- [ ] Identify which scopes are required per the API docs
- [ ] Verify those scopes are included in the connector's `requiredScopes`
- [ ] If the connector calls endpoints requiring scopes not in `requiredScopes`, flag as **warning**
### Token Refresh Config
- [ ] Check the `getOAuthTokenRefreshConfig` function in `lib/oauth/oauth.ts` for this provider
- [ ] `useBasicAuth` matches the service's token exchange requirements
- [ ] `supportsRefreshTokenRotation` matches whether the service issues rotating refresh tokens
- [ ] Token endpoint URL is correct
## Step 5: Validate Pagination
### listDocuments Pagination
- [ ] Cursor/pagination parameter name matches the API docs
- [ ] Response pagination field is correctly extracted (e.g., `next_cursor`, `nextPageToken`, `@odata.nextLink`, `offset`)
- [ ] `hasMore` is correctly determined from the response
- [ ] `nextCursor` is correctly passed back for the next page
- [ ] `maxItems` / `maxRecords` cap is correctly applied across pages using `syncContext.totalDocsFetched`
- [ ] Page size is within the API's allowed range (not exceeding max page size)
- [ ] Last page precision: when a `maxItems` cap exists, the final page request uses `Math.min(PAGE_SIZE, remaining)` to avoid fetching more records than needed
- [ ] No off-by-one errors in pagination tracking
- [ ] The connector does NOT hit known API pagination limits silently (e.g., HubSpot search 10k cap)
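The last-page precision rule above reduces to a small page-size computation (a sketch; `PAGE_SIZE` and the signature are illustrative):

```typescript
const PAGE_SIZE = 100

// Size the next page request so a maxItems cap is never over-fetched;
// totalDocsFetched would come from syncContext.
function nextPageSize(maxItems: number | undefined, totalDocsFetched: number): number {
  if (maxItems === undefined) return PAGE_SIZE
  const remaining = maxItems - totalDocsFetched
  return Math.max(0, Math.min(PAGE_SIZE, remaining))
}
```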
### Pagination State Across Pages
- [ ] `syncContext` is used to cache state across pages (user names, field maps, instance URLs, portal IDs, etc.)
- [ ] Cached state in `syncContext` is correctly initialized on first page and reused on subsequent pages
## Step 6: Validate Data Transformation
### ExternalDocument Construction
- [ ] `externalId` is a stable, unique identifier from the source API
- [ ] `title` is extracted from the correct field and has a sensible fallback (e.g., `'Untitled'`)
- [ ] `content` is plain text — HTML content is stripped using `htmlToPlainText` from `@/connectors/utils`
- [ ] `mimeType` is `'text/plain'`
- [ ] `contentHash` is computed using `computeContentHash` from `@/connectors/utils`
- [ ] `sourceUrl` is a valid, complete URL back to the original resource (not relative)
- [ ] `metadata` contains all fields referenced by `mapTags` and `tagDefinitions`
### Content Extraction
- [ ] Rich text / HTML fields are converted to plain text before indexing
- [ ] Important content is not silently dropped (e.g., nested blocks, table cells, code blocks)
- [ ] Content is not silently truncated without logging a warning
- [ ] Empty/blank documents are properly filtered out
- [ ] Size checks use `Buffer.byteLength(text, 'utf8')` not `text.length` when comparing against byte-based limits (e.g., `MAX_FILE_SIZE` in bytes)
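The character-count vs byte-count distinction matters for any multi-byte content; a quick illustration:

```typescript
// 'é' is one character but two UTF-8 bytes, so text.length understates
// size when compared against byte-based limits like MAX_FILE_SIZE.
const text = 'café'
const charCount = text.length // 4 characters
const byteCount = Buffer.byteLength(text, 'utf8') // 5 bytes

function fitsByteLimit(content: string, maxBytes: number): boolean {
  return Buffer.byteLength(content, 'utf8') <= maxBytes
}
```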
## Step 7: Validate Tag Definitions and mapTags
### tagDefinitions
- [ ] Each `tagDefinition` has an `id`, `displayName`, and `fieldType`
- [ ] `fieldType` matches the actual data type: `'text'` for strings, `'number'` for numbers, `'date'` for dates, `'boolean'` for booleans
- [ ] Every `id` in `tagDefinitions` is returned by `mapTags`
- [ ] No `tagDefinition` references a field that `mapTags` never produces
### mapTags
- [ ] Return keys match `tagDefinition` `id` values exactly
- [ ] Date values are properly parsed using `parseTagDate` from `@/connectors/utils`
- [ ] Array values are properly joined using `joinTagArray` from `@/connectors/utils`
- [ ] Number values are validated (not `NaN`)
- [ ] Metadata field names accessed in `mapTags` match what `listDocuments`/`getDocument` store in `metadata`
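A representative `mapTags` shape, assuming metadata fields (`status`, `created_at`, `labels`) stored by `listDocuments` — the field names are illustrative, while `parseTagDate` and `joinTagArray` are the `@/connectors/utils` helpers named above:

```typescript
import { joinTagArray, parseTagDate } from '@/connectors/utils'

// Keys must match tagDefinition ids exactly; values are read from the
// metadata written by listDocuments/getDocument.
const mapTags = (doc: { metadata: Record<string, unknown> }) => ({
  status: doc.metadata.status as string,
  createdAt: parseTagDate(doc.metadata.created_at as string),
  labels: joinTagArray(doc.metadata.labels as string[]),
})
```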
## Step 8: Validate Config Fields and Validation
### configFields
- [ ] Every field has `id`, `title`, `type`
- [ ] `required` is set explicitly (not omitted)
- [ ] Dropdown fields have `options` with `label` and `id` for each option
- [ ] Selector fields follow the canonical pair pattern:
- A `type: 'selector'` field with `selectorKey`, `canonicalParamId`, `mode: 'basic'`
- A `type: 'short-input'` field with the same `canonicalParamId`, `mode: 'advanced'`
- `required` is identical on both fields in the pair
- [ ] `selectorKey` values exist in the selector registry
- [ ] `dependsOn` references selector field `id` values, not `canonicalParamId`
### validateConfig
- [ ] Validates all required fields are present before making API calls
- [ ] Catches exceptions and returns user-friendly error messages
- [ ] Does NOT make expensive calls (full data listing, large queries)
## Step 9: Validate getDocument
- [ ] Fetches a single document by `externalId`
- [ ] Returns `null` for 404 / not found (does not throw)
- [ ] Returns the same `ExternalDocument` shape as `listDocuments`
- [ ] Handles all content types that `listDocuments` can produce (e.g., if `listDocuments` returns both pages and blogposts, `getDocument` must handle both — not hardcode one endpoint)
- [ ] Forwards `syncContext` if it needs cached state (user names, field maps, etc.)
- [ ] Error handling is graceful (catches, logs, returns null or throws with context)
- [ ] Does not redundantly re-fetch data already included in the initial API response (e.g., if comments come back with the post, don't fetch them again separately)
## Step 10: Validate General Quality
### fetchWithRetry Usage
- [ ] All external API calls use `fetchWithRetry` from `@/lib/knowledge/documents/utils`
- [ ] No raw `fetch()` calls to external APIs
- [ ] `VALIDATE_RETRY_OPTIONS` used in `validateConfig`
- [ ] If `validateConfig` calls a shared helper (e.g., `linearGraphQL`, `resolveId`), that helper must accept and forward `retryOptions` to `fetchWithRetry`
- [ ] Default retry options used in `listDocuments`/`getDocument`
### API Efficiency
- [ ] APIs that support field selection (e.g., `$select`, `sysparm_fields`, `fields`) should request only the fields the connector needs — in both `listDocuments` AND `getDocument`
- [ ] No redundant API calls: if a helper already fetches data (e.g., site metadata), callers should reuse the result instead of making a second call for the same information
- [ ] Sequential per-item API calls (fetching details for each document in a loop) should be batched with `Promise.all` and a concurrency limit of 3-5
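A common batching sketch for those per-item fetches (generic; the batch size bounds concurrency):

```typescript
// Map over items in fixed-size batches so at most batchSize requests
// are in flight at once.
async function mapInBatches<T, R>(
  items: T[],
  batchSize: number,
  fn: (item: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = []
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize)
    results.push(...(await Promise.all(batch.map(fn))))
  }
  return results
}
```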
### Error Handling
- [ ] Individual document failures are caught and logged without aborting the sync
- [ ] API error responses include status codes in error messages
- [ ] No unhandled promise rejections in concurrent operations
### Concurrency
- [ ] Concurrent API calls use reasonable batch sizes (3-5 is typical)
- [ ] No unbounded `Promise.all` over large arrays
### Logging
- [ ] Uses `createLogger` from `@sim/logger` (not `console.log`)
- [ ] Logs sync progress at `info` level
- [ ] Logs errors at `warn` or `error` level with context
### Registry
- [ ] Connector is exported from `connectors/{service}/index.ts`
- [ ] Connector is registered in `connectors/registry.ts`
- [ ] Registry key matches the connector's `id` field
## Step 11: Report and Fix
### Report Format
Group findings by severity:
**Critical** (will cause runtime errors, data loss, or auth failures):
- Wrong API endpoint URL or HTTP method
- Invalid or missing OAuth scopes (not in provider config)
- Incorrect response field mapping (accessing wrong path)
- SOQL/query fields that don't exist on the target object
- Pagination that silently hits undocumented API limits
- Missing error handling that would crash the sync
- `requiredScopes` not a subset of OAuth provider scopes
- Query/filter injection: user-controlled values interpolated into OData `$filter`, SOQL, or query strings without escaping
**Warning** (incorrect behavior, data quality issues, or convention violations):
- HTML content not stripped via `htmlToPlainText`
- `getDocument` not forwarding `syncContext`
- `getDocument` hardcoded to one content type when `listDocuments` returns multiple (e.g., only pages but not blogposts)
- Missing `tagDefinition` for metadata fields returned by `mapTags`
- Incorrect `useBasicAuth` or `supportsRefreshTokenRotation` in token refresh config
- Invalid scope names that the API doesn't recognize (even if silently ignored)
- Private resources excluded from name-based lookup despite scopes being available
- Silent data truncation without logging
- Size checks using `text.length` (character count) instead of `Buffer.byteLength` (byte count) for byte-based limits
- URL-type config fields not normalized (protocol prefix, trailing slashes cause API failures)
- `VALIDATE_RETRY_OPTIONS` not threaded through helper functions called by `validateConfig`
**Suggestion** (minor improvements):
- Missing incremental sync support despite API supporting it
- Overly broad scopes that could be narrowed (not wrong, but could be tighter)
- Source URL format could be more specific
- Missing `orderBy` for deterministic pagination
- Redundant API calls that could be cached in `syncContext`
- Sequential per-item API calls that could be batched with `Promise.all` (concurrency 3-5)
- API supports field selection but connector fetches all fields (e.g., missing `$select`, `sysparm_fields`, `fields`)
- `getDocument` re-fetches data already included in the initial API response (e.g., comments returned with post)
- Last page of pagination requests the full `PAGE_SIZE` when fewer records remain (use `Math.min(PAGE_SIZE, remaining)` instead)
### Fix All Issues
After reporting, fix every **critical** and **warning** issue. Apply **suggestions** where they don't add unnecessary complexity.
### Validation Output
After fixing, confirm:
1. `bun run lint` passes
2. TypeScript compiles clean
3. Re-read all modified files to verify fixes are correct
You are an expert auditor for Sim integrations. Your job is to thoroughly validate that an existing integration is correct, complete, and follows all conventions.
## Your Task
When the user asks you to validate an integration:
1. Read the service's API documentation (via WebFetch or Context7)
2. Read every tool, the block, and registry entries
3. Cross-reference everything against the API docs and Sim conventions
4. Report all issues found, grouped by severity (critical, warning, suggestion)
5. Fix all issues after reporting them
## Step 1: Gather All Files
Read **every** file for the integration — do not skip any:
```
apps/sim/tools/{service}/ # All tool files, types.ts, index.ts
- Credentials → `oauth-input` with correct `serviceId`
- [ ] Dropdown `value: () => 'default'` is set for dropdowns with a sensible default
### Advanced Mode
- [ ] Optional, rarely-used fields are set to `mode: 'advanced'`:
- Pagination tokens / next tokens
- Time range filters (start/end time)
- Sort order / direction options
- Max results / per page limits
- Reply settings / threading options
- Rarely used IDs (reply-to, quote-tweet, etc.)
- Exclude filters
- [ ] **Required** fields are NEVER set to `mode: 'advanced'`
- [ ] Fields that users fill in most of the time are NOT set to `mode: 'advanced'`
### WandConfig
- [ ] Timestamp fields have `wandConfig` with `generationType: 'timestamp'`
- [ ] Comma-separated list fields have `wandConfig` with a descriptive prompt
- [ ] Complex filter/query fields have `wandConfig` with format examples in the prompt
- [ ] All `wandConfig` prompts end with "Return ONLY the [format] - no explanations, no extra text."
- [ ] `wandConfig.placeholder` describes what to type in natural language
### Tools Config
- [ ] `tools.access` lists **every** tool ID the block can use — none missing
- [ ] `tools.config.tool` returns the correct tool ID for each operation
- [ ] Type coercions are in `tools.config.params` (runs at execution time), NOT in `tools.config.tool` (runs at serialization time before variable resolution)
- [ ] `tools.config.params` handles:
  - `Number()` conversion for numeric params that come as strings from inputs
  - `Boolean` / string-to-boolean conversion for toggle params
  - Empty string → `undefined` conversion for optional dropdown values
  - Any subBlock ID → tool param name remapping
- [ ] No `Number()`, `JSON.parse()`, or other coercions in `tools.config.tool` — these would destroy dynamic references like `<Block.output>`
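The coercion checklist above might look like this in a params function (param names are illustrative):

```typescript
// tools.config.params runs at execution time, after variable resolution,
// so coercions here are safe — unlike in tools.config.tool.
const params = (p: Record<string, any>) => ({
  ...p,
  // numeric param arriving as a string from an input field
  maxResults: p.maxResults !== undefined ? Number(p.maxResults) : undefined,
  // toggle param arriving as a boolean or the string 'true'
  includeReplies: p.includeReplies === true || p.includeReplies === 'true',
  // optional dropdown: empty string means "not set"
  sortOrder: p.sortOrder === '' ? undefined : p.sortOrder,
})
```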
### Block Outputs
- [ ] Outputs cover the key fields returned by ALL tools (not just one operation)
You are an expert auditor for Sim webhook triggers. Your job is to validate that an existing trigger implementation is correct, complete, secure, and aligned across all layers.
## Your Task
1. Read the service's webhook/API documentation (via WebFetch)
2. Read every trigger file, provider handler, and registry entry
3. Cross-reference against the API docs and Sim conventions
4. Report all issues grouped by severity (critical, warning, suggestion)
5. Fix all issues after reporting them
## Step 1: Gather All Files
Read **every** file for the trigger — do not skip any:
```
apps/sim/triggers/{service}/ # All trigger files, utils.ts, index.ts
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## References
Read before analyzing:
1. https://overreacted.io/before-you-memo/ — two techniques to avoid memo entirely
## Anti-patterns to detect
1. **State can be moved down instead of memoizing**: Move state into a smaller child so the slow component stops re-rendering without memo.
2. **Children can be lifted up**: Extract stateful part, pass expensive subtree as `children` — children as props don't re-render when parent state changes.
3. **useMemo on cheap computations**: Small array filters, string concat, arithmetic don't need memoization.
4. **useMemo with constantly-changing deps**: Deps change every render = useMemo does nothing.
5. **useMemo to stabilize props for non-memoized children**: If the child isn't wrapped in React.memo, stable references don't matter.
6. **React.memo on components that always receive new props**: Fix the parent instead.
7. **useMemo for derived state**: Just compute inline during render.
## Steps
1. Read the reference above
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses React Query for all server state and Zustand for client-only global state. useState should only be used for ephemeral UI concerns (open/closed, hover, local form input). Server data should never be copied into useState or Zustand — React Query is the single source of truth.
## References
Read these before analyzing:
1. https://react.dev/learn/choosing-the-state-structure — 5 principles for structuring state
2. https://tkdodo.eu/blog/dont-over-use-state — never store derived/computed values in state
3. https://tkdodo.eu/blog/putting-props-to-use-state — never mirror props into state via useEffect
## Anti-patterns to detect
1. **Derived state stored in useState**: If a value can be computed from props, other state, or query data, compute it inline during render instead of storing it in state.
2. **Server state copied into useState**: Never `useState` + `useEffect` to sync React Query data into local state. Use query data directly. The only exception is forms where users edit server data.
3. **Props mirrored into state**: Never `useState(prop)` + `useEffect(() => setState(prop))`. Use the prop directly, or use a key to reset component state.
4. **Chained useEffect state updates**: Never chain Effects that set state to trigger other Effects. Calculate all derived values in the event handler or inline during render.
5. **Storing objects when an ID suffices**: Store `selectedId`, not a copy of the selected object. Derive the object: `items.find(i => i.id === selectedId)`.
6. **State that duplicates Zustand or React Query**: If the data already lives in a store or query cache, don't create a parallel useState.
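Anti-pattern 5 boils down to a pure derivation — keep only the id in state and compute the object during render. A minimal sketch (the `Item` shape is illustrative):

```typescript
interface Item {
  id: string
  name: string
}

// Store only `selectedId` in state; derive the full object from the source
// data on every render instead of keeping a stale copy in useState.
function getSelectedItem(items: Item[], selectedId: string | null): Item | undefined {
  return items.find((item) => item.id === selectedId)
}
```

Because the object is derived, it can never drift out of sync when the query cache refetches or the list changes.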
## Steps
1. Read the references above to understand the guidelines
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.
When editing user-facing copy (landing pages, docs, metadata, marketing), follow these rules.
## Identity
Sim is the **AI workspace** where teams build and run AI agents. Not a workflow tool, not an agent framework, not an automation platform.
**Short definition:** Sim is the open-source AI workspace where teams build, deploy, and manage AI agents.
**Full definition:** Sim is the open-source AI workspace where teams build, deploy, and manage AI agents. Connect 1,000+ integrations and every major LLM to create agents that automate real work — visually, conversationally, or with code.
## Audience
**Primary:** Teams building AI agents for their organization — IT, operations, and technical teams who need governance, security, lifecycle management, and collaboration.
**Secondary:** Individual builders and developers who care about speed, flexibility, and open source.
| **Tables** | A database, built in. Store, query, and wire structured data into agent runs. |
| **Files** | Upload, create, and share. One store for your team and every agent. |
| **Logs** | Full visibility, every run. Trace execution block by block. |
## What We Never Say
- Never call Sim "just a workflow tool"
- Never compare only on integration count — we win on AI-native capabilities
- Never use "no-code" as the primary descriptor — say "visually, conversationally, or with code"
- Never promise unshipped features
- Never use jargon ("RAG", "vector database", "MCP") without plain-English explanation on public pages
- Avoid "agentic workforce" as a primary term — use "AI agents"
## Vision
Sim becomes the default environment where teams build AI agents — not a tool you visit for one task, but a workspace you live in. Workflows are one module; Mothership is another. The workspace is the constant; the interface adapts.
### NEVER use `mockAuth()`, `mockConsoleLogger()`, or `setupCommonApiMocks()` from `@sim/testing`
These helpers internally use `vi.doMock()` which is slow. Use direct `vi.hoisted()` + `vi.mock()` instead.
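A minimal sketch of the direct pattern (the mocked module path and export names are illustrative, not a prescribed shape):

```typescript
import { vi } from 'vitest'

// vi.hoisted runs before vi.mock factories are hoisted to the top of the
// file, so the factory below can safely close over these fns.
const mocks = vi.hoisted(() => ({
  getSession: vi.fn(),
}))

vi.mock('@/lib/auth', () => ({
  getSession: mocks.getSession,
}))

// Per-test override, e.g.:
// mocks.getSession.mockResolvedValue({ user: { id: 'user-1' } })
```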
### Mock heavy transitive dependencies
If a module under test imports `@/blocks` (200+ files), `@/tools/registry`, or other heavy modules, mock them:
@@ -134,38 +135,61 @@ await new Promise(r => setTimeout(r, 1))
vi.useFakeTimers()
```
## Mock Pattern Reference
## Centralized Mocks (prefer over local declarations)
`@sim/testing` exports ready-to-use mock modules for common dependencies. Import and pass directly to `vi.mock()` — no `vi.hoisted()` boilerplate needed. Each paired `*MockFns` object exposes the underlying `vi.fn()`s for per-test overrides.
Only define a local `vi.mock('@/lib/auth', ...)` if the module under test consumes exports outside the centralized shape (e.g., `auth.api.verifyOneTimeToken`, `auth.api.resetPassword`).
@@ -192,7 +192,7 @@ In the block config (`blocks/blocks/{service}.ts`), add `hideWhenHosted: true` t
},
```
The visibility is controlled by `isSubBlockHiddenByHostedKey()` in `lib/workflows/subblocks/visibility.ts`, which checks the `isHosted` feature flag.
The visibility is controlled by `isSubBlockHidden()` in `lib/workflows/subblocks/visibility.ts`, which checks both the `isHosted` feature flag (`hideWhenHosted`) and optional env var conditions (`hideWhenEnvSet`).
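The check described above can be sketched as a pure predicate. This is a hedged sketch of the logic, not the real implementation in `lib/workflows/subblocks/visibility.ts`; the config shape is illustrative:

```typescript
interface VisibilityConfig {
  hideWhenHosted?: boolean
  // Name of an env var that hides the subblock when it is set (assumed shape).
  hideWhenEnvSet?: string
}

function isSubBlockHidden(
  config: VisibilityConfig,
  isHosted: boolean,
  env: Record<string, string | undefined>
): boolean {
  if (config.hideWhenHosted && isHosted) return true
  if (config.hideWhenEnvSet && env[config.hideWhenEnvSet]) return true
  return false
}
```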
### Excluding Specific Operations from Hosted Key Support
You are a professional software engineer. All code must follow best practices: accurate, readable, clean, and efficient.
## Global Standards
- **Logging**: Import `createLogger` from `@sim/logger`. Use `logger.info`, `logger.warn`, `logger.error` instead of `console.log`
- **Comments**: Use TSDoc for documentation. No `====` separators. No non-TSDoc comments
- **Styling**: Never update global styles. Keep all styling local to components
- **ID Generation**: Never use `crypto.randomUUID()`, `nanoid`, or `uuid` package. Use `generateId()` (UUID v4) or `generateShortId()` (compact) from `@sim/utils/id`
- **Package Manager**: Use `bun` and `bunx`, not `npm` and `npx`
## Architecture
### Core Principles
1. Single Responsibility: Each component, hook, store has one clear purpose
2. Composition Over Complexity: Break down complex logic into smaller pieces
3. Type Safety First: TypeScript interfaces for all props, state, return types
4. Predictable State: Zustand for global state, useState for UI-only concerns
├── workflow-persistence/ # @sim/workflow-persistence — raw load/save + subflow helpers
└── workflow-types/ # @sim/workflow-types — pure BlockState/Loop/Parallel/... types
```
### Package boundaries
- `apps/* → packages/*` only. Packages never import from `apps/*`.
- Each package has explicit subpath `exports` maps; no barrels that accidentally pull in heavy halves.
- `apps/realtime` intentionally avoids Next.js, React, the block/tool registry, provider SDKs, and the executor. CI enforces this via `scripts/check-monorepo-boundaries.ts` and `scripts/check-realtime-prune-graph.ts`.
- Auth is shared across services via the Better Auth "Shared Database Session" pattern: both apps read the same `BETTER_AUTH_SECRET` and point at the same DB via `@sim/db`.
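The core boundary rule is simple enough to state as a predicate. This is a hypothetical, heavily simplified illustration — the real `scripts/check-monorepo-boundaries.ts` does full import-graph analysis:

```typescript
// Illustrative only: flag any packages/* file that imports from apps/*.
// Apps may depend on packages; packages must never depend on app code.
function violatesBoundary(importerFile: string, importedFile: string): boolean {
  return importerFile.startsWith('packages/') && importedFile.startsWith('apps/')
}
```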
### Naming Conventions
- Components: PascalCase (`WorkflowList`)
- Hooks: `use` prefix (`useWorkflowOperations`)
- Files: kebab-case (`workflow-list.tsx`)
- Stores: `stores/feature/store.ts`
- Constants: SCREAMING_SNAKE_CASE
- Interfaces: PascalCase with suffix (`WorkflowListProps`)
## Imports
**Always use absolute imports.** Never use relative imports.
Use `devtools` middleware. Use `persist` only when data should survive reload with `partialize` to persist only necessary state.
## React Query
All React Query hooks live in `hooks/queries/`. All server state must go through React Query — never use `useState` + `fetch` in components for data fetching or mutations.
### Query Key Factory
Every file must have a hierarchical key factory with an `all` root key and intermediate plural keys for prefix invalidation:
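A sketch of the shape, assuming a hypothetical `workflows` feature (the segment names here are illustrative, not the project's real factories):

```typescript
// Hierarchical key factory: `all` is the root, plural keys are intermediate
// prefixes, so invalidating `lists()` hits every list query at once.
const workflowKeys = {
  all: ['workflows'] as const,
  lists: () => [...workflowKeys.all, 'list'] as const,
  list: (workspaceId: string) => [...workflowKeys.lists(), workspaceId] as const,
  details: () => [...workflowKeys.all, 'detail'] as const,
  detail: (id: string) => [...workflowKeys.details(), id] as const,
}
```

With this structure, `queryClient.invalidateQueries({ queryKey: workflowKeys.all })` invalidates everything for the feature, while `workflowKeys.lists()` scopes invalidation to list queries only.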
Import from `@/components/emcn`, never from subpaths (except CSS files). Use CVA when 2+ variants exist.
## Testing
Use Vitest. Test files: `feature.ts` → `feature.test.ts`. See `.cursor/rules/sim-testing.mdc` for full details.
### Global Mocks (vitest.setup.ts)
`@sim/db`, `drizzle-orm`, `@sim/logger`, `@/blocks/registry`, `@trigger.dev/sdk`, and store mocks are provided globally. Do NOT re-mock them unless overriding behavior.
  tools: {
    access: ['service_action'],
    config: {
      tool: (p) => `service_${p.operation}`,
      params: (p) => ({ /* type coercions here */ }),
    },
  },
  inputs: { /* ... */ },
  outputs: { /* ... */ },
}
```
Register in `blocks/registry.ts` (alphabetically).
**Important:** `tools.config.tool` runs during serialization (before variable resolution). Never do `Number()` or other type coercions there — dynamic references like `<Block.output>` will be destroyed. Use `tools.config.params` for type coercions (it runs during execution, after variables are resolved).
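The timing split can be sketched with two standalone functions (the `RawParams` shape and field names are hypothetical):

```typescript
// At serialization time, 'limit' may still be an unresolved reference such
// as '<Block.output>' rather than a numeric string.
interface RawParams {
  operation: string
  limit: string
}

// tools.config.tool — serialization time: only derive the tool id; never coerce.
const tool = (p: RawParams): string => `service_${p.operation}`

// tools.config.params — execution time: references are resolved, so coercion is safe.
const params = (p: RawParams) => ({ limit: Number(p.limit) })
```

If `Number()` ran at serialization time instead, `Number('<Block.output>')` would produce `NaN` and the reference would be lost before it could resolve.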
For file uploads, create an internal API route (`/api/tools/{service}/upload`) that uses `downloadFileFromStorage` to get file content from `UserFile` objects.
@@ -4,9 +4,12 @@ You are a professional software engineer. All code must follow best practices: a
## Global Standards
- **Logging**: Import `createLogger` from `@sim/logger`. Use `logger.info`, `logger.warn`, `logger.error` instead of `console.log`
- **Logging**: Import `createLogger` from `@sim/logger`. Use `logger.info`, `logger.warn`, `logger.error` instead of `console.log`. Inside API routes wrapped with `withRouteHandler`, loggers automatically include the request ID — no manual `withMetadata({ requestId })` needed
- **API Route Handlers**: All API route handlers (`GET`, `POST`, `PUT`, `DELETE`, `PATCH`) must be wrapped with `withRouteHandler` from `@/lib/core/utils/with-route-handler`. This provides request ID tracking, automatic error logging for 4xx/5xx responses, and unhandled error catching. See "API Route Pattern" section below
- **Comments**: Use TSDoc for documentation. No `====` separators. No non-TSDoc comments
- **Styling**: Never update global styles. Keep all styling local to components
- **ID Generation**: Never use `crypto.randomUUID()`, `nanoid`, or `uuid` package. Use `generateId()` (UUID v4) or `generateShortId()` (compact) from `@sim/utils/id`
- **Common Utilities**: Use shared helpers from `@sim/utils` instead of inline implementations. `sleep(ms)` from `@sim/utils/helpers` for delays, `toError(e)` from `@sim/utils/errors` to normalize caught values.
- **Package Manager**: Use `bun` and `bunx`, not `npm` and `npx`
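For reference, the contracts of the two `@sim/utils` helpers named above can be sketched as follows — these are hedged sketches of the described behavior, not the package source; always import the real helpers instead of redefining them:

```typescript
// sleep(ms): promise-based delay, replacing `new Promise(r => setTimeout(r, ms))`.
const sleep = (ms: number): Promise<void> =>
  new Promise((resolve) => setTimeout(resolve, ms))

// toError(e): normalize an unknown caught value to an Error, replacing the
// `e instanceof Error ? e : new Error(String(e))` pattern at call sites.
const toError = (e: unknown): Error =>
  e instanceof Error ? e : new Error(String(e))
```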
Extract when: 50+ lines, used in 2+ files, or has own state/logic. Keep inline when: < 10 lines, single use, purely presentational.
## API Route Pattern
Every API route handler must be wrapped with `withRouteHandler`. This sets up `AsyncLocalStorage`-based request context so all loggers in the request lifecycle automatically include the request ID.
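The mechanism can be sketched with Node's `AsyncLocalStorage` directly. This is a minimal illustration of the idea, not the real `with-route-handler` API (which also handles error logging and response shaping); the id format here is made up:

```typescript
import { AsyncLocalStorage } from 'node:async_hooks'

// Context store: anything that runs inside the handler can read the request id
// without it being threaded through function arguments.
const requestContext = new AsyncLocalStorage<{ requestId: string }>()

function getRequestId(): string | undefined {
  return requestContext.getStore()?.requestId
}

let nextId = 0
function withRouteHandler<T>(handler: () => T): () => T {
  return () => requestContext.run({ requestId: `req-${++nextId}` }, handler)
}
```

Loggers created anywhere in the handler's call stack can call `getRequestId()` (or its equivalent) to tag log lines, which is why no manual `withMetadata({ requestId })` is needed.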
'Documentation for Sim — the open-source platform to build AI agents and run your agentic workforce. Connect 1,000+ integrations and LLMs to deploy and orchestrate agentic workflows.',
url: 'https://docs.sim.ai',
'Documentation for Sim — the open-source AI workspace where teams build, deploy, and manage AI agents. Connect 1,000+ integrations and every major LLM.',
default: 'Sim Documentation — Build AI Agents & Run Your Agentic Workforce',
template: '%s',
default: 'Sim Documentation — The AI Workspace for Teams',
template: '%s | Sim Docs',
},
description:
'Documentation for Sim — the open-source platform to build AI agents and run your agentic workforce. Connect 1,000+ integrations and LLMs to deploy and orchestrate agentic workflows.',
'Documentation for Sim — the open-source AI workspace where teams build, deploy, and manage AI agents. Connect 1,000+ integrations and every major LLM.',
title:'Sim Documentation — Build AI Agents & Run Your Agentic Workforce',
title:'Sim Documentation — The AI Workspace for Teams',
description:
'Documentation for Sim — the open-source platform to build AI agents and run your agentic workforce. Connect 1,000+ integrations and LLMs to deploy and orchestrate agentic workflows.',
'Documentation for Sim — the open-source AI workspace where teams build, deploy, and manage AI agents. Connect 1,000+ integrations and every major LLM.',
title:'Sim Documentation — Build AI Agents & Run Your Agentic Workforce',
title:'Sim Documentation — The AI Workspace for Teams',
description:
'Documentation for Sim — the open-source platform to build AI agents and run your agentic workforce. Connect 1,000+ integrations and LLMs to deploy and orchestrate agentic workflows.',
'Documentation for Sim — the open-source AI workspace where teams build, deploy, and manage AI agents. Connect 1,000+ integrations and every major LLM.',
> The open-source platform to build AI agents and run your agentic workforce.
> The open-source AI workspace where teams build, deploy, and manage AI agents.
Sim is the open-source platform to build AI agents and run your agentic workforce. Connect 1,000+ integrations and LLMs to deploy and orchestrate agentic workflows. Create agents, workflows, knowledge bases, tables, and docs. Trusted by over 100,000 builders.
Sim is the open-source AI workspace where teams build, deploy, and manage AI agents. Connect 1,000+ integrations and every major LLM to create agents that automate real work — visually, conversationally, or with code. Trusted by over 100,000 builders.
## Documentation Overview
@@ -61,7 +62,7 @@ ${Object.entries(sections)
- Full documentation content: ${baseUrl}/llms-full.txt
@@ -17,7 +17,7 @@ export function StructuredData({
dateModified,
breadcrumb,
}: StructuredDataProps) {
const baseUrl = 'https://docs.sim.ai'
const baseUrl = DOCS_BASE_URL
const articleStructuredData = {
'@context': 'https://schema.org',
@@ -68,37 +68,15 @@ export function StructuredData({
})),
}
const websiteStructuredData = url === baseUrl && {
'@context': 'https://schema.org',
'@type': 'WebSite',
name: 'Sim Documentation',
url: baseUrl,
description:
'Documentation for Sim — the open-source platform to build AI agents and run your agentic workforce. Connect 1,000+ integrations and LLMs to deploy and orchestrate agentic workflows.',
'Sim is the open-source platform to build AI agents and run your agentic workforce. Connect 1,000+ integrations and LLMs to deploy and orchestrate agentic workflows. Create agents, workflows, knowledge bases, tables, and docs.',
'Sim is the open-source AI workspace where teams build, deploy, and manage AI agents. Connect 1,000+ integrations and every major LLM to create agents that automate real work.',
url: baseUrl,
author: {
'@type': 'Organization',
@@ -109,8 +87,9 @@ export function StructuredData({