* fix(trigger): auto-detect header row and rename lastKnownRowCount to lastIndexChecked
- Replace hardcoded !1:1 header fetch with detectHeaderRow(), which scans
the first 10 rows and returns the first non-empty row as headers. This
fixes row: null / headers: [] when a sheet has blank rows or a title row
above the actual column headers (e.g. headers in row 3).
- Rename lastKnownRowCount → lastIndexChecked in GoogleSheetsWebhookConfig
and all usage sites to clarify that the value is a row index pointer, not
a total count.
- Remove config parameter from processRows() since it was unused after the
includeHeaders flag was removed.
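The header-row scan described above can be sketched as follows. This is a hedged illustration, not the actual Sim implementation: the `SheetRow` type, the `HEADER_SCAN_LIMIT` constant, and the null return for an all-blank window are assumptions.

```typescript
type SheetRow = string[]

interface HeaderScanResult {
  headers: string[]
  headerRowIndex: number // 0-based index of the detected header row
}

const HEADER_SCAN_LIMIT = 10 // only the first 10 rows are scanned

/**
 * Treat the first non-empty row within the scan window as the header row,
 * so sheets with blank rows or a title row above the real column headers
 * (e.g. headers in row 3) still resolve instead of yielding headers: [].
 */
function detectHeaderRow(rows: SheetRow[]): HeaderScanResult | null {
  const limit = Math.min(rows.length, HEADER_SCAN_LIMIT)
  for (let i = 0; i < limit; i++) {
    const row = rows[i]
    if (row.some((cell) => cell.trim() !== '')) {
      return { headers: row, headerRowIndex: i }
    }
  }
  return null // no non-empty row within the scan window
}
```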
* fix(trigger): combine sheet state fetch, skip header/blank rows from data emission
- Replace separate getDataRowCount() + detectHeaderRow() with a single
fetchSheetState() call that returns rowCount, headers, and headerRowIndex
from one A:Z fetch. Saves one Sheets API round-trip per poll cycle when
new rows are detected.
- Use headerRowIndex to compute adjustedStartRow, preventing the header row
(and any blank rows above it) from being emitted as data events when
lastIndexChecked was seeded from an empty sheet.
- Handle the edge case where the entire batch falls within the header/blank
window by advancing the pointer and returning early without fetching rows.
- Skip empty rows (row.length === 0) in processRows rather than firing a
workflow run with no meaningful data.
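The header-window skip and the all-header-batch edge case can be sketched like this. Names (`adjustBatchForHeader`, `BatchWindow`) and the 1-based row numbering are illustrative assumptions; the null return stands in for "advance the pointer and return early".

```typescript
interface BatchWindow {
  startRow: number // first candidate data row (1-based), derived from lastIndexChecked
  endRow: number // last row in this batch
}

/**
 * Never let a batch start at or before the header row, so the header
 * (and blank rows above it) are not emitted as data events when the
 * pointer was seeded from an empty sheet.
 */
function adjustBatchForHeader(window: BatchWindow, headerRowIndex: number): BatchWindow | null {
  // headerRowIndex is 0-based; data starts on the next 1-based sheet row.
  const firstDataRow = headerRowIndex + 2
  const startRow = Math.max(window.startRow, firstDataRow)
  if (startRow > window.endRow) {
    // Entire batch falls inside the header/blank window: caller advances
    // the pointer and returns without fetching rows.
    return null
  }
  return { startRow, endRow: window.endRow }
}
```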
* fix(trigger): preserve lastModifiedTime when remaining rows exist after header skip
When all rows in a batch fall within the header/blank window (adjustedStartRow
> endRow), the early return was unconditionally updating lastModifiedTime to the
current value. If there were additional rows beyond the batch cap, the next
Drive pre-check would see an unchanged modifiedTime and skip polling entirely,
leaving those rows unprocessed. Mirror the hasRemainingOrFailed pattern from the
normal processing path.
* chore(trigger): remove verbose inline comments from google-sheets poller
* fix(trigger): revert to full-width A:Z fetch for correct row count and consistent column scope
* fix(trigger): don't count skipped empty rows as processed
* chore(triggers): deprecate trigger-save subblock
Remove the defunct triggerSave subblock from all 102 trigger definitions,
the SubBlockType union, SYSTEM_SUBBLOCK_IDS, tool params, and command
templates. Retain the backwards-compat filter in getTrigger() for any
legacy stored data.
* fix(triggers): remove leftover no-op blocks.push() in linear utils
* chore(triggers): remove orphaned triggerId property and stale comments
* feat(knowledge): add token, sentence, recursive, and regex chunkers
* fix(chunkers): standardize token estimation and use emcn dropdown
- Refactor all existing chunkers (Text, JsonYaml, StructuredData, Docs) to use shared utils
- Fix inconsistent token estimation (JsonYaml used tiktoken, StructuredData used /3 ratio)
- Fix DocsChunker operator precedence bug and hard-coded 300-token limit
- Fix JsonYamlChunker isStructuredData false positive on plain strings
- Add MAX_DEPTH recursion guard to JsonYamlChunker
- Replace @/components/ui/select with emcn DropdownMenu in strategy selector
* fix(chunkers): address research audit findings
- Expand RecursiveChunker recipes: markdown adds horizontal rules, code
fences, blockquotes; code adds const/let/var/if/for/while/switch/return
- RecursiveChunker fallback uses splitAtWordBoundaries instead of char slicing
- RegexChunker ReDoS test uses adversarial strings (repeated chars, spaces)
- SentenceChunker abbreviation list adds St/Rev/Gen/No/Fig/Vol/months
and single-capital-letter lookbehind
- Add overlap < maxSize validation in Zod schema and UI form
- Add pattern max length (500) validation in Zod schema
- Fix StructuredDataChunker footer grammar
* fix(chunkers): fix remaining audit issues across all chunkers
- DocsChunker: extract headers from cleaned content (not raw markdown)
to fix position mismatch between header positions and chunk positions
- DocsChunker: strip export statements and JSX expressions in cleanContent
- DocsChunker: fix table merge dedup using equality instead of includes
- JsonYamlChunker: preserve path breadcrumbs when nested value fits in
one chunk, matching LangChain RecursiveJsonSplitter behavior
- StructuredDataChunker: detect 2-column CSV (lowered threshold from >2
to >=1) and use 20% relative tolerance instead of absolute +/-2
- TokenChunker: use sliding window overlap (matching LangChain/Chonkie)
where chunks stay within chunkSize instead of exceeding it
- utils: splitAtWordBoundaries accepts optional stepChars for sliding
window overlap; addOverlap uses newline join instead of space
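The sliding-window overlap referenced above, in miniature. This sketch operates on a pre-tokenized array for clarity; the function name and signature are assumptions, not the TokenChunker's actual interface.

```typescript
/**
 * Produce chunks of at most chunkSize tokens, where each chunk begins
 * `overlap` tokens before the previous one ended. Unlike prepending
 * overlap text after the fact, chunks never exceed chunkSize.
 */
function slidingWindowChunks(tokens: string[], chunkSize: number, overlap: number): string[][] {
  if (chunkSize <= 0 || overlap >= chunkSize) {
    throw new Error('require 0 <= overlap < chunkSize')
  }
  const step = chunkSize - overlap // always >= 1, so the loop terminates
  const chunks: string[][] = []
  for (let start = 0; start < tokens.length; start += step) {
    chunks.push(tokens.slice(start, start + chunkSize))
    if (start + chunkSize >= tokens.length) break // input fully covered
  }
  return chunks
}
```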
* chore(chunkers): lint formatting
* updated styling
* fix(chunkers): audit fixes and comprehensive tests
- Fix SentenceChunker regex: lookbehinds now include the period to correctly handle abbreviations (Mr., Dr., etc.), initials (J.K.), and decimals
- Fix RegexChunker ReDoS: reset lastIndex between adversarial test iterations, add poisoned-suffix test strings
- Fix DocsChunker: skip code blocks during table boundary detection to prevent false positives from pipe characters
- Fix JsonYamlChunker: oversized primitive leaf values now fall back to text chunking instead of emitting a single chunk
- Fix TokenChunker: pass 0 to buildChunks for overlap metadata since sliding window handles overlap inherently
- Add defensive guard in splitAtWordBoundaries to prevent infinite loops if step is 0
- Add tests for utils, TokenChunker, SentenceChunker, RecursiveChunker, RegexChunker (236 total tests, 0 failures)
- Fix existing test expectations for updated footer format and isStructuredData behavior
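The `lastIndex` reset fixed above guards against a standard global-regex pitfall: a `/g` regex carries `lastIndex` across `.test()` calls, so reusing one across adversarial test iterations silently starts matching mid-string. A minimal illustration (the helper name is hypothetical):

```typescript
// Reset state before each iteration, as in the fix.
function testFresh(re: RegExp, input: string): boolean {
  re.lastIndex = 0
  return re.test(input)
}

// Without the reset, the second .test() on the same input starts at
// lastIndex = 2 and reports no match:
const stale = /\d+/g
stale.test('42') // true, advances lastIndex to 2
// stale.test('42') would now return false
```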
* chore(chunkers): remove unnecessary comments and dead code
Strip 445 lines of redundant TSDoc, math calculation comments,
implementation rationale notes, and assertion-restating comments
across all chunker source and test files.
* fix(chunkers): address PR review comments
- Fix regex fallback path: use sliding window for overlap instead of
passing chunkOverlap to buildChunks without prepended overlap text
- Fix misleading strategy label: "Text (hierarchical splitting)" →
"Text (word boundary splitting)"
* fix(chunkers): use consistent overlap pattern in regex fallback
Use addOverlap + buildChunks(chunks, overlap) in the regex fallback
path to match the main path and all other chunkers (TextChunker,
RecursiveChunker). The sliding window approach was inconsistent.
* fix(chunkers): prevent content loss in word boundary splitting
When splitAtWordBoundaries snaps end back to a word boundary, advance
pos from end (not pos + step) in non-overlapping mode. The step-based
advancement is preserved for the sliding window case (TokenChunker).
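The advancement fix above can be sketched as below. This is a simplified stand-in for the real `splitAtWordBoundaries` (the snapping heuristic and guard are assumptions): when the end snaps back to a word boundary, non-overlapping mode continues from `end` rather than `pos + step`, so the text between the snapped boundary and `pos + maxChars` is not dropped.

```typescript
function splitAtWordBoundaries(text: string, maxChars: number, stepChars?: number): string[] {
  if (maxChars <= 0) throw new Error('maxChars must be positive') // forward-progress guard
  const pieces: string[] = []
  let pos = 0
  while (pos < text.length) {
    let end = Math.min(pos + maxChars, text.length)
    if (end < text.length) {
      const lastSpace = text.lastIndexOf(' ', end)
      if (lastSpace > pos) end = lastSpace // snap back to a word boundary
    }
    pieces.push(text.slice(pos, end).trim())
    // Sliding-window mode (stepChars > 0) advances by a fixed step;
    // non-overlapping mode advances from `end`, preserving snapped-off text.
    pos = stepChars && stepChars > 0 ? pos + stepChars : end
  }
  return pieces
}
```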
* fix(chunkers): restore structured data token ratio and overlap joiner
- Restore /3 token estimation for StructuredDataChunker (structured data
is denser than prose, ~3 chars/token vs ~4)
- Change addOverlap joiner from \n to space to match original TextChunker
behavior
* lint
* fix(chunkers): fall back to character-level overlap in sentence chunker
When no complete sentence fits within the overlap budget,
fall back to character-level word-boundary overlap from the
previous group's text. This ensures buildChunks metadata is
always correct.
* fix(chunkers): fix log message and add missing month abbreviations
- Fix regex fallback log: "character splitting" → "word-boundary splitting"
- Add Jun and Jul to sentence chunker abbreviation list
* lint
* fix(chunkers): restore structured data detection threshold to > 2
avgCount >= 1 was too permissive — prose with consistent comma usage
would be misclassified as CSV. Restore original > 2 threshold while
keeping the improved proportional tolerance.
* fix(chunkers): pass chunkOverlap to buildChunks in TokenChunker
* fix(chunkers): restore separator-as-joiner pattern in splitRecursively
Separator was unconditionally prepended to parts after the first,
leaving leading punctuation on chunks after a boundary reset.
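One plausible reading of the separator-as-joiner pattern, sketched standalone (the function name is hypothetical): the separator is re-attached as a suffix of the part it follows, so a chunk that begins right after a boundary reset never starts with leading punctuation, and joining the parts reproduces the original text.

```typescript
function splitKeepingSeparator(text: string, separator: string): string[] {
  const raw = text.split(separator)
  return raw
    // Re-attach the separator to the preceding part, not the following one.
    .map((part, i) => (i < raw.length - 1 ? part + separator : part))
    .filter((part) => part.length > 0)
}
```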
* feat(knowledge): add JSONL file support for knowledge base uploads
Parses JSON Lines files by splitting on newlines and converting to a
JSON array, which then flows through the existing JsonYamlChunker.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* improvement(integrations, models): ui/ux
* fix(models, integrations): dedup ChevronArrow/provider colors, fix UTC date rendering
- Extract PROVIDER_COLORS and getProviderColor to model-colors.ts to eliminate
identical definitions in model-comparison-charts and model-timeline-chart
- Remove duplicate private ChevronArrow from integration-card; import the
exported one from model-primitives instead
- Add timeZone: 'UTC' to formatShortDate so ISO date-only strings (parsed as
UTC midnight) render the correct calendar day in all timezones
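The UTC fix in miniature: an ISO date-only string like `2024-03-01` parses as UTC midnight, so formatting it in a local timezone west of UTC renders the previous calendar day. The exact options of the real `formatShortDate` are assumptions; `timeZone: 'UTC'` is the fix.

```typescript
function formatShortDate(isoDate: string): string {
  return new Date(isoDate).toLocaleDateString('en-US', {
    month: 'short',
    day: 'numeric',
    year: 'numeric',
    timeZone: 'UTC', // render the calendar day the string encodes, in every zone
  })
}
```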
* refactor(models): rename model-colors.ts to consts.ts
* improvement(models): derive provider colors/resellers from definitions, reorient FAQs to agent builder
Dynamic data:
- Add `color` and `isReseller` fields to ProviderDefinition interface
- Move brand colors for all 10 providers into their definitions
- Mark 6 reseller providers (Azure, Bedrock, Vertex, OpenRouter, Fireworks)
- consts.ts now derives color map from MODEL_CATALOG_PROVIDERS
- model-comparison-charts derives RESELLER_PROVIDERS from catalog
- Fix deepseek name: Deepseek → DeepSeek; remove now-redundant
PROVIDER_NAME_OVERRIDES and getProviderDisplayName from utils
- Add color/isReseller fields to CatalogProvider; clean up duplicate
providerDisplayName in searchText array
FAQs:
- Replace all 4 main-page FAQs with 5 agent-builder-oriented ones
covering model selection, context windows, pricing, tool use, and
how to use models in a Sim agent workflow
- buildProviderFaqs: add conditional tool use FAQ per provider
- buildModelFaqs: add bestFor FAQ (conditional on field presence);
improve context window answer to explain agent implications;
tighten capabilities answer wording
* chore(models): remove model-colors.ts (superseded by consts.ts)
* update footer
---------
Co-authored-by: waleed <walif6@gmail.com>
* fix(trigger): fix polling trigger config defaults, row count, clock-skew, and stale config clearing
* fix(deploy): track first-pass fills to prevent stale baseConfig bypassing required-field validation
Use a dedicated `filledSubBlockIds` Set populated during the first pass so the second-pass skip guard is based solely on live `getConfigValue` results, not on stale entries spread from `baseConfig` (`triggerConfig`).
* fix(trigger): prevent calendar cursor regression when all events are filtered client-side
* fix(ui): support Tab key to select items in tag, env-var, and resource dropdowns
* fix(ui): guard Tab selection against Shift+Tab and undefined index
The Forms API has a different base URL for OAuth vs Basic Auth.
Per Atlassian support, OAuth requires the /ex/jira/{cloudId}/forms
pattern, not /jira/forms/cloud/{cloudId} which only works with
Basic Auth. This was causing 401 Unauthorized errors.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(trigger): show selector display names on canvas for trigger file/sheet selectors
* fix(trigger): use isNonEmptyValue in canonical member scan to match visibility contract
* feat(trigger): add Google Sheets, Drive, and Calendar polling triggers
Add polling triggers for Google Sheets (new rows), Google Drive (file
changes via changes.list API), and Google Calendar (event updates via
updatedMin). Each includes OAuth credential support, configurable
filters (event type, MIME type, folder, search term, render options),
idempotency, and first-poll seeding. Wire triggers into block configs
and regenerate integrations.json. Update add-trigger skill with polling
instructions and versioned block wiring guidance.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(polling): address PR review feedback for Google polling triggers
- Fix Drive cursor stall: use nextPageToken as resume point when
breaking early from pagination instead of re-using the original token
- Eliminate redundant Drive API call in Sheets poller by returning
modifiedTime from the pre-check function
- Add 403/429 rate-limit handling to Sheets API calls matching the
Calendar handler pattern
- Remove unused changeType field from DriveChangeEntry interface
- Rename triggers/google_drive to triggers/google-drive for consistency
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(polling): fix Drive pre-check never activating in Sheets poller
isDriveFileUnchanged short-circuited when lastModifiedTime was
undefined, never calling the Drive API — so currentModifiedTime
was never populated, creating a permanent chicken-and-egg loop.
Now always calls the Drive API and returns the modifiedTime
regardless of whether there's a previous value to compare against.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
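The shape of the pre-check after this fix can be sketched as follows. Names and types are assumptions; the point is that the Drive API is always called, so `modifiedTime` is always captured even on the first poll when there is no prior value to compare.

```typescript
interface PreCheckResult {
  unchanged: boolean
  modifiedTime: string // always populated, so stored state can be seeded
}

async function checkDriveFile(
  fetchModifiedTime: () => Promise<string>, // stand-in for the Drive API call
  lastModifiedTime?: string
): Promise<PreCheckResult> {
  // No short-circuit when lastModifiedTime is undefined: that was the
  // chicken-and-egg loop, since the value could then never be seeded.
  const modifiedTime = await fetchModifiedTime()
  return {
    unchanged: lastModifiedTime !== undefined && modifiedTime === lastModifiedTime,
    modifiedTime,
  }
}
```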
* chore(lint): fix import ordering in triggers registry
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(polling): address PR review feedback for Google polling handlers
- Fix fetchHeaderRow to throw on 403/429 rate limits instead of silently
returning empty headers (prevents rows from being processed without
headers and lastKnownRowCount from advancing past them permanently)
- Fix Drive pagination to avoid advancing resume cursor past sliced
changes (prevents permanent change loss when allChanges > maxFiles)
- Remove unused logger import from Google Drive trigger config
* fix(polling): prevent data loss on partial row failures and harden idempotency key
- Sheets: only advance lastKnownRowCount by processedCount when there
are failures, so failed rows are retried on the next poll cycle
(idempotency deduplicates already-processed rows on re-fetch)
- Drive: add fallback for change.time in idempotency key to prevent
key collisions if the field is ever absent from the API response
* fix(polling): remove unused variable and preserve lastModifiedTime on Drive API failure
- Remove unused `now` variable from Google Drive polling handler
- Preserve stored lastModifiedTime when Drive API pre-check fails
(previously wrote undefined, disabling the optimization until the
next successful Drive API call)
* fix(polling): don't advance state when all events fail across sheets, calendar, drive handlers
* fix(polling): retry failed idempotency keys, fix drive cursor overshoot, fix calendar inclusive updatedMin
* fix(polling): revert calendar timestamp on any failure, not just all-fail
* fix(polling): revert drive cursor on any failure, not just all-fail
* feat(triggers): add canonical selector toggle to google polling triggers
- Add 'trigger-advanced' mode to SubBlockConfig so canonical pairs work in trigger mode
- Fix buildCanonicalIndex: trigger-mode subblocks don't overwrite non-trigger basicId, deduplicate advancedIds from block spreads
- Update editor, subblock layout, and trigger config aggregation to include trigger-advanced subblocks
- Replace dropdown+fetchOptions in Calendar/Sheets/Drive pollers with file-selector (basic) + short-input (advanced) canonical pairs
- Add canonicalParamId: 'oauthCredential' to triggerCredentials for selector context resolution
- Update polling handlers to read canonical fallbacks (calendarId||manualCalendarId, etc.)
* test(blocks): handle trigger-advanced mode in canonical validation tests
* fix(triggers): handle trigger-advanced mode in deploy, preview, params, and copilot
* fix(polling): use position-only idempotency key for sheets rows
* fix(polling): don't advance calendar timestamp to client clock on empty poll
* fix(polling): remove extraneous comment from calendar poller
* fix(polling): drive cursor stall on full page, calendar latestUpdated past filtered events
* fix(polling): advance calendar cursor past fully-filtered event batches
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(tools): add fields parameter to Jira search block
Expose the Jira REST API `fields` parameter on the search operation,
allowing users to specify which fields to return per issue. This reduces
response payload size by 10-15x, preventing 10MB workflow state limit
errors for users with high ticket volume.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* style(tools): remove redundant type annotation in fields map callback
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tools): restore type annotation for implicit any in params callback
The params object is untyped, so TypeScript cannot infer the string
element type from .split() — the explicit annotation is required.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add the generated human-in-the-loop group to the docs navigation
and create meta.json listing all HITL operation IDs so endpoints
render in the API reference.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(log): log cleanup sql query
* perf(log): use startedAt index for cleanup query filter
Switch cleanup WHERE clause from createdAt to startedAt to leverage
the existing composite index (workspaceId, startedAt), converting a
full table scan to an index range scan. Also remove explanatory comment.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Theodore Li <theo@sim.ai>
Co-authored-by: Waleed Latif <walif6@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Update parseJsmErrorMessage to extract errors from all Atlassian API
response formats: errorMessage (JSM), errorMessages array (Jira),
errors[].title RFC 7807 (Confluence/Forms), field-level errors object,
and message (gateway). Remove redundant prefix wrapping so the raw
error message surfaces cleanly through the extractor.
* fix(tools): add Atlassian error extractor to all Jira, JSM, and Confluence tools
Wire up the existing `atlassian-errors` error extractor to all 95 Atlassian
tool configs so the executor surfaces meaningful error messages instead of
generic status codes. Also fix the extractor itself to handle all three
Atlassian error response formats: `errorMessage` (JSM), `errorMessages`
array (Jira), and `message` (Confluence).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* chore(tools): lint formatting fix for error extractor
* fix(tools): handle all Atlassian error formats in error extractor
Add RFC 7807 errors[].title format (Confluence v2, Forms/ProForma API)
and Jira field-level errors object to the atlassian-errors extractor.
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* improvement(ci): parallelize Docker builds with tests and remove duplicate turbo install
* fix(test): use SecureFetchResponse shape in mock instead of standard Response
* chore(ci): bump actions/checkout to v6 and dorny/paths-filter to v4
* fix(ci): mock secureFetchWithPinnedIP in tools tests to prevent timeouts
* lint
* docs(openapi): add Human in the Loop API endpoints
Add HITL pause/resume endpoints to the OpenAPI spec covering
the full workflow pause lifecycle: listing paused executions,
inspecting pause details, and resuming with input.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* docs(openapi): add 403 and 500 responses to HITL endpoints
Address PR review feedback: add missing 403 Forbidden response
to all HITL endpoints (from validateWorkflowAccess), and 500
responses to resume endpoints that have explicit error paths.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* lint
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(trigger): add ServiceNow webhook triggers
* fix(trigger): add webhook secret field and remove non-TSDoc comment
Add webhookSecret field to ServiceNow triggers (matching Salesforce pattern)
so users are prompted to protect the webhook endpoint. Update setup
instructions to include Authorization header in the Business Rule example.
Remove non-TSDoc inline comment in the block config.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat(trigger): add ServiceNow provider handler with event matching
Add dedicated ServiceNow webhook provider handler with:
- verifyAuth: validates webhookSecret via Bearer token or X-Sim-Webhook-Secret
- matchEvent: filters events by trigger type and table name using
isServiceNowEventMatch utility (matching Salesforce/GitHub pattern)
The event matcher handles incident created/updated and change request
created/updated triggers with table name enforcement and event type
normalization. The generic webhook trigger passes through all events
but still respects the optional table name filter.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* lint
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(jsm): add ProForma/JSM Forms discovery tools
Add three new tools for discovering and inspecting JSM Forms (ProForma) templates
and their structure, enabling dynamic form-based workflows:
- jsm_get_form_templates: List form templates in a project with request type bindings
- jsm_get_form_structure: Get full form design (questions, layout, conditions, sections)
- jsm_get_issue_forms: List forms attached to an issue with submission status
All endpoints validated against the official Atlassian Forms REST API OpenAPI spec.
Uses the Forms Cloud API base URL (jira/forms/cloud/{cloudId}) with X-ExperimentalApi header.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(jsm): add input validation and extract shared error parser
- Add validateJiraIssueKey for projectIdOrKey in templates and structure routes
- Add validateJiraCloudId for formId (UUID) in structure route
- Extract parseJsmErrorMessage to shared utils.ts (was duplicated across 3 routes)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* chore(jsm): remove unused FORM_QUESTION_PROPERTIES constant
Dead code — the get_form_structure tool passes the raw design object
through as JSON, so this output constant had no consumers.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* improvement(polling): fix correctness and efficiency across all polling handlers
- Gmail: paginate history API, add historyTypes filter, differentiate 403/429,
fetch fresh historyId on fallback to break 404 retry loop
- Outlook: follow @odata.nextLink pagination, use fetchWithRetry for all Graph
calls, fix $top alignment, skip folder filter on partial resolution failure,
remove Content-Type from GET requests
- RSS: add conditional GET (ETag/If-None-Match), raise GUID cap to 500, fix 304
ETag capture per RFC 9111, align GUID tracking with idempotency fallback key
- IMAP: single connection reuse, UIDVALIDITY tracking per mailbox, advance UID
only on successful fetch, fix messageFlagsAdd range type, remove cross-mailbox
legacy UID fallback
- Dispatch polling via trigger.dev task with per-provider concurrency key;
fall back to synchronous Redis-locked polling for self-hosted
* fix(rss): align idempotency key GUID fallback with tracking/filter guard
* removed comments
* fix(imap): clear stale UID when UIDVALIDITY changes during state merge
* fix(rss): skip items with no identifiable GUID to avoid idempotency key collisions
* fix(schedules): convert dynamic import of getWorkflowById to static import
* fix(imap): preserve fresh UID after UIDVALIDITY reset in state merge
* improvement(polling): remove trigger.dev dispatch, use synchronous Redis-locked polling
* fix(polling): decouple outlook page size from total email cap so pagination works
* feat(block): Add cloudwatch publish operation
* fix(integrations): validate and fix cloudwatch, cloudformation, athena conventions
- Update tool version strings from '1.0' to '1.0.0' across all three integrations
- Add missing `export * from './types'` barrel re-exports (cloudwatch, cloudformation)
- Add docsLink, wandConfig timestamps, mode: 'advanced' on optional fields (cloudwatch)
- Add dropdown defaults, ZodError handling, docs intro section (cloudwatch)
- Add mode: 'advanced' on limit field (cloudformation)
- Alphabetize registry entries (cloudwatch, cloudformation)
- Fix athena docs maxResults range (1-999)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(cloudwatch): complete put_metric_data unit dropdown, add missing outputs, fix JSON error handling
- Add all 27 valid CloudWatch StandardUnit values to metricUnit dropdown (was 13)
- Add missing block outputs for put_metric_data: success, namespace, metricName, value, unit
- Add try-catch around dimensions JSON.parse in put-metric-data route for proper 400 errors
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(cloudwatch): fix DescribeAlarms returning only MetricAlarm when "All Types" selected
Per AWS docs, omitting AlarmTypes returns only MetricAlarm. Now explicitly
sends both MetricAlarm and CompositeAlarm when no filter is selected.
Also fix dimensions JSON parse errors returning 500 instead of 400 in
get-metric-statistics route.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(cloudwatch): validate dimensions JSON at Zod schema level
Move dimensions validation from runtime try-catch to Zod refinement,
catching malformed JSON and arrays at schema validation time (400)
instead of runtime (500). Also rejects JSON arrays that would produce
meaningless numeric dimension names.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
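The refinement's predicate, in plain form (in the actual change this runs inside a Zod `.refine()`; the standalone function here is for illustration): parse the dimensions JSON at validation time and reject both malformed JSON and arrays, whose numeric indices would become meaningless dimension names.

```typescript
function isValidDimensionsJson(input: string): boolean {
  try {
    const parsed = JSON.parse(input)
    // Must be a plain object; arrays would yield numeric "names" like "0", "1".
    return typeof parsed === 'object' && parsed !== null && !Array.isArray(parsed)
  } catch {
    return false // malformed JSON fails schema validation (400) instead of throwing at runtime (500)
  }
}
```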
* fix(cloudwatch): reject non-numeric metricValue instead of silently publishing 0
Add NaN guard in block config and .finite() refinement in Zod schema
so "abc" → NaN is caught at both layers instead of coercing to 0.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(cloudwatch): use Number.isFinite to also reject Infinity in block config
Aligns block-level validation with route's Zod .finite() refinement so
Infinity/-Infinity are caught at the block config layer, not just the API.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
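The block-layer guard in miniature: `Number('abc')` is `NaN` and `Number('Infinity')` is `Infinity`; `Number.isFinite` rejects both, where a bare `NaN` check lets `Infinity` through and silent coercion would publish 0. The function name is illustrative.

```typescript
function parseMetricValue(raw: string): number {
  const value = Number(raw)
  if (!Number.isFinite(value)) {
    // Rejects NaN, Infinity, and -Infinity in one check.
    throw new Error(`metricValue must be a finite number, got "${raw}"`)
  }
  return value
}
```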
---------
Co-authored-by: Theodore Li <teddy@zenobiapay.com>
Co-authored-by: Waleed Latif <walif6@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(billing): skip billing on streamed workflows with byok
* Simplify logic
* Address comments, skip tokenization billing fallback
* Fix tool usage billing for streamed outputs
* fix(webhook): throw webhook errors as 4xxs (#4050)
* fix(webhook): throw webhook errors as 4xxs
* Fix shadowing body var
---------
Co-authored-by: Theodore Li <theo@sim.ai>
* feat(enterprise): cloud whitelabeling for enterprise orgs (#4047)
* feat(enterprise): cloud whitelabeling for enterprise orgs
* fix(enterprise): scope enterprise plan check to target org in whitelabel PUT
* fix(enterprise): use isOrganizationOnEnterprisePlan for org-scoped enterprise check
* fix(enterprise): allow clearing whitelabel fields and guard against empty update result
* fix(enterprise): remove webp from logo accept attribute to match upload hook validation
* improvement(billing): use isBillingEnabled instead of isProd for plan gate bypasses
* fix(enterprise): show whitelabeling nav item when billing is enabled on non-hosted environments
* fix(enterprise): accept relative paths for logoUrl since upload API returns /api/files/serve/ paths
* fix(whitelabeling): prevent logo flash on refresh by hiding logo while branding loads
* fix(whitelabeling): wire hover color through CSS token on tertiary buttons
* fix(whitelabeling): show sim logo by default, only replace when org logo loads
* fix(whitelabeling): cache org logo url in localstorage to eliminate flash on repeat visits
* feat(whitelabeling): add wordmark support with drag/drop upload
* updated turbo
* fix(whitelabeling): defer localstorage read to effect to prevent hydration mismatch
* fix(whitelabeling): use layout effect for cache read to eliminate logo flash before paint
* fix(whitelabeling): cache theme css to eliminate color flash before org settings resolve
* fix(whitelabeling): deduplicate HEX_COLOR_REGEX into lib/branding and remove mutation from useCallback deps
* fix(whitelabeling): use cookie-based SSR cache to eliminate brand flash on all page loads
* fix(whitelabeling): use !orgSettings condition to fix SSR brand cache injection
React Query returns isLoading: false with data: undefined during SSR, so the
previous brandingLoading condition was always false on the server — initialCache
was never injected into brandConfig. Changing to !orgSettings correctly applies
the cookie cache both during SSR and while the client-side query loads, eliminating
the logo flash on hard refresh.
* fix(editor): stop highlighting start.input as blue when block is not connected to starter (#4054)
* fix: merge subblock values in auto-layout to prevent losing router context (#4055)
Auto-layout was reading from getWorkflowState() without merging subblock
store values, then persisting stale subblock data to the database. This
caused runtime-edited values (e.g. router_v2 context) to be overwritten
with their initial/empty values whenever auto-layout was triggered.
* fix(whitelabeling): eliminate logo flash by fetching org settings server-side (#4057)
* fix(whitelabeling): eliminate logo flash by fetching org settings server-side
* improvement(whitelabeling): add SVG support for logo and wordmark uploads
* add skeleton loader in workspace header
* remove dead code
* fix(whitelabeling): hydration error, SVG support, skeleton shimmer, dead code removal
* fix(whitelabeling): blob preview dep cycle and missing color fallback
* fix(whitelabeling): use brand-accent as color fallback when workspace color is undefined
* chore(whitelabeling): inline hasOrgBrand
---------
Co-authored-by: Theodore Li <theo@sim.ai>
* fix(kb): improve error logging when connector token resolution fails
The generic "Failed to obtain access token" error hid the actual root cause.
Now logs credentialId, userId, authMode, and provider to help diagnose
token refresh failures in trigger.dev.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat(kb): disable connectors after 10 consecutive sync failures
Connectors that fail 10 times in a row are set to 'disabled' status,
stopping the cron from scheduling further syncs. The UI shows an alert
triangle with a reconnect banner. Users can re-enable via the play
button or by reconnecting their account, which resets failures.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(kb): disable sync button for disabled connectors, use amber badge variant
Sync button should be disabled when connector is in disabled state to
guide users toward reconnecting first. Badge variant changed from red
to amber to match the warning banner styling.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(kb): address PR review comments for disabled connector feature
- Use `=== undefined` instead of falsy check for nextSyncAt to preserve
explicit null (manual sync only) when syncIntervalMinutes is 0
- Gate Reconnect button on serviceId/providerId so it only renders for
OAuth connectors; show appropriate copy for API key connectors
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(kb): move resolveAccessToken inside try/catch for circuit-breaker coverage
Token resolution failures (e.g. revoked OAuth tokens) were thrown before
the try/catch block, bypassing consecutiveFailures tracking entirely.
Also removes dead `if (refreshed)` guards at mid-sync refresh sites since
resolveAccessToken now always returns a string or throws.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(kb): remove dead interval branch when re-enabling connector
When `updates.nextSyncAt === undefined`, syncIntervalMinutes was not in
the request, so `parsed.data.syncIntervalMinutes` is always undefined.
Simplify to just schedule an immediate sync — the sync engine sets the
proper nextSyncAt based on the connector's DB interval after completion.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>