The bundle was regenerated non-deterministically during development (same
pptxgenjs 4.0.1, different variable names in minifier output). No functional
change — restore the prior version to keep the diff clean.
- csp.ts: revert bare https: from img-src — it defeats the existing
domain allowlist and opens info-leakage vectors
- files/page.tsx + files/[fileId]/page.tsx: add explicit fallback={null}
to <Suspense> to make intent clear (React defaults to null, but
omitting it looks like an oversight)
- preview-panel.tsx: restore pre passthrough in STATIC_MARKDOWN_COMPONENTS
so Streamdown's wrapping <pre> doesn't nest inside the custom code
block <div>, which produced invalid HTML and broken styling
- file-viewer.tsx: add 'webm' to VIDEO_PREVIEWABLE_EXTENSIONS to match
'video/webm' in VIDEO_PREVIEWABLE_MIME_TYPES
- Use toError() from @sim/utils/errors across all catch blocks in
file-viewer.tsx, preview-panel.tsx, and route.ts instead of the
prohibited `err instanceof Error ? err.message : fallback` pattern
- Fix loading skeleton in files.tsx: bg-white → bg-[var(--surface-2)]
and shadow-[var(--shadow-medium)] → shadow-medium
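The toError() helper lives in @sim/utils/errors and isn't reproduced here; a minimal sketch of the shape such a normalizer typically takes (hypothetical; the real implementation may differ in detail):

```typescript
// Hypothetical sketch of a toError-style normalizer; the actual
// @sim/utils/errors implementation may differ.
function toError(value: unknown): Error {
  if (value instanceof Error) return value
  // Normalize whatever was thrown into a real Error so callers can rely
  // on .message and .stack being present.
  if (typeof value === 'string') return new Error(value)
  try {
    return new Error(JSON.stringify(value))
  } catch {
    return new Error(String(value))
  }
}

// Usage in a catch block, replacing the prohibited
// `err instanceof Error ? err.message : fallback` pattern:
try {
  throw 'boom'
} catch (err) {
  const e = toError(err)
  console.log(e.message) // "boom"
}
```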
## Core architectural fix
Move all react-pdf / pdfjs-dist code into a new pdf-viewer.tsx module and
import it exclusively via next/dynamic({ ssr: false }). pdfjs-dist v5
references DOMMatrix at module evaluation time, which crashed SSR. The
previous workaround (a DOMMatrix polyfill in instrumentation.ts) is removed
in favour of this proper hard module boundary.
## PDF viewer improvements
- Cursor-anchored zoom: Ctrl/⌘+wheel and trackpad-pinch now zoom toward the
cursor instead of the top-left corner. Toolbar ± buttons anchor to the
viewport centre. Uses the canonical scroll-adjust formula familiar from
map and canvas viewers.
- Horizontal scroll: dropping flex-col from the scroll container lets the
zoomed pages wrapper overflow naturally and produces a horizontal scrollbar
at zoom > 1×.
- Loading skeleton: replaced the conditional inline skeleton with an
absolute inset-0 overlay so it fills the scroll container correctly in all
layout contexts.
- Shadow tokens: fixed shadow-[var(--shadow-medium)] and
shadow-[var(--shadow-card)] to use the Tailwind utility classes
shadow-medium and shadow-card directly.
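The scroll-adjust formula behind the cursor-anchored zoom above can be sketched as follows (an illustration with hypothetical names, not the actual pdf-viewer.tsx code). The content point under the cursor is (scroll + cursor) / oldScale; keeping that point under the cursor after rescaling yields the new scroll offsets:

```typescript
interface ScrollState {
  scrollLeft: number
  scrollTop: number
}

// Keep the content point under the cursor stationary across a zoom step.
// cursorX/cursorY are relative to the scroll container's viewport.
function zoomScrollAdjust(
  s: ScrollState,
  cursorX: number,
  cursorY: number,
  oldScale: number,
  newScale: number
): ScrollState {
  const ratio = newScale / oldScale
  return {
    scrollLeft: (s.scrollLeft + cursorX) * ratio - cursorX,
    scrollTop: (s.scrollTop + cursorY) * ratio - cursorY,
  }
}
```

The toolbar ± buttons can reuse the same function by passing the viewport centre as the cursor point.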
## File viewer cleanup
- data-table.tsx: wrap setInputRef in useCallback with an empty dependency
  array so the ref callback has a stable identity across renders. Previously
  the inline function got a new identity on every keystroke (because
  editValue state changed), causing React to tear down and reattach the ref
  and re-run node.select() on every character typed.
- preview-panel.tsx: keep useMemo on ctxValue passed to Context.Provider —
Context uses Object.is, so a new object every render causes unnecessary
consumer re-renders.
- resource-content.tsx: remove unnecessary useCallback/useMemo wrappers on
handlers and derived values that have no memoization observers.
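The ref-identity problem fixed in data-table.tsx can be illustrated with a simplified, framework-free model of React's ref-callback protocol (this models the behavior; it is not React itself). When a ref callback's identity changes between renders, React calls the old callback with null and the new one with the node, re-running side effects like select():

```typescript
// Simplified model of React's ref-callback protocol (not React itself).
type DomNode = { select(): void }
type RefCb = (node: DomNode | null) => void

function simulateRenders(makeCb: (render: number) => RefCb, renders: number): number {
  let selects = 0
  const node: DomNode = {
    select: () => {
      selects += 1
    },
  }
  let prev: RefCb | null = null
  for (let i = 0; i < renders; i += 1) {
    const next = makeCb(i)
    if (next !== prev) {
      prev?.(null) // old callback is detached with null
      next(node) // new callback is attached; side effects re-run
      prev = next
    }
  }
  return selects
}

// Unstable: a fresh arrow function per render, like the old inline ref.
const unstableSelects = simulateRenders(() => (n) => n?.select(), 5)

// Stable: one identity for all renders, like useCallback(..., []).
const stableCb: RefCb = (n) => n?.select()
const stableSelects = simulateRenders(() => stableCb, 5)
```

With the unstable callback, select() re-runs on every simulated render; with the stable one, it runs once.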
## API route
- Wrap content route with withRouteHandler for automatic request-ID tracking
via AsyncLocalStorage; remove manual generateRequestId() calls.
- Add resourceName to audit record; add encoding param support (base64 /
utf-8).
## Query hooks
- Include key (storage object key) in both useWorkspaceFileContent and
useWorkspaceFileBinary query key tuples so the cache is correctly busted
when a file is re-uploaded with a new storage key.
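The query-key change can be sketched as follows, assuming TanStack Query-style key tuples (the helper and field names here are illustrative, not the real hook code). Including the storage object key in the tuple means a re-upload that changes the key maps to a different cache entry instead of serving stale content:

```typescript
// Illustrative key builder; the real hooks may shape their keys differently.
type FileRef = { workspaceId: string; fileId: string; key: string }

function workspaceFileContentKey(f: FileRef) {
  return ['workspace-file-content', f.workspaceId, f.fileId, f.key] as const
}

// Same fileId, new storage key after re-upload: distinct cache entries.
const before = workspaceFileContentKey({ workspaceId: 'w1', fileId: 'f1', key: 'uploads/a1' })
const after = workspaceFileContentKey({ workspaceId: 'w1', fileId: 'f1', key: 'uploads/b2' })
```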
## Other
- Add Suspense boundaries to files/page.tsx and files/[fileId]/page.tsx
(required for useSearchParams inside the Files component).
- Add mmd to SUPPORTED_CODE_EXTENSIONS (Mermaid diagrams).
- Add https: to CSP img-src.
- Remove ==== separator comments from lib/copilot/constants.ts.
- New dependencies: pdfjs-dist 5.4.296, mermaid 11.14.0,
monaco-editor 0.55.1, @monaco-editor/react 4.7.0.
* improvement(browser-use,stagehand): expose live session URLs and align with latest API specs
- Browser Use: switch to v2 camelCase schema, fetch live URL from sessions endpoint, add startUrl/maxSteps/allowedDomains/vision/flashMode/thinking/systemPromptExtension/structuredOutput/metadata params, surface liveUrl/shareUrl/sessionId outputs
- Stagehand: fetch Browserbase debug URL, add mode/maxSteps params, surface liveViewUrl/sessionId outputs, bump @browserbasehq/stagehand to ^3.2.1, update to claude-sonnet-4-6
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(browser-use): respect API default for highlightElements
Only send highlightElements when user explicitly toggles it; previously defaulted to true which silently overrode the v2 API default of false.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(browser-use,stagehand): address PR review feedback
- Browser Use: fetch liveUrl during polling once sessionId is known, instead of immediately after task creation. Handles tasks started without profile_id (where sessionId isn't returned in create response) and ensures session is active before fetching.
- Stagehand: coerce empty/whitespace maxSteps strings to undefined so they're dropped from the request body instead of failing zod validation as ''.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(stagehand): preserve liveViewUrl and sessionId on agent error
If the agent throws after Browserbase session init succeeds, callers can still surface the live view / session ID for debugging.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(browser-use): coerce empty maxSteps strings to undefined
Mirrors the Stagehand block's handling so a cleared field doesn't pass through as ''.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(browser-use): skip metadata when empty
---------
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
* feat(integrations): SAP S/4HANA tools, block, and proxy with multi-deployment support
* fix(sap_s4hana): address PR review comments
- Validate baseUrl/tokenUrl in Zod schema and at runtime to prevent SSRF
(https-only, deny loopback/link-local/cloud-metadata hosts)
- Cap proxy token cache at 500 entries with LRU eviction
- Add 30s timeout to outbound token, CSRF, and OData fetches
- Make parseJsonInput return T | undefined so missing input is type-safe
- Reset authType when deploymentType changes and surface OAuth fields
whenever auth is not basic, so cloud_public users always see clientId/
clientSecret after switching from a basic-auth private deployment
- Reject OData service names that are not uppercase identifiers and
paths containing ".." or "." traversal segments
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(sap_s4hana): allow versioned service names; tighten proxy SSRF defenses
- Permit ";v=NNNN" suffix on ServiceName regex so the four delivery tools
(API_OUTBOUND_DELIVERY_SRV;v=0002, API_INBOUND_DELIVERY_SRV;v=0002) pass
schema validation
- Restrict subdomain to RFC 1123 label characters and region to lowercase
alphanumeric short codes; run the constructed cloud_public host through
assertSafeExternalUrl so a crafted subdomain (e.g. "evil.com#") cannot
redirect requests carrying SAP credentials
- Block RFC-1918 (10/8, 172.16/12, 192.168/16), 127/8, 169.254/16, and
0.0.0.0 via isPrivateIPv4, plus IPv4-mapped IPv6 variants
(::ffff:10.0.0.1, ::10.0.0.1) so private internal hosts cannot be
reached from baseUrl, tokenUrl, or the resolved cloud_public URL
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(sap_s4hana): catch hex-form IPv4-mapped IPv6 in SSRF check
The WHATWG URL parser normalizes IPv4-mapped IPv6 addresses to hex form
(e.g. [::ffff:169.254.169.254] → [::ffff:a9fe:a9fe]), which slipped past
the dotted-decimal-only extractor. Decode the trailing two 16-bit hex
groups back into IPv4 octets and run them through isPrivateIPv4. Also
add isPrivateOrLoopbackIPv6 so pure IPv6 loopback (::, ::1), unique
local addresses (fc00::/7), and link-local (fe80::/10) cannot be reached.
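A minimal sketch of the hex-form decode (illustrative, not the exact proxy code): the trailing two 16-bit groups of an IPv4-mapped address are split back into four octets so the dotted-decimal private-range check still applies.

```typescript
// Decode a hex-form IPv4-mapped IPv6 host back to dotted-decimal IPv4.
// e.g. [::ffff:a9fe:a9fe] -> "169.254.169.254". Returns null for
// anything that isn't an IPv4-mapped address.
function ipv4FromMappedHex(host: string): string | null {
  const m = host.match(/^\[?::ffff:([0-9a-f]{1,4}):([0-9a-f]{1,4})\]?$/i)
  if (!m) return null
  const hi = parseInt(m[1], 16) // upper 16 bits -> first two octets
  const lo = parseInt(m[2], 16) // lower 16 bits -> last two octets
  return [hi >> 8, hi & 0xff, lo >> 8, lo & 0xff].join('.')
}
```

The decoded string can then be run through the existing isPrivateIPv4-style check.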
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(sap_s4hana): scope CSRF metadata fetch and isolate token cache by secret
- buildOdataUrl skips request query params when called with an internal
pathOverride so the /$metadata CSRF probe never carries user OData
options ($filter, $top, $select), which were causing write operations
through the generic odata_query tool to fail.
- tokenCacheKey now mixes a sha256 hash of clientSecret into the cache
key so two tenants sharing the same tokenUrl + clientId but different
secrets get isolated entries (no cross-tenant token leak).
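The cache-key scheme might look like the following sketch (names are illustrative; the real proxy code may differ). Hashing the clientSecret means tenants sharing a tokenUrl and clientId but holding different secrets get separate entries, and the raw secret never appears in the key:

```typescript
import { createHash } from 'node:crypto'

// Illustrative token-cache key; mixes a sha256 of the client secret into
// the key so distinct secrets cannot share a cached token.
function tokenCacheKey(tokenUrl: string, clientId: string, clientSecret: string): string {
  const secretHash = createHash('sha256').update(clientSecret).digest('hex')
  return `${tokenUrl}|${clientId}|${secretHash}`
}
```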
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(sap_s4hana): reject ?/# in service path; trim long update tool descriptions
- ServicePath validator now rejects "?" and "#" so a caller can't smuggle
query options through the path field (e.g.,
"/A_BusinessPartner?$format=atomsvc"); the Zod refine now reports
".." / "." segments, "?", and "#" together.
- Update Customer / Update Supplier / Update Purchase Requisition tool
descriptions exceeded the docs generator's 600-char regex window, so
they were rendering with empty descriptions on the integrations
landing page. Trimmed them to fit while keeping the limited-fields
note and the If-Match guidance, then regenerated integrations.json
and tool docs.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(sap_s4hana): reject percent-encoded path traversal; widen Set-Cookie split
- ServicePath now also rejects %2e/%2E, %2f/%2F, %5c/%5C, %3f/%3F, %23
so a caller cannot smuggle ".." / "." / "/" / "\" / "?" / "#" past the
validator and have SAP's ABAP/ICM gateway decode them server-side.
- joinSetCookies fallback regex now allows the ", " separator that's
used when multiple Set-Cookie values are folded onto one header line
(older runtimes without Headers.getSetCookie). Prevents CSRF cookies
from being concatenated into a single value during write operations.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(sap_s4hana): preserve $ in OData query params; reject empty items array
- buildOdataUrl now constructs query strings manually with
encodeURIComponent and restores literal "$" so OData system options
($filter, $top, $select, $expand, $orderby, $skip, $format) reach
SAP and any intermediary proxies/WAFs as-is, not as "%24filter".
URLSearchParams was percent-encoding "$" to "%24" which most ICMs
decode but some intermediaries silently drop, returning unfiltered
results.
- create_sales_order now rejects an empty items array (matches
create_purchase_requisition) so callers get a clear client-side
error instead of an opaque SAP validation failure on the deep-insert.
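A minimal sketch of the manual query-string construction (illustrative; the real buildOdataUrl handles more cases): each pair is encoded with encodeURIComponent, then the literal "$" is restored so OData system options survive intermediaries:

```typescript
// Build an OData query string with literal "$" preserved.
// encodeURIComponent escapes "$" to "%24"; restore it afterwards. Note
// this also restores "$" inside values, which matches OData usage here.
function buildODataQuery(params: Record<string, string>): string {
  return Object.entries(params)
    .map(([k, v]) =>
      `${encodeURIComponent(k)}=${encodeURIComponent(v)}`.replace(/%24/g, '$')
    )
    .join('&')
}
```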
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(sap_s4hana): ignore baseUrl on cloud_public to prevent token redirection
Why: resolveHost previously preferred baseUrl unconditionally. A caller
sending deploymentType=cloud_public with a baseUrl pointing elsewhere
would obtain a real SAP UAA token, then forward it as Bearer to the
attacker host. Zod superRefine did not validate baseUrl for cloud_public.
Fix: resolveHost now constructs the SAP host from subdomain when
deploymentType is cloud_public and only uses baseUrl for cloud_private
and on_premise (where it is already SSRF-checked in superRefine).
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(icons): use useId for SapS4HanaIcon and PipedriveIcon gradients
Why: hardcoded SVG gradient/mask IDs collide when an icon renders more
than once on a page (e.g. integrations listing). All other icons in this
file use React's useId() — these were inconsistent.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* icons
* fix(icons): use useId for AWS-style icon gradients
Why: IAMIcon, IdentityCenterIcon, STSIcon, SESIcon, and SecretsManagerIcon
all used hardcoded `id='xxxGradient'` values that collide when an icon
renders more than once on a page (e.g. integrations listing).
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(sap_s4hana): ignore tokenUrl on cloud_public to prevent UAA redirection
Why: resolveTokenUrl previously honored caller-supplied tokenUrl
regardless of deploymentType, mirroring the same redirection class as
the prior baseUrl bug. A cloud_public caller could send tokenUrl to an
attacker host, causing the proxy to POST clientId:clientSecret as Basic
auth to it. superRefine for cloud_public did not validate tokenUrl.
Fix: derive UAA URL from subdomain+region for cloud_public; only honor
tokenUrl for cloud_private/on_premise (already SSRF-checked).
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(icons): remove unused mask in PipedriveIcon
Why: the <mask> element had no consumer (no mask='url(#...)' anywhere
in the SVG), so both it and the maskId variable were dead code.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
* Avoid bun memory leak bug from TransformStream
* fix(executor): skip content persistence when stream consumer exits early
Previously, if the onStream consumer caught an internal error without
re-throwing, the block-executor would treat the shortened accumulator
as the complete response, persist a truncated string to memory via
appendToMemory, and set it as executionOutput.content.
Track whether the source ReadableStream actually closed (done=true) in
the pull handler. If onStream returns before the source drains, skip
content persistence and log a warning — the old tee()-based flow was
immune to this because the executor branch drained independently of
the client branch.
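The drain tracking can be sketched like this (a simplified model using a reader loop rather than the executor's actual pull-handler wiring): the flag only flips when the source reports done=true, so an early consumer exit leaves it false and persistence is skipped.

```typescript
// Simplified model of the drain check; not the executor's code.
// onChunk returns false to simulate a consumer exiting early.
async function consumeWithDrainCheck(
  source: ReadableStream<string>,
  onChunk: (chunk: string) => boolean
): Promise<{ content: string; sourceClosed: boolean }> {
  const reader = source.getReader()
  let content = ''
  let sourceClosed = false
  while (true) {
    const { done, value } = await reader.read()
    if (done) {
      sourceClosed = true // the source actually drained
      break
    }
    content += value
    if (!onChunk(value)) break // consumer bailed before the source closed
  }
  return { content, sourceClosed }
}
```

Callers would persist content only when sourceClosed is true, logging a warning otherwise.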
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix lint
* fix(executor): early-return when no streamed content to make onFullContent symmetric
Previously, executionOutput.content was guarded by `if (fullContent)`
but `onFullContent` fired regardless. The agent-handler implementation
defensively bails on empty/whitespace content, but that's a callee
contract, not enforced at the call site; a future implementation could
spuriously persist empty assistant turns to memory.
Hoist the `!fullContent` check to a single early return, so both the
output write and the callback share the same precondition.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(security): patch copilot tool & multipart upload IDORs
- multipart upload: bind upload session to (userId, workspaceId, key)
via short-lived HMAC-signed token; require workspace write access at
initiate; source key/uploadId/context from verified token (never
client) at get-part-urls/complete/abort
- copilot knowledge-base tools: gate all 11 read/write/tag/connector
ops with checkKnowledgeBaseAccess / checkKnowledgeBaseWriteAccess
- copilot user-table tools: add workspace-id check to get, get_schema,
add/rename/delete/update_column to match existing op pattern
- copilot manage-credential: add full ownership/write-permission auth
via getCredentialActorContext (previously had no auth)
- copilot restore-resource: verify workspace ownership and write
permission for workflow, table, knowledgebase, file, and folder
restores
- copilot folder rename/move: verify both folderId and parentId belong
to the caller's workspace via new verifyFolderWorkspace helper
- copilot get-job-logs: verify schedule belongs to caller's workspace
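The upload-token scheme described above might be sketched as follows (illustrative; the payload fields, encoding, and segment format are assumptions, not the real route code). The signature binds the session to (userId, workspaceId, key) plus an expiry, so the client can never swap in a different key or uploadId:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto'

// Illustrative short-lived HMAC-signed token: base64url payload + signature.
function signUploadToken(payload: object, secret: string): string {
  const body = Buffer.from(JSON.stringify(payload)).toString('base64url')
  const sig = createHmac('sha256', secret).update(body).digest('base64url')
  return `${body}.${sig}`
}

function verifyUploadToken(token: string, secret: string): object | null {
  const parts = token.split('.')
  if (parts.length !== 2) return null // reject tokens with != 2 segments
  const [body, sig] = parts
  const expected = createHmac('sha256', secret).update(body).digest('base64url')
  const a = Buffer.from(sig)
  const b = Buffer.from(expected)
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null
  const payload = JSON.parse(Buffer.from(body, 'base64url').toString())
  if (typeof payload.exp !== 'number' || payload.exp < Date.now()) return null
  return payload // verified: key/uploadId/context are sourced from here
}
```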
* fix(security): address PR review — document IDOR, log count, token split
- knowledge-base delete_document/update_document: verify each document
belongs to the claimed knowledgeBaseId via checkDocumentWriteAccess
(was: trusted args.knowledgeBaseId without binding it to the document)
- multipart batch complete: log verifiedEntries.length instead of raw
client-supplied data.uploads.length
- upload-token: reject tokens with !=2 dot-delimited segments
* fix(security): close folder workspace bypass when workspaceId is falsy
The visual filter/sort builders read the selected tableId from subBlock id
'tableId', but the Table block stores it under 'tableSelector' (basic) or
'manualTableId' (advanced) via canonicalParamId. The lookup always returned
null, so useTable was disabled and the column picker always showed
"no options available".
Adds useCanonicalSubBlockValue that resolves by canonicalParamId through
the canonical index, mirroring the pattern used by dropdown dependsOn.
* improvement(tables): race-free row-count trigger + scoped tx timeouts
* fix(tables): close upsert race + serialize replaceTableRows
Two concurrency bugs flagged by review:
1. `upsertRow` insert path: removing FOR UPDATE broke serialization between
the initial existing-row SELECT and the INSERT. Two concurrent upserts
on the same conflict target could both find no match, then both insert,
producing a duplicate that bypasses the app-level unique check. Fixed
by re-checking for the matching row *after* acquiring the per-table
advisory lock; if a racing tx already inserted, switch to UPDATE.
2. `replaceTableRows`: under READ COMMITTED each tx's DELETE uses its own
MVCC snapshot, so two concurrent replaces could each DELETE only the
rows visible at their start, then both INSERT, leaving the table with
the union of both row sets. Fixed by acquiring the per-table advisory
lock at the start of the tx to serialize replaces against each other
and against auto-position inserts.
Also updated a stale docstring on `upsertRow` that still referenced the
removed FOR UPDATE.
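The post-lock re-check in (1) can be modeled in-memory (this stands in for the real Drizzle/Postgres code; pg_advisory_xact_lock is replaced by a promise chain, and the table by a Map):

```typescript
// In-memory model of "re-check after acquiring the per-table lock".
class Table {
  rows = new Map<string, number>() // conflictKey -> value
  private lock: Promise<void> = Promise.resolve()

  // Serialize critical sections per table (stand-in for an advisory lock).
  private withLock<T>(fn: () => T): Promise<T> {
    const run = this.lock.then(fn)
    this.lock = run.then(() => undefined, () => undefined)
    return run
  }

  async upsert(key: string, value: number): Promise<'insert' | 'update'> {
    // Fast path: initial existing-row check without the lock.
    if (this.rows.has(key)) {
      this.rows.set(key, value)
      return 'update'
    }
    return this.withLock(() => {
      // Re-check after acquiring the lock: a racing upsert may have
      // inserted while we waited; switch to UPDATE instead of a duplicate.
      if (this.rows.has(key)) {
        this.rows.set(key, value)
        return 'update' as const
      }
      this.rows.set(key, value)
      return 'insert' as const
    })
  }
}
```

Two concurrent upserts on the same key yield exactly one row: one takes the insert path, the other is forced onto the update path by the re-check.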
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(tables): serialize explicit-position inserts with advisory lock
The `(table_id, position)` index is non-unique. Concurrent explicit-
position inserts at the same slot can both observe the slot empty, both
skip the shift, then each INSERT — producing duplicate `(table_id,
position)` rows.
Take the per-table advisory lock in the explicit-position branches of
`insertRow` and `batchInsertRows` (the auto-position branches already do
this). Updated the test that asserted the explicit path skipped the lock,
and added coverage for `batchInsertRows` with explicit positions.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* refactor(tables): dedupe upsert UPDATE path + extract nextAutoPosition
Two pure cleanups on the round-1 changes:
1. Extract `nextAutoPosition(trx, tableId)` — the `SELECT coalesce(max(
position), -1) + 1` pattern was repeated in three call sites
(`insertRow` auto branch, `batchInsertRows` auto branch, `upsertRow`
insert branch). One helper, same behavior.
2. Consolidate `upsertRow` update path. The initial-SELECT match and the
post-lock re-select match previously had two literal duplicates of the
same UPDATE + return block. Resolve `matchedRowId` first, then run one
UPDATE branch. Lock is still only acquired when we don't find a match
on the first pass.
No behavior change. 98/98 table unit tests unchanged.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
* refactor(ashby): align tools, block, and triggers with Ashby API
Audit-driven refactor to destructure rich fields per Ashby's API docs,
centralize output shapes via shared mappers in tools/ashby/utils.ts,
and align webhook provider handler with trigger IDs via a shared
action map. Removes stale block outputs left over from prior flat
response shapes.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(ashby): remove stale noteId output and reject ping events
- Remove stale `noteId` output descriptor from block (create_note
now returns `id` at the top level via the shared note mapper).
- Explicitly reject `ping` events in the webhook matchEvent before
falling back to the generic triggerId check, so webhook records
missing triggerId cannot execute workflows on Ashby ping probes.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(ashby): trim optional ID params in create/update tools
Optional ID params in create_application, change_application_stage,
and update_candidate were passed through to the request body without
.trim(), unlike their required ID siblings. Normalize to prevent
copy-paste whitespace errors.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(ashby): add subblock migration for removed filterCandidateId
Add SUBBLOCK_ID_MIGRATIONS entry so deployed workflows that previously
used the `filterCandidateId` subBlock on `list_applications` don't break
after the field was removed (Ashby's application.list doesn't filter by
candidateId). Also regenerate docs to sync noteId removal.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(ashby): final API alignment from parallel validation
- create_candidate: email is optional per Ashby docs (only name is
required); tool, types, and block all made non-required.
- list_applications: guard NaN when createdAfter can't be parsed so we
don't send a bad value to Ashby's API.
- webhook provider: replace createHmacVerifier with explicit
fail-closed verifyAuth that 401s when secretToken, signature header,
or signature match is missing (was previously fail-open on missing
secret).
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(ashby): preserve input.data path in webhook formatInput
Restore the explicit `data` key alongside the spread so deployed
workflows that reference `input.data.application.*`, `input.data.candidate.*`,
etc. keep working. The spread alone dropped those paths.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* refactor(ashby): drop legacy input.data key from webhook formatInput
Keep formatInput aligned with the advertised trigger outputs schema
(flat top-level entities) and drop the legacy input.data.* compat path.
Every field declared in each trigger's outputs is now populated 1:1 by
the data spread plus the explicit action key — no undeclared keys.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* fix(ashby): trim remaining ID body params for parity
Add .trim() on sourceId (create_candidate), jobId (list_applications),
applicationId and interviewStageId (list_interviews) to match the
trim-on-IDs pattern used across the rest of the Ashby tools and guard
against copy-paste whitespace.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* update docs
---------
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
AGENTS.md / CLAUDE.md forbid crypto.randomUUID() (calling it in non-secure
browser contexts throws a TypeError, since the API is unavailable there).
Four copilot server-side files still violated this rule, a leftover from
PR #3397, which polyfilled only the client.
Routes through request lifecycle, OAuth draft insertion, persisted message
normalization, and table-row generation now use generateId from @sim/utils/id,
which is a drop-in UUID v4 producer that falls back to crypto.getRandomValues
when randomUUID is unavailable.
Refs #3393.
* feat(agentphone): add AgentPhone integration
* fix(agentphone): validate numeric inputs and metadata JSON
* chore(agentphone): remove dead `from` fallback in get_number_messages
* fix(agentphone): drop empty-string updates in update_contact
* fix(agentphone): scope limit/offset to list ops and revert stray IdentityCenter change
* lint
* feat(files): default sort by updated and add updated sort option
* feat(files): show Last Updated column
Matches the visible-column pattern already used on Knowledge and Tables
so users can see the value they're sorting by.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
* fix(retention): switch data retention to be org-level
* fix lint
* cleanup mothership ran logs
* fix cleanup dispatcher
* fix ui flash for data retention settings
* fix lint
* remove raw sql string interpolation
* fix(api): return archivedAt for list tables route
* improvement(repo): restructuring to make realtime image narrower scoped
* improvements
* chore(repo): rebase fixes and quality improvements for realtime split
Addresses merge-time issues and gaps from the realtime app split:
- Retarget stale vi.mock paths to @sim/workflow-persistence/subblocks
- Restore README branding, fix AGENTS.md script reference
- Restore TSDoc on workflow-persistence subblocks helpers
- Use toError() from @sim/utils/errors in save.ts
- Add vitest config + local mocks so @sim/audit tests run standalone
- Move socket.io-client to devDependencies in apps/realtime
- Add missing package COPY steps to docker/app.Dockerfile
- Add check:boundaries/check:realtime-prune scripts and wire into CI
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* refactor(security): consolidate crypto primitives into @sim/security
Move general-purpose crypto primitives out of apps/sim into the
@sim/security package so both apps/sim and apps/realtime can share them.
@sim/security exports (all pure, dependency-free):
./compare safeCompare (constant-time HMAC-wrapped equality)
./encryption encrypt/decrypt (AES-256-GCM, iv:cipher:tag format)
./hash sha256Hex
./tokens generateSecureToken (base64url)
Migrate apps/sim call sites to use these + @sim/utils helpers:
crypto.randomUUID() -> generateId() from @sim/utils/id
createHash('sha256').digest -> sha256Hex
timingSafeEqual on hashed hex -> safeCompare
new Promise(setTimeout) -> sleep from @sim/utils/helpers
No behavior change: encryption format, digest output, and token
length are preserved exactly.
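The HMAC-wrapped equality behind safeCompare can be sketched as follows (illustrative; the real @sim/security/compare may differ). HMACing both inputs with a fresh random key equalizes their lengths before timingSafeEqual, so length differences don't leak and timingSafeEqual never throws:

```typescript
import { createHmac, randomBytes, timingSafeEqual } from 'node:crypto'

// Constant-time equality via HMAC wrapping; sketch only.
function safeCompare(a: string, b: string): boolean {
  const key = randomBytes(32) // fresh key per comparison
  const ha = createHmac('sha256', key).update(a).digest()
  const hb = createHmac('sha256', key).update(b).digest()
  return timingSafeEqual(ha, hb) // both digests are 32 bytes, so this is safe
}
```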
* refactor(copilot): use toError in remaining otel/finalize sites
Replace the last two `error instanceof Error ? error : new Error(String(error))`
patterns with toError from @sim/utils/errors. Completes the sweep of clean
candidates — no behavior change.
* refactor(security): consolidate HMAC-SHA256 primitives into @sim/security
Adds hmacSha256Hex and hmacSha256Base64 to @sim/security/hmac and migrates
15 webhook providers plus 5 other hot paths (deployment token signing,
outbound webhook requests, workspace notification delivery, notification
test route, Shopify OAuth callback) off bare `createHmac` calls. Secret
parameter accepts `string | Buffer` to cover base64-decoded Svix-style
secrets (Resend) and MS Teams' HMAC scheme. AWS SigV4 signing in S3 and
Textract tools intentionally retains direct `createHmac` usage — its
multi-step key derivation chain doesn't fit a generic helper.
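The two helpers might be sketched as follows (illustrative; the real @sim/security/hmac may differ). The `string | Buffer` secret type is what lets base64-decoded Svix-style secrets flow through unchanged:

```typescript
import { createHmac } from 'node:crypto'

// Sketches of the shared HMAC-SHA256 helpers.
function hmacSha256Hex(secret: string | Buffer, data: string): string {
  return createHmac('sha256', secret).update(data).digest('hex')
}

function hmacSha256Base64(secret: string | Buffer, data: string): string {
  return createHmac('sha256', secret).update(data).digest('base64')
}
```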
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* chore(packages): post-audit test + packaging polish
- Add safeCompare unit tests (identity, length mismatch, hex-nibble diff).
- Add Buffer-secret cases to hmac tests to lock in Svix/MS-Teams contract.
- Declare `reactflow` as a peerDependency on @sim/workflow-types — only used for type imports.
- Add a barrel export to @sim/workflow-persistence for consumers that prefer package-level imports; subpath exports retained.
- Document the data-field invariant in load.ts for loop/parallel subflow patching.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* chore(realtime): address PR review feedback
- Remove redundant SOCKET_PORT=3002 env from Dockerfile runner stage
(env.PORT already defaults to 3002 via zod schema).
- Reorder PORT fallback so an explicitly-set SOCKET_PORT wins over
the schema default for PORT; keeps SOCKET_PORT functional as an
override instead of dead code.
- Add dedicated type-check CI step for @sim/realtime so TS errors
surface pre-deploy (the Dockerfile runs source TS via Bun and has
no implicit build-time type check).
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
* chore(realtime): remove unused SOCKET_PORT env var
SOCKET_PORT has lived in the socket server since the June 2025 refactor
but was never actually set in any deploy config — docker-compose.prod,
helm values/templates, .env.example, and docs all use PORT or the 3002
default exclusively. No self-hoster was ever pointed at SOCKET_PORT, so
removing it is safe.
Simplifies realtime port resolution to `env.PORT` (zod-validated with a
3002 default) and drops the orphaned sim-side schema entry.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
---------
Co-authored-by: Waleed Latif <walif6@gmail.com>
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
* fix(docs): update simstudio.ai URLs to sim.ai in SSO docs
* improvement(docs): remove plan defaults table from data retention docs
* improvement(docs): consolidate self-hosting info at bottom of enterprise docs
* improvement(docs): reduce callout and FAQ overuse in enterprise docs
* improvement(docs): restore FAQs and genuine-gotcha callouts