Compare commits


76 Commits

Author SHA1 Message Date
Waleed
b45f3962fc v0.5.89: resume execution on refresh, google books, tool input subblock improvements 2026-02-13 00:36:54 -08:00
Waleed
7fbbc7ba7a fix(tool-input): sync cleared subblock values to tool params (#3214) 2026-02-13 00:18:25 -08:00
Waleed
a337aa7dfe feat(internal): added internal api base url for internal calls (#3212)
* feat(internal): added internal api base url for internal calls

* make validation on http more lax
2026-02-12 23:56:35 -08:00
Waleed
022e84c4b1 feat(creators): added referrers, code redemption, campaign tracking, etc (#3198)
* feat(creators): added referrers, code redemption, campaign tracking, etc

* more

* added zod

* remove default

* remove duplicate index

* update admin routes

* reran migrations

* lint

* move userstats record creation inside tx

* added reason for already attributed case

* cleanup referral attributes
2026-02-12 20:07:40 -08:00
Waleed
602e371a7a refactor(tool-input): subblock-first rendering, component extraction, bug fixes (#3207)
* refactor(tool-input): eliminate SyncWrappers, add canonical toggle and dependsOn gating

Replace 17+ individual SyncWrapper components with a single centralized
ToolSubBlockRenderer that bridges the subblock store with StoredTool.params
via synthetic store keys. This reduces ~1000 lines of duplicated wrapper
code and ensures tool-input renders subblock components identically to
the standalone SubBlock path.

- Add ToolSubBlockRenderer with bidirectional store sync
- Add basic/advanced mode toggle (ArrowLeftRight) using collaborative functions
- Add dependsOn gating via useDependsOnGate (fields disable instead of hiding)
- Add paramVisibility field to SubBlockConfig for tool-input visibility control
- Pass canonicalModeOverrides through getSubBlocksForToolInput
- Show (optional) label for non-user-only fields (LLM can inject at runtime)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tool-input): restore optional indicator, fix folder selector and canonical toggle, extract components

- Attach resolved paramVisibility to subblocks from getSubBlocksForToolInput
- Add labelSuffix prop to SubBlock for "(optional)" badge on user-or-llm params
- Fix folder selector missing for tools with canonicalParamId (e.g. Google Drive)
- Fix canonical toggle not clickable by letting SubBlock handle dependsOn internally
- Extract ParameterWithLabel, ToolSubBlockRenderer, ToolCredentialSelector to components/tools/
- Extract StoredTool interface to types.ts, selection helpers to utils.ts
- Remove dead code (mcpError, refreshTools, oldParamIds, initialParams)
- Strengthen typing: replace any with proper types on icon components and evaluateParameterCondition

* add sibling values to subblock context since subblock store isn't relevant in tool input, and removed unused param

* cleanup

* fix(tool-input): render uncovered tool params alongside subblocks

The SubBlock-first rendering path was hard-returning after rendering
subblocks, so tool params without matching subblocks (like inputMapping
for workflow tools) were never rendered. Now renders subblocks first,
then any remaining displayParams not covered by subblocks via the legacy
ParameterWithLabel fallback.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tool-input): auto-refresh workflow inputs after redeploy

After redeploying a child workflow via the stale badge, the workflow
state cache was not invalidated, so WorkflowInputMapperInput kept
showing stale input fields until page refresh. Now invalidates
workflowKeys.state on deploy success.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tool-input): correct workflow selector visibility and tighten (optional) spacing

- Set workflowId param to user-only in workflow_executor tool config
  so "Select Workflow" no longer shows "(optional)" indicator
- Tighten (optional) label spacing with -ml-[3px] to counteract
  parent Label's gap-[6px], making it feel inline with the label text

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tool-input): align (optional) text to baseline instead of center

Use items-baseline instead of items-center on Label flex containers
so the smaller (optional) text aligns with the label text baseline
rather than sitting slightly below it.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tool-input): increase top padding of expanded tool body

Bump the expanded tool body container's top padding from 8px to 12px
for more breathing room between the header bar and the first parameter.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tool-input): apply extra top padding only to SubBlock-first path

Revert container padding to py-[8px] (MCP tools were correct).
Wrap SubBlock-first output in a div with pt-[4px] so only registry
tools get extra breathing room from the container top.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tool-input): increase gap between SubBlock params for visual clarity

SubBlock's internal gap (10px between label and input) matched the
between-parameter gap (10px), making them indistinguishable. Increase
the between-parameter gap to 14px so consecutive parameters are
visually distinct, matching the separation seen in ParameterWithLabel.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix spacing and optional tag

* update styling + move predeploy checks earlier for first time deploys

* update change detection to account for synthetic tool ids

* fix remaining blocks that had files visibility set to hidden

* cleanup

* add catch

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-12 19:01:04 -08:00
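The synthetic-key bridge described in this PR can be pictured with a small sketch. This is an illustrative reconstruction based only on the commit message, not Sim's actual ToolSubBlockRenderer; the names StoredTool, hydrateStore, and commitStore are hypothetical.

```ts
// Hypothetical sketch of the subblock/tool-params bridge described above:
// each tool parameter is written into the subblock store under a synthetic key,
// so the standard subblock components can render and edit it, and edited values
// are then synced back into the tool's params. Names here are illustrative.
interface StoredTool {
  toolId: string
  params: Record<string, string>
}

type SubBlockStore = Map<string, string>

function syntheticKey(toolId: string, paramId: string): string {
  // Namespaced so tool params never collide with ordinary subblock values.
  return `__tool__:${toolId}:${paramId}`
}

// Direction 1: push current tool params into the store so subblocks render them.
function hydrateStore(store: SubBlockStore, tool: StoredTool): void {
  for (const [paramId, value] of Object.entries(tool.params)) {
    store.set(syntheticKey(tool.toolId, paramId), value)
  }
}

// Direction 2: pull edited values back out of the store into the tool.
function commitStore(store: SubBlockStore, tool: StoredTool): StoredTool {
  const params = { ...tool.params }
  for (const paramId of Object.keys(params)) {
    const edited = store.get(syntheticKey(tool.toolId, paramId))
    if (edited !== undefined) params[paramId] = edited
  }
  return { ...tool, params }
}
```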
Theodore Li
9a06cae591 Merge pull request #3210 from simstudioai/feat/google-books
feat(google books): Add google books integration
2026-02-12 16:18:42 -08:00
Theodore Li
dce47a101c Migrate last response to types 2026-02-12 15:45:00 -08:00
Theodore Li
1130f8ddb2 Remove redundant error handling, move volume item to types file 2026-02-12 15:31:12 -08:00
Waleed
ebc2ffa1c5 fix(agent): always fetch latest custom tool from DB when customToolId is present (#3208)
* fix(agent): always fetch latest custom tool from DB when customToolId is present

* test(agent): use generic test data for customToolId resolution tests

* fix(agent): mock buildAuthHeaders in tests for CI compatibility

* remove inline mocks in favor of sim/testing ones
2026-02-12 15:31:11 -08:00
Theodore Li
fc97ce007d Correct error handling, specify auth mode as api key 2026-02-12 15:26:13 -08:00
Theodore Li
6c006cdfec feat(google books): Add google books integration 2026-02-12 15:01:33 -08:00
Siddharth Ganesan
c380e59cb3 fix(copilot): make default model opus 4.5 (#3209)
* Fix default model

* Fix
2026-02-12 13:17:45 -08:00
Waleed
2944579d21 fix(s3): support get-object region override and robust S3 URL parsing (#3206)
* fix(s3): support get-object region override and robust S3 URL parsing

* ack pr comments
2026-02-12 10:59:22 -08:00
Waleed
81dfeb0bb0 fix(terminal): reconnect to running executions after page refresh (#3200)
* fix(terminal): reconnect to running executions after page refresh

* fix(terminal): use ExecutionEvent type instead of any in reconnection stream

* fix(execution): type event buffer with ExecutionEvent instead of Record<string, unknown>

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(execution): validate fromEventId query param in reconnection endpoint

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Fix some bugs

* fix(variables): fix tag dropdown and cursor alignment in variables block (#3199)

* feat(confluence): added list space labels, delete label, delete page prop (#3201)

* updated route

* ack comments

* fix(execution): reset execution state in reconnection cleanup to unblock re-entry

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(execution): restore running entries when reconnection is interrupted by navigation

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* done

* remove cast in ioredis types

* ack PR comments

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Siddharth Ganesan <siddharthganesan@gmail.com>
2026-02-11 19:31:29 -08:00
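A rough sketch of the replay step this PR describes: the names ExecutionEvent and fromEventId come from the commit messages above, but the buffer shape, validation rule, and endpoint wiring here are assumptions, not Sim's implementation.

```ts
// Illustrative only: validate the fromEventId query param, then replay any
// buffered execution events newer than it before resuming the live stream.
interface ExecutionEvent {
  id: number
  type: string
  data: unknown
}

function replayFromEventId(
  buffer: ExecutionEvent[],
  fromEventId: string | null
): ExecutionEvent[] {
  if (fromEventId === null || fromEventId === '') return buffer
  const parsed = Number(fromEventId)
  // Reject anything that is not a non-negative integer (assumed validation rule).
  if (!Number.isInteger(parsed) || parsed < 0) {
    throw new Error('Invalid fromEventId')
  }
  return buffer.filter((event) => event.id > parsed)
}
```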
Waleed
01577a18b4 fix(change-detection): resolve false positive trigger block change detection (#3204) 2026-02-11 17:24:17 -08:00
Waleed
07d50f8fe1 v0.5.88: interactions api for gemini, trigger machine size increase, confluence ops 2026-02-11 15:36:55 -08:00
Vikhyath Mondreti
52aff4d60b fix build 2026-02-11 15:33:22 -08:00
Waleed
3a3bddd6f8 fix(confl): use recommended query param pattern for confluence route (#3202)
* fix(confl): use recommended query param pattern for confluence route

* use unused var
2026-02-11 14:59:26 -08:00
Waleed
639d50d6b9 feat(confluence): added list space labels, delete label, delete page prop (#3201) 2026-02-11 14:40:31 -08:00
Waleed
cec74e09c2 fix(variables): fix tag dropdown and cursor alignment in variables block (#3199) 2026-02-11 14:40:31 -08:00
Waleed
d5a756c9f2 fix(hotkeys): remove C, T, E tab-switching hotkeys (#3197) 2026-02-11 13:24:00 -08:00
Waleed
f3e994baf0 improvement(oom): increase trigger machine size (#3196) 2026-02-11 13:11:28 -08:00
Vikhyath Mondreti
27973953f6 v0.5.87: workflow block auth fix 2026-02-10 22:33:55 -08:00
Waleed
50585273ce v0.5.86: server side copilot, copilot mcp, error notifications, jira outputs destructuring, slack trigger improvements 2026-02-10 21:49:58 -08:00
Vikhyath Mondreti
654cb2b407 v0.5.85: deployment improvements 2026-02-09 10:49:33 -08:00
Waleed
6c66521d64 v0.5.84: model request sanitization 2026-02-07 19:06:53 -08:00
Vikhyath Mondreti
479cd347ad v0.5.83: agent skills, concurrent workers for v8s, airweave integration 2026-02-07 12:27:11 -08:00
Waleed
a3a99eda19 v0.5.82: slack trigger files, pagination for linear, executor fixes 2026-02-06 00:41:52 -08:00
Waleed
1a66d48add v0.5.81: traces fix, additional confluence tools, azure anthropic support, opus 4.6 2026-02-05 11:28:54 -08:00
Waleed
46822e91f3 v0.5.80: lock feature, enterprise modules, time formatting consolidation, files, UX and UI improvements, longer timeouts 2026-02-04 18:27:05 -08:00
Waleed
2bb68335ee v0.5.79: longer MCP tools timeout, optimize loop/parallel regeneration, enrich.so integration 2026-01-31 21:57:56 -08:00
Waleed
8528fbe2d2 v0.5.78: billing fixes, mcp timeout increase, reactquery migrations, updated tool param visibilities, DSPy and Google Maps integrations 2026-01-31 13:48:22 -08:00
Waleed
31fdd2be13 v0.5.77: room manager redis migration, tool outputs, ui fixes 2026-01-30 14:57:17 -08:00
Waleed
028bc652c2 v0.5.76: posthog improvements, readme updates 2026-01-29 00:13:19 -08:00
Waleed
c6bf5cd58c v0.5.75: search modal overhaul, helm chart updates, run from block, terminal and visual debugging improvements 2026-01-28 22:54:13 -08:00
Vikhyath Mondreti
11dc18a80d v0.5.74: autolayout improvements, clerk integration, auth enforcements 2026-01-27 20:37:39 -08:00
Waleed
ab4e9dc72f v0.5.73: ci, helm updates, kb, ui fixes, note block enhancements 2026-01-26 22:04:35 -08:00
Vikhyath Mondreti
1c58c35bd8 v0.5.72: azure connection string, supabase improvement, multitrigger resolution, docs quick reference 2026-01-25 23:42:27 -08:00
Waleed
d63a5cb504 v0.5.71: ux, ci improvements, docs updates 2026-01-25 03:08:08 -08:00
Waleed
8bd5d41723 v0.5.70: router fix, anthropic agent response format adherence 2026-01-24 20:57:02 -08:00
Waleed
c12931bc50 v0.5.69: kb upgrades, blog, copilot improvements, auth consolidation (#2973)
* fix(subflows): tag dropdown + resolution logic (#2949)

* fix(subflows): tag dropdown + resolution logic

* fixes;

* revert parallel change

* chore(deps): bump posthog-js to 1.334.1 (#2948)

* fix(idempotency): add conflict target to atomicallyClaimDb query + remove redundant db namespace tracking (#2950)

* fix(idempotency): add conflict target to atomicallyClaimDb query

* delete needs to account for namespace

* simplify namespace filtering logic

* fix cleanup

* consistent target

* improvement(kb): add document filtering, select all, and React Query migration (#2951)

* improvement(kb): add document filtering, select all, and React Query migration

* test(kb): update tests for enabledFilter and removed userId params

* fix(kb): remove non-null assertion, add explicit guard

* improvement(logs): trace span, details (#2952)

* improvement(action-bar): ordering

* improvement(logs): details, trace span

* feat(blog): v0.5 release post (#2953)

* feat(blog): v0.5 post

* improvement(blog): simplify title and remove code block header

- Simplified blog title from Introducing Sim Studio v0.5 to Introducing Sim v0.5
- Removed language label header and copy button from code blocks for cleaner appearance

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* ack PR comments

* small styling improvements

* created system to create post-specific components

* updated component

* cache invalidation

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>

* feat(admin): add credits endpoint to issue credits to users (#2954)

* feat(admin): add credits endpoint to issue credits to users

* fix(admin): use existing credit functions and handle enterprise seats

* fix(admin): reject NaN and Infinity in amount validation

* styling

* fix(admin): validate userId and email are strings

* improvement(copilot): fast mode, subagent tool responses and allow preferences (#2955)

* Improvements

* Fix actions mapping

* Remove console logs

* fix(billing): handle missing userStats and prevent crashes (#2956)

* fix(billing): handle missing userStats and prevent crashes

* fix(billing): correct import path for getFilledPillColor

* fix(billing): add Number.isFinite check to lastPeriodCost

* fix(logs): refresh logic to refresh logs details (#2958)

* fix(security): add authentication and input validation to API routes (#2959)

* fix(security): add authentication and input validation to API routes

* moved utils

* remove extraneous comments

* removed unused dep

* improvement(helm): add internal ingress support and same-host path consolidation (#2960)

* improvement(helm): add internal ingress support and same-host path consolidation

* improvement(helm): clean up ingress template comments

Simplify verbose inline Helm comments and section dividers to match the
minimal style used in services.yaml.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix(helm): add missing copilot path consolidation for realtime host

When copilot.host equals realtime.host but differs from app.host,
copilot paths were not being routed. Added logic to consolidate
copilot paths into the realtime rule for this scenario.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* improvement(helm): follow ingress best practices

- Remove orphan comments that appeared when services were disabled
- Add documentation about path ordering requirements
- Paths rendered in order: realtime, copilot, app (specific before catch-all)
- Clean template output matching industry Helm chart standards

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>

* feat(blog): enterprise post (#2961)

* feat(blog): enterprise post

* added more images, styling

* more content

* updated v0-5 post

* remove unused transition

---------

Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>

* fix(envvars): resolution standardized (#2957)

* fix(envvars): resolution standardized

* remove comments

* address bugbot

* fix highlighting for env vars

* remove comments

* address greptile

* address bugbot

* fix(copilot): mask credentials fix (#2963)

* Fix copilot masking

* Clean up

* Lint

* improvement(webhooks): remove dead code (#2965)

* fix(webhooks): subscription recreation path

* improvement(webhooks): remove dead code

* fix tests

* address bugbot comments

* fix restoration edge case

* fix more edge cases

* address bugbot comments

* fix gmail polling

* add warnings for UI indication for credential sets

* fix(preview): subblock values (#2969)

* fix(child-workflow): nested spans handoff (#2966)

* fix(child-workflow): nested spans handoff

* remove overly defensive programming

* update type check

* type more code

* remove more dead code

* address bugbot comments

* fix(security): restrict API key access on internal-only routes (#2964)

* fix(security): restrict API key access on internal-only routes

* test(security): update function execute tests for checkInternalAuth

* updated agent handler

* move session check higher in checkSessionOrInternalAuth

* extracted duplicate code into helper for resolving user from jwt

* fix(copilot): update copilot chat title (#2968)

* fix(hitl): fix condition blocks after hitl (#2967)

* fix(notes): ghost edges (#2970)

* fix(notes): ghost edges

* fix deployed state fallback

* fallback

* remove UI level checks

* annotation missing from autoconnect source check

* improvement(docs): loop and parallel var reference syntax (#2975)

* fix(blog): slash actions description (#2976)

* improvement(docs): loop and parallel var reference syntax

* fix(blog): slash actions description

* fix(auth): copilot routes (#2977)

* Fix copilot auth

* Fix

* Fix

* Fix

* fix(copilot): fix edit summary for loops/parallels (#2978)

* fix(integrations): hide from tool bar (#2544)

* fix(landing): ui (#2979)

* fix(edge-validation): race condition on collaborative add (#2980)

* fix(variables): boolean type support and input improvements (#2981)

* fix(variables): boolean type support and input improvements

* fix formatting

---------

Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: Emir Karabeg <78010029+emir-karabeg@users.noreply.github.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Siddharth Ganesan <33737564+Sg312@users.noreply.github.com>
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
2026-01-24 14:29:53 -08:00
Waleed
e9c4251c1c v0.5.68: router block reasoning, executor improvements, variable resolution consolidation, helm updates (#2946)
* improvement(workflow-item): stabilize avatar layout and fix name truncation (#2939)

* improvement(workflow-item): stabilize avatar layout and fix name truncation

* fix(avatars): revert overflow bg to hardcoded color for contrast

* fix(executor): stop parallel execution when block errors (#2940)

* improvement(helm): add per-deployment extraVolumes support (#2942)

* fix(gmail): expose messageId field in read email block (#2943)

* fix(resolver): consolidate reference resolution  (#2941)

* fix(resolver): consolidate code to resolve references

* fix edge cases

* use already formatted error

* fix multi index

* fix backwards compat reachability

* handle backwards compatibility accurately

* use shared constant correctly

* feat(router): expose reasoning output in router v2 block (#2945)

* fix(copilot): always allow, credential masking (#2947)

* Fix always allow, credential validation

* Credential masking

* Autoload

* fix(executor): handle condition dead-end branches in loops (#2944)

---------

Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: Siddharth Ganesan <33737564+Sg312@users.noreply.github.com>
2026-01-22 13:48:15 -08:00
Waleed
cc2be33d6b v0.5.67: loading, password reset, ui improvements, helm updates (#2928)
* fix(zustand): updated to useShallow from deprecated createWithEqualityFn (#2919)

* fix(logger): use direct env access for webpack inlining (#2920)

* fix(notifications): text overflow with line-clamp (#2921)

* chore(helm): add env vars for Vertex AI, orgs, and telemetry (#2922)

* fix(auth): improve reset password flow and consolidate brand detection (#2924)

* fix(auth): improve reset password flow and consolidate brand detection

* fix(auth): set errorHandled for EMAIL_NOT_VERIFIED to prevent duplicate error

* fix(auth): clear success message on login errors

* chore(auth): fix import order per lint

* fix(action-bar): duplicate subflows with children (#2923)

* fix(action-bar): duplicate subflows with children

* fix(action-bar): add validateTriggerPaste for subflow duplicate

* fix(resolver): agent response format, input formats, root level (#2925)

* fix(resolvers): agent response format, input formats, root level

* fix response block initial seeding

* fix tests

* fix(messages-input): fix cursor alignment and auto-resize with overlay (#2926)

* fix(messages-input): fix cursor alignment and auto-resize with overlay

* fixed remaining zustand warnings

* fix(stores): remove dead code causing log spam on startup (#2927)

* fix(stores): remove dead code causing log spam on startup

* fix(stores): replace custom tools zustand store with react query cache

* improvement(ui): use BrandedButton and BrandedLink components (#2930)

- Refactor auth forms to use BrandedButton component
- Add BrandedLink component for changelog page
- Reduce code duplication in login, signup, reset-password forms
- Update star count default value

* fix(custom-tools): remove unsafe title fallback in getCustomTool (#2929)

* fix(custom-tools): remove unsafe title fallback in getCustomTool

* fix(custom-tools): restore title fallback in getCustomTool lookup

Custom tools are referenced by title (custom_${title}), not database ID.
The title fallback is required for client-side tool resolution to work.

* fix(null-bodies): empty bodies handling (#2931)

* fix(null-statuses): empty bodies handling

* address bugbot comment

* fix(token-refresh): microsoft, notion, x, linear (#2933)

* fix(microsoft): proactive refresh needed

* fix(x): missing token refresh flag

* notion and linear missing flag too

* address bugbot comment

* fix(auth): handle EMAIL_NOT_VERIFIED in onError callback (#2932)

* fix(auth): handle EMAIL_NOT_VERIFIED in onError callback

* refactor(auth): extract redirectToVerify helper to reduce duplication

* fix(workflow-selector): use dedicated selector for workflow dropdown (#2934)

* feat(workflow-block): preview (#2935)

* improvement(copilot): tool configs to show nested props (#2936)

* fix(auth): add genericOAuth providers to trustedProviders (#2937)

---------

Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: Emir Karabeg <78010029+emir-karabeg@users.noreply.github.com>
2026-01-21 22:53:25 -08:00
Vikhyath Mondreti
45371e521e v0.5.66: external http requests fix, ring highlighting 2026-01-21 02:55:39 -08:00
Waleed
0ce0f98aa5 v0.5.65: gemini updates, textract integration, ui updates (#2909)
* fix(google): wrap primitive tool responses for Gemini API compatibility (#2900)

* fix(canonical): copilot path + update parent (#2901)

* fix(rss): add top-level title, link, pubDate fields to RSS trigger output (#2902)

* fix(rss): add top-level title, link, pubDate fields to RSS trigger output

* fix(imap): add top-level fields to IMAP trigger output

* improvement(browseruse): add profile id param (#2903)

* improvement(browseruse): add profile id param

* make request a stub since we have directExec

* improvement(executor): upgraded abort controller to handle aborts for loops and parallels (#2880)

* improvement(executor): upgraded abort controller to handle aborts for loops and parallels

* comments

* improvement(files): update execution for passing base64 strings (#2906)

* progress

* improvement(execution): update execution for passing base64 strings

* fix types

* cleanup comments

* path security vuln

* reject promise correctly

* fix redirect case

* remove proxy routes

* fix tests

* use ipaddr

* feat(tools): added textract, added v2 for mistral, updated tag dropdown (#2904)

* feat(tools): added textract

* cleanup

* ack pr comments

* reorder

* removed upload for textract async version

* fix additional fields dropdown in editor, update parser to leave validation to be done on the server

* added mistral v2, files v2, and finalized textract

* updated the rest of the old file patterns, updated mistral outputs for v2

* updated tag dropdown to parse non-operation fields as well

* updated extension finder

* cleanup

* added description for inputs to workflow

* use helper for internal route check

* fix tag dropdown merge conflict change

* remove duplicate code

---------

Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>

* fix(ui): change add inputs button to match output selector (#2907)

* fix(canvas): removed invite to workspace from canvas popover (#2908)

* fix(canvas): removed invite to workspace

* removed unused props

* fix(copilot): legacy tool display names (#2911)

* fix(a2a): canonical merge  (#2912)

* fix canonical merge

* fix empty array case

* fix(change-detection): copilot diffs have extra field (#2913)

* improvement(logs): improved logs ui bugs, added subflow disable UI (#2910)

* improvement(logs): improved logs ui bugs, added subflow disable UI

* added duplicate to action bar for subflows

* feat(broadcast): email v0.5 (#2905)

---------

Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
Co-authored-by: Emir Karabeg <78010029+emir-karabeg@users.noreply.github.com>
2026-01-20 23:54:55 -08:00
Waleed
dff1c9d083 v0.5.64: unsubscribe, search improvements, metrics, additional SSO configuration 2026-01-20 00:34:11 -08:00
Vikhyath Mondreti
b09f683072 v0.5.63: ui and performance improvements, more google tools 2026-01-18 15:22:42 -08:00
Vikhyath Mondreti
a8bb0db660 v0.5.62: webhook bug fixes, seeding default subblock values, block selection fixes 2026-01-16 20:27:06 -08:00
Waleed
af82820a28 v0.5.61: webhook improvements, workflow controls, react query for deployment status, chat fixes, reducto and pulse OCR, linear fixes 2026-01-16 18:06:23 -08:00
Waleed
4372841797 v0.5.60: invitation flow improvements, chat fixes, a2a improvements, additional copilot actions 2026-01-15 00:02:18 -08:00
Waleed
5e8c843241 v0.5.59: a2a support, documentation 2026-01-13 13:21:21 -08:00
Waleed
7bf3d73ee6 v0.5.58: export folders, new tools, permissions groups enhancements 2026-01-13 00:56:59 -08:00
Vikhyath Mondreti
7ffc11a738 v0.5.57: subagents, context menu improvements, bug fixes 2026-01-11 11:38:40 -08:00
Waleed
be578e2ed7 v0.5.56: batch operations, access control and permission groups, billing fixes 2026-01-10 00:31:34 -08:00
Waleed
f415e5edc4 v0.5.55: polling groups, bedrock provider, devcontainer fixes, workflow preview enhancements 2026-01-08 23:36:56 -08:00
Waleed
13a6e6c3fa v0.5.54: seo, model blacklist, helm chart updates, fireflies integration, autoconnect improvements, billing fixes 2026-01-07 16:09:45 -08:00
Waleed
f5ab7f21ae v0.5.53: hotkey improvements, added redis fallback, fixes for workflow tool 2026-01-06 23:34:52 -08:00
Waleed
bfb6fffe38 v0.5.52: new port-based router block, combobox expression and variable support 2026-01-06 16:14:10 -08:00
Waleed
4fbec0a43f v0.5.51: triggers, kb, condition block improvements, supabase and grain integration updates 2026-01-06 14:26:46 -08:00
Waleed
585f5e365b v0.5.50: import improvements, ui upgrades, kb styling and performance improvements 2026-01-05 00:35:55 -08:00
Waleed
3792bdd252 v0.5.49: hitl improvements, new email styles, imap trigger, logs context menu (#2672)
* feat(logs-context-menu): consolidated logs utils and types, added logs record context menu (#2659)

* feat(email): welcome email; improvement(emails): ui/ux (#2658)

* feat(email): welcome email; improvement(emails): ui/ux

* improvement(emails): links, accounts, preview

* refactor(emails): file structure and wrapper components

* added envvar for personal emails sent, added isHosted gate

* fixed failing tests, added env mock

* fix: removed comment

---------

Co-authored-by: waleed <walif6@gmail.com>

* fix(logging): hitl + trigger dev crash protection (#2664)

* hitl gaps

* deal with trigger worker crashes

* cleanup import structure

* feat(imap): added support for imap trigger (#2663)

* feat(tools): added support for imap trigger

* feat(imap): added parity, tested

* ack PR comments

* final cleanup

* feat(i18n): update translations (#2665)

Co-authored-by: waleedlatif1 <waleedlatif1@users.noreply.github.com>

* fix(grain): updated grain trigger to auto-establish trigger (#2666)

Co-authored-by: aadamgough <adam@sim.ai>

* feat(admin): routes to manage deployments (#2667)

* feat(admin): routes to manage deployments

* fix naming of deployed by

* feat(time-picker): added timepicker emcn component, added to playground, added searchable prop for dropdown, added more timezones for schedule, updated license and notice date (#2668)

* feat(time-picker): added timepicker emcn component, added to playground, added searchable prop for dropdown, added more timezones for schedule, updated license and notice date

* removed unused params, cleaned up redundant utils

* improvement(invite): aligned styling (#2669)

* improvement(invite): aligned with rest of app

* fix(invite): error handling

* fix: addressed comments

---------

Co-authored-by: Emir Karabeg <78010029+emir-karabeg@users.noreply.github.com>
Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: waleedlatif1 <waleedlatif1@users.noreply.github.com>
Co-authored-by: Adam Gough <77861281+aadamgough@users.noreply.github.com>
Co-authored-by: aadamgough <adam@sim.ai>
2026-01-03 13:19:18 -08:00
Waleed
eb5d1f3e5b v0.5.48: copy-paste workflow blocks, docs updates, mcp tool fixes 2025-12-31 18:00:04 -08:00
Waleed
54ab82c8dd v0.5.47: deploy workflow as mcp, kb chunks tokenizer, UI improvements, jira service management tools 2025-12-30 23:18:58 -08:00
Waleed
f895bf469b v0.5.46: build improvements, greptile, light mode improvements 2025-12-29 02:17:52 -08:00
Waleed
dd3209af06 v0.5.45: light mode fixes, realtime usage indicator, docker build improvements 2025-12-27 19:57:42 -08:00
Waleed
b6ba3b50a7 v0.5.44: keyboard shortcuts, autolayout, light mode, byok, testing improvements 2025-12-26 21:25:19 -08:00
Waleed
b304233062 v0.5.43: export logs, circleback, grain, vertex, code hygiene, schedule improvements 2025-12-23 19:19:18 -08:00
Vikhyath Mondreti
57e4b49bd6 v0.5.42: fix memory migration 2025-12-23 01:24:54 -08:00
Vikhyath Mondreti
e12dd204ed v0.5.41: memory fixes, copilot improvements, knowledgebase improvements, LLM providers standardization 2025-12-23 00:15:18 -08:00
Vikhyath Mondreti
3d9d9cbc54 v0.5.40: supabase ops to allow non-public schemas, jira uuid 2025-12-21 22:28:05 -08:00
Waleed
0f4ec962ad v0.5.39: notion, workflow variables fixes 2025-12-20 20:44:00 -08:00
Waleed
4827866f9a v0.5.38: snap to grid, copilot ux improvements, billing line items 2025-12-20 17:24:38 -08:00
Waleed
3e697d9ed9 v0.5.37: redaction utils consolidation, logs updates, autoconnect improvements, additional kb tag types 2025-12-19 22:31:55 -08:00
Martin Yankov
4431a1a484 fix(helm): add custom egress rules to realtime network policy (#2481)
The realtime service network policy was missing the custom egress rules section
that allows configuration of additional egress rules via values.yaml. This caused
the realtime pods to be unable to connect to external databases (e.g., PostgreSQL
on port 5432) when using external database configurations.

The app network policy already had this section, but the realtime network policy
was missing it, creating an inconsistency and preventing the realtime service
from accessing external databases configured via networkPolicy.egress values.

This fix adds the same custom egress rules template section to the realtime
network policy, matching the app network policy behavior and allowing users to
configure database connectivity via values.yaml.
2025-12-19 18:59:08 -08:00
Waleed
4d1a9a3f22 v0.5.36: hitl improvements, opengraph, slack fixes, one-click unsubscribe, auth checks, new db indexes 2025-12-19 01:27:49 -08:00
Vikhyath Mondreti
eb07a080fb v0.5.35: helm updates, copilot improvements, 404 for docs, salesforce fixes, subflow resize clamping 2025-12-18 16:23:19 -08:00
141 changed files with 17634 additions and 2149 deletions

View File

@@ -1157,6 +1157,21 @@ export function AirweaveIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function GoogleBooksIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 478.633 540.068'>
<path
fill='#1C51A4'
d='M449.059,218.231L245.519,99.538l-0.061,193.23c0.031,1.504-0.368,2.977-1.166,4.204c-0.798,1.258-1.565,1.995-2.915,2.547c-1.35,0.552-2.792,0.706-4.204,0.399c-1.412-0.307-2.7-1.043-3.713-2.117l-69.166-70.609l-69.381,70.179c-1.013,0.982-2.301,1.657-3.652,1.903c-1.381,0.246-2.792,0.092-4.081-0.491c-1.289-0.583-1.626-0.522-2.394-1.749c-0.767-1.197-1.197-2.608-1.197-4.081L85.031,6.007l-2.915-1.289C43.973-11.638,0,16.409,0,59.891v420.306c0,46.029,49.312,74.782,88.775,51.767l360.285-210.138C488.491,298.782,488.491,241.246,449.059,218.231z'
/>
<path
fill='#80D7FB'
d='M88.805,8.124c-2.179-1.289-4.419-2.363-6.659-3.345l0.123,288.663c0,1.442,0.43,2.854,1.197,4.081c0.767,1.197,1.872,2.148,3.161,2.731c1.289,0.583,2.7,0.736,4.081,0.491c1.381-0.246,2.639-0.921,3.652-1.903l69.749-69.688l69.811,69.749c1.013,1.074,2.301,1.81,3.713,2.117c1.412,0.307,2.884,0.153,4.204-0.399c1.319-0.552,2.455-1.565,3.253-2.792c0.798-1.258,1.197-2.731,1.166-4.204V99.998L88.805,8.124z'
/>
</svg>
)
}
export function GoogleDocsIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg

View File

@@ -38,6 +38,7 @@ import {
GithubIcon,
GitLabIcon,
GmailIcon,
GoogleBooksIcon,
GoogleCalendarIcon,
GoogleDocsIcon,
GoogleDriveIcon,
@@ -172,6 +173,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
github_v2: GithubIcon,
gitlab: GitLabIcon,
gmail_v2: GmailIcon,
google_books: GoogleBooksIcon,
google_calendar_v2: GoogleCalendarIcon,
google_docs: GoogleDocsIcon,
google_drive: GoogleDriveIcon,

View File

@@ -41,9 +41,6 @@ Diese Tastenkombinationen wechseln zwischen den Panel-Tabs auf der rechten Seite
| Tastenkombination | Aktion |
|----------|--------|
-| `C` | Copilot-Tab fokussieren |
-| `T` | Toolbar-Tab fokussieren |
-| `E` | Editor-Tab fokussieren |
| `Mod` + `F` | Toolbar-Suche fokussieren |
## Globale Navigation

View File

@@ -43,9 +43,6 @@ These shortcuts switch between panel tabs on the right side of the canvas.
| Shortcut | Action |
|----------|--------|
-| `C` | Focus Copilot tab |
-| `T` | Focus Toolbar tab |
-| `E` | Focus Editor tab |
| `Mod` + `F` | Focus Toolbar search |
## Global Navigation

View File

@@ -399,6 +399,28 @@ Create a new custom property (metadata) on a Confluence page.
| ↳ `authorId` | string | Account ID of the version author |
| ↳ `createdAt` | string | ISO 8601 timestamp of version creation |
### `confluence_delete_page_property`
Delete a content property from a Confluence page by its property ID.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `pageId` | string | Yes | The ID of the page containing the property |
| `propertyId` | string | Yes | The ID of the property to delete |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `pageId` | string | ID of the page |
| `propertyId` | string | ID of the deleted property |
| `deleted` | boolean | Deletion status |
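For orientation, this tool most likely wraps the Confluence Cloud v2 content-properties endpoint; a rough sketch of that underlying call follows. The Sim implementation may instead route through api.atlassian.com using the cloudId, so treat the URL and auth here as assumptions.

```ts
// Rough sketch of the REST call confluence_delete_page_property presumably maps to.
// The domain/pageId/propertyId parameters mirror the input table above.
async function deletePageProperty(
  domain: string,
  pageId: string,
  propertyId: string,
  accessToken: string
) {
  const res = await fetch(
    `https://${domain}/wiki/api/v2/pages/${pageId}/properties/${propertyId}`,
    { method: 'DELETE', headers: { Authorization: `Bearer ${accessToken}` } }
  )
  if (res.status !== 204) {
    throw new Error(`Failed to delete property ${propertyId}: ${res.status}`)
  }
  // Shape matches the output table above.
  return { ts: new Date().toISOString(), pageId, propertyId, deleted: true }
}
```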
### `confluence_search`
Search for content across Confluence pages, blog posts, and other content.
@@ -872,6 +894,90 @@ Add a label to a Confluence page for organization and categorization.
| `labelName` | string | Name of the added label |
| `labelId` | string | ID of the added label |
### `confluence_delete_label`
Remove a label from a Confluence page.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `pageId` | string | Yes | Confluence page ID to remove the label from |
| `labelName` | string | Yes | Name of the label to remove |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `pageId` | string | Page ID the label was removed from |
| `labelName` | string | Name of the removed label |
| `deleted` | boolean | Deletion status |
### `confluence_get_pages_by_label`
Retrieve all pages that have a specific label applied.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `labelId` | string | Yes | The ID of the label to get pages for |
| `limit` | number | No | Maximum number of pages to return \(default: 50, max: 250\) |
| `cursor` | string | No | Pagination cursor from previous response |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `labelId` | string | ID of the label |
| `pages` | array | Array of pages with this label |
| ↳ `id` | string | Unique page identifier |
| ↳ `title` | string | Page title |
| ↳ `status` | string | Page status \(e.g., current, archived, trashed, draft\) |
| ↳ `spaceId` | string | ID of the space containing the page |
| ↳ `parentId` | string | ID of the parent page \(null if top-level\) |
| ↳ `authorId` | string | Account ID of the page author |
| ↳ `createdAt` | string | ISO 8601 timestamp when the page was created |
| ↳ `version` | object | Page version information |
| ↳ `number` | number | Version number |
| ↳ `message` | string | Version message |
| ↳ `minorEdit` | boolean | Whether this is a minor edit |
| ↳ `authorId` | string | Account ID of the version author |
| ↳ `createdAt` | string | ISO 8601 timestamp of version creation |
| `nextCursor` | string | Cursor for fetching the next page of results |
### `confluence_list_space_labels`
List all labels associated with a Confluence space.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `spaceId` | string | Yes | The ID of the Confluence space to list labels from |
| `limit` | number | No | Maximum number of labels to return \(default: 25, max: 250\) |
| `cursor` | string | No | Pagination cursor from previous response |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `spaceId` | string | ID of the space |
| `labels` | array | Array of labels on the space |
| ↳ `id` | string | Unique label identifier |
| ↳ `name` | string | Label name |
| ↳ `prefix` | string | Label prefix/type \(e.g., global, my, team\) |
| `nextCursor` | string | Cursor for fetching the next page of results |
### `confluence_get_space`
Get details about a specific Confluence space.

View File

@@ -0,0 +1,96 @@
---
title: Google Books
description: Search and retrieve book information
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="google_books"
color="#FFFFFF"
/>
## Usage Instructions
Search for books using the Google Books API. Find volumes by title, author, ISBN, or keywords, and retrieve detailed information about specific books including descriptions, ratings, and publication details.
## Tools
### `google_books_volume_search`
Search for books using the Google Books API
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Google Books API key |
| `query` | string | Yes | Search query. Supports special keywords: intitle:, inauthor:, inpublisher:, subject:, isbn: |
| `filter` | string | No | Filter results by availability \(partial, full, free-ebooks, paid-ebooks, ebooks\) |
| `printType` | string | No | Restrict to print type \(all, books, magazines\) |
| `orderBy` | string | No | Sort order \(relevance, newest\) |
| `startIndex` | number | No | Index of the first result to return \(for pagination\) |
| `maxResults` | number | No | Maximum number of results to return \(1-40\) |
| `langRestrict` | string | No | Restrict results to a specific language \(ISO 639-1 code\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `totalItems` | number | Total number of matching results |
| `volumes` | array | List of matching volumes |
| ↳ `id` | string | Volume ID |
| ↳ `title` | string | Book title |
| ↳ `subtitle` | string | Book subtitle |
| ↳ `authors` | array | List of authors |
| ↳ `publisher` | string | Publisher name |
| ↳ `publishedDate` | string | Publication date |
| ↳ `description` | string | Book description |
| ↳ `pageCount` | number | Number of pages |
| ↳ `categories` | array | Book categories |
| ↳ `averageRating` | number | Average rating \(1-5\) |
| ↳ `ratingsCount` | number | Number of ratings |
| ↳ `language` | string | Language code |
| ↳ `previewLink` | string | Link to preview on Google Books |
| ↳ `infoLink` | string | Link to info page |
| ↳ `thumbnailUrl` | string | Book cover thumbnail URL |
| ↳ `isbn10` | string | ISBN-10 identifier |
| ↳ `isbn13` | string | ISBN-13 identifier |
### `google_books_volume_details`
Get detailed information about a specific book volume
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Google Books API key |
| `volumeId` | string | Yes | The ID of the volume to retrieve |
| `projection` | string | No | Projection level \(full, lite\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Volume ID |
| `title` | string | Book title |
| `subtitle` | string | Book subtitle |
| `authors` | array | List of authors |
| `publisher` | string | Publisher name |
| `publishedDate` | string | Publication date |
| `description` | string | Book description |
| `pageCount` | number | Number of pages |
| `categories` | array | Book categories |
| `averageRating` | number | Average rating \(1-5\) |
| `ratingsCount` | number | Number of ratings |
| `language` | string | Language code |
| `previewLink` | string | Link to preview on Google Books |
| `infoLink` | string | Link to info page |
| `thumbnailUrl` | string | Book cover thumbnail URL |
| `isbn10` | string | ISBN-10 identifier |
| `isbn13` | string | ISBN-13 identifier |
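Both tools above wrap the public Google Books API. A minimal sketch of the search call using the documented parameters follows; how the tool flattens `volumeInfo` into the output fields is assumed, not confirmed.

```ts
// Minimal sketch: query the Google Books volumes endpoint with the parameters
// documented in the input table above (q supports intitle:, inauthor:, isbn:, ...).
async function searchVolumes(apiKey: string, query: string, maxResults = 10) {
  const url = new URL('https://www.googleapis.com/books/v1/volumes')
  url.searchParams.set('q', query)
  url.searchParams.set('key', apiKey)
  url.searchParams.set('maxResults', String(maxResults)) // 1-40
  url.searchParams.set('orderBy', 'relevance')

  const res = await fetch(url)
  if (!res.ok) throw new Error(`Google Books API error: ${res.status}`)
  const data = await res.json()

  // data.totalItems and data.items[].volumeInfo (title, authors, averageRating, ...)
  // correspond to the totalItems/volumes output fields listed above.
  return data
}
```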

View File

@@ -33,6 +33,7 @@
"github",
"gitlab",
"gmail",
"google_books",
"google_calendar",
"google_docs",
"google_drive",

View File

@@ -42,9 +42,6 @@ Estos atajos cambian entre las pestañas del panel en el lado derecho del lienzo
| Atajo | Acción |
|----------|--------|
-| `C` | Enfocar pestaña Copilot |
-| `T` | Enfocar pestaña Barra de herramientas |
-| `E` | Enfocar pestaña Editor |
| `Mod` + `F` | Enfocar búsqueda de Barra de herramientas |
## Navegación global

View File

@@ -42,9 +42,6 @@ Ces raccourcis permettent de basculer entre les onglets du panneau sur le côté
| Raccourci | Action |
|----------|--------|
-| `C` | Activer l'onglet Copilot |
-| `T` | Activer l'onglet Barre d'outils |
-| `E` | Activer l'onglet Éditeur |
| `Mod` + `F` | Activer la recherche dans la barre d'outils |
## Navigation globale

View File

@@ -41,9 +41,6 @@ import { Callout } from 'fumadocs-ui/components/callout'
| ショートカット | 操作 |
|----------|--------|
-| `C` | Copilotタブにフォーカス |
-| `T` | Toolbarタブにフォーカス |
-| `E` | Editorタブにフォーカス |
| `Mod` + `F` | Toolbar検索にフォーカス |
## グローバルナビゲーション

View File

@@ -41,9 +41,6 @@ import { Callout } from 'fumadocs-ui/components/callout'
| 快捷键 | 操作 |
|----------|--------|
-| `C` | 聚焦 Copilot 标签页 |
-| `T` | 聚焦 Toolbar 标签页 |
-| `E` | 聚焦 Editor 标签页 |
| `Mod` + `F` | 聚焦 Toolbar 搜索 |
## 全局导航

View File

@@ -13,6 +13,7 @@ BETTER_AUTH_URL=http://localhost:3000
# NextJS (Required)
NEXT_PUBLIC_APP_URL=http://localhost:3000
# INTERNAL_API_BASE_URL=http://sim-app.default.svc.cluster.local:3000 # Optional: internal URL for server-side /api self-calls; defaults to NEXT_PUBLIC_APP_URL
# Security (Required)
ENCRYPTION_KEY=your_encryption_key # Use `openssl rand -hex 32` to generate, used to encrypt environment variables

View File

@@ -1,7 +1,7 @@
import type { Artifact, Message, PushNotificationConfig, Task, TaskState } from '@a2a-js/sdk'
import { v4 as uuidv4 } from 'uuid'
import { generateInternalToken } from '@/lib/auth/internal'
-import { getBaseUrl } from '@/lib/core/utils/urls'
+import { getInternalApiBaseUrl } from '@/lib/core/utils/urls'
/** A2A v0.3 JSON-RPC method names */
export const A2A_METHODS = {
@@ -118,7 +118,7 @@ export interface ExecuteRequestResult {
export async function buildExecuteRequest(
config: ExecuteRequestConfig
): Promise<ExecuteRequestResult> {
-const url = `${getBaseUrl()}/api/workflows/${config.workflowId}/execute`
+const url = `${getInternalApiBaseUrl()}/api/workflows/${config.workflowId}/execute`
const headers: Record<string, string> = { 'Content-Type': 'application/json' }
let useInternalAuth = false
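Based on the env comment added above (INTERNAL_API_BASE_URL is optional and defaults to NEXT_PUBLIC_APP_URL), the new helper plausibly looks something like the following. This is a guess at the behavior, not the actual code in lib/core/utils/urls.

```ts
// Hypothetical sketch of getInternalApiBaseUrl: server-side self-calls prefer a
// cluster-internal URL when configured, and fall back to the public app URL.
export function getInternalApiBaseUrl(): string {
  const internal = process.env.INTERNAL_API_BASE_URL
  if (internal && internal.trim().length > 0) {
    return internal.replace(/\/$/, '') // normalize: no trailing slash before path joins
  }
  return process.env.NEXT_PUBLIC_APP_URL ?? 'http://localhost:3000'
}
```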

View File

@@ -0,0 +1,187 @@
/**
* POST /api/attribution
*
* Automatic UTM-based referral attribution.
*
* Reads the `sim_utm` cookie (set by proxy on auth pages), matches a campaign
* by UTM specificity, and atomically inserts an attribution record + applies
* bonus credits.
*
* Idempotent — the unique constraint on `userId` prevents double-attribution.
*/
import { db } from '@sim/db'
import { referralAttribution, referralCampaigns, userStats } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import { nanoid } from 'nanoid'
import { cookies } from 'next/headers'
import { NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { applyBonusCredits } from '@/lib/billing/credits/bonus'
const logger = createLogger('AttributionAPI')
const COOKIE_NAME = 'sim_utm'
const UtmCookieSchema = z.object({
utm_source: z.string().optional(),
utm_medium: z.string().optional(),
utm_campaign: z.string().optional(),
utm_content: z.string().optional(),
referrer_url: z.string().optional(),
landing_page: z.string().optional(),
created_at: z.string().optional(),
})
/**
* Finds the most specific active campaign matching the given UTM params.
* Null fields on a campaign act as wildcards. Ties broken by newest campaign.
*/
async function findMatchingCampaign(utmData: z.infer<typeof UtmCookieSchema>) {
const campaigns = await db
.select()
.from(referralCampaigns)
.where(eq(referralCampaigns.isActive, true))
let bestMatch: (typeof campaigns)[number] | null = null
let bestScore = -1
for (const campaign of campaigns) {
let score = 0
let mismatch = false
const fields = [
{ campaignVal: campaign.utmSource, utmVal: utmData.utm_source },
{ campaignVal: campaign.utmMedium, utmVal: utmData.utm_medium },
{ campaignVal: campaign.utmCampaign, utmVal: utmData.utm_campaign },
{ campaignVal: campaign.utmContent, utmVal: utmData.utm_content },
] as const
for (const { campaignVal, utmVal } of fields) {
if (campaignVal === null) continue
if (campaignVal === utmVal) {
score++
} else {
mismatch = true
break
}
}
if (!mismatch && score > 0) {
if (
score > bestScore ||
(score === bestScore &&
bestMatch &&
campaign.createdAt.getTime() > bestMatch.createdAt.getTime())
) {
bestScore = score
bestMatch = campaign
}
}
}
return bestMatch
}
export async function POST() {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const cookieStore = await cookies()
const utmCookie = cookieStore.get(COOKIE_NAME)
if (!utmCookie?.value) {
return NextResponse.json({ attributed: false, reason: 'no_utm_cookie' })
}
let utmData: z.infer<typeof UtmCookieSchema>
try {
let decoded: string
try {
decoded = decodeURIComponent(utmCookie.value)
} catch {
decoded = utmCookie.value
}
utmData = UtmCookieSchema.parse(JSON.parse(decoded))
} catch {
logger.warn('Failed to parse UTM cookie', { userId: session.user.id })
cookieStore.delete(COOKIE_NAME)
return NextResponse.json({ attributed: false, reason: 'invalid_cookie' })
}
const matchedCampaign = await findMatchingCampaign(utmData)
if (!matchedCampaign) {
cookieStore.delete(COOKIE_NAME)
return NextResponse.json({ attributed: false, reason: 'no_matching_campaign' })
}
const bonusAmount = Number(matchedCampaign.bonusCreditAmount)
let attributed = false
await db.transaction(async (tx) => {
const [existingStats] = await tx
.select({ id: userStats.id })
.from(userStats)
.where(eq(userStats.userId, session.user.id))
.limit(1)
if (!existingStats) {
await tx.insert(userStats).values({
id: nanoid(),
userId: session.user.id,
})
}
const result = await tx
.insert(referralAttribution)
.values({
id: nanoid(),
userId: session.user.id,
campaignId: matchedCampaign.id,
utmSource: utmData.utm_source || null,
utmMedium: utmData.utm_medium || null,
utmCampaign: utmData.utm_campaign || null,
utmContent: utmData.utm_content || null,
referrerUrl: utmData.referrer_url || null,
landingPage: utmData.landing_page || null,
bonusCreditAmount: bonusAmount.toString(),
})
.onConflictDoNothing({ target: referralAttribution.userId })
.returning({ id: referralAttribution.id })
if (result.length > 0) {
await applyBonusCredits(session.user.id, bonusAmount, tx)
attributed = true
}
})
if (attributed) {
logger.info('Referral attribution created and bonus credits applied', {
userId: session.user.id,
campaignId: matchedCampaign.id,
campaignName: matchedCampaign.name,
utmSource: utmData.utm_source,
utmCampaign: utmData.utm_campaign,
utmContent: utmData.utm_content,
bonusAmount,
})
} else {
logger.info('User already attributed, skipping', { userId: session.user.id })
}
cookieStore.delete(COOKIE_NAME)
return NextResponse.json({
attributed,
bonusAmount: attributed ? bonusAmount : undefined,
reason: attributed ? undefined : 'already_attributed',
})
} catch (error) {
logger.error('Attribution error', { error })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
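To make the matching logic concrete, here is a hand-worked example of how `findMatchingCampaign` scores a cookie; the campaign values are invented for illustration.

```ts
// Illustrative only. Suppose an active campaign has utmSource = 'youtube',
// utmCampaign = 'creator-launch', and utmMedium/utmContent = null (wildcards).
const exampleCookie = {
  utm_source: 'youtube',
  utm_medium: 'video',
  utm_campaign: 'creator-launch',
  landing_page: '/signup',
}
// Scoring: utmSource matches (+1), utmCampaign matches (+1), null fields are skipped,
// so this campaign scores 2 with no mismatch. Ties between equal scores are broken by
// the newest campaign.createdAt, and the unique constraint on userId keeps the
// attribution idempotent even if POST /api/attribution runs twice.
```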

View File

@@ -4,20 +4,10 @@
* @vitest-environment node
*/
-import { loggerMock } from '@sim/testing'
+import { databaseMock, loggerMock } from '@sim/testing'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
-vi.mock('@sim/db', () => ({
-db: {
-select: vi.fn().mockReturnThis(),
-from: vi.fn().mockReturnThis(),
-where: vi.fn().mockReturnThis(),
-limit: vi.fn().mockReturnValue([]),
-update: vi.fn().mockReturnThis(),
-set: vi.fn().mockReturnThis(),
-orderBy: vi.fn().mockReturnThis(),
-},
-}))
+vi.mock('@sim/db', () => databaseMock)
vi.mock('@/lib/oauth/oauth', () => ({
refreshOAuthToken: vi.fn(),
@@ -34,13 +24,36 @@ import {
refreshTokenIfNeeded,
} from '@/app/api/auth/oauth/utils'
-const mockDbTyped = db as any
+const mockDb = db as any
const mockRefreshOAuthToken = refreshOAuthToken as any
/**
* Creates a chainable mock for db.select() calls.
* Returns a nested chain: select() -> from() -> where() -> limit() / orderBy()
*/
function mockSelectChain(limitResult: unknown[]) {
const mockLimit = vi.fn().mockReturnValue(limitResult)
const mockOrderBy = vi.fn().mockReturnValue(limitResult)
const mockWhere = vi.fn().mockReturnValue({ limit: mockLimit, orderBy: mockOrderBy })
const mockFrom = vi.fn().mockReturnValue({ where: mockWhere })
mockDb.select.mockReturnValueOnce({ from: mockFrom })
return { mockFrom, mockWhere, mockLimit }
}
/**
* Creates a chainable mock for db.update() calls.
* Returns a nested chain: update() -> set() -> where()
*/
function mockUpdateChain() {
const mockWhere = vi.fn().mockResolvedValue({})
const mockSet = vi.fn().mockReturnValue({ where: mockWhere })
mockDb.update.mockReturnValueOnce({ set: mockSet })
return { mockSet, mockWhere }
}
describe('OAuth Utils', () => {
beforeEach(() => {
vi.clearAllMocks()
-mockDbTyped.limit.mockReturnValue([])
})
afterEach(() => {
@@ -50,20 +63,20 @@ describe('OAuth Utils', () => {
describe('getCredential', () => {
it('should return credential when found', async () => {
const mockCredential = { id: 'credential-id', userId: 'test-user-id' }
-mockDbTyped.limit.mockReturnValueOnce([mockCredential])
+const { mockFrom, mockWhere, mockLimit } = mockSelectChain([mockCredential])
const credential = await getCredential('request-id', 'credential-id', 'test-user-id')
-expect(mockDbTyped.select).toHaveBeenCalled()
-expect(mockDbTyped.from).toHaveBeenCalled()
-expect(mockDbTyped.where).toHaveBeenCalled()
-expect(mockDbTyped.limit).toHaveBeenCalledWith(1)
+expect(mockDb.select).toHaveBeenCalled()
+expect(mockFrom).toHaveBeenCalled()
+expect(mockWhere).toHaveBeenCalled()
+expect(mockLimit).toHaveBeenCalledWith(1)
expect(credential).toEqual(mockCredential)
})
it('should return undefined when credential is not found', async () => {
-mockDbTyped.limit.mockReturnValueOnce([])
+mockSelectChain([])
const credential = await getCredential('request-id', 'nonexistent-id', 'test-user-id')
@@ -102,11 +115,12 @@ describe('OAuth Utils', () => {
refreshToken: 'new-refresh-token',
})
+mockUpdateChain()
const result = await refreshTokenIfNeeded('request-id', mockCredential, 'credential-id')
expect(mockRefreshOAuthToken).toHaveBeenCalledWith('google', 'refresh-token')
-expect(mockDbTyped.update).toHaveBeenCalled()
-expect(mockDbTyped.set).toHaveBeenCalled()
+expect(mockDb.update).toHaveBeenCalled()
expect(result).toEqual({ accessToken: 'new-token', refreshed: true })
})
@@ -152,7 +166,7 @@ describe('OAuth Utils', () => {
providerId: 'google',
userId: 'test-user-id',
}
-mockDbTyped.limit.mockReturnValueOnce([mockCredential])
+mockSelectChain([mockCredential])
const token = await refreshAccessTokenIfNeeded('credential-id', 'test-user-id', 'request-id')
@@ -169,7 +183,8 @@ describe('OAuth Utils', () => {
providerId: 'google',
userId: 'test-user-id',
}
-mockDbTyped.limit.mockReturnValueOnce([mockCredential])
+mockSelectChain([mockCredential])
+mockUpdateChain()
mockRefreshOAuthToken.mockResolvedValueOnce({
accessToken: 'new-token',
@@ -180,13 +195,12 @@ describe('OAuth Utils', () => {
const token = await refreshAccessTokenIfNeeded('credential-id', 'test-user-id', 'request-id')
expect(mockRefreshOAuthToken).toHaveBeenCalledWith('google', 'refresh-token')
-expect(mockDbTyped.update).toHaveBeenCalled()
-expect(mockDbTyped.set).toHaveBeenCalled()
+expect(mockDb.update).toHaveBeenCalled()
expect(token).toBe('new-token')
})
it('should return null if credential not found', async () => {
-mockDbTyped.limit.mockReturnValueOnce([])
+mockSelectChain([])
const token = await refreshAccessTokenIfNeeded('nonexistent-id', 'test-user-id', 'request-id')
@@ -202,7 +216,7 @@ describe('OAuth Utils', () => {
providerId: 'google',
userId: 'test-user-id',
}
-mockDbTyped.limit.mockReturnValueOnce([mockCredential])
+mockSelectChain([mockCredential])
mockRefreshOAuthToken.mockResolvedValueOnce(null)

View File

@@ -85,7 +85,7 @@ const ChatMessageSchema = z.object({
chatId: z.string().optional(),
workflowId: z.string().optional(),
workflowName: z.string().optional(),
-model: z.string().optional().default('claude-opus-4-6'),
+model: z.string().optional().default('claude-opus-4-5'),
mode: z.enum(COPILOT_REQUEST_MODES).optional().default('agent'),
prefetch: z.boolean().optional(),
createNewChat: z.boolean().optional().default(false),
@@ -113,6 +113,7 @@ const ChatMessageSchema = z.object({
workflowId: z.string().optional(),
knowledgeId: z.string().optional(),
blockId: z.string().optional(),
+blockIds: z.array(z.string()).optional(),
templateId: z.string().optional(),
executionId: z.string().optional(),
// For workflow_block, provide both workflowId and blockId
@@ -159,6 +160,20 @@ export async function POST(req: NextRequest) {
commands,
} = ChatMessageSchema.parse(body)
+const normalizedContexts = Array.isArray(contexts)
+? contexts.map((ctx) => {
+if (ctx.kind !== 'blocks') return ctx
+if (Array.isArray(ctx.blockIds) && ctx.blockIds.length > 0) return ctx
+if (ctx.blockId) {
+return {
+...ctx,
+blockIds: [ctx.blockId],
+}
+}
+return ctx
+})
+: contexts
// Resolve workflowId - if not provided, use first workflow or find by name
const resolved = await resolveWorkflowIdForUser(
authenticatedUserId,
@@ -176,10 +191,10 @@ export async function POST(req: NextRequest) {
const userMessageIdToUse = userMessageId || crypto.randomUUID()
try {
logger.info(`[${tracker.requestId}] Received chat POST`, {
-hasContexts: Array.isArray(contexts),
-contextsCount: Array.isArray(contexts) ? contexts.length : 0,
-contextsPreview: Array.isArray(contexts)
-? contexts.map((c: any) => ({
+hasContexts: Array.isArray(normalizedContexts),
+contextsCount: Array.isArray(normalizedContexts) ? normalizedContexts.length : 0,
+contextsPreview: Array.isArray(normalizedContexts)
+? normalizedContexts.map((c: any) => ({
kind: c?.kind,
chatId: c?.chatId,
workflowId: c?.workflowId,
@@ -191,17 +206,25 @@ export async function POST(req: NextRequest) {
} catch {}
// Preprocess contexts server-side
let agentContexts: Array<{ type: string; content: string }> = []
if (Array.isArray(contexts) && contexts.length > 0) {
if (Array.isArray(normalizedContexts) && normalizedContexts.length > 0) {
try {
const { processContextsServer } = await import('@/lib/copilot/process-contents')
const processed = await processContextsServer(contexts as any, authenticatedUserId, message)
const processed = await processContextsServer(
normalizedContexts as any,
authenticatedUserId,
message
)
agentContexts = processed
logger.info(`[${tracker.requestId}] Contexts processed for request`, {
processedCount: agentContexts.length,
kinds: agentContexts.map((c) => c.type),
lengthPreview: agentContexts.map((c) => c.content?.length ?? 0),
})
if (Array.isArray(contexts) && contexts.length > 0 && agentContexts.length === 0) {
if (
Array.isArray(normalizedContexts) &&
normalizedContexts.length > 0 &&
agentContexts.length === 0
) {
logger.warn(
`[${tracker.requestId}] Contexts provided but none processed. Check executionId for logs contexts.`
)
@@ -215,7 +238,7 @@ export async function POST(req: NextRequest) {
let currentChat: any = null
let conversationHistory: any[] = []
let actualChatId = chatId
const selectedModel = model || 'claude-opus-4-6'
const selectedModel = model || 'claude-opus-4-5'
if (chatId || createNewChat) {
const chatResult = await resolveOrCreateChat({
@@ -246,11 +269,13 @@ export async function POST(req: NextRequest) {
mode,
model: selectedModel,
provider,
conversationId: effectiveConversationId,
conversationHistory,
contexts: agentContexts,
fileAttachments,
commands,
chatId: actualChatId,
prefetch,
implicitFeedback,
},
{
@@ -432,10 +457,15 @@ export async function POST(req: NextRequest) {
content: message,
timestamp: new Date().toISOString(),
...(fileAttachments && fileAttachments.length > 0 && { fileAttachments }),
...(Array.isArray(contexts) && contexts.length > 0 && { contexts }),
...(Array.isArray(contexts) &&
contexts.length > 0 && {
contentBlocks: [{ type: 'contexts', contexts: contexts as any, timestamp: Date.now() }],
...(Array.isArray(normalizedContexts) &&
normalizedContexts.length > 0 && {
contexts: normalizedContexts,
}),
...(Array.isArray(normalizedContexts) &&
normalizedContexts.length > 0 && {
contentBlocks: [
{ type: 'contexts', contexts: normalizedContexts as any, timestamp: Date.now() },
],
}),
}
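
For context, the normalization above is a small backward-compatibility shim: a legacy `blocks` context that only carries `blockId` is widened into the new `blockIds` array before processing. A minimal sketch of the same transform in isolation (the `BlockContext` shape here is simplified for illustration; the real context union is broader):

// Sketch only: mirrors the normalization in the route above for a single context entry.
interface BlockContext {
  kind: string
  blockId?: string
  blockIds?: string[]
}

function normalizeBlockContext(ctx: BlockContext): BlockContext {
  if (ctx.kind !== 'blocks') return ctx
  if (Array.isArray(ctx.blockIds) && ctx.blockIds.length > 0) return ctx
  if (ctx.blockId) return { ...ctx, blockIds: [ctx.blockId] }
  return ctx
}

// e.g. { kind: 'blocks', blockId: 'b1' } -> { kind: 'blocks', blockId: 'b1', blockIds: ['b1'] }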

View File

@@ -18,9 +18,9 @@ describe('Copilot Checkpoints Revert API Route', () => {
setupCommonApiMocks()
mockCryptoUuid()
// Mock getBaseUrl to return localhost for tests
vi.doMock('@/lib/core/utils/urls', () => ({
getBaseUrl: vi.fn(() => 'http://localhost:3000'),
getInternalApiBaseUrl: vi.fn(() => 'http://localhost:3000'),
getBaseDomain: vi.fn(() => 'localhost:3000'),
getEmailDomain: vi.fn(() => 'localhost:3000'),
}))

View File

@@ -11,7 +11,7 @@ import {
createRequestTracker,
createUnauthorizedResponse,
} from '@/lib/copilot/request-helpers'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { getInternalApiBaseUrl } from '@/lib/core/utils/urls'
import { authorizeWorkflowByWorkspacePermission } from '@/lib/workflows/utils'
import { isUuidV4 } from '@/executor/constants'
@@ -99,7 +99,7 @@ export async function POST(request: NextRequest) {
}
const stateResponse = await fetch(
`${getBaseUrl()}/api/workflows/${checkpoint.workflowId}/state`,
`${getInternalApiBaseUrl()}/api/workflows/${checkpoint.workflowId}/state`,
{
method: 'PUT',
headers: {

View File

@@ -4,16 +4,12 @@
*
* @vitest-environment node
*/
import { createEnvMock, createMockLogger } from '@sim/testing'
import { createEnvMock, databaseMock, loggerMock } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
const loggerMock = vi.hoisted(() => ({
createLogger: () => createMockLogger(),
}))
vi.mock('drizzle-orm')
vi.mock('@sim/logger', () => loggerMock)
vi.mock('@sim/db')
vi.mock('@sim/db', () => databaseMock)
vi.mock('@/lib/knowledge/documents/utils', () => ({
retryWithExponentialBackoff: (fn: any) => fn(),
}))

View File

@@ -38,7 +38,7 @@ import {
const logger = createLogger('CopilotMcpAPI')
const mcpRateLimiter = new RateLimiter()
const DEFAULT_COPILOT_MODEL = 'claude-opus-4-6'
const DEFAULT_COPILOT_MODEL = 'claude-opus-4-5'
export const dynamic = 'force-dynamic'
export const runtime = 'nodejs'

View File

@@ -72,6 +72,7 @@ describe('MCP Serve Route', () => {
}))
vi.doMock('@/lib/core/utils/urls', () => ({
getBaseUrl: () => 'http://localhost:3000',
getInternalApiBaseUrl: () => 'http://localhost:3000',
}))
vi.doMock('@/lib/core/execution-limits', () => ({
getMaxExecutionTimeout: () => 10_000,

View File

@@ -22,7 +22,7 @@ import { type NextRequest, NextResponse } from 'next/server'
import { type AuthResult, checkHybridAuth } from '@/lib/auth/hybrid'
import { generateInternalToken } from '@/lib/auth/internal'
import { getMaxExecutionTimeout } from '@/lib/core/execution-limits'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { getInternalApiBaseUrl } from '@/lib/core/utils/urls'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
const logger = createLogger('WorkflowMcpServeAPI')
@@ -285,7 +285,7 @@ async function handleToolsCall(
)
}
const executeUrl = `${getBaseUrl()}/api/workflows/${tool.workflowId}/execute`
const executeUrl = `${getInternalApiBaseUrl()}/api/workflows/${tool.workflowId}/execute`
const headers: Record<string, string> = { 'Content-Type': 'application/json' }
if (publicServerOwnerId) {

View File

@@ -0,0 +1,170 @@
/**
* POST /api/referral-code/redeem
*
* Redeem a referral/promo code to receive bonus credits.
*
* Body:
* - code: string — The referral code to redeem
*
* Response: { redeemed: boolean, bonusAmount?: number, error?: string }
*
* Constraints:
* - Enterprise users cannot redeem codes
* - One redemption per user, ever (unique constraint on userId)
* - One redemption per organization for team users (partial unique on organizationId)
*/
import { db } from '@sim/db'
import { referralAttribution, referralCampaigns, userStats } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { nanoid } from 'nanoid'
import { NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { getHighestPrioritySubscription } from '@/lib/billing/core/subscription'
import { applyBonusCredits } from '@/lib/billing/credits/bonus'
const logger = createLogger('ReferralCodeRedemption')
const RedeemCodeSchema = z.object({
code: z.string().min(1, 'Code is required'),
})
export async function POST(request: Request) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const body = await request.json()
const { code } = RedeemCodeSchema.parse(body)
const subscription = await getHighestPrioritySubscription(session.user.id)
if (subscription?.plan === 'enterprise') {
return NextResponse.json({
redeemed: false,
error: 'Enterprise accounts cannot redeem referral codes',
})
}
const isTeam = subscription?.plan === 'team'
const orgId = isTeam ? subscription.referenceId : null
const normalizedCode = code.trim().toUpperCase()
const [campaign] = await db
.select()
.from(referralCampaigns)
.where(and(eq(referralCampaigns.code, normalizedCode), eq(referralCampaigns.isActive, true)))
.limit(1)
if (!campaign) {
logger.info('Invalid code redemption attempt', {
userId: session.user.id,
code: normalizedCode,
})
return NextResponse.json({ error: 'Invalid or expired code' }, { status: 404 })
}
const [existingUserAttribution] = await db
.select({ id: referralAttribution.id })
.from(referralAttribution)
.where(eq(referralAttribution.userId, session.user.id))
.limit(1)
if (existingUserAttribution) {
return NextResponse.json({
redeemed: false,
error: 'You have already redeemed a code',
})
}
if (orgId) {
const [existingOrgAttribution] = await db
.select({ id: referralAttribution.id })
.from(referralAttribution)
.where(eq(referralAttribution.organizationId, orgId))
.limit(1)
if (existingOrgAttribution) {
return NextResponse.json({
redeemed: false,
error: 'A code has already been redeemed for your organization',
})
}
}
const bonusAmount = Number(campaign.bonusCreditAmount)
let redeemed = false
await db.transaction(async (tx) => {
const [existingStats] = await tx
.select({ id: userStats.id })
.from(userStats)
.where(eq(userStats.userId, session.user.id))
.limit(1)
if (!existingStats) {
await tx.insert(userStats).values({
id: nanoid(),
userId: session.user.id,
})
}
const result = await tx
.insert(referralAttribution)
.values({
id: nanoid(),
userId: session.user.id,
organizationId: orgId,
campaignId: campaign.id,
utmSource: null,
utmMedium: null,
utmCampaign: null,
utmContent: null,
referrerUrl: null,
landingPage: null,
bonusCreditAmount: bonusAmount.toString(),
})
.onConflictDoNothing()
.returning({ id: referralAttribution.id })
if (result.length > 0) {
await applyBonusCredits(session.user.id, bonusAmount, tx)
redeemed = true
}
})
if (redeemed) {
logger.info('Referral code redeemed', {
userId: session.user.id,
organizationId: orgId,
code: normalizedCode,
campaignId: campaign.id,
campaignName: campaign.name,
bonusAmount,
})
}
if (!redeemed) {
return NextResponse.json({
redeemed: false,
error: 'You have already redeemed a code',
})
}
return NextResponse.json({
redeemed: true,
bonusAmount,
})
} catch (error) {
if (error instanceof z.ZodError) {
return NextResponse.json({ error: error.errors[0].message }, { status: 400 })
}
logger.error('Referral code redemption error', { error })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
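
A minimal client-side sketch of redeeming a code against the route documented above, assuming a same-origin session cookie is already present; the response shape follows the doc comment:

// Sketch only: calls POST /api/referral-code/redeem from the browser.
interface RedeemResponse {
  redeemed?: boolean
  bonusAmount?: number
  error?: string
}

async function redeemReferralCode(code: string): Promise<RedeemResponse> {
  const res = await fetch('/api/referral-code/redeem', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ code }),
  })
  return (await res.json()) as RedeemResponse
}

// Hypothetical result: await redeemReferralCode('LAUNCH2026') -> { redeemed: true, bonusAmount: 20 }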

View File

@@ -3,17 +3,14 @@
*
* @vitest-environment node
*/
import { loggerMock } from '@sim/testing'
import { databaseMock, loggerMock } from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
const { mockGetSession, mockAuthorizeWorkflowByWorkspacePermission, mockDbSelect, mockDbUpdate } =
vi.hoisted(() => ({
mockGetSession: vi.fn(),
mockAuthorizeWorkflowByWorkspacePermission: vi.fn(),
mockDbSelect: vi.fn(),
mockDbUpdate: vi.fn(),
}))
const { mockGetSession, mockAuthorizeWorkflowByWorkspacePermission } = vi.hoisted(() => ({
mockGetSession: vi.fn(),
mockAuthorizeWorkflowByWorkspacePermission: vi.fn(),
}))
vi.mock('@/lib/auth', () => ({
getSession: mockGetSession,
@@ -23,12 +20,7 @@ vi.mock('@/lib/workflows/utils', () => ({
authorizeWorkflowByWorkspacePermission: mockAuthorizeWorkflowByWorkspacePermission,
}))
vi.mock('@sim/db', () => ({
db: {
select: mockDbSelect,
update: mockDbUpdate,
},
}))
vi.mock('@sim/db', () => databaseMock)
vi.mock('@sim/db/schema', () => ({
workflow: { id: 'id', userId: 'userId', workspaceId: 'workspaceId' },
@@ -59,6 +51,9 @@ function createParams(id: string): { params: Promise<{ id: string }> } {
return { params: Promise.resolve({ id }) }
}
const mockDbSelect = databaseMock.db.select as ReturnType<typeof vi.fn>
const mockDbUpdate = databaseMock.db.update as ReturnType<typeof vi.fn>
function mockDbChain(selectResults: unknown[][]) {
let selectCallIndex = 0
mockDbSelect.mockImplementation(() => ({

View File

@@ -3,17 +3,14 @@
*
* @vitest-environment node
*/
import { loggerMock } from '@sim/testing'
import { databaseMock, loggerMock } from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
const { mockGetSession, mockAuthorizeWorkflowByWorkspacePermission, mockDbSelect } = vi.hoisted(
() => ({
mockGetSession: vi.fn(),
mockAuthorizeWorkflowByWorkspacePermission: vi.fn(),
mockDbSelect: vi.fn(),
})
)
const { mockGetSession, mockAuthorizeWorkflowByWorkspacePermission } = vi.hoisted(() => ({
mockGetSession: vi.fn(),
mockAuthorizeWorkflowByWorkspacePermission: vi.fn(),
}))
vi.mock('@/lib/auth', () => ({
getSession: mockGetSession,
@@ -23,11 +20,7 @@ vi.mock('@/lib/workflows/utils', () => ({
authorizeWorkflowByWorkspacePermission: mockAuthorizeWorkflowByWorkspacePermission,
}))
vi.mock('@sim/db', () => ({
db: {
select: mockDbSelect,
},
}))
vi.mock('@sim/db', () => databaseMock)
vi.mock('@sim/db/schema', () => ({
workflow: { id: 'id', userId: 'userId', workspaceId: 'workspaceId' },
@@ -62,6 +55,8 @@ function createRequest(url: string): NextRequest {
return new NextRequest(new URL(url), { method: 'GET' })
}
const mockDbSelect = databaseMock.db.select as ReturnType<typeof vi.fn>
function mockDbChain(results: any[]) {
let callIndex = 0
mockDbSelect.mockImplementation(() => ({

View File

@@ -6,7 +6,7 @@ import { type NextRequest, NextResponse } from 'next/server'
import { v4 as uuidv4 } from 'uuid'
import { getSession } from '@/lib/auth'
import { generateRequestId } from '@/lib/core/utils/request'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { getInternalApiBaseUrl } from '@/lib/core/utils/urls'
import {
type RegenerateStateInput,
regenerateWorkflowStateIds,
@@ -115,15 +115,18 @@ export async function POST(request: NextRequest, { params }: { params: Promise<{
// Step 3: Save the workflow state using the existing state endpoint (like imports do)
// Ensure variables in state are remapped for the new workflow as well
const workflowStateWithVariables = { ...workflowState, variables: remappedVariables }
const stateResponse = await fetch(`${getBaseUrl()}/api/workflows/${newWorkflowId}/state`, {
method: 'PUT',
headers: {
'Content-Type': 'application/json',
// Forward the session cookie for authentication
cookie: request.headers.get('cookie') || '',
},
body: JSON.stringify(workflowStateWithVariables),
})
const stateResponse = await fetch(
`${getInternalApiBaseUrl()}/api/workflows/${newWorkflowId}/state`,
{
method: 'PUT',
headers: {
'Content-Type': 'application/json',
// Forward the session cookie for authentication
cookie: request.headers.get('cookie') || '',
},
body: JSON.stringify(workflowStateWithVariables),
}
)
if (!stateResponse.ok) {
logger.error(`[${requestId}] Failed to save workflow state for template use`)

View File

@@ -191,3 +191,84 @@ export async function GET(request: NextRequest) {
)
}
}
// Delete a label from a page
export async function DELETE(request: NextRequest) {
try {
const auth = await checkSessionOrInternalAuth(request)
if (!auth.success || !auth.userId) {
return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
}
const {
domain,
accessToken,
cloudId: providedCloudId,
pageId,
labelName,
} = await request.json()
if (!domain) {
return NextResponse.json({ error: 'Domain is required' }, { status: 400 })
}
if (!accessToken) {
return NextResponse.json({ error: 'Access token is required' }, { status: 400 })
}
if (!pageId) {
return NextResponse.json({ error: 'Page ID is required' }, { status: 400 })
}
if (!labelName) {
return NextResponse.json({ error: 'Label name is required' }, { status: 400 })
}
const pageIdValidation = validateAlphanumericId(pageId, 'pageId', 255)
if (!pageIdValidation.isValid) {
return NextResponse.json({ error: pageIdValidation.error }, { status: 400 })
}
const cloudId = providedCloudId || (await getConfluenceCloudId(domain, accessToken))
const cloudIdValidation = validateJiraCloudId(cloudId, 'cloudId')
if (!cloudIdValidation.isValid) {
return NextResponse.json({ error: cloudIdValidation.error }, { status: 400 })
}
const encodedLabel = encodeURIComponent(labelName.trim())
const url = `https://api.atlassian.com/ex/confluence/${cloudId}/wiki/rest/api/content/${pageId}/label?name=${encodedLabel}`
const response = await fetch(url, {
method: 'DELETE',
headers: {
Accept: 'application/json',
Authorization: `Bearer ${accessToken}`,
},
})
if (!response.ok) {
const errorData = await response.json().catch(() => null)
logger.error('Confluence API error response:', {
status: response.status,
statusText: response.statusText,
error: JSON.stringify(errorData, null, 2),
})
const errorMessage =
errorData?.message || `Failed to delete Confluence label (${response.status})`
return NextResponse.json({ error: errorMessage }, { status: response.status })
}
return NextResponse.json({
pageId,
labelName,
deleted: true,
})
} catch (error) {
logger.error('Error deleting Confluence label:', error)
return NextResponse.json(
{ error: (error as Error).message || 'Internal server error' },
{ status: 500 }
)
}
}

View File

@@ -0,0 +1,103 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { validateAlphanumericId, validateJiraCloudId } from '@/lib/core/security/input-validation'
import { getConfluenceCloudId } from '@/tools/confluence/utils'
const logger = createLogger('ConfluencePagesByLabelAPI')
export const dynamic = 'force-dynamic'
export async function GET(request: NextRequest) {
try {
const auth = await checkSessionOrInternalAuth(request)
if (!auth.success || !auth.userId) {
return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
}
const { searchParams } = new URL(request.url)
const domain = searchParams.get('domain')
const accessToken = searchParams.get('accessToken')
const labelId = searchParams.get('labelId')
const providedCloudId = searchParams.get('cloudId')
const limit = searchParams.get('limit') || '50'
const cursor = searchParams.get('cursor')
if (!domain) {
return NextResponse.json({ error: 'Domain is required' }, { status: 400 })
}
if (!accessToken) {
return NextResponse.json({ error: 'Access token is required' }, { status: 400 })
}
if (!labelId) {
return NextResponse.json({ error: 'Label ID is required' }, { status: 400 })
}
const labelIdValidation = validateAlphanumericId(labelId, 'labelId', 255)
if (!labelIdValidation.isValid) {
return NextResponse.json({ error: labelIdValidation.error }, { status: 400 })
}
const cloudId = providedCloudId || (await getConfluenceCloudId(domain, accessToken))
const cloudIdValidation = validateJiraCloudId(cloudId, 'cloudId')
if (!cloudIdValidation.isValid) {
return NextResponse.json({ error: cloudIdValidation.error }, { status: 400 })
}
const queryParams = new URLSearchParams()
queryParams.append('limit', String(Math.min(Number(limit), 250)))
if (cursor) {
queryParams.append('cursor', cursor)
}
const url = `https://api.atlassian.com/ex/confluence/${cloudId}/wiki/api/v2/labels/${labelId}/pages?${queryParams.toString()}`
const response = await fetch(url, {
method: 'GET',
headers: {
Accept: 'application/json',
Authorization: `Bearer ${accessToken}`,
},
})
if (!response.ok) {
const errorData = await response.json().catch(() => null)
logger.error('Confluence API error response:', {
status: response.status,
statusText: response.statusText,
error: JSON.stringify(errorData, null, 2),
})
const errorMessage = errorData?.message || `Failed to get pages by label (${response.status})`
return NextResponse.json({ error: errorMessage }, { status: response.status })
}
const data = await response.json()
const pages = (data.results || []).map((page: any) => ({
id: page.id,
title: page.title,
status: page.status ?? null,
spaceId: page.spaceId ?? null,
parentId: page.parentId ?? null,
authorId: page.authorId ?? null,
createdAt: page.createdAt ?? null,
version: page.version ?? null,
}))
return NextResponse.json({
pages,
labelId,
nextCursor: data._links?.next
? new URL(data._links.next, 'https://placeholder').searchParams.get('cursor')
: null,
})
} catch (error) {
logger.error('Error getting pages by label:', error)
return NextResponse.json(
{ error: (error as Error).message || 'Internal server error' },
{ status: 500 }
)
}
}

View File

@@ -0,0 +1,98 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { validateAlphanumericId, validateJiraCloudId } from '@/lib/core/security/input-validation'
import { getConfluenceCloudId } from '@/tools/confluence/utils'
const logger = createLogger('ConfluenceSpaceLabelsAPI')
export const dynamic = 'force-dynamic'
export async function GET(request: NextRequest) {
try {
const auth = await checkSessionOrInternalAuth(request)
if (!auth.success || !auth.userId) {
return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
}
const { searchParams } = new URL(request.url)
const domain = searchParams.get('domain')
const accessToken = searchParams.get('accessToken')
const spaceId = searchParams.get('spaceId')
const providedCloudId = searchParams.get('cloudId')
const limit = searchParams.get('limit') || '25'
const cursor = searchParams.get('cursor')
if (!domain) {
return NextResponse.json({ error: 'Domain is required' }, { status: 400 })
}
if (!accessToken) {
return NextResponse.json({ error: 'Access token is required' }, { status: 400 })
}
if (!spaceId) {
return NextResponse.json({ error: 'Space ID is required' }, { status: 400 })
}
const spaceIdValidation = validateAlphanumericId(spaceId, 'spaceId', 255)
if (!spaceIdValidation.isValid) {
return NextResponse.json({ error: spaceIdValidation.error }, { status: 400 })
}
const cloudId = providedCloudId || (await getConfluenceCloudId(domain, accessToken))
const cloudIdValidation = validateJiraCloudId(cloudId, 'cloudId')
if (!cloudIdValidation.isValid) {
return NextResponse.json({ error: cloudIdValidation.error }, { status: 400 })
}
const queryParams = new URLSearchParams()
queryParams.append('limit', String(Math.min(Number(limit), 250)))
if (cursor) {
queryParams.append('cursor', cursor)
}
const url = `https://api.atlassian.com/ex/confluence/${cloudId}/wiki/api/v2/spaces/${spaceId}/labels?${queryParams.toString()}`
const response = await fetch(url, {
method: 'GET',
headers: {
Accept: 'application/json',
Authorization: `Bearer ${accessToken}`,
},
})
if (!response.ok) {
const errorData = await response.json().catch(() => null)
logger.error('Confluence API error response:', {
status: response.status,
statusText: response.statusText,
error: JSON.stringify(errorData, null, 2),
})
const errorMessage = errorData?.message || `Failed to list space labels (${response.status})`
return NextResponse.json({ error: errorMessage }, { status: response.status })
}
const data = await response.json()
const labels = (data.results || []).map((label: any) => ({
id: label.id,
name: label.name,
prefix: label.prefix || 'global',
}))
return NextResponse.json({
labels,
spaceId,
nextCursor: data._links?.next
? new URL(data._links.next, 'https://placeholder').searchParams.get('cursor')
: null,
})
} catch (error) {
logger.error('Error listing space labels:', error)
return NextResponse.json(
{ error: (error as Error).message || 'Internal server error' },
{ status: 500 }
)
}
}

View File

@@ -66,6 +66,12 @@
* Credits:
* POST /api/v1/admin/credits - Issue credits to user (by userId or email)
*
* Referral Campaigns:
* GET /api/v1/admin/referral-campaigns - List campaigns (?active=true/false)
* POST /api/v1/admin/referral-campaigns - Create campaign
* GET /api/v1/admin/referral-campaigns/:id - Get campaign details
* PATCH /api/v1/admin/referral-campaigns/:id - Update campaign fields
*
* Access Control (Permission Groups):
* GET /api/v1/admin/access-control - List permission groups (?organizationId=X)
* DELETE /api/v1/admin/access-control - Delete permission groups for org (?organizationId=X)
@@ -97,6 +103,7 @@ export type {
AdminOrganization,
AdminOrganizationBillingSummary,
AdminOrganizationDetail,
AdminReferralCampaign,
AdminSeatAnalytics,
AdminSingleResponse,
AdminSubscription,
@@ -111,6 +118,7 @@ export type {
AdminWorkspaceMember,
DbMember,
DbOrganization,
DbReferralCampaign,
DbSubscription,
DbUser,
DbUserStats,
@@ -139,6 +147,7 @@ export {
parseWorkflowVariables,
toAdminFolder,
toAdminOrganization,
toAdminReferralCampaign,
toAdminSubscription,
toAdminUser,
toAdminWorkflow,

View File

@@ -0,0 +1,142 @@
/**
* GET /api/v1/admin/referral-campaigns/:id
*
* Get a single referral campaign by ID.
*
* PATCH /api/v1/admin/referral-campaigns/:id
*
* Update campaign fields. All fields are optional.
*
* Body:
* - name: string (non-empty) - Campaign name
* - bonusCreditAmount: number (> 0) - Bonus credits in dollars
* - isActive: boolean - Enable/disable the campaign
* - code: string | null (min 6 chars, auto-uppercased, null to remove) - Redeemable code
* - utmSource: string | null - UTM source match (null = wildcard)
* - utmMedium: string | null - UTM medium match (null = wildcard)
* - utmCampaign: string | null - UTM campaign match (null = wildcard)
* - utmContent: string | null - UTM content match (null = wildcard)
*/
import { db } from '@sim/db'
import { referralCampaigns } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { withAdminAuthParams } from '@/app/api/v1/admin/middleware'
import {
badRequestResponse,
internalErrorResponse,
notFoundResponse,
singleResponse,
} from '@/app/api/v1/admin/responses'
import { toAdminReferralCampaign } from '@/app/api/v1/admin/types'
const logger = createLogger('AdminReferralCampaignDetailAPI')
interface RouteParams {
id: string
}
export const GET = withAdminAuthParams<RouteParams>(async (_, context) => {
try {
const { id: campaignId } = await context.params
const [campaign] = await db
.select()
.from(referralCampaigns)
.where(eq(referralCampaigns.id, campaignId))
.limit(1)
if (!campaign) {
return notFoundResponse('Campaign')
}
logger.info(`Admin API: Retrieved referral campaign ${campaignId}`)
return singleResponse(toAdminReferralCampaign(campaign, getBaseUrl()))
} catch (error) {
logger.error('Admin API: Failed to get referral campaign', { error })
return internalErrorResponse('Failed to get referral campaign')
}
})
export const PATCH = withAdminAuthParams<RouteParams>(async (request, context) => {
try {
const { id: campaignId } = await context.params
const body = await request.json()
const [existing] = await db
.select()
.from(referralCampaigns)
.where(eq(referralCampaigns.id, campaignId))
.limit(1)
if (!existing) {
return notFoundResponse('Campaign')
}
const updateData: Record<string, unknown> = { updatedAt: new Date() }
if (body.name !== undefined) {
if (typeof body.name !== 'string' || body.name.trim().length === 0) {
return badRequestResponse('name must be a non-empty string')
}
updateData.name = body.name.trim()
}
if (body.bonusCreditAmount !== undefined) {
if (
typeof body.bonusCreditAmount !== 'number' ||
!Number.isFinite(body.bonusCreditAmount) ||
body.bonusCreditAmount <= 0
) {
return badRequestResponse('bonusCreditAmount must be a positive number')
}
updateData.bonusCreditAmount = body.bonusCreditAmount.toString()
}
if (body.isActive !== undefined) {
if (typeof body.isActive !== 'boolean') {
return badRequestResponse('isActive must be a boolean')
}
updateData.isActive = body.isActive
}
if (body.code !== undefined) {
if (body.code !== null) {
if (typeof body.code !== 'string') {
return badRequestResponse('code must be a string or null')
}
if (body.code.trim().length < 6) {
return badRequestResponse('code must be at least 6 characters')
}
}
updateData.code = body.code ? body.code.trim().toUpperCase() : null
}
for (const field of ['utmSource', 'utmMedium', 'utmCampaign', 'utmContent'] as const) {
if (body[field] !== undefined) {
if (body[field] !== null && typeof body[field] !== 'string') {
return badRequestResponse(`${field} must be a string or null`)
}
updateData[field] = body[field] || null
}
}
const [updated] = await db
.update(referralCampaigns)
.set(updateData)
.where(eq(referralCampaigns.id, campaignId))
.returning()
logger.info(`Admin API: Updated referral campaign ${campaignId}`, {
fields: Object.keys(updateData).filter((k) => k !== 'updatedAt'),
})
return singleResponse(toAdminReferralCampaign(updated, getBaseUrl()))
} catch (error) {
logger.error('Admin API: Failed to update referral campaign', { error })
return internalErrorResponse('Failed to update referral campaign')
}
})
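
A sketch of a PATCH body that satisfies the validation above (all fields optional; `code: null` clears the code). The field shapes come from the doc comment; admin authentication is handled by `withAdminAuthParams` and is not shown here:

// Sketch only: a request body accepted by PATCH /api/v1/admin/referral-campaigns/:id.
const patchBody = {
  name: 'Spring promo',        // non-empty string
  bonusCreditAmount: 25,       // positive number, stored as a decimal string
  isActive: false,             // boolean
  code: 'SPRING25',            // >= 6 chars, uppercased server-side; null removes it
  utmSource: null,             // null = wildcard match
}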

View File

@@ -0,0 +1,140 @@
/**
* GET /api/v1/admin/referral-campaigns
*
* List referral campaigns with optional filtering and pagination.
*
* Query Parameters:
* - active: string (optional) - Filter by active status ('true' or 'false')
* - limit: number (default: 50, max: 250)
* - offset: number (default: 0)
*
* POST /api/v1/admin/referral-campaigns
*
* Create a new referral campaign.
*
* Body:
* - name: string (required) - Campaign name
* - bonusCreditAmount: number (required, > 0) - Bonus credits in dollars
* - code: string | null (optional, min 6 chars, auto-uppercased) - Redeemable code
* - utmSource: string | null (optional) - UTM source match (null = wildcard)
* - utmMedium: string | null (optional) - UTM medium match (null = wildcard)
* - utmCampaign: string | null (optional) - UTM campaign match (null = wildcard)
* - utmContent: string | null (optional) - UTM content match (null = wildcard)
*/
import { db } from '@sim/db'
import { referralCampaigns } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { count, eq, type SQL } from 'drizzle-orm'
import { nanoid } from 'nanoid'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { withAdminAuth } from '@/app/api/v1/admin/middleware'
import {
badRequestResponse,
internalErrorResponse,
listResponse,
singleResponse,
} from '@/app/api/v1/admin/responses'
import {
type AdminReferralCampaign,
createPaginationMeta,
parsePaginationParams,
toAdminReferralCampaign,
} from '@/app/api/v1/admin/types'
const logger = createLogger('AdminReferralCampaignsAPI')
export const GET = withAdminAuth(async (request) => {
const url = new URL(request.url)
const { limit, offset } = parsePaginationParams(url)
const activeFilter = url.searchParams.get('active')
try {
const conditions: SQL<unknown>[] = []
if (activeFilter === 'true') {
conditions.push(eq(referralCampaigns.isActive, true))
} else if (activeFilter === 'false') {
conditions.push(eq(referralCampaigns.isActive, false))
}
const whereClause = conditions.length > 0 ? conditions[0] : undefined
const baseUrl = getBaseUrl()
const [countResult, campaigns] = await Promise.all([
db.select({ total: count() }).from(referralCampaigns).where(whereClause),
db
.select()
.from(referralCampaigns)
.where(whereClause)
.orderBy(referralCampaigns.createdAt)
.limit(limit)
.offset(offset),
])
const total = countResult[0].total
const data: AdminReferralCampaign[] = campaigns.map((c) => toAdminReferralCampaign(c, baseUrl))
const pagination = createPaginationMeta(total, limit, offset)
logger.info(`Admin API: Listed ${data.length} referral campaigns (total: ${total})`)
return listResponse(data, pagination)
} catch (error) {
logger.error('Admin API: Failed to list referral campaigns', { error })
return internalErrorResponse('Failed to list referral campaigns')
}
})
export const POST = withAdminAuth(async (request) => {
try {
const body = await request.json()
const { name, code, utmSource, utmMedium, utmCampaign, utmContent, bonusCreditAmount } = body
if (!name || typeof name !== 'string') {
return badRequestResponse('name is required and must be a string')
}
if (
typeof bonusCreditAmount !== 'number' ||
!Number.isFinite(bonusCreditAmount) ||
bonusCreditAmount <= 0
) {
return badRequestResponse('bonusCreditAmount must be a positive number')
}
if (code !== undefined && code !== null) {
if (typeof code !== 'string') {
return badRequestResponse('code must be a string or null')
}
if (code.trim().length < 6) {
return badRequestResponse('code must be at least 6 characters')
}
}
const id = nanoid()
const [campaign] = await db
.insert(referralCampaigns)
.values({
id,
name,
code: code ? code.trim().toUpperCase() : null,
utmSource: utmSource || null,
utmMedium: utmMedium || null,
utmCampaign: utmCampaign || null,
utmContent: utmContent || null,
bonusCreditAmount: bonusCreditAmount.toString(),
})
.returning()
logger.info(`Admin API: Created referral campaign ${id}`, {
name,
code: campaign.code,
bonusCreditAmount,
})
return singleResponse(toAdminReferralCampaign(campaign, getBaseUrl()))
} catch (error) {
logger.error('Admin API: Failed to create referral campaign', { error })
return internalErrorResponse('Failed to create referral campaign')
}
})
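
Similarly, a minimal POST body for creating a campaign, matching the checks above (name and bonusCreditAmount are required; omitted UTM fields are stored as null, i.e. wildcard):

// Sketch only: a request body accepted by POST /api/v1/admin/referral-campaigns.
const createBody = {
  name: 'Newsletter referral',
  bonusCreditAmount: 10,       // must be a finite number > 0
  code: 'NEWS10OFF',           // optional; >= 6 chars if provided
  utmSource: 'newsletter',     // optional UTM match; omit or null for wildcard
}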

View File

@@ -8,6 +8,7 @@
import type {
member,
organization,
referralCampaigns,
subscription,
user,
userStats,
@@ -31,6 +32,7 @@ export type DbOrganization = InferSelectModel<typeof organization>
export type DbSubscription = InferSelectModel<typeof subscription>
export type DbMember = InferSelectModel<typeof member>
export type DbUserStats = InferSelectModel<typeof userStats>
export type DbReferralCampaign = InferSelectModel<typeof referralCampaigns>
// =============================================================================
// Pagination
@@ -646,3 +648,49 @@ export interface AdminDeployResult {
export interface AdminUndeployResult {
isDeployed: boolean
}
// =============================================================================
// Referral Campaign Types
// =============================================================================
export interface AdminReferralCampaign {
id: string
name: string
code: string | null
utmSource: string | null
utmMedium: string | null
utmCampaign: string | null
utmContent: string | null
bonusCreditAmount: string
isActive: boolean
signupUrl: string | null
createdAt: string
updatedAt: string
}
export function toAdminReferralCampaign(
dbCampaign: DbReferralCampaign,
baseUrl: string
): AdminReferralCampaign {
const utmParams = new URLSearchParams()
if (dbCampaign.utmSource) utmParams.set('utm_source', dbCampaign.utmSource)
if (dbCampaign.utmMedium) utmParams.set('utm_medium', dbCampaign.utmMedium)
if (dbCampaign.utmCampaign) utmParams.set('utm_campaign', dbCampaign.utmCampaign)
if (dbCampaign.utmContent) utmParams.set('utm_content', dbCampaign.utmContent)
const query = utmParams.toString()
return {
id: dbCampaign.id,
name: dbCampaign.name,
code: dbCampaign.code,
utmSource: dbCampaign.utmSource,
utmMedium: dbCampaign.utmMedium,
utmCampaign: dbCampaign.utmCampaign,
utmContent: dbCampaign.utmContent,
bonusCreditAmount: dbCampaign.bonusCreditAmount,
isActive: dbCampaign.isActive,
signupUrl: query ? `${baseUrl}/signup?${query}` : null,
createdAt: dbCampaign.createdAt.toISOString(),
updatedAt: dbCampaign.updatedAt.toISOString(),
}
}
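
To illustrate the `signupUrl` construction above: only campaigns with at least one UTM value get a URL, while a code-only campaign maps to `signupUrl: null`. A worked example with illustrative values:

// Sketch only: how the signup URL is derived (values are not from the diff).
const params = new URLSearchParams()
params.set('utm_source', 'newsletter')
params.set('utm_campaign', 'spring')
const signupUrl = `https://app.example.com/signup?${params.toString()}`
// -> 'https://app.example.com/signup?utm_source=newsletter&utm_campaign=spring'
// A campaign with no UTM fields (code-only) yields signupUrl: null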

View File

@@ -8,7 +8,7 @@ import { resolveWorkflowIdForUser } from '@/lib/workflows/utils'
import { authenticateV1Request } from '@/app/api/v1/auth'
const logger = createLogger('CopilotHeadlessAPI')
const DEFAULT_COPILOT_MODEL = 'claude-opus-4-6'
const DEFAULT_COPILOT_MODEL = 'claude-opus-4-5'
const RequestSchema = z.object({
message: z.string().min(1, 'message is required'),

View File

@@ -29,7 +29,7 @@ const patchBodySchema = z
description: z
.string()
.trim()
.max(500, 'Description must be 500 characters or less')
.max(2000, 'Description must be 2000 characters or less')
.nullable()
.optional(),
isActive: z.literal(true).optional(), // Set to true to activate this version

View File

@@ -12,7 +12,7 @@ import {
import { generateRequestId } from '@/lib/core/utils/request'
import { SSE_HEADERS } from '@/lib/core/utils/sse'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { markExecutionCancelled } from '@/lib/execution/cancellation'
import { createExecutionEventWriter, setExecutionMeta } from '@/lib/execution/event-buffer'
import { processInputFileFields } from '@/lib/execution/files'
import { preprocessExecution } from '@/lib/execution/preprocessing'
import { LoggingSession } from '@/lib/logs/execution/logging-session'
@@ -700,15 +700,27 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
const timeoutController = createTimeoutAbortController(preprocessResult.executionTimeout?.sync)
let isStreamClosed = false
const eventWriter = createExecutionEventWriter(executionId)
setExecutionMeta(executionId, {
status: 'active',
userId: actorUserId,
workflowId,
}).catch(() => {})
const stream = new ReadableStream<Uint8Array>({
async start(controller) {
const sendEvent = (event: ExecutionEvent) => {
if (isStreamClosed) return
let finalMetaStatus: 'complete' | 'error' | 'cancelled' | null = null
try {
controller.enqueue(encodeSSEEvent(event))
} catch {
isStreamClosed = true
const sendEvent = (event: ExecutionEvent) => {
if (!isStreamClosed) {
try {
controller.enqueue(encodeSSEEvent(event))
} catch {
isStreamClosed = true
}
}
if (event.type !== 'stream:chunk' && event.type !== 'stream:done') {
eventWriter.write(event).catch(() => {})
}
}
@@ -829,14 +841,12 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
const reader = streamingExec.stream.getReader()
const decoder = new TextDecoder()
let chunkCount = 0
try {
while (true) {
const { done, value } = await reader.read()
if (done) break
chunkCount++
const chunk = decoder.decode(value, { stream: true })
sendEvent({
type: 'stream:chunk',
@@ -951,6 +961,7 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
duration: result.metadata?.duration || 0,
},
})
finalMetaStatus = 'error'
} else {
logger.info(`[${requestId}] Workflow execution was cancelled`)
@@ -963,6 +974,7 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
duration: result.metadata?.duration || 0,
},
})
finalMetaStatus = 'cancelled'
}
return
}
@@ -986,6 +998,7 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
endTime: result.metadata?.endTime || new Date().toISOString(),
},
})
finalMetaStatus = 'complete'
} catch (error: unknown) {
const isTimeout = isTimeoutError(error) || timeoutController.isTimedOut()
const errorMessage = isTimeout
@@ -1017,7 +1030,18 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
duration: executionResult?.metadata?.duration || 0,
},
})
finalMetaStatus = 'error'
} finally {
try {
await eventWriter.close()
} catch (closeError) {
logger.warn(`[${requestId}] Failed to close event writer`, {
error: closeError instanceof Error ? closeError.message : String(closeError),
})
}
if (finalMetaStatus) {
setExecutionMeta(executionId, { status: finalMetaStatus }).catch(() => {})
}
timeoutController.cleanup()
if (executionId) {
await cleanupExecutionBase64Cache(executionId)
@@ -1032,10 +1056,7 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
},
cancel() {
isStreamClosed = true
timeoutController.cleanup()
logger.info(`[${requestId}] Client aborted SSE stream, signalling cancellation`)
timeoutController.abort()
markExecutionCancelled(executionId).catch(() => {})
logger.info(`[${requestId}] Client disconnected from SSE stream`)
},
})

View File

@@ -0,0 +1,170 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { SSE_HEADERS } from '@/lib/core/utils/sse'
import {
type ExecutionStreamStatus,
getExecutionMeta,
readExecutionEvents,
} from '@/lib/execution/event-buffer'
import { formatSSEEvent } from '@/lib/workflows/executor/execution-events'
import { authorizeWorkflowByWorkspacePermission } from '@/lib/workflows/utils'
const logger = createLogger('ExecutionStreamReconnectAPI')
const POLL_INTERVAL_MS = 500
const MAX_POLL_DURATION_MS = 10 * 60 * 1000 // 10 minutes
function isTerminalStatus(status: ExecutionStreamStatus): boolean {
return status === 'complete' || status === 'error' || status === 'cancelled'
}
export const runtime = 'nodejs'
export const dynamic = 'force-dynamic'
export async function GET(
req: NextRequest,
{ params }: { params: Promise<{ id: string; executionId: string }> }
) {
const { id: workflowId, executionId } = await params
try {
const auth = await checkHybridAuth(req, { requireWorkflowId: false })
if (!auth.success || !auth.userId) {
return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
}
const workflowAuthorization = await authorizeWorkflowByWorkspacePermission({
workflowId,
userId: auth.userId,
action: 'read',
})
if (!workflowAuthorization.allowed) {
return NextResponse.json(
{ error: workflowAuthorization.message || 'Access denied' },
{ status: workflowAuthorization.status }
)
}
const meta = await getExecutionMeta(executionId)
if (!meta) {
return NextResponse.json({ error: 'Execution buffer not found or expired' }, { status: 404 })
}
if (meta.workflowId && meta.workflowId !== workflowId) {
return NextResponse.json(
{ error: 'Execution does not belong to this workflow' },
{ status: 403 }
)
}
const fromParam = req.nextUrl.searchParams.get('from')
const parsed = fromParam ? Number.parseInt(fromParam, 10) : 0
const fromEventId = Number.isFinite(parsed) && parsed >= 0 ? parsed : 0
logger.info('Reconnection stream requested', {
workflowId,
executionId,
fromEventId,
metaStatus: meta.status,
})
const encoder = new TextEncoder()
let closed = false
const stream = new ReadableStream<Uint8Array>({
async start(controller) {
let lastEventId = fromEventId
const pollDeadline = Date.now() + MAX_POLL_DURATION_MS
const enqueue = (text: string) => {
if (closed) return
try {
controller.enqueue(encoder.encode(text))
} catch {
closed = true
}
}
try {
const events = await readExecutionEvents(executionId, lastEventId)
for (const entry of events) {
if (closed) return
enqueue(formatSSEEvent(entry.event))
lastEventId = entry.eventId
}
const currentMeta = await getExecutionMeta(executionId)
if (!currentMeta || isTerminalStatus(currentMeta.status)) {
enqueue('data: [DONE]\n\n')
if (!closed) controller.close()
return
}
while (!closed && Date.now() < pollDeadline) {
await new Promise((resolve) => setTimeout(resolve, POLL_INTERVAL_MS))
if (closed) return
const newEvents = await readExecutionEvents(executionId, lastEventId)
for (const entry of newEvents) {
if (closed) return
enqueue(formatSSEEvent(entry.event))
lastEventId = entry.eventId
}
const polledMeta = await getExecutionMeta(executionId)
if (!polledMeta || isTerminalStatus(polledMeta.status)) {
const finalEvents = await readExecutionEvents(executionId, lastEventId)
for (const entry of finalEvents) {
if (closed) return
enqueue(formatSSEEvent(entry.event))
lastEventId = entry.eventId
}
enqueue('data: [DONE]\n\n')
if (!closed) controller.close()
return
}
}
if (!closed) {
logger.warn('Reconnection stream poll deadline reached', { executionId })
enqueue('data: [DONE]\n\n')
controller.close()
}
} catch (error) {
logger.error('Error in reconnection stream', {
executionId,
error: error instanceof Error ? error.message : String(error),
})
if (!closed) {
try {
controller.close()
} catch {}
}
}
},
cancel() {
closed = true
logger.info('Client disconnected from reconnection stream', { executionId })
},
})
return new NextResponse(stream, {
headers: {
...SSE_HEADERS,
'X-Execution-Id': executionId,
},
})
} catch (error: any) {
logger.error('Failed to start reconnection stream', {
workflowId,
executionId,
error: error.message,
})
return NextResponse.json(
{ error: error.message || 'Failed to start reconnection stream' },
{ status: 500 }
)
}
}
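
A hedged client-side sketch of resuming an in-flight execution through this route: pass the last event id you saw as `?from=`, then read SSE frames until the `data: [DONE]` sentinel. The exact mount path of the route is not shown in this diff, so a plausible path under `/api/workflows/[id]` is assumed, and the SSE parsing is deliberately simplified (one JSON payload per `data:` frame):

// Sketch only: reconnect to a buffered execution stream and replay missed events.
async function resumeExecutionStream(
  workflowId: string,
  executionId: string,
  fromEventId: number,
  onEvent: (event: unknown) => void
): Promise<void> {
  // Assumed path; the real route lives wherever this file is mounted.
  const res = await fetch(
    `/api/workflows/${workflowId}/executions/${executionId}/stream?from=${fromEventId}`
  )
  if (!res.ok || !res.body) throw new Error(`Reconnect failed: ${res.status}`)
  const reader = res.body.getReader()
  const decoder = new TextDecoder()
  let buffer = ''
  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    buffer += decoder.decode(value, { stream: true })
    const frames = buffer.split('\n\n')
    buffer = frames.pop() ?? ''
    for (const frame of frames) {
      const data = frame.replace(/^data: /, '').trim()
      if (data === '[DONE]') return
      if (data) onEvent(JSON.parse(data)) // assumes formatSSEEvent emits JSON payloads
    }
  }
}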

View File

@@ -5,7 +5,7 @@
* @vitest-environment node
*/
import { loggerMock } from '@sim/testing'
import { loggerMock, setupGlobalFetchMock } from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
@@ -284,9 +284,7 @@ describe('Workflow By ID API Route', () => {
where: vi.fn().mockResolvedValue([{ id: 'workflow-123' }]),
})
global.fetch = vi.fn().mockResolvedValue({
ok: true,
})
setupGlobalFetchMock({ ok: true })
const req = new NextRequest('http://localhost:3000/api/workflows/workflow-123', {
method: 'DELETE',
@@ -331,9 +329,7 @@ describe('Workflow By ID API Route', () => {
where: vi.fn().mockResolvedValue([{ id: 'workflow-123' }]),
})
global.fetch = vi.fn().mockResolvedValue({
ok: true,
})
setupGlobalFetchMock({ ok: true })
const req = new NextRequest('http://localhost:3000/api/workflows/workflow-123', {
method: 'DELETE',

View File

@@ -13,9 +13,6 @@ export type CommandId =
| 'goto-logs'
| 'open-search'
| 'run-workflow'
| 'focus-copilot-tab'
| 'focus-toolbar-tab'
| 'focus-editor-tab'
| 'clear-terminal-console'
| 'focus-toolbar-search'
| 'clear-notifications'
@@ -75,21 +72,6 @@ export const COMMAND_DEFINITIONS: Record<CommandId, CommandDefinition> = {
shortcut: 'Mod+Enter',
allowInEditable: false,
},
'focus-copilot-tab': {
id: 'focus-copilot-tab',
shortcut: 'C',
allowInEditable: false,
},
'focus-toolbar-tab': {
id: 'focus-toolbar-tab',
shortcut: 'T',
allowInEditable: false,
},
'focus-editor-tab': {
id: 'focus-editor-tab',
shortcut: 'E',
allowInEditable: false,
},
'clear-terminal-console': {
id: 'clear-terminal-console',
shortcut: 'Mod+D',

View File

@@ -113,7 +113,7 @@ export function VersionDescriptionModal({
className='min-h-[120px] resize-none'
value={description}
onChange={(e) => setDescription(e.target.value)}
maxLength={500}
maxLength={2000}
disabled={isGenerating}
/>
<div className='flex items-center justify-between'>
@@ -123,7 +123,7 @@ export function VersionDescriptionModal({
</p>
)}
{!updateMutation.error && !generateMutation.error && <div />}
<p className='text-[11px] text-[var(--text-tertiary)]'>{description.length}/500</p>
<p className='text-[11px] text-[var(--text-tertiary)]'>{description.length}/2000</p>
</div>
</ModalBody>
<ModalFooter>

View File

@@ -57,6 +57,21 @@ export function useChangeDetection({
}
}
if (block.triggerMode) {
const triggerConfigValue = blockSubValues?.triggerConfig
if (
triggerConfigValue &&
typeof triggerConfigValue === 'object' &&
!subBlocks.triggerConfig
) {
subBlocks.triggerConfig = {
id: 'triggerConfig',
type: 'short-input',
value: triggerConfigValue,
}
}
}
blocksWithSubBlocks[blockId] = {
...block,
subBlocks,

View File

@@ -1,7 +1,10 @@
import { useCallback, useState } from 'react'
import { createLogger } from '@sim/logger'
import { runPreDeployChecks } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/deploy/hooks/use-predeploy-checks'
import { useNotificationStore } from '@/stores/notifications'
import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
import { mergeSubblockState } from '@/stores/workflows/utils'
import { useWorkflowStore } from '@/stores/workflows/workflow/store'
const logger = createLogger('useDeployment')
@@ -35,6 +38,24 @@ export function useDeployment({
return { success: true, shouldOpenModal: true }
}
const { blocks, edges, loops, parallels } = useWorkflowStore.getState()
const liveBlocks = mergeSubblockState(blocks, workflowId)
const checkResult = runPreDeployChecks({
blocks: liveBlocks,
edges,
loops,
parallels,
workflowId,
})
if (!checkResult.passed) {
addNotification({
level: 'error',
message: checkResult.error || 'Pre-deploy validation failed',
workflowId,
})
return { success: false, shouldOpenModal: false }
}
setIsDeploying(true)
try {
const response = await fetch(`/api/workflows/${workflowId}/deploy`, {

View File

@@ -4,6 +4,7 @@ import { Button, Combobox } from '@/components/emcn/components'
import {
getCanonicalScopesForProvider,
getProviderIdFromServiceId,
getServiceConfigByProviderId,
OAUTH_PROVIDERS,
type OAuthProvider,
type OAuthService,
@@ -26,6 +27,11 @@ const getProviderIcon = (providerName: OAuthProvider) => {
}
const getProviderName = (providerName: OAuthProvider) => {
const serviceConfig = getServiceConfigByProviderId(providerName)
if (serviceConfig) {
return serviceConfig.name
}
const { baseProvider } = parseProvider(providerName)
const baseProviderConfig = OAUTH_PROVIDERS[baseProvider]
@@ -54,7 +60,7 @@ export function ToolCredentialSelector({
onChange,
provider,
requiredScopes = [],
label = 'Select account',
label,
serviceId,
disabled = false,
}: ToolCredentialSelectorProps) {
@@ -64,6 +70,7 @@ export function ToolCredentialSelector({
const { activeWorkflowId } = useWorkflowRegistry()
const selectedId = value || ''
const effectiveLabel = label || `Select ${getProviderName(provider)} account`
const effectiveProviderId = useMemo(() => getProviderIdFromServiceId(serviceId), [serviceId])
@@ -203,7 +210,7 @@ export function ToolCredentialSelector({
selectedValue={selectedId}
onChange={handleComboboxChange}
onOpenChange={handleOpenChange}
placeholder={label}
placeholder={effectiveLabel}
disabled={disabled}
editable={true}
filterOptions={!isForeign}

View File

@@ -0,0 +1,186 @@
'use client'
import type React from 'react'
import { useRef, useState } from 'react'
import { ArrowLeftRight, ArrowUp } from 'lucide-react'
import { Button, Input, Label, Tooltip } from '@/components/emcn'
import { cn } from '@/lib/core/utils/cn'
import type { WandControlHandlers } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/sub-block'
/**
* Props for a generic parameter with label component
*/
export interface ParameterWithLabelProps {
paramId: string
title: string
isRequired: boolean
visibility: string
wandConfig?: {
enabled: boolean
prompt?: string
placeholder?: string
}
canonicalToggle?: {
mode: 'basic' | 'advanced'
disabled?: boolean
onToggle?: () => void
}
disabled: boolean
isPreview: boolean
children: (wandControlRef: React.MutableRefObject<WandControlHandlers | null>) => React.ReactNode
}
/**
* Generic wrapper component for parameters that manages wand state and renders label + input
*/
export function ParameterWithLabel({
paramId,
title,
isRequired,
visibility,
wandConfig,
canonicalToggle,
disabled,
isPreview,
children,
}: ParameterWithLabelProps) {
const [isSearchActive, setIsSearchActive] = useState(false)
const [searchQuery, setSearchQuery] = useState('')
const searchInputRef = useRef<HTMLInputElement>(null)
const wandControlRef = useRef<WandControlHandlers | null>(null)
const isWandEnabled = wandConfig?.enabled ?? false
const showWand = isWandEnabled && !isPreview && !disabled
const handleSearchClick = (): void => {
setIsSearchActive(true)
setTimeout(() => {
searchInputRef.current?.focus()
}, 0)
}
const handleSearchBlur = (): void => {
if (!searchQuery.trim() && !wandControlRef.current?.isWandStreaming) {
setIsSearchActive(false)
}
}
const handleSearchChange = (value: string): void => {
setSearchQuery(value)
}
const handleSearchSubmit = (): void => {
if (searchQuery.trim() && wandControlRef.current) {
wandControlRef.current.onWandTrigger(searchQuery)
setSearchQuery('')
setIsSearchActive(false)
}
}
const handleSearchCancel = (): void => {
setSearchQuery('')
setIsSearchActive(false)
}
const isStreaming = wandControlRef.current?.isWandStreaming ?? false
return (
<div key={paramId} className='relative min-w-0 space-y-[6px]'>
<div className='flex items-center justify-between gap-[6px] pl-[2px]'>
<Label className='flex items-baseline gap-[6px] whitespace-nowrap font-medium text-[13px] text-[var(--text-primary)]'>
{title}
{isRequired && visibility === 'user-only' && <span className='ml-0.5'>*</span>}
</Label>
<div className='flex min-w-0 flex-1 items-center justify-end gap-[6px]'>
{showWand &&
(!isSearchActive ? (
<Button
variant='active'
className='-my-1 h-5 px-2 py-0 text-[11px]'
onClick={handleSearchClick}
>
Generate
</Button>
) : (
<div className='-my-1 flex min-w-[120px] max-w-[280px] flex-1 items-center gap-[4px]'>
<Input
ref={searchInputRef}
value={isStreaming ? 'Generating...' : searchQuery}
onChange={(e: React.ChangeEvent<HTMLInputElement>) =>
handleSearchChange(e.target.value)
}
onBlur={(e: React.FocusEvent<HTMLInputElement>) => {
const relatedTarget = e.relatedTarget as HTMLElement | null
if (relatedTarget?.closest('button')) return
handleSearchBlur()
}}
onKeyDown={(e: React.KeyboardEvent<HTMLInputElement>) => {
if (e.key === 'Enter' && searchQuery.trim() && !isStreaming) {
handleSearchSubmit()
} else if (e.key === 'Escape') {
handleSearchCancel()
}
}}
disabled={isStreaming}
className={cn(
'h-5 min-w-[80px] flex-1 text-[11px]',
isStreaming && 'text-muted-foreground'
)}
placeholder='Generate with AI...'
/>
<Button
variant='tertiary'
disabled={!searchQuery.trim() || isStreaming}
onMouseDown={(e: React.MouseEvent) => {
e.preventDefault()
e.stopPropagation()
}}
onClick={(e: React.MouseEvent) => {
e.stopPropagation()
handleSearchSubmit()
}}
className='h-[20px] w-[20px] flex-shrink-0 p-0'
>
<ArrowUp className='h-[12px] w-[12px]' />
</Button>
</div>
))}
{canonicalToggle && !isPreview && (
<Tooltip.Root>
<Tooltip.Trigger asChild>
<button
type='button'
className='flex h-[12px] w-[12px] flex-shrink-0 items-center justify-center bg-transparent p-0 disabled:cursor-not-allowed disabled:opacity-50'
onClick={canonicalToggle.onToggle}
disabled={canonicalToggle.disabled || disabled}
aria-label={
canonicalToggle.mode === 'advanced'
? 'Switch to selector'
: 'Switch to manual ID'
}
>
<ArrowLeftRight
className={cn(
'!h-[12px] !w-[12px]',
canonicalToggle.mode === 'advanced'
? 'text-[var(--text-primary)]'
: 'text-[var(--text-secondary)]'
)}
/>
</button>
</Tooltip.Trigger>
<Tooltip.Content side='top'>
<p>
{canonicalToggle.mode === 'advanced'
? 'Switch to selector'
: 'Switch to manual ID'}
</p>
</Tooltip.Content>
</Tooltip.Root>
)}
</div>
</div>
<div className='relative w-full min-w-0'>{children(wandControlRef)}</div>
</div>
)
}
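
A hypothetical usage sketch of the render-prop contract: the child receives the wand control ref so the wrapped input can register its generate handlers. Component name and props are from the file above; the plain input is a stand-in for the real SubBlock-style child:

// Sketch only: wiring ParameterWithLabel around an input.
<ParameterWithLabel
  paramId='query'
  title='Query'
  isRequired={true}
  visibility='user-only'
  wandConfig={{ enabled: true, prompt: 'Write a search query' }}
  disabled={false}
  isPreview={false}
>
  {(wandControlRef) => (
    // Real callers register onWandTrigger / isWandStreaming on wandControlRef;
    // a plain input is shown here only as a placeholder.
    <input className='w-full' placeholder='Enter a query' />
  )}
</ParameterWithLabel>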

View File

@@ -0,0 +1,114 @@
'use client'
import { useEffect, useRef } from 'react'
import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-value'
import { SubBlock } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/sub-block'
import type { SubBlockConfig as BlockSubBlockConfig } from '@/blocks/types'
interface ToolSubBlockRendererProps {
blockId: string
subBlockId: string
toolIndex: number
subBlock: BlockSubBlockConfig
effectiveParamId: string
toolParams: Record<string, string> | undefined
onParamChange: (toolIndex: number, paramId: string, value: string) => void
disabled: boolean
canonicalToggle?: {
mode: 'basic' | 'advanced'
disabled?: boolean
onToggle?: () => void
}
}
/**
* SubBlock types whose store values are objects/arrays/non-strings.
* tool.params stores strings (via JSON.stringify), so when syncing
* back to the store we parse them to restore the native shape.
*/
const OBJECT_SUBBLOCK_TYPES = new Set(['file-upload', 'table', 'grouped-checkbox-list'])
/**
* Bridges the subblock store with StoredTool.params via a synthetic store key,
* then delegates all rendering to SubBlock for full parity.
*/
export function ToolSubBlockRenderer({
blockId,
subBlockId,
toolIndex,
subBlock,
effectiveParamId,
toolParams,
onParamChange,
disabled,
canonicalToggle,
}: ToolSubBlockRendererProps) {
const syntheticId = `${subBlockId}-tool-${toolIndex}-${effectiveParamId}`
const [storeValue, setStoreValue] = useSubBlockValue(blockId, syntheticId)
const toolParamValue = toolParams?.[effectiveParamId] ?? ''
const isObjectType = OBJECT_SUBBLOCK_TYPES.has(subBlock.type)
const lastPushedToStoreRef = useRef<string | null>(null)
const lastPushedToParamsRef = useRef<string | null>(null)
useEffect(() => {
if (!toolParamValue && lastPushedToStoreRef.current === null) {
lastPushedToStoreRef.current = toolParamValue
lastPushedToParamsRef.current = toolParamValue
return
}
if (toolParamValue !== lastPushedToStoreRef.current) {
lastPushedToStoreRef.current = toolParamValue
lastPushedToParamsRef.current = toolParamValue
if (isObjectType && typeof toolParamValue === 'string' && toolParamValue) {
try {
const parsed = JSON.parse(toolParamValue)
if (typeof parsed === 'object' && parsed !== null) {
setStoreValue(parsed)
return
}
} catch {
// Not valid JSON — fall through to set as string
}
}
setStoreValue(toolParamValue)
}
}, [toolParamValue, setStoreValue, isObjectType])
useEffect(() => {
if (storeValue == null && lastPushedToParamsRef.current === null) return
const stringValue =
storeValue == null
? ''
: typeof storeValue === 'string'
? storeValue
: JSON.stringify(storeValue)
if (stringValue !== lastPushedToParamsRef.current) {
lastPushedToParamsRef.current = stringValue
lastPushedToStoreRef.current = stringValue
onParamChange(toolIndex, effectiveParamId, stringValue)
}
}, [storeValue, toolIndex, effectiveParamId, onParamChange])
const visibility = subBlock.paramVisibility ?? 'user-or-llm'
const isOptionalForUser = visibility !== 'user-only'
const config = {
...subBlock,
id: syntheticId,
...(isOptionalForUser && { required: false }),
}
return (
<SubBlock
blockId={blockId}
config={config}
isPreview={false}
disabled={disabled}
canonicalToggle={canonicalToggle}
dependencyContext={toolParams}
/>
)
}
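
The object-type round-trip described in the comment above, in isolation: tool.params only holds strings, so object values are stringified on the way out and re-parsed on the way back in. A minimal sketch (helper names are illustrative, not from the component):

// Sketch only: the string <-> object round-trip the renderer performs.
function toParamString(storeValue: unknown): string {
  if (storeValue == null) return ''
  return typeof storeValue === 'string' ? storeValue : JSON.stringify(storeValue)
}

function toStoreValue(paramValue: string, isObjectType: boolean): unknown {
  if (isObjectType && paramValue) {
    try {
      const parsed = JSON.parse(paramValue)
      if (typeof parsed === 'object' && parsed !== null) return parsed
    } catch {
      // not valid JSON; keep the raw string
    }
  }
  return paramValue
}

// e.g. toParamString([{ cell: 'A1' }]) === '[{"cell":"A1"}]'
//      toStoreValue('[{"cell":"A1"}]', true) -> [{ cell: 'A1' }]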

View File

@@ -2,37 +2,12 @@
* @vitest-environment node
*/
import { describe, expect, it } from 'vitest'
interface StoredTool {
type: string
title?: string
toolId?: string
params?: Record<string, string>
customToolId?: string
schema?: any
code?: string
operation?: string
usageControl?: 'auto' | 'force' | 'none'
}
const isMcpToolAlreadySelected = (selectedTools: StoredTool[], mcpToolId: string): boolean => {
return selectedTools.some((tool) => tool.type === 'mcp' && tool.toolId === mcpToolId)
}
const isCustomToolAlreadySelected = (
selectedTools: StoredTool[],
customToolId: string
): boolean => {
return selectedTools.some(
(tool) => tool.type === 'custom-tool' && tool.customToolId === customToolId
)
}
const isWorkflowAlreadySelected = (selectedTools: StoredTool[], workflowId: string): boolean => {
return selectedTools.some(
(tool) => tool.type === 'workflow_input' && tool.params?.workflowId === workflowId
)
}
import type { StoredTool } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/tool-input/types'
import {
isCustomToolAlreadySelected,
isMcpToolAlreadySelected,
isWorkflowAlreadySelected,
} from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/tool-input/utils'
describe('isMcpToolAlreadySelected', () => {
describe('basic functionality', () => {

View File

@@ -0,0 +1,31 @@
/**
* Represents a tool selected and configured in the workflow
*
* @remarks
* For custom tools (new format), we only store: type, customToolId, usageControl, isExpanded.
* Everything else (title, schema, code) is loaded dynamically from the database.
* Legacy custom tools with inline schema/code are still supported for backwards compatibility.
*/
export interface StoredTool {
/** Block type identifier */
type: string
/** Display title for the tool (optional for new custom tool format) */
title?: string
/** Direct tool ID for execution (optional for new custom tool format) */
toolId?: string
/** Parameter values configured by the user (optional for new custom tool format) */
params?: Record<string, string>
/** Whether the tool details are expanded in UI */
isExpanded?: boolean
/** Database ID for custom tools (new format - reference only) */
customToolId?: string
/** Tool schema for custom tools (legacy format - inline JSON schema) */
// eslint-disable-next-line @typescript-eslint/no-explicit-any
schema?: Record<string, any>
/** Implementation code for custom tools (legacy format - inline) */
code?: string
/** Selected operation for multi-operation tools */
operation?: string
/** Tool usage control mode for LLM */
usageControl?: 'auto' | 'force' | 'none'
}
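For illustration, two hedged example values of this interface (IDs, names, and schema shape are made up): a new-format custom tool reference and a legacy inline custom tool.

const newFormatCustomTool: StoredTool = {
  type: 'custom-tool',
  customToolId: 'ct_8f2a', // database reference; title/schema/code are loaded dynamically
  usageControl: 'auto',
  isExpanded: false,
}

const legacyCustomTool: StoredTool = {
  type: 'custom-tool',
  title: 'Lookup Order',
  schema: { function: { name: 'lookup_order', parameters: { type: 'object', properties: {} } } },
  code: 'return { status: "shipped" }',
  usageControl: 'force',
}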

View File

@@ -0,0 +1,32 @@
import type { StoredTool } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/tool-input/types'
/**
* Checks if an MCP tool is already selected.
*/
export function isMcpToolAlreadySelected(selectedTools: StoredTool[], mcpToolId: string): boolean {
return selectedTools.some((tool) => tool.type === 'mcp' && tool.toolId === mcpToolId)
}
/**
* Checks if a custom tool is already selected.
*/
export function isCustomToolAlreadySelected(
selectedTools: StoredTool[],
customToolId: string
): boolean {
return selectedTools.some(
(tool) => tool.type === 'custom-tool' && tool.customToolId === customToolId
)
}
/**
* Checks if a workflow is already selected.
*/
export function isWorkflowAlreadySelected(
selectedTools: StoredTool[],
workflowId: string
): boolean {
return selectedTools.some(
(tool) => tool.type === 'workflow_input' && tool.params?.workflowId === workflowId
)
}
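A short hedged sketch of how these guards might be used before appending to the selection; the addMcpTool helper is hypothetical, and only the import paths are taken from the files above:

import type { StoredTool } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/tool-input/types'
import { isMcpToolAlreadySelected } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/tool-input/utils'

function addMcpTool(selectedTools: StoredTool[], mcpToolId: string): StoredTool[] {
  // Skip duplicates so the same MCP tool cannot be selected twice.
  if (isMcpToolAlreadySelected(selectedTools, mcpToolId)) return selectedTools
  return [...selectedTools, { type: 'mcp', toolId: mcpToolId }]
}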

View File

@@ -3,7 +3,6 @@ import { isEqual } from 'lodash'
import { AlertTriangle, ArrowLeftRight, ArrowUp, Check, Clipboard } from 'lucide-react'
import { Button, Input, Label, Tooltip } from '@/components/emcn/components'
import { cn } from '@/lib/core/utils/cn'
import type { FieldDiffStatus } from '@/lib/workflows/diff/types'
import {
CheckboxList,
Code,
@@ -69,13 +68,15 @@ interface SubBlockProps {
isPreview?: boolean
subBlockValues?: Record<string, any>
disabled?: boolean
fieldDiffStatus?: FieldDiffStatus
allowExpandInPreview?: boolean
canonicalToggle?: {
mode: 'basic' | 'advanced'
disabled?: boolean
onToggle?: () => void
}
labelSuffix?: React.ReactNode
/** Provides sibling values for dependency resolution in non-preview contexts (e.g. tool-input) */
dependencyContext?: Record<string, unknown>
}
/**
@@ -162,16 +163,14 @@ const getPreviewValue = (
/**
* Renders the label with optional validation and description tooltips.
*
* @remarks
* Handles JSON validation indicators for code blocks and required field markers.
* Includes inline AI generate button when wand is enabled.
*
* @param config - The sub-block configuration defining the label content
* @param isValidJson - Whether the JSON content is valid (for code blocks)
* @param subBlockValues - Current values of all subblocks for evaluating conditional requirements
* @param wandState - Optional state and handlers for the AI wand feature
* @param canonicalToggle - Optional canonical toggle metadata and handlers
* @param canonicalToggleIsDisabled - Whether the canonical toggle is disabled
* @param wandState - State and handlers for the inline AI generate feature
* @param canonicalToggle - Metadata and handlers for the basic/advanced mode toggle
* @param canonicalToggleIsDisabled - Whether the canonical toggle is disabled (includes dependsOn gating)
* @param copyState - State and handler for the copy-to-clipboard button
* @param labelSuffix - Additional content rendered after the label text
* @returns The label JSX element, or `null` for switch types or when no title is defined
*/
const renderLabel = (
@@ -202,7 +201,8 @@ const renderLabel = (
showCopyButton: boolean
copied: boolean
onCopy: () => void
}
},
labelSuffix?: React.ReactNode
): JSX.Element | null => {
if (config.type === 'switch') return null
if (!config.title) return null
@@ -215,9 +215,10 @@ const renderLabel = (
return (
<div className='flex items-center justify-between gap-[6px] pl-[2px]'>
<Label className='flex items-center gap-[6px] whitespace-nowrap'>
<Label className='flex items-baseline gap-[6px] whitespace-nowrap'>
{config.title}
{required && <span className='ml-0.5'>*</span>}
{labelSuffix}
{config.type === 'code' &&
config.language === 'json' &&
!isValidJson &&
@@ -383,28 +384,25 @@ const arePropsEqual = (prevProps: SubBlockProps, nextProps: SubBlockProps): bool
prevProps.isPreview === nextProps.isPreview &&
valueEqual &&
prevProps.disabled === nextProps.disabled &&
prevProps.fieldDiffStatus === nextProps.fieldDiffStatus &&
prevProps.allowExpandInPreview === nextProps.allowExpandInPreview &&
canonicalToggleEqual
canonicalToggleEqual &&
prevProps.labelSuffix === nextProps.labelSuffix &&
prevProps.dependencyContext === nextProps.dependencyContext
)
}
/**
* Renders a single workflow sub-block input based on config.type.
*
* @remarks
* Supports multiple input types including short-input, long-input, dropdown,
* combobox, slider, table, code, switch, tool-input, and many more.
* Handles preview mode, disabled states, and AI wand generation.
*
* @param blockId - The parent block identifier
* @param config - Configuration defining the input type and properties
* @param isPreview - Whether to render in preview mode
* @param subBlockValues - Current values of all subblocks
* @param disabled - Whether the input is disabled
* @param fieldDiffStatus - Optional diff status for visual indicators
* @param allowExpandInPreview - Whether to allow expanding in preview mode
* @returns The rendered sub-block input component
* @param canonicalToggle - Metadata and handlers for the basic/advanced mode toggle
* @param labelSuffix - Additional content rendered after the label text
* @param dependencyContext - Sibling values for dependency resolution in non-preview contexts (e.g. tool-input)
*/
function SubBlockComponent({
blockId,
@@ -412,9 +410,10 @@ function SubBlockComponent({
isPreview = false,
subBlockValues,
disabled = false,
fieldDiffStatus,
allowExpandInPreview,
canonicalToggle,
labelSuffix,
dependencyContext,
}: SubBlockProps): JSX.Element {
const [isValidJson, setIsValidJson] = useState(true)
const [isSearchActive, setIsSearchActive] = useState(false)
@@ -423,7 +422,6 @@ function SubBlockComponent({
const searchInputRef = useRef<HTMLInputElement>(null)
const wandControlRef = useRef<WandControlHandlers | null>(null)
// Use webhook management hook when config has useWebhookUrl enabled
const webhookManagement = useWebhookManagement({
blockId,
triggerId: undefined,
@@ -510,10 +508,12 @@ function SubBlockComponent({
| null
| undefined
const contextValues = dependencyContext ?? (isPreview ? subBlockValues : undefined)
const { finalDisabled: gatedDisabled } = useDependsOnGate(blockId, config, {
disabled,
isPreview,
previewContextValues: isPreview ? subBlockValues : undefined,
previewContextValues: contextValues,
})
const isDisabled = gatedDisabled
@@ -797,7 +797,7 @@ function SubBlockComponent({
disabled={isDisabled}
isPreview={isPreview}
previewValue={previewValue}
previewContextValues={isPreview ? subBlockValues : undefined}
previewContextValues={contextValues}
/>
)
@@ -809,7 +809,7 @@ function SubBlockComponent({
disabled={isDisabled}
isPreview={isPreview}
previewValue={previewValue}
previewContextValues={isPreview ? subBlockValues : undefined}
previewContextValues={contextValues}
/>
)
@@ -821,7 +821,7 @@ function SubBlockComponent({
disabled={isDisabled}
isPreview={isPreview}
previewValue={previewValue}
previewContextValues={isPreview ? subBlockValues : undefined}
previewContextValues={contextValues}
/>
)
@@ -833,7 +833,7 @@ function SubBlockComponent({
disabled={isDisabled}
isPreview={isPreview}
previewValue={previewValue}
previewContextValues={isPreview ? subBlockValues : undefined}
previewContextValues={contextValues}
/>
)
@@ -845,7 +845,7 @@ function SubBlockComponent({
disabled={isDisabled}
isPreview={isPreview}
previewValue={previewValue}
previewContextValues={isPreview ? subBlockValues : undefined}
previewContextValues={contextValues}
/>
)
@@ -868,7 +868,7 @@ function SubBlockComponent({
disabled={isDisabled}
isPreview={isPreview}
previewValue={previewValue as any}
previewContextValues={isPreview ? subBlockValues : undefined}
previewContextValues={contextValues}
/>
)
@@ -880,7 +880,7 @@ function SubBlockComponent({
disabled={isDisabled}
isPreview={isPreview}
previewValue={previewValue as any}
previewContextValues={isPreview ? subBlockValues : undefined}
previewContextValues={contextValues}
/>
)
@@ -892,7 +892,7 @@ function SubBlockComponent({
disabled={isDisabled}
isPreview={isPreview}
previewValue={previewValue as any}
previewContextValues={isPreview ? subBlockValues : undefined}
previewContextValues={contextValues}
/>
)
@@ -917,7 +917,7 @@ function SubBlockComponent({
isPreview={isPreview}
previewValue={previewValue as any}
disabled={isDisabled}
previewContextValues={isPreview ? subBlockValues : undefined}
previewContextValues={contextValues}
/>
)
@@ -953,7 +953,7 @@ function SubBlockComponent({
disabled={isDisabled}
isPreview={isPreview}
previewValue={previewValue}
previewContextValues={isPreview ? subBlockValues : undefined}
previewContextValues={contextValues}
/>
)
@@ -987,7 +987,7 @@ function SubBlockComponent({
disabled={isDisabled}
isPreview={isPreview}
previewValue={previewValue as any}
previewContextValues={isPreview ? subBlockValues : undefined}
previewContextValues={contextValues}
/>
)
@@ -999,7 +999,7 @@ function SubBlockComponent({
disabled={isDisabled}
isPreview={isPreview}
previewValue={previewValue}
previewContextValues={isPreview ? subBlockValues : undefined}
previewContextValues={contextValues}
/>
)
@@ -1059,7 +1059,8 @@ function SubBlockComponent({
showCopyButton: Boolean(config.showCopyButton && config.useWebhookUrl),
copied,
onCopy: handleCopy,
}
},
labelSuffix
)}
{renderInput()}
</div>

View File

@@ -571,7 +571,6 @@ export function Editor() {
isPreview={false}
subBlockValues={subBlockState}
disabled={!canEditBlock}
fieldDiffStatus={undefined}
allowExpandInPreview={false}
canonicalToggle={
isCanonicalSwap && canonicalMode && canonicalId
@@ -635,7 +634,6 @@ export function Editor() {
isPreview={false}
subBlockValues={subBlockState}
disabled={!canEditBlock}
fieldDiffStatus={undefined}
allowExpandInPreview={false}
/>
{index < advancedOnlySubBlocks.length - 1 && (

View File

@@ -340,13 +340,7 @@ export const Panel = memo(function Panel() {
* Register global keyboard shortcuts using the central commands registry.
*
* - Mod+Enter: Run / cancel workflow (matches the Run button behavior)
* - C: Focus Copilot tab
* - T: Focus Toolbar tab
* - E: Focus Editor tab
* - Mod+F: Focus Toolbar tab and search input
*
* The tab-switching commands are disabled inside editable elements so typing
* in inputs or textareas is not interrupted.
*/
useRegisterGlobalCommands(() =>
createCommands([
@@ -363,33 +357,6 @@ export const Panel = memo(function Panel() {
allowInEditable: false,
},
},
{
id: 'focus-copilot-tab',
handler: () => {
setActiveTab('copilot')
},
overrides: {
allowInEditable: false,
},
},
{
id: 'focus-toolbar-tab',
handler: () => {
setActiveTab('toolbar')
},
overrides: {
allowInEditable: false,
},
},
{
id: 'focus-editor-tab',
handler: () => {
setActiveTab('editor')
},
overrides: {
allowInEditable: false,
},
},
{
id: 'focus-toolbar-search',
handler: () => {

View File

@@ -1,4 +1,4 @@
import { useCallback, useRef, useState } from 'react'
import { useCallback, useEffect, useRef, useState } from 'react'
import { createLogger } from '@sim/logger'
import { useQueryClient } from '@tanstack/react-query'
import { v4 as uuidv4 } from 'uuid'
@@ -46,7 +46,13 @@ import { useWorkflowStore } from '@/stores/workflows/workflow/store'
const logger = createLogger('useWorkflowExecution')
// Debug state validation result
/**
* Module-level Set tracking which workflows have an active reconnection effect.
* Prevents multiple hook instances (from different components) from starting
* concurrent reconnection streams for the same workflow during the same mount cycle.
*/
const activeReconnections = new Set<string>()
interface DebugValidationResult {
isValid: boolean
error?: string
@@ -54,7 +60,7 @@ interface DebugValidationResult {
interface BlockEventHandlerConfig {
workflowId?: string
executionId?: string
executionIdRef: { current: string }
workflowEdges: Array<{ id: string; target: string; sourceHandle?: string | null }>
activeBlocksSet: Set<string>
accumulatedBlockLogs: BlockLog[]
@@ -108,12 +114,15 @@ export function useWorkflowExecution() {
const queryClient = useQueryClient()
const currentWorkflow = useCurrentWorkflow()
const { activeWorkflowId, workflows } = useWorkflowRegistry()
const { toggleConsole, addConsole, updateConsole, cancelRunningEntries } =
const { toggleConsole, addConsole, updateConsole, cancelRunningEntries, clearExecutionEntries } =
useTerminalConsoleStore()
const hasHydrated = useTerminalConsoleStore((s) => s._hasHydrated)
const { getAllVariables } = useEnvironmentStore()
const { getVariablesByWorkflowId, variables } = useVariablesStore()
const { isExecuting, isDebugging, pendingBlocks, executor, debugContext } =
useCurrentWorkflowExecution()
const setCurrentExecutionId = useExecutionStore((s) => s.setCurrentExecutionId)
const getCurrentExecutionId = useExecutionStore((s) => s.getCurrentExecutionId)
const setIsExecuting = useExecutionStore((s) => s.setIsExecuting)
const setIsDebugging = useExecutionStore((s) => s.setIsDebugging)
const setPendingBlocks = useExecutionStore((s) => s.setPendingBlocks)
@@ -297,7 +306,7 @@ export function useWorkflowExecution() {
(config: BlockEventHandlerConfig) => {
const {
workflowId,
executionId,
executionIdRef,
workflowEdges,
activeBlocksSet,
accumulatedBlockLogs,
@@ -308,6 +317,14 @@ export function useWorkflowExecution() {
onBlockCompleteCallback,
} = config
/** Returns true if this execution was cancelled or superseded by another run. */
const isStaleExecution = () =>
!!(
workflowId &&
executionIdRef.current &&
useExecutionStore.getState().getCurrentExecutionId(workflowId) !== executionIdRef.current
)
const updateActiveBlocks = (blockId: string, isActive: boolean) => {
if (!workflowId) return
if (isActive) {
@@ -360,7 +377,7 @@ export function useWorkflowExecution() {
endedAt: data.endedAt,
workflowId,
blockId: data.blockId,
executionId,
executionId: executionIdRef.current,
blockName: data.blockName || 'Unknown Block',
blockType: data.blockType || 'unknown',
iterationCurrent: data.iterationCurrent,
@@ -383,7 +400,7 @@ export function useWorkflowExecution() {
endedAt: data.endedAt,
workflowId,
blockId: data.blockId,
executionId,
executionId: executionIdRef.current,
blockName: data.blockName || 'Unknown Block',
blockType: data.blockType || 'unknown',
iterationCurrent: data.iterationCurrent,
@@ -410,7 +427,7 @@ export function useWorkflowExecution() {
iterationType: data.iterationType,
iterationContainerId: data.iterationContainerId,
},
executionId
executionIdRef.current
)
}
@@ -432,11 +449,12 @@ export function useWorkflowExecution() {
iterationType: data.iterationType,
iterationContainerId: data.iterationContainerId,
},
executionId
executionIdRef.current
)
}
const onBlockStarted = (data: BlockStartedData) => {
if (isStaleExecution()) return
updateActiveBlocks(data.blockId, true)
markIncomingEdges(data.blockId)
@@ -453,7 +471,7 @@ export function useWorkflowExecution() {
endedAt: undefined,
workflowId,
blockId: data.blockId,
executionId,
executionId: executionIdRef.current,
blockName: data.blockName || 'Unknown Block',
blockType: data.blockType || 'unknown',
isRunning: true,
@@ -465,6 +483,7 @@ export function useWorkflowExecution() {
}
const onBlockCompleted = (data: BlockCompletedData) => {
if (isStaleExecution()) return
updateActiveBlocks(data.blockId, false)
if (workflowId) setBlockRunStatus(workflowId, data.blockId, 'success')
@@ -495,6 +514,7 @@ export function useWorkflowExecution() {
}
const onBlockError = (data: BlockErrorData) => {
if (isStaleExecution()) return
updateActiveBlocks(data.blockId, false)
if (workflowId) setBlockRunStatus(workflowId, data.blockId, 'error')
@@ -902,10 +922,6 @@ export function useWorkflowExecution() {
// Update block logs with actual stream completion times
if (result.logs && streamCompletionTimes.size > 0) {
const streamCompletionEndTime = new Date(
Math.max(...Array.from(streamCompletionTimes.values()))
).toISOString()
result.logs.forEach((log: BlockLog) => {
if (streamCompletionTimes.has(log.blockId)) {
const completionTime = streamCompletionTimes.get(log.blockId)!
@@ -987,7 +1003,6 @@ export function useWorkflowExecution() {
return { success: true, stream }
}
// For manual (non-chat) execution
const manualExecutionId = uuidv4()
try {
const result = await executeWorkflow(
@@ -1002,29 +1017,10 @@ export function useWorkflowExecution() {
if (result.metadata.pendingBlocks) {
setPendingBlocks(activeWorkflowId, result.metadata.pendingBlocks)
}
} else if (result && 'success' in result) {
setExecutionResult(result)
// Reset execution state after successful non-debug execution
setIsExecuting(activeWorkflowId, false)
setIsDebugging(activeWorkflowId, false)
setActiveBlocks(activeWorkflowId, new Set())
if (isChatExecution) {
if (!result.metadata) {
result.metadata = { duration: 0, startTime: new Date().toISOString() }
}
;(result.metadata as any).source = 'chat'
}
// Invalidate subscription queries to update usage
setTimeout(() => {
queryClient.invalidateQueries({ queryKey: subscriptionKeys.all })
}, 1000)
}
return result
} catch (error: any) {
const errorResult = handleExecutionError(error, { executionId: manualExecutionId })
// Note: Error logs are already persisted server-side via execution-core.ts
return errorResult
}
},
@@ -1275,7 +1271,7 @@ export function useWorkflowExecution() {
if (activeWorkflowId) {
logger.info('Using server-side executor')
const executionId = uuidv4()
const executionIdRef = { current: '' }
let executionResult: ExecutionResult = {
success: false,
@@ -1293,7 +1289,7 @@ export function useWorkflowExecution() {
try {
const blockHandlers = buildBlockEventHandlers({
workflowId: activeWorkflowId,
executionId,
executionIdRef,
workflowEdges,
activeBlocksSet,
accumulatedBlockLogs,
@@ -1326,6 +1322,10 @@ export function useWorkflowExecution() {
loops: clientWorkflowState.loops,
parallels: clientWorkflowState.parallels,
},
onExecutionId: (id) => {
executionIdRef.current = id
setCurrentExecutionId(activeWorkflowId, id)
},
callbacks: {
onExecutionStarted: (data) => {
logger.info('Server execution started:', data)
@@ -1368,6 +1368,18 @@ export function useWorkflowExecution() {
},
onExecutionCompleted: (data) => {
if (
activeWorkflowId &&
executionIdRef.current &&
useExecutionStore.getState().getCurrentExecutionId(activeWorkflowId) !==
executionIdRef.current
)
return
if (activeWorkflowId) {
setCurrentExecutionId(activeWorkflowId, null)
}
executionResult = {
success: data.success,
output: data.output,
@@ -1425,9 +1437,33 @@ export function useWorkflowExecution() {
})
}
}
const workflowExecState = activeWorkflowId
? useExecutionStore.getState().getWorkflowExecution(activeWorkflowId)
: null
if (activeWorkflowId && !workflowExecState?.isDebugging) {
setExecutionResult(executionResult)
setIsExecuting(activeWorkflowId, false)
setActiveBlocks(activeWorkflowId, new Set())
setTimeout(() => {
queryClient.invalidateQueries({ queryKey: subscriptionKeys.all })
}, 1000)
}
},
onExecutionError: (data) => {
if (
activeWorkflowId &&
executionIdRef.current &&
useExecutionStore.getState().getCurrentExecutionId(activeWorkflowId) !==
executionIdRef.current
)
return
if (activeWorkflowId) {
setCurrentExecutionId(activeWorkflowId, null)
}
executionResult = {
success: false,
output: {},
@@ -1441,43 +1477,53 @@ export function useWorkflowExecution() {
const isPreExecutionError = accumulatedBlockLogs.length === 0
handleExecutionErrorConsole({
workflowId: activeWorkflowId,
executionId,
executionId: executionIdRef.current,
error: data.error,
durationMs: data.duration,
blockLogs: accumulatedBlockLogs,
isPreExecutionError,
})
if (activeWorkflowId) {
setIsExecuting(activeWorkflowId, false)
setIsDebugging(activeWorkflowId, false)
setActiveBlocks(activeWorkflowId, new Set())
}
},
onExecutionCancelled: (data) => {
if (
activeWorkflowId &&
executionIdRef.current &&
useExecutionStore.getState().getCurrentExecutionId(activeWorkflowId) !==
executionIdRef.current
)
return
if (activeWorkflowId) {
setCurrentExecutionId(activeWorkflowId, null)
}
handleExecutionCancelledConsole({
workflowId: activeWorkflowId,
executionId,
executionId: executionIdRef.current,
durationMs: data?.duration,
})
if (activeWorkflowId) {
setIsExecuting(activeWorkflowId, false)
setIsDebugging(activeWorkflowId, false)
setActiveBlocks(activeWorkflowId, new Set())
}
},
},
})
return executionResult
} catch (error: any) {
// Don't log abort errors - they're intentional user actions
if (error.name === 'AbortError' || error.message?.includes('aborted')) {
logger.info('Execution aborted by user')
// Reset execution state
if (activeWorkflowId) {
setIsExecuting(activeWorkflowId, false)
setActiveBlocks(activeWorkflowId, new Set())
}
// Return gracefully without error
return {
success: false,
output: {},
metadata: { duration: 0 },
logs: [],
}
return executionResult
}
logger.error('Server-side execution failed:', error)
@@ -1485,7 +1531,6 @@ export function useWorkflowExecution() {
}
}
// Fallback: should never reach here
throw new Error('Server-side execution is required')
}
@@ -1717,25 +1762,28 @@ export function useWorkflowExecution() {
* Handles cancelling the current workflow execution
*/
const handleCancelExecution = useCallback(() => {
if (!activeWorkflowId) return
logger.info('Workflow execution cancellation requested')
// Cancel the execution stream for this workflow (server-side)
executionStream.cancel(activeWorkflowId ?? undefined)
const storedExecutionId = getCurrentExecutionId(activeWorkflowId)
// Mark current chat execution as superseded so its cleanup won't affect new executions
currentChatExecutionIdRef.current = null
// Mark all running entries as canceled in the terminal
if (activeWorkflowId) {
cancelRunningEntries(activeWorkflowId)
// Reset execution state - this triggers chat stream cleanup via useEffect in chat.tsx
setIsExecuting(activeWorkflowId, false)
setIsDebugging(activeWorkflowId, false)
setActiveBlocks(activeWorkflowId, new Set())
if (storedExecutionId) {
setCurrentExecutionId(activeWorkflowId, null)
fetch(`/api/workflows/${activeWorkflowId}/executions/${storedExecutionId}/cancel`, {
method: 'POST',
}).catch(() => {})
handleExecutionCancelledConsole({
workflowId: activeWorkflowId,
executionId: storedExecutionId,
})
}
// If in debug mode, also reset debug state
executionStream.cancel(activeWorkflowId)
currentChatExecutionIdRef.current = null
setIsExecuting(activeWorkflowId, false)
setIsDebugging(activeWorkflowId, false)
setActiveBlocks(activeWorkflowId, new Set())
if (isDebugging) {
resetDebugState()
}
@@ -1747,7 +1795,9 @@ export function useWorkflowExecution() {
setIsDebugging,
setActiveBlocks,
activeWorkflowId,
cancelRunningEntries,
getCurrentExecutionId,
setCurrentExecutionId,
handleExecutionCancelledConsole,
])
/**
@@ -1847,7 +1897,7 @@ export function useWorkflowExecution() {
}
setIsExecuting(workflowId, true)
const executionId = uuidv4()
const executionIdRef = { current: '' }
const accumulatedBlockLogs: BlockLog[] = []
const accumulatedBlockStates = new Map<string, BlockState>()
const executedBlockIds = new Set<string>()
@@ -1856,7 +1906,7 @@ export function useWorkflowExecution() {
try {
const blockHandlers = buildBlockEventHandlers({
workflowId,
executionId,
executionIdRef,
workflowEdges,
activeBlocksSet,
accumulatedBlockLogs,
@@ -1871,6 +1921,10 @@ export function useWorkflowExecution() {
startBlockId: blockId,
sourceSnapshot: effectiveSnapshot,
input: workflowInput,
onExecutionId: (id) => {
executionIdRef.current = id
setCurrentExecutionId(workflowId, id)
},
callbacks: {
onBlockStarted: blockHandlers.onBlockStarted,
onBlockCompleted: blockHandlers.onBlockCompleted,
@@ -1878,7 +1932,6 @@ export function useWorkflowExecution() {
onExecutionCompleted: (data) => {
if (data.success) {
// Add the start block (trigger) to executed blocks
executedBlockIds.add(blockId)
const mergedBlockStates: Record<string, BlockState> = {
@@ -1902,6 +1955,10 @@ export function useWorkflowExecution() {
}
setLastExecutionSnapshot(workflowId, updatedSnapshot)
}
setCurrentExecutionId(workflowId, null)
setIsExecuting(workflowId, false)
setActiveBlocks(workflowId, new Set())
},
onExecutionError: (data) => {
@@ -1921,19 +1978,27 @@ export function useWorkflowExecution() {
handleExecutionErrorConsole({
workflowId,
executionId,
executionId: executionIdRef.current,
error: data.error,
durationMs: data.duration,
blockLogs: accumulatedBlockLogs,
})
setCurrentExecutionId(workflowId, null)
setIsExecuting(workflowId, false)
setActiveBlocks(workflowId, new Set())
},
onExecutionCancelled: (data) => {
handleExecutionCancelledConsole({
workflowId,
executionId,
executionId: executionIdRef.current,
durationMs: data?.duration,
})
setCurrentExecutionId(workflowId, null)
setIsExecuting(workflowId, false)
setActiveBlocks(workflowId, new Set())
},
},
})
@@ -1942,14 +2007,20 @@ export function useWorkflowExecution() {
logger.error('Run-from-block failed:', error)
}
} finally {
setIsExecuting(workflowId, false)
setActiveBlocks(workflowId, new Set())
const currentId = getCurrentExecutionId(workflowId)
if (currentId === null || currentId === executionIdRef.current) {
setCurrentExecutionId(workflowId, null)
setIsExecuting(workflowId, false)
setActiveBlocks(workflowId, new Set())
}
}
},
[
getLastExecutionSnapshot,
setLastExecutionSnapshot,
clearLastExecutionSnapshot,
getCurrentExecutionId,
setCurrentExecutionId,
setIsExecuting,
setActiveBlocks,
setBlockRunStatus,
@@ -1979,29 +2050,213 @@ export function useWorkflowExecution() {
const executionId = uuidv4()
try {
const result = await executeWorkflow(
undefined,
undefined,
executionId,
undefined,
'manual',
blockId
)
if (result && 'success' in result) {
setExecutionResult(result)
}
await executeWorkflow(undefined, undefined, executionId, undefined, 'manual', blockId)
} catch (error) {
const errorResult = handleExecutionError(error, { executionId })
return errorResult
} finally {
setCurrentExecutionId(workflowId, null)
setIsExecuting(workflowId, false)
setIsDebugging(workflowId, false)
setActiveBlocks(workflowId, new Set())
}
},
[activeWorkflowId, setExecutionResult, setIsExecuting, setIsDebugging, setActiveBlocks]
[
activeWorkflowId,
setCurrentExecutionId,
setExecutionResult,
setIsExecuting,
setIsDebugging,
setActiveBlocks,
]
)
useEffect(() => {
if (!activeWorkflowId || !hasHydrated) return
const entries = useTerminalConsoleStore.getState().entries
const runningEntries = entries.filter(
(e) => e.isRunning && e.workflowId === activeWorkflowId && e.executionId
)
if (runningEntries.length === 0) return
if (activeReconnections.has(activeWorkflowId)) return
activeReconnections.add(activeWorkflowId)
executionStream.cancel(activeWorkflowId)
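// Resume the most recently started interrupted execution; older running entries are not reconnected.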
const sorted = [...runningEntries].sort((a, b) => {
const aTime = a.startedAt ? new Date(a.startedAt).getTime() : 0
const bTime = b.startedAt ? new Date(b.startedAt).getTime() : 0
return bTime - aTime
})
const executionId = sorted[0].executionId!
const otherExecutionIds = new Set(
sorted.filter((e) => e.executionId !== executionId).map((e) => e.executionId!)
)
if (otherExecutionIds.size > 0) {
cancelRunningEntries(activeWorkflowId)
}
setCurrentExecutionId(activeWorkflowId, executionId)
setIsExecuting(activeWorkflowId, true)
const workflowEdges = useWorkflowStore.getState().edges
const activeBlocksSet = new Set<string>()
const accumulatedBlockLogs: BlockLog[] = []
const accumulatedBlockStates = new Map<string, BlockState>()
const executedBlockIds = new Set<string>()
const executionIdRef = { current: executionId }
const handlers = buildBlockEventHandlers({
workflowId: activeWorkflowId,
executionIdRef,
workflowEdges,
activeBlocksSet,
accumulatedBlockLogs,
accumulatedBlockStates,
executedBlockIds,
consoleMode: 'update',
includeStartConsoleEntry: true,
})
const originalEntries = entries
.filter((e) => e.executionId === executionId)
.map((e) => ({ ...e }))
let cleared = false
let reconnectionComplete = false
let cleanupRan = false
const clearOnce = () => {
if (!cleared) {
cleared = true
clearExecutionEntries(executionId)
}
}
const reconnectWorkflowId = activeWorkflowId
executionStream
.reconnect({
workflowId: reconnectWorkflowId,
executionId,
callbacks: {
onBlockStarted: (data) => {
clearOnce()
handlers.onBlockStarted(data)
},
onBlockCompleted: (data) => {
clearOnce()
handlers.onBlockCompleted(data)
},
onBlockError: (data) => {
clearOnce()
handlers.onBlockError(data)
},
onExecutionCompleted: () => {
const currentId = useExecutionStore
.getState()
.getCurrentExecutionId(reconnectWorkflowId)
if (currentId !== executionId) {
reconnectionComplete = true
activeReconnections.delete(reconnectWorkflowId)
return
}
clearOnce()
reconnectionComplete = true
activeReconnections.delete(reconnectWorkflowId)
setCurrentExecutionId(reconnectWorkflowId, null)
setIsExecuting(reconnectWorkflowId, false)
setActiveBlocks(reconnectWorkflowId, new Set())
},
onExecutionError: (data) => {
const currentId = useExecutionStore
.getState()
.getCurrentExecutionId(reconnectWorkflowId)
if (currentId !== executionId) {
reconnectionComplete = true
activeReconnections.delete(reconnectWorkflowId)
return
}
clearOnce()
reconnectionComplete = true
activeReconnections.delete(reconnectWorkflowId)
setCurrentExecutionId(reconnectWorkflowId, null)
setIsExecuting(reconnectWorkflowId, false)
setActiveBlocks(reconnectWorkflowId, new Set())
handleExecutionErrorConsole({
workflowId: reconnectWorkflowId,
executionId,
error: data.error,
blockLogs: accumulatedBlockLogs,
})
},
onExecutionCancelled: () => {
const currentId = useExecutionStore
.getState()
.getCurrentExecutionId(reconnectWorkflowId)
if (currentId !== executionId) {
reconnectionComplete = true
activeReconnections.delete(reconnectWorkflowId)
return
}
clearOnce()
reconnectionComplete = true
activeReconnections.delete(reconnectWorkflowId)
setCurrentExecutionId(reconnectWorkflowId, null)
setIsExecuting(reconnectWorkflowId, false)
setActiveBlocks(reconnectWorkflowId, new Set())
handleExecutionCancelledConsole({
workflowId: reconnectWorkflowId,
executionId,
})
},
},
})
.catch((error) => {
logger.warn('Execution reconnection failed', { executionId, error })
})
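// Fallback when the stream finishes without a terminal event: clear any partial entries and
// re-add the originals as finished, with a warning pointing to the logs page for the final result.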
.finally(() => {
if (reconnectionComplete || cleanupRan) return
const currentId = useExecutionStore.getState().getCurrentExecutionId(reconnectWorkflowId)
if (currentId !== executionId) return
reconnectionComplete = true
activeReconnections.delete(reconnectWorkflowId)
clearExecutionEntries(executionId)
for (const entry of originalEntries) {
addConsole({
workflowId: entry.workflowId,
blockId: entry.blockId,
blockName: entry.blockName,
blockType: entry.blockType,
executionId: entry.executionId,
executionOrder: entry.executionOrder,
isRunning: false,
warning: 'Execution result unavailable — check the logs page',
})
}
setCurrentExecutionId(reconnectWorkflowId, null)
setIsExecuting(reconnectWorkflowId, false)
setActiveBlocks(reconnectWorkflowId, new Set())
})
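// Effect cleanup: cancel the reconnection stream and, if entries were cleared mid-stream,
// restore the originals so switching workflows or unmounting does not lose console history.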
return () => {
cleanupRan = true
executionStream.cancel(reconnectWorkflowId)
activeReconnections.delete(reconnectWorkflowId)
if (cleared && !reconnectionComplete) {
clearExecutionEntries(executionId)
for (const entry of originalEntries) {
addConsole(entry)
}
}
}
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [activeWorkflowId, hasHydrated])
return {
isExecuting,
isDebugging,

View File

@@ -1,3 +1,4 @@
export { CancelSubscription } from './cancel-subscription'
export { CreditBalance } from './credit-balance'
export { PlanCard, type PlanCardProps, type PlanFeature } from './plan-card'
export { ReferralCode } from './referral-code'

View File

@@ -0,0 +1,103 @@
'use client'
import { useState } from 'react'
import { createLogger } from '@sim/logger'
import { Button, Input, Label } from '@/components/emcn'
const logger = createLogger('ReferralCode')
interface ReferralCodeProps {
onRedeemComplete?: () => void
}
/**
* Inline referral/promo code entry field with redeem button.
* One-time use per account — shows success or "already redeemed" state.
*/
export function ReferralCode({ onRedeemComplete }: ReferralCodeProps) {
const [code, setCode] = useState('')
const [isRedeeming, setIsRedeeming] = useState(false)
const [error, setError] = useState<string | null>(null)
const [success, setSuccess] = useState<{ bonusAmount: number } | null>(null)
const handleRedeem = async () => {
const trimmed = code.trim()
if (!trimmed || isRedeeming) return
setIsRedeeming(true)
setError(null)
try {
const response = await fetch('/api/referral-code/redeem', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ code: trimmed }),
})
const data = await response.json()
if (!response.ok) {
throw new Error(data.error || 'Failed to redeem code')
}
if (data.redeemed) {
setSuccess({ bonusAmount: data.bonusAmount })
setCode('')
onRedeemComplete?.()
} else {
setError(data.error || 'Code could not be redeemed')
}
} catch (err) {
logger.error('Referral code redemption failed', { error: err })
setError(err instanceof Error ? err.message : 'Failed to redeem code')
} finally {
setIsRedeeming(false)
}
}
if (success) {
return (
<div className='flex items-center justify-between'>
<Label>Referral Code</Label>
<span className='text-[12px] text-[var(--text-secondary)]'>
+${success.bonusAmount} credits applied
</span>
</div>
)
}
return (
<div className='flex flex-col'>
<div className='flex items-center justify-between gap-[12px]'>
<Label className='shrink-0'>Referral Code</Label>
<div className='flex items-center gap-[8px]'>
<Input
type='text'
value={code}
onChange={(e) => {
setCode(e.target.value)
setError(null)
}}
onKeyDown={(e) => {
if (e.key === 'Enter') handleRedeem()
}}
placeholder='Enter code'
className='h-[32px] w-[140px] text-[12px]'
disabled={isRedeeming}
/>
<Button
variant='active'
className='h-[32px] shrink-0 rounded-[6px] text-[12px]'
onClick={handleRedeem}
disabled={isRedeeming || !code.trim()}
>
{isRedeeming ? 'Redeeming...' : 'Redeem'}
</Button>
</div>
</div>
<div className='mt-[4px] min-h-[18px] text-right'>
{error && <span className='text-[11px] text-[var(--text-error)]'>{error}</span>}
</div>
</div>
)
}
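For reference, a hedged sketch of the response contract this component assumes from POST /api/referral-code/redeem; the field names are inferred from the handler above, not taken from the API route source:

interface RedeemResponse {
  redeemed?: boolean   // true when the code was applied to this account
  bonusAmount?: number // credits granted on success
  error?: string       // present on failure, e.g. invalid or already-redeemed codes
}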

View File

@@ -17,6 +17,7 @@ import {
CancelSubscription,
CreditBalance,
PlanCard,
ReferralCode,
} from '@/app/workspace/[workspaceId]/w/components/sidebar/components/settings-modal/components/subscription/components'
import {
ENTERPRISE_PLAN_FEATURES,
@@ -549,6 +550,10 @@ export function Subscription() {
/>
)}
{!subscription.isEnterprise && (
<ReferralCode onRedeemComplete={() => refetchSubscription()} />
)}
{/* Next Billing Date - hidden from team members */}
{subscription.isPaid &&
subscriptionData?.data?.periodEnd &&

View File

@@ -4,12 +4,14 @@ import { useEffect } from 'react'
import { createLogger } from '@sim/logger'
import { useRouter } from 'next/navigation'
import { useSession } from '@/lib/auth/auth-client'
import { useReferralAttribution } from '@/hooks/use-referral-attribution'
const logger = createLogger('WorkspacePage')
export default function WorkspacePage() {
const router = useRouter()
const { data: session, isPending } = useSession()
useReferralAttribution()
useEffect(() => {
const redirectToFirstWorkspace = async () => {

View File

@@ -589,6 +589,7 @@ export async function executeScheduleJob(payload: ScheduleExecutionPayload) {
export const scheduleExecution = task({
id: 'schedule-execution',
machine: 'medium-1x',
retry: {
maxAttempts: 1,
},

View File

@@ -669,6 +669,7 @@ async function executeWebhookJobInternal(
export const webhookExecution = task({
id: 'webhook-execution',
machine: 'medium-1x',
retry: {
maxAttempts: 1,
},

View File

@@ -197,5 +197,6 @@ export async function executeWorkflowJob(payload: WorkflowExecutionPayload) {
export const workflowExecutionTask = task({
id: 'workflow-execution',
machine: 'medium-1x',
run: executeWorkflowJob,
})

View File

@@ -394,6 +394,7 @@ export const ConfluenceV2Block: BlockConfig<ConfluenceResponse> = {
// Page Property Operations
{ label: 'List Page Properties', id: 'list_page_properties' },
{ label: 'Create Page Property', id: 'create_page_property' },
{ label: 'Delete Page Property', id: 'delete_page_property' },
// Search Operations
{ label: 'Search Content', id: 'search' },
{ label: 'Search in Space', id: 'search_in_space' },
@@ -414,6 +415,9 @@ export const ConfluenceV2Block: BlockConfig<ConfluenceResponse> = {
// Label Operations
{ label: 'List Labels', id: 'list_labels' },
{ label: 'Add Label', id: 'add_label' },
{ label: 'Delete Label', id: 'delete_label' },
{ label: 'Get Pages by Label', id: 'get_pages_by_label' },
{ label: 'List Space Labels', id: 'list_space_labels' },
// Space Operations
{ label: 'Get Space', id: 'get_space' },
{ label: 'List Spaces', id: 'list_spaces' },
@@ -485,6 +489,8 @@ export const ConfluenceV2Block: BlockConfig<ConfluenceResponse> = {
'search_in_space',
'get_space',
'list_spaces',
'get_pages_by_label',
'list_space_labels',
],
not: true,
},
@@ -500,6 +506,8 @@ export const ConfluenceV2Block: BlockConfig<ConfluenceResponse> = {
'list_labels',
'upload_attachment',
'add_label',
'delete_label',
'delete_page_property',
'get_page_children',
'get_page_ancestors',
'list_page_versions',
@@ -527,6 +535,8 @@ export const ConfluenceV2Block: BlockConfig<ConfluenceResponse> = {
'search_in_space',
'get_space',
'list_spaces',
'get_pages_by_label',
'list_space_labels',
],
not: true,
},
@@ -542,6 +552,8 @@ export const ConfluenceV2Block: BlockConfig<ConfluenceResponse> = {
'list_labels',
'upload_attachment',
'add_label',
'delete_label',
'delete_page_property',
'get_page_children',
'get_page_ancestors',
'list_page_versions',
@@ -566,6 +578,7 @@ export const ConfluenceV2Block: BlockConfig<ConfluenceResponse> = {
'search_in_space',
'create_blogpost',
'list_blogposts_in_space',
'list_space_labels',
],
},
},
@@ -601,6 +614,14 @@ export const ConfluenceV2Block: BlockConfig<ConfluenceResponse> = {
required: true,
condition: { field: 'operation', value: 'create_page_property' },
},
{
id: 'propertyId',
title: 'Property ID',
type: 'short-input',
placeholder: 'Enter property ID to delete',
required: true,
condition: { field: 'operation', value: 'delete_page_property' },
},
{
id: 'title',
title: 'Title',
@@ -694,7 +715,7 @@ export const ConfluenceV2Block: BlockConfig<ConfluenceResponse> = {
type: 'short-input',
placeholder: 'Enter label name',
required: true,
condition: { field: 'operation', value: 'add_label' },
condition: { field: 'operation', value: ['add_label', 'delete_label'] },
},
{
id: 'labelPrefix',
@@ -709,6 +730,14 @@ export const ConfluenceV2Block: BlockConfig<ConfluenceResponse> = {
value: () => 'global',
condition: { field: 'operation', value: 'add_label' },
},
{
id: 'labelId',
title: 'Label ID',
type: 'short-input',
placeholder: 'Enter label ID',
required: true,
condition: { field: 'operation', value: 'get_pages_by_label' },
},
{
id: 'blogPostStatus',
title: 'Status',
@@ -759,6 +788,8 @@ export const ConfluenceV2Block: BlockConfig<ConfluenceResponse> = {
'list_page_versions',
'list_page_properties',
'list_labels',
'get_pages_by_label',
'list_space_labels',
],
},
},
@@ -780,6 +811,8 @@ export const ConfluenceV2Block: BlockConfig<ConfluenceResponse> = {
'list_page_versions',
'list_page_properties',
'list_labels',
'get_pages_by_label',
'list_space_labels',
],
},
},
@@ -800,6 +833,7 @@ export const ConfluenceV2Block: BlockConfig<ConfluenceResponse> = {
// Property Tools
'confluence_list_page_properties',
'confluence_create_page_property',
'confluence_delete_page_property',
// Search Tools
'confluence_search',
'confluence_search_in_space',
@@ -820,6 +854,9 @@ export const ConfluenceV2Block: BlockConfig<ConfluenceResponse> = {
// Label Tools
'confluence_list_labels',
'confluence_add_label',
'confluence_delete_label',
'confluence_get_pages_by_label',
'confluence_list_space_labels',
// Space Tools
'confluence_get_space',
'confluence_list_spaces',
@@ -852,6 +889,8 @@ export const ConfluenceV2Block: BlockConfig<ConfluenceResponse> = {
return 'confluence_list_page_properties'
case 'create_page_property':
return 'confluence_create_page_property'
case 'delete_page_property':
return 'confluence_delete_page_property'
// Search Operations
case 'search':
return 'confluence_search'
@@ -887,6 +926,12 @@ export const ConfluenceV2Block: BlockConfig<ConfluenceResponse> = {
return 'confluence_list_labels'
case 'add_label':
return 'confluence_add_label'
case 'delete_label':
return 'confluence_delete_label'
case 'get_pages_by_label':
return 'confluence_get_pages_by_label'
case 'list_space_labels':
return 'confluence_list_space_labels'
// Space Operations
case 'get_space':
return 'confluence_get_space'
@@ -908,7 +953,9 @@ export const ConfluenceV2Block: BlockConfig<ConfluenceResponse> = {
versionNumber,
propertyKey,
propertyValue,
propertyId,
labelPrefix,
labelId,
blogPostStatus,
purge,
bodyFormat,
@@ -959,7 +1006,9 @@ export const ConfluenceV2Block: BlockConfig<ConfluenceResponse> = {
}
}
// Operations that support cursor pagination
// Operations that support generic cursor pagination.
// get_pages_by_label and list_space_labels have dedicated handlers
// below that pass cursor along with their required params (labelId, spaceId).
const supportsCursor = [
'list_attachments',
'list_spaces',
@@ -996,6 +1045,35 @@ export const ConfluenceV2Block: BlockConfig<ConfluenceResponse> = {
}
}
if (operation === 'delete_page_property') {
return {
credential,
pageId: effectivePageId,
operation,
propertyId,
...rest,
}
}
if (operation === 'get_pages_by_label') {
return {
credential,
operation,
labelId,
cursor: cursor || undefined,
...rest,
}
}
if (operation === 'list_space_labels') {
return {
credential,
operation,
cursor: cursor || undefined,
...rest,
}
}
if (operation === 'upload_attachment') {
const normalizedFile = normalizeFileInput(attachmentFile, { single: true })
if (!normalizedFile) {
@@ -1044,7 +1122,9 @@ export const ConfluenceV2Block: BlockConfig<ConfluenceResponse> = {
attachmentFileName: { type: 'string', description: 'Custom file name for attachment' },
attachmentComment: { type: 'string', description: 'Comment for the attachment' },
labelName: { type: 'string', description: 'Label name' },
labelId: { type: 'string', description: 'Label identifier' },
labelPrefix: { type: 'string', description: 'Label prefix (global, my, team, system)' },
propertyId: { type: 'string', description: 'Property identifier' },
blogPostStatus: { type: 'string', description: 'Blog post status (current or draft)' },
purge: { type: 'boolean', description: 'Permanently delete instead of moving to trash' },
bodyFormat: { type: 'string', description: 'Body format for comments' },
@@ -1080,6 +1160,7 @@ export const ConfluenceV2Block: BlockConfig<ConfluenceResponse> = {
// Label Results
labels: { type: 'array', description: 'List of labels' },
labelName: { type: 'string', description: 'Label name' },
labelId: { type: 'string', description: 'Label identifier' },
// Space Results
spaces: { type: 'array', description: 'List of spaces' },
spaceId: { type: 'string', description: 'Space identifier' },

View File

@@ -0,0 +1,201 @@
import { GoogleBooksIcon } from '@/components/icons'
import type { BlockConfig } from '@/blocks/types'
import { AuthMode } from '@/blocks/types'
export const GoogleBooksBlock: BlockConfig = {
type: 'google_books',
name: 'Google Books',
description: 'Search and retrieve book information',
authMode: AuthMode.ApiKey,
longDescription:
'Search for books using the Google Books API. Find volumes by title, author, ISBN, or keywords, and retrieve detailed information about specific books including descriptions, ratings, and publication details.',
docsLink: 'https://docs.sim.ai/tools/google_books',
category: 'tools',
bgColor: '#E0E0E0',
icon: GoogleBooksIcon,
subBlocks: [
{
id: 'operation',
title: 'Operation',
type: 'dropdown',
options: [
{ label: 'Search Volumes', id: 'volume_search' },
{ label: 'Get Volume Details', id: 'volume_details' },
],
value: () => 'volume_search',
},
{
id: 'apiKey',
title: 'API Key',
type: 'short-input',
password: true,
placeholder: 'Enter your Google Books API key',
required: true,
},
{
id: 'query',
title: 'Search Query',
type: 'short-input',
placeholder: 'e.g., intitle:harry potter inauthor:rowling',
condition: { field: 'operation', value: 'volume_search' },
required: { field: 'operation', value: 'volume_search' },
},
{
id: 'filter',
title: 'Filter',
type: 'dropdown',
options: [
{ label: 'None', id: '' },
{ label: 'Partial Preview', id: 'partial' },
{ label: 'Full Preview', id: 'full' },
{ label: 'Free eBooks', id: 'free-ebooks' },
{ label: 'Paid eBooks', id: 'paid-ebooks' },
{ label: 'All eBooks', id: 'ebooks' },
],
condition: { field: 'operation', value: 'volume_search' },
mode: 'advanced',
},
{
id: 'printType',
title: 'Print Type',
type: 'dropdown',
options: [
{ label: 'All', id: 'all' },
{ label: 'Books', id: 'books' },
{ label: 'Magazines', id: 'magazines' },
],
value: () => 'all',
condition: { field: 'operation', value: 'volume_search' },
mode: 'advanced',
},
{
id: 'orderBy',
title: 'Order By',
type: 'dropdown',
options: [
{ label: 'Relevance', id: 'relevance' },
{ label: 'Newest', id: 'newest' },
],
value: () => 'relevance',
condition: { field: 'operation', value: 'volume_search' },
mode: 'advanced',
},
{
id: 'maxResults',
title: 'Max Results',
type: 'short-input',
placeholder: 'Number of results (1-40)',
condition: { field: 'operation', value: 'volume_search' },
mode: 'advanced',
},
{
id: 'startIndex',
title: 'Start Index',
type: 'short-input',
placeholder: 'Starting index for pagination',
condition: { field: 'operation', value: 'volume_search' },
mode: 'advanced',
},
{
id: 'langRestrict',
title: 'Language',
type: 'short-input',
placeholder: 'ISO 639-1 code (e.g., en, es, fr)',
condition: { field: 'operation', value: 'volume_search' },
mode: 'advanced',
},
{
id: 'volumeId',
title: 'Volume ID',
type: 'short-input',
placeholder: 'Google Books volume ID',
condition: { field: 'operation', value: 'volume_details' },
required: { field: 'operation', value: 'volume_details' },
},
{
id: 'projection',
title: 'Projection',
type: 'dropdown',
options: [
{ label: 'Full', id: 'full' },
{ label: 'Lite', id: 'lite' },
],
value: () => 'full',
condition: { field: 'operation', value: 'volume_details' },
mode: 'advanced',
},
],
tools: {
access: ['google_books_volume_search', 'google_books_volume_details'],
config: {
tool: (params) => `google_books_${params.operation}`,
params: (params) => {
const { operation, ...rest } = params
let maxResults: number | undefined
if (params.maxResults) {
maxResults = Number.parseInt(params.maxResults, 10)
if (Number.isNaN(maxResults)) {
maxResults = undefined
}
}
let startIndex: number | undefined
if (params.startIndex) {
startIndex = Number.parseInt(params.startIndex, 10)
if (Number.isNaN(startIndex)) {
startIndex = undefined
}
}
return {
...rest,
maxResults,
startIndex,
filter: params.filter || undefined,
printType: params.printType || undefined,
orderBy: params.orderBy || undefined,
projection: params.projection || undefined,
}
},
},
},
inputs: {
operation: { type: 'string', description: 'Operation to perform' },
apiKey: { type: 'string', description: 'Google Books API key' },
query: { type: 'string', description: 'Search query' },
filter: { type: 'string', description: 'Filter by availability' },
printType: { type: 'string', description: 'Print type filter' },
orderBy: { type: 'string', description: 'Sort order' },
maxResults: { type: 'string', description: 'Maximum number of results' },
startIndex: { type: 'string', description: 'Starting index for pagination' },
langRestrict: { type: 'string', description: 'Language restriction' },
volumeId: { type: 'string', description: 'Volume ID for details' },
projection: { type: 'string', description: 'Projection level' },
},
outputs: {
totalItems: { type: 'number', description: 'Total number of matching results' },
volumes: { type: 'json', description: 'List of matching volumes' },
id: { type: 'string', description: 'Volume ID' },
title: { type: 'string', description: 'Book title' },
subtitle: { type: 'string', description: 'Book subtitle' },
authors: { type: 'json', description: 'List of authors' },
publisher: { type: 'string', description: 'Publisher name' },
publishedDate: { type: 'string', description: 'Publication date' },
description: { type: 'string', description: 'Book description' },
pageCount: { type: 'number', description: 'Number of pages' },
categories: { type: 'json', description: 'Book categories' },
averageRating: { type: 'number', description: 'Average rating (1-5)' },
ratingsCount: { type: 'number', description: 'Number of ratings' },
language: { type: 'string', description: 'Language code' },
previewLink: { type: 'string', description: 'Link to preview on Google Books' },
infoLink: { type: 'string', description: 'Link to info page' },
thumbnailUrl: { type: 'string', description: 'Book cover thumbnail URL' },
isbn10: { type: 'string', description: 'ISBN-10 identifier' },
isbn13: { type: 'string', description: 'ISBN-13 identifier' },
},
}
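A hedged walkthrough of the params transform above with illustrative values: string inputs from the subblocks are parsed into numbers where applicable, and empty dropdown values are dropped.

// Illustrative block params as entered in the UI (all strings):
const searchParams = {
  operation: 'volume_search',
  apiKey: 'AIza-example',
  query: 'intitle:dune inauthor:herbert',
  maxResults: '5',
  filter: '',
}
// The transform strips `operation`, parses maxResults to 5, leaves startIndex undefined,
// and converts the empty filter/printType/orderBy/projection values to undefined.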

View File

@@ -58,6 +58,16 @@ export const S3Block: BlockConfig<S3Response> = {
},
required: true,
},
{
id: 'getObjectRegion',
title: 'AWS Region',
type: 'short-input',
placeholder: 'Used when S3 URL does not include region',
condition: {
field: 'operation',
value: ['get_object'],
},
},
{
id: 'bucketName',
title: 'Bucket Name',
@@ -291,34 +301,11 @@ export const S3Block: BlockConfig<S3Response> = {
if (!params.s3Uri) {
throw new Error('S3 Object URL is required')
}
// Parse S3 URI for get_object
try {
const url = new URL(params.s3Uri)
const hostname = url.hostname
const bucketName = hostname.split('.')[0]
const regionMatch = hostname.match(/s3[.-]([^.]+)\.amazonaws\.com/)
const region = regionMatch ? regionMatch[1] : params.region
const objectKey = url.pathname.startsWith('/')
? url.pathname.substring(1)
: url.pathname
if (!bucketName || !objectKey) {
throw new Error('Could not parse S3 URL')
}
return {
accessKeyId: params.accessKeyId,
secretAccessKey: params.secretAccessKey,
region,
bucketName,
objectKey,
s3Uri: params.s3Uri,
}
} catch (_error) {
throw new Error(
'Invalid S3 Object URL format. Expected: https://bucket-name.s3.region.amazonaws.com/path/to/file'
)
return {
accessKeyId: params.accessKeyId,
secretAccessKey: params.secretAccessKey,
region: params.getObjectRegion || params.region,
s3Uri: params.s3Uri,
}
}
@@ -401,6 +388,7 @@ export const S3Block: BlockConfig<S3Response> = {
acl: { type: 'string', description: 'Access control list' },
// Download inputs
s3Uri: { type: 'string', description: 'S3 object URL' },
getObjectRegion: { type: 'string', description: 'Optional AWS region override for downloads' },
// List inputs
prefix: { type: 'string', description: 'Prefix filter' },
maxKeys: { type: 'number', description: 'Maximum results' },
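As a hedged illustration of when the new override matters: the param builder above no longer derives bucket and region from the URL, so a URL without a region segment can specify the region explicitly (bucket and key names are made up):

// Region embedded in the URL: no override needed.
//   s3Uri: 'https://my-bucket.s3.us-east-1.amazonaws.com/reports/q1.pdf'
// Region-less (legacy global endpoint) URL: supply getObjectRegion so the request targets the right region.
//   s3Uri: 'https://my-bucket.s3.amazonaws.com/reports/q1.pdf'
//   getObjectRegion: 'us-east-1'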

View File

@@ -39,6 +39,7 @@ import { GitHubBlock, GitHubV2Block } from '@/blocks/blocks/github'
import { GitLabBlock } from '@/blocks/blocks/gitlab'
import { GmailBlock, GmailV2Block } from '@/blocks/blocks/gmail'
import { GoogleSearchBlock } from '@/blocks/blocks/google'
import { GoogleBooksBlock } from '@/blocks/blocks/google_books'
import { GoogleCalendarBlock, GoogleCalendarV2Block } from '@/blocks/blocks/google_calendar'
import { GoogleDocsBlock } from '@/blocks/blocks/google_docs'
import { GoogleDriveBlock } from '@/blocks/blocks/google_drive'
@@ -214,6 +215,7 @@ export const registry: Record<string, BlockConfig> = {
gmail_v2: GmailV2Block,
google_calendar: GoogleCalendarBlock,
google_calendar_v2: GoogleCalendarV2Block,
google_books: GoogleBooksBlock,
google_docs: GoogleDocsBlock,
google_drive: GoogleDriveBlock,
google_forms: GoogleFormsBlock,

View File

@@ -196,6 +196,8 @@ export interface SubBlockConfig {
type: SubBlockType
mode?: 'basic' | 'advanced' | 'both' | 'trigger' // Default is 'both' if not specified. 'trigger' means only shown in trigger mode
canonicalParamId?: string
/** Controls parameter visibility in agent/tool-input context */
paramVisibility?: 'user-or-llm' | 'user-only' | 'llm-only' | 'hidden'
required?:
| boolean
| {
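A brief hedged example of the new field on a subblock config entry (the field shown and its surrounding config are illustrative, not from the registry):

// Hypothetical subblock entry: the user must provide this value in tool-input,
// and it is never left for the LLM to fill at runtime.
{
  id: 'apiKey',
  title: 'API Key',
  type: 'short-input',
  password: true,
  paramVisibility: 'user-only',
}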

View File

@@ -1157,6 +1157,21 @@ export function AirweaveIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function GoogleBooksIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 478.633 540.068'>
<path
fill='#1C51A4'
d='M449.059,218.231L245.519,99.538l-0.061,193.23c0.031,1.504-0.368,2.977-1.166,4.204c-0.798,1.258-1.565,1.995-2.915,2.547c-1.35,0.552-2.792,0.706-4.204,0.399c-1.412-0.307-2.7-1.043-3.713-2.117l-69.166-70.609l-69.381,70.179c-1.013,0.982-2.301,1.657-3.652,1.903c-1.381,0.246-2.792,0.092-4.081-0.491c-1.289-0.583-1.626-0.522-2.394-1.749c-0.767-1.197-1.197-2.608-1.197-4.081L85.031,6.007l-2.915-1.289C43.973-11.638,0,16.409,0,59.891v420.306c0,46.029,49.312,74.782,88.775,51.767l360.285-210.138C488.491,298.782,488.491,241.246,449.059,218.231z'
/>
<path
fill='#80D7FB'
d='M88.805,8.124c-2.179-1.289-4.419-2.363-6.659-3.345l0.123,288.663c0,1.442,0.43,2.854,1.197,4.081c0.767,1.197,1.872,2.148,3.161,2.731c1.289,0.583,2.7,0.736,4.081,0.491c1.381-0.246,2.639-0.921,3.652-1.903l69.749-69.688l69.811,69.749c1.013,1.074,2.301,1.81,3.713,2.117c1.412,0.307,2.884,0.153,4.204-0.399c1.319-0.552,2.455-1.565,3.253-2.792c0.798-1.258,1.197-2.731,1.166-4.204V99.998L88.805,8.124z'
/>
</svg>
)
}
export function GoogleDocsIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg

View File

@@ -1,90 +0,0 @@
---
slug: workflow-bench
title: 'Introducing Workflow Bench - Benchmarking Natural Language Workflow Building'
description: 'How we built a benchmark to measure how well AI models translate natural language instructions into executable workflows, and what we learned along the way'
date: 2026-02-11
updated: 2026-02-11
authors:
- sid
readingTime: 10
tags: [Benchmark, Evaluation, Workflows, Natural Language]
ogImage: /studio/workflow-bench/cover.png
ogAlt: 'Workflow Bench benchmark overview'
about: ['Benchmarking', 'Workflow Building', 'Natural Language']
timeRequired: PT10M
canonical: https://sim.ai/studio/workflow-bench
featured: false
draft: true
---
Building workflows from natural language sounds straightforward until you try to measure it. When a user says "send me a Slack message every morning with a summary of my unread emails," how do you evaluate whether the resulting workflow is correct? Is partial credit fair? What about workflows that are functionally equivalent but structurally different?
We built Workflow Bench to answer these questions. This post covers why we needed a dedicated benchmark, how we designed it, and what the results tell us about the current state of natural language workflow building.
## Why a Workflow Benchmark?
<!-- TODO: Motivation for building Workflow Bench -->
<!-- - Gap in existing benchmarks (code gen benchmarks don't capture workflow semantics) -->
<!-- - Need to track progress as we iterate on the copilot / natural language builder -->
<!-- - Workflows are structured artifacts, not just code — they have topology, block types, connections, configs -->
## What We're Measuring
<!-- TODO: Define the core evaluation dimensions -->
<!-- - Structural correctness (right blocks, right connections) -->
<!-- - Configuration accuracy (correct params, API mappings) -->
<!-- - Functional equivalence (does it do the same thing even if shaped differently?) -->
<!-- - Edge cases: loops, conditionals, parallel branches, error handling -->
## Benchmark Design
<!-- TODO: How the benchmark dataset is constructed -->
<!-- - Task categories and complexity tiers -->
<!-- - How ground truth workflows are defined -->
<!-- - Natural language prompt variations (terse vs. detailed, ambiguous vs. precise) -->
### Task Categories
<!-- TODO: Break down the types of workflows in the benchmark -->
<!-- - Simple linear (A → B → C) -->
<!-- - Branching / conditional -->
<!-- - Looping / iterative -->
<!-- - Parallel fan-out / fan-in -->
<!-- - Multi-trigger -->
### Scoring
<!-- TODO: Explain the scoring methodology -->
<!-- - How partial credit works -->
<!-- - Structural similarity metrics -->
<!-- - Config-level accuracy -->
<!-- - Overall composite score -->
## Evaluation Pipeline
<!-- TODO: How we run the benchmark end to end -->
<!-- - Prompt → model → workflow JSON → evaluator → score -->
<!-- - Automation and reproducibility -->
<!-- - How we handle non-determinism across runs -->
## Results
<!-- TODO: Present the benchmark results -->
<!-- - Model comparisons -->
<!-- - Performance by task category -->
<!-- - Where models struggle most -->
<!-- - Trends over time as we iterate -->
## What We Learned
<!-- TODO: Key takeaways from running the benchmark -->
<!-- - Surprising strengths and weaknesses -->
<!-- - How benchmark results influenced product decisions -->
<!-- - Common failure modes -->
## What's Next
<!-- TODO: Future directions -->
<!-- - Expanding the benchmark (more tasks, more complexity) -->
<!-- - Community contributions / open-sourcing -->
<!-- - Using the benchmark to guide copilot improvements -->

View File

@@ -1,3 +1,4 @@
import { setupGlobalFetchMock } from '@sim/testing'
import { afterEach, beforeEach, describe, expect, it, type Mock, vi } from 'vitest'
import { getAllBlocks } from '@/blocks'
import { BlockType, isMcpTool } from '@/executor/constants'
@@ -61,6 +62,30 @@ vi.mock('@/providers', () => ({
}),
}))
vi.mock('@/executor/utils/http', () => ({
buildAuthHeaders: vi.fn().mockResolvedValue({ 'Content-Type': 'application/json' }),
buildAPIUrl: vi.fn((path: string, params?: Record<string, string>) => {
const url = new URL(path, 'http://localhost:3000')
if (params) {
for (const [key, value] of Object.entries(params)) {
if (value !== undefined && value !== null) {
url.searchParams.set(key, value)
}
}
}
return url
}),
extractAPIErrorMessage: vi.fn(async (response: Response) => {
const defaultMessage = `API request failed with status ${response.status}`
try {
const errorData = await response.json()
return errorData.error || defaultMessage
} catch {
return defaultMessage
}
}),
}))
vi.mock('@sim/db', () => ({
db: {
select: vi.fn().mockReturnValue({
@@ -84,7 +109,7 @@ vi.mock('@sim/db/schema', () => ({
},
}))
global.fetch = Object.assign(vi.fn(), { preconnect: vi.fn() }) as typeof fetch
setupGlobalFetchMock()
const mockGetAllBlocks = getAllBlocks as Mock
const mockExecuteTool = executeTool as Mock
@@ -1901,5 +1926,301 @@ describe('AgentBlockHandler', () => {
expect(discoveryCalls[0].url).toContain('serverId=mcp-legacy-server')
})
describe('customToolId resolution - DB as source of truth', () => {
const staleInlineSchema = {
function: {
name: 'formatReport',
description: 'Formats a report',
parameters: {
type: 'object',
properties: {
title: { type: 'string', description: 'Report title' },
content: { type: 'string', description: 'Report content' },
},
required: ['title', 'content'],
},
},
}
const dbSchema = {
function: {
name: 'formatReport',
description: 'Formats a report',
parameters: {
type: 'object',
properties: {
title: { type: 'string', description: 'Report title' },
content: { type: 'string', description: 'Report content' },
format: { type: 'string', description: 'Output format' },
},
required: ['title', 'content', 'format'],
},
},
}
const staleInlineCode = 'return { title, content };'
const dbCode = 'return { title, content, format };'
function mockFetchForCustomTool(toolId: string) {
mockFetch.mockImplementation((url: string) => {
if (typeof url === 'string' && url.includes('/api/tools/custom')) {
return Promise.resolve({
ok: true,
headers: { get: () => null },
json: () =>
Promise.resolve({
data: [
{
id: toolId,
title: 'formatReport',
schema: dbSchema,
code: dbCode,
},
],
}),
})
}
return Promise.resolve({
ok: true,
headers: { get: () => null },
json: () => Promise.resolve({}),
})
})
}
function mockFetchFailure() {
mockFetch.mockImplementation((url: string) => {
if (typeof url === 'string' && url.includes('/api/tools/custom')) {
return Promise.resolve({
ok: false,
status: 500,
headers: { get: () => null },
json: () => Promise.resolve({}),
})
}
return Promise.resolve({
ok: true,
headers: { get: () => null },
json: () => Promise.resolve({}),
})
})
}
beforeEach(() => {
Object.defineProperty(global, 'window', {
value: undefined,
writable: true,
configurable: true,
})
})
it('should always fetch latest schema from DB when customToolId is present', async () => {
const toolId = 'custom-tool-123'
mockFetchForCustomTool(toolId)
const inputs = {
model: 'gpt-4o',
userPrompt: 'Format a report',
apiKey: 'test-api-key',
tools: [
{
type: 'custom-tool',
customToolId: toolId,
title: 'formatReport',
schema: staleInlineSchema,
code: staleInlineCode,
usageControl: 'auto' as const,
},
],
}
mockGetProviderFromModel.mockReturnValue('openai')
await handler.execute(mockContext, mockBlock, inputs)
expect(mockExecuteProviderRequest).toHaveBeenCalled()
const providerCall = mockExecuteProviderRequest.mock.calls[0]
const tools = providerCall[1].tools
expect(tools.length).toBe(1)
// DB schema wins over stale inline — includes format param
expect(tools[0].parameters.required).toContain('format')
expect(tools[0].parameters.properties).toHaveProperty('format')
})
it('should fetch from DB when customToolId has no inline schema', async () => {
const toolId = 'custom-tool-123'
mockFetchForCustomTool(toolId)
const inputs = {
model: 'gpt-4o',
userPrompt: 'Format a report',
apiKey: 'test-api-key',
tools: [
{
type: 'custom-tool',
customToolId: toolId,
usageControl: 'auto' as const,
},
],
}
mockGetProviderFromModel.mockReturnValue('openai')
await handler.execute(mockContext, mockBlock, inputs)
expect(mockExecuteProviderRequest).toHaveBeenCalled()
const providerCall = mockExecuteProviderRequest.mock.calls[0]
const tools = providerCall[1].tools
expect(tools.length).toBe(1)
expect(tools[0].name).toBe('formatReport')
expect(tools[0].parameters.required).toContain('format')
})
it('should fall back to inline schema when DB fetch fails and inline exists', async () => {
mockFetchFailure()
const inputs = {
model: 'gpt-4o',
userPrompt: 'Format a report',
apiKey: 'test-api-key',
tools: [
{
type: 'custom-tool',
customToolId: 'custom-tool-123',
title: 'formatReport',
schema: staleInlineSchema,
code: staleInlineCode,
usageControl: 'auto' as const,
},
],
}
mockGetProviderFromModel.mockReturnValue('openai')
await handler.execute(mockContext, mockBlock, inputs)
expect(mockExecuteProviderRequest).toHaveBeenCalled()
const providerCall = mockExecuteProviderRequest.mock.calls[0]
const tools = providerCall[1].tools
expect(tools.length).toBe(1)
expect(tools[0].name).toBe('formatReport')
expect(tools[0].parameters.required).not.toContain('format')
})
it('should return null when DB fetch fails and no inline schema exists', async () => {
mockFetchFailure()
const inputs = {
model: 'gpt-4o',
userPrompt: 'Format a report',
apiKey: 'test-api-key',
tools: [
{
type: 'custom-tool',
customToolId: 'custom-tool-123',
usageControl: 'auto' as const,
},
],
}
mockGetProviderFromModel.mockReturnValue('openai')
await handler.execute(mockContext, mockBlock, inputs)
expect(mockExecuteProviderRequest).toHaveBeenCalled()
const providerCall = mockExecuteProviderRequest.mock.calls[0]
const tools = providerCall[1].tools
expect(tools.length).toBe(0)
})
it('should use DB code for executeFunction when customToolId resolves', async () => {
const toolId = 'custom-tool-123'
mockFetchForCustomTool(toolId)
let capturedTools: any[] = []
Promise.all = vi.fn().mockImplementation((promises: Promise<any>[]) => {
const result = originalPromiseAll.call(Promise, promises)
result.then((tools: any[]) => {
if (tools?.length) {
capturedTools = tools.filter((t) => t !== null)
}
})
return result
})
const inputs = {
model: 'gpt-4o',
userPrompt: 'Format a report',
apiKey: 'test-api-key',
tools: [
{
type: 'custom-tool',
customToolId: toolId,
title: 'formatReport',
schema: staleInlineSchema,
code: staleInlineCode,
usageControl: 'auto' as const,
},
],
}
mockGetProviderFromModel.mockReturnValue('openai')
await handler.execute(mockContext, mockBlock, inputs)
expect(capturedTools.length).toBe(1)
expect(typeof capturedTools[0].executeFunction).toBe('function')
await capturedTools[0].executeFunction({ title: 'Q1', format: 'pdf' })
expect(mockExecuteTool).toHaveBeenCalledWith(
'function_execute',
expect.objectContaining({
code: dbCode,
}),
false,
expect.any(Object)
)
})
it('should not fetch from DB when no customToolId is present', async () => {
const inputs = {
model: 'gpt-4o',
userPrompt: 'Use the tool',
apiKey: 'test-api-key',
tools: [
{
type: 'custom-tool',
title: 'formatReport',
schema: staleInlineSchema,
code: staleInlineCode,
usageControl: 'auto' as const,
},
],
}
mockGetProviderFromModel.mockReturnValue('openai')
await handler.execute(mockContext, mockBlock, inputs)
const customToolFetches = mockFetch.mock.calls.filter(
(call: any[]) => typeof call[0] === 'string' && call[0].includes('/api/tools/custom')
)
expect(customToolFetches.length).toBe(0)
expect(mockExecuteProviderRequest).toHaveBeenCalled()
const providerCall = mockExecuteProviderRequest.mock.calls[0]
const tools = providerCall[1].tools
expect(tools.length).toBe(1)
expect(tools[0].name).toBe('formatReport')
expect(tools[0].parameters.required).not.toContain('format')
})
})
})
})

View File

@@ -62,9 +62,12 @@ export class AgentBlockHandler implements BlockHandler {
await validateModelProvider(ctx.userId, model, ctx)
const providerId = getProviderFromModel(model)
const formattedTools = await this.formatTools(ctx, filteredInputs.tools || [])
const formattedTools = await this.formatTools(
ctx,
filteredInputs.tools || [],
block.canonicalModes
)
// Resolve skill metadata for progressive disclosure
const skillInputs = filteredInputs.skills ?? []
let skillMetadata: Array<{ name: string; description: string }> = []
if (skillInputs.length > 0 && ctx.workspaceId) {
@@ -221,7 +224,11 @@ export class AgentBlockHandler implements BlockHandler {
})
}
private async formatTools(ctx: ExecutionContext, inputTools: ToolInput[]): Promise<any[]> {
private async formatTools(
ctx: ExecutionContext,
inputTools: ToolInput[],
canonicalModes?: Record<string, 'basic' | 'advanced'>
): Promise<any[]> {
if (!Array.isArray(inputTools)) return []
const filtered = inputTools.filter((tool) => {
@@ -249,7 +256,7 @@ export class AgentBlockHandler implements BlockHandler {
if (tool.type === 'custom-tool' && (tool.schema || tool.customToolId)) {
return await this.createCustomTool(ctx, tool)
}
return this.transformBlockTool(ctx, tool)
return this.transformBlockTool(ctx, tool, canonicalModes)
} catch (error) {
logger.error(`[AgentHandler] Error creating tool:`, { tool, error })
return null
@@ -272,15 +279,16 @@ export class AgentBlockHandler implements BlockHandler {
let code = tool.code
let title = tool.title
if (tool.customToolId && !schema) {
if (tool.customToolId) {
const resolved = await this.fetchCustomToolById(ctx, tool.customToolId)
if (!resolved) {
if (resolved) {
schema = resolved.schema
code = resolved.code
title = resolved.title
} else if (!schema) {
logger.error(`Custom tool not found: ${tool.customToolId}`)
return null
}
schema = resolved.schema
code = resolved.code
title = resolved.title
}
if (!schema?.function) {
@@ -719,12 +727,17 @@ export class AgentBlockHandler implements BlockHandler {
}
}
private async transformBlockTool(ctx: ExecutionContext, tool: ToolInput) {
private async transformBlockTool(
ctx: ExecutionContext,
tool: ToolInput,
canonicalModes?: Record<string, 'basic' | 'advanced'>
) {
const transformedTool = await transformBlockTool(tool, {
selectedOperation: tool.operation,
getAllBlocks,
getToolAsync: (toolId: string) => getToolAsync(toolId, ctx.workflowId),
getTool,
canonicalModes,
})
if (transformedTool) {

View File
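With this change the stored custom-tool record becomes the source of truth whenever customToolId is set; the inline schema only matters when the DB lookup fails. A minimal sketch of that precedence, with the helper signature illustrative rather than the real createCustomTool:

```ts
// Sketch of the precedence the handler and its tests encode.
interface CustomToolDefinition {
  schema?: { function?: unknown }
  code?: string
  title?: string
}

async function resolveCustomTool(
  tool: { customToolId?: string } & CustomToolDefinition,
  fetchById: (id: string) => Promise<CustomToolDefinition | null>
): Promise<CustomToolDefinition | null> {
  let { schema, code, title } = tool
  if (tool.customToolId) {
    const fromDb = await fetchById(tool.customToolId)
    if (fromDb) {
      // Latest DB schema/code/title win over whatever was stored inline.
      schema = fromDb.schema
      code = fromDb.code
      title = fromDb.title
    } else if (!schema) {
      // Lookup failed and there is no inline fallback: drop the tool.
      return null
    }
  }
  if (!schema?.function) return null
  return { schema, code, title }
}
```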

@@ -2,7 +2,7 @@ import { db } from '@sim/db'
import { account } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { getInternalApiBaseUrl } from '@/lib/core/utils/urls'
import { refreshTokenIfNeeded } from '@/app/api/auth/oauth/utils'
import { generateRouterPrompt, generateRouterV2Prompt } from '@/blocks/blocks/router'
import type { BlockOutput } from '@/blocks/types'
@@ -79,7 +79,7 @@ export class RouterBlockHandler implements BlockHandler {
const providerId = getProviderFromModel(routerConfig.model)
try {
const url = new URL('/api/providers', getBaseUrl())
const url = new URL('/api/providers', getInternalApiBaseUrl())
if (ctx.userId) url.searchParams.set('userId', ctx.userId)
const messages = [{ role: 'user', content: routerConfig.prompt }]
@@ -209,7 +209,7 @@ export class RouterBlockHandler implements BlockHandler {
const providerId = getProviderFromModel(routerConfig.model)
try {
const url = new URL('/api/providers', getBaseUrl())
const url = new URL('/api/providers', getInternalApiBaseUrl())
if (ctx.userId) url.searchParams.set('userId', ctx.userId)
const messages = [{ role: 'user', content: routerConfig.context }]

View File

@@ -1,3 +1,4 @@
import { setupGlobalFetchMock } from '@sim/testing'
import { beforeEach, describe, expect, it, type Mock, vi } from 'vitest'
import { BlockType } from '@/executor/constants'
import { WorkflowBlockHandler } from '@/executor/handlers/workflow/workflow-handler'
@@ -9,7 +10,7 @@ vi.mock('@/lib/auth/internal', () => ({
}))
// Mock fetch globally
global.fetch = vi.fn()
setupGlobalFetchMock()
describe('WorkflowBlockHandler', () => {
let handler: WorkflowBlockHandler

View File

@@ -1,5 +1,5 @@
import { generateInternalToken } from '@/lib/auth/internal'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { getBaseUrl, getInternalApiBaseUrl } from '@/lib/core/utils/urls'
import { HTTP } from '@/executor/constants'
export async function buildAuthHeaders(): Promise<Record<string, string>> {
@@ -16,7 +16,8 @@ export async function buildAuthHeaders(): Promise<Record<string, string>> {
}
export function buildAPIUrl(path: string, params?: Record<string, string>): URL {
const url = new URL(path, getBaseUrl())
const baseUrl = path.startsWith('/api/') ? getInternalApiBaseUrl() : getBaseUrl()
const url = new URL(path, baseUrl)
if (params) {
for (const [key, value] of Object.entries(params)) {

View File
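buildAPIUrl now swaps in the internal base URL only for app-internal /api/ paths; everything else keeps the public base. A rough sketch of the resulting URLs, assuming NEXT_PUBLIC_APP_URL=https://sim.ai and the cluster-local INTERNAL_API_BASE_URL from the env example:

```ts
import { buildAPIUrl } from '@/executor/utils/http'

// Assuming NEXT_PUBLIC_APP_URL=https://sim.ai and
// INTERNAL_API_BASE_URL=http://sim-app.default.svc.cluster.local:3000
buildAPIUrl('/api/providers', { userId: 'user_123' }).toString()
// -> 'http://sim-app.default.svc.cluster.local:3000/api/providers?userId=user_123'

buildAPIUrl('/some/page').toString() // path illustrative
// -> 'https://sim.ai/some/page'  (non-/api/ paths keep the public base URL)
```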

@@ -423,7 +423,7 @@ interface GenerateVersionDescriptionVariables {
const VERSION_DESCRIPTION_SYSTEM_PROMPT = `You are writing deployment version descriptions for a workflow automation platform.
Write a brief, factual description (1-3 sentences, under 400 characters) that states what changed between versions.
Write a brief, factual description (1-3 sentences, under 2000 characters) that states what changed between versions.
Guidelines:
- Use the specific values provided (credential names, channel names, model names)

View File

@@ -642,6 +642,10 @@ export function useDeployChildWorkflow() {
queryClient.invalidateQueries({
queryKey: workflowKeys.deploymentStatus(variables.workflowId),
})
// Invalidate workflow state so tool input mappings refresh
queryClient.invalidateQueries({
queryKey: workflowKeys.state(variables.workflowId),
})
// Also invalidate deployment queries
queryClient.invalidateQueries({
queryKey: deploymentKeys.info(variables.workflowId),

View File

@@ -1,4 +1,4 @@
import { useCallback, useRef } from 'react'
import { useCallback } from 'react'
import { createLogger } from '@sim/logger'
import type {
BlockCompletedData,
@@ -16,6 +16,18 @@ import type { SerializableExecutionState } from '@/executor/execution/types'
const logger = createLogger('useExecutionStream')
/**
* Detects errors caused by the browser killing a fetch (page refresh, navigation, tab close).
* These should be treated as clean disconnects, not execution errors.
*/
function isClientDisconnectError(error: any): boolean {
if (error.name === 'AbortError') return true
const msg = (error.message ?? '').toLowerCase()
return (
msg.includes('network error') || msg.includes('failed to fetch') || msg.includes('load failed')
)
}
/**
* Processes SSE events from a response body and invokes appropriate callbacks.
*/
@@ -121,6 +133,7 @@ export interface ExecuteStreamOptions {
parallels?: Record<string, any>
}
stopAfterBlockId?: string
onExecutionId?: (executionId: string) => void
callbacks?: ExecutionStreamCallbacks
}
@@ -129,30 +142,40 @@ export interface ExecuteFromBlockOptions {
startBlockId: string
sourceSnapshot: SerializableExecutionState
input?: any
onExecutionId?: (executionId: string) => void
callbacks?: ExecutionStreamCallbacks
}
export interface ReconnectStreamOptions {
workflowId: string
executionId: string
fromEventId?: number
callbacks?: ExecutionStreamCallbacks
}
/**
* Module-level map shared across all hook instances.
* Ensures ANY instance can cancel streams started by ANY other instance,
* which is critical for SPA navigation where the original hook instance unmounts
* but the SSE stream must be cancellable from the new instance.
*/
const sharedAbortControllers = new Map<string, AbortController>()
/**
* Hook for executing workflows via server-side SSE streaming.
* Supports concurrent executions via per-workflow AbortController maps.
*/
export function useExecutionStream() {
const abortControllersRef = useRef<Map<string, AbortController>>(new Map())
const currentExecutionsRef = useRef<Map<string, { workflowId: string; executionId: string }>>(
new Map()
)
const execute = useCallback(async (options: ExecuteStreamOptions) => {
const { workflowId, callbacks = {}, ...payload } = options
const { workflowId, callbacks = {}, onExecutionId, ...payload } = options
const existing = abortControllersRef.current.get(workflowId)
const existing = sharedAbortControllers.get(workflowId)
if (existing) {
existing.abort()
}
const abortController = new AbortController()
abortControllersRef.current.set(workflowId, abortController)
currentExecutionsRef.current.delete(workflowId)
sharedAbortControllers.set(workflowId, abortController)
try {
const response = await fetch(`/api/workflows/${workflowId}/execute`, {
@@ -177,42 +200,48 @@ export function useExecutionStream() {
throw new Error('No response body')
}
const executionId = response.headers.get('X-Execution-Id')
if (executionId) {
currentExecutionsRef.current.set(workflowId, { workflowId, executionId })
const serverExecutionId = response.headers.get('X-Execution-Id')
if (serverExecutionId) {
onExecutionId?.(serverExecutionId)
}
const reader = response.body.getReader()
await processSSEStream(reader, callbacks, 'Execution')
} catch (error: any) {
if (error.name === 'AbortError') {
logger.info('Execution stream cancelled')
callbacks.onExecutionCancelled?.({ duration: 0 })
} else {
logger.error('Execution stream error:', error)
callbacks.onExecutionError?.({
error: error.message || 'Unknown error',
duration: 0,
})
if (isClientDisconnectError(error)) {
logger.info('Execution stream disconnected (page unload or abort)')
return
}
logger.error('Execution stream error:', error)
callbacks.onExecutionError?.({
error: error.message || 'Unknown error',
duration: 0,
})
throw error
} finally {
abortControllersRef.current.delete(workflowId)
currentExecutionsRef.current.delete(workflowId)
if (sharedAbortControllers.get(workflowId) === abortController) {
sharedAbortControllers.delete(workflowId)
}
}
}, [])
const executeFromBlock = useCallback(async (options: ExecuteFromBlockOptions) => {
const { workflowId, startBlockId, sourceSnapshot, input, callbacks = {} } = options
const {
workflowId,
startBlockId,
sourceSnapshot,
input,
onExecutionId,
callbacks = {},
} = options
const existing = abortControllersRef.current.get(workflowId)
const existing = sharedAbortControllers.get(workflowId)
if (existing) {
existing.abort()
}
const abortController = new AbortController()
abortControllersRef.current.set(workflowId, abortController)
currentExecutionsRef.current.delete(workflowId)
sharedAbortControllers.set(workflowId, abortController)
try {
const response = await fetch(`/api/workflows/${workflowId}/execute`, {
@@ -246,64 +275,80 @@ export function useExecutionStream() {
throw new Error('No response body')
}
const executionId = response.headers.get('X-Execution-Id')
if (executionId) {
currentExecutionsRef.current.set(workflowId, { workflowId, executionId })
const serverExecutionId = response.headers.get('X-Execution-Id')
if (serverExecutionId) {
onExecutionId?.(serverExecutionId)
}
const reader = response.body.getReader()
await processSSEStream(reader, callbacks, 'Run-from-block')
} catch (error: any) {
if (error.name === 'AbortError') {
logger.info('Run-from-block execution cancelled')
callbacks.onExecutionCancelled?.({ duration: 0 })
} else {
logger.error('Run-from-block execution error:', error)
callbacks.onExecutionError?.({
error: error.message || 'Unknown error',
duration: 0,
})
if (isClientDisconnectError(error)) {
logger.info('Run-from-block stream disconnected (page unload or abort)')
return
}
logger.error('Run-from-block execution error:', error)
callbacks.onExecutionError?.({
error: error.message || 'Unknown error',
duration: 0,
})
throw error
} finally {
abortControllersRef.current.delete(workflowId)
currentExecutionsRef.current.delete(workflowId)
if (sharedAbortControllers.get(workflowId) === abortController) {
sharedAbortControllers.delete(workflowId)
}
}
}, [])
const reconnect = useCallback(async (options: ReconnectStreamOptions) => {
const { workflowId, executionId, fromEventId = 0, callbacks = {} } = options
const existing = sharedAbortControllers.get(workflowId)
if (existing) {
existing.abort()
}
const abortController = new AbortController()
sharedAbortControllers.set(workflowId, abortController)
try {
const response = await fetch(
`/api/workflows/${workflowId}/executions/${executionId}/stream?from=${fromEventId}`,
{ signal: abortController.signal }
)
if (!response.ok) throw new Error(`Reconnect failed (${response.status})`)
if (!response.body) throw new Error('No response body')
await processSSEStream(response.body.getReader(), callbacks, 'Reconnect')
} catch (error: any) {
if (isClientDisconnectError(error)) return
logger.error('Reconnection stream error:', error)
throw error
} finally {
if (sharedAbortControllers.get(workflowId) === abortController) {
sharedAbortControllers.delete(workflowId)
}
}
}, [])
const cancel = useCallback((workflowId?: string) => {
if (workflowId) {
const execution = currentExecutionsRef.current.get(workflowId)
if (execution) {
fetch(`/api/workflows/${execution.workflowId}/executions/${execution.executionId}/cancel`, {
method: 'POST',
}).catch(() => {})
}
const controller = abortControllersRef.current.get(workflowId)
const controller = sharedAbortControllers.get(workflowId)
if (controller) {
controller.abort()
abortControllersRef.current.delete(workflowId)
sharedAbortControllers.delete(workflowId)
}
currentExecutionsRef.current.delete(workflowId)
} else {
for (const [, execution] of currentExecutionsRef.current) {
fetch(`/api/workflows/${execution.workflowId}/executions/${execution.executionId}/cancel`, {
method: 'POST',
}).catch(() => {})
}
for (const [, controller] of abortControllersRef.current) {
for (const [, controller] of sharedAbortControllers) {
controller.abort()
}
abortControllersRef.current.clear()
currentExecutionsRef.current.clear()
sharedAbortControllers.clear()
}
}, [])
return {
execute,
executeFromBlock,
reconnect,
cancel,
}
}

View File
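Together, onExecutionId and the new reconnect method are what let a run survive a page refresh: the caller persists the server-issued execution id, then replays buffered events after remount. A sketch of that wiring; the storage key and surrounding component are assumptions:

```ts
// Inside a client component; `workflowId` and `callbacks` are assumed to be in scope.
const { execute, reconnect } = useExecutionStream()

const runWorkflow = () =>
  execute({
    workflowId,
    onExecutionId: (id) => sessionStorage.setItem(`sim:exec:${workflowId}`, id),
    callbacks,
  })

// After a refresh, resume streaming buffered events instead of dropping the run:
const resumeIfActive = () => {
  const executionId = sessionStorage.getItem(`sim:exec:${workflowId}`)
  if (executionId) {
    void reconnect({ workflowId, executionId, fromEventId: 0, callbacks })
  }
}
```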

@@ -0,0 +1,46 @@
'use client'
import { useEffect, useRef } from 'react'
import { createLogger } from '@sim/logger'
const logger = createLogger('ReferralAttribution')
const COOKIE_NAME = 'sim_utm'
const TERMINAL_REASONS = new Set([
'invalid_cookie',
'no_utm_cookie',
'no_matching_campaign',
'already_attributed',
])
/**
* Fires a one-shot `POST /api/attribution` when a `sim_utm` cookie is present.
* Retries on transient failures; stops on terminal outcomes.
*/
export function useReferralAttribution() {
const calledRef = useRef(false)
useEffect(() => {
if (calledRef.current) return
if (!document.cookie.includes(COOKIE_NAME)) return
calledRef.current = true
fetch('/api/attribution', { method: 'POST' })
.then((res) => res.json())
.then((data) => {
if (data.attributed) {
logger.info('Referral attribution successful', { bonusAmount: data.bonusAmount })
} else if (data.error || TERMINAL_REASONS.has(data.reason)) {
logger.info('Referral attribution skipped', { reason: data.reason || data.error })
} else {
calledRef.current = false
}
})
.catch((err) => {
logger.warn('Referral attribution failed, will retry', { error: err })
calledRef.current = false
})
}, [])
}

View File
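The hook is meant to be mounted once in a client component so the attribution call fires on first load after signup; a sketch, with the import path and component name assumed:

```tsx
'use client'
import type { ReactNode } from 'react'
// Import path and component name are assumptions.
import { useReferralAttribution } from '@/hooks/use-referral-attribution'

export function ReferralAttributionGate({ children }: { children: ReactNode }) {
  useReferralAttribution() // one-shot POST /api/attribution when the sim_utm cookie is set
  return <>{children}</>
}
```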

@@ -0,0 +1,64 @@
import { db } from '@sim/db'
import { organization, userStats } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq, sql } from 'drizzle-orm'
import { getHighestPrioritySubscription } from '@/lib/billing/core/subscription'
import type { DbOrTx } from '@/lib/db/types'
const logger = createLogger('BonusCredits')
/**
* Apply bonus credits to a user (e.g. referral bonuses, promotional codes).
*
* Detects the user's current plan and routes credits accordingly:
* - Free/Pro: adds to `userStats.creditBalance` and increments `currentUsageLimit`
* - Team/Enterprise: adds to `organization.creditBalance` and increments `orgUsageLimit`
*
* Uses direct increment (not recalculation) so it works correctly for free-tier
* users where `setUsageLimitForCredits` would compute planBase=0 and skip the update.
*
* @param tx - Optional Drizzle transaction context. When provided, all DB writes
* participate in the caller's transaction for atomicity.
*/
export async function applyBonusCredits(
userId: string,
amount: number,
tx?: DbOrTx
): Promise<void> {
const dbCtx = tx ?? db
const subscription = await getHighestPrioritySubscription(userId)
const isTeamOrEnterprise = subscription?.plan === 'team' || subscription?.plan === 'enterprise'
if (isTeamOrEnterprise && subscription?.referenceId) {
const orgId = subscription.referenceId
await dbCtx
.update(organization)
.set({
creditBalance: sql`${organization.creditBalance} + ${amount}`,
orgUsageLimit: sql`COALESCE(${organization.orgUsageLimit}, '0')::decimal + ${amount}`,
})
.where(eq(organization.id, orgId))
logger.info('Applied bonus credits to organization', {
userId,
organizationId: orgId,
plan: subscription.plan,
amount,
})
} else {
await dbCtx
.update(userStats)
.set({
creditBalance: sql`${userStats.creditBalance} + ${amount}`,
currentUsageLimit: sql`COALESCE(${userStats.currentUsageLimit}, '0')::decimal + ${amount}`,
})
.where(eq(userStats.userId, userId))
logger.info('Applied bonus credits to user', {
userId,
plan: subscription?.plan || 'free',
amount,
})
}
}

View File
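Because the helper accepts an optional transaction context, a caller that also records the redemption can keep both writes atomic. A sketch, with the import path and bonus amount assumed:

```ts
import { db } from '@sim/db'
// Import path assumed; the helper itself is the applyBonusCredits shown above.
import { applyBonusCredits } from '@/lib/billing/credits/bonus'

async function redeemReferralBonus(userId: string) {
  await db.transaction(async (tx) => {
    // ...insert the redemption/attribution record with `tx` here...
    await applyBonusCredits(userId, 20, tx) // routed to userStats or organization by plan
  })
}
```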

@@ -20,6 +20,8 @@ export interface BuildPayloadParams {
fileAttachments?: Array<{ id: string; key: string; size: number; [key: string]: unknown }>
commands?: string[]
chatId?: string
conversationId?: string
prefetch?: boolean
implicitFeedback?: string
}
@@ -64,6 +66,10 @@ export async function buildCopilotRequestPayload(
fileAttachments,
commands,
chatId,
conversationId,
prefetch,
conversationHistory,
implicitFeedback,
} = params
const selectedModel = options.selectedModel
@@ -154,6 +160,12 @@ export async function buildCopilotRequestPayload(
version: SIM_AGENT_VERSION,
...(contexts && contexts.length > 0 ? { context: contexts } : {}),
...(chatId ? { chatId } : {}),
...(conversationId ? { conversationId } : {}),
...(Array.isArray(conversationHistory) && conversationHistory.length > 0
? { conversationHistory }
: {}),
...(typeof prefetch === 'boolean' ? { prefetch } : {}),
...(implicitFeedback ? { implicitFeedback } : {}),
...(processedFileContents.length > 0 ? { fileAttachments: processedFileContents } : {}),
...(integrationTools.length > 0 ? { integrationTools } : {}),
...(credentials ? { credentials } : {}),

View File

@@ -1,7 +1,7 @@
import { db } from '@sim/db'
import { workflow } from '@sim/db/schema'
import { customTools, workflow } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import { and, desc, eq, isNull, or } from 'drizzle-orm'
import { SIM_AGENT_API_URL } from '@/lib/copilot/constants'
import type {
ExecutionContext,
@@ -12,6 +12,7 @@ import { routeExecution } from '@/lib/copilot/tools/server/router'
import { env } from '@/lib/core/config/env'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { getEffectiveDecryptedEnv } from '@/lib/environment/utils'
import { upsertCustomTools } from '@/lib/workflows/custom-tools/operations'
import { getTool, resolveToolId } from '@/tools/utils'
import {
executeCheckDeploymentStatus,
@@ -76,6 +77,247 @@ import {
const logger = createLogger('CopilotToolExecutor')
type ManageCustomToolOperation = 'add' | 'edit' | 'delete' | 'list'
interface ManageCustomToolSchema {
type: 'function'
function: {
name: string
description?: string
parameters: Record<string, unknown>
}
}
interface ManageCustomToolParams {
operation?: string
toolId?: string
schema?: ManageCustomToolSchema
code?: string
title?: string
workspaceId?: string
}
async function executeManageCustomTool(
rawParams: Record<string, unknown>,
context: ExecutionContext
): Promise<ToolCallResult> {
const params = rawParams as ManageCustomToolParams
const operation = String(params.operation || '').toLowerCase() as ManageCustomToolOperation
const workspaceId = params.workspaceId || context.workspaceId
if (!operation) {
return { success: false, error: "Missing required 'operation' argument" }
}
try {
if (operation === 'list') {
const toolsForUser = workspaceId
? await db
.select()
.from(customTools)
.where(
or(
eq(customTools.workspaceId, workspaceId),
and(isNull(customTools.workspaceId), eq(customTools.userId, context.userId))
)
)
.orderBy(desc(customTools.createdAt))
: await db
.select()
.from(customTools)
.where(and(isNull(customTools.workspaceId), eq(customTools.userId, context.userId)))
.orderBy(desc(customTools.createdAt))
return {
success: true,
output: {
success: true,
operation,
tools: toolsForUser,
count: toolsForUser.length,
},
}
}
if (operation === 'add') {
if (!workspaceId) {
return {
success: false,
error: "workspaceId is required for operation 'add'",
}
}
if (!params.schema || !params.code) {
return {
success: false,
error: "Both 'schema' and 'code' are required for operation 'add'",
}
}
const title = params.title || params.schema.function?.name
if (!title) {
return { success: false, error: "Missing tool title or schema.function.name for 'add'" }
}
const resultTools = await upsertCustomTools({
tools: [
{
title,
schema: params.schema,
code: params.code,
},
],
workspaceId,
userId: context.userId,
})
const created = resultTools.find((tool) => tool.title === title)
return {
success: true,
output: {
success: true,
operation,
toolId: created?.id,
title,
message: `Created custom tool "${title}"`,
},
}
}
if (operation === 'edit') {
if (!workspaceId) {
return {
success: false,
error: "workspaceId is required for operation 'edit'",
}
}
if (!params.toolId) {
return { success: false, error: "'toolId' is required for operation 'edit'" }
}
if (!params.schema && !params.code) {
return {
success: false,
error: "At least one of 'schema' or 'code' is required for operation 'edit'",
}
}
const workspaceTool = await db
.select()
.from(customTools)
.where(and(eq(customTools.id, params.toolId), eq(customTools.workspaceId, workspaceId)))
.limit(1)
const legacyTool =
workspaceTool.length === 0
? await db
.select()
.from(customTools)
.where(
and(
eq(customTools.id, params.toolId),
isNull(customTools.workspaceId),
eq(customTools.userId, context.userId)
)
)
.limit(1)
: []
const existing = workspaceTool[0] || legacyTool[0]
if (!existing) {
return { success: false, error: `Custom tool not found: ${params.toolId}` }
}
const mergedSchema = params.schema || (existing.schema as ManageCustomToolSchema)
const mergedCode = params.code || existing.code
const title = params.title || mergedSchema.function?.name || existing.title
await upsertCustomTools({
tools: [
{
id: params.toolId,
title,
schema: mergedSchema,
code: mergedCode,
},
],
workspaceId,
userId: context.userId,
})
return {
success: true,
output: {
success: true,
operation,
toolId: params.toolId,
title,
message: `Updated custom tool "${title}"`,
},
}
}
if (operation === 'delete') {
if (!params.toolId) {
return { success: false, error: "'toolId' is required for operation 'delete'" }
}
const workspaceDelete =
workspaceId != null
? await db
.delete(customTools)
.where(
and(eq(customTools.id, params.toolId), eq(customTools.workspaceId, workspaceId))
)
.returning({ id: customTools.id })
: []
const legacyDelete =
workspaceDelete.length === 0
? await db
.delete(customTools)
.where(
and(
eq(customTools.id, params.toolId),
isNull(customTools.workspaceId),
eq(customTools.userId, context.userId)
)
)
.returning({ id: customTools.id })
: []
const deleted = workspaceDelete[0] || legacyDelete[0]
if (!deleted) {
return { success: false, error: `Custom tool not found: ${params.toolId}` }
}
return {
success: true,
output: {
success: true,
operation,
toolId: params.toolId,
message: 'Deleted custom tool',
},
}
}
return {
success: false,
error: `Unsupported operation for manage_custom_tool: ${operation}`,
}
} catch (error) {
logger.error('manage_custom_tool execution failed', {
operation,
workspaceId,
userId: context.userId,
error: error instanceof Error ? error.message : String(error),
})
return {
success: false,
error: error instanceof Error ? error.message : 'Failed to manage custom tool',
}
}
}
const SERVER_TOOLS = new Set<string>([
'get_blocks_and_tools',
'get_blocks_metadata',
@@ -161,6 +403,19 @@ const SIM_WORKFLOW_TOOL_HANDLERS: Record<
}
}
},
oauth_request_access: async (p, _c) => {
const providerName = (p.providerName || p.provider_name || 'the provider') as string
return {
success: true,
output: {
success: true,
status: 'requested',
providerName,
message: `Requested ${providerName} OAuth connection. The user should complete the OAuth modal in the UI, then retry credential-dependent actions.`,
},
}
},
manage_custom_tool: (p, c) => executeManageCustomTool(p, c),
}
/**

View File
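The handler dispatches on operation, so the copilot only needs to send the arguments that operation validates. Example argument shapes accepted by executeManageCustomTool (ids and code illustrative):

```ts
const listArgs = { operation: 'list', workspaceId: 'ws_123' }

const addArgs = {
  operation: 'add',
  workspaceId: 'ws_123',
  code: 'return { ok: true };',
  schema: {
    type: 'function' as const,
    function: { name: 'formatReport', parameters: { type: 'object', properties: {} } },
  },
}

const editArgs = { operation: 'edit', workspaceId: 'ws_123', toolId: 'ct_abc', code: 'return 1;' }
const deleteArgs = { operation: 'delete', toolId: 'ct_abc' }
```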

@@ -220,6 +220,7 @@ export const env = createEnv({
SOCKET_SERVER_URL: z.string().url().optional(), // WebSocket server URL for real-time features
SOCKET_PORT: z.number().optional(), // Port for WebSocket server
PORT: z.number().optional(), // Main application port
INTERNAL_API_BASE_URL: z.string().optional(), // Optional internal base URL for server-side self-calls; must include protocol if set (e.g., http://sim-app.namespace.svc.cluster.local:3000)
ALLOWED_ORIGINS: z.string().optional(), // CORS allowed origins
// OAuth Integration Credentials - All optional, enables third-party integrations

View File

@@ -1,6 +1,19 @@
import { getEnv } from '@/lib/core/config/env'
import { isProd } from '@/lib/core/config/feature-flags'
function hasHttpProtocol(url: string): boolean {
return /^https?:\/\//i.test(url)
}
function normalizeBaseUrl(url: string): string {
if (hasHttpProtocol(url)) {
return url
}
const protocol = isProd ? 'https://' : 'http://'
return `${protocol}${url}`
}
/**
* Returns the base URL of the application from NEXT_PUBLIC_APP_URL
* This ensures webhooks, callbacks, and other integrations always use the correct public URL
@@ -8,7 +21,7 @@ import { isProd } from '@/lib/core/config/feature-flags'
* @throws Error if NEXT_PUBLIC_APP_URL is not configured
*/
export function getBaseUrl(): string {
const baseUrl = getEnv('NEXT_PUBLIC_APP_URL')
const baseUrl = getEnv('NEXT_PUBLIC_APP_URL')?.trim()
if (!baseUrl) {
throw new Error(
@@ -16,12 +29,26 @@ export function getBaseUrl(): string {
)
}
if (baseUrl.startsWith('http://') || baseUrl.startsWith('https://')) {
return baseUrl
return normalizeBaseUrl(baseUrl)
}
/**
* Returns the base URL used by server-side internal API calls.
* Falls back to NEXT_PUBLIC_APP_URL when INTERNAL_API_BASE_URL is not set.
*/
export function getInternalApiBaseUrl(): string {
const internalBaseUrl = getEnv('INTERNAL_API_BASE_URL')?.trim()
if (!internalBaseUrl) {
return getBaseUrl()
}
const protocol = isProd ? 'https://' : 'http://'
return `${protocol}${baseUrl}`
if (!hasHttpProtocol(internalBaseUrl)) {
throw new Error(
'INTERNAL_API_BASE_URL must include protocol (http:// or https://), e.g. http://sim-app.default.svc.cluster.local:3000'
)
}
return internalBaseUrl
}
/**

View File
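The new helper changes nothing unless INTERNAL_API_BASE_URL is configured; when it is, it must carry an explicit protocol. A sketch of the three cases:

```ts
import { getInternalApiBaseUrl } from '@/lib/core/utils/urls'

// 1. Only NEXT_PUBLIC_APP_URL=https://sim.ai is set: falls back to the public base URL.
getInternalApiBaseUrl() // -> 'https://sim.ai'

// 2. INTERNAL_API_BASE_URL=http://sim-app.default.svc.cluster.local:3000 is set:
//    server-side self-calls skip the public ingress.
getInternalApiBaseUrl() // -> 'http://sim-app.default.svc.cluster.local:3000'

// 3. INTERNAL_API_BASE_URL=sim-app:3000 (no protocol): throws instead of building a bad URL.
```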

@@ -0,0 +1,246 @@
import { createLogger } from '@sim/logger'
import { getRedisClient } from '@/lib/core/config/redis'
import type { ExecutionEvent } from '@/lib/workflows/executor/execution-events'
const logger = createLogger('ExecutionEventBuffer')
const REDIS_PREFIX = 'execution:stream:'
const TTL_SECONDS = 60 * 60 // 1 hour
const EVENT_LIMIT = 1000
const RESERVE_BATCH = 100
const FLUSH_INTERVAL_MS = 15
const FLUSH_MAX_BATCH = 200
function getEventsKey(executionId: string) {
return `${REDIS_PREFIX}${executionId}:events`
}
function getSeqKey(executionId: string) {
return `${REDIS_PREFIX}${executionId}:seq`
}
function getMetaKey(executionId: string) {
return `${REDIS_PREFIX}${executionId}:meta`
}
export type ExecutionStreamStatus = 'active' | 'complete' | 'error' | 'cancelled'
export interface ExecutionStreamMeta {
status: ExecutionStreamStatus
userId?: string
workflowId?: string
updatedAt?: string
}
export interface ExecutionEventEntry {
eventId: number
executionId: string
event: ExecutionEvent
}
export interface ExecutionEventWriter {
write: (event: ExecutionEvent) => Promise<ExecutionEventEntry>
flush: () => Promise<void>
close: () => Promise<void>
}
export async function setExecutionMeta(
executionId: string,
meta: Partial<ExecutionStreamMeta>
): Promise<void> {
const redis = getRedisClient()
if (!redis) {
logger.warn('setExecutionMeta: Redis client unavailable', { executionId })
return
}
try {
const key = getMetaKey(executionId)
const payload: Record<string, string> = {
updatedAt: new Date().toISOString(),
}
if (meta.status) payload.status = meta.status
if (meta.userId) payload.userId = meta.userId
if (meta.workflowId) payload.workflowId = meta.workflowId
await redis.hset(key, payload)
await redis.expire(key, TTL_SECONDS)
} catch (error) {
logger.warn('Failed to update execution meta', {
executionId,
error: error instanceof Error ? error.message : String(error),
})
}
}
export async function getExecutionMeta(executionId: string): Promise<ExecutionStreamMeta | null> {
const redis = getRedisClient()
if (!redis) {
logger.warn('getExecutionMeta: Redis client unavailable', { executionId })
return null
}
try {
const key = getMetaKey(executionId)
const meta = await redis.hgetall(key)
if (!meta || Object.keys(meta).length === 0) return null
return meta as unknown as ExecutionStreamMeta
} catch (error) {
logger.warn('Failed to read execution meta', {
executionId,
error: error instanceof Error ? error.message : String(error),
})
return null
}
}
export async function readExecutionEvents(
executionId: string,
afterEventId: number
): Promise<ExecutionEventEntry[]> {
const redis = getRedisClient()
if (!redis) return []
try {
const raw = await redis.zrangebyscore(getEventsKey(executionId), afterEventId + 1, '+inf')
return raw
.map((entry) => {
try {
return JSON.parse(entry) as ExecutionEventEntry
} catch {
return null
}
})
.filter((entry): entry is ExecutionEventEntry => Boolean(entry))
} catch (error) {
logger.warn('Failed to read execution events', {
executionId,
error: error instanceof Error ? error.message : String(error),
})
return []
}
}
export function createExecutionEventWriter(executionId: string): ExecutionEventWriter {
const redis = getRedisClient()
if (!redis) {
logger.warn(
'createExecutionEventWriter: Redis client unavailable, events will not be buffered',
{
executionId,
}
)
return {
write: async (event) => ({ eventId: 0, executionId, event }),
flush: async () => {},
close: async () => {},
}
}
let pending: ExecutionEventEntry[] = []
let nextEventId = 0
let maxReservedId = 0
let flushTimer: ReturnType<typeof setTimeout> | null = null
const scheduleFlush = () => {
if (flushTimer) return
flushTimer = setTimeout(() => {
flushTimer = null
void flush()
}, FLUSH_INTERVAL_MS)
}
const reserveIds = async (minCount: number) => {
const reserveCount = Math.max(RESERVE_BATCH, minCount)
const newMax = await redis.incrby(getSeqKey(executionId), reserveCount)
const startId = newMax - reserveCount + 1
if (nextEventId === 0 || nextEventId > maxReservedId) {
nextEventId = startId
maxReservedId = newMax
}
}
let flushPromise: Promise<void> | null = null
let closed = false
const inflightWrites = new Set<Promise<ExecutionEventEntry>>()
const doFlush = async () => {
if (pending.length === 0) return
const batch = pending
pending = []
try {
const key = getEventsKey(executionId)
const zaddArgs: (string | number)[] = []
for (const entry of batch) {
zaddArgs.push(entry.eventId, JSON.stringify(entry))
}
const pipeline = redis.pipeline()
pipeline.zadd(key, ...zaddArgs)
pipeline.expire(key, TTL_SECONDS)
pipeline.expire(getSeqKey(executionId), TTL_SECONDS)
pipeline.zremrangebyrank(key, 0, -EVENT_LIMIT - 1)
await pipeline.exec()
} catch (error) {
logger.warn('Failed to flush execution events', {
executionId,
batchSize: batch.length,
error: error instanceof Error ? error.message : String(error),
stack: error instanceof Error ? error.stack : undefined,
})
pending = batch.concat(pending)
}
}
const flush = async () => {
if (flushPromise) {
await flushPromise
return
}
flushPromise = doFlush()
try {
await flushPromise
} finally {
flushPromise = null
if (pending.length > 0) scheduleFlush()
}
}
const writeCore = async (event: ExecutionEvent): Promise<ExecutionEventEntry> => {
if (closed) return { eventId: 0, executionId, event }
if (nextEventId === 0 || nextEventId > maxReservedId) {
await reserveIds(1)
}
const eventId = nextEventId++
const entry: ExecutionEventEntry = { eventId, executionId, event }
pending.push(entry)
if (pending.length >= FLUSH_MAX_BATCH) {
await flush()
} else {
scheduleFlush()
}
return entry
}
const write = (event: ExecutionEvent): Promise<ExecutionEventEntry> => {
const p = writeCore(event)
inflightWrites.add(p)
const remove = () => inflightWrites.delete(p)
p.then(remove, remove)
return p
}
const close = async () => {
closed = true
if (flushTimer) {
clearTimeout(flushTimer)
flushTimer = null
}
if (inflightWrites.size > 0) {
await Promise.allSettled(inflightWrites)
}
if (flushPromise) {
await flushPromise
}
if (pending.length > 0) {
await doFlush()
}
}
return { write, flush, close }
}

View File
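The buffer reserves monotonically increasing event ids in Redis and batches writes into a sorted set, so a reconnecting client can replay everything after the last id it saw. A rough producer/consumer sketch; event payloads and surrounding plumbing are assumed:

```ts
// Producer (the route executing the workflow); `event` is an ExecutionEvent from the executor.
const writer = createExecutionEventWriter(executionId)
await setExecutionMeta(executionId, { status: 'active', workflowId, userId })
await writer.write(event)
// ...more writes as blocks execute...
await writer.close() // flushes any pending batch
await setExecutionMeta(executionId, { status: 'complete' })

// Consumer (the reconnect stream route): replay everything after the client's last id.
const entries = await readExecutionEvents(executionId, lastSeenEventId)
for (const { eventId, event: buffered } of entries) {
  // send `buffered` to the client, which remembers `eventId` for its next reconnect
}
```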

@@ -2,7 +2,7 @@ import { db } from '@sim/db'
import { account } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { getInternalApiBaseUrl } from '@/lib/core/utils/urls'
import { refreshTokenIfNeeded } from '@/app/api/auth/oauth/utils'
import { executeProviderRequest } from '@/providers'
import { getProviderFromModel } from '@/providers/utils'
@@ -61,7 +61,7 @@ async function queryKnowledgeBase(
})
// Call the knowledge base search API directly
const searchUrl = `${getBaseUrl()}/api/knowledge/search`
const searchUrl = `${getInternalApiBaseUrl()}/api/knowledge/search`
const response = await fetch(searchUrl, {
method: 'POST',

View File

@@ -539,8 +539,8 @@ async function executeMistralOCRRequest(
const isInternalRoute = url.startsWith('/')
if (isInternalRoute) {
const { getBaseUrl } = await import('@/lib/core/utils/urls')
url = `${getBaseUrl()}${url}`
const { getInternalApiBaseUrl } = await import('@/lib/core/utils/urls')
url = `${getInternalApiBaseUrl()}${url}`
}
let headers =

View File

@@ -1,4 +1,4 @@
import { createEnvMock, createMockLogger } from '@sim/testing'
import { createEnvMock, loggerMock } from '@sim/testing'
import { beforeEach, describe, expect, it, type Mock, vi } from 'vitest'
/**
@@ -10,10 +10,6 @@ import { beforeEach, describe, expect, it, type Mock, vi } from 'vitest'
* mock functions can intercept.
*/
const loggerMock = vi.hoisted(() => ({
createLogger: () => createMockLogger(),
}))
const mockSend = vi.fn()
const mockBatchSend = vi.fn()
const mockAzureBeginSend = vi.fn()

View File

@@ -1,20 +1,8 @@
import { createEnvMock, createMockLogger } from '@sim/testing'
import { createEnvMock, databaseMock, loggerMock } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
import type { EmailType } from '@/lib/messaging/email/mailer'
const loggerMock = vi.hoisted(() => ({
createLogger: () => createMockLogger(),
}))
const mockDb = vi.hoisted(() => ({
select: vi.fn(),
insert: vi.fn(),
update: vi.fn(),
}))
vi.mock('@sim/db', () => ({
db: mockDb,
}))
vi.mock('@sim/db', () => databaseMock)
vi.mock('@sim/db/schema', () => ({
user: { id: 'id', email: 'email' },
@@ -30,6 +18,8 @@ vi.mock('drizzle-orm', () => ({
eq: vi.fn((a, b) => ({ type: 'eq', left: a, right: b })),
}))
const mockDb = databaseMock.db as Record<string, ReturnType<typeof vi.fn>>
vi.mock('@/lib/core/config/env', () => createEnvMock({ BETTER_AUTH_SECRET: 'test-secret-key' }))
vi.mock('@sim/logger', () => loggerMock)

View File

@@ -11,7 +11,7 @@ import { and, eq, isNull, or, sql } from 'drizzle-orm'
import { nanoid } from 'nanoid'
import { isOrganizationOnTeamOrEnterprisePlan } from '@/lib/billing'
import { pollingIdempotency } from '@/lib/core/idempotency/service'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { getInternalApiBaseUrl } from '@/lib/core/utils/urls'
import { getOAuthToken, refreshAccessTokenIfNeeded } from '@/app/api/auth/oauth/utils'
import type { GmailAttachment } from '@/tools/gmail/types'
import { downloadAttachments, extractAttachmentInfo } from '@/tools/gmail/utils'
@@ -691,7 +691,7 @@ async function processEmails(
`[${requestId}] Sending ${config.includeRawEmail ? 'simplified + raw' : 'simplified'} email payload for ${email.id}`
)
const webhookUrl = `${getBaseUrl()}/api/webhooks/trigger/${webhookData.path}`
const webhookUrl = `${getInternalApiBaseUrl()}/api/webhooks/trigger/${webhookData.path}`
const response = await fetch(webhookUrl, {
method: 'POST',

View File

@@ -7,7 +7,7 @@ import type { FetchMessageObject, MailboxLockObject } from 'imapflow'
import { ImapFlow } from 'imapflow'
import { nanoid } from 'nanoid'
import { pollingIdempotency } from '@/lib/core/idempotency/service'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { getInternalApiBaseUrl } from '@/lib/core/utils/urls'
import { MAX_CONSECUTIVE_FAILURES } from '@/triggers/constants'
const logger = createLogger('ImapPollingService')
@@ -639,7 +639,7 @@ async function processEmails(
timestamp: new Date().toISOString(),
}
const webhookUrl = `${getBaseUrl()}/api/webhooks/trigger/${webhookData.path}`
const webhookUrl = `${getInternalApiBaseUrl()}/api/webhooks/trigger/${webhookData.path}`
const response = await fetch(webhookUrl, {
method: 'POST',

View File

@@ -12,7 +12,7 @@ import { htmlToText } from 'html-to-text'
import { nanoid } from 'nanoid'
import { isOrganizationOnTeamOrEnterprisePlan } from '@/lib/billing'
import { pollingIdempotency } from '@/lib/core/idempotency'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { getInternalApiBaseUrl } from '@/lib/core/utils/urls'
import { getOAuthToken, refreshAccessTokenIfNeeded } from '@/app/api/auth/oauth/utils'
import { MAX_CONSECUTIVE_FAILURES } from '@/triggers/constants'
@@ -601,7 +601,7 @@ async function processOutlookEmails(
`[${requestId}] Processing email: ${email.subject} from ${email.from?.emailAddress?.address}`
)
const webhookUrl = `${getBaseUrl()}/api/webhooks/trigger/${webhookData.path}`
const webhookUrl = `${getInternalApiBaseUrl()}/api/webhooks/trigger/${webhookData.path}`
const response = await fetch(webhookUrl, {
method: 'POST',

View File

@@ -9,7 +9,7 @@ import {
secureFetchWithPinnedIP,
validateUrlWithDNS,
} from '@/lib/core/security/input-validation.server'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { getInternalApiBaseUrl } from '@/lib/core/utils/urls'
import { MAX_CONSECUTIVE_FAILURES } from '@/triggers/constants'
const logger = createLogger('RssPollingService')
@@ -376,7 +376,7 @@ async function processRssItems(
timestamp: new Date().toISOString(),
}
const webhookUrl = `${getBaseUrl()}/api/webhooks/trigger/${webhookData.path}`
const webhookUrl = `${getInternalApiBaseUrl()}/api/webhooks/trigger/${webhookData.path}`
const response = await fetch(webhookUrl, {
method: 'POST',

View File

@@ -2364,6 +2364,261 @@ describe('hasWorkflowChanged', () => {
})
})
describe('Trigger Config Normalization (False Positive Prevention)', () => {
it.concurrent(
'should not detect change when deployed has null fields but current has values from triggerConfig',
() => {
// Core scenario: deployed state has null individual fields, current state has
// values populated from triggerConfig at runtime by populateTriggerFieldsFromConfig
const deployedState = createWorkflowState({
blocks: {
block1: createBlock('block1', {
type: 'starter',
subBlocks: {
signingSecret: { id: 'signingSecret', type: 'short-input', value: null },
botToken: { id: 'botToken', type: 'short-input', value: null },
triggerConfig: {
id: 'triggerConfig',
type: 'short-input',
value: { signingSecret: 'secret123', botToken: 'token456' },
},
},
}),
},
})
const currentState = createWorkflowState({
blocks: {
block1: createBlock('block1', {
type: 'starter',
subBlocks: {
signingSecret: { id: 'signingSecret', type: 'short-input', value: 'secret123' },
botToken: { id: 'botToken', type: 'short-input', value: 'token456' },
triggerConfig: {
id: 'triggerConfig',
type: 'short-input',
value: { signingSecret: 'secret123', botToken: 'token456' },
},
},
}),
},
})
expect(hasWorkflowChanged(currentState, deployedState)).toBe(false)
}
)
it.concurrent(
'should detect change when user edits a trigger field to a different value',
() => {
const deployedState = createWorkflowState({
blocks: {
block1: createBlock('block1', {
type: 'starter',
subBlocks: {
signingSecret: { id: 'signingSecret', type: 'short-input', value: null },
triggerConfig: {
id: 'triggerConfig',
type: 'short-input',
value: { signingSecret: 'old-secret' },
},
},
}),
},
})
const currentState = createWorkflowState({
blocks: {
block1: createBlock('block1', {
type: 'starter',
subBlocks: {
signingSecret: { id: 'signingSecret', type: 'short-input', value: 'new-secret' },
triggerConfig: {
id: 'triggerConfig',
type: 'short-input',
value: { signingSecret: 'old-secret' },
},
},
}),
},
})
expect(hasWorkflowChanged(currentState, deployedState)).toBe(true)
}
)
it.concurrent('should not detect change when both sides have no triggerConfig', () => {
const deployedState = createWorkflowState({
blocks: {
block1: createBlock('block1', {
type: 'starter',
subBlocks: {
signingSecret: { id: 'signingSecret', type: 'short-input', value: null },
},
}),
},
})
const currentState = createWorkflowState({
blocks: {
block1: createBlock('block1', {
type: 'starter',
subBlocks: {
signingSecret: { id: 'signingSecret', type: 'short-input', value: null },
},
}),
},
})
expect(hasWorkflowChanged(currentState, deployedState)).toBe(false)
})
it.concurrent(
'should not detect change when deployed has empty fields and triggerConfig populates them',
() => {
// Empty string is also treated as "empty" by normalizeTriggerConfigValues
const deployedState = createWorkflowState({
blocks: {
block1: createBlock('block1', {
type: 'starter',
subBlocks: {
signingSecret: { id: 'signingSecret', type: 'short-input', value: '' },
triggerConfig: {
id: 'triggerConfig',
type: 'short-input',
value: { signingSecret: 'secret123' },
},
},
}),
},
})
const currentState = createWorkflowState({
blocks: {
block1: createBlock('block1', {
type: 'starter',
subBlocks: {
signingSecret: { id: 'signingSecret', type: 'short-input', value: 'secret123' },
triggerConfig: {
id: 'triggerConfig',
type: 'short-input',
value: { signingSecret: 'secret123' },
},
},
}),
},
})
expect(hasWorkflowChanged(currentState, deployedState)).toBe(false)
}
)
it.concurrent('should not detect change when triggerId differs', () => {
const deployedState = createWorkflowState({
blocks: {
block1: createBlock('block1', {
type: 'starter',
subBlocks: {
model: { value: 'gpt-4' },
triggerId: { value: null },
},
}),
},
})
const currentState = createWorkflowState({
blocks: {
block1: createBlock('block1', {
type: 'starter',
subBlocks: {
model: { value: 'gpt-4' },
triggerId: { value: 'slack_webhook' },
},
}),
},
})
expect(hasWorkflowChanged(currentState, deployedState)).toBe(false)
})
it.concurrent(
'should not detect change for namespaced system subBlock IDs like samplePayload_slack_webhook',
() => {
const deployedState = createWorkflowState({
blocks: {
block1: createBlock('block1', {
type: 'starter',
subBlocks: {
model: { value: 'gpt-4' },
samplePayload_slack_webhook: { value: 'old payload' },
triggerInstructions_slack_webhook: { value: 'old instructions' },
},
}),
},
})
const currentState = createWorkflowState({
blocks: {
block1: createBlock('block1', {
type: 'starter',
subBlocks: {
model: { value: 'gpt-4' },
samplePayload_slack_webhook: { value: 'new payload' },
triggerInstructions_slack_webhook: { value: 'new instructions' },
},
}),
},
})
expect(hasWorkflowChanged(currentState, deployedState)).toBe(false)
}
)
it.concurrent(
'should handle mixed scenario: some fields from triggerConfig, some user-edited',
() => {
const deployedState = createWorkflowState({
blocks: {
block1: createBlock('block1', {
type: 'starter',
subBlocks: {
signingSecret: { id: 'signingSecret', type: 'short-input', value: null },
botToken: { id: 'botToken', type: 'short-input', value: null },
includeFiles: { id: 'includeFiles', type: 'switch', value: false },
triggerConfig: {
id: 'triggerConfig',
type: 'short-input',
value: { signingSecret: 'secret123', botToken: 'token456' },
},
},
}),
},
})
const currentState = createWorkflowState({
blocks: {
block1: createBlock('block1', {
type: 'starter',
subBlocks: {
signingSecret: { id: 'signingSecret', type: 'short-input', value: 'secret123' },
botToken: { id: 'botToken', type: 'short-input', value: 'token456' },
includeFiles: { id: 'includeFiles', type: 'switch', value: true },
triggerConfig: {
id: 'triggerConfig',
type: 'short-input',
value: { signingSecret: 'secret123', botToken: 'token456' },
},
},
}),
},
})
// includeFiles changed from false to true — this IS a real change
expect(hasWorkflowChanged(currentState, deployedState)).toBe(true)
}
)
})
describe('Trigger Runtime Metadata (Should Not Trigger Change)', () => {
it.concurrent('should not detect change when webhookId differs', () => {
const deployedState = createWorkflowState({

View File

@@ -9,6 +9,7 @@ import {
normalizeLoop,
normalizeParallel,
normalizeSubBlockValue,
normalizeTriggerConfigValues,
normalizeValue,
normalizeVariables,
sanitizeVariable,
@@ -172,14 +173,18 @@ export function generateWorkflowDiffSummary(
}
}
// Normalize trigger config values for both states before comparison
const normalizedCurrentSubs = normalizeTriggerConfigValues(currentSubBlocks)
const normalizedPreviousSubs = normalizeTriggerConfigValues(previousSubBlocks)
// Compare subBlocks using shared helper for filtering (single source of truth)
const allSubBlockIds = filterSubBlockIds([
...new Set([...Object.keys(currentSubBlocks), ...Object.keys(previousSubBlocks)]),
...new Set([...Object.keys(normalizedCurrentSubs), ...Object.keys(normalizedPreviousSubs)]),
])
for (const subId of allSubBlockIds) {
const currentSub = currentSubBlocks[subId] as Record<string, unknown> | undefined
const previousSub = previousSubBlocks[subId] as Record<string, unknown> | undefined
const currentSub = normalizedCurrentSubs[subId] as Record<string, unknown> | undefined
const previousSub = normalizedPreviousSubs[subId] as Record<string, unknown> | undefined
if (!currentSub || !previousSub) {
changes.push({

View File
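normalizeTriggerConfigValues is applied to both snapshots before fields are compared, so trigger fields that are only populated from triggerConfig at runtime no longer read as edits. A sketch of the behavior the tests below pin down; the implementation itself is not part of this compare:

```ts
// Behavior implied by the tests: null/empty trigger fields are backfilled from triggerConfig.
const deployedSubs = {
  signingSecret: { id: 'signingSecret', type: 'short-input', value: null },
  triggerConfig: {
    id: 'triggerConfig',
    type: 'short-input',
    value: { signingSecret: 'secret123' },
  },
}

const normalized = normalizeTriggerConfigValues(deployedSubs)
// Expected: normalized.signingSecret.value === 'secret123', so the deployed snapshot
// compares equal to a current state that was populated at runtime. A field the user
// actually edited still differs after normalization and is reported as a real change.
```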

@@ -4,10 +4,12 @@
import { describe, expect, it } from 'vitest'
import type { Loop, Parallel } from '@/stores/workflows/workflow/types'
import {
filterSubBlockIds,
normalizedStringify,
normalizeEdge,
normalizeLoop,
normalizeParallel,
normalizeTriggerConfigValues,
normalizeValue,
sanitizeInputFormat,
sanitizeTools,
@@ -584,4 +586,226 @@ describe('Workflow Normalization Utilities', () => {
expect(result2).toBe(result3)
})
})
describe('filterSubBlockIds', () => {
it.concurrent('should exclude exact SYSTEM_SUBBLOCK_IDS', () => {
const ids = ['signingSecret', 'samplePayload', 'triggerInstructions', 'botToken']
const result = filterSubBlockIds(ids)
expect(result).toEqual(['botToken', 'signingSecret'])
})
it.concurrent('should exclude namespaced SYSTEM_SUBBLOCK_IDS (prefix matching)', () => {
const ids = [
'signingSecret',
'samplePayload_slack_webhook',
'triggerInstructions_slack_webhook',
'webhookUrlDisplay_slack_webhook',
'botToken',
]
const result = filterSubBlockIds(ids)
expect(result).toEqual(['botToken', 'signingSecret'])
})
it.concurrent('should exclude exact TRIGGER_RUNTIME_SUBBLOCK_IDS', () => {
const ids = ['webhookId', 'triggerPath', 'triggerConfig', 'triggerId', 'signingSecret']
const result = filterSubBlockIds(ids)
expect(result).toEqual(['signingSecret'])
})
it.concurrent('should not exclude IDs that merely contain a system ID substring', () => {
const ids = ['mySamplePayload', 'notSamplePayload']
const result = filterSubBlockIds(ids)
expect(result).toEqual(['mySamplePayload', 'notSamplePayload'])
})
it.concurrent('should return sorted results', () => {
const ids = ['zebra', 'alpha', 'middle']
const result = filterSubBlockIds(ids)
expect(result).toEqual(['alpha', 'middle', 'zebra'])
})
it.concurrent('should handle empty array', () => {
expect(filterSubBlockIds([])).toEqual([])
})
it.concurrent('should handle all IDs being excluded', () => {
const ids = ['webhookId', 'triggerPath', 'samplePayload', 'triggerConfig']
const result = filterSubBlockIds(ids)
expect(result).toEqual([])
})
it.concurrent('should exclude setupScript and scheduleInfo namespaced variants', () => {
const ids = ['setupScript_google_sheets_row', 'scheduleInfo_cron_trigger', 'realField']
const result = filterSubBlockIds(ids)
expect(result).toEqual(['realField'])
})
it.concurrent('should exclude triggerCredentials namespaced variants', () => {
const ids = ['triggerCredentials_slack_webhook', 'signingSecret']
const result = filterSubBlockIds(ids)
expect(result).toEqual(['signingSecret'])
})
it.concurrent('should exclude synthetic tool-input subBlock IDs', () => {
const ids = [
'toolConfig',
'toolConfig-tool-0-query',
'toolConfig-tool-0-url',
'toolConfig-tool-1-status',
'systemPrompt',
]
const result = filterSubBlockIds(ids)
expect(result).toEqual(['systemPrompt', 'toolConfig'])
})
})
describe('normalizeTriggerConfigValues', () => {
it.concurrent('should return subBlocks unchanged when no triggerConfig exists', () => {
const subBlocks = {
signingSecret: { id: 'signingSecret', type: 'short-input', value: 'secret123' },
botToken: { id: 'botToken', type: 'short-input', value: 'token456' },
}
const result = normalizeTriggerConfigValues(subBlocks)
expect(result).toEqual(subBlocks)
})
it.concurrent('should return subBlocks unchanged when triggerConfig value is null', () => {
const subBlocks = {
triggerConfig: { id: 'triggerConfig', type: 'short-input', value: null },
signingSecret: { id: 'signingSecret', type: 'short-input', value: null },
}
const result = normalizeTriggerConfigValues(subBlocks)
expect(result).toEqual(subBlocks)
})
it.concurrent(
'should return subBlocks unchanged when triggerConfig value is not an object',
() => {
const subBlocks = {
triggerConfig: { id: 'triggerConfig', type: 'short-input', value: 'string-value' },
signingSecret: { id: 'signingSecret', type: 'short-input', value: null },
}
const result = normalizeTriggerConfigValues(subBlocks)
expect(result).toEqual(subBlocks)
}
)
it.concurrent('should populate null individual fields from triggerConfig', () => {
const subBlocks = {
triggerConfig: {
id: 'triggerConfig',
type: 'short-input',
value: { signingSecret: 'secret123', botToken: 'token456' },
},
signingSecret: { id: 'signingSecret', type: 'short-input', value: null },
botToken: { id: 'botToken', type: 'short-input', value: null },
}
const result = normalizeTriggerConfigValues(subBlocks)
expect((result.signingSecret as Record<string, unknown>).value).toBe('secret123')
expect((result.botToken as Record<string, unknown>).value).toBe('token456')
})
it.concurrent('should populate undefined individual fields from triggerConfig', () => {
const subBlocks = {
triggerConfig: {
id: 'triggerConfig',
type: 'short-input',
value: { signingSecret: 'secret123' },
},
signingSecret: { id: 'signingSecret', type: 'short-input', value: undefined },
}
const result = normalizeTriggerConfigValues(subBlocks)
expect((result.signingSecret as Record<string, unknown>).value).toBe('secret123')
})
it.concurrent('should populate empty string individual fields from triggerConfig', () => {
const subBlocks = {
triggerConfig: {
id: 'triggerConfig',
type: 'short-input',
value: { signingSecret: 'secret123' },
},
signingSecret: { id: 'signingSecret', type: 'short-input', value: '' },
}
const result = normalizeTriggerConfigValues(subBlocks)
expect((result.signingSecret as Record<string, unknown>).value).toBe('secret123')
})
it.concurrent('should NOT overwrite existing non-empty individual field values', () => {
const subBlocks = {
triggerConfig: {
id: 'triggerConfig',
type: 'short-input',
value: { signingSecret: 'old-secret' },
},
signingSecret: { id: 'signingSecret', type: 'short-input', value: 'user-edited-secret' },
}
const result = normalizeTriggerConfigValues(subBlocks)
expect((result.signingSecret as Record<string, unknown>).value).toBe('user-edited-secret')
})
it.concurrent('should skip triggerConfig fields that are null/undefined', () => {
const subBlocks = {
triggerConfig: {
id: 'triggerConfig',
type: 'short-input',
value: { signingSecret: null, botToken: undefined },
},
signingSecret: { id: 'signingSecret', type: 'short-input', value: null },
botToken: { id: 'botToken', type: 'short-input', value: null },
}
const result = normalizeTriggerConfigValues(subBlocks)
expect((result.signingSecret as Record<string, unknown>).value).toBe(null)
expect((result.botToken as Record<string, unknown>).value).toBe(null)
})
it.concurrent('should skip fields from triggerConfig that have no matching subBlock', () => {
const subBlocks = {
triggerConfig: {
id: 'triggerConfig',
type: 'short-input',
value: { nonExistentField: 'value123' },
},
signingSecret: { id: 'signingSecret', type: 'short-input', value: null },
}
const result = normalizeTriggerConfigValues(subBlocks)
expect(result.nonExistentField).toBeUndefined()
expect((result.signingSecret as Record<string, unknown>).value).toBe(null)
})
it.concurrent('should not mutate the original subBlocks object', () => {
const original = {
triggerConfig: {
id: 'triggerConfig',
type: 'short-input',
value: { signingSecret: 'secret123' },
},
signingSecret: { id: 'signingSecret', type: 'short-input', value: null },
}
normalizeTriggerConfigValues(original)
expect((original.signingSecret as Record<string, unknown>).value).toBe(null)
})
it.concurrent('should preserve other subBlock properties when populating value', () => {
const subBlocks = {
triggerConfig: {
id: 'triggerConfig',
type: 'short-input',
value: { signingSecret: 'secret123' },
},
signingSecret: {
id: 'signingSecret',
type: 'short-input',
value: null,
placeholder: 'Enter signing secret',
},
}
const result = normalizeTriggerConfigValues(subBlocks)
const normalized = result.signingSecret as Record<string, unknown>
expect(normalized.value).toBe('secret123')
expect(normalized.id).toBe('signingSecret')
expect(normalized.type).toBe('short-input')
expect(normalized.placeholder).toBe('Enter signing secret')
})
})
})

View File

@@ -411,17 +411,63 @@ export function extractBlockFieldsForComparison(block: BlockState): ExtractedBlo
}
/**
* Filters subBlock IDs to exclude system and trigger runtime subBlocks.
* Pattern that matches the synthetic subBlock IDs created by ToolSubBlockRenderer.
* These IDs follow the format `{subBlockId}-tool-{index}-{paramId}` and are
* mirrors of values already stored in toolConfig.value.tools[N].params.
*/
const SYNTHETIC_TOOL_SUBBLOCK_RE = /-tool-\d+-/
/**
* Filters subBlock IDs to exclude system, trigger runtime, and synthetic tool subBlocks.
*
* @param subBlockIds - Array of subBlock IDs to filter
* @returns Filtered and sorted array of subBlock IDs
*/
export function filterSubBlockIds(subBlockIds: string[]): string[] {
return subBlockIds
.filter((id) => !SYSTEM_SUBBLOCK_IDS.includes(id) && !TRIGGER_RUNTIME_SUBBLOCK_IDS.includes(id))
.filter((id) => {
if (TRIGGER_RUNTIME_SUBBLOCK_IDS.includes(id)) return false
if (SYSTEM_SUBBLOCK_IDS.some((sysId) => id === sysId || id.startsWith(`${sysId}_`)))
return false
if (SYNTHETIC_TOOL_SUBBLOCK_RE.test(id)) return false
return true
})
.sort()
}
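For reference, the three exclusion classes the rewritten filter applies can be reproduced in a small self-contained sketch. The ID lists here are abbreviated stand-ins for the real SYSTEM_SUBBLOCK_IDS and TRIGGER_RUNTIME_SUBBLOCK_IDS constants, which are not shown in this diff.
// Abbreviated stand-ins; the real constants live alongside filterSubBlockIds.
const SYSTEM_IDS = ['samplePayload', 'triggerInstructions', 'webhookUrlDisplay', 'setupScript', 'scheduleInfo', 'triggerCredentials']
const RUNTIME_IDS = ['webhookId', 'triggerPath', 'triggerConfig', 'triggerId']
const SYNTHETIC_RE = /-tool-\d+-/

function filterIdsSketch(ids: string[]): string[] {
  return ids
    .filter((id) => {
      if (RUNTIME_IDS.includes(id)) return false
      // Exact match or namespaced variant such as samplePayload_slack_webhook.
      if (SYSTEM_IDS.some((sys) => id === sys || id.startsWith(`${sys}_`))) return false
      // Synthetic tool-input mirrors such as toolConfig-tool-0-query.
      if (SYNTHETIC_RE.test(id)) return false
      return true
    })
    .sort()
}

// filterIdsSketch(['samplePayload_slack_webhook', 'toolConfig-tool-0-query', 'webhookId', 'botToken'])
// => ['botToken']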
/**
* Normalizes trigger block subBlocks by populating null/empty individual fields
* from the triggerConfig aggregate subBlock. This compensates for the runtime
* population done by populateTriggerFieldsFromConfig, ensuring consistent
* comparison between client state (with populated values) and deployed state
* (with null values from DB).
*/
export function normalizeTriggerConfigValues(
subBlocks: Record<string, unknown>
): Record<string, unknown> {
const triggerConfigSub = subBlocks.triggerConfig as Record<string, unknown> | undefined
const triggerConfigValue = triggerConfigSub?.value
if (!triggerConfigValue || typeof triggerConfigValue !== 'object') {
return subBlocks
}
const result = { ...subBlocks }
for (const [fieldId, configValue] of Object.entries(
triggerConfigValue as Record<string, unknown>
)) {
if (configValue === null || configValue === undefined) continue
const existingSub = result[fieldId] as Record<string, unknown> | undefined
if (
existingSub &&
(existingSub.value === null || existingSub.value === undefined || existingSub.value === '')
) {
result[fieldId] = { ...existingSub, value: configValue }
}
}
return result
}
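A short usage example of the helper above (assuming it is in scope; values are placeholders) shows the overwrite rule: only null, undefined, or empty-string fields are filled, and user-edited values win.
const subBlocksExample = {
  triggerConfig: {
    id: 'triggerConfig',
    type: 'short-input',
    value: { signingSecret: 'secret123', botToken: 'token456' },
  },
  signingSecret: { id: 'signingSecret', type: 'short-input', value: null },
  botToken: { id: 'botToken', type: 'short-input', value: 'user-edited-token' },
}
const normalized = normalizeTriggerConfigValues(subBlocksExample)
// (normalized.signingSecret as any).value === 'secret123'      -> filled from the aggregate
// (normalized.botToken as any).value === 'user-edited-token'   -> non-empty value preserved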
/**
* Normalizes a subBlock value with sanitization for specific subBlock types.
* Sanitizes: tools (removes isExpanded), inputFormat (removes collapsed)

View File

@@ -1,18 +1,11 @@
/**
* @vitest-environment node
*/
import { loggerMock } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
import type { BlockState, WorkflowState } from '@/stores/workflows/workflow/types'
// Mock all external dependencies before imports
vi.mock('@sim/logger', () => ({
createLogger: () => ({
info: vi.fn(),
warn: vi.fn(),
error: vi.fn(),
debug: vi.fn(),
}),
}))
vi.mock('@sim/logger', () => loggerMock)
vi.mock('@/stores/workflows/workflow/store', () => ({
useWorkflowStore: {

View File

@@ -14,22 +14,15 @@ import {
databaseMock,
expectWorkflowAccessDenied,
expectWorkflowAccessGranted,
mockAuth,
} from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
vi.mock('@sim/db', () => databaseMock)
// Mock the auth module
vi.mock('@/lib/auth', () => ({
getSession: vi.fn(),
}))
import { db } from '@sim/db'
import { getSession } from '@/lib/auth'
// Import after mocks are set up
import { validateWorkflowPermissions } from '@/lib/workflows/utils'
const mockDb = databaseMock.db
describe('validateWorkflowPermissions', () => {
const auth = mockAuth()
const mockSession = createSession({ userId: 'user-1', email: 'user1@test.com' })
const mockWorkflow = createWorkflowRecord({
id: 'wf-1',
@@ -42,13 +35,17 @@ describe('validateWorkflowPermissions', () => {
})
beforeEach(() => {
vi.resetModules()
vi.clearAllMocks()
vi.doMock('@sim/db', () => databaseMock)
})
describe('authentication', () => {
it('should return 401 when no session exists', async () => {
vi.mocked(getSession).mockResolvedValue(null)
auth.setUnauthenticated()
const { validateWorkflowPermissions } = await import('@/lib/workflows/utils')
const result = await validateWorkflowPermissions('wf-1', 'req-1', 'read')
expectWorkflowAccessDenied(result, 401)
@@ -56,8 +53,9 @@ describe('validateWorkflowPermissions', () => {
})
it('should return 401 when session has no user id', async () => {
vi.mocked(getSession).mockResolvedValue({ user: {} } as any)
auth.mockGetSession.mockResolvedValue({ user: {} } as any)
const { validateWorkflowPermissions } = await import('@/lib/workflows/utils')
const result = await validateWorkflowPermissions('wf-1', 'req-1', 'read')
expectWorkflowAccessDenied(result, 401)
@@ -66,14 +64,14 @@ describe('validateWorkflowPermissions', () => {
describe('workflow not found', () => {
it('should return 404 when workflow does not exist', async () => {
vi.mocked(getSession).mockResolvedValue(mockSession as any)
auth.mockGetSession.mockResolvedValue(mockSession as any)
// Mock workflow query to return empty
const mockLimit = vi.fn().mockResolvedValue([])
const mockWhere = vi.fn(() => ({ limit: mockLimit }))
const mockFrom = vi.fn(() => ({ where: mockWhere }))
vi.mocked(db.select).mockReturnValue({ from: mockFrom } as any)
vi.mocked(mockDb.select).mockReturnValue({ from: mockFrom } as any)
const { validateWorkflowPermissions } = await import('@/lib/workflows/utils')
const result = await validateWorkflowPermissions('non-existent', 'req-1', 'read')
expectWorkflowAccessDenied(result, 404)
@@ -83,43 +81,42 @@ describe('validateWorkflowPermissions', () => {
describe('owner access', () => {
it('should deny access to workflow owner without workspace permissions for read action', async () => {
const ownerSession = createSession({ userId: 'owner-1' })
vi.mocked(getSession).mockResolvedValue(ownerSession as any)
auth.setAuthenticated({ id: 'owner-1', email: 'owner-1@test.com' })
// Mock workflow query
const mockLimit = vi.fn().mockResolvedValue([mockWorkflow])
const mockWhere = vi.fn(() => ({ limit: mockLimit }))
const mockFrom = vi.fn(() => ({ where: mockWhere }))
vi.mocked(db.select).mockReturnValue({ from: mockFrom } as any)
vi.mocked(mockDb.select).mockReturnValue({ from: mockFrom } as any)
const { validateWorkflowPermissions } = await import('@/lib/workflows/utils')
const result = await validateWorkflowPermissions('wf-1', 'req-1', 'read')
expectWorkflowAccessDenied(result, 403)
})
it('should deny access to workflow owner without workspace permissions for write action', async () => {
const ownerSession = createSession({ userId: 'owner-1' })
vi.mocked(getSession).mockResolvedValue(ownerSession as any)
auth.setAuthenticated({ id: 'owner-1', email: 'owner-1@test.com' })
const mockLimit = vi.fn().mockResolvedValue([mockWorkflow])
const mockWhere = vi.fn(() => ({ limit: mockLimit }))
const mockFrom = vi.fn(() => ({ where: mockWhere }))
vi.mocked(db.select).mockReturnValue({ from: mockFrom } as any)
vi.mocked(mockDb.select).mockReturnValue({ from: mockFrom } as any)
const { validateWorkflowPermissions } = await import('@/lib/workflows/utils')
const result = await validateWorkflowPermissions('wf-1', 'req-1', 'write')
expectWorkflowAccessDenied(result, 403)
})
it('should deny access to workflow owner without workspace permissions for admin action', async () => {
const ownerSession = createSession({ userId: 'owner-1' })
vi.mocked(getSession).mockResolvedValue(ownerSession as any)
auth.setAuthenticated({ id: 'owner-1', email: 'owner-1@test.com' })
const mockLimit = vi.fn().mockResolvedValue([mockWorkflow])
const mockWhere = vi.fn(() => ({ limit: mockLimit }))
const mockFrom = vi.fn(() => ({ where: mockWhere }))
vi.mocked(db.select).mockReturnValue({ from: mockFrom } as any)
vi.mocked(mockDb.select).mockReturnValue({ from: mockFrom } as any)
const { validateWorkflowPermissions } = await import('@/lib/workflows/utils')
const result = await validateWorkflowPermissions('wf-1', 'req-1', 'admin')
expectWorkflowAccessDenied(result, 403)
@@ -128,11 +125,10 @@ describe('validateWorkflowPermissions', () => {
describe('workspace member access with permissions', () => {
beforeEach(() => {
vi.mocked(getSession).mockResolvedValue(mockSession as any)
auth.mockGetSession.mockResolvedValue(mockSession as any)
})
it('should grant read access to user with read permission', async () => {
// First call: workflow query, second call: workspace owner, third call: permission
let callCount = 0
const mockLimit = vi.fn().mockImplementation(() => {
callCount++
@@ -141,8 +137,9 @@ describe('validateWorkflowPermissions', () => {
})
const mockWhere = vi.fn(() => ({ limit: mockLimit }))
const mockFrom = vi.fn(() => ({ where: mockWhere }))
vi.mocked(db.select).mockReturnValue({ from: mockFrom } as any)
vi.mocked(mockDb.select).mockReturnValue({ from: mockFrom } as any)
const { validateWorkflowPermissions } = await import('@/lib/workflows/utils')
const result = await validateWorkflowPermissions('wf-1', 'req-1', 'read')
expectWorkflowAccessGranted(result)
@@ -157,8 +154,9 @@ describe('validateWorkflowPermissions', () => {
})
const mockWhere = vi.fn(() => ({ limit: mockLimit }))
const mockFrom = vi.fn(() => ({ where: mockWhere }))
vi.mocked(db.select).mockReturnValue({ from: mockFrom } as any)
vi.mocked(mockDb.select).mockReturnValue({ from: mockFrom } as any)
const { validateWorkflowPermissions } = await import('@/lib/workflows/utils')
const result = await validateWorkflowPermissions('wf-1', 'req-1', 'write')
expectWorkflowAccessDenied(result, 403)
@@ -174,8 +172,9 @@ describe('validateWorkflowPermissions', () => {
})
const mockWhere = vi.fn(() => ({ limit: mockLimit }))
const mockFrom = vi.fn(() => ({ where: mockWhere }))
vi.mocked(db.select).mockReturnValue({ from: mockFrom } as any)
vi.mocked(mockDb.select).mockReturnValue({ from: mockFrom } as any)
const { validateWorkflowPermissions } = await import('@/lib/workflows/utils')
const result = await validateWorkflowPermissions('wf-1', 'req-1', 'write')
expectWorkflowAccessGranted(result)
@@ -190,8 +189,9 @@ describe('validateWorkflowPermissions', () => {
})
const mockWhere = vi.fn(() => ({ limit: mockLimit }))
const mockFrom = vi.fn(() => ({ where: mockWhere }))
vi.mocked(db.select).mockReturnValue({ from: mockFrom } as any)
vi.mocked(mockDb.select).mockReturnValue({ from: mockFrom } as any)
const { validateWorkflowPermissions } = await import('@/lib/workflows/utils')
const result = await validateWorkflowPermissions('wf-1', 'req-1', 'write')
expectWorkflowAccessGranted(result)
@@ -206,8 +206,9 @@ describe('validateWorkflowPermissions', () => {
})
const mockWhere = vi.fn(() => ({ limit: mockLimit }))
const mockFrom = vi.fn(() => ({ where: mockWhere }))
vi.mocked(db.select).mockReturnValue({ from: mockFrom } as any)
vi.mocked(mockDb.select).mockReturnValue({ from: mockFrom } as any)
const { validateWorkflowPermissions } = await import('@/lib/workflows/utils')
const result = await validateWorkflowPermissions('wf-1', 'req-1', 'admin')
expectWorkflowAccessDenied(result, 403)
@@ -223,8 +224,9 @@ describe('validateWorkflowPermissions', () => {
})
const mockWhere = vi.fn(() => ({ limit: mockLimit }))
const mockFrom = vi.fn(() => ({ where: mockWhere }))
vi.mocked(db.select).mockReturnValue({ from: mockFrom } as any)
vi.mocked(mockDb.select).mockReturnValue({ from: mockFrom } as any)
const { validateWorkflowPermissions } = await import('@/lib/workflows/utils')
const result = await validateWorkflowPermissions('wf-1', 'req-1', 'admin')
expectWorkflowAccessGranted(result)
@@ -233,18 +235,19 @@ describe('validateWorkflowPermissions', () => {
describe('no workspace permission', () => {
it('should deny access to user without any workspace permission', async () => {
vi.mocked(getSession).mockResolvedValue(mockSession as any)
auth.mockGetSession.mockResolvedValue(mockSession as any)
let callCount = 0
const mockLimit = vi.fn().mockImplementation(() => {
callCount++
if (callCount === 1) return Promise.resolve([mockWorkflow])
return Promise.resolve([]) // No permission record
return Promise.resolve([])
})
const mockWhere = vi.fn(() => ({ limit: mockLimit }))
const mockFrom = vi.fn(() => ({ where: mockWhere }))
vi.mocked(db.select).mockReturnValue({ from: mockFrom } as any)
vi.mocked(mockDb.select).mockReturnValue({ from: mockFrom } as any)
const { validateWorkflowPermissions } = await import('@/lib/workflows/utils')
const result = await validateWorkflowPermissions('wf-1', 'req-1', 'read')
expectWorkflowAccessDenied(result, 403)
@@ -259,13 +262,14 @@ describe('validateWorkflowPermissions', () => {
workspaceId: null,
})
vi.mocked(getSession).mockResolvedValue(mockSession as any)
auth.mockGetSession.mockResolvedValue(mockSession as any)
const mockLimit = vi.fn().mockResolvedValue([workflowWithoutWorkspace])
const mockWhere = vi.fn(() => ({ limit: mockLimit }))
const mockFrom = vi.fn(() => ({ where: mockWhere }))
vi.mocked(db.select).mockReturnValue({ from: mockFrom } as any)
vi.mocked(mockDb.select).mockReturnValue({ from: mockFrom } as any)
const { validateWorkflowPermissions } = await import('@/lib/workflows/utils')
const result = await validateWorkflowPermissions('wf-2', 'req-1', 'read')
expectWorkflowAccessDenied(result, 403)
@@ -278,13 +282,14 @@ describe('validateWorkflowPermissions', () => {
workspaceId: null,
})
vi.mocked(getSession).mockResolvedValue(mockSession as any)
auth.mockGetSession.mockResolvedValue(mockSession as any)
const mockLimit = vi.fn().mockResolvedValue([workflowWithoutWorkspace])
const mockWhere = vi.fn(() => ({ limit: mockLimit }))
const mockFrom = vi.fn(() => ({ where: mockWhere }))
vi.mocked(db.select).mockReturnValue({ from: mockFrom } as any)
vi.mocked(mockDb.select).mockReturnValue({ from: mockFrom } as any)
const { validateWorkflowPermissions } = await import('@/lib/workflows/utils')
const result = await validateWorkflowPermissions('wf-2', 'req-1', 'read')
expectWorkflowAccessDenied(result, 403)
@@ -293,7 +298,7 @@ describe('validateWorkflowPermissions', () => {
describe('default action', () => {
it('should default to read action when not specified', async () => {
vi.mocked(getSession).mockResolvedValue(mockSession as any)
auth.mockGetSession.mockResolvedValue(mockSession as any)
let callCount = 0
const mockLimit = vi.fn().mockImplementation(() => {
@@ -303,8 +308,9 @@ describe('validateWorkflowPermissions', () => {
})
const mockWhere = vi.fn(() => ({ limit: mockLimit }))
const mockFrom = vi.fn(() => ({ where: mockWhere }))
vi.mocked(db.select).mockReturnValue({ from: mockFrom } as any)
vi.mocked(mockDb.select).mockReturnValue({ from: mockFrom } as any)
const { validateWorkflowPermissions } = await import('@/lib/workflows/utils')
const result = await validateWorkflowPermissions('wf-1', 'req-1')
expectWorkflowAccessGranted(result)
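The pattern this refactor converges on, based only on the calls visible in the diff, is roughly the following; treat it as a sketch of the helper's usage rather than its full API.
import { mockAuth } from '@sim/testing'

const auth = mockAuth()

// Unauthenticated request: the route should come back 401.
auth.setUnauthenticated()

// Authenticated as a specific user: owner/workspace permission checks run.
auth.setAuthenticated({ id: 'owner-1', email: 'owner-1@test.com' })

// Direct control over the resolved session when its exact shape matters.
auth.mockGetSession.mockResolvedValue({ user: {} } as any)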

View File

@@ -1,17 +1,7 @@
import { drizzleOrmMock } from '@sim/testing/mocks'
import { databaseMock, drizzleOrmMock } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
vi.mock('@sim/db', () => ({
db: {
select: vi.fn(),
from: vi.fn(),
where: vi.fn(),
limit: vi.fn(),
innerJoin: vi.fn(),
leftJoin: vi.fn(),
orderBy: vi.fn(),
},
}))
vi.mock('@sim/db', () => databaseMock)
vi.mock('@sim/db/schema', () => ({
permissions: {

View File

@@ -112,6 +112,8 @@ export interface ProviderToolConfig {
required: string[]
}
usageControl?: ToolUsageControl
/** Block-level params transformer — converts SubBlock values to tool-ready params */
paramsTransform?: (params: Record<string, any>) => Record<string, any>
}
export interface Message {
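A conforming paramsTransform is just a pure function over the merged params. A minimal sketch follows; the param name inputMapping and the JSON-parsing rule are illustrative assumptions, not the actual block config.
const exampleTransform: NonNullable<ProviderToolConfig['paramsTransform']> = (params) => {
  const next = { ...params }
  // Illustrative rule: a json-typed param that arrived as a string gets parsed.
  if (typeof next.inputMapping === 'string' && next.inputMapping.trim().length > 0) {
    try {
      next.inputMapping = JSON.parse(next.inputMapping)
    } catch {
      // Not valid JSON, keep the raw string.
    }
  }
  return next
}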

View File

@@ -4,6 +4,12 @@ import type { ChatCompletionChunk } from 'openai/resources/chat/completions'
import type { CompletionUsage } from 'openai/resources/completions'
import { env } from '@/lib/core/config/env'
import { isHosted } from '@/lib/core/config/feature-flags'
import {
buildCanonicalIndex,
type CanonicalGroup,
getCanonicalValues,
isCanonicalPair,
} from '@/lib/workflows/subblocks/visibility'
import { isCustomTool } from '@/executor/constants'
import {
getComputerUseModels,
@@ -437,9 +443,10 @@ export async function transformBlockTool(
getAllBlocks: () => any[]
getTool: (toolId: string) => any
getToolAsync?: (toolId: string) => Promise<any>
canonicalModes?: Record<string, 'basic' | 'advanced'>
}
): Promise<ProviderToolConfig | null> {
const { selectedOperation, getAllBlocks, getTool, getToolAsync } = options
const { selectedOperation, getAllBlocks, getTool, getToolAsync, canonicalModes } = options
const blockDef = getAllBlocks().find((b: any) => b.type === block.type)
if (!blockDef) {
@@ -516,12 +523,66 @@ export async function transformBlockTool(
uniqueToolId = `${toolConfig.id}_${userProvidedParams.knowledgeBaseId}`
}
const blockParamsFn = blockDef?.tools?.config?.params as
| ((p: Record<string, any>) => Record<string, any>)
| undefined
const blockInputDefs = blockDef?.inputs as Record<string, any> | undefined
const canonicalGroups: CanonicalGroup[] = blockDef?.subBlocks
? Object.values(buildCanonicalIndex(blockDef.subBlocks).groupsById).filter(isCanonicalPair)
: []
const needsTransform = blockParamsFn || blockInputDefs || canonicalGroups.length > 0
const paramsTransform = needsTransform
? (params: Record<string, any>): Record<string, any> => {
let result = { ...params }
for (const group of canonicalGroups) {
const { basicValue, advancedValue } = getCanonicalValues(group, result)
const scopedKey = `${block.type}:${group.canonicalId}`
const pairMode = canonicalModes?.[scopedKey] ?? 'basic'
const chosen = pairMode === 'advanced' ? advancedValue : basicValue
const sourceIds = [group.basicId, ...group.advancedIds].filter(Boolean) as string[]
sourceIds.forEach((id) => delete result[id])
if (chosen !== undefined) {
result[group.canonicalId] = chosen
}
}
if (blockParamsFn) {
const transformed = blockParamsFn(result)
result = { ...result, ...transformed }
}
if (blockInputDefs) {
for (const [key, schema] of Object.entries(blockInputDefs)) {
const value = result[key]
if (typeof value === 'string' && value.trim().length > 0) {
const inputType = typeof schema === 'object' ? schema.type : schema
if (inputType === 'json' || inputType === 'array') {
try {
result[key] = JSON.parse(value.trim())
} catch {
// Not valid JSON — keep as string
}
}
}
}
}
return result
}
: undefined
return {
id: uniqueToolId,
name: toolName,
description: toolDescription,
params: userProvidedParams,
parameters: llmSchema,
paramsTransform,
}
}
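The canonical-pair step above boils down to: pick the basic or advanced source value according to the scoped mode, drop all source keys, and write the result under the canonical id. A simplified, self-contained version of just that step is sketched below; the group shape and ids are stand-ins, not the real CanonicalGroup type or any actual block's fields.
interface CanonicalGroupSketch {
  canonicalId: string
  basicId: string
  advancedIds: string[]
}

function resolveCanonicalSketch(
  group: CanonicalGroupSketch,
  params: Record<string, any>,
  mode: 'basic' | 'advanced'
): Record<string, any> {
  const result = { ...params }
  const chosen = mode === 'advanced' ? result[group.advancedIds[0]] : result[group.basicId]
  // Remove the basic/advanced source fields, keep only the canonical key the tool expects.
  for (const id of [group.basicId, ...group.advancedIds]) delete result[id]
  if (chosen !== undefined) result[group.canonicalId] = chosen
  return result
}

// resolveCanonicalSketch(
//   { canonicalId: 'folderId', basicId: 'folderSelector', advancedIds: ['manualFolderId'] },
//   { folderSelector: 'abc123', manualFolderId: '', query: 'status:unread' },
//   'basic'
// )
// => { query: 'status:unread', folderId: 'abc123' }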
@@ -1028,7 +1089,11 @@ export function getMaxOutputTokensForModel(model: string): number {
* Prepare tool execution parameters, separating tool parameters from system parameters
*/
export function prepareToolExecution(
tool: { params?: Record<string, any>; parameters?: Record<string, any> },
tool: {
params?: Record<string, any>
parameters?: Record<string, any>
paramsTransform?: (params: Record<string, any>) => Record<string, any>
},
llmArgs: Record<string, any>,
request: {
workflowId?: string
@@ -1045,8 +1110,15 @@ export function prepareToolExecution(
toolParams: Record<string, any>
executionParams: Record<string, any>
} {
// Use centralized merge logic from tools/params
const toolParams = mergeToolParameters(tool.params || {}, llmArgs) as Record<string, any>
let toolParams = mergeToolParameters(tool.params || {}, llmArgs) as Record<string, any>
if (tool.paramsTransform) {
try {
toolParams = tool.paramsTransform(toolParams)
} catch (err) {
logger.warn('paramsTransform failed, using raw params', { error: err })
}
}
const executionParams = {
...toolParams,

Some files were not shown because too many files have changed in this diff.