Compare commits

...

60 Commits

Author SHA1 Message Date
Waleed
6c66521d64 v0.5.84: model request sanitization 2026-02-07 19:06:53 -08:00
Waleed
f9b885f6d5 fix(models): add request sanitization (#3165) 2026-02-07 19:04:15 -08:00
Vikhyath Mondreti
479cd347ad v0.5.83: agent skills, concurrent workers for v8s, airweave integration 2026-02-07 12:27:11 -08:00
Vikhyath Mondreti
0cb6714496 fix(rooms): cleanup edge case for 1hr ttl (#3163)
* fix(rooms): cleanup edge case for 1hr ttl

* revert feature flags

* address comments

* remove console log
2026-02-07 12:18:07 -08:00
Waleed
7b36f9257e improvement(models): reorder models dropdown (#3164) 2026-02-07 12:05:10 -08:00
Waleed
99ae5435e3 feat(models): updated model configs, updated anthropic provider to propagate errors back to user if any (#3159)
* feat(models): updated model configs, updated anthropic provider to propagate errors back to user if any

* moved max tokens to advanced

* updated model configs and testesd

* removed default in max config for output tokens

* moved more stuff to advanced mode in the agent block

* stronger typing

* move api key under model, update mistral and groq

* update openrouter, fixed serializer to allow ollama/vllm models without api key

* removed ollama handling
2026-02-06 22:35:57 -08:00
Vikhyath Mondreti
925f06add7 improvement(preview): render nested values like input format correctly in workflow execution preview (#3154)
* improvement(preview): nested workflow snapshots/preview when not executed

* improvements to resolve nested subblock values

* few more things

* add try catch

* fix fallback case

* deps
2026-02-06 22:12:40 -08:00
Vikhyath Mondreti
193b95cfec fix(auth): swap out hybrid auth in relevant callsites (#3160)
* fix(logs): execution files should always use our internal route

* correct degree of access control

* fix tests

* fix tag defs flag

* fix type check

* fix mcp tools

* make webhooks consistent

* fix ollama and vllm visibility

* remove dup test
2026-02-06 22:07:55 -08:00
Waleed
0ca25bbab6 fix(function): isolated-vm worker pool to prevent single-worker bottleneck + execution user id resolution (#3155)
* fix(executor): isolated-vm worker pool to prevent single-worker bottleneck

* chore(helm): add isolated-vm worker pool env vars to values.yaml

* fix(userid): resolution for fair scheduling

* add fallback back

* add to helm charts

* remove constant fallbacks

* fix

* address bugbot comments

* fix fallbacks

* one more bugbot comment

---------

Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
2026-02-06 18:34:03 -08:00
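The core change in the commit above replaces a single shared isolated-vm worker with a pool. A minimal sketch of the round-robin idea, using hypothetical names (`WorkerPool`, `dispatch`) that are illustrative rather than Sim's actual implementation:

```typescript
// Hypothetical sketch of a fixed-size worker pool with round-robin dispatch.
type Task<T> = () => Promise<T>;

class WorkerPool {
  private queues: Task<unknown>[][];
  private next = 0;

  constructor(size: number) {
    this.queues = Array.from({ length: size }, () => []);
  }

  // Round-robin: each task goes to the next worker's queue, so one
  // long-running task no longer blocks every other execution.
  dispatch<T>(task: Task<T>): number {
    const worker = this.next;
    this.queues[worker].push(task as Task<unknown>);
    this.next = (this.next + 1) % this.queues.length;
    return worker;
  }
}

const pool = new WorkerPool(4);
const assigned = [1, 2, 3, 4, 5].map(() => pool.dispatch(async () => null));
// Five tasks spread across four workers; the fifth wraps back to worker 0.
```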
Waleed
1edaf197b2 fix(azure): add azure-anthropic support to router, evaluator, copilot, and tokenization (#3158)
* fix(azure): add azure-anthropic support to router, evaluator, copilot, and tokenization

* added azure anthropic values to env

* fix(azure): make anthropic-version configurable for azure-anthropic provider

* fix(azure): thread provider credentials through guardrails and fix translate missing bedrockAccessKeyId

* updated guardrails

* ack'd PR comments

* fix(azure): unify credential passing pattern across all LLM handlers

- Pass all provider credentials unconditionally in router, evaluator (matching agent pattern)
- Remove conditional if-branching on providerId for credential fields
- Thread workspaceId through guardrails → hallucination validator for BYOK key resolution
- Remove getApiKey() from hallucination validator, let executeProviderRequest handle it
- Resolve vertex OAuth credentials in hallucination validator matching agent handler pattern
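The unified pattern described in these bullets, passing every credential field unconditionally instead of branching on `providerId`, can be sketched like this (field names are hypothetical, not Sim's actual types):

```typescript
// Illustrative credential bag: every handler receives the same shape,
// and the provider request layer picks the fields it needs.
interface ProviderCredentials {
  apiKey?: string;
  azureEndpoint?: string;
  bedrockAccessKeyId?: string;
  vertexProject?: string;
}

// No `if (providerId === 'azure') ...` branches: credentials are passed
// through unconditionally, matching the agent handler's pattern.
function buildProviderRequest(providerId: string, creds: ProviderCredentials) {
  return { providerId, ...creds };
}

const req = buildProviderRequest('azure-anthropic', {
  apiKey: 'key',
  azureEndpoint: 'https://example.invalid',
});
```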

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 15:26:10 -08:00
Waleed
474b1af145 improvement(ui): improved skills UI, validation, and permissions (#3156)
* improvement(ui): improved skills UI, validation, and permissions

* stronger typing for Skill interface

* added missing docs description

* ack comment
2026-02-06 13:11:56 -08:00
Waleed
a3a99eda19 v0.5.82: slack trigger files, pagination for linear, executor fixes 2026-02-06 00:41:52 -08:00
Waleed
1a66d48add v0.5.81: traces fix, additional confluence tools, azure anthropic support, opus 4.6 2026-02-05 11:28:54 -08:00
Waleed
46822e91f3 v0.5.80: lock feature, enterprise modules, time formatting consolidation, files, UX and UI improvements, longer timeouts 2026-02-04 18:27:05 -08:00
Waleed
2bb68335ee v0.5.79: longer MCP tools timeout, optimize loop/parallel regeneration, enrich.so integration 2026-01-31 21:57:56 -08:00
Waleed
8528fbe2d2 v0.5.78: billing fixes, mcp timeout increase, reactquery migrations, updated tool param visibilities, DSPy and Google Maps integrations 2026-01-31 13:48:22 -08:00
Waleed
31fdd2be13 v0.5.77: room manager redis migration, tool outputs, ui fixes 2026-01-30 14:57:17 -08:00
Waleed
028bc652c2 v0.5.76: posthog improvements, readme updates 2026-01-29 00:13:19 -08:00
Waleed
c6bf5cd58c v0.5.75: search modal overhaul, helm chart updates, run from block, terminal and visual debugging improvements 2026-01-28 22:54:13 -08:00
Vikhyath Mondreti
11dc18a80d v0.5.74: autolayout improvements, clerk integration, auth enforcements 2026-01-27 20:37:39 -08:00
Waleed
ab4e9dc72f v0.5.73: ci, helm updates, kb, ui fixes, note block enhancements 2026-01-26 22:04:35 -08:00
Vikhyath Mondreti
1c58c35bd8 v0.5.72: azure connection string, supabase improvement, multitrigger resolution, docs quick reference 2026-01-25 23:42:27 -08:00
Waleed
d63a5cb504 v0.5.71: ux, ci improvements, docs updates 2026-01-25 03:08:08 -08:00
Waleed
8bd5d41723 v0.5.70: router fix, anthropic agent response format adherence 2026-01-24 20:57:02 -08:00
Waleed
c12931bc50 v0.5.69: kb upgrades, blog, copilot improvements, auth consolidation (#2973)
* fix(subflows): tag dropdown + resolution logic (#2949)

* fix(subflows): tag dropdown + resolution logic

* fixes;

* revert parallel change

* chore(deps): bump posthog-js to 1.334.1 (#2948)

* fix(idempotency): add conflict target to atomicallyClaimDb query + remove redundant db namespace tracking (#2950)

* fix(idempotency): add conflict target to atomicallyClaimDb query

* delete needs to account for namespace

* simplify namespace filtering logic

* fix cleanup

* consistent target

* improvement(kb): add document filtering, select all, and React Query migration (#2951)

* improvement(kb): add document filtering, select all, and React Query migration

* test(kb): update tests for enabledFilter and removed userId params

* fix(kb): remove non-null assertion, add explicit guard

* improvement(logs): trace span, details (#2952)

* improvement(action-bar): ordering

* improvement(logs): details, trace span

* feat(blog): v0.5 release post (#2953)

* feat(blog): v0.5 post

* improvement(blog): simplify title and remove code block header

- Simplified blog title from Introducing Sim Studio v0.5 to Introducing Sim v0.5
- Removed language label header and copy button from code blocks for cleaner appearance

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* ack PR comments

* small styling improvements

* created system to create post-specific components

* updated componnet

* cache invalidation

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>

* feat(admin): add credits endpoint to issue credits to users (#2954)

* feat(admin): add credits endpoint to issue credits to users

* fix(admin): use existing credit functions and handle enterprise seats

* fix(admin): reject NaN and Infinity in amount validation

* styling

* fix(admin): validate userId and email are strings

* improvement(copilot): fast mode, subagent tool responses and allow preferences (#2955)

* Improvements

* Fix actions mapping

* Remove console logs

* fix(billing): handle missing userStats and prevent crashes (#2956)

* fix(billing): handle missing userStats and prevent crashes

* fix(billing): correct import path for getFilledPillColor

* fix(billing): add Number.isFinite check to lastPeriodCost

* fix(logs): refresh logic to refresh logs details (#2958)

* fix(security): add authentication and input validation to API routes (#2959)

* fix(security): add authentication and input validation to API routes

* moved utils

* remove extraneous commetns

* removed unused dep

* improvement(helm): add internal ingress support and same-host path consolidation (#2960)

* improvement(helm): add internal ingress support and same-host path consolidation

* improvement(helm): clean up ingress template comments

Simplify verbose inline Helm comments and section dividers to match the
minimal style used in services.yaml.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix(helm): add missing copilot path consolidation for realtime host

When copilot.host equals realtime.host but differs from app.host,
copilot paths were not being routed. Added logic to consolidate
copilot paths into the realtime rule for this scenario.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* improvement(helm): follow ingress best practices

- Remove orphan comments that appeared when services were disabled
- Add documentation about path ordering requirements
- Paths rendered in order: realtime, copilot, app (specific before catch-all)
- Clean template output matching industry Helm chart standards

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>

* feat(blog): enterprise post (#2961)

* feat(blog): enterprise post

* added more images, styling

* more content

* updated v0-5 post

* remove unused transition

---------

Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>

* fix(envvars): resolution standardized (#2957)

* fix(envvars): resolution standardized

* remove comments

* address bugbot

* fix highlighting for env vars

* remove comments

* address greptile

* address bugbot

* fix(copilot): mask credentials fix (#2963)

* Fix copilot masking

* Clean up

* Lint

* improvement(webhooks): remove dead code (#2965)

* fix(webhooks): subscription recreation path

* improvement(webhooks): remove dead code

* fix tests

* address bugbot comments

* fix restoration edge case

* fix more edge cases

* address bugbot comments

* fix gmail polling

* add warnings for UI indication for credential sets

* fix(preview): subblock values (#2969)

* fix(child-workflow): nested spans handoff (#2966)

* fix(child-workflow): nested spans handoff

* remove overly defensive programming

* update type check

* type more code

* remove more dead code

* address bugbot comments

* fix(security): restrict API key access on internal-only routes (#2964)

* fix(security): restrict API key access on internal-only routes

* test(security): update function execute tests for checkInternalAuth

* updated agent handler

* move session check higher in checkSessionOrInternalAuth

* extracted duplicate code into helper for resolving user from jwt

* fix(copilot): update copilot chat title (#2968)

* fix(hitl): fix condition blocks after hitl (#2967)

* fix(notes): ghost edges (#2970)

* fix(notes): ghost edges

* fix deployed state fallback

* fallback

* remove UI level checks

* annotation missing from autoconnect source check

* improvement(docs): loop and parallel var reference syntax (#2975)

* fix(blog): slash actions description (#2976)

* improvement(docs): loop and parallel var reference syntax

* fix(blog): slash actions description

* fix(auth): copilot routes (#2977)

* Fix copilot auth

* Fix

* Fix

* Fix

* fix(copilot): fix edit summary for loops/parallels (#2978)

* fix(integrations): hide from tool bar (#2544)

* fix(landing): ui (#2979)

* fix(edge-validation): race condition on collaborative add (#2980)

* fix(variables): boolean type support and input improvements (#2981)

* fix(variables): boolean type support and input improvements

* fix formatting

---------

Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: Emir Karabeg <78010029+emir-karabeg@users.noreply.github.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Siddharth Ganesan <33737564+Sg312@users.noreply.github.com>
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
2026-01-24 14:29:53 -08:00
Waleed
e9c4251c1c v0.5.68: router block reasoning, executor improvements, variable resolution consolidation, helm updates (#2946)
* improvement(workflow-item): stabilize avatar layout and fix name truncation (#2939)

* improvement(workflow-item): stabilize avatar layout and fix name truncation

* fix(avatars): revert overflow bg to hardcoded color for contrast

* fix(executor): stop parallel execution when block errors (#2940)

* improvement(helm): add per-deployment extraVolumes support (#2942)

* fix(gmail): expose messageId field in read email block (#2943)

* fix(resolver): consolidate reference resolution  (#2941)

* fix(resolver): consolidate code to resolve references

* fix edge cases

* use already formatted error

* fix multi index

* fix backwards compat reachability

* handle backwards compatibility accurately

* use shared constant correctly

* feat(router): expose reasoning output in router v2 block (#2945)

* fix(copilot): always allow, credential masking (#2947)

* Fix always allow, credential validation

* Credential masking

* Autoload

* fix(executor): handle condition dead-end branches in loops (#2944)

---------

Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: Siddharth Ganesan <33737564+Sg312@users.noreply.github.com>
2026-01-22 13:48:15 -08:00
Waleed
cc2be33d6b v0.5.67: loading, password reset, ui improvements, helm updates (#2928)
* fix(zustand): updated to useShallow from deprecated createWithEqualityFn (#2919)

* fix(logger): use direct env access for webpack inlining (#2920)

* fix(notifications): text overflow with line-clamp (#2921)

* chore(helm): add env vars for Vertex AI, orgs, and telemetry (#2922)

* fix(auth): improve reset password flow and consolidate brand detection (#2924)

* fix(auth): improve reset password flow and consolidate brand detection

* fix(auth): set errorHandled for EMAIL_NOT_VERIFIED to prevent duplicate error

* fix(auth): clear success message on login errors

* chore(auth): fix import order per lint

* fix(action-bar): duplicate subflows with children (#2923)

* fix(action-bar): duplicate subflows with children

* fix(action-bar): add validateTriggerPaste for subflow duplicate

* fix(resolver): agent response format, input formats, root level (#2925)

* fix(resolvers): agent response format, input formats, root level

* fix response block initial seeding

* fix tests

* fix(messages-input): fix cursor alignment and auto-resize with overlay (#2926)

* fix(messages-input): fix cursor alignment and auto-resize with overlay

* fixed remaining zustand warnings

* fix(stores): remove dead code causing log spam on startup (#2927)

* fix(stores): remove dead code causing log spam on startup

* fix(stores): replace custom tools zustand store with react query cache

* improvement(ui): use BrandedButton and BrandedLink components (#2930)

- Refactor auth forms to use BrandedButton component
- Add BrandedLink component for changelog page
- Reduce code duplication in login, signup, reset-password forms
- Update star count default value

* fix(custom-tools): remove unsafe title fallback in getCustomTool (#2929)

* fix(custom-tools): remove unsafe title fallback in getCustomTool

* fix(custom-tools): restore title fallback in getCustomTool lookup

Custom tools are referenced by title (custom_${title}), not database ID.
The title fallback is required for client-side tool resolution to work.
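The fallback described here, where custom tools are addressed as `custom_${title}` rather than by database ID, can be sketched as follows (illustrative code, not the actual Sim implementation):

```typescript
// Illustrative lookup: try the database ID first, then fall back to the
// title-based reference format used on the client (custom_${title}).
interface CustomTool {
  id: string;
  title: string;
}

function getCustomTool(ref: string, tools: CustomTool[]): CustomTool | undefined {
  return (
    tools.find((t) => t.id === ref) ??
    tools.find((t) => `custom_${t.title}` === ref)
  );
}

const tools: CustomTool[] = [{ id: 'abc123', title: 'scraper' }];
```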

* fix(null-bodies): empty bodies handling (#2931)

* fix(null-statuses): empty bodies handling

* address bugbot comment

* fix(token-refresh): microsoft, notion, x, linear (#2933)

* fix(microsoft): proactive refresh needed

* fix(x): missing token refresh flag

* notion and linear missing flag too

* address bugbot comment

* fix(auth): handle EMAIL_NOT_VERIFIED in onError callback (#2932)

* fix(auth): handle EMAIL_NOT_VERIFIED in onError callback

* refactor(auth): extract redirectToVerify helper to reduce duplication

* fix(workflow-selector): use dedicated selector for workflow dropdown (#2934)

* feat(workflow-block): preview (#2935)

* improvement(copilot): tool configs to show nested props (#2936)

* fix(auth): add genericOAuth providers to trustedProviders (#2937)

---------

Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: Emir Karabeg <78010029+emir-karabeg@users.noreply.github.com>
2026-01-21 22:53:25 -08:00
Vikhyath Mondreti
45371e521e v0.5.66: external http requests fix, ring highlighting 2026-01-21 02:55:39 -08:00
Waleed
0ce0f98aa5 v0.5.65: gemini updates, textract integration, ui updates (#2909)
* fix(google): wrap primitive tool responses for Gemini API compatibility (#2900)

* fix(canonical): copilot path + update parent (#2901)

* fix(rss): add top-level title, link, pubDate fields to RSS trigger output (#2902)

* fix(rss): add top-level title, link, pubDate fields to RSS trigger output

* fix(imap): add top-level fields to IMAP trigger output

* improvement(browseruse): add profile id param (#2903)

* improvement(browseruse): add profile id param

* make request a stub since we have directExec

* improvement(executor): upgraded abort controller to handle aborts for loops and parallels (#2880)

* improvement(executor): upgraded abort controller to handle aborts for loops and parallels

* comments

* improvement(files): update execution for passing base64 strings (#2906)

* progress

* improvement(execution): update execution for passing base64 strings

* fix types

* cleanup comments

* path security vuln

* reject promise correctly

* fix redirect case

* remove proxy routes

* fix tests

* use ipaddr

* feat(tools): added textract, added v2 for mistral, updated tag dropdown (#2904)

* feat(tools): added textract

* cleanup

* ack pr comments

* reorder

* removed upload for textract async version

* fix additional fields dropdown in editor, update parser to leave validation to be done on the server

* added mistral v2, files v2, and finalized textract

* updated the rest of the old file patterns, updated mistral outputs for v2

* updated tag dropdown to parse non-operation fields as well

* updated extension finder

* cleanup

* added description for inputs to workflow

* use helper for internal route check

* fix tag dropdown merge conflict change

* remove duplicate code

---------

Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>

* fix(ui): change add inputs button to match output selector (#2907)

* fix(canvas): removed invite to workspace from canvas popover (#2908)

* fix(canvas): removed invite to workspace

* removed unused props

* fix(copilot): legacy tool display names (#2911)

* fix(a2a): canonical merge  (#2912)

* fix canonical merge

* fix empty array case

* fix(change-detection): copilot diffs have extra field (#2913)

* improvement(logs): improved logs ui bugs, added subflow disable UI (#2910)

* improvement(logs): improved logs ui bugs, added subflow disable UI

* added duplicate to action bar for subflows

* feat(broadcast): email v0.5 (#2905)

---------

Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
Co-authored-by: Emir Karabeg <78010029+emir-karabeg@users.noreply.github.com>
2026-01-20 23:54:55 -08:00
Waleed
dff1c9d083 v0.5.64: unsubscribe, search improvements, metrics, additional SSO configuration 2026-01-20 00:34:11 -08:00
Vikhyath Mondreti
b09f683072 v0.5.63: ui and performance improvements, more google tools 2026-01-18 15:22:42 -08:00
Vikhyath Mondreti
a8bb0db660 v0.5.62: webhook bug fixes, seeding default subblock values, block selection fixes 2026-01-16 20:27:06 -08:00
Waleed
af82820a28 v0.5.61: webhook improvements, workflow controls, react query for deployment status, chat fixes, reducto and pulse OCR, linear fixes 2026-01-16 18:06:23 -08:00
Waleed
4372841797 v0.5.60: invitation flow improvements, chat fixes, a2a improvements, additional copilot actions 2026-01-15 00:02:18 -08:00
Waleed
5e8c843241 v0.5.59: a2a support, documentation 2026-01-13 13:21:21 -08:00
Waleed
7bf3d73ee6 v0.5.58: export folders, new tools, permissions groups enhancements 2026-01-13 00:56:59 -08:00
Vikhyath Mondreti
7ffc11a738 v0.5.57: subagents, context menu improvements, bug fixes 2026-01-11 11:38:40 -08:00
Waleed
be578e2ed7 v0.5.56: batch operations, access control and permission groups, billing fixes 2026-01-10 00:31:34 -08:00
Waleed
f415e5edc4 v0.5.55: polling groups, bedrock provider, devcontainer fixes, workflow preview enhancements 2026-01-08 23:36:56 -08:00
Waleed
13a6e6c3fa v0.5.54: seo, model blacklist, helm chart updates, fireflies integration, autoconnect improvements, billing fixes 2026-01-07 16:09:45 -08:00
Waleed
f5ab7f21ae v0.5.53: hotkey improvements, added redis fallback, fixes for workflow tool 2026-01-06 23:34:52 -08:00
Waleed
bfb6fffe38 v0.5.52: new port-based router block, combobox expression and variable support 2026-01-06 16:14:10 -08:00
Waleed
4fbec0a43f v0.5.51: triggers, kb, condition block improvements, supabase and grain integration updates 2026-01-06 14:26:46 -08:00
Waleed
585f5e365b v0.5.50: import improvements, ui upgrades, kb styling and performance improvements 2026-01-05 00:35:55 -08:00
Waleed
3792bdd252 v0.5.49: hitl improvements, new email styles, imap trigger, logs context menu (#2672)
* feat(logs-context-menu): consolidated logs utils and types, added logs record context menu (#2659)

* feat(email): welcome email; improvement(emails): ui/ux (#2658)

* feat(email): welcome email; improvement(emails): ui/ux

* improvement(emails): links, accounts, preview

* refactor(emails): file structure and wrapper components

* added envvar for personal emails sent, added isHosted gate

* fixed failing tests, added env mock

* fix: removed comment

---------

Co-authored-by: waleed <walif6@gmail.com>

* fix(logging): hitl + trigger dev crash protection (#2664)

* hitl gaps

* deal with trigger worker crashes

* cleanup import strcuture

* feat(imap): added support for imap trigger (#2663)

* feat(tools): added support for imap trigger

* feat(imap): added parity, tested

* ack PR comments

* final cleanup

* feat(i18n): update translations (#2665)

Co-authored-by: waleedlatif1 <waleedlatif1@users.noreply.github.com>

* fix(grain): updated grain trigger to auto-establish trigger (#2666)

Co-authored-by: aadamgough <adam@sim.ai>

* feat(admin): routes to manage deployments (#2667)

* feat(admin): routes to manage deployments

* fix naming fo deployed by

* feat(time-picker): added timepicker emcn component, added to playground, added searchable prop for dropdown, added more timezones for schedule, updated license and notice date (#2668)

* feat(time-picker): added timepicker emcn component, added to playground, added searchable prop for dropdown, added more timezones for schedule, updated license and notice date

* removed unused params, cleaned up redundant utils

* improvement(invite): aligned styling (#2669)

* improvement(invite): aligned with rest of app

* fix(invite): error handling

* fix: addressed comments

---------

Co-authored-by: Emir Karabeg <78010029+emir-karabeg@users.noreply.github.com>
Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: waleedlatif1 <waleedlatif1@users.noreply.github.com>
Co-authored-by: Adam Gough <77861281+aadamgough@users.noreply.github.com>
Co-authored-by: aadamgough <adam@sim.ai>
2026-01-03 13:19:18 -08:00
Waleed
eb5d1f3e5b v0.5.48: copy-paste workflow blocks, docs updates, mcp tool fixes 2025-12-31 18:00:04 -08:00
Waleed
54ab82c8dd v0.5.47: deploy workflow as mcp, kb chunks tokenizer, UI improvements, jira service management tools 2025-12-30 23:18:58 -08:00
Waleed
f895bf469b v0.5.46: build improvements, greptile, light mode improvements 2025-12-29 02:17:52 -08:00
Waleed
dd3209af06 v0.5.45: light mode fixes, realtime usage indicator, docker build improvements 2025-12-27 19:57:42 -08:00
Waleed
b6ba3b50a7 v0.5.44: keyboard shortcuts, autolayout, light mode, byok, testing improvements 2025-12-26 21:25:19 -08:00
Waleed
b304233062 v0.5.43: export logs, circleback, grain, vertex, code hygiene, schedule improvements 2025-12-23 19:19:18 -08:00
Vikhyath Mondreti
57e4b49bd6 v0.5.42: fix memory migration 2025-12-23 01:24:54 -08:00
Vikhyath Mondreti
e12dd204ed v0.5.41: memory fixes, copilot improvements, knowledgebase improvements, LLM providers standardization 2025-12-23 00:15:18 -08:00
Vikhyath Mondreti
3d9d9cbc54 v0.5.40: supabase ops to allow non-public schemas, jira uuid 2025-12-21 22:28:05 -08:00
Waleed
0f4ec962ad v0.5.39: notion, workflow variables fixes 2025-12-20 20:44:00 -08:00
Waleed
4827866f9a v0.5.38: snap to grid, copilot ux improvements, billing line items 2025-12-20 17:24:38 -08:00
Waleed
3e697d9ed9 v0.5.37: redaction utils consolidation, logs updates, autoconnect improvements, additional kb tag types 2025-12-19 22:31:55 -08:00
Martin Yankov
4431a1a484 fix(helm): add custom egress rules to realtime network policy (#2481)
The realtime service network policy was missing the custom egress rules section
that allows configuration of additional egress rules via values.yaml. This caused
the realtime pods to be unable to connect to external databases (e.g., PostgreSQL
on port 5432) when using external database configurations.

The app network policy already had this section, but the realtime network policy
was missing it, creating an inconsistency and preventing the realtime service
from accessing external databases configured via networkPolicy.egress values.

This fix adds the same custom egress rules template section to the realtime
network policy, matching the app network policy behavior and allowing users to
configure database connectivity via values.yaml.
2025-12-19 18:59:08 -08:00
Waleed
4d1a9a3f22 v0.5.36: hitl improvements, opengraph, slack fixes, one-click unsubscribe, auth checks, new db indexes 2025-12-19 01:27:49 -08:00
Vikhyath Mondreti
eb07a080fb v0.5.35: helm updates, copilot improvements, 404 for docs, salesforce fixes, subflow resize clamping 2025-12-18 16:23:19 -08:00
112 changed files with 3284 additions and 1145 deletions


@@ -5462,3 +5462,24 @@ export function EnrichSoIcon(props: SVGProps<SVGSVGElement>) {
   </svg>
 )
 }
+export function AgentSkillsIcon(props: SVGProps<SVGSVGElement>) {
+  return (
+    <svg
+      {...props}
+      xmlns='http://www.w3.org/2000/svg'
+      width='16'
+      height='16'
+      viewBox='0 0 16 16'
+      fill='none'
+    >
+      <path
+        d='M8 1L14.0622 4.5V11.5L8 15L1.93782 11.5V4.5L8 1Z'
+        stroke='currentColor'
+        strokeWidth='1.5'
+        fill='none'
+      />
+      <path d='M8 4.5L11 6.25V9.75L8 11.5L5 9.75V6.25L8 4.5Z' fill='currentColor' />
+    </svg>
+  )
+}


@@ -18,7 +18,9 @@ This means you can attach many skills to an agent without bloating its context w
 ## Creating Skills
-Go to **Settings** (gear icon) and select **Skills** under the Tools section.
+Go to **Settings** and select **Skills** under the Tools section.
+![Manage Skills](/static/skills/manage-skills.png)
 Click **Add** to create a new skill with three fields:
@@ -52,11 +54,22 @@ Use when the user asks you to write, optimize, or debug SQL queries.
 ...
 ```
+**Recommended structure:**
+- **When to use** — Specific triggers and scenarios
+- **Instructions** — Step-by-step guidance with numbered lists
+- **Examples** — Input/output samples showing expected behavior
+- **Common Patterns** — Reusable approaches for frequent tasks
+- **Edge Cases** — Gotchas and special considerations
+Keep skills focused and under 500 lines. If a skill grows too large, split it into multiple specialized skills.
 ## Adding Skills to an Agent
 Open any **Agent** block and find the **Skills** dropdown below the tools section. Select the skills you want the agent to have access to.
-Selected skills appear as chips that you can click to edit or remove.
+![Add Skill](/static/skills/add-skill.png)
+Selected skills appear as cards that you can click to edit or remove.
 ### What Happens at Runtime
@@ -69,12 +82,50 @@ When the workflow runs:
 This works across all supported LLM providers — the `load_skill` tool uses standard tool-calling, so no provider-specific configuration is needed.
-## Tips
+## Common Use Cases
-- **Keep descriptions actionable** — Instead of "Helps with SQL", write "Write optimized SQL queries for PostgreSQL, MySQL, and SQLite, including index recommendations and query plan analysis"
+Skills are most valuable when agents need specialized knowledge or multi-step workflows:
+**Domain Expertise**
+- `api-integration-expert` — Best practices for calling specific APIs (authentication, rate limiting, error handling)
+- `data-transformation` — ETL patterns, data cleaning, and validation rules
+- `code-reviewer` — Code review guidelines specific to your team's standards
+**Workflow Templates**
+- `bug-investigation` — Step-by-step debugging methodology (reproduce → isolate → test → fix)
+- `feature-implementation` — Development workflow from requirements to deployment
+- `document-generator` — Templates and formatting rules for technical documentation
+**Company-Specific Knowledge**
+- `our-architecture` — System architecture diagrams, service dependencies, and deployment processes
+- `style-guide` — Brand guidelines, writing tone, UI/UX patterns
+- `customer-onboarding` — Standard procedures and common customer questions
+**When to use skills vs. agent instructions:**
+- Use **skills** for knowledge that applies across multiple workflows or changes frequently
+- Use **agent instructions** for task-specific context that's unique to a single agent
+## Best Practices
+**Writing Effective Descriptions**
+- **Be specific and keyword-rich** — Instead of "Helps with SQL", write "Write optimized SQL queries for PostgreSQL, MySQL, and SQLite, including index recommendations and query plan analysis"
+- **Include activation triggers** — Mention specific words or phrases that should prompt the skill (e.g., "Use when the user mentions PDFs, forms, or document extraction")
+- **Keep it under 200 words** — Agents scan descriptions quickly; make every word count
+**Skill Scope and Organization**
 - **One skill per domain** — A focused `sql-expert` skill works better than a broad `database-everything` skill
+- **Limit to 5-10 skills per agent** — More skills = more decision overhead; start small and add as needed
+- **Split large skills** — If a skill exceeds 500 lines, break it into focused sub-skills
-- **Use markdown structure** — Headers, lists, and code blocks help the agent parse and follow instructions
-- **Test iteratively** — Run your workflow and check if the agent activates the skill when expected
+**Content Structure**
+- **Use markdown formatting** — Headers, lists, and code blocks help agents parse and follow instructions
+- **Provide examples** — Show input/output pairs so agents understand expected behavior
+- **Be explicit about edge cases** — Don't assume agents will infer special handling
+**Testing and Iteration**
+- **Test activation** — Run your workflow and verify the agent loads the skill when expected
+- **Check for false positives** — Make sure skills aren't activating when they shouldn't
+- **Refine descriptions** — If a skill isn't loading when needed, add more keywords to the description
 ## Learn More
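The runtime flow in the diff above relies on a `load_skill` tool invoked via standard tool-calling: the model sees only skill names and descriptions up front, and fetches a skill's full body on demand. A hypothetical sketch of such a tool, with illustrative names and shapes rather than Sim's actual schema:

```typescript
// Illustrative skill registry: descriptions stay in context, full
// content is loaded lazily when the model calls load_skill.
interface Skill {
  name: string;
  description: string; // keyword-rich, guides when the model loads it
  content: string;     // full markdown instructions, fetched on demand
}

const skills: Skill[] = [
  {
    name: 'sql-expert',
    description: 'Write optimized SQL queries for PostgreSQL, MySQL, and SQLite.',
    content: 'Use when the user asks you to write, optimize, or debug SQL queries.',
  },
];

// The load_skill tool handler: given the name the model picked, return
// the content to be appended to the conversation context.
function loadSkill(name: string): string {
  const skill = skills.find((s) => s.name === name);
  if (!skill) throw new Error(`Unknown skill: ${name}`);
  return skill.content;
}
```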


@@ -10,6 +10,21 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
color="#6366F1"
/>
{/* MANUAL-CONTENT-START:intro */}
[Airweave](https://airweave.ai/) is an AI-powered semantic search platform that helps you discover and retrieve knowledge across all your synced data sources. Built for modern teams, Airweave enables fast, relevant search results using neural, hybrid, or keyword-based strategies tailored to your needs.
With Airweave, you can:
- **Search smarter**: Use natural language queries to uncover information stored across your connected tools and databases
- **Unify your data**: Seamlessly access content from sources like code, docs, chat, emails, cloud files, and more
- **Customize retrieval**: Select between hybrid (semantic + keyword), neural, or keyword search strategies for optimal results
- **Boost recall**: Expand search queries with AI to find more comprehensive answers
- **Rerank results using AI**: Prioritize the most relevant answers with powerful language models
- **Get instant answers**: Generate clear, AI-powered responses synthesized from your data
In Sim, the Airweave integration empowers your agents to search, summarize, and extract insights from all your organization's data via a single tool. Use Airweave to drive rich, contextual knowledge retrieval within your workflows—whether answering questions, generating summaries, or supporting dynamic decision-making.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Search across your synced data sources using Airweave. Supports semantic search with hybrid, neural, or keyword retrieval strategies. Optionally generate AI-powered answers from search results.
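A retrieval request combining these options might be shaped as follows. The field names and the builder below are illustrative assumptions for this sketch, not Airweave's documented API:

```typescript
// Hypothetical request shape for a retrieval call -- field names are assumptions.
type RetrievalStrategy = 'hybrid' | 'neural' | 'keyword'

interface SearchRequest {
  query: string
  strategy: RetrievalStrategy // hybrid = semantic + keyword
  expandQuery: boolean        // AI query expansion to boost recall
  rerank: boolean             // AI reranking of results
  generateAnswer: boolean     // synthesize an AI-powered answer from results
}

function buildSearchRequest(
  query: string,
  strategy: RetrievalStrategy = 'hybrid'
): SearchRequest {
  // Hybrid retrieval with expansion, reranking, and answer generation enabled
  return { query, strategy, expandQuery: true, rerank: true, generateAnswer: true }
}

const req = buildSearchRequest('Where is our Q3 incident postmortem?')
console.log(req.strategy)
```

Defaulting to `hybrid` mirrors the common case; switch to `neural` or `keyword` when you know the query style in advance.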

Binary file added (image, 28 KiB)

Binary file added (image, 56 KiB)


@@ -5,7 +5,7 @@ import { eq } from 'drizzle-orm'
 import { type NextRequest, NextResponse } from 'next/server'
 import { generateAgentCard, generateSkillsFromWorkflow } from '@/lib/a2a/agent-card'
 import type { AgentCapabilities, AgentSkill } from '@/lib/a2a/types'
-import { checkHybridAuth } from '@/lib/auth/hybrid'
+import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
 import { getRedisClient } from '@/lib/core/config/redis'
 import { loadWorkflowFromNormalizedTables } from '@/lib/workflows/persistence/utils'
 import { checkWorkspaceAccess } from '@/lib/workspaces/permissions/utils'
@@ -40,7 +40,7 @@ export async function GET(request: NextRequest, { params }: { params: Promise<Ro
 }
 if (!agent.agent.isPublished) {
-const auth = await checkHybridAuth(request, { requireWorkflowId: false })
+const auth = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
 if (!auth.success) {
 return NextResponse.json({ error: 'Agent not published' }, { status: 404 })
 }
@@ -81,7 +81,7 @@ export async function PUT(request: NextRequest, { params }: { params: Promise<Ro
 const { agentId } = await params
 try {
-const auth = await checkHybridAuth(request, { requireWorkflowId: false })
+const auth = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
 if (!auth.success || !auth.userId) {
 return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
 }
@@ -151,7 +151,7 @@ export async function DELETE(request: NextRequest, { params }: { params: Promise
 const { agentId } = await params
 try {
-const auth = await checkHybridAuth(request, { requireWorkflowId: false })
+const auth = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
 if (!auth.success || !auth.userId) {
 return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
 }
@@ -189,7 +189,7 @@ export async function POST(request: NextRequest, { params }: { params: Promise<R
 const { agentId } = await params
 try {
-const auth = await checkHybridAuth(request, { requireWorkflowId: false })
+const auth = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
 if (!auth.success || !auth.userId) {
 logger.warn('A2A agent publish auth failed:', { error: auth.error, hasUserId: !!auth.userId })
 return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })


@@ -13,7 +13,7 @@ import { v4 as uuidv4 } from 'uuid'
 import { generateSkillsFromWorkflow } from '@/lib/a2a/agent-card'
 import { A2A_DEFAULT_CAPABILITIES } from '@/lib/a2a/constants'
 import { sanitizeAgentName } from '@/lib/a2a/utils'
-import { checkHybridAuth } from '@/lib/auth/hybrid'
+import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
 import { loadWorkflowFromNormalizedTables } from '@/lib/workflows/persistence/utils'
 import { hasValidStartBlockInState } from '@/lib/workflows/triggers/trigger-utils'
 import { getWorkspaceById } from '@/lib/workspaces/permissions/utils'
@@ -27,7 +27,7 @@ export const dynamic = 'force-dynamic'
 */
 export async function GET(request: NextRequest) {
 try {
-const auth = await checkHybridAuth(request, { requireWorkflowId: false })
+const auth = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
 if (!auth.success || !auth.userId) {
 return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
 }
@@ -87,7 +87,7 @@ export async function GET(request: NextRequest) {
 */
 export async function POST(request: NextRequest) {
 try {
-const auth = await checkHybridAuth(request, { requireWorkflowId: false })
+const auth = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
 if (!auth.success || !auth.userId) {
 return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
 }


@@ -5,7 +5,7 @@ import { and, eq } from 'drizzle-orm'
 import { jwtDecode } from 'jwt-decode'
 import { type NextRequest, NextResponse } from 'next/server'
 import { z } from 'zod'
-import { checkHybridAuth } from '@/lib/auth/hybrid'
+import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
 import { generateRequestId } from '@/lib/core/utils/request'
 import { evaluateScopeCoverage, type OAuthProvider, parseProvider } from '@/lib/oauth'
 import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
@@ -81,7 +81,7 @@ export async function GET(request: NextRequest) {
 const { provider: providerParam, workflowId, credentialId } = parseResult.data
 // Authenticate requester (supports session, API key, internal JWT)
-const authResult = await checkHybridAuth(request)
+const authResult = await checkSessionOrInternalAuth(request)
 if (!authResult.success || !authResult.userId) {
 logger.warn(`[${requestId}] Unauthenticated credentials request rejected`)
 return NextResponse.json({ error: 'User not authenticated' }, { status: 401 })


@@ -12,7 +12,7 @@ describe('OAuth Token API Routes', () => {
 const mockRefreshTokenIfNeeded = vi.fn()
 const mockGetOAuthToken = vi.fn()
 const mockAuthorizeCredentialUse = vi.fn()
-const mockCheckHybridAuth = vi.fn()
+const mockCheckSessionOrInternalAuth = vi.fn()
 const mockLogger = createMockLogger()
@@ -42,7 +42,7 @@ describe('OAuth Token API Routes', () => {
})) }))
vi.doMock('@/lib/auth/hybrid', () => ({ vi.doMock('@/lib/auth/hybrid', () => ({
checkHybridAuth: mockCheckHybridAuth, checkSessionOrInternalAuth: mockCheckSessionOrInternalAuth,
})) }))
}) })
@@ -235,7 +235,7 @@ describe('OAuth Token API Routes', () => {
 describe('credentialAccountUserId + providerId path', () => {
 it('should reject unauthenticated requests', async () => {
-mockCheckHybridAuth.mockResolvedValueOnce({
+mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
 success: false,
 error: 'Authentication required',
 })
@@ -255,30 +255,8 @@ describe('OAuth Token API Routes', () => {
 expect(mockGetOAuthToken).not.toHaveBeenCalled()
 })
-it('should reject API key authentication', async () => {
-mockCheckHybridAuth.mockResolvedValueOnce({
-success: true,
-authType: 'api_key',
-userId: 'test-user-id',
-})
-const req = createMockRequest('POST', {
-credentialAccountUserId: 'test-user-id',
-providerId: 'google',
-})
-const { POST } = await import('@/app/api/auth/oauth/token/route')
-const response = await POST(req)
-const data = await response.json()
-expect(response.status).toBe(401)
-expect(data).toHaveProperty('error', 'User not authenticated')
-expect(mockGetOAuthToken).not.toHaveBeenCalled()
-})
 it('should reject internal JWT authentication', async () => {
-mockCheckHybridAuth.mockResolvedValueOnce({
+mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
 success: true,
 authType: 'internal_jwt',
 userId: 'test-user-id',
@@ -300,7 +278,7 @@ describe('OAuth Token API Routes', () => {
 })
 it('should reject requests for other users credentials', async () => {
-mockCheckHybridAuth.mockResolvedValueOnce({
+mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
 success: true,
 authType: 'session',
 userId: 'attacker-user-id',
@@ -322,7 +300,7 @@ describe('OAuth Token API Routes', () => {
 })
 it('should allow session-authenticated users to access their own credentials', async () => {
-mockCheckHybridAuth.mockResolvedValueOnce({
+mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
 success: true,
 authType: 'session',
 userId: 'test-user-id',
@@ -345,7 +323,7 @@ describe('OAuth Token API Routes', () => {
 })
 it('should return 404 when credential not found for user', async () => {
-mockCheckHybridAuth.mockResolvedValueOnce({
+mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
 success: true,
 authType: 'session',
 userId: 'test-user-id',
@@ -373,7 +351,7 @@ describe('OAuth Token API Routes', () => {
 */
 describe('GET handler', () => {
 it('should return access token successfully', async () => {
-mockCheckHybridAuth.mockResolvedValueOnce({
+mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
 success: true,
 authType: 'session',
 userId: 'test-user-id',
@@ -402,7 +380,7 @@ describe('OAuth Token API Routes', () => {
 expect(response.status).toBe(200)
 expect(data).toHaveProperty('accessToken', 'fresh-token')
-expect(mockCheckHybridAuth).toHaveBeenCalled()
+expect(mockCheckSessionOrInternalAuth).toHaveBeenCalled()
 expect(mockGetCredential).toHaveBeenCalledWith(mockRequestId, 'credential-id', 'test-user-id')
 expect(mockRefreshTokenIfNeeded).toHaveBeenCalled()
 })
@@ -421,7 +399,7 @@ describe('OAuth Token API Routes', () => {
 })
 it('should handle authentication failure', async () => {
-mockCheckHybridAuth.mockResolvedValueOnce({
+mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
 success: false,
 error: 'Authentication required',
 })
@@ -440,7 +418,7 @@ describe('OAuth Token API Routes', () => {
 })
 it('should handle credential not found', async () => {
-mockCheckHybridAuth.mockResolvedValueOnce({
+mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
 success: true,
 authType: 'session',
 userId: 'test-user-id',
@@ -461,7 +439,7 @@ describe('OAuth Token API Routes', () => {
 })
 it('should handle missing access token', async () => {
-mockCheckHybridAuth.mockResolvedValueOnce({
+mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
 success: true,
 authType: 'session',
 userId: 'test-user-id',
@@ -487,7 +465,7 @@ describe('OAuth Token API Routes', () => {
 })
 it('should handle token refresh failure', async () => {
-mockCheckHybridAuth.mockResolvedValueOnce({
+mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
 success: true,
 authType: 'session',
 userId: 'test-user-id',


@@ -2,7 +2,7 @@ import { createLogger } from '@sim/logger'
 import { type NextRequest, NextResponse } from 'next/server'
 import { z } from 'zod'
 import { authorizeCredentialUse } from '@/lib/auth/credential-access'
-import { checkHybridAuth } from '@/lib/auth/hybrid'
+import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
 import { generateRequestId } from '@/lib/core/utils/request'
 import { getCredential, getOAuthToken, refreshTokenIfNeeded } from '@/app/api/auth/oauth/utils'
@@ -71,7 +71,7 @@ export async function POST(request: NextRequest) {
 providerId,
 })
-const auth = await checkHybridAuth(request, { requireWorkflowId: false })
+const auth = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
 if (!auth.success || auth.authType !== 'session' || !auth.userId) {
 logger.warn(`[${requestId}] Unauthorized request for credentialAccountUserId path`, {
 success: auth.success,
@@ -187,7 +187,7 @@ export async function GET(request: NextRequest) {
 const { credentialId } = parseResult.data
 // For GET requests, we only support session-based authentication
-const auth = await checkHybridAuth(request, { requireWorkflowId: false })
+const auth = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
 if (!auth.success || auth.authType !== 'session' || !auth.userId) {
 return NextResponse.json({ error: 'User not authenticated' }, { status: 401 })
 }


@@ -285,6 +285,14 @@ export async function POST(req: NextRequest) {
 apiVersion: 'preview',
 endpoint: env.AZURE_OPENAI_ENDPOINT,
 }
+} else if (providerEnv === 'azure-anthropic') {
+providerConfig = {
+provider: 'azure-anthropic',
+model: envModel,
+apiKey: env.AZURE_ANTHROPIC_API_KEY,
+apiVersion: env.AZURE_ANTHROPIC_API_VERSION,
+endpoint: env.AZURE_ANTHROPIC_ENDPOINT,
+}
 } else if (providerEnv === 'vertex') {
 providerConfig = {
 provider: 'vertex',


@@ -29,7 +29,7 @@ function setupFileApiMocks(
 }
 vi.doMock('@/lib/auth/hybrid', () => ({
-checkHybridAuth: vi.fn().mockResolvedValue({
+checkSessionOrInternalAuth: vi.fn().mockResolvedValue({
 success: authenticated,
 userId: authenticated ? 'test-user-id' : undefined,
 error: authenticated ? undefined : 'Unauthorized',


@@ -1,7 +1,7 @@
 import { createLogger } from '@sim/logger'
 import type { NextRequest } from 'next/server'
 import { NextResponse } from 'next/server'
-import { checkHybridAuth } from '@/lib/auth/hybrid'
+import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
 import type { StorageContext } from '@/lib/uploads/config'
 import { deleteFile, hasCloudStorage } from '@/lib/uploads/core/storage-service'
 import { extractStorageKey, inferContextFromKey } from '@/lib/uploads/utils/file-utils'
@@ -24,7 +24,7 @@ const logger = createLogger('FilesDeleteAPI')
 */
 export async function POST(request: NextRequest) {
 try {
-const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
+const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
 if (!authResult.success || !authResult.userId) {
 logger.warn('Unauthorized file delete request', {


@@ -1,6 +1,6 @@
 import { createLogger } from '@sim/logger'
 import { type NextRequest, NextResponse } from 'next/server'
-import { checkHybridAuth } from '@/lib/auth/hybrid'
+import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
 import type { StorageContext } from '@/lib/uploads/config'
 import { hasCloudStorage } from '@/lib/uploads/core/storage-service'
 import { verifyFileAccess } from '@/app/api/files/authorization'
@@ -12,7 +12,7 @@ export const dynamic = 'force-dynamic'
 export async function POST(request: NextRequest) {
 try {
-const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
+const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
 if (!authResult.success || !authResult.userId) {
 logger.warn('Unauthorized download URL request', {


@@ -35,7 +35,7 @@ function setupFileApiMocks(
 }
 vi.doMock('@/lib/auth/hybrid', () => ({
-checkHybridAuth: vi.fn().mockResolvedValue({
+checkInternalAuth: vi.fn().mockResolvedValue({
 success: authenticated,
 userId: authenticated ? 'test-user-id' : undefined,
 error: authenticated ? undefined : 'Unauthorized',


@@ -5,7 +5,7 @@ import path from 'path'
 import { createLogger } from '@sim/logger'
 import binaryExtensionsList from 'binary-extensions'
 import { type NextRequest, NextResponse } from 'next/server'
-import { checkHybridAuth } from '@/lib/auth/hybrid'
+import { checkInternalAuth } from '@/lib/auth/hybrid'
 import {
 secureFetchWithPinnedIP,
 validateUrlWithDNS,
@@ -66,7 +66,7 @@ export async function POST(request: NextRequest) {
 const startTime = Date.now()
 try {
-const authResult = await checkHybridAuth(request, { requireWorkflowId: true })
+const authResult = await checkInternalAuth(request, { requireWorkflowId: true })
 if (!authResult.success) {
 logger.warn('Unauthorized file parse request', {


@@ -55,7 +55,7 @@ describe('File Serve API Route', () => {
 })
 vi.doMock('@/lib/auth/hybrid', () => ({
-checkHybridAuth: vi.fn().mockResolvedValue({
+checkSessionOrInternalAuth: vi.fn().mockResolvedValue({
 success: true,
 userId: 'test-user-id',
 }),
@@ -165,7 +165,7 @@ describe('File Serve API Route', () => {
})) }))
vi.doMock('@/lib/auth/hybrid', () => ({ vi.doMock('@/lib/auth/hybrid', () => ({
checkHybridAuth: vi.fn().mockResolvedValue({ checkSessionOrInternalAuth: vi.fn().mockResolvedValue({
success: true, success: true,
userId: 'test-user-id', userId: 'test-user-id',
}), }),
@@ -226,7 +226,7 @@ describe('File Serve API Route', () => {
})) }))
vi.doMock('@/lib/auth/hybrid', () => ({ vi.doMock('@/lib/auth/hybrid', () => ({
checkHybridAuth: vi.fn().mockResolvedValue({ checkSessionOrInternalAuth: vi.fn().mockResolvedValue({
success: true, success: true,
userId: 'test-user-id', userId: 'test-user-id',
}), }),
@@ -291,7 +291,7 @@ describe('File Serve API Route', () => {
})) }))
vi.doMock('@/lib/auth/hybrid', () => ({ vi.doMock('@/lib/auth/hybrid', () => ({
checkHybridAuth: vi.fn().mockResolvedValue({ checkSessionOrInternalAuth: vi.fn().mockResolvedValue({
success: true, success: true,
userId: 'test-user-id', userId: 'test-user-id',
}), }),
@@ -350,7 +350,7 @@ describe('File Serve API Route', () => {
 for (const test of contentTypeTests) {
 it(`should serve ${test.ext} file with correct content type`, async () => {
 vi.doMock('@/lib/auth/hybrid', () => ({
-checkHybridAuth: vi.fn().mockResolvedValue({
+checkSessionOrInternalAuth: vi.fn().mockResolvedValue({
 success: true,
 userId: 'test-user-id',
 }),


@@ -2,7 +2,7 @@ import { readFile } from 'fs/promises'
 import { createLogger } from '@sim/logger'
 import type { NextRequest } from 'next/server'
 import { NextResponse } from 'next/server'
-import { checkHybridAuth } from '@/lib/auth/hybrid'
+import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
 import { CopilotFiles, isUsingCloudStorage } from '@/lib/uploads'
 import type { StorageContext } from '@/lib/uploads/config'
 import { downloadFile } from '@/lib/uploads/core/storage-service'
@@ -49,7 +49,7 @@ export async function GET(
 return await handleLocalFilePublic(fullPath)
 }
-const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
+const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
 if (!authResult.success || !authResult.userId) {
 logger.warn('Unauthorized file access attempt', {


@@ -845,6 +845,8 @@ export async function POST(req: NextRequest) {
 contextVariables,
 timeoutMs: timeout,
 requestId,
+ownerKey: `user:${auth.userId}`,
+ownerWeight: 1,
 })
 const executionTime = Date.now() - startTime


@@ -23,7 +23,16 @@ export async function POST(request: NextRequest) {
 topK,
 model,
 apiKey,
+azureEndpoint,
+azureApiVersion,
+vertexProject,
+vertexLocation,
+vertexCredential,
+bedrockAccessKeyId,
+bedrockSecretKey,
+bedrockRegion,
 workflowId,
+workspaceId,
 piiEntityTypes,
 piiMode,
 piiLanguage,
@@ -110,7 +119,18 @@ export async function POST(request: NextRequest) {
 topK,
 model,
 apiKey,
+{
+azureEndpoint,
+azureApiVersion,
+vertexProject,
+vertexLocation,
+vertexCredential,
+bedrockAccessKeyId,
+bedrockSecretKey,
+bedrockRegion,
+},
 workflowId,
+workspaceId,
 piiEntityTypes,
 piiMode,
 piiLanguage,
@@ -178,7 +198,18 @@ async function executeValidation(
 topK: string | undefined,
 model: string,
 apiKey: string | undefined,
+providerCredentials: {
+azureEndpoint?: string
+azureApiVersion?: string
+vertexProject?: string
+vertexLocation?: string
+vertexCredential?: string
+bedrockAccessKeyId?: string
+bedrockSecretKey?: string
+bedrockRegion?: string
+},
 workflowId: string | undefined,
+workspaceId: string | undefined,
 piiEntityTypes: string[] | undefined,
 piiMode: string | undefined,
 piiLanguage: string | undefined,
@@ -219,7 +250,9 @@ async function executeValidation(
 topK: topK ? Number.parseInt(topK) : 10, // Default topK is 10
 model: model,
 apiKey,
+providerCredentials,
 workflowId,
+workspaceId,
 requestId,
 })
 }


@@ -2,7 +2,7 @@ import { randomUUID } from 'crypto'
import { createLogger } from '@sim/logger' import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server' import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod' import { z } from 'zod'
import { checkHybridAuth } from '@/lib/auth/hybrid' import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { SUPPORTED_FIELD_TYPES } from '@/lib/knowledge/constants' import { SUPPORTED_FIELD_TYPES } from '@/lib/knowledge/constants'
import { createTagDefinition, getTagDefinitions } from '@/lib/knowledge/tags/service' import { createTagDefinition, getTagDefinitions } from '@/lib/knowledge/tags/service'
import { checkKnowledgeBaseAccess } from '@/app/api/knowledge/utils' import { checkKnowledgeBaseAccess } from '@/app/api/knowledge/utils'
@@ -19,19 +19,11 @@ export async function GET(req: NextRequest, { params }: { params: Promise<{ id:
 try {
 logger.info(`[${requestId}] Getting tag definitions for knowledge base ${knowledgeBaseId}`)
-const auth = await checkHybridAuth(req, { requireWorkflowId: false })
+const auth = await checkSessionOrInternalAuth(req, { requireWorkflowId: false })
 if (!auth.success) {
 return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
 }
-// Only allow session and internal JWT auth (not API key)
-if (auth.authType === 'api_key') {
-return NextResponse.json(
-{ error: 'API key auth not supported for this endpoint' },
-{ status: 401 }
-)
-}
 // For session auth, verify KB access. Internal JWT is trusted.
 if (auth.authType === 'session' && auth.userId) {
 const accessCheck = await checkKnowledgeBaseAccess(knowledgeBaseId, auth.userId)
@@ -64,19 +56,11 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
 try {
 logger.info(`[${requestId}] Creating tag definition for knowledge base ${knowledgeBaseId}`)
-const auth = await checkHybridAuth(req, { requireWorkflowId: false })
+const auth = await checkSessionOrInternalAuth(req, { requireWorkflowId: false })
 if (!auth.success) {
 return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
 }
-// Only allow session and internal JWT auth (not API key)
-if (auth.authType === 'api_key') {
-return NextResponse.json(
-{ error: 'API key auth not supported for this endpoint' },
-{ status: 401 }
-)
-}
 // For session auth, verify KB access. Internal JWT is trusted.
 if (auth.authType === 'session' && auth.userId) {
 const accessCheck = await checkKnowledgeBaseAccess(knowledgeBaseId, auth.userId)


@@ -8,7 +8,7 @@ import {
 import { createLogger } from '@sim/logger'
 import { and, eq, inArray } from 'drizzle-orm'
 import { type NextRequest, NextResponse } from 'next/server'
-import { checkHybridAuth } from '@/lib/auth/hybrid'
+import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
 import { generateRequestId } from '@/lib/core/utils/request'
 import type { TraceSpan, WorkflowExecutionLog } from '@/lib/logs/types'
@@ -23,7 +23,7 @@ export async function GET(
try { try {
const { executionId } = await params const { executionId } = await params
const authResult = await checkHybridAuth(request, { requireWorkflowId: false }) const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success || !authResult.userId) { if (!authResult.success || !authResult.userId) {
logger.warn(`[${requestId}] Unauthorized execution data access attempt for: ${executionId}`) logger.warn(`[${requestId}] Unauthorized execution data access attempt for: ${executionId}`)
return NextResponse.json( return NextResponse.json(

View File

@@ -4,7 +4,7 @@ import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
-import { checkHybridAuth } from '@/lib/auth/hybrid'
+import { checkInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { checkWorkspaceAccess } from '@/lib/workspaces/permissions/utils'
@@ -36,7 +36,7 @@ async function validateMemoryAccess(
  requestId: string,
  action: 'read' | 'write'
): Promise<{ userId: string } | { error: NextResponse }> {
-  const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
+  const authResult = await checkInternalAuth(request, { requireWorkflowId: false })
  if (!authResult.success || !authResult.userId) {
    logger.warn(`[${requestId}] Unauthorized memory ${action} attempt`)
    return {

View File

@@ -3,7 +3,7 @@ import { memory } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, isNull, like } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
-import { checkHybridAuth } from '@/lib/auth/hybrid'
+import { checkInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { checkWorkspaceAccess } from '@/lib/workspaces/permissions/utils'
@@ -16,7 +16,7 @@ export async function GET(request: NextRequest) {
  const requestId = generateRequestId()
  try {
-    const authResult = await checkHybridAuth(request)
+    const authResult = await checkInternalAuth(request)
    if (!authResult.success || !authResult.userId) {
      logger.warn(`[${requestId}] Unauthorized memory access attempt`)
      return NextResponse.json(
@@ -89,7 +89,7 @@ export async function POST(request: NextRequest) {
  const requestId = generateRequestId()
  try {
-    const authResult = await checkHybridAuth(request)
+    const authResult = await checkInternalAuth(request)
    if (!authResult.success || !authResult.userId) {
      logger.warn(`[${requestId}] Unauthorized memory creation attempt`)
      return NextResponse.json(
@@ -228,7 +228,7 @@ export async function DELETE(request: NextRequest) {
  const requestId = generateRequestId()
  try {
-    const authResult = await checkHybridAuth(request)
+    const authResult = await checkInternalAuth(request)
    if (!authResult.success || !authResult.userId) {
      logger.warn(`[${requestId}] Unauthorized memory deletion attempt`)
      return NextResponse.json(

View File

@@ -3,7 +3,7 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { createA2AClient } from '@/lib/a2a/utils'
-import { checkHybridAuth } from '@/lib/auth/hybrid'
+import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
const logger = createLogger('A2ACancelTaskAPI')
@@ -20,7 +20,7 @@ export async function POST(request: NextRequest) {
  const requestId = generateRequestId()
  try {
-    const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
+    const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
    if (!authResult.success) {
      logger.warn(`[${requestId}] Unauthorized A2A cancel task attempt`)

View File

@@ -2,7 +2,7 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { createA2AClient } from '@/lib/a2a/utils'
-import { checkHybridAuth } from '@/lib/auth/hybrid'
+import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
export const dynamic = 'force-dynamic'
@@ -20,7 +20,7 @@ export async function POST(request: NextRequest) {
  const requestId = generateRequestId()
  try {
-    const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
+    const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
    if (!authResult.success) {
      logger.warn(

View File

@@ -2,7 +2,7 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { createA2AClient } from '@/lib/a2a/utils'
-import { checkHybridAuth } from '@/lib/auth/hybrid'
+import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
export const dynamic = 'force-dynamic'
@@ -18,7 +18,7 @@ export async function POST(request: NextRequest) {
  const requestId = generateRequestId()
  try {
-    const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
+    const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
    if (!authResult.success) {
      logger.warn(`[${requestId}] Unauthorized A2A get agent card attempt: ${authResult.error}`)

View File

@@ -2,7 +2,7 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { createA2AClient } from '@/lib/a2a/utils'
-import { checkHybridAuth } from '@/lib/auth/hybrid'
+import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
export const dynamic = 'force-dynamic'
@@ -19,7 +19,7 @@ export async function POST(request: NextRequest) {
  const requestId = generateRequestId()
  try {
-    const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
+    const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
    if (!authResult.success) {
      logger.warn(

View File

@@ -3,7 +3,7 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { createA2AClient } from '@/lib/a2a/utils'
-import { checkHybridAuth } from '@/lib/auth/hybrid'
+import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
export const dynamic = 'force-dynamic'
@@ -21,7 +21,7 @@ export async function POST(request: NextRequest) {
  const requestId = generateRequestId()
  try {
-    const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
+    const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
    if (!authResult.success) {
      logger.warn(`[${requestId}] Unauthorized A2A get task attempt: ${authResult.error}`)

View File

@@ -10,7 +10,7 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { createA2AClient, extractTextContent, isTerminalState } from '@/lib/a2a/utils'
-import { checkHybridAuth } from '@/lib/auth/hybrid'
+import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
const logger = createLogger('A2AResubscribeAPI')
@@ -27,7 +27,7 @@ export async function POST(request: NextRequest) {
  const requestId = generateRequestId()
  try {
-    const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
+    const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
    if (!authResult.success) {
      logger.warn(`[${requestId}] Unauthorized A2A resubscribe attempt`)

View File

@@ -3,7 +3,7 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { createA2AClient, extractTextContent, isTerminalState } from '@/lib/a2a/utils'
-import { checkHybridAuth } from '@/lib/auth/hybrid'
+import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { validateUrlWithDNS } from '@/lib/core/security/input-validation.server'
import { generateRequestId } from '@/lib/core/utils/request'
@@ -32,7 +32,7 @@ export async function POST(request: NextRequest) {
  const requestId = generateRequestId()
  try {
-    const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
+    const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
    if (!authResult.success) {
      logger.warn(`[${requestId}] Unauthorized A2A send message attempt: ${authResult.error}`)

View File

@@ -2,7 +2,7 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { createA2AClient } from '@/lib/a2a/utils'
-import { checkHybridAuth } from '@/lib/auth/hybrid'
+import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { validateUrlWithDNS } from '@/lib/core/security/input-validation.server'
import { generateRequestId } from '@/lib/core/utils/request'
@@ -22,7 +22,7 @@ export async function POST(request: NextRequest) {
  const requestId = generateRequestId()
  try {
-    const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
+    const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
    if (!authResult.success) {
      logger.warn(`[${requestId}] Unauthorized A2A set push notification attempt`, {

View File

@@ -1,7 +1,7 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
-import { checkHybridAuth } from '@/lib/auth/hybrid'
+import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { getUserUsageLogs, type UsageLogSource } from '@/lib/billing/core/usage-log'
const logger = createLogger('UsageLogsAPI')
@@ -20,7 +20,7 @@ const QuerySchema = z.object({
 */
export async function GET(req: NextRequest) {
  try {
-    const auth = await checkHybridAuth(req, { requireWorkflowId: false })
+    const auth = await checkSessionOrInternalAuth(req, { requireWorkflowId: false })
    if (!auth.success || !auth.userId) {
      return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })

View File

@@ -325,6 +325,11 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
      requestId
    )
+    // Client-side sessions and personal API keys bill/permission-check the
+    // authenticated user, not the workspace billed account.
+    const useAuthenticatedUserAsActor =
+      isClientSession || (auth.authType === 'api_key' && auth.apiKeyType === 'personal')
    const preprocessResult = await preprocessExecution({
      workflowId,
      userId,
@@ -334,6 +339,7 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
      checkDeployment: !shouldUseDraftState,
      loggingSession,
      useDraftState: shouldUseDraftState,
+      useAuthenticatedUserAsActor,
    })
    if (!preprocessResult.success) {
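The actor-selection rule added in this hunk can be isolated as a pure predicate. Only the boolean expression below mirrors the diff; the `ExecAuth` type is an assumed stand-in for the route's real auth result:

```typescript
// Reduced stand-in for the execute route's auth result (assumption).
interface ExecAuth {
  authType: 'session' | 'internal_jwt' | 'api_key'
  apiKeyType?: 'personal' | 'workspace'
}

// Client-side sessions and personal API keys bill/permission-check the
// authenticated user; workspace API keys and internal calls keep using the
// workspace's billed account.
function shouldUseAuthenticatedUserAsActor(auth: ExecAuth, isClientSession: boolean): boolean {
  return isClientSession || (auth.authType === 'api_key' && auth.apiKeyType === 'personal')
}
```

Threading this as a flag into `preprocessExecution` keeps the billing decision at the route boundary, where the auth type is known.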

View File

@@ -74,8 +74,7 @@ function FileCard({ file, isExecutionFile = false, workspaceId }: FileCardProps)
      }
      if (isExecutionFile) {
-        const serveUrl =
-          file.url || `/api/files/serve/${encodeURIComponent(file.key)}?context=execution`
+        const serveUrl = `/api/files/serve/${encodeURIComponent(file.key)}?context=execution`
        window.open(serveUrl, '_blank')
        logger.info(`Opened execution file serve URL: ${serveUrl}`)
      } else {
@@ -88,16 +87,12 @@ function FileCard({ file, isExecutionFile = false, workspaceId }: FileCardProps)
          logger.warn(
            `Could not construct viewer URL for file: ${file.name}, falling back to serve URL`
          )
-          const serveUrl =
-            file.url || `/api/files/serve/${encodeURIComponent(file.key)}?context=workspace`
+          const serveUrl = `/api/files/serve/${encodeURIComponent(file.key)}?context=workspace`
          window.open(serveUrl, '_blank')
        }
      }
    } catch (error) {
      logger.error(`Failed to download file ${file.name}:`, error)
-      if (file.url) {
-        window.open(file.url, '_blank')
-      }
    } finally {
      setIsDownloading(false)
    }
@@ -198,8 +193,7 @@ export function FileDownload({
      }
      if (isExecutionFile) {
-        const serveUrl =
-          file.url || `/api/files/serve/${encodeURIComponent(file.key)}?context=execution`
+        const serveUrl = `/api/files/serve/${encodeURIComponent(file.key)}?context=execution`
        window.open(serveUrl, '_blank')
        logger.info(`Opened execution file serve URL: ${serveUrl}`)
      } else {
@@ -212,16 +206,12 @@ export function FileDownload({
          logger.warn(
            `Could not construct viewer URL for file: ${file.name}, falling back to serve URL`
          )
-          const serveUrl =
-            file.url || `/api/files/serve/${encodeURIComponent(file.key)}?context=workspace`
+          const serveUrl = `/api/files/serve/${encodeURIComponent(file.key)}?context=workspace`
          window.open(serveUrl, '_blank')
        }
      }
    } catch (error) {
      logger.error(`Failed to download file ${file.name}:`, error)
-      if (file.url) {
-        window.open(file.url, '_blank')
-      }
    } finally {
      setIsDownloading(false)
    }

View File

@@ -89,7 +89,7 @@ export function WorkflowSelector({
  onMouseDown={(e) => handleRemove(e, w.id)}
>
  {w.name}
-  <X className='h-3 w-3' />
+  <X className='!text-[var(--text-primary)] h-4 w-4 flex-shrink-0 opacity-50' />
</Badge>
))}
{selectedWorkflows.length > 2 && (

View File

@@ -35,6 +35,7 @@ interface CredentialSelectorProps {
  disabled?: boolean
  isPreview?: boolean
  previewValue?: any | null
+  previewContextValues?: Record<string, unknown>
}
export function CredentialSelector({
@@ -43,6 +44,7 @@ export function CredentialSelector({
  disabled = false,
  isPreview = false,
  previewValue,
+  previewContextValues,
}: CredentialSelectorProps) {
  const [showOAuthModal, setShowOAuthModal] = useState(false)
  const [editingValue, setEditingValue] = useState('')
@@ -67,7 +69,11 @@ export function CredentialSelector({
    canUseCredentialSets
  )
-  const { depsSatisfied, dependsOn } = useDependsOnGate(blockId, subBlock, { disabled, isPreview })
+  const { depsSatisfied, dependsOn } = useDependsOnGate(blockId, subBlock, {
+    disabled,
+    isPreview,
+    previewContextValues,
+  })
  const hasDependencies = dependsOn.length > 0
  const effectiveDisabled = disabled || (hasDependencies && !depsSatisfied)

View File

@@ -5,6 +5,7 @@ import { Tooltip } from '@/components/emcn'
import { SelectorCombobox } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/selector-combobox/selector-combobox'
import { useDependsOnGate } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-depends-on-gate'
import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-value'
+import { resolvePreviewContextValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/utils'
import type { SubBlockConfig } from '@/blocks/types'
import type { SelectorContext } from '@/hooks/selectors/types'
@@ -33,7 +34,9 @@ export function DocumentSelector({
    previewContextValues,
  })
  const [knowledgeBaseIdFromStore] = useSubBlockValue(blockId, 'knowledgeBaseId')
-  const knowledgeBaseIdValue = previewContextValues?.knowledgeBaseId ?? knowledgeBaseIdFromStore
+  const knowledgeBaseIdValue = previewContextValues
+    ? resolvePreviewContextValue(previewContextValues.knowledgeBaseId)
+    : knowledgeBaseIdFromStore
  const normalizedKnowledgeBaseId =
    typeof knowledgeBaseIdValue === 'string' && knowledgeBaseIdValue.trim().length > 0
      ? knowledgeBaseIdValue
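The same branching recurs across the selector components in this PR, so it is worth isolating. `resolvePreviewContextValue`'s real implementation is not shown in this diff; the stub below assumes it unwraps an optional `{ value }` wrapper and is illustrative only:

```typescript
// Hypothetical stand-in for resolvePreviewContextValue (real implementation
// not shown in the diff): unwrap a { value } wrapper if present, else pass
// the entry through.
function resolvePreviewContextValueStub(entry: unknown): unknown {
  if (entry && typeof entry === 'object' && 'value' in (entry as Record<string, unknown>)) {
    return (entry as { value: unknown }).value
  }
  return entry
}

// When a preview context object is present it wins outright, even if the key
// is absent; the old `previewContextValues?.key ?? storeValue` form instead
// fell back to the store for each missing key.
function selectKnowledgeBaseId(
  previewContextValues: Record<string, unknown> | undefined,
  storeValue: unknown
): unknown {
  return previewContextValues
    ? resolvePreviewContextValueStub(previewContextValues.knowledgeBaseId)
    : storeValue
}
```

That semantic shift is the point of the change: a preview snapshot is treated as authoritative, so live store values can no longer leak into a preview through keys the snapshot lacks.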

View File

@@ -17,6 +17,7 @@ import { formatDisplayText } from '@/app/workspace/[workspaceId]/w/[workflowId]/
import { TagDropdown } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/tag-dropdown/tag-dropdown'
import { useSubBlockInput } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-input'
import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-value'
+import { resolvePreviewContextValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/utils'
import { useAccessibleReferencePrefixes } from '@/app/workspace/[workspaceId]/w/[workflowId]/hooks/use-accessible-reference-prefixes'
import type { SubBlockConfig } from '@/blocks/types'
import { useKnowledgeBaseTagDefinitions } from '@/hooks/kb/use-knowledge-base-tag-definitions'
@@ -77,7 +78,9 @@ export function DocumentTagEntry({
  })
  const [knowledgeBaseIdFromStore] = useSubBlockValue(blockId, 'knowledgeBaseId')
-  const knowledgeBaseIdValue = previewContextValues?.knowledgeBaseId ?? knowledgeBaseIdFromStore
+  const knowledgeBaseIdValue = previewContextValues
+    ? resolvePreviewContextValue(previewContextValues.knowledgeBaseId)
+    : knowledgeBaseIdFromStore
  const knowledgeBaseId =
    typeof knowledgeBaseIdValue === 'string' && knowledgeBaseIdValue.trim().length > 0
      ? knowledgeBaseIdValue

View File

@@ -9,6 +9,7 @@ import { SelectorCombobox } from '@/app/workspace/[workspaceId]/w/[workflowId]/c
import { useDependsOnGate } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-depends-on-gate'
import { useForeignCredential } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-foreign-credential'
import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-value'
+import { resolvePreviewContextValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/utils'
import { getBlock } from '@/blocks/registry'
import type { SubBlockConfig } from '@/blocks/types'
import { isDependency } from '@/blocks/utils'
@@ -62,42 +63,56 @@ export function FileSelectorInput({
  const [domainValueFromStore] = useSubBlockValue(blockId, 'domain')
-  const connectedCredential = previewContextValues?.credential ?? blockValues.credential
-  const domainValue = previewContextValues?.domain ?? domainValueFromStore
+  const connectedCredential = previewContextValues
+    ? resolvePreviewContextValue(previewContextValues.credential)
+    : blockValues.credential
+  const domainValue = previewContextValues
+    ? resolvePreviewContextValue(previewContextValues.domain)
+    : domainValueFromStore
  const teamIdValue = useMemo(
    () =>
-      previewContextValues?.teamId ??
-      resolveDependencyValue('teamId', blockValues, canonicalIndex, canonicalModeOverrides),
-    [previewContextValues?.teamId, blockValues, canonicalIndex, canonicalModeOverrides]
+      previewContextValues
+        ? resolvePreviewContextValue(previewContextValues.teamId)
+        : resolveDependencyValue('teamId', blockValues, canonicalIndex, canonicalModeOverrides),
+    [previewContextValues, blockValues, canonicalIndex, canonicalModeOverrides]
  )
  const siteIdValue = useMemo(
    () =>
-      previewContextValues?.siteId ??
-      resolveDependencyValue('siteId', blockValues, canonicalIndex, canonicalModeOverrides),
-    [previewContextValues?.siteId, blockValues, canonicalIndex, canonicalModeOverrides]
+      previewContextValues
+        ? resolvePreviewContextValue(previewContextValues.siteId)
+        : resolveDependencyValue('siteId', blockValues, canonicalIndex, canonicalModeOverrides),
+    [previewContextValues, blockValues, canonicalIndex, canonicalModeOverrides]
  )
  const collectionIdValue = useMemo(
    () =>
-      previewContextValues?.collectionId ??
-      resolveDependencyValue('collectionId', blockValues, canonicalIndex, canonicalModeOverrides),
-    [previewContextValues?.collectionId, blockValues, canonicalIndex, canonicalModeOverrides]
+      previewContextValues
+        ? resolvePreviewContextValue(previewContextValues.collectionId)
+        : resolveDependencyValue(
+            'collectionId',
+            blockValues,
+            canonicalIndex,
+            canonicalModeOverrides
+          ),
+    [previewContextValues, blockValues, canonicalIndex, canonicalModeOverrides]
  )
  const projectIdValue = useMemo(
    () =>
-      previewContextValues?.projectId ??
-      resolveDependencyValue('projectId', blockValues, canonicalIndex, canonicalModeOverrides),
-    [previewContextValues?.projectId, blockValues, canonicalIndex, canonicalModeOverrides]
+      previewContextValues
+        ? resolvePreviewContextValue(previewContextValues.projectId)
+        : resolveDependencyValue('projectId', blockValues, canonicalIndex, canonicalModeOverrides),
+    [previewContextValues, blockValues, canonicalIndex, canonicalModeOverrides]
  )
  const planIdValue = useMemo(
    () =>
-      previewContextValues?.planId ??
-      resolveDependencyValue('planId', blockValues, canonicalIndex, canonicalModeOverrides),
-    [previewContextValues?.planId, blockValues, canonicalIndex, canonicalModeOverrides]
+      previewContextValues
+        ? resolvePreviewContextValue(previewContextValues.planId)
+        : resolveDependencyValue('planId', blockValues, canonicalIndex, canonicalModeOverrides),
+    [previewContextValues, blockValues, canonicalIndex, canonicalModeOverrides]
  )
  const normalizedCredentialId =

View File

@@ -6,6 +6,7 @@ import { SelectorCombobox } from '@/app/workspace/[workspaceId]/w/[workflowId]/c
 import { useDependsOnGate } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-depends-on-gate'
 import { useForeignCredential } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-foreign-credential'
 import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-value'
+import { resolvePreviewContextValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/utils'
 import type { SubBlockConfig } from '@/blocks/types'
 import { resolveSelectorForSubBlock } from '@/hooks/selectors/resolution'
 import { useCollaborativeWorkflow } from '@/hooks/use-collaborative-workflow'
@@ -17,6 +18,7 @@ interface FolderSelectorInputProps {
   disabled?: boolean
   isPreview?: boolean
   previewValue?: any | null
+  previewContextValues?: Record<string, unknown>
 }

 export function FolderSelectorInput({
@@ -25,9 +27,13 @@ export function FolderSelectorInput({
   disabled = false,
   isPreview = false,
   previewValue,
+  previewContextValues,
 }: FolderSelectorInputProps) {
   const [storeValue] = useSubBlockValue(blockId, subBlock.id)
-  const [connectedCredential] = useSubBlockValue(blockId, 'credential')
+  const [credentialFromStore] = useSubBlockValue(blockId, 'credential')
+  const connectedCredential = previewContextValues
+    ? resolvePreviewContextValue(previewContextValues.credential)
+    : credentialFromStore
   const { collaborativeSetSubblockValue } = useCollaborativeWorkflow()
   const { activeWorkflowId } = useWorkflowRegistry()
   const [selectedFolderId, setSelectedFolderId] = useState<string>('')
@@ -47,7 +53,11 @@ export function FolderSelectorInput({
   )

   // Central dependsOn gating
-  const { finalDisabled } = useDependsOnGate(blockId, subBlock, { disabled, isPreview })
+  const { finalDisabled } = useDependsOnGate(blockId, subBlock, {
+    disabled,
+    isPreview,
+    previewContextValues,
+  })

   // Get the current value from the store or prop value if in preview mode
   useEffect(() => {

View File

@@ -7,6 +7,7 @@ import { formatDisplayText } from '@/app/workspace/[workspaceId]/w/[workflowId]/
 import { TagDropdown } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/tag-dropdown/tag-dropdown'
 import { useSubBlockInput } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-input'
 import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-value'
+import { resolvePreviewContextValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/utils'
 import { useAccessibleReferencePrefixes } from '@/app/workspace/[workspaceId]/w/[workflowId]/hooks/use-accessible-reference-prefixes'
 import { useWorkflowState } from '@/hooks/queries/workflows'
@@ -37,6 +38,8 @@ interface InputMappingProps {
   isPreview?: boolean
   previewValue?: Record<string, unknown>
   disabled?: boolean
+  /** Sub-block values from the preview context for resolving sibling sub-block values */
+  previewContextValues?: Record<string, unknown>
 }

 /**
@@ -50,9 +53,13 @@ export function InputMapping({
   isPreview = false,
   previewValue,
   disabled = false,
+  previewContextValues,
 }: InputMappingProps) {
   const [mapping, setMapping] = useSubBlockValue(blockId, subBlockId)
-  const [selectedWorkflowId] = useSubBlockValue(blockId, 'workflowId')
+  const [storeWorkflowId] = useSubBlockValue(blockId, 'workflowId')
+  const selectedWorkflowId = previewContextValues
+    ? resolvePreviewContextValue(previewContextValues.workflowId)
+    : storeWorkflowId
   const inputController = useSubBlockInput({
     blockId,

View File

@@ -17,6 +17,7 @@ import { type FilterFieldType, getOperatorsForFieldType } from '@/lib/knowledge/
 import { formatDisplayText } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/formatted-text'
 import { TagDropdown } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/tag-dropdown/tag-dropdown'
 import { useSubBlockInput } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-input'
+import { resolvePreviewContextValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/utils'
 import { useAccessibleReferencePrefixes } from '@/app/workspace/[workspaceId]/w/[workflowId]/hooks/use-accessible-reference-prefixes'
 import type { SubBlockConfig } from '@/blocks/types'
 import { useKnowledgeBaseTagDefinitions } from '@/hooks/kb/use-knowledge-base-tag-definitions'
@@ -69,7 +70,9 @@ export function KnowledgeTagFilters({
   const overlayRefs = useRef<Record<string, HTMLDivElement>>({})
   const [knowledgeBaseIdFromStore] = useSubBlockValue(blockId, 'knowledgeBaseId')
-  const knowledgeBaseIdValue = previewContextValues?.knowledgeBaseId ?? knowledgeBaseIdFromStore
+  const knowledgeBaseIdValue = previewContextValues
+    ? resolvePreviewContextValue(previewContextValues.knowledgeBaseId)
+    : knowledgeBaseIdFromStore
   const knowledgeBaseId =
     typeof knowledgeBaseIdValue === 'string' && knowledgeBaseIdValue.trim().length > 0
       ? knowledgeBaseIdValue

View File

@@ -6,6 +6,7 @@ import { cn } from '@/lib/core/utils/cn'
 import { LongInput } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/long-input/long-input'
 import { ShortInput } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/short-input/short-input'
 import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-value'
+import { resolvePreviewContextValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/utils'
 import type { SubBlockConfig } from '@/blocks/types'
 import { useMcpTools } from '@/hooks/mcp/use-mcp-tools'
 import { formatParameterLabel } from '@/tools/params'
@@ -18,6 +19,7 @@ interface McpDynamicArgsProps {
   disabled?: boolean
   isPreview?: boolean
   previewValue?: any
+  previewContextValues?: Record<string, unknown>
 }

 /**
@@ -47,12 +49,19 @@ export function McpDynamicArgs({
   disabled = false,
   isPreview = false,
   previewValue,
+  previewContextValues,
 }: McpDynamicArgsProps) {
   const params = useParams()
   const workspaceId = params.workspaceId as string
   const { mcpTools, isLoading } = useMcpTools(workspaceId)
-  const [selectedTool] = useSubBlockValue(blockId, 'tool')
-  const [cachedSchema] = useSubBlockValue(blockId, '_toolSchema')
+  const [toolFromStore] = useSubBlockValue(blockId, 'tool')
+  const selectedTool = previewContextValues
+    ? resolvePreviewContextValue(previewContextValues.tool)
+    : toolFromStore
+  const [schemaFromStore] = useSubBlockValue(blockId, '_toolSchema')
+  const cachedSchema = previewContextValues
+    ? resolvePreviewContextValue(previewContextValues._toolSchema)
+    : schemaFromStore
   const [toolArgs, setToolArgs] = useSubBlockValue(blockId, subBlockId)
   const selectedToolConfig = mcpTools.find((tool) => tool.id === selectedTool)

View File

@@ -4,6 +4,7 @@ import { useEffect, useMemo, useState } from 'react'
 import { useParams } from 'next/navigation'
 import { Combobox } from '@/components/emcn/components'
 import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-value'
+import { resolvePreviewContextValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/utils'
 import type { SubBlockConfig } from '@/blocks/types'
 import { useMcpTools } from '@/hooks/mcp/use-mcp-tools'
@@ -13,6 +14,7 @@ interface McpToolSelectorProps {
   disabled?: boolean
   isPreview?: boolean
   previewValue?: string | null
+  previewContextValues?: Record<string, unknown>
 }

 export function McpToolSelector({
@@ -21,6 +23,7 @@ export function McpToolSelector({
   disabled = false,
   isPreview = false,
   previewValue,
+  previewContextValues,
 }: McpToolSelectorProps) {
   const params = useParams()
   const workspaceId = params.workspaceId as string
@@ -31,7 +34,10 @@ export function McpToolSelector({
   const [storeValue, setStoreValue] = useSubBlockValue(blockId, subBlock.id)
   const [, setSchemaCache] = useSubBlockValue(blockId, '_toolSchema')
-  const [serverValue] = useSubBlockValue(blockId, 'server')
+  const [serverFromStore] = useSubBlockValue(blockId, 'server')
+  const serverValue = previewContextValues
+    ? resolvePreviewContextValue(previewContextValues.server)
+    : serverFromStore
   const label = subBlock.placeholder || 'Select tool'

View File

@@ -9,6 +9,7 @@ import { SelectorCombobox } from '@/app/workspace/[workspaceId]/w/[workflowId]/c
 import { useDependsOnGate } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-depends-on-gate'
 import { useForeignCredential } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-foreign-credential'
 import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-value'
+import { resolvePreviewContextValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/utils'
 import { getBlock } from '@/blocks/registry'
 import type { SubBlockConfig } from '@/blocks/types'
 import { resolveSelectorForSubBlock } from '@/hooks/selectors/resolution'
@@ -55,14 +56,19 @@ export function ProjectSelectorInput({
     return (workflowValues as Record<string, Record<string, unknown>>)[blockId] || {}
   })
-  const connectedCredential = previewContextValues?.credential ?? blockValues.credential
-  const jiraDomain = previewContextValues?.domain ?? jiraDomainFromStore
+  const connectedCredential = previewContextValues
+    ? resolvePreviewContextValue(previewContextValues.credential)
+    : blockValues.credential
+  const jiraDomain = previewContextValues
+    ? resolvePreviewContextValue(previewContextValues.domain)
+    : jiraDomainFromStore
   const linearTeamId = useMemo(
     () =>
-      previewContextValues?.teamId ??
-      resolveDependencyValue('teamId', blockValues, canonicalIndex, canonicalModeOverrides),
-    [previewContextValues?.teamId, blockValues, canonicalIndex, canonicalModeOverrides]
+      previewContextValues
+        ? resolvePreviewContextValue(previewContextValues.teamId)
+        : resolveDependencyValue('teamId', blockValues, canonicalIndex, canonicalModeOverrides),
+    [previewContextValues, blockValues, canonicalIndex, canonicalModeOverrides]
   )
   const serviceId = subBlock.serviceId || ''

View File

@@ -8,6 +8,7 @@ import { buildCanonicalIndex, resolveDependencyValue } from '@/lib/workflows/sub
 import { SelectorCombobox } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/selector-combobox/selector-combobox'
 import { useDependsOnGate } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-depends-on-gate'
 import { useForeignCredential } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-foreign-credential'
+import { resolvePreviewContextValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/utils'
 import { getBlock } from '@/blocks/registry'
 import type { SubBlockConfig } from '@/blocks/types'
 import { resolveSelectorForSubBlock, type SelectorResolution } from '@/hooks/selectors/resolution'
@@ -66,9 +67,12 @@ export function SheetSelectorInput({
     [blockValues, canonicalIndex, canonicalModeOverrides]
   )
-  const connectedCredential = previewContextValues?.credential ?? connectedCredentialFromStore
+  const connectedCredential = previewContextValues
+    ? resolvePreviewContextValue(previewContextValues.credential)
+    : connectedCredentialFromStore
   const spreadsheetId = previewContextValues
-    ? (previewContextValues.spreadsheetId ?? previewContextValues.manualSpreadsheetId)
+    ? (resolvePreviewContextValue(previewContextValues.spreadsheetId) ??
+      resolvePreviewContextValue(previewContextValues.manualSpreadsheetId))
     : spreadsheetIdFromStore
   const normalizedCredentialId =

View File

@@ -130,39 +130,52 @@ export function SkillInput({
           onOpenChange={setOpen}
         />
-        {selectedSkills.length > 0 && (
-          <div className='flex flex-wrap gap-[4px]'>
-            {selectedSkills.map((stored) => {
-              const fullSkill = workspaceSkills.find((s) => s.id === stored.skillId)
-              return (
-                <div
-                  key={stored.skillId}
-                  className='flex cursor-pointer items-center gap-[4px] rounded-[4px] border border-[var(--border-1)] bg-[var(--surface-5)] px-[6px] py-[2px] font-medium text-[12px] text-[var(--text-secondary)] hover:bg-[var(--surface-6)]'
-                  onClick={() => {
-                    if (fullSkill && !disabled && !isPreview) {
-                      setEditingSkill(fullSkill)
-                    }
-                  }}
-                >
-                  <AgentSkillsIcon className='h-[10px] w-[10px] text-[var(--text-tertiary)]' />
-                  <span className='max-w-[140px] truncate'>{resolveSkillName(stored)}</span>
-                  {!disabled && !isPreview && (
-                    <button
-                      type='button'
-                      onClick={(e) => {
-                        e.stopPropagation()
-                        handleRemove(stored.skillId)
-                      }}
-                      className='ml-[2px] rounded-[2px] p-[1px] text-[var(--text-tertiary)] hover:bg-[var(--surface-7)] hover:text-[var(--text-secondary)]'
-                    >
-                      <XIcon className='h-[10px] w-[10px]' />
-                    </button>
-                  )}
-                </div>
-              )
-            })}
-          </div>
-        )}
+        {selectedSkills.length > 0 &&
+          selectedSkills.map((stored) => {
+            const fullSkill = workspaceSkills.find((s) => s.id === stored.skillId)
+            return (
+              <div
+                key={stored.skillId}
+                className='group relative flex flex-col overflow-hidden rounded-[4px] border border-[var(--border-1)] transition-all duration-200 ease-in-out'
+              >
+                <div
+                  className='flex cursor-pointer items-center justify-between gap-[8px] rounded-t-[4px] bg-[var(--surface-4)] px-[8px] py-[6.5px]'
+                  onClick={() => {
+                    if (fullSkill && !disabled && !isPreview) {
+                      setEditingSkill(fullSkill)
+                    }
+                  }}
+                >
+                  <div className='flex min-w-0 flex-1 items-center gap-[8px]'>
+                    <div
+                      className='flex h-[16px] w-[16px] flex-shrink-0 items-center justify-center rounded-[4px]'
+                      style={{ backgroundColor: '#e0e0e0' }}
+                    >
+                      <AgentSkillsIcon className='h-[10px] w-[10px] text-[#333]' />
+                    </div>
+                    <span className='truncate font-medium text-[13px] text-[var(--text-primary)]'>
+                      {resolveSkillName(stored)}
+                    </span>
+                  </div>
+                  <div className='flex flex-shrink-0 items-center gap-[8px]'>
+                    {!disabled && !isPreview && (
+                      <button
+                        type='button'
+                        onClick={(e) => {
+                          e.stopPropagation()
+                          handleRemove(stored.skillId)
+                        }}
+                        className='flex items-center justify-center text-[var(--text-tertiary)] transition-colors hover:text-[var(--text-primary)]'
+                        aria-label='Remove skill'
+                      >
+                        <XIcon className='h-[13px] w-[13px]' />
+                      </button>
+                    )}
+                  </div>
+                </div>
+              </div>
+            )
+          })}
       </div>
       <SkillModal
<SkillModal <SkillModal

View File

@@ -8,6 +8,7 @@ import { SelectorCombobox } from '@/app/workspace/[workspaceId]/w/[workflowId]/c
 import { useDependsOnGate } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-depends-on-gate'
 import { useForeignCredential } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-foreign-credential'
 import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-value'
+import { resolvePreviewContextValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/utils'
 import type { SubBlockConfig } from '@/blocks/types'
 import type { SelectorContext, SelectorKey } from '@/hooks/selectors/types'
@@ -58,9 +59,15 @@ export function SlackSelectorInput({
   const [botToken] = useSubBlockValue(blockId, 'botToken')
   const [connectedCredential] = useSubBlockValue(blockId, 'credential')
-  const effectiveAuthMethod = previewContextValues?.authMethod ?? authMethod
-  const effectiveBotToken = previewContextValues?.botToken ?? botToken
-  const effectiveCredential = previewContextValues?.credential ?? connectedCredential
+  const effectiveAuthMethod = previewContextValues
+    ? resolvePreviewContextValue(previewContextValues.authMethod)
+    : authMethod
+  const effectiveBotToken = previewContextValues
+    ? resolvePreviewContextValue(previewContextValues.botToken)
+    : botToken
+  const effectiveCredential = previewContextValues
+    ? resolvePreviewContextValue(previewContextValues.credential)
+    : connectedCredential
   const [_selectedValue, setSelectedValue] = useState<string | null>(null)
   const serviceId = subBlock.serviceId || ''

View File

@@ -332,6 +332,7 @@ function FolderSelectorSyncWrapper({
         dependsOn: uiComponent.dependsOn,
       }}
       disabled={disabled}
+      previewContextValues={previewContextValues}
     />
   </GenericSyncWrapper>
 )
) )

View File

@@ -797,6 +797,7 @@ function SubBlockComponent({
           disabled={isDisabled}
           isPreview={isPreview}
           previewValue={previewValue}
+          previewContextValues={isPreview ? subBlockValues : undefined}
         />
       )
@@ -832,6 +833,7 @@ function SubBlockComponent({
           disabled={isDisabled}
           isPreview={isPreview}
           previewValue={previewValue}
+          previewContextValues={isPreview ? subBlockValues : undefined}
         />
       )
@@ -843,6 +845,7 @@ function SubBlockComponent({
           disabled={isDisabled}
           isPreview={isPreview}
           previewValue={previewValue}
+          previewContextValues={isPreview ? subBlockValues : undefined}
         />
       )
@@ -865,6 +868,7 @@ function SubBlockComponent({
           disabled={isDisabled}
           isPreview={isPreview}
           previewValue={previewValue as any}
+          previewContextValues={isPreview ? subBlockValues : undefined}
         />
       )
@@ -876,6 +880,7 @@ function SubBlockComponent({
           disabled={isDisabled}
           isPreview={isPreview}
           previewValue={previewValue as any}
+          previewContextValues={isPreview ? subBlockValues : undefined}
         />
       )
@@ -887,6 +892,7 @@ function SubBlockComponent({
           disabled={isDisabled}
           isPreview={isPreview}
           previewValue={previewValue as any}
+          previewContextValues={isPreview ? subBlockValues : undefined}
         />
       )
@@ -911,6 +917,7 @@ function SubBlockComponent({
           isPreview={isPreview}
           previewValue={previewValue as any}
          disabled={isDisabled}
+          previewContextValues={isPreview ? subBlockValues : undefined}
         />
       )
@@ -946,6 +953,7 @@ function SubBlockComponent({
           disabled={isDisabled}
           isPreview={isPreview}
           previewValue={previewValue}
+          previewContextValues={isPreview ? subBlockValues : undefined}
         />
       )
@@ -979,6 +987,7 @@ function SubBlockComponent({
           disabled={isDisabled}
           isPreview={isPreview}
           previewValue={previewValue as any}
+          previewContextValues={isPreview ? subBlockValues : undefined}
         />
       )
@@ -990,6 +999,7 @@ function SubBlockComponent({
           disabled={isDisabled}
           isPreview={isPreview}
           previewValue={previewValue}
+          previewContextValues={isPreview ? subBlockValues : undefined}
         />
       )

View File

@@ -0,0 +1,18 @@
/**
* Extracts the raw value from a preview context entry.
*
* @remarks
* In the sub-block preview context, values are wrapped as `{ value: T }` objects
* (the full sub-block state). In the tool-input preview context, values are already
* raw. This function normalizes both cases to return the underlying value.
*
* @param raw - The preview context entry, which may be a raw value or a `{ value: T }` wrapper
* @returns The unwrapped value, or `null` if the input is nullish
*/
export function resolvePreviewContextValue(raw: unknown): unknown {
if (raw === null || raw === undefined) return null
if (typeof raw === 'object' && !Array.isArray(raw) && 'value' in raw) {
return (raw as Record<string, unknown>).value ?? null
}
return raw
}
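To make the `@remarks` concrete, a standalone sketch of how the helper behaves (it duplicates the function above verbatim so the snippet runs on its own):

```typescript
// Verbatim copy of resolvePreviewContextValue from the new utils file.
function resolvePreviewContextValue(raw: unknown): unknown {
  if (raw === null || raw === undefined) return null
  if (typeof raw === 'object' && !Array.isArray(raw) && 'value' in raw) {
    return (raw as Record<string, unknown>).value ?? null
  }
  return raw
}

// Wrapped sub-block state entry -> unwrapped value
console.log(resolvePreviewContextValue({ value: 'cred_123' })) // 'cred_123'
// Already-raw tool-input entry passes through unchanged
console.log(resolvePreviewContextValue('cred_123')) // 'cred_123'
// Nullish input, and a wrapper holding a nullish value, both normalize to null
console.log(resolvePreviewContextValue(undefined)) // null
console.log(resolvePreviewContextValue({ value: undefined })) // null
```

Note the explicit `!Array.isArray(raw)` guard: an array entry (e.g. a list of stored skills) is treated as a raw value and returned as-is, even though arrays are objects.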

View File

@@ -6,6 +6,7 @@ import {
   isSubBlockVisibleForMode,
 } from '@/lib/workflows/subblocks/visibility'
 import type { BlockConfig, SubBlockConfig, SubBlockType } from '@/blocks/types'
+import { usePermissionConfig } from '@/hooks/use-permission-config'
 import { useWorkflowDiffStore } from '@/stores/workflow-diff'
 import { mergeSubblockState } from '@/stores/workflows/utils'
 import { useWorkflowStore } from '@/stores/workflows/workflow/store'
@@ -35,6 +36,7 @@ export function useEditorSubblockLayout(
   const blockDataFromStore = useWorkflowStore(
     useCallback((state) => state.blocks?.[blockId]?.data, [blockId])
   )
+  const { config: permissionConfig } = usePermissionConfig()

   return useMemo(() => {
     // Guard against missing config or block selection
@@ -100,6 +102,9 @@ export function useEditorSubblockLayout(
     const visibleSubBlocks = (config.subBlocks || []).filter((block) => {
       if (block.hidden) return false

+      // Hide skill-input subblock when skills are disabled via permissions
+      if (block.type === 'skill-input' && permissionConfig.disableSkills) return false
+
       // Check required feature if specified - declarative feature gating
       if (!isSubBlockFeatureEnabled(block)) return false
@@ -149,5 +154,6 @@ export function useEditorSubblockLayout(
     activeWorkflowId,
     isSnapshotView,
     blockDataFromStore,
+    permissionConfig.disableSkills,
   ])
 }
} }

View File

@@ -40,6 +40,7 @@ import { useCustomTools } from '@/hooks/queries/custom-tools'
import { useMcpServers, useMcpToolsQuery } from '@/hooks/queries/mcp' import { useMcpServers, useMcpToolsQuery } from '@/hooks/queries/mcp'
import { useCredentialName } from '@/hooks/queries/oauth-credentials' import { useCredentialName } from '@/hooks/queries/oauth-credentials'
import { useReactivateSchedule, useScheduleInfo } from '@/hooks/queries/schedules' import { useReactivateSchedule, useScheduleInfo } from '@/hooks/queries/schedules'
import { useSkills } from '@/hooks/queries/skills'
import { useDeployChildWorkflow } from '@/hooks/queries/workflows' import { useDeployChildWorkflow } from '@/hooks/queries/workflows'
 import { useSelectorDisplayName } from '@/hooks/use-selector-display-name'
 import { useVariablesStore } from '@/stores/panel'
@@ -618,6 +619,48 @@ const SubBlockRow = memo(function SubBlockRow({
     return `${toolNames[0]}, ${toolNames[1]} +${toolNames.length - 2}`
   }, [subBlock?.type, rawValue, customTools, workspaceId])
+
+  /**
+   * Hydrates skill references to display names.
+   * Resolves skill IDs to their current names from the skills query.
+   */
+  const { data: workspaceSkills = [] } = useSkills(workspaceId || '')
+  const skillsDisplayValue = useMemo(() => {
+    if (subBlock?.type !== 'skill-input' || !Array.isArray(rawValue) || rawValue.length === 0) {
+      return null
+    }
+    interface StoredSkill {
+      skillId: string
+      name?: string
+    }
+    const skillNames = rawValue
+      .map((skill: StoredSkill) => {
+        if (!skill || typeof skill !== 'object') return null
+        // Priority 1: Resolve skill name from the skills query (fresh data)
+        if (skill.skillId) {
+          const foundSkill = workspaceSkills.find((s) => s.id === skill.skillId)
+          if (foundSkill?.name) return foundSkill.name
+        }
+        // Priority 2: Fall back to stored name (for deleted skills)
+        if (skill.name && typeof skill.name === 'string') return skill.name
+        // Priority 3: Use skillId as last resort
+        if (skill.skillId) return skill.skillId
+        return null
+      })
+      .filter((name): name is string => !!name)
+    if (skillNames.length === 0) return null
+    if (skillNames.length === 1) return skillNames[0]
+    if (skillNames.length === 2) return `${skillNames[0]}, ${skillNames[1]}`
+    return `${skillNames[0]}, ${skillNames[1]} +${skillNames.length - 2}`
+  }, [subBlock?.type, rawValue, workspaceSkills])

   const isPasswordField = subBlock?.password === true
   const maskedValue = isPasswordField && value && value !== '-' ? '•••' : null
@@ -627,6 +670,7 @@ const SubBlockRow = memo(function SubBlockRow({
     dropdownLabel ||
     variablesDisplayValue ||
     toolsDisplayValue ||
+    skillsDisplayValue ||
     knowledgeBaseDisplayName ||
     workflowSelectionName ||
     mcpServerDisplayName ||
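The hunk above reduces to a three-tier fallback (live query data → stored name → raw ID) plus a truncating formatter. A standalone sketch with simplified shapes in place of the real component types (names here are illustrative, not the actual hooks or props):

```typescript
// Simplified shapes standing in for the component's real types.
interface StoredSkill {
  skillId: string
  name?: string
}

interface WorkspaceSkill {
  id: string
  name: string
}

function resolveSkillNames(stored: StoredSkill[], workspaceSkills: WorkspaceSkill[]): string[] {
  return stored
    .map((skill) => {
      const fresh = workspaceSkills.find((s) => s.id === skill.skillId)
      if (fresh?.name) return fresh.name // Priority 1: live query data
      if (skill.name) return skill.name // Priority 2: stored fallback (deleted skills)
      return skill.skillId || null // Priority 3: raw ID
    })
    .filter((name): name is string => !!name)
}

// Truncates long lists to "first, second +N" like the display value above.
function formatSkillNames(names: string[]): string | null {
  if (names.length === 0) return null
  if (names.length <= 2) return names.join(', ')
  return `${names[0]}, ${names[1]} +${names.length - 2}`
}
```

Keeping the stored name as a fallback means a reference to a deleted skill still renders something meaningful instead of a bare ID.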


@@ -784,8 +784,12 @@ function PreviewEditorContent({
     ? childWorkflowSnapshotState
     : childWorkflowState
   const resolvedIsLoadingChildWorkflow = isExecutionMode ? false : isLoadingChildWorkflow
+  const isBlockNotExecuted = isExecutionMode && !executionData
   const isMissingChildWorkflow =
-    Boolean(childWorkflowId) && !resolvedIsLoadingChildWorkflow && !resolvedChildWorkflowState
+    Boolean(childWorkflowId) &&
+    !isBlockNotExecuted &&
+    !resolvedIsLoadingChildWorkflow &&
+    !resolvedChildWorkflowState

   /** Drills down into the child workflow or opens it in a new tab */
   const handleExpandChildWorkflow = useCallback(() => {
@@ -1192,7 +1196,7 @@ function PreviewEditorContent({
     <div ref={subBlocksRef} className='subblocks-section flex flex-1 flex-col overflow-hidden'>
       <div className='flex-1 overflow-y-auto overflow-x-hidden'>
         {/* Not Executed Banner - shown when in execution mode but block wasn't executed */}
-        {isExecutionMode && !executionData && (
+        {isBlockNotExecuted && (
           <div className='flex min-w-0 flex-col gap-[8px] overflow-hidden border-[var(--border)] border-b px-[12px] py-[10px]'>
             <div className='flex items-center justify-between'>
               <Badge variant='gray-secondary' size='sm' dot>
@@ -1419,9 +1423,11 @@ function PreviewEditorContent({
       ) : (
         <div className='flex h-full items-center justify-center bg-[var(--surface-3)]'>
           <span className='text-[13px] text-[var(--text-tertiary)]'>
-            {isMissingChildWorkflow
-              ? DELETED_WORKFLOW_LABEL
-              : 'Unable to load preview'}
+            {isBlockNotExecuted
+              ? 'Not Executed'
+              : isMissingChildWorkflow
+                ? DELETED_WORKFLOW_LABEL
+                : 'Unable to load preview'}
           </span>
         </div>
       )}


@@ -27,6 +27,13 @@ interface SkillModalProps {
 const KEBAB_CASE_REGEX = /^[a-z0-9]+(-[a-z0-9]+)*$/

+interface FieldErrors {
+  name?: string
+  description?: string
+  content?: string
+  general?: string
+}
+
 export function SkillModal({
   open,
   onOpenChange,
@@ -43,7 +50,7 @@ export function SkillModal({
   const [name, setName] = useState('')
   const [description, setDescription] = useState('')
   const [content, setContent] = useState('')
-  const [formError, setFormError] = useState('')
+  const [errors, setErrors] = useState<FieldErrors>({})
   const [saving, setSaving] = useState(false)

   useEffect(() => {
@@ -57,7 +64,7 @@ export function SkillModal({
         setDescription('')
         setContent('')
       }
-      setFormError('')
+      setErrors({})
     }
   }, [open, initialValues])
@@ -71,24 +78,26 @@ export function SkillModal({
   }, [name, description, content, initialValues])

   const handleSave = async () => {
+    const newErrors: FieldErrors = {}
     if (!name.trim()) {
-      setFormError('Name is required')
-      return
-    }
-    if (name.length > 64) {
-      setFormError('Name must be 64 characters or less')
-      return
-    }
-    if (!KEBAB_CASE_REGEX.test(name)) {
-      setFormError('Name must be kebab-case (e.g. my-skill)')
-      return
+      newErrors.name = 'Name is required'
+    } else if (name.length > 64) {
+      newErrors.name = 'Name must be 64 characters or less'
+    } else if (!KEBAB_CASE_REGEX.test(name)) {
+      newErrors.name = 'Name must be kebab-case (e.g. my-skill)'
     }
     if (!description.trim()) {
-      setFormError('Description is required')
-      return
+      newErrors.description = 'Description is required'
     }
     if (!content.trim()) {
-      setFormError('Content is required')
+      newErrors.content = 'Content is required'
+    }
+    if (Object.keys(newErrors).length > 0) {
+      setErrors(newErrors)
       return
     }
@@ -113,7 +122,7 @@ export function SkillModal({
         error instanceof Error && error.message.includes('already exists')
           ? error.message
           : 'Failed to save skill. Please try again.'
-      setFormError(message)
+      setErrors({ general: message })
     } finally {
       setSaving(false)
     }
@@ -135,12 +144,17 @@ export function SkillModal({
             value={name}
             onChange={(e) => {
               setName(e.target.value)
-              if (formError) setFormError('')
+              if (errors.name || errors.general)
+                setErrors((prev) => ({ ...prev, name: undefined, general: undefined }))
             }}
           />
-          <span className='text-[11px] text-[var(--text-muted)]'>
-            Lowercase letters, numbers, and hyphens (e.g. my-skill)
-          </span>
+          {errors.name ? (
+            <p className='text-[12px] text-[var(--text-error)]'>{errors.name}</p>
+          ) : (
+            <span className='text-[11px] text-[var(--text-muted)]'>
+              Lowercase letters, numbers, and hyphens (e.g. my-skill)
+            </span>
+          )}
         </div>

         <div className='flex flex-col gap-[4px]'>
@@ -153,10 +167,14 @@ export function SkillModal({
             value={description}
             onChange={(e) => {
               setDescription(e.target.value)
-              if (formError) setFormError('')
+              if (errors.description || errors.general)
+                setErrors((prev) => ({ ...prev, description: undefined, general: undefined }))
             }}
             maxLength={1024}
           />
+          {errors.description && (
+            <p className='text-[12px] text-[var(--text-error)]'>{errors.description}</p>
+          )}
         </div>

         <div className='flex flex-col gap-[4px]'>
@@ -169,13 +187,19 @@ export function SkillModal({
             value={content}
             onChange={(e: ChangeEvent<HTMLTextAreaElement>) => {
               setContent(e.target.value)
-              if (formError) setFormError('')
+              if (errors.content || errors.general)
+                setErrors((prev) => ({ ...prev, content: undefined, general: undefined }))
             }}
             className='min-h-[200px] resize-y font-mono text-[13px]'
           />
+          {errors.content && (
+            <p className='text-[12px] text-[var(--text-error)]'>{errors.content}</p>
+          )}
         </div>
-        {formError && <span className='text-[11px] text-[var(--text-error)]'>{formError}</span>}
+        {errors.general && (
+          <p className='text-[12px] text-[var(--text-error)]'>{errors.general}</p>
+        )}
       </div>
     </ModalBody>
     <ModalFooter className='items-center justify-between'>
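The refactor above moves from return-on-first-error to collect-then-commit validation, so every invalid field is flagged in a single pass. A minimal sketch of that pattern with the React state handling stripped out (the function name is illustrative):

```typescript
// Per-field error map, mirroring the FieldErrors shape in the diff.
interface FieldErrors {
  name?: string
  description?: string
  content?: string
  general?: string
}

const KEBAB_CASE_REGEX = /^[a-z0-9]+(-[a-z0-9]+)*$/

// Collect every field error in one pass; the caller commits the whole map at once.
function validateSkill(name: string, description: string, content: string): FieldErrors {
  const errors: FieldErrors = {}
  if (!name.trim()) errors.name = 'Name is required'
  else if (name.length > 64) errors.name = 'Name must be 64 characters or less'
  else if (!KEBAB_CASE_REGEX.test(name)) errors.name = 'Name must be kebab-case (e.g. my-skill)'
  if (!description.trim()) errors.description = 'Description is required'
  if (!content.trim()) errors.content = 'Content is required'
  return errors
}
```

Committing the whole map at once is what lets the modal render an inline error under each invalid field instead of a single shared message.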


@@ -1,11 +1,10 @@
 import { createLogger } from '@sim/logger'
 import { AgentIcon } from '@/components/icons'
-import { isHosted } from '@/lib/core/config/feature-flags'
 import type { BlockConfig } from '@/blocks/types'
 import { AuthMode } from '@/blocks/types'
+import { getApiKeyCondition } from '@/blocks/utils'
 import {
   getBaseModelProviders,
-  getHostedModels,
   getMaxTemperature,
   getProviderIcon,
   getReasoningEffortValuesForModel,
@@ -17,15 +16,6 @@ import {
   providers,
   supportsTemperature,
 } from '@/providers/utils'
-
-const getCurrentOllamaModels = () => {
-  return useProvidersStore.getState().providers.ollama.models
-}
-
-const getCurrentVLLMModels = () => {
-  return useProvidersStore.getState().providers.vllm.models
-}
 import { useProvidersStore } from '@/stores/providers'
 import type { ToolResponse } from '@/tools/types'
@@ -164,6 +154,7 @@ Return ONLY the JSON array.`,
       type: 'dropdown',
       placeholder: 'Select reasoning effort...',
       options: [
+        { label: 'auto', id: 'auto' },
         { label: 'low', id: 'low' },
         { label: 'medium', id: 'medium' },
         { label: 'high', id: 'high' },
@@ -173,9 +164,12 @@ Return ONLY the JSON array.`,
       const { useSubBlockStore } = await import('@/stores/workflows/subblock/store')
       const { useWorkflowRegistry } = await import('@/stores/workflows/registry/store')
+      const autoOption = { label: 'auto', id: 'auto' }
+
       const activeWorkflowId = useWorkflowRegistry.getState().activeWorkflowId
       if (!activeWorkflowId) {
         return [
+          autoOption,
           { label: 'low', id: 'low' },
           { label: 'medium', id: 'medium' },
           { label: 'high', id: 'high' },
@@ -188,6 +182,7 @@ Return ONLY the JSON array.`,
       if (!modelValue) {
         return [
+          autoOption,
           { label: 'low', id: 'low' },
           { label: 'medium', id: 'medium' },
           { label: 'high', id: 'high' },
@@ -197,15 +192,16 @@ Return ONLY the JSON array.`,
       const validOptions = getReasoningEffortValuesForModel(modelValue)
       if (!validOptions) {
         return [
+          autoOption,
           { label: 'low', id: 'low' },
           { label: 'medium', id: 'medium' },
           { label: 'high', id: 'high' },
         ]
       }
-      return validOptions.map((opt) => ({ label: opt, id: opt }))
+      return [autoOption, ...validOptions.map((opt) => ({ label: opt, id: opt }))]
     },
-    value: () => 'medium',
+    mode: 'advanced',
     condition: {
       field: 'model',
       value: MODELS_WITH_REASONING_EFFORT,
@@ -217,6 +213,7 @@ Return ONLY the JSON array.`,
       type: 'dropdown',
       placeholder: 'Select verbosity...',
       options: [
+        { label: 'auto', id: 'auto' },
         { label: 'low', id: 'low' },
         { label: 'medium', id: 'medium' },
         { label: 'high', id: 'high' },
@@ -226,9 +223,12 @@ Return ONLY the JSON array.`,
       const { useSubBlockStore } = await import('@/stores/workflows/subblock/store')
       const { useWorkflowRegistry } = await import('@/stores/workflows/registry/store')
+      const autoOption = { label: 'auto', id: 'auto' }
+
       const activeWorkflowId = useWorkflowRegistry.getState().activeWorkflowId
       if (!activeWorkflowId) {
         return [
+          autoOption,
           { label: 'low', id: 'low' },
           { label: 'medium', id: 'medium' },
           { label: 'high', id: 'high' },
@@ -241,6 +241,7 @@ Return ONLY the JSON array.`,
       if (!modelValue) {
         return [
+          autoOption,
           { label: 'low', id: 'low' },
           { label: 'medium', id: 'medium' },
           { label: 'high', id: 'high' },
@@ -250,15 +251,16 @@ Return ONLY the JSON array.`,
       const validOptions = getVerbosityValuesForModel(modelValue)
       if (!validOptions) {
         return [
+          autoOption,
           { label: 'low', id: 'low' },
           { label: 'medium', id: 'medium' },
           { label: 'high', id: 'high' },
         ]
       }
-      return validOptions.map((opt) => ({ label: opt, id: opt }))
+      return [autoOption, ...validOptions.map((opt) => ({ label: opt, id: opt }))]
     },
-    value: () => 'medium',
+    mode: 'advanced',
     condition: {
       field: 'model',
       value: MODELS_WITH_VERBOSITY,
@@ -270,6 +272,7 @@ Return ONLY the JSON array.`,
       type: 'dropdown',
       placeholder: 'Select thinking level...',
       options: [
+        { label: 'none', id: 'none' },
        { label: 'minimal', id: 'minimal' },
        { label: 'low', id: 'low' },
        { label: 'medium', id: 'medium' },
@@ -281,12 +284,11 @@ Return ONLY the JSON array.`,
       const { useSubBlockStore } = await import('@/stores/workflows/subblock/store')
       const { useWorkflowRegistry } = await import('@/stores/workflows/registry/store')
+      const noneOption = { label: 'none', id: 'none' }
+
       const activeWorkflowId = useWorkflowRegistry.getState().activeWorkflowId
       if (!activeWorkflowId) {
-        return [
-          { label: 'low', id: 'low' },
-          { label: 'high', id: 'high' },
-        ]
+        return [noneOption, { label: 'low', id: 'low' }, { label: 'high', id: 'high' }]
       }

       const workflowValues = useSubBlockStore.getState().workflowValues[activeWorkflowId]
@@ -294,23 +296,17 @@ Return ONLY the JSON array.`,
       const modelValue = blockValues?.model as string
       if (!modelValue) {
-        return [
-          { label: 'low', id: 'low' },
-          { label: 'high', id: 'high' },
-        ]
+        return [noneOption, { label: 'low', id: 'low' }, { label: 'high', id: 'high' }]
       }

       const validOptions = getThinkingLevelsForModel(modelValue)
       if (!validOptions) {
-        return [
-          { label: 'low', id: 'low' },
-          { label: 'high', id: 'high' },
-        ]
+        return [noneOption, { label: 'low', id: 'low' }, { label: 'high', id: 'high' }]
       }
-      return validOptions.map((opt) => ({ label: opt, id: opt }))
+      return [noneOption, ...validOptions.map((opt) => ({ label: opt, id: opt }))]
     },
-    value: () => 'high',
+    mode: 'advanced',
     condition: {
       field: 'model',
       value: MODELS_WITH_THINKING,
@@ -333,11 +329,11 @@ Return ONLY the JSON array.`,
     id: 'azureApiVersion',
     title: 'Azure API Version',
     type: 'short-input',
-    placeholder: '2024-07-01-preview',
+    placeholder: 'Enter API version',
     connectionDroppable: false,
     condition: {
       field: 'model',
-      value: providers['azure-openai'].models,
+      value: [...providers['azure-openai'].models, ...providers['azure-anthropic'].models],
     },
   },
   {
@@ -401,6 +397,16 @@ Return ONLY the JSON array.`,
       value: providers.bedrock.models,
     },
   },
+  {
+    id: 'apiKey',
+    title: 'API Key',
+    type: 'short-input',
+    placeholder: 'Enter your API key',
+    password: true,
+    connectionDroppable: false,
+    required: true,
+    condition: getApiKeyCondition(),
+  },
   {
     id: 'tools',
     title: 'Tools',
@@ -413,32 +419,6 @@ Return ONLY the JSON array.`,
     type: 'skill-input',
     defaultValue: [],
   },
-  {
-    id: 'apiKey',
-    title: 'API Key',
-    type: 'short-input',
-    placeholder: 'Enter your API key',
-    password: true,
-    connectionDroppable: false,
-    required: true,
-    // Hide API key for hosted models, Ollama models, vLLM models, Vertex models (uses OAuth), and Bedrock (uses AWS credentials)
-    condition: isHosted
-      ? {
-          field: 'model',
-          value: [...getHostedModels(), ...providers.vertex.models, ...providers.bedrock.models],
-          not: true, // Show for all models EXCEPT those listed
-        }
-      : () => ({
-          field: 'model',
-          value: [
-            ...getCurrentOllamaModels(),
-            ...getCurrentVLLMModels(),
-            ...providers.vertex.models,
-            ...providers.bedrock.models,
-          ],
-          not: true, // Show for all models EXCEPT Ollama, vLLM, Vertex, and Bedrock models
-        }),
-  },
   {
     id: 'memoryType',
     title: 'Memory',
@@ -493,6 +473,7 @@ Return ONLY the JSON array.`,
     min: 0,
     max: 1,
     defaultValue: 0.3,
+    mode: 'advanced',
     condition: () => ({
       field: 'model',
       value: (() => {
@@ -510,6 +491,7 @@ Return ONLY the JSON array.`,
     min: 0,
     max: 2,
     defaultValue: 0.3,
+    mode: 'advanced',
     condition: () => ({
       field: 'model',
       value: (() => {
@@ -525,6 +507,7 @@ Return ONLY the JSON array.`,
     title: 'Max Output Tokens',
     type: 'short-input',
     placeholder: 'Enter max tokens (e.g., 4096)...',
+    mode: 'advanced',
   },
   {
     id: 'responseFormat',
@@ -715,7 +698,7 @@ Example 3 (Array Input):
     },
     model: { type: 'string', description: 'AI model to use' },
     apiKey: { type: 'string', description: 'Provider API key' },
-    azureEndpoint: { type: 'string', description: 'Azure OpenAI endpoint URL' },
+    azureEndpoint: { type: 'string', description: 'Azure endpoint URL' },
     azureApiVersion: { type: 'string', description: 'Azure API version' },
     vertexProject: { type: 'string', description: 'Google Cloud project ID for Vertex AI' },
     vertexLocation: { type: 'string', description: 'Google Cloud location for Vertex AI' },
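The dropdown changes above all follow one pattern: prepend a sentinel option ('auto' for effort/verbosity, 'none' for thinking level) and fall back to a default list when the model's supported values can't be resolved. A hedged sketch of that pattern (function and parameter names are illustrative, not the block API):

```typescript
// Minimal option shape, matching the { label, id } pairs in the diff.
interface Option {
  label: string
  id: string
}

// Prepend the sentinel, using the defaults whenever no per-model list is available.
function buildEffortOptions(
  sentinel: Option,
  defaults: string[],
  validForModel: string[] | null
): Option[] {
  const values = validForModel ?? defaults
  return [sentinel, ...values.map((v) => ({ label: v, id: v }))]
}
```

Putting the sentinel first (rather than hardcoding a `value: () => 'medium'` default, which the diff removes) lets "auto"/"none" act as the unconfigured state while the field lives under advanced mode.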


@@ -76,8 +76,9 @@ export const TranslateBlock: BlockConfig = {
       vertexProject: params.vertexProject,
       vertexLocation: params.vertexLocation,
       vertexCredential: params.vertexCredential,
-      bedrockRegion: params.bedrockRegion,
+      bedrockAccessKeyId: params.bedrockAccessKeyId,
       bedrockSecretKey: params.bedrockSecretKey,
+      bedrockRegion: params.bedrockRegion,
     }),
   },
 },


@@ -208,7 +208,7 @@ export interface SubBlockConfig {
           not?: boolean
         }
       }
-    | (() => {
+    | ((values?: Record<string, unknown>) => {
         field: string
         value: string | number | boolean | Array<string | number | boolean>
         not?: boolean
@@ -261,7 +261,7 @@ export interface SubBlockConfig {
           not?: boolean
         }
       }
-    | (() => {
+    | ((values?: Record<string, unknown>) => {
         field: string
         value: string | number | boolean | Array<string | number | boolean>
         not?: boolean
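The type widening above lets a condition be either a static object or a callback that now receives the block's current subblock values. A simplified sketch of how a consumer might resolve either form (`resolveCondition` is illustrative, not an actual export):

```typescript
// Simplified condition shape, matching the fields in the interface above.
interface Condition {
  field: string
  value: string | number | boolean | Array<string | number | boolean>
  not?: boolean
}

// Static object or value-aware callback, as in the widened union type.
type ConditionConfig = Condition | ((values?: Record<string, unknown>) => Condition)

// A consumer normalizes both forms to a plain condition before evaluating it.
function resolveCondition(cond: ConditionConfig, values?: Record<string, unknown>): Condition {
  return typeof cond === 'function' ? cond(values) : cond
}
```

Because the parameter is optional, every existing zero-argument condition callback keeps compiling unchanged.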


@@ -1,6 +1,6 @@
 import { isHosted } from '@/lib/core/config/feature-flags'
 import type { BlockOutput, OutputFieldDefinition, SubBlockConfig } from '@/blocks/types'
-import { getHostedModels, providers } from '@/providers/utils'
+import { getHostedModels, getProviderFromModel, providers } from '@/providers/utils'
 import { useProvidersStore } from '@/stores/providers/store'

 /**
@@ -48,11 +48,54 @@ const getCurrentOllamaModels = () => {
   return useProvidersStore.getState().providers.ollama.models
 }

-/**
- * Helper to get current vLLM models from store
- */
-const getCurrentVLLMModels = () => {
-  return useProvidersStore.getState().providers.vllm.models
+function buildModelVisibilityCondition(model: string, shouldShow: boolean) {
+  if (!model) {
+    return { field: 'model', value: '__no_model_selected__' }
+  }
+
+  return shouldShow ? { field: 'model', value: model } : { field: 'model', value: model, not: true }
+}
+
+function shouldRequireApiKeyForModel(model: string): boolean {
+  const normalizedModel = model.trim().toLowerCase()
+  if (!normalizedModel) return false
+
+  const hostedModels = getHostedModels()
+  const isHostedModel = hostedModels.some(
+    (hostedModel) => hostedModel.toLowerCase() === normalizedModel
+  )
+  if (isHosted && isHostedModel) return false
+
+  if (normalizedModel.startsWith('vertex/') || normalizedModel.startsWith('bedrock/')) {
+    return false
+  }
+
+  if (normalizedModel.startsWith('vllm/')) {
+    return false
+  }
+
+  const currentOllamaModels = getCurrentOllamaModels()
+  if (currentOllamaModels.some((ollamaModel) => ollamaModel.toLowerCase() === normalizedModel)) {
+    return false
+  }
+
+  if (!isHosted) {
+    try {
+      const providerId = getProviderFromModel(model)
+      if (
+        providerId === 'ollama' ||
+        providerId === 'vllm' ||
+        providerId === 'vertex' ||
+        providerId === 'bedrock'
+      ) {
+        return false
+      }
+    } catch {
+      // If model resolution fails, fall through and require an API key.
+    }
+  }
+
+  return true
 }

 /**
@@ -60,27 +103,16 @@ const getCurrentVLLMModels = () => {
  * Handles hosted vs self-hosted environments and excludes providers that don't need API key.
  */
 export function getApiKeyCondition() {
-  return isHosted
-    ? {
-        field: 'model',
-        value: [...getHostedModels(), ...providers.vertex.models, ...providers.bedrock.models],
-        not: true,
-      }
-    : () => ({
-        field: 'model',
-        value: [
-          ...getCurrentOllamaModels(),
-          ...getCurrentVLLMModels(),
-          ...providers.vertex.models,
-          ...providers.bedrock.models,
-        ],
-        not: true,
-      })
+  return (values?: Record<string, unknown>) => {
+    const model = typeof values?.model === 'string' ? values.model : ''
+    const shouldShow = shouldRequireApiKeyForModel(model)
+    return buildModelVisibilityCondition(model, shouldShow)
+  }
 }

 /**
  * Returns the standard provider credential subblocks used by LLM-based blocks.
- * This includes: Vertex AI OAuth, API Key, Azure OpenAI, Vertex AI config, and Bedrock config.
+ * This includes: Vertex AI OAuth, API Key, Azure (OpenAI + Anthropic), Vertex AI config, and Bedrock config.
  *
  * Usage: Spread into your block's subBlocks array after block-specific fields
  */
@@ -111,25 +143,25 @@ export function getProviderCredentialSubBlocks(): SubBlockConfig[] {
   },
   {
     id: 'azureEndpoint',
-    title: 'Azure OpenAI Endpoint',
+    title: 'Azure Endpoint',
     type: 'short-input',
     password: true,
-    placeholder: 'https://your-resource.openai.azure.com',
+    placeholder: 'https://your-resource.services.ai.azure.com',
     connectionDroppable: false,
     condition: {
       field: 'model',
-      value: providers['azure-openai'].models,
+      value: [...providers['azure-openai'].models, ...providers['azure-anthropic'].models],
     },
   },
   {
     id: 'azureApiVersion',
     title: 'Azure API Version',
     type: 'short-input',
-    placeholder: '2024-07-01-preview',
+    placeholder: 'Enter API version',
     connectionDroppable: false,
     condition: {
       field: 'model',
-      value: providers['azure-openai'].models,
+      value: [...providers['azure-openai'].models, ...providers['azure-anthropic'].models],
     },
   },
   {
@@ -202,7 +234,7 @@ export function getProviderCredentialSubBlocks(): SubBlockConfig[] {
  */
 export const PROVIDER_CREDENTIAL_INPUTS = {
   apiKey: { type: 'string', description: 'Provider API key' },
-  azureEndpoint: { type: 'string', description: 'Azure OpenAI endpoint URL' },
+  azureEndpoint: { type: 'string', description: 'Azure endpoint URL' },
   azureApiVersion: { type: 'string', description: 'Azure API version' },
   vertexProject: { type: 'string', description: 'Google Cloud project ID for Vertex AI' },
   vertexLocation: { type: 'string', description: 'Google Cloud location for Vertex AI' },
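The new predicate above can be read as a pure function of the model name plus environment flags: hosted models on the hosted deployment, and vertex/bedrock/vllm/ollama models, never prompt for a key. A store-free sketch under that assumption (`requiresApiKey` and its options shape are illustrative, not the real signature, which reads the flags from config and the providers store):

```typescript
// Environment inputs passed explicitly instead of read from config/store.
interface ApiKeyRuleOptions {
  isHosted: boolean
  hostedModels: string[]
  ollamaModels: string[]
}

function requiresApiKey(model: string, opts: ApiKeyRuleOptions): boolean {
  const normalized = model.trim().toLowerCase()
  if (!normalized) return false // No model selected: nothing to require yet.

  // Hosted deployment covers its own hosted models.
  const isHostedModel = opts.hostedModels.some((m) => m.toLowerCase() === normalized)
  if (opts.isHosted && isHostedModel) return false

  // Vertex uses OAuth, Bedrock uses AWS credentials, vLLM/Ollama are local.
  if (/^(vertex|bedrock|vllm)\//.test(normalized)) return false
  if (opts.ollamaModels.some((m) => m.toLowerCase() === normalized)) return false

  return true
}
```

Compared to the removed list-based `not: true` conditions, a per-model predicate avoids rebuilding exclusion arrays every time the Ollama/vLLM model lists change.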


@@ -5468,18 +5468,18 @@ export function AgentSkillsIcon(props: SVGProps<SVGSVGElement>) {
     <svg
       {...props}
       xmlns='http://www.w3.org/2000/svg'
-      width='24'
-      height='24'
-      viewBox='0 0 32 32'
+      width='16'
+      height='16'
+      viewBox='0 0 16 16'
       fill='none'
     >
-      <path d='M16 0.5L29.4234 8.25V23.75L16 31.5L2.57661 23.75V8.25L16 0.5Z' fill='currentColor' />
       <path
-        d='M16 6L24.6603 11V21L16 26L7.33975 21V11L16 6Z'
-        fill='currentColor'
-        stroke='var(--background, white)'
-        strokeWidth='3'
+        d='M8 1L14.0622 4.5V11.5L8 15L1.93782 11.5V4.5L8 1Z'
+        stroke='currentColor'
+        strokeWidth='1.5'
+        fill='none'
       />
+      <path d='M8 4.5L11 6.25V9.75L8 11.5L5 9.75V6.25L8 4.5Z' fill='currentColor' />
     </svg>
   )
 }


@@ -326,6 +326,7 @@ export class AgentBlockHandler implements BlockHandler {
_context: { _context: {
workflowId: ctx.workflowId, workflowId: ctx.workflowId,
workspaceId: ctx.workspaceId, workspaceId: ctx.workspaceId,
userId: ctx.userId,
isDeployedContext: ctx.isDeployedContext, isDeployedContext: ctx.isDeployedContext,
}, },
}, },
@@ -377,6 +378,9 @@ export class AgentBlockHandler implements BlockHandler {
if (ctx.workflowId) { if (ctx.workflowId) {
params.workflowId = ctx.workflowId params.workflowId = ctx.workflowId
} }
if (ctx.userId) {
params.userId = ctx.userId
}
const url = buildAPIUrl('/api/tools/custom', params) const url = buildAPIUrl('/api/tools/custom', params)
const response = await fetch(url.toString(), { const response = await fetch(url.toString(), {
@@ -487,7 +491,9 @@ export class AgentBlockHandler implements BlockHandler {
usageControl: tool.usageControl || 'auto', usageControl: tool.usageControl || 'auto',
executeFunction: async (callParams: Record<string, any>) => { executeFunction: async (callParams: Record<string, any>) => {
const headers = await buildAuthHeaders() const headers = await buildAuthHeaders()
const execUrl = buildAPIUrl('/api/mcp/tools/execute') const execParams: Record<string, string> = {}
if (ctx.userId) execParams.userId = ctx.userId
const execUrl = buildAPIUrl('/api/mcp/tools/execute', execParams)
const execResponse = await fetch(execUrl.toString(), { const execResponse = await fetch(execUrl.toString(), {
method: 'POST', method: 'POST',
@@ -596,6 +602,7 @@ export class AgentBlockHandler implements BlockHandler {
serverId, serverId,
workspaceId: ctx.workspaceId, workspaceId: ctx.workspaceId,
workflowId: ctx.workflowId, workflowId: ctx.workflowId,
...(ctx.userId ? { userId: ctx.userId } : {}),
}) })
const maxAttempts = 2 const maxAttempts = 2
@@ -670,7 +677,9 @@ export class AgentBlockHandler implements BlockHandler {
usageControl: tool.usageControl || 'auto', usageControl: tool.usageControl || 'auto',
executeFunction: async (callParams: Record<string, any>) => { executeFunction: async (callParams: Record<string, any>) => {
const headers = await buildAuthHeaders() const headers = await buildAuthHeaders()
const execUrl = buildAPIUrl('/api/mcp/tools/execute') const discoverExecParams: Record<string, string> = {}
if (ctx.userId) discoverExecParams.userId = ctx.userId
const execUrl = buildAPIUrl('/api/mcp/tools/execute', discoverExecParams)
const execResponse = await fetch(execUrl.toString(), { const execResponse = await fetch(execUrl.toString(), {
method: 'POST', method: 'POST',
@@ -906,24 +915,17 @@ export class AgentBlockHandler implements BlockHandler {
} }
} }
// Find first system message
const firstSystemIndex = messages.findIndex((msg) => msg.role === 'system') const firstSystemIndex = messages.findIndex((msg) => msg.role === 'system')
if (firstSystemIndex === -1) { if (firstSystemIndex === -1) {
// No system message exists - add at position 0
messages.unshift({ role: 'system', content }) messages.unshift({ role: 'system', content })
} else if (firstSystemIndex === 0) { } else if (firstSystemIndex === 0) {
// System message already at position 0 - replace it
// Explicit systemPrompt parameter takes precedence over memory/messages
messages[0] = { role: 'system', content } messages[0] = { role: 'system', content }
} else { } else {
// System message exists but not at position 0 - move it to position 0
// and update with new content
messages.splice(firstSystemIndex, 1) messages.splice(firstSystemIndex, 1)
messages.unshift({ role: 'system', content }) messages.unshift({ role: 'system', content })
} }
// Remove any additional system messages (keep only the first one)
for (let i = messages.length - 1; i >= 1; i--) { for (let i = messages.length - 1; i >= 1; i--) {
if (messages[i].role === 'system') { if (messages[i].role === 'system') {
messages.splice(i, 1) messages.splice(i, 1)
@@ -989,13 +991,14 @@ export class AgentBlockHandler implements BlockHandler {
workflowId: ctx.workflowId, workflowId: ctx.workflowId,
workspaceId: ctx.workspaceId, workspaceId: ctx.workspaceId,
stream: streaming, stream: streaming,
messages, messages: messages?.map(({ executionId, ...msg }) => msg),
environmentVariables: ctx.environmentVariables || {}, environmentVariables: ctx.environmentVariables || {},
workflowVariables: ctx.workflowVariables || {}, workflowVariables: ctx.workflowVariables || {},
blockData, blockData,
blockNameMapping, blockNameMapping,
reasoningEffort: inputs.reasoningEffort, reasoningEffort: inputs.reasoningEffort,
verbosity: inputs.verbosity, verbosity: inputs.verbosity,
thinkingLevel: inputs.thinkingLevel,
} }
} }
@@ -1055,6 +1058,7 @@ export class AgentBlockHandler implements BlockHandler {
responseFormat: providerRequest.responseFormat,
workflowId: providerRequest.workflowId,
workspaceId: ctx.workspaceId,
userId: ctx.userId,
stream: providerRequest.stream,
messages: 'messages' in providerRequest ? providerRequest.messages : undefined,
environmentVariables: ctx.environmentVariables || {},
@@ -1064,6 +1068,7 @@ export class AgentBlockHandler implements BlockHandler {
isDeployedContext: ctx.isDeployedContext,
reasoningEffort: providerRequest.reasoningEffort,
verbosity: providerRequest.verbosity,
thinkingLevel: providerRequest.thinkingLevel,
})
return this.processProviderResponse(response, block, responseFormat)
@@ -1081,8 +1086,6 @@ export class AgentBlockHandler implements BlockHandler {
logger.info(`[${requestId}] Resolving Vertex AI credential: ${credentialId}`)
// Get the credential - we need to find the owner
// Since we're in a workflow context, we can query the credential directly
const credential = await db.query.account.findFirst({
where: eq(account.id, credentialId),
})
@@ -1091,7 +1094,6 @@ export class AgentBlockHandler implements BlockHandler {
throw new Error(`Vertex AI credential not found: ${credentialId}`)
}
// Refresh the token if needed
const { accessToken } = await refreshTokenIfNeeded(requestId, credential, credentialId)
if (!accessToken) {


@@ -34,6 +34,7 @@ export interface AgentInputs {
bedrockRegion?: string
reasoningEffort?: string
verbosity?: string
thinkingLevel?: string
}
export interface ToolInput {


@@ -72,6 +72,7 @@ export class ApiBlockHandler implements BlockHandler {
workflowId: ctx.workflowId,
workspaceId: ctx.workspaceId,
executionId: ctx.executionId,
userId: ctx.userId,
isDeployedContext: ctx.isDeployedContext,
},
},


@@ -48,6 +48,7 @@ export async function evaluateConditionExpression(
_context: {
workflowId: ctx.workflowId,
workspaceId: ctx.workspaceId,
userId: ctx.userId,
isDeployedContext: ctx.isDeployedContext,
},
},


@@ -104,7 +104,7 @@ export class EvaluatorBlockHandler implements BlockHandler {
}
try {
-const url = buildAPIUrl('/api/providers')
+const url = buildAPIUrl('/api/providers', ctx.userId ? { userId: ctx.userId } : {})
const providerRequest: Record<string, any> = {
provider: providerId,
@@ -121,26 +121,17 @@ export class EvaluatorBlockHandler implements BlockHandler {
temperature: EVALUATOR.DEFAULT_TEMPERATURE,
apiKey: finalApiKey,
azureEndpoint: inputs.azureEndpoint,
azureApiVersion: inputs.azureApiVersion,
vertexProject: evaluatorConfig.vertexProject,
vertexLocation: evaluatorConfig.vertexLocation,
bedrockAccessKeyId: evaluatorConfig.bedrockAccessKeyId,
bedrockSecretKey: evaluatorConfig.bedrockSecretKey,
bedrockRegion: evaluatorConfig.bedrockRegion,
workflowId: ctx.workflowId,
workspaceId: ctx.workspaceId,
}
if (providerId === 'vertex') {
providerRequest.vertexProject = evaluatorConfig.vertexProject
providerRequest.vertexLocation = evaluatorConfig.vertexLocation
}
if (providerId === 'azure-openai') {
providerRequest.azureEndpoint = inputs.azureEndpoint
providerRequest.azureApiVersion = inputs.azureApiVersion
}
if (providerId === 'bedrock') {
providerRequest.bedrockAccessKeyId = evaluatorConfig.bedrockAccessKeyId
providerRequest.bedrockSecretKey = evaluatorConfig.bedrockSecretKey
providerRequest.bedrockRegion = evaluatorConfig.bedrockRegion
}
const response = await fetch(url.toString(), {
method: 'POST',
headers: await buildAuthHeaders(),
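The change above replaces per-provider `if` blocks with unconditionally passing the provider-specific fields; providers simply ignore fields they don't use, and `undefined` values drop out when the request is serialized. A minimal sketch under assumed names (`ProviderRequest` and `buildRequest` are illustrative, not the repository's API):

```typescript
// Optional provider-specific fields are always present on the request type;
// JSON.stringify silently drops keys whose value is undefined.
interface ProviderRequest {
  provider: string
  vertexProject?: string
  azureEndpoint?: string
  bedrockRegion?: string
}

function buildRequest(provider: string, cfg: Partial<ProviderRequest>): ProviderRequest {
  return {
    provider,
    vertexProject: cfg.vertexProject,
    azureEndpoint: cfg.azureEndpoint,
    bedrockRegion: cfg.bedrockRegion,
  }
}
```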


@@ -39,6 +39,7 @@ export class FunctionBlockHandler implements BlockHandler {
_context: {
workflowId: ctx.workflowId,
workspaceId: ctx.workspaceId,
userId: ctx.userId,
isDeployedContext: ctx.isDeployedContext,
},
},


@@ -66,6 +66,7 @@ export class GenericBlockHandler implements BlockHandler {
workflowId: ctx.workflowId,
workspaceId: ctx.workspaceId,
executionId: ctx.executionId,
userId: ctx.userId,
isDeployedContext: ctx.isDeployedContext,
},
},


@@ -605,6 +605,7 @@ export class HumanInTheLoopBlockHandler implements BlockHandler {
_context: {
workflowId: ctx.workflowId,
workspaceId: ctx.workspaceId,
userId: ctx.userId,
isDeployedContext: ctx.isDeployedContext,
},
blockData: blockDataWithPause,


@@ -80,6 +80,7 @@ export class RouterBlockHandler implements BlockHandler {
try {
const url = new URL('/api/providers', getBaseUrl())
if (ctx.userId) url.searchParams.set('userId', ctx.userId)
const messages = [{ role: 'user', content: routerConfig.prompt }]
const systemPrompt = generateRouterPrompt(routerConfig.prompt, targetBlocks)
@@ -96,26 +97,17 @@ export class RouterBlockHandler implements BlockHandler {
context: JSON.stringify(messages),
temperature: ROUTER.INFERENCE_TEMPERATURE,
apiKey: finalApiKey,
azureEndpoint: inputs.azureEndpoint,
azureApiVersion: inputs.azureApiVersion,
vertexProject: routerConfig.vertexProject,
vertexLocation: routerConfig.vertexLocation,
bedrockAccessKeyId: routerConfig.bedrockAccessKeyId,
bedrockSecretKey: routerConfig.bedrockSecretKey,
bedrockRegion: routerConfig.bedrockRegion,
workflowId: ctx.workflowId,
workspaceId: ctx.workspaceId,
}
if (providerId === 'vertex') {
providerRequest.vertexProject = routerConfig.vertexProject
providerRequest.vertexLocation = routerConfig.vertexLocation
}
if (providerId === 'azure-openai') {
providerRequest.azureEndpoint = inputs.azureEndpoint
providerRequest.azureApiVersion = inputs.azureApiVersion
}
if (providerId === 'bedrock') {
providerRequest.bedrockAccessKeyId = routerConfig.bedrockAccessKeyId
providerRequest.bedrockSecretKey = routerConfig.bedrockSecretKey
providerRequest.bedrockRegion = routerConfig.bedrockRegion
}
const response = await fetch(url.toString(), {
method: 'POST',
headers: await buildAuthHeaders(),
@@ -218,6 +210,7 @@ export class RouterBlockHandler implements BlockHandler {
try {
const url = new URL('/api/providers', getBaseUrl())
if (ctx.userId) url.searchParams.set('userId', ctx.userId)
const messages = [{ role: 'user', content: routerConfig.context }]
const systemPrompt = generateRouterV2Prompt(routerConfig.context, routes)
@@ -234,6 +227,13 @@ export class RouterBlockHandler implements BlockHandler {
context: JSON.stringify(messages),
temperature: ROUTER.INFERENCE_TEMPERATURE,
apiKey: finalApiKey,
azureEndpoint: inputs.azureEndpoint,
azureApiVersion: inputs.azureApiVersion,
vertexProject: routerConfig.vertexProject,
vertexLocation: routerConfig.vertexLocation,
bedrockAccessKeyId: routerConfig.bedrockAccessKeyId,
bedrockSecretKey: routerConfig.bedrockSecretKey,
bedrockRegion: routerConfig.bedrockRegion,
workflowId: ctx.workflowId,
workspaceId: ctx.workspaceId,
responseFormat: {
@@ -257,22 +257,6 @@ export class RouterBlockHandler implements BlockHandler {
},
}
if (providerId === 'vertex') {
providerRequest.vertexProject = routerConfig.vertexProject
providerRequest.vertexLocation = routerConfig.vertexLocation
}
if (providerId === 'azure-openai') {
providerRequest.azureEndpoint = inputs.azureEndpoint
providerRequest.azureApiVersion = inputs.azureApiVersion
}
if (providerId === 'bedrock') {
providerRequest.bedrockAccessKeyId = routerConfig.bedrockAccessKeyId
providerRequest.bedrockSecretKey = routerConfig.bedrockSecretKey
providerRequest.bedrockRegion = routerConfig.bedrockRegion
}
const response = await fetch(url.toString(), {
method: 'POST',
headers: await buildAuthHeaders(),


@@ -511,6 +511,8 @@ export class LoopOrchestrator {
contextVariables: {},
timeoutMs: LOOP_CONDITION_TIMEOUT_MS,
requestId,
ownerKey: `user:${ctx.userId}`,
ownerWeight: 1,
})
if (vmResult.error) {
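The `ownerKey`/`ownerWeight` parameters above feed per-owner admission control in the isolated-VM pool (compare the `IVM_MAX_ACTIVE_PER_OWNER` and `IVM_MAX_OWNER_WEIGHT` knobs). A minimal sketch of one plausible semantics, assuming heavier owners consume more of a shared per-owner budget (`OwnerLimiter` is illustrative, not the repository's scheduler):

```typescript
// Track active weight per owner; reject acquisitions that would exceed
// the per-owner budget. Weights are clamped to a configured maximum.
class OwnerLimiter {
  private active = new Map<string, number>()
  constructor(
    private maxActivePerOwner: number,
    private maxOwnerWeight: number
  ) {}

  tryAcquire(ownerKey: string, weight: number): boolean {
    const w = Math.min(weight, this.maxOwnerWeight)
    const current = this.active.get(ownerKey) ?? 0
    if (current + w > this.maxActivePerOwner) return false
    this.active.set(ownerKey, current + w)
    return true
  }

  release(ownerKey: string, weight: number): void {
    const w = Math.min(weight, this.maxOwnerWeight)
    const current = this.active.get(ownerKey) ?? 0
    this.active.set(ownerKey, Math.max(0, current - w))
  }
}
```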


@@ -2,13 +2,13 @@ import { db } from '@sim/db'
import { account, workflow as workflowTable } from '@sim/db/schema'
import { eq } from 'drizzle-orm'
import type { NextRequest } from 'next/server'
-import { checkHybridAuth } from '@/lib/auth/hybrid'
+import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
export interface CredentialAccessResult {
ok: boolean
error?: string
-authType?: 'session' | 'api_key' | 'internal_jwt'
+authType?: 'session' | 'internal_jwt'
requesterUserId?: string
credentialOwnerUserId?: string
workspaceId?: string
@@ -16,10 +16,10 @@ export interface CredentialAccessResult {
/**
* Centralizes auth + collaboration rules for credential use.
-* - Uses checkHybridAuth to authenticate the caller
+* - Uses checkSessionOrInternalAuth to authenticate the caller
* - Fetches credential owner
* - Authorization rules:
-* - session/api_key: allow if requester owns the credential; otherwise require workflowId and
+* - session: allow if requester owns the credential; otherwise require workflowId and
* verify BOTH requester and owner have access to the workflow's workspace
* - internal_jwt: require workflowId (by default) and verify credential owner has access to the
* workflow's workspace (requester identity is the system/workflow)
@@ -30,7 +30,9 @@ export async function authorizeCredentialUse(
): Promise<CredentialAccessResult> {
const { credentialId, workflowId, requireWorkflowIdForInternal = true } = params
-const auth = await checkHybridAuth(request, { requireWorkflowId: requireWorkflowIdForInternal })
+const auth = await checkSessionOrInternalAuth(request, {
+requireWorkflowId: requireWorkflowIdForInternal,
+})
if (!auth.success || !auth.userId) {
return { ok: false, error: auth.error || 'Authentication required' }
}
@@ -52,7 +54,7 @@ export async function authorizeCredentialUse(
if (auth.authType !== 'internal_jwt' && auth.userId === credentialOwnerUserId) {
return {
ok: true,
-authType: auth.authType,
+authType: auth.authType as CredentialAccessResult['authType'],
requesterUserId: auth.userId,
credentialOwnerUserId,
}
@@ -85,14 +87,14 @@ export async function authorizeCredentialUse(
}
return {
ok: true,
-authType: auth.authType,
+authType: auth.authType as CredentialAccessResult['authType'],
requesterUserId: auth.userId,
credentialOwnerUserId,
workspaceId: wf.workspaceId,
}
}
-// Session/API key: verify BOTH requester and owner belong to the workflow's workspace
+// Session: verify BOTH requester and owner belong to the workflow's workspace
const requesterPerm = await getUserEntityPermissions(auth.userId, 'workspace', wf.workspaceId)
const ownerPerm = await getUserEntityPermissions(
credentialOwnerUserId,
@@ -105,7 +107,7 @@ export async function authorizeCredentialUse(
return {
ok: true,
-authType: auth.authType,
+authType: auth.authType as CredentialAccessResult['authType'],
requesterUserId: auth.userId,
credentialOwnerUserId,
workspaceId: wf.workspaceId,


@@ -1,7 +1,4 @@
import { db } from '@sim/db'
import { workflow } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import type { NextRequest } from 'next/server'
import { authenticateApiKeyFromHeader, updateApiKeyLastUsed } from '@/lib/api-key/service'
import { getSession } from '@/lib/auth'
@@ -13,35 +10,33 @@ export interface AuthResult {
success: boolean
userId?: string
authType?: 'session' | 'api_key' | 'internal_jwt'
apiKeyType?: 'personal' | 'workspace'
error?: string
}
/**
* Resolves userId from a verified internal JWT token.
-* Extracts workflowId/userId from URL params or POST body, then looks up userId if needed.
+* Extracts userId from the JWT payload, URL search params, or POST body.
*/
async function resolveUserFromJwt(
request: NextRequest,
verificationUserId: string | null,
options: { requireWorkflowId?: boolean }
): Promise<AuthResult> {
let workflowId: string | null = null
let userId: string | null = verificationUserId
const { searchParams } = new URL(request.url)
workflowId = searchParams.get('workflowId')
if (!userId) {
const { searchParams } = new URL(request.url)
userId = searchParams.get('userId')
}
-if (!workflowId && !userId && request.method === 'POST') {
+if (!userId && request.method === 'POST') {
try {
const clonedRequest = request.clone()
const bodyText = await clonedRequest.text()
if (bodyText) {
const body = JSON.parse(bodyText)
-workflowId = body.workflowId || body._context?.workflowId
-userId = userId || body.userId || body._context?.userId
+userId = body.userId || body._context?.userId || null
}
} catch {
// Ignore JSON parse errors
@@ -52,22 +47,8 @@ async function resolveUserFromJwt(
return { success: true, userId, authType: 'internal_jwt' }
}
if (workflowId) {
const [workflowData] = await db
.select({ userId: workflow.userId })
.from(workflow)
.where(eq(workflow.id, workflowId))
.limit(1)
if (!workflowData) {
return { success: false, error: 'Workflow not found' }
}
return { success: true, userId: workflowData.userId, authType: 'internal_jwt' }
}
if (options.requireWorkflowId !== false) {
-return { success: false, error: 'workflowId or userId required for internal JWT calls' }
+return { success: false, error: 'userId required for internal JWT calls' }
}
return { success: true, authType: 'internal_jwt' }
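The new resolution order above can be summarized in one helper (hypothetical `resolveUserId`, abstracting away the request plumbing): JWT-verified userId wins, then the `userId` query param, then the POST body, top-level before `_context`.

```typescript
// Precedence chain for resolving the acting user on internal JWT calls.
function resolveUserId(
  verified: string | null,
  queryUserId: string | null,
  body: { userId?: string; _context?: { userId?: string } } | null
): string | null {
  return verified ?? queryUserId ?? body?.userId ?? body?._context?.userId ?? null
}
```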
@@ -222,6 +203,7 @@ export async function checkHybridAuth(
success: true,
userId: result.userId!,
authType: 'api_key',
apiKeyType: result.keyType,
}
}


@@ -12,6 +12,7 @@ const VALID_PROVIDER_IDS: readonly ProviderId[] = [
'openai',
'azure-openai',
'anthropic',
'azure-anthropic',
'google',
'deepseek',
'xai',


@@ -147,6 +147,13 @@ export type CopilotProviderConfig =
apiVersion?: string
endpoint?: string
}
| {
provider: 'azure-anthropic'
model: string
apiKey?: string
apiVersion?: string
endpoint?: string
}
| {
provider: 'vertex'
model: string
@@ -155,7 +162,7 @@ export type CopilotProviderConfig =
vertexProject?: string
vertexLocation?: string
}
| {
-provider: Exclude<ProviderId, 'azure-openai' | 'vertex'>
+provider: Exclude<ProviderId, 'azure-openai' | 'azure-anthropic' | 'vertex'>
model?: string
apiKey?: string
}
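The new `azure-anthropic` arm slots into the discriminated union so the compiler narrows the config by its `provider` tag. A reduced sketch (only two arms kept; shapes abbreviated from the union above):

```typescript
// Narrowing on the `provider` discriminant exposes the arm-specific fields.
type CopilotProviderConfigSketch =
  | { provider: 'azure-anthropic'; model: string; apiKey?: string; apiVersion?: string; endpoint?: string }
  | { provider: 'openai'; model?: string; apiKey?: string }

function describe(cfg: CopilotProviderConfigSketch): string {
  if (cfg.provider === 'azure-anthropic') {
    // TypeScript narrows here, so endpoint/apiVersion are accessible.
    return `${cfg.provider}:${cfg.model}@${cfg.endpoint ?? 'default'}`
  }
  return `${cfg.provider}:${cfg.model ?? 'default'}`
}
```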


@@ -95,6 +95,9 @@ export const env = createEnv({
AZURE_OPENAI_ENDPOINT: z.string().url().optional(), // Shared Azure OpenAI service endpoint
AZURE_OPENAI_API_VERSION: z.string().optional(), // Shared Azure OpenAI API version
AZURE_OPENAI_API_KEY: z.string().min(1).optional(), // Shared Azure OpenAI API key
AZURE_ANTHROPIC_ENDPOINT: z.string().url().optional(), // Azure Anthropic service endpoint
AZURE_ANTHROPIC_API_KEY: z.string().min(1).optional(), // Azure Anthropic API key
AZURE_ANTHROPIC_API_VERSION: z.string().min(1).optional(), // Azure Anthropic API version (e.g. 2023-06-01)
KB_OPENAI_MODEL_NAME: z.string().optional(), // Knowledge base OpenAI model name (works with both regular OpenAI and Azure OpenAI)
WAND_OPENAI_MODEL_NAME: z.string().optional(), // Wand generation OpenAI model name (works with both regular OpenAI and Azure OpenAI)
OCR_AZURE_ENDPOINT: z.string().url().optional(), // Azure Mistral OCR service endpoint
@@ -180,6 +183,24 @@ export const env = createEnv({
EXECUTION_TIMEOUT_ASYNC_TEAM: z.string().optional().default('5400'), // 90 minutes
EXECUTION_TIMEOUT_ASYNC_ENTERPRISE: z.string().optional().default('5400'), // 90 minutes
// Isolated-VM Worker Pool Configuration
IVM_POOL_SIZE: z.string().optional().default('4'), // Max worker processes in pool
IVM_MAX_CONCURRENT: z.string().optional().default('10000'), // Max concurrent executions globally
IVM_MAX_PER_WORKER: z.string().optional().default('2500'), // Max concurrent executions per worker
IVM_WORKER_IDLE_TIMEOUT_MS: z.string().optional().default('60000'), // Worker idle cleanup timeout (ms)
IVM_MAX_QUEUE_SIZE: z.string().optional().default('10000'), // Max pending queued executions in memory
IVM_MAX_FETCH_RESPONSE_BYTES: z.string().optional().default('8388608'), // Max bytes read from sandbox fetch responses
IVM_MAX_FETCH_RESPONSE_CHARS: z.string().optional().default('4000000'), // Max chars returned to sandbox from fetch body
IVM_MAX_FETCH_OPTIONS_JSON_CHARS: z.string().optional().default('262144'), // Max JSON payload size for sandbox fetch options
IVM_MAX_FETCH_URL_LENGTH: z.string().optional().default('8192'), // Max URL length accepted by sandbox fetch
IVM_MAX_STDOUT_CHARS: z.string().optional().default('200000'), // Max captured stdout characters per execution
IVM_MAX_ACTIVE_PER_OWNER: z.string().optional().default('200'), // Max active executions per owner (per process)
IVM_MAX_QUEUED_PER_OWNER: z.string().optional().default('2000'), // Max queued executions per owner (per process)
IVM_MAX_OWNER_WEIGHT: z.string().optional().default('5'), // Max accepted weight for weighted owner scheduling
IVM_DISTRIBUTED_MAX_INFLIGHT_PER_OWNER: z.string().optional().default('2200'), // Max owner in-flight leases across replicas
IVM_DISTRIBUTED_LEASE_MIN_TTL_MS: z.string().optional().default('120000'), // Min TTL for distributed in-flight leases (ms)
IVM_QUEUE_TIMEOUT_MS: z.string().optional().default('300000'), // Max queue wait before rejection (ms)
// Knowledge Base Processing Configuration - Shared across all processing methods
KB_CONFIG_MAX_DURATION: z.number().optional().default(600), // Max processing duration in seconds (10 minutes)
KB_CONFIG_MAX_ATTEMPTS: z.number().optional().default(3), // Max retry attempts


@@ -103,6 +103,7 @@ export interface SecureFetchOptions {
body?: string | Buffer | Uint8Array
timeout?: number
maxRedirects?: number
maxResponseBytes?: number
}
export class SecureFetchHeaders {
@@ -165,6 +166,7 @@ export async function secureFetchWithPinnedIP(
redirectCount = 0
): Promise<SecureFetchResponse> {
const maxRedirects = options.maxRedirects ?? DEFAULT_MAX_REDIRECTS
const maxResponseBytes = options.maxResponseBytes
return new Promise((resolve, reject) => {
const parsed = new URL(url)
@@ -237,14 +239,32 @@ export async function secureFetchWithPinnedIP(
}
const chunks: Buffer[] = []
let totalBytes = 0
let responseTerminated = false
-res.on('data', (chunk: Buffer) => chunks.push(chunk))
+res.on('data', (chunk: Buffer) => {
if (responseTerminated) return
totalBytes += chunk.length
if (
typeof maxResponseBytes === 'number' &&
maxResponseBytes > 0 &&
totalBytes > maxResponseBytes
) {
responseTerminated = true
res.destroy(new Error(`Response exceeded maximum size of ${maxResponseBytes} bytes`))
return
}
chunks.push(chunk)
})
res.on('error', (error) => {
reject(error)
})
res.on('end', () => {
if (responseTerminated) return
const bodyBuffer = Buffer.concat(chunks)
const body = bodyBuffer.toString('utf-8')
const headersRecord: Record<string, string> = {}
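The size cap above stops buffering once the byte budget is exceeded rather than holding an unbounded response in memory. A standalone sketch of the same accumulation rule, detached from the Node stream plumbing (`collectWithLimit` is illustrative):

```typescript
// Accumulate chunks until the byte budget is exceeded, then fail fast
// instead of buffering unbounded data.
function collectWithLimit(chunks: Uint8Array[], maxBytes: number): Uint8Array {
  let total = 0
  const kept: Uint8Array[] = []
  for (const chunk of chunks) {
    total += chunk.length
    if (total > maxBytes) {
      throw new Error(`Response exceeded maximum size of ${maxBytes} bytes`)
    }
    kept.push(chunk)
  }
  // Concatenate the accepted chunks into one buffer.
  const out = new Uint8Array(total)
  let offset = 0
  for (const c of kept) {
    out.set(c, offset)
    offset += c.length
  }
  return out
}
```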


@@ -9,6 +9,21 @@ const USER_CODE_START_LINE = 4
const pendingFetches = new Map()
let fetchIdCounter = 0
const FETCH_TIMEOUT_MS = 300000 // 5 minutes
const MAX_STDOUT_CHARS = Number.parseInt(process.env.IVM_MAX_STDOUT_CHARS || '', 10) || 200000
const MAX_FETCH_OPTIONS_JSON_CHARS =
Number.parseInt(process.env.IVM_MAX_FETCH_OPTIONS_JSON_CHARS || '', 10) || 256 * 1024
function stringifyLogValue(value) {
if (typeof value !== 'object' || value === null) {
return String(value)
}
try {
return JSON.stringify(value)
} catch {
return '[unserializable]'
}
}
/**
* Extract line and column from error stack or message
@@ -101,8 +116,32 @@ function convertToCompatibleError(errorInfo, userCode) {
async function executeCode(request) {
const { code, params, envVars, contextVariables, timeoutMs, requestId } = request
const stdoutChunks = []
let stdoutLength = 0
let stdoutTruncated = false
let isolate = null
const appendStdout = (line) => {
if (stdoutTruncated || !line) return
const remaining = MAX_STDOUT_CHARS - stdoutLength
if (remaining <= 0) {
stdoutTruncated = true
stdoutChunks.push('[stdout truncated]\n')
return
}
if (line.length <= remaining) {
stdoutChunks.push(line)
stdoutLength += line.length
return
}
stdoutChunks.push(line.slice(0, remaining))
stdoutChunks.push('\n[stdout truncated]\n')
stdoutLength = MAX_STDOUT_CHARS
stdoutTruncated = true
}
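The `appendStdout` cap above can be exercised in isolation. A factory-style sketch of the same truncation rule (`makeStdoutBuffer` is illustrative; the worker uses closure-captured variables instead):

```javascript
// Append log lines until the character budget runs out, then emit a single
// truncation marker and drop everything after it.
function makeStdoutBuffer(maxChars) {
  const chunks = []
  let length = 0
  let truncated = false
  return {
    append(line) {
      if (truncated || !line) return
      const remaining = maxChars - length
      if (remaining <= 0) {
        truncated = true
        chunks.push('[stdout truncated]\n')
        return
      }
      if (line.length <= remaining) {
        chunks.push(line)
        length += line.length
        return
      }
      // Partial fit: keep the prefix, then mark truncation.
      chunks.push(line.slice(0, remaining))
      chunks.push('\n[stdout truncated]\n')
      length = maxChars
      truncated = true
    },
    value: () => chunks.join(''),
  }
}
```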
try {
isolate = new ivm.Isolate({ memoryLimit: 128 })
const context = await isolate.createContext()
@@ -111,18 +150,14 @@ async function executeCode(request) {
await jail.set('global', jail.derefInto())
const logCallback = new ivm.Callback((...args) => {
-const message = args
-.map((arg) => (typeof arg === 'object' ? JSON.stringify(arg) : String(arg)))
-.join(' ')
-stdoutChunks.push(`${message}\n`)
+const message = args.map((arg) => stringifyLogValue(arg)).join(' ')
+appendStdout(`${message}\n`)
})
await jail.set('__log', logCallback)
const errorCallback = new ivm.Callback((...args) => {
-const message = args
-.map((arg) => (typeof arg === 'object' ? JSON.stringify(arg) : String(arg)))
-.join(' ')
-stdoutChunks.push(`ERROR: ${message}\n`)
+const message = args.map((arg) => stringifyLogValue(arg)).join(' ')
+appendStdout(`ERROR: ${message}\n`)
})
await jail.set('__error', errorCallback)
@@ -178,6 +213,9 @@ async function executeCode(request) {
} catch {
throw new Error('fetch options must be JSON-serializable');
}
if (optionsJson.length > ${MAX_FETCH_OPTIONS_JSON_CHARS}) {
throw new Error('fetch options exceed maximum payload size');
}
}
const resultJson = await __fetchRef.apply(undefined, [url, optionsJson], { result: { promise: true } });
let result;


@@ -0,0 +1,500 @@
import { EventEmitter } from 'node:events'
import { afterEach, describe, expect, it, vi } from 'vitest'
type MockProc = EventEmitter & {
connected: boolean
stderr: EventEmitter
send: (message: unknown) => boolean
kill: () => boolean
}
type SpawnFactory = () => MockProc
type RedisEval = (...args: any[]) => unknown | Promise<unknown>
type SecureFetchImpl = (...args: any[]) => unknown | Promise<unknown>
function createBaseProc(): MockProc {
const proc = new EventEmitter() as MockProc
proc.connected = true
proc.stderr = new EventEmitter()
proc.send = () => true
proc.kill = () => {
if (!proc.connected) return true
proc.connected = false
setImmediate(() => proc.emit('exit', 0))
return true
}
return proc
}
function createStartupFailureProc(): MockProc {
const proc = createBaseProc()
setImmediate(() => {
proc.connected = false
proc.emit('exit', 1)
})
return proc
}
function createReadyProc(result: unknown): MockProc {
const proc = createBaseProc()
proc.send = (message: unknown) => {
const msg = message as { type?: string; executionId?: number }
if (msg.type === 'execute') {
setImmediate(() => {
proc.emit('message', {
type: 'result',
executionId: msg.executionId,
result: { result, stdout: '' },
})
})
}
return true
}
setImmediate(() => proc.emit('message', { type: 'ready' }))
return proc
}
function createReadyProcWithDelay(delayMs: number): MockProc {
const proc = createBaseProc()
proc.send = (message: unknown) => {
const msg = message as { type?: string; executionId?: number; request?: { requestId?: string } }
if (msg.type === 'execute') {
setTimeout(() => {
proc.emit('message', {
type: 'result',
executionId: msg.executionId,
result: { result: msg.request?.requestId ?? 'unknown', stdout: '' },
})
}, delayMs)
}
return true
}
setImmediate(() => proc.emit('message', { type: 'ready' }))
return proc
}
function createReadyFetchProxyProc(fetchMessage: { url: string; optionsJson?: string }): MockProc {
const proc = createBaseProc()
let currentExecutionId = 0
proc.send = (message: unknown) => {
const msg = message as { type?: string; executionId?: number; request?: { requestId?: string } }
if (msg.type === 'execute') {
currentExecutionId = msg.executionId ?? 0
setImmediate(() => {
proc.emit('message', {
type: 'fetch',
fetchId: 1,
requestId: msg.request?.requestId ?? 'fetch-test',
url: fetchMessage.url,
optionsJson: fetchMessage.optionsJson,
})
})
return true
}
if (msg.type === 'fetchResponse') {
const fetchResponse = message as { response?: string }
setImmediate(() => {
proc.emit('message', {
type: 'result',
executionId: currentExecutionId,
result: { result: fetchResponse.response ?? '', stdout: '' },
})
})
return true
}
return true
}
setImmediate(() => proc.emit('message', { type: 'ready' }))
return proc
}
async function loadExecutionModule(options: {
envOverrides?: Record<string, string>
spawns: SpawnFactory[]
redisEvalImpl?: RedisEval
secureFetchImpl?: SecureFetchImpl
}) {
vi.resetModules()
const spawnQueue = [...options.spawns]
const spawnMock = vi.fn(() => {
const next = spawnQueue.shift()
if (!next) {
throw new Error('No mock spawn factory configured')
}
return next() as any
})
vi.doMock('@sim/logger', () => ({
createLogger: () => ({
info: vi.fn(),
warn: vi.fn(),
error: vi.fn(),
}),
}))
const secureFetchMock = vi.fn(
options.secureFetchImpl ??
(async () => ({
ok: true,
status: 200,
statusText: 'OK',
headers: new Map<string, string>(),
text: async () => '',
json: async () => ({}),
arrayBuffer: async () => new ArrayBuffer(0),
}))
)
vi.doMock('@/lib/core/security/input-validation.server', () => ({
secureFetchWithValidation: secureFetchMock,
}))
vi.doMock('@/lib/core/config/env', () => ({
env: {
IVM_POOL_SIZE: '1',
IVM_MAX_CONCURRENT: '100',
IVM_MAX_PER_WORKER: '100',
IVM_WORKER_IDLE_TIMEOUT_MS: '60000',
IVM_MAX_QUEUE_SIZE: '10',
IVM_MAX_ACTIVE_PER_OWNER: '100',
IVM_MAX_QUEUED_PER_OWNER: '10',
IVM_MAX_OWNER_WEIGHT: '5',
IVM_DISTRIBUTED_MAX_INFLIGHT_PER_OWNER: '100',
IVM_DISTRIBUTED_LEASE_MIN_TTL_MS: '1000',
IVM_QUEUE_TIMEOUT_MS: '1000',
...(options.envOverrides ?? {}),
},
}))
const redisEval = options.redisEvalImpl ? vi.fn(options.redisEvalImpl) : undefined
vi.doMock('@/lib/core/config/redis', () => ({
getRedisClient: vi.fn(() =>
redisEval
? ({
eval: redisEval,
} as any)
: null
),
}))
vi.doMock('node:child_process', () => ({
execSync: vi.fn(() => Buffer.from('v23.11.0')),
spawn: spawnMock,
}))
const mod = await import('./isolated-vm')
return { ...mod, spawnMock, secureFetchMock }
}
describe('isolated-vm scheduler', () => {
afterEach(() => {
vi.restoreAllMocks()
vi.resetModules()
})
it('recovers from an initial spawn failure and drains queued work', async () => {
const { executeInIsolatedVM, spawnMock } = await loadExecutionModule({
spawns: [createStartupFailureProc, () => createReadyProc('ok')],
})
const result = await executeInIsolatedVM({
code: 'return "ok"',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 100,
requestId: 'req-1',
})
expect(result.error).toBeUndefined()
expect(result.result).toBe('ok')
expect(spawnMock).toHaveBeenCalledTimes(2)
})
it('rejects new requests when the queue is full', async () => {
const { executeInIsolatedVM } = await loadExecutionModule({
envOverrides: {
IVM_MAX_QUEUE_SIZE: '1',
IVM_QUEUE_TIMEOUT_MS: '200',
},
spawns: [createStartupFailureProc, createStartupFailureProc, createStartupFailureProc],
})
const firstPromise = executeInIsolatedVM({
code: 'return 1',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 100,
requestId: 'req-2',
ownerKey: 'user:a',
})
await new Promise((resolve) => setTimeout(resolve, 25))
const second = await executeInIsolatedVM({
code: 'return 2',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 100,
requestId: 'req-3',
ownerKey: 'user:b',
})
expect(second.error?.message).toContain('at capacity')
const first = await firstPromise
expect(first.error?.message).toContain('timed out waiting')
})
it('enforces per-owner queued limit', async () => {
const { executeInIsolatedVM } = await loadExecutionModule({
envOverrides: {
IVM_MAX_QUEUED_PER_OWNER: '1',
IVM_QUEUE_TIMEOUT_MS: '200',
},
spawns: [createStartupFailureProc, createStartupFailureProc, createStartupFailureProc],
})
const firstPromise = executeInIsolatedVM({
code: 'return 1',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 100,
requestId: 'req-4',
ownerKey: 'user:hog',
})
await new Promise((resolve) => setTimeout(resolve, 25))
const second = await executeInIsolatedVM({
code: 'return 2',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 100,
requestId: 'req-5',
ownerKey: 'user:hog',
})
expect(second.error?.message).toContain('Too many concurrent')
const first = await firstPromise
expect(first.error?.message).toContain('timed out waiting')
})
it('enforces distributed owner in-flight lease limit when Redis is configured', async () => {
const { executeInIsolatedVM } = await loadExecutionModule({
envOverrides: {
IVM_DISTRIBUTED_MAX_INFLIGHT_PER_OWNER: '1',
REDIS_URL: 'redis://localhost:6379',
},
spawns: [() => createReadyProc('ok')],
redisEvalImpl: (...args: any[]) => {
const script = String(args[0] ?? '')
if (script.includes('ZREMRANGEBYSCORE')) {
return 0
}
return 1
},
})
const result = await executeInIsolatedVM({
code: 'return "blocked"',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 100,
requestId: 'req-6',
ownerKey: 'user:distributed',
})
expect(result.error?.message).toContain('Too many concurrent')
})
it('fails closed when Redis is configured but unavailable', async () => {
const { executeInIsolatedVM } = await loadExecutionModule({
envOverrides: {
REDIS_URL: 'redis://localhost:6379',
},
spawns: [() => createReadyProc('ok')],
})
const result = await executeInIsolatedVM({
code: 'return "blocked"',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 100,
requestId: 'req-7',
ownerKey: 'user:redis-down',
})
expect(result.error?.message).toContain('temporarily unavailable')
})
it('fails closed when Redis lease evaluation errors', async () => {
const { executeInIsolatedVM } = await loadExecutionModule({
envOverrides: {
REDIS_URL: 'redis://localhost:6379',
},
spawns: [() => createReadyProc('ok')],
redisEvalImpl: (...args: any[]) => {
const script = String(args[0] ?? '')
if (script.includes('ZREMRANGEBYSCORE')) {
throw new Error('redis timeout')
}
return 1
},
})
const result = await executeInIsolatedVM({
code: 'return "blocked"',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 100,
requestId: 'req-8',
ownerKey: 'user:redis-error',
})
expect(result.error?.message).toContain('temporarily unavailable')
})
it('applies weighted owner scheduling when draining queued executions', async () => {
const { executeInIsolatedVM } = await loadExecutionModule({
envOverrides: {
IVM_MAX_PER_WORKER: '1',
},
spawns: [() => createReadyProcWithDelay(10)],
})
const completionOrder: string[] = []
const pushCompletion = (label: string) => (res: { result: unknown }) => {
completionOrder.push(String(res.result ?? label))
return res
}
const p1 = executeInIsolatedVM({
code: 'return 1',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 500,
requestId: 'a-1',
ownerKey: 'user:a',
ownerWeight: 2,
}).then(pushCompletion('a-1'))
const p2 = executeInIsolatedVM({
code: 'return 2',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 500,
requestId: 'a-2',
ownerKey: 'user:a',
ownerWeight: 2,
}).then(pushCompletion('a-2'))
const p3 = executeInIsolatedVM({
code: 'return 3',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 500,
requestId: 'b-1',
ownerKey: 'user:b',
ownerWeight: 1,
}).then(pushCompletion('b-1'))
const p4 = executeInIsolatedVM({
code: 'return 4',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 500,
requestId: 'b-2',
ownerKey: 'user:b',
ownerWeight: 1,
}).then(pushCompletion('b-2'))
const p5 = executeInIsolatedVM({
code: 'return 5',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 500,
requestId: 'a-3',
ownerKey: 'user:a',
ownerWeight: 2,
}).then(pushCompletion('a-3'))
await Promise.all([p1, p2, p3, p4, p5])
expect(completionOrder.slice(0, 3)).toEqual(['a-1', 'a-2', 'a-3'])
expect(completionOrder).toEqual(['a-1', 'a-2', 'a-3', 'b-1', 'b-2'])
})
it('rejects oversized fetch options payloads before outbound call', async () => {
const { executeInIsolatedVM, secureFetchMock } = await loadExecutionModule({
envOverrides: {
IVM_MAX_FETCH_OPTIONS_JSON_CHARS: '50',
},
spawns: [
() =>
createReadyFetchProxyProc({
url: 'https://example.com',
optionsJson: 'x'.repeat(100),
}),
],
})
const result = await executeInIsolatedVM({
code: 'return "fetch-options"',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 100,
requestId: 'req-fetch-options',
})
const payload = JSON.parse(String(result.result))
expect(payload.error).toContain('Fetch options exceed maximum payload size')
expect(secureFetchMock).not.toHaveBeenCalled()
})
it('rejects overly long fetch URLs before outbound call', async () => {
const { executeInIsolatedVM, secureFetchMock } = await loadExecutionModule({
envOverrides: {
IVM_MAX_FETCH_URL_LENGTH: '30',
},
spawns: [
() =>
createReadyFetchProxyProc({
url: 'https://example.com/path/to/a/very/long/resource',
}),
],
})
const result = await executeInIsolatedVM({
code: 'return "fetch-url"',
params: {},
envVars: {},
contextVariables: {},
timeoutMs: 100,
requestId: 'req-fetch-url',
})
const payload = JSON.parse(String(result.result))
expect(payload.error).toContain('fetch URL exceeds maximum length')
expect(secureFetchMock).not.toHaveBeenCalled()
})
})

File diff suppressed because it is too large

View File

@@ -124,6 +124,7 @@ export interface PreprocessExecutionOptions {
   workspaceId?: string // If known, used for billing resolution
   loggingSession?: LoggingSession // If provided, will be used for error logging
   isResumeContext?: boolean // If true, allows fallback billing on resolution failure (for paused workflow resumes)
+  useAuthenticatedUserAsActor?: boolean // If true, use the authenticated userId as actorUserId (for client-side executions and personal API keys)
   /** @deprecated No longer used - background/async executions always use deployed state */
   useDraftState?: boolean
 }
@@ -170,6 +171,7 @@ export async function preprocessExecution(
     workspaceId: providedWorkspaceId,
     loggingSession: providedLoggingSession,
     isResumeContext = false,
+    useAuthenticatedUserAsActor = false,
   } = options

   logger.info(`[${requestId}] Starting execution preprocessing`, {
@@ -257,7 +259,14 @@ export async function preprocessExecution(
   let actorUserId: string | null = null
   try {
-    if (workspaceId) {
+    // For client-side executions and personal API keys, the authenticated
+    // user is the billing and permission actor — not the workspace owner.
+    if (useAuthenticatedUserAsActor && userId) {
+      actorUserId = userId
+      logger.info(`[${requestId}] Using authenticated user as actor: ${actorUserId}`)
+    }
+    if (!actorUserId && workspaceId) {
       actorUserId = await getWorkspaceBilledAccountUserId(workspaceId)
       if (actorUserId) {
         logger.info(`[${requestId}] Using workspace billed account: ${actorUserId}`)

View File

@@ -1,7 +1,11 @@
+import { db } from '@sim/db'
+import { account } from '@sim/db/schema'
 import { createLogger } from '@sim/logger'
+import { eq } from 'drizzle-orm'
 import { getBaseUrl } from '@/lib/core/utils/urls'
+import { refreshTokenIfNeeded } from '@/app/api/auth/oauth/utils'
 import { executeProviderRequest } from '@/providers'
-import { getApiKey, getProviderFromModel } from '@/providers/utils'
+import { getProviderFromModel } from '@/providers/utils'

 const logger = createLogger('HallucinationValidator')
@@ -19,7 +23,18 @@ export interface HallucinationValidationInput {
   topK: number // Number of chunks to retrieve, default 10
   model: string
   apiKey?: string
+  providerCredentials?: {
+    azureEndpoint?: string
+    azureApiVersion?: string
+    vertexProject?: string
+    vertexLocation?: string
+    vertexCredential?: string
+    bedrockAccessKeyId?: string
+    bedrockSecretKey?: string
+    bedrockRegion?: string
+  }
   workflowId?: string
+  workspaceId?: string
   requestId: string
 }
@@ -89,7 +104,9 @@ async function scoreHallucinationWithLLM(
   userInput: string,
   ragContext: string[],
   model: string,
-  apiKey: string,
+  apiKey: string | undefined,
+  providerCredentials: HallucinationValidationInput['providerCredentials'],
+  workspaceId: string | undefined,
   requestId: string
 ): Promise<{ score: number; reasoning: string }> {
   try {
@@ -127,6 +144,23 @@ Evaluate the consistency and provide your score and reasoning in JSON format.`
   const providerId = getProviderFromModel(model)

+  let finalApiKey: string | undefined = apiKey
+  if (providerId === 'vertex' && providerCredentials?.vertexCredential) {
+    const credential = await db.query.account.findFirst({
+      where: eq(account.id, providerCredentials.vertexCredential),
+    })
+    if (credential) {
+      const { accessToken } = await refreshTokenIfNeeded(
+        requestId,
+        credential,
+        providerCredentials.vertexCredential
+      )
+      if (accessToken) {
+        finalApiKey = accessToken
+      }
+    }
+  }
+
   const response = await executeProviderRequest(providerId, {
     model,
     systemPrompt,
@@ -137,7 +171,15 @@ Evaluate the consistency and provide your score and reasoning in JSON format.`
       },
     ],
     temperature: 0.1, // Low temperature for consistent scoring
-    apiKey,
+    apiKey: finalApiKey,
+    azureEndpoint: providerCredentials?.azureEndpoint,
+    azureApiVersion: providerCredentials?.azureApiVersion,
+    vertexProject: providerCredentials?.vertexProject,
+    vertexLocation: providerCredentials?.vertexLocation,
+    bedrockAccessKeyId: providerCredentials?.bedrockAccessKeyId,
+    bedrockSecretKey: providerCredentials?.bedrockSecretKey,
+    bedrockRegion: providerCredentials?.bedrockRegion,
+    workspaceId,
   })

   if (response instanceof ReadableStream || ('stream' in response && 'execution' in response)) {
@@ -184,8 +226,18 @@ Evaluate the consistency and provide your score and reasoning in JSON format.`
 export async function validateHallucination(
   input: HallucinationValidationInput
 ): Promise<HallucinationValidationResult> {
-  const { userInput, knowledgeBaseId, threshold, topK, model, apiKey, workflowId, requestId } =
-    input
+  const {
+    userInput,
+    knowledgeBaseId,
+    threshold,
+    topK,
+    model,
+    apiKey,
+    providerCredentials,
+    workflowId,
+    workspaceId,
+    requestId,
+  } = input

   try {
     if (!userInput || userInput.trim().length === 0) {
@@ -202,17 +254,6 @@ export async function validateHallucination(
       }
     }

-    let finalApiKey: string
-    try {
-      const providerId = getProviderFromModel(model)
-      finalApiKey = getApiKey(providerId, model, apiKey)
-    } catch (error: any) {
-      return {
-        passed: false,
-        error: `API key error: ${error.message}`,
-      }
-    }
-
     // Step 1: Query knowledge base with RAG
     const ragContext = await queryKnowledgeBase(
       knowledgeBaseId,
@@ -234,7 +275,9 @@ export async function validateHallucination(
       userInput,
       ragContext,
       model,
-      finalApiKey,
+      apiKey,
+      providerCredentials,
+      workspaceId,
       requestId
     )

View File

@@ -33,11 +33,25 @@ export class SnapshotService implements ISnapshotService {
     const existingSnapshot = await this.getSnapshotByHash(workflowId, stateHash)
     if (existingSnapshot) {
+      let refreshedState: WorkflowState = existingSnapshot.stateData
+      try {
+        await db
+          .update(workflowExecutionSnapshots)
+          .set({ stateData: state })
+          .where(eq(workflowExecutionSnapshots.id, existingSnapshot.id))
+        refreshedState = state
+      } catch (error) {
+        logger.warn(
+          `Failed to refresh snapshot stateData for ${existingSnapshot.id}, continuing with existing data`,
+          error
+        )
+      }
       logger.info(
         `Reusing existing snapshot for workflow ${workflowId} (hash: ${stateHash.slice(0, 12)}...)`
       )
       return {
-        snapshot: existingSnapshot,
+        snapshot: { ...existingSnapshot, stateData: refreshedState },
         isNew: false,
       }
     }

View File

@@ -1,6 +1,6 @@
 import { createLogger } from '@sim/logger'
 import type { NextRequest, NextResponse } from 'next/server'
-import { checkHybridAuth } from '@/lib/auth/hybrid'
+import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
 import { generateRequestId } from '@/lib/core/utils/request'
 import { createMcpErrorResponse } from '@/lib/mcp/utils'
 import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
@@ -43,7 +43,7 @@ async function validateMcpAuth(
   const requestId = generateRequestId()
   try {
-    const auth = await checkHybridAuth(request, { requireWorkflowId: false })
+    const auth = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
     if (!auth.success || !auth.userId) {
       logger.warn(`[${requestId}] Authentication failed: ${auth.error}`)
       return {

View File

@@ -21,6 +21,11 @@ export const TOKENIZATION_CONFIG = {
     confidence: 'high',
     supportedMethods: ['heuristic', 'fallback'],
   },
+  'azure-anthropic': {
+    avgCharsPerToken: 4.5,
+    confidence: 'high',
+    supportedMethods: ['heuristic', 'fallback'],
+  },
   google: {
     avgCharsPerToken: 5,
     confidence: 'medium',

View File

@@ -204,6 +204,7 @@ export function estimateTokenCount(text: string, providerId?: string): TokenEsti
       estimatedTokens = estimateOpenAITokens(text)
       break
     case 'anthropic':
+    case 'azure-anthropic':
       estimatedTokens = estimateAnthropicTokens(text)
       break
     case 'google':

View File

@@ -24,6 +24,7 @@ import {
   validateTypeformSignature,
   verifyProviderWebhook,
 } from '@/lib/webhooks/utils.server'
+import { getWorkspaceBilledAccountUserId } from '@/lib/workspaces/utils'
 import { executeWebhookJob } from '@/background/webhook-execution'
 import { resolveEnvVarReferences } from '@/executor/utils/reference-validation'
 import { isGitHubEventMatch } from '@/triggers/github/utils'
@@ -1003,10 +1004,23 @@ export async function queueWebhookExecution(
     }
   }

+  if (!foundWorkflow.workspaceId) {
+    logger.error(`[${options.requestId}] Workflow ${foundWorkflow.id} has no workspaceId`)
+    return NextResponse.json({ error: 'Workflow has no associated workspace' }, { status: 500 })
+  }
+
+  const actorUserId = await getWorkspaceBilledAccountUserId(foundWorkflow.workspaceId)
+  if (!actorUserId) {
+    logger.error(
+      `[${options.requestId}] No billing account for workspace ${foundWorkflow.workspaceId}`
+    )
+    return NextResponse.json({ error: 'Unable to resolve billing account' }, { status: 500 })
+  }
+
   const payload = {
     webhookId: foundWebhook.id,
     workflowId: foundWorkflow.id,
-    userId: foundWorkflow.userId,
+    userId: actorUserId,
     provider: foundWebhook.provider,
     body,
     headers,
@@ -1017,7 +1031,7 @@ export async function queueWebhookExecution(
   const jobQueue = await getJobQueue()
   const jobId = await jobQueue.enqueue('webhook-execution', payload, {
-    metadata: { workflowId: foundWorkflow.id, userId: foundWorkflow.userId },
+    metadata: { workflowId: foundWorkflow.id, userId: actorUserId },
   })
   logger.info(
     `[${options.requestId}] Queued webhook execution task ${jobId} for ${foundWebhook.provider} webhook`

View File

@@ -156,6 +156,15 @@ describe('evaluateSubBlockCondition', () => {
     expect(evaluateSubBlockCondition(condition, values)).toBe(true)
   })

+  it.concurrent('passes current values into function conditions', () => {
+    const condition = (values?: Record<string, unknown>) => ({
+      field: 'model',
+      value: typeof values?.model === 'string' ? values.model : '__no_model_selected__',
+    })
+    const values = { model: 'ollama/gemma3:4b' }
+    expect(evaluateSubBlockCondition(condition, values)).toBe(true)
+  })
+
   it.concurrent('handles boolean values', () => {
     const condition = { field: 'enabled', value: true }
     const values = { enabled: true }

View File

@@ -100,11 +100,14 @@ export function resolveCanonicalMode(
  * Evaluate a subblock condition against a map of raw values.
  */
 export function evaluateSubBlockCondition(
-  condition: SubBlockCondition | (() => SubBlockCondition) | undefined,
+  condition:
+    | SubBlockCondition
+    | ((values?: Record<string, unknown>) => SubBlockCondition)
+    | undefined,
   values: Record<string, unknown>
 ): boolean {
   if (!condition) return true
-  const actual = typeof condition === 'function' ? condition() : condition
+  const actual = typeof condition === 'function' ? condition(values) : condition
   const fieldValue = values[actual.field]
   const valueMatch = Array.isArray(actual.value)
     ? fieldValue != null &&

View File

@@ -1,5 +1,6 @@
 import type Anthropic from '@anthropic-ai/sdk'
 import { transformJSONSchema } from '@anthropic-ai/sdk/lib/transform-json-schema'
+import type { RawMessageStreamEvent } from '@anthropic-ai/sdk/resources/messages/messages'
 import type { Logger } from '@sim/logger'
 import type { StreamingExecution } from '@/executor/types'
 import { MAX_TOOL_ITERATIONS } from '@/providers'
@@ -34,11 +35,21 @@ export interface AnthropicProviderConfig {
   logger: Logger
 }

+/**
+ * Custom payload type extending the SDK's base message creation params.
+ * Adds fields not yet in the SDK: adaptive thinking, output_format, output_config.
+ */
+interface AnthropicPayload extends Omit<Anthropic.Messages.MessageStreamParams, 'thinking'> {
+  thinking?: Anthropic.Messages.ThinkingConfigParam | { type: 'adaptive' }
+  output_format?: { type: 'json_schema'; schema: Record<string, unknown> }
+  output_config?: { effort: string }
+}
+
 /**
  * Generates prompt-based schema instructions for older models that don't support native structured outputs.
  * This is a fallback approach that adds schema requirements to the system prompt.
  */
-function generateSchemaInstructions(schema: any, schemaName?: string): string {
+function generateSchemaInstructions(schema: Record<string, unknown>, schemaName?: string): string {
   const name = schemaName || 'response'
   return `IMPORTANT: You must respond with a valid JSON object that conforms to the following schema.
 Do not include any text before or after the JSON object. Only output the JSON.
@@ -113,6 +124,30 @@ function buildThinkingConfig(
   }
 }

+/**
+ * The Anthropic SDK requires streaming for non-streaming requests when max_tokens exceeds
+ * this threshold, to avoid HTTP timeouts. When thinking is enabled and pushes max_tokens
+ * above this limit, we use streaming internally and collect the final message.
+ */
+const ANTHROPIC_SDK_NON_STREAMING_MAX_TOKENS = 21333
+
+/**
+ * Creates an Anthropic message, automatically using streaming internally when max_tokens
+ * exceeds the SDK's non-streaming threshold. Returns the same Message object either way.
+ */
+async function createMessage(
+  anthropic: Anthropic,
+  payload: AnthropicPayload
+): Promise<Anthropic.Messages.Message> {
+  if (payload.max_tokens > ANTHROPIC_SDK_NON_STREAMING_MAX_TOKENS && !payload.stream) {
+    const stream = anthropic.messages.stream(payload as Anthropic.Messages.MessageStreamParams)
+    return stream.finalMessage()
+  }
+  return anthropic.messages.create(
+    payload as Anthropic.Messages.MessageCreateParamsNonStreaming
+  ) as Promise<Anthropic.Messages.Message>
+}
+
 /**
  * Executes a request using the Anthropic API with full tool loop support.
  * This is the shared core implementation used by both the standard Anthropic provider
@@ -135,7 +170,7 @@ export async function executeAnthropicProviderRequest(
   const anthropic = config.createClient(request.apiKey, useNativeStructuredOutputs)

-  const messages: any[] = []
+  const messages: Anthropic.Messages.MessageParam[] = []
   let systemPrompt = request.systemPrompt || ''

   if (request.context) {
@@ -153,8 +188,8 @@ export async function executeAnthropicProviderRequest(
         content: [
           {
             type: 'tool_result',
-            tool_use_id: msg.name,
-            content: msg.content,
+            tool_use_id: msg.name || '',
+            content: msg.content || undefined,
           },
         ],
       })
@@ -188,12 +223,12 @@ export async function executeAnthropicProviderRequest(
     systemPrompt = ''
   }

-  let anthropicTools = request.tools?.length
+  let anthropicTools: Anthropic.Messages.Tool[] | undefined = request.tools?.length
     ? request.tools.map((tool) => ({
         name: tool.id,
         description: tool.description,
         input_schema: {
-          type: 'object',
+          type: 'object' as const,
           properties: tool.parameters.properties,
           required: tool.parameters.required,
         },
@@ -238,13 +273,12 @@ export async function executeAnthropicProviderRequest(
     }
   }

-  const payload: any = {
+  const payload: AnthropicPayload = {
     model: request.model,
     messages,
     system: systemPrompt,
-    max_tokens:
-      Number.parseInt(String(request.maxTokens)) ||
-      getMaxOutputTokensForModel(request.model, request.stream ?? false),
+    max_tokens:
+      Number.parseInt(String(request.maxTokens)) || getMaxOutputTokensForModel(request.model),
     temperature: Number.parseFloat(String(request.temperature ?? 0.7)),
   }
@@ -268,13 +302,35 @@ export async function executeAnthropicProviderRequest(
   }

   // Add extended thinking configuration if supported and requested
-  if (request.thinkingLevel) {
+  // The 'none' sentinel means "disable thinking" — skip configuration entirely.
+  if (request.thinkingLevel && request.thinkingLevel !== 'none') {
     const thinkingConfig = buildThinkingConfig(request.model, request.thinkingLevel)
     if (thinkingConfig) {
       payload.thinking = thinkingConfig.thinking
       if (thinkingConfig.outputConfig) {
         payload.output_config = thinkingConfig.outputConfig
       }
+      // Per Anthropic docs: budget_tokens must be less than max_tokens.
+      // Ensure max_tokens leaves room for both thinking and text output.
+      if (
+        thinkingConfig.thinking.type === 'enabled' &&
+        'budget_tokens' in thinkingConfig.thinking
+      ) {
+        const budgetTokens = thinkingConfig.thinking.budget_tokens
+        const minMaxTokens = budgetTokens + 4096
+        if (payload.max_tokens < minMaxTokens) {
+          const modelMax = getMaxOutputTokensForModel(request.model)
+          payload.max_tokens = Math.min(minMaxTokens, modelMax)
+          logger.info(
+            `Adjusted max_tokens to ${payload.max_tokens} to satisfy budget_tokens (${budgetTokens}) constraint`
+          )
+        }
+      }
+      // Per Anthropic docs: thinking is not compatible with temperature or top_k modifications.
+      payload.temperature = undefined
       const isAdaptive = thinkingConfig.thinking.type === 'adaptive'
       logger.info(
         `Using ${isAdaptive ? 'adaptive' : 'extended'} thinking for model: ${modelId} with ${isAdaptive ? `effort: ${request.thinkingLevel}` : `budget: ${(thinkingConfig.thinking as { budget_tokens: number }).budget_tokens}`}`
@@ -288,7 +344,16 @@ export async function executeAnthropicProviderRequest(
   if (anthropicTools?.length) {
     payload.tools = anthropicTools
-    if (toolChoice !== 'auto') {
+    // Per Anthropic docs: forced tool_choice (type: "tool" or "any") is incompatible with
+    // thinking. Only auto and none are supported when thinking is enabled.
+    if (payload.thinking) {
+      // Per Anthropic docs: only 'auto' (default) and 'none' work with thinking.
+      if (toolChoice === 'none') {
+        payload.tool_choice = { type: 'none' }
+      }
+    } else if (toolChoice === 'none') {
+      payload.tool_choice = { type: 'none' }
+    } else if (toolChoice !== 'auto') {
       payload.tool_choice = toolChoice
     }
} }
@@ -301,42 +366,46 @@ export async function executeAnthropicProviderRequest(
     const providerStartTime = Date.now()
     const providerStartTimeISO = new Date(providerStartTime).toISOString()

-    const streamResponse: any = await anthropic.messages.create({
+    const streamResponse = await anthropic.messages.create({
       ...payload,
       stream: true,
-    })
+    } as Anthropic.Messages.MessageCreateParamsStreaming)

     const streamingResult = {
-      stream: createReadableStreamFromAnthropicStream(streamResponse, (content, usage) => {
-        streamingResult.execution.output.content = content
-        streamingResult.execution.output.tokens = {
-          input: usage.input_tokens,
-          output: usage.output_tokens,
-          total: usage.input_tokens + usage.output_tokens,
-        }
-        const costResult = calculateCost(request.model, usage.input_tokens, usage.output_tokens)
-        streamingResult.execution.output.cost = {
-          input: costResult.input,
-          output: costResult.output,
-          total: costResult.total,
-        }
-        const streamEndTime = Date.now()
-        const streamEndTimeISO = new Date(streamEndTime).toISOString()
-        if (streamingResult.execution.output.providerTiming) {
-          streamingResult.execution.output.providerTiming.endTime = streamEndTimeISO
-          streamingResult.execution.output.providerTiming.duration =
-            streamEndTime - providerStartTime
-          if (streamingResult.execution.output.providerTiming.timeSegments?.[0]) {
-            streamingResult.execution.output.providerTiming.timeSegments[0].endTime = streamEndTime
-            streamingResult.execution.output.providerTiming.timeSegments[0].duration =
-              streamEndTime - providerStartTime
-          }
-        }
-      }),
+      stream: createReadableStreamFromAnthropicStream(
+        streamResponse as AsyncIterable<RawMessageStreamEvent>,
+        (content, usage) => {
+          streamingResult.execution.output.content = content
+          streamingResult.execution.output.tokens = {
+            input: usage.input_tokens,
+            output: usage.output_tokens,
+            total: usage.input_tokens + usage.output_tokens,
+          }
+          const costResult = calculateCost(request.model, usage.input_tokens, usage.output_tokens)
+          streamingResult.execution.output.cost = {
+            input: costResult.input,
+            output: costResult.output,
+            total: costResult.total,
+          }
+          const streamEndTime = Date.now()
+          const streamEndTimeISO = new Date(streamEndTime).toISOString()
+          if (streamingResult.execution.output.providerTiming) {
+            streamingResult.execution.output.providerTiming.endTime = streamEndTimeISO
+            streamingResult.execution.output.providerTiming.duration =
+              streamEndTime - providerStartTime
+            if (streamingResult.execution.output.providerTiming.timeSegments?.[0]) {
+              streamingResult.execution.output.providerTiming.timeSegments[0].endTime =
+                streamEndTime
+              streamingResult.execution.output.providerTiming.timeSegments[0].duration =
+                streamEndTime - providerStartTime
+            }
+          }
+        }
+      ),
       execution: {
         success: true,
         output: {
@@ -385,21 +454,13 @@ export async function executeAnthropicProviderRequest(
     const providerStartTime = Date.now()
     const providerStartTimeISO = new Date(providerStartTime).toISOString()

-    // Cap intermediate calls at non-streaming limit to avoid SDK timeout errors,
-    // but allow users to set lower values if desired
-    const nonStreamingLimit = getMaxOutputTokensForModel(request.model, false)
-    const nonStreamingMaxTokens = request.maxTokens
-      ? Math.min(Number.parseInt(String(request.maxTokens)), nonStreamingLimit)
-      : nonStreamingLimit
-    const intermediatePayload = { ...payload, max_tokens: nonStreamingMaxTokens }

     try {
       const initialCallTime = Date.now()
-      const originalToolChoice = intermediatePayload.tool_choice
+      const originalToolChoice = payload.tool_choice
       const forcedTools = preparedTools?.forcedTools || []
       let usedForcedTools: string[] = []
-      let currentResponse = await anthropic.messages.create(intermediatePayload)
+      let currentResponse = await createMessage(anthropic, payload)
       const firstResponseTime = Date.now() - initialCallTime

       let content = ''
@@ -468,10 +529,10 @@ export async function executeAnthropicProviderRequest(
       const toolExecutionPromises = toolUses.map(async (toolUse) => {
         const toolCallStartTime = Date.now()
         const toolName = toolUse.name
-        const toolArgs = toolUse.input as Record<string, any>
+        const toolArgs = toolUse.input as Record<string, unknown>

         try {
-          const tool = request.tools?.find((t: any) => t.id === toolName)
+          const tool = request.tools?.find((t) => t.id === toolName)
           if (!tool) return null

           const { toolParams, executionParams } = prepareToolExecution(tool, toolArgs, request)
@@ -512,17 +573,8 @@ export async function executeAnthropicProviderRequest(
       const executionResults = await Promise.allSettled(toolExecutionPromises)

       // Collect all tool_use and tool_result blocks for batching
-      const toolUseBlocks: Array<{
-        type: 'tool_use'
-        id: string
-        name: string
-        input: Record<string, unknown>
-      }> = []
-      const toolResultBlocks: Array<{
-        type: 'tool_result'
-        tool_use_id: string
-        content: string
-      }> = []
+      const toolUseBlocks: Anthropic.Messages.ToolUseBlockParam[] = []
+      const toolResultBlocks: Anthropic.Messages.ToolResultBlockParam[] = []

       for (const settledResult of executionResults) {
         if (settledResult.status === 'rejected' || !settledResult.value) continue
@@ -583,11 +635,25 @@ export async function executeAnthropicProviderRequest(
         })
       }

-      // Add ONE assistant message with ALL tool_use blocks
+      // Per Anthropic docs: thinking blocks must be preserved in assistant messages
+      // during tool use to maintain reasoning continuity.
+      const thinkingBlocks = currentResponse.content.filter(
+        (
+          item
+        ): item is
+          | Anthropic.Messages.ThinkingBlock
+          | Anthropic.Messages.RedactedThinkingBlock =>
+          item.type === 'thinking' || item.type === 'redacted_thinking'
+      )
+
+      // Add ONE assistant message with thinking + tool_use blocks
       if (toolUseBlocks.length > 0) {
         currentMessages.push({
           role: 'assistant',
-          content: toolUseBlocks as unknown as Anthropic.Messages.ContentBlock[],
+          content: [
+            ...thinkingBlocks,
+            ...toolUseBlocks,
+          ] as Anthropic.Messages.ContentBlockParam[],
         })
       }
@@ -595,19 +661,23 @@ export async function executeAnthropicProviderRequest(
       if (toolResultBlocks.length > 0) {
         currentMessages.push({
           role: 'user',
-          content: toolResultBlocks as unknown as Anthropic.Messages.ContentBlockParam[],
+          content: toolResultBlocks as Anthropic.Messages.ContentBlockParam[],
         })
       }

       const thisToolsTime = Date.now() - toolsStartTime
       toolsTime += thisToolsTime

-      const nextPayload = {
-        ...intermediatePayload,
+      const nextPayload: AnthropicPayload = {
+        ...payload,
         messages: currentMessages,
       }

+      // Per Anthropic docs: forced tool_choice is incompatible with thinking.
+      // Only auto and none are supported when thinking is enabled.
+      const thinkingEnabled = !!payload.thinking
       if (
+        !thinkingEnabled &&
         typeof originalToolChoice === 'object' &&
         hasUsedForcedTool &&
         forcedTools.length > 0
@@ -624,7 +694,11 @@ export async function executeAnthropicProviderRequest(
           nextPayload.tool_choice = undefined
           logger.info('All forced tools have been used, removing tool_choice parameter')
         }
-      } else if (hasUsedForcedTool && typeof originalToolChoice === 'object') {
+      } else if (
+        !thinkingEnabled &&
+        hasUsedForcedTool &&
+        typeof originalToolChoice === 'object'
+      ) {
         nextPayload.tool_choice = undefined
         logger.info(
           'Removing tool_choice parameter for subsequent requests after forced tool was used'
@@ -633,7 +707,7 @@ export async function executeAnthropicProviderRequest(
       const nextModelStartTime = Date.now()

-      currentResponse = await anthropic.messages.create(nextPayload)
+      currentResponse = await createMessage(anthropic, nextPayload)

       const nextCheckResult = checkForForcedToolUsage(
         currentResponse,
@@ -682,33 +756,38 @@ export async function executeAnthropicProviderRequest(
           tool_choice: undefined,
         }

-        const streamResponse: any = await anthropic.messages.create(streamingPayload)
+        const streamResponse = await anthropic.messages.create(
+          streamingPayload as Anthropic.Messages.MessageCreateParamsStreaming
+        )

         const streamingResult = {
-          stream: createReadableStreamFromAnthropicStream(streamResponse, (streamContent, usage) => {
-            streamingResult.execution.output.content = streamContent
-            streamingResult.execution.output.tokens = {
-              input: tokens.input + usage.input_tokens,
-              output: tokens.output + usage.output_tokens,
-              total: tokens.total + usage.input_tokens + usage.output_tokens,
-            }
+          stream: createReadableStreamFromAnthropicStream(
+            streamResponse as AsyncIterable<RawMessageStreamEvent>,
+            (streamContent, usage) => {
+              streamingResult.execution.output.content = streamContent
+              streamingResult.execution.output.tokens = {
+                input: tokens.input + usage.input_tokens,
+                output: tokens.output + usage.output_tokens,
+                total: tokens.total + usage.input_tokens + usage.output_tokens,
+              }
               const streamCost = calculateCost(request.model, usage.input_tokens, usage.output_tokens)
               streamingResult.execution.output.cost = {
                 input: accumulatedCost.input + streamCost.input,
                 output: accumulatedCost.output + streamCost.output,
                 total: accumulatedCost.total + streamCost.total,
               }
               const streamEndTime = Date.now()
               const streamEndTimeISO = new Date(streamEndTime).toISOString()
               if (streamingResult.execution.output.providerTiming) {
                 streamingResult.execution.output.providerTiming.endTime = streamEndTimeISO
                 streamingResult.execution.output.providerTiming.duration =
                   streamEndTime - providerStartTime
               }
-          }),
+            }
+          ),
           execution: {
             success: true,
             output: {
@@ -778,21 +857,13 @@ export async function executeAnthropicProviderRequest(
     const providerStartTime = Date.now()
     const providerStartTimeISO = new Date(providerStartTime).toISOString()

-    // Cap intermediate calls at non-streaming limit to avoid SDK timeout errors,
-    // but allow users to set lower values if desired
-    const nonStreamingLimit = getMaxOutputTokensForModel(request.model, false)
-    const toolLoopMaxTokens = request.maxTokens
-      ? Math.min(Number.parseInt(String(request.maxTokens)), nonStreamingLimit)
-      : nonStreamingLimit
-    const toolLoopPayload = { ...payload, max_tokens: toolLoopMaxTokens }

     try {
       const initialCallTime = Date.now()
-      const originalToolChoice = toolLoopPayload.tool_choice
+      const originalToolChoice = payload.tool_choice
       const forcedTools = preparedTools?.forcedTools || []
       let usedForcedTools: string[] = []
-      let currentResponse = await anthropic.messages.create(toolLoopPayload)
+      let currentResponse = await createMessage(anthropic, payload)
       const firstResponseTime = Date.now() - initialCallTime

       let content = ''
@@ -872,7 +943,7 @@ export async function executeAnthropicProviderRequest(
       const toolExecutionPromises = toolUses.map(async (toolUse) => {
         const toolCallStartTime = Date.now()
         const toolName = toolUse.name
-        const toolArgs = toolUse.input as Record<string, any>
+        const toolArgs = toolUse.input as Record<string, unknown>

         // Preserve the original tool_use ID from Claude's response
         const toolUseId = toolUse.id
@@ -918,17 +989,8 @@ export async function executeAnthropicProviderRequest(
       const executionResults = await Promise.allSettled(toolExecutionPromises)

       // Collect all tool_use and tool_result blocks for batching
-      const toolUseBlocks: Array<{
-        type: 'tool_use'
-        id: string
-        name: string
-        input: Record<string, unknown>
-      }> = []
-      const toolResultBlocks: Array<{
-        type: 'tool_result'
-        tool_use_id: string
-        content: string
-      }> = []
+      const toolUseBlocks: Anthropic.Messages.ToolUseBlockParam[] = []
+      const toolResultBlocks: Anthropic.Messages.ToolResultBlockParam[] = []

       for (const settledResult of executionResults) {
         if (settledResult.status === 'rejected' || !settledResult.value) continue
@@ -989,11 +1051,23 @@ export async function executeAnthropicProviderRequest(
         })
       }

-      // Add ONE assistant message with ALL tool_use blocks
+      // Per Anthropic docs: thinking blocks must be preserved in assistant messages
+      // during tool use to maintain reasoning continuity.
+      const thinkingBlocks = currentResponse.content.filter(
+        (
+          item
+        ): item is Anthropic.Messages.ThinkingBlock | Anthropic.Messages.RedactedThinkingBlock =>
+          item.type === 'thinking' || item.type === 'redacted_thinking'
+      )
+
+      // Add ONE assistant message with thinking + tool_use blocks
       if (toolUseBlocks.length > 0) {
         currentMessages.push({
           role: 'assistant',
-          content: toolUseBlocks as unknown as Anthropic.Messages.ContentBlock[],
+          content: [
+            ...thinkingBlocks,
+            ...toolUseBlocks,
+          ] as Anthropic.Messages.ContentBlockParam[],
         })
       }
@@ -1001,19 +1075,27 @@ export async function executeAnthropicProviderRequest(
       if (toolResultBlocks.length > 0) {
         currentMessages.push({
           role: 'user',
-          content: toolResultBlocks as unknown as Anthropic.Messages.ContentBlockParam[],
+          content: toolResultBlocks as Anthropic.Messages.ContentBlockParam[],
         })
       }

       const thisToolsTime = Date.now() - toolsStartTime
       toolsTime += thisToolsTime

-      const nextPayload = {
-        ...toolLoopPayload,
+      const nextPayload: AnthropicPayload = {
+        ...payload,
         messages: currentMessages,
       }

-      if (typeof originalToolChoice === 'object' && hasUsedForcedTool && forcedTools.length > 0) {
+      // Per Anthropic docs: forced tool_choice is incompatible with thinking.
+      // Only auto and none are supported when thinking is enabled.
+      const thinkingEnabled = !!payload.thinking
+      if (
+        !thinkingEnabled &&
+        typeof originalToolChoice === 'object' &&
+        hasUsedForcedTool &&
+        forcedTools.length > 0
+      ) {
         const remainingTools = forcedTools.filter((tool) => !usedForcedTools.includes(tool))

         if (remainingTools.length > 0) {
@@ -1026,7 +1108,11 @@ export async function executeAnthropicProviderRequest(
           nextPayload.tool_choice = undefined
           logger.info('All forced tools have been used, removing tool_choice parameter')
         }
-      } else if (hasUsedForcedTool && typeof originalToolChoice === 'object') {
+      } else if (
+        !thinkingEnabled &&
+        hasUsedForcedTool &&
+        typeof originalToolChoice === 'object'
+      ) {
         nextPayload.tool_choice = undefined
         logger.info(
           'Removing tool_choice parameter for subsequent requests after forced tool was used'
@@ -1035,7 +1121,7 @@ export async function executeAnthropicProviderRequest(
       const nextModelStartTime = Date.now()

-      currentResponse = await anthropic.messages.create(nextPayload)
+      currentResponse = await createMessage(anthropic, nextPayload)

       const nextCheckResult = checkForForcedToolUsage(
         currentResponse,
@@ -1098,33 +1184,38 @@ export async function executeAnthropicProviderRequest(
           tool_choice: undefined,
         }

-        const streamResponse: any = await anthropic.messages.create(streamingPayload)
+        const streamResponse = await anthropic.messages.create(
+          streamingPayload as Anthropic.Messages.MessageCreateParamsStreaming
+        )

         const streamingResult = {
-          stream: createReadableStreamFromAnthropicStream(streamResponse, (streamContent, usage) => {
-            streamingResult.execution.output.content = streamContent
-            streamingResult.execution.output.tokens = {
-              input: tokens.input + usage.input_tokens,
-              output: tokens.output + usage.output_tokens,
-              total: tokens.total + usage.input_tokens + usage.output_tokens,
-            }
+          stream: createReadableStreamFromAnthropicStream(
+            streamResponse as AsyncIterable<RawMessageStreamEvent>,
+            (streamContent, usage) => {
+              streamingResult.execution.output.content = streamContent
+              streamingResult.execution.output.tokens = {
+                input: tokens.input + usage.input_tokens,
+                output: tokens.output + usage.output_tokens,
+                total: tokens.total + usage.input_tokens + usage.output_tokens,
+              }
               const streamCost = calculateCost(request.model, usage.input_tokens, usage.output_tokens)
               streamingResult.execution.output.cost = {
                 input: cost.input + streamCost.input,
                 output: cost.output + streamCost.output,
                 total: cost.total + streamCost.total,
               }
               const streamEndTime = Date.now()
               const streamEndTimeISO = new Date(streamEndTime).toISOString()
               if (streamingResult.execution.output.providerTiming) {
                 streamingResult.execution.output.providerTiming.endTime = streamEndTimeISO
                 streamingResult.execution.output.providerTiming.duration =
                   streamEndTime - providerStartTime
               }
-          }),
+            }
+          ),
           execution: {
             success: true,
             output: {
@@ -1179,7 +1270,7 @@ export async function executeAnthropicProviderRequest(
           toolCalls.length > 0
             ? toolCalls.map((tc) => ({
                 name: tc.name,
-                arguments: tc.arguments as Record<string, any>,
+                arguments: tc.arguments as Record<string, unknown>,
                 startTime: tc.startTime,
                 endTime: tc.endTime,
                 duration: tc.duration,
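The tool_choice handling this diff introduces follows one rule: when extended thinking is enabled, Anthropic accepts only `auto` (the default) or `none` for `tool_choice`, so forced choices must be dropped. A minimal sketch of that decision, with hypothetical simplified types (the real code operates on the full Anthropic payload):

```typescript
// Hypothetical sketch of the rule the diff enforces: forced tool choices
// ({ type: 'tool' } / { type: 'any' }) are incompatible with thinking, so only
// an explicit 'none' is passed through; 'auto' is left to the API default.
type ToolChoice = 'auto' | 'none' | { type: 'tool'; name: string } | { type: 'any' }

function resolveToolChoice(
  toolChoice: ToolChoice,
  thinkingEnabled: boolean
): { type: 'none' } | ToolChoice | undefined {
  if (thinkingEnabled) {
    // Forced choices are silently dropped when thinking is on.
    return toolChoice === 'none' ? { type: 'none' } : undefined
  }
  if (toolChoice === 'none') return { type: 'none' }
  if (toolChoice !== 'auto') return toolChoice
  return undefined // omit the field entirely for 'auto'
}
```

Returning `undefined` (field omitted) rather than an explicit `auto` mirrors how the diff leaves `payload.tool_choice` unset in the default case.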

View File

@@ -35,6 +35,8 @@ export const azureAnthropicProvider: ProviderConfig = {
     // The SDK appends /v1/messages automatically
     const baseURL = `${request.azureEndpoint.replace(/\/$/, '')}/anthropic`

+    const anthropicVersion = request.azureApiVersion || '2023-06-01'
+
     return executeAnthropicProviderRequest(
       {
         ...request,
@@ -49,7 +51,7 @@ export const azureAnthropicProvider: ProviderConfig = {
         apiKey,
         defaultHeaders: {
           'api-key': apiKey,
-          'anthropic-version': '2023-06-01',
+          'anthropic-version': anthropicVersion,
           ...(useNativeStructuredOutputs
             ? { 'anthropic-beta': 'structured-outputs-2025-11-13' }
             : {}),

View File

@@ -1,6 +1,14 @@
 import { createLogger } from '@sim/logger'
 import { AzureOpenAI } from 'openai'
-import type { ChatCompletionCreateParamsStreaming } from 'openai/resources/chat/completions'
+import type {
+  ChatCompletion,
+  ChatCompletionCreateParamsBase,
+  ChatCompletionCreateParamsStreaming,
+  ChatCompletionMessageParam,
+  ChatCompletionTool,
+  ChatCompletionToolChoiceOption,
+} from 'openai/resources/chat/completions'
+import type { ReasoningEffort } from 'openai/resources/shared'
 import { env } from '@/lib/core/config/env'
 import type { StreamingExecution } from '@/executor/types'
 import { MAX_TOOL_ITERATIONS } from '@/providers'
@@ -16,6 +24,7 @@ import {
 import { getProviderDefaultModel, getProviderModels } from '@/providers/models'
 import { executeResponsesProviderRequest } from '@/providers/openai/core'
 import type {
+  FunctionCallResponse,
   ProviderConfig,
   ProviderRequest,
   ProviderResponse,
@@ -59,7 +68,7 @@ async function executeChatCompletionsRequest(
       endpoint: azureEndpoint,
     })

-    const allMessages: any[] = []
+    const allMessages: ChatCompletionMessageParam[] = []

     if (request.systemPrompt) {
       allMessages.push({
@@ -76,12 +85,12 @@ async function executeChatCompletionsRequest(
     }

     if (request.messages) {
-      allMessages.push(...request.messages)
+      allMessages.push(...(request.messages as ChatCompletionMessageParam[]))
     }

-    const tools = request.tools?.length
+    const tools: ChatCompletionTool[] | undefined = request.tools?.length
       ? request.tools.map((tool) => ({
-          type: 'function',
+          type: 'function' as const,
           function: {
             name: tool.id,
             description: tool.description,
@@ -90,7 +99,7 @@ async function executeChatCompletionsRequest(
         }))
       : undefined

-    const payload: any = {
+    const payload: ChatCompletionCreateParamsBase & { verbosity?: string } = {
       model: deploymentName,
       messages: allMessages,
     }
@@ -98,8 +107,10 @@ async function executeChatCompletionsRequest(
     if (request.temperature !== undefined) payload.temperature = request.temperature
     if (request.maxTokens != null) payload.max_completion_tokens = request.maxTokens
-    if (request.reasoningEffort !== undefined) payload.reasoning_effort = request.reasoningEffort
-    if (request.verbosity !== undefined) payload.verbosity = request.verbosity
+    if (request.reasoningEffort !== undefined && request.reasoningEffort !== 'auto')
+      payload.reasoning_effort = request.reasoningEffort as ReasoningEffort
+    if (request.verbosity !== undefined && request.verbosity !== 'auto')
+      payload.verbosity = request.verbosity

     if (request.responseFormat) {
       payload.response_format = {
@@ -121,8 +132,8 @@ async function executeChatCompletionsRequest(
       const { tools: filteredTools, toolChoice } = preparedTools

       if (filteredTools?.length && toolChoice) {
-        payload.tools = filteredTools
-        payload.tool_choice = toolChoice
+        payload.tools = filteredTools as ChatCompletionTool[]
+        payload.tool_choice = toolChoice as ChatCompletionToolChoiceOption

         logger.info('Azure OpenAI request configuration:', {
           toolCount: filteredTools.length,
@@ -231,7 +242,7 @@ async function executeChatCompletionsRequest(
       const forcedTools = preparedTools?.forcedTools || []
       let usedForcedTools: string[] = []

-      let currentResponse = await azureOpenAI.chat.completions.create(payload)
+      let currentResponse = (await azureOpenAI.chat.completions.create(payload)) as ChatCompletion
       const firstResponseTime = Date.now() - initialCallTime

       let content = currentResponse.choices[0]?.message?.content || ''
@@ -240,8 +251,8 @@ async function executeChatCompletionsRequest(
         output: currentResponse.usage?.completion_tokens || 0,
         total: currentResponse.usage?.total_tokens || 0,
       }

-      const toolCalls = []
-      const toolResults = []
+      const toolCalls: (FunctionCallResponse & { success: boolean })[] = []
+      const toolResults: Record<string, unknown>[] = []
       const currentMessages = [...allMessages]
       let iterationCount = 0
       let modelTime = firstResponseTime
@@ -260,7 +271,7 @@ async function executeChatCompletionsRequest(
       const firstCheckResult = checkForForcedToolUsage(
         currentResponse,
-        originalToolChoice,
+        originalToolChoice ?? 'auto',
         logger,
         forcedTools,
         usedForcedTools
@@ -356,10 +367,10 @@ async function executeChatCompletionsRequest(
             duration: duration,
           })

-          let resultContent: any
+          let resultContent: Record<string, unknown>
           if (result.success) {
-            toolResults.push(result.output)
-            resultContent = result.output
+            toolResults.push(result.output as Record<string, unknown>)
+            resultContent = result.output as Record<string, unknown>
           } else {
             resultContent = {
               error: true,
@@ -409,11 +420,11 @@ async function executeChatCompletionsRequest(
       }

       const nextModelStartTime = Date.now()

-      currentResponse = await azureOpenAI.chat.completions.create(nextPayload)
+      currentResponse = (await azureOpenAI.chat.completions.create(nextPayload)) as ChatCompletion

       const nextCheckResult = checkForForcedToolUsage(
         currentResponse,
-        nextPayload.tool_choice,
+        nextPayload.tool_choice ?? 'auto',
         logger,
         forcedTools,
         usedForcedTools

View File

@@ -1,4 +1,5 @@
 import type { Logger } from '@sim/logger'
+import type OpenAI from 'openai'
 import type { ChatCompletionChunk } from 'openai/resources/chat/completions'
 import type { CompletionUsage } from 'openai/resources/completions'
 import type { Stream } from 'openai/streaming'
@@ -20,8 +21,8 @@ export function createReadableStreamFromAzureOpenAIStream(
  * Uses the shared OpenAI-compatible forced tool usage helper.
  */
 export function checkForForcedToolUsage(
-  response: any,
-  toolChoice: string | { type: string; function?: { name: string }; name?: string; any?: any },
+  response: OpenAI.Chat.Completions.ChatCompletion,
+  toolChoice: string | { type: string; function?: { name: string }; name?: string },
   _logger: Logger,
   forcedTools: string[],
   usedForcedTools: string[]

View File

@@ -197,6 +197,9 @@ export const bedrockProvider: ProviderConfig = {
       } else if (tc.type === 'function' && tc.function?.name) {
         toolChoice = { tool: { name: tc.function.name } }
         logger.info(`Using Bedrock tool_choice format: force tool "${tc.function.name}"`)
+      } else if (tc.type === 'any') {
+        toolChoice = { any: {} }
+        logger.info('Using Bedrock tool_choice format: any tool')
       } else {
         toolChoice = { auto: {} }
       }
@@ -413,6 +416,7 @@ export const bedrockProvider: ProviderConfig = {
         input: initialCost.input,
         output: initialCost.output,
         total: initialCost.total,
+        pricing: initialCost.pricing,
       }

       const toolCalls: any[] = []
@@ -860,6 +864,12 @@ export const bedrockProvider: ProviderConfig = {
         content,
         model: request.model,
         tokens,
+        cost: {
+          input: cost.input,
+          output: cost.output,
+          total: cost.total,
+          pricing: cost.pricing,
+        },
         toolCalls:
           toolCalls.length > 0
             ? toolCalls.map((tc) => ({

View File

@@ -24,7 +24,6 @@ import {
   extractTextContent,
   mapToThinkingLevel,
 } from '@/providers/google/utils'
-import { getThinkingCapability } from '@/providers/models'
 import type { FunctionCallResponse, ProviderRequest, ProviderResponse } from '@/providers/types'
 import {
   calculateCost,
@@ -432,13 +431,11 @@ export async function executeGeminiRequest(
       logger.warn('Gemini does not support responseFormat with tools. Structured output ignored.')
     }

-    // Configure thinking for models that support it
-    const thinkingCapability = getThinkingCapability(model)
-    if (thinkingCapability) {
-      const level = request.thinkingLevel ?? thinkingCapability.default ?? 'high'
+    // Configure thinking only when the user explicitly selects a thinking level
+    if (request.thinkingLevel && request.thinkingLevel !== 'none') {
       const thinkingConfig: ThinkingConfig = {
         includeThoughts: false,
-        thinkingLevel: mapToThinkingLevel(level),
+        thinkingLevel: mapToThinkingLevel(request.thinkingLevel),
       }
       geminiConfig.thinkingConfig = thinkingConfig
     }

View File

@@ -8,7 +8,10 @@ import {
   calculateCost,
   generateStructuredOutputInstructions,
   shouldBillModelUsage,
+  supportsReasoningEffort,
   supportsTemperature,
+  supportsThinking,
+  supportsVerbosity,
 } from '@/providers/utils'

 const logger = createLogger('Providers')
@@ -21,11 +24,24 @@ export const MAX_TOOL_ITERATIONS = 20
 function sanitizeRequest(request: ProviderRequest): ProviderRequest {
   const sanitizedRequest = { ...request }
+  const model = sanitizedRequest.model

-  if (sanitizedRequest.model && !supportsTemperature(sanitizedRequest.model)) {
+  if (model && !supportsTemperature(model)) {
     sanitizedRequest.temperature = undefined
   }

+  if (model && !supportsReasoningEffort(model)) {
+    sanitizedRequest.reasoningEffort = undefined
+  }
+
+  if (model && !supportsVerbosity(model)) {
+    sanitizedRequest.verbosity = undefined
+  }
+
+  if (model && !supportsThinking(model)) {
+    sanitizedRequest.thinkingLevel = undefined
+  }
+
   return sanitizedRequest
 }
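The hunk above extends request sanitization from temperature-only to all tuning parameters. A self-contained sketch of the pattern, with an illustrative capability table standing in for the real `supportsTemperature`/`supportsReasoningEffort`/etc. helpers:

```typescript
interface ModelRequest {
  model?: string
  temperature?: number
  reasoningEffort?: string
  verbosity?: string
  thinkingLevel?: string
}

// Illustrative capability table; the real code derives this from model configs.
const CAPABILITIES: Record<string, Set<string>> = {
  'gpt-4o': new Set(['temperature']),
  o3: new Set(['reasoningEffort']),
}

const TUNING_FIELDS = ['temperature', 'reasoningEffort', 'verbosity', 'thinkingLevel'] as const

function sanitizeRequest(req: ModelRequest): ModelRequest {
  const sanitized = { ...req }
  const caps = (req.model && CAPABILITIES[req.model]) || new Set<string>()
  // Drop any tuning parameter the target model does not support, so the
  // provider API never sees a parameter it would reject.
  for (const field of TUNING_FIELDS) {
    if (!caps.has(field)) delete sanitized[field]
  }
  return sanitized
}
```

Sanitizing centrally (rather than in each provider) means a model switch in a saved workflow cannot carry an incompatible parameter into the new provider's request.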


@@ -141,7 +141,6 @@ export const mistralProvider: ProviderConfig = {
     const streamingParams: ChatCompletionCreateParamsStreaming = {
       ...payload,
       stream: true,
-      stream_options: { include_usage: true },
     }

     const streamResponse = await mistral.chat.completions.create(streamingParams)
@@ -453,7 +452,6 @@ export const mistralProvider: ProviderConfig = {
       messages: currentMessages,
       tool_choice: 'auto',
       stream: true,
-      stream_options: { include_usage: true },
     }

     const streamResponse = await mistral.chat.completions.create(streamingParams)


@@ -34,17 +34,8 @@ export interface ModelCapabilities {
   toolUsageControl?: boolean
   computerUse?: boolean
   nativeStructuredOutputs?: boolean
-  /**
-   * Max output tokens configuration for Anthropic SDK's streaming timeout workaround.
-   * The Anthropic SDK throws an error for non-streaming requests that may take >10 minutes.
-   * This only applies to direct Anthropic API calls, not Bedrock (which uses AWS SDK).
-   */
-  maxOutputTokens?: {
-    /** Maximum tokens for streaming requests */
-    max: number
-    /** Safe default for non-streaming requests (to avoid Anthropic SDK timeout errors) */
-    default: number
-  }
+  /** Maximum supported output tokens for this model */
+  maxOutputTokens?: number
   reasoningEffort?: {
     values: string[]
   }
@@ -109,7 +100,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
     name: 'OpenAI',
     description: "OpenAI's models",
     defaultModel: 'gpt-4o',
-    modelPatterns: [/^gpt/, /^o1/, /^text-embedding/],
+    modelPatterns: [/^gpt/, /^o\d/, /^text-embedding/],
     icon: OpenAIIcon,
     capabilities: {
       toolUsageControl: true,
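The pattern change in this hunk widens OpenAI model detection from o1-only to any o-series model (o1, o3, o4-mini, ...). A quick check of the new patterns in isolation:

```typescript
// The updated pattern list: /^o\d/ matches any o-series id, not just o1.
const modelPatterns = [/^gpt/, /^o\d/, /^text-embedding/]

function matchesOpenAI(modelId: string): boolean {
  return modelPatterns.some((p) => p.test(modelId))
}
```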
@@ -138,7 +129,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
}, },
capabilities: { capabilities: {
reasoningEffort: { reasoningEffort: {
values: ['none', 'minimal', 'low', 'medium', 'high', 'xhigh'], values: ['none', 'low', 'medium', 'high', 'xhigh'],
}, },
verbosity: { verbosity: {
values: ['low', 'medium', 'high'], values: ['low', 'medium', 'high'],
@@ -164,60 +155,6 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
       },
       contextWindow: 400000,
     },
-    // {
-    //   id: 'gpt-5.1-mini',
-    //   pricing: {
-    //     input: 0.25,
-    //     cachedInput: 0.025,
-    //     output: 2.0,
-    //     updatedAt: '2025-11-14',
-    //   },
-    //   capabilities: {
-    //     reasoningEffort: {
-    //       values: ['none', 'low', 'medium', 'high'],
-    //     },
-    //     verbosity: {
-    //       values: ['low', 'medium', 'high'],
-    //     },
-    //   },
-    //   contextWindow: 400000,
-    // },
-    // {
-    //   id: 'gpt-5.1-nano',
-    //   pricing: {
-    //     input: 0.05,
-    //     cachedInput: 0.005,
-    //     output: 0.4,
-    //     updatedAt: '2025-11-14',
-    //   },
-    //   capabilities: {
-    //     reasoningEffort: {
-    //       values: ['none', 'low', 'medium', 'high'],
-    //     },
-    //     verbosity: {
-    //       values: ['low', 'medium', 'high'],
-    //     },
-    //   },
-    //   contextWindow: 400000,
-    // },
-    // {
-    //   id: 'gpt-5.1-codex',
-    //   pricing: {
-    //     input: 1.25,
-    //     cachedInput: 0.125,
-    //     output: 10.0,
-    //     updatedAt: '2025-11-14',
-    //   },
-    //   capabilities: {
-    //     reasoningEffort: {
-    //       values: ['none', 'medium', 'high'],
-    //     },
-    //     verbosity: {
-    //       values: ['low', 'medium', 'high'],
-    //     },
-    //   },
-    //   contextWindow: 400000,
-    // },
     {
       id: 'gpt-5',
       pricing: {
@@ -280,8 +217,10 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
         output: 10.0,
         updatedAt: '2025-08-07',
       },
-      capabilities: {},
-      contextWindow: 400000,
+      capabilities: {
+        temperature: { min: 0, max: 2 },
+      },
+      contextWindow: 128000,
     },
     {
       id: 'o1',
@@ -311,7 +250,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
           values: ['low', 'medium', 'high'],
         },
       },
-      contextWindow: 128000,
+      contextWindow: 200000,
     },
     {
       id: 'o4-mini',
@@ -326,7 +265,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
           values: ['low', 'medium', 'high'],
         },
       },
-      contextWindow: 128000,
+      contextWindow: 200000,
     },
     {
       id: 'gpt-4.1',
@@ -391,7 +330,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
capabilities: { capabilities: {
temperature: { min: 0, max: 1 }, temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true, nativeStructuredOutputs: true,
maxOutputTokens: { max: 128000, default: 8192 }, maxOutputTokens: 128000,
thinking: { thinking: {
levels: ['low', 'medium', 'high', 'max'], levels: ['low', 'medium', 'high', 'max'],
default: 'high', default: 'high',
@@ -410,10 +349,10 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
capabilities: { capabilities: {
temperature: { min: 0, max: 1 }, temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true, nativeStructuredOutputs: true,
maxOutputTokens: { max: 64000, default: 8192 }, maxOutputTokens: 64000,
thinking: { thinking: {
levels: ['low', 'medium', 'high'], levels: ['low', 'medium', 'high'],
default: 'medium', default: 'high',
}, },
}, },
contextWindow: 200000, contextWindow: 200000,
@@ -429,10 +368,10 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
capabilities: { capabilities: {
temperature: { min: 0, max: 1 }, temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true, nativeStructuredOutputs: true,
maxOutputTokens: { max: 64000, default: 8192 }, maxOutputTokens: 64000,
thinking: { thinking: {
levels: ['low', 'medium', 'high'], levels: ['low', 'medium', 'high'],
default: 'medium', default: 'high',
}, },
}, },
contextWindow: 200000, contextWindow: 200000,
@@ -447,10 +386,10 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
}, },
capabilities: { capabilities: {
temperature: { min: 0, max: 1 }, temperature: { min: 0, max: 1 },
maxOutputTokens: { max: 64000, default: 8192 }, maxOutputTokens: 64000,
thinking: { thinking: {
levels: ['low', 'medium', 'high'], levels: ['low', 'medium', 'high'],
default: 'medium', default: 'high',
}, },
}, },
contextWindow: 200000, contextWindow: 200000,
@@ -466,10 +405,10 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
capabilities: { capabilities: {
temperature: { min: 0, max: 1 }, temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true, nativeStructuredOutputs: true,
maxOutputTokens: { max: 64000, default: 8192 }, maxOutputTokens: 64000,
thinking: { thinking: {
levels: ['low', 'medium', 'high'], levels: ['low', 'medium', 'high'],
default: 'medium', default: 'high',
}, },
}, },
contextWindow: 200000, contextWindow: 200000,
@@ -484,10 +423,10 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
}, },
capabilities: { capabilities: {
temperature: { min: 0, max: 1 }, temperature: { min: 0, max: 1 },
maxOutputTokens: { max: 64000, default: 8192 }, maxOutputTokens: 64000,
thinking: { thinking: {
levels: ['low', 'medium', 'high'], levels: ['low', 'medium', 'high'],
default: 'medium', default: 'high',
}, },
}, },
contextWindow: 200000, contextWindow: 200000,
@@ -503,10 +442,10 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
capabilities: { capabilities: {
temperature: { min: 0, max: 1 }, temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true, nativeStructuredOutputs: true,
maxOutputTokens: { max: 64000, default: 8192 }, maxOutputTokens: 64000,
thinking: { thinking: {
levels: ['low', 'medium', 'high'], levels: ['low', 'medium', 'high'],
default: 'medium', default: 'high',
}, },
}, },
contextWindow: 200000, contextWindow: 200000,
@@ -515,13 +454,13 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
       id: 'claude-3-haiku-20240307',
       pricing: {
         input: 0.25,
-        cachedInput: 0.025,
+        cachedInput: 0.03,
         output: 1.25,
         updatedAt: '2026-02-05',
       },
       capabilities: {
         temperature: { min: 0, max: 1 },
-        maxOutputTokens: { max: 4096, default: 4096 },
+        maxOutputTokens: 4096,
       },
       contextWindow: 200000,
     },
@@ -536,10 +475,10 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
       capabilities: {
         temperature: { min: 0, max: 1 },
         computerUse: true,
-        maxOutputTokens: { max: 8192, default: 8192 },
+        maxOutputTokens: 64000,
         thinking: {
           levels: ['low', 'medium', 'high'],
-          default: 'medium',
+          default: 'high',
         },
       },
       contextWindow: 200000,
@@ -580,7 +519,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
       },
       capabilities: {
         reasoningEffort: {
-          values: ['none', 'minimal', 'low', 'medium', 'high', 'xhigh'],
+          values: ['none', 'low', 'medium', 'high', 'xhigh'],
         },
         verbosity: {
           values: ['low', 'medium', 'high'],
@@ -606,42 +545,6 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
       },
       contextWindow: 400000,
     },
-    {
-      id: 'azure/gpt-5.1-mini',
-      pricing: {
-        input: 0.25,
-        cachedInput: 0.025,
-        output: 2.0,
-        updatedAt: '2025-11-14',
-      },
-      capabilities: {
-        reasoningEffort: {
-          values: ['none', 'low', 'medium', 'high'],
-        },
-        verbosity: {
-          values: ['low', 'medium', 'high'],
-        },
-      },
-      contextWindow: 400000,
-    },
-    {
-      id: 'azure/gpt-5.1-nano',
-      pricing: {
-        input: 0.05,
-        cachedInput: 0.005,
-        output: 0.4,
-        updatedAt: '2025-11-14',
-      },
-      capabilities: {
-        reasoningEffort: {
-          values: ['none', 'low', 'medium', 'high'],
-        },
-        verbosity: {
-          values: ['low', 'medium', 'high'],
-        },
-      },
-      contextWindow: 400000,
-    },
     {
       id: 'azure/gpt-5.1-codex',
       pricing: {
@@ -652,7 +555,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
       },
       capabilities: {
         reasoningEffort: {
-          values: ['none', 'medium', 'high'],
+          values: ['none', 'low', 'medium', 'high'],
         },
         verbosity: {
           values: ['low', 'medium', 'high'],
@@ -722,23 +625,25 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
         output: 10.0,
         updatedAt: '2025-08-07',
       },
-      capabilities: {},
-      contextWindow: 400000,
+      capabilities: {
+        temperature: { min: 0, max: 2 },
+      },
+      contextWindow: 128000,
     },
     {
       id: 'azure/o3',
       pricing: {
-        input: 10,
-        cachedInput: 2.5,
-        output: 40,
-        updatedAt: '2025-06-15',
+        input: 2,
+        cachedInput: 0.5,
+        output: 8,
+        updatedAt: '2026-02-06',
       },
       capabilities: {
         reasoningEffort: {
           values: ['low', 'medium', 'high'],
         },
       },
-      contextWindow: 128000,
+      contextWindow: 200000,
     },
     {
       id: 'azure/o4-mini',
@@ -753,7 +658,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
           values: ['low', 'medium', 'high'],
         },
       },
-      contextWindow: 128000,
+      contextWindow: 200000,
     },
     {
       id: 'azure/gpt-4.1',
@@ -763,7 +668,35 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
         output: 8.0,
         updatedAt: '2025-06-15',
       },
-      capabilities: {},
+      capabilities: {
+        temperature: { min: 0, max: 2 },
+      },
+      contextWindow: 1000000,
+    },
+    {
+      id: 'azure/gpt-4.1-mini',
+      pricing: {
+        input: 0.4,
+        cachedInput: 0.1,
+        output: 1.6,
+        updatedAt: '2025-06-15',
+      },
+      capabilities: {
+        temperature: { min: 0, max: 2 },
+      },
+      contextWindow: 1000000,
+    },
+    {
+      id: 'azure/gpt-4.1-nano',
+      pricing: {
+        input: 0.1,
+        cachedInput: 0.025,
+        output: 0.4,
+        updatedAt: '2025-06-15',
+      },
+      capabilities: {
+        temperature: { min: 0, max: 2 },
+      },
       contextWindow: 1000000,
     },
     {
@@ -775,7 +708,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
         updatedAt: '2025-06-15',
       },
       capabilities: {},
-      contextWindow: 1000000,
+      contextWindow: 200000,
     },
   ],
 },
@@ -801,7 +734,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
capabilities: { capabilities: {
temperature: { min: 0, max: 1 }, temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true, nativeStructuredOutputs: true,
maxOutputTokens: { max: 128000, default: 8192 }, maxOutputTokens: 128000,
thinking: { thinking: {
levels: ['low', 'medium', 'high', 'max'], levels: ['low', 'medium', 'high', 'max'],
default: 'high', default: 'high',
@@ -820,10 +753,10 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
capabilities: { capabilities: {
temperature: { min: 0, max: 1 }, temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true, nativeStructuredOutputs: true,
maxOutputTokens: { max: 64000, default: 8192 }, maxOutputTokens: 64000,
thinking: { thinking: {
levels: ['low', 'medium', 'high'], levels: ['low', 'medium', 'high'],
default: 'medium', default: 'high',
}, },
}, },
contextWindow: 200000, contextWindow: 200000,
@@ -839,10 +772,10 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
capabilities: { capabilities: {
temperature: { min: 0, max: 1 }, temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true, nativeStructuredOutputs: true,
maxOutputTokens: { max: 64000, default: 8192 }, maxOutputTokens: 64000,
thinking: { thinking: {
levels: ['low', 'medium', 'high'], levels: ['low', 'medium', 'high'],
default: 'medium', default: 'high',
}, },
}, },
contextWindow: 200000, contextWindow: 200000,
@@ -858,10 +791,10 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
capabilities: { capabilities: {
temperature: { min: 0, max: 1 }, temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true, nativeStructuredOutputs: true,
maxOutputTokens: { max: 64000, default: 8192 }, maxOutputTokens: 64000,
thinking: { thinking: {
levels: ['low', 'medium', 'high'], levels: ['low', 'medium', 'high'],
default: 'medium', default: 'high',
}, },
}, },
contextWindow: 200000, contextWindow: 200000,
@@ -877,10 +810,10 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
capabilities: { capabilities: {
temperature: { min: 0, max: 1 }, temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true, nativeStructuredOutputs: true,
maxOutputTokens: { max: 64000, default: 8192 }, maxOutputTokens: 64000,
thinking: { thinking: {
levels: ['low', 'medium', 'high'], levels: ['low', 'medium', 'high'],
default: 'medium', default: 'high',
}, },
}, },
contextWindow: 200000, contextWindow: 200000,
@@ -2548,14 +2481,11 @@ export function getThinkingLevelsForModel(modelId: string): string[] | null {
 }

 /**
- * Get the max output tokens for a specific model
- * Returns the model's max capacity for streaming requests,
- * or the model's safe default for non-streaming requests to avoid timeout issues.
+ * Get the max output tokens for a specific model.
  *
  * @param modelId - The model ID
- * @param streaming - Whether the request is streaming (default: false)
  */
-export function getMaxOutputTokensForModel(modelId: string, streaming = false): number {
+export function getMaxOutputTokensForModel(modelId: string): number {
   const normalizedModelId = modelId.toLowerCase()
   const STANDARD_MAX_OUTPUT_TOKENS = 4096
@@ -2563,11 +2493,7 @@ export function getMaxOutputTokensForModel(modelId: string, streaming = false):
     for (const model of provider.models) {
       const baseModelId = model.id.toLowerCase()
       if (normalizedModelId === baseModelId || normalizedModelId.startsWith(`${baseModelId}-`)) {
-        const outputTokens = model.capabilities.maxOutputTokens
-        if (outputTokens) {
-          return streaming ? outputTokens.max : outputTokens.default
-        }
-        return STANDARD_MAX_OUTPUT_TOKENS
+        return model.capabilities.maxOutputTokens || STANDARD_MAX_OUTPUT_TOKENS
       }
     }
   }
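With `maxOutputTokens` flattened to a plain number, the lookup above reduces to a prefix match over the registry plus a fixed fallback. A self-contained sketch (the model entries here are illustrative, not the real registry):

```typescript
const MODELS: Array<{ id: string; maxOutputTokens?: number }> = [
  { id: 'claude-opus-4', maxOutputTokens: 64000 },
  { id: 'claude-3-haiku-20240307', maxOutputTokens: 4096 },
  { id: 'gpt-4o' }, // no explicit cap -> fallback
]

const STANDARD_MAX_OUTPUT_TOKENS = 4096

function getMaxOutputTokens(modelId: string): number {
  const normalized = modelId.toLowerCase()
  for (const model of MODELS) {
    const base = model.id.toLowerCase()
    // Exact match, or a dated/suffixed variant like `<base>-20250514`.
    if (normalized === base || normalized.startsWith(`${base}-`)) {
      return model.maxOutputTokens || STANDARD_MAX_OUTPUT_TOKENS
    }
  }
  return STANDARD_MAX_OUTPUT_TOKENS
}
```

Requiring the `-` after the base id in the prefix check avoids false positives where one model id is a plain string prefix of another.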


@@ -1,4 +1,5 @@
 import type { Logger } from '@sim/logger'
+import type OpenAI from 'openai'
 import type { StreamingExecution } from '@/executor/types'
 import { MAX_TOOL_ITERATIONS } from '@/providers'
 import type { Message, ProviderRequest, ProviderResponse, TimeSegment } from '@/providers/types'
@@ -30,7 +31,7 @@ type ToolChoice = PreparedTools['toolChoice']
  * - Sets additionalProperties: false on all object types.
  * - Ensures required includes ALL property keys.
  */
-function enforceStrictSchema(schema: any): any {
+function enforceStrictSchema(schema: Record<string, unknown>): Record<string, unknown> {
   if (!schema || typeof schema !== 'object') return schema

   const result = { ...schema }
@@ -41,23 +42,26 @@ function enforceStrictSchema(schema: any): any {
     // Recursively process properties and ensure required includes all keys
     if (result.properties && typeof result.properties === 'object') {
-      const propKeys = Object.keys(result.properties)
+      const propKeys = Object.keys(result.properties as Record<string, unknown>)
       result.required = propKeys // Strict mode requires ALL properties
       result.properties = Object.fromEntries(
-        Object.entries(result.properties).map(([key, value]) => [key, enforceStrictSchema(value)])
+        Object.entries(result.properties as Record<string, unknown>).map(([key, value]) => [
+          key,
+          enforceStrictSchema(value as Record<string, unknown>),
+        ])
       )
     }
   }

   // Handle array items
   if (result.type === 'array' && result.items) {
-    result.items = enforceStrictSchema(result.items)
+    result.items = enforceStrictSchema(result.items as Record<string, unknown>)
   }

   // Handle anyOf, oneOf, allOf
   for (const keyword of ['anyOf', 'oneOf', 'allOf']) {
     if (Array.isArray(result[keyword])) {
-      result[keyword] = result[keyword].map(enforceStrictSchema)
+      result[keyword] = (result[keyword] as Record<string, unknown>[]).map(enforceStrictSchema)
     }
   }
@@ -65,7 +69,10 @@ function enforceStrictSchema(schema: any): any {
   for (const defKey of ['$defs', 'definitions']) {
     if (result[defKey] && typeof result[defKey] === 'object') {
       result[defKey] = Object.fromEntries(
-        Object.entries(result[defKey]).map(([key, value]) => [key, enforceStrictSchema(value)])
+        Object.entries(result[defKey] as Record<string, unknown>).map(([key, value]) => [
+          key,
+          enforceStrictSchema(value as Record<string, unknown>),
+        ])
       )
     }
   }
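The hunks above only tighten the typing of `enforceStrictSchema`; the transform itself is unchanged. A runnable, trimmed-down sketch of what it does (object and array handling only, omitting the `anyOf`/`$defs` branches):

```typescript
type Schema = Record<string, unknown>

function enforceStrict(schema: Schema): Schema {
  if (!schema || typeof schema !== 'object') return schema
  const result: Schema = { ...schema }
  if (result.type === 'object') {
    // OpenAI strict mode: no extra keys, and every declared property required.
    result.additionalProperties = false
    if (result.properties && typeof result.properties === 'object') {
      const props = result.properties as Schema
      result.required = Object.keys(props)
      result.properties = Object.fromEntries(
        Object.entries(props).map(([k, v]) => [k, enforceStrict(v as Schema)])
      )
    }
  }
  if (result.type === 'array' && result.items) {
    result.items = enforceStrict(result.items as Schema)
  }
  return result
}
```

Because strict mode rejects schemas with optional properties, the transform forces `required` to list every key at every nesting level, which is why the recursion must reach nested objects and array items.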
@@ -123,29 +130,29 @@ export async function executeResponsesProviderRequest(
   const initialInput = buildResponsesInputFromMessages(allMessages)

-  const basePayload: Record<string, any> = {
+  const basePayload: Record<string, unknown> = {
     model: config.modelName,
   }

   if (request.temperature !== undefined) basePayload.temperature = request.temperature
   if (request.maxTokens != null) basePayload.max_output_tokens = request.maxTokens
-  if (request.reasoningEffort !== undefined) {
+  if (request.reasoningEffort !== undefined && request.reasoningEffort !== 'auto') {
     basePayload.reasoning = {
       effort: request.reasoningEffort,
       summary: 'auto',
     }
   }
-  if (request.verbosity !== undefined) {
+  if (request.verbosity !== undefined && request.verbosity !== 'auto') {
     basePayload.text = {
-      ...(basePayload.text ?? {}),
+      ...((basePayload.text as Record<string, unknown>) ?? {}),
       verbosity: request.verbosity,
     }
   }

   // Store response format config - for Azure with tools, we defer applying it until after tool calls complete
-  let deferredTextFormat: { type: string; name: string; schema: any; strict: boolean } | undefined
+  let deferredTextFormat: OpenAI.Responses.ResponseFormatTextJSONSchemaConfig | undefined
   const hasTools = !!request.tools?.length
   const isAzure = config.providerId === 'azure-openai'
@@ -171,7 +178,7 @@ export async function executeResponsesProviderRequest(
       )
     } else {
       basePayload.text = {
-        ...(basePayload.text ?? {}),
+        ...((basePayload.text as Record<string, unknown>) ?? {}),
         format: textFormat,
       }
       logger.info(`Added JSON schema response format to ${config.providerLabel} request`)
@@ -231,7 +238,10 @@ export async function executeResponsesProviderRequest(
     }
   }

-  const createRequestBody = (input: ResponsesInputItem[], overrides: Record<string, any> = {}) => ({
+  const createRequestBody = (
+    input: ResponsesInputItem[],
+    overrides: Record<string, unknown> = {}
+  ) => ({
     ...basePayload,
     input,
     ...overrides,
@@ -247,7 +257,9 @@ export async function executeResponsesProviderRequest(
     }
   }

-  const postResponses = async (body: Record<string, any>) => {
+  const postResponses = async (
+    body: Record<string, unknown>
+  ): Promise<OpenAI.Responses.Response> => {
     const response = await fetch(config.endpoint, {
       method: 'POST',
       headers: config.headers,
@@ -496,10 +508,10 @@ export async function executeResponsesProviderRequest(
         duration: duration,
       })

-      let resultContent: any
+      let resultContent: Record<string, unknown>
       if (result.success) {
         toolResults.push(result.output)
-        resultContent = result.output
+        resultContent = result.output as Record<string, unknown>
       } else {
         resultContent = {
           error: true,
@@ -615,11 +627,11 @@ export async function executeResponsesProviderRequest(
   }

   // Make final call with the response format - build payload without tools
-  const finalPayload: Record<string, any> = {
+  const finalPayload: Record<string, unknown> = {
     model: config.modelName,
     input: formattedInput,
     text: {
-      ...(basePayload.text ?? {}),
+      ...((basePayload.text as Record<string, unknown>) ?? {}),
       format: deferredTextFormat,
     },
   }
@@ -627,15 +639,15 @@ export async function executeResponsesProviderRequest(
   // Copy over non-tool related settings
   if (request.temperature !== undefined) finalPayload.temperature = request.temperature
   if (request.maxTokens != null) finalPayload.max_output_tokens = request.maxTokens
-  if (request.reasoningEffort !== undefined) {
+  if (request.reasoningEffort !== undefined && request.reasoningEffort !== 'auto') {
     finalPayload.reasoning = {
       effort: request.reasoningEffort,
       summary: 'auto',
     }
   }
-  if (request.verbosity !== undefined) {
+  if (request.verbosity !== undefined && request.verbosity !== 'auto') {
     finalPayload.text = {
-      ...finalPayload.text,
+      ...((finalPayload.text as Record<string, unknown>) ?? {}),
       verbosity: request.verbosity,
     }
   }
@@ -679,10 +691,10 @@ export async function executeResponsesProviderRequest(
   const accumulatedCost = calculateCost(request.model, tokens.input, tokens.output)

   // For Azure with deferred format in streaming mode, include the format in the streaming call
-  const streamOverrides: Record<string, any> = { stream: true, tool_choice: 'auto' }
+  const streamOverrides: Record<string, unknown> = { stream: true, tool_choice: 'auto' }
   if (deferredTextFormat) {
     streamOverrides.text = {
-      ...(basePayload.text ?? {}),
+      ...((basePayload.text as Record<string, unknown>) ?? {}),
       format: deferredTextFormat,
     }
   }


@@ -1,4 +1,5 @@
import { createLogger } from '@sim/logger' import { createLogger } from '@sim/logger'
import type OpenAI from 'openai'
import type { Message } from '@/providers/types' import type { Message } from '@/providers/types'
const logger = createLogger('ResponsesUtils') const logger = createLogger('ResponsesUtils')
@@ -38,7 +39,7 @@ export interface ResponsesToolDefinition {
type: 'function' type: 'function'
name: string name: string
description?: string description?: string
parameters?: Record<string, any> parameters?: Record<string, unknown>
} }
/** /**
@@ -85,7 +86,15 @@ export function buildResponsesInputFromMessages(messages: Message[]): ResponsesI
/** /**
* Converts tool definitions to the Responses API format. * Converts tool definitions to the Responses API format.
*/ */
export function convertToolsToResponses(tools: any[]): ResponsesToolDefinition[] { export function convertToolsToResponses(
tools: Array<{
type?: string
name?: string
description?: string
parameters?: Record<string, unknown>
function?: { name: string; description?: string; parameters?: Record<string, unknown> }
}>
): ResponsesToolDefinition[] {
return tools return tools
.map((tool) => { .map((tool) => {
const name = tool.function?.name ?? tool.name const name = tool.function?.name ?? tool.name
@@ -131,7 +140,7 @@ export function toResponsesToolChoice(
return 'auto' return 'auto'
} }
function extractTextFromMessageItem(item: any): string { function extractTextFromMessageItem(item: Record<string, unknown>): string {
if (!item) { if (!item) {
return '' return ''
} }
@@ -170,7 +179,7 @@ function extractTextFromMessageItem(item: any): string {
/** /**
* Extracts plain text from Responses API output items. * Extracts plain text from Responses API output items.
*/ */
export function extractResponseText(output: unknown): string { export function extractResponseText(output: OpenAI.Responses.ResponseOutputItem[]): string {
if (!Array.isArray(output)) { if (!Array.isArray(output)) {
return '' return ''
} }
@@ -181,7 +190,7 @@ export function extractResponseText(output: unknown): string {
continue continue
} }
const text = extractTextFromMessageItem(item) const text = extractTextFromMessageItem(item as unknown as Record<string, unknown>)
if (text) { if (text) {
textParts.push(text) textParts.push(text)
} }
@@ -193,7 +202,9 @@ export function extractResponseText(output: unknown): string {
/** /**
* Converts Responses API output items into input items for subsequent calls. * Converts Responses API output items into input items for subsequent calls.
*/ */
export function convertResponseOutputToInputItems(output: unknown): ResponsesInputItem[] { export function convertResponseOutputToInputItems(
output: OpenAI.Responses.ResponseOutputItem[]
): ResponsesInputItem[] {
if (!Array.isArray(output)) { if (!Array.isArray(output)) {
return [] return []
} }
@@ -205,7 +216,7 @@ export function convertResponseOutputToInputItems(output: unknown): ResponsesInp
} }
if (item.type === 'message') { if (item.type === 'message') {
const text = extractTextFromMessageItem(item) const text = extractTextFromMessageItem(item as unknown as Record<string, unknown>)
if (text) { if (text) {
items.push({ items.push({
role: 'assistant', role: 'assistant',
@@ -213,18 +224,20 @@ export function convertResponseOutputToInputItems(output: unknown): ResponsesInp
}) })
} }
const toolCalls = Array.isArray(item.tool_calls) ? item.tool_calls : [] // Handle Chat Completions-style tool_calls nested under message items
const msgRecord = item as unknown as Record<string, unknown>
const toolCalls = Array.isArray(msgRecord.tool_calls) ? msgRecord.tool_calls : []
for (const toolCall of toolCalls) { for (const toolCall of toolCalls) {
const callId = toolCall?.id const tc = toolCall as Record<string, unknown>
const name = toolCall?.function?.name ?? toolCall?.name const fn = tc.function as Record<string, unknown> | undefined
const callId = tc.id as string | undefined
const name = (fn?.name ?? tc.name) as string | undefined
if (!callId || !name) { if (!callId || !name) {
continue continue
} }
const argumentsValue = const argumentsValue =
typeof toolCall?.function?.arguments === 'string' typeof fn?.arguments === 'string' ? fn.arguments : JSON.stringify(fn?.arguments ?? {})
? toolCall.function.arguments
: JSON.stringify(toolCall?.function?.arguments ?? {})
items.push({ items.push({
type: 'function_call', type: 'function_call',
@@ -238,14 +251,18 @@ export function convertResponseOutputToInputItems(output: unknown): ResponsesInp
} }
if (item.type === 'function_call') { if (item.type === 'function_call') {
const callId = item.call_id ?? item.id const fc = item as OpenAI.Responses.ResponseFunctionToolCall
const name = item.name ?? item.function?.name const fcRecord = item as unknown as Record<string, unknown>
const callId = fc.call_id ?? (fcRecord.id as string | undefined)
const name =
fc.name ??
((fcRecord.function as Record<string, unknown> | undefined)?.name as string | undefined)
if (!callId || !name) { if (!callId || !name) {
continue continue
} }
const argumentsValue = const argumentsValue =
typeof item.arguments === 'string' ? item.arguments : JSON.stringify(item.arguments ?? {}) typeof fc.arguments === 'string' ? fc.arguments : JSON.stringify(fc.arguments ?? {})
items.push({ items.push({
type: 'function_call', type: 'function_call',
@@ -262,7 +279,9 @@ export function convertResponseOutputToInputItems(output: unknown): ResponsesInp
 /**
  * Extracts tool calls from Responses API output items.
  */
-export function extractResponseToolCalls(output: unknown): ResponsesToolCall[] {
+export function extractResponseToolCalls(
+  output: OpenAI.Responses.ResponseOutputItem[]
+): ResponsesToolCall[] {
   if (!Array.isArray(output)) {
     return []
   }
@@ -275,14 +294,18 @@ export function extractResponseToolCalls(output: unknown): ResponsesToolCall[] {
     }
     if (item.type === 'function_call') {
-      const callId = item.call_id ?? item.id
-      const name = item.name ?? item.function?.name
+      const fc = item as OpenAI.Responses.ResponseFunctionToolCall
+      const fcRecord = item as unknown as Record<string, unknown>
+      const callId = fc.call_id ?? (fcRecord.id as string | undefined)
+      const name =
+        fc.name ??
+        ((fcRecord.function as Record<string, unknown> | undefined)?.name as string | undefined)
       if (!callId || !name) {
         continue
       }
       const argumentsValue =
-        typeof item.arguments === 'string' ? item.arguments : JSON.stringify(item.arguments ?? {})
+        typeof fc.arguments === 'string' ? fc.arguments : JSON.stringify(fc.arguments ?? {})
       toolCalls.push({
         id: callId,
@@ -292,18 +315,20 @@ export function extractResponseToolCalls(output: unknown): ResponsesToolCall[] {
       continue
     }
-    if (item.type === 'message' && Array.isArray(item.tool_calls)) {
-      for (const toolCall of item.tool_calls) {
-        const callId = toolCall?.id
-        const name = toolCall?.function?.name ?? toolCall?.name
+    // Handle Chat Completions-style tool_calls nested under message items
+    const msgRecord = item as unknown as Record<string, unknown>
+    if (item.type === 'message' && Array.isArray(msgRecord.tool_calls)) {
+      for (const toolCall of msgRecord.tool_calls) {
+        const tc = toolCall as Record<string, unknown>
+        const fn = tc.function as Record<string, unknown> | undefined
+        const callId = tc.id as string | undefined
+        const name = (fn?.name ?? tc.name) as string | undefined
         if (!callId || !name) {
           continue
         }
         const argumentsValue =
-          typeof toolCall?.function?.arguments === 'string'
-            ? toolCall.function.arguments
-            : JSON.stringify(toolCall?.function?.arguments ?? {})
+          typeof fn?.arguments === 'string' ? fn.arguments : JSON.stringify(fn?.arguments ?? {})
         toolCalls.push({
           id: callId,
@@ -323,15 +348,17 @@ export function extractResponseToolCalls(output: unknown): ResponsesToolCall[] {
  * Note: output_tokens is expected to include reasoning tokens; fall back to reasoning_tokens
  * when output_tokens is missing or zero.
  */
-export function parseResponsesUsage(usage: any): ResponsesUsageTokens | undefined {
-  if (!usage || typeof usage !== 'object') {
+export function parseResponsesUsage(
+  usage: OpenAI.Responses.ResponseUsage | undefined
+): ResponsesUsageTokens | undefined {
+  if (!usage) {
     return undefined
   }
-  const inputTokens = Number(usage.input_tokens ?? 0)
-  const outputTokens = Number(usage.output_tokens ?? 0)
-  const cachedTokens = Number(usage.input_tokens_details?.cached_tokens ?? 0)
-  const reasoningTokens = Number(usage.output_tokens_details?.reasoning_tokens ?? 0)
+  const inputTokens = usage.input_tokens ?? 0
+  const outputTokens = usage.output_tokens ?? 0
+  const cachedTokens = usage.input_tokens_details?.cached_tokens ?? 0
+  const reasoningTokens = usage.output_tokens_details?.reasoning_tokens ?? 0
   const completionTokens = Math.max(outputTokens, reasoningTokens)
   const totalTokens = inputTokens + completionTokens
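The token arithmetic above can be checked in isolation. A minimal sketch with a hand-written usage shape instead of the real `OpenAI.Responses.ResponseUsage` type — `UsageSketch` and `completionAndTotal` are hypothetical names for illustration:

```typescript
interface UsageSketch {
  input_tokens?: number
  output_tokens?: number
  output_tokens_details?: { reasoning_tokens?: number }
}

function completionAndTotal(usage: UsageSketch): { completion: number; total: number } {
  const input = usage.input_tokens ?? 0
  const output = usage.output_tokens ?? 0
  const reasoning = usage.output_tokens_details?.reasoning_tokens ?? 0
  // output_tokens should already include reasoning tokens; taking the max
  // covers providers that report only reasoning_tokens (or report output as 0).
  const completion = Math.max(output, reasoning)
  return { completion, total: input + completion }
}

// Normal case: output already includes the 20 reasoning tokens.
const normal = completionAndTotal({
  input_tokens: 100,
  output_tokens: 50,
  output_tokens_details: { reasoning_tokens: 20 },
})
// Fallback case: output_tokens missing/zero, only reasoning reported.
const fallback = completionAndTotal({
  input_tokens: 100,
  output_tokens: 0,
  output_tokens_details: { reasoning_tokens: 30 },
})
```

In the normal case completion stays 50 (not 70), so reasoning tokens are never double-counted; in the fallback case completion becomes 30.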
@@ -398,7 +425,7 @@ export function createReadableStreamFromResponses(
           continue
         }
-        let event: any
+        let event: Record<string, unknown>
         try {
           event = JSON.parse(data)
         } catch (error) {
@@ -416,7 +443,8 @@ export function createReadableStreamFromResponses(
           eventType === 'error' ||
           eventType === 'response.failed'
         ) {
-          const message = event?.error?.message || 'Responses API stream error'
+          const errorObj = event.error as Record<string, unknown> | undefined
+          const message = (errorObj?.message as string) || 'Responses API stream error'
           controller.error(new Error(message))
           return
         }
@@ -426,12 +454,13 @@ export function createReadableStreamFromResponses(
           eventType === 'response.output_json.delta'
         ) {
           let deltaText = ''
-          if (typeof event.delta === 'string') {
-            deltaText = event.delta
-          } else if (event.delta && typeof event.delta.text === 'string') {
-            deltaText = event.delta.text
-          } else if (event.delta && event.delta.json !== undefined) {
-            deltaText = JSON.stringify(event.delta.json)
+          const delta = event.delta as string | Record<string, unknown> | undefined
+          if (typeof delta === 'string') {
+            deltaText = delta
+          } else if (delta && typeof delta.text === 'string') {
+            deltaText = delta.text
+          } else if (delta && delta.json !== undefined) {
+            deltaText = JSON.stringify(delta.json)
           } else if (event.json !== undefined) {
             deltaText = JSON.stringify(event.json)
           } else if (typeof event.text === 'string') {
@@ -445,7 +474,11 @@ export function createReadableStreamFromResponses(
         }
         if (eventType === 'response.completed') {
-          finalUsage = parseResponsesUsage(event?.response?.usage ?? event?.usage)
+          const responseObj = event.response as Record<string, unknown> | undefined
+          const usageData = (responseObj?.usage ?? event.usage) as
+            | OpenAI.Responses.ResponseUsage
+            | undefined
+          finalUsage = parseResponsesUsage(usageData)
         }
       }
     }
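The delta-handling branch in the stream handler tries several event shapes in a fixed order: a string `delta`, then `delta.text`, then `delta.json`, then top-level `json`/`text`. A minimal sketch of that fallback chain — `extractDeltaText` is a hypothetical name, and the event objects are hand-written, not real SSE payloads:

```typescript
// Extracts the textual delta from a parsed stream event, preferring the
// most specific field and falling back through the same order as above.
function extractDeltaText(event: Record<string, unknown>): string {
  const delta = event.delta as string | Record<string, unknown> | undefined
  if (typeof delta === 'string') return delta
  if (delta && typeof delta.text === 'string') return delta.text
  if (delta && delta.json !== undefined) return JSON.stringify(delta.json)
  if (event.json !== undefined) return JSON.stringify(event.json)
  if (typeof event.text === 'string') return event.text
  return ''
}

// Text delta: the delta itself is the string payload.
const d1 = extractDeltaText({ type: 'response.output_text.delta', delta: 'hello' })
// JSON delta: the structured payload is re-serialized to a string.
const d2 = extractDeltaText({ type: 'response.output_json.delta', delta: { json: { ok: true } } })
```

Keeping the checks ordered this way means a malformed event degrades to an empty string rather than throwing mid-stream.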

Some files were not shown because too many files have changed in this diff.