Compare commits


22 Commits

Author SHA1 Message Date
waleed
9f34a27981 refactor(files): cleanup anti-patterns across file viewer components
Six-pass cleanup over the file-viewer directory:

Effects (you-might-not-need-an-effect):
- AudioPreview, VideoPreview: replace reset useEffect with key={file.id} so
  the component remounts on file change — React's canonical solution
- DocxPreview: same key-prop fix; removes a 5-setState reset effect that was
  also clearing containerRef.current.innerHTML unnecessarily

Callbacks (you-might-not-need-a-callback):
- handleEditorMount, handleEditorChange: remove useCallback — MonacoEditor is
  dynamic(), not React.memo, so reference stability has no observer
- markSavedContent: remove useCallback — called only through an onSaveRef,
  never directly observed
- DataTable.setInputRef: remove useCallback — callback refs on native elements
  are called regardless of reference identity

Design tokens (emcn-design-review):
- VideoPreview: bg-black → bg-[var(--surface-inverted)]
- HtmlPreview iframe: bg-white → bg-[var(--surface-2)]

useMemo, useState, and react-query passes found no issues.
2026-04-27 23:59:21 -07:00
waleed
a05542a4d2 fix(files): fix Monaco stale closure, XLSX Ctrl+S data loss, and async workbook mutation
Three bugs from Cursor Bugbot follow-up review:

1. Stale closure in handleEditorMount (Medium): useCallback with an empty
   dependency array captured content='' at first render. When Monaco mounts
   after content loads (e.g. switching from preview to editor mode),
   lastSyncedContentRef was never initialized and external content changes
   stopped syncing. Fixed by keeping a contentRef updated on every render
   and reading it inside handleEditorMount.

2. XLSX Ctrl+S discards active cell edit (Medium): handleSave read from
   workbookRef.current before DataTable's in-progress editValue was committed.
   Fixed by exposing commitEdit() from DataTable via useImperativeHandle
   (using an always-current editStateRef so the handle stays stable) and
   calling it at the top of handleSave.

3. Async workbook mutation fragility (Low): handleCellChange / handleHeaderChange
   updated the workbook inside import('xlsx').then(), creating microtask-order
   coupling with handleSave. Fixed by caching the xlsx module in xlsxModuleRef
   on first parse and using it synchronously in both handlers.
2026-04-27 23:49:38 -07:00
waleed
e794eeb3af chore(files): revert accidental pptxgenjs.cjs re-minification
The bundle was regenerated non-deterministically during development (same
pptxgenjs 4.0.1, different variable names in minifier output). No functional
change — restore the prior version to keep the diff clean.
2026-04-27 23:43:19 -07:00
waleed
809c9d451f fix(files): address PR review findings
- csp.ts: revert bare https: from img-src — it defeats the existing
  domain allowlist and opens info-leakage vectors
- files/page.tsx + files/[fileId]/page.tsx: add explicit fallback={null}
  to <Suspense> to make intent clear (React defaults to null, but
  omitting it looks like an oversight)
- preview-panel.tsx: restore pre passthrough in STATIC_MARKDOWN_COMPONENTS
  so Streamdown's wrapping <pre> doesn't nest inside the custom code
  block <div>, which produced invalid HTML and broken styling
- file-viewer.tsx: add 'webm' to VIDEO_PREVIEWABLE_EXTENSIONS to match
  'video/webm' in VIDEO_PREVIEWABLE_MIME_TYPES
2026-04-27 19:59:44 -07:00
waleed
4d3da794d8 fix(files): replace instanceof Error checks with toError() and fix skeleton tokens
- Use toError() from @sim/utils/errors across all catch blocks in
  file-viewer.tsx, preview-panel.tsx, and route.ts instead of the
  prohibited `err instanceof Error ? err.message : fallback` pattern
- Fix loading skeleton in files.tsx: bg-white → bg-[var(--surface-2)]
  and shadow-[var(--shadow-medium)] → shadow-medium
2026-04-27 19:57:28 -07:00
waleed
3250264de6 feat(files): extract PDF viewer behind SSR boundary and polish file preview
## Core architectural fix

Move all react-pdf / pdfjs-dist code into a new pdf-viewer.tsx module and
import it exclusively via next/dynamic({ ssr: false }). pdfjs-dist v5
references DOMMatrix at module evaluation time, which crashed SSR. The
previous workaround (a DOMMatrix polyfill in instrumentation.ts) is removed
in favour of this proper hard module boundary.

## PDF viewer improvements

- Cursor-anchored zoom: Ctrl/⌘+wheel and trackpad-pinch now zoom toward the
  cursor instead of the top-left corner. Toolbar ± buttons anchor to the
  viewport centre. Uses the canonical scroll-adjust formula used by map and
  canvas viewers.
- Horizontal scroll: dropping flex-col from the scroll container lets the
  zoomed pages wrapper overflow naturally and produces a horizontal scrollbar
  at zoom > 1×.
- Loading skeleton: replaced the conditional inline skeleton with an
  absolute inset-0 overlay so it fills the scroll container correctly in all
  layout contexts.
- Shadow tokens: fixed shadow-[var(--shadow-medium)] and
  shadow-[var(--shadow-card)] to use the Tailwind utility classes
  shadow-medium and shadow-card directly.
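The cursor-anchored zoom above uses the standard scroll-adjust formula: keep the content point under the cursor stationary as scale changes. A minimal sketch of the math — the names and shapes are illustrative, not the actual pdf-viewer.tsx API:

```typescript
interface ScrollState {
  scrollLeft: number
  scrollTop: number
}

// Cursor-anchored zoom: the content coordinate under the cursor is
// (scroll + cursor) / oldScale. After rescaling, that coordinate sits at
// position * newScale, so adjust scroll to keep it under the cursor.
function zoomAtPoint(
  state: ScrollState,
  cursorX: number, // cursor position relative to the scroll container viewport
  cursorY: number,
  oldScale: number,
  newScale: number
): ScrollState {
  const ratio = newScale / oldScale
  return {
    scrollLeft: (state.scrollLeft + cursorX) * ratio - cursorX,
    scrollTop: (state.scrollTop + cursorY) * ratio - cursorY,
  }
}
```

Anchoring the toolbar ± buttons to the viewport centre is the same formula with the cursor coordinates replaced by half the viewport width/height.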

## File viewer cleanup

- data-table.tsx: wrap setInputRef in useCallback with an empty dependency
  array so the ref callback has a stable identity across renders. Previously
  the inline function got a new identity on every keystroke (because editValue
  state changed), causing React to tear down and remount the ref and re-run
  node.select() on every character typed.
- preview-panel.tsx: keep useMemo on ctxValue passed to Context.Provider —
  Context uses Object.is, so a new object every render causes unnecessary
  consumer re-renders.
- resource-content.tsx: remove unnecessary useCallback/useMemo wrappers on
  handlers and derived values that have no memoization observers.

## API route

- Wrap content route with withRouteHandler for automatic request-ID tracking
  via AsyncLocalStorage; remove manual generateRequestId() calls.
- Add resourceName to audit record; add encoding param support (base64 /
  utf-8).

## Query hooks

- Include key (storage object key) in both useWorkspaceFileContent and
  useWorkspaceFileBinary query key tuples so the cache is correctly busted
  when a file is re-uploaded with a new storage key.
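The change amounts to widening the react-query key tuple; a sketch with assumed names (the real hooks build their tuples inline):

```typescript
// Including the storage object key in the tuple means a re-upload (same
// fileId, new storage key) produces a different cache entry instead of
// serving the stale cached content.
function fileContentQueryKey(workspaceId: string, fileId: string, key: string) {
  return ['workspace-file-content', workspaceId, fileId, key] as const
}
```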

## Other

- Add Suspense boundaries to files/page.tsx and files/[fileId]/page.tsx
  (required for useSearchParams inside the Files component).
- Add mmd to SUPPORTED_CODE_EXTENSIONS (Mermaid diagrams).
- Add https: to CSP img-src.
- Remove ==== separator comments from lib/copilot/constants.ts.
- New dependencies: pdfjs-dist 5.4.296, mermaid 11.14.0,
  monaco-editor 0.55.1, @monaco-editor/react 4.7.0.
2026-04-27 19:50:24 -07:00
Theodore Li
154b9d0883 fix(vm): categorize user or server side errors (#4283)
* fix(vm): categorize user or server side errors

* recategorize function syntax errors as 4xx
2026-04-27 22:27:50 -04:00
Waleed
c95ac3bc23 improvement(browser-use,stagehand): expose live session URLs (#4314)
* improvement(browser-use,stagehand): expose live session URLs and align with latest API specs

- Browser Use: switch to v2 camelCase schema, fetch live URL from sessions endpoint, add startUrl/maxSteps/allowedDomains/vision/flashMode/thinking/systemPromptExtension/structuredOutput/metadata params, surface liveUrl/shareUrl/sessionId outputs
- Stagehand: fetch Browserbase debug URL, add mode/maxSteps params, surface liveViewUrl/sessionId outputs, bump @browserbasehq/stagehand to ^3.2.1, update to claude-sonnet-4-6

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(browser-use): respect API default for highlightElements

Only send highlightElements when user explicitly toggles it; previously defaulted to true which silently overrode the v2 API default of false.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(browser-use,stagehand): address PR review feedback

- Browser Use: fetch liveUrl during polling once sessionId is known, instead of immediately after task creation. Handles tasks started without profile_id (where sessionId isn't returned in create response) and ensures session is active before fetching.
- Stagehand: coerce empty/whitespace maxSteps strings to undefined so they're dropped from the request body instead of failing zod validation as ''.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(stagehand): preserve liveViewUrl and sessionId on agent error

If the agent throws after Browserbase session init succeeds, callers can still surface the live view / session ID for debugging.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(browser-use): coerce empty maxSteps strings to undefined

Mirrors the Stagehand block's handling so a cleared field doesn't pass through as ''.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(browser-use): skip metadata when empty

---------

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-27 18:02:51 -07:00
Theodore Li
ca814f021d fix(ui): display file upload error messages (#4315)
* fix(ui): display file upload messages

* Address pr comments
2026-04-27 18:58:41 -04:00
Waleed
2502369122 feat(integrations): SAP S/4HANA (#4301)
* feat(integrations): SAP S/4HANA tools, block, and proxy with multi-deployment support

* fix(sap_s4hana): address PR review comments

- Validate baseUrl/tokenUrl in Zod schema and at runtime to prevent SSRF
  (https-only, deny loopback/link-local/cloud-metadata hosts)
- Cap proxy token cache at 500 entries with LRU eviction
- Add 30s timeout to outbound token, CSRF, and OData fetches
- Make parseJsonInput return T | undefined so missing input is type-safe
- Reset authType when deploymentType changes and surface OAuth fields
  whenever auth is not basic, so cloud_public users always see clientId/
  clientSecret after switching from a basic-auth private deployment
- Reject OData service names that are not uppercase identifiers and
  paths containing ".." or "." traversal segments
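A hedged sketch of the https-only / deny-private-host check described in the first bullet. The helper name is hypothetical and the real validator presumably does more (DNS resolution, IPv6 forms covered in a later commit); this only inspects the literal hostname:

```typescript
// Reject non-https URLs and hostnames that point at loopback, RFC-1918
// ranges, link-local, or well-known cloud-metadata endpoints.
function isSafeExternalUrl(raw: string): boolean {
  let url: URL
  try {
    url = new URL(raw)
  } catch {
    return false // unparseable input is rejected outright
  }
  if (url.protocol !== 'https:') return false
  const host = url.hostname
  if (host === 'localhost' || host === 'metadata.google.internal') return false
  const m = host.match(/^(\d+)\.(\d+)\.(\d+)\.(\d+)$/)
  if (m) {
    const a = Number(m[1])
    const b = Number(m[2])
    if (a === 0 || a === 10 || a === 127) return false
    if (a === 169 && b === 254) return false // link-local incl. 169.254.169.254
    if (a === 172 && b >= 16 && b <= 31) return false
    if (a === 192 && b === 168) return false
  }
  return true
}
```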

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(sap_s4hana): allow versioned service names; tighten proxy SSRF defenses

- Permit ";v=NNNN" suffix on ServiceName regex so the four delivery tools
  (API_OUTBOUND_DELIVERY_SRV;v=0002, API_INBOUND_DELIVERY_SRV;v=0002) pass
  schema validation
- Restrict subdomain to RFC 1123 label characters and region to lowercase
  alphanumeric short codes; run the constructed cloud_public host through
  assertSafeExternalUrl so a crafted subdomain (e.g. "evil.com#") cannot
  redirect requests carrying SAP credentials
- Block RFC-1918 (10/8, 172.16/12, 192.168/16), 127/8, 169.254/16, and
  0.0.0.0 via isPrivateIPv4, plus IPv4-mapped IPv6 variants
  (::ffff:10.0.0.1, ::10.0.0.1) so private internal hosts cannot be
  reached from baseUrl, tokenUrl, or the resolved cloud_public URL

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(sap_s4hana): catch hex-form IPv4-mapped IPv6 in SSRF check

The WHATWG URL parser normalizes IPv4-mapped IPv6 addresses to hex form
(e.g. [::ffff:169.254.169.254] → [::ffff:a9fe:a9fe]), which slipped past
the dotted-decimal-only extractor. Decode the trailing two 16-bit hex
groups back into IPv4 octets and run them through isPrivateIPv4. Also
add isPrivateOrLoopbackIPv6 so pure IPv6 loopback (::, ::1), unique
local addresses (fc00::/7), and link-local (fe80::/10) cannot be reached.
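The hex-decode step can be sketched as a small pure function (name illustrative): take the trailing two 16-bit groups of a `::ffff:` mapped address and unpack them into four IPv4 octets.

```typescript
// The WHATWG URL parser turns [::ffff:169.254.169.254] into
// [::ffff:a9fe:a9fe]; decode the trailing two hex groups back into
// dotted-decimal IPv4 so the private-range check can run on it.
function ipv4FromMappedIpv6(host: string): string | null {
  const m = host.toLowerCase().match(/^::ffff:([0-9a-f]{1,4}):([0-9a-f]{1,4})$/)
  if (!m) return null
  const hi = parseInt(m[1], 16)
  const lo = parseInt(m[2], 16)
  return [hi >> 8, hi & 0xff, lo >> 8, lo & 0xff].join('.')
}
```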

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(sap_s4hana): scope CSRF metadata fetch and isolate token cache by secret

- buildOdataUrl skips request query params when called with an internal
  pathOverride so the /$metadata CSRF probe never carries user OData
  options ($filter, $top, $select), which were causing write operations
  through the generic odata_query tool to fail.
- tokenCacheKey now mixes a sha256 hash of clientSecret into the cache
  key so two tenants sharing the same tokenUrl + clientId but different
  secrets get isolated entries (no cross-tenant token leak).
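The cache-key construction in the second bullet can be sketched as follows (function name illustrative; only the sha256-of-secret mixing is from the commit):

```typescript
import { createHash } from 'node:crypto'

// Mixing a sha256 of clientSecret into the cache key isolates two tenants
// that share the same tokenUrl + clientId but hold different secrets, so
// neither can be served the other's cached token.
function tokenCacheKey(tokenUrl: string, clientId: string, clientSecret: string): string {
  const secretHash = createHash('sha256').update(clientSecret).digest('hex')
  return `${tokenUrl}|${clientId}|${secretHash}`
}
```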

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(sap_s4hana): reject ?/# in service path; trim long update tool descriptions

- ServicePath validator now rejects "?" and "#" so a caller can't smuggle
  query options through the path field (e.g.,
  "/A_BusinessPartner?$format=atomsvc"); the Zod refine now reports
  ".." / "." segments, "?", and "#" together.
- Update Customer / Update Supplier / Update Purchase Requisition tool
  descriptions exceeded the docs generator's 600-char regex window, so
  they were rendering with empty descriptions on the integrations
  landing page. Trimmed them to fit while keeping the limited-fields
  note and the If-Match guidance, then regenerated integrations.json
  and tool docs.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(sap_s4hana): reject percent-encoded path traversal; widen Set-Cookie split

- ServicePath now also rejects %2e/%2E, %2f/%2F, %5c/%5C, %3f/%3F, %23
  so a caller cannot smuggle ".." / "." / "/" / "\" / "?" / "#" past the
  validator and have SAP's ABAP/ICM gateway decode them server-side.
- joinSetCookies fallback regex now allows the ", " separator that's
  used when multiple Set-Cookie values are folded onto one header line
  (older runtimes without Headers.getSetCookie). Prevents CSRF cookies
  from being concatenated into a single value during write operations.
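The layered ServicePath rejection (raw and percent-encoded traversal/query characters) can be sketched as one predicate — an assumed shape, not the actual Zod refine:

```typescript
// Reject "?" / "#" / "\" raw, their percent-encoded forms (plus %2e and
// %2f/%5c, which SAP's ABAP/ICM gateway would decode server-side), and
// "." / ".." path segments.
function isSafeServicePath(path: string): boolean {
  if (/[?#\\]/.test(path)) return false
  if (/%2e|%2f|%5c|%3f|%23/i.test(path)) return false
  return !path.split('/').some((seg) => seg === '.' || seg === '..')
}
```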

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(sap_s4hana): preserve $ in OData query params; reject empty items array

- buildOdataUrl now constructs query strings manually with
  encodeURIComponent and restores literal "$" so OData system options
  ($filter, $top, $select, $expand, $orderby, $skip, $format) reach
  SAP and any intermediary proxies/WAFs as-is, not as "%24filter".
  URLSearchParams was percent-encoding "$" to "%24" which most ICMs
  decode but some intermediaries silently drop, returning unfiltered
  results.
- create_sales_order now rejects an empty items array (matches
  create_purchase_requisition) so callers get a clear client-side
  error instead of an opaque SAP validation failure on the deep-insert.
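The manual query-string construction in the first bullet can be sketched like this (name illustrative; the real buildOdataUrl also handles the pathOverride case from the earlier commit):

```typescript
// Encode each key/value with encodeURIComponent, then restore the literal
// "$" that OData system query options start with — URLSearchParams would
// emit "%24filter", which some intermediaries silently drop.
function buildODataQuery(params: Record<string, string>): string {
  return Object.entries(params)
    .map(([k, v]) => `${encodeURIComponent(k)}=${encodeURIComponent(v)}`)
    .join('&')
    .replace(/%24/g, '$')
}
```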

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(sap_s4hana): ignore baseUrl on cloud_public to prevent token redirection

Why: resolveHost previously preferred baseUrl unconditionally. A caller
sending deploymentType=cloud_public with a baseUrl pointing elsewhere
would obtain a real SAP UAA token, then forward it as Bearer to the
attacker host. Zod superRefine did not validate baseUrl for cloud_public.

Fix: resolveHost now constructs the SAP host from subdomain when
deploymentType is cloud_public and only uses baseUrl for cloud_private
and on_premise (where it is already SSRF-checked in superRefine).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(icons): use useId for SapS4HanaIcon and PipedriveIcon gradients

Why: hardcoded SVG gradient/mask IDs collide when an icon renders more
than once on a page (e.g. integrations listing). All other icons in this
file use React's useId() — these were inconsistent.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* icons

* fix(icons): use useId for AWS-style icon gradients

Why: IAMIcon, IdentityCenterIcon, STSIcon, SESIcon, and SecretsManagerIcon
all used hardcoded `id='xxxGradient'` values that collide when an icon
renders more than once on a page (e.g. integrations listing).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(sap_s4hana): ignore tokenUrl on cloud_public to prevent UAA redirection

Why: resolveTokenUrl previously honored caller-supplied tokenUrl
regardless of deploymentType, mirroring the same redirection class as
the prior baseUrl bug. A cloud_public caller could send tokenUrl to an
attacker host, causing the proxy to POST clientId:clientSecret as Basic
auth to it. superRefine for cloud_public did not validate tokenUrl.

Fix: derive UAA URL from subdomain+region for cloud_public; only honor
tokenUrl for cloud_private/on_premise (already SSRF-checked).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(icons): remove unused mask in PipedriveIcon

Why: the <mask> element had no consumer (no mask='url(#...)' anywhere
in the SVG), so both it and the maskId variable were dead code.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-27 15:07:00 -07:00
Waleed
8266f0afdb fix(security): rate limit chat OTP + validate mothership proxy endpoint (#4312)
* fix(security): rate limit chat OTP endpoint to prevent email bombing

* fix(security): validate mothership proxy endpoint to block path traversal

* fix(security): address greptile feedback on OTP rate limiting
2026-04-27 14:51:58 -07:00
Waleed
896a00ae31 fix(security): require internal API key for copilot training endpoints (#4311) 2026-04-27 14:30:47 -07:00
Waleed
74946fb162 improvement(docker): speed up app image build with cache mounts and parallel node-gyp (#4310) 2026-04-27 13:34:57 -07:00
Waleed
f62d274478 fix(mothership): stabilize task sidebar ordering on selection (#4309) 2026-04-27 12:42:20 -07:00
Theodore Li
65e17de065 fix(retention-job): add chunking strategy for cleanup (#4305)
* fix(retention-job): add chunking strategy for cleanup

* change stats to be per-job, not per chunk
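The chunking strategy with per-job stats can be sketched generically (synchronous for brevity — the real retention job presumably awaits each batch delete; all names are illustrative):

```typescript
// Process IDs in fixed-size batches so a single huge delete never runs,
// while the stats counters accumulate across the whole job rather than
// being reset per chunk.
function cleanupInChunks(
  ids: string[],
  chunkSize: number,
  deleteBatch: (batch: string[]) => number // returns rows deleted
): { chunks: number; deleted: number } {
  let chunks = 0
  let deleted = 0 // per-job total
  for (let i = 0; i < ids.length; i += chunkSize) {
    deleted += deleteBatch(ids.slice(i, i + chunkSize))
    chunks++
  }
  return { chunks, deleted }
}
```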
2026-04-27 15:13:00 -04:00
Vikhyath Mondreti
79ff5d80b3 improvement(slack): channel selector for list canvases (#4307) 2026-04-27 12:11:14 -07:00
Vikhyath Mondreti
2a52141d2f feat(slack): canvas related operations (#4306)
* feat(slack): canvas related operations

* extract shared code
2026-04-27 12:00:40 -07:00
Theodore Li
76ad59fd7d fix(stream): Avoid bun memory leak bug from TransformStream (#4255)
* Avoid bun memory leak bug from TransformStream

* fix(executor): skip content persistence when stream consumer exits early

Previously, if the onStream consumer caught an internal error without
re-throwing, the block-executor would treat the shortened accumulator
as the complete response, persist a truncated string to memory via
appendToMemory, and set it as executionOutput.content.

Track whether the source ReadableStream actually closed (done=true) in
the pull handler. If onStream returns before the source drains, skip
content persistence and log a warning — the old tee()-based flow was
immune to this because the executor branch drained independently of
the client branch.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix lint

* fix(executor): early-return when no streamed content to make onFullContent symmetric

Previously, executionOutput.content was guarded by `if (fullContent)`
but `onFullContent` fired regardless. The agent-handler implementor
defensively bails on empty/whitespace content, but that's a callee
contract, not enforced at the call site — future implementors could
spuriously persist empty assistant turns to memory.

Hoist the `!fullContent` check to a single early return, so both the
output write and the callback share the same precondition.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 14:23:36 -04:00
Waleed
c32c1cb917 fix(security): patch copilot tool & multipart upload IDORs (#4304)
* fix(security): patch copilot tool & multipart upload IDORs

- multipart upload: bind upload session to (userId, workspaceId, key)
  via short-lived HMAC-signed token; require workspace write access at
  initiate; source key/uploadId/context from verified token (never
  client) at get-part-urls/complete/abort
- copilot knowledge-base tools: gate all 11 read/write/tag/connector
  ops with checkKnowledgeBaseAccess / checkKnowledgeBaseWriteAccess
- copilot user-table tools: add workspace-id check to get, get_schema,
  add/rename/delete/update_column to match existing op pattern
- copilot manage-credential: add full ownership/write-permission auth
  via getCredentialActorContext (previously had no auth)
- copilot restore-resource: verify workspace ownership and write
  permission for workflow, table, knowledgebase, file, and folder
  restores
- copilot folder rename/move: verify both folderId and parentId belong
  to the caller's workspace via new verifyFolderWorkspace helper
- copilot get-job-logs: verify schedule belongs to caller's workspace
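A hedged sketch of the HMAC-signed upload token from the first bullet, in the two-segment "payload.signature" shape implied by the later review note about rejecting tokens with != 2 dot-delimited segments. The function names and exact payload layout are assumptions:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto'

interface UploadClaims {
  userId: string
  workspaceId: string
  key: string
  exp: number // expiry timestamp — the token is short-lived
}

// Bind the upload session to (userId, workspaceId, key) so later
// get-part-urls/complete/abort calls source these from the verified
// token, never from the client.
function signUploadToken(secret: string, claims: UploadClaims): string {
  const payload = Buffer.from(JSON.stringify(claims)).toString('base64url')
  const sig = createHmac('sha256', secret).update(payload).digest('base64url')
  return `${payload}.${sig}`
}

function verifyUploadToken(secret: string, token: string, now: number): UploadClaims | null {
  const parts = token.split('.')
  if (parts.length !== 2) return null // reject malformed tokens outright
  const [payload, sig] = parts
  const expected = createHmac('sha256', secret).update(payload).digest('base64url')
  const a = Buffer.from(sig)
  const b = Buffer.from(expected)
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null
  const claims = JSON.parse(Buffer.from(payload, 'base64url').toString()) as UploadClaims
  if (claims.exp <= now) return null // expired tokens fail verification
  return claims
}
```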

* fix(security): address PR review — document IDOR, log count, token split

- knowledge-base delete_document/update_document: verify each document
  belongs to the claimed knowledgeBaseId via checkDocumentWriteAccess
  (was: trusted args.knowledgeBaseId without binding it to the document)
- multipart batch complete: log verifiedEntries.length instead of raw
  client-supplied data.uploads.length
- upload-token: reject tokens with !=2 dot-delimited segments

* fix(security): close folder workspace bypass when workspaceId is falsy
2026-04-27 11:05:22 -07:00
Vikhyath Mondreti
50e74f75ef fix(mothership): parallel subagent rendering, exec stream re-attach (#4299)
* fix(mothership): parallel subagent rendering

* fix rendering logic

* address comments

* cleanup dead code

* address mothership legacy key

* prevent id collision

* rollout stream edge case

* address bugbot comments

* remove unused fallbacks

* fix execution stream attach, cleanup comments

* remove debug logs

* cleanup failed reconnect
2026-04-27 00:40:39 -07:00
Waleed
60652e621c fix(security): credential-set invite email check + shopify authorize XSS (#4302) 2026-04-26 20:52:42 -07:00
Waleed
8863f1132a feat(models): add gpt-5.5 models (#4300)
* feat(models): add gpt-5.5 models

* fix(models): address gpt-5.5 review feedback

* fix(models): align gpt-5.5 pro controls
2026-04-25 19:13:29 -07:00
142 changed files with 13694 additions and 1945 deletions


@@ -4045,6 +4045,7 @@ export function AsanaIcon(props: SVGProps<SVGSVGElement>) {
}
export function PipedriveIcon(props: SVGProps<SVGSVGElement>) {
const pathId = useId()
return (
<svg
{...props}
@@ -4058,7 +4059,7 @@ export function PipedriveIcon(props: SVGProps<SVGSVGElement>) {
<defs>
<path
d='M59.6807,81.1772 C59.6807,101.5343 70.0078,123.4949 92.7336,123.4949 C109.5872,123.4949 126.6277,110.3374 126.6277,80.8785 C126.6277,55.0508 113.232,37.7119 93.2944,37.7119 C77.0483,37.7119 59.6807,49.1244 59.6807,81.1772 Z M101.3006,0 C142.0482,0 169.4469,32.2728 169.4469,80.3126 C169.4469,127.5978 140.584,160.60942 99.3224,160.60942 C79.6495,160.60942 67.0483,152.1836 60.4595,146.0843 C60.5063,147.5305 60.5374,149.1497 60.5374,150.8788 L60.5374,215 L18.32565,215 L18.32565,44.157 C18.32565,41.6732 17.53126,40.8873 15.07021,40.8873 L0.5531,40.8873 L0.5531,3.4741 L35.9736,3.4741 C52.282,3.4741 56.4564,11.7741 57.2508,18.1721 C63.8708,10.7524 77.5935,0 101.3006,0 Z'
id='path-1'
id={pathId}
/>
</defs>
<g
@@ -4069,10 +4070,7 @@ export function PipedriveIcon(props: SVGProps<SVGSVGElement>) {
fillRule='evenodd'
>
<g transform='translate(67.000000, 44.000000)'>
<mask id='mask-2' fill='white'>
<use href='#path-1' />
</mask>
<use id='Clip-5' fill='#FFFFFF' xlinkHref='#path-1' />
<use fill='#FFFFFF' xlinkHref={`#${pathId}`} />
</g>
</g>
</svg>
@@ -4098,6 +4096,40 @@ export function SalesforceIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function SapS4HanaIcon(props: SVGProps<SVGSVGElement>) {
const id = useId()
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 412.38 204'>
<defs>
<linearGradient
id={id}
x1='206.19'
y1='0'
x2='206.19'
y2='204'
gradientUnits='userSpaceOnUse'
>
<stop offset='0' stopColor='#00b1eb' />
<stop offset='.212' stopColor='#009ad9' />
<stop offset='.519' stopColor='#007fc4' />
<stop offset='.792' stopColor='#006eb8' />
<stop offset='1' stopColor='#0069b4' />
</linearGradient>
</defs>
<polyline
fill={`url(#${id})`}
fillRule='evenodd'
points='0 204 208.413 204 412.38 0 0 0 0 204'
/>
<path
fill='#fff'
fillRule='evenodd'
d='m244.727,38.359l-40.593-.025v96.518l-35.46-96.518h-35.16l-30.277,80.716c-3.224-20.352-24.277-27.38-40.84-32.649-10.937-3.512-22.541-8.678-22.434-14.387.091-4.687,6.225-9.04,18.377-8.385,8.17.433,15.373,1.092,29.71,8.006l14.102-24.557c-13.088-6.658-31.169-10.867-45.985-10.883h-.086c-17.277,0-31.677,5.598-40.602,14.824-6.221,6.443-9.572,14.626-9.712,23.679-.227,12.454,4.341,21.292,13.938,28.338,8.104,5.944,18.468,9.794,27.603,12.626,11.27,3.492,20.467,6.526,20.36,13.002-.083,2.355-.977,4.552-2.671,6.337-2.807,2.897-7.124,3.986-13.084,4.098-11.497.243-20.026-1.559-33.61-9.585l-12.536,24.903c13.546,7.705,29.586,12.223,45.952,12.223l2.106-.024c14.247-.256,25.745-4.316,34.929-11.712.527-.416,1.001-.845,1.488-1.277l-4.073,10.874h36.875l6.189-18.822c6.477,2.214,13.847,3.437,21.676,3.437,7.618,0,14.795-1.17,21.156-3.252l5.965,18.637h60.137v-38.969h13.113c31.706,0,50.456-16.147,50.456-43.202,0-30.139-18.219-43.969-57.011-43.969Zm-93.816,82.587c-4.737,0-9.177-.828-13.006-2.275l12.866-40.593h.244l12.643,40.708c-3.801,1.349-8.138,2.16-12.746,2.16Zm96.199-23.324h-8.941v-32.711h8.941c11.927,0,21.437,3.961,21.437,16.139,0,12.602-9.51,16.572-21.437,16.572'
/>
</svg>
)
}
export function ServiceNowIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 71.1 63.6'>
@@ -4694,15 +4726,16 @@ export function DynamoDBIcon(props: SVGProps<SVGSVGElement>) {
}
export function IAMIcon(props: SVGProps<SVGSVGElement>) {
const id = useId()
return (
<svg {...props} viewBox='0 0 80 80' xmlns='http://www.w3.org/2000/svg'>
<defs>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id='iamGradient'>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id={id}>
<stop stopColor='#BD0816' offset='0%' />
<stop stopColor='#FF5252' offset='100%' />
</linearGradient>
</defs>
<rect fill='url(#iamGradient)' width='80' height='80' />
<rect fill={`url(#${id})`} width='80' height='80' />
<path
d='M14,59 L66,59 L66,21 L14,21 L14,59 Z M68,20 L68,60 C68,60.552 67.553,61 67,61 L13,61 C12.447,61 12,60.552 12,60 L12,20 C12,19.448 12.447,19 13,19 L67,19 C67.553,19 68,19.448 68,20 L68,20 Z M44,48 L59,48 L59,46 L44,46 L44,48 Z M57,42 L62,42 L62,40 L57,40 L57,42 Z M44,42 L52,42 L52,40 L44,40 L44,42 Z M29,46 C29,45.449 28.552,45 28,45 C27.448,45 27,45.449 27,46 C27,46.551 27.448,47 28,47 C28.552,47 29,46.551 29,46 L29,46 Z M31,46 C31,47.302 30.161,48.401 29,48.816 L29,51 L27,51 L27,48.815 C25.839,48.401 25,47.302 25,46 C25,44.346 26.346,43 28,43 C29.654,43 31,44.346 31,46 L31,46 Z M19,53.993 L36.994,54 L36.996,50 L33,50 L33,48 L36.996,48 L36.998,45 L33,45 L33,43 L36.999,43 L37,40.007 L19.006,40 L19,53.993 Z M22,38.001 L34,38.006 L34,31 C34.001,28.697 31.197,26.677 28,26.675 L27.996,26.675 C24.804,26.675 22.004,28.696 22.002,31 L22,38.001 Z M17,54.992 L17.006,39 C17.006,38.734 17.111,38.48 17.299,38.292 C17.486,38.105 17.741,38 18.006,38 L20,38.001 L20.002,31 C20.004,27.512 23.59,24.675 27.996,24.675 L28,24.675 C32.412,24.677 36.001,27.515 36,31 L36,38.007 L38,38.008 C38.553,38.008 39,38.456 39,39.008 L38.994,55 C38.994,55.266 38.889,55.52 38.701,55.708 C38.514,55.895 38.259,56 37.994,56 L18,55.992 C17.447,55.992 17,55.544 17,54.992 L17,54.992 Z M60,36 L62,36 L62,34 L60,34 L60,36 Z M44,36 L55,36 L55,34 L44,34 L44,36 Z'
fill='#FFFFFF'
@@ -4712,15 +4745,16 @@ export function IAMIcon(props: SVGProps<SVGSVGElement>) {
}
export function IdentityCenterIcon(props: SVGProps<SVGSVGElement>) {
const id = useId()
return (
<svg {...props} viewBox='0 0 80 80' xmlns='http://www.w3.org/2000/svg'>
<defs>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id='identityCenterGradient'>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id={id}>
<stop stopColor='#BD0816' offset='0%' />
<stop stopColor='#FF5252' offset='100%' />
</linearGradient>
</defs>
<rect fill='url(#identityCenterGradient)' width='80' height='80' />
<rect fill={`url(#${id})`} width='80' height='80' />
<path
d='M46.694,46.8194562 C47.376,46.1374562 47.376,45.0294562 46.694,44.3474562 C46.353,44.0074562 45.906,43.8374562 45.459,43.8374562 C45.01,43.8374562 44.563,44.0074562 44.222,44.3474562 C43.542,45.0284562 43.542,46.1384562 44.222,46.8194562 C44.905,47.5014562 46.013,47.4994562 46.694,46.8194562 M47.718,47.1374562 L51.703,51.1204562 L50.996,51.8274562 L49.868,50.6994562 L48.793,51.7754562 L48.086,51.0684562 L49.161,49.9924562 L47.011,47.8444562 C46.545,48.1654562 46.003,48.3294562 45.458,48.3294562 C44.755,48.3294562 44.051,48.0624562 43.515,47.5264562 C42.445,46.4554562 42.445,44.7124562 43.515,43.6404562 C44.586,42.5714562 46.329,42.5694562 47.401,43.6404562 C48.351,44.5904562 48.455,46.0674562 47.718,47.1374562 M53,44.1014562 C53,46.1684562 51.505,47.0934562 50.023,47.0934562 L50.023,46.0934562 C50.487,46.0934562 52,45.9494562 52,44.1014562 C52,43.0044562 51.353,42.3894562 49.905,42.1084562 C49.68,42.0654562 49.514,41.8754562 49.501,41.6484562 C49.446,40.7444562 48.987,40.1124562 48.384,40.1124562 C48.084,40.1124562 47.854,40.2424562 47.616,40.5464562 C47.506,40.6884562 47.324,40.7594562 47.147,40.7324562 C46.968,40.7054562 46.818,40.5844562 46.755,40.4144562 C46.577,39.9434562 46.211,39.4334562 45.723,38.9774562 C45.231,38.5094562 43.883,37.5074562 41.972,38.2734562 C40.885,38.7054562 40.034,39.9494562 40.034,41.1074562 C40.034,41.2354562 40.043,41.3624562 40.058,41.4884562 C40.061,41.5094562 40.062,41.5304562 40.062,41.5514562 C40.062,41.7994562 39.882,42.0064562 39.645,42.0464562 C38.886,42.2394562 38,42.7454562 38,44.0554562 L38.005,44.2104562 C38.069,45.3254562 39.252,45.9954562 40.358,45.9984562 L41,45.9984562 L41,46.9984562 L40.357,46.9984562 C38.536,46.9944562 37.095,45.8194562 37.006,44.2644562 C37.003,44.1944562 37,44.1244562 37,44.0554562 C37,42.6944562 37.752,41.6484562 39.035,41.1884562 C39.034,41.1614562 39.034,41.1344562 39.034,41.1074562 C39.034,39.5434562 40.138,37.9254562 41.602,37.3434562 C43.298,36.6654562 45.095,37.0034562 46.409,38.2494562 
C46.706,38.5274562 47.076,38.9264562 47.372,39.4134562 C47.673,39.2124562 48.008,39.1124562 48.384,39.1124562 C49.257,39.1124562 50.231,39.7714562 50.458,41.2074562 C52.145,41.6324562 53,42.6054562 53,44.1014562 M27,53 L27,27 L53,27 L53,34 L51,34 L51,29 L29,29 L29,51 L51,51 L51,46 L53,46 L53,53 Z'
fill='#FFFFFF'
@@ -4730,15 +4764,16 @@ export function IdentityCenterIcon(props: SVGProps<SVGSVGElement>) {
}
export function STSIcon(props: SVGProps<SVGSVGElement>) {
const id = useId()
return (
<svg {...props} viewBox='0 0 80 80' xmlns='http://www.w3.org/2000/svg'>
<defs>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id='stsGradient'>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id={id}>
<stop stopColor='#BD0816' offset='0%' />
<stop stopColor='#FF5252' offset='100%' />
</linearGradient>
</defs>
<rect fill='url(#stsGradient)' width='80' height='80' />
<rect fill={`url(#${id})`} width='80' height='80' />
<path
d='M14,59 L66,59 L66,21 L14,21 L14,59 Z M68,20 L68,60 C68,60.552 67.553,61 67,61 L13,61 C12.447,61 12,60.552 12,60 L12,20 C12,19.448 12.447,19 13,19 L67,19 C67.553,19 68,19.448 68,20 L68,20 Z M44,48 L59,48 L59,46 L44,46 L44,48 Z M57,42 L62,42 L62,40 L57,40 L57,42 Z M44,42 L52,42 L52,40 L44,40 L44,42 Z M29,46 C29,45.449 28.552,45 28,45 C27.448,45 27,45.449 27,46 C27,46.551 27.448,47 28,47 C28.552,47 29,46.551 29,46 L29,46 Z M31,46 C31,47.302 30.161,48.401 29,48.816 L29,51 L27,51 L27,48.815 C25.839,48.401 25,47.302 25,46 C25,44.346 26.346,43 28,43 C29.654,43 31,44.346 31,46 L31,46 Z M19,53.993 L36.994,54 L36.996,50 L33,50 L33,48 L36.996,48 L36.998,45 L33,45 L33,43 L36.999,43 L37,40.007 L19.006,40 L19,53.993 Z M22,38.001 L34,38.006 L34,31 C34.001,28.697 31.197,26.677 28,26.675 L27.996,26.675 C24.804,26.675 22.004,28.696 22.002,31 L22,38.001 Z M17,54.992 L17.006,39 C17.006,38.734 17.111,38.48 17.299,38.292 C17.486,38.105 17.741,38 18.006,38 L20,38.001 L20.002,31 C20.004,27.512 23.59,24.675 27.996,24.675 L28,24.675 C32.412,24.677 36.001,27.515 36,31 L36,38.007 L38,38.008 C38.553,38.008 39,38.456 39,39.008 L38.994,55 C38.994,55.266 38.889,55.52 38.701,55.708 C38.514,55.895 38.259,56 37.994,56 L18,55.992 C17.447,55.992 17,55.544 17,54.992 L17,54.992 Z M60,36 L62,36 L62,34 L60,34 L60,36 Z M44,36 L55,36 L55,34 L44,34 L44,36 Z'
fill='#FFFFFF'
@@ -4748,15 +4783,16 @@ export function STSIcon(props: SVGProps<SVGSVGElement>) {
}
export function SESIcon(props: SVGProps<SVGSVGElement>) {
const id = useId()
return (
<svg {...props} viewBox='0 0 80 80' xmlns='http://www.w3.org/2000/svg'>
<defs>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id='sesGradient'>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id={id}>
<stop stopColor='#BD0816' offset='0%' />
<stop stopColor='#FF5252' offset='100%' />
</linearGradient>
</defs>
<rect fill='url(#sesGradient)' width='80' height='80' />
<rect fill={`url(#${id})`} width='80' height='80' />
<path
d='M57,60.999875 C57,59.373846 55.626,57.9998214 54,57.9998214 C52.374,57.9998214 51,59.373846 51,60.999875 C51,62.625904 52.374,63.9999286 54,63.9999286 C55.626,63.9999286 57,62.625904 57,60.999875 L57,60.999875 Z M40,59.9998571 C38.374,59.9998571 37,61.3738817 37,62.9999107 C37,64.6259397 38.374,65.9999643 40,65.9999643 C41.626,65.9999643 43,64.6259397 43,62.9999107 C43,61.3738817 41.626,59.9998571 40,59.9998571 L40,59.9998571 Z M26,57.9998214 C24.374,57.9998214 23,59.373846 23,60.999875 C23,62.625904 24.374,63.9999286 26,63.9999286 C27.626,63.9999286 29,62.625904 29,60.999875 C29,59.373846 27.626,57.9998214 26,57.9998214 L26,57.9998214 Z M28.605,42.9995536 L51.395,42.9995536 L43.739,36.1104305 L40.649,38.7584778 C40.463,38.9194807 40.23,38.9994821 39.999,38.9994821 C39.768,38.9994821 39.535,38.9194807 39.349,38.7584778 L36.26,36.1104305 L28.605,42.9995536 Z M27,28.1732888 L27,41.7545313 L34.729,34.7984071 L27,28.1732888 Z M51.297,26.9992678 L28.703,26.9992678 L39.999,36.6824408 L51.297,26.9992678 Z M53,41.7545313 L53,28.1732888 L45.271,34.7974071 L53,41.7545313 Z M59,60.999875 C59,63.7099234 56.71,65.9999643 54,65.9999643 C51.29,65.9999643 49,63.7099234 49,60.999875 C49,58.6308327 50.75,56.5837961 53,56.1057876 L53,52.9997321 L41,52.9997321 L41,58.1058233 C43.25,58.5838319 45,60.6308684 45,62.9999107 C45,65.7099591 42.71,68 40,68 C37.29,68 35,65.7099591 35,62.9999107 C35,60.6308684 36.75,58.5838319 39,58.1058233 L39,52.9997321 L27,52.9997321 L27,56.1057876 C29.25,56.5837961 31,58.6308327 31,60.999875 C31,63.7099234 28.71,65.9999643 26,65.9999643 C23.29,65.9999643 21,63.7099234 21,60.999875 C21,58.6308327 22.75,56.5837961 25,56.1057876 L25,51.9997143 C25,51.4477044 25.447,50.9996964 26,50.9996964 L39,50.9996964 L39,44.9995893 L26,44.9995893 C25.447,44.9995893 25,44.5515813 25,43.9995714 L25,25.99925 C25,25.4472401 25.447,24.9992321 26,24.9992321 L54,24.9992321 C54.553,24.9992321 55,25.4472401 55,25.99925 L55,43.9995714 C55,44.5515813 54.553,44.9995893 
54,44.9995893 L41,44.9995893 L41,50.9996964 L54,50.9996964 C54.553,50.9996964 55,51.4477044 55,51.9997143 L55,56.1057876 C57.25,56.5837961 59,58.6308327 59,60.999875 L59,60.999875 Z M68,39.9995 C68,45.9066055 66.177,51.5597064 62.727,56.3447919 L61.104,55.174771 C64.307,50.7316916 66,45.4845979 66,39.9995 C66,25.664244 54.337,14.0000357 40.001,14.0000357 C25.664,14.0000357 14,25.664244 14,39.9995 C14,45.4845979 15.693,50.7316916 18.896,55.174771 L17.273,56.3447919 C13.823,51.5597064 12,45.9066055 12,39.9995 C12,24.5612243 24.561,12 39.999,12 C55.438,12 68,24.5612243 68,39.9995 L68,39.9995 Z'
fill='#FFFFFF'
@@ -4766,15 +4802,16 @@ export function SESIcon(props: SVGProps<SVGSVGElement>) {
}
export function SecretsManagerIcon(props: SVGProps<SVGSVGElement>) {
const id = useId()
return (
<svg {...props} viewBox='0 0 80 80' xmlns='http://www.w3.org/2000/svg'>
<defs>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id='secretsManagerGradient'>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id={id}>
<stop stopColor='#BD0816' offset='0%' />
<stop stopColor='#FF5252' offset='100%' />
</linearGradient>
</defs>
<rect fill='url(#secretsManagerGradient)' width='80' height='80' />
<rect fill={`url(#${id})`} width='80' height='80' />
<path
d='M38.76,43.36 C38.76,44.044 39.317,44.6 40,44.6 C40.684,44.6 41.24,44.044 41.24,43.36 C41.24,42.676 40.684,42.12 40,42.12 C39.317,42.12 38.76,42.676 38.76,43.36 L38.76,43.36 Z M36.76,43.36 C36.76,41.573 38.213,40.12 40,40.12 C41.787,40.12 43.24,41.573 43.24,43.36 C43.24,44.796 42.296,46.002 41,46.426 L41,49 L39,49 L39,46.426 C37.704,46.002 36.76,44.796 36.76,43.36 L36.76,43.36 Z M49,38 L31,38 L31,51 L49,51 L49,48 L46,48 L46,46 L49,46 L49,43 L46,43 L46,41 L49,41 L49,38 Z M34,36 L45.999,36 L46,31 C46.001,28.384 43.143,26.002 40.004,26 L40.001,26 C38.472,26 36.928,26.574 35.763,27.575 C34.643,28.537 34,29.786 34,31.001 L34,36 Z M48,31.001 L47.999,36 L50,36 C50.553,36 51,36.448 51,37 L51,52 C51,52.552 50.553,53 50,53 L30,53 C29.447,53 29,52.552 29,52 L29,37 C29,36.448 29.447,36 30,36 L32,36 L32,31 C32.001,29.202 32.897,27.401 34.459,26.058 C35.982,24.75 38.001,24 40.001,24 L40.004,24 C44.265,24.002 48.001,27.273 48,31.001 L48,31.001 Z M19.207,55.049 L20.828,53.877 C18.093,50.097 16.581,45.662 16.396,41 L19,41 L19,39 L16.399,39 C16.598,34.366 18.108,29.957 20.828,26.198 L19.207,25.025 C16.239,29.128 14.599,33.942 14.399,39 L12,39 L12,41 L14.396,41 C14.582,46.086 16.224,50.926 19.207,55.049 L19.207,55.049 Z M53.838,59.208 C50.069,61.936 45.648,63.446 41,63.639 L41,61 L39,61 L39,63.639 C34.352,63.447 29.93,61.937 26.159,59.208 L24.988,60.828 C29.1,63.805 33.928,65.445 39,65.639 L39,68 L41,68 L41,65.639 C46.072,65.445 50.898,63.805 55.01,60.828 L53.838,59.208 Z M26.159,20.866 C29.93,18.138 34.352,16.628 39,16.436 L39,19 L41,19 L41,16.436 C45.648,16.628 50.069,18.138 53.838,20.866 L55.01,19.246 C50.898,16.27 46.072,14.63 41,14.436 L41,12 L39,12 L39,14.436 C33.928,14.629 29.1,16.269 24.988,19.246 L26.159,20.866 Z M65.599,39 C65.399,33.942 63.759,29.128 60.79,25.025 L59.169,26.198 C61.89,29.957 63.4,34.366 63.599,39 L61,39 L61,41 L63.602,41 C63.416,45.662 61.905,50.097 59.169,53.877 L60.79,55.049 C63.774,50.926 65.415,46.086 65.602,41 L68,41 L68,39 L65.599,39 Z 
M56.386,25.064 L64.226,17.224 L62.812,15.81 L54.972,23.65 L56.386,25.064 Z M23.612,55.01 L15.772,62.85 L17.186,64.264 L25.026,56.424 L23.612,55.01 Z M28.666,27.253 L13.825,12.413 L12.411,13.827 L27.252,28.667 L28.666,27.253 Z M54.193,52.78 L67.586,66.173 L66.172,67.587 L52.779,54.194 L54.193,52.78 Z'
fill='#FFFFFF'

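The `useId` change repeated across these icon components can be illustrated outside React. This is a sketch, not the project's code: the counter-based `useId` below is a hypothetical stand-in for React's `useId()`, used only to show why per-instance gradient ids matter.

```typescript
// When two inlined SVGs both declare <linearGradient id="stsGradient">,
// url(#stsGradient) resolves against the FIRST matching id in the whole
// document, so every later icon silently reuses the first icon's gradient.
// A per-instance id removes the collision. This counter-based useId is a
// hypothetical stand-in for React's useId(), for illustration only.
let counter = 0
function useId(): string {
  return `:r${(counter++).toString(36)}:`
}

function renderIcon(): string {
  const id = useId()
  return [
    '<svg viewBox="0 0 80 80">',
    `  <defs><linearGradient id="${id}"></linearGradient></defs>`,
    `  <rect fill="url(#${id})" width="80" height="80" />`,
    '</svg>',
  ].join('\n')
}

const sts = renderIcon()
const ses = renderIcon()
// Each rendered icon now references its own gradient definition.
```

With a hard-coded id, `sts` and `ses` would share one definition and the second icon would pick up the first icon's colors; with per-instance ids each `fill` points at its own `<linearGradient>`.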

@@ -154,6 +154,7 @@ import {
RootlyIcon,
S3Icon,
SalesforceIcon,
SapS4HanaIcon,
SESIcon,
SearchIcon,
SecretsManagerIcon,
@@ -369,6 +370,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
rootly: RootlyIcon,
s3: S3Icon,
salesforce: SalesforceIcon,
sap_s4hana: SapS4HanaIcon,
search: SearchIcon,
secrets_manager: SecretsManagerIcon,
sendgrid: SendgridIcon,


@@ -42,9 +42,18 @@ Runs a browser automation task using BrowserUse
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `task` | string | Yes | What should the browser agent do |
| `variables` | json | No | Optional variables to use as secrets \(format: \{key: value\}\) |
| `save_browser_data` | boolean | No | Whether to save browser data |
| `model` | string | No | LLM model to use \(default: gpt-4o\) |
| `startUrl` | string | No | Initial page URL to start the agent on \(reduces navigation steps\) |
| `variables` | json | No | Optional secrets injected into the task \(format: \{key: value\}\) |
| `allowedDomains` | string | No | Comma-separated list of domains the agent is allowed to visit |
| `maxSteps` | number | No | Maximum number of steps the agent may take \(default 100, max 10000\) |
| `flashMode` | boolean | No | Enable flash mode \(faster, less careful navigation\) |
| `thinking` | boolean | No | Enable extended reasoning mode |
| `vision` | string | No | Vision capability: "true", "false", or "auto" |
| `systemPromptExtension` | string | No | Optional text appended to the agent system prompt \(max 2000 chars\) |
| `structuredOutput` | string | No | Stringified JSON schema for the structured output |
| `highlightElements` | boolean | No | Highlight interactive elements on the page \(default true\) |
| `metadata` | json | No | Custom key-value metadata \(up to 10 pairs\) for tracking |
| `model` | string | No | LLM model identifier \(e.g. browser-use-2.0\) |
| `apiKey` | string | Yes | API key for BrowserUse API |
| `profile_id` | string | No | Browser profile ID for persistent sessions \(cookies, login state\) |
@@ -54,7 +63,18 @@ Runs a browser automation task using BrowserUse
| --------- | ---- | ----------- |
| `id` | string | Task execution identifier |
| `success` | boolean | Task completion status |
| `output` | json | Task output data |
| `steps` | json | Execution steps taken |
| `output` | json | Final task output \(string or structured\) |
| `steps` | array | Steps the agent executed \(number, memory, nextGoal, url, actions, duration\) |
| ↳ `number` | number | Sequential step number |
| ↳ `memory` | string | Agent memory at this step |
| ↳ `evaluationPreviousGoal` | string | Evaluation of previous goal completion |
| ↳ `nextGoal` | string | Goal for the next step |
| ↳ `url` | string | Current URL of the browser |
| ↳ `screenshotUrl` | string | Optional screenshot URL |
| ↳ `actions` | array | Stringified JSON actions performed |
| ↳ `duration` | number | Step duration in seconds |
| `liveUrl` | string | Embeddable live browser session URL \(active during execution\) |
| `shareUrl` | string | Public shareable URL for the recorded session \(post-run\) |
| `sessionId` | string | Browser Use session identifier |

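An input object assembled from the parameter table above might look like the following. All values are placeholders; only the field names and stated defaults come from the table, and no BrowserUse client call is shown.

```typescript
// Illustrative only: field names and defaults taken from the docs table;
// every value here is a placeholder, not a working credential or task.
const browserUseParams = {
  task: 'Open the pricing page and list the plan names', // required
  startUrl: 'https://example.com/',      // reduces initial navigation steps
  allowedDomains: 'example.com',         // comma-separated allow-list
  maxSteps: 100,                         // default 100, max 10000
  highlightElements: true,               // default true
  vision: 'auto',                        // "true" | "false" | "auto"
  apiKey: '<your-browseruse-api-key>',   // required
}
```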

@@ -150,6 +150,7 @@
"rootly",
"s3",
"salesforce",
"sap_s4hana",
"search",
"secrets_manager",
"sendgrid",

File diff suppressed because it is too large


@@ -72,6 +72,8 @@ Run an autonomous web agent to complete tasks and extract structured data
| `provider` | string | No | AI provider to use: openai or anthropic |
| `apiKey` | string | Yes | API key for the selected provider |
| `outputSchema` | json | No | Optional JSON schema defining the structure of data the agent should return |
| `mode` | string | No | Agent tool mode: dom \(default\), hybrid, or cua |
| `maxSteps` | number | No | Maximum agent steps \(default 20, max 200\) |
#### Output
@@ -92,5 +94,7 @@ Run an autonomous web agent to complete tasks and extract structured data
| ↳ `timestamp` | number | Unix timestamp when the action was performed |
| ↳ `timeMs` | number | Time in milliseconds \(for wait actions\) |
| `structuredOutput` | object | Extracted data matching the provided output schema |
| `liveViewUrl` | string | Embeddable Browserbase live view URL \(active only while the session is running\) |
| `sessionId` | string | Browserbase session identifier |


@@ -154,6 +154,7 @@ import {
RootlyIcon,
S3Icon,
SalesforceIcon,
SapS4HanaIcon,
SESIcon,
SearchIcon,
SecretsManagerIcon,
@@ -351,6 +352,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
rootly: RootlyIcon,
s3: S3Icon,
salesforce: SalesforceIcon,
sap_s4hana: SapS4HanaIcon,
search: SearchIcon,
secrets_manager: SecretsManagerIcon,
sendgrid: SendgridIcon,


@@ -11379,6 +11379,173 @@
"integrationTypes": ["crm", "customer-support", "sales"],
"tags": ["sales-engagement", "customer-support"]
},
{
"type": "sap_s4hana",
"slug": "sap-s-4hana",
"name": "SAP S/4HANA",
"description": "Read and write SAP S/4HANA Cloud business data via OData",
"longDescription": "Connect SAP S/4HANA Cloud Public Edition with per-tenant OAuth 2.0 client credentials configured in your Communication Arrangements. Read and create business partners, customers, suppliers, sales orders, deliveries (inbound/outbound), billing documents, products, stock and material documents, purchase requisitions, purchase orders, and supplier invoices, or run arbitrary OData v2 queries against any whitelisted Communication Scenario.",
"bgColor": "#0A6ED1",
"iconName": "SapS4HanaIcon",
"docsUrl": "https://docs.sim.ai/tools/sap_s4hana",
"operations": [
{
"name": "List Business Partners",
"description": "List business partners from SAP S/4HANA Cloud (API_BUSINESS_PARTNER, A_BusinessPartner) with optional OData $filter, $top, $skip, $orderby, $select, $expand."
},
{
"name": "Get Business Partner",
"description": "Retrieve a single business partner by BusinessPartner key from SAP S/4HANA Cloud (API_BUSINESS_PARTNER, A_BusinessPartner)."
},
{
"name": "Create Business Partner",
"description": "Create a business partner in SAP S/4HANA Cloud (API_BUSINESS_PARTNER, A_BusinessPartner). For Person category 1 provide FirstName and LastName. For Organization category 2 provide OrganizationBPName1."
},
{
"name": "Update Business Partner",
"description": "Update fields on an A_BusinessPartner entity in SAP S/4HANA Cloud (API_BUSINESS_PARTNER). PATCH only sends the fields you provide; existing values are preserved. If-Match defaults to a wildcard (unconditional) — for safe concurrent updates pass the ETag from a prior GET to avoid lost updates."
},
{
"name": "List Customers",
"description": "List customers from SAP S/4HANA Cloud (API_BUSINESS_PARTNER, A_Customer) with optional OData $filter, $top, $skip, $orderby, $select, $expand."
},
{
"name": "Get Customer",
"description": "Retrieve a single customer by Customer key from SAP S/4HANA Cloud (API_BUSINESS_PARTNER, A_Customer)."
},
{
"name": "Update Customer",
"description": "Update fields on an A_Customer entity in SAP S/4HANA Cloud (API_BUSINESS_PARTNER). PATCH only sends the fields you provide; existing values are preserved. A_Customer PATCH is limited to modifiable fields such as OrderIsBlockedForCustomer, DeliveryIsBlock, BillingIsBlockedForCustomer, PostingIsBlocked, and DeletionIndicator. If-Match defaults to a wildcard - for safe concurrent updates pass the ETag from a prior GET to avoid lost updates."
},
{
"name": "List Suppliers",
"description": "List suppliers from SAP S/4HANA Cloud (API_BUSINESS_PARTNER, A_Supplier) with optional OData $filter, $top, $skip, $orderby, $select, $expand."
},
{
"name": "Get Supplier",
"description": "Retrieve a single supplier by Supplier key from SAP S/4HANA Cloud (API_BUSINESS_PARTNER, A_Supplier)."
},
{
"name": "Update Supplier",
"description": "Update fields on an A_Supplier entity in SAP S/4HANA Cloud (API_BUSINESS_PARTNER). PATCH only sends the fields you provide; existing values are preserved. A_Supplier PATCH is limited to modifiable fields such as PostingIsBlocked, PurchasingIsBlocked, PaymentIsBlockedForSupplier, DeletionIndicator, and SupplierAccountGroup. If-Match defaults to a wildcard - for safe concurrent updates pass the ETag from a prior GET to avoid lost updates."
},
{
"name": "List Sales Orders",
"description": "List sales orders from SAP S/4HANA Cloud (API_SALES_ORDER_SRV, A_SalesOrder) with optional OData $filter, $top, $skip, $orderby, $select, $expand."
},
{
"name": "Get Sales Order",
"description": "Retrieve a single sales order by SalesOrder key from SAP S/4HANA Cloud (API_SALES_ORDER_SRV, A_SalesOrder)."
},
{
"name": "Create Sales Order",
"description": "Create a sales order in SAP S/4HANA Cloud (API_SALES_ORDER_SRV, A_SalesOrder) with deep insert of sales order items via to_Item."
},
{
"name": "Update Sales Order",
"description": "Update fields on an A_SalesOrder entity in SAP S/4HANA Cloud (API_SALES_ORDER_SRV). PATCH only sends the fields you provide; existing values are preserved. If-Match defaults to a wildcard (unconditional) — for safe concurrent updates pass the ETag from a prior GET to avoid lost updates."
},
{
"name": "Delete Sales Order",
"description": "Delete an A_SalesOrder entity in SAP S/4HANA Cloud (API_SALES_ORDER_SRV). Only orders without subsequent documents (deliveries, invoices) can be deleted; otherwise reject items via update instead."
},
{
"name": "List Outbound Deliveries",
"description": "List outbound deliveries from SAP S/4HANA Cloud (API_OUTBOUND_DELIVERY_SRV;v=0002, A_OutbDeliveryHeader) with optional OData $filter, $top, $skip, $orderby, $select, $expand."
},
{
"name": "Get Outbound Delivery",
"description": "Retrieve a single outbound delivery by DeliveryDocument key from SAP S/4HANA Cloud (API_OUTBOUND_DELIVERY_SRV;v=0002, A_OutbDeliveryHeader)."
},
{
"name": "List Inbound Deliveries",
"description": "List inbound deliveries from SAP S/4HANA Cloud (API_INBOUND_DELIVERY_SRV;v=0002, A_InbDeliveryHeader) with optional OData $filter, $top, $skip, $orderby, $select, $expand."
},
{
"name": "Get Inbound Delivery",
"description": "Retrieve a single inbound delivery by DeliveryDocument key from SAP S/4HANA Cloud (API_INBOUND_DELIVERY_SRV;v=0002, A_InbDeliveryHeader)."
},
{
"name": "List Billing Documents",
"description": "List billing documents (customer invoices) from SAP S/4HANA Cloud (API_BILLING_DOCUMENT_SRV, A_BillingDocument) with optional OData $filter, $top, $skip, $orderby, $select, $expand."
},
{
"name": "Get Billing Document",
"description": "Retrieve a single billing document (customer invoice) by BillingDocument key from SAP S/4HANA Cloud (API_BILLING_DOCUMENT_SRV, A_BillingDocument)."
},
{
"name": "List Products",
"description": "List products (materials) from SAP S/4HANA Cloud (API_PRODUCT_SRV, A_Product) with optional OData $filter, $top, $skip, $orderby, $select, $expand."
},
{
"name": "Get Product",
"description": "Retrieve a single product (material) by Product key from SAP S/4HANA Cloud (API_PRODUCT_SRV, A_Product)."
},
{
"name": "Update Product",
"description": "Update fields on an A_Product entity in SAP S/4HANA Cloud (API_PRODUCT_SRV). PATCH only sends the fields you provide; existing values are preserved. Flat scalar header fields only — deep/multi-entity updates across navigation properties are not supported by API_PRODUCT_SRV PATCH/PUT (see SAP KBA 2833338); update child entities (plant, valuation, sales data, etc.) via their own endpoints. If-Match defaults to a wildcard (unconditional) — for safe concurrent updates pass the ETag from a prior GET."
},
{
"name": "List Material Stock",
"description": "List material stock quantities from SAP S/4HANA Cloud (API_MATERIAL_STOCK_SRV, A_MatlStkInAcctMod). The entity uses an 11-field composite key (Material, Plant, StorageLocation, Batch, Supplier, Customer, WBSElementInternalID, SDDocument, SDDocumentItem, InventorySpecialStockType, InventoryStockType) — query with $filter on these fields instead of a direct key lookup."
},
{
"name": "List Material Documents",
"description": "List material document headers (goods movements) from SAP S/4HANA Cloud (API_MATERIAL_DOCUMENT_SRV, A_MaterialDocumentHeader) with optional OData $filter, $top, $skip, $orderby, $select, $expand."
},
{
"name": "List Purchase Requisitions",
"description": "List purchase requisitions from SAP S/4HANA Cloud (API_PURCHASEREQ_PROCESS_SRV, A_PurchaseRequisitionHeader) with optional OData $filter, $top, $skip, $orderby, $select, $expand. Note: API_PURCHASEREQ_PROCESS_SRV is deprecated since S/4HANA Cloud Public Edition 2402; the successor is API_PURCHASEREQUISITION_2 (OData v4). This tool still works against tenants where the legacy service is enabled."
},
{
"name": "Get Purchase Requisition",
"description": "Retrieve a single purchase requisition by PurchaseRequisition key from SAP S/4HANA Cloud (API_PURCHASEREQ_PROCESS_SRV, A_PurchaseRequisitionHeader). Note: API_PURCHASEREQ_PROCESS_SRV is deprecated since S/4HANA Cloud Public Edition 2402; the successor is API_PURCHASEREQUISITION_2 (OData v4). This tool still works against tenants where the legacy service is enabled."
},
{
"name": "Create Purchase Requisition",
"description": "Create a purchase requisition in SAP S/4HANA Cloud (API_PURCHASEREQ_PROCESS_SRV, A_PurchaseRequisitionHeader). PurchaseRequisition is auto-assigned by SAP from the document number range; provide line items via the to_PurchaseReqnItem deep-insert array. Note: API_PURCHASEREQ_PROCESS_SRV is deprecated since S/4HANA Cloud Public Edition 2402; the successor is API_PURCHASEREQUISITION_2 (OData v4). This tool still works against tenants where the legacy service is enabled."
},
{
"name": "Update Purchase Requisition",
"description": "Update fields on an A_PurchaseRequisitionHeader entity in SAP S/4HANA Cloud (API_PURCHASEREQ_PROCESS_SRV; deprecated since S/4HANA 2402, successor is API_PURCHASEREQUISITION_2 OData v4). PATCH only sends the fields you provide; existing values are preserved. If-Match defaults to a wildcard - for safe concurrent updates pass the ETag from a prior GET to avoid lost updates."
},
{
"name": "List Purchase Orders",
"description": "List purchase orders from SAP S/4HANA Cloud (API_PURCHASEORDER_PROCESS_SRV, A_PurchaseOrder) with optional OData $filter, $top, $skip, $orderby, $select, $expand."
},
{
"name": "Get Purchase Order",
"description": "Retrieve a single purchase order by PurchaseOrder key from SAP S/4HANA Cloud (API_PURCHASEORDER_PROCESS_SRV, A_PurchaseOrder)."
},
{
"name": "Create Purchase Order",
"description": "Create a purchase order in SAP S/4HANA Cloud (API_PURCHASEORDER_PROCESS_SRV, A_PurchaseOrder). PurchaseOrder is auto-assigned by SAP from the document number range; provide line items via the body parameter."
},
{
"name": "Update Purchase Order",
"description": "Update fields on an A_PurchaseOrder entity in SAP S/4HANA Cloud (API_PURCHASEORDER_PROCESS_SRV). PATCH only sends the fields you provide; existing values are preserved. If-Match defaults to a wildcard (unconditional) — for safe concurrent updates pass the ETag from a prior GET to avoid lost updates."
},
{
"name": "List Supplier Invoices",
"description": "List supplier invoices from SAP S/4HANA Cloud (API_SUPPLIERINVOICE_PROCESS_SRV, A_SupplierInvoice) with optional OData $filter, $top, $skip, $orderby, $select, $expand."
},
{
"name": "Get Supplier Invoice",
"description": "Retrieve a single supplier invoice by composite key (SupplierInvoice + FiscalYear) from SAP S/4HANA Cloud (API_SUPPLIERINVOICE_PROCESS_SRV, A_SupplierInvoice)."
},
{
"name": "OData Query (advanced)",
"description": "Make an arbitrary OData v2 call against any SAP S/4HANA Cloud whitelisted Communication Scenario. Use when no dedicated tool exists for the entity. The proxy handles auth, CSRF, and OData unwrapping."
}
],
"operationCount": 37,
"triggers": [],
"triggerCount": 0,
"authType": "none",
"category": "tools",
"integrationTypes": ["other", "developer-tools"],
"tags": ["automation"]
},
{
"type": "search",
"slug": "search",

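The "List" operations above all take the same OData v2 query options (`$filter`, `$top`, `$skip`, `$orderby`, `$select`, `$expand`). A minimal sketch of assembling such a URL, assuming the standard S/4HANA `/sap/opu/odata/sap/<service>/<entity>` path layout; the `buildODataUrl` helper and the tenant host are hypothetical, not part of the integration.

```typescript
// Hypothetical helper: builds a list URL for the OData v2 services named
// in the registry entries above. The /sap/opu/odata/sap/... path layout
// is the conventional S/4HANA one and is an assumption here.
function buildODataUrl(
  base: string,
  service: string,
  entity: string,
  params: Record<string, string | number>
): string {
  const query = Object.entries(params)
    .map(([k, v]) => `$${k}=${encodeURIComponent(String(v))}`)
    .join('&')
  return `${base}/sap/opu/odata/sap/${service}/${entity}${query ? `?${query}` : ''}`
}

const url = buildODataUrl(
  'https://my.example.s4hana.cloud', // placeholder tenant host
  'API_BUSINESS_PARTNER',
  'A_BusinessPartner',
  { filter: "BusinessPartnerCategory eq '1'", top: 10 }
)
```

The same shape covers the other services (`API_SALES_ORDER_SRV/A_SalesOrder`, `API_PRODUCT_SRV/A_Product`, and so on); only the service and entity segments change.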

@@ -16,6 +16,14 @@ function getMothershipUrl(environment: string): string | null {
return ENV_URLS[environment] ?? null
}
const ENDPOINT_PATTERN = /^[a-zA-Z0-9_-]+(?:\/[a-zA-Z0-9_-]+)*$/
function isValidEndpoint(endpoint: string): boolean {
if (!endpoint) return false
if (endpoint.includes('..')) return false
return ENDPOINT_PATTERN.test(endpoint)
}
async function isAdminRequestAuthorized() {
const session = await getSession()
if (!session?.user?.id) return false
@@ -57,6 +65,10 @@ export const POST = withRouteHandler(async (req: NextRequest) => {
return NextResponse.json({ error: 'endpoint query param required' }, { status: 400 })
}
if (!isValidEndpoint(endpoint)) {
return NextResponse.json({ error: 'invalid endpoint' }, { status: 400 })
}
const baseUrl = getMothershipUrl(environment)
if (!baseUrl) {
return NextResponse.json(
@@ -108,6 +120,10 @@ export const GET = withRouteHandler(async (req: NextRequest) => {
return NextResponse.json({ error: 'endpoint query param required' }, { status: 400 })
}
if (!isValidEndpoint(endpoint)) {
return NextResponse.json({ error: 'invalid endpoint' }, { status: 400 })
}
const baseUrl = getMothershipUrl(environment)
if (!baseUrl) {
return NextResponse.json(

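The endpoint allow-pattern added above can be exercised as a standalone sketch. The code mirrors the hunk; the sample inputs are illustrative.

```typescript
// Standalone copy of the validation shown in the diff above.
// Only slash-separated runs of [a-zA-Z0-9_-] segments pass.
const ENDPOINT_PATTERN = /^[a-zA-Z0-9_-]+(?:\/[a-zA-Z0-9_-]+)*$/

function isValidEndpoint(endpoint: string): boolean {
  if (!endpoint) return false
  // Reject traversal sequences outright. The pattern below would not
  // match '.' anyway, so this check is defense in depth.
  if (endpoint.includes('..')) return false
  return ENDPOINT_PATTERN.test(endpoint)
}

// Accepted: 'admin/users', 'billing_export-v2'
// Rejected: '', '../etc/passwd', 'a b', 'https://evil.example', '/leading/slash'
```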

@@ -32,7 +32,9 @@ export const GET = withRouteHandler(async (request: NextRequest) => {
const returnUrl = request.nextUrl.searchParams.get('returnUrl')
if (!shopDomain) {
const returnUrlParam = returnUrl ? encodeURIComponent(returnUrl) : ''
const safeReturnUrl =
returnUrl && isSameOrigin(returnUrl) ? encodeURIComponent(returnUrl) : ''
const returnUrlJsLiteral = JSON.stringify(safeReturnUrl)
return new NextResponse(
`<!DOCTYPE html>
<html>
@@ -120,7 +122,7 @@ export const GET = withRouteHandler(async (request: NextRequest) => {
</div>
<script>
const returnUrl = '${returnUrlParam}';
const returnUrl = ${returnUrlJsLiteral};
function handleSubmit(e) {
e.preventDefault();
let shop = document.getElementById('shop').value.trim().toLowerCase();

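The `JSON.stringify` change above is the core of the fix: interpolating a raw value into `const returnUrl = '${...}';` lets a crafted value break out of the quotes. A minimal sketch of the escaping idea; the extra `<` escaping shown here is additional hardening beyond the diff, noted as such.

```typescript
// JSON.stringify produces a valid JS string literal with quotes and
// backslashes escaped, so the value cannot terminate the literal.
// Escaping '<' as \u003c (extra hardening, not in the diff above)
// also prevents a literal '</script>' inside the value from closing
// the surrounding inline <script> block.
function toJsStringLiteral(value: string): string {
  return JSON.stringify(value).replace(/</g, '\\u003c')
}

const hostile = `'; alert(1); //</script>`
const literal = toJsStringLiteral(hostile)
// Safe to splice into an inline script as: const returnUrl = ${literal};
```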

@@ -112,6 +112,16 @@ vi.mock('@/lib/core/storage', () => ({
getStorageMethod: mockGetStorageMethod,
}))
const { mockCheckRateLimitDirect } = vi.hoisted(() => ({
mockCheckRateLimitDirect: vi.fn(),
}))
vi.mock('@/lib/core/rate-limiter', () => ({
RateLimiter: class {
checkRateLimitDirect = mockCheckRateLimitDirect
},
}))
vi.mock('@/lib/messaging/email/mailer', () => ({
sendEmail: mockSendEmail,
}))
@@ -234,6 +244,13 @@ describe('Chat OTP API Route', () => {
}))
requestUtilsMockFns.mockGenerateRequestId.mockReturnValue('req-123')
requestUtilsMockFns.mockGetClientIp.mockReturnValue('1.2.3.4')
mockCheckRateLimitDirect.mockResolvedValue({
allowed: true,
remaining: 10,
resetAt: new Date(Date.now() + 60_000),
})
mockZodParse.mockImplementation((data: unknown) => data)
@@ -283,6 +300,134 @@ describe('Chat OTP API Route', () => {
})
})
describe('POST - Rate limiting', () => {
const buildDeploymentSelect = () =>
mockDbSelect.mockImplementationOnce(() => ({
from: vi.fn().mockReturnValue({
where: vi.fn().mockReturnValue({
limit: vi.fn().mockResolvedValue([
{
id: mockChatId,
authType: 'email',
allowedEmails: [mockEmail],
title: 'Test Chat',
},
]),
}),
}),
}))
it('returns 429 with Retry-After when IP rate limit is exceeded', async () => {
mockCheckRateLimitDirect.mockResolvedValueOnce({
allowed: false,
remaining: 0,
resetAt: new Date(Date.now() + 900_000),
retryAfterMs: 900_000,
})
const headerSet = vi.fn()
mockCreateErrorResponse.mockImplementationOnce((message: string, status: number) => ({
json: () => Promise.resolve({ error: message }),
status,
headers: { set: headerSet },
}))
const request = new NextRequest('http://localhost:3000/api/chat/test/otp', {
method: 'POST',
body: JSON.stringify({ email: mockEmail }),
})
const response = await POST(request, {
params: Promise.resolve({ identifier: mockIdentifier }),
})
expect(response.status).toBe(429)
expect(headerSet).toHaveBeenCalledWith('Retry-After', '900')
expect(mockSendEmail).not.toHaveBeenCalled()
expect(mockDbSelect).not.toHaveBeenCalled()
})
it('returns 429 with Retry-After when email rate limit is exceeded', async () => {
mockCheckRateLimitDirect
.mockResolvedValueOnce({
allowed: true,
remaining: 9,
resetAt: new Date(Date.now() + 60_000),
})
.mockResolvedValueOnce({
allowed: false,
remaining: 0,
resetAt: new Date(Date.now() + 900_000),
retryAfterMs: 900_000,
})
const headerSet = vi.fn()
mockCreateErrorResponse.mockImplementationOnce((message: string, status: number) => ({
json: () => Promise.resolve({ error: message }),
status,
headers: { set: headerSet },
}))
buildDeploymentSelect()
const request = new NextRequest('http://localhost:3000/api/chat/test/otp', {
method: 'POST',
body: JSON.stringify({ email: mockEmail }),
})
const response = await POST(request, {
params: Promise.resolve({ identifier: mockIdentifier }),
})
expect(response.status).toBe(429)
expect(headerSet).toHaveBeenCalledWith('Retry-After', '900')
expect(mockSendEmail).not.toHaveBeenCalled()
})
it('falls back to refill interval when retryAfterMs is missing', async () => {
mockCheckRateLimitDirect.mockResolvedValueOnce({
allowed: false,
remaining: 0,
resetAt: new Date(Date.now() + 900_000),
})
const headerSet = vi.fn()
mockCreateErrorResponse.mockImplementationOnce((message: string, status: number) => ({
json: () => Promise.resolve({ error: message }),
status,
headers: { set: headerSet },
}))
const request = new NextRequest('http://localhost:3000/api/chat/test/otp', {
method: 'POST',
body: JSON.stringify({ email: mockEmail }),
})
await POST(request, { params: Promise.resolve({ identifier: mockIdentifier }) })
expect(headerSet).toHaveBeenCalledWith('Retry-After', '900')
})
it('skips IP rate limit when client IP is unknown', async () => {
requestUtilsMockFns.mockGetClientIp.mockReturnValueOnce('unknown')
buildDeploymentSelect()
const request = new NextRequest('http://localhost:3000/api/chat/test/otp', {
method: 'POST',
body: JSON.stringify({ email: mockEmail }),
})
await POST(request, { params: Promise.resolve({ identifier: mockIdentifier }) })
// Only the email-scoped check should run, not the IP-scoped one
expect(mockCheckRateLimitDirect).toHaveBeenCalledTimes(1)
expect(mockCheckRateLimitDirect).toHaveBeenCalledWith(
expect.stringContaining('chat-otp:email:'),
expect.any(Object)
)
})
})
describe('POST - Store OTP (Database path)', () => {
beforeEach(() => {
mockGetStorageMethod.mockReturnValue('database')


@@ -8,9 +8,11 @@ import type { NextRequest } from 'next/server'
import { z } from 'zod'
import { renderOTPEmail } from '@/components/emails'
import { getRedisClient } from '@/lib/core/config/redis'
import type { TokenBucketConfig } from '@/lib/core/rate-limiter'
import { RateLimiter } from '@/lib/core/rate-limiter'
import { addCorsHeaders, isEmailAllowed } from '@/lib/core/security/deployment'
import { getStorageMethod } from '@/lib/core/storage'
import { generateRequestId } from '@/lib/core/utils/request'
import { generateRequestId, getClientIp } from '@/lib/core/utils/request'
import { withRouteHandler } from '@/lib/core/utils/with-route-handler'
import { sendEmail } from '@/lib/messaging/email/mailer'
import { setChatAuthCookie } from '@/app/api/chat/utils'
@@ -18,6 +20,20 @@ import { createErrorResponse, createSuccessResponse } from '@/app/api/workflows/
const logger = createLogger('ChatOtpAPI')
const rateLimiter = new RateLimiter()
const OTP_IP_RATE_LIMIT: TokenBucketConfig = {
maxTokens: 10,
refillRate: 10,
refillIntervalMs: 15 * 60_000,
}
const OTP_EMAIL_RATE_LIMIT: TokenBucketConfig = {
maxTokens: 3,
refillRate: 3,
refillIntervalMs: 15 * 60_000,
}
function generateOTP(): string {
return randomInt(100000, 1000000).toString()
}
@@ -214,6 +230,23 @@ export const POST = withRouteHandler(
const requestId = generateRequestId()
try {
const ip = getClientIp(request)
if (ip !== 'unknown') {
const ipRateLimit = await rateLimiter.checkRateLimitDirect(
`chat-otp:ip:${identifier}:${ip}`,
OTP_IP_RATE_LIMIT
)
if (!ipRateLimit.allowed) {
logger.warn(`[${requestId}] OTP IP rate limit exceeded for ${identifier} from ${ip}`)
const retryAfter = Math.ceil(
(ipRateLimit.retryAfterMs ?? OTP_IP_RATE_LIMIT.refillIntervalMs) / 1000
)
const response = createErrorResponse('Too many requests. Please try again later.', 429)
response.headers.set('Retry-After', String(retryAfter))
return addCorsHeaders(response, request)
}
}
const body = await request.json()
const { email } = otpRequestSchema.parse(body)
@@ -255,6 +288,25 @@ export const POST = withRouteHandler(
)
}
const emailRateLimit = await rateLimiter.checkRateLimitDirect(
`chat-otp:email:${deployment.id}:${email.toLowerCase()}`,
OTP_EMAIL_RATE_LIMIT
)
if (!emailRateLimit.allowed) {
logger.warn(
`[${requestId}] OTP email rate limit exceeded for ${email} on chat ${deployment.id}`
)
const retryAfter = Math.ceil(
(emailRateLimit.retryAfterMs ?? OTP_EMAIL_RATE_LIMIT.refillIntervalMs) / 1000
)
const response = createErrorResponse(
'Too many verification code requests. Please try again later.',
429
)
response.headers.set('Retry-After', String(retryAfter))
return addCorsHeaders(response, request)
}
const otp = generateOTP()
await storeOTP(email, deployment.id, otp)

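The `Retry-After` math in the route above (and asserted by the tests) can be isolated as follows. The config shape mirrors `TokenBucketConfig` from the diff; the result shape is inferred from how the route reads it.

```typescript
// Shapes mirrored from the diff above; RateLimitResult's fields are
// inferred from how the route handler consumes checkRateLimitDirect.
interface TokenBucketConfig {
  maxTokens: number
  refillRate: number
  refillIntervalMs: number
}
interface RateLimitResult {
  allowed: boolean
  remaining: number
  retryAfterMs?: number
}

const OTP_EMAIL_RATE_LIMIT: TokenBucketConfig = {
  maxTokens: 3,
  refillRate: 3,
  refillIntervalMs: 15 * 60_000, // 3 codes per 15 minutes per email
}

// Retry-After is whole seconds, rounded up; when the limiter does not
// report retryAfterMs, fall back to the full refill interval (the
// "falls back to refill interval" test case above).
function retryAfterSeconds(result: RateLimitResult, config: TokenBucketConfig): number {
  return Math.ceil((result.retryAfterMs ?? config.refillIntervalMs) / 1000)
}
```

This is why both 429 paths in the tests expect `Retry-After: 900`: a reported `retryAfterMs` of 900 000 ms and the 15-minute fallback both resolve to 900 seconds.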

@@ -1,10 +1,7 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import {
authenticateCopilotRequestSessionOnly,
createUnauthorizedResponse,
} from '@/lib/copilot/request/http'
import { checkInternalApiKey, createUnauthorizedResponse } from '@/lib/copilot/request/http'
import { env } from '@/lib/core/config/env'
import { withRouteHandler } from '@/lib/core/utils/with-route-handler'
@@ -21,8 +18,8 @@ const TrainingExampleSchema = z.object({
})
export const POST = withRouteHandler(async (request: NextRequest) => {
-const { userId, isAuthenticated } = await authenticateCopilotRequestSessionOnly()
-if (!isAuthenticated || !userId) {
+const auth = checkInternalApiKey(request)
+if (!auth.success) {
return createUnauthorizedResponse()
}

View File

@@ -1,10 +1,7 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
-import {
-authenticateCopilotRequestSessionOnly,
-createUnauthorizedResponse,
-} from '@/lib/copilot/request/http'
+import { checkInternalApiKey, createUnauthorizedResponse } from '@/lib/copilot/request/http'
import { env } from '@/lib/core/config/env'
import { withRouteHandler } from '@/lib/core/utils/with-route-handler'
@@ -27,8 +24,8 @@ const TrainingDataSchema = z.object({
})
export const POST = withRouteHandler(async (request: NextRequest) => {
-const { userId, isAuthenticated } = await authenticateCopilotRequestSessionOnly()
-if (!isAuthenticated || !userId) {
+const auth = checkInternalApiKey(request)
+if (!auth.success) {
return createUnauthorizedResponse()
}

View File

@@ -12,6 +12,7 @@ import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { withRouteHandler } from '@/lib/core/utils/with-route-handler'
import { normalizeEmail } from '@/lib/invitations/core'
import { syncAllWebhooksForCredentialSet } from '@/lib/webhooks/utils.server'
const logger = createLogger('CredentialSetInviteToken')
@@ -111,6 +112,21 @@ export const POST = withRouteHandler(
return NextResponse.json({ error: 'Invitation has expired' }, { status: 410 })
}
if (invitation.email) {
const sessionEmail = session.user.email
if (!sessionEmail || normalizeEmail(sessionEmail) !== normalizeEmail(invitation.email)) {
logger.warn('Rejected credential set invitation accept due to email mismatch', {
invitationId: invitation.id,
credentialSetId: invitation.credentialSetId,
userId: session.user.id,
})
return NextResponse.json(
{ error: 'This invitation was sent to a different email address' },
{ status: 403 }
)
}
}
const existingMember = await db
.select()
.from(credentialSetMember)

View File

@@ -8,21 +8,61 @@ import {
isUsingCloudStorage,
type StorageContext,
} from '@/lib/uploads'
import {
signUploadToken,
type UploadTokenPayload,
verifyUploadToken,
} from '@/lib/uploads/core/upload-token'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
const logger = createLogger('MultipartUploadAPI')
const ALLOWED_UPLOAD_CONTEXTS = new Set<StorageContext>([
'knowledge-base',
'chat',
'copilot',
'mothership',
'execution',
'workspace',
'profile-pictures',
'og-images',
'logs',
'workspace-logos',
])
interface InitiateMultipartRequest {
fileName: string
contentType: string
fileSize: number
workspaceId: string
context?: StorageContext
}
-interface GetPartUrlsRequest {
-uploadId: string
-key: string
+interface TokenBoundRequest {
+uploadToken: string
+}
+interface GetPartUrlsRequest extends TokenBoundRequest {
partNumbers: number[]
-context?: StorageContext
}
interface CompleteSingleRequest extends TokenBoundRequest {
parts: unknown
}
interface CompleteBatchRequest {
uploads: Array<TokenBoundRequest & { parts: unknown }>
}
const verifyTokenForUser = (token: string | undefined, userId: string) => {
if (!token || typeof token !== 'string') {
return null
}
const result = verifyUploadToken(token)
if (!result.valid || result.payload.userId !== userId) {
return null
}
return result.payload
}
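`signUploadToken` and `verifyUploadToken` are imported from `@/lib/uploads/core/upload-token` and their internals are not shown in this diff. A plausible shape for such a helper pair — an HMAC-signed, expiring payload — might look like the sketch below; the names, secret handling, and token layout here are assumptions for illustration only:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto'

// Hypothetical payload shape mirroring what the route binds to a token.
interface UploadTokenPayload {
  uploadId: string
  key: string
  userId: string
  workspaceId: string
  context: string
  exp: number
}

// Assumption: the real implementation reads its secret from env config.
const SECRET = 'example-secret'

function signToken(payload: Omit<UploadTokenPayload, 'exp'>, ttlMs = 60 * 60 * 1000): string {
  // Encode payload with an expiry, then append an HMAC over the encoded body.
  const body = Buffer.from(JSON.stringify({ ...payload, exp: Date.now() + ttlMs })).toString(
    'base64url'
  )
  const mac = createHmac('sha256', SECRET).update(body).digest('base64url')
  return `${body}.${mac}`
}

function verifyToken(token: string): { valid: true; payload: UploadTokenPayload } | { valid: false } {
  const [body, mac] = token.split('.')
  if (!body || !mac) return { valid: false }
  const expected = createHmac('sha256', SECRET).update(body).digest('base64url')
  const a = Buffer.from(mac)
  const b = Buffer.from(expected)
  // Constant-time compare; lengths must match before timingSafeEqual.
  if (a.length !== b.length || !timingSafeEqual(a, b)) return { valid: false }
  const payload = JSON.parse(Buffer.from(body, 'base64url').toString()) as UploadTokenPayload
  if (payload.exp < Date.now()) return { valid: false }
  return { valid: true, payload }
}
```

This is why `verifyTokenForUser` above only needs to check `payload.userId` after verification: a valid MAC already proves the server minted the token with exactly that payload.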
export const POST = withRouteHandler(async (request: NextRequest) => {
@@ -31,6 +71,7 @@ export const POST = withRouteHandler(async (request: NextRequest) => {
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const userId = session.user.id
const action = request.nextUrl.searchParams.get('action')
@@ -45,32 +86,34 @@ export const POST = withRouteHandler(async (request: NextRequest) => {
switch (action) {
case 'initiate': {
-const data: InitiateMultipartRequest = await request.json()
-const { fileName, contentType, fileSize, context = 'knowledge-base' } = data
+const data = (await request.json()) as InitiateMultipartRequest
+const { fileName, contentType, fileSize, workspaceId, context = 'knowledge-base' } = data
if (!workspaceId || typeof workspaceId !== 'string') {
return NextResponse.json({ error: 'workspaceId is required' }, { status: 400 })
}
if (!ALLOWED_UPLOAD_CONTEXTS.has(context)) {
return NextResponse.json({ error: 'Invalid storage context' }, { status: 400 })
}
const permission = await getUserEntityPermissions(userId, 'workspace', workspaceId)
if (permission !== 'write' && permission !== 'admin') {
return NextResponse.json({ error: 'Forbidden' }, { status: 403 })
}
const config = getStorageConfig(context)
+let uploadId: string
+let key: string
if (storageProvider === 's3') {
const { initiateS3MultipartUpload } = await import('@/lib/uploads/providers/s3/client')
-const result = await initiateS3MultipartUpload({
-fileName,
-contentType,
-fileSize,
-})
-logger.info(
-`Initiated S3 multipart upload for ${fileName} (context: ${context}): ${result.uploadId}`
-)
-return NextResponse.json({
-uploadId: result.uploadId,
-key: result.key,
-})
-}
-if (storageProvider === 'blob') {
+const result = await initiateS3MultipartUpload({ fileName, contentType, fileSize })
+uploadId = result.uploadId
+key = result.key
+} else if (storageProvider === 'blob') {
const { initiateMultipartUpload } = await import('@/lib/uploads/providers/blob/client')
const result = await initiateMultipartUpload({
fileName,
contentType,
@@ -82,46 +125,55 @@ export const POST = withRouteHandler(async (request: NextRequest) => {
connectionString: config.connectionString,
},
})
-logger.info(
-`Initiated Azure multipart upload for ${fileName} (context: ${context}): ${result.uploadId}`
+uploadId = result.uploadId
+key = result.key
+} else {
+return NextResponse.json(
+{ error: `Unsupported storage provider: ${storageProvider}` },
+{ status: 400 }
)
-return NextResponse.json({
-uploadId: result.uploadId,
-key: result.key,
-})
}
-return NextResponse.json(
-{ error: `Unsupported storage provider: ${storageProvider}` },
-{ status: 400 }
+const uploadToken = signUploadToken({
+uploadId,
+key,
+userId,
+workspaceId,
+context,
+})
+logger.info(
+`Initiated ${storageProvider} multipart upload for ${fileName} (context: ${context}, workspace: ${workspaceId}): ${uploadId}`
)
+return NextResponse.json({ uploadId, key, uploadToken })
}
case 'get-part-urls': {
-const data: GetPartUrlsRequest = await request.json()
-const { uploadId, key, partNumbers, context = 'knowledge-base' } = data
+const data = (await request.json()) as GetPartUrlsRequest
+const { partNumbers } = data
+const tokenPayload = verifyTokenForUser(data.uploadToken, userId)
+if (!tokenPayload) {
+return NextResponse.json({ error: 'Invalid or expired upload token' }, { status: 403 })
+}
+const { uploadId, key, context } = tokenPayload
const config = getStorageConfig(context)
if (storageProvider === 's3') {
const { getS3MultipartPartUrls } = await import('@/lib/uploads/providers/s3/client')
const presignedUrls = await getS3MultipartPartUrls(key, uploadId, partNumbers)
return NextResponse.json({ presignedUrls })
}
if (storageProvider === 'blob') {
const { getMultipartPartUrls } = await import('@/lib/uploads/providers/blob/client')
const presignedUrls = await getMultipartPartUrls(key, partNumbers, {
containerName: config.containerName!,
accountName: config.accountName!,
accountKey: config.accountKey,
connectionString: config.connectionString,
})
return NextResponse.json({ presignedUrls })
}
@@ -132,24 +184,32 @@ export const POST = withRouteHandler(async (request: NextRequest) => {
}
case 'complete': {
-const data = await request.json()
-const context: StorageContext = data.context || 'knowledge-base'
+const data = (await request.json()) as CompleteSingleRequest | CompleteBatchRequest
-const config = getStorageConfig(context)
+if ('uploads' in data && Array.isArray(data.uploads)) {
+const verified = data.uploads.map((upload) => {
+const payload = verifyTokenForUser(upload.uploadToken, userId)
+return payload ? { payload, parts: upload.parts } : null
+})
+if (verified.some((entry) => entry === null)) {
+return NextResponse.json({ error: 'Invalid or expired upload token' }, { status: 403 })
+}
+const verifiedEntries = verified.filter(
+(entry): entry is { payload: UploadTokenPayload; parts: unknown } => entry !== null
+)
-if ('uploads' in data) {
const results = await Promise.all(
-data.uploads.map(async (upload: any) => {
-const { uploadId, key } = upload
+verifiedEntries.map(async ({ payload, parts }) => {
+const { uploadId, key, context } = payload
+const config = getStorageConfig(context)
if (storageProvider === 's3') {
const { completeS3MultipartUpload } = await import(
'@/lib/uploads/providers/s3/client'
)
-const parts = upload.parts // S3 format: { ETag, PartNumber }
-const result = await completeS3MultipartUpload(key, uploadId, parts)
+const result = await completeS3MultipartUpload(key, uploadId, parts as any)
return {
success: true,
location: result.location,
@@ -161,15 +221,12 @@ export const POST = withRouteHandler(async (request: NextRequest) => {
const { completeMultipartUpload } = await import(
'@/lib/uploads/providers/blob/client'
)
-const parts = upload.parts // Azure format: { blockId, partNumber }
-const result = await completeMultipartUpload(key, parts, {
+const result = await completeMultipartUpload(key, parts as any, {
containerName: config.containerName!,
accountName: config.accountName!,
accountKey: config.accountKey,
connectionString: config.connectionString,
})
return {
success: true,
location: result.location,
@@ -182,19 +239,23 @@ export const POST = withRouteHandler(async (request: NextRequest) => {
})
)
-logger.info(`Completed ${data.uploads.length} multipart uploads (context: ${context})`)
+logger.info(`Completed ${verifiedEntries.length} multipart uploads`)
return NextResponse.json({ results })
}
-const { uploadId, key, parts } = data
+const single = data as CompleteSingleRequest
+const tokenPayload = verifyTokenForUser(single.uploadToken, userId)
+if (!tokenPayload) {
+return NextResponse.json({ error: 'Invalid or expired upload token' }, { status: 403 })
+}
+const { uploadId, key, context } = tokenPayload
+const config = getStorageConfig(context)
if (storageProvider === 's3') {
const { completeS3MultipartUpload } = await import('@/lib/uploads/providers/s3/client')
-const result = await completeS3MultipartUpload(key, uploadId, parts)
+const result = await completeS3MultipartUpload(key, uploadId, single.parts as any)
logger.info(`Completed S3 multipart upload for key ${key} (context: ${context})`)
return NextResponse.json({
success: true,
location: result.location,
@@ -204,16 +265,13 @@ export const POST = withRouteHandler(async (request: NextRequest) => {
}
if (storageProvider === 'blob') {
const { completeMultipartUpload } = await import('@/lib/uploads/providers/blob/client')
-const result = await completeMultipartUpload(key, parts, {
+const result = await completeMultipartUpload(key, single.parts as any, {
containerName: config.containerName!,
accountName: config.accountName!,
accountKey: config.accountKey,
connectionString: config.connectionString,
})
logger.info(`Completed Azure multipart upload for key ${key} (context: ${context})`)
return NextResponse.json({
success: true,
location: result.location,
@@ -229,27 +287,27 @@ export const POST = withRouteHandler(async (request: NextRequest) => {
}
case 'abort': {
-const data = await request.json()
-const { uploadId, key, context = 'knowledge-base' } = data
+const data = (await request.json()) as TokenBoundRequest
+const tokenPayload = verifyTokenForUser(data.uploadToken, userId)
+if (!tokenPayload) {
+return NextResponse.json({ error: 'Invalid or expired upload token' }, { status: 403 })
+}
-const config = getStorageConfig(context as StorageContext)
+const { uploadId, key, context } = tokenPayload
+const config = getStorageConfig(context)
if (storageProvider === 's3') {
const { abortS3MultipartUpload } = await import('@/lib/uploads/providers/s3/client')
await abortS3MultipartUpload(key, uploadId)
logger.info(`Aborted S3 multipart upload for key ${key} (context: ${context})`)
} else if (storageProvider === 'blob') {
const { abortMultipartUpload } = await import('@/lib/uploads/providers/blob/client')
await abortMultipartUpload(key, {
containerName: config.containerName!,
accountName: config.accountName!,
accountKey: config.accountKey,
connectionString: config.connectionString,
})
logger.info(`Aborted Azure multipart upload for key ${key} (context: ${context})`)
} else {
return NextResponse.json(

View File

@@ -191,7 +191,7 @@ describe('Function Execute API Route', () => {
const response = await POST(req)
const data = await response.json()
-if (response.status === 500) {
+if (response.status === 422 || response.status === 500) {
expect(data.success).toBe(false)
} else {
const result = data.output?.result
@@ -504,7 +504,7 @@ describe('Function Execute API Route', () => {
const response = await POST(req)
const data = await response.json()
-expect(response.status).toBe(500)
+expect(response.status).toBe(422)
expect(data.success).toBe(false)
expect(data.error).toBeTruthy()
})
@@ -518,7 +518,7 @@ describe('Function Execute API Route', () => {
const response = await POST(req)
const data = await response.json()
-expect(response.status).toBe(500)
+expect(response.status).toBe(422)
expect(data.success).toBe(false)
expect(data.error).toContain('Type Error')
expect(data.error).toContain('Cannot read properties of null')
@@ -533,7 +533,7 @@ describe('Function Execute API Route', () => {
const response = await POST(req)
const data = await response.json()
-expect(response.status).toBe(500)
+expect(response.status).toBe(422)
expect(data.success).toBe(false)
expect(data.error).toContain('Reference Error')
expect(data.error).toContain('undefinedVariable is not defined')
@@ -548,7 +548,7 @@ describe('Function Execute API Route', () => {
const response = await POST(req)
const data = await response.json()
-expect(response.status).toBe(500)
+expect(response.status).toBe(422)
expect(data.success).toBe(false)
expect(data.error).toContain('Custom error message')
})
@@ -562,7 +562,7 @@ describe('Function Execute API Route', () => {
const response = await POST(req)
const data = await response.json()
-expect(response.status).toBe(500)
+expect(response.status).toBe(422)
expect(data.success).toBe(false)
expect(data.error).toBeTruthy()
})

View File

@@ -1088,9 +1088,12 @@ export const POST = withRouteHandler(async (req: NextRequest) => {
const executionTime = Date.now() - startTime
if (isolatedResult.error) {
-logger.error(`[${requestId}] Function execution failed in isolated-vm`, {
+const isSystemError = isolatedResult.error.isSystemError === true
+const logFn = isSystemError ? logger.error.bind(logger) : logger.warn.bind(logger)
+logFn(`[${requestId}] Function execution failed in isolated-vm`, {
error: isolatedResult.error,
executionTime,
+isSystemError,
})
const ivmError = isolatedResult.error
@@ -1119,7 +1122,8 @@ export const POST = withRouteHandler(async (req: NextRequest) => {
resolvedCode
)
-logger.error(`[${requestId}] Enhanced error details`, {
+const detailLogFn = isSystemError ? logger.error.bind(logger) : logger.warn.bind(logger)
+detailLogFn(`[${requestId}] Enhanced error details`, {
originalMessage: ivmError.message,
enhancedMessage: userFriendlyErrorMessage,
line: enhancedError.line,
@@ -1145,7 +1149,7 @@ export const POST = withRouteHandler(async (req: NextRequest) => {
stack: enhancedError.stack,
},
},
-{ status: 500 }
+{ status: isSystemError ? 500 : 422 }
)
}

View File

@@ -0,0 +1,614 @@
import { createHash } from 'node:crypto'
import { createLogger } from '@sim/logger'
import { toError } from '@sim/utils/errors'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { withRouteHandler } from '@/lib/core/utils/with-route-handler'
export const dynamic = 'force-dynamic'
const logger = createLogger('SapS4HanaProxyAPI')
const HttpMethod = z.enum(['GET', 'POST', 'PATCH', 'PUT', 'DELETE', 'MERGE'])
const DeploymentType = z.enum(['cloud_public', 'cloud_private', 'on_premise'])
const AuthType = z.enum(['oauth_client_credentials', 'basic'])
const ServiceName = z
.string()
.min(1, 'service is required')
.regex(
/^[A-Z][A-Z0-9_]*(;v=\d+)?$/,
'service must be an uppercase OData service name optionally suffixed with ";v=NNNN" (e.g., API_BUSINESS_PARTNER, API_OUTBOUND_DELIVERY_SRV;v=0002)'
)
const ServicePath = z
.string()
.min(1, 'path is required')
.refine(
(p) =>
!p.split(/[/\\]/).some((seg) => seg === '..' || seg === '.') &&
!p.includes('?') &&
!p.includes('#') &&
!/%(?:2[eEfF]|5[cC]|3[fF]|23)/.test(p),
{
message:
'path must not contain ".." or "." segments, "?", "#", or percent-encoded path/query/fragment characters',
}
)
const Subdomain = z
.string()
.regex(
/^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$/i,
'subdomain must contain only letters, digits, and hyphens (1-63 chars)'
)
const ProxyRequestSchema = z
.object({
deploymentType: DeploymentType.default('cloud_public'),
authType: AuthType.default('oauth_client_credentials'),
subdomain: Subdomain.optional(),
region: z
.string()
.regex(/^[a-z]{2,4}\d{1,3}$/i, 'region must be an SAP BTP region code (e.g., eu10, us30)')
.optional(),
baseUrl: z.string().optional(),
tokenUrl: z.string().optional(),
clientId: z.string().optional(),
clientSecret: z.string().optional(),
username: z.string().optional(),
password: z.string().optional(),
service: ServiceName,
path: ServicePath,
method: HttpMethod.default('GET'),
query: z.record(z.union([z.string(), z.number(), z.boolean()])).optional(),
body: z.unknown().optional(),
ifMatch: z.string().optional(),
})
.superRefine((req, ctx) => {
if (req.deploymentType === 'cloud_public') {
if (!req.subdomain) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
path: ['subdomain'],
message: 'subdomain is required for cloud_public deployment',
})
}
if (!req.region) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
path: ['region'],
message: 'region is required for cloud_public deployment',
})
}
if (req.authType !== 'oauth_client_credentials') {
ctx.addIssue({
code: z.ZodIssueCode.custom,
path: ['authType'],
message: 'cloud_public deployment only supports oauth_client_credentials',
})
}
if (!req.clientId) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
path: ['clientId'],
message: 'clientId is required',
})
}
if (!req.clientSecret) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
path: ['clientSecret'],
message: 'clientSecret is required',
})
}
} else {
if (!req.baseUrl) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
path: ['baseUrl'],
message: 'baseUrl is required for cloud_private and on_premise deployments',
})
} else {
const baseUrlCheck = checkExternalUrlSafety(req.baseUrl, 'baseUrl')
if (!baseUrlCheck.ok) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
path: ['baseUrl'],
message: baseUrlCheck.message,
})
}
}
if (req.authType === 'oauth_client_credentials') {
if (!req.tokenUrl) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
path: ['tokenUrl'],
message: 'tokenUrl is required for OAuth on cloud_private/on_premise',
})
} else {
const tokenUrlCheck = checkExternalUrlSafety(req.tokenUrl, 'tokenUrl')
if (!tokenUrlCheck.ok) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
path: ['tokenUrl'],
message: tokenUrlCheck.message,
})
}
}
if (!req.clientId) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
path: ['clientId'],
message: 'clientId is required for OAuth',
})
}
if (!req.clientSecret) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
path: ['clientSecret'],
message: 'clientSecret is required for OAuth',
})
}
} else {
if (!req.username) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
path: ['username'],
message: 'username is required for Basic auth',
})
}
if (!req.password) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
path: ['password'],
message: 'password is required for Basic auth',
})
}
}
}
})
type ProxyRequest = z.infer<typeof ProxyRequestSchema>
interface CachedToken {
accessToken: string
expiresAt: number
}
const TOKEN_CACHE = new Map<string, CachedToken>()
const TOKEN_CACHE_MAX_ENTRIES = 500
const TOKEN_SAFETY_WINDOW_MS = 60_000
const OUTBOUND_FETCH_TIMEOUT_MS = 30_000
const FORBIDDEN_HOSTS = new Set([
'localhost',
'0.0.0.0',
'127.0.0.1',
'169.254.169.254',
'metadata.google.internal',
'metadata',
'[::1]',
'[::]',
'[::ffff:127.0.0.1]',
'[fd00:ec2::254]',
])
function isPrivateIPv4(host: string): boolean {
const match = host.match(/^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})$/)
if (!match) return false
const octets = match.slice(1, 5).map(Number) as [number, number, number, number]
if (octets.some((o) => o < 0 || o > 255)) return false
const [a, b] = octets
if (a === 10) return true
if (a === 172 && b >= 16 && b <= 31) return true
if (a === 192 && b === 168) return true
if (a === 127) return true
if (a === 169 && b === 254) return true
if (a === 0) return true
return false
}
function extractIPv4MappedHost(host: string): string | null {
const stripped = host.startsWith('[') && host.endsWith(']') ? host.slice(1, -1) : host
const lower = stripped.toLowerCase()
for (const prefix of ['::ffff:', '::']) {
if (lower.startsWith(prefix)) {
const candidate = lower.slice(prefix.length)
if (/^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$/.test(candidate)) return candidate
}
}
const hexMatch = lower.match(/^::ffff:([0-9a-f]{1,4}):([0-9a-f]{1,4})$/)
if (hexMatch) {
const high = Number.parseInt(hexMatch[1] as string, 16)
const low = Number.parseInt(hexMatch[2] as string, 16)
if (high >= 0 && high <= 0xffff && low >= 0 && low <= 0xffff) {
const a = (high >> 8) & 0xff
const b = high & 0xff
const c = (low >> 8) & 0xff
const d = low & 0xff
return `${a}.${b}.${c}.${d}`
}
}
return null
}
function isPrivateOrLoopbackIPv6(host: string): boolean {
const stripped = host.startsWith('[') && host.endsWith(']') ? host.slice(1, -1) : host
const lower = stripped.toLowerCase()
if (lower === '::' || lower === '::1') return true
if (/^fc[0-9a-f]{2}:/.test(lower) || /^fd[0-9a-f]{2}:/.test(lower)) return true
if (lower.startsWith('fe80:')) return true
return false
}
function checkExternalUrlSafety(
rawUrl: string,
label: string
): { ok: true; url: URL } | { ok: false; message: string } {
let parsed: URL
try {
parsed = new URL(rawUrl)
} catch {
return { ok: false, message: `${label} must be a valid URL` }
}
if (parsed.protocol !== 'https:') {
return { ok: false, message: `${label} must use https://` }
}
const host = parsed.hostname.toLowerCase()
if (FORBIDDEN_HOSTS.has(host) || FORBIDDEN_HOSTS.has(`[${host}]`)) {
return { ok: false, message: `${label} host is not allowed` }
}
if (isPrivateIPv4(host)) {
return { ok: false, message: `${label} host is not allowed (private/loopback range)` }
}
const mapped = extractIPv4MappedHost(host)
if (mapped && isPrivateIPv4(mapped)) {
return { ok: false, message: `${label} host is not allowed (IPv4-mapped private range)` }
}
if (isPrivateOrLoopbackIPv6(host)) {
return { ok: false, message: `${label} host is not allowed (IPv6 private/loopback)` }
}
return { ok: true, url: parsed }
}
function assertSafeExternalUrl(rawUrl: string, label: string): URL {
const result = checkExternalUrlSafety(rawUrl, label)
if (!result.ok) throw new Error(result.message)
return result.url
}
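The SSRF checks above compose into a single allow/deny decision per hostname. A condensed, illustrative restatement of just the IPv4 portion (hypothetical name; it omits the per-octet range validation and the IPv4-mapped-IPv6 handling that the full version performs):

```typescript
// Reject IPv4 literals in private, loopback, link-local, or
// "this network" ranges; non-IP hostnames fall through to the
// other checks (forbidden-host set, IPv6 rules).
function isBlockedIPv4(host: string): boolean {
  const m = host.match(/^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})$/)
  if (!m) return false
  const [a, b] = m.slice(1, 5).map(Number)
  return (
    a === 10 || // 10.0.0.0/8 private
    a === 127 || // loopback
    a === 0 || // "this network"
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12 private
    (a === 192 && b === 168) || // 192.168.0.0/16 private
    (a === 169 && b === 254) // link-local / cloud metadata
  )
}
```

Note this hostname-level check cannot catch DNS names that resolve to private addresses; it only blocks literal IPs and the known metadata hostnames listed in FORBIDDEN_HOSTS.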
function resolveTokenUrl(req: ProxyRequest): string {
if (req.deploymentType === 'cloud_public') {
return `https://${req.subdomain}.authentication.${req.region}.hana.ondemand.com/oauth/token`
}
if (!req.tokenUrl) {
throw new Error('tokenUrl is required for OAuth on cloud_private/on_premise')
}
return req.tokenUrl
}
function tokenCacheKey(req: ProxyRequest): string {
const secretHash = req.clientSecret
? createHash('sha256').update(req.clientSecret).digest('hex').slice(0, 16)
: ''
return `${resolveTokenUrl(req)}::${req.clientId ?? ''}::${secretHash}`
}
function rememberToken(key: string, token: CachedToken): void {
if (TOKEN_CACHE.has(key)) TOKEN_CACHE.delete(key)
TOKEN_CACHE.set(key, token)
while (TOKEN_CACHE.size > TOKEN_CACHE_MAX_ENTRIES) {
const oldestKey = TOKEN_CACHE.keys().next().value
if (oldestKey === undefined) break
TOKEN_CACHE.delete(oldestKey)
}
}
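rememberToken relies on Map's insertion-order guarantee: delete-then-set moves an existing key to the back, so the Map's first key is always the stalest write and can be evicted once the size cap is exceeded. The same bounded-cache pattern, extracted into a self-contained sketch (hypothetical class, not part of this route):

```typescript
// Minimal bounded cache mirroring the rememberToken pattern above:
// delete-then-set refreshes insertion order, so the first key in the
// Map is always the oldest write and is evicted past the size cap.
class BoundedCache<K, V> {
  private map = new Map<K, V>()

  constructor(private max: number) {}

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key) // refresh position
    this.map.set(key, value)
    while (this.map.size > this.max) {
      const oldest = this.map.keys().next().value
      if (oldest === undefined) break
      this.map.delete(oldest)
    }
  }

  get(key: K): V | undefined {
    return this.map.get(key)
  }

  get size(): number {
    return this.map.size
  }
}
```

This evicts by write recency, not read recency — a get does not refresh a key's position — which is exactly the behavior the token cache needs, since tokens are refreshed (re-set) when they near expiry.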
async function fetchAccessToken(req: ProxyRequest, requestId: string): Promise<string> {
const cacheKey = tokenCacheKey(req)
const cached = TOKEN_CACHE.get(cacheKey)
if (cached && cached.expiresAt - TOKEN_SAFETY_WINDOW_MS > Date.now()) {
return cached.accessToken
}
const tokenUrl = assertSafeExternalUrl(resolveTokenUrl(req), 'tokenUrl').toString()
const basic = Buffer.from(`${req.clientId}:${req.clientSecret}`).toString('base64')
const response = await fetch(tokenUrl, {
method: 'POST',
headers: {
Authorization: `Basic ${basic}`,
'Content-Type': 'application/x-www-form-urlencoded',
Accept: 'application/json',
},
body: 'grant_type=client_credentials',
signal: AbortSignal.timeout(OUTBOUND_FETCH_TIMEOUT_MS),
})
if (!response.ok) {
const text = await response.text().catch(() => '')
logger.warn(`[${requestId}] Token fetch failed (${response.status}): ${text}`)
throw new Error(`SAP token request failed: HTTP ${response.status}`)
}
const data = (await response.json()) as {
access_token?: string
expires_in?: number
}
if (!data.access_token) {
throw new Error('SAP token response missing access_token')
}
const expiresInMs = (data.expires_in ?? 3600) * 1000
rememberToken(cacheKey, {
accessToken: data.access_token,
expiresAt: Date.now() + expiresInMs,
})
return data.access_token
}
interface CsrfBundle {
token: string
cookie: string
}
function joinSetCookies(headers: Headers): string {
const cookies =
typeof (headers as { getSetCookie?: () => string[] }).getSetCookie === 'function'
? (headers as { getSetCookie: () => string[] }).getSetCookie()
: (headers.get('set-cookie') ?? '').split(/,\s*(?=[^=,;\s]+=)/)
return cookies
.map((c) => c.split(';')[0]?.trim())
.filter(Boolean)
.join('; ')
}
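When `Headers.getSetCookie()` is unavailable, joinSetCookies falls back to splitting the folded `set-cookie` string on commas that start a new `name=` pair — the lookahead keeps commas inside `Expires` dates intact. The same splitting logic as a standalone sketch (hypothetical name):

```typescript
// Split a folded Set-Cookie header on commas that begin a new `name=`
// pair (commas inside Expires dates do not match the lookahead), keep
// only the name=value part of each cookie, and re-join them into a
// value suitable for a request Cookie header.
function cookieHeaderFromSetCookie(folded: string): string {
  return folded
    .split(/,\s*(?=[^=,;\s]+=)/)
    .map((c) => c.split(';')[0]?.trim())
    .filter((c): c is string => Boolean(c))
    .join('; ')
}
```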
function buildAuthHeader(req: ProxyRequest, accessToken: string | null): string {
if (req.authType === 'basic') {
const basic = Buffer.from(`${req.username}:${req.password}`).toString('base64')
return `Basic ${basic}`
}
return `Bearer ${accessToken}`
}
async function fetchCsrf(
req: ProxyRequest,
accessToken: string | null,
requestId: string
): Promise<CsrfBundle | null> {
const url = buildOdataUrl(req, '/$metadata')
const response = await fetch(url, {
method: 'GET',
headers: {
Authorization: buildAuthHeader(req, accessToken),
Accept: 'application/xml',
'X-CSRF-Token': 'Fetch',
},
signal: AbortSignal.timeout(OUTBOUND_FETCH_TIMEOUT_MS),
})
if (!response.ok) {
const text = await response.text().catch(() => '')
logger.warn(`[${requestId}] CSRF fetch failed (${response.status}): ${text}`)
return null
}
const token = response.headers.get('x-csrf-token')
const cookie = joinSetCookies(response.headers)
if (!token) return null
return { token, cookie }
}
function resolveHost(req: ProxyRequest): string {
if (req.deploymentType === 'cloud_public') {
const constructed = `https://${req.subdomain}-api.s4hana.ondemand.com`
return assertSafeExternalUrl(constructed, 'subdomain').toString().replace(/\/+$/, '')
}
if (!req.baseUrl) {
throw new Error('baseUrl is required for cloud_private and on_premise deployments')
}
const trimmed = req.baseUrl.replace(/\/+$/, '')
return assertSafeExternalUrl(trimmed, 'baseUrl').toString().replace(/\/+$/, '')
}
function buildOdataUrl(req: ProxyRequest, pathOverride?: string): string {
const host = resolveHost(req)
const servicePath = `/sap/opu/odata/sap/${req.service}`
const subPath = pathOverride ?? req.path
const normalized = subPath.startsWith('/') ? subPath : `/${subPath}`
const base = `${host}${servicePath}${normalized}`
if (pathOverride !== undefined) {
return base
}
if (!req.query || Object.keys(req.query).length === 0) {
return base
}
const encode = (s: string) => encodeURIComponent(s).replace(/%24/g, '$')
const parts: string[] = []
for (const [key, value] of Object.entries(req.query)) {
if (value === undefined || value === null) continue
parts.push(`${encode(key)}=${encode(String(value))}`)
}
const queryString = parts.join('&')
if (!queryString) return base
return base.includes('?') ? `${base}&${queryString}` : `${base}?${queryString}`
}
const WRITE_METHODS = new Set(['POST', 'PUT', 'PATCH', 'DELETE', 'MERGE'])
interface OdataInvocation {
status: number
body: unknown
raw: string
csrfHeader: string
}
async function callOdata(
req: ProxyRequest,
accessToken: string | null,
csrf: CsrfBundle | null
): Promise<OdataInvocation> {
const url = buildOdataUrl(req)
const headers: Record<string, string> = {
Authorization: buildAuthHeader(req, accessToken),
Accept: 'application/json',
}
const isWrite = WRITE_METHODS.has(req.method)
const hasBody = req.body !== undefined && req.body !== null
if (hasBody) headers['Content-Type'] = 'application/json'
if (req.ifMatch) headers['If-Match'] = req.ifMatch
if (isWrite && csrf) {
headers['X-CSRF-Token'] = csrf.token
if (csrf.cookie) headers.Cookie = csrf.cookie
}
const response = await fetch(url, {
method: req.method,
headers,
body: hasBody ? JSON.stringify(req.body) : undefined,
signal: AbortSignal.timeout(OUTBOUND_FETCH_TIMEOUT_MS),
})
const raw = await response.text()
let parsed: unknown = null
if (raw.length > 0) {
try {
parsed = JSON.parse(raw)
} catch {
parsed = raw
}
}
const csrfHeader = response.headers.get('x-csrf-token')?.toLowerCase() ?? ''
return { status: response.status, body: parsed, raw, csrfHeader }
}
function isCsrfRequired(invocation: OdataInvocation): boolean {
if (invocation.status !== 403) return false
if (invocation.csrfHeader === 'required') return true
if (typeof invocation.body !== 'object' || invocation.body === null) return false
const errorObj = (invocation.body as { error?: { message?: { value?: string } | string } }).error
const messageField = errorObj?.message
const message = typeof messageField === 'string' ? messageField : (messageField?.value ?? '')
return message.toLowerCase().includes('csrf')
}
function extractOdataError(body: unknown, status: number): string {
if (body && typeof body === 'object') {
const err = (
body as {
error?: {
message?: { value?: string } | string
code?: string
innererror?: {
errordetails?: Array<{ code?: string; message?: string; severity?: string }>
}
}
}
).error
if (err) {
const messageField = err.message
const base =
typeof messageField === 'string' ? messageField : (messageField?.value ?? err.code ?? '')
const prefix = err.code ? `[${err.code}] ` : ''
const details = err.innererror?.errordetails
?.filter((d) => d.message && (!d.severity || d.severity.toLowerCase() !== 'info'))
.map((d) => {
const tag = d.code ? `[${d.code}] ` : ''
return `${tag}${d.message}`
})
.filter((m): m is string => Boolean(m))
if (details && details.length > 0) {
const extras = details.filter((d) => !d.endsWith(base))
return extras.length > 0 ? `${prefix}${base} (${extras.join('; ')})` : `${prefix}${base}`
}
if (base) return `${prefix}${base}`
}
}
if (typeof body === 'string' && body.length > 0) return body
return `SAP request failed with HTTP ${status}`
}
function unwrapOdata(body: unknown): unknown {
if (!body || typeof body !== 'object') return body
const root = (body as { d?: unknown }).d
if (root === undefined) return body
if (root && typeof root === 'object' && 'results' in (root as Record<string, unknown>)) {
const rootObj = root as { results: unknown; __count?: string; __next?: string }
if (rootObj.__count !== undefined || rootObj.__next !== undefined) {
return {
results: rootObj.results,
...(rootObj.__count !== undefined && { __count: rootObj.__count }),
...(rootObj.__next !== undefined && { __next: rootObj.__next }),
}
}
return rootObj.results
}
return root
}
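unwrapOdata normalizes the OData V2 response envelope: single entities arrive under `d`, collections under `d.results`, with optional `__count`/`__next` paging metadata preserved when present. A simplified restatement of the common cases (hypothetical name, paging metadata omitted):

```typescript
// OData V2 wraps every payload in `d`; collections additionally nest
// the array under `d.results`. Anything without a `d` root (or a
// non-object body) is returned unchanged.
function unwrapV2(body: unknown): unknown {
  if (!body || typeof body !== 'object') return body
  const d = (body as { d?: unknown }).d
  if (d === undefined) return body
  if (d && typeof d === 'object' && 'results' in (d as Record<string, unknown>)) {
    return (d as { results: unknown }).results
  }
  return d
}
```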
export const POST = withRouteHandler(async (request: NextRequest) => {
const requestId = generateRequestId()
try {
const authResult = await checkInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success) {
logger.warn(`[${requestId}] Unauthorized SAP proxy request: ${authResult.error}`)
return NextResponse.json(
{ success: false, error: authResult.error || 'Authentication required' },
{ status: 401 }
)
}
const json = await request.json()
const proxyReq = ProxyRequestSchema.parse(json)
const isWrite = WRITE_METHODS.has(proxyReq.method)
const accessToken =
proxyReq.authType === 'oauth_client_credentials'
? await fetchAccessToken(proxyReq, requestId)
: null
const csrf = isWrite ? await fetchCsrf(proxyReq, accessToken, requestId) : null
let invocation = await callOdata(proxyReq, accessToken, csrf)
if (isWrite && isCsrfRequired(invocation)) {
logger.info(`[${requestId}] CSRF token rejected, refetching and retrying`)
const refreshed = await fetchCsrf(proxyReq, accessToken, requestId)
if (refreshed) {
invocation = await callOdata(proxyReq, accessToken, refreshed)
}
}
if (invocation.status >= 200 && invocation.status < 300) {
const data = invocation.status === 204 ? null : unwrapOdata(invocation.body)
return NextResponse.json({ success: true, output: { status: invocation.status, data } })
}
const message = extractOdataError(invocation.body, invocation.status)
logger.warn(
`[${requestId}] SAP API error (${invocation.status}) ${proxyReq.service}${proxyReq.path}: ${message}`
)
return NextResponse.json(
{ success: false, error: message, status: invocation.status },
{ status: invocation.status }
)
} catch (error) {
if (error instanceof z.ZodError) {
logger.warn(`[${requestId}] Validation error:`, error.errors)
return NextResponse.json(
{ success: false, error: error.errors[0]?.message || 'Validation failed' },
{ status: 400 }
)
}
logger.error(`[${requestId}] Unexpected SAP proxy error:`, error)
return NextResponse.json({ success: false, error: toError(error).message }, { status: 500 })
}
})
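The write path above fetches a CSRF token, calls the service, and if the token is rejected refetches exactly once before retrying. That retry-once shape can be sketched generically (`fetchToken` and `call` are hypothetical stand-ins for `fetchCsrf` / `callOdata`):

```typescript
// Generic retry-once shape for CSRF-protected writes, mirroring the route above.
async function writeWithCsrfRetry<T extends { status: number }>(
  fetchToken: () => Promise<string | null>,
  call: (token: string | null) => Promise<T>,
  tokenRejected: (res: T) => boolean
): Promise<T> {
  let result = await call(await fetchToken())
  if (tokenRejected(result)) {
    // Tokens are session-scoped and can expire between fetch and use;
    // refetch exactly once rather than looping.
    const refreshed = await fetchToken()
    if (refreshed) result = await call(refreshed)
  }
  return result
}
```

Capping the retry at one attempt avoids spinning when the rejection is caused by something other than token staleness (e.g. a genuine 403).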

View File

@@ -22,6 +22,8 @@ const requestSchema = z.object({
variables: z.any(),
provider: z.enum(['openai', 'anthropic']).optional().default('openai'),
apiKey: z.string(),
mode: z.enum(['dom', 'hybrid', 'cua']).optional().default('dom'),
maxSteps: z.number().int().min(1).max(200).optional().default(20),
})
/**
@@ -121,7 +123,7 @@ export const POST = withRouteHandler(async (request: NextRequest) => {
}
const params = validationResult.data
const { task, startUrl: rawStartUrl, outputSchema, provider, apiKey } = params
const { task, startUrl: rawStartUrl, outputSchema, provider, apiKey, mode, maxSteps } = params
const variablesObject = processVariables(params.variables)
const startUrl = normalizeUrl(rawStartUrl)
@@ -165,8 +167,10 @@ export const POST = withRouteHandler(async (request: NextRequest) => {
return NextResponse.json({ error: 'Invalid Anthropic API key format' }, { status: 400 })
}
const modelName =
provider === 'anthropic' ? 'anthropic/claude-sonnet-4-5-20250929' : 'openai/gpt-5'
const modelName = provider === 'anthropic' ? 'anthropic/claude-sonnet-4-6' : 'openai/gpt-5'
let sessionId: string | null = null
let liveViewUrl: string | null = null
try {
logger.info('Initializing Stagehand with Browserbase (v3)', { provider, modelName })
@@ -190,6 +194,35 @@ export const POST = withRouteHandler(async (request: NextRequest) => {
await stagehand.init()
logger.info('Stagehand initialized successfully')
sessionId = stagehand.browserbaseSessionID ?? null
if (sessionId) {
try {
const debugResponse = await fetch(
`https://api.browserbase.com/v1/sessions/${sessionId}/debug`,
{
method: 'GET',
headers: {
'X-BB-API-Key': BROWSERBASE_API_KEY,
},
}
)
if (debugResponse.ok) {
const debugData = (await debugResponse.json()) as {
debuggerFullscreenUrl?: string
debuggerUrl?: string
}
liveViewUrl = debugData.debuggerFullscreenUrl ?? debugData.debuggerUrl ?? null
if (liveViewUrl) {
logger.info(`Browserbase live view URL: ${liveViewUrl}`)
}
} else {
logger.warn(`Failed to fetch Browserbase debug URL: ${debugResponse.statusText}`)
}
} catch (debugError) {
logger.warn('Error fetching Browserbase debug URL', { error: debugError })
}
}
const page = stagehand.context.pages()[0]
logger.info(`Navigating to ${startUrl}`)
await page.goto(startUrl, { waitUntil: 'networkidle' })
@@ -223,13 +256,14 @@ export const POST = withRouteHandler(async (request: NextRequest) => {
apiKey: apiKey,
},
systemPrompt: agentInstructions,
mode,
})
logger.info('Executing agent task', { task: taskWithVariables })
logger.info('Executing agent task', { task: taskWithVariables, mode, maxSteps })
const agentExecutionResult = await agent.execute({
instruction: taskWithVariables,
maxSteps: 20,
maxSteps,
})
const agentResult = {
@@ -293,6 +327,8 @@ export const POST = withRouteHandler(async (request: NextRequest) => {
return NextResponse.json({
agentResult,
structuredOutput,
liveViewUrl,
sessionId,
})
} catch (error) {
logger.error('Stagehand agent execution error', {
@@ -327,6 +363,8 @@ export const POST = withRouteHandler(async (request: NextRequest) => {
{
error: errorMessage,
details: errorDetails,
liveViewUrl,
sessionId,
},
{ status: 500 }
)

View File

@@ -17,8 +17,6 @@ const BROWSERBASE_PROJECT_ID = env.BROWSERBASE_PROJECT_ID
const requestSchema = z.object({
instruction: z.string(),
schema: z.record(z.any()),
useTextExtract: z.boolean().optional().default(false),
selector: z.string().nullable().optional(),
provider: z.enum(['openai', 'anthropic']).optional().default('openai'),
apiKey: z.string(),
url: z.string().url(),
@@ -51,7 +49,7 @@ export const POST = withRouteHandler(async (request: NextRequest) => {
}
const params = validationResult.data
const { url: rawUrl, instruction, selector, provider, apiKey, schema } = params
const { url: rawUrl, instruction, provider, apiKey, schema } = params
const url = normalizeUrl(rawUrl)
const urlValidation = await validateUrlWithDNS(url, 'url')
if (!urlValidation.isValid) {
@@ -101,8 +99,7 @@ export const POST = withRouteHandler(async (request: NextRequest) => {
}
try {
const modelName =
provider === 'anthropic' ? 'anthropic/claude-sonnet-4-5-20250929' : 'openai/gpt-5'
const modelName = provider === 'anthropic' ? 'anthropic/claude-sonnet-4-6' : 'openai/gpt-5'
logger.info('Initializing Stagehand with Browserbase (v3)', { provider, modelName })
@@ -162,14 +159,11 @@ export const POST = withRouteHandler(async (request: NextRequest) => {
logger.info('Calling stagehand.extract with options', {
hasInstruction: !!instruction,
hasSchema: !!zodSchema,
hasSelector: !!selector,
})
let extractedData
if (zodSchema) {
extractedData = await stagehand.extract(instruction, zodSchema, {
selector: selector || undefined,
})
extractedData = await stagehand.extract(instruction, zodSchema)
} else {
extractedData = await stagehand.extract(instruction)
}

View File

@@ -3,7 +3,7 @@ import { toError } from '@sim/utils/errors'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { MAX_DOCUMENT_PREVIEW_CODE_BYTES } from '@/lib/execution/constants'
import { runSandboxTask } from '@/lib/execution/sandbox/run-task'
import { runSandboxTask, SandboxUserCodeError } from '@/lib/execution/sandbox/run-task'
import { verifyWorkspaceMembership } from '@/app/api/workflows/utils'
import type { SandboxTaskId } from '@/sandbox-tasks/registry'
@@ -83,6 +83,14 @@ export function createDocumentPreviewRoute(config: DocumentPreviewRouteConfig) {
})
} catch (err) {
const message = toError(err).message
if (err instanceof SandboxUserCodeError) {
logger.warn(`${config.label} preview user code failed`, {
error: message,
errorName: err.name,
workspaceId,
})
return NextResponse.json({ error: message, errorName: err.name }, { status: 422 })
}
logger.error(`${config.label} preview generation failed`, { error: message, workspaceId })
return NextResponse.json({ error: message }, { status: 500 })
}

View File

@@ -6,9 +6,15 @@ import { NextRequest } from 'next/server'
import { beforeEach, describe, expect, it, vi } from 'vitest'
import { MAX_DOCUMENT_PREVIEW_CODE_BYTES } from '@/lib/execution/constants'
const { mockRunSandboxTask } = vi.hoisted(() => ({
mockRunSandboxTask: vi.fn(),
}))
const { mockRunSandboxTask, SandboxUserCodeError } = vi.hoisted(() => {
class SandboxUserCodeError extends Error {
constructor(message: string, name: string) {
super(message)
this.name = name
}
}
return { mockRunSandboxTask: vi.fn(), SandboxUserCodeError }
})
const mockVerifyWorkspaceMembership = workflowsApiUtilsMockFns.mockVerifyWorkspaceMembership
@@ -16,6 +22,7 @@ vi.mock('@/app/api/workflows/utils', () => workflowsApiUtilsMock)
vi.mock('@/lib/execution/sandbox/run-task', () => ({
runSandboxTask: mockRunSandboxTask,
SandboxUserCodeError,
}))
import { POST } from '@/app/api/workspaces/[id]/docx/preview/route'
@@ -189,4 +196,31 @@ describe('DOCX preview API route', () => {
expect(response.status).toBe(500)
await expect(response.json()).resolves.toEqual({ error: 'boom: sandbox failed' })
})
it('returns 422 when user code throws inside the sandbox', async () => {
mockRunSandboxTask.mockRejectedValue(
new SandboxUserCodeError('Invalid or unexpected token', 'SyntaxError')
)
const request = new NextRequest(
'http://localhost:3000/api/workspaces/workspace-1/docx/preview',
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({ code: 'const x = ' }),
}
)
const response = await POST(request, {
params: Promise.resolve({ id: 'workspace-1' }),
})
expect(response.status).toBe(422)
await expect(response.json()).resolves.toEqual({
error: 'Invalid or unexpected token',
errorName: 'SyntaxError',
})
})
})

View File

@@ -1,8 +1,8 @@
import { AuditAction, AuditResourceType, recordAudit } from '@sim/audit'
import { createLogger } from '@sim/logger'
import { toError } from '@sim/utils/errors'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { generateRequestId } from '@/lib/core/utils/request'
import { withRouteHandler } from '@/lib/core/utils/with-route-handler'
import { updateWorkspaceFileContent } from '@/lib/uploads/contexts/workspace'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
@@ -17,7 +17,6 @@ const logger = createLogger('WorkspaceFileContentAPI')
*/
export const PUT = withRouteHandler(
async (request: NextRequest, { params }: { params: Promise<{ id: string; fileId: string }> }) => {
const requestId = generateRequestId()
const { id: workspaceId, fileId } = await params
try {
@@ -32,20 +31,19 @@ export const PUT = withRouteHandler(
workspaceId
)
if (userPermission !== 'admin' && userPermission !== 'write') {
logger.warn(
`[${requestId}] User ${session.user.id} lacks write permission for workspace ${workspaceId}`
)
logger.warn(`User ${session.user.id} lacks write permission for workspace ${workspaceId}`)
return NextResponse.json({ error: 'Insufficient permissions' }, { status: 403 })
}
const body = await request.json()
const { content } = body as { content: string }
const { content, encoding } = body as { content: string; encoding?: 'base64' | 'utf-8' }
if (typeof content !== 'string') {
return NextResponse.json({ error: 'Content must be a string' }, { status: 400 })
}
const buffer = Buffer.from(content, 'utf-8')
const buffer =
encoding === 'base64' ? Buffer.from(content, 'base64') : Buffer.from(content, 'utf-8')
const maxFileSizeBytes = 50 * 1024 * 1024
if (buffer.length > maxFileSizeBytes) {
@@ -62,7 +60,7 @@ export const PUT = withRouteHandler(
buffer
)
logger.info(`[${requestId}] Updated content for workspace file: ${updatedFile.name}`)
logger.info(`Updated content for workspace file: ${updatedFile.name}`)
recordAudit({
workspaceId,
@@ -83,15 +81,15 @@ export const PUT = withRouteHandler(
file: updatedFile,
})
} catch (error) {
const errorMessage = error instanceof Error ? error.message : 'Failed to update file content'
const errorMessage = toError(error).message || 'Failed to update file content'
const isNotFound = errorMessage.includes('File not found')
const isQuotaExceeded = errorMessage.includes('Storage limit exceeded')
const status = isNotFound ? 404 : isQuotaExceeded ? 402 : 500
if (status === 500) {
logger.error(`[${requestId}] Error updating file content:`, error)
logger.error('Error updating file content:', error)
} else {
logger.warn(`[${requestId}] ${errorMessage}`)
logger.warn(errorMessage)
}
return NextResponse.json({ success: false, error: errorMessage }, { status })
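The PUT handler above now accepts an optional `encoding` field so binary content can round-trip as base64 while existing text callers stay on the utf-8 default. The decode step in isolation (a sketch of the branch shown in the diff):

```typescript
// Decode uploaded content by declared encoding; the default remains utf-8
// so existing text-file callers are unaffected.
function decodeContent(content: string, encoding?: 'base64' | 'utf-8'): Buffer {
  return encoding === 'base64' ? Buffer.from(content, 'base64') : Buffer.from(content, 'utf-8')
}

// A base64 payload and its utf-8 equivalent decode to the same bytes.
const viaBase64 = decodeContent(Buffer.from('hello').toString('base64'), 'base64')
const viaUtf8 = decodeContent('hello')
```

Because the size check runs on the decoded buffer, the 50 MB limit applies to the real byte length, not the (roughly 4/3 larger) base64 string.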

View File

@@ -6,9 +6,15 @@ import { NextRequest } from 'next/server'
import { beforeEach, describe, expect, it, vi } from 'vitest'
import { MAX_DOCUMENT_PREVIEW_CODE_BYTES } from '@/lib/execution/constants'
const { mockRunSandboxTask } = vi.hoisted(() => ({
mockRunSandboxTask: vi.fn(),
}))
const { mockRunSandboxTask, SandboxUserCodeError } = vi.hoisted(() => {
class SandboxUserCodeError extends Error {
constructor(message: string, name: string) {
super(message)
this.name = name
}
}
return { mockRunSandboxTask: vi.fn(), SandboxUserCodeError }
})
const mockVerifyWorkspaceMembership = workflowsApiUtilsMockFns.mockVerifyWorkspaceMembership
@@ -16,6 +22,7 @@ vi.mock('@/app/api/workflows/utils', () => workflowsApiUtilsMock)
vi.mock('@/lib/execution/sandbox/run-task', () => ({
runSandboxTask: mockRunSandboxTask,
SandboxUserCodeError,
}))
import { POST } from '@/app/api/workspaces/[id]/pdf/preview/route'
@@ -187,4 +194,31 @@ describe('PDF preview API route', () => {
expect(response.status).toBe(500)
await expect(response.json()).resolves.toEqual({ error: 'boom: sandbox failed' })
})
it('returns 422 when user code throws inside the sandbox', async () => {
mockRunSandboxTask.mockRejectedValue(
new SandboxUserCodeError('Invalid or unexpected token', 'SyntaxError')
)
const request = new NextRequest(
'http://localhost:3000/api/workspaces/workspace-1/pdf/preview',
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({ code: 'const x = ' }),
}
)
const response = await POST(request, {
params: Promise.resolve({ id: 'workspace-1' }),
})
expect(response.status).toBe(422)
await expect(response.json()).resolves.toEqual({
error: 'Invalid or unexpected token',
errorName: 'SyntaxError',
})
})
})

View File

@@ -6,9 +6,15 @@ import { NextRequest } from 'next/server'
import { beforeEach, describe, expect, it, vi } from 'vitest'
import { MAX_DOCUMENT_PREVIEW_CODE_BYTES } from '@/lib/execution/constants'
const { mockRunSandboxTask } = vi.hoisted(() => ({
mockRunSandboxTask: vi.fn(),
}))
const { mockRunSandboxTask, SandboxUserCodeError } = vi.hoisted(() => {
class SandboxUserCodeError extends Error {
constructor(message: string, name: string) {
super(message)
this.name = name
}
}
return { mockRunSandboxTask: vi.fn(), SandboxUserCodeError }
})
const mockVerifyWorkspaceMembership = workflowsApiUtilsMockFns.mockVerifyWorkspaceMembership
@@ -16,6 +22,7 @@ vi.mock('@/app/api/workflows/utils', () => workflowsApiUtilsMock)
vi.mock('@/lib/execution/sandbox/run-task', () => ({
runSandboxTask: mockRunSandboxTask,
SandboxUserCodeError,
}))
import { POST } from '@/app/api/workspaces/[id]/pptx/preview/route'
@@ -189,4 +196,31 @@ describe('PPTX preview API route', () => {
expect(response.status).toBe(500)
await expect(response.json()).resolves.toEqual({ error: 'boom: sandbox failed' })
})
it('returns 422 when user code throws inside the sandbox', async () => {
mockRunSandboxTask.mockRejectedValue(
new SandboxUserCodeError('Invalid or unexpected token', 'SyntaxError')
)
const request = new NextRequest(
'http://localhost:3000/api/workspaces/workspace-1/pptx/preview',
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({ code: 'const x = ' }),
}
)
const response = await POST(request, {
params: Promise.resolve({ id: 'workspace-1' }),
})
expect(response.status).toBe(422)
await expect(response.json()).resolves.toEqual({
error: 'Invalid or unexpected token',
errorName: 'SyntaxError',
})
})
})

View File

@@ -1,3 +1,4 @@
import { Suspense } from 'react'
import type { Metadata } from 'next'
import { Files } from '../files'
@@ -6,4 +7,10 @@ export const metadata: Metadata = {
robots: { index: false },
}
export default Files
export default function FilesFilePage() {
return (
<Suspense fallback={null}>
<Files />
</Suspense>
)
}

View File

@@ -1,11 +1,92 @@
import { memo } from 'react'
'use client'
import { forwardRef, memo, useImperativeHandle, useRef, useState } from 'react'
import { cn } from '@/lib/core/utils/cn'
interface EditConfig {
onCellChange: (row: number, col: number, value: string) => void
onHeaderChange: (col: number, value: string) => void
}
interface DataTableProps {
headers: string[]
rows: string[][]
editConfig?: EditConfig
}
export const DataTable = memo(function DataTable({ headers, rows }: DataTableProps) {
export interface DataTableHandle {
commitEdit: () => void
}
type EditingCell = { row: number; col: number } | null
const DataTableBase = forwardRef<DataTableHandle, DataTableProps>(function DataTable(
{ headers, rows, editConfig },
ref
) {
const [editingCell, setEditingCell] = useState<EditingCell>(null)
const [editValue, setEditValue] = useState('')
// Always-current ref so the imperative handle doesn't go stale
const editStateRef = useRef({ editingCell, editValue, editConfig })
editStateRef.current = { editingCell, editValue, editConfig }
useImperativeHandle(
ref,
() => ({
commitEdit: () => {
const { editingCell, editValue, editConfig } = editStateRef.current
if (!editingCell || !editConfig) return
const { row, col } = editingCell
if (row === -1) {
editConfig.onHeaderChange(col, editValue)
} else {
editConfig.onCellChange(row, col, editValue)
}
setEditingCell(null)
},
}),
[]
)
const setInputRef = (node: HTMLInputElement | null) => {
if (node) {
node.focus()
node.select()
}
}
const startEdit = (row: number, col: number, currentValue: string) => {
if (!editConfig) return
setEditingCell({ row, col })
setEditValue(currentValue)
}
const commitEdit = () => {
if (!editingCell || !editConfig) return
const { row, col } = editingCell
if (row === -1) {
editConfig.onHeaderChange(col, editValue)
} else {
editConfig.onCellChange(row, col, editValue)
}
setEditingCell(null)
}
const cancelEdit = () => setEditingCell(null)
const handleKeyDown = (e: React.KeyboardEvent<HTMLInputElement>) => {
if (e.key === 'Enter' || e.key === 'Tab') {
e.preventDefault()
commitEdit()
} else if (e.key === 'Escape') {
cancelEdit()
}
}
const isEditing = (row: number, col: number) =>
editingCell?.row === row && editingCell?.col === col
return (
<div className='overflow-x-auto rounded-md border border-[var(--border)]'>
<table className='w-full border-collapse text-[13px]'>
@@ -14,9 +95,24 @@ export const DataTable = memo(function DataTable({ headers, rows }: DataTablePro
{headers.map((header, i) => (
<th
key={i}
className='whitespace-nowrap px-3 py-2 text-left font-semibold text-[12px] text-[var(--text-primary)]'
className={cn(
'whitespace-nowrap px-3 py-2 text-left font-semibold text-[12px] text-[var(--text-primary)]',
editConfig && 'cursor-pointer select-none hover:bg-[var(--surface-3)]'
)}
onClick={() => editConfig && startEdit(-1, i, String(header ?? ''))}
>
{String(header ?? '')}
{isEditing(-1, i) ? (
<input
ref={setInputRef}
value={editValue}
onChange={(e) => setEditValue(e.target.value)}
onBlur={commitEdit}
onKeyDown={handleKeyDown}
className='w-full min-w-[60px] bg-transparent font-semibold text-[12px] text-[var(--text-primary)] outline-none ring-1 ring-[var(--brand-secondary)] ring-inset'
/>
) : (
String(header ?? '')
)}
</th>
))}
</tr>
@@ -25,8 +121,26 @@ export const DataTable = memo(function DataTable({ headers, rows }: DataTablePro
{rows.map((row, ri) => (
<tr key={ri} className='border-[var(--border)] border-t'>
{headers.map((_, ci) => (
<td key={ci} className='whitespace-nowrap px-3 py-2 text-[var(--text-secondary)]'>
{String(row[ci] ?? '')}
<td
key={ci}
className={cn(
'whitespace-nowrap px-3 py-2 text-[var(--text-secondary)]',
editConfig && 'cursor-pointer select-none hover:bg-[var(--surface-2)]'
)}
onClick={() => editConfig && startEdit(ri, ci, String(row[ci] ?? ''))}
>
{isEditing(ri, ci) ? (
<input
ref={setInputRef}
value={editValue}
onChange={(e) => setEditValue(e.target.value)}
onBlur={commitEdit}
onKeyDown={handleKeyDown}
className='w-full min-w-[60px] bg-transparent text-[13px] text-[var(--text-secondary)] outline-none ring-1 ring-[var(--brand-secondary)] ring-inset'
/>
) : (
String(row[ci] ?? '')
)}
</td>
))}
</tr>
@@ -36,3 +150,5 @@ export const DataTable = memo(function DataTable({ headers, rows }: DataTablePro
</div>
)
})
export const DataTable = memo(DataTableBase)
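The `editStateRef` trick above lets `useImperativeHandle` build the handle once (empty deps) while still reading fresh state: the handle closes over a mutable box that every render overwrites, not over the state values themselves. A framework-free sketch of that "always-current ref" pattern:

```typescript
// The handle is created once and closes over the ref object, so mutating
// ref.current (as each render does) keeps the handle's reads fresh.
function makeHandle<S>(stateRef: { current: S }, commit: (s: S) => void) {
  return { commitEdit: () => commit(stateRef.current) }
}

const stateRef = { current: { value: 'a' } }
const committed: string[] = []
const handle = makeHandle(stateRef, (s) => committed.push(s.value))
stateRef.current = { value: 'b' } // a later "render" updates the ref in place
handle.commitEdit() // reads 'b', not the 'a' that existed at creation time
```

Without the ref indirection, a handle created at first render would capture the initial state forever, which is exactly the stale-closure class of bug the parent commit fixed in `handleEditorMount`.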

View File

@@ -0,0 +1,308 @@
'use client'
import { memo, useCallback, useEffect, useMemo, useRef, useState } from 'react'
import { createLogger } from '@sim/logger'
import { ChevronLeft, ChevronRight, ZoomIn, ZoomOut } from 'lucide-react'
import { pdfjs, Document as ReactPdfDocument, Page as ReactPdfPage } from 'react-pdf'
import 'react-pdf/dist/Page/TextLayer.css'
import { Button, Skeleton } from '@/components/emcn'
pdfjs.GlobalWorkerOptions.workerSrc = new URL(
'pdfjs-dist/build/pdf.worker.min.mjs',
import.meta.url
).href
const logger = createLogger('PdfViewer')
const PDF_ZOOM_MIN = 0.5
const PDF_ZOOM_MAX = 3
const PDF_ZOOM_DEFAULT = 1
const PDF_ZOOM_STEP = 1.25
const PDF_PAGE_MAX_WIDTH = 816
const PDF_VIEWER_PADDING = 24
export type PdfDocumentSource =
| { kind: 'url'; url: string }
| { kind: 'buffer'; buffer: ArrayBuffer }
interface PdfViewerCoreProps {
source: PdfDocumentSource
filename: string
}
const PDF_SKELETON = (
<div className='absolute inset-0 flex flex-col items-center gap-4 overflow-y-auto bg-[var(--surface-1)] p-6'>
{[0, 1].map((i) => (
<div
key={i}
className='w-full max-w-[640px] shrink-0 rounded-md bg-[var(--surface-2)] p-8 shadow-medium'
style={{ aspectRatio: '1 / 1.414' }}
>
<div className='flex flex-col gap-3'>
<Skeleton className='h-[14px] w-[60%]' />
<Skeleton className='h-[14px] w-[80%]' />
<Skeleton className='h-[14px] w-[55%]' />
<Skeleton className='mt-2 h-[14px] w-[75%]' />
<Skeleton className='h-[14px] w-[65%]' />
<Skeleton className='h-[14px] w-[85%]' />
<Skeleton className='h-[14px] w-[50%]' />
</div>
</div>
))}
</div>
)
function PdfError({ error }: { error: string }) {
return (
<div className='flex flex-1 flex-col items-center justify-center gap-[8px]'>
<p className='font-medium text-[14px] text-[var(--text-body)]'>Failed to preview PDF</p>
<p className='text-[13px] text-[var(--text-muted)]'>{error}</p>
</div>
)
}
export const PdfViewerCore = memo(function PdfViewerCore({ source, filename }: PdfViewerCoreProps) {
const containerRef = useRef<HTMLDivElement>(null)
const paddingWrapperRef = useRef<HTMLDivElement>(null)
const pagesWrapperRef = useRef<HTMLDivElement>(null)
const pageRefs = useRef<(HTMLDivElement | null)[]>([])
const pageWidthRef = useRef<number | undefined>(undefined)
const zoomRef = useRef(PDF_ZOOM_DEFAULT)
const [containerWidth, setContainerWidth] = useState(0)
const [pageCount, setPageCount] = useState(0)
const [isDocumentReady, setIsDocumentReady] = useState(false)
const [displayZoom, setDisplayZoom] = useState(PDF_ZOOM_DEFAULT)
const [currentPage, setCurrentPage] = useState(1)
const [loadError, setLoadError] = useState<string | null>(null)
const sourceValue = source.kind === 'url' ? source.url : source.buffer
const file = useMemo(
() => (source.kind === 'url' ? source.url : { data: new Uint8Array(source.buffer) }),
[sourceValue]
)
useEffect(() => {
const container = containerRef.current
if (!container) return
const observer = new ResizeObserver(([entry]) => {
setContainerWidth(entry.contentRect.width)
})
observer.observe(container)
return () => observer.disconnect()
}, [])
const pageWidth =
containerWidth > 0
? Math.min(containerWidth - 2 * PDF_VIEWER_PADDING, PDF_PAGE_MAX_WIDTH)
: undefined
pageWidthRef.current = pageWidth
const applyZoomAt = useCallback((next: number, anchorX: number, anchorY: number) => {
const container = containerRef.current
const wrapper = pagesWrapperRef.current
const padWrapper = paddingWrapperRef.current
const pw = pageWidthRef.current
if (!container || !wrapper) return
const ratio = next / zoomRef.current
wrapper.style.zoom = String(next)
if (padWrapper && pw !== undefined) {
padWrapper.style.minWidth = `${pw * next + 2 * PDF_VIEWER_PADDING}px`
}
// Padding is outside the zoom subtree, so offset the anchor by it before scaling.
container.scrollLeft =
(container.scrollLeft + anchorX - PDF_VIEWER_PADDING) * ratio + PDF_VIEWER_PADDING - anchorX
container.scrollTop =
(container.scrollTop + anchorY - PDF_VIEWER_PADDING) * ratio + PDF_VIEWER_PADDING - anchorY
zoomRef.current = next
setDisplayZoom(next)
}, [])
const scrollToPage = (page: number) => {
const wrapper = pageRefs.current[page - 1]
if (wrapper && containerRef.current) {
containerRef.current.scrollTo({ top: wrapper.offsetTop - 16, behavior: 'smooth' })
}
}
useEffect(() => {
const container = containerRef.current
if (!container || pageCount === 0) return
const observer = new IntersectionObserver(
(entries) => {
for (const entry of entries) {
if (entry.isIntersecting) {
const pageNum = Number((entry.target as HTMLElement).dataset.page)
if (pageNum) setCurrentPage(pageNum)
}
}
},
{ root: container, threshold: 0.5 }
)
for (const wrapper of pageRefs.current) {
if (wrapper) observer.observe(wrapper)
}
return () => observer.disconnect()
}, [pageCount])
useEffect(() => {
const container = containerRef.current
if (!container) return
const onWheel = (e: WheelEvent) => {
if (!e.ctrlKey) return
e.preventDefault()
const next = Math.min(
PDF_ZOOM_MAX,
Math.max(PDF_ZOOM_MIN, zoomRef.current * (1 - e.deltaY * 0.005))
)
const rect = container.getBoundingClientRect()
applyZoomAt(next, e.clientX - rect.left, e.clientY - rect.top)
}
container.addEventListener('wheel', onWheel, { passive: false })
return () => container.removeEventListener('wheel', onWheel)
}, [applyZoomAt])
return (
<div className='flex flex-1 flex-col overflow-hidden'>
{pageCount > 0 && !loadError && (
<div className='flex shrink-0 items-center justify-between border-[var(--border)] border-b bg-[var(--surface-1)] px-3 py-1.5'>
<div className='flex items-center gap-1'>
<Button
variant='ghost'
size='sm'
onClick={() => {
const prev = Math.max(1, currentPage - 1)
setCurrentPage(prev)
scrollToPage(prev)
}}
disabled={currentPage <= 1}
className='h-6 w-6 p-0 text-[var(--text-icon)]'
aria-label='Previous page'
>
<ChevronLeft className='h-[14px] w-[14px]' />
</Button>
<span className='min-w-[5rem] text-center text-[12px] text-[var(--text-secondary)]'>
{currentPage} / {pageCount}
</span>
<Button
variant='ghost'
size='sm'
onClick={() => {
const next = Math.min(pageCount, currentPage + 1)
setCurrentPage(next)
scrollToPage(next)
}}
disabled={currentPage >= pageCount}
className='h-6 w-6 p-0 text-[var(--text-icon)]'
aria-label='Next page'
>
<ChevronRight className='h-[14px] w-[14px]' />
</Button>
</div>
<div className='flex items-center gap-1'>
<Button
variant='ghost'
size='sm'
onClick={() => {
const c = containerRef.current
applyZoomAt(
Math.max(PDF_ZOOM_MIN, zoomRef.current / PDF_ZOOM_STEP),
c ? c.clientWidth / 2 : 0,
c ? c.clientHeight / 2 : 0
)
}}
disabled={displayZoom <= PDF_ZOOM_MIN}
className='h-6 w-6 p-0 text-[var(--text-icon)]'
aria-label='Zoom out'
>
<ZoomOut className='h-[14px] w-[14px]' />
</Button>
<span className='min-w-[3rem] text-center text-[12px] text-[var(--text-secondary)]'>
{Math.round(displayZoom * 100)}%
</span>
<Button
variant='ghost'
size='sm'
onClick={() => {
const c = containerRef.current
applyZoomAt(
Math.min(PDF_ZOOM_MAX, zoomRef.current * PDF_ZOOM_STEP),
c ? c.clientWidth / 2 : 0,
c ? c.clientHeight / 2 : 0
)
}}
disabled={displayZoom >= PDF_ZOOM_MAX}
className='h-6 w-6 p-0 text-[var(--text-icon)]'
aria-label='Zoom in'
>
<ZoomIn className='h-[14px] w-[14px]' />
</Button>
</div>
</div>
)}
<div
ref={containerRef}
className='relative flex flex-1 items-start overflow-auto bg-[var(--surface-1)]'
>
{!isDocumentReady && PDF_SKELETON}
<ReactPdfDocument
file={file}
onLoadSuccess={({ numPages }) => {
setPageCount(numPages)
setCurrentPage(1)
setIsDocumentReady(true)
}}
onLoadError={(err) => {
logger.error('PDF load failed', { error: err.message })
setLoadError(err.message)
setIsDocumentReady(true)
}}
error={<PdfError error={loadError ?? 'Failed to load PDF'} />}
className='mx-auto'
>
<div
ref={paddingWrapperRef}
style={{
padding: PDF_VIEWER_PADDING,
minWidth:
pageWidth !== undefined
? `${pageWidth * displayZoom + 2 * PDF_VIEWER_PADDING}px`
: undefined,
}}
>
<div ref={pagesWrapperRef} style={{ width: pageWidth }}>
{Array.from({ length: pageCount }, (_, i) => (
<div
key={i}
ref={(el) => {
pageRefs.current[i] = el
}}
data-page={i + 1}
className='mb-4 overflow-clip rounded-md shadow-medium'
>
<ReactPdfPage
pageNumber={i + 1}
width={pageWidth}
className='!overflow-clip [&_.textLayer]:!overflow-clip'
renderTextLayer
renderAnnotationLayer={false}
aria-label={`${filename} page ${i + 1}`}
/>
</div>
))}
</div>
</div>
</ReactPdfDocument>
</div>
</div>
)
})
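`applyZoomAt` above keeps the content under the cursor fixed by scaling the anchor's content-space offset. The padding sits outside the zoomed subtree, so it is subtracted before applying the ratio and added back afterwards. The scroll arithmetic as a pure function:

```typescript
// Given the current scroll offset, the anchor's viewport offset, the zoom
// ratio (newZoom / oldZoom), and the unzoomed padding, return the scroll
// position that keeps the anchored content point stationary on screen.
function zoomScroll(scroll: number, anchor: number, ratio: number, padding: number): number {
  return (scroll + anchor - padding) * ratio + padding - anchor
}

// A ratio of 1 (no zoom change) must leave the scroll untouched.
console.log(zoomScroll(100, 50, 1, 24)) // 100
```

The same formula is applied independently to `scrollLeft`/`scrollTop` with the cursor's x/y as anchors, which is what makes Ctrl+wheel zoom feel centered on the pointer.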

View File

@@ -1,23 +1,27 @@
'use client'
import {
createContext,
memo,
useCallback,
useContext,
useEffect,
useMemo,
useRef,
useState,
} from 'react'
import { createContext, memo, useContext, useEffect, useMemo, useRef, useState } from 'react'
import matter from 'gray-matter'
import { useRouter } from 'next/navigation'
import rehypeSlug from 'rehype-slug'
import remarkBreaks from 'remark-breaks'
import remarkGfm from 'remark-gfm'
import { Streamdown } from 'streamdown'
import 'streamdown/styles.css'
import { Checkbox } from '@/components/emcn'
import { toError } from '@sim/utils/errors'
import { generateShortId } from '@sim/utils/id'
import { Checkbox, CopyCodeButton, highlight, languages } from '@/components/emcn'
import '@/components/emcn/components/code/code.css'
import 'prismjs/components/prism-bash'
import 'prismjs/components/prism-css'
import 'prismjs/components/prism-markup'
import 'prismjs/components/prism-javascript'
import 'prismjs/components/prism-typescript'
import 'prismjs/components/prism-yaml'
import 'prismjs/components/prism-sql'
import 'prismjs/components/prism-python'
import { cn } from '@/lib/core/utils/cn'
import { extractTextContent } from '@/lib/core/utils/react-node-text'
import { getFileExtension } from '@/lib/uploads/utils/file-utils'
import { useAutoScroll } from '@/hooks/use-auto-scroll'
import { DataTable } from './data-table'
@@ -26,13 +30,14 @@ interface HastNode {
position?: { start?: { offset?: number } }
}
type PreviewType = 'markdown' | 'html' | 'csv' | 'svg' | null
type PreviewType = 'markdown' | 'html' | 'csv' | 'svg' | 'mermaid' | null
const PREVIEWABLE_MIME_TYPES: Record<string, PreviewType> = {
'text/markdown': 'markdown',
'text/html': 'html',
'text/csv': 'csv',
'image/svg+xml': 'svg',
'text/x-mermaid': 'mermaid',
}
const PREVIEWABLE_EXTENSIONS: Record<string, PreviewType> = {
@@ -41,6 +46,7 @@ const PREVIEWABLE_EXTENSIONS: Record<string, PreviewType> = {
htm: 'html',
csv: 'csv',
svg: 'svg',
mmd: 'mermaid',
}
/** All extensions that have a rich preview renderer. */
@@ -80,11 +86,58 @@ export const PreviewPanel = memo(function PreviewPanel({
if (previewType === 'html') return <HtmlPreview content={content} />
if (previewType === 'csv') return <CsvPreview content={content} />
if (previewType === 'svg') return <SvgPreview content={content} />
if (previewType === 'mermaid') return <MermaidFilePreview content={content} />
return null
})
const REMARK_PLUGINS = [remarkGfm, remarkBreaks]
const CALLOUT_TYPES = new Set(['NOTE', 'TIP', 'WARNING', 'IMPORTANT', 'CAUTION'])
function remarkCallouts() {
return (tree: { type: string; children?: unknown[] }) => {
function processNode(node: { type: string; children?: unknown[] }) {
if (!node.children) return
for (const child of node.children) {
processNode(child as { type: string; children?: unknown[] })
const c = child as {
type: string
children?: unknown[]
data?: { hName?: string; hProperties?: Record<string, string> }
}
if (c.type !== 'blockquote') continue
const first = (c.children?.[0] ?? null) as {
type: string
children?: unknown[]
} | null
if (!first || first.type !== 'paragraph') continue
const firstText = (first.children?.[0] ?? null) as {
type: string
value?: string
} | null
if (!firstText || firstText.type !== 'text' || !firstText.value) continue
const match = firstText.value.match(/^\[!(NOTE|TIP|WARNING|IMPORTANT|CAUTION)\]\s?/i)
if (!match) continue
const calloutType = match[1].toUpperCase()
if (!CALLOUT_TYPES.has(calloutType)) continue
c.data ??= {}
c.data.hProperties = { ...(c.data.hProperties ?? {}), 'data-callout': calloutType }
const remainder = firstText.value.slice(match[0].length)
if (remainder) {
firstText.value = remainder
} else if (first.children && first.children.length === 1) {
c.children?.shift()
} else {
first.children?.shift()
}
}
}
processNode(tree)
}
}
const REMARK_PLUGINS = [remarkGfm, remarkBreaks, remarkCallouts]
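The remarkCallouts plugin above keys everything off one regex match on the blockquote's first text node. A minimal standalone sketch of that marker-parsing step (parseCalloutMarker is a hypothetical helper name, not part of the diff):

```typescript
// Minimal sketch of the callout-marker detection used by remarkCallouts.
// Mirrors the diff: case-insensitive match, optional trailing space consumed.
const CALLOUT_RE = /^\[!(NOTE|TIP|WARNING|IMPORTANT|CAUTION)\]\s?/i

function parseCalloutMarker(text: string): { type: string; remainder: string } | null {
  const match = text.match(CALLOUT_RE)
  if (!match) return null
  // Normalize so `[!note]` and `[!NOTE]` resolve to the same callout type.
  return { type: match[1].toUpperCase(), remainder: text.slice(match[0].length) }
}
```

In the plugin proper, the remainder then either replaces the text node's value or, when empty, the node (or the whole paragraph) is dropped.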
const REHYPE_PLUGINS = [rehypeSlug]
/**
@@ -101,7 +154,155 @@ const CheckboxIndexCtx = createContext(-1)
const NavigateCtx = createContext<((path: string) => void) | null>(null)
const CALLOUT_CONFIG: Record<
string,
{ label: string; borderColor: string; bgColor: string; textColor: string; iconColor: string }
> = {
NOTE: {
label: 'Note',
borderColor: 'border-blue-400/60',
bgColor: 'bg-blue-400/10',
textColor: 'text-[var(--text-primary)]',
iconColor: 'text-blue-500',
},
TIP: {
label: 'Tip',
borderColor: 'border-emerald-400/60',
bgColor: 'bg-emerald-400/10',
textColor: 'text-[var(--text-primary)]',
iconColor: 'text-emerald-500',
},
WARNING: {
label: 'Warning',
borderColor: 'border-amber-400/60',
bgColor: 'bg-amber-400/10',
textColor: 'text-[var(--text-primary)]',
iconColor: 'text-amber-500',
},
IMPORTANT: {
label: 'Important',
borderColor: 'border-violet-400/60',
bgColor: 'bg-violet-400/10',
textColor: 'text-[var(--text-primary)]',
iconColor: 'text-violet-500',
},
CAUTION: {
label: 'Caution',
borderColor: 'border-red-400/60',
bgColor: 'bg-red-400/10',
textColor: 'text-[var(--text-primary)]',
iconColor: 'text-red-500',
},
}
const CALLOUT_ICONS: Record<string, string> = {
NOTE: '',
TIP: '💡',
WARNING: '⚠',
IMPORTANT: '❕',
CAUTION: '🛑',
}
const LANG_ALIASES: Record<string, string> = {
js: 'javascript',
ts: 'typescript',
tsx: 'typescript',
jsx: 'javascript',
sh: 'bash',
shell: 'bash',
html: 'markup',
xml: 'markup',
yml: 'yaml',
py: 'python',
}
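The fenced-code renderer below resolves the language tag with a three-step fallback chain (`LANG_ALIASES[langRaw] || langRaw || 'javascript'`). A self-contained sketch, where resolveLang is a hypothetical name for that inline expression:

```typescript
// Alias table copied from the diff; resolveLang illustrates the fallback order:
// alias first, then the raw fence tag, then JavaScript for bare ``` fences.
const LANG_ALIASES: Record<string, string> = {
  js: 'javascript',
  ts: 'typescript',
  tsx: 'typescript',
  jsx: 'javascript',
  sh: 'bash',
  shell: 'bash',
  html: 'markup',
  xml: 'markup',
  yml: 'yaml',
  py: 'python',
}

function resolveLang(langRaw: string): string {
  return LANG_ALIASES[langRaw] || langRaw || 'javascript'
}
```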
function CalloutBlock({ type, children }: { type: string; children?: React.ReactNode }) {
const config = CALLOUT_CONFIG[type]
if (!config) {
return (
<blockquote className='my-4 break-words border-[var(--border-1)] border-l-4 py-1 pl-4 text-[var(--text-tertiary)] italic'>
{children}
</blockquote>
)
}
return (
<div
className={cn(
'my-4 rounded-lg border-l-4 px-4 py-3 text-[14px]',
config.borderColor,
config.bgColor
)}
>
<div
className={cn('mb-1 flex items-center gap-1.5 font-semibold text-[13px]', config.iconColor)}
>
<span>{CALLOUT_ICONS[type]}</span>
<span>{config.label}</span>
</div>
<div className={cn('break-words leading-[1.6]', config.textColor)}>{children}</div>
</div>
)
}
const MermaidDiagram = memo(function MermaidDiagram({ definition }: { definition: string }) {
const [svg, setSvg] = useState<string | null>(null)
const [error, setError] = useState<string | null>(null)
const idRef = useRef(`mermaid-${generateShortId(8)}`)
useEffect(() => {
if (typeof window === 'undefined') return
let cancelled = false
async function render() {
try {
const { default: mermaid } = await import('mermaid')
if (cancelled) return
mermaid.initialize({
startOnLoad: false,
securityLevel: 'strict',
theme: 'default',
})
const { svg: rendered } = await mermaid.render(idRef.current, definition.trim())
if (!cancelled) {
setSvg(rendered)
setError(null)
}
} catch (err) {
if (!cancelled) {
setError(toError(err).message || 'Failed to render diagram')
setSvg(null)
}
}
}
setSvg(null)
setError(null)
render()
return () => {
cancelled = true
}
}, [definition])
if (error) {
return (
<div className='my-4 rounded-lg border border-[var(--border)] p-4 text-[13px] text-[var(--text-muted)]'>
<span className='font-medium text-[var(--text-body)]'>Diagram error: </span>
{error}
</div>
)
}
if (!svg) {
return <div className='my-4 h-[100px] animate-pulse rounded-lg bg-[var(--surface-2)]' />
}
return <div className='my-4 overflow-auto rounded-lg' dangerouslySetInnerHTML={{ __html: svg }} />
})
const STATIC_MARKDOWN_COMPONENTS = {
pre: ({ children }: { children?: React.ReactNode }) => <>{children}</>,
p: ({ children }: { children?: React.ReactNode }) => (
<p className='mb-3 break-words text-[14px] text-[var(--text-primary)] leading-[1.6] last:mb-0'>
{children}
@@ -139,32 +340,104 @@ const STATIC_MARKDOWN_COMPONENTS = {
{children}
</h4>
),
h5: ({ id, children }: { id?: string; children?: React.ReactNode }) => (
<h5
id={id}
className='mt-3 mb-1 break-words font-semibold text-[13px] text-[var(--text-primary)] first:mt-0'
>
{children}
</h5>
),
h6: ({ id, children }: { id?: string; children?: React.ReactNode }) => (
<h6
id={id}
className='mt-3 mb-1 break-words font-medium text-[12px] text-[var(--text-secondary)] first:mt-0'
>
{children}
</h6>
),
inlineCode: ({ children }: { children?: React.ReactNode }) => {
if (typeof children === 'string' && children.includes('\n')) {
return (
<code className='my-4 block overflow-x-auto whitespace-pre-wrap break-words rounded-lg bg-[var(--surface-5)] p-4 font-mono text-[13px] text-[var(--text-primary)] leading-[1.6]'>
<code className='my-4 block overflow-x-auto whitespace-pre-wrap break-words rounded-lg bg-[var(--surface-5)] p-4 font-mono text-[var(--text-primary)] leading-[1.6]'>
{children}
</code>
)
}
return (
<code className='whitespace-normal rounded bg-[var(--surface-5)] px-1.5 py-0.5 font-mono text-[13px] text-[var(--caution)]'>
<code className='whitespace-normal rounded bg-[var(--surface-5)] px-1.5 py-0.5 font-mono text-[var(--caution)]'>
{children}
</code>
)
},
pre: ({ children }: { children?: React.ReactNode }) => <>{children}</>,
code: ({ children, className }: { children?: React.ReactNode; className?: string }) => {
const langMatch = className?.match(/language-(\w+)/)
const langRaw = langMatch?.[1] ?? ''
const codeString = extractTextContent(children)
if (langRaw === 'mermaid') {
return <MermaidDiagram definition={codeString} />
}
if (!codeString) {
return (
<code className='whitespace-normal rounded bg-[var(--surface-5)] px-1.5 py-0.5 font-mono text-[var(--caution)]'>
{children}
</code>
)
}
const resolved = LANG_ALIASES[langRaw] || langRaw || 'javascript'
const grammar = languages[resolved] || languages.javascript
const html = grammar ? highlight(codeString.trimEnd(), grammar, resolved) : null
return (
<div className='my-4 overflow-hidden rounded-lg border border-[var(--border)]'>
<div className='flex items-center justify-between border-[var(--border)] border-b bg-[var(--surface-3)] px-3 py-1.5'>
<span className='text-[11px] text-[var(--text-tertiary)]'>{langRaw || 'code'}</span>
<CopyCodeButton
code={codeString}
className='-mr-1 text-[var(--text-tertiary)] hover:text-[var(--text-secondary)]'
/>
</div>
<div className='code-editor-theme bg-[var(--surface-5)]'>
{html ? (
<pre
className='m-0 overflow-x-auto whitespace-pre p-4 font-mono text-[13px] leading-[1.6]'
dangerouslySetInnerHTML={{ __html: html }}
/>
) : (
<pre className='m-0 overflow-x-auto whitespace-pre p-4 font-mono text-[13px] text-[var(--text-primary)] leading-[1.6]'>
<code>{codeString.trimEnd()}</code>
</pre>
)}
</div>
</div>
)
},
strong: ({ children }: { children?: React.ReactNode }) => (
<strong className='break-words font-semibold text-[var(--text-primary)]'>{children}</strong>
<strong className='break-words font-semibold'>{children}</strong>
),
em: ({ children }: { children?: React.ReactNode }) => (
<em className='break-words text-[var(--text-tertiary)]'>{children}</em>
),
blockquote: ({ children }: { children?: React.ReactNode }) => (
<blockquote className='my-4 break-words border-[var(--border-1)] border-l-4 py-1 pl-4 text-[var(--text-tertiary)] italic'>
{children}
</blockquote>
em: ({ children }: { children?: React.ReactNode }) => <em className='break-words'>{children}</em>,
del: ({ children }: { children?: React.ReactNode }) => (
<del className='line-through opacity-50'>{children}</del>
),
blockquote: ({
children,
'data-callout': calloutType,
}: {
children?: React.ReactNode
'data-callout'?: string
}) => {
if (calloutType && CALLOUT_TYPES.has(calloutType)) {
return <CalloutBlock type={calloutType}>{children}</CalloutBlock>
}
return (
<blockquote className='my-4 break-words border-[var(--border-1)] border-l-4 py-1 pl-4 text-[var(--text-tertiary)] italic'>
{children}
</blockquote>
)
},
hr: () => <hr className='my-6 border-[var(--border)]' />,
img: ({ src, alt }: React.ImgHTMLAttributes<HTMLImageElement>) => (
<img
@@ -299,36 +572,33 @@ function isInternalHref(
function AnchorRenderer({ href, children }: { href?: string; children?: React.ReactNode }) {
const navigate = useContext(NavigateCtx)
const parsed = useMemo(() => (href ? isInternalHref(href) : null), [href])
const parsed = href ? isInternalHref(href) : null
const handleClick = useCallback(
(e: React.MouseEvent<HTMLAnchorElement>) => {
if (!parsed || e.metaKey || e.ctrlKey || e.shiftKey || e.altKey) return
const handleClick = (e: React.MouseEvent<HTMLAnchorElement>) => {
if (!parsed || e.metaKey || e.ctrlKey || e.shiftKey || e.altKey) return
e.preventDefault()
e.preventDefault()
if (parsed.pathname === '' && parsed.hash) {
const el = document.getElementById(parsed.hash.slice(1))
if (el) {
const container = el.closest('.overflow-auto') as HTMLElement | null
if (container) {
container.scrollTo({ top: el.offsetTop - container.offsetTop, behavior: 'smooth' })
} else {
el.scrollIntoView({ behavior: 'smooth' })
}
if (parsed.pathname === '' && parsed.hash) {
const el = document.getElementById(parsed.hash.slice(1))
if (el) {
const container = el.closest('.overflow-auto') as HTMLElement | null
if (container) {
container.scrollTo({ top: el.offsetTop - container.offsetTop, behavior: 'smooth' })
} else {
el.scrollIntoView({ behavior: 'smooth' })
}
return
}
return
}
const destination = parsed.pathname + parsed.hash
if (navigate) {
navigate(destination)
} else {
window.location.assign(destination)
}
},
[parsed, navigate]
)
const destination = parsed.pathname + parsed.hash
if (navigate) {
navigate(destination)
} else {
window.location.assign(destination)
}
}
return (
<a
@@ -352,6 +622,30 @@ const MARKDOWN_COMPONENTS = {
input: InputRenderer,
}
function FrontMatterCard({ data }: { data: Record<string, unknown> }) {
const entries = Object.entries(data)
if (entries.length === 0) return null
return (
<div className='mb-6 rounded-lg border border-[var(--border)] bg-[var(--surface-2)] px-4 py-3 text-[13px]'>
<dl className='flex flex-col gap-1.5'>
{entries.map(([key, value]) => (
<div key={key} className='flex gap-2 break-words'>
<dt className='shrink-0 font-medium text-[var(--text-secondary)]'>{key}:</dt>
<dd className='text-[var(--text-primary)]'>
{Array.isArray(value)
? value.join(', ')
: value instanceof Date
? value.toISOString().split('T')[0]
: String(value ?? '')}
</dd>
</div>
))}
</dl>
</div>
)
}
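FrontMatterCard renders each value through a nested ternary: arrays join with commas, Date objects (which gray-matter yields for unquoted YAML dates) collapse to a date-only string, and everything else is stringified with null/undefined becoming empty. A sketch of that formatting as a pure function (formatFrontMatterValue is a hypothetical name):

```typescript
// Standalone sketch of the value formatting in FrontMatterCard's JSX above.
function formatFrontMatterValue(value: unknown): string {
  if (Array.isArray(value)) return value.join(', ')
  // toISOString() is always UTC, so splitting on 'T' yields a stable YYYY-MM-DD.
  if (value instanceof Date) return value.toISOString().split('T')[0]
  return String(value ?? '')
}
```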
const MarkdownPreview = memo(function MarkdownPreview({
content,
isStreaming = false,
@@ -363,22 +657,28 @@ const MarkdownPreview = memo(function MarkdownPreview({
}) {
const { push: navigate } = useRouter()
const { ref: autoScrollRef } = useAutoScroll(isStreaming)
const scrollContainerRef = useRef<HTMLDivElement>(null)
const contentRef = useRef(content)
contentRef.current = content
const { frontMatterData, markdownContent } = useMemo(() => {
if (isStreaming) return { frontMatterData: null, markdownContent: content }
try {
const parsed = matter(content)
const hasFrontMatter = Object.keys(parsed.data).length > 0
return {
frontMatterData: hasFrontMatter ? parsed.data : null,
markdownContent: hasFrontMatter ? parsed.content : content,
}
} catch {
return { frontMatterData: null, markdownContent: content }
}
}, [content, isStreaming])
const ctxValue = useMemo(
() => (onCheckboxToggle ? { contentRef, onToggle: onCheckboxToggle } : null),
[onCheckboxToggle]
)
const setScrollRef = useCallback(
(node: HTMLDivElement | null) => {
scrollContainerRef.current = node
autoScrollRef(node)
},
[autoScrollRef]
)
const hasScrolledToHash = useRef(false)
useEffect(() => {
@@ -398,37 +698,27 @@ const MarkdownPreview = memo(function MarkdownPreview({
const streamdownMode = isStreaming ? undefined : 'static'
if (onCheckboxToggle) {
return (
<NavigateCtx.Provider value={navigate}>
<MarkdownCheckboxCtx.Provider value={ctxValue}>
<div ref={setScrollRef} className='h-full overflow-auto p-6'>
<Streamdown
mode={streamdownMode}
remarkPlugins={REMARK_PLUGINS}
rehypePlugins={REHYPE_PLUGINS}
components={MARKDOWN_COMPONENTS}
>
{content}
</Streamdown>
</div>
</MarkdownCheckboxCtx.Provider>
</NavigateCtx.Provider>
)
}
const body = (
<div ref={autoScrollRef} className='h-full overflow-auto p-6'>
{frontMatterData && <FrontMatterCard data={frontMatterData} />}
<Streamdown
mode={streamdownMode}
remarkPlugins={REMARK_PLUGINS}
rehypePlugins={REHYPE_PLUGINS}
components={MARKDOWN_COMPONENTS}
>
{markdownContent}
</Streamdown>
</div>
)
return (
<NavigateCtx.Provider value={navigate}>
<div ref={setScrollRef} className='h-full overflow-auto p-6'>
<Streamdown
mode={streamdownMode}
remarkPlugins={REMARK_PLUGINS}
rehypePlugins={REHYPE_PLUGINS}
components={MARKDOWN_COMPONENTS}
>
{content}
</Streamdown>
</div>
{onCheckboxToggle ? (
<MarkdownCheckboxCtx.Provider value={ctxValue}>{body}</MarkdownCheckboxCtx.Provider>
) : (
body
)}
</NavigateCtx.Provider>
)
})
@@ -497,9 +787,7 @@ function buildHtmlPreviewDocument(content: string): string {
}
const HtmlPreview = memo(function HtmlPreview({ content }: { content: string }) {
// Run inline HTML/JS in an isolated iframe while blocking any navigation
// that would replace the preview with another document.
const wrappedContent = useMemo(() => buildHtmlPreviewDocument(content), [content])
const wrappedContent = buildHtmlPreviewDocument(content)
const containerRef = useRef<HTMLDivElement>(null)
const [isRenderable, setIsRenderable] = useState(false)
const [resumeNonce, setResumeNonce] = useState(0)
@@ -563,19 +851,15 @@ const HtmlPreview = memo(function HtmlPreview({ content }: { content: string })
sandbox='allow-scripts'
referrerPolicy='no-referrer'
title='HTML Preview'
className='h-full w-full border-0 bg-white'
className='h-full w-full border-0 bg-[var(--surface-2)]'
/>
)}
</div>
)
})
const SvgPreview = memo(function SvgPreview({ content }: { content: string }) {
const wrappedContent = useMemo(
() =>
`<!DOCTYPE html><html><head><style>body{margin:0;display:flex;align-items:center;justify-content:center;min-height:100vh;background:transparent;}svg{max-width:100%;max-height:100vh;}</style></head><body>${content}</body></html>`,
[content]
)
function SvgPreview({ content }: { content: string }) {
const wrappedContent = `<!DOCTYPE html><html><head><style>body{margin:0;display:flex;align-items:center;justify-content:center;min-height:100vh;background:transparent;}svg{max-width:100%;max-height:100vh;}</style></head><body>${content}</body></html>`
return (
<div className='h-full overflow-hidden'>
@@ -587,7 +871,15 @@ const SvgPreview = memo(function SvgPreview({ content }: { content: string }) {
/>
</div>
)
})
}
function MermaidFilePreview({ content }: { content: string }) {
return (
<div className='h-full overflow-auto p-6'>
<MermaidDiagram definition={content} />
</div>
)
}
const CsvPreview = memo(function CsvPreview({ content }: { content: string }) {
const { headers, rows } = useMemo(() => parseCsv(content), [content])


@@ -2,7 +2,7 @@
import { memo, useCallback, useEffect, useMemo, useRef, useState } from 'react'
import { createLogger } from '@sim/logger'
import { useParams, useRouter } from 'next/navigation'
import { useParams, useRouter, useSearchParams } from 'next/navigation'
import {
Button,
Columns2,
@@ -136,7 +136,10 @@ export function Files() {
const params = useParams()
const router = useRouter()
const searchParams = useSearchParams()
const isNewFile = searchParams.get('new') === '1'
const workspaceId = params?.workspaceId as string
const fileIdFromRoute =
typeof params?.fileId === 'string' && params.fileId.length > 0 ? params.fileId : null
const userPermissions = useUserPermissionsContext()
@@ -178,6 +181,8 @@ export function Files() {
const [uploading, setUploading] = useState(false)
const [uploadProgress, setUploadProgress] = useState({ completed: 0, total: 0 })
const [isDraggingOver, setIsDraggingOver] = useState(false)
const dragCounterRef = useRef(0)
const [inputValue, setInputValue] = useState('')
const debouncedSearchTerm = useDebounce(inputValue, 200)
const [activeSort, setActiveSort] = useState<{
@@ -192,6 +197,7 @@ export function Files() {
const [isDirty, setIsDirty] = useState(false)
const [saveStatus, setSaveStatus] = useState<SaveStatus>('idle')
const [previewMode, setPreviewMode] = useState<PreviewMode>(() => {
if (isNewFile) return 'editor'
if (fileIdFromRoute) {
const file = files.find((f) => f.id === fileIdFromRoute)
if (file && isPreviewable(file)) return 'preview'
@@ -201,7 +207,7 @@ export function Files() {
})
const [showUnsavedChangesAlert, setShowUnsavedChangesAlert] = useState(false)
const [showDeleteConfirm, setShowDeleteConfirm] = useState(false)
const [contextMenuFile, setContextMenuFile] = useState<WorkspaceFileRecord | null>(null)
const contextMenuFileRef = useRef<WorkspaceFileRecord | null>(null)
const [deleteTargetFile, setDeleteTargetFile] = useState<WorkspaceFileRecord | null>(null)
const listRename = useInlineRename({
@@ -365,27 +371,26 @@ export function Files() {
filteredFiles,
])
const handleFileChange = useCallback(
async (e: React.ChangeEvent<HTMLInputElement>) => {
const list = e.target.files
if (!list || list.length === 0 || !workspaceId) return
const uploadFiles = useCallback(
async (filesToUpload: File[]) => {
if (!workspaceId || filesToUpload.length === 0) return
const unsupported: string[] = []
const allowedFiles = filesToUpload.filter((f) => {
const ext = getFileExtension(f.name)
const ok = SUPPORTED_EXTENSIONS.includes(ext as (typeof SUPPORTED_EXTENSIONS)[number])
if (!ok) unsupported.push(f.name)
return ok
})
if (unsupported.length > 0) {
logger.warn('Unsupported file types skipped:', unsupported)
}
if (allowedFiles.length === 0) return
try {
setUploading(true)
const filesToUpload = Array.from(list)
const unsupported: string[] = []
const allowedFiles = filesToUpload.filter((f) => {
const ext = getFileExtension(f.name)
const ok = SUPPORTED_EXTENSIONS.includes(ext as (typeof SUPPORTED_EXTENSIONS)[number])
if (!ok) unsupported.push(f.name)
return ok
})
if (unsupported.length > 0) {
logger.warn('Unsupported file types skipped:', unsupported)
}
setUploadProgress({ completed: 0, total: allowedFiles.length })
for (let i = 0; i < allowedFiles.length; i++) {
@@ -401,14 +406,42 @@ export function Files() {
} finally {
setUploading(false)
setUploadProgress({ completed: 0, total: 0 })
if (fileInputRef.current) {
fileInputRef.current.value = ''
}
}
},
[workspaceId, uploadFile]
)
const handleFileChange = async (e: React.ChangeEvent<HTMLInputElement>) => {
const list = e.target.files
if (!list || list.length === 0) return
await uploadFiles(Array.from(list))
if (fileInputRef.current) fileInputRef.current.value = ''
}
const handleDragEnter = (e: React.DragEvent) => {
e.preventDefault()
dragCounterRef.current++
if (e.dataTransfer.types.includes('Files')) setIsDraggingOver(true)
}
const handleDragLeave = () => {
dragCounterRef.current--
if (dragCounterRef.current === 0) setIsDraggingOver(false)
}
const handleDragOver = (e: React.DragEvent) => {
e.preventDefault()
e.dataTransfer.dropEffect = 'copy'
}
const handleDrop = async (e: React.DragEvent) => {
e.preventDefault()
dragCounterRef.current = 0
setIsDraggingOver(false)
const dropped = Array.from(e.dataTransfer.files)
if (dropped.length > 0) await uploadFiles(dropped)
}
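The dragCounterRef in the handlers above exists because the browser fires a dragenter/dragleave pair for every child element the cursor crosses, so a plain boolean would flicker. The bookkeeping can be sketched framework-free (DragOverlayTracker is a hypothetical wrapper for illustration; the real handlers additionally check `e.dataTransfer.types` before showing the overlay):

```typescript
// Counter-based drag-over tracking: the overlay hides only when every
// enter has been balanced by a leave, i.e. the cursor left the container.
class DragOverlayTracker {
  private counter = 0
  isDraggingOver = false

  enter(): void {
    this.counter++
    this.isDraggingOver = true
  }

  leave(): void {
    this.counter--
    if (this.counter === 0) this.isDraggingOver = false
  }

  drop(): void {
    // Mirror handleDrop: reset unconditionally so a missed leave cannot wedge the overlay.
    this.counter = 0
    this.isDraggingOver = false
  }
}
```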
const handleDownload = useCallback(async (file: WorkspaceFileRecord) => {
try {
await downloadWorkspaceFile(file)
@@ -527,13 +560,13 @@ export function Files() {
]
)
const handleDiscardChanges = useCallback(() => {
const handleDiscardChanges = () => {
setShowUnsavedChangesAlert(false)
setIsDirty(false)
setSaveStatus('idle')
setPreviewMode('editor')
router.push(`/workspace/${workspaceId}/files`)
}, [router, workspaceId])
}
const creatingFileRef = useRef(creatingFile)
creatingFileRef.current = creatingFile
@@ -558,7 +591,7 @@ export function Files() {
const fileId = result.file?.id
if (fileId) {
justCreatedFileIdRef.current = fileId
router.push(`/workspace/${workspaceId}/files/${fileId}`)
router.push(`/workspace/${workspaceId}/files/${fileId}?new=1`)
}
} catch (err) {
logger.error('Failed to create file:', err)
@@ -571,16 +604,13 @@ export function Files() {
(e: React.MouseEvent, rowId: string) => {
const file = filesRef.current.find((f) => f.id === rowId)
if (file) {
setContextMenuFile(file)
contextMenuFileRef.current = file
openContextMenu(e)
}
},
[openContextMenu]
)
const contextMenuFileRef = useRef(contextMenuFile)
contextMenuFileRef.current = contextMenuFile
const handleContextMenuOpen = useCallback(() => {
const file = contextMenuFileRef.current
if (!file) return
@@ -632,7 +662,7 @@ export function Files() {
if (fileIdFromRoute !== prevFileIdRef.current) {
prevFileIdRef.current = fileIdFromRoute
const isJustCreated =
fileIdFromRoute != null && justCreatedFileIdRef.current === fileIdFromRoute
isNewFile || (fileIdFromRoute != null && justCreatedFileIdRef.current === fileIdFromRoute)
if (justCreatedFileIdRef.current && !isJustCreated) {
justCreatedFileIdRef.current = null
}
@@ -649,6 +679,12 @@ export function Files() {
}
}
useEffect(() => {
if (isNewFile && fileIdFromRoute) {
router.replace(`/workspace/${workspaceId}/files/${fileIdFromRoute}`)
}
}, [])
useEffect(() => {
const handleKeyDown = (e: KeyboardEvent) => {
if (!fileIdFromRouteRef.current) return
@@ -774,34 +810,25 @@ export function Files() {
const canEdit = userPermissions.canEdit === true
const searchConfig: SearchConfig = useMemo(
() => ({
value: inputValue,
onChange: setInputValue,
onClearAll: () => setInputValue(''),
placeholder: 'Search files...',
}),
[inputValue]
)
const searchConfig: SearchConfig = {
value: inputValue,
onChange: setInputValue,
onClearAll: () => setInputValue(''),
placeholder: 'Search files...',
}
const createConfig = useMemo(
() => ({
label: 'New file',
onClick: handleCreateFile,
disabled: uploading || creatingFile || !canEdit,
}),
[handleCreateFile, uploading, creatingFile, canEdit]
)
const createConfig = {
label: 'New file',
onClick: handleCreateFile,
disabled: uploading || creatingFile || !canEdit,
}
const uploadButtonLabel = useMemo(
() =>
uploading && uploadProgress.total > 0
? `${uploadProgress.completed}/${uploadProgress.total}`
: uploading
? 'Uploading...'
: 'Upload',
[uploading, uploadProgress.completed, uploadProgress.total]
)
const uploadButtonLabel =
uploading && uploadProgress.total > 0
? `${uploadProgress.completed}/${uploadProgress.total}`
: uploading
? 'Uploading...'
: 'Upload'
const headerActionsConfig = useMemo(
() => [
@@ -818,39 +845,7 @@ export function Files() {
router.push(`/workspace/${workspaceId}/files`)
}, [router, workspaceId])
const loadingBreadcrumbs = useMemo(
() => [{ label: 'Files', onClick: handleNavigateToFiles }, { label: '...' }],
[handleNavigateToFiles]
)
const typeDisplayLabel = useMemo(() => {
if (typeFilter.length === 0) return 'All'
if (typeFilter.length === 1) {
const labels: Record<string, string> = {
document: 'Documents',
audio: 'Audio',
video: 'Video',
}
return labels[typeFilter[0]] ?? typeFilter[0]
}
return `${typeFilter.length} selected`
}, [typeFilter])
const sizeDisplayLabel = useMemo(() => {
if (sizeFilter.length === 0) return 'All'
if (sizeFilter.length === 1) {
const labels: Record<string, string> = { small: 'Small', medium: 'Medium', large: 'Large' }
return labels[sizeFilter[0]] ?? sizeFilter[0]
}
return `${sizeFilter.length} selected`
}, [sizeFilter])
const uploadedByDisplayLabel = useMemo(() => {
if (uploadedByFilter.length === 0) return 'All'
if (uploadedByFilter.length === 1)
return members?.find((m) => m.userId === uploadedByFilter[0])?.name ?? '1 member'
return `${uploadedByFilter.length} members`
}, [uploadedByFilter, members])
const loadingBreadcrumbs = [{ label: 'Files', onClick: handleNavigateToFiles }, { label: '...' }]
const memberOptions: ComboboxOption[] = useMemo(
() =>
@@ -893,8 +888,33 @@ export function Files() {
const hasActiveFilters =
typeFilter.length > 0 || sizeFilter.length > 0 || uploadedByFilter.length > 0
const filterContent = useMemo(
() => (
const filterContent = useMemo(() => {
const typeDisplayLabel =
typeFilter.length === 0
? 'All'
: typeFilter.length === 1
? (({ document: 'Documents', audio: 'Audio', video: 'Video' } as Record<string, string>)[
typeFilter[0]
] ?? typeFilter[0])
: `${typeFilter.length} selected`
const sizeDisplayLabel =
sizeFilter.length === 0
? 'All'
: sizeFilter.length === 1
? (({ small: 'Small', medium: 'Medium', large: 'Large' } as Record<string, string>)[
sizeFilter[0]
] ?? sizeFilter[0])
: `${sizeFilter.length} selected`
const uploadedByDisplayLabel =
uploadedByFilter.length === 0
? 'All'
: uploadedByFilter.length === 1
? (members?.find((m) => m.userId === uploadedByFilter[0])?.name ?? '1 member')
: `${uploadedByFilter.length} members`
return (
<div className='flex w-[240px] flex-col gap-3 p-3'>
<div className='flex flex-col gap-1.5'>
<span className='font-medium text-[var(--text-secondary)] text-caption'>File Type</span>
@@ -974,18 +994,8 @@ export function Files() {
</button>
)}
</div>
),
[
typeFilter,
sizeFilter,
uploadedByFilter,
memberOptions,
typeDisplayLabel,
sizeDisplayLabel,
uploadedByDisplayLabel,
hasActiveFilters,
]
)
)
}, [typeFilter, sizeFilter, uploadedByFilter, memberOptions, members, hasActiveFilters])
const filterTags: FilterTag[] = useMemo(() => {
const tags: FilterTag[] = []
@@ -1027,8 +1037,24 @@ export function Files() {
return (
<div className='flex h-full flex-1 flex-col overflow-hidden bg-[var(--bg)]'>
<ResourceHeader icon={FilesIcon} breadcrumbs={loadingBreadcrumbs} />
<div className='flex flex-1 items-center justify-center'>
<Skeleton className='h-[16px] w-[200px]' />
<div className='flex flex-1 flex-col items-center gap-4 overflow-y-auto bg-[var(--surface-1)] p-6'>
{[0, 1].map((i) => (
<div
key={i}
className='w-full max-w-[640px] shrink-0 rounded-md bg-[var(--surface-2)] p-8 shadow-medium'
style={{ aspectRatio: '1 / 1.414' }}
>
<div className='flex flex-col gap-3'>
<Skeleton className='h-[14px] w-[60%]' />
<Skeleton className='h-[14px] w-[80%]' />
<Skeleton className='h-[14px] w-[55%]' />
<Skeleton className='mt-2 h-[14px] w-[75%]' />
<Skeleton className='h-[14px] w-[65%]' />
<Skeleton className='h-[14px] w-[85%]' />
<Skeleton className='h-[14px] w-[50%]' />
</div>
</div>
))}
</div>
</div>
)
@@ -1049,7 +1075,7 @@ export function Files() {
workspaceId={workspaceId}
canEdit={canEdit}
previewMode={previewMode}
autoFocus={justCreatedFileIdRef.current === selectedFile.id}
autoFocus={isNewFile || justCreatedFileIdRef.current === selectedFile.id}
onDirtyChange={setIsDirty}
onSaveStatusChange={setSaveStatus}
saveRef={saveRef}
@@ -1087,7 +1113,13 @@ export function Files() {
}
return (
<>
<div
className='relative flex h-full flex-col overflow-hidden'
onDragEnter={canEdit ? handleDragEnter : undefined}
onDragLeave={canEdit ? handleDragLeave : undefined}
onDragOver={canEdit ? handleDragOver : undefined}
onDrop={canEdit ? handleDrop : undefined}
>
<Resource
icon={FilesIcon}
title='Files'
@@ -1103,6 +1135,19 @@ export function Files() {
onRowContextMenu={handleRowContextMenu}
isLoading={isLoading}
onContextMenu={handleContentContextMenu}
overlay={
isDraggingOver ? (
<div className='pointer-events-none absolute inset-0 z-50 flex flex-col items-center justify-center gap-2 border border-[var(--accent)] border-dashed bg-[var(--surface-4)] transition-colors'>
<Upload className='h-5 w-5 text-[var(--accent)]' />
<div className='flex flex-col gap-0.5 text-center'>
<p className='font-medium text-[14px] text-[var(--accent)]'>Drop to upload</p>
<p className='text-[11px] text-[var(--text-tertiary)]'>
Release files here to add them to this workspace
</p>
</div>
</div>
) : null
}
/>
<FilesListContextMenu
@@ -1143,7 +1188,7 @@ export function Files() {
accept={ACCEPT_ATTR}
multiple
/>
</>
</div>
)
}


@@ -1,3 +1,4 @@
import { Suspense } from 'react'
import type { Metadata } from 'next'
import { Files } from './files'
@@ -6,4 +7,10 @@ export const metadata: Metadata = {
robots: { index: false },
}
export default Files
export default function FilesPage() {
return (
<Suspense fallback={null}>
<Files />
</Suspense>
)
}


@@ -156,32 +156,86 @@ function toToolData(tc: NonNullable<ContentBlock['toolCall']>): ToolCallData {
*/
function parseBlocks(blocks: ContentBlock[]): MessageSegment[] {
const segments: MessageSegment[] = []
let group: AgentGroupSegment | null = null
const pushGroup = (nextGroup: AgentGroupSegment, isOpen = false) => {
segments.push({ ...nextGroup, isOpen })
const groupsByKey = new Map<string, AgentGroupSegment>()
let activeGroupKey: string | null = null
const groupKey = (name: string, parentToolCallId: string | undefined) =>
parentToolCallId ? `${name}:${parentToolCallId}` : `${name}:legacy`
const resolveGroupKey = (name: string, parentToolCallId: string | undefined) => {
if (parentToolCallId) return groupKey(name, parentToolCallId)
if (activeGroupKey && groupsByKey.get(activeGroupKey)?.agentName === name) {
return activeGroupKey
}
for (const [key, g] of groupsByKey) {
if (g.agentName === name && g.isOpen) return key
}
return groupKey(name, undefined)
}
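The key scheme above is the core of the lane bookkeeping: a chunk that carries a `parentToolCallId` gets a stable per-dispatch lane, while legacy chunks without one share a single fallback lane per agent name. Isolated from the diff:

```typescript
// groupKey as defined in parseBlocks above: per-dispatch lanes when a
// parent tool-call id is present, one shared ':legacy' lane otherwise.
function groupKey(name: string, parentToolCallId: string | undefined): string {
  return parentToolCallId ? `${name}:${parentToolCallId}` : `${name}:legacy`
}
```

resolveGroupKey then layers fallbacks on top of this: prefer the active lane when its agent name matches, then any open lane for that agent, and only then the legacy key.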
const ensureGroup = (
name: string,
parentToolCallId: string | undefined
): { group: AgentGroupSegment; created: boolean } => {
const key = resolveGroupKey(name, parentToolCallId)
const existing = groupsByKey.get(key)
if (existing) return { group: existing, created: false }
const group: AgentGroupSegment = {
type: 'agent_group',
id: `agent-${key}-${segments.length}`,
agentName: name,
agentLabel: resolveAgentLabel(name),
items: [],
isDelegating: false,
isOpen: false,
}
segments.push(group)
groupsByKey.set(key, group)
return { group, created: true }
}
const findGroupForSubagentChunk = (
parentToolCallId: string | undefined
): AgentGroupSegment | undefined => {
if (parentToolCallId) {
for (const [key, g] of groupsByKey) {
if (key.endsWith(`:${parentToolCallId}`)) return g
}
return undefined
}
if (activeGroupKey) return groupsByKey.get(activeGroupKey)
return undefined
}
const flushLanes = () => {
for (const g of groupsByKey.values()) {
g.isOpen = false
g.isDelegating = false
}
groupsByKey.clear()
activeGroupKey = null
}
for (let i = 0; i < blocks.length; i++) {
const block = blocks[i]
if (block.type === 'subagent_text' || block.type === 'subagent_thinking') {
if (!block.content || !group) continue
group.isDelegating = false
const lastItem = group.items[group.items.length - 1]
if (!block.content) continue
const g = findGroupForSubagentChunk(block.parentToolCallId)
if (!g) continue
g.isDelegating = false
const lastItem = g.items[g.items.length - 1]
if (lastItem?.type === 'text') {
lastItem.content += block.content
} else {
group.items.push({ type: 'text', content: block.content })
g.items.push({ type: 'text', content: block.content })
}
continue
}
if (block.type === 'thinking') {
if (!block.content?.trim()) continue
if (group) {
pushGroup(group)
group = null
}
flushLanes()
const last = segments[segments.length - 1]
if (last?.type === 'thinking' && last.endedAt === undefined) {
last.content += block.content
@@ -201,21 +255,19 @@ function parseBlocks(blocks: ContentBlock[]): MessageSegment[] {
if (block.type === 'text') {
if (!block.content) continue
if (block.subagent) {
if (group && group.agentName === block.subagent) {
group.isDelegating = false
const lastItem = group.items[group.items.length - 1]
const g = groupsByKey.get(resolveGroupKey(block.subagent, block.parentToolCallId))
if (g) {
g.isDelegating = false
const lastItem = g.items[g.items.length - 1]
if (lastItem?.type === 'text') {
lastItem.content += block.content
} else {
group.items.push({ type: 'text', content: block.content })
g.items.push({ type: 'text', content: block.content })
}
continue
}
}
if (group) {
pushGroup(group)
group = null
}
flushLanes()
const last = segments[segments.length - 1]
if (last?.type === 'text') {
last.content += block.content
@@ -228,34 +280,23 @@ function parseBlocks(blocks: ContentBlock[]): MessageSegment[] {
if (block.type === 'subagent') {
if (!block.content) continue
const key = block.content
- if (group && group.agentName === key) continue
- const dispatchToolName = SUBAGENT_DISPATCH_TOOLS[key]
let inheritedDelegation = false
- if (group && dispatchToolName) {
- const last: AgentGroupItem | undefined = group.items[group.items.length - 1]
- if (last?.type === 'tool' && last.data.toolName === dispatchToolName) {
- inheritedDelegation = !isToolDone(last.data.status) && Boolean(last.data.streamingArgs)
- group.items.pop()
+ const dispatchToolName = SUBAGENT_DISPATCH_TOOLS[key]
+ if (dispatchToolName) {
+ const mship = groupsByKey.get(groupKey('mothership', undefined))
+ if (mship) {
+ const last = mship.items[mship.items.length - 1]
+ if (last?.type === 'tool' && last.data.toolName === dispatchToolName) {
+ inheritedDelegation = !isToolDone(last.data.status) && Boolean(last.data.streamingArgs)
+ mship.items.pop()
+ }
+ }
- if (group.items.length > 0) {
- pushGroup(group)
- }
- group = null
- } else if (group) {
- pushGroup(group)
- group = null
}
- group = {
- type: 'agent_group',
- id: `agent-${key}-${i}`,
- agentName: key,
- agentLabel: resolveAgentLabel(key),
- items: [],
- isDelegating: inheritedDelegation,
- isOpen: false,
- }
+ groupsByKey.delete(groupKey('mothership', undefined))
+ const { group: g } = ensureGroup(key, block.parentToolCallId)
+ if (inheritedDelegation) g.isDelegating = true
+ g.isOpen = true
+ activeGroupKey = resolveGroupKey(key, block.parentToolCallId)
continue
}
@@ -267,95 +308,75 @@ function parseBlocks(blocks: ContentBlock[]): MessageSegment[] {
const isDispatch = SUBAGENT_KEYS.has(tc.name) && !tc.calledBy
if (isDispatch) {
- if (!group || group.agentName !== tc.name) {
- if (group) {
- pushGroup(group)
- group = null
- }
- group = {
- type: 'agent_group',
- id: `agent-${tc.name}-${i}`,
- agentName: tc.name,
- agentLabel: resolveAgentLabel(tc.name),
- items: [],
- isDelegating: false,
- isOpen: false,
- }
- }
- group.isDelegating = isDelegatingTool(tc)
+ groupsByKey.delete(groupKey('mothership', undefined))
+ const { group: g } = ensureGroup(tc.name, tc.id)
+ g.isDelegating = isDelegatingTool(tc)
+ g.isOpen = g.isDelegating
continue
}
const tool = toToolData(tc)
- if (tc.calledBy && group && group.agentName === tc.calledBy) {
- group.isDelegating = false
- group.items.push({ type: 'tool', data: tool })
- } else if (tc.calledBy) {
- if (group) {
- pushGroup(group)
- group = null
- }
- group = {
- type: 'agent_group',
- id: `agent-${tc.calledBy}-${i}`,
- agentName: tc.calledBy,
- agentLabel: resolveAgentLabel(tc.calledBy),
- items: [{ type: 'tool', data: tool }],
- isDelegating: false,
- isOpen: false,
- }
+ if (tc.calledBy) {
+ const { group: g, created } = ensureGroup(tc.calledBy, block.parentToolCallId)
+ g.isDelegating = false
+ if (created && block.parentToolCallId) g.isOpen = true
+ g.items.push({ type: 'tool', data: tool })
+ activeGroupKey = resolveGroupKey(tc.calledBy, block.parentToolCallId)
} else {
- if (group && group.agentName === 'mothership') {
- group.items.push({ type: 'tool', data: tool })
- } else {
- if (group) {
- pushGroup(group)
- group = null
- }
- group = {
- type: 'agent_group',
- id: `agent-mothership-${i}`,
- agentName: 'mothership',
- agentLabel: 'Mothership',
- items: [{ type: 'tool', data: tool }],
- isDelegating: false,
- isOpen: false,
- }
- }
+ const { group: g } = ensureGroup('mothership', undefined)
+ g.items.push({ type: 'tool', data: tool })
}
continue
}
if (block.type === 'options') {
if (!block.options?.length) continue
- if (group) {
- pushGroup(group)
- group = null
- }
+ flushLanes()
segments.push({ type: 'options', items: block.options })
continue
}
if (block.type === 'subagent_end') {
- if (group) {
- pushGroup(group)
- group = null
+ if (block.parentToolCallId) {
+ for (const [key, g] of groupsByKey) {
+ if (key.endsWith(`:${block.parentToolCallId}`)) {
+ g.isOpen = false
+ g.isDelegating = false
+ }
+ }
+ if (activeGroupKey?.endsWith(`:${block.parentToolCallId}`)) {
+ activeGroupKey = null
+ }
+ } else {
+ for (const [key, g] of groupsByKey) {
+ if (key.endsWith(':legacy') && g.agentName !== 'mothership') {
+ g.isOpen = false
+ g.isDelegating = false
+ }
+ }
+ if (activeGroupKey?.endsWith(':legacy')) {
+ activeGroupKey = null
+ }
+ }
continue
}
if (block.type === 'stopped') {
- if (group) {
- pushGroup(group)
- group = null
- }
+ flushLanes()
segments.push({ type: 'stopped' })
}
}
- if (group) pushGroup(group, true)
- return segments
+ const visibleSegments = segments.filter(
+ (segment) =>
+ segment.type !== 'agent_group' ||
+ segment.items.length > 0 ||
+ segment.isDelegating ||
+ segment.isOpen
+ )
+ return visibleSegments
}
/**
@@ -428,12 +449,6 @@ export function MessageContent({
isStreaming &&
!hasTrailingContent &&
(lastSegment.type === 'thinking' || hasSubagentEnded || allLastGroupToolsDone)
- const lastOpenSubagentGroupId = [...segments]
- .reverse()
- .find(
- (segment): segment is AgentGroupSegment =>
- segment.type === 'agent_group' && segment.agentName !== 'mothership' && segment.isOpen
- )?.id
return (
<div className='space-y-[10px]'>
@@ -488,8 +503,8 @@ export function MessageContent({
items={segment.items}
isDelegating={segment.isDelegating}
isStreaming={isStreaming}
- autoCollapse={allToolsDone && hasFollowingText}
- defaultExpanded={segment.id === lastOpenSubagentGroupId}
+ autoCollapse={!segment.isOpen && allToolsDone && hasFollowingText}
+ defaultExpanded={segment.isOpen}
/>
</div>
)

View File

@@ -1,6 +1,6 @@
'use client'
- import { lazy, memo, Suspense, useCallback, useEffect, useMemo, useState } from 'react'
+ import { lazy, memo, Suspense, useEffect, useMemo, useState } from 'react'
import { createLogger } from '@sim/logger'
import { formatDuration } from '@sim/utils/formatting'
import { Square } from 'lucide-react'
@@ -128,7 +128,6 @@ export const ResourceContent = memo(function ResourceContent({
}
}, [workspaceId, streamFileName])
- const streamingFileMode: 'append' | 'replace' = 'replace'
const disableStreamingAutoScroll = previewSession?.operation === 'patch'
const rawPreviewText = previewSession?.previewText
const streamingPreviewText =
@@ -144,10 +143,9 @@ export const ResourceContent = memo(function ResourceContent({
canEdit={false}
previewMode={previewMode ?? 'preview'}
streamingContent={streamingPreviewText}
- streamingMode={streamingFileMode}
+ streamingMode='replace'
disableStreamingAutoScroll={disableStreamingAutoScroll}
previewContextKey={previewContextKey}
- useCodeRendererForCodeFiles
/>
) : (
<div className='flex h-full items-center justify-center'>
@@ -172,7 +170,7 @@ export const ResourceContent = memo(function ResourceContent({
streamingContent={
previewSession?.fileId === resource.id ? streamingPreviewText : undefined
}
- streamingMode={streamingFileMode}
+ streamingMode='replace'
disableStreamingAutoScroll={disableStreamingAutoScroll}
previewContextKey={previewContextKey}
/>
@@ -257,7 +255,7 @@ export function EmbeddedWorkflowActions({ workspaceId, workflowId }: EmbeddedWor
const isRunButtonDisabled =
!isExecuting && !effectivePermissions.canRead && !effectivePermissions.isLoading
- const handleRun = useCallback(async () => {
+ const handleRun = async () => {
setActiveWorkflow(workflowId)
if (isExecuting) {
@@ -274,19 +272,11 @@ export function EmbeddedWorkflowActions({ workspaceId, workflowId }: EmbeddedWor
}
await handleRunWorkflow()
- }, [
- handleCancelExecution,
- handleRunWorkflow,
- isExecuting,
- navigateToSettings,
- setActiveWorkflow,
- usageExceeded,
- workflowId,
- ])
+ }
- const handleOpenWorkflow = useCallback(() => {
+ const handleOpenWorkflow = () => {
window.open(`/workspace/${workspaceId}/w/${workflowId}`, '_blank')
- }, [workspaceId, workflowId])
+ }
return (
<>
@@ -340,9 +330,9 @@ export function EmbeddedKnowledgeBaseActions({
}: EmbeddedKnowledgeBaseActionsProps) {
const router = useRouter()
- const handleOpenKnowledgeBase = useCallback(() => {
+ const handleOpenKnowledgeBase = () => {
router.push(`/workspace/${workspaceId}/knowledge/${knowledgeBaseId}`)
- }, [router, workspaceId, knowledgeBaseId])
+ }
return (
<Tooltip.Root>
@@ -375,18 +365,18 @@ function EmbeddedFileActions({ workspaceId, fileId }: EmbeddedFileActionsProps)
const { data: files = [] } = useWorkspaceFiles(workspaceId)
const file = useMemo(() => files.find((f) => f.id === fileId), [files, fileId])
- const handleDownload = useCallback(async () => {
+ const handleDownload = async () => {
if (!file) return
try {
await downloadWorkspaceFile(file)
} catch (err) {
fileLogger.error('Failed to download file:', err)
}
- }, [file])
+ }
- const handleOpenInFiles = useCallback(() => {
+ const handleOpenInFiles = () => {
router.push(`/workspace/${workspaceId}/files/${encodeURIComponent(fileId)}`)
- }, [router, workspaceId, fileId])
+ }
return (
<>
@@ -432,10 +422,7 @@ interface EmbeddedWorkflowProps {
function EmbeddedWorkflow({ workspaceId, workflowId }: EmbeddedWorkflowProps) {
const { data: workflowList, isPending: isWorkflowsPending } = useWorkflows(workspaceId)
- const workflowExists = useMemo(
- () => (workflowList ?? []).some((w) => w.id === workflowId),
- [workflowList, workflowId]
- )
+ const workflowExists = (workflowList ?? []).some((w) => w.id === workflowId)
const hasLoadError = useWorkflowRegistry(
(state) => state.hydration.phase === 'error' && state.hydration.workflowId === workflowId
)
@@ -514,7 +501,6 @@ function EmbeddedFile({
streamingContent={streamingContent}
disableStreamingAutoScroll={disableStreamingAutoScroll}
previewContextKey={previewContextKey}
- useCodeRendererForCodeFiles
/>
</div>
)
@@ -529,15 +515,8 @@ function EmbeddedFolder({ workspaceId, folderId }: EmbeddedFolderProps) {
const { data: folderList, isPending: isFoldersPending } = useFolders(workspaceId)
const { data: workflowList = [] } = useWorkflows(workspaceId)
- const folder = useMemo(
- () => (folderList ?? []).find((f) => f.id === folderId),
- [folderList, folderId]
- )
- const folderWorkflows = useMemo(
- () => workflowList.filter((w) => w.folderId === folderId),
- [workflowList, folderId]
- )
+ const folder = (folderList ?? []).find((f) => f.id === folderId)
+ const folderWorkflows = workflowList.filter((w) => w.folderId === folderId)
if (isFoldersPending) return LOADING_SKELETON
@@ -604,20 +583,14 @@ function EmbeddedLog({ logId }: EmbeddedLogProps) {
return filterHiddenOutputKeys(executionData.finalOutput) as Record<string, unknown>
}, [log?.executionData])
- const isWorkflowExecutionLog = useMemo(() => {
- if (!log) return false
- return (
- (log.trigger === 'manual' && !!log.duration) ||
- (log.executionData?.enhanced && log.executionData?.traceSpans)
- )
- }, [log])
+ const isWorkflowExecutionLog =
+ !!log &&
+ ((log.trigger === 'manual' && !!log.duration) ||
+ !!(log.executionData?.enhanced && log.executionData?.traceSpans))
const hasCostInfo = isWorkflowExecutionLog && log?.cost
- const formattedTimestamp = useMemo(
- () => (log ? formatDate(log.createdAt) : null),
- [log?.createdAt]
- )
+ const formattedTimestamp = log ? formatDate(log.createdAt) : null
if (isLoading) return LOADING_SKELETON
@@ -874,10 +847,10 @@ export function EmbeddedLogActions({ workspaceId, logId }: EmbeddedLogActionsPro
const router = useRouter()
const { data: log } = useLogDetail(logId)
- const handleOpenInLogs = useCallback(() => {
+ const handleOpenInLogs = () => {
const param = log?.executionId ? `?executionId=${log.executionId}` : ''
router.push(`/workspace/${workspaceId}/logs${param}`)
- }, [router, workspaceId, log?.executionId])
+ }
return (
<Tooltip.Root>

View File

@@ -131,6 +131,7 @@ import type {
MothershipResource,
MothershipResourceType,
QueuedMessage,
+ ToolCallInfo,
} from '../types'
import { ToolCallStatus } from '../types'
@@ -701,7 +702,9 @@ function parseStreamBatchResponse(value: unknown): StreamBatchResponse {
function toRawPersistedContentBlock(block: ContentBlock): Record<string, unknown> | null {
const persisted = toRawPersistedContentBlockBody(block)
- return persisted ? withBlockTiming(persisted, block) : null
+ if (!persisted) return null
+ if (block.parentToolCallId) persisted.parentToolCallId = block.parentToolCallId
+ return withBlockTiming(persisted, block)
}
function toRawPersistedContentBlockBody(block: ContentBlock): Record<string, unknown> | null {
@@ -1215,7 +1218,7 @@ export function useChat(
reader: ReadableStreamDefaultReader<Uint8Array>,
assistantId: string,
expectedGen?: number,
- options?: { preserveExistingState?: boolean }
+ options?: { preserveExistingState?: boolean; suppressWorkflowToolStarts?: boolean }
) => Promise<{ sawStreamError: boolean; sawComplete: boolean }>
>(async () => ({ sawStreamError: false, sawComplete: false }))
const attachToExistingStreamRef = useRef<
@@ -1457,6 +1460,9 @@ export function useChat(
if (handledClientWorkflowToolIdsRef.current.has(toolCallId)) {
return
}
+ if (recoveringClientWorkflowToolIdsRef.current.has(toolCallId)) {
+ return
+ }
handledClientWorkflowToolIdsRef.current.add(toolCallId)
ensureWorkflowToolResource(toolArgs)
@@ -1467,41 +1473,41 @@ export function useChat(
const recoverPendingClientWorkflowTools = useCallback(
async (nextMessages: ChatMessage[]) => {
+ const pending: ToolCallInfo[] = []
for (const message of nextMessages) {
for (const block of message.contentBlocks ?? []) {
const toolCall = block.toolCall
- if (!toolCall || !isWorkflowToolName(toolCall.name)) {
- continue
- }
- if (toolCall.status !== 'executing') {
- continue
- }
+ if (!toolCall || !isWorkflowToolName(toolCall.name)) continue
+ if (toolCall.status !== 'executing') continue
if (
handledClientWorkflowToolIdsRef.current.has(toolCall.id) ||
recoveringClientWorkflowToolIdsRef.current.has(toolCall.id)
) {
continue
}
recoveringClientWorkflowToolIdsRef.current.add(toolCall.id)
+ pending.push(toolCall)
}
}
- try {
- const toolArgs = toolCall.params ?? {}
- const targetWorkflowId = ensureWorkflowToolResource(toolArgs)
+ for (const toolCall of pending) {
+ try {
+ const toolArgs = toolCall.params ?? {}
+ const targetWorkflowId = ensureWorkflowToolResource(toolArgs)
- if (targetWorkflowId) {
- const rebound = await bindRunToolToExecution(toolCall.id, targetWorkflowId)
- if (rebound) {
- handledClientWorkflowToolIdsRef.current.add(toolCall.id)
- continue
- }
+ if (targetWorkflowId) {
+ const rebound = await bindRunToolToExecution(toolCall.id, targetWorkflowId)
+ if (rebound) {
+ handledClientWorkflowToolIdsRef.current.add(toolCall.id)
+ continue
+ }
+ }
- startClientWorkflowTool(toolCall.id, toolCall.name, toolArgs)
- } finally {
- recoveringClientWorkflowToolIdsRef.current.delete(toolCall.id)
- }
+ recoveringClientWorkflowToolIdsRef.current.delete(toolCall.id)
+ startClientWorkflowTool(toolCall.id, toolCall.name, toolArgs)
+ } finally {
+ recoveringClientWorkflowToolIdsRef.current.delete(toolCall.id)
+ }
}
},
@@ -1701,7 +1707,7 @@ export function useChat(
reader: ReadableStreamDefaultReader<Uint8Array>,
assistantId: string,
expectedGen?: number,
- options?: { preserveExistingState?: boolean }
+ options?: { preserveExistingState?: boolean; suppressWorkflowToolStarts?: boolean }
) => {
const decoder = new TextDecoder()
streamReaderRef.current = reader
@@ -1731,6 +1737,7 @@ export function useChat(
for (let i = blocks.length - 1; i >= 0; i--) {
if (blocks[i].type === 'subagent' && blocks[i].content) {
activeSubagent = blocks[i].content
+ activeSubagentParentToolCallId = blocks[i].parentToolCallId
break
}
if (blocks[i].type === 'subagent_end') {
@@ -1760,23 +1767,45 @@ export function useChat(
if (block && block.endedAt === undefined) block.endedAt = toEventMs(ts)
}
- const ensureTextBlock = (subagentName: string | undefined, ts?: string): ContentBlock => {
+ const ensureTextBlock = (
+ subagentName: string | undefined,
+ parentToolCallId: string | undefined,
+ ts?: string
+ ): ContentBlock => {
const last = blocks[blocks.length - 1]
- if (last?.type === 'text' && last.subagent === subagentName) return last
+ if (
+ last?.type === 'text' &&
+ last.subagent === subagentName &&
+ last.parentToolCallId === parentToolCallId
+ ) {
+ return last
+ }
stampBlockEnd(last, ts)
const b: ContentBlock = { type: 'text', content: '', timestamp: toEventMs(ts) }
if (subagentName) b.subagent = subagentName
+ if (parentToolCallId) b.parentToolCallId = parentToolCallId
blocks.push(b)
return b
}
- const ensureThinkingBlock = (subagentName: string | undefined, ts?: string): ContentBlock => {
+ const ensureThinkingBlock = (
+ subagentName: string | undefined,
+ parentToolCallId: string | undefined,
+ ts?: string
+ ): ContentBlock => {
const targetType = subagentName ? 'subagent_thinking' : 'thinking'
const last = blocks[blocks.length - 1]
- if (last?.type === targetType && last.subagent === subagentName) return last
+ if (
+ last?.type === targetType &&
+ last.subagent === subagentName &&
+ last.parentToolCallId === parentToolCallId
+ ) {
+ return last
+ }
stampBlockEnd(last, ts)
const b: ContentBlock = { type: targetType, content: '', timestamp: toEventMs(ts) }
if (subagentName) b.subagent = subagentName
+ if (parentToolCallId) b.parentToolCallId = parentToolCallId
blocks.push(b)
return b
}
@@ -1793,9 +1822,27 @@ export function useChat(
return activeSubagent
}
- const appendInlineErrorTag = (tag: string, subagentName?: string, ts?: string) => {
+ const resolveParentForSubagentBlock = (
+ subagent: string | undefined,
+ scopedParent: string | undefined
+ ): string | undefined => {
+ if (!subagent) return undefined
+ if (scopedParent) return scopedParent
+ if (activeSubagent === subagent) return activeSubagentParentToolCallId
+ for (const [parent, name] of subagentByParentToolCallId) {
+ if (name === subagent) return parent
+ }
+ return undefined
+ }
+ const appendInlineErrorTag = (
+ tag: string,
+ subagentName?: string,
+ parentToolCallId?: string,
+ ts?: string
+ ) => {
if (runningText.includes(tag)) return
- const tb = ensureTextBlock(subagentName, ts)
+ const tb = ensureTextBlock(subagentName, parentToolCallId, ts)
const prefix = runningText.length > 0 && !runningText.endsWith('\n') ? '\n' : ''
tb.content = `${tb.content ?? ''}${prefix}${tag}`
runningText += `${prefix}${tag}`
@@ -2008,7 +2055,11 @@ export function useChat(
if (chunk) {
const eventTs = typeof parsed.ts === 'string' ? parsed.ts : undefined
if (parsed.payload.channel === MothershipStreamV1TextChannel.thinking) {
- const tb = ensureThinkingBlock(scopedSubagent, eventTs)
+ const scopedParentForBlock = resolveParentForSubagentBlock(
+ scopedSubagent,
+ scopedParentToolCallId
+ )
+ const tb = ensureThinkingBlock(scopedSubagent, scopedParentForBlock, eventTs)
tb.content = (tb.content ?? '') + chunk
flushText()
break
@@ -2019,7 +2070,11 @@ export function useChat(
lastContentSource !== contentSource &&
runningText.length > 0 &&
!runningText.endsWith('\n')
- const tb = ensureTextBlock(scopedSubagent, eventTs)
+ const scopedParentForBlock = resolveParentForSubagentBlock(
+ scopedSubagent,
+ scopedParentToolCallId
+ )
+ const tb = ensureTextBlock(scopedSubagent, scopedParentForBlock, eventTs)
const normalizedChunk = needsBoundaryNewline ? `\n${chunk}` : chunk
tb.content = (tb.content ?? '') + normalizedChunk
runningText += normalizedChunk
@@ -2355,9 +2410,17 @@ export function useChat(
}
}
- if (!toolMap.has(id)) {
+ const existingToolCall = toolMap.has(id)
+ ? blocks[toolMap.get(id)!]?.toolCall
+ : undefined
+ const isNewToolCall = !existingToolCall
+ if (isNewToolCall) {
stampBlockEnd(blocks[blocks.length - 1])
toolMap.set(id, blocks.length)
+ const parentToolCallIdForBlock = resolveParentForSubagentBlock(
+ scopedSubagent,
+ scopedParentToolCallId
+ )
blocks.push({
type: 'tool_call',
toolCall: {
@@ -2368,6 +2431,9 @@ export function useChat(
params: args,
calledBy: scopedSubagent,
},
+ ...(parentToolCallIdForBlock
+ ? { parentToolCallId: parentToolCallIdForBlock }
+ : {}),
timestamp: Date.now(),
})
if (name === ReadTool.id || isResourceToolName(name)) {
@@ -2385,7 +2451,14 @@ export function useChat(
flush()
if (isWorkflowToolName(name) && !isPartial) {
- startClientWorkflowTool(id, name, args ?? {})
+ const shouldStartWorkflowTool =
+ !options?.suppressWorkflowToolStarts &&
+ (isNewToolCall ||
+ (existingToolCall?.status === ToolCallStatus.executing &&
+ !existingToolCall.result))
+ if (shouldStartWorkflowTool) {
+ startClientWorkflowTool(id, name, args ?? {})
+ }
}
break
}
@@ -2488,9 +2561,13 @@ export function useChat(
break
}
const spanData = asPayloadRecord(payload.data)
- const parentToolCallId =
- scopedParentToolCallId ??
- (typeof spanData?.tool_call_id === 'string' ? spanData.tool_call_id : undefined)
+ const parentToolCallIdFromData =
+ typeof spanData?.tool_call_id === 'string'
+ ? spanData.tool_call_id
+ : typeof spanData?.toolCallId === 'string'
+ ? spanData.toolCallId
+ : undefined
+ const parentToolCallId = scopedParentToolCallId ?? parentToolCallIdFromData
const isPendingPause = spanData?.pending === true
const name = typeof payload.agent === 'string' ? payload.agent : scopedAgentId
if (payload.event === MothershipStreamV1SpanLifecycleEvent.start && name) {
@@ -2505,7 +2582,12 @@ export function useChat(
activeSubagentParentToolCallId = parentToolCallId
if (!isSameActiveSubagent) {
stampBlockEnd(blocks[blocks.length - 1])
- blocks.push({ type: 'subagent', content: name, timestamp: Date.now() })
+ blocks.push({
+ type: 'subagent',
+ content: name,
+ ...(parentToolCallId ? { parentToolCallId } : {}),
+ timestamp: Date.now(),
+ })
}
if (name === FILE_SUBAGENT_ID && !isSameActiveSubagent) {
applyPreviewSessionUpdate({
@@ -2549,14 +2631,23 @@ export function useChat(
if (name) {
for (let i = blocks.length - 1; i >= 0; i--) {
const b = blocks[i]
- if (b.type === 'subagent' && b.content === name && b.endedAt === undefined) {
+ if (
+ b.type === 'subagent' &&
+ b.content === name &&
+ b.endedAt === undefined &&
+ (!parentToolCallId || b.parentToolCallId === parentToolCallId)
+ ) {
b.endedAt = endNow
break
}
}
}
stampBlockEnd(blocks[blocks.length - 1])
- blocks.push({ type: 'subagent_end', timestamp: endNow })
+ blocks.push({
+ type: 'subagent_end',
+ ...(parentToolCallId ? { parentToolCallId } : {}),
+ timestamp: endNow,
+ })
flush()
}
break
@@ -2567,6 +2658,7 @@ export function useChat(
appendInlineErrorTag(
buildInlineErrorTag(parsed.payload),
scopedSubagent,
+ resolveParentForSubagentBlock(scopedSubagent, scopedParentToolCallId),
typeof parsed.ts === 'string' ? parsed.ts : undefined
)
break
@@ -2671,6 +2763,7 @@ export function useChat(
let latestCursor = afterCursor
let seedEvents = opts.initialBatch?.events ?? []
let streamStatus = opts.initialBatch?.status ?? 'unknown'
+ let suppressSeedWorkflowStarts = seedEvents.length > 0
const isStaleReconnect = () =>
streamGenRef.current !== expectedGen || abortControllerRef.current?.signal.aborted === true
@@ -2689,11 +2782,15 @@ export function useChat(
buildReplayStream(seedEvents).getReader(),
assistantId,
expectedGen,
- { preserveExistingState: true }
+ {
+ preserveExistingState: true,
+ suppressWorkflowToolStarts: suppressSeedWorkflowStarts,
+ }
)
latestCursor = String(seedEvents[seedEvents.length - 1]?.eventId ?? latestCursor)
lastCursorRef.current = latestCursor
seedEvents = []
+ suppressSeedWorkflowStarts = false
if (replayResult.sawStreamError) {
return { error: true, aborted: false }
@@ -2998,6 +3095,7 @@ export function useChat(
...(display ? { display } : {}),
calledBy: block.toolCall.calledBy,
},
+ ...(block.parentToolCallId ? { parentToolCallId: block.parentToolCallId } : {}),
...timing,
}
}
@@ -3005,6 +3103,7 @@ export function useChat(
type: block.type,
content: block.content,
...(block.subagent ? { lane: 'subagent' } : {}),
+ ...(block.parentToolCallId ? { parentToolCallId: block.parentToolCallId } : {}),
...timing,
}
})

View File

@@ -133,6 +133,7 @@ export interface ContentBlock {
options?: OptionItem[]
timestamp?: number
endedAt?: number
+ parentToolCallId?: string
}
export interface ChatMessageAttachment {

View File

@@ -604,6 +604,10 @@ export function useKnowledgeUpload(options: UseKnowledgeUploadOptions = {}) {
const startTime = getHighResTime()
try {
+ if (!options.workspaceId) {
+ throw new Error('workspaceId is required for multipart upload')
+ }
const initiateResponse = await fetch('/api/files/multipart?action=initiate', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
@@ -611,6 +615,7 @@ export function useKnowledgeUpload(options: UseKnowledgeUploadOptions = {}) {
fileName: file.name,
contentType: getFileContentType(file),
fileSize: file.size,
+ workspaceId: options.workspaceId,
}),
})
@@ -618,7 +623,7 @@ export function useKnowledgeUpload(options: UseKnowledgeUploadOptions = {}) {
throw new Error(`Failed to initiate multipart upload: ${initiateResponse.statusText}`)
}
- const { uploadId, key } = await initiateResponse.json()
+ const { uploadId, key, uploadToken } = await initiateResponse.json()
logger.info(`Initiated multipart upload with ID: ${uploadId}`)
const chunkSize = UPLOAD_CONFIG.CHUNK_SIZE
@@ -629,8 +634,7 @@ export function useKnowledgeUpload(options: UseKnowledgeUploadOptions = {}) {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
- uploadId,
- key,
+ uploadToken,
partNumbers,
}),
})
@@ -639,7 +643,7 @@ export function useKnowledgeUpload(options: UseKnowledgeUploadOptions = {}) {
await fetch('/api/files/multipart?action=abort', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
- body: JSON.stringify({ uploadId, key }),
+ body: JSON.stringify({ uploadToken }),
})
throw new Error(`Failed to get part URLs: ${partUrlsResponse.statusText}`)
}
@@ -723,8 +727,7 @@ export function useKnowledgeUpload(options: UseKnowledgeUploadOptions = {}) {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
- uploadId,
- key,
+ uploadToken,
parts: uploadedParts,
}),
})
@@ -791,7 +794,7 @@ export function useKnowledgeUpload(options: UseKnowledgeUploadOptions = {}) {
}
throw new DirectUploadError(
- `Failed to upload ${file.name}: ${errorData?.error || 'Unknown error'}`,
+ `Failed to upload ${file.name}: ${errorData?.message || errorData?.error || 'Unknown error'}`,
errorData
)
}

View File

@@ -78,8 +78,10 @@ export function useProfilePictureUpload({
})
if (!response.ok) {
- const errorData = await response.json().catch(() => ({ error: response.statusText }))
- throw new Error(errorData.error || `Failed to upload file: ${response.status}`)
+ const errorData = await response.json().catch(() => ({ message: response.statusText }))
+ throw new Error(
+ errorData.message || errorData.error || `Failed to upload file: ${response.status}`
+ )
}
const data = await response.json()

View File

@@ -2,7 +2,9 @@
import { useCallback, useEffect, useRef, useState } from 'react'
import { createLogger } from '@sim/logger'
+ import { toError } from '@sim/utils/errors'
import { generateId } from '@sim/utils/id'
+ import { toast } from '@/components/emcn'
import { resolveFileType } from '@/lib/uploads/utils/file-utils'
const logger = createLogger('useFileAttachments')
@@ -147,9 +149,13 @@ export function useFileAttachments(props: UseFileAttachmentsProps) {
if (!uploadResponse.ok) {
const errorData = await uploadResponse.json().catch(() => ({
- error: `Upload failed: ${uploadResponse.status}`,
+ message: `Upload failed: ${uploadResponse.status}`,
}))
- throw new Error(errorData.error || `Failed to upload file: ${uploadResponse.status}`)
+ throw new Error(
+ errorData.message ||
+ errorData.error ||
+ `Failed to upload file: ${uploadResponse.status}`
+ )
}
const uploadData = await uploadResponse.json()
@@ -172,6 +178,9 @@ export function useFileAttachments(props: UseFileAttachmentsProps) {
)
} catch (error) {
logger.error(`File upload failed: ${error}`)
+ toast.error(`Couldn't upload "${file.name}"`, {
+ description: toError(error).message,
+ })
if (placeholder.previewUrl) URL.revokeObjectURL(placeholder.previewUrl)
setAttachedFiles((prev) => prev.filter((f) => f.id !== placeholder.id))
}

View File

@@ -328,7 +328,8 @@ export function FileUpload({
const data = await response.json()
if (!response.ok) {
- const errorMessage = data.error || `Failed to upload file: ${response.status}`
+ const errorMessage =
+ data.message || data.error || `Failed to upload file: ${response.status}`
uploadErrors.push(`${file.name}: ${errorMessage}`)
setUploadError(errorMessage)

View File

@@ -34,7 +34,7 @@ import { hasExecutionResult } from '@/executor/utils/errors'
import { coerceValue } from '@/executor/utils/start-block'
import { subscriptionKeys } from '@/hooks/queries/subscription'
import { getWorkflows } from '@/hooks/queries/utils/workflow-cache'
- import { useExecutionStream } from '@/hooks/use-execution-stream'
+ import { isExecutionStreamHttpError, useExecutionStream } from '@/hooks/use-execution-stream'
import { WorkflowValidationError } from '@/serializer'
import { useCurrentWorkflowExecution, useExecutionStore } from '@/stores/execution'
import { useNotificationStore } from '@/stores/notifications'
@@ -60,6 +60,13 @@ const logger = createLogger('useWorkflowExecution')
*/
const activeReconnections = new Set<string>()
+ function isReconnectTerminal(error: unknown): boolean {
+ return (
+ isExecutionStreamHttpError(error) &&
+ (error.httpStatus === 404 || error.httpStatus === 403 || error.httpStatus === 401)
+ )
+ }
interface DebugValidationResult {
isValid: boolean
error?: string
@@ -487,8 +494,14 @@ export function useWorkflowExecution() {
logger.error('Unexpected upload response format:', uploadResult)
}
} else {
- const errorText = await response.text()
- const message = `Failed to upload ${fileData.name}: ${response.status} ${errorText}`
+ const cloned = response.clone()
+ const errorData = await response.json().catch(() => null)
+ const reason =
+ errorData?.message ||
+ errorData?.error ||
+ (await cloned.text().catch(() => '')) ||
+ `${response.status}`
+ const message = `Failed to upload ${fileData.name}: ${reason}`
logger.error(message)
if (isUploadErrorCapable(workflowInput)) {
try {
@@ -1283,8 +1296,7 @@ export function useWorkflowExecution() {
} else {
if (!executor) {
try {
- const httpStatus =
- isRecord(error) && typeof error.httpStatus === 'number' ? error.httpStatus : undefined
+ const httpStatus = isExecutionStreamHttpError(error) ? error.httpStatus : undefined
const storeAddConsole = useTerminalConsoleStore.getState().addConsole
if (httpStatus && activeWorkflowId) {
@@ -1867,8 +1879,6 @@ export function useWorkflowExecution() {
activeReconnections.add(reconnectWorkflowId)
executionStream.cancel(reconnectWorkflowId)
- setCurrentExecutionId(reconnectWorkflowId, executionId)
- setIsExecuting(reconnectWorkflowId, true)
const workflowEdges = useWorkflowStore.getState().edges
const activeBlocksSet = new Set<string>()
@@ -1891,13 +1901,47 @@ export function useWorkflowExecution() {
includeStartConsoleEntry: true,
})
- clearExecutionEntries(executionId)
+ const capturedExecutionId = executionId
const MAX_ATTEMPTS = 5
const BASE_DELAY_MS = 1000
const MAX_DELAY_MS = 15000
+ let activated = false
+ const ensureActivated = () => {
+ if (activated || cleanupRan) return
+ activated = true
+ setCurrentExecutionId(reconnectWorkflowId, capturedExecutionId)
+ setIsExecuting(reconnectWorkflowId, true)
+ clearExecutionEntries(capturedExecutionId)
+ }
+ const wrapHandler =
+ <T>(handler: (data: T) => void) =>
+ (data: T) => {
+ ensureActivated()
+ handler(data)
+ }
+ const cleanupFailedReconnect = () => {
+ const currentId = useExecutionStore.getState().getCurrentExecutionId(reconnectWorkflowId)
+ if (currentId && currentId !== capturedExecutionId) return
+ const hasRunningEntry = useTerminalConsoleStore
+ .getState()
+ .getWorkflowEntries(reconnectWorkflowId)
+ .some((entry) => entry.isRunning && entry.executionId === capturedExecutionId)
+ if (activated || hasRunningEntry) {
+ cancelRunningEntries(reconnectWorkflowId)
+ }
+ if (currentId === capturedExecutionId) {
+ setCurrentExecutionId(reconnectWorkflowId, null)
+ setIsExecuting(reconnectWorkflowId, false)
+ setActiveBlocks(reconnectWorkflowId, new Set())
+ }
+ }
const attemptReconnect = async (attempt: number): Promise<void> => {
if (cleanupRan || reconnectionComplete) return
@@ -1914,38 +1958,39 @@ export function useWorkflowExecution() {
fromEventId,
callbacks: {
onEventId: (eid) => {
+ ensureActivated()
fromEventId = eid
},
- onBlockStarted: handlers.onBlockStarted,
- onBlockCompleted: handlers.onBlockCompleted,
- onBlockError: handlers.onBlockError,
- onBlockChildWorkflowStarted: handlers.onBlockChildWorkflowStarted,
+ onBlockStarted: wrapHandler(handlers.onBlockStarted),
+ onBlockCompleted: wrapHandler(handlers.onBlockCompleted),
+ onBlockError: wrapHandler(handlers.onBlockError),
+ onBlockChildWorkflowStarted: wrapHandler(handlers.onBlockChildWorkflowStarted),
onExecutionCompleted: () => {
+ reconnectionComplete = true
+ activeReconnections.delete(reconnectWorkflowId)
+ if (!activated) {
+ clearExecutionPointer(reconnectWorkflowId)
+ return
+ }
const currentId = useExecutionStore
.getState()
.getCurrentExecutionId(reconnectWorkflowId)
- if (currentId !== capturedExecutionId) {
- reconnectionComplete = true
- activeReconnections.delete(reconnectWorkflowId)
- return
- }
- reconnectionComplete = true
- activeReconnections.delete(reconnectWorkflowId)
+ if (currentId !== capturedExecutionId) return
setCurrentExecutionId(reconnectWorkflowId, null)
setIsExecuting(reconnectWorkflowId, false)
setActiveBlocks(reconnectWorkflowId, new Set())
},
onExecutionError: (data) => {
+reconnectionComplete = true
+activeReconnections.delete(reconnectWorkflowId)
+if (!activated) {
+clearExecutionPointer(reconnectWorkflowId)
+return
+}
const currentId = useExecutionStore
.getState()
.getCurrentExecutionId(reconnectWorkflowId)
-if (currentId !== capturedExecutionId) {
-reconnectionComplete = true
-activeReconnections.delete(reconnectWorkflowId)
-return
-}
-reconnectionComplete = true
-activeReconnections.delete(reconnectWorkflowId)
+if (currentId !== capturedExecutionId) return
setCurrentExecutionId(reconnectWorkflowId, null)
setIsExecuting(reconnectWorkflowId, false)
setActiveBlocks(reconnectWorkflowId, new Set())
@@ -1957,16 +2002,16 @@ export function useWorkflowExecution() {
})
},
onExecutionCancelled: () => {
+reconnectionComplete = true
+activeReconnections.delete(reconnectWorkflowId)
+if (!activated) {
+clearExecutionPointer(reconnectWorkflowId)
+return
+}
const currentId = useExecutionStore
.getState()
.getCurrentExecutionId(reconnectWorkflowId)
-if (currentId !== capturedExecutionId) {
-reconnectionComplete = true
-activeReconnections.delete(reconnectWorkflowId)
-return
-}
-reconnectionComplete = true
-activeReconnections.delete(reconnectWorkflowId)
+if (currentId !== capturedExecutionId) return
setCurrentExecutionId(reconnectWorkflowId, null)
setIsExecuting(reconnectWorkflowId, false)
setActiveBlocks(reconnectWorkflowId, new Set())
@@ -1978,6 +2023,17 @@ export function useWorkflowExecution() {
},
})
} catch (error) {
if (isReconnectTerminal(error)) {
logger.info('Reconnection skipped; run buffer no longer exists', {
executionId: capturedExecutionId,
})
reconnectionComplete = true
activeReconnections.delete(reconnectWorkflowId)
clearExecutionPointer(reconnectWorkflowId)
cleanupFailedReconnect()
return
}
logger.warn('Execution reconnection attempt failed', {
executionId: capturedExecutionId,
attempt,
@@ -1986,17 +2042,27 @@ export function useWorkflowExecution() {
if (!cleanupRan && !reconnectionComplete && attempt < MAX_ATTEMPTS) {
return attemptReconnect(attempt + 1)
}
+if (!cleanupRan && !reconnectionComplete) {
+reconnectionComplete = true
+activeReconnections.delete(reconnectWorkflowId)
+cleanupFailedReconnect()
+return
+}
}
if (!reconnectionComplete && !cleanupRan) {
reconnectionComplete = true
activeReconnections.delete(reconnectWorkflowId)
-const currentId = useExecutionStore.getState().getCurrentExecutionId(reconnectWorkflowId)
-if (currentId === capturedExecutionId) {
-cancelRunningEntries(reconnectWorkflowId)
-setCurrentExecutionId(reconnectWorkflowId, null)
-setIsExecuting(reconnectWorkflowId, false)
-setActiveBlocks(reconnectWorkflowId, new Set())
+if (activated) {
+const currentId = useExecutionStore
+.getState()
+.getCurrentExecutionId(reconnectWorkflowId)
+if (currentId === capturedExecutionId) {
+cancelRunningEntries(reconnectWorkflowId)
+setCurrentExecutionId(reconnectWorkflowId, null)
+setIsExecuting(reconnectWorkflowId, false)
+setActiveBlocks(reconnectWorkflowId, new Set())
+}
}
}
}

@@ -73,8 +73,10 @@ export function useWorkspaceLogoUpload({
})
if (!response.ok) {
-const errorData = await response.json().catch(() => ({ error: response.statusText }))
-throw new Error(errorData.error || `Failed to upload file: ${response.status}`)
+const errorData = await response.json().catch(() => ({ message: response.statusText }))
+throw new Error(
+errorData.message || errorData.error || `Failed to upload file: ${response.status}`
+)
}
const data = await response.json()

@@ -206,10 +206,7 @@ const SidebarTaskItem = memo(function SidebarTaskItem({
e.preventDefault()
onMultiSelectClick(task.id, true)
} else {
-useFolderStore.setState({
-selectedTasks: new Set<string>(),
-lastSelectedTaskId: task.id,
-})
+useFolderStore.getState().selectTaskOnly(task.id)
}
}}
onContextMenu={task.id !== 'new' ? (e) => onContextMenu(e, task.id) : undefined}

@@ -4,169 +4,71 @@ import { createLogger } from '@sim/logger'
import { task } from '@trigger.dev/sdk'
import { and, inArray, lt } from 'drizzle-orm'
import { type CleanupJobPayload, resolveCleanupScope } from '@/lib/billing/cleanup-dispatcher'
import {
batchDeleteByWorkspaceAndTimestamp,
chunkedBatchDelete,
type TableCleanupResult,
} from '@/lib/cleanup/batch-delete'
import { snapshotService } from '@/lib/logs/execution/snapshot/service'
import { isUsingCloudStorage, StorageService } from '@/lib/uploads'
import { deleteFileMetadata } from '@/lib/uploads/server/metadata'
const logger = createLogger('CleanupLogs')
-const BATCH_SIZE = 2000
-const MAX_BATCHES_PER_TIER = 10
-interface TierResults {
-total: number
-deleted: number
-deleteFailed: number
+interface FileDeleteStats {
filesTotal: number
filesDeleted: number
filesDeleteFailed: number
}
-function emptyTierResults(): TierResults {
-return {
-total: 0,
-deleted: 0,
-deleteFailed: 0,
-filesTotal: 0,
-filesDeleted: 0,
-filesDeleteFailed: 0,
-}
-}
-async function deleteExecutionFiles(files: unknown, results: TierResults): Promise<void> {
+async function deleteExecutionFiles(files: unknown, stats: FileDeleteStats): Promise<void> {
if (!isUsingCloudStorage() || !files || !Array.isArray(files)) return
const keys = files.filter((f) => f && typeof f === 'object' && f.key).map((f) => f.key as string)
-results.filesTotal += keys.length
+stats.filesTotal += keys.length
await Promise.all(
keys.map(async (key) => {
try {
await StorageService.deleteFile({ key, context: 'execution' })
await deleteFileMetadata(key)
-results.filesDeleted++
+stats.filesDeleted++
} catch (fileError) {
-results.filesDeleteFailed++
+stats.filesDeleteFailed++
logger.error(`Failed to delete file ${key}:`, { fileError })
}
})
)
}
-async function cleanupTier(
+async function cleanupWorkflowExecutionLogs(
workspaceIds: string[],
retentionDate: Date,
label: string
-): Promise<TierResults> {
-const results = emptyTierResults()
-if (workspaceIds.length === 0) return results
+): Promise<TableCleanupResult & FileDeleteStats> {
+const fileStats: FileDeleteStats = { filesTotal: 0, filesDeleted: 0, filesDeleteFailed: 0 }
-let batchesProcessed = 0
-let hasMore = true
-while (hasMore && batchesProcessed < MAX_BATCHES_PER_TIER) {
-const batch = await db
-.select({
-id: workflowExecutionLogs.id,
-files: workflowExecutionLogs.files,
-})
-.from(workflowExecutionLogs)
-.where(
-and(
-inArray(workflowExecutionLogs.workspaceId, workspaceIds),
-lt(workflowExecutionLogs.startedAt, retentionDate)
+const dbStats = await chunkedBatchDelete({
+tableDef: workflowExecutionLogs,
+workspaceIds,
+tableName: `${label}/workflow_execution_logs`,
+selectChunk: (chunkIds, limit) =>
+db
+.select({ id: workflowExecutionLogs.id, files: workflowExecutionLogs.files })
+.from(workflowExecutionLogs)
+.where(
+and(
+inArray(workflowExecutionLogs.workspaceId, chunkIds),
+lt(workflowExecutionLogs.startedAt, retentionDate)
+)
)
-)
-.limit(BATCH_SIZE)
+.limit(limit),
+onBatch: async (rows) => {
+for (const row of rows) await deleteExecutionFiles(row.files, fileStats)
+},
+})
-results.total += batch.length
-if (batch.length === 0) {
-hasMore = false
-break
-}
-for (const log of batch) {
-await deleteExecutionFiles(log.files, results)
-}
-const logIds = batch.map((log) => log.id)
-try {
-const deleted = await db
-.delete(workflowExecutionLogs)
-.where(inArray(workflowExecutionLogs.id, logIds))
-.returning({ id: workflowExecutionLogs.id })
-results.deleted += deleted.length
-} catch (deleteError) {
-results.deleteFailed += logIds.length
-logger.error(`Batch delete failed for ${label}:`, { deleteError })
-}
-batchesProcessed++
-hasMore = batch.length === BATCH_SIZE
-logger.info(`[${label}] Batch ${batchesProcessed}: ${batch.length} logs processed`)
-}
-return results
-}
-interface JobLogCleanupResults {
-deleted: number
-deleteFailed: number
-}
-async function cleanupJobExecutionLogsTier(
-workspaceIds: string[],
-retentionDate: Date,
-label: string
-): Promise<JobLogCleanupResults> {
-const results: JobLogCleanupResults = { deleted: 0, deleteFailed: 0 }
-if (workspaceIds.length === 0) return results
-let batchesProcessed = 0
-let hasMore = true
-while (hasMore && batchesProcessed < MAX_BATCHES_PER_TIER) {
-const batch = await db
-.select({ id: jobExecutionLogs.id })
-.from(jobExecutionLogs)
-.where(
-and(
-inArray(jobExecutionLogs.workspaceId, workspaceIds),
-lt(jobExecutionLogs.startedAt, retentionDate)
-)
-)
-.limit(BATCH_SIZE)
-if (batch.length === 0) {
-hasMore = false
-break
-}
-const logIds = batch.map((log) => log.id)
-try {
-const deleted = await db
-.delete(jobExecutionLogs)
-.where(inArray(jobExecutionLogs.id, logIds))
-.returning({ id: jobExecutionLogs.id })
-results.deleted += deleted.length
-} catch (deleteError) {
-results.deleteFailed += logIds.length
-logger.error(`Batch delete failed for ${label} (job_execution_logs):`, { deleteError })
-}
-batchesProcessed++
-hasMore = batch.length === BATCH_SIZE
-logger.info(
-`[${label}] job_execution_logs batch ${batchesProcessed}: ${batch.length} rows processed`
-)
-}
-return results
+return { ...dbStats, ...fileStats }
}
export async function runCleanupLogs(payload: CleanupJobPayload): Promise<void> {
@@ -190,15 +92,19 @@ export async function runCleanupLogs(payload: CleanupJobPayload): Promise<void>
`[${label}] Cleaning ${workspaceIds.length} workspaces, cutoff: ${retentionDate.toISOString()}`
)
-const results = await cleanupTier(workspaceIds, retentionDate, label)
+const workflowResults = await cleanupWorkflowExecutionLogs(workspaceIds, retentionDate, label)
logger.info(
-`[${label}] workflow_execution_logs: ${results.deleted} deleted, ${results.deleteFailed} failed out of ${results.total} candidates`
+`[${label}] workflow_execution_logs files: ${workflowResults.filesDeleted}/${workflowResults.filesTotal} deleted, ${workflowResults.filesDeleteFailed} failed`
)
-const jobLogResults = await cleanupJobExecutionLogsTier(workspaceIds, retentionDate, label)
-logger.info(
-`[${label}] job_execution_logs: ${jobLogResults.deleted} deleted, ${jobLogResults.deleteFailed} failed`
-)
+await batchDeleteByWorkspaceAndTimestamp({
+tableDef: jobExecutionLogs,
+workspaceIdCol: jobExecutionLogs.workspaceId,
+timestampCol: jobExecutionLogs.startedAt,
+workspaceIds,
+retentionDate,
+tableName: `${label}/job_execution_logs`,
+})
// Snapshot cleanup runs only on the free job to avoid running it N times for N enterprise workspaces.
if (payload.plan === 'free') {

@@ -18,9 +18,8 @@ import { and, inArray, isNotNull, lt } from 'drizzle-orm'
import { type CleanupJobPayload, resolveCleanupScope } from '@/lib/billing/cleanup-dispatcher'
import {
batchDeleteByWorkspaceAndTimestamp,
-DEFAULT_BATCH_SIZE,
-DEFAULT_MAX_BATCHES_PER_TABLE,
deleteRowsById,
+selectRowsByIdChunks,
} from '@/lib/cleanup/batch-delete'
import { prepareChatCleanup } from '@/lib/cleanup/chat-cleanup'
import type { StorageContext } from '@/lib/uploads'
@@ -44,35 +43,37 @@ async function selectExpiredWorkspaceFiles(
workspaceIds: string[],
retentionDate: Date
): Promise<WorkspaceFileScope> {
const limit = DEFAULT_BATCH_SIZE * DEFAULT_MAX_BATCHES_PER_TABLE
const [legacyRows, multiContextRows] = await Promise.all([
db
.select({ id: workspaceFile.id, key: workspaceFile.key })
.from(workspaceFile)
.where(
and(
inArray(workspaceFile.workspaceId, workspaceIds),
isNotNull(workspaceFile.deletedAt),
lt(workspaceFile.deletedAt, retentionDate)
selectRowsByIdChunks(workspaceIds, (chunkIds, chunkLimit) =>
db
.select({ id: workspaceFile.id, key: workspaceFile.key })
.from(workspaceFile)
.where(
and(
inArray(workspaceFile.workspaceId, chunkIds),
isNotNull(workspaceFile.deletedAt),
lt(workspaceFile.deletedAt, retentionDate)
)
)
)
.limit(limit),
db
.select({
id: workspaceFiles.id,
key: workspaceFiles.key,
context: workspaceFiles.context,
})
.from(workspaceFiles)
.where(
and(
inArray(workspaceFiles.workspaceId, workspaceIds),
isNotNull(workspaceFiles.deletedAt),
lt(workspaceFiles.deletedAt, retentionDate)
.limit(chunkLimit)
),
selectRowsByIdChunks(workspaceIds, (chunkIds, chunkLimit) =>
db
.select({
id: workspaceFiles.id,
key: workspaceFiles.key,
context: workspaceFiles.context,
})
.from(workspaceFiles)
.where(
and(
inArray(workspaceFiles.workspaceId, chunkIds),
isNotNull(workspaceFiles.deletedAt),
lt(workspaceFiles.deletedAt, retentionDate)
)
)
)
.limit(limit),
.limit(chunkLimit)
),
])
return {
@@ -182,17 +183,19 @@ export async function runCleanupSoftDeletes(payload: CleanupJobPayload): Promise
// (chats + S3) AND the DB deletes below — selecting twice could return
// different subsets above the LIMIT cap and orphan or prematurely purge data.
const [doomedWorkflows, fileScope] = await Promise.all([
-db
-.select({ id: workflow.id })
-.from(workflow)
-.where(
-and(
-inArray(workflow.workspaceId, workspaceIds),
-isNotNull(workflow.archivedAt),
-lt(workflow.archivedAt, retentionDate)
+selectRowsByIdChunks(workspaceIds, (chunkIds, chunkLimit) =>
+db
+.select({ id: workflow.id })
+.from(workflow)
+.where(
+and(
+inArray(workflow.workspaceId, chunkIds),
+isNotNull(workflow.archivedAt),
+lt(workflow.archivedAt, retentionDate)
)
)
)
-.limit(DEFAULT_BATCH_SIZE * DEFAULT_MAX_BATCHES_PER_TABLE),
+.limit(chunkLimit)
+),
selectExpiredWorkspaceFiles(workspaceIds, retentionDate),
])
@@ -200,11 +203,13 @@ export async function runCleanupSoftDeletes(payload: CleanupJobPayload): Promise
let chatCleanup: { execute: () => Promise<void> } | null = null
if (doomedWorkflowIds.length > 0) {
-const doomedChats = await db
-.select({ id: copilotChats.id })
-.from(copilotChats)
-.where(inArray(copilotChats.workflowId, doomedWorkflowIds))
-.limit(DEFAULT_BATCH_SIZE * DEFAULT_MAX_BATCHES_PER_TABLE)
+const doomedChats = await selectRowsByIdChunks(doomedWorkflowIds, (chunkIds, chunkLimit) =>
+db
+.select({ id: copilotChats.id })
+.from(copilotChats)
+.where(inArray(copilotChats.workflowId, chunkIds))
+.limit(chunkLimit)
+)
const doomedChatIds = doomedChats.map((c) => c.id)
if (doomedChatIds.length > 0) {

@@ -13,9 +13,8 @@ import { and, inArray, lt, sql } from 'drizzle-orm'
import { type CleanupJobPayload, resolveCleanupScope } from '@/lib/billing/cleanup-dispatcher'
import {
batchDeleteByWorkspaceAndTimestamp,
-DEFAULT_BATCH_SIZE,
-DEFAULT_MAX_BATCHES_PER_TABLE,
deleteRowsById,
+selectRowsByIdChunks,
type TableCleanupResult,
} from '@/lib/cleanup/batch-delete'
import { prepareChatCleanup } from '@/lib/cleanup/chat-cleanup'
@@ -67,13 +66,15 @@ async function cleanupRunChildren(
): Promise<TableCleanupResult[]> {
if (workspaceIds.length === 0) return []
-const runIds = await db
-.select({ id: copilotRuns.id })
-.from(copilotRuns)
-.where(
-and(inArray(copilotRuns.workspaceId, workspaceIds), lt(copilotRuns.updatedAt, retentionDate))
-)
-.limit(DEFAULT_BATCH_SIZE * DEFAULT_MAX_BATCHES_PER_TABLE)
+const runIds = await selectRowsByIdChunks(workspaceIds, (chunkIds, chunkLimit) =>
+db
+.select({ id: copilotRuns.id })
+.from(copilotRuns)
+.where(
+and(inArray(copilotRuns.workspaceId, chunkIds), lt(copilotRuns.updatedAt, retentionDate))
+)
+.limit(chunkLimit)
+)
if (runIds.length === 0) {
return RUN_CHILD_TABLES.map((t) => ({ table: `${label}/${t.name}`, deleted: 0, failed: 0 }))
@@ -107,17 +108,15 @@ export async function runCleanupTasks(payload: CleanupJobPayload): Promise<void>
`[${label}] Processing ${workspaceIds.length} workspaces, cutoff: ${retentionDate.toISOString()}`
)
// Collect chat IDs before deleting so we can clean up the copilot backend after
-const doomedChats = await db
-.select({ id: copilotChats.id })
-.from(copilotChats)
-.where(
-and(
-inArray(copilotChats.workspaceId, workspaceIds),
-lt(copilotChats.updatedAt, retentionDate)
+const doomedChats = await selectRowsByIdChunks(workspaceIds, (chunkIds, chunkLimit) =>
+db
+.select({ id: copilotChats.id })
+.from(copilotChats)
+.where(
+and(inArray(copilotChats.workspaceId, chunkIds), lt(copilotChats.updatedAt, retentionDate))
+)
)
-.limit(DEFAULT_BATCH_SIZE * DEFAULT_MAX_BATCHES_PER_TABLE)
+.limit(chunkLimit)
+)
const doomedChatIds = doomedChats.map((c) => c.id)

@@ -23,6 +23,12 @@ export const BrowserUseBlock: BlockConfig<BrowserUseResponse> = {
placeholder: 'Describe what the browser agent should do...',
required: true,
},
{
id: 'startUrl',
title: 'Start URL',
type: 'short-input',
placeholder: 'https://example.com (optional starting URL)',
},
{
id: 'variables',
title: 'Variables (Secrets)',
@@ -51,22 +57,85 @@ export const BrowserUseBlock: BlockConfig<BrowserUseResponse> = {
{ label: 'Claude 3.7 Sonnet', id: 'claude-3-7-sonnet-20250219' },
{ label: 'Claude Sonnet 4', id: 'claude-sonnet-4-20250514' },
{ label: 'Claude Sonnet 4.5', id: 'claude-sonnet-4-5-20250929' },
{ label: 'Claude Sonnet 4.6', id: 'claude-sonnet-4-6' },
{ label: 'Claude Opus 4.5', id: 'claude-opus-4-5-20251101' },
{ label: 'Llama 4 Maverick', id: 'llama-4-maverick-17b-128e-instruct' },
],
},
{
id: 'save_browser_data',
title: 'Save Browser Data',
type: 'switch',
placeholder: 'Save browser data',
},
{
id: 'profile_id',
title: 'Profile ID',
type: 'short-input',
placeholder: 'Enter browser profile ID (optional)',
},
{
id: 'maxSteps',
title: 'Max Steps',
type: 'short-input',
placeholder: '100',
mode: 'advanced',
},
{
id: 'allowedDomains',
title: 'Allowed Domains',
type: 'short-input',
placeholder: 'example.com, docs.example.com',
mode: 'advanced',
},
{
id: 'vision',
title: 'Vision',
type: 'dropdown',
options: [
{ label: 'Auto (default)', id: 'auto' },
{ label: 'Enabled', id: 'true' },
{ label: 'Disabled', id: 'false' },
],
mode: 'advanced',
},
{
id: 'flashMode',
title: 'Flash Mode',
type: 'switch',
placeholder: 'Faster but less careful navigation',
mode: 'advanced',
},
{
id: 'thinking',
title: 'Thinking',
type: 'switch',
placeholder: 'Enable extended reasoning',
mode: 'advanced',
},
{
id: 'highlightElements',
title: 'Highlight Elements',
type: 'switch',
placeholder: 'Visually mark interactive elements',
mode: 'advanced',
},
{
id: 'systemPromptExtension',
title: 'System Prompt Extension',
type: 'long-input',
placeholder: 'Append custom instructions to the agent system prompt (max 2000 chars)',
mode: 'advanced',
},
{
id: 'structuredOutput',
title: 'Structured Output Schema',
type: 'code',
language: 'json',
placeholder: 'Stringified JSON schema for structured output',
mode: 'advanced',
},
{
id: 'metadata',
title: 'Metadata',
type: 'table',
columns: ['Key', 'Value'],
mode: 'advanced',
},
{
id: 'apiKey',
title: 'API Key',
@@ -78,19 +147,68 @@ export const BrowserUseBlock: BlockConfig<BrowserUseResponse> = {
],
tools: {
access: ['browser_use_run_task'],
config: {
tool: () => 'browser_use_run_task',
params: (params) => {
const next: Record<string, any> = { ...params }
if (typeof next.maxSteps === 'string') {
const trimmed = next.maxSteps.trim()
if (trimmed === '') {
next.maxSteps = undefined
} else {
const n = Number(trimmed)
next.maxSteps = Number.isFinite(n) ? n : undefined
}
}
if (next.vision === 'true') next.vision = true
else if (next.vision === 'false') next.vision = false
if (next.metadata && Array.isArray(next.metadata)) {
const obj: Record<string, string> = {}
for (const row of next.metadata as Array<Record<string, any>>) {
const key = row?.cells?.Key ?? row?.Key
const value = row?.cells?.Value ?? row?.Value
if (key) obj[key] = String(value ?? '')
}
next.metadata = obj
}
return next
},
},
},
inputs: {
task: { type: 'string', description: 'Browser automation task' },
+startUrl: { type: 'string', description: 'Starting URL for the agent' },
apiKey: { type: 'string', description: 'BrowserUse API key' },
-variables: { type: 'json', description: 'Task variables' },
-model: { type: 'string', description: 'AI model to use' },
save_browser_data: { type: 'boolean', description: 'Save browser data' },
+variables: { type: 'json', description: 'Secrets to inject into the task' },
+model: { type: 'string', description: 'LLM model to use' },
+profile_id: { type: 'string', description: 'Browser profile ID for persistent sessions' },
+maxSteps: { type: 'number', description: 'Maximum agent steps' },
+allowedDomains: { type: 'string', description: 'Comma-separated allowed domains' },
+vision: { type: 'string', description: 'Vision capability (auto / true / false)' },
+flashMode: { type: 'boolean', description: 'Enable flash mode' },
+thinking: { type: 'boolean', description: 'Enable extended reasoning' },
+highlightElements: { type: 'boolean', description: 'Highlight interactive elements' },
+systemPromptExtension: { type: 'string', description: 'Custom system prompt extension' },
+structuredOutput: { type: 'string', description: 'Stringified JSON schema' },
+metadata: { type: 'json', description: 'Custom key-value metadata' },
},
outputs: {
id: { type: 'string', description: 'Task execution identifier' },
success: { type: 'boolean', description: 'Task completion status' },
-output: { type: 'json', description: 'Task output data' },
-steps: { type: 'json', description: 'Execution steps taken' },
+output: { type: 'json', description: 'Final task output (string or structured)' },
+steps: {
+type: 'json',
+description:
+'Steps the agent executed (number, memory, evaluationPreviousGoal, nextGoal, url, screenshotUrl, actions, duration)',
+},
+liveUrl: {
+type: 'string',
+description: 'Embeddable live browser session URL (active during execution)',
+},
+shareUrl: {
+type: 'string',
+description: 'Public shareable URL for the session (post-run)',
+},
+sessionId: { type: 'string', description: 'Browser Use session identifier' },
},
}

File diff suppressed because it is too large

@@ -46,6 +46,10 @@ export const SlackBlock: BlockConfig<SlackResponse> = {
{ label: 'Get User Presence', id: 'get_user_presence' },
{ label: 'Edit Canvas', id: 'edit_canvas' },
{ label: 'Create Channel Canvas', id: 'create_channel_canvas' },
{ label: 'Get Canvas Info', id: 'get_canvas' },
{ label: 'List Canvases', id: 'list_canvases' },
{ label: 'Lookup Canvas Sections', id: 'lookup_canvas_sections' },
{ label: 'Delete Canvas', id: 'delete_canvas' },
{ label: 'Create Conversation', id: 'create_conversation' },
{ label: 'Invite to Conversation', id: 'invite_to_conversation' },
{ label: 'Open View', id: 'open_view' },
@@ -146,6 +150,9 @@ export const SlackBlock: BlockConfig<SlackResponse> = {
'get_user',
'get_user_presence',
'edit_canvas',
'get_canvas',
'lookup_canvas_sections',
'delete_canvas',
'create_conversation',
'open_view',
'update_view',
@@ -160,7 +167,11 @@ export const SlackBlock: BlockConfig<SlackResponse> = {
},
}
},
-required: true,
+required: {
+field: 'operation',
+value: 'list_canvases',
+not: true,
+},
},
{
id: 'manualChannel',
@@ -182,6 +193,9 @@ export const SlackBlock: BlockConfig<SlackResponse> = {
'get_user',
'get_user_presence',
'edit_canvas',
'get_canvas',
'lookup_canvas_sections',
'delete_canvas',
'create_conversation',
'open_view',
'update_view',
@@ -196,7 +210,11 @@ export const SlackBlock: BlockConfig<SlackResponse> = {
},
}
},
-required: true,
+required: {
+field: 'operation',
+value: 'list_canvases',
+not: true,
+},
},
{
id: 'dmUserId',
@@ -820,6 +838,121 @@ Return ONLY the timestamp string - no explanations, no quotes, no extra text.`,
value: 'create_channel_canvas',
},
},
// Get Canvas specific fields
{
id: 'getCanvasId',
title: 'Canvas ID',
type: 'short-input',
placeholder: 'Enter canvas ID (e.g., F1234ABCD)',
condition: {
field: 'operation',
value: 'get_canvas',
},
required: true,
},
// List Canvases specific fields
{
id: 'canvasListCount',
title: 'Canvas Limit',
type: 'short-input',
placeholder: '100',
condition: {
field: 'operation',
value: 'list_canvases',
},
mode: 'advanced',
},
{
id: 'canvasListPage',
title: 'Page',
type: 'short-input',
placeholder: '1',
condition: {
field: 'operation',
value: 'list_canvases',
},
mode: 'advanced',
},
{
id: 'canvasListUser',
title: 'User ID',
type: 'short-input',
placeholder: 'Optional creator filter (e.g., U1234567890)',
condition: {
field: 'operation',
value: 'list_canvases',
},
mode: 'advanced',
},
{
id: 'canvasListTsFrom',
title: 'Created After',
type: 'short-input',
placeholder: 'Unix timestamp (e.g., 123456789)',
condition: {
field: 'operation',
value: 'list_canvases',
},
mode: 'advanced',
},
{
id: 'canvasListTsTo',
title: 'Created Before',
type: 'short-input',
placeholder: 'Unix timestamp (e.g., 123456789)',
condition: {
field: 'operation',
value: 'list_canvases',
},
mode: 'advanced',
},
{
id: 'canvasListTeamId',
title: 'Team ID',
type: 'short-input',
placeholder: 'Encoded team ID (org tokens only)',
condition: {
field: 'operation',
value: 'list_canvases',
},
mode: 'advanced',
},
// Lookup Canvas Sections specific fields
{
id: 'lookupCanvasId',
title: 'Canvas ID',
type: 'short-input',
placeholder: 'Enter canvas ID (e.g., F1234ABCD)',
condition: {
field: 'operation',
value: 'lookup_canvas_sections',
},
required: true,
},
{
id: 'sectionCriteria',
title: 'Section Criteria',
type: 'code',
language: 'json',
placeholder: '{"section_types":["h1"],"contains_text":"Roadmap"}',
condition: {
field: 'operation',
value: 'lookup_canvas_sections',
},
required: true,
},
// Delete Canvas specific fields
{
id: 'deleteCanvasId',
title: 'Canvas ID',
type: 'short-input',
placeholder: 'Enter canvas ID (e.g., F1234ABCD)',
condition: {
field: 'operation',
value: 'delete_canvas',
},
required: true,
},
// Create Conversation specific fields
{
id: 'conversationName',
@@ -1058,6 +1191,10 @@ Do not include any explanations, markdown formatting, or other text outside the
'slack_get_user_presence',
'slack_edit_canvas',
'slack_create_channel_canvas',
'slack_get_canvas',
'slack_list_canvases',
'slack_lookup_canvas_sections',
'slack_delete_canvas',
'slack_create_conversation',
'slack_invite_to_conversation',
'slack_open_view',
@@ -1106,6 +1243,14 @@ Do not include any explanations, markdown formatting, or other text outside the
return 'slack_edit_canvas'
case 'create_channel_canvas':
return 'slack_create_channel_canvas'
case 'get_canvas':
return 'slack_get_canvas'
case 'list_canvases':
return 'slack_list_canvases'
case 'lookup_canvas_sections':
return 'slack_lookup_canvas_sections'
case 'delete_canvas':
return 'slack_delete_canvas'
case 'create_conversation':
return 'slack_create_conversation'
case 'invite_to_conversation':
@@ -1164,6 +1309,16 @@ Do not include any explanations, markdown formatting, or other text outside the
canvasTitle,
channelCanvasTitle,
channelCanvasContent,
getCanvasId,
canvasListCount,
canvasListPage,
canvasListUser,
canvasListTsFrom,
canvasListTsTo,
canvasListTeamId,
lookupCanvasId,
sectionCriteria,
deleteCanvasId,
conversationName,
isPrivate,
teamId,
@@ -1343,6 +1498,46 @@ Do not include any explanations, markdown formatting, or other text outside the
}
break
case 'get_canvas':
baseParams.canvasId = getCanvasId
break
case 'list_canvases':
if (canvasListCount) {
const parsedCount = Number.parseInt(canvasListCount, 10)
if (!Number.isNaN(parsedCount) && parsedCount > 0) {
baseParams.count = parsedCount
}
}
if (canvasListPage) {
const parsedPage = Number.parseInt(canvasListPage, 10)
if (!Number.isNaN(parsedPage) && parsedPage > 0) {
baseParams.page = parsedPage
}
}
if (canvasListUser) {
baseParams.user = String(canvasListUser).trim()
}
if (canvasListTsFrom) {
baseParams.tsFrom = String(canvasListTsFrom).trim()
}
if (canvasListTsTo) {
baseParams.tsTo = String(canvasListTsTo).trim()
}
if (canvasListTeamId) {
baseParams.teamId = String(canvasListTeamId).trim()
}
break
case 'lookup_canvas_sections':
baseParams.canvasId = lookupCanvasId
baseParams.criteria = sectionCriteria
break
case 'delete_canvas':
baseParams.canvasId = deleteCanvasId
break
case 'create_conversation':
baseParams.name = conversationName
baseParams.isPrivate = isPrivate === 'true'
@@ -1461,6 +1656,23 @@ Do not include any explanations, markdown formatting, or other text outside the
// Create Channel Canvas inputs
channelCanvasTitle: { type: 'string', description: 'Title for channel canvas' },
channelCanvasContent: { type: 'string', description: 'Content for channel canvas' },
// Canvas management inputs
getCanvasId: { type: 'string', description: 'Canvas ID to retrieve' },
canvasListCount: { type: 'string', description: 'Maximum number of canvases to return' },
canvasListPage: { type: 'string', description: 'Canvas list page number' },
canvasListUser: { type: 'string', description: 'Optional canvas creator user filter' },
canvasListTsFrom: {
type: 'string',
description: 'Filter canvases created after this timestamp',
},
canvasListTsTo: {
type: 'string',
description: 'Filter canvases created before this timestamp',
},
canvasListTeamId: { type: 'string', description: 'Encoded team ID for org tokens' },
lookupCanvasId: { type: 'string', description: 'Canvas ID to search for sections' },
sectionCriteria: { type: 'json', description: 'Canvas section lookup criteria' },
deleteCanvasId: { type: 'string', description: 'Canvas ID to delete' },
// Create Conversation inputs
conversationName: { type: 'string', description: 'Name for the new channel' },
isPrivate: { type: 'string', description: 'Create as private channel (true/false)' },
@@ -1511,6 +1723,26 @@ Do not include any explanations, markdown formatting, or other text outside the
// slack_canvas outputs
canvas_id: { type: 'string', description: 'Canvas identifier for created canvases' },
title: { type: 'string', description: 'Canvas title' },
canvas: {
type: 'json',
description: 'Canvas file metadata returned by Slack',
},
canvases: {
type: 'json',
description: 'Array of canvas file objects returned by Slack',
},
paging: {
type: 'json',
description: 'Pagination information for listed canvases',
},
sections: {
type: 'json',
description: 'Canvas section IDs returned by Slack section lookup',
},
ok: {
type: 'boolean',
description: 'Whether Slack completed the canvas operation successfully',
},
// slack_message_reader outputs (read operation)
messages: {

@@ -1,28 +1,6 @@
import { StagehandIcon } from '@/components/icons'
import { AuthMode, type BlockConfig, IntegrationType } from '@/blocks/types'
import type { ToolResponse } from '@/tools/types'
-export interface StagehandExtractResponse extends ToolResponse {
-output: {
-data: Record<string, any>
-}
-}
-export interface StagehandAgentResponse extends ToolResponse {
-output: {
-agentResult: {
-success: boolean
-completed: boolean
-message: string
-actions?: Array<{
-type: string
-description: string
-result?: string
-}>
-}
-structuredOutput?: Record<string, any>
-}
-}
+import type { StagehandAgentResponse, StagehandExtractResponse } from '@/tools/stagehand/types'
export type StagehandResponse = StagehandExtractResponse | StagehandAgentResponse
@@ -345,6 +323,27 @@ Example 3 (Data Collection):
generationType: 'json-schema',
},
},
{
id: 'mode',
title: 'Agent Mode',
type: 'dropdown',
options: [
{ label: 'DOM (default)', id: 'dom' },
{ label: 'Hybrid', id: 'hybrid' },
{ label: 'CUA', id: 'cua' },
],
value: () => 'dom',
condition: { field: 'operation', value: 'agent' },
mode: 'advanced',
},
{
id: 'maxSteps',
title: 'Max Steps',
type: 'short-input',
placeholder: '20',
condition: { field: 'operation', value: 'agent' },
mode: 'advanced',
},
// Shared API key field
{
id: 'apiKey',
@@ -361,6 +360,19 @@ Example 3 (Data Collection):
tool: (params) => {
return params.operation === 'agent' ? 'stagehand_agent' : 'stagehand_extract'
},
params: (params) => {
const next: Record<string, any> = { ...params }
if (typeof next.maxSteps === 'string') {
const trimmed = next.maxSteps.trim()
if (trimmed === '') {
next.maxSteps = undefined
} else {
const n = Number(trimmed)
next.maxSteps = Number.isFinite(n) ? n : undefined
}
}
return next
},
},
},
inputs: {
@@ -376,6 +388,8 @@ Example 3 (Data Collection):
task: { type: 'string', description: 'Task description (agent operation)' },
variables: { type: 'json', description: 'Task variables (agent operation)' },
outputSchema: { type: 'json', description: 'Output schema (agent operation)' },
mode: { type: 'string', description: 'Agent mode: dom, hybrid, or cua (agent operation)' },
maxSteps: { type: 'number', description: 'Max agent steps (agent operation)' },
},
outputs: {
// Extract outputs
@@ -383,5 +397,10 @@ Example 3 (Data Collection):
// Agent outputs
agentResult: { type: 'json', description: 'Agent execution result (agent operation)' },
structuredOutput: { type: 'json', description: 'Structured output data (agent operation)' },
liveViewUrl: {
type: 'string',
description: 'Embeddable Browserbase live view URL (agent operation)',
},
sessionId: { type: 'string', description: 'Browserbase session identifier (agent operation)' },
},
}
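The `params()` mapper above normalizes `maxSteps` from the short-input field (always a string) into a number or `undefined`. A standalone sketch of that coercion, under the assumption it were factored into a helper (`coerceMaxSteps` is hypothetical, not part of the diff):

```typescript
// Hypothetical helper mirroring the params() mapper's maxSteps handling:
// empty or non-numeric strings become undefined so the tool falls back to
// its own default instead of receiving NaN.
function coerceMaxSteps(value: unknown): number | undefined {
  if (typeof value === 'number') return value
  if (typeof value !== 'string') return undefined
  const trimmed = value.trim()
  if (trimmed === '') return undefined
  const n = Number(trimmed)
  return Number.isFinite(n) ? n : undefined
}
```

Note that `Number.isFinite` (rather than a bare `!Number.isNaN`) also rejects `'Infinity'`, which would otherwise slip through as a "valid" step count.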

View File

@@ -169,6 +169,7 @@ import { RouterBlock, RouterV2Block } from '@/blocks/blocks/router'
import { RssBlock } from '@/blocks/blocks/rss'
import { S3Block } from '@/blocks/blocks/s3'
import { SalesforceBlock } from '@/blocks/blocks/salesforce'
import { SapS4HanaBlock } from '@/blocks/blocks/sap_s4hana'
import { ScheduleBlock } from '@/blocks/blocks/schedule'
import { SearchBlock } from '@/blocks/blocks/search'
import { SecretsManagerBlock } from '@/blocks/blocks/secrets_manager'
@@ -419,6 +420,7 @@ export const registry: Record<string, BlockConfig> = {
rss: RssBlock,
s3: S3Block,
salesforce: SalesforceBlock,
sap_s4hana: SapS4HanaBlock,
schedule: ScheduleBlock,
search: SearchBlock,
sendgrid: SendGridBlock,

View File

@@ -4045,6 +4045,7 @@ export function AsanaIcon(props: SVGProps<SVGSVGElement>) {
}
export function PipedriveIcon(props: SVGProps<SVGSVGElement>) {
const pathId = useId()
return (
<svg
{...props}
@@ -4058,7 +4059,7 @@ export function PipedriveIcon(props: SVGProps<SVGSVGElement>) {
<defs>
<path
d='M59.6807,81.1772 C59.6807,101.5343 70.0078,123.4949 92.7336,123.4949 C109.5872,123.4949 126.6277,110.3374 126.6277,80.8785 C126.6277,55.0508 113.232,37.7119 93.2944,37.7119 C77.0483,37.7119 59.6807,49.1244 59.6807,81.1772 Z M101.3006,0 C142.0482,0 169.4469,32.2728 169.4469,80.3126 C169.4469,127.5978 140.584,160.60942 99.3224,160.60942 C79.6495,160.60942 67.0483,152.1836 60.4595,146.0843 C60.5063,147.5305 60.5374,149.1497 60.5374,150.8788 L60.5374,215 L18.32565,215 L18.32565,44.157 C18.32565,41.6732 17.53126,40.8873 15.07021,40.8873 L0.5531,40.8873 L0.5531,3.4741 L35.9736,3.4741 C52.282,3.4741 56.4564,11.7741 57.2508,18.1721 C63.8708,10.7524 77.5935,0 101.3006,0 Z'
id='path-1'
id={pathId}
/>
</defs>
<g
@@ -4069,10 +4070,7 @@ export function PipedriveIcon(props: SVGProps<SVGSVGElement>) {
fillRule='evenodd'
>
<g transform='translate(67.000000, 44.000000)'>
<mask id='mask-2' fill='white'>
<use href='#path-1' />
</mask>
<use id='Clip-5' fill='#FFFFFF' xlinkHref='#path-1' />
<use fill='#FFFFFF' xlinkHref={`#${pathId}`} />
</g>
</g>
</svg>
@@ -4098,6 +4096,40 @@ export function SalesforceIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function SapS4HanaIcon(props: SVGProps<SVGSVGElement>) {
const id = useId()
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 412.38 204'>
<defs>
<linearGradient
id={id}
x1='206.19'
y1='0'
x2='206.19'
y2='204'
gradientUnits='userSpaceOnUse'
>
<stop offset='0' stopColor='#00b1eb' />
<stop offset='.212' stopColor='#009ad9' />
<stop offset='.519' stopColor='#007fc4' />
<stop offset='.792' stopColor='#006eb8' />
<stop offset='1' stopColor='#0069b4' />
</linearGradient>
</defs>
<polyline
fill={`url(#${id})`}
fillRule='evenodd'
points='0 204 208.413 204 412.38 0 0 0 0 204'
/>
<path
fill='#fff'
fillRule='evenodd'
d='m244.727,38.359l-40.593-.025v96.518l-35.46-96.518h-35.16l-30.277,80.716c-3.224-20.352-24.277-27.38-40.84-32.649-10.937-3.512-22.541-8.678-22.434-14.387.091-4.687,6.225-9.04,18.377-8.385,8.17.433,15.373,1.092,29.71,8.006l14.102-24.557c-13.088-6.658-31.169-10.867-45.985-10.883h-.086c-17.277,0-31.677,5.598-40.602,14.824-6.221,6.443-9.572,14.626-9.712,23.679-.227,12.454,4.341,21.292,13.938,28.338,8.104,5.944,18.468,9.794,27.603,12.626,11.27,3.492,20.467,6.526,20.36,13.002-.083,2.355-.977,4.552-2.671,6.337-2.807,2.897-7.124,3.986-13.084,4.098-11.497.243-20.026-1.559-33.61-9.585l-12.536,24.903c13.546,7.705,29.586,12.223,45.952,12.223l2.106-.024c14.247-.256,25.745-4.316,34.929-11.712.527-.416,1.001-.845,1.488-1.277l-4.073,10.874h36.875l6.189-18.822c6.477,2.214,13.847,3.437,21.676,3.437,7.618,0,14.795-1.17,21.156-3.252l5.965,18.637h60.137v-38.969h13.113c31.706,0,50.456-16.147,50.456-43.202,0-30.139-18.219-43.969-57.011-43.969Zm-93.816,82.587c-4.737,0-9.177-.828-13.006-2.275l12.866-40.593h.244l12.643,40.708c-3.801,1.349-8.138,2.16-12.746,2.16Zm96.199-23.324h-8.941v-32.711h8.941c11.927,0,21.437,3.961,21.437,16.139,0,12.602-9.51,16.572-21.437,16.572'
/>
</svg>
)
}
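The icon changes above replace hardcoded SVG ids (`path-1`, `iamGradient`, …) with `useId()`. The reason: `url(#id)` and `href="#id"` resolve against the whole document, so two icon instances sharing one id would both paint with whichever definition appears first in the DOM. A minimal non-React model of the per-instance-id guarantee (`makeUseId` is a hypothetical stand-in for illustration, not React's implementation):

```typescript
// Each call to the returned function plays the role of one component
// instance calling React's useId(): a document-unique id every time.
function makeUseId(): () => string {
  let counter = 0
  return () => `:r${(counter++).toString(36)}:`
}

const useId = makeUseId()

// One "render" of a gradient-filled rect; the fill reference and the
// definition share the same per-instance id, never a global literal.
function renderGradientRect(): string {
  const id = useId()
  return `<linearGradient id="${id}"/><rect fill="url(#${id})"/>`
}
```

With hardcoded ids, the two strings below would be identical and the second rect would silently reuse the first gradient.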
export function ServiceNowIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 71.1 63.6'>
@@ -4694,15 +4726,16 @@ export function DynamoDBIcon(props: SVGProps<SVGSVGElement>) {
}
export function IAMIcon(props: SVGProps<SVGSVGElement>) {
const id = useId()
return (
<svg {...props} viewBox='0 0 80 80' xmlns='http://www.w3.org/2000/svg'>
<defs>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id='iamGradient'>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id={id}>
<stop stopColor='#BD0816' offset='0%' />
<stop stopColor='#FF5252' offset='100%' />
</linearGradient>
</defs>
<rect fill='url(#iamGradient)' width='80' height='80' />
<rect fill={`url(#${id})`} width='80' height='80' />
<path
d='M14,59 L66,59 L66,21 L14,21 L14,59 Z M68,20 L68,60 C68,60.552 67.553,61 67,61 L13,61 C12.447,61 12,60.552 12,60 L12,20 C12,19.448 12.447,19 13,19 L67,19 C67.553,19 68,19.448 68,20 L68,20 Z M44,48 L59,48 L59,46 L44,46 L44,48 Z M57,42 L62,42 L62,40 L57,40 L57,42 Z M44,42 L52,42 L52,40 L44,40 L44,42 Z M29,46 C29,45.449 28.552,45 28,45 C27.448,45 27,45.449 27,46 C27,46.551 27.448,47 28,47 C28.552,47 29,46.551 29,46 L29,46 Z M31,46 C31,47.302 30.161,48.401 29,48.816 L29,51 L27,51 L27,48.815 C25.839,48.401 25,47.302 25,46 C25,44.346 26.346,43 28,43 C29.654,43 31,44.346 31,46 L31,46 Z M19,53.993 L36.994,54 L36.996,50 L33,50 L33,48 L36.996,48 L36.998,45 L33,45 L33,43 L36.999,43 L37,40.007 L19.006,40 L19,53.993 Z M22,38.001 L34,38.006 L34,31 C34.001,28.697 31.197,26.677 28,26.675 L27.996,26.675 C24.804,26.675 22.004,28.696 22.002,31 L22,38.001 Z M17,54.992 L17.006,39 C17.006,38.734 17.111,38.48 17.299,38.292 C17.486,38.105 17.741,38 18.006,38 L20,38.001 L20.002,31 C20.004,27.512 23.59,24.675 27.996,24.675 L28,24.675 C32.412,24.677 36.001,27.515 36,31 L36,38.007 L38,38.008 C38.553,38.008 39,38.456 39,39.008 L38.994,55 C38.994,55.266 38.889,55.52 38.701,55.708 C38.514,55.895 38.259,56 37.994,56 L18,55.992 C17.447,55.992 17,55.544 17,54.992 L17,54.992 Z M60,36 L62,36 L62,34 L60,34 L60,36 Z M44,36 L55,36 L55,34 L44,34 L44,36 Z'
fill='#FFFFFF'
@@ -4712,15 +4745,16 @@ export function IAMIcon(props: SVGProps<SVGSVGElement>) {
}
export function IdentityCenterIcon(props: SVGProps<SVGSVGElement>) {
const id = useId()
return (
<svg {...props} viewBox='0 0 80 80' xmlns='http://www.w3.org/2000/svg'>
<defs>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id='identityCenterGradient'>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id={id}>
<stop stopColor='#BD0816' offset='0%' />
<stop stopColor='#FF5252' offset='100%' />
</linearGradient>
</defs>
<rect fill='url(#identityCenterGradient)' width='80' height='80' />
<rect fill={`url(#${id})`} width='80' height='80' />
<path
d='M46.694,46.8194562 C47.376,46.1374562 47.376,45.0294562 46.694,44.3474562 C46.353,44.0074562 45.906,43.8374562 45.459,43.8374562 C45.01,43.8374562 44.563,44.0074562 44.222,44.3474562 C43.542,45.0284562 43.542,46.1384562 44.222,46.8194562 C44.905,47.5014562 46.013,47.4994562 46.694,46.8194562 M47.718,47.1374562 L51.703,51.1204562 L50.996,51.8274562 L49.868,50.6994562 L48.793,51.7754562 L48.086,51.0684562 L49.161,49.9924562 L47.011,47.8444562 C46.545,48.1654562 46.003,48.3294562 45.458,48.3294562 C44.755,48.3294562 44.051,48.0624562 43.515,47.5264562 C42.445,46.4554562 42.445,44.7124562 43.515,43.6404562 C44.586,42.5714562 46.329,42.5694562 47.401,43.6404562 C48.351,44.5904562 48.455,46.0674562 47.718,47.1374562 M53,44.1014562 C53,46.1684562 51.505,47.0934562 50.023,47.0934562 L50.023,46.0934562 C50.487,46.0934562 52,45.9494562 52,44.1014562 C52,43.0044562 51.353,42.3894562 49.905,42.1084562 C49.68,42.0654562 49.514,41.8754562 49.501,41.6484562 C49.446,40.7444562 48.987,40.1124562 48.384,40.1124562 C48.084,40.1124562 47.854,40.2424562 47.616,40.5464562 C47.506,40.6884562 47.324,40.7594562 47.147,40.7324562 C46.968,40.7054562 46.818,40.5844562 46.755,40.4144562 C46.577,39.9434562 46.211,39.4334562 45.723,38.9774562 C45.231,38.5094562 43.883,37.5074562 41.972,38.2734562 C40.885,38.7054562 40.034,39.9494562 40.034,41.1074562 C40.034,41.2354562 40.043,41.3624562 40.058,41.4884562 C40.061,41.5094562 40.062,41.5304562 40.062,41.5514562 C40.062,41.7994562 39.882,42.0064562 39.645,42.0464562 C38.886,42.2394562 38,42.7454562 38,44.0554562 L38.005,44.2104562 C38.069,45.3254562 39.252,45.9954562 40.358,45.9984562 L41,45.9984562 L41,46.9984562 L40.357,46.9984562 C38.536,46.9944562 37.095,45.8194562 37.006,44.2644562 C37.003,44.1944562 37,44.1244562 37,44.0554562 C37,42.6944562 37.752,41.6484562 39.035,41.1884562 C39.034,41.1614562 39.034,41.1344562 39.034,41.1074562 C39.034,39.5434562 40.138,37.9254562 41.602,37.3434562 C43.298,36.6654562 45.095,37.0034562 46.409,38.2494562 
C46.706,38.5274562 47.076,38.9264562 47.372,39.4134562 C47.673,39.2124562 48.008,39.1124562 48.384,39.1124562 C49.257,39.1124562 50.231,39.7714562 50.458,41.2074562 C52.145,41.6324562 53,42.6054562 53,44.1014562 M27,53 L27,27 L53,27 L53,34 L51,34 L51,29 L29,29 L29,51 L51,51 L51,46 L53,46 L53,53 Z'
fill='#FFFFFF'
@@ -4730,15 +4764,16 @@ export function IdentityCenterIcon(props: SVGProps<SVGSVGElement>) {
}
export function STSIcon(props: SVGProps<SVGSVGElement>) {
const id = useId()
return (
<svg {...props} viewBox='0 0 80 80' xmlns='http://www.w3.org/2000/svg'>
<defs>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id='stsGradient'>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id={id}>
<stop stopColor='#BD0816' offset='0%' />
<stop stopColor='#FF5252' offset='100%' />
</linearGradient>
</defs>
<rect fill='url(#stsGradient)' width='80' height='80' />
<rect fill={`url(#${id})`} width='80' height='80' />
<path
d='M14,59 L66,59 L66,21 L14,21 L14,59 Z M68,20 L68,60 C68,60.552 67.553,61 67,61 L13,61 C12.447,61 12,60.552 12,60 L12,20 C12,19.448 12.447,19 13,19 L67,19 C67.553,19 68,19.448 68,20 L68,20 Z M44,48 L59,48 L59,46 L44,46 L44,48 Z M57,42 L62,42 L62,40 L57,40 L57,42 Z M44,42 L52,42 L52,40 L44,40 L44,42 Z M29,46 C29,45.449 28.552,45 28,45 C27.448,45 27,45.449 27,46 C27,46.551 27.448,47 28,47 C28.552,47 29,46.551 29,46 L29,46 Z M31,46 C31,47.302 30.161,48.401 29,48.816 L29,51 L27,51 L27,48.815 C25.839,48.401 25,47.302 25,46 C25,44.346 26.346,43 28,43 C29.654,43 31,44.346 31,46 L31,46 Z M19,53.993 L36.994,54 L36.996,50 L33,50 L33,48 L36.996,48 L36.998,45 L33,45 L33,43 L36.999,43 L37,40.007 L19.006,40 L19,53.993 Z M22,38.001 L34,38.006 L34,31 C34.001,28.697 31.197,26.677 28,26.675 L27.996,26.675 C24.804,26.675 22.004,28.696 22.002,31 L22,38.001 Z M17,54.992 L17.006,39 C17.006,38.734 17.111,38.48 17.299,38.292 C17.486,38.105 17.741,38 18.006,38 L20,38.001 L20.002,31 C20.004,27.512 23.59,24.675 27.996,24.675 L28,24.675 C32.412,24.677 36.001,27.515 36,31 L36,38.007 L38,38.008 C38.553,38.008 39,38.456 39,39.008 L38.994,55 C38.994,55.266 38.889,55.52 38.701,55.708 C38.514,55.895 38.259,56 37.994,56 L18,55.992 C17.447,55.992 17,55.544 17,54.992 L17,54.992 Z M60,36 L62,36 L62,34 L60,34 L60,36 Z M44,36 L55,36 L55,34 L44,34 L44,36 Z'
fill='#FFFFFF'
@@ -4748,15 +4783,16 @@ export function STSIcon(props: SVGProps<SVGSVGElement>) {
}
export function SESIcon(props: SVGProps<SVGSVGElement>) {
const id = useId()
return (
<svg {...props} viewBox='0 0 80 80' xmlns='http://www.w3.org/2000/svg'>
<defs>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id='sesGradient'>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id={id}>
<stop stopColor='#BD0816' offset='0%' />
<stop stopColor='#FF5252' offset='100%' />
</linearGradient>
</defs>
<rect fill='url(#sesGradient)' width='80' height='80' />
<rect fill={`url(#${id})`} width='80' height='80' />
<path
d='M57,60.999875 C57,59.373846 55.626,57.9998214 54,57.9998214 C52.374,57.9998214 51,59.373846 51,60.999875 C51,62.625904 52.374,63.9999286 54,63.9999286 C55.626,63.9999286 57,62.625904 57,60.999875 L57,60.999875 Z M40,59.9998571 C38.374,59.9998571 37,61.3738817 37,62.9999107 C37,64.6259397 38.374,65.9999643 40,65.9999643 C41.626,65.9999643 43,64.6259397 43,62.9999107 C43,61.3738817 41.626,59.9998571 40,59.9998571 L40,59.9998571 Z M26,57.9998214 C24.374,57.9998214 23,59.373846 23,60.999875 C23,62.625904 24.374,63.9999286 26,63.9999286 C27.626,63.9999286 29,62.625904 29,60.999875 C29,59.373846 27.626,57.9998214 26,57.9998214 L26,57.9998214 Z M28.605,42.9995536 L51.395,42.9995536 L43.739,36.1104305 L40.649,38.7584778 C40.463,38.9194807 40.23,38.9994821 39.999,38.9994821 C39.768,38.9994821 39.535,38.9194807 39.349,38.7584778 L36.26,36.1104305 L28.605,42.9995536 Z M27,28.1732888 L27,41.7545313 L34.729,34.7984071 L27,28.1732888 Z M51.297,26.9992678 L28.703,26.9992678 L39.999,36.6824408 L51.297,26.9992678 Z M53,41.7545313 L53,28.1732888 L45.271,34.7974071 L53,41.7545313 Z M59,60.999875 C59,63.7099234 56.71,65.9999643 54,65.9999643 C51.29,65.9999643 49,63.7099234 49,60.999875 C49,58.6308327 50.75,56.5837961 53,56.1057876 L53,52.9997321 L41,52.9997321 L41,58.1058233 C43.25,58.5838319 45,60.6308684 45,62.9999107 C45,65.7099591 42.71,68 40,68 C37.29,68 35,65.7099591 35,62.9999107 C35,60.6308684 36.75,58.5838319 39,58.1058233 L39,52.9997321 L27,52.9997321 L27,56.1057876 C29.25,56.5837961 31,58.6308327 31,60.999875 C31,63.7099234 28.71,65.9999643 26,65.9999643 C23.29,65.9999643 21,63.7099234 21,60.999875 C21,58.6308327 22.75,56.5837961 25,56.1057876 L25,51.9997143 C25,51.4477044 25.447,50.9996964 26,50.9996964 L39,50.9996964 L39,44.9995893 L26,44.9995893 C25.447,44.9995893 25,44.5515813 25,43.9995714 L25,25.99925 C25,25.4472401 25.447,24.9992321 26,24.9992321 L54,24.9992321 C54.553,24.9992321 55,25.4472401 55,25.99925 L55,43.9995714 C55,44.5515813 54.553,44.9995893 
54,44.9995893 L41,44.9995893 L41,50.9996964 L54,50.9996964 C54.553,50.9996964 55,51.4477044 55,51.9997143 L55,56.1057876 C57.25,56.5837961 59,58.6308327 59,60.999875 L59,60.999875 Z M68,39.9995 C68,45.9066055 66.177,51.5597064 62.727,56.3447919 L61.104,55.174771 C64.307,50.7316916 66,45.4845979 66,39.9995 C66,25.664244 54.337,14.0000357 40.001,14.0000357 C25.664,14.0000357 14,25.664244 14,39.9995 C14,45.4845979 15.693,50.7316916 18.896,55.174771 L17.273,56.3447919 C13.823,51.5597064 12,45.9066055 12,39.9995 C12,24.5612243 24.561,12 39.999,12 C55.438,12 68,24.5612243 68,39.9995 L68,39.9995 Z'
fill='#FFFFFF'
@@ -4766,15 +4802,16 @@ export function SESIcon(props: SVGProps<SVGSVGElement>) {
}
export function SecretsManagerIcon(props: SVGProps<SVGSVGElement>) {
const id = useId()
return (
<svg {...props} viewBox='0 0 80 80' xmlns='http://www.w3.org/2000/svg'>
<defs>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id='secretsManagerGradient'>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id={id}>
<stop stopColor='#BD0816' offset='0%' />
<stop stopColor='#FF5252' offset='100%' />
</linearGradient>
</defs>
<rect fill='url(#secretsManagerGradient)' width='80' height='80' />
<rect fill={`url(#${id})`} width='80' height='80' />
<path
d='M38.76,43.36 C38.76,44.044 39.317,44.6 40,44.6 C40.684,44.6 41.24,44.044 41.24,43.36 C41.24,42.676 40.684,42.12 40,42.12 C39.317,42.12 38.76,42.676 38.76,43.36 L38.76,43.36 Z M36.76,43.36 C36.76,41.573 38.213,40.12 40,40.12 C41.787,40.12 43.24,41.573 43.24,43.36 C43.24,44.796 42.296,46.002 41,46.426 L41,49 L39,49 L39,46.426 C37.704,46.002 36.76,44.796 36.76,43.36 L36.76,43.36 Z M49,38 L31,38 L31,51 L49,51 L49,48 L46,48 L46,46 L49,46 L49,43 L46,43 L46,41 L49,41 L49,38 Z M34,36 L45.999,36 L46,31 C46.001,28.384 43.143,26.002 40.004,26 L40.001,26 C38.472,26 36.928,26.574 35.763,27.575 C34.643,28.537 34,29.786 34,31.001 L34,36 Z M48,31.001 L47.999,36 L50,36 C50.553,36 51,36.448 51,37 L51,52 C51,52.552 50.553,53 50,53 L30,53 C29.447,53 29,52.552 29,52 L29,37 C29,36.448 29.447,36 30,36 L32,36 L32,31 C32.001,29.202 32.897,27.401 34.459,26.058 C35.982,24.75 38.001,24 40.001,24 L40.004,24 C44.265,24.002 48.001,27.273 48,31.001 L48,31.001 Z M19.207,55.049 L20.828,53.877 C18.093,50.097 16.581,45.662 16.396,41 L19,41 L19,39 L16.399,39 C16.598,34.366 18.108,29.957 20.828,26.198 L19.207,25.025 C16.239,29.128 14.599,33.942 14.399,39 L12,39 L12,41 L14.396,41 C14.582,46.086 16.224,50.926 19.207,55.049 L19.207,55.049 Z M53.838,59.208 C50.069,61.936 45.648,63.446 41,63.639 L41,61 L39,61 L39,63.639 C34.352,63.447 29.93,61.937 26.159,59.208 L24.988,60.828 C29.1,63.805 33.928,65.445 39,65.639 L39,68 L41,68 L41,65.639 C46.072,65.445 50.898,63.805 55.01,60.828 L53.838,59.208 Z M26.159,20.866 C29.93,18.138 34.352,16.628 39,16.436 L39,19 L41,19 L41,16.436 C45.648,16.628 50.069,18.138 53.838,20.866 L55.01,19.246 C50.898,16.27 46.072,14.63 41,14.436 L41,12 L39,12 L39,14.436 C33.928,14.629 29.1,16.269 24.988,19.246 L26.159,20.866 Z M65.599,39 C65.399,33.942 63.759,29.128 60.79,25.025 L59.169,26.198 C61.89,29.957 63.4,34.366 63.599,39 L61,39 L61,41 L63.602,41 C63.416,45.662 61.905,50.097 59.169,53.877 L60.79,55.049 C63.774,50.926 65.415,46.086 65.602,41 L68,41 L68,39 L65.599,39 Z 
M56.386,25.064 L64.226,17.224 L62.812,15.81 L54.972,23.65 L56.386,25.064 Z M23.612,55.01 L15.772,62.85 L17.186,64.264 L25.026,56.424 L23.612,55.01 Z M28.666,27.253 L13.825,12.413 L12.411,13.827 L27.252,28.667 L28.666,27.253 Z M54.193,52.78 L67.586,66.173 L66.172,67.587 L52.779,54.194 L54.193,52.78 Z'
fill='#FFFFFF'

View File

@@ -34,6 +34,7 @@ import {
type ExecutionContext,
getNextExecutionOrder,
type NormalizedBlockOutput,
type StreamingExecution,
} from '@/executor/types'
import { streamingResponseFormatProcessor } from '@/executor/utils'
import { buildBlockExecutionError, normalizeError } from '@/executor/utils/errors'
@@ -140,7 +141,7 @@ export class BlockExecutor {
let normalizedOutput: NormalizedBlockOutput
if (isStreamingExecution) {
const streamingExec = output as { stream: ReadableStream; execution: any }
const streamingExec = output as StreamingExecution
if (ctx.onStream) {
await this.handleStreamingExecution(
@@ -602,7 +603,7 @@ export class BlockExecutor {
ctx: ExecutionContext,
node: DAGNode,
block: SerializedBlock,
streamingExec: { stream: ReadableStream; execution: any },
streamingExec: StreamingExecution,
resolvedInputs: Record<string, any>,
selectedOutputs: string[]
): Promise<void> {
@@ -613,56 +614,39 @@ export class BlockExecutor {
(block.config?.params as Record<string, any> | undefined)?.responseFormat ??
(block.config as Record<string, any> | undefined)?.responseFormat
const stream = streamingExec.stream
if (typeof stream.tee !== 'function') {
await this.forwardStream(ctx, blockId, streamingExec, stream, responseFormat, selectedOutputs)
return
}
const sourceReader = streamingExec.stream.getReader()
const decoder = new TextDecoder()
const accumulated: string[] = []
let drainError: unknown
let sourceFullyDrained = false
const [clientStream, executorStream] = stream.tee()
const clientSource = new ReadableStream<Uint8Array>({
async pull(controller) {
try {
const { done, value } = await sourceReader.read()
if (done) {
const tail = decoder.decode()
if (tail) accumulated.push(tail)
sourceFullyDrained = true
controller.close()
return
}
accumulated.push(decoder.decode(value, { stream: true }))
controller.enqueue(value)
} catch (error) {
drainError = error
controller.error(error)
}
},
async cancel(reason) {
try {
await sourceReader.cancel(reason)
} catch {}
},
})
const processedClientStream = streamingResponseFormatProcessor.processStream(
clientStream,
blockId,
selectedOutputs,
responseFormat
)
const clientStreamingExec = {
...streamingExec,
stream: processedClientStream,
}
const executorConsumption = this.consumeExecutorStream(
executorStream,
streamingExec,
blockId,
responseFormat
)
const clientConsumption = (async () => {
try {
await ctx.onStream?.(clientStreamingExec)
} catch (error) {
this.execLogger.error('Error in onStream callback', { blockId, error })
// Cancel the client stream to release the tee'd buffer
await processedClientStream.cancel().catch(() => {})
}
})()
await Promise.all([clientConsumption, executorConsumption])
}
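The rewrite above replaces the hand-rolled pull-through `ReadableStream` with `stream.tee()`, giving the client and the executor independent branches of the same source. A self-contained sketch of that dual-consumption shape (helper names are illustrative, not from the diff):

```typescript
// Drain one branch into a string, flushing the decoder at the end so a
// multi-byte sequence split across chunks is not dropped.
async function collect(stream: ReadableStream<Uint8Array>): Promise<string> {
  const reader = stream.getReader()
  const decoder = new TextDecoder()
  const parts: string[] = []
  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    parts.push(decoder.decode(value, { stream: true }))
  }
  const tail = decoder.decode()
  if (tail) parts.push(tail)
  return parts.join('')
}

function makeStream(chunks: string[]): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder()
  return new ReadableStream({
    start(controller) {
      for (const c of chunks) controller.enqueue(encoder.encode(c))
      controller.close()
    },
  })
}

// tee() hands every chunk to both branches; consuming them concurrently
// (as the Promise.all in the diff does) keeps the tee buffer bounded.
async function demo(): Promise<[string, string]> {
  const [clientStream, executorStream] = makeStream(['hel', 'lo']).tee()
  return Promise.all([collect(clientStream), collect(executorStream)])
}
```

One caveat the diff's cancel-on-error handling addresses: a tee branch that is never read retains every chunk the other branch pulls, so an abandoned branch must be cancelled to release that buffer.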
private async forwardStream(
ctx: ExecutionContext,
blockId: string,
streamingExec: { stream: ReadableStream; execution: any },
stream: ReadableStream,
responseFormat: any,
selectedOutputs: string[]
): Promise<void> {
const processedStream = streamingResponseFormatProcessor.processStream(
stream,
clientSource,
blockId,
selectedOutputs,
responseFormat
@@ -670,72 +654,75 @@ export class BlockExecutor {
try {
await ctx.onStream?.({
...streamingExec,
stream: processedStream,
stream: processedClientStream,
execution: streamingExec.execution,
})
} catch (error) {
this.execLogger.error('Error in onStream callback', { blockId, error })
await processedStream.cancel().catch(() => {})
}
}
private async consumeExecutorStream(
stream: ReadableStream,
streamingExec: { execution: any },
blockId: string,
responseFormat: any
): Promise<void> {
const reader = stream.getReader()
const decoder = new TextDecoder()
const chunks: string[] = []
try {
while (true) {
const { done, value } = await reader.read()
if (done) break
chunks.push(decoder.decode(value, { stream: true }))
}
const tail = decoder.decode()
if (tail) chunks.push(tail)
} catch (error) {
this.execLogger.error('Error reading executor stream for block', { blockId, error })
await processedClientStream.cancel().catch(() => {})
} finally {
try {
await reader.cancel().catch(() => {})
sourceReader.releaseLock()
} catch {}
}
const fullContent = chunks.join('')
if (drainError) {
this.execLogger.error('Error reading stream for block', { blockId, error: drainError })
return
}
// If the onStream consumer exited before the source drained (e.g. it caught
// an internal error and returned normally), `accumulated` holds a truncated
// response. Persisting that to memory or setting it as the block output
// would corrupt downstream state — skip and log instead.
if (!sourceFullyDrained) {
this.execLogger.warn(
'Stream consumer exited before source drained; skipping content persistence',
{
blockId,
}
)
return
}
const fullContent = accumulated.join('')
if (!fullContent) {
return
}
const executionOutput = streamingExec.execution?.output
if (!executionOutput || typeof executionOutput !== 'object') {
return
}
if (responseFormat) {
try {
const parsed = JSON.parse(fullContent.trim())
streamingExec.execution.output = {
...parsed,
tokens: executionOutput.tokens,
toolCalls: executionOutput.toolCalls,
providerTiming: executionOutput.providerTiming,
cost: executionOutput.cost,
model: executionOutput.model,
if (executionOutput && typeof executionOutput === 'object') {
let parsedForFormat = false
if (responseFormat) {
try {
const parsed = JSON.parse(fullContent.trim())
streamingExec.execution.output = {
...parsed,
tokens: executionOutput.tokens,
toolCalls: executionOutput.toolCalls,
providerTiming: executionOutput.providerTiming,
cost: executionOutput.cost,
model: executionOutput.model,
}
parsedForFormat = true
} catch (error) {
this.execLogger.warn('Failed to parse streamed content for response format', {
blockId,
error,
})
}
return
} catch (error) {
this.execLogger.warn('Failed to parse streamed content for response format', {
blockId,
error,
})
}
if (!parsedForFormat) {
executionOutput.content = fullContent
}
}
executionOutput.content = fullContent
if (streamingExec.onFullContent) {
try {
await streamingExec.onFullContent(fullContent)
} catch (error) {
this.execLogger.error('onFullContent callback failed', { blockId, error })
}
}
}
}

View File

@@ -958,8 +958,16 @@ export class AgentBlockHandler implements BlockHandler {
streamingExec: StreamingExecution
): StreamingExecution {
return {
stream: memoryService.wrapStreamForPersistence(streamingExec.stream, ctx, inputs),
stream: streamingExec.stream,
execution: streamingExec.execution,
onFullContent: async (content: string) => {
if (!content.trim()) return
try {
await memoryService.appendToMemory(ctx, inputs, { role: 'assistant', content })
} catch (error) {
logger.error('Failed to persist streaming response:', error)
}
},
}
}

View File

@@ -111,35 +111,6 @@ export class Memory {
})
}
wrapStreamForPersistence(
stream: ReadableStream<Uint8Array>,
ctx: ExecutionContext,
inputs: AgentInputs
): ReadableStream<Uint8Array> {
const chunks: string[] = []
const decoder = new TextDecoder()
const transformStream = new TransformStream<Uint8Array, Uint8Array>({
transform: (chunk, controller) => {
controller.enqueue(chunk)
const decoded = decoder.decode(chunk, { stream: true })
chunks.push(decoded)
},
flush: () => {
const content = chunks.join('')
if (content.trim()) {
this.appendToMemory(ctx, inputs, {
role: 'assistant',
content,
}).catch((error) => logger.error('Failed to persist streaming response:', error))
}
},
})
return stream.pipeThrough(transformStream)
}
private requireWorkspaceId(ctx: ExecutionContext): string {
if (!ctx.workspaceId) {
throw new Error('workspaceId is required for memory operations')

View File

@@ -717,10 +717,13 @@ export class LoopOrchestrator {
})
if (vmResult.error) {
logger.error('Failed to evaluate loop condition', {
const isSystemError = vmResult.error.isSystemError === true
const logFn = isSystemError ? logger.error.bind(logger) : logger.warn.bind(logger)
logFn('Failed to evaluate loop condition', {
condition,
evaluatedCondition,
error: vmResult.error,
isSystemError,
})
return false
}
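The change above routes expected evaluation failures (user-authored conditions) to `warn` and reserves `error` for executor faults. The selection logic in isolation (`VmError` is a simplified stand-in for the diff's `vmResult.error` shape):

```typescript
// User-authored loop conditions that fail to evaluate are expected noise;
// only system-side faults should page anyone.
interface VmError {
  message: string
  isSystemError?: boolean
}

function pickLogLevel(error: VmError): 'error' | 'warn' {
  return error.isSystemError === true ? 'error' : 'warn'
}
```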

View File

@@ -359,6 +359,12 @@ export interface ExecutionResult {
export interface StreamingExecution {
stream: ReadableStream
execution: ExecutionResult & { isStreaming?: boolean }
/**
* Invoked with the assembled response text after the stream drains. Lets agent
* blocks persist the full response without interposing a TransformStream on a
* fetch-backed source — that pattern amplifies memory on Bun via #28035.
*/
onFullContent?: (content: string) => void | Promise<void>
}
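The `onFullContent` contract documented above can be sketched end to end: the executor drains the stream, assembles the text, and invokes the callback once with the full response, so persistence no longer needs a `TransformStream` interposed on the fetch-backed source. (`StreamingExecutionLike` and `drainAndNotify` are illustrative names, not the diff's actual executor code.)

```typescript
interface StreamingExecutionLike {
  stream: ReadableStream<Uint8Array>
  onFullContent?: (content: string) => void | Promise<void>
}

// Drain the stream, then hand the assembled text to the callback exactly
// once — mirroring how consumeExecutorStream feeds onFullContent.
async function drainAndNotify(exec: StreamingExecutionLike): Promise<string> {
  const reader = exec.stream.getReader()
  const decoder = new TextDecoder()
  const parts: string[] = []
  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    parts.push(decoder.decode(value, { stream: true }))
  }
  const full = parts.join('') + decoder.decode()
  if (full && exec.onFullContent) await exec.onFullContent(full)
  return full
}
```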
export interface BlockExecutor {

View File

@@ -1,4 +1,4 @@
import { keepPreviousData, useMutation, useQuery, useQueryClient } from '@tanstack/react-query'
import { useMutation, useQuery, useQueryClient } from '@tanstack/react-query'
import type { PersistedMessage } from '@/lib/copilot/chat/persisted-message'
import { normalizeMessage } from '@/lib/copilot/chat/persisted-message'
import {
@@ -254,7 +254,6 @@ export function useTasks(workspaceId?: string) {
queryKey: taskKeys.list(workspaceId),
queryFn: ({ signal }) => fetchTasks(workspaceId as string, signal),
enabled: Boolean(workspaceId),
placeholderData: keepPreviousData,
staleTime: 60 * 1000,
})
}
@@ -535,6 +534,10 @@ async function markTaskUnread(chatId: string): Promise<void> {
/**
* Marks a task as read with optimistic update.
*
* The server only updates `lastSeenAt`, never `updatedAt`, so we deliberately
* do not invalidate the list cache — that would trigger a refetch that can
* reorder the sidebar if any unrelated server-side update landed in between.
*/
export function useMarkTaskRead(workspaceId?: string) {
const queryClient = useQueryClient()
@@ -556,14 +559,14 @@ export function useMarkTaskRead(workspaceId?: string) {
queryClient.setQueryData(taskKeys.list(workspaceId), context.previousTasks)
}
},
onSettled: () => {
queryClient.invalidateQueries({ queryKey: taskKeys.list(workspaceId) })
},
})
}
/**
* Marks a task as unread with optimistic update.
*
* Same rationale as `useMarkTaskRead` — no list invalidation, since the server
* only flips `lastSeenAt` and the optimistic update fully reflects the change.
*/
export function useMarkTaskUnread(workspaceId?: string) {
const queryClient = useQueryClient()
@@ -585,8 +588,5 @@ export function useMarkTaskUnread(workspaceId?: string) {
queryClient.setQueryData(taskKeys.list(workspaceId), context.previousTasks)
}
},
onSettled: () => {
queryClient.invalidateQueries({ queryKey: taskKeys.list(workspaceId) })
},
})
}
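The dropped `onSettled` invalidations above rely on the optimistic cache write already matching what the server will return: only `lastSeenAt` flips, so a refetch could only reorder the sidebar, never correct anything. A minimal model of that optimistic write (`Task` is a simplified stand-in for the real list item type):

```typescript
interface Task {
  id: string
  lastSeenAt: string | null
}

// Optimistic mark-as-read: touch lastSeenAt on the one task, preserve
// array order — exactly the shape the server response will have, which is
// why no list invalidation (and no refetch) is needed afterwards.
function markRead(tasks: Task[], id: string, now: string): Task[] {
  return tasks.map((t) => (t.id === id ? { ...t, lastSeenAt: now } : t))
}
```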

View File

@@ -101,7 +101,9 @@ async function fetchWorkspaceFileContent(
}
/**
* Hook to fetch workspace file content as text
* Hook to fetch workspace file content as text.
* `key` (the storage object key) is included in the query key so that a new
* storage key (e.g. after a file is re-uploaded) correctly busts the cache.
*/
export function useWorkspaceFileContent(
workspaceId: string,
@@ -110,7 +112,7 @@ export function useWorkspaceFileContent(
raw?: boolean
) {
return useQuery({
queryKey: workspaceFilesKeys.content(workspaceId, fileId, raw ? 'raw' : 'text'),
queryKey: [...workspaceFilesKeys.content(workspaceId, fileId, raw ? 'raw' : 'text'), key],
queryFn: ({ signal }) => fetchWorkspaceFileContent(key, signal, raw),
enabled: !!workspaceId && !!fileId && !!key,
staleTime: 30 * 1000,
@@ -127,12 +129,12 @@ async function fetchWorkspaceFileBinary(key: string, signal?: AbortSignal): Prom
/**
* Hook to fetch workspace file content as binary (ArrayBuffer).
* Shares the same query key as useWorkspaceFileContent so cache
* invalidation from file updates triggers a refetch automatically.
* `key` (the storage object key) is included in the query key so that a new
* storage key (e.g. after a file is re-uploaded) correctly busts the cache.
*/
export function useWorkspaceFileBinary(workspaceId: string, fileId: string, key: string) {
return useQuery({
queryKey: workspaceFilesKeys.content(workspaceId, fileId, 'binary'),
queryKey: [...workspaceFilesKeys.content(workspaceId, fileId, 'binary'), key],
queryFn: ({ signal }) => fetchWorkspaceFileBinary(key, signal),
enabled: !!workspaceId && !!fileId && !!key,
staleTime: 30 * 1000,
@@ -210,10 +212,8 @@ export function useUploadWorkspaceFile() {
return data
},
onSuccess: (_data, variables) => {
// Invalidate files list to refetch
onSettled: () => {
queryClient.invalidateQueries({ queryKey: workspaceFilesKeys.lists() })
// Invalidate storage info to update usage
queryClient.invalidateQueries({ queryKey: workspaceFilesKeys.storageInfo() })
},
onError: (error) => {
@@ -229,17 +229,18 @@ interface UpdateFileContentParams {
workspaceId: string
fileId: string
content: string
encoding?: 'base64' | 'utf-8'
}
export function useUpdateWorkspaceFileContent() {
const queryClient = useQueryClient()
return useMutation({
mutationFn: async ({ workspaceId, fileId, content }: UpdateFileContentParams) => {
mutationFn: async ({ workspaceId, fileId, content, encoding }: UpdateFileContentParams) => {
const response = await fetch(`/api/workspaces/${workspaceId}/files/${fileId}/content`, {
method: 'PUT',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ content }),
body: JSON.stringify(encoding ? { content, encoding } : { content }),
})
const data = await response.json()
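The query-key changes in this file hinge on one react-query rule: the cache is addressed by the serialized key array, so any input that should bust the cache must appear in the key. Appending the storage object key means a re-upload (new key, same `fileId`) fetches fresh content instead of serving the stale blob. A sketch of the resulting key shape (`contentQueryKey` is a hypothetical flattened helper, not the diff's `workspaceFilesKeys` factory):

```typescript
// Every argument that can change the fetched bytes is part of the key;
// a new storageKey after re-upload yields a distinct cache entry.
function contentQueryKey(
  workspaceId: string,
  fileId: string,
  mode: 'raw' | 'text' | 'binary',
  storageKey: string
): readonly unknown[] {
  return ['workspace-files', 'content', workspaceId, fileId, mode, storageKey]
}
```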

View File

@@ -18,6 +18,20 @@ import type { SerializableExecutionState } from '@/executor/execution/types'
const logger = createLogger('useExecutionStream')
export class ExecutionStreamHttpError extends Error {
constructor(
message: string,
public readonly httpStatus: number
) {
super(message)
this.name = 'ExecutionStreamHttpError'
}
}
export function isExecutionStreamHttpError(error: unknown): error is ExecutionStreamHttpError {
return error instanceof ExecutionStreamHttpError
}
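The class and guard above let callers branch on HTTP failures with a type guard instead of probing properties bolted on via `Object.assign`. A self-contained sketch of that consumption pattern (`describe` is a hypothetical caller, not part of the hook):

```typescript
// Typed error carrying the HTTP status; instanceof survives because the
// subclass sets its own name and prototype chain normally.
class ExecutionStreamHttpError extends Error {
  constructor(
    message: string,
    public readonly httpStatus: number
  ) {
    super(message)
    this.name = 'ExecutionStreamHttpError'
  }
}

function isExecutionStreamHttpError(error: unknown): error is ExecutionStreamHttpError {
  return error instanceof ExecutionStreamHttpError
}

// After the guard, TypeScript narrows `error` so httpStatus is accessible
// without casts — the point of replacing Object.assign'd properties.
function describe(error: unknown): string {
  return isExecutionStreamHttpError(error)
    ? `HTTP ${error.httpStatus}: ${error.message}`
    : 'non-HTTP failure'
}
```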
/**
* Detects errors caused by the browser killing a fetch (page refresh, navigation, tab close).
* These should be treated as clean disconnects, not execution errors.
@@ -205,11 +219,13 @@ export function useExecutionStream() {
if (!response.ok) {
const errorResponse = await response.json()
const error = new Error(errorResponse.error || 'Failed to start execution')
const error = new ExecutionStreamHttpError(
errorResponse.error || 'Failed to start execution',
response.status
)
if (errorResponse && typeof errorResponse === 'object') {
Object.assign(error, { executionResult: errorResponse })
}
Object.assign(error, { httpStatus: response.status })
throw error
}
@@ -279,15 +295,18 @@ export function useExecutionStream() {
try {
errorResponse = await response.json()
} catch {
- const error = new Error(`Server error (${response.status}): ${response.statusText}`)
- Object.assign(error, { httpStatus: response.status })
- throw error
+ throw new ExecutionStreamHttpError(
+ `Server error (${response.status}): ${response.statusText}`,
+ response.status
+ )
}
- const error = new Error(errorResponse.error || 'Failed to start execution')
+ const error = new ExecutionStreamHttpError(
+ errorResponse.error || 'Failed to start execution',
+ response.status
+ )
if (errorResponse && typeof errorResponse === 'object') {
Object.assign(error, { executionResult: errorResponse })
}
- Object.assign(error, { httpStatus: response.status })
throw error
}
@@ -335,7 +354,9 @@ export function useExecutionStream() {
`/api/workflows/${workflowId}/executions/${executionId}/stream?from=${fromEventId}`,
{ signal: abortController.signal }
)
- if (!response.ok) throw new Error(`Reconnect failed (${response.status})`)
+ if (!response.ok) {
+ throw new ExecutionStreamHttpError(`Reconnect failed (${response.status})`, response.status)
+ }
if (!response.body) throw new Error('No response body')
await processSSEStream(response.body.getReader(), callbacks, 'Reconnect')
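The new error class carries the status as a typed field, so callers can branch on it without the old `Object.assign` side channel. A self-contained sketch of how a caller might use the guard (`describeFailure` is illustrative, not from the diff):

```typescript
// The class and guard mirror the diff; the setPrototypeOf call is an
// addition for this sketch to keep instanceof working on downlevel targets.
class ExecutionStreamHttpError extends Error {
  constructor(
    message: string,
    public readonly httpStatus: number
  ) {
    super(message)
    this.name = 'ExecutionStreamHttpError'
    Object.setPrototypeOf(this, ExecutionStreamHttpError.prototype)
  }
}

function isExecutionStreamHttpError(error: unknown): error is ExecutionStreamHttpError {
  return error instanceof ExecutionStreamHttpError
}

// Typed status access replaces the old Object.assign(error, { httpStatus }) pattern.
// `describeFailure` is a hypothetical caller, not part of the diff.
function describeFailure(error: unknown): string {
  if (isExecutionStreamHttpError(error)) {
    return error.httpStatus === 401 ? 'auth-expired' : `http-${error.httpStatus}`
  }
  return 'non-http'
}
```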

View File

@@ -50,12 +50,9 @@ describe('handleTaskStatusEvent', () => {
})
})
- it('preserves list invalidation when task event payload is invalid', () => {
+ it('does not invalidate when task event payload is invalid', () => {
handleTaskStatusEvent(queryClient, 'ws-1', '{')
- expect(queryClient.invalidateQueries).toHaveBeenCalledTimes(1)
- expect(queryClient.invalidateQueries).toHaveBeenCalledWith({
- queryKey: taskKeys.list('ws-1'),
- })
+ expect(queryClient.invalidateQueries).not.toHaveBeenCalled()
})
})

View File

@@ -41,13 +41,13 @@ export function handleTaskStatusEvent(
workspaceId: string,
data: unknown
): void {
- queryClient.invalidateQueries({ queryKey: taskKeys.list(workspaceId) })
const payload = parseTaskStatusEventPayload(data)
if (!payload) {
logger.warn('Received invalid task_status payload')
return
}
+ queryClient.invalidateQueries({ queryKey: taskKeys.list(workspaceId) })
}
/**

View File

@@ -4,6 +4,7 @@
* This is the main entry point for OpenTelemetry instrumentation.
* It delegates to runtime-specific instrumentation modules.
*/
export async function register() {
// Load Node.js-specific instrumentation
if (process.env.NEXT_RUNTIME === 'nodejs') {

View File

@@ -7,6 +7,55 @@ const logger = createLogger('BatchDelete')
export const DEFAULT_BATCH_SIZE = 2000
export const DEFAULT_MAX_BATCHES_PER_TABLE = 10
/**
* Split workspaceIds into groups of this size before running SELECT/DELETE. Large
* IN lists combined with `started_at < X` force Postgres to probe every
* workspace range in the composite index, which blows the 90s statement timeout
* at the scale of the full free tier.
*/
export const DEFAULT_WORKSPACE_CHUNK_SIZE = 50
export function chunkArray<T>(arr: T[], size: number): T[][] {
const out: T[][] = []
for (let i = 0; i < arr.length; i += size) out.push(arr.slice(i, i + size))
return out
}
export interface SelectByIdChunksOptions {
/** Cap on rows returned across all chunks. Defaults to a full per-table cleanup budget. */
overallLimit?: number
chunkSize?: number
}
/**
* Run a SELECT query once per ID chunk and concatenate results up to
* `overallLimit`. Each chunk's query is passed the remaining row budget so the
* total never exceeds the cap. Use this when you need the selected row set
* (e.g. to drive S3 or copilot-backend cleanup alongside the DB delete).
*
* Works for any large ID set — workspace IDs, workflow IDs, etc. Avoids
* sending one massive `IN (...)` list that would blow Postgres's statement
* timeout.
*/
export async function selectRowsByIdChunks<T>(
ids: string[],
query: (chunkIds: string[], chunkLimit: number) => Promise<T[]>,
{
overallLimit = DEFAULT_BATCH_SIZE * DEFAULT_MAX_BATCHES_PER_TABLE,
chunkSize = DEFAULT_WORKSPACE_CHUNK_SIZE,
}: SelectByIdChunksOptions = {}
): Promise<T[]> {
if (ids.length === 0) return []
const rows: T[] = []
for (const chunkIds of chunkArray(ids, chunkSize)) {
if (rows.length >= overallLimit) break
const remaining = overallLimit - rows.length
const chunkRows = await query(chunkIds, remaining)
rows.push(...chunkRows)
}
return rows
}
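The chunking plus row-budget behavior above can be exercised without a database. A synchronous sketch (the real `selectRowsByIdChunks` is async and takes a Drizzle query; here the query is a plain function):

```typescript
// Simplified, synchronous model of the helpers above: chunk the IDs,
// pass each query the remaining row budget, stop at the overall cap.
function chunkArray<T>(arr: T[], size: number): T[][] {
  const out: T[][] = []
  for (let i = 0; i < arr.length; i += size) out.push(arr.slice(i, i + size))
  return out
}

function selectRowsByIdChunksSync<T>(
  ids: string[],
  query: (chunkIds: string[], chunkLimit: number) => T[],
  { overallLimit = 20_000, chunkSize = 50 } = {}
): T[] {
  if (ids.length === 0) return []
  const rows: T[] = []
  for (const chunkIds of chunkArray(ids, chunkSize)) {
    if (rows.length >= overallLimit) break
    rows.push(...query(chunkIds, overallLimit - rows.length))
  }
  return rows
}

// With 5 IDs, chunkSize 2, and a budget of 3, the second chunk is clamped
// to 1 row and the third chunk never runs.
const picked = selectRowsByIdChunksSync(
  ['a', 'b', 'c', 'd', 'e'],
  (chunk, limit) => chunk.slice(0, limit),
  { overallLimit: 3, chunkSize: 2 }
)
```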
export interface TableCleanupResult {
table: string
@@ -14,6 +63,111 @@ export interface TableCleanupResult {
failed: number
}
export interface ChunkedBatchDeleteOptions<TRow extends { id: string }> {
tableDef: PgTable
workspaceIds: string[]
tableName: string
/** SELECT eligible rows for one workspace chunk. The result must include `id`. */
selectChunk: (chunkIds: string[], limit: number) => Promise<TRow[]>
/** Runs between SELECT and DELETE; receives the just-selected rows. */
onBatch?: (rows: TRow[]) => Promise<void>
batchSize?: number
/** Max batches per workspace chunk. */
maxBatches?: number
/**
* Hard cap on rows processed (deleted + failed) across all chunks per call.
* Defaults to `DEFAULT_BATCH_SIZE * DEFAULT_MAX_BATCHES_PER_TABLE`. Cron
* runs frequently enough to catch up the backlog over multiple invocations.
*/
totalRowLimit?: number
workspaceChunkSize?: number
}
/**
* Inner loop primitive for cleanup jobs.
*
* For each workspace chunk: SELECT a batch of eligible rows → run optional
* `onBatch` hook (e.g. to delete S3 files) → DELETE those rows by ID. Repeats
* until exhausted or `maxBatches` is hit, then moves to the next chunk. Stops
* the whole call once `totalRowLimit` rows have been processed.
*
* Workspace IDs are chunked before the SELECT — see
* `DEFAULT_WORKSPACE_CHUNK_SIZE` for why.
*/
export async function chunkedBatchDelete<TRow extends { id: string }>({
tableDef,
workspaceIds,
tableName,
selectChunk,
onBatch,
batchSize = DEFAULT_BATCH_SIZE,
maxBatches = DEFAULT_MAX_BATCHES_PER_TABLE,
totalRowLimit = DEFAULT_BATCH_SIZE * DEFAULT_MAX_BATCHES_PER_TABLE,
workspaceChunkSize = DEFAULT_WORKSPACE_CHUNK_SIZE,
}: ChunkedBatchDeleteOptions<TRow>): Promise<TableCleanupResult> {
const result: TableCleanupResult = { table: tableName, deleted: 0, failed: 0 }
if (workspaceIds.length === 0) {
logger.info(`[${tableName}] Skipped — no workspaces in scope`)
return result
}
const chunks = chunkArray(workspaceIds, workspaceChunkSize)
let stoppedEarly = false
for (const [chunkIdx, chunkIds] of chunks.entries()) {
if (result.deleted + result.failed >= totalRowLimit) {
stoppedEarly = true
break
}
let batchesProcessed = 0
let hasMore = true
while (
hasMore &&
batchesProcessed < maxBatches &&
result.deleted + result.failed < totalRowLimit
) {
let rows: TRow[] = []
try {
rows = await selectChunk(chunkIds, batchSize)
if (rows.length === 0) {
hasMore = false
break
}
if (onBatch) await onBatch(rows)
const ids = rows.map((r) => r.id)
const deleted = await db
.delete(tableDef)
.where(inArray(sql`id`, ids))
.returning({ id: sql`id` })
result.deleted += deleted.length
hasMore = rows.length === batchSize
batchesProcessed++
} catch (error) {
// Count rows we tried to delete; SELECT-stage errors leave rows=[].
result.failed += rows.length
logger.error(
`[${tableName}] Batch failed (chunk ${chunkIdx + 1}/${chunks.length}, ${rows.length} rows):`,
{ error }
)
hasMore = false
}
}
}
logger.info(
`[${tableName}] Complete: ${result.deleted} deleted, ${result.failed} failed across ${chunks.length} chunks${stoppedEarly ? ' (row-limit reached, remaining chunks deferred to next run)' : ''}`
)
return result
}
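The three caps interact: `batchSize` bounds each round, `maxBatches` bounds each workspace chunk, and `totalRowLimit` bounds the whole call. An in-memory simulation of that budget accounting (no Drizzle, no Postgres; `deleted += batch.length` stands in for the real `DELETE ... RETURNING id`):

```typescript
// In-memory simulation of the loop's budget accounting: each chunk is
// drained in batches until exhausted, maxBatches is hit, or the overall
// totalRowLimit is reached. Like the real loop, the limit is checked
// before each batch, so a final batch may overshoot by up to batchSize-1.
interface SimRow { id: string }

function simulateCleanup(opts: {
  chunks: SimRow[][]
  batchSize: number
  maxBatches: number
  totalRowLimit: number
}): { deleted: number } {
  let deleted = 0
  for (const chunk of opts.chunks) {
    if (deleted >= opts.totalRowLimit) break
    let remaining = [...chunk]
    let batches = 0
    while (remaining.length > 0 && batches < opts.maxBatches && deleted < opts.totalRowLimit) {
      const batch = remaining.slice(0, opts.batchSize)
      remaining = remaining.slice(batch.length)
      deleted += batch.length // stands in for DELETE ... RETURNING id
      batches++
    }
  }
  return { deleted }
}

const mkRows = (n: number, p: string): SimRow[] =>
  Array.from({ length: n }, (_, i) => ({ id: `${p}${i}` }))
```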
export interface BatchDeleteOptions {
tableDef: PgTable
workspaceIdCol: PgColumn
@@ -25,13 +179,13 @@ export interface BatchDeleteOptions {
requireTimestampNotNull?: boolean
batchSize?: number
maxBatches?: number
workspaceChunkSize?: number
}
/**
- * Iteratively delete rows in a table matching a workspace + time-based predicate.
- *
- * Uses a SELECT-with-LIMIT → DELETE-by-ID pattern to keep each round bounded in
- * memory and I/O (PostgreSQL DELETE does not support LIMIT directly).
+ * Convenience wrapper around `chunkedBatchDelete` for the common case: delete
+ * rows where `workspaceId IN (...) AND timestamp < retentionDate`. Use this
+ * when there's no per-row side effect (e.g. no S3 files to clean up alongside).
*/
export async function batchDeleteByWorkspaceAndTimestamp({
tableDef,
@@ -41,56 +195,23 @@ export async function batchDeleteByWorkspaceAndTimestamp({
retentionDate,
tableName,
requireTimestampNotNull = false,
- batchSize = DEFAULT_BATCH_SIZE,
- maxBatches = DEFAULT_MAX_BATCHES_PER_TABLE,
+ ...rest
}: BatchDeleteOptions): Promise<TableCleanupResult> {
- const result: TableCleanupResult = { table: tableName, deleted: 0, failed: 0 }
- if (workspaceIds.length === 0) {
- logger.info(`[${tableName}] Skipped — no workspaces in scope`)
- return result
- }
- const predicates = [inArray(workspaceIdCol, workspaceIds), lt(timestampCol, retentionDate)]
- if (requireTimestampNotNull) predicates.push(isNotNull(timestampCol))
- const whereClause = and(...predicates)
- let batchesProcessed = 0
- let hasMore = true
- while (hasMore && batchesProcessed < maxBatches) {
- try {
- const batch = await db
+ return chunkedBatchDelete({
+ tableDef,
+ workspaceIds,
+ tableName,
+ selectChunk: (chunkIds, limit) => {
+ const predicates = [inArray(workspaceIdCol, chunkIds), lt(timestampCol, retentionDate)]
+ if (requireTimestampNotNull) predicates.push(isNotNull(timestampCol))
+ return db
.select({ id: sql<string>`id` })
.from(tableDef)
- .where(whereClause)
- .limit(batchSize)
- if (batch.length === 0) {
- logger.info(`[${tableName}] No expired rows found`)
- hasMore = false
- break
- }
- const ids = batch.map((r) => r.id)
- const deleted = await db
- .delete(tableDef)
- .where(inArray(sql`id`, ids))
- .returning({ id: sql`id` })
- result.deleted += deleted.length
- hasMore = batch.length === batchSize
- batchesProcessed++
- logger.info(`[${tableName}] Batch ${batchesProcessed}: deleted ${deleted.length} rows`)
- } catch (error) {
- result.failed++
- logger.error(`[${tableName}] Batch delete failed:`, { error })
- hasMore = false
- }
- }
- return result
+ .where(and(...predicates))
+ .limit(limit)
+ },
+ ...rest,
+ })
}
/**

View File

@@ -46,7 +46,11 @@ function toToolCallInfo(block: PersistedContentBlock): ToolCallInfo | undefined
function toDisplayBlock(block: PersistedContentBlock): ContentBlock | undefined {
const displayed = toDisplayBlockBody(block)
- return displayed ? withBlockTiming(displayed, block) : undefined
+ if (!displayed) return undefined
+ if (block.parentToolCallId && displayed.parentToolCallId === undefined) {
+ displayed.parentToolCallId = block.parentToolCallId
+ }
+ return withBlockTiming(displayed, block)
}
function toDisplayBlockBody(block: PersistedContentBlock): ContentBlock | undefined {

View File

@@ -77,11 +77,16 @@ function appendTextBlock(
content: string,
options: {
lane?: 'subagent'
parentToolCallId?: string
}
): void {
if (!content) return
const last = blocks[blocks.length - 1]
- if (last?.type === MothershipStreamV1EventType.text && last.lane === options.lane) {
+ if (
+ last?.type === MothershipStreamV1EventType.text &&
+ last.lane === options.lane &&
+ last.parentToolCallId === options.parentToolCallId
+ ) {
last.content = `${typeof last.content === 'string' ? last.content : ''}${content}`
return
}
@@ -89,6 +94,7 @@ function appendTextBlock(
blocks.push({
type: MothershipStreamV1EventType.text,
...(options.lane ? { lane: options.lane } : {}),
...(options.parentToolCallId ? { parentToolCallId: options.parentToolCallId } : {}),
content,
})
}
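The tightened merge condition above means consecutive text chunks coalesce only when both `lane` and `parentToolCallId` match, so one subagent's text no longer bleeds into a sibling's block. A standalone model of that rule (types are simplified stand-ins for the real `ContentBlock`):

```typescript
// Standalone model of the merge rule: append to the last block only when
// type, lane, and parentToolCallId all match; otherwise start a new block.
interface TextBlock {
  type: 'text'
  content: string
  lane?: 'subagent'
  parentToolCallId?: string
}

function appendText(
  blocks: TextBlock[],
  content: string,
  options: { lane?: 'subagent'; parentToolCallId?: string } = {}
): void {
  if (!content) return
  const last = blocks[blocks.length - 1]
  if (
    last?.type === 'text' &&
    last.lane === options.lane &&
    last.parentToolCallId === options.parentToolCallId
  ) {
    last.content += content
    return
  }
  blocks.push({ type: 'text', content, ...options })
}
```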
@@ -122,10 +128,24 @@ function buildLiveAssistantMessage(params: {
return activeSubagent
}
const resolveParentForSubagentBlock = (
subagent: string | undefined,
scopedParent: string | undefined
): string | undefined => {
if (!subagent) return undefined
if (scopedParent) return scopedParent
if (activeSubagent === subagent) return activeSubagentParentToolCallId
for (const [parent, name] of subagentByParentToolCallId) {
if (name === subagent) return parent
}
return undefined
}
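The resolver above tries three sources in order: an explicitly scoped parent, the active subagent's recorded parent, then a reverse scan of the parent-tool-call to subagent-name map. A hedged sketch with the closed-over state flattened into parameters (names here are simplified stand-ins):

```typescript
// Sketch of the lookup order: scoped parent wins, then the active
// subagent's parent, then a reverse lookup in the parent → name map;
// undefined when the block isn't subagent-scoped at all.
function resolveParent(
  subagent: string | undefined,
  scopedParent: string | undefined,
  active: { name?: string; parent?: string },
  subagentByParent: Map<string, string>
): string | undefined {
  if (!subagent) return undefined // not a subagent-lane block
  if (scopedParent) return scopedParent
  if (active.name === subagent) return active.parent
  for (const [parent, name] of subagentByParent) {
    if (name === subagent) return parent
  }
  return undefined
}
```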
const ensureToolBlock = (input: {
toolCallId: string
toolName: string
calledBy?: string
parentToolCallId?: string
displayTitle?: string
params?: Record<string, unknown>
result?: { success: boolean; output?: unknown; error?: string }
@@ -155,6 +175,7 @@ function buildLiveAssistantMessage(params: {
? { display: existingToolCall.display }
: {}),
}
if (input.parentToolCallId) existing.parentToolCallId = input.parentToolCallId
return existing
}
@@ -176,6 +197,7 @@ function buildLiveAssistantMessage(params: {
}
: {}),
},
...(input.parentToolCallId ? { parentToolCallId: input.parentToolCallId } : {}),
}
toolIndexById.set(input.toolCallId, blocks.length)
blocks.push(nextBlock)
@@ -219,8 +241,10 @@ function buildLiveAssistantMessage(params: {
runningText.length > 0 &&
!runningText.endsWith('\n')
const normalizedChunk = needsBoundaryNewline ? `\n${chunk}` : chunk
const parentForBlock = resolveParentForSubagentBlock(scopedSubagent, scopedParentToolCallId)
appendTextBlock(blocks, normalizedChunk, {
...(scopedSubagent ? { lane: 'subagent' as const } : {}),
...(parentForBlock ? { parentToolCallId: parentForBlock } : {}),
})
runningText += normalizedChunk
lastContentSource = contentSource
@@ -239,11 +263,14 @@ function buildLiveAssistantMessage(params: {
continue
}
const parentForBlock = resolveParentForSubagentBlock(scopedSubagent, scopedParentToolCallId)
if (payload.phase === MothershipStreamV1ToolPhase.result) {
ensureToolBlock({
toolCallId,
toolName: payload.toolName,
calledBy: scopedSubagent,
...(parentForBlock ? { parentToolCallId: parentForBlock } : {}),
state: resolveStreamToolOutcome(payload),
result: {
success: payload.success,
@@ -258,6 +285,7 @@ function buildLiveAssistantMessage(params: {
toolCallId,
toolName: payload.toolName,
calledBy: scopedSubagent,
...(parentForBlock ? { parentToolCallId: parentForBlock } : {}),
displayTitle,
params: isRecord(payload.arguments) ? payload.arguments : undefined,
state: typeof payload.status === 'string' ? payload.status : 'executing',
@@ -270,9 +298,13 @@ function buildLiveAssistantMessage(params: {
}
const spanData = asPayloadRecord(parsed.payload.data)
- const parentToolCallId =
- scopedParentToolCallId ??
- (typeof spanData?.tool_call_id === 'string' ? spanData.tool_call_id : undefined)
+ const parentToolCallIdFromData =
+ typeof spanData?.tool_call_id === 'string'
+ ? spanData.tool_call_id
+ : typeof spanData?.toolCallId === 'string'
+ ? spanData.toolCallId
+ : undefined
+ const parentToolCallId = scopedParentToolCallId ?? parentToolCallIdFromData
const name = typeof parsed.payload.agent === 'string' ? parsed.payload.agent : scopedAgentId
if (parsed.payload.event === MothershipStreamV1SpanLifecycleEvent.start && name) {
if (parentToolCallId) {
@@ -285,6 +317,7 @@ function buildLiveAssistantMessage(params: {
kind: MothershipStreamV1SpanPayloadKind.subagent,
lifecycle: MothershipStreamV1SpanLifecycleEvent.start,
content: name,
...(parentToolCallId ? { parentToolCallId } : {}),
})
continue
}
@@ -308,6 +341,7 @@ function buildLiveAssistantMessage(params: {
type: MothershipStreamV1EventType.span,
kind: MothershipStreamV1SpanPayloadKind.subagent,
lifecycle: MothershipStreamV1SpanLifecycleEvent.end,
...(parentToolCallId ? { parentToolCallId } : {}),
})
}
continue
@@ -343,8 +377,10 @@ function buildLiveAssistantMessage(params: {
}
const prefix = runningText.length > 0 && !runningText.endsWith('\n') ? '\n' : ''
const content = `${prefix}${tag}`
const errorParent = resolveParentForSubagentBlock(scopedSubagent, scopedParentToolCallId)
appendTextBlock(blocks, content, {
...(scopedSubagent ? { lane: 'subagent' as const } : {}),
...(errorParent ? { parentToolCallId: errorParent } : {}),
})
runningText += content
continue

View File

@@ -41,6 +41,7 @@ export interface PersistedContentBlock {
toolCall?: PersistedToolCall
timestamp?: number
endedAt?: number
parentToolCallId?: string
}
export interface PersistedFileAttachment {
@@ -101,9 +102,16 @@ export function withBlockTiming<T>(target: T, src: { timestamp?: number; endedAt
return target
}
function withBlockParent<T>(target: T, src: { parentToolCallId?: string }): T {
if (src.parentToolCallId) {
;(target as { parentToolCallId?: string }).parentToolCallId = src.parentToolCallId
}
return target
}
function mapContentBlock(block: ContentBlock): PersistedContentBlock {
const persisted = mapContentBlockBody(block)
- return withBlockTiming(persisted, block)
+ return withBlockParent(withBlockTiming(persisted, block), block)
}
function mapContentBlockBody(block: ContentBlock): PersistedContentBlock {
@@ -265,6 +273,7 @@ interface RawBlock {
status?: string
timestamp?: number
endedAt?: number
parentToolCallId?: string
toolCall?: {
id?: string
name?: string
@@ -321,6 +330,7 @@ function normalizeCanonicalBlock(block: RawBlock): PersistedContentBlock {
if (block.kind) result.kind = block.kind as MothershipStreamV1SpanPayloadKind
if (block.lifecycle) result.lifecycle = block.lifecycle as MothershipStreamV1SpanLifecycleEvent
if (block.status) result.status = block.status as MothershipStreamV1CompletionStatus
if (block.parentToolCallId) result.parentToolCallId = block.parentToolCallId
if (block.toolCall) {
result.toolCall = {
id: block.toolCall.id ?? '',
@@ -438,6 +448,9 @@ function normalizeBlock(block: RawBlock): PersistedContentBlock {
if (typeof block.endedAt === 'number' && result.endedAt === undefined) {
result.endedAt = block.endedAt
}
if (block.parentToolCallId && result.parentToolCallId === undefined) {
result.parentToolCallId = block.parentToolCallId
}
return result
}

View File

@@ -10,27 +10,15 @@ export const SIM_AGENT_API_URL =
? rawAgentUrl
: SIM_AGENT_API_URL_DEFAULT
- // ---------------------------------------------------------------------------
- // Timeouts
- // ---------------------------------------------------------------------------
/** Default timeout for the copilot orchestration stream loop (60 min). */
export const ORCHESTRATION_TIMEOUT_MS = 3_600_000
/** Timeout for the client-side streaming response handler (60 min). */
export const STREAM_TIMEOUT_MS = 3_600_000
- // ---------------------------------------------------------------------------
- // Stream resume
- // ---------------------------------------------------------------------------
/** SessionStorage key for persisting active stream metadata across page reloads. */
export const STREAM_STORAGE_KEY = 'copilot_active_stream'
- // ---------------------------------------------------------------------------
- // Copilot API paths (client-side fetch targets)
- // ---------------------------------------------------------------------------
/** POST — send a chat message through the unified mothership chat surface. */
export const MOTHERSHIP_CHAT_API_PATH = '/api/mothership/chat'
@@ -39,18 +27,9 @@ export const COPILOT_CONFIRM_API_PATH = '/api/copilot/confirm'
/** POST — forward diff-accepted/rejected stats to the copilot backend. */
export const COPILOT_STATS_API_PATH = '/api/copilot/stats'
- // ---------------------------------------------------------------------------
- // Dedup limits
- // ---------------------------------------------------------------------------
/** Maximum entries in the in-memory SSE tool-event dedup cache. */
export const STREAM_BUFFER_MAX_DEDUP_ENTRIES = 1_000
- // ---------------------------------------------------------------------------
- // Tool result size limits
- // ---------------------------------------------------------------------------
/** Approximate max inline tool-result budget before artifact/error handling takes over. */
export const TOOL_RESULT_MAX_INLINE_TOKENS = 50_000
@@ -61,10 +40,6 @@ export const TOOL_RESULT_ESTIMATED_CHARS_PER_TOKEN = 4
export const TOOL_RESULT_MAX_INLINE_CHARS =
TOOL_RESULT_MAX_INLINE_TOKENS * TOOL_RESULT_ESTIMATED_CHARS_PER_TOKEN
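The char budget is derived rather than configured: the token cap times the estimated chars-per-token. A small sketch of the arithmetic plus a hypothetical guard a caller might apply (the guard function is illustrative, not part of the diff):

```typescript
// The derived budget: 50,000 tokens × 4 chars/token = 200,000 chars.
const TOOL_RESULT_MAX_INLINE_TOKENS = 50_000
const TOOL_RESULT_ESTIMATED_CHARS_PER_TOKEN = 4
const TOOL_RESULT_MAX_INLINE_CHARS =
  TOOL_RESULT_MAX_INLINE_TOKENS * TOOL_RESULT_ESTIMATED_CHARS_PER_TOKEN

// Hypothetical guard before inlining a tool result; not part of the diff.
function exceedsInlineBudget(result: string): boolean {
  return result.length > TOOL_RESULT_MAX_INLINE_CHARS
}
```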
- // ---------------------------------------------------------------------------
- // Copilot modes
- // ---------------------------------------------------------------------------
export const COPILOT_MODES = ['ask', 'build', 'plan'] as const
export const COPILOT_REQUEST_MODES = ['ask', 'build', 'plan', 'agent'] as const

View File

@@ -361,28 +361,29 @@ export async function runStreamLoop(
flushSubagentThinkingBlock(context)
flushThinkingBlock(context)
if (spanEvt === MothershipStreamV1SpanLifecycleEvent.start) {
- const lastParent = context.subAgentParentStack[context.subAgentParentStack.length - 1]
- const lastBlock = context.contentBlocks[context.contentBlocks.length - 1]
if (toolCallId) {
- if (lastParent !== toolCallId) {
+ if (!context.subAgentParentStack.includes(toolCallId)) {
context.subAgentParentStack.push(toolCallId)
}
context.subAgentParentToolCallId = toolCallId
context.subAgentContent[toolCallId] ??= ''
context.subAgentToolCalls[toolCallId] ??= []
}
- if (
- subagentName &&
- !(
- lastParent === toolCallId &&
- lastBlock?.type === 'subagent' &&
- lastBlock.content === subagentName
- )
- ) {
- context.contentBlocks.push({
- type: 'subagent',
- content: subagentName,
- timestamp: Date.now(),
+ if (toolCallId && subagentName) {
+ const openParents = (context.openSubagentParents ??= new Set<string>())
+ if (!openParents.has(toolCallId)) {
+ openParents.add(toolCallId)
+ context.contentBlocks.push({
+ type: 'subagent',
+ content: subagentName,
+ parentToolCallId: toolCallId,
+ timestamp: Date.now(),
+ })
+ }
+ } else {
+ logger.warn('subagent start missing toolCallId or agent name', {
+ hasToolCallId: Boolean(toolCallId),
+ hasSubagentName: Boolean(subagentName),
+ })
}
return
@@ -391,27 +392,33 @@ export async function runStreamLoop(
if (isPendingPause) {
return
}
- if (context.subAgentParentStack.length > 0) {
- context.subAgentParentStack.pop()
+ if (toolCallId) {
+ const idx = context.subAgentParentStack.lastIndexOf(toolCallId)
+ if (idx >= 0) {
+ context.subAgentParentStack.splice(idx, 1)
+ } else {
+ logger.warn('subagent end without matching start', { toolCallId })
+ }
} else {
- logger.warn('subagent end without matching start')
+ logger.warn('subagent end missing toolCallId')
}
context.subAgentParentToolCallId =
context.subAgentParentStack.length > 0
? context.subAgentParentStack[context.subAgentParentStack.length - 1]
: undefined
- if (subagentName) {
+ if (toolCallId) {
for (let i = context.contentBlocks.length - 1; i >= 0; i--) {
const b = context.contentBlocks[i]
if (
b.type === 'subagent' &&
b.content === subagentName &&
- b.endedAt === undefined
+ b.endedAt === undefined &&
+ b.parentToolCallId === toolCallId
) {
b.endedAt = Date.now()
break
}
}
+ context.openSubagentParents?.delete(toolCallId)
}
return
}

View File

@@ -22,10 +22,17 @@ export function handleTextEvent(scope: ToolScope): StreamHandler {
const parentToolCallId = getScopedParentToolCallId(event, context)
if (!parentToolCallId) return
if (event.payload.channel === MothershipStreamV1TextChannel.thinking) {
if (
context.currentSubagentThinkingBlock &&
context.currentSubagentThinkingBlock.parentToolCallId !== parentToolCallId
) {
flushSubagentThinkingBlock(context)
}
if (!context.currentSubagentThinkingBlock) {
context.currentSubagentThinkingBlock = {
type: 'subagent_thinking',
content: '',
parentToolCallId,
timestamp: Date.now(),
}
}
@@ -40,7 +47,7 @@ export function handleTextEvent(scope: ToolScope): StreamHandler {
}
context.subAgentContent[parentToolCallId] =
(context.subAgentContent[parentToolCallId] || '') + chunk
- addContentBlock(context, { type: 'subagent_text', content: chunk })
+ addContentBlock(context, { type: 'subagent_text', content: chunk, parentToolCallId })
return
}

View File

@@ -340,6 +340,7 @@ function registerSubagentToolCall(
type: 'tool_call',
toolCall,
calledBy: parentToolCall?.name,
parentToolCallId,
})
}
}

View File

@@ -56,6 +56,7 @@ export interface ContentBlock {
calledBy?: string
timestamp: number
endedAt?: number
parentToolCallId?: string
}
export interface StreamingContext {
@@ -86,6 +87,7 @@ export interface StreamingContext {
subAgentParentStack: string[]
subAgentContent: Record<string, string>
subAgentToolCalls: Record<string, ToolCallState[]>
openSubagentParents?: Set<string>
pendingContent: string
streamComplete: boolean
wasAborted: boolean
@@ -136,31 +138,12 @@ export interface OrchestratorOptions {
onComplete?: (result: OrchestratorResult) => void | Promise<void>
onError?: (error: Error) => void | Promise<void>
abortSignal?: AbortSignal
- /**
- * Invoked when the orchestrator infers that the run was aborted via
- * an out-of-band signal (currently: a Redis abort marker observed
- * at SSE body close). Callers wire this to fire their local
- * `AbortController` so `signal.reason` is set and `recordCancelled`
- * classifies as `explicit_stop` rather than `unknown`.
- */
onAbortObserved?: (reason: string) => void
interactive?: boolean
}
export interface OrchestratorResult {
success: boolean
- /**
- * True iff the non-success outcome was a user-initiated cancel
- * (abort signal fired or client disconnected). Lets callers treat
- * cancels differently from actual errors — notably, `buildOnComplete`
- * must NOT finalize the chat row on cancel, because the browser's
- * `/api/copilot/chat/stop` POST owns writing the partial assistant
- * content and clearing `conversationId` in one UPDATE. Finalizing
- * here would race and clear `conversationId` first, making the stop
- * UPDATE match zero rows and the partial content vanish on refetch.
- *
- * Always false when `success=true`.
- */
cancelled?: boolean
content: string
contentBlocks: ContentBlock[]

View File

@@ -3,10 +3,11 @@ import { credential } from '@sim/db/schema'
import { toError } from '@sim/utils/errors'
import { eq } from 'drizzle-orm'
import type { ExecutionContext, ToolCallResult } from '@/lib/copilot/request/types'
import { getCredentialActorContext } from '@/lib/credentials/access'
export function executeManageCredential(
rawParams: Record<string, unknown>,
- _context: ExecutionContext
+ context: ExecutionContext
): Promise<ToolCallResult> {
const params = rawParams as {
operation: string
@@ -17,26 +18,30 @@ export function executeManageCredential(
const { operation, displayName } = params
return (async () => {
try {
+ if (!context?.userId) {
+ return { success: false, error: 'Authentication required' }
+ }
switch (operation) {
case 'rename': {
const credentialId = params.credentialId
if (!credentialId) return { success: false, error: 'credentialId is required for rename' }
if (!displayName) return { success: false, error: 'displayName is required for rename' }
- const [row] = await db
- .select({
- id: credential.id,
- type: credential.type,
- displayName: credential.displayName,
- })
- .from(credential)
- .where(eq(credential.id, credentialId))
- .limit(1)
- if (!row) return { success: false, error: 'Credential not found' }
- if (row.type !== 'oauth')
+ const actor = await getCredentialActorContext(credentialId, context.userId)
+ if (!actor.credential || !actor.hasWorkspaceAccess) {
+ return { success: false, error: 'Credential not found' }
+ }
+ if (actor.credential.type !== 'oauth') {
return {
success: false,
error: 'Only OAuth credentials can be managed with this tool.',
}
+ }
+ if (!actor.canWriteWorkspace && !actor.isAdmin) {
+ return { success: false, error: 'Write access required to rename this credential' }
+ }
await db
.update(credential)
.set({ displayName, updatedAt: new Date() })
@@ -53,12 +58,16 @@ export function executeManageCredential(
const failed: string[] = []
for (const id of ids) {
- const [row] = await db
- .select({ id: credential.id, type: credential.type })
- .from(credential)
- .where(eq(credential.id, id))
- .limit(1)
- if (!row || row.type !== 'oauth') {
+ const actor = await getCredentialActorContext(id, context.userId)
+ if (
+ !actor.credential ||
+ !actor.hasWorkspaceAccess ||
+ actor.credential.type !== 'oauth'
+ ) {
failed.push(id)
continue
}
+ if (!actor.canWriteWorkspace && !actor.isAdmin) {
+ failed.push(id)
+ continue
+ }

View File

@@ -1,6 +1,9 @@
import { db } from '@sim/db'
import { knowledgeBase } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { toError } from '@sim/utils/errors'
import { generateId } from '@sim/utils/id'
import { eq } from 'drizzle-orm'
import type { ExecutionContext, ToolCallResult } from '@/lib/copilot/request/types'
import { restoreKnowledgeBase } from '@/lib/knowledge/service'
import { getTableById, restoreTable } from '@/lib/table/service'
@@ -10,6 +13,8 @@ import {
} from '@/lib/uploads/contexts/workspace/workspace-file-manager'
import { restoreWorkflow } from '@/lib/workflows/lifecycle'
import { performRestoreFolder } from '@/lib/workflows/orchestration/folder-lifecycle'
import { getWorkflowById } from '@/lib/workflows/utils'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
const logger = createLogger('RestoreResource')
@@ -33,10 +38,25 @@ export async function executeRestoreResource(
}
const requestId = generateId().slice(0, 8)
const callerWorkspaceId = context.workspaceId
const hasWriteAccess = async (resourceWorkspaceId: string | null | undefined) => {
if (!resourceWorkspaceId || resourceWorkspaceId !== callerWorkspaceId) return false
const permission = await getUserEntityPermissions(
context.userId,
'workspace',
resourceWorkspaceId
)
return permission === 'write' || permission === 'admin'
}
try {
switch (type) {
case 'workflow': {
const existing = await getWorkflowById(id, { includeArchived: true })
if (!existing || !(await hasWriteAccess(existing.workspaceId))) {
return { success: false, error: 'Workflow not found' }
}
const result = await restoreWorkflow(id, { requestId })
if (!result.restored) {
return { success: false, error: 'Workflow not found or not archived' }
@@ -50,9 +70,13 @@ export async function executeRestoreResource(
}
case 'table': {
const existing = await getTableById(id, { includeArchived: true })
if (!existing || !(await hasWriteAccess(existing.workspaceId))) {
return { success: false, error: 'Table not found' }
}
await restoreTable(id, requestId)
const table = await getTableById(id)
- const tableName = table?.name || id
+ const tableName = table?.name || existing.name
logger.info('Table restored via copilot', { tableId: id, name: tableName })
return {
success: true,
@@ -62,6 +86,9 @@ export async function executeRestoreResource(
}
case 'file': {
if (!(await hasWriteAccess(context.workspaceId))) {
return { success: false, error: 'File not found' }
}
await restoreWorkspaceFile(context.workspaceId, id)
const fileRecord = await getWorkspaceFile(context.workspaceId, id)
const fileName = fileRecord?.name || id
@@ -74,6 +101,14 @@ export async function executeRestoreResource(
}
case 'knowledgebase': {
const [existing] = await db
.select({ workspaceId: knowledgeBase.workspaceId })
.from(knowledgeBase)
.where(eq(knowledgeBase.id, id))
.limit(1)
if (!existing || !(await hasWriteAccess(existing.workspaceId))) {
return { success: false, error: 'Knowledge base not found' }
}
await restoreKnowledgeBase(id, requestId)
logger.info('Knowledge base restored via copilot', { knowledgeBaseId: id })
return {
@@ -83,6 +118,9 @@ export async function executeRestoreResource(
}
case 'folder': {
if (!(await hasWriteAccess(context.workspaceId))) {
return { success: false, error: 'Folder not found' }
}
const result = await performRestoreFolder({
folderId: id,
workspaceId: context.workspaceId,

View File

@@ -28,6 +28,7 @@ import {
setWorkflowVariables,
updateFolderRecord,
updateWorkflowRecord,
verifyFolderWorkspace,
} from '@/lib/workflows/utils'
import { hasExecutionResult } from '@/executor/utils/errors'
import type { BlockState, WorkflowState } from '@/stores/workflows/workflow/types'
@@ -522,7 +523,13 @@ export async function executeMoveWorkflow(
for (const workflowId of workflowIds) {
try {
- await ensureWorkflowAccess(workflowId, context.userId, 'write')
+ const { workspaceId } = await ensureWorkflowAccess(workflowId, context.userId, 'write')
+ if (folderId) {
+ if (!workspaceId || !(await verifyFolderWorkspace(folderId, workspaceId))) {
+ failed.push(workflowId)
+ continue
+ }
+ }
assertWorkflowMutationNotAborted(context)
await updateWorkflowRecord(workflowId, { folderId })
moved.push(workflowId)
@@ -562,6 +569,14 @@ export async function executeMoveFolder(
const workspaceId = context.workspaceId || (await getDefaultWorkspaceId(context.userId))
await ensureWorkspaceAccess(workspaceId, context.userId, 'write')
if (!(await verifyFolderWorkspace(folderId, workspaceId))) {
return { success: false, error: 'Folder not found' }
}
if (parentId && !(await verifyFolderWorkspace(parentId, workspaceId))) {
return { success: false, error: 'Parent folder not found' }
}
assertWorkflowMutationNotAborted(context)
await updateFolderRecord(folderId, { parentId })
@@ -1007,6 +1022,11 @@ export async function executeRenameFolder(
const workspaceId = context.workspaceId || (await getDefaultWorkspaceId(context.userId))
await ensureWorkspaceAccess(workspaceId, context.userId, 'write')
if (!(await verifyFolderWorkspace(folderId, workspaceId))) {
return { success: false, error: 'Folder not found' }
}
assertWorkflowMutationNotAborted(context)
await updateFolderRecord(folderId, { name })

View File

@@ -105,11 +105,12 @@ export const getJobLogsServerTool: BaseServerTool<GetJobLogsArgs, JobLogEntry[]>
}
const wsId = workspaceId || context.workspaceId
if (wsId) {
const access = await checkWorkspaceAccess(wsId, context.userId)
if (!access.hasAccess) {
throw new Error('Unauthorized workspace access')
}
if (!wsId) {
throw new Error('Workspace context required')
}
const access = await checkWorkspaceAccess(wsId, context.userId)
if (!access.hasAccess) {
throw new Error('Unauthorized workspace access')
}
const clampedLimit = Math.min(Math.max(1, limit), 5)
@@ -121,7 +122,10 @@ export const getJobLogsServerTool: BaseServerTool<GetJobLogsArgs, JobLogEntry[]>
includeDetails,
})
const conditions = [eq(jobExecutionLogs.scheduleId, jobId)]
const conditions = [
eq(jobExecutionLogs.scheduleId, jobId),
eq(jobExecutionLogs.workspaceId, wsId),
]
if (executionId) {
conditions.push(eq(jobExecutionLogs.executionId, executionId))
}

View File

@@ -37,6 +37,11 @@ import {
import { StorageService } from '@/lib/uploads'
import { resolveWorkspaceFileReference } from '@/lib/uploads/contexts/workspace/workspace-file-manager'
import { getQueryStrategy, handleVectorOnlySearch } from '@/app/api/knowledge/search/utils'
import {
checkDocumentWriteAccess,
checkKnowledgeBaseAccess,
checkKnowledgeBaseWriteAccess,
} from '@/app/api/knowledge/utils'
const logger = createLogger('KnowledgeBaseServerTool')
@@ -141,6 +146,14 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
}
}
const access = await checkKnowledgeBaseAccess(args.knowledgeBaseId, context.userId)
if (!access.hasAccess) {
return {
success: false,
message: `Knowledge base with ID "${args.knowledgeBaseId}" not found`,
}
}
const knowledgeBase = await getKnowledgeBaseById(args.knowledgeBaseId)
if (!knowledgeBase) {
return {
@@ -187,6 +200,14 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
}
}
const access = await checkKnowledgeBaseAccess(args.knowledgeBaseId, context.userId)
if (!access.hasAccess) {
return {
success: false,
message: `Knowledge base with ID "${args.knowledgeBaseId}" not found`,
}
}
const kb = await getKnowledgeBaseById(args.knowledgeBaseId)
if (!kb) {
return {
@@ -257,6 +278,17 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
}
}
const writeAccess = await checkKnowledgeBaseWriteAccess(
args.knowledgeBaseId,
context.userId
)
if (!writeAccess.hasAccess) {
return {
success: false,
message: `Knowledge base with ID "${args.knowledgeBaseId}" not found`,
}
}
const targetKb = await getKnowledgeBaseById(args.knowledgeBaseId)
if (!targetKb || !targetKb.workspaceId) {
return {
@@ -363,6 +395,17 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
}
}
const writeAccess = await checkKnowledgeBaseWriteAccess(
args.knowledgeBaseId,
context.userId
)
if (!writeAccess.hasAccess) {
return {
success: false,
message: `Knowledge base with ID "${args.knowledgeBaseId}" not found`,
}
}
const requestId = generateId().slice(0, 8)
assertNotAborted()
const updatedKb = await updateKnowledgeBase(args.knowledgeBaseId, updates, requestId)
@@ -400,6 +443,12 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
const notFound: string[] = []
for (const kbId of kbIds) {
const writeAccess = await checkKnowledgeBaseWriteAccess(kbId, context.userId)
if (!writeAccess.hasAccess) {
notFound.push(kbId)
continue
}
const kbToDelete = await getKnowledgeBaseById(kbId)
if (!kbToDelete) {
notFound.push(kbId)
@@ -444,8 +493,17 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
const failed: string[] = []
for (const docId of docIds) {
const requestId = generateId().slice(0, 8)
assertNotAborted()
const docAccess = await checkDocumentWriteAccess(
args.knowledgeBaseId,
docId,
context.userId
)
if (!docAccess.hasAccess) {
failed.push(docId)
continue
}
const requestId = generateId().slice(0, 8)
const result = await deleteDocument(docId, requestId)
if (result.success) {
deleted.push(docId)
@@ -481,6 +539,17 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
message: 'At least one of filename or enabled is required for update_document',
}
}
const docAccess = await checkDocumentWriteAccess(
args.knowledgeBaseId,
args.documentId,
context.userId
)
if (!docAccess.hasAccess) {
return {
success: false,
message: `Document with ID "${args.documentId}" not found`,
}
}
const requestId = generateId().slice(0, 8)
assertNotAborted()
await updateDocument(args.documentId, updateData, requestId)
@@ -503,6 +572,14 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
}
}
const access = await checkKnowledgeBaseAccess(args.knowledgeBaseId, context.userId)
if (!access.hasAccess) {
return {
success: false,
message: `Knowledge base with ID "${args.knowledgeBaseId}" not found`,
}
}
const tagDefinitions = await getDocumentTagDefinitions(args.knowledgeBaseId)
logger.info('Tag definitions listed via copilot', {
@@ -537,6 +614,18 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
message: 'tagDisplayName is required for create_tag operation',
}
}
const writeAccess = await checkKnowledgeBaseWriteAccess(
args.knowledgeBaseId,
context.userId
)
if (!writeAccess.hasAccess) {
return {
success: false,
message: `Knowledge base with ID "${args.knowledgeBaseId}" not found`,
}
}
const fieldType = args.tagFieldType || 'text'
const tagSlot = await getNextAvailableSlot(args.knowledgeBaseId, fieldType)
@@ -606,6 +695,17 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
}
}
const writeAccess = await checkKnowledgeBaseWriteAccess(
existingTag.knowledgeBaseId,
context.userId
)
if (!writeAccess.hasAccess) {
return {
success: false,
message: `Tag definition with ID "${args.tagDefinitionId}" not found`,
}
}
const requestId = generateId().slice(0, 8)
assertNotAborted()
const updatedTag = await updateTagDefinition(args.tagDefinitionId, updateData, requestId)
@@ -643,6 +743,17 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
}
}
const writeAccess = await checkKnowledgeBaseWriteAccess(
args.knowledgeBaseId,
context.userId
)
if (!writeAccess.hasAccess) {
return {
success: false,
message: `Knowledge base with ID "${args.knowledgeBaseId}" not found`,
}
}
const requestId = generateId().slice(0, 8)
assertNotAborted()
const deleted = await deleteTagDefinition(
@@ -677,6 +788,14 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
}
}
const access = await checkKnowledgeBaseAccess(args.knowledgeBaseId, context.userId)
if (!access.hasAccess) {
return {
success: false,
message: `Knowledge base with ID "${args.knowledgeBaseId}" not found`,
}
}
const requestId = generateId().slice(0, 8)
const stats = await getTagUsageStats(args.knowledgeBaseId, requestId)
@@ -702,6 +821,17 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
}
}
const writeAccess = await checkKnowledgeBaseWriteAccess(
args.knowledgeBaseId,
context.userId
)
if (!writeAccess.hasAccess) {
return {
success: false,
message: `Knowledge base with ID "${args.knowledgeBaseId}" not found`,
}
}
const createBody: Record<string, unknown> = {
connectorType: args.connectorType,
sourceConfig: args.sourceConfig ?? {},
@@ -762,6 +892,11 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
return { success: false, message: `Connector "${args.connectorId}" not found` }
}
const writeAccess = await checkKnowledgeBaseWriteAccess(kbId, context.userId)
if (!writeAccess.hasAccess) {
return { success: false, message: `Connector "${args.connectorId}" not found` }
}
const updateBody: Record<string, unknown> = {}
if (args.sourceConfig !== undefined) updateBody.sourceConfig = args.sourceConfig
if (args.syncIntervalMinutes !== undefined)
@@ -810,6 +945,11 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
return { success: false, message: `Connector "${args.connectorId}" not found` }
}
const writeAccess = await checkKnowledgeBaseWriteAccess(deleteKbId, context.userId)
if (!writeAccess.hasAccess) {
return { success: false, message: `Connector "${args.connectorId}" not found` }
}
assertNotAborted()
const deleteRes = await connectorApiCall(
context.userId,
@@ -843,6 +983,11 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
return { success: false, message: `Connector "${args.connectorId}" not found` }
}
const writeAccess = await checkKnowledgeBaseWriteAccess(syncKbId, context.userId)
if (!writeAccess.hasAccess) {
return { success: false, message: `Connector "${args.connectorId}" not found` }
}
assertNotAborted()
const syncRes = await connectorApiCall(
context.userId,

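The hunks above repeat one shape: check access first, and on failure return the same "not found" message a genuinely missing resource would produce. A minimal sketch of that pattern follows; the wrapper name `withWriteAccess` and the result types are hypothetical, modeled on the diff rather than taken from the codebase:

```typescript
interface AccessResult {
  hasAccess: boolean
}

interface ToolResult {
  success: boolean
  message?: string
}

// Hypothetical wrapper illustrating the check-then-"not found" shape used above.
async function withWriteAccess(
  kbId: string,
  userId: string,
  check: (kbId: string, userId: string) => Promise<AccessResult>,
  run: () => Promise<ToolResult>
): Promise<ToolResult> {
  const access = await check(kbId, userId)
  if (!access.hasAccess) {
    // Same message as a missing resource, so callers cannot use the error
    // text to probe which knowledge-base IDs exist.
    return { success: false, message: `Knowledge base with ID "${kbId}" not found` }
  }
  return run()
}
```

Returning an identical message for both missing and forbidden resources avoids an existence oracle; the diff applies the same trick to documents, tag definitions, and connectors.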
View File

@@ -223,9 +223,12 @@ export const userTableServerTool: BaseServerTool<UserTableArgs, UserTableResult>
if (!args.tableId) {
return { success: false, message: 'Table ID is required' }
}
if (!workspaceId) {
return { success: false, message: 'Workspace ID is required' }
}
const table = await getTableById(args.tableId)
if (!table) {
if (!table || table.workspaceId !== workspaceId) {
return { success: false, message: `Table not found: ${args.tableId}` }
}
@@ -240,9 +243,12 @@ export const userTableServerTool: BaseServerTool<UserTableArgs, UserTableResult>
if (!args.tableId) {
return { success: false, message: 'Table ID is required' }
}
if (!workspaceId) {
return { success: false, message: 'Workspace ID is required' }
}
const table = await getTableById(args.tableId)
if (!table) {
if (!table || table.workspaceId !== workspaceId) {
return { success: false, message: `Table not found: ${args.tableId}` }
}
@@ -816,6 +822,9 @@ export const userTableServerTool: BaseServerTool<UserTableArgs, UserTableResult>
if (!args.tableId) {
return { success: false, message: 'Table ID is required' }
}
if (!workspaceId) {
return { success: false, message: 'Workspace ID is required' }
}
const col = (args as Record<string, unknown>).column as
| {
name: string
@@ -830,6 +839,10 @@ export const userTableServerTool: BaseServerTool<UserTableArgs, UserTableResult>
message: 'column with name and type is required for add_column',
}
}
const tableForAdd = await getTableById(args.tableId)
if (!tableForAdd || tableForAdd.workspaceId !== workspaceId) {
return { success: false, message: `Table not found: ${args.tableId}` }
}
const requestId = generateId().slice(0, 8)
assertNotAborted()
const updated = await addTableColumn(args.tableId, col, requestId)
@@ -844,11 +857,18 @@ export const userTableServerTool: BaseServerTool<UserTableArgs, UserTableResult>
if (!args.tableId) {
return { success: false, message: 'Table ID is required' }
}
if (!workspaceId) {
return { success: false, message: 'Workspace ID is required' }
}
const colName = (args as Record<string, unknown>).columnName as string | undefined
const newColName = (args as Record<string, unknown>).newName as string | undefined
if (!colName || !newColName) {
return { success: false, message: 'columnName and newName are required' }
}
const tableForRename = await getTableById(args.tableId)
if (!tableForRename || tableForRename.workspaceId !== workspaceId) {
return { success: false, message: `Table not found: ${args.tableId}` }
}
const requestId = generateId().slice(0, 8)
assertNotAborted()
const updated = await renameColumn(
@@ -866,12 +886,19 @@ export const userTableServerTool: BaseServerTool<UserTableArgs, UserTableResult>
if (!args.tableId) {
return { success: false, message: 'Table ID is required' }
}
if (!workspaceId) {
return { success: false, message: 'Workspace ID is required' }
}
const colName = (args as Record<string, unknown>).columnName as string | undefined
const colNames = (args as Record<string, unknown>).columnNames as string[] | undefined
const names = colNames ?? (colName ? [colName] : null)
if (!names || names.length === 0) {
return { success: false, message: 'columnName or columnNames is required' }
}
const tableForDelete = await getTableById(args.tableId)
if (!tableForDelete || tableForDelete.workspaceId !== workspaceId) {
return { success: false, message: `Table not found: ${args.tableId}` }
}
const requestId = generateId().slice(0, 8)
if (names.length === 1) {
assertNotAborted()
@@ -901,6 +928,9 @@ export const userTableServerTool: BaseServerTool<UserTableArgs, UserTableResult>
if (!args.tableId) {
return { success: false, message: 'Table ID is required' }
}
if (!workspaceId) {
return { success: false, message: 'Workspace ID is required' }
}
const colName = (args as Record<string, unknown>).columnName as string | undefined
if (!colName) {
return { success: false, message: 'columnName is required' }
@@ -913,6 +943,10 @@ export const userTableServerTool: BaseServerTool<UserTableArgs, UserTableResult>
message: 'At least one of newType or unique must be provided',
}
}
const tableForUpdate = await getTableById(args.tableId)
if (!tableForUpdate || tableForUpdate.workspaceId !== workspaceId) {
return { success: false, message: `Table not found: ${args.tableId}` }
}
const requestId = generateId().slice(0, 8)
let result: TableDefinition | undefined
if (newType !== undefined) {

View File

@@ -99,6 +99,15 @@ export interface IsolatedVMError {
line?: number
column?: number
lineContent?: string
/**
* True when the failure is host-infrastructure caused (worker crash, IPC
* failure, pool saturation, task misconfig) rather than anything the user's
* code did. Callers use this to keep genuine server failures as 5xx while
* translating user-caused failures (code errors, timeouts, aborts, per-owner
* rate limits) into 4xx. Defaults to undefined/false, so new error sites
* are treated as user-caused unless explicitly marked.
*/
isSystemError?: boolean
}
const POOL_SIZE = Number.parseInt(env.IVM_POOL_SIZE) || 4
@@ -838,7 +847,11 @@ function cleanupWorker(workerId: number) {
pending.resolve({
result: null,
stdout: '',
error: { message: 'Code execution failed unexpectedly. Please try again.', name: 'Error' },
error: {
message: 'Code execution failed unexpectedly. Please try again.',
name: 'Error',
isSystemError: true,
},
})
workerInfo.pendingExecutions.delete(id)
}
@@ -1125,7 +1138,11 @@ function dispatchToWorker(
resolve({
result: null,
stdout: '',
error: { message: 'Code execution failed to start. Please try again.', name: 'Error' },
error: {
message: 'Code execution failed to start. Please try again.',
name: 'Error',
isSystemError: true,
},
})
if (workerInfo.retiring && workerInfo.activeExecutions === 0) {
cleanupWorker(workerInfo.id)
@@ -1159,6 +1176,7 @@ function enqueueExecution(
error: {
message: 'Code execution is at capacity. Please try again in a moment.',
name: 'Error',
isSystemError: true,
},
})
return
@@ -1198,6 +1216,7 @@ function enqueueExecution(
error: {
message: 'Code execution timed out waiting for an available worker. Please try again.',
name: 'Error',
isSystemError: true,
},
})
}, QUEUE_TIMEOUT_MS)
@@ -1294,6 +1313,7 @@ export async function executeInIsolatedVM(
error: {
message: `Task "${req.task.id}" requires broker "${brokerName}" but none was provided`,
name: 'Error',
isSystemError: true,
},
}
}

View File

@@ -20,6 +20,24 @@ export interface RunSandboxTaskOptions {
signal?: AbortSignal
}
/**
* Thrown when the sandbox failure is attributable to the caller — user code
* errors (SyntaxError, ReferenceError, user-thrown exceptions), timeouts from
* user code, client aborts, or per-owner rate limits. Callers should translate
* this into a 4xx response so genuine 5xx remains a signal of server health.
*
* System-origin failures (worker crash, IPC failure, pool saturation, task
* misconfig) are tagged with `isSystemError` at the isolated-vm layer and
* surface as a plain `Error` → 500.
*/
export class SandboxUserCodeError extends Error {
constructor(message: string, name: string, stack?: string) {
super(message)
this.name = name || 'SandboxUserCodeError'
if (stack) this.stack = stack
}
}
/**
* Executes a sandbox task inside the shared isolated-vm pool and returns the
* binary result buffer. Throws with a human-readable message if the task fails
@@ -70,7 +88,9 @@ export async function runSandboxTask<TInput extends SandboxTaskInput>(
const queueMs = result.timings ? Math.max(0, elapsedMs - result.timings.total) : undefined
if (result.error) {
logger.warn('Sandbox task failed', {
const isSystemError = result.error.isSystemError === true
const logFn = isSystemError ? logger.error.bind(logger) : logger.warn.bind(logger)
logFn('Sandbox task failed', {
taskId,
requestId,
workspaceId: input.workspaceId,
@@ -79,11 +99,19 @@ export async function runSandboxTask<TInput extends SandboxTaskInput>(
timings: result.timings,
error: result.error.message,
errorName: result.error.name,
isSystemError,
})
const err = new Error(result.error.message)
err.name = result.error.name || 'SandboxTaskError'
if (result.error.stack) err.stack = result.error.stack
throw err
if (isSystemError) {
const err = new Error(result.error.message)
err.name = result.error.name || 'SandboxSystemError'
if (result.error.stack) err.stack = result.error.stack
throw err
}
throw new SandboxUserCodeError(
result.error.message,
result.error.name || 'SandboxTaskError',
result.error.stack
)
}
if (typeof result.bytesBase64 !== 'string' || result.bytesBase64.length === 0) {

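With this split, a route handler can map error origin to an HTTP status with a single `instanceof` check. A hedged sketch: the class body is copied from the diff, while `toHttpStatus` is illustrative and not an existing helper in the codebase:

```typescript
// Copied from the diff: failures attributable to the caller's own code.
class SandboxUserCodeError extends Error {
  constructor(message: string, name: string, stack?: string) {
    super(message)
    this.name = name || 'SandboxUserCodeError'
    if (stack) this.stack = stack
  }
}

// Illustrative mapping: user-caused failures become 4xx, while system-tagged
// failures (rethrown as plain Error at the sandbox layer) stay 5xx so server
// health alerting keeps its signal.
function toHttpStatus(error: unknown): number {
  return error instanceof SandboxUserCodeError ? 400 : 500
}
```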
View File

@@ -704,7 +704,7 @@ export const OAUTH_PROVIDERS: Record<string, OAuthProviderConfig> = {
services: {
slack: {
name: 'Slack',
description: 'Send messages using a bot for Slack.',
description: 'Use Slack messaging, files, reactions, views, and canvases.',
providerId: 'slack',
icon: SlackIcon,
baseProviderIcon: SlackIcon,
@@ -722,6 +722,7 @@ export const OAUTH_PROVIDERS: Record<string, OAuthProviderConfig> = {
// TODO: Add 'users:read.email' once Slack app review is approved
'files:write',
'files:read',
'canvases:read',
'canvases:write',
'reactions:write',
],

View File

@@ -278,7 +278,8 @@ export const SCOPE_DESCRIPTIONS: Record<string, string> = {
'users:read.email': 'View user email addresses',
'files:write': 'Upload files',
'files:read': 'Download and read files',
'canvases:write': 'Create canvas documents',
'canvases:read': 'Read canvas sections',
'canvases:write': 'Create, edit, and delete canvas documents',
'reactions:write': 'Add emoji reactions to messages',
// Webflow scopes

View File

@@ -0,0 +1,87 @@
import { safeCompare } from '@sim/security/compare'
import { hmacSha256Base64 } from '@sim/security/hmac'
import { env } from '@/lib/core/config/env'
import type { StorageContext } from '@/lib/uploads/shared/types'
export interface UploadTokenPayload {
uploadId: string
key: string
userId: string
workspaceId: string
context: StorageContext
}
interface SignedPayload extends UploadTokenPayload {
exp: number
v: 1
}
const toBase64Url = (input: string): string => Buffer.from(input, 'utf8').toString('base64url')
const fromBase64Url = (input: string): string => Buffer.from(input, 'base64url').toString('utf8')
const sign = (payload: string): string => hmacSha256Base64(payload, env.INTERNAL_API_SECRET)
/**
* Sign an upload session token binding (uploadId, key, userId, workspaceId, context).
* Used to prevent IDOR on multipart upload follow-up calls (get-part-urls, complete, abort).
*/
export function signUploadToken(payload: UploadTokenPayload, expiresInSeconds = 60 * 60): string {
const signed: SignedPayload = {
...payload,
exp: Math.floor(Date.now() / 1000) + expiresInSeconds,
v: 1,
}
const encoded = toBase64Url(JSON.stringify(signed))
return `${encoded}.${sign(encoded)}`
}
export type UploadTokenVerification =
| { valid: true; payload: UploadTokenPayload }
| { valid: false }
export function verifyUploadToken(token: string): UploadTokenVerification {
if (typeof token !== 'string') {
return { valid: false }
}
const parts = token.split('.')
if (parts.length !== 2) return { valid: false }
const [encoded, signature] = parts
if (!encoded || !signature) return { valid: false }
const expected = sign(encoded)
if (!safeCompare(signature, expected)) {
return { valid: false }
}
let parsed: SignedPayload
try {
parsed = JSON.parse(fromBase64Url(encoded)) as SignedPayload
} catch {
return { valid: false }
}
if (
parsed.v !== 1 ||
typeof parsed.exp !== 'number' ||
parsed.exp < Math.floor(Date.now() / 1000) ||
typeof parsed.uploadId !== 'string' ||
typeof parsed.key !== 'string' ||
typeof parsed.userId !== 'string' ||
typeof parsed.workspaceId !== 'string' ||
typeof parsed.context !== 'string'
) {
return { valid: false }
}
return {
valid: true,
payload: {
uploadId: parsed.uploadId,
key: parsed.key,
userId: parsed.userId,
workspaceId: parsed.workspaceId,
context: parsed.context as StorageContext,
},
}
}
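The sign/verify round trip above can be sketched standalone. The real module delegates to `@sim/security` helpers and reads `INTERNAL_API_SECRET` from env; in this self-contained sketch the secret is a local constant and only the round-trip mechanics are kept:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto'

const SECRET = 'example-secret' // assumption for the sketch only

const sign = (payload: string): string =>
  createHmac('sha256', SECRET).update(payload).digest('base64url')

// Constant-time comparison; length is checked first because timingSafeEqual
// throws on unequal-length buffers.
function safeCompare(a: string, b: string): boolean {
  const ba = Buffer.from(a)
  const bb = Buffer.from(b)
  return ba.length === bb.length && timingSafeEqual(ba, bb)
}

function signToken(payload: object, expiresInSeconds = 3600): string {
  const encoded = Buffer.from(
    JSON.stringify({ ...payload, exp: Math.floor(Date.now() / 1000) + expiresInSeconds, v: 1 })
  ).toString('base64url')
  return `${encoded}.${sign(encoded)}`
}

function verifyToken(token: string): Record<string, unknown> | null {
  const [encoded, signature] = token.split('.')
  if (!encoded || !signature || !safeCompare(signature, sign(encoded))) return null
  // As in the module above, the signature is checked before JSON.parse, so
  // attacker-controlled bytes never reach the parser.
  let parsed: Record<string, unknown>
  try {
    parsed = JSON.parse(Buffer.from(encoded, 'base64url').toString('utf8'))
  } catch {
    return null
  }
  const exp = parsed.exp
  return parsed.v === 1 && typeof exp === 'number' && exp >= Math.floor(Date.now() / 1000)
    ? parsed
    : null
}
```

Binding `uploadId`, `key`, `userId`, `workspaceId`, and `context` into one signed blob means the follow-up multipart calls cannot swap in someone else's key, which is the IDOR the module is guarding against.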

View File

@@ -77,6 +77,7 @@ export const SUPPORTED_CODE_EXTENSIONS = [
'editorconfig',
'prettierrc',
'eslintrc',
'mmd',
] as const
export type SupportedCodeExtension = (typeof SUPPORTED_CODE_EXTENSIONS)[number]

View File

@@ -564,6 +564,18 @@ export async function updateFolderRecord(
await db.update(workflowFolder).set(setData).where(eq(workflowFolder.id, folderId))
}
export async function verifyFolderWorkspace(
folderId: string,
workspaceId: string
): Promise<boolean> {
const [row] = await db
.select({ id: workflowFolder.id })
.from(workflowFolder)
.where(and(eq(workflowFolder.id, folderId), eq(workflowFolder.workspaceId, workspaceId)))
.limit(1)
return Boolean(row)
}
export async function deleteFolderRecord(folderId: string): Promise<boolean> {
const [folder] = await db
.select({ parentId: workflowFolder.parentId })

View File

@@ -55,7 +55,7 @@
"@azure/storage-blob": "12.27.0",
"@better-auth/sso": "1.3.12",
"@better-auth/stripe": "1.3.12",
"@browserbasehq/stagehand": "^3.0.5",
"@browserbasehq/stagehand": "^3.2.1",
"@cerebras/cerebras_cloud_sdk": "^1.23.0",
"@e2b/code-interpreter": "^2.0.0",
"@google/genai": "1.34.0",
@@ -63,6 +63,7 @@
"@linear/sdk": "40.0.0",
"@marsidev/react-turnstile": "1.4.2",
"@modelcontextprotocol/sdk": "1.25.3",
"@monaco-editor/react": "4.7.0",
"@opentelemetry/api": "^1.9.0",
"@opentelemetry/exporter-jaeger": "2.1.0",
"@opentelemetry/exporter-trace-otlp-http": "^0.200.0",
@@ -153,7 +154,9 @@
"lucide-react": "^0.479.0",
"mammoth": "^1.9.0",
"marked": "17.0.4",
"mermaid": "11.14.0",
"micromatch": "4.0.8",
"monaco-editor": "0.55.1",
"mongodb": "6.19.0",
"mysql2": "3.14.3",
"neo4j-driver": "6.0.1",
@@ -166,6 +169,7 @@
"openai": "^4.91.1",
"papaparse": "5.5.3",
"pdf-lib": "1.17.1",
"pdfjs-dist": "5.4.296",
"postgres": "^3.4.5",
"posthog-js": "1.364.4",
"posthog-node": "5.28.9",
@@ -176,6 +180,7 @@
"react-dom": "19.2.4",
"react-hook-form": "^7.54.2",
"react-joyride": "2.9.3",
"react-pdf": "10.4.1",
"react-simple-code-editor": "^0.14.1",
"react-window": "2.2.3",
"reactflow": "^11.11.4",

View File

@@ -183,6 +183,49 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
contextWindow: 1047576,
releaseDate: '2025-04-14',
},
// GPT-5.5 family
{
id: 'gpt-5.5-pro',
pricing: {
input: 30.0,
output: 180.0,
updatedAt: '2026-04-23',
},
capabilities: {
nativeStructuredOutputs: true,
reasoningEffort: {
values: ['none', 'low', 'medium', 'high', 'xhigh'],
},
verbosity: {
values: ['low', 'medium', 'high'],
},
maxOutputTokens: 128000,
},
contextWindow: 1050000,
releaseDate: '2026-04-23',
},
{
id: 'gpt-5.5',
pricing: {
input: 5.0,
cachedInput: 0.5,
output: 30.0,
updatedAt: '2026-04-23',
},
capabilities: {
nativeStructuredOutputs: true,
reasoningEffort: {
values: ['none', 'low', 'medium', 'high', 'xhigh'],
},
verbosity: {
values: ['low', 'medium', 'high'],
},
maxOutputTokens: 128000,
},
contextWindow: 1050000,
releaseDate: '2026-04-23',
recommended: true,
},
// GPT-5.4 family
{
id: 'gpt-5.4-pro',
@@ -219,7 +262,6 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
contextWindow: 1050000,
releaseDate: '2026-03-05',
recommended: true,
},
{
id: 'gpt-5.4-mini',

View File

@@ -9,13 +9,14 @@ const logger = createLogger('BrowserUseTool')
const POLL_INTERVAL_MS = 5000
const MAX_POLL_TIME_MS = getMaxExecutionTimeout()
const MAX_CONSECUTIVE_ERRORS = 3
const API_BASE = 'https://api.browser-use.com/api/v2'
async function createSessionWithProfile(
profileId: string,
apiKey: string
): Promise<{ sessionId: string } | { error: string }> {
try {
const response = await fetch('https://api.browser-use.com/api/v2/sessions', {
const response = await fetch(`${API_BASE}/sessions`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
@@ -43,7 +44,7 @@ async function createSessionWithProfile(
async function stopSession(sessionId: string, apiKey: string): Promise<void> {
try {
const response = await fetch(`https://api.browser-use.com/api/v2/sessions/${sessionId}`, {
const response = await fetch(`${API_BASE}/sessions/${sessionId}`, {
method: 'PATCH',
headers: {
'Content-Type': 'application/json',
@@ -62,58 +63,92 @@ async function stopSession(sessionId: string, apiKey: string): Promise<void> {
}
}
async function fetchSessionLiveUrl(
sessionId: string,
apiKey: string
): Promise<{ liveUrl: string | null; publicShareUrl: string | null }> {
try {
const response = await fetch(`${API_BASE}/sessions/${sessionId}`, {
method: 'GET',
headers: { 'X-Browser-Use-API-Key': apiKey },
})
if (!response.ok) {
return { liveUrl: null, publicShareUrl: null }
}
const data = (await response.json()) as { liveUrl?: string; publicShareUrl?: string }
return {
liveUrl: data.liveUrl ?? null,
publicShareUrl: data.publicShareUrl ?? null,
}
} catch (error: any) {
logger.warn(`Error fetching session ${sessionId}:`, error)
return { liveUrl: null, publicShareUrl: null }
}
}
function normalizeSecrets(variables: BrowserUseRunTaskParams['variables']): Record<string, string> {
const secrets: Record<string, string> = {}
if (!variables) return secrets
if (Array.isArray(variables)) {
for (const row of variables as Array<Record<string, any>>) {
if (row?.cells?.Key && row.cells.Value !== undefined) {
secrets[row.cells.Key] = row.cells.Value
} else if (row?.Key && row.Value !== undefined) {
secrets[row.Key] = row.Value
}
}
} else if (typeof variables === 'object') {
for (const [k, v] of Object.entries(variables)) {
if (typeof v === 'string') secrets[k] = v
}
}
return secrets
}
function parseAllowedDomains(input?: string | string[]): string[] | undefined {
if (!input) return undefined
const arr = Array.isArray(input)
? input
: input
.split(',')
.map((s) => s.trim())
.filter(Boolean)
return arr.length > 0 ? arr : undefined
}
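Since `normalizeSecrets` and `parseAllowedDomains` are now pure helpers, their edge cases can be exercised directly. The snippet below copies both from this hunk; the `Variables` alias is a stand-in assumption for `BrowserUseRunTaskParams['variables']`:

```typescript
// Assumed shape standing in for BrowserUseRunTaskParams['variables'].
type Variables = Array<Record<string, any>> | Record<string, unknown> | undefined

// Accepts either table rows ({ cells: { Key, Value } } or { Key, Value })
// or a plain object, and normalizes to a flat string map.
function normalizeSecrets(variables: Variables): Record<string, string> {
  const secrets: Record<string, string> = {}
  if (!variables) return secrets
  if (Array.isArray(variables)) {
    for (const row of variables) {
      if (row?.cells?.Key && row.cells.Value !== undefined) {
        secrets[row.cells.Key] = row.cells.Value
      } else if (row?.Key && row.Value !== undefined) {
        secrets[row.Key] = row.Value
      }
    }
  } else if (typeof variables === 'object') {
    for (const [k, v] of Object.entries(variables)) {
      if (typeof v === 'string') secrets[k] = v
    }
  }
  return secrets
}

// Comma-separated string or array in; trimmed non-empty list out, or
// undefined so the key is omitted from the request body entirely.
function parseAllowedDomains(input?: string | string[]): string[] | undefined {
  if (!input) return undefined
  const arr = Array.isArray(input)
    ? input
    : input
        .split(',')
        .map((s) => s.trim())
        .filter(Boolean)
  return arr.length > 0 ? arr : undefined
}
```

Both secret formats normalize to the same shape, and whitespace-only domain lists collapse to `undefined` rather than sending an empty `allowedDomains` array.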
function buildRequestBody(
params: BrowserUseRunTaskParams,
sessionId?: string
): Record<string, any> {
const requestBody: Record<string, any> = {
task: params.task,
}
const body: Record<string, any> = { task: params.task }
if (sessionId) {
requestBody.sessionId = sessionId
logger.info(`Using session ${sessionId} for task`)
}
if (sessionId) body.sessionId = sessionId
if (params.model) body.llm = params.model
if (params.startUrl?.trim()) body.startUrl = params.startUrl.trim()
if (typeof params.maxSteps === 'number' && params.maxSteps > 0) body.maxSteps = params.maxSteps
if (params.structuredOutput) body.structuredOutput = params.structuredOutput
if (typeof params.flashMode === 'boolean') body.flashMode = params.flashMode
if (typeof params.thinking === 'boolean') body.thinking = params.thinking
if (typeof params.vision === 'boolean' || params.vision === 'auto') body.vision = params.vision
if (params.systemPromptExtension) body.systemPromptExtension = params.systemPromptExtension
if (typeof params.highlightElements === 'boolean')
body.highlightElements = params.highlightElements
if (params.variables) {
let secrets: Record<string, string> = {}
const allowedDomains = parseAllowedDomains(params.allowedDomains)
if (allowedDomains) body.allowedDomains = allowedDomains
if (Array.isArray(params.variables)) {
logger.info('Converting variables array to dictionary format')
params.variables.forEach((row: any) => {
if (row.cells?.Key && row.cells.Value !== undefined) {
secrets[row.cells.Key] = row.cells.Value
logger.info(`Added secret for key: ${row.cells.Key}`)
} else if (row.Key && row.Value !== undefined) {
secrets[row.Key] = row.Value
logger.info(`Added secret for key: ${row.Key}`)
}
})
} else if (typeof params.variables === 'object' && params.variables !== null) {
logger.info('Using variables object directly')
secrets = params.variables
}
const secrets = normalizeSecrets(params.variables)
if (Object.keys(secrets).length > 0) body.secrets = secrets
if (Object.keys(secrets).length > 0) {
logger.info(`Found ${Object.keys(secrets).length} secrets to include`)
requestBody.secrets = secrets
} else {
logger.warn('No usable secrets found in variables')
}
}
if (
params.metadata &&
typeof params.metadata === 'object' &&
Object.keys(params.metadata).length > 0
)
body.metadata = params.metadata
if (params.model) {
requestBody.llm_model = params.model
}
if (params.save_browser_data) {
requestBody.save_browser_data = params.save_browser_data
}
requestBody.use_adblock = true
requestBody.highlight_elements = true
return requestBody
return body
}
async function fetchTaskStatus(
@@ -121,30 +156,36 @@ async function fetchTaskStatus(
apiKey: string
): Promise<{ ok: true; data: any } | { ok: false; error: string }> {
try {
const response = await fetch(`https://api.browser-use.com/api/v2/tasks/${taskId}`, {
const response = await fetch(`${API_BASE}/tasks/${taskId}`, {
method: 'GET',
headers: {
'X-Browser-Use-API-Key': apiKey,
},
headers: { 'X-Browser-Use-API-Key': apiKey },
})
if (!response.ok) {
return { ok: false, error: `HTTP ${response.status}: ${response.statusText}` }
}
const data = await response.json()
return { ok: true, data }
return { ok: true, data: await response.json() }
} catch (error: any) {
return { ok: false, error: error.message || 'Network error' }
}
}
async function pollForCompletion(
taskId: string,
apiKey: string
): Promise<{ success: boolean; output: any; steps: any[]; error?: string }> {
let liveUrlLogged = false
interface PollResult {
success: boolean
output: any
steps: any[]
sessionId: string | null
liveUrl: string | null
publicShareUrl: string | null
error?: string
}
async function pollForCompletion(taskId: string, apiKey: string): Promise<PollResult> {
let consecutiveErrors = 0
let sessionId: string | null = null
let liveUrl: string | null = null
let publicShareUrl: string | null = null
const startTime = Date.now()
while (Date.now() - startTime < MAX_POLL_TIME_MS) {
@@ -157,11 +198,13 @@ async function pollForCompletion(
)
if (consecutiveErrors >= MAX_CONSECUTIVE_ERRORS) {
logger.error(`Max consecutive errors reached for task ${taskId}`)
return {
success: false,
output: null,
steps: [],
sessionId,
liveUrl,
publicShareUrl,
error: `Failed to poll task status after ${MAX_CONSECUTIVE_ERRORS} attempts: ${result.error}`,
}
}
@@ -172,23 +215,31 @@ async function pollForCompletion(
consecutiveErrors = 0
const taskData = result.data
if (taskData.sessionId) sessionId = taskData.sessionId
const status = taskData.status
logger.info(`BrowserUse task ${taskId} status: ${status}`)
if (sessionId && !liveUrl) {
const session = await fetchSessionLiveUrl(sessionId, apiKey)
if (session.liveUrl) {
liveUrl = session.liveUrl
logger.info(`BrowserUse live URL: ${liveUrl}`)
}
if (session.publicShareUrl) publicShareUrl = session.publicShareUrl
}
if (['finished', 'failed', 'stopped'].includes(status)) {
return {
success: status === 'finished',
output: taskData.output ?? null,
steps: taskData.steps || [],
sessionId,
liveUrl,
publicShareUrl,
}
}
await sleep(POLL_INTERVAL_MS)
}
@@ -198,20 +249,58 @@ async function pollForCompletion(
success: finalResult.data.status === 'finished',
output: finalResult.data.output ?? null,
steps: finalResult.data.steps || [],
sessionId: finalResult.data.sessionId ?? sessionId,
liveUrl,
publicShareUrl,
}
}
logger.warn(
`Task ${taskId} did not complete within the maximum polling time (${MAX_POLL_TIME_MS / 1000}s)`
)
return {
success: false,
output: null,
steps: [],
sessionId,
liveUrl,
publicShareUrl,
error: `Task did not complete within the maximum polling time (${MAX_POLL_TIME_MS / 1000}s)`,
}
}
async function createShareUrl(sessionId: string, apiKey: string): Promise<string | null> {
try {
const response = await fetch(`${API_BASE}/sessions/${sessionId}/public-share`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-Browser-Use-API-Key': apiKey,
},
})
if (!response.ok) {
logger.warn(`Failed to create share URL for session ${sessionId}: ${response.statusText}`)
return null
}
const data = (await response.json()) as { shareUrl?: string; shareToken?: string }
return data.shareUrl ?? null
} catch (error: any) {
logger.warn(`Error creating share URL for session ${sessionId}:`, error)
return null
}
}
function emptyOutput(): BrowserUseRunTaskResponse['output'] {
return {
id: '',
success: false,
output: null,
steps: [],
liveUrl: null,
shareUrl: null,
sessionId: null,
}
}
export const runTaskTool: ToolConfig<BrowserUseRunTaskParams, BrowserUseRunTaskResponse> = {
id: 'browser_use_run_task',
name: 'Browser Use',
@@ -225,23 +314,77 @@ export const runTaskTool: ToolConfig<BrowserUseRunTaskParams, BrowserUseRunTaskR
visibility: 'user-or-llm',
description: 'What should the browser agent do',
},
startUrl: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'Initial page URL to start the agent on (reduces navigation steps)',
},
variables: {
type: 'json',
required: false,
visibility: 'user-only',
description: 'Optional secrets injected into the task (format: {key: value})',
},
allowedDomains: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Comma-separated list of domains the agent is allowed to visit',
},
maxSteps: {
type: 'number',
required: false,
visibility: 'user-only',
description: 'Maximum number of steps the agent may take (default 100, max 10000)',
},
flashMode: {
type: 'boolean',
required: false,
visibility: 'user-only',
description: 'Enable flash mode (faster, less careful navigation)',
},
thinking: {
type: 'boolean',
required: false,
visibility: 'user-only',
description: 'Enable extended reasoning mode',
},
vision: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Vision capability: "true", "false", or "auto"',
},
systemPromptExtension: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Optional text appended to the agent system prompt (max 2000 chars)',
},
structuredOutput: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Stringified JSON schema for the structured output',
},
highlightElements: {
type: 'boolean',
required: false,
visibility: 'user-only',
description: 'Highlight interactive elements on the page (default true)',
},
metadata: {
type: 'json',
required: false,
visibility: 'user-only',
description: 'Custom key-value metadata (up to 10 pairs) for tracking',
},
model: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'LLM model identifier (e.g. browser-use-2.0)',
},
apiKey: {
type: 'string',
@@ -258,7 +401,7 @@ export const runTaskTool: ToolConfig<BrowserUseRunTaskParams, BrowserUseRunTaskR
},
request: {
url: `${API_BASE}/tasks`,
method: 'POST',
headers: (params) => ({
'Content-Type': 'application/json',
@@ -273,16 +416,7 @@ export const runTaskTool: ToolConfig<BrowserUseRunTaskParams, BrowserUseRunTaskR
logger.info(`Creating session with profile ID: ${params.profile_id}`)
const sessionResult = await createSessionWithProfile(params.profile_id, params.apiKey)
if ('error' in sessionResult) {
return { success: false, output: emptyOutput(), error: sessionResult.error }
}
sessionId = sessionResult.sessionId
}
@@ -291,7 +425,7 @@ export const runTaskTool: ToolConfig<BrowserUseRunTaskParams, BrowserUseRunTaskR
logger.info('Creating BrowserUse task', { hasSession: !!sessionId })
try {
const response = await fetch(`${API_BASE}/tasks`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
@@ -305,22 +439,23 @@ export const runTaskTool: ToolConfig<BrowserUseRunTaskParams, BrowserUseRunTaskR
logger.error(`Failed to create task: ${errorText}`)
return {
success: false,
output: emptyOutput(),
error: `Failed to create task: ${response.statusText}`,
}
}
const data = (await response.json()) as { id: string; sessionId?: string }
const taskId = data.id
const initialSessionId = sessionId ?? data.sessionId ?? null
logger.info(`Created BrowserUse task ${taskId}`, { sessionId: initialSessionId })
const result = await pollForCompletion(taskId, params.apiKey)
const finalSessionId = result.sessionId ?? initialSessionId
const shareUrl =
result.publicShareUrl ??
(finalSessionId ? await createShareUrl(finalSessionId, params.apiKey) : null)
if (sessionId) {
await stopSession(sessionId, params.apiKey)
}
@@ -332,24 +467,20 @@ export const runTaskTool: ToolConfig<BrowserUseRunTaskParams, BrowserUseRunTaskR
success: result.success,
output: result.output,
steps: result.steps,
liveUrl: result.liveUrl,
shareUrl,
sessionId: finalSessionId,
},
error: result.error,
}
} catch (error: any) {
logger.error('Error creating BrowserUse task:', error)
if (sessionId) {
await stopSession(sessionId, params.apiKey)
}
return {
success: false,
output: emptyOutput(),
error: `Error creating task: ${error.message}`,
}
}
@@ -358,7 +489,43 @@ export const runTaskTool: ToolConfig<BrowserUseRunTaskParams, BrowserUseRunTaskR
outputs: {
id: { type: 'string', description: 'Task execution identifier' },
success: { type: 'boolean', description: 'Task completion status' },
output: { type: 'json', description: 'Final task output (string or structured)' },
steps: {
type: 'array',
description: 'Steps the agent executed (number, memory, nextGoal, url, actions, duration)',
items: {
type: 'object',
properties: {
number: { type: 'number', description: 'Sequential step number' },
memory: { type: 'string', description: 'Agent memory at this step' },
evaluationPreviousGoal: {
type: 'string',
description: 'Evaluation of previous goal completion',
},
nextGoal: { type: 'string', description: 'Goal for the next step' },
url: { type: 'string', description: 'Current URL of the browser' },
screenshotUrl: { type: 'string', description: 'Optional screenshot URL', optional: true },
actions: {
type: 'array',
description: 'Stringified JSON actions performed',
items: { type: 'string', description: 'Action JSON' },
},
duration: {
type: 'number',
description: 'Step duration in seconds',
optional: true,
},
},
},
},
liveUrl: {
type: 'string',
description: 'Embeddable live browser session URL (active during execution)',
},
shareUrl: {
type: 'string',
description: 'Public shareable URL for the recorded session (post-run)',
},
sessionId: { type: 'string', description: 'Browser Use session identifier' },
},
}

View File

@@ -3,26 +3,40 @@ import type { ToolResponse } from '@/tools/types'
export interface BrowserUseRunTaskParams {
task: string
apiKey: string
variables?: Record<string, string> | Array<Record<string, any>>
model?: string
startUrl?: string
allowedDomains?: string | string[]
maxSteps?: number
flashMode?: boolean
thinking?: boolean
vision?: boolean | 'auto'
systemPromptExtension?: string
structuredOutput?: string
highlightElements?: boolean
metadata?: Record<string, string>
profile_id?: string
}
export interface BrowserUseTaskStep {
number: number
memory: string
evaluationPreviousGoal: string
nextGoal: string
url: string
screenshotUrl?: string | null
actions: string[]
duration?: number | null
}
export interface BrowserUseTaskOutput {
id: string
success: boolean
output: string | null
steps: BrowserUseTaskStep[]
liveUrl: string | null
shareUrl: string | null
sessionId: string | null
}
export interface BrowserUseRunTaskResponse extends ToolResponse {
@@ -30,10 +44,5 @@ export interface BrowserUseRunTaskResponse extends ToolResponse {
}
export interface BrowserUseResponse extends ToolResponse {
output: BrowserUseTaskOutput
}

View File

@@ -2248,6 +2248,45 @@ import {
salesforceUpdateOpportunityTool,
salesforceUpdateTaskTool,
} from '@/tools/salesforce'
import {
createBusinessPartnerTool as sapS4HanaCreateBusinessPartnerTool,
createPurchaseOrderTool as sapS4HanaCreatePurchaseOrderTool,
createPurchaseRequisitionTool as sapS4HanaCreatePurchaseRequisitionTool,
createSalesOrderTool as sapS4HanaCreateSalesOrderTool,
deleteSalesOrderTool as sapS4HanaDeleteSalesOrderTool,
getBillingDocumentTool as sapS4HanaGetBillingDocumentTool,
getBusinessPartnerTool as sapS4HanaGetBusinessPartnerTool,
getCustomerTool as sapS4HanaGetCustomerTool,
getInboundDeliveryTool as sapS4HanaGetInboundDeliveryTool,
getOutboundDeliveryTool as sapS4HanaGetOutboundDeliveryTool,
getProductTool as sapS4HanaGetProductTool,
getPurchaseOrderTool as sapS4HanaGetPurchaseOrderTool,
getPurchaseRequisitionTool as sapS4HanaGetPurchaseRequisitionTool,
getSalesOrderTool as sapS4HanaGetSalesOrderTool,
getSupplierInvoiceTool as sapS4HanaGetSupplierInvoiceTool,
getSupplierTool as sapS4HanaGetSupplierTool,
listBillingDocumentsTool as sapS4HanaListBillingDocumentsTool,
listBusinessPartnersTool as sapS4HanaListBusinessPartnersTool,
listCustomersTool as sapS4HanaListCustomersTool,
listInboundDeliveriesTool as sapS4HanaListInboundDeliveriesTool,
listMaterialDocumentsTool as sapS4HanaListMaterialDocumentsTool,
listMaterialStockTool as sapS4HanaListMaterialStockTool,
listOutboundDeliveriesTool as sapS4HanaListOutboundDeliveriesTool,
listProductsTool as sapS4HanaListProductsTool,
listPurchaseOrdersTool as sapS4HanaListPurchaseOrdersTool,
listPurchaseRequisitionsTool as sapS4HanaListPurchaseRequisitionsTool,
listSalesOrdersTool as sapS4HanaListSalesOrdersTool,
listSupplierInvoicesTool as sapS4HanaListSupplierInvoicesTool,
listSuppliersTool as sapS4HanaListSuppliersTool,
odataQueryTool as sapS4HanaOdataQueryTool,
updateBusinessPartnerTool as sapS4HanaUpdateBusinessPartnerTool,
updateCustomerTool as sapS4HanaUpdateCustomerTool,
updateProductTool as sapS4HanaUpdateProductTool,
updatePurchaseOrderTool as sapS4HanaUpdatePurchaseOrderTool,
updatePurchaseRequisitionTool as sapS4HanaUpdatePurchaseRequisitionTool,
updateSalesOrderTool as sapS4HanaUpdateSalesOrderTool,
updateSupplierTool as sapS4HanaUpdateSupplierTool,
} from '@/tools/sap_s4hana'
import { searchTool } from '@/tools/search'
import {
secretsManagerCreateSecretTool,
@@ -2364,19 +2403,23 @@ import {
slackCanvasTool,
slackCreateChannelCanvasTool,
slackCreateConversationTool,
slackDeleteCanvasTool,
slackDeleteMessageTool,
slackDownloadTool,
slackEditCanvasTool,
slackEphemeralMessageTool,
slackGetCanvasTool,
slackGetChannelInfoTool,
slackGetMessageTool,
slackGetThreadTool,
slackGetUserPresenceTool,
slackGetUserTool,
slackInviteToConversationTool,
slackListCanvasesTool,
slackListChannelsTool,
slackListMembersTool,
slackListUsersTool,
slackLookupCanvasSectionsTool,
slackMessageReaderTool,
slackMessageTool,
slackOpenViewTool,
@@ -3360,6 +3403,10 @@ export const tools: Record<string, ToolConfig> = {
slack_publish_view: slackPublishViewTool,
slack_edit_canvas: slackEditCanvasTool,
slack_create_channel_canvas: slackCreateChannelCanvasTool,
slack_get_canvas: slackGetCanvasTool,
slack_list_canvases: slackListCanvasesTool,
slack_lookup_canvas_sections: slackLookupCanvasSectionsTool,
slack_delete_canvas: slackDeleteCanvasTool,
slack_create_conversation: slackCreateConversationTool,
slack_invite_to_conversation: slackInviteToConversationTool,
github_repo_info: githubRepoInfoTool,
@@ -5278,6 +5325,43 @@ export const tools: Record<string, ToolConfig> = {
salesforce_query_more: salesforceQueryMoreTool,
salesforce_describe_object: salesforceDescribeObjectTool,
salesforce_list_objects: salesforceListObjectsTool,
sap_s4hana_create_business_partner: sapS4HanaCreateBusinessPartnerTool,
sap_s4hana_create_purchase_order: sapS4HanaCreatePurchaseOrderTool,
sap_s4hana_create_purchase_requisition: sapS4HanaCreatePurchaseRequisitionTool,
sap_s4hana_create_sales_order: sapS4HanaCreateSalesOrderTool,
sap_s4hana_delete_sales_order: sapS4HanaDeleteSalesOrderTool,
sap_s4hana_get_billing_document: sapS4HanaGetBillingDocumentTool,
sap_s4hana_get_business_partner: sapS4HanaGetBusinessPartnerTool,
sap_s4hana_get_customer: sapS4HanaGetCustomerTool,
sap_s4hana_get_inbound_delivery: sapS4HanaGetInboundDeliveryTool,
sap_s4hana_get_outbound_delivery: sapS4HanaGetOutboundDeliveryTool,
sap_s4hana_get_product: sapS4HanaGetProductTool,
sap_s4hana_get_purchase_order: sapS4HanaGetPurchaseOrderTool,
sap_s4hana_get_purchase_requisition: sapS4HanaGetPurchaseRequisitionTool,
sap_s4hana_get_sales_order: sapS4HanaGetSalesOrderTool,
sap_s4hana_get_supplier: sapS4HanaGetSupplierTool,
sap_s4hana_get_supplier_invoice: sapS4HanaGetSupplierInvoiceTool,
sap_s4hana_list_billing_documents: sapS4HanaListBillingDocumentsTool,
sap_s4hana_list_business_partners: sapS4HanaListBusinessPartnersTool,
sap_s4hana_list_customers: sapS4HanaListCustomersTool,
sap_s4hana_list_inbound_deliveries: sapS4HanaListInboundDeliveriesTool,
sap_s4hana_list_material_documents: sapS4HanaListMaterialDocumentsTool,
sap_s4hana_list_material_stock: sapS4HanaListMaterialStockTool,
sap_s4hana_list_outbound_deliveries: sapS4HanaListOutboundDeliveriesTool,
sap_s4hana_list_products: sapS4HanaListProductsTool,
sap_s4hana_list_purchase_orders: sapS4HanaListPurchaseOrdersTool,
sap_s4hana_list_purchase_requisitions: sapS4HanaListPurchaseRequisitionsTool,
sap_s4hana_list_sales_orders: sapS4HanaListSalesOrdersTool,
sap_s4hana_list_supplier_invoices: sapS4HanaListSupplierInvoicesTool,
sap_s4hana_list_suppliers: sapS4HanaListSuppliersTool,
sap_s4hana_odata_query: sapS4HanaOdataQueryTool,
sap_s4hana_update_business_partner: sapS4HanaUpdateBusinessPartnerTool,
sap_s4hana_update_customer: sapS4HanaUpdateCustomerTool,
sap_s4hana_update_product: sapS4HanaUpdateProductTool,
sap_s4hana_update_purchase_order: sapS4HanaUpdatePurchaseOrderTool,
sap_s4hana_update_purchase_requisition: sapS4HanaUpdatePurchaseRequisitionTool,
sap_s4hana_update_sales_order: sapS4HanaUpdateSalesOrderTool,
sap_s4hana_update_supplier: sapS4HanaUpdateSupplierTool,
sqs_send: sqsSendTool,
sts_assume_role: stsAssumeRoleTool,
sts_get_caller_identity: stsGetCallerIdentityTool,

View File

@@ -0,0 +1,162 @@
import type { CreateBusinessPartnerParams, SapProxyResponse } from '@/tools/sap_s4hana/types'
import {
baseProxyBody,
parseJsonInput,
SAP_PROXY_URL,
transformSapProxyResponse,
} from '@/tools/sap_s4hana/utils'
import type { ToolConfig } from '@/tools/types'
export const createBusinessPartnerTool: ToolConfig<CreateBusinessPartnerParams, SapProxyResponse> =
{
id: 'sap_s4hana_create_business_partner',
name: 'SAP S/4HANA Create Business Partner',
description:
'Create a business partner in SAP S/4HANA Cloud (API_BUSINESS_PARTNER, A_BusinessPartner). For Person (category "1"), provide FirstName and LastName. For Organization (category "2"), provide OrganizationBPName1.',
version: '1.0.0',
params: {
subdomain: {
type: 'string',
required: true,
visibility: 'user-only',
description:
'SAP BTP subaccount subdomain (technical name of your subaccount, not the S/4HANA host)',
},
region: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'BTP region (e.g. eu10, us10)',
},
clientId: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'OAuth client ID from the S/4HANA Communication Arrangement',
},
clientSecret: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'OAuth client secret from the S/4HANA Communication Arrangement',
},
deploymentType: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Deployment type: cloud_public (default), cloud_private, or on_premise',
},
authType: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Authentication type: oauth_client_credentials (default) or basic',
},
baseUrl: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Base URL of the S/4HANA host (Cloud Private / On-Premise)',
},
tokenUrl: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'OAuth token URL (Cloud Private / On-Premise + OAuth)',
},
username: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Username for HTTP Basic auth',
},
password: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Password for HTTP Basic auth',
},
businessPartnerCategory: {
type: 'string',
required: true,
visibility: 'user-or-llm',
description: 'BusinessPartnerCategory: "1" Person, "2" Organization, "3" Group',
},
businessPartnerGrouping: {
type: 'string',
required: true,
visibility: 'user-or-llm',
description:
'BusinessPartnerGrouping (number range / role grouping configured in S/4HANA, e.g. "0001")',
},
firstName: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'FirstName (required for Person)',
},
lastName: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'LastName (required for Person)',
},
organizationBPName1: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'OrganizationBPName1 (required for Organization)',
},
body: {
type: 'json',
required: false,
visibility: 'user-or-llm',
description: 'Optional additional A_BusinessPartner fields merged into the create payload',
},
},
request: {
url: SAP_PROXY_URL,
method: 'POST',
headers: () => ({ 'Content-Type': 'application/json' }),
body: (params) => {
const extra = parseJsonInput<Record<string, unknown>>(params.body, 'body') ?? {}
const extraHasName = (key: string) => Object.hasOwn(extra, key) && Boolean(extra[key])
if (params.businessPartnerCategory === '1') {
const hasFirst = Boolean(params.firstName) || extraHasName('FirstName')
const hasLast = Boolean(params.lastName) || extraHasName('LastName')
if (!hasFirst || !hasLast) {
throw new Error('BusinessPartnerCategory "1" (Person) requires FirstName and LastName')
}
} else if (params.businessPartnerCategory === '2') {
const hasOrgName =
Boolean(params.organizationBPName1) || extraHasName('OrganizationBPName1')
if (!hasOrgName) {
throw new Error(
'BusinessPartnerCategory "2" (Organization) requires OrganizationBPName1'
)
}
}
const payload: Record<string, unknown> = {
...extra,
BusinessPartnerCategory: params.businessPartnerCategory,
BusinessPartnerGrouping: params.businessPartnerGrouping,
}
if (params.firstName) payload.FirstName = params.firstName
if (params.lastName) payload.LastName = params.lastName
if (params.organizationBPName1) payload.OrganizationBPName1 = params.organizationBPName1
return {
...baseProxyBody(params),
service: 'API_BUSINESS_PARTNER',
path: '/A_BusinessPartner',
method: 'POST',
query: { $format: 'json' },
body: payload,
}
},
},
transformResponse: transformSapProxyResponse,
outputs: {
status: { type: 'number', description: 'HTTP status code returned by SAP' },
data: { type: 'json', description: 'Created A_BusinessPartner entity' },
},
}
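For illustration, a minimal standalone sketch of the proxy payload the body() builder above assembles for a Person business partner. The field values and the local variables are hypothetical examples; only the service name, path, and overall shape mirror the tool definition:

```typescript
// Hypothetical sketch of the proxy body for a Person
// (BusinessPartnerCategory "1") create. Values are examples only.
const payload: Record<string, unknown> = {
  BusinessPartnerCategory: '1',
  BusinessPartnerGrouping: '0001',
  FirstName: 'Ada',
  LastName: 'Lovelace',
}

const proxyBody = {
  service: 'API_BUSINESS_PARTNER',
  path: '/A_BusinessPartner',
  method: 'POST',
  query: { $format: 'json' },
  body: payload,
}

console.log(JSON.stringify(proxyBody))
```

Any extra A_BusinessPartner fields passed via the `body` parameter would be spread into `payload` before the required fields, so the explicit category/grouping values always win.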

View File

@@ -0,0 +1,145 @@
import type { CreatePurchaseOrderParams, SapProxyResponse } from '@/tools/sap_s4hana/types'
import {
baseProxyBody,
parseJsonInput,
SAP_PROXY_URL,
transformSapProxyResponse,
} from '@/tools/sap_s4hana/utils'
import type { ToolConfig } from '@/tools/types'
export const createPurchaseOrderTool: ToolConfig<CreatePurchaseOrderParams, SapProxyResponse> = {
id: 'sap_s4hana_create_purchase_order',
name: 'SAP S/4HANA Create Purchase Order',
description:
'Create a purchase order in SAP S/4HANA Cloud (API_PURCHASEORDER_PROCESS_SRV, A_PurchaseOrder). PurchaseOrder is auto-assigned by SAP from the document number range; provide line items via the body parameter.',
version: '1.0.0',
params: {
subdomain: {
type: 'string',
required: true,
visibility: 'user-only',
description:
'SAP BTP subaccount subdomain (technical name of your subaccount, not the S/4HANA host)',
},
region: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'BTP region (e.g. eu10, us10)',
},
clientId: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'OAuth client ID from the S/4HANA Communication Arrangement',
},
clientSecret: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'OAuth client secret from the S/4HANA Communication Arrangement',
},
deploymentType: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Deployment type: cloud_public (default), cloud_private, or on_premise',
},
authType: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Authentication type: oauth_client_credentials (default) or basic',
},
baseUrl: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Base URL of the S/4HANA host (Cloud Private / On-Premise)',
},
tokenUrl: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'OAuth token URL (Cloud Private / On-Premise + OAuth)',
},
username: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Username for HTTP Basic auth',
},
password: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Password for HTTP Basic auth',
},
purchaseOrderType: {
type: 'string',
required: true,
visibility: 'user-or-llm',
description: 'PurchaseOrderType (e.g., "NB" Standard PO)',
},
companyCode: {
type: 'string',
required: true,
visibility: 'user-or-llm',
description: 'CompanyCode (4 chars, e.g., "1010")',
},
purchasingOrganization: {
type: 'string',
required: true,
visibility: 'user-or-llm',
description: 'PurchasingOrganization (4 chars)',
},
purchasingGroup: {
type: 'string',
required: true,
visibility: 'user-or-llm',
description: 'PurchasingGroup (3 chars)',
},
supplier: {
type: 'string',
required: true,
visibility: 'user-or-llm',
description: 'Supplier business partner key (up to 10 chars)',
},
body: {
type: 'json',
required: false,
visibility: 'user-or-llm',
description:
'Additional A_PurchaseOrder fields and to_PurchaseOrderItem deep-insert items merged into the create payload (e.g., {"to_PurchaseOrderItem":[{"PurchaseOrderItem":"10","Material":"TG11","OrderQuantity":"5","Plant":"1010","PurchaseOrderQuantityUnit":"PC","NetPriceAmount":"100.00","DocumentCurrency":"USD"}]}).',
},
},
request: {
url: SAP_PROXY_URL,
method: 'POST',
headers: () => ({ 'Content-Type': 'application/json' }),
body: (params) => {
const extra = parseJsonInput<Record<string, unknown>>(params.body, 'body') ?? {}
const payload: Record<string, unknown> = {
...extra,
PurchaseOrderType: params.purchaseOrderType,
CompanyCode: params.companyCode,
PurchasingOrganization: params.purchasingOrganization,
PurchasingGroup: params.purchasingGroup,
Supplier: params.supplier,
}
return {
...baseProxyBody(params),
service: 'API_PURCHASEORDER_PROCESS_SRV',
path: '/A_PurchaseOrder',
method: 'POST',
query: { $format: 'json' },
body: payload,
}
},
},
transformResponse: transformSapProxyResponse,
outputs: {
status: { type: 'number', description: 'HTTP status code returned by SAP' },
data: { type: 'json', description: 'Created A_PurchaseOrder entity' },
},
}

View File

@@ -0,0 +1,132 @@
import type { CreatePurchaseRequisitionParams, SapProxyResponse } from '@/tools/sap_s4hana/types'
import {
baseProxyBody,
parseJsonInput,
SAP_PROXY_URL,
transformSapProxyResponse,
} from '@/tools/sap_s4hana/utils'
import type { ToolConfig } from '@/tools/types'
export const createPurchaseRequisitionTool: ToolConfig<
CreatePurchaseRequisitionParams,
SapProxyResponse
> = {
id: 'sap_s4hana_create_purchase_requisition',
name: 'SAP S/4HANA Create Purchase Requisition',
description:
'Create a purchase requisition in SAP S/4HANA Cloud (API_PURCHASEREQ_PROCESS_SRV, A_PurchaseRequisitionHeader). PurchaseRequisition is auto-assigned by SAP from the document number range; provide line items via the to_PurchaseReqnItem deep-insert array. Note: API_PURCHASEREQ_PROCESS_SRV is deprecated since S/4HANA Cloud Public Edition 2402; the successor is API_PURCHASEREQUISITION_2 (OData v4). This tool still works against tenants where the legacy service is enabled.',
version: '1.0.0',
params: {
subdomain: {
type: 'string',
required: true,
visibility: 'user-only',
description:
'SAP BTP subaccount subdomain (technical name of your subaccount, not the S/4HANA host)',
},
region: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'BTP region (e.g. eu10, us10)',
},
clientId: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'OAuth client ID from the S/4HANA Communication Arrangement',
},
clientSecret: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'OAuth client secret from the S/4HANA Communication Arrangement',
},
deploymentType: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Deployment type: cloud_public (default), cloud_private, or on_premise',
},
authType: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Authentication type: oauth_client_credentials (default) or basic',
},
baseUrl: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Base URL of the S/4HANA host (Cloud Private / On-Premise)',
},
tokenUrl: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'OAuth token URL (Cloud Private / On-Premise + OAuth)',
},
username: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Username for HTTP Basic auth',
},
password: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Password for HTTP Basic auth',
},
purchaseRequisitionType: {
type: 'string',
required: true,
visibility: 'user-or-llm',
description: 'PurchaseRequisitionType (e.g., "NB" Standard PR)',
},
items: {
type: 'json',
required: true,
visibility: 'user-or-llm',
description:
'to_PurchaseReqnItem deep-insert array (e.g., [{"PurchaseRequisitionItem":"10","Material":"TG11","RequestedQuantity":"5","Plant":"1010","BaseUnit":"PC","DeliveryDate":"/Date(1735689600000)/"}])',
},
body: {
type: 'json',
required: false,
visibility: 'user-or-llm',
description:
'Additional A_PurchaseRequisitionHeader fields merged into the create payload (e.g., {"PurchaseRequisitionDescription":"Office supplies"})',
},
},
request: {
url: SAP_PROXY_URL,
method: 'POST',
headers: () => ({ 'Content-Type': 'application/json' }),
body: (params) => {
const items = parseJsonInput<Array<Record<string, unknown>>>(params.items, 'items')
if (!Array.isArray(items) || items.length === 0) {
throw new Error('items must be a non-empty JSON array of purchase requisition items')
}
const extra = parseJsonInput<Record<string, unknown>>(params.body, 'body') ?? {}
const payload: Record<string, unknown> = {
...extra,
PurchaseRequisitionType: params.purchaseRequisitionType,
to_PurchaseReqnItem: items,
}
return {
...baseProxyBody(params),
service: 'API_PURCHASEREQ_PROCESS_SRV',
path: '/A_PurchaseRequisitionHeader',
method: 'POST',
query: { $format: 'json' },
body: payload,
}
},
},
transformResponse: transformSapProxyResponse,
outputs: {
status: { type: 'number', description: 'HTTP status code returned by SAP' },
data: { type: 'json', description: 'Created A_PurchaseRequisitionHeader entity' },
},
}

View File

@@ -0,0 +1,159 @@
import type { CreateSalesOrderParams, SapProxyResponse } from '@/tools/sap_s4hana/types'
import {
baseProxyBody,
parseJsonInput,
SAP_PROXY_URL,
transformSapProxyResponse,
} from '@/tools/sap_s4hana/utils'
import type { ToolConfig } from '@/tools/types'
export const createSalesOrderTool: ToolConfig<CreateSalesOrderParams, SapProxyResponse> = {
id: 'sap_s4hana_create_sales_order',
name: 'SAP S/4HANA Create Sales Order',
description:
'Create a sales order in SAP S/4HANA Cloud (API_SALES_ORDER_SRV, A_SalesOrder) with deep insert of sales order items via to_Item.',
version: '1.0.0',
params: {
subdomain: {
type: 'string',
required: true,
visibility: 'user-only',
description:
'SAP BTP subaccount subdomain (technical name of your subaccount, not the S/4HANA host)',
},
region: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'BTP region (e.g. eu10, us10)',
},
clientId: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'OAuth client ID from the S/4HANA Communication Arrangement',
},
clientSecret: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'OAuth client secret from the S/4HANA Communication Arrangement',
},
deploymentType: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Deployment type: cloud_public (default), cloud_private, or on_premise',
},
authType: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Authentication type: oauth_client_credentials (default) or basic',
},
baseUrl: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Base URL of the S/4HANA host (Cloud Private / On-Premise)',
},
tokenUrl: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'OAuth token URL (Cloud Private / On-Premise + OAuth)',
},
username: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Username for HTTP Basic auth',
},
password: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Password for HTTP Basic auth',
},
salesOrderType: {
type: 'string',
required: true,
visibility: 'user-or-llm',
description: 'SalesOrderType (e.g., "OR" Standard Order)',
},
salesOrganization: {
type: 'string',
required: true,
visibility: 'user-or-llm',
description: 'SalesOrganization (4 chars, e.g., "1010")',
},
distributionChannel: {
type: 'string',
required: true,
visibility: 'user-or-llm',
description: 'DistributionChannel (2 chars, e.g., "10")',
},
organizationDivision: {
type: 'string',
required: true,
visibility: 'user-or-llm',
description: 'OrganizationDivision (2 chars, e.g., "00")',
},
soldToParty: {
type: 'string',
required: true,
visibility: 'user-or-llm',
description: 'SoldToParty business partner key (up to 10 chars)',
},
items: {
type: 'json',
required: true,
visibility: 'user-or-llm',
description:
'Array of sales order items for to_Item deep insert. Each item should include Material and RequestedQuantity (e.g., [{"Material":"TG11","RequestedQuantity":"1"}]).',
},
body: {
type: 'json',
required: false,
visibility: 'user-or-llm',
description: 'Optional additional A_SalesOrder fields merged into the create payload',
},
},
request: {
url: SAP_PROXY_URL,
method: 'POST',
headers: () => ({ 'Content-Type': 'application/json' }),
body: (params) => {
const items = parseJsonInput<Array<Record<string, unknown>>>(params.items, 'items')
if (!Array.isArray(items) || items.length === 0) {
throw new Error('items must be a non-empty JSON array of sales order item objects')
}
const extra = parseJsonInput<Record<string, unknown>>(params.body, 'body') ?? {}
const payload: Record<string, unknown> = {
...extra,
SalesOrderType: params.salesOrderType,
SalesOrganization: params.salesOrganization,
DistributionChannel: params.distributionChannel,
OrganizationDivision: params.organizationDivision,
SoldToParty: params.soldToParty,
to_Item: items,
}
return {
...baseProxyBody(params),
service: 'API_SALES_ORDER_SRV',
path: '/A_SalesOrder',
method: 'POST',
query: { $format: 'json' },
body: payload,
}
},
},
transformResponse: transformSapProxyResponse,
outputs: {
status: { type: 'number', description: 'HTTP status code returned by SAP' },
data: {
type: 'json',
description: 'Created A_SalesOrder entity (with deep-inserted items if expanded by SAP)',
},
},
}
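The create tool's `body` builder merges the optional extra `body` fields with the required header fields, spreading the required fields last so they win on key collisions, and deep-inserts the items under `to_Item`. A minimal self-contained sketch of that merge (the function and parameter names here are illustrative, not part of the tool):

```typescript
// Hypothetical sketch of the payload assembly done by the body builder above.
interface SalesOrderHeader {
  salesOrderType: string
  salesOrganization: string
  distributionChannel: string
  organizationDivision: string
  soldToParty: string
}

function buildSalesOrderPayload(
  header: SalesOrderHeader,
  items: Array<Record<string, unknown>>,
  extra: Record<string, unknown> = {}
): Record<string, unknown> {
  if (!Array.isArray(items) || items.length === 0) {
    throw new Error('items must be a non-empty JSON array of sales order item objects')
  }
  // Required header fields are spread after `extra`, so they override any colliding keys.
  return {
    ...extra,
    SalesOrderType: header.salesOrderType,
    SalesOrganization: header.salesOrganization,
    DistributionChannel: header.distributionChannel,
    OrganizationDivision: header.organizationDivision,
    SoldToParty: header.soldToParty,
    to_Item: items,
  }
}
```

This ordering means a caller cannot accidentally clobber `SalesOrderType` or `SoldToParty` through the free-form `body` parameter.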


@@ -0,0 +1,108 @@
import type { DeleteSalesOrderParams, SapProxyResponse } from '@/tools/sap_s4hana/types'
import {
baseProxyBody,
quoteOdataKey,
SAP_PROXY_URL,
transformSapProxyResponse,
} from '@/tools/sap_s4hana/utils'
import type { ToolConfig } from '@/tools/types'
export const deleteSalesOrderTool: ToolConfig<DeleteSalesOrderParams, SapProxyResponse> = {
id: 'sap_s4hana_delete_sales_order',
name: 'SAP S/4HANA Delete Sales Order',
description:
'Delete an A_SalesOrder entity in SAP S/4HANA Cloud (API_SALES_ORDER_SRV). Only orders without subsequent documents (deliveries, invoices) can be deleted; for orders that already have follow-on documents, reject the items via an update instead.',
version: '1.0.0',
params: {
subdomain: {
type: 'string',
required: true,
visibility: 'user-only',
description:
'SAP BTP subaccount subdomain (technical name of your subaccount, not the S/4HANA host)',
},
region: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'BTP region (e.g. eu10, us10)',
},
clientId: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'OAuth client ID from the S/4HANA Communication Arrangement',
},
clientSecret: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'OAuth client secret from the S/4HANA Communication Arrangement',
},
deploymentType: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Deployment type: cloud_public (default), cloud_private, or on_premise',
},
authType: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Authentication type: oauth_client_credentials (default) or basic',
},
baseUrl: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Base URL of the S/4HANA host (Cloud Private / On-Premise)',
},
tokenUrl: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'OAuth token URL (Cloud Private / On-Premise + OAuth)',
},
username: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Username for HTTP Basic auth',
},
password: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Password for HTTP Basic auth',
},
salesOrder: {
type: 'string',
required: true,
visibility: 'user-or-llm',
description: 'SalesOrder key to delete (string, up to 10 characters)',
},
ifMatch: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'If-Match ETag for optimistic concurrency. Defaults to "*" (unconditional).',
},
},
request: {
url: SAP_PROXY_URL,
method: 'POST',
headers: () => ({ 'Content-Type': 'application/json' }),
body: (params) => ({
...baseProxyBody(params),
service: 'API_SALES_ORDER_SRV',
path: `/A_SalesOrder(${quoteOdataKey(params.salesOrder)})`,
method: 'DELETE',
ifMatch: params.ifMatch || '*',
}),
},
transformResponse: transformSapProxyResponse,
outputs: {
status: { type: 'number', description: 'HTTP status code returned by SAP (204 on success)' },
data: { type: 'json', description: 'Null on successful deletion' },
},
}
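The delete tool builds its entity path with `quoteOdataKey` and defaults `If-Match` to `"*"` when no ETag is supplied. Assuming `quoteOdataKey` follows the usual OData v2 convention (wrap the string key in single quotes and double any embedded single quotes), the path and header resolution can be sketched as:

```typescript
// quoteOdataKeySketch is an assumption about quoteOdataKey's behavior, not its actual source.
function quoteOdataKeySketch(key: string): string {
  return `'${key.replace(/'/g, "''")}'`
}

// Mirrors the request body builder above: keyed path plus optimistic-concurrency header.
function deletePathAndEtag(salesOrder: string, ifMatch?: string) {
  return {
    path: `/A_SalesOrder(${quoteOdataKeySketch(salesOrder)})`,
    ifMatch: ifMatch || '*', // "*" matches any ETag, i.e. an unconditional delete
  }
}
```

Passing a concrete ETag in `ifMatch` makes the delete fail with 412 if the order changed since it was read, which is the safer mode for concurrent editors.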


@@ -0,0 +1,115 @@
import type { GetBillingDocumentParams, SapProxyResponse } from '@/tools/sap_s4hana/types'
import {
baseProxyBody,
buildEntityQuery,
quoteOdataKey,
SAP_PROXY_URL,
transformSapProxyResponse,
} from '@/tools/sap_s4hana/utils'
import type { ToolConfig } from '@/tools/types'
export const getBillingDocumentTool: ToolConfig<GetBillingDocumentParams, SapProxyResponse> = {
id: 'sap_s4hana_get_billing_document',
name: 'SAP S/4HANA Get Billing Document',
description:
'Retrieve a single billing document (customer invoice) by BillingDocument key from SAP S/4HANA Cloud (API_BILLING_DOCUMENT_SRV, A_BillingDocument).',
version: '1.0.0',
params: {
subdomain: {
type: 'string',
required: true,
visibility: 'user-only',
description:
'SAP BTP subaccount subdomain (technical name of your subaccount, not the S/4HANA host)',
},
region: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'BTP region (e.g. eu10, us10)',
},
clientId: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'OAuth client ID from the S/4HANA Communication Arrangement',
},
clientSecret: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'OAuth client secret from the S/4HANA Communication Arrangement',
},
deploymentType: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Deployment type: cloud_public (default), cloud_private, or on_premise',
},
authType: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Authentication type: oauth_client_credentials (default) or basic',
},
baseUrl: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Base URL of the S/4HANA host (Cloud Private / On-Premise)',
},
tokenUrl: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'OAuth token URL (Cloud Private / On-Premise + OAuth)',
},
username: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Username for HTTP Basic auth',
},
password: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Password for HTTP Basic auth',
},
billingDocument: {
type: 'string',
required: true,
visibility: 'user-or-llm',
description: 'BillingDocument key (string, up to 10 characters)',
},
select: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'Comma-separated fields to return ($select)',
},
expand: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'Comma-separated navigation properties to expand (e.g., "to_Item,to_Partner")',
},
},
request: {
url: SAP_PROXY_URL,
method: 'POST',
headers: () => ({ 'Content-Type': 'application/json' }),
body: (params) => ({
...baseProxyBody(params),
service: 'API_BILLING_DOCUMENT_SRV',
path: `/A_BillingDocument(${quoteOdataKey(params.billingDocument)})`,
method: 'GET',
query: buildEntityQuery(params),
}),
},
transformResponse: transformSapProxyResponse,
outputs: {
status: { type: 'number', description: 'HTTP status code returned by SAP' },
data: { type: 'json', description: 'A_BillingDocument entity' },
},
}
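The keyed GET tools (billing document, business partner, customer, inbound delivery) all delegate their query string to `buildEntityQuery`. A plausible sketch of what that helper assembles, assuming it maps the optional `select`/`expand` params onto the OData `$select`/`$expand` system query options alongside `$format=json` (the function name below and this behavior are assumptions about the shared utility):

```typescript
// Hypothetical reconstruction of buildEntityQuery for the single-entity GET tools.
function buildEntityQuerySketch(params: { select?: string; expand?: string }): Record<string, string> {
  const query: Record<string, string> = { $format: 'json' }
  if (params.select) query.$select = params.select // comma-separated field list
  if (params.expand) query.$expand = params.expand // comma-separated navigation properties
  return query
}
```

Omitting both options returns the full entity; `$expand` is what pulls in child collections such as `to_Item` in a single round trip.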


@@ -0,0 +1,115 @@
import type { GetBusinessPartnerParams, SapProxyResponse } from '@/tools/sap_s4hana/types'
import {
baseProxyBody,
buildEntityQuery,
quoteOdataKey,
SAP_PROXY_URL,
transformSapProxyResponse,
} from '@/tools/sap_s4hana/utils'
import type { ToolConfig } from '@/tools/types'
export const getBusinessPartnerTool: ToolConfig<GetBusinessPartnerParams, SapProxyResponse> = {
id: 'sap_s4hana_get_business_partner',
name: 'SAP S/4HANA Get Business Partner',
description:
'Retrieve a single business partner by BusinessPartner key from SAP S/4HANA Cloud (API_BUSINESS_PARTNER, A_BusinessPartner).',
version: '1.0.0',
params: {
subdomain: {
type: 'string',
required: true,
visibility: 'user-only',
description:
'SAP BTP subaccount subdomain (technical name of your subaccount, not the S/4HANA host)',
},
region: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'BTP region (e.g. eu10, us10)',
},
clientId: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'OAuth client ID from the S/4HANA Communication Arrangement',
},
clientSecret: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'OAuth client secret from the S/4HANA Communication Arrangement',
},
deploymentType: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Deployment type: cloud_public (default), cloud_private, or on_premise',
},
authType: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Authentication type: oauth_client_credentials (default) or basic',
},
baseUrl: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Base URL of the S/4HANA host (Cloud Private / On-Premise)',
},
tokenUrl: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'OAuth token URL (Cloud Private / On-Premise + OAuth)',
},
username: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Username for HTTP Basic auth',
},
password: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Password for HTTP Basic auth',
},
businessPartner: {
type: 'string',
required: true,
visibility: 'user-or-llm',
description: 'BusinessPartner key (string, up to 10 characters)',
},
select: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'Comma-separated fields to return ($select)',
},
expand: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'Comma-separated navigation properties to expand ($expand)',
},
},
request: {
url: SAP_PROXY_URL,
method: 'POST',
headers: () => ({ 'Content-Type': 'application/json' }),
body: (params) => ({
...baseProxyBody(params),
service: 'API_BUSINESS_PARTNER',
path: `/A_BusinessPartner(${quoteOdataKey(params.businessPartner)})`,
method: 'GET',
query: buildEntityQuery(params),
}),
},
transformResponse: transformSapProxyResponse,
outputs: {
status: { type: 'number', description: 'HTTP status code returned by SAP' },
data: { type: 'json', description: 'A_BusinessPartner entity' },
},
}


@@ -0,0 +1,116 @@
import type { GetCustomerParams, SapProxyResponse } from '@/tools/sap_s4hana/types'
import {
baseProxyBody,
buildEntityQuery,
quoteOdataKey,
SAP_PROXY_URL,
transformSapProxyResponse,
} from '@/tools/sap_s4hana/utils'
import type { ToolConfig } from '@/tools/types'
export const getCustomerTool: ToolConfig<GetCustomerParams, SapProxyResponse> = {
id: 'sap_s4hana_get_customer',
name: 'SAP S/4HANA Get Customer',
description:
'Retrieve a single customer by Customer key from SAP S/4HANA Cloud (API_BUSINESS_PARTNER, A_Customer).',
version: '1.0.0',
params: {
subdomain: {
type: 'string',
required: true,
visibility: 'user-only',
description:
'SAP BTP subaccount subdomain (technical name of your subaccount, not the S/4HANA host)',
},
region: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'BTP region (e.g. eu10, us10)',
},
clientId: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'OAuth client ID from the S/4HANA Communication Arrangement',
},
clientSecret: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'OAuth client secret from the S/4HANA Communication Arrangement',
},
deploymentType: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Deployment type: cloud_public (default), cloud_private, or on_premise',
},
authType: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Authentication type: oauth_client_credentials (default) or basic',
},
baseUrl: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Base URL of the S/4HANA host (Cloud Private / On-Premise)',
},
tokenUrl: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'OAuth token URL (Cloud Private / On-Premise + OAuth)',
},
username: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Username for HTTP Basic auth',
},
password: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Password for HTTP Basic auth',
},
customer: {
type: 'string',
required: true,
visibility: 'user-or-llm',
description: 'Customer key (string, up to 10 characters)',
},
select: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'Comma-separated fields to return ($select)',
},
expand: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description:
'Comma-separated navigation properties to expand (e.g., "to_CustomerCompany,to_CustomerSalesArea")',
},
},
request: {
url: SAP_PROXY_URL,
method: 'POST',
headers: () => ({ 'Content-Type': 'application/json' }),
body: (params) => ({
...baseProxyBody(params),
service: 'API_BUSINESS_PARTNER',
path: `/A_Customer(${quoteOdataKey(params.customer)})`,
method: 'GET',
query: buildEntityQuery(params),
}),
},
transformResponse: transformSapProxyResponse,
outputs: {
status: { type: 'number', description: 'HTTP status code returned by SAP' },
data: { type: 'json', description: 'A_Customer entity' },
},
}


@@ -0,0 +1,116 @@
import type { GetInboundDeliveryParams, SapProxyResponse } from '@/tools/sap_s4hana/types'
import {
baseProxyBody,
buildEntityQuery,
quoteOdataKey,
SAP_PROXY_URL,
transformSapProxyResponse,
} from '@/tools/sap_s4hana/utils'
import type { ToolConfig } from '@/tools/types'
export const getInboundDeliveryTool: ToolConfig<GetInboundDeliveryParams, SapProxyResponse> = {
id: 'sap_s4hana_get_inbound_delivery',
name: 'SAP S/4HANA Get Inbound Delivery',
description:
'Retrieve a single inbound delivery by DeliveryDocument key from SAP S/4HANA Cloud (API_INBOUND_DELIVERY_SRV;v=0002, A_InbDeliveryHeader).',
version: '1.0.0',
params: {
subdomain: {
type: 'string',
required: true,
visibility: 'user-only',
description:
'SAP BTP subaccount subdomain (technical name of your subaccount, not the S/4HANA host)',
},
region: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'BTP region (e.g. eu10, us10)',
},
clientId: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'OAuth client ID from the S/4HANA Communication Arrangement',
},
clientSecret: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'OAuth client secret from the S/4HANA Communication Arrangement',
},
deploymentType: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Deployment type: cloud_public (default), cloud_private, or on_premise',
},
authType: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Authentication type: oauth_client_credentials (default) or basic',
},
baseUrl: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Base URL of the S/4HANA host (Cloud Private / On-Premise)',
},
tokenUrl: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'OAuth token URL (Cloud Private / On-Premise + OAuth)',
},
username: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Username for HTTP Basic auth',
},
password: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Password for HTTP Basic auth',
},
deliveryDocument: {
type: 'string',
required: true,
visibility: 'user-or-llm',
description: 'DeliveryDocument key (string, up to 10 characters)',
},
select: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'Comma-separated fields to return ($select)',
},
expand: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description:
'Comma-separated navigation properties to expand (e.g., "to_DeliveryDocumentItem")',
},
},
request: {
url: SAP_PROXY_URL,
method: 'POST',
headers: () => ({ 'Content-Type': 'application/json' }),
body: (params) => ({
...baseProxyBody(params),
service: 'API_INBOUND_DELIVERY_SRV;v=0002',
path: `/A_InbDeliveryHeader(${quoteOdataKey(params.deliveryDocument)})`,
method: 'GET',
query: buildEntityQuery(params),
}),
},
transformResponse: transformSapProxyResponse,
outputs: {
status: { type: 'number', description: 'HTTP status code returned by SAP' },
data: { type: 'json', description: 'A_InbDeliveryHeader entity' },
},
}

Some files were not shown because too many files have changed in this diff.