Compare commits

14 Commits

Author SHA1 Message Date
Waleed Latif
315e4509a3 fix(attio): use code subblock type for JSON input fields 2026-02-24 15:25:47 -08:00
Waleed Latif
5e3c43ff83 fix(attio): use timestamp generationType for date wandConfig fields 2026-02-24 13:52:40 -08:00
waleed
8db43b775c update docs 2026-02-24 13:50:36 -08:00
Waleed Latif
edf3c0dc06 feat(attio): add Attio CRM integration with 40 tools and 18 webhook triggers 2026-02-24 13:43:48 -08:00
Waleed
9a31c7d8ad improvement(processing): reduce redundant DB queries in execution preprocessing (#3320)
* improvement(processing): reduce redundant DB queries in execution preprocessing

* improvement(processing): add defensive ID check for prefetched workflow record

* improvement(processing): fix type safety in execution error logging

Replace `as any` cast in non-SSE error path with proper `buildTraceSpans()`
transformation, matching the SSE error path. Remove redundant `as any` cast
in preprocessing.ts where the types already align.

* improvement(processing): replace `as any` casts with proper types in logging

- logger.ts: cast JSONB cost column to `WorkflowExecutionLog['cost']` instead
  of `any` in both `completeWorkflowExecution` and `getWorkflowExecution`
- logger.ts: replace `(orgUsageBefore as any)?.toString?.()` with `String()`
  since COALESCE guarantees a non-null SQL aggregate value
- logging-session.ts: cast JSONB cost to `AccumulatedCost` (the local
  interface) instead of `any` in `loadExistingCost`

* improvement(processing): use exported HighestPrioritySubscription type in usage.ts

Replace inline `Awaited<ReturnType<typeof getHighestPrioritySubscription>>`
with the already-exported `HighestPrioritySubscription` type alias.

* improvement(processing): replace remaining `as any` casts with proper types

- preprocessing.ts: use exported `HighestPrioritySubscription` type instead
  of redeclaring via `Awaited<ReturnType<...>>`
- deploy/route.ts, status/route.ts: cast `hasWorkflowChanged` args to
  `WorkflowState` instead of `any` (JSONB + object literal narrowing)
- state/route.ts: type block sanitization and save with `BlockState` and
  `WorkflowState` instead of `any`
- search-suggestions.ts: remove 8 unnecessary `as any` casts on `'date'`
  literal that already satisfies the `Suggestion['category']` union

* fix(processing): prevent double-billing race in LoggingSession completion

When executeWorkflowCore throws, its catch block fire-and-forgets
safeCompleteWithError, then re-throws. The caller's catch block also
fire-and-forgets safeCompleteWithError on the same LoggingSession. Both
check this.completed (still false) before either's async DB write resolves,
so both proceed to completeWorkflowExecution which uses additive SQL for
billing — doubling the charged cost on every failed execution.

Fix: add a synchronous `completing` flag set immediately before the async
work begins. This blocks concurrent callers at the guard check. On failure,
the flag is reset so the safe* fallback path (completeWithCostOnlyLog) can
still attempt recovery.
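The guard can be sketched in a few lines (a minimal illustration — `LoggingSession`, `safeCompleteWithError`, and `completeWorkflowExecution` are named in the commit, but the bodies here are assumptions, not the actual implementation):

```typescript
// Minimal sketch of the double-billing race and the synchronous-flag fix.
class LoggingSession {
  billedCount = 0
  private completed = false
  private completing = false

  async safeCompleteWithError(): Promise<void> {
    // Without `completing`, two fire-and-forget callers both pass this
    // check before either async DB write flips `completed`.
    if (this.completed || this.completing) return
    this.completing = true // set synchronously, before any await
    try {
      await this.completeWorkflowExecution()
      this.completed = true
    } catch {
      // Reset so the cost-only fallback path can still attempt recovery.
      this.completing = false
    }
  }

  private async completeWorkflowExecution(): Promise<void> {
    await Promise.resolve() // stands in for the additive SQL billing write
    this.billedCount += 1
  }
}
```

With the flag, two concurrent `safeCompleteWithError()` calls reach the additive billing write exactly once; without it, both would.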

* fix(processing): unblock error responses and isolate run-count failures

Remove unnecessary `await waitForCompletion()` from non-SSE and SSE error
paths where no `markAsFailed()` follows — these were blocking error responses
on log persistence for no reason. Wrap `updateWorkflowRunCounts` in its own
try/catch so a run-count DB failure cannot prevent session completion, billing,
and trace span persistence.

* improvement(processing): remove dead setupExecutor method

The method body was just a debug log with an `any` parameter — logging
now works entirely through trace spans with no executor integration.

* remove logger.debug

* fix(processing): guard completionPromise as write-once (singleton promise)

Prevent concurrent safeComplete* calls from overwriting completionPromise
with a no-op. The guard now lives at the assignment site — if a completion
is already in-flight, return its promise instead of starting a new one.
This ensures waitForCompletion() always awaits the real work.
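A sketch of the write-once guard (field and method names follow the commit message; the surrounding class is hypothetical):

```typescript
// Write-once completion promise: the guard lives at the assignment site.
class CompletionGuard {
  runs = 0
  private completionPromise: Promise<void> | null = null

  // If a completion is already in-flight, return its promise instead of
  // starting a new one (which would overwrite it with a no-op).
  complete(): Promise<void> {
    if (!this.completionPromise) {
      this.completionPromise = this.doComplete()
    }
    return this.completionPromise
  }

  // Always awaits the real work, never a stale or replaced promise.
  waitForCompletion(): Promise<void> {
    return this.completionPromise ?? Promise.resolve()
  }

  private async doComplete(): Promise<void> {
    await Promise.resolve() // stands in for log persistence and billing
    this.runs += 1
  }
}
```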

* improvement(processing): remove empty else/catch blocks left by debug log cleanup

* fix(processing): enforce waitForCompletion inside markAsFailed to prevent completion races

Move waitForCompletion() into markAsFailed() so every call site is
automatically safe against in-flight fire-and-forget completions.
Remove the now-redundant external waitForCompletion() calls in route.ts.

* fix(processing): reset completing flag on fallback failure, clean up empty catch

- completeWithCostOnlyLog now resets this.completing = false when
  the fallback itself fails, preventing a permanently stuck session
- Use _disconnectError in MCP test-connection to signal intentional ignore

* fix(processing): restore disconnect error logging in MCP test-connection

Revert unrelated debug log removal — this file isn't part of the
processing improvements and the log aids connection leak detection.

* fix(processing): address audit findings across branch

- preprocessing.ts: use undefined (not null) for failed subscription
  fetch so getUserUsageLimit does a fresh lookup instead of silently
  falling back to free-tier limits
- deployed/route.ts: log warning on loadDeployedWorkflowState failure
  instead of silently swallowing the error
- schedule-execution.ts: remove dead successLog parameter and all
  call-site arguments left over from logger.debug cleanup
- mcp/middleware.ts: drop unused error binding in empty catch
- audit/log.ts, wand.ts: promote logger.debug to logger.warn in catch
  blocks where these are the only failure signal

* revert: undo unnecessary subscription null→undefined change

getHighestPrioritySubscription never throws (it catches internally
and returns null), so the catch block in preprocessExecution is dead
code. The null vs undefined distinction doesn't matter and the
coercions added unnecessary complexity.

* improvement(processing): remove dead try/catch around getHighestPrioritySubscription

getHighestPrioritySubscription catches internally and returns null
on error, so the wrapping try/catch was unreachable dead code.

* improvement(processing): remove dead getSnapshotByHash method

No longer called after createSnapshotWithDeduplication was refactored
to use a single upsert instead of select-then-insert.

---------
2026-02-24 11:55:59 -08:00
Jay Prajapati
9e817bc5b0 fix(auth): make DISABLE_AUTH work in web app (#3297)
Return an anonymous session using the same response envelope as Better Auth's get-session endpoint, and make the session provider tolerant of both wrapped and raw session payloads.
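One way to sketch that tolerance (the `{ data: ... }` wrapper and the session shape are illustrative assumptions, not Better Auth's documented envelope):

```typescript
// Accept both a wrapped payload ({ data: <session> }) and a raw session.
// Shapes here are assumptions for illustration only.
interface Session {
  user: { id: string; isAnonymous?: boolean } | null
}

function unwrapSession(payload: unknown): Session | null {
  if (payload == null || typeof payload !== 'object') return null
  const obj = payload as Record<string, unknown>
  // Wrapped: { data: <session> }; raw: the session object itself.
  const candidate = 'data' in obj ? obj.data : obj
  if (candidate == null || typeof candidate !== 'object') return null
  return candidate as Session
}
```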

Fixes #2524
2026-02-24 09:52:44 -08:00
Waleed
d824ce5b07 feat(confluence): add webhook triggers for Confluence events (#3318)
* feat(confluence): add webhook triggers for Confluence events

Adds 16 Confluence triggers: page CRUD, comments, blogs, attachments,
spaces, and labels — plus a generic webhook trigger.

* feat(confluence): wire triggers into block and webhook processor

Add trigger subBlocks and triggers config to ConfluenceV2Block so
triggers appear in the UI. Add Confluence signature verification and
event filtering to the webhook processor.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(confluence): align trigger outputs with actual webhook payloads

- Rewrite output builders to match real Confluence webhook payload
  structure (flat spaceKey, numeric version, actual API fields)
- Remove fabricated fields (nested space/version objects, comment.body)
- Add missing fields (creatorAccountId, lastModifierAccountId, self,
  creationDate, modificationDate, accountType)
- Add extractor functions (extractPageData, extractCommentData, etc.)
  following the same pattern as Jira
- Add formatWebhookInput handler for Confluence in utils.server.ts
  so payloads are properly destructured before reaching workflows
- Make event field matching resilient (check both event and webhookEvent)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(confluence): handle generic webhook in formatWebhookInput

The generic webhook (confluence_webhook) was falling through to
extractPageData, which only returns the page field. For a catch-all
trigger that accepts all event types, preserve all entity fields
(page, comment, blog, attachment, space, label, content).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(confluence): use payload-based filtering instead of nonexistent event field

Confluence Cloud webhooks don't include an event/webhookEvent field in the
body (unlike Jira). Replaced broken event string matching with structural
payload filtering that checks which entity key is present.

* lint

* fix(confluence): read webhookSecret instead of secret in signature verification

* fix(webhooks): read webhookSecret for jira, linear, and github signature verification

These providers define their secret subBlock with id: 'webhookSecret' but the
processor was reading providerConfig.secret which is always undefined, silently
skipping signature verification even when a secret is configured.
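The shape of the fix can be sketched as follows (the `providerConfig` interface and the HMAC-SHA256 scheme are illustrative — actual signature formats vary by provider):

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto'

// The secret subBlock is stored under `webhookSecret`; reading
// `providerConfig.secret` yields undefined and silently skips verification.
interface ProviderConfig {
  webhookSecret?: string
}

function verifySignature(config: ProviderConfig, rawBody: string, signature: string): boolean {
  const secret = config.webhookSecret // not config.secret
  if (!secret) return true // no secret configured: nothing to verify
  const expected = createHmac('sha256', secret).update(rawBody).digest('hex')
  // timingSafeEqual throws on unequal lengths, so guard first.
  if (expected.length !== signature.length) return false
  return timingSafeEqual(Buffer.from(expected), Buffer.from(signature))
}
```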

* fix(confluence): use event field for exact matching with entity-category fallback

Admin REST API webhooks (Settings > Webhooks) include an event field for
action-level filtering (page_created vs page_updated). Connect app webhooks
omit it, so we fall back to entity-category matching.
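The two-tier matching might look like this (trigger names and payload shapes are simplified assumptions based on the commit messages):

```typescript
// Exact match on `event` when present (Admin REST API webhooks);
// structural entity-category fallback when it is absent (Connect apps).
function matchesTrigger(payload: Record<string, unknown>, trigger: string): boolean {
  if (typeof payload.event === 'string') {
    return payload.event === trigger // e.g. 'page_created' vs 'page_updated'
  }
  // Fallback: a 'page_*' trigger matches any payload carrying a `page` key.
  const entity = trigger.split('_')[0] // 'page', 'comment', 'blog', ...
  return entity in payload
}
```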

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 23:36:43 -08:00
Waleed
9bd357f184 improvement(audit): enrich metadata across 23 audit log call sites (#3319)
* improvement(audit): enrich metadata across 23 audit log call sites

* improvement(audit): enrich metadata across 23 audit log call sites
2026-02-23 23:35:57 -08:00
Waleed
d4a014f423 feat(public-api): add env var and permission group controls to disable public API access (#3317)
Add DISABLE_PUBLIC_API / NEXT_PUBLIC_DISABLE_PUBLIC_API environment variables
and disablePublicApi permission group config option to allow self-hosted
deployments and enterprise admins to globally disable the public API toggle.

When disabled: the Access toggle is hidden in the Edit API Info modal,
the execute route blocks unauthenticated public access (401), and the
public-api PATCH route rejects enabling public API (403).
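A sketch of the gating, using the env var names from the commit message (the permission-group shape and the function split are illustrative, not the actual route code):

```typescript
// Global kill switch: either env var disables the public API toggle.
function isPublicApiDisabled(env: Record<string, string | undefined>): boolean {
  return env.DISABLE_PUBLIC_API === 'true' || env.NEXT_PUBLIC_DISABLE_PUBLIC_API === 'true'
}

// Hypothetical permission-group check layered on top: enterprise admins
// can disable the toggle per group via `disablePublicApi`.
function canEnablePublicApi(
  env: Record<string, string | undefined>,
  group?: { disablePublicApi?: boolean }
): boolean {
  return !isPublicApiDisabled(env) && !group?.disablePublicApi
}
```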

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 23:03:03 -08:00
Waleed
fe34d23a98 feat(gong): add Gong integration with 18 API tools (#3316)
* feat(gong): add Gong integration with 18 API tools

* fix(gong): make toDateTime optional for list_calls, add list_trackers to workspaceId condition

* chore(gong): regenerate docs

* fix(hex): update icon color and block bgColor
2026-02-23 17:57:10 -08:00
Waleed
b8dfb4dd20 fix(copy): preserve block names when pasting into workflows without conflicts (#3315) 2026-02-23 15:42:24 -08:00
Waleed
91666491cd fix(execution): scope X-Sim-Via header to internal routes and enforce depth limit (#3313)
* feat(execution): workflow cycle detection via X-Sim-Via header

* fix(execution): scope X-Sim-Via header to internal routes and add child workflow depth validation

- Move call chain header injection from HTTP tool layer (request.ts/utils.ts)
  to tool execution layer (tools/index.ts) gated on isInternalRoute, preventing
  internal workflow IDs from leaking to external third-party APIs
- Remove cycle detection from validateCallChain — depth limit alone prevents
  infinite loops while allowing legitimate self-recursion (pagination, tree
  processing, batch splitting)
- Add validateCallChain check in workflow-handler.ts before spawning child
  executor, closing the gap where in-process child workflows skipped validation
- Remove unsafe `(params as any)._context` type bypass in request.ts

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(execution): validate child call chain instead of parent chain

Validate childCallChain (after appending current workflow ID) rather
than ctx.callChain (parent). Prevents an off-by-one where a chain at
depth 10 could still spawn an 11th workflow.
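The fix can be sketched as follows (`MAX_WORKFLOW_DEPTH` and the helper names are assumptions for illustration; the real limit and call sites may differ):

```typescript
// Validate the chain *after* appending the current workflow ID, closing
// the off-by-one where a depth-10 chain could spawn an 11th workflow.
const MAX_WORKFLOW_DEPTH = 10

function validateCallChain(chain: string[]): void {
  if (chain.length > MAX_WORKFLOW_DEPTH) {
    throw new Error(`workflow call depth ${chain.length} exceeds ${MAX_WORKFLOW_DEPTH}`)
  }
}

function spawnChildChain(parentChain: string[], workflowId: string): string[] {
  const childCallChain = [...parentChain, workflowId]
  validateCallChain(childCallChain) // child chain, not the parent chain
  return childCallChain
}
```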

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 15:19:31 -08:00
Waleed
eafbb9fef4 fix(tag-dropdown): exclude downstream blocks in loops and parallel siblings (#3312)
* fix(tag-dropdown): exclude downstream blocks in loops and parallel siblings from reference picker

* chore(serializer): remove unused computeAccessibleBlockIds method

* chore(block-path-calculator): remove unused calculateAccessibleBlocksForWorkflow method

* chore(tag-dropdown): remove no-op loop node filter

* fix(tag-dropdown): remove parallel container from accessible references in parallel branches

* chore(tag-dropdown): remove no-op starter block filter

* fix(tag-dropdown): restore parallel container in accessible references for blocks inside parallel

* fix(copilot): exclude downstream loop nodes and parallel siblings from accessible references
2026-02-23 14:21:40 -08:00
Waleed
132fef06a1 fix(redis): tighten stale TCP connection detection and add fast lease deadline (#3311)
* fix(redis): tighten stale TCP connection detection and add fast lease deadline

* revert(redis): restore original retryStrategy logging

* fix(redis): clear deadline timer after Promise.race to prevent memory leak

* fix(redis): downgrade lease fallback log to warn — unavailable is expected fallback
2026-02-23 13:22:29 -08:00
231 changed files with 28627 additions and 849 deletions


@@ -710,6 +710,17 @@ export function NotionIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function GongIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 55.4 60' fill='none' xmlns='http://www.w3.org/2000/svg'>
<path
fill='currentColor'
d='M54.1,25.7H37.8c-0.9,0-1.6,1-1.3,1.8l3.9,10.1c0.2,0.4-0.2,0.9-0.7,0.9l-5-0.3c-0.2,0-0.4,0.1-0.6,0.3L30.3,44c-0.2,0.3-0.6,0.4-1,0.2l-5.8-3.9c-0.2-0.2-0.5-0.2-0.8,0l-8,5.4c-0.5,0.4-1.2-0.1-1-0.7L16,37c0.1-0.3-0.1-0.7-0.4-0.8l-4.2-1.7c-0.4-0.2-0.6-0.7-0.3-1l3.7-4.6c0.2-0.2,0.2-0.6,0-0.8l-3.1-4.5c-0.3-0.4,0-1,0.5-1l4.9-0.4c0.4,0,0.6-0.3,0.6-0.7l-0.4-6.8c0-0.5,0.5-0.8,0.9-0.7l6,2.5c0.3,0.1,0.6,0,0.8-0.2l4.2-4.6c0.3-0.4,0.9-0.3,1.1,0.2l2.5,6.4c0.3,0.8,1.3,1.1,2,0.6l9.8-7.3c1.1-0.8,0.4-2.6-1-2.4L37.3,10c-0.3,0-0.6-0.1-0.7-0.4l-3.4-8.7c-0.4-0.9-1.5-1.1-2.2-0.4l-7.4,8c-0.2,0.2-0.5,0.3-0.8,0.2l-9.7-4.1c-0.9-0.4-1.8,0.2-1.9,1.2l-0.4,10c0,0.4-0.3,0.6-0.6,0.6l-8.9,0.6c-1,0.1-1.6,1.2-1,2.1l5.9,8.7c0.2,0.2,0.2,0.6,0,0.8l-6,6.9C-0.3,36,0,37.1,0.8,37.4l6.9,3c0.3,0.1,0.5,0.5,0.4,0.8L3.7,58.3c-0.3,1.2,1.1,2.1,2.1,1.4l16.5-11.8c0.2-0.2,0.5-0.2,0.8,0l7.5,5.3c0.6,0.4,1.5,0.3,1.9-0.4l4.7-7.2c0.1-0.2,0.4-0.3,0.6-0.3l11.2,1.4c0.9,0.1,1.8-0.6,1.5-1.5l-4.7-12.1c-0.1-0.3,0-0.7,0.4-0.9l8.5-4C55.9,27.6,55.5,25.7,54.1,25.7z'
/>
</svg>
)
}
export function GmailIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg
@@ -3541,6 +3552,15 @@ export function TrelloIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function AttioIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 60.9 50' fill='currentColor'>
<path d='M60.3,34.8l-5.1-8.1c0,0,0,0,0,0L54.7,26c-0.8-1.2-2.1-1.9-3.5-1.9L43,24L42.5,25l-9.8,15.7l-0.5,0.9l4.1,6.6c0.8,1.2,2.1,1.9,3.5,1.9h11.5c1.4,0,2.8-0.7,3.5-1.9l0.4-0.6c0,0,0,0,0,0l5.1-8.2C61.1,37.9,61.1,36.2,60.3,34.8L60.3,34.8z M58.7,38.3l-5.1,8.2c0,0,0,0.1-0.1,0.1c-0.2,0.2-0.4,0.2-0.5,0.2c-0.1,0-0.4,0-0.6-0.3l-5.1-8.2c-0.1-0.1-0.1-0.2-0.2-0.3c0-0.1-0.1-0.2-0.1-0.3c-0.1-0.4-0.1-0.8,0-1.3c0.1-0.2,0.1-0.4,0.3-0.6l5.1-8.1c0,0,0,0,0,0c0.1-0.2,0.3-0.3,0.4-0.3c0.1,0,0.1,0,0.1,0c0,0,0,0,0.1,0c0.1,0,0.4,0,0.6,0.3l5.1,8.1C59.2,36.6,59.2,37.5,58.7,38.3L58.7,38.3z' />
<path d='M45.2,15.1c0.8-1.3,0.8-3.1,0-4.4l-5.1-8.1l-0.4-0.7C38.9,0.7,37.6,0,36.2,0H24.7c-1.4,0-2.7,0.7-3.5,1.9L0.6,34.9C0.2,35.5,0,36.3,0,37c0,0.8,0.2,1.5,0.6,2.2l5.5,8.8C6.9,49.3,8.2,50,9.7,50h11.5c1.4,0,2.8-0.7,3.5-1.9l0.4-0.7c0,0,0,0,0,0c0,0,0,0,0,0l4.1-6.6l12.1-19.4L45.2,15.1L45.2,15.1z M44,13c0,0.4-0.1,0.8-0.4,1.2L23.5,46.4c-0.2,0.3-0.5,0.3-0.6,0.3c-0.1,0-0.4,0-0.6-0.3l-5.1-8.2c-0.5-0.7-0.5-1.7,0-2.4L37.4,3.6c0.2-0.3,0.5-0.3,0.6-0.3c0.1,0,0.4,0,0.6,0.3l5.1,8.1C43.9,12.1,44,12.5,44,13z' />
</svg>
)
}
export function AsanaIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 24 24' fill='none'>
@@ -5824,7 +5844,7 @@ export function HexIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1450.3 600'>
<path
fill='#5F509D'
fill='#EDB9B8'
fillRule='evenodd'
d='m250.11,0v199.49h-50V0H0v600h200.11v-300.69h50v300.69h200.18V0h-200.18Zm249.9,0v600h450.29v-250.23h-200.2v149h-50v-199.46h250.2V0h-450.29Zm200.09,199.49v-99.49h50v99.49h-50Zm550.02,0V0h200.18v150l-100,100.09,100,100.09v249.82h-200.18v-300.69h-50v300.69h-200.11v-249.82l100.11-100.09-100.11-100.09V0h200.11v199.49h50Z'
/>


@@ -13,6 +13,7 @@ import {
ApolloIcon,
ArxivIcon,
AsanaIcon,
AttioIcon,
BrainIcon,
BrowserUseIcon,
CalComIcon,
@@ -40,6 +41,7 @@ import {
GithubIcon,
GitLabIcon,
GmailIcon,
GongIcon,
GoogleBooksIcon,
GoogleCalendarIcon,
GoogleDocsIcon,
@@ -158,6 +160,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
apollo: ApolloIcon,
arxiv: ArxivIcon,
asana: AsanaIcon,
attio: AttioIcon,
browser_use: BrowserUseIcon,
calcom: CalComIcon,
calendly: CalendlyIcon,
@@ -183,6 +186,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
github_v2: GithubIcon,
gitlab: GitLabIcon,
gmail_v2: GmailIcon,
gong: GongIcon,
google_books: GoogleBooksIcon,
google_calendar_v2: GoogleCalendarIcon,
google_docs: GoogleDocsIcon,

File diff suppressed because it is too large


@@ -0,0 +1,774 @@
---
title: Gong
description: Revenue intelligence and conversation analytics
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="gong"
color="#8039DF"
/>
{/* MANUAL-CONTENT-START:intro */}
[Gong](https://www.gong.io/) is a revenue intelligence platform that captures and analyzes customer interactions across calls, emails, and meetings. By integrating Gong with Sim, your agents can access conversation data, user analytics, coaching metrics, and more through automated workflows.
The Gong integration in Sim provides tools to:
- **List and retrieve calls:** Fetch calls by date range, get individual call details, or retrieve extensive call data including trackers, topics, interaction stats, and points of interest.
- **Access call transcripts:** Retrieve full transcripts with speaker turns, topics, and sentence-level timestamps for any recorded call.
- **Manage users:** List all Gong users in your account or retrieve detailed information for a specific user, including settings, spoken languages, and contact details.
- **Analyze activity and performance:** Pull aggregated activity statistics, interaction stats (longest monologue, interactivity, patience, question rate), and answered scorecard data for your team.
- **Work with scorecards and trackers:** List scorecard definitions and keyword tracker configurations to understand how your team's conversations are being evaluated and monitored.
- **Browse the call library:** List library folders and retrieve their contents, including call snippets and notes curated by your team.
- **Access coaching metrics:** Retrieve coaching data for managers and their direct reports to track team development.
- **List Engage flows:** Fetch sales engagement sequences (flows) with visibility and ownership details.
- **Look up contacts by email or phone:** Find all Gong references to a specific email address or phone number, including related calls, emails, meetings, CRM data, and customer engagement events.
By combining these capabilities, you can automate sales coaching workflows, extract conversation insights, monitor team performance, sync Gong data with other systems, and build intelligent pipelines around your organization's revenue conversations -- all securely using your Gong API credentials.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Integrate Gong into your workflow. Access call recordings, transcripts, user data, activity stats, scorecards, trackers, library content, coaching metrics, and more via the Gong API.
## Tools
### `gong_list_calls`
Retrieve call data by date range from Gong.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `fromDateTime` | string | Yes | Start date/time in ISO-8601 format \(e.g., 2024-01-01T00:00:00Z\) |
| `toDateTime` | string | No | End date/time in ISO-8601 format \(e.g., 2024-01-31T23:59:59Z\). If omitted, lists calls up to the most recent. |
| `cursor` | string | No | Pagination cursor from a previous response |
| `workspaceId` | string | No | Gong workspace ID to filter calls |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `calls` | array | List of calls matching the date range |
| ↳ `id` | string | Gong's unique numeric identifier for the call |
| ↳ `title` | string | Call title |
| ↳ `scheduled` | string | Scheduled call time in ISO-8601 format |
| ↳ `started` | string | Recording start time in ISO-8601 format |
| ↳ `duration` | number | Call duration in seconds |
| ↳ `direction` | string | Call direction \(Inbound/Outbound\) |
| ↳ `system` | string | Communication platform used \(e.g., Outreach\) |
| ↳ `scope` | string | Call scope: 'Internal', 'External', or 'Unknown' |
| ↳ `media` | string | Media type \(e.g., Video\) |
| ↳ `language` | string | Language code in ISO-639-2B format |
| ↳ `url` | string | URL to the call in the Gong web app |
| ↳ `primaryUserId` | string | Host team member identifier |
| ↳ `workspaceId` | string | Workspace identifier |
| ↳ `sdrDisposition` | string | SDR disposition classification |
| ↳ `clientUniqueId` | string | Call identifier from the origin recording system |
| ↳ `customData` | string | Metadata provided during call creation |
| ↳ `purpose` | string | Call purpose |
| ↳ `meetingUrl` | string | Web conference provider URL |
| ↳ `isPrivate` | boolean | Whether the call is private |
| ↳ `calendarEventId` | string | Calendar event identifier |
| `cursor` | string | Pagination cursor for the next page |
| `totalRecords` | number | Total number of records matching the filter |
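As a hedged illustration of how these inputs map onto a request: Gong's public REST API uses HTTP Basic auth with the access key as username and the secret as password against `api.gong.io/v2` endpoints, but treat the exact path and parameter handling below as assumptions.

```typescript
// Build (but do not send) a Gong list-calls request from the tool inputs.
function buildListCallsRequest(
  accessKey: string,
  accessKeySecret: string,
  fromDateTime: string,
  toDateTime?: string
): { url: string; headers: Record<string, string> } {
  const params = new URLSearchParams({ fromDateTime })
  if (toDateTime) params.set('toDateTime', toDateTime) // optional: omit to list up to the most recent call
  // Basic auth: base64 of "accessKey:accessKeySecret".
  const auth = Buffer.from(`${accessKey}:${accessKeySecret}`).toString('base64')
  return {
    url: `https://api.gong.io/v2/calls?${params.toString()}`,
    headers: { Authorization: `Basic ${auth}` },
  }
}
```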
### `gong_get_call`
Retrieve detailed data for a specific call from Gong.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `callId` | string | Yes | The Gong call ID to retrieve |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Gong's unique numeric identifier for the call |
| `title` | string | Call title |
| `url` | string | URL to the call in the Gong web app |
| `scheduled` | string | Scheduled call time in ISO-8601 format |
| `started` | string | Recording start time in ISO-8601 format |
| `duration` | number | Call duration in seconds |
| `direction` | string | Call direction \(Inbound/Outbound\) |
| `system` | string | Communication platform used \(e.g., Outreach\) |
| `scope` | string | Call scope: 'Internal', 'External', or 'Unknown' |
| `media` | string | Media type \(e.g., Video\) |
| `language` | string | Language code in ISO-639-2B format |
| `primaryUserId` | string | Host team member identifier |
| `workspaceId` | string | Workspace identifier |
| `sdrDisposition` | string | SDR disposition classification |
| `clientUniqueId` | string | Call identifier from the origin recording system |
| `customData` | string | Metadata provided during call creation |
| `purpose` | string | Call purpose |
| `meetingUrl` | string | Web conference provider URL |
| `isPrivate` | boolean | Whether the call is private |
| `calendarEventId` | string | Calendar event identifier |
### `gong_get_call_transcript`
Retrieve transcripts of calls from Gong by call IDs or date range.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `callIds` | string | No | Comma-separated list of call IDs to retrieve transcripts for |
| `fromDateTime` | string | No | Start date/time filter in ISO-8601 format |
| `toDateTime` | string | No | End date/time filter in ISO-8601 format |
| `workspaceId` | string | No | Gong workspace ID to filter calls |
| `cursor` | string | No | Pagination cursor from a previous response |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `callTranscripts` | array | List of call transcripts with speaker turns and sentences |
| ↳ `callId` | string | Gong's unique numeric identifier for the call |
| ↳ `transcript` | array | List of monologues in the call |
| ↳ `speakerId` | string | Unique ID of the speaker, cross-reference with parties |
| ↳ `topic` | string | Name of the topic being discussed |
| ↳ `sentences` | array | List of sentences spoken in the monologue |
| ↳ `start` | number | Start time of the sentence in milliseconds from call start |
| ↳ `end` | number | End time of the sentence in milliseconds from call start |
| ↳ `text` | string | The sentence text |
| `cursor` | string | Pagination cursor for the next page |
### `gong_get_extensive_calls`
Retrieve detailed call data including trackers, topics, and highlights from Gong.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `callIds` | string | No | Comma-separated list of call IDs to retrieve detailed data for |
| `fromDateTime` | string | No | Start date/time filter in ISO-8601 format |
| `toDateTime` | string | No | End date/time filter in ISO-8601 format |
| `workspaceId` | string | No | Gong workspace ID to filter calls |
| `primaryUserIds` | string | No | Comma-separated list of user IDs to filter calls by host |
| `cursor` | string | No | Pagination cursor from a previous response |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `calls` | array | List of detailed call objects with metadata, content, interaction stats, and collaboration data |
| ↳ `metaData` | object | Call metadata \(same fields as CallBasicData\) |
| ↳ `id` | string | Call ID |
| ↳ `title` | string | Call title |
| ↳ `scheduled` | string | Scheduled time in ISO-8601 |
| ↳ `started` | string | Start time in ISO-8601 |
| ↳ `duration` | number | Duration in seconds |
| ↳ `direction` | string | Call direction |
| ↳ `system` | string | Communication platform |
| ↳ `scope` | string | Internal/External/Unknown |
| ↳ `media` | string | Media type |
| ↳ `language` | string | Language code \(ISO-639-2B\) |
| ↳ `url` | string | Gong web app URL |
| ↳ `primaryUserId` | string | Host user ID |
| ↳ `workspaceId` | string | Workspace ID |
| ↳ `sdrDisposition` | string | SDR disposition |
| ↳ `clientUniqueId` | string | Origin system call ID |
| ↳ `customData` | string | Custom metadata |
| ↳ `purpose` | string | Call purpose |
| ↳ `meetingUrl` | string | Meeting URL |
| ↳ `isPrivate` | boolean | Whether call is private |
| ↳ `calendarEventId` | string | Calendar event ID |
| ↳ `context` | array | Links to external systems \(CRM, Dialer, etc.\) |
| ↳ `system` | string | External system name \(e.g., Salesforce\) |
| ↳ `objects` | array | List of objects within the external system |
| ↳ `parties` | array | List of call participants |
| ↳ `id` | string | Unique participant ID in the call |
| ↳ `name` | string | Participant name |
| ↳ `emailAddress` | string | Email address |
| ↳ `title` | string | Job title |
| ↳ `phoneNumber` | string | Phone number |
| ↳ `speakerId` | string | Speaker ID for transcript cross-reference |
| ↳ `userId` | string | Gong user ID |
| ↳ `affiliation` | string | Company or non-company |
| ↳ `methods` | array | Whether invited or attended |
| ↳ `context` | array | Links to external systems for this party |
| ↳ `content` | object | Call content data |
| ↳ `structure` | array | Call agenda parts |
| ↳ `name` | string | Agenda name |
| ↳ `duration` | number | Duration of this part in seconds |
| ↳ `topics` | array | Topics and their durations |
| ↳ `name` | string | Topic name \(e.g., Pricing\) |
| ↳ `duration` | number | Time spent on topic in seconds |
| ↳ `trackers` | array | Trackers found in the call |
| ↳ `id` | string | Tracker ID |
| ↳ `name` | string | Tracker name |
| ↳ `count` | number | Number of occurrences |
| ↳ `type` | string | Keyword or Smart |
| ↳ `occurrences` | array | Details for each occurrence |
| ↳ `speakerId` | string | Speaker who said it |
| ↳ `startTime` | number | Seconds from call start |
| ↳ `phrases` | array | Per-phrase occurrence counts |
| ↳ `phrase` | string | Specific phrase |
| ↳ `count` | number | Occurrences of this phrase |
| ↳ `occurrences` | array | Details per occurrence |
| ↳ `highlights` | array | AI-generated highlights including next steps, action items, and key moments |
| ↳ `title` | string | Title of the highlight |
| ↳ `interaction` | object | Interaction statistics |
| ↳ `interactionStats` | array | Interaction stats per user |
| ↳ `userId` | string | Gong user ID |
| ↳ `userEmailAddress` | string | User email |
| ↳ `personInteractionStats` | array | Stats list \(Longest Monologue, Interactivity, Patience, etc.\) |
| ↳ `name` | string | Stat name |
| ↳ `value` | number | Stat value |
| ↳ `speakers` | array | Talk duration per speaker |
| ↳ `id` | string | Participant ID |
| ↳ `userId` | string | Gong user ID |
| ↳ `talkTime` | number | Talk duration in seconds |
| ↳ `video` | array | Video statistics |
| ↳ `name` | string | Segment type: Browser, Presentation, WebcamPrimaryUser, WebcamNonCompany, Webcam |
| ↳ `duration` | number | Total segment duration in seconds |
| ↳ `questions` | object | Question counts |
| ↳ `companyCount` | number | Questions by company speakers |
| ↳ `nonCompanyCount` | number | Questions by non-company speakers |
| ↳ `collaboration` | object | Collaboration data |
| ↳ `publicComments` | array | Public comments on the call |
| ↳ `id` | string | Comment ID |
| ↳ `commenterUserId` | string | Commenter user ID |
| ↳ `comment` | string | Comment text |
| ↳ `posted` | string | Posted time in ISO-8601 |
| ↳ `audioStartTime` | number | Seconds from call start the comment refers to |
| ↳ `audioEndTime` | number | Seconds from call start the comment end refers to |
| ↳ `duringCall` | boolean | Whether the comment was posted during the call |
| ↳ `inReplyTo` | string | ID of original comment if this is a reply |
| ↳ `media` | object | Media download URLs \(available for 8 hours\) |
| ↳ `audioUrl` | string | Audio download URL |
| ↳ `videoUrl` | string | Video download URL |
| `cursor` | string | Pagination cursor for the next page |
### `gong_list_users`
List all users in your Gong account.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `cursor` | string | No | Pagination cursor from a previous response |
| `includeAvatars` | string | No | Whether to include avatar URLs \(true/false\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `users` | array | List of Gong users |
| ↳ `id` | string | Unique numeric user ID \(up to 20 digits\) |
| ↳ `emailAddress` | string | User email address |
| ↳ `created` | string | User creation timestamp \(ISO-8601\) |
| ↳ `active` | boolean | Whether the user is active |
| ↳ `emailAliases` | array | Alternative email addresses for the user |
| ↳ `trustedEmailAddress` | string | Trusted email address for the user |
| ↳ `firstName` | string | First name |
| ↳ `lastName` | string | Last name |
| ↳ `title` | string | Job title |
| ↳ `phoneNumber` | string | Phone number |
| ↳ `extension` | string | Phone extension number |
| ↳ `personalMeetingUrls` | array | Personal meeting URLs |
| ↳ `settings` | object | User settings |
| ↳ `webConferencesRecorded` | boolean | Whether web conferences are recorded |
| ↳ `preventWebConferenceRecording` | boolean | Whether web conference recording is prevented |
| ↳ `telephonyCallsImported` | boolean | Whether telephony calls are imported |
| ↳ `emailsImported` | boolean | Whether emails are imported |
| ↳ `preventEmailImport` | boolean | Whether email import is prevented |
| ↳ `nonRecordedMeetingsImported` | boolean | Whether non-recorded meetings are imported |
| ↳ `gongConnectEnabled` | boolean | Whether Gong Connect is enabled |
| ↳ `managerId` | string | Manager user ID |
| ↳ `meetingConsentPageUrl` | string | Meeting consent page URL |
| ↳ `spokenLanguages` | array | Languages spoken by the user |
| ↳ `language` | string | Language code |
| ↳ `primary` | boolean | Whether this is the primary language |
| `cursor` | string | Pagination cursor for the next page |
| `totalRecords` | number | Total number of user records |
| `currentPageSize` | number | Number of records in the current page |
| `currentPageNumber` | number | Current page number |
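The `accessKey`/`accessKeySecret` pair plus the `cursor` fields above suggest Basic-auth requests with cursor pagination. A minimal sketch of that pattern, assuming the public `https://api.gong.io/v2/users` endpoint and a top-level `cursor` field in the response (the tool's internals may differ):

```typescript
// Hypothetical helper: Gong uses HTTP Basic auth built from the key pair.
function buildGongAuthHeader(accessKey: string, accessKeySecret: string): string {
  return `Basic ${Buffer.from(`${accessKey}:${accessKeySecret}`).toString('base64')}`
}

// Follow the pagination cursor until it is absent, accumulating user pages.
async function listAllUsers(accessKey: string, accessKeySecret: string): Promise<unknown[]> {
  const users: unknown[] = []
  let cursor: string | undefined
  do {
    const url = new URL('https://api.gong.io/v2/users') // assumed endpoint path
    if (cursor) url.searchParams.set('cursor', cursor)
    const res = await fetch(url, {
      headers: { Authorization: buildGongAuthHeader(accessKey, accessKeySecret) },
    })
    const page = (await res.json()) as { users?: unknown[]; cursor?: string }
    users.push(...(page.users ?? []))
    cursor = page.cursor // assumed normalized response shape, per the table above
  } while (cursor)
  return users
}
```

The same header and cursor loop apply to the other paginated tools in this section.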
### `gong_get_user`
Retrieve details for a specific user from Gong.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `userId` | string | Yes | The Gong user ID to retrieve |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Unique numeric user ID \(up to 20 digits\) |
| `emailAddress` | string | User email address |
| `created` | string | User creation timestamp \(ISO-8601\) |
| `active` | boolean | Whether the user is active |
| `emailAliases` | array | Alternative email addresses for the user |
| `trustedEmailAddress` | string | Trusted email address for the user |
| `firstName` | string | First name |
| `lastName` | string | Last name |
| `title` | string | Job title |
| `phoneNumber` | string | Phone number |
| `extension` | string | Phone extension number |
| `personalMeetingUrls` | array | Personal meeting URLs |
| `settings` | object | User settings |
| ↳ `webConferencesRecorded` | boolean | Whether web conferences are recorded |
| ↳ `preventWebConferenceRecording` | boolean | Whether web conference recording is prevented |
| ↳ `telephonyCallsImported` | boolean | Whether telephony calls are imported |
| ↳ `emailsImported` | boolean | Whether emails are imported |
| ↳ `preventEmailImport` | boolean | Whether email import is prevented |
| ↳ `nonRecordedMeetingsImported` | boolean | Whether non-recorded meetings are imported |
| ↳ `gongConnectEnabled` | boolean | Whether Gong Connect is enabled |
| `managerId` | string | Manager user ID |
| `meetingConsentPageUrl` | string | Meeting consent page URL |
| `spokenLanguages` | array | Languages spoken by the user |
| ↳ `language` | string | Language code |
| ↳ `primary` | boolean | Whether this is the primary language |
### `gong_aggregate_activity`
Retrieve aggregated activity statistics for users by date range from Gong.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `userIds` | string | No | Comma-separated list of Gong user IDs \(up to 20 digits each\) |
| `fromDate` | string | Yes | Start date in YYYY-MM-DD format \(inclusive, in company timezone\) |
| `toDate` | string | Yes | End date in YYYY-MM-DD format \(exclusive, in company timezone, cannot exceed current day\) |
| `cursor` | string | No | Pagination cursor from a previous response |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `usersActivity` | array | Aggregated activity statistics per user |
| ↳ `userId` | string | Gong's unique numeric identifier for the user |
| ↳ `userEmailAddress` | string | Email address of the Gong user |
| ↳ `callsAsHost` | number | Number of recorded calls this user hosted |
| ↳ `callsAttended` | number | Number of calls where this user was a participant \(not host\) |
| ↳ `callsGaveFeedback` | number | Number of recorded calls the user gave feedback on |
| ↳ `callsReceivedFeedback` | number | Number of recorded calls the user received feedback on |
| ↳ `callsRequestedFeedback` | number | Number of recorded calls the user requested feedback on |
| ↳ `callsScorecardsFilled` | number | Number of scorecards the user completed |
| ↳ `callsScorecardsReceived` | number | Number of calls where someone filled a scorecard on the user's calls |
| ↳ `ownCallsListenedTo` | number | Number of the user's own calls the user listened to |
| ↳ `othersCallsListenedTo` | number | Number of other users' calls the user listened to |
| ↳ `callsSharedInternally` | number | Number of calls the user shared internally |
| ↳ `callsSharedExternally` | number | Number of calls the user shared externally |
| ↳ `callsCommentsGiven` | number | Number of calls where the user provided at least one comment |
| ↳ `callsCommentsReceived` | number | Number of calls where the user received at least one comment |
| ↳ `callsMarkedAsFeedbackGiven` | number | Number of calls where the user selected Mark as reviewed |
| ↳ `callsMarkedAsFeedbackReceived` | number | Number of calls where others selected Mark as reviewed on the user's calls |
| `timeZone` | string | The company's defined timezone in Gong |
| `fromDateTime` | string | Start of results in ISO-8601 format |
| `toDateTime` | string | End of results in ISO-8601 format |
| `cursor` | string | Pagination cursor for the next page |
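The stats endpoints take date-range filters, and the tool accepts `userIds` as a comma-separated string. A sketch of converting those inputs into a request body, assuming a `filter` envelope in the style of Gong's stats API (the exact wire format is an assumption):

```typescript
interface ActivityFilter {
  fromDate: string // YYYY-MM-DD, inclusive, company timezone
  toDate: string // YYYY-MM-DD, exclusive, company timezone
  userIds?: string[]
}

// Build the assumed POST body, splitting the comma-separated ID string
// and omitting `userIds` entirely when none are given.
function buildActivityFilter(
  fromDate: string,
  toDate: string,
  userIds?: string
): { filter: ActivityFilter } {
  const filter: ActivityFilter = { fromDate, toDate }
  if (userIds?.trim()) {
    filter.userIds = userIds
      .split(',')
      .map((id) => id.trim())
      .filter(Boolean)
  }
  return { filter }
}
```

Note that `toDate` is exclusive, so a single day's activity needs `toDate` set to the following day.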
### `gong_interaction_stats`
Retrieve interaction statistics for users by date range from Gong. Only includes calls with Whisper enabled.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `userIds` | string | No | Comma-separated list of Gong user IDs \(up to 20 digits each\) |
| `fromDate` | string | Yes | Start date in YYYY-MM-DD format \(inclusive, in company timezone\) |
| `toDate` | string | Yes | End date in YYYY-MM-DD format \(exclusive, in company timezone, cannot exceed current day\) |
| `cursor` | string | No | Pagination cursor from a previous response |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `peopleInteractionStats` | array | Interaction statistics per user |
| ↳ `userId` | string | Gong's unique numeric identifier for the user |
| ↳ `userEmailAddress` | string | Email address of the Gong user |
| ↳ `personInteractionStats` | array | List of interaction stat measurements for this user |
| ↳ `name` | string | Stat name \(e.g. Longest Monologue, Interactivity, Patience, Question Rate\) |
| ↳ `value` | number | Stat measurement value \(can be double or integer\) |
| `timeZone` | string | The company's defined timezone in Gong |
| `fromDateTime` | string | Start of results in ISO-8601 format |
| `toDateTime` | string | End of results in ISO-8601 format |
| `cursor` | string | Pagination cursor for the next page |
### `gong_answered_scorecards`
Retrieve answered scorecards for reviewed users or by date range from Gong.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `callFromDate` | string | No | Start date for calls in YYYY-MM-DD format \(inclusive, in company timezone\). Defaults to earliest recorded call. |
| `callToDate` | string | No | End date for calls in YYYY-MM-DD format \(exclusive, in company timezone\). Defaults to latest recorded call. |
| `reviewFromDate` | string | No | Start date for reviews in YYYY-MM-DD format \(inclusive, in company timezone\). Defaults to earliest reviewed call. |
| `reviewToDate` | string | No | End date for reviews in YYYY-MM-DD format \(exclusive, in company timezone\). Defaults to latest reviewed call. |
| `scorecardIds` | string | No | Comma-separated list of scorecard IDs to filter by |
| `reviewedUserIds` | string | No | Comma-separated list of reviewed user IDs to filter by |
| `cursor` | string | No | Pagination cursor from a previous response |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `answeredScorecards` | array | List of answered scorecards with scores and answers |
| ↳ `answeredScorecardId` | number | Identifier of the answered scorecard |
| ↳ `scorecardId` | number | Identifier of the scorecard |
| ↳ `scorecardName` | string | Scorecard name |
| ↳ `callId` | number | Gong's unique numeric identifier for the call |
| ↳ `callStartTime` | string | Date/time of the call in ISO-8601 format |
| ↳ `reviewedUserId` | number | User ID of the team member being reviewed |
| ↳ `reviewerUserId` | number | User ID of the team member who completed the scorecard |
| ↳ `reviewTime` | string | Date/time when the review was completed in ISO-8601 format |
| ↳ `visibilityType` | string | Visibility type of the scorecard answer |
| ↳ `answers` | array | Answers in the answered scorecard |
| ↳ `questionId` | number | Identifier of the question |
| ↳ `questionRevisionId` | number | Identifier of the revision version of the question |
| ↳ `isOverall` | boolean | Whether this is the overall question |
| ↳ `score` | number | Score between 1 and 5 if answered, null otherwise |
| ↳ `answerText` | string | The answer's text if answered, null otherwise |
| ↳ `notApplicable` | boolean | Whether the question is not applicable to this call |
| `cursor` | string | Pagination cursor for the next page |
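Because `score` can be null and questions can be marked not applicable, consumers of this output need to filter before aggregating. An illustrative helper (not part of the tool) that averages the applicable, answered scores on one scorecard:

```typescript
interface ScorecardAnswer {
  questionId: number
  isOverall: boolean
  score: number | null // 1-5 if answered, null otherwise
  notApplicable: boolean
}

// Average only the questions that were both applicable and answered;
// return null when nothing qualifies rather than dividing by zero.
function averageScore(answers: ScorecardAnswer[]): number | null {
  const scored = answers.filter((a) => !a.notApplicable && a.score !== null)
  if (scored.length === 0) return null
  return scored.reduce((sum, a) => sum + (a.score as number), 0) / scored.length
}
```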
### `gong_list_library_folders`
Retrieve library folders from Gong.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `workspaceId` | string | No | Gong workspace ID to filter folders |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `folders` | array | List of library folders with id, name, and parent relationships |
| ↳ `id` | string | Gong unique numeric identifier for the folder |
| ↳ `name` | string | Display name of the folder |
| ↳ `parentFolderId` | string | Gong unique numeric identifier for the parent folder \(null for root folder\) |
| ↳ `createdBy` | string | Gong unique numeric identifier for the user who added the folder |
| ↳ `updated` | string | Folder's last update time in ISO-8601 format |
### `gong_get_folder_content`
Retrieve the list of calls in a specific library folder from Gong.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `folderId` | string | Yes | The library folder ID to retrieve content for |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `folderId` | string | Gong's unique numeric identifier for the folder |
| `folderName` | string | Display name of the folder |
| `createdBy` | string | Gong's unique numeric identifier for the user who added the folder |
| `updated` | string | Folder's last update time in ISO-8601 format |
| `calls` | array | List of calls in the library folder |
| ↳ `id` | string | Gong unique numeric identifier of the call |
| ↳ `title` | string | The title of the call |
| ↳ `note` | string | A note attached to the call in the folder |
| ↳ `addedBy` | string | Gong unique numeric identifier for the user who added the call |
| ↳ `created` | string | Date and time the call was added to folder in ISO-8601 format |
| ↳ `url` | string | URL of the call |
| ↳ `snippet` | object | Call snippet time range |
| ↳ `fromSec` | number | Snippet start in seconds relative to call start |
| ↳ `toSec` | number | Snippet end in seconds relative to call start |
### `gong_list_scorecards`
Retrieve scorecard definitions from Gong settings.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `scorecards` | array | List of scorecard definitions with questions |
| ↳ `scorecardId` | string | Unique identifier for the scorecard |
| ↳ `scorecardName` | string | Display name of the scorecard |
| ↳ `workspaceId` | string | Workspace identifier associated with this scorecard |
| ↳ `enabled` | boolean | Whether the scorecard is active |
| ↳ `updaterUserId` | string | ID of the user who last modified the scorecard |
| ↳ `created` | string | Creation timestamp in ISO-8601 format |
| ↳ `updated` | string | Last update timestamp in ISO-8601 format |
| ↳ `questions` | array | List of questions in the scorecard |
| ↳ `questionId` | string | Unique identifier for the question |
| ↳ `questionText` | string | The text content of the question |
| ↳ `questionRevisionId` | string | Identifier for the specific revision of the question |
| ↳ `isOverall` | boolean | Whether this is the primary overall question |
| ↳ `created` | string | Question creation timestamp in ISO-8601 format |
| ↳ `updated` | string | Question last update timestamp in ISO-8601 format |
| ↳ `updaterUserId` | string | ID of the user who last modified the question |
### `gong_list_trackers`
Retrieve smart tracker and keyword tracker definitions from Gong settings.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `workspaceId` | string | No | The ID of the workspace the keyword trackers are in. When empty, all trackers in all workspaces are returned. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `trackers` | array | List of keyword tracker definitions |
| ↳ `trackerId` | string | Unique identifier for the tracker |
| ↳ `trackerName` | string | Display name of the tracker |
| ↳ `workspaceId` | string | ID of the workspace containing the tracker |
| ↳ `languageKeywords` | array | Keywords organized by language |
| ↳ `language` | string | ISO 639-2/B language code \("mul" means keywords apply across all languages\) |
| ↳ `keywords` | array | Words and phrases in the designated language |
| ↳ `includeRelatedForms` | boolean | Whether to include different word forms |
| ↳ `affiliation` | string | Speaker affiliation filter: "Anyone", "Company", or "NonCompany" |
| ↳ `partOfQuestion` | boolean | Whether to track keywords only within questions |
| ↳ `saidAt` | string | Position in call: "Anytime", "First", or "Last" |
| ↳ `saidAtInterval` | number | Duration to search \(in minutes or percentage\) |
| ↳ `saidAtUnit` | string | Unit for saidAtInterval |
| ↳ `saidInTopics` | array | Topics where keywords should be detected |
| ↳ `saidInCallParts` | array | Specific call segments to monitor |
| ↳ `filterQuery` | string | JSON-formatted call filtering criteria |
| ↳ `created` | string | Creation timestamp in ISO-8601 format |
| ↳ `creatorUserId` | string | ID of the user who created the tracker \(null for built-in trackers\) |
| ↳ `updated` | string | Last modification timestamp in ISO-8601 format |
| ↳ `updaterUserId` | string | ID of the user who last modified the tracker |
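Since `"mul"` entries apply across all languages, collecting a tracker's keywords for one language means merging two buckets. A small consumer-side sketch (types trimmed to the fields used):

```typescript
interface LanguageKeywords {
  language: string // ISO 639-2/B code; "mul" applies across all languages
  keywords: string[]
}

interface Tracker {
  trackerId: string
  trackerName: string
  languageKeywords: LanguageKeywords[]
}

// Keywords that fire for a given language: the language's own entries
// plus any "mul" (all-language) entries, per the table above.
function keywordsForLanguage(tracker: Tracker, language: string): string[] {
  return tracker.languageKeywords
    .filter((lk) => lk.language === language || lk.language === 'mul')
    .flatMap((lk) => lk.keywords)
}
```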
### `gong_list_workspaces`
List all company workspaces in Gong.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `workspaces` | array | List of Gong workspaces |
| ↳ `id` | string | Gong unique numeric identifier for the workspace |
| ↳ `name` | string | Display name of the workspace |
| ↳ `description` | string | Description of the workspace's purpose or content |
### `gong_list_flows`
List Gong Engage flows (sales engagement sequences).
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `flowOwnerEmail` | string | Yes | Email of a Gong user. The API will return 'PERSONAL' flows belonging to this user in addition to 'COMPANY' flows. |
| `workspaceId` | string | No | Optional workspace ID to filter flows to a specific workspace |
| `cursor` | string | No | Pagination cursor from a previous API call to retrieve the next page of records |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `requestId` | string | A Gong request reference ID for troubleshooting purposes |
| `flows` | array | List of Gong Engage flows |
| ↳ `id` | string | The ID of the flow |
| ↳ `name` | string | The name of the flow |
| ↳ `folderId` | string | The ID of the folder this flow is under |
| ↳ `folderName` | string | The name of the folder this flow is under |
| ↳ `visibility` | string | The flow visibility type \(COMPANY, PERSONAL, or SHARED\) |
| ↳ `creationDate` | string | Creation time of the flow in ISO-8601 format |
| ↳ `exclusive` | boolean | Indicates whether a prospect in this flow can be added to other flows |
| `totalRecords` | number | Total number of flow records available |
| `currentPageSize` | number | Number of records returned in the current page |
| `currentPageNumber` | number | Current page number |
| `cursor` | string | Pagination cursor for retrieving the next page of records |
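Unlike the other list tools, this one has a required query parameter. A sketch of building the request URL, assuming a `/v2/flows` endpoint path that mirrors the tool name (the path is an assumption):

```typescript
// flowOwnerEmail is required; workspaceId and cursor are optional and
// omitted from the query string when absent, matching the input table.
function buildFlowsUrl(flowOwnerEmail: string, workspaceId?: string, cursor?: string): string {
  const url = new URL('https://api.gong.io/v2/flows') // assumed endpoint path
  url.searchParams.set('flowOwnerEmail', flowOwnerEmail)
  if (workspaceId) url.searchParams.set('workspaceId', workspaceId)
  if (cursor) url.searchParams.set('cursor', cursor)
  return url.toString()
}
```

`URL`/`URLSearchParams` handle percent-encoding, so the `@` in the owner email is escaped automatically.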
### `gong_get_coaching`
Retrieve coaching metrics for a manager from Gong.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `managerId` | string | Yes | Gong user ID of the manager |
| `workspaceId` | string | Yes | Gong workspace ID |
| `fromDate` | string | Yes | Start date in ISO-8601 format |
| `toDate` | string | Yes | End date in ISO-8601 format |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `requestId` | string | A Gong request reference ID for troubleshooting purposes |
| `coachingData` | array | Coaching metrics grouped by manager |
| ↳ `manager` | object | The manager user information |
| ↳ `id` | string | Gong unique numeric identifier for the user |
| ↳ `emailAddress` | string | Email address of the Gong user |
| ↳ `firstName` | string | First name of the Gong user |
| ↳ `lastName` | string | Last name of the Gong user |
| ↳ `title` | string | Job title of the Gong user |
| ↳ `directReportsMetrics` | array | Coaching metrics for each direct report |
| ↳ `report` | object | The direct report user information |
| ↳ `id` | string | Gong unique numeric identifier for the user |
| ↳ `emailAddress` | string | Email address of the Gong user |
| ↳ `firstName` | string | First name of the Gong user |
| ↳ `lastName` | string | Last name of the Gong user |
| ↳ `title` | string | Job title of the Gong user |
| ↳ `metrics` | json | A map of metric names to arrays of string values representing coaching metrics |
### `gong_lookup_email`
Find all references to an email address in Gong (calls, email messages, meetings, CRM data, engagement).
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `emailAddress` | string | Yes | Email address to look up |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `requestId` | string | Gong request reference ID for troubleshooting |
| `calls` | array | Related calls referencing this email address |
| ↳ `id` | string | Gong's unique numeric identifier for the call \(up to 20 digits\) |
| ↳ `status` | string | Call status |
| ↳ `externalSystems` | array | Links to external systems such as CRM, Telephony System, etc. |
| ↳ `system` | string | External system name |
| ↳ `objects` | array | List of objects within the external system |
| ↳ `objectType` | string | Object type |
| ↳ `externalId` | string | External ID |
| `emails` | array | Related email messages referencing this email address |
| ↳ `id` | string | Gong's unique 32-character identifier for the email message |
| ↳ `from` | string | The sender's email address |
| ↳ `sentTime` | string | Date and time the email was sent in ISO-8601 format |
| ↳ `mailbox` | string | The mailbox from which the email was retrieved |
| ↳ `messageHash` | string | Hash code of the email message |
| `meetings` | array | Related meetings referencing this email address |
| ↳ `id` | string | Gong's unique identifier for the meeting |
| `customerData` | array | Links to data from external systems \(CRM, Telephony, etc.\) that reference this email |
| ↳ `system` | string | External system name |
| ↳ `objects` | array | List of objects in the external system |
| ↳ `id` | string | Gong's unique numeric identifier for the Lead or Contact \(up to 20 digits\) |
| ↳ `objectType` | string | Object type |
| ↳ `externalId` | string | External ID |
| ↳ `mirrorId` | string | CRM Mirror ID |
| ↳ `fields` | array | Object fields |
| ↳ `name` | string | Field name |
| ↳ `value` | json | Field value |
| `customerEngagement` | array | Customer engagement events \(such as viewing external shared calls\) |
| ↳ `eventType` | string | Event type |
| ↳ `eventName` | string | Event name |
| ↳ `timestamp` | string | Date and time the event occurred in ISO-8601 format |
| ↳ `contentId` | string | Event content ID |
| ↳ `contentUrl` | string | Event content URL |
| ↳ `reportingSystem` | string | Event reporting system |
| ↳ `sourceEventId` | string | Source event ID |
### `gong_lookup_phone`
Find all references to a phone number in Gong (calls, email messages, meetings, CRM data, and associated contacts).
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `phoneNumber` | string | Yes | Phone number to look up \(must start with + followed by country code\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `requestId` | string | Gong request reference ID for troubleshooting |
| `suppliedPhoneNumber` | string | The phone number that was supplied in the request |
| `matchingPhoneNumbers` | array | Phone numbers found in the system that match the supplied number |
| `emailAddresses` | array | Email addresses associated with the phone number |
| `calls` | array | Related calls referencing this phone number |
| ↳ `id` | string | Gong's unique numeric identifier for the call \(up to 20 digits\) |
| ↳ `status` | string | Call status |
| ↳ `externalSystems` | array | Links to external systems such as CRM, Telephony System, etc. |
| ↳ `system` | string | External system name |
| ↳ `objects` | array | List of objects within the external system |
| ↳ `objectType` | string | Object type |
| ↳ `externalId` | string | External ID |
| `emails` | array | Related email messages associated with contacts matching this phone number |
| ↳ `id` | string | Gong's unique 32-character identifier for the email message |
| ↳ `from` | string | The sender's email address |
| ↳ `sentTime` | string | Date and time the email was sent in ISO-8601 format |
| ↳ `mailbox` | string | The mailbox from which the email was retrieved |
| ↳ `messageHash` | string | Hash code of the email message |
| `meetings` | array | Related meetings associated with this phone number |
| ↳ `id` | string | Gong's unique identifier for the meeting |
| `customerData` | array | Links to data from external systems \(CRM, Telephony, etc.\) that reference this phone number |
| ↳ `system` | string | External system name |
| ↳ `objects` | array | List of objects in the external system |
| ↳ `id` | string | Gong's unique numeric identifier for the Lead or Contact \(up to 20 digits\) |
| ↳ `objectType` | string | Object type |
| ↳ `externalId` | string | External ID |
| ↳ `mirrorId` | string | CRM Mirror ID |
| ↳ `fields` | array | Object fields |
| ↳ `name` | string | Field name |
| ↳ `value` | json | Field value |
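Since `phoneNumber` must start with `+` followed by a country code, it is worth validating input before calling the tool. A minimal E.164-style check (intentionally strict — no spaces or punctuation — and not a full E.164 validator):

```typescript
// Leading +, a non-zero country-code digit, then up to 14 more digits
// (15 digits total, the E.164 maximum). Hypothetical helper name.
function isValidLookupPhoneNumber(phoneNumber: string): boolean {
  return /^\+[1-9]\d{1,14}$/.test(phoneNumber)
}
```

Inputs with spaces or dashes (e.g. `+1 415 555 0123`) should be normalized before this check.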


@@ -7,7 +7,7 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="hex"
color="#F5E6FF"
color="#14151A"
/>
{/* MANUAL-CONTENT-START:intro */}


@@ -10,6 +10,7 @@
"apollo",
"arxiv",
"asana",
"attio",
"browser_use",
"calcom",
"calendly",
@@ -35,6 +36,7 @@
"github",
"gitlab",
"gmail",
"gong",
"google_books",
"google_calendar",
"google_docs",


@@ -5,6 +5,7 @@ import { createContext, useCallback, useEffect, useMemo, useState } from 'react'
import { useQueryClient } from '@tanstack/react-query'
import posthog from 'posthog-js'
import { client } from '@/lib/auth/auth-client'
import { extractSessionDataFromAuthClientResult } from '@/lib/auth/session-response'
export type AppSession = {
user: {
@@ -45,7 +46,8 @@ export function SessionProvider({ children }: { children: React.ReactNode }) {
const res = bypassCache
? await client.getSession({ query: { disableCookieCache: true } })
: await client.getSession()
setData(res?.data ?? null)
const session = extractSessionDataFromAuthClientResult(res) as AppSession
setData(session)
} catch (e) {
setError(e instanceof Error ? e : new Error('Failed to fetch session'))
} finally {


@@ -0,0 +1,93 @@
/**
* @vitest-environment node
*/
import { createMockRequest, setupCommonApiMocks } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
const handlerMocks = vi.hoisted(() => ({
betterAuthGET: vi.fn(),
betterAuthPOST: vi.fn(),
ensureAnonymousUserExists: vi.fn(),
createAnonymousGetSessionResponse: vi.fn(() => ({
data: {
user: { id: 'anon' },
session: { id: 'anon-session' },
},
})),
}))
vi.mock('better-auth/next-js', () => ({
toNextJsHandler: () => ({
GET: handlerMocks.betterAuthGET,
POST: handlerMocks.betterAuthPOST,
}),
}))
vi.mock('@/lib/auth', () => ({
auth: { handler: {} },
}))
vi.mock('@/lib/auth/anonymous', () => ({
ensureAnonymousUserExists: handlerMocks.ensureAnonymousUserExists,
createAnonymousGetSessionResponse: handlerMocks.createAnonymousGetSessionResponse,
}))
describe('auth catch-all route (DISABLE_AUTH get-session)', () => {
beforeEach(() => {
vi.resetModules()
setupCommonApiMocks()
handlerMocks.betterAuthGET.mockReset()
handlerMocks.betterAuthPOST.mockReset()
handlerMocks.ensureAnonymousUserExists.mockReset()
handlerMocks.createAnonymousGetSessionResponse.mockClear()
})
it('returns anonymous session in better-auth response envelope when auth is disabled', async () => {
vi.doMock('@/lib/core/config/feature-flags', () => ({ isAuthDisabled: true }))
const req = createMockRequest(
'GET',
undefined,
{},
'http://localhost:3000/api/auth/get-session'
)
const { GET } = await import('@/app/api/auth/[...all]/route')
const res = await GET(req as any)
const json = await res.json()
expect(handlerMocks.ensureAnonymousUserExists).toHaveBeenCalledTimes(1)
expect(handlerMocks.betterAuthGET).not.toHaveBeenCalled()
expect(json).toEqual({
data: {
user: { id: 'anon' },
session: { id: 'anon-session' },
},
})
})
it('delegates to better-auth handler when auth is enabled', async () => {
vi.doMock('@/lib/core/config/feature-flags', () => ({ isAuthDisabled: false }))
handlerMocks.betterAuthGET.mockResolvedValueOnce(
new (await import('next/server')).NextResponse(JSON.stringify({ data: { ok: true } }), {
headers: { 'content-type': 'application/json' },
}) as any
)
const req = createMockRequest(
'GET',
undefined,
{},
'http://localhost:3000/api/auth/get-session'
)
const { GET } = await import('@/app/api/auth/[...all]/route')
const res = await GET(req as any)
const json = await res.json()
expect(handlerMocks.ensureAnonymousUserExists).not.toHaveBeenCalled()
expect(handlerMocks.betterAuthGET).toHaveBeenCalledTimes(1)
expect(json).toEqual({ data: { ok: true } })
})
})


@@ -1,7 +1,7 @@
import { toNextJsHandler } from 'better-auth/next-js'
import { type NextRequest, NextResponse } from 'next/server'
import { auth } from '@/lib/auth'
import { createAnonymousSession, ensureAnonymousUserExists } from '@/lib/auth/anonymous'
import { createAnonymousGetSessionResponse, ensureAnonymousUserExists } from '@/lib/auth/anonymous'
import { isAuthDisabled } from '@/lib/core/config/feature-flags'
export const dynamic = 'force-dynamic'
@@ -14,7 +14,7 @@ export async function GET(request: NextRequest) {
if (path === 'get-session' && isAuthDisabled) {
await ensureAnonymousUserExists()
return NextResponse.json(createAnonymousSession())
return NextResponse.json(createAnonymousGetSessionResponse())
}
return betterAuthGET(request)


@@ -33,7 +33,6 @@ export async function POST(req: NextRequest) {
logger.info(`[${requestId}] Update cost request started`)
if (!isBillingEnabled) {
logger.debug(`[${requestId}] Billing is disabled, skipping cost update`)
return NextResponse.json({
success: true,
message: 'Billing disabled, cost update skipped',


@@ -117,8 +117,6 @@ export async function POST(
const requestId = generateRequestId()
try {
logger.debug(`[${requestId}] Processing OTP request for identifier: ${identifier}`)
const body = await request.json()
const { email } = otpRequestSchema.parse(body)
@@ -211,8 +209,6 @@ export async function PUT(
const requestId = generateRequestId()
try {
logger.debug(`[${requestId}] Verifying OTP for identifier: ${identifier}`)
const body = await request.json()
const { email, otp } = otpVerifySchema.parse(body)


@@ -42,8 +42,6 @@ export async function POST(
const requestId = generateRequestId()
try {
logger.debug(`[${requestId}] Processing chat request for identifier: ${identifier}`)
let parsedBody
try {
const rawBody = await request.json()
@@ -294,8 +292,6 @@ export async function GET(
const requestId = generateRequestId()
try {
logger.debug(`[${requestId}] Fetching chat info for identifier: ${identifier}`)
const deploymentResult = await db
.select({
id: chat.id,


@@ -95,11 +95,6 @@ export async function POST(request: NextRequest) {
const body = await request.json()
const data = CreateCreatorProfileSchema.parse(body)
logger.debug(`[${requestId}] Creating creator profile:`, {
referenceType: data.referenceType,
referenceId: data.referenceId,
})
// Validate permissions
if (data.referenceType === 'user') {
if (data.referenceId !== session.user.id) {


@@ -150,6 +150,7 @@ export async function POST(
})
recordAudit({
workspaceId: null,
actorId: session.user.id,
actorName: session.user.name,
actorEmail: session.user.email,
@@ -158,7 +159,7 @@ export async function POST(
resourceId: id,
resourceName: result.set.name,
description: `Resent credential set invitation to ${invitation.email}`,
metadata: { invitationId, email: invitation.email },
metadata: { invitationId, targetEmail: invitation.email },
request: req,
})


@@ -186,6 +186,7 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
actorEmail: session.user.email ?? undefined,
resourceName: result.set.name,
description: `Created invitation for credential set "${result.set.name}"${email ? ` to ${email}` : ''}`,
metadata: { targetEmail: email || undefined },
request: req,
})
@@ -239,7 +240,7 @@ export async function DELETE(req: NextRequest, { params }: { params: Promise<{ i
return NextResponse.json({ error: 'Admin or owner permissions required' }, { status: 403 })
}
await db
const [revokedInvitation] = await db
.update(credentialSetInvitation)
.set({ status: 'cancelled' })
.where(
@@ -248,6 +249,7 @@ export async function DELETE(req: NextRequest, { params }: { params: Promise<{ i
eq(credentialSetInvitation.credentialSetId, id)
)
)
.returning({ email: credentialSetInvitation.email })
recordAudit({
workspaceId: null,
@@ -259,6 +261,7 @@ export async function DELETE(req: NextRequest, { params }: { params: Promise<{ i
actorEmail: session.user.email ?? undefined,
resourceName: result.set.name,
description: `Revoked invitation "${invitationId}" for credential set "${result.set.name}"`,
metadata: { targetEmail: revokedInvitation?.email ?? undefined },
request: req,
})

View File

@@ -151,8 +151,15 @@ export async function DELETE(req: NextRequest, { params }: { params: Promise<{ i
}
const [memberToRemove] = await db
.select()
.select({
id: credentialSetMember.id,
credentialSetId: credentialSetMember.credentialSetId,
userId: credentialSetMember.userId,
status: credentialSetMember.status,
email: user.email,
})
.from(credentialSetMember)
.innerJoin(user, eq(credentialSetMember.userId, user.id))
.where(and(eq(credentialSetMember.id, memberId), eq(credentialSetMember.credentialSetId, id)))
.limit(1)
@@ -189,6 +196,7 @@ export async function DELETE(req: NextRequest, { params }: { params: Promise<{ i
actorEmail: session.user.email ?? undefined,
resourceName: result.set.name,
description: `Removed member from credential set "${result.set.name}"`,
metadata: { targetEmail: memberToRemove.email ?? undefined },
request: req,
})

View File

@@ -148,6 +148,10 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
actorEmail: session.user.email ?? undefined,
resourceName: name,
description: `Duplicated folder "${sourceFolder.name}" as "${name}"`,
metadata: {
sourceId: sourceFolder.id,
affected: { workflows: workflowStats.succeeded, folders: folderMapping.size },
},
request: req,
})

View File

@@ -178,6 +178,12 @@ export async function DELETE(
resourceId: id,
resourceName: existingFolder.name,
description: `Deleted folder "${existingFolder.name}"`,
metadata: {
affected: {
workflows: deletionStats.workflows,
subfolders: deletionStats.folders - 1,
},
},
request,
})

View File

@@ -58,8 +58,6 @@ export async function POST(
const requestId = generateRequestId()
try {
logger.debug(`[${requestId}] Processing form submission for identifier: ${identifier}`)
let parsedBody
try {
const rawBody = await request.json()
@@ -300,8 +298,6 @@ export async function GET(
const requestId = generateRequestId()
try {
logger.debug(`[${requestId}] Fetching form info for identifier: ${identifier}`)
const deploymentResult = await db
.select({
id: form.id,

View File

@@ -77,8 +77,6 @@ export async function POST(req: NextRequest) {
}
}
logger.debug(`[${requestId}] Help request includes ${images.length} images`)
const userId = session.user.id
let emailText = `
Type: ${type}

View File

@@ -281,6 +281,7 @@ export async function DELETE(
resourceId: documentId,
resourceName: accessCheck.document?.filename,
description: `Deleted document "${documentId}" from knowledge base "${knowledgeBaseId}"`,
metadata: { fileName: accessCheck.document?.filename },
request: req,
})

View File

@@ -255,6 +255,10 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
resourceId: knowledgeBaseId,
resourceName: `${createdDocuments.length} document(s)`,
description: `Uploaded ${createdDocuments.length} document(s) to knowledge base "${knowledgeBaseId}"`,
metadata: {
fileCount: createdDocuments.length,
fileNames: createdDocuments.map((doc) => doc.filename),
},
request: req,
})
@@ -316,6 +320,11 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
resourceId: knowledgeBaseId,
resourceName: validatedData.filename,
description: `Uploaded document "${validatedData.filename}" to knowledge base "${knowledgeBaseId}"`,
metadata: {
fileName: validatedData.filename,
fileType: validatedData.mimeType,
fileSize: validatedData.fileSize,
},
request: req,
})

View File

@@ -186,8 +186,6 @@ export async function POST(request: NextRequest) {
valueTo: filter.valueTo,
}
})
logger.debug(`[${requestId}] Processed ${structuredFilters.length} structured filters`)
}
if (accessibleKbIds.length === 0) {
@@ -220,7 +218,6 @@ export async function POST(request: NextRequest) {
if (!hasQuery && hasFilters) {
// Tag-only search without vector similarity
logger.debug(`[${requestId}] Executing tag-only search with filters:`, structuredFilters)
results = await handleTagOnlySearch({
knowledgeBaseIds: accessibleKbIds,
topK: validatedData.topK,
@@ -244,7 +241,6 @@ export async function POST(request: NextRequest) {
})
} else if (hasQuery && !hasFilters) {
// Vector-only search
logger.debug(`[${requestId}] Executing vector-only search`)
const strategy = getQueryStrategy(accessibleKbIds.length, validatedData.topK)
const queryVector = JSON.stringify(await queryEmbeddingPromise)

View File

@@ -1,11 +1,8 @@
import { db } from '@sim/db'
import { document, embedding } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, inArray, isNull, sql } from 'drizzle-orm'
import type { StructuredFilter } from '@/lib/knowledge/types'
const logger = createLogger('KnowledgeSearchUtils')
export async function getDocumentNamesByIds(
documentIds: string[]
): Promise<Record<string, string>> {
@@ -140,17 +137,12 @@ function buildFilterCondition(filter: StructuredFilter, embeddingTable: any) {
const { tagSlot, fieldType, operator, value, valueTo } = filter
if (!isTagSlotKey(tagSlot)) {
logger.debug(`[getStructuredTagFilters] Unknown tag slot: ${tagSlot}`)
return null
}
const column = embeddingTable[tagSlot]
if (!column) return null
logger.debug(
`[getStructuredTagFilters] Processing ${tagSlot} (${fieldType}) ${operator} ${value}`
)
// Handle text operators
if (fieldType === 'text') {
const stringValue = String(value)
@@ -208,7 +200,6 @@ function buildFilterCondition(filter: StructuredFilter, embeddingTable: any) {
const dateStr = String(value)
// Validate YYYY-MM-DD format
if (!/^\d{4}-\d{2}-\d{2}$/.test(dateStr)) {
logger.debug(`[getStructuredTagFilters] Invalid date format: ${dateStr}, expected YYYY-MM-DD`)
return null
}
@@ -287,9 +278,6 @@ function getStructuredTagFilters(filters: StructuredFilter[], embeddingTable: an
conditions.push(slotConditions[0])
} else {
// Multiple conditions for same slot - OR them together
logger.debug(
`[getStructuredTagFilters] OR'ing ${slotConditions.length} conditions for ${slot}`
)
conditions.push(sql`(${sql.join(slotConditions, sql` OR `)})`)
}
}
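
The debug line removed here sat inside `getStructuredTagFilters`' grouping logic: conditions that target the same tag slot are OR'ed together, and the per-slot results are AND'ed across slots. A small sketch of that combination rule over plain predicates (the real code emits Drizzle SQL fragments; predicate functions here are a stand-in):

```typescript
type Pred<T> = (row: T) => boolean

// OR conditions within a slot, AND across slots — mirrors the grouping in
// getStructuredTagFilters, with in-memory predicates instead of SQL.
function combineSlotFilters<T>(bySlot: Map<string, Pred<T>[]>): Pred<T> {
  const slotPreds: Pred<T>[] = []
  bySlot.forEach((conds) => {
    slotPreds.push(
      conds.length === 1 ? conds[0] : (row: T) => conds.some((c) => c(row))
    )
  })
  return (row: T) => slotPreds.every((p) => p(row))
}
```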
@@ -380,8 +368,6 @@ export async function handleTagOnlySearch(params: SearchParams): Promise<SearchR
throw new Error('Tag filters are required for tag-only search')
}
logger.debug(`[handleTagOnlySearch] Executing tag-only search with filters:`, structuredFilters)
const strategy = getQueryStrategy(knowledgeBaseIds.length, topK)
const tagFilterConditions = getStructuredTagFilters(structuredFilters, embedding)
@@ -431,8 +417,6 @@ export async function handleVectorOnlySearch(params: SearchParams): Promise<Sear
throw new Error('Query vector and distance threshold are required for vector-only search')
}
logger.debug(`[handleVectorOnlySearch] Executing vector-only search`)
const strategy = getQueryStrategy(knowledgeBaseIds.length, topK)
const distanceExpr = sql<number>`${embedding.embedding} <=> ${queryVector}::vector`.as('distance')
@@ -489,23 +473,13 @@ export async function handleTagAndVectorSearch(params: SearchParams): Promise<Se
throw new Error('Query vector and distance threshold are required for tag and vector search')
}
logger.debug(
`[handleTagAndVectorSearch] Executing tag + vector search with filters:`,
structuredFilters
)
// Step 1: Filter by tags first
const tagFilteredIds = await executeTagFilterQuery(knowledgeBaseIds, structuredFilters)
if (tagFilteredIds.length === 0) {
logger.debug(`[handleTagAndVectorSearch] No results found after tag filtering`)
return []
}
logger.debug(
`[handleTagAndVectorSearch] Found ${tagFilteredIds.length} results after tag filtering`
)
// Step 2: Perform vector search only on tag-filtered results
return await executeVectorSearchOnIds(
tagFilteredIds.map((r) => r.id),

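`handleTagAndVectorSearch` keeps its two-phase shape after the debug-log removal: filter candidates by tags first, short-circuit when nothing survives, then rank only the survivors by vector distance. A generic sketch of that pipeline (the real code runs SQL with pgvector; the row shape and distance function here are assumptions):

```typescript
interface Scored<T> { row: T; distance: number }

// Phase 1: cheap tag filter narrows candidates.
// Phase 2: rank survivors by distance, keep the topK closest.
function tagThenVectorSearch<T>(
  rows: T[],
  tagFilter: (row: T) => boolean,
  distance: (row: T) => number,
  topK: number
): Scored<T>[] {
  const tagged = rows.filter(tagFilter)
  if (tagged.length === 0) return [] // mirrors the early return in the route
  return tagged
    .map((row) => ({ row, distance: distance(row) }))
    .sort((a, b) => a.distance - b.distance)
    .slice(0, topK)
}
```

Filtering first keeps the expensive distance computation off rows that can never match, which is the same reason the route runs `executeTagFilterQuery` before `executeVectorSearchOnIds`.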
View File

@@ -34,10 +34,6 @@ export async function GET(
const authenticatedUserId = authResult.userId
logger.debug(
`[${requestId}] Fetching execution data for: ${executionId} (auth: ${authResult.authType})`
)
const [workflowLog] = await db
.select({
id: workflowExecutionLogs.id,
@@ -125,11 +121,6 @@ export async function GET(
},
}
logger.debug(`[${requestId}] Successfully fetched execution data for: ${executionId}`)
logger.debug(
`[${requestId}] Workflow state contains ${Object.keys((snapshot.stateData as any)?.blocks || {}).length} blocks`
)
return NextResponse.json(response)
} catch (error) {
logger.error(`[${requestId}] Error fetching execution data:`, error)

View File

@@ -23,6 +23,7 @@ import { type AuthResult, checkHybridAuth } from '@/lib/auth/hybrid'
import { generateInternalToken } from '@/lib/auth/internal'
import { getMaxExecutionTimeout } from '@/lib/core/execution-limits'
import { getInternalApiBaseUrl } from '@/lib/core/utils/urls'
import { SIM_VIA_HEADER } from '@/lib/execution/call-chain'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
const logger = createLogger('WorkflowMcpServeAPI')
@@ -181,7 +182,8 @@ export async function POST(request: NextRequest, { params }: { params: Promise<R
serverId,
rpcParams as { name: string; arguments?: Record<string, unknown> },
executeAuthContext,
server.isPublic ? server.createdBy : undefined
server.isPublic ? server.createdBy : undefined,
request.headers.get(SIM_VIA_HEADER)
)
default:
@@ -244,7 +246,8 @@ async function handleToolsCall(
serverId: string,
params: { name: string; arguments?: Record<string, unknown> } | undefined,
executeAuthContext?: ExecuteAuthContext | null,
publicServerOwnerId?: string
publicServerOwnerId?: string,
simViaHeader?: string | null
): Promise<NextResponse> {
try {
if (!params?.name) {
@@ -300,6 +303,10 @@ async function handleToolsCall(
}
}
if (simViaHeader) {
headers[SIM_VIA_HEADER] = simViaHeader
}
logger.info(`Executing workflow ${tool.workflowId} via MCP tool ${params.name}`)
const response = await fetch(executeUrl, {

View File

@@ -83,7 +83,6 @@ export const POST = withMcpAuth('read')(
serverId: serverId,
serverName: 'provided-schema',
} as McpTool
logger.debug(`[${requestId}] Using provided schema for ${toolName}, skipping discovery`)
} else {
const tools = await mcpService.discoverServerTools(userId, serverId, workspaceId)
tool = tools.find((t) => t.name === toolName) ?? null

View File

@@ -598,7 +598,12 @@ export async function PUT(
actorName: session.user.name ?? undefined,
actorEmail: session.user.email ?? undefined,
description: `Organization invitation ${status} for ${orgInvitation.email}`,
metadata: { invitationId, email: orgInvitation.email, status },
metadata: {
invitationId,
targetEmail: orgInvitation.email,
targetRole: orgInvitation.role,
status,
},
request: req,
})

View File

@@ -423,7 +423,7 @@ export async function POST(request: NextRequest, { params }: { params: Promise<{
actorEmail: session.user.email ?? undefined,
resourceName: organizationEntry[0]?.name,
description: `Invited ${inv.email} to organization as ${role}`,
metadata: { invitationId: inv.id, email: inv.email, role },
metadata: { invitationId: inv.id, targetEmail: inv.email, targetRole: role },
request,
})
}
@@ -558,7 +558,7 @@ export async function DELETE(
actorName: session.user.name ?? undefined,
actorEmail: session.user.email ?? undefined,
description: `Revoked organization invitation for ${result[0].email}`,
metadata: { invitationId, email: result[0].email },
metadata: { invitationId, targetEmail: result[0].email },
request,
})

View File

@@ -173,8 +173,15 @@ export async function PUT(
}
const targetMember = await db
.select()
.select({
id: member.id,
role: member.role,
userId: member.userId,
email: user.email,
name: user.name,
})
.from(member)
.innerJoin(user, eq(member.userId, user.id))
.where(and(eq(member.organizationId, organizationId), eq(member.userId, memberId)))
.limit(1)
@@ -223,7 +230,12 @@ export async function PUT(
actorName: session.user.name ?? undefined,
actorEmail: session.user.email ?? undefined,
description: `Changed role for member ${memberId} to ${role}`,
metadata: { targetUserId: memberId, newRole: role },
metadata: {
targetUserId: memberId,
targetEmail: targetMember[0].email ?? undefined,
targetName: targetMember[0].name ?? undefined,
changes: [{ field: 'role', from: targetMember[0].role, to: role }],
},
request,
})
@@ -286,8 +298,9 @@ export async function DELETE(
}
const targetMember = await db
.select({ id: member.id, role: member.role })
.select({ id: member.id, role: member.role, email: user.email, name: user.name })
.from(member)
.innerJoin(user, eq(member.userId, user.id))
.where(and(eq(member.organizationId, organizationId), eq(member.userId, targetUserId)))
.limit(1)
@@ -331,7 +344,12 @@ export async function DELETE(
session.user.id === targetUserId
? 'Left the organization'
: `Removed member ${targetUserId} from organization`,
metadata: { targetUserId, wasSelfRemoval: session.user.id === targetUserId },
metadata: {
targetUserId,
targetEmail: targetMember[0].email ?? undefined,
targetName: targetMember[0].name ?? undefined,
wasSelfRemoval: session.user.id === targetUserId,
},
request,
})

View File

@@ -295,7 +295,7 @@ export async function POST(request: NextRequest, { params }: { params: Promise<{
actorName: session.user.name ?? undefined,
actorEmail: session.user.email ?? undefined,
description: `Invited ${normalizedEmail} to organization as ${role}`,
metadata: { invitationId, email: normalizedEmail, role },
metadata: { invitationId, targetEmail: normalizedEmail, targetRole: role },
request,
})

View File

@@ -100,8 +100,9 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
const { userId } = addMemberSchema.parse(body)
const [orgMember] = await db
.select({ id: member.id })
.select({ id: member.id, email: user.email })
.from(member)
.innerJoin(user, eq(member.userId, user.id))
.where(and(eq(member.userId, userId), eq(member.organizationId, result.group.organizationId)))
.limit(1)
@@ -163,7 +164,11 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
actorName: session.user.name ?? undefined,
actorEmail: session.user.email ?? undefined,
description: `Added member ${userId} to permission group "${result.group.name}"`,
metadata: { targetUserId: userId, permissionGroupId: id },
metadata: {
targetUserId: userId,
targetEmail: orgMember.email ?? undefined,
permissionGroupId: id,
},
request: req,
})
@@ -218,8 +223,14 @@ export async function DELETE(req: NextRequest, { params }: { params: Promise<{ i
}
const [memberToRemove] = await db
.select()
.select({
id: permissionGroupMember.id,
permissionGroupId: permissionGroupMember.permissionGroupId,
userId: permissionGroupMember.userId,
email: user.email,
})
.from(permissionGroupMember)
.innerJoin(user, eq(permissionGroupMember.userId, user.id))
.where(
and(eq(permissionGroupMember.id, memberId), eq(permissionGroupMember.permissionGroupId, id))
)
@@ -247,7 +258,12 @@ export async function DELETE(req: NextRequest, { params }: { params: Promise<{ i
actorName: session.user.name ?? undefined,
actorEmail: session.user.email ?? undefined,
description: `Removed member ${memberToRemove.userId} from permission group "${result.group.name}"`,
metadata: { targetUserId: memberToRemove.userId, memberId, permissionGroupId: id },
metadata: {
targetUserId: memberToRemove.userId,
targetEmail: memberToRemove.email ?? undefined,
memberId,
permissionGroupId: id,
},
request: req,
})

View File

@@ -26,7 +26,6 @@ export async function PUT(request: NextRequest, { params }: { params: Promise<{
try {
const { id: scheduleId } = await params
logger.debug(`[${requestId}] Reactivating schedule with ID: ${scheduleId}`)
const session = await getSession()
if (!session?.user?.id) {
@@ -116,6 +115,7 @@ export async function PUT(request: NextRequest, { params }: { params: Promise<{
actorName: session.user.name ?? undefined,
actorEmail: session.user.email ?? undefined,
description: `Reactivated schedule for workflow ${schedule.workflowId}`,
metadata: { cronExpression: schedule.cronExpression, timezone: schedule.timezone },
request,
})

View File

@@ -51,7 +51,6 @@ export async function GET(request: NextRequest) {
lastQueuedAt: workflowSchedule.lastQueuedAt,
})
logger.debug(`[${requestId}] Successfully queried schedules: ${dueSchedules.length} found`)
logger.info(`[${requestId}] Processing ${dueSchedules.length} due scheduled workflows`)
const jobQueue = await getJobQueue()

View File

@@ -24,8 +24,6 @@ export async function GET(request: NextRequest, { params }: { params: Promise<{
try {
const session = await getSession()
logger.debug(`[${requestId}] Fetching template: ${id}`)
const result = await db
.select({
template: templates,
@@ -74,8 +72,6 @@ export async function GET(request: NextRequest, { params }: { params: Promise<{
views: sql`${templates.views} + 1`,
})
.where(eq(templates.id, id))
logger.debug(`[${requestId}] Incremented view count for template: ${id}`)
} catch (viewError) {
logger.warn(`[${requestId}] Failed to increment view count for template: ${id}`, viewError)
}

View File

@@ -58,8 +58,6 @@ export async function POST(request: NextRequest, { params }: { params: Promise<{
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
logger.debug(`[${requestId}] Adding star for template: ${id}, user: ${session.user.id}`)
// Verify the template exists
const templateExists = await db
.select({ id: templates.id })
@@ -133,8 +131,6 @@ export async function DELETE(
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
logger.debug(`[${requestId}] Removing star for template: ${id}, user: ${session.user.id}`)
// Check if the star exists
const existingStar = await db
.select({ id: templateStars.id })

View File

@@ -68,8 +68,6 @@ export async function GET(request: NextRequest) {
const { searchParams } = new URL(request.url)
const params = QueryParamsSchema.parse(Object.fromEntries(searchParams.entries()))
logger.debug(`[${requestId}] Fetching templates with params:`, params)
// Check if user is a super user
const { effectiveSuperUser } = await verifyEffectiveSuperUser(session.user.id)
const isSuperUser = effectiveSuperUser
@@ -187,11 +185,6 @@ export async function POST(request: NextRequest) {
const body = await request.json()
const data = CreateTemplateSchema.parse(body)
logger.debug(`[${requestId}] Creating template:`, {
name: data.name,
workflowId: data.workflowId,
})
// Verify the workflow exists and belongs to the user
const workflowExists = await db
.select({ id: workflow.id })

View File

@@ -18,7 +18,6 @@ export async function DELETE(
const { id } = await params
try {
logger.debug(`[${requestId}] Deleting API key: ${id}`)
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })

View File

@@ -103,12 +103,10 @@ async function updateUserStatsForWand(
isBYOK = false
): Promise<void> {
if (!isBillingEnabled) {
logger.debug(`[${requestId}] Billing is disabled, skipping wand usage cost update`)
return
}
if (!usage.total_tokens || usage.total_tokens <= 0) {
logger.debug(`[${requestId}] No tokens to update in user stats`)
return
}
@@ -146,13 +144,6 @@ async function updateUserStatsForWand(
})
.where(eq(userStats.userId, userId))
logger.debug(`[${requestId}] Updated user stats for wand usage`, {
userId,
tokensUsed: totalTokens,
costAdded: costToStore,
isBYOK,
})
await logModelUsage({
userId,
source: 'wand',
@@ -291,23 +282,8 @@ export async function POST(req: NextRequest) {
messages.push({ role: 'user', content: prompt })
logger.debug(
`[${requestId}] Calling ${useWandAzure ? 'Azure OpenAI' : 'OpenAI'} API for wand generation`,
{
stream,
historyLength: history.length,
endpoint: useWandAzure ? azureEndpoint : 'api.openai.com',
model: useWandAzure ? wandModelName : 'gpt-4o',
apiVersion: useWandAzure ? azureApiVersion : 'N/A',
}
)
if (stream) {
try {
logger.debug(
`[${requestId}] Starting streaming request to ${useWandAzure ? 'Azure OpenAI' : 'OpenAI'}`
)
logger.info(
`[${requestId}] About to create stream with model: ${useWandAzure ? wandModelName : 'gpt-4o'}`
)
@@ -327,8 +303,6 @@ export async function POST(req: NextRequest) {
headers.Authorization = `Bearer ${activeOpenAIKey}`
}
logger.debug(`[${requestId}] Making streaming request to: ${apiUrl}`)
const response = await fetch(apiUrl, {
method: 'POST',
headers,
@@ -429,7 +403,6 @@ export async function POST(req: NextRequest) {
try {
parsed = JSON.parse(data)
} catch (parseError) {
logger.debug(`[${requestId}] Skipped non-JSON line: ${data.substring(0, 100)}`)
continue
}

View File

@@ -21,7 +21,6 @@ export async function GET(request: NextRequest, { params }: { params: Promise<{
try {
const { id } = await params
logger.debug(`[${requestId}] Fetching webhook with ID: ${id}`)
const auth = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!auth.success || !auth.userId) {
@@ -77,7 +76,6 @@ export async function PATCH(request: NextRequest, { params }: { params: Promise<
try {
const { id } = await params
logger.debug(`[${requestId}] Updating webhook with ID: ${id}`)
const auth = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!auth.success || !auth.userId) {
@@ -129,11 +127,6 @@ export async function PATCH(request: NextRequest, { params }: { params: Promise<
return NextResponse.json({ error: 'Access denied' }, { status: 403 })
}
logger.debug(`[${requestId}] Updating webhook properties`, {
hasActiveUpdate: isActive !== undefined,
hasFailedCountUpdate: failedCount !== undefined,
})
const updatedWebhook = await db
.update(webhook)
.set({
@@ -161,7 +154,6 @@ export async function DELETE(
try {
const { id } = await params
logger.debug(`[${requestId}] Deleting webhook with ID: ${id}`)
const auth = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!auth.success || !auth.userId) {

View File

@@ -112,7 +112,6 @@ export async function GET(request: NextRequest) {
return NextResponse.json({ webhooks: [] }, { status: 200 })
}
logger.debug(`[${requestId}] Fetching workspace-accessible webhooks for ${session.user.id}`)
const workspacePermissionRows = await db
.select({ workspaceId: permissions.entityId })
.from(permissions)

View File

@@ -35,8 +35,6 @@ export async function GET(request: NextRequest, { params }: { params: Promise<{
)
}
logger.debug(`[${requestId}] Checking chat deployment status for workflow: ${id}`)
// Find any active chat deployments for this workflow
const deploymentResults = await db
.select({

View File

@@ -22,6 +22,7 @@ import {
} from '@/lib/workflows/schedules'
import { validateWorkflowPermissions } from '@/lib/workflows/utils'
import { createErrorResponse, createSuccessResponse } from '@/app/api/workflows/utils'
import type { WorkflowState } from '@/stores/workflows/workflow/types'
const logger = createLogger('WorkflowDeployAPI')
@@ -33,8 +34,6 @@ export async function GET(request: NextRequest, { params }: { params: Promise<{
const { id } = await params
try {
logger.debug(`[${requestId}] Fetching deployment info for workflow: ${id}`)
const { error, workflow: workflowData } = await validateWorkflowPermissions(
id,
requestId,
@@ -51,6 +50,7 @@ export async function GET(request: NextRequest, { params }: { params: Promise<{
deployedAt: null,
apiKey: null,
needsRedeployment: false,
isPublicApi: workflowData.isPublicApi ?? false,
})
}
@@ -85,7 +85,10 @@ export async function GET(request: NextRequest, { params }: { params: Promise<{
variables: workflowRecord?.variables || {},
}
const { hasWorkflowChanged } = await import('@/lib/workflows/comparison')
needsRedeployment = hasWorkflowChanged(currentState as any, active.state as any)
needsRedeployment = hasWorkflowChanged(
currentState as WorkflowState,
active.state as WorkflowState
)
}
}
@@ -98,6 +101,7 @@ export async function GET(request: NextRequest, { params }: { params: Promise<{
isDeployed: workflowData.isDeployed,
deployedAt: workflowData.deployedAt,
needsRedeployment,
isPublicApi: workflowData.isPublicApi ?? false,
})
} catch (error: any) {
logger.error(`[${requestId}] Error fetching deployment info: ${id}`, error)
@@ -110,8 +114,6 @@ export async function POST(request: NextRequest, { params }: { params: Promise<{
const { id } = await params
try {
logger.debug(`[${requestId}] Deploying workflow: ${id}`)
const {
error,
session,
@@ -269,6 +271,7 @@ export async function POST(request: NextRequest, { params }: { params: Promise<{
resourceId: id,
resourceName: workflowData?.name,
description: `Deployed workflow "${workflowData?.name || id}"`,
metadata: { version: deploymentVersionId },
request,
})
@@ -301,6 +304,49 @@ export async function POST(request: NextRequest, { params }: { params: Promise<{
}
}
export async function PATCH(request: NextRequest, { params }: { params: Promise<{ id: string }> }) {
const requestId = generateRequestId()
const { id } = await params
try {
const { error, session } = await validateWorkflowPermissions(id, requestId, 'admin')
if (error) {
return createErrorResponse(error.message, error.status)
}
const body = await request.json()
const { isPublicApi } = body
if (typeof isPublicApi !== 'boolean') {
return createErrorResponse('Invalid request body: isPublicApi must be a boolean', 400)
}
if (isPublicApi) {
const { validatePublicApiAllowed, PublicApiNotAllowedError } = await import(
'@/ee/access-control/utils/permission-check'
)
try {
await validatePublicApiAllowed(session?.user?.id)
} catch (err) {
if (err instanceof PublicApiNotAllowedError) {
return createErrorResponse('Public API access is disabled', 403)
}
throw err
}
}
await db.update(workflow).set({ isPublicApi }).where(eq(workflow.id, id))
logger.info(`[${requestId}] Updated isPublicApi for workflow ${id} to ${isPublicApi}`)
return createSuccessResponse({ isPublicApi })
} catch (error: unknown) {
const message = error instanceof Error ? error.message : 'Failed to update deployment settings'
logger.error(`[${requestId}] Error updating deployment settings: ${id}`, { error })
return createErrorResponse(message, 500)
}
}
export async function DELETE(
request: NextRequest,
{ params }: { params: Promise<{ id: string }> }
@@ -309,8 +355,6 @@ export async function DELETE(
const { id } = await params
try {
logger.debug(`[${requestId}] Undeploying workflow: ${id}`)
const {
error,
session,

View File

@@ -21,8 +21,6 @@ export async function GET(request: NextRequest, { params }: { params: Promise<{
const { id } = await params
try {
logger.debug(`[${requestId}] Fetching deployed state for workflow: ${id}`)
const authHeader = request.headers.get('authorization')
let isInternalCall = false
@@ -38,8 +36,6 @@ export async function GET(request: NextRequest, { params }: { params: Promise<{
const response = createErrorResponse(error.message, error.status)
return addNoCacheHeaders(response)
}
} else {
logger.debug(`[${requestId}] Internal API call for deployed workflow: ${id}`)
}
let deployedState = null
@@ -52,7 +48,8 @@ export async function GET(request: NextRequest, { params }: { params: Promise<{
parallels: data.parallels,
variables: data.variables,
}
} catch {
} catch (error) {
logger.warn(`[${requestId}] Failed to load deployed state for workflow ${id}`, { error })
deployedState = null
}

View File

@@ -12,11 +12,16 @@ import {
import { generateRequestId } from '@/lib/core/utils/request'
import { SSE_HEADERS } from '@/lib/core/utils/sse'
import { getBaseUrl } from '@/lib/core/utils/urls'
import {
buildNextCallChain,
parseCallChain,
SIM_VIA_HEADER,
validateCallChain,
} from '@/lib/execution/call-chain'
import { createExecutionEventWriter, setExecutionMeta } from '@/lib/execution/event-buffer'
import { processInputFileFields } from '@/lib/execution/files'
import { preprocessExecution } from '@/lib/execution/preprocessing'
import { LoggingSession } from '@/lib/logs/execution/logging-session'
import { buildTraceSpans } from '@/lib/logs/execution/trace-spans/trace-spans'
import {
cleanupExecutionBase64Cache,
hydrateUserFilesWithBase64,
@@ -155,10 +160,11 @@ type AsyncExecutionParams = {
input: any
triggerType: CoreTriggerType
executionId: string
callChain?: string[]
}
async function handleAsyncExecution(params: AsyncExecutionParams): Promise<NextResponse> {
const { requestId, workflowId, userId, input, triggerType, executionId } = params
const { requestId, workflowId, userId, input, triggerType, executionId, callChain } = params
const payload: WorkflowExecutionPayload = {
workflowId,
@@ -166,6 +172,7 @@ async function handleAsyncExecution(params: AsyncExecutionParams): Promise<NextR
input,
triggerType,
executionId,
callChain,
}
try {
@@ -236,12 +243,59 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
const requestId = generateRequestId()
const { id: workflowId } = await params
const incomingCallChain = parseCallChain(req.headers.get(SIM_VIA_HEADER))
const callChainError = validateCallChain(incomingCallChain)
if (callChainError) {
logger.warn(`[${requestId}] Call chain rejected for workflow ${workflowId}: ${callChainError}`)
return NextResponse.json({ error: callChainError }, { status: 409 })
}
const callChain = buildNextCallChain(incomingCallChain, workflowId)
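
The route now parses the `SIM_VIA_HEADER` chain, rejects invalid chains with a 409 before doing any work, and appends the current workflow before forwarding. A hedged sketch of what these helpers could look like; the real implementations live in `@/lib/execution/call-chain` and may differ (the depth limit and duplicate-based cycle check here are assumptions):

```typescript
// Illustrative stand-ins for parseCallChain / validateCallChain /
// buildNextCallChain. MAX_DEPTH is an assumed limit, not the real value.
const MAX_DEPTH = 10

function parseCallChain(header: string | null): string[] {
  if (!header) return []
  return header.split(',').map((id) => id.trim()).filter(Boolean)
}

// Returns an error message, or null when the chain is acceptable.
function validateCallChain(chain: string[]): string | null {
  if (chain.length >= MAX_DEPTH) return 'Workflow call chain too deep'
  if (new Set(chain).size !== chain.length)
    return 'Cycle detected in workflow call chain'
  return null
}

function buildNextCallChain(chain: string[], workflowId: string): string[] {
  return [...chain, workflowId]
}
```

Threading the extended chain through `handleAsyncExecution` and the MCP serve route (which forwards the header on its internal fetch) is what lets workflow-to-workflow calls be bounded instead of recursing forever.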
try {
const auth = await checkHybridAuth(req, { requireWorkflowId: false })
let userId: string
let isPublicApiAccess = false
if (!auth.success || !auth.userId) {
return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
const hasExplicitCredentials =
req.headers.has('x-api-key') || req.headers.get('authorization')?.startsWith('Bearer ')
if (hasExplicitCredentials) {
return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
}
const { db: dbClient, workflow: workflowTable } = await import('@sim/db')
const { eq } = await import('drizzle-orm')
const [wf] = await dbClient
.select({
isPublicApi: workflowTable.isPublicApi,
isDeployed: workflowTable.isDeployed,
userId: workflowTable.userId,
})
.from(workflowTable)
.where(eq(workflowTable.id, workflowId))
.limit(1)
if (!wf?.isPublicApi || !wf.isDeployed) {
return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
}
const { isPublicApiDisabled } = await import('@/lib/core/config/feature-flags')
if (isPublicApiDisabled) {
return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
}
const { getUserPermissionConfig } = await import('@/ee/access-control/utils/permission-check')
const ownerConfig = await getUserPermissionConfig(wf.userId)
if (ownerConfig?.disablePublicApi) {
return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
}
userId = wf.userId
isPublicApiAccess = true
} else {
userId = auth.userId
}
const userId = auth.userId
let body: any = {}
try {
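
The new unauthenticated fallback above only grants public access when every gate passes: no explicit credentials were presented, the workflow is deployed with `isPublicApi` set, the deployment-wide feature flag allows it, and the owner's permission config does not disable it. A condensed sketch of that decision order (the config shapes are simplified assumptions):

```typescript
interface WorkflowRow { isPublicApi: boolean; isDeployed: boolean; userId: string }

// Returns the effective userId when public access is allowed, else null.
// Mirrors the gate order in the route; shapes are simplified.
function resolvePublicApiUser(
  hasExplicitCredentials: boolean,
  wf: WorkflowRow | undefined,
  publicApiDisabled: boolean,
  ownerDisabledPublicApi: boolean
): string | null {
  if (hasExplicitCredentials) return null // bad credentials never fall back
  if (!wf?.isPublicApi || !wf.isDeployed) return null
  if (publicApiDisabled) return null // deployment-wide feature flag
  if (ownerDisabledPublicApi) return null // per-owner EE setting
  return wf.userId // execute as the workflow owner
}
```

Checking `hasExplicitCredentials` first matters: a caller who presented an invalid API key or Bearer token should get a 401, not silently downgrade to anonymous public access.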
@@ -268,7 +322,7 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
)
}
const defaultTriggerType = auth.authType === 'api_key' ? 'api' : 'manual'
const defaultTriggerType = isPublicApiAccess || auth.authType === 'api_key' ? 'api' : 'manual'
const {
selectedOutputs,
@@ -289,7 +343,9 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
| { startBlockId: string; sourceSnapshot: SerializableExecutionState }
| undefined
if (rawRunFromBlock) {
- if (rawRunFromBlock.sourceSnapshot) {
+ if (rawRunFromBlock.sourceSnapshot && !isPublicApiAccess) {
+ // Public API callers cannot inject arbitrary block state via sourceSnapshot.
+ // They must use executionId to resume from a server-stored execution state.
resolvedRunFromBlock = {
startBlockId: rawRunFromBlock.startBlockId,
sourceSnapshot: rawRunFromBlock.sourceSnapshot as SerializableExecutionState,
@@ -325,7 +381,7 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
// For API key and internal JWT auth, the entire body is the input (except for our control fields)
// For session auth, the input is explicitly provided in the input field
const input =
- auth.authType === 'api_key' || auth.authType === 'internal_jwt'
+ isPublicApiAccess || auth.authType === 'api_key' || auth.authType === 'internal_jwt'
? (() => {
const {
selectedOutputs,
@@ -344,19 +400,14 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
})()
: validatedInput
- const shouldUseDraftState = useDraftState ?? auth.authType === 'session'
- const workflowAuthorization = await authorizeWorkflowByWorkspacePermission({
- workflowId,
- userId,
- action: shouldUseDraftState ? 'write' : 'read',
- })
- if (!workflowAuthorization.allowed) {
- return NextResponse.json(
- { error: workflowAuthorization.message || 'Access denied' },
- { status: workflowAuthorization.status }
- )
- }
+ // Public API callers must not inject arbitrary workflow state overrides (code injection risk).
+ // stopAfterBlockId and runFromBlock are safe — they control execution flow within the deployed state.
+ const sanitizedWorkflowStateOverride = isPublicApiAccess ? undefined : workflowStateOverride
+ // Public API callers always execute the deployed state, never the draft.
+ const shouldUseDraftState = isPublicApiAccess
+ ? false
+ : (useDraftState ?? auth.authType === 'session')
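The two guards above are pure decisions on the request context: public callers are pinned to the deployed state and stripped of any state override, while authenticated callers keep the old behavior. A sketch as standalone helpers (hypothetical, for illustration only):

```typescript
type AuthType = 'session' | 'api_key' | 'internal_jwt'

// Public API callers never execute the draft; session auth defaults to draft.
function resolveDraftState(
  isPublicApiAccess: boolean,
  useDraftState: boolean | undefined,
  authType: AuthType
): boolean {
  if (isPublicApiAccess) return false
  return useDraftState ?? authType === 'session'
}

// Public API callers must not pass workflow state overrides.
function sanitizeOverride<T>(isPublicApiAccess: boolean, override: T | undefined): T | undefined {
  return isPublicApiAccess ? undefined : override
}
```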
const streamHeader = req.headers.get('X-Stream-Response') === 'true'
const enableSSE = streamHeader || streamParam === true
const executionModeHeader = req.headers.get('X-Execution-Mode')
@@ -391,6 +442,21 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
const useAuthenticatedUserAsActor =
isClientSession || (auth.authType === 'api_key' && auth.apiKeyType === 'personal')
// Authorization fetches the full workflow record and checks workspace permissions.
// Run it first so we can pass the record to preprocessing (eliminates a duplicate DB query).
const workflowAuthorization = await authorizeWorkflowByWorkspacePermission({
workflowId,
userId,
action: shouldUseDraftState ? 'write' : 'read',
})
if (!workflowAuthorization.allowed) {
return NextResponse.json(
{ error: workflowAuthorization.message || 'Access denied' },
{ status: workflowAuthorization.status }
)
}
// Pass the pre-fetched workflow record to skip the redundant Step 1 DB query in preprocessing.
const preprocessResult = await preprocessExecution({
workflowId,
userId,
@@ -401,6 +467,7 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
loggingSession,
useDraftState: shouldUseDraftState,
useAuthenticatedUserAsActor,
workflowRecord: workflowAuthorization.workflow ?? undefined,
})
if (!preprocessResult.success) {
@@ -433,6 +500,7 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
input,
triggerType: loggingTriggerType,
executionId,
callChain,
})
}
@@ -449,7 +517,7 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
try {
const workflowData = shouldUseDraftState
? await loadWorkflowFromNormalizedTables(workflowId)
- : await loadDeployedWorkflowState(workflowId)
+ : await loadDeployedWorkflowState(workflowId, workspaceId)
if (workflowData) {
const deployedVariables =
@@ -516,7 +584,8 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
)
}
- const effectiveWorkflowStateOverride = workflowStateOverride || cachedWorkflowData || undefined
+ const effectiveWorkflowStateOverride =
+ sanitizedWorkflowStateOverride || cachedWorkflowData || undefined
if (!enableSSE) {
logger.info(`[${requestId}] Using non-SSE execution (direct JSON response)`)
@@ -539,6 +608,7 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
isClientSession,
enforceCredentialAccess: useAuthenticatedUserAsActor,
workflowStateOverride: effectiveWorkflowStateOverride,
callChain,
}
const executionVariables = cachedWorkflowData?.variables ?? workflow.variables ?? {}
@@ -627,12 +697,6 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
const executionResult = hasExecutionResult(error) ? error.executionResult : undefined
await loggingSession.safeCompleteWithError({
totalDurationMs: executionResult?.metadata?.duration,
error: { message: errorMessage },
traceSpans: executionResult?.logs as any,
})
return NextResponse.json(
{
success: false,
@@ -651,11 +715,9 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
} finally {
timeoutController.cleanup()
if (executionId) {
- try {
- await cleanupExecutionBase64Cache(executionId)
- } catch (error) {
+ void cleanupExecutionBase64Cache(executionId).catch((error) => {
logger.error(`[${requestId}] Failed to cleanup base64 cache`, { error })
- }
+ })
}
}
}
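The `finally`-block change above swaps an awaited cleanup for a fire-and-forget call: the response is no longer blocked on cache cleanup, and failures are routed to a logger instead of being swallowed or rethrown. The pattern, sketched with a stand-in cleanup function:

```typescript
// Start async cleanup without awaiting it; attach an error handler so a
// rejected promise is logged rather than becoming an unhandled rejection.
function scheduleCleanup(
  cleanup: () => Promise<void>,
  onError: (err: unknown) => void
): void {
  void cleanup().catch(onError)
}
```

Note that `cleanup()` still starts synchronously; only the completion (and any failure) is deferred.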
@@ -909,6 +971,7 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
isClientSession,
enforceCredentialAccess: useAuthenticatedUserAsActor,
workflowStateOverride: effectiveWorkflowStateOverride,
callChain,
}
const sseExecutionVariables = cachedWorkflowData?.variables ?? workflow.variables ?? {}
@@ -1055,15 +1118,6 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
logger.error(`[${requestId}] SSE execution failed: ${errorMessage}`, { isTimeout })
const executionResult = hasExecutionResult(error) ? error.executionResult : undefined
const { traceSpans, totalDuration } = executionResult
? buildTraceSpans(executionResult)
: { traceSpans: [], totalDuration: 0 }
await loggingSession.safeCompleteWithError({
totalDurationMs: totalDuration || executionResult?.metadata?.duration,
error: { message: errorMessage },
traceSpans,
})
sendEvent({
type: 'execution:error',

View File

@@ -77,18 +77,9 @@ export async function GET(request: NextRequest, { params }: { params: Promise<{
}
}
logger.debug(`[${requestId}] Attempting to load workflow ${workflowId} from normalized tables`)
const normalizedData = await loadWorkflowFromNormalizedTables(workflowId)
if (normalizedData) {
- logger.debug(`[${requestId}] Found normalized data for workflow ${workflowId}:`, {
- blocksCount: Object.keys(normalizedData.blocks).length,
- edgesCount: normalizedData.edges.length,
- loopsCount: Object.keys(normalizedData.loops).length,
- parallelsCount: Object.keys(normalizedData.parallels).length,
- loops: normalizedData.loops,
- })
const finalWorkflowData = {
...workflowData,
state: {
@@ -347,6 +338,9 @@ export async function DELETE(
resourceId: workflowId,
resourceName: workflowData.name,
description: `Deleted workflow "${workflowData.name}"`,
metadata: {
deleteTemplates: deleteTemplatesParam === 'delete',
},
request,
})

View File

@@ -11,7 +11,7 @@ import { extractAndPersistCustomTools } from '@/lib/workflows/persistence/custom
import { saveWorkflowToNormalizedTables } from '@/lib/workflows/persistence/utils'
import { sanitizeAgentToolsInBlocks } from '@/lib/workflows/sanitization/validation'
import { authorizeWorkflowByWorkspacePermission } from '@/lib/workflows/utils'
- import type { BlockState } from '@/stores/workflows/workflow/types'
+ import type { BlockState, WorkflowState } from '@/stores/workflows/workflow/types'
import { generateLoopBlocks, generateParallelBlocks } from '@/stores/workflows/workflow/utils'
const logger = createLogger('WorkflowStateAPI')
@@ -153,13 +153,15 @@ export async function PUT(request: NextRequest, { params }: { params: Promise<{
}
// Sanitize custom tools in agent blocks before saving
- const { blocks: sanitizedBlocks, warnings } = sanitizeAgentToolsInBlocks(state.blocks as any)
+ const { blocks: sanitizedBlocks, warnings } = sanitizeAgentToolsInBlocks(
+ state.blocks as Record<string, BlockState>
+ )
// Save to normalized tables
// Ensure all required fields are present for WorkflowState type
// Filter out blocks without type or name before saving
const filteredBlocks = Object.entries(sanitizedBlocks).reduce(
- (acc, [blockId, block]: [string, any]) => {
+ (acc, [blockId, block]: [string, BlockState]) => {
if (block.type && block.name) {
// Ensure all required fields are present
acc[blockId] = {
@@ -191,7 +193,10 @@ export async function PUT(request: NextRequest, { params }: { params: Promise<{
deployedAt: state.deployedAt,
}
- const saveResult = await saveWorkflowToNormalizedTables(workflowId, workflowState as any)
+ const saveResult = await saveWorkflowToNormalizedTables(
+ workflowId,
+ workflowState as WorkflowState
+ )
if (!saveResult.success) {
logger.error(`[${requestId}] Failed to save workflow ${workflowId} state:`, saveResult.error)

View File

@@ -7,6 +7,7 @@ import { hasWorkflowChanged } from '@/lib/workflows/comparison'
import { loadWorkflowFromNormalizedTables } from '@/lib/workflows/persistence/utils'
import { validateWorkflowAccess } from '@/app/api/workflows/middleware'
import { createErrorResponse, createSuccessResponse } from '@/app/api/workflows/utils'
import type { WorkflowState } from '@/stores/workflows/workflow/types'
const logger = createLogger('WorkflowStatusAPI')
@@ -64,7 +65,10 @@ export async function GET(request: NextRequest, { params }: { params: Promise<{
.limit(1)
if (active?.state) {
- needsRedeployment = hasWorkflowChanged(currentState as any, active.state as any)
+ needsRedeployment = hasWorkflowChanged(
+ currentState as WorkflowState,
+ active.state as WorkflowState
+ )
}
}

View File

@@ -90,6 +90,7 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
resourceId: workflowId,
resourceName: workflowData.name ?? undefined,
description: `Updated workflow variables`,
metadata: { variableCount: Object.keys(variables).length },
request: req,
})

View File

@@ -137,7 +137,7 @@ export async function DELETE(
.where(
and(eq(apiKey.workspaceId, workspaceId), eq(apiKey.id, keyId), eq(apiKey.type, 'workspace'))
)
- .returning({ id: apiKey.id, name: apiKey.name })
+ .returning({ id: apiKey.id, name: apiKey.name, lastUsed: apiKey.lastUsed })
if (deletedRows.length === 0) {
return NextResponse.json({ error: 'API key not found' }, { status: 404 })
@@ -155,6 +155,7 @@ export async function DELETE(
actorEmail: session.user.email ?? undefined,
resourceName: deletedKey.name,
description: `Revoked workspace API key: ${deletedKey.name}`,
metadata: { lastUsed: deletedKey.lastUsed?.toISOString() ?? null },
request,
})

View File

@@ -56,6 +56,10 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
resourceId: result.id,
resourceName: name,
description: `Duplicated workspace to "${name}"`,
metadata: {
sourceWorkspaceId,
affected: { workflows: result.workflowsCount, folders: result.foldersCount },
},
request: req,
})

View File

@@ -140,7 +140,7 @@ export async function PUT(request: NextRequest, { params }: { params: Promise<{
resourceType: AuditResourceType.ENVIRONMENT,
resourceId: workspaceId,
description: `Updated environment variables`,
- metadata: { keysUpdated: Object.keys(variables) },
+ metadata: { variableCount: Object.keys(variables).length },
request,
})

View File

@@ -1,6 +1,6 @@
import crypto from 'crypto'
import { db } from '@sim/db'
- import { permissions, workspace, workspaceEnvironment } from '@sim/db/schema'
+ import { permissions, user, workspace, workspaceEnvironment } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
@@ -132,6 +132,21 @@ export async function PATCH(request: NextRequest, { params }: { params: Promise<
)
}
// Capture existing permissions and user info for audit metadata
const existingPerms = await db
.select({
userId: permissions.userId,
permissionType: permissions.permissionType,
email: user.email,
})
.from(permissions)
.innerJoin(user, eq(permissions.userId, user.id))
.where(and(eq(permissions.entityType, 'workspace'), eq(permissions.entityId, workspaceId)))
const permLookup = new Map(
existingPerms.map((p) => [p.userId, { permission: p.permissionType, email: p.email }])
)
await db.transaction(async (tx) => {
for (const update of body.updates) {
await tx
@@ -182,7 +197,17 @@ export async function PATCH(request: NextRequest, { params }: { params: Promise<
actorName: session.user.name ?? undefined,
actorEmail: session.user.email ?? undefined,
description: `Changed permissions for user ${update.userId} to ${update.permissions}`,
- metadata: { targetUserId: update.userId, newPermissions: update.permissions },
+ metadata: {
+ targetUserId: update.userId,
+ targetEmail: permLookup.get(update.userId)?.email ?? undefined,
+ changes: [
+ {
+ field: 'permissions',
+ from: permLookup.get(update.userId)?.permission ?? null,
+ to: update.permissions,
+ },
+ ],
+ },
request,
})
}
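The audit-metadata change above follows a read-before-write pattern: capture the existing permissions into a `Map` before the transaction mutates them, then emit a `changes` entry pairing the old value with the new one. A sketch with hypothetical record shapes:

```typescript
interface PermRecord {
  permission: string
  email: string
}

// Build one audit "changes" entry from the pre-mutation lookup.
// Unknown users (e.g. newly granted) get `from: null`.
function buildChange(
  lookup: Map<string, PermRecord>,
  userId: string,
  newPermission: string
) {
  const prev = lookup.get(userId)
  return {
    targetUserId: userId,
    targetEmail: prev?.email,
    changes: [{ field: 'permissions', from: prev?.permission ?? null, to: newPermission }],
  }
}
```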

View File

@@ -237,6 +237,7 @@ export async function DELETE(
.limit(1)
// Delete workspace and all related data in a transaction
let workspaceWorkflowCount = 0
await db.transaction(async (tx) => {
// Get all workflows in this workspace before deletion
const workspaceWorkflows = await tx
@@ -244,6 +245,8 @@ export async function DELETE(
.from(workflow)
.where(eq(workflow.workspaceId, workspaceId))
workspaceWorkflowCount = workspaceWorkflows.length
if (workspaceWorkflows.length > 0) {
const workflowIds = workspaceWorkflows.map((w) => w.id)
@@ -299,6 +302,12 @@ export async function DELETE(
resourceId: workspaceId,
resourceName: workspaceRecord?.name,
description: `Deleted workspace "${workspaceRecord?.name || workspaceId}"`,
metadata: {
affected: {
workflows: workspaceWorkflowCount,
},
deleteTemplates,
},
request,
})

View File

@@ -189,6 +189,7 @@ export async function GET(
actorEmail: session.user.email ?? undefined,
resourceName: workspaceDetails.name,
description: `Accepted workspace invitation to "${workspaceDetails.name}"`,
metadata: { targetEmail: invitation.email },
request: req,
})
@@ -255,7 +256,7 @@ export async function DELETE(
actorName: session.user.name ?? undefined,
actorEmail: session.user.email ?? undefined,
description: `Revoked workspace invitation for ${invitation.email}`,
- metadata: { invitationId, email: invitation.email },
+ metadata: { invitationId, targetEmail: invitation.email },
request: _request,
})

View File

@@ -225,7 +225,7 @@ export async function POST(req: NextRequest) {
resourceId: workspaceId,
resourceName: email,
description: `Invited ${email} as ${permission}`,
- metadata: { email, role: permission },
+ metadata: { targetEmail: email, targetRole: permission },
request: req,
})

View File

@@ -21,6 +21,7 @@ interface WorkflowDeploymentInfo {
endpoint: string
exampleCommand: string
needsRedeployment: boolean
isPublicApi?: boolean
}
interface ApiDeployProps {
@@ -107,12 +108,12 @@ export function ApiDeploy({
if (!info) return ''
const endpoint = getBaseEndpoint()
const payload = getPayloadObject()
const isPublic = info.isPublicApi
switch (language) {
case 'curl':
return `curl -X POST \\
- -H "X-API-Key: $SIM_API_KEY" \\
- -H "Content-Type: application/json" \\
+ ${isPublic ? '' : ' -H "X-API-Key: $SIM_API_KEY" \\\n'} -H "Content-Type: application/json" \\
-d '${JSON.stringify(payload)}' \\
${endpoint}`
@@ -123,8 +124,7 @@ import requests
response = requests.post(
"${endpoint}",
headers={
- "X-API-Key": os.environ.get("SIM_API_KEY"),
- "Content-Type": "application/json"
+ ${isPublic ? '' : ' "X-API-Key": os.environ.get("SIM_API_KEY"),\n'} "Content-Type": "application/json"
},
json=${JSON.stringify(payload, null, 4).replace(/\n/g, '\n ')}
)
@@ -135,8 +135,7 @@ print(response.json())`
return `const response = await fetch("${endpoint}", {
method: "POST",
headers: {
- "X-API-Key": process.env.SIM_API_KEY,
- "Content-Type": "application/json"
+ ${isPublic ? '' : ' "X-API-Key": process.env.SIM_API_KEY,\n'} "Content-Type": "application/json"
},
body: JSON.stringify(${JSON.stringify(payload)})
});
@@ -148,8 +147,7 @@ console.log(data);`
return `const response = await fetch("${endpoint}", {
method: "POST",
headers: {
- "X-API-Key": process.env.SIM_API_KEY,
- "Content-Type": "application/json"
+ ${isPublic ? '' : ' "X-API-Key": process.env.SIM_API_KEY,\n'} "Content-Type": "application/json"
},
body: JSON.stringify(${JSON.stringify(payload)})
});
@@ -166,12 +164,12 @@ console.log(data);`
if (!info) return ''
const endpoint = getBaseEndpoint()
const payload = getStreamPayloadObject()
const isPublic = info.isPublicApi
switch (language) {
case 'curl':
return `curl -X POST \\
- -H "X-API-Key: $SIM_API_KEY" \\
- -H "Content-Type: application/json" \\
+ ${isPublic ? '' : ' -H "X-API-Key: $SIM_API_KEY" \\\n'} -H "Content-Type: application/json" \\
-d '${JSON.stringify(payload)}' \\
${endpoint}`
@@ -182,8 +180,7 @@ import requests
response = requests.post(
"${endpoint}",
headers={
- "X-API-Key": os.environ.get("SIM_API_KEY"),
- "Content-Type": "application/json"
+ ${isPublic ? '' : ' "X-API-Key": os.environ.get("SIM_API_KEY"),\n'} "Content-Type": "application/json"
},
json=${JSON.stringify(payload, null, 4).replace(/\n/g, '\n ')},
stream=True
@@ -197,8 +194,7 @@ for line in response.iter_lines():
return `const response = await fetch("${endpoint}", {
method: "POST",
headers: {
- "X-API-Key": process.env.SIM_API_KEY,
- "Content-Type": "application/json"
+ ${isPublic ? '' : ' "X-API-Key": process.env.SIM_API_KEY,\n'} "Content-Type": "application/json"
},
body: JSON.stringify(${JSON.stringify(payload)})
});
@@ -216,8 +212,7 @@ while (true) {
return `const response = await fetch("${endpoint}", {
method: "POST",
headers: {
- "X-API-Key": process.env.SIM_API_KEY,
- "Content-Type": "application/json"
+ ${isPublic ? '' : ' "X-API-Key": process.env.SIM_API_KEY,\n'} "Content-Type": "application/json"
},
body: JSON.stringify(${JSON.stringify(payload)})
});
@@ -241,14 +236,14 @@ while (true) {
const endpoint = getBaseEndpoint()
const baseUrl = endpoint.split('/api/workflows/')[0]
const payload = getPayloadObject()
const isPublic = info.isPublicApi
switch (asyncExampleType) {
case 'execute':
switch (language) {
case 'curl':
return `curl -X POST \\
- -H "X-API-Key: $SIM_API_KEY" \\
- -H "Content-Type: application/json" \\
+ ${isPublic ? '' : ' -H "X-API-Key: $SIM_API_KEY" \\\n'} -H "Content-Type: application/json" \\
-H "X-Execution-Mode: async" \\
-d '${JSON.stringify(payload)}' \\
${endpoint}`
@@ -260,8 +255,7 @@ import requests
response = requests.post(
"${endpoint}",
headers={
- "X-API-Key": os.environ.get("SIM_API_KEY"),
- "Content-Type": "application/json",
+ ${isPublic ? '' : ' "X-API-Key": os.environ.get("SIM_API_KEY"),\n'} "Content-Type": "application/json",
"X-Execution-Mode": "async"
},
json=${JSON.stringify(payload, null, 4).replace(/\n/g, '\n ')}
@@ -274,8 +268,7 @@ print(job) # Contains jobId and executionId`
return `const response = await fetch("${endpoint}", {
method: "POST",
headers: {
- "X-API-Key": process.env.SIM_API_KEY,
- "Content-Type": "application/json",
+ ${isPublic ? '' : ' "X-API-Key": process.env.SIM_API_KEY,\n'} "Content-Type": "application/json",
"X-Execution-Mode": "async"
},
body: JSON.stringify(${JSON.stringify(payload)})
@@ -288,8 +281,7 @@ console.log(job); // Contains jobId and executionId`
return `const response = await fetch("${endpoint}", {
method: "POST",
headers: {
- "X-API-Key": process.env.SIM_API_KEY,
- "Content-Type": "application/json",
+ ${isPublic ? '' : ' "X-API-Key": process.env.SIM_API_KEY,\n'} "Content-Type": "application/json",
"X-Execution-Mode": "async"
},
body: JSON.stringify(${JSON.stringify(payload)})

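Every snippet generator above applies the same rule: omit the `X-API-Key` header when the deployment is public, keep `Content-Type` unconditionally. The equivalent logic as a tiny helper (hypothetical, not the component's code):

```typescript
// Build the header lines for a generated curl example.
// Public deployments need no API key; Content-Type is always sent.
function buildCurlHeaders(isPublic: boolean): string[] {
  const headers: string[] = []
  if (!isPublic) headers.push('-H "X-API-Key: $SIM_API_KEY"')
  headers.push('-H "Content-Type: application/json"')
  return headers
}
```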
View File

@@ -4,6 +4,8 @@ import { useCallback, useEffect, useMemo, useRef, useState } from 'react'
import {
Badge,
Button,
ButtonGroup,
ButtonGroupItem,
Input,
Label,
Modal,
@@ -16,6 +18,8 @@ import {
import { normalizeInputFormatValue } from '@/lib/workflows/input-format'
import { isInputDefinitionTrigger } from '@/lib/workflows/triggers/input-definition-triggers'
import type { InputFormatField } from '@/lib/workflows/types'
import { useDeploymentInfo, useUpdatePublicApi } from '@/hooks/queries/deployments'
import { usePermissionConfig } from '@/hooks/use-permission-config'
import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
import { useSubBlockStore } from '@/stores/workflows/subblock/store'
import { useWorkflowStore } from '@/stores/workflows/workflow/store'
@@ -40,13 +44,20 @@ export function ApiInfoModal({ open, onOpenChange, workflowId }: ApiInfoModalPro
)
const updateWorkflow = useWorkflowRegistry((state) => state.updateWorkflow)
const { data: deploymentData } = useDeploymentInfo(workflowId, { enabled: open })
const updatePublicApiMutation = useUpdatePublicApi()
const { isPublicApiDisabled } = usePermissionConfig()
const [description, setDescription] = useState('')
const [paramDescriptions, setParamDescriptions] = useState<Record<string, string>>({})
const [accessMode, setAccessMode] = useState<'api_key' | 'public'>('api_key')
const [isSaving, setIsSaving] = useState(false)
const [saveError, setSaveError] = useState<string | null>(null)
const [showUnsavedChangesAlert, setShowUnsavedChangesAlert] = useState(false)
const initialDescriptionRef = useRef('')
const initialParamDescriptionsRef = useRef<Record<string, string>>({})
const initialAccessModeRef = useRef<'api_key' | 'public'>('api_key')
const starterBlockId = useMemo(() => {
for (const [blockId, block] of Object.entries(blocks)) {
@@ -71,6 +82,8 @@ export function ApiInfoModal({ open, onOpenChange, workflowId }: ApiInfoModalPro
return normalizeInputFormatValue(blockValue) as NormalizedField[]
}, [starterBlockId, subBlockValues, blocks])
const accessModeInitializedRef = useRef(false)
useEffect(() => {
if (open) {
const normalizedDesc = workflowMetadata?.description?.toLowerCase().trim()
@@ -92,11 +105,24 @@ export function ApiInfoModal({ open, onOpenChange, workflowId }: ApiInfoModalPro
}
setParamDescriptions(descriptions)
initialParamDescriptionsRef.current = { ...descriptions }
setSaveError(null)
accessModeInitializedRef.current = false
}
}, [open, workflowMetadata, inputFormat])
useEffect(() => {
if (open && deploymentData && !accessModeInitializedRef.current) {
const initialAccess = deploymentData.isPublicApi ? 'public' : 'api_key'
setAccessMode(initialAccess)
initialAccessModeRef.current = initialAccess
accessModeInitializedRef.current = true
}
}, [open, deploymentData])
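The effect above is an initialize-once pattern: deployment data may arrive (or refetch) after the modal opens, so a ref flag ensures the access mode is seeded exactly once and later fetches never clobber the user's in-progress edits. A sketch with a plain state object standing in for the React refs:

```typescript
// Seed accessMode from server data at most once per modal open.
// `isPublicApi === undefined` means the data hasn't loaded yet.
function initAccessMode(
  state: { initialized: boolean; accessMode: 'api_key' | 'public' },
  isPublicApi: boolean | undefined
): void {
  if (state.initialized || isPublicApi === undefined) return
  state.accessMode = isPublicApi ? 'public' : 'api_key'
  state.initialized = true
}
```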
const hasChanges = useMemo(() => {
if (description.trim() !== initialDescriptionRef.current.trim()) return true
if (accessMode !== initialAccessModeRef.current) return true
for (const field of inputFormat) {
const currentValue = (paramDescriptions[field.name] || '').trim()
@@ -105,7 +131,7 @@ export function ApiInfoModal({ open, onOpenChange, workflowId }: ApiInfoModalPro
}
return false
- }, [description, paramDescriptions, inputFormat])
+ }, [description, paramDescriptions, inputFormat, accessMode])
const handleParamDescriptionChange = (fieldName: string, value: string) => {
setParamDescriptions((prev) => ({
@@ -126,6 +152,7 @@ export function ApiInfoModal({ open, onOpenChange, workflowId }: ApiInfoModalPro
setShowUnsavedChangesAlert(false)
setDescription(initialDescriptionRef.current)
setParamDescriptions({ ...initialParamDescriptionsRef.current })
setAccessMode(initialAccessModeRef.current)
onOpenChange(false)
}, [onOpenChange])
@@ -138,7 +165,15 @@ export function ApiInfoModal({ open, onOpenChange, workflowId }: ApiInfoModalPro
}
setIsSaving(true)
setSaveError(null)
try {
if (accessMode !== initialAccessModeRef.current) {
await updatePublicApiMutation.mutateAsync({
workflowId,
isPublicApi: accessMode === 'public',
})
}
if (description.trim() !== (workflowMetadata?.description || '')) {
updateWorkflow(workflowId, { description: description.trim() || 'New workflow' })
}
@@ -152,6 +187,9 @@ export function ApiInfoModal({ open, onOpenChange, workflowId }: ApiInfoModalPro
}
onOpenChange(false)
} catch (err: unknown) {
const message = err instanceof Error ? err.message : 'Failed to update access settings'
setSaveError(message)
} finally {
setIsSaving(false)
}
@@ -165,6 +203,8 @@ export function ApiInfoModal({ open, onOpenChange, workflowId }: ApiInfoModalPro
paramDescriptions,
setValue,
onOpenChange,
accessMode,
updatePublicApiMutation,
])
return (
@@ -187,6 +227,26 @@ export function ApiInfoModal({ open, onOpenChange, workflowId }: ApiInfoModalPro
/>
</div>
{!isPublicApiDisabled && (
<div>
<Label className='mb-[6.5px] block pl-[2px] font-medium text-[13px] text-[var(--text-primary)]'>
Access
</Label>
<ButtonGroup
value={accessMode}
onValueChange={(val) => setAccessMode(val as 'api_key' | 'public')}
>
<ButtonGroupItem value='api_key'>API Key</ButtonGroupItem>
<ButtonGroupItem value='public'>Public</ButtonGroupItem>
</ButtonGroup>
<p className='mt-1 text-[12px] text-[var(--text-secondary)]'>
{accessMode === 'public'
? 'Anyone can call this API without authentication. You will be billed for all usage.'
: 'Requires a valid API key to call this endpoint.'}
</p>
</div>
)}
{inputFormat.length > 0 && (
<div>
<Label className='mb-[6.5px] block pl-[2px] font-medium text-[13px] text-[var(--text-primary)]'>
@@ -227,6 +287,9 @@ export function ApiInfoModal({ open, onOpenChange, workflowId }: ApiInfoModalPro
)}
</ModalBody>
<ModalFooter>
{saveError && (
<p className='mr-auto text-[12px] text-[var(--text-error)]'>{saveError}</p>
)}
<Button variant='default' onClick={handleCloseAttempt} disabled={isSaving}>
Cancel
</Button>

View File

@@ -70,6 +70,7 @@ interface WorkflowDeploymentInfoUI {
endpoint: string
exampleCommand: string
needsRedeployment: boolean
isPublicApi: boolean
}
type TabView = 'general' | 'api' | 'chat' | 'template' | 'mcp' | 'form' | 'a2a'
@@ -117,7 +118,7 @@ export function DeployModal({
const [isApiInfoModalOpen, setIsApiInfoModalOpen] = useState(false)
const userPermissions = useUserPermissionsContext()
const canManageWorkspaceKeys = userPermissions.canAdmin
- const { config: permissionConfig } = usePermissionConfig()
+ const { config: permissionConfig, isPublicApiDisabled } = usePermissionConfig()
const { data: apiKeysData, isLoading: isLoadingKeys } = useApiKeys(workflowWorkspaceId || '')
const { data: workspaceSettingsData, isLoading: isLoadingSettings } = useWorkspaceSettings(
workflowWorkspaceId || ''
@@ -214,9 +215,11 @@ export function DeployModal({
endpoint,
exampleCommand: `curl -X POST -H "X-API-Key: ${placeholderKey}" -H "Content-Type: application/json"${inputFormatExample} ${endpoint}`,
needsRedeployment: deploymentInfoData.needsRedeployment,
isPublicApi: isPublicApiDisabled ? false : (deploymentInfoData.isPublicApi ?? false),
}
}, [
deploymentInfoData,
isPublicApiDisabled,
workflowId,
selectedStreamingOutputs,
getInputFormatExample,

View File

@@ -301,6 +301,16 @@ const SCOPE_DESCRIPTIONS: Record<string, string> = {
'user-follow-modify': 'Follow and unfollow artists and users',
'user-read-playback-position': 'View playback position in podcasts',
'ugc-image-upload': 'Upload images to Spotify playlists',
// Attio
'record_permission:read-write': 'Read and write CRM records',
'object_configuration:read-write': 'Read and manage object schemas',
'list_configuration:read-write': 'Read and manage list configurations',
'list_entry:read-write': 'Read and write list entries',
'note:read-write': 'Read and write notes',
'task:read-write': 'Read and write tasks',
'comment:read-write': 'Read and write comments and threads',
'user_management:read': 'View workspace members',
'webhook:read-write': 'Manage webhooks',
}
function getScopeDescription(scope: string): string {

View File

@@ -2,7 +2,6 @@ import { useMemo } from 'react'
import { useShallow } from 'zustand/react/shallow'
import { BlockPathCalculator } from '@/lib/workflows/blocks/block-path-calculator'
import { SYSTEM_REFERENCE_PREFIXES } from '@/lib/workflows/sanitization/references'
- import { isInputDefinitionTrigger } from '@/lib/workflows/triggers/input-definition-triggers'
import { normalizeName } from '@/executor/constants'
import { useWorkflowStore } from '@/stores/workflows/workflow/store'
import type { Loop, Parallel } from '@/stores/workflows/workflow/types'
@@ -27,27 +26,12 @@ export function useAccessibleReferencePrefixes(blockId?: string | null): Set<str
const accessibleIds = new Set<string>(ancestorIds)
accessibleIds.add(blockId)
- const starterBlock = Object.values(blocks).find((block) => isInputDefinitionTrigger(block.type))
- if (starterBlock && ancestorIds.includes(starterBlock.id)) {
- accessibleIds.add(starterBlock.id)
- }
- const loopValues = Object.values(loops as Record<string, Loop>)
- loopValues.forEach((loop) => {
- if (!loop?.nodes) return
- if (loop.nodes.includes(blockId)) {
- accessibleIds.add(loop.id) // Add the loop block itself
- loop.nodes.forEach((nodeId) => accessibleIds.add(nodeId))
- }
+ Object.values(loops as Record<string, Loop>).forEach((loop) => {
+ if (loop?.nodes?.includes(blockId)) accessibleIds.add(loop.id)
})
- const parallelValues = Object.values(parallels as Record<string, Parallel>)
- parallelValues.forEach((parallel) => {
- if (!parallel?.nodes) return
- if (parallel.nodes.includes(blockId)) {
- accessibleIds.add(parallel.id) // Add the parallel block itself
- parallel.nodes.forEach((nodeId) => accessibleIds.add(nodeId))
- }
+ Object.values(parallels as Record<string, Parallel>).forEach((parallel) => {
+ if (parallel?.nodes?.includes(blockId)) accessibleIds.add(parallel.id)
})
const prefixes = new Set<string>()

View File

@@ -40,15 +40,10 @@ async function applyScheduleUpdate(
scheduleId: string,
updates: WorkflowScheduleUpdate,
requestId: string,
- context: string,
- successLog?: string
+ context: string
) {
try {
await db.update(workflowSchedule).set(updates).where(eq(workflowSchedule.id, scheduleId))
- if (successLog) {
- logger.debug(`[${requestId}] ${successLog}`)
- }
} catch (error) {
logger.error(`[${requestId}] ${context}`, error)
}
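After this simplification, `applyScheduleUpdate` is a best-effort write: the DB update is attempted, and a failure is logged with the caller's context string instead of being rethrown, so a bookkeeping failure never masks the original scheduling error. The shape of the helper, sketched with stand-in types:

```typescript
// Attempt an update; log (never throw) on failure, tagged with caller context.
async function bestEffortUpdate(
  update: () => Promise<void>,
  logError: (context: string, error: unknown) => void,
  context: string
): Promise<void> {
  try {
    await update()
  } catch (error) {
    logError(context, error)
  }
}
```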
@@ -132,8 +127,10 @@ async function runWorkflowExecution({
asyncTimeout?: number
}): Promise<RunWorkflowResult> {
try {
- logger.debug(`[${requestId}] Loading deployed workflow ${payload.workflowId}`)
- const deployedData = await loadDeployedWorkflowState(payload.workflowId)
+ const deployedData = await loadDeployedWorkflowState(
+ payload.workflowId,
+ workflowRecord.workspaceId ?? undefined
+ )
const blocks = deployedData.blocks
const { deploymentVersionId } = deployedData
@@ -351,8 +348,7 @@ export async function executeScheduleJob(payload: ScheduleExecutionPayload) {
status: 'disabled',
},
requestId,
- `Failed to disable schedule ${payload.scheduleId} after authentication error`,
- `Disabled schedule ${payload.scheduleId} due to authentication failure (401)`
+ `Failed to disable schedule ${payload.scheduleId} after authentication error`
)
return
}
@@ -370,8 +366,7 @@ export async function executeScheduleJob(payload: ScheduleExecutionPayload) {
status: 'disabled',
},
requestId,
- `Failed to disable schedule ${payload.scheduleId} after authorization error`,
- `Disabled schedule ${payload.scheduleId} due to authorization failure (403)`
+ `Failed to disable schedule ${payload.scheduleId} after authorization error`
)
return
}
@@ -386,8 +381,7 @@ export async function executeScheduleJob(payload: ScheduleExecutionPayload) {
status: 'disabled',
},
requestId,
- `Failed to disable schedule ${payload.scheduleId} after missing workflow`,
- `Disabled schedule ${payload.scheduleId} because the workflow no longer exists`
+ `Failed to disable schedule ${payload.scheduleId} after missing workflow`
)
return
}
@@ -404,8 +398,7 @@ export async function executeScheduleJob(payload: ScheduleExecutionPayload) {
nextRunAt: nextRetryAt,
},
requestId,
- `Error updating schedule ${payload.scheduleId} for rate limit`,
- `Updated next retry time for schedule ${payload.scheduleId} due to rate limit`
+ `Error updating schedule ${payload.scheduleId} for rate limit`
)
return
}
@@ -421,8 +414,7 @@ export async function executeScheduleJob(payload: ScheduleExecutionPayload) {
nextRunAt,
},
requestId,
- `Error updating schedule ${payload.scheduleId} after usage limit check`,
- `Scheduled next run for ${payload.scheduleId} after usage limit`
+ `Error updating schedule ${payload.scheduleId} after usage limit check`
)
}
return
@@ -450,8 +442,7 @@ export async function executeScheduleJob(payload: ScheduleExecutionPayload) {
status: shouldDisable ? 'disabled' : 'active',
},
requestId,
- `Error updating schedule ${payload.scheduleId} after preprocessing failure`,
- `Updated schedule ${payload.scheduleId} after preprocessing failure`
+ `Error updating schedule ${payload.scheduleId} after preprocessing failure`
)
return
}
@@ -503,8 +494,7 @@ export async function executeScheduleJob(payload: ScheduleExecutionPayload) {
lastQueuedAt: null,
},
requestId,
- `Error updating schedule ${payload.scheduleId} after success`,
- `Updated next run time for workflow ${payload.workflowId} to ${nextRunAt.toISOString()}`
+ `Error updating schedule ${payload.scheduleId} after success`
)
return
}
@@ -531,8 +521,7 @@ export async function executeScheduleJob(payload: ScheduleExecutionPayload) {
status: shouldDisable ? 'disabled' : 'active',
},
requestId,
- `Error updating schedule ${payload.scheduleId} after failure`,
- `Updated schedule ${payload.scheduleId} after failure`
+ `Error updating schedule ${payload.scheduleId} after failure`
)
} catch (error: unknown) {
const errorMessage = error instanceof Error ? error.message : String(error)
@@ -550,8 +539,7 @@ export async function executeScheduleJob(payload: ScheduleExecutionPayload) {
nextRunAt: nextRetryAt,
},
requestId,
- `Error updating schedule ${payload.scheduleId} for service overload`,
- `Updated schedule ${payload.scheduleId} retry time due to service overload`
+ `Error updating schedule ${payload.scheduleId} for service overload`
)
return
}
@@ -578,8 +566,7 @@ export async function executeScheduleJob(payload: ScheduleExecutionPayload) {
status: shouldDisable ? 'disabled' : 'active',
},
requestId,
- `Error updating schedule ${payload.scheduleId} after execution error`,
- `Updated schedule ${payload.scheduleId} after execution error`
+ `Error updating schedule ${payload.scheduleId} after execution error`
)
}
} catch (error: unknown) {
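Across the hunks above, every call site drops its second (success) log message argument, keeping only the failure message. A minimal sketch of the implied signature change, assuming a helper that persists schedule updates and logs only on error (the helper name, the injected `persist` callback, and the return value are illustrative assumptions, not from the source):

```typescript
// Hypothetical shape of the schedule-update helper after the change:
// callers pass a single failure message; success logging was removed.
async function updateScheduleSafely(
  persist: () => Promise<void>, // stand-in for the real DB update
  requestId: string,
  errorMessage: string
): Promise<boolean> {
  try {
    await persist()
    return true // success: no log line anymore
  } catch {
    console.error(`[${requestId}] ${errorMessage}`)
    return false
  }
}
```

Each call site in the diff then follows one pattern: an update object, the request ID, and a single failure message.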

View File

@@ -440,7 +440,6 @@ async function executeWebhookJobInternal(
const triggerConfig = getTrigger(resolvedTriggerId)
if (triggerConfig.outputs) {
- logger.debug(`[${requestId}] Processing trigger ${resolvedTriggerId} file outputs`)
const processedInput = await processTriggerFileOutputs(input, triggerConfig.outputs, {
workspaceId,
workflowId: payload.workflowId,
@@ -450,8 +449,6 @@ async function executeWebhookJobInternal(
})
safeAssign(input, processedInput as Record<string, unknown>)
}
- } else {
- logger.debug(`[${requestId}] No valid triggerId found for block ${payload.blockId}`)
}
} catch (error) {
logger.error(`[${requestId}] Error processing trigger file outputs:`, error)
@@ -469,7 +466,6 @@ async function executeWebhookJobInternal(
name: string
type: 'string' | 'number' | 'boolean' | 'object' | 'array' | 'file[]'
}>
- logger.debug(`[${requestId}] Processing generic webhook files from inputFormat`)
const fileFields = inputFormat.filter((field) => field.type === 'file[]')

View File

@@ -7,7 +7,6 @@ import { LoggingSession } from '@/lib/logs/execution/logging-session'
import { buildTraceSpans } from '@/lib/logs/execution/trace-spans/trace-spans'
import { executeWorkflowCore } from '@/lib/workflows/executor/execution-core'
import { PauseResumeManager } from '@/lib/workflows/executor/human-in-the-loop-manager'
- import { getWorkflowById } from '@/lib/workflows/utils'
import { ExecutionSnapshot } from '@/executor/execution/snapshot'
import type { ExecutionMetadata } from '@/executor/execution/types'
import { hasExecutionResult } from '@/executor/utils/errors'
@@ -22,6 +21,7 @@ export type WorkflowExecutionPayload = {
triggerType?: CoreTriggerType
executionId?: string
metadata?: Record<string, any>
callChain?: string[]
}
/**
@@ -78,10 +78,7 @@ export async function executeWorkflowJob(payload: WorkflowExecutionPayload) {
variables: {},
})
- const workflow = await getWorkflowById(workflowId)
- if (!workflow) {
- throw new Error(`Workflow ${workflowId} not found after preprocessing`)
- }
+ const workflow = preprocessResult.workflowRecord!
const metadata: ExecutionMetadata = {
requestId,
@@ -95,6 +92,7 @@ export async function executeWorkflowJob(payload: WorkflowExecutionPayload) {
useDraftState: false,
startTime: new Date().toISOString(),
isClientSession: false,
callChain: payload.callChain,
}
const snapshot = new ExecutionSnapshot(

File diff suppressed because it is too large

View File

@@ -3,6 +3,7 @@ import type { BlockConfig } from '@/blocks/types'
import { AuthMode } from '@/blocks/types'
import { normalizeFileInput } from '@/blocks/utils'
import type { ConfluenceResponse } from '@/tools/confluence/types'
import { getTrigger } from '@/triggers'
export const ConfluenceBlock: BlockConfig<ConfluenceResponse> = {
type: 'confluence',
@@ -838,7 +839,46 @@ export const ConfluenceV2Block: BlockConfig<ConfluenceResponse> = {
],
},
},
// Trigger subBlocks
...getTrigger('confluence_page_created').subBlocks,
...getTrigger('confluence_page_updated').subBlocks,
...getTrigger('confluence_page_removed').subBlocks,
...getTrigger('confluence_page_moved').subBlocks,
...getTrigger('confluence_comment_created').subBlocks,
...getTrigger('confluence_comment_removed').subBlocks,
...getTrigger('confluence_blog_created').subBlocks,
...getTrigger('confluence_blog_updated').subBlocks,
...getTrigger('confluence_blog_removed').subBlocks,
...getTrigger('confluence_attachment_created').subBlocks,
...getTrigger('confluence_attachment_removed').subBlocks,
...getTrigger('confluence_space_created').subBlocks,
...getTrigger('confluence_space_updated').subBlocks,
...getTrigger('confluence_label_added').subBlocks,
...getTrigger('confluence_label_removed').subBlocks,
...getTrigger('confluence_webhook').subBlocks,
],
triggers: {
enabled: true,
available: [
'confluence_page_created',
'confluence_page_updated',
'confluence_page_removed',
'confluence_page_moved',
'confluence_comment_created',
'confluence_comment_removed',
'confluence_blog_created',
'confluence_blog_updated',
'confluence_blog_removed',
'confluence_attachment_created',
'confluence_attachment_removed',
'confluence_space_created',
'confluence_space_updated',
'confluence_label_added',
'confluence_label_removed',
'confluence_webhook',
],
},
tools: {
access: [
// Page Tools

View File

@@ -0,0 +1,569 @@
import { GongIcon } from '@/components/icons'
import { AuthMode, type BlockConfig } from '@/blocks/types'
import type { GongResponse } from '@/tools/gong/types'
export const GongBlock: BlockConfig<GongResponse> = {
type: 'gong',
name: 'Gong',
description: 'Revenue intelligence and conversation analytics',
authMode: AuthMode.ApiKey,
longDescription:
'Integrate Gong into your workflow. Access call recordings, transcripts, user data, activity stats, scorecards, trackers, library content, coaching metrics, and more via the Gong API.',
docsLink: 'https://docs.sim.ai/tools/gong',
category: 'tools',
bgColor: '#8039DF',
icon: GongIcon,
subBlocks: [
{
id: 'operation',
title: 'Operation',
type: 'dropdown',
options: [
{ label: 'List Calls', id: 'list_calls' },
{ label: 'Get Call', id: 'get_call' },
{ label: 'Get Call Transcript', id: 'get_call_transcript' },
{ label: 'Get Extensive Calls', id: 'get_extensive_calls' },
{ label: 'List Users', id: 'list_users' },
{ label: 'Get User', id: 'get_user' },
{ label: 'Aggregate Activity', id: 'aggregate_activity' },
{ label: 'Interaction Stats', id: 'interaction_stats' },
{ label: 'Answered Scorecards', id: 'answered_scorecards' },
{ label: 'List Library Folders', id: 'list_library_folders' },
{ label: 'Get Folder Content', id: 'get_folder_content' },
{ label: 'List Scorecards', id: 'list_scorecards' },
{ label: 'List Trackers', id: 'list_trackers' },
{ label: 'List Workspaces', id: 'list_workspaces' },
{ label: 'List Flows', id: 'list_flows' },
{ label: 'Get Coaching', id: 'get_coaching' },
{ label: 'Lookup Email', id: 'lookup_email' },
{ label: 'Lookup Phone', id: 'lookup_phone' },
],
value: () => 'list_calls',
},
// List Calls inputs
{
id: 'fromDateTime',
title: 'From Date/Time',
type: 'short-input',
placeholder: '2024-01-01T00:00:00Z',
condition: {
field: 'operation',
value: ['list_calls'],
},
required: { field: 'operation', value: 'list_calls' },
wandConfig: {
enabled: true,
prompt: `Generate an ISO 8601 timestamp based on the user's description.
The timestamp should be in the format: YYYY-MM-DDTHH:MM:SSZ (UTC timezone).
Examples:
- "today" -> Today's date at 00:00:00Z
- "beginning of this week" -> Monday of the current week at 00:00:00Z
- "start of month" -> First day of current month at 00:00:00Z
- "last week" -> 7 days ago at 00:00:00Z
Return ONLY the timestamp string in ISO 8601 format - no explanations, no quotes, no extra text.`,
placeholder: 'Describe the start time (e.g., "beginning of last month")...',
generationType: 'timestamp',
},
},
{
id: 'toDateTime',
title: 'To Date/Time',
type: 'short-input',
placeholder: '2024-01-31T23:59:59Z',
condition: {
field: 'operation',
value: ['list_calls'],
},
mode: 'advanced',
wandConfig: {
enabled: true,
prompt: `Generate an ISO 8601 timestamp based on the user's description.
The timestamp should be in the format: YYYY-MM-DDTHH:MM:SSZ (UTC timezone).
Examples:
- "now" -> Current date and time in UTC
- "end of this week" -> Sunday of the current week at 23:59:59Z
- "end of month" -> Last day of current month at 23:59:59Z
- "yesterday" -> Yesterday at 23:59:59Z
Return ONLY the timestamp string in ISO 8601 format - no explanations, no quotes, no extra text.`,
placeholder: 'Describe the end time (e.g., "end of last month")...',
generationType: 'timestamp',
},
},
// Get Call inputs
{
id: 'callId',
title: 'Call ID',
type: 'short-input',
placeholder: 'Enter the Gong call ID',
condition: { field: 'operation', value: 'get_call' },
required: { field: 'operation', value: 'get_call' },
},
// Get Call Transcript / Get Extensive Calls inputs
{
id: 'callIds',
title: 'Call IDs',
type: 'short-input',
placeholder: 'Comma-separated call IDs (optional)',
condition: { field: 'operation', value: ['get_call_transcript', 'get_extensive_calls'] },
},
{
id: 'transcriptFromDateTime',
title: 'From Date/Time',
type: 'short-input',
placeholder: '2024-01-01T00:00:00Z (optional)',
condition: { field: 'operation', value: ['get_call_transcript', 'get_extensive_calls'] },
mode: 'advanced',
wandConfig: {
enabled: true,
prompt: `Generate an ISO 8601 timestamp based on the user's description.
The timestamp should be in the format: YYYY-MM-DDTHH:MM:SSZ (UTC timezone).
Examples:
- "today" -> Today's date at 00:00:00Z
- "beginning of this week" -> Monday of the current week at 00:00:00Z
- "start of month" -> First day of current month at 00:00:00Z
Return ONLY the timestamp string in ISO 8601 format - no explanations, no quotes, no extra text.`,
placeholder: 'Describe the start time (e.g., "start of last week")...',
generationType: 'timestamp',
},
},
{
id: 'transcriptToDateTime',
title: 'To Date/Time',
type: 'short-input',
placeholder: '2024-01-31T23:59:59Z (optional)',
condition: { field: 'operation', value: ['get_call_transcript', 'get_extensive_calls'] },
mode: 'advanced',
wandConfig: {
enabled: true,
prompt: `Generate an ISO 8601 timestamp based on the user's description.
The timestamp should be in the format: YYYY-MM-DDTHH:MM:SSZ (UTC timezone).
Examples:
- "now" -> Current date and time in UTC
- "end of this week" -> Sunday of the current week at 23:59:59Z
- "end of month" -> Last day of current month at 23:59:59Z
Return ONLY the timestamp string in ISO 8601 format - no explanations, no quotes, no extra text.`,
placeholder: 'Describe the end time (e.g., "end of last week")...',
generationType: 'timestamp',
},
},
{
id: 'primaryUserIds',
title: 'Primary User IDs',
type: 'short-input',
placeholder: 'Comma-separated user IDs (optional)',
condition: { field: 'operation', value: 'get_extensive_calls' },
mode: 'advanced',
},
// List Users inputs
{
id: 'includeAvatars',
title: 'Include Avatars',
type: 'dropdown',
options: [
{ label: 'No', id: 'false' },
{ label: 'Yes', id: 'true' },
],
value: () => 'false',
condition: { field: 'operation', value: 'list_users' },
mode: 'advanced',
},
// Get User inputs
{
id: 'userId',
title: 'User ID',
type: 'short-input',
placeholder: 'Enter the Gong user ID',
condition: { field: 'operation', value: 'get_user' },
required: { field: 'operation', value: 'get_user' },
},
// Aggregate Activity & Interaction Stats inputs
{
id: 'statsFromDate',
title: 'From Date',
type: 'short-input',
placeholder: '2024-01-01 (YYYY-MM-DD, inclusive)',
condition: { field: 'operation', value: ['aggregate_activity', 'interaction_stats'] },
required: { field: 'operation', value: ['aggregate_activity', 'interaction_stats'] },
wandConfig: {
enabled: true,
prompt: `Generate a date string in YYYY-MM-DD format based on the user's description.
Examples:
- "today" -> Today's date
- "beginning of this month" -> First day of current month
- "start of last quarter" -> First day of the previous quarter
- "30 days ago" -> Date 30 days in the past
Return ONLY the date string in YYYY-MM-DD format - no explanations, no quotes, no extra text.`,
placeholder: 'Describe the start date (e.g., "beginning of last month")...',
generationType: 'timestamp',
},
},
{
id: 'statsToDate',
title: 'To Date',
type: 'short-input',
placeholder: '2024-01-31 (YYYY-MM-DD, exclusive)',
condition: { field: 'operation', value: ['aggregate_activity', 'interaction_stats'] },
required: { field: 'operation', value: ['aggregate_activity', 'interaction_stats'] },
wandConfig: {
enabled: true,
prompt: `Generate a date string in YYYY-MM-DD format based on the user's description.
The date is exclusive (results up to but not including this date).
Examples:
- "today" -> Today's date
- "end of this month" -> First day of next month
- "end of last quarter" -> First day of current quarter
Return ONLY the date string in YYYY-MM-DD format - no explanations, no quotes, no extra text.`,
placeholder: 'Describe the end date (e.g., "end of last month")...',
generationType: 'timestamp',
},
},
{
id: 'userIds',
title: 'User IDs',
type: 'short-input',
placeholder: 'Comma-separated user IDs (optional)',
condition: { field: 'operation', value: ['aggregate_activity', 'interaction_stats'] },
mode: 'advanced',
},
// Answered Scorecards inputs
{
id: 'callFromDate',
title: 'Call From Date',
type: 'short-input',
placeholder: '2024-01-01 (YYYY-MM-DD, optional)',
condition: { field: 'operation', value: 'answered_scorecards' },
wandConfig: {
enabled: true,
prompt: `Generate a date string in YYYY-MM-DD format based on the user's description.
Examples:
- "today" -> Today's date
- "beginning of this month" -> First day of current month
- "start of last quarter" -> First day of the previous quarter
Return ONLY the date string in YYYY-MM-DD format - no explanations, no quotes, no extra text.`,
placeholder: 'Describe the call start date...',
generationType: 'timestamp',
},
},
{
id: 'callToDate',
title: 'Call To Date',
type: 'short-input',
placeholder: '2024-01-31 (YYYY-MM-DD, optional)',
condition: { field: 'operation', value: 'answered_scorecards' },
wandConfig: {
enabled: true,
prompt: `Generate a date string in YYYY-MM-DD format based on the user's description.
Examples:
- "today" -> Today's date
- "end of this month" -> First day of next month
Return ONLY the date string in YYYY-MM-DD format - no explanations, no quotes, no extra text.`,
placeholder: 'Describe the call end date...',
generationType: 'timestamp',
},
},
{
id: 'reviewFromDate',
title: 'Review From Date',
type: 'short-input',
placeholder: '2024-01-01 (YYYY-MM-DD, optional)',
condition: { field: 'operation', value: 'answered_scorecards' },
mode: 'advanced',
wandConfig: {
enabled: true,
prompt: `Generate a date string in YYYY-MM-DD format based on the user's description.
Return ONLY the date string in YYYY-MM-DD format - no explanations, no quotes, no extra text.`,
placeholder: 'Describe the review start date...',
generationType: 'timestamp',
},
},
{
id: 'reviewToDate',
title: 'Review To Date',
type: 'short-input',
placeholder: '2024-01-31 (YYYY-MM-DD, optional)',
condition: { field: 'operation', value: 'answered_scorecards' },
mode: 'advanced',
wandConfig: {
enabled: true,
prompt: `Generate a date string in YYYY-MM-DD format based on the user's description.
Return ONLY the date string in YYYY-MM-DD format - no explanations, no quotes, no extra text.`,
placeholder: 'Describe the review end date...',
generationType: 'timestamp',
},
},
{
id: 'scorecardIds',
title: 'Scorecard IDs',
type: 'short-input',
placeholder: 'Comma-separated scorecard IDs (optional)',
condition: { field: 'operation', value: 'answered_scorecards' },
mode: 'advanced',
},
{
id: 'reviewedUserIds',
title: 'Reviewed User IDs',
type: 'short-input',
placeholder: 'Comma-separated user IDs (optional)',
condition: { field: 'operation', value: 'answered_scorecards' },
mode: 'advanced',
},
// Get Folder Content inputs
{
id: 'folderId',
title: 'Folder ID',
type: 'short-input',
placeholder: 'Enter the library folder ID',
condition: { field: 'operation', value: 'get_folder_content' },
required: { field: 'operation', value: 'get_folder_content' },
},
// Workspace ID (shared by multiple operations)
{
id: 'workspaceId',
title: 'Workspace ID',
type: 'short-input',
placeholder: 'Gong workspace ID (optional)',
condition: {
field: 'operation',
value: [
'list_calls',
'get_call_transcript',
'get_extensive_calls',
'list_library_folders',
'list_flows',
'list_trackers',
],
},
mode: 'advanced',
},
// List Flows inputs
{
id: 'flowOwnerEmail',
title: 'Flow Owner Email',
type: 'short-input',
placeholder: 'user@example.com',
condition: { field: 'operation', value: 'list_flows' },
required: { field: 'operation', value: 'list_flows' },
},
// Get Coaching inputs
{
id: 'managerId',
title: 'Manager ID',
type: 'short-input',
placeholder: 'Manager user ID',
condition: { field: 'operation', value: 'get_coaching' },
required: { field: 'operation', value: 'get_coaching' },
},
{
id: 'coachingWorkspaceId',
title: 'Workspace ID',
type: 'short-input',
placeholder: 'Gong workspace ID',
condition: { field: 'operation', value: 'get_coaching' },
required: { field: 'operation', value: 'get_coaching' },
},
{
id: 'coachingFromDate',
title: 'From Date',
type: 'short-input',
placeholder: '2024-01-01T00:00:00Z',
condition: { field: 'operation', value: 'get_coaching' },
required: { field: 'operation', value: 'get_coaching' },
wandConfig: {
enabled: true,
prompt: `Generate an ISO 8601 timestamp based on the user's description.
The timestamp should be in the format: YYYY-MM-DDTHH:MM:SSZ (UTC timezone).
Examples:
- "today" -> Today's date at 00:00:00Z
- "beginning of this month" -> First day of current month at 00:00:00Z
- "start of last quarter" -> First day of the previous quarter at 00:00:00Z
Return ONLY the timestamp string in ISO 8601 format - no explanations, no quotes, no extra text.`,
placeholder: 'Describe the start time (e.g., "beginning of last month")...',
generationType: 'timestamp',
},
},
{
id: 'coachingToDate',
title: 'To Date',
type: 'short-input',
placeholder: '2024-01-31T23:59:59Z',
condition: { field: 'operation', value: 'get_coaching' },
required: { field: 'operation', value: 'get_coaching' },
wandConfig: {
enabled: true,
prompt: `Generate an ISO 8601 timestamp based on the user's description.
The timestamp should be in the format: YYYY-MM-DDTHH:MM:SSZ (UTC timezone).
Examples:
- "now" -> Current date and time in UTC
- "end of this month" -> Last day of current month at 23:59:59Z
- "end of last quarter" -> Last day of the previous quarter at 23:59:59Z
Return ONLY the timestamp string in ISO 8601 format - no explanations, no quotes, no extra text.`,
placeholder: 'Describe the end time (e.g., "end of last month")...',
generationType: 'timestamp',
},
},
// Lookup Email inputs
{
id: 'emailAddress',
title: 'Email Address',
type: 'short-input',
placeholder: 'user@example.com',
condition: { field: 'operation', value: 'lookup_email' },
required: { field: 'operation', value: 'lookup_email' },
},
// Lookup Phone inputs
{
id: 'phoneNumber',
title: 'Phone Number',
type: 'short-input',
placeholder: '+1234567890',
condition: { field: 'operation', value: 'lookup_phone' },
required: { field: 'operation', value: 'lookup_phone' },
},
// Pagination cursor (shared)
{
id: 'cursor',
title: 'Cursor',
type: 'short-input',
placeholder: 'Pagination cursor (optional)',
condition: {
field: 'operation',
value: [
'list_calls',
'get_call_transcript',
'get_extensive_calls',
'list_users',
'aggregate_activity',
'interaction_stats',
'answered_scorecards',
'list_flows',
],
},
mode: 'advanced',
},
// API credentials
{
id: 'accessKey',
title: 'Access Key',
type: 'short-input',
placeholder: 'Enter your Gong API access key',
password: true,
required: true,
},
{
id: 'accessKeySecret',
title: 'Access Key Secret',
type: 'short-input',
placeholder: 'Enter your Gong API access key secret',
password: true,
required: true,
},
],
tools: {
access: [
'gong_list_calls',
'gong_get_call',
'gong_get_call_transcript',
'gong_get_extensive_calls',
'gong_list_users',
'gong_get_user',
'gong_aggregate_activity',
'gong_interaction_stats',
'gong_answered_scorecards',
'gong_list_library_folders',
'gong_get_folder_content',
'gong_list_scorecards',
'gong_list_trackers',
'gong_list_workspaces',
'gong_list_flows',
'gong_get_coaching',
'gong_lookup_email',
'gong_lookup_phone',
],
config: {
tool: (params) => `gong_${params.operation}`,
params: (params) => {
const result: Record<string, unknown> = {}
// Map operation-specific subBlock IDs to tool param names
if (params.transcriptFromDateTime) result.fromDateTime = params.transcriptFromDateTime
if (params.transcriptToDateTime) result.toDateTime = params.transcriptToDateTime
if (params.statsFromDate) result.fromDate = params.statsFromDate
if (params.statsToDate) result.toDate = params.statsToDate
if (params.callFromDate) result.callFromDate = params.callFromDate
if (params.callToDate) result.callToDate = params.callToDate
if (params.reviewFromDate) result.reviewFromDate = params.reviewFromDate
if (params.reviewToDate) result.reviewToDate = params.reviewToDate
if (params.coachingWorkspaceId) result.workspaceId = params.coachingWorkspaceId
if (params.coachingFromDate) result.fromDate = params.coachingFromDate
if (params.coachingToDate) result.toDate = params.coachingToDate
return result
},
},
},
inputs: {
operation: { type: 'string', description: 'Operation to perform' },
accessKey: { type: 'string', description: 'Gong API Access Key' },
accessKeySecret: { type: 'string', description: 'Gong API Access Key Secret' },
fromDateTime: {
type: 'string',
description: 'Start date/time in ISO-8601 format (list calls)',
},
toDateTime: { type: 'string', description: 'End date/time in ISO-8601 format (list calls)' },
callId: { type: 'string', description: 'Gong call ID' },
callIds: { type: 'string', description: 'Comma-separated call IDs' },
userId: { type: 'string', description: 'Gong user ID' },
userIds: { type: 'string', description: 'Comma-separated user IDs' },
statsFromDate: { type: 'string', description: 'Start date in YYYY-MM-DD format (stats)' },
statsToDate: { type: 'string', description: 'End date in YYYY-MM-DD format (stats)' },
callFromDate: { type: 'string', description: 'Call start date in YYYY-MM-DD (scorecards)' },
callToDate: { type: 'string', description: 'Call end date in YYYY-MM-DD (scorecards)' },
reviewFromDate: { type: 'string', description: 'Review start date in YYYY-MM-DD (scorecards)' },
reviewToDate: { type: 'string', description: 'Review end date in YYYY-MM-DD (scorecards)' },
scorecardIds: { type: 'string', description: 'Comma-separated scorecard IDs' },
reviewedUserIds: { type: 'string', description: 'Comma-separated reviewed user IDs' },
primaryUserIds: {
type: 'string',
description: 'Comma-separated primary user IDs (extensive calls)',
},
folderId: { type: 'string', description: 'Library folder ID' },
workspaceId: { type: 'string', description: 'Gong workspace ID' },
managerId: { type: 'string', description: 'Manager user ID for coaching' },
flowOwnerEmail: {
type: 'string',
description: 'Email of a Gong user to retrieve personal and company flows',
},
emailAddress: { type: 'string', description: 'Email address to look up' },
phoneNumber: { type: 'string', description: 'Phone number to look up' },
cursor: { type: 'string', description: 'Pagination cursor' },
},
outputs: {
response: {
type: 'json',
description: 'Gong API response data',
},
},
}

View File

@@ -11,7 +11,7 @@ export const HexBlock: BlockConfig<HexResponse> = {
'Integrate Hex into your workflow. Run projects, check run status, manage collections and groups, list users, and view data connections. Requires a Hex API token.',
docsLink: 'https://docs.sim.ai/tools/hex',
category: 'tools',
- bgColor: '#F5E6FF',
+ bgColor: '#14151A',
icon: HexIcon,
authMode: AuthMode.ApiKey,

View File

@@ -10,6 +10,7 @@ import { ApifyBlock } from '@/blocks/blocks/apify'
import { ApolloBlock } from '@/blocks/blocks/apollo'
import { ArxivBlock } from '@/blocks/blocks/arxiv'
import { AsanaBlock } from '@/blocks/blocks/asana'
import { AttioBlock } from '@/blocks/blocks/attio'
import { BrowserUseBlock } from '@/blocks/blocks/browser_use'
import { CalComBlock } from '@/blocks/blocks/calcom'
import { CalendlyBlock } from '@/blocks/blocks/calendly'
@@ -40,6 +41,7 @@ import { GenericWebhookBlock } from '@/blocks/blocks/generic_webhook'
import { GitHubBlock, GitHubV2Block } from '@/blocks/blocks/github'
import { GitLabBlock } from '@/blocks/blocks/gitlab'
import { GmailBlock, GmailV2Block } from '@/blocks/blocks/gmail'
import { GongBlock } from '@/blocks/blocks/gong'
import { GoogleSearchBlock } from '@/blocks/blocks/google'
import { GoogleBooksBlock } from '@/blocks/blocks/google_books'
import { GoogleCalendarBlock, GoogleCalendarV2Block } from '@/blocks/blocks/google_calendar'
@@ -186,6 +188,7 @@ export const registry: Record<string, BlockConfig> = {
apollo: ApolloBlock,
arxiv: ArxivBlock,
asana: AsanaBlock,
attio: AttioBlock,
browser_use: BrowserUseBlock,
calcom: CalComBlock,
calendly: CalendlyBlock,
@@ -231,6 +234,7 @@ export const registry: Record<string, BlockConfig> = {
google_forms: GoogleFormsBlock,
google_groups: GoogleGroupsBlock,
google_maps: GoogleMapsBlock,
gong: GongBlock,
google_search: GoogleSearchBlock,
google_sheets: GoogleSheetsBlock,
google_sheets_v2: GoogleSheetsV2Block,

View File

@@ -710,6 +710,17 @@ export function NotionIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function GongIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 55.4 60' fill='none' xmlns='http://www.w3.org/2000/svg'>
<path
fill='currentColor'
d='M54.1,25.7H37.8c-0.9,0-1.6,1-1.3,1.8l3.9,10.1c0.2,0.4-0.2,0.9-0.7,0.9l-5-0.3c-0.2,0-0.4,0.1-0.6,0.3L30.3,44c-0.2,0.3-0.6,0.4-1,0.2l-5.8-3.9c-0.2-0.2-0.5-0.2-0.8,0l-8,5.4c-0.5,0.4-1.2-0.1-1-0.7L16,37c0.1-0.3-0.1-0.7-0.4-0.8l-4.2-1.7c-0.4-0.2-0.6-0.7-0.3-1l3.7-4.6c0.2-0.2,0.2-0.6,0-0.8l-3.1-4.5c-0.3-0.4,0-1,0.5-1l4.9-0.4c0.4,0,0.6-0.3,0.6-0.7l-0.4-6.8c0-0.5,0.5-0.8,0.9-0.7l6,2.5c0.3,0.1,0.6,0,0.8-0.2l4.2-4.6c0.3-0.4,0.9-0.3,1.1,0.2l2.5,6.4c0.3,0.8,1.3,1.1,2,0.6l9.8-7.3c1.1-0.8,0.4-2.6-1-2.4L37.3,10c-0.3,0-0.6-0.1-0.7-0.4l-3.4-8.7c-0.4-0.9-1.5-1.1-2.2-0.4l-7.4,8c-0.2,0.2-0.5,0.3-0.8,0.2l-9.7-4.1c-0.9-0.4-1.8,0.2-1.9,1.2l-0.4,10c0,0.4-0.3,0.6-0.6,0.6l-8.9,0.6c-1,0.1-1.6,1.2-1,2.1l5.9,8.7c0.2,0.2,0.2,0.6,0,0.8l-6,6.9C-0.3,36,0,37.1,0.8,37.4l6.9,3c0.3,0.1,0.5,0.5,0.4,0.8L3.7,58.3c-0.3,1.2,1.1,2.1,2.1,1.4l16.5-11.8c0.2-0.2,0.5-0.2,0.8,0l7.5,5.3c0.6,0.4,1.5,0.3,1.9-0.4l4.7-7.2c0.1-0.2,0.4-0.3,0.6-0.3l11.2,1.4c0.9,0.1,1.8-0.6,1.5-1.5l-4.7-12.1c-0.1-0.3,0-0.7,0.4-0.9l8.5-4C55.9,27.6,55.5,25.7,54.1,25.7z'
/>
</svg>
)
}
export function GmailIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg
@@ -3541,6 +3552,15 @@ export function TrelloIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function AttioIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 60.9 50' fill='currentColor'>
<path d='M60.3,34.8l-5.1-8.1c0,0,0,0,0,0L54.7,26c-0.8-1.2-2.1-1.9-3.5-1.9L43,24L42.5,25l-9.8,15.7l-0.5,0.9l4.1,6.6c0.8,1.2,2.1,1.9,3.5,1.9h11.5c1.4,0,2.8-0.7,3.5-1.9l0.4-0.6c0,0,0,0,0,0l5.1-8.2C61.1,37.9,61.1,36.2,60.3,34.8L60.3,34.8z M58.7,38.3l-5.1,8.2c0,0,0,0.1-0.1,0.1c-0.2,0.2-0.4,0.2-0.5,0.2c-0.1,0-0.4,0-0.6-0.3l-5.1-8.2c-0.1-0.1-0.1-0.2-0.2-0.3c0-0.1-0.1-0.2-0.1-0.3c-0.1-0.4-0.1-0.8,0-1.3c0.1-0.2,0.1-0.4,0.3-0.6l5.1-8.1c0,0,0,0,0,0c0.1-0.2,0.3-0.3,0.4-0.3c0.1,0,0.1,0,0.1,0c0,0,0,0,0.1,0c0.1,0,0.4,0,0.6,0.3l5.1,8.1C59.2,36.6,59.2,37.5,58.7,38.3L58.7,38.3z' />
<path d='M45.2,15.1c0.8-1.3,0.8-3.1,0-4.4l-5.1-8.1l-0.4-0.7C38.9,0.7,37.6,0,36.2,0H24.7c-1.4,0-2.7,0.7-3.5,1.9L0.6,34.9C0.2,35.5,0,36.3,0,37c0,0.8,0.2,1.5,0.6,2.2l5.5,8.8C6.9,49.3,8.2,50,9.7,50h11.5c1.4,0,2.8-0.7,3.5-1.9l0.4-0.7c0,0,0,0,0,0c0,0,0,0,0,0l4.1-6.6l12.1-19.4L45.2,15.1L45.2,15.1z M44,13c0,0.4-0.1,0.8-0.4,1.2L23.5,46.4c-0.2,0.3-0.5,0.3-0.6,0.3c-0.1,0-0.4,0-0.6-0.3l-5.1-8.2c-0.5-0.7-0.5-1.7,0-2.4L37.4,3.6c0.2-0.3,0.5-0.3,0.6-0.3c0.1,0,0.4,0,0.6,0.3l5.1,8.1C43.9,12.1,44,12.5,44,13z' />
</svg>
)
}
export function AsanaIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 24 24' fill='none'>
@@ -5824,7 +5844,7 @@ export function HexIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1450.3 600'>
<path
- fill='#5F509D'
+ fill='#EDB9B8'
fillRule='evenodd'
d='m250.11,0v199.49h-50V0H0v600h200.11v-300.69h50v300.69h200.18V0h-200.18Zm249.9,0v600h450.29v-250.23h-200.2v149h-50v-199.46h250.2V0h-450.29Zm200.09,199.49v-99.49h50v99.49h-50Zm550.02,0V0h200.18v150l-100,100.09,100,100.09v249.82h-200.18v-300.69h-50v300.69h-200.11v-249.82l100.11-100.09-100.11-100.09V0h200.11v199.49h50Z'
/>

View File

@@ -391,6 +391,12 @@ export function AccessControl() {
category: 'Collaboration',
configKey: 'disableInvitations' as const,
},
{
id: 'disable-public-api',
label: 'Public API',
category: 'Features',
configKey: 'disablePublicApi' as const,
},
],
[]
)
@@ -966,6 +972,7 @@ export function AccessControl() {
!editingConfig?.disableSkills &&
!editingConfig?.hideTraceSpans &&
!editingConfig?.disableInvitations &&
!editingConfig?.disablePublicApi &&
!editingConfig?.hideDeployApi &&
!editingConfig?.hideDeployMcp &&
!editingConfig?.hideDeployA2a &&
@@ -987,6 +994,7 @@ export function AccessControl() {
disableSkills: allVisible,
hideTraceSpans: allVisible,
disableInvitations: allVisible,
disablePublicApi: allVisible,
hideDeployApi: allVisible,
hideDeployMcp: allVisible,
hideDeployA2a: allVisible,
@@ -1009,6 +1017,7 @@ export function AccessControl() {
!editingConfig?.disableSkills &&
!editingConfig?.hideTraceSpans &&
!editingConfig?.disableInvitations &&
!editingConfig?.disablePublicApi &&
!editingConfig?.hideDeployApi &&
!editingConfig?.hideDeployMcp &&
!editingConfig?.hideDeployA2a &&

View File

@@ -66,6 +66,13 @@ export class InvitationsNotAllowedError extends Error {
}
}
export class PublicApiNotAllowedError extends Error {
constructor() {
super('Public API access is not allowed based on your permission group settings')
this.name = 'PublicApiNotAllowedError'
}
}
/**
* Merges the env allowlist into a permission config.
* If `config` is null and no env allowlist is set, returns null.
@@ -296,3 +303,30 @@ export async function validateInvitationsAllowed(userId: string | undefined): Pr
throw new InvitationsNotAllowedError()
}
}
/**
* Validates if the user is allowed to enable public API access.
* Also checks the global feature flag.
*/
export async function validatePublicApiAllowed(userId: string | undefined): Promise<void> {
const { isPublicApiDisabled } = await import('@/lib/core/config/feature-flags')
if (isPublicApiDisabled) {
logger.warn('Public API blocked by feature flag')
throw new PublicApiNotAllowedError()
}
if (!userId) {
return
}
const config = await getUserPermissionConfig(userId)
if (!config) {
return
}
if (config.disablePublicApi) {
logger.warn('Public API blocked by permission group', { userId })
throw new PublicApiNotAllowedError()
}
}
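The two-layer guard above (global feature flag first, then the per-user permission config) can be sketched as a self-contained function. Names mirror the module; the config lookup is replaced by a plain parameter, so this is an illustration of the control flow, not the real implementation.

```typescript
// Sketch of the guard layering in validatePublicApiAllowed: the global flag
// short-circuits before any user lookup; a missing config means "allowed".
class PublicApiNotAllowedError extends Error {
  constructor() {
    super('Public API access is not allowed based on your permission group settings')
    this.name = 'PublicApiNotAllowedError'
  }
}

type PermissionConfig = { disablePublicApi?: boolean } | null

function checkPublicApiAllowed(globallyDisabled: boolean, config: PermissionConfig): void {
  if (globallyDisabled) throw new PublicApiNotAllowedError() // flag wins regardless of user
  if (config?.disablePublicApi) throw new PublicApiNotAllowedError()
  // no config, or flag off and not disabled → allowed
}
```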

View File

@@ -330,6 +330,7 @@ export class DAGExecutor {
base64MaxBytes: this.contextExtensions.base64MaxBytes,
runFromBlockContext: overrides?.runFromBlockContext,
stopAfterBlockId: this.contextExtensions.stopAfterBlockId,
callChain: this.contextExtensions.callChain,
}
if (this.contextExtensions.resumeFromSnapshot) {

View File

@@ -27,6 +27,7 @@ export interface ExecutionMetadata {
parallels?: Record<string, any>
deploymentVersionId?: string
}
callChain?: string[]
}
export interface SerializableExecutionState {
@@ -167,6 +168,12 @@ export interface ContextExtensions {
* Stop execution after this block completes. Used for "run until block" feature.
*/
stopAfterBlockId?: string
/**
* Ordered list of workflow IDs in the current call chain, used for cycle detection.
* Each hop appends the current workflow ID before making outgoing requests.
*/
callChain?: string[]
}
export interface WorkflowInput {

View File

@@ -75,6 +75,7 @@ export class ApiBlockHandler implements BlockHandler {
userId: ctx.userId,
isDeployedContext: ctx.isDeployedContext,
enforceCredentialAccess: ctx.enforceCredentialAccess,
callChain: ctx.callChain,
},
},
false,

View File

@@ -1,4 +1,5 @@
import { createLogger } from '@sim/logger'
import { buildNextCallChain, validateCallChain } from '@/lib/execution/call-chain'
import { snapshotService } from '@/lib/logs/execution/snapshot/service'
import { buildTraceSpans } from '@/lib/logs/execution/trace-spans/trace-spans'
import type { TraceSpan } from '@/lib/logs/types'
@@ -167,6 +168,15 @@ export class WorkflowBlockHandler implements BlockHandler {
ctx.onChildWorkflowInstanceReady?.(effectiveBlockId, instanceId, iterationContext)
}
const childCallChain = buildNextCallChain(ctx.callChain || [], workflowId)
const depthError = validateCallChain(childCallChain)
if (depthError) {
throw new ChildWorkflowError({
message: depthError,
childWorkflowName,
})
}
const subExecutor = new Executor({
workflow: childWorkflow.serializedState,
workflowInput: childWorkflowInput,
@@ -180,6 +190,7 @@ export class WorkflowBlockHandler implements BlockHandler {
userId: ctx.userId,
executionId: ctx.executionId,
abortSignal: ctx.abortSignal,
callChain: childCallChain,
...(shouldPropagateCallbacks && {
onBlockStart: ctx.onBlockStart,
onBlockComplete: ctx.onBlockComplete,

View File

@@ -301,6 +301,12 @@ export interface ExecutionContext {
*/
stopAfterBlockId?: string
/**
* Ordered list of workflow IDs in the current call chain, used for cycle detection.
* Passed to outgoing HTTP requests via the X-Sim-Via header.
*/
callChain?: string[]
/**
* Counter for generating monotonically increasing execution order values.
* Starts at 0 and increments for each block. Use getNextExecutionOrder() to access.

View File

@@ -32,6 +32,7 @@ export interface WorkflowDeploymentInfo {
deployedAt: string | null
apiKey: string | null
needsRedeployment: boolean
isPublicApi: boolean
}
/**
@@ -50,6 +51,7 @@ async function fetchDeploymentInfo(workflowId: string): Promise<WorkflowDeployme
deployedAt: data.deployedAt ?? null,
apiKey: data.apiKey ?? null,
needsRedeployment: data.needsRedeployment ?? false,
isPublicApi: data.isPublicApi ?? false,
}
}
@@ -614,3 +616,49 @@ export function useActivateDeploymentVersion() {
},
})
}
/**
* Variables for updating public API access
*/
interface UpdatePublicApiVariables {
workflowId: string
isPublicApi: boolean
}
/**
* Mutation hook for toggling a workflow's public API access.
* Invalidates deployment info query on success.
*/
export function useUpdatePublicApi() {
const queryClient = useQueryClient()
return useMutation({
mutationFn: async ({ workflowId, isPublicApi }: UpdatePublicApiVariables) => {
const response = await fetch(`/api/workflows/${workflowId}/deploy`, {
method: 'PATCH',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ isPublicApi }),
})
if (!response.ok) {
const errorData = await response.json()
throw new Error(errorData.error || 'Failed to update public API setting')
}
return response.json()
},
onSuccess: (_, variables) => {
logger.info('Public API setting updated', {
workflowId: variables.workflowId,
isPublicApi: variables.isPublicApi,
})
queryClient.invalidateQueries({
queryKey: deploymentKeys.info(variables.workflowId),
})
},
onError: (error) => {
logger.error('Failed to update public API setting', { error })
},
})
}

View File

@@ -20,6 +20,7 @@ export interface PermissionConfigResult {
isBlockAllowed: (blockType: string) => boolean
isProviderAllowed: (providerId: string) => boolean
isInvitationsDisabled: boolean
isPublicApiDisabled: boolean
}
interface AllowedIntegrationsResponse {
@@ -116,6 +117,11 @@ export function usePermissionConfig(): PermissionConfigResult {
return featureFlagDisabled || config.disableInvitations
}, [config.disableInvitations])
const isPublicApiDisabled = useMemo(() => {
const featureFlagDisabled = isTruthy(getEnv('NEXT_PUBLIC_DISABLE_PUBLIC_API'))
return featureFlagDisabled || config.disablePublicApi
}, [config.disablePublicApi])
const mergedConfig = useMemo(
() => ({ ...config, allowedIntegrations: mergedAllowedIntegrations }),
[config, mergedAllowedIntegrations]
@@ -131,6 +137,7 @@ export function usePermissionConfig(): PermissionConfigResult {
isBlockAllowed,
isProviderAllowed,
isInvitationsDisabled,
isPublicApiDisabled,
}),
[
mergedConfig,
@@ -141,6 +148,7 @@ export function usePermissionConfig(): PermissionConfigResult {
isBlockAllowed,
isProviderAllowed,
isInvitationsDisabled,
isPublicApiDisabled,
]
)
}

View File

@@ -215,7 +215,7 @@ async function insertAuditLog(params: AuditLogParams): Promise<void> {
actorName = row?.name ?? undefined
actorEmail = row?.email ?? undefined
} catch (error) {
logger.debug('Failed to resolve actor info', { error, actorId: params.actorId })
logger.warn('Failed to resolve actor info', { error, actorId: params.actorId })
}
}

View File

@@ -102,3 +102,7 @@ export function createAnonymousSession(): AnonymousSession {
},
}
}
export function createAnonymousGetSessionResponse(): { data: AnonymousSession } {
return { data: createAnonymousSession() }
}

View File

@@ -503,6 +503,7 @@ export const auth = betterAuth({
'zoom',
'wordpress',
'linear',
'attio',
'shopify',
'trello',
'calcom',
@@ -2237,6 +2238,69 @@ export const auth = betterAuth({
},
},
{
providerId: 'attio',
clientId: env.ATTIO_CLIENT_ID as string,
clientSecret: env.ATTIO_CLIENT_SECRET as string,
authorizationUrl: 'https://app.attio.com/authorize',
tokenUrl: 'https://app.attio.com/oauth/token',
scopes: [
'record_permission:read-write',
'object_configuration:read-write',
'list_configuration:read-write',
'list_entry:read-write',
'note:read-write',
'task:read-write',
'comment:read-write',
'user_management:read',
'webhook:read-write',
],
responseType: 'code',
redirectURI: `${getBaseUrl()}/api/auth/oauth2/callback/attio`,
getUserInfo: async (tokens) => {
try {
const response = await fetch('https://api.attio.com/v2/workspace_members', {
headers: {
Authorization: `Bearer ${tokens.accessToken}`,
},
})
if (!response.ok) {
const errorText = await response.text()
logger.error('Attio API error:', {
status: response.status,
statusText: response.statusText,
body: errorText,
})
throw new Error(`Attio API error: ${response.status} ${response.statusText}`)
}
const { data } = await response.json()
if (!data || data.length === 0) {
throw new Error('No workspace members found in Attio response')
}
const member = data[0]
return {
id: `${member.id.workspace_member_id}-${crypto.randomUUID()}`,
email: member.email_address,
name:
`${member.first_name ?? ''} ${member.last_name ?? ''}`.trim() ||
member.email_address,
emailVerified: true,
createdAt: new Date(),
updatedAt: new Date(),
image: member.avatar_url || undefined,
}
} catch (error) {
logger.error('Error in Attio getUserInfo:', error)
throw error
}
},
},
{
providerId: 'dropbox',
clientId: env.DROPBOX_CLIENT_ID as string,

View File

@@ -0,0 +1,31 @@
/**
* @vitest-environment node
*/
import { describe, expect, it } from 'vitest'
import { extractSessionDataFromAuthClientResult } from '@/lib/auth/session-response'
describe('extractSessionDataFromAuthClientResult', () => {
it('returns null for non-objects', () => {
expect(extractSessionDataFromAuthClientResult(null)).toBeNull()
expect(extractSessionDataFromAuthClientResult(undefined)).toBeNull()
expect(extractSessionDataFromAuthClientResult('nope')).toBeNull()
expect(extractSessionDataFromAuthClientResult(123)).toBeNull()
})
it('prefers .data when present', () => {
expect(extractSessionDataFromAuthClientResult({ data: null })).toBeNull()
const session = { user: { id: 'u1' }, session: { id: 's1' } }
expect(extractSessionDataFromAuthClientResult({ data: session })).toEqual(session)
})
it('falls back to raw session payload shape', () => {
const raw = { user: { id: 'u1' }, session: { id: 's1' } }
expect(extractSessionDataFromAuthClientResult(raw)).toEqual(raw)
})
it('returns null for unknown object shapes', () => {
expect(extractSessionDataFromAuthClientResult({})).toBeNull()
expect(extractSessionDataFromAuthClientResult({ ok: true })).toBeNull()
})
})

View File

@@ -0,0 +1,19 @@
export function extractSessionDataFromAuthClientResult(result: unknown): unknown | null {
if (!result || typeof result !== 'object') {
return null
}
const record = result as Record<string, unknown>
// Expected shape from better-auth client: { data: <session> }
if ('data' in record) {
return (record as { data?: unknown }).data ?? null
}
// Fallback for raw session payloads: { user, session }
if ('user' in record) {
return record
}
return null
}

View File

@@ -2,6 +2,7 @@ import { db } from '@sim/db'
import { member, organization, userStats } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, inArray } from 'drizzle-orm'
import type { HighestPrioritySubscription } from '@/lib/billing/core/plan'
import { getUserUsageLimit } from '@/lib/billing/core/usage'
import { isBillingEnabled } from '@/lib/core/config/feature-flags'
@@ -21,7 +22,10 @@ interface UsageData {
* Checks a user's cost usage against their subscription plan limit
* and returns usage information including whether they're approaching the limit
*/
export async function checkUsageStatus(userId: string): Promise<UsageData> {
export async function checkUsageStatus(
userId: string,
preloadedSubscription?: HighestPrioritySubscription
): Promise<UsageData> {
try {
// If billing is disabled, always return permissive limits
if (!isBillingEnabled) {
@@ -42,7 +46,7 @@ export async function checkUsageStatus(userId: string): Promise<UsageData> {
}
// Get usage limit from user_stats (per-user cap)
const limit = await getUserUsageLimit(userId)
const limit = await getUserUsageLimit(userId, preloadedSubscription)
logger.info('Using stored usage limit', { userId, limit })
// Get actual usage from the database
@@ -228,7 +232,10 @@ export async function checkAndNotifyUsage(userId: string): Promise<void> {
* @param userId The ID of the user to check
* @returns An object containing the exceeded status and usage details
*/
export async function checkServerSideUsageLimits(userId: string): Promise<{
export async function checkServerSideUsageLimits(
userId: string,
preloadedSubscription?: HighestPrioritySubscription
): Promise<{
isExceeded: boolean
currentUsage: number
limit: number
@@ -314,7 +321,7 @@ export async function checkServerSideUsageLimits(userId: string): Promise<{
}
}
const usageData = await checkUsageStatus(userId)
const usageData = await checkUsageStatus(userId, preloadedSubscription)
return {
isExceeded: usageData.isExceeded,

View File

@@ -6,6 +6,8 @@ import { checkEnterprisePlan, checkProPlan, checkTeamPlan } from '@/lib/billing/
const logger = createLogger('PlanLookup')
export type HighestPrioritySubscription = Awaited<ReturnType<typeof getHighestPrioritySubscription>>
/**
* Get the highest priority active subscription for a user
* Priority: Enterprise > Team > Pro > Free

View File

@@ -7,7 +7,10 @@ import {
renderFreeTierUpgradeEmail,
renderUsageThresholdEmail,
} from '@/components/emails'
import { getHighestPrioritySubscription } from '@/lib/billing/core/plan'
import {
getHighestPrioritySubscription,
type HighestPrioritySubscription,
} from '@/lib/billing/core/plan'
import {
canEditUsageLimit,
getFreeTierLimit,
@@ -352,8 +355,14 @@ export async function updateUserUsageLimit(
* Free/Pro: Individual user limit from userStats
* Team/Enterprise: Organization limit
*/
export async function getUserUsageLimit(userId: string): Promise<number> {
const subscription = await getHighestPrioritySubscription(userId)
export async function getUserUsageLimit(
userId: string,
preloadedSubscription?: HighestPrioritySubscription
): Promise<number> {
const subscription =
preloadedSubscription !== undefined
? preloadedSubscription
: await getHighestPrioritySubscription(userId)
if (!subscription || subscription.plan === 'free' || subscription.plan === 'pro') {
// Free/Pro: Use individual limit from userStats
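The `preloadedSubscription !== undefined` check above is deliberate: `undefined` means "not preloaded, go fetch", while `null` is a valid preloaded result meaning "no subscription" and must not trigger a refetch. A minimal standalone sketch of that pattern (with a stand-in for `getHighestPrioritySubscription`):

```typescript
// "Preloaded value or fetch" pattern: distinguish undefined (absent) from
// null (preloaded "no subscription") via a strict !== undefined check.
type Subscription = { plan: string } | null

async function fetchSubscription(_userId: string): Promise<Subscription> {
  return { plan: 'pro' } // placeholder for the real DB lookup
}

async function getPlan(userId: string, preloaded?: Subscription): Promise<string> {
  const subscription = preloaded !== undefined ? preloaded : await fetchSubscription(userId)
  return subscription?.plan ?? 'free'
}
```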

View File

@@ -154,7 +154,6 @@ export async function restoreUserProSubscription(userId: string): Promise<Restor
.where(eq(subscriptionTable.id, personalPro.id))
result.restored = true
logger.info('Restored personal Pro subscription', {
userId,
subscriptionId: personalPro.id,

View File

@@ -14,7 +14,6 @@ import {
loadDeployedWorkflowState,
loadWorkflowFromNormalizedTables,
} from '@/lib/workflows/persistence/utils'
import { isInputDefinitionTrigger } from '@/lib/workflows/triggers/input-definition-triggers'
import { hasTriggerCapability } from '@/lib/workflows/triggers/trigger-utils'
import { getBlock } from '@/blocks/registry'
import { normalizeName } from '@/executor/constants'
@@ -457,20 +456,9 @@ export async function executeGetBlockUpstreamReferences(
const accessibleIds = new Set<string>(ancestorIds)
accessibleIds.add(blockId)
const starterBlock = Object.values(blocks).find((b) => isInputDefinitionTrigger(b.type))
if (starterBlock && ancestorIds.includes(starterBlock.id)) {
accessibleIds.add(starterBlock.id)
}
containingLoopIds.forEach((loopId) => accessibleIds.add(loopId))
containingLoopIds.forEach((loopId) => {
accessibleIds.add(loopId)
loops[loopId]?.nodes?.forEach((nodeId: string) => accessibleIds.add(nodeId))
})
containingParallelIds.forEach((parallelId) => {
accessibleIds.add(parallelId)
parallels[parallelId]?.nodes?.forEach((nodeId: string) => accessibleIds.add(nodeId))
})
containingParallelIds.forEach((parallelId) => accessibleIds.add(parallelId))
const accessibleBlocks: AccessibleBlockEntry[] = []

View File

@@ -281,6 +281,8 @@ export const env = createEnv({
SPOTIFY_CLIENT_ID: z.string().optional(), // Spotify OAuth client ID
SPOTIFY_CLIENT_SECRET: z.string().optional(), // Spotify OAuth client secret
CALCOM_CLIENT_ID: z.string().optional(), // Cal.com OAuth client ID
ATTIO_CLIENT_ID: z.string().optional(), // Attio OAuth client ID
ATTIO_CLIENT_SECRET: z.string().optional(), // Attio OAuth client secret
// E2B Remote Code Execution
E2B_ENABLED: z.string().optional(), // Enable E2B remote code execution
@@ -297,6 +299,7 @@ export const env = createEnv({
// Invitations & Public API - for self-hosted deployments
DISABLE_INVITATIONS: z.boolean().optional(), // Disable workspace invitations globally (for self-hosted deployments)
DISABLE_PUBLIC_API: z.boolean().optional(), // Disable public API access globally (for self-hosted deployments)
// Development Tools
REACT_GRAB_ENABLED: z.boolean().optional(), // Enable React Grab for UI element debugging in Cursor/AI agents (dev only)
@@ -382,6 +385,7 @@ export const env = createEnv({
NEXT_PUBLIC_ACCESS_CONTROL_ENABLED: z.boolean().optional(), // Enable access control (permission groups) on self-hosted
NEXT_PUBLIC_ORGANIZATIONS_ENABLED: z.boolean().optional(), // Enable organizations on self-hosted (bypasses plan requirements)
NEXT_PUBLIC_DISABLE_INVITATIONS: z.boolean().optional(), // Disable workspace invitations globally (for self-hosted deployments)
NEXT_PUBLIC_DISABLE_PUBLIC_API: z.boolean().optional(), // Disable public API access UI toggle globally
NEXT_PUBLIC_EMAIL_PASSWORD_SIGNUP_ENABLED: z.boolean().optional().default(true), // Control visibility of email/password login forms
},
@@ -413,6 +417,7 @@ export const env = createEnv({
NEXT_PUBLIC_ACCESS_CONTROL_ENABLED: process.env.NEXT_PUBLIC_ACCESS_CONTROL_ENABLED,
NEXT_PUBLIC_ORGANIZATIONS_ENABLED: process.env.NEXT_PUBLIC_ORGANIZATIONS_ENABLED,
NEXT_PUBLIC_DISABLE_INVITATIONS: process.env.NEXT_PUBLIC_DISABLE_INVITATIONS,
NEXT_PUBLIC_DISABLE_PUBLIC_API: process.env.NEXT_PUBLIC_DISABLE_PUBLIC_API,
NEXT_PUBLIC_EMAIL_PASSWORD_SIGNUP_ENABLED: process.env.NEXT_PUBLIC_EMAIL_PASSWORD_SIGNUP_ENABLED,
NEXT_PUBLIC_E2B_ENABLED: process.env.NEXT_PUBLIC_E2B_ENABLED,
NEXT_PUBLIC_COPILOT_TRAINING_ENABLED: process.env.NEXT_PUBLIC_COPILOT_TRAINING_ENABLED,

View File

@@ -111,6 +111,12 @@ export const isE2bEnabled = isTruthy(env.E2B_ENABLED)
*/
export const isInvitationsDisabled = isTruthy(env.DISABLE_INVITATIONS)
/**
* Is public API access disabled globally
* When true, the public API toggle is hidden and public API access is blocked
*/
export const isPublicApiDisabled = isTruthy(env.DISABLE_PUBLIC_API)
/**
* Is React Grab enabled for UI element debugging
* When true and in development mode, enables React Grab for copying UI element context to clipboard

View File

@@ -29,9 +29,8 @@ describe('redis config', () => {
getRedisClient()
mockRedisInstance.ping.mockRejectedValue(new Error('ETIMEDOUT'))
await vi.advanceTimersByTimeAsync(30_000)
await vi.advanceTimersByTimeAsync(30_000)
await vi.advanceTimersByTimeAsync(30_000)
await vi.advanceTimersByTimeAsync(15_000)
await vi.advanceTimersByTimeAsync(15_000)
expect(listener).toHaveBeenCalledTimes(1)
})
@@ -44,9 +43,9 @@ describe('redis config', () => {
getRedisClient()
mockRedisInstance.ping.mockResolvedValue('PONG')
await vi.advanceTimersByTimeAsync(30_000)
await vi.advanceTimersByTimeAsync(30_000)
await vi.advanceTimersByTimeAsync(30_000)
await vi.advanceTimersByTimeAsync(15_000)
await vi.advanceTimersByTimeAsync(15_000)
await vi.advanceTimersByTimeAsync(15_000)
expect(listener).not.toHaveBeenCalled()
})
@@ -58,34 +57,29 @@ describe('redis config', () => {
getRedisClient()
// 2 failures then a success — should reset counter
// 1 failure then a success — should reset counter
mockRedisInstance.ping.mockRejectedValueOnce(new Error('timeout'))
await vi.advanceTimersByTimeAsync(30_000)
mockRedisInstance.ping.mockRejectedValueOnce(new Error('timeout'))
await vi.advanceTimersByTimeAsync(30_000)
await vi.advanceTimersByTimeAsync(15_000)
mockRedisInstance.ping.mockResolvedValueOnce('PONG')
await vi.advanceTimersByTimeAsync(30_000)
await vi.advanceTimersByTimeAsync(15_000)
// 2 more failures — should NOT trigger reconnect (counter was reset)
// 1 more failure — should NOT trigger reconnect (counter was reset)
mockRedisInstance.ping.mockRejectedValueOnce(new Error('timeout'))
await vi.advanceTimersByTimeAsync(30_000)
mockRedisInstance.ping.mockRejectedValueOnce(new Error('timeout'))
await vi.advanceTimersByTimeAsync(30_000)
await vi.advanceTimersByTimeAsync(15_000)
expect(listener).not.toHaveBeenCalled()
})
it('should call disconnect(true) after 3 consecutive PING failures', async () => {
it('should call disconnect(true) after 2 consecutive PING failures', async () => {
const { getRedisClient } = await import('./redis')
getRedisClient()
mockRedisInstance.ping.mockRejectedValue(new Error('ETIMEDOUT'))
await vi.advanceTimersByTimeAsync(30_000)
await vi.advanceTimersByTimeAsync(30_000)
await vi.advanceTimersByTimeAsync(15_000)
expect(mockRedisInstance.disconnect).not.toHaveBeenCalled()
await vi.advanceTimersByTimeAsync(30_000)
await vi.advanceTimersByTimeAsync(15_000)
expect(mockRedisInstance.disconnect).toHaveBeenCalledWith(true)
})
@@ -100,9 +94,8 @@ describe('redis config', () => {
getRedisClient()
mockRedisInstance.ping.mockRejectedValue(new Error('timeout'))
await vi.advanceTimersByTimeAsync(30_000)
await vi.advanceTimersByTimeAsync(30_000)
await vi.advanceTimersByTimeAsync(30_000)
await vi.advanceTimersByTimeAsync(15_000)
await vi.advanceTimersByTimeAsync(15_000)
expect(badListener).toHaveBeenCalledTimes(1)
expect(goodListener).toHaveBeenCalledTimes(1)
@@ -119,7 +112,7 @@ describe('redis config', () => {
// After closing, PING failures should not trigger disconnect
mockRedisInstance.ping.mockRejectedValue(new Error('timeout'))
await vi.advanceTimersByTimeAsync(30_000 * 5)
await vi.advanceTimersByTimeAsync(15_000 * 5)
expect(mockRedisInstance.disconnect).not.toHaveBeenCalled()
})
})

View File

@@ -11,8 +11,8 @@ let pingFailures = 0
let pingInterval: NodeJS.Timeout | null = null
let pingInFlight = false
const PING_INTERVAL_MS = 30_000
const MAX_PING_FAILURES = 3
const PING_INTERVAL_MS = 15_000
const MAX_PING_FAILURES = 2
/** Callbacks invoked when the PING health check forces a reconnect. */
const reconnectListeners: Array<() => void> = []
@@ -42,7 +42,7 @@ function startPingHealthCheck(redis: Redis): void {
})
if (pingFailures >= MAX_PING_FAILURES) {
logger.error('Redis PING failed 3 consecutive times — forcing reconnect', {
logger.error(`Redis PING failed ${MAX_PING_FAILURES} consecutive times — forcing reconnect`, {
consecutiveFailures: pingFailures,
})
pingFailures = 0

View File

@@ -0,0 +1,130 @@
/**
* @vitest-environment node
*/
import { describe, expect, it } from 'vitest'
import {
buildNextCallChain,
MAX_CALL_CHAIN_DEPTH,
parseCallChain,
SIM_VIA_HEADER,
serializeCallChain,
validateCallChain,
} from '@/lib/execution/call-chain'
describe('call-chain', () => {
describe('SIM_VIA_HEADER', () => {
it('has the expected header name', () => {
expect(SIM_VIA_HEADER).toBe('X-Sim-Via')
})
})
describe('MAX_CALL_CHAIN_DEPTH', () => {
it('equals 10', () => {
expect(MAX_CALL_CHAIN_DEPTH).toBe(10)
})
})
describe('parseCallChain', () => {
it('returns empty array for null', () => {
expect(parseCallChain(null)).toEqual([])
})
it('returns empty array for undefined', () => {
expect(parseCallChain(undefined)).toEqual([])
})
it('returns empty array for empty string', () => {
expect(parseCallChain('')).toEqual([])
})
it('returns empty array for whitespace-only string', () => {
expect(parseCallChain(' ')).toEqual([])
})
it('parses a single workflow ID', () => {
expect(parseCallChain('wf-abc')).toEqual(['wf-abc'])
})
it('parses multiple comma-separated workflow IDs', () => {
expect(parseCallChain('wf-a,wf-b,wf-c')).toEqual(['wf-a', 'wf-b', 'wf-c'])
})
it('trims whitespace around workflow IDs', () => {
expect(parseCallChain(' wf-a , wf-b , wf-c ')).toEqual(['wf-a', 'wf-b', 'wf-c'])
})
it('filters out empty segments', () => {
expect(parseCallChain('wf-a,,wf-b')).toEqual(['wf-a', 'wf-b'])
})
})
describe('serializeCallChain', () => {
it('serializes an empty array', () => {
expect(serializeCallChain([])).toBe('')
})
it('serializes a single ID', () => {
expect(serializeCallChain(['wf-a'])).toBe('wf-a')
})
it('serializes multiple IDs with commas', () => {
expect(serializeCallChain(['wf-a', 'wf-b', 'wf-c'])).toBe('wf-a,wf-b,wf-c')
})
})
describe('validateCallChain', () => {
it('returns null for an empty chain', () => {
expect(validateCallChain([])).toBeNull()
})
it('returns null when chain is under max depth', () => {
expect(validateCallChain(['wf-a', 'wf-b'])).toBeNull()
})
it('allows legitimate self-recursion', () => {
expect(validateCallChain(['wf-a', 'wf-a', 'wf-a'])).toBeNull()
})
it('returns depth error when chain is at max depth', () => {
const chain = Array.from({ length: MAX_CALL_CHAIN_DEPTH }, (_, i) => `wf-${i}`)
const error = validateCallChain(chain)
expect(error).toContain(
`Maximum workflow call chain depth (${MAX_CALL_CHAIN_DEPTH}) exceeded`
)
})
it('allows chain just under max depth', () => {
const chain = Array.from({ length: MAX_CALL_CHAIN_DEPTH - 1 }, (_, i) => `wf-${i}`)
expect(validateCallChain(chain)).toBeNull()
})
})
describe('buildNextCallChain', () => {
it('appends workflow ID to empty chain', () => {
expect(buildNextCallChain([], 'wf-a')).toEqual(['wf-a'])
})
it('appends workflow ID to existing chain', () => {
expect(buildNextCallChain(['wf-a', 'wf-b'], 'wf-c')).toEqual(['wf-a', 'wf-b', 'wf-c'])
})
it('does not mutate the original chain', () => {
const original = ['wf-a']
const result = buildNextCallChain(original, 'wf-b')
expect(original).toEqual(['wf-a'])
expect(result).toEqual(['wf-a', 'wf-b'])
})
})
describe('round-trip', () => {
it('parse → serialize is identity', () => {
const header = 'wf-a,wf-b,wf-c'
expect(serializeCallChain(parseCallChain(header))).toBe(header)
})
it('serialize → parse is identity', () => {
const chain = ['wf-a', 'wf-b', 'wf-c']
expect(parseCallChain(serializeCallChain(chain))).toEqual(chain)
})
})
})

View File

@@ -0,0 +1,51 @@
/**
* Workflow call chain detection using the Via-style pattern.
*
* Prevents infinite execution loops when workflows call each other via API or
* MCP endpoints. Each hop appends the current workflow ID to the `X-Sim-Via`
* header; on ingress the chain is checked for depth.
*/
export const SIM_VIA_HEADER = 'X-Sim-Via'
export const MAX_CALL_CHAIN_DEPTH = 10
/**
* Parses the `X-Sim-Via` header value into an ordered list of workflow IDs.
* Returns an empty array when the header is absent or empty.
*/
export function parseCallChain(headerValue: string | null | undefined): string[] {
if (!headerValue || !headerValue.trim()) {
return []
}
return headerValue
.split(',')
.map((id) => id.trim())
.filter(Boolean)
}
/**
* Serializes a call chain array back into the header value format.
*/
export function serializeCallChain(chain: string[]): string {
return chain.join(',')
}
/**
* Validates that the call chain has not exceeded the maximum depth.
* Returns an error message string if invalid, or `null` if the chain is
* safe to extend.
*/
export function validateCallChain(chain: string[]): string | null {
if (chain.length >= MAX_CALL_CHAIN_DEPTH) {
return `Maximum workflow call chain depth (${MAX_CALL_CHAIN_DEPTH}) exceeded.`
}
return null
}
/**
* Builds the next call chain by appending the current workflow ID.
*/
export function buildNextCallChain(chain: string[], workflowId: string): string[] {
return [...chain, workflowId]
}
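One ingress→egress hop through these helpers looks like the sketch below. The helper bodies are inlined so the example runs standalone; they mirror the module above.

```typescript
const MAX_CALL_CHAIN_DEPTH = 10

const parseCallChain = (h: string | null): string[] =>
  h ? h.split(',').map((s) => s.trim()).filter(Boolean) : []
const validateCallChain = (chain: string[]): string | null =>
  chain.length >= MAX_CALL_CHAIN_DEPTH
    ? `Maximum workflow call chain depth (${MAX_CALL_CHAIN_DEPTH}) exceeded.`
    : null
const buildNextCallChain = (chain: string[], id: string): string[] => [...chain, id]
const serializeCallChain = (chain: string[]): string => chain.join(',')

// Ingress: parse the incoming X-Sim-Via header and reject over-deep chains.
const incoming = parseCallChain('wf-root, wf-child')
if (validateCallChain(incoming)) throw new Error('call chain too deep')

// Egress: append the current workflow ID before the outgoing request.
const outgoing = serializeCallChain(buildNextCallChain(incoming, 'wf-current'))
// outgoing is sent as the X-Sim-Via header value on the next hop
```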

View File

@@ -44,8 +44,6 @@ export async function processExecutionFile(
)
}
logger.debug(`[${requestId}] Uploading file: ${file.name} (${buffer.length} bytes)`)
const userFile = await uploadExecutionFile(
executionContext,
buffer,
@@ -54,7 +52,6 @@ export async function processExecutionFile(
userId
)
logger.debug(`[${requestId}] Successfully uploaded ${file.name}`)
return userFile
}
@@ -69,8 +66,6 @@ export async function processExecutionFile(
)
}
logger.debug(`[${requestId}] Uploading file from URL: ${file.name} (${buffer.length} bytes)`)
const userFile = await uploadExecutionFile(
executionContext,
buffer,
@@ -79,7 +74,6 @@ export async function processExecutionFile(
userId
)
logger.debug(`[${requestId}] Successfully uploaded ${file.name} from URL`)
return userFile
}

View File

@@ -71,6 +71,7 @@ const DISTRIBUTED_MAX_INFLIGHT_PER_OWNER =
MAX_ACTIVE_PER_OWNER + MAX_QUEUED_PER_OWNER
const DISTRIBUTED_LEASE_MIN_TTL_MS = Number.parseInt(env.IVM_DISTRIBUTED_LEASE_MIN_TTL_MS) || 120000
const DISTRIBUTED_KEY_PREFIX = 'ivm:fair:v1:owner'
const LEASE_REDIS_DEADLINE_MS = 200
const QUEUE_RETRY_DELAY_MS = 1000
const DISTRIBUTED_LEASE_GRACE_MS = 30000
@@ -292,21 +293,37 @@ async function tryAcquireDistributedLease(
return 1
`
try {
const result = await redis.eval(
script,
1,
key,
now.toString(),
DISTRIBUTED_MAX_INFLIGHT_PER_OWNER.toString(),
expiresAt.toString(),
leaseId,
leaseTtlMs.toString()
let deadlineTimer: NodeJS.Timeout | undefined
const deadline = new Promise<never>((_, reject) => {
deadlineTimer = setTimeout(
() => reject(new Error(`Redis lease timed out after ${LEASE_REDIS_DEADLINE_MS}ms`)),
LEASE_REDIS_DEADLINE_MS
)
})
try {
const result = await Promise.race([
redis.eval(
script,
1,
key,
now.toString(),
DISTRIBUTED_MAX_INFLIGHT_PER_OWNER.toString(),
expiresAt.toString(),
leaseId,
leaseTtlMs.toString()
),
deadline,
])
return Number(result) === 1 ? 'acquired' : 'limit_exceeded'
} catch (error) {
logger.error('Failed to acquire distributed owner lease', { ownerKey, error })
logger.warn('Failed to acquire distributed owner lease — falling back to local execution', {
ownerKey,
error,
})
return 'unavailable'
} finally {
clearTimeout(deadlineTimer)
}
}
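The timeout wrapper around `redis.eval` follows a general race-with-deadline pattern. A hypothetical `withDeadline` helper capturing it (the real code inlines this logic rather than using a helper):

```typescript
// Race a promise against a deadline; clear the timer either way so it
// cannot keep the event loop alive after the work settles.
async function withDeadline<T>(work: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined
  const deadline = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms)
  })
  try {
    return await Promise.race([work, deadline])
  } finally {
    clearTimeout(timer)
  }
}
```

Note the `finally { clearTimeout(timer) }`: without it, a fast success would leave a dangling timer, which is exactly what the `deadlineTimer` cleanup in the diff above guards against.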

View File

@@ -3,6 +3,7 @@ import { workflow } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import { checkServerSideUsageLimits } from '@/lib/billing/calculations/usage-monitor'
import type { HighestPrioritySubscription } from '@/lib/billing/core/plan'
import { getHighestPrioritySubscription } from '@/lib/billing/core/subscription'
import { getExecutionTimeout } from '@/lib/core/execution-limits'
import { RateLimiter } from '@/lib/core/rate-limiter/rate-limiter'
@@ -39,6 +40,8 @@ export interface PreprocessExecutionOptions {
useAuthenticatedUserAsActor?: boolean // If true, use the authenticated userId as actorUserId (for client-side executions and personal API keys)
/** @deprecated No longer used - background/async executions always use deployed state */
useDraftState?: boolean
/** Pre-fetched workflow record to skip the Step 1 DB query. Must be a full workflow table row. */
workflowRecord?: WorkflowRecord
}
/**
@@ -66,7 +69,7 @@ export interface PreprocessExecutionResult {
}
type WorkflowRecord = typeof workflow.$inferSelect
type SubscriptionInfo = Awaited<ReturnType<typeof getHighestPrioritySubscription>>
type SubscriptionInfo = HighestPrioritySubscription
export async function preprocessExecution(
options: PreprocessExecutionOptions
@@ -84,6 +87,7 @@ export async function preprocessExecution(
loggingSession: providedLoggingSession,
isResumeContext: _isResumeContext = false,
useAuthenticatedUserAsActor = false,
workflowRecord: prefetchedWorkflowRecord,
} = options
logger.info(`[${requestId}] Starting execution preprocessing`, {
@@ -94,58 +98,69 @@ export async function preprocessExecution(
   })
   // ========== STEP 1: Validate Workflow Exists ==========
-  let workflowRecord: WorkflowRecord | null = null
-  try {
-    const records = await db.select().from(workflow).where(eq(workflow.id, workflowId)).limit(1)
+  if (prefetchedWorkflowRecord && prefetchedWorkflowRecord.id !== workflowId) {
+    logger.error(`[${requestId}] Prefetched workflow record ID mismatch`, {
+      expected: workflowId,
+      received: prefetchedWorkflowRecord.id,
+    })
+    throw new Error(
+      `Prefetched workflow record ID mismatch: expected ${workflowId}, got ${prefetchedWorkflowRecord.id}`
+    )
+  }
+  let workflowRecord: WorkflowRecord | null = prefetchedWorkflowRecord ?? null
+  if (!workflowRecord) {
+    try {
+      const records = await db.select().from(workflow).where(eq(workflow.id, workflowId)).limit(1)
-    if (records.length === 0) {
-      logger.warn(`[${requestId}] Workflow not found: ${workflowId}`)
+      if (records.length === 0) {
+        logger.warn(`[${requestId}] Workflow not found: ${workflowId}`)
         await logPreprocessingError({
           workflowId,
           executionId,
           triggerType,
           requestId,
           userId: 'unknown',
           workspaceId: '',
           errorMessage:
             'Workflow not found. The workflow may have been deleted or is no longer accessible.',
           loggingSession: providedLoggingSession,
         })
         return {
           success: false,
           error: {
             message: 'Workflow not found',
             statusCode: 404,
             logCreated: true,
           },
         }
       }
       workflowRecord = records[0]
     } catch (error) {
       logger.error(`[${requestId}] Error fetching workflow`, { error, workflowId })
       await logPreprocessingError({
         workflowId,
         executionId,
         triggerType,
         requestId,
-        userId: 'unknown',
-        workspaceId: '',
-        errorMessage:
-          'Workflow not found. The workflow may have been deleted or is no longer accessible.',
+        userId: userId || 'unknown',
+        workspaceId: providedWorkspaceId || '',
+        errorMessage: 'Internal error while fetching workflow',
         loggingSession: providedLoggingSession,
       })
       return {
         success: false,
         error: {
-          message: 'Workflow not found',
-          statusCode: 404,
+          message: 'Internal error while fetching workflow',
+          statusCode: 500,
           logCreated: true,
         },
       }
     }
+  }
   const workspaceId = workflowRecord.workspaceId || providedWorkspaceId || ''
@@ -249,20 +264,77 @@ export async function preprocessExecution(
     }
   }
-  // ========== STEP 4: Get User Subscription ==========
-  let userSubscription: SubscriptionInfo = null
-  try {
-    userSubscription = await getHighestPrioritySubscription(actorUserId)
-    logger.debug(`[${requestId}] User subscription retrieved`, {
-      actorUserId,
-      hasSub: !!userSubscription,
-      plan: userSubscription?.plan,
-    })
-  } catch (error) {
-    logger.error(`[${requestId}] Error fetching subscription`, { error, actorUserId })
+  // ========== STEP 4: Get Subscription ==========
+  const userSubscription = await getHighestPrioritySubscription(actorUserId)
+  // ========== STEP 5: Check Usage Limits ==========
+  if (!skipUsageLimits) {
+    try {
+      const usageCheck = await checkServerSideUsageLimits(actorUserId, userSubscription)
+      if (usageCheck.isExceeded) {
+        logger.warn(
+          `[${requestId}] User ${actorUserId} has exceeded usage limits. Blocking execution.`,
+          {
+            currentUsage: usageCheck.currentUsage,
+            limit: usageCheck.limit,
+            workflowId,
+            triggerType,
+          }
+        )
+        await logPreprocessingError({
+          workflowId,
+          executionId,
+          triggerType,
+          requestId,
+          userId: actorUserId,
+          workspaceId,
+          errorMessage:
+            usageCheck.message ||
+            `Usage limit exceeded: $${usageCheck.currentUsage?.toFixed(2)} used of $${usageCheck.limit?.toFixed(2)} limit. Please upgrade your plan to continue.`,
+          loggingSession: providedLoggingSession,
+        })
+        return {
+          success: false,
+          error: {
+            message:
+              usageCheck.message || 'Usage limit exceeded. Please upgrade your plan to continue.',
+            statusCode: 402,
+            logCreated: true,
+          },
+        }
+      }
+    } catch (error) {
+      logger.error(`[${requestId}] Error checking usage limits`, {
+        error,
+        actorUserId,
+      })
+      await logPreprocessingError({
+        workflowId,
+        executionId,
+        triggerType,
+        requestId,
+        userId: actorUserId,
+        workspaceId,
+        errorMessage:
+          'Unable to determine usage limits. Execution blocked for security. Please contact support.',
+        loggingSession: providedLoggingSession,
+      })
+      return {
+        success: false,
+        error: {
+          message: 'Unable to determine usage limits. Execution blocked for security.',
+          statusCode: 500,
+          logCreated: true,
+        },
+      }
+    }
+  }
-  // ========== STEP 5: Check Rate Limits ==========
+  // ========== STEP 6: Check Rate Limits ==========
   let rateLimitInfo: { allowed: boolean; remaining: number; resetAt: Date } | undefined
   if (checkRateLimit) {
@@ -302,10 +374,6 @@ export async function preprocessExecution(
         },
       }
     }
-    logger.debug(`[${requestId}] Rate limit check passed`, {
-      remaining: rateLimitInfo.remaining,
-    })
   } catch (error) {
     logger.error(`[${requestId}] Error checking rate limits`, { error, actorUserId })
@@ -331,78 +399,6 @@ export async function preprocessExecution(
     }
   }
-  // ========== STEP 6: Check Usage Limits (CRITICAL) ==========
-  if (!skipUsageLimits) {
-    try {
-      const usageCheck = await checkServerSideUsageLimits(actorUserId)
-      if (usageCheck.isExceeded) {
-        logger.warn(
-          `[${requestId}] User ${actorUserId} has exceeded usage limits. Blocking execution.`,
-          {
-            currentUsage: usageCheck.currentUsage,
-            limit: usageCheck.limit,
-            workflowId,
-            triggerType,
-          }
-        )
-        await logPreprocessingError({
-          workflowId,
-          executionId,
-          triggerType,
-          requestId,
-          userId: actorUserId,
-          workspaceId,
-          errorMessage:
-            usageCheck.message ||
-            `Usage limit exceeded: $${usageCheck.currentUsage?.toFixed(2)} used of $${usageCheck.limit?.toFixed(2)} limit. Please upgrade your plan to continue.`,
-          loggingSession: providedLoggingSession,
-        })
-        return {
-          success: false,
-          error: {
-            message:
-              usageCheck.message || 'Usage limit exceeded. Please upgrade your plan to continue.',
-            statusCode: 402,
-            logCreated: true,
-          },
-        }
-      }
-      logger.debug(`[${requestId}] Usage limit check passed`, {
-        currentUsage: usageCheck.currentUsage,
-        limit: usageCheck.limit,
-      })
-    } catch (error) {
-      logger.error(`[${requestId}] Error checking usage limits`, { error, actorUserId })
-      await logPreprocessingError({
-        workflowId,
-        executionId,
-        triggerType,
-        requestId,
-        userId: actorUserId,
-        workspaceId,
-        errorMessage:
-          'Unable to determine usage limits. Execution blocked for security. Please contact support.',
-        loggingSession: providedLoggingSession,
-      })
-      return {
-        success: false,
-        error: {
-          message: 'Unable to determine usage limits. Execution blocked for security.',
-          statusCode: 500,
-          logCreated: true,
-        },
-      }
-    }
-  } else {
-    logger.debug(`[${requestId}] Skipping usage limits check (test mode)`)
-  }
   // ========== SUCCESS: All Checks Passed ==========
   logger.info(`[${requestId}] All preprocessing checks passed`, {
     workflowId,
@@ -461,7 +457,7 @@ async function logPreprocessingError(params: {
   try {
     const session =
-      loggingSession || new LoggingSession(workflowId, executionId, triggerType as any, requestId)
+      loggingSession || new LoggingSession(workflowId, executionId, triggerType, requestId)
     await session.safeStart({
       userId,
@@ -477,11 +473,6 @@ async function logPreprocessingError(params: {
       traceSpans: [],
       skipCost: true, // Preprocessing errors should not charge - no execution occurred
     })
-    logger.debug(`[${requestId}] Logged preprocessing error to database`, {
-      workflowId,
-      executionId,
-    })
   } catch (error) {
     logger.error(`[${requestId}] Failed to log preprocessing error`, {
       error,
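The `logPreprocessingError` helper shown here is deliberately fail-safe: its own catch block only logs, so a broken logging path can never mask the original preprocessing error. That pattern can be sketched generically (hypothetical helper, not the project's API):

```typescript
// Illustrative sketch of a fail-safe logging wrapper: the write is
// attempted, but any failure is reported to `onFailure` and swallowed,
// never rethrown. `safeLogError` is a hypothetical name.
async function safeLogError(
  write: () => Promise<void>,
  onFailure: (err: unknown) => void
): Promise<boolean> {
  try {
    await write()
    return true // log persisted
  } catch (err) {
    onFailure(err) // report the logging failure, but never rethrow
    return false
  }
}
```

Returning a boolean lets the caller decide whether to fall back to another channel, without ever turning a logging failure into a new exception.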
@@ -54,12 +54,6 @@ async function queryKnowledgeBase(
   authHeaders?: { cookie?: string; authorization?: string }
 ): Promise<string[]> {
   try {
-    logger.info(`[${requestId}] Querying knowledge base`, {
-      knowledgeBaseId,
-      query: query.substring(0, 100),
-      topK,
-    })
     // Call the knowledge base search API directly
     const searchUrl = `${getInternalApiBaseUrl()}/api/knowledge/search`
@@ -90,8 +84,6 @@ async function queryKnowledgeBase(
     const chunks = results.map((r: any) => r.content || '').filter((c: string) => c.length > 0)
-    logger.info(`[${requestId}] Retrieved ${chunks.length} chunks from knowledge base`)
     return chunks
   } catch (error: any) {
     logger.error(`[${requestId}] Error querying knowledge base`, {
@@ -194,7 +186,6 @@ Evaluate the consistency and provide your score and reasoning in JSON format.`
   }
   const content = response.content.trim()
-  logger.debug(`[${requestId}] LLM response:`, { content })
   let jsonContent = content

@@ -359,9 +359,7 @@ export class ExecutionLogger implements IExecutionLoggerService {
       .leftJoin(userStats, eq(member.userId, userStats.userId))
       .where(eq(member.organizationId, sub.referenceId))
       .limit(1)
-    const orgUsageBeforeNum = Number.parseFloat(
-      (orgUsageBefore as any)?.toString?.() || '0'
-    )
+    const orgUsageBeforeNum = Number.parseFloat(String(orgUsageBefore ?? '0'))
     await this.updateUserStats(
       updatedLog.workflowId,
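The replaced coercion relies on the SQL `COALESCE` guaranteeing a non-null aggregate, so plain `String()` suffices and the `(x as any)?.toString?.()` escape hatch can go. A standalone sketch of the same coercion (the function name is illustrative):

```typescript
// Sketch: coerce a driver-returned aggregate (which may arrive as a
// string, number, or Decimal-like object) to a number, defaulting to 0.
// String() handles any of those without an `as any` cast.
function toUsageNumber(orgUsageBefore: unknown): number {
  return Number.parseFloat(String(orgUsageBefore ?? '0'))
}
```

The `?? '0'` fallback is belt-and-braces for callers where the COALESCE guarantee does not hold.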
@@ -433,7 +431,7 @@ export class ExecutionLogger implements IExecutionLoggerService {
       endedAt: updatedLog.endedAt?.toISOString() || endedAt,
       totalDurationMs: updatedLog.totalDurationMs || totalDurationMs,
       executionData: updatedLog.executionData as WorkflowExecutionLog['executionData'],
-      cost: updatedLog.cost as any,
+      cost: updatedLog.cost as WorkflowExecutionLog['cost'],
       createdAt: updatedLog.createdAt.toISOString(),
     }
@@ -467,7 +465,7 @@ export class ExecutionLogger implements IExecutionLoggerService {
       endedAt: workflowLog.endedAt?.toISOString() || workflowLog.startedAt.toISOString(),
       totalDurationMs: workflowLog.totalDurationMs || 0,
       executionData: workflowLog.executionData as WorkflowExecutionLog['executionData'],
-      cost: workflowLog.cost as any,
+      cost: workflowLog.cost as WorkflowExecutionLog['cost'],
       createdAt: workflowLog.createdAt.toISOString(),
     }
   }
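The `as WorkflowExecutionLog['cost']` casts use a TypeScript indexed access type to give a loosely typed JSONB column the exact shape of the target field, instead of disabling checking with `as any`. A self-contained sketch (the `cost` shape below is assumed for illustration, not the project's actual interface):

```typescript
// Sketch: cast an untyped JSONB value to the indexed access type of the
// interface that will hold it. The interface and `readCost` are
// illustrative assumptions.
interface WorkflowExecutionLog {
  cost: { total: number; input?: number; output?: number } | null
}

function readCost(raw: unknown): WorkflowExecutionLog['cost'] {
  // The cast documents the expected shape at one boundary; downstream
  // code stays fully type-checked against WorkflowExecutionLog['cost'].
  return raw as WorkflowExecutionLog['cost']
}
```

If the interface's `cost` shape later changes, every use of `WorkflowExecutionLog['cost']` tracks it automatically, which a scattered `as any` would not.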

Some files were not shown because too many files have changed in this diff.