Compare commits

41 Commits

Author SHA1 Message Date
Waleed
e9bdc57616 v0.5.112: trace spans improvements, fathom integration, jira fixes, canvas navigation updates 2026-03-12 13:30:20 -07:00
Waleed
e7b4da2689 feat(slack): add email field to get user and list users tools (#3509)
* feat(slack): add email field to get user and list users tools

* fix(slack): use empty string fallback for email and make type non-optional

* fix(slack): comment out users:read.email scope pending app review
2026-03-12 13:27:37 -07:00
Waleed
aa0101c666 fix(blocks): clarify condition ID suffix slicing for readability (#3546)
Use explicit hyphen separator instead of relying on slice offset to
implicitly include the hyphen in the suffix, making the intent clearer.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 13:26:11 -07:00
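The readability change described in #3546 can be sketched in a few lines. This is an illustration of the slicing pattern, not the codebase's actual code; the IDs are made up:

```typescript
// Condition IDs have the shape `${blockId}-${suffix}` (per the commit above).
const blockId = 'abc123'
const conditionId = `${blockId}-if-0`

// Before: the slice offset implicitly includes the leading hyphen in the suffix.
const suffixImplicit = conditionId.slice(blockId.length) // '-if-0'

// After: the hyphen separator is explicit at the call site, so the intent
// (prefix + '-' + suffix) is visible when the ID is rebuilt.
const suffixExplicit = conditionId.slice(`${blockId}-`.length) // 'if-0'
const rebuilt = `${blockId}-${suffixExplicit}`

console.log(suffixImplicit, suffixExplicit, rebuilt)
```

Both versions produce the same ID; only the second makes the separator visible.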
Waleed
c939f8a76e fix(jira): add explicit fields parameter to search/jql endpoint (#3544)
The GET /rest/api/3/search/jql endpoint requires an explicit `fields`
parameter to return issue data. Without it, only the issue `id` is
returned with all other fields empty. This adds `fields=*all` as the
default when the user doesn't specify custom fields.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 12:51:27 -07:00
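A minimal sketch of the defaulting behavior #3544 describes — the helper name is hypothetical, but the endpoint path and the `fields=*all` default come from the commit message:

```typescript
// Build a Jira search/jql URL. Without an explicit `fields` parameter the
// endpoint returns only issue ids, so default to `*all` when the caller
// doesn't specify custom fields.
function buildJiraSearchUrl(baseUrl: string, jql: string, fields?: string): string {
  const params = new URLSearchParams({ jql })
  params.set('fields', fields && fields.length > 0 ? fields : '*all')
  return `${baseUrl}/rest/api/3/search/jql?${params.toString()}`
}

const url = buildJiraSearchUrl('https://example.atlassian.net', 'project = SIM')
console.log(url)
```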
Waleed
0b19ad0013 improvement(canvas): enable middle mouse button panning in cursor mode (#3542) 2026-03-12 12:44:15 -07:00
Waleed
3d5141d852 chore(oauth): remove unused github-repo generic OAuth provider (#3543) 2026-03-12 12:39:31 -07:00
Waleed
75832ca007 fix(jira): add missing write:attachment:jira oauth scope (#3541) 2026-03-12 12:13:57 -07:00
Waleed
97f78c60b4 feat(tools): add Fathom AI Notetaker integration (#3531)
* feat(fathom): add Fathom AI Notetaker integration

* fix(fathom): address PR review feedback

- Add response.ok checks to all 5 tool transformResponse functions
- Fix include_summary default to respect explicit false (check undefined)
- Add externalId validation before URL interpolation in webhook deletion

* fix(fathom): address second round PR review feedback

- Remove redundant 204 status check in deleteFathomWebhook (204 is ok)
- Use consistent undefined-guard pattern for all include flags
- Add .catch() fallback on webhook creation JSON parse
- Change recording_id default from 0 to null to avoid misleading sentinel

* fix(fathom): add missing crm_matches to list_meetings transform and fix action_items type

- Add crm_matches pass-through in list_meetings transform (was silently dropped)
- Fix action_items type to match API schema (description, user_generated, completed, etc.)
- Add crm_matches type with contacts, companies, deals, error fields

* fix(fathom): guard against undefined webhook id on creation success

* fix(fathom): add type to nested trigger outputs and fix boolean coercion

- Add type: 'object' to recorded_by and default_summary trigger outputs
- Use val === true || val === 'true' pattern for include flag coercion
  to safely handle both boolean and string values from providerConfig

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Lakee Sivaraya <71339072+lakeesiv@users.noreply.github.com>
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
2026-03-12 11:00:07 -07:00
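The coercion pattern named in the last bullet (`val === true || val === 'true'`) is worth seeing in isolation, since provider config values can arrive as either booleans or strings. The function name here is illustrative:

```typescript
// Treat only boolean true and the string 'true' as enabled; anything else
// (false, 'false', undefined, null) is disabled. This avoids the classic
// trap where Boolean('false') evaluates to true.
function coerceIncludeFlag(val: unknown): boolean {
  return val === true || val === 'true'
}

console.log(coerceIncludeFlag(true))      // true
console.log(coerceIncludeFlag('true'))    // true
console.log(coerceIncludeFlag('false'))   // false — a truthy string, but not enabled
console.log(coerceIncludeFlag(undefined)) // false
```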
Waleed
9295499405 fix(traces): prevent condition blocks from rendering source agent's timeSegments (#3534)
* fix(traces): prevent condition blocks from rendering source agent's timeSegments

Condition blocks spread their source block's entire output into their own
output. When the source is an agent, this leaked providerTiming/timeSegments
into the condition's output, causing buildTraceSpans to create "Initial
response" as a child of the condition span instead of the agent span.

Two fixes:
- Skip timeSegment child creation for condition block types in buildTraceSpans
- Filter execution metadata (providerTiming, tokens, toolCalls, model, cost)
  from condition handler's filterSourceOutput

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(traces): guard condition blocks from leaked metadata on old persisted logs

Extend isConditionBlockType guards to also skip setting span.providerTiming,
span.cost, span.tokens, and span.model for condition blocks. This ensures
old persisted logs (recorded before the filterSourceOutput fix) don't display
misleading execution metadata on condition spans.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(traces): guard toolCalls fallback path for condition blocks on old logs

The else branch that extracts toolCalls from log.output also needs a
condition block guard, otherwise old persisted logs with leaked toolCalls
from the source agent would render on the condition span.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor(traces): extract isCondition to local variable for readability

Cache isConditionBlockType(log.blockType) in a local const at the top
of the forEach loop instead of calling it 6 times per iteration.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 01:39:02 -07:00
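A standalone sketch of the guard pattern this PR describes — the shapes are assumed, not the real `buildTraceSpans` types. The point is caching the condition check once per log entry and skipping execution metadata for condition spans, even on old persisted logs that leaked it:

```typescript
interface LogEntry {
  blockType: string
  output: { providerTiming?: unknown; model?: string }
}

const isConditionBlockType = (t: string): boolean => t === 'condition'

function buildSpan(log: LogEntry): Record<string, unknown> {
  const span: Record<string, unknown> = { blockType: log.blockType }
  // Computed once per entry, not re-evaluated for every field (last bullet above).
  const isCondition = isConditionBlockType(log.blockType)
  // Condition blocks spread their source block's output into their own, so
  // metadata here may be leaked from the source agent — never copy it over.
  if (!isCondition && log.output.providerTiming !== undefined) span.providerTiming = log.output.providerTiming
  if (!isCondition && log.output.model !== undefined) span.model = log.output.model
  return span
}

// An old persisted condition log with leaked agent metadata still renders clean:
const span = buildSpan({ blockType: 'condition', output: { providerTiming: {}, model: 'gpt-4' } })
console.log(span)
```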
Waleed
6bcbd15ee6 fix(blocks): remap condition/router IDs when duplicating blocks (#3533)
* fix(blocks): remap condition/router IDs when duplicating blocks

Condition and router blocks embed IDs in the format `{blockId}-{suffix}`
inside their subBlock values and edge sourceHandles. When blocks were
duplicated, these IDs were not updated to reference the new block ID,
causing duplicate handle IDs and broken edge routing.

Fixes all four duplication paths: single block duplicate, copy/paste,
workflow duplication (server-side), and workflow import.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(blocks): deep-clone subBlocks before mutating condition IDs

Shallow copy of subBlocks meant remapConditionIds could mutate the
source data (clipboard on repeated paste, or input workflowState on
import). Deep-clone subBlocks in both regenerateBlockIds and
regenerateWorkflowIds to prevent this.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(blocks): remap condition IDs in regenerateWorkflowStateIds (template use)

The template use code path was missing condition/router ID remapping,
causing broken condition blocks when creating workflows from templates.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 01:19:38 -07:00
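The remapping this fix describes can be sketched as a pure function — a hypothetical helper, not the codebase's `remapConditionIds`, but it shows the `${blockId}-${suffix}` rewrite that all four duplication paths need:

```typescript
// Rewrite an embedded ID from the old block's prefix to the new block's,
// leaving unrelated values untouched.
function remapEmbeddedId(value: string, oldBlockId: string, newBlockId: string): string {
  const prefix = `${oldBlockId}-`
  return value.startsWith(prefix)
    ? `${newBlockId}-${value.slice(prefix.length)}`
    : value
}

const remapped = remapEmbeddedId('block-old-if-0', 'block-old', 'block-new')
console.log(remapped) // 'block-new-if-0'
```

Note the deep-clone caveat from the second bullet still applies around any such helper: if the containing subBlock objects are shallow-copied, rewriting values in place mutates the clipboard or import source on repeated use.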
Vikhyath Mondreti
36612ae42a v0.5.111: non-polling webhook execs off trigger.dev, gmail subject headers, webhook trigger configs (#3530) 2026-03-11 17:47:28 -07:00
Vikhyath Mondreti
68d207df94 improvement(webhooks): move non-polling executions off trigger.dev (#3527)
* improvement(webhooks): move non-polling off trigger.dev

* restore constants file

* improve comment

* add unit test to prevent drift
2026-03-11 17:07:24 -07:00
Vikhyath Mondreti
d5502d602b feat(webhooks): dedup and custom ack configuration (#3525)
* feat(webhooks): dedup and custom ack configuration

* address review comments

* reject object typed idempotency key
2026-03-11 15:51:35 -07:00
Waleed
37d524bb0a fix(gmail): RFC 2047 encode subject headers for non-ASCII characters (#3526)
* fix(gmail): RFC 2047 encode subject headers for non-ASCII characters

* Fix RFC 2047 encoded word length limit

Split long email subjects into multiple RFC 2047 encoded words to comply with the 75-character limit per RFC 2047 Section 2. Each encoded word now contains at most 45 bytes of UTF-8 content (producing max 60 chars of base64 + 12 chars overhead = 72 total). Multiple encoded words are separated by CRLF + space (folding whitespace).

Applied via @cursor push command

* fix(gmail): split RFC 2047 encoded words on character boundaries

* fix(gmail): simplify RFC 2047 encoding to match Google's own sample

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
2026-03-11 15:48:07 -07:00
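A self-contained sketch of the approach described above, under the limits stated in the commit: at most 45 bytes of UTF-8 per encoded word (60 chars of base64 plus 12 chars of overhead stays within RFC 2047's 75-character cap), words joined with CRLF + space folding whitespace, and splitting on character boundaries so multi-byte sequences are never cut. This is an illustration, not the shipped encoder:

```typescript
function encodeSubject(subject: string): string {
  // Pure printable-ASCII subjects need no encoding.
  if (/^[\x20-\x7e]*$/.test(subject)) return subject
  const words: string[] = []
  let chunk = ''
  for (const ch of subject) {
    // Split by code point so a multi-byte UTF-8 sequence is never bisected.
    if (Buffer.byteLength(chunk + ch, 'utf8') > 45) {
      words.push(chunk)
      chunk = ch
    } else {
      chunk += ch
    }
  }
  if (chunk) words.push(chunk)
  return words
    .map((w) => `=?UTF-8?B?${Buffer.from(w, 'utf8').toString('base64')}?=`)
    .join('\r\n ') // folding whitespace between consecutive encoded words
}

const encoded = encodeSubject('Héllo wörld')
console.log(encoded)
```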
Waleed
1c2c2c65d4 v0.5.110: webhook execution speedups, SSRF patches 2026-03-11 15:00:24 -07:00
Waleed
19ef526886 fix(webhooks): eliminate redundant DB queries from webhook execution path (#3523)
* fix(webhooks): eliminate redundant DB queries from webhook execution path

* chore(webhooks): remove implementation-detail comments

* fix(webhooks): restore auth-first ordering and add credential resolution warning

- Revert parallel auth+preprocessing to sequential auth→preprocessing
  to prevent rate-limit exhaustion via unauthenticated requests
- Add warning log when credential account resolution fails in background job

* fix(webhooks): restore auth-before-reachability ordering and remove dead credentialAccountUserId field

- Move reachability test back after auth to prevent path enumeration
- Remove dead credentialAccountUserId from WebhookExecutionPayload
- Simplify credential resolution condition in background job
2026-03-11 14:51:04 -07:00
Waleed
ff2a1527ab fix(security): add SSRF protection to database tools and webhook delivery (#3500)
* fix(security): add SSRF protection to database tools and webhook delivery

* fix(security): address review comments on SSRF PR

- Remove Promise.race timeout pattern to avoid unhandled rejections
  (http.request timeout is sufficient for webhook delivery)
- Use safeCompare in verifyCronAuth instead of inline HMAC logic
- Strip IPv6 brackets before validateDatabaseHost in Redis route

* fix(security): allow HTTP webhooks and fix misleading MCP error docs

- Add allowHttp option to validateExternalUrl, validateUrlWithDNS,
  and secureFetchWithValidation to support HTTP webhook URLs
- Pass allowHttp: true for webhook delivery and test endpoints
- Fix misleading JSDoc on createMcpErrorResponse (doesn't log errors)
- Mark unused error param with underscore prefix

* fix(security): forward allowHttp option through redirect validation

Pass allowHttp to validateUrlWithDNS in the redirect handler of
secureFetchWithPinnedIP so HTTP-to-HTTP redirects work when allowHttp
is enabled for webhook delivery.

* fix(security): block localhost when allowHttp is enabled

When allowHttp is true (user-supplied webhook URLs), explicitly block
localhost/loopback in both validateExternalUrl and validateUrlWithDNS
to prevent SSRF against internal services.

* fix(security): always strip multi-line content in sanitizeConnectionError

Take the first line of the error message regardless of length to
prevent leaking sensitive data from multi-line error messages.
2026-03-09 20:28:28 -07:00
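A minimal sketch of the `allowHttp` + localhost-block rule from the last two bullets, with assumed names and shapes. The real validators described above also do DNS resolution, IP pinning, and redirect re-validation; this only shows the policy: HTTP may be opted into for user-supplied webhook URLs, but loopback hosts are always rejected:

```typescript
function parseUrl(raw: string): URL | null {
  try {
    return new URL(raw)
  } catch {
    return null
  }
}

function validateExternalUrl(rawUrl: string, opts: { allowHttp?: boolean } = {}): boolean {
  const url = parseUrl(rawUrl)
  if (!url) return false
  if (url.protocol !== 'http:' && url.protocol !== 'https:') return false
  if (url.protocol === 'http:' && !opts.allowHttp) return false
  // Strip IPv6 brackets before host checks (as in the Redis route fix above).
  const host = url.hostname.replace(/^\[|\]$/g, '')
  // Even with allowHttp, loopback is always blocked to prevent SSRF against
  // internal services.
  const loopback = host === 'localhost' || host === '::1' || /^127\./.test(host)
  return !loopback
}

console.log(validateExternalUrl('https://example.com/hook'))                      // true
console.log(validateExternalUrl('http://example.com/hook'))                       // false by default
console.log(validateExternalUrl('http://example.com/hook', { allowHttp: true }))  // true
console.log(validateExternalUrl('http://127.0.0.1/hook', { allowHttp: true }))    // false
```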
Waleed
2e1c639a81 fix(parallel): align integration with Parallel AI API docs (#3501)
* fix(parallel): align integration with Parallel AI API docs

* fix(parallel): keep processor subBlock ID for backwards compatibility

* fix(parallel): move error field to top level per ToolResponse interface

* fix(parallel): guard research_input and prevent domain leakage across operations

* fix(parallel): make url/title nullable in types to match transformResponse

* fix(parallel): revert search_queries param type to string for backwards compatibility
2026-03-09 19:47:30 -07:00
Waleed
ecd3536a72 v0.5.109: obsidian and evernote integrations, slack fixes, remove memory instrumentation 2026-03-09 10:40:37 -07:00
Theodore Li
635179d696 Revert "feat(hosted key): Add exa hosted key (#3221)" (#3495)
This reverts commit 158d5236bc.

Co-authored-by: Theodore Li <teddy@zenobiapay.com>
2026-03-09 10:31:54 -07:00
Waleed
f88926a6a8 fix(webhooks): return empty 200 for Slack to close modals cleanly (#3492)
* fix(webhooks): return empty 200 for Slack to close modals cleanly

* fix(webhooks): add clarifying comment on Slack error path trade-off
2026-03-09 10:11:36 -07:00
Waleed
690b47a0bf chore(monitoring): remove SSE connection tracking and Bun.gc debug instrumentation (#3472) 2026-03-08 17:27:05 -07:00
Theodore Li
158d5236bc feat(hosted key): Add exa hosted key (#3221)
* feat(hosted keys): Implement serper hosted key

* Handle required fields correctly for hosted keys

* Add rate limiting (3 tries, exponential backoff)

* Add custom pricing, switch to exa as first hosted key

* Add telemetry

* Consolidate byok type definitions

* Add warning comment if default calculation is used

* Record usage to user stats table

* Fix unit tests, use cost property

* Include more metadata in cost output

* Fix disabled tests

* Fix spacing

* Fix lint

* Move knowledge cost restructuring away from generic block handler

* Migrate knowledge unit tests

* Lint

* Fix broken tests

* Add user based hosted key throttling

* Refactor hosted key handling. Add optimistic handling of throttling for custom throttle rules.

* Remove research as hosted key. Recommend BYOK if throttling occurs

* Make adding api keys adjustable via env vars

* Remove vestigial fields from research

* Make billing actor id required for throttling

* Switch to round robin for api key distribution

* Add helper method for adding hosted key cost

* Strip leading double underscores to avoid breaking change

* Lint fix

* Remove falsy check in favor for explicit null check

* Add more detailed metrics for different throttling types

* Fix _costDollars field

* Handle hosted agent tool calls

* Fail loudly if cost field isn't found

* Remove any type

* Fix type error

* Fix lint

* Fix usage log double logging data

* Fix test

---------

Co-authored-by: Theodore Li <teddy@zenobiapay.com>
2026-03-07 13:06:57 -05:00
Vikhyath Mondreti
8c0a2e04b1 v0.5.108: workflow input params in agent tools, bun upgrade, dropdown selectors for 14 blocks 2026-03-06 21:02:25 -08:00
Waleed
6586c5ce40 v0.5.107: new reddit, slack tools 2026-03-05 22:48:20 -08:00
Vikhyath Mondreti
3ce947566d v0.5.106: condition block and legacy kbs fixes, GPT 5.4 2026-03-05 17:30:05 -08:00
Waleed
70c36cb7aa v0.5.105: slack remove reaction, nested subflow locks fix, servicenow pagination, memory improvements 2026-03-04 22:38:26 -08:00
Waleed
f1ec5fe824 v0.5.104: memory improvements, nested subflows, careers page redirect, brandfetch, google meet 2026-03-03 23:45:29 -08:00
Waleed
e07e3c34cc v0.5.103: memory util instrumentation, API docs, amplitude, google pagespeed insights, pagerduty 2026-03-01 23:27:02 -08:00
Waleed
0d2e6ff31d v0.5.102: new integrations, new tools, ci speedups, memory leak instrumentation 2026-02-28 12:48:10 -08:00
Waleed
4fd0989264 v0.5.101: circular dependency mitigation, confluence enhancements, google tasks and bigquery integrations, workflow lock 2026-02-26 15:04:53 -08:00
Waleed
67f8a687f6 v0.5.100: multiple credentials, 40% speedup, gong, attio, audit log improvements 2026-02-25 00:28:25 -08:00
Waleed
af592349d3 v0.5.99: local dev improvements, live workflow logs in terminal 2026-02-23 00:24:49 -08:00
Waleed
0d86ea01f0 v0.5.98: change detection improvements, rate limit and code execution fixes, removed retired models, hex integration 2026-02-21 18:07:40 -08:00
Waleed
115f04e989 v0.5.97: oidc discovery for copilot mcp 2026-02-21 02:06:25 -08:00
Waleed
34d92fae89 v0.5.96: sim oauth provider, slack ephemeral message tool and blockkit support 2026-02-20 18:22:20 -08:00
Waleed
67aa4bb332 v0.5.95: gemini 3.1 pro, cloudflare, dataverse, revenuecat, redis, upstash, algolia tools; isolated-vm robustness improvements, tables backend (#3271)
* feat(tools): advanced fields for youtube, vercel; added cloudflare and dataverse tools (#3257)

* refactor(vercel): mark optional fields as advanced mode

Move optional/power-user fields behind the advanced toggle:
- List Deployments: project filter, target, state
- Create Deployment: project ID override, redeploy from, target
- List Projects: search
- Create/Update Project: framework, build/output/install commands
- Env Vars: variable type
- Webhooks: project IDs filter
- Checks: path, details URL
- Team Members: role filter
- All operations: team ID scope

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* style(youtube): mark optional params as advanced mode

Hide pagination, sort order, and filter fields behind the advanced
toggle for a cleaner default UX across all YouTube operations.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* added advanced fields for vercel and youtube, added cloudflare and dataverse block

* added desc for dataverse

* add more tools

* ack comment

* more

* ops

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* feat(tables): added tables (#2867)

* updates

* required

* trashy table viewer

* updates

* updates

* filtering ui

* updates

* updates

* updates

* one input mode

* format

* fix lints

* improved errors

* updates

* updates

* changes

* doc strings

* breaking down file

* update comments with ai

* updates

* comments

* changes

* revert

* updates

* dedupe

* updates

* updates

* updates

* refactoring

* renames & refactors

* refactoring

* updates

* undo

* update db

* wand

* updates

* fix comments

* fixes

* simplify comments

* updates

* renames

* better comments

* validation

* updates

* updates

* updates

* fix sorting

* fix appearance

* updating prompt to make it user sort

* rm

* updates

* rename

* comments

* clean comments

* simplification

* updates

* updates

* refactor

* reduced type confusion

* undo

* rename

* undo changes

* undo

* simplify

* updates

* updates

* revert

* updates

* db updates

* type fix

* fix

* fix error handling

* updates

* docs

* docs

* updates

* rename

* dedupe

* revert

* uncook

* updates

* fix

* fix

* fix

* fix

* prepare merge

* readd migrations

* add back missed code

* migrate enrichment logic to general abstraction

* address bugbot concerns

* adhere to size limits for tables

* remove conflicting migration

* add back migrations

* fix tables auth

* fix permissive auth

* fix lint

* reran migrations

* migrate to use tanstack query for all server state

* update table-selector

* update names

* added tables to permission groups, updated subblock types

---------

Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
Co-authored-by: waleed <walif6@gmail.com>

* fix(snapshot): changed insert to upsert when concurrent identical child workflows are running (#3259)

* fix(snapshot): changed insert to upsert when concurrent identical child workflows are running

* fixed ci tests failing

* fix(workflows): disallow duplicate workflow names at the same folder level (#3260)

* feat(tools): added redis, upstash, algolia, and revenuecat (#3261)

* feat(tools): added redis, upstash, algolia, and revenuecat

* ack comment

* feat(models): add gemini-3.1-pro-preview and update gemini-3-pro thinking levels (#3263)

* fix(audit-log): lazily resolve actor name/email when missing (#3262)

* fix(blocks): move type coercions from tools.config.tool to tools.config.params (#3264)

* fix(blocks): move type coercions from tools.config.tool to tools.config.params

Number() coercions in tools.config.tool ran at serialization time before
variable resolution, destroying dynamic references like <block.result.count>
by converting them to NaN/null. Moved all coercions to tools.config.params
which runs at execution time after variables are resolved.

Fixed in 15 blocks: exa, arxiv, sentry, incidentio, wikipedia, ahrefs,
posthog, elasticsearch, dropbox, hunter, lemlist, spotify, youtube, grafana,
parallel. Also added mode: 'advanced' to optional exa fields.

Closes #3258

* fix(blocks): address PR review — move remaining param mutations from tool() to params()

- Moved field mappings from tool() to params() in grafana, posthog,
  lemlist, spotify, dropbox (same dynamic reference bug)
- Fixed parallel.ts excerpts/full_content boolean logic
- Fixed parallel.ts search_queries empty case (must set undefined)
- Fixed elasticsearch.ts timeout not included when already ends with 's'
- Restored dropbox.ts tool() switch for proper default fallback

* fix(blocks): restore field renames to tool() for serialization-time validation

Field renames (e.g. personalApiKey→apiKey) must be in tool() because
validateRequiredFieldsBeforeExecution calls selectToolId()→tool() then
checks renamed field names on params. Only type coercions (Number(),
boolean) stay in params() to avoid destroying dynamic variable references.

* improvement(resolver): resolve empty sentinel so unexecuted valid refs don't pass through to text inputs (#3266)

* fix(blocks): add required constraint for serviceDeskId in JSM block (#3268)

* fix(blocks): add required constraint for serviceDeskId in JSM block

* fix(blocks): rename custom field values to request field values in JSM create request

* fix(trigger): add isolated-vm support to trigger.dev container builds (#3269)

Scheduled workflow executions running in trigger.dev containers were
failing to spawn isolated-vm workers because the native module wasn't
available in the container. This caused loop condition evaluation to
silently fail and exit after one iteration.

- Add isolated-vm to build.external and additionalPackages in trigger config
- Include isolated-vm-worker.cjs via additionalFiles for child process spawning
- Add fallback path resolution for worker file in trigger.dev environment

* fix(tables): hide tables from sidebar and block registry (#3270)

* fix(tables): hide tables from sidebar and block registry

* fix(trigger): add isolated-vm support to trigger.dev container builds (#3269)

Scheduled workflow executions running in trigger.dev containers were
failing to spawn isolated-vm workers because the native module wasn't
available in the container. This caused loop condition evaluation to
silently fail and exit after one iteration.

- Add isolated-vm to build.external and additionalPackages in trigger config
- Include isolated-vm-worker.cjs via additionalFiles for child process spawning
- Add fallback path resolution for worker file in trigger.dev environment

* lint

* fix(trigger): update node version to align with main app (#3272)

* fix(build): fix corrupted sticky disk cache on blacksmith (#3273)

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Lakee Sivaraya <71339072+lakeesiv@users.noreply.github.com>
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
2026-02-20 13:43:07 -08:00
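The coercion-timing bug fixed in #3264 above (Number() in tools.config.tool vs tools.config.params) can be illustrated standalone. The reference syntax is taken from the commit message; the rest is a sketch:

```typescript
// A dynamic reference like <block.result.count> is still a placeholder string
// at serialization time; the resolver substitutes the real value only at
// execution time.
const rawParam = '<block.result.count>'

// Coercing in tools.config.tool (serialization time, before resolution)
// destroys the reference:
const tooEarly = Number(rawParam) // NaN

// Coercing in tools.config.params (execution time, after resolution) works:
const resolved = '42' // what the resolver substitutes at run time
const atExecution = Number(resolved)

console.log(Number.isNaN(tooEarly), atExecution)
```

This is why the follow-up commits keep only type coercions in params() while field renames stay in tool(), where serialization-time validation needs them.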
Waleed
15ace5e63f v0.5.94: vercel integration, folder insertion, migrated tracking redirects to rewrites 2026-02-18 16:53:34 -08:00
Waleed
fdca73679d v0.5.93: NextJS config changes, MCP and Blocks whitelisting, copilot keyboard shortcuts, audit logs 2026-02-18 12:10:05 -08:00
Waleed
da46a387c9 v0.5.92: shortlinks, copilot scrolling stickiness, pagination 2026-02-17 15:13:21 -08:00
Waleed
b7e377ec4b v0.5.91: docs i18n, turborepo upgrade 2026-02-16 00:36:05 -08:00
113 changed files with 2861 additions and 749 deletions

View File

@@ -1979,6 +1979,24 @@ export function ElevenLabsIcon(props: SVGProps<SVGSVGElement>) {
  )
}
export function FathomIcon(props: SVGProps<SVGSVGElement>) {
  return (
    <svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1000 1000' fill='none'>
      <path
        d='M0,668.7v205.78c0,53.97,34.24,102.88,85.8,119.08,87.48,27.49,167.88-36.99,167.88-120.22v-77.45L0,668.7Z'
        fill='#007299'
      />
      <path
        d='M873.72,626.07c-19.05,0-38.38-4.3-56.58-13.38L72.78,241.43C11.15,210.69-17.51,136.6,11.18,74.05,41.2,8.59,119.26-18.53,183.23,13.38l744.25,371.21c62.45,31.15,91,109.08,59.79,171.43-22.22,44.38-67.02,70.05-113.55,70.05Z'
        fill='#00beff'
      />
      <path
        d='M500.09,813.66c-19.05,0-38.38-4.3-56.58-13.38l-370.72-184.9c-61.63-30.74-90.29-104.82-61.61-167.37,30.02-65.46,108.08-92.59,172.06-60.68l370.62,184.85c62.45,31.15,91,109.08,59.79,171.43-22.22,44.38-67.02,70.05-113.55,70.05Z'
        fill='#00beff'
      />
    </svg>
  )
}
export function LinkupIcon(props: SVGProps<SVGSVGElement>) {
  return (
    <svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 154 107' fill='none'>

View File

@@ -43,6 +43,7 @@ import {
  EvernoteIcon,
  ExaAIIcon,
  EyeIcon,
  FathomIcon,
  FirecrawlIcon,
  FirefliesIcon,
  GammaIcon,
@@ -206,6 +207,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
  enrich: EnrichSoIcon,
  evernote: EvernoteIcon,
  exa: ExaAIIcon,
  fathom: FathomIcon,
  file_v3: DocumentIcon,
  firecrawl: FirecrawlIcon,
  fireflies_v2: FirefliesIcon,

View File

@@ -0,0 +1,135 @@
---
title: Fathom
description: Access meeting recordings, transcripts, and summaries
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="fathom"
color="#181C1E"
/>
## Usage Instructions
Integrate Fathom AI Notetaker into your workflow. List meetings, get transcripts and summaries, and manage team members and teams. Can also trigger workflows when new meeting content is ready.
## Tools
### `fathom_list_meetings`
List recent meetings recorded by the user or shared to their team.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Fathom API Key |
| `includeSummary` | string | No | Include meeting summary \(true/false\) |
| `includeTranscript` | string | No | Include meeting transcript \(true/false\) |
| `includeActionItems` | string | No | Include action items \(true/false\) |
| `includeCrmMatches` | string | No | Include linked CRM matches \(true/false\) |
| `createdAfter` | string | No | Filter meetings created after this ISO 8601 timestamp |
| `createdBefore` | string | No | Filter meetings created before this ISO 8601 timestamp |
| `recordedBy` | string | No | Filter by recorder email address |
| `teams` | string | No | Filter by team name |
| `cursor` | string | No | Pagination cursor from a previous response |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `meetings` | array | List of meetings |
| ↳ `title` | string | Meeting title |
| ↳ `recording_id` | number | Unique recording ID |
| ↳ `url` | string | URL to view the meeting |
| ↳ `share_url` | string | Shareable URL |
| ↳ `created_at` | string | Creation timestamp |
| ↳ `transcript_language` | string | Transcript language |
| `next_cursor` | string | Pagination cursor for next page |
### `fathom_get_summary`
Get the call summary for a specific meeting recording.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Fathom API Key |
| `recordingId` | string | Yes | The recording ID of the meeting |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `template_name` | string | Name of the summary template used |
| `markdown_formatted` | string | Markdown-formatted summary text |
### `fathom_get_transcript`
Get the full transcript for a specific meeting recording.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Fathom API Key |
| `recordingId` | string | Yes | The recording ID of the meeting |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `transcript` | array | Array of transcript entries with speaker, text, and timestamp |
| ↳ `speaker` | object | Speaker information |
| ↳ `display_name` | string | Speaker display name |
| ↳ `matched_calendar_invitee_email` | string | Matched calendar invitee email |
| ↳ `text` | string | Transcript text |
| ↳ `timestamp` | string | Timestamp \(HH:MM:SS\) |
### `fathom_list_team_members`
List team members in your Fathom organization.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Fathom API Key |
| `teams` | string | No | Team name to filter by |
| `cursor` | string | No | Pagination cursor from a previous response |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `members` | array | List of team members |
| ↳ `name` | string | Team member name |
| ↳ `email` | string | Team member email |
| ↳ `created_at` | string | Date the member was added |
| `next_cursor` | string | Pagination cursor for next page |
### `fathom_list_teams`
List teams in your Fathom organization.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Fathom API Key |
| `cursor` | string | No | Pagination cursor from a previous response |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `teams` | array | List of teams |
| ↳ `name` | string | Team name |
| ↳ `created_at` | string | Date the team was created |
| `next_cursor` | string | Pagination cursor for next page |

View File

@@ -37,6 +37,7 @@
  "enrich",
  "evernote",
  "exa",
  "fathom",
  "file",
  "firecrawl",
  "fireflies",

View File

@@ -44,20 +44,24 @@ Search the web using Parallel AI. Provides comprehensive search results with int
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `objective` | string | Yes | The search objective or question to answer |
| `search_queries` | string | No | Optional comma-separated list of search queries to execute |
| `processor` | string | No | Processing method: base or pro \(default: base\) |
| `max_results` | number | No | Maximum number of results to return \(default: 5\) |
| `max_chars_per_result` | number | No | Maximum characters per result \(default: 1500\) |
| `search_queries` | string | No | Comma-separated list of search queries to execute |
| `mode` | string | No | Search mode: one-shot, agentic, or fast \(default: one-shot\) |
| `max_results` | number | No | Maximum number of results to return \(default: 10\) |
| `max_chars_per_result` | number | No | Maximum characters per result excerpt \(minimum: 1000\) |
| `include_domains` | string | No | Comma-separated list of domains to restrict search results to |
| `exclude_domains` | string | No | Comma-separated list of domains to exclude from search results |
| `apiKey` | string | Yes | Parallel AI API Key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `search_id` | string | Unique identifier for this search request |
| `results` | array | Search results with excerpts from relevant pages |
| ↳ `url` | string | The URL of the search result |
| ↳ `title` | string | The title of the search result |
| ↳ `excerpts` | array | Text excerpts from the page |
| ↳ `publish_date` | string | Publication date of the page \(YYYY-MM-DD\) |
| ↳ `excerpts` | array | LLM-optimized excerpts from the page |
### `parallel_extract`
@@ -68,31 +72,33 @@ Extract targeted information from specific URLs using Parallel AI. Processes pro
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `urls` | string | Yes | Comma-separated list of URLs to extract information from |
| `objective` | string | Yes | What information to extract from the provided URLs |
| `excerpts` | boolean | Yes | Include relevant excerpts from the content |
| `full_content` | boolean | Yes | Include full page content |
| `objective` | string | No | What information to extract from the provided URLs |
| `excerpts` | boolean | No | Include relevant excerpts from the content \(default: true\) |
| `full_content` | boolean | No | Include full page content as markdown \(default: false\) |
| `apiKey` | string | Yes | Parallel AI API Key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `extract_id` | string | Unique identifier for this extraction request |
| `results` | array | Extracted information from the provided URLs |
| ↳ `url` | string | The source URL |
| ↳ `title` | string | The title of the page |
| ↳ `content` | string | Extracted content |
| ↳ `excerpts` | array | Relevant text excerpts |
| ↳ `publish_date` | string | Publication date \(YYYY-MM-DD\) |
| ↳ `excerpts` | array | Relevant text excerpts in markdown |
| ↳ `full_content` | string | Full page content as markdown |
### `parallel_deep_research`
Conduct comprehensive deep research across the web using Parallel AI. Synthesizes information from multiple sources with citations. Can take up to 15 minutes to complete.
Conduct comprehensive deep research across the web using Parallel AI. Synthesizes information from multiple sources with citations. Can take up to 45 minutes to complete.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `input` | string | Yes | Research query or question (up to 15,000 characters) |
| `processor` | string | No | Compute level: base, lite, pro, ultra, ultra2x, ultra4x, ultra8x (default: base) |
| `processor` | string | No | Processing tier: pro, ultra, pro-fast, ultra-fast (default: pro) |
| `include_domains` | string | No | Comma-separated list of domains to restrict research to (source policy) |
| `exclude_domains` | string | No | Comma-separated list of domains to exclude from research (source policy) |
| `apiKey` | string | Yes | Parallel AI API Key |
@@ -101,17 +107,17 @@ Conduct comprehensive deep research across the web using Parallel AI. Synthesize
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `status` | string | Task status (completed, failed) |
| `status` | string | Task status (completed, failed, running) |
| `run_id` | string | Unique ID for this research task |
| `message` | string | Status message |
| `content` | object | Research results (structured based on output_schema) |
| `basis` | array | Citations and sources with reasoning and confidence levels |
| ↳ `field` | string | Output field name |
| ↳ `field` | string | Output field dot-notation path |
| ↳ `reasoning` | string | Explanation for the result |
| ↳ `citations` | array | Array of sources |
| ↳ `url` | string | Source URL |
| ↳ `title` | string | Source title |
| ↳ `excerpts` | array | Relevant excerpts from the source |
| ↳ `confidence` | string | Confidence level indicator |
| ↳ `confidence` | string | Confidence level (high, medium) |
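Because a deep research run can stay in `running` for up to 45 minutes, callers typically poll until a terminal status. A minimal sketch of that loop — the `fetchStatus` callback stands in for whatever call retrieves the run, and the interval/timeout defaults are illustrative, not mandated by the Parallel AI API:

```typescript
type RunStatus = 'running' | 'completed' | 'failed'

// Poll until the run leaves 'running' or the time budget is exhausted.
async function pollRun(
  fetchStatus: () => Promise<{ status: RunStatus }>,
  { intervalMs = 5000, timeoutMs = 45 * 60 * 1000 } = {}
): Promise<RunStatus> {
  const deadline = Date.now() + timeoutMs
  while (true) {
    const { status } = await fetchStatus()
    if (status !== 'running') return status
    if (Date.now() + intervalMs > deadline) return 'running' // budget spent; caller decides
    await new Promise((resolve) => setTimeout(resolve, intervalMs))
  }
}
```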

View File

@@ -590,6 +590,7 @@ List all users in a Slack workspace. Returns user profiles with names and avatar
| ↳ `name` | string | Username (handle) |
| ↳ `real_name` | string | Full real name |
| ↳ `display_name` | string | Display name shown in Slack |
| ↳ `email` | string | Email address (requires users:read.email scope) |
| ↳ `is_bot` | boolean | Whether the user is a bot |
| ↳ `is_admin` | boolean | Whether the user is a workspace admin |
| ↳ `is_owner` | boolean | Whether the user is the workspace owner |
@@ -629,6 +630,7 @@ Get detailed information about a specific Slack user by their user ID.
| ↳ `title` | string | Job title |
| ↳ `phone` | string | Phone number |
| ↳ `skype` | string | Skype handle |
| ↳ `email` | string | Email address (requires users:read.email scope) |
| ↳ `is_bot` | boolean | Whether the user is a bot |
| ↳ `is_admin` | boolean | Whether the user is a workspace admin |
| ↳ `is_owner` | boolean | Whether the user is the workspace owner |
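The new `email` field is only populated when the app holds the `users:read.email` scope; when Slack omits it, an empty string keeps the field's type non-optional. A sketch of that mapping — the profile interface here is an abbreviated stand-in, not the full Slack `users.info` response shape:

```typescript
interface SlackProfileSubset {
  real_name?: string
  display_name?: string
  email?: string // absent without the users:read.email scope
}

// Map a (partial) Slack profile to the tool's output fields, falling back
// to '' so email stays a plain string rather than string | undefined.
function mapProfile(profile: SlackProfileSubset) {
  return {
    real_name: profile.real_name ?? '',
    display_name: profile.display_name ?? '',
    email: profile.email ?? '',
  }
}
```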

View File

@@ -19,7 +19,6 @@ import { validateUrlWithDNS } from '@/lib/core/security/input-validation.server'
import { SSE_HEADERS } from '@/lib/core/utils/sse'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { markExecutionCancelled } from '@/lib/execution/cancellation'
import { decrementSSEConnections, incrementSSEConnections } from '@/lib/monitoring/sse-connections'
import { checkWorkspaceAccess } from '@/lib/workspaces/permissions/utils'
import { getWorkspaceBilledAccountUserId } from '@/lib/workspaces/utils'
import {
@@ -631,11 +630,9 @@ async function handleMessageStream(
}
const encoder = new TextEncoder()
let messageStreamDecremented = false
const stream = new ReadableStream({
async start(controller) {
incrementSSEConnections('a2a-message')
const sendEvent = (event: string, data: unknown) => {
try {
const jsonRpcResponse = {
@@ -845,19 +842,10 @@ async function handleMessageStream(
})
} finally {
await releaseLock(lockKey, lockValue)
if (!messageStreamDecremented) {
messageStreamDecremented = true
decrementSSEConnections('a2a-message')
}
controller.close()
}
},
cancel() {
if (!messageStreamDecremented) {
messageStreamDecremented = true
decrementSSEConnections('a2a-message')
}
},
cancel() {},
})
return new NextResponse(stream, {
@@ -1042,22 +1030,16 @@ async function handleTaskResubscribe(
{ once: true }
)
let sseDecremented = false
const cleanup = () => {
isCancelled = true
if (pollTimeoutId) {
clearTimeout(pollTimeoutId)
pollTimeoutId = null
}
if (!sseDecremented) {
sseDecremented = true
decrementSSEConnections('a2a-resubscribe')
}
}
const stream = new ReadableStream({
async start(controller) {
incrementSSEConnections('a2a-resubscribe')
const sendEvent = (event: string, data: unknown): boolean => {
if (isCancelled || abortSignal.aborted) return false
try {

View File

@@ -14,7 +14,6 @@ import { getSession } from '@/lib/auth'
import { SSE_HEADERS } from '@/lib/core/utils/sse'
import { mcpConnectionManager } from '@/lib/mcp/connection-manager'
import { mcpPubSub } from '@/lib/mcp/pubsub'
import { decrementSSEConnections, incrementSSEConnections } from '@/lib/monitoring/sse-connections'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
const logger = createLogger('McpEventsSSE')
@@ -50,14 +49,11 @@ export async function GET(request: NextRequest) {
for (const unsub of unsubscribers) {
unsub()
}
decrementSSEConnections('mcp-events')
logger.info(`SSE connection closed for workspace ${workspaceId}`)
}
const stream = new ReadableStream({
start(controller) {
incrementSSEConnections('mcp-events')
const send = (eventName: string, data: Record<string, unknown>) => {
if (cleaned) return
try {

View File

@@ -192,7 +192,8 @@ export const POST = withMcpAuth<{ id: string }>('read')(
)
} catch (error) {
connectionStatus = 'error'
lastError = error instanceof Error ? error.message : 'Connection test failed'
lastError =
error instanceof Error ? error.message.split('\n')[0].slice(0, 200) : 'Connection failed'
logger.warn(`[${requestId}] Failed to connect to server ${serverId}:`, error)
}

View File

@@ -41,6 +41,20 @@ interface TestConnectionResult {
warnings?: string[]
}
/**
* Extracts a user-friendly error message from connection errors.
* Keeps diagnostic info (timeout, DNS, HTTP status) but strips
* verbose internals (Zod details, full response bodies, stack traces).
*/
function sanitizeConnectionError(error: unknown): string {
if (!(error instanceof Error)) {
return 'Unknown connection error'
}
const firstLine = error.message.split('\n')[0]
return firstLine.length > 200 ? `${firstLine.slice(0, 200)}...` : firstLine
}
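The helper above, reproduced standalone to show the truncation behavior: a multi-line Zod failure or stack trace collapses to its first line, and anything past 200 characters is cut with an ellipsis.

```typescript
// Same logic as sanitizeConnectionError in the diff: first line only,
// capped at 200 characters, with a fallback for non-Error throwables.
function sanitizeConnectionError(error: unknown): string {
  if (!(error instanceof Error)) {
    return 'Unknown connection error'
  }
  const firstLine = error.message.split('\n')[0]
  return firstLine.length > 200 ? `${firstLine.slice(0, 200)}...` : firstLine
}
```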
/**
* POST - Test connection to an MCP server before registering it
*/
@@ -137,8 +151,7 @@ export const POST = withMcpAuth('write')(
} catch (toolError) {
logger.warn(`[${requestId}] Connection established but could not list tools:`, toolError)
result.success = false
const errorMessage = toolError instanceof Error ? toolError.message : 'Unknown error'
result.error = `Connection established but could not list tools: ${errorMessage}`
result.error = 'Connection established but could not list tools'
result.warnings = result.warnings || []
result.warnings.push(
'Server connected but tool listing failed - connection may be incomplete'
@@ -163,11 +176,7 @@ export const POST = withMcpAuth('write')(
logger.warn(`[${requestId}] MCP server test failed:`, error)
result.success = false
if (error instanceof Error) {
result.error = error.message
} else {
result.error = 'Unknown connection error'
}
result.error = sanitizeConnectionError(error)
} finally {
if (client) {
try {

View File

@@ -89,11 +89,12 @@ export const POST = withMcpAuth('read')(
tool = tools.find((t) => t.name === toolName) ?? null
if (!tool) {
logger.warn(`[${requestId}] Tool ${toolName} not found on server ${serverId}`, {
availableTools: tools.map((t) => t.name),
})
return createMcpErrorResponse(
new Error(
`Tool ${toolName} not found on server ${serverId}. Available tools: ${tools.map((t) => t.name).join(', ')}`
),
'Tool not found',
new Error('Tool not found'),
'Tool not found on the specified server',
404
)
}

View File

@@ -76,7 +76,7 @@ export async function POST(request: NextRequest) {
return NextResponse.json(
{
success: false,
error: error instanceof Error ? error.message : 'Failed to cancel task',
error: 'Failed to cancel task',
},
{ status: 500 }
)
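The same substitution repeats across the routes below: the detailed error still reaches the server log, but the HTTP body carries only a fixed message so internals (hosts, ports, stack frames) are not leaked to clients. A sketch of the pattern, with the `log` parameter standing in for the route's logger:

```typescript
// Log the detailed error server-side; hand the caller a fixed,
// client-safe message for the JSON response body.
function clientSafeError(
  error: unknown,
  generic: string,
  log: (detail: string) => void
): string {
  log(error instanceof Error ? error.message : String(error))
  return generic
}
```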

View File

@@ -86,7 +86,7 @@ export async function POST(request: NextRequest) {
return NextResponse.json(
{
success: false,
error: error instanceof Error ? error.message : 'Failed to delete push notification',
error: 'Failed to delete push notification',
},
{ status: 500 }
)

View File

@@ -84,7 +84,7 @@ export async function POST(request: NextRequest) {
return NextResponse.json(
{
success: false,
error: error instanceof Error ? error.message : 'Failed to fetch Agent Card',
error: 'Failed to fetch Agent Card',
},
{ status: 500 }
)

View File

@@ -107,7 +107,7 @@ export async function POST(request: NextRequest) {
return NextResponse.json(
{
success: false,
error: error instanceof Error ? error.message : 'Failed to get push notification',
error: 'Failed to get push notification',
},
{ status: 500 }
)

View File

@@ -87,7 +87,7 @@ export async function POST(request: NextRequest) {
return NextResponse.json(
{
success: false,
error: error instanceof Error ? error.message : 'Failed to get task',
error: 'Failed to get task',
},
{ status: 500 }
)

View File

@@ -111,7 +111,7 @@ export async function POST(request: NextRequest) {
return NextResponse.json(
{
success: false,
error: error instanceof Error ? error.message : 'Failed to resubscribe',
error: 'Failed to resubscribe',
},
{ status: 500 }
)

View File

@@ -70,7 +70,7 @@ export async function POST(request: NextRequest) {
return NextResponse.json(
{
success: false,
error: `Failed to connect to agent: ${clientError instanceof Error ? clientError.message : 'Unknown error'}`,
error: 'Failed to connect to agent',
},
{ status: 502 }
)
@@ -158,7 +158,7 @@ export async function POST(request: NextRequest) {
return NextResponse.json(
{
success: false,
error: `Failed to send message: ${sendError instanceof Error ? sendError.message : 'Unknown error'}`,
error: 'Failed to send message to agent',
},
{ status: 502 }
)
@@ -218,7 +218,7 @@ export async function POST(request: NextRequest) {
return NextResponse.json(
{
success: false,
error: error instanceof Error ? error.message : 'Internal server error',
error: 'Internal server error',
},
{ status: 500 }
)

View File

@@ -98,7 +98,7 @@ export async function POST(request: NextRequest) {
return NextResponse.json(
{
success: false,
error: error instanceof Error ? error.message : 'Failed to set push notification',
error: 'Failed to set push notification',
},
{ status: 500 }
)

View File

@@ -1,7 +1,13 @@
import { MongoClient } from 'mongodb'
import { validateDatabaseHost } from '@/lib/core/security/input-validation.server'
import type { MongoDBCollectionInfo, MongoDBConnectionConfig } from '@/tools/mongodb/types'
export async function createMongoDBConnection(config: MongoDBConnectionConfig) {
const hostValidation = await validateDatabaseHost(config.host, 'host')
if (!hostValidation.isValid) {
throw new Error(hostValidation.error)
}
const credentials =
config.username && config.password
? `${encodeURIComponent(config.username)}:${encodeURIComponent(config.password)}@`

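The credentials fragment above percent-encodes both parts before embedding them in the connection URI; standalone, that step looks like:

```typescript
// Build the 'user:pass@' URI fragment, or '' when auth is not configured.
// encodeURIComponent keeps characters like '@' and ':' in the password
// from corrupting the mongodb:// URI.
function mongoCredentials(username?: string, password?: string): string {
  return username && password
    ? `${encodeURIComponent(username)}:${encodeURIComponent(password)}@`
    : ''
}
```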
View File

@@ -1,4 +1,5 @@
import mysql from 'mysql2/promise'
import { validateDatabaseHost } from '@/lib/core/security/input-validation.server'
export interface MySQLConnectionConfig {
host: string
@@ -10,6 +11,11 @@ export interface MySQLConnectionConfig {
}
export async function createMySQLConnection(config: MySQLConnectionConfig) {
const hostValidation = await validateDatabaseHost(config.host, 'host')
if (!hostValidation.isValid) {
throw new Error(hostValidation.error)
}
const connectionConfig: mysql.ConnectionOptions = {
host: config.host,
port: config.port,

View File

@@ -1,7 +1,13 @@
import neo4j from 'neo4j-driver'
import { validateDatabaseHost } from '@/lib/core/security/input-validation.server'
import type { Neo4jConnectionConfig } from '@/tools/neo4j/types'
export async function createNeo4jDriver(config: Neo4jConnectionConfig) {
const hostValidation = await validateDatabaseHost(config.host, 'host')
if (!hostValidation.isValid) {
throw new Error(hostValidation.error)
}
const isAuraHost =
config.host === 'databases.neo4j.io' || config.host.endsWith('.databases.neo4j.io')

View File

@@ -35,7 +35,7 @@ export async function POST(request: NextRequest) {
`[${requestId}] Deleting data from ${params.table} on ${params.host}:${params.port}/${params.database}`
)
const sql = createPostgresConnection({
const sql = await createPostgresConnection({
host: params.host,
port: params.port,
database: params.database,

View File

@@ -47,7 +47,7 @@ export async function POST(request: NextRequest) {
)
}
const sql = createPostgresConnection({
const sql = await createPostgresConnection({
host: params.host,
port: params.port,
database: params.database,

View File

@@ -57,7 +57,7 @@ export async function POST(request: NextRequest) {
`[${requestId}] Inserting data into ${params.table} on ${params.host}:${params.port}/${params.database}`
)
const sql = createPostgresConnection({
const sql = await createPostgresConnection({
host: params.host,
port: params.port,
database: params.database,

View File

@@ -34,7 +34,7 @@ export async function POST(request: NextRequest) {
`[${requestId}] Introspecting PostgreSQL schema on ${params.host}:${params.port}/${params.database}`
)
const sql = createPostgresConnection({
const sql = await createPostgresConnection({
host: params.host,
port: params.port,
database: params.database,

View File

@@ -34,7 +34,7 @@ export async function POST(request: NextRequest) {
`[${requestId}] Executing PostgreSQL query on ${params.host}:${params.port}/${params.database}`
)
const sql = createPostgresConnection({
const sql = await createPostgresConnection({
host: params.host,
port: params.port,
database: params.database,

View File

@@ -54,7 +54,7 @@ export async function POST(request: NextRequest) {
`[${requestId}] Updating data in ${params.table} on ${params.host}:${params.port}/${params.database}`
)
const sql = createPostgresConnection({
const sql = await createPostgresConnection({
host: params.host,
port: params.port,
database: params.database,

View File

@@ -1,7 +1,13 @@
import postgres from 'postgres'
import { validateDatabaseHost } from '@/lib/core/security/input-validation.server'
import type { PostgresConnectionConfig } from '@/tools/postgresql/types'
export function createPostgresConnection(config: PostgresConnectionConfig) {
export async function createPostgresConnection(config: PostgresConnectionConfig) {
const hostValidation = await validateDatabaseHost(config.host, 'host')
if (!hostValidation.isValid) {
throw new Error(hostValidation.error)
}
const sslConfig =
config.ssl === 'disabled'
? false

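`validateDatabaseHost` itself is not shown in this diff; a plausible sketch of what such a check does (resolve the host, reject loopback/private/link-local ranges, IPv4 only for brevity) — an illustrative stand-in, not the repo's implementation:

```typescript
import { lookup } from 'node:dns/promises'
import { isIP } from 'node:net'

// Reject hosts that are, or resolve to, private/loopback/link-local IPv4.
function isPrivateIPv4(ip: string): boolean {
  const [a, b] = ip.split('.').map(Number)
  return (
    a === 10 ||
    a === 127 ||
    (a === 169 && b === 254) ||
    (a === 172 && b >= 16 && b <= 31) ||
    (a === 192 && b === 168)
  )
}

async function validateHostSketch(host: string): Promise<{ isValid: boolean; error?: string }> {
  const ip = isIP(host) ? host : (await lookup(host)).address
  if (isIP(ip) === 4 && isPrivateIPv4(ip)) {
    return { isValid: false, error: `Host ${host} resolves to a private address` }
  }
  return { isValid: true }
}
```

The DNS lookup is what makes the check async — which is why `createPostgresConnection` became `async` here and every call site above gained an `await`.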
View File

@@ -3,6 +3,7 @@ import Redis from 'ioredis'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { validateDatabaseHost } from '@/lib/core/security/input-validation.server'
const logger = createLogger('RedisAPI')
@@ -24,6 +25,16 @@ export async function POST(request: NextRequest) {
const body = await request.json()
const { url, command, args } = RequestSchema.parse(body)
const parsedUrl = new URL(url)
const hostname =
parsedUrl.hostname.startsWith('[') && parsedUrl.hostname.endsWith(']')
? parsedUrl.hostname.slice(1, -1)
: parsedUrl.hostname
const hostValidation = await validateDatabaseHost(hostname, 'host')
if (!hostValidation.isValid) {
return NextResponse.json({ error: hostValidation.error }, { status: 400 })
}
client = new Redis(url, {
connectTimeout: 10000,
commandTimeout: 10000,

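The bracket handling above exists because the WHATWG `URL` parser reports IPv6 hostnames with brackets (`[::1]`), while host validators expect the bare address. Standalone:

```typescript
// Extract a validator-friendly hostname from a redis:// URL, stripping
// the brackets URL adds around IPv6 literals.
function hostnameForValidation(rawUrl: string): string {
  const { hostname } = new URL(rawUrl)
  return hostname.startsWith('[') && hostname.endsWith(']')
    ? hostname.slice(1, -1)
    : hostname
}
```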
View File

@@ -10,7 +10,6 @@ import { checkAndBillOverageThreshold } from '@/lib/billing/threshold-billing'
import { env } from '@/lib/core/config/env'
import { getCostMultiplier, isBillingEnabled } from '@/lib/core/config/feature-flags'
import { generateRequestId } from '@/lib/core/utils/request'
import { decrementSSEConnections, incrementSSEConnections } from '@/lib/monitoring/sse-connections'
import { enrichTableSchema } from '@/lib/table/llm/wand'
import { verifyWorkspaceMembership } from '@/app/api/workflows/utils'
import { extractResponseText, parseResponsesUsage } from '@/providers/openai/utils'
@@ -331,14 +330,10 @@ export async function POST(req: NextRequest) {
const encoder = new TextEncoder()
const decoder = new TextDecoder()
let wandStreamClosed = false
const readable = new ReadableStream({
async start(controller) {
incrementSSEConnections('wand')
const reader = response.body?.getReader()
if (!reader) {
wandStreamClosed = true
decrementSSEConnections('wand')
controller.close()
return
}
@@ -483,18 +478,9 @@ export async function POST(req: NextRequest) {
controller.close()
} finally {
reader.releaseLock()
if (!wandStreamClosed) {
wandStreamClosed = true
decrementSSEConnections('wand')
}
}
},
cancel() {
if (!wandStreamClosed) {
wandStreamClosed = true
decrementSSEConnections('wand')
}
},
cancel() {},
})
return new Response(readable, {

View File

@@ -367,9 +367,7 @@ export async function POST(request: NextRequest) {
)
}
// Configure each new webhook (for providers that need configuration)
const pollingProviders = ['gmail', 'outlook']
const needsConfiguration = pollingProviders.includes(provider)
const needsConfiguration = provider === 'gmail' || provider === 'outlook'
if (needsConfiguration) {
const configureFunc =

View File

@@ -324,7 +324,9 @@ vi.mock('@/lib/webhooks/processor', () => ({
return null
}
),
checkWebhookPreprocessing: vi.fn().mockResolvedValue(null),
checkWebhookPreprocessing: vi
.fn()
.mockResolvedValue({ error: null, actorUserId: 'test-user-id' }),
formatProviderErrorResponse: vi.fn().mockImplementation((_webhook, error, status) => {
const { NextResponse } = require('next/server')
return NextResponse.json({ error }, { status })

View File

@@ -4,7 +4,6 @@ import { generateRequestId } from '@/lib/core/utils/request'
import {
checkWebhookPreprocessing,
findAllWebhooksForPath,
formatProviderErrorResponse,
handlePreDeploymentVerification,
handleProviderChallenges,
handleProviderReachabilityTest,
@@ -82,7 +81,6 @@ export async function POST(
requestId
)
if (authError) {
// For multi-webhook, log and continue to next webhook
if (webhooksForPath.length > 1) {
logger.warn(`[${requestId}] Auth failed for webhook ${foundWebhook.id}, continuing to next`)
continue
@@ -92,39 +90,18 @@ export async function POST(
const reachabilityResponse = handleProviderReachabilityTest(foundWebhook, body, requestId)
if (reachabilityResponse) {
// Reachability test should return immediately for the first webhook
return reachabilityResponse
}
let preprocessError: NextResponse | null = null
try {
preprocessError = await checkWebhookPreprocessing(foundWorkflow, foundWebhook, requestId)
if (preprocessError) {
if (webhooksForPath.length > 1) {
logger.warn(
`[${requestId}] Preprocessing failed for webhook ${foundWebhook.id}, continuing to next`
)
continue
}
return preprocessError
}
} catch (error) {
logger.error(`[${requestId}] Unexpected error during webhook preprocessing`, {
error: error instanceof Error ? error.message : String(error),
stack: error instanceof Error ? error.stack : undefined,
webhookId: foundWebhook.id,
workflowId: foundWorkflow.id,
})
const preprocessResult = await checkWebhookPreprocessing(foundWorkflow, foundWebhook, requestId)
if (preprocessResult.error) {
if (webhooksForPath.length > 1) {
logger.warn(
`[${requestId}] Preprocessing failed for webhook ${foundWebhook.id}, continuing to next`
)
continue
}
return formatProviderErrorResponse(
foundWebhook,
'An unexpected error occurred during preprocessing',
500
)
return preprocessResult.error
}
if (foundWebhook.blockId) {
@@ -152,6 +129,7 @@ export async function POST(
const response = await queueWebhookExecution(foundWebhook, foundWorkflow, body, request, {
requestId,
path,
actorUserId: preprocessResult.actorUserId,
})
responses.push(response)
}
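As used here, `checkWebhookPreprocessing` now returns a result object rather than a bare error response. A sketch of the shape implied by the call sites — field names as seen above, though the exact type lives in `@/lib/webhooks/processor`:

```typescript
// Inferred from usage: either an error response to return (or skip on),
// or the user to attribute the execution to.
interface PreprocessResultSketch {
  error: { status: number } | null // NextResponse in the app
  actorUserId?: string
}

// With multiple webhooks on one path, a failing webhook is skipped
// rather than failing the whole request.
function shouldSkipWebhook(result: PreprocessResultSketch, webhookCount: number): boolean {
  return result.error !== null && webhookCount > 1
}
```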

View File

@@ -22,7 +22,6 @@ import { createExecutionEventWriter, setExecutionMeta } from '@/lib/execution/ev
import { processInputFileFields } from '@/lib/execution/files'
import { preprocessExecution } from '@/lib/execution/preprocessing'
import { LoggingSession } from '@/lib/logs/execution/logging-session'
import { decrementSSEConnections, incrementSSEConnections } from '@/lib/monitoring/sse-connections'
import {
cleanupExecutionBase64Cache,
hydrateUserFilesWithBase64,
@@ -764,7 +763,6 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
const encoder = new TextEncoder()
const timeoutController = createTimeoutAbortController(preprocessResult.executionTimeout?.sync)
let isStreamClosed = false
let sseDecremented = false
const eventWriter = createExecutionEventWriter(executionId)
setExecutionMeta(executionId, {
@@ -775,7 +773,6 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
const stream = new ReadableStream<Uint8Array>({
async start(controller) {
incrementSSEConnections('workflow-execute')
let finalMetaStatus: 'complete' | 'error' | 'cancelled' | null = null
const sendEvent = (event: ExecutionEvent) => {
@@ -1159,10 +1156,6 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
if (executionId) {
await cleanupExecutionBase64Cache(executionId)
}
if (!sseDecremented) {
sseDecremented = true
decrementSSEConnections('workflow-execute')
}
if (!isStreamClosed) {
try {
controller.enqueue(encoder.encode('data: [DONE]\n\n'))
@@ -1174,10 +1167,6 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
cancel() {
isStreamClosed = true
logger.info(`[${requestId}] Client disconnected from SSE stream`)
if (!sseDecremented) {
sseDecremented = true
decrementSSEConnections('workflow-execute')
}
},
})

View File

@@ -7,7 +7,6 @@ import {
getExecutionMeta,
readExecutionEvents,
} from '@/lib/execution/event-buffer'
import { decrementSSEConnections, incrementSSEConnections } from '@/lib/monitoring/sse-connections'
import { formatSSEEvent } from '@/lib/workflows/executor/execution-events'
import { authorizeWorkflowByWorkspacePermission } from '@/lib/workflows/utils'
@@ -74,10 +73,8 @@ export async function GET(
let closed = false
let sseDecremented = false
const stream = new ReadableStream<Uint8Array>({
async start(controller) {
incrementSSEConnections('execution-stream-reconnect')
let lastEventId = fromEventId
const pollDeadline = Date.now() + MAX_POLL_DURATION_MS
@@ -145,20 +142,11 @@ export async function GET(
controller.close()
} catch {}
}
} finally {
if (!sseDecremented) {
sseDecremented = true
decrementSSEConnections('execution-stream-reconnect')
}
}
},
cancel() {
closed = true
logger.info('Client disconnected from reconnection stream', { executionId })
if (!sseDecremented) {
sseDecremented = true
decrementSSEConnections('execution-stream-reconnect')
}
},
})

View File

@@ -12,6 +12,7 @@ import {
} from '@/components/emails'
import { getSession } from '@/lib/auth'
import { decryptSecret } from '@/lib/core/security/encryption'
import { secureFetchWithValidation } from '@/lib/core/security/input-validation.server'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { sendEmail } from '@/lib/messaging/email/mailer'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
@@ -135,18 +136,18 @@ async function testWebhook(subscription: typeof workspaceNotificationSubscriptio
headers['sim-signature'] = `t=${timestamp},v1=${signature}`
}
const controller = new AbortController()
const timeoutId = setTimeout(() => controller.abort(), 10000)
try {
const response = await fetch(webhookConfig.url, {
method: 'POST',
headers,
body,
signal: controller.signal,
})
clearTimeout(timeoutId)
const response = await secureFetchWithValidation(
webhookConfig.url,
{
method: 'POST',
headers,
body,
timeout: 10000,
allowHttp: true,
},
'webhookUrl'
)
const responseBody = await response.text().catch(() => '')
return {
@@ -157,12 +158,10 @@ async function testWebhook(subscription: typeof workspaceNotificationSubscriptio
timestamp: new Date().toISOString(),
}
} catch (error: unknown) {
clearTimeout(timeoutId)
const err = error as Error & { name?: string }
if (err.name === 'AbortError') {
return { success: false, error: 'Request timeout after 10 seconds' }
}
return { success: false, error: err.message }
logger.warn('Webhook test failed', {
error: error instanceof Error ? error.message : String(error),
})
return { success: false, error: 'Failed to deliver webhook' }
}
}
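The `sim-signature` header assembled earlier in this function (`t=${timestamp},v1=${signature}`) follows the familiar timestamped-HMAC scheme. A sketch, assuming HMAC-SHA256 over `timestamp.body` — the actual signing payload is not shown in this diff, so treat the layout as illustrative:

```typescript
import { createHmac } from 'node:crypto'

// Sign 'timestamp.body' with HMAC-SHA256 and format the header value.
// The payload layout here is an assumption for illustration.
function buildSignatureHeader(secret: string, body: string, timestamp: number): string {
  const signature = createHmac('sha256', secret).update(`${timestamp}.${body}`).digest('hex')
  return `t=${timestamp},v1=${signature}`
}
```

Pinning the timestamp into the signed payload lets the receiver reject replayed deliveries whose `t` is stale.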
@@ -268,13 +267,15 @@ async function testSlack(
return {
success: result.ok,
error: result.error,
error: result.ok ? undefined : `Slack error: ${result.error || 'unknown'}`,
channel: result.channel,
timestamp: new Date().toISOString(),
}
} catch (error: unknown) {
const err = error as Error
return { success: false, error: err.message }
logger.warn('Slack test notification failed', {
error: error instanceof Error ? error.message : String(error),
})
return { success: false, error: 'Failed to send Slack notification' }
}
}

View File

@@ -12,7 +12,7 @@ interface UseShiftSelectionLockResult {
/** Computed ReactFlow props based on current selection state */
selectionProps: {
selectionOnDrag: boolean
panOnDrag: [number, number] | false
panOnDrag: number[]
selectionKeyCode: string | null
}
}
@@ -55,7 +55,7 @@ export function useShiftSelectionLock({
const selectionProps = {
selectionOnDrag: !isHandMode || isShiftSelecting,
panOnDrag: (isHandMode && !isShiftSelecting ? [0, 1] : false) as [number, number] | false,
panOnDrag: isHandMode && !isShiftSelecting ? [0, 1] : [1],
selectionKeyCode: isShiftSelecting ? null : 'Shift',
}
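The prop change above works because ReactFlow's `panOnDrag` accepts an array of mouse-button indices (0 = left, 1 = middle) as well as a boolean; returning `[1]` instead of `false` in cursor mode is what enables middle-button panning there. The decision table, standalone:

```typescript
// Which mouse buttons may pan the canvas (0 = left, 1 = middle):
// hand mode pans with either button; cursor mode keeps middle-button pan.
function panButtons(isHandMode: boolean, isShiftSelecting: boolean): number[] {
  return isHandMode && !isShiftSelecting ? [0, 1] : [1]
}
```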

View File

@@ -1,18 +1,13 @@
import { db } from '@sim/db'
import { webhook, workflow as workflowTable } from '@sim/db/schema'
import { account, webhook } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { task } from '@trigger.dev/sdk'
import { eq } from 'drizzle-orm'
import { v4 as uuidv4 } from 'uuid'
import { getHighestPrioritySubscription } from '@/lib/billing'
import {
createTimeoutAbortController,
getExecutionTimeout,
getTimeoutErrorMessage,
} from '@/lib/core/execution-limits'
import { createTimeoutAbortController, getTimeoutErrorMessage } from '@/lib/core/execution-limits'
import { IdempotencyService, webhookIdempotency } from '@/lib/core/idempotency'
import type { SubscriptionPlan } from '@/lib/core/rate-limiter/types'
import { processExecutionFiles } from '@/lib/execution/files'
import { preprocessExecution } from '@/lib/execution/preprocessing'
import { LoggingSession } from '@/lib/logs/execution/logging-session'
import { buildTraceSpans } from '@/lib/logs/execution/trace-spans/trace-spans'
import { WebhookAttachmentProcessor } from '@/lib/webhooks/attachment-processor'
@@ -20,7 +15,7 @@ import { fetchAndProcessAirtablePayloads, formatWebhookInput } from '@/lib/webho
import { executeWorkflowCore } from '@/lib/workflows/executor/execution-core'
import { PauseResumeManager } from '@/lib/workflows/executor/human-in-the-loop-manager'
import { loadDeployedWorkflowState } from '@/lib/workflows/persistence/utils'
import { getWorkflowById } from '@/lib/workflows/utils'
import { resolveOAuthAccountId } from '@/app/api/auth/oauth/utils'
import { getBlock } from '@/blocks'
import { ExecutionSnapshot } from '@/executor/execution/snapshot'
import type { ExecutionMetadata } from '@/executor/execution/types'
@@ -109,8 +104,8 @@ export type WebhookExecutionPayload = {
headers: Record<string, string>
path: string
blockId?: string
workspaceId?: string
credentialId?: string
credentialAccountUserId?: string
}
export async function executeWebhookJob(payload: WebhookExecutionPayload) {
@@ -143,6 +138,22 @@ export async function executeWebhookJob(payload: WebhookExecutionPayload) {
)
}
/**
* Resolve the account userId for a credential
*/
async function resolveCredentialAccountUserId(credentialId: string): Promise<string | undefined> {
const resolved = await resolveOAuthAccountId(credentialId)
if (!resolved) {
return undefined
}
const [credentialRecord] = await db
.select({ userId: account.userId })
.from(account)
.where(eq(account.id, resolved.accountId))
.limit(1)
return credentialRecord?.userId
}
async function executeWebhookJobInternal(
payload: WebhookExecutionPayload,
executionId: string,
@@ -155,17 +166,56 @@ async function executeWebhookJobInternal(
requestId
)
const userSubscription = await getHighestPrioritySubscription(payload.userId)
const asyncTimeout = getExecutionTimeout(
userSubscription?.plan as SubscriptionPlan | undefined,
'async'
)
// Resolve workflow record, billing actor, subscription, and timeout
const preprocessResult = await preprocessExecution({
workflowId: payload.workflowId,
userId: payload.userId,
triggerType: 'webhook',
executionId,
requestId,
checkRateLimit: false,
checkDeployment: false,
skipUsageLimits: true,
workspaceId: payload.workspaceId,
loggingSession,
})
if (!preprocessResult.success) {
throw new Error(preprocessResult.error?.message || 'Preprocessing failed in background job')
}
const { workflowRecord, executionTimeout } = preprocessResult
if (!workflowRecord) {
throw new Error(`Workflow ${payload.workflowId} not found during preprocessing`)
}
const workspaceId = workflowRecord.workspaceId
if (!workspaceId) {
throw new Error(`Workflow ${payload.workflowId} has no associated workspace`)
}
const workflowVariables = (workflowRecord.variables as Record<string, any>) || {}
const asyncTimeout = executionTimeout?.async ?? 120_000
const timeoutController = createTimeoutAbortController(asyncTimeout)
let deploymentVersionId: string | undefined
try {
const workflowData = await loadDeployedWorkflowState(payload.workflowId)
// Parallelize workflow state, webhook record, and credential resolution
const [workflowData, webhookRows, resolvedCredentialUserId] = await Promise.all([
loadDeployedWorkflowState(payload.workflowId, workspaceId),
db.select().from(webhook).where(eq(webhook.id, payload.webhookId)).limit(1),
payload.credentialId
? resolveCredentialAccountUserId(payload.credentialId)
: Promise.resolve(undefined),
])
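The `Promise.all` above runs three independent lookups concurrently, substituting an already-resolved `undefined` when the optional credential is absent. The general shape of that pattern:

```typescript
// Run a required and an optional async lookup concurrently; the optional
// slot resolves to undefined immediately when no loader is supplied.
async function loadPair<A, B>(
  loadRequired: () => Promise<A>,
  loadOptional?: () => Promise<B>
): Promise<[A, B | undefined]> {
  return Promise.all([
    loadRequired(),
    loadOptional ? loadOptional() : Promise.resolve(undefined),
  ])
}
```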
const credentialAccountUserId = resolvedCredentialUserId
if (payload.credentialId && !credentialAccountUserId) {
logger.warn(
`[${requestId}] Failed to resolve credential account for credential ${payload.credentialId}`
)
}
if (!workflowData) {
throw new Error(
'Workflow state not found. The workflow may not be deployed or the deployment data may be corrupted.'
@@ -178,28 +228,11 @@ async function executeWebhookJobInternal(
? (workflowData.deploymentVersionId as string)
: undefined
const wfRows = await db
.select({ workspaceId: workflowTable.workspaceId, variables: workflowTable.variables })
.from(workflowTable)
.where(eq(workflowTable.id, payload.workflowId))
.limit(1)
const workspaceId = wfRows[0]?.workspaceId
if (!workspaceId) {
throw new Error(`Workflow ${payload.workflowId} has no associated workspace`)
}
const workflowVariables = (wfRows[0]?.variables as Record<string, any>) || {}
// Handle special Airtable case
if (payload.provider === 'airtable') {
logger.info(`[${requestId}] Processing Airtable webhook via fetchAndProcessAirtablePayloads`)
// Load the actual webhook record from database to get providerConfig
const [webhookRecord] = await db
.select()
.from(webhook)
.where(eq(webhook.id, payload.webhookId))
.limit(1)
const webhookRecord = webhookRows[0]
if (!webhookRecord) {
throw new Error(`Webhook record not found: ${payload.webhookId}`)
}
@@ -210,29 +243,20 @@ async function executeWebhookJobInternal(
providerConfig: webhookRecord.providerConfig,
}
// Create a mock workflow object for Airtable processing
const mockWorkflow = {
id: payload.workflowId,
userId: payload.userId,
}
// Get the processed Airtable input
const airtableInput = await fetchAndProcessAirtablePayloads(
webhookData,
mockWorkflow,
requestId
)
// If we got input (changes), execute the workflow like other providers
if (airtableInput) {
logger.info(`[${requestId}] Executing workflow with Airtable changes`)
- // Get workflow for core execution
- const workflow = await getWorkflowById(payload.workflowId)
- if (!workflow) {
-   throw new Error(`Workflow ${payload.workflowId} not found`)
- }
const metadata: ExecutionMetadata = {
requestId,
executionId,
@@ -240,13 +264,13 @@ async function executeWebhookJobInternal(
workspaceId,
userId: payload.userId,
sessionUserId: undefined,
- workflowUserId: workflow.userId,
+ workflowUserId: workflowRecord.userId,
triggerType: payload.provider || 'webhook',
triggerBlockId: payload.blockId,
useDraftState: false,
startTime: new Date().toISOString(),
isClientSession: false,
- credentialAccountUserId: payload.credentialAccountUserId,
+ credentialAccountUserId,
workflowStateOverride: {
blocks,
edges,
@@ -258,7 +282,7 @@ async function executeWebhookJobInternal(
const snapshot = new ExecutionSnapshot(
metadata,
- workflow,
+ workflowRecord,
airtableInput,
workflowVariables,
[]
@@ -329,7 +353,6 @@ async function executeWebhookJobInternal(
// No changes to process
logger.info(`[${requestId}] No Airtable changes to process`)
// Start logging session so the complete call has a log entry to update
await loggingSession.safeStart({
userId: payload.userId,
workspaceId,
@@ -357,13 +380,6 @@ async function executeWebhookJobInternal(
}
// Format input for standard webhooks
- // Load the actual webhook to get providerConfig (needed for Teams credentialId)
- const webhookRows = await db
-   .select()
-   .from(webhook)
-   .where(eq(webhook.id, payload.webhookId))
-   .limit(1)
const actualWebhook =
webhookRows.length > 0
? webhookRows[0]
@@ -386,7 +402,6 @@ async function executeWebhookJobInternal(
if (!input && payload.provider === 'whatsapp') {
logger.info(`[${requestId}] No messages in WhatsApp payload, skipping execution`)
// Start logging session so the complete call has a log entry to update
await loggingSession.safeStart({
userId: payload.userId,
workspaceId,
@@ -452,7 +467,6 @@ async function executeWebhookJobInternal(
}
} catch (error) {
logger.error(`[${requestId}] Error processing trigger file outputs:`, error)
// Continue without processing attachments rather than failing execution
}
}
@@ -499,18 +513,11 @@ async function executeWebhookJobInternal(
}
} catch (error) {
logger.error(`[${requestId}] Error processing generic webhook files:`, error)
// Continue without processing files rather than failing execution
}
}
logger.info(`[${requestId}] Executing workflow for ${payload.provider} webhook`)
- // Get workflow for core execution
- const workflow = await getWorkflowById(payload.workflowId)
- if (!workflow) {
-   throw new Error(`Workflow ${payload.workflowId} not found`)
- }
const metadata: ExecutionMetadata = {
requestId,
executionId,
@@ -518,13 +525,13 @@ async function executeWebhookJobInternal(
workspaceId,
userId: payload.userId,
sessionUserId: undefined,
- workflowUserId: workflow.userId,
+ workflowUserId: workflowRecord.userId,
triggerType: payload.provider || 'webhook',
triggerBlockId: payload.blockId,
useDraftState: false,
startTime: new Date().toISOString(),
isClientSession: false,
- credentialAccountUserId: payload.credentialAccountUserId,
+ credentialAccountUserId,
workflowStateOverride: {
blocks,
edges,
@@ -536,7 +543,13 @@ async function executeWebhookJobInternal(
const triggerInput = input || {}
- const snapshot = new ExecutionSnapshot(metadata, workflow, triggerInput, workflowVariables, [])
+ const snapshot = new ExecutionSnapshot(
+   metadata,
+   workflowRecord,
+   triggerInput,
+   workflowVariables,
+   []
+ )
const executionResult = await executeWorkflowCore({
snapshot,
@@ -611,23 +624,9 @@ async function executeWebhookJobInternal(
})
try {
- const wfRow = await db
-   .select({ workspaceId: workflowTable.workspaceId })
-   .from(workflowTable)
-   .where(eq(workflowTable.id, payload.workflowId))
-   .limit(1)
- const errorWorkspaceId = wfRow[0]?.workspaceId
- if (!errorWorkspaceId) {
-   logger.warn(
-     `[${requestId}] Cannot log error: workflow ${payload.workflowId} has no workspace`
-   )
-   throw error
- }
await loggingSession.safeStart({
userId: payload.userId,
- workspaceId: errorWorkspaceId,
+ workspaceId,
variables: {},
triggerData: {
isTest: false,

View File

@@ -19,6 +19,7 @@ import { checkUsageStatus } from '@/lib/billing/calculations/usage-monitor'
import { getHighestPrioritySubscription } from '@/lib/billing/core/subscription'
import { RateLimiter } from '@/lib/core/rate-limiter'
import { decryptSecret } from '@/lib/core/security/encryption'
import { secureFetchWithValidation } from '@/lib/core/security/input-validation.server'
import { formatDuration } from '@/lib/core/utils/formatting'
import { getBaseUrl } from '@/lib/core/utils/urls'
import type { TraceSpan, WorkflowExecutionLog } from '@/lib/logs/types'
@@ -207,18 +208,18 @@ async function deliverWebhook(
headers['sim-signature'] = `t=${payload.timestamp},v1=${signature}`
}
- const controller = new AbortController()
- const timeoutId = setTimeout(() => controller.abort(), 30000)
try {
- const response = await fetch(webhookConfig.url, {
-   method: 'POST',
-   headers,
-   body,
-   signal: controller.signal,
- })
- clearTimeout(timeoutId)
+ const response = await secureFetchWithValidation(
+   webhookConfig.url,
+   {
+     method: 'POST',
+     headers,
+     body,
+     timeout: 30000,
+     allowHttp: true,
+   },
+   'webhookUrl'
+ )
return {
success: response.ok,
@@ -226,11 +227,13 @@ async function deliverWebhook(
error: response.ok ? undefined : `HTTP ${response.status}`,
}
} catch (error: unknown) {
- clearTimeout(timeoutId)
- const err = error as Error & { name?: string }
+ logger.warn('Webhook delivery failed', {
+   error: error instanceof Error ? error.message : String(error),
+   webhookUrl: webhookConfig.url,
+ })
return {
success: false,
- error: err.name === 'AbortError' ? 'Request timeout' : err.message,
+ error: 'Failed to deliver webhook',
}
}
}
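The `sim-signature` header assembled above follows a `t=<timestamp>,v1=<signature>` format. As a hedged sketch of how such a header is typically produced and verified — the actual signing payload is not shown in this diff, so the `${timestamp}.${body}` message format below is an assumption modeled on the common Stripe-style scheme, not the repo's confirmed implementation:

```typescript
import { createHmac, timingSafeEqual } from 'crypto'

// Hypothetical signer/verifier for a `t=<ts>,v1=<sig>` header. The message
// format `${timestamp}.${body}` is an assumption, not taken from this diff.
function signPayload(secret: string, timestamp: number, body: string): string {
  const sig = createHmac('sha256', secret).update(`${timestamp}.${body}`).digest('hex')
  return `t=${timestamp},v1=${sig}`
}

function verifySignature(secret: string, header: string, body: string): boolean {
  const match = /^t=(\d+),v1=([0-9a-f]+)$/.exec(header)
  if (!match) return false
  const expected = createHmac('sha256', secret).update(`${match[1]}.${body}`).digest('hex')
  // Compare the fixed-length hex digests in constant time.
  const a = Buffer.from(match[2])
  const b = Buffer.from(expected)
  return a.length === b.length && timingSafeEqual(a, b)
}
```

Receivers would also check the timestamp against a freshness window to limit replay, which this sketch omits.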

View File

@@ -0,0 +1,211 @@
import { FathomIcon } from '@/components/icons'
import { AuthMode, type BlockConfig } from '@/blocks/types'
import type { FathomResponse } from '@/tools/fathom/types'
import { getTrigger } from '@/triggers'
import { fathomTriggerOptions } from '@/triggers/fathom/utils'
export const FathomBlock: BlockConfig<FathomResponse> = {
type: 'fathom',
name: 'Fathom',
description: 'Access meeting recordings, transcripts, and summaries',
authMode: AuthMode.ApiKey,
triggerAllowed: true,
longDescription:
'Integrate Fathom AI Notetaker into your workflow. List meetings, get transcripts and summaries, and manage team members and teams. Can also trigger workflows when new meeting content is ready.',
docsLink: 'https://docs.sim.ai/tools/fathom',
category: 'tools',
bgColor: '#181C1E',
icon: FathomIcon,
subBlocks: [
{
id: 'operation',
title: 'Operation',
type: 'dropdown',
options: [
{ label: 'List Meetings', id: 'fathom_list_meetings' },
{ label: 'Get Summary', id: 'fathom_get_summary' },
{ label: 'Get Transcript', id: 'fathom_get_transcript' },
{ label: 'List Team Members', id: 'fathom_list_team_members' },
{ label: 'List Teams', id: 'fathom_list_teams' },
],
value: () => 'fathom_list_meetings',
},
{
id: 'recordingId',
title: 'Recording ID',
type: 'short-input',
required: { field: 'operation', value: ['fathom_get_summary', 'fathom_get_transcript'] },
placeholder: 'Enter the recording ID',
condition: { field: 'operation', value: ['fathom_get_summary', 'fathom_get_transcript'] },
},
{
id: 'includeSummary',
title: 'Include Summary',
type: 'dropdown',
options: [
{ label: 'No', id: 'false' },
{ label: 'Yes', id: 'true' },
],
value: () => 'false',
condition: { field: 'operation', value: 'fathom_list_meetings' },
},
{
id: 'includeTranscript',
title: 'Include Transcript',
type: 'dropdown',
options: [
{ label: 'No', id: 'false' },
{ label: 'Yes', id: 'true' },
],
value: () => 'false',
condition: { field: 'operation', value: 'fathom_list_meetings' },
},
{
id: 'includeActionItems',
title: 'Include Action Items',
type: 'dropdown',
options: [
{ label: 'No', id: 'false' },
{ label: 'Yes', id: 'true' },
],
value: () => 'false',
condition: { field: 'operation', value: 'fathom_list_meetings' },
},
{
id: 'includeCrmMatches',
title: 'Include CRM Matches',
type: 'dropdown',
options: [
{ label: 'No', id: 'false' },
{ label: 'Yes', id: 'true' },
],
value: () => 'false',
condition: { field: 'operation', value: 'fathom_list_meetings' },
},
{
id: 'createdAfter',
title: 'Created After',
type: 'short-input',
placeholder: 'ISO 8601 timestamp (e.g., 2025-01-01T00:00:00Z)',
condition: { field: 'operation', value: 'fathom_list_meetings' },
mode: 'advanced',
wandConfig: {
enabled: true,
prompt: 'Generate an ISO 8601 timestamp. Return ONLY the timestamp string.',
generationType: 'timestamp',
},
},
{
id: 'createdBefore',
title: 'Created Before',
type: 'short-input',
placeholder: 'ISO 8601 timestamp (e.g., 2025-12-31T23:59:59Z)',
condition: { field: 'operation', value: 'fathom_list_meetings' },
mode: 'advanced',
wandConfig: {
enabled: true,
prompt: 'Generate an ISO 8601 timestamp. Return ONLY the timestamp string.',
generationType: 'timestamp',
},
},
{
id: 'recordedBy',
title: 'Recorded By',
type: 'short-input',
placeholder: 'Filter by recorder email',
condition: { field: 'operation', value: 'fathom_list_meetings' },
mode: 'advanced',
},
{
id: 'teams',
title: 'Team',
type: 'short-input',
placeholder: 'Filter by team name',
condition: {
field: 'operation',
value: ['fathom_list_meetings', 'fathom_list_team_members'],
},
mode: 'advanced',
},
{
id: 'cursor',
title: 'Pagination Cursor',
type: 'short-input',
placeholder: 'Cursor from a previous response',
condition: {
field: 'operation',
value: ['fathom_list_meetings', 'fathom_list_team_members', 'fathom_list_teams'],
},
mode: 'advanced',
},
{
id: 'apiKey',
title: 'API Key',
type: 'short-input',
required: true,
placeholder: 'Enter your Fathom API key',
password: true,
},
{
id: 'selectedTriggerId',
title: 'Trigger Type',
type: 'dropdown',
mode: 'trigger',
options: fathomTriggerOptions,
value: () => 'fathom_new_meeting',
required: true,
},
...getTrigger('fathom_new_meeting').subBlocks,
...getTrigger('fathom_webhook').subBlocks,
],
tools: {
access: [
'fathom_list_meetings',
'fathom_get_summary',
'fathom_get_transcript',
'fathom_list_team_members',
'fathom_list_teams',
],
config: {
tool: (params) => {
return params.operation || 'fathom_list_meetings'
},
},
},
inputs: {
operation: { type: 'string', description: 'Operation to perform' },
apiKey: { type: 'string', description: 'Fathom API key' },
recordingId: { type: 'string', description: 'Recording ID for summary or transcript' },
includeSummary: { type: 'string', description: 'Include summary in meetings response' },
includeTranscript: { type: 'string', description: 'Include transcript in meetings response' },
includeActionItems: {
type: 'string',
description: 'Include action items in meetings response',
},
includeCrmMatches: {
type: 'string',
description: 'Include linked CRM matches in meetings response',
},
createdAfter: { type: 'string', description: 'Filter meetings created after this timestamp' },
createdBefore: {
type: 'string',
description: 'Filter meetings created before this timestamp',
},
recordedBy: { type: 'string', description: 'Filter by recorder email' },
teams: { type: 'string', description: 'Filter by team name' },
cursor: { type: 'string', description: 'Pagination cursor for next page' },
},
outputs: {
meetings: { type: 'json', description: 'List of meetings' },
template_name: { type: 'string', description: 'Summary template name' },
markdown_formatted: { type: 'string', description: 'Markdown-formatted summary' },
transcript: { type: 'json', description: 'Meeting transcript entries' },
members: { type: 'json', description: 'List of team members' },
teams: { type: 'json', description: 'List of teams' },
next_cursor: { type: 'string', description: 'Pagination cursor' },
},
triggers: {
enabled: true,
available: ['fathom_new_meeting', 'fathom_webhook'],
},
}

View File

@@ -18,6 +18,7 @@ export const GenericWebhookBlock: BlockConfig = {
bestPractices: `
- You can test the webhook by sending a request to the webhook URL. E.g. depending on authorization: curl -X POST http://localhost:3000/api/webhooks/trigger/d8abcf0d-1ee5-4b77-bb07-b1e8142ea4e9 -H "Content-Type: application/json" -H "X-Sim-Secret: 1234" -d '{"message": "Test webhook trigger", "data": {"key": "v"}}'
- Continuing example above, the body can be accessed in downstream block using dot notation. E.g. <webhook1.message> and <webhook1.data.key>
- To deduplicate incoming events, set the Deduplication Field to a dot-notation path of a unique field in the payload (e.g. "event.id"). Duplicate values within 7 days will be skipped.
- Only use when there's no existing integration for the service with triggerAllowed flag set to true.
`,
subBlocks: [...getTrigger('generic_webhook').subBlocks],
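The deduplication tip above relies on resolving a dot-notation path (e.g. `"event.id"`) against the incoming payload. A minimal sketch of such a lookup — the helper name `resolveDotPath` is illustrative, not the repo's actual implementation:

```typescript
// Illustrative dot-notation lookup, e.g. resolving "event.id" against a
// webhook payload. Returns undefined when any segment is missing.
function resolveDotPath(obj: unknown, path: string): unknown {
  return path.split('.').reduce<unknown>((acc, key) => {
    if (acc !== null && typeof acc === 'object') {
      return (acc as Record<string, unknown>)[key]
    }
    return undefined
  }, obj)
}

console.log(resolveDotPath({ event: { id: 'evt_1' } }, 'event.id')) // 'evt_1'
```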

View File

@@ -9,7 +9,7 @@ export const ParallelBlock: BlockConfig<ToolResponse> = {
authMode: AuthMode.ApiKey,
longDescription:
'Integrate Parallel AI into the workflow. Can search the web, extract information from URLs, and conduct deep research.',
- docsLink: 'https://docs.parallel.ai/',
+ docsLink: 'https://docs.sim.ai/tools/parallel-ai',
category: 'tools',
bgColor: '#E0E0E0',
icon: ParallelIcon,
@@ -56,7 +56,7 @@ export const ParallelBlock: BlockConfig<ToolResponse> = {
title: 'Extract Objective',
type: 'long-input',
placeholder: 'What information to extract from the URLs?',
- required: true,
+ required: false,
condition: { field: 'operation', value: 'extract' },
},
{
@@ -89,6 +89,37 @@ export const ParallelBlock: BlockConfig<ToolResponse> = {
required: true,
condition: { field: 'operation', value: 'deep_research' },
},
{
id: 'search_mode',
title: 'Search Mode',
type: 'dropdown',
options: [
{ label: 'One-Shot', id: 'one-shot' },
{ label: 'Agentic', id: 'agentic' },
{ label: 'Fast', id: 'fast' },
],
value: () => 'one-shot',
condition: { field: 'operation', value: 'search' },
mode: 'advanced',
},
{
id: 'search_include_domains',
title: 'Include Domains',
type: 'short-input',
placeholder: 'Comma-separated domains to include (e.g., .edu, example.com)',
required: false,
condition: { field: 'operation', value: 'search' },
mode: 'advanced',
},
{
id: 'search_exclude_domains',
title: 'Exclude Domains',
type: 'short-input',
placeholder: 'Comma-separated domains to exclude',
required: false,
condition: { field: 'operation', value: 'search' },
mode: 'advanced',
},
{
id: 'include_domains',
title: 'Include Domains',
@@ -96,6 +127,7 @@ export const ParallelBlock: BlockConfig<ToolResponse> = {
placeholder: 'Comma-separated domains to include',
required: false,
condition: { field: 'operation', value: 'deep_research' },
mode: 'advanced',
},
{
id: 'exclude_domains',
@@ -104,37 +136,37 @@ export const ParallelBlock: BlockConfig<ToolResponse> = {
placeholder: 'Comma-separated domains to exclude',
required: false,
condition: { field: 'operation', value: 'deep_research' },
mode: 'advanced',
},
{
id: 'processor',
- title: 'Processor',
+ title: 'Research Processor',
type: 'dropdown',
options: [
{ label: 'Lite', id: 'lite' },
{ label: 'Base', id: 'base' },
{ label: 'Core', id: 'core' },
{ label: 'Core 2x', id: 'core2x' },
{ label: 'Pro', id: 'pro' },
{ label: 'Ultra', id: 'ultra' },
{ label: 'Ultra 2x', id: 'ultra2x' },
{ label: 'Ultra 4x', id: 'ultra4x' },
{ label: 'Pro Fast', id: 'pro-fast' },
{ label: 'Ultra Fast', id: 'ultra-fast' },
],
- value: () => 'base',
- condition: { field: 'operation', value: ['search', 'deep_research'] },
+ value: () => 'pro',
+ condition: { field: 'operation', value: 'deep_research' },
mode: 'advanced',
},
{
id: 'max_results',
title: 'Max Results',
type: 'short-input',
- placeholder: '5',
+ placeholder: '10',
condition: { field: 'operation', value: 'search' },
mode: 'advanced',
},
{
id: 'max_chars_per_result',
- title: 'Max Chars',
+ title: 'Max Chars Per Result',
type: 'short-input',
placeholder: '1500',
condition: { field: 'operation', value: 'search' },
mode: 'advanced',
},
{
id: 'apiKey',
@@ -149,8 +181,6 @@ export const ParallelBlock: BlockConfig<ToolResponse> = {
access: ['parallel_search', 'parallel_extract', 'parallel_deep_research'],
config: {
tool: (params) => {
- if (params.extract_objective) params.objective = params.extract_objective
- if (params.research_input) params.input = params.research_input
switch (params.operation) {
case 'search':
return 'parallel_search'
@@ -174,21 +204,30 @@ export const ParallelBlock: BlockConfig<ToolResponse> = {
.filter((query: string) => query.length > 0)
if (queries.length > 0) {
result.search_queries = queries
} else {
result.search_queries = undefined
}
}
if (params.search_mode && params.search_mode !== 'one-shot') {
result.mode = params.search_mode
}
if (params.max_results) result.max_results = Number(params.max_results)
if (params.max_chars_per_result) {
result.max_chars_per_result = Number(params.max_chars_per_result)
}
result.include_domains = params.search_include_domains || undefined
result.exclude_domains = params.search_exclude_domains || undefined
}
if (operation === 'extract') {
if (params.extract_objective) result.objective = params.extract_objective
result.excerpts = !(params.excerpts === 'false' || params.excerpts === false)
result.full_content = params.full_content === 'true' || params.full_content === true
}
if (operation === 'deep_research') {
if (params.research_input) result.input = params.research_input
if (params.processor) result.processor = params.processor
}
return result
},
},
@@ -202,29 +241,34 @@ export const ParallelBlock: BlockConfig<ToolResponse> = {
excerpts: { type: 'boolean', description: 'Include excerpts' },
full_content: { type: 'boolean', description: 'Include full content' },
research_input: { type: 'string', description: 'Deep research query' },
include_domains: { type: 'string', description: 'Domains to include' },
exclude_domains: { type: 'string', description: 'Domains to exclude' },
processor: { type: 'string', description: 'Processing method' },
include_domains: { type: 'string', description: 'Domains to include (deep research)' },
exclude_domains: { type: 'string', description: 'Domains to exclude (deep research)' },
search_include_domains: { type: 'string', description: 'Domains to include (search)' },
search_exclude_domains: { type: 'string', description: 'Domains to exclude (search)' },
search_mode: { type: 'string', description: 'Search mode (one-shot, agentic, fast)' },
processor: { type: 'string', description: 'Research processing tier' },
max_results: { type: 'number', description: 'Maximum number of results' },
max_chars_per_result: { type: 'number', description: 'Maximum characters per result' },
apiKey: { type: 'string', description: 'Parallel AI API key' },
},
outputs: {
- results: { type: 'string', description: 'Search or extract results (JSON stringified)' },
+ results: {
+   type: 'json',
+   description: 'Search or extract results (array of url, title, excerpts)',
+ },
search_id: { type: 'string', description: 'Search request ID (for search)' },
extract_id: { type: 'string', description: 'Extract request ID (for extract)' },
status: { type: 'string', description: 'Task status (for deep research)' },
run_id: { type: 'string', description: 'Task run ID (for deep research)' },
message: { type: 'string', description: 'Status message (for deep research)' },
content: {
- type: 'string',
- description: 'Research content (for deep research, JSON stringified)',
+ type: 'json',
+ description: 'Research content (for deep research, structured based on output_schema)',
},
basis: {
type: 'string',
description: 'Citations and sources (for deep research, JSON stringified)',
},
metadata: {
- type: 'string',
- description: 'Task metadata (for deep research, JSON stringified)',
+ type: 'json',
+ description:
+   'Citations and sources with field, reasoning, citations, confidence (for deep research)',
},
},
}

View File

@@ -40,6 +40,7 @@ import { EnrichBlock } from '@/blocks/blocks/enrich'
import { EvaluatorBlock } from '@/blocks/blocks/evaluator'
import { EvernoteBlock } from '@/blocks/blocks/evernote'
import { ExaBlock } from '@/blocks/blocks/exa'
import { FathomBlock } from '@/blocks/blocks/fathom'
import { FileBlock, FileV2Block, FileV3Block } from '@/blocks/blocks/file'
import { FirecrawlBlock } from '@/blocks/blocks/firecrawl'
import { FirefliesBlock, FirefliesV2Block } from '@/blocks/blocks/fireflies'
@@ -235,6 +236,7 @@ export const registry: Record<string, BlockConfig> = {
dynamodb: DynamoDBBlock,
elasticsearch: ElasticsearchBlock,
elevenlabs: ElevenLabsBlock,
fathom: FathomBlock,
enrich: EnrichBlock,
evernote: EvernoteBlock,
evaluator: EvaluatorBlock,

View File

@@ -1979,6 +1979,24 @@ export function ElevenLabsIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function FathomIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1000 1000' fill='none'>
<path
d='M0,668.7v205.78c0,53.97,34.24,102.88,85.8,119.08,87.48,27.49,167.88-36.99,167.88-120.22v-77.45L0,668.7Z'
fill='#007299'
/>
<path
d='M873.72,626.07c-19.05,0-38.38-4.3-56.58-13.38L72.78,241.43C11.15,210.69-17.51,136.6,11.18,74.05,41.2,8.59,119.26-18.53,183.23,13.38l744.25,371.21c62.45,31.15,91,109.08,59.79,171.43-22.22,44.38-67.02,70.05-113.55,70.05Z'
fill='#00beff'
/>
<path
d='M500.09,813.66c-19.05,0-38.38-4.3-56.58-13.38l-370.72-184.9c-61.63-30.74-90.29-104.82-61.61-167.37,30.02-65.46,108.08-92.59,172.06-60.68l370.62,184.85c62.45,31.15,91,109.08,59.79,171.43-22.22,44.38-67.02,70.05-113.55,70.05Z'
fill='#00beff'
/>
</svg>
)
}
export function LinkupIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 154 107' fill='none'>

View File

@@ -166,7 +166,8 @@ export class ConditionBlockHandler implements BlockHandler {
if (!output || typeof output !== 'object') {
return output
}
- const { _pauseMetadata, error, ...rest } = output
+ const { _pauseMetadata, error, providerTiming, tokens, toolCalls, model, cost, ...rest } =
+   output
return rest
}
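The destructuring in this handler strips known execution-metadata keys before the output is passed along. As a standalone sketch — the key list is copied from the hunk, while the function name and surrounding class are illustrative:

```typescript
// Pull execution metadata off a block output and return only user-facing keys.
function stripExecutionMetadata(output: Record<string, unknown>): Record<string, unknown> {
  const { _pauseMetadata, error, providerTiming, tokens, toolCalls, model, cost, ...rest } = output
  return rest
}
```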

View File

@@ -22,7 +22,7 @@ export class TriggerBlockHandler implements BlockHandler {
}
const existingState = ctx.blockStates.get(block.id)
- if (existingState?.output && Object.keys(existingState.output).length > 0) {
+ if (existingState?.output) {
return existingState.output
}

View File

@@ -7,6 +7,7 @@ import {
ClientFactoryOptions,
} from '@a2a-js/sdk/client'
import { createLogger } from '@sim/logger'
import { validateUrlWithDNS } from '@/lib/core/security/input-validation.server'
import { isInternalFileUrl } from '@/lib/uploads/utils/file-utils'
import { A2A_TERMINAL_STATES } from './constants'
@@ -43,6 +44,11 @@ class ApiKeyInterceptor implements CallInterceptor {
* Tries standard path first, falls back to root URL for compatibility.
*/
export async function createA2AClient(agentUrl: string, apiKey?: string): Promise<Client> {
const validation = await validateUrlWithDNS(agentUrl, 'agentUrl')
if (!validation.isValid) {
throw new Error(validation.error || 'Agent URL validation failed')
}
const factoryOptions = apiKey
? ClientFactoryOptions.createFrom(ClientFactoryOptions.default, {
clientConfig: {

View File

@@ -8,6 +8,7 @@ import {
isLegacyApiKeyFormat,
} from '@/lib/api-key/crypto'
import { env } from '@/lib/core/config/env'
import { safeCompare } from '@/lib/core/security/encryption'
const logger = createLogger('ApiKeyAuth')
@@ -39,7 +40,7 @@ export async function authenticateApiKey(inputKey: string, storedKey: string): P
if (isEncryptedKey(storedKey)) {
try {
const { decrypted } = await decryptApiKey(storedKey)
- return inputKey === decrypted
+ return safeCompare(inputKey, decrypted)
} catch (decryptError) {
logger.error('Failed to decrypt stored API key:', { error: decryptError })
return false
@@ -54,27 +55,27 @@ export async function authenticateApiKey(inputKey: string, storedKey: string): P
if (isEncryptedKey(storedKey)) {
try {
const { decrypted } = await decryptApiKey(storedKey)
- return inputKey === decrypted
+ return safeCompare(inputKey, decrypted)
} catch (decryptError) {
logger.error('Failed to decrypt stored API key:', { error: decryptError })
// Fall through to plain text comparison if decryption fails
}
}
// Legacy format can match against plain text storage
- return inputKey === storedKey
+ return safeCompare(inputKey, storedKey)
}
// If no recognized prefix, fall back to original behavior
if (isEncryptedKey(storedKey)) {
try {
const { decrypted } = await decryptApiKey(storedKey)
- return inputKey === decrypted
+ return safeCompare(inputKey, decrypted)
} catch (decryptError) {
logger.error('Failed to decrypt stored API key:', { error: decryptError })
}
}
- return inputKey === storedKey
+ return safeCompare(inputKey, storedKey)
} catch (error) {
logger.error('API key authentication error:', { error })
return false

View File

@@ -492,7 +492,7 @@ export const auth = betterAuth({
'google-meet',
'google-tasks',
'vertex-ai',
- 'github-repo',
'microsoft-dataverse',
'microsoft-teams',
'microsoft-excel',
@@ -754,83 +754,6 @@ export const auth = betterAuth({
}),
genericOAuth({
config: [
- {
-   providerId: 'github-repo',
-   clientId: env.GITHUB_REPO_CLIENT_ID as string,
-   clientSecret: env.GITHUB_REPO_CLIENT_SECRET as string,
-   authorizationUrl: 'https://github.com/login/oauth/authorize',
-   accessType: 'offline',
-   prompt: 'consent',
-   tokenUrl: 'https://github.com/login/oauth/access_token',
-   userInfoUrl: 'https://api.github.com/user',
-   scopes: getCanonicalScopesForProvider('github-repo'),
-   redirectURI: `${getBaseUrl()}/api/auth/oauth2/callback/github-repo`,
-   getUserInfo: async (tokens) => {
-     try {
-       const profileResponse = await fetch('https://api.github.com/user', {
-         headers: {
-           Authorization: `Bearer ${tokens.accessToken}`,
-           'User-Agent': 'sim-studio',
-         },
-       })
-       if (!profileResponse.ok) {
-         await profileResponse.text().catch(() => {})
-         logger.error('Failed to fetch GitHub profile', {
-           status: profileResponse.status,
-           statusText: profileResponse.statusText,
-         })
-         throw new Error(`Failed to fetch GitHub profile: ${profileResponse.statusText}`)
-       }
-       const profile = await profileResponse.json()
-       if (!profile.email) {
-         const emailsResponse = await fetch('https://api.github.com/user/emails', {
-           headers: {
-             Authorization: `Bearer ${tokens.accessToken}`,
-             'User-Agent': 'sim-studio',
-           },
-         })
-         if (emailsResponse.ok) {
-           const emails = await emailsResponse.json()
-           const primaryEmail =
-             emails.find(
-               (email: { primary: boolean; email: string; verified: boolean }) =>
-                 email.primary
-             ) || emails[0]
-           if (primaryEmail) {
-             profile.email = primaryEmail.email
-             profile.emailVerified = primaryEmail.verified || false
-           }
-         } else {
-           logger.warn('Failed to fetch GitHub emails', {
-             status: emailsResponse.status,
-             statusText: emailsResponse.statusText,
-           })
-         }
-       }
-       const now = new Date()
-       return {
-         id: `${profile.id.toString()}-${crypto.randomUUID()}`,
-         name: profile.name || profile.login,
-         email: profile.email,
-         image: profile.avatar_url,
-         emailVerified: profile.emailVerified || false,
-         createdAt: now,
-         updatedAt: now,
-       }
-     } catch (error) {
-       logger.error('Error in GitHub getUserInfo', { error })
-       throw error
-     }
-   },
- },
// Google providers
{
providerId: 'google-email',

View File

@@ -2,6 +2,7 @@ import { createLogger } from '@sim/logger'
import { jwtVerify, SignJWT } from 'jose'
import { type NextRequest, NextResponse } from 'next/server'
import { env } from '@/lib/core/config/env'
import { safeCompare } from '@/lib/core/security/encryption'
const logger = createLogger('CronAuth')
@@ -81,7 +82,8 @@ export function verifyCronAuth(request: NextRequest, context?: string): NextResp
const authHeader = request.headers.get('authorization')
const expectedAuth = `Bearer ${env.CRON_SECRET}`
- if (authHeader !== expectedAuth) {
+ const isValid = authHeader !== null && safeCompare(authHeader, expectedAuth)
+ if (!isValid) {
const contextInfo = context ? ` for ${context}` : ''
logger.warn(`Unauthorized CRON access attempt${contextInfo}`, {
providedAuth: authHeader,

View File

@@ -1,5 +1,6 @@
import type { NextRequest } from 'next/server'
import { env } from '@/lib/core/config/env'
import { safeCompare } from '@/lib/core/security/encryption'
export function checkInternalApiKey(req: NextRequest) {
const apiKey = req.headers.get('x-api-key')
@@ -13,7 +14,7 @@ export function checkInternalApiKey(req: NextRequest) {
return { success: false, error: 'API key required' }
}
- if (apiKey !== expectedApiKey) {
+ if (!safeCompare(apiKey, expectedApiKey)) {
return { success: false, error: 'Invalid API key' }
}

View File

@@ -7,6 +7,7 @@ const logger = createLogger('AsyncJobsConfig')
let cachedBackend: JobQueueBackend | null = null
let cachedBackendType: AsyncBackendType | null = null
let cachedInlineBackend: JobQueueBackend | null = null
/**
* Determines which async backend to use based on environment configuration.
@@ -71,6 +72,31 @@ export function getCurrentBackendType(): AsyncBackendType | null {
return cachedBackendType
}
/**
* Gets a job queue backend that bypasses Trigger.dev (Redis -> Database).
* Used for non-polling webhooks that should always execute inline.
*/
export async function getInlineJobQueue(): Promise<JobQueueBackend> {
if (cachedInlineBackend) {
return cachedInlineBackend
}
const redis = getRedisClient()
let type: string
if (redis) {
const { RedisJobQueue } = await import('@/lib/core/async-jobs/backends/redis')
cachedInlineBackend = new RedisJobQueue(redis)
type = 'redis'
} else {
const { DatabaseJobQueue } = await import('@/lib/core/async-jobs/backends/database')
cachedInlineBackend = new DatabaseJobQueue()
type = 'database'
}
logger.info(`Inline job backend initialized: ${type}`)
return cachedInlineBackend
}
/**
* Checks if jobs should be executed inline (fire-and-forget).
* For Redis/DB backends, we execute inline. Trigger.dev handles execution itself.
@@ -85,4 +111,5 @@ export function shouldExecuteInline(): boolean {
export function resetJobQueueCache(): void {
cachedBackend = null
cachedBackendType = null
cachedInlineBackend = null
}
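`getInlineJobQueue` above follows a memoize-with-fallback pattern: pick the preferred backend when its dependency is available, fall back otherwise, and cache the choice until `resetJobQueueCache` clears it. A simplified, dependency-free sketch of the same pattern — the `Backend` type and function names here are illustrative:

```typescript
// Illustrative memoized backend picker: prefer the primary backend when it
// is available, fall back otherwise, and cache whichever was chosen.
interface Backend {
  name: string
}

let cachedBackend: Backend | null = null

function getBackend(redisAvailable: boolean): Backend {
  if (cachedBackend) return cachedBackend
  cachedBackend = redisAvailable ? { name: 'redis' } : { name: 'database' }
  return cachedBackend
}

function resetBackendCache(): void {
  cachedBackend = null
}
```

Note that once cached, later calls ignore the availability argument, which is why the reset hook must also clear this cache (as the hunk does for `cachedInlineBackend`).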

View File

@@ -1,6 +1,7 @@
export {
getAsyncBackendType,
getCurrentBackendType,
getInlineJobQueue,
getJobQueue,
resetJobQueueCache,
shouldExecuteInline,

View File

@@ -230,8 +230,7 @@ export const env = createEnv({
GOOGLE_CLIENT_SECRET: z.string().optional(), // Google OAuth client secret
GITHUB_CLIENT_ID: z.string().optional(), // GitHub OAuth client ID for GitHub integration
GITHUB_CLIENT_SECRET: z.string().optional(), // GitHub OAuth client secret
- GITHUB_REPO_CLIENT_ID: z.string().optional(), // GitHub OAuth client ID for repo access
- GITHUB_REPO_CLIENT_SECRET: z.string().optional(), // GitHub OAuth client secret for repo access
X_CLIENT_ID: z.string().optional(), // X (Twitter) OAuth client ID
X_CLIENT_SECRET: z.string().optional(), // X (Twitter) OAuth client secret
CONFLUENCE_CLIENT_ID: z.string().optional(), // Atlassian Confluence OAuth client ID

View File

@@ -413,6 +413,7 @@ export class IdempotencyService {
: undefined
const webhookIdHeader =
normalizedHeaders?.['x-sim-idempotency-key'] ||
normalizedHeaders?.['webhook-id'] ||
normalizedHeaders?.['x-webhook-id'] ||
normalizedHeaders?.['x-shopify-webhook-id'] ||
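The `||` chain above selects the first deduplication key present among the candidate headers. A self-contained sketch of that fallback — the chain is cut off in this hunk, so the candidate list below is partial, and the helper name is illustrative:

```typescript
// Return the first non-empty header value from an ordered candidate list.
function pickWebhookId(headers: Record<string, string | undefined>): string | undefined {
  const candidates = [
    'x-sim-idempotency-key',
    'webhook-id',
    'x-webhook-id',
    'x-shopify-webhook-id',
  ]
  for (const name of candidates) {
    const value = headers[name]
    if (value) return value
  }
  return undefined
}
```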

View File

@@ -81,7 +81,9 @@ export function setDeploymentAuthCookie(
}
/**
- * Adds CORS headers to allow cross-origin requests for embedded deployments
+ * Adds CORS headers to allow cross-origin requests for embedded deployments.
+ * Embedded chat widgets and forms are designed to run on any customer domain,
+ * so we reflect the requesting origin rather than restricting to an allowlist.
*/
export function addCorsHeaders(response: NextResponse, request: NextRequest): NextResponse {
const origin = request.headers.get('origin') || ''

View File

@@ -1,4 +1,4 @@
- import { createCipheriv, createDecipheriv, randomBytes, timingSafeEqual } from 'crypto'
+ import { createCipheriv, createDecipheriv, createHmac, randomBytes, timingSafeEqual } from 'crypto'
import { createLogger } from '@sim/logger'
import { env } from '@/lib/core/config/env'
@@ -91,8 +91,8 @@ export function generatePassword(length = 24): string {
* @returns True if strings are equal, false otherwise
*/
export function safeCompare(a: string, b: string): boolean {
- if (a.length !== b.length) {
-   return false
- }
- return timingSafeEqual(Buffer.from(a), Buffer.from(b))
+ const key = 'safeCompare'
+ const ha = createHmac('sha256', key).update(a).digest()
+ const hb = createHmac('sha256', key).update(b).digest()
+ return timingSafeEqual(ha, hb)
}
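The rewritten `safeCompare` drops the early length-mismatch return, which leaked input length through timing. Hashing both inputs first yields fixed 32-byte digests, so `timingSafeEqual` always compares equal-length buffers and never throws. A self-contained sketch of the same pattern (the static key adds no secrecy; it only turns HMAC into a fixed-output-length hash):

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto'

// Constant-time string comparison that does not branch on input length:
// both inputs are first mapped to fixed-length 32-byte SHA-256 digests.
function constantTimeEquals(a: string, b: string): boolean {
  const key = 'safeCompare' // static key; only here to get fixed-length output
  const ha = createHmac('sha256', key).update(a).digest()
  const hb = createHmac('sha256', key).update(b).digest()
  // Digests are always 32 bytes, so timingSafeEqual never sees a length mismatch.
  return timingSafeEqual(ha, hb)
}
```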

View File

@@ -54,9 +54,10 @@ function isPrivateOrReservedIP(ip: string): boolean {
*/
export async function validateUrlWithDNS(
url: string | null | undefined,
paramName = 'url'
paramName = 'url',
options: { allowHttp?: boolean } = {}
): Promise<AsyncValidationResult> {
const basicValidation = validateExternalUrl(url, paramName)
const basicValidation = validateExternalUrl(url, paramName, options)
if (!basicValidation.isValid) {
return basicValidation
}
@@ -88,7 +89,10 @@ export async function validateUrlWithDNS(
return ip === '127.0.0.1' || ip === '::1'
})()
if (isPrivateOrReservedIP(address) && !(isLocalhost && resolvedIsLoopback)) {
if (
isPrivateOrReservedIP(address) &&
!(isLocalhost && resolvedIsLoopback && !options.allowHttp)
) {
logger.warn('URL resolves to blocked IP address', {
paramName,
hostname,
@@ -118,6 +122,70 @@ export async function validateUrlWithDNS(
}
}
/**
* Validates a database hostname by resolving DNS and checking the resolved IP
* against private/reserved ranges to prevent SSRF via database connections.
*
* Unlike validateHostname (which enforces strict RFC hostname format), this
* function is permissive about hostname format to avoid breaking legitimate
* database hostnames (e.g. underscores in Docker/K8s service names). It only
* blocks localhost and private/reserved IPs.
*
* @param host - The database hostname to validate
* @param paramName - Name of the parameter for error messages
* @returns AsyncValidationResult with resolved IP
*/
export async function validateDatabaseHost(
host: string | null | undefined,
paramName = 'host'
): Promise<AsyncValidationResult> {
if (!host) {
return { isValid: false, error: `${paramName} is required` }
}
const lowerHost = host.toLowerCase()
if (lowerHost === 'localhost') {
return { isValid: false, error: `${paramName} cannot be localhost` }
}
if (ipaddr.isValid(lowerHost) && isPrivateOrReservedIP(lowerHost)) {
return { isValid: false, error: `${paramName} cannot be a private IP address` }
}
try {
const { address } = await dns.lookup(host, { verbatim: true })
if (isPrivateOrReservedIP(address)) {
logger.warn('Database host resolves to blocked IP address', {
paramName,
hostname: host,
resolvedIP: address,
})
return {
isValid: false,
error: `${paramName} resolves to a blocked IP address`,
}
}
return {
isValid: true,
resolvedIP: address,
originalHostname: host,
}
} catch (error) {
logger.warn('DNS lookup failed for database host', {
paramName,
hostname: host,
error: error instanceof Error ? error.message : String(error),
})
return {
isValid: false,
error: `${paramName} hostname could not be resolved`,
}
}
}
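The `isPrivateOrReservedIP` helper is referenced throughout but not shown in this diff. A plausible IPv4-only sketch of the ranges it would block (an assumption for illustration; the real helper also covers IPv6 and additional reserved ranges):

```typescript
// Hypothetical sketch of the isPrivateOrReservedIP helper referenced above;
// the real implementation is not shown in this diff.
function isPrivateOrReservedIP(ip: string): boolean {
  const parts = ip.split('.').map(Number)
  if (parts.length !== 4 || parts.some((p) => Number.isNaN(p) || p < 0 || p > 255)) {
    return false // not a dotted-quad IPv4 address
  }
  const [a, b] = parts
  return (
    a === 10 || // 10.0.0.0/8 private
    a === 127 || // 127.0.0.0/8 loopback
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12 private
    (a === 192 && b === 168) || // 192.168.0.0/16 private
    (a === 169 && b === 254) || // 169.254.0.0/16 link-local
    a === 0 // 0.0.0.0/8 "this network"
  )
}
```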
export interface SecureFetchOptions {
method?: string
headers?: Record<string, string>
@@ -183,7 +251,7 @@ function resolveRedirectUrl(baseUrl: string, location: string): string {
export async function secureFetchWithPinnedIP(
url: string,
resolvedIP: string,
options: SecureFetchOptions = {},
options: SecureFetchOptions & { allowHttp?: boolean } = {},
redirectCount = 0
): Promise<SecureFetchResponse> {
const maxRedirects = options.maxRedirects ?? DEFAULT_MAX_REDIRECTS
@@ -231,7 +299,7 @@ export async function secureFetchWithPinnedIP(
res.resume()
const redirectUrl = resolveRedirectUrl(url, location)
validateUrlWithDNS(redirectUrl, 'redirectUrl')
validateUrlWithDNS(redirectUrl, 'redirectUrl', { allowHttp: options.allowHttp })
.then((validation) => {
if (!validation.isValid) {
reject(new Error(`Redirect blocked: ${validation.error}`))
@@ -340,10 +408,12 @@ export async function secureFetchWithPinnedIP(
*/
export async function secureFetchWithValidation(
url: string,
options: SecureFetchOptions = {},
options: SecureFetchOptions & { allowHttp?: boolean } = {},
paramName = 'url'
): Promise<SecureFetchResponse> {
const validation = await validateUrlWithDNS(url, paramName)
const validation = await validateUrlWithDNS(url, paramName, {
allowHttp: options.allowHttp,
})
if (!validation.isValid) {
throw new Error(validation.error)
}

View File

@@ -676,7 +676,8 @@ export function validateJiraIssueKey(
*/
export function validateExternalUrl(
url: string | null | undefined,
paramName = 'url'
paramName = 'url',
options: { allowHttp?: boolean } = {}
): ValidationResult {
if (!url || typeof url !== 'string') {
return {
@@ -709,7 +710,20 @@ export function validateExternalUrl(
}
}
if (protocol !== 'https:' && !(protocol === 'http:' && isLocalhost)) {
if (options.allowHttp) {
if (protocol !== 'https:' && protocol !== 'http:') {
return {
isValid: false,
error: `${paramName} must use http:// or https:// protocol`,
}
}
if (isLocalhost) {
return {
isValid: false,
error: `${paramName} cannot point to localhost`,
}
}
} else if (protocol !== 'https:' && !(protocol === 'http:' && isLocalhost)) {
return {
isValid: false,
error: `${paramName} must use https:// protocol`,
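Condensed into a pure function, the branching above yields a small accept/reject matrix: with `allowHttp`, plain http is accepted for non-localhost hosts; without it, http is only allowed for localhost. A hypothetical `checkProtocol` helper (not in the source) returning an error string or null:

```typescript
// Sketch of the protocol check above, extracted as a pure function.
function checkProtocol(
  protocol: string,
  isLocalhost: boolean,
  allowHttp: boolean
): string | null {
  if (allowHttp) {
    if (protocol !== 'https:' && protocol !== 'http:') {
      return 'must use http:// or https:// protocol'
    }
    if (isLocalhost) return 'cannot point to localhost'
    return null
  }
  // Default: https only, except plain http to localhost (development use).
  if (protocol !== 'https:' && !(protocol === 'http:' && isLocalhost)) {
    return 'must use https:// protocol'
  }
  return null
}
```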

View File

@@ -1,6 +1,10 @@
import { createLogger } from '@sim/logger'
import type { ToolCall, TraceSpan } from '@/lib/logs/types'
import { isWorkflowBlockType, stripCustomToolPrefix } from '@/executor/constants'
import {
isConditionBlockType,
isWorkflowBlockType,
stripCustomToolPrefix,
} from '@/executor/constants'
import type { ExecutionResult } from '@/executor/types'
import { stripCloneSuffixes } from '@/executor/utils/subflow-utils'
@@ -109,6 +113,7 @@ export function buildTraceSpans(result: ExecutionResult): {
if (!log.blockId || !log.blockType) return
const spanId = `${log.blockId}-${new Date(log.startedAt).getTime()}`
const isCondition = isConditionBlockType(log.blockType)
const duration = log.durationMs || 0
@@ -164,7 +169,7 @@ export function buildTraceSpans(result: ExecutionResult): {
...(log.parentIterations?.length && { parentIterations: log.parentIterations }),
}
if (log.output?.providerTiming) {
if (!isCondition && log.output?.providerTiming) {
const providerTiming = log.output.providerTiming as {
duration: number
startTime: string
@@ -186,7 +191,7 @@ export function buildTraceSpans(result: ExecutionResult): {
}
}
if (log.output?.cost) {
if (!isCondition && log.output?.cost) {
span.cost = log.output.cost as {
input?: number
output?: number
@@ -194,7 +199,7 @@ export function buildTraceSpans(result: ExecutionResult): {
}
}
if (log.output?.tokens) {
if (!isCondition && log.output?.tokens) {
const t = log.output.tokens as
| number
| {
@@ -224,12 +229,13 @@ export function buildTraceSpans(result: ExecutionResult): {
}
}
if (log.output?.model) {
if (!isCondition && log.output?.model) {
span.model = log.output.model as string
}
if (
!isWorkflowBlockType(log.blockType) &&
!isCondition &&
log.output?.providerTiming?.timeSegments &&
Array.isArray(log.output.providerTiming.timeSegments)
) {
@@ -317,7 +323,7 @@ export function buildTraceSpans(result: ExecutionResult): {
}
}
)
} else {
} else if (!isCondition) {
let toolCallsList = null
try {
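Condition blocks only route execution, so the `!isCondition` guards added above keep LLM-specific fields (provider timing, cost, tokens, model) off their spans. A hypothetical sketch of that gating, not taken from the source:

```typescript
// Hypothetical helper illustrating the gating above: condition blocks carry
// routing output only, so LLM fields are attached for other block types.
function attachLlmFields(
  span: Record<string, unknown>,
  output: any,
  isCondition: boolean
): Record<string, unknown> {
  if (isCondition || !output) return span
  if (output.cost) span.cost = output.cost
  if (output.tokens) span.tokens = output.tokens
  if (output.model) span.model = output.model
  return span
}
```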

View File

@@ -246,7 +246,7 @@ describe('categorizeError', () => {
const error = new Error('Server not accessible')
const result = categorizeError(error)
expect(result.status).toBe(404)
expect(result.message).toBe('Server not accessible')
expect(result.message).toBe('Resource not found')
})
it.concurrent('returns 401 for authentication errors', () => {
@@ -267,28 +267,28 @@ describe('categorizeError', () => {
const error = new Error('Invalid parameter provided')
const result = categorizeError(error)
expect(result.status).toBe(400)
expect(result.message).toBe('Invalid parameter provided')
expect(result.message).toBe('Invalid request parameters')
})
it.concurrent('returns 400 for missing required errors', () => {
const error = new Error('Missing required field: name')
const result = categorizeError(error)
expect(result.status).toBe(400)
expect(result.message).toBe('Missing required field: name')
expect(result.message).toBe('Invalid request parameters')
})
it.concurrent('returns 400 for validation errors', () => {
const error = new Error('Validation failed for input')
const result = categorizeError(error)
expect(result.status).toBe(400)
expect(result.message).toBe('Validation failed for input')
expect(result.message).toBe('Invalid request parameters')
})
it.concurrent('returns 500 for generic errors', () => {
const error = new Error('Something went wrong')
const result = categorizeError(error)
expect(result.status).toBe(500)
expect(result.message).toBe('Something went wrong')
expect(result.message).toBe('Internal server error')
})
it.concurrent('returns 500 for non-Error objects', () => {

View File

@@ -49,18 +49,18 @@ export const MCP_CLIENT_CONSTANTS = {
} as const
/**
* Create standardized MCP error response
* Create standardized MCP error response.
* Always returns the defaultMessage to clients to prevent leaking internal error details.
* Callers are responsible for logging the original error before calling this function.
*/
export function createMcpErrorResponse(
error: unknown,
_error: unknown,
defaultMessage: string,
status = 500
): NextResponse {
const errorMessage = error instanceof Error ? error.message : defaultMessage
const response: McpApiResponse = {
success: false,
error: errorMessage,
error: defaultMessage,
}
return NextResponse.json(response, { status })
@@ -115,36 +115,33 @@ export function validateRequiredFields(
}
/**
* Enhanced error categorization for more specific HTTP status codes
* Enhanced error categorization for more specific HTTP status codes.
* Returns safe, generic messages to prevent leaking internal details.
*/
export function categorizeError(error: unknown): { message: string; status: number } {
if (!(error instanceof Error)) {
return { message: 'Unknown error occurred', status: 500 }
}
const message = error.message.toLowerCase()
const msg = error.message.toLowerCase()
if (message.includes('timeout')) {
if (msg.includes('timeout')) {
return { message: 'Request timed out', status: 408 }
}
if (message.includes('not found') || message.includes('not accessible')) {
return { message: error.message, status: 404 }
if (msg.includes('not found') || msg.includes('not accessible')) {
return { message: 'Resource not found', status: 404 }
}
if (message.includes('authentication') || message.includes('unauthorized')) {
if (msg.includes('authentication') || msg.includes('unauthorized')) {
return { message: 'Authentication required', status: 401 }
}
if (
message.includes('invalid') ||
message.includes('missing required') ||
message.includes('validation')
) {
return { message: error.message, status: 400 }
if (msg.includes('invalid') || msg.includes('missing required') || msg.includes('validation')) {
return { message: 'Invalid request parameters', status: 400 }
}
return { message: error.message, status: 500 }
return { message: 'Internal server error', status: 500 }
}
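Extracted as a standalone function, the post-change `categorizeError` looks like this; the matched error text is used only for classification and is never echoed back to the client:

```typescript
// Final form of categorizeError after this change: internal error text is
// inspected for classification, but only generic messages reach the client.
function categorizeError(error: unknown): { message: string; status: number } {
  if (!(error instanceof Error)) {
    return { message: 'Unknown error occurred', status: 500 }
  }
  const msg = error.message.toLowerCase()
  if (msg.includes('timeout')) {
    return { message: 'Request timed out', status: 408 }
  }
  if (msg.includes('not found') || msg.includes('not accessible')) {
    return { message: 'Resource not found', status: 404 }
  }
  if (msg.includes('authentication') || msg.includes('unauthorized')) {
    return { message: 'Authentication required', status: 401 }
  }
  if (msg.includes('invalid') || msg.includes('missing required') || msg.includes('validation')) {
    return { message: 'Invalid request parameters', status: 400 }
  }
  return { message: 'Internal server error', status: 500 }
}
```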
/**

View File

@@ -1,16 +1,10 @@
/**
* Periodic memory telemetry for diagnosing heap growth in production.
* Logs process.memoryUsage(), V8 heap stats, and active SSE connection
* counts every 60s, enabling correlation between connection leaks and
* memory spikes.
* Periodic memory telemetry for monitoring heap growth in production.
* Logs process.memoryUsage() and V8 heap stats every 60s.
*/
import v8 from 'node:v8'
import { createLogger } from '@sim/logger'
import {
getActiveSSEConnectionCount,
getActiveSSEConnectionsByRoute,
} from '@/lib/monitoring/sse-connections'
const logger = createLogger('MemoryTelemetry', { logLevel: 'INFO' })
@@ -23,16 +17,6 @@ export function startMemoryTelemetry(intervalMs = 60_000) {
started = true
const timer = setInterval(() => {
// Trigger opportunistic (non-blocking) garbage collection if running on Bun.
// This signals JSC GC + mimalloc page purge without blocking the event loop,
// helping reclaim RSS that mimalloc otherwise retains under sustained load.
const bunGlobal = (globalThis as Record<string, unknown>).Bun as
| { gc?: (force: boolean) => void }
| undefined
if (typeof bunGlobal?.gc === 'function') {
bunGlobal.gc(false)
}
const mem = process.memoryUsage()
const heap = v8.getHeapStatistics()
@@ -49,8 +33,6 @@ export function startMemoryTelemetry(intervalMs = 60_000) {
? process.getActiveResourcesInfo().length
: -1,
uptimeMin: Math.round(process.uptime() / 60),
activeSSEConnections: getActiveSSEConnectionCount(),
sseByRoute: getActiveSSEConnectionsByRoute(),
})
}, intervalMs)
timer.unref()

View File

@@ -1,27 +0,0 @@
/**
* Tracks active SSE connections by route for memory leak diagnostics.
* Logged alongside periodic memory telemetry to correlate connection
* counts with heap growth.
*/
const connections = new Map<string, number>()
export function incrementSSEConnections(route: string) {
connections.set(route, (connections.get(route) ?? 0) + 1)
}
export function decrementSSEConnections(route: string) {
const count = (connections.get(route) ?? 0) - 1
if (count <= 0) connections.delete(route)
else connections.set(route, count)
}
export function getActiveSSEConnectionCount(): number {
let total = 0
for (const count of connections.values()) total += count
return total
}
export function getActiveSSEConnectionsByRoute(): Record<string, number> {
return Object.fromEntries(connections)
}

View File

@@ -170,11 +170,6 @@ describe('OAuth Token Refresh', () => {
describe('Body Credential Providers', () => {
const bodyCredentialProviders = [
{ name: 'Google', providerId: 'google', endpoint: 'https://oauth2.googleapis.com/token' },
{
name: 'GitHub',
providerId: 'github',
endpoint: 'https://github.com/login/oauth/access_token',
},
{
name: 'Microsoft',
providerId: 'microsoft',
@@ -279,19 +274,6 @@ describe('OAuth Token Refresh', () => {
)
})
it.concurrent('should include Accept header for GitHub requests', async () => {
const mockFetch = createMockFetch(defaultOAuthResponse)
const refreshToken = 'test_refresh_token'
await withMockFetch(mockFetch, () => refreshOAuthToken('github', refreshToken))
const [, requestOptions] = mockFetch.mock.calls[0] as [
string,
{ headers: Record<string, string>; body: string },
]
expect(requestOptions.headers.Accept).toBe('application/json')
})
it.concurrent('should include User-Agent header for Reddit requests', async () => {
const mockFetch = createMockFetch(defaultOAuthResponse)
const refreshToken = 'test_refresh_token'

View File

@@ -6,7 +6,6 @@ import {
CalComIcon,
ConfluenceIcon,
DropboxIcon,
GithubIcon,
GmailIcon,
GoogleBigQueryIcon,
GoogleCalendarIcon,
@@ -340,21 +339,6 @@ export const OAUTH_PROVIDERS: Record<string, OAuthProviderConfig> = {
},
defaultService: 'outlook',
},
github: {
name: 'GitHub',
icon: GithubIcon,
services: {
github: {
name: 'GitHub',
description: 'Manage repositories, issues, and pull requests.',
providerId: 'github-repo',
icon: GithubIcon,
baseProviderIcon: GithubIcon,
scopes: ['repo', 'user:email', 'read:user', 'workflow'],
},
},
defaultService: 'github',
},
x: {
name: 'X',
icon: xIcon,
@@ -474,6 +458,7 @@ export const OAUTH_PROVIDERS: Record<string, OAuthProviderConfig> = {
'read:comment:jira',
'delete:comment:jira',
'read:attachment:jira',
'write:attachment:jira',
'delete:attachment:jira',
'write:issue-worklog:jira',
'read:issue-worklog:jira',
@@ -639,6 +624,7 @@ export const OAUTH_PROVIDERS: Record<string, OAuthProviderConfig> = {
'im:history',
'im:read',
'users:read',
// TODO: Add 'users:read.email' once Slack app review is approved
'files:write',
'files:read',
'canvases:write',
@@ -987,19 +973,6 @@ function getProviderAuthConfig(provider: string): ProviderAuthConfig {
useBasicAuth: false,
}
}
case 'github': {
const { clientId, clientSecret } = getCredentials(
env.GITHUB_CLIENT_ID,
env.GITHUB_CLIENT_SECRET
)
return {
tokenEndpoint: 'https://github.com/login/oauth/access_token',
clientId,
clientSecret,
useBasicAuth: false,
additionalHeaders: { Accept: 'application/json' },
}
}
case 'x': {
const { clientId, clientSecret } = getCredentials(env.X_CLIENT_ID, env.X_CLIENT_SECRET)
return {

View File

@@ -15,8 +15,6 @@ export type OAuthProvider =
| 'google-groups'
| 'google-meet'
| 'vertex-ai'
| 'github'
| 'github-repo'
| 'x'
| 'confluence'
| 'airtable'
@@ -64,7 +62,6 @@ export type OAuthService =
| 'google-groups'
| 'google-meet'
| 'vertex-ai'
| 'github'
| 'x'
| 'confluence'
| 'airtable'

View File

@@ -66,11 +66,6 @@ describe('getAllOAuthServices', () => {
it.concurrent('should include single-service providers', () => {
const services = getAllOAuthServices()
const githubService = services.find((s) => s.providerId === 'github-repo')
expect(githubService).toBeDefined()
expect(githubService?.name).toBe('GitHub')
expect(githubService?.baseProvider).toBe('github')
const slackService = services.find((s) => s.providerId === 'slack')
expect(slackService).toBeDefined()
expect(slackService?.name).toBe('Slack')
@@ -145,14 +140,6 @@ describe('getServiceByProviderAndId', () => {
expect(service.name).toBe('Microsoft Excel')
})
it.concurrent('should work with single-service providers', () => {
const service = getServiceByProviderAndId('github')
expect(service).toBeDefined()
expect(service.providerId).toBe('github-repo')
expect(service.name).toBe('GitHub')
})
it.concurrent('should include scopes in returned service config', () => {
const service = getServiceByProviderAndId('google', 'gmail')
@@ -182,12 +169,6 @@ describe('getProviderIdFromServiceId', () => {
expect(providerId).toBe('outlook')
})
it.concurrent('should return correct providerId for GitHub', () => {
const providerId = getProviderIdFromServiceId('github')
expect(providerId).toBe('github-repo')
})
it.concurrent('should return correct providerId for Microsoft Excel', () => {
const providerId = getProviderIdFromServiceId('microsoft-excel')
@@ -262,14 +243,6 @@ describe('getServiceConfigByProviderId', () => {
expect(excelService?.name).toBe('Microsoft Excel')
})
it.concurrent('should work for GitHub', () => {
const service = getServiceConfigByProviderId('github-repo')
expect(service).toBeDefined()
expect(service?.providerId).toBe('github-repo')
expect(service?.name).toBe('GitHub')
})
it.concurrent('should work for Slack', () => {
const service = getServiceConfigByProviderId('slack')
@@ -338,14 +311,6 @@ describe('getCanonicalScopesForProvider', () => {
expect(excelScopes).toContain('Files.Read')
})
it.concurrent('should return scopes for GitHub', () => {
const scopes = getCanonicalScopesForProvider('github-repo')
expect(scopes.length).toBeGreaterThan(0)
expect(scopes).toContain('repo')
expect(scopes).toContain('user:email')
})
it.concurrent('should handle providers with empty scopes array', () => {
const scopes = getCanonicalScopesForProvider('notion')
@@ -397,13 +362,6 @@ describe('parseProvider', () => {
expect(teamsConfig.featureType).toBe('microsoft-teams')
})
it.concurrent('should parse GitHub provider', () => {
const config = parseProvider('github-repo' as OAuthProvider)
expect(config.baseProvider).toBe('github')
expect(config.featureType).toBe('github')
})
it.concurrent('should parse Slack provider', () => {
const config = parseProvider('slack' as OAuthProvider)

View File

@@ -157,6 +157,7 @@ export const SCOPE_DESCRIPTIONS: Record<string, string> = {
'read:comment:jira': 'Read comments on Jira issues',
'delete:comment:jira': 'Delete comments from Jira issues',
'read:attachment:jira': 'Read attachments from Jira issues',
'write:attachment:jira': 'Add attachments to Jira issues',
'delete:attachment:jira': 'Delete attachments from Jira issues',
'write:issue-worklog:jira': 'Add and update worklog entries on Jira issues',
'read:issue-worklog:jira': 'Read worklog entries from Jira issues',
@@ -269,6 +270,7 @@ export const SCOPE_DESCRIPTIONS: Record<string, string> = {
'im:history': 'Read direct message history',
'im:read': 'View direct message channels',
'users:read': 'View workspace users',
'users:read.email': 'View user email addresses',
'files:write': 'Upload files',
'files:read': 'Download and read files',
'canvases:write': 'Create canvas documents',

View File

@@ -1,12 +1,13 @@
import { db, webhook, workflow, workflowDeploymentVersion } from '@sim/db'
import { account, credentialSet, subscription } from '@sim/db/schema'
import { credentialSet, subscription } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, isNull, or } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { v4 as uuidv4 } from 'uuid'
import { checkEnterprisePlan, checkTeamPlan } from '@/lib/billing/subscriptions/utils'
import { getJobQueue, shouldExecuteInline } from '@/lib/core/async-jobs'
import { getInlineJobQueue, getJobQueue, shouldExecuteInline } from '@/lib/core/async-jobs'
import { isProd } from '@/lib/core/config/feature-flags'
import { safeCompare } from '@/lib/core/security/encryption'
import { getEffectiveDecryptedEnv } from '@/lib/environment/utils'
import { preprocessExecution } from '@/lib/execution/preprocessing'
import { convertSquareBracketsToTwiML } from '@/lib/webhooks/utils'
@@ -25,11 +26,10 @@ import {
validateTypeformSignature,
verifyProviderWebhook,
} from '@/lib/webhooks/utils.server'
import { getWorkspaceBilledAccountUserId } from '@/lib/workspaces/utils'
import { resolveOAuthAccountId } from '@/app/api/auth/oauth/utils'
import { executeWebhookJob } from '@/background/webhook-execution'
import { resolveEnvVarReferences } from '@/executor/utils/reference-validation'
import { isConfluencePayloadMatch } from '@/triggers/confluence/utils'
import { isPollingWebhookProvider } from '@/triggers/constants'
import { isGitHubEventMatch } from '@/triggers/github/utils'
import { isHubSpotContactEventMatch } from '@/triggers/hubspot/utils'
import { isJiraEventMatch } from '@/triggers/jira/utils'
@@ -40,6 +40,12 @@ export interface WebhookProcessorOptions {
requestId: string
path?: string
webhookId?: string
actorUserId?: string
}
export interface WebhookPreprocessingResult {
error: NextResponse | null
actorUserId?: string
}
function getExternalUrl(request: NextRequest): string {
@@ -800,14 +806,14 @@ export async function verifyProviderAuth(
if (secretHeaderName) {
const headerValue = request.headers.get(secretHeaderName.toLowerCase())
if (headerValue === configToken) {
if (headerValue && safeCompare(headerValue, configToken)) {
isTokenValid = true
}
} else {
const authHeader = request.headers.get('authorization')
if (authHeader?.toLowerCase().startsWith('bearer ')) {
const token = authHeader.substring(7)
if (token === configToken) {
if (safeCompare(token, configToken)) {
isTokenValid = true
}
}
@@ -835,7 +841,7 @@ export async function checkWebhookPreprocessing(
foundWorkflow: any,
foundWebhook: any,
requestId: string
): Promise<NextResponse | null> {
): Promise<WebhookPreprocessingResult> {
try {
const executionId = uuidv4()
@@ -848,6 +854,7 @@ export async function checkWebhookPreprocessing(
checkRateLimit: true,
checkDeployment: true,
workspaceId: foundWorkflow.workspaceId,
workflowRecord: foundWorkflow,
})
if (!preprocessResult.success) {
@@ -859,33 +866,39 @@ export async function checkWebhookPreprocessing(
})
if (foundWebhook.provider === 'microsoft-teams') {
return NextResponse.json(
{
type: 'message',
text: error.message,
},
{ status: error.statusCode }
)
return {
error: NextResponse.json(
{
type: 'message',
text: error.message,
},
{ status: error.statusCode }
),
}
}
return NextResponse.json({ error: error.message }, { status: error.statusCode })
return { error: NextResponse.json({ error: error.message }, { status: error.statusCode }) }
}
return null
return { error: null, actorUserId: preprocessResult.actorUserId }
} catch (preprocessError) {
logger.error(`[${requestId}] Error during webhook preprocessing:`, preprocessError)
if (foundWebhook.provider === 'microsoft-teams') {
return NextResponse.json(
{
type: 'message',
text: 'Internal error during preprocessing',
},
{ status: 500 }
)
return {
error: NextResponse.json(
{
type: 'message',
text: 'Internal error during preprocessing',
},
{ status: 500 }
),
}
}
return NextResponse.json({ error: 'Internal error during preprocessing' }, { status: 500 })
return {
error: NextResponse.json({ error: 'Internal error during preprocessing' }, { status: 500 }),
}
}
}
@@ -1037,7 +1050,7 @@ export async function queueWebhookExecution(
}
}
const headers = Object.fromEntries(request.headers.entries())
const { 'x-sim-idempotency-key': _, ...headers } = Object.fromEntries(request.headers.entries())
// For Microsoft Teams Graph notifications, extract unique identifiers for idempotency
if (
@@ -1055,26 +1068,22 @@ export async function queueWebhookExecution(
}
}
// Extract credentialId from webhook config
// Note: Each webhook now has its own credentialId (credential sets are fanned out at save time)
const providerConfig = (foundWebhook.providerConfig as Record<string, any>) || {}
const credentialId = providerConfig.credentialId as string | undefined
let credentialAccountUserId: string | undefined
if (credentialId) {
const resolved = await resolveOAuthAccountId(credentialId)
if (!resolved) {
logger.error(
`[${options.requestId}] Failed to resolve OAuth account for credential ${credentialId}`
)
return formatProviderErrorResponse(foundWebhook, 'Failed to resolve credential', 500)
if (foundWebhook.provider === 'generic') {
const idempotencyField = providerConfig.idempotencyField as string | undefined
if (idempotencyField && body) {
const value = idempotencyField
.split('.')
.reduce((acc: any, key: string) => acc?.[key], body)
if (value !== undefined && value !== null && typeof value !== 'object') {
headers['x-sim-idempotency-key'] = String(value)
}
}
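The dot-path lookup used for the generic provider's `idempotencyField` resolves nested keys safely, returning undefined on any missing hop; in isolation:

```typescript
// Standalone sketch of the dot-path lookup above: 'order.id' resolves
// body.order.id, with optional chaining absorbing any missing level.
function lookupPath(body: unknown, path: string): unknown {
  return path.split('.').reduce((acc: any, key: string) => acc?.[key], body)
}
```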
const [credentialRecord] = await db
.select({ userId: account.userId })
.from(account)
.where(eq(account.id, resolved.accountId))
.limit(1)
credentialAccountUserId = credentialRecord?.userId
}
const credentialId = providerConfig.credentialId as string | undefined
// credentialSetId is a direct field on webhook table, not in providerConfig
const credentialSetId = foundWebhook.credentialSetId as string | undefined
@@ -1089,16 +1098,9 @@ export async function queueWebhookExecution(
}
}
if (!foundWorkflow.workspaceId) {
logger.error(`[${options.requestId}] Workflow ${foundWorkflow.id} has no workspaceId`)
return NextResponse.json({ error: 'Workflow has no associated workspace' }, { status: 500 })
}
const actorUserId = await getWorkspaceBilledAccountUserId(foundWorkflow.workspaceId)
const actorUserId = options.actorUserId
if (!actorUserId) {
logger.error(
`[${options.requestId}] No billing account for workspace ${foundWorkflow.workspaceId}`
)
logger.error(`[${options.requestId}] No actorUserId provided for webhook ${foundWebhook.id}`)
return NextResponse.json({ error: 'Unable to resolve billing account' }, { status: 500 })
}
@@ -1111,19 +1113,28 @@ export async function queueWebhookExecution(
headers,
path: options.path || foundWebhook.path,
blockId: foundWebhook.blockId,
workspaceId: foundWorkflow.workspaceId,
...(credentialId ? { credentialId } : {}),
...(credentialAccountUserId ? { credentialAccountUserId } : {}),
}
const jobQueue = await getJobQueue()
const jobId = await jobQueue.enqueue('webhook-execution', payload, {
metadata: { workflowId: foundWorkflow.id, userId: actorUserId },
})
logger.info(
`[${options.requestId}] Queued webhook execution task ${jobId} for ${foundWebhook.provider} webhook`
)
const isPolling = isPollingWebhookProvider(payload.provider)
if (shouldExecuteInline()) {
if (isPolling && !shouldExecuteInline()) {
const jobQueue = await getJobQueue()
const jobId = await jobQueue.enqueue('webhook-execution', payload, {
metadata: { workflowId: foundWorkflow.id, userId: actorUserId },
})
logger.info(
`[${options.requestId}] Queued polling webhook execution task ${jobId} for ${foundWebhook.provider} webhook via job queue`
)
} else {
const jobQueue = await getInlineJobQueue()
const jobId = await jobQueue.enqueue('webhook-execution', payload, {
metadata: { workflowId: foundWorkflow.id, userId: actorUserId },
})
logger.info(
`[${options.requestId}] Executing ${foundWebhook.provider} webhook ${jobId} inline`
)
void (async () => {
try {
await jobQueue.startJob(jobId)
@@ -1166,6 +1177,12 @@ export async function queueWebhookExecution(
})
}
// Slack requires an empty 200 for interactive payloads (view_submission, block_actions, etc.)
// A JSON body like {"message":"..."} is not a recognized response format and causes modal errors
if (foundWebhook.provider === 'slack') {
return new NextResponse(null, { status: 200 })
}
// Twilio Voice requires TwiML XML response
if (foundWebhook.provider === 'twilio_voice') {
const providerConfig = (foundWebhook.providerConfig as Record<string, any>) || {}
@@ -1197,6 +1214,26 @@ export async function queueWebhookExecution(
})
}
if (foundWebhook.provider === 'generic' && providerConfig.responseMode === 'custom') {
const rawCode = Number(providerConfig.responseStatusCode) || 200
const statusCode = rawCode >= 100 && rawCode <= 599 ? rawCode : 200
const responseBody = (providerConfig.responseBody as string | undefined)?.trim()
if (!responseBody) {
return new NextResponse(null, { status: statusCode })
}
try {
const parsed = JSON.parse(responseBody)
return NextResponse.json(parsed, { status: statusCode })
} catch {
return new NextResponse(responseBody, {
status: statusCode,
headers: { 'Content-Type': 'text/plain' },
})
}
}
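The custom response mode above tries the configured body as JSON and falls back to text/plain, after clamping the status to the valid HTTP range. Sketched as a pure function (a hypothetical `buildCustomResponse` returning a plain object instead of a NextResponse):

```typescript
// Sketch of the custom-response fallback above: a configured body is served
// as JSON when it parses, otherwise as text/plain; out-of-range statuses
// fall back to 200.
function buildCustomResponse(rawCode: number, responseBody?: string) {
  const statusCode = rawCode >= 100 && rawCode <= 599 ? rawCode : 200
  const trimmed = responseBody?.trim()
  if (!trimmed) {
    return { status: statusCode, body: null, contentType: null }
  }
  try {
    return { status: statusCode, body: JSON.parse(trimmed), contentType: 'application/json' }
  } catch {
    return { status: statusCode, body: trimmed, contentType: 'text/plain' }
  }
}
```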
return NextResponse.json({ message: 'Webhook processed' })
} catch (error: any) {
logger.error(`[${options.requestId}] Failed to queue webhook execution:`, error)
@@ -1211,6 +1248,12 @@ export async function queueWebhookExecution(
)
}
if (foundWebhook.provider === 'slack') {
// Return empty 200 to avoid Slack showing an error dialog to the user,
// even though processing failed. The error is already logged above.
return new NextResponse(null, { status: 200 })
}
if (foundWebhook.provider === 'twilio_voice') {
const errorTwiml = `<?xml version="1.0" encoding="UTF-8"?>
<Response>

View File

@@ -17,6 +17,7 @@ const airtableLogger = createLogger('AirtableWebhook')
const typeformLogger = createLogger('TypeformWebhook')
const calendlyLogger = createLogger('CalendlyWebhook')
const grainLogger = createLogger('GrainWebhook')
const fathomLogger = createLogger('FathomWebhook')
const lemlistLogger = createLogger('LemlistWebhook')
const webflowLogger = createLogger('WebflowWebhook')
const attioLogger = createLogger('AttioWebhook')
@@ -792,6 +793,60 @@ export async function deleteGrainWebhook(webhook: any, requestId: string): Promi
}
}
/**
* Delete a Fathom webhook
* Don't fail webhook deletion if cleanup fails
*/
export async function deleteFathomWebhook(webhook: any, requestId: string): Promise<void> {
try {
const config = getProviderConfig(webhook)
const apiKey = config.apiKey as string | undefined
const externalId = config.externalId as string | undefined
if (!apiKey) {
fathomLogger.warn(
`[${requestId}] Missing apiKey for Fathom webhook deletion ${webhook.id}, skipping cleanup`
)
return
}
if (!externalId) {
fathomLogger.warn(
`[${requestId}] Missing externalId for Fathom webhook deletion ${webhook.id}, skipping cleanup`
)
return
}
const idValidation = validateAlphanumericId(externalId, 'Fathom webhook ID', 100)
if (!idValidation.isValid) {
fathomLogger.warn(
`[${requestId}] Invalid externalId format for Fathom webhook deletion ${webhook.id}, skipping cleanup`
)
return
}
const fathomApiUrl = `https://api.fathom.ai/external/v1/webhooks/${externalId}`
const fathomResponse = await fetch(fathomApiUrl, {
method: 'DELETE',
headers: {
'X-Api-Key': apiKey,
'Content-Type': 'application/json',
},
})
if (!fathomResponse.ok && fathomResponse.status !== 404) {
fathomLogger.warn(
`[${requestId}] Failed to delete Fathom webhook (non-fatal): ${fathomResponse.status}`
)
} else {
fathomLogger.info(`[${requestId}] Successfully deleted Fathom webhook ${externalId}`)
}
} catch (error) {
fathomLogger.warn(`[${requestId}] Error deleting Fathom webhook (non-fatal)`, error)
}
}
/**
* Delete a Lemlist webhook
* Don't fail webhook deletion if cleanup fails
@@ -1314,6 +1369,116 @@ export async function createGrainWebhookSubscription(
}
}
export async function createFathomWebhookSubscription(
_request: NextRequest,
webhookData: any,
requestId: string
): Promise<{ id: string } | undefined> {
try {
const { path, providerConfig } = webhookData
const {
apiKey,
triggerId,
triggeredFor,
includeSummary,
includeTranscript,
includeActionItems,
includeCrmMatches,
} = providerConfig || {}
if (!apiKey) {
fathomLogger.warn(`[${requestId}] Missing apiKey for Fathom webhook creation.`, {
webhookId: webhookData.id,
})
throw new Error(
'Fathom API Key is required. Please provide your API key in the trigger configuration.'
)
}
const notificationUrl = `${getBaseUrl()}/api/webhooks/trigger/${path}`
const triggeredForValue = triggeredFor || 'my_recordings'
const toBool = (val: unknown, fallback: boolean): boolean => {
if (val === undefined) return fallback
return val === true || val === 'true'
}
const requestBody: Record<string, any> = {
destination_url: notificationUrl,
triggered_for: [triggeredForValue],
include_summary: toBool(includeSummary, true),
include_transcript: toBool(includeTranscript, false),
include_action_items: toBool(includeActionItems, false),
include_crm_matches: toBool(includeCrmMatches, false),
}
fathomLogger.info(`[${requestId}] Creating Fathom webhook`, {
triggerId,
triggeredFor: triggeredForValue,
webhookId: webhookData.id,
})
const fathomResponse = await fetch('https://api.fathom.ai/external/v1/webhooks', {
method: 'POST',
headers: {
'X-Api-Key': apiKey,
'Content-Type': 'application/json',
},
body: JSON.stringify(requestBody),
})
const responseBody = await fathomResponse.json().catch(() => ({}))
if (!fathomResponse.ok) {
const errorMessage =
(responseBody as Record<string, string>).message ||
(responseBody as Record<string, string>).error ||
'Unknown Fathom API error'
fathomLogger.error(
`[${requestId}] Failed to create webhook in Fathom for webhook ${webhookData.id}. Status: ${fathomResponse.status}`,
{ message: errorMessage, response: responseBody }
)
let userFriendlyMessage = 'Failed to create webhook subscription in Fathom'
if (fathomResponse.status === 401) {
userFriendlyMessage = 'Invalid Fathom API Key. Please verify your key is correct.'
} else if (fathomResponse.status === 400) {
userFriendlyMessage = `Fathom error: ${errorMessage}`
} else if (errorMessage && errorMessage !== 'Unknown Fathom API error') {
userFriendlyMessage = `Fathom error: ${errorMessage}`
}
throw new Error(userFriendlyMessage)
}
if (!responseBody.id) {
fathomLogger.error(
`[${requestId}] Fathom webhook creation returned success but no webhook ID for ${webhookData.id}.`
)
throw new Error('Fathom webhook created but no ID returned. Please try again.')
}
fathomLogger.info(
`[${requestId}] Successfully created webhook in Fathom for webhook ${webhookData.id}.`,
{
fathomWebhookId: responseBody.id,
}
)
return { id: responseBody.id }
} catch (error: any) {
fathomLogger.error(
`[${requestId}] Exception during Fathom webhook creation for webhook ${webhookData.id}.`,
{
message: error.message,
stack: error.stack,
}
)
throw error
}
}
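The `toBool` coercion above is what lets an explicit `false` (boolean or string) win over the `include_summary` default: only `undefined` triggers the fallback. A standalone sketch, with hypothetical call sites for illustration:

```typescript
// Standalone sketch of the toBool helper used above. Assumed behavior:
// only `undefined` falls back to the default; any other value is
// coerced, so an explicit false/'false' is respected.
const toBool = (val: unknown, fallback: boolean): boolean => {
  if (val === undefined) return fallback
  return val === true || val === 'true'
}

// undefined -> the fallback applies
console.log(toBool(undefined, true)) // true
// explicit string 'false' overrides a true default
console.log(toBool('false', true)) // false
// any unrecognized value coerces to false
console.log(toBool('yes', true)) // false
```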
export async function createLemlistWebhookSubscription(
webhookData: any,
requestId: string
@@ -1811,6 +1976,7 @@ const PROVIDERS_WITH_EXTERNAL_SUBSCRIPTIONS = new Set([
'airtable',
'attio',
'calendly',
'fathom',
'webflow',
'typeform',
'grain',
@@ -1923,6 +2089,12 @@ export async function createExternalWebhookSubscription(
updatedProviderConfig = { ...updatedProviderConfig, webhookTag: usedTag }
}
externalSubscriptionCreated = true
} else if (provider === 'fathom') {
const result = await createFathomWebhookSubscription(request, webhookData, requestId)
if (result) {
updatedProviderConfig = { ...updatedProviderConfig, externalId: result.id }
externalSubscriptionCreated = true
}
} else if (provider === 'grain') {
const result = await createGrainWebhookSubscription(request, webhookData, requestId)
if (result) {
@@ -1968,6 +2140,8 @@ export async function cleanupExternalWebhook(
await deleteCalendlyWebhook(webhook, requestId)
} else if (webhook.provider === 'webflow') {
await deleteWebflowWebhook(webhook, workflow, requestId)
} else if (webhook.provider === 'fathom') {
await deleteFathomWebhook(webhook, requestId)
} else if (webhook.provider === 'grain') {
await deleteGrainWebhook(webhook, requestId)
} else if (webhook.provider === 'lemlist') {

View File

@@ -19,6 +19,7 @@ import {
refreshAccessTokenIfNeeded,
resolveOAuthAccountId,
} from '@/app/api/auth/oauth/utils'
import { isPollingWebhookProvider } from '@/triggers/constants'
const logger = createLogger('WebhookUtils')
@@ -2222,10 +2223,7 @@ export async function syncWebhooksForCredentialSet(params: {
`[${requestId}] Syncing webhooks for credential set ${credentialSetId}, provider ${provider}`
)
// Polling providers get unique paths per credential (for independent state)
// External webhook providers share the same path (external service sends to one URL)
const pollingProviders = ['gmail', 'outlook', 'rss', 'imap']
const useUniquePaths = pollingProviders.includes(provider)
const useUniquePaths = isPollingWebhookProvider(provider)
const credentials = await getCredentialsForCredentialSet(credentialSetId, oauthProviderId)

View File

@@ -433,7 +433,7 @@ describe('hasWorkflowChanged', () => {
expect(hasWorkflowChanged(state1, state2)).toBe(true)
})
it.concurrent('should detect subBlock type changes', () => {
it.concurrent('should ignore subBlock type changes', () => {
const state1 = createWorkflowState({
blocks: {
block1: createBlock('block1', {
@@ -448,7 +448,7 @@ describe('hasWorkflowChanged', () => {
}),
},
})
expect(hasWorkflowChanged(state1, state2)).toBe(true)
expect(hasWorkflowChanged(state1, state2)).toBe(false)
})
it.concurrent('should handle null/undefined subBlock values consistently', () => {

View File

@@ -496,7 +496,14 @@ export function normalizeSubBlockValue(subBlockId: string, value: unknown): unkn
* @returns SubBlock fields excluding value and is_diff
*/
export function extractSubBlockRest(subBlock: Record<string, unknown>): Record<string, unknown> {
const { value: _v, is_diff: _sd, ...rest } = subBlock as SubBlockWithDiffMarker
const {
value: _v,
is_diff: _sd,
type: _type,
...rest
} = subBlock as SubBlockWithDiffMarker & {
type?: unknown
}
return rest
}

View File

@@ -0,0 +1,57 @@
import { EDGE } from '@/executor/constants'
/**
* Remaps condition/router block IDs in a parsed conditions array.
* Condition IDs use the format `{blockId}-{suffix}` and must be updated
* when a block is duplicated to reference the new block ID.
*
* @param conditions - Parsed array of condition block objects with `id` fields
* @param oldBlockId - The original block ID prefix to replace
* @param newBlockId - The new block ID prefix
 * @returns Whether any IDs were changed (the `conditions` array is mutated in place)
*/
export function remapConditionBlockIds(
conditions: Array<{ id: string; [key: string]: unknown }>,
oldBlockId: string,
newBlockId: string
): boolean {
let changed = false
const prefix = `${oldBlockId}-`
for (const condition of conditions) {
if (typeof condition.id === 'string' && condition.id.startsWith(prefix)) {
const suffix = condition.id.slice(prefix.length)
condition.id = `${newBlockId}-${suffix}`
changed = true
}
}
return changed
}
/** Handle prefixes that embed block-scoped condition/router IDs */
const HANDLE_PREFIXES = [EDGE.CONDITION_PREFIX, EDGE.ROUTER_PREFIX] as const
/**
* Remaps a condition or router edge sourceHandle from the old block ID to the new one.
* Handle formats:
* - Condition: `condition-{blockId}-{suffix}`
* - Router V2: `router-{blockId}-{suffix}`
*
* @returns The remapped handle string, or the original if no remapping needed
*/
export function remapConditionEdgeHandle(
sourceHandle: string,
oldBlockId: string,
newBlockId: string
): string {
for (const handlePrefix of HANDLE_PREFIXES) {
if (!sourceHandle.startsWith(handlePrefix)) continue
const innerId = sourceHandle.slice(handlePrefix.length)
if (!innerId.startsWith(`${oldBlockId}-`)) continue
const suffix = innerId.slice(oldBlockId.length + 1)
return `${handlePrefix}${newBlockId}-${suffix}`
}
return sourceHandle
}
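A minimal usage sketch of the two helpers above, inlined so it runs standalone. The `'condition-'`/`'router-'` prefixes here are assumptions standing in for `EDGE.CONDITION_PREFIX`/`EDGE.ROUTER_PREFIX` from `@/executor/constants`:

```typescript
// Inlined copies of the helpers above so the example is self-contained.
const HANDLE_PREFIXES = ['condition-', 'router-'] as const

function remapConditionBlockIds(
  conditions: Array<{ id: string; [key: string]: unknown }>,
  oldBlockId: string,
  newBlockId: string
): boolean {
  let changed = false
  const prefix = `${oldBlockId}-`
  for (const condition of conditions) {
    if (typeof condition.id === 'string' && condition.id.startsWith(prefix)) {
      condition.id = `${newBlockId}-${condition.id.slice(prefix.length)}`
      changed = true
    }
  }
  return changed
}

function remapConditionEdgeHandle(
  sourceHandle: string,
  oldBlockId: string,
  newBlockId: string
): string {
  for (const handlePrefix of HANDLE_PREFIXES) {
    if (!sourceHandle.startsWith(handlePrefix)) continue
    const innerId = sourceHandle.slice(handlePrefix.length)
    if (!innerId.startsWith(`${oldBlockId}-`)) continue
    const suffix = innerId.slice(oldBlockId.length + 1)
    return `${handlePrefix}${newBlockId}-${suffix}`
  }
  return sourceHandle
}

// Condition IDs belonging to the duplicated block are rewritten in place;
// IDs with a different block prefix are left alone.
const conditions = [{ id: 'block-a-if' }, { id: 'other-else' }]
console.log(remapConditionBlockIds(conditions, 'block-a', 'block-b')) // true
console.log(conditions[0].id) // 'block-b-if'
console.log(conditions[1].id) // 'other-else'

// Edge handles get the same treatment, keyed by handle prefix.
console.log(remapConditionEdgeHandle('condition-block-a-if', 'block-a', 'block-b'))
// 'condition-block-b-if'
console.log(remapConditionEdgeHandle('plain-handle', 'block-a', 'block-b'))
// 'plain-handle' (no known prefix, returned as-is)
```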

View File

@@ -8,6 +8,7 @@ import {
} from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, isNull, min } from 'drizzle-orm'
import { remapConditionBlockIds, remapConditionEdgeHandle } from '@/lib/workflows/condition-ids'
import { authorizeWorkflowByWorkspacePermission } from '@/lib/workflows/utils'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
import type { Variable } from '@/stores/panel/variables/types'
@@ -77,6 +78,40 @@ function remapVariableIdsInSubBlocks(
return updated
}
/**
* Remaps condition/router block IDs within subBlocks when a block is duplicated.
* Returns a new object without mutating the input.
*/
function remapConditionIdsInSubBlocks(
subBlocks: Record<string, any>,
oldBlockId: string,
newBlockId: string
): Record<string, any> {
const updated: Record<string, any> = {}
for (const [key, subBlock] of Object.entries(subBlocks)) {
if (
subBlock &&
typeof subBlock === 'object' &&
(subBlock.type === 'condition-input' || subBlock.type === 'router-input') &&
typeof subBlock.value === 'string'
) {
try {
const parsed = JSON.parse(subBlock.value)
if (Array.isArray(parsed) && remapConditionBlockIds(parsed, oldBlockId, newBlockId)) {
updated[key] = { ...subBlock, value: JSON.stringify(parsed) }
continue
}
} catch {
// Not valid JSON, skip
}
}
updated[key] = subBlock
}
return updated
}
/**
* Duplicate a workflow with all its blocks, edges, and subflows
* This is a shared helper used by both the workflow duplicate API and folder duplicate API
@@ -259,6 +294,15 @@ export async function duplicateWorkflow(
)
}
// Remap condition/router IDs to use the new block ID
if (updatedSubBlocks && typeof updatedSubBlocks === 'object') {
updatedSubBlocks = remapConditionIdsInSubBlocks(
updatedSubBlocks as Record<string, any>,
block.id,
newBlockId
)
}
return {
...block,
id: newBlockId,
@@ -286,15 +330,24 @@ export async function duplicateWorkflow(
.where(eq(workflowEdges.workflowId, sourceWorkflowId))
if (sourceEdges.length > 0) {
const newEdges = sourceEdges.map((edge) => ({
...edge,
id: crypto.randomUUID(), // Generate new edge ID
workflowId: newWorkflowId,
sourceBlockId: blockIdMapping.get(edge.sourceBlockId) || edge.sourceBlockId,
targetBlockId: blockIdMapping.get(edge.targetBlockId) || edge.targetBlockId,
createdAt: now,
updatedAt: now,
}))
const newEdges = sourceEdges.map((edge) => {
const newSourceBlockId = blockIdMapping.get(edge.sourceBlockId) || edge.sourceBlockId
const newSourceHandle =
edge.sourceHandle && blockIdMapping.has(edge.sourceBlockId)
? remapConditionEdgeHandle(edge.sourceHandle, edge.sourceBlockId, newSourceBlockId)
: edge.sourceHandle
return {
...edge,
id: crypto.randomUUID(),
workflowId: newWorkflowId,
sourceBlockId: newSourceBlockId,
targetBlockId: blockIdMapping.get(edge.targetBlockId) || edge.targetBlockId,
sourceHandle: newSourceHandle,
createdAt: now,
updatedAt: now,
}
})
await tx.insert(workflowEdges).values(newEdges)
logger.info(`[${requestId}] Copied ${sourceEdges.length} edges with updated block references`)

View File

@@ -14,6 +14,7 @@ import { and, desc, eq, inArray, sql } from 'drizzle-orm'
import type { Edge } from 'reactflow'
import { v4 as uuidv4 } from 'uuid'
import type { DbOrTx } from '@/lib/db/types'
import { remapConditionBlockIds, remapConditionEdgeHandle } from '@/lib/workflows/condition-ids'
import {
backfillCanonicalModes,
migrateSubblockIds,
@@ -833,7 +834,12 @@ export function regenerateWorkflowStateIds(state: RegenerateStateInput): Regener
Object.entries(state.blocks || {}).forEach(([oldId, block]) => {
const newId = blockIdMapping.get(oldId)!
// Duplicated blocks are always unlocked so users can edit them
const newBlock: BlockState = { ...block, id: newId, locked: false }
const newBlock: BlockState = {
...block,
id: newId,
subBlocks: JSON.parse(JSON.stringify(block.subBlocks)),
locked: false,
}
// Update parentId reference if it exists
if (newBlock.data?.parentId) {
@@ -857,6 +863,21 @@ export function regenerateWorkflowStateIds(state: RegenerateStateInput): Regener
updatedSubBlock.value = blockIdMapping.get(updatedSubBlock.value) ?? updatedSubBlock.value
}
// Remap condition/router IDs embedded in condition-input/router-input subBlocks
if (
(updatedSubBlock.type === 'condition-input' || updatedSubBlock.type === 'router-input') &&
typeof updatedSubBlock.value === 'string'
) {
try {
const parsed = JSON.parse(updatedSubBlock.value)
if (Array.isArray(parsed) && remapConditionBlockIds(parsed, oldId, newId)) {
updatedSubBlock.value = JSON.stringify(parsed)
}
} catch {
// Not valid JSON, skip
}
}
updatedSubBlocks[subId] = updatedSubBlock
})
newBlock.subBlocks = updatedSubBlocks
@@ -871,12 +892,17 @@ export function regenerateWorkflowStateIds(state: RegenerateStateInput): Regener
const newId = edgeIdMapping.get(edge.id)!
const newSource = blockIdMapping.get(edge.source) || edge.source
const newTarget = blockIdMapping.get(edge.target) || edge.target
const newSourceHandle =
edge.sourceHandle && blockIdMapping.has(edge.source)
? remapConditionEdgeHandle(edge.sourceHandle, edge.source, newSource)
: edge.sourceHandle
newEdges.push({
...edge,
id: newId,
source: newSource,
target: newTarget,
sourceHandle: newSourceHandle,
})
})

View File

@@ -1,5 +1,6 @@
import type { IncomingMessage, ServerResponse } from 'http'
import { env } from '@/lib/core/config/env'
import { safeCompare } from '@/lib/core/security/encryption'
import type { IRoomManager } from '@/socket/rooms'
interface Logger {
@@ -21,7 +22,8 @@ function checkInternalApiKey(req: IncomingMessage): { success: boolean; error?:
return { success: false, error: 'API key required' }
}
if (apiKey !== expectedApiKey) {
const apiKeyStr = Array.isArray(apiKey) ? apiKey[0] : apiKey
if (!apiKeyStr || !safeCompare(apiKeyStr, expectedApiKey)) {
return { success: false, error: 'Invalid API key' }
}
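The switch from `!==` to `safeCompare` avoids a timing side channel when comparing API keys. A hypothetical constant-time comparison along those lines, using Node's built-in `crypto.timingSafeEqual` (the real `safeCompare` lives in `@/lib/core/security/encryption` and may differ):

```typescript
import { timingSafeEqual } from 'node:crypto'

// Hypothetical safeCompare-style helper: compares two strings in
// constant time so an attacker cannot learn a matching prefix from
// response latency. Note the early length check still reveals the
// expected key length; some implementations hash both inputs first
// to hide even that.
function safeCompareSketch(a: string, b: string): boolean {
  const bufA = Buffer.from(a)
  const bufB = Buffer.from(b)
  if (bufA.length !== bufB.length) return false
  return timingSafeEqual(bufA, bufB)
}

console.log(safeCompareSketch('secret-key', 'secret-key')) // true
console.log(safeCompareSketch('secret-key', 'secret-kez')) // false
```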

View File

@@ -2,6 +2,7 @@ import type { Edge } from 'reactflow'
import { v4 as uuidv4 } from 'uuid'
import { DEFAULT_DUPLICATE_OFFSET } from '@/lib/workflows/autolayout/constants'
import { getEffectiveBlockOutputs } from '@/lib/workflows/blocks/block-outputs'
import { remapConditionBlockIds, remapConditionEdgeHandle } from '@/lib/workflows/condition-ids'
import { mergeSubblockStateWithValues } from '@/lib/workflows/subblocks'
import { buildDefaultCanonicalModes } from '@/lib/workflows/subblocks/visibility'
import { hasTriggerCapability } from '@/lib/workflows/triggers/trigger-utils'
@@ -363,13 +364,15 @@ export function regenerateWorkflowIds(
const nameMap = new Map<string, string>()
const newBlocks: Record<string, BlockState> = {}
// First pass: generate new IDs
// First pass: generate new IDs and remap condition/router IDs in subBlocks
Object.entries(workflowState.blocks).forEach(([oldId, block]) => {
const newId = uuidv4()
blockIdMap.set(oldId, newId)
const oldNormalizedName = normalizeName(block.name)
nameMap.set(oldNormalizedName, oldNormalizedName)
newBlocks[newId] = { ...block, id: newId }
const newBlock = { ...block, id: newId, subBlocks: JSON.parse(JSON.stringify(block.subBlocks)) }
remapConditionIds(newBlock.subBlocks, {}, oldId, newId)
newBlocks[newId] = newBlock
})
// Second pass: update parentId references
@@ -385,12 +388,21 @@ export function regenerateWorkflowIds(
}
})
const newEdges = workflowState.edges.map((edge) => ({
...edge,
id: uuidv4(),
source: blockIdMap.get(edge.source) || edge.source,
target: blockIdMap.get(edge.target) || edge.target,
}))
const newEdges = workflowState.edges.map((edge) => {
const newSource = blockIdMap.get(edge.source) || edge.source
const newSourceHandle =
edge.sourceHandle && blockIdMap.has(edge.source)
? remapConditionEdgeHandle(edge.sourceHandle, edge.source, newSource)
: edge.sourceHandle
return {
...edge,
id: uuidv4(),
source: newSource,
target: blockIdMap.get(edge.target) || edge.target,
sourceHandle: newSourceHandle,
}
})
const newLoops: Record<string, Loop> = {}
if (workflowState.loops) {
@@ -429,6 +441,37 @@ export function regenerateWorkflowIds(
}
}
/**
* Remaps condition/router block IDs within subBlock values when a block is duplicated.
* Mutates both `subBlocks` and `subBlockValues` in place (callers must pass cloned data).
*/
export function remapConditionIds(
subBlocks: Record<string, SubBlockState>,
subBlockValues: Record<string, unknown>,
oldBlockId: string,
newBlockId: string
): void {
for (const [subBlockId, subBlock] of Object.entries(subBlocks)) {
if (subBlock.type !== 'condition-input' && subBlock.type !== 'router-input') continue
const value = subBlockValues[subBlockId] ?? subBlock.value
if (typeof value !== 'string') continue
try {
const parsed = JSON.parse(value)
if (!Array.isArray(parsed)) continue
if (remapConditionBlockIds(parsed, oldBlockId, newBlockId)) {
const newValue = JSON.stringify(parsed)
subBlock.value = newValue
subBlockValues[subBlockId] = newValue
}
} catch {
// Not valid JSON, skip
}
}
}
export function regenerateBlockIds(
blocks: Record<string, BlockState>,
edges: Edge[],
@@ -497,6 +540,7 @@ export function regenerateBlockIds(
id: newId,
name: newName,
position: newPosition,
subBlocks: JSON.parse(JSON.stringify(block.subBlocks)),
// Temporarily keep data as-is, we'll fix parentId in second pass
data: block.data ? { ...block.data } : block.data,
// Duplicated blocks are always unlocked so users can edit them
@@ -510,6 +554,9 @@ export function regenerateBlockIds(
if (subBlockValues[oldId]) {
newSubBlockValues[newId] = JSON.parse(JSON.stringify(subBlockValues[oldId]))
}
// Remap condition/router IDs in the duplicated block
remapConditionIds(newBlock.subBlocks, newSubBlockValues[newId] || {}, oldId, newId)
})
// Second pass: update parentId references for nested blocks
@@ -542,12 +589,21 @@ export function regenerateBlockIds(
}
})
const newEdges = edges.map((edge) => ({
...edge,
id: uuidv4(),
source: blockIdMap.get(edge.source) || edge.source,
target: blockIdMap.get(edge.target) || edge.target,
}))
const newEdges = edges.map((edge) => {
const newSource = blockIdMap.get(edge.source) || edge.source
const newSourceHandle =
edge.sourceHandle && blockIdMap.has(edge.source)
? remapConditionEdgeHandle(edge.sourceHandle, edge.source, newSource)
: edge.sourceHandle
return {
...edge,
id: uuidv4(),
source: newSource,
target: blockIdMap.get(edge.target) || edge.target,
sourceHandle: newSourceHandle,
}
})
const newLoops: Record<string, Loop> = {}
Object.entries(loops).forEach(([oldLoopId, loop]) => {

View File

@@ -12,6 +12,7 @@ import {
filterValidEdges,
getUniqueBlockName,
mergeSubblockState,
remapConditionIds,
} from '@/stores/workflows/utils'
import type {
Position,
@@ -611,6 +612,21 @@ export const useWorkflowStore = create<WorkflowStore>()(
{}
)
// Remap condition/router IDs in the duplicated subBlocks
const clonedSubBlockValues = activeWorkflowId
? JSON.parse(
JSON.stringify(
useSubBlockStore.getState().workflowValues[activeWorkflowId]?.[id] || {}
)
)
: {}
remapConditionIds(
newSubBlocks as Record<string, SubBlockState>,
clonedSubBlockValues,
id,
newId
)
const newState = {
blocks: {
...get().blocks,
@@ -630,14 +646,12 @@ export const useWorkflowStore = create<WorkflowStore>()(
}
if (activeWorkflowId) {
const subBlockValues =
useSubBlockStore.getState().workflowValues[activeWorkflowId]?.[id] || {}
useSubBlockStore.setState((state) => ({
workflowValues: {
...state.workflowValues,
[activeWorkflowId]: {
...state.workflowValues[activeWorkflowId],
[newId]: JSON.parse(JSON.stringify(subBlockValues)),
[newId]: clonedSubBlockValues,
},
},
}))

View File

@@ -0,0 +1,74 @@
import type { FathomGetSummaryParams, FathomGetSummaryResponse } from '@/tools/fathom/types'
import type { ToolConfig } from '@/tools/types'
export const getSummaryTool: ToolConfig<FathomGetSummaryParams, FathomGetSummaryResponse> = {
id: 'fathom_get_summary',
name: 'Fathom Get Summary',
description: 'Get the call summary for a specific meeting recording.',
version: '1.0.0',
params: {
apiKey: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'Fathom API Key',
},
recordingId: {
type: 'string',
required: true,
visibility: 'user-or-llm',
description: 'The recording ID of the meeting',
},
},
request: {
url: (params) =>
`https://api.fathom.ai/external/v1/recordings/${encodeURIComponent(params.recordingId.trim())}/summary`,
method: 'GET',
headers: (params) => ({
'X-Api-Key': params.apiKey,
'Content-Type': 'application/json',
}),
},
transformResponse: async (response: Response) => {
if (!response.ok) {
const errorData = await response.json().catch(() => ({}))
return {
success: false,
error:
(errorData as Record<string, string>).message ||
`Fathom API error: ${response.status} ${response.statusText}`,
output: {
template_name: null,
markdown_formatted: null,
},
}
}
const data = await response.json()
const summary = data.summary ?? data
return {
success: true,
output: {
template_name: summary.template_name ?? null,
markdown_formatted: summary.markdown_formatted ?? null,
},
}
},
outputs: {
template_name: {
type: 'string',
description: 'Name of the summary template used',
optional: true,
},
markdown_formatted: {
type: 'string',
description: 'Markdown-formatted summary text',
optional: true,
},
},
}

View File

@@ -0,0 +1,95 @@
import type { FathomGetTranscriptParams, FathomGetTranscriptResponse } from '@/tools/fathom/types'
import type { ToolConfig } from '@/tools/types'
export const getTranscriptTool: ToolConfig<FathomGetTranscriptParams, FathomGetTranscriptResponse> =
{
id: 'fathom_get_transcript',
name: 'Fathom Get Transcript',
description: 'Get the full transcript for a specific meeting recording.',
version: '1.0.0',
params: {
apiKey: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'Fathom API Key',
},
recordingId: {
type: 'string',
required: true,
visibility: 'user-or-llm',
description: 'The recording ID of the meeting',
},
},
request: {
url: (params) =>
`https://api.fathom.ai/external/v1/recordings/${encodeURIComponent(params.recordingId.trim())}/transcript`,
method: 'GET',
headers: (params) => ({
'X-Api-Key': params.apiKey,
'Content-Type': 'application/json',
}),
},
transformResponse: async (response: Response) => {
if (!response.ok) {
const errorData = await response.json().catch(() => ({}))
return {
success: false,
error:
(errorData as Record<string, string>).message ||
`Fathom API error: ${response.status} ${response.statusText}`,
output: {
transcript: [],
},
}
}
const data = await response.json()
const transcript = (data.transcript ?? []).map(
(entry: { speaker?: Record<string, unknown>; text?: string; timestamp?: string }) => ({
speaker: {
display_name: entry.speaker?.display_name ?? '',
matched_calendar_invitee_email: entry.speaker?.matched_calendar_invitee_email ?? null,
},
text: entry.text ?? '',
timestamp: entry.timestamp ?? '',
})
)
return {
success: true,
output: {
transcript,
},
}
},
outputs: {
transcript: {
type: 'array',
description: 'Array of transcript entries with speaker, text, and timestamp',
items: {
type: 'object',
properties: {
speaker: {
type: 'object',
description: 'Speaker information',
properties: {
display_name: { type: 'string', description: 'Speaker display name' },
matched_calendar_invitee_email: {
type: 'string',
description: 'Matched calendar invitee email',
optional: true,
},
},
},
text: { type: 'string', description: 'Transcript text' },
timestamp: { type: 'string', description: 'Timestamp (HH:MM:SS)' },
},
},
},
},
}

View File

@@ -0,0 +1,13 @@
import { getSummaryTool } from '@/tools/fathom/get_summary'
import { getTranscriptTool } from '@/tools/fathom/get_transcript'
import { listMeetingsTool } from '@/tools/fathom/list_meetings'
import { listTeamMembersTool } from '@/tools/fathom/list_team_members'
import { listTeamsTool } from '@/tools/fathom/list_teams'
export const fathomGetSummaryTool = getSummaryTool
export const fathomGetTranscriptTool = getTranscriptTool
export const fathomListMeetingsTool = listMeetingsTool
export const fathomListTeamMembersTool = listTeamMembersTool
export const fathomListTeamsTool = listTeamsTool
export * from './types'

View File

@@ -0,0 +1,174 @@
import type { FathomListMeetingsParams, FathomListMeetingsResponse } from '@/tools/fathom/types'
import type { ToolConfig } from '@/tools/types'
export const listMeetingsTool: ToolConfig<FathomListMeetingsParams, FathomListMeetingsResponse> = {
id: 'fathom_list_meetings',
name: 'Fathom List Meetings',
description: 'List recent meetings recorded by the user or shared to their team.',
version: '1.0.0',
params: {
apiKey: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'Fathom API Key',
},
includeSummary: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'Include meeting summary (true/false)',
},
includeTranscript: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'Include meeting transcript (true/false)',
},
includeActionItems: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'Include action items (true/false)',
},
includeCrmMatches: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'Include linked CRM matches (true/false)',
},
createdAfter: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'Filter meetings created after this ISO 8601 timestamp',
},
createdBefore: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'Filter meetings created before this ISO 8601 timestamp',
},
recordedBy: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'Filter by recorder email address',
},
teams: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'Filter by team name',
},
cursor: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'Pagination cursor from a previous response',
},
},
request: {
url: (params) => {
const url = new URL('https://api.fathom.ai/external/v1/meetings')
if (params.includeSummary === 'true') url.searchParams.append('include_summary', 'true')
if (params.includeTranscript === 'true') url.searchParams.append('include_transcript', 'true')
if (params.includeActionItems === 'true')
url.searchParams.append('include_action_items', 'true')
if (params.includeCrmMatches === 'true')
url.searchParams.append('include_crm_matches', 'true')
if (params.createdAfter) url.searchParams.append('created_after', params.createdAfter)
if (params.createdBefore) url.searchParams.append('created_before', params.createdBefore)
if (params.recordedBy) url.searchParams.append('recorded_by[]', params.recordedBy)
if (params.teams) url.searchParams.append('teams[]', params.teams)
if (params.cursor) url.searchParams.append('cursor', params.cursor)
return url.toString()
},
method: 'GET',
headers: (params) => ({
'X-Api-Key': params.apiKey,
'Content-Type': 'application/json',
}),
},
transformResponse: async (response: Response) => {
if (!response.ok) {
const errorData = await response.json().catch(() => ({}))
return {
success: false,
error:
(errorData as Record<string, string>).message ||
`Fathom API error: ${response.status} ${response.statusText}`,
output: {
meetings: [],
next_cursor: null,
},
}
}
const data = await response.json()
const meetings = (data.items ?? []).map(
(meeting: Record<string, unknown> & { recorded_by?: Record<string, unknown> }) => ({
title: meeting.title ?? '',
meeting_title: meeting.meeting_title ?? null,
recording_id: meeting.recording_id ?? null,
url: meeting.url ?? '',
share_url: meeting.share_url ?? '',
created_at: meeting.created_at ?? '',
scheduled_start_time: meeting.scheduled_start_time ?? null,
scheduled_end_time: meeting.scheduled_end_time ?? null,
recording_start_time: meeting.recording_start_time ?? null,
recording_end_time: meeting.recording_end_time ?? null,
transcript_language: meeting.transcript_language ?? '',
calendar_invitees_domains_type: meeting.calendar_invitees_domains_type ?? null,
recorded_by: meeting.recorded_by
? {
name: meeting.recorded_by.name ?? '',
email: meeting.recorded_by.email ?? '',
email_domain: meeting.recorded_by.email_domain ?? '',
team: meeting.recorded_by.team ?? null,
}
: null,
calendar_invitees: (meeting.calendar_invitees as Array<Record<string, unknown>>) ?? [],
default_summary: meeting.default_summary ?? null,
transcript: meeting.transcript ?? null,
action_items: meeting.action_items ?? null,
crm_matches: meeting.crm_matches ?? null,
})
)
return {
success: true,
output: {
meetings,
next_cursor: data.next_cursor ?? null,
},
}
},
outputs: {
meetings: {
type: 'array',
description: 'List of meetings',
items: {
type: 'object',
properties: {
title: { type: 'string', description: 'Meeting title' },
recording_id: { type: 'number', description: 'Unique recording ID' },
url: { type: 'string', description: 'URL to view the meeting' },
share_url: { type: 'string', description: 'Shareable URL' },
created_at: { type: 'string', description: 'Creation timestamp' },
transcript_language: { type: 'string', description: 'Transcript language' },
},
},
},
next_cursor: {
type: 'string',
description: 'Pagination cursor for next page',
optional: true,
},
},
}

View File

@@ -0,0 +1,103 @@
import type {
FathomListTeamMembersParams,
FathomListTeamMembersResponse,
} from '@/tools/fathom/types'
import type { ToolConfig } from '@/tools/types'
export const listTeamMembersTool: ToolConfig<
FathomListTeamMembersParams,
FathomListTeamMembersResponse
> = {
id: 'fathom_list_team_members',
name: 'Fathom List Team Members',
description: 'List team members in your Fathom organization.',
version: '1.0.0',
params: {
apiKey: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'Fathom API Key',
},
teams: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'Team name to filter by',
},
cursor: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'Pagination cursor from a previous response',
},
},
request: {
url: (params) => {
const url = new URL('https://api.fathom.ai/external/v1/team_members')
if (params.teams) url.searchParams.append('team', params.teams)
if (params.cursor) url.searchParams.append('cursor', params.cursor)
return url.toString()
},
method: 'GET',
headers: (params) => ({
'X-Api-Key': params.apiKey,
'Content-Type': 'application/json',
}),
},
transformResponse: async (response: Response) => {
if (!response.ok) {
const errorData = await response.json().catch(() => ({}))
return {
success: false,
error:
(errorData as Record<string, string>).message ||
`Fathom API error: ${response.status} ${response.statusText}`,
output: {
members: [],
next_cursor: null,
},
}
}
const data = await response.json()
const members = (data.items ?? []).map(
(member: { name?: string; email?: string; created_at?: string }) => ({
name: member.name ?? '',
email: member.email ?? '',
created_at: member.created_at ?? '',
})
)
return {
success: true,
output: {
members,
next_cursor: data.next_cursor ?? null,
},
}
},
outputs: {
members: {
type: 'array',
description: 'List of team members',
items: {
type: 'object',
properties: {
name: { type: 'string', description: 'Team member name' },
email: { type: 'string', description: 'Team member email' },
created_at: { type: 'string', description: 'Date the member was added' },
},
},
},
next_cursor: {
type: 'string',
description: 'Pagination cursor for next page',
optional: true,
},
},
}

View File

@@ -0,0 +1,86 @@
import type { FathomListTeamsParams, FathomListTeamsResponse } from '@/tools/fathom/types'
import type { ToolConfig } from '@/tools/types'
export const listTeamsTool: ToolConfig<FathomListTeamsParams, FathomListTeamsResponse> = {
id: 'fathom_list_teams',
name: 'Fathom List Teams',
description: 'List teams in your Fathom organization.',
version: '1.0.0',
params: {
apiKey: {
type: 'string',
required: true,
visibility: 'user-only',
description: 'Fathom API Key',
},
cursor: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'Pagination cursor from a previous response',
},
},
request: {
url: (params) => {
const url = new URL('https://api.fathom.ai/external/v1/teams')
if (params.cursor) url.searchParams.append('cursor', params.cursor)
return url.toString()
},
method: 'GET',
headers: (params) => ({
'X-Api-Key': params.apiKey,
'Content-Type': 'application/json',
}),
},
transformResponse: async (response: Response) => {
if (!response.ok) {
const errorData = await response.json().catch(() => ({}))
return {
success: false,
error:
(errorData as Record<string, string>).message ||
`Fathom API error: ${response.status} ${response.statusText}`,
output: {
teams: [],
next_cursor: null,
},
}
}
const data = await response.json()
const teams = (data.items ?? []).map((team: { name?: string; created_at?: string }) => ({
name: team.name ?? '',
created_at: team.created_at ?? '',
}))
return {
success: true,
output: {
teams,
next_cursor: data.next_cursor ?? null,
},
}
},
outputs: {
teams: {
type: 'array',
description: 'List of teams',
items: {
type: 'object',
properties: {
name: { type: 'string', description: 'Team name' },
created_at: { type: 'string', description: 'Date the team was created' },
},
},
},
next_cursor: {
type: 'string',
description: 'Pagination cursor for next page',
optional: true,
},
},
}
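The `cursor` parameter above supports walking through every page of teams. A minimal sketch of that loop, assuming the `items`/`next_cursor` response shape used in `transformResponse` above (the `fetchPage` callback is a hypothetical stand-in for the real HTTP request, injected so the paging logic is shown on its own):

```typescript
// Drain a cursor-paginated Fathom-style endpoint. `fetchPage` receives the
// cursor (null for the first page) and resolves to one page of results.
interface Page<T> {
  items: T[]
  next_cursor: string | null
}

async function collectAllPages<T>(
  fetchPage: (cursor: string | null) => Promise<Page<T>>
): Promise<T[]> {
  const all: T[] = []
  let cursor: string | null = null
  do {
    const page = await fetchPage(cursor)
    all.push(...page.items)
    cursor = page.next_cursor
  } while (cursor !== null)
  return all
}
```

Each page's `next_cursor` is fed back as the `cursor` query parameter that `request.url` appends above; a `null` cursor signals the last page.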

@@ -0,0 +1,127 @@
import type { ToolResponse } from '@/tools/types'
export interface FathomBaseParams {
apiKey: string
}
export interface FathomListMeetingsParams extends FathomBaseParams {
includeSummary?: string
includeTranscript?: string
includeActionItems?: string
includeCrmMatches?: string
createdAfter?: string
createdBefore?: string
recordedBy?: string
teams?: string
cursor?: string
}
export interface FathomListMeetingsResponse extends ToolResponse {
output: {
meetings: Array<{
title: string
meeting_title: string | null
recording_id: number | null
url: string
share_url: string
created_at: string
scheduled_start_time: string | null
scheduled_end_time: string | null
recording_start_time: string | null
recording_end_time: string | null
transcript_language: string
calendar_invitees_domains_type: string | null
recorded_by: { name: string; email: string; email_domain: string; team: string | null } | null
calendar_invitees: Array<{
name: string | null
email: string
email_domain: string | null
is_external: boolean
matched_speaker_display_name: string | null
}>
default_summary: { template_name: string | null; markdown_formatted: string | null } | null
transcript: Array<{
speaker: { display_name: string; matched_calendar_invitee_email: string | null }
text: string
timestamp: string
}> | null
action_items: Array<{
description: string
user_generated: boolean
completed: boolean
recording_timestamp: string
recording_playback_url: string
assignee: { name: string | null; email: string | null; team: string | null }
}> | null
crm_matches: {
contacts: Array<{ name: string; email: string; record_url: string }>
companies: Array<{ name: string; record_url: string }>
deals: Array<{ name: string; amount: number; record_url: string }>
error: string | null
} | null
}>
next_cursor: string | null
}
}
export interface FathomGetSummaryParams extends FathomBaseParams {
recordingId: string
}
export interface FathomGetSummaryResponse extends ToolResponse {
output: {
template_name: string | null
markdown_formatted: string | null
}
}
export interface FathomGetTranscriptParams extends FathomBaseParams {
recordingId: string
}
export interface FathomGetTranscriptResponse extends ToolResponse {
output: {
transcript: Array<{
speaker: { display_name: string; matched_calendar_invitee_email: string | null }
text: string
timestamp: string
}>
}
}
export interface FathomListTeamMembersParams extends FathomBaseParams {
teams?: string
cursor?: string
}
export interface FathomListTeamMembersResponse extends ToolResponse {
output: {
members: Array<{
name: string
email: string
created_at: string
}>
next_cursor: string | null
}
}
export interface FathomListTeamsParams extends FathomBaseParams {
cursor?: string
}
export interface FathomListTeamsResponse extends ToolResponse {
output: {
teams: Array<{
name: string
created_at: string
}>
next_cursor: string | null
}
}
export type FathomResponse =
| FathomListMeetingsResponse
| FathomGetSummaryResponse
| FathomGetTranscriptResponse
| FathomListTeamMembersResponse
| FathomListTeamsResponse

@@ -0,0 +1,36 @@
/**
* @vitest-environment node
*/
import { describe, expect, it } from 'vitest'
import { encodeRfc2047 } from './utils'
describe('encodeRfc2047', () => {
it('returns ASCII text unchanged', () => {
expect(encodeRfc2047('Simple ASCII Subject')).toBe('Simple ASCII Subject')
})
it('returns empty string unchanged', () => {
expect(encodeRfc2047('')).toBe('')
})
it('encodes emojis as RFC 2047 base64', () => {
const result = encodeRfc2047('Time to Stretch! 🧘')
expect(result).toBe('=?UTF-8?B?VGltZSB0byBTdHJldGNoISDwn6eY?=')
})
it('round-trips non-ASCII subjects correctly', () => {
const subjects = ['Hello 世界', 'Café résumé', '🎉🎊🎈 Party!', '今週のミーティング']
for (const subject of subjects) {
const encoded = encodeRfc2047(subject)
const match = encoded.match(/^=\?UTF-8\?B\?(.+)\?=$/)
expect(match).not.toBeNull()
const decoded = Buffer.from(match![1], 'base64').toString('utf-8')
expect(decoded).toBe(subject)
}
})
it('does not double-encode already-encoded subjects', () => {
const alreadyEncoded = '=?UTF-8?B?VGltZSB0byBTdHJldGNoISDwn6eY?='
expect(encodeRfc2047(alreadyEncoded)).toBe(alreadyEncoded)
})
})

@@ -294,6 +294,19 @@ function generateBoundary(): string {
return `----=_Part_${Date.now()}_${Math.random().toString(36).substring(2, 15)}`
}
/**
* Encode a header value using RFC 2047 Base64 encoding if it contains non-ASCII characters.
* This matches Google's own Gmail API sample: `=?utf-8?B?${Buffer.from(subject).toString('base64')}?=`
* @see https://github.com/googleapis/google-api-nodejs-client/blob/main/samples/gmail/send.js
*/
export function encodeRfc2047(value: string): string {
// eslint-disable-next-line no-control-regex
if (/^[\x00-\x7F]*$/.test(value)) {
return value
}
return `=?UTF-8?B?${Buffer.from(value, 'utf-8').toString('base64')}?=`
}
/**
* Encode string or buffer to base64url format (URL-safe base64)
* Gmail API requires base64url encoding for the raw message field
@@ -333,7 +346,7 @@ export function buildSimpleEmailMessage(params: {
emailHeaders.push(`Bcc: ${bcc}`)
}
emailHeaders.push(`Subject: ${subject || ''}`)
emailHeaders.push(`Subject: ${encodeRfc2047(subject || '')}`)
if (inReplyTo) {
emailHeaders.push(`In-Reply-To: ${inReplyTo}`)
@@ -380,7 +393,7 @@ export function buildMimeMessage(params: BuildMimeMessageParams): string {
if (bcc) {
messageParts.push(`Bcc: ${bcc}`)
}
messageParts.push(`Subject: ${subject || ''}`)
messageParts.push(`Subject: ${encodeRfc2047(subject || '')}`)
if (inReplyTo) {
messageParts.push(`In-Reply-To: ${inReplyTo}`)

@@ -137,8 +137,11 @@ export const jiraSearchIssuesTool: ToolConfig<JiraSearchIssuesParams, JiraSearch
if (params.nextPageToken) query.set('nextPageToken', params.nextPageToken)
if (typeof params.maxResults === 'number')
query.set('maxResults', String(params.maxResults))
if (Array.isArray(params.fields) && params.fields.length > 0)
if (Array.isArray(params.fields) && params.fields.length > 0) {
query.set('fields', params.fields.join(','))
} else {
query.set('fields', '*all')
}
const qs = query.toString()
return `https://api.atlassian.com/ex/jira/${params.cloudId}/rest/api/3/search/jql${qs ? `?${qs}` : ''}`
}
@@ -159,8 +162,11 @@ export const jiraSearchIssuesTool: ToolConfig<JiraSearchIssuesParams, JiraSearch
if (params?.jql) query.set('jql', params.jql)
if (params?.nextPageToken) query.set('nextPageToken', params.nextPageToken)
if (typeof params?.maxResults === 'number') query.set('maxResults', String(params.maxResults))
if (Array.isArray(params?.fields) && params.fields.length > 0)
if (Array.isArray(params?.fields) && params.fields.length > 0) {
query.set('fields', params.fields.join(','))
} else {
query.set('fields', '*all')
}
const searchUrl = `https://api.atlassian.com/ex/jira/${cloudId}/rest/api/3/search/jql?${query.toString()}`
const searchResponse = await fetch(searchUrl, {
method: 'GET',

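The defaulting rule introduced above can be isolated as a small query builder: GET `/rest/api/3/search/jql` returns only issue `id`s unless `fields` is explicit, so the fallback is `*all`. A sketch under that assumption (function name is illustrative, not the shipped helper):

```typescript
// Build the query string for the Jira search/jql endpoint, defaulting
// `fields` to '*all' when the caller supplies no field list.
function buildJqlQuery(jql: string, fields?: string[]): URLSearchParams {
  const query = new URLSearchParams()
  query.set('jql', jql)
  if (Array.isArray(fields) && fields.length > 0) {
    query.set('fields', fields.join(','))
  } else {
    query.set('fields', '*all')
  }
  return query
}
```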
@@ -8,7 +8,7 @@ export const deepResearchTool: ToolConfig<ParallelDeepResearchParams, ToolRespon
id: 'parallel_deep_research',
name: 'Parallel AI Deep Research',
description:
'Conduct comprehensive deep research across the web using Parallel AI. Synthesizes information from multiple sources with citations. Can take up to 15 minutes to complete.',
'Conduct comprehensive deep research across the web using Parallel AI. Synthesizes information from multiple sources with citations. Can take up to 45 minutes to complete.',
version: '1.0.0',
params: {
@@ -22,8 +22,7 @@ export const deepResearchTool: ToolConfig<ParallelDeepResearchParams, ToolRespon
type: 'string',
required: false,
visibility: 'user-only',
description:
'Compute level: base, lite, pro, ultra, ultra2x, ultra4x, ultra8x (default: base)',
description: 'Processing tier: pro, ultra, pro-fast, ultra-fast (default: pro)',
},
include_domains: {
type: 'string',
@@ -55,15 +54,12 @@ export const deepResearchTool: ToolConfig<ParallelDeepResearchParams, ToolRespon
body: (params) => {
const body: Record<string, unknown> = {
input: params.input,
processor: params.processor || 'base',
processor: params.processor || 'pro',
task_spec: {
output_schema: 'auto',
},
}
const taskSpec: Record<string, unknown> = {}
taskSpec.output_schema = 'auto'
body.task_spec = taskSpec
if (params.include_domains || params.exclude_domains) {
const sourcePolicy: Record<string, string[]> = {}
@@ -91,14 +87,21 @@ export const deepResearchTool: ToolConfig<ParallelDeepResearchParams, ToolRespon
},
transformResponse: async (response: Response) => {
if (!response.ok) {
const errorText = await response.text()
throw new Error(
`Parallel AI deep research task creation failed: ${response.status} - ${errorText}`
)
}
const data = await response.json()
return {
success: true,
output: {
run_id: data.run_id,
status: data.status,
message: `Research task ${data.status}, waiting for completion...`,
run_id: data.run_id ?? null,
status: data.status ?? null,
message: `Research task ${data.status ?? 'created'}, waiting for completion...`,
content: {},
basis: [],
},
@@ -122,13 +125,16 @@ export const deepResearchTool: ToolConfig<ParallelDeepResearchParams, ToolRespon
logger.info(`Parallel AI deep research task ${runId} created, fetching results...`)
try {
const resultResponse = await fetch(`https://api.parallel.ai/v1/tasks/runs/${runId}/result`, {
method: 'GET',
headers: {
'x-api-key': params.apiKey,
'Content-Type': 'application/json',
},
})
const resultResponse = await fetch(
`https://api.parallel.ai/v1/tasks/runs/${String(runId).trim()}/result`,
{
method: 'GET',
headers: {
'x-api-key': params.apiKey,
'Content-Type': 'application/json',
},
}
)
if (!resultResponse.ok) {
const errorText = await resultResponse.text()
@@ -138,17 +144,17 @@ export const deepResearchTool: ToolConfig<ParallelDeepResearchParams, ToolRespon
const taskResult = await resultResponse.json()
logger.info(`Parallel AI deep research task ${runId} completed`)
const output = taskResult.output || {}
const run = taskResult.run || {}
const output = taskResult.output ?? {}
const status = taskResult.status ?? 'completed'
return {
success: true,
output: {
status: run.status || 'completed',
status,
run_id: runId,
message: 'Research completed successfully',
content: output.content || {},
basis: output.basis || [],
content: output.content ?? {},
basis: output.basis ?? [],
},
}
} catch (error: unknown) {
@@ -169,7 +175,7 @@ export const deepResearchTool: ToolConfig<ParallelDeepResearchParams, ToolRespon
outputs: {
status: {
type: 'string',
description: 'Task status (completed, failed)',
description: 'Task status (completed, failed, running)',
},
run_id: {
type: 'string',
@@ -189,7 +195,7 @@ export const deepResearchTool: ToolConfig<ParallelDeepResearchParams, ToolRespon
items: {
type: 'object',
properties: {
field: { type: 'string', description: 'Output field name' },
field: { type: 'string', description: 'Output field dot-notation path' },
reasoning: { type: 'string', description: 'Explanation for the result' },
citations: {
type: 'array',
@@ -203,7 +209,7 @@ export const deepResearchTool: ToolConfig<ParallelDeepResearchParams, ToolRespon
},
},
},
confidence: { type: 'string', description: 'Confidence level indicator' },
confidence: { type: 'string', description: 'Confidence level (high, medium)' },
},
},
},

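Since a deep research run can take up to 45 minutes, a caller that doesn't want to block on a single request would poll the run's result endpoint until a terminal status. A hypothetical polling sketch (names are illustrative, not the Parallel AI SDK; `getStatus` stands in for the real GET `/tasks/runs/{runId}/result` call shown above):

```typescript
// Poll a long-running task until it reaches a terminal status, with a
// bounded number of attempts so the loop cannot spin forever.
async function pollUntilDone(
  getStatus: () => Promise<string>,
  intervalMs: number,
  maxAttempts: number
): Promise<string> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await getStatus()
    if (status === 'completed' || status === 'failed') return status
    await new Promise((resolve) => setTimeout(resolve, intervalMs))
  }
  return 'timeout'
}
```

The `running` status added to the outputs above is exactly the non-terminal case this loop waits through.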
@@ -17,21 +17,21 @@ export const extractTool: ToolConfig<ParallelExtractParams, ToolResponse> = {
},
objective: {
type: 'string',
required: true,
required: false,
visibility: 'user-or-llm',
description: 'What information to extract from the provided URLs',
},
excerpts: {
type: 'boolean',
required: true,
visibility: 'user-only',
description: 'Include relevant excerpts from the content',
required: false,
visibility: 'user-or-llm',
description: 'Include relevant excerpts from the content (default: true)',
},
full_content: {
type: 'boolean',
required: true,
visibility: 'user-only',
description: 'Include full page content',
required: false,
visibility: 'user-or-llm',
description: 'Include full page content as markdown (default: false)',
},
apiKey: {
type: 'string',
@@ -50,7 +50,6 @@ export const extractTool: ToolConfig<ParallelExtractParams, ToolResponse> = {
'parallel-beta': 'search-extract-2025-10-10',
}),
body: (params) => {
// Convert comma-separated URLs to array
const urlArray = params.urls
.split(',')
.map((url) => url.trim())
@@ -58,10 +57,9 @@ export const extractTool: ToolConfig<ParallelExtractParams, ToolResponse> = {
const body: Record<string, unknown> = {
urls: urlArray,
objective: params.objective,
}
// Add optional parameters if provided
if (params.objective) body.objective = params.objective
if (params.excerpts !== undefined) body.excerpts = params.excerpts
if (params.full_content !== undefined) body.full_content = params.full_content
@@ -70,17 +68,44 @@ export const extractTool: ToolConfig<ParallelExtractParams, ToolResponse> = {
},
transformResponse: async (response: Response) => {
if (!response.ok) {
const errorText = await response.text()
throw new Error(`Parallel AI extract failed: ${response.status} - ${errorText}`)
}
const data = await response.json()
if (!data.results) {
return {
success: false,
error: 'No results returned from extraction',
output: {
results: [],
extract_id: data.extract_id ?? null,
},
}
}
return {
success: true,
output: {
results: data.results || [],
extract_id: data.extract_id ?? null,
results: data.results.map((result: Record<string, unknown>) => ({
url: result.url ?? null,
title: result.title ?? null,
publish_date: result.publish_date ?? null,
excerpts: result.excerpts ?? [],
full_content: result.full_content ?? null,
})),
},
}
},
outputs: {
extract_id: {
type: 'string',
description: 'Unique identifier for this extraction request',
},
results: {
type: 'array',
description: 'Extracted information from the provided URLs',
@@ -88,12 +113,22 @@ export const extractTool: ToolConfig<ParallelExtractParams, ToolResponse> = {
type: 'object',
properties: {
url: { type: 'string', description: 'The source URL' },
title: { type: 'string', description: 'The title of the page' },
content: { type: 'string', description: 'Extracted content' },
title: { type: 'string', description: 'The title of the page', optional: true },
publish_date: {
type: 'string',
description: 'Publication date (YYYY-MM-DD)',
optional: true,
},
excerpts: {
type: 'array',
description: 'Relevant text excerpts',
description: 'Relevant text excerpts in markdown',
items: { type: 'string' },
optional: true,
},
full_content: {
type: 'string',
description: 'Full page content as markdown',
optional: true,
},
},
},

@@ -5,3 +5,5 @@ import { searchTool } from '@/tools/parallel/search'
export const parallelSearchTool = searchTool
export const parallelExtractTool = extractTool
export const parallelDeepResearchTool = deepResearchTool
export * from './types'

@@ -19,25 +19,37 @@ export const searchTool: ToolConfig<ParallelSearchParams, ToolResponse> = {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'Optional comma-separated list of search queries to execute',
description: 'Comma-separated list of search queries to execute',
},
processor: {
mode: {
type: 'string',
required: false,
visibility: 'user-only',
description: 'Processing method: base or pro (default: base)',
description: 'Search mode: one-shot, agentic, or fast (default: one-shot)',
},
max_results: {
type: 'number',
required: false,
visibility: 'user-only',
description: 'Maximum number of results to return (default: 5)',
description: 'Maximum number of results to return (default: 10)',
},
max_chars_per_result: {
type: 'number',
required: false,
visibility: 'user-only',
description: 'Maximum characters per result (default: 1500)',
description: 'Maximum characters per result excerpt (minimum: 1000)',
},
include_domains: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'Comma-separated list of domains to restrict search results to',
},
exclude_domains: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'Comma-separated list of domains to exclude from search results',
},
apiKey: {
type: 'string',
@@ -60,44 +72,83 @@ export const searchTool: ToolConfig<ParallelSearchParams, ToolResponse> = {
objective: params.objective,
}
// Only include search_queries if it's not empty
if (
params.search_queries !== undefined &&
params.search_queries !== null &&
params.search_queries.length > 0
) {
body.search_queries = params.search_queries
if (params.search_queries) {
if (Array.isArray(params.search_queries)) {
body.search_queries = params.search_queries
} else if (typeof params.search_queries === 'string') {
const queries = params.search_queries
.split(',')
.map((q: string) => q.trim())
.filter((q: string) => q.length > 0)
if (queries.length > 0) body.search_queries = queries
}
}
// Add optional parameters if provided
if (params.processor) body.processor = params.processor
if (params.mode) body.mode = params.mode
if (params.max_results) body.max_results = Number(params.max_results)
if (params.max_chars_per_result)
body.max_chars_per_result = Number(params.max_chars_per_result)
if (params.max_chars_per_result) {
body.excerpts = { max_chars_per_result: Number(params.max_chars_per_result) }
}
const sourcePolicy: Record<string, string[]> = {}
if (params.include_domains) {
sourcePolicy.include_domains = params.include_domains
.split(',')
.map((d: string) => d.trim())
.filter((d: string) => d.length > 0)
}
if (params.exclude_domains) {
sourcePolicy.exclude_domains = params.exclude_domains
.split(',')
.map((d: string) => d.trim())
.filter((d: string) => d.length > 0)
}
if (Object.keys(sourcePolicy).length > 0) {
body.source_policy = sourcePolicy
}
return body
},
},
transformResponse: async (response: Response) => {
if (!response.ok) {
const errorText = await response.text()
throw new Error(`Parallel AI search failed: ${response.status} - ${errorText}`)
}
const data = await response.json()
if (!data.results) {
return {
success: false,
error: 'No results returned from search',
output: {
results: [],
search_id: data.search_id ?? null,
},
}
}
return {
success: true,
output: {
results: data.results.map((result: unknown) => {
const resultObj = result as Record<string, unknown>
return {
url: resultObj.url || '',
title: resultObj.title || '',
excerpts: resultObj.excerpts || [],
}
}),
search_id: data.search_id ?? null,
results: data.results.map((result: Record<string, unknown>) => ({
url: result.url ?? null,
title: result.title ?? null,
publish_date: result.publish_date ?? null,
excerpts: result.excerpts ?? [],
})),
},
}
},
outputs: {
search_id: {
type: 'string',
description: 'Unique identifier for this search request',
},
results: {
type: 'array',
description: 'Search results with excerpts from relevant pages',
@@ -106,9 +157,14 @@ export const searchTool: ToolConfig<ParallelSearchParams, ToolResponse> = {
properties: {
url: { type: 'string', description: 'The URL of the search result' },
title: { type: 'string', description: 'The title of the search result' },
publish_date: {
type: 'string',
description: 'Publication date of the page (YYYY-MM-DD)',
optional: true,
},
excerpts: {
type: 'array',
description: 'Text excerpts from the page',
description: 'LLM-optimized excerpts from the page',
items: { type: 'string' },
},
},

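The split-trim-filter pattern above appears three times in this body builder (`search_queries`, `include_domains`, `exclude_domains`). Factored into one helper, it looks like this (a sketch, not the shipped code):

```typescript
// Normalize a comma-separated string, an array, or undefined into a clean
// string array: trim each entry and drop empties.
function splitCsv(value: string | string[] | undefined): string[] {
  if (Array.isArray(value)) return value
  if (typeof value !== 'string') return []
  return value
    .split(',')
    .map((item) => item.trim())
    .filter((item) => item.length > 0)
}
```

Accepting both `string[]` and `string` mirrors the widened `search_queries?: string[] | string` type in the params interface, so LLM-provided comma strings and programmatic arrays both work.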
@@ -1,39 +1,51 @@
import type { ToolResponse } from '@/tools/types'
export interface ParallelSearchParams {
objective: string
search_queries: string[]
processor?: string
search_queries?: string[] | string
mode?: string
max_results?: number
max_chars_per_result?: number
include_domains?: string
exclude_domains?: string
apiKey: string
}
export interface ParallelSearchResult {
url: string
title: string
url: string | null
title: string | null
publish_date?: string | null
excerpts: string[]
}
export interface ParallelSearchResponse {
results: ParallelSearchResult[]
export interface ParallelSearchResponse extends ToolResponse {
output: {
search_id: string | null
results: ParallelSearchResult[]
}
}
export interface ParallelExtractParams {
urls: string
objective: string
excerpts: boolean
full_content: boolean
objective?: string
excerpts?: boolean
full_content?: boolean
apiKey: string
}
export interface ParallelExtractResult {
url: string
title: string
content?: string
url: string | null
title?: string | null
publish_date?: string | null
excerpts?: string[]
full_content?: string | null
}
export interface ParallelExtractResponse {
results: ParallelExtractResult[]
export interface ParallelExtractResponse extends ToolResponse {
output: {
extract_id: string | null
results: ParallelExtractResult[]
}
}
export interface ParallelDeepResearchParams {
@@ -45,17 +57,22 @@ export interface ParallelDeepResearchParams {
}
export interface ParallelDeepResearchBasis {
url: string
title: string
excerpt: string
confidence?: number
field: string
reasoning: string
citations: {
url: string
title: string
excerpts: string[]
}[]
confidence: string
}
export interface ParallelDeepResearchResponse {
status: string
run_id: string
message?: string
content?: Record<string, unknown>
basis?: ParallelDeepResearchBasis[]
metadata?: Record<string, unknown>
export interface ParallelDeepResearchResponse extends ToolResponse {
output: {
status: string
run_id: string
message: string
content: Record<string, unknown>
basis: ParallelDeepResearchBasis[]
}
}

@@ -446,6 +446,13 @@ import {
exaResearchTool,
exaSearchTool,
} from '@/tools/exa'
import {
fathomGetSummaryTool,
fathomGetTranscriptTool,
fathomListMeetingsTool,
fathomListTeamMembersTool,
fathomListTeamsTool,
} from '@/tools/fathom'
import { fileParserV2Tool, fileParserV3Tool, fileParseTool } from '@/tools/file'
import {
firecrawlAgentTool,
@@ -3666,6 +3673,11 @@ export const tools: Record<string, ToolConfig> = {
knowledge_create_document: knowledgeCreateDocumentTool,
search_tool: searchTool,
elevenlabs_tts: elevenLabsTtsTool,
fathom_list_meetings: fathomListMeetingsTool,
fathom_get_summary: fathomGetSummaryTool,
fathom_get_transcript: fathomGetTranscriptTool,
fathom_list_team_members: fathomListTeamMembersTool,
fathom_list_teams: fathomListTeamsTool,
stt_whisper: whisperSttTool,
stt_whisper_v2: whisperSttV2Tool,
stt_deepgram: deepgramSttTool,

@@ -85,6 +85,7 @@ export const slackGetUserTool: ToolConfig<SlackGetUserParams, SlackGetUserRespon
first_name: profile.first_name || '',
last_name: profile.last_name || '',
title: profile.title || '',
email: profile.email || '',
phone: profile.phone || '',
skype: profile.skype || '',
is_bot: user.is_bot || false,

@@ -93,6 +93,7 @@ export const slackListUsersTool: ToolConfig<SlackListUsersParams, SlackListUsers
name: user.name,
real_name: user.real_name || user.profile?.real_name || '',
display_name: user.profile?.display_name || '',
email: user.profile?.email || '',
is_bot: user.is_bot || false,
is_admin: user.is_admin || false,
is_owner: user.is_owner || false,

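The new `email` field in both Slack tools follows the same fallback convention as the surrounding fields: the value lives on `user.profile` and is only populated when the token carries the `users:read.email` scope, so it degrades to an empty string rather than `undefined`. A minimal sketch of that mapping (interface names are illustrative):

```typescript
// Map a raw Slack user object to a summary row, falling back to '' for
// profile fields that are absent or scope-gated (e.g. email without
// users:read.email).
interface SlackProfileLike {
  display_name?: string
  email?: string
}
interface SlackUserLike {
  name: string
  profile?: SlackProfileLike
}

function summarizeUser(user: SlackUserLike) {
  return {
    name: user.name,
    display_name: user.profile?.display_name || '',
    email: user.profile?.email || '',
  }
}
```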
@@ -376,6 +376,11 @@ export const USER_OUTPUT_PROPERTIES = {
title: { type: 'string', description: 'Job title', optional: true },
phone: { type: 'string', description: 'Phone number', optional: true },
skype: { type: 'string', description: 'Skype handle', optional: true },
email: {
type: 'string',
description: 'Email address (requires users:read.email scope)',
optional: true,
},
is_bot: { type: 'boolean', description: 'Whether the user is a bot' },
is_admin: { type: 'boolean', description: 'Whether the user is a workspace admin' },
is_owner: { type: 'boolean', description: 'Whether the user is the workspace owner' },
@@ -438,6 +443,11 @@ export const USER_SUMMARY_OUTPUT_PROPERTIES = {
name: { type: 'string', description: 'Username (handle)' },
real_name: { type: 'string', description: 'Full real name' },
display_name: { type: 'string', description: 'Display name shown in Slack' },
email: {
type: 'string',
description: 'Email address (requires users:read.email scope)',
optional: true,
},
is_bot: { type: 'boolean', description: 'Whether the user is a bot' },
is_admin: { type: 'boolean', description: 'Whether the user is a workspace admin' },
is_owner: { type: 'boolean', description: 'Whether the user is the workspace owner' },
@@ -953,6 +963,7 @@ export interface SlackUser {
title?: string
phone?: string
skype?: string
email: string
is_bot: boolean
is_admin: boolean
is_owner: boolean

Some files were not shown because too many files have changed in this diff.