Compare commits

...

79 Commits

Author SHA1 Message Date
waleed
eb493d94ed type fixes 2026-02-28 12:03:17 -08:00
Vasyl Abramovych
dd14f9d750 fix(lint): satisfy biome for short.io
Made-with: Cursor
2026-02-28 11:38:34 -08:00
Vasyl Abramovych
fc5e4237ab fix(short-io): address PR review feedback
- Change apiKey visibility from 'hidden' to 'user-only' in all 6 tools
- Simplify block tool selector to string interpolation
- Move QR code generation to server-side API route, return as file
  object (name, mimeType, data, size) matching standard file pattern
- Update block outputs and docs to reflect file type for QR code
2026-02-28 11:38:34 -08:00
Vasyl Abramovych
fcefd01f94 docs(short-io): add Short.io tool documentation
Add documentation page covering all 6 Short.io tools with input/output
parameter tables and usage instructions.
2026-02-28 11:38:34 -08:00
Vasyl Abramovych
2408b5af2d feat(blocks): add Short.io block and icon
Add Short.io block config with 6 operations (create link, list domains,
list links, delete link, get QR code, get analytics). Add ShortIoIcon
and register the block in the blocks registry.
2026-02-28 11:38:34 -08:00
Vasyl Abramovych
7327ec0058 feat(tools): add Short.io tools and registry
Add 6 Short.io tool implementations (create link, list domains, list
links, delete link, get QR code, get analytics) with shared types and
barrel export. Register all tools in the tools registry.
2026-02-28 11:38:34 -08:00
Waleed
96096e0ad1 improvement(resend): add error handling, authMode, and naming consistency (#3382) 2026-02-28 11:19:42 -08:00
Waleed
647a3eb05b improvement(luma): expand host response fields and harden event ID inputs (#3383) 2026-02-28 11:19:24 -08:00
Waleed
0195a4cd18 improvement(ashby): validate ashby integration and update skill files (#3381) 2026-02-28 11:16:40 -08:00
Waleed
b42f80e8ab fix(sse): fix memory leaks in SSE stream cleanup and add memory telemetry (#3378)
* fix(sse): fix memory leaks in SSE stream cleanup and add memory telemetry

* improvement(monitoring): add SSE metering to wand, execution-stream, and a2a-message endpoints

* fix(workflow-execute): remove abort from cancel() to preserve run-on-leave behavior

* improvement(monitoring): use stable process.getActiveResourcesInfo() API

* refactor(a2a): hoist resubscribe cleanup to eliminate duplication between start() and cancel()

* style(a2a): format import line

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(wand): set guard flag on early-return decrement for consistency

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 10:37:07 -08:00
Waleed
38ac86c4fd improvement(mcp): add all MCP server tools individually instead of as single server entry (#3376)
* improvement(mcp): add all MCP server tools individually instead of as single server entry

* fix(mcp): prevent remove popover from opening inadvertently
2026-02-27 14:02:11 -08:00
Waleed
4cfe8be75a feat(google-contacts): add google contacts integration (#3340)
* feat(google-contacts): add google contacts integration

* fix(google-contacts): throw error when no update fields provided

* lint

* update icon

* improvement(google-contacts): add advanced mode, error handling, and input trimming

- Set mode: 'advanced' on optional fields (emailType, phoneType, notes, pageSize, pageToken, sortOrder)
- Add createLogger and response.ok error handling to all 6 tools
- Add .trim() on resourceName in get, update, delete URL builders
2026-02-27 10:55:51 -08:00
Vikhyath Mondreti
49db3ca50b improvement(selectors): consolidate selector input logic (#3375) 2026-02-27 10:18:25 -08:00
Vikhyath Mondreti
e3ff595a84 improvement(selectors): make selectorKeys declarative (#3374)
* fix(webflow): resolution for selectors

* remove unnecessary fallback

* fix teams selector resolution

* make selector keys declarative

* selectors fixes
2026-02-27 07:56:35 -08:00
Waleed
b3424e2047 improvement(ci): add sticky disk caches and bump runner for faster builds (#3373) 2026-02-27 00:12:36 -08:00
Waleed
71ecf6c82e improvement(x): align OAuth scopes, add scope descriptions, and set optional fields to advanced mode (#3372)
* improvement(x): align OAuth scopes, add scope descriptions, and set optional fields to advanced mode

* improvement(skills): add typed JSON outputs guidance to add-tools, add-block, and add-integration skills

* improvement(skills): add final validation steps to add-tools, add-block, and add-integration skills

* fix(skills): correct misleading JSON array comment in wandConfig example

* feat(skills): add validate-integration skill for auditing tools, blocks, and registry against API docs

* improvement(skills): expand validate-integration with full block-tool alignment, OAuth scopes, pagination, and error handling checks
2026-02-26 23:30:24 -08:00
Waleed
e9e5ba2c5b improvement(docs): audit and standardize tool description sections, update developer count to 70k (#3371) 2026-02-26 23:02:58 -08:00
Waleed
9233d4ebc9 feat(x): add 28 new X API v2 tool integrations and expand OAuth scopes (#3365)
* feat(x): add 28 new X API v2 tool integrations and expand OAuth scopes

* fix(x): add missing nextToken param to search tweets and fix XCreateTweetParams type

* fix(x): correct API spec issues in retweeted_by, quote_tweets, personalized_trends, and usage tools

* fix(x): add missing newestId and oldestId to error meta in get_liked_tweets and get_quote_tweets

* fix(x): add missing newestId/oldestId to get_liked_tweets success branch and includes to XTweetListResponse

* fix(x): add error handling to create_tweet and delete_tweet transformResponse

* fix(x): add error handling and logger to all X tools

* fix(x): revert block requiredScopes to match current operations

* feat(x): update block to support all 28 new X API v2 tools

* fix(x): add missing text output and fix hiddenResult output key mismatch

* docs(x): regenerate docs for all 28 new X API v2 tools
2026-02-26 22:40:57 -08:00
Waleed
78901ef517 improvement(blocks): update luma styling and linkup field modes (#3370)
* improvement(blocks): update luma styling and linkup field modes

* improvement(fireflies): move optional fields to advanced mode

* improvement(blocks): move optional fields to advanced mode for 10 integrations

* improvement(blocks): move optional fields to advanced mode for 6 more integrations
2026-02-26 22:27:58 -08:00
Waleed
47fef540cc feat(resend): expand integration with contacts, domains, and enhanced email ops (#3366) 2026-02-26 22:12:48 -08:00
Waleed
f193e9ebbc feat(loops): add Loops email platform integration (#3359)
* feat(loops): add Loops email platform integration

Add complete Loops integration with 10 tools covering all API endpoints:
- Contact management: create, update, find, delete
- Email: send transactional emails with attachments
- Events: trigger automated email sequences
- Lists: list mailing lists and transactional email templates
- Properties: create and list contact properties

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* ran lint

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 22:09:02 -08:00
Waleed
c0f22d7722 improvement(oauth): reordered oauth modal (#3368) 2026-02-26 19:43:59 -08:00
Waleed
bf0e25c9d0 feat(ashby): add ashby integration for candidate, job, and application management (#3362)
* feat(ashby): add ashby integration for candidate, job, and application management

* fix(ashby): auto-fix lint formatting in docs files
2026-02-26 19:10:06 -08:00
Waleed
d4f8ac8107 feat(greenhouse): add greenhouse integration for managing candidates, jobs, and applications (#3363) 2026-02-26 19:09:03 -08:00
Waleed
63fa938dd7 feat(gamma): add gamma integration for AI-powered content generation (#3358)
* feat(gamma): add gamma integration for AI-powered content generation

* fix(gamma): address PR review comments

- Make credits/error conditionally included in check_status response to avoid always-truthy objects
- Replace full wordmark SVG with square "G" letterform for proper rendering in icon slots

* fix(gamma): remove imageSource from generate_from_template endpoint

The from-template API only accepts imageOptions.model and imageOptions.style,
not imageOptions.source (image source is inherited from the template).

* fix(gamma): use typed output in check_status transformResponse

* regen docs
2026-02-26 19:08:46 -08:00
Waleed
50b882a3ad feat(luma): add Luma integration for event and guest management (#3364)
* feat(luma): add Luma integration for event and guest management

Add complete Luma (lu.ma) integration with 6 tools: get event, create event,
update event, list calendar events, get guests, and add guests. Includes block
configuration with wandConfig for timestamps/timezones/durations, advanced mode
for optional fields, and generated documentation.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(luma): address PR review feedback

- Remove hosts field from list_events transformResponse (not in LumaEventEntry type)
- Fix truncated add_guests description by removing quotes that broke docs generator

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(luma): fix update_event field name and add_guests response parsing

- Use 'id' instead of 'event_id' in update_event request body per API spec
- Fix add_guests to parse entries[].guest response structure instead of flat guests array

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 19:08:20 -08:00
Waleed
c8a0b62a9c feat(databricks): add Databricks integration with 8 tools (#3361)
* feat(databricks): add Databricks integration with 8 tools

Add complete Databricks integration supporting SQL execution, job management,
run monitoring, and cluster listing via Personal Access Token authentication.

Tools: execute_sql, list_jobs, run_job, get_run, list_runs, cancel_run,
get_run_output, list_clusters

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(databricks): throw on invalid JSON params, fix boolean coercion, add expandTasks field

- Throw errors on invalid JSON in jobParameters/notebookParams instead of silently defaulting to {}
- Always set boolean params explicitly to prevent string 'false' being truthy
- Add missing expandTasks dropdown UI field for list_jobs operation

* fix(databricks): align tool inputs/outputs with official API spec

- execute_sql: fix wait_timeout default description (50s, not 10s)
- get_run: add queueDuration field, update lifecycle/result state enums
- get_run_output: fix notebook output size (5 MB not 1 MB), add logsTruncated field
- list_runs: add userCancelledOrTimedout to state, fix limit range (1-24), update state enums
- list_jobs: fix name filter description to "exact case-insensitive"
- list_clusters: add PIPELINE_MAINTENANCE to ClusterSource enum

* fix(databricks): regenerate docs to reflect API spec fixes

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 19:05:47 -08:00
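The Databricks fixes above (throw on invalid JSON instead of silently defaulting to `{}`, and explicit boolean coercion so the string `'false'` is not truthy) can be sketched as follows. `parseJobParameters` and `coerceBoolean` are illustrative names under assumed shapes, not the actual tool code:

```typescript
/** Parse user-supplied JSON params, throwing instead of defaulting to {}. */
function parseJobParameters(raw: string | undefined): Record<string, unknown> {
  if (!raw || raw.trim() === '') return {}
  let parsed: unknown
  try {
    parsed = JSON.parse(raw)
  } catch {
    throw new Error('Invalid JSON in jobParameters')
  }
  if (typeof parsed !== 'object' || parsed === null || Array.isArray(parsed)) {
    throw new Error('jobParameters must be a JSON object')
  }
  return parsed as Record<string, unknown>
}

/** Coerce 'true'/'false' strings explicitly so the string 'false' is not truthy. */
function coerceBoolean(value: unknown): boolean {
  if (typeof value === 'boolean') return value
  return String(value).toLowerCase() === 'true'
}
```

The key point is that `Boolean('false')` is `true` in JavaScript, so any param that may arrive as a string must be compared against the literal `'true'` rather than cast.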
Waleed
4ccb57371b improvement(tests): speed up unit tests by eliminating vi.resetModules anti-pattern (#3357)
* improvement(tests): speed up unit tests by eliminating vi.resetModules anti-pattern

- convert 51 test files from vi.resetModules/vi.doMock/dynamic import to vi.hoisted/vi.mock/static import
- add global @sim/db mock to vitest.setup.ts
- switch 4 test files from jsdom to node environment
- remove all vi.importActual calls that loaded heavy modules (200+ block files)
- remove slow mockConsoleLogger/mockAuth/setupCommonApiMocks helpers
- reduce real setTimeout delays in engine tests
- mock heavy transitive deps in diff-engine test

test execution time: 34s -> 9s (3.9x faster)
environment time: 2.5s -> 0.6s (4x faster)

* docs(testing): update testing best practices with performance rules

- document vi.hoisted + vi.mock + static import as the standard pattern
- explicitly ban vi.resetModules, vi.doMock, vi.importActual, mockAuth, setupCommonApiMocks
- document global mocks from vitest.setup.ts
- add mock pattern reference for auth, hybrid auth, and database chains
- add performance rules section covering heavy deps, jsdom vs node, real timers

* fix(tests): fix 4 failing test files with missing mocks

- socket/middleware/permissions: add vi.mock for @/lib/auth to prevent transitive getBaseUrl() call
- workflow-handler: add vi.mock for @/executor/utils/http matching executor mock pattern
- evaluator-handler: add db.query.account mock structure before vi.spyOn
- router-handler: same db.query.account fix as evaluator

* fix(tests): replace banned Function type with explicit callback signature
2026-02-26 15:46:49 -08:00
Waleed
c6e147e56a feat(agent): add MCP server discovery mode for agent tool input (#3353)
* feat(agent): add MCP server discovery mode for agent tool input

* fix(tool-input): use type variant for MCP server tool count badge

* fix(mcp-dynamic-args): align label styling with standard subblock labels

* standardized input format UI

* feat(tool-input): replace MCP server inline expand with drill-down navigation

* feat(tool-input): add chevron affordance and keyboard nav for MCP server drill-down

* fix(tool-input): handle mcp-server type in refresh, validation, badges, and usage control

* refactor(tool-validation): extract getMcpServerIssue, remove fake tool hack

* lint

* reorder dropdown

* perf(agent): parallelize MCP server tool creation with Promise.all

* fix(combobox): preserve cursor movement in search input, reset query on drilldown

* fix(combobox): route ArrowRight through handleSelect, remove redundant type guards

* fix(agent): rename mcpServers to mcpServerSelections to avoid shadowing DB import, route ArrowRight through handleSelect

* docs: update google integration docs

* fix(tool-input): reset drilldown state on tool selection to prevent stale view

* perf(agent): parallelize MCP server discovery across multiple servers
2026-02-26 15:17:23 -08:00
Waleed
345a95f48d fix(confluence): prevent content erasure on page/blogpost update and fix space update (#3356)
- Add body-format=storage to GET-before-PUT for page and blogpost updates
  (without this, Confluence v2 API does not return body content, causing
  the fallback to erase content when only updating the title)
- Fetch current space name when updating only description (Confluence API
  requires name on PUT, so we preserve the existing name automatically)
2026-02-26 14:52:57 -08:00
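The GET-before-PUT pattern described above can be sketched as below. The `PageSnapshot` shape and helper names are assumptions for illustration, not the actual Confluence tool code; the essential detail is the `body-format=storage` query param, without which the v2 API omits body content and a title-only update would erase the page:

```typescript
interface PageSnapshot {
  id: string
  title: string
  version: { number: number }
  body?: { storage?: { value: string } }
}

/** GET the current page WITH body content so the subsequent PUT cannot erase it. */
function buildGetUrl(baseUrl: string, pageId: string): string {
  // Without body-format=storage, the Confluence v2 API does not return body.storage.
  return `${baseUrl}/wiki/api/v2/pages/${pageId}?body-format=storage`
}

/** Merge a title-only update with the fetched body so content is preserved. */
function buildUpdatePayload(current: PageSnapshot, newTitle?: string) {
  return {
    id: current.id,
    status: 'current',
    title: newTitle ?? current.title,
    version: { number: current.version.number + 1 },
    body: {
      representation: 'storage',
      value: current.body?.storage?.value ?? '',
    },
  }
}
```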
Waleed
e07963f88c chore(db): drop 8 redundant indexes and add partial index for stale execution cleanup (#3354) 2026-02-26 13:17:39 -08:00
Waleed
25c59e3e2e feat(devin): add devin integration for autonomous coding sessions (#3352)
* feat(devin): add devin integration for autonomous coding sessions

* lint

* improvement(devin): update tool names and add manual docs description

* improvement(devin): rename tool files to snake_case and regenerate docs

* regen docs

* fix(devin): remove redundant Number() conversions in tool request bodies
2026-02-26 11:57:50 -08:00
Waleed
dde098e8e5 fix: prevent raw workflowInput from overwriting coerced start block values (#3347)
buildUnifiedStartOutput and buildIntegrationTriggerOutput first populate
output with schema-coerced structuredInput values (via coerceValue), then
iterate workflowInput and unconditionally overwrite those keys with raw
strings. This causes typed values (arrays, objects, numbers, booleans)
passed to child workflows to arrive as stringified versions.

Add a structuredKeys guard so the workflowInput loop skips keys already
set by the coerced structuredInput, letting coerceValue's type-aware
parsing (JSON.parse for objects/arrays, Number() for numbers, etc.)
take effect.

Fixes #3105

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 07:13:19 -08:00
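The `structuredKeys` guard described above can be sketched as follows. This is a simplified stand-in: the real `coerceValue` is schema-aware, and the function names here are illustrative, not the actual executor code:

```typescript
/** Naive coercion standing in for the real schema-aware coerceValue. */
function coerceValue(raw: string): unknown {
  try {
    return JSON.parse(raw) // yields typed arrays/objects/numbers/booleans
  } catch {
    return raw // plain string
  }
}

function buildStartOutput(
  structuredInput: Record<string, string>,
  workflowInput: Record<string, string>
): Record<string, unknown> {
  const output: Record<string, unknown> = {}
  const structuredKeys = new Set<string>()
  for (const [key, raw] of Object.entries(structuredInput)) {
    output[key] = coerceValue(raw)
    structuredKeys.add(key)
  }
  for (const [key, raw] of Object.entries(workflowInput)) {
    // Guard: skip keys already set from the coerced structuredInput, so
    // typed values are not overwritten with raw strings.
    if (structuredKeys.has(key)) continue
    output[key] = raw
  }
  return output
}
```

Without the `structuredKeys.has(key)` check, the second loop would stringify every typed value the first loop produced, which is exactly the bug the commit fixes.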
Waleed
5ae0115444 feat(sidebar): add lock/unlock to workflow registry context menu (#3350)
* feat(sidebar): add lock/unlock to workflow registry context menu

* docs(tools): add manual descriptions to google_books and table

* docs(tools): add manual descriptions to google_bigquery and google_tasks

* fix(sidebar): avoid unnecessary store subscriptions and fix mixed lock state toggle

* fix(sidebar): use getWorkflowLockToggleIds utility for lock toggle

Replaces manual pivot-sorting logic with the existing utility function,
which handles block ordering and no-op guards consistently.

* lint
2026-02-25 23:40:30 -08:00
Waleed
fbafe204e5 fix(confluence): add input validation for SSRF-flagged parameters (#3351) 2026-02-25 23:35:45 -08:00
Waleed
ba7d6ff298 fix(credential-selector): remove reserved icon space when no credential selected (#3348) 2026-02-25 22:29:35 -08:00
Waleed
40016e79a1 feat(google-tasks): add Google Tasks integration (#3342)
* feat(google-tasks): add Google Tasks integration

* fix(google-tasks): return actual taskId in delete response

* fix(google-tasks): use absolute imports and fix registry order

* fix(google-tasks): rename list-task-lists to list_task_lists for doc generator

* improvement(google-tasks): destructure task and taskList outputs with typed schemas

* ran lint

* improvement(google-tasks): add wandConfig for due date timestamp generation
2026-02-25 21:52:34 -08:00
Waleed
e4fb8b2fdd feat(bigquery): add Google BigQuery integration (#3341)
* feat(bigquery): add Google BigQuery integration

* fix(bigquery): add auth provider, fix docsLink and insertedRows count

* fix(bigquery): set pageToken visibility to user-or-llm for pagination

* fix(bigquery): use prefixed export names to avoid aliased imports

* lint

* improvement(bigquery): destructure tool outputs with structured array/object types

* lint
2026-02-25 19:31:06 -08:00
Waleed
d98545d554 fix(terminal): thread executionOrder through child workflow SSE events for loop support (#3346)
* fix(terminal): thread executionOrder through child workflow SSE events for loop support

* ran lint

* fix(terminal): render iteration children through EntryNodeRow for workflow block expansion

IterationNodeRow was rendering all children as flat BlockRow components,
ignoring nodeType. Workflow blocks inside loop iterations were never
rendered as WorkflowNodeRow, so they had no expand chevron or child tree.

* fix(terminal): add childWorkflowBlockId to matchesEntryForUpdate

Sub-executors reset executionOrderCounter, so child blocks across loop
iterations share the same blockId + executionOrder. Without checking
childWorkflowBlockId, updateConsole for iteration N overwrites entries
from iterations 0..N-1, causing all child blocks to be grouped under
the last iteration's workflow instance.
2026-02-25 19:02:44 -08:00
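The matching fix in the last bullet can be sketched as below. The `Entry` shape mirrors the description but is an assumption about the actual terminal code; the point is that `blockId` plus `executionOrder` alone collide across loop iterations because sub-executors reset the counter:

```typescript
interface Entry {
  blockId: string
  executionOrder: number
  childWorkflowBlockId?: string
}

/**
 * Match console entries for update. Comparing childWorkflowBlockId as well
 * disambiguates loop iterations whose sub-executors reset executionOrder,
 * so iteration N no longer overwrites entries from iterations 0..N-1.
 */
function matchesEntryForUpdate(entry: Entry, update: Entry): boolean {
  return (
    entry.blockId === update.blockId &&
    entry.executionOrder === update.executionOrder &&
    entry.childWorkflowBlockId === update.childWorkflowBlockId
  )
}
```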
Waleed
fadbad4085 feat(confluence): add get user by account ID tool (#3345)
* feat(confluence): add get user by account ID tool

* feat(confluence): add missing tools for tasks, blog posts, spaces, descendants, permissions, and properties

Add 16 new Confluence operations: list/get/update tasks, update/delete blog posts,
create/update/delete spaces, get page descendants, list space permissions,
list/create/delete space properties. Includes API routes, tool definitions,
block config wiring, OAuth scopes, and generated docs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(confluence): add missing OAuth scopes to auth.ts provider config

The OAuth authorization flow uses scopes from auth.ts, not oauth.ts.
The 9 new scopes were only added to oauth.ts and the block config but
not to the actual provider config in auth.ts, causing re-auth to still
return tokens without the new scopes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* lint

* fix(confluence): fix truncated get_user tool description in docs

Remove apostrophe from description that caused MDX generation to
truncate at the escape character.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(confluence): address PR review feedback

- Move get_user from GET to POST to avoid exposing access token in URL
- Add 400 validation for missing params in space-properties create/delete
- Add null check for blog post version before update to prevent TypeError

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat(confluence): add missing response fields for descendants and tasks

- Add type and depth fields to page descendants (from Confluence API)
- Add body field (storage format) to task list/get/update responses

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* lint

* fix(confluence): use validatePathSegment for Atlassian account IDs

validateAlphanumericId rejects valid Atlassian account IDs that contain
colons (e.g. 557058:6b9c9931-4693-49c1-8b3a-931f1af98134). Use
validatePathSegment with a custom pattern allowing colons instead.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* ran lint

* update mock

* upgrade turborepo

* fix(confluence): reject empty update body for space PUT

Return 400 when neither name nor description is provided for space
update, instead of sending an empty body to the Confluence API.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(confluence): remove spaceId requirement for create_space and fix list_tasks pagination

- Remove create_space from spaceId condition array since creating a space
  doesn't require a space ID input
- Remove list_tasks from generic supportsCursor array so it uses its
  dedicated handler that correctly passes assignedTo and status filters
  during pagination

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* ran lint

* fixed type errors

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 17:16:53 -08:00
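The account-ID validation change above can be sketched as follows. `validatePathSegment` here is an illustrative stand-in for the helper named in the commit, with assumed patterns; the substance is that a strict alphanumeric pattern rejects valid colon-prefixed Atlassian account IDs:

```typescript
const ALPHANUMERIC_ID = /^[A-Za-z0-9-]+$/
// Atlassian account IDs may carry a colon-separated numeric prefix,
// e.g. 557058:6b9c9931-4693-49c1-8b3a-931f1af98134
const ATLASSIAN_ACCOUNT_ID = /^[A-Za-z0-9:-]+$/

/** Validate a value destined for a URL path segment against a pattern. */
function validatePathSegment(segment: string, pattern: RegExp = ALPHANUMERIC_ID): string {
  if (!pattern.test(segment)) {
    throw new Error(`Invalid path segment: ${segment}`)
  }
  return segment
}
```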
Waleed
244e1ee495 feat(workflow): lock/unlock workflow from context menu and panel (#3336)
* feat(workflow): lock/unlock workflow from context menu and panel

* lint

* fix(workflow): prevent duplicate lock notifications, no-op guard, fix orphaned JSDoc

* improvement(workflow): memoize hasLockedBlocks to avoid inline recomputation

* feat(google-translate): add Google Translate integration (#3337)

* feat(google-translate): add Google Translate integration

* fix(google-translate): api key as query param, fix docsLink, rename tool file

* feat(google): add missing tools for Gmail, Drive, Sheets, and Calendar (#3338)

* feat(google): add missing tools for Gmail, Drive, Sheets, and Calendar

* fix(google-drive): remove dead transformResponse from move tool

* feat(confluence): return page content in get page version tool (#3344)

* feat(confluence): return page content in get page version tool

* lint

* feat(api): audit log read endpoints for admin and enterprise (#3343)

* feat(api): audit log read endpoints for admin and enterprise

* fix(api): address PR review — boolean coercion, cursor validation, detail scope

* ran lint

* unified list of languages for google translate

* fix(workflow): respect snapshot view for panel lock toggle, remove unused disableAdmin prop

* improvement(canvas-menu): remove lock icon from workflow lock toggle

* feat(audit): record audit log for workflow lock/unlock
2026-02-25 15:23:30 -08:00
Waleed
1f3dc52d15 feat(api): audit log read endpoints for admin and enterprise (#3343)
* feat(api): audit log read endpoints for admin and enterprise

* fix(api): address PR review — boolean coercion, cursor validation, detail scope

* ran lint
2026-02-25 13:46:37 -08:00
Waleed
f625482bcb feat(confluence): return page content in get page version tool (#3344)
* feat(confluence): return page content in get page version tool

* lint
2026-02-25 13:45:19 -08:00
Waleed
16f337f6fd feat(google): add missing tools for Gmail, Drive, Sheets, and Calendar (#3338)
* feat(google): add missing tools for Gmail, Drive, Sheets, and Calendar

* fix(google-drive): remove dead transformResponse from move tool
2026-02-25 13:38:35 -08:00
Waleed
063ec87ced feat(google-translate): add Google Translate integration (#3337)
* feat(google-translate): add Google Translate integration

* fix(google-translate): api key as query param, fix docsLink, rename tool file
2026-02-25 13:24:22 -08:00
Waleed
870d4b55c6 fix(templates): show description tagline on template cards (#3335) 2026-02-25 12:10:22 -08:00
Waleed
95304b2941 feat(google-sheets): add filter support to read operation (#3333)
* feat(google-sheets): add filter support to read operation

* ran lint
2026-02-25 11:34:12 -08:00
Waleed
8b0c47b06c chore(executor): extract shared utils and remove dead code from handlers (#3334) 2026-02-25 11:28:16 -08:00
Vikhyath Mondreti
774771fddd fix(call-chain): x-sim-via propagation for API blocks and MCP tools (#3332)
* fix(call-chain): x-sim-via propagation for API blocks and MCP tools

* address bugbot comment
2026-02-25 08:41:54 -08:00
Waleed
43c0f5b199 feat(api): retry configuration for api block (#3329)
* fix(api): add configurable request retries

The API block docs described automatic retries, but the block didn't expose any retry controls and requests were executed only once.

This adds tool-level retry support with exponential backoff (including Retry-After support) for timeouts, 429s, and 5xx responses, exposes retry settings in the API block and http_request tool, and updates the docs to match.

Fixes #3225

* remove unnecessary helpers, cleanup

* update desc

* ack comments

* ack comment

* ack

* handle timeouts

---------

Co-authored-by: Jay Prajapati <79649559+jayy-77@users.noreply.github.com>
2026-02-25 00:13:47 -08:00
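The retry policy described above (retry timeouts, 429s, and 5xx responses with exponential backoff, honoring Retry-After) can be sketched as below. All names and the options shape are assumptions for illustration, not the actual http_request tool code:

```typescript
interface RetryOptions {
  maxRetries: number
  baseDelayMs: number
}

/** Only rate limits and server errors are worth retrying. */
function isRetryable(status: number): boolean {
  return status === 429 || (status >= 500 && status < 600)
}

/** Delay before retry N (0-based): a Retry-After header wins, else 2^n backoff. */
function retryDelayMs(attempt: number, opts: RetryOptions, retryAfterSeconds?: number): number {
  if (retryAfterSeconds !== undefined) return retryAfterSeconds * 1000
  return opts.baseDelayMs * 2 ** attempt
}

async function fetchWithRetry(url: string, opts: RetryOptions): Promise<Response> {
  let lastError: unknown
  for (let attempt = 0; attempt <= opts.maxRetries; attempt++) {
    try {
      const res = await fetch(url)
      if (!isRetryable(res.status) || attempt === opts.maxRetries) return res
      const retryAfter = res.headers.get('Retry-After')
      const delay = retryDelayMs(attempt, opts, retryAfter ? Number(retryAfter) : undefined)
      await new Promise((r) => setTimeout(r, delay))
    } catch (err) {
      // Network errors and timeouts are retryable too.
      lastError = err
      if (attempt === opts.maxRetries) throw err
      await new Promise((r) => setTimeout(r, retryDelayMs(attempt, opts)))
    }
  }
  throw lastError
}
```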
Waleed
ff01825b20 docs(credentials): replace environment variables page with credentials docs (#3331) 2026-02-25 00:02:16 -08:00
Vikhyath Mondreti
58d0fda173 fix(serializer): default canonical modes construction (#3330)
* fix(serializer): default canonical modes construction

* defaults for copilot

* address bugbot comments
2026-02-24 22:05:17 -08:00
Waleed
ecdb133d1b improvement(creds): bulk paste functionality, save notification, error notif (#3328)
* improvement(creds): bulk paste functionality, save notification, error notif

* use effect anti patterns

* fix add to cursor button

* fix(attio): wrap webhook body in data object and include required filter field

* fixed and tested attio webhook lifecycle
2026-02-24 19:12:10 -08:00
Waleed
d06459f489 fix(attio): automatic webhook lifecycle management and tool fixes (#3327)
* fix(attio): use code subblock type for JSON input fields

* fix(attio): correct people name attribute format in wand prompt example

* fix(attio): improve wand prompt with correct attribute formats for all field types

* fix(attio): use array format with full_name for personal-name attribute in wand prompt

* fix(attio): use loose null checks to prevent sending null params to API

* fix(attio): add offset param and make pagination fields advanced mode

* fix(attio): remove redundant (optional) from placeholders

* fix(attio): always send required workspace_access and workspace_member_access in create list

* fix(attio): always send api_slug in create list, auto-generate from name if not provided

* fix(attio): update api slug placeholder text

* fix(tools): manage lifecycle for attio tools

* updated docs

* fix(attio): remove incorrect save button reference from setup instructions

* fix(attio): log debug message when signature verification is skipped
2026-02-24 17:30:52 -08:00
Waleed
0574427d45 fix(providers): propagate abort signal to all LLM SDK calls (#3325)
* fix(providers): propagate abort signal to all LLM SDK calls

* fix(providers): propagate abort signal to deep research interactions API

* fix(providers): clean up abort listener when sleep timer resolves
2026-02-24 14:59:02 -08:00
Emir Karabeg
8f9b859a53 improvement(credentials): ui (#3322)
* improvement(credentials): ui

* fix: credentials logic

* improvement(credentials): ui

* improvement(credentials): members UI

* improvement(secrets): ui

* fix(credentials): show error when OAuth deletion fails due to missing fields

- Add deleteError state to track and display deletion errors
- Keep confirmation dialog open when deletion fails
- Show user-friendly error message when accountId or providerId is missing
- Add loading state to delete button during deletion
- Display error message in confirmation dialog with proper styling

Co-authored-by: Emir Karabeg <emir-karabeg@users.noreply.github.com>

* ran lint

* removed worktree file

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: Emir Karabeg <emir-karabeg@users.noreply.github.com>
Co-authored-by: Waleed Latif <walif6@gmail.com>
2026-02-24 14:48:13 -08:00
Waleed
60f9eb21bf feat(attio): add Attio CRM integration with 40 tools and 18 webhook triggers (#3324)
* feat(attio): add Attio CRM integration with 40 tools and 18 webhook triggers

* update docs

* fix(attio): use timestamp generationType for date wandConfig fields
2026-02-24 13:56:42 -08:00
Waleed
9a31c7d8ad improvement(processing): reduce redundant DB queries in execution preprocessing (#3320)
* improvement(processing): reduce redundant DB queries in execution preprocessing

* improvement(processing): add defensive ID check for prefetched workflow record

* improvement(processing): fix type safety in execution error logging

Replace `as any` cast in non-SSE error path with proper `buildTraceSpans()`
transformation, matching the SSE error path. Remove redundant `as any` cast
in preprocessing.ts where the types already align.

* improvement(processing): replace `as any` casts with proper types in logging

- logger.ts: cast JSONB cost column to `WorkflowExecutionLog['cost']` instead
  of `any` in both `completeWorkflowExecution` and `getWorkflowExecution`
- logger.ts: replace `(orgUsageBefore as any)?.toString?.()` with `String()`
  since COALESCE guarantees a non-null SQL aggregate value
- logging-session.ts: cast JSONB cost to `AccumulatedCost` (the local
  interface) instead of `any` in `loadExistingCost`

* improvement(processing): use exported HighestPrioritySubscription type in usage.ts

Replace inline `Awaited<ReturnType<typeof getHighestPrioritySubscription>>`
with the already-exported `HighestPrioritySubscription` type alias.

* improvement(processing): replace remaining `as any` casts with proper types

- preprocessing.ts: use exported `HighestPrioritySubscription` type instead
  of redeclaring via `Awaited<ReturnType<...>>`
- deploy/route.ts, status/route.ts: cast `hasWorkflowChanged` args to
  `WorkflowState` instead of `any` (JSONB + object literal narrowing)
- state/route.ts: type block sanitization and save with `BlockState` and
  `WorkflowState` instead of `any`
- search-suggestions.ts: remove 8 unnecessary `as any` casts on `'date'`
  literal that already satisfies the `Suggestion['category']` union

* fix(processing): prevent double-billing race in LoggingSession completion

When executeWorkflowCore throws, its catch block fire-and-forgets
safeCompleteWithError, then re-throws. The caller's catch block also
fire-and-forgets safeCompleteWithError on the same LoggingSession. Both
check this.completed (still false) before either's async DB write resolves,
so both proceed to completeWorkflowExecution which uses additive SQL for
billing — doubling the charged cost on every failed execution.

Fix: add a synchronous `completing` flag set immediately before the async
work begins. This blocks concurrent callers at the guard check. On failure,
the flag is reset so the safe* fallback path (completeWithCostOnlyLog) can
still attempt recovery.

* fix(processing): unblock error responses and isolate run-count failures

Remove unnecessary `await waitForCompletion()` from non-SSE and SSE error
paths where no `markAsFailed()` follows — these were blocking error responses
on log persistence for no reason. Wrap `updateWorkflowRunCounts` in its own
try/catch so a run-count DB failure cannot prevent session completion, billing,
and trace span persistence.

* improvement(processing): remove dead setupExecutor method

The method body was just a debug log with an `any` parameter — logging
now works entirely through trace spans with no executor integration.

* remove logger.debug

* fix(processing): guard completionPromise as write-once (singleton promise)

Prevent concurrent safeComplete* calls from overwriting completionPromise
with a no-op. The guard now lives at the assignment site — if a completion
is already in-flight, return its promise instead of starting a new one.
This ensures waitForCompletion() always awaits the real work.

* improvement(processing): remove empty else/catch blocks left by debug log cleanup

* fix(processing): enforce waitForCompletion inside markAsFailed to prevent completion races

Move waitForCompletion() into markAsFailed() so every call site is
automatically safe against in-flight fire-and-forget completions.
Remove the now-redundant external waitForCompletion() calls in route.ts.

* fix(processing): reset completing flag on fallback failure, clean up empty catch

- completeWithCostOnlyLog now resets this.completing = false when
  the fallback itself fails, preventing a permanently stuck session
- Use _disconnectError in MCP test-connection to signal intentional ignore

* fix(processing): restore disconnect error logging in MCP test-connection

Revert unrelated debug log removal — this file isn't part of the
processing improvements and the log aids connection leak detection.

* fix(processing): address audit findings across branch

- preprocessing.ts: use undefined (not null) for failed subscription
  fetch so getUserUsageLimit does a fresh lookup instead of silently
  falling back to free-tier limits
- deployed/route.ts: log warning on loadDeployedWorkflowState failure
  instead of silently swallowing the error
- schedule-execution.ts: remove dead successLog parameter and all
  call-site arguments left over from logger.debug cleanup
- mcp/middleware.ts: drop unused error binding in empty catch
- audit/log.ts, wand.ts: promote logger.debug to logger.warn in catch
  blocks where these are the only failure signal

* revert: undo unnecessary subscription null→undefined change

getHighestPrioritySubscription never throws (it catches internally
and returns null), so the catch block in preprocessExecution is dead
code. The null vs undefined distinction doesn't matter and the
coercions added unnecessary complexity.

* improvement(processing): remove dead try/catch around getHighestPrioritySubscription

getHighestPrioritySubscription catches internally and returns null
on error, so the wrapping try/catch was unreachable dead code.

* improvement(processing): remove dead getSnapshotByHash method

No longer called after createSnapshotWithDeduplication was refactored
to use a single upsert instead of select-then-insert.

---------
2026-02-24 11:55:59 -08:00
Jay Prajapati
9e817bc5b0 fix(auth): make DISABLE_AUTH work in web app (#3297)
Return an anonymous session using the same response envelope as Better Auth's get-session endpoint, and make the session provider tolerant to both wrapped and raw session payloads.

Fixes #2524
2026-02-24 09:52:44 -08:00
Waleed
d824ce5b07 feat(confluence): add webhook triggers for Confluence events (#3318)
* feat(confluence): add webhook triggers for Confluence events

Adds 16 Confluence triggers: page CRUD, comments, blogs, attachments,
spaces, and labels — plus a generic webhook trigger.

* feat(confluence): wire triggers into block and webhook processor

Add trigger subBlocks and triggers config to ConfluenceV2Block so
triggers appear in the UI. Add Confluence signature verification and
event filtering to the webhook processor.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(confluence): align trigger outputs with actual webhook payloads

- Rewrite output builders to match real Confluence webhook payload
  structure (flat spaceKey, numeric version, actual API fields)
- Remove fabricated fields (nested space/version objects, comment.body)
- Add missing fields (creatorAccountId, lastModifierAccountId, self,
  creationDate, modificationDate, accountType)
- Add extractor functions (extractPageData, extractCommentData, etc.)
  following the same pattern as Jira
- Add formatWebhookInput handler for Confluence in utils.server.ts
  so payloads are properly destructured before reaching workflows
- Make event field matching resilient (check both event and webhookEvent)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(confluence): handle generic webhook in formatWebhookInput

The generic webhook (confluence_webhook) was falling through to
extractPageData, which only returns the page field. For a catch-all
trigger that accepts all event types, preserve all entity fields
(page, comment, blog, attachment, space, label, content).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(confluence): use payload-based filtering instead of nonexistent event field

Confluence Cloud webhooks don't include an event/webhookEvent field in the
body (unlike Jira). Replaced broken event string matching with structural
payload filtering that checks which entity key is present.

* lint

* fix(confluence): read webhookSecret instead of secret in signature verification

* fix(webhooks): read webhookSecret for jira, linear, and github signature verification

These providers define their secret subBlock with id: 'webhookSecret' but the
processor was reading providerConfig.secret which is always undefined, silently
skipping signature verification even when a secret is configured.

* fix(confluence): use event field for exact matching with entity-category fallback

Admin REST API webhooks (Settings > Webhooks) include an event field for
action-level filtering (page_created vs page_updated). Connect app webhooks
omit it, so we fall back to entity-category matching.

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 23:36:43 -08:00
Waleed
9bd357f184 improvement(audit): enrich metadata across 23 audit log call sites (#3319)
* improvement(audit): enrich metadata across 23 audit log call sites

* improvement(audit): enrich metadata across 23 audit log call sites
2026-02-23 23:35:57 -08:00
Waleed
d4a014f423 feat(public-api): add env var and permission group controls to disable public API access (#3317)
Add DISABLE_PUBLIC_API / NEXT_PUBLIC_DISABLE_PUBLIC_API environment variables
and disablePublicApi permission group config option to allow self-hosted
deployments and enterprise admins to globally disable the public API toggle.

When disabled: the Access toggle is hidden in the Edit API Info modal,
the execute route blocks unauthenticated public access (401), and the
public-api PATCH route rejects enabling public API (403).

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 23:03:03 -08:00
Waleed
fe34d23a98 feat(gong): add Gong integration with 18 API tools (#3316)
* feat(gong): add Gong integration with 18 API tools

* fix(gong): make toDateTime optional for list_calls, add list_trackers to workspaceId condition

* chore(gong): regenerate docs

* fix(hex): update icon color and block bgColor
2026-02-23 17:57:10 -08:00
Waleed
b8dfb4dd20 fix(copy): preserve block names when pasting into workflows without conflicts (#3315) 2026-02-23 15:42:24 -08:00
Waleed
91666491cd fix(execution): scope X-Sim-Via header to internal routes and enforce depth limit (#3313)
* feat(execution): workflow cycle detection via X-Sim-Via header

* fix(execution): scope X-Sim-Via header to internal routes and add child workflow depth validation

- Move call chain header injection from HTTP tool layer (request.ts/utils.ts)
  to tool execution layer (tools/index.ts) gated on isInternalRoute, preventing
  internal workflow IDs from leaking to external third-party APIs
- Remove cycle detection from validateCallChain — depth limit alone prevents
  infinite loops while allowing legitimate self-recursion (pagination, tree
  processing, batch splitting)
- Add validateCallChain check in workflow-handler.ts before spawning child
  executor, closing the gap where in-process child workflows skipped validation
- Remove unsafe `(params as any)._context` type bypass in request.ts

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(execution): validate child call chain instead of parent chain

Validate childCallChain (after appending current workflow ID) rather
than ctx.callChain (parent). Prevents an off-by-one where a chain at
depth 10 could still spawn an 11th workflow.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 15:19:31 -08:00
Waleed
eafbb9fef4 fix(tag-dropdown): exclude downstream blocks in loops and parallel siblings (#3312)
* fix(tag-dropdown): exclude downstream blocks in loops and parallel siblings from reference picker

* chore(serializer): remove unused computeAccessibleBlockIds method

* chore(block-path-calculator): remove unused calculateAccessibleBlocksForWorkflow method

* chore(tag-dropdown): remove no-op loop node filter

* fix(tag-dropdown): remove parallel container from accessible references in parallel branches

* chore(tag-dropdown): remove no-op starter block filter

* fix(tag-dropdown): restore parallel container in accessible references for blocks inside parallel

* fix(copilot): exclude downstream loop nodes and parallel siblings from accessible references
2026-02-23 14:21:40 -08:00
Waleed
132fef06a1 fix(redis): tighten stale TCP connection detection and add fast lease deadline (#3311)
* fix(redis): tighten stale TCP connection detection and add fast lease deadline

* revert(redis): restore original retryStrategy logging

* fix(redis): clear deadline timer after Promise.race to prevent memory leak

* fix(redis): downgrade lease fallback log to warn — unavailable is expected fallback
2026-02-23 13:22:29 -08:00
Vikhyath Mondreti
2ae814549a improvement(migration): move credential selector automigration logic to server side (#3310)
* improvement(credentials): move client side automigration to server side

* fix migration func

* fix tests

* address bugbot
2026-02-23 06:33:54 -08:00
Vikhyath Mondreti
e55d41f2ef fix(credentials): credential dependent endpoints (#3309)
* fix(dependent): credential dependent endpoints

* fix tests

* fix route to not block ws creds

* remove faulty auth checks

* prevent unintended cascade by depends on during migration

* address bugbot comments
2026-02-23 04:38:03 -08:00
Vikhyath Mondreti
364bb196ea feat(credentials): multiple credentials per provider (#3211)
* feat(mult-credentials): progress

* checkpoint

* make it autoselect personal secret when create secret is clicked

* improve collaborative UX

* remove add member ui for workspace secrets

* bulk entry of .env

* promote to workspace secret

* more ux improvement

* share with workspace for oauth

* remove new badge

* share button

* copilot + oauth name conflict

* reconnect option to connect diff account

* remove credential no access marker

* canonical credential id entry

* remove migration to prep staging migration

* migration readded

* backfill improvements

* run lint

* fix tests

* remove unused code

* autoselect provider when connecting from block

* address bugbot comments

* remove some dead code

* more permissions stuff

* remove more unused code

* address bugbot

* add filter

* remove migration to prep migration

* fix migration

* fix migration issues

* remove migration prep merge

* readd migration

* include user tables triggers

* extract shared code

* fix

* fix tx issue

* remove migration to prep merge

* readd migration

* fix agent tool input

* agent with tool input deletion case

* fix credential subblock saving

* remove dead code

* fix tests

* address bugbot comments
2026-02-23 02:26:16 -08:00
Waleed
69ec70af13 feat(terminal): expandable child workflow blocks in console (#3306)
* feat(terminal): expandable child workflow blocks in console

* fix(terminal): cycle guard in collectWorkflowDescendants, workflow node running/canceled state

* fix(terminal): expand workflow blocks nested inside loop/parallel iterations

* fix(terminal): prevent child block mixing across loop iterations for workflow blocks

* ack PR comments, remove extraneous logs

* feat(terminal): real-time child workflow block propagation in console

* fix(terminal): align parallel guard in WorkflowBlockHandler.getIterationContext with BlockExecutor

* fix(terminal): fire onChildWorkflowInstanceReady regardless of nodeMetadata presence

* fix(terminal): use shared isWorkflowBlockType from executor/constants
2026-02-23 00:17:44 -08:00
Waleed
687c12528b fix(parallel): correct active state pulsing and duration display for parallel subflow blocks (#3305)
* fix(executor): resolve block ID for parallel subflow active state

* fix timing for parallel block

* refactor(parallel): extract shared updateActiveBlockRefCount helper

* fix(parallel): error-sticky block run status to prevent branch success masking failure

* Revert "fix(parallel): error-sticky block run status to prevent branch success masking failure"

This reverts commit 9c087cd466.
2026-02-22 15:03:33 -08:00
Waleed
996dc96d6e fix(security): allow HTTP for localhost and loopback addresses (#3304)
* fix(security): allow localhost HTTP without weakening SSRF protections

* fix(security): remove extraneous comments and fix failing SSRF test

* fix(security): derive isLocalhost from hostname not resolved IP in validateUrlWithDNS

* fix(security): verify resolved IP is loopback when hostname is localhost in validateUrlWithDNS

---------

Co-authored-by: aayush598 <aayushgid598@gmail.com>
2026-02-22 14:58:11 -08:00
Waleed
04286fc16b fix(hex): scope param renames to their respective operations (#3295) 2026-02-21 17:53:04 -08:00
Waleed
c52f78c840 fix(models): remove retired claude-3-7-sonnet and update default models (#3292) 2026-02-21 16:44:54 -08:00
Waleed
e318bf2e65 feat(tools): added hex (#3293)
* feat(tools): added hex

* update tool names
2026-02-21 16:44:39 -08:00
Waleed
4913799a27 feat(oauth): add CIMD support for client metadata discovery (#3285)
* feat(oauth): add CIMD support for client metadata discovery

* fix(oauth): add response size limit, redirect_uri and logo_uri validation to CIMD

- Add maxResponseBytes (256KB) to prevent oversized responses
- Validate redirect_uri schemes (https/http only) and reject commas
- Validate logo_uri requires HTTPS, silently drop invalid logos

* fix(oauth): add explicit userId null for CIMD client insert

* fix(oauth): fix redirect_uri error handling, skip upsert on cache hit

- Move scheme check outside try/catch so specific error isn't swallowed
- Return fromCache flag from resolveClientMetadata to skip redundant DB writes

* fix(oauth): evict CIMD cache on upsert failure to allow retry
2026-02-21 14:38:05 -08:00
Waleed
ccb4f5956d fix(redis): prevent false rate limits and code execution failures during Redis outages (#3289) 2026-02-21 12:20:19 -08:00
Vikhyath Mondreti
2a6d4fcb96 fix(deploy): reuse subblock merge helper in use change detection hook (#3287)
* fix(workflow-changes): change detection logic divergence

* use shared helper
2026-02-21 07:57:11 -08:00
841 changed files with 108865 additions and 11876 deletions


@@ -532,6 +532,41 @@ outputs: {
}
```
### Typed JSON Outputs
When using `type: 'json'` and you know the object shape in advance, **describe the inner fields in the description** so downstream blocks know what properties are available. For well-known, stable objects, use nested output definitions instead:
```typescript
outputs: {
// BAD: Opaque json with no info about what's inside
plan: { type: 'json', description: 'Zone plan information' },
// GOOD: Describe the known fields in the description
plan: {
type: 'json',
description: 'Zone plan information (id, name, price, currency, frequency, is_subscribed)',
},
// BEST: Use nested output definition when the shape is stable and well-known
plan: {
id: { type: 'string', description: 'Plan identifier' },
name: { type: 'string', description: 'Plan name' },
price: { type: 'number', description: 'Plan price' },
currency: { type: 'string', description: 'Price currency' },
},
}
```
Use the nested pattern when:
- The object has a small, stable set of fields (< 10)
- Downstream blocks will commonly access specific properties
- The API response shape is well-documented and unlikely to change
Use `type: 'json'` with a descriptive string when:
- The object has many fields or a dynamic shape
- It represents a list/array of items
- The shape varies by operation
## V2 Block Pattern
When creating V2 blocks (alongside legacy V1):
@@ -695,6 +730,62 @@ Please provide the SVG and I'll convert it to a React component.
You can usually find this in the service's brand/press kit page, or copy it from their website.
```
## Advanced Mode for Optional Fields
Optional fields that are rarely used should be set to `mode: 'advanced'` so they don't clutter the basic UI. This includes:
- Pagination tokens
- Time range filters (start/end time)
- Sort order options
- Reply settings
- Rarely used IDs (e.g., reply-to tweet ID, quote tweet ID)
- Max results / limits
```typescript
{
id: 'startTime',
title: 'Start Time',
type: 'short-input',
placeholder: 'ISO 8601 timestamp',
condition: { field: 'operation', value: ['search', 'list'] },
mode: 'advanced', // Rarely used, hide from basic view
}
```
## WandConfig for Complex Inputs
Use `wandConfig` for fields that are hard to fill out manually, such as timestamps, comma-separated lists, and complex query strings. This gives users an AI-assisted input experience.
```typescript
// Timestamps - use generationType: 'timestamp' to inject current date context
{
id: 'startTime',
title: 'Start Time',
type: 'short-input',
mode: 'advanced',
wandConfig: {
enabled: true,
prompt: 'Generate an ISO 8601 timestamp based on the user description. Return ONLY the timestamp string.',
generationType: 'timestamp',
},
}
// Comma-separated lists - simple prompt without generationType
{
id: 'mediaIds',
title: 'Media IDs',
type: 'short-input',
mode: 'advanced',
wandConfig: {
enabled: true,
prompt: 'Generate a comma-separated list of media IDs. Return ONLY the comma-separated values.',
},
}
```
## Naming Convention
All tool IDs referenced in `tools.access` and returned by `tools.config.tool` MUST use `snake_case` (e.g., `x_create_tweet`, `slack_send_message`). Never use camelCase or PascalCase.
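As a sketch of this convention (the `acme_*` tool IDs and operation values below are hypothetical, not real tools):

```typescript
// Illustrative block config fragment: every ID in tools.access and every
// value returned by tools.config.tool is snake_case.
const tools = {
  access: ['acme_create_item', 'acme_delete_item'],
  config: {
    // Map the selected operation to a snake_case tool ID.
    tool: (params: { operation: string }): string => {
      switch (params.operation) {
        case 'create':
          return 'acme_create_item'
        case 'delete':
          return 'acme_delete_item'
        default:
          throw new Error(`Unknown operation: ${params.operation}`)
      }
    },
  },
}
```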
## Checklist Before Finishing
- [ ] All subBlocks have `id`, `title` (except switch), and `type`
@@ -702,9 +793,24 @@ You can usually find this in the service's brand/press kit page, or copy it from
- [ ] DependsOn set for fields that need other values
- [ ] Required fields marked correctly (boolean or condition)
- [ ] OAuth inputs have correct `serviceId`
- [ ] Tools.access lists all tool IDs
- [ ] Tools.config.tool returns correct tool ID
- [ ] Tools.access lists all tool IDs (snake_case)
- [ ] Tools.config.tool returns correct tool ID (snake_case)
- [ ] Outputs match tool outputs
- [ ] Block registered in registry.ts
- [ ] If icon missing: asked user to provide SVG
- [ ] If triggers exist: `triggers` config set, trigger subBlocks spread
- [ ] Optional/rarely-used fields set to `mode: 'advanced'`
- [ ] Timestamps and complex inputs have `wandConfig` enabled
## Final Validation (Required)
After creating the block, you MUST validate it against every tool it references:
1. **Read every tool definition** that appears in `tools.access` — do not skip any
2. **For each tool, verify the block has correct:**
- SubBlock inputs that cover all required tool params (with correct `condition` to show for that operation)
- SubBlock input types that match the tool param types (e.g., dropdown for enums, short-input for strings)
- `tools.config.params` correctly maps subBlock IDs to tool param names (if they differ)
- Type coercions in `tools.config.params` for any params that need conversion (Number(), Boolean(), JSON.parse())
3. **Verify block outputs** cover the key fields returned by all tools
4. **Verify conditions** — each subBlock should only show for the operations that actually use it


@@ -102,6 +102,7 @@ export const {service}{Action}Tool: ToolConfig<Params, Response> = {
- Always use `?? []` for optional array fields
- Set `optional: true` for outputs that may not exist
- Never output raw JSON dumps - extract meaningful fields
- When using `type: 'json'` and you know the object shape, define `properties` with the inner fields so downstream consumers know the structure. Only use bare `type: 'json'` when the shape is truly dynamic
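A minimal sketch of these rules in a `transformResponse` (the response shape and field names are hypothetical, not a real API):

```typescript
// Hypothetical API response shape for illustration.
interface AcmeItemApiResponse {
  id: string
  name?: string | null
  tags?: string[]
}

// Applies the rules above: `?? null` for nullable fields, `?? []` for
// optional arrays, and only meaningful fields in the output (no raw dump).
function transformItem(data: AcmeItemApiResponse) {
  return {
    id: data.id,
    name: data.name ?? null,
    tags: data.tags ?? [],
  }
}
```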
## Step 3: Create Block
@@ -436,6 +437,12 @@ If creating V2 versions (API-aligned outputs):
- [ ] Ran `bun run scripts/generate-docs.ts`
- [ ] Verified docs file created
### Final Validation (Required)
- [ ] Read every tool file and cross-referenced inputs/outputs against the API docs
- [ ] Verified block subBlocks cover all required tool params with correct conditions
- [ ] Verified block outputs match what the tools actually return
- [ ] Verified `tools.config.params` correctly maps and coerces all param types
## Example Command
When the user asks to add an integration:
@@ -685,13 +692,40 @@ return NextResponse.json({
| `isUserFile` | `@/lib/core/utils/user-file` | Type guard for UserFile objects |
| `FileInputSchema` | `@/lib/uploads/utils/file-schemas` | Zod schema for file validation |
### Advanced Mode for Optional Fields
Optional fields that are rarely used should be set to `mode: 'advanced'` so they don't clutter the basic UI. Examples: pagination tokens, time range filters, sort order, max results, reply settings.
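For instance, a pagination token could be declared like this (the `nextToken` field is illustrative):

```typescript
// Hypothetical pagination-token subBlock hidden behind advanced mode.
const nextTokenField = {
  id: 'nextToken',
  title: 'Next Page Token',
  type: 'short-input',
  mode: 'advanced', // rarely used, so keep it out of the basic view
}
```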
### WandConfig for Complex Inputs
Use `wandConfig` for fields that are hard to fill out manually:
- **Timestamps**: Use `generationType: 'timestamp'` to inject current date context into the AI prompt
- **JSON arrays**: Use `generationType: 'json-object'` for structured data
- **Complex queries**: Use a descriptive prompt explaining the expected format
```typescript
{
id: 'startTime',
title: 'Start Time',
type: 'short-input',
mode: 'advanced',
wandConfig: {
enabled: true,
prompt: 'Generate an ISO 8601 timestamp. Return ONLY the timestamp string.',
generationType: 'timestamp',
},
}
```
### Common Gotchas
1. **OAuth serviceId must match** - The `serviceId` in oauth-input must match the OAuth provider configuration
2. **All tool IDs MUST be snake_case** - `stripe_create_payment`, not `stripeCreatePayment`. This applies to tool `id` fields, registry keys, `tools.access` arrays, and `tools.config.tool` return values
3. **Block type is snake_case** - `type: 'stripe'`, not `type: 'Stripe'`
4. **Alphabetical ordering** - Keep imports and registry entries alphabetically sorted
5. **Required can be conditional** - Use `required: { field: 'op', value: 'create' }` instead of always true
6. **DependsOn clears options** - When a dependency changes, selector options are refetched
7. **Never pass Buffer directly to fetch** - Convert to `new Uint8Array(buffer)` for TypeScript compatibility
8. **Always handle legacy file params** - Keep hidden `fileContent` params for backwards compatibility
9. **Optional fields use advanced mode** - Set `mode: 'advanced'` on rarely-used optional fields
10. **Complex inputs need wandConfig** - Timestamps, JSON arrays, and other hard-to-type values should have `wandConfig` enabled
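Gotcha 7 in a runnable sketch (the endpoint URL is a placeholder):

```typescript
// Wrap a Node Buffer in a Uint8Array before using it as a fetch body;
// passing the Buffer directly trips TypeScript's BodyInit typing.
const fileBuffer = Buffer.from('file contents')
const body = new Uint8Array(fileBuffer)

// Usage (placeholder URL):
// await fetch('https://api.example.com/upload', { method: 'POST', body })
```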


@@ -147,9 +147,18 @@ closedAt: {
},
```
### Nested Properties
For complex outputs, define nested structure:
### Typed JSON Outputs
When using `type: 'json'` and you know the object shape in advance, **always define the inner structure** using `properties` so downstream consumers know what fields are available:
```typescript
// BAD: Opaque json with no info about what's inside
metadata: {
type: 'json',
description: 'Response metadata',
},
// GOOD: Define the known properties
metadata: {
type: 'json',
description: 'Response metadata',
@@ -159,7 +168,10 @@ metadata: {
count: { type: 'number', description: 'Total count' },
},
},
```
For arrays of objects, define the item structure:
```typescript
items: {
type: 'array',
description: 'List of items',
@@ -173,6 +185,8 @@ items: {
},
```
Only use bare `type: 'json'` without `properties` when the shape is truly dynamic or unknown.
## Critical Rules for transformResponse
### Handle Nullable Fields
@@ -272,8 +286,13 @@ If creating V2 tools (API-aligned outputs), use `_v2` suffix:
- Version: `'2.0.0'`
- Outputs: Flat, API-aligned (no content/metadata wrapper)
## Naming Convention
All tool IDs MUST use `snake_case`: `{service}_{action}` (e.g., `x_create_tweet`, `slack_send_message`). Never use camelCase or PascalCase for tool IDs.
## Checklist Before Finishing
- [ ] All tool IDs use snake_case
- [ ] All params have explicit `required: true` or `required: false`
- [ ] All params have appropriate `visibility`
- [ ] All nullable response fields use `?? null`
@@ -281,4 +300,22 @@ If creating V2 tools (API-aligned outputs), use `_v2` suffix:
- [ ] No raw JSON dumps in outputs
- [ ] Types file has all interfaces
- [ ] Index.ts exports all tools
- [ ] Tool IDs use snake_case
## Final Validation (Required)
After creating all tools, you MUST validate every tool before finishing:
1. **Read every tool file** you created — do not skip any
2. **Cross-reference with the API docs** to verify:
- All required params are marked `required: true`
- All optional params are marked `required: false`
- Param types match the API (string, number, boolean, json)
- Request URL, method, headers, and body match the API spec
- `transformResponse` extracts the correct fields from the API response
- All output fields match what the API actually returns
- No fields are missing from outputs that the API provides
- No extra fields are defined in outputs that the API doesn't return
3. **Verify consistency** across tools:
- Shared types in `types.ts` match all tools that use them
- Tool IDs in the barrel export match the tool file definitions
- Error handling is consistent (error checks, meaningful messages)


@@ -0,0 +1,283 @@
---
description: Validate an existing Sim integration (tools, block, registry) against the service's API docs
argument-hint: <service-name> [api-docs-url]
---
# Validate Integration Skill
You are an expert auditor for Sim integrations. Your job is to thoroughly validate that an existing integration is correct, complete, and follows all conventions.
## Your Task
When the user asks you to validate an integration:
1. Read the service's API documentation (via WebFetch or Context7)
2. Read every tool, the block, and registry entries
3. Cross-reference everything against the API docs and Sim conventions
4. Report all issues found, grouped by severity (critical, warning, suggestion)
5. Fix all issues after reporting them
## Step 1: Gather All Files
Read **every** file for the integration — do not skip any:
```
apps/sim/tools/{service}/ # All tool files, types.ts, index.ts
apps/sim/blocks/blocks/{service}.ts # Block definition
apps/sim/tools/registry.ts # Tool registry entries for this service
apps/sim/blocks/registry.ts # Block registry entry for this service
apps/sim/components/icons.tsx # Icon definition
apps/sim/lib/auth/auth.ts # OAuth scopes (if OAuth service)
apps/sim/lib/oauth/oauth.ts # OAuth provider config (if OAuth service)
```
## Step 2: Pull API Documentation
Fetch the official API docs for the service. This is the **source of truth** for:
- Endpoint URLs, HTTP methods, and auth headers
- Required vs optional parameters
- Parameter types and allowed values
- Response shapes and field names
- Pagination patterns (which param name, which response field)
- Rate limits and error formats
## Step 3: Validate Tools
For **every** tool file, check:
### Tool ID and Naming
- [ ] Tool ID uses `snake_case`: `{service}_{action}` (e.g., `x_create_tweet`, `slack_send_message`)
- [ ] Tool `name` is human-readable (e.g., `'X Create Tweet'`)
- [ ] Tool `description` is a concise one-liner describing what it does
- [ ] Tool `version` is set (`'1.0.0'` or `'2.0.0'` for V2)
### Params
- [ ] All required API params are marked `required: true`
- [ ] All optional API params are marked `required: false`
- [ ] Every param has explicit `required: true` or `required: false` — never omitted
- [ ] Param types match the API (`'string'`, `'number'`, `'boolean'`, `'json'`)
- [ ] Visibility is correct:
- `'hidden'` — ONLY for OAuth access tokens and system-injected params
- `'user-only'` — for API keys, credentials, and account-specific IDs the user must provide
- `'user-or-llm'` — for everything else (search queries, content, filters, IDs that could come from other blocks)
- [ ] Every param has a `description` that explains what it does
### Request
- [ ] URL matches the API endpoint exactly (correct base URL, path segments, path params)
- [ ] HTTP method matches the API spec (GET, POST, PUT, PATCH, DELETE)
- [ ] Headers include correct auth pattern:
- OAuth: `Authorization: Bearer ${params.accessToken}`
- API Key: correct header name and format per the service's docs
- [ ] `Content-Type` header is set for POST/PUT/PATCH requests
- [ ] Body sends all required fields and only includes optional fields when provided
- [ ] For GET requests with query params: URL is constructed correctly with query string
- [ ] ID fields in URL paths are `.trim()`-ed to prevent copy-paste whitespace errors
- [ ] Path params use template literals correctly: `` `https://api.service.com/v1/${params.id.trim()}` ``
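Several of these request checks can be seen together in one sketch (the base URL and params are placeholders, not a real service):

```typescript
// Trimmed path param, query string via URLSearchParams, Bearer auth header.
const params = { id: '  abc123  ', limit: 25, accessToken: 'token-value' }

const url = new URL(`https://api.example.com/v1/items/${params.id.trim()}`)
url.searchParams.set('limit', String(params.limit))

const request = {
  url: url.toString(),
  method: 'GET' as const,
  headers: { Authorization: `Bearer ${params.accessToken}` },
}
```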
### Response / transformResponse
- [ ] Correctly parses the API response (`await response.json()`)
- [ ] Extracts the right fields from the response structure (e.g., `data.data` vs `data` vs `data.results`)
- [ ] All nullable fields use `?? null`
- [ ] All optional arrays use `?? []`
- [ ] Error cases are handled: checks for missing/empty data and returns meaningful error
- [ ] Does NOT do raw JSON dumps — extracts meaningful, individual fields
### Outputs
- [ ] All output fields match what the API actually returns
- [ ] No fields are missing that the API provides and users would commonly need
- [ ] No phantom fields defined that the API doesn't return
- [ ] `optional: true` is set on fields that may not exist in all responses
- [ ] When using `type: 'json'` and the shape is known, `properties` defines the inner fields
- [ ] When using `type: 'array'`, `items` defines the item structure with `properties`
- [ ] Field descriptions are accurate and helpful
### Types (types.ts)
- [ ] Has param interfaces for every tool (e.g., `XCreateTweetParams`)
- [ ] Has response interfaces for every tool (extending `ToolResponse`)
- [ ] Optional params use `?` in the interface (e.g., `replyTo?: string`)
- [ ] Field names in types match actual API field names
- [ ] Shared response types are properly reused (e.g., `XTweetResponse` shared across tweet tools)
### Barrel Export (index.ts)
- [ ] Every tool is exported
- [ ] All types are re-exported (`export * from './types'`)
- [ ] No orphaned exports (tools that don't exist)
### Tool Registry (tools/registry.ts)
- [ ] Every tool is imported and registered
- [ ] Registry keys use snake_case and match tool IDs exactly
- [ ] Entries are in alphabetical order within the file
## Step 4: Validate Block
### Block ↔ Tool Alignment (CRITICAL)
This is the most important validation — the block must be perfectly aligned with every tool it references.
For **each tool** in `tools.access`:
- [ ] The operation dropdown has an option whose ID matches the tool ID (or the `tools.config.tool` function correctly maps to it)
- [ ] Every **required** tool param (except `accessToken`) has a corresponding subBlock input that is:
- Shown when that operation is selected (correct `condition`)
- Marked as `required: true` (or conditionally required)
- [ ] Every **optional** tool param has a corresponding subBlock input (or is intentionally omitted if truly never needed)
- [ ] SubBlock `id` values are unique across the entire block — no duplicates even across different conditions
- [ ] The `tools.config.tool` function returns the correct tool ID for every possible operation value
- [ ] The `tools.config.params` function correctly maps subBlock IDs to tool param names when they differ
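The last two checks can be sketched as follows (the `itemIdentifier`/`itemId` names and `acme_*` tool IDs are hypothetical, chosen only to show a subBlock ID that differs from the tool's param name):

```typescript
// tools.config sketch: operation -> snake_case tool ID, plus a params
// mapping that renames a subBlock id and coerces a string to a number.
const toolsConfig = {
  tool: (p: { operation: string }) =>
    p.operation === 'create' ? 'acme_create_item' : 'acme_get_item',
  params: (p: { itemIdentifier: string; maxResults?: string }) => ({
    itemId: p.itemIdentifier, // subBlock id differs from tool param name
    ...(p.maxResults !== undefined && { maxResults: Number(p.maxResults) }), // coerce string -> number
  }),
}
```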
### SubBlocks
- [ ] Operation dropdown lists ALL tool operations available in `tools.access`
- [ ] Dropdown option labels are human-readable and descriptive
- [ ] Conditions use correct syntax:
- Single value: `{ field: 'operation', value: 'x_create_tweet' }`
- Multiple values (OR): `{ field: 'operation', value: ['x_create_tweet', 'x_delete_tweet'] }`
- Negation: `{ field: 'operation', value: 'delete', not: true }`
- Compound: `{ field: 'op', value: 'send', and: { field: 'type', value: 'dm' } }`
- [ ] Condition arrays include ALL operations that use that field — none missing
- [ ] `dependsOn` is set for fields that need other values (selectors depending on credential, cascading dropdowns)
- [ ] SubBlock types match tool param types:
- Enum/fixed options → `dropdown`
- Free text → `short-input`
- Long text/content → `long-input`
- True/false → `dropdown` with Yes/No options (not `switch` unless purely UI toggle)
- Credentials → `oauth-input` with correct `serviceId`
- [ ] Dropdown `value: () => 'default'` is set for dropdowns with a sensible default
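The condition shapes listed above can be sketched as a tiny evaluator — `field`, `value`, `not`, and `and` come from the checklist, while the evaluator itself is illustrative, not the actual block engine:

```typescript
// Hypothetical condition evaluator matching the documented semantics.
interface Condition {
  field: string
  value: string | string[]
  not?: boolean
  and?: Condition
}

function matches(cond: Condition, values: Record<string, string>): boolean {
  const actual = values[cond.field]
  const hit = Array.isArray(cond.value)
    ? cond.value.includes(actual) // array of values acts as OR
    : cond.value === actual
  const self = cond.not ? !hit : hit // `not: true` negates the match
  return cond.and ? self && matches(cond.and, values) : self // compound is AND
}

// Single value
console.log(matches({ field: 'operation', value: 'x_create_tweet' }, { operation: 'x_create_tweet' })) // true
// Multiple values (OR)
console.log(matches({ field: 'operation', value: ['x_create_tweet', 'x_delete_tweet'] }, { operation: 'x_delete_tweet' })) // true
// Negation
console.log(matches({ field: 'operation', value: 'delete', not: true }, { operation: 'create' })) // true
```

Checking a condition array against this kind of evaluator makes it easy to spot an operation missing from a multi-value list.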
### Advanced Mode
- [ ] Optional, rarely-used fields are set to `mode: 'advanced'`:
- Pagination tokens / next tokens
- Time range filters (start/end time)
- Sort order / direction options
- Max results / per page limits
- Reply settings / threading options
- Rarely used IDs (reply-to, quote-tweet, etc.)
- Exclude filters
- [ ] **Required** fields are NEVER set to `mode: 'advanced'`
- [ ] Fields that users fill in most of the time are NOT set to `mode: 'advanced'`
### WandConfig
- [ ] Timestamp fields have `wandConfig` with `generationType: 'timestamp'`
- [ ] Comma-separated list fields have `wandConfig` with a descriptive prompt
- [ ] Complex filter/query fields have `wandConfig` with format examples in the prompt
- [ ] All `wandConfig` prompts end with "Return ONLY the [format] - no explanations, no extra text."
- [ ] `wandConfig.placeholder` describes what to type in natural language
### Tools Config
- [ ] `tools.access` lists **every** tool ID the block can use — none missing
- [ ] `tools.config.tool` returns the correct tool ID for each operation
- [ ] Type coercions are in `tools.config.params` (runs at execution time), NOT in `tools.config.tool` (runs at serialization time before variable resolution)
- [ ] `tools.config.params` handles:
- `Number()` conversion for numeric params that come as strings from inputs
- `Boolean` / string-to-boolean conversion for toggle params
- Empty string → `undefined` conversion for optional dropdown values
- Any subBlock ID → tool param name remapping
- [ ] No `Number()`, `JSON.parse()`, or other coercions in `tools.config.tool` — these would destroy dynamic references like `<Block.output>`
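The coercion rules above can be sketched as a standalone params mapper — the subBlock IDs (`maxResults`, `includeReplies`, `sortOrder`) are hypothetical examples, not a real block's fields. The point is that these conversions live at execution time, never in the tool selector:

```typescript
// Illustrative `tools.config.params`-style mapper applying the three coercions.
interface RawParams {
  maxResults?: string
  includeReplies?: string
  sortOrder?: string
}

function toToolParams(p: RawParams) {
  return {
    // string → number for numeric params coming from text inputs
    max_results: p.maxResults ? Number(p.maxResults) : undefined,
    // 'true'/'false' strings → real booleans for toggle params
    include_replies: p.includeReplies === undefined ? undefined : p.includeReplies === 'true',
    // empty dropdown value → undefined so the API default applies
    sort_order: p.sortOrder || undefined,
  }
}

console.log(toToolParams({ maxResults: '25', includeReplies: 'false', sortOrder: '' }))
// → { max_results: 25, include_replies: false, sort_order: undefined }
```

Putting `Number('<Block.output>')` in `tools.config.tool` instead would run before variable resolution and yield `NaN`.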
### Block Outputs
- [ ] Outputs cover the key fields returned by ALL tools (not just one operation)
- [ ] Output types are correct (`'string'`, `'number'`, `'boolean'`, `'json'`)
- [ ] `type: 'json'` outputs either:
- Describe inner fields in the description string (GOOD): `'User profile (id, name, username, bio)'`
- Use nested output definitions (BEST): `{ id: { type: 'string' }, name: { type: 'string' } }`
- [ ] No opaque `type: 'json'` with vague descriptions like `'Response data'`
- [ ] Outputs that only appear for certain operations use `condition` if supported, or document which operations return them
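The GOOD/BEST distinction can be sketched side by side — the `outputs` shape below mirrors the checklist, and the field names are illustrative:

```typescript
// Opaque output — tells the user nothing about what's inside.
const opaqueOutputs = {
  user: { type: 'json', description: 'Response data' }, // BAD
}

// Typed output — description names the fields AND nested definitions type them.
const typedOutputs = {
  user: {
    type: 'json',
    description: 'User profile (id, name, username, bio)', // GOOD
    properties: {
      id: { type: 'string' },
      name: { type: 'string' },
      username: { type: 'string' },
      bio: { type: 'string', optional: true }, // may be absent for some users
    },
  },
}

console.log(opaqueOutputs.user.description)
console.log(Object.keys(typedOutputs.user.properties))
```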
### Block Metadata
- [ ] `type` is snake_case (e.g., `'x'`, `'cloudflare'`)
- [ ] `name` is human-readable (e.g., `'X'`, `'Cloudflare'`)
- [ ] `description` is a concise one-liner
- [ ] `longDescription` provides detail for docs
- [ ] `docsLink` points to `'https://docs.sim.ai/tools/{service}'`
- [ ] `category` is `'tools'`
- [ ] `bgColor` uses the service's brand color hex
- [ ] `icon` references the correct icon component from `@/components/icons`
- [ ] `authMode` is set correctly (`AuthMode.OAuth` or `AuthMode.ApiKey`)
- [ ] Block is registered in `blocks/registry.ts` alphabetically
### Block Inputs
- [ ] `inputs` section lists all subBlock params that the block accepts
- [ ] Input types match the subBlock types
- [ ] When using `canonicalParamId`, inputs list the canonical ID (not the raw subBlock IDs)
## Step 5: Validate OAuth Scopes (if OAuth service)
- [ ] `auth.ts` scopes include ALL scopes needed by ALL tools in the integration
- [ ] `oauth.ts` provider config scopes match `auth.ts` scopes
- [ ] Block `requiredScopes` (if defined) matches `auth.ts` scopes
- [ ] No excess scopes that aren't needed by any tool
- [ ] Each scope has a human-readable description in `oauth-required-modal.tsx`'s `SCOPE_DESCRIPTIONS`
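The first and fourth checks above amount to a set diff between what `auth.ts` grants and what the tools collectively need. A minimal sketch, with hypothetical scope strings:

```typescript
// Scope-alignment check: tools' needs vs auth.ts grants.
function diffScopes(authScopes: string[], toolScopes: string[]) {
  const auth = new Set(authScopes)
  const needed = new Set(toolScopes)
  return {
    missing: Array.from(needed).filter((s) => !auth.has(s)), // tools need it, auth.ts lacks it
    excess: Array.from(auth).filter((s) => !needed.has(s)), // auth.ts grants it, no tool uses it
  }
}

console.log(diffScopes(['tweet.read', 'users.read'], ['tweet.read', 'tweet.write']))
// → { missing: [ 'tweet.write' ], excess: [ 'users.read' ] }
```

Both lists should be empty when `auth.ts`, `oauth.ts`, and the block's `requiredScopes` are aligned.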
## Step 6: Validate Pagination Consistency
If any tools support pagination:
- [ ] Pagination param names match the API docs (e.g., `pagination_token` vs `next_token` vs `cursor`)
- [ ] Different API endpoints that use different pagination param names have separate subBlocks in the block
- [ ] Pagination response fields (`nextToken`, `cursor`, etc.) are included in tool outputs
- [ ] Pagination subBlocks are set to `mode: 'advanced'`
## Step 7: Validate Error Handling
- [ ] `transformResponse` checks for error conditions before accessing data
- [ ] Error responses include meaningful messages (not just generic "failed")
- [ ] HTTP error status codes are handled (check `response.ok` or status codes)
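The pattern can be sketched as follows — the response shape here is an assumption for illustration, not a specific integration's schema:

```typescript
// Sketch of a transformResponse that checks errors before touching data.
interface ApiResult {
  ok: boolean
  status: number
  body: { data?: { id: string }; errors?: Array<{ message: string }> }
}

function transformResponse(res: ApiResult) {
  if (!res.ok) {
    // Surface the API's own message, not a generic "failed".
    const message = res.body.errors?.[0]?.message ?? `Request failed with status ${res.status}`
    throw new Error(message)
  }
  if (!res.body.data) {
    throw new Error('Response succeeded but contained no data')
  }
  return { success: true, output: { id: res.body.data.id } }
}

console.log(transformResponse({ ok: true, status: 200, body: { data: { id: '42' } } }))
// → { success: true, output: { id: '42' } }
```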
## Step 8: Report and Fix
### Report Format
Group findings by severity:
**Critical** (will cause runtime errors or incorrect behavior):
- Wrong endpoint URL or HTTP method
- Missing required params or wrong `required` flag
- Incorrect response field mapping (accessing wrong path in response)
- Missing error handling that would cause crashes
- Tool ID mismatch between tool file, registry, and block `tools.access`
- OAuth scopes missing in `auth.ts` that tools need
- `tools.config.tool` returning wrong tool ID for an operation
- Type coercions in `tools.config.tool` instead of `tools.config.params`
**Warning** (follows conventions incorrectly or has usability issues):
- Optional field not set to `mode: 'advanced'`
- Missing `wandConfig` on timestamp/complex fields
- Wrong `visibility` on params (e.g., `'hidden'` instead of `'user-or-llm'`)
- Missing `optional: true` on nullable outputs
- Opaque `type: 'json'` without property descriptions
- Missing `.trim()` on ID fields in request URLs
- Missing `?? null` on nullable response fields
- Block condition array missing an operation that uses that field
- Missing scope description in `oauth-required-modal.tsx`
**Suggestion** (minor improvements):
- Better description text
- Inconsistent naming across tools
- Missing `longDescription` or `docsLink`
- Pagination fields that could benefit from `wandConfig`
### Fix All Issues
After reporting, fix every **critical** and **warning** issue. Apply **suggestions** where they don't add unnecessary complexity.
### Validation Output
After fixing, confirm:
1. `bun run lint` passes with no fixes needed
2. TypeScript compiles clean (no type errors)
3. Re-read all modified files to verify fixes are correct
## Checklist Summary
- [ ] Read ALL tool files, block, types, index, and registries
- [ ] Pulled and read official API documentation
- [ ] Validated every tool's ID, params, request, response, outputs, and types against API docs
- [ ] Validated block ↔ tool alignment (every tool param has a subBlock, every condition is correct)
- [ ] Validated advanced mode on optional/rarely-used fields
- [ ] Validated wandConfig on timestamps and complex inputs
- [ ] Validated tools.config mapping, tool selector, and type coercions
- [ ] Validated block outputs match what tools return, with typed JSON where possible
- [ ] Validated OAuth scopes alignment across auth.ts, oauth.ts, block, and modal (if OAuth)
- [ ] Validated pagination consistency across tools and block
- [ ] Validated error handling (error checks, meaningful messages)
- [ ] Validated registry entries (tools and block, alphabetical, correct imports)
- [ ] Reported all issues grouped by severity
- [ ] Fixed all critical and warning issues
- [ ] Ran `bun run lint` after fixes
- [ ] Verified TypeScript compiles clean


@@ -8,51 +8,210 @@ paths:
Use Vitest. Test files: `feature.ts` → `feature.test.ts`
## Global Mocks (vitest.setup.ts)
These modules are mocked globally — do NOT re-mock them in test files unless you need to override behavior:
- `@sim/db` → `databaseMock`
- `drizzle-orm` → `drizzleOrmMock`
- `@sim/logger` → `loggerMock`
- `@/stores/console/store`, `@/stores/terminal`, `@/stores/execution/store`
- `@/blocks/registry`
- `@trigger.dev/sdk`
## Structure
```typescript
/**
* @vitest-environment node
*/
import { databaseMock, loggerMock } from '@sim/testing'
import { describe, expect, it, vi } from 'vitest'
import { createMockRequest } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
vi.mock('@sim/db', () => databaseMock)
vi.mock('@sim/logger', () => loggerMock)
const { mockGetSession } = vi.hoisted(() => ({
mockGetSession: vi.fn(),
}))
import { myFunction } from '@/lib/feature'
vi.mock('@/lib/auth', () => ({
auth: { api: { getSession: vi.fn() } },
getSession: mockGetSession,
}))
describe('myFunction', () => {
beforeEach(() => vi.clearAllMocks())
it.concurrent('isolated tests run in parallel', () => { ... })
import { GET, POST } from '@/app/api/my-route/route'
describe('my route', () => {
beforeEach(() => {
vi.clearAllMocks()
mockGetSession.mockResolvedValue({ user: { id: 'user-1' } })
})
it('returns data', async () => {
const req = createMockRequest('GET')
const res = await GET(req)
expect(res.status).toBe(200)
})
})
```
## Performance Rules (Critical)
### NEVER use `vi.resetModules()` + `vi.doMock()` + `await import()`
This is the #1 cause of slow tests. It forces complete module re-evaluation per test.
```typescript
// BAD — forces module re-evaluation every test (~50-100ms each)
beforeEach(() => {
vi.resetModules()
vi.doMock('@/lib/auth', () => ({ getSession: vi.fn() }))
})
it('test', async () => {
const { GET } = await import('./route') // slow dynamic import
})
// GOOD — module loaded once, mocks reconfigured per test (~1ms each)
const { mockGetSession } = vi.hoisted(() => ({
mockGetSession: vi.fn(),
}))
vi.mock('@/lib/auth', () => ({ getSession: mockGetSession }))
import { GET } from '@/app/api/my-route/route'
beforeEach(() => { vi.clearAllMocks() })
it('test', () => {
mockGetSession.mockResolvedValue({ user: { id: '1' } })
})
```
**Only exception:** Singleton modules that cache state at module scope (e.g., Redis clients, connection pools). These genuinely need `vi.resetModules()` + dynamic import to get a fresh instance per test.
### NEVER use `vi.importActual()`
This defeats the purpose of mocking by loading the real module and all its dependencies.
```typescript
// BAD — loads real module + all transitive deps
vi.mock('@/lib/workspaces/utils', async () => {
const actual = await vi.importActual('@/lib/workspaces/utils')
return { ...actual, myFn: vi.fn() }
})
// GOOD — mock everything, only implement what tests need
vi.mock('@/lib/workspaces/utils', () => ({
myFn: vi.fn(),
otherFn: vi.fn(),
}))
```
### NEVER use `mockAuth()`, `mockConsoleLogger()`, or `setupCommonApiMocks()` from `@sim/testing`
These helpers internally use `vi.doMock()` which is slow. Use direct `vi.hoisted()` + `vi.mock()` instead.
### Mock heavy transitive dependencies
If a module under test imports `@/blocks` (200+ files), `@/tools/registry`, or other heavy modules, mock them:
```typescript
vi.mock('@/blocks', () => ({
getBlock: () => null,
getAllBlocks: () => ({}),
getAllBlockTypes: () => [],
registry: {},
}))
```
### Use `@vitest-environment node` unless DOM is needed
Only use `@vitest-environment jsdom` if the test uses `window`, `document`, `FormData`, or other browser APIs. Node environment is significantly faster.
### Avoid real timers in tests
```typescript
// BAD
await new Promise(r => setTimeout(r, 500))
// GOOD — use minimal delays or fake timers
await new Promise(r => setTimeout(r, 1))
// or
vi.useFakeTimers()
```
## Mock Pattern Reference
### Auth mocking (API routes)
```typescript
const { mockGetSession } = vi.hoisted(() => ({
mockGetSession: vi.fn(),
}))
vi.mock('@/lib/auth', () => ({
auth: { api: { getSession: vi.fn() } },
getSession: mockGetSession,
}))
// In tests:
mockGetSession.mockResolvedValue({ user: { id: 'user-1', email: 'test@example.com' } })
mockGetSession.mockResolvedValue(null) // unauthenticated
```
### Hybrid auth mocking
```typescript
const { mockCheckSessionOrInternalAuth } = vi.hoisted(() => ({
mockCheckSessionOrInternalAuth: vi.fn(),
}))
vi.mock('@/lib/auth/hybrid', () => ({
checkSessionOrInternalAuth: mockCheckSessionOrInternalAuth,
}))
// In tests:
mockCheckSessionOrInternalAuth.mockResolvedValue({
success: true, userId: 'user-1', authType: 'session',
})
```
### Database chain mocking
```typescript
const { mockSelect, mockFrom, mockWhere } = vi.hoisted(() => ({
mockSelect: vi.fn(),
mockFrom: vi.fn(),
mockWhere: vi.fn(),
}))
vi.mock('@sim/db', () => ({
db: { select: mockSelect },
}))
beforeEach(() => {
mockSelect.mockReturnValue({ from: mockFrom })
mockFrom.mockReturnValue({ where: mockWhere })
mockWhere.mockResolvedValue([{ id: '1', name: 'test' }])
})
```
## @sim/testing Package
Always prefer over local mocks.
Always prefer over local test data.
| Category | Utilities |
|----------|-----------|
| **Mocks** | `loggerMock`, `databaseMock`, `setupGlobalFetchMock()` |
| **Factories** | `createSession()`, `createWorkflowRecord()`, `createBlock()`, `createExecutorContext()` |
| **Mocks** | `loggerMock`, `databaseMock`, `drizzleOrmMock`, `setupGlobalFetchMock()` |
| **Factories** | `createSession()`, `createWorkflowRecord()`, `createBlock()`, `createExecutionContext()` |
| **Builders** | `WorkflowBuilder`, `ExecutionContextBuilder` |
| **Assertions** | `expectWorkflowAccessGranted()`, `expectBlockExecuted()` |
| **Requests** | `createMockRequest()`, `createEnvMock()` |
## Rules
## Rules Summary
1. `@vitest-environment node` directive at file top
2. `vi.mock()` calls before importing mocked modules
3. `@sim/testing` utilities over local mocks
4. `it.concurrent` for isolated tests (no shared mutable state)
5. `beforeEach(() => vi.clearAllMocks())` to reset state
## Hoisted Mocks
For mutable mock references:
```typescript
const mockFn = vi.hoisted(() => vi.fn())
vi.mock('@/lib/module', () => ({ myFunction: mockFn }))
mockFn.mockResolvedValue({ data: 'test' })
```
1. `@vitest-environment node` unless DOM is required
2. `vi.hoisted()` + `vi.mock()` + static imports — never `vi.resetModules()` + `vi.doMock()` + dynamic imports
3. `vi.mock()` calls before importing mocked modules
4. `@sim/testing` utilities over local mocks
5. `beforeEach(() => vi.clearAllMocks())` to reset state — no redundant `afterEach`
6. No `vi.importActual()` — mock everything explicitly
7. No `mockAuth()`, `mockConsoleLogger()`, `setupCommonApiMocks()` — use direct mocks
8. Mock heavy deps (`@/blocks`, `@/tools/registry`, `@/triggers`) in tests that don't need them
9. Use absolute imports in test files
10. Avoid real timers — use 1ms delays or `vi.useFakeTimers()`


@@ -7,51 +7,210 @@ globs: ["apps/sim/**/*.test.ts", "apps/sim/**/*.test.tsx"]
Use Vitest. Test files: `feature.ts` → `feature.test.ts`
## Global Mocks (vitest.setup.ts)
These modules are mocked globally — do NOT re-mock them in test files unless you need to override behavior:
- `@sim/db` → `databaseMock`
- `drizzle-orm` → `drizzleOrmMock`
- `@sim/logger` → `loggerMock`
- `@/stores/console/store`, `@/stores/terminal`, `@/stores/execution/store`
- `@/blocks/registry`
- `@trigger.dev/sdk`
## Structure
```typescript
/**
* @vitest-environment node
*/
import { databaseMock, loggerMock } from '@sim/testing'
import { describe, expect, it, vi } from 'vitest'
import { createMockRequest } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
vi.mock('@sim/db', () => databaseMock)
vi.mock('@sim/logger', () => loggerMock)
const { mockGetSession } = vi.hoisted(() => ({
mockGetSession: vi.fn(),
}))
import { myFunction } from '@/lib/feature'
vi.mock('@/lib/auth', () => ({
auth: { api: { getSession: vi.fn() } },
getSession: mockGetSession,
}))
describe('myFunction', () => {
beforeEach(() => vi.clearAllMocks())
it.concurrent('isolated tests run in parallel', () => { ... })
import { GET, POST } from '@/app/api/my-route/route'
describe('my route', () => {
beforeEach(() => {
vi.clearAllMocks()
mockGetSession.mockResolvedValue({ user: { id: 'user-1' } })
})
it('returns data', async () => {
const req = createMockRequest('GET')
const res = await GET(req)
expect(res.status).toBe(200)
})
})
```
## Performance Rules (Critical)
### NEVER use `vi.resetModules()` + `vi.doMock()` + `await import()`
This is the #1 cause of slow tests. It forces complete module re-evaluation per test.
```typescript
// BAD — forces module re-evaluation every test (~50-100ms each)
beforeEach(() => {
vi.resetModules()
vi.doMock('@/lib/auth', () => ({ getSession: vi.fn() }))
})
it('test', async () => {
const { GET } = await import('./route') // slow dynamic import
})
// GOOD — module loaded once, mocks reconfigured per test (~1ms each)
const { mockGetSession } = vi.hoisted(() => ({
mockGetSession: vi.fn(),
}))
vi.mock('@/lib/auth', () => ({ getSession: mockGetSession }))
import { GET } from '@/app/api/my-route/route'
beforeEach(() => { vi.clearAllMocks() })
it('test', () => {
mockGetSession.mockResolvedValue({ user: { id: '1' } })
})
```
**Only exception:** Singleton modules that cache state at module scope (e.g., Redis clients, connection pools). These genuinely need `vi.resetModules()` + dynamic import to get a fresh instance per test.
### NEVER use `vi.importActual()`
This defeats the purpose of mocking by loading the real module and all its dependencies.
```typescript
// BAD — loads real module + all transitive deps
vi.mock('@/lib/workspaces/utils', async () => {
const actual = await vi.importActual('@/lib/workspaces/utils')
return { ...actual, myFn: vi.fn() }
})
// GOOD — mock everything, only implement what tests need
vi.mock('@/lib/workspaces/utils', () => ({
myFn: vi.fn(),
otherFn: vi.fn(),
}))
```
### NEVER use `mockAuth()`, `mockConsoleLogger()`, or `setupCommonApiMocks()` from `@sim/testing`
These helpers internally use `vi.doMock()` which is slow. Use direct `vi.hoisted()` + `vi.mock()` instead.
### Mock heavy transitive dependencies
If a module under test imports `@/blocks` (200+ files), `@/tools/registry`, or other heavy modules, mock them:
```typescript
vi.mock('@/blocks', () => ({
getBlock: () => null,
getAllBlocks: () => ({}),
getAllBlockTypes: () => [],
registry: {},
}))
```
### Use `@vitest-environment node` unless DOM is needed
Only use `@vitest-environment jsdom` if the test uses `window`, `document`, `FormData`, or other browser APIs. Node environment is significantly faster.
### Avoid real timers in tests
```typescript
// BAD
await new Promise(r => setTimeout(r, 500))
// GOOD — use minimal delays or fake timers
await new Promise(r => setTimeout(r, 1))
// or
vi.useFakeTimers()
```
## Mock Pattern Reference
### Auth mocking (API routes)
```typescript
const { mockGetSession } = vi.hoisted(() => ({
mockGetSession: vi.fn(),
}))
vi.mock('@/lib/auth', () => ({
auth: { api: { getSession: vi.fn() } },
getSession: mockGetSession,
}))
// In tests:
mockGetSession.mockResolvedValue({ user: { id: 'user-1', email: 'test@example.com' } })
mockGetSession.mockResolvedValue(null) // unauthenticated
```
### Hybrid auth mocking
```typescript
const { mockCheckSessionOrInternalAuth } = vi.hoisted(() => ({
mockCheckSessionOrInternalAuth: vi.fn(),
}))
vi.mock('@/lib/auth/hybrid', () => ({
checkSessionOrInternalAuth: mockCheckSessionOrInternalAuth,
}))
// In tests:
mockCheckSessionOrInternalAuth.mockResolvedValue({
success: true, userId: 'user-1', authType: 'session',
})
```
### Database chain mocking
```typescript
const { mockSelect, mockFrom, mockWhere } = vi.hoisted(() => ({
mockSelect: vi.fn(),
mockFrom: vi.fn(),
mockWhere: vi.fn(),
}))
vi.mock('@sim/db', () => ({
db: { select: mockSelect },
}))
beforeEach(() => {
mockSelect.mockReturnValue({ from: mockFrom })
mockFrom.mockReturnValue({ where: mockWhere })
mockWhere.mockResolvedValue([{ id: '1', name: 'test' }])
})
```
## @sim/testing Package
Always prefer over local mocks.
Always prefer over local test data.
| Category | Utilities |
|----------|-----------|
| **Mocks** | `loggerMock`, `databaseMock`, `setupGlobalFetchMock()` |
| **Factories** | `createSession()`, `createWorkflowRecord()`, `createBlock()`, `createExecutorContext()` |
| **Mocks** | `loggerMock`, `databaseMock`, `drizzleOrmMock`, `setupGlobalFetchMock()` |
| **Factories** | `createSession()`, `createWorkflowRecord()`, `createBlock()`, `createExecutionContext()` |
| **Builders** | `WorkflowBuilder`, `ExecutionContextBuilder` |
| **Assertions** | `expectWorkflowAccessGranted()`, `expectBlockExecuted()` |
| **Requests** | `createMockRequest()`, `createEnvMock()` |
## Rules
## Rules Summary
1. `@vitest-environment node` directive at file top
2. `vi.mock()` calls before importing mocked modules
3. `@sim/testing` utilities over local mocks
4. `it.concurrent` for isolated tests (no shared mutable state)
5. `beforeEach(() => vi.clearAllMocks())` to reset state
## Hoisted Mocks
For mutable mock references:
```typescript
const mockFn = vi.hoisted(() => vi.fn())
vi.mock('@/lib/module', () => ({ myFunction: mockFn }))
mockFn.mockResolvedValue({ data: 'test' })
```
1. `@vitest-environment node` unless DOM is required
2. `vi.hoisted()` + `vi.mock()` + static imports — never `vi.resetModules()` + `vi.doMock()` + dynamic imports
3. `vi.mock()` calls before importing mocked modules
4. `@sim/testing` utilities over local mocks
5. `beforeEach(() => vi.clearAllMocks())` to reset state — no redundant `afterEach`
6. No `vi.importActual()` — mock everything explicitly
7. No `mockAuth()`, `mockConsoleLogger()`, `setupCommonApiMocks()` — use direct mocks
8. Mock heavy deps (`@/blocks`, `@/tools/registry`, `@/triggers`) in tests that don't need them
9. Use absolute imports in test files
10. Avoid real timers — use 1ms delays or `vi.useFakeTimers()`


@@ -8,7 +8,7 @@ on:
concurrency:
group: ci-${{ github.ref }}
cancel-in-progress: false
cancel-in-progress: ${{ github.event_name == 'pull_request' }}
permissions:
contents: read


@@ -10,7 +10,7 @@ permissions:
jobs:
test-build:
name: Test and Build
runs-on: blacksmith-4vcpu-ubuntu-2404
runs-on: blacksmith-8vcpu-ubuntu-2404
steps:
- name: Checkout code
@@ -38,6 +38,20 @@ jobs:
key: ${{ github.repository }}-node-modules
path: ./node_modules
- name: Mount Turbo cache (Sticky Disk)
uses: useblacksmith/stickydisk@v1
with:
key: ${{ github.repository }}-turbo-cache
path: ./.turbo
- name: Restore Next.js build cache
uses: actions/cache@v4
with:
path: ./apps/sim/.next/cache
key: ${{ runner.os }}-nextjs-${{ hashFiles('bun.lock') }}
restore-keys: |
${{ runner.os }}-nextjs-
- name: Install dependencies
run: bun install --frozen-lockfile
@@ -85,6 +99,7 @@ jobs:
NEXT_PUBLIC_APP_URL: 'https://www.sim.ai'
DATABASE_URL: 'postgresql://postgres:postgres@localhost:5432/simstudio'
ENCRYPTION_KEY: '7cf672e460e430c1fba707575c2b0e2ad5a99dddf9b7b7e3b5646e630861db1c' # dummy key for CI only
TURBO_CACHE_DIR: .turbo
run: bun run test
- name: Check schema and migrations are in sync
@@ -110,6 +125,7 @@ jobs:
RESEND_API_KEY: 'dummy_key_for_ci_only'
AWS_REGION: 'us-west-2'
ENCRYPTION_KEY: '7cf672e460e430c1fba707575c2b0e2ad5a99dddf9b7b7e3b5646e630861db1c' # dummy key for CI only
TURBO_CACHE_DIR: .turbo
run: bunx turbo run build --filter=sim
- name: Upload coverage to Codecov

.gitignore vendored

@@ -73,3 +73,7 @@ start-collector.sh
## Helm Chart Tests
helm/sim/test
i18n.cache
## Claude Code
.claude/launch.json
.claude/worktrees/


@@ -167,27 +167,51 @@ Import from `@/components/emcn`, never from subpaths (except CSS files). Use CVA
## Testing
Use Vitest. Test files: `feature.ts` → `feature.test.ts`
Use Vitest. Test files: `feature.ts` → `feature.test.ts`. See `.cursor/rules/sim-testing.mdc` for full details.
### Global Mocks (vitest.setup.ts)
`@sim/db`, `drizzle-orm`, `@sim/logger`, `@/blocks/registry`, `@trigger.dev/sdk`, and store mocks are provided globally. Do NOT re-mock them unless overriding behavior.
### Standard Test Pattern
```typescript
/**
* @vitest-environment node
*/
import { databaseMock, loggerMock } from '@sim/testing'
import { describe, expect, it, vi } from 'vitest'
import { createMockRequest } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
vi.mock('@sim/db', () => databaseMock)
vi.mock('@sim/logger', () => loggerMock)
const { mockGetSession } = vi.hoisted(() => ({
mockGetSession: vi.fn(),
}))
import { myFunction } from '@/lib/feature'
vi.mock('@/lib/auth', () => ({
auth: { api: { getSession: vi.fn() } },
getSession: mockGetSession,
}))
describe('feature', () => {
beforeEach(() => vi.clearAllMocks())
it.concurrent('runs in parallel', () => { ... })
import { GET } from '@/app/api/my-route/route'
describe('my route', () => {
beforeEach(() => {
vi.clearAllMocks()
mockGetSession.mockResolvedValue({ user: { id: 'user-1' } })
})
it('returns data', async () => { ... })
})
```
Use `@sim/testing` mocks/factories over local test data. See `.cursor/rules/sim-testing.mdc` for details.
### Performance Rules
- **NEVER** use `vi.resetModules()` + `vi.doMock()` + `await import()` — use `vi.hoisted()` + `vi.mock()` + static imports
- **NEVER** use `vi.importActual()` — mock everything explicitly
- **NEVER** use `mockAuth()`, `mockConsoleLogger()`, `setupCommonApiMocks()` from `@sim/testing` — they use `vi.doMock()` internally
- **Mock heavy deps** (`@/blocks`, `@/tools/registry`, `@/triggers`) in tests that don't need them
- **Use `@vitest-environment node`** unless DOM APIs are needed (`window`, `document`, `FormData`)
- **Avoid real timers** — use 1ms delays or `vi.useFakeTimers()`
Use `@sim/testing` mocks/factories over local test data.
## Utils Rules


@@ -13,7 +13,7 @@ export function TOCFooter() {
<div className='text-balance font-semibold text-base leading-tight'>
Start building today
</div>
<div className='text-muted-foreground'>Trusted by over 60,000 builders.</div>
<div className='text-muted-foreground'>Trusted by over 70,000 builders.</div>
<div className='text-muted-foreground'>
Build Agentic workflows visually on a drag-and-drop canvas or with natural language.
</div>


@@ -76,7 +76,6 @@ export function ApiIcon(props: SVGProps<SVGSVGElement>) {
</svg>
)
}
export function ConditionalIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg
@@ -526,6 +525,17 @@ export function SlackMonoIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function GammaIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='-14 0 192 192' fill='none' xmlns='http://www.w3.org/2000/svg'>
<path
fill='currentColor'
d='M47.2,14.4c-14.4,8.2-26,19.6-34.4,33.6C4.3,62.1,0,77.7,0,94.3s4.3,32.2,12.7,46.3c8.5,14.1,20,25.4,34.4,33.6,14.4,8.2,30.4,12.4,47.7,12.4h69.8v-112.5h-81v39.1h38.2v31.8h-25.6c-9.1,0-17.6-2.3-25.2-6.9-7.6-4.6-13.8-10.8-18.3-18.4-4.5-7.7-6.7-16.2-6.7-25.3s2.3-17.7,6.7-25.3c4.5-7.7,10.6-13.9,18.3-18.4,7.6-4.6,16.1-6.9,25.2-6.9h68.5V2h-69.8c-17.3,0-33.3,4.2-47.7,12.4h0Z'
/>
</svg>
)
}
export function GithubIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} width='26' height='26' viewBox='0 0 26 26' xmlns='http://www.w3.org/2000/svg'>
@@ -710,6 +720,17 @@ export function NotionIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function GongIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 55.4 60' fill='none' xmlns='http://www.w3.org/2000/svg'>
<path
fill='currentColor'
d='M54.1,25.7H37.8c-0.9,0-1.6,1-1.3,1.8l3.9,10.1c0.2,0.4-0.2,0.9-0.7,0.9l-5-0.3c-0.2,0-0.4,0.1-0.6,0.3L30.3,44c-0.2,0.3-0.6,0.4-1,0.2l-5.8-3.9c-0.2-0.2-0.5-0.2-0.8,0l-8,5.4c-0.5,0.4-1.2-0.1-1-0.7L16,37c0.1-0.3-0.1-0.7-0.4-0.8l-4.2-1.7c-0.4-0.2-0.6-0.7-0.3-1l3.7-4.6c0.2-0.2,0.2-0.6,0-0.8l-3.1-4.5c-0.3-0.4,0-1,0.5-1l4.9-0.4c0.4,0,0.6-0.3,0.6-0.7l-0.4-6.8c0-0.5,0.5-0.8,0.9-0.7l6,2.5c0.3,0.1,0.6,0,0.8-0.2l4.2-4.6c0.3-0.4,0.9-0.3,1.1,0.2l2.5,6.4c0.3,0.8,1.3,1.1,2,0.6l9.8-7.3c1.1-0.8,0.4-2.6-1-2.4L37.3,10c-0.3,0-0.6-0.1-0.7-0.4l-3.4-8.7c-0.4-0.9-1.5-1.1-2.2-0.4l-7.4,8c-0.2,0.2-0.5,0.3-0.8,0.2l-9.7-4.1c-0.9-0.4-1.8,0.2-1.9,1.2l-0.4,10c0,0.4-0.3,0.6-0.6,0.6l-8.9,0.6c-1,0.1-1.6,1.2-1,2.1l5.9,8.7c0.2,0.2,0.2,0.6,0,0.8l-6,6.9C-0.3,36,0,37.1,0.8,37.4l6.9,3c0.3,0.1,0.5,0.5,0.4,0.8L3.7,58.3c-0.3,1.2,1.1,2.1,2.1,1.4l16.5-11.8c0.2-0.2,0.5-0.2,0.8,0l7.5,5.3c0.6,0.4,1.5,0.3,1.9-0.4l4.7-7.2c0.1-0.2,0.4-0.3,0.6-0.3l11.2,1.4c0.9,0.1,1.8-0.6,1.5-1.5l-4.7-12.1c-0.1-0.3,0-0.7,0.4-0.9l8.5-4C55.9,27.6,55.5,25.7,54.1,25.7z'
/>
</svg>
)
}
export function GmailIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg
@@ -928,6 +949,25 @@ export function GoogleIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function DevinIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 500 500' fill='none' xmlns='http://www.w3.org/2000/svg'>
<path
d='M59.29,209.39l48.87,28.21c1.75,1.01,3.71,1.51,5.67,1.51c1.95,0,3.92-0.52,5.67-1.51l48.87-28.21c0,0,0.14-0.11,0.2-0.16c0.74-0.45,1.44-0.99,2.07-1.6c0.09-0.09,0.18-0.2,0.27-0.29c0.54-0.58,1.03-1.21,1.44-1.89c0.06-0.11,0.16-0.2,0.2-0.32c0.43-0.74,0.74-1.53,0.99-2.37c0.05-0.18,0.09-0.36,0.14-0.54c0.2-0.86,0.36-1.74,0.36-2.66v-28.21c0-10.89,5.87-21.03,15.3-26.48c9.42-5.45,21.15-5.44,30.59,0l24.43,14.11c0.79,0.45,1.62,0.77,2.47,1.01c0.18,0.05,0.37,0.11,0.54,0.16c0.83,0.2,1.69,0.32,2.54,0.34c0.05,0,0.09,0,0.11,0c0.09,0,0.18-0.05,0.26-0.05c0.79,0,1.58-0.11,2.34-0.32c0.14-0.03,0.27-0.05,0.4-0.09c0.83-0.23,1.64-0.57,2.41-0.99c0.06-0.05,0.16-0.05,0.23-0.09l48.87-28.21c3.51-2.03,5.67-5.76,5.67-9.81V64.52c0-4.05-2.16-7.78-5.67-9.81l-48.91-28.19c-3.51-2.03-7.81-2.03-11.32,0l-48.87,28.21c0,0-0.14,0.11-0.2,0.16c-0.74,0.45-1.44,0.99-2.07,1.6c-0.09,0.09-0.18,0.2-0.27,0.29c-0.54,0.58-1.03,1.21-1.44,1.89c-0.06,0.11-0.16,0.2-0.2,0.31c-0.43,0.74-0.74,1.53-0.99,2.37c-0.05,0.18-0.09,0.36-0.14,0.54c-0.2,0.86-0.36,1.74-0.36,2.66v28.21c0,10.89-5.87,21.03-15.3,26.5c-9.42,5.44-21.15,5.44-30.59,0l-24.42-14.1c-0.79-0.45-1.63-0.77-2.47-1.01c-0.18-0.05-0.36-0.11-0.54-0.16c-0.84-0.2-1.69-0.31-2.55-0.34c-0.14,0-0.25,0-0.38,0c-0.81,0-1.6,0.11-2.37,0.31c-0.14,0.02-0.25,0.05-0.38,0.09c-0.82,0.23-1.63,0.57-2.4,1c-0.06,0.05-0.16,0.05-0.23,0.09l-48.84,28.24c-3.51,2.03-5.67,5.76-5.67,9.81v56.42c0,4.05,2.16,7.78,5.67,9.81C59.29,209.41,59.29,209.39,59.29,209.39z'
fill='#2A6DCE'
/>
<path
d='M325.46,223.49c9.42-5.44,21.15-5.44,30.59,0l24.43,14.11c0.79,0.45,1.62,0.77,2.47,1.01c0.18,0.05,0.36,0.11,0.54,0.16c0.83,0.2,1.69,0.31,2.54,0.34c0.05,0,0.09,0,0.11,0c0.09,0,0.18-0.03,0.26-0.05c0.79,0,1.58-0.11,2.34-0.31c0.14-0.03,0.27-0.05,0.4-0.09c0.83-0.23,1.62-0.57,2.41-0.99c0.06-0.05,0.16-0.05,0.25-0.09l48.87-28.21c3.51-2.03,5.67-5.76,5.67-9.81v-56.43c0-4.05-2.16-7.78-5.67-9.81l-48.84-28.22c-3.51-2.03-7.81-2.03-11.32,0l-48.87,28.21c0,0-0.14,0.11-0.2,0.16c-0.74,0.45-1.44,0.99-2.07,1.6c-0.09,0.09-0.18,0.2-0.26,0.29c-0.54,0.58-1.03,1.21-1.44,1.89c-0.06,0.11-0.16,0.2-0.2,0.32c-0.43,0.74-0.74,1.53-0.99,2.37c-0.05,0.18-0.09,0.36-0.14,0.54c-0.2,0.86-0.36,1.74-0.36,2.66v28.21c0,10.89-5.87,21.03-15.3,26.5c-9.42,5.44-21.15,5.44-30.59,0l-24.43-14.11c-0.79-0.45-1.62-0.77-2.47-1.01c-0.18-0.05-0.36-0.11-0.54-0.16c-0.83-0.2-1.69-0.32-2.54-0.34c-0.14,0-0.25,0-0.38,0c-0.81,0-1.6,0.11-2.37,0.32c-0.14,0.03-0.25,0.05-0.38,0.09c-0.83,0.23-1.64,0.57-2.41,0.99c-0.06,0.05-0.16,0.05-0.23,0.09l-48.87,28.21c-3.51,2.03-5.67,5.76-5.67,9.81v56.43c0,4.05,2.16,7.78,5.67,9.81l48.87,28.21c0,0,0.16,0.05,0.23,0.09c0.77,0.43,1.58,0.77,2.41,0.99c0.14,0.05,0.27,0.05,0.4,0.09c0.77,0.18,1.55,0.29,2.34,0.32c0.09,0,0.18,0.05,0.27,0.05c0.05,0,0.09,0,0.11,0c0.86,0,1.69-0.14,2.54-0.34c0.18-0.05,0.36-0.09,0.54-0.16c0.86-0.25,1.69-0.57,2.47-1.01l24.43-14.11c9.42-5.44,21.15-5.44,30.59,0c9.42,5.44,15.3,15.59,15.3,26.48v28.21c0,0.92,0.14,1.8,0.36,2.66c0.05,0.18,0.09,0.36,0.14,0.54c0.25,0.83,0.56,1.62,0.99,2.37c0.06,0.11,0.14,0.2,0.2,0.31c0.4,0.68,0.9,1.31,1.44,1.89c0.09,0.09,0.18,0.2,0.26,0.29c0.61,0.6,1.31,1.12,2.07,1.6c0.06,0.05,0.11,0.11,0.2,0.16l48.87,28.21c1.75,1.01,3.72,1.51,5.67,1.51s3.92-0.52,5.67-1.51l48.87-28.21c3.51-2.03,5.67-5.76,5.67-9.81v-56.43c0-4.05-2.16-7.78-5.67-9.81l-48.87-28.21c0,0-0.16-0.05-0.23-0.09c-0.77-0.43-1.58-0.77-2.41-0.99c-0.14-0.05-0.25-0.05-0.38-0.09c-0.79-0.18-1.57-0.29-2.38-0.32c-0.11,0-0.25,0-0.36,0c-0.86,0-1.71,0.14-2.54,0.34c-0.18,0.05-0.34,0.09-0.52,0.16c-0.86,0.25-1.69,0
.57-2.47,1.01l-24.43,14.11c-9.42,5.44-21.15,5.44-30.58,0c-9.42-5.44-15.3-15.59-15.3-26.5c0-10.91,5.87-21.03,15.3-26.48C325.55,223.49,325.46,223.49,325.46,223.49z'
fill='#1DC19C'
/>
<path
d='M304.5,369.22l-48.87-28.21c0,0-0.16-0.05-0.23-0.09c-0.77-0.43-1.57-0.77-2.41-0.99c-0.14-0.05-0.27-0.05-0.4-0.09c-0.79-0.18-1.57-0.29-2.37-0.32c-0.14,0-0.25,0-0.38,0c-0.86,0-1.71,0.14-2.54,0.34c-0.18,0.05-0.34,0.09-0.52,0.16c-0.86,0.25-1.69,0.57-2.47,1.01l-24.43,14.11c-9.42,5.44-21.15,5.44-30.58,0c-9.42-5.44-15.3-15.59-15.3-26.5v-28.22c0-0.92-0.14-1.8-0.36-2.66c-0.05-0.18-0.09-0.36-0.14-0.54c-0.25-0.83-0.57-1.62-0.99-2.37c-0.06-0.11-0.14-0.2-0.2-0.32c-0.4-0.68-0.9-1.31-1.44-1.89c-0.09-0.09-0.18-0.2-0.27-0.29c-0.6-0.6-1.31-1.12-2.07-1.6c-0.06-0.05-0.11-0.11-0.2-0.16l-48.87-28.21c-3.51-2.03-7.81-2.03-11.32,0L59.28,290.6c-3.51,2.03-5.67,5.76-5.67,9.81v56.43c0,4.05,2.16,7.78,5.67,9.81l48.87,28.21c0,0,0.16,0.06,0.23,0.09c0.77,0.43,1.55,0.77,2.38,0.99c0.14,0.05,0.27,0.06,0.4,0.09c0.77,0.18,1.55,0.29,2.34,0.32c0.09,0,0.18,0.05,0.29,0.05c0.05,0,0.09,0,0.14,0c0.86,0,1.69-0.14,2.52-0.34c0.18-0.05,0.36-0.09,0.54-0.16c0.86-0.25,1.69-0.57,2.47-1.01l24.43-14.11c9.42-5.44,21.15-5.44,30.59,0c9.42,5.44,15.3,15.59,15.3,26.48v28.21c0,0.92,0.14,1.8,0.36,2.66c0.05,0.18,0.09,0.36,0.14,0.54c0.25,0.83,0.57,1.62,0.99,2.37c0.06,0.11,0.14,0.2,0.2,0.32c0.4,0.68,0.9,1.31,1.44,1.89c0.09,0.09,0.18,0.2,0.27,0.29c0.61,0.61,1.31,1.12,2.07,1.6c0.06,0.05,0.11,0.11,0.2,0.16l48.87,28.21c1.75,1.01,3.71,1.51,5.67,1.51c1.96,0,3.92-0.52,5.67-1.51l48.87-28.21c3.51-2.03,5.67-5.76,5.67-9.81v-56.43c0-4.05-2.16-7.78-5.67-9.81L304.5,369.22z'
fill='#1796E2'
/>
</svg>
)
}
export function DiscordIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg
@@ -1224,6 +1264,20 @@ export function GoogleSlidesIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function GoogleContactsIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 500 500'>
<path fill='#86a9ff' d='M199 244c-89 0-161 71-161 160v67c0 16 13 29 29 29h77l77-256z' />
<path fill='#578cff' d='M462 349c0-58-48-105-106-105h-77v256h77c58 0 106-47 106-106' />
<path
fill='#0057cc'
d='M115 349c0-58 48-105 106-105h58c58 0 106 47 106 105v45c0 59-48 106-106 106H144c-16 0-29-13-29-29z'
/>
<circle cx='250' cy='99.4' r='99.4' fill='#0057cc' />
</svg>
)
}
export function GoogleCalendarIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg
@@ -1291,6 +1345,21 @@ export function GoogleCalendarIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function GoogleTasksIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 527.1 500' xmlns='http://www.w3.org/2000/svg'>
<polygon
fill='#0066DA'
points='410.4,58.3 368.8,81.2 348.2,120.6 368.8,168.8 407.8,211 450,187.5 475.9,142.8 450,87.5'
/>
<path
fill='#2684FC'
d='M249.3,219.4l98.9-98.9c29.1,22.1,50.5,53.8,59.6,90.4L272.1,346.7c-12.2,12.2-32,12.2-44.2,0l-91.5-91.5 c-9.8-9.8-9.8-25.6,0-35.3l39-39c9.8-9.8,25.6-9.8,35.3,0L249.3,219.4z M519.8,63.6l-39.7-39.7c-9.7-9.7-25.6-9.7-35.3,0 l-34.4,34.4c27.5,23,49.9,51.8,65.5,84.5l43.9-43.9C529.6,89.2,529.6,73.3,519.8,63.6z M412.5,250c0,89.8-72.8,162.5-162.5,162.5 S87.5,339.8,87.5,250S160.2,87.5,250,87.5c36.9,0,70.9,12.3,98.2,33.1l62.2-62.2C367,21.9,311.1,0,250,0C111.9,0,0,111.9,0,250 s111.9,250,250,250s250-111.9,250-250c0-38.3-8.7-74.7-24.1-107.2L407.8,211C410.8,223.5,412.5,236.6,412.5,250z'
/>
</svg>
)
}
export function SupabaseIcon(props: SVGProps<SVGSVGElement>) {
const id = useId()
const gradient0 = `supabase_paint0_${id}`
@@ -2917,6 +2986,19 @@ export function QdrantIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function AshbyIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 254 260' fill='none' xmlns='http://www.w3.org/2000/svg'>
<path
fillRule='evenodd'
clipRule='evenodd'
d='M76.07 250.537v9.16H.343v-9.16c19.618 0 27.465-4.381 34.527-23.498l73.764-209.09h34.92l81.219 209.09c7.847 19.515 11.77 23.498 28.642 23.498v9.16H134.363v-9.16c28.242 0 30.625-2.582 22.14-23.498l-21.58-57.35H69.399l-19.226 56.155c-5.614 18.997-4.387 24.693 25.896 24.693zm24.326-171.653l-26.681 78.459h56.5l-29.819-78.459z'
fill='currentColor'
/>
</svg>
)
}
export function ArxivIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} id='logomark' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 17.732 24.269'>
@@ -3419,6 +3501,23 @@ export const ResendIcon = (props: SVGProps<SVGSVGElement>) => (
</svg>
)
export const GoogleBigQueryIcon = (props: SVGProps<SVGSVGElement>) => (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 64 64'>
<path
d='M14.48 58.196L.558 34.082c-.744-1.288-.744-2.876 0-4.164L14.48 5.805c.743-1.287 2.115-2.08 3.6-2.082h27.857c1.48.007 2.845.8 3.585 2.082l13.92 24.113c.744 1.288.744 2.876 0 4.164L49.52 58.196c-.743 1.287-2.115 2.08-3.6 2.082H18.07c-1.483-.005-2.85-.798-3.593-2.082z'
fill='#4386fa'
/>
<path
d='M40.697 24.235s3.87 9.283-1.406 14.545-14.883 1.894-14.883 1.894L43.95 60.27h1.984c1.486-.002 2.858-.796 3.6-2.082L58.75 42.23z'
opacity='.1'
/>
<path
d='M45.267 43.23L41 38.953a.67.67 0 0 0-.158-.12 11.63 11.63 0 1 0-2.032 2.037.67.67 0 0 0 .113.15l4.277 4.277a.67.67 0 0 0 .947 0l1.12-1.12a.67.67 0 0 0 0-.947zM31.64 40.464a8.75 8.75 0 1 1 8.749-8.749 8.75 8.75 0 0 1-8.749 8.749zm-5.593-9.216v3.616c.557.983 1.363 1.803 2.338 2.375v-6.013zm4.375-2.998v9.772a6.45 6.45 0 0 0 2.338 0V28.25zm6.764 6.606v-2.142H34.85v4.5a6.43 6.43 0 0 0 2.338-2.368z'
fill='#fff'
/>
</svg>
)
export const GoogleVaultIcon = (props: SVGProps<SVGSVGElement>) => (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 82 82'>
<path
@@ -3541,6 +3640,15 @@ export function TrelloIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function AttioIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 60.9 50' fill='currentColor'>
<path d='M60.3,34.8l-5.1-8.1c0,0,0,0,0,0L54.7,26c-0.8-1.2-2.1-1.9-3.5-1.9L43,24L42.5,25l-9.8,15.7l-0.5,0.9l4.1,6.6c0.8,1.2,2.1,1.9,3.5,1.9h11.5c1.4,0,2.8-0.7,3.5-1.9l0.4-0.6c0,0,0,0,0,0l5.1-8.2C61.1,37.9,61.1,36.2,60.3,34.8L60.3,34.8z M58.7,38.3l-5.1,8.2c0,0,0,0.1-0.1,0.1c-0.2,0.2-0.4,0.2-0.5,0.2c-0.1,0-0.4,0-0.6-0.3l-5.1-8.2c-0.1-0.1-0.1-0.2-0.2-0.3c0-0.1-0.1-0.2-0.1-0.3c-0.1-0.4-0.1-0.8,0-1.3c0.1-0.2,0.1-0.4,0.3-0.6l5.1-8.1c0,0,0,0,0,0c0.1-0.2,0.3-0.3,0.4-0.3c0.1,0,0.1,0,0.1,0c0,0,0,0,0.1,0c0.1,0,0.4,0,0.6,0.3l5.1,8.1C59.2,36.6,59.2,37.5,58.7,38.3L58.7,38.3z' />
<path d='M45.2,15.1c0.8-1.3,0.8-3.1,0-4.4l-5.1-8.1l-0.4-0.7C38.9,0.7,37.6,0,36.2,0H24.7c-1.4,0-2.7,0.7-3.5,1.9L0.6,34.9C0.2,35.5,0,36.3,0,37c0,0.8,0.2,1.5,0.6,2.2l5.5,8.8C6.9,49.3,8.2,50,9.7,50h11.5c1.4,0,2.8-0.7,3.5-1.9l0.4-0.7c0,0,0,0,0,0c0,0,0,0,0,0l4.1-6.6l12.1-19.4L45.2,15.1L45.2,15.1z M44,13c0,0.4-0.1,0.8-0.4,1.2L23.5,46.4c-0.2,0.3-0.5,0.3-0.6,0.3c-0.1,0-0.4,0-0.6-0.3l-5.1-8.2c-0.5-0.7-0.5-1.7,0-2.4L37.4,3.6c0.2-0.3,0.5-0.3,0.6-0.3c0.1,0,0.4,0,0.6,0.3l5.1,8.1C43.9,12.1,44,12.5,44,13z' />
</svg>
)
}
export function AsanaIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 24 24' fill='none'>
@@ -3885,6 +3993,28 @@ export function IntercomIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function LoopsIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 256 256' fill='none' xmlns='http://www.w3.org/2000/svg'>
<path
fill='currentColor'
d='M192.352 88.042c0-7.012-5.685-12.697-12.697-12.697s-12.697 5.685-12.697 12.697c0 .634.052 1.255.142 1.866a25.248 25.248 0 0 0-4.9-.49c-14.006 0-25.36 11.354-25.36 25.36 0 1.63.16 3.222.456 4.765a37.8 37.8 0 0 0-9.296-1.173c-20.95 0-37.935 16.985-37.935 37.935S107.05 194.24 128 194.24s37.935-16.985 37.935-37.935a37.7 37.7 0 0 0-3.78-16.555 25.2 25.2 0 0 0 12.487-3.336 25.2 25.2 0 0 0 4.558 3.336v.02c14.006 0 25.36-11.354 25.36-25.36 0-12.48-9.018-22.855-20.888-24.996a12.6 12.6 0 0 0 8.68-11.972m-77.05 68.263c0-7.012 5.685-12.697 12.697-12.697s12.697 5.685 12.697 12.697c0 7.013-5.685 12.697-12.697 12.697s-12.697-5.685-12.697-12.697'
/>
</svg>
)
}
export function LumaIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} fill='none' viewBox='0 0 133 134' xmlns='http://www.w3.org/2000/svg'>
<path
d='M133 67C96.282 67 66.5 36.994 66.5 0c0 36.994-29.782 67-66.5 67 36.718 0 66.5 30.006 66.5 67 0-36.994 29.782-67 66.5-67'
fill='#000000'
/>
</svg>
)
}
export function MailchimpIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg
@@ -4406,6 +4536,17 @@ export function SSHIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function DatabricksIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 241 266' fill='none' xmlns='http://www.w3.org/2000/svg'>
<path
d='M228.085 109.654L120.615 171.674L5.53493 105.41L0 108.475V156.582L120.615 225.911L228.085 164.128V189.596L120.615 251.615L5.53493 185.351L0 188.417V196.67L120.615 266L241 196.67V148.564L235.465 145.498L120.615 211.527L12.9148 149.743V124.275L120.615 186.059L241 116.729V69.3298L235.004 65.7925L120.615 131.585L18.4498 73.1028L120.615 14.3848L204.562 62.7269L211.942 58.4823V52.5869L120.615 0L0 69.3298V76.8759L120.615 146.206L228.085 84.1862V109.654Z'
fill='#F9F7F4'
/>
</svg>
)
}
export function DatadogIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 64 64'>
@@ -4754,6 +4895,17 @@ export function CirclebackIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function GreenhouseIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 51.4 107.7' xmlns='http://www.w3.org/2000/svg'>
<path
d='M44.9,32c0,5.2-2.2,9.8-5.8,13.4c-4,4-9.8,5-9.8,8.4c0,4.6,7.4,3.2,14.5,10.3c4.7,4.7,7.6,10.9,7.6,18.1c0,14.2-11.4,25.5-25.7,25.5S0,96.4,0,82.2C0,75,2.9,68.8,7.6,64.1c7.1-7.1,14.5-5.7,14.5-10.3c0-3.4-5.8-4.4-9.8-8.4c-3.6-3.6-5.8-8.2-5.8-13.6C6.5,21.4,15,13,25.4,13c2,0,3.8,0.3,5.3,0.3c2.7,0,4.1-1.2,4.1-3.1c0-1.1-0.5-2.5-0.5-4c0-3.4,2.9-6.2,6.4-6.2S47,2.9,47,6.4c0,3.7-2.9,5.4-5.1,6.2c-1.8,0.6-3.2,1.4-3.2,3.2C38.7,19.2,44.9,22.5,44.9,32z M42.9,82.2c0-9.9-7.3-17.9-17.2-17.9s-17.2,8-17.2,17.9c0,9.8,7.3,17.9,17.2,17.9S42.9,92,42.9,82.2z M37,31.8c0-6.3-5.1-11.5-11.3-11.5s-11.3,5.2-11.3,11.5s5.1,11.5,11.3,11.5S37,38.1,37,31.8z'
fill='currentColor'
/>
</svg>
)
}
export function GreptileIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 24 24' xmlns='http://www.w3.org/2000/svg'>
@@ -5425,6 +5577,34 @@ export function GoogleMapsIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function GoogleTranslateIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 998.1 998.3'>
<path
fill='#DBDBDB'
d='M931.7 998.3c36.5 0 66.4-29.4 66.4-65.4V265.8c0-36-29.9-65.4-66.4-65.4H283.6l260.1 797.9h388z'
/>
<path
fill='#DCDCDC'
d='M931.7 230.4c9.7 0 18.9 3.8 25.8 10.6 6.8 6.7 10.6 15.5 10.6 24.8v667.1c0 9.3-3.7 18.1-10.6 24.8-6.9 6.8-16.1 10.6-25.8 10.6H565.5L324.9 230.4h606.8m0-30H283.6l260.1 797.9h388c36.5 0 66.4-29.4 66.4-65.4V265.8c0-36-29.9-65.4-66.4-65.4z'
/>
<polygon fill='#4352B8' points='482.3,809.8 543.7,998.3 714.4,809.8' />
<path
fill='#607988'
d='M936.1 476.1V437H747.6v-63.2h-61.2V437H566.1v39.1h239.4c-12.8 45.1-41.1 87.7-68.7 120.8-48.9-57.9-49.1-76.7-49.1-76.7h-50.8s2.1 28.2 70.7 108.6c-22.3 22.8-39.2 36.3-39.2 36.3l15.6 48.8s23.6-20.3 53.1-51.6c29.6 32.1 67.8 70.7 117.2 116.7l32.1-32.1c-52.9-48-91.7-86.1-120.2-116.7 38.2-45.2 77-102.1 85.2-154.2H936v.1z'
/>
<path
fill='#4285F4'
d='M66.4 0C29.9 0 0 29.9 0 66.5v677c0 36.5 29.9 66.4 66.4 66.4h648.1L454.4 0h-388z'
/>
<path
fill='#EEEEEE'
d='M371.4 430.6c-2.5 30.3-28.4 75.2-91.1 75.2-54.3 0-98.3-44.9-98.3-100.2s44-100.2 98.3-100.2c30.9 0 51.5 13.4 63.3 24.3l41.2-39.6c-27.1-25-62.4-40.6-104.5-40.6-86.1 0-156 69.9-156 156s69.9 156 156 156c90.2 0 149.8-63.3 149.8-152.6 0-12.8-1.6-22.2-3.7-31.8h-146v53.4l91 .1z'
/>
</svg>
)
}
export function DsPyIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='30 28 185 175' fill='none'>
@@ -5819,3 +5999,32 @@ export function RedisIcon(props: SVGProps<SVGSVGElement>) {
</svg>
)
}
export function HexIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1450.3 600'>
<path
fill='#EDB9B8'
fillRule='evenodd'
d='m250.11,0v199.49h-50V0H0v600h200.11v-300.69h50v300.69h200.18V0h-200.18Zm249.9,0v600h450.29v-250.23h-200.2v149h-50v-199.46h250.2V0h-450.29Zm200.09,199.49v-99.49h50v99.49h-50Zm550.02,0V0h200.18v150l-100,100.09,100,100.09v249.82h-200.18v-300.69h-50v300.69h-200.11v-249.82l100.11-100.09-100.11-100.09V0h200.11v199.49h50Z'
/>
</svg>
)
}
export function ShortIoIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 64 65' fill='none' xmlns='http://www.w3.org/2000/svg'>
<rect width='64' height='65' fill='#FFFFFF' />
<path
d='M41.1 45.7c0 2-.8 3.5-2.5 4.6-1.6 1-3.8 1.6-6.5 1.6-3.4 0-6-.8-8-2.3-2-1.6-3-3.6-3.2-6.1l-16.3-.4c0 4.1 1.2 7.8 3.6 11.1A24 24 0 0 0 18 62c2.2 1 4.5 1.7 7 2.2l.4.1H0V.2h24.9A25.4 25.4 0 0 0 9.3 9.5C7.1 12.5 6 15.9 6 19.7c0 4.2.9 7.6 2.6 10.1 1.7 2.5 4 4.4 6.8 5.7 2.8 1.3 6.3 2.3 10.6 3.2 4.4.9 7.5 1.6 9.5 2.2 1.9.5 3.3 1.1 4.3 1.9.8.6 1.3 1.6 1.3 2.9Z'
fill='#0BB07D'
/>
<path d='M25.3 64.2h-.6l.1-.1.5.1Z' fill='#33333D' />
<path
d='M64 64.2H38.1a28 28 0 0 0 7.1-2.2 23 23 0 0 0 9.4-7.6c2.2-3.2 3.4-6.8 3.4-10.8a17 17 0 0 0-2.6-9.8c-1.7-2.4-4-4.3-6.9-5.5a54.4 54.4 0 0 0-10.8-3.1c-4.3-.8-7.3-1.5-9.2-2.1a12 12 0 0 1-4.2-1.8c-.9-.7-1.3-1.7-1.3-3 0-1.9.7-3.3 2.2-4.3 1.5-1 3.4-1.5 5.8-1.5 2.7 0 4.9.7 6.5 2.1a7.8 7.8 0 0 1 2.7 5.4h16.4c0-3.8-1.1-7.3-3.3-10.5a23 23 0 0 0-9.1-7.4c-2.1-1-4.4-1.7-6.8-2.1H64v64.2Z'
fill='#383738'
/>
</svg>
)
}

View File

@@ -13,6 +13,8 @@ import {
ApolloIcon,
ArxivIcon,
AsanaIcon,
AshbyIcon,
AttioIcon,
BrainIcon,
BrowserUseIcon,
CalComIcon,
@@ -23,7 +25,9 @@ import {
CloudflareIcon,
ConfluenceIcon,
CursorIcon,
DatabricksIcon,
DatadogIcon,
DevinIcon,
DiscordIcon,
DocumentIcon,
DropboxIcon,
@@ -37,11 +41,15 @@ import {
EyeIcon,
FirecrawlIcon,
FirefliesIcon,
GammaIcon,
GithubIcon,
GitLabIcon,
GmailIcon,
GongIcon,
GoogleBigQueryIcon,
GoogleBooksIcon,
GoogleCalendarIcon,
GoogleContactsIcon,
GoogleDocsIcon,
GoogleDriveIcon,
GoogleFormsIcon,
@@ -50,10 +58,14 @@ import {
GoogleMapsIcon,
GoogleSheetsIcon,
GoogleSlidesIcon,
GoogleTasksIcon,
GoogleTranslateIcon,
GoogleVaultIcon,
GrafanaIcon,
GrainIcon,
GreenhouseIcon,
GreptileIcon,
HexIcon,
HubspotIcon,
HuggingFaceIcon,
HunterIOIcon,
@@ -69,6 +81,8 @@ import {
LinearIcon,
LinkedInIcon,
LinkupIcon,
LoopsIcon,
LumaIcon,
MailchimpIcon,
MailgunIcon,
MailServerIcon,
@@ -112,6 +126,7 @@ import {
ServiceNowIcon,
SftpIcon,
ShopifyIcon,
ShortIoIcon,
SimilarwebIcon,
SlackIcon,
SmtpIcon,
@@ -157,6 +172,8 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
apollo: ApolloIcon,
arxiv: ArxivIcon,
asana: AsanaIcon,
ashby: AshbyIcon,
attio: AttioIcon,
browser_use: BrowserUseIcon,
calcom: CalComIcon,
calendly: CalendlyIcon,
@@ -166,7 +183,9 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
cloudflare: CloudflareIcon,
confluence_v2: ConfluenceIcon,
cursor_v2: CursorIcon,
databricks: DatabricksIcon,
datadog: DatadogIcon,
devin: DevinIcon,
discord: DiscordIcon,
dropbox: DropboxIcon,
dspy: DsPyIcon,
@@ -179,11 +198,15 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
file_v3: DocumentIcon,
firecrawl: FirecrawlIcon,
fireflies_v2: FirefliesIcon,
gamma: GammaIcon,
github_v2: GithubIcon,
gitlab: GitLabIcon,
gmail_v2: GmailIcon,
gong: GongIcon,
google_bigquery: GoogleBigQueryIcon,
google_books: GoogleBooksIcon,
google_calendar_v2: GoogleCalendarIcon,
google_contacts: GoogleContactsIcon,
google_docs: GoogleDocsIcon,
google_drive: GoogleDriveIcon,
google_forms: GoogleFormsIcon,
@@ -192,10 +215,14 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
google_search: GoogleIcon,
google_sheets_v2: GoogleSheetsIcon,
google_slides_v2: GoogleSlidesIcon,
google_tasks: GoogleTasksIcon,
google_translate: GoogleTranslateIcon,
google_vault: GoogleVaultIcon,
grafana: GrafanaIcon,
grain: GrainIcon,
greenhouse: GreenhouseIcon,
greptile: GreptileIcon,
hex: HexIcon,
hubspot: HubspotIcon,
huggingface: HuggingFaceIcon,
hunter: HunterIOIcon,
@@ -213,6 +240,8 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
linear: LinearIcon,
linkedin: LinkedInIcon,
linkup: LinkupIcon,
loops: LoopsIcon,
luma: LumaIcon,
mailchimp: MailchimpIcon,
mailgun: MailgunIcon,
mem0: Mem0Icon,
@@ -255,6 +284,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
sftp: SftpIcon,
sharepoint: MicrosoftSharepointIcon,
shopify: ShopifyIcon,
short_io: ShortIoIcon,
similarweb: SimilarwebIcon,
slack: SlackIcon,
smtp: SmtpIcon,

View File

@@ -1,96 +0,0 @@
---
title: Environment Variables
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Image } from '@/components/ui/image'
Environment variables provide a secure way to manage configuration values and secrets in your workflows, including API keys and other sensitive data your workflows need to access. They keep secrets out of your workflow definitions and make them available during execution.
## Variable Types
Environment variables in Sim work at two levels:
- **Personal environment variables**: Private to your account; only you can see and use them
- **Workspace environment variables**: Shared across the workspace and available to all team members
<Callout type="info">
Workspace environment variables take precedence over personal variables when there is a naming conflict.
</Callout>
## Setting Up Environment Variables
Navigate to the settings to configure your environment variables:
<Image
src="/static/environment/environment-1.png"
alt="Environment variables modal for creating new variables"
width={500}
height={350}
/>
In your workspace settings you can create and manage both personal and workspace environment variables. Personal variables are private to your account, while workspace variables are shared with all team members.
### Setting Workspace-Level Variables
Use the workspace scope toggle to make variables available to your entire team:
<Image
src="/static/environment/environment-2.png"
alt="Toggling workspace scope for environment variables"
width={500}
height={350}
/>
When you enable the workspace scope, the variable becomes available to all workspace members and can be used in any workflow within that workspace.
### Viewing Workspace Variables
Once you have workspace variables, they appear in your environment variables list:
<Image
src="/static/environment/environment-3.png"
alt="Workspace variables shown in the environment variables list"
width={500}
height={350}
/>
## Using Variables in Workflows
To reference environment variables in your workflows, use the `{{}}` notation. When you type `{{` in any input field, a dropdown appears with your personal and workspace environment variables. Simply select the variable you want to use.
<Image
src="/static/environment/environment-4.png"
alt="Using environment variables with double-brace notation"
width={500}
height={350}
/>
## How Variables Are Resolved
**Workspace variables always take precedence** over personal variables, regardless of who runs the workflow.
If no workspace variable exists for a key, personal variables are used:
- **Manual runs (UI)**: Your personal variables
- **Automated runs (API, webhook, schedule, deployed chat)**: The workflow owner's personal variables
<Callout type="info">
Personal variables are best suited for testing. Use workspace variables for production workflows.
</Callout>
## Security Best Practices
### For Sensitive Data
- Store API keys, tokens, and passwords as environment variables instead of hardcoding them
- Use workspace variables for shared resources that multiple team members need
- Keep personal credentials in personal variables
### Variable Naming
- Use descriptive names: `DATABASE_URL` instead of `DB`
- Follow consistent naming conventions across your team
- Consider prefixes to avoid conflicts: `PROD_API_KEY`, `DEV_API_KEY`
### Access Control
- Workspace environment variables respect workspace permissions
- Only users with write access or higher can create/modify workspace variables
- Personal variables are always private to the individual user

View File

@@ -95,11 +95,17 @@ const apiUrl = `https://api.example.com/users/${userId}/profile`;
### Request Retries
The API block automatically handles:
- Network timeouts with exponential backoff
- Rate limit responses (429 status codes)
- Server errors (5xx status codes) with retry logic
- Connection failures with reconnection attempts
The API block supports **configurable retries** (see the block's **Advanced** settings):
- **Retries**: Number of retry attempts (additional tries after the first request)
- **Retry delay (ms)**: Initial delay before retrying (uses exponential backoff)
- **Max retry delay (ms)**: Maximum delay between retries
- **Retry non-idempotent methods**: Allow retries for **POST/PATCH** (may create duplicate requests)
Retries are attempted for:
- Network/connection failures and timeouts (with exponential backoff)
- Rate limits (**429**) and server errors (**5xx**)
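The retry timing described above can be sketched as follows. This is a minimal illustration of exponential backoff capped at a maximum delay, not Sim's actual implementation; the default values are placeholders mirroring the settings listed above.

```python
def retry_delay(attempt: int, initial_ms: int = 500, max_ms: int = 10_000) -> int:
    """Delay before retry number `attempt` (1-based): doubles each attempt
    (exponential backoff) and is capped at the configured maximum delay."""
    return min(initial_ms * 2 ** (attempt - 1), max_ms)

# With a 500 ms initial delay and a 10 s cap, successive retries wait
# 500 ms, 1000 ms, 2000 ms, ... until the cap is reached.
delays = [retry_delay(n) for n in range(1, 7)]
```

Note that "Retries" counts additional attempts after the first request, so a setting of 3 means up to 4 total requests.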
### Response Validation

View File

@@ -0,0 +1,192 @@
---
title: Credentials
description: Manage secrets, API keys, and OAuth connections for your workflows
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Image } from '@/components/ui/image'
import { Step, Steps } from 'fumadocs-ui/components/steps'
Credentials provide a secure way to manage API keys, tokens, and third-party service connections across your workflows. Instead of hardcoding sensitive values into your workflow, you store them as credentials and reference them at runtime.
Sim supports two categories of credentials: **secrets** for static values like API keys, and **OAuth accounts** for authenticated service connections like Google or Slack.
## Getting Started
To manage credentials, open your workspace **Settings** and navigate to the **Secrets** tab.
<Image
src="/static/credentials/settings-secrets.png"
alt="Settings modal showing the Secrets tab with a list of saved credentials"
width={700}
height={200}
/>
From here you can search, create, and delete both secrets and OAuth connections.
## Secrets
Secrets are key-value pairs that store sensitive data like API keys, tokens, and passwords. Each secret has a **key** (used to reference it in workflows) and a **value** (the actual secret).
### Creating a Secret
<Image
src="/static/credentials/create-secret.png"
alt="Create Secret dialog with fields for key, value, description, and scope toggle"
width={500}
height={400}
/>
<Steps>
<Step>
Click **+ Add** and select **Secret** as the type
</Step>
<Step>
Enter a **Key** name (letters, numbers, and underscores only, e.g. `OPENAI_API_KEY`)
</Step>
<Step>
Enter the **Value**
</Step>
<Step>
Optionally add a **Description** to help your team understand what the secret is for
</Step>
<Step>
Choose the **Scope** — Workspace or Personal
</Step>
<Step>
Click **Create**
</Step>
</Steps>
### Using Secrets in Workflows
To reference a secret in any input field, type `{{` to open the dropdown. It will show your available secrets grouped by scope.
<Image
src="/static/credentials/secret-dropdown.png"
alt="Typing {{ in a code block opens a dropdown showing available workspace secrets"
width={400}
height={250}
/>
Select the secret you want to use. The reference will appear highlighted in blue, indicating it will be resolved at runtime.
<Image
src="/static/credentials/secret-resolved.png"
alt="A resolved secret reference shown in blue text as {{OPENAI_API_KEY}}"
width={400}
height={200}
/>
<Callout type="warn">
Secret values are never exposed in the workflow editor or logs. They are only resolved during execution.
</Callout>
### Bulk Import
You can import multiple secrets at once by pasting `.env`-style content:
1. Click **+ Add**, then switch to **Bulk** mode
2. Paste your environment variables in `KEY=VALUE` format
3. Choose the scope for all imported secrets
4. Click **Create**
The parser supports standard `KEY=VALUE` pairs, quoted values, comments (`#`), and blank lines.
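A minimal parser for this format might look like the following sketch. It is an illustration of the rules just listed (`KEY=VALUE` pairs, quoted values, `#` comments, blank lines), not Sim's actual parser.

```python
def parse_env(text: str) -> dict[str, str]:
    """Parse .env-style content: KEY=VALUE pairs, optional surrounding
    quotes, '#' comment lines, and blank lines."""
    result: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        value = value.strip()
        # strip matching surrounding quotes, if any
        if len(value) >= 2 and value[0] == value[-1] and value[0] in "'\"":
            value = value[1:-1]
        result[key.strip()] = value
    return result

secrets = parse_env('# prod keys\nOPENAI_API_KEY=sk-abc123\nGREETING="hello world"\n')
```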
## OAuth Accounts
OAuth accounts are authenticated connections to third-party services like Google, Slack, GitHub, and more. Sim handles the OAuth flow, token storage, and automatic refresh.
You can connect **multiple accounts per provider** — for example, two separate Gmail accounts for different workflows.
### Connecting an OAuth Account
<Image
src="/static/credentials/create-oauth.png"
alt="Create Secret dialog with OAuth Account type selected, showing display name and provider dropdown"
width={500}
height={400}
/>
<Steps>
<Step>
Click **+ Add** and select **OAuth Account** as the type
</Step>
<Step>
Enter a **Display name** to identify this connection (e.g. "Work Gmail" or "Marketing Slack")
</Step>
<Step>
Optionally add a **Description**
</Step>
<Step>
Select the **Account** provider from the dropdown
</Step>
<Step>
Click **Connect** and complete the authorization flow
</Step>
</Steps>
### Using OAuth Accounts in Workflows
Blocks that require authentication (e.g. Gmail, Slack, Google Sheets) display a credential selector dropdown. Select the OAuth account you want the block to use.
<Image
src="/static/credentials/oauth-selector.png"
alt="Gmail block showing the account selector dropdown with a connected account and option to connect another"
width={500}
height={350}
/>
You can also connect additional accounts directly from the block by selecting **Connect another account** at the bottom of the dropdown.
<Callout type="info">
If a block requires an OAuth connection and none is selected, the workflow will fail at that step.
</Callout>
## Workspace vs. Personal
Credentials can be scoped to your **workspace** (shared with your team) or kept **personal** (private to you).
| | Workspace | Personal |
|---|---|---|
| **Visibility** | All workspace members | Only you |
| **Use in workflows** | Any member can use | Only you can use |
| **Best for** | Production workflows, shared services | Testing, personal API keys |
| **Who can edit** | Workspace admins | Only you |
| **Auto-shared** | Yes — all members get access on creation | No — only you have access |
<Callout type="info">
When a workspace and personal secret share the same key name, the **workspace secret takes precedence**.
</Callout>
### Resolution Order
When a workflow runs, Sim resolves secrets in this order:
1. **Workspace secrets** are checked first
2. **Personal secrets** are used as a fallback — from the user who triggered the run (manual) or the workflow owner (automated runs via API, webhook, or schedule)
## Access Control
Each credential has role-based access control:
- **Admin** — can view, edit, delete, and manage who has access
- **Member** — can use the credential in workflows (read-only)
When you create a workspace secret, all current workspace members are automatically granted access. Personal secrets are only accessible to you by default.
### Sharing a Credential
To share a credential with specific team members:
1. Click **Details** on the credential
2. Invite members by email
3. Assign them an **Admin** or **Member** role
## Best Practices
- **Use workspace credentials for production** so workflows work regardless of who triggers them
- **Use personal credentials for development** to keep your test keys separate
- **Name keys descriptively** — `STRIPE_SECRET_KEY` over `KEY1`
- **Connect multiple OAuth accounts** when you need different permissions or identities per workflow
- **Never hardcode secrets** in workflow input fields — always use `{{KEY}}` references

View File

@@ -97,6 +97,7 @@ Understanding these core principles will help you build better workflows:
3. **Smart Data Flow**: Outputs flow automatically to connected blocks
4. **Error Handling**: Failed blocks stop their execution path but don't affect independent paths
5. **State Persistence**: All block outputs and execution details are preserved for debugging
6. **Cycle Protection**: Workflows that call other workflows (via Workflow blocks, MCP tools, or API blocks) are tracked with a call chain. If the chain exceeds 25 hops, execution is stopped to prevent infinite loops
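Cycle protection (principle 6) can be sketched as a depth check on the call chain. This is an illustrative sketch assuming the chain is a simple list of workflow IDs, not Sim's actual executor:

```python
MAX_CALL_CHAIN = 25  # hop limit from the principle above

def enter_workflow(call_chain: list[str], workflow_id: str) -> list[str]:
    """Extend the call chain by one hop, refusing once the limit is reached."""
    if len(call_chain) >= MAX_CALL_CHAIN:
        raise RuntimeError("call chain exceeded the hop limit; stopping to prevent an infinite loop")
    return call_chain + [workflow_id]

chain = enter_workflow([], "wf-root")      # first hop
chain = enter_workflow(chain, "wf-child")  # second hop
```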
## Next Steps

View File

@@ -13,6 +13,7 @@
"skills",
"knowledgebase",
"variables",
"credentials",
"execution",
"permissions",
"sdks",

View File

@@ -0,0 +1,473 @@
---
title: Ashby
description: Manage candidates, jobs, and applications in Ashby
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="ashby"
color="#5D4ED6"
/>
{/* MANUAL-CONTENT-START:intro */}
[Ashby](https://www.ashbyhq.com/) is an all-in-one recruiting platform that combines an applicant tracking system (ATS), CRM, scheduling, and analytics to help teams hire more effectively.
With Ashby, you can:
- **List and search candidates**: Browse your full candidate pipeline or search by name and email to quickly find specific people
- **Create candidates**: Add new candidates to your Ashby organization with contact details
- **View candidate details**: Retrieve full candidate profiles including tags, email, phone, and timestamps
- **Add notes to candidates**: Attach notes to candidate records to capture feedback, context, or follow-up items
- **List and view jobs**: Browse all open, closed, and archived job postings with location and department info
- **List applications**: View all applications across your organization with candidate and job details, status tracking, and pagination
In Sim, the Ashby integration enables your agents to programmatically manage your recruiting pipeline. Agents can search for candidates, create new candidate records, add notes after interviews, and monitor applications across jobs. This allows you to automate recruiting workflows like candidate intake, interview follow-ups, pipeline reporting, and cross-referencing candidates across roles.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Integrate Ashby into your workflow to list, search, create, and update candidates; list and get job details; create and list notes; list, get, and create applications; and list offers.
## Tools
### `ashby_create_application`
Creates a new application for a candidate on a job. Optionally specify interview plan, stage, source, and credited user.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `candidateId` | string | Yes | The UUID of the candidate to consider for the job |
| `jobId` | string | Yes | The UUID of the job to consider the candidate for |
| `interviewPlanId` | string | No | UUID of the interview plan to use \(defaults to the job default plan\) |
| `interviewStageId` | string | No | UUID of the interview stage to place the application in \(defaults to first Lead stage\) |
| `sourceId` | string | No | UUID of the source to set on the application |
| `creditedToUserId` | string | No | UUID of the user the application is credited to |
| `createdAt` | string | No | ISO 8601 timestamp to set as the application creation date \(defaults to now\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Created application UUID |
| `status` | string | Application status \(Active, Hired, Archived, Lead\) |
| `candidate` | object | Associated candidate |
| ↳ `id` | string | Candidate UUID |
| ↳ `name` | string | Candidate name |
| `job` | object | Associated job |
| ↳ `id` | string | Job UUID |
| ↳ `title` | string | Job title |
| `currentInterviewStage` | object | Current interview stage |
| ↳ `id` | string | Stage UUID |
| ↳ `title` | string | Stage title |
| ↳ `type` | string | Stage type |
| `source` | object | Application source |
| ↳ `id` | string | Source UUID |
| ↳ `title` | string | Source title |
| `createdAt` | string | ISO 8601 creation timestamp |
| `updatedAt` | string | ISO 8601 last update timestamp |
### `ashby_create_candidate`
Creates a new candidate record in Ashby.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `name` | string | Yes | The candidate's full name |
| `email` | string | No | Primary email address for the candidate |
| `emailType` | string | No | Email address type: Personal, Work, or Other \(default Work\) |
| `phoneNumber` | string | No | Primary phone number for the candidate |
| `phoneType` | string | No | Phone number type: Personal, Work, or Other \(default Work\) |
| `linkedInUrl` | string | No | LinkedIn profile URL |
| `githubUrl` | string | No | GitHub profile URL |
| `sourceId` | string | No | UUID of the source to attribute the candidate to |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Created candidate UUID |
| `name` | string | Full name |
| `primaryEmailAddress` | object | Primary email contact info |
| ↳ `value` | string | Email address |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary email |
| `primaryPhoneNumber` | object | Primary phone contact info |
| ↳ `value` | string | Phone number |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary phone |
| `createdAt` | string | ISO 8601 creation timestamp |
### `ashby_create_note`
Creates a note on a candidate in Ashby. Supports plain text and HTML content (bold, italic, underline, links, lists, code).
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `candidateId` | string | Yes | The UUID of the candidate to add the note to |
| `note` | string | Yes | The note content. If noteType is text/html, supports: &lt;b&gt;, &lt;i&gt;, &lt;u&gt;, &lt;a&gt;, &lt;ul&gt;, &lt;ol&gt;, &lt;li&gt;, &lt;code&gt;, &lt;pre&gt; |
| `noteType` | string | No | Content type of the note: text/plain \(default\) or text/html |
| `sendNotifications` | boolean | No | Whether to send notifications to subscribed users \(default false\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Created note UUID |
| `content` | string | Note content as stored |
| `author` | object | Note author |
| ↳ `id` | string | Author user UUID |
| ↳ `firstName` | string | First name |
| ↳ `lastName` | string | Last name |
| ↳ `email` | string | Email address |
| `createdAt` | string | ISO 8601 creation timestamp |
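When `noteType` is `text/html`, the note body must use only the tags listed above, and any user-supplied text should be escaped first. The sketch below composes such a body; `escapeHtml` and `buildInterviewNote` are illustrative helpers, not part of the tool:

```typescript
// Hedged sketch: compose a text/html note body for ashby_create_note
// using only the documented tags (<b>, <i>, <u>, <a>, <ul>, <ol>,
// <li>, <code>, <pre>). Helper names here are assumptions.

function escapeHtml(s: string): string {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

function buildInterviewNote(summary: string, strengths: string[]): string {
  const items = strengths
    .map((s) => `<li>${escapeHtml(s)}</li>`)
    .join("");
  return `<b>Interview summary</b> ${escapeHtml(summary)}<ul>${items}</ul>`;
}

const note = buildInterviewNote("Strong systems background", [
  "Clear communication",
  "Deep Go & Rust experience", // "&" must arrive as &amp;
]);
console.log(note);
```

The resulting string would be passed as `note` with `noteType` set to `text/html`.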
### `ashby_get_application`
Retrieves full details about a single application by its ID.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `applicationId` | string | Yes | The UUID of the application to fetch |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Application UUID |
| `status` | string | Application status \(Active, Hired, Archived, Lead\) |
| `candidate` | object | Associated candidate |
| ↳ `id` | string | Candidate UUID |
| ↳ `name` | string | Candidate name |
| `job` | object | Associated job |
| ↳ `id` | string | Job UUID |
| ↳ `title` | string | Job title |
| `currentInterviewStage` | object | Current interview stage |
| ↳ `id` | string | Stage UUID |
| ↳ `title` | string | Stage title |
| ↳ `type` | string | Stage type |
| `source` | object | Application source |
| ↳ `id` | string | Source UUID |
| ↳ `title` | string | Source title |
| `archiveReason` | object | Reason for archival |
| ↳ `id` | string | Reason UUID |
| ↳ `text` | string | Reason text |
| ↳ `reasonType` | string | Reason type |
| `archivedAt` | string | ISO 8601 archive timestamp |
| `createdAt` | string | ISO 8601 creation timestamp |
| `updatedAt` | string | ISO 8601 last update timestamp |
### `ashby_get_candidate`
Retrieves full details about a single candidate by their ID.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `candidateId` | string | Yes | The UUID of the candidate to fetch |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Candidate UUID |
| `name` | string | Full name |
| `primaryEmailAddress` | object | Primary email contact info |
| ↳ `value` | string | Email address |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary email |
| `primaryPhoneNumber` | object | Primary phone contact info |
| ↳ `value` | string | Phone number |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary phone |
| `profileUrl` | string | URL to the candidate's Ashby profile |
| `position` | string | Current position or title |
| `company` | string | Current company |
| `linkedInUrl` | string | LinkedIn profile URL |
| `githubUrl` | string | GitHub profile URL |
| `tags` | array | Tags applied to the candidate |
| ↳ `id` | string | Tag UUID |
| ↳ `title` | string | Tag title |
| `applicationIds` | array | IDs of associated applications |
| `createdAt` | string | ISO 8601 creation timestamp |
| `updatedAt` | string | ISO 8601 last update timestamp |
### `ashby_get_job`
Retrieves full details about a single job by its ID.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `jobId` | string | Yes | The UUID of the job to fetch |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Job UUID |
| `title` | string | Job title |
| `status` | string | Job status \(Open, Closed, Draft, Archived, On Hold\) |
| `employmentType` | string | Employment type \(FullTime, PartTime, Intern, Contract, Temporary\) |
| `departmentId` | string | Department UUID |
| `locationId` | string | Location UUID |
| `descriptionPlain` | string | Job description in plain text |
| `isArchived` | boolean | Whether the job is archived |
| `createdAt` | string | ISO 8601 creation timestamp |
| `updatedAt` | string | ISO 8601 last update timestamp |
### `ashby_list_applications`
Lists all applications in an Ashby organization with pagination and optional filters for status, job, candidate, and creation date.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `cursor` | string | No | Opaque pagination cursor from a previous response nextCursor value |
| `perPage` | number | No | Number of results per page \(default 100\) |
| `status` | string | No | Filter by application status: Active, Hired, Archived, or Lead |
| `jobId` | string | No | Filter applications by a specific job UUID |
| `candidateId` | string | No | Filter applications by a specific candidate UUID |
| `createdAfter` | string | No | Filter to applications created after this ISO 8601 timestamp \(e.g. 2024-01-01T00:00:00Z\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `applications` | array | List of applications |
| ↳ `id` | string | Application UUID |
| ↳ `status` | string | Application status \(Active, Hired, Archived, Lead\) |
| ↳ `candidate` | object | Associated candidate |
| ↳ `id` | string | Candidate UUID |
| ↳ `name` | string | Candidate name |
| ↳ `job` | object | Associated job |
| ↳ `id` | string | Job UUID |
| ↳ `title` | string | Job title |
| ↳ `currentInterviewStage` | object | Current interview stage |
| ↳ `id` | string | Stage UUID |
| ↳ `title` | string | Stage title |
| ↳ `type` | string | Stage type |
| ↳ `source` | object | Application source |
| ↳ `id` | string | Source UUID |
| ↳ `title` | string | Source title |
| ↳ `createdAt` | string | ISO 8601 creation timestamp |
| ↳ `updatedAt` | string | ISO 8601 last update timestamp |
| `moreDataAvailable` | boolean | Whether more pages of results exist |
| `nextCursor` | string | Opaque cursor for fetching the next page |
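All of the `ashby_list_*` tools share this cursor-pagination contract: keep requesting pages while `moreDataAvailable` is true, passing back `nextCursor` each time. A minimal driver, shown here against an in-memory stand-in for the tool call (the `fetchPage` callback and `Page` type are assumptions, not part of the tool API):

```typescript
// Hedged sketch: drain a cursor-paginated Ashby list tool.
// `Page` mirrors the documented output shape (moreDataAvailable, nextCursor).
type Page<T> = { items: T[]; moreDataAvailable: boolean; nextCursor?: string };

async function drain<T>(
  fetchPage: (cursor?: string) => Promise<Page<T>>
): Promise<T[]> {
  const all: T[] = [];
  let cursor: string | undefined;
  do {
    const page = await fetchPage(cursor);
    all.push(...page.items);
    // Only follow nextCursor while the API reports more data.
    cursor = page.moreDataAvailable ? page.nextCursor : undefined;
  } while (cursor);
  return all;
}

// Demo: 250 fake application IDs, paged 100 at a time like the
// tool's default perPage.
const ids = Array.from({ length: 250 }, (_, i) => `app-${i}`);
async function fakePage(cursor?: string): Promise<Page<string>> {
  const start = cursor ? Number(cursor) : 0;
  const next = start + 100;
  return {
    items: ids.slice(start, next),
    moreDataAvailable: next < ids.length,
    nextCursor: next < ids.length ? String(next) : undefined,
  };
}

drain(fakePage).then((all) => console.log(all.length)); // → 250
```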
### `ashby_list_candidates`
Lists all candidates in an Ashby organization with cursor-based pagination.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `cursor` | string | No | Opaque pagination cursor from a previous response nextCursor value |
| `perPage` | number | No | Number of results per page \(default 100\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `candidates` | array | List of candidates |
| ↳ `id` | string | Candidate UUID |
| ↳ `name` | string | Full name |
| ↳ `primaryEmailAddress` | object | Primary email contact info |
| ↳ `value` | string | Email address |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary email |
| ↳ `primaryPhoneNumber` | object | Primary phone contact info |
| ↳ `value` | string | Phone number |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary phone |
| ↳ `createdAt` | string | ISO 8601 creation timestamp |
| ↳ `updatedAt` | string | ISO 8601 last update timestamp |
| `moreDataAvailable` | boolean | Whether more pages of results exist |
| `nextCursor` | string | Opaque cursor for fetching the next page |
### `ashby_list_jobs`
Lists all jobs in an Ashby organization. By default returns Open, Closed, and Archived jobs. Specify status to filter.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `cursor` | string | No | Opaque pagination cursor from a previous response nextCursor value |
| `perPage` | number | No | Number of results per page \(default 100\) |
| `status` | string | No | Filter by job status: Open, Closed, Archived, or Draft |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `jobs` | array | List of jobs |
| ↳ `id` | string | Job UUID |
| ↳ `title` | string | Job title |
| ↳ `status` | string | Job status \(Open, Closed, Archived, Draft\) |
| ↳ `employmentType` | string | Employment type \(FullTime, PartTime, Intern, Contract, Temporary\) |
| ↳ `departmentId` | string | Department UUID |
| ↳ `locationId` | string | Location UUID |
| ↳ `createdAt` | string | ISO 8601 creation timestamp |
| ↳ `updatedAt` | string | ISO 8601 last update timestamp |
| `moreDataAvailable` | boolean | Whether more pages of results exist |
| `nextCursor` | string | Opaque cursor for fetching the next page |
### `ashby_list_notes`
Lists all notes on a candidate with pagination support.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `candidateId` | string | Yes | The UUID of the candidate to list notes for |
| `cursor` | string | No | Opaque pagination cursor from a previous response nextCursor value |
| `perPage` | number | No | Number of results per page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `notes` | array | List of notes on the candidate |
| ↳ `id` | string | Note UUID |
| ↳ `content` | string | Note content |
| ↳ `author` | object | Note author |
| ↳ `id` | string | Author user UUID |
| ↳ `firstName` | string | First name |
| ↳ `lastName` | string | Last name |
| ↳ `email` | string | Email address |
| ↳ `createdAt` | string | ISO 8601 creation timestamp |
| `moreDataAvailable` | boolean | Whether more pages of results exist |
| `nextCursor` | string | Opaque cursor for fetching the next page |
### `ashby_list_offers`
Lists all offers with their latest version in an Ashby organization.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `cursor` | string | No | Opaque pagination cursor from a previous response nextCursor value |
| `perPage` | number | No | Number of results per page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `offers` | array | List of offers |
| ↳ `id` | string | Offer UUID |
| ↳ `status` | string | Offer status |
| ↳ `candidate` | object | Associated candidate |
| ↳ `id` | string | Candidate UUID |
| ↳ `name` | string | Candidate name |
| ↳ `job` | object | Associated job |
| ↳ `id` | string | Job UUID |
| ↳ `title` | string | Job title |
| ↳ `createdAt` | string | ISO 8601 creation timestamp |
| ↳ `updatedAt` | string | ISO 8601 last update timestamp |
| `moreDataAvailable` | boolean | Whether more pages of results exist |
| `nextCursor` | string | Opaque cursor for fetching the next page |
### `ashby_search_candidates`
Searches for candidates by name and/or email with AND logic. Results are limited to 100 matches; use `ashby_list_candidates` for full pagination.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `name` | string | No | Candidate name to search for \(combined with email using AND logic\) |
| `email` | string | No | Candidate email to search for \(combined with name using AND logic\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `candidates` | array | Matching candidates \(max 100 results\) |
| ↳ `id` | string | Candidate UUID |
| ↳ `name` | string | Full name |
| ↳ `primaryEmailAddress` | object | Primary email contact info |
| ↳ `value` | string | Email address |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary email |
| ↳ `primaryPhoneNumber` | object | Primary phone contact info |
| ↳ `value` | string | Phone number |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary phone |
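A common intake pattern is to search by email first and create a candidate only when nothing matches, avoiding duplicates. The sketch below assumes a hypothetical `runTool` invoker (not part of the documented API) and exercises it against an in-memory mock:

```typescript
// Hedged sketch: create-if-missing intake using ashby_search_candidates
// (AND logic, max 100 results) followed by ashby_create_candidate.
// `runTool` is an assumed stand-in for invoking a tool by name.
type Candidate = { id: string; name: string };

async function upsertCandidate(
  runTool: (tool: string, input: Record<string, unknown>) => Promise<any>,
  name: string,
  email: string
): Promise<Candidate> {
  const found = await runTool("ashby_search_candidates", { email });
  if (found.candidates.length > 0) return found.candidates[0];
  return runTool("ashby_create_candidate", { name, email });
}

// In-memory mock of the two tools for demonstration.
const db: Candidate[] = [{ id: "c-1", name: "Ada" }];
async function mockRun(tool: string, input: Record<string, unknown>) {
  if (tool === "ashby_search_candidates") {
    return { candidates: input.email === "ada@example.com" ? db : [] };
  }
  const c = { id: `c-${db.length + 1}`, name: String(input.name) };
  db.push(c);
  return c;
}

upsertCandidate(mockRun, "Ada", "ada@example.com").then((c) =>
  console.log(c.id) // → "c-1" (existing candidate; nothing created)
);
```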
### `ashby_update_candidate`
Updates an existing candidate record in Ashby. Only provided fields are changed.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `candidateId` | string | Yes | The UUID of the candidate to update |
| `name` | string | No | Updated full name |
| `email` | string | No | Updated primary email address |
| `emailType` | string | No | Email address type: Personal, Work, or Other \(default Work\) |
| `phoneNumber` | string | No | Updated primary phone number |
| `phoneType` | string | No | Phone number type: Personal, Work, or Other \(default Work\) |
| `linkedInUrl` | string | No | LinkedIn profile URL |
| `githubUrl` | string | No | GitHub profile URL |
| `websiteUrl` | string | No | Personal website URL |
| `sourceId` | string | No | UUID of the source to attribute the candidate to |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Candidate UUID |
| `name` | string | Full name |
| `primaryEmailAddress` | object | Primary email contact info |
| ↳ `value` | string | Email address |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary email |
| `primaryPhoneNumber` | object | Primary phone contact info |
| ↳ `value` | string | Phone number |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary phone |
| `profileUrl` | string | URL to the candidate's Ashby profile |
| `position` | string | Current position or title |
| `company` | string | Current company |
| `linkedInUrl` | string | LinkedIn profile URL |
| `githubUrl` | string | GitHub profile URL |
| `tags` | array | Tags applied to the candidate |
| ↳ `id` | string | Tag UUID |
| ↳ `title` | string | Tag title |
| `applicationIds` | array | IDs of associated applications |
| `createdAt` | string | ISO 8601 creation timestamp |
| `updatedAt` | string | ISO 8601 last update timestamp |

File diff suppressed because it is too large


@@ -326,6 +326,8 @@ Get details about a specific version of a Confluence page.
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `pageId` | string | ID of the page |
| `title` | string | Page title at this version |
| `content` | string | Page content with HTML tags stripped at this version |
| `version` | object | Detailed version information |
| ↳ `number` | number | Version number |
| ↳ `message` | string | Version message |
@@ -336,6 +338,9 @@ Get details about a specific version of a Confluence page.
| ↳ `collaborators` | array | List of collaborator account IDs for this version |
| ↳ `prevVersion` | number | Previous version number |
| ↳ `nextVersion` | number | Next version number |
| `body` | object | Raw page body content in storage format at this version |
| ↳ `value` | string | The content value in the specified format |
| ↳ `representation` | string | Content representation type |
### `confluence_list_page_properties`
@@ -1008,6 +1013,85 @@ Get details about a specific Confluence space.
| ↳ `value` | string | Description text content |
| ↳ `representation` | string | Content representation format \(e.g., plain, view, storage\) |
### `confluence_create_space`
Create a new Confluence space.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `name` | string | Yes | Name for the new space |
| `key` | string | Yes | Unique key for the space \(uppercase, no spaces\) |
| `description` | string | No | Description for the new space |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `spaceId` | string | Created space ID |
| `name` | string | Space name |
| `key` | string | Space key |
| `type` | string | Space type |
| `status` | string | Space status |
| `url` | string | URL to view the space |
| `homepageId` | string | Homepage ID |
| `description` | object | Space description |
| ↳ `value` | string | Description text content |
| ↳ `representation` | string | Content representation format \(e.g., plain, view, storage\) |
### `confluence_update_space`
Update a Confluence space name or description.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `spaceId` | string | Yes | ID of the space to update |
| `name` | string | No | New name for the space |
| `description` | string | No | New description for the space |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `spaceId` | string | Updated space ID |
| `name` | string | Space name |
| `key` | string | Space key |
| `type` | string | Space type |
| `status` | string | Space status |
| `url` | string | URL to view the space |
| `description` | object | Space description |
| ↳ `value` | string | Description text content |
| ↳ `representation` | string | Content representation format \(e.g., plain, view, storage\) |
### `confluence_delete_space`
Delete a Confluence space.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `spaceId` | string | Yes | ID of the space to delete |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `spaceId` | string | Deleted space ID |
| `deleted` | boolean | Deletion status |
### `confluence_list_spaces`
List all Confluence spaces accessible to the user.
@@ -1040,4 +1124,311 @@ List all Confluence spaces accessible to the user.
| ↳ `representation` | string | Content representation format \(e.g., plain, view, storage\) |
| `nextCursor` | string | Cursor for fetching the next page of results |
### `confluence_list_space_properties`
List properties on a Confluence space.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `spaceId` | string | Yes | Space ID to list properties for |
| `limit` | number | No | Maximum number of properties to return \(default: 50, max: 250\) |
| `cursor` | string | No | Pagination cursor from previous response |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `properties` | array | Array of space properties |
| ↳ `id` | string | Property ID |
| ↳ `key` | string | Property key |
| ↳ `value` | json | Property value |
| `spaceId` | string | Space ID |
| `nextCursor` | string | Cursor for fetching the next page of results |
### `confluence_create_space_property`
Create a property on a Confluence space.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `spaceId` | string | Yes | Space ID to create the property on |
| `key` | string | Yes | Property key/name |
| `value` | json | No | Property value \(JSON\) |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `propertyId` | string | Created property ID |
| `key` | string | Property key |
| `value` | json | Property value |
| `spaceId` | string | Space ID |
### `confluence_delete_space_property`
Delete a property from a Confluence space.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `spaceId` | string | Yes | Space ID the property belongs to |
| `propertyId` | string | Yes | Property ID to delete |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `spaceId` | string | Space ID |
| `propertyId` | string | Deleted property ID |
| `deleted` | boolean | Deletion status |
### `confluence_list_space_permissions`
List permissions for a Confluence space.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `spaceId` | string | Yes | Space ID to list permissions for |
| `limit` | number | No | Maximum number of permissions to return \(default: 50, max: 250\) |
| `cursor` | string | No | Pagination cursor from previous response |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `permissions` | array | Array of space permissions |
| ↳ `id` | string | Permission ID |
| ↳ `principalType` | string | Principal type \(user, group, role\) |
| ↳ `principalId` | string | Principal ID |
| ↳ `operationKey` | string | Operation key \(read, create, delete, etc.\) |
| ↳ `operationTargetType` | string | Target type \(page, blogpost, space, etc.\) |
| ↳ `anonymousAccess` | boolean | Whether anonymous access is allowed |
| ↳ `unlicensedAccess` | boolean | Whether unlicensed access is allowed |
| `spaceId` | string | Space ID |
| `nextCursor` | string | Cursor for fetching the next page of results |
### `confluence_get_page_descendants`
Get all descendants of a Confluence page recursively.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `pageId` | string | Yes | Page ID to get descendants for |
| `limit` | number | No | Maximum number of descendants to return \(default: 50, max: 250\) |
| `cursor` | string | No | Pagination cursor from previous response |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `descendants` | array | Array of descendant pages |
| ↳ `id` | string | Page ID |
| ↳ `title` | string | Page title |
| ↳ `type` | string | Content type \(page, whiteboard, database, etc.\) |
| ↳ `status` | string | Page status |
| ↳ `spaceId` | string | Space ID |
| ↳ `parentId` | string | Parent page ID |
| ↳ `childPosition` | number | Position among siblings |
| ↳ `depth` | number | Depth in the hierarchy |
| `pageId` | string | Parent page ID |
| `nextCursor` | string | Cursor for fetching the next page of results |
### `confluence_list_tasks`
List inline tasks from Confluence. Optionally filter by page, space, assignee, or status.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `pageId` | string | No | Filter tasks by page ID |
| `spaceId` | string | No | Filter tasks by space ID |
| `assignedTo` | string | No | Filter tasks by assignee account ID |
| `status` | string | No | Filter tasks by status \(complete or incomplete\) |
| `limit` | number | No | Maximum number of tasks to return \(default: 50, max: 250\) |
| `cursor` | string | No | Pagination cursor from previous response |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `tasks` | array | Array of Confluence tasks |
| ↳ `id` | string | Task ID |
| ↳ `localId` | string | Local task ID |
| ↳ `spaceId` | string | Space ID |
| ↳ `pageId` | string | Page ID |
| ↳ `blogPostId` | string | Blog post ID |
| ↳ `status` | string | Task status \(complete or incomplete\) |
| ↳ `body` | string | Task body content in storage format |
| ↳ `createdBy` | string | Creator account ID |
| ↳ `assignedTo` | string | Assignee account ID |
| ↳ `completedBy` | string | Completer account ID |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `updatedAt` | string | Last update timestamp |
| ↳ `dueAt` | string | Due date |
| ↳ `completedAt` | string | Completion timestamp |
| `nextCursor` | string | Cursor for fetching the next page of results |
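The `dueAt` and `status` fields make it straightforward to post-process a task list, for example to flag overdue work. A minimal sketch, modeling only the fields used here (the `Task` type is an assumption covering part of the output table above):

```typescript
// Hedged sketch: flag overdue tasks from a confluence_list_tasks
// response. Only the fields needed for the filter are modeled.
type Task = { id: string; status: "complete" | "incomplete"; dueAt?: string };

function overdue(tasks: Task[], now: Date): Task[] {
  return tasks.filter(
    (t) =>
      t.status === "incomplete" &&
      t.dueAt !== undefined &&
      new Date(t.dueAt).getTime() < now.getTime()
  );
}

const sample: Task[] = [
  { id: "1", status: "incomplete", dueAt: "2024-01-01T00:00:00Z" },
  { id: "2", status: "complete", dueAt: "2024-01-01T00:00:00Z" },
  { id: "3", status: "incomplete" }, // no due date, never overdue
];
console.log(overdue(sample, new Date("2024-06-01T00:00:00Z")).map((t) => t.id));
// ids: ["1"]
```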
### `confluence_get_task`
Get a specific Confluence inline task by ID.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `taskId` | string | Yes | The ID of the task to retrieve |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `id` | string | Task ID |
| `localId` | string | Local task ID |
| `spaceId` | string | Space ID |
| `pageId` | string | Page ID |
| `blogPostId` | string | Blog post ID |
| `status` | string | Task status \(complete or incomplete\) |
| `body` | string | Task body content in storage format |
| `createdBy` | string | Creator account ID |
| `assignedTo` | string | Assignee account ID |
| `completedBy` | string | Completer account ID |
| `createdAt` | string | Creation timestamp |
| `updatedAt` | string | Last update timestamp |
| `dueAt` | string | Due date |
| `completedAt` | string | Completion timestamp |
### `confluence_update_task`
Update the status of a Confluence inline task (complete or incomplete).
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `taskId` | string | Yes | The ID of the task to update |
| `status` | string | Yes | New status for the task \(complete or incomplete\) |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `id` | string | Task ID |
| `localId` | string | Local task ID |
| `spaceId` | string | Space ID |
| `pageId` | string | Page ID |
| `blogPostId` | string | Blog post ID |
| `status` | string | Updated task status |
| `body` | string | Task body content in storage format |
| `createdBy` | string | Creator account ID |
| `assignedTo` | string | Assignee account ID |
| `completedBy` | string | Completer account ID |
| `createdAt` | string | Creation timestamp |
| `updatedAt` | string | Last update timestamp |
| `dueAt` | string | Due date |
| `completedAt` | string | Completion timestamp |
### `confluence_update_blogpost`
Update an existing Confluence blog post title and/or content.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `blogPostId` | string | Yes | The ID of the blog post to update |
| `title` | string | No | New title for the blog post |
| `content` | string | No | New content for the blog post in storage format |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `blogPostId` | string | Updated blog post ID |
| `title` | string | Blog post title |
| `status` | string | Blog post status |
| `spaceId` | string | Space ID |
| `version` | json | Version information |
| `url` | string | URL to view the blog post |
### `confluence_delete_blogpost`
Delete a Confluence blog post.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `blogPostId` | string | Yes | The ID of the blog post to delete |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `blogPostId` | string | Deleted blog post ID |
| `deleted` | boolean | Deletion status |
### `confluence_get_user`
Get display name and profile info for a Confluence user by account ID.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `accountId` | string | Yes | The Atlassian account ID of the user to look up |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `accountId` | string | Atlassian account ID of the user |
| `displayName` | string | Display name of the user |
| `email` | string | Email address of the user |
| `accountType` | string | Account type \(e.g., atlassian, app, customer\) |
| `profilePicture` | string | Path to the user profile picture |
| `publicName` | string | Public name of the user |
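The `createdBy`, `assignedTo`, and `completedBy` fields on task outputs are Atlassian account IDs, which this tool resolves to profile information. A minimal sketch of resolving the people on a task to display names; `get_user` here is any callable with the input/output shape documented above, not the tool itself:

```python
def resolve_task_people(task, get_user):
    """Resolve the account-ID fields on a task to display names.

    `get_user` stands in for the confluence_get_user tool: it takes an
    accountId and returns an object with a `displayName` field.
    """
    names = {}
    for field in ("createdBy", "assignedTo", "completedBy"):
        account_id = task.get(field)
        if account_id:  # skip fields that are unset (e.g., not yet completed)
            names[field] = get_user(account_id)["displayName"]
    return names

# Simulated directory and task for illustration.
directory = {"a1": {"displayName": "Ada"}, "a2": {"displayName": "Grace"}}
task = {"createdBy": "a1", "assignedTo": "a2", "completedBy": None}
names = resolve_task_people(task, lambda account_id: directory[account_id])
# names == {"createdBy": "Ada", "assignedTo": "Grace"}
```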
@@ -0,0 +1,267 @@
---
title: Databricks
description: Run SQL queries and manage jobs on Databricks
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="databricks"
color="#FF3621"
/>
{/* MANUAL-CONTENT-START:intro */}
[Databricks](https://www.databricks.com/) is a unified data analytics platform built on Apache Spark, providing a collaborative environment for data engineering, data science, and machine learning. Databricks combines data warehousing, ETL, and AI workloads into a single lakehouse architecture, with support for SQL analytics, job orchestration, and cluster management across major cloud providers.
With the Databricks integration in Sim, you can:
- **Execute SQL queries**: Run SQL statements against Databricks SQL warehouses with support for parameterized queries and Unity Catalog
- **Manage jobs**: List, trigger, and monitor Databricks job runs programmatically
- **Track run status**: Get detailed run information including timing, state, and output results
- **Control clusters**: List and inspect cluster configurations, states, and resource details
- **Retrieve run outputs**: Access notebook results, error messages, and logs from completed job runs
In Sim, the Databricks integration enables your agents to interact with your data lakehouse as part of automated workflows. Agents can query large-scale datasets, orchestrate ETL pipelines by triggering jobs, monitor job execution, and retrieve results—all without leaving the workflow canvas. This is ideal for automated reporting, data pipeline management, scheduled analytics, and building AI-driven data workflows that react to query results or job outcomes.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Connect to Databricks to execute SQL queries against SQL warehouses, trigger and monitor job runs, manage clusters, and retrieve run outputs. Requires a Personal Access Token and workspace host URL.
## Tools
### `databricks_execute_sql`
Execute a SQL statement against a Databricks SQL warehouse and return results inline. Supports parameterized queries and Unity Catalog.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `host` | string | Yes | Databricks workspace host \(e.g., dbc-abc123.cloud.databricks.com\) |
| `apiKey` | string | Yes | Databricks Personal Access Token |
| `warehouseId` | string | Yes | The ID of the SQL warehouse to execute against |
| `statement` | string | Yes | The SQL statement to execute \(max 16 MiB\) |
| `catalog` | string | No | Unity Catalog name \(equivalent to USE CATALOG\) |
| `schema` | string | No | Schema name \(equivalent to USE SCHEMA\) |
| `rowLimit` | number | No | Maximum number of rows to return |
| `waitTimeout` | string | No | How long to wait for results \(e.g., "50s"\). Range: "0s" or "5s" to "50s". Default: "50s" |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `statementId` | string | Unique identifier for the executed statement |
| `status` | string | Execution status \(SUCCEEDED, PENDING, RUNNING, FAILED, CANCELED, CLOSED\) |
| `columns` | array | Column schema of the result set |
| ↳ `name` | string | Column name |
| ↳ `position` | number | Column position \(0-based\) |
| ↳ `typeName` | string | Column type \(STRING, INT, LONG, DOUBLE, BOOLEAN, TIMESTAMP, DATE, DECIMAL, etc.\) |
| `data` | array | Result rows as a 2D array of strings where each inner array is a row of column values |
| `totalRows` | number | Total number of rows in the result |
| `truncated` | boolean | Whether the result set was truncated due to row_limit or byte_limit |
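Because `data` arrives as a 2D array of strings rather than keyed objects, downstream steps usually want to pair each row with the `columns` schema. A minimal sketch, assuming only the field names documented above:

```python
def rows_to_records(columns, data):
    """Pair each row of string values with its column name.

    `columns` follows the output schema above (objects with `name` and
    `position`); `data` is the 2D array of string rows.
    """
    ordered = sorted(columns, key=lambda c: c["position"])
    names = [c["name"] for c in ordered]
    return [dict(zip(names, row)) for row in data]

# Simulated result set for illustration.
columns = [
    {"name": "id", "position": 0, "typeName": "LONG"},
    {"name": "city", "position": 1, "typeName": "STRING"},
]
data = [["1", "Berlin"], ["2", "Osaka"]]
records = rows_to_records(columns, data)
# records[0] == {"id": "1", "city": "Berlin"}
```

Note that all values are strings regardless of `typeName`; cast numeric columns yourself if you need typed values.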
### `databricks_list_jobs`
List all jobs in a Databricks workspace with optional filtering by name.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `host` | string | Yes | Databricks workspace host \(e.g., dbc-abc123.cloud.databricks.com\) |
| `apiKey` | string | Yes | Databricks Personal Access Token |
| `limit` | number | No | Maximum number of jobs to return \(range 1-100, default 20\) |
| `offset` | number | No | Offset for pagination |
| `name` | string | No | Filter jobs by exact name \(case-insensitive\) |
| `expandTasks` | boolean | No | Include task and cluster details in the response \(max 100 elements\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `jobs` | array | List of jobs in the workspace |
| ↳ `jobId` | number | Unique job identifier |
| ↳ `name` | string | Job name |
| ↳ `createdTime` | number | Job creation timestamp \(epoch ms\) |
| ↳ `creatorUserName` | string | Email of the job creator |
| ↳ `maxConcurrentRuns` | number | Maximum number of concurrent runs |
| ↳ `format` | string | Job format \(SINGLE_TASK or MULTI_TASK\) |
| `hasMore` | boolean | Whether more jobs are available for pagination |
| `nextPageToken` | string | Token for fetching the next page of results |
### `databricks_run_job`
Trigger an existing Databricks job to run immediately with optional job-level or notebook parameters.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `host` | string | Yes | Databricks workspace host \(e.g., dbc-abc123.cloud.databricks.com\) |
| `apiKey` | string | Yes | Databricks Personal Access Token |
| `jobId` | number | Yes | The ID of the job to trigger |
| `jobParameters` | string | No | Job-level parameter overrides as a JSON object \(e.g., \{"key": "value"\}\) |
| `notebookParams` | string | No | Notebook task parameters as a JSON object \(e.g., \{"param1": "value1"\}\) |
| `idempotencyToken` | string | No | Idempotency token to prevent duplicate runs \(max 64 characters\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `runId` | number | The globally unique ID of the triggered run |
| `numberInJob` | number | The sequence number of this run among all runs of the job |
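The tool accepts `jobParameters` and `notebookParams` as JSON *strings*, while the underlying Jobs API expects objects, so they must be parsed before the request is assembled. A hedged sketch of building the request body (the `job_parameters`/`notebook_params` field names are assumptions about the Jobs 2.1 `run-now` payload, not confirmed by this document):

```python
import json

def build_run_now_body(job_id, job_parameters=None, notebook_params=None,
                       idempotency_token=None):
    """Assemble a run-now request body from the tool's string inputs.

    Optional inputs are omitted entirely rather than sent as null.
    """
    body = {"job_id": job_id}
    if job_parameters:
        body["job_parameters"] = json.loads(job_parameters)
    if notebook_params:
        body["notebook_params"] = json.loads(notebook_params)
    if idempotency_token:
        body["idempotency_token"] = idempotency_token
    return body

body = build_run_now_body(123, job_parameters='{"env": "prod"}')
# body == {"job_id": 123, "job_parameters": {"env": "prod"}}
```

Reusing the same `idempotencyToken` for retries ensures a job is triggered at most once even if the request is repeated.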
### `databricks_get_run`
Get the status, timing, and details of a Databricks job run by its run ID.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `host` | string | Yes | Databricks workspace host \(e.g., dbc-abc123.cloud.databricks.com\) |
| `apiKey` | string | Yes | Databricks Personal Access Token |
| `runId` | number | Yes | The canonical identifier of the run |
| `includeHistory` | boolean | No | Include repair history in the response |
| `includeResolvedValues` | boolean | No | Include resolved parameter values in the response |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `runId` | number | The run ID |
| `jobId` | number | The job ID this run belongs to |
| `runName` | string | Name of the run |
| `runType` | string | Type of run \(JOB_RUN, WORKFLOW_RUN, SUBMIT_RUN\) |
| `attemptNumber` | number | Retry attempt number \(0 for initial attempt\) |
| `state` | object | Run state information |
| ↳ `lifeCycleState` | string | Lifecycle state \(QUEUED, PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED, INTERNAL_ERROR, BLOCKED, WAITING_FOR_RETRY\) |
| ↳ `resultState` | string | Result state \(SUCCESS, FAILED, TIMEDOUT, CANCELED, SUCCESS_WITH_FAILURES, UPSTREAM_FAILED, UPSTREAM_CANCELED, EXCLUDED\) |
| ↳ `stateMessage` | string | Descriptive message for the current state |
| ↳ `userCancelledOrTimedout` | boolean | Whether the run was cancelled by user or timed out |
| `startTime` | number | Run start timestamp \(epoch ms\) |
| `endTime` | number | Run end timestamp \(epoch ms, 0 if still running\) |
| `setupDuration` | number | Cluster setup duration \(ms\) |
| `executionDuration` | number | Execution duration \(ms\) |
| `cleanupDuration` | number | Cleanup duration \(ms\) |
| `queueDuration` | number | Time spent in queue before execution \(ms\) |
| `runPageUrl` | string | URL to the run detail page in Databricks UI |
| `creatorUserName` | string | Email of the user who triggered the run |
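Since `databricks_run_job` returns immediately, workflows typically poll this tool until `lifeCycleState` reaches a terminal value before reading `resultState`. A minimal sketch of that loop; `get_run` stands in for the tool here (any callable returning the output shape above), so the example runs without a workspace:

```python
import time

TERMINAL = {"TERMINATED", "SKIPPED", "INTERNAL_ERROR"}

def wait_for_run(get_run, run_id, poll_seconds=0, max_polls=50):
    """Poll until the run's lifeCycleState is terminal, then return it."""
    for _ in range(max_polls):
        run = get_run(run_id)
        if run["state"]["lifeCycleState"] in TERMINAL:
            return run
        time.sleep(poll_seconds)
    raise TimeoutError(f"run {run_id} did not finish after {max_polls} polls")

# Simulated responses: RUNNING twice, then TERMINATED with SUCCESS.
states = iter(["RUNNING", "RUNNING", "TERMINATED"])
fake_get_run = lambda run_id: {
    "runId": run_id,
    "state": {"lifeCycleState": next(states), "resultState": "SUCCESS"},
}
final = wait_for_run(fake_get_run, 42)
# final["state"]["resultState"] == "SUCCESS"
```

`resultState` is only meaningful once the lifecycle state is terminal; while the run is `QUEUED` or `RUNNING` it may be absent.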
### `databricks_list_runs`
List job runs in a Databricks workspace with optional filtering by job, status, and time range.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `host` | string | Yes | Databricks workspace host \(e.g., dbc-abc123.cloud.databricks.com\) |
| `apiKey` | string | Yes | Databricks Personal Access Token |
| `jobId` | number | No | Filter runs by job ID. Omit to list runs across all jobs |
| `activeOnly` | boolean | No | Only include active runs \(PENDING, RUNNING, or TERMINATING\) |
| `completedOnly` | boolean | No | Only include completed runs |
| `limit` | number | No | Maximum number of runs to return \(range 1-24, default 20\) |
| `offset` | number | No | Offset for pagination |
| `runType` | string | No | Filter by run type \(JOB_RUN, WORKFLOW_RUN, SUBMIT_RUN\) |
| `startTimeFrom` | number | No | Filter runs started at or after this timestamp \(epoch ms\) |
| `startTimeTo` | number | No | Filter runs started at or before this timestamp \(epoch ms\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `runs` | array | List of job runs |
| ↳ `runId` | number | Unique run identifier |
| ↳ `jobId` | number | Job this run belongs to |
| ↳ `runName` | string | Run name |
| ↳ `runType` | string | Run type \(JOB_RUN, WORKFLOW_RUN, SUBMIT_RUN\) |
| ↳ `state` | object | Run state information |
| ↳ `lifeCycleState` | string | Lifecycle state \(QUEUED, PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED, INTERNAL_ERROR, BLOCKED, WAITING_FOR_RETRY\) |
| ↳ `resultState` | string | Result state \(SUCCESS, FAILED, TIMEDOUT, CANCELED, SUCCESS_WITH_FAILURES, UPSTREAM_FAILED, UPSTREAM_CANCELED, EXCLUDED\) |
| ↳ `stateMessage` | string | Descriptive state message |
| ↳ `userCancelledOrTimedout` | boolean | Whether the run was cancelled by user or timed out |
| ↳ `startTime` | number | Run start timestamp \(epoch ms\) |
| ↳ `endTime` | number | Run end timestamp \(epoch ms\) |
| `hasMore` | boolean | Whether more runs are available for pagination |
| `nextPageToken` | string | Token for fetching the next page of results |
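The `startTimeFrom`/`startTimeTo` filters take epoch milliseconds, so calendar dates need converting first. A small sketch using an aware UTC datetime:

```python
from datetime import datetime, timezone

def to_epoch_ms(dt):
    """Convert an aware datetime to the epoch-millisecond integer
    that startTimeFrom/startTimeTo expect."""
    return int(dt.timestamp() * 1000)

# Filter runs started on 2026-02-01 (UTC).
window = {
    "startTimeFrom": to_epoch_ms(datetime(2026, 2, 1, tzinfo=timezone.utc)),
    "startTimeTo": to_epoch_ms(datetime(2026, 2, 2, tzinfo=timezone.utc)),
}
```

Passing a naive datetime would silently use the local timezone, so always attach `timezone.utc` (or the intended zone) before converting.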
### `databricks_cancel_run`
Cancel a running or pending Databricks job run. Cancellation is asynchronous; poll the run status to confirm termination.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `host` | string | Yes | Databricks workspace host \(e.g., dbc-abc123.cloud.databricks.com\) |
| `apiKey` | string | Yes | Databricks Personal Access Token |
| `runId` | number | Yes | The canonical identifier of the run to cancel |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Whether the cancel request was accepted |
### `databricks_get_run_output`
Get the output of a completed Databricks job run, including notebook results, error messages, and logs. For multi-task jobs, use the task run ID (not the parent run ID).
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `host` | string | Yes | Databricks workspace host \(e.g., dbc-abc123.cloud.databricks.com\) |
| `apiKey` | string | Yes | Databricks Personal Access Token |
| `runId` | number | Yes | The run ID to get output for. For multi-task jobs, use the task run ID |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `notebookOutput` | object | Notebook task output \(from dbutils.notebook.exit\(\)\) |
| ↳ `result` | string | Value passed to dbutils.notebook.exit\(\) \(max 5 MB\) |
| ↳ `truncated` | boolean | Whether the result was truncated |
| `error` | string | Error message if the run failed or output is unavailable |
| `errorTrace` | string | Error stack trace if available |
| `logs` | string | Log output \(last 5 MB\) from spark_jar, spark_python, or python_wheel tasks |
| `logsTruncated` | boolean | Whether the log output was truncated |
### `databricks_list_clusters`
List all clusters in a Databricks workspace including their state, configuration, and resource details.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `host` | string | Yes | Databricks workspace host \(e.g., dbc-abc123.cloud.databricks.com\) |
| `apiKey` | string | Yes | Databricks Personal Access Token |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `clusters` | array | List of clusters in the workspace |
| ↳ `clusterId` | string | Unique cluster identifier |
| ↳ `clusterName` | string | Cluster display name |
| ↳ `state` | string | Current state \(PENDING, RUNNING, RESTARTING, RESIZING, TERMINATING, TERMINATED, ERROR, UNKNOWN\) |
| ↳ `stateMessage` | string | Human-readable state description |
| ↳ `creatorUserName` | string | Email of the cluster creator |
| ↳ `sparkVersion` | string | Spark runtime version \(e.g., 13.3.x-scala2.12\) |
| ↳ `nodeTypeId` | string | Worker node type identifier |
| ↳ `driverNodeTypeId` | string | Driver node type identifier |
| ↳ `numWorkers` | number | Number of worker nodes \(for fixed-size clusters\) |
| ↳ `autoscale` | object | Autoscaling configuration \(null for fixed-size clusters\) |
| ↳ `minWorkers` | number | Minimum number of workers |
| ↳ `maxWorkers` | number | Maximum number of workers |
| ↳ `clusterSource` | string | Origin \(API, UI, JOB, MODELS, PIPELINE, PIPELINE_MAINTENANCE, SQL\) |
| ↳ `autoterminationMinutes` | number | Minutes of inactivity before auto-termination \(0 = disabled\) |
| ↳ `startTime` | number | Cluster start timestamp \(epoch ms\) |


@@ -0,0 +1,157 @@
---
title: Devin
description: Autonomous AI software engineer
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="devin"
color="#12141A"
/>
{/* MANUAL-CONTENT-START:intro */}
[Devin](https://devin.ai/) is an autonomous AI software engineer by Cognition that can independently write, run, debug, and deploy code.
With Devin, you can:
- **Automate coding tasks**: Assign software engineering tasks and let Devin autonomously write, test, and iterate on code
- **Manage sessions**: Create, monitor, and interact with Devin sessions to track progress on assigned tasks
- **Guide active work**: Send messages to running sessions to provide additional context, redirect efforts, or answer questions
- **Retrieve structured output**: Poll completed sessions for pull requests, structured results, and detailed status
- **Control costs**: Set ACU (Autonomous Compute Unit) limits to cap spending on long-running tasks
- **Standardize workflows**: Use playbook IDs to apply repeatable task patterns across sessions
In Sim, the Devin integration enables your agents to programmatically manage Devin sessions as part of their workflows:
- **Create sessions**: Kick off new Devin sessions with a prompt describing the task, optional playbook, ACU limits, and tags
- **Get session details**: Retrieve the full state of a session including status, pull requests, structured output, and resource consumption
- **List sessions**: Query all sessions in your organization with optional pagination
- **Send messages**: Communicate with active or suspended sessions to provide guidance, and automatically resume suspended sessions
This allows for powerful automation scenarios such as triggering code generation from upstream events, polling for completion before consuming results, orchestrating multi-step development pipelines, and integrating Devin's output into broader agent workflows.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Integrate Devin into your workflow. Create sessions to assign coding tasks, send messages to guide active sessions, and retrieve session status and results. Devin autonomously writes, runs, and tests code.
## Tools
### `devin_create_session`
Create a new Devin session with a prompt. Devin will autonomously work on the task described in the prompt.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Devin API key \(service user credential starting with cog_\) |
| `prompt` | string | Yes | The task prompt for Devin to work on |
| `playbookId` | string | No | Optional playbook ID to guide the session |
| `maxAcuLimit` | number | No | Maximum ACU limit for the session |
| `tags` | string | No | Comma-separated tags for the session |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `sessionId` | string | Unique identifier for the session |
| `url` | string | URL to view the session in the Devin UI |
| `status` | string | Session status \(new, claimed, running, exit, error, suspended, resuming\) |
| `statusDetail` | string | Detailed status \(working, waiting_for_user, waiting_for_approval, finished, inactivity, etc.\) |
| `title` | string | Session title |
| `createdAt` | number | Unix timestamp when the session was created |
| `updatedAt` | number | Unix timestamp when the session was last updated |
| `acusConsumed` | number | ACUs consumed by the session |
| `tags` | json | Tags associated with the session |
| `pullRequests` | json | Pull requests created during the session |
| `structuredOutput` | json | Structured output from the session |
| `playbookId` | string | Associated playbook ID |
### `devin_get_session`
Retrieve details of an existing Devin session including status, tags, pull requests, and structured output.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Devin API key \(service user credential starting with cog_\) |
| `sessionId` | string | Yes | The session ID to retrieve |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `sessionId` | string | Unique identifier for the session |
| `url` | string | URL to view the session in the Devin UI |
| `status` | string | Session status \(new, claimed, running, exit, error, suspended, resuming\) |
| `statusDetail` | string | Detailed status \(working, waiting_for_user, waiting_for_approval, finished, inactivity, etc.\) |
| `title` | string | Session title |
| `createdAt` | number | Unix timestamp when the session was created |
| `updatedAt` | number | Unix timestamp when the session was last updated |
| `acusConsumed` | number | ACUs consumed by the session |
| `tags` | json | Tags associated with the session |
| `pullRequests` | json | Pull requests created during the session |
| `structuredOutput` | json | Structured output from the session |
| `playbookId` | string | Associated playbook ID |
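A common pattern is to create a session, then poll this tool until `status` is terminal before consuming `pullRequests` and `structuredOutput`. A minimal sketch, assuming from the status list above that `exit` and `error` are the terminal values; `get_session` stands in for the tool so the example runs offline:

```python
FINISHED = {"exit", "error"}

def collect_results(get_session, session_id, max_polls=120):
    """Poll a session until it finishes, then return its results.

    `get_session` stands in for devin_get_session: any callable taking a
    session ID and returning the output shape documented above.
    """
    session = None
    for _ in range(max_polls):
        session = get_session(session_id)
        if session["status"] in FINISHED:
            return session["pullRequests"], session["structuredOutput"]
    raise TimeoutError(f"session {session_id} still {session['status']}")

# Simulated session lifecycle: running twice, then exit.
statuses = iter(["running", "running", "exit"])
fake_get_session = lambda sid: {
    "sessionId": sid,
    "status": next(statuses),
    "pullRequests": [{"url": "https://example.com/pr/1"}],
    "structuredOutput": {"ok": True},
}
prs, output = collect_results(fake_get_session, "devin-123")
```

In a real workflow, add a delay between polls; `suspended` sessions are not terminal and can be revived with `devin_send_message`.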
### `devin_list_sessions`
List Devin sessions in the organization. Returns up to 100 sessions by default.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Devin API key \(service user credential starting with cog_\) |
| `limit` | number | No | Maximum number of sessions to return \(1-200, default: 100\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `sessions` | array | List of Devin sessions |
| ↳ `sessionId` | string | Unique identifier for the session |
| ↳ `url` | string | URL to view the session |
| ↳ `status` | string | Session status |
| ↳ `statusDetail` | string | Detailed status |
| ↳ `title` | string | Session title |
| ↳ `createdAt` | number | Creation timestamp \(Unix\) |
| ↳ `updatedAt` | number | Last updated timestamp \(Unix\) |
| ↳ `tags` | json | Session tags |
### `devin_send_message`
Send a message to a Devin session. If the session is suspended, it will be automatically resumed. Returns the updated session state.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Devin API key \(service user credential starting with cog_\) |
| `sessionId` | string | Yes | The session ID to send the message to |
| `message` | string | Yes | The message to send to Devin |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `sessionId` | string | Unique identifier for the session |
| `url` | string | URL to view the session in the Devin UI |
| `status` | string | Session status \(new, claimed, running, exit, error, suspended, resuming\) |
| `statusDetail` | string | Detailed status \(working, waiting_for_user, waiting_for_approval, finished, inactivity, etc.\) |
| `title` | string | Session title |
| `createdAt` | number | Unix timestamp when the session was created |
| `updatedAt` | number | Unix timestamp when the session was last updated |
| `acusConsumed` | number | ACUs consumed by the session |
| `tags` | json | Tags associated with the session |
| `pullRequests` | json | Pull requests created during the session |
| `structuredOutput` | json | Structured output from the session |
| `playbookId` | string | Associated playbook ID |


@@ -0,0 +1,165 @@
---
title: Gamma
description: Generate presentations, documents, and webpages with AI
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="gamma"
color="#002253"
/>
{/* MANUAL-CONTENT-START:intro */}
[Gamma](https://gamma.app/) is an AI-powered platform for creating presentations, documents, webpages, and social posts. Gamma's API lets you programmatically generate polished, visually rich content from text prompts, adapt existing templates, and manage workspace assets like themes and folders.
With Gamma, you can:
- **Generate presentations and documents:** Create slide decks, documents, webpages, and social posts from text input with full control over format, tone, and image sourcing.
- **Create from templates:** Adapt existing Gamma templates with custom prompts to quickly produce tailored content.
- **Check generation status:** Poll for completion of async generation jobs and retrieve the final Gamma URL.
- **Browse themes and folders:** List available workspace themes and folders to organize and style your generated content.
In Sim, the Gamma integration enables your agents to automatically generate presentations and documents, create content from templates, and manage workspace assets directly within your workflows. This allows you to automate content creation pipelines, batch-produce slide decks, and integrate AI-generated presentations into broader business automation scenarios.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Integrate Gamma into your workflow. Generate presentations, documents, webpages, and social posts from text, create content from templates, check generation status, and browse themes and folders.
## Tools
### `gamma_generate`
Generate a new Gamma presentation, document, webpage, or social post from text input.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Gamma API key |
| `inputText` | string | Yes | Text and image URLs used to generate your gamma \(1-100,000 tokens\) |
| `textMode` | string | Yes | How to handle input text: generate \(AI expands\), condense \(AI summarizes\), or preserve \(keep as-is\) |
| `format` | string | No | Output format: presentation, document, webpage, or social \(default: presentation\) |
| `themeId` | string | No | Custom Gamma workspace theme ID \(use List Themes to find available themes\) |
| `numCards` | number | No | Number of cards/slides to generate \(1-60 for Pro, 1-75 for Ultra; default: 10\) |
| `cardSplit` | string | No | How to split content into cards: auto or inputTextBreaks \(default: auto\) |
| `cardDimensions` | string | No | Card aspect ratio. Presentation: fluid, 16x9, 4x3. Document: fluid, pageless, letter, a4. Social: 1x1, 4x5, 9x16 |
| `additionalInstructions` | string | No | Additional instructions for the AI generation \(max 2000 chars\) |
| `exportAs` | string | No | Automatically export the generated gamma as pdf or pptx |
| `folderIds` | string | No | Comma-separated folder IDs to store the generated gamma in |
| `textAmount` | string | No | Amount of text per card: brief, medium, detailed, or extensive |
| `textTone` | string | No | Tone of the generated text, e.g. "professional", "casual" \(max 500 chars\) |
| `textAudience` | string | No | Target audience for the generated text, e.g. "executives", "students" \(max 500 chars\) |
| `textLanguage` | string | No | Language code for the generated text \(default: en\) |
| `imageSource` | string | No | Where to source images: aiGenerated, pictographic, unsplash, webAllImages, webFreeToUse, webFreeToUseCommercially, giphy, placeholder, or noImages |
| `imageModel` | string | No | AI image generation model to use when imageSource is aiGenerated |
| `imageStyle` | string | No | Style directive for AI-generated images, e.g. "watercolor", "photorealistic" \(max 500 chars\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `generationId` | string | The ID of the generation job. Use with Check Status to poll for completion. |
### `gamma_generate_from_template`
Generate a new Gamma by adapting an existing template with a prompt.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Gamma API key |
| `gammaId` | string | Yes | The ID of the template gamma to adapt |
| `prompt` | string | Yes | Instructions for how to adapt the template \(1-100,000 tokens\) |
| `themeId` | string | No | Custom Gamma workspace theme ID to apply |
| `exportAs` | string | No | Automatically export the generated gamma as pdf or pptx |
| `folderIds` | string | No | Comma-separated folder IDs to store the generated gamma in |
| `imageModel` | string | No | AI image generation model to use when imageSource is aiGenerated |
| `imageStyle` | string | No | Style directive for AI-generated images, e.g. "watercolor", "photorealistic" \(max 500 chars\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `generationId` | string | The ID of the generation job. Use with Check Status to poll for completion. |
### `gamma_check_status`
Check the status of a Gamma generation job. Returns the gamma URL when completed, or error details if failed.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Gamma API key |
| `generationId` | string | Yes | The generation ID returned by the Generate or Generate from Template tool |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `generationId` | string | The generation ID that was checked |
| `status` | string | Generation status: pending, completed, or failed |
| `gammaUrl` | string | URL of the generated gamma \(only present when status is completed\) |
| `credits` | object | Credit usage information \(only present when status is completed\) |
| ↳ `deducted` | number | Number of credits deducted for this generation |
| ↳ `remaining` | number | Remaining credits in the account |
| `error` | object | Error details \(only present when status is failed\) |
| ↳ `message` | string | Human-readable error message |
| ↳ `statusCode` | number | HTTP status code of the error |
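Because `gammaUrl` and `error` are only present for their respective statuses, callers should branch on `status` before reading either. A minimal sketch of interpreting one status-check result; `check_status` stands in for the tool (any callable returning the output shape above):

```python
def resolve_generation(check_status, generation_id):
    """Interpret one gamma_check_status result.

    Returns the gamma URL when completed, raises with details when
    failed, and returns None while still pending (poll again later).
    """
    result = check_status(generation_id)
    status = result["status"]
    if status == "completed":
        return result["gammaUrl"]
    if status == "failed":
        err = result["error"]
        raise RuntimeError(
            f"generation failed ({err['statusCode']}): {err['message']}")
    return None  # pending

# Simulated completed response for illustration.
done = {"generationId": "g1", "status": "completed",
        "gammaUrl": "https://gamma.app/docs/example"}
url = resolve_generation(lambda gid: done, "g1")
# url == "https://gamma.app/docs/example"
```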
### `gamma_list_themes`
List available themes in your Gamma workspace. Returns theme IDs, names, and keywords for styling.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Gamma API key |
| `query` | string | No | Search query to filter themes by name \(case-insensitive\) |
| `limit` | number | No | Maximum number of themes to return per page \(max 50\) |
| `after` | string | No | Pagination cursor from a previous response \(nextCursor\) to fetch the next page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `themes` | array | List of available themes |
| ↳ `id` | string | Theme ID \(use with themeId parameter\) |
| ↳ `name` | string | Theme display name |
| ↳ `type` | string | Theme type: standard or custom |
| ↳ `colorKeywords` | array | Color descriptors for this theme |
| ↳ `toneKeywords` | array | Tone descriptors for this theme |
| `hasMore` | boolean | Whether more results are available on the next page |
| `nextCursor` | string | Pagination cursor to pass as the after parameter for the next page |
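Pagination here is cursor-based: pass each page's `nextCursor` back as `after` until `hasMore` is false. A minimal sketch of collecting every theme; `list_themes` stands in for the tool (any callable taking an `after` cursor and returning the output shape above), so the example runs against canned pages:

```python
def all_themes(list_themes):
    """Follow hasMore/nextCursor until every page is collected."""
    themes, cursor = [], None
    while True:
        page = list_themes(after=cursor)
        themes.extend(page["themes"])
        if not page["hasMore"]:
            return themes
        cursor = page["nextCursor"]

# Two simulated pages keyed by cursor (None = first page).
pages = {
    None: {"themes": [{"id": "t1", "name": "Oasis"}],
           "hasMore": True, "nextCursor": "c1"},
    "c1": {"themes": [{"id": "t2", "name": "Night"}],
           "hasMore": False, "nextCursor": None},
}
result = all_themes(lambda after: pages[after])
# [t["id"] for t in result] == ["t1", "t2"]
```

The same loop works for `gamma_list_folders`, which exposes the identical `hasMore`/`nextCursor` pair.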
### `gamma_list_folders`
List available folders in your Gamma workspace. Returns folder IDs and names for organizing generated content.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Gamma API key |
| `query` | string | No | Search query to filter folders by name \(case-sensitive\) |
| `limit` | number | No | Maximum number of folders to return per page \(max 50\) |
| `after` | string | No | Pagination cursor from a previous response \(nextCursor\) to fetch the next page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `folders` | array | List of available folders |
| ↳ `id` | string | Folder ID \(use with folderIds parameter\) |
| ↳ `name` | string | Folder display name |
| `hasMore` | boolean | Whether more results are available on the next page |
| `nextCursor` | string | Pagination cursor to pass as the after parameter for the next page |


@@ -11,19 +11,21 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
/>
{/* MANUAL-CONTENT-START:intro */}
[Gmail](https://mail.google.com/) is one of the world's most popular and reliable email services, trusted by individuals and organizations to send, receive, and manage messages. Gmail offers a secure, intuitive interface with advanced organization and search capabilities, making it a top choice for personal and professional communication.
[Gmail](https://mail.google.com/) is one of the world's most popular email services, trusted by individuals and organizations to send, receive, and manage messages securely.
Gmail provides a comprehensive suite of features for efficient email management, message filtering, and workflow integration. With its powerful API, Gmail enables developers and platforms to automate common email-related tasks, integrate mailbox activities into broader workflows, and enhance productivity by reducing manual effort.
With the Gmail integration in Sim, you can:
Key features of Gmail include:
- **Send emails**: Compose and send emails with support for recipients, CC, BCC, subject, body, and attachments
- **Create drafts**: Save email drafts for later review and sending
- **Read emails**: Retrieve email messages by ID with full content and metadata
- **Search emails**: Find emails using Gmails powerful search query syntax
- **Move emails**: Move messages between folders or labels
- **Manage read status**: Mark emails as read or unread
- **Archive and unarchive**: Archive messages to clean up your inbox or restore them
- **Delete emails**: Remove messages from your mailbox
- **Manage labels**: Add or remove labels from emails for organization
- Email Sending and Receiving: Compose, send, and receive emails reliably and securely
- Message Search and Organization: Advanced search, labels, and filters to easily find and categorize messages
- Conversation Threading: Keeps related messages grouped together for better conversation tracking
- Attachments and Formatting: Support for file attachments, rich formatting, and embedded media
- Integration and Automation: Robust API for integrating with other tools and automating email workflows
In Sim, the Gmail integration allows your agents to interact with your emails programmatically—sending, receiving, searching, and organizing messages as part of powerful AI workflows. Agents can draft emails, trigger processes based on new email arrivals, and automate repetitive email tasks, freeing up time and reducing manual labor. By connecting Sim with Gmail, you can build intelligent agents to manage communications, automate follow-ups, and maintain organized inboxes within your workflows.
In Sim, the Gmail integration enables your agents to interact with your inbox programmatically as part of automated workflows. Agents can send notifications, search for specific emails, organize messages, and trigger actions based on email content—enabling intelligent email automation and communication workflows.
{/* MANUAL-CONTENT-END */}

---
title: Gong
description: Revenue intelligence and conversation analytics
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="gong"
color="#8039DF"
/>
{/* MANUAL-CONTENT-START:intro */}
[Gong](https://www.gong.io/) is a revenue intelligence platform that captures and analyzes customer interactions across calls, emails, and meetings. By integrating Gong with Sim, your agents can access conversation data, user analytics, coaching metrics, and more through automated workflows.
The Gong integration in Sim provides tools to:
- **List and retrieve calls:** Fetch calls by date range, get individual call details, or retrieve extensive call data including trackers, topics, interaction stats, and points of interest.
- **Access call transcripts:** Retrieve full transcripts with speaker turns, topics, and sentence-level timestamps for any recorded call.
- **Manage users:** List all Gong users in your account or retrieve detailed information for a specific user, including settings, spoken languages, and contact details.
- **Analyze activity and performance:** Pull aggregated activity statistics, interaction stats (longest monologue, interactivity, patience, question rate), and answered scorecard data for your team.
- **Work with scorecards and trackers:** List scorecard definitions and keyword tracker configurations to understand how your team's conversations are being evaluated and monitored.
- **Browse the call library:** List library folders and retrieve their contents, including call snippets and notes curated by your team.
- **Access coaching metrics:** Retrieve coaching data for managers and their direct reports to track team development.
- **List Engage flows:** Fetch sales engagement sequences (flows) with visibility and ownership details.
- **Look up contacts by email or phone:** Find all Gong references to a specific email address or phone number, including related calls, emails, meetings, CRM data, and customer engagement events.
By combining these capabilities, you can automate sales coaching workflows, extract conversation insights, monitor team performance, sync Gong data with other systems, and build intelligent pipelines around your organization's revenue conversations -- all securely using your Gong API credentials.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Integrate Gong into your workflow. Access call recordings, transcripts, user data, activity stats, scorecards, trackers, library content, coaching metrics, and more via the Gong API.
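Every Gong tool below takes the same `accessKey`/`accessKeySecret` pair. Gong's REST API authenticates with HTTP Basic auth: the key and secret are joined with `:` and base64-encoded into the `Authorization` header. A minimal sketch (the helper name is illustrative):

```typescript
// Build the Authorization header value for Gong's REST API from an
// access key and secret, using standard HTTP Basic auth encoding.
function gongAuthHeader(accessKey: string, accessKeySecret: string): string {
  const encoded = Buffer.from(`${accessKey}:${accessKeySecret}`).toString("base64");
  return `Basic ${encoded}`;
}
```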
## Tools
### `gong_list_calls`
Retrieve call data by date range from Gong.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `fromDateTime` | string | Yes | Start date/time in ISO-8601 format \(e.g., 2024-01-01T00:00:00Z\) |
| `toDateTime` | string | No | End date/time in ISO-8601 format \(e.g., 2024-01-31T23:59:59Z\). If omitted, lists calls up to the most recent. |
| `cursor` | string | No | Pagination cursor from a previous response |
| `workspaceId` | string | No | Gong workspace ID to filter calls |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `calls` | array | List of calls matching the date range |
| ↳ `id` | string | Gong's unique numeric identifier for the call |
| ↳ `title` | string | Call title |
| ↳ `scheduled` | string | Scheduled call time in ISO-8601 format |
| ↳ `started` | string | Recording start time in ISO-8601 format |
| ↳ `duration` | number | Call duration in seconds |
| ↳ `direction` | string | Call direction \(Inbound/Outbound\) |
| ↳ `system` | string | Communication platform used \(e.g., Outreach\) |
| ↳ `scope` | string | Call scope: 'Internal', 'External', or 'Unknown' |
| ↳ `media` | string | Media type \(e.g., Video\) |
| ↳ `language` | string | Language code in ISO-639-2B format |
| ↳ `url` | string | URL to the call in the Gong web app |
| ↳ `primaryUserId` | string | Host team member identifier |
| ↳ `workspaceId` | string | Workspace identifier |
| ↳ `sdrDisposition` | string | SDR disposition classification |
| ↳ `clientUniqueId` | string | Call identifier from the origin recording system |
| ↳ `customData` | string | Metadata provided during call creation |
| ↳ `purpose` | string | Call purpose |
| ↳ `meetingUrl` | string | Web conference provider URL |
| ↳ `isPrivate` | boolean | Whether the call is private |
| ↳ `calendarEventId` | string | Calendar event identifier |
| `cursor` | string | Pagination cursor for the next page |
| `totalRecords` | number | Total number of records matching the filter |
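The optional inputs above map naturally onto query parameters, with only `fromDateTime` always present. A minimal sketch of assembling such a request URL; the helper name and endpoint path reflect Gong's public API shape, not Sim's internal implementation:

```typescript
interface ListCallsParams {
  fromDateTime: string; // required, ISO-8601
  toDateTime?: string;
  cursor?: string;
  workspaceId?: string;
}

// Build a list-calls request URL, including only the parameters that are set.
function buildListCallsUrl(base: string, p: ListCallsParams): string {
  const q = new URLSearchParams({ fromDateTime: p.fromDateTime });
  if (p.toDateTime) q.set("toDateTime", p.toDateTime);
  if (p.cursor) q.set("cursor", p.cursor);
  if (p.workspaceId) q.set("workspaceId", p.workspaceId);
  return `${base}/v2/calls?${q.toString()}`;
}
```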
### `gong_get_call`
Retrieve detailed data for a specific call from Gong.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `callId` | string | Yes | The Gong call ID to retrieve |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Gong's unique numeric identifier for the call |
| `title` | string | Call title |
| `url` | string | URL to the call in the Gong web app |
| `scheduled` | string | Scheduled call time in ISO-8601 format |
| `started` | string | Recording start time in ISO-8601 format |
| `duration` | number | Call duration in seconds |
| `direction` | string | Call direction \(Inbound/Outbound\) |
| `system` | string | Communication platform used \(e.g., Outreach\) |
| `scope` | string | Call scope: 'Internal', 'External', or 'Unknown' |
| `media` | string | Media type \(e.g., Video\) |
| `language` | string | Language code in ISO-639-2B format |
| `primaryUserId` | string | Host team member identifier |
| `workspaceId` | string | Workspace identifier |
| `sdrDisposition` | string | SDR disposition classification |
| `clientUniqueId` | string | Call identifier from the origin recording system |
| `customData` | string | Metadata provided during call creation |
| `purpose` | string | Call purpose |
| `meetingUrl` | string | Web conference provider URL |
| `isPrivate` | boolean | Whether the call is private |
| `calendarEventId` | string | Calendar event identifier |
### `gong_get_call_transcript`
Retrieve transcripts of calls from Gong by call IDs or date range.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `callIds` | string | No | Comma-separated list of call IDs to retrieve transcripts for |
| `fromDateTime` | string | No | Start date/time filter in ISO-8601 format |
| `toDateTime` | string | No | End date/time filter in ISO-8601 format |
| `workspaceId` | string | No | Gong workspace ID to filter calls |
| `cursor` | string | No | Pagination cursor from a previous response |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `callTranscripts` | array | List of call transcripts with speaker turns and sentences |
| ↳ `callId` | string | Gong's unique numeric identifier for the call |
| ↳ `transcript` | array | List of monologues in the call |
| ↳ `speakerId` | string | Unique ID of the speaker, cross-reference with parties |
| ↳ `topic` | string | Name of the topic being discussed |
| ↳ `sentences` | array | List of sentences spoken in the monologue |
| ↳ `start` | number | Start time of the sentence in milliseconds from call start |
| ↳ `end` | number | End time of the sentence in milliseconds from call start |
| ↳ `text` | string | The sentence text |
| `cursor` | string | Pagination cursor for the next page |
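Each transcript is a list of monologues, each holding a `speakerId` and timestamped sentences. A minimal sketch of flattening one call's monologues into readable lines, assuming the field shapes in the table above (speaker names require cross-referencing `speakerId` with the call's parties):

```typescript
interface Sentence { start: number; end: number; text: string }
interface Monologue { speakerId: string; topic?: string; sentences: Sentence[] }

// Render monologues as "[mm:ss] speakerId: text" lines, using each
// monologue's first sentence start time (milliseconds from call start).
function renderTranscript(monologues: Monologue[]): string[] {
  const fmt = (ms: number) => {
    const s = Math.floor(ms / 1000);
    return `${String(Math.floor(s / 60)).padStart(2, "0")}:${String(s % 60).padStart(2, "0")}`;
  };
  return monologues.map((m) => {
    const text = m.sentences.map((s) => s.text).join(" ");
    const start = m.sentences[0]?.start ?? 0;
    return `[${fmt(start)}] ${m.speakerId}: ${text}`;
  });
}
```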
### `gong_get_extensive_calls`
Retrieve detailed call data including trackers, topics, and highlights from Gong.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `callIds` | string | No | Comma-separated list of call IDs to retrieve detailed data for |
| `fromDateTime` | string | No | Start date/time filter in ISO-8601 format |
| `toDateTime` | string | No | End date/time filter in ISO-8601 format |
| `workspaceId` | string | No | Gong workspace ID to filter calls |
| `primaryUserIds` | string | No | Comma-separated list of user IDs to filter calls by host |
| `cursor` | string | No | Pagination cursor from a previous response |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `calls` | array | List of detailed call objects with metadata, content, interaction stats, and collaboration data |
| ↳ `metaData` | object | Call metadata \(same fields as CallBasicData\) |
| ↳ `id` | string | Call ID |
| ↳ `title` | string | Call title |
| ↳ `scheduled` | string | Scheduled time in ISO-8601 |
| ↳ `started` | string | Start time in ISO-8601 |
| ↳ `duration` | number | Duration in seconds |
| ↳ `direction` | string | Call direction |
| ↳ `system` | string | Communication platform |
| ↳ `scope` | string | Internal/External/Unknown |
| ↳ `media` | string | Media type |
| ↳ `language` | string | Language code \(ISO-639-2B\) |
| ↳ `url` | string | Gong web app URL |
| ↳ `primaryUserId` | string | Host user ID |
| ↳ `workspaceId` | string | Workspace ID |
| ↳ `sdrDisposition` | string | SDR disposition |
| ↳ `clientUniqueId` | string | Origin system call ID |
| ↳ `customData` | string | Custom metadata |
| ↳ `purpose` | string | Call purpose |
| ↳ `meetingUrl` | string | Meeting URL |
| ↳ `isPrivate` | boolean | Whether call is private |
| ↳ `calendarEventId` | string | Calendar event ID |
| ↳ `context` | array | Links to external systems \(CRM, Dialer, etc.\) |
| ↳ `system` | string | External system name \(e.g., Salesforce\) |
| ↳ `objects` | array | List of objects within the external system |
| ↳ `parties` | array | List of call participants |
| ↳ `id` | string | Unique participant ID in the call |
| ↳ `name` | string | Participant name |
| ↳ `emailAddress` | string | Email address |
| ↳ `title` | string | Job title |
| ↳ `phoneNumber` | string | Phone number |
| ↳ `speakerId` | string | Speaker ID for transcript cross-reference |
| ↳ `userId` | string | Gong user ID |
| ↳ `affiliation` | string | Company or non-company |
| ↳ `methods` | array | Whether invited or attended |
| ↳ `context` | array | Links to external systems for this party |
| ↳ `content` | object | Call content data |
| ↳ `structure` | array | Call agenda parts |
| ↳ `name` | string | Agenda name |
| ↳ `duration` | number | Duration of this part in seconds |
| ↳ `topics` | array | Topics and their durations |
| ↳ `name` | string | Topic name \(e.g., Pricing\) |
| ↳ `duration` | number | Time spent on topic in seconds |
| ↳ `trackers` | array | Trackers found in the call |
| ↳ `id` | string | Tracker ID |
| ↳ `name` | string | Tracker name |
| ↳ `count` | number | Number of occurrences |
| ↳ `type` | string | Keyword or Smart |
| ↳ `occurrences` | array | Details for each occurrence |
| ↳ `speakerId` | string | Speaker who said it |
| ↳ `startTime` | number | Seconds from call start |
| ↳ `phrases` | array | Per-phrase occurrence counts |
| ↳ `phrase` | string | Specific phrase |
| ↳ `count` | number | Occurrences of this phrase |
| ↳ `occurrences` | array | Details per occurrence |
| ↳ `highlights` | array | AI-generated highlights including next steps, action items, and key moments |
| ↳ `title` | string | Title of the highlight |
| ↳ `interaction` | object | Interaction statistics |
| ↳ `interactionStats` | array | Interaction stats per user |
| ↳ `userId` | string | Gong user ID |
| ↳ `userEmailAddress` | string | User email |
| ↳ `personInteractionStats` | array | Stats list \(Longest Monologue, Interactivity, Patience, etc.\) |
| ↳ `name` | string | Stat name |
| ↳ `value` | number | Stat value |
| ↳ `speakers` | array | Talk duration per speaker |
| ↳ `id` | string | Participant ID |
| ↳ `userId` | string | Gong user ID |
| ↳ `talkTime` | number | Talk duration in seconds |
| ↳ `video` | array | Video statistics |
| ↳ `name` | string | Segment type: Browser, Presentation, WebcamPrimaryUser, WebcamNonCompany, Webcam |
| ↳ `duration` | number | Total segment duration in seconds |
| ↳ `questions` | object | Question counts |
| ↳ `companyCount` | number | Questions by company speakers |
| ↳ `nonCompanyCount` | number | Questions by non-company speakers |
| ↳ `collaboration` | object | Collaboration data |
| ↳ `publicComments` | array | Public comments on the call |
| ↳ `id` | string | Comment ID |
| ↳ `commenterUserId` | string | Commenter user ID |
| ↳ `comment` | string | Comment text |
| ↳ `posted` | string | Posted time in ISO-8601 |
| ↳ `audioStartTime` | number | Seconds from call start the comment refers to |
| ↳ `audioEndTime` | number | Seconds from call start the comment end refers to |
| ↳ `duringCall` | boolean | Whether the comment was posted during the call |
| ↳ `inReplyTo` | string | ID of original comment if this is a reply |
| ↳ `media` | object | Media download URLs \(available for 8 hours\) |
| ↳ `audioUrl` | string | Audio download URL |
| ↳ `videoUrl` | string | Video download URL |
| `cursor` | string | Pagination cursor for the next page |
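The `interaction.speakers` list makes talk-ratio analysis straightforward. A minimal sketch, assuming only the `id`/`talkTime` fields documented above:

```typescript
interface SpeakerTalk { id: string; userId?: string; talkTime: number }

// Compute each speaker's share of total talk time (0 to 1) from the
// interaction.speakers list of an extensive-call object.
function talkTimeShares(speakers: SpeakerTalk[]): Map<string, number> {
  const total = speakers.reduce((sum, s) => sum + s.talkTime, 0);
  const shares = new Map<string, number>();
  for (const s of speakers) {
    shares.set(s.id, total > 0 ? s.talkTime / total : 0);
  }
  return shares;
}
```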
### `gong_list_users`
List all users in your Gong account.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `cursor` | string | No | Pagination cursor from a previous response |
| `includeAvatars` | string | No | Whether to include avatar URLs \(true/false\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `users` | array | List of Gong users |
| ↳ `id` | string | Unique numeric user ID \(up to 20 digits\) |
| ↳ `emailAddress` | string | User email address |
| ↳ `created` | string | User creation timestamp \(ISO-8601\) |
| ↳ `active` | boolean | Whether the user is active |
| ↳ `emailAliases` | array | Alternative email addresses for the user |
| ↳ `trustedEmailAddress` | string | Trusted email address for the user |
| ↳ `firstName` | string | First name |
| ↳ `lastName` | string | Last name |
| ↳ `title` | string | Job title |
| ↳ `phoneNumber` | string | Phone number |
| ↳ `extension` | string | Phone extension number |
| ↳ `personalMeetingUrls` | array | Personal meeting URLs |
| ↳ `settings` | object | User settings |
| ↳ `webConferencesRecorded` | boolean | Whether web conferences are recorded |
| ↳ `preventWebConferenceRecording` | boolean | Whether web conference recording is prevented |
| ↳ `telephonyCallsImported` | boolean | Whether telephony calls are imported |
| ↳ `emailsImported` | boolean | Whether emails are imported |
| ↳ `preventEmailImport` | boolean | Whether email import is prevented |
| ↳ `nonRecordedMeetingsImported` | boolean | Whether non-recorded meetings are imported |
| ↳ `gongConnectEnabled` | boolean | Whether Gong Connect is enabled |
| ↳ `managerId` | string | Manager user ID |
| ↳ `meetingConsentPageUrl` | string | Meeting consent page URL |
| ↳ `spokenLanguages` | array | Languages spoken by the user |
| ↳ `language` | string | Language code |
| ↳ `primary` | boolean | Whether this is the primary language |
| `cursor` | string | Pagination cursor for the next page |
| `totalRecords` | number | Total number of user records |
| `currentPageSize` | number | Number of records in the current page |
| `currentPageNumber` | number | Current page number |
### `gong_get_user`
Retrieve details for a specific user from Gong.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `userId` | string | Yes | The Gong user ID to retrieve |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Unique numeric user ID \(up to 20 digits\) |
| `emailAddress` | string | User email address |
| `created` | string | User creation timestamp \(ISO-8601\) |
| `active` | boolean | Whether the user is active |
| `emailAliases` | array | Alternative email addresses for the user |
| `trustedEmailAddress` | string | Trusted email address for the user |
| `firstName` | string | First name |
| `lastName` | string | Last name |
| `title` | string | Job title |
| `phoneNumber` | string | Phone number |
| `extension` | string | Phone extension number |
| `personalMeetingUrls` | array | Personal meeting URLs |
| `settings` | object | User settings |
| ↳ `webConferencesRecorded` | boolean | Whether web conferences are recorded |
| ↳ `preventWebConferenceRecording` | boolean | Whether web conference recording is prevented |
| ↳ `telephonyCallsImported` | boolean | Whether telephony calls are imported |
| ↳ `emailsImported` | boolean | Whether emails are imported |
| ↳ `preventEmailImport` | boolean | Whether email import is prevented |
| ↳ `nonRecordedMeetingsImported` | boolean | Whether non-recorded meetings are imported |
| ↳ `gongConnectEnabled` | boolean | Whether Gong Connect is enabled |
| `managerId` | string | Manager user ID |
| `meetingConsentPageUrl` | string | Meeting consent page URL |
| `spokenLanguages` | array | Languages spoken by the user |
| ↳ `language` | string | Language code |
| ↳ `primary` | boolean | Whether this is the primary language |
### `gong_aggregate_activity`
Retrieve aggregated activity statistics for users by date range from Gong.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `userIds` | string | No | Comma-separated list of Gong user IDs \(up to 20 digits each\) |
| `fromDate` | string | Yes | Start date in YYYY-MM-DD format \(inclusive, in company timezone\) |
| `toDate` | string | Yes | End date in YYYY-MM-DD format \(exclusive, in company timezone, cannot exceed current day\) |
| `cursor` | string | No | Pagination cursor from a previous response |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `usersActivity` | array | Aggregated activity statistics per user |
| ↳ `userId` | string | Gong's unique numeric identifier for the user |
| ↳ `userEmailAddress` | string | Email address of the Gong user |
| ↳ `callsAsHost` | number | Number of recorded calls this user hosted |
| ↳ `callsAttended` | number | Number of calls where this user was a participant \(not host\) |
| ↳ `callsGaveFeedback` | number | Number of recorded calls the user gave feedback on |
| ↳ `callsReceivedFeedback` | number | Number of recorded calls the user received feedback on |
| ↳ `callsRequestedFeedback` | number | Number of recorded calls the user requested feedback on |
| ↳ `callsScorecardsFilled` | number | Number of scorecards the user completed |
| ↳ `callsScorecardsReceived` | number | Number of calls where someone filled a scorecard on the user's calls |
| ↳ `ownCallsListenedTo` | number | Number of the user's own calls the user listened to |
| ↳ `othersCallsListenedTo` | number | Number of other users' calls the user listened to |
| ↳ `callsSharedInternally` | number | Number of calls the user shared internally |
| ↳ `callsSharedExternally` | number | Number of calls the user shared externally |
| ↳ `callsCommentsGiven` | number | Number of calls where the user provided at least one comment |
| ↳ `callsCommentsReceived` | number | Number of calls where the user received at least one comment |
| ↳ `callsMarkedAsFeedbackGiven` | number | Number of calls where the user selected Mark as reviewed |
| ↳ `callsMarkedAsFeedbackReceived` | number | Number of calls where others selected Mark as reviewed on the user's calls |
| `timeZone` | string | The company's defined timezone in Gong |
| `fromDateTime` | string | Start of results in ISO-8601 format |
| `toDateTime` | string | End of results in ISO-8601 format |
| `cursor` | string | Pagination cursor for the next page |
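Note the asymmetric window semantics: `fromDate` is inclusive, `toDate` is exclusive, and `toDate` may not exceed the current day. A minimal validation sketch under those rules (the helper is illustrative; lexicographic comparison is safe for zero-padded YYYY-MM-DD strings):

```typescript
// Validate a YYYY-MM-DD activity window: fromDate inclusive, toDate
// exclusive and no later than today. Returns an error message or null.
function validateActivityWindow(fromDate: string, toDate: string, today: string): string | null {
  const re = /^\d{4}-\d{2}-\d{2}$/;
  if (!re.test(fromDate) || !re.test(toDate)) return "dates must be YYYY-MM-DD";
  if (fromDate >= toDate) return "fromDate must be before toDate";
  if (toDate > today) return "toDate cannot exceed the current day";
  return null; // valid window
}
```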
### `gong_interaction_stats`
Retrieve interaction statistics for users by date range from Gong. Only includes calls with Whisper enabled.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `userIds` | string | No | Comma-separated list of Gong user IDs \(up to 20 digits each\) |
| `fromDate` | string | Yes | Start date in YYYY-MM-DD format \(inclusive, in company timezone\) |
| `toDate` | string | Yes | End date in YYYY-MM-DD format \(exclusive, in company timezone, cannot exceed current day\) |
| `cursor` | string | No | Pagination cursor from a previous response |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `peopleInteractionStats` | array | List of interaction statistics per Gong user |
| ↳ `userId` | string | Gong's unique numeric identifier for the user |
| ↳ `userEmailAddress` | string | Email address of the Gong user |
| ↳ `personInteractionStats` | array | List of interaction stat measurements for this user |
| ↳ `name` | string | Stat name \(e.g. Longest Monologue, Interactivity, Patience, Question Rate\) |
| ↳ `value` | number | Stat measurement value \(can be double or integer\) |
| `timeZone` | string | The company's defined timezone in Gong |
| `fromDateTime` | string | Start of results in ISO-8601 format |
| `toDateTime` | string | End of results in ISO-8601 format |
| `cursor` | string | Pagination cursor for the next page |
### `gong_answered_scorecards`
Retrieve answered scorecards for reviewed users or by date range from Gong.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `callFromDate` | string | No | Start date for calls in YYYY-MM-DD format \(inclusive, in company timezone\). Defaults to earliest recorded call. |
| `callToDate` | string | No | End date for calls in YYYY-MM-DD format \(exclusive, in company timezone\). Defaults to latest recorded call. |
| `reviewFromDate` | string | No | Start date for reviews in YYYY-MM-DD format \(inclusive, in company timezone\). Defaults to earliest reviewed call. |
| `reviewToDate` | string | No | End date for reviews in YYYY-MM-DD format \(exclusive, in company timezone\). Defaults to latest reviewed call. |
| `scorecardIds` | string | No | Comma-separated list of scorecard IDs to filter by |
| `reviewedUserIds` | string | No | Comma-separated list of reviewed user IDs to filter by |
| `cursor` | string | No | Pagination cursor from a previous response |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `answeredScorecards` | array | List of answered scorecards with scores and answers |
| ↳ `answeredScorecardId` | number | Identifier of the answered scorecard |
| ↳ `scorecardId` | number | Identifier of the scorecard |
| ↳ `scorecardName` | string | Scorecard name |
| ↳ `callId` | number | Gong's unique numeric identifier for the call |
| ↳ `callStartTime` | string | Date/time of the call in ISO-8601 format |
| ↳ `reviewedUserId` | number | User ID of the team member being reviewed |
| ↳ `reviewerUserId` | number | User ID of the team member who completed the scorecard |
| ↳ `reviewTime` | string | Date/time when the review was completed in ISO-8601 format |
| ↳ `visibilityType` | string | Visibility type of the scorecard answer |
| ↳ `answers` | array | Answers in the answered scorecard |
| ↳ `questionId` | number | Identifier of the question |
| ↳ `questionRevisionId` | number | Identifier of the revision version of the question |
| ↳ `isOverall` | boolean | Whether this is the overall question |
| ↳ `score` | number | Score between 1 and 5 if answered, null otherwise |
| ↳ `answerText` | string | The answer's text if answered, null otherwise |
| ↳ `notApplicable` | boolean | Whether the question is not applicable to this call |
| `cursor` | string | Pagination cursor for the next page |
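Because `score` is null for unanswered questions and `notApplicable` can be set per question, both must be excluded when aggregating. A minimal sketch of averaging an answered scorecard under the field shapes above:

```typescript
interface ScorecardAnswer {
  questionId: number;
  isOverall: boolean;
  score: number | null; // 1 to 5 when answered, null otherwise
  notApplicable: boolean;
}

// Average the numeric scores on an answered scorecard, skipping questions
// that were left unanswered or marked not applicable.
function averageScore(answers: ScorecardAnswer[]): number | null {
  const scored = answers.filter((a) => !a.notApplicable && a.score !== null);
  if (scored.length === 0) return null;
  return scored.reduce((sum, a) => sum + (a.score as number), 0) / scored.length;
}
```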
### `gong_list_library_folders`
Retrieve library folders from Gong.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `workspaceId` | string | No | Gong workspace ID to filter folders |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `folders` | array | List of library folders with id, name, and parent relationships |
| ↳ `id` | string | Gong unique numeric identifier for the folder |
| ↳ `name` | string | Display name of the folder |
| ↳ `parentFolderId` | string | Gong unique numeric identifier for the parent folder \(null for root folder\) |
| ↳ `createdBy` | string | Gong unique numeric identifier for the user who added the folder |
| ↳ `updated` | string | Folder's last update time in ISO-8601 format |
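Since each folder carries a `parentFolderId` (null for root folders), the flat list can be regrouped into a hierarchy. A minimal sketch, assuming only the fields in the table above:

```typescript
interface LibraryFolder { id: string; name: string; parentFolderId: string | null }

// Group library folders by parent so the hierarchy can be walked starting
// from the root folders (those whose parentFolderId is null).
function childrenByParent(folders: LibraryFolder[]): Map<string | null, LibraryFolder[]> {
  const byParent = new Map<string | null, LibraryFolder[]>();
  for (const f of folders) {
    const siblings = byParent.get(f.parentFolderId) ?? [];
    siblings.push(f);
    byParent.set(f.parentFolderId, siblings);
  }
  return byParent;
}
```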
### `gong_get_folder_content`
Retrieve the list of calls in a specific library folder from Gong.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `folderId` | string | Yes | The library folder ID to retrieve content for |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `folderId` | string | Gong's unique numeric identifier for the folder |
| `folderName` | string | Display name of the folder |
| `createdBy` | string | Gong's unique numeric identifier for the user who added the folder |
| `updated` | string | Folder's last update time in ISO-8601 format |
| `calls` | array | List of calls in the library folder |
| ↳ `id` | string | Gong unique numeric identifier of the call |
| ↳ `title` | string | The title of the call |
| ↳ `note` | string | A note attached to the call in the folder |
| ↳ `addedBy` | string | Gong unique numeric identifier for the user who added the call |
| ↳ `created` | string | Date and time the call was added to folder in ISO-8601 format |
| ↳ `url` | string | URL of the call |
| ↳ `snippet` | object | Call snippet time range |
| ↳ `fromSec` | number | Snippet start in seconds relative to call start |
| ↳ `toSec` | number | Snippet end in seconds relative to call start |
### `gong_list_scorecards`
Retrieve scorecard definitions from Gong settings.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `scorecards` | array | List of scorecard definitions with questions |
| ↳ `scorecardId` | string | Unique identifier for the scorecard |
| ↳ `scorecardName` | string | Display name of the scorecard |
| ↳ `workspaceId` | string | Workspace identifier associated with this scorecard |
| ↳ `enabled` | boolean | Whether the scorecard is active |
| ↳ `updaterUserId` | string | ID of the user who last modified the scorecard |
| ↳ `created` | string | Creation timestamp in ISO-8601 format |
| ↳ `updated` | string | Last update timestamp in ISO-8601 format |
| ↳ `questions` | array | List of questions in the scorecard |
| ↳ `questionId` | string | Unique identifier for the question |
| ↳ `questionText` | string | The text content of the question |
| ↳ `questionRevisionId` | string | Identifier for the specific revision of the question |
| ↳ `isOverall` | boolean | Whether this is the primary overall question |
| ↳ `created` | string | Question creation timestamp in ISO-8601 format |
| ↳ `updated` | string | Question last update timestamp in ISO-8601 format |
| ↳ `updaterUserId` | string | ID of the user who last modified the question |
### `gong_list_trackers`
Retrieve smart tracker and keyword tracker definitions from Gong settings.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `workspaceId` | string | No | The ID of the workspace the keyword trackers are in. When empty, all trackers in all workspaces are returned. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `trackers` | array | List of keyword tracker definitions |
| ↳ `trackerId` | string | Unique identifier for the tracker |
| ↳ `trackerName` | string | Display name of the tracker |
| ↳ `workspaceId` | string | ID of the workspace containing the tracker |
| ↳ `languageKeywords` | array | Keywords organized by language |
| ↳ `language` | string | ISO 639-2/B language code \("mul" means keywords apply across all languages\) |
| ↳ `keywords` | array | Words and phrases in the designated language |
| ↳ `includeRelatedForms` | boolean | Whether to include different word forms |
| ↳ `affiliation` | string | Speaker affiliation filter: "Anyone", "Company", or "NonCompany" |
| ↳ `partOfQuestion` | boolean | Whether to track keywords only within questions |
| ↳ `saidAt` | string | Position in call: "Anytime", "First", or "Last" |
| ↳ `saidAtInterval` | number | Duration to search \(in minutes or percentage\) |
| ↳ `saidAtUnit` | string | Unit for saidAtInterval |
| ↳ `saidInTopics` | array | Topics where keywords should be detected |
| ↳ `saidInCallParts` | array | Specific call segments to monitor |
| ↳ `filterQuery` | string | JSON-formatted call filtering criteria |
| ↳ `created` | string | Creation timestamp in ISO-8601 format |
| ↳ `creatorUserId` | string | ID of the user who created the tracker \(null for built-in trackers\) |
| ↳ `updated` | string | Last modification timestamp in ISO-8601 format |
| ↳ `updaterUserId` | string | ID of the user who last modified the tracker |
### `gong_list_workspaces`
List all company workspaces in Gong.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `workspaces` | array | List of Gong workspaces |
| ↳ `id` | string | Gong unique numeric identifier for the workspace |
| ↳ `name` | string | Display name of the workspace |
| ↳ `description` | string | Description of the workspace's purpose or content |
### `gong_list_flows`
List Gong Engage flows (sales engagement sequences).
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `flowOwnerEmail` | string | Yes | Email of a Gong user. The API will return 'PERSONAL' flows belonging to this user in addition to 'COMPANY' flows. |
| `workspaceId` | string | No | Optional workspace ID to filter flows to a specific workspace |
| `cursor` | string | No | Pagination cursor from a previous API call to retrieve the next page of records |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `requestId` | string | A Gong request reference ID for troubleshooting purposes |
| `flows` | array | List of Gong Engage flows |
| ↳ `id` | string | The ID of the flow |
| ↳ `name` | string | The name of the flow |
| ↳ `folderId` | string | The ID of the folder this flow is under |
| ↳ `folderName` | string | The name of the folder this flow is under |
| ↳ `visibility` | string | The flow visibility type \(COMPANY, PERSONAL, or SHARED\) |
| ↳ `creationDate` | string | Creation time of the flow in ISO-8601 format |
| ↳ `exclusive` | boolean | Indicates whether a prospect in this flow can be added to other flows |
| `totalRecords` | number | Total number of flow records available |
| `currentPageSize` | number | Number of records returned in the current page |
| `currentPageNumber` | number | Current page number |
| `cursor` | string | Pagination cursor for retrieving the next page of records |
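The `cursor` output drives pagination: feed the cursor from one response into the next request until no cursor is returned. A minimal sketch of that loop, where `list_flows` is a hypothetical callable standing in for the `gong_list_flows` tool call:

```python
def fetch_all_flows(list_flows, flow_owner_email):
    """Collect every flow by following pagination cursors.

    `list_flows` is a hypothetical stand-in for the gong_list_flows tool;
    it is assumed to return a dict shaped like the output table above.
    """
    flows, cursor = [], None
    while True:
        page = list_flows(flowOwnerEmail=flow_owner_email, cursor=cursor)
        flows.extend(page.get("flows", []))
        cursor = page.get("cursor")
        if not cursor:  # no cursor means this was the last page
            break
    return flows
```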
### `gong_get_coaching`
Retrieve coaching metrics for a manager from Gong.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `managerId` | string | Yes | Gong user ID of the manager |
| `workspaceId` | string | Yes | Gong workspace ID |
| `fromDate` | string | Yes | Start date in ISO-8601 format |
| `toDate` | string | Yes | End date in ISO-8601 format |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `requestId` | string | A Gong request reference ID for troubleshooting purposes |
| `coachingData` | array | Coaching data entries, one per manager |
| ↳ `manager` | object | The manager user information |
| ↳ `id` | string | Gong unique numeric identifier for the user |
| ↳ `emailAddress` | string | Email address of the Gong user |
| ↳ `firstName` | string | First name of the Gong user |
| ↳ `lastName` | string | Last name of the Gong user |
| ↳ `title` | string | Job title of the Gong user |
| ↳ `directReportsMetrics` | array | Coaching metrics for each direct report |
| ↳ `report` | object | The direct report user information |
| ↳ `id` | string | Gong unique numeric identifier for the user |
| ↳ `emailAddress` | string | Email address of the Gong user |
| ↳ `firstName` | string | First name of the Gong user |
| ↳ `lastName` | string | Last name of the Gong user |
| ↳ `title` | string | Job title of the Gong user |
| ↳ `metrics` | json | A map of metric names to arrays of string values representing coaching metrics |
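Since `metrics` is a map of metric names to arrays of string values, downstream processing usually means flattening it. A small sketch that turns one `coachingData` entry into flat rows, using only the field names from the output table above:

```python
def flatten_coaching(entry):
    """Flatten one coachingData entry into (report_email, metric, value) rows."""
    rows = []
    for dr in entry.get("directReportsMetrics", []):
        email = dr["report"]["emailAddress"]
        for metric, values in dr.get("metrics", {}).items():
            for value in values:
                rows.append((email, metric, value))
    return rows
```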
### `gong_lookup_email`
Find all references to an email address in Gong (calls, email messages, meetings, CRM data, engagement).
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `emailAddress` | string | Yes | Email address to look up |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `requestId` | string | Gong request reference ID for troubleshooting |
| `calls` | array | Related calls referencing this email address |
| ↳ `id` | string | Gong's unique numeric identifier for the call \(up to 20 digits\) |
| ↳ `status` | string | Call status |
| ↳ `externalSystems` | array | Links to external systems such as CRM, Telephony System, etc. |
| ↳ `system` | string | External system name |
| ↳ `objects` | array | List of objects within the external system |
| ↳ `objectType` | string | Object type |
| ↳ `externalId` | string | External ID |
| `emails` | array | Related email messages referencing this email address |
| ↳ `id` | string | Gong's unique 32 character identifier for the email message |
| ↳ `from` | string | The sender's email address |
| ↳ `sentTime` | string | Date and time the email was sent in ISO-8601 format |
| ↳ `mailbox` | string | The mailbox from which the email was retrieved |
| ↳ `messageHash` | string | Hash code of the email message |
| `meetings` | array | Related meetings referencing this email address |
| ↳ `id` | string | Gong's unique identifier for the meeting |
| `customerData` | array | Links to data from external systems \(CRM, Telephony, etc.\) that reference this email |
| ↳ `system` | string | External system name |
| ↳ `objects` | array | List of objects in the external system |
| ↳ `id` | string | Gong's unique numeric identifier for the Lead or Contact \(up to 20 digits\) |
| ↳ `objectType` | string | Object type |
| ↳ `externalId` | string | External ID |
| ↳ `mirrorId` | string | CRM Mirror ID |
| ↳ `fields` | array | Object fields |
| ↳ `name` | string | Field name |
| ↳ `value` | json | Field value |
| `customerEngagement` | array | Customer engagement events \(such as viewing external shared calls\) |
| ↳ `eventType` | string | Event type |
| ↳ `eventName` | string | Event name |
| ↳ `timestamp` | string | Date and time the event occurred in ISO-8601 format |
| ↳ `contentId` | string | Event content ID |
| ↳ `contentUrl` | string | Event content URL |
| ↳ `reportingSystem` | string | Event reporting system |
| ↳ `sourceEventId` | string | Source event ID |
### `gong_lookup_phone`
Find all references to a phone number in Gong (calls, email messages, meetings, CRM data, and associated contacts).
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKey` | string | Yes | Gong API Access Key |
| `accessKeySecret` | string | Yes | Gong API Access Key Secret |
| `phoneNumber` | string | Yes | Phone number to look up \(must start with + followed by country code\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `requestId` | string | Gong request reference ID for troubleshooting |
| `suppliedPhoneNumber` | string | The phone number that was supplied in the request |
| `matchingPhoneNumbers` | array | Phone numbers found in the system that match the supplied number |
| `emailAddresses` | array | Email addresses associated with the phone number |
| `calls` | array | Related calls referencing this phone number |
| ↳ `id` | string | Gong's unique numeric identifier for the call \(up to 20 digits\) |
| ↳ `status` | string | Call status |
| ↳ `externalSystems` | array | Links to external systems such as CRM, Telephony System, etc. |
| ↳ `system` | string | External system name |
| ↳ `objects` | array | List of objects within the external system |
| ↳ `objectType` | string | Object type |
| ↳ `externalId` | string | External ID |
| `emails` | array | Related email messages associated with contacts matching this phone number |
| ↳ `id` | string | Gong's unique 32 character identifier for the email message |
| ↳ `from` | string | The sender's email address |
| ↳ `sentTime` | string | Date and time the email was sent in ISO-8601 format |
| ↳ `mailbox` | string | The mailbox from which the email was retrieved |
| ↳ `messageHash` | string | Hash code of the email message |
| `meetings` | array | Related meetings associated with this phone number |
| ↳ `id` | string | Gong's unique identifier for the meeting |
| `customerData` | array | Links to data from external systems \(CRM, Telephony, etc.\) that reference this phone number |
| ↳ `system` | string | External system name |
| ↳ `objects` | array | List of objects in the external system |
| ↳ `id` | string | Gong's unique numeric identifier for the Lead or Contact \(up to 20 digits\) |
| ↳ `objectType` | string | Object type |
| ↳ `externalId` | string | External ID |
| ↳ `mirrorId` | string | CRM Mirror ID |
| ↳ `fields` | array | Object fields |
| ↳ `name` | string | Field name |
| ↳ `value` | json | Field value |
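Because `phoneNumber` must start with `+` followed by a country code, a quick pre-flight check avoids a wasted API call. A loose E.164-style pattern (an assumption for illustration, not Gong's exact validation rule):

```python
import re

# Loose E.164 shape: "+" then a non-zero country-code digit and up to 14 more digits.
E164_RE = re.compile(r"^\+[1-9]\d{1,14}$")

def is_lookup_ready(phone: str) -> bool:
    """Return True if the number looks like +<country code><digits>."""
    return bool(E164_RE.fullmatch(phone))
```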

@@ -0,0 +1,175 @@
---
title: Google BigQuery
description: Query, list, and insert data in Google BigQuery
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="google_bigquery"
color="#E0E0E0"
/>
{/* MANUAL-CONTENT-START:intro */}
[Google BigQuery](https://cloud.google.com/bigquery) is Google Cloud's fully managed, serverless data warehouse designed for large-scale data analytics. BigQuery lets you run fast SQL queries on massive datasets, making it ideal for business intelligence, data exploration, and machine learning pipelines.
With the Google BigQuery integration in Sim, you can:
- **Run SQL queries**: Execute queries against your BigQuery datasets and retrieve results for analysis or downstream processing
- **List datasets**: Browse available datasets within a Google Cloud project
- **List and inspect tables**: Enumerate tables within a dataset and retrieve detailed schema information
- **Insert rows**: Stream new rows into BigQuery tables for real-time data ingestion
In Sim, the Google BigQuery integration enables your agents to query datasets, inspect schemas, and insert rows as part of automated workflows. This is ideal for automated reporting, data pipeline orchestration, real-time data ingestion, and analytics-driven decision making.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Connect to Google BigQuery to run SQL queries, list datasets and tables, get table metadata, and insert rows.
## Tools
### `google_bigquery_query`
Run a SQL query against Google BigQuery and return the results
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `projectId` | string | Yes | Google Cloud project ID |
| `query` | string | Yes | SQL query to execute |
| `useLegacySql` | boolean | No | Whether to use legacy SQL syntax \(default: false\) |
| `maxResults` | number | No | Maximum number of rows to return |
| `defaultDatasetId` | string | No | Default dataset for unqualified table names |
| `location` | string | No | Processing location \(e.g., "US", "EU"\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `columns` | array | Array of column names from the query result |
| `rows` | array | Array of row objects keyed by column name |
| `totalRows` | string | Total number of rows in the complete result set |
| `jobComplete` | boolean | Whether the query completed within the timeout |
| `totalBytesProcessed` | string | Total bytes processed by the query |
| `cacheHit` | boolean | Whether the query result was served from cache |
| `jobReference` | object | Job reference \(useful when jobComplete is false\) |
| ↳ `projectId` | string | Project ID containing the job |
| ↳ `jobId` | string | Unique job identifier |
| ↳ `location` | string | Geographic location of the job |
| `pageToken` | string | Token for fetching additional result pages |
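Query results arrive as `columns` (an array of names) plus `rows` (objects keyed by column name). A small sketch that re-orders row dicts into value lists matching the column order, which is handy before writing results to CSV or another table:

```python
def rows_to_table(columns, rows):
    """Convert row dicts (keyed by column name) into lists ordered by `columns`,
    matching the columns/rows output shape described above."""
    return [[row.get(col) for col in columns] for row in rows]
```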
### `google_bigquery_list_datasets`
List all datasets in a Google BigQuery project
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `projectId` | string | Yes | Google Cloud project ID |
| `maxResults` | number | No | Maximum number of datasets to return |
| `pageToken` | string | No | Token for pagination |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `datasets` | array | Array of dataset objects |
| ↳ `datasetId` | string | Unique dataset identifier |
| ↳ `projectId` | string | Project ID containing this dataset |
| ↳ `friendlyName` | string | Descriptive name for the dataset |
| ↳ `location` | string | Geographic location where the data resides |
| `nextPageToken` | string | Token for fetching next page of results |
### `google_bigquery_list_tables`
List all tables in a Google BigQuery dataset
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `projectId` | string | Yes | Google Cloud project ID |
| `datasetId` | string | Yes | BigQuery dataset ID |
| `maxResults` | number | No | Maximum number of tables to return |
| `pageToken` | string | No | Token for pagination |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `tables` | array | Array of table objects |
| ↳ `tableId` | string | Table identifier |
| ↳ `datasetId` | string | Dataset ID containing this table |
| ↳ `projectId` | string | Project ID containing this table |
| ↳ `type` | string | Table type \(TABLE, VIEW, EXTERNAL, etc.\) |
| ↳ `friendlyName` | string | User-friendly name for the table |
| ↳ `creationTime` | string | Time when created, in milliseconds since epoch |
| `totalItems` | number | Total number of tables in the dataset |
| `nextPageToken` | string | Token for fetching next page of results |
### `google_bigquery_get_table`
Get metadata and schema for a Google BigQuery table
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `projectId` | string | Yes | Google Cloud project ID |
| `datasetId` | string | Yes | BigQuery dataset ID |
| `tableId` | string | Yes | BigQuery table ID |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `tableId` | string | Table ID |
| `datasetId` | string | Dataset ID |
| `projectId` | string | Project ID |
| `type` | string | Table type \(TABLE, VIEW, SNAPSHOT, MATERIALIZED_VIEW, EXTERNAL\) |
| `description` | string | Table description |
| `numRows` | string | Total number of rows |
| `numBytes` | string | Total size in bytes, excluding data in streaming buffer |
| `schema` | array | Array of column definitions |
| ↳ `name` | string | Column name |
| ↳ `type` | string | Data type \(STRING, INTEGER, FLOAT, BOOLEAN, TIMESTAMP, RECORD, etc.\) |
| ↳ `mode` | string | Column mode \(NULLABLE, REQUIRED, or REPEATED\) |
| ↳ `description` | string | Column description |
| `creationTime` | string | Table creation time \(milliseconds since epoch\) |
| `lastModifiedTime` | string | Last modification time \(milliseconds since epoch\) |
| `location` | string | Geographic location where the table resides |
### `google_bigquery_insert_rows`
Insert rows into a Google BigQuery table using streaming insert
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `projectId` | string | Yes | Google Cloud project ID |
| `datasetId` | string | Yes | BigQuery dataset ID |
| `tableId` | string | Yes | BigQuery table ID |
| `rows` | string | Yes | JSON array of row objects to insert |
| `skipInvalidRows` | boolean | No | Whether to insert valid rows even if some are invalid |
| `ignoreUnknownValues` | boolean | No | Whether to ignore columns not in the table schema |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `insertedRows` | number | Number of rows successfully inserted |
| `errors` | array | Array of per-row insertion errors \(empty if all succeeded\) |
| ↳ `index` | number | Zero-based index of the row that failed |
| ↳ `errors` | array | Error details for this row |
| ↳ `reason` | string | Short error code summarizing the error |
| ↳ `location` | string | Where the error occurred |
| ↳ `message` | string | Human-readable error description |
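Note that `rows` is a JSON array passed as a string, and failures come back per-row in `errors`. A minimal sketch of building the input and pulling out failed row indexes, assuming the shapes in the tables above:

```python
import json

def build_rows_param(records):
    """Serialize a list of row objects into the JSON array string the tool expects."""
    return json.dumps(records)

def failed_indexes(errors):
    """Extract the zero-based indexes of rows that failed to insert."""
    return [e["index"] for e in errors]
```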

@@ -10,6 +10,18 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
color="#E0E0E0"
/>
{/* MANUAL-CONTENT-START:intro */}
[Google Books](https://books.google.com) is Google's comprehensive book discovery and metadata service, providing access to millions of books from publishers, libraries, and digitized collections worldwide.
With the Google Books integration in Sim, you can:
- **Search for books**: Find volumes by title, author, ISBN, or keyword across the entire Google Books catalog
- **Retrieve volume details**: Get detailed metadata for a specific book including title, authors, description, ratings, and publication details
In Sim, the Google Books integration allows your agents to search for books and retrieve volume details as part of automated workflows. This enables use cases such as content research, reading list curation, bibliographic data enrichment, and knowledge gathering from published works.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Search for books using the Google Books API. Find volumes by title, author, ISBN, or keywords, and retrieve detailed information about specific books including descriptions, ratings, and publication details.

@@ -0,0 +1,144 @@
---
title: Google Contacts
description: Manage Google Contacts
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="google_contacts"
color="#E0E0E0"
/>
## Usage Instructions
Integrate Google Contacts into your workflow to create, read, update, delete, list, and search contacts.
## Tools
### `google_contacts_create`
Create a new contact in Google Contacts
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `givenName` | string | Yes | First name of the contact |
| `familyName` | string | No | Last name of the contact |
| `email` | string | No | Email address of the contact |
| `emailType` | string | No | Email type: home, work, or other |
| `phone` | string | No | Phone number of the contact |
| `phoneType` | string | No | Phone type: mobile, home, work, or other |
| `organization` | string | No | Organization/company name |
| `jobTitle` | string | No | Job title at the organization |
| `notes` | string | No | Notes or biography for the contact |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Contact creation confirmation message |
| `metadata` | json | Created contact metadata including resource name and details |
### `google_contacts_get`
Get a specific contact from Google Contacts
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `resourceName` | string | Yes | Resource name of the contact \(e.g., people/c1234567890\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Contact retrieval confirmation message |
| `metadata` | json | Contact details including name, email, phone, and organization |
### `google_contacts_list`
List contacts from Google Contacts
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `pageSize` | number | No | Number of contacts to return \(1-1000, default 100\) |
| `pageToken` | string | No | Page token from a previous list request for pagination |
| `sortOrder` | string | No | Sort order for contacts |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Summary of found contacts count |
| `metadata` | json | List of contacts with pagination tokens |
### `google_contacts_search`
Search contacts in Google Contacts by name, email, phone, or organization
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `query` | string | Yes | Search query to match against contact names, emails, phones, and organizations |
| `pageSize` | number | No | Number of results to return \(default 10, max 30\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Summary of search results count |
| `metadata` | json | Search results with matching contacts |
### `google_contacts_update`
Update an existing contact in Google Contacts
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `resourceName` | string | Yes | Resource name of the contact \(e.g., people/c1234567890\) |
| `etag` | string | Yes | ETag from a previous get request \(required for concurrency control\) |
| `givenName` | string | No | Updated first name |
| `familyName` | string | No | Updated last name |
| `email` | string | No | Updated email address |
| `emailType` | string | No | Email type: home, work, or other |
| `phone` | string | No | Updated phone number |
| `phoneType` | string | No | Phone type: mobile, home, work, or other |
| `organization` | string | No | Updated organization/company name |
| `jobTitle` | string | No | Updated job title |
| `notes` | string | No | Updated notes or biography |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Contact update confirmation message |
| `metadata` | json | Updated contact metadata |
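Because `etag` must come from a previous get, updates follow a read-modify-write pattern. A sketch with hypothetical callables standing in for the `google_contacts_get` and `google_contacts_update` tools (the exact location of the etag inside `metadata` is an assumption):

```python
def update_contact_name(get_contact, update_contact, resource_name, new_given_name):
    """Read-modify-write: fetch the contact's current etag, then update.

    `get_contact` / `update_contact` are hypothetical stand-ins for the
    google_contacts_get and google_contacts_update tools.
    """
    current = get_contact(resourceName=resource_name)
    etag = current["metadata"]["etag"]  # assumed: metadata carries the etag
    return update_contact(
        resourceName=resource_name, etag=etag, givenName=new_given_name
    )
```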
### `google_contacts_delete`
Delete a contact from Google Contacts
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `resourceName` | string | Yes | Resource name of the contact to delete \(e.g., people/c1234567890\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Contact deletion confirmation message |
| `metadata` | json | Deletion details including resource name |

@@ -11,9 +11,13 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
/>
{/* MANUAL-CONTENT-START:intro */}
[Google Search](https://www.google.com) is the world's most widely used web search engine, making it easy to find information, discover new content, and answer questions in real time.
With the Google Search integration in Sim, you can:
- **Search the web**: Perform queries using Google's Custom Search API and retrieve structured search results with titles, snippets, and URLs
In Sim, the Google Search integration allows your agents to search the web and retrieve live information as part of automated workflows. This enables use cases such as automated research, fact-checking, knowledge synthesis, and dynamic content discovery.
{/* MANUAL-CONTENT-END */}

@@ -11,18 +11,20 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
/>
{/* MANUAL-CONTENT-START:intro */}
[Google Sheets](https://www.google.com/sheets/about/) is a cloud-based spreadsheet platform that allows teams and individuals to create, edit, and collaborate on spreadsheets in real-time. Widely used for data tracking, reporting, and lightweight database needs, Google Sheets integrates with many tools and services.
With the Google Sheets integration in Sim, you can:
- **Read data**: Retrieve cell values from specific ranges in a spreadsheet
- **Write data**: Write values to specific cell ranges
- **Update data**: Modify existing cell values in a spreadsheet
- **Append rows**: Add new rows of data to the end of a sheet
- **Clear ranges**: Remove data from specific cell ranges
- **Manage spreadsheets**: Create new spreadsheets or retrieve metadata about existing ones
- **Batch operations**: Perform batch read, update, and clear operations across multiple ranges
- **Copy sheets**: Duplicate sheets within or between spreadsheets
In Sim, the Google Sheets integration enables your agents to read from, write to, and manage spreadsheets as part of automated workflows. This is ideal for automated reporting, data synchronization, record-keeping, and building data pipelines that use spreadsheets as a collaborative data layer.
{/* MANUAL-CONTENT-END */}

@@ -0,0 +1,214 @@
---
title: Google Tasks
description: Manage Google Tasks
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="google_tasks"
color="#E0E0E0"
/>
{/* MANUAL-CONTENT-START:intro */}
[Google Tasks](https://support.google.com/tasks) is Google's lightweight task management service, integrated into Gmail, Google Calendar, and the standalone Google Tasks app. It provides a simple way to create, organize, and track to-do items with support for due dates, subtasks, and task lists.
With the Google Tasks integration in Sim, you can:
- **Create tasks**: Add new to-do items to any task list with titles, notes, and due dates
- **List tasks**: Retrieve all tasks from a specific task list
- **Get task details**: Fetch detailed information about a specific task by ID
- **Update tasks**: Modify task titles, notes, due dates, or completion status
- **Delete tasks**: Remove tasks from a task list
- **List task lists**: Browse all available task lists in a Google account
In Sim, the Google Tasks integration allows your agents to manage to-do items programmatically as part of automated workflows. This enables use cases such as automated task creation from incoming data, deadline monitoring, and workflow-triggered task management.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Integrate Google Tasks into your workflow. Create, read, update, delete, and list tasks and task lists.
## Tools
### `google_tasks_create`
Create a new task in a Google Tasks list
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `taskListId` | string | No | Task list ID \(defaults to primary task list "@default"\) |
| `title` | string | Yes | Title of the task \(max 1024 characters\) |
| `notes` | string | No | Notes/description for the task \(max 8192 characters\) |
| `due` | string | No | Due date in RFC 3339 format \(e.g., 2025-06-03T00:00:00.000Z\) |
| `status` | string | No | Task status: "needsAction" or "completed" |
| `parent` | string | No | Parent task ID to create this task as a subtask. Omit for top-level tasks. |
| `previous` | string | No | Previous sibling task ID to position after. Omit to place first among siblings. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Task ID |
| `title` | string | Task title |
| `notes` | string | Task notes |
| `status` | string | Task status \(needsAction or completed\) |
| `due` | string | Due date |
| `updated` | string | Last modification time |
| `selfLink` | string | URL for the task |
| `webViewLink` | string | Link to task in Google Tasks UI |
| `parent` | string | Parent task ID |
| `position` | string | Position among sibling tasks |
| `completed` | string | Completion date |
| `deleted` | boolean | Whether the task is deleted |
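The `due` parameter expects an RFC 3339 timestamp like the `2025-06-03T00:00:00.000Z` example above. A small sketch that produces a UTC-midnight due date in that format:

```python
from datetime import datetime, timezone

def rfc3339_due(year: int, month: int, day: int) -> str:
    """Format a due date as an RFC 3339 UTC timestamp at midnight,
    e.g. 2025-06-03T00:00:00.000Z, matching the format shown above."""
    dt = datetime(year, month, day, tzinfo=timezone.utc)
    return dt.strftime("%Y-%m-%dT%H:%M:%S.000Z")
```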
### `google_tasks_list`
List all tasks in a Google Tasks list
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `taskListId` | string | No | Task list ID \(defaults to primary task list "@default"\) |
| `maxResults` | number | No | Maximum number of tasks to return \(default 20, max 100\) |
| `pageToken` | string | No | Token for pagination |
| `showCompleted` | boolean | No | Whether to show completed tasks \(default true\) |
| `showDeleted` | boolean | No | Whether to show deleted tasks \(default false\) |
| `showHidden` | boolean | No | Whether to show hidden tasks \(default false\) |
| `dueMin` | string | No | Lower bound for due date filter \(RFC 3339 timestamp\) |
| `dueMax` | string | No | Upper bound for due date filter \(RFC 3339 timestamp\) |
| `completedMin` | string | No | Lower bound for task completion date \(RFC 3339 timestamp\) |
| `completedMax` | string | No | Upper bound for task completion date \(RFC 3339 timestamp\) |
| `updatedMin` | string | No | Lower bound for last modification time \(RFC 3339 timestamp\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `tasks` | array | List of tasks |
| ↳ `id` | string | Task identifier |
| ↳ `title` | string | Title of the task |
| ↳ `notes` | string | Notes/description for the task |
| ↳ `status` | string | Task status: "needsAction" or "completed" |
| ↳ `due` | string | Due date \(RFC 3339 timestamp\) |
| ↳ `completed` | string | Completion date \(RFC 3339 timestamp\) |
| ↳ `updated` | string | Last modification time \(RFC 3339 timestamp\) |
| ↳ `selfLink` | string | URL pointing to this task |
| ↳ `webViewLink` | string | Link to task in Google Tasks UI |
| ↳ `parent` | string | Parent task identifier |
| ↳ `position` | string | Position among sibling tasks \(string-based ordering\) |
| ↳ `hidden` | boolean | Whether the task is hidden |
| ↳ `deleted` | boolean | Whether the task is deleted |
| ↳ `links` | array | Collection of links associated with the task |
| ↳ ↳ `type` | string | Link type \(e.g., "email", "generic", "chat_message"\) |
| ↳ ↳ `description` | string | Link description |
| ↳ ↳ `link` | string | The URL |
| `nextPageToken` | string | Token for retrieving the next page of results |
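The filter inputs above map directly onto query parameters. As a minimal sketch (assuming the standard Google Tasks API v1 `tasks.list` endpoint, whose parameter names match the input table; `build_tasks_list_query` is a hypothetical helper, not part of the tool):

```python
# Build the request path and query params for a google_tasks_list call.
def build_tasks_list_query(task_list_id="@default", max_results=20, **filters):
    """Return (path, params); RFC 3339 date filters pass through unchanged."""
    allowed = {"pageToken", "showCompleted", "showDeleted", "showHidden",
               "dueMin", "dueMax", "completedMin", "completedMax", "updatedMin"}
    params = {"maxResults": min(max_results, 100)}  # the API caps page size at 100
    for key, value in filters.items():
        if key in allowed and value is not None:
            params[key] = value
    return f"/tasks/v1/lists/{task_list_id}/tasks", params

path, params = build_tasks_list_query(dueMin="2026-01-01T00:00:00Z",
                                      showCompleted=False)
```

Omitting `taskListId` targets the primary list via the `@default` alias.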
### `google_tasks_get`
Retrieve a specific task by ID from a Google Tasks list
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `taskListId` | string | No | Task list ID \(defaults to primary task list "@default"\) |
| `taskId` | string | Yes | The ID of the task to retrieve |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Task ID |
| `title` | string | Task title |
| `notes` | string | Task notes |
| `status` | string | Task status \(needsAction or completed\) |
| `due` | string | Due date |
| `updated` | string | Last modification time |
| `selfLink` | string | URL for the task |
| `webViewLink` | string | Link to task in Google Tasks UI |
| `parent` | string | Parent task ID |
| `position` | string | Position among sibling tasks |
| `completed` | string | Completion date |
| `deleted` | boolean | Whether the task is deleted |
### `google_tasks_update`
Update an existing task in a Google Tasks list
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `taskListId` | string | No | Task list ID \(defaults to primary task list "@default"\) |
| `taskId` | string | Yes | The ID of the task to update |
| `title` | string | No | New title for the task |
| `notes` | string | No | New notes for the task |
| `due` | string | No | New due date in RFC 3339 format |
| `status` | string | No | New status: "needsAction" or "completed" |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Task ID |
| `title` | string | Task title |
| `notes` | string | Task notes |
| `status` | string | Task status \(needsAction or completed\) |
| `due` | string | Due date |
| `updated` | string | Last modification time |
| `selfLink` | string | URL for the task |
| `webViewLink` | string | Link to task in Google Tasks UI |
| `parent` | string | Parent task ID |
| `position` | string | Position among sibling tasks |
| `completed` | string | Completion date |
| `deleted` | boolean | Whether the task is deleted |
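Since all update fields are optional, only the fields you supply should be sent, leaving the rest of the task unchanged. A sketch of that payload construction (`build_task_patch` is a hypothetical helper for illustration):

```python
# Build a partial-update body for google_tasks_update; unspecified
# fields are omitted so they keep their current values on the server.
def build_task_patch(title=None, notes=None, due=None, status=None):
    body = {k: v for k, v in
            {"title": title, "notes": notes, "due": due, "status": status}.items()
            if v is not None}
    if "status" in body and body["status"] not in ("needsAction", "completed"):
        raise ValueError("status must be 'needsAction' or 'completed'")
    return body

# Mark a task complete and push its due date, touching nothing else.
patch = build_task_patch(status="completed", due="2026-03-01T00:00:00Z")
```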
### `google_tasks_delete`
Delete a task from a Google Tasks list
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `taskListId` | string | No | Task list ID \(defaults to primary task list "@default"\) |
| `taskId` | string | Yes | The ID of the task to delete |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `taskId` | string | Deleted task ID |
| `deleted` | boolean | Whether deletion was successful |
### `google_tasks_list_task_lists`
Retrieve all task lists for the authenticated user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `maxResults` | number | No | Maximum number of task lists to return \(default 20, max 100\) |
| `pageToken` | string | No | Token for pagination |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `taskLists` | array | List of task lists |
| ↳ `id` | string | Task list identifier |
| ↳ `title` | string | Title of the task list |
| ↳ `updated` | string | Last modification time \(RFC 3339 timestamp\) |
| ↳ `selfLink` | string | URL pointing to this task list |
| `nextPageToken` | string | Token for retrieving the next page of results |
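Like the other list tools, this one pages via `pageToken`/`nextPageToken`. The drain loop can be sketched against a stubbed fetch function, so the pagination pattern is the point rather than the HTTP call:

```python
# Follow nextPageToken until it is absent, accumulating every task list.
def fetch_all(fetch_page):
    items, token = [], None
    while True:
        page = fetch_page(page_token=token)
        items.extend(page.get("taskLists", []))
        token = page.get("nextPageToken")
        if not token:
            return items

# Stubbed two-page response, keyed by the token each request carries.
_pages = {None: {"taskLists": [{"id": "a"}], "nextPageToken": "p2"},
          "p2":  {"taskLists": [{"id": "b"}]}}
all_lists = fetch_all(lambda page_token: _pages[page_token])
```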



@@ -0,0 +1,72 @@
---
title: Google Translate
description: Translate text using Google Cloud Translation
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="google_translate"
color="#E0E0E0"
/>
{/* MANUAL-CONTENT-START:intro */}
[Google Translate](https://translate.google.com/) is Google's powerful translation service, supporting over 100 languages for text, documents, and websites. Backed by advanced neural machine translation, Google Translate delivers fast and accurate translations for a wide range of use cases.
With the Google Translate integration in Sim, you can:
- **Translate text**: Convert text between over 100 languages using Google Cloud Translation
- **Detect languages**: Automatically identify the language of a given text input
In Sim, the Google Translate integration allows your agents to translate text and detect languages as part of automated workflows. This enables use cases such as localization, multilingual support, content translation, and language detection at scale.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Translate and detect languages using the Google Cloud Translation API. Supports auto-detection of the source language.
## Tools
### `google_translate_text`
Translate text between languages using the Google Cloud Translation API. Supports auto-detection of the source language.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Google Cloud API key with Cloud Translation API enabled |
| `text` | string | Yes | The text to translate |
| `target` | string | Yes | Target language code \(e.g., "es", "fr", "de", "ja"\) |
| `source` | string | No | Source language code. If omitted, the API will auto-detect the source language. |
| `format` | string | No | Format of the text: "text" for plain text, "html" for HTML content |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `translatedText` | string | The translated text |
| `detectedSourceLanguage` | string | The detected source language code \(if source was not specified\) |
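The input table corresponds to the public Cloud Translation v2 REST body (`q`, `target`, `source`, `format`). A sketch of that request construction (the helper is illustrative; the endpoint constant is the documented v2 URL):

```python
# Build the JSON body for a google_translate_text call.
TRANSLATE_URL = "https://translation.googleapis.com/language/translate/v2"

def build_translate_request(text, target, source=None, fmt="text"):
    body = {"q": text, "target": target, "format": fmt}
    if source:  # omit entirely to let the API auto-detect the source language
        body["source"] = source
    return body

req = build_translate_request("Hello, world", target="es")
```

When `source` is omitted, the response includes `detectedSourceLanguage` alongside `translatedText`.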
### `google_translate_detect`
Detect the language of text using the Google Cloud Translation API.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Google Cloud API key with Cloud Translation API enabled |
| `text` | string | Yes | The text to detect the language of |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `language` | string | The detected language code \(e.g., "en", "es", "fr"\) |
| `confidence` | number | Confidence score of the detection |


@@ -0,0 +1,575 @@
---
title: Greenhouse
description: Manage candidates, jobs, and applications in Greenhouse
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="greenhouse"
color="#469776"
/>
{/* MANUAL-CONTENT-START:intro */}
[Greenhouse](https://www.greenhouse.com/) is a leading applicant tracking system (ATS) and hiring platform designed to help companies optimize their recruiting processes. Greenhouse provides structured hiring workflows, candidate management, interview scheduling, and analytics to help organizations make better hiring decisions at scale.
With the Greenhouse integration in Sim, you can:
- **Manage candidates**: List and retrieve detailed candidate profiles including contact information, tags, and application history
- **Track jobs**: List and view job postings with details on hiring teams, openings, and confidentiality settings
- **Monitor applications**: List and retrieve applications with status, source, and interview stage information
- **Access user data**: List and look up Greenhouse users including recruiters, coordinators, and hiring managers
- **Browse organizational data**: List departments, offices, and job stages to understand your hiring pipeline structure
In Sim, the Greenhouse integration enables your agents to interact with your recruiting data as part of automated workflows. Agents can pull candidate information, monitor application pipelines, track job openings, and cross-reference hiring team data—all programmatically. This is ideal for building automated recruiting reports, candidate pipeline monitoring, hiring analytics dashboards, and workflows that react to changes in your talent pipeline.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Integrate Greenhouse into the workflow. List and retrieve candidates, jobs, applications, users, departments, offices, and job stages from your Greenhouse ATS account.
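Greenhouse's Harvest API authenticates with HTTP Basic auth, using the API key as the username and an empty password; that scheme can be sketched as follows (`greenhouse_auth_header` is an illustrative helper, not part of these tools):

```python
import base64

# Build the Basic-auth header Greenhouse's Harvest API expects:
# base64("<api_key>:") with a trailing colon and no password.
def greenhouse_auth_header(api_key):
    token = base64.b64encode(f"{api_key}:".encode()).decode()
    return {"Authorization": f"Basic {token}"}

hdr = greenhouse_auth_header("abc123")
```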
## Tools
### `greenhouse_list_candidates`
List candidates from your Greenhouse account, including contact information, tags, and application history.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `candidates` | json | List of candidates |
| `jobs` | json | List of jobs |
| `applications` | json | List of applications |
| `users` | json | List of users |
| `departments` | json | List of departments |
| `offices` | json | List of offices |
| `stages` | json | List of job stages |
| `count` | number | Number of results returned |
| `id` | number | Resource ID |
| `first_name` | string | First name |
| `last_name` | string | Last name |
| `name` | string | Resource name |
| `status` | string | Status |
| `email_addresses` | json | Email addresses |
| `phone_numbers` | json | Phone numbers |
| `tags` | json | Tags |
| `application_ids` | json | Associated application IDs |
| `recruiter` | json | Assigned recruiter |
| `coordinator` | json | Assigned coordinator |
| `current_stage` | json | Current interview stage |
| `source` | json | Application source |
| `hiring_team` | json | Hiring team members |
| `openings` | json | Job openings |
| `custom_fields` | json | Custom field values |
| `attachments` | json | File attachments |
| `educations` | json | Education history |
| `employments` | json | Employment history |
| `answers` | json | Application question answers |
| `prospect` | boolean | Whether this is a prospect |
| `confidential` | boolean | Whether the job is confidential |
| `is_private` | boolean | Whether the candidate is private |
| `can_email` | boolean | Whether the candidate can be emailed |
| `disabled` | boolean | Whether the user is disabled |
| `site_admin` | boolean | Whether the user is a site admin |
| `primary_email_address` | string | Primary email address |
| `created_at` | string | Creation timestamp \(ISO 8601\) |
| `updated_at` | string | Last updated timestamp \(ISO 8601\) |
### `greenhouse_get_candidate`
Retrieve a specific candidate by ID.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `candidates` | json | List of candidates |
| `jobs` | json | List of jobs |
| `applications` | json | List of applications |
| `users` | json | List of users |
| `departments` | json | List of departments |
| `offices` | json | List of offices |
| `stages` | json | List of job stages |
| `count` | number | Number of results returned |
| `id` | number | Resource ID |
| `first_name` | string | First name |
| `last_name` | string | Last name |
| `name` | string | Resource name |
| `status` | string | Status |
| `email_addresses` | json | Email addresses |
| `phone_numbers` | json | Phone numbers |
| `tags` | json | Tags |
| `application_ids` | json | Associated application IDs |
| `recruiter` | json | Assigned recruiter |
| `coordinator` | json | Assigned coordinator |
| `current_stage` | json | Current interview stage |
| `source` | json | Application source |
| `hiring_team` | json | Hiring team members |
| `openings` | json | Job openings |
| `custom_fields` | json | Custom field values |
| `attachments` | json | File attachments |
| `educations` | json | Education history |
| `employments` | json | Employment history |
| `answers` | json | Application question answers |
| `prospect` | boolean | Whether this is a prospect |
| `confidential` | boolean | Whether the job is confidential |
| `is_private` | boolean | Whether the candidate is private |
| `can_email` | boolean | Whether the candidate can be emailed |
| `disabled` | boolean | Whether the user is disabled |
| `site_admin` | boolean | Whether the user is a site admin |
| `primary_email_address` | string | Primary email address |
| `created_at` | string | Creation timestamp \(ISO 8601\) |
| `updated_at` | string | Last updated timestamp \(ISO 8601\) |
### `greenhouse_list_jobs`
List jobs from your Greenhouse account, with hiring team, openings, and confidentiality details.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `candidates` | json | List of candidates |
| `jobs` | json | List of jobs |
| `applications` | json | List of applications |
| `users` | json | List of users |
| `departments` | json | List of departments |
| `offices` | json | List of offices |
| `stages` | json | List of job stages |
| `count` | number | Number of results returned |
| `id` | number | Resource ID |
| `first_name` | string | First name |
| `last_name` | string | Last name |
| `name` | string | Resource name |
| `status` | string | Status |
| `email_addresses` | json | Email addresses |
| `phone_numbers` | json | Phone numbers |
| `tags` | json | Tags |
| `application_ids` | json | Associated application IDs |
| `recruiter` | json | Assigned recruiter |
| `coordinator` | json | Assigned coordinator |
| `current_stage` | json | Current interview stage |
| `source` | json | Application source |
| `hiring_team` | json | Hiring team members |
| `openings` | json | Job openings |
| `custom_fields` | json | Custom field values |
| `attachments` | json | File attachments |
| `educations` | json | Education history |
| `employments` | json | Employment history |
| `answers` | json | Application question answers |
| `prospect` | boolean | Whether this is a prospect |
| `confidential` | boolean | Whether the job is confidential |
| `is_private` | boolean | Whether the candidate is private |
| `can_email` | boolean | Whether the candidate can be emailed |
| `disabled` | boolean | Whether the user is disabled |
| `site_admin` | boolean | Whether the user is a site admin |
| `primary_email_address` | string | Primary email address |
| `created_at` | string | Creation timestamp \(ISO 8601\) |
| `updated_at` | string | Last updated timestamp \(ISO 8601\) |
### `greenhouse_get_job`
Retrieve a specific job by ID.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `candidates` | json | List of candidates |
| `jobs` | json | List of jobs |
| `applications` | json | List of applications |
| `users` | json | List of users |
| `departments` | json | List of departments |
| `offices` | json | List of offices |
| `stages` | json | List of job stages |
| `count` | number | Number of results returned |
| `id` | number | Resource ID |
| `first_name` | string | First name |
| `last_name` | string | Last name |
| `name` | string | Resource name |
| `status` | string | Status |
| `email_addresses` | json | Email addresses |
| `phone_numbers` | json | Phone numbers |
| `tags` | json | Tags |
| `application_ids` | json | Associated application IDs |
| `recruiter` | json | Assigned recruiter |
| `coordinator` | json | Assigned coordinator |
| `current_stage` | json | Current interview stage |
| `source` | json | Application source |
| `hiring_team` | json | Hiring team members |
| `openings` | json | Job openings |
| `custom_fields` | json | Custom field values |
| `attachments` | json | File attachments |
| `educations` | json | Education history |
| `employments` | json | Employment history |
| `answers` | json | Application question answers |
| `prospect` | boolean | Whether this is a prospect |
| `confidential` | boolean | Whether the job is confidential |
| `is_private` | boolean | Whether the candidate is private |
| `can_email` | boolean | Whether the candidate can be emailed |
| `disabled` | boolean | Whether the user is disabled |
| `site_admin` | boolean | Whether the user is a site admin |
| `primary_email_address` | string | Primary email address |
| `created_at` | string | Creation timestamp \(ISO 8601\) |
| `updated_at` | string | Last updated timestamp \(ISO 8601\) |
### `greenhouse_list_applications`
List applications from your Greenhouse account, with status, source, and interview stage information.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `candidates` | json | List of candidates |
| `jobs` | json | List of jobs |
| `applications` | json | List of applications |
| `users` | json | List of users |
| `departments` | json | List of departments |
| `offices` | json | List of offices |
| `stages` | json | List of job stages |
| `count` | number | Number of results returned |
| `id` | number | Resource ID |
| `first_name` | string | First name |
| `last_name` | string | Last name |
| `name` | string | Resource name |
| `status` | string | Status |
| `email_addresses` | json | Email addresses |
| `phone_numbers` | json | Phone numbers |
| `tags` | json | Tags |
| `application_ids` | json | Associated application IDs |
| `recruiter` | json | Assigned recruiter |
| `coordinator` | json | Assigned coordinator |
| `current_stage` | json | Current interview stage |
| `source` | json | Application source |
| `hiring_team` | json | Hiring team members |
| `openings` | json | Job openings |
| `custom_fields` | json | Custom field values |
| `attachments` | json | File attachments |
| `educations` | json | Education history |
| `employments` | json | Employment history |
| `answers` | json | Application question answers |
| `prospect` | boolean | Whether this is a prospect |
| `confidential` | boolean | Whether the job is confidential |
| `is_private` | boolean | Whether the candidate is private |
| `can_email` | boolean | Whether the candidate can be emailed |
| `disabled` | boolean | Whether the user is disabled |
| `site_admin` | boolean | Whether the user is a site admin |
| `primary_email_address` | string | Primary email address |
| `created_at` | string | Creation timestamp \(ISO 8601\) |
| `updated_at` | string | Last updated timestamp \(ISO 8601\) |
### `greenhouse_get_application`
Retrieve a specific application by ID.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `candidates` | json | List of candidates |
| `jobs` | json | List of jobs |
| `applications` | json | List of applications |
| `users` | json | List of users |
| `departments` | json | List of departments |
| `offices` | json | List of offices |
| `stages` | json | List of job stages |
| `count` | number | Number of results returned |
| `id` | number | Resource ID |
| `first_name` | string | First name |
| `last_name` | string | Last name |
| `name` | string | Resource name |
| `status` | string | Status |
| `email_addresses` | json | Email addresses |
| `phone_numbers` | json | Phone numbers |
| `tags` | json | Tags |
| `application_ids` | json | Associated application IDs |
| `recruiter` | json | Assigned recruiter |
| `coordinator` | json | Assigned coordinator |
| `current_stage` | json | Current interview stage |
| `source` | json | Application source |
| `hiring_team` | json | Hiring team members |
| `openings` | json | Job openings |
| `custom_fields` | json | Custom field values |
| `attachments` | json | File attachments |
| `educations` | json | Education history |
| `employments` | json | Employment history |
| `answers` | json | Application question answers |
| `prospect` | boolean | Whether this is a prospect |
| `confidential` | boolean | Whether the job is confidential |
| `is_private` | boolean | Whether the candidate is private |
| `can_email` | boolean | Whether the candidate can be emailed |
| `disabled` | boolean | Whether the user is disabled |
| `site_admin` | boolean | Whether the user is a site admin |
| `primary_email_address` | string | Primary email address |
| `created_at` | string | Creation timestamp \(ISO 8601\) |
| `updated_at` | string | Last updated timestamp \(ISO 8601\) |
### `greenhouse_list_users`
List Greenhouse users, including recruiters, coordinators, and hiring managers.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `candidates` | json | List of candidates |
| `jobs` | json | List of jobs |
| `applications` | json | List of applications |
| `users` | json | List of users |
| `departments` | json | List of departments |
| `offices` | json | List of offices |
| `stages` | json | List of job stages |
| `count` | number | Number of results returned |
| `id` | number | Resource ID |
| `first_name` | string | First name |
| `last_name` | string | Last name |
| `name` | string | Resource name |
| `status` | string | Status |
| `email_addresses` | json | Email addresses |
| `phone_numbers` | json | Phone numbers |
| `tags` | json | Tags |
| `application_ids` | json | Associated application IDs |
| `recruiter` | json | Assigned recruiter |
| `coordinator` | json | Assigned coordinator |
| `current_stage` | json | Current interview stage |
| `source` | json | Application source |
| `hiring_team` | json | Hiring team members |
| `openings` | json | Job openings |
| `custom_fields` | json | Custom field values |
| `attachments` | json | File attachments |
| `educations` | json | Education history |
| `employments` | json | Employment history |
| `answers` | json | Application question answers |
| `prospect` | boolean | Whether this is a prospect |
| `confidential` | boolean | Whether the job is confidential |
| `is_private` | boolean | Whether the candidate is private |
| `can_email` | boolean | Whether the candidate can be emailed |
| `disabled` | boolean | Whether the user is disabled |
| `site_admin` | boolean | Whether the user is a site admin |
| `primary_email_address` | string | Primary email address |
| `created_at` | string | Creation timestamp \(ISO 8601\) |
| `updated_at` | string | Last updated timestamp \(ISO 8601\) |
### `greenhouse_get_user`
Retrieve a specific Greenhouse user by ID.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `candidates` | json | List of candidates |
| `jobs` | json | List of jobs |
| `applications` | json | List of applications |
| `users` | json | List of users |
| `departments` | json | List of departments |
| `offices` | json | List of offices |
| `stages` | json | List of job stages |
| `count` | number | Number of results returned |
| `id` | number | Resource ID |
| `first_name` | string | First name |
| `last_name` | string | Last name |
| `name` | string | Resource name |
| `status` | string | Status |
| `email_addresses` | json | Email addresses |
| `phone_numbers` | json | Phone numbers |
| `tags` | json | Tags |
| `application_ids` | json | Associated application IDs |
| `recruiter` | json | Assigned recruiter |
| `coordinator` | json | Assigned coordinator |
| `current_stage` | json | Current interview stage |
| `source` | json | Application source |
| `hiring_team` | json | Hiring team members |
| `openings` | json | Job openings |
| `custom_fields` | json | Custom field values |
| `attachments` | json | File attachments |
| `educations` | json | Education history |
| `employments` | json | Employment history |
| `answers` | json | Application question answers |
| `prospect` | boolean | Whether this is a prospect |
| `confidential` | boolean | Whether the job is confidential |
| `is_private` | boolean | Whether the candidate is private |
| `can_email` | boolean | Whether the candidate can be emailed |
| `disabled` | boolean | Whether the user is disabled |
| `site_admin` | boolean | Whether the user is a site admin |
| `primary_email_address` | string | Primary email address |
| `created_at` | string | Creation timestamp \(ISO 8601\) |
| `updated_at` | string | Last updated timestamp \(ISO 8601\) |
### `greenhouse_list_departments`
List departments in your Greenhouse account.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `candidates` | json | List of candidates |
| `jobs` | json | List of jobs |
| `applications` | json | List of applications |
| `users` | json | List of users |
| `departments` | json | List of departments |
| `offices` | json | List of offices |
| `stages` | json | List of job stages |
| `count` | number | Number of results returned |
| `id` | number | Resource ID |
| `first_name` | string | First name |
| `last_name` | string | Last name |
| `name` | string | Resource name |
| `status` | string | Status |
| `email_addresses` | json | Email addresses |
| `phone_numbers` | json | Phone numbers |
| `tags` | json | Tags |
| `application_ids` | json | Associated application IDs |
| `recruiter` | json | Assigned recruiter |
| `coordinator` | json | Assigned coordinator |
| `current_stage` | json | Current interview stage |
| `source` | json | Application source |
| `hiring_team` | json | Hiring team members |
| `openings` | json | Job openings |
| `custom_fields` | json | Custom field values |
| `attachments` | json | File attachments |
| `educations` | json | Education history |
| `employments` | json | Employment history |
| `answers` | json | Application question answers |
| `prospect` | boolean | Whether this is a prospect |
| `confidential` | boolean | Whether the job is confidential |
| `is_private` | boolean | Whether the candidate is private |
| `can_email` | boolean | Whether the candidate can be emailed |
| `disabled` | boolean | Whether the user is disabled |
| `site_admin` | boolean | Whether the user is a site admin |
| `primary_email_address` | string | Primary email address |
| `created_at` | string | Creation timestamp \(ISO 8601\) |
| `updated_at` | string | Last updated timestamp \(ISO 8601\) |
### `greenhouse_list_offices`
List offices in your Greenhouse account.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `candidates` | json | List of candidates |
| `jobs` | json | List of jobs |
| `applications` | json | List of applications |
| `users` | json | List of users |
| `departments` | json | List of departments |
| `offices` | json | List of offices |
| `stages` | json | List of job stages |
| `count` | number | Number of results returned |
| `id` | number | Resource ID |
| `first_name` | string | First name |
| `last_name` | string | Last name |
| `name` | string | Resource name |
| `status` | string | Status |
| `email_addresses` | json | Email addresses |
| `phone_numbers` | json | Phone numbers |
| `tags` | json | Tags |
| `application_ids` | json | Associated application IDs |
| `recruiter` | json | Assigned recruiter |
| `coordinator` | json | Assigned coordinator |
| `current_stage` | json | Current interview stage |
| `source` | json | Application source |
| `hiring_team` | json | Hiring team members |
| `openings` | json | Job openings |
| `custom_fields` | json | Custom field values |
| `attachments` | json | File attachments |
| `educations` | json | Education history |
| `employments` | json | Employment history |
| `answers` | json | Application question answers |
| `prospect` | boolean | Whether this is a prospect |
| `confidential` | boolean | Whether the job is confidential |
| `is_private` | boolean | Whether the candidate is private |
| `can_email` | boolean | Whether the candidate can be emailed |
| `disabled` | boolean | Whether the user is disabled |
| `site_admin` | boolean | Whether the user is a site admin |
| `primary_email_address` | string | Primary email address |
| `created_at` | string | Creation timestamp \(ISO 8601\) |
| `updated_at` | string | Last updated timestamp \(ISO 8601\) |
### `greenhouse_list_job_stages`
List job stages in your Greenhouse account.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `candidates` | json | List of candidates |
| `jobs` | json | List of jobs |
| `applications` | json | List of applications |
| `users` | json | List of users |
| `departments` | json | List of departments |
| `offices` | json | List of offices |
| `stages` | json | List of job stages |
| `count` | number | Number of results returned |
| `id` | number | Resource ID |
| `first_name` | string | First name |
| `last_name` | string | Last name |
| `name` | string | Resource name |
| `status` | string | Status |
| `email_addresses` | json | Email addresses |
| `phone_numbers` | json | Phone numbers |
| `tags` | json | Tags |
| `application_ids` | json | Associated application IDs |
| `recruiter` | json | Assigned recruiter |
| `coordinator` | json | Assigned coordinator |
| `current_stage` | json | Current interview stage |
| `source` | json | Application source |
| `hiring_team` | json | Hiring team members |
| `openings` | json | Job openings |
| `custom_fields` | json | Custom field values |
| `attachments` | json | File attachments |
| `educations` | json | Education history |
| `employments` | json | Employment history |
| `answers` | json | Application question answers |
| `prospect` | boolean | Whether this is a prospect |
| `confidential` | boolean | Whether the job is confidential |
| `is_private` | boolean | Whether the candidate is private |
| `can_email` | boolean | Whether the candidate can be emailed |
| `disabled` | boolean | Whether the user is disabled |
| `site_admin` | boolean | Whether the user is a site admin |
| `primary_email_address` | string | Primary email address |
| `created_at` | string | Creation timestamp \(ISO 8601\) |
| `updated_at` | string | Last updated timestamp \(ISO 8601\) |


@@ -0,0 +1,459 @@
---
title: Hex
description: Run and manage Hex projects
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="hex"
color="#14151A"
/>
{/* MANUAL-CONTENT-START:intro */}
[Hex](https://hex.tech/) is a collaborative platform for analytics and data science that allows you to build, run, and share interactive data projects and notebooks. Hex lets teams work together on data exploration, transformation, and visualization, making it easy to turn analysis into shareable insights.
With Hex, you can:
- **Create and run powerful notebooks**: Blend SQL, Python, and visualizations in a single, interactive workspace.
- **Collaborate and share**: Work together with teammates in real time and publish interactive data apps for broader audiences.
- **Automate and orchestrate workflows**: Schedule notebook runs, parameterize runs with inputs, and automate data tasks.
- **Visualize and communicate results**: Turn analysis results into dashboards or interactive apps that anyone can use.
- **Integrate with your data stack**: Connect easily to data warehouses, APIs, and other sources.
The Sim Hex integration allows your AI agents or workflows to:
- List, get, and manage Hex projects directly from Sim.
- Trigger and monitor notebook runs, check their statuses, or cancel them as part of larger automation flows.
- Retrieve run results and use them within Sim-powered processes and decision-making.
- Leverage Hex's interactive analytics capabilities right inside your automated Sim workflows.
Whether you're empowering analysts, automating reporting, or embedding actionable data into your processes, Hex and Sim provide a seamless way to operationalize analytics and bring data-driven insights to your team.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Integrate Hex into your workflow. Run projects, check run status, manage collections and groups, list users, and view data connections. Requires a Hex API token.
## Tools
### `hex_cancel_run`
Cancel an active Hex project run.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Hex API token \(Personal or Workspace\) |
| `projectId` | string | Yes | The UUID of the Hex project |
| `runId` | string | Yes | The UUID of the run to cancel |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Whether the run was successfully cancelled |
| `projectId` | string | Project UUID |
| `runId` | string | Run UUID that was cancelled |
### `hex_create_collection`
Create a new collection in the Hex workspace to organize projects.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Hex API token \(Personal or Workspace\) |
| `name` | string | Yes | Name for the new collection |
| `description` | string | No | Optional description for the collection |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Newly created collection UUID |
| `name` | string | Collection name |
| `description` | string | Collection description |
| `creator` | object | Collection creator |
| ↳ `email` | string | Creator email |
| ↳ `id` | string | Creator UUID |
### `hex_get_collection`
Retrieve details for a specific Hex collection by its ID.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Hex API token \(Personal or Workspace\) |
| `collectionId` | string | Yes | The UUID of the collection |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Collection UUID |
| `name` | string | Collection name |
| `description` | string | Collection description |
| `creator` | object | Collection creator |
| ↳ `email` | string | Creator email |
| ↳ `id` | string | Creator UUID |
### `hex_get_data_connection`
Retrieve details for a specific data connection including type, description, and configuration flags.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Hex API token \(Personal or Workspace\) |
| `dataConnectionId` | string | Yes | The UUID of the data connection |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Connection UUID |
| `name` | string | Connection name |
| `type` | string | Connection type \(e.g., snowflake, postgres, bigquery\) |
| `description` | string | Connection description |
| `connectViaSsh` | boolean | Whether SSH tunneling is enabled |
| `includeMagic` | boolean | Whether Magic AI features are enabled |
| `allowWritebackCells` | boolean | Whether writeback cells are allowed |
### `hex_get_group`
Retrieve details for a specific Hex group.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Hex API token \(Personal or Workspace\) |
| `groupId` | string | Yes | The UUID of the group |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Group UUID |
| `name` | string | Group name |
| `createdAt` | string | Creation timestamp |
### `hex_get_project`
Get metadata and details for a specific Hex project by its ID.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Hex API token \(Personal or Workspace\) |
| `projectId` | string | Yes | The UUID of the Hex project |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Project UUID |
| `title` | string | Project title |
| `description` | string | Project description |
| `status` | object | Project status |
| ↳ `name` | string | Status name \(e.g., PUBLISHED, DRAFT\) |
| `type` | string | Project type \(PROJECT or COMPONENT\) |
| `creator` | object | Project creator |
| ↳ `email` | string | Creator email |
| `owner` | object | Project owner |
| ↳ `email` | string | Owner email |
| `categories` | array | Project categories |
| ↳ `name` | string | Category name |
| ↳ `description` | string | Category description |
| `lastEditedAt` | string | ISO 8601 last edited timestamp |
| `lastPublishedAt` | string | ISO 8601 last published timestamp |
| `createdAt` | string | ISO 8601 creation timestamp |
| `archivedAt` | string | ISO 8601 archived timestamp |
| `trashedAt` | string | ISO 8601 trashed timestamp |
### `hex_get_project_runs`
Retrieve API-triggered runs for a Hex project with optional filtering by status and pagination.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Hex API token \(Personal or Workspace\) |
| `projectId` | string | Yes | The UUID of the Hex project |
| `limit` | number | No | Maximum number of runs to return \(1-100, default: 25\) |
| `offset` | number | No | Offset for paginated results \(default: 0\) |
| `statusFilter` | string | No | Filter by run status: PENDING, RUNNING, ERRORED, COMPLETED, KILLED, UNABLE_TO_ALLOCATE_KERNEL |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `runs` | array | List of project runs |
| ↳ `projectId` | string | Project UUID |
| ↳ `runId` | string | Run UUID |
| ↳ `runUrl` | string | URL to view the run |
| ↳ `status` | string | Run status \(PENDING, RUNNING, COMPLETED, ERRORED, KILLED, UNABLE_TO_ALLOCATE_KERNEL\) |
| ↳ `startTime` | string | Run start time |
| ↳ `endTime` | string | Run end time |
| ↳ `elapsedTime` | number | Elapsed time in seconds |
| ↳ `traceId` | string | Trace ID |
| ↳ `projectVersion` | number | Project version number |
| `total` | number | Total number of runs returned |
| `traceId` | string | Top-level trace ID |
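The `limit`/`offset` pair above pages through runs 25 at a time by default. As an illustrative sketch (plain Python, no Hex API involved), the offsets needed to cover a known run count are:

```python
def page_offsets(total: int, limit: int = 25) -> list[int]:
    """Offsets that cover `total` runs when fetching `limit` runs per request."""
    if total <= 0 or limit <= 0:
        return []
    return list(range(0, total, limit))
```

For 60 runs at the default page size this yields offsets 0, 25, and 50, i.e. three calls to `hex_get_project_runs`.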
### `hex_get_queried_tables`
Return the warehouse tables queried by a Hex project, including data connection and table names.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Hex API token \(Personal or Workspace\) |
| `projectId` | string | Yes | The UUID of the Hex project |
| `limit` | number | No | Maximum number of tables to return \(1-100\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `tables` | array | List of warehouse tables queried by the project |
| ↳ `dataConnectionId` | string | Data connection UUID |
| ↳ `dataConnectionName` | string | Data connection name |
| ↳ `tableName` | string | Table name |
| `total` | number | Total number of tables returned |
### `hex_get_run_status`
Check the status of a Hex project run by its run ID.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Hex API token \(Personal or Workspace\) |
| `projectId` | string | Yes | The UUID of the Hex project |
| `runId` | string | Yes | The UUID of the run to check |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `projectId` | string | Project UUID |
| `runId` | string | Run UUID |
| `runUrl` | string | URL to view the run |
| `status` | string | Run status \(PENDING, RUNNING, COMPLETED, ERRORED, KILLED, UNABLE_TO_ALLOCATE_KERNEL\) |
| `startTime` | string | ISO 8601 run start time |
| `endTime` | string | ISO 8601 run end time |
| `elapsedTime` | number | Elapsed time in seconds |
| `traceId` | string | Trace ID for debugging |
| `projectVersion` | number | Project version number |
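When polling this tool in a loop, the useful distinction is between in-flight and terminal statuses. A minimal helper, based only on the status values listed in the table above:

```python
# Statuses after which a run will not change state again (from the table above).
TERMINAL_STATUSES = {"COMPLETED", "ERRORED", "KILLED", "UNABLE_TO_ALLOCATE_KERNEL"}

def is_terminal(status: str) -> bool:
    """Return True once polling hex_get_run_status can stop."""
    return status in TERMINAL_STATUSES
```

`PENDING` and `RUNNING` runs should keep polling; anything in the set above is final.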
### `hex_list_collections`
List all collections in the Hex workspace.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Hex API token \(Personal or Workspace\) |
| `limit` | number | No | Maximum number of collections to return \(1-500, default: 25\) |
| `sortBy` | string | No | Sort by field: NAME |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `collections` | array | List of collections |
| ↳ `id` | string | Collection UUID |
| ↳ `name` | string | Collection name |
| ↳ `description` | string | Collection description |
| ↳ `creator` | object | Collection creator |
| ↳ `email` | string | Creator email |
| ↳ `id` | string | Creator UUID |
| `total` | number | Total number of collections returned |
### `hex_list_data_connections`
List all data connections in the Hex workspace (e.g., Snowflake, PostgreSQL, BigQuery).
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Hex API token \(Personal or Workspace\) |
| `limit` | number | No | Maximum number of connections to return \(1-500, default: 25\) |
| `sortBy` | string | No | Sort by field: CREATED_AT or NAME |
| `sortDirection` | string | No | Sort direction: ASC or DESC |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `connections` | array | List of data connections |
| ↳ `id` | string | Connection UUID |
| ↳ `name` | string | Connection name |
| ↳ `type` | string | Connection type \(e.g., athena, bigquery, databricks, postgres, redshift, snowflake\) |
| ↳ `description` | string | Connection description |
| ↳ `connectViaSsh` | boolean | Whether SSH tunneling is enabled |
| ↳ `includeMagic` | boolean | Whether Magic AI features are enabled |
| ↳ `allowWritebackCells` | boolean | Whether writeback cells are allowed |
| `total` | number | Total number of connections returned |
### `hex_list_groups`
List all groups in the Hex workspace with optional sorting.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Hex API token \(Personal or Workspace\) |
| `limit` | number | No | Maximum number of groups to return \(1-500, default: 25\) |
| `sortBy` | string | No | Sort by field: CREATED_AT or NAME |
| `sortDirection` | string | No | Sort direction: ASC or DESC |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `groups` | array | List of workspace groups |
| ↳ `id` | string | Group UUID |
| ↳ `name` | string | Group name |
| ↳ `createdAt` | string | Creation timestamp |
| `total` | number | Total number of groups returned |
### `hex_list_projects`
List all projects in your Hex workspace with optional filtering by status.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Hex API token \(Personal or Workspace\) |
| `limit` | number | No | Maximum number of projects to return \(1-100\) |
| `includeArchived` | boolean | No | Include archived projects in results |
| `statusFilter` | string | No | Filter by status: PUBLISHED, DRAFT, or ALL |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `projects` | array | List of Hex projects |
| ↳ `id` | string | Project UUID |
| ↳ `title` | string | Project title |
| ↳ `description` | string | Project description |
| ↳ `status` | object | Project status |
| ↳ `name` | string | Status name \(e.g., PUBLISHED, DRAFT\) |
| ↳ `type` | string | Project type \(PROJECT or COMPONENT\) |
| ↳ `creator` | object | Project creator |
| ↳ `email` | string | Creator email |
| ↳ `owner` | object | Project owner |
| ↳ `email` | string | Owner email |
| ↳ `lastEditedAt` | string | Last edited timestamp |
| ↳ `lastPublishedAt` | string | Last published timestamp |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `archivedAt` | string | Archived timestamp |
| `total` | number | Total number of projects returned |
### `hex_list_users`
List all users in the Hex workspace with optional filtering and sorting.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Hex API token \(Personal or Workspace\) |
| `limit` | number | No | Maximum number of users to return \(1-100, default: 25\) |
| `sortBy` | string | No | Sort by field: NAME or EMAIL |
| `sortDirection` | string | No | Sort direction: ASC or DESC |
| `groupId` | string | No | Filter users by group UUID |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `users` | array | List of workspace users |
| ↳ `id` | string | User UUID |
| ↳ `name` | string | User name |
| ↳ `email` | string | User email |
| ↳ `role` | string | User role \(ADMIN, MANAGER, EDITOR, EXPLORER, MEMBER, GUEST, EMBEDDED_USER, ANONYMOUS\) |
| `total` | number | Total number of users returned |
### `hex_run_project`
Execute a published Hex project. Optionally pass input parameters and control caching behavior.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Hex API token \(Personal or Workspace\) |
| `projectId` | string | Yes | The UUID of the Hex project to run |
| `inputParams` | json | No | JSON object of input parameters for the project \(e.g., \{"date": "2024-01-01"\}\) |
| `dryRun` | boolean | No | If true, perform a dry run without executing the project |
| `updateCache` | boolean | No | \(Deprecated\) If true, update the cached results after execution |
| `updatePublishedResults` | boolean | No | If true, update the published app results after execution |
| `useCachedSqlResults` | boolean | No | If true, use cached SQL results instead of re-running queries |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `projectId` | string | Project UUID |
| `runId` | string | Run UUID |
| `runUrl` | string | URL to view the run |
| `runStatusUrl` | string | URL to check run status |
| `traceId` | string | Trace ID for debugging |
| `projectVersion` | number | Project version number |
### `hex_update_project`
Update a Hex project's status label (e.g., endorsement or custom workspace statuses).
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Hex API token \(Personal or Workspace\) |
| `projectId` | string | Yes | The UUID of the Hex project to update |
| `status` | string | Yes | New project status name \(custom workspace status label\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Project UUID |
| `title` | string | Project title |
| `description` | string | Project description |
| `status` | object | Updated project status |
| ↳ `name` | string | Status name \(e.g., PUBLISHED, DRAFT\) |
| `type` | string | Project type \(PROJECT or COMPONENT\) |
| `creator` | object | Project creator |
| ↳ `email` | string | Creator email |
| `owner` | object | Project owner |
| ↳ `email` | string | Owner email |
| `categories` | array | Project categories |
| ↳ `name` | string | Category name |
| ↳ `description` | string | Category description |
| `lastEditedAt` | string | Last edited timestamp |
| `lastPublishedAt` | string | Last published timestamp |
| `createdAt` | string | Creation timestamp |
| `archivedAt` | string | Archived timestamp |
| `trashedAt` | string | Trashed timestamp |


@@ -11,20 +11,16 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
/>
{/* MANUAL-CONTENT-START:intro */}
[HubSpot](https://www.hubspot.com) is a comprehensive CRM platform that provides a full suite of marketing, sales, and customer service tools to help businesses grow better. With its powerful automation capabilities and extensive API, HubSpot has become one of the world's leading CRM platforms, serving businesses of all sizes across industries.
[HubSpot](https://www.hubspot.com) is a comprehensive CRM platform that provides a full suite of marketing, sales, and customer service tools to help businesses grow. With powerful automation capabilities and an extensive API, HubSpot serves businesses of all sizes across industries.
HubSpot CRM offers a complete solution for managing customer relationships, from initial contact through to long-term customer success. The platform combines contact management, deal tracking, marketing automation, and customer service tools into a unified system that helps teams stay aligned and focused on customer success.
With the HubSpot integration in Sim, you can:
Key features of HubSpot CRM include:
- **Manage contacts**: List, get, create, update, and search contacts in your CRM
- **Manage companies**: List, get, create, update, and search company records
- **Track deals**: List deals in your sales pipeline
- **Access users**: Retrieve user information from your HubSpot account
- Contact & Company Management: Comprehensive database for storing and organizing customer and prospect information
- Deal Pipeline: Visual sales pipeline for tracking opportunities through customizable stages
- Marketing Events: Track and manage marketing campaigns and events with detailed attribution
- Ticket Management: Customer support ticketing system for tracking and resolving customer issues
- Quotes & Line Items: Create and manage sales quotes with detailed product line items
- User & Team Management: Organize teams, assign ownership, and track user activity across the platform
In Sim, the HubSpot integration enables your AI agents to seamlessly interact with your CRM data and automate key business processes. This creates powerful opportunities for intelligent lead qualification, automated contact enrichment, deal management, customer support automation, and data synchronization across your tech stack. The integration allows agents to create, retrieve, update, and search across all major HubSpot objects, enabling sophisticated workflows that can respond to CRM events, maintain data quality, and ensure your team has the most up-to-date customer information. By connecting Sim with HubSpot, you can build AI agents that automatically qualify leads, route support tickets, update deal stages based on customer interactions, generate quotes, and keep your CRM data synchronized with other business systems—ultimately increasing team productivity and improving customer experiences.
In Sim, the HubSpot integration enables your agents to interact with your CRM data as part of automated workflows. Agents can qualify leads, enrich contact records, track deals, and synchronize data across your tech stack—enabling intelligent sales and marketing automation.
{/* MANUAL-CONTENT-END */}


@@ -11,16 +11,13 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
/>
{/* MANUAL-CONTENT-START:intro */}
[HuggingFace](https://huggingface.co/) is a leading AI platform that provides access to thousands of pre-trained machine learning models and powerful inference capabilities. With its extensive model hub and robust API, HuggingFace offers comprehensive tools for both research and production AI applications.
With HuggingFace, you can:
[Hugging Face](https://huggingface.co/) is a leading AI platform that provides access to thousands of pre-trained machine learning models and powerful inference capabilities. With its extensive model hub and robust API, Hugging Face offers comprehensive tools for both research and production AI applications.
Access pre-trained models: Utilize models for text generation, translation, image processing, and more
Generate AI completions: Create content using state-of-the-art language models through the Inference API
Natural language processing: Process and analyze text with specialized NLP models
Deploy at scale: Host and serve models for production applications
Customize models: Fine-tune existing models for specific use cases
With the Hugging Face integration in Sim, you can:
In Sim, the HuggingFace integration enables your agents to programmatically generate completions using the HuggingFace Inference API. This allows for powerful automation scenarios such as content generation, text analysis, code completion, and creative writing. Your agents can generate completions with natural language prompts, access specialized models for different tasks, and integrate AI-generated content into workflows. This integration bridges the gap between your AI workflows and machine learning capabilities, enabling seamless AI-powered automation with one of the world's most comprehensive ML platforms.
- **Generate completions**: Create text content using state-of-the-art language models through the Hugging Face Inference API, with support for custom prompts and model selection
In Sim, the Hugging Face integration enables your agents to generate AI completions as part of automated workflows. This allows for content generation, text analysis, code completion, and creative writing using models from the Hugging Face model hub.
{/* MANUAL-CONTENT-END */}


@@ -11,18 +11,22 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
/>
{/* MANUAL-CONTENT-START:intro */}
[Jira](https://www.atlassian.com/jira) is a leading project management and issue tracking platform that helps teams plan, track, and manage agile software development projects effectively. As part of the Atlassian suite, Jira has become the industry standard for software development teams and project management professionals worldwide.
[Jira](https://www.atlassian.com/jira) is a leading project management and issue tracking platform from Atlassian that helps teams plan, track, and manage agile software development projects. Jira supports Scrum and Kanban methodologies with customizable boards, workflows, and advanced reporting.
Jira provides a comprehensive set of tools for managing complex projects through its flexible and customizable workflow system. With its robust API and integration capabilities, Jira enables teams to streamline their development processes and maintain clear visibility of project progress.
With the Jira integration in Sim, you can:
Key features of Jira include:
- **Manage issues**: Create, retrieve, update, delete, and bulk-read issues in your Jira projects
- **Transition issues**: Move issues through workflow stages programmatically
- **Assign issues**: Set or change issue assignees
- **Search issues**: Use JQL (Jira Query Language) to find and filter issues
- **Manage comments**: Add, retrieve, update, and delete comments on issues
- **Handle attachments**: Upload, retrieve, and delete file attachments on issues
- **Track work**: Add, retrieve, update, and delete worklogs for time tracking
- **Link issues**: Create and delete issue links to establish relationships between issues
- **Manage watchers**: Add or remove watchers from issues
- **Access users**: Retrieve user information from your Jira instance
- Agile Project Management: Support for Scrum and Kanban methodologies with customizable boards and workflows
- Issue Tracking: Sophisticated tracking system for bugs, stories, epics, and tasks with detailed reporting
- Workflow Automation: Powerful automation rules to streamline repetitive tasks and processes
- Advanced Search: JQL (Jira Query Language) for complex issue filtering and reporting
In Sim, the Jira integration allows your agents to seamlessly interact with your project management workflow. This creates opportunities for automated issue creation, updates, and tracking as part of your AI workflows. The integration enables agents to create, retrieve, and update Jira issues programmatically, facilitating automated project management tasks and ensuring that important information is properly tracked and documented. By connecting Sim with Jira, you can build intelligent agents that maintain project visibility while automating routine project management tasks, enhancing team productivity and ensuring consistent project tracking.
In Sim, the Jira integration enables your agents to interact with your project management workflow as part of automated processes. Agents can create issues from external triggers, update statuses, track progress, and manage project data—enabling intelligent project management automation.
{/* MANUAL-CONTENT-END */}


@@ -11,19 +11,15 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
/>
{/* MANUAL-CONTENT-START:intro */}
Sim's Knowledge Base is a powerful native feature that enables you to create, manage, and query custom knowledge bases directly within the platform. Using advanced AI embeddings and vector search technology, the Knowledge Base block allows you to build intelligent search capabilities into your workflows, making it easy to find and utilize relevant information across your organization.
Sim's Knowledge Base is a native feature that enables you to create, manage, and query custom knowledge bases directly within the platform. Using advanced AI embeddings and vector search, the Knowledge Base block allows you to build intelligent search capabilities into your workflows.
The Knowledge Base system provides a comprehensive solution for managing organizational knowledge through its flexible and scalable architecture. With its built-in vector search capabilities, teams can perform semantic searches that understand meaning and context, going beyond traditional keyword matching.
With the Knowledge Base in Sim, you can:
Key features of the Knowledge Base include:
- **Search knowledge**: Perform semantic searches across your custom knowledge bases using AI-powered vector similarity matching
- **Upload chunks**: Add text chunks with metadata to a knowledge base for indexing
- **Create documents**: Add new documents to a knowledge base for searchable content
- Semantic Search: Advanced AI-powered search that understands meaning and context, not just keywords
- Vector Embeddings: Automatic conversion of text into high-dimensional vectors for intelligent similarity matching
- Custom Knowledge Bases: Create and manage multiple knowledge bases for different purposes or departments
- Flexible Content Types: Support for various document formats and content types
- Real-time Updates: Immediate indexing of new content for instant searchability
In Sim, the Knowledge Base block enables your agents to perform intelligent semantic searches across your custom knowledge bases. This creates opportunities for automated information retrieval, content recommendations, and knowledge discovery as part of your AI workflows. The integration allows agents to search and retrieve relevant information programmatically, facilitating automated knowledge management tasks and ensuring that important information is easily accessible. By leveraging the Knowledge Base block, you can build intelligent agents that enhance information discovery while automating routine knowledge management tasks, improving team efficiency and ensuring consistent access to organizational knowledge.
In Sim, the Knowledge Base block enables your agents to perform intelligent semantic searches across your organizational knowledge as part of automated workflows. This is ideal for information retrieval, content recommendations, FAQ automation, and grounding agent responses in your own data.
{/* MANUAL-CONTENT-END */}


@@ -11,18 +11,21 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
/>
{/* MANUAL-CONTENT-START:intro */}
[Linear](https://linear.app) is a leading project management and issue tracking platform that helps teams plan, track, and manage their work effectively. As a modern project management tool, Linear has become increasingly popular among software development teams and project management professionals for its streamlined interface and powerful features.
[Linear](https://linear.app) is a modern project management and issue tracking platform that helps teams plan, track, and manage their work with a streamlined interface. Linear supports agile methodologies with customizable workflows, cycles, and project milestones.
Linear provides a comprehensive set of tools for managing complex projects through its flexible and customizable workflow system. With its robust API and integration capabilities, Linear enables teams to streamline their development processes and maintain clear visibility of project progress.
With the Linear integration in Sim, you can:
Key features of Linear include:
- **Manage issues**: Create, read, update, search, archive, unarchive, and delete issues
- **Manage labels**: Add or remove labels from issues, and create, update, or archive labels
- **Comment on issues**: Create, update, delete, and list comments on issues
- **Manage projects**: List, get, create, update, archive, and delete projects with milestones, labels, and statuses
- **Track cycles**: List, get, and create cycles, and retrieve the active cycle
- **Handle attachments**: Create, list, update, and delete attachments on issues
- **Manage issue relations**: Create, list, and delete relationships between issues
- **Access team data**: List users, teams, workflow states, notifications, and favorites
- **Manage customers**: Create, update, delete, list, and merge customers with statuses, tiers, and requests
- Agile Project Management: Support for Scrum and Kanban methodologies with customizable boards and workflows
- Issue Tracking: Sophisticated tracking system for bugs, stories, epics, and tasks with detailed reporting
- Workflow Automation: Powerful automation rules to streamline repetitive tasks and processes
- Advanced Search: Complex filtering and reporting capabilities for efficient issue management
In Sim, the Linear integration allows your agents to seamlessly interact with your project management workflow. This creates opportunities for automated issue creation, updates, and tracking as part of your AI workflows. The integration enables agents to read existing issues and create new ones programmatically, facilitating automated project management tasks and ensuring that important information is properly tracked and documented. By connecting Sim with Linear, you can build intelligent agents that maintain project visibility while automating routine project management tasks, enhancing team productivity and ensuring consistent project tracking.
In Sim, the Linear integration enables your agents to interact with your project management workflow as part of automated processes. Agents can create issues from external triggers, update statuses, manage projects and cycles, and synchronize data—enabling intelligent project management automation at scale.
{/* MANUAL-CONTENT-END */}


@@ -0,0 +1,273 @@
---
title: Loops
description: Manage contacts and send emails with Loops
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="loops"
color="#FAFAF9"
/>
{/* MANUAL-CONTENT-START:intro */}
[Loops](https://loops.so/) is an email platform built for modern SaaS companies, offering transactional emails, marketing campaigns, and event-driven automations through a clean API. This integration connects Loops directly into Sim workflows.
With Loops in Sim, you can:
- **Manage contacts**: Create, update, find, and delete contacts in your Loops audience
- **Send transactional emails**: Trigger templated transactional emails with dynamic data variables
- **Fire events**: Send events to Loops to trigger automated email sequences and workflows
- **Manage subscriptions**: Control mailing list subscriptions and contact properties programmatically
- **Enrich contact data**: Attach custom properties, user groups, and mailing list memberships to contacts
In Sim, the Loops integration enables your agents to manage email operations as part of their workflows. Supported operations include:
- **Create Contact**: Add a new contact to your Loops audience with email, name, and custom properties.
- **Update Contact**: Update an existing contact or create one if no match exists (upsert behavior).
- **Find Contact**: Look up a contact by email address or userId.
- **Delete Contact**: Remove a contact from your audience.
- **Send Transactional Email**: Send a templated transactional email to a recipient with dynamic data variables.
- **Send Event**: Trigger a Loops event to start automated email sequences for a contact.
Configure the Loops block with your API key from the Loops dashboard (Settings > API), select an operation, and provide the required parameters. Your agents can then manage contacts and send emails as part of any workflow.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Integrate Loops into the workflow. Create and manage contacts, send transactional emails, and trigger event-based automations.
## Tools
### `loops_create_contact`
Create a new contact in your Loops audience with an email address and optional properties like name, user group, and mailing list subscriptions.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Loops API key for authentication |
| `email` | string | Yes | The email address for the new contact |
| `firstName` | string | No | The contact first name |
| `lastName` | string | No | The contact last name |
| `source` | string | No | Custom source value replacing the default "API" |
| `subscribed` | boolean | No | Whether the contact receives campaign emails \(defaults to true\) |
| `userGroup` | string | No | Group to segment the contact into \(one group per contact\) |
| `userId` | string | No | Unique user identifier from your application |
| `mailingLists` | json | No | Mailing list IDs mapped to boolean values \(true to subscribe, false to unsubscribe\) |
| `customProperties` | json | No | Custom contact properties as key-value pairs \(string, number, boolean, or date values\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Whether the contact was created successfully |
| `id` | string | The Loops-assigned ID of the created contact |
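The request body for this tool can be sketched from the table above. Note one detail worth calling out: Loops merges custom properties into the top level of the JSON body alongside the reserved fields, rather than nesting them under a `customProperties` key (the field names below come from the input table; `buildCreateContactBody` itself is an illustrative helper, not part of the tool's API):

```typescript
// Sketch: assemble a create-contact body from the inputs documented above.
interface CreateContactInput {
  email: string;
  firstName?: string;
  lastName?: string;
  source?: string;
  subscribed?: boolean;
  userGroup?: string;
  userId?: string;
  mailingLists?: Record<string, boolean>;
  customProperties?: Record<string, string | number | boolean>;
}

function buildCreateContactBody(input: CreateContactInput): Record<string, unknown> {
  const { customProperties, ...reserved } = input;
  // Custom properties sit at the top level of the body, next to reserved fields.
  const body: Record<string, unknown> = { ...customProperties, ...reserved };
  // Drop undefined optional fields so they are omitted from the JSON payload.
  for (const key of Object.keys(body)) {
    if (body[key] === undefined) delete body[key];
  }
  return body;
}
```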
### `loops_update_contact`
Update an existing contact in Loops by email or userId. Creates a new contact if no match is found (upsert). Can update name, subscription status, user group, mailing lists, and custom properties.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Loops API key for authentication |
| `email` | string | No | The contact email address \(at least one of email or userId is required\) |
| `userId` | string | No | The contact userId \(at least one of email or userId is required\) |
| `firstName` | string | No | The contact first name |
| `lastName` | string | No | The contact last name |
| `source` | string | No | Custom source value replacing the default "API" |
| `subscribed` | boolean | No | Whether the contact receives campaign emails \(sending true re-subscribes unsubscribed contacts\) |
| `userGroup` | string | No | Group to segment the contact into \(one group per contact\) |
| `mailingLists` | json | No | Mailing list IDs mapped to boolean values \(true to subscribe, false to unsubscribe\) |
| `customProperties` | json | No | Custom contact properties as key-value pairs \(send null to reset a property\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Whether the contact was updated successfully |
| `id` | string | The Loops-assigned ID of the updated or created contact |
### `loops_find_contact`
Find a contact in Loops by email address or userId. Returns an array of matching contacts with all their properties including name, subscription status, user group, and mailing lists.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Loops API key for authentication |
| `email` | string | No | The contact email address to search for \(at least one of email or userId is required\) |
| `userId` | string | No | The contact userId to search for \(at least one of email or userId is required\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `contacts` | array | Array of matching contact objects \(empty array if no match found\) |
| ↳ `id` | string | Loops-assigned contact ID |
| ↳ `email` | string | Contact email address |
| ↳ `firstName` | string | Contact first name |
| ↳ `lastName` | string | Contact last name |
| ↳ `source` | string | Source the contact was created from |
| ↳ `subscribed` | boolean | Whether the contact receives campaign emails |
| ↳ `userGroup` | string | Contact user group |
| ↳ `userId` | string | External user identifier |
| ↳ `mailingLists` | object | Mailing list IDs mapped to subscription status |
| ↳ `optInStatus` | string | Double opt-in status: pending, accepted, rejected, or null |
### `loops_delete_contact`
Delete a contact from Loops by email address or userId. At least one identifier must be provided.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Loops API key for authentication |
| `email` | string | No | The email address of the contact to delete \(at least one of email or userId is required\) |
| `userId` | string | No | The userId of the contact to delete \(at least one of email or userId is required\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Whether the contact was deleted successfully |
| `message` | string | Status message from the API |
### `loops_send_transactional_email`
Send a transactional email to a recipient using a Loops template. Supports dynamic data variables for personalization and optionally adds the recipient to your audience.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Loops API key for authentication |
| `email` | string | Yes | The email address of the recipient |
| `transactionalId` | string | Yes | The ID of the transactional email template to send |
| `dataVariables` | json | No | Template data variables as key-value pairs \(string or number values\) |
| `addToAudience` | boolean | No | Whether to create the recipient as a contact if they do not already exist \(default: false\) |
| `attachments` | json | No | Array of file attachments. Each object must have filename \(string\), contentType \(MIME type string\), and data \(base64-encoded string\). |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Whether the transactional email was sent successfully |
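The attachment requirements above (filename, MIME type, base64 data) lend themselves to a pre-flight check before the request is sent. A minimal sketch, with `buildTransactionalBody` as a hypothetical helper and the base64 regex as an assumption about what counts as valid data:

```typescript
// Sketch: validate and assemble a transactional-email body from the inputs above.
interface Attachment {
  filename: string;
  contentType: string; // MIME type, e.g. "application/pdf"
  data: string; // base64-encoded file contents
}

function buildTransactionalBody(opts: {
  email: string;
  transactionalId: string;
  dataVariables?: Record<string, string | number>;
  addToAudience?: boolean;
  attachments?: Attachment[];
}) {
  for (const a of opts.attachments ?? []) {
    // Reject payloads that clearly are not base64 before hitting the API.
    if (!/^[A-Za-z0-9+/]+={0,2}$/.test(a.data)) {
      throw new Error(`attachment ${a.filename} is not base64-encoded`);
    }
  }
  return {
    email: opts.email,
    transactionalId: opts.transactionalId,
    dataVariables: opts.dataVariables ?? {},
    addToAudience: opts.addToAudience ?? false,
    attachments: opts.attachments ?? [],
  };
}
```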
### `loops_send_event`
Send an event to Loops to trigger automated email sequences for a contact. Identify the contact by email or userId and include optional event properties and mailing list changes.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Loops API key for authentication |
| `email` | string | No | The email address of the contact \(at least one of email or userId is required\) |
| `userId` | string | No | The userId of the contact \(at least one of email or userId is required\) |
| `eventName` | string | Yes | The name of the event to trigger |
| `eventProperties` | json | No | Event data as key-value pairs \(string, number, boolean, or date values\) |
| `mailingLists` | json | No | Mailing list IDs mapped to boolean values \(true to subscribe, false to unsubscribe\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Whether the event was sent successfully |
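Several Loops tools share the "at least one of email or userId" rule from the table above; enforcing it client-side gives a clearer error than a failed API call. A minimal sketch (`buildEventBody` is illustrative, not part of the tool's API):

```typescript
// Sketch: enforce the email-or-userId requirement for send-event inputs.
function buildEventBody(opts: {
  email?: string;
  userId?: string;
  eventName: string;
  eventProperties?: Record<string, string | number | boolean>;
  mailingLists?: Record<string, boolean>;
}) {
  if (!opts.email && !opts.userId) {
    throw new Error("Provide at least one of email or userId");
  }
  return { ...opts, eventProperties: opts.eventProperties ?? {} };
}
```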
### `loops_list_mailing_lists`
Retrieve all mailing lists from your Loops account. Returns each list with its ID, name, description, and public/private status.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Loops API key for authentication |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `mailingLists` | array | Array of mailing list objects |
| ↳ `id` | string | The mailing list ID |
| ↳ `name` | string | The mailing list name |
| ↳ `description` | string | The mailing list description \(null if not set\) |
| ↳ `isPublic` | boolean | Whether the list is public or private |
### `loops_list_transactional_emails`
Retrieve a list of published transactional email templates from your Loops account. Returns each template with its ID, name, last updated timestamp, and data variables.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Loops API key for authentication |
| `perPage` | string | No | Number of results per page \(10-50, default: 20\) |
| `cursor` | string | No | Pagination cursor from a previous response to fetch the next page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `transactionalEmails` | array | Array of published transactional email templates |
| ↳ `id` | string | The transactional email template ID |
| ↳ `name` | string | The template name |
| ↳ `lastUpdated` | string | Last updated timestamp |
| ↳ `dataVariables` | array | Template data variable names |
| `pagination` | object | Pagination information |
| ↳ `totalResults` | number | Total number of results |
| ↳ `returnedResults` | number | Number of results returned |
| ↳ `perPage` | number | Results per page |
| ↳ `totalPages` | number | Total number of pages |
| ↳ `nextCursor` | string | Cursor for next page \(null if no more pages\) |
| ↳ `nextPage` | string | URL for next page \(null if no more pages\) |
### `loops_create_contact_property`
Create a new custom contact property in your Loops account. The property name must be in camelCase format.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Loops API key for authentication |
| `name` | string | Yes | The property name in camelCase format \(e.g., "favoriteColor"\) |
| `type` | string | Yes | The property data type \(e.g., "string", "number", "boolean", "date"\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Whether the contact property was created successfully |
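The camelCase requirement and the type values above can be checked before calling the API. A small sketch, assuming camelCase means a lowercase first letter followed by letters and digits (the exact rule Loops applies may differ):

```typescript
// Sketch: validate contact-property inputs per the requirements above.
const PROPERTY_TYPES = ["string", "number", "boolean", "date"] as const;

function isValidPropertyName(name: string): boolean {
  // Starts lowercase, then letters/digits only (e.g. "favoriteColor").
  return /^[a-z][a-zA-Z0-9]*$/.test(name);
}

function isValidPropertyType(type: string): boolean {
  return (PROPERTY_TYPES as readonly string[]).includes(type);
}
```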
### `loops_list_contact_properties`
Retrieve a list of contact properties from your Loops account. Returns each property with its key, label, and data type. Can filter to show all properties or only custom ones.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Loops API key for authentication |
| `list` | string | No | Filter type: "all" for all properties \(default\) or "custom" for custom properties only |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `properties` | array | Array of contact property objects |
| ↳ `key` | string | The property key \(camelCase identifier\) |
| ↳ `label` | string | The property display label |
| ↳ `type` | string | The property data type \(string, number, boolean, date\) |

View File

@@ -0,0 +1,284 @@
---
title: Luma
description: Manage events and guests on Luma
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="luma"
color="#FFFFFF"
/>
{/* MANUAL-CONTENT-START:intro */}
[Luma](https://lu.ma/) is an event management platform that makes it easy to create, manage, and share events with your community.
With Luma integrated into Sim, your agents can:
- **Create events**: Set up new events with name, time, timezone, description, and visibility settings.
- **Update events**: Modify existing event details like name, time, description, and visibility.
- **Get event details**: Retrieve full details for any event by its ID.
- **List calendar events**: Browse your calendar's events with date range filtering and pagination.
- **Manage guest lists**: View attendees for an event, filtered by approval status.
- **Add guests**: Invite new guests to events programmatically.
By connecting Sim with Luma, you can automate event operations within your agent workflows. Automatically create events based on triggers, sync guest lists, monitor registrations, and manage your event calendar—all handled directly by your agents via the Luma API.
Whether you're running community meetups, conferences, or internal team events, the Luma tool makes it easy to coordinate event management within your Sim workflows.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Integrate Luma into the workflow. Create events, update events, get event details, list calendar events, retrieve guest lists, and add guests to events.
## Tools
### `luma_get_event`
Retrieve details of a Luma event including name, time, location, hosts, and visibility settings.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Luma API key |
| `eventId` | string | Yes | Event ID \(starts with evt-\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `event` | object | Event details |
| ↳ `id` | string | Event ID |
| ↳ `name` | string | Event name |
| ↳ `startAt` | string | Event start time \(ISO 8601\) |
| ↳ `endAt` | string | Event end time \(ISO 8601\) |
| ↳ `timezone` | string | Event timezone \(IANA\) |
| ↳ `durationInterval` | string | Event duration \(ISO 8601 interval, e.g. PT2H\) |
| ↳ `createdAt` | string | Event creation timestamp \(ISO 8601\) |
| ↳ `description` | string | Event description \(plain text\) |
| ↳ `descriptionMd` | string | Event description \(Markdown\) |
| ↳ `coverUrl` | string | Event cover image URL |
| ↳ `url` | string | Event page URL on lu.ma |
| ↳ `visibility` | string | Event visibility \(public, members-only, private\) |
| ↳ `meetingUrl` | string | Virtual meeting URL |
| ↳ `geoAddressJson` | json | Structured location/address data |
| ↳ `geoLatitude` | string | Venue latitude coordinate |
| ↳ `geoLongitude` | string | Venue longitude coordinate |
| ↳ `calendarId` | string | Associated calendar ID |
| `hosts` | array | Event hosts |
| ↳ `id` | string | Host ID |
| ↳ `name` | string | Host display name |
| ↳ `firstName` | string | Host first name |
| ↳ `lastName` | string | Host last name |
| ↳ `email` | string | Host email address |
| ↳ `avatarUrl` | string | Host avatar image URL |
### `luma_create_event`
Create a new event on Luma with a name, start time, timezone, and optional details like description, location, and visibility.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Luma API key |
| `name` | string | Yes | Event name/title |
| `startAt` | string | Yes | Event start time in ISO 8601 format \(e.g., 2025-03-15T18:00:00Z\) |
| `timezone` | string | Yes | IANA timezone \(e.g., America/New_York, Europe/London\) |
| `endAt` | string | No | Event end time in ISO 8601 format \(e.g., 2025-03-15T20:00:00Z\) |
| `durationInterval` | string | No | Event duration as ISO 8601 interval \(e.g., PT2H for 2 hours, PT30M for 30 minutes\). Used if endAt is not provided. |
| `descriptionMd` | string | No | Event description in Markdown format |
| `meetingUrl` | string | No | Virtual meeting URL for online events \(e.g., Zoom, Google Meet link\) |
| `visibility` | string | No | Event visibility: public, members-only, or private \(defaults to public\) |
| `coverUrl` | string | No | Cover image URL \(must be a Luma CDN URL from images.lumacdn.com\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `event` | object | Created event details |
| ↳ `id` | string | Event ID |
| ↳ `name` | string | Event name |
| ↳ `startAt` | string | Event start time \(ISO 8601\) |
| ↳ `endAt` | string | Event end time \(ISO 8601\) |
| ↳ `timezone` | string | Event timezone \(IANA\) |
| ↳ `durationInterval` | string | Event duration \(ISO 8601 interval, e.g. PT2H\) |
| ↳ `createdAt` | string | Event creation timestamp \(ISO 8601\) |
| ↳ `description` | string | Event description \(plain text\) |
| ↳ `descriptionMd` | string | Event description \(Markdown\) |
| ↳ `coverUrl` | string | Event cover image URL |
| ↳ `url` | string | Event page URL on lu.ma |
| ↳ `visibility` | string | Event visibility \(public, members-only, private\) |
| ↳ `meetingUrl` | string | Virtual meeting URL |
| ↳ `geoAddressJson` | json | Structured location/address data |
| ↳ `geoLatitude` | string | Venue latitude coordinate |
| ↳ `geoLongitude` | string | Venue longitude coordinate |
| ↳ `calendarId` | string | Associated calendar ID |
| `hosts` | array | Event hosts |
| ↳ `id` | string | Host ID |
| ↳ `name` | string | Host display name |
| ↳ `firstName` | string | Host first name |
| ↳ `lastName` | string | Host last name |
| ↳ `email` | string | Host email address |
| ↳ `avatarUrl` | string | Host avatar image URL |
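Per the input table, `durationInterval` is only used when `endAt` is absent. That precedence can be sketched as below; the interval parser handles only the `PT..H`/`PT..M` forms given as examples, and `resolveEndAt` is an illustrative helper, not part of the tool's API:

```typescript
// Sketch: resolve the effective end time from the create-event inputs above.
function resolveEndAt(
  startAt: string,
  endAt?: string,
  durationInterval?: string
): string | undefined {
  if (endAt) return endAt; // an explicit end time wins over the interval
  if (!durationInterval) return undefined;
  // Parse simple ISO 8601 intervals like PT2H or PT30M.
  const m = /^PT(?:(\d+)H)?(?:(\d+)M)?$/.exec(durationInterval);
  if (!m) throw new Error(`unsupported interval: ${durationInterval}`);
  const ms = (Number(m[1] ?? 0) * 60 + Number(m[2] ?? 0)) * 60_000;
  return new Date(Date.parse(startAt) + ms).toISOString();
}
```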
### `luma_update_event`
Update an existing Luma event. Only the fields you provide will be changed; all other fields remain unchanged.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Luma API key |
| `eventId` | string | Yes | Event ID to update \(starts with evt-\) |
| `name` | string | No | New event name/title |
| `startAt` | string | No | New start time in ISO 8601 format \(e.g., 2025-03-15T18:00:00Z\) |
| `timezone` | string | No | New IANA timezone \(e.g., America/New_York, Europe/London\) |
| `endAt` | string | No | New end time in ISO 8601 format \(e.g., 2025-03-15T20:00:00Z\) |
| `durationInterval` | string | No | New duration as ISO 8601 interval \(e.g., PT2H for 2 hours\). Used if endAt is not provided. |
| `descriptionMd` | string | No | New event description in Markdown format |
| `meetingUrl` | string | No | New virtual meeting URL \(e.g., Zoom, Google Meet link\) |
| `visibility` | string | No | New visibility: public, members-only, or private |
| `coverUrl` | string | No | New cover image URL \(must be a Luma CDN URL from images.lumacdn.com\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `event` | object | Updated event details |
| ↳ `id` | string | Event ID |
| ↳ `name` | string | Event name |
| ↳ `startAt` | string | Event start time \(ISO 8601\) |
| ↳ `endAt` | string | Event end time \(ISO 8601\) |
| ↳ `timezone` | string | Event timezone \(IANA\) |
| ↳ `durationInterval` | string | Event duration \(ISO 8601 interval, e.g. PT2H\) |
| ↳ `createdAt` | string | Event creation timestamp \(ISO 8601\) |
| ↳ `description` | string | Event description \(plain text\) |
| ↳ `descriptionMd` | string | Event description \(Markdown\) |
| ↳ `coverUrl` | string | Event cover image URL |
| ↳ `url` | string | Event page URL on lu.ma |
| ↳ `visibility` | string | Event visibility \(public, members-only, private\) |
| ↳ `meetingUrl` | string | Virtual meeting URL |
| ↳ `geoAddressJson` | json | Structured location/address data |
| ↳ `geoLatitude` | string | Venue latitude coordinate |
| ↳ `geoLongitude` | string | Venue longitude coordinate |
| ↳ `calendarId` | string | Associated calendar ID |
| `hosts` | array | Event hosts |
| ↳ `id` | string | Host ID |
| ↳ `name` | string | Host display name |
| ↳ `firstName` | string | Host first name |
| ↳ `lastName` | string | Host last name |
| ↳ `email` | string | Host email address |
| ↳ `avatarUrl` | string | Host avatar image URL |
### `luma_list_events`
List events from your Luma calendar with optional date range filtering, sorting, and pagination.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Luma API key |
| `after` | string | No | Return events after this ISO 8601 datetime \(e.g., 2025-01-01T00:00:00Z\) |
| `before` | string | No | Return events before this ISO 8601 datetime \(e.g., 2025-12-31T23:59:59Z\) |
| `paginationLimit` | number | No | Maximum number of events to return per page |
| `paginationCursor` | string | No | Pagination cursor from a previous response \(next_cursor\) to fetch the next page of results |
| `sortColumn` | string | No | Column to sort by \(only start_at is supported\) |
| `sortDirection` | string | No | Sort direction: asc, desc, asc nulls last, or desc nulls last |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `events` | array | List of calendar events |
| ↳ `id` | string | Event ID |
| ↳ `name` | string | Event name |
| ↳ `startAt` | string | Event start time \(ISO 8601\) |
| ↳ `endAt` | string | Event end time \(ISO 8601\) |
| ↳ `timezone` | string | Event timezone \(IANA\) |
| ↳ `durationInterval` | string | Event duration \(ISO 8601 interval, e.g. PT2H\) |
| ↳ `createdAt` | string | Event creation timestamp \(ISO 8601\) |
| ↳ `description` | string | Event description \(plain text\) |
| ↳ `descriptionMd` | string | Event description \(Markdown\) |
| ↳ `coverUrl` | string | Event cover image URL |
| ↳ `url` | string | Event page URL on lu.ma |
| ↳ `visibility` | string | Event visibility \(public, members-only, private\) |
| ↳ `meetingUrl` | string | Virtual meeting URL |
| ↳ `geoAddressJson` | json | Structured location/address data |
| ↳ `geoLatitude` | string | Venue latitude coordinate |
| ↳ `geoLongitude` | string | Venue longitude coordinate |
| ↳ `calendarId` | string | Associated calendar ID |
| `hasMore` | boolean | Whether more results are available for pagination |
| `nextCursor` | string | Cursor to pass as paginationCursor to fetch the next page |
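The `hasMore`/`nextCursor` pair above supports a standard cursor-pagination loop. A minimal sketch, where `listEvents` stands in for one call to this tool (a hypothetical wrapper, not a real function in Sim):

```typescript
// Sketch: page through luma_list_events results using hasMore/nextCursor.
interface Page<T> {
  events: T[];
  hasMore: boolean;
  nextCursor?: string;
}

async function listAllEvents<T>(
  listEvents: (cursor?: string) => Promise<Page<T>>
): Promise<T[]> {
  const all: T[] = [];
  let cursor: string | undefined;
  do {
    // Pass the previous page's nextCursor as paginationCursor.
    const page = await listEvents(cursor);
    all.push(...page.events);
    cursor = page.hasMore ? page.nextCursor : undefined;
  } while (cursor);
  return all;
}
```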
### `luma_get_guests`
Retrieve the guest list for a Luma event with optional filtering by approval status, sorting, and pagination.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Luma API key |
| `eventId` | string | Yes | Event ID \(starts with evt-\) |
| `approvalStatus` | string | No | Filter by approval status: approved, session, pending_approval, invited, declined, or waitlist |
| `paginationLimit` | number | No | Maximum number of guests to return per page |
| `paginationCursor` | string | No | Pagination cursor from a previous response \(next_cursor\) to fetch the next page of results |
| `sortColumn` | string | No | Column to sort by: name, email, created_at, registered_at, or checked_in_at |
| `sortDirection` | string | No | Sort direction: asc, desc, asc nulls last, or desc nulls last |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `guests` | array | List of event guests |
| ↳ `id` | string | Guest ID |
| ↳ `email` | string | Guest email address |
| ↳ `name` | string | Guest full name |
| ↳ `firstName` | string | Guest first name |
| ↳ `lastName` | string | Guest last name |
| ↳ `approvalStatus` | string | Guest approval status \(approved, session, pending_approval, invited, declined, waitlist\) |
| ↳ `registeredAt` | string | Registration timestamp \(ISO 8601\) |
| ↳ `invitedAt` | string | Invitation timestamp \(ISO 8601\) |
| ↳ `joinedAt` | string | Join timestamp \(ISO 8601\) |
| ↳ `checkedInAt` | string | Check-in timestamp \(ISO 8601\) |
| ↳ `phoneNumber` | string | Guest phone number |
| `hasMore` | boolean | Whether more results are available for pagination |
| `nextCursor` | string | Cursor to pass as paginationCursor to fetch the next page |
### `luma_add_guests`
Add guests to a Luma event by email. Guests are added with Going (approved) status and receive one ticket of the default ticket type.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Luma API key |
| `eventId` | string | Yes | Event ID \(starts with evt-\) |
| `guests` | string | Yes | JSON array of guest objects. Each guest requires an "email" field and optionally "name", "first_name", "last_name". Example: \[\{"email": "user@example.com", "name": "John Doe"\}\] |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `guests` | array | List of added guests with their assigned status and ticket info |
| ↳ `id` | string | Guest ID |
| ↳ `email` | string | Guest email address |
| ↳ `name` | string | Guest full name |
| ↳ `firstName` | string | Guest first name |
| ↳ `lastName` | string | Guest last name |
| ↳ `approvalStatus` | string | Guest approval status \(approved, session, pending_approval, invited, declined, waitlist\) |
| ↳ `registeredAt` | string | Registration timestamp \(ISO 8601\) |
| ↳ `invitedAt` | string | Invitation timestamp \(ISO 8601\) |
| ↳ `joinedAt` | string | Join timestamp \(ISO 8601\) |
| ↳ `checkedInAt` | string | Check-in timestamp \(ISO 8601\) |
| ↳ `phoneNumber` | string | Guest phone number |
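Since `guests` is a JSON string rather than a structured input, validating it before the call avoids opaque API errors. A minimal sketch of the shape described above (`parseGuests` is illustrative; the `@`-check is a loose sanity test, not full email validation):

```typescript
// Sketch: parse and validate the guests JSON string described above.
interface GuestInput {
  email: string;
  name?: string;
  first_name?: string;
  last_name?: string;
}

function parseGuests(json: string): GuestInput[] {
  const parsed = JSON.parse(json);
  if (!Array.isArray(parsed)) throw new Error("guests must be a JSON array");
  for (const g of parsed) {
    // Every guest object requires an email field.
    if (typeof g?.email !== "string" || !g.email.includes("@")) {
      throw new Error("each guest requires a valid email field");
    }
  }
  return parsed;
}
```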


@@ -10,6 +10,8 @@
"apollo",
"arxiv",
"asana",
"ashby",
"attio",
"browser_use",
"calcom",
"calendly",
@@ -19,7 +21,9 @@
"cloudflare",
"confluence",
"cursor",
"databricks",
"datadog",
"devin",
"discord",
"dropbox",
"dspy",
@@ -32,11 +36,15 @@
"file",
"firecrawl",
"fireflies",
"gamma",
"github",
"gitlab",
"gmail",
"gong",
"google_bigquery",
"google_books",
"google_calendar",
"google_contacts",
"google_docs",
"google_drive",
"google_forms",
@@ -45,10 +53,14 @@
"google_search",
"google_sheets",
"google_slides",
"google_tasks",
"google_translate",
"google_vault",
"grafana",
"grain",
"greenhouse",
"greptile",
"hex",
"hubspot",
"huggingface",
"hunter",
@@ -66,6 +78,8 @@
"linear",
"linkedin",
"linkup",
"loops",
"luma",
"mailchimp",
"mailgun",
"mem0",
@@ -108,6 +122,7 @@
"sftp",
"sharepoint",
"shopify",
"short_io",
"similarweb",
"slack",
"smtp",


@@ -11,19 +11,19 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
/>
{/* MANUAL-CONTENT-START:intro */}
[Pipedrive](https://www.pipedrive.com) is a sales-focused CRM platform designed to help sales teams manage leads, track deals, and optimize their sales pipeline. Built with simplicity and effectiveness in mind, Pipedrive provides intuitive visual pipeline management and actionable sales insights.
Pipedrive provides a comprehensive suite of tools for managing the entire sales process from lead capture to deal closure. With its robust API and extensive integration capabilities, Pipedrive enables sales teams to automate repetitive tasks, maintain data consistency, and focus on what matters most—closing deals.
With the Pipedrive integration in Sim, you can:
- **Manage deals**: List, get, create, and update deals in your sales pipeline
- **Manage leads**: Get, create, update, and delete leads for prospect tracking
- **Track activities**: Get, create, and update sales activities such as calls, meetings, and tasks
- **Manage projects**: List and create projects for post-sale delivery tracking
- **Access pipelines**: Get pipeline configurations and list deals within specific pipelines
- **Retrieve files**: Access files attached to deals, contacts, or other records
- **Access email**: Get mail messages and threads linked to CRM records
In Sim, the Pipedrive integration enables your agents to interact with your sales workflow as part of automated processes. Agents can qualify leads, manage deals through pipeline stages, schedule activities, and keep CRM data synchronized—enabling intelligent sales automation.
{/* MANUAL-CONTENT-END */}


@@ -1,6 +1,6 @@
---
title: Resend
description: Send emails and manage contacts with Resend.
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
@@ -27,7 +27,7 @@ In Sim, the Resend integration allows your agents to programmatically send email
## Usage Instructions
Integrate Resend into your workflow. Send emails, retrieve email status, manage contacts, and view domains. Requires API Key.
@@ -46,6 +46,11 @@ Send an email using your own Resend API key and from address
| `subject` | string | Yes | Email subject line |
| `body` | string | Yes | Email body content \(plain text or HTML based on contentType\) |
| `contentType` | string | No | Content type for the email body: "text" for plain text or "html" for HTML content |
| `cc` | string | No | Carbon copy recipient email address |
| `bcc` | string | No | Blind carbon copy recipient email address |
| `replyTo` | string | No | Reply-to email address |
| `scheduledAt` | string | No | Schedule email to be sent later in ISO 8601 format |
| `tags` | string | No | Comma-separated key:value pairs for email tags \(e.g., "category:welcome,type:onboarding"\) |
| `resendApiKey` | string | Yes | Resend API key for sending emails |
#### Output
@@ -53,8 +58,152 @@ Send an email using your own Resend API key and from address
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Whether the email was sent successfully |
| `id` | string | Email ID returned by Resend |
| `to` | string | Recipient email address |
| `subject` | string | Email subject |
| `body` | string | Email body content |
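The `tags` input above is a comma-separated string of `key:value` pairs, while Resend's API takes tags as an array of name/value objects, so the tool must split the string before sending. A minimal sketch of that conversion (`parseTags` is an illustrative helper; the exact object shape is an assumption based on Resend's tag format):

```typescript
// Sketch: convert "category:welcome,type:onboarding" into name/value tag objects.
function parseTags(tags: string): { name: string; value: string }[] {
  return tags
    .split(",")
    .map((pair) => pair.trim())
    .filter(Boolean) // ignore empty segments from trailing commas
    .map((pair) => {
      // Only the first colon separates name from value, so values may contain colons.
      const [name, ...rest] = pair.split(":");
      return { name: name.trim(), value: rest.join(":").trim() };
    });
}
```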
### `resend_get_email`
Retrieve details of a previously sent email by its ID
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `emailId` | string | Yes | The ID of the email to retrieve |
| `resendApiKey` | string | Yes | Resend API key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Email ID |
| `from` | string | Sender email address |
| `to` | json | Recipient email addresses |
| `subject` | string | Email subject |
| `html` | string | HTML email content |
| `text` | string | Plain text email content |
| `cc` | json | CC email addresses |
| `bcc` | json | BCC email addresses |
| `replyTo` | json | Reply-to email addresses |
| `lastEvent` | string | Last event status \(e.g., delivered, bounced\) |
| `createdAt` | string | Email creation timestamp |
| `scheduledAt` | string | Scheduled send timestamp |
| `tags` | json | Email tags as name-value pairs |
### `resend_create_contact`
Create a new contact in Resend
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `email` | string | Yes | Email address of the contact |
| `firstName` | string | No | First name of the contact |
| `lastName` | string | No | Last name of the contact |
| `unsubscribed` | boolean | No | Whether the contact is unsubscribed from all broadcasts |
| `resendApiKey` | string | Yes | Resend API key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Created contact ID |
### `resend_list_contacts`
List all contacts in Resend
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `resendApiKey` | string | Yes | Resend API key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `contacts` | json | Array of contacts with id, email, first_name, last_name, created_at, unsubscribed |
| `hasMore` | boolean | Whether there are more contacts to retrieve |
### `resend_get_contact`
Retrieve details of a contact by ID or email
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `contactId` | string | Yes | The contact ID or email address to retrieve |
| `resendApiKey` | string | Yes | Resend API key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Contact ID |
| `email` | string | Contact email address |
| `firstName` | string | Contact first name |
| `lastName` | string | Contact last name |
| `createdAt` | string | Contact creation timestamp |
| `unsubscribed` | boolean | Whether the contact is unsubscribed |
### `resend_update_contact`
Update an existing contact in Resend
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `contactId` | string | Yes | The contact ID or email address to update |
| `firstName` | string | No | Updated first name |
| `lastName` | string | No | Updated last name |
| `unsubscribed` | boolean | No | Whether the contact should be unsubscribed from all broadcasts |
| `resendApiKey` | string | Yes | Resend API key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Updated contact ID |
### `resend_delete_contact`
Delete a contact from Resend by ID or email
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `contactId` | string | Yes | The contact ID or email address to delete |
| `resendApiKey` | string | Yes | Resend API key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Deleted contact ID |
| `deleted` | boolean | Whether the contact was successfully deleted |
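`resend_get_contact`, `resend_update_contact`, and `resend_delete_contact` all accept either a contact ID or an email address in `contactId`. A small sketch of telling the two apart before routing a request (the heuristic and helper name are ours; the tools accept both forms directly):

```python
def classify_contact_ref(contact_id: str) -> str:
    """Label a contactId value as an email address or an opaque ID."""
    return "email" if "@" in contact_id else "id"

print(classify_contact_ref("jane@example.com"))  # email
print(classify_contact_ref("e169aa45-1ecf-4183-9955-b1499d5701d3"))  # id
```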
### `resend_list_domains`
List all verified domains in your Resend account
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `resendApiKey` | string | Yes | Resend API key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `domains` | json | Array of domains with id, name, status, region, and createdAt |
| `hasMore` | boolean | Whether there are more domains to retrieve |
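Both list tools return `hasMore` alongside the result array. A sketch of draining such a paginated source; `fetch_page` stands in for the tool call and is stubbed here with canned pages, since the docs do not specify the paging parameter:

```python
def collect_all(fetch_page):
    """Call fetch_page(offset) until hasMore is False, accumulating items."""
    items, offset = [], 0
    while True:
        page = fetch_page(offset)
        items.extend(page["contacts"])
        if not page["hasMore"]:
            return items
        offset += len(page["contacts"])

pages = [
    {"contacts": [{"id": "c1"}, {"id": "c2"}], "hasMore": True},
    {"contacts": [{"id": "c3"}], "hasMore": False},
]
result = collect_all(lambda offset: pages[0] if offset == 0 else pages[1])
print([c["id"] for c in result])  # ['c1', 'c2', 'c3']
```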

View File

@@ -0,0 +1,173 @@
---
title: Short.io
description: Create and manage short links, domains, and analytics.
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="short_io"
color="#FFFFFF"
/>
{/* MANUAL-CONTENT-START:intro */}
[Short.io](https://short.io/) is a white-label URL shortener that lets you create branded short links on your own domain, track clicks, and manage links at scale. Short.io is designed for businesses that want professional short URLs, QR codes, and link analytics without relying on generic shorteners.
With Short.io in Sim, you can:
- **Create short links**: Generate branded short URLs from long URLs using your custom domain, with optional custom paths
- **List domains**: Retrieve all Short.io domains on your account to get domain IDs for listing links
- **List links**: List short links for a domain with pagination and optional date sort order
- **Delete links**: Remove a short link by its ID (e.g. lnk_abc123_abcdef)
- **Generate QR codes**: Create QR codes for any Short.io link with optional size, color, background color, and format (PNG or SVG); returns the QR image as a file
- **Get link statistics**: Fetch click analytics for a link including total clicks, human clicks, referrer/country/browser/OS/city breakdowns, UTM dimensions, time-series data, and date interval
These capabilities allow your Sim agents to automate link shortening, QR code generation, and analytics reporting directly in your workflows — from campaign tracking to link management and performance dashboards.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Integrate Short.io to generate branded short links, list domains and links, delete links, generate QR codes, and view link statistics. Requires your Short.io Secret API Key.
## Tools
### `short_io_create_link`
Create a short link using your Short.io custom domain.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Short.io Secret API Key |
| `domain` | string | Yes | Your registered Short.io custom domain |
| `originalURL` | string | Yes | The long URL to shorten |
| `path` | string | No | Optional custom path for the short link |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `shortURL` | string | The generated short link URL |
| `idString` | string | The unique Short.io link ID string |
| `originalURL` | string | The original long URL |
| `path` | string | The path/slug of the short link |
| `createdAt` | string | ISO 8601 creation timestamp |
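A sketch of the request body this tool sends. The `POST https://api.short.io/links` endpoint and `authorization` header follow Short.io's public API; treat the exact shape as an assumption rather than Sim's implementation:

```python
def build_create_link_request(api_key, domain, original_url, path=None):
    body = {"domain": domain, "originalURL": original_url}
    if path:  # only include when a custom slug was requested
        body["path"] = path
    return {
        "method": "POST",
        "url": "https://api.short.io/links",
        "headers": {"authorization": api_key, "content-type": "application/json"},
        "body": body,
    }

req = build_create_link_request("sk_xxx", "go.example.com", "https://example.com/long-page", "promo")
print(req["body"])
```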
### `short_io_list_domains`
List Short.io domains. Returns domain IDs and details for use in List Links.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Short.io Secret API Key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `domains` | array | List of domain objects \(id, hostname, etc.\) |
| `count` | number | Number of domains |
### `short_io_list_links`
List short links for a domain. Requires domain_id (from List Domains or dashboard). Max 150 per request.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Short.io Secret API Key |
| `domainId` | number | Yes | Domain ID \(from List Domains\) |
| `limit` | number | No | Max links to return \(1-150\) |
| `pageToken` | string | No | Pagination token from previous response |
| `dateSortOrder` | string | No | Sort by date: asc or desc |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `links` | array | List of link objects \(idString, shortURL, originalURL, path, etc.\) |
| `count` | number | Number of links returned |
| `nextPageToken` | string | Token for next page |
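Because each page is capped at 150 links, walking a large domain means chaining `nextPageToken` back into `pageToken`. A sketch of that loop; `list_links` stands in for the tool call and is stubbed with canned pages:

```python
def collect_links(list_links):
    """Follow nextPageToken until it is absent, accumulating links."""
    links, token = [], None
    while True:
        page = list_links(page_token=token)
        links.extend(page["links"])
        token = page.get("nextPageToken")
        if not token:
            return links

pages = {
    None: {"links": [{"idString": "lnk_1"}], "nextPageToken": "t2"},
    "t2": {"links": [{"idString": "lnk_2"}], "nextPageToken": None},
}
result = collect_links(lambda page_token: pages[page_token])
print(len(result))  # 2
```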
### `short_io_delete_link`
Delete a short link by ID (e.g. lnk_abc123_abcdef). Rate limit 20/s.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Short.io Secret API Key |
| `linkId` | string | Yes | Link ID to delete |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `deleted` | boolean | Whether the link was deleted |
| `idString` | string | Deleted link ID |
### `short_io_get_qr_code`
Generate a QR code for a Short.io link (POST /links/qr/{linkIdString}).
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Short.io Secret API Key |
| `linkId` | string | Yes | Link ID \(e.g. lnk_abc123_abcdef\) |
| `color` | string | No | QR color hex \(e.g. 000000\) |
| `backgroundColor` | string | No | Background color hex \(e.g. FFFFFF\) |
| `size` | number | No | QR size \(1-99\) |
| `type` | string | No | Output format: png or svg |
| `useDomainSettings` | boolean | No | Use domain settings \(default true\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `file` | file | Generated QR code image file |
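The tool returns the QR image as a file object (`name`, `mimeType`, `data`, `size`). A sketch of consuming it, assuming `data` carries the image bytes base64-encoded, with `size` as a sanity check; the sample file below is fabricated for illustration:

```python
import base64

def decode_qr_file(file_obj: dict) -> bytes:
    """Decode the base64 data field and check it against the size field."""
    raw = base64.b64decode(file_obj["data"])
    assert len(raw) == file_obj["size"], "size should match decoded byte count"
    return raw

payload = b"\x89PNG fake bytes"
sample = {
    "name": "qr.png",
    "mimeType": "image/png",
    "data": base64.b64encode(payload).decode("ascii"),
    "size": len(payload),
}
print(decode_qr_file(sample)[:4])
```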
### `short_io_get_analytics`
Fetch click statistics for a Short.io link (Statistics API: totalClicks, humanClicks, referer, country, etc.).
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Short.io Secret API Key |
| `linkId` | string | Yes | Link ID \(e.g. lnk_abc123_abcdef\) |
| `period` | string | Yes | Period: today, yesterday, last7, last30, total, week, month, lastmonth |
| `tz` | string | No | Timezone \(default UTC\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `totalClicks` | number | Total clicks |
| `humanClicks` | number | Human clicks |
| `totalClicksChange` | string | Change vs previous period |
| `humanClicksChange` | string | Human clicks change |
| `referer` | array | Referrer breakdown \(referer, score\) |
| `country` | array | Country breakdown \(countryName, country, score\) |
| `browser` | array | Browser breakdown \(browser, score\) |
| `os` | array | OS breakdown \(os, score\) |
| `city` | array | City breakdown \(city, name, countryCode, score\) |
| `device` | array | Device breakdown |
| `social` | array | Social source breakdown \(social, score\) |
| `utmMedium` | array | UTM medium breakdown |
| `utmSource` | array | UTM source breakdown |
| `utmCampaign` | array | UTM campaign breakdown |
| `clickStatistics` | object | Time-series click data \(datasets with x/y points per interval\) |
| `interval` | object | Date range \(startDate, endDate, prevStartDate, prevEndDate, tz\) |
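A sketch of post-processing this output: bot traffic as the gap between `totalClicks` and `humanClicks`, and the top referrer from the documented `(referer, score)` breakdown. The sample response is fabricated to match the table above:

```python
def summarize(stats: dict) -> dict:
    """Derive bot clicks and the highest-scoring referrer from analytics."""
    bot_clicks = stats["totalClicks"] - stats["humanClicks"]
    top = max(stats["referer"], key=lambda r: r["score"], default=None)
    return {
        "botClicks": bot_clicks,
        "topReferer": top["referer"] if top else None,
    }

sample = {
    "totalClicks": 120,
    "humanClicks": 95,
    "referer": [
        {"referer": "twitter.com", "score": 40},
        {"referer": "direct", "score": 55},
    ],
}
print(summarize(sample))  # {'botClicks': 25, 'topReferer': 'direct'}
```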

View File

@@ -13,39 +13,19 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
{/* MANUAL-CONTENT-START:intro */}
[Slack](https://www.slack.com/) is a business communication platform that offers teams a unified place for messaging, tools, and files.
With the Slack integration in Sim, you can:
- **Automate agent notifications**: Send real-time updates from your Sim agents to any Slack channel
- **Create webhook endpoints**: Configure Slack bots as webhooks to trigger Sim workflows from Slack activities
- **Enhance agent workflows**: Integrate Slack messaging into your agents to deliver results, alerts, and status updates
- **Create and share Slack canvases**: Programmatically generate collaborative documents (canvases) in Slack channels
- **Read messages from channels**: Retrieve and process recent messages from any Slack channel for monitoring or workflow triggers
- **Manage bot messages**: Update, delete, and add reactions to messages sent by your bot
- **Send messages**: Send formatted messages to any Slack channel or user, supporting Slack's mrkdwn syntax for rich formatting
- **Send ephemeral messages**: Send temporary messages visible only to a specific user in a channel
- **Update messages**: Edit previously sent bot messages to correct information or provide status updates
- **Delete messages**: Remove bot messages when they're no longer needed or contain errors
- **Add reactions**: Express sentiment or acknowledgment by adding emoji reactions to any message
- **Create canvases**: Create and share Slack canvases (collaborative documents) directly in channels, enabling richer content sharing and documentation
- **Read messages**: Read recent messages from channels, allowing for monitoring, reporting, or triggering further actions based on channel activity
- **Create canvases**: Create and share Slack canvases (collaborative documents) directly in channels
- **Read messages**: Retrieve recent messages from channels or DMs, with filtering by time range
- **Manage channels and users**: List channels, members, and users in your Slack workspace
- **Download files**: Retrieve files shared in Slack channels for processing or archival
In Sim, the Slack integration enables your agents to programmatically interact with Slack as part of their workflows. This allows for automation scenarios such as sending notifications with dynamic updates, managing conversational flows with editable status messages, acknowledging important messages with reactions, and maintaining clean channels by removing outdated bot messages. The integration can also be used in trigger mode to start a workflow when a message is sent to a channel.
## AI-Generated Content

View File

@@ -5,11 +5,12 @@ description: User-defined data tables for storing and querying structured data
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="table"
color="#10B981"
/>
{/* MANUAL-CONTENT-START:intro */}
Tables allow you to create and manage custom data tables directly within Sim. Store, query, and manipulate structured data within your workflows without needing external database integrations.
**Why Use Tables?**
@@ -26,6 +27,7 @@ Tables allow you to create and manage custom data tables directly within Sim. St
- Batch operations for bulk inserts
- Bulk updates and deletes by filter
- Up to 10,000 rows per table, 100 tables per workspace
{/* MANUAL-CONTENT-END */}
## Creating Tables

View File

@@ -11,11 +11,21 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
/>
{/* MANUAL-CONTENT-START:intro */}
[Telegram](https://telegram.org) is a secure, cloud-based messaging platform that enables fast and reliable communication across devices. With its powerful Bot API, Telegram provides a robust framework for automated messaging and integration.
With the Telegram integration in Sim, you can:
- **Send messages**: Send text messages to Telegram chats, groups, or channels
- **Delete messages**: Remove previously sent messages from a chat
- **Send photos**: Share images with optional captions
- **Send videos**: Share video files with optional captions
- **Send audio**: Share audio files with optional captions
- **Send animations**: Share GIF animations with optional captions
- **Send documents**: Share files of any type with optional captions
In Sim, the Telegram integration enables your agents to send messages and rich media to Telegram chats as part of automated workflows. This is ideal for automated notifications, alerts, content distribution, and interactive bot experiences.
Learn how to create a webhook trigger in Sim that seamlessly initiates workflows from Telegram messages. This tutorial walks you through setting up a webhook, configuring it with Telegram's bot API, and triggering automated actions in real-time.
<iframe
width="100%"
@@ -27,7 +37,7 @@ Learn how to create a webhook trigger in Sim that seamlessly initiates workflows
allowFullScreen
></iframe>
Learn how to use the Telegram Tool in Sim to seamlessly automate message delivery to any Telegram group. This tutorial walks you through integrating the tool into your workflow, configuring group messaging, and triggering automated updates in real-time.
<iframe
width="100%"
@@ -38,15 +48,6 @@ Learn how to use the Telegram Tool in Sim to seamlessly automate message deliver
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowFullScreen
></iframe>
{/* MANUAL-CONTENT-END */}

View File

@@ -11,11 +11,19 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
/>
{/* MANUAL-CONTENT-START:intro */}
[WordPress](https://wordpress.org/) is the world's leading open-source content management system, powering websites, blogs, and online stores of all sizes. WordPress provides a flexible platform for publishing and managing content with extensive plugin and theme support.
With the WordPress integration in Sim, you can:
- **Manage posts**: Create, update, delete, get, and list blog posts with full control over content, status, categories, and tags
- **Manage pages**: Create, update, delete, get, and list static pages
- **Handle media**: Upload, get, list, and delete media files such as images, videos, and documents
- **Moderate comments**: Create, list, update, and delete comments on posts and pages
- **Organize content**: Create and list categories and tags for content taxonomy
- **Manage users**: Get the current user, list users, and retrieve user details
- **Search content**: Search across all content types on the WordPress site
In Sim, the WordPress integration enables your agents to automate content publishing and site management as part of automated workflows. Agents can create and publish posts, manage media assets, moderate comments, and organize content—keeping your website fresh and organized without manual effort.
{/* MANUAL-CONTENT-END */}

View File

@@ -29,74 +29,65 @@ In Sim, the X integration enables sophisticated social media automation scenario
## Usage Instructions
Integrate X into the workflow. Search tweets, manage bookmarks, follow/block/mute users, like and retweet, view trends, and more.
## Tools
### `x_create_tweet`
Create a new tweet, reply, or quote tweet on X
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `text` | string | Yes | The text content of the tweet \(max 280 characters\) |
| `replyToTweetId` | string | No | Tweet ID to reply to |
| `quoteTweetId` | string | No | Tweet ID to quote |
| `mediaIds` | string | No | Comma-separated media IDs to attach \(up to 4\) |
| `replySettings` | string | No | Who can reply: "mentionedUsers", "following", "subscribers", or "verified" |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | The ID of the created tweet |
| `text` | string | The text of the created tweet |
### `x_delete_tweet`
Delete a tweet authored by the authenticated user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `tweetId` | string | Yes | The ID of the tweet to delete |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `deleted` | boolean | Whether the tweet was successfully deleted |
### `x_search_tweets`
Search for recent tweets using keywords, hashtags, or advanced query operators
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `query` | string | Yes | Search query \(supports operators like "from:", "to:", "#hashtag", "has:images", "is:retweet", "lang:"\) |
| `maxResults` | number | No | Maximum number of results \(10-100, default 10\) |
| `startTime` | string | No | Oldest UTC timestamp in ISO 8601 format \(e.g., 2024-01-01T00:00:00Z\) |
| `endTime` | string | No | Newest UTC timestamp in ISO 8601 format |
| `sinceId` | string | No | Returns tweets with ID greater than this |
| `untilId` | string | No | Returns tweets with ID less than this |
| `sortOrder` | string | No | Sort order: "recency" or "relevancy" |
| `nextToken` | string | No | Pagination token for next page of results |
#### Output
@@ -104,38 +95,748 @@ Search for tweets using keywords, hashtags, or advanced queries
| --------- | ---- | ----------- |
| `tweets` | array | Array of tweets matching the search query |
| ↳ `id` | string | Tweet ID |
| ↳ `text` | string | Tweet text content |
| ↳ `createdAt` | string | Tweet creation timestamp |
| ↳ `authorId` | string | Author user ID |
| ↳ `conversationId` | string | Conversation thread ID |
| ↳ `inReplyToUserId` | string | User ID being replied to |
| ↳ `publicMetrics` | object | Engagement metrics |
| ↳ `retweetCount` | number | Number of retweets |
| ↳ `replyCount` | number | Number of replies |
| ↳ `likeCount` | number | Number of likes |
| ↳ `quoteCount` | number | Number of quotes |
| `includes` | object | Additional data including user profiles |
| ↳ `users` | array | Array of user objects referenced in tweets |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
| `meta` | object | Search metadata including result count and pagination tokens |
| ↳ `resultCount` | number | Number of results returned |
| ↳ `newestId` | string | ID of the newest tweet |
| ↳ `oldestId` | string | ID of the oldest tweet |
| ↳ `nextToken` | string | Pagination token for next page |
### `x_get_tweets_by_ids`
Look up multiple tweets by their IDs (up to 100)
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `ids` | string | Yes | Comma-separated tweet IDs \(up to 100\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `tweets` | array | Array of tweets matching the provided IDs |
| ↳ `id` | string | Tweet ID |
| ↳ `text` | string | Tweet text content |
| ↳ `createdAt` | string | Tweet creation timestamp |
| ↳ `authorId` | string | Author user ID |
| ↳ `conversationId` | string | Conversation thread ID |
| ↳ `inReplyToUserId` | string | User ID being replied to |
| ↳ `publicMetrics` | object | Engagement metrics |
| ↳ `retweetCount` | number | Number of retweets |
| ↳ `replyCount` | number | Number of replies |
| ↳ `likeCount` | number | Number of likes |
| ↳ `quoteCount` | number | Number of quotes |
| `includes` | object | Additional data including user profiles |
| ↳ `users` | array | Array of user objects referenced in tweets |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
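Since `ids` takes at most 100 comma-separated IDs per call, a larger list has to be batched. A small sketch of producing valid `ids` strings (the helper name is ours):

```python
def batch_ids(tweet_ids: list[str], limit: int = 100) -> list[str]:
    """Join IDs into comma-separated chunks of at most `limit` entries."""
    return [
        ",".join(tweet_ids[i : i + limit])
        for i in range(0, len(tweet_ids), limit)
    ]

ids = [str(n) for n in range(250)]
batches = batch_ids(ids)
print([len(b.split(",")) for b in batches])  # [100, 100, 50]
```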
### `x_get_quote_tweets`
Get tweets that quote a specific tweet
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `tweetId` | string | Yes | The tweet ID to get quote tweets for |
| `maxResults` | number | No | Maximum number of results \(10-100, default 10\) |
| `paginationToken` | string | No | Pagination token for next page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `tweets` | array | Array of quote tweets |
| ↳ `id` | string | Tweet ID |
| ↳ `text` | string | Tweet text content |
| ↳ `createdAt` | string | Tweet creation timestamp |
| ↳ `authorId` | string | Author user ID |
| ↳ `conversationId` | string | Conversation thread ID |
| ↳ `inReplyToUserId` | string | User ID being replied to |
| ↳ `publicMetrics` | object | Engagement metrics |
| ↳ `retweetCount` | number | Number of retweets |
| ↳ `replyCount` | number | Number of replies |
| ↳ `likeCount` | number | Number of likes |
| ↳ `quoteCount` | number | Number of quotes |
| `meta` | object | Pagination metadata |
| ↳ `resultCount` | number | Number of results returned |
| ↳ `nextToken` | string | Token for next page |
### `x_hide_reply`
Hide or unhide a reply to a tweet authored by the authenticated user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `tweetId` | string | Yes | The reply tweet ID to hide or unhide |
| `hidden` | boolean | Yes | Set to true to hide the reply, false to unhide |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `hidden` | boolean | Whether the reply is now hidden |
### `x_get_user_tweets`
Get tweets authored by a specific user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The user ID whose tweets to retrieve |
| `maxResults` | number | No | Maximum number of results \(5-100, default 10\) |
| `paginationToken` | string | No | Pagination token for next page of results |
| `startTime` | string | No | Oldest UTC timestamp in ISO 8601 format |
| `endTime` | string | No | Newest UTC timestamp in ISO 8601 format |
| `sinceId` | string | No | Returns tweets with ID greater than this |
| `untilId` | string | No | Returns tweets with ID less than this |
| `exclude` | string | No | Comma-separated types to exclude: "retweets", "replies" |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `tweets` | array | Array of tweets by the user |
| ↳ `id` | string | Tweet ID |
| ↳ `text` | string | Tweet text content |
| ↳ `createdAt` | string | Tweet creation timestamp |
| ↳ `authorId` | string | Author user ID |
| ↳ `conversationId` | string | Conversation thread ID |
| ↳ `inReplyToUserId` | string | User ID being replied to |
| ↳ `publicMetrics` | object | Engagement metrics |
| ↳ `retweetCount` | number | Number of retweets |
| ↳ `replyCount` | number | Number of replies |
| ↳ `likeCount` | number | Number of likes |
| ↳ `quoteCount` | number | Number of quotes |
| `includes` | object | Additional data including user profiles |
| ↳ `users` | array | Array of user objects referenced in tweets |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
| `meta` | object | Pagination metadata |
| ↳ `resultCount` | number | Number of results returned |
| ↳ `newestId` | string | ID of the newest tweet |
| ↳ `oldestId` | string | ID of the oldest tweet |
| ↳ `nextToken` | string | Token for next page |
| ↳ `previousToken` | string | Token for previous page |
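Because author profiles arrive separately in `includes.users`, a caller typically joins them back onto the tweets by `authorId`. A minimal sketch (the interfaces are illustrative assumptions, not the tool's actual type definitions):

```typescript
// Join each tweet with its author's profile from `includes.users`,
// matching tweet.authorId against user.id. Field names follow the
// output tables above; anything else here is a hypothetical helper.
interface TweetRow {
  id: string;
  text: string;
  authorId: string;
}

interface UserRow {
  id: string;
  username: string;
  name: string;
}

function hydrateTweets(tweets: TweetRow[], users: UserRow[]) {
  // Index users by id for O(1) lookup per tweet.
  const byId = new Map(users.map((u) => [u.id, u] as [string, UserRow]));
  // Tweets whose author is not present in `includes.users` get `author: undefined`.
  return tweets.map((t) => ({ ...t, author: byId.get(t.authorId) }));
}

const rows = hydrateTweets(
  [{ id: "1", text: "hello", authorId: "u1" }],
  [{ id: "u1", username: "jane", name: "Jane" }]
);
console.log(rows[0].author?.username); // "jane"
```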
### `x_get_user_mentions`
Get tweets that mention a specific user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The user ID whose mentions to retrieve |
| `maxResults` | number | No | Maximum number of results \(5-100, default 10\) |
| `paginationToken` | string | No | Pagination token for next page of results |
| `startTime` | string | No | Oldest UTC timestamp in ISO 8601 format |
| `endTime` | string | No | Newest UTC timestamp in ISO 8601 format |
| `sinceId` | string | No | Returns tweets with ID greater than this |
| `untilId` | string | No | Returns tweets with ID less than this |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `tweets` | array | Array of tweets mentioning the user |
| ↳ `id` | string | Tweet ID |
| ↳ `text` | string | Tweet text content |
| ↳ `createdAt` | string | Tweet creation timestamp |
| ↳ `authorId` | string | Author user ID |
| ↳ `conversationId` | string | Conversation thread ID |
| ↳ `inReplyToUserId` | string | User ID being replied to |
| ↳ `publicMetrics` | object | Engagement metrics |
| ↳ `retweetCount` | number | Number of retweets |
| ↳ `replyCount` | number | Number of replies |
| ↳ `likeCount` | number | Number of likes |
| ↳ `quoteCount` | number | Number of quotes |
| `includes` | object | Additional data including user profiles |
| ↳ `users` | array | Array of user objects referenced in tweets |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
| `meta` | object | Pagination metadata |
| ↳ `resultCount` | number | Number of results returned |
| ↳ `newestId` | string | ID of the newest tweet |
| ↳ `oldestId` | string | ID of the oldest tweet |
| ↳ `nextToken` | string | Token for next page |
| ↳ `previousToken` | string | Token for previous page |
### `x_get_user_timeline`
Get the reverse chronological home timeline for the authenticated user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The authenticated user ID |
| `maxResults` | number | No | Maximum number of results \(1-100, default 10\) |
| `paginationToken` | string | No | Pagination token for next page of results |
| `startTime` | string | No | Oldest UTC timestamp in ISO 8601 format |
| `endTime` | string | No | Newest UTC timestamp in ISO 8601 format |
| `sinceId` | string | No | Returns tweets with ID greater than this |
| `untilId` | string | No | Returns tweets with ID less than this |
| `exclude` | string | No | Comma-separated types to exclude: "retweets", "replies" |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `tweets` | array | Array of timeline tweets |
| ↳ `id` | string | Tweet ID |
| ↳ `text` | string | Tweet text content |
| ↳ `createdAt` | string | Tweet creation timestamp |
| ↳ `authorId` | string | Author user ID |
| ↳ `conversationId` | string | Conversation thread ID |
| ↳ `inReplyToUserId` | string | User ID being replied to |
| ↳ `publicMetrics` | object | Engagement metrics |
| ↳ `retweetCount` | number | Number of retweets |
| ↳ `replyCount` | number | Number of replies |
| ↳ `likeCount` | number | Number of likes |
| ↳ `quoteCount` | number | Number of quotes |
| `includes` | object | Additional data including user profiles |
| ↳ `users` | array | Array of user objects referenced in tweets |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
| `meta` | object | Pagination metadata |
| ↳ `resultCount` | number | Number of results returned |
| ↳ `newestId` | string | ID of the newest tweet |
| ↳ `oldestId` | string | ID of the oldest tweet |
| ↳ `nextToken` | string | Token for next page |
| ↳ `previousToken` | string | Token for previous page |
### `x_manage_like`
Like or unlike a tweet on X
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The authenticated user ID |
| `tweetId` | string | Yes | The tweet ID to like or unlike |
| `action` | string | Yes | Action to perform: "like" or "unlike" |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `liked` | boolean | Whether the tweet is now liked |
### `x_manage_retweet`
Retweet or unretweet a tweet on X
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The authenticated user ID |
| `tweetId` | string | Yes | The tweet ID to retweet or unretweet |
| `action` | string | Yes | Action to perform: "retweet" or "unretweet" |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `retweeted` | boolean | Whether the tweet is now retweeted |
### `x_get_liked_tweets`
Get tweets liked by a specific user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The user ID whose liked tweets to retrieve |
| `maxResults` | number | No | Maximum number of results \(5-100\) |
| `paginationToken` | string | No | Pagination token for next page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `tweets` | array | Array of liked tweets |
| ↳ `id` | string | Tweet ID |
| ↳ `text` | string | Tweet content |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `authorId` | string | Author user ID |
| `meta` | object | Pagination metadata |
| ↳ `resultCount` | number | Number of results returned |
| ↳ `nextToken` | string | Token for next page |
### `x_get_liking_users`
Get the list of users who liked a specific tweet
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `tweetId` | string | Yes | The tweet ID to get liking users for |
| `maxResults` | number | No | Maximum number of results \(1-100, default 100\) |
| `paginationToken` | string | No | Pagination token for next page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `users` | array | Array of users who liked the tweet |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
| `meta` | object | Pagination metadata |
| ↳ `resultCount` | number | Number of results returned |
| ↳ `nextToken` | string | Token for next page |
### `x_get_retweeted_by`
Get the list of users who retweeted a specific tweet
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `tweetId` | string | Yes | The tweet ID to get retweeters for |
| `maxResults` | number | No | Maximum number of results \(1-100, default 100\) |
| `paginationToken` | string | No | Pagination token for next page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `users` | array | Array of users who retweeted the tweet |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
| `meta` | object | Pagination metadata |
| ↳ `resultCount` | number | Number of results returned |
| ↳ `nextToken` | string | Token for next page |
### `x_get_bookmarks`
Get bookmarked tweets for the authenticated user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The authenticated user ID |
| `maxResults` | number | No | Maximum number of results \(1-100\) |
| `paginationToken` | string | No | Pagination token for next page of results |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `tweets` | array | Array of bookmarked tweets |
| ↳ `id` | string | Tweet ID |
| ↳ `text` | string | Tweet text content |
| ↳ `createdAt` | string | Tweet creation timestamp |
| ↳ `authorId` | string | Author user ID |
| ↳ `conversationId` | string | Conversation thread ID |
| ↳ `inReplyToUserId` | string | User ID being replied to |
| ↳ `publicMetrics` | object | Engagement metrics |
| ↳ `retweetCount` | number | Number of retweets |
| ↳ `replyCount` | number | Number of replies |
| ↳ `likeCount` | number | Number of likes |
| ↳ `quoteCount` | number | Number of quotes |
| `includes` | object | Additional data including user profiles |
| ↳ `users` | array | Array of user objects referenced in tweets |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
| `meta` | object | Pagination metadata |
| ↳ `resultCount` | number | Number of results returned |
| ↳ `newestId` | string | ID of the newest tweet |
| ↳ `oldestId` | string | ID of the oldest tweet |
| ↳ `nextToken` | string | Token for next page |
| ↳ `previousToken` | string | Token for previous page |
### `x_create_bookmark`
Bookmark a tweet for the authenticated user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The authenticated user ID |
| `tweetId` | string | Yes | The tweet ID to bookmark |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `bookmarked` | boolean | Whether the tweet was successfully bookmarked |
### `x_delete_bookmark`
Remove a tweet from the authenticated user's bookmarks
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The authenticated user ID |
| `tweetId` | string | Yes | The tweet ID to remove from bookmarks |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `bookmarked` | boolean | Whether the tweet is still bookmarked \(should be false after deletion\) |
### `x_get_me`
Get the profile of the authenticated user
#### Input
No input parameters.
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `user` | object | Authenticated user profile |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
### `x_search_users`
Search for X users by name, username, or bio
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `query` | string | Yes | Search keyword \(1-50 chars, matches name, username, or bio\) |
| `maxResults` | number | No | Maximum number of results \(1-1000, default 100\) |
| `nextToken` | string | No | Pagination token for next page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `users` | array | Array of users matching the search query |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
| `meta` | object | Search metadata |
| ↳ `resultCount` | number | Number of results returned |
| ↳ `nextToken` | string | Pagination token for next page |
### `x_get_followers`
Get the list of followers for a user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The user ID whose followers to retrieve |
| `maxResults` | number | No | Maximum number of results \(1-1000, default 100\) |
| `paginationToken` | string | No | Pagination token for next page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `users` | array | Array of follower user profiles |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
| `meta` | object | Pagination metadata |
| ↳ `resultCount` | number | Number of results returned |
| ↳ `nextToken` | string | Token for next page |
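The follower list is cursored: each response carries `meta.nextToken` until the last page, which omits it. A hedged sketch of draining all pages — `fetchPage` stands in for whatever call your integration makes to `x_get_followers` and is an assumption here:

```typescript
// Collect every follower by following `meta.nextToken` until it is absent.
// The page shape mirrors the output table above; `fetchPage` is hypothetical.
interface FollowersPage {
  users: { id: string; username: string }[];
  meta: { resultCount: number; nextToken?: string };
}

async function collectFollowers(
  fetchPage: (paginationToken?: string) => Promise<FollowersPage>
): Promise<{ id: string; username: string }[]> {
  const all: { id: string; username: string }[] = [];
  let token: string | undefined;
  do {
    const page = await fetchPage(token);
    all.push(...page.users);
    token = page.meta.nextToken; // undefined on the final page ends the loop
  } while (token);
  return all;
}
```

The same loop applies to any of the paginated tools above that expose `paginationToken` and `meta.nextToken`.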
### `x_get_following`
Get the list of users that a user is following
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The user ID whose following list to retrieve |
| `maxResults` | number | No | Maximum number of results \(1-1000, default 100\) |
| `paginationToken` | string | No | Pagination token for next page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `users` | array | Array of users being followed |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
| `meta` | object | Pagination metadata |
| ↳ `resultCount` | number | Number of results returned |
| ↳ `nextToken` | string | Token for next page |
### `x_manage_follow`
Follow or unfollow a user on X
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The authenticated user ID |
| `targetUserId` | string | Yes | The user ID to follow or unfollow |
| `action` | string | Yes | Action to perform: "follow" or "unfollow" |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `following` | boolean | Whether you are now following the user |
| `pendingFollow` | boolean | Whether the follow request is pending \(for protected accounts\) |
### `x_manage_block`
Block or unblock a user on X
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The authenticated user ID |
| `targetUserId` | string | Yes | The user ID to block or unblock |
| `action` | string | Yes | Action to perform: "block" or "unblock" |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `blocking` | boolean | Whether you are now blocking the user |
### `x_get_blocking`
Get the list of users blocked by the authenticated user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The authenticated user ID |
| `maxResults` | number | No | Maximum number of results \(1-1000\) |
| `paginationToken` | string | No | Pagination token for next page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `users` | array | Array of blocked user profiles |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
| `meta` | object | Pagination metadata |
| ↳ `resultCount` | number | Number of results returned |
| ↳ `nextToken` | string | Token for next page |
### `x_manage_mute`
Mute or unmute a user on X
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The authenticated user ID |
| `targetUserId` | string | Yes | The user ID to mute or unmute |
| `action` | string | Yes | Action to perform: "mute" or "unmute" |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `muting` | boolean | Whether you are now muting the user |
### `x_get_trends_by_woeid`
Get trending topics for a specific location by WOEID (e.g., 1 for worldwide, 23424977 for US)
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `woeid` | string | Yes | Yahoo Where On Earth ID \(e.g., "1" for worldwide, "23424977" for US, "23424975" for UK\) |
| `maxTrends` | number | No | Maximum number of trends to return \(1-50, default 20\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `trends` | array | Array of trending topics |
| ↳ `trendName` | string | Name of the trending topic |
| ↳ `tweetCount` | number | Number of tweets for this trend |
### `x_get_personalized_trends`
Get personalized trending topics for the authenticated user
#### Input
No input parameters.
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `trends` | array | Array of personalized trending topics |
| ↳ `trendName` | string | Name of the trending topic |
| ↳ `postCount` | number | Number of posts for this trend |
| ↳ `category` | string | Category of the trend |
| ↳ `trendingSince` | string | ISO 8601 timestamp of when the topic started trending |
### `x_get_usage`
Get the API usage data for your X project
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `days` | number | No | Number of days of usage data to return \(1-90, default 7\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `capResetDay` | number | Day of month when usage cap resets |
| `projectId` | string | The project ID |
| `projectCap` | number | The project tweet consumption cap |
| `projectUsage` | number | Total tweets consumed in current period |
| `dailyProjectUsage` | array | Daily project usage breakdown |
| ↳ `date` | string | Usage date in ISO 8601 format |
| ↳ `usage` | number | Number of tweets consumed |
| `dailyClientAppUsage` | array | Daily per-app usage breakdown |
| ↳ `clientAppId` | string | Client application ID |
| ↳ `usage` | array | Daily usage entries for this app |
| ↳ `date` | string | Usage date in ISO 8601 format |
| ↳ `usage` | number | Number of tweets consumed |
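The cap fields lend themselves to a simple quota check. An illustrative helper (not part of the tool's output) that derives remaining headroom and percent consumed from `projectCap` and `projectUsage`:

```typescript
// Summarize quota from the x_get_usage output fields.
// Guards against a zero cap and clamps remaining at zero.
function summarizeUsage(projectCap: number, projectUsage: number) {
  const remaining = Math.max(projectCap - projectUsage, 0);
  const usedPct = projectCap > 0 ? (projectUsage / projectCap) * 100 : 0;
  // Round to one decimal place for display.
  return { remaining, usedPct: Math.round(usedPct * 10) / 10 };
}

console.log(summarizeUsage(1_000_000, 250_000)); // { remaining: 750000, usedPct: 25 }
```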

---
title: Environment Variables
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Image } from '@/components/ui/image'
Environment variables provide a secure way to manage configuration values and secrets across your workflows, including API keys and other sensitive data that your workflows need to access. They keep secrets out of your workflow definitions while making them available during execution.
## Variable Types
Environment variables in Sim work at two levels:
- **Personal Environment Variables**: Private to your account, only you can see and use them
- **Workspace Environment Variables**: Shared across the entire workspace, available to all team members
<Callout type="info">
Workspace environment variables take precedence over personal ones when there's a naming conflict.
</Callout>
## Setting up Environment Variables
Navigate to Settings to configure your environment variables:
<Image
src="/static/environment/environment-1.png"
alt="Environment variables modal for creating new variables"
width={500}
height={350}
/>
From your workspace settings, you can create and manage both personal and workspace-level environment variables. Personal variables are private to your account, while workspace variables are shared with all team members.
### Making Variables Workspace-Scoped
Use the workspace scope toggle to make variables available to your entire team:
<Image
src="/static/environment/environment-2.png"
alt="Toggle workspace scope for environment variables"
width={500}
height={350}
/>
When you enable workspace scope, the variable becomes available to all workspace members and can be used in any workflow within that workspace.
### Workspace Variables View
Once you have workspace-scoped variables, they appear in your environment variables list:
<Image
src="/static/environment/environment-3.png"
alt="Workspace-scoped variables in the environment variables list"
width={500}
height={350}
/>
## Using Variables in Workflows
To reference environment variables in your workflows, use the `{{}}` notation. When you type `{{` in any input field, a dropdown will appear showing both your personal and workspace-level environment variables. Simply select the variable you want to use.
<Image
src="/static/environment/environment-4.png"
alt="Using environment variables with double brace notation"
width={500}
height={350}
/>
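Conceptually, the `{{VAR}}` references are substituted with the resolved values at execution time. A hedged sketch of that substitution, not Sim's actual implementation — unknown keys are left untouched so a missing variable stays visible rather than being silently blanked:

```typescript
// Replace {{VAR}} placeholders with values from an environment map.
// Keys are assumed to be uppercase identifiers (DATABASE_URL, API_KEY, ...).
function interpolate(template: string, env: Record<string, string>): string {
  return template.replace(/\{\{\s*([A-Z0-9_]+)\s*\}\}/g, (match, key) =>
    key in env ? env[key] : match
  );
}

console.log(interpolate("Bearer {{API_KEY}}", { API_KEY: "sk-123" })); // "Bearer sk-123"
```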
## How Variables are Resolved
**Workspace variables always take precedence** over personal variables, regardless of who runs the workflow.
When no workspace variable exists for a key, personal variables are used:
- **Manual runs (UI)**: Your personal variables
- **Automated runs (API, webhook, schedule, deployed chat)**: Workflow owner's personal variables
<Callout type="info">
Personal variables are best for testing. Use workspace variables for production workflows.
</Callout>
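The precedence rule above amounts to a simple merge: personal variables fill the gaps, workspace variables win on conflicts. A minimal sketch, assuming both scopes are available as plain maps:

```typescript
// Merge personal and workspace variables; workspace takes precedence.
// Later spreads override earlier ones, which encodes the precedence rule.
function resolveEnv(
  personal: Record<string, string>,
  workspace: Record<string, string>
): Record<string, string> {
  return { ...personal, ...workspace };
}

const resolved = resolveEnv(
  { API_KEY: "personal-key", DEBUG: "true" },
  { API_KEY: "workspace-key" }
);
console.log(resolved.API_KEY); // "workspace-key"
console.log(resolved.DEBUG);   // "true"
```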
## Security Best Practices
### For Sensitive Data
- Store API keys, tokens, and passwords as environment variables instead of hardcoding them
- Use workspace variables for shared resources that multiple team members need
- Keep personal credentials in personal variables
### Variable Naming
- Use descriptive names: `DATABASE_URL` instead of `DB`
- Follow consistent naming conventions across your team
- Consider prefixes to avoid conflicts: `PROD_API_KEY`, `DEV_API_KEY`
### Access Control
- Workspace environment variables respect workspace permissions
- Only users with write access or higher can create/modify workspace variables
- Personal variables are always private to the individual user

View File

@@ -1,96 +0,0 @@
---
title: Environment variables
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Image } from '@/components/ui/image'
Environment variables provide a secure way to manage configuration values and secrets across your workflows, including API keys and other sensitive data your workflows need. They keep secrets out of your workflow definitions while making them available during execution.
## Variable types
Environment variables in Sim work at two levels:
- **Personal environment variables**: private to your account; only you can view and use them
- **Workspace environment variables**: shared across the workspace, available to all team members
<Callout type="info">
Workspace environment variables take precedence over personal variables when names conflict.
</Callout>
## Setting up environment variables
Go to Settings to configure your environment variables:
<Image
src="/static/environment/environment-1.png"
alt="Environment variables modal for creating new variables"
width={500}
height={350}
/>
From your workspace settings, you can create and manage both personal and workspace-level environment variables. Personal variables are private to your account, while workspace variables are shared with all team members.
### Making variables workspace-scoped
Use the workspace scope toggle to make variables available to your entire team:
<Image
src="/static/environment/environment-2.png"
alt="Toggling workspace scope for environment variables"
width={500}
height={350}
/>
When you enable workspace scope, the variable becomes available to all workspace members and can be used in any workflow within that workspace.
### Workspace variables view
Once you have workspace-scoped variables, they appear in your environment variables list:
<Image
src="/static/environment/environment-3.png"
alt="Workspace-scoped variables in the environment variables list"
width={500}
height={350}
/>
## Using variables in workflows
To reference environment variables in your workflows, use the `{{}}` notation. When you type `{{` in any input field, a dropdown appears showing both your personal and workspace-level environment variables. Simply select the variable you want to use.
<Image
src="/static/environment/environment-4.png"
alt="Using environment variables with double-brace notation"
width={500}
height={350}
/>
## How variables are resolved
**Workspace variables always take precedence** over personal variables, regardless of who runs the workflow.
When no workspace variable exists for a key, personal variables are used:
- **Manual runs (UI)**: your personal variables
- **Automated runs (API, webhook, schedule, deployed chat)**: the workflow owner's personal variables
<Callout type="info">
Personal variables are great for testing. Use workspace variables for production workflows.
</Callout>
## Security best practices
### For sensitive data
- Store API keys, tokens, and passwords as environment variables instead of hardcoding them
- Use workspace variables for shared resources that multiple team members need
- Keep your personal credentials in personal variables
### Variable naming
- Use descriptive names: `DATABASE_URL` instead of `DB`
- Follow consistent naming conventions across your team
- Consider prefixes to avoid conflicts: `PROD_API_KEY`, `DEV_API_KEY`
### Access control
- Workspace environment variables respect workspace permissions
- Only users with write access or higher can create/modify workspace variables
- Personal variables are always private to the individual user
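The resolution order this page describes (workspace variables always win; otherwise fall back to the appropriate user's personal variables) can be sketched as a small helper. This is an illustrative stand-in, not Sim's actual implementation:

```typescript
// Sketch of the documented precedence: a workspace variable always wins;
// otherwise resolution falls back to the personal variables of the
// resolving user (the runner for manual UI runs, the workflow owner
// for automated runs such as API, webhook, schedule, or deployed chat).
type Env = Record<string, string>

function resolveVariable(
  key: string,
  workspaceVars: Env,
  personalVars: Env
): string | undefined {
  // Workspace scope takes precedence regardless of who runs the workflow.
  if (key in workspaceVars) return workspaceVars[key]
  // Fall back to personal variables when no workspace variable exists.
  return personalVars[key]
}

const workspace: Env = { API_KEY: 'ws-key' }
const personal: Env = { API_KEY: 'personal-key', DEV_TOKEN: 'dev-123' }

console.log(resolveVariable('API_KEY', workspace, personal)) // 'ws-key'
console.log(resolveVariable('DEV_TOKEN', workspace, personal)) // 'dev-123'
```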


@@ -1,96 +0,0 @@
---
title: Environment variables
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Image } from '@/components/ui/image'
Environment variables provide a secure way to manage configuration values and secrets across your workflows, including API keys and other sensitive data your workflows need. They keep secrets out of your workflow definitions while making them available during execution.
## Variable types
Environment variables in Sim work at two levels:
- **Personal environment variables**: private to your account; only you can view and use them
- **Workspace environment variables**: shared across the workspace, available to all team members
<Callout type="info">
Workspace environment variables take precedence over personal variables when names conflict.
</Callout>
## Setting up environment variables
Go to Settings to configure your environment variables:
<Image
src="/static/environment/environment-1.png"
alt="Environment variables modal for creating new variables"
width={500}
height={350}
/>
From your workspace settings, you can create and manage both personal and workspace-level environment variables. Personal variables are private to your account, while workspace variables are shared with all team members.
### Making variables workspace-scoped
Use the workspace scope toggle to make variables available to your entire team:
<Image
src="/static/environment/environment-2.png"
alt="Toggling workspace scope for environment variables"
width={500}
height={350}
/>
When you enable workspace scope, the variable becomes available to all workspace members and can be used in any workflow within that workspace.
### Workspace variables view
Once you have workspace-scoped variables, they appear in your environment variables list:
<Image
src="/static/environment/environment-3.png"
alt="Workspace-scoped variables in the environment variables list"
width={500}
height={350}
/>
## Using variables in workflows
To reference environment variables in your workflows, use the `{{}}` notation. When you type `{{` in any input field, a dropdown appears showing both your personal and workspace-level environment variables. Simply select the variable you want to use.
<Image
src="/static/environment/environment-4.png"
alt="Using environment variables with double-brace notation"
width={500}
height={350}
/>
## How variables are resolved
**Workspace variables always take precedence** over personal variables, regardless of who runs the workflow.
When no workspace variable exists for a key, personal variables are used:
- **Manual runs (UI)**: your personal variables
- **Automated runs (API, webhook, schedule, deployed chat)**: the workflow owner's personal variables
<Callout type="info">
Personal variables are great for testing. Use workspace variables for production workflows.
</Callout>
## Security best practices
### For sensitive data
- Store API keys, tokens, and passwords as environment variables instead of hardcoding them
- Use workspace variables for shared resources that multiple team members need
- Keep your personal credentials in personal variables
### Variable naming
- Use descriptive names: `DATABASE_URL` instead of `DB`
- Follow consistent naming conventions across your team
- Consider prefixes to avoid conflicts: `PROD_API_KEY`, `DEV_API_KEY`
### Access control
- Workspace environment variables respect workspace permissions
- Only users with write access or higher can create/modify workspace variables
- Personal variables are always private to the individual user


@@ -1,96 +0,0 @@
---
title: Environment variables
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Image } from '@/components/ui/image'
Environment variables provide a secure way to manage configuration values and secrets across your workflows, including API keys and other sensitive data your workflows need. They keep secrets out of your workflow definitions while making them available during execution.
## Variable types
Environment variables in Sim work at two levels:
- **Personal environment variables**: private to your account; only you can view and use them
- **Workspace environment variables**: shared across the workspace, available to all team members
<Callout type="info">
Workspace environment variables take precedence over personal variables when names conflict.
</Callout>
## Setting up environment variables
Go to Settings to configure your environment variables:
<Image
src="/static/environment/environment-1.png"
alt="Environment variables modal for creating new variables"
width={500}
height={350}
/>
From your workspace settings, you can create and manage both personal and workspace-level environment variables. Personal variables are private to your account, while workspace variables are shared with all team members.
### Making variables workspace-scoped
Use the workspace scope toggle to make variables available to your entire team:
<Image
src="/static/environment/environment-2.png"
alt="Toggling workspace scope for environment variables"
width={500}
height={350}
/>
When you enable workspace scope, the variable becomes available to all workspace members and can be used in any workflow within that workspace.
### Workspace variables view
Once you have workspace-scoped variables, they appear in your environment variables list:
<Image
src="/static/environment/environment-3.png"
alt="Workspace-scoped variables in the environment variables list"
width={500}
height={350}
/>
## Using variables in workflows
To reference environment variables in your workflows, use the `{{}}` notation. When you type `{{` in any input field, a dropdown appears showing both your personal and workspace-level environment variables. Simply select the variable you want to use.
<Image
src="/static/environment/environment-4.png"
alt="Using environment variables with double-brace notation"
width={500}
height={350}
/>
## How variables are resolved
**Workspace variables always take precedence** over personal variables, regardless of who runs the workflow.
When no workspace variable exists for a key, personal variables are used:
- **Manual runs (UI)**: your personal variables
- **Automated runs (API, webhook, schedule, deployed chat)**: the workflow owner's personal variables
<Callout type="info">
Personal variables are great for testing. Use workspace variables for production workflows.
</Callout>
## Security best practices
### For sensitive data
- Store API keys, tokens, and passwords as environment variables instead of hardcoding them
- Use workspace variables for shared resources that multiple team members need
- Keep your personal credentials in personal variables
### Variable naming
- Use descriptive names: `DATABASE_URL` instead of `DB`
- Follow consistent naming conventions across your team
- Consider prefixes to avoid conflicts: `PROD_API_KEY`, `DEV_API_KEY`
### Access control
- Workspace environment variables respect workspace permissions
- Only users with write access or higher can create/modify workspace variables
- Personal variables are always private to the individual user

Six binary image files added (contents not shown); sizes: 83 KiB, 84 KiB, 48 KiB, 23 KiB, 26 KiB, 72 KiB.


@@ -46,7 +46,7 @@ export default function OAuthConsentPage() {
return
}
fetch(`/api/auth/oauth2/client/${clientId}`, { credentials: 'include' })
fetch(`/api/auth/oauth2/client/${encodeURIComponent(clientId)}`, { credentials: 'include' })
.then(async (res) => {
if (!res.ok) return
const data = await res.json()
@@ -164,13 +164,12 @@ export default function OAuthConsentPage() {
<div className='flex flex-col items-center justify-center'>
<div className='mb-6 flex items-center gap-4'>
{clientInfo?.icon ? (
<Image
<img
src={clientInfo.icon}
alt={clientName ?? 'Application'}
width={48}
height={48}
className='rounded-[10px]'
unoptimized
/>
) : (
<div className='flex h-12 w-12 items-center justify-center rounded-[10px] bg-muted font-medium text-[18px] text-muted-foreground'>
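The `encodeURIComponent(clientId)` change in the fetch call above matters because an unescaped client ID containing characters such as `/` or `?` would break out of the intended path segment. A quick illustration, with `clientUrl` as a hypothetical helper mirroring that URL construction:

```typescript
// Without encoding, a hostile or malformed clientId could alter the request
// path or smuggle in query parameters; encodeURIComponent percent-escapes
// '/' and '?' so the id stays a single path segment.
function clientUrl(clientId: string): string {
  return `/api/auth/oauth2/client/${encodeURIComponent(clientId)}`
}

console.log(clientUrl('abc123'))       // plain IDs pass through unchanged
console.log(clientUrl('../admin?x=1')) // slash and '?' are percent-escaped
```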


@@ -38,7 +38,7 @@ export default function StructuredData() {
url: 'https://sim.ai',
name: 'Sim - AI Agent Workflow Builder',
description:
'Open-source AI agent workflow builder. 60,000+ developers build and deploy agentic workflows. SOC2 and HIPAA compliant.',
'Open-source AI agent workflow builder. 70,000+ developers build and deploy agentic workflows. SOC2 and HIPAA compliant.',
publisher: {
'@id': 'https://sim.ai/#organization',
},
@@ -87,7 +87,7 @@ export default function StructuredData() {
'@id': 'https://sim.ai/#software',
name: 'Sim - AI Agent Workflow Builder',
description:
'Open-source AI agent workflow builder used by 60,000+ developers. Build agentic workflows with visual drag-and-drop interface. SOC2 and HIPAA compliant. Integrate with 100+ apps.',
'Open-source AI agent workflow builder used by 70,000+ developers. Build agentic workflows with visual drag-and-drop interface. SOC2 and HIPAA compliant. Integrate with 100+ apps.',
applicationCategory: 'DeveloperApplication',
applicationSubCategory: 'AI Development Tools',
operatingSystem: 'Web, Windows, macOS, Linux',
@@ -187,7 +187,7 @@ export default function StructuredData() {
name: 'What is Sim?',
acceptedAnswer: {
'@type': 'Answer',
text: 'Sim is an open-source AI agent workflow builder used by 60,000+ developers at trail-blazing startups to Fortune 500 companies. It provides a visual drag-and-drop interface for building and deploying agentic workflows. Sim is SOC2 and HIPAA compliant.',
text: 'Sim is an open-source AI agent workflow builder used by 70,000+ developers at trail-blazing startups to Fortune 500 companies. It provides a visual drag-and-drop interface for building and deploying agentic workflows. Sim is SOC2 and HIPAA compliant.',
},
},
{


@@ -5,6 +5,7 @@ import { createContext, useCallback, useEffect, useMemo, useState } from 'react'
import { useQueryClient } from '@tanstack/react-query'
import posthog from 'posthog-js'
import { client } from '@/lib/auth/auth-client'
import { extractSessionDataFromAuthClientResult } from '@/lib/auth/session-response'
export type AppSession = {
user: {
@@ -45,7 +46,8 @@ export function SessionProvider({ children }: { children: React.ReactNode }) {
const res = bypassCache
? await client.getSession({ query: { disableCookieCache: true } })
: await client.getSession()
setData(res?.data ?? null)
const session = extractSessionDataFromAuthClientResult(res) as AppSession
setData(session)
} catch (e) {
setError(e instanceof Error ? e : new Error('Failed to fetch session'))
} finally {


@@ -19,6 +19,7 @@ import { validateUrlWithDNS } from '@/lib/core/security/input-validation.server'
import { SSE_HEADERS } from '@/lib/core/utils/sse'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { markExecutionCancelled } from '@/lib/execution/cancellation'
import { decrementSSEConnections, incrementSSEConnections } from '@/lib/monitoring/sse-connections'
import { checkWorkspaceAccess } from '@/lib/workspaces/permissions/utils'
import { getWorkspaceBilledAccountUserId } from '@/lib/workspaces/utils'
import {
@@ -630,9 +631,11 @@ async function handleMessageStream(
}
const encoder = new TextEncoder()
let messageStreamDecremented = false
const stream = new ReadableStream({
async start(controller) {
incrementSSEConnections('a2a-message')
const sendEvent = (event: string, data: unknown) => {
try {
const jsonRpcResponse = {
@@ -841,9 +844,19 @@ async function handleMessageStream(
})
} finally {
await releaseLock(lockKey, lockValue)
if (!messageStreamDecremented) {
messageStreamDecremented = true
decrementSSEConnections('a2a-message')
}
controller.close()
}
},
cancel() {
if (!messageStreamDecremented) {
messageStreamDecremented = true
decrementSSEConnections('a2a-message')
}
},
})
return new NextResponse(stream, {
@@ -1016,16 +1029,34 @@ async function handleTaskResubscribe(
let pollTimeoutId: ReturnType<typeof setTimeout> | null = null
const abortSignal = request.signal
abortSignal.addEventListener('abort', () => {
abortSignal.addEventListener(
'abort',
() => {
isCancelled = true
if (pollTimeoutId) {
clearTimeout(pollTimeoutId)
pollTimeoutId = null
}
},
{ once: true }
)
let sseDecremented = false
const cleanup = () => {
isCancelled = true
if (pollTimeoutId) {
clearTimeout(pollTimeoutId)
pollTimeoutId = null
}
})
if (!sseDecremented) {
sseDecremented = true
decrementSSEConnections('a2a-resubscribe')
}
}
const stream = new ReadableStream({
async start(controller) {
incrementSSEConnections('a2a-resubscribe')
const sendEvent = (event: string, data: unknown): boolean => {
if (isCancelled || abortSignal.aborted) return false
try {
@@ -1041,14 +1072,6 @@ async function handleTaskResubscribe(
}
}
const cleanup = () => {
isCancelled = true
if (pollTimeoutId) {
clearTimeout(pollTimeoutId)
pollTimeoutId = null
}
}
if (
!sendEvent('status', {
kind: 'status',
@@ -1160,11 +1183,7 @@ async function handleTaskResubscribe(
poll()
},
cancel() {
isCancelled = true
if (pollTimeoutId) {
clearTimeout(pollTimeoutId)
pollTimeoutId = null
}
cleanup()
},
})
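The guarded-decrement pattern in the SSE changes above (a boolean flag so the connection gauge is decremented at most once, whether cleanup runs from the stream's `finally` block or from its `cancel()` callback) can be sketched in isolation. `makeCleanup` and the counter here are illustrative stand-ins, not the actual monitoring API:

```typescript
// Both the stream's finally block and its cancel() callback may fire for the
// same connection; the closed-over flag ensures the shared gauge is
// decremented exactly once, preventing the counter from going negative.
let activeConnections = 0

function makeCleanup(onDecrement: () => void): () => void {
  let decremented = false
  return () => {
    if (!decremented) {
      decremented = true
      onDecrement()
    }
  }
}

activeConnections++ // analogous to incrementSSEConnections(...) on start
const cleanup = makeCleanup(() => {
  activeConnections-- // analogous to decrementSSEConnections(...)
})

cleanup() // e.g. from the finally block
cleanup() // e.g. from cancel() — a no-op the second time
console.log(activeConnections) // back to 0, not -1
```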


@@ -0,0 +1,97 @@
/**
* @vitest-environment node
*/
import { createMockRequest } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
const handlerMocks = vi.hoisted(() => ({
betterAuthGET: vi.fn(),
betterAuthPOST: vi.fn(),
ensureAnonymousUserExists: vi.fn(),
createAnonymousGetSessionResponse: vi.fn(() => ({
data: {
user: { id: 'anon' },
session: { id: 'anon-session' },
},
})),
isAuthDisabled: false,
}))
vi.mock('better-auth/next-js', () => ({
toNextJsHandler: () => ({
GET: handlerMocks.betterAuthGET,
POST: handlerMocks.betterAuthPOST,
}),
}))
vi.mock('@/lib/auth', () => ({
auth: { handler: {} },
}))
vi.mock('@/lib/auth/anonymous', () => ({
ensureAnonymousUserExists: handlerMocks.ensureAnonymousUserExists,
createAnonymousGetSessionResponse: handlerMocks.createAnonymousGetSessionResponse,
}))
vi.mock('@/lib/core/config/feature-flags', () => ({
get isAuthDisabled() {
return handlerMocks.isAuthDisabled
},
}))
import { GET } from '@/app/api/auth/[...all]/route'
describe('auth catch-all route (DISABLE_AUTH get-session)', () => {
beforeEach(() => {
vi.clearAllMocks()
handlerMocks.isAuthDisabled = false
})
it('returns anonymous session in better-auth response envelope when auth is disabled', async () => {
handlerMocks.isAuthDisabled = true
const req = createMockRequest(
'GET',
undefined,
{},
'http://localhost:3000/api/auth/get-session'
)
const res = await GET(req as any)
const json = await res.json()
expect(handlerMocks.ensureAnonymousUserExists).toHaveBeenCalledTimes(1)
expect(handlerMocks.betterAuthGET).not.toHaveBeenCalled()
expect(json).toEqual({
data: {
user: { id: 'anon' },
session: { id: 'anon-session' },
},
})
})
it('delegates to better-auth handler when auth is enabled', async () => {
handlerMocks.isAuthDisabled = false
const { NextResponse } = await import('next/server')
handlerMocks.betterAuthGET.mockResolvedValueOnce(
new NextResponse(JSON.stringify({ data: { ok: true } }), {
headers: { 'content-type': 'application/json' },
}) as any
)
const req = createMockRequest(
'GET',
undefined,
{},
'http://localhost:3000/api/auth/get-session'
)
const res = await GET(req as any)
const json = await res.json()
expect(handlerMocks.ensureAnonymousUserExists).not.toHaveBeenCalled()
expect(handlerMocks.betterAuthGET).toHaveBeenCalledTimes(1)
expect(json).toEqual({ data: { ok: true } })
})
})


@@ -1,7 +1,7 @@
import { toNextJsHandler } from 'better-auth/next-js'
import { type NextRequest, NextResponse } from 'next/server'
import { auth } from '@/lib/auth'
import { createAnonymousSession, ensureAnonymousUserExists } from '@/lib/auth/anonymous'
import { createAnonymousGetSessionResponse, ensureAnonymousUserExists } from '@/lib/auth/anonymous'
import { isAuthDisabled } from '@/lib/core/config/feature-flags'
export const dynamic = 'force-dynamic'
@@ -14,7 +14,7 @@ export async function GET(request: NextRequest) {
if (path === 'get-session' && isAuthDisabled) {
await ensureAnonymousUserExists()
return NextResponse.json(createAnonymousSession())
return NextResponse.json(createAnonymousGetSessionResponse())
}
return betterAuthGET(request)


@@ -1,7 +1,7 @@
import { db } from '@sim/db'
import { account } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { and, desc, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
@@ -31,15 +31,13 @@ export async function GET(request: NextRequest) {
})
.from(account)
.where(and(...whereConditions))
// Use the user's email as the display name (consistent with credential selector)
const userEmail = session.user.email
.orderBy(desc(account.updatedAt))
const accountsWithDisplayName = accounts.map((acc) => ({
id: acc.id,
accountId: acc.accountId,
providerId: acc.providerId,
displayName: userEmail || acc.providerId,
displayName: acc.accountId || acc.providerId,
}))
return NextResponse.json({ accounts: accountsWithDisplayName })


@@ -3,63 +3,45 @@
*
* @vitest-environment node
*/
import {
createMockRequest,
mockConsoleLogger,
mockCryptoUuid,
mockDrizzleOrm,
mockUuid,
setupCommonApiMocks,
} from '@sim/testing'
import { createMockRequest } from '@sim/testing'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
const { mockForgetPassword, mockLogger } = vi.hoisted(() => {
const logger = {
info: vi.fn(),
warn: vi.fn(),
error: vi.fn(),
debug: vi.fn(),
trace: vi.fn(),
fatal: vi.fn(),
child: vi.fn(),
}
return {
mockForgetPassword: vi.fn(),
mockLogger: logger,
}
})
vi.mock('@/lib/core/utils/urls', () => ({
getBaseUrl: vi.fn(() => 'https://app.example.com'),
}))
/** Setup auth API mocks for testing authentication routes */
function setupAuthApiMocks(
options: {
operations?: {
forgetPassword?: { success?: boolean; error?: string }
resetPassword?: { success?: boolean; error?: string }
}
} = {}
) {
setupCommonApiMocks()
mockUuid()
mockCryptoUuid()
mockConsoleLogger()
mockDrizzleOrm()
const { operations = {} } = options
const defaultOperations = {
forgetPassword: { success: true, error: 'Forget password error', ...operations.forgetPassword },
resetPassword: { success: true, error: 'Reset password error', ...operations.resetPassword },
}
const createAuthMethod = (config: { success?: boolean; error?: string }) => {
return vi.fn().mockImplementation(() => {
if (config.success) {
return Promise.resolve()
}
return Promise.reject(new Error(config.error))
})
}
vi.doMock('@/lib/auth', () => ({
auth: {
api: {
forgetPassword: createAuthMethod(defaultOperations.forgetPassword),
resetPassword: createAuthMethod(defaultOperations.resetPassword),
},
vi.mock('@/lib/auth', () => ({
auth: {
api: {
forgetPassword: mockForgetPassword,
},
}))
}
},
}))
vi.mock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue(mockLogger),
}))
import { POST } from '@/app/api/auth/forget-password/route'
describe('Forget Password API Route', () => {
beforeEach(() => {
vi.resetModules()
vi.clearAllMocks()
mockForgetPassword.mockResolvedValue(undefined)
})
afterEach(() => {
@@ -67,27 +49,18 @@ describe('Forget Password API Route', () => {
})
it('should send password reset email successfully with same-origin redirectTo', async () => {
setupAuthApiMocks({
operations: {
forgetPassword: { success: true },
},
})
const req = createMockRequest('POST', {
email: 'test@example.com',
redirectTo: 'https://app.example.com/reset',
})
const { POST } = await import('@/app/api/auth/forget-password/route')
const response = await POST(req)
const data = await response.json()
expect(response.status).toBe(200)
expect(data.success).toBe(true)
const auth = await import('@/lib/auth')
expect(auth.auth.api.forgetPassword).toHaveBeenCalledWith({
expect(mockForgetPassword).toHaveBeenCalledWith({
body: {
email: 'test@example.com',
redirectTo: 'https://app.example.com/reset',
@@ -97,50 +70,32 @@ describe('Forget Password API Route', () => {
})
it('should reject external redirectTo URL', async () => {
setupAuthApiMocks({
operations: {
forgetPassword: { success: true },
},
})
const req = createMockRequest('POST', {
email: 'test@example.com',
redirectTo: 'https://evil.com/phishing',
})
const { POST } = await import('@/app/api/auth/forget-password/route')
const response = await POST(req)
const data = await response.json()
expect(response.status).toBe(400)
expect(data.message).toBe('Redirect URL must be a valid same-origin URL')
const auth = await import('@/lib/auth')
expect(auth.auth.api.forgetPassword).not.toHaveBeenCalled()
expect(mockForgetPassword).not.toHaveBeenCalled()
})
it('should send password reset email without redirectTo', async () => {
setupAuthApiMocks({
operations: {
forgetPassword: { success: true },
},
})
const req = createMockRequest('POST', {
email: 'test@example.com',
})
const { POST } = await import('@/app/api/auth/forget-password/route')
const response = await POST(req)
const data = await response.json()
expect(response.status).toBe(200)
expect(data.success).toBe(true)
const auth = await import('@/lib/auth')
expect(auth.auth.api.forgetPassword).toHaveBeenCalledWith({
expect(mockForgetPassword).toHaveBeenCalledWith({
body: {
email: 'test@example.com',
redirectTo: undefined,
@@ -150,97 +105,64 @@ describe('Forget Password API Route', () => {
})
it('should handle missing email', async () => {
setupAuthApiMocks()
const req = createMockRequest('POST', {})
const { POST } = await import('@/app/api/auth/forget-password/route')
const response = await POST(req)
const data = await response.json()
expect(response.status).toBe(400)
expect(data.message).toBe('Email is required')
const auth = await import('@/lib/auth')
expect(auth.auth.api.forgetPassword).not.toHaveBeenCalled()
expect(mockForgetPassword).not.toHaveBeenCalled()
})
it('should handle empty email', async () => {
setupAuthApiMocks()
const req = createMockRequest('POST', {
email: '',
})
const { POST } = await import('@/app/api/auth/forget-password/route')
const response = await POST(req)
const data = await response.json()
expect(response.status).toBe(400)
expect(data.message).toBe('Please provide a valid email address')
const auth = await import('@/lib/auth')
expect(auth.auth.api.forgetPassword).not.toHaveBeenCalled()
expect(mockForgetPassword).not.toHaveBeenCalled()
})
it('should handle auth service error with message', async () => {
const errorMessage = 'User not found'
setupAuthApiMocks({
operations: {
forgetPassword: {
success: false,
error: errorMessage,
},
},
})
mockForgetPassword.mockRejectedValue(new Error(errorMessage))
const req = createMockRequest('POST', {
email: 'nonexistent@example.com',
})
const { POST } = await import('@/app/api/auth/forget-password/route')
const response = await POST(req)
const data = await response.json()
expect(response.status).toBe(500)
expect(data.message).toBe(errorMessage)
const logger = await import('@sim/logger')
const mockLogger = logger.createLogger('ForgetPasswordTest')
expect(mockLogger.error).toHaveBeenCalledWith('Error requesting password reset:', {
error: expect.any(Error),
})
})
it('should handle unknown error', async () => {
setupAuthApiMocks()
vi.doMock('@/lib/auth', () => ({
auth: {
api: {
forgetPassword: vi.fn().mockRejectedValue('Unknown error'),
},
},
}))
mockForgetPassword.mockRejectedValue('Unknown error')
const req = createMockRequest('POST', {
email: 'test@example.com',
})
const { POST } = await import('@/app/api/auth/forget-password/route')
const response = await POST(req)
const data = await response.json()
expect(response.status).toBe(500)
expect(data.message).toBe('Failed to send password reset email. Please try again later.')
const logger = await import('@sim/logger')
const mockLogger = logger.createLogger('ForgetPasswordTest')
expect(mockLogger.error).toHaveBeenCalled()
})
})


@@ -3,52 +3,81 @@
*
* @vitest-environment node
*/
import { createMockLogger, createMockRequest } from '@sim/testing'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
import { createMockRequest } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
describe('OAuth Connections API Route', () => {
const mockGetSession = vi.fn()
const mockDb = {
const {
mockGetSession,
mockDb,
mockLogger,
mockParseProvider,
mockEvaluateScopeCoverage,
mockJwtDecode,
mockEq,
} = vi.hoisted(() => {
const db = {
select: vi.fn().mockReturnThis(),
from: vi.fn().mockReturnThis(),
where: vi.fn().mockReturnThis(),
limit: vi.fn(),
}
const mockLogger = createMockLogger()
const mockParseProvider = vi.fn()
const mockEvaluateScopeCoverage = vi.fn()
const logger = {
info: vi.fn(),
warn: vi.fn(),
error: vi.fn(),
debug: vi.fn(),
trace: vi.fn(),
fatal: vi.fn(),
child: vi.fn(),
}
return {
mockGetSession: vi.fn(),
mockDb: db,
mockLogger: logger,
mockParseProvider: vi.fn(),
mockEvaluateScopeCoverage: vi.fn(),
mockJwtDecode: vi.fn(),
mockEq: vi.fn((field: unknown, value: unknown) => ({ field, value, type: 'eq' })),
}
})
const mockUUID = 'mock-uuid-12345678-90ab-cdef-1234-567890abcdef'
vi.mock('@/lib/auth', () => ({
getSession: mockGetSession,
}))
vi.mock('@sim/db', () => ({
db: mockDb,
account: { userId: 'userId', providerId: 'providerId' },
user: { email: 'email', id: 'id' },
eq: mockEq,
}))
vi.mock('drizzle-orm', () => ({
eq: mockEq,
}))
vi.mock('jwt-decode', () => ({
jwtDecode: mockJwtDecode,
}))
vi.mock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue(mockLogger),
}))
vi.mock('@/lib/oauth/utils', () => ({
parseProvider: mockParseProvider,
evaluateScopeCoverage: mockEvaluateScopeCoverage,
}))
import { GET } from '@/app/api/auth/oauth/connections/route'
describe('OAuth Connections API Route', () => {
beforeEach(() => {
vi.resetModules()
vi.clearAllMocks()
vi.stubGlobal('crypto', {
randomUUID: vi.fn().mockReturnValue(mockUUID),
})
vi.doMock('@/lib/auth', () => ({
getSession: mockGetSession,
}))
vi.doMock('@sim/db', () => ({
db: mockDb,
account: { userId: 'userId', providerId: 'providerId' },
user: { email: 'email', id: 'id' },
eq: vi.fn((field, value) => ({ field, value, type: 'eq' })),
}))
vi.doMock('drizzle-orm', () => ({
eq: vi.fn((field, value) => ({ field, value, type: 'eq' })),
}))
vi.doMock('jwt-decode', () => ({
jwtDecode: vi.fn(),
}))
vi.doMock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue(mockLogger),
}))
mockDb.select.mockReturnThis()
mockDb.from.mockReturnThis()
mockDb.where.mockReturnThis()
mockParseProvider.mockImplementation((providerId: string) => ({
baseProvider: providerId.split('-')[0] || providerId,
@@ -64,15 +93,6 @@ describe('OAuth Connections API Route', () => {
requiresReauthorization: false,
})
)
vi.doMock('@/lib/oauth/utils', () => ({
parseProvider: mockParseProvider,
evaluateScopeCoverage: mockEvaluateScopeCoverage,
}))
})
afterEach(() => {
vi.clearAllMocks()
})
it('should return connections successfully', async () => {
@@ -111,7 +131,6 @@ describe('OAuth Connections API Route', () => {
mockDb.limit.mockResolvedValueOnce(mockUserRecord)
const req = createMockRequest('GET')
const { GET } = await import('@/app/api/auth/oauth/connections/route')
const response = await GET(req)
const data = await response.json()
@@ -136,7 +155,6 @@ describe('OAuth Connections API Route', () => {
mockGetSession.mockResolvedValueOnce(null)
const req = createMockRequest('GET')
const { GET } = await import('@/app/api/auth/oauth/connections/route')
const response = await GET(req)
const data = await response.json()
@@ -161,7 +179,6 @@ describe('OAuth Connections API Route', () => {
mockDb.limit.mockResolvedValueOnce([])
const req = createMockRequest('GET')
const { GET } = await import('@/app/api/auth/oauth/connections/route')
const response = await GET(req)
const data = await response.json()
@@ -180,7 +197,6 @@ describe('OAuth Connections API Route', () => {
mockDb.where.mockRejectedValueOnce(new Error('Database error'))
const req = createMockRequest('GET')
const { GET } = await import('@/app/api/auth/oauth/connections/route')
const response = await GET(req)
const data = await response.json()
@@ -191,9 +207,6 @@ describe('OAuth Connections API Route', () => {
})
it('should decode ID token for display name', async () => {
const { jwtDecode } = await import('jwt-decode')
const mockJwtDecode = jwtDecode as any
mockGetSession.mockResolvedValueOnce({
user: { id: 'user-123' },
})
@@ -224,7 +237,6 @@ describe('OAuth Connections API Route', () => {
mockDb.limit.mockResolvedValueOnce([])
const req = createMockRequest('GET')
const { GET } = await import('@/app/api/auth/oauth/connections/route')
const response = await GET(req)
const data = await response.json()


@@ -4,70 +4,89 @@
* @vitest-environment node
*/
import { createMockLogger } from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
import { beforeEach, describe, expect, it, vi } from 'vitest'
const { mockCheckSessionOrInternalAuth, mockEvaluateScopeCoverage, mockLogger } = vi.hoisted(() => {
const logger = {
info: vi.fn(),
warn: vi.fn(),
error: vi.fn(),
debug: vi.fn(),
trace: vi.fn(),
fatal: vi.fn(),
child: vi.fn(),
}
return {
mockCheckSessionOrInternalAuth: vi.fn(),
mockEvaluateScopeCoverage: vi.fn(),
mockLogger: logger,
}
})
vi.mock('@/lib/auth/hybrid', () => ({
checkSessionOrInternalAuth: mockCheckSessionOrInternalAuth,
}))
vi.mock('@/lib/oauth', () => ({
evaluateScopeCoverage: mockEvaluateScopeCoverage,
}))
vi.mock('@/lib/core/utils/request', () => ({
generateRequestId: vi.fn().mockReturnValue('mock-request-id'),
}))
vi.mock('@/lib/credentials/oauth', () => ({
syncWorkspaceOAuthCredentialsForUser: vi.fn(),
}))
vi.mock('@/lib/workflows/utils', () => ({
authorizeWorkflowByWorkspacePermission: vi.fn(),
}))
vi.mock('@/lib/workspaces/permissions/utils', () => ({
checkWorkspaceAccess: vi.fn(),
}))
vi.mock('@sim/db/schema', () => ({
account: {
userId: 'userId',
providerId: 'providerId',
id: 'id',
scope: 'scope',
updatedAt: 'updatedAt',
},
credential: {
id: 'id',
workspaceId: 'workspaceId',
type: 'type',
displayName: 'displayName',
providerId: 'providerId',
accountId: 'accountId',
},
credentialMember: {
id: 'id',
credentialId: 'credentialId',
userId: 'userId',
status: 'status',
},
user: { email: 'email', id: 'id' },
}))
vi.mock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue(mockLogger),
}))
import { GET } from '@/app/api/auth/oauth/credentials/route'
describe('OAuth Credentials API Route', () => {
const mockGetSession = vi.fn()
const mockParseProvider = vi.fn()
const mockEvaluateScopeCoverage = vi.fn()
const mockDb = {
select: vi.fn().mockReturnThis(),
from: vi.fn().mockReturnThis(),
where: vi.fn().mockReturnThis(),
limit: vi.fn(),
}
const mockLogger = createMockLogger()
const mockUUID = 'mock-uuid-12345678-90ab-cdef-1234-567890abcdef'
function createMockRequestWithQuery(method = 'GET', queryParams = ''): NextRequest {
const url = `http://localhost:3000/api/auth/oauth/credentials${queryParams}`
return new NextRequest(new URL(url), { method })
}
beforeEach(() => {
vi.resetModules()
vi.stubGlobal('crypto', {
randomUUID: vi.fn().mockReturnValue(mockUUID),
})
vi.doMock('@/lib/auth', () => ({
getSession: mockGetSession,
}))
vi.doMock('@/lib/oauth/utils', () => ({
parseProvider: mockParseProvider,
evaluateScopeCoverage: mockEvaluateScopeCoverage,
}))
vi.doMock('@sim/db', () => ({
db: mockDb,
}))
vi.doMock('@sim/db/schema', () => ({
account: { userId: 'userId', providerId: 'providerId' },
user: { email: 'email', id: 'id' },
}))
vi.doMock('drizzle-orm', () => ({
and: vi.fn((...conditions) => ({ conditions, type: 'and' })),
eq: vi.fn((field, value) => ({ field, value, type: 'eq' })),
}))
vi.doMock('jwt-decode', () => ({
jwtDecode: vi.fn(),
}))
vi.doMock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue(mockLogger),
}))
mockParseProvider.mockImplementation((providerId: string) => ({
baseProvider: providerId.split('-')[0] || providerId,
}))
vi.clearAllMocks()
mockEvaluateScopeCoverage.mockImplementation(
(_providerId: string, grantedScopes: string[]) => ({
@@ -80,75 +99,14 @@ describe('OAuth Credentials API Route', () => {
)
})
afterEach(() => {
vi.clearAllMocks()
})
it('should return credentials successfully', async () => {
mockGetSession.mockResolvedValueOnce({
user: { id: 'user-123' },
})
mockParseProvider.mockReturnValueOnce({
baseProvider: 'google',
})
const mockAccounts = [
{
id: 'credential-1',
userId: 'user-123',
providerId: 'google-email',
accountId: 'test@example.com',
updatedAt: new Date('2024-01-01'),
idToken: null,
},
{
id: 'credential-2',
userId: 'user-123',
providerId: 'google-default',
accountId: 'user-id',
updatedAt: new Date('2024-01-02'),
idToken: null,
},
]
mockDb.select.mockReturnValueOnce(mockDb)
mockDb.from.mockReturnValueOnce(mockDb)
mockDb.where.mockResolvedValueOnce(mockAccounts)
mockDb.select.mockReturnValueOnce(mockDb)
mockDb.from.mockReturnValueOnce(mockDb)
mockDb.where.mockReturnValueOnce(mockDb)
mockDb.limit.mockResolvedValueOnce([{ email: 'user@example.com' }])
const req = createMockRequestWithQuery('GET', '?provider=google-email')
const { GET } = await import('@/app/api/auth/oauth/credentials/route')
const response = await GET(req)
const data = await response.json()
expect(response.status).toBe(200)
expect(data.credentials).toHaveLength(2)
expect(data.credentials[0]).toMatchObject({
id: 'credential-1',
provider: 'google-email',
isDefault: false,
})
expect(data.credentials[1]).toMatchObject({
id: 'credential-2',
provider: 'google-default',
isDefault: true,
})
})
it('should handle unauthenticated user', async () => {
mockGetSession.mockResolvedValueOnce(null)
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: false,
error: 'Authentication required',
})
const req = createMockRequestWithQuery('GET', '?provider=google')
const { GET } = await import('@/app/api/auth/oauth/credentials/route')
const response = await GET(req)
const data = await response.json()
@@ -158,14 +116,14 @@ describe('OAuth Credentials API Route', () => {
})
it('should handle missing provider parameter', async () => {
mockGetSession.mockResolvedValueOnce({
user: { id: 'user-123' },
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: true,
userId: 'user-123',
authType: 'session',
})
const req = createMockRequestWithQuery('GET')
const { GET } = await import('@/app/api/auth/oauth/credentials/route')
const response = await GET(req)
const data = await response.json()
@@ -175,22 +133,14 @@ describe('OAuth Credentials API Route', () => {
})
it('should handle no credentials found', async () => {
mockGetSession.mockResolvedValueOnce({
user: { id: 'user-123' },
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: true,
userId: 'user-123',
authType: 'session',
})
mockParseProvider.mockReturnValueOnce({
baseProvider: 'github',
})
mockDb.select.mockReturnValueOnce(mockDb)
mockDb.from.mockReturnValueOnce(mockDb)
mockDb.where.mockResolvedValueOnce([])
const req = createMockRequestWithQuery('GET', '?provider=github')
const { GET } = await import('@/app/api/auth/oauth/credentials/route')
const response = await GET(req)
const data = await response.json()
@@ -198,71 +148,19 @@ describe('OAuth Credentials API Route', () => {
expect(data.credentials).toHaveLength(0)
})
it('should decode ID token for display name', async () => {
const { jwtDecode } = await import('jwt-decode')
const mockJwtDecode = jwtDecode as any
mockGetSession.mockResolvedValueOnce({
user: { id: 'user-123' },
it('should return empty credentials when no workspace context', async () => {
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: true,
userId: 'user-123',
authType: 'session',
})
mockParseProvider.mockReturnValueOnce({
baseProvider: 'google',
})
const mockAccounts = [
{
id: 'credential-1',
userId: 'user-123',
providerId: 'google-default',
accountId: 'google-user-id',
updatedAt: new Date('2024-01-01'),
idToken: 'mock-jwt-token',
},
]
mockJwtDecode.mockReturnValueOnce({
email: 'decoded@example.com',
name: 'Decoded User',
})
mockDb.select.mockReturnValueOnce(mockDb)
mockDb.from.mockReturnValueOnce(mockDb)
mockDb.where.mockResolvedValueOnce(mockAccounts)
const req = createMockRequestWithQuery('GET', '?provider=google')
const { GET } = await import('@/app/api/auth/oauth/credentials/route')
const req = createMockRequestWithQuery('GET', '?provider=google-email')
const response = await GET(req)
const data = await response.json()
expect(response.status).toBe(200)
expect(data.credentials[0].name).toBe('decoded@example.com')
})
it('should handle database error', async () => {
mockGetSession.mockResolvedValueOnce({
user: { id: 'user-123' },
})
mockParseProvider.mockReturnValueOnce({
baseProvider: 'google',
})
mockDb.select.mockReturnValueOnce(mockDb)
mockDb.from.mockReturnValueOnce(mockDb)
mockDb.where.mockRejectedValueOnce(new Error('Database error'))
const req = createMockRequestWithQuery('GET', '?provider=google')
const { GET } = await import('@/app/api/auth/oauth/credentials/route')
const response = await GET(req)
const data = await response.json()
expect(response.status).toBe(500)
expect(data.error).toBe('Internal server error')
expect(mockLogger.error).toHaveBeenCalled()
expect(data.credentials).toHaveLength(0)
})
})


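The tests above stub Drizzle's fluent query builder by making every intermediate method (`select`, `from`, `where`) return the mock itself, with the terminal call resolving the queued rows — the `vi.fn().mockReturnThis()` pattern. A minimal plain-TypeScript sketch of that shape (no vitest; the row contents are hypothetical):

```typescript
// Chainable stub for a Drizzle-style query builder: intermediate
// methods return the stub itself; the terminal method resolves rows.
type Row = Record<string, unknown>

interface DbStub {
  select(): DbStub
  from(): DbStub
  where(): Promise<Row[]>
}

function createDbStub(rows: Row[]): DbStub {
  const stub: DbStub = {
    select: () => stub,
    from: () => stub,
    where: () => Promise.resolve(rows), // terminal: resolves like an awaited query
  }
  return stub
}

async function demo(): Promise<number> {
  const db = createDbStub([{ id: 'credential-1', providerId: 'google-email' }])
  const result = await db.select().from().where()
  return result.length
}

demo().then((n) => console.log(n))
```

In the real tests, `mockReturnValueOnce(mockDb)` per step plus a final `mockResolvedValueOnce(...)` achieves the same chain, while also letting each query in a test resolve different rows.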
@@ -1,14 +1,15 @@
import { db } from '@sim/db'
import { account, user } from '@sim/db/schema'
import { account, credential, credentialMember } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { jwtDecode } from 'jwt-decode'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { evaluateScopeCoverage, type OAuthProvider, parseProvider } from '@/lib/oauth'
import { syncWorkspaceOAuthCredentialsForUser } from '@/lib/credentials/oauth'
import { evaluateScopeCoverage } from '@/lib/oauth'
import { authorizeWorkflowByWorkspacePermission } from '@/lib/workflows/utils'
import { checkWorkspaceAccess } from '@/lib/workspaces/permissions/utils'
export const dynamic = 'force-dynamic'
@@ -18,6 +19,7 @@ const credentialsQuerySchema = z
.object({
provider: z.string().nullish(),
workflowId: z.string().uuid('Workflow ID must be a valid UUID').nullish(),
workspaceId: z.string().uuid('Workspace ID must be a valid UUID').nullish(),
credentialId: z
.string()
.min(1, 'Credential ID must not be empty')
@@ -29,10 +31,30 @@ const credentialsQuerySchema = z
path: ['provider'],
})
interface GoogleIdToken {
email?: string
sub?: string
name?: string
function toCredentialResponse(
id: string,
displayName: string,
providerId: string,
updatedAt: Date,
scope: string | null
) {
const storedScope = scope?.trim()
const grantedScopes = storedScope ? storedScope.split(/[\s,]+/).filter(Boolean) : []
const scopeEvaluation = evaluateScopeCoverage(providerId, grantedScopes)
const [_, featureType = 'default'] = providerId.split('-')
return {
id,
name: displayName,
provider: providerId,
lastUsed: updatedAt.toISOString(),
isDefault: featureType === 'default',
scopes: scopeEvaluation.grantedScopes,
canonicalScopes: scopeEvaluation.canonicalScopes,
missingScopes: scopeEvaluation.missingScopes,
extraScopes: scopeEvaluation.extraScopes,
requiresReauthorization: scopeEvaluation.requiresReauthorization,
}
}
/**
@@ -46,6 +68,7 @@ export async function GET(request: NextRequest) {
const rawQuery = {
provider: searchParams.get('provider'),
workflowId: searchParams.get('workflowId'),
workspaceId: searchParams.get('workspaceId'),
credentialId: searchParams.get('credentialId'),
}
@@ -78,7 +101,7 @@ export async function GET(request: NextRequest) {
)
}
const { provider: providerParam, workflowId, credentialId } = parseResult.data
const { provider: providerParam, workflowId, workspaceId, credentialId } = parseResult.data
// Authenticate requester (supports session and internal JWT)
const authResult = await checkSessionOrInternalAuth(request)
@@ -88,7 +111,7 @@ export async function GET(request: NextRequest) {
}
const requesterUserId = authResult.userId
const effectiveUserId = requesterUserId
let effectiveWorkspaceId = workspaceId ?? undefined
if (workflowId) {
const workflowAuthorization = await authorizeWorkflowByWorkspacePermission({
workflowId,
@@ -106,105 +129,125 @@ export async function GET(request: NextRequest) {
{ status: workflowAuthorization.status }
)
}
effectiveWorkspaceId = workflowAuthorization.workflow?.workspaceId || undefined
}
// Parse the provider to get base provider and feature type (if provider is present)
const { baseProvider } = parseProvider((providerParam || 'google') as OAuthProvider)
let accountsData
if (credentialId && workflowId) {
// When both workflowId and credentialId are provided, fetch by ID only.
// Workspace authorization above already proves access; the credential
// may belong to another workspace member (e.g. for display name resolution).
accountsData = await db.select().from(account).where(eq(account.id, credentialId))
} else if (credentialId) {
accountsData = await db
.select()
.from(account)
.where(and(eq(account.userId, effectiveUserId), eq(account.id, credentialId)))
} else {
// Fetch all credentials for provider and effective user
accountsData = await db
.select()
.from(account)
.where(and(eq(account.userId, effectiveUserId), eq(account.providerId, providerParam!)))
if (effectiveWorkspaceId) {
const workspaceAccess = await checkWorkspaceAccess(effectiveWorkspaceId, requesterUserId)
if (!workspaceAccess.hasAccess) {
return NextResponse.json({ error: 'Forbidden' }, { status: 403 })
}
}
// Transform accounts into credentials
const credentials = await Promise.all(
accountsData.map(async (acc) => {
// Extract the feature type from providerId (e.g., 'google-default' -> 'default')
const [_, featureType = 'default'] = acc.providerId.split('-')
if (credentialId) {
const [platformCredential] = await db
.select({
id: credential.id,
workspaceId: credential.workspaceId,
type: credential.type,
displayName: credential.displayName,
providerId: credential.providerId,
accountId: credential.accountId,
accountProviderId: account.providerId,
accountScope: account.scope,
accountUpdatedAt: account.updatedAt,
})
.from(credential)
.leftJoin(account, eq(credential.accountId, account.id))
.where(eq(credential.id, credentialId))
.limit(1)
// Try multiple methods to get a user-friendly display name
let displayName = ''
if (platformCredential) {
if (platformCredential.type !== 'oauth' || !platformCredential.accountId) {
return NextResponse.json({ credentials: [] }, { status: 200 })
}
// Method 1: Try to extract email from ID token (works for Google, etc.)
if (acc.idToken) {
try {
const decoded = jwtDecode<GoogleIdToken>(acc.idToken)
if (decoded.email) {
displayName = decoded.email
} else if (decoded.name) {
displayName = decoded.name
}
} catch (_error) {
logger.warn(`[${requestId}] Error decoding ID token`, {
accountId: acc.id,
})
if (workflowId) {
if (!effectiveWorkspaceId || platformCredential.workspaceId !== effectiveWorkspaceId) {
return NextResponse.json({ error: 'Forbidden' }, { status: 403 })
}
} else {
const [membership] = await db
.select({ id: credentialMember.id })
.from(credentialMember)
.where(
and(
eq(credentialMember.credentialId, platformCredential.id),
eq(credentialMember.userId, requesterUserId),
eq(credentialMember.status, 'active')
)
)
.limit(1)
if (!membership) {
return NextResponse.json({ error: 'Forbidden' }, { status: 403 })
}
}
// Method 2: For GitHub, the accountId might be the username
if (!displayName && baseProvider === 'github') {
displayName = `${acc.accountId} (GitHub)`
if (!platformCredential.accountProviderId || !platformCredential.accountUpdatedAt) {
return NextResponse.json({ credentials: [] }, { status: 200 })
}
// Method 3: Try to get the user's email from our database
if (!displayName) {
try {
const userRecord = await db
.select({ email: user.email })
.from(user)
.where(eq(user.id, acc.userId))
.limit(1)
return NextResponse.json(
{
credentials: [
toCredentialResponse(
platformCredential.id,
platformCredential.displayName,
platformCredential.accountProviderId,
platformCredential.accountUpdatedAt,
platformCredential.accountScope
),
],
},
{ status: 200 }
)
}
}
if (userRecord.length > 0) {
displayName = userRecord[0].email
}
} catch (_error) {
logger.warn(`[${requestId}] Error fetching user email`, {
userId: acc.userId,
})
}
}
// Fallback: Use accountId with provider type as context
if (!displayName) {
displayName = `${acc.accountId} (${baseProvider})`
}
const storedScope = acc.scope?.trim()
const grantedScopes = storedScope ? storedScope.split(/[\s,]+/).filter(Boolean) : []
const scopeEvaluation = evaluateScopeCoverage(acc.providerId, grantedScopes)
return {
id: acc.id,
name: displayName,
provider: acc.providerId,
lastUsed: acc.updatedAt.toISOString(),
isDefault: featureType === 'default',
scopes: scopeEvaluation.grantedScopes,
canonicalScopes: scopeEvaluation.canonicalScopes,
missingScopes: scopeEvaluation.missingScopes,
extraScopes: scopeEvaluation.extraScopes,
requiresReauthorization: scopeEvaluation.requiresReauthorization,
}
if (effectiveWorkspaceId && providerParam) {
await syncWorkspaceOAuthCredentialsForUser({
workspaceId: effectiveWorkspaceId,
userId: requesterUserId,
})
)
return NextResponse.json({ credentials }, { status: 200 })
const credentialsData = await db
.select({
id: credential.id,
displayName: credential.displayName,
providerId: account.providerId,
scope: account.scope,
updatedAt: account.updatedAt,
})
.from(credential)
.innerJoin(account, eq(credential.accountId, account.id))
.innerJoin(
credentialMember,
and(
eq(credentialMember.credentialId, credential.id),
eq(credentialMember.userId, requesterUserId),
eq(credentialMember.status, 'active')
)
)
.where(
and(
eq(credential.workspaceId, effectiveWorkspaceId),
eq(credential.type, 'oauth'),
eq(account.providerId, providerParam)
)
)
return NextResponse.json(
{
credentials: credentialsData.map((row) =>
toCredentialResponse(row.id, row.displayName, row.providerId, row.updatedAt, row.scope)
),
},
{ status: 200 }
)
}
return NextResponse.json({ credentials: [] }, { status: 200 })
} catch (error) {
logger.error(`[${requestId}] Error fetching OAuth credentials`, error)
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })

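The `toCredentialResponse` helper in the route above parses the stored scope string by splitting on whitespace or commas and dropping empties, and derives the feature type from the providerId suffix (`google-default` → `default`, bare `github` → `default`). A small sketch of just those two parsing steps, extracted for illustration:

```typescript
// Scope parsing as done in toCredentialResponse: split the stored
// scope string on whitespace/commas, discard empty fragments.
function parseScopes(scope: string | null): string[] {
  const trimmed = scope?.trim()
  return trimmed ? trimmed.split(/[\s,]+/).filter(Boolean) : []
}

// Feature type is whatever follows the first '-' in the providerId;
// a bare provider id falls back to 'default'.
function featureTypeOf(providerId: string): string {
  const [, featureType = 'default'] = providerId.split('-')
  return featureType
}

console.log(parseScopes(' openid email,profile '))
console.log(featureTypeOf('google-default'))
console.log(featureTypeOf('google-email'))
```

The `isDefault` flag in the response is then simply `featureType === 'default'`, which is why both `google` and `google-default` credentials report as default.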

@@ -3,76 +3,102 @@
*
* @vitest-environment node
*/
import { auditMock, createMockLogger, createMockRequest } from '@sim/testing'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
import { createMockRequest } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
describe('OAuth Disconnect API Route', () => {
const mockGetSession = vi.fn()
const mockSelectChain = {
from: vi.fn().mockReturnThis(),
innerJoin: vi.fn().mockReturnThis(),
where: vi.fn().mockResolvedValue([]),
}
const mockDb = {
delete: vi.fn().mockReturnThis(),
where: vi.fn(),
select: vi.fn().mockReturnValue(mockSelectChain),
}
const mockLogger = createMockLogger()
const mockSyncAllWebhooksForCredentialSet = vi.fn().mockResolvedValue({})
const mockUUID = 'mock-uuid-12345678-90ab-cdef-1234-567890abcdef'
beforeEach(() => {
vi.resetModules()
vi.stubGlobal('crypto', {
randomUUID: vi.fn().mockReturnValue(mockUUID),
})
vi.doMock('@/lib/auth', () => ({
getSession: mockGetSession,
}))
vi.doMock('@sim/db', () => ({
db: mockDb,
}))
vi.doMock('@sim/db/schema', () => ({
account: { userId: 'userId', providerId: 'providerId' },
credentialSetMember: {
id: 'id',
credentialSetId: 'credentialSetId',
userId: 'userId',
status: 'status',
},
credentialSet: { id: 'id', providerId: 'providerId' },
}))
vi.doMock('drizzle-orm', () => ({
and: vi.fn((...conditions) => ({ conditions, type: 'and' })),
eq: vi.fn((field, value) => ({ field, value, type: 'eq' })),
like: vi.fn((field, value) => ({ field, value, type: 'like' })),
or: vi.fn((...conditions) => ({ conditions, type: 'or' })),
}))
vi.doMock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue(mockLogger),
}))
vi.doMock('@/lib/core/utils/request', () => ({
generateRequestId: vi.fn().mockReturnValue('test-request-id'),
}))
vi.doMock('@/lib/webhooks/utils.server', () => ({
syncAllWebhooksForCredentialSet: mockSyncAllWebhooksForCredentialSet,
}))
vi.doMock('@/lib/audit/log', () => auditMock)
const { mockGetSession, mockDb, mockSelectChain, mockLogger, mockSyncAllWebhooksForCredentialSet } =
vi.hoisted(() => {
const selectChain = {
from: vi.fn().mockReturnThis(),
innerJoin: vi.fn().mockReturnThis(),
where: vi.fn().mockResolvedValue([]),
}
const db = {
delete: vi.fn().mockReturnThis(),
where: vi.fn(),
select: vi.fn().mockReturnValue(selectChain),
}
const logger = {
info: vi.fn(),
warn: vi.fn(),
error: vi.fn(),
debug: vi.fn(),
trace: vi.fn(),
fatal: vi.fn(),
child: vi.fn(),
}
return {
mockGetSession: vi.fn(),
mockDb: db,
mockSelectChain: selectChain,
mockLogger: logger,
mockSyncAllWebhooksForCredentialSet: vi.fn().mockResolvedValue({}),
}
})
afterEach(() => {
vi.mock('@/lib/auth', () => ({
getSession: mockGetSession,
}))
vi.mock('@sim/db', () => ({
db: mockDb,
}))
vi.mock('@sim/db/schema', () => ({
account: { userId: 'userId', providerId: 'providerId' },
credentialSetMember: {
id: 'id',
credentialSetId: 'credentialSetId',
userId: 'userId',
status: 'status',
},
credentialSet: { id: 'id', providerId: 'providerId' },
}))
vi.mock('drizzle-orm', () => ({
and: vi.fn((...conditions: unknown[]) => ({ conditions, type: 'and' })),
eq: vi.fn((field: unknown, value: unknown) => ({ field, value, type: 'eq' })),
like: vi.fn((field: unknown, value: unknown) => ({ field, value, type: 'like' })),
or: vi.fn((...conditions: unknown[]) => ({ conditions, type: 'or' })),
}))
vi.mock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue(mockLogger),
}))
vi.mock('@/lib/core/utils/request', () => ({
generateRequestId: vi.fn().mockReturnValue('test-request-id'),
}))
vi.mock('@/lib/webhooks/utils.server', () => ({
syncAllWebhooksForCredentialSet: mockSyncAllWebhooksForCredentialSet,
}))
vi.mock('@/lib/audit/log', () => ({
recordAudit: vi.fn(),
AuditAction: {
CREDENTIAL_SET_CREATED: 'credential_set.created',
CREDENTIAL_SET_UPDATED: 'credential_set.updated',
CREDENTIAL_SET_DELETED: 'credential_set.deleted',
OAUTH_CONNECTED: 'oauth.connected',
OAUTH_DISCONNECTED: 'oauth.disconnected',
},
AuditResourceType: {
CREDENTIAL_SET: 'credential_set',
OAUTH_CONNECTION: 'oauth_connection',
},
}))
import { POST } from '@/app/api/auth/oauth/disconnect/route'
describe('OAuth Disconnect API Route', () => {
beforeEach(() => {
vi.clearAllMocks()
mockDb.delete.mockReturnThis()
mockSelectChain.from.mockReturnThis()
mockSelectChain.innerJoin.mockReturnThis()
mockSelectChain.where.mockResolvedValue([])
})
it('should disconnect provider successfully', async () => {
@@ -87,8 +113,6 @@ describe('OAuth Disconnect API Route', () => {
provider: 'google',
})
const { POST } = await import('@/app/api/auth/oauth/disconnect/route')
const response = await POST(req)
const data = await response.json()
@@ -110,8 +134,6 @@ describe('OAuth Disconnect API Route', () => {
providerId: 'google-email',
})
const { POST } = await import('@/app/api/auth/oauth/disconnect/route')
const response = await POST(req)
const data = await response.json()
@@ -127,8 +149,6 @@ describe('OAuth Disconnect API Route', () => {
provider: 'google',
})
const { POST } = await import('@/app/api/auth/oauth/disconnect/route')
const response = await POST(req)
const data = await response.json()
@@ -144,8 +164,6 @@ describe('OAuth Disconnect API Route', () => {
const req = createMockRequest('POST', {})
const { POST } = await import('@/app/api/auth/oauth/disconnect/route')
const response = await POST(req)
const data = await response.json()
@@ -166,8 +184,6 @@ describe('OAuth Disconnect API Route', () => {
provider: 'google',
})
const { POST } = await import('@/app/api/auth/oauth/disconnect/route')
const response = await POST(req)
const data = await response.json()


@@ -16,6 +16,7 @@ const logger = createLogger('OAuthDisconnectAPI')
const disconnectSchema = z.object({
provider: z.string({ required_error: 'Provider is required' }).min(1, 'Provider is required'),
providerId: z.string().optional(),
accountId: z.string().optional(),
})
/**
@@ -51,15 +52,20 @@ export async function POST(request: NextRequest) {
)
}
const { provider, providerId } = parseResult.data
const { provider, providerId, accountId } = parseResult.data
logger.info(`[${requestId}] Processing OAuth disconnect request`, {
provider,
hasProviderId: !!providerId,
})
// If a specific providerId is provided, delete only that account
if (providerId) {
// If a specific account row ID is provided, delete that exact account
if (accountId) {
await db
.delete(account)
.where(and(eq(account.userId, session.user.id), eq(account.id, accountId)))
} else if (providerId) {
// If a specific providerId is provided, delete accounts for that provider ID
await db
.delete(account)
.where(and(eq(account.userId, session.user.id), eq(account.providerId, providerId)))

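The disconnect handler above applies a precedence: when `accountId` is present it deletes that exact account row, otherwise it falls back to deleting by `providerId`, always scoped to the session user. An in-memory sketch of that precedence as a pure filter (the route itself issues Drizzle deletes; row shapes here are assumptions):

```typescript
// Precedence from the disconnect route: exact account row id wins
// over a providerId match; rows of other users are never touched.
interface AccountRow {
  id: string
  userId: string
  providerId: string
}

function remainingAfterDisconnect(
  accounts: AccountRow[],
  userId: string,
  opts: { accountId?: string; providerId?: string }
): AccountRow[] {
  return accounts.filter((a) => {
    if (a.userId !== userId) return true // other users' rows survive
    if (opts.accountId) return a.id !== opts.accountId
    if (opts.providerId) return a.providerId !== opts.providerId
    return true
  })
}
```

With `{ accountId }` exactly one row goes away; with `{ providerId }` every matching row for that user goes away, which mirrors the `if (accountId) ... else if (providerId)` branching in the route.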

@@ -38,13 +38,18 @@ export async function GET(request: NextRequest) {
return NextResponse.json({ error: authz.error || 'Unauthorized' }, { status })
}
const credential = await getCredential(requestId, credentialId, authz.credentialOwnerUserId)
const resolvedCredentialId = authz.resolvedCredentialId || credentialId
const credential = await getCredential(
requestId,
resolvedCredentialId,
authz.credentialOwnerUserId
)
if (!credential) {
return NextResponse.json({ error: 'Credential not found' }, { status: 404 })
}
const accessToken = await refreshAccessTokenIfNeeded(
credentialId,
resolvedCredentialId,
authz.credentialOwnerUserId,
requestId
)


@@ -37,14 +37,19 @@ export async function GET(request: NextRequest) {
return NextResponse.json({ error: authz.error || 'Unauthorized' }, { status })
}
const credential = await getCredential(requestId, credentialId, authz.credentialOwnerUserId)
const resolvedCredentialId = authz.resolvedCredentialId || credentialId
const credential = await getCredential(
requestId,
resolvedCredentialId,
authz.credentialOwnerUserId
)
if (!credential) {
return NextResponse.json({ error: 'Credential not found' }, { status: 404 })
}
// Refresh access token if needed using the utility function
const accessToken = await refreshAccessTokenIfNeeded(
credentialId,
resolvedCredentialId,
authz.credentialOwnerUserId,
requestId
)


@@ -3,48 +3,63 @@
*
* @vitest-environment node
*/
import { createMockLogger, createMockRequest, mockHybridAuth } from '@sim/testing'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
import { createMockRequest } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
const {
mockGetUserId,
mockGetCredential,
mockRefreshTokenIfNeeded,
mockGetOAuthToken,
mockAuthorizeCredentialUse,
mockCheckSessionOrInternalAuth,
mockLogger,
} = vi.hoisted(() => {
const logger = {
info: vi.fn(),
warn: vi.fn(),
error: vi.fn(),
debug: vi.fn(),
trace: vi.fn(),
fatal: vi.fn(),
child: vi.fn(),
}
return {
mockGetUserId: vi.fn(),
mockGetCredential: vi.fn(),
mockRefreshTokenIfNeeded: vi.fn(),
mockGetOAuthToken: vi.fn(),
mockAuthorizeCredentialUse: vi.fn(),
mockCheckSessionOrInternalAuth: vi.fn(),
mockLogger: logger,
}
})
vi.mock('@/app/api/auth/oauth/utils', () => ({
getUserId: mockGetUserId,
getCredential: mockGetCredential,
refreshTokenIfNeeded: mockRefreshTokenIfNeeded,
getOAuthToken: mockGetOAuthToken,
}))
vi.mock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue(mockLogger),
}))
vi.mock('@/lib/auth/credential-access', () => ({
authorizeCredentialUse: mockAuthorizeCredentialUse,
}))
vi.mock('@/lib/auth/hybrid', () => ({
checkHybridAuth: vi.fn(),
checkSessionOrInternalAuth: mockCheckSessionOrInternalAuth,
checkInternalAuth: vi.fn(),
}))
import { GET, POST } from '@/app/api/auth/oauth/token/route'
describe('OAuth Token API Routes', () => {
const mockGetUserId = vi.fn()
const mockGetCredential = vi.fn()
const mockRefreshTokenIfNeeded = vi.fn()
const mockGetOAuthToken = vi.fn()
const mockAuthorizeCredentialUse = vi.fn()
let mockCheckSessionOrInternalAuth: ReturnType<typeof vi.fn>
const mockLogger = createMockLogger()
const mockUUID = 'mock-uuid-12345678-90ab-cdef-1234-567890abcdef'
const mockRequestId = mockUUID.slice(0, 8)
beforeEach(() => {
vi.resetModules()
vi.stubGlobal('crypto', {
randomUUID: vi.fn().mockReturnValue(mockUUID),
})
vi.doMock('@/app/api/auth/oauth/utils', () => ({
getUserId: mockGetUserId,
getCredential: mockGetCredential,
refreshTokenIfNeeded: mockRefreshTokenIfNeeded,
getOAuthToken: mockGetOAuthToken,
}))
vi.doMock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue(mockLogger),
}))
vi.doMock('@/lib/auth/credential-access', () => ({
authorizeCredentialUse: mockAuthorizeCredentialUse,
}))
;({ mockCheckSessionOrInternalAuth } = mockHybridAuth())
})
afterEach(() => {
vi.clearAllMocks()
})
@@ -75,8 +90,6 @@ describe('OAuth Token API Routes', () => {
credentialId: 'credential-id',
})
const { POST } = await import('@/app/api/auth/oauth/token/route')
const response = await POST(req)
const data = await response.json()
@@ -112,8 +125,6 @@ describe('OAuth Token API Routes', () => {
workflowId: 'workflow-id',
})
const { POST } = await import('@/app/api/auth/oauth/token/route')
const response = await POST(req)
const data = await response.json()
@@ -127,8 +138,6 @@ describe('OAuth Token API Routes', () => {
it('should handle missing credentialId', async () => {
const req = createMockRequest('POST', {})
const { POST } = await import('@/app/api/auth/oauth/token/route')
const response = await POST(req)
const data = await response.json()
@@ -150,8 +159,6 @@ describe('OAuth Token API Routes', () => {
credentialId: 'credential-id',
})
const { POST } = await import('@/app/api/auth/oauth/token/route')
const response = await POST(req)
const data = await response.json()
@@ -167,8 +174,6 @@ describe('OAuth Token API Routes', () => {
workflowId: 'nonexistent-workflow-id',
})
const { POST } = await import('@/app/api/auth/oauth/token/route')
const response = await POST(req)
const data = await response.json()
@@ -188,8 +193,6 @@ describe('OAuth Token API Routes', () => {
credentialId: 'nonexistent-credential-id',
})
const { POST } = await import('@/app/api/auth/oauth/token/route')
const response = await POST(req)
const data = await response.json()
@@ -217,8 +220,6 @@ describe('OAuth Token API Routes', () => {
credentialId: 'credential-id',
})
const { POST } = await import('@/app/api/auth/oauth/token/route')
const response = await POST(req)
const data = await response.json()
@@ -238,8 +239,6 @@ describe('OAuth Token API Routes', () => {
providerId: 'google',
})
const { POST } = await import('@/app/api/auth/oauth/token/route')
const response = await POST(req)
const data = await response.json()
@@ -260,8 +259,6 @@ describe('OAuth Token API Routes', () => {
providerId: 'google',
})
const { POST } = await import('@/app/api/auth/oauth/token/route')
const response = await POST(req)
const data = await response.json()
@@ -282,8 +279,6 @@ describe('OAuth Token API Routes', () => {
providerId: 'google',
})
const { POST } = await import('@/app/api/auth/oauth/token/route')
const response = await POST(req)
const data = await response.json()
@@ -305,8 +300,6 @@ describe('OAuth Token API Routes', () => {
providerId: 'google',
})
const { POST } = await import('@/app/api/auth/oauth/token/route')
const response = await POST(req)
const data = await response.json()
@@ -328,8 +321,6 @@ describe('OAuth Token API Routes', () => {
providerId: 'nonexistent-provider',
})
const { POST } = await import('@/app/api/auth/oauth/token/route')
const response = await POST(req)
const data = await response.json()
@@ -344,10 +335,11 @@ describe('OAuth Token API Routes', () => {
*/
describe('GET handler', () => {
it('should return access token successfully', async () => {
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: true,
mockAuthorizeCredentialUse.mockResolvedValueOnce({
ok: true,
authType: 'session',
userId: 'test-user-id',
requesterUserId: 'test-user-id',
credentialOwnerUserId: 'test-user-id',
})
mockGetCredential.mockResolvedValueOnce({
id: 'credential-id',
@@ -365,24 +357,20 @@ describe('OAuth Token API Routes', () => {
'http://localhost:3000/api/auth/oauth/token?credentialId=credential-id'
)
const { GET } = await import('@/app/api/auth/oauth/token/route')
const response = await GET(req as any)
const data = await response.json()
expect(response.status).toBe(200)
expect(data).toHaveProperty('accessToken', 'fresh-token')
expect(mockCheckSessionOrInternalAuth).toHaveBeenCalled()
expect(mockGetCredential).toHaveBeenCalledWith(mockRequestId, 'credential-id', 'test-user-id')
expect(mockAuthorizeCredentialUse).toHaveBeenCalled()
expect(mockGetCredential).toHaveBeenCalled()
expect(mockRefreshTokenIfNeeded).toHaveBeenCalled()
})
it('should handle missing credentialId', async () => {
const req = new Request('http://localhost:3000/api/auth/oauth/token')
const { GET } = await import('@/app/api/auth/oauth/token/route')
const response = await GET(req as any)
const data = await response.json()
@@ -392,8 +380,8 @@ describe('OAuth Token API Routes', () => {
})
it('should handle authentication failure', async () => {
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: false,
mockAuthorizeCredentialUse.mockResolvedValueOnce({
ok: false,
error: 'Authentication required',
})
@@ -401,20 +389,19 @@ describe('OAuth Token API Routes', () => {
'http://localhost:3000/api/auth/oauth/token?credentialId=credential-id'
)
const { GET } = await import('@/app/api/auth/oauth/token/route')
const response = await GET(req as any)
const data = await response.json()
expect(response.status).toBe(401)
expect(response.status).toBe(403)
expect(data).toHaveProperty('error')
})
it('should handle credential not found', async () => {
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: true,
mockAuthorizeCredentialUse.mockResolvedValueOnce({
ok: true,
authType: 'session',
userId: 'test-user-id',
requesterUserId: 'test-user-id',
credentialOwnerUserId: 'test-user-id',
})
mockGetCredential.mockResolvedValueOnce(undefined)
@@ -422,8 +409,6 @@ describe('OAuth Token API Routes', () => {
'http://localhost:3000/api/auth/oauth/token?credentialId=nonexistent-credential-id'
)
const { GET } = await import('@/app/api/auth/oauth/token/route')
const response = await GET(req as any)
const data = await response.json()
@@ -432,10 +417,11 @@ describe('OAuth Token API Routes', () => {
})
it('should handle missing access token', async () => {
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: true,
mockAuthorizeCredentialUse.mockResolvedValueOnce({
ok: true,
authType: 'session',
userId: 'test-user-id',
requesterUserId: 'test-user-id',
credentialOwnerUserId: 'test-user-id',
})
mockGetCredential.mockResolvedValueOnce({
id: 'credential-id',
@@ -448,8 +434,6 @@ describe('OAuth Token API Routes', () => {
'http://localhost:3000/api/auth/oauth/token?credentialId=credential-id'
)
const { GET } = await import('@/app/api/auth/oauth/token/route')
const response = await GET(req as any)
const data = await response.json()
@@ -458,10 +442,11 @@ describe('OAuth Token API Routes', () => {
})
it('should handle token refresh failure', async () => {
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: true,
mockAuthorizeCredentialUse.mockResolvedValueOnce({
ok: true,
authType: 'session',
userId: 'test-user-id',
requesterUserId: 'test-user-id',
credentialOwnerUserId: 'test-user-id',
})
mockGetCredential.mockResolvedValueOnce({
id: 'credential-id',
@@ -476,8 +461,6 @@ describe('OAuth Token API Routes', () => {
'http://localhost:3000/api/auth/oauth/token?credentialId=credential-id'
)
const { GET } = await import('@/app/api/auth/oauth/token/route')
const response = await GET(req as any)
const data = await response.json()


@@ -110,23 +110,35 @@ export async function POST(request: NextRequest) {
return NextResponse.json({ error: 'Credential ID is required' }, { status: 400 })
}
const callerUserId = new URL(request.url).searchParams.get('userId') || undefined
const authz = await authorizeCredentialUse(request, {
credentialId,
workflowId: workflowId ?? undefined,
requireWorkflowIdForInternal: false,
callerUserId,
})
if (!authz.ok || !authz.credentialOwnerUserId) {
return NextResponse.json({ error: authz.error || 'Unauthorized' }, { status: 403 })
}
const credential = await getCredential(requestId, credentialId, authz.credentialOwnerUserId)
const resolvedCredentialId = authz.resolvedCredentialId || credentialId
const credential = await getCredential(
requestId,
resolvedCredentialId,
authz.credentialOwnerUserId
)
if (!credential) {
return NextResponse.json({ error: 'Credential not found' }, { status: 404 })
}
try {
const { accessToken } = await refreshTokenIfNeeded(requestId, credential, credentialId)
const { accessToken } = await refreshTokenIfNeeded(
requestId,
credential,
resolvedCredentialId
)
let instanceUrl: string | undefined
if (credential.providerId === 'salesforce' && credential.scope) {
@@ -186,13 +198,20 @@ export async function GET(request: NextRequest) {
const { credentialId } = parseResult.data
// For GET requests, we only support session-based authentication
const auth = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!auth.success || auth.authType !== 'session' || !auth.userId) {
return NextResponse.json({ error: 'User not authenticated' }, { status: 401 })
const authz = await authorizeCredentialUse(request, {
credentialId,
requireWorkflowIdForInternal: false,
})
if (!authz.ok || authz.authType !== 'session' || !authz.credentialOwnerUserId) {
return NextResponse.json({ error: authz.error || 'Unauthorized' }, { status: 403 })
}
const credential = await getCredential(requestId, credentialId, auth.userId)
const resolvedCredentialId = authz.resolvedCredentialId || credentialId
const credential = await getCredential(
requestId,
resolvedCredentialId,
authz.credentialOwnerUserId
)
if (!credential) {
return NextResponse.json({ error: 'Credential not found' }, { status: 404 })
@@ -204,7 +223,11 @@ export async function GET(request: NextRequest) {
}
try {
const { accessToken } = await refreshTokenIfNeeded(requestId, credential, credentialId)
const { accessToken } = await refreshTokenIfNeeded(
requestId,
credential,
resolvedCredentialId
)
// For Salesforce, extract instanceUrl from the scope field
let instanceUrl: string | undefined

View File

@@ -62,21 +62,23 @@ describe('OAuth Utils', () => {
describe('getCredential', () => {
it('should return credential when found', async () => {
const mockCredential = { id: 'credential-id', userId: 'test-user-id' }
const { mockFrom, mockWhere, mockLimit } = mockSelectChain([mockCredential])
const mockCredentialRow = { type: 'oauth', accountId: 'resolved-account-id' }
const mockAccountRow = { id: 'resolved-account-id', userId: 'test-user-id' }
mockSelectChain([mockCredentialRow])
mockSelectChain([mockAccountRow])
const credential = await getCredential('request-id', 'credential-id', 'test-user-id')
expect(mockDb.select).toHaveBeenCalled()
expect(mockFrom).toHaveBeenCalled()
expect(mockWhere).toHaveBeenCalled()
expect(mockLimit).toHaveBeenCalledWith(1)
expect(mockDb.select).toHaveBeenCalledTimes(2)
expect(credential).toEqual(mockCredential)
expect(credential).toMatchObject(mockAccountRow)
expect(credential).toMatchObject({ resolvedCredentialId: 'resolved-account-id' })
})
it('should return undefined when credential is not found', async () => {
mockSelectChain([])
mockSelectChain([])
const credential = await getCredential('request-id', 'nonexistent-id', 'test-user-id')
@@ -158,15 +160,17 @@ describe('OAuth Utils', () => {
describe('refreshAccessTokenIfNeeded', () => {
it('should return valid access token without refresh if not expired', async () => {
const mockCredential = {
id: 'credential-id',
const mockCredentialRow = { type: 'oauth', accountId: 'account-id' }
const mockAccountRow = {
id: 'account-id',
accessToken: 'valid-token',
refreshToken: 'refresh-token',
accessTokenExpiresAt: new Date(Date.now() + 3600 * 1000),
providerId: 'google',
userId: 'test-user-id',
}
mockSelectChain([mockCredential])
mockSelectChain([mockCredentialRow])
mockSelectChain([mockAccountRow])
const token = await refreshAccessTokenIfNeeded('credential-id', 'test-user-id', 'request-id')
@@ -175,15 +179,17 @@ describe('OAuth Utils', () => {
})
it('should refresh token when expired', async () => {
const mockCredential = {
id: 'credential-id',
const mockCredentialRow = { type: 'oauth', accountId: 'account-id' }
const mockAccountRow = {
id: 'account-id',
accessToken: 'expired-token',
refreshToken: 'refresh-token',
accessTokenExpiresAt: new Date(Date.now() - 3600 * 1000),
providerId: 'google',
userId: 'test-user-id',
}
mockSelectChain([mockCredential])
mockSelectChain([mockCredentialRow])
mockSelectChain([mockAccountRow])
mockUpdateChain()
mockRefreshOAuthToken.mockResolvedValueOnce({
@@ -201,6 +207,7 @@ describe('OAuth Utils', () => {
it('should return null if credential not found', async () => {
mockSelectChain([])
mockSelectChain([])
const token = await refreshAccessTokenIfNeeded('nonexistent-id', 'test-user-id', 'request-id')
@@ -208,15 +215,17 @@ describe('OAuth Utils', () => {
})
it('should return null if refresh fails', async () => {
const mockCredential = {
id: 'credential-id',
const mockCredentialRow = { type: 'oauth', accountId: 'account-id' }
const mockAccountRow = {
id: 'account-id',
accessToken: 'expired-token',
refreshToken: 'refresh-token',
accessTokenExpiresAt: new Date(Date.now() - 3600 * 1000),
providerId: 'google',
userId: 'test-user-id',
}
mockSelectChain([mockCredential])
mockSelectChain([mockCredentialRow])
mockSelectChain([mockAccountRow])
mockRefreshOAuthToken.mockResolvedValueOnce(null)

View File

@@ -1,5 +1,5 @@
import { db } from '@sim/db'
import { account, credentialSetMember } from '@sim/db/schema'
import { account, credential, credentialSetMember } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, desc, eq, inArray } from 'drizzle-orm'
import { refreshOAuthToken } from '@/lib/oauth'
@@ -25,6 +25,38 @@ interface AccountInsertData {
accessTokenExpiresAt?: Date
}
/**
* Resolves a credential ID to its underlying account ID.
* If `credentialId` matches a `credential` row, returns its `accountId` and `workspaceId`.
* Otherwise assumes `credentialId` is already a raw `account.id` (legacy).
*/
export async function resolveOAuthAccountId(
credentialId: string
): Promise<{ accountId: string; workspaceId?: string; usedCredentialTable: boolean } | null> {
const [credentialRow] = await db
.select({
type: credential.type,
accountId: credential.accountId,
workspaceId: credential.workspaceId,
})
.from(credential)
.where(eq(credential.id, credentialId))
.limit(1)
if (credentialRow) {
if (credentialRow.type !== 'oauth' || !credentialRow.accountId) {
return null
}
return {
accountId: credentialRow.accountId,
workspaceId: credentialRow.workspaceId,
usedCredentialTable: true,
}
}
return { accountId: credentialId, usedCredentialTable: false }
}
/**
* Safely inserts an account record, handling duplicate constraint violations gracefully.
* If a duplicate is detected (unique constraint violation), logs a warning and returns success.
@@ -52,10 +84,16 @@ export async function safeAccountInsert(
* Get a credential by ID and verify it belongs to the user
*/
export async function getCredential(requestId: string, credentialId: string, userId: string) {
const resolved = await resolveOAuthAccountId(credentialId)
if (!resolved) {
logger.warn(`[${requestId}] Credential is not an OAuth credential`)
return undefined
}
const credentials = await db
.select()
.from(account)
.where(and(eq(account.id, credentialId), eq(account.userId, userId)))
.where(and(eq(account.id, resolved.accountId), eq(account.userId, userId)))
.limit(1)
if (!credentials.length) {
@@ -63,7 +101,10 @@ export async function getCredential(requestId: string, credentialId: string, use
return undefined
}
return credentials[0]
return {
...credentials[0],
resolvedCredentialId: resolved.accountId,
}
}
export async function getOAuthToken(userId: string, providerId: string): Promise<string | null> {
@@ -238,7 +279,9 @@ export async function refreshAccessTokenIfNeeded(
}
// Update the token in the database
await db.update(account).set(updateData).where(eq(account.id, credentialId))
const resolvedCredentialId =
(credential as { resolvedCredentialId?: string }).resolvedCredentialId ?? credentialId
await db.update(account).set(updateData).where(eq(account.id, resolvedCredentialId))
logger.info(`[${requestId}] Successfully refreshed access token for credential`)
return refreshedToken.accessToken
@@ -274,6 +317,8 @@ export async function refreshTokenIfNeeded(
credential: any,
credentialId: string
): Promise<{ accessToken: string; refreshed: boolean }> {
const resolvedCredentialId = credential.resolvedCredentialId ?? credentialId
// Decide if we should refresh: token missing OR expired
const accessTokenExpiresAt = credential.accessTokenExpiresAt
const refreshTokenExpiresAt = credential.refreshTokenExpiresAt
@@ -334,7 +379,7 @@ export async function refreshTokenIfNeeded(
updateData.refreshTokenExpiresAt = getMicrosoftRefreshTokenExpiry()
}
await db.update(account).set(updateData).where(eq(account.id, credentialId))
await db.update(account).set(updateData).where(eq(account.id, resolvedCredentialId))
logger.info(`[${requestId}] Successfully refreshed access token`)
return { accessToken: refreshedToken, refreshed: true }
@@ -343,7 +388,7 @@ export async function refreshTokenIfNeeded(
`[${requestId}] Refresh attempt failed, checking if another concurrent request succeeded`
)
const freshCredential = await getCredential(requestId, credentialId, credential.userId)
const freshCredential = await getCredential(requestId, resolvedCredentialId, credential.userId)
if (freshCredential?.accessToken) {
const freshExpiresAt = freshCredential.accessTokenExpiresAt
const stillValid = !freshExpiresAt || freshExpiresAt > new Date()

View File
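The `resolveOAuthAccountId` helper added in the hunk above can be sketched in isolation. This is a minimal model under stated assumptions, not the real implementation: the Drizzle queries against the `credential` and `account` tables are replaced with an in-memory map, and the row shape is inferred from the fields the diff selects.

```typescript
// Hypothetical in-memory stand-in for the `credential` table.
interface CredentialRow {
  type: string
  accountId: string | null
  workspaceId?: string
}

const credentialTable = new Map<string, CredentialRow>([
  ['cred-1', { type: 'oauth', accountId: 'acct-1', workspaceId: 'ws-1' }],
  ['cred-2', { type: 'api_key', accountId: null }],
])

interface Resolved {
  accountId: string
  workspaceId?: string
  usedCredentialTable: boolean
}

// Mirrors the diff's logic: prefer a `credential` row when one exists,
// otherwise fall back to treating the id as a raw account.id (legacy path).
function resolveOAuthAccountId(credentialId: string): Resolved | null {
  const row = credentialTable.get(credentialId)
  if (row) {
    if (row.type !== 'oauth' || !row.accountId) {
      return null // credential exists but is not a usable OAuth credential
    }
    return { accountId: row.accountId, workspaceId: row.workspaceId, usedCredentialTable: true }
  }
  return { accountId: credentialId, usedCredentialTable: false }
}

console.log(resolveOAuthAccountId('cred-1'))  // resolved via credential table
console.log(resolveOAuthAccountId('cred-2'))  // null: non-OAuth credential
console.log(resolveOAuthAccountId('acct-99')) // legacy raw account id passthrough
```

The legacy fallback is what lets existing callers keep passing raw `account.id` values while new callers pass `credential.id`, which is why `resolvedCredentialId` is threaded through the rest of the diff.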

@@ -6,7 +6,7 @@ import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { validateEnum, validatePathSegment } from '@/lib/core/security/input-validation'
import { generateRequestId } from '@/lib/core/utils/request'
import { refreshAccessTokenIfNeeded } from '@/app/api/auth/oauth/utils'
import { refreshAccessTokenIfNeeded, resolveOAuthAccountId } from '@/app/api/auth/oauth/utils'
export const dynamic = 'force-dynamic'
@@ -57,24 +57,41 @@ export async function GET(request: NextRequest) {
return NextResponse.json({ error: itemIdValidation.error }, { status: 400 })
}
const credentials = await db.select().from(account).where(eq(account.id, credentialId)).limit(1)
const resolved = await resolveOAuthAccountId(credentialId)
if (!resolved) {
return NextResponse.json({ error: 'Credential not found' }, { status: 404 })
}
if (resolved.workspaceId) {
const { getUserEntityPermissions } = await import('@/lib/workspaces/permissions/utils')
const perm = await getUserEntityPermissions(
session.user.id,
'workspace',
resolved.workspaceId
)
if (perm === null) {
return NextResponse.json({ error: 'Forbidden' }, { status: 403 })
}
}
const credentials = await db
.select()
.from(account)
.where(eq(account.id, resolved.accountId))
.limit(1)
if (!credentials.length) {
logger.warn(`[${requestId}] Credential not found`, { credentialId })
return NextResponse.json({ error: 'Credential not found' }, { status: 404 })
}
const credential = credentials[0]
const accountRow = credentials[0]
if (credential.userId !== session.user.id) {
logger.warn(`[${requestId}] Unauthorized credential access attempt`, {
credentialUserId: credential.userId,
requestUserId: session.user.id,
})
return NextResponse.json({ error: 'Unauthorized' }, { status: 403 })
}
const accessToken = await refreshAccessTokenIfNeeded(credentialId, session.user.id, requestId)
const accessToken = await refreshAccessTokenIfNeeded(
resolved.accountId,
accountRow.userId,
requestId
)
if (!accessToken) {
logger.error(`[${requestId}] Failed to obtain valid access token`)

View File
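The workspace gate added above can be modeled as a small predicate. This is a sketch under assumptions: the permission store is an in-memory map standing in for `getUserEntityPermissions`, and the permission levels are illustrative. The key semantic from the diff is that any non-null permission on the workspace is enough; only a total lack of membership yields 403.

```typescript
type Permission = 'read' | 'write' | 'admin'

// Hypothetical membership table: workspaceId -> (userId -> permission).
const workspaceMembers = new Map<string, Map<string, Permission>>([
  ['ws-1', new Map([['user-1', 'read']])],
])

function getUserEntityPermissions(userId: string, workspaceId: string): Permission | null {
  return workspaceMembers.get(workspaceId)?.get(userId) ?? null
}

// Mirrors the route's check: a workspace-scoped credential is usable by any
// workspace member; a credential with no workspaceId is not gated here.
function canUseWorkspaceCredential(userId: string, workspaceId?: string): boolean {
  if (!workspaceId) return true
  return getUserEntityPermissions(userId, workspaceId) !== null
}

console.log(canUseWorkspaceCredential('user-1', 'ws-1')) // true: member (read)
console.log(canUseWorkspaceCredential('user-2', 'ws-1')) // false: not a member
console.log(canUseWorkspaceCredential('user-2'))         // true: no workspace scope
```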

@@ -5,7 +5,7 @@ import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { generateRequestId } from '@/lib/core/utils/request'
import { refreshAccessTokenIfNeeded } from '@/app/api/auth/oauth/utils'
import { refreshAccessTokenIfNeeded, resolveOAuthAccountId } from '@/app/api/auth/oauth/utils'
export const dynamic = 'force-dynamic'
@@ -47,27 +47,41 @@ export async function GET(request: NextRequest) {
)
}
// Get the credential from the database
const credentials = await db.select().from(account).where(eq(account.id, credentialId)).limit(1)
const resolved = await resolveOAuthAccountId(credentialId)
if (!resolved) {
return NextResponse.json({ error: 'Credential not found' }, { status: 404 })
}
if (resolved.workspaceId) {
const { getUserEntityPermissions } = await import('@/lib/workspaces/permissions/utils')
const perm = await getUserEntityPermissions(
session.user.id,
'workspace',
resolved.workspaceId
)
if (perm === null) {
return NextResponse.json({ error: 'Forbidden' }, { status: 403 })
}
}
const credentials = await db
.select()
.from(account)
.where(eq(account.id, resolved.accountId))
.limit(1)
if (!credentials.length) {
logger.warn(`[${requestId}] Credential not found`, { credentialId })
return NextResponse.json({ error: 'Credential not found' }, { status: 404 })
}
const credential = credentials[0]
const accountRow = credentials[0]
// Check if the credential belongs to the user
if (credential.userId !== session.user.id) {
logger.warn(`[${requestId}] Unauthorized credential access attempt`, {
credentialUserId: credential.userId,
requestUserId: session.user.id,
})
return NextResponse.json({ error: 'Unauthorized' }, { status: 403 })
}
// Refresh access token if needed
const accessToken = await refreshAccessTokenIfNeeded(credentialId, session.user.id, requestId)
const accessToken = await refreshAccessTokenIfNeeded(
resolved.accountId,
accountRow.userId,
requestId
)
if (!accessToken) {
logger.error(`[${requestId}] Failed to obtain valid access token`)

View File

@@ -48,16 +48,21 @@ export async function GET(request: NextRequest) {
const shopData = await shopResponse.json()
const shopInfo = shopData.shop
const stableAccountId = shopInfo.id?.toString() || shopDomain
const existing = await db.query.account.findFirst({
where: and(eq(account.userId, session.user.id), eq(account.providerId, 'shopify')),
where: and(
eq(account.userId, session.user.id),
eq(account.providerId, 'shopify'),
eq(account.accountId, stableAccountId)
),
})
const now = new Date()
const accountData = {
accessToken: accessToken,
accountId: shopInfo.id?.toString() || shopDomain,
accountId: stableAccountId,
scope: scope || '',
updatedAt: now,
idToken: shopDomain,

View File
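The Shopify change above adds `accountId` to the lookup key so that one user connecting two shops gets two rows instead of the second overwriting the first. A minimal sketch of that upsert keying, with an array standing in for the `account` table (names and shapes are illustrative, not the real schema):

```typescript
interface Account {
  userId: string
  providerId: string
  accountId: string
  accessToken: string
}

const accounts: Account[] = []

// Upsert keyed by (userId, providerId, accountId). Before the fix the lookup
// omitted accountId, so a second shop for the same user clobbered the first.
function upsertAccount(next: Account): void {
  const existing = accounts.find(
    (a) =>
      a.userId === next.userId &&
      a.providerId === next.providerId &&
      a.accountId === next.accountId
  )
  if (existing) {
    existing.accessToken = next.accessToken // refresh token in place
  } else {
    accounts.push(next)
  }
}

upsertAccount({ userId: 'u1', providerId: 'shopify', accountId: 'shop-1', accessToken: 't1' })
upsertAccount({ userId: 'u1', providerId: 'shopify', accountId: 'shop-2', accessToken: 't2' })
upsertAccount({ userId: 'u1', providerId: 'shopify', accountId: 'shop-1', accessToken: 't1b' })

console.log(accounts.length) // 2: one row per shop
```

The same fix is applied to the Trello callback below, using `trelloUser.id` as the stable per-provider account identifier.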

@@ -3,59 +3,42 @@
*
* @vitest-environment node
*/
import {
createMockRequest,
mockConsoleLogger,
mockCryptoUuid,
mockDrizzleOrm,
mockUuid,
setupCommonApiMocks,
} from '@sim/testing'
import { createMockRequest } from '@sim/testing'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
/** Setup auth API mocks for testing authentication routes */
function setupAuthApiMocks(
options: {
operations?: {
forgetPassword?: { success?: boolean; error?: string }
resetPassword?: { success?: boolean; error?: string }
}
} = {}
) {
setupCommonApiMocks()
mockUuid()
mockCryptoUuid()
mockConsoleLogger()
mockDrizzleOrm()
const { operations = {} } = options
const defaultOperations = {
forgetPassword: { success: true, error: 'Forget password error', ...operations.forgetPassword },
resetPassword: { success: true, error: 'Reset password error', ...operations.resetPassword },
const { mockResetPassword, mockLogger } = vi.hoisted(() => {
const logger = {
info: vi.fn(),
warn: vi.fn(),
error: vi.fn(),
debug: vi.fn(),
trace: vi.fn(),
fatal: vi.fn(),
child: vi.fn(),
}
const createAuthMethod = (config: { success?: boolean; error?: string }) => {
return vi.fn().mockImplementation(() => {
if (config.success) {
return Promise.resolve()
}
return Promise.reject(new Error(config.error))
})
return {
mockResetPassword: vi.fn(),
mockLogger: logger,
}
})
vi.doMock('@/lib/auth', () => ({
auth: {
api: {
forgetPassword: createAuthMethod(defaultOperations.forgetPassword),
resetPassword: createAuthMethod(defaultOperations.resetPassword),
},
vi.mock('@/lib/auth', () => ({
auth: {
api: {
resetPassword: mockResetPassword,
},
}))
}
},
}))
vi.mock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue(mockLogger),
}))
import { POST } from '@/app/api/auth/reset-password/route'
describe('Reset Password API Route', () => {
beforeEach(() => {
vi.resetModules()
vi.clearAllMocks()
mockResetPassword.mockResolvedValue(undefined)
})
afterEach(() => {
@@ -63,27 +46,18 @@ describe('Reset Password API Route', () => {
})
it('should reset password successfully', async () => {
setupAuthApiMocks({
operations: {
resetPassword: { success: true },
},
})
const req = createMockRequest('POST', {
token: 'valid-reset-token',
newPassword: 'newSecurePassword123!',
})
const { POST } = await import('@/app/api/auth/reset-password/route')
const response = await POST(req)
const data = await response.json()
expect(response.status).toBe(200)
expect(data.success).toBe(true)
const auth = await import('@/lib/auth')
expect(auth.auth.api.resetPassword).toHaveBeenCalledWith({
expect(mockResetPassword).toHaveBeenCalledWith({
body: {
token: 'valid-reset-token',
newPassword: 'newSecurePassword123!',
@@ -93,133 +67,92 @@ describe('Reset Password API Route', () => {
})
it('should handle missing token', async () => {
setupAuthApiMocks()
const req = createMockRequest('POST', {
newPassword: 'newSecurePassword123',
})
const { POST } = await import('@/app/api/auth/reset-password/route')
const response = await POST(req)
const data = await response.json()
expect(response.status).toBe(400)
expect(data.message).toBe('Token is required')
const auth = await import('@/lib/auth')
expect(auth.auth.api.resetPassword).not.toHaveBeenCalled()
expect(mockResetPassword).not.toHaveBeenCalled()
})
it('should handle missing new password', async () => {
setupAuthApiMocks()
const req = createMockRequest('POST', {
token: 'valid-reset-token',
})
const { POST } = await import('./route')
const response = await POST(req)
const data = await response.json()
expect(response.status).toBe(400)
expect(data.message).toBe('Password is required')
const auth = await import('@/lib/auth')
expect(auth.auth.api.resetPassword).not.toHaveBeenCalled()
expect(mockResetPassword).not.toHaveBeenCalled()
})
it('should handle empty token', async () => {
setupAuthApiMocks()
const req = createMockRequest('POST', {
token: '',
newPassword: 'newSecurePassword123',
})
const { POST } = await import('@/app/api/auth/reset-password/route')
const response = await POST(req)
const data = await response.json()
expect(response.status).toBe(400)
expect(data.message).toBe('Token is required')
const auth = await import('@/lib/auth')
expect(auth.auth.api.resetPassword).not.toHaveBeenCalled()
expect(mockResetPassword).not.toHaveBeenCalled()
})
it('should handle empty new password', async () => {
setupAuthApiMocks()
const req = createMockRequest('POST', {
token: 'valid-reset-token',
newPassword: '',
})
const { POST } = await import('@/app/api/auth/reset-password/route')
const response = await POST(req)
const data = await response.json()
expect(response.status).toBe(400)
expect(data.message).toBe('Password must be at least 8 characters long')
const auth = await import('@/lib/auth')
expect(auth.auth.api.resetPassword).not.toHaveBeenCalled()
expect(mockResetPassword).not.toHaveBeenCalled()
})
it('should handle auth service error with message', async () => {
const errorMessage = 'Invalid or expired token'
setupAuthApiMocks({
operations: {
resetPassword: {
success: false,
error: errorMessage,
},
},
})
mockResetPassword.mockRejectedValue(new Error(errorMessage))
const req = createMockRequest('POST', {
token: 'invalid-token',
newPassword: 'newSecurePassword123!',
})
const { POST } = await import('@/app/api/auth/reset-password/route')
const response = await POST(req)
const data = await response.json()
expect(response.status).toBe(500)
expect(data.message).toBe(errorMessage)
const logger = await import('@sim/logger')
const mockLogger = logger.createLogger('PasswordResetAPI')
expect(mockLogger.error).toHaveBeenCalledWith('Error during password reset:', {
error: expect.any(Error),
})
})
it('should handle unknown error', async () => {
setupAuthApiMocks()
vi.doMock('@/lib/auth', () => ({
auth: {
api: {
resetPassword: vi.fn().mockRejectedValue('Unknown error'),
},
},
}))
mockResetPassword.mockRejectedValue('Unknown error')
const req = createMockRequest('POST', {
token: 'valid-reset-token',
newPassword: 'newSecurePassword123!',
})
const { POST } = await import('@/app/api/auth/reset-password/route')
const response = await POST(req)
const data = await response.json()
@@ -228,8 +161,6 @@ describe('Reset Password API Route', () => {
'Failed to reset password. Please try again or request a new reset link.'
)
const logger = await import('@sim/logger')
const mockLogger = logger.createLogger('PasswordResetAPI')
expect(mockLogger.error).toHaveBeenCalled()
})
})

View File

@@ -52,7 +52,11 @@ export async function POST(request: NextRequest) {
const trelloUser = await userResponse.json()
const existing = await db.query.account.findFirst({
where: and(eq(account.userId, session.user.id), eq(account.providerId, 'trello')),
where: and(
eq(account.userId, session.user.id),
eq(account.providerId, 'trello'),
eq(account.accountId, trelloUser.id)
),
})
const now = new Date()

View File

@@ -33,7 +33,6 @@ export async function POST(req: NextRequest) {
logger.info(`[${requestId}] Update cost request started`)
if (!isBillingEnabled) {
logger.debug(`[${requestId}] Billing is disabled, skipping cost update`)
return NextResponse.json({
success: true,
message: 'Billing disabled, cost update skipped',

View File

@@ -6,21 +6,38 @@
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
describe('Chat OTP API Route', () => {
const mockEmail = 'test@example.com'
const mockChatId = 'chat-123'
const mockIdentifier = 'test-chat'
const mockOTP = '123456'
const {
mockRedisSet,
mockRedisGet,
mockRedisDel,
mockGetRedisClient,
mockRedisClient,
mockDbSelect,
mockDbInsert,
mockDbDelete,
mockSendEmail,
mockRenderOTPEmail,
mockAddCorsHeaders,
mockCreateSuccessResponse,
mockCreateErrorResponse,
mockSetChatAuthCookie,
mockGenerateRequestId,
mockGetStorageMethod,
mockZodParse,
mockGetEnv,
} = vi.hoisted(() => {
const mockRedisSet = vi.fn()
const mockRedisGet = vi.fn()
const mockRedisDel = vi.fn()
const mockRedisClient = {
set: mockRedisSet,
get: mockRedisGet,
del: mockRedisDel,
}
const mockGetRedisClient = vi.fn()
const mockDbSelect = vi.fn()
const mockDbInsert = vi.fn()
const mockDbDelete = vi.fn()
const mockSendEmail = vi.fn()
const mockRenderOTPEmail = vi.fn()
const mockAddCorsHeaders = vi.fn()
@@ -28,11 +45,152 @@ describe('Chat OTP API Route', () => {
const mockCreateErrorResponse = vi.fn()
const mockSetChatAuthCookie = vi.fn()
const mockGenerateRequestId = vi.fn()
const mockGetStorageMethod = vi.fn()
const mockZodParse = vi.fn()
const mockGetEnv = vi.fn()
let storageMethod: 'redis' | 'database' = 'redis'
return {
mockRedisSet,
mockRedisGet,
mockRedisDel,
mockGetRedisClient,
mockRedisClient,
mockDbSelect,
mockDbInsert,
mockDbDelete,
mockSendEmail,
mockRenderOTPEmail,
mockAddCorsHeaders,
mockCreateSuccessResponse,
mockCreateErrorResponse,
mockSetChatAuthCookie,
mockGenerateRequestId,
mockGetStorageMethod,
mockZodParse,
mockGetEnv,
}
})
vi.mock('@/lib/core/config/redis', () => ({
getRedisClient: mockGetRedisClient,
}))
vi.mock('@sim/db', () => ({
db: {
select: mockDbSelect,
insert: mockDbInsert,
delete: mockDbDelete,
transaction: vi.fn(async (callback: (tx: Record<string, unknown>) => unknown) => {
return callback({
select: mockDbSelect,
insert: mockDbInsert,
delete: mockDbDelete,
})
}),
},
}))
vi.mock('@sim/db/schema', () => ({
chat: {
id: 'id',
authType: 'authType',
allowedEmails: 'allowedEmails',
title: 'title',
},
verification: {
id: 'id',
identifier: 'identifier',
value: 'value',
expiresAt: 'expiresAt',
createdAt: 'createdAt',
updatedAt: 'updatedAt',
},
}))
vi.mock('drizzle-orm', () => ({
eq: vi.fn((field: string, value: string) => ({ field, value, type: 'eq' })),
and: vi.fn((...conditions: unknown[]) => ({ conditions, type: 'and' })),
gt: vi.fn((field: string, value: string) => ({ field, value, type: 'gt' })),
lt: vi.fn((field: string, value: string) => ({ field, value, type: 'lt' })),
}))
vi.mock('@/lib/core/storage', () => ({
getStorageMethod: mockGetStorageMethod,
}))
vi.mock('@/lib/messaging/email/mailer', () => ({
sendEmail: mockSendEmail,
}))
vi.mock('@/components/emails/render-email', () => ({
renderOTPEmail: mockRenderOTPEmail,
}))
vi.mock('@/app/api/chat/utils', () => ({
addCorsHeaders: mockAddCorsHeaders,
setChatAuthCookie: mockSetChatAuthCookie,
}))
vi.mock('@/app/api/workflows/utils', () => ({
createSuccessResponse: mockCreateSuccessResponse,
createErrorResponse: mockCreateErrorResponse,
}))
vi.mock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue({
info: vi.fn(),
error: vi.fn(),
warn: vi.fn(),
debug: vi.fn(),
}),
}))
vi.mock('@/lib/core/config/env', () => ({
env: {
NEXT_PUBLIC_APP_URL: 'http://localhost:3000',
NODE_ENV: 'test',
},
getEnv: mockGetEnv,
isTruthy: vi.fn().mockReturnValue(false),
isFalsy: vi.fn().mockReturnValue(true),
}))
vi.mock('zod', () => {
class ZodError extends Error {
errors: Array<{ message: string }>
constructor(issues: Array<{ message: string }>) {
super('ZodError')
this.errors = issues
}
}
const mockStringReturnValue = {
email: vi.fn().mockReturnThis(),
length: vi.fn().mockReturnThis(),
}
return {
z: {
object: vi.fn().mockReturnValue({
parse: mockZodParse,
}),
string: vi.fn().mockReturnValue(mockStringReturnValue),
ZodError,
},
}
})
vi.mock('@/lib/core/utils/request', () => ({
generateRequestId: mockGenerateRequestId,
}))
import { POST, PUT } from './route'
describe('Chat OTP API Route', () => {
const mockEmail = 'test@example.com'
const mockChatId = 'chat-123'
const mockIdentifier = 'test-chat'
const mockOTP = '123456'
beforeEach(() => {
vi.resetModules()
vi.clearAllMocks()
vi.spyOn(Math, 'random').mockReturnValue(0.123456)
@@ -43,21 +201,12 @@ describe('Chat OTP API Route', () => {
randomUUID: vi.fn().mockReturnValue('test-uuid-1234'),
})
const mockRedisClient = {
set: mockRedisSet,
get: mockRedisGet,
del: mockRedisDel,
}
mockGetRedisClient.mockReturnValue(mockRedisClient)
mockRedisSet.mockResolvedValue('OK')
mockRedisGet.mockResolvedValue(null)
mockRedisDel.mockResolvedValue(1)
vi.doMock('@/lib/core/config/redis', () => ({
getRedisClient: mockGetRedisClient,
}))
const createDbChain = (result: any) => ({
const createDbChain = (result: unknown) => ({
from: vi.fn().mockReturnValue({
where: vi.fn().mockReturnValue({
limit: vi.fn().mockResolvedValue(result),
@@ -73,110 +222,26 @@ describe('Chat OTP API Route', () => {
where: vi.fn().mockResolvedValue(undefined),
}))
vi.doMock('@sim/db', () => ({
db: {
select: mockDbSelect,
insert: mockDbInsert,
delete: mockDbDelete,
transaction: vi.fn(async (callback) => {
return callback({
select: mockDbSelect,
insert: mockDbInsert,
delete: mockDbDelete,
})
}),
},
}))
vi.doMock('@sim/db/schema', () => ({
chat: {
id: 'id',
authType: 'authType',
allowedEmails: 'allowedEmails',
title: 'title',
},
verification: {
id: 'id',
identifier: 'identifier',
value: 'value',
expiresAt: 'expiresAt',
createdAt: 'createdAt',
updatedAt: 'updatedAt',
},
}))
vi.doMock('drizzle-orm', () => ({
eq: vi.fn((field, value) => ({ field, value, type: 'eq' })),
and: vi.fn((...conditions) => ({ conditions, type: 'and' })),
gt: vi.fn((field, value) => ({ field, value, type: 'gt' })),
lt: vi.fn((field, value) => ({ field, value, type: 'lt' })),
}))
vi.doMock('@/lib/core/storage', () => ({
getStorageMethod: vi.fn(() => storageMethod),
}))
mockGetStorageMethod.mockReturnValue('redis')
mockSendEmail.mockResolvedValue({ success: true })
mockRenderOTPEmail.mockResolvedValue('<html>OTP Email</html>')
vi.doMock('@/lib/messaging/email/mailer', () => ({
sendEmail: mockSendEmail,
}))
vi.doMock('@/components/emails/render-email', () => ({
renderOTPEmail: mockRenderOTPEmail,
}))
mockAddCorsHeaders.mockImplementation((response) => response)
mockCreateSuccessResponse.mockImplementation((data) => ({
mockAddCorsHeaders.mockImplementation((response: unknown) => response)
mockCreateSuccessResponse.mockImplementation((data: unknown) => ({
json: () => Promise.resolve(data),
status: 200,
}))
mockCreateErrorResponse.mockImplementation((message, status) => ({
mockCreateErrorResponse.mockImplementation((message: string, status: number) => ({
json: () => Promise.resolve({ error: message }),
status,
}))
vi.doMock('@/app/api/chat/utils', () => ({
addCorsHeaders: mockAddCorsHeaders,
setChatAuthCookie: mockSetChatAuthCookie,
}))
vi.doMock('@/app/api/workflows/utils', () => ({
createSuccessResponse: mockCreateSuccessResponse,
createErrorResponse: mockCreateErrorResponse,
}))
vi.doMock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue({
info: vi.fn(),
error: vi.fn(),
warn: vi.fn(),
debug: vi.fn(),
}),
}))
vi.doMock('@/lib/core/config/env', async () => {
const { createEnvMock } = await import('@sim/testing')
return createEnvMock()
})
vi.doMock('zod', () => ({
z: {
object: vi.fn().mockReturnValue({
parse: vi.fn().mockImplementation((data) => data),
}),
string: vi.fn().mockReturnValue({
email: vi.fn().mockReturnThis(),
length: vi.fn().mockReturnThis(),
}),
},
}))
mockGenerateRequestId.mockReturnValue('req-123')
vi.doMock('@/lib/core/utils/request', () => ({
generateRequestId: mockGenerateRequestId,
}))
mockZodParse.mockImplementation((data: unknown) => data)
mockGetEnv.mockReturnValue('http://localhost:3000')
})
afterEach(() => {
@@ -185,12 +250,10 @@ describe('Chat OTP API Route', () => {
describe('POST - Store OTP (Redis path)', () => {
beforeEach(() => {
storageMethod = 'redis'
mockGetStorageMethod.mockReturnValue('redis')
})
it('should store OTP in Redis when storage method is redis', async () => {
const { POST } = await import('./route')
mockDbSelect.mockImplementationOnce(() => ({
from: vi.fn().mockReturnValue({
where: vi.fn().mockReturnValue({
@@ -226,13 +289,11 @@ describe('Chat OTP API Route', () => {
describe('POST - Store OTP (Database path)', () => {
beforeEach(() => {
storageMethod = 'database'
mockGetStorageMethod.mockReturnValue('database')
mockGetRedisClient.mockReturnValue(null)
})
it('should store OTP in database when storage method is database', async () => {
const { POST } = await import('./route')
mockDbSelect.mockImplementationOnce(() => ({
from: vi.fn().mockReturnValue({
where: vi.fn().mockReturnValue({
@@ -283,13 +344,11 @@ describe('Chat OTP API Route', () => {
describe('PUT - Verify OTP (Redis path)', () => {
beforeEach(() => {
storageMethod = 'redis'
mockGetStorageMethod.mockReturnValue('redis')
mockRedisGet.mockResolvedValue(mockOTP)
})
it('should retrieve OTP from Redis and verify successfully', async () => {
const { PUT } = await import('./route')
mockDbSelect.mockImplementationOnce(() => ({
from: vi.fn().mockReturnValue({
where: vi.fn().mockReturnValue({
@@ -320,13 +379,11 @@ describe('Chat OTP API Route', () => {
describe('PUT - Verify OTP (Database path)', () => {
beforeEach(() => {
storageMethod = 'database'
mockGetStorageMethod.mockReturnValue('database')
mockGetRedisClient.mockReturnValue(null)
})
it('should retrieve OTP from database and verify successfully', async () => {
const { PUT } = await import('./route')
let selectCallCount = 0
mockDbSelect.mockImplementation(() => ({
@@ -373,8 +430,6 @@ describe('Chat OTP API Route', () => {
})
it('should reject expired OTP from database', async () => {
const { PUT } = await import('./route')
let selectCallCount = 0
mockDbSelect.mockImplementation(() => ({
@@ -412,12 +467,10 @@ describe('Chat OTP API Route', () => {
describe('DELETE OTP (Redis path)', () => {
beforeEach(() => {
storageMethod = 'redis'
mockGetStorageMethod.mockReturnValue('redis')
})
it('should delete OTP from Redis after verification', async () => {
const { PUT } = await import('./route')
mockRedisGet.mockResolvedValue(mockOTP)
mockDbSelect.mockImplementationOnce(() => ({
@@ -447,13 +500,11 @@ describe('Chat OTP API Route', () => {
describe('DELETE OTP (Database path)', () => {
beforeEach(() => {
storageMethod = 'database'
mockGetStorageMethod.mockReturnValue('database')
mockGetRedisClient.mockReturnValue(null)
})
it('should delete OTP from database after verification', async () => {
const { PUT } = await import('./route')
let selectCallCount = 0
mockDbSelect.mockImplementation(() => ({
from: vi.fn().mockReturnValue({
@@ -490,11 +541,9 @@ describe('Chat OTP API Route', () => {
describe('Behavior consistency between Redis and Database', () => {
it('should have same behavior for missing OTP in both storage methods', async () => {
storageMethod = 'redis'
mockGetStorageMethod.mockReturnValue('redis')
mockRedisGet.mockResolvedValue(null)
const { PUT: PUTRedis } = await import('./route')
mockDbSelect.mockImplementation(() => ({
from: vi.fn().mockReturnValue({
where: vi.fn().mockReturnValue({
@@ -508,7 +557,7 @@ describe('Chat OTP API Route', () => {
body: JSON.stringify({ email: mockEmail, otp: mockOTP }),
})
await PUTRedis(requestRedis, { params: Promise.resolve({ identifier: mockIdentifier }) })
await PUT(requestRedis, { params: Promise.resolve({ identifier: mockIdentifier }) })
expect(mockCreateErrorResponse).toHaveBeenCalledWith(
'No verification code found, request a new one',
@@ -519,8 +568,7 @@ describe('Chat OTP API Route', () => {
it('should have same OTP expiry time in both storage methods', async () => {
const OTP_EXPIRY = 15 * 60
storageMethod = 'redis'
const { POST: POSTRedis } = await import('./route')
mockGetStorageMethod.mockReturnValue('redis')
mockDbSelect.mockImplementation(() => ({
from: vi.fn().mockReturnValue({
@@ -542,7 +590,7 @@ describe('Chat OTP API Route', () => {
body: JSON.stringify({ email: mockEmail }),
})
await POSTRedis(requestRedis, { params: Promise.resolve({ identifier: mockIdentifier }) })
await POST(requestRedis, { params: Promise.resolve({ identifier: mockIdentifier }) })
expect(mockRedisSet).toHaveBeenCalledWith(
expect.any(String),


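The "Behavior consistency" tests above assert that the Redis and database paths honor the same contract (same missing-OTP error, same 15-minute expiry). As a hedged illustration only — the interface and helper names here are hypothetical, not the route's actual code — that shared contract can be sketched as one store interface with interchangeable backends:

```typescript
// Hypothetical sketch of the dual-backend contract the tests exercise:
// one OTP store interface, so route logic behaves identically whether the
// backing store is a Redis client or a database table.
interface OtpStore {
  set(key: string, otp: string, ttlSeconds: number): Promise<void>;
  get(key: string): Promise<string | null>;
  delete(key: string): Promise<void>;
}

// In-memory stand-in for either backend, for illustration.
function makeMemoryStore(): OtpStore {
  const data = new Map<string, { otp: string; expiresAt: number }>();
  return {
    async set(key, otp, ttlSeconds) {
      data.set(key, { otp, expiresAt: Date.now() + ttlSeconds * 1000 });
    },
    async get(key) {
      const entry = data.get(key);
      // An expired OTP is indistinguishable from a missing one, matching
      // the "No verification code found" behavior asserted above.
      if (!entry || entry.expiresAt <= Date.now()) return null;
      return entry.otp;
    },
    async delete(key) {
      data.delete(key);
    },
  };
}
```

Both paths would then be seeded with the same `OTP_EXPIRY = 15 * 60` TTL that the consistency test pins down.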
@@ -117,8 +117,6 @@ export async function POST(
const requestId = generateRequestId()
try {
logger.debug(`[${requestId}] Processing OTP request for identifier: ${identifier}`)
const body = await request.json()
const { email } = otpRequestSchema.parse(body)
@@ -211,8 +209,6 @@ export async function PUT(
const requestId = generateRequestId()
try {
logger.debug(`[${requestId}] Verifying OTP for identifier: ${identifier}`)
const body = await request.json()
const { email, otp } = otpVerifySchema.parse(body)


@@ -51,6 +51,61 @@ const createMockStream = () => {
})
}
const {
mockDbSelect,
mockAddCorsHeaders,
mockValidateChatAuth,
mockSetChatAuthCookie,
mockValidateAuthToken,
mockCreateErrorResponse,
mockCreateSuccessResponse,
} = vi.hoisted(() => ({
mockDbSelect: vi.fn(),
mockAddCorsHeaders: vi.fn().mockImplementation((response: Response) => response),
mockValidateChatAuth: vi.fn().mockResolvedValue({ authorized: true }),
mockSetChatAuthCookie: vi.fn(),
mockValidateAuthToken: vi.fn().mockReturnValue(false),
mockCreateErrorResponse: vi
.fn()
.mockImplementation((message: string, status: number, code?: string) => {
return new Response(
JSON.stringify({
error: code || 'Error',
message,
}),
{ status }
)
}),
mockCreateSuccessResponse: vi.fn().mockImplementation((data: unknown) => {
return new Response(JSON.stringify(data), { status: 200 })
}),
}))
vi.mock('@sim/db', () => ({
db: { select: mockDbSelect },
chat: {},
workflow: {},
}))
vi.mock('@/lib/core/security/deployment', () => ({
addCorsHeaders: mockAddCorsHeaders,
validateAuthToken: mockValidateAuthToken,
setDeploymentAuthCookie: vi.fn(),
isEmailAllowed: vi.fn().mockReturnValue(false),
}))
vi.mock('@/app/api/chat/utils', () => ({
validateChatAuth: mockValidateChatAuth,
setChatAuthCookie: mockSetChatAuthCookie,
}))
vi.mock('@sim/logger', () => loggerMock)
vi.mock('@/app/api/workflows/utils', () => ({
createErrorResponse: mockCreateErrorResponse,
createSuccessResponse: mockCreateSuccessResponse,
}))
vi.mock('@/lib/execution/preprocessing', () => ({
preprocessExecution: vi.fn().mockResolvedValue({
success: true,
@@ -100,12 +155,11 @@ vi.mock('@/lib/core/security/encryption', () => ({
decryptSecret: vi.fn().mockResolvedValue({ decrypted: 'test-password' }),
}))
describe('Chat Identifier API Route', () => {
const mockAddCorsHeaders = vi.fn().mockImplementation((response) => response)
const mockValidateChatAuth = vi.fn().mockResolvedValue({ authorized: true })
const mockSetChatAuthCookie = vi.fn()
const mockValidateAuthToken = vi.fn().mockReturnValue(false)
import { preprocessExecution } from '@/lib/execution/preprocessing'
import { createStreamingResponse } from '@/lib/workflows/streaming/streaming'
import { GET, POST } from '@/app/api/chat/[identifier]/route'
describe('Chat Identifier API Route', () => {
const mockChatResult = [
{
id: 'chat-id',
@@ -142,66 +196,42 @@ describe('Chat Identifier API Route', () => {
]
beforeEach(() => {
vi.resetModules()
vi.clearAllMocks()
vi.doMock('@/lib/core/security/deployment', () => ({
addCorsHeaders: mockAddCorsHeaders,
validateAuthToken: mockValidateAuthToken,
setDeploymentAuthCookie: vi.fn(),
isEmailAllowed: vi.fn().mockReturnValue(false),
}))
mockAddCorsHeaders.mockImplementation((response: Response) => response)
mockValidateChatAuth.mockResolvedValue({ authorized: true })
mockValidateAuthToken.mockReturnValue(false)
mockCreateErrorResponse.mockImplementation((message: string, status: number, code?: string) => {
return new Response(
JSON.stringify({
error: code || 'Error',
message,
}),
{ status }
)
})
mockCreateSuccessResponse.mockImplementation((data: unknown) => {
return new Response(JSON.stringify(data), { status: 200 })
})
vi.doMock('@/app/api/chat/utils', () => ({
validateChatAuth: mockValidateChatAuth,
setChatAuthCookie: mockSetChatAuthCookie,
}))
// Mock logger - use loggerMock from @sim/testing
vi.doMock('@sim/logger', () => loggerMock)
vi.doMock('@sim/db', () => {
const mockSelect = vi.fn().mockImplementation((fields) => {
if (fields && fields.isDeployed !== undefined) {
return {
from: vi.fn().mockReturnValue({
where: vi.fn().mockReturnValue({
limit: vi.fn().mockReturnValue(mockWorkflowResult),
}),
}),
}
}
mockDbSelect.mockImplementation((fields: Record<string, unknown>) => {
if (fields && fields.isDeployed !== undefined) {
return {
from: vi.fn().mockReturnValue({
where: vi.fn().mockReturnValue({
limit: vi.fn().mockReturnValue(mockChatResult),
limit: vi.fn().mockReturnValue(mockWorkflowResult),
}),
}),
}
})
}
return {
db: {
select: mockSelect,
},
chat: {},
workflow: {},
from: vi.fn().mockReturnValue({
where: vi.fn().mockReturnValue({
limit: vi.fn().mockReturnValue(mockChatResult),
}),
}),
}
})
vi.doMock('@/app/api/workflows/utils', () => ({
createErrorResponse: vi.fn().mockImplementation((message, status, code) => {
return new Response(
JSON.stringify({
error: code || 'Error',
message,
}),
{ status }
)
}),
createSuccessResponse: vi.fn().mockImplementation((data) => {
return new Response(JSON.stringify(data), { status: 200 })
}),
}))
})
afterEach(() => {
@@ -213,8 +243,6 @@ describe('Chat Identifier API Route', () => {
const req = createMockNextRequest('GET')
const params = Promise.resolve({ identifier: 'test-chat' })
const { GET } = await import('@/app/api/chat/[identifier]/route')
const response = await GET(req, { params })
expect(response.status).toBe(200)
@@ -228,24 +256,19 @@ describe('Chat Identifier API Route', () => {
})
it('should return 404 for non-existent identifier', async () => {
vi.doMock('@sim/db', () => {
const mockLimit = vi.fn().mockReturnValue([])
const mockWhere = vi.fn().mockReturnValue({ limit: mockLimit })
const mockFrom = vi.fn().mockReturnValue({ where: mockWhere })
const mockSelect = vi.fn().mockReturnValue({ from: mockFrom })
mockDbSelect.mockImplementation(() => {
return {
db: {
select: mockSelect,
},
from: vi.fn().mockReturnValue({
where: vi.fn().mockReturnValue({
limit: vi.fn().mockReturnValue([]),
}),
}),
}
})
const req = createMockNextRequest('GET')
const params = Promise.resolve({ identifier: 'nonexistent' })
const { GET } = await import('@/app/api/chat/[identifier]/route')
const response = await GET(req, { params })
expect(response.status).toBe(404)
@@ -256,30 +279,25 @@ describe('Chat Identifier API Route', () => {
})
it('should return 403 for inactive chat', async () => {
vi.doMock('@sim/db', () => {
const mockLimit = vi.fn().mockReturnValue([
{
id: 'chat-id',
isActive: false,
authType: 'public',
},
])
const mockWhere = vi.fn().mockReturnValue({ limit: mockLimit })
const mockFrom = vi.fn().mockReturnValue({ where: mockWhere })
const mockSelect = vi.fn().mockReturnValue({ from: mockFrom })
mockDbSelect.mockImplementation(() => {
return {
db: {
select: mockSelect,
},
from: vi.fn().mockReturnValue({
where: vi.fn().mockReturnValue({
limit: vi.fn().mockReturnValue([
{
id: 'chat-id',
isActive: false,
authType: 'public',
},
]),
}),
}),
}
})
const req = createMockNextRequest('GET')
const params = Promise.resolve({ identifier: 'inactive-chat' })
const { GET } = await import('@/app/api/chat/[identifier]/route')
const response = await GET(req, { params })
expect(response.status).toBe(403)
@@ -290,17 +308,14 @@ describe('Chat Identifier API Route', () => {
})
it('should return 401 when authentication is required', async () => {
const originalValidateChatAuth = mockValidateChatAuth.getMockImplementation()
mockValidateChatAuth.mockImplementationOnce(async () => ({
mockValidateChatAuth.mockResolvedValueOnce({
authorized: false,
error: 'auth_required_password',
}))
})
const req = createMockNextRequest('GET')
const params = Promise.resolve({ identifier: 'password-protected-chat' })
const { GET } = await import('@/app/api/chat/[identifier]/route')
const response = await GET(req, { params })
expect(response.status).toBe(401)
@@ -308,10 +323,6 @@ describe('Chat Identifier API Route', () => {
const data = await response.json()
expect(data).toHaveProperty('error')
expect(data).toHaveProperty('message', 'auth_required_password')
if (originalValidateChatAuth) {
mockValidateChatAuth.mockImplementation(originalValidateChatAuth)
}
})
})
@@ -320,8 +331,6 @@ describe('Chat Identifier API Route', () => {
const req = createMockNextRequest('POST', { password: 'test-password' })
const params = Promise.resolve({ identifier: 'password-protected-chat' })
const { POST } = await import('@/app/api/chat/[identifier]/route')
const response = await POST(req, { params })
expect(response.status).toBe(200)
@@ -336,8 +345,6 @@ describe('Chat Identifier API Route', () => {
const req = createMockNextRequest('POST', {})
const params = Promise.resolve({ identifier: 'test-chat' })
const { POST } = await import('@/app/api/chat/[identifier]/route')
const response = await POST(req, { params })
expect(response.status).toBe(400)
@@ -348,17 +355,14 @@ describe('Chat Identifier API Route', () => {
})
it('should return 401 for unauthorized access', async () => {
const originalValidateChatAuth = mockValidateChatAuth.getMockImplementation()
mockValidateChatAuth.mockImplementationOnce(async () => ({
mockValidateChatAuth.mockResolvedValueOnce({
authorized: false,
error: 'Authentication required',
}))
})
const req = createMockNextRequest('POST', { input: 'Hello' })
const params = Promise.resolve({ identifier: 'protected-chat' })
const { POST } = await import('@/app/api/chat/[identifier]/route')
const response = await POST(req, { params })
expect(response.status).toBe(401)
@@ -366,16 +370,9 @@ describe('Chat Identifier API Route', () => {
const data = await response.json()
expect(data).toHaveProperty('error')
expect(data).toHaveProperty('message', 'Authentication required')
if (originalValidateChatAuth) {
mockValidateChatAuth.mockImplementation(originalValidateChatAuth)
}
})
it('should return 503 when workflow is not available', async () => {
const { preprocessExecution } = await import('@/lib/execution/preprocessing')
const originalImplementation = vi.mocked(preprocessExecution).getMockImplementation()
vi.mocked(preprocessExecution).mockResolvedValueOnce({
success: false,
error: {
@@ -388,8 +385,6 @@ describe('Chat Identifier API Route', () => {
const req = createMockNextRequest('POST', { input: 'Hello' })
const params = Promise.resolve({ identifier: 'test-chat' })
const { POST } = await import('@/app/api/chat/[identifier]/route')
const response = await POST(req, { params })
expect(response.status).toBe(403)
@@ -397,10 +392,6 @@ describe('Chat Identifier API Route', () => {
const data = await response.json()
expect(data).toHaveProperty('error')
expect(data).toHaveProperty('message', 'Workflow is not deployed')
if (originalImplementation) {
vi.mocked(preprocessExecution).mockImplementation(originalImplementation)
}
})
it('should return streaming response for valid chat messages', async () => {
@@ -410,9 +401,6 @@ describe('Chat Identifier API Route', () => {
})
const params = Promise.resolve({ identifier: 'test-chat' })
const { POST } = await import('@/app/api/chat/[identifier]/route')
const { createStreamingResponse } = await import('@/lib/workflows/streaming/streaming')
const response = await POST(req, { params })
expect(response.status).toBe(200)
@@ -442,8 +430,6 @@ describe('Chat Identifier API Route', () => {
const req = createMockNextRequest('POST', { input: 'Hello world' })
const params = Promise.resolve({ identifier: 'test-chat' })
const { POST } = await import('@/app/api/chat/[identifier]/route')
const response = await POST(req, { params })
expect(response.status).toBe(200)
@@ -463,8 +449,6 @@ describe('Chat Identifier API Route', () => {
})
it('should handle workflow execution errors gracefully', async () => {
const { createStreamingResponse } = await import('@/lib/workflows/streaming/streaming')
const originalStreamingResponse = vi.mocked(createStreamingResponse).getMockImplementation()
vi.mocked(createStreamingResponse).mockImplementationOnce(async () => {
throw new Error('Execution failed')
})
@@ -472,8 +456,6 @@ describe('Chat Identifier API Route', () => {
const req = createMockNextRequest('POST', { input: 'Trigger error' })
const params = Promise.resolve({ identifier: 'test-chat' })
const { POST } = await import('@/app/api/chat/[identifier]/route')
const response = await POST(req, { params })
expect(response.status).toBe(500)
@@ -481,10 +463,6 @@ describe('Chat Identifier API Route', () => {
const data = await response.json()
expect(data).toHaveProperty('error')
expect(data).toHaveProperty('message', 'Execution failed')
if (originalStreamingResponse) {
vi.mocked(createStreamingResponse).mockImplementation(originalStreamingResponse)
}
})
it('should handle invalid JSON in request body', async () => {
@@ -496,8 +474,6 @@ describe('Chat Identifier API Route', () => {
const params = Promise.resolve({ identifier: 'test-chat' })
const { POST } = await import('@/app/api/chat/[identifier]/route')
const response = await POST(req, { params })
expect(response.status).toBe(400)
@@ -514,9 +490,6 @@ describe('Chat Identifier API Route', () => {
})
const params = Promise.resolve({ identifier: 'test-chat' })
const { POST } = await import('@/app/api/chat/[identifier]/route')
const { createStreamingResponse } = await import('@/lib/workflows/streaming/streaming')
await POST(req, { params })
expect(createStreamingResponse).toHaveBeenCalledWith(
@@ -533,9 +506,6 @@ describe('Chat Identifier API Route', () => {
const req = createMockNextRequest('POST', { input: 'Hello world' })
const params = Promise.resolve({ identifier: 'test-chat' })
const { POST } = await import('@/app/api/chat/[identifier]/route')
const { createStreamingResponse } = await import('@/lib/workflows/streaming/streaming')
await POST(req, { params })
expect(createStreamingResponse).toHaveBeenCalledWith(


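A recurring change in the test diff above is replacing save-and-restore of mock implementations with `mockResolvedValueOnce`, whose queued value applies to exactly one call before the default resumes. As a hedged sketch — a hand-rolled helper, not the vitest API itself — the one-shot semantics look like this:

```typescript
// Tiny stand-in showing why mockResolvedValueOnce needs no restore step:
// queued one-shot values are consumed in FIFO order, then calls fall back
// to the default implementation automatically.
type AsyncMock<T> = (() => Promise<T>) & { mockResolvedValueOnce(value: T): void };

function makeAsyncMock<T>(defaultValue: T): AsyncMock<T> {
  const onceQueue: T[] = []; // one-shot values, consumed front-first
  const call = () =>
    Promise.resolve(onceQueue.length > 0 ? onceQueue.shift()! : defaultValue);
  return Object.assign(call, {
    mockResolvedValueOnce(value: T) {
      onceQueue.push(value);
    },
  });
}

async function demo() {
  const validateAuth = makeAsyncMock({ authorized: true });
  validateAuth.mockResolvedValueOnce({ authorized: false });
  console.log((await validateAuth()).authorized); // false: queued value used once
  console.log((await validateAuth()).authorized); // true: default is back
}

demo();
```

This is why the diff can drop the `getMockImplementation()` bookkeeping at the end of each test: the default behavior reasserts itself without manual cleanup.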
@@ -42,8 +42,6 @@ export async function POST(
const requestId = generateRequestId()
try {
logger.debug(`[${requestId}] Processing chat request for identifier: ${identifier}`)
let parsedBody
try {
const rawBody = await request.json()
@@ -294,8 +292,6 @@ export async function GET(
const requestId = generateRequestId()
try {
logger.debug(`[${requestId}] Fetching chat info for identifier: ${identifier}`)
const deploymentResult = await db
.select({
id: chat.id,


@@ -3,35 +3,100 @@
*
* @vitest-environment node
*/
import { auditMock, loggerMock } from '@sim/testing'
import { auditMock } from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
vi.mock('@/lib/audit/log', () => auditMock)
const {
mockGetSession,
mockSelect,
mockFrom,
mockWhere,
mockLimit,
mockUpdate,
mockSet,
mockDelete,
mockCreateSuccessResponse,
mockCreateErrorResponse,
mockEncryptSecret,
mockCheckChatAccess,
mockDeployWorkflow,
mockLogger,
} = vi.hoisted(() => {
const logger = {
info: vi.fn(),
warn: vi.fn(),
error: vi.fn(),
debug: vi.fn(),
trace: vi.fn(),
fatal: vi.fn(),
child: vi.fn(),
}
return {
mockGetSession: vi.fn(),
mockSelect: vi.fn(),
mockFrom: vi.fn(),
mockWhere: vi.fn(),
mockLimit: vi.fn(),
mockUpdate: vi.fn(),
mockSet: vi.fn(),
mockDelete: vi.fn(),
mockCreateSuccessResponse: vi.fn(),
mockCreateErrorResponse: vi.fn(),
mockEncryptSecret: vi.fn(),
mockCheckChatAccess: vi.fn(),
mockDeployWorkflow: vi.fn(),
mockLogger: logger,
}
})
vi.mock('@/lib/audit/log', () => auditMock)
vi.mock('@/lib/core/config/feature-flags', () => ({
isDev: true,
isHosted: false,
isProd: false,
}))
vi.mock('@/lib/auth', () => ({
getSession: mockGetSession,
}))
vi.mock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue(mockLogger),
}))
vi.mock('@sim/db', () => ({
db: {
select: mockSelect,
update: mockUpdate,
delete: mockDelete,
},
}))
vi.mock('@sim/db/schema', () => ({
chat: { id: 'id', identifier: 'identifier', userId: 'userId' },
}))
vi.mock('@/app/api/workflows/utils', () => ({
createSuccessResponse: mockCreateSuccessResponse,
createErrorResponse: mockCreateErrorResponse,
}))
vi.mock('@/lib/core/security/encryption', () => ({
encryptSecret: mockEncryptSecret,
}))
vi.mock('@/lib/core/utils/urls', () => ({
getEmailDomain: vi.fn().mockReturnValue('localhost:3000'),
}))
vi.mock('@/app/api/chat/utils', () => ({
checkChatAccess: mockCheckChatAccess,
}))
vi.mock('@/lib/workflows/persistence/utils', () => ({
deployWorkflow: mockDeployWorkflow,
}))
vi.mock('drizzle-orm', () => ({
eq: vi.fn((field, value) => ({ field, value, type: 'eq' })),
}))
import { DELETE, GET, PATCH } from '@/app/api/chat/manage/[id]/route'
describe('Chat Edit API Route', () => {
const mockSelect = vi.fn()
const mockFrom = vi.fn()
const mockWhere = vi.fn()
const mockLimit = vi.fn()
const mockUpdate = vi.fn()
const mockSet = vi.fn()
const mockDelete = vi.fn()
const mockCreateSuccessResponse = vi.fn()
const mockCreateErrorResponse = vi.fn()
const mockEncryptSecret = vi.fn()
const mockCheckChatAccess = vi.fn()
const mockDeployWorkflow = vi.fn()
beforeEach(() => {
vi.resetModules()
vi.clearAllMocks()
mockLimit.mockResolvedValue([])
mockSelect.mockReturnValue({ from: mockFrom })
@@ -41,56 +106,21 @@ describe('Chat Edit API Route', () => {
mockSet.mockReturnValue({ where: mockWhere })
mockDelete.mockReturnValue({ where: mockWhere })
vi.doMock('@sim/db', () => ({
db: {
select: mockSelect,
update: mockUpdate,
delete: mockDelete,
},
}))
vi.doMock('@sim/db/schema', () => ({
chat: { id: 'id', identifier: 'identifier', userId: 'userId' },
}))
// Mock logger - use loggerMock from @sim/testing
vi.doMock('@sim/logger', () => loggerMock)
vi.doMock('@/app/api/workflows/utils', () => ({
createSuccessResponse: mockCreateSuccessResponse.mockImplementation((data) => {
return new Response(JSON.stringify(data), {
status: 200,
headers: { 'Content-Type': 'application/json' },
})
}),
createErrorResponse: mockCreateErrorResponse.mockImplementation((message, status = 500) => {
return new Response(JSON.stringify({ error: message }), {
status,
headers: { 'Content-Type': 'application/json' },
})
}),
}))
vi.doMock('@/lib/core/security/encryption', () => ({
encryptSecret: mockEncryptSecret.mockResolvedValue({ encrypted: 'encrypted-password' }),
}))
vi.doMock('@/lib/core/utils/urls', () => ({
getEmailDomain: vi.fn().mockReturnValue('localhost:3000'),
}))
vi.doMock('@/app/api/chat/utils', () => ({
checkChatAccess: mockCheckChatAccess,
}))
mockCreateSuccessResponse.mockImplementation((data) => {
return new Response(JSON.stringify(data), {
status: 200,
headers: { 'Content-Type': 'application/json' },
})
})
mockCreateErrorResponse.mockImplementation((message, status = 500) => {
return new Response(JSON.stringify({ error: message }), {
status,
headers: { 'Content-Type': 'application/json' },
})
})
mockEncryptSecret.mockResolvedValue({ encrypted: 'encrypted-password' })
mockDeployWorkflow.mockResolvedValue({ success: true, version: 1 })
vi.doMock('@/lib/workflows/persistence/utils', () => ({
deployWorkflow: mockDeployWorkflow,
}))
vi.doMock('drizzle-orm', () => ({
eq: vi.fn((field, value) => ({ field, value, type: 'eq' })),
}))
})
afterEach(() => {
@@ -99,12 +129,9 @@ describe('Chat Edit API Route', () => {
describe('GET', () => {
it('should return 401 when user is not authenticated', async () => {
vi.doMock('@/lib/auth', () => ({
getSession: vi.fn().mockResolvedValue(null),
}))
mockGetSession.mockResolvedValue(null)
const req = new NextRequest('http://localhost:3000/api/chat/manage/chat-123')
const { GET } = await import('@/app/api/chat/manage/[id]/route')
const response = await GET(req, { params: Promise.resolve({ id: 'chat-123' }) })
expect(response.status).toBe(401)
@@ -113,16 +140,13 @@ describe('Chat Edit API Route', () => {
})
it('should return 404 when chat not found or access denied', async () => {
vi.doMock('@/lib/auth', () => ({
getSession: vi.fn().mockResolvedValue({
user: { id: 'user-id' },
}),
}))
mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
mockCheckChatAccess.mockResolvedValue({ hasAccess: false })
const req = new NextRequest('http://localhost:3000/api/chat/manage/chat-123')
const { GET } = await import('@/app/api/chat/manage/[id]/route')
const response = await GET(req, { params: Promise.resolve({ id: 'chat-123' }) })
expect(response.status).toBe(404)
@@ -132,11 +156,9 @@ describe('Chat Edit API Route', () => {
})
it('should return chat details when user has access', async () => {
vi.doMock('@/lib/auth', () => ({
getSession: vi.fn().mockResolvedValue({
user: { id: 'user-id' },
}),
}))
mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
const mockChat = {
id: 'chat-123',
@@ -150,7 +172,6 @@ describe('Chat Edit API Route', () => {
mockCheckChatAccess.mockResolvedValue({ hasAccess: true, chat: mockChat })
const req = new NextRequest('http://localhost:3000/api/chat/manage/chat-123')
const { GET } = await import('@/app/api/chat/manage/[id]/route')
const response = await GET(req, { params: Promise.resolve({ id: 'chat-123' }) })
expect(response.status).toBe(200)
@@ -165,15 +186,12 @@ describe('Chat Edit API Route', () => {
describe('PATCH', () => {
it('should return 401 when user is not authenticated', async () => {
vi.doMock('@/lib/auth', () => ({
getSession: vi.fn().mockResolvedValue(null),
}))
mockGetSession.mockResolvedValue(null)
const req = new NextRequest('http://localhost:3000/api/chat/manage/chat-123', {
method: 'PATCH',
body: JSON.stringify({ title: 'Updated Chat' }),
})
const { PATCH } = await import('@/app/api/chat/manage/[id]/route')
const response = await PATCH(req, { params: Promise.resolve({ id: 'chat-123' }) })
expect(response.status).toBe(401)
@@ -182,11 +200,9 @@ describe('Chat Edit API Route', () => {
})
it('should return 404 when chat not found or access denied', async () => {
vi.doMock('@/lib/auth', () => ({
getSession: vi.fn().mockResolvedValue({
user: { id: 'user-id' },
}),
}))
mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
mockCheckChatAccess.mockResolvedValue({ hasAccess: false })
@@ -194,7 +210,6 @@ describe('Chat Edit API Route', () => {
method: 'PATCH',
body: JSON.stringify({ title: 'Updated Chat' }),
})
const { PATCH } = await import('@/app/api/chat/manage/[id]/route')
const response = await PATCH(req, { params: Promise.resolve({ id: 'chat-123' }) })
expect(response.status).toBe(404)
@@ -204,11 +219,9 @@ describe('Chat Edit API Route', () => {
})
it('should update chat when user has access', async () => {
vi.doMock('@/lib/auth', () => ({
getSession: vi.fn().mockResolvedValue({
user: { id: 'user-id' },
}),
}))
mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
const mockChat = {
id: 'chat-123',
@@ -228,7 +241,6 @@ describe('Chat Edit API Route', () => {
method: 'PATCH',
body: JSON.stringify({ title: 'Updated Chat', description: 'Updated description' }),
})
const { PATCH } = await import('@/app/api/chat/manage/[id]/route')
const response = await PATCH(req, { params: Promise.resolve({ id: 'chat-123' }) })
expect(response.status).toBe(200)
@@ -240,11 +252,9 @@ describe('Chat Edit API Route', () => {
})
it('should handle identifier conflicts', async () => {
vi.doMock('@/lib/auth', () => ({
getSession: vi.fn().mockResolvedValue({
user: { id: 'user-id' },
}),
}))
mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
const mockChat = {
id: 'chat-123',
@@ -263,7 +273,6 @@ describe('Chat Edit API Route', () => {
method: 'PATCH',
body: JSON.stringify({ identifier: 'new-identifier' }),
})
const { PATCH } = await import('@/app/api/chat/manage/[id]/route')
const response = await PATCH(req, { params: Promise.resolve({ id: 'chat-123' }) })
expect(response.status).toBe(400)
@@ -272,11 +281,9 @@ describe('Chat Edit API Route', () => {
})
it('should validate password requirement for password auth', async () => {
vi.doMock('@/lib/auth', () => ({
getSession: vi.fn().mockResolvedValue({
user: { id: 'user-id' },
}),
}))
mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
const mockChat = {
id: 'chat-123',
@@ -293,7 +300,6 @@ describe('Chat Edit API Route', () => {
method: 'PATCH',
body: JSON.stringify({ authType: 'password' }),
})
const { PATCH } = await import('@/app/api/chat/manage/[id]/route')
const response = await PATCH(req, { params: Promise.resolve({ id: 'chat-123' }) })
expect(response.status).toBe(400)
@@ -302,11 +308,9 @@ describe('Chat Edit API Route', () => {
})
it('should allow access when user has workspace admin permission', async () => {
vi.doMock('@/lib/auth', () => ({
getSession: vi.fn().mockResolvedValue({
user: { id: 'admin-user-id' },
}),
}))
mockGetSession.mockResolvedValue({
user: { id: 'admin-user-id' },
})
const mockChat = {
id: 'chat-123',
@@ -326,7 +330,6 @@ describe('Chat Edit API Route', () => {
method: 'PATCH',
body: JSON.stringify({ title: 'Admin Updated Chat' }),
})
const { PATCH } = await import('@/app/api/chat/manage/[id]/route')
const response = await PATCH(req, { params: Promise.resolve({ id: 'chat-123' }) })
expect(response.status).toBe(200)
@@ -336,14 +339,11 @@ describe('Chat Edit API Route', () => {
describe('DELETE', () => {
it('should return 401 when user is not authenticated', async () => {
vi.doMock('@/lib/auth', () => ({
getSession: vi.fn().mockResolvedValue(null),
}))
mockGetSession.mockResolvedValue(null)
const req = new NextRequest('http://localhost:3000/api/chat/manage/chat-123', {
method: 'DELETE',
})
const { DELETE } = await import('@/app/api/chat/manage/[id]/route')
const response = await DELETE(req, { params: Promise.resolve({ id: 'chat-123' }) })
expect(response.status).toBe(401)
@@ -352,18 +352,15 @@ describe('Chat Edit API Route', () => {
})
it('should return 404 when chat not found or access denied', async () => {
vi.doMock('@/lib/auth', () => ({
getSession: vi.fn().mockResolvedValue({
user: { id: 'user-id' },
}),
}))
mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
mockCheckChatAccess.mockResolvedValue({ hasAccess: false })
const req = new NextRequest('http://localhost:3000/api/chat/manage/chat-123', {
method: 'DELETE',
})
const { DELETE } = await import('@/app/api/chat/manage/[id]/route')
const response = await DELETE(req, { params: Promise.resolve({ id: 'chat-123' }) })
expect(response.status).toBe(404)
@@ -373,11 +370,9 @@ describe('Chat Edit API Route', () => {
})
it('should delete chat when user has access', async () => {
vi.doMock('@/lib/auth', () => ({
getSession: vi.fn().mockResolvedValue({
user: { id: 'user-id' },
}),
}))
mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
mockCheckChatAccess.mockResolvedValue({
hasAccess: true,
@@ -388,7 +383,6 @@ describe('Chat Edit API Route', () => {
const req = new NextRequest('http://localhost:3000/api/chat/manage/chat-123', {
method: 'DELETE',
})
const { DELETE } = await import('@/app/api/chat/manage/[id]/route')
const response = await DELETE(req, { params: Promise.resolve({ id: 'chat-123' }) })
expect(response.status).toBe(200)
@@ -398,11 +392,9 @@ describe('Chat Edit API Route', () => {
})
it('should allow deletion when user has workspace admin permission', async () => {
vi.doMock('@/lib/auth', () => ({
getSession: vi.fn().mockResolvedValue({
user: { id: 'admin-user-id' },
}),
}))
mockGetSession.mockResolvedValue({
user: { id: 'admin-user-id' },
})
mockCheckChatAccess.mockResolvedValue({
hasAccess: true,
@@ -413,7 +405,6 @@ describe('Chat Edit API Route', () => {
const req = new NextRequest('http://localhost:3000/api/chat/manage/chat-123', {
method: 'DELETE',
})
const { DELETE } = await import('@/app/api/chat/manage/[id]/route')
const response = await DELETE(req, { params: Promise.resolve({ id: 'chat-123' }) })
expect(response.status).toBe(200)


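Every test file in this diff stubs the same fluent query chain (`select().from().where().limit()`) with nested `vi.fn()`s. A minimal, hedged sketch of that chainable-stub shape — simplified types for illustration, not drizzle's real API — shows the pattern in isolation:

```typescript
// Hand-rolled equivalent of the nested vi.fn() chain in the mocks above:
// each link returns the next object, and the terminal limit() resolves to
// a canned row set, so route code can await the chain as usual.
type Row = Record<string, unknown>;

function makeDbStub(rows: Row[]) {
  const limit = (_count: number) => Promise.resolve(rows);
  const where = (_condition: unknown) => ({ limit });
  const from = (_table: unknown) => ({ where });
  return { select: (_fields?: unknown) => ({ from }) };
}

async function demo() {
  const db = makeDbStub([{ id: 'chat-id', isActive: true }]);
  const result = await db
    .select()
    .from('chat')
    .where({ identifier: 'test-chat' })
    .limit(1);
  console.log(result.length); // 1
}

demo();
```

Swapping the canned `rows` array per test (an empty array for the 404 case, an `isActive: false` row for the 403 case) is exactly what `mockDbSelect.mockImplementation` does in the diffs above.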
@@ -3,27 +3,93 @@
*
* @vitest-environment node
*/
import { auditMock } from '@sim/testing'
import { auditMock, createEnvMock } from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
const {
mockSelect,
mockFrom,
mockWhere,
mockLimit,
mockInsert,
mockValues,
mockReturning,
mockCreateSuccessResponse,
mockCreateErrorResponse,
mockEncryptSecret,
mockCheckWorkflowAccessForChatCreation,
mockDeployWorkflow,
mockGetSession,
mockUuidV4,
} = vi.hoisted(() => ({
mockSelect: vi.fn(),
mockFrom: vi.fn(),
mockWhere: vi.fn(),
mockLimit: vi.fn(),
mockInsert: vi.fn(),
mockValues: vi.fn(),
mockReturning: vi.fn(),
mockCreateSuccessResponse: vi.fn(),
mockCreateErrorResponse: vi.fn(),
mockEncryptSecret: vi.fn(),
mockCheckWorkflowAccessForChatCreation: vi.fn(),
mockDeployWorkflow: vi.fn(),
mockGetSession: vi.fn(),
mockUuidV4: vi.fn(),
}))
vi.mock('@/lib/audit/log', () => auditMock)
vi.mock('@sim/db', () => ({
db: {
select: mockSelect,
insert: mockInsert,
},
}))
vi.mock('@sim/db/schema', () => ({
chat: { userId: 'userId', identifier: 'identifier' },
workflow: { id: 'id', userId: 'userId', isDeployed: 'isDeployed' },
}))
vi.mock('@/app/api/workflows/utils', () => ({
createSuccessResponse: mockCreateSuccessResponse,
createErrorResponse: mockCreateErrorResponse,
}))
vi.mock('@/lib/core/security/encryption', () => ({
encryptSecret: mockEncryptSecret,
}))
vi.mock('uuid', () => ({
v4: mockUuidV4,
}))
vi.mock('@/app/api/chat/utils', () => ({
checkWorkflowAccessForChatCreation: mockCheckWorkflowAccessForChatCreation,
}))
vi.mock('@/lib/workflows/persistence/utils', () => ({
deployWorkflow: mockDeployWorkflow,
}))
vi.mock('@/lib/auth', () => ({
getSession: mockGetSession,
}))
vi.mock('@/lib/core/config/env', () =>
createEnvMock({
NODE_ENV: 'development',
NEXT_PUBLIC_APP_URL: 'http://localhost:3000',
})
)
import { GET, POST } from '@/app/api/chat/route'
describe('Chat API Route', () => {
const mockSelect = vi.fn()
const mockFrom = vi.fn()
const mockWhere = vi.fn()
const mockLimit = vi.fn()
const mockInsert = vi.fn()
const mockValues = vi.fn()
const mockReturning = vi.fn()
const mockCreateSuccessResponse = vi.fn()
const mockCreateErrorResponse = vi.fn()
const mockEncryptSecret = vi.fn()
const mockCheckWorkflowAccessForChatCreation = vi.fn()
const mockDeployWorkflow = vi.fn()
beforeEach(() => {
vi.resetModules()
vi.clearAllMocks()
mockSelect.mockReturnValue({ from: mockFrom })
mockFrom.mockReturnValue({ where: mockWhere })
@@ -31,63 +97,29 @@ describe('Chat API Route', () => {
mockInsert.mockReturnValue({ values: mockValues })
mockValues.mockReturnValue({ returning: mockReturning })
vi.doMock('@/lib/audit/log', () => auditMock)
mockUuidV4.mockReturnValue('test-uuid')
vi.doMock('@sim/db', () => ({
db: {
select: mockSelect,
insert: mockInsert,
},
}))
mockCreateSuccessResponse.mockImplementation((data) => {
return new Response(JSON.stringify(data), {
status: 200,
headers: { 'Content-Type': 'application/json' },
})
})
vi.doMock('@sim/db/schema', () => ({
chat: { userId: 'userId', identifier: 'identifier' },
workflow: { id: 'id', userId: 'userId', isDeployed: 'isDeployed' },
}))
mockCreateErrorResponse.mockImplementation((message, status = 500) => {
return new Response(JSON.stringify({ error: message }), {
status,
headers: { 'Content-Type': 'application/json' },
})
})
vi.doMock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue({
info: vi.fn(),
error: vi.fn(),
warn: vi.fn(),
debug: vi.fn(),
}),
}))
mockEncryptSecret.mockResolvedValue({ encrypted: 'encrypted-password' })
vi.doMock('@/app/api/workflows/utils', () => ({
createSuccessResponse: mockCreateSuccessResponse.mockImplementation((data) => {
return new Response(JSON.stringify(data), {
status: 200,
headers: { 'Content-Type': 'application/json' },
})
}),
createErrorResponse: mockCreateErrorResponse.mockImplementation((message, status = 500) => {
return new Response(JSON.stringify({ error: message }), {
status,
headers: { 'Content-Type': 'application/json' },
})
}),
}))
vi.doMock('@/lib/core/security/encryption', () => ({
encryptSecret: mockEncryptSecret.mockResolvedValue({ encrypted: 'encrypted-password' }),
}))
vi.doMock('uuid', () => ({
v4: vi.fn().mockReturnValue('test-uuid'),
}))
vi.doMock('@/app/api/chat/utils', () => ({
checkWorkflowAccessForChatCreation: mockCheckWorkflowAccessForChatCreation,
}))
vi.doMock('@/lib/workflows/persistence/utils', () => ({
deployWorkflow: mockDeployWorkflow.mockResolvedValue({
success: true,
version: 1,
deployedAt: new Date(),
}),
}))
mockDeployWorkflow.mockResolvedValue({
success: true,
version: 1,
deployedAt: new Date(),
})
})
afterEach(() => {
@@ -96,12 +128,9 @@ describe('Chat API Route', () => {
describe('GET', () => {
it('should return 401 when user is not authenticated', async () => {
vi.doMock('@/lib/auth', () => ({
getSession: vi.fn().mockResolvedValue(null),
}))
mockGetSession.mockResolvedValue(null)
const req = new NextRequest('http://localhost:3000/api/chat')
const { GET } = await import('@/app/api/chat/route')
const response = await GET(req)
expect(response.status).toBe(401)
@@ -109,17 +138,14 @@ describe('Chat API Route', () => {
})
it('should return chat deployments for authenticated user', async () => {
vi.doMock('@/lib/auth', () => ({
getSession: vi.fn().mockResolvedValue({
user: { id: 'user-id' },
}),
}))
mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
const mockDeployments = [{ id: 'deployment-1' }, { id: 'deployment-2' }]
mockWhere.mockResolvedValue(mockDeployments)
const req = new NextRequest('http://localhost:3000/api/chat')
const { GET } = await import('@/app/api/chat/route')
const response = await GET(req)
expect(response.status).toBe(200)
@@ -128,16 +154,13 @@ describe('Chat API Route', () => {
})
it('should handle errors when fetching deployments', async () => {
vi.doMock('@/lib/auth', () => ({
getSession: vi.fn().mockResolvedValue({
user: { id: 'user-id' },
}),
}))
mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
mockWhere.mockRejectedValue(new Error('Database error'))
const req = new NextRequest('http://localhost:3000/api/chat')
const { GET } = await import('@/app/api/chat/route')
const response = await GET(req)
expect(response.status).toBe(500)
@@ -147,15 +170,12 @@ describe('Chat API Route', () => {
describe('POST', () => {
it('should return 401 when user is not authenticated', async () => {
vi.doMock('@/lib/auth', () => ({
getSession: vi.fn().mockResolvedValue(null),
}))
mockGetSession.mockResolvedValue(null)
const req = new NextRequest('http://localhost:3000/api/chat', {
method: 'POST',
body: JSON.stringify({}),
})
const { POST } = await import('@/app/api/chat/route')
const response = await POST(req)
expect(response.status).toBe(401)
@@ -163,11 +183,9 @@ describe('Chat API Route', () => {
})
it('should validate request data', async () => {
vi.doMock('@/lib/auth', () => ({
getSession: vi.fn().mockResolvedValue({
user: { id: 'user-id' },
}),
}))
mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
const invalidData = { title: 'Test Chat' } // Missing required fields
@@ -175,18 +193,15 @@ describe('Chat API Route', () => {
method: 'POST',
body: JSON.stringify(invalidData),
})
const { POST } = await import('@/app/api/chat/route')
const response = await POST(req)
expect(response.status).toBe(400)
})
it('should reject if identifier already exists', async () => {
vi.doMock('@/lib/auth', () => ({
getSession: vi.fn().mockResolvedValue({
user: { id: 'user-id' },
}),
}))
mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
const validData = {
workflowId: 'workflow-123',
@@ -204,7 +219,6 @@ describe('Chat API Route', () => {
method: 'POST',
body: JSON.stringify(validData),
})
const { POST } = await import('@/app/api/chat/route')
const response = await POST(req)
expect(response.status).toBe(400)
@@ -212,11 +226,9 @@ describe('Chat API Route', () => {
})
it('should reject if workflow not found', async () => {
vi.doMock('@/lib/auth', () => ({
getSession: vi.fn().mockResolvedValue({
user: { id: 'user-id' },
}),
}))
mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
const validData = {
workflowId: 'workflow-123',
@@ -235,7 +247,6 @@ describe('Chat API Route', () => {
method: 'POST',
body: JSON.stringify(validData),
})
const { POST } = await import('@/app/api/chat/route')
const response = await POST(req)
expect(response.status).toBe(404)
@@ -246,18 +257,8 @@ describe('Chat API Route', () => {
})
it('should allow chat deployment when user owns workflow directly', async () => {
vi.doMock('@/lib/auth', () => ({
getSession: vi.fn().mockResolvedValue({
user: { id: 'user-id', email: 'user@example.com' },
}),
}))
vi.doMock('@/lib/core/config/env', async () => {
const { createEnvMock } = await import('@sim/testing')
return createEnvMock({
NODE_ENV: 'development',
NEXT_PUBLIC_APP_URL: 'http://localhost:3000',
})
mockGetSession.mockResolvedValue({
user: { id: 'user-id', email: 'user@example.com' },
})
const validData = {
@@ -281,7 +282,6 @@ describe('Chat API Route', () => {
method: 'POST',
body: JSON.stringify(validData),
})
const { POST } = await import('@/app/api/chat/route')
const response = await POST(req)
expect(response.status).toBe(200)
@@ -289,18 +289,8 @@ describe('Chat API Route', () => {
})
it('should allow chat deployment when user has workspace admin permission', async () => {
vi.doMock('@/lib/auth', () => ({
getSession: vi.fn().mockResolvedValue({
user: { id: 'user-id', email: 'user@example.com' },
}),
}))
vi.doMock('@/lib/core/config/env', async () => {
const { createEnvMock } = await import('@sim/testing')
return createEnvMock({
NODE_ENV: 'development',
NEXT_PUBLIC_APP_URL: 'http://localhost:3000',
})
mockGetSession.mockResolvedValue({
user: { id: 'user-id', email: 'user@example.com' },
})
const validData = {
@@ -324,7 +314,6 @@ describe('Chat API Route', () => {
method: 'POST',
body: JSON.stringify(validData),
})
const { POST } = await import('@/app/api/chat/route')
const response = await POST(req)
expect(response.status).toBe(200)
@@ -332,11 +321,9 @@ describe('Chat API Route', () => {
})
it('should reject when workflow is in workspace but user lacks admin permission', async () => {
vi.doMock('@/lib/auth', () => ({
getSession: vi.fn().mockResolvedValue({
user: { id: 'user-id' },
}),
}))
mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
const validData = {
workflowId: 'workflow-123',
@@ -357,7 +344,6 @@ describe('Chat API Route', () => {
method: 'POST',
body: JSON.stringify(validData),
})
const { POST } = await import('@/app/api/chat/route')
const response = await POST(req)
expect(response.status).toBe(404)
@@ -369,11 +355,9 @@ describe('Chat API Route', () => {
})
it('should handle workspace permission check errors gracefully', async () => {
vi.doMock('@/lib/auth', () => ({
getSession: vi.fn().mockResolvedValue({
user: { id: 'user-id' },
}),
}))
mockGetSession.mockResolvedValue({
user: { id: 'user-id' },
})
const validData = {
workflowId: 'workflow-123',
@@ -392,7 +376,6 @@ describe('Chat API Route', () => {
method: 'POST',
body: JSON.stringify(validData),
})
const { POST } = await import('@/app/api/chat/route')
const response = await POST(req)
expect(response.status).toBe(500)
@@ -400,11 +383,9 @@ describe('Chat API Route', () => {
})
it('should auto-deploy workflow if not already deployed', async () => {
vi.doMock('@/lib/auth', () => ({
getSession: vi.fn().mockResolvedValue({
user: { id: 'user-id', email: 'user@example.com' },
}),
}))
mockGetSession.mockResolvedValue({
user: { id: 'user-id', email: 'user@example.com' },
})
const validData = {
workflowId: 'workflow-123',
@@ -427,7 +408,6 @@ describe('Chat API Route', () => {
method: 'POST',
body: JSON.stringify(validData),
})
const { POST } = await import('@/app/api/chat/route')
const response = await POST(req)
expect(response.status).toBe(200)
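The recurring change in this file's diff is the move from per-test `vi.doMock` factories to static `vi.mock` calls backed by `vi.hoisted()`. The reason is hoisting order: Vitest lifts every `vi.mock(path, factory)` above all imports and top-level declarations, so a factory that closes over an ordinary `const mockGetSession = vi.fn()` would run while that binding is still in its temporal dead zone. A minimal sketch of the failure mode in plain TypeScript (no Vitest required; `hoistedMockFactory` and `mockGetSession` are illustrative names):

```typescript
// Simulate Vitest's hoisting: the "factory" is invoked before the
// const it closes over has been initialized, exactly as a hoisted
// vi.mock factory would be relative to a top-level const.
function hoistedMockFactory(): unknown {
  // In a real test file this body would be the vi.mock factory.
  return { getSession: mockGetSession }
}

let outcome: string
try {
  hoistedMockFactory() // runs "before" the declaration below
  outcome = 'no error'
} catch (err) {
  // Reading a const in its temporal dead zone throws ReferenceError.
  outcome = (err as Error).constructor.name
}
console.log(outcome) // ReferenceError

const mockGetSession = () => null // declared too late for the factory
void mockGetSession // silence unused-variable lint
```

`vi.hoisted(fn)` avoids this by evaluating `fn` during the same hoisting pass as the mock factories, so the mock functions it returns already exist when the factories run — which is what the `vi.hoisted(() => ({ ... }))` blocks these diffs introduce rely on.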


@@ -1,11 +1,19 @@
import { databaseMock, loggerMock, requestUtilsMock } from '@sim/testing'
import type { NextResponse } from 'next/server'
/**
* Tests for chat API utils
*
* @vitest-environment node
*/
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
import { databaseMock, loggerMock, requestUtilsMock } from '@sim/testing'
import type { NextResponse } from 'next/server'
import { beforeEach, describe, expect, it, vi } from 'vitest'
const { mockDecryptSecret, mockMergeSubblockStateWithValues, mockMergeSubBlockValues } = vi.hoisted(
() => ({
mockDecryptSecret: vi.fn(),
mockMergeSubblockStateWithValues: vi.fn().mockReturnValue({}),
mockMergeSubBlockValues: vi.fn().mockReturnValue({}),
})
)
vi.mock('@sim/db', () => databaseMock)
vi.mock('@sim/logger', () => loggerMock)
@@ -27,12 +35,10 @@ vi.mock('@/serializer', () => ({
}))
vi.mock('@/lib/workflows/subblocks', () => ({
mergeSubblockStateWithValues: vi.fn().mockReturnValue({}),
mergeSubBlockValues: vi.fn().mockReturnValue({}),
mergeSubblockStateWithValues: mockMergeSubblockStateWithValues,
mergeSubBlockValues: mockMergeSubBlockValues,
}))
const mockDecryptSecret = vi.fn()
vi.mock('@/lib/core/security/encryption', () => ({
decryptSecret: mockDecryptSecret,
}))
@@ -49,8 +55,13 @@ vi.mock('@/lib/workflows/utils', () => ({
authorizeWorkflowByWorkspacePermission: vi.fn(),
}))
import { addCorsHeaders, validateAuthToken } from '@/lib/core/security/deployment'
import { decryptSecret } from '@/lib/core/security/encryption'
import { setChatAuthCookie, validateChatAuth } from '@/app/api/chat/utils'
describe('Chat API Utils', () => {
beforeEach(() => {
vi.clearAllMocks()
vi.stubGlobal('process', {
...process,
env: {
@@ -60,14 +71,8 @@ describe('Chat API Utils', () => {
})
})
afterEach(() => {
vi.clearAllMocks()
})
describe('Auth token utils', () => {
it.concurrent('should validate auth tokens', async () => {
const { validateAuthToken } = await import('@/lib/core/security/deployment')
it.concurrent('should validate auth tokens', () => {
const chatId = 'test-chat-id'
const type = 'password'
@@ -82,9 +87,7 @@ describe('Chat API Utils', () => {
expect(isInvalidChat).toBe(false)
})
it.concurrent('should reject expired tokens', async () => {
const { validateAuthToken } = await import('@/lib/core/security/deployment')
it.concurrent('should reject expired tokens', () => {
const chatId = 'test-chat-id'
const expiredToken = Buffer.from(
`${chatId}:password:${Date.now() - 25 * 60 * 60 * 1000}`
@@ -96,9 +99,7 @@ describe('Chat API Utils', () => {
})
describe('Cookie handling', () => {
it('should set auth cookie correctly', async () => {
const { setChatAuthCookie } = await import('@/app/api/chat/utils')
it('should set auth cookie correctly', () => {
const mockSet = vi.fn()
const mockResponse = {
cookies: {
@@ -125,9 +126,7 @@ describe('Chat API Utils', () => {
})
describe('CORS handling', () => {
it('should add CORS headers for localhost in development', async () => {
const { addCorsHeaders } = await import('@/lib/core/security/deployment')
it('should add CORS headers for localhost in development', () => {
const mockRequest = {
headers: {
get: vi.fn().mockReturnValue('http://localhost:3000'),
@@ -162,28 +161,11 @@ describe('Chat API Utils', () => {
})
describe('Chat auth validation', () => {
beforeEach(async () => {
vi.clearAllMocks()
beforeEach(() => {
mockDecryptSecret.mockResolvedValue({ decrypted: 'correct-password' })
vi.doMock('@/app/api/chat/utils', async (importOriginal) => {
const original = (await importOriginal()) as any
return {
...original,
validateAuthToken: vi.fn((token, id) => {
if (token === 'valid-token' && id === 'chat-id') {
return true
}
return false
}),
}
})
})
it('should allow access to public chats', async () => {
const utils = await import('@/app/api/chat/utils')
const { validateChatAuth } = utils
const deployment = {
id: 'chat-id',
authType: 'public',
@@ -201,8 +183,6 @@ describe('Chat API Utils', () => {
})
it('should request password auth for GET requests', async () => {
const { validateChatAuth } = await import('@/app/api/chat/utils')
const deployment = {
id: 'chat-id',
authType: 'password',
@@ -222,9 +202,6 @@ describe('Chat API Utils', () => {
})
it('should validate password for POST requests', async () => {
const { validateChatAuth } = await import('@/app/api/chat/utils')
const { decryptSecret } = await import('@/lib/core/security/encryption')
const deployment = {
id: 'chat-id',
authType: 'password',
@@ -249,8 +226,6 @@ describe('Chat API Utils', () => {
})
it('should reject incorrect password', async () => {
const { validateChatAuth } = await import('@/app/api/chat/utils')
const deployment = {
id: 'chat-id',
authType: 'password',
@@ -275,8 +250,6 @@ describe('Chat API Utils', () => {
})
it('should request email auth for email-protected chats', async () => {
const { validateChatAuth } = await import('@/app/api/chat/utils')
const deployment = {
id: 'chat-id',
authType: 'email',
@@ -297,8 +270,6 @@ describe('Chat API Utils', () => {
})
it('should check allowed emails for email auth', async () => {
const { validateChatAuth } = await import('@/app/api/chat/utils')
const deployment = {
id: 'chat-id',
authType: 'email',

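The token tests above encode auth tokens as base64 `chatId:type:issuedAtMs` and expect a 25-hour-old token to be rejected. A hypothetical validator consistent with those tests (the 24-hour window and the helper names `makeToken`/`isTokenValid` are assumptions inferred from the test data, not the real `validateAuthToken` implementation):

```typescript
// Token layout used by the tests: base64("chatId:type:issuedAtMs").
// The 24h TTL is an assumption: the tests only show that a token
// issued 25 hours ago must fail validation.
const TOKEN_TTL_MS = 24 * 60 * 60 * 1000

function makeToken(chatId: string, type: string, issuedAt = Date.now()): string {
  return Buffer.from(`${chatId}:${type}:${issuedAt}`).toString('base64')
}

function isTokenValid(token: string, chatId: string): boolean {
  const [id, , issuedAt] = Buffer.from(token, 'base64').toString().split(':')
  return id === chatId && Date.now() - Number(issuedAt) < TOKEN_TTL_MS
}

console.log(isTokenValid(makeToken('test-chat-id', 'password'), 'test-chat-id')) // true
const expired = makeToken('test-chat-id', 'password', Date.now() - 25 * 60 * 60 * 1000)
console.log(isTokenValid(expired, 'test-chat-id')) // false
```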

@@ -3,45 +3,46 @@
*
* @vitest-environment node
*/
import { mockAuth, mockCryptoUuid, setupCommonApiMocks } from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
import { beforeEach, describe, expect, it, vi } from 'vitest'
const { mockGetSession, mockFetch } = vi.hoisted(() => ({
mockGetSession: vi.fn(),
mockFetch: vi.fn(),
}))
vi.mock('@/lib/auth', () => ({
getSession: mockGetSession,
}))
vi.mock('@/lib/copilot/constants', () => ({
SIM_AGENT_API_URL_DEFAULT: 'https://agent.sim.example.com',
SIM_AGENT_API_URL: 'https://agent.sim.example.com',
}))
vi.mock('@/lib/core/config/env', () => ({
env: {
COPILOT_API_KEY: 'test-api-key',
},
getEnv: vi.fn(),
isTruthy: (value: string | boolean | number | undefined) =>
typeof value === 'string' ? value.toLowerCase() === 'true' || value === '1' : Boolean(value),
isFalsy: (value: string | boolean | number | undefined) =>
typeof value === 'string' ? value.toLowerCase() === 'false' || value === '0' : value === false,
}))
import { DELETE, GET } from '@/app/api/copilot/api-keys/route'
describe('Copilot API Keys API Route', () => {
const mockFetch = vi.fn()
beforeEach(() => {
vi.resetModules()
setupCommonApiMocks()
mockCryptoUuid()
global.fetch = mockFetch
vi.doMock('@/lib/copilot/constants', () => ({
SIM_AGENT_API_URL_DEFAULT: 'https://agent.sim.example.com',
SIM_AGENT_API_URL: 'https://agent.sim.example.com',
}))
vi.doMock('@/lib/core/config/env', async () => {
const { createEnvMock } = await import('@sim/testing')
return createEnvMock({
SIM_AGENT_API_URL: undefined,
COPILOT_API_KEY: 'test-api-key',
})
})
})
afterEach(() => {
vi.clearAllMocks()
vi.restoreAllMocks()
global.fetch = mockFetch
})
describe('GET', () => {
it('should return 401 when user is not authenticated', async () => {
const authMocks = mockAuth()
authMocks.setUnauthenticated()
mockGetSession.mockResolvedValue(null)
const { GET } = await import('@/app/api/copilot/api-keys/route')
const request = new NextRequest('http://localhost:3000/api/copilot/api-keys')
const response = await GET(request)
@@ -51,8 +52,7 @@ describe('Copilot API Keys API Route', () => {
})
it('should return list of API keys with masked values', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
const mockApiKeys = [
{
@@ -76,7 +76,6 @@ describe('Copilot API Keys API Route', () => {
json: () => Promise.resolve(mockApiKeys),
})
const { GET } = await import('@/app/api/copilot/api-keys/route')
const request = new NextRequest('http://localhost:3000/api/copilot/api-keys')
const response = await GET(request)
@@ -91,15 +90,13 @@ describe('Copilot API Keys API Route', () => {
})
it('should return empty array when user has no API keys', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
mockFetch.mockResolvedValueOnce({
ok: true,
json: () => Promise.resolve([]),
})
const { GET } = await import('@/app/api/copilot/api-keys/route')
const request = new NextRequest('http://localhost:3000/api/copilot/api-keys')
const response = await GET(request)
@@ -109,15 +106,13 @@ describe('Copilot API Keys API Route', () => {
})
it('should forward userId to Sim Agent', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
mockFetch.mockResolvedValueOnce({
ok: true,
json: () => Promise.resolve([]),
})
const { GET } = await import('@/app/api/copilot/api-keys/route')
const request = new NextRequest('http://localhost:3000/api/copilot/api-keys')
await GET(request)
@@ -135,8 +130,7 @@ describe('Copilot API Keys API Route', () => {
})
it('should return error when Sim Agent returns non-ok response', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
mockFetch.mockResolvedValueOnce({
ok: false,
@@ -144,7 +138,6 @@ describe('Copilot API Keys API Route', () => {
json: () => Promise.resolve({ error: 'Service unavailable' }),
})
const { GET } = await import('@/app/api/copilot/api-keys/route')
const request = new NextRequest('http://localhost:3000/api/copilot/api-keys')
const response = await GET(request)
@@ -154,15 +147,13 @@ describe('Copilot API Keys API Route', () => {
})
it('should return 500 when Sim Agent returns invalid response', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
mockFetch.mockResolvedValueOnce({
ok: true,
json: () => Promise.resolve({ invalid: 'response' }),
})
const { GET } = await import('@/app/api/copilot/api-keys/route')
const request = new NextRequest('http://localhost:3000/api/copilot/api-keys')
const response = await GET(request)
@@ -172,12 +163,10 @@ describe('Copilot API Keys API Route', () => {
})
it('should handle network errors gracefully', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
mockFetch.mockRejectedValueOnce(new Error('Network error'))
const { GET } = await import('@/app/api/copilot/api-keys/route')
const request = new NextRequest('http://localhost:3000/api/copilot/api-keys')
const response = await GET(request)
@@ -187,8 +176,7 @@ describe('Copilot API Keys API Route', () => {
})
it('should handle API keys with empty apiKey string', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
const mockApiKeys = [
{
@@ -205,7 +193,6 @@ describe('Copilot API Keys API Route', () => {
json: () => Promise.resolve(mockApiKeys),
})
const { GET } = await import('@/app/api/copilot/api-keys/route')
const request = new NextRequest('http://localhost:3000/api/copilot/api-keys')
const response = await GET(request)
@@ -215,15 +202,13 @@ describe('Copilot API Keys API Route', () => {
})
it('should handle JSON parsing errors from Sim Agent', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
mockFetch.mockResolvedValueOnce({
ok: true,
json: () => Promise.reject(new Error('Invalid JSON')),
})
const { GET } = await import('@/app/api/copilot/api-keys/route')
const request = new NextRequest('http://localhost:3000/api/copilot/api-keys')
const response = await GET(request)
@@ -235,10 +220,8 @@ describe('Copilot API Keys API Route', () => {
describe('DELETE', () => {
it('should return 401 when user is not authenticated', async () => {
const authMocks = mockAuth()
authMocks.setUnauthenticated()
mockGetSession.mockResolvedValue(null)
const { DELETE } = await import('@/app/api/copilot/api-keys/route')
const request = new NextRequest('http://localhost:3000/api/copilot/api-keys?id=key-123')
const response = await DELETE(request)
@@ -248,10 +231,8 @@ describe('Copilot API Keys API Route', () => {
})
it('should return 400 when id parameter is missing', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
const { DELETE } = await import('@/app/api/copilot/api-keys/route')
const request = new NextRequest('http://localhost:3000/api/copilot/api-keys')
const response = await DELETE(request)
@@ -261,15 +242,13 @@ describe('Copilot API Keys API Route', () => {
})
it('should successfully delete an API key', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
mockFetch.mockResolvedValueOnce({
ok: true,
json: () => Promise.resolve({ success: true }),
})
const { DELETE } = await import('@/app/api/copilot/api-keys/route')
const request = new NextRequest('http://localhost:3000/api/copilot/api-keys?id=key-123')
const response = await DELETE(request)
@@ -291,8 +270,7 @@ describe('Copilot API Keys API Route', () => {
})
it('should return error when Sim Agent returns non-ok response', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
mockFetch.mockResolvedValueOnce({
ok: false,
@@ -300,7 +278,6 @@ describe('Copilot API Keys API Route', () => {
json: () => Promise.resolve({ error: 'Key not found' }),
})
const { DELETE } = await import('@/app/api/copilot/api-keys/route')
const request = new NextRequest('http://localhost:3000/api/copilot/api-keys?id=non-existent')
const response = await DELETE(request)
@@ -310,15 +287,13 @@ describe('Copilot API Keys API Route', () => {
})
it('should return 500 when Sim Agent returns invalid response', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
mockFetch.mockResolvedValueOnce({
ok: true,
json: () => Promise.resolve({ success: false }),
})
const { DELETE } = await import('@/app/api/copilot/api-keys/route')
const request = new NextRequest('http://localhost:3000/api/copilot/api-keys?id=key-123')
const response = await DELETE(request)
@@ -328,12 +303,10 @@ describe('Copilot API Keys API Route', () => {
})
it('should handle network errors gracefully', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
mockFetch.mockRejectedValueOnce(new Error('Network error'))
const { DELETE } = await import('@/app/api/copilot/api-keys/route')
const request = new NextRequest('http://localhost:3000/api/copilot/api-keys?id=key-123')
const response = await DELETE(request)
@@ -343,15 +316,13 @@ describe('Copilot API Keys API Route', () => {
})
it('should handle JSON parsing errors from Sim Agent on delete', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123', email: 'test@example.com' } })
mockFetch.mockResolvedValueOnce({
ok: true,
json: () => Promise.reject(new Error('Invalid JSON')),
})
const { DELETE } = await import('@/app/api/copilot/api-keys/route')
const request = new NextRequest('http://localhost:3000/api/copilot/api-keys?id=key-123')
const response = await DELETE(request)
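The inline env mock in this file reimplements `isTruthy`/`isFalsy` rather than importing the real config module, and their string handling matters: `'0'` and `'false'` are truthy under a naive `Boolean(...)` cast. The same two helpers as a standalone, runnable sketch (copied from the mock above):

```typescript
// Same coercion rules as the env-mock helpers: strings are compared
// case-insensitively against 'true'/'1' and 'false'/'0'; non-strings
// fall back to boolean coercion (isTruthy) or strict equality (isFalsy).
const isTruthy = (value: string | boolean | number | undefined): boolean =>
  typeof value === 'string' ? value.toLowerCase() === 'true' || value === '1' : Boolean(value)

const isFalsy = (value: string | boolean | number | undefined): boolean =>
  typeof value === 'string' ? value.toLowerCase() === 'false' || value === '0' : value === false

console.log(isTruthy('TRUE'), isTruthy('1'), isTruthy('0')) // true true false
console.log(isFalsy('0'), isFalsy(undefined), isFalsy(0))   // true false false
```

Note the asymmetry in the non-string branches: `isTruthy` coerces, so `isTruthy(0)` is false, while `isFalsy` checks strict equality with `false`, so `isFalsy(0)` is also false — `0` is neither truthy nor falsy under these rules.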


@@ -3,55 +3,68 @@
*
* @vitest-environment node
*/
import { createMockRequest, mockAuth, mockCryptoUuid, setupCommonApiMocks } from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
describe('Copilot Chat Delete API Route', () => {
const mockDelete = vi.fn()
const mockWhere = vi.fn()
const { mockDelete, mockWhere, mockGetSession } = vi.hoisted(() => ({
mockDelete: vi.fn(),
mockWhere: vi.fn(),
mockGetSession: vi.fn(),
}))
vi.mock('@/lib/auth', () => ({
getSession: mockGetSession,
}))
vi.mock('@sim/db', () => ({
db: {
delete: mockDelete,
},
}))
vi.mock('@sim/db/schema', () => ({
copilotChats: {
id: 'id',
userId: 'userId',
},
}))
vi.mock('drizzle-orm', () => ({
eq: vi.fn((field: unknown, value: unknown) => ({ field, value, type: 'eq' })),
}))
import { DELETE } from '@/app/api/copilot/chat/delete/route'
function createMockRequest(method: string, body: Record<string, unknown>): NextRequest {
return new NextRequest('http://localhost:3000/api/copilot/chat/delete', {
method,
body: JSON.stringify(body),
headers: { 'Content-Type': 'application/json' },
})
}
describe('Copilot Chat Delete API Route', () => {
beforeEach(() => {
vi.resetModules()
setupCommonApiMocks()
mockCryptoUuid()
vi.clearAllMocks()
mockGetSession.mockResolvedValue(null)
mockDelete.mockReturnValue({ where: mockWhere })
mockWhere.mockResolvedValue([])
vi.doMock('@sim/db', () => ({
db: {
delete: mockDelete,
},
}))
vi.doMock('@sim/db/schema', () => ({
copilotChats: {
id: 'id',
userId: 'userId',
},
}))
vi.doMock('drizzle-orm', () => ({
eq: vi.fn((field, value) => ({ field, value, type: 'eq' })),
}))
})
afterEach(() => {
vi.clearAllMocks()
vi.restoreAllMocks()
})
describe('DELETE', () => {
it('should return 401 when user is not authenticated', async () => {
const authMocks = mockAuth()
authMocks.setUnauthenticated()
mockGetSession.mockResolvedValue(null)
const req = createMockRequest('DELETE', {
chatId: 'chat-123',
})
const { DELETE } = await import('@/app/api/copilot/chat/delete/route')
const response = await DELETE(req)
expect(response.status).toBe(401)
@@ -60,8 +73,7 @@ describe('Copilot Chat Delete API Route', () => {
})
it('should successfully delete a chat', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
mockWhere.mockResolvedValueOnce([{ id: 'chat-123' }])
@@ -69,7 +81,6 @@ describe('Copilot Chat Delete API Route', () => {
chatId: 'chat-123',
})
const { DELETE } = await import('@/app/api/copilot/chat/delete/route')
const response = await DELETE(req)
expect(response.status).toBe(200)
@@ -81,12 +92,10 @@ describe('Copilot Chat Delete API Route', () => {
})
it('should return 500 for invalid request body - missing chatId', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const req = createMockRequest('DELETE', {})
const { DELETE } = await import('@/app/api/copilot/chat/delete/route')
const response = await DELETE(req)
expect(response.status).toBe(500)
@@ -95,14 +104,12 @@ describe('Copilot Chat Delete API Route', () => {
})
it('should return 500 for invalid request body - chatId is not a string', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const req = createMockRequest('DELETE', {
chatId: 12345,
})
const { DELETE } = await import('@/app/api/copilot/chat/delete/route')
const response = await DELETE(req)
expect(response.status).toBe(500)
@@ -111,8 +118,7 @@ describe('Copilot Chat Delete API Route', () => {
})
it('should handle database errors gracefully', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
mockWhere.mockRejectedValueOnce(new Error('Database connection failed'))
@@ -120,7 +126,6 @@ describe('Copilot Chat Delete API Route', () => {
chatId: 'chat-123',
})
const { DELETE } = await import('@/app/api/copilot/chat/delete/route')
const response = await DELETE(req)
expect(response.status).toBe(500)
@@ -129,8 +134,7 @@ describe('Copilot Chat Delete API Route', () => {
})
it('should handle JSON parsing errors in request body', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const req = new NextRequest('http://localhost:3000/api/copilot/chat/delete', {
method: 'DELETE',
@@ -140,7 +144,6 @@ describe('Copilot Chat Delete API Route', () => {
},
})
const { DELETE } = await import('@/app/api/copilot/chat/delete/route')
const response = await DELETE(req)
expect(response.status).toBe(500)
@@ -149,8 +152,7 @@ describe('Copilot Chat Delete API Route', () => {
})
it('should delete chat even if it does not exist (idempotent)', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
mockWhere.mockResolvedValueOnce([])
@@ -158,7 +160,6 @@ describe('Copilot Chat Delete API Route', () => {
chatId: 'non-existent-chat',
})
const { DELETE } = await import('@/app/api/copilot/chat/delete/route')
const response = await DELETE(req)
expect(response.status).toBe(200)
@@ -167,14 +168,12 @@ describe('Copilot Chat Delete API Route', () => {
})
it('should delete chat with empty string chatId (validation should fail)', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const req = createMockRequest('DELETE', {
chatId: '',
})
const { DELETE } = await import('@/app/api/copilot/chat/delete/route')
const response = await DELETE(req)
expect(response.status).toBe(200)


@@ -3,61 +3,86 @@
*
* @vitest-environment node
*/
import { createMockRequest, mockAuth, mockCryptoUuid, setupCommonApiMocks } from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
describe('Copilot Chat Update Messages API Route', () => {
const mockSelect = vi.fn()
const mockFrom = vi.fn()
const mockWhere = vi.fn()
const mockLimit = vi.fn()
const mockUpdate = vi.fn()
const mockSet = vi.fn()
const {
mockSelect,
mockFrom,
mockWhere,
mockLimit,
mockUpdate,
mockSet,
mockUpdateWhere,
mockGetSession,
} = vi.hoisted(() => ({
mockSelect: vi.fn(),
mockFrom: vi.fn(),
mockWhere: vi.fn(),
mockLimit: vi.fn(),
mockUpdate: vi.fn(),
mockSet: vi.fn(),
mockUpdateWhere: vi.fn(),
mockGetSession: vi.fn(),
}))
vi.mock('@/lib/auth', () => ({
getSession: mockGetSession,
}))
vi.mock('@sim/db', () => ({
db: {
select: mockSelect,
update: mockUpdate,
},
}))
vi.mock('@sim/db/schema', () => ({
copilotChats: {
id: 'id',
userId: 'userId',
messages: 'messages',
updatedAt: 'updatedAt',
},
}))
vi.mock('drizzle-orm', () => ({
and: vi.fn((...conditions: unknown[]) => ({ conditions, type: 'and' })),
eq: vi.fn((field: unknown, value: unknown) => ({ field, value, type: 'eq' })),
}))
import { POST } from '@/app/api/copilot/chat/update-messages/route'
function createMockRequest(method: string, body: Record<string, unknown>): NextRequest {
return new NextRequest('http://localhost:3000/api/copilot/chat/update-messages', {
method,
body: JSON.stringify(body),
headers: { 'Content-Type': 'application/json' },
})
}
describe('Copilot Chat Update Messages API Route', () => {
beforeEach(() => {
vi.resetModules()
setupCommonApiMocks()
mockCryptoUuid()
vi.clearAllMocks()
mockGetSession.mockResolvedValue(null)
mockSelect.mockReturnValue({ from: mockFrom })
mockFrom.mockReturnValue({ where: mockWhere })
mockWhere.mockReturnValue({ limit: mockLimit })
mockLimit.mockResolvedValue([]) // Default: no chat found
mockLimit.mockResolvedValue([])
mockUpdate.mockReturnValue({ set: mockSet })
mockSet.mockReturnValue({ where: vi.fn().mockResolvedValue(undefined) }) // Different where for update
vi.doMock('@sim/db', () => ({
db: {
select: mockSelect,
update: mockUpdate,
},
}))
vi.doMock('@sim/db/schema', () => ({
copilotChats: {
id: 'id',
userId: 'userId',
messages: 'messages',
updatedAt: 'updatedAt',
},
}))
vi.doMock('drizzle-orm', () => ({
and: vi.fn((...conditions) => ({ conditions, type: 'and' })),
eq: vi.fn((field, value) => ({ field, value, type: 'eq' })),
}))
mockUpdateWhere.mockResolvedValue(undefined)
mockSet.mockReturnValue({ where: mockUpdateWhere })
})
afterEach(() => {
vi.clearAllMocks()
vi.restoreAllMocks()
})
describe('POST', () => {
it('should return 401 when user is not authenticated', async () => {
const authMocks = mockAuth()
authMocks.setUnauthenticated()
mockGetSession.mockResolvedValue(null)
const req = createMockRequest('POST', {
chatId: 'chat-123',
@@ -71,7 +96,6 @@ describe('Copilot Chat Update Messages API Route', () => {
],
})
const { POST } = await import('@/app/api/copilot/chat/update-messages/route')
const response = await POST(req)
expect(response.status).toBe(401)
@@ -80,8 +104,7 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should return 400 for invalid request body - missing chatId', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const req = createMockRequest('POST', {
messages: [
@@ -92,10 +115,8 @@ describe('Copilot Chat Update Messages API Route', () => {
timestamp: '2024-01-01T00:00:00.000Z',
},
],
// Missing chatId
})
const { POST } = await import('@/app/api/copilot/chat/update-messages/route')
const response = await POST(req)
expect(response.status).toBe(500)
@@ -104,15 +125,12 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should return 400 for invalid request body - missing messages', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const req = createMockRequest('POST', {
chatId: 'chat-123',
// Missing messages
})
const { POST } = await import('@/app/api/copilot/chat/update-messages/route')
const response = await POST(req)
expect(response.status).toBe(500)
@@ -121,20 +139,17 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should return 400 for invalid message structure - missing required fields', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const req = createMockRequest('POST', {
chatId: 'chat-123',
messages: [
{
id: 'msg-1',
// Missing role, content, timestamp
},
],
})
const { POST } = await import('@/app/api/copilot/chat/update-messages/route')
const response = await POST(req)
expect(response.status).toBe(500)
@@ -143,8 +158,7 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should return 400 for invalid message role', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const req = createMockRequest('POST', {
chatId: 'chat-123',
@@ -158,7 +172,6 @@ describe('Copilot Chat Update Messages API Route', () => {
],
})
const { POST } = await import('@/app/api/copilot/chat/update-messages/route')
const response = await POST(req)
expect(response.status).toBe(500)
@@ -167,10 +180,8 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should return 404 when chat is not found', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
// Mock chat not found
mockLimit.mockResolvedValueOnce([])
const req = createMockRequest('POST', {
@@ -185,7 +196,6 @@ describe('Copilot Chat Update Messages API Route', () => {
],
})
const { POST } = await import('@/app/api/copilot/chat/update-messages/route')
const response = await POST(req)
expect(response.status).toBe(404)
@@ -194,10 +204,8 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should return 404 when chat belongs to different user', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
// Mock chat not found (due to user mismatch)
mockLimit.mockResolvedValueOnce([])
const req = createMockRequest('POST', {
@@ -212,7 +220,6 @@ describe('Copilot Chat Update Messages API Route', () => {
],
})
const { POST } = await import('@/app/api/copilot/chat/update-messages/route')
const response = await POST(req)
expect(response.status).toBe(404)
@@ -221,8 +228,7 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should successfully update chat messages', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const existingChat = {
id: 'chat-123',
@@ -251,7 +257,6 @@ describe('Copilot Chat Update Messages API Route', () => {
messages,
})
const { POST } = await import('@/app/api/copilot/chat/update-messages/route')
const response = await POST(req)
expect(response.status).toBe(200)
@@ -270,8 +275,7 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should successfully update chat messages with optional fields', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const existingChat = {
id: 'chat-456',
@@ -313,7 +317,6 @@ describe('Copilot Chat Update Messages API Route', () => {
messages,
})
const { POST } = await import('@/app/api/copilot/chat/update-messages/route')
const response = await POST(req)
expect(response.status).toBe(200)
@@ -330,8 +333,7 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should handle empty messages array', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const existingChat = {
id: 'chat-789',
@@ -345,7 +347,6 @@ describe('Copilot Chat Update Messages API Route', () => {
messages: [],
})
const { POST } = await import('@/app/api/copilot/chat/update-messages/route')
const response = await POST(req)
expect(response.status).toBe(200)
@@ -362,8 +363,7 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should handle database errors during chat lookup', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
mockLimit.mockRejectedValueOnce(new Error('Database connection failed'))
@@ -379,7 +379,6 @@ describe('Copilot Chat Update Messages API Route', () => {
],
})
const { POST } = await import('@/app/api/copilot/chat/update-messages/route')
const response = await POST(req)
expect(response.status).toBe(500)
@@ -388,8 +387,7 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should handle database errors during update operation', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const existingChat = {
id: 'chat-123',
@@ -414,7 +412,6 @@ describe('Copilot Chat Update Messages API Route', () => {
],
})
const { POST } = await import('@/app/api/copilot/chat/update-messages/route')
const response = await POST(req)
expect(response.status).toBe(500)
@@ -423,8 +420,7 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should handle JSON parsing errors in request body', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const req = new NextRequest('http://localhost:3000/api/copilot/chat/update-messages', {
method: 'POST',
@@ -434,7 +430,6 @@ describe('Copilot Chat Update Messages API Route', () => {
},
})
const { POST } = await import('@/app/api/copilot/chat/update-messages/route')
const response = await POST(req)
expect(response.status).toBe(500)
@@ -443,8 +438,7 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should handle large message arrays', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const existingChat = {
id: 'chat-large',
@@ -465,7 +459,6 @@ describe('Copilot Chat Update Messages API Route', () => {
messages,
})
const { POST } = await import('@/app/api/copilot/chat/update-messages/route')
const response = await POST(req)
expect(response.status).toBe(200)
@@ -482,8 +475,7 @@ describe('Copilot Chat Update Messages API Route', () => {
})
it('should handle messages with both user and assistant roles', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const existingChat = {
id: 'chat-mixed',
@@ -531,7 +523,6 @@ describe('Copilot Chat Update Messages API Route', () => {
messages,
})
const { POST } = await import('@/app/api/copilot/chat/update-messages/route')
const response = await POST(req)
expect(response.status).toBe(200)


@@ -3,76 +3,84 @@
*
* @vitest-environment node
*/
import { mockCryptoUuid, setupCommonApiMocks } from '@sim/testing'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
describe('Copilot Chats List API Route', () => {
const mockSelect = vi.fn()
const mockFrom = vi.fn()
const mockWhere = vi.fn()
const mockOrderBy = vi.fn()
const {
mockSelect,
mockFrom,
mockWhere,
mockOrderBy,
mockAuthenticate,
mockCreateUnauthorizedResponse,
mockCreateInternalServerErrorResponse,
} = vi.hoisted(() => ({
mockSelect: vi.fn(),
mockFrom: vi.fn(),
mockWhere: vi.fn(),
mockOrderBy: vi.fn(),
mockAuthenticate: vi.fn(),
mockCreateUnauthorizedResponse: vi.fn(),
mockCreateInternalServerErrorResponse: vi.fn(),
}))
vi.mock('@sim/db', () => ({
db: {
select: mockSelect,
},
}))
vi.mock('@sim/db/schema', () => ({
copilotChats: {
id: 'id',
title: 'title',
workflowId: 'workflowId',
userId: 'userId',
updatedAt: 'updatedAt',
},
}))
vi.mock('drizzle-orm', () => ({
and: vi.fn((...conditions: unknown[]) => ({ conditions, type: 'and' })),
eq: vi.fn((field: unknown, value: unknown) => ({ field, value, type: 'eq' })),
desc: vi.fn((field: unknown) => ({ field, type: 'desc' })),
}))
vi.mock('@/lib/copilot/request-helpers', () => ({
authenticateCopilotRequestSessionOnly: mockAuthenticate,
createUnauthorizedResponse: mockCreateUnauthorizedResponse,
createInternalServerErrorResponse: mockCreateInternalServerErrorResponse,
}))
import { GET } from '@/app/api/copilot/chats/route'
describe('Copilot Chats List API Route', () => {
beforeEach(() => {
vi.resetModules()
setupCommonApiMocks()
mockCryptoUuid()
vi.clearAllMocks()
mockSelect.mockReturnValue({ from: mockFrom })
mockFrom.mockReturnValue({ where: mockWhere })
mockWhere.mockReturnValue({ orderBy: mockOrderBy })
mockOrderBy.mockResolvedValue([])
vi.doMock('@sim/db', () => ({
db: {
select: mockSelect,
},
}))
vi.doMock('@sim/db/schema', () => ({
copilotChats: {
id: 'id',
title: 'title',
workflowId: 'workflowId',
userId: 'userId',
updatedAt: 'updatedAt',
},
}))
vi.doMock('drizzle-orm', () => ({
and: vi.fn((...conditions) => ({ conditions, type: 'and' })),
eq: vi.fn((field, value) => ({ field, value, type: 'eq' })),
desc: vi.fn((field) => ({ field, type: 'desc' })),
}))
vi.doMock('@/lib/copilot/request-helpers', () => ({
authenticateCopilotRequestSessionOnly: vi.fn(),
createUnauthorizedResponse: vi
.fn()
.mockReturnValue(new Response(JSON.stringify({ error: 'Unauthorized' }), { status: 401 })),
createInternalServerErrorResponse: vi
.fn()
.mockImplementation(
(message) => new Response(JSON.stringify({ error: message }), { status: 500 })
),
}))
mockCreateUnauthorizedResponse.mockReturnValue(
new Response(JSON.stringify({ error: 'Unauthorized' }), { status: 401 })
)
mockCreateInternalServerErrorResponse.mockImplementation(
(message: string) => new Response(JSON.stringify({ error: message }), { status: 500 })
)
})
afterEach(() => {
vi.clearAllMocks()
vi.restoreAllMocks()
})
describe('GET', () => {
it('should return 401 when user is not authenticated', async () => {
const { authenticateCopilotRequestSessionOnly } = await import(
'@/lib/copilot/request-helpers'
)
vi.mocked(authenticateCopilotRequestSessionOnly).mockResolvedValueOnce({
mockAuthenticate.mockResolvedValueOnce({
userId: null,
isAuthenticated: false,
})
const { GET } = await import('@/app/api/copilot/chats/route')
const request = new Request('http://localhost:3000/api/copilot/chats')
const response = await GET(request as any)
@@ -82,17 +90,13 @@ describe('Copilot Chats List API Route', () => {
})
it('should return empty chats array when user has no chats', async () => {
const { authenticateCopilotRequestSessionOnly } = await import(
'@/lib/copilot/request-helpers'
)
vi.mocked(authenticateCopilotRequestSessionOnly).mockResolvedValueOnce({
mockAuthenticate.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
mockOrderBy.mockResolvedValueOnce([])
const { GET } = await import('@/app/api/copilot/chats/route')
const request = new Request('http://localhost:3000/api/copilot/chats')
const response = await GET(request as any)
@@ -105,10 +109,7 @@ describe('Copilot Chats List API Route', () => {
})
it('should return list of chats for authenticated user', async () => {
const { authenticateCopilotRequestSessionOnly } = await import(
'@/lib/copilot/request-helpers'
)
vi.mocked(authenticateCopilotRequestSessionOnly).mockResolvedValueOnce({
mockAuthenticate.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -129,7 +130,6 @@ describe('Copilot Chats List API Route', () => {
]
mockOrderBy.mockResolvedValueOnce(mockChats)
const { GET } = await import('@/app/api/copilot/chats/route')
const request = new Request('http://localhost:3000/api/copilot/chats')
const response = await GET(request as any)
@@ -143,10 +143,7 @@ describe('Copilot Chats List API Route', () => {
})
it('should return chats ordered by updatedAt descending', async () => {
const { authenticateCopilotRequestSessionOnly } = await import(
'@/lib/copilot/request-helpers'
)
vi.mocked(authenticateCopilotRequestSessionOnly).mockResolvedValueOnce({
mockAuthenticate.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -173,7 +170,6 @@ describe('Copilot Chats List API Route', () => {
]
mockOrderBy.mockResolvedValueOnce(mockChats)
const { GET } = await import('@/app/api/copilot/chats/route')
const request = new Request('http://localhost:3000/api/copilot/chats')
const response = await GET(request as any)
@@ -184,10 +180,7 @@ describe('Copilot Chats List API Route', () => {
})
it('should handle chats with null workflowId', async () => {
const { authenticateCopilotRequestSessionOnly } = await import(
'@/lib/copilot/request-helpers'
)
vi.mocked(authenticateCopilotRequestSessionOnly).mockResolvedValueOnce({
mockAuthenticate.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -202,7 +195,6 @@ describe('Copilot Chats List API Route', () => {
]
mockOrderBy.mockResolvedValueOnce(mockChats)
const { GET } = await import('@/app/api/copilot/chats/route')
const request = new Request('http://localhost:3000/api/copilot/chats')
const response = await GET(request as any)
@@ -212,17 +204,13 @@ describe('Copilot Chats List API Route', () => {
})
it('should handle database errors gracefully', async () => {
const { authenticateCopilotRequestSessionOnly } = await import(
'@/lib/copilot/request-helpers'
)
vi.mocked(authenticateCopilotRequestSessionOnly).mockResolvedValueOnce({
mockAuthenticate.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
mockOrderBy.mockRejectedValueOnce(new Error('Database connection failed'))
const { GET } = await import('@/app/api/copilot/chats/route')
const request = new Request('http://localhost:3000/api/copilot/chats')
const response = await GET(request as any)
@@ -232,10 +220,7 @@ describe('Copilot Chats List API Route', () => {
})
it('should only return chats belonging to authenticated user', async () => {
const { authenticateCopilotRequestSessionOnly } = await import(
'@/lib/copilot/request-helpers'
)
vi.mocked(authenticateCopilotRequestSessionOnly).mockResolvedValueOnce({
mockAuthenticate.mockResolvedValueOnce({
userId: 'user-123',
isAuthenticated: true,
})
@@ -250,7 +235,6 @@ describe('Copilot Chats List API Route', () => {
]
mockOrderBy.mockResolvedValueOnce(mockChats)
const { GET } = await import('@/app/api/copilot/chats/route')
const request = new Request('http://localhost:3000/api/copilot/chats')
await GET(request as any)
@@ -259,15 +243,11 @@ describe('Copilot Chats List API Route', () => {
})
it('should return 401 when userId is null despite isAuthenticated being true', async () => {
const { authenticateCopilotRequestSessionOnly } = await import(
'@/lib/copilot/request-helpers'
)
vi.mocked(authenticateCopilotRequestSessionOnly).mockResolvedValueOnce({
mockAuthenticate.mockResolvedValueOnce({
userId: null,
isAuthenticated: true,
})
const { GET } = await import('@/app/api/copilot/chats/route')
const request = new Request('http://localhost:3000/api/copilot/chats')
const response = await GET(request as any)


@@ -3,63 +3,105 @@
*
* @vitest-environment node
*/
import { createMockRequest, mockAuth, mockCryptoUuid, setupCommonApiMocks } from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
const {
mockSelect,
mockFrom,
mockWhere,
mockThen,
mockDelete,
mockDeleteWhere,
mockAuthorize,
mockGetSession,
} = vi.hoisted(() => ({
mockSelect: vi.fn(),
mockFrom: vi.fn(),
mockWhere: vi.fn(),
mockThen: vi.fn(),
mockDelete: vi.fn(),
mockDeleteWhere: vi.fn(),
mockAuthorize: vi.fn(),
mockGetSession: vi.fn(),
}))
vi.mock('@/lib/auth', () => ({
getSession: mockGetSession,
}))
vi.mock('@/lib/core/utils/urls', () => ({
getBaseUrl: vi.fn(() => 'http://localhost:3000'),
getInternalApiBaseUrl: vi.fn(() => 'http://localhost:3000'),
getBaseDomain: vi.fn(() => 'localhost:3000'),
getEmailDomain: vi.fn(() => 'localhost:3000'),
}))
vi.mock('@/lib/workflows/utils', () => ({
authorizeWorkflowByWorkspacePermission: mockAuthorize,
}))
vi.mock('@sim/db', () => ({
db: {
select: mockSelect,
delete: mockDelete,
},
}))
vi.mock('@sim/db/schema', () => ({
workflowCheckpoints: {
id: 'id',
userId: 'userId',
workflowId: 'workflowId',
workflowState: 'workflowState',
},
workflow: {
id: 'id',
userId: 'userId',
},
}))
vi.mock('drizzle-orm', () => ({
and: vi.fn((...conditions: unknown[]) => ({ conditions, type: 'and' })),
eq: vi.fn((field: unknown, value: unknown) => ({ field, value, type: 'eq' })),
}))
import { POST } from '@/app/api/copilot/checkpoints/revert/route'
describe('Copilot Checkpoints Revert API Route', () => {
const mockSelect = vi.fn()
const mockFrom = vi.fn()
const mockWhere = vi.fn()
const mockThen = vi.fn()
/** Queued results for successive `.then()` calls in the db select chain */
let thenResults: unknown[]
beforeEach(() => {
vi.resetModules()
setupCommonApiMocks()
mockCryptoUuid()
vi.clearAllMocks()
vi.doMock('@/lib/core/utils/urls', () => ({
getBaseUrl: vi.fn(() => 'http://localhost:3000'),
getInternalApiBaseUrl: vi.fn(() => 'http://localhost:3000'),
getBaseDomain: vi.fn(() => 'localhost:3000'),
getEmailDomain: vi.fn(() => 'localhost:3000'),
}))
thenResults = []
vi.doMock('@/lib/workflows/utils', () => ({
authorizeWorkflowByWorkspacePermission: vi.fn().mockResolvedValue({
allowed: true,
status: 200,
}),
}))
mockGetSession.mockResolvedValue(null)
mockAuthorize.mockResolvedValue({
allowed: true,
status: 200,
})
mockSelect.mockReturnValue({ from: mockFrom })
mockFrom.mockReturnValue({ where: mockWhere })
mockWhere.mockReturnValue({ then: mockThen })
mockThen.mockResolvedValue(null) // Default: no data found
vi.doMock('@sim/db', () => ({
db: {
select: mockSelect,
},
}))
// Drizzle's .then() is a thenable: it receives a callback like (rows) => rows[0].
// We invoke the callback with our mock rows array so the route gets the expected value.
mockThen.mockImplementation((callback: (rows: unknown[]) => unknown) => {
const result = thenResults.shift()
if (result instanceof Error) {
return Promise.reject(result)
}
const rows = result === undefined ? [] : [result]
return Promise.resolve(callback(rows))
})
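The queued-thenable mock above can be sketched framework-free. This is an illustrative reduction, not code from the test suite: `ThenCallback`, `mockThen`, and `demo` are hypothetical names, and it assumes the route consumes the chain with a `(rows) => rows[0]` callback as the comment describes.

```typescript
// Drizzle's select chain ends in a thenable; the route typically calls
// `.then((rows) => rows[0])` on it. The mock shifts the next queued result
// and hands the callback a rows array (empty when the result is undefined),
// so each queued entry drives one successive query in the route.
type ThenCallback<T> = (rows: unknown[]) => T

const thenResults: unknown[] = []

function mockThen<T>(callback: ThenCallback<T>): Promise<T> {
  const result = thenResults.shift()
  if (result instanceof Error) {
    // A queued Error simulates a database failure for that query.
    return Promise.reject(result)
  }
  const rows = result === undefined ? [] : [result]
  return Promise.resolve(callback(rows))
}

async function demo(): Promise<{ first: unknown; second: unknown }> {
  thenResults.push({ id: 'checkpoint-123' }) // first query finds a row
  thenResults.push(undefined) // second query finds nothing
  const first = await mockThen((rows) => rows[0])
  const second = await mockThen((rows) => rows[0])
  return { first, second }
}
```

Queueing per-query results this way lets one mock serve the checkpoint lookup and the workflow lookup in order, which is why the tests push two entries before calling the route.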
vi.doMock('@sim/db/schema', () => ({
workflowCheckpoints: {
id: 'id',
userId: 'userId',
workflowId: 'workflowId',
workflowState: 'workflowState',
},
workflow: {
id: 'id',
userId: 'userId',
},
}))
vi.doMock('drizzle-orm', () => ({
and: vi.fn((...conditions) => ({ conditions, type: 'and' })),
eq: vi.fn((field, value) => ({ field, value, type: 'eq' })),
}))
// Mock delete chain
mockDelete.mockReturnValue({ where: mockDeleteWhere })
mockDeleteWhere.mockResolvedValue(undefined)
global.fetch = vi.fn()
@@ -83,16 +125,26 @@ describe('Copilot Checkpoints Revert API Route', () => {
vi.restoreAllMocks()
})
/** Helper to set authenticated state */
function setAuthenticated(user = { id: 'user-123', email: 'test@example.com' }) {
mockGetSession.mockResolvedValue({ user })
}
/** Helper to set unauthenticated state */
function setUnauthenticated() {
mockGetSession.mockResolvedValue(null)
}
describe('POST', () => {
it('should return 401 when user is not authenticated', async () => {
const authMocks = mockAuth()
authMocks.setUnauthenticated()
setUnauthenticated()
const req = createMockRequest('POST', {
checkpointId: 'checkpoint-123',
const req = new NextRequest('http://localhost:3000/api/copilot/checkpoints/revert', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ checkpointId: 'checkpoint-123' }),
})
const { POST } = await import('@/app/api/copilot/checkpoints/revert/route')
const response = await POST(req)
expect(response.status).toBe(401)
@@ -101,14 +153,14 @@ describe('Copilot Checkpoints Revert API Route', () => {
})
it('should return 500 for invalid request body - missing checkpointId', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
setAuthenticated()
const req = createMockRequest('POST', {
// Missing checkpointId
const req = new NextRequest('http://localhost:3000/api/copilot/checkpoints/revert', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({}),
})
const { POST } = await import('@/app/api/copilot/checkpoints/revert/route')
const response = await POST(req)
expect(response.status).toBe(500)
@@ -117,14 +169,14 @@ describe('Copilot Checkpoints Revert API Route', () => {
})
it('should return 500 for empty checkpointId', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
setAuthenticated()
const req = createMockRequest('POST', {
checkpointId: '',
const req = new NextRequest('http://localhost:3000/api/copilot/checkpoints/revert', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ checkpointId: '' }),
})
const { POST } = await import('@/app/api/copilot/checkpoints/revert/route')
const response = await POST(req)
expect(response.status).toBe(500)
@@ -133,17 +185,17 @@ describe('Copilot Checkpoints Revert API Route', () => {
})
it('should return 404 when checkpoint is not found', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
setAuthenticated()
// Mock checkpoint not found
mockThen.mockResolvedValueOnce(undefined)
thenResults.push(undefined)
const req = createMockRequest('POST', {
checkpointId: 'non-existent-checkpoint',
const req = new NextRequest('http://localhost:3000/api/copilot/checkpoints/revert', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ checkpointId: 'non-existent-checkpoint' }),
})
const { POST } = await import('@/app/api/copilot/checkpoints/revert/route')
const response = await POST(req)
expect(response.status).toBe(404)
@@ -152,17 +204,17 @@ describe('Copilot Checkpoints Revert API Route', () => {
})
it('should return 404 when checkpoint belongs to different user', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
setAuthenticated()
// Mock checkpoint not found (due to user mismatch in query)
mockThen.mockResolvedValueOnce(undefined)
thenResults.push(undefined)
const req = createMockRequest('POST', {
checkpointId: 'other-user-checkpoint',
const req = new NextRequest('http://localhost:3000/api/copilot/checkpoints/revert', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ checkpointId: 'other-user-checkpoint' }),
})
const { POST } = await import('@/app/api/copilot/checkpoints/revert/route')
const response = await POST(req)
expect(response.status).toBe(404)
@@ -171,10 +223,8 @@ describe('Copilot Checkpoints Revert API Route', () => {
})
it('should return 404 when workflow is not found', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
setAuthenticated()
// Mock checkpoint found but workflow not found
const mockCheckpoint = {
id: 'checkpoint-123',
workflowId: 'a1b2c3d4-e5f6-4a78-b9c0-d1e2f3a4b5c6',
@@ -182,15 +232,15 @@ describe('Copilot Checkpoints Revert API Route', () => {
workflowState: { blocks: {}, edges: [] },
}
mockThen
.mockResolvedValueOnce(mockCheckpoint) // Checkpoint found
.mockResolvedValueOnce(undefined) // Workflow not found
thenResults.push(mockCheckpoint) // Checkpoint found
thenResults.push(undefined) // Workflow not found
const req = createMockRequest('POST', {
checkpointId: 'checkpoint-123',
const req = new NextRequest('http://localhost:3000/api/copilot/checkpoints/revert', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ checkpointId: 'checkpoint-123' }),
})
const { POST } = await import('@/app/api/copilot/checkpoints/revert/route')
const response = await POST(req)
expect(response.status).toBe(404)
@@ -199,10 +249,8 @@ describe('Copilot Checkpoints Revert API Route', () => {
})
it('should return 401 when workflow belongs to different user', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
setAuthenticated()
// Mock checkpoint found but workflow belongs to different user
const mockCheckpoint = {
id: 'checkpoint-123',
workflowId: 'b2c3d4e5-f6a7-4b89-a0d1-e2f3a4b5c6d7',
@@ -215,21 +263,20 @@ describe('Copilot Checkpoints Revert API Route', () => {
userId: 'different-user',
}
mockThen
.mockResolvedValueOnce(mockCheckpoint) // Checkpoint found
.mockResolvedValueOnce(mockWorkflow) // Workflow found but different user
thenResults.push(mockCheckpoint) // Checkpoint found
thenResults.push(mockWorkflow) // Workflow found but different user
const { authorizeWorkflowByWorkspacePermission } = await import('@/lib/workflows/utils')
vi.mocked(authorizeWorkflowByWorkspacePermission).mockResolvedValueOnce({
mockAuthorize.mockResolvedValueOnce({
allowed: false,
status: 403,
})
const req = createMockRequest('POST', {
checkpointId: 'checkpoint-123',
const req = new NextRequest('http://localhost:3000/api/copilot/checkpoints/revert', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ checkpointId: 'checkpoint-123' }),
})
const { POST } = await import('@/app/api/copilot/checkpoints/revert/route')
const response = await POST(req)
expect(response.status).toBe(401)
@@ -238,8 +285,7 @@ describe('Copilot Checkpoints Revert API Route', () => {
})
it('should successfully revert checkpoint with basic workflow state', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
setAuthenticated()
const mockCheckpoint = {
id: 'checkpoint-123',
@@ -260,11 +306,8 @@ describe('Copilot Checkpoints Revert API Route', () => {
userId: 'user-123',
}
mockThen
.mockResolvedValueOnce(mockCheckpoint) // Checkpoint found
.mockResolvedValueOnce(mockWorkflow) // Workflow found
// Mock successful state API call
thenResults.push(mockCheckpoint) // Checkpoint found
thenResults.push(mockWorkflow) // Workflow found
;(global.fetch as any).mockResolvedValue({
ok: true,
@@ -282,7 +325,6 @@ describe('Copilot Checkpoints Revert API Route', () => {
}),
})
const { POST } = await import('@/app/api/copilot/checkpoints/revert/route')
const response = await POST(req)
expect(response.status).toBe(200)
@@ -329,8 +371,7 @@ describe('Copilot Checkpoints Revert API Route', () => {
})
it('should handle checkpoint state with valid deployedAt date', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
setAuthenticated()
const mockCheckpoint = {
id: 'checkpoint-with-date',
@@ -349,18 +390,20 @@ describe('Copilot Checkpoints Revert API Route', () => {
userId: 'user-123',
}
mockThen.mockResolvedValueOnce(mockCheckpoint).mockResolvedValueOnce(mockWorkflow)
thenResults.push(mockCheckpoint)
thenResults.push(mockWorkflow)
;(global.fetch as any).mockResolvedValue({
ok: true,
json: () => Promise.resolve({ success: true }),
})
const req = createMockRequest('POST', {
checkpointId: 'checkpoint-with-date',
const req = new NextRequest('http://localhost:3000/api/copilot/checkpoints/revert', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ checkpointId: 'checkpoint-with-date' }),
})
const { POST } = await import('@/app/api/copilot/checkpoints/revert/route')
const response = await POST(req)
expect(response.status).toBe(200)
@@ -370,8 +413,7 @@ describe('Copilot Checkpoints Revert API Route', () => {
})
it('should handle checkpoint state with invalid deployedAt date', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
setAuthenticated()
const mockCheckpoint = {
id: 'checkpoint-invalid-date',
@@ -390,18 +432,20 @@ describe('Copilot Checkpoints Revert API Route', () => {
userId: 'user-123',
}
mockThen.mockResolvedValueOnce(mockCheckpoint).mockResolvedValueOnce(mockWorkflow)
thenResults.push(mockCheckpoint)
thenResults.push(mockWorkflow)
;(global.fetch as any).mockResolvedValue({
ok: true,
json: () => Promise.resolve({ success: true }),
})
const req = createMockRequest('POST', {
checkpointId: 'checkpoint-invalid-date',
const req = new NextRequest('http://localhost:3000/api/copilot/checkpoints/revert', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ checkpointId: 'checkpoint-invalid-date' }),
})
const { POST } = await import('@/app/api/copilot/checkpoints/revert/route')
const response = await POST(req)
expect(response.status).toBe(200)
@@ -411,8 +455,7 @@ describe('Copilot Checkpoints Revert API Route', () => {
})
it('should handle checkpoint state with null/undefined values', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
setAuthenticated()
const mockCheckpoint = {
id: 'checkpoint-null-values',
@@ -432,18 +475,20 @@ describe('Copilot Checkpoints Revert API Route', () => {
userId: 'user-123',
}
mockThen.mockResolvedValueOnce(mockCheckpoint).mockResolvedValueOnce(mockWorkflow)
thenResults.push(mockCheckpoint)
thenResults.push(mockWorkflow)
;(global.fetch as any).mockResolvedValue({
ok: true,
json: () => Promise.resolve({ success: true }),
})
const req = createMockRequest('POST', {
checkpointId: 'checkpoint-null-values',
const req = new NextRequest('http://localhost:3000/api/copilot/checkpoints/revert', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ checkpointId: 'checkpoint-null-values' }),
})
const { POST } = await import('@/app/api/copilot/checkpoints/revert/route')
const response = await POST(req)
expect(response.status).toBe(200)
@@ -462,8 +507,7 @@ describe('Copilot Checkpoints Revert API Route', () => {
})
it('should return 500 when state API call fails', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
setAuthenticated()
const mockCheckpoint = {
id: 'checkpoint-123',
@@ -477,22 +521,20 @@ describe('Copilot Checkpoints Revert API Route', () => {
userId: 'user-123',
}
mockThen
.mockResolvedValueOnce(mockCheckpoint)
.mockResolvedValueOnce(mockWorkflow)
// Mock failed state API call
thenResults.push(mockCheckpoint)
thenResults.push(mockWorkflow)
;(global.fetch as any).mockResolvedValue({
ok: false,
text: () => Promise.resolve('State validation failed'),
})
const req = createMockRequest('POST', {
checkpointId: 'checkpoint-123',
const req = new NextRequest('http://localhost:3000/api/copilot/checkpoints/revert', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ checkpointId: 'checkpoint-123' }),
})
const { POST } = await import('@/app/api/copilot/checkpoints/revert/route')
const response = await POST(req)
expect(response.status).toBe(500)
@@ -501,17 +543,17 @@ describe('Copilot Checkpoints Revert API Route', () => {
})
it('should handle database errors during checkpoint lookup', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
setAuthenticated()
// Mock database error
mockThen.mockRejectedValueOnce(new Error('Database connection failed'))
thenResults.push(new Error('Database connection failed'))
const req = createMockRequest('POST', {
checkpointId: 'checkpoint-123',
const req = new NextRequest('http://localhost:3000/api/copilot/checkpoints/revert', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ checkpointId: 'checkpoint-123' }),
})
const { POST } = await import('@/app/api/copilot/checkpoints/revert/route')
const response = await POST(req)
expect(response.status).toBe(500)
@@ -520,8 +562,7 @@ describe('Copilot Checkpoints Revert API Route', () => {
})
it('should handle database errors during workflow lookup', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
setAuthenticated()
const mockCheckpoint = {
id: 'checkpoint-123',
@@ -530,15 +571,15 @@ describe('Copilot Checkpoints Revert API Route', () => {
workflowState: { blocks: {}, edges: [] },
}
mockThen
.mockResolvedValueOnce(mockCheckpoint) // Checkpoint found
.mockRejectedValueOnce(new Error('Database error during workflow lookup')) // Workflow lookup fails
thenResults.push(mockCheckpoint) // Checkpoint found
thenResults.push(new Error('Database error during workflow lookup')) // Workflow lookup fails
const req = createMockRequest('POST', {
checkpointId: 'checkpoint-123',
const req = new NextRequest('http://localhost:3000/api/copilot/checkpoints/revert', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ checkpointId: 'checkpoint-123' }),
})
const { POST } = await import('@/app/api/copilot/checkpoints/revert/route')
const response = await POST(req)
expect(response.status).toBe(500)
@@ -547,8 +588,7 @@ describe('Copilot Checkpoints Revert API Route', () => {
})
it('should handle fetch network errors', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
setAuthenticated()
const mockCheckpoint = {
id: 'checkpoint-123',
@@ -562,19 +602,17 @@ describe('Copilot Checkpoints Revert API Route', () => {
userId: 'user-123',
}
mockThen
.mockResolvedValueOnce(mockCheckpoint)
.mockResolvedValueOnce(mockWorkflow)
// Mock fetch network error
thenResults.push(mockCheckpoint)
thenResults.push(mockWorkflow)
;(global.fetch as any).mockRejectedValue(new Error('Network error'))
const req = createMockRequest('POST', {
checkpointId: 'checkpoint-123',
const req = new NextRequest('http://localhost:3000/api/copilot/checkpoints/revert', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ checkpointId: 'checkpoint-123' }),
})
const { POST } = await import('@/app/api/copilot/checkpoints/revert/route')
const response = await POST(req)
expect(response.status).toBe(500)
@@ -583,10 +621,8 @@ describe('Copilot Checkpoints Revert API Route', () => {
})
it('should handle JSON parsing errors in request body', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
setAuthenticated()
// Create a request with invalid JSON
const req = new NextRequest('http://localhost:3000/api/copilot/checkpoints/revert', {
method: 'POST',
body: '{invalid-json',
@@ -595,7 +631,6 @@ describe('Copilot Checkpoints Revert API Route', () => {
},
})
const { POST } = await import('@/app/api/copilot/checkpoints/revert/route')
const response = await POST(req)
expect(response.status).toBe(500)
@@ -604,8 +639,7 @@ describe('Copilot Checkpoints Revert API Route', () => {
})
it('should forward cookies to state API call', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
setAuthenticated()
const mockCheckpoint = {
id: 'checkpoint-123',
@@ -619,7 +653,8 @@ describe('Copilot Checkpoints Revert API Route', () => {
userId: 'user-123',
}
mockThen.mockResolvedValueOnce(mockCheckpoint).mockResolvedValueOnce(mockWorkflow)
thenResults.push(mockCheckpoint)
thenResults.push(mockWorkflow)
;(global.fetch as any).mockResolvedValue({
ok: true,
@@ -637,7 +672,6 @@ describe('Copilot Checkpoints Revert API Route', () => {
}),
})
const { POST } = await import('@/app/api/copilot/checkpoints/revert/route')
await POST(req)
expect(global.fetch).toHaveBeenCalledWith(
@@ -654,8 +688,7 @@ describe('Copilot Checkpoints Revert API Route', () => {
})
it('should handle missing cookies gracefully', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
setAuthenticated()
const mockCheckpoint = {
id: 'checkpoint-123',
@@ -669,7 +702,8 @@ describe('Copilot Checkpoints Revert API Route', () => {
userId: 'user-123',
}
mockThen.mockResolvedValueOnce(mockCheckpoint).mockResolvedValueOnce(mockWorkflow)
thenResults.push(mockCheckpoint)
thenResults.push(mockWorkflow)
;(global.fetch as any).mockResolvedValue({
ok: true,
@@ -687,7 +721,6 @@ describe('Copilot Checkpoints Revert API Route', () => {
}),
})
const { POST } = await import('@/app/api/copilot/checkpoints/revert/route')
const response = await POST(req)
expect(response.status).toBe(200)
@@ -705,8 +738,7 @@ describe('Copilot Checkpoints Revert API Route', () => {
})
it('should handle complex checkpoint state with all fields', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
setAuthenticated()
const mockCheckpoint = {
id: 'checkpoint-complex',
@@ -742,18 +774,20 @@ describe('Copilot Checkpoints Revert API Route', () => {
userId: 'user-123',
}
mockThen.mockResolvedValueOnce(mockCheckpoint).mockResolvedValueOnce(mockWorkflow)
thenResults.push(mockCheckpoint)
thenResults.push(mockWorkflow)
;(global.fetch as any).mockResolvedValue({
ok: true,
json: () => Promise.resolve({ success: true }),
})
const req = createMockRequest('POST', {
checkpointId: 'checkpoint-complex',
const req = new NextRequest('http://localhost:3000/api/copilot/checkpoints/revert', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ checkpointId: 'checkpoint-complex' }),
})
const { POST } = await import('@/app/api/copilot/checkpoints/revert/route')
const response = await POST(req)
expect(response.status).toBe(200)
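The hunks above repeatedly swap `mockThen.mockResolvedValueOnce(...)` chains for pushes onto a shared `thenResults` queue. A minimal sketch of how such a queue-backed thenable mock could work (names `thenResults` and `makeThenable` are illustrative, not taken from the PR): queued `Error` instances reject, anything else resolves, mirroring how the tests push either a checkpoint object or an `Error`.

```typescript
type Result = unknown

const thenResults: Result[] = []

// A thenable that consumes the next queued result. `await` invokes `then`,
// so each awaited query pops one entry: Errors reject, values resolve.
function makeThenable() {
  return {
    then(resolve: (v: Result) => void, reject: (e: Error) => void) {
      const next = thenResults.shift()
      if (next instanceof Error) reject(next)
      else resolve(next)
    },
  }
}

async function run() {
  thenResults.push({ id: 'checkpoint-123' }) // first query succeeds
  thenResults.push(new Error('Database connection failed')) // second fails

  const ok = (await makeThenable()) as { id: string }
  let err: string | undefined
  try {
    await makeThenable()
  } catch (e) {
    err = (e as Error).message
  }
  return { ok, err }
}
```

Pushing results in test-body order, instead of chaining `mockResolvedValueOnce`, keeps each test's setup linear and lets a single shared mock serve both success and failure cases.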


@@ -3,22 +3,45 @@
*
* @vitest-environment node
*/
import { createMockRequest, mockAuth, mockCryptoUuid, setupCommonApiMocks } from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
describe('Copilot Checkpoints API Route', () => {
const mockSelect = vi.fn()
const mockFrom = vi.fn()
const mockWhere = vi.fn()
const mockLimit = vi.fn()
const mockOrderBy = vi.fn()
const mockInsert = vi.fn()
const mockValues = vi.fn()
const mockReturning = vi.fn()
const {
mockSelect,
mockFrom,
mockWhere,
mockLimit,
mockOrderBy,
mockInsert,
mockValues,
mockReturning,
mockGetSession,
} = vi.hoisted(() => ({
mockSelect: vi.fn(),
mockFrom: vi.fn(),
mockWhere: vi.fn(),
mockLimit: vi.fn(),
mockOrderBy: vi.fn(),
mockInsert: vi.fn(),
mockValues: vi.fn(),
mockReturning: vi.fn(),
mockGetSession: vi.fn(),
}))
const mockCopilotChats = { id: 'id', userId: 'userId' }
const mockWorkflowCheckpoints = {
vi.mock('@/lib/auth', () => ({
getSession: mockGetSession,
}))
vi.mock('@sim/db', () => ({
db: {
select: mockSelect,
insert: mockInsert,
},
}))
vi.mock('@sim/db/schema', () => ({
copilotChats: { id: 'id', userId: 'userId' },
workflowCheckpoints: {
id: 'id',
userId: 'userId',
workflowId: 'workflowId',
@@ -26,12 +49,30 @@ describe('Copilot Checkpoints API Route', () => {
messageId: 'messageId',
createdAt: 'createdAt',
updatedAt: 'updatedAt',
}
},
}))
vi.mock('drizzle-orm', () => ({
and: vi.fn((...conditions: unknown[]) => ({ conditions, type: 'and' })),
eq: vi.fn((field: unknown, value: unknown) => ({ field, value, type: 'eq' })),
desc: vi.fn((field: unknown) => ({ field, type: 'desc' })),
}))
import { GET, POST } from '@/app/api/copilot/checkpoints/route'
function createMockRequest(method: string, body: Record<string, unknown>): NextRequest {
return new NextRequest('http://localhost:3000/api/copilot/checkpoints', {
method,
body: JSON.stringify(body),
headers: { 'Content-Type': 'application/json' },
})
}
describe('Copilot Checkpoints API Route', () => {
beforeEach(() => {
vi.resetModules()
setupCommonApiMocks()
mockCryptoUuid()
vi.clearAllMocks()
mockGetSession.mockResolvedValue(null)
mockSelect.mockReturnValue({ from: mockFrom })
mockFrom.mockReturnValue({ where: mockWhere })
@@ -43,35 +84,15 @@ describe('Copilot Checkpoints API Route', () => {
mockLimit.mockResolvedValue([])
mockInsert.mockReturnValue({ values: mockValues })
mockValues.mockReturnValue({ returning: mockReturning })
vi.doMock('@sim/db', () => ({
db: {
select: mockSelect,
insert: mockInsert,
},
}))
vi.doMock('@sim/db/schema', () => ({
copilotChats: mockCopilotChats,
workflowCheckpoints: mockWorkflowCheckpoints,
}))
vi.doMock('drizzle-orm', () => ({
and: vi.fn((...conditions) => ({ conditions, type: 'and' })),
eq: vi.fn((field, value) => ({ field, value, type: 'eq' })),
desc: vi.fn((field) => ({ field, type: 'desc' })),
}))
})
afterEach(() => {
vi.clearAllMocks()
vi.restoreAllMocks()
})
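The `beforeEach` wiring above (`mockSelect.mockReturnValue({ from: mockFrom })`, and so on) stubs a drizzle-style fluent chain. A self-contained stand-in for the shape being emulated (the builder and its method names here are illustrative, not the real drizzle API):

```typescript
type Row = Record<string, unknown>

// Tiny fake of a db.select().from().where().orderBy()/limit() pipeline,
// where each step returns the chain and the terminal step resolves to rows.
function makeDb(rows: Row[]) {
  const chain = {
    from: (_table: unknown) => chain,
    where: (_cond: unknown) => chain,
    orderBy: async (_order: unknown) => rows,
    limit: async (_n: number) => rows.slice(0, 1),
  }
  return { select: () => chain }
}

async function demo() {
  const db = makeDb([{ id: 'checkpoint-123' }, { id: 'checkpoint-456' }])
  const all = await db.select().from('workflowCheckpoints').where({}).orderBy('createdAt')
  return all.length
}
```

Because each intermediate mock just returns the next link, individual tests only need to override the terminal mock (`mockOrderBy`, `mockLimit`, or `mockReturning`) to control what a query yields.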
describe('POST', () => {
it('should return 401 when user is not authenticated', async () => {
const authMocks = mockAuth()
authMocks.setUnauthenticated()
mockGetSession.mockResolvedValue(null)
const req = createMockRequest('POST', {
workflowId: 'workflow-123',
@@ -79,7 +100,6 @@ describe('Copilot Checkpoints API Route', () => {
workflowState: '{"blocks": []}',
})
const { POST } = await import('@/app/api/copilot/checkpoints/route')
const response = await POST(req)
expect(response.status).toBe(401)
@@ -88,16 +108,12 @@ describe('Copilot Checkpoints API Route', () => {
})
it('should return 500 for invalid request body', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const req = createMockRequest('POST', {
// Missing required fields
workflowId: 'workflow-123',
// Missing chatId and workflowState
})
const { POST } = await import('@/app/api/copilot/checkpoints/route')
const response = await POST(req)
expect(response.status).toBe(500)
@@ -106,10 +122,8 @@ describe('Copilot Checkpoints API Route', () => {
})
it('should return 400 when chat not found or unauthorized', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
// Mock chat not found
mockLimit.mockResolvedValue([])
const req = createMockRequest('POST', {
@@ -118,7 +132,6 @@ describe('Copilot Checkpoints API Route', () => {
workflowState: '{"blocks": []}',
})
const { POST } = await import('@/app/api/copilot/checkpoints/route')
const response = await POST(req)
expect(response.status).toBe(400)
@@ -127,10 +140,8 @@ describe('Copilot Checkpoints API Route', () => {
})
it('should return 400 for invalid workflow state JSON', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
// Mock chat exists
const chat = {
id: 'chat-123',
userId: 'user-123',
@@ -140,10 +151,9 @@ describe('Copilot Checkpoints API Route', () => {
const req = createMockRequest('POST', {
workflowId: 'workflow-123',
chatId: 'chat-123',
workflowState: 'invalid-json', // Invalid JSON
workflowState: 'invalid-json',
})
const { POST } = await import('@/app/api/copilot/checkpoints/route')
const response = await POST(req)
expect(response.status).toBe(400)
@@ -152,17 +162,14 @@ describe('Copilot Checkpoints API Route', () => {
})
it('should successfully create a checkpoint', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
// Mock chat exists
const chat = {
id: 'chat-123',
userId: 'user-123',
}
mockLimit.mockResolvedValue([chat])
// Mock successful checkpoint creation
const checkpoint = {
id: 'checkpoint-123',
userId: 'user-123',
@@ -182,7 +189,6 @@ describe('Copilot Checkpoints API Route', () => {
workflowState: JSON.stringify(workflowState),
})
const { POST } = await import('@/app/api/copilot/checkpoints/route')
const response = await POST(req)
expect(response.status).toBe(200)
@@ -200,29 +206,25 @@ describe('Copilot Checkpoints API Route', () => {
},
})
// Verify database operations
expect(mockInsert).toHaveBeenCalled()
expect(mockValues).toHaveBeenCalledWith({
userId: 'user-123',
workflowId: 'workflow-123',
chatId: 'chat-123',
messageId: 'message-123',
workflowState: workflowState, // Should be parsed JSON object
workflowState: workflowState,
})
})
it('should create checkpoint without messageId', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
// Mock chat exists
const chat = {
id: 'chat-123',
userId: 'user-123',
}
mockLimit.mockResolvedValue([chat])
// Mock successful checkpoint creation
const checkpoint = {
id: 'checkpoint-123',
userId: 'user-123',
@@ -238,11 +240,9 @@ describe('Copilot Checkpoints API Route', () => {
const req = createMockRequest('POST', {
workflowId: 'workflow-123',
chatId: 'chat-123',
// No messageId provided
workflowState: JSON.stringify(workflowState),
})
const { POST } = await import('@/app/api/copilot/checkpoints/route')
const response = await POST(req)
expect(response.status).toBe(200)
@@ -252,17 +252,14 @@ describe('Copilot Checkpoints API Route', () => {
})
it('should handle database errors during checkpoint creation', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
// Mock chat exists
const chat = {
id: 'chat-123',
userId: 'user-123',
}
mockLimit.mockResolvedValue([chat])
// Mock database error
mockReturning.mockRejectedValue(new Error('Database insert failed'))
const req = createMockRequest('POST', {
@@ -271,7 +268,6 @@ describe('Copilot Checkpoints API Route', () => {
workflowState: '{"blocks": []}',
})
const { POST } = await import('@/app/api/copilot/checkpoints/route')
const response = await POST(req)
expect(response.status).toBe(500)
@@ -280,10 +276,8 @@ describe('Copilot Checkpoints API Route', () => {
})
it('should handle database errors during chat lookup', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
// Mock database error during chat lookup
mockLimit.mockRejectedValue(new Error('Database query failed'))
const req = createMockRequest('POST', {
@@ -292,7 +286,6 @@ describe('Copilot Checkpoints API Route', () => {
workflowState: '{"blocks": []}',
})
const { POST } = await import('@/app/api/copilot/checkpoints/route')
const response = await POST(req)
expect(response.status).toBe(500)
@@ -303,12 +296,10 @@ describe('Copilot Checkpoints API Route', () => {
describe('GET', () => {
it('should return 401 when user is not authenticated', async () => {
const authMocks = mockAuth()
authMocks.setUnauthenticated()
mockGetSession.mockResolvedValue(null)
const req = new NextRequest('http://localhost:3000/api/copilot/checkpoints?chatId=chat-123')
const { GET } = await import('@/app/api/copilot/checkpoints/route')
const response = await GET(req)
expect(response.status).toBe(401)
@@ -317,12 +308,10 @@ describe('Copilot Checkpoints API Route', () => {
})
it('should return 400 when chatId is missing', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const req = new NextRequest('http://localhost:3000/api/copilot/checkpoints')
const { GET } = await import('@/app/api/copilot/checkpoints/route')
const response = await GET(req)
expect(response.status).toBe(400)
@@ -331,8 +320,7 @@ describe('Copilot Checkpoints API Route', () => {
})
it('should return checkpoints for authenticated user and chat', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
const mockCheckpoints = [
{
@@ -359,7 +347,6 @@ describe('Copilot Checkpoints API Route', () => {
const req = new NextRequest('http://localhost:3000/api/copilot/checkpoints?chatId=chat-123')
const { GET } = await import('@/app/api/copilot/checkpoints/route')
const response = await GET(req)
expect(response.status).toBe(200)
@@ -388,22 +375,18 @@ describe('Copilot Checkpoints API Route', () => {
],
})
// Verify database query was made correctly
expect(mockSelect).toHaveBeenCalled()
expect(mockWhere).toHaveBeenCalled()
expect(mockOrderBy).toHaveBeenCalled()
})
it('should handle database errors when fetching checkpoints', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
// Mock database error
mockOrderBy.mockRejectedValue(new Error('Database query failed'))
const req = new NextRequest('http://localhost:3000/api/copilot/checkpoints?chatId=chat-123')
const { GET } = await import('@/app/api/copilot/checkpoints/route')
const response = await GET(req)
expect(response.status).toBe(500)
@@ -412,14 +395,12 @@ describe('Copilot Checkpoints API Route', () => {
})
it('should return empty array when no checkpoints found', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockGetSession.mockResolvedValue({ user: { id: 'user-123' } })
mockOrderBy.mockResolvedValue([])
const req = new NextRequest('http://localhost:3000/api/copilot/checkpoints?chatId=chat-123')
const { GET } = await import('@/app/api/copilot/checkpoints/route')
const response = await GET(req)
expect(response.status).toBe(200)
