Compare commits

...

30 Commits

Author SHA1 Message Date
Waleed
e07e3c34cc v0.5.103: memory util instrumentation, API docs, amplitude, google pagespeed insights, pagerduty 2026-03-01 23:27:02 -08:00
Waleed
79bb4e5ad8 feat(docs): add API reference with OpenAPI spec and auto-generated endpoint pages (#3388)
* feat(docs): add API reference with OpenAPI spec and auto-generated endpoint pages

* multiline curl

* random improvements

* cleanup

* update docs copy

* fix build

* cast

* fix build

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Lakee Sivaraya <71339072+lakeesiv@users.noreply.github.com>
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
2026-03-01 22:53:18 -08:00
Waleed
ee20e119de feat(integrations): add amplitude, google pagespeed insights, and pagerduty integrations (#3385)
* feat(integrations): add amplitude and google pagespeed insights integrations

* verified and regen docs

* fix icons

* fix(integrations): add pagerduty to tool and block registries

Re-add registry entries that were reverted after initial commit.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* more updates

* ack comments

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 18:56:34 -08:00
Waleed
3788660366 fix(monitoring): set MemoryTelemetry logger to INFO level for production visibility (#3386)
Production defaults to ERROR-only logging. Without this override,
memory snapshots would be silently suppressed.
2026-02-28 13:58:21 -08:00
Waleed
0d2e6ff31d v0.5.102: new integrations, new tools, ci speedups, memory leak instrumentation 2026-02-28 12:48:10 -08:00
Waleed
9be75e3633 improvement(loops): validate loops integration and update skill files (#3384)
* improvement(loops): validate loops integration and update skill files

* loops icon color

* update databricks icon
2026-02-28 12:37:01 -08:00
Waleed
40bab7731a fix(chat-deploy): fix launch chat popup and auth persistence, clean up React anti-patterns (#3380)
* fix(chat-deploy): fix launch chat popup and auth persistence, clean up React anti-patterns

* lint

* fix(greenhouse): fix email_address query param, add .trim() to ID paths, revert onValidationChange to useEffect

* fix(chat-deploy): fix stale AuthSelector state, stabilize refetch ref, clean up copy timeout

* fix(chat-deploy): reset chatSuccess on modal open to prevent stuck state
2026-02-28 12:01:42 -08:00
Waleed
96096e0ad1 improvement(resend): add error handling, authMode, and naming consistency (#3382) 2026-02-28 11:19:42 -08:00
Waleed
647a3eb05b improvement(luma): expand host response fields and harden event ID inputs (#3383) 2026-02-28 11:19:24 -08:00
Waleed
0195a4cd18 improvement(ashby): validate ashby integration and update skill files (#3381) 2026-02-28 11:16:40 -08:00
Waleed
b42f80e8ab fix(sse): fix memory leaks in SSE stream cleanup and add memory telemetry (#3378)
* fix(sse): fix memory leaks in SSE stream cleanup and add memory telemetry

* improvement(monitoring): add SSE metering to wand, execution-stream, and a2a-message endpoints

* fix(workflow-execute): remove abort from cancel() to preserve run-on-leave behavior

* improvement(monitoring): use stable process.getActiveResourcesInfo() API

* refactor(a2a): hoist resubscribe cleanup to eliminate duplication between start() and cancel()

* style(a2a): format import line

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(wand): set guard flag on early-return decrement for consistency

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-28 10:37:07 -08:00
Waleed
38ac86c4fd improvement(mcp): add all MCP server tools individually instead of as single server entry (#3376)
* improvement(mcp): add all MCP server tools individually instead of as single server entry

* fix(mcp): prevent remove popover from opening inadvertently
2026-02-27 14:02:11 -08:00
Waleed
4cfe8be75a feat(google-contacts): add google contacts integration (#3340)
* feat(google-contacts): add google contacts integration

* fix(google-contacts): throw error when no update fields provided

* lint

* update icon

* improvement(google-contacts): add advanced mode, error handling, and input trimming

- Set mode: 'advanced' on optional fields (emailType, phoneType, notes, pageSize, pageToken, sortOrder)
- Add createLogger and response.ok error handling to all 6 tools
- Add .trim() on resourceName in get, update, delete URL builders
2026-02-27 10:55:51 -08:00
Vikhyath Mondreti
49db3ca50b improvement(selectors): consolidate selector input logic (#3375) 2026-02-27 10:18:25 -08:00
Vikhyath Mondreti
e3ff595a84 improvement(selectors): make selectorKeys declarative (#3374)
* fix(webflow): resolution for selectors

* remove unnecessary fallback

* fix teams selector resolution

* make selector keys declarative

* selectors fixes
2026-02-27 07:56:35 -08:00
Waleed
b3424e2047 improvement(ci): add sticky disk caches and bump runner for faster builds (#3373) 2026-02-27 00:12:36 -08:00
Waleed
71ecf6c82e improvement(x): align OAuth scopes, add scope descriptions, and set optional fields to advanced mode (#3372)
* improvement(x): align OAuth scopes, add scope descriptions, and set optional fields to advanced mode

* improvement(skills): add typed JSON outputs guidance to add-tools, add-block, and add-integration skills

* improvement(skills): add final validation steps to add-tools, add-block, and add-integration skills

* fix(skills): correct misleading JSON array comment in wandConfig example

* feat(skills): add validate-integration skill for auditing tools, blocks, and registry against API docs

* improvement(skills): expand validate-integration with full block-tool alignment, OAuth scopes, pagination, and error handling checks
2026-02-26 23:30:24 -08:00
Waleed
e9e5ba2c5b improvement(docs): audit and standardize tool description sections, update developer count to 70k (#3371) 2026-02-26 23:02:58 -08:00
Waleed
9233d4ebc9 feat(x): add 28 new X API v2 tool integrations and expand OAuth scopes (#3365)
* feat(x): add 28 new X API v2 tool integrations and expand OAuth scopes

* fix(x): add missing nextToken param to search tweets and fix XCreateTweetParams type

* fix(x): correct API spec issues in retweeted_by, quote_tweets, personalized_trends, and usage tools

* fix(x): add missing newestId and oldestId to error meta in get_liked_tweets and get_quote_tweets

* fix(x): add missing newestId/oldestId to get_liked_tweets success branch and includes to XTweetListResponse

* fix(x): add error handling to create_tweet and delete_tweet transformResponse

* fix(x): add error handling and logger to all X tools

* fix(x): revert block requiredScopes to match current operations

* feat(x): update block to support all 28 new X API v2 tools

* fix(x): add missing text output and fix hiddenResult output key mismatch

* docs(x): regenerate docs for all 28 new X API v2 tools
2026-02-26 22:40:57 -08:00
Waleed
78901ef517 improvement(blocks): update luma styling and linkup field modes (#3370)
* improvement(blocks): update luma styling and linkup field modes

* improvement(fireflies): move optional fields to advanced mode

* improvement(blocks): move optional fields to advanced mode for 10 integrations

* improvement(blocks): move optional fields to advanced mode for 6 more integrations
2026-02-26 22:27:58 -08:00
Waleed
47fef540cc feat(resend): expand integration with contacts, domains, and enhanced email ops (#3366) 2026-02-26 22:12:48 -08:00
Waleed
f193e9ebbc feat(loops): add Loops email platform integration (#3359)
* feat(loops): add Loops email platform integration

Add complete Loops integration with 10 tools covering all API endpoints:
- Contact management: create, update, find, delete
- Email: send transactional emails with attachments
- Events: trigger automated email sequences
- Lists: list mailing lists and transactional email templates
- Properties: create and list contact properties

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* ran lint

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 22:09:02 -08:00
Waleed
c0f22d7722 improvement(oauth): reordered oauth modal (#3368) 2026-02-26 19:43:59 -08:00
Waleed
bf0e25c9d0 feat(ashby): add ashby integration for candidate, job, and application management (#3362)
* feat(ashby): add ashby integration for candidate, job, and application management

* fix(ashby): auto-fix lint formatting in docs files
2026-02-26 19:10:06 -08:00
Waleed
d4f8ac8107 feat(greenhouse): add greenhouse integration for managing candidates, jobs, and applications (#3363) 2026-02-26 19:09:03 -08:00
Waleed
63fa938dd7 feat(gamma): add gamma integration for AI-powered content generation (#3358)
* feat(gamma): add gamma integration for AI-powered content generation

* fix(gamma): address PR review comments

- Make credits/error conditionally included in check_status response to avoid always-truthy objects
- Replace full wordmark SVG with square "G" letterform for proper rendering in icon slots

* fix(gamma): remove imageSource from generate_from_template endpoint

The from-template API only accepts imageOptions.model and imageOptions.style,
not imageOptions.source (image source is inherited from the template).

* fix(gamma): use typed output in check_status transformResponse

* regen docs
2026-02-26 19:08:46 -08:00
Waleed
50b882a3ad feat(luma): add Luma integration for event and guest management (#3364)
* feat(luma): add Luma integration for event and guest management

Add complete Luma (lu.ma) integration with 6 tools: get event, create event,
update event, list calendar events, get guests, and add guests. Includes block
configuration with wandConfig for timestamps/timezones/durations, advanced mode
for optional fields, and generated documentation.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(luma): address PR review feedback

- Remove hosts field from list_events transformResponse (not in LumaEventEntry type)
- Fix truncated add_guests description by removing quotes that broke docs generator

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(luma): fix update_event field name and add_guests response parsing

- Use 'id' instead of 'event_id' in update_event request body per API spec
- Fix add_guests to parse entries[].guest response structure instead of flat guests array

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 19:08:20 -08:00
Waleed
c8a0b62a9c feat(databricks): add Databricks integration with 8 tools (#3361)
* feat(databricks): add Databricks integration with 8 tools

Add complete Databricks integration supporting SQL execution, job management,
run monitoring, and cluster listing via Personal Access Token authentication.

Tools: execute_sql, list_jobs, run_job, get_run, list_runs, cancel_run,
get_run_output, list_clusters

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(databricks): throw on invalid JSON params, fix boolean coercion, add expandTasks field

- Throw errors on invalid JSON in jobParameters/notebookParams instead of silently defaulting to {}
- Always set boolean params explicitly to prevent string 'false' being truthy
- Add missing expandTasks dropdown UI field for list_jobs operation

* fix(databricks): align tool inputs/outputs with official API spec

- execute_sql: fix wait_timeout default description (50s, not 10s)
- get_run: add queueDuration field, update lifecycle/result state enums
- get_run_output: fix notebook output size (5 MB not 1 MB), add logsTruncated field
- list_runs: add userCancelledOrTimedout to state, fix limit range (1-24), update state enums
- list_jobs: fix name filter description to "exact case-insensitive"
- list_clusters: add PIPELINE_MAINTENANCE to ClusterSource enum

* fix(databricks): regenerate docs to reflect API spec fixes

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 19:05:47 -08:00
Waleed
4ccb57371b improvement(tests): speed up unit tests by eliminating vi.resetModules anti-pattern (#3357)
* improvement(tests): speed up unit tests by eliminating vi.resetModules anti-pattern

- convert 51 test files from vi.resetModules/vi.doMock/dynamic import to vi.hoisted/vi.mock/static import
- add global @sim/db mock to vitest.setup.ts
- switch 4 test files from jsdom to node environment
- remove all vi.importActual calls that loaded heavy modules (200+ block files)
- remove slow mockConsoleLogger/mockAuth/setupCommonApiMocks helpers
- reduce real setTimeout delays in engine tests
- mock heavy transitive deps in diff-engine test

test execution time: 34s -> 9s (3.9x faster)
environment time: 2.5s -> 0.6s (4x faster)

* docs(testing): update testing best practices with performance rules

- document vi.hoisted + vi.mock + static import as the standard pattern
- explicitly ban vi.resetModules, vi.doMock, vi.importActual, mockAuth, setupCommonApiMocks
- document global mocks from vitest.setup.ts
- add mock pattern reference for auth, hybrid auth, and database chains
- add performance rules section covering heavy deps, jsdom vs node, real timers

* fix(tests): fix 4 failing test files with missing mocks

- socket/middleware/permissions: add vi.mock for @/lib/auth to prevent transitive getBaseUrl() call
- workflow-handler: add vi.mock for @/executor/utils/http matching executor mock pattern
- evaluator-handler: add db.query.account mock structure before vi.spyOn
- router-handler: same db.query.account fix as evaluator

* fix(tests): replace banned Function type with explicit callback signature
2026-02-26 15:46:49 -08:00
Waleed
c6e147e56a feat(agent): add MCP server discovery mode for agent tool input (#3353)
* feat(agent): add MCP server discovery mode for agent tool input

* fix(tool-input): use type variant for MCP server tool count badge

* fix(mcp-dynamic-args): align label styling with standard subblock labels

* standardized input format UI

* feat(tool-input): replace MCP server inline expand with drill-down navigation

* feat(tool-input): add chevron affordance and keyboard nav for MCP server drill-down

* fix(tool-input): handle mcp-server type in refresh, validation, badges, and usage control

* refactor(tool-validation): extract getMcpServerIssue, remove fake tool hack

* lint

* reorder dropdown

* perf(agent): parallelize MCP server tool creation with Promise.all

* fix(combobox): preserve cursor movement in search input, reset query on drilldown

* fix(combobox): route ArrowRight through handleSelect, remove redundant type guards

* fix(agent): rename mcpServers to mcpServerSelections to avoid shadowing DB import, route ArrowRight through handleSelect

* docs: update google integration docs

* fix(tool-input): reset drilldown state on tool selection to prevent stale view

* perf(agent): parallelize MCP server discovery across multiple servers
2026-02-26 15:17:23 -08:00
410 changed files with 50147 additions and 7035 deletions

View File

@@ -532,6 +532,41 @@ outputs: {
}
```
### Typed JSON Outputs
When using `type: 'json'` for an object whose shape is known in advance, **describe the inner fields in the description** so downstream blocks know which properties are available. For well-known, stable shapes, use a nested output definition instead:
```typescript
outputs: {
// BAD: Opaque json with no info about what's inside
plan: { type: 'json', description: 'Zone plan information' },
// GOOD: Describe the known fields in the description
plan: {
type: 'json',
description: 'Zone plan information (id, name, price, currency, frequency, is_subscribed)',
},
// BEST: Use nested output definition when the shape is stable and well-known
plan: {
id: { type: 'string', description: 'Plan identifier' },
name: { type: 'string', description: 'Plan name' },
price: { type: 'number', description: 'Plan price' },
currency: { type: 'string', description: 'Price currency' },
},
}
```
Use the nested pattern when:
- The object has a small, stable set of fields (< 10)
- Downstream blocks will commonly access specific properties
- The API response shape is well-documented and unlikely to change
Use `type: 'json'` with a descriptive string when:
- The object has many fields or a dynamic shape
- It represents a list/array of items
- The shape varies by operation
## V2 Block Pattern
When creating V2 blocks (alongside legacy V1):
@@ -695,6 +730,62 @@ Please provide the SVG and I'll convert it to a React component.
You can usually find this in the service's brand/press kit page, or copy it from their website.
```
## Advanced Mode for Optional Fields
Optional fields that are rarely used should be set to `mode: 'advanced'` so they don't clutter the basic UI. This includes:
- Pagination tokens
- Time range filters (start/end time)
- Sort order options
- Reply settings
- Rarely used IDs (e.g., reply-to tweet ID, quote tweet ID)
- Max results / limits
```typescript
{
id: 'startTime',
title: 'Start Time',
type: 'short-input',
placeholder: 'ISO 8601 timestamp',
condition: { field: 'operation', value: ['search', 'list'] },
mode: 'advanced', // Rarely used, hide from basic view
}
```
## WandConfig for Complex Inputs
Use `wandConfig` for fields that are hard to fill out manually, such as timestamps, comma-separated lists, and complex query strings. This gives users an AI-assisted input experience.
```typescript
// Timestamps - use generationType: 'timestamp' to inject current date context
{
id: 'startTime',
title: 'Start Time',
type: 'short-input',
mode: 'advanced',
wandConfig: {
enabled: true,
prompt: 'Generate an ISO 8601 timestamp based on the user description. Return ONLY the timestamp string.',
generationType: 'timestamp',
},
}
// Comma-separated lists - simple prompt without generationType
{
id: 'mediaIds',
title: 'Media IDs',
type: 'short-input',
mode: 'advanced',
wandConfig: {
enabled: true,
prompt: 'Generate a comma-separated list of media IDs. Return ONLY the comma-separated values.',
},
}
```
## Naming Convention
All tool IDs referenced in `tools.access` and returned by `tools.config.tool` MUST use `snake_case` (e.g., `x_create_tweet`, `slack_send_message`). Never use camelCase or PascalCase.
## Checklist Before Finishing
- [ ] All subBlocks have `id`, `title` (except switch), and `type`
@@ -702,9 +793,24 @@ You can usually find this in the service's brand/press kit page, or copy it from
- [ ] DependsOn set for fields that need other values
- [ ] Required fields marked correctly (boolean or condition)
- [ ] OAuth inputs have correct `serviceId`
- [ ] Tools.access lists all tool IDs (snake_case)
- [ ] Tools.config.tool returns correct tool ID (snake_case)
- [ ] Outputs match tool outputs
- [ ] Block registered in registry.ts
- [ ] If icon missing: asked user to provide SVG
- [ ] If triggers exist: `triggers` config set, trigger subBlocks spread
- [ ] Optional/rarely-used fields set to `mode: 'advanced'`
- [ ] Timestamps and complex inputs have `wandConfig` enabled
## Final Validation (Required)
After creating the block, you MUST validate it against every tool it references:
1. **Read every tool definition** that appears in `tools.access` — do not skip any
2. **For each tool, verify the block has correct:**
- SubBlock inputs that cover all required tool params (with correct `condition` to show for that operation)
- SubBlock input types that match the tool param types (e.g., dropdown for enums, short-input for strings)
- `tools.config.params` correctly maps subBlock IDs to tool param names (if they differ)
- Type coercions in `tools.config.params` for any params that need conversion (Number(), Boolean(), JSON.parse())
3. **Verify block outputs** cover the key fields returned by all tools
4. **Verify conditions** — each subBlock should only show for the operations that actually use it
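The coercion check in step 2 can be sketched as follows; the subBlock IDs (`maxResults`, `expandTasks`, `jobParameters`) and tool param names here are hypothetical, not taken from the actual codebase:
```typescript
// Hypothetical params mapper: subBlock ids (camelCase, from the UI) are
// renamed and coerced to the tool's param names and types.
type SubBlockValues = Record<string, string>

function mapParams(values: SubBlockValues): Record<string, unknown> {
  return {
    // Rename: subBlock id `maxResults` -> tool param `max_results`
    max_results: Number(values.maxResults),
    // Coerce explicitly: Boolean('false') would be true, so compare strings
    expand_tasks: values.expandTasks === 'true',
    // Parse JSON typed into a text field; JSON.parse throws on bad input
    job_parameters: values.jobParameters ? JSON.parse(values.jobParameters) : {},
  }
}

const mapped = mapParams({
  maxResults: '25',
  expandTasks: 'false',
  jobParameters: '{"env":"prod"}',
})
```
Explicit coercion matters because subBlock values arrive as strings, so each conversion should be deliberate rather than a bare cast.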

View File

@@ -102,6 +102,7 @@ export const {service}{Action}Tool: ToolConfig<Params, Response> = {
- Always use `?? []` for optional array fields
- Set `optional: true` for outputs that may not exist
- Never output raw JSON dumps - extract meaningful fields
- When using `type: 'json'` and you know the object shape, define `properties` with the inner fields so downstream consumers know the structure. Only use bare `type: 'json'` when the shape is truly dynamic
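A minimal sketch of an output definition following these rules; the field names are hypothetical:
```typescript
// Hypothetical output definition: a known json shape gets `properties`,
// and a field that may be absent is marked `optional: true`.
const outputs = {
  metadata: {
    type: 'json',
    description: 'Response metadata',
    properties: {
      requestId: { type: 'string', description: 'Request identifier' },
      count: { type: 'number', description: 'Total count' },
    },
  },
  nextToken: {
    type: 'string',
    description: 'Pagination token for the next page',
    optional: true, // may not exist on the last page
  },
}

// And in transformResponse, optional arrays default via `?? []`:
const data: { results?: string[] } = {}
const items = data.results ?? []
```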
## Step 3: Create Block
@@ -436,6 +437,12 @@ If creating V2 versions (API-aligned outputs):
- [ ] Ran `bun run scripts/generate-docs.ts`
- [ ] Verified docs file created
### Final Validation (Required)
- [ ] Read every tool file and cross-referenced inputs/outputs against the API docs
- [ ] Verified block subBlocks cover all required tool params with correct conditions
- [ ] Verified block outputs match what the tools actually return
- [ ] Verified `tools.config.params` correctly maps and coerces all param types
## Example Command
When the user asks to add an integration:
@@ -685,13 +692,40 @@ return NextResponse.json({
| `isUserFile` | `@/lib/core/utils/user-file` | Type guard for UserFile objects |
| `FileInputSchema` | `@/lib/uploads/utils/file-schemas` | Zod schema for file validation |
### Advanced Mode for Optional Fields
Optional fields that are rarely used should be set to `mode: 'advanced'` so they don't clutter the basic UI. Examples: pagination tokens, time range filters, sort order, max results, reply settings.
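A minimal subBlock sketch under this guideline; the field is hypothetical:
```typescript
// Hypothetical subBlock: a pagination token belongs behind advanced mode.
const pageTokenSubBlock = {
  id: 'pageToken',
  title: 'Page Token',
  type: 'short-input',
  placeholder: 'Token from a previous response',
  mode: 'advanced', // rarely used, so keep it out of the basic view
}
```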
### WandConfig for Complex Inputs
Use `wandConfig` for fields that are hard to fill out manually:
- **Timestamps**: Use `generationType: 'timestamp'` to inject current date context into the AI prompt
- **JSON arrays**: Use `generationType: 'json-object'` for structured data
- **Complex queries**: Use a descriptive prompt explaining the expected format
```typescript
{
id: 'startTime',
title: 'Start Time',
type: 'short-input',
mode: 'advanced',
wandConfig: {
enabled: true,
prompt: 'Generate an ISO 8601 timestamp. Return ONLY the timestamp string.',
generationType: 'timestamp',
},
}
```
### Common Gotchas
1. **OAuth serviceId must match** - The `serviceId` in oauth-input must match the OAuth provider configuration
2. **All tool IDs MUST be snake_case** - `stripe_create_payment`, not `stripeCreatePayment`. This applies to tool `id` fields, registry keys, `tools.access` arrays, and `tools.config.tool` return values
3. **Block type is snake_case** - `type: 'stripe'`, not `type: 'Stripe'`
4. **Alphabetical ordering** - Keep imports and registry entries alphabetically sorted
5. **Required can be conditional** - Use `required: { field: 'op', value: 'create' }` instead of always true
6. **DependsOn clears options** - When a dependency changes, selector options are refetched
7. **Never pass Buffer directly to fetch** - Convert to `new Uint8Array(buffer)` for TypeScript compatibility
8. **Always handle legacy file params** - Keep hidden `fileContent` params for backwards compatibility
9. **Optional fields use advanced mode** - Set `mode: 'advanced'` on rarely-used optional fields
10. **Complex inputs need wandConfig** - Timestamps, JSON arrays, and other hard-to-type values should have `wandConfig` enabled
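Gotcha 7 can be illustrated with a short sketch (the payload and upload URL are made up):
```typescript
// Node's Buffer is not assignable to fetch's BodyInit under strict
// TypeScript settings, so copy it into a plain Uint8Array first.
const buffer = Buffer.from('file contents')
const body = new Uint8Array(buffer) // copies the bytes into a plain Uint8Array

// Hypothetical upload call:
// await fetch('https://api.example.com/upload', { method: 'POST', body })
```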

View File

@@ -147,9 +147,18 @@ closedAt: {
},
```
### Typed JSON Outputs
When using `type: 'json'` and you know the object shape in advance, **always define the inner structure** using `properties` so downstream consumers know what fields are available:
```typescript
// BAD: Opaque json with no info about what's inside
metadata: {
type: 'json',
description: 'Response metadata',
},
// GOOD: Define the known properties
metadata: {
type: 'json',
description: 'Response metadata',
@@ -159,7 +168,10 @@ metadata: {
count: { type: 'number', description: 'Total count' },
},
},
```
For arrays of objects, define the item structure:
```typescript
items: {
type: 'array',
description: 'List of items',
@@ -173,6 +185,8 @@ items: {
},
```
Only use bare `type: 'json'` without `properties` when the shape is truly dynamic or unknown.
## Critical Rules for transformResponse
### Handle Nullable Fields
@@ -272,8 +286,13 @@ If creating V2 tools (API-aligned outputs), use `_v2` suffix:
- Version: `'2.0.0'`
- Outputs: Flat, API-aligned (no content/metadata wrapper)
## Naming Convention
All tool IDs MUST use `snake_case`: `{service}_{action}` (e.g., `x_create_tweet`, `slack_send_message`). Never use camelCase or PascalCase for tool IDs.
## Checklist Before Finishing
- [ ] All tool IDs use snake_case
- [ ] All params have explicit `required: true` or `required: false`
- [ ] All params have appropriate `visibility`
- [ ] All nullable response fields use `?? null`
@@ -281,4 +300,22 @@ If creating V2 tools (API-aligned outputs), use `_v2` suffix:
- [ ] No raw JSON dumps in outputs
- [ ] Types file has all interfaces
- [ ] Index.ts exports all tools
## Final Validation (Required)
After creating all tools, you MUST validate every tool before finishing:
1. **Read every tool file** you created — do not skip any
2. **Cross-reference with the API docs** to verify:
- All required params are marked `required: true`
- All optional params are marked `required: false`
- Param types match the API (string, number, boolean, json)
- Request URL, method, headers, and body match the API spec
- `transformResponse` extracts the correct fields from the API response
- All output fields match what the API actually returns
- No fields are missing from outputs that the API provides
- No extra fields are defined in outputs that the API doesn't return
3. **Verify consistency** across tools:
- Shared types in `types.ts` match all tools that use them
- Tool IDs in the barrel export match the tool file definitions
- Error handling is consistent (error checks, meaningful messages)
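The consistency check in step 3 can be sketched as a tiny example; the luma tool objects here are illustrative stubs, not the real definitions:
```typescript
// Illustrative stubs: each tool file declares its own id, and the barrel
// export / registry keys must match those ids exactly (and stay snake_case).
const lumaGetEventTool = { id: 'luma_get_event', version: '1.0.0' }
const lumaCreateEventTool = { id: 'luma_create_event', version: '1.0.0' }

// Keys are alphabetical and must equal each tool's declared id.
const registry: Record<string, { id: string }> = {
  luma_create_event: lumaCreateEventTool,
  luma_get_event: lumaGetEventTool,
}

const consistent = Object.entries(registry).every(([key, tool]) => key === tool.id)
```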

View File

@@ -0,0 +1,283 @@
---
description: Validate an existing Sim integration (tools, block, registry) against the service's API docs
argument-hint: <service-name> [api-docs-url]
---
# Validate Integration Skill
You are an expert auditor for Sim integrations. Your job is to thoroughly validate that an existing integration is correct, complete, and follows all conventions.
## Your Task
When the user asks you to validate an integration:
1. Read the service's API documentation (via WebFetch or Context7)
2. Read every tool, the block, and registry entries
3. Cross-reference everything against the API docs and Sim conventions
4. Report all issues found, grouped by severity (critical, warning, suggestion)
5. Fix all issues after reporting them
## Step 1: Gather All Files
Read **every** file for the integration — do not skip any:
```
apps/sim/tools/{service}/ # All tool files, types.ts, index.ts
apps/sim/blocks/blocks/{service}.ts # Block definition
apps/sim/tools/registry.ts # Tool registry entries for this service
apps/sim/blocks/registry.ts # Block registry entry for this service
apps/sim/components/icons.tsx # Icon definition
apps/sim/lib/auth/auth.ts # OAuth scopes (if OAuth service)
apps/sim/lib/oauth/oauth.ts # OAuth provider config (if OAuth service)
```
## Step 2: Pull API Documentation
Fetch the official API docs for the service. This is the **source of truth** for:
- Endpoint URLs, HTTP methods, and auth headers
- Required vs optional parameters
- Parameter types and allowed values
- Response shapes and field names
- Pagination patterns (which param name, which response field)
- Rate limits and error formats
## Step 3: Validate Tools
For **every** tool file, check:
### Tool ID and Naming
- [ ] Tool ID uses `snake_case`: `{service}_{action}` (e.g., `x_create_tweet`, `slack_send_message`)
- [ ] Tool `name` is human-readable (e.g., `'X Create Tweet'`)
- [ ] Tool `description` is a concise one-liner describing what it does
- [ ] Tool `version` is set (`'1.0.0'` or `'2.0.0'` for V2)
### Params
- [ ] All required API params are marked `required: true`
- [ ] All optional API params are marked `required: false`
- [ ] Every param has explicit `required: true` or `required: false` — never omitted
- [ ] Param types match the API (`'string'`, `'number'`, `'boolean'`, `'json'`)
- [ ] Visibility is correct:
- `'hidden'` — ONLY for OAuth access tokens and system-injected params
- `'user-only'` — for API keys, credentials, and account-specific IDs the user must provide
- `'user-or-llm'` — for everything else (search queries, content, filters, IDs that could come from other blocks)
- [ ] Every param has a `description` that explains what it does
### Request
- [ ] URL matches the API endpoint exactly (correct base URL, path segments, path params)
- [ ] HTTP method matches the API spec (GET, POST, PUT, PATCH, DELETE)
- [ ] Headers include correct auth pattern:
- OAuth: `Authorization: Bearer ${params.accessToken}`
- API Key: correct header name and format per the service's docs
- [ ] `Content-Type` header is set for POST/PUT/PATCH requests
- [ ] Body sends all required fields and only includes optional fields when provided
- [ ] For GET requests with query params: URL is constructed correctly with query string
- [ ] ID fields in URL paths are `.trim()`-ed to prevent copy-paste whitespace errors
- [ ] Path params use template literals correctly: `` `https://api.service.com/v1/${params.id.trim()}` ``
### Response / transformResponse
- [ ] Correctly parses the API response (`await response.json()`)
- [ ] Extracts the right fields from the response structure (e.g., `data.data` vs `data` vs `data.results`)
- [ ] All nullable fields use `?? null`
- [ ] All optional arrays use `?? []`
- [ ] Error cases are handled: checks for missing/empty data and returns meaningful error
- [ ] Does NOT do raw JSON dumps — extracts meaningful, individual fields
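These transformResponse checks can be condensed into a sketch; the `results` envelope and the event fields are assumptions for illustration:
```typescript
// Hypothetical transformResponse extraction logic, shown synchronously.
interface ApiEvent {
  id: string
  name?: string | null
  tags?: string[]
}

function extractEvents(data: { results?: ApiEvent[] }) {
  const events = data.results ?? [] // optional array: default with ?? []
  return events.map((e) => ({
    id: e.id,
    name: e.name ?? null, // nullable field: default with ?? null
    tags: e.tags ?? [],
  }))
}

const out = extractEvents({ results: [{ id: 'evt_1' }] })
```
Extracting named fields like this, rather than returning the parsed body wholesale, is what the "no raw JSON dumps" check is looking for.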
### Outputs
- [ ] All output fields match what the API actually returns
- [ ] No fields are missing that the API provides and users would commonly need
- [ ] No phantom fields defined that the API doesn't return
- [ ] `optional: true` is set on fields that may not exist in all responses
- [ ] When using `type: 'json'` and the shape is known, `properties` defines the inner fields
- [ ] When using `type: 'array'`, `items` defines the item structure with `properties`
- [ ] Field descriptions are accurate and helpful
### Types (types.ts)
- [ ] Has param interfaces for every tool (e.g., `XCreateTweetParams`)
- [ ] Has response interfaces for every tool (extending `ToolResponse`)
- [ ] Optional params use `?` in the interface (e.g., `replyTo?: string`)
- [ ] Field names in types match actual API field names
- [ ] Shared response types are properly reused (e.g., `XTweetResponse` shared across tweet tools)
### Barrel Export (index.ts)
- [ ] Every tool is exported
- [ ] All types are re-exported (`export * from './types'`)
- [ ] No orphaned exports (tools that don't exist)
### Tool Registry (tools/registry.ts)
- [ ] Every tool is imported and registered
- [ ] Registry keys use snake_case and match tool IDs exactly
- [ ] Entries are in alphabetical order within the file
## Step 4: Validate Block
### Block ↔ Tool Alignment (CRITICAL)
This is the most important validation — the block must be perfectly aligned with every tool it references.
For **each tool** in `tools.access`:
- [ ] The operation dropdown has an option whose ID matches the tool ID (or the `tools.config.tool` function correctly maps to it)
- [ ] Every **required** tool param (except `accessToken`) has a corresponding subBlock input that is:
- Shown when that operation is selected (correct `condition`)
- Marked as `required: true` (or conditionally required)
- [ ] Every **optional** tool param has a corresponding subBlock input (or is intentionally omitted if truly never needed)
- [ ] SubBlock `id` values are unique across the entire block — no duplicates even across different conditions
- [ ] The `tools.config.tool` function returns the correct tool ID for every possible operation value
- [ ] The `tools.config.params` function correctly maps subBlock IDs to tool param names when they differ
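A minimal sketch of the `tools.config.tool` mapping this checklist verifies; the operation values and tool ids are illustrative:
```typescript
// Hypothetical tools config: every operation value must resolve to a
// snake_case tool id that also appears in `access`.
const toolsConfig = {
  access: ['x_create_tweet', 'x_delete_tweet'],
  config: {
    tool: (params: { operation: string }) => {
      switch (params.operation) {
        case 'create':
          return 'x_create_tweet'
        case 'delete':
          return 'x_delete_tweet'
        default:
          // Failing loudly beats silently routing to the wrong tool
          throw new Error(`Unknown operation: ${params.operation}`)
      }
    },
  },
}

const toolId = toolsConfig.config.tool({ operation: 'create' })
```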
### SubBlocks
- [ ] Operation dropdown lists ALL tool operations available in `tools.access`
- [ ] Dropdown option labels are human-readable and descriptive
- [ ] Conditions use correct syntax:
- Single value: `{ field: 'operation', value: 'x_create_tweet' }`
- Multiple values (OR): `{ field: 'operation', value: ['x_create_tweet', 'x_delete_tweet'] }`
- Negation: `{ field: 'operation', value: 'delete', not: true }`
- Compound: `{ field: 'op', value: 'send', and: { field: 'type', value: 'dm' } }`
- [ ] Condition arrays include ALL operations that use that field — none missing
- [ ] `dependsOn` is set for fields that need other values (selectors depending on credential, cascading dropdowns)
- [ ] SubBlock types match tool param types:
- Enum/fixed options → `dropdown`
- Free text → `short-input`
- Long text/content → `long-input`
- True/false → `dropdown` with Yes/No options (not `switch` unless purely UI toggle)
- Credentials → `oauth-input` with correct `serviceId`
- [ ] Dropdown `value: () => 'default'` is set for dropdowns with a sensible default
### Advanced Mode
- [ ] Optional, rarely-used fields are set to `mode: 'advanced'`:
- Pagination tokens / next tokens
- Time range filters (start/end time)
- Sort order / direction options
- Max results / per page limits
- Reply settings / threading options
- Rarely used IDs (reply-to, quote-tweet, etc.)
- Exclude filters
- [ ] **Required** fields are NEVER set to `mode: 'advanced'`
- [ ] Fields that users fill in most of the time are NOT set to `mode: 'advanced'`
### WandConfig
- [ ] Timestamp fields have `wandConfig` with `generationType: 'timestamp'`
- [ ] Comma-separated list fields have `wandConfig` with a descriptive prompt
- [ ] Complex filter/query fields have `wandConfig` with format examples in the prompt
- [ ] All `wandConfig` prompts end with "Return ONLY the [format] - no explanations, no extra text."
- [ ] `wandConfig.placeholder` describes what to type in natural language
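As a hedged sketch of these conventions (the field names and prompt wording are illustrative, not copied from a real block):

```typescript
// Hypothetical timestamp subBlock with a wandConfig — illustrative only.
const startTimeSubBlock = {
  id: 'startTime',
  title: 'Start Time',
  type: 'short-input',
  mode: 'advanced', // optional time-range filter, so advanced mode
  wandConfig: {
    generationType: 'timestamp',
    prompt:
      'Generate an ISO 8601 timestamp (e.g. 2024-01-15T00:00:00Z) from the description. ' +
      'Return ONLY the timestamp - no explanations, no extra text.',
    placeholder: 'Describe the start time, e.g. "beginning of last week"',
  },
}
```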
### Tools Config
- [ ] `tools.access` lists **every** tool ID the block can use — none missing
- [ ] `tools.config.tool` returns the correct tool ID for each operation
- [ ] Type coercions are in `tools.config.params` (runs at execution time), NOT in `tools.config.tool` (runs at serialization time before variable resolution)
- [ ] `tools.config.params` handles:
- `Number()` conversion for numeric params that come as strings from inputs
- `Boolean` / string-to-boolean conversion for toggle params
- Empty string → `undefined` conversion for optional dropdown values
- Any subBlock ID → tool param name remapping
- [ ] No `Number()`, `JSON.parse()`, or other coercions in `tools.config.tool` — these would destroy dynamic references like `<Block.output>`
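A minimal sketch of the split these checks enforce — the operation names, subBlock IDs, and param names are hypothetical, not the real X config:

```typescript
// tools.config sketch for an illustrative block.
const toolsConfig = {
  access: ['x_create_tweet', 'x_search_tweets'],
  config: {
    // Serialization time, BEFORE variable resolution: select the tool ID only.
    // A Number()/JSON.parse() here would destroy references like <Block.output>.
    tool: (params: Record<string, string>) =>
      params.operation === 'x_search_tweets' ? 'x_search_tweets' : 'x_create_tweet',
    // Execution time, AFTER variables resolve: coercions and remapping are safe here.
    params: (params: Record<string, string>) => ({
      text: params.tweetText, // subBlock ID → tool param name
      maxResults: params.maxResults ? Number(params.maxResults) : undefined, // string → number
      excludeReplies: params.excludeReplies === 'true', // dropdown string → boolean
      sortOrder: params.sortOrder === '' ? undefined : params.sortOrder, // empty → undefined
    }),
  },
}
```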
### Block Outputs
- [ ] Outputs cover the key fields returned by ALL tools (not just one operation)
- [ ] Output types are correct (`'string'`, `'number'`, `'boolean'`, `'json'`)
- [ ] `type: 'json'` outputs either:
- Describe inner fields in the description string (GOOD): `'User profile (id, name, username, bio)'`
- Use nested output definitions (BEST): `{ id: { type: 'string' }, name: { type: 'string' } }`
- [ ] No opaque `type: 'json'` with vague descriptions like `'Response data'`
- [ ] Outputs that only appear for certain operations use `condition` if supported, or document which operations return them
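Side by side, the two acceptable shapes might look like this (field names hypothetical):

```typescript
// Illustrative block outputs — GOOD vs BEST for type: 'json'.
const outputs = {
  // GOOD: inner fields listed in the description string
  profile: { type: 'json', description: 'User profile (id, name, username, bio)' },
  // BEST: nested output definitions the UI can surface individually
  metrics: {
    likes: { type: 'number', description: 'Like count' },
    retweets: { type: 'number', description: 'Retweet count' },
  },
}
```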
### Block Metadata
- [ ] `type` is snake_case (e.g., `'x'`, `'cloudflare'`)
- [ ] `name` is human-readable (e.g., `'X'`, `'Cloudflare'`)
- [ ] `description` is a concise one-liner
- [ ] `longDescription` provides detail for docs
- [ ] `docsLink` points to `'https://docs.sim.ai/tools/{service}'`
- [ ] `category` is `'tools'`
- [ ] `bgColor` uses the service's brand color hex
- [ ] `icon` references the correct icon component from `@/components/icons`
- [ ] `authMode` is set correctly (`AuthMode.OAuth` or `AuthMode.ApiKey`)
- [ ] Block is registered in `blocks/registry.ts` alphabetically
### Block Inputs
- [ ] `inputs` section lists all subBlock params that the block accepts
- [ ] Input types match the subBlock types
- [ ] When using `canonicalParamId`, inputs list the canonical ID (not the raw subBlock IDs)
## Step 5: Validate OAuth Scopes (if OAuth service)
- [ ] `auth.ts` scopes include ALL scopes needed by ALL tools in the integration
- [ ] `oauth.ts` provider config scopes match `auth.ts` scopes
- [ ] Block `requiredScopes` (if defined) matches `auth.ts` scopes
- [ ] No excess scopes that aren't needed by any tool
- [ ] Each scope has a human-readable description in `oauth-required-modal.tsx`'s `SCOPE_DESCRIPTIONS`
## Step 6: Validate Pagination Consistency
If any tools support pagination:
- [ ] Pagination param names match the API docs (e.g., `pagination_token` vs `next_token` vs `cursor`)
- [ ] Different API endpoints that use different pagination param names have separate subBlocks in the block
- [ ] Pagination response fields (`nextToken`, `cursor`, etc.) are included in tool outputs
- [ ] Pagination subBlocks are set to `mode: 'advanced'`
## Step 7: Validate Error Handling
- [ ] `transformResponse` checks for error conditions before accessing data
- [ ] Error responses include meaningful messages (not just generic "failed")
- [ ] HTTP error status codes are handled (check `response.ok` or status codes)
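A hedged sketch of the `transformResponse` pattern these checks describe — the response envelope (`data`, `errors`) is illustrative, not a specific API's shape:

```typescript
// Illustrative transformResponse with error handling — response shape assumed.
const transformResponse = async (response: Response) => {
  // Check HTTP status before touching the body
  if (!response.ok) {
    const body = await response.text()
    throw new Error(`Request failed (${response.status}): ${body || response.statusText}`)
  }
  const data = await response.json()
  // Check API-level error envelopes before accessing data
  if (data.errors?.length) {
    throw new Error(`API error: ${data.errors[0].message ?? 'unknown error'}`)
  }
  return {
    success: true,
    output: {
      id: data.data?.id ?? null, // `?? null` on nullable fields
      text: data.data?.text ?? null,
    },
  }
}
```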
## Step 8: Report and Fix
### Report Format
Group findings by severity:
**Critical** (will cause runtime errors or incorrect behavior):
- Wrong endpoint URL or HTTP method
- Missing required params or wrong `required` flag
- Incorrect response field mapping (accessing wrong path in response)
- Missing error handling that would cause crashes
- Tool ID mismatch between tool file, registry, and block `tools.access`
- OAuth scopes missing in `auth.ts` that tools need
- `tools.config.tool` returning wrong tool ID for an operation
- Type coercions in `tools.config.tool` instead of `tools.config.params`
**Warning** (follows conventions incorrectly or has usability issues):
- Optional field not set to `mode: 'advanced'`
- Missing `wandConfig` on timestamp/complex fields
- Wrong `visibility` on params (e.g., `'hidden'` instead of `'user-or-llm'`)
- Missing `optional: true` on nullable outputs
- Opaque `type: 'json'` without property descriptions
- Missing `.trim()` on ID fields in request URLs
- Missing `?? null` on nullable response fields
- Block condition array missing an operation that uses that field
- Missing scope description in `oauth-required-modal.tsx`
**Suggestion** (minor improvements):
- Better description text
- Inconsistent naming across tools
- Missing `longDescription` or `docsLink`
- Pagination fields that could benefit from `wandConfig`
### Fix All Issues
After reporting, fix every **critical** and **warning** issue. Apply **suggestions** where they don't add unnecessary complexity.
### Validation Output
After fixing, confirm:
1. `bun run lint` passes with no fixes needed
2. TypeScript compiles clean (no type errors)
3. Re-read all modified files to verify fixes are correct
## Checklist Summary
- [ ] Read ALL tool files, block, types, index, and registries
- [ ] Pulled and read official API documentation
- [ ] Validated every tool's ID, params, request, response, outputs, and types against API docs
- [ ] Validated block ↔ tool alignment (every tool param has a subBlock, every condition is correct)
- [ ] Validated advanced mode on optional/rarely-used fields
- [ ] Validated wandConfig on timestamps and complex inputs
- [ ] Validated tools.config mapping, tool selector, and type coercions
- [ ] Validated block outputs match what tools return, with typed JSON where possible
- [ ] Validated OAuth scopes alignment across auth.ts, oauth.ts, block, and modal (if OAuth)
- [ ] Validated pagination consistency across tools and block
- [ ] Validated error handling (error checks, meaningful messages)
- [ ] Validated registry entries (tools and block, alphabetical, correct imports)
- [ ] Reported all issues grouped by severity
- [ ] Fixed all critical and warning issues
- [ ] Ran `bun run lint` after fixes
- [ ] Verified TypeScript compiles clean


@@ -8,51 +8,210 @@ paths:
Use Vitest. Test files: `feature.ts` → `feature.test.ts`
## Global Mocks (vitest.setup.ts)
These modules are mocked globally — do NOT re-mock them in test files unless you need to override behavior:
- `@sim/db` → `databaseMock`
- `drizzle-orm` → `drizzleOrmMock`
- `@sim/logger` → `loggerMock`
- `@/stores/console/store`, `@/stores/terminal`, `@/stores/execution/store`
- `@/blocks/registry`
- `@trigger.dev/sdk`
## Structure
```typescript
/**
* @vitest-environment node
*/
import { databaseMock, loggerMock } from '@sim/testing'
import { describe, expect, it, vi } from 'vitest'
import { createMockRequest } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
vi.mock('@sim/db', () => databaseMock)
vi.mock('@sim/logger', () => loggerMock)
const { mockGetSession } = vi.hoisted(() => ({
mockGetSession: vi.fn(),
}))
import { myFunction } from '@/lib/feature'
vi.mock('@/lib/auth', () => ({
auth: { api: { getSession: vi.fn() } },
getSession: mockGetSession,
}))
describe('myFunction', () => {
beforeEach(() => vi.clearAllMocks())
it.concurrent('isolated tests run in parallel', () => { ... })
import { GET, POST } from '@/app/api/my-route/route'
describe('my route', () => {
beforeEach(() => {
vi.clearAllMocks()
mockGetSession.mockResolvedValue({ user: { id: 'user-1' } })
})
it('returns data', async () => {
const req = createMockRequest('GET')
const res = await GET(req)
expect(res.status).toBe(200)
})
})
```
## Performance Rules (Critical)
### NEVER use `vi.resetModules()` + `vi.doMock()` + `await import()`
This is the #1 cause of slow tests. It forces complete module re-evaluation per test.
```typescript
// BAD — forces module re-evaluation every test (~50-100ms each)
beforeEach(() => {
vi.resetModules()
vi.doMock('@/lib/auth', () => ({ getSession: vi.fn() }))
})
it('test', async () => {
const { GET } = await import('./route') // slow dynamic import
})
// GOOD — module loaded once, mocks reconfigured per test (~1ms each)
const { mockGetSession } = vi.hoisted(() => ({
mockGetSession: vi.fn(),
}))
vi.mock('@/lib/auth', () => ({ getSession: mockGetSession }))
import { GET } from '@/app/api/my-route/route'
beforeEach(() => { vi.clearAllMocks() })
it('test', () => {
mockGetSession.mockResolvedValue({ user: { id: '1' } })
})
```
**Only exception:** Singleton modules that cache state at module scope (e.g., Redis clients, connection pools). These genuinely need `vi.resetModules()` + dynamic import to get a fresh instance per test.
### NEVER use `vi.importActual()`
This defeats the purpose of mocking by loading the real module and all its dependencies.
```typescript
// BAD — loads real module + all transitive deps
vi.mock('@/lib/workspaces/utils', async () => {
const actual = await vi.importActual('@/lib/workspaces/utils')
return { ...actual, myFn: vi.fn() }
})
// GOOD — mock everything, only implement what tests need
vi.mock('@/lib/workspaces/utils', () => ({
myFn: vi.fn(),
otherFn: vi.fn(),
}))
```
### NEVER use `mockAuth()`, `mockConsoleLogger()`, or `setupCommonApiMocks()` from `@sim/testing`
These helpers internally use `vi.doMock()` which is slow. Use direct `vi.hoisted()` + `vi.mock()` instead.
### Mock heavy transitive dependencies
If a module under test imports `@/blocks` (200+ files), `@/tools/registry`, or other heavy modules, mock them:
```typescript
vi.mock('@/blocks', () => ({
getBlock: () => null,
getAllBlocks: () => ({}),
getAllBlockTypes: () => [],
registry: {},
}))
```
### Use `@vitest-environment node` unless DOM is needed
Only use `@vitest-environment jsdom` if the test uses `window`, `document`, `FormData`, or other browser APIs. Node environment is significantly faster.
### Avoid real timers in tests
```typescript
// BAD
await new Promise(r => setTimeout(r, 500))
// GOOD — use minimal delays or fake timers
await new Promise(r => setTimeout(r, 1))
// or
vi.useFakeTimers()
```
## Mock Pattern Reference
### Auth mocking (API routes)
```typescript
const { mockGetSession } = vi.hoisted(() => ({
mockGetSession: vi.fn(),
}))
vi.mock('@/lib/auth', () => ({
auth: { api: { getSession: vi.fn() } },
getSession: mockGetSession,
}))
// In tests:
mockGetSession.mockResolvedValue({ user: { id: 'user-1', email: 'test@example.com' } })
mockGetSession.mockResolvedValue(null) // unauthenticated
```
### Hybrid auth mocking
```typescript
const { mockCheckSessionOrInternalAuth } = vi.hoisted(() => ({
mockCheckSessionOrInternalAuth: vi.fn(),
}))
vi.mock('@/lib/auth/hybrid', () => ({
checkSessionOrInternalAuth: mockCheckSessionOrInternalAuth,
}))
// In tests:
mockCheckSessionOrInternalAuth.mockResolvedValue({
success: true, userId: 'user-1', authType: 'session',
})
```
### Database chain mocking
```typescript
const { mockSelect, mockFrom, mockWhere } = vi.hoisted(() => ({
mockSelect: vi.fn(),
mockFrom: vi.fn(),
mockWhere: vi.fn(),
}))
vi.mock('@sim/db', () => ({
db: { select: mockSelect },
}))
beforeEach(() => {
mockSelect.mockReturnValue({ from: mockFrom })
mockFrom.mockReturnValue({ where: mockWhere })
mockWhere.mockResolvedValue([{ id: '1', name: 'test' }])
})
```
## @sim/testing Package
Always prefer over local mocks.
Always prefer over local test data.
| Category | Utilities |
|----------|-----------|
| **Mocks** | `loggerMock`, `databaseMock`, `setupGlobalFetchMock()` |
| **Factories** | `createSession()`, `createWorkflowRecord()`, `createBlock()`, `createExecutorContext()` |
| **Mocks** | `loggerMock`, `databaseMock`, `drizzleOrmMock`, `setupGlobalFetchMock()` |
| **Factories** | `createSession()`, `createWorkflowRecord()`, `createBlock()`, `createExecutionContext()` |
| **Builders** | `WorkflowBuilder`, `ExecutionContextBuilder` |
| **Assertions** | `expectWorkflowAccessGranted()`, `expectBlockExecuted()` |
| **Requests** | `createMockRequest()`, `createEnvMock()` |
## Rules
## Rules Summary
1. `@vitest-environment node` directive at file top
2. `vi.mock()` calls before importing mocked modules
3. `@sim/testing` utilities over local mocks
4. `it.concurrent` for isolated tests (no shared mutable state)
5. `beforeEach(() => vi.clearAllMocks())` to reset state
## Hoisted Mocks
For mutable mock references:
```typescript
const mockFn = vi.hoisted(() => vi.fn())
vi.mock('@/lib/module', () => ({ myFunction: mockFn }))
mockFn.mockResolvedValue({ data: 'test' })
```
1. `@vitest-environment node` unless DOM is required
2. `vi.hoisted()` + `vi.mock()` + static imports — never `vi.resetModules()` + `vi.doMock()` + dynamic imports
3. `vi.mock()` calls before importing mocked modules
4. `@sim/testing` utilities over local mocks
5. `beforeEach(() => vi.clearAllMocks())` to reset state — no redundant `afterEach`
6. No `vi.importActual()` — mock everything explicitly
7. No `mockAuth()`, `mockConsoleLogger()`, `setupCommonApiMocks()` — use direct mocks
8. Mock heavy deps (`@/blocks`, `@/tools/registry`, `@/triggers`) in tests that don't need them
9. Use absolute imports in test files
10. Avoid real timers — use 1ms delays or `vi.useFakeTimers()`


@@ -7,51 +7,210 @@ globs: ["apps/sim/**/*.test.ts", "apps/sim/**/*.test.tsx"]


@@ -8,7 +8,7 @@ on:
concurrency:
group: ci-${{ github.ref }}
cancel-in-progress: false
cancel-in-progress: ${{ github.event_name == 'pull_request' }}
permissions:
contents: read


@@ -10,7 +10,7 @@ permissions:
jobs:
test-build:
name: Test and Build
runs-on: blacksmith-4vcpu-ubuntu-2404
runs-on: blacksmith-8vcpu-ubuntu-2404
steps:
- name: Checkout code
@@ -38,6 +38,20 @@ jobs:
key: ${{ github.repository }}-node-modules
path: ./node_modules
- name: Mount Turbo cache (Sticky Disk)
uses: useblacksmith/stickydisk@v1
with:
key: ${{ github.repository }}-turbo-cache
path: ./.turbo
- name: Restore Next.js build cache
uses: actions/cache@v4
with:
path: ./apps/sim/.next/cache
key: ${{ runner.os }}-nextjs-${{ hashFiles('bun.lock') }}
restore-keys: |
${{ runner.os }}-nextjs-
- name: Install dependencies
run: bun install --frozen-lockfile
@@ -85,6 +99,7 @@ jobs:
NEXT_PUBLIC_APP_URL: 'https://www.sim.ai'
DATABASE_URL: 'postgresql://postgres:postgres@localhost:5432/simstudio'
ENCRYPTION_KEY: '7cf672e460e430c1fba707575c2b0e2ad5a99dddf9b7b7e3b5646e630861db1c' # dummy key for CI only
TURBO_CACHE_DIR: .turbo
run: bun run test
- name: Check schema and migrations are in sync
@@ -110,6 +125,7 @@ jobs:
RESEND_API_KEY: 'dummy_key_for_ci_only'
AWS_REGION: 'us-west-2'
ENCRYPTION_KEY: '7cf672e460e430c1fba707575c2b0e2ad5a99dddf9b7b7e3b5646e630861db1c' # dummy key for CI only
TURBO_CACHE_DIR: .turbo
run: bunx turbo run build --filter=sim
- name: Upload coverage to Codecov


@@ -167,27 +167,51 @@ Import from `@/components/emcn`, never from subpaths (except CSS files). Use CVA
## Testing
Use Vitest. Test files: `feature.ts` → `feature.test.ts`
Use Vitest. Test files: `feature.ts` → `feature.test.ts`. See `.cursor/rules/sim-testing.mdc` for full details.
### Global Mocks (vitest.setup.ts)
`@sim/db`, `drizzle-orm`, `@sim/logger`, `@/blocks/registry`, `@trigger.dev/sdk`, and store mocks are provided globally. Do NOT re-mock them unless overriding behavior.
### Standard Test Pattern
```typescript
/**
* @vitest-environment node
*/
import { databaseMock, loggerMock } from '@sim/testing'
import { describe, expect, it, vi } from 'vitest'
import { createMockRequest } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
vi.mock('@sim/db', () => databaseMock)
vi.mock('@sim/logger', () => loggerMock)
const { mockGetSession } = vi.hoisted(() => ({
mockGetSession: vi.fn(),
}))
import { myFunction } from '@/lib/feature'
vi.mock('@/lib/auth', () => ({
auth: { api: { getSession: vi.fn() } },
getSession: mockGetSession,
}))
describe('feature', () => {
beforeEach(() => vi.clearAllMocks())
it.concurrent('runs in parallel', () => { ... })
import { GET } from '@/app/api/my-route/route'
describe('my route', () => {
beforeEach(() => {
vi.clearAllMocks()
mockGetSession.mockResolvedValue({ user: { id: 'user-1' } })
})
it('returns data', async () => { ... })
})
```
Use `@sim/testing` mocks/factories over local test data. See `.cursor/rules/sim-testing.mdc` for details.
### Performance Rules
- **NEVER** use `vi.resetModules()` + `vi.doMock()` + `await import()` — use `vi.hoisted()` + `vi.mock()` + static imports
- **NEVER** use `vi.importActual()` — mock everything explicitly
- **NEVER** use `mockAuth()`, `mockConsoleLogger()`, `setupCommonApiMocks()` from `@sim/testing` — they use `vi.doMock()` internally
- **Mock heavy deps** (`@/blocks`, `@/tools/registry`, `@/triggers`) in tests that don't need them
- **Use `@vitest-environment node`** unless DOM APIs are needed (`window`, `document`, `FormData`)
- **Avoid real timers** — use 1ms delays or `vi.useFakeTimers()`
Use `@sim/testing` mocks/factories over local test data.
## Utils Rules


@@ -1,5 +1,8 @@
import type React from 'react'
import type { Root } from 'fumadocs-core/page-tree'
import { findNeighbour } from 'fumadocs-core/page-tree'
import type { ApiPageProps } from 'fumadocs-openapi/ui'
import { createAPIPage } from 'fumadocs-openapi/ui'
import { Pre } from 'fumadocs-ui/components/codeblock'
import defaultMdxComponents from 'fumadocs-ui/mdx'
import { DocsBody, DocsDescription, DocsPage, DocsTitle } from 'fumadocs-ui/page'
@@ -12,28 +15,75 @@ import { LLMCopyButton } from '@/components/page-actions'
import { StructuredData } from '@/components/structured-data'
import { CodeBlock } from '@/components/ui/code-block'
import { Heading } from '@/components/ui/heading'
import { ResponseSection } from '@/components/ui/response-section'
import { i18n } from '@/lib/i18n'
import { getApiSpecContent, openapi } from '@/lib/openapi'
import { type PageData, source } from '@/lib/source'
const SUPPORTED_LANGUAGES: Set<string> = new Set(i18n.languages)
const BASE_URL = 'https://docs.sim.ai'
function resolveLangAndSlug(params: { slug?: string[]; lang: string }) {
const isValidLang = SUPPORTED_LANGUAGES.has(params.lang)
const lang = isValidLang ? params.lang : 'en'
const slug = isValidLang ? params.slug : [params.lang, ...(params.slug ?? [])]
return { lang, slug }
}
const APIPage = createAPIPage(openapi, {
playground: { enabled: false },
content: {
renderOperationLayout: async (slots) => {
return (
<div className='flex @4xl:flex-row flex-col @4xl:items-start gap-x-6 gap-y-4'>
<div className='min-w-0 flex-1'>
{slots.header}
{slots.apiPlayground}
{slots.authSchemes && <div className='api-section-divider'>{slots.authSchemes}</div>}
{slots.paremeters}
{slots.body && <div className='api-section-divider'>{slots.body}</div>}
<ResponseSection>{slots.responses}</ResponseSection>
{slots.callbacks}
</div>
<div className='@4xl:sticky @4xl:top-[calc(var(--fd-docs-row-1,2rem)+1rem)] @4xl:w-[400px]'>
{slots.apiExample}
</div>
</div>
)
},
},
})
export default async function Page(props: { params: Promise<{ slug?: string[]; lang: string }> }) {
const params = await props.params
const page = source.getPage(params.slug, params.lang)
const { lang, slug } = resolveLangAndSlug(params)
const page = source.getPage(slug, lang)
if (!page) notFound()
const data = page.data as PageData
const MDX = data.body
const baseUrl = 'https://docs.sim.ai'
const markdownContent = await data.getText('processed')
const data = page.data as unknown as PageData & {
_openapi?: { method?: string }
getAPIPageProps?: () => ApiPageProps
}
const isOpenAPI = '_openapi' in data && data._openapi != null
const isApiReference = slug?.some((s) => s === 'api-reference') ?? false
const pageTreeRecord = source.pageTree as Record<string, any>
const pageTree =
pageTreeRecord[params.lang] ?? pageTreeRecord.en ?? Object.values(pageTreeRecord)[0]
const neighbours = pageTree ? findNeighbour(pageTree, page.url) : null
const pageTreeRecord = source.pageTree as Record<string, Root>
const pageTree = pageTreeRecord[lang] ?? pageTreeRecord.en ?? Object.values(pageTreeRecord)[0]
const rawNeighbours = pageTree ? findNeighbour(pageTree, page.url) : null
const neighbours = isApiReference
? {
previous: rawNeighbours?.previous?.url.includes('/api-reference/')
? rawNeighbours.previous
: undefined,
next: rawNeighbours?.next?.url.includes('/api-reference/') ? rawNeighbours.next : undefined,
}
: rawNeighbours
const generateBreadcrumbs = () => {
const breadcrumbs: Array<{ name: string; url: string }> = [
{
name: 'Home',
url: baseUrl,
url: BASE_URL,
},
]
@@ -41,7 +91,7 @@ export default async function Page(props: { params: Promise<{ slug?: string[]; l
let currentPath = ''
urlParts.forEach((part, index) => {
if (index === 0 && ['en', 'es', 'fr', 'de', 'ja', 'zh'].includes(part)) {
if (index === 0 && SUPPORTED_LANGUAGES.has(part)) {
currentPath = `/${part}`
return
}
@@ -56,12 +106,12 @@ export default async function Page(props: { params: Promise<{ slug?: string[]; l
if (index === urlParts.length - 1) {
breadcrumbs.push({
name: data.title,
url: `${baseUrl}${page.url}`,
url: `${BASE_URL}${page.url}`,
})
} else {
breadcrumbs.push({
name: name,
url: `${baseUrl}${currentPath}`,
url: `${BASE_URL}${currentPath}`,
})
}
})
@@ -73,7 +123,6 @@ export default async function Page(props: { params: Promise<{ slug?: string[]; l
const CustomFooter = () => (
<div className='mt-12'>
{/* Navigation links */}
<div className='flex items-center justify-between py-8'>
{neighbours?.previous ? (
<Link
@@ -100,10 +149,8 @@ export default async function Page(props: { params: Promise<{ slug?: string[]; l
)}
</div>
{/* Divider line */}
<div className='border-border border-t' />
{/* Social icons */}
<div className='flex items-center gap-4 py-6'>
<Link
href='https://x.com/simdotai'
@@ -169,13 +216,69 @@ export default async function Page(props: { params: Promise<{ slug?: string[]; l
</div>
)
if (isOpenAPI && data.getAPIPageProps) {
const apiProps = data.getAPIPageProps()
const apiPageContent = getApiSpecContent(
data.title,
data.description,
apiProps.operations ?? []
)
return (
<>
<StructuredData
title={data.title}
description={data.description || ''}
url={`${BASE_URL}${page.url}`}
lang={lang}
breadcrumb={breadcrumbs}
/>
<DocsPage
toc={data.toc}
breadcrumb={{
enabled: false,
}}
tableOfContent={{
style: 'clerk',
enabled: false,
}}
tableOfContentPopover={{
style: 'clerk',
enabled: false,
}}
footer={{
enabled: true,
component: <CustomFooter />,
}}
>
<div className='api-page-header relative mt-6 sm:mt-0'>
<div className='absolute top-1 right-0 flex items-center gap-2'>
<div className='hidden sm:flex'>
<LLMCopyButton content={apiPageContent} />
</div>
<PageNavigationArrows previous={neighbours?.previous} next={neighbours?.next} />
</div>
<DocsTitle>{data.title}</DocsTitle>
<DocsDescription>{data.description}</DocsDescription>
</div>
<DocsBody>
<APIPage {...apiProps} />
</DocsBody>
</DocsPage>
</>
)
}
const MDX = data.body
const markdownContent = await data.getText('processed')
return (
<>
<StructuredData
title={data.title}
description={data.description || ''}
url={`${baseUrl}${page.url}`}
lang={params.lang}
url={`${BASE_URL}${page.url}`}
lang={lang}
breadcrumb={breadcrumbs}
/>
<DocsPage
@@ -252,14 +355,14 @@ export async function generateMetadata(props: {
params: Promise<{ slug?: string[]; lang: string }>
}) {
const params = await props.params
const page = source.getPage(params.slug, params.lang)
const { lang, slug } = resolveLangAndSlug(params)
const page = source.getPage(slug, lang)
if (!page) notFound()
const data = page.data as PageData
const baseUrl = 'https://docs.sim.ai'
const fullUrl = `${baseUrl}${page.url}`
const data = page.data as unknown as PageData
const fullUrl = `${BASE_URL}${page.url}`
const ogImageUrl = `${baseUrl}/api/og?title=${encodeURIComponent(data.title)}`
const ogImageUrl = `${BASE_URL}/api/og?title=${encodeURIComponent(data.title)}`
return {
title: data.title,
@@ -286,10 +389,10 @@ export async function generateMetadata(props: {
url: fullUrl,
siteName: 'Sim Documentation',
type: 'article',
locale: params.lang === 'en' ? 'en_US' : `${params.lang}_${params.lang.toUpperCase()}`,
locale: lang === 'en' ? 'en_US' : `${lang}_${lang.toUpperCase()}`,
alternateLocale: ['en', 'es', 'fr', 'de', 'ja', 'zh']
.filter((lang) => lang !== params.lang)
.map((lang) => (lang === 'en' ? 'en_US' : `${lang}_${lang.toUpperCase()}`)),
.filter((l) => l !== lang)
.map((l) => (l === 'en' ? 'en_US' : `${l}_${l.toUpperCase()}`)),
images: [
{
url: ogImageUrl,
@@ -323,13 +426,13 @@ export async function generateMetadata(props: {
alternates: {
canonical: fullUrl,
languages: {
'x-default': `${baseUrl}${page.url.replace(`/${params.lang}`, '')}`,
en: `${baseUrl}${page.url.replace(`/${params.lang}`, '')}`,
es: `${baseUrl}/es${page.url.replace(`/${params.lang}`, '')}`,
fr: `${baseUrl}/fr${page.url.replace(`/${params.lang}`, '')}`,
de: `${baseUrl}/de${page.url.replace(`/${params.lang}`, '')}`,
ja: `${baseUrl}/ja${page.url.replace(`/${params.lang}`, '')}`,
zh: `${baseUrl}/zh${page.url.replace(`/${params.lang}`, '')}`,
'x-default': `${BASE_URL}${page.url.replace(`/${lang}`, '')}`,
en: `${BASE_URL}${page.url.replace(`/${lang}`, '')}`,
es: `${BASE_URL}/es${page.url.replace(`/${lang}`, '')}`,
fr: `${BASE_URL}/fr${page.url.replace(`/${lang}`, '')}`,
de: `${BASE_URL}/de${page.url.replace(`/${lang}`, '')}`,
ja: `${BASE_URL}/ja${page.url.replace(`/${lang}`, '')}`,
zh: `${BASE_URL}/zh${page.url.replace(`/${lang}`, '')}`,
},
},
}
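The `generateMetadata` changes above center on two small pure transforms: mapping a docs language code to an Open Graph locale (`en` → `en_US`, otherwise `xx_XX`), and building hreflang alternates by stripping the current language prefix from `page.url` and re-prefixing each language. A minimal sketch, with illustrative helper names (`ogLocale`, `hreflangAlternates` are not from the source) and the language list assumed from the `alternateLocale` array in the diff:

```typescript
// Assumed language set, taken from the alternateLocale list in the diff.
const LANGS = ['en', 'es', 'fr', 'de', 'ja', 'zh'] as const

// Map a docs language code to an Open Graph locale: 'en' -> 'en_US', else 'xx_XX'.
function ogLocale(lang: string): string {
  return lang === 'en' ? 'en_US' : `${lang}_${lang.toUpperCase()}`
}

// Build hreflang alternates: strip the current lang prefix once, then
// re-prefix every non-default language. 'en' and 'x-default' share the bare URL.
function hreflangAlternates(
  baseUrl: string,
  pageUrl: string,
  lang: string
): Record<string, string> {
  const bare = pageUrl.replace(`/${lang}`, '')
  const entries: Record<string, string> = {
    'x-default': `${baseUrl}${bare}`,
    en: `${baseUrl}${bare}`,
  }
  for (const l of LANGS) {
    if (l === 'en') continue
    entries[l] = `${baseUrl}/${l}${bare}`
  }
  return entries
}
```

Note the single-replacement `replace(`/${lang}`, '')` matches the diff's behavior: it removes only the first occurrence of the language segment in the path.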


@@ -55,8 +55,11 @@ type LayoutProps = {
params: Promise<{ lang: string }>
}
const SUPPORTED_LANGUAGES: Set<string> = new Set(i18n.languages)
export default async function Layout({ children, params }: LayoutProps) {
const { lang } = await params
const { lang: rawLang } = await params
const lang = SUPPORTED_LANGUAGES.has(rawLang) ? rawLang : 'en'
const structuredData = {
'@context': 'https://schema.org',
@@ -107,6 +110,7 @@ export default async function Layout({ children, params }: LayoutProps) {
title: <SimLogoFull className='h-7 w-auto' />,
}}
sidebar={{
tabs: false,
defaultOpenLevel: 0,
collapsible: false,
footer: null,
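The layout change above guards against unsupported language segments by falling back to `'en'`. A minimal sketch of that pattern, assuming the supported set (the source builds it from `i18n.languages`; the six codes here are taken from the `alternateLocale` list elsewhere in this diff):

```typescript
// Assumed language set; the source derives this from i18n.languages.
const SUPPORTED_LANGUAGES: Set<string> = new Set(['en', 'es', 'fr', 'de', 'ja', 'zh'])

// Unknown language segments fall back to 'en' instead of rendering with
// a language the docs site has no content for.
function resolveLang(rawLang: string): string {
  return SUPPORTED_LANGUAGES.has(rawLang) ? rawLang : 'en'
}
```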


@@ -1,6 +1,7 @@
@import "tailwindcss";
@import "fumadocs-ui/css/neutral.css";
@import "fumadocs-ui/css/preset.css";
@import "fumadocs-openapi/css/preset.css";
/* Prevent overscroll bounce effect on the page */
html,
@@ -8,18 +9,12 @@ body {
overscroll-behavior: none;
}
/* Reserve scrollbar space to prevent layout jitter between pages */
html {
scrollbar-gutter: stable;
}
@theme {
--color-fd-primary: #33c482; /* Green from Sim logo */
--font-geist-sans: var(--font-geist-sans);
--font-geist-mono: var(--font-geist-mono);
}
/* Ensure primary color is set in both light and dark modes */
:root {
--color-fd-primary: #33c482;
}
.dark {
--color-fd-primary: #33c482;
}
@@ -34,12 +29,6 @@ body {
"Liberation Mono", "Courier New", monospace;
}
/* Target any potential border classes */
* {
--fd-border-sidebar: transparent !important;
}
/* Override any CSS custom properties for borders */
:root {
--fd-border: transparent !important;
--fd-border-sidebar: transparent !important;
@@ -86,7 +75,6 @@ body {
[data-sidebar-container],
#nd-sidebar {
background: transparent !important;
background-color: transparent !important;
border: none !important;
--color-fd-muted: transparent !important;
--color-fd-card: transparent !important;
@@ -96,9 +84,7 @@ body {
aside[data-sidebar],
aside#nd-sidebar {
background: transparent !important;
background-color: transparent !important;
border: none !important;
border-right: none !important;
}
/* Fumadocs v16: Add sidebar placeholder styling for grid area */
@@ -157,7 +143,6 @@ aside#nd-sidebar {
#nd-sidebar > div {
padding: 0.5rem 12px 12px;
background: transparent !important;
background-color: transparent !important;
}
/* Override sidebar item styling to match Raindrop */
@@ -434,10 +419,6 @@ aside[data-sidebar],
#nd-sidebar,
#nd-sidebar * {
border: none !important;
border-right: none !important;
border-left: none !important;
border-top: none !important;
border-bottom: none !important;
}
/* Override fumadocs background colors for sidebar */
@@ -447,7 +428,6 @@ aside[data-sidebar],
--color-fd-muted: transparent !important;
--color-fd-secondary: transparent !important;
background: transparent !important;
background-color: transparent !important;
}
/* Force normal text flow in sidebar */
@@ -564,16 +544,682 @@ main[data-main] {
padding-top: 1.5rem !important;
}
/* Override Fumadocs default content padding */
article[data-content],
div[data-content] {
padding-top: 1.5rem !important;
}
/* Remove any unwanted borders/outlines from video elements */
/* Remove any unwanted outlines from video elements */
video {
outline: none !important;
border-style: solid !important;
}
/* API Reference Pages — Mintlify-style overrides */
/* OpenAPI pages: span main + TOC grid columns for wide two-column layout.
The grid has columns: spacer | sidebar | main | toc | spacer.
By spanning columns 3-4, the article fills both main and toc areas,
while the grid structure stays identical to non-OpenAPI pages (no jitter). */
#nd-page:has(.api-page-header) {
grid-column: 3 / span 2 !important;
max-width: 1400px !important;
}
/* Hide the empty TOC aside on OpenAPI pages so it doesn't overlay content */
#nd-docs-layout:has(#nd-page .api-page-header) #nd-toc {
display: none;
}
/* Hide the default "Response Body" heading rendered by fumadocs-openapi */
.response-section-wrapper > .response-section-content > h2,
.response-section-wrapper > .response-section-content > h3 {
display: none !important;
}
/* Hide default accordion triggers (status code rows) — we show our own dropdown */
.response-section-wrapper [data-orientation="vertical"] > [data-state] > h3 {
display: none !important;
}
/* Ensure API reference pages use the same font as the rest of the docs */
#nd-page:has(.api-page-header),
#nd-page:has(.api-page-header) h2,
#nd-page:has(.api-page-header) h3,
#nd-page:has(.api-page-header) h4,
#nd-page:has(.api-page-header) p,
#nd-page:has(.api-page-header) span,
#nd-page:has(.api-page-header) div,
#nd-page:has(.api-page-header) label,
#nd-page:has(.api-page-header) button {
font-family: var(--font-geist-sans), ui-sans-serif, system-ui, -apple-system, BlinkMacSystemFont,
"Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif;
}
/* Method badge pills in page content — colored background pills */
#nd-page span.font-mono.font-medium[class*="text-green"] {
background-color: rgb(220 252 231 / 0.6);
padding: 0.125rem 0.5rem;
border-radius: 0.375rem;
font-size: 0.75rem;
}
html.dark #nd-page span.font-mono.font-medium[class*="text-green"] {
background-color: rgb(34 197 94 / 0.15);
}
#nd-page span.font-mono.font-medium[class*="text-blue"] {
background-color: rgb(219 234 254 / 0.6);
padding: 0.125rem 0.5rem;
border-radius: 0.375rem;
font-size: 0.75rem;
}
html.dark #nd-page span.font-mono.font-medium[class*="text-blue"] {
background-color: rgb(59 130 246 / 0.15);
}
#nd-page span.font-mono.font-medium[class*="text-orange"] {
background-color: rgb(255 237 213 / 0.6);
padding: 0.125rem 0.5rem;
border-radius: 0.375rem;
font-size: 0.75rem;
}
html.dark #nd-page span.font-mono.font-medium[class*="text-orange"] {
background-color: rgb(249 115 22 / 0.15);
}
#nd-page span.font-mono.font-medium[class*="text-red"] {
background-color: rgb(254 226 226 / 0.6);
padding: 0.125rem 0.5rem;
border-radius: 0.375rem;
font-size: 0.75rem;
}
html.dark #nd-page span.font-mono.font-medium[class*="text-red"] {
background-color: rgb(239 68 68 / 0.15);
}
/* Sidebar links with method badges — flex for vertical centering */
#nd-sidebar a:has(span.font-mono.font-medium) {
display: flex !important;
align-items: center !important;
gap: 6px;
}
/* Sidebar method badges — ensure proper inline flex display */
#nd-sidebar a span.font-mono.font-medium {
display: inline-flex;
align-items: center;
justify-content: center;
min-width: 2.25rem;
font-size: 10px !important;
line-height: 1 !important;
padding: 2.5px 4px;
border-radius: 3px;
flex-shrink: 0;
}
/* Sidebar GET badges */
#nd-sidebar a span.font-mono.font-medium[class*="text-green"] {
background-color: rgb(220 252 231 / 0.6);
}
html.dark #nd-sidebar a span.font-mono.font-medium[class*="text-green"] {
background-color: rgb(34 197 94 / 0.15);
}
/* Sidebar POST badges */
#nd-sidebar a span.font-mono.font-medium[class*="text-blue"] {
background-color: rgb(219 234 254 / 0.6);
}
html.dark #nd-sidebar a span.font-mono.font-medium[class*="text-blue"] {
background-color: rgb(59 130 246 / 0.15);
}
/* Sidebar PUT badges */
#nd-sidebar a span.font-mono.font-medium[class*="text-orange"] {
background-color: rgb(255 237 213 / 0.6);
}
html.dark #nd-sidebar a span.font-mono.font-medium[class*="text-orange"] {
background-color: rgb(249 115 22 / 0.15);
}
/* Sidebar DELETE badges */
#nd-sidebar a span.font-mono.font-medium[class*="text-red"] {
background-color: rgb(254 226 226 / 0.6);
}
html.dark #nd-sidebar a span.font-mono.font-medium[class*="text-red"] {
background-color: rgb(239 68 68 / 0.15);
}
/* Code block containers — match regular docs styling */
#nd-page:has(.api-page-header) figure.shiki {
border-radius: 0.75rem !important;
background-color: var(--color-fd-card) !important;
}
/* Hide "Filter Properties" search bar everywhere — main page and popovers */
input[placeholder="Filter Properties"] {
display: none !important;
}
div:has(> input[placeholder="Filter Properties"]) {
display: none !important;
}
/* Remove top border on first visible property after hidden Filter Properties */
div:has(> input[placeholder="Filter Properties"]) + .text-sm.border-t {
border-top: none !important;
}
/* Hide "TypeScript Definitions" copy panel on API pages */
#nd-page:has(.api-page-header) div.not-prose.rounded-xl.border.p-3.mb-4 {
display: none !important;
}
#nd-page:has(.api-page-header) div.not-prose.rounded-xl.border.p-3:has(> div > p.font-medium) {
display: none !important;
}
/* Hide info tags (Format, Default, etc.) everywhere — main page and popovers */
div.flex.flex-row.gap-2.flex-wrap.not-prose:has(> div.bg-fd-secondary) {
display: none !important;
}
div.flex.flex-row.items-start.bg-fd-secondary.border.rounded-lg.text-xs {
display: none !important;
}
/* Method+path bar — cleaner, lighter styling like Gumloop.
Override bg-fd-card CSS variable directly for reliability. */
#nd-page:has(.api-page-header) div.flex.flex-row.items-center.rounded-xl.border.not-prose {
--color-fd-card: rgb(249 250 251) !important;
background-color: rgb(249 250 251) !important;
border-color: rgb(229 231 235) !important;
}
html.dark
#nd-page:has(.api-page-header)
div.flex.flex-row.items-center.rounded-xl.border.not-prose {
--color-fd-card: rgb(24 24 27) !important;
background-color: rgb(24 24 27) !important;
border-color: rgb(63 63 70) !important;
}
/* Method badge inside path bar — cleaner sans-serif, softer colors */
#nd-page:has(.api-page-header)
div.flex.flex-row.items-center.rounded-xl.border.not-prose
span.font-mono.font-medium {
font-family: var(--font-geist-sans), ui-sans-serif, system-ui, sans-serif !important;
font-weight: 600 !important;
font-size: 0.6875rem !important;
letter-spacing: 0.025em;
text-transform: uppercase;
}
/* POST — softer blue */
#nd-page:has(.api-page-header)
div.flex.flex-row.items-center.rounded-xl.border.not-prose
span.font-mono.font-medium[class*="text-blue"] {
color: rgb(37 99 235) !important;
background-color: rgb(219 234 254 / 0.7) !important;
}
html.dark
#nd-page:has(.api-page-header)
div.flex.flex-row.items-center.rounded-xl.border.not-prose
span.font-mono.font-medium[class*="text-blue"] {
color: rgb(96 165 250) !important;
background-color: rgb(59 130 246 / 0.15) !important;
}
/* GET — softer green */
#nd-page:has(.api-page-header)
div.flex.flex-row.items-center.rounded-xl.border.not-prose
span.font-mono.font-medium[class*="text-green"] {
color: rgb(22 163 74) !important;
background-color: rgb(220 252 231 / 0.7) !important;
}
html.dark
#nd-page:has(.api-page-header)
div.flex.flex-row.items-center.rounded-xl.border.not-prose
span.font-mono.font-medium[class*="text-green"] {
color: rgb(74 222 128) !important;
background-color: rgb(34 197 94 / 0.15) !important;
}
/* Path text inside method+path bar — monospace, bright like Gumloop */
#nd-page:has(.api-page-header) div.flex.flex-row.items-center.rounded-xl.border.not-prose code {
color: rgb(55 65 81) !important;
background: none !important;
border: none !important;
padding: 0 !important;
font-size: 0.8125rem !important;
}
html.dark
#nd-page:has(.api-page-header)
div.flex.flex-row.items-center.rounded-xl.border.not-prose
code {
color: rgb(229 231 235) !important;
}
/* Inline code in API pages — neutral color instead of red.
Exclude code inside the method+path bar (handled above). */
#nd-page:has(.api-page-header) .prose :not(pre) > code {
color: rgb(79 70 229) !important;
}
html.dark #nd-page:has(.api-page-header) .prose :not(pre) > code {
color: rgb(165 180 252) !important;
}
/* Response Section — custom dropdown-based rendering (Mintlify style) */
/* Hide divider lines between accordion items */
.response-section-wrapper [data-orientation="vertical"].divide-y > * {
border-top-width: 0 !important;
border-bottom-width: 0 !important;
}
.response-section-wrapper [data-orientation="vertical"].divide-y {
border-top: none !important;
}
/* Remove content type labels inside accordion items (we show one in the header) */
.response-section-wrapper [data-orientation="vertical"] p.not-prose:has(code.text-xs) {
display: none !important;
}
/* Hide the top-level response description (e.g. "Execution was successfully cancelled.")
but NOT field descriptions inside Schema which also use prose-no-margin.
The response description is a direct child of AccordionContent (role=region) with mb-2. */
.response-section-wrapper [data-orientation="vertical"] [role="region"] > .prose-no-margin.mb-2,
.response-section-wrapper
[data-orientation="vertical"]
[role="region"]
> div
> .prose-no-margin.mb-2 {
display: none !important;
}
/* Remove left padding on accordion content so it aligns with Path Parameters */
.response-section-wrapper [data-orientation="vertical"] [role="region"] {
padding-inline-start: 0 !important;
}
/* Response section header */
.response-section-header {
display: flex;
align-items: center;
gap: 1rem;
margin-top: 1.75rem;
margin-bottom: 0.5rem;
}
.response-section-title {
font-size: 1.5rem;
font-weight: 600;
margin: 0;
color: var(--color-fd-foreground);
font-family: var(--font-geist-sans), ui-sans-serif, system-ui, -apple-system, sans-serif;
}
.response-section-meta {
display: flex;
align-items: center;
gap: 0.75rem;
margin-left: auto;
}
/* Status code dropdown */
.response-section-dropdown-wrapper {
position: relative;
}
.response-section-dropdown-trigger {
display: flex;
align-items: center;
gap: 0.25rem;
padding: 0.125rem 0.25rem;
font-size: 0.875rem;
font-weight: 500;
color: var(--color-fd-muted-foreground);
background: none;
border: none;
cursor: pointer;
border-radius: 0.25rem;
transition: color 0.15s;
font-family: var(--font-geist-sans), ui-sans-serif, system-ui, sans-serif;
}
.response-section-dropdown-trigger:hover {
color: var(--color-fd-foreground);
}
.response-section-chevron {
width: 0.75rem;
height: 0.75rem;
transition: transform 0.15s;
}
.response-section-chevron-open {
transform: rotate(180deg);
}
.response-section-dropdown-menu {
position: absolute;
top: calc(100% + 0.25rem);
left: 0;
z-index: 50;
min-width: 5rem;
background-color: white;
border: 1px solid rgb(229 231 235);
border-radius: 0.5rem;
box-shadow:
0 4px 6px -1px rgb(0 0 0 / 0.1),
0 2px 4px -2px rgb(0 0 0 / 0.1);
padding: 0.25rem;
overflow: hidden;
}
html.dark .response-section-dropdown-menu {
background-color: rgb(24 24 27);
border-color: rgb(63 63 70);
box-shadow: 0 4px 6px -1px rgb(0 0 0 / 0.3);
}
.response-section-dropdown-item {
display: flex;
align-items: center;
justify-content: space-between;
width: 100%;
padding: 0.375rem 0.5rem;
font-size: 0.875rem;
color: var(--color-fd-muted-foreground);
background: none;
border: none;
cursor: pointer;
border-radius: 0.25rem;
transition:
background-color 0.1s,
color 0.1s;
font-family: var(--font-geist-sans), ui-sans-serif, system-ui, sans-serif;
}
.response-section-dropdown-item:hover {
background-color: rgb(243 244 246);
color: var(--color-fd-foreground);
}
html.dark .response-section-dropdown-item:hover {
background-color: rgb(39 39 42);
}
.response-section-dropdown-item-selected {
color: var(--color-fd-foreground);
}
.response-section-check {
width: 0.875rem;
height: 0.875rem;
}
.response-section-content-type {
font-size: 0.875rem;
color: var(--color-fd-muted-foreground);
font-family: var(--font-geist-sans), ui-sans-serif, system-ui, sans-serif;
}
/* Response schema container — remove border to match Path Parameters style */
.response-section-wrapper [data-orientation="vertical"] .border.px-3.py-2.rounded-lg {
border: none !important;
padding: 0 !important;
border-radius: 0 !important;
background-color: transparent;
}
/* Property row — reorder: name (1) → type badge (2) → required badge (3) */
#nd-page:has(.api-page-header) .flex.flex-wrap.items-center.gap-3.not-prose {
display: flex;
flex-wrap: wrap;
align-items: center;
}
/* Name span — order 1 */
#nd-page:has(.api-page-header)
.flex.flex-wrap.items-center.gap-3.not-prose
> span.font-medium.font-mono.text-fd-primary {
order: 1;
}
/* Type badge — order 2, grey pill like Mintlify */
#nd-page:has(.api-page-header)
.flex.flex-wrap.items-center.gap-3.not-prose
> span.text-sm.font-mono.text-fd-muted-foreground {
order: 2;
background-color: rgb(240 240 243);
color: rgb(100 100 110);
padding: 0.125rem 0.5rem;
border-radius: 0.375rem;
font-size: 0.6875rem;
line-height: 1.25rem;
font-weight: 500;
font-family: var(--font-geist-sans), ui-sans-serif, system-ui, sans-serif;
}
html.dark
#nd-page:has(.api-page-header)
.flex.flex-wrap.items-center.gap-3.not-prose
> span.text-sm.font-mono.text-fd-muted-foreground {
background-color: rgb(39 39 42);
color: rgb(212 212 216);
}
/* Hide the "*" inside the name span — we'll add "required" as a ::after on the flex row */
#nd-page:has(.api-page-header) span.font-medium.font-mono.text-fd-primary > span.text-red-400 {
display: none;
}
/* Required badge — order 3, light red pill */
#nd-page:has(.api-page-header)
.flex.flex-wrap.items-center.gap-3.not-prose:has(span.text-red-400)::after {
content: "required";
order: 3;
display: inline-flex;
align-items: center;
background-color: rgb(254 235 235);
color: rgb(220 38 38);
padding: 0.125rem 0.5rem;
border-radius: 0.375rem;
font-size: 0.6875rem;
line-height: 1.25rem;
font-weight: 500;
font-family: var(--font-geist-sans), ui-sans-serif, system-ui, sans-serif;
}
html.dark
#nd-page:has(.api-page-header)
.flex.flex-wrap.items-center.gap-3.not-prose:has(span.text-red-400)::after {
background-color: rgb(127 29 29 / 0.2);
color: rgb(252 165 165);
}
/* Optional "?" indicator — hide it */
#nd-page:has(.api-page-header)
span.font-medium.font-mono.text-fd-primary
> span.text-fd-muted-foreground {
display: none;
}
/* Hide the auth scheme type label (e.g. "apiKey") next to Authorization heading */
#nd-page:has(.api-page-header) .flex.items-start.justify-between.gap-2 > div.not-prose {
display: none !important;
}
/* Auth property — replace "<token>" with "string" badge, add "header" and "required" badges.
Auth properties use my-4 (vs py-4 for regular properties). */
/* Auth property flex row — name: order 1, type: order 2, ::before "header": order 3, ::after "required": order 4 */
#nd-page:has(.api-page-header)
div.my-4
> .flex.flex-wrap.items-center.gap-3.not-prose
> span.font-medium.font-mono.text-fd-primary {
order: 1;
}
#nd-page:has(.api-page-header)
div.my-4
> .flex.flex-wrap.items-center.gap-3.not-prose
> span.text-sm.font-mono.text-fd-muted-foreground {
order: 2;
font-size: 0;
padding: 0 !important;
background: none !important;
line-height: 0;
}
#nd-page:has(.api-page-header)
div.my-4
> .flex.flex-wrap.items-center.gap-3.not-prose
> span.text-sm.font-mono.text-fd-muted-foreground::after {
content: "string";
font-size: 0.6875rem;
line-height: 1.25rem;
font-weight: 500;
font-family: var(--font-geist-sans), ui-sans-serif, system-ui, sans-serif;
background-color: rgb(240 240 243);
color: rgb(100 100 110);
padding: 0.125rem 0.5rem;
border-radius: 0.375rem;
display: inline-flex;
align-items: center;
}
html.dark
#nd-page:has(.api-page-header)
div.my-4
> .flex.flex-wrap.items-center.gap-3.not-prose
> span.text-sm.font-mono.text-fd-muted-foreground::after {
background-color: rgb(39 39 42);
color: rgb(212 212 216);
}
/* "header" badge via ::before on the auth flex row */
#nd-page:has(.api-page-header) div.my-4 > .flex.flex-wrap.items-center.gap-3.not-prose::before {
content: "header";
order: 3;
display: inline-flex;
align-items: center;
background-color: rgb(240 240 243);
color: rgb(100 100 110);
padding: 0.125rem 0.5rem;
border-radius: 0.375rem;
font-size: 0.6875rem;
line-height: 1.25rem;
font-weight: 500;
font-family: var(--font-geist-sans), ui-sans-serif, system-ui, sans-serif;
}
html.dark
#nd-page:has(.api-page-header)
div.my-4
> .flex.flex-wrap.items-center.gap-3.not-prose::before {
background-color: rgb(39 39 42);
color: rgb(212 212 216);
}
/* "required" badge via ::after on the auth flex row — light red pill */
#nd-page:has(.api-page-header) div.my-4 > .flex.flex-wrap.items-center.gap-3.not-prose::after {
content: "required";
order: 4;
display: inline-flex;
align-items: center;
background-color: rgb(254 235 235);
color: rgb(220 38 38);
padding: 0.125rem 0.5rem;
border-radius: 0.375rem;
font-size: 0.6875rem;
line-height: 1.25rem;
font-weight: 500;
font-family: var(--font-geist-sans), ui-sans-serif, system-ui, sans-serif;
}
html.dark
#nd-page:has(.api-page-header)
div.my-4
> .flex.flex-wrap.items-center.gap-3.not-prose::after {
background-color: rgb(127 29 29 / 0.2);
color: rgb(252 165 165);
}
/* Hide "In: header" text below auth property — redundant with the header badge */
#nd-page:has(.api-page-header) div.my-4 .prose-no-margin p:has(> code) {
display: none !important;
}
/* Section dividers — bottom border after Authorization and Body sections. */
.api-section-divider {
padding-bottom: 0.5rem;
border-bottom: 1px solid rgb(229 231 235 / 0.6);
}
html.dark .api-section-divider {
border-bottom-color: rgb(255 255 255 / 0.07);
}
/* Property rows — breathing room like Mintlify.
Regular properties use border-t py-4; auth properties use border-t my-4. */
#nd-page:has(.api-page-header) .text-sm.border-t.py-4 {
padding-top: 1.25rem !important;
padding-bottom: 1.25rem !important;
}
#nd-page:has(.api-page-header) .text-sm.border-t.my-4 {
margin-top: 1.25rem !important;
margin-bottom: 1.25rem !important;
padding-top: 1.25rem;
}
/* Divider lines between fields — very subtle like Mintlify */
#nd-page:has(.api-page-header) .text-sm.border-t {
border-color: rgb(229 231 235 / 0.6);
}
html.dark #nd-page:has(.api-page-header) .text-sm.border-t {
border-color: rgb(255 255 255 / 0.07);
}
/* Body/Callback section "application/json" label — remove inline code styling */
#nd-page:has(.api-page-header) .flex.gap-2.items-center.justify-between p.not-prose code.text-xs,
#nd-page:has(.api-page-header) .flex.justify-between.gap-2.items-end p.not-prose code.text-xs {
background: none !important;
border: none !important;
padding: 0 !important;
color: var(--color-fd-muted-foreground) !important;
font-size: 0.875rem !important;
font-family: var(--font-geist-sans), ui-sans-serif, system-ui, sans-serif !important;
}
/* Object/array type triggers in property rows — order 2 + badge chip styling */
#nd-page:has(.api-page-header) .flex.flex-wrap.items-center.gap-3.not-prose > button,
#nd-page:has(.api-page-header) .flex.flex-wrap.items-center.gap-3.not-prose > span:has(> button) {
order: 2;
background-color: rgb(240 240 243);
color: rgb(100 100 110);
padding: 0.125rem 0.5rem;
border-radius: 0.375rem;
font-size: 0.6875rem;
line-height: 1.25rem;
font-weight: 500;
font-family: var(--font-geist-sans), ui-sans-serif, system-ui, sans-serif;
}
html.dark #nd-page:has(.api-page-header) .flex.flex-wrap.items-center.gap-3.not-prose > button,
html.dark
#nd-page:has(.api-page-header)
.flex.flex-wrap.items-center.gap-3.not-prose
> span:has(> button) {
background-color: rgb(39 39 42);
color: rgb(212 212 216);
}
/* Section headings (Authorization, Path Parameters, etc.) — consistent top spacing */
#nd-page:has(.api-page-header) .min-w-0.flex-1 h2 {
margin-top: 1.75rem !important;
margin-bottom: 0.25rem !important;
}
/* Code examples in right column — wrap long lines instead of horizontal scroll */
#nd-page:has(.api-page-header) pre {
white-space: pre-wrap !important;
word-break: break-all !important;
}
#nd-page:has(.api-page-header) pre code {
width: 100% !important;
word-break: break-all !important;
overflow-wrap: break-word !important;
}
/* API page header — constrain title/copy-page to left content column, not full width.
Only applies on OpenAPI pages (which have the two-column layout). */
@media (min-width: 1280px) {
.api-page-header {
max-width: calc(100% - 400px - 1.5rem);
}
}
/* Footer navigation — constrain to left content column on OpenAPI pages only.
Target pages that contain the two-column layout via :has() selector. */
#nd-page:has(.api-page-header) > div:last-child {
max-width: calc(100% - 400px - 1.5rem);
}
@media (max-width: 1024px) {
#nd-page:has(.api-page-header) > div:last-child {
max-width: 100%;
}
}
/* Tailwind v4 content sources */


@@ -1,21 +0,0 @@
import type { BaseLayoutProps } from 'fumadocs-ui/layouts/shared'
/**
* Shared layout configurations
*
* you can customise layouts individually from:
* Home Layout: app/(home)/layout.tsx
* Docs Layout: app/docs/layout.tsx
*/
export const baseOptions: BaseLayoutProps = {
nav: {
title: (
<>
<svg width='24' height='24' xmlns='http://www.w3.org/2000/svg' aria-label='Logo'>
<circle cx={12} cy={12} r={12} fill='currentColor' />
</svg>
My App
</>
),
},
}


@@ -52,15 +52,26 @@ export function SidebarItem({ item }: { item: Item }) {
)
}
function isApiReferenceFolder(node: Folder): boolean {
if (node.index?.url.includes('/api-reference/')) return true
for (const child of node.children) {
if (child.type === 'page' && child.url.includes('/api-reference/')) return true
if (child.type === 'folder' && isApiReferenceFolder(child)) return true
}
return false
}
export function SidebarFolder({ item, children }: { item: Folder; children: ReactNode }) {
const pathname = usePathname()
const hasActiveChild = checkHasActiveChild(item, pathname)
const isApiRef = isApiReferenceFolder(item)
const isOnApiRefPage = stripLangPrefix(pathname).startsWith('/api-reference')
const hasChildren = item.children.length > 0
const [open, setOpen] = useState(hasActiveChild)
const [open, setOpen] = useState(hasActiveChild || (isApiRef && isOnApiRefPage))
useEffect(() => {
setOpen(hasActiveChild)
}, [hasActiveChild])
setOpen(hasActiveChild || (isApiRef && isOnApiRefPage))
}, [hasActiveChild, isApiRef, isOnApiRefPage])
const active = item.index ? isActive(item.index.url, pathname, false) : false
@@ -157,16 +168,18 @@ export function SidebarFolder({ item, children }: { item: Folder; children: Reac
{hasChildren && (
<div
className={cn(
'overflow-hidden transition-all duration-200 ease-in-out',
open ? 'max-h-[10000px] opacity-100' : 'max-h-0 opacity-0'
'grid transition-[grid-template-rows,opacity] duration-200 ease-in-out',
open ? 'grid-rows-[1fr] opacity-100' : 'grid-rows-[0fr] opacity-0'
)}
>
{/* Mobile: simple indent */}
<div className='ml-4 flex flex-col gap-0.5 lg:hidden'>{children}</div>
{/* Desktop: styled with border */}
<ul className='mt-0.5 ml-2 hidden space-y-[0.0625rem] border-gray-200/60 border-l pl-2.5 lg:block dark:border-gray-700/60'>
{children}
</ul>
<div className='overflow-hidden'>
{/* Mobile: simple indent */}
<div className='ml-4 flex flex-col gap-0.5 lg:hidden'>{children}</div>
{/* Desktop: styled with border */}
<ul className='mt-0.5 ml-2 hidden space-y-[0.0625rem] border-gray-200/60 border-l pl-2.5 lg:block dark:border-gray-700/60'>
{children}
</ul>
</div>
</div>
)}
</div>
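The `isApiReferenceFolder` helper added near the top of this file walks the sidebar tree recursively, matching either the folder's index page or any descendant page whose URL contains `/api-reference/`. A self-contained sketch, with minimal local types standing in for the fumadocs page-tree types:

```typescript
// Minimal stand-in types for the fumadocs page tree used in the diff.
type Page = { type: 'page'; url: string }
type Folder = { type: 'folder'; index?: Page; children: Array<Page | Folder> }

// True if the folder's index page, any child page, or any nested folder
// lives under /api-reference/.
function isApiReferenceFolder(node: Folder): boolean {
  if (node.index?.url.includes('/api-reference/')) return true
  for (const child of node.children) {
    if (child.type === 'page' && child.url.includes('/api-reference/')) return true
    if (child.type === 'folder' && isApiReferenceFolder(child)) return true
  }
  return false
}
```

This is what lets the sidebar default API-reference folders open whenever the visitor is anywhere under `/api-reference`, even when no direct child is the active page.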


@@ -1,19 +1,16 @@
'use client'
import { useState } from 'react'
import { ArrowRight, ChevronRight } from 'lucide-react'
import Link from 'next/link'
export function TOCFooter() {
const [isHovered, setIsHovered] = useState(false)
return (
<div className='sticky bottom-0 mt-6'>
<div className='flex flex-col gap-2 rounded-lg border border-border bg-secondary p-6 text-sm'>
<div className='text-balance font-semibold text-base leading-tight'>
Start building today
</div>
<div className='text-muted-foreground'>Trusted by over 60,000 builders.</div>
<div className='text-muted-foreground'>Trusted by over 70,000 builders.</div>
<div className='text-muted-foreground'>
Build Agentic workflows visually on a drag-and-drop canvas or with natural language.
</div>
@@ -21,18 +18,19 @@ export function TOCFooter() {
href='https://sim.ai/signup'
target='_blank'
rel='noopener noreferrer'
onMouseEnter={() => setIsHovered(true)}
onMouseLeave={() => setIsHovered(false)}
className='group mt-2 inline-flex h-8 w-fit items-center justify-center gap-1 whitespace-nowrap rounded-[10px] border border-[#2AAD6C] bg-gradient-to-b from-[#3ED990] to-[#2AAD6C] px-3 pr-[10px] pl-[12px] font-medium text-sm text-white shadow-[inset_0_2px_4px_0_#5EE8A8] outline-none transition-all hover:shadow-lg focus-visible:border-ring focus-visible:ring-[3px] focus-visible:ring-ring/50'
aria-label='Get started with Sim - Sign up for free'
>
<span>Get started</span>
<span className='inline-flex transition-transform duration-200 group-hover:translate-x-0.5'>
{isHovered ? (
<ArrowRight className='h-4 w-4' aria-hidden='true' />
) : (
<ChevronRight className='h-4 w-4' aria-hidden='true' />
)}
<span className='relative inline-flex h-4 w-4 transition-transform duration-200 group-hover:translate-x-0.5'>
<ChevronRight
className='absolute inset-0 h-4 w-4 transition-opacity duration-200 group-hover:opacity-0'
aria-hidden='true'
/>
<ArrowRight
className='absolute inset-0 h-4 w-4 opacity-0 transition-opacity duration-200 group-hover:opacity-100'
aria-hidden='true'
/>
</span>
</Link>
</div>


@@ -526,6 +526,17 @@ export function SlackMonoIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function GammaIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='-14 0 192 192' fill='none' xmlns='http://www.w3.org/2000/svg'>
<path
fill='currentColor'
d='M47.2,14.4c-14.4,8.2-26,19.6-34.4,33.6C4.3,62.1,0,77.7,0,94.3s4.3,32.2,12.7,46.3c8.5,14.1,20,25.4,34.4,33.6,14.4,8.2,30.4,12.4,47.7,12.4h69.8v-112.5h-81v39.1h38.2v31.8h-25.6c-9.1,0-17.6-2.3-25.2-6.9-7.6-4.6-13.8-10.8-18.3-18.4-4.5-7.7-6.7-16.2-6.7-25.3s2.3-17.7,6.7-25.3c4.5-7.7,10.6-13.9,18.3-18.4,7.6-4.6,16.1-6.9,25.2-6.9h68.5V2h-69.8c-17.3,0-33.3,4.2-47.7,12.4h0Z'
/>
</svg>
)
}
export function GithubIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} width='26' height='26' viewBox='0 0 26 26' xmlns='http://www.w3.org/2000/svg'>
@@ -1198,6 +1209,17 @@ export function AlgoliaIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function AmplitudeIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 49 49'>
<path
fill='#FFFFFF'
d='M23.4,15.3c0.6,1.8,1.2,4.1,1.9,6.7c-2.6,0-5.3-0.1-7.8-0.1h-1.3c1.5-5.7,3.2-10.1,4.6-11.1 c0.1-0.1,0.2-0.1,0.4-0.1c0.2,0,0.3,0.1,0.5,0.3C21.9,11.5,22.5,12.7,23.4,15.3z M49,24.5C49,38,38,49,24.5,49S0,38,0,24.5 S11,0,24.5,0S49,11,49,24.5z M42.7,23.9c0-0.6-0.4-1.2-1-1.3l0,0l0,0l0,0c-0.1,0-0.1,0-0.2,0h-0.2c-4.1-0.3-8.4-0.4-12.4-0.5l0,0 C27,14.8,24.5,7.4,21.3,7.4c-3,0-5.8,4.9-8.2,14.5c-1.7,0-3.2,0-4.6-0.1c-0.1,0-0.2,0-0.2,0c-0.3,0-0.5,0-0.5,0 c-0.8,0.1-1.4,0.9-1.4,1.7c0,0.8,0.6,1.6,1.5,1.7l0,0h4.6c-0.4,1.9-0.8,3.8-1.1,5.6l-0.1,0.8l0,0c0,0.6,0.5,1.1,1.1,1.1 c0.4,0,0.8-0.2,1-0.5l0,0l2.2-7.1h10.7c0.8,3.1,1.7,6.3,2.8,9.3c0.6,1.6,2,5.4,4.4,5.4l0,0c3.6,0,5-5.8,5.9-9.6 c0.2-0.8,0.4-1.5,0.5-2.1l0.1-0.2l0,0c0-0.1,0-0.2,0-0.3c-0.1-0.2-0.2-0.3-0.4-0.4c-0.3-0.1-0.5,0.1-0.6,0.4l0,0l-0.1,0.2 c-0.3,0.8-0.6,1.6-0.8,2.3v0.1c-1.6,4.4-2.3,6.4-3.7,6.4l0,0l0,0l0,0c-1.8,0-3.5-7.3-4.1-10.1c-0.1-0.5-0.2-0.9-0.3-1.3h11.7 c0.2,0,0.4-0.1,0.6-0.1l0,0c0,0,0,0,0.1,0c0,0,0,0,0.1,0l0,0c0,0,0.1,0,0.1-0.1l0,0C42.5,24.6,42.7,24.3,42.7,23.9z'
/>
</svg>
)
}
export function GoogleBooksIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 478.633 540.068'>
@@ -1254,6 +1276,20 @@ export function GoogleSlidesIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function GoogleContactsIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 500 500'>
<path fill='#86a9ff' d='M199 244c-89 0-161 71-161 160v67c0 16 13 29 29 29h77l77-256z' />
<path fill='#578cff' d='M462 349c0-58-48-105-106-105h-77v256h77c58 0 106-47 106-106' />
<path
fill='#0057cc'
d='M115 349c0-58 48-105 106-105h58c58 0 106 47 106 105v45c0 59-48 106-106 106H144c-16 0-29-13-29-29z'
/>
<circle cx='250' cy='99.4' r='99.4' fill='#0057cc' />
</svg>
)
}
export function GoogleCalendarIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg
@@ -1913,13 +1949,11 @@ export function ElevenLabsIcon(props: SVGProps<SVGSVGElement>) {
export function LinkupIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 154 107' fill='none'>
<path
d='M150.677 72.7113C146.612 70.2493 137.909 69.542 124.794 70.6076C128.992 57.6776 133.757 35.3911 121.323 25.1527C115.886 20.6743 107.471 19.0437 97.6162 20.5594C94.6758 21.0142 91.5752 21.7445 88.3878 22.732C78.8667 8.28165 66.2954 0 53.8613 0C39.4288 0 26.1304 9.3381 16.4081 26.2872C5.67515 45.014 0 71.9626 0 104.23V104.533L3.60356 106.94L3.88251 106.825C30.5754 95.5628 67.5759 85.0718 100.593 79.4037C101.604 87.644 102.116 95.9945 102.116 104.235V104.52L105.491 107L105.761 106.913C106.255 106.752 155.159 90.8822 153.979 77.5894C153.856 76.2022 153.183 74.2271 150.677 72.7113ZM148.409 78.09C148.715 81.5442 133.236 91.0568 111.838 98.8883C115.968 92.0995 119.818 84.1715 122.777 76.3584C135.659 75.1411 144.531 75.5545 147.792 77.5296C148.377 77.8833 148.409 78.09 148.409 78.09ZM116.668 77.0106C114.084 83.3769 110.951 89.6329 107.54 95.2458C107.334 89.5135 106.913 83.8821 106.296 78.4621C109.922 77.8971 113.407 77.4102 116.668 77.0106ZM117.774 29.4979C125.379 35.7585 125.782 51.3205 118.867 71.1772C114.747 71.6319 110.284 72.2382 105.596 72.9777C103.049 55.1742 98.2839 39.966 91.4243 27.7525C94.566 26.8155 96.9669 26.3469 98.4622 26.1127C106.721 24.8404 113.581 26.0438 117.774 29.4979ZM53.8567 5.62215C65.0561 5.62215 74.8882 12.0022 83.0922 24.5923C57.7027 34.5413 30.3193 59.4092 5.78032 94.8003C7.43119 51.4813 23.0299 5.62215 53.8613 5.62215M10.1933 98.2406C40.7504 53.9341 68.2024 36.4429 86.0739 29.5852C92.4487 41.2383 97.2046 56.5522 99.8433 73.9331C70.5209 79.0316 35.6377 88.4983 10.1933 98.2406Z'
fill='#000000'
/>
</svg>
)
}
@@ -2428,6 +2462,17 @@ export function OutlookIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function PagerDutyIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 64 64' fill='none'>
<path
d='M6.704 59.217H0v-33.65c0-3.455 1.418-5.544 2.604-6.704 2.63-2.58 6.2-2.656 6.782-2.656h10.546c3.765 0 5.93 1.52 7.117 2.8 2.346 2.553 2.372 5.853 2.32 6.73v12.687c0 3.662-1.496 5.828-2.733 6.988-2.553 2.398-5.93 2.45-6.73 2.424H6.704zm13.46-18.102c.36 0 1.367-.103 1.908-.62.413-.387.62-1.083.62-2.1v-13.02c0-.36-.077-1.315-.593-1.857-.5-.516-1.444-.62-2.166-.62h-10.6c-2.63 0-2.63 1.985-2.63 2.656v15.55zM57.296 4.783H64V38.46c0 3.455-1.418 5.544-2.604 6.704-2.63 2.58-6.2 2.656-6.782 2.656H44.068c-3.765 0-5.93-1.52-7.117-2.8-2.346-2.553-2.372-5.853-2.32-6.73V25.62c0-3.662 1.496-5.828 2.733-6.988 2.553-2.398 5.93-2.45 6.73-2.424h13.202zM43.836 22.9c-.36 0-1.367.103-1.908.62-.413.387-.62 1.083-.62 2.1v13.02c0 .36.077 1.315.593 1.857.5.516 1.444.62 2.166.62h10.598c2.656-.026 2.656-2 2.656-2.682V22.9z'
fill='#06AC38'
/>
</svg>
)
}
export function MicrosoftExcelIcon(props: SVGProps<SVGSVGElement>) {
const id = useId()
const gradientId = `excel_gradient_${id}`
@@ -2962,6 +3007,19 @@ export function QdrantIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function AshbyIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 254 260' fill='none' xmlns='http://www.w3.org/2000/svg'>
<path
fillRule='evenodd'
clipRule='evenodd'
d='M76.07 250.537v9.16H.343v-9.16c19.618 0 27.465-4.381 34.527-23.498l73.764-209.09h34.92l81.219 209.09c7.847 19.515 11.77 23.498 28.642 23.498v9.16H134.363v-9.16c28.242 0 30.625-2.582 22.14-23.498l-21.58-57.35H69.399l-19.226 56.155c-5.614 18.997-4.387 24.693 25.896 24.693zm24.326-171.653l-26.681 78.459h56.5l-29.819-78.459z'
fill='currentColor'
/>
</svg>
)
}
export function ArxivIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} id='logomark' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 17.732 24.269'>
@@ -3956,6 +4014,28 @@ export function IntercomIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function LoopsIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 214 186' fill='none' xmlns='http://www.w3.org/2000/svg'>
<path
d='M122.19,0 H90.27 C40.51,0 0,39.88 0,92.95 C0,141.07 38.93,183.77 90.27,183.77 H122.19 C172.61,183.77 213.31,142.82 213.31,92.95 C213.31,43.29 173.09,0 122.19,0 Z M10.82,92.54 C10.82,50.19 45.91,11.49 91.96,11.49 C138.73,11.49 172.69,50.33 172.69,92.13 C172.69,117.76 154.06,139.09 129.02,143.31 C145.16,131.15 155.48,112.73 155.48,92.4 C155.48,59.09 127.44,28.82 92.37,28.82 C57.23,28.82 28.51,57.23 28.51,92.91 C28.51,122.63 43.61,151.08 69.99,168.21 L71.74,169.33 C35.99,161.39 10.82,130.11 10.82,92.54 Z M106.33,42.76 C128.88,50.19 143.91,68.92 143.91,92.26 C143.91,114.23 128.68,134.63 106.12,141.71 C105.44,141.96 105.17,141.96 105.17,141.96 C83.91,135.76 69.29,116.38 69.29,92.71 C69.29,69.91 83.71,50.33 106.33,42.76 Z M120.91,172.13 C76.11,172.13 40.09,137.21 40.09,93.32 C40.09,67.03 57.17,46.11 83.98,41.33 C67.04,53.83 57.3,71.71 57.3,92.71 C57.3,125.75 82.94,155.33 120.77,155.33 C155.01,155.33 184.31,125.2 184.31,92.47 C184.31,62.34 169.96,34.06 141.92,14.55 L141.65,14.34 C175.81,23.68 202.26,54.11 202.26,92.81 C202.26,135.69 166.38,172.13 120.91,172.13 Z'
fill='#FB5001'
/>
</svg>
)
}
export function LumaIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} fill='none' viewBox='0 0 133 134' xmlns='http://www.w3.org/2000/svg'>
<path
d='M133 67C96.282 67 66.5 36.994 66.5 0c0 36.994-29.782 67-66.5 67 36.718 0 66.5 30.006 66.5 67 0-36.994 29.782-67 66.5-67'
fill='#000000'
/>
</svg>
)
}
export function MailchimpIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg
@@ -4477,6 +4557,17 @@ export function SSHIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function DatabricksIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 241 266' fill='none' xmlns='http://www.w3.org/2000/svg'>
<path
d='M228.085 109.654L120.615 171.674L5.53493 105.41L0 108.475V156.582L120.615 225.911L228.085 164.128V189.596L120.615 251.615L5.53493 185.351L0 188.417V196.67L120.615 266L241 196.67V148.564L235.465 145.498L120.615 211.527L12.9148 149.743V124.275L120.615 186.059L241 116.729V69.3298L235.004 65.7925L120.615 131.585L18.4498 73.1028L120.615 14.3848L204.562 62.7269L211.942 58.4823V52.5869L120.615 0L0 69.3298V76.8759L120.615 146.206L228.085 84.1862V109.654Z'
fill='#FF3621'
/>
</svg>
)
}
export function DatadogIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 64 64'>
@@ -4825,6 +4916,17 @@ export function CirclebackIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function GreenhouseIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 51.4 107.7' xmlns='http://www.w3.org/2000/svg'>
<path
d='M44.9,32c0,5.2-2.2,9.8-5.8,13.4c-4,4-9.8,5-9.8,8.4c0,4.6,7.4,3.2,14.5,10.3c4.7,4.7,7.6,10.9,7.6,18.1c0,14.2-11.4,25.5-25.7,25.5S0,96.4,0,82.2C0,75,2.9,68.8,7.6,64.1c7.1-7.1,14.5-5.7,14.5-10.3c0-3.4-5.8-4.4-9.8-8.4c-3.6-3.6-5.8-8.2-5.8-13.6C6.5,21.4,15,13,25.4,13c2,0,3.8,0.3,5.3,0.3c2.7,0,4.1-1.2,4.1-3.1c0-1.1-0.5-2.5-0.5-4c0-3.4,2.9-6.2,6.4-6.2S47,2.9,47,6.4c0,3.7-2.9,5.4-5.1,6.2c-1.8,0.6-3.2,1.4-3.2,3.2C38.7,19.2,44.9,22.5,44.9,32z M42.9,82.2c0-9.9-7.3-17.9-17.2-17.9s-17.2,8-17.2,17.9c0,9.8,7.3,17.9,17.2,17.9S42.9,92,42.9,82.2z M37,31.8c0-6.3-5.1-11.5-11.3-11.5s-11.3,5.2-11.3,11.5s5.1,11.5,11.3,11.5S37,38.1,37,31.8z'
fill='currentColor'
/>
</svg>
)
}
export function GreptileIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 24 24' xmlns='http://www.w3.org/2000/svg'>
@@ -5496,6 +5598,35 @@ export function GoogleMapsIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function GooglePagespeedIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='-1.74 -1.81 285.55 266.85' xmlns='http://www.w3.org/2000/svg'>
<path
d='M272.73 37.23v179.68a18.58 18.58 0 0 1-18.57 18.59H18.65A18.58 18.58 0 0 1 .06 216.94V37.23z'
fill='#e1e1e1'
/>
<path
d='M18.65 0h235.5a18.58 18.58 0 0 1 18.58 18.56v18.67H.07V18.59A18.58 18.58 0 0 1 18.64 0z'
fill='#c2c2c2'
/>
<path
d='M136.3 92.96a99 99 0 0 0-99 99v.13c0 2.08-.12 4.64 0 6.2h43.25a54.87 54.87 0 0 1 0-6.2 55.81 55.81 0 0 1 85.06-47.45l31.12-31.12a98.76 98.76 0 0 0-60.44-20.57z'
fill='#4285f4'
/>
<path
d='M196.73 113.46l-31.14 31.14a55.74 55.74 0 0 1 26.56 47.45 54.87 54.87 0 0 1 0 6.2h43.39c.12-1.48 0-4.12 0-6.2a99 99 0 0 0-38.81-78.59z'
fill='#f44336'
/>
<circle cx='24.85' cy='18.59' fill='#eee' r='6.2' />
<circle cx='49.65' cy='18.59' fill='#eee' r='6.2' />
<path
d='M197.01 117.23a3.05 3.05 0 0 0 .59-1.81 3.11 3.11 0 0 0-3.1-3.1 3 3 0 0 0-1.91.68l-67.56 52a18.58 18.58 0 1 0 27.24 24.33l44.73-72.1z'
fill='#9e9e9e'
/>
</svg>
)
}
export function GoogleTranslateIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 998.1 998.3'>


@@ -1,12 +1,17 @@
'use client'
import Link from 'next/link'
import { usePathname } from 'next/navigation'
import { LanguageDropdown } from '@/components/ui/language-dropdown'
import { SearchTrigger } from '@/components/ui/search-trigger'
import { SimLogoFull } from '@/components/ui/sim-logo'
import { ThemeToggle } from '@/components/ui/theme-toggle'
import { cn } from '@/lib/utils'
export function Navbar() {
const pathname = usePathname()
const isApiReference = pathname.includes('/api-reference')
return (
<nav className='sticky top-0 z-50 border-border/50 border-b bg-background/80 backdrop-blur-md backdrop-saturate-150'>
{/* Desktop: Single row layout */}
@@ -31,16 +36,30 @@ export function Navbar() {
</div>
{/* Right cluster aligns with TOC edge */}
<div className='flex items-center gap-4'>
<div className='flex items-center gap-1'>
<Link
href='/introduction'
className={cn(
'rounded-xl px-3 py-2 font-normal text-[0.9375rem] leading-[1.4] transition-colors hover:bg-foreground/8 hover:text-foreground',
!isApiReference ? 'text-foreground' : 'text-foreground/60'
)}
>
Documentation
</Link>
<Link
href='/api-reference/getting-started'
className={cn(
'rounded-xl px-3 py-2 font-normal text-[0.9375rem] leading-[1.4] transition-colors hover:bg-foreground/8 hover:text-foreground',
isApiReference ? 'text-foreground' : 'text-foreground/60'
)}
>
API
</Link>
<Link
href='https://sim.ai'
target='_blank'
rel='noopener noreferrer'
className='rounded-xl px-3 py-2 font-normal text-[0.9375rem] text-foreground/60 leading-[1.4] transition-colors hover:bg-foreground/8 hover:text-foreground'
style={{
fontFamily:
'-apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif',
}}
>
Platform
</Link>


@@ -25,8 +25,8 @@ export function StructuredData({
headline: title,
description: description,
url: url,
...(dateModified && { datePublished: dateModified }),
...(dateModified && { dateModified }),
author: {
'@type': 'Organization',
name: 'Sim Team',
@@ -91,12 +91,6 @@ export function StructuredData({
inLanguage: ['en', 'es', 'fr', 'de', 'ja', 'zh'],
}
const faqStructuredData = title.toLowerCase().includes('faq') && {
'@context': 'https://schema.org',
'@type': 'FAQPage',
mainEntity: [],
}
const softwareStructuredData = {
'@context': 'https://schema.org',
'@type': 'SoftwareApplication',
@@ -151,15 +145,6 @@ export function StructuredData({
}}
/>
)}
{faqStructuredData && (
<Script
id='faq-structured-data'
type='application/ld+json'
dangerouslySetInnerHTML={{
__html: JSON.stringify(faqStructuredData),
}}
/>
)}
{url === baseUrl && (
<Script
id='software-structured-data'


@@ -9,10 +9,12 @@ import {
AirtableIcon,
AirweaveIcon,
AlgoliaIcon,
AmplitudeIcon,
ApifyIcon,
ApolloIcon,
ArxivIcon,
AsanaIcon,
AshbyIcon,
AttioIcon,
BrainIcon,
BrowserUseIcon,
@@ -24,6 +26,7 @@ import {
CloudflareIcon,
ConfluenceIcon,
CursorIcon,
DatabricksIcon,
DatadogIcon,
DevinIcon,
DiscordIcon,
@@ -39,6 +42,7 @@ import {
EyeIcon,
FirecrawlIcon,
FirefliesIcon,
GammaIcon,
GithubIcon,
GitLabIcon,
GmailIcon,
@@ -46,12 +50,14 @@ import {
GoogleBigQueryIcon,
GoogleBooksIcon,
GoogleCalendarIcon,
GoogleContactsIcon,
GoogleDocsIcon,
GoogleDriveIcon,
GoogleFormsIcon,
GoogleGroupsIcon,
GoogleIcon,
GoogleMapsIcon,
GooglePagespeedIcon,
GoogleSheetsIcon,
GoogleSlidesIcon,
GoogleTasksIcon,
@@ -59,6 +65,7 @@ import {
GoogleVaultIcon,
GrafanaIcon,
GrainIcon,
GreenhouseIcon,
GreptileIcon,
HexIcon,
HubspotIcon,
@@ -76,6 +83,8 @@ import {
LinearIcon,
LinkedInIcon,
LinkupIcon,
LoopsIcon,
LumaIcon,
MailchimpIcon,
MailgunIcon,
MailServerIcon,
@@ -95,6 +104,7 @@ import {
OpenAIIcon,
OutlookIcon,
PackageSearchIcon,
PagerDutyIcon,
ParallelIcon,
PerplexityIcon,
PineconeIcon,
@@ -160,10 +170,12 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
airtable: AirtableIcon,
airweave: AirweaveIcon,
algolia: AlgoliaIcon,
amplitude: AmplitudeIcon,
apify: ApifyIcon,
apollo: ApolloIcon,
arxiv: ArxivIcon,
asana: AsanaIcon,
ashby: AshbyIcon,
attio: AttioIcon,
browser_use: BrowserUseIcon,
calcom: CalComIcon,
@@ -174,6 +186,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
cloudflare: CloudflareIcon,
confluence_v2: ConfluenceIcon,
cursor_v2: CursorIcon,
databricks: DatabricksIcon,
datadog: DatadogIcon,
devin: DevinIcon,
discord: DiscordIcon,
@@ -188,6 +201,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
file_v3: DocumentIcon,
firecrawl: FirecrawlIcon,
fireflies_v2: FirefliesIcon,
gamma: GammaIcon,
github_v2: GithubIcon,
gitlab: GitLabIcon,
gmail_v2: GmailIcon,
@@ -195,11 +209,13 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
google_bigquery: GoogleBigQueryIcon,
google_books: GoogleBooksIcon,
google_calendar_v2: GoogleCalendarIcon,
google_contacts: GoogleContactsIcon,
google_docs: GoogleDocsIcon,
google_drive: GoogleDriveIcon,
google_forms: GoogleFormsIcon,
google_groups: GoogleGroupsIcon,
google_maps: GoogleMapsIcon,
google_pagespeed: GooglePagespeedIcon,
google_search: GoogleIcon,
google_sheets_v2: GoogleSheetsIcon,
google_slides_v2: GoogleSlidesIcon,
@@ -208,6 +224,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
google_vault: GoogleVaultIcon,
grafana: GrafanaIcon,
grain: GrainIcon,
greenhouse: GreenhouseIcon,
greptile: GreptileIcon,
hex: HexIcon,
hubspot: HubspotIcon,
@@ -227,6 +244,8 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
linear: LinearIcon,
linkedin: LinkedInIcon,
linkup: LinkupIcon,
loops: LoopsIcon,
luma: LumaIcon,
mailchimp: MailchimpIcon,
mailgun: MailgunIcon,
mem0: Mem0Icon,
@@ -244,6 +263,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
onepassword: OnePasswordIcon,
openai: OpenAIIcon,
outlook: OutlookIcon,
pagerduty: PagerDutyIcon,
parallel_ai: ParallelIcon,
perplexity: PerplexityIcon,
pinecone: PineconeIcon,


@@ -0,0 +1,169 @@
'use client'
import { useEffect, useRef, useState } from 'react'
import { ChevronDown } from 'lucide-react'
import { cn } from '@/lib/utils'
interface ResponseSectionProps {
children: React.ReactNode
}
export function ResponseSection({ children }: ResponseSectionProps) {
const containerRef = useRef<HTMLDivElement>(null)
const [statusCodes, setStatusCodes] = useState<string[]>([])
const [selectedCode, setSelectedCode] = useState<string>('')
const [isOpen, setIsOpen] = useState(false)
const dropdownRef = useRef<HTMLDivElement>(null)
function getAccordionItems() {
const root = containerRef.current?.querySelector('[data-orientation="vertical"]')
if (!root) return []
return Array.from(root.children).filter(
(el) => el.getAttribute('data-state') !== null
) as HTMLElement[]
}
function showStatusCode(code: string) {
const items = getAccordionItems()
for (const item of items) {
const triggerBtn = item.querySelector('h3 button') as HTMLButtonElement | null
const text = triggerBtn?.textContent?.trim() ?? ''
const itemCode = text.match(/^\d{3}/)?.[0]
if (itemCode === code) {
item.style.display = ''
if (item.getAttribute('data-state') === 'closed' && triggerBtn) {
triggerBtn.click()
}
} else {
item.style.display = 'none'
if (item.getAttribute('data-state') === 'open' && triggerBtn) {
triggerBtn.click()
}
}
}
}
/**
* Detect when the fumadocs accordion children mount via MutationObserver,
* then extract status codes and show the first one.
* Replaces the previous approach that used `children` as a dependency
* (which triggered on every render since children is a new object each time).
*/
useEffect(() => {
const container = containerRef.current
if (!container) return
const initialize = () => {
const items = getAccordionItems()
if (items.length === 0) return false
const codes: string[] = []
const seen = new Set<string>()
for (const item of items) {
const triggerBtn = item.querySelector('h3 button')
if (triggerBtn) {
const text = triggerBtn.textContent?.trim() ?? ''
const code = text.match(/^\d{3}/)?.[0]
if (code && !seen.has(code)) {
seen.add(code)
codes.push(code)
}
}
}
if (codes.length > 0) {
setStatusCodes(codes)
setSelectedCode(codes[0])
showStatusCode(codes[0])
return true
}
return false
}
if (initialize()) return
const observer = new MutationObserver(() => {
if (initialize()) {
observer.disconnect()
}
})
observer.observe(container, { childList: true, subtree: true })
return () => observer.disconnect()
}, []) // eslint-disable-line react-hooks/exhaustive-deps
function handleSelectCode(code: string) {
setSelectedCode(code)
setIsOpen(false)
showStatusCode(code)
}
useEffect(() => {
function handleClickOutside(event: MouseEvent) {
if (dropdownRef.current && !dropdownRef.current.contains(event.target as Node)) {
setIsOpen(false)
}
}
document.addEventListener('mousedown', handleClickOutside)
return () => document.removeEventListener('mousedown', handleClickOutside)
}, [])
return (
<div ref={containerRef} className='response-section-wrapper'>
{statusCodes.length > 0 && (
<div className='response-section-header'>
<h2 className='response-section-title'>Response</h2>
<div className='response-section-meta'>
<div ref={dropdownRef} className='response-section-dropdown-wrapper'>
<button
type='button'
className='response-section-dropdown-trigger'
onClick={() => setIsOpen(!isOpen)}
>
<span>{selectedCode}</span>
<ChevronDown
className={cn(
'response-section-chevron',
isOpen && 'response-section-chevron-open'
)}
/>
</button>
{isOpen && (
<div className='response-section-dropdown-menu'>
{statusCodes.map((code) => (
<button
key={code}
type='button'
className={cn(
'response-section-dropdown-item',
code === selectedCode && 'response-section-dropdown-item-selected'
)}
onClick={() => handleSelectCode(code)}
>
<span>{code}</span>
{code === selectedCode && (
<svg
className='response-section-check'
viewBox='0 0 24 24'
fill='none'
stroke='currentColor'
strokeWidth='2'
>
<polyline points='20 6 9 17 4 12' />
</svg>
)}
</button>
))}
</div>
)}
</div>
<span className='response-section-content-type'>application/json</span>
</div>
</div>
)}
<div className='response-section-content'>{children}</div>
</div>
)
}


@@ -0,0 +1,94 @@
---
title: Authentication
description: API key types, generation, and how to authenticate requests
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
To access the Sim API, you need an API key. Sim supports two types of API keys — **personal keys** and **workspace keys** — each with different billing and access behaviors.
## Key Types
| | **Personal Keys** | **Workspace Keys** |
| --- | --- | --- |
| **Billed to** | Your individual account | Workspace owner |
| **Scope** | Across workspaces you have access to | Shared across the workspace |
| **Managed by** | Each user individually | Workspace admins |
| **Permissions** | Must be enabled at workspace level | Require admin permissions |
<Callout type="info">
Workspace admins can disable personal API key usage for their workspace. If disabled, only workspace keys can be used.
</Callout>
## Generating API Keys
To generate a key, open the Sim dashboard and navigate to **Settings**, then go to **Sim Keys** and click **Create**.
<Callout type="warn">
API keys are only shown once when generated. Store your key securely — you will not be able to view it again.
</Callout>
## Using API Keys
Pass your API key in the `X-API-Key` header with every request:
<Tabs items={['curl', 'TypeScript', 'Python']}>
<Tab value="curl">
```bash
curl -X POST https://www.sim.ai/api/workflows/{workflowId}/execute \
-H "Content-Type: application/json" \
-H "X-API-Key: YOUR_API_KEY" \
-d '{"inputs": {}}'
```
</Tab>
<Tab value="TypeScript">
```typescript
const response = await fetch(
'https://www.sim.ai/api/workflows/{workflowId}/execute',
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-API-Key': process.env.SIM_API_KEY!,
},
body: JSON.stringify({ inputs: {} }),
}
)
```
</Tab>
<Tab value="Python">
```python
import requests
response = requests.post(
"https://www.sim.ai/api/workflows/{workflowId}/execute",
headers={
"Content-Type": "application/json",
"X-API-Key": os.environ["SIM_API_KEY"],
},
json={"inputs": {}},
)
```
</Tab>
</Tabs>
## Where Keys Are Used
API keys authenticate access to:
- **Workflow execution** — run deployed workflows via the API
- **Logs API** — query workflow execution logs and metrics
- **MCP servers** — authenticate connections to deployed MCP servers
- **SDKs** — the [Python](/api-reference/python) and [TypeScript](/api-reference/typescript) SDKs use API keys for all operations
## Security
- Keys use the `sk-sim-` prefix and are encrypted at rest
- Keys can be revoked at any time from the dashboard
- Use environment variables to store keys — never hardcode them in source code
- For browser-based applications, use a backend proxy to avoid exposing keys to the client
<Callout type="warn">
Never expose your API key in client-side code. Use a server-side proxy to make authenticated requests on behalf of your frontend.
</Callout>
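As a sketch of the proxy pattern, the server attaches the key from its own environment before forwarding a request, so the key never reaches the browser. The helper below is illustrative and not part of the Sim API; only the URL shape and headers mirror the examples above:

```python
import os

def build_proxied_request(workflow_id: str, inputs: dict) -> dict:
    """Build the server-side request that forwards a client call to Sim.

    The API key is read from the server's environment, never from the
    client. (Helper name and return shape are illustrative, not part of
    the Sim API.)
    """
    return {
        "url": f"https://www.sim.ai/api/workflows/{workflow_id}/execute",
        "headers": {
            "Content-Type": "application/json",
            "X-API-Key": os.environ["SIM_API_KEY"],
        },
        "json": {"inputs": inputs},
    }
```

The frontend then calls your proxy endpoint, which passes the built request to an HTTP client such as `requests`.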

View File

@@ -0,0 +1,210 @@
---
title: Getting Started
description: Base URL, first API call, response format, error handling, and pagination
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Step, Steps } from 'fumadocs-ui/components/steps'
## Base URL
All API requests are made to:
```
https://www.sim.ai
```
## Quick Start
<Steps>
<Step>
### Get your API key
Go to the Sim dashboard and navigate to **Settings → Sim Keys**, then click **Create**. See [Authentication](/api-reference/authentication) for details on key types.
</Step>
<Step>
### Find your workflow ID
Open a workflow in the Sim editor. The workflow ID is in the URL:
```
https://www.sim.ai/workspace/{workspaceId}/w/{workflowId}
```
You can also use the [List Workflows](/api-reference/workflows/listWorkflows) endpoint to get all workflow IDs in a workspace.
</Step>
<Step>
### Deploy your workflow
A workflow must be deployed before it can be executed via the API. Click the **Deploy** button in the editor toolbar.
</Step>
<Step>
### Make your first request
<Tabs items={['curl', 'TypeScript', 'Python']}>
<Tab value="curl">
```bash
curl -X POST https://www.sim.ai/api/workflows/{workflowId}/execute \
-H "Content-Type: application/json" \
-H "X-API-Key: YOUR_API_KEY" \
-d '{"inputs": {}}'
```
</Tab>
<Tab value="TypeScript">
```typescript
const response = await fetch(
`https://www.sim.ai/api/workflows/${workflowId}/execute`,
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-API-Key': process.env.SIM_API_KEY!,
},
body: JSON.stringify({ inputs: {} }),
}
)
const data = await response.json()
console.log(data.output)
```
</Tab>
<Tab value="Python">
```python
import requests
import os
response = requests.post(
f"https://www.sim.ai/api/workflows/{workflow_id}/execute",
headers={
"Content-Type": "application/json",
"X-API-Key": os.environ["SIM_API_KEY"],
},
json={"inputs": {}},
)
data = response.json()
print(data["output"])
```
</Tab>
</Tabs>
</Step>
</Steps>
## Sync vs Async Execution
By default, workflow executions are **synchronous** — the API blocks until the workflow completes and returns the result directly.
For long-running workflows, use **asynchronous execution** by passing `async: true`:
```bash
curl -X POST https://www.sim.ai/api/workflows/{workflowId}/execute \
-H "Content-Type: application/json" \
-H "X-API-Key: YOUR_API_KEY" \
-d '{"inputs": {}, "async": true}'
```
This returns immediately with a `taskId`:
```json
{
"success": true,
"taskId": "job_abc123",
"status": "queued"
}
```
Poll the [Get Job Status](/api-reference/workflows/getJobStatus) endpoint until the status is `completed` or `failed`:
```bash
curl https://www.sim.ai/api/jobs/{taskId} \
-H "X-API-Key: YOUR_API_KEY"
```
<Callout type="info">
Job status transitions follow: `queued` → `processing` → `completed` or `failed`. The `output` field is only present when status is `completed`.
</Callout>
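The polling step above can be sketched as a small loop. The `get_status` callable below is a stand-in for the `GET /api/jobs/{taskId}` request; the function name and defaults are illustrative:

```python
import time

TERMINAL_STATES = {"completed", "failed", "cancelled"}

def poll_job(get_status, task_id: str, interval: float = 2.0, max_polls: int = 60) -> dict:
    """Poll a job until it reaches a terminal state.

    `get_status` stands in for a GET /api/jobs/{taskId} call; it should
    return the parsed JSON body of the response.
    """
    for _ in range(max_polls):
        status = get_status(task_id)
        if status["status"] in TERMINAL_STATES:
            return status
        time.sleep(interval)
    raise TimeoutError(f"job {task_id} did not finish after {max_polls} polls")
```

In practice, prefer a modest `interval` over tight polling; the `estimatedDuration` field (when present) can guide the first wait.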
## Response Format
Successful responses include an `output` object with your workflow results and a `limits` object with your current rate limit and usage status:
```json
{
"success": true,
"output": {
"result": "Hello, world!"
},
"limits": {
"workflowExecutionRateLimit": {
"sync": {
"requestsPerMinute": 60,
"maxBurst": 10,
"remaining": 59,
"resetAt": "2025-01-01T00:01:00Z"
},
"async": {
"requestsPerMinute": 30,
"maxBurst": 5,
"remaining": 30,
"resetAt": "2025-01-01T00:01:00Z"
}
},
"usage": {
"currentPeriodCost": 1.25,
"limit": 50.00,
"plan": "pro",
"isExceeded": false
}
}
}
```
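Because the `limits` object arrives with every execution response, a client can throttle itself without extra requests. A minimal sketch, assuming the field layout shown above:

```python
from datetime import datetime, timezone
from typing import Optional

def seconds_until_reset(limits: dict, mode: str = "sync",
                        now: Optional[datetime] = None) -> float:
    """How long to wait before the next call, based on the `limits`
    object from an execution response. Returns 0 when requests remain
    in the current window."""
    bucket = limits["workflowExecutionRateLimit"][mode]
    if bucket["remaining"] > 0:
        return 0.0
    # resetAt is an ISO 8601 timestamp; normalize the trailing Z for fromisoformat
    reset_at = datetime.fromisoformat(bucket["resetAt"].replace("Z", "+00:00"))
    now = now or datetime.now(timezone.utc)
    return max(0.0, (reset_at - now).total_seconds())
```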
## Error Handling
The API uses standard HTTP status codes. Error responses include a human-readable `error` message:
```json
{
"error": "Workflow not found"
}
```
| Status | Meaning | What to do |
| --- | --- | --- |
| `400` | Invalid request parameters | Check the `details` array for specific field errors |
| `401` | Missing or invalid API key | Verify your `X-API-Key` header |
| `403` | Access denied | Check you have permission for this resource |
| `404` | Resource not found | Verify the ID exists and belongs to your workspace |
| `429` | Rate limit exceeded | Wait for the duration in the `Retry-After` header |
<Callout type="info">
Use the [Get Usage Limits](/api-reference/usage/getUsageLimits) endpoint to check your current rate limit status and billing usage at any time.
</Callout>
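A minimal retry helper for the `429` case might look like the sketch below. `send` is a stand-in for the actual HTTP call, and the exponential fallback used when `Retry-After` is missing is an assumption, not documented behavior:

```python
import time

def request_with_retry(send, max_attempts: int = 5):
    """Retry a request on 429, honoring the Retry-After header.

    `send` stands in for the actual HTTP call; it must return an object
    with `.status_code` and `.headers` (e.g. a `requests.Response`).
    Falls back to exponential backoff if Retry-After is absent
    (an assumption, not documented Sim behavior).
    """
    for attempt in range(max_attempts):
        response = send()
        if response.status_code != 429:
            return response
        wait = float(response.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
    return response  # still rate limited after max_attempts
```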
## Rate Limits
Rate limits depend on your subscription plan and apply separately to synchronous and asynchronous executions. Every execution response includes a `limits` object showing your current rate limit status.
When rate limited, the API returns a `429` response with a `Retry-After` header indicating how many seconds to wait before retrying.
## Pagination
List endpoints (workflows, logs, audit logs) use **cursor-based pagination**:
```bash
# First page
curl "https://www.sim.ai/api/v1/logs?limit=20" \
-H "X-API-Key: YOUR_API_KEY"
# Next page — use the nextCursor from the previous response
curl "https://www.sim.ai/api/v1/logs?limit=20&cursor=abc123" \
-H "X-API-Key: YOUR_API_KEY"
```
The response includes a `nextCursor` field. When `nextCursor` is absent or `null`, you have reached the last page.
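The page-walking loop can be sketched as follows. `fetch_page` stands in for one GET request returning the parsed JSON body; the `data` key holding the list items is an assumption for illustration, while the `nextCursor` handling follows the description above:

```python
def fetch_all(fetch_page) -> list:
    """Walk a cursor-paginated list endpoint to exhaustion.

    `fetch_page` takes the cursor (None for the first page) and returns
    the parsed JSON body. The `data` key is assumed for illustration;
    only `nextCursor` is specified by the docs.
    """
    items, cursor = [], None
    while True:
        page = fetch_page(cursor)
        items.extend(page["data"])
        cursor = page.get("nextCursor")
        if not cursor:  # absent or null means last page
            return items
```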


@@ -0,0 +1,16 @@
{
"title": "API Reference",
"root": true,
"pages": [
"getting-started",
"authentication",
"---SDKs---",
"python",
"typescript",
"---Endpoints---",
"(generated)/workflows",
"(generated)/logs",
"(generated)/usage",
"(generated)/audit-logs"
]
}


@@ -0,0 +1,766 @@
---
title: Python
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Card, Cards } from 'fumadocs-ui/components/card'
import { Step, Steps } from 'fumadocs-ui/components/steps'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
The official Python SDK for Sim lets you execute workflows programmatically from your Python applications.
<Callout type="info">
  The Python SDK supports Python 3.8+ and includes asynchronous execution, automatic rate limiting with exponential backoff, and usage tracking.
</Callout>
## Installation
Install the SDK using pip:
```bash
pip install simstudio-sdk
```
## Quick Start
Here is a simple example to get you started:
```python
from simstudio import SimStudioClient
# Initialize the client
client = SimStudioClient(
api_key="your-api-key-here",
base_url="https://sim.ai" # optional, defaults to https://sim.ai
)
# Execute a workflow
try:
result = client.execute_workflow("workflow-id")
print("Workflow executed successfully:", result)
except Exception as error:
print("Workflow execution failed:", error)
```
## API Reference
### SimStudioClient
#### Constructor
```python
SimStudioClient(api_key: str, base_url: str = "https://sim.ai")
```
**Parameters:**
- `api_key` (str): Your Sim API key
- `base_url` (str, optional): Base URL for the Sim API
#### Methods
##### execute_workflow()
Executes a workflow with optional input data.
```python
result = client.execute_workflow(
"workflow-id",
input_data={"message": "Hello, world!"},
timeout=30.0 # 30 seconds
)
```
**Parameters:**
- `workflow_id` (str): The ID of the workflow to execute
- `input_data` (dict, optional): Input data to pass to the workflow
- `timeout` (float, optional): Timeout in seconds (default: 30.0)
- `stream` (bool, optional): Enable streaming responses (default: False)
- `selected_outputs` (list[str], optional): Block outputs to stream, in `blockName.attribute` format (e.g. `["agent1.content"]`)
- `async_execution` (bool, optional): Execute asynchronously (default: False)
**Returns:** `WorkflowExecutionResult | AsyncExecutionResult`
If `async_execution=True`, returns immediately with a task ID for polling. Otherwise, waits for completion.
##### get_workflow_status()
Gets the status of a workflow (deployment status, etc.).
```python
status = client.get_workflow_status("workflow-id")
print("Is deployed:", status.is_deployed)
```
**Parameters:**
- `workflow_id` (str): The ID of the workflow
**Returns:** `WorkflowStatus`
##### validate_workflow()
Validate that a workflow is ready for execution.
```python
is_ready = client.validate_workflow("workflow-id")
if is_ready:
# Workflow is deployed and ready
pass
```
**Parameters:**
- `workflow_id` (str): The ID of the workflow
**Returns:** `bool`
##### get_job_status()
Get the status of an async job execution.
```python
status = client.get_job_status("task-id-from-async-execution")
print("Status:", status["status"]) # 'queued', 'processing', 'completed', 'failed'
if status["status"] == "completed":
print("Output:", status["output"])
```
**Parameters:**
- `task_id` (str): The task ID returned from async execution
**Returns:** `Dict[str, Any]`
**Response fields:**
- `success` (bool): Whether the request was successful
- `taskId` (str): The task ID
- `status` (str): One of `'queued'`, `'processing'`, `'completed'`, `'failed'`, `'cancelled'`
- `metadata` (dict): Contains `startedAt`, `completedAt`, and `duration`
- `output` (any, optional): The workflow output (when completed)
- `error` (any, optional): Error details (when failed)
- `estimatedDuration` (int, optional): Estimated duration in milliseconds (when processing/queued)
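For logging, these fields can be condensed into a single line. A minimal sketch (a hypothetical helper, using only the field names documented above):

```python
def summarize_job(status: dict) -> str:
    """Turn a get_job_status() payload into a one-line summary.

    Hypothetical helper; relies only on the documented response fields.
    """
    state = status.get("status")
    if state == "completed":
        duration = status.get("metadata", {}).get("duration")
        return f"completed in {duration} ms" if duration is not None else "completed"
    if state == "failed":
        return f"failed: {status.get('error')}"
    # queued / processing / cancelled
    eta = status.get("estimatedDuration")
    return f"{state} (est. {eta} ms remaining)" if eta is not None else str(state)
```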
##### execute_with_retry()
Execute a workflow with automatic retry on rate limit errors using exponential backoff.
```python
result = client.execute_with_retry(
"workflow-id",
input_data={"message": "Hello"},
timeout=30.0,
max_retries=3, # Maximum number of retries
initial_delay=1.0, # Initial delay in seconds
max_delay=30.0, # Maximum delay in seconds
backoff_multiplier=2.0 # Exponential backoff multiplier
)
```
**Parameters:**
- `workflow_id` (str): The ID of the workflow to execute
- `input_data` (dict, optional): Input data to pass to the workflow
- `timeout` (float, optional): Timeout in seconds
- `stream` (bool, optional): Enable streaming responses
- `selected_outputs` (list, optional): Block outputs to stream
- `async_execution` (bool, optional): Execute asynchronously
- `max_retries` (int, optional): Maximum number of retries (default: 3)
- `initial_delay` (float, optional): Initial delay in seconds (default: 1.0)
- `max_delay` (float, optional): Maximum delay in seconds (default: 30.0)
- `backoff_multiplier` (float, optional): Backoff multiplier (default: 2.0)
**Returns:** `WorkflowExecutionResult | AsyncExecutionResult`
The retry logic uses exponential backoff (1s → 2s → 4s → 8s...) with ±25% jitter to prevent thundering herd. If the API provides a `retry-after` header, it is used instead.
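The schedule can be sketched as pure logic; this illustrates the documented behavior and is not the SDK's actual implementation:

```python
import random

def backoff_delays(max_retries=3, initial_delay=1.0, max_delay=30.0,
                   backoff_multiplier=2.0, retry_after=None):
    """Yield the delay (in seconds) before each retry attempt.

    Sketch of the documented schedule: exponential growth capped at
    max_delay, with ±25% jitter; a server-provided retry-after wins.
    """
    for attempt in range(max_retries):
        if retry_after is not None:
            yield float(retry_after)
            continue
        base = min(initial_delay * backoff_multiplier ** attempt, max_delay)
        jitter = base * random.uniform(-0.25, 0.25)  # ±25% jitter
        yield max(0.0, base + jitter)
```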
##### get_rate_limit_info()
Get the current rate limit information from the last API response.
```python
rate_limit_info = client.get_rate_limit_info()
if rate_limit_info:
print("Limit:", rate_limit_info.limit)
print("Remaining:", rate_limit_info.remaining)
print("Reset:", datetime.fromtimestamp(rate_limit_info.reset))
```
**Returns:** `RateLimitInfo | None`
##### get_usage_limits()
Get current usage limits and quota information for your account.
```python
limits = client.get_usage_limits()
print("Sync requests remaining:", limits.rate_limit["sync"]["remaining"])
print("Async requests remaining:", limits.rate_limit["async"]["remaining"])
print("Current period cost:", limits.usage["currentPeriodCost"])
print("Plan:", limits.usage["plan"])
```
**Returns:** `UsageLimits`
**Response structure:**
```python
{
"success": bool,
"rateLimit": {
"sync": {
"isLimited": bool,
"limit": int,
"remaining": int,
"resetAt": str
},
"async": {
"isLimited": bool,
"limit": int,
"remaining": int,
"resetAt": str
},
"authType": str # 'api' or 'manual'
},
"usage": {
"currentPeriodCost": float,
"limit": float,
"plan": str # e.g., 'free', 'pro'
}
}
```
##### set_api_key()
Update the API key.
```python
client.set_api_key("new-api-key")
```
##### set_base_url()
Update the base URL.
```python
client.set_base_url("https://my-custom-domain.com")
```
##### close()
Close the underlying HTTP session.
```python
client.close()
```
## Data Classes
### WorkflowExecutionResult
```python
@dataclass
class WorkflowExecutionResult:
success: bool
output: Optional[Any] = None
error: Optional[str] = None
logs: Optional[List[Any]] = None
metadata: Optional[Dict[str, Any]] = None
trace_spans: Optional[List[Any]] = None
total_duration: Optional[float] = None
```
### AsyncExecutionResult
```python
@dataclass
class AsyncExecutionResult:
success: bool
task_id: str
status: str # 'queued'
created_at: str
links: Dict[str, str] # e.g., {"status": "/api/jobs/{taskId}"}
```
### WorkflowStatus
```python
@dataclass
class WorkflowStatus:
is_deployed: bool
deployed_at: Optional[str] = None
needs_redeployment: bool = False
```
### RateLimitInfo
```python
@dataclass
class RateLimitInfo:
limit: int
remaining: int
reset: int
retry_after: Optional[int] = None
```
### UsageLimits
```python
@dataclass
class UsageLimits:
success: bool
rate_limit: Dict[str, Any]
usage: Dict[str, Any]
```
### SimStudioError
```python
class SimStudioError(Exception):
def __init__(self, message: str, code: Optional[str] = None, status: Optional[int] = None):
super().__init__(message)
self.code = code
self.status = status
```
**Common error codes:**
- `UNAUTHORIZED`: Invalid API key
- `TIMEOUT`: Request timed out
- `RATE_LIMIT_EXCEEDED`: Rate limit exceeded
- `USAGE_LIMIT_EXCEEDED`: Usage limit exceeded
- `EXECUTION_ERROR`: Workflow execution failed
## Examples
### Basic Workflow Execution
<Steps>
<Step title="Initialize the client">
Set up the SimStudioClient with your API key.
</Step>
<Step title="Validate the workflow">
Check if the workflow is deployed and ready for execution.
</Step>
<Step title="Execute the workflow">
Run the workflow with your input data.
</Step>
<Step title="Handle the result">
Process the execution result and handle any errors.
</Step>
</Steps>
```python
import os
from simstudio import SimStudioClient
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
def run_workflow():
try:
# Check if workflow is ready
is_ready = client.validate_workflow("my-workflow-id")
if not is_ready:
raise Exception("Workflow is not deployed or ready")
# Execute the workflow
result = client.execute_workflow(
"my-workflow-id",
input_data={
"message": "Process this data",
"user_id": "12345"
}
)
if result.success:
print("Output:", result.output)
print("Duration:", result.metadata.get("duration") if result.metadata else None)
else:
print("Workflow failed:", result.error)
except Exception as error:
print("Error:", error)
run_workflow()
```
### Error Handling
Handle different types of errors that can occur during workflow execution:
```python
from simstudio import SimStudioClient, SimStudioError
import os
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
def execute_with_error_handling():
try:
result = client.execute_workflow("workflow-id")
return result
except SimStudioError as error:
if error.code == "UNAUTHORIZED":
print("Invalid API key")
elif error.code == "TIMEOUT":
print("Workflow execution timed out")
elif error.code == "USAGE_LIMIT_EXCEEDED":
print("Usage limit exceeded")
elif error.code == "INVALID_JSON":
print("Invalid JSON in request body")
else:
print(f"Workflow error: {error}")
raise
except Exception as error:
print(f"Unexpected error: {error}")
raise
```
### Using the Context Manager
Use the client as a context manager to handle resource cleanup automatically:
```python
from simstudio import SimStudioClient
import os
# Using context manager to automatically close the session
with SimStudioClient(api_key=os.getenv("SIM_API_KEY")) as client:
result = client.execute_workflow("workflow-id")
print("Result:", result)
# Session is automatically closed here
```
### Batch Workflow Execution
Execute multiple workflows efficiently:
```python
from simstudio import SimStudioClient
import os
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
def execute_workflows_batch(workflow_data_pairs):
"""Execute multiple workflows with different input data."""
results = []
for workflow_id, input_data in workflow_data_pairs:
try:
# Validate workflow before execution
if not client.validate_workflow(workflow_id):
print(f"Skipping {workflow_id}: not deployed")
continue
result = client.execute_workflow(workflow_id, input_data)
results.append({
"workflow_id": workflow_id,
"success": result.success,
"output": result.output,
"error": result.error
})
except Exception as error:
results.append({
"workflow_id": workflow_id,
"success": False,
"error": str(error)
})
return results
# Example usage
workflows = [
("workflow-1", {"type": "analysis", "data": "sample1"}),
("workflow-2", {"type": "processing", "data": "sample2"}),
]
results = execute_workflows_batch(workflows)
for result in results:
print(f"Workflow {result['workflow_id']}: {'Success' if result['success'] else 'Failed'}")
```
### Async Workflow Execution
Execute workflows asynchronously for long-running tasks:
```python
import os
import time
from simstudio import SimStudioClient
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
def execute_async():
try:
# Start async execution
result = client.execute_workflow(
"workflow-id",
input_data={"data": "large dataset"},
async_execution=True # Execute asynchronously
)
# Check if result is an async execution
if hasattr(result, 'task_id'):
print(f"Task ID: {result.task_id}")
print(f"Status endpoint: {result.links['status']}")
# Poll for completion
status = client.get_job_status(result.task_id)
while status["status"] in ["queued", "processing"]:
print(f"Current status: {status['status']}")
time.sleep(2) # Wait 2 seconds
status = client.get_job_status(result.task_id)
if status["status"] == "completed":
print("Workflow completed!")
print(f"Output: {status['output']}")
print(f"Duration: {status['metadata']['duration']}")
else:
print(f"Workflow failed: {status['error']}")
except Exception as error:
print(f"Error: {error}")
execute_async()
```
### Rate Limiting and Retries
Handle rate limits automatically with exponential backoff:
```python
import os
from simstudio import SimStudioClient, SimStudioError
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
def execute_with_retry_handling():
try:
# Automatically retries on rate limit
result = client.execute_with_retry(
"workflow-id",
input_data={"message": "Process this"},
max_retries=5,
initial_delay=1.0,
max_delay=60.0,
backoff_multiplier=2.0
)
print(f"Success: {result}")
except SimStudioError as error:
if error.code == "RATE_LIMIT_EXCEEDED":
print("Rate limit exceeded after all retries")
# Check rate limit info
rate_limit_info = client.get_rate_limit_info()
if rate_limit_info:
from datetime import datetime
reset_time = datetime.fromtimestamp(rate_limit_info.reset)
print(f"Rate limit resets at: {reset_time}")
execute_with_retry_handling()
```
### Usage Monitoring
Monitor your account's usage and limits:
```python
import os
from simstudio import SimStudioClient
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
def check_usage():
try:
limits = client.get_usage_limits()
print("=== Rate Limits ===")
print("Sync requests:")
print(f" Limit: {limits.rate_limit['sync']['limit']}")
print(f" Remaining: {limits.rate_limit['sync']['remaining']}")
print(f" Resets at: {limits.rate_limit['sync']['resetAt']}")
print(f" Is limited: {limits.rate_limit['sync']['isLimited']}")
print("\nAsync requests:")
print(f" Limit: {limits.rate_limit['async']['limit']}")
print(f" Remaining: {limits.rate_limit['async']['remaining']}")
print(f" Resets at: {limits.rate_limit['async']['resetAt']}")
print(f" Is limited: {limits.rate_limit['async']['isLimited']}")
print("\n=== Usage ===")
print(f"Current period cost: ${limits.usage['currentPeriodCost']:.2f}")
print(f"Limit: ${limits.usage['limit']:.2f}")
print(f"Plan: {limits.usage['plan']}")
percent_used = (limits.usage['currentPeriodCost'] / limits.usage['limit']) * 100
print(f"Usage: {percent_used:.1f}%")
if percent_used > 80:
print("⚠️ Warning: You are approaching your usage limit!")
except Exception as error:
print(f"Error checking usage: {error}")
check_usage()
```
### Streaming Workflow Execution
Execute workflows with real-time streaming responses:
```python
from simstudio import SimStudioClient
import os
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
def execute_with_streaming():
"""Execute workflow with streaming enabled."""
try:
# Enable streaming for specific block outputs
result = client.execute_workflow(
"workflow-id",
input_data={"message": "Count to five"},
stream=True,
selected_outputs=["agent1.content"] # Use blockName.attribute format
)
print("Workflow result:", result)
except Exception as error:
print("Error:", error)
execute_with_streaming()
```
The streaming response follows the Server-Sent Events (SSE) format:
```
data: {"blockId":"7b7735b9-19e5-4bd6-818b-46aae2596e9f","chunk":"One"}
data: {"blockId":"7b7735b9-19e5-4bd6-818b-46aae2596e9f","chunk":", two"}
data: {"event":"done","success":true,"output":{},"metadata":{"duration":610}}
data: [DONE]
```
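When consuming the raw stream yourself, the `data:` lines above can be parsed with a small helper (a sketch; the field names are taken from the sample events):

```python
import json

def parse_sse_lines(lines):
    """Extract chunk text and the final 'done' event from SSE data lines.

    Sketch based on the event shapes shown above; stops at [DONE].
    """
    chunks, done = [], None
    for line in lines:
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        event = json.loads(payload)
        if "chunk" in event:
            chunks.append(event["chunk"])
        elif event.get("event") == "done":
            done = event
    return "".join(chunks), done
```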
**Flask streaming example:**
```python
from flask import Flask, Response, stream_with_context
import requests
import json
import os
app = Flask(__name__)
@app.route('/stream-workflow')
def stream_workflow():
"""Stream workflow execution to the client."""
def generate():
response = requests.post(
'https://sim.ai/api/workflows/WORKFLOW_ID/execute',
headers={
'Content-Type': 'application/json',
'X-API-Key': os.getenv('SIM_API_KEY')
},
json={
'message': 'Generate a story',
'stream': True,
'selectedOutputs': ['agent1.content']
},
stream=True
)
for line in response.iter_lines():
if line:
decoded_line = line.decode('utf-8')
if decoded_line.startswith('data: '):
data = decoded_line[6:] # Remove 'data: ' prefix
if data == '[DONE]':
break
try:
parsed = json.loads(data)
if 'chunk' in parsed:
yield f"data: {json.dumps(parsed)}\n\n"
elif parsed.get('event') == 'done':
yield f"data: {json.dumps(parsed)}\n\n"
print("Execution complete:", parsed.get('metadata'))
except json.JSONDecodeError:
pass
return Response(
stream_with_context(generate()),
mimetype='text/event-stream'
)
if __name__ == '__main__':
app.run(debug=True)
```
### Environment Configuration
Configure the client using environment variables:
<Tabs items={['Development', 'Production']}>
<Tab value="Development">
```python
import os
from simstudio import SimStudioClient
# Development configuration
client = SimStudioClient(
    api_key=os.getenv("SIM_API_KEY"),
base_url=os.getenv("SIM_BASE_URL", "https://sim.ai")
)
```
</Tab>
<Tab value="Production">
```python
import os
from simstudio import SimStudioClient
# Production configuration with error handling
api_key = os.getenv("SIM_API_KEY")
if not api_key:
raise ValueError("SIM_API_KEY environment variable is required")
client = SimStudioClient(
api_key=api_key,
base_url=os.getenv("SIM_BASE_URL", "https://sim.ai")
)
```
</Tab>
</Tabs>
## Getting Your API Key
<Steps>
<Step title="Log in to Sim">
Navigate to [Sim](https://sim.ai) and log in to your account.
</Step>
<Step title="Open your workflow">
Navigate to the workflow you want to execute programmatically.
</Step>
<Step title="Deploy your workflow">
Click "Deploy" to deploy your workflow if you haven't already.
</Step>
<Step title="Create or select an API key">
During the deployment process, select or create an API key.
</Step>
<Step title="Copy your API key">
Copy the API key to use in your Python application.
</Step>
</Steps>
## Requirements
- Python 3.8+
- requests >= 2.25.0
## License
Apache-2.0



@@ -0,0 +1,24 @@
{
"title": "Sim Documentation",
"pages": [
"./introduction/index",
"./getting-started/index",
"./quick-reference/index",
"triggers",
"blocks",
"tools",
"connections",
"mcp",
"copilot",
"skills",
"knowledgebase",
"variables",
"credentials",
"execution",
"permissions",
"self-hosting",
"./enterprise/index",
"./keyboard-shortcuts/index"
],
"defaultOpen": false
}


@@ -0,0 +1,3 @@
{
"pages": ["executeWorkflow", "cancelExecution", "listWorkflows", "getWorkflow", "getJobStatus"]
}


@@ -0,0 +1,94 @@
---
title: Authentication
description: API key types, generation, and how to authenticate requests
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
To access the Sim API, you need an API key. Sim supports two types of API keys — **personal keys** and **workspace keys** — each with different billing and access behaviors.
## Key Types
| | **Personal Keys** | **Workspace Keys** |
| --- | --- | --- |
| **Billed to** | Your individual account | Workspace owner |
| **Scope** | Across workspaces you have access to | Shared across the workspace |
| **Managed by** | Each user individually | Workspace admins |
| **Permissions** | Must be enabled at workspace level | Require admin permissions |
<Callout type="info">
Workspace admins can disable personal API key usage for their workspace. If disabled, only workspace keys can be used.
</Callout>
## Generating API Keys
To generate a key, open the Sim dashboard and navigate to **Settings**, then go to **Sim Keys** and click **Create**.
<Callout type="warn">
API keys are only shown once when generated. Store your key securely — you will not be able to view it again.
</Callout>
## Using API Keys
Pass your API key in the `X-API-Key` header with every request:
<Tabs items={['curl', 'TypeScript', 'Python']}>
<Tab value="curl">
```bash
curl -X POST https://www.sim.ai/api/workflows/{workflowId}/execute \
-H "Content-Type: application/json" \
-H "X-API-Key: YOUR_API_KEY" \
-d '{"inputs": {}}'
```
</Tab>
<Tab value="TypeScript">
```typescript
const response = await fetch(
'https://www.sim.ai/api/workflows/{workflowId}/execute',
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-API-Key': process.env.SIM_API_KEY!,
},
body: JSON.stringify({ inputs: {} }),
}
)
```
</Tab>
<Tab value="Python">
```python
import requests
response = requests.post(
"https://www.sim.ai/api/workflows/{workflowId}/execute",
headers={
"Content-Type": "application/json",
"X-API-Key": os.environ["SIM_API_KEY"],
},
json={"inputs": {}},
)
```
</Tab>
</Tabs>
## Where Keys Are Used
API keys authenticate access to:
- **Workflow execution** — run deployed workflows via the API
- **Logs API** — query workflow execution logs and metrics
- **MCP servers** — authenticate connections to deployed MCP servers
- **SDKs** — the [Python](/api-reference/python) and [TypeScript](/api-reference/typescript) SDKs use API keys for all operations
## Security
- Keys use the `sk-sim-` prefix and are encrypted at rest
- Keys can be revoked at any time from the dashboard
- Use environment variables to store keys — never hardcode them in source code
- For browser-based applications, use a backend proxy to avoid exposing keys to the client
<Callout type="warn">
Never expose your API key in client-side code. Use a server-side proxy to make authenticated requests on behalf of your frontend.
</Callout>


@@ -0,0 +1,210 @@
---
title: Getting Started
description: Base URL, first API call, response format, error handling, and pagination
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Step, Steps } from 'fumadocs-ui/components/steps'
## Base URL
All API requests are made to:
```
https://www.sim.ai
```
## Quick Start
<Steps>
<Step>
### Get your API key
Go to the Sim platform and navigate to **Settings**, then go to **Sim Keys** and click **Create**. See [Authentication](/api-reference/authentication) for details on key types.
</Step>
<Step>
### Find your workflow ID
Open a workflow in the Sim editor. The workflow ID is in the URL:
```
https://www.sim.ai/workspace/{workspaceId}/w/{workflowId}
```
You can also use the [List Workflows](/api-reference/workflows/listWorkflows) endpoint to get all workflow IDs in a workspace.
</Step>
<Step>
### Deploy your workflow
A workflow must be deployed before it can be executed via the API. Click the **Deploy** button in the editor toolbar, or use the dashboard to manage deployments.
</Step>
<Step>
### Make your first request
<Tabs items={['curl', 'TypeScript', 'Python']}>
<Tab value="curl">
```bash
curl -X POST https://www.sim.ai/api/workflows/{workflowId}/execute \
-H "Content-Type: application/json" \
-H "X-API-Key: YOUR_API_KEY" \
-d '{"inputs": {}}'
```
</Tab>
<Tab value="TypeScript">
```typescript
const response = await fetch(
`https://www.sim.ai/api/workflows/${workflowId}/execute`,
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-API-Key': process.env.SIM_API_KEY!,
},
body: JSON.stringify({ inputs: {} }),
}
)
const data = await response.json()
console.log(data.output)
```
</Tab>
<Tab value="Python">
```python
import requests
import os
response = requests.post(
f"https://www.sim.ai/api/workflows/{workflow_id}/execute",
headers={
"Content-Type": "application/json",
"X-API-Key": os.environ["SIM_API_KEY"],
},
json={"inputs": {}},
)
data = response.json()
print(data["output"])
```
</Tab>
</Tabs>
</Step>
</Steps>
## Sync vs Async Execution
By default, workflow executions are **synchronous** — the API blocks until the workflow completes and returns the result directly.
For long-running workflows, use **asynchronous execution** by passing `async: true`:
```bash
curl -X POST https://www.sim.ai/api/workflows/{workflowId}/execute \
-H "Content-Type: application/json" \
-H "X-API-Key: YOUR_API_KEY" \
-d '{"inputs": {}, "async": true}'
```
This returns immediately with a `taskId`:
```json
{
"success": true,
"taskId": "job_abc123",
"status": "queued"
}
```
Poll the [Get Job Status](/api-reference/workflows/getJobStatus) endpoint until the status is `completed` or `failed`:
```bash
curl https://www.sim.ai/api/jobs/{taskId} \
-H "X-API-Key: YOUR_API_KEY"
```
<Callout type="info">
Job status transitions follow: `queued` → `processing` → `completed` or `failed`. The `output` field is only present when status is `completed`.
</Callout>
## Response Format
Successful responses include an `output` object with your workflow results and a `limits` object with your current rate limit and usage status:
```json
{
"success": true,
"output": {
"result": "Hello, world!"
},
"limits": {
"workflowExecutionRateLimit": {
"sync": {
"requestsPerMinute": 60,
"maxBurst": 10,
"remaining": 59,
"resetAt": "2025-01-01T00:01:00Z"
},
"async": {
"requestsPerMinute": 30,
"maxBurst": 5,
"remaining": 30,
"resetAt": "2025-01-01T00:01:00Z"
}
},
"usage": {
"currentPeriodCost": 1.25,
"limit": 50.00,
"plan": "pro",
"isExceeded": false
}
}
}
```
## Error Handling
The API uses standard HTTP status codes. Error responses include a human-readable `error` message:
```json
{
"error": "Workflow not found"
}
```
| Status | Meaning | What to do |
| --- | --- | --- |
| `400` | Invalid request parameters | Check the `details` array for specific field errors |
| `401` | Missing or invalid API key | Verify your `X-API-Key` header |
| `403` | Access denied | Check you have permission for this resource |
| `404` | Resource not found | Verify the ID exists and belongs to your workspace |
| `429` | Rate limit exceeded | Wait for the duration in the `Retry-After` header |
<Callout type="info">
Use the [Get Usage Limits](/api-reference/usage/getUsageLimits) endpoint to check your current rate limit status and billing usage at any time.
</Callout>
## Rate Limits
Rate limits depend on your subscription plan and apply separately to synchronous and asynchronous executions. Every execution response includes a `limits` object showing your current rate limit status.
When rate limited, the API returns a `429` response with a `Retry-After` header indicating how many seconds to wait before retrying.
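A hypothetical helper for honoring the `Retry-After` header might look like this (the header carries seconds, per the description above; the `default` fallback is an assumption):

```python
def retry_delay(status_code: int, headers: dict, default: float = 1.0):
    """Return seconds to wait before retrying, or None when the response
    was not rate limited. Hypothetical sketch of the documented behavior."""
    if status_code != 429:
        return None
    try:
        return float(headers.get("Retry-After", default))
    except (TypeError, ValueError):
        return default
```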
## Pagination
List endpoints (workflows, logs, audit logs) use **cursor-based pagination**:
```bash
# First page
curl "https://www.sim.ai/api/v1/logs?limit=20" \
-H "X-API-Key: YOUR_API_KEY"
# Next page — use the nextCursor from the previous response
curl "https://www.sim.ai/api/v1/logs?limit=20&cursor=abc123" \
-H "X-API-Key: YOUR_API_KEY"
```
The response includes a `nextCursor` field. When `nextCursor` is absent or `null`, you have reached the last page.
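The loop can be sketched as follows; `nextCursor` is as documented above, while the `data` key holding each page's items is an assumption (check the specific endpoint's response shape):

```python
def fetch_all_pages(fetch_page, limit=20):
    """Collect every item from a cursor-paginated endpoint.

    fetch_page(limit=..., cursor=...) must return one decoded page;
    the 'data' item key is assumed for illustration.
    """
    items, cursor = [], None
    while True:
        page = fetch_page(limit=limit, cursor=cursor)
        items.extend(page.get("data", []))
        cursor = page.get("nextCursor")
        if not cursor:  # absent or null: last page reached
            return items
```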


@@ -0,0 +1,16 @@
{
"title": "API Reference",
"root": true,
"pages": [
"getting-started",
"authentication",
"---SDKs---",
"python",
"typescript",
"---Endpoints---",
"(generated)/workflows",
"(generated)/logs",
"(generated)/usage",
"(generated)/audit-logs"
]
}


@@ -0,0 +1,761 @@
---
title: Python
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Card, Cards } from 'fumadocs-ui/components/card'
import { Step, Steps } from 'fumadocs-ui/components/steps'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
The official Python SDK for Sim lets you execute workflows programmatically from your Python applications.
<Callout type="info">
The Python SDK supports Python 3.8+ with async execution support, automatic rate limiting with exponential backoff, and usage tracking.
</Callout>
## Installation
Install the SDK using pip:
```bash
pip install simstudio-sdk
```
## Quick Start
Here's a simple example to get you started:
```python
from simstudio import SimStudioClient
# Initialize the client
client = SimStudioClient(
api_key="your-api-key-here",
base_url="https://sim.ai" # optional, defaults to https://sim.ai
)
# Execute a workflow
try:
result = client.execute_workflow("workflow-id")
print("Workflow executed successfully:", result)
except Exception as error:
print("Workflow execution failed:", error)
```
## API Reference
### SimStudioClient
#### Constructor
```python
SimStudioClient(api_key: str, base_url: str = "https://sim.ai")
```
**Parameters:**
- `api_key` (str): Your Sim API key
- `base_url` (str, optional): Base URL for the Sim API
#### Methods
##### execute_workflow()
Execute a workflow with optional input data.
```python
result = client.execute_workflow(
"workflow-id",
input_data={"message": "Hello, world!"},
timeout=30.0 # 30 seconds
)
```
**Parameters:**
- `workflow_id` (str): The ID of the workflow to execute
- `input_data` (dict, optional): Input data to pass to the workflow
- `timeout` (float, optional): Timeout in seconds (default: 30.0)
- `stream` (bool, optional): Enable streaming responses (default: False)
- `selected_outputs` (list[str], optional): Block outputs to stream in `blockName.attribute` format (e.g., `["agent1.content"]`)
- `async_execution` (bool, optional): Execute asynchronously (default: False)
**Returns:** `WorkflowExecutionResult | AsyncExecutionResult`
When `async_execution=True`, returns immediately with a task ID for polling. Otherwise, waits for completion.
##### get_workflow_status()
Get the status of a workflow (deployment status, etc.).
```python
status = client.get_workflow_status("workflow-id")
print("Is deployed:", status.is_deployed)
```
**Parameters:**
- `workflow_id` (str): The ID of the workflow
**Returns:** `WorkflowStatus`
##### validate_workflow()
Validate that a workflow is ready for execution.
```python
is_ready = client.validate_workflow("workflow-id")
if is_ready:
# Workflow is deployed and ready
pass
```
**Parameters:**
- `workflow_id` (str): The ID of the workflow
**Returns:** `bool`
##### get_job_status()
Get the status of an async job execution.
```python
status = client.get_job_status("task-id-from-async-execution")
print("Status:", status["status"]) # 'queued', 'processing', 'completed', 'failed'
if status["status"] == "completed":
print("Output:", status["output"])
```
**Parameters:**
- `task_id` (str): The task ID returned from async execution
**Returns:** `Dict[str, Any]`
**Response fields:**
- `success` (bool): Whether the request was successful
- `taskId` (str): The task ID
- `status` (str): One of `'queued'`, `'processing'`, `'completed'`, `'failed'`, `'cancelled'`
- `metadata` (dict): Contains `startedAt`, `completedAt`, and `duration`
- `output` (any, optional): The workflow output (when completed)
- `error` (any, optional): Error details (when failed)
- `estimatedDuration` (int, optional): Estimated duration in milliseconds (when processing/queued)
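When polling, the terminal states listed above can be checked with a small helper (hypothetical names, based only on the fields documented here):

```python
TERMINAL_STATES = {"completed", "failed", "cancelled"}

def is_done(status: dict) -> bool:
    """True once a job has reached a terminal state."""
    return status.get("status") in TERMINAL_STATES

def job_result(status: dict):
    """Return the output of a completed job, or raise for other states."""
    state = status.get("status")
    if state == "completed":
        return status.get("output")
    if state in ("failed", "cancelled"):
        raise RuntimeError(f"job {status.get('taskId')} {state}: {status.get('error')}")
    raise RuntimeError(f"job still {state}")
```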
##### execute_with_retry()
Execute a workflow with automatic retry on rate limit errors using exponential backoff.
```python
result = client.execute_with_retry(
"workflow-id",
input_data={"message": "Hello"},
timeout=30.0,
max_retries=3, # Maximum number of retries
initial_delay=1.0, # Initial delay in seconds
max_delay=30.0, # Maximum delay in seconds
backoff_multiplier=2.0 # Exponential backoff multiplier
)
```
**Parameters:**
- `workflow_id` (str): The ID of the workflow to execute
- `input_data` (dict, optional): Input data to pass to the workflow
- `timeout` (float, optional): Timeout in seconds
- `stream` (bool, optional): Enable streaming responses
- `selected_outputs` (list, optional): Block outputs to stream
- `async_execution` (bool, optional): Execute asynchronously
- `max_retries` (int, optional): Maximum number of retries (default: 3)
- `initial_delay` (float, optional): Initial delay in seconds (default: 1.0)
- `max_delay` (float, optional): Maximum delay in seconds (default: 30.0)
- `backoff_multiplier` (float, optional): Backoff multiplier (default: 2.0)
**Returns:** `WorkflowExecutionResult | AsyncExecutionResult`
The retry logic uses exponential backoff (1s → 2s → 4s → 8s...) with ±25% jitter to prevent thundering herd. If the API provides a `retry-after` header, it will be used instead.
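Ignoring jitter, the capped delay for a given attempt can be sketched as follows (an illustration of the documented schedule, not the SDK's internals):

```python
def capped_delay(attempt, initial_delay=1.0, max_delay=30.0,
                 backoff_multiplier=2.0, retry_after=None):
    """Delay in seconds before retry `attempt` (0-based), before jitter.

    A server-provided retry-after value takes precedence, as documented.
    """
    if retry_after is not None:
        return float(retry_after)
    return min(initial_delay * backoff_multiplier ** attempt, max_delay)
```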
##### get_rate_limit_info()
Get the current rate limit information from the last API response.
```python
rate_limit_info = client.get_rate_limit_info()
if rate_limit_info:
print("Limit:", rate_limit_info.limit)
print("Remaining:", rate_limit_info.remaining)
print("Reset:", datetime.fromtimestamp(rate_limit_info.reset))
```
**Returns:** `RateLimitInfo | None`
##### get_usage_limits()
Get current usage limits and quota information for your account.
```python
limits = client.get_usage_limits()
print("Sync requests remaining:", limits.rate_limit["sync"]["remaining"])
print("Async requests remaining:", limits.rate_limit["async"]["remaining"])
print("Current period cost:", limits.usage["currentPeriodCost"])
print("Plan:", limits.usage["plan"])
```
**Returns:** `UsageLimits`
**Response structure:**
```python
{
"success": bool,
"rateLimit": {
"sync": {
"isLimited": bool,
"limit": int,
"remaining": int,
"resetAt": str
},
"async": {
"isLimited": bool,
"limit": int,
"remaining": int,
"resetAt": str
},
"authType": str # 'api' or 'manual'
},
"usage": {
"currentPeriodCost": float,
"limit": float,
"plan": str # e.g., 'free', 'pro'
}
}
```
##### set_api_key()
Update the API key.
```python
client.set_api_key("new-api-key")
```
##### set_base_url()
Update the base URL.
```python
client.set_base_url("https://my-custom-domain.com")
```
##### close()
Close the underlying HTTP session.
```python
client.close()
```
## Data Classes
### WorkflowExecutionResult
```python
@dataclass
class WorkflowExecutionResult:
    success: bool
    output: Optional[Any] = None
    error: Optional[str] = None
    logs: Optional[List[Any]] = None
    metadata: Optional[Dict[str, Any]] = None
    trace_spans: Optional[List[Any]] = None
    total_duration: Optional[float] = None
```
### AsyncExecutionResult
```python
@dataclass
class AsyncExecutionResult:
    success: bool
    task_id: str
    status: str  # 'queued'
    created_at: str
    links: Dict[str, str]  # e.g., {"status": "/api/jobs/{taskId}"}
```
### WorkflowStatus
```python
@dataclass
class WorkflowStatus:
    is_deployed: bool
    deployed_at: Optional[str] = None
    needs_redeployment: bool = False
```
### RateLimitInfo
```python
@dataclass
class RateLimitInfo:
    limit: int
    remaining: int
    reset: int
    retry_after: Optional[int] = None
```
### UsageLimits
```python
@dataclass
class UsageLimits:
    success: bool
    rate_limit: Dict[str, Any]
    usage: Dict[str, Any]
```
### SimStudioError
```python
class SimStudioError(Exception):
    def __init__(self, message: str, code: Optional[str] = None, status: Optional[int] = None):
        super().__init__(message)
        self.code = code
        self.status = status
```
**Common error codes:**
- `UNAUTHORIZED`: Invalid API key
- `TIMEOUT`: Request timed out
- `RATE_LIMIT_EXCEEDED`: Rate limit exceeded
- `USAGE_LIMIT_EXCEEDED`: Usage limit exceeded
- `EXECUTION_ERROR`: Workflow execution failed
## Examples
### Basic Workflow Execution
<Steps>
<Step title="Initialize the client">
Set up the SimStudioClient with your API key.
</Step>
<Step title="Validate the workflow">
Check if the workflow is deployed and ready for execution.
</Step>
<Step title="Execute the workflow">
Run the workflow with your input data.
</Step>
<Step title="Handle the result">
Process the execution result and handle any errors.
</Step>
</Steps>
```python
import os
from simstudio import SimStudioClient

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def run_workflow():
    try:
        # Check if the workflow is ready
        is_ready = client.validate_workflow("my-workflow-id")
        if not is_ready:
            raise Exception("Workflow is not deployed or ready")

        # Execute the workflow
        result = client.execute_workflow(
            "my-workflow-id",
            input_data={
                "message": "Process this data",
                "user_id": "12345"
            }
        )

        if result.success:
            print("Output:", result.output)
            print("Duration:", result.metadata.get("duration") if result.metadata else None)
        else:
            print("Workflow failed:", result.error)
    except Exception as error:
        print("Error:", error)

run_workflow()
```
### Error Handling
Handle different types of errors that may occur during workflow execution:
```python
from simstudio import SimStudioClient, SimStudioError
import os

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def execute_with_error_handling():
    try:
        result = client.execute_workflow("workflow-id")
        return result
    except SimStudioError as error:
        if error.code == "UNAUTHORIZED":
            print("Invalid API key")
        elif error.code == "TIMEOUT":
            print("Workflow execution timed out")
        elif error.code == "USAGE_LIMIT_EXCEEDED":
            print("Usage limit exceeded")
        elif error.code == "INVALID_JSON":
            print("Invalid JSON in request body")
        else:
            print(f"Workflow error: {error}")
        raise
    except Exception as error:
        print(f"Unexpected error: {error}")
        raise
```
### Context Manager Usage
Use the client as a context manager to automatically handle resource cleanup:
```python
from simstudio import SimStudioClient
import os

# Using a context manager to automatically close the session
with SimStudioClient(api_key=os.getenv("SIM_API_KEY")) as client:
    result = client.execute_workflow("workflow-id")
    print("Result:", result)
# The session is automatically closed here
```
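Under the hood, context-manager support only requires `__enter__` and `__exit__` methods that delegate to `close()`. A minimal sketch of the pattern (the class below is illustrative, not the SDK's internals):

```python
class ManagedClient:
    """Illustrates the context-manager pattern the client follows."""

    def __init__(self):
        self.closed = False

    def close(self):
        # In the real client this would release the HTTP session
        self.closed = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.close()
        return False  # do not suppress exceptions

with ManagedClient() as c:
    pass
print(c.closed)  # True
```

Because `__exit__` returns `False`, any exception raised inside the `with` block still propagates after the session is closed.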
### Batch Workflow Execution
Execute multiple workflows efficiently:
```python
from simstudio import SimStudioClient
import os

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def execute_workflows_batch(workflow_data_pairs):
    """Execute multiple workflows with different input data."""
    results = []
    for workflow_id, input_data in workflow_data_pairs:
        try:
            # Validate the workflow before execution
            if not client.validate_workflow(workflow_id):
                print(f"Skipping {workflow_id}: not deployed")
                continue

            result = client.execute_workflow(workflow_id, input_data)
            results.append({
                "workflow_id": workflow_id,
                "success": result.success,
                "output": result.output,
                "error": result.error
            })
        except Exception as error:
            results.append({
                "workflow_id": workflow_id,
                "success": False,
                "error": str(error)
            })
    return results

# Example usage
workflows = [
    ("workflow-1", {"type": "analysis", "data": "sample1"}),
    ("workflow-2", {"type": "processing", "data": "sample2"}),
]

results = execute_workflows_batch(workflows)
for result in results:
    print(f"Workflow {result['workflow_id']}: {'Success' if result['success'] else 'Failed'}")
```
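If the workflows are independent, the sequential loop above can also be parallelized with a thread pool, since the client's calls are blocking. A sketch (this assumes `client.execute_workflow` is safe to call from multiple threads; verify before relying on it):

```python
from concurrent.futures import ThreadPoolExecutor

def execute_workflows_concurrently(client, workflow_data_pairs, max_workers=4):
    """Run several blocking execute_workflow calls in parallel threads."""
    def run_one(pair):
        workflow_id, input_data = pair
        try:
            result = client.execute_workflow(workflow_id, input_data)
            return {"workflow_id": workflow_id, "success": result.success}
        except Exception as error:
            return {"workflow_id": workflow_id, "success": False, "error": str(error)}

    # pool.map preserves the input order in its results
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_one, workflow_data_pairs))
```

Keep `max_workers` modest so concurrent executions stay within your sync rate limit.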
### Async Workflow Execution
Execute workflows asynchronously for long-running tasks:
```python
import os
import time
from simstudio import SimStudioClient

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def execute_async():
    try:
        # Start an async execution
        result = client.execute_workflow(
            "workflow-id",
            input_data={"data": "large dataset"},
            async_execution=True  # Execute asynchronously
        )

        # Check whether the result is an async execution
        if hasattr(result, 'task_id'):
            print(f"Task ID: {result.task_id}")
            print(f"Status endpoint: {result.links['status']}")

            # Poll for completion
            status = client.get_job_status(result.task_id)
            while status["status"] in ["queued", "processing"]:
                print(f"Current status: {status['status']}")
                time.sleep(2)  # Wait 2 seconds between polls
                status = client.get_job_status(result.task_id)

            if status["status"] == "completed":
                print("Workflow completed!")
                print(f"Output: {status['output']}")
                print(f"Duration: {status['metadata']['duration']}")
            else:
                print(f"Workflow failed: {status['error']}")
    except Exception as error:
        print(f"Error: {error}")

execute_async()
```
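The open-ended polling loop above can hang if a job never leaves the queue. A bounded variant with a timeout can be sketched as a small helper built on `get_job_status` (hypothetical, not part of the SDK):

```python
import time

def wait_for_job(client, task_id, timeout=300.0, poll_interval=2.0):
    """Poll get_job_status until the job finishes or the timeout expires."""
    deadline = time.monotonic() + timeout
    while True:
        status = client.get_job_status(task_id)
        # Any status other than queued/processing is terminal
        if status["status"] not in ("queued", "processing"):
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"Job {task_id} still {status['status']} after {timeout}s")
        time.sleep(poll_interval)
```

For long-running jobs, consider raising `poll_interval` to reduce API calls against your rate limit.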
### Rate Limiting and Retry
Handle rate limits automatically with exponential backoff:
```python
import os
from simstudio import SimStudioClient, SimStudioError

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def execute_with_retry_handling():
    try:
        # Automatically retries on rate limit errors
        result = client.execute_with_retry(
            "workflow-id",
            input_data={"message": "Process this"},
            max_retries=5,
            initial_delay=1.0,
            max_delay=60.0,
            backoff_multiplier=2.0
        )
        print(f"Success: {result}")
    except SimStudioError as error:
        if error.code == "RATE_LIMIT_EXCEEDED":
            print("Rate limit exceeded after all retries")
            # Check the rate limit info
            rate_limit_info = client.get_rate_limit_info()
            if rate_limit_info:
                from datetime import datetime
                reset_time = datetime.fromtimestamp(rate_limit_info.reset)
                print(f"Rate limit resets at: {reset_time}")

execute_with_retry_handling()
```
### Usage Monitoring
Monitor your account usage and limits:
```python
import os
from simstudio import SimStudioClient

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def check_usage():
    try:
        limits = client.get_usage_limits()

        print("=== Rate Limits ===")
        print("Sync requests:")
        print(f"  Limit: {limits.rate_limit['sync']['limit']}")
        print(f"  Remaining: {limits.rate_limit['sync']['remaining']}")
        print(f"  Resets at: {limits.rate_limit['sync']['resetAt']}")
        print(f"  Is limited: {limits.rate_limit['sync']['isLimited']}")

        print("\nAsync requests:")
        print(f"  Limit: {limits.rate_limit['async']['limit']}")
        print(f"  Remaining: {limits.rate_limit['async']['remaining']}")
        print(f"  Resets at: {limits.rate_limit['async']['resetAt']}")
        print(f"  Is limited: {limits.rate_limit['async']['isLimited']}")

        print("\n=== Usage ===")
        print(f"Current period cost: ${limits.usage['currentPeriodCost']:.2f}")
        print(f"Limit: ${limits.usage['limit']:.2f}")
        print(f"Plan: {limits.usage['plan']}")

        percent_used = (limits.usage['currentPeriodCost'] / limits.usage['limit']) * 100
        print(f"Usage: {percent_used:.1f}%")
        if percent_used > 80:
            print("⚠️ Warning: You are approaching your usage limit!")
    except Exception as error:
        print(f"Error checking usage: {error}")

check_usage()
```
### Streaming Workflow Execution
Execute workflows with real-time streaming responses:
```python
from simstudio import SimStudioClient
import os

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def execute_with_streaming():
    """Execute a workflow with streaming enabled."""
    try:
        # Enable streaming for specific block outputs
        result = client.execute_workflow(
            "workflow-id",
            input_data={"message": "Count to five"},
            stream=True,
            selected_outputs=["agent1.content"]  # Use the blockName.attribute format
        )
        print("Workflow result:", result)
    except Exception as error:
        print("Error:", error)

execute_with_streaming()
```
The streaming response follows the Server-Sent Events (SSE) format:
```
data: {"blockId":"7b7735b9-19e5-4bd6-818b-46aae2596e9f","chunk":"One"}
data: {"blockId":"7b7735b9-19e5-4bd6-818b-46aae2596e9f","chunk":", two"}
data: {"event":"done","success":true,"output":{},"metadata":{"duration":610}}
data: [DONE]
```
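A small parser for that stream can be sketched as a generator that yields parsed events and stops at the `[DONE]` sentinel (illustrative, not part of the SDK):

```python
import json

def parse_sse_lines(lines):
    """Yield parsed JSON events from 'data: ...' lines, stopping at [DONE]."""
    for line in lines:
        if not line.startswith("data: "):
            continue  # ignore comments and blank keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            return
        try:
            yield json.loads(payload)
        except json.JSONDecodeError:
            continue  # skip malformed frames

stream = [
    'data: {"blockId":"b1","chunk":"One"}',
    'data: {"event":"done","success":true}',
    'data: [DONE]',
]
chunks = [e.get("chunk") for e in parse_sse_lines(stream) if "chunk" in e]
print(chunks)  # ['One']
```

The same loop works over `response.iter_lines()` from `requests` once each line is decoded to text.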
**Flask Streaming Example:**
```python
from flask import Flask, Response, stream_with_context
import requests
import json
import os

app = Flask(__name__)

@app.route('/stream-workflow')
def stream_workflow():
    """Stream workflow execution to the client."""
    def generate():
        response = requests.post(
            'https://sim.ai/api/workflows/WORKFLOW_ID/execute',
            headers={
                'Content-Type': 'application/json',
                'X-API-Key': os.getenv('SIM_API_KEY')
            },
            json={
                'message': 'Generate a story',
                'stream': True,
                'selectedOutputs': ['agent1.content']
            },
            stream=True
        )

        for line in response.iter_lines():
            if line:
                decoded_line = line.decode('utf-8')
                if decoded_line.startswith('data: '):
                    data = decoded_line[6:]  # Remove the 'data: ' prefix
                    if data == '[DONE]':
                        break
                    try:
                        parsed = json.loads(data)
                        if 'chunk' in parsed:
                            yield f"data: {json.dumps(parsed)}\n\n"
                        elif parsed.get('event') == 'done':
                            yield f"data: {json.dumps(parsed)}\n\n"
                            print("Execution complete:", parsed.get('metadata'))
                    except json.JSONDecodeError:
                        pass

    return Response(
        stream_with_context(generate()),
        mimetype='text/event-stream'
    )

if __name__ == '__main__':
    app.run(debug=True)
```
### Environment Configuration
Configure the client using environment variables:
<Tabs items={['Development', 'Production']}>
<Tab value="Development">
```python
import os
from simstudio import SimStudioClient

# Development configuration
client = SimStudioClient(
    api_key=os.getenv("SIM_API_KEY"),
    base_url=os.getenv("SIM_BASE_URL", "https://sim.ai")
)
```
</Tab>
<Tab value="Production">
```python
import os
from simstudio import SimStudioClient

# Production configuration with error handling
api_key = os.getenv("SIM_API_KEY")
if not api_key:
    raise ValueError("SIM_API_KEY environment variable is required")

client = SimStudioClient(
    api_key=api_key,
    base_url=os.getenv("SIM_BASE_URL", "https://sim.ai")
)
```
</Tab>
</Tabs>
## Getting Your API Key
<Steps>
<Step title="Log in to Sim">
Navigate to [Sim](https://sim.ai) and log in to your account.
</Step>
<Step title="Open your workflow">
Navigate to the workflow you want to execute programmatically.
</Step>
<Step title="Deploy your workflow">
Click on "Deploy" to deploy your workflow if it hasn't been deployed yet.
</Step>
<Step title="Create or select an API key">
During the deployment process, select or create an API key.
</Step>
<Step title="Copy the API key">
Copy the API key to use in your Python application.
</Step>
</Steps>
## Requirements
- Python 3.8+
- requests >= 2.25.0
## License
Apache-2.0

File diff suppressed because it is too large

View File

@@ -17,7 +17,7 @@ curl -H "x-api-key: YOUR_API_KEY" \
https://sim.ai/api/v1/logs?workspaceId=YOUR_WORKSPACE_ID
```
-You can generate API keys from your user settings in the Sim dashboard.
+You can generate API keys in the Sim platform: navigate to **Settings**, then go to **Sim Keys** and click **Create**.
## Logs API

View File

@@ -16,7 +16,6 @@
"credentials",
"execution",
"permissions",
-"sdks",
"self-hosting",
"./enterprise/index",
"./keyboard-shortcuts/index"

View File

@@ -113,7 +113,7 @@ Users can create two types of environment variables:
### Personal Environment Variables
- Only visible to the individual user
- Available in all workflows they run
-- Managed in user settings
+- Managed in **Settings**, under **Secrets**
### Workspace Environment Variables
- **Read permission**: Can see variable names and values

View File

@@ -0,0 +1,313 @@
---
title: Amplitude
description: Track events and query analytics from Amplitude
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="amplitude"
color="#1B1F3B"
/>
{/* MANUAL-CONTENT-START:intro */}
[Amplitude](https://amplitude.com/) is a leading digital analytics platform that helps teams understand user behavior, measure product performance, and make data-driven decisions at scale.
The Amplitude integration in Sim connects with the Amplitude HTTP and Dashboard REST APIs using API key and secret key authentication, allowing your agents to track events, manage user properties, and query analytics data programmatically. This API-based approach ensures secure access to Amplitude's full suite of analytics capabilities.
With the Amplitude integration, your agents can:
- **Track events**: Send custom events to Amplitude with rich properties, revenue data, and user context directly from your workflows
- **Identify users**: Set and update user properties using operations like $set, $setOnce, $add, $append, and $unset to maintain detailed user profiles
- **Search for users**: Look up users by User ID, Device ID, or Amplitude ID to retrieve profile information and metadata
- **Query event analytics**: Run event segmentation queries with grouping, custom metrics (uniques, totals, averages, DAU percentages), and flexible date ranges
- **Monitor user activity**: Retrieve event streams for specific users to understand individual user journeys and behavior patterns
- **Analyze active users**: Get active or new user counts over time with daily, weekly, or monthly granularity
- **Track revenue**: Access revenue LTV metrics including ARPU, ARPPU, total revenue, and paying user counts
In Sim, the Amplitude integration enables powerful analytics automation scenarios. Your agents can track product events in real time based on workflow triggers, enrich user profiles as new data becomes available, query segmentation data to inform downstream decisions, or build monitoring workflows that alert on changes in key metrics. By connecting Sim with Amplitude, you can build intelligent agents that bridge the gap between analytics insights and automated action, enabling data-driven workflows that respond to user behavior patterns and product performance trends.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Integrate Amplitude into your workflow to track events, identify users and groups, search for users, query analytics, and retrieve revenue data.
## Tools
### `amplitude_send_event`
Track an event in Amplitude using the HTTP V2 API.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Amplitude API Key |
| `userId` | string | No | User ID \(required if no device_id\) |
| `deviceId` | string | No | Device ID \(required if no user_id\) |
| `eventType` | string | Yes | Name of the event \(e.g., "page_view", "purchase"\) |
| `eventProperties` | string | No | JSON object of custom event properties |
| `userProperties` | string | No | JSON object of user properties to set \(supports $set, $setOnce, $add, $append, $unset\) |
| `time` | string | No | Event timestamp in milliseconds since epoch |
| `sessionId` | string | No | Session start time in milliseconds since epoch |
| `insertId` | string | No | Unique ID for deduplication \(within 7-day window\) |
| `appVersion` | string | No | Application version string |
| `platform` | string | No | Platform \(e.g., "Web", "iOS", "Android"\) |
| `country` | string | No | Two-letter country code |
| `language` | string | No | Language code \(e.g., "en"\) |
| `ip` | string | No | IP address for geo-location |
| `price` | string | No | Price of the item purchased |
| `quantity` | string | No | Quantity of items purchased |
| `revenue` | string | No | Revenue amount |
| `productId` | string | No | Product identifier |
| `revenueType` | string | No | Revenue type \(e.g., "purchase", "refund"\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `code` | number | Response code \(200 for success\) |
| `eventsIngested` | number | Number of events ingested |
| `payloadSizeBytes` | number | Size of the payload in bytes |
| `serverUploadTime` | number | Server upload timestamp |
### `amplitude_identify_user`
Set user properties in Amplitude using the Identify API. Supports $set, $setOnce, $add, $append, $unset operations.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Amplitude API Key |
| `userId` | string | No | User ID \(required if no device_id\) |
| `deviceId` | string | No | Device ID \(required if no user_id\) |
| `userProperties` | string | Yes | JSON object of user properties. Use operations like $set, $setOnce, $add, $append, $unset. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `code` | number | HTTP response status code |
| `message` | string | Response message |
### `amplitude_group_identify`
Set group-level properties in Amplitude. Supports $set, $setOnce, $add, $append, $unset operations.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Amplitude API Key |
| `groupType` | string | Yes | Group classification \(e.g., "company", "org_id"\) |
| `groupValue` | string | Yes | Specific group identifier \(e.g., "Acme Corp"\) |
| `groupProperties` | string | Yes | JSON object of group properties. Use operations like $set, $setOnce, $add, $append, $unset. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `code` | number | HTTP response status code |
| `message` | string | Response message |
### `amplitude_user_search`
Search for a user by User ID, Device ID, or Amplitude ID using the Dashboard REST API.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Amplitude API Key |
| `secretKey` | string | Yes | Amplitude Secret Key |
| `user` | string | Yes | User ID, Device ID, or Amplitude ID to search for |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `matches` | array | List of matching users |
| ↳ `amplitudeId` | number | Amplitude internal user ID |
| ↳ `userId` | string | External user ID |
| `type` | string | Match type \(e.g., match_user_or_device_id\) |
### `amplitude_user_activity`
Get the event stream for a specific user by their Amplitude ID.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Amplitude API Key |
| `secretKey` | string | Yes | Amplitude Secret Key |
| `amplitudeId` | string | Yes | Amplitude internal user ID |
| `offset` | string | No | Offset for pagination \(default 0\) |
| `limit` | string | No | Maximum number of events to return \(default 1000, max 1000\) |
| `direction` | string | No | Sort direction: "latest" or "earliest" \(default: latest\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `events` | array | List of user events |
| ↳ `eventType` | string | Type of event |
| ↳ `eventTime` | string | Event timestamp |
| ↳ `eventProperties` | json | Custom event properties |
| ↳ `userProperties` | json | User properties at event time |
| ↳ `sessionId` | number | Session ID |
| ↳ `platform` | string | Platform |
| ↳ `country` | string | Country |
| ↳ `city` | string | City |
| `userData` | json | User metadata |
| ↳ `userId` | string | External user ID |
| ↳ `canonicalAmplitudeId` | number | Canonical Amplitude ID |
| ↳ `numEvents` | number | Total event count |
| ↳ `numSessions` | number | Total session count |
| ↳ `platform` | string | Primary platform |
| ↳ `country` | string | Country |
### `amplitude_user_profile`
Get a user profile including properties, cohort memberships, and computed properties.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `secretKey` | string | Yes | Amplitude Secret Key |
| `userId` | string | No | External user ID \(required if no device_id\) |
| `deviceId` | string | No | Device ID \(required if no user_id\) |
| `getAmpProps` | string | No | Include Amplitude user properties \(true/false, default: false\) |
| `getCohortIds` | string | No | Include cohort IDs the user belongs to \(true/false, default: false\) |
| `getComputations` | string | No | Include computed user properties \(true/false, default: false\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `userId` | string | External user ID |
| `deviceId` | string | Device ID |
| `ampProps` | json | Amplitude user properties \(library, first_used, last_used, custom properties\) |
| `cohortIds` | array | List of cohort IDs the user belongs to |
| `computations` | json | Computed user properties |
### `amplitude_event_segmentation`
Query event analytics data with segmentation. Get event counts, uniques, averages, and more.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Amplitude API Key |
| `secretKey` | string | Yes | Amplitude Secret Key |
| `eventType` | string | Yes | Event type name to analyze |
| `start` | string | Yes | Start date in YYYYMMDD format |
| `end` | string | Yes | End date in YYYYMMDD format |
| `metric` | string | No | Metric type: uniques, totals, pct_dau, average, histogram, sums, value_avg, or formula \(default: uniques\) |
| `interval` | string | No | Time interval: 1 \(daily\), 7 \(weekly\), or 30 \(monthly\) |
| `groupBy` | string | No | Property name to group by \(prefix custom user properties with "gp:"\) |
| `limit` | string | No | Maximum number of group-by values \(max 1000\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `series` | json | Time-series data arrays indexed by series |
| `seriesLabels` | array | Labels for each data series |
| `seriesCollapsed` | json | Collapsed aggregate totals per series |
| `xValues` | array | Date values for the x-axis |
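For reference, this tool wraps Amplitude's Dashboard REST API. A direct call can be sketched roughly as follows; the endpoint path and parameter names (`e`, `start`, `end`, `m`, `i`) are assumptions based on Amplitude's public docs, so verify them against the current API reference before use:

```python
import json

def build_segmentation_request(event_type, start, end, metric="uniques", interval=None):
    """Assemble the URL and query params for an event segmentation query."""
    params = {
        "e": json.dumps({"event_type": event_type}),  # event definition as JSON
        "start": start,  # YYYYMMDD
        "end": end,      # YYYYMMDD
        "m": metric,     # uniques, totals, pct_dau, average, ...
    }
    if interval is not None:
        params["i"] = interval  # 1 (daily), 7 (weekly), or 30 (monthly)
    return "https://amplitude.com/api/2/events/segmentation", params

url, params = build_segmentation_request("page_view", "20260201", "20260228")
# requests.get(url, params=params, auth=(API_KEY, SECRET_KEY))  # Basic auth;
# API_KEY and SECRET_KEY are placeholders for your project credentials
```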
### `amplitude_get_active_users`
Get active or new user counts over a date range from the Dashboard REST API.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Amplitude API Key |
| `secretKey` | string | Yes | Amplitude Secret Key |
| `start` | string | Yes | Start date in YYYYMMDD format |
| `end` | string | Yes | End date in YYYYMMDD format |
| `metric` | string | No | Metric type: "active" or "new" \(default: active\) |
| `interval` | string | No | Time interval: 1 \(daily\), 7 \(weekly\), or 30 \(monthly\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `series` | json | Array of data series with user counts per time interval |
| `seriesMeta` | array | Metadata labels for each data series \(e.g., segment names\) |
| `xValues` | array | Date values for the x-axis |
### `amplitude_realtime_active_users`
Get real-time active user counts at 5-minute granularity for the last 2 days.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Amplitude API Key |
| `secretKey` | string | Yes | Amplitude Secret Key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `series` | json | Array of data series with active user counts at 5-minute intervals |
| `seriesLabels` | array | Labels for each series \(e.g., "Today", "Yesterday"\) |
| `xValues` | array | Time values for the x-axis \(e.g., "15:00", "15:05"\) |
### `amplitude_list_events`
List all event types in the Amplitude project with their weekly totals and unique counts.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Amplitude API Key |
| `secretKey` | string | Yes | Amplitude Secret Key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `events` | array | List of event types in the project |
| ↳ `value` | string | Event type name |
| ↳ `displayName` | string | Event display name |
| ↳ `totals` | number | Weekly total count |
| ↳ `hidden` | boolean | Whether the event is hidden |
| ↳ `deleted` | boolean | Whether the event is deleted |
### `amplitude_get_revenue`
Get revenue LTV data including ARPU, ARPPU, total revenue, and paying user counts.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Amplitude API Key |
| `secretKey` | string | Yes | Amplitude Secret Key |
| `start` | string | Yes | Start date in YYYYMMDD format |
| `end` | string | Yes | End date in YYYYMMDD format |
| `metric` | string | No | Metric: 0 \(ARPU\), 1 \(ARPPU\), 2 \(Total Revenue\), 3 \(Paying Users\) |
| `interval` | string | No | Time interval: 1 \(daily\), 7 \(weekly\), or 30 \(monthly\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `series` | json | Array of revenue data series |
| `seriesLabels` | array | Labels for each data series |
| `xValues` | array | Date values for the x-axis |

View File

@@ -0,0 +1,473 @@
---
title: Ashby
description: Manage candidates, jobs, and applications in Ashby
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="ashby"
color="#5D4ED6"
/>
{/* MANUAL-CONTENT-START:intro */}
[Ashby](https://www.ashbyhq.com/) is an all-in-one recruiting platform that combines an applicant tracking system (ATS), CRM, scheduling, and analytics to help teams hire more effectively.
With Ashby, you can:
- **List and search candidates**: Browse your full candidate pipeline or search by name and email to quickly find specific people
- **Create candidates**: Add new candidates to your Ashby organization with contact details
- **View candidate details**: Retrieve full candidate profiles including tags, email, phone, and timestamps
- **Add notes to candidates**: Attach notes to candidate records to capture feedback, context, or follow-up items
- **List and view jobs**: Browse all open, closed, and archived job postings with location and department info
- **List applications**: View all applications across your organization with candidate and job details, status tracking, and pagination
In Sim, the Ashby integration enables your agents to programmatically manage your recruiting pipeline. Agents can search for candidates, create new candidate records, add notes after interviews, and monitor applications across jobs. This allows you to automate recruiting workflows like candidate intake, interview follow-ups, pipeline reporting, and cross-referencing candidates across roles.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Integrate Ashby into the workflow. Can list, search, create, and update candidates, list and get job details, create notes, list notes, list and get applications, create applications, and list offers.
## Tools
### `ashby_create_application`
Creates a new application for a candidate on a job. Optionally specify interview plan, stage, source, and credited user.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `candidateId` | string | Yes | The UUID of the candidate to consider for the job |
| `jobId` | string | Yes | The UUID of the job to consider the candidate for |
| `interviewPlanId` | string | No | UUID of the interview plan to use \(defaults to the job default plan\) |
| `interviewStageId` | string | No | UUID of the interview stage to place the application in \(defaults to first Lead stage\) |
| `sourceId` | string | No | UUID of the source to set on the application |
| `creditedToUserId` | string | No | UUID of the user the application is credited to |
| `createdAt` | string | No | ISO 8601 timestamp to set as the application creation date \(defaults to now\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Created application UUID |
| `status` | string | Application status \(Active, Hired, Archived, Lead\) |
| `candidate` | object | Associated candidate |
| ↳ `id` | string | Candidate UUID |
| ↳ `name` | string | Candidate name |
| `job` | object | Associated job |
| ↳ `id` | string | Job UUID |
| ↳ `title` | string | Job title |
| `currentInterviewStage` | object | Current interview stage |
| ↳ `id` | string | Stage UUID |
| ↳ `title` | string | Stage title |
| ↳ `type` | string | Stage type |
| `source` | object | Application source |
| ↳ `id` | string | Source UUID |
| ↳ `title` | string | Source title |
| `createdAt` | string | ISO 8601 creation timestamp |
| `updatedAt` | string | ISO 8601 last update timestamp |
### `ashby_create_candidate`
Creates a new candidate record in Ashby.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `name` | string | Yes | The candidate full name |
| `email` | string | No | Primary email address for the candidate |
| `emailType` | string | No | Email address type: Personal, Work, or Other \(default Work\) |
| `phoneNumber` | string | No | Primary phone number for the candidate |
| `phoneType` | string | No | Phone number type: Personal, Work, or Other \(default Work\) |
| `linkedInUrl` | string | No | LinkedIn profile URL |
| `githubUrl` | string | No | GitHub profile URL |
| `sourceId` | string | No | UUID of the source to attribute the candidate to |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Created candidate UUID |
| `name` | string | Full name |
| `primaryEmailAddress` | object | Primary email contact info |
| ↳ `value` | string | Email address |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary email |
| `primaryPhoneNumber` | object | Primary phone contact info |
| ↳ `value` | string | Phone number |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary phone |
| `createdAt` | string | ISO 8601 creation timestamp |
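The input table above maps onto a JSON payload. A hypothetical sketch of assembling it client-side (the field names mirror this table, but Ashby's actual wire format should be checked against its API documentation):

```python
def build_create_candidate_payload(name, email=None, email_type="Work",
                                   phone_number=None, phone_type="Work",
                                   linkedin_url=None, github_url=None,
                                   source_id=None):
    """Build a candidate-creation payload from the inputs described above."""
    payload = {"name": name}
    if email:
        payload["email"] = {"value": email, "type": email_type, "isPrimary": True}
    if phone_number:
        payload["phoneNumber"] = {"value": phone_number, "type": phone_type, "isPrimary": True}
    if linkedin_url:
        payload["linkedInUrl"] = linkedin_url
    if github_url:
        payload["githubUrl"] = github_url
    if source_id:
        payload["sourceId"] = source_id
    return payload

payload = build_create_candidate_payload("Ada Lovelace", email="ada@example.com")
```

Optional fields are omitted entirely rather than sent as nulls, which keeps the payload minimal.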
### `ashby_create_note`
Creates a note on a candidate in Ashby. Supports plain text and HTML content (bold, italic, underline, links, lists, code).
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `candidateId` | string | Yes | The UUID of the candidate to add the note to |
| `note` | string | Yes | The note content. If noteType is text/html, supports: &lt;b&gt;, &lt;i&gt;, &lt;u&gt;, &lt;a&gt;, &lt;ul&gt;, &lt;ol&gt;, &lt;li&gt;, &lt;code&gt;, &lt;pre&gt; |
| `noteType` | string | No | Content type of the note: text/plain \(default\) or text/html |
| `sendNotifications` | boolean | No | Whether to send notifications to subscribed users \(default false\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Created note UUID |
| `content` | string | Note content as stored |
| `author` | object | Note author |
| ↳ `id` | string | Author user UUID |
| ↳ `firstName` | string | First name |
| ↳ `lastName` | string | Last name |
| ↳ `email` | string | Email address |
| `createdAt` | string | ISO 8601 creation timestamp |
### `ashby_get_application`
Retrieves full details about a single application by its ID.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `applicationId` | string | Yes | The UUID of the application to fetch |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Application UUID |
| `status` | string | Application status \(Active, Hired, Archived, Lead\) |
| `candidate` | object | Associated candidate |
| ↳ `id` | string | Candidate UUID |
| ↳ `name` | string | Candidate name |
| `job` | object | Associated job |
| ↳ `id` | string | Job UUID |
| ↳ `title` | string | Job title |
| `currentInterviewStage` | object | Current interview stage |
| ↳ `id` | string | Stage UUID |
| ↳ `title` | string | Stage title |
| ↳ `type` | string | Stage type |
| `source` | object | Application source |
| ↳ `id` | string | Source UUID |
| ↳ `title` | string | Source title |
| `archiveReason` | object | Reason for archival |
| ↳ `id` | string | Reason UUID |
| ↳ `text` | string | Reason text |
| ↳ `reasonType` | string | Reason type |
| `archivedAt` | string | ISO 8601 archive timestamp |
| `createdAt` | string | ISO 8601 creation timestamp |
| `updatedAt` | string | ISO 8601 last update timestamp |
### `ashby_get_candidate`
Retrieves full details about a single candidate by their ID.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `candidateId` | string | Yes | The UUID of the candidate to fetch |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Candidate UUID |
| `name` | string | Full name |
| `primaryEmailAddress` | object | Primary email contact info |
| ↳ `value` | string | Email address |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary email |
| `primaryPhoneNumber` | object | Primary phone contact info |
| ↳ `value` | string | Phone number |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary phone |
| `profileUrl` | string | URL to the candidate's Ashby profile |
| `position` | string | Current position or title |
| `company` | string | Current company |
| `linkedInUrl` | string | LinkedIn profile URL |
| `githubUrl` | string | GitHub profile URL |
| `tags` | array | Tags applied to the candidate |
| ↳ `id` | string | Tag UUID |
| ↳ `title` | string | Tag title |
| `applicationIds` | array | IDs of associated applications |
| `createdAt` | string | ISO 8601 creation timestamp |
| `updatedAt` | string | ISO 8601 last update timestamp |
### `ashby_get_job`
Retrieves full details about a single job by its ID.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `jobId` | string | Yes | The UUID of the job to fetch |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Job UUID |
| `title` | string | Job title |
| `status` | string | Job status \(Open, Closed, Draft, Archived, On Hold\) |
| `employmentType` | string | Employment type \(FullTime, PartTime, Intern, Contract, Temporary\) |
| `departmentId` | string | Department UUID |
| `locationId` | string | Location UUID |
| `descriptionPlain` | string | Job description in plain text |
| `isArchived` | boolean | Whether the job is archived |
| `createdAt` | string | ISO 8601 creation timestamp |
| `updatedAt` | string | ISO 8601 last update timestamp |
### `ashby_list_applications`
Lists all applications in an Ashby organization with pagination and optional filters for status, job, candidate, and creation date.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `cursor` | string | No | Opaque pagination cursor from a previous response nextCursor value |
| `perPage` | number | No | Number of results per page \(default 100\) |
| `status` | string | No | Filter by application status: Active, Hired, Archived, or Lead |
| `jobId` | string | No | Filter applications by a specific job UUID |
| `candidateId` | string | No | Filter applications by a specific candidate UUID |
| `createdAfter` | string | No | Filter to applications created after this ISO 8601 timestamp \(e.g. 2024-01-01T00:00:00Z\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `applications` | array | List of applications |
| ↳ `id` | string | Application UUID |
| ↳ `status` | string | Application status \(Active, Hired, Archived, Lead\) |
| ↳ `candidate` | object | Associated candidate |
| ↳ `id` | string | Candidate UUID |
| ↳ `name` | string | Candidate name |
| ↳ `job` | object | Associated job |
| ↳ `id` | string | Job UUID |
| ↳ `title` | string | Job title |
| ↳ `currentInterviewStage` | object | Current interview stage |
| ↳ `id` | string | Stage UUID |
| ↳ `title` | string | Stage title |
| ↳ `type` | string | Stage type |
| ↳ `source` | object | Application source |
| ↳ `id` | string | Source UUID |
| ↳ `title` | string | Source title |
| ↳ `createdAt` | string | ISO 8601 creation timestamp |
| ↳ `updatedAt` | string | ISO 8601 last update timestamp |
| `moreDataAvailable` | boolean | Whether more pages of results exist |
| `nextCursor` | string | Opaque cursor for fetching the next page |
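All of the Ashby list tools share this cursor contract: pass `nextCursor` back as `cursor` until `moreDataAvailable` is `false`. A minimal sketch of that loop, where `fetch_page` is a hypothetical stand-in for however your workflow invokes the tool:

```python
# Generic cursor-pagination helper for the Ashby list tools.
# `fetch_page` is hypothetical: it takes a cursor (or None for the first page)
# and must return a dict shaped like the documented tool output.
def paginate(fetch_page, items_key):
    """Collect all items by following nextCursor until moreDataAvailable is False."""
    items, cursor = [], None
    while True:
        page = fetch_page(cursor)
        items.extend(page[items_key])
        if not page.get("moreDataAvailable"):
            return items
        cursor = page["nextCursor"]

# Demo with a stubbed two-page response mimicking `ashby_list_applications`:
pages = {
    None: {"applications": [{"id": "a1"}], "moreDataAvailable": True, "nextCursor": "c1"},
    "c1": {"applications": [{"id": "a2"}], "moreDataAvailable": False, "nextCursor": None},
}
all_apps = paginate(lambda cursor: pages[cursor], "applications")
```

The same helper works unchanged for `ashby_list_candidates`, `ashby_list_jobs`, `ashby_list_notes`, and `ashby_list_offers` by swapping `items_key`.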
### `ashby_list_candidates`
Lists all candidates in an Ashby organization with cursor-based pagination.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `cursor` | string | No | Opaque pagination cursor from a previous response nextCursor value |
| `perPage` | number | No | Number of results per page \(default 100\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `candidates` | array | List of candidates |
| ↳ `id` | string | Candidate UUID |
| ↳ `name` | string | Full name |
| ↳ `primaryEmailAddress` | object | Primary email contact info |
| ↳ `value` | string | Email address |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary email |
| ↳ `primaryPhoneNumber` | object | Primary phone contact info |
| ↳ `value` | string | Phone number |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary phone |
| ↳ `createdAt` | string | ISO 8601 creation timestamp |
| ↳ `updatedAt` | string | ISO 8601 last update timestamp |
| `moreDataAvailable` | boolean | Whether more pages of results exist |
| `nextCursor` | string | Opaque cursor for fetching the next page |
### `ashby_list_jobs`
Lists all jobs in an Ashby organization. By default returns Open, Closed, and Archived jobs. Specify status to filter.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `cursor` | string | No | Opaque pagination cursor from a previous response nextCursor value |
| `perPage` | number | No | Number of results per page \(default 100\) |
| `status` | string | No | Filter by job status: Open, Closed, Archived, or Draft |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `jobs` | array | List of jobs |
| ↳ `id` | string | Job UUID |
| ↳ `title` | string | Job title |
| ↳ `status` | string | Job status \(Open, Closed, Archived, Draft\) |
| ↳ `employmentType` | string | Employment type \(FullTime, PartTime, Intern, Contract, Temporary\) |
| ↳ `departmentId` | string | Department UUID |
| ↳ `locationId` | string | Location UUID |
| ↳ `createdAt` | string | ISO 8601 creation timestamp |
| ↳ `updatedAt` | string | ISO 8601 last update timestamp |
| `moreDataAvailable` | boolean | Whether more pages of results exist |
| `nextCursor` | string | Opaque cursor for fetching the next page |
### `ashby_list_notes`
Lists all notes on a candidate with pagination support.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `candidateId` | string | Yes | The UUID of the candidate to list notes for |
| `cursor` | string | No | Opaque pagination cursor from a previous response nextCursor value |
| `perPage` | number | No | Number of results per page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `notes` | array | List of notes on the candidate |
| ↳ `id` | string | Note UUID |
| ↳ `content` | string | Note content |
| ↳ `author` | object | Note author |
| ↳ `id` | string | Author user UUID |
| ↳ `firstName` | string | First name |
| ↳ `lastName` | string | Last name |
| ↳ `email` | string | Email address |
| ↳ `createdAt` | string | ISO 8601 creation timestamp |
| `moreDataAvailable` | boolean | Whether more pages of results exist |
| `nextCursor` | string | Opaque cursor for fetching the next page |
### `ashby_list_offers`
Lists all offers with their latest version in an Ashby organization.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `cursor` | string | No | Opaque pagination cursor from a previous response nextCursor value |
| `perPage` | number | No | Number of results per page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `offers` | array | List of offers |
| ↳ `id` | string | Offer UUID |
| ↳ `status` | string | Offer status |
| ↳ `candidate` | object | Associated candidate |
| ↳ `id` | string | Candidate UUID |
| ↳ `name` | string | Candidate name |
| ↳ `job` | object | Associated job |
| ↳ `id` | string | Job UUID |
| ↳ `title` | string | Job title |
| ↳ `createdAt` | string | ISO 8601 creation timestamp |
| ↳ `updatedAt` | string | ISO 8601 last update timestamp |
| `moreDataAvailable` | boolean | Whether more pages of results exist |
| `nextCursor` | string | Opaque cursor for fetching the next page |
### `ashby_search_candidates`
Searches for candidates by name and/or email, combining both filters with AND logic. Results are capped at 100 matches; use `ashby_list_candidates` to page through all candidates.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `name` | string | No | Candidate name to search for \(combined with email using AND logic\) |
| `email` | string | No | Candidate email to search for \(combined with name using AND logic\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `candidates` | array | Matching candidates \(max 100 results\) |
| ↳ `id` | string | Candidate UUID |
| ↳ `name` | string | Full name |
| ↳ `primaryEmailAddress` | object | Primary email contact info |
| ↳ `value` | string | Email address |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary email |
| ↳ `primaryPhoneNumber` | object | Primary phone contact info |
| ↳ `value` | string | Phone number |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary phone |
### `ashby_update_candidate`
Updates an existing candidate record in Ashby. Only provided fields are changed.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `candidateId` | string | Yes | The UUID of the candidate to update |
| `name` | string | No | Updated full name |
| `email` | string | No | Updated primary email address |
| `emailType` | string | No | Email address type: Personal, Work, or Other \(default Work\) |
| `phoneNumber` | string | No | Updated primary phone number |
| `phoneType` | string | No | Phone number type: Personal, Work, or Other \(default Work\) |
| `linkedInUrl` | string | No | LinkedIn profile URL |
| `githubUrl` | string | No | GitHub profile URL |
| `websiteUrl` | string | No | Personal website URL |
| `sourceId` | string | No | UUID of the source to attribute the candidate to |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Candidate UUID |
| `name` | string | Full name |
| `primaryEmailAddress` | object | Primary email contact info |
| ↳ `value` | string | Email address |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary email |
| `primaryPhoneNumber` | object | Primary phone contact info |
| ↳ `value` | string | Phone number |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary phone |
| `profileUrl` | string | URL to the candidate's Ashby profile |
| `position` | string | Current position or title |
| `company` | string | Current company |
| `linkedInUrl` | string | LinkedIn profile URL |
| `githubUrl` | string | GitHub profile URL |
| `tags` | array | Tags applied to the candidate |
| ↳ `id` | string | Tag UUID |
| ↳ `title` | string | Tag title |
| `applicationIds` | array | IDs of associated applications |
| `createdAt` | string | ISO 8601 creation timestamp |
| `updatedAt` | string | ISO 8601 last update timestamp |

---
title: Databricks
description: Run SQL queries and manage jobs on Databricks
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="databricks"
color="#F9F7F4"
/>
{/* MANUAL-CONTENT-START:intro */}
[Databricks](https://www.databricks.com/) is a unified data analytics platform built on Apache Spark, providing a collaborative environment for data engineering, data science, and machine learning. Databricks combines data warehousing, ETL, and AI workloads into a single lakehouse architecture, with support for SQL analytics, job orchestration, and cluster management across major cloud providers.
With the Databricks integration in Sim, you can:
- **Execute SQL queries**: Run SQL statements against Databricks SQL warehouses with support for parameterized queries and Unity Catalog
- **Manage jobs**: List, trigger, and monitor Databricks job runs programmatically
- **Track run status**: Get detailed run information including timing, state, and output results
- **Control clusters**: List and inspect cluster configurations, states, and resource details
- **Retrieve run outputs**: Access notebook results, error messages, and logs from completed job runs
In Sim, the Databricks integration enables your agents to interact with your data lakehouse as part of automated workflows. Agents can query large-scale datasets, orchestrate ETL pipelines by triggering jobs, monitor job execution, and retrieve results—all without leaving the workflow canvas. This is ideal for automated reporting, data pipeline management, scheduled analytics, and building AI-driven data workflows that react to query results or job outcomes.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Connect to Databricks to execute SQL queries against SQL warehouses, trigger and monitor job runs, manage clusters, and retrieve run outputs. Requires a Personal Access Token and workspace host URL.
## Tools
### `databricks_execute_sql`
Execute a SQL statement against a Databricks SQL warehouse and return results inline. Supports parameterized queries and Unity Catalog.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `host` | string | Yes | Databricks workspace host \(e.g., dbc-abc123.cloud.databricks.com\) |
| `apiKey` | string | Yes | Databricks Personal Access Token |
| `warehouseId` | string | Yes | The ID of the SQL warehouse to execute against |
| `statement` | string | Yes | The SQL statement to execute \(max 16 MiB\) |
| `catalog` | string | No | Unity Catalog name \(equivalent to USE CATALOG\) |
| `schema` | string | No | Schema name \(equivalent to USE SCHEMA\) |
| `rowLimit` | number | No | Maximum number of rows to return |
| `waitTimeout` | string | No | How long to wait for results \(e.g., "50s"\). Range: "0s" or "5s" to "50s". Default: "50s" |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `statementId` | string | Unique identifier for the executed statement |
| `status` | string | Execution status \(SUCCEEDED, PENDING, RUNNING, FAILED, CANCELED, CLOSED\) |
| `columns` | array | Column schema of the result set |
| ↳ `name` | string | Column name |
| ↳ `position` | number | Column position \(0-based\) |
| ↳ `typeName` | string | Column type \(STRING, INT, LONG, DOUBLE, BOOLEAN, TIMESTAMP, DATE, DECIMAL, etc.\) |
| `data` | array | Result rows as a 2D array of strings where each inner array is a row of column values |
| `totalRows` | number | Total number of rows in the result |
| `truncated` | boolean | Whether the result set was truncated due to row_limit or byte_limit |
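Because `data` arrives as a 2D array of strings rather than keyed records, a common first step is to join it with the `columns` schema. A small sketch, using only the documented output shape (the values below are illustrative):

```python
# Turn `databricks_execute_sql` output (column metadata + rows as a 2D array
# of strings) into a list of dicts keyed by column name.
def rows_to_dicts(columns, data):
    # Order names by the documented 0-based `position` field.
    names = [c["name"] for c in sorted(columns, key=lambda c: c["position"])]
    return [dict(zip(names, row)) for row in data]

# Example result shaped like the output table above:
result = {
    "columns": [
        {"name": "id", "position": 0, "typeName": "LONG"},
        {"name": "city", "position": 1, "typeName": "STRING"},
    ],
    "data": [["1", "Berlin"], ["2", "Oslo"]],
}
records = rows_to_dicts(result["columns"], result["data"])
```

Note that values stay strings; cast them yourself using each column's `typeName` if you need native types.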
### `databricks_list_jobs`
List all jobs in a Databricks workspace with optional filtering by name.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `host` | string | Yes | Databricks workspace host \(e.g., dbc-abc123.cloud.databricks.com\) |
| `apiKey` | string | Yes | Databricks Personal Access Token |
| `limit` | number | No | Maximum number of jobs to return \(range 1-100, default 20\) |
| `offset` | number | No | Offset for pagination |
| `name` | string | No | Filter jobs by exact name \(case-insensitive\) |
| `expandTasks` | boolean | No | Include task and cluster details in the response \(max 100 elements\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `jobs` | array | List of jobs in the workspace |
| ↳ `jobId` | number | Unique job identifier |
| ↳ `name` | string | Job name |
| ↳ `createdTime` | number | Job creation timestamp \(epoch ms\) |
| ↳ `creatorUserName` | string | Email of the job creator |
| ↳ `maxConcurrentRuns` | number | Maximum number of concurrent runs |
| ↳ `format` | string | Job format \(SINGLE_TASK or MULTI_TASK\) |
| `hasMore` | boolean | Whether more jobs are available for pagination |
| `nextPageToken` | string | Token for fetching the next page of results |
### `databricks_run_job`
Trigger an existing Databricks job to run immediately with optional job-level or notebook parameters.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `host` | string | Yes | Databricks workspace host \(e.g., dbc-abc123.cloud.databricks.com\) |
| `apiKey` | string | Yes | Databricks Personal Access Token |
| `jobId` | number | Yes | The ID of the job to trigger |
| `jobParameters` | string | No | Job-level parameter overrides as a JSON object \(e.g., \{"key": "value"\}\) |
| `notebookParams` | string | No | Notebook task parameters as a JSON object \(e.g., \{"param1": "value1"\}\) |
| `idempotencyToken` | string | No | Idempotency token to prevent duplicate runs \(max 64 characters\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `runId` | number | The globally unique ID of the triggered run |
| `numberInJob` | number | The sequence number of this run among all runs of the job |
### `databricks_get_run`
Get the status, timing, and details of a Databricks job run by its run ID.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `host` | string | Yes | Databricks workspace host \(e.g., dbc-abc123.cloud.databricks.com\) |
| `apiKey` | string | Yes | Databricks Personal Access Token |
| `runId` | number | Yes | The canonical identifier of the run |
| `includeHistory` | boolean | No | Include repair history in the response |
| `includeResolvedValues` | boolean | No | Include resolved parameter values in the response |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `runId` | number | The run ID |
| `jobId` | number | The job ID this run belongs to |
| `runName` | string | Name of the run |
| `runType` | string | Type of run \(JOB_RUN, WORKFLOW_RUN, SUBMIT_RUN\) |
| `attemptNumber` | number | Retry attempt number \(0 for initial attempt\) |
| `state` | object | Run state information |
| ↳ `lifeCycleState` | string | Lifecycle state \(QUEUED, PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED, INTERNAL_ERROR, BLOCKED, WAITING_FOR_RETRY\) |
| ↳ `resultState` | string | Result state \(SUCCESS, FAILED, TIMEDOUT, CANCELED, SUCCESS_WITH_FAILURES, UPSTREAM_FAILED, UPSTREAM_CANCELED, EXCLUDED\) |
| ↳ `stateMessage` | string | Descriptive message for the current state |
| ↳ `userCancelledOrTimedout` | boolean | Whether the run was cancelled by user or timed out |
| `startTime` | number | Run start timestamp \(epoch ms\) |
| `endTime` | number | Run end timestamp \(epoch ms, 0 if still running\) |
| `setupDuration` | number | Cluster setup duration \(ms\) |
| `executionDuration` | number | Execution duration \(ms\) |
| `cleanupDuration` | number | Cleanup duration \(ms\) |
| `queueDuration` | number | Time spent in queue before execution \(ms\) |
| `runPageUrl` | string | URL to the run detail page in Databricks UI |
| `creatorUserName` | string | Email of the user who triggered the run |
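A run triggered with `databricks_run_job` finishes once `state.lifeCycleState` reaches a terminal value. A polling sketch, assuming `TERMINATED`, `SKIPPED`, and `INTERNAL_ERROR` are the terminal states from the table above, and where `get_run` is a hypothetical stand-in for invoking this tool:

```python
import time

# Terminal lifecycle states, per the documented lifeCycleState values.
TERMINAL = {"TERMINATED", "SKIPPED", "INTERNAL_ERROR"}

def wait_for_run(get_run, run_id, interval_s=5.0, max_polls=120):
    """Poll `get_run` until the run reaches a terminal lifecycle state."""
    for _ in range(max_polls):
        run = get_run(run_id)
        if run["state"]["lifeCycleState"] in TERMINAL:
            return run
        time.sleep(interval_s)
    raise TimeoutError(f"run {run_id} did not finish after {max_polls} polls")

# Demo against a stub that terminates on the third poll:
states = iter(["PENDING", "RUNNING", "TERMINATED"])
fake = lambda rid: {"runId": rid, "state": {"lifeCycleState": next(states), "resultState": "SUCCESS"}}
done = wait_for_run(fake, 42, interval_s=0)
```

After the run terminates, check `state.resultState` to distinguish success from failure before fetching output with `databricks_get_run_output`.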
### `databricks_list_runs`
List job runs in a Databricks workspace with optional filtering by job, status, and time range.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `host` | string | Yes | Databricks workspace host \(e.g., dbc-abc123.cloud.databricks.com\) |
| `apiKey` | string | Yes | Databricks Personal Access Token |
| `jobId` | number | No | Filter runs by job ID. Omit to list runs across all jobs |
| `activeOnly` | boolean | No | Only include active runs \(PENDING, RUNNING, or TERMINATING\) |
| `completedOnly` | boolean | No | Only include completed runs |
| `limit` | number | No | Maximum number of runs to return \(range 1-24, default 20\) |
| `offset` | number | No | Offset for pagination |
| `runType` | string | No | Filter by run type \(JOB_RUN, WORKFLOW_RUN, SUBMIT_RUN\) |
| `startTimeFrom` | number | No | Filter runs started at or after this timestamp \(epoch ms\) |
| `startTimeTo` | number | No | Filter runs started at or before this timestamp \(epoch ms\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `runs` | array | List of job runs |
| ↳ `runId` | number | Unique run identifier |
| ↳ `jobId` | number | Job this run belongs to |
| ↳ `runName` | string | Run name |
| ↳ `runType` | string | Run type \(JOB_RUN, WORKFLOW_RUN, SUBMIT_RUN\) |
| ↳ `state` | object | Run state information |
| ↳ `lifeCycleState` | string | Lifecycle state \(QUEUED, PENDING, RUNNING, TERMINATING, TERMINATED, SKIPPED, INTERNAL_ERROR, BLOCKED, WAITING_FOR_RETRY\) |
| ↳ `resultState` | string | Result state \(SUCCESS, FAILED, TIMEDOUT, CANCELED, SUCCESS_WITH_FAILURES, UPSTREAM_FAILED, UPSTREAM_CANCELED, EXCLUDED\) |
| ↳ `stateMessage` | string | Descriptive state message |
| ↳ `userCancelledOrTimedout` | boolean | Whether the run was cancelled by user or timed out |
| ↳ `startTime` | number | Run start timestamp \(epoch ms\) |
| ↳ `endTime` | number | Run end timestamp \(epoch ms\) |
| `hasMore` | boolean | Whether more runs are available for pagination |
| `nextPageToken` | string | Token for fetching the next page of results |
### `databricks_cancel_run`
Cancel a running or pending Databricks job run. Cancellation is asynchronous; poll the run status to confirm termination.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `host` | string | Yes | Databricks workspace host \(e.g., dbc-abc123.cloud.databricks.com\) |
| `apiKey` | string | Yes | Databricks Personal Access Token |
| `runId` | number | Yes | The canonical identifier of the run to cancel |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Whether the cancel request was accepted |
### `databricks_get_run_output`
Get the output of a completed Databricks job run, including notebook results, error messages, and logs. For multi-task jobs, use the task run ID (not the parent run ID).
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `host` | string | Yes | Databricks workspace host \(e.g., dbc-abc123.cloud.databricks.com\) |
| `apiKey` | string | Yes | Databricks Personal Access Token |
| `runId` | number | Yes | The run ID to get output for. For multi-task jobs, use the task run ID |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `notebookOutput` | object | Notebook task output \(from dbutils.notebook.exit\(\)\) |
| ↳ `result` | string | Value passed to dbutils.notebook.exit\(\) \(max 5 MB\) |
| ↳ `truncated` | boolean | Whether the result was truncated |
| `error` | string | Error message if the run failed or output is unavailable |
| `errorTrace` | string | Error stack trace if available |
| `logs` | string | Log output \(last 5 MB\) from spark_jar, spark_python, or python_wheel tasks |
| `logsTruncated` | boolean | Whether the log output was truncated |
### `databricks_list_clusters`
List all clusters in a Databricks workspace including their state, configuration, and resource details.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `host` | string | Yes | Databricks workspace host \(e.g., dbc-abc123.cloud.databricks.com\) |
| `apiKey` | string | Yes | Databricks Personal Access Token |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `clusters` | array | List of clusters in the workspace |
| ↳ `clusterId` | string | Unique cluster identifier |
| ↳ `clusterName` | string | Cluster display name |
| ↳ `state` | string | Current state \(PENDING, RUNNING, RESTARTING, RESIZING, TERMINATING, TERMINATED, ERROR, UNKNOWN\) |
| ↳ `stateMessage` | string | Human-readable state description |
| ↳ `creatorUserName` | string | Email of the cluster creator |
| ↳ `sparkVersion` | string | Spark runtime version \(e.g., 13.3.x-scala2.12\) |
| ↳ `nodeTypeId` | string | Worker node type identifier |
| ↳ `driverNodeTypeId` | string | Driver node type identifier |
| ↳ `numWorkers` | number | Number of worker nodes \(for fixed-size clusters\) |
| ↳ `autoscale` | object | Autoscaling configuration \(null for fixed-size clusters\) |
| ↳ `minWorkers` | number | Minimum number of workers |
| ↳ `maxWorkers` | number | Maximum number of workers |
| ↳ `clusterSource` | string | Origin \(API, UI, JOB, MODELS, PIPELINE, PIPELINE_MAINTENANCE, SQL\) |
| ↳ `autoterminationMinutes` | number | Minutes of inactivity before auto-termination \(0 = disabled\) |
| ↳ `startTime` | number | Cluster start timestamp \(epoch ms\) |

---
title: Gamma
description: Generate presentations, documents, and webpages with AI
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="gamma"
color="#002253"
/>
{/* MANUAL-CONTENT-START:intro */}
[Gamma](https://gamma.app/) is an AI-powered platform for creating presentations, documents, webpages, and social posts. Gamma's API lets you programmatically generate polished, visually rich content from text prompts, adapt existing templates, and manage workspace assets like themes and folders.
With Gamma, you can:
- **Generate presentations and documents:** Create slide decks, documents, webpages, and social posts from text input with full control over format, tone, and image sourcing.
- **Create from templates:** Adapt existing Gamma templates with custom prompts to quickly produce tailored content.
- **Check generation status:** Poll for completion of async generation jobs and retrieve the final Gamma URL.
- **Browse themes and folders:** List available workspace themes and folders to organize and style your generated content.
In Sim, the Gamma integration enables your agents to automatically generate presentations and documents, create content from templates, and manage workspace assets directly within your workflows. This allows you to automate content creation pipelines, batch-produce slide decks, and integrate AI-generated presentations into broader business automation scenarios.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Connect to Gamma to generate presentations, documents, webpages, and social posts from text, create content from existing templates, check generation status, and browse workspace themes and folders. Requires a Gamma API key.
## Tools
### `gamma_generate`
Generate a new Gamma presentation, document, webpage, or social post from text input.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Gamma API key |
| `inputText` | string | Yes | Text and image URLs used to generate your gamma \(1-100,000 tokens\) |
| `textMode` | string | Yes | How to handle input text: generate \(AI expands\), condense \(AI summarizes\), or preserve \(keep as-is\) |
| `format` | string | No | Output format: presentation, document, webpage, or social \(default: presentation\) |
| `themeId` | string | No | Custom Gamma workspace theme ID \(use List Themes to find available themes\) |
| `numCards` | number | No | Number of cards/slides to generate \(1-60 for Pro, 1-75 for Ultra; default: 10\) |
| `cardSplit` | string | No | How to split content into cards: auto or inputTextBreaks \(default: auto\) |
| `cardDimensions` | string | No | Card aspect ratio. Presentation: fluid, 16x9, 4x3. Document: fluid, pageless, letter, a4. Social: 1x1, 4x5, 9x16 |
| `additionalInstructions` | string | No | Additional instructions for the AI generation \(max 2000 chars\) |
| `exportAs` | string | No | Automatically export the generated gamma as pdf or pptx |
| `folderIds` | string | No | Comma-separated folder IDs to store the generated gamma in |
| `textAmount` | string | No | Amount of text per card: brief, medium, detailed, or extensive |
| `textTone` | string | No | Tone of the generated text, e.g. "professional", "casual" \(max 500 chars\) |
| `textAudience` | string | No | Target audience for the generated text, e.g. "executives", "students" \(max 500 chars\) |
| `textLanguage` | string | No | Language code for the generated text \(default: en\) |
| `imageSource` | string | No | Where to source images: aiGenerated, pictographic, unsplash, webAllImages, webFreeToUse, webFreeToUseCommercially, giphy, placeholder, or noImages |
| `imageModel` | string | No | AI image generation model to use when imageSource is aiGenerated |
| `imageStyle` | string | No | Style directive for AI-generated images, e.g. "watercolor", "photorealistic" \(max 500 chars\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `generationId` | string | The ID of the generation job. Use with Check Status to poll for completion. |
### `gamma_generate_from_template`
Generate a new Gamma by adapting an existing template with a prompt.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Gamma API key |
| `gammaId` | string | Yes | The ID of the template gamma to adapt |
| `prompt` | string | Yes | Instructions for how to adapt the template \(1-100,000 tokens\) |
| `themeId` | string | No | Custom Gamma workspace theme ID to apply |
| `exportAs` | string | No | Automatically export the generated gamma as pdf or pptx |
| `folderIds` | string | No | Comma-separated folder IDs to store the generated gamma in |
| `imageModel` | string | No | AI image generation model to use when imageSource is aiGenerated |
| `imageStyle` | string | No | Style directive for AI-generated images, e.g. "watercolor", "photorealistic" \(max 500 chars\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `generationId` | string | The ID of the generation job. Use with Check Status to poll for completion. |
### `gamma_check_status`
Check the status of a Gamma generation job. Returns the gamma URL when completed, or error details if failed.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Gamma API key |
| `generationId` | string | Yes | The generation ID returned by the Generate or Generate from Template tool |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `generationId` | string | The generation ID that was checked |
| `status` | string | Generation status: pending, completed, or failed |
| `gammaUrl` | string | URL of the generated gamma \(only present when status is completed\) |
| `credits` | object | Credit usage information \(only present when status is completed\) |
| ↳ `deducted` | number | Number of credits deducted for this generation |
| ↳ `remaining` | number | Remaining credits in the account |
| `error` | object | Error details \(only present when status is failed\) |
| ↳ `message` | string | Human-readable error message |
| ↳ `statusCode` | number | HTTP status code of the error |
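The Generate tools return a `generationId` that you poll with this tool until `status` leaves `pending`. A minimal polling sketch, where `check_status` is a stand-in for however your workflow invokes `gamma_check_status` (the function name, interval, and timeout are illustrative, not part of the API):

```python
import time

def poll_generation(check_status, generation_id, interval=2.0, timeout=120.0, sleep=time.sleep):
    """Poll check_status(generation_id) until it reports completed or failed.

    check_status is any callable returning a dict shaped like the
    gamma_check_status output above; interval and timeout are
    illustrative defaults, and sleep is injectable for testing.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = check_status(generation_id)
        status = result.get("status")
        if status == "completed":
            # gammaUrl is only present once the generation has completed
            return result["gammaUrl"]
        if status == "failed":
            # error.message is only present when the generation failed
            raise RuntimeError(result.get("error", {}).get("message", "generation failed"))
        if time.monotonic() >= deadline:
            raise TimeoutError(f"generation {generation_id} still pending after {timeout}s")
        sleep(interval)
```

Because `credits.deducted` is charged per completed generation, polling at a modest interval rather than re-submitting is the cheaper failure mode.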
### `gamma_list_themes`
List available themes in your Gamma workspace. Returns theme IDs, names, and keywords for styling.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Gamma API key |
| `query` | string | No | Search query to filter themes by name \(case-insensitive\) |
| `limit` | number | No | Maximum number of themes to return per page \(max 50\) |
| `after` | string | No | Pagination cursor from a previous response \(nextCursor\) to fetch the next page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `themes` | array | List of available themes |
| ↳ `id` | string | Theme ID \(use with themeId parameter\) |
| ↳ `name` | string | Theme display name |
| ↳ `type` | string | Theme type: standard or custom |
| ↳ `colorKeywords` | array | Color descriptors for this theme |
| ↳ `toneKeywords` | array | Tone descriptors for this theme |
| `hasMore` | boolean | Whether more results are available on the next page |
| `nextCursor` | string | Pagination cursor to pass as the after parameter for the next page |
### `gamma_list_folders`
List available folders in your Gamma workspace. Returns folder IDs and names for organizing generated content.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Gamma API key |
| `query` | string | No | Search query to filter folders by name \(case-sensitive\) |
| `limit` | number | No | Maximum number of folders to return per page \(max 50\) |
| `after` | string | No | Pagination cursor from a previous response \(nextCursor\) to fetch the next page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `folders` | array | List of available folders |
| ↳ `id` | string | Folder ID \(use with folderIds parameter\) |
| ↳ `name` | string | Folder display name |
| `hasMore` | boolean | Whether more results are available on the next page |
| `nextCursor` | string | Pagination cursor to pass as the after parameter for the next page |
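Both list tools share the same `after`/`hasMore`/`nextCursor` contract, so one pager covers themes and folders alike. A sketch, where `list_page` stands in for a call to either tool (the helper name is illustrative):

```python
def paginate(list_page, limit=50):
    """Yield every item across pages for a list tool that follows the
    after / hasMore / nextCursor contract described above.

    list_page(limit, after) is a stand-in for gamma_list_themes or
    gamma_list_folders; it must return a dict containing one list-valued
    field (themes or folders) plus hasMore and nextCursor.
    """
    cursor = None
    while True:
        page = list_page(limit=limit, after=cursor)
        # The item key differs per tool ("themes" vs "folders"); take the
        # first list-valued field in the response.
        items = next(v for v in page.values() if isinstance(v, list))
        yield from items
        if not page.get("hasMore"):
            return
        cursor = page["nextCursor"]
```

Passing the previous response's `nextCursor` back as `after` is the only state the caller has to carry between pages.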


@@ -11,19 +11,21 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
/>
{/* MANUAL-CONTENT-START:intro */}
[Gmail](https://mail.google.com/) is one of the world's most popular email services, trusted by individuals and organizations to send, receive, and manage messages securely.
With the Gmail integration in Sim, you can:
- **Send emails**: Compose and send emails with support for recipients, CC, BCC, subject, body, and attachments
- **Create drafts**: Save email drafts for later review and sending
- **Read emails**: Retrieve email messages by ID with full content and metadata
- **Search emails**: Find emails using Gmail's powerful search query syntax
- **Move emails**: Move messages between folders or labels
- **Manage read status**: Mark emails as read or unread
- **Archive and unarchive**: Archive messages to clean up your inbox or restore them
- **Delete emails**: Remove messages from your mailbox
- **Manage labels**: Add or remove labels from emails for organization
In Sim, the Gmail integration enables your agents to interact with your inbox programmatically as part of automated workflows. Agents can send notifications, search for specific emails, organize messages, and trigger actions based on email content—enabling intelligent email automation and communication workflows.
{/* MANUAL-CONTENT-END */}


@@ -11,9 +11,16 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
/>
{/* MANUAL-CONTENT-START:intro */}
[Google BigQuery](https://cloud.google.com/bigquery) is Google Cloud's fully managed, serverless data warehouse designed for large-scale data analytics. BigQuery lets you run fast SQL queries on massive datasets, making it ideal for business intelligence, data exploration, and machine learning pipelines.
With the Google BigQuery integration in Sim, you can:
- **Run SQL queries**: Execute queries against your BigQuery datasets and retrieve results for analysis or downstream processing
- **List datasets**: Browse available datasets within a Google Cloud project
- **List and inspect tables**: Enumerate tables within a dataset and retrieve detailed schema information
- **Insert rows**: Stream new rows into BigQuery tables for real-time data ingestion
In Sim, the Google BigQuery integration enables your agents to query datasets, inspect schemas, and insert rows as part of automated workflows. This is ideal for automated reporting, data pipeline orchestration, real-time data ingestion, and analytics-driven decision making.
{/* MANUAL-CONTENT-END */}


@@ -11,9 +11,14 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
/>
{/* MANUAL-CONTENT-START:intro */}
[Google Books](https://books.google.com) is Google's comprehensive book discovery and metadata service, providing access to millions of books from publishers, libraries, and digitized collections worldwide.
With the Google Books integration in Sim, you can:
- **Search for books**: Find volumes by title, author, ISBN, or keyword across the entire Google Books catalog
- **Retrieve volume details**: Get detailed metadata for a specific book including title, authors, description, ratings, and publication details
In Sim, the Google Books integration allows your agents to search for books and retrieve volume details as part of automated workflows. This enables use cases such as content research, reading list curation, bibliographic data enrichment, and knowledge gathering from published works.
{/* MANUAL-CONTENT-END */}


@@ -0,0 +1,144 @@
---
title: Google Contacts
description: Manage Google Contacts
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="google_contacts"
color="#E0E0E0"
/>
## Usage Instructions
Integrate Google Contacts into the workflow. Create, read, update, delete, list, and search contacts.
## Tools
### `google_contacts_create`
Create a new contact in Google Contacts
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `givenName` | string | Yes | First name of the contact |
| `familyName` | string | No | Last name of the contact |
| `email` | string | No | Email address of the contact |
| `emailType` | string | No | Email type: home, work, or other |
| `phone` | string | No | Phone number of the contact |
| `phoneType` | string | No | Phone type: mobile, home, work, or other |
| `organization` | string | No | Organization/company name |
| `jobTitle` | string | No | Job title at the organization |
| `notes` | string | No | Notes or biography for the contact |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Contact creation confirmation message |
| `metadata` | json | Created contact metadata including resource name and details |
### `google_contacts_get`
Get a specific contact from Google Contacts
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `resourceName` | string | Yes | Resource name of the contact \(e.g., people/c1234567890\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Contact retrieval confirmation message |
| `metadata` | json | Contact details including name, email, phone, and organization |
### `google_contacts_list`
List contacts from Google Contacts
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `pageSize` | number | No | Number of contacts to return \(1-1000, default 100\) |
| `pageToken` | string | No | Page token from a previous list request for pagination |
| `sortOrder` | string | No | Sort order for contacts |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Summary of found contacts count |
| `metadata` | json | List of contacts with pagination tokens |
### `google_contacts_search`
Search contacts in Google Contacts by name, email, phone, or organization
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `query` | string | Yes | Search query to match against contact names, emails, phones, and organizations |
| `pageSize` | number | No | Number of results to return \(default 10, max 30\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Summary of search results count |
| `metadata` | json | Search results with matching contacts |
### `google_contacts_update`
Update an existing contact in Google Contacts
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `resourceName` | string | Yes | Resource name of the contact \(e.g., people/c1234567890\) |
| `etag` | string | Yes | ETag from a previous get request \(required for concurrency control\) |
| `givenName` | string | No | Updated first name |
| `familyName` | string | No | Updated last name |
| `email` | string | No | Updated email address |
| `emailType` | string | No | Email type: home, work, or other |
| `phone` | string | No | Updated phone number |
| `phoneType` | string | No | Phone type: mobile, home, work, or other |
| `organization` | string | No | Updated organization/company name |
| `jobTitle` | string | No | Updated job title |
| `notes` | string | No | Updated notes or biography |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Contact update confirmation message |
| `metadata` | json | Updated contact metadata |
### `google_contacts_delete`
Delete a contact from Google Contacts
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `resourceName` | string | Yes | Resource name of the contact to delete \(e.g., people/c1234567890\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | Contact deletion confirmation message |
| `metadata` | json | Deletion details including resource name |
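Because `google_contacts_update` requires the `etag` from a preceding get for concurrency control, updates are naturally a two-step flow. A sketch with stand-in callables for the two tools (the helper name is illustrative, and it assumes the get response's metadata exposes the etag):

```python
def update_contact_safely(get_contact, update_contact, resource_name, **changes):
    """Fetch the contact first so the update carries the current etag.

    get_contact / update_contact are stand-ins for the google_contacts_get
    and google_contacts_update tools. The etag-before-update requirement
    comes from the table above; the metadata["etag"] location is an
    assumption about the get response shape.
    """
    current = get_contact(resourceName=resource_name)
    etag = current["metadata"]["etag"]  # assumed field in the get metadata
    return update_contact(resourceName=resource_name, etag=etag, **changes)
```

If another client modified the contact between the get and the update, the stale etag lets Google reject the write instead of silently overwriting the newer data.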


@@ -0,0 +1,84 @@
---
title: Google PageSpeed
description: Analyze webpage performance with Google PageSpeed Insights
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="google_pagespeed"
color="#E0E0E0"
/>
{/* MANUAL-CONTENT-START:intro */}
[Google PageSpeed Insights](https://pagespeed.web.dev/) is a web performance analysis tool powered by Lighthouse that evaluates the quality of web pages across multiple dimensions including performance, accessibility, SEO, and best practices.
With the Google PageSpeed integration in Sim, you can:
- **Analyze webpage performance**: Get detailed performance scores and metrics for any public URL, including First Contentful Paint, Largest Contentful Paint, and Speed Index
- **Evaluate accessibility**: Check how well a webpage meets accessibility standards and identify areas for improvement
- **Audit SEO**: Assess a page's search engine optimization and discover opportunities to improve rankings
- **Review best practices**: Verify that a webpage follows modern web development best practices
- **Compare strategies**: Run analyses using either desktop or mobile strategies to understand performance across device types
- **Localize results**: Retrieve analysis results in different locales for internationalized reporting
In Sim, the Google PageSpeed integration enables your agents to programmatically audit web pages as part of automated workflows. This is useful for monitoring site performance over time, triggering alerts when scores drop below thresholds, generating performance reports, and ensuring that deployed changes meet quality standards before release.
### Getting Your API Key
1. Go to the [Google Cloud Console](https://console.cloud.google.com/)
2. Create or select a project
3. Enable the **PageSpeed Insights API** from the API Library
4. Navigate to **Credentials** and create an API key
5. Use the API key in the Sim block configuration
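Under the hood this corresponds to a GET against the public PageSpeed Insights v5 `runPagespeed` endpoint. A sketch of how the block's inputs map onto that request, assuming the v5 convention of repeating `category` once per requested Lighthouse category (the helper name is illustrative; the Sim block itself accepts the comma-separated form):

```python
from urllib.parse import urlencode

PAGESPEED_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def build_pagespeed_request(api_key, url, categories=("performance",), strategy=None, locale=None):
    """Assemble the runPagespeed request URL from the block's inputs.

    Endpoint and parameter names follow the public PageSpeed Insights v5
    API; category is repeated once per requested Lighthouse category.
    """
    params = [("url", url), ("key", api_key)]
    params += [("category", c) for c in categories]
    if strategy:
        params.append(("strategy", strategy))  # "desktop" or "mobile"
    if locale:
        params.append(("locale", locale))
    return f"{PAGESPEED_ENDPOINT}?{urlencode(params)}"
```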
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Analyze web pages for performance, accessibility, SEO, and best practices using Google PageSpeed Insights API powered by Lighthouse.
## Tools
### `google_pagespeed_analyze`
Analyze a webpage for performance, accessibility, SEO, and best practices using Google PageSpeed Insights.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Google PageSpeed Insights API Key |
| `url` | string | Yes | The URL of the webpage to analyze |
| `category` | string | No | Lighthouse categories to analyze \(comma-separated\): performance, accessibility, best-practices, seo |
| `strategy` | string | No | Analysis strategy: desktop or mobile |
| `locale` | string | No | Locale for results \(e.g., en, fr, de\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `finalUrl` | string | The final URL after redirects |
| `performanceScore` | number | Performance category score \(0-1\) |
| `accessibilityScore` | number | Accessibility category score \(0-1\) |
| `bestPracticesScore` | number | Best Practices category score \(0-1\) |
| `seoScore` | number | SEO category score \(0-1\) |
| `firstContentfulPaint` | string | Time to First Contentful Paint \(display value\) |
| `firstContentfulPaintMs` | number | Time to First Contentful Paint in milliseconds |
| `largestContentfulPaint` | string | Time to Largest Contentful Paint \(display value\) |
| `largestContentfulPaintMs` | number | Time to Largest Contentful Paint in milliseconds |
| `totalBlockingTime` | string | Total Blocking Time \(display value\) |
| `totalBlockingTimeMs` | number | Total Blocking Time in milliseconds |
| `cumulativeLayoutShift` | string | Cumulative Layout Shift \(display value\) |
| `cumulativeLayoutShiftValue` | number | Cumulative Layout Shift numeric value |
| `speedIndex` | string | Speed Index \(display value\) |
| `speedIndexMs` | number | Speed Index in milliseconds |
| `interactive` | string | Time to Interactive \(display value\) |
| `interactiveMs` | number | Time to Interactive in milliseconds |
| `overallCategory` | string | Overall loading experience category \(FAST, AVERAGE, SLOW, or NONE\) |
| `analysisTimestamp` | string | UTC timestamp of the analysis |
| `lighthouseVersion` | string | Version of Lighthouse used for the analysis |
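Since the four category scores are normalized to 0-1, the alerting use case mentioned above reduces to a simple threshold gate over the analyze output. A sketch (the helper name and threshold values are illustrative, not Google recommendations):

```python
def failing_categories(result, thresholds):
    """Return (category, score) pairs whose 0-1 scores fall below threshold.

    result is a dict shaped like the google_pagespeed_analyze output
    above; categories absent from the result (not requested) are skipped.
    """
    score_fields = {
        "performance": "performanceScore",
        "accessibility": "accessibilityScore",
        "best-practices": "bestPracticesScore",
        "seo": "seoScore",
    }
    failures = []
    for category, field in score_fields.items():
        score = result.get(field)
        if score is not None and score < thresholds.get(category, 0.0):
            failures.append((category, score))
    return failures
```

A workflow can branch on a non-empty result to page a channel or block a deploy.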


@@ -11,9 +11,13 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
/>
{/* MANUAL-CONTENT-START:intro */}
[Google Search](https://www.google.com) is the world's most widely used web search engine, making it easy to find information, discover new content, and answer questions in real time.
With the Google Search integration in Sim, you can:
- **Search the web**: Perform queries using Google's Custom Search API and retrieve structured search results with titles, snippets, and URLs
In Sim, the Google Search integration allows your agents to search the web and retrieve live information as part of automated workflows. This enables use cases such as automated research, fact-checking, knowledge synthesis, and dynamic content discovery.
{/* MANUAL-CONTENT-END */}


@@ -11,18 +11,20 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
/>
{/* MANUAL-CONTENT-START:intro */}
[Google Sheets](https://www.google.com/sheets/about/) is a cloud-based spreadsheet platform that allows teams and individuals to create, edit, and collaborate on spreadsheets in real-time. Widely used for data tracking, reporting, and lightweight database needs, Google Sheets integrates with many tools and services.
With the Google Sheets integration in Sim, you can:
- **Read data**: Retrieve cell values from specific ranges in a spreadsheet
- **Write data**: Write values to specific cell ranges
- **Update data**: Modify existing cell values in a spreadsheet
- **Append rows**: Add new rows of data to the end of a sheet
- **Clear ranges**: Remove data from specific cell ranges
- **Manage spreadsheets**: Create new spreadsheets or retrieve metadata about existing ones
- **Batch operations**: Perform batch read, update, and clear operations across multiple ranges
- **Copy sheets**: Duplicate sheets within or between spreadsheets
In Sim, the Google Sheets integration enables your agents to read from, write to, and manage spreadsheets as part of automated workflows. This is ideal for automated reporting, data synchronization, record-keeping, and building data pipelines that use spreadsheets as a collaborative data layer.
{/* MANUAL-CONTENT-END */}


@@ -11,9 +11,18 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
/>
{/* MANUAL-CONTENT-START:intro */}
[Google Tasks](https://support.google.com/tasks) is Google's lightweight task management service, integrated into Gmail, Google Calendar, and the standalone Google Tasks app. It provides a simple way to create, organize, and track to-do items with support for due dates, subtasks, and task lists.
With the Google Tasks integration in Sim, you can:
- **Create tasks**: Add new to-do items to any task list with titles, notes, and due dates
- **List tasks**: Retrieve all tasks from a specific task list
- **Get task details**: Fetch detailed information about a specific task by ID
- **Update tasks**: Modify task titles, notes, due dates, or completion status
- **Delete tasks**: Remove tasks from a task list
- **List task lists**: Browse all available task lists in a Google account
In Sim, the Google Tasks integration allows your agents to manage to-do items programmatically as part of automated workflows. This enables use cases such as automated task creation from incoming data, deadline monitoring, and workflow-triggered task management.
{/* MANUAL-CONTENT-END */}


@@ -10,6 +10,18 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
color="#E0E0E0"
/>
{/* MANUAL-CONTENT-START:intro */}
[Google Translate](https://translate.google.com/) is Google's powerful translation service, supporting over 100 languages for text, documents, and websites. Backed by advanced neural machine translation, Google Translate delivers fast and accurate translations for a wide range of use cases.
With the Google Translate integration in Sim, you can:
- **Translate text**: Convert text between over 100 languages using Google Cloud Translation
- **Detect languages**: Automatically identify the language of a given text input
In Sim, the Google Translate integration allows your agents to translate text and detect languages as part of automated workflows. This enables use cases such as localization, multilingual support, content translation, and language detection at scale.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Translate and detect languages using the Google Cloud Translation API. Supports auto-detection of the source language.


@@ -0,0 +1,506 @@
---
title: Greenhouse
description: Manage candidates, jobs, and applications in Greenhouse
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="greenhouse"
color="#469776"
/>
{/* MANUAL-CONTENT-START:intro */}
[Greenhouse](https://www.greenhouse.com/) is a leading applicant tracking system (ATS) and hiring platform designed to help companies optimize their recruiting processes. Greenhouse provides structured hiring workflows, candidate management, interview scheduling, and analytics to help organizations make better hiring decisions at scale.
With the Greenhouse integration in Sim, you can:
- **Manage candidates**: List and retrieve detailed candidate profiles including contact information, tags, and application history
- **Track jobs**: List and view job postings with details on hiring teams, openings, and confidentiality settings
- **Monitor applications**: List and retrieve applications with status, source, and interview stage information
- **Access user data**: List and look up Greenhouse users including recruiters, coordinators, and hiring managers
- **Browse organizational data**: List departments, offices, and job stages to understand your hiring pipeline structure
In Sim, the Greenhouse integration enables your agents to interact with your recruiting data as part of automated workflows. Agents can pull candidate information, monitor application pipelines, track job openings, and cross-reference hiring team data—all programmatically. This is ideal for building automated recruiting reports, candidate pipeline monitoring, hiring analytics dashboards, and workflows that react to changes in your talent pipeline.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Integrate Greenhouse into the workflow. List and retrieve candidates, jobs, applications, users, departments, offices, and job stages from your Greenhouse ATS account.
## Tools
### `greenhouse_list_candidates`
Lists candidates from Greenhouse with optional filtering by date, job, or email
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Greenhouse Harvest API key |
| `per_page` | number | No | Number of results per page \(1-500, default 100\) |
| `page` | number | No | Page number for pagination |
| `created_after` | string | No | Return only candidates created at or after this ISO 8601 timestamp |
| `created_before` | string | No | Return only candidates created before this ISO 8601 timestamp |
| `updated_after` | string | No | Return only candidates updated at or after this ISO 8601 timestamp |
| `updated_before` | string | No | Return only candidates updated before this ISO 8601 timestamp |
| `job_id` | string | No | Filter to candidates who applied to this job ID \(excludes prospects\) |
| `email` | string | No | Filter to candidates with this email address |
| `candidate_ids` | string | No | Comma-separated candidate IDs to retrieve \(max 50\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `candidates` | array | List of candidates |
| ↳ `id` | number | Candidate ID |
| ↳ `first_name` | string | First name |
| ↳ `last_name` | string | Last name |
| ↳ `company` | string | Current employer |
| ↳ `title` | string | Current job title |
| ↳ `is_private` | boolean | Whether candidate is private |
| ↳ `can_email` | boolean | Whether candidate can be emailed |
| ↳ `email_addresses` | array | Email addresses |
| ↳ `value` | string | Email address |
| ↳ `type` | string | Email type \(personal, work, other\) |
| ↳ `tags` | array | Candidate tags |
| ↳ `application_ids` | array | Associated application IDs |
| ↳ `created_at` | string | Creation timestamp \(ISO 8601\) |
| ↳ `updated_at` | string | Last updated timestamp \(ISO 8601\) |
| ↳ `last_activity` | string | Last activity timestamp \(ISO 8601\) |
| `count` | number | Number of candidates returned |
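All filters on this tool are optional, so a caller only needs to pass the ones it sets while respecting the documented limits. A sketch that assembles the filter parameters and enforces the 1-500 page size and 50-ID cap from the table above (the helper name is illustrative):

```python
def build_candidate_filters(per_page=100, page=None, created_after=None,
                            updated_after=None, job_id=None, email=None,
                            candidate_ids=None):
    """Assemble optional greenhouse_list_candidates filters, dropping
    anything unset and enforcing the documented limits (1-500 results
    per page, at most 50 comma-separated candidate IDs).
    """
    if not 1 <= per_page <= 500:
        raise ValueError("per_page must be between 1 and 500")
    if candidate_ids is not None and len(candidate_ids.split(",")) > 50:
        raise ValueError("candidate_ids accepts at most 50 IDs")
    params = {
        "per_page": per_page,
        "page": page,
        "created_after": created_after,   # ISO 8601 timestamps
        "updated_after": updated_after,
        "job_id": job_id,
        "email": email,
        "candidate_ids": candidate_ids,
    }
    # Omit unset filters so they do not constrain the query.
    return {k: v for k, v in params.items() if v is not None}
```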
### `greenhouse_get_candidate`
Retrieves a specific candidate by ID with full details including contact info, education, and employment history
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Greenhouse Harvest API key |
| `candidateId` | string | Yes | The ID of the candidate to retrieve |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | number | Candidate ID |
| `first_name` | string | First name |
| `last_name` | string | Last name |
| `company` | string | Current employer |
| `title` | string | Current job title |
| `is_private` | boolean | Whether candidate is private |
| `can_email` | boolean | Whether candidate can be emailed |
| `created_at` | string | Creation timestamp \(ISO 8601\) |
| `updated_at` | string | Last updated timestamp \(ISO 8601\) |
| `last_activity` | string | Last activity timestamp \(ISO 8601\) |
| `email_addresses` | array | Email addresses |
| ↳ `value` | string | Email address |
| ↳ `type` | string | Type \(personal, work, other\) |
| `phone_numbers` | array | Phone numbers |
| ↳ `value` | string | Phone number |
| ↳ `type` | string | Type \(home, work, mobile, skype, other\) |
| `addresses` | array | Addresses |
| ↳ `value` | string | Address |
| ↳ `type` | string | Type \(home, work, other\) |
| `website_addresses` | array | Website addresses |
| ↳ `value` | string | URL |
| ↳ `type` | string | Type \(personal, company, portfolio, blog, other\) |
| `social_media_addresses` | array | Social media profiles |
| ↳ `value` | string | URL or handle |
| `tags` | array | Tags |
| `application_ids` | array | Associated application IDs |
| `recruiter` | object | Assigned recruiter |
| ↳ `id` | number | User ID |
| ↳ `first_name` | string | First name |
| ↳ `last_name` | string | Last name |
| ↳ `name` | string | Full name |
| ↳ `employee_id` | string | Employee ID |
| `coordinator` | object | Assigned coordinator |
| ↳ `id` | number | User ID |
| ↳ `first_name` | string | First name |
| ↳ `last_name` | string | Last name |
| ↳ `name` | string | Full name |
| ↳ `employee_id` | string | Employee ID |
| `attachments` | array | File attachments \(URLs expire after 7 days\) |
| ↳ `filename` | string | File name |
| ↳ `url` | string | Download URL \(expires after 7 days\) |
| ↳ `type` | string | Type \(resume, cover_letter, offer_packet, other\) |
| ↳ `created_at` | string | Upload timestamp |
| `educations` | array | Education history |
| ↳ `id` | number | Education record ID |
| ↳ `school_name` | string | School name |
| ↳ `degree` | string | Degree type |
| ↳ `discipline` | string | Field of study |
| ↳ `start_date` | string | Start date \(ISO 8601\) |
| ↳ `end_date` | string | End date \(ISO 8601\) |
| `employments` | array | Employment history |
| ↳ `id` | number | Employment record ID |
| ↳ `company_name` | string | Company name |
| ↳ `title` | string | Job title |
| ↳ `start_date` | string | Start date \(ISO 8601\) |
| ↳ `end_date` | string | End date \(ISO 8601\) |
| `custom_fields` | object | Custom field values |
### `greenhouse_list_jobs`
Lists jobs from Greenhouse with optional filtering by status, department, or office
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Greenhouse Harvest API key |
| `per_page` | number | No | Number of results per page \(1-500, default 100\) |
| `page` | number | No | Page number for pagination |
| `status` | string | No | Filter by job status \(open, closed, draft\) |
| `created_after` | string | No | Return only jobs created at or after this ISO 8601 timestamp |
| `created_before` | string | No | Return only jobs created before this ISO 8601 timestamp |
| `updated_after` | string | No | Return only jobs updated at or after this ISO 8601 timestamp |
| `updated_before` | string | No | Return only jobs updated before this ISO 8601 timestamp |
| `department_id` | string | No | Filter to jobs in this department ID |
| `office_id` | string | No | Filter to jobs in this office ID |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `jobs` | array | List of jobs |
| ↳ `id` | number | Job ID |
| ↳ `name` | string | Job title |
| ↳ `status` | string | Job status \(open, closed, draft\) |
| ↳ `confidential` | boolean | Whether the job is confidential |
| ↳ `departments` | array | Associated departments |
| ↳ `id` | number | Department ID |
| ↳ `name` | string | Department name |
| ↳ `offices` | array | Associated offices |
| ↳ `id` | number | Office ID |
| ↳ `name` | string | Office name |
| ↳ `opened_at` | string | Date job was opened \(ISO 8601\) |
| ↳ `closed_at` | string | Date job was closed \(ISO 8601\) |
| ↳ `created_at` | string | Creation timestamp \(ISO 8601\) |
| ↳ `updated_at` | string | Last updated timestamp \(ISO 8601\) |
| `count` | number | Number of jobs returned |
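The list tools above all share the same `per_page`/`page` pagination scheme (up to 500 per page). One way to drain every page, sketched against a hypothetical `fetch(path, params)` callable that returns a single page as a list — a stub stands in for the real API call here:

```python
def paginate(fetch, path, per_page=100):
    """Yield items from every page; stop once a short page is returned."""
    page = 1
    while True:
        items = fetch(path, {"per_page": per_page, "page": page})
        yield from items
        if len(items) < per_page:
            return
        page += 1

# Stub standing in for a real Harvest API call: 250 fake jobs.
FAKE_JOBS = [{"id": i} for i in range(250)]

def fake_fetch(path, params):
    start = (params["page"] - 1) * params["per_page"]
    return FAKE_JOBS[start:start + params["per_page"]]

all_jobs = list(paginate(fake_fetch, "/jobs"))
# all_jobs collects every job across three pages of 100, 100, and 50
```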
### `greenhouse_get_job`
Retrieves a specific job by ID with full details including hiring team, openings, and custom fields
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Greenhouse Harvest API key |
| `jobId` | string | Yes | The ID of the job to retrieve |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | number | Job ID |
| `name` | string | Job title |
| `requisition_id` | string | External requisition ID |
| `status` | string | Job status \(open, closed, draft\) |
| `confidential` | boolean | Whether the job is confidential |
| `created_at` | string | Creation timestamp \(ISO 8601\) |
| `opened_at` | string | Date job was opened \(ISO 8601\) |
| `closed_at` | string | Date job was closed \(ISO 8601\) |
| `updated_at` | string | Last updated timestamp \(ISO 8601\) |
| `is_template` | boolean | Whether this is a job template |
| `notes` | string | Hiring plan notes \(may contain HTML\) |
| `departments` | array | Associated departments |
| ↳ `id` | number | Department ID |
| ↳ `name` | string | Department name |
| ↳ `parent_id` | number | Parent department ID |
| `offices` | array | Associated offices |
| ↳ `id` | number | Office ID |
| ↳ `name` | string | Office name |
| ↳ `location` | object | Office location |
| ↳ `name` | string | Location name |
| `hiring_team` | object | Hiring team members |
| ↳ `hiring_managers` | array | Hiring managers |
| ↳ `recruiters` | array | Recruiters \(includes responsible flag\) |
| ↳ `coordinators` | array | Coordinators \(includes responsible flag\) |
| ↳ `sourcers` | array | Sourcers |
| `openings` | array | Job openings/slots |
| ↳ `id` | number | Opening internal ID |
| ↳ `opening_id` | string | Custom opening identifier |
| ↳ `status` | string | Opening status \(open, closed\) |
| ↳ `opened_at` | string | Date opened \(ISO 8601\) |
| ↳ `closed_at` | string | Date closed \(ISO 8601\) |
| ↳ `application_id` | number | Hired application ID |
| ↳ `close_reason` | object | Reason for closing |
| ↳ `id` | number | Close reason ID |
| ↳ `name` | string | Close reason name |
| `custom_fields` | object | Custom field values |
### `greenhouse_list_applications`
Lists applications from Greenhouse with optional filtering by job, status, or date
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Greenhouse Harvest API key |
| `per_page` | number | No | Number of results per page \(1-500, default 100\) |
| `page` | number | No | Page number for pagination |
| `job_id` | string | No | Filter applications by job ID |
| `status` | string | No | Filter by status \(active, converted, hired, rejected\) |
| `created_after` | string | No | Return only applications created at or after this ISO 8601 timestamp |
| `created_before` | string | No | Return only applications created before this ISO 8601 timestamp |
| `last_activity_after` | string | No | Return only applications with activity at or after this ISO 8601 timestamp |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `applications` | array | List of applications |
| ↳ `id` | number | Application ID |
| ↳ `candidate_id` | number | Associated candidate ID |
| ↳ `prospect` | boolean | Whether this is a prospect application |
| ↳ `status` | string | Status \(active, converted, hired, rejected\) |
| ↳ `current_stage` | object | Current interview stage |
| ↳ `id` | number | Stage ID |
| ↳ `name` | string | Stage name |
| ↳ `jobs` | array | Associated jobs |
| ↳ `id` | number | Job ID |
| ↳ `name` | string | Job name |
| ↳ `applied_at` | string | Application date \(ISO 8601\) |
| ↳ `rejected_at` | string | Rejection date \(ISO 8601\) |
| ↳ `last_activity_at` | string | Last activity date \(ISO 8601\) |
| `count` | number | Number of applications returned |
### `greenhouse_get_application`
Retrieves a specific application by ID with full details including source, stage, answers, and attachments
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Greenhouse Harvest API key |
| `applicationId` | string | Yes | The ID of the application to retrieve |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | number | Application ID |
| `candidate_id` | number | Associated candidate ID |
| `prospect` | boolean | Whether this is a prospect application |
| `status` | string | Status \(active, converted, hired, rejected\) |
| `applied_at` | string | Application date \(ISO 8601\) |
| `rejected_at` | string | Rejection date \(ISO 8601\) |
| `last_activity_at` | string | Last activity date \(ISO 8601\) |
| `location` | object | Candidate location |
| ↳ `address` | string | Location address |
| `source` | object | Application source |
| ↳ `id` | number | Source ID |
| ↳ `public_name` | string | Source name |
| `credited_to` | object | User credited for the application |
| ↳ `id` | number | User ID |
| ↳ `first_name` | string | First name |
| ↳ `last_name` | string | Last name |
| ↳ `name` | string | Full name |
| ↳ `employee_id` | string | Employee ID |
| `recruiter` | object | Assigned recruiter |
| ↳ `id` | number | User ID |
| ↳ `first_name` | string | First name |
| ↳ `last_name` | string | Last name |
| ↳ `name` | string | Full name |
| ↳ `employee_id` | string | Employee ID |
| `coordinator` | object | Assigned coordinator |
| ↳ `id` | number | User ID |
| ↳ `first_name` | string | First name |
| ↳ `last_name` | string | Last name |
| ↳ `name` | string | Full name |
| ↳ `employee_id` | string | Employee ID |
| `current_stage` | object | Current interview stage \(null when hired\) |
| ↳ `id` | number | Stage ID |
| ↳ `name` | string | Stage name |
| `rejection_reason` | object | Rejection reason |
| ↳ `id` | number | Rejection reason ID |
| ↳ `name` | string | Rejection reason name |
| ↳ `type` | object | Rejection reason type |
| ↳ `id` | number | Type ID |
| ↳ `name` | string | Type name |
| `jobs` | array | Associated jobs |
| ↳ `id` | number | Job ID |
| ↳ `name` | string | Job name |
| `job_post_id` | number | Job post ID |
| `answers` | array | Application question answers |
| ↳ `question` | string | Question text |
| ↳ `answer` | string | Answer text |
| `attachments` | array | File attachments \(URLs expire after 7 days\) |
| ↳ `filename` | string | File name |
| ↳ `url` | string | Download URL \(expires after 7 days\) |
| ↳ `type` | string | Type \(resume, cover_letter, offer_packet, other\) |
| ↳ `created_at` | string | Upload timestamp |
| `custom_fields` | object | Custom field values |
### `greenhouse_list_users`
Lists Greenhouse users (recruiters, hiring managers, admins) with optional filtering
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Greenhouse Harvest API key |
| `per_page` | number | No | Number of results per page \(1-500, default 100\) |
| `page` | number | No | Page number for pagination |
| `created_after` | string | No | Return only users created at or after this ISO 8601 timestamp |
| `created_before` | string | No | Return only users created before this ISO 8601 timestamp |
| `updated_after` | string | No | Return only users updated at or after this ISO 8601 timestamp |
| `updated_before` | string | No | Return only users updated before this ISO 8601 timestamp |
| `email` | string | No | Filter by email address |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `users` | array | List of Greenhouse users |
| ↳ `id` | number | User ID |
| ↳ `name` | string | Full name |
| ↳ `first_name` | string | First name |
| ↳ `last_name` | string | Last name |
| ↳ `primary_email_address` | string | Primary email |
| ↳ `disabled` | boolean | Whether the user is disabled |
| ↳ `site_admin` | boolean | Whether the user is a site admin |
| ↳ `emails` | array | All email addresses |
| ↳ `employee_id` | string | Employee ID |
| ↳ `linked_candidate_ids` | array | IDs of candidates linked to this user |
| ↳ `created_at` | string | Creation timestamp \(ISO 8601\) |
| ↳ `updated_at` | string | Last updated timestamp \(ISO 8601\) |
| `count` | number | Number of users returned |
### `greenhouse_get_user`
Retrieves a specific Greenhouse user by ID
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Greenhouse Harvest API key |
| `userId` | string | Yes | The ID of the user to retrieve |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | number | User ID |
| `name` | string | Full name |
| `first_name` | string | First name |
| `last_name` | string | Last name |
| `primary_email_address` | string | Primary email address |
| `disabled` | boolean | Whether the user is disabled |
| `site_admin` | boolean | Whether the user is a site admin |
| `emails` | array | All email addresses |
| `employee_id` | string | Employee ID |
| `linked_candidate_ids` | array | IDs of candidates linked to this user |
| `created_at` | string | Creation timestamp \(ISO 8601\) |
| `updated_at` | string | Last updated timestamp \(ISO 8601\) |
### `greenhouse_list_departments`
Lists all departments configured in Greenhouse
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Greenhouse Harvest API key |
| `per_page` | number | No | Number of results per page \(1-500, default 100\) |
| `page` | number | No | Page number for pagination |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `departments` | array | List of departments |
| ↳ `id` | number | Department ID |
| ↳ `name` | string | Department name |
| ↳ `parent_id` | number | Parent department ID |
| ↳ `child_ids` | array | Child department IDs |
| ↳ `external_id` | string | External system ID |
| `count` | number | Number of departments returned |
### `greenhouse_list_offices`
Lists all offices configured in Greenhouse
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Greenhouse Harvest API key |
| `per_page` | number | No | Number of results per page \(1-500, default 100\) |
| `page` | number | No | Page number for pagination |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `offices` | array | List of offices |
| ↳ `id` | number | Office ID |
| ↳ `name` | string | Office name |
| ↳ `location` | object | Office location |
| ↳ `name` | string | Location name |
| ↳ `primary_contact_user_id` | number | Primary contact user ID |
| ↳ `parent_id` | number | Parent office ID |
| ↳ `child_ids` | array | Child office IDs |
| ↳ `external_id` | string | External system ID |
| `count` | number | Number of offices returned |
### `greenhouse_list_job_stages`
Lists all interview stages for a specific job in Greenhouse
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Greenhouse Harvest API key |
| `jobId` | string | Yes | The job ID to list stages for |
| `per_page` | number | No | Number of results per page \(1-500, default 100\) |
| `page` | number | No | Page number for pagination |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `stages` | array | List of job stages in order |
| ↳ `id` | number | Stage ID |
| ↳ `name` | string | Stage name |
| ↳ `created_at` | string | Creation timestamp \(ISO 8601\) |
| ↳ `updated_at` | string | Last updated timestamp \(ISO 8601\) |
| ↳ `job_id` | number | Associated job ID |
| ↳ `priority` | number | Stage order priority |
| ↳ `active` | boolean | Whether the stage is active |
| ↳ `interviews` | array | Interview steps in this stage |
| ↳ `id` | number | Interview ID |
| ↳ `name` | string | Interview name |
| ↳ `schedulable` | boolean | Whether the interview is schedulable |
| ↳ `estimated_minutes` | number | Estimated duration in minutes |
| ↳ `default_interviewer_users` | array | Default interviewers |
| ↳ `id` | number | User ID |
| ↳ `name` | string | Full name |
| ↳ `first_name` | string | First name |
| ↳ `last_name` | string | Last name |
| ↳ `employee_id` | string | Employee ID |
| ↳ `interview_kit` | object | Interview kit details |
| ↳ `id` | number | Kit ID |
| ↳ `content` | string | Kit content \(HTML\) |
| ↳ `questions` | array | Interview kit questions |
| ↳ `id` | number | Question ID |
| ↳ `question` | string | Question text |
| `count` | number | Number of stages returned |

/>
{/* MANUAL-CONTENT-START:intro */}
[HubSpot](https://www.hubspot.com) is a comprehensive CRM platform that provides a full suite of marketing, sales, and customer service tools to help businesses grow. With powerful automation capabilities and an extensive API, HubSpot serves businesses of all sizes across industries.
With the HubSpot integration in Sim, you can:
- **Manage contacts**: List, get, create, update, and search contacts in your CRM
- **Manage companies**: List, get, create, update, and search company records
- **Track deals**: List deals in your sales pipeline
- **Access users**: Retrieve user information from your HubSpot account
In Sim, the HubSpot integration enables your agents to interact with your CRM data as part of automated workflows. Agents can qualify leads, enrich contact records, track deals, and synchronize data across your tech stack—enabling intelligent sales and marketing automation.
{/* MANUAL-CONTENT-END */}

/>
{/* MANUAL-CONTENT-START:intro */}
[Hugging Face](https://huggingface.co/) is a leading AI platform that provides access to thousands of pre-trained machine learning models and powerful inference capabilities. With its extensive model hub and robust API, Hugging Face offers comprehensive tools for both research and production AI applications.
With the Hugging Face integration in Sim, you can:
- **Generate completions**: Create text content using state-of-the-art language models through the Hugging Face Inference API, with support for custom prompts and model selection
In Sim, the Hugging Face integration enables your agents to generate AI completions as part of automated workflows. This allows for content generation, text analysis, code completion, and creative writing using models from the Hugging Face model hub.
{/* MANUAL-CONTENT-END */}

/>
{/* MANUAL-CONTENT-START:intro */}
[Jira](https://www.atlassian.com/jira) is a leading project management and issue tracking platform from Atlassian that helps teams plan, track, and manage agile software development projects. Jira supports Scrum and Kanban methodologies with customizable boards, workflows, and advanced reporting.
With the Jira integration in Sim, you can:
- **Manage issues**: Create, retrieve, update, delete, and bulk-read issues in your Jira projects
- **Transition issues**: Move issues through workflow stages programmatically
- **Assign issues**: Set or change issue assignees
- **Search issues**: Use JQL (Jira Query Language) to find and filter issues
- **Manage comments**: Add, retrieve, update, and delete comments on issues
- **Handle attachments**: Upload, retrieve, and delete file attachments on issues
- **Track work**: Add, retrieve, update, and delete worklogs for time tracking
- **Link issues**: Create and delete issue links to establish relationships between issues
- **Manage watchers**: Add or remove watchers from issues
- **Access users**: Retrieve user information from your Jira instance
In Sim, the Jira integration enables your agents to interact with your project management workflow as part of automated processes. Agents can create issues from external triggers, update statuses, track progress, and manage project data—enabling intelligent project management automation.
{/* MANUAL-CONTENT-END */}

/>
{/* MANUAL-CONTENT-START:intro */}
Sim's Knowledge Base is a native feature that enables you to create, manage, and query custom knowledge bases directly within the platform. Using advanced AI embeddings and vector search, the Knowledge Base block allows you to build intelligent search capabilities into your workflows.
With the Knowledge Base in Sim, you can:
- **Search knowledge**: Perform semantic searches across your custom knowledge bases using AI-powered vector similarity matching
- **Upload chunks**: Add text chunks with metadata to a knowledge base for indexing
- **Create documents**: Add new documents to a knowledge base for searchable content
In Sim, the Knowledge Base block enables your agents to perform intelligent semantic searches across your organizational knowledge as part of automated workflows. This is ideal for information retrieval, content recommendations, FAQ automation, and grounding agent responses in your own data.
{/* MANUAL-CONTENT-END */}

/>
{/* MANUAL-CONTENT-START:intro */}
[Linear](https://linear.app) is a modern project management and issue tracking platform that helps teams plan, track, and manage their work with a streamlined interface. Linear supports agile methodologies with customizable workflows, cycles, and project milestones.
With the Linear integration in Sim, you can:
- **Manage issues**: Create, read, update, search, archive, unarchive, and delete issues
- **Manage labels**: Add or remove labels from issues, and create, update, or archive labels
- **Comment on issues**: Create, update, delete, and list comments on issues
- **Manage projects**: List, get, create, update, archive, and delete projects with milestones, labels, and statuses
- **Track cycles**: List, get, and create cycles, and retrieve the active cycle
- **Handle attachments**: Create, list, update, and delete attachments on issues
- **Manage issue relations**: Create, list, and delete relationships between issues
- **Access team data**: List users, teams, workflow states, notifications, and favorites
- **Manage customers**: Create, update, delete, list, and merge customers with statuses, tiers, and requests
In Sim, the Linear integration enables your agents to interact with your project management workflow as part of automated processes. Agents can create issues from external triggers, update statuses, manage projects and cycles, and synchronize data—enabling intelligent project management automation at scale.
{/* MANUAL-CONTENT-END */}


@@ -0,0 +1,273 @@
---
title: Loops
description: Manage contacts and send emails with Loops
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="loops"
color="#FAFAF9"
/>
{/* MANUAL-CONTENT-START:intro */}
[Loops](https://loops.so/) is an email platform built for modern SaaS companies, offering transactional emails, marketing campaigns, and event-driven automations through a clean API. This integration connects Loops directly into Sim workflows.
With Loops in Sim, you can:
- **Manage contacts**: Create, update, find, and delete contacts in your Loops audience
- **Send transactional emails**: Trigger templated transactional emails with dynamic data variables
- **Fire events**: Send events to Loops to trigger automated email sequences and workflows
- **Manage subscriptions**: Control mailing list subscriptions and contact properties programmatically
- **Enrich contact data**: Attach custom properties, user groups, and mailing list memberships to contacts
In Sim, the Loops integration enables your agents to manage email operations as part of their workflows. Supported operations include:
- **Create Contact**: Add a new contact to your Loops audience with email, name, and custom properties.
- **Update Contact**: Update an existing contact or create one if no match exists (upsert behavior).
- **Find Contact**: Look up a contact by email address or userId.
- **Delete Contact**: Remove a contact from your audience.
- **Send Transactional Email**: Send a templated transactional email to a recipient with dynamic data variables.
- **Send Event**: Trigger a Loops event to start automated email sequences for a contact.
Configure the Loops block with your API key from the Loops dashboard (Settings > API), select an operation, and provide the required parameters. Your agents can then manage contacts and send emails as part of any workflow.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Integrate Loops into the workflow. Create and manage contacts, send transactional emails, and trigger event-based automations.
## Tools
### `loops_create_contact`
Create a new contact in your Loops audience with an email address and optional properties like name, user group, and mailing list subscriptions.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Loops API key for authentication |
| `email` | string | Yes | The email address for the new contact |
| `firstName` | string | No | The contact first name |
| `lastName` | string | No | The contact last name |
| `source` | string | No | Custom source value replacing the default "API" |
| `subscribed` | boolean | No | Whether the contact receives campaign emails \(defaults to true\) |
| `userGroup` | string | No | Group to segment the contact into \(one group per contact\) |
| `userId` | string | No | Unique user identifier from your application |
| `mailingLists` | json | No | Mailing list IDs mapped to boolean values \(true to subscribe, false to unsubscribe\) |
| `customProperties` | json | No | Custom contact properties as key-value pairs \(string, number, boolean, or date values\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Whether the contact was created successfully |
| `id` | string | The Loops-assigned ID of the created contact |
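As a sketch of what this operation maps to under the hood, the parameters above can be assembled into a single REST call. The `app.loops.so/api/v1/contacts/create` endpoint and bearer-token auth are assumptions about the public Loops API, not taken from this page:

```python
def build_create_contact_request(api_key: str, email: str, **props) -> dict:
    """Assemble the HTTP request loops_create_contact would issue (sketch).

    Endpoint URL and auth scheme are assumptions about the Loops v1 API.
    """
    return {
        "method": "POST",
        "url": "https://app.loops.so/api/v1/contacts/create",  # assumed endpoint
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        # Optional properties (firstName, userGroup, mailingLists, ...) ride
        # alongside the required email in the JSON body.
        "body": {"email": email, **props},
    }

req = build_create_contact_request(
    "YOUR_API_KEY", "ada@example.com", firstName="Ada", userGroup="beta"
)
```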
### `loops_update_contact`
Update an existing contact in Loops by email or userId. Creates a new contact if no match is found (upsert). Can update name, subscription status, user group, mailing lists, and custom properties.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Loops API key for authentication |
| `email` | string | No | The contact email address \(at least one of email or userId is required\) |
| `userId` | string | No | The contact userId \(at least one of email or userId is required\) |
| `firstName` | string | No | The contact first name |
| `lastName` | string | No | The contact last name |
| `source` | string | No | Custom source value replacing the default "API" |
| `subscribed` | boolean | No | Whether the contact receives campaign emails \(sending true re-subscribes unsubscribed contacts\) |
| `userGroup` | string | No | Group to segment the contact into \(one group per contact\) |
| `mailingLists` | json | No | Mailing list IDs mapped to boolean values \(true to subscribe, false to unsubscribe\) |
| `customProperties` | json | No | Custom contact properties as key-value pairs \(send null to reset a property\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Whether the contact was updated successfully |
| `id` | string | The Loops-assigned ID of the updated or created contact |
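The "at least one of email or userId" rule above is the only validation the caller has to get right; a minimal sketch of building the upsert body with that rule enforced (field names mirror the table, the wire format itself is an assumption):

```python
def build_update_contact_body(email=None, user_id=None, **props) -> dict:
    """Body for loops_update_contact (sketch): enforce that at least one
    identifier (email or userId) is present before sending anything."""
    if email is None and user_id is None:
        raise ValueError("at least one of email or userId is required")
    body = dict(props)
    if email is not None:
        body["email"] = email
    if user_id is not None:
        body["userId"] = user_id
    return body

# Upsert by userId only; firstName is updated, or the contact is created.
body = build_update_contact_body(user_id="u_42", firstName="Grace")
```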
### `loops_find_contact`
Find a contact in Loops by email address or userId. Returns an array of matching contacts with all their properties including name, subscription status, user group, and mailing lists.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Loops API key for authentication |
| `email` | string | No | The contact email address to search for \(at least one of email or userId is required\) |
| `userId` | string | No | The contact userId to search for \(at least one of email or userId is required\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `contacts` | array | Array of matching contact objects \(empty array if no match found\) |
| ↳ `id` | string | Loops-assigned contact ID |
| ↳ `email` | string | Contact email address |
| ↳ `firstName` | string | Contact first name |
| ↳ `lastName` | string | Contact last name |
| ↳ `source` | string | Source the contact was created from |
| ↳ `subscribed` | boolean | Whether the contact receives campaign emails |
| ↳ `userGroup` | string | Contact user group |
| ↳ `userId` | string | External user identifier |
| ↳ `mailingLists` | object | Mailing list IDs mapped to subscription status |
| ↳ `optInStatus` | string | Double opt-in status: pending, accepted, rejected, or null |
### `loops_delete_contact`
Delete a contact from Loops by email address or userId. At least one identifier must be provided.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Loops API key for authentication |
| `email` | string | No | The email address of the contact to delete \(at least one of email or userId is required\) |
| `userId` | string | No | The userId of the contact to delete \(at least one of email or userId is required\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Whether the contact was deleted successfully |
| `message` | string | Status message from the API |
### `loops_send_transactional_email`
Send a transactional email to a recipient using a Loops template. Supports dynamic data variables for personalization and optionally adds the recipient to your audience.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Loops API key for authentication |
| `email` | string | Yes | The email address of the recipient |
| `transactionalId` | string | Yes | The ID of the transactional email template to send |
| `dataVariables` | json | No | Template data variables as key-value pairs \(string or number values\) |
| `addToAudience` | boolean | No | Whether to create the recipient as a contact if they do not already exist \(default: false\) |
| `attachments` | json | No | Array of file attachments. Each object must have filename \(string\), contentType \(MIME type string\), and data \(base64-encoded string\). |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Whether the transactional email was sent successfully |
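Putting the table together, a transactional send is just the recipient, the template ID, and optional personalization. A hedged sketch of the payload (the exact wire format of the Loops transactional endpoint is an assumption; field names follow the parameter table):

```python
def build_transactional_payload(email, transactional_id,
                                data_variables=None, add_to_audience=False):
    """JSON payload for loops_send_transactional_email (sketch)."""
    payload = {"email": email, "transactionalId": transactional_id}
    if data_variables:
        # Per the table, values should be strings or numbers.
        payload["dataVariables"] = data_variables
    if add_to_audience:
        payload["addToAudience"] = True
    return payload

payload = build_transactional_payload(
    "ada@example.com", "tx_welcome", {"loginUrl": "https://app.example.com"}
)
```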
### `loops_send_event`
Send an event to Loops to trigger automated email sequences for a contact. Identify the contact by email or userId and include optional event properties and mailing list changes.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Loops API key for authentication |
| `email` | string | No | The email address of the contact \(at least one of email or userId is required\) |
| `userId` | string | No | The userId of the contact \(at least one of email or userId is required\) |
| `eventName` | string | Yes | The name of the event to trigger |
| `eventProperties` | json | No | Event data as key-value pairs \(string, number, boolean, or date values\) |
| `mailingLists` | json | No | Mailing list IDs mapped to boolean values \(true to subscribe, false to unsubscribe\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Whether the event was sent successfully |
### `loops_list_mailing_lists`
Retrieve all mailing lists from your Loops account. Returns each list with its ID, name, description, and public/private status.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Loops API key for authentication |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `mailingLists` | array | Array of mailing list objects |
| ↳ `id` | string | The mailing list ID |
| ↳ `name` | string | The mailing list name |
| ↳ `description` | string | The mailing list description \(null if not set\) |
| ↳ `isPublic` | boolean | Whether the list is public or private |
### `loops_list_transactional_emails`
Retrieve a list of published transactional email templates from your Loops account. Returns each template with its ID, name, last updated timestamp, and data variables.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Loops API key for authentication |
| `perPage` | string | No | Number of results per page \(10-50, default: 20\) |
| `cursor` | string | No | Pagination cursor from a previous response to fetch the next page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `transactionalEmails` | array | Array of published transactional email templates |
| ↳ `id` | string | The transactional email template ID |
| ↳ `name` | string | The template name |
| ↳ `lastUpdated` | string | Last updated timestamp |
| ↳ `dataVariables` | array | Template data variable names |
| `pagination` | object | Pagination information |
| ↳ `totalResults` | number | Total number of results |
| ↳ `returnedResults` | number | Number of results returned |
| ↳ `perPage` | number | Results per page |
| ↳ `totalPages` | number | Total number of pages |
| ↳ `nextCursor` | string | Cursor for next page \(null if no more pages\) |
| ↳ `nextPage` | string | URL for next page \(null if no more pages\) |
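The pagination block above follows the usual cursor pattern: feed `nextCursor` back in as `cursor` until it comes back null. A sketch of the driving loop (the `fetch_page` callable stands in for repeated tool invocations and is hypothetical):

```python
def iter_transactional_emails(fetch_page):
    """Walk a cursor-paginated listing: call fetch_page(cursor) until
    pagination.nextCursor is null (sketch of the loop, not of the API)."""
    cursor = None
    while True:
        page = fetch_page(cursor)
        yield from page["transactionalEmails"]
        cursor = page["pagination"]["nextCursor"]
        if cursor is None:
            break

# Fake two-page response standing in for real loops_list_transactional_emails calls.
pages = {
    None: {"transactionalEmails": [{"id": "t1"}, {"id": "t2"}],
           "pagination": {"nextCursor": "abc"}},
    "abc": {"transactionalEmails": [{"id": "t3"}],
            "pagination": {"nextCursor": None}},
}
ids = [t["id"] for t in iter_transactional_emails(lambda c: pages[c])]
```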
### `loops_create_contact_property`
Create a new custom contact property in your Loops account. The property name must be in camelCase format.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Loops API key for authentication |
| `name` | string | Yes | The property name in camelCase format \(e.g., "favoriteColor"\) |
| `type` | string | Yes | The property data type \(e.g., "string", "number", "boolean", "date"\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Whether the contact property was created successfully |
### `loops_list_contact_properties`
Retrieve a list of contact properties from your Loops account. Returns each property with its key, label, and data type. Can filter to show all properties or only custom ones.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Loops API key for authentication |
| `list` | string | No | Filter type: "all" for all properties \(default\) or "custom" for custom properties only |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `properties` | array | Array of contact property objects |
| ↳ `key` | string | The property key \(camelCase identifier\) |
| ↳ `label` | string | The property display label |
| ↳ `type` | string | The property data type \(string, number, boolean, date\) |


@@ -0,0 +1,284 @@
---
title: Luma
description: Manage events and guests on Luma
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="luma"
color="#FFFFFF"
/>
{/* MANUAL-CONTENT-START:intro */}
[Luma](https://lu.ma/) is an event management platform that makes it easy to create, manage, and share events with your community.
With Luma integrated into Sim, your agents can:
- **Create events**: Set up new events with name, time, timezone, description, and visibility settings.
- **Update events**: Modify existing event details like name, time, description, and visibility.
- **Get event details**: Retrieve full details for any event by its ID.
- **List calendar events**: Browse your calendar's events with date range filtering and pagination.
- **Manage guest lists**: View attendees for an event, filtered by approval status.
- **Add guests**: Invite new guests to events programmatically.
By connecting Sim with Luma, you can automate event operations within your agent workflows. Automatically create events based on triggers, sync guest lists, monitor registrations, and manage your event calendar—all handled directly by your agents via the Luma API.
Whether you're running community meetups, conferences, or internal team events, the Luma tool makes it easy to coordinate event management within your Sim workflows.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Integrate Luma into the workflow to create events, update events, get event details, list calendar events, get guest lists, and add guests to events.
## Tools
### `luma_get_event`
Retrieve details of a Luma event including name, time, location, hosts, and visibility settings.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Luma API key |
| `eventId` | string | Yes | Event ID \(starts with evt-\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `event` | object | Event details |
| ↳ `id` | string | Event ID |
| ↳ `name` | string | Event name |
| ↳ `startAt` | string | Event start time \(ISO 8601\) |
| ↳ `endAt` | string | Event end time \(ISO 8601\) |
| ↳ `timezone` | string | Event timezone \(IANA\) |
| ↳ `durationInterval` | string | Event duration \(ISO 8601 interval, e.g. PT2H\) |
| ↳ `createdAt` | string | Event creation timestamp \(ISO 8601\) |
| ↳ `description` | string | Event description \(plain text\) |
| ↳ `descriptionMd` | string | Event description \(Markdown\) |
| ↳ `coverUrl` | string | Event cover image URL |
| ↳ `url` | string | Event page URL on lu.ma |
| ↳ `visibility` | string | Event visibility \(public, members-only, private\) |
| ↳ `meetingUrl` | string | Virtual meeting URL |
| ↳ `geoAddressJson` | json | Structured location/address data |
| ↳ `geoLatitude` | string | Venue latitude coordinate |
| ↳ `geoLongitude` | string | Venue longitude coordinate |
| ↳ `calendarId` | string | Associated calendar ID |
| `hosts` | array | Event hosts |
| ↳ `id` | string | Host ID |
| ↳ `name` | string | Host display name |
| ↳ `firstName` | string | Host first name |
| ↳ `lastName` | string | Host last name |
| ↳ `email` | string | Host email address |
| ↳ `avatarUrl` | string | Host avatar image URL |
### `luma_create_event`
Create a new event on Luma with a name, start time, timezone, and optional details like description, location, and visibility.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Luma API key |
| `name` | string | Yes | Event name/title |
| `startAt` | string | Yes | Event start time in ISO 8601 format \(e.g., 2025-03-15T18:00:00Z\) |
| `timezone` | string | Yes | IANA timezone \(e.g., America/New_York, Europe/London\) |
| `endAt` | string | No | Event end time in ISO 8601 format \(e.g., 2025-03-15T20:00:00Z\) |
| `durationInterval` | string | No | Event duration as ISO 8601 interval \(e.g., PT2H for 2 hours, PT30M for 30 minutes\). Used if endAt is not provided. |
| `descriptionMd` | string | No | Event description in Markdown format |
| `meetingUrl` | string | No | Virtual meeting URL for online events \(e.g., Zoom, Google Meet link\) |
| `visibility` | string | No | Event visibility: public, members-only, or private \(defaults to public\) |
| `coverUrl` | string | No | Cover image URL \(must be a Luma CDN URL from images.lumacdn.com\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `event` | object | Created event details |
| ↳ `id` | string | Event ID |
| ↳ `name` | string | Event name |
| ↳ `startAt` | string | Event start time \(ISO 8601\) |
| ↳ `endAt` | string | Event end time \(ISO 8601\) |
| ↳ `timezone` | string | Event timezone \(IANA\) |
| ↳ `durationInterval` | string | Event duration \(ISO 8601 interval, e.g. PT2H\) |
| ↳ `createdAt` | string | Event creation timestamp \(ISO 8601\) |
| ↳ `description` | string | Event description \(plain text\) |
| ↳ `descriptionMd` | string | Event description \(Markdown\) |
| ↳ `coverUrl` | string | Event cover image URL |
| ↳ `url` | string | Event page URL on lu.ma |
| ↳ `visibility` | string | Event visibility \(public, members-only, private\) |
| ↳ `meetingUrl` | string | Virtual meeting URL |
| ↳ `geoAddressJson` | json | Structured location/address data |
| ↳ `geoLatitude` | string | Venue latitude coordinate |
| ↳ `geoLongitude` | string | Venue longitude coordinate |
| ↳ `calendarId` | string | Associated calendar ID |
| `hosts` | array | Event hosts |
| ↳ `id` | string | Host ID |
| ↳ `name` | string | Host display name |
| ↳ `firstName` | string | Host first name |
| ↳ `lastName` | string | Host last name |
| ↳ `email` | string | Host email address |
| ↳ `avatarUrl` | string | Host avatar image URL |
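Note the interplay between `endAt` and `durationInterval` in the table above: the interval is only consulted when no end time is given. A sketch of assembling the parameter set with that precedence (field names come from the table; values are illustrative):

```python
def build_create_event_params(name, start_at, timezone,
                              end_at=None, duration_interval=None, **extras):
    """Parameter set for luma_create_event (sketch). Either end_at or
    duration_interval bounds the event; duration_interval is only used
    when end_at is absent, per the parameter table."""
    params = {"name": name, "startAt": start_at, "timezone": timezone, **extras}
    if end_at is not None:
        params["endAt"] = end_at
    elif duration_interval is not None:
        params["durationInterval"] = duration_interval
    return params

params = build_create_event_params(
    "Community Meetup", "2025-03-15T18:00:00Z", "America/New_York",
    end_at="2025-03-15T20:00:00Z",
    duration_interval="PT2H",  # ignored because endAt is provided
    visibility="private",
)
```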
### `luma_update_event`
Update an existing Luma event. Only the fields you provide will be changed; all other fields remain unchanged.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Luma API key |
| `eventId` | string | Yes | Event ID to update \(starts with evt-\) |
| `name` | string | No | New event name/title |
| `startAt` | string | No | New start time in ISO 8601 format \(e.g., 2025-03-15T18:00:00Z\) |
| `timezone` | string | No | New IANA timezone \(e.g., America/New_York, Europe/London\) |
| `endAt` | string | No | New end time in ISO 8601 format \(e.g., 2025-03-15T20:00:00Z\) |
| `durationInterval` | string | No | New duration as ISO 8601 interval \(e.g., PT2H for 2 hours\). Used if endAt is not provided. |
| `descriptionMd` | string | No | New event description in Markdown format |
| `meetingUrl` | string | No | New virtual meeting URL \(e.g., Zoom, Google Meet link\) |
| `visibility` | string | No | New visibility: public, members-only, or private |
| `coverUrl` | string | No | New cover image URL \(must be a Luma CDN URL from images.lumacdn.com\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `event` | object | Updated event details |
| ↳ `id` | string | Event ID |
| ↳ `name` | string | Event name |
| ↳ `startAt` | string | Event start time \(ISO 8601\) |
| ↳ `endAt` | string | Event end time \(ISO 8601\) |
| ↳ `timezone` | string | Event timezone \(IANA\) |
| ↳ `durationInterval` | string | Event duration \(ISO 8601 interval, e.g. PT2H\) |
| ↳ `createdAt` | string | Event creation timestamp \(ISO 8601\) |
| ↳ `description` | string | Event description \(plain text\) |
| ↳ `descriptionMd` | string | Event description \(Markdown\) |
| ↳ `coverUrl` | string | Event cover image URL |
| ↳ `url` | string | Event page URL on lu.ma |
| ↳ `visibility` | string | Event visibility \(public, members-only, private\) |
| ↳ `meetingUrl` | string | Virtual meeting URL |
| ↳ `geoAddressJson` | json | Structured location/address data |
| ↳ `geoLatitude` | string | Venue latitude coordinate |
| ↳ `geoLongitude` | string | Venue longitude coordinate |
| ↳ `calendarId` | string | Associated calendar ID |
| `hosts` | array | Event hosts |
| ↳ `id` | string | Host ID |
| ↳ `name` | string | Host display name |
| ↳ `firstName` | string | Host first name |
| ↳ `lastName` | string | Host last name |
| ↳ `email` | string | Host email address |
| ↳ `avatarUrl` | string | Host avatar image URL |
### `luma_list_events`
List events from your Luma calendar with optional date range filtering, sorting, and pagination.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Luma API key |
| `after` | string | No | Return events after this ISO 8601 datetime \(e.g., 2025-01-01T00:00:00Z\) |
| `before` | string | No | Return events before this ISO 8601 datetime \(e.g., 2025-12-31T23:59:59Z\) |
| `paginationLimit` | number | No | Maximum number of events to return per page |
| `paginationCursor` | string | No | Pagination cursor from a previous response \(next_cursor\) to fetch the next page of results |
| `sortColumn` | string | No | Column to sort by \(only start_at is supported\) |
| `sortDirection` | string | No | Sort direction: asc, desc, asc nulls last, or desc nulls last |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `events` | array | List of calendar events |
| ↳ `id` | string | Event ID |
| ↳ `name` | string | Event name |
| ↳ `startAt` | string | Event start time \(ISO 8601\) |
| ↳ `endAt` | string | Event end time \(ISO 8601\) |
| ↳ `timezone` | string | Event timezone \(IANA\) |
| ↳ `durationInterval` | string | Event duration \(ISO 8601 interval, e.g. PT2H\) |
| ↳ `createdAt` | string | Event creation timestamp \(ISO 8601\) |
| ↳ `description` | string | Event description \(plain text\) |
| ↳ `descriptionMd` | string | Event description \(Markdown\) |
| ↳ `coverUrl` | string | Event cover image URL |
| ↳ `url` | string | Event page URL on lu.ma |
| ↳ `visibility` | string | Event visibility \(public, members-only, private\) |
| ↳ `meetingUrl` | string | Virtual meeting URL |
| ↳ `geoAddressJson` | json | Structured location/address data |
| ↳ `geoLatitude` | string | Venue latitude coordinate |
| ↳ `geoLongitude` | string | Venue longitude coordinate |
| ↳ `calendarId` | string | Associated calendar ID |
| `hasMore` | boolean | Whether more results are available for pagination |
| `nextCursor` | string | Cursor to pass as paginationCursor to fetch the next page |
### `luma_get_guests`
Retrieve the guest list for a Luma event with optional filtering by approval status, sorting, and pagination.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Luma API key |
| `eventId` | string | Yes | Event ID \(starts with evt-\) |
| `approvalStatus` | string | No | Filter by approval status: approved, session, pending_approval, invited, declined, or waitlist |
| `paginationLimit` | number | No | Maximum number of guests to return per page |
| `paginationCursor` | string | No | Pagination cursor from a previous response \(next_cursor\) to fetch the next page of results |
| `sortColumn` | string | No | Column to sort by: name, email, created_at, registered_at, or checked_in_at |
| `sortDirection` | string | No | Sort direction: asc, desc, asc nulls last, or desc nulls last |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `guests` | array | List of event guests |
| ↳ `id` | string | Guest ID |
| ↳ `email` | string | Guest email address |
| ↳ `name` | string | Guest full name |
| ↳ `firstName` | string | Guest first name |
| ↳ `lastName` | string | Guest last name |
| ↳ `approvalStatus` | string | Guest approval status \(approved, session, pending_approval, invited, declined, waitlist\) |
| ↳ `registeredAt` | string | Registration timestamp \(ISO 8601\) |
| ↳ `invitedAt` | string | Invitation timestamp \(ISO 8601\) |
| ↳ `joinedAt` | string | Join timestamp \(ISO 8601\) |
| ↳ `checkedInAt` | string | Check-in timestamp \(ISO 8601\) |
| ↳ `phoneNumber` | string | Guest phone number |
| `hasMore` | boolean | Whether more results are available for pagination |
| `nextCursor` | string | Cursor to pass as paginationCursor to fetch the next page |
### `luma_add_guests`
Add guests to a Luma event by email. Guests are added with Going (approved) status and receive one ticket of the default ticket type.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Luma API key |
| `eventId` | string | Yes | Event ID \(starts with evt-\) |
| `guests` | string | Yes | JSON array of guest objects. Each guest requires an "email" field and optionally "name", "first_name", "last_name". Example: \[\{"email": "user@example.com", "name": "John Doe"\}\] |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `guests` | array | List of added guests with their assigned status and ticket info |
| ↳ `id` | string | Guest ID |
| ↳ `email` | string | Guest email address |
| ↳ `name` | string | Guest full name |
| ↳ `firstName` | string | Guest first name |
| ↳ `lastName` | string | Guest last name |
| ↳ `approvalStatus` | string | Guest approval status \(approved, session, pending_approval, invited, declined, waitlist\) |
| ↳ `registeredAt` | string | Registration timestamp \(ISO 8601\) |
| ↳ `invitedAt` | string | Invitation timestamp \(ISO 8601\) |
| ↳ `joinedAt` | string | Join timestamp \(ISO 8601\) |
| ↳ `checkedInAt` | string | Check-in timestamp \(ISO 8601\) |
| ↳ `phoneNumber` | string | Guest phone number |
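Unlike the other parameters on this page, `guests` is a JSON *string*, not a structured object, so the list should be built in code and serialized before it is handed to the tool:

```python
import json

# Each guest needs "email"; "name" or "first_name"/"last_name" are optional.
guest_list = [
    {"email": "user@example.com", "name": "John Doe"},
    {"email": "grace@example.com", "first_name": "Grace", "last_name": "Hopper"},
]
guests_param = json.dumps(guest_list)  # this string is the `guests` input
```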


@@ -6,10 +6,12 @@
"airtable",
"airweave",
"algolia",
"amplitude",
"apify",
"apollo",
"arxiv",
"asana",
"ashby",
"attio",
"browser_use",
"calcom",
@@ -20,6 +22,7 @@
"cloudflare",
"confluence",
"cursor",
"databricks",
"datadog",
"devin",
"discord",
@@ -34,6 +37,7 @@
"file",
"firecrawl",
"fireflies",
"gamma",
"github",
"gitlab",
"gmail",
@@ -41,11 +45,13 @@
"google_bigquery",
"google_books",
"google_calendar",
"google_contacts",
"google_docs",
"google_drive",
"google_forms",
"google_groups",
"google_maps",
"google_pagespeed",
"google_search",
"google_sheets",
"google_slides",
@@ -54,6 +60,7 @@
"google_vault",
"grafana",
"grain",
"greenhouse",
"greptile",
"hex",
"hubspot",
@@ -73,6 +80,8 @@
"linear",
"linkedin",
"linkup",
"loops",
"luma",
"mailchimp",
"mailgun",
"mem0",
@@ -90,6 +99,7 @@
"onepassword",
"openai",
"outlook",
"pagerduty",
"parallel_ai",
"perplexity",
"pinecone",


@@ -0,0 +1,217 @@
---
title: PagerDuty
description: Manage incidents and on-call schedules with PagerDuty
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="pagerduty"
color="#06AC38"
/>
{/* MANUAL-CONTENT-START:intro */}
[PagerDuty](https://www.pagerduty.com/) is a leading incident management platform that helps engineering and operations teams detect, triage, and resolve infrastructure and application issues in real time. PagerDuty integrates with monitoring tools, orchestrates on-call schedules, and ensures the right people are alerted when incidents occur.
The PagerDuty integration in Sim connects with the PagerDuty REST API v2 using API key authentication, enabling your agents to manage the full incident lifecycle and query on-call information programmatically.
With the PagerDuty integration, your agents can:
- **List and filter incidents**: Retrieve incidents filtered by status (triggered, acknowledged, resolved), service, date range, and sort order to monitor your operational health
- **Create incidents**: Trigger new incidents on specific services with custom titles, descriptions, urgency levels, and assignees directly from your workflows
- **Update incidents**: Acknowledge or resolve incidents, change urgency, and add resolution notes to keep your incident management in sync with automated processes
- **Add notes to incidents**: Attach contextual information, investigation findings, or automated diagnostics as notes on existing incidents
- **List services**: Query your PagerDuty service catalog to discover service IDs and metadata for use in other operations
- **Check on-call schedules**: Retrieve current on-call entries filtered by escalation policy or schedule to determine who is responsible at any given time
In Sim, the PagerDuty integration enables powerful incident automation scenarios. Your agents can automatically create incidents based on monitoring alerts, enrich incidents with diagnostic data from other tools, resolve incidents when automated remediation succeeds, or build escalation workflows that check on-call schedules and route notifications accordingly. By connecting Sim with PagerDuty, you can build intelligent agents that bridge the gap between detection and response, reducing mean time to resolution and ensuring consistent incident handling across your organization.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Integrate PagerDuty into your workflow to list, create, and update incidents, add notes, list services, and check on-call schedules.
## Tools
### `pagerduty_list_incidents`
List incidents from PagerDuty with optional filters.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | PagerDuty REST API Key |
| `statuses` | string | No | Comma-separated statuses to filter \(triggered, acknowledged, resolved\) |
| `serviceIds` | string | No | Comma-separated service IDs to filter |
| `since` | string | No | Start date filter \(ISO 8601 format\) |
| `until` | string | No | End date filter \(ISO 8601 format\) |
| `sortBy` | string | No | Sort field \(e.g., created_at:desc\) |
| `limit` | string | No | Maximum number of results \(max 100\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `incidents` | array | Array of incidents |
| ↳ `id` | string | Incident ID |
| ↳ `incidentNumber` | number | Incident number |
| ↳ `title` | string | Incident title |
| ↳ `status` | string | Incident status |
| ↳ `urgency` | string | Incident urgency |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `updatedAt` | string | Last updated timestamp |
| ↳ `serviceName` | string | Service name |
| ↳ `serviceId` | string | Service ID |
| ↳ `assigneeName` | string | Assignee name |
| ↳ `assigneeId` | string | Assignee ID |
| ↳ `escalationPolicyName` | string | Escalation policy name |
| ↳ `htmlUrl` | string | PagerDuty web URL |
| `total` | number | Total number of matching incidents |
| `more` | boolean | Whether more results are available |
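The comma-separated filter strings above would be expanded into PagerDuty's repeated array parameters before hitting the API. A sketch of that translation (the bracketed `statuses[]`/`service_ids[]` convention is an assumption about REST API v2, not stated on this page):

```python
from urllib.parse import urlencode

def build_list_incidents_query(statuses=None, service_ids=None,
                               since=None, until=None, limit=None):
    """Turn the tool's comma-separated filter strings into a query string
    with repeated array parameters (sketch)."""
    params = []
    for raw, key in ((statuses, "statuses[]"), (service_ids, "service_ids[]")):
        for item in (raw or "").split(","):
            if item.strip():
                params.append((key, item.strip()))
    for key, value in (("since", since), ("until", until), ("limit", limit)):
        if value is not None:
            params.append((key, value))
    return urlencode(params)

query = build_list_incidents_query(statuses="triggered,acknowledged", limit="25")
```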
### `pagerduty_create_incident`
Create a new incident in PagerDuty.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | PagerDuty REST API Key |
| `fromEmail` | string | Yes | Email address of a valid PagerDuty user |
| `title` | string | Yes | Incident title/summary |
| `serviceId` | string | Yes | ID of the PagerDuty service |
| `urgency` | string | No | Urgency level \(high or low\) |
| `body` | string | No | Detailed description of the incident |
| `escalationPolicyId` | string | No | Escalation policy ID to assign |
| `assigneeId` | string | No | User ID to assign the incident to |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Created incident ID |
| `incidentNumber` | number | Incident number |
| `title` | string | Incident title |
| `status` | string | Incident status |
| `urgency` | string | Incident urgency |
| `createdAt` | string | Creation timestamp |
| `serviceName` | string | Service name |
| `serviceId` | string | Service ID |
| `htmlUrl` | string | PagerDuty web URL |
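PagerDuty's Create Incident endpoint wraps everything in an `incident` object and uses typed reference objects for related resources, with `fromEmail` carried in the `From` header rather than the body. The sketch below assembles such a body from the inputs above; the field names follow PagerDuty REST API v2, but the mapping is an illustrative assumption, not Sim's code.

```typescript
// Sketch: build the POST /incidents request body from the tool's
// inputs. Related resources (service, escalation policy, assignee)
// are expressed as PagerDuty reference objects.
interface CreateIncidentInput {
  title: string;
  serviceId: string;
  urgency?: "high" | "low";
  body?: string;
  escalationPolicyId?: string;
  assigneeId?: string;
}

function buildCreateIncidentBody(i: CreateIncidentInput): Record<string, any> {
  const incident: Record<string, any> = {
    type: "incident",
    title: i.title,
    service: { id: i.serviceId, type: "service_reference" },
  };
  if (i.urgency) incident.urgency = i.urgency;
  if (i.body) incident.body = { type: "incident_body", details: i.body };
  if (i.escalationPolicyId)
    incident.escalation_policy = { id: i.escalationPolicyId, type: "escalation_policy_reference" };
  if (i.assigneeId)
    incident.assignments = [{ assignee: { id: i.assigneeId, type: "user_reference" } }];
  return { incident };
}
```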
### `pagerduty_update_incident`
Update an incident in PagerDuty (acknowledge, resolve, change urgency, etc.).
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | PagerDuty REST API Key |
| `fromEmail` | string | Yes | Email address of a valid PagerDuty user |
| `incidentId` | string | Yes | ID of the incident to update |
| `status` | string | No | New status \(acknowledged or resolved\) |
| `title` | string | No | New incident title |
| `urgency` | string | No | New urgency \(high or low\) |
| `escalationLevel` | string | No | Escalation level to escalate to |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Incident ID |
| `incidentNumber` | number | Incident number |
| `title` | string | Incident title |
| `status` | string | Updated status |
| `urgency` | string | Updated urgency |
| `updatedAt` | string | Last updated timestamp |
| `htmlUrl` | string | PagerDuty web URL |
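Updates follow the same wrapped-body pattern, except the incident is referenced rather than created (`type: "incident_reference"`) and only the fields being changed are included; `fromEmail` again travels in the `From` header. A minimal sketch under those assumptions (PagerDuty REST API v2 naming, not Sim's implementation):

```typescript
// Sketch: build the PUT /incidents/{id} request body from the
// tool's optional inputs. Only supplied fields are sent.
interface UpdateIncidentInput {
  status?: "acknowledged" | "resolved";
  title?: string;
  urgency?: "high" | "low";
  escalationLevel?: string; // string in the input table; API expects a number
}

function buildUpdateIncidentBody(i: UpdateIncidentInput): Record<string, any> {
  const incident: Record<string, any> = { type: "incident_reference" };
  if (i.status) incident.status = i.status;
  if (i.title) incident.title = i.title;
  if (i.urgency) incident.urgency = i.urgency;
  if (i.escalationLevel) incident.escalation_level = Number(i.escalationLevel);
  return { incident };
}
```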
### `pagerduty_add_note`
Add a note to an existing PagerDuty incident.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | PagerDuty REST API Key |
| `fromEmail` | string | Yes | Email address of a valid PagerDuty user |
| `incidentId` | string | Yes | ID of the incident to add the note to |
| `content` | string | Yes | Note content text |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Note ID |
| `content` | string | Note content |
| `createdAt` | string | Creation timestamp |
| `userName` | string | Name of the user who created the note |
### `pagerduty_list_services`
List services from PagerDuty with optional name filter.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | PagerDuty REST API Key |
| `query` | string | No | Filter services by name |
| `limit` | string | No | Maximum number of results \(max 100\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `services` | array | Array of services |
| ↳ `id` | string | Service ID |
| ↳ `name` | string | Service name |
| ↳ `description` | string | Service description |
| ↳ `status` | string | Service status |
| ↳ `escalationPolicyName` | string | Escalation policy name |
| ↳ `escalationPolicyId` | string | Escalation policy ID |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `htmlUrl` | string | PagerDuty web URL |
| `total` | number | Total number of matching services |
| `more` | boolean | Whether more results are available |
### `pagerduty_list_oncalls`
List current on-call entries from PagerDuty.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | PagerDuty REST API Key |
| `escalationPolicyIds` | string | No | Comma-separated escalation policy IDs to filter |
| `scheduleIds` | string | No | Comma-separated schedule IDs to filter |
| `since` | string | No | Start time filter \(ISO 8601 format\) |
| `until` | string | No | End time filter \(ISO 8601 format\) |
| `limit` | string | No | Maximum number of results \(max 100\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `oncalls` | array | Array of on-call entries |
| ↳ `userName` | string | On-call user name |
| ↳ `userId` | string | On-call user ID |
| ↳ `escalationLevel` | number | Escalation level |
| ↳ `escalationPolicyName` | string | Escalation policy name |
| ↳ `escalationPolicyId` | string | Escalation policy ID |
| ↳ `scheduleName` | string | Schedule name |
| ↳ `scheduleId` | string | Schedule ID |
| ↳ `start` | string | On-call start time |
| ↳ `end` | string | On-call end time |
| `total` | number | Total number of matching on-call entries |
| `more` | boolean | Whether more results are available |


@@ -11,19 +11,19 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
/>
{/* MANUAL-CONTENT-START:intro */}
[Pipedrive](https://www.pipedrive.com) is a powerful sales-focused CRM platform designed to help sales teams manage leads, track deals, and optimize their sales pipeline. Built with simplicity and effectiveness in mind, Pipedrive has become a favorite among sales professionals and growing businesses worldwide for its intuitive visual pipeline management and actionable sales insights.
[Pipedrive](https://www.pipedrive.com) is a sales-focused CRM platform designed to help sales teams manage leads, track deals, and optimize their sales pipeline. Built with simplicity and effectiveness in mind, Pipedrive provides intuitive visual pipeline management and actionable sales insights.
Pipedrive provides a comprehensive suite of tools for managing the entire sales process from lead capture to deal closure. With its robust API and extensive integration capabilities, Pipedrive enables sales teams to automate repetitive tasks, maintain data consistency, and focus on what matters most—closing deals.
With the Pipedrive integration in Sim, you can:
Key features of Pipedrive include:
- **Manage deals**: List, get, create, and update deals in your sales pipeline
- **Manage leads**: Get, create, update, and delete leads for prospect tracking
- **Track activities**: Get, create, and update sales activities such as calls, meetings, and tasks
- **Manage projects**: List and create projects for post-sale delivery tracking
- **Access pipelines**: Get pipeline configurations and list deals within specific pipelines
- **Retrieve files**: Access files attached to deals, contacts, or other records
- **Access email**: Get mail messages and threads linked to CRM records
- Visual Sales Pipeline: Intuitive drag-and-drop interface for managing deals through customizable sales stages
- Lead Management: Comprehensive lead inbox for capturing, qualifying, and converting potential opportunities
- Activity Tracking: Sophisticated system for scheduling and tracking calls, meetings, emails, and tasks
- Project Management: Built-in project tracking capabilities for post-sale customer success and delivery
- Email Integration: Native mailbox integration for seamless communication tracking within the CRM
In Sim, the Pipedrive integration allows your AI agents to seamlessly interact with your sales workflow. This creates opportunities for automated lead qualification, deal creation and updates, activity scheduling, and pipeline management as part of your AI-powered sales processes. The integration enables agents to create, retrieve, update, and manage deals, leads, activities, and projects programmatically, facilitating intelligent sales automation and ensuring that critical customer information is properly tracked and acted upon. By connecting Sim with Pipedrive, you can build AI agents that maintain sales pipeline visibility, automate routine CRM tasks, qualify leads intelligently, and ensure no opportunities slip through the cracks—enhancing sales team productivity and driving consistent revenue growth.
In Sim, the Pipedrive integration enables your agents to interact with your sales workflow as part of automated processes. Agents can qualify leads, manage deals through pipeline stages, schedule activities, and keep CRM data synchronized—enabling intelligent sales automation.
{/* MANUAL-CONTENT-END */}


@@ -1,6 +1,6 @@
---
title: Resend
description: Send emails with Resend.
description: Send emails and manage contacts with Resend.
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
@@ -27,7 +27,7 @@ In Sim, the Resend integration allows your agents to programmatically send email
## Usage Instructions
Integrate Resend into the workflow. Can send emails. Requires API Key.
Integrate Resend into your workflow. Send emails, retrieve email status, manage contacts, and view domains. Requires API Key.
@@ -46,6 +46,11 @@ Send an email using your own Resend API key and from address
| `subject` | string | Yes | Email subject line |
| `body` | string | Yes | Email body content \(plain text or HTML based on contentType\) |
| `contentType` | string | No | Content type for the email body: "text" for plain text or "html" for HTML content |
| `cc` | string | No | Carbon copy recipient email address |
| `bcc` | string | No | Blind carbon copy recipient email address |
| `replyTo` | string | No | Reply-to email address |
| `scheduledAt` | string | No | Schedule email to be sent later in ISO 8601 format |
| `tags` | string | No | Comma-separated key:value pairs for email tags \(e.g., "category:welcome,type:onboarding"\) |
| `resendApiKey` | string | Yes | Resend API key for sending emails |
#### Output
@@ -53,8 +58,152 @@ Send an email using your own Resend API key and from address
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Whether the email was sent successfully |
| `id` | string | Email ID returned by Resend |
| `to` | string | Recipient email address |
| `subject` | string | Email subject |
| `body` | string | Email body content |
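The `tags` input above is a flat comma-separated string of `key:value` pairs, while Resend's send-email API takes tags as an array of name/value objects. The parser below is an illustrative sketch of that conversion, not Sim's actual parsing code.

```typescript
// Sketch: parse "category:welcome,type:onboarding" into the
// [{ name, value }] shape the Resend API expects. Splits on the
// first ":" so values may themselves contain colons.
function parseTags(tags: string): { name: string; value: string }[] {
  return tags
    .split(",")
    .map((pair) => pair.trim())
    .filter(Boolean)
    .map((pair) => {
      const idx = pair.indexOf(":");
      if (idx === -1) throw new Error(`Malformed tag "${pair}" (expected key:value)`);
      return { name: pair.slice(0, idx).trim(), value: pair.slice(idx + 1).trim() };
    });
}
```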
### `resend_get_email`
Retrieve details of a previously sent email by its ID
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `emailId` | string | Yes | The ID of the email to retrieve |
| `resendApiKey` | string | Yes | Resend API key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Email ID |
| `from` | string | Sender email address |
| `to` | json | Recipient email addresses |
| `subject` | string | Email subject |
| `html` | string | HTML email content |
| `text` | string | Plain text email content |
| `cc` | json | CC email addresses |
| `bcc` | json | BCC email addresses |
| `replyTo` | json | Reply-to email addresses |
| `lastEvent` | string | Last event status \(e.g., delivered, bounced\) |
| `createdAt` | string | Email creation timestamp |
| `scheduledAt` | string | Scheduled send timestamp |
| `tags` | json | Email tags as name-value pairs |
### `resend_create_contact`
Create a new contact in Resend
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `email` | string | Yes | Email address of the contact |
| `firstName` | string | No | First name of the contact |
| `lastName` | string | No | Last name of the contact |
| `unsubscribed` | boolean | No | Whether the contact is unsubscribed from all broadcasts |
| `resendApiKey` | string | Yes | Resend API key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Created contact ID |
### `resend_list_contacts`
List all contacts in Resend
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `resendApiKey` | string | Yes | Resend API key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `contacts` | json | Array of contacts with id, email, first_name, last_name, created_at, unsubscribed |
| `hasMore` | boolean | Whether there are more contacts to retrieve |
### `resend_get_contact`
Retrieve details of a contact by ID or email
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `contactId` | string | Yes | The contact ID or email address to retrieve |
| `resendApiKey` | string | Yes | Resend API key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Contact ID |
| `email` | string | Contact email address |
| `firstName` | string | Contact first name |
| `lastName` | string | Contact last name |
| `createdAt` | string | Contact creation timestamp |
| `unsubscribed` | boolean | Whether the contact is unsubscribed |
### `resend_update_contact`
Update an existing contact in Resend
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `contactId` | string | Yes | The contact ID or email address to update |
| `firstName` | string | No | Updated first name |
| `lastName` | string | No | Updated last name |
| `unsubscribed` | boolean | No | Whether the contact should be unsubscribed from all broadcasts |
| `resendApiKey` | string | Yes | Resend API key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Updated contact ID |
### `resend_delete_contact`
Delete a contact from Resend by ID or email
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `contactId` | string | Yes | The contact ID or email address to delete |
| `resendApiKey` | string | Yes | Resend API key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Deleted contact ID |
| `deleted` | boolean | Whether the contact was successfully deleted |
### `resend_list_domains`
List all verified domains in your Resend account
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `resendApiKey` | string | Yes | Resend API key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `domains` | json | Array of domains with id, name, status, region, and createdAt |
| `hasMore` | boolean | Whether there are more domains to retrieve |


@@ -13,39 +13,19 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
{/* MANUAL-CONTENT-START:intro */}
[Slack](https://www.slack.com/) is a business communication platform that offers teams a unified place for messaging, tools, and files.
With Slack, you can:
With the Slack integration in Sim, you can:
- **Automate agent notifications**: Send real-time updates from your Sim agents to any Slack channel
- **Create webhook endpoints**: Configure Slack bots as webhooks to trigger Sim workflows from Slack activities
- **Enhance agent workflows**: Integrate Slack messaging into your agents to deliver results, alerts, and status updates
- **Create and share Slack canvases**: Programmatically generate collaborative documents (canvases) in Slack channels
- **Read messages from channels**: Retrieve and process recent messages from any Slack channel for monitoring or workflow triggers
- **Manage bot messages**: Update, delete, and add reactions to messages sent by your bot
In Sim, the Slack integration enables your agents to programmatically interact with Slack with full message management capabilities as part of their workflows:
- **Send messages**: Agents can send formatted messages to any Slack channel or user, supporting Slack's mrkdwn syntax for rich formatting
- **Send messages**: Send formatted messages to any Slack channel or user, supporting Slack's mrkdwn syntax for rich formatting
- **Send ephemeral messages**: Send temporary messages visible only to a specific user in a channel
- **Update messages**: Edit previously sent bot messages to correct information or provide status updates
- **Delete messages**: Remove bot messages when they're no longer needed or contain errors
- **Add reactions**: Express sentiment or acknowledgment by adding emoji reactions to any message
- **Create canvases**: Create and share Slack canvases (collaborative documents) directly in channels, enabling richer content sharing and documentation
- **Read messages**: Read recent messages from channels, allowing for monitoring, reporting, or triggering further actions based on channel activity
- **Create canvases**: Create and share Slack canvases (collaborative documents) directly in channels
- **Read messages**: Retrieve recent messages from channels or DMs, with filtering by time range
- **Manage channels and users**: List channels, members, and users in your Slack workspace
- **Download files**: Retrieve files shared in Slack channels for processing or archival
This allows for powerful automation scenarios such as sending notifications with dynamic updates, managing conversational flows with editable status messages, acknowledging important messages with reactions, and maintaining clean channels by removing outdated bot messages. Your agents can deliver timely information, update messages as workflows progress, create collaborative documents, or alert team members when attention is needed. This integration bridges the gap between your AI workflows and your team's communication, ensuring everyone stays informed with accurate, up-to-date information. By connecting Sim with Slack, you can create agents that keep your team updated with relevant information at the right time, enhance collaboration by sharing and updating insights automatically, and reduce the need for manual status updates—all while leveraging your existing Slack workspace where your team already communicates.
## Getting Started
To connect Slack to your Sim workflows:
1. Sign up or log in at [sim.ai](https://sim.ai)
2. Create a new workflow or open an existing one
3. Drag a **Slack** block onto your canvas
4. Click the credential selector and choose **Connect**
5. Authorize Sim to access your Slack workspace
6. Select your target channel or user
Once connected, you can use any of the Slack operations listed below.
In Sim, the Slack integration enables your agents to programmatically interact with Slack as part of their workflows. This allows for automation scenarios such as sending notifications with dynamic updates, managing conversational flows with editable status messages, acknowledging important messages with reactions, and maintaining clean channels by removing outdated bot messages. The integration can also be used in trigger mode to start a workflow when a message is sent to a channel.
## AI-Generated Content


@@ -11,11 +11,21 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
/>
{/* MANUAL-CONTENT-START:intro */}
[Telegram](https://telegram.org) is a secure, cloud-based messaging platform that enables fast and reliable communication across devices and platforms. With over 700 million monthly active users, Telegram has established itself as one of the world's leading messaging services, known for its security, speed, and powerful API capabilities.
[Telegram](https://telegram.org) is a secure, cloud-based messaging platform that enables fast and reliable communication across devices. With its powerful Bot API, Telegram provides a robust framework for automated messaging and integration.
Telegram's Bot API provides a robust framework for creating automated messaging solutions and integrating communication features into applications. With support for rich media, inline keyboards, and custom commands, Telegram bots can facilitate sophisticated interaction patterns and automated workflows.
With the Telegram integration in Sim, you can:
Learn how to create a webhook trigger in Sim that seamlessly initiates workflows from Telegram messages. This tutorial walks you through setting up a webhook, configuring it with Telegram's bot API, and triggering automated actions in real-time. Perfect for streamlining tasks directly from your chat!
- **Send messages**: Send text messages to Telegram chats, groups, or channels
- **Delete messages**: Remove previously sent messages from a chat
- **Send photos**: Share images with optional captions
- **Send videos**: Share video files with optional captions
- **Send audio**: Share audio files with optional captions
- **Send animations**: Share GIF animations with optional captions
- **Send documents**: Share files of any type with optional captions
In Sim, the Telegram integration enables your agents to send messages and rich media to Telegram chats as part of automated workflows. This is ideal for automated notifications, alerts, content distribution, and interactive bot experiences.
Learn how to create a webhook trigger in Sim that seamlessly initiates workflows from Telegram messages. This tutorial walks you through setting up a webhook, configuring it with Telegram's bot API, and triggering automated actions in real-time.
<iframe
width="100%"
@@ -27,7 +37,7 @@ Learn how to create a webhook trigger in Sim that seamlessly initiates workflows
allowFullScreen
></iframe>
Learn how to use the Telegram Tool in Sim to seamlessly automate message delivery to any Telegram group. This tutorial walks you through integrating the tool into your workflow, configuring group messaging, and triggering automated updates in real-time. Perfect for enhancing communication directly from your workspace!
Learn how to use the Telegram Tool in Sim to seamlessly automate message delivery to any Telegram group. This tutorial walks you through integrating the tool into your workflow, configuring group messaging, and triggering automated updates in real-time.
<iframe
width="100%"
@@ -38,15 +48,6 @@ Learn how to use the Telegram Tool in Sim to seamlessly automate message deliver
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowFullScreen
></iframe>
Key features of Telegram include:
- Secure Communication: End-to-end encryption and secure cloud storage for messages and media
- Bot Platform: Powerful bot API for creating automated messaging solutions and interactive experiences
- Rich Media Support: Send and receive messages with text formatting, images, files, and interactive elements
- Global Reach: Connect with users worldwide with support for multiple languages and platforms
In Sim, the Telegram integration enables your agents to leverage these powerful messaging capabilities as part of their workflows. This creates opportunities for automated notifications, alerts, and interactive conversations through Telegram's secure messaging platform. The integration allows agents to send messages programmatically to individuals or channels, enabling timely communication and updates. By connecting Sim with Telegram, you can build intelligent agents that engage users through a secure and widely-adopted messaging platform, perfect for delivering notifications, updates, and interactive communications.
{/* MANUAL-CONTENT-END */}


@@ -11,11 +11,19 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
/>
{/* MANUAL-CONTENT-START:intro */}
[WordPress](https://wordpress.org/) is the world's leading open-source content management system, making it easy to publish and manage websites, blogs, and all types of online content. With WordPress, you can create and update posts or pages, organize your content with categories and tags, manage media files, moderate comments, and handle user accounts—allowing you to run everything from personal blogs to complex business sites.
[WordPress](https://wordpress.org/) is the world's leading open-source content management system, powering websites, blogs, and online stores of all sizes. WordPress provides a flexible platform for publishing and managing content with extensive plugin and theme support.
Sim's integration with WordPress lets your agents automate essential website tasks. You can programmatically create new blog posts with specific titles, content, categories, tags, and featured images. Updating existing posts—such as changing their content, title, or publishing status—is straightforward. You can also publish or save content as drafts, manage static pages, work with media uploads, oversee comments, and assign content to relevant organizational taxonomies.
With the WordPress integration in Sim, you can:
By connecting WordPress to your automations, Sim empowers your agents to streamline content publishing, editorial workflows, and everyday site management—helping you keep your website fresh, organized, and secure without manual effort.
- **Manage posts**: Create, update, delete, get, and list blog posts with full control over content, status, categories, and tags
- **Manage pages**: Create, update, delete, get, and list static pages
- **Handle media**: Upload, get, list, and delete media files such as images, videos, and documents
- **Moderate comments**: Create, list, update, and delete comments on posts and pages
- **Organize content**: Create and list categories and tags for content taxonomy
- **Manage users**: Get the current user, list users, and retrieve user details
- **Search content**: Search across all content types on the WordPress site
In Sim, the WordPress integration enables your agents to automate content publishing and site management as part of automated workflows. Agents can create and publish posts, manage media assets, moderate comments, and organize content—keeping your website fresh and organized without manual effort.
{/* MANUAL-CONTENT-END */}


@@ -29,74 +29,65 @@ In Sim, the X integration enables sophisticated social media automation scenario
## Usage Instructions
Integrate X into the workflow. Can post a new tweet, get tweet details, search tweets, and get user profile.
Integrate X into the workflow. Search tweets, manage bookmarks, follow/block/mute users, like and retweet, view trends, and more.
## Tools
### `x_write`
### `x_create_tweet`
Post new tweets, reply to tweets, or create polls on X (Twitter)
Create a new tweet, reply, or quote tweet on X
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `text` | string | Yes | The text content of your tweet \(max 280 characters\) |
| `replyTo` | string | No | ID of the tweet to reply to \(e.g., 1234567890123456789\) |
| `mediaIds` | array | No | Array of media IDs to attach to the tweet |
| `poll` | object | No | Poll configuration for the tweet |
| `text` | string | Yes | The text content of the tweet \(max 280 characters\) |
| `replyToTweetId` | string | No | Tweet ID to reply to |
| `quoteTweetId` | string | No | Tweet ID to quote |
| `mediaIds` | string | No | Comma-separated media IDs to attach \(up to 4\) |
| `replySettings` | string | No | Who can reply: "mentionedUsers", "following", "subscribers", or "verified" |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `tweet` | object | The newly created tweet data |
| ↳ `id` | string | Tweet ID |
| ↳ `text` | string | Tweet content text |
| ↳ `createdAt` | string | Tweet creation timestamp |
| ↳ `authorId` | string | ID of the tweet author |
| ↳ `conversationId` | string | Conversation thread ID |
| ↳ `attachments` | object | Media or poll attachments |
| ↳ `mediaKeys` | array | Media attachment keys |
| ↳ `pollId` | string | Poll ID if poll attached |
| `id` | string | The ID of the created tweet |
| `text` | string | The text of the created tweet |
### `x_read`
### `x_delete_tweet`
Read tweet details, including replies and conversation context
Delete a tweet authored by the authenticated user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `tweetId` | string | Yes | ID of the tweet to read \(e.g., 1234567890123456789\) |
| `includeReplies` | boolean | No | Whether to include replies to the tweet |
| `tweetId` | string | Yes | The ID of the tweet to delete |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `tweet` | object | The main tweet data |
| ↳ `id` | string | Tweet ID |
| ↳ `text` | string | Tweet content text |
| ↳ `createdAt` | string | Tweet creation timestamp |
| ↳ `authorId` | string | ID of the tweet author |
| `context` | object | Conversation context including parent and root tweets |
| `deleted` | boolean | Whether the tweet was successfully deleted |
### `x_search`
### `x_search_tweets`
Search for tweets using keywords, hashtags, or advanced queries
Search for recent tweets using keywords, hashtags, or advanced query operators
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `query` | string | Yes | Search query \(e.g., "AI news", "#technology", "from:username"\). Supports X search operators |
| `maxResults` | number | No | Maximum number of results to return \(e.g., 10, 25, 50\). Default: 10, max: 100 |
| `startTime` | string | No | Start time for search \(ISO 8601 format\) |
| `endTime` | string | No | End time for search \(ISO 8601 format\) |
| `sortOrder` | string | No | Sort order for results \(recency or relevancy\) |
| `query` | string | Yes | Search query \(supports operators like "from:", "to:", "#hashtag", "has:images", "is:retweet", "lang:"\) |
| `maxResults` | number | No | Maximum number of results \(10-100, default 10\) |
| `startTime` | string | No | Oldest UTC timestamp in ISO 8601 format \(e.g., 2024-01-01T00:00:00Z\) |
| `endTime` | string | No | Newest UTC timestamp in ISO 8601 format |
| `sinceId` | string | No | Returns tweets with ID greater than this |
| `untilId` | string | No | Returns tweets with ID less than this |
| `sortOrder` | string | No | Sort order: "recency" or "relevancy" |
| `nextToken` | string | No | Pagination token for next page of results |
#### Output
@@ -104,38 +95,748 @@ Search for tweets using keywords, hashtags, or advanced queries
| --------- | ---- | ----------- |
| `tweets` | array | Array of tweets matching the search query |
| ↳ `id` | string | Tweet ID |
| ↳ `text` | string | Tweet content |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `text` | string | Tweet text content |
| ↳ `createdAt` | string | Tweet creation timestamp |
| ↳ `authorId` | string | Author user ID |
| `includes` | object | Additional data including user profiles and media |
| ↳ `conversationId` | string | Conversation thread ID |
| ↳ `inReplyToUserId` | string | User ID being replied to |
| ↳ `publicMetrics` | object | Engagement metrics |
| ↳ `retweetCount` | number | Number of retweets |
| ↳ `replyCount` | number | Number of replies |
| ↳ `likeCount` | number | Number of likes |
| ↳ `quoteCount` | number | Number of quotes |
| `includes` | object | Additional data including user profiles |
| ↳ `users` | array | Array of user objects referenced in tweets |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
| `meta` | object | Search metadata including result count and pagination tokens |
| ↳ `resultCount` | number | Number of results returned |
| ↳ `newestId` | string | ID of the newest tweet |
| ↳ `oldestId` | string | ID of the oldest tweet |
| ↳ `nextToken` | string | Pagination token for next page |
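The inputs above correspond closely to the query parameters of X API v2 recent search, with `maxResults` constrained to the documented 10-100 range. The sketch below shows one plausible mapping (X API parameter names; the clamping and mapping are illustrative assumptions, not Sim's implementation).

```typescript
// Sketch: translate x_search_tweets inputs into X API v2
// recent-search query parameters, clamping max_results to 10-100.
interface SearchInput {
  query: string;
  maxResults?: number;
  startTime?: string;
  endTime?: string;
  sinceId?: string;
  untilId?: string;
  sortOrder?: "recency" | "relevancy";
  nextToken?: string;
}

function buildSearchParams(i: SearchInput): URLSearchParams {
  const q = new URLSearchParams({ query: i.query });
  if (i.maxResults !== undefined)
    q.set("max_results", String(Math.min(100, Math.max(10, i.maxResults))));
  if (i.startTime) q.set("start_time", i.startTime);
  if (i.endTime) q.set("end_time", i.endTime);
  if (i.sinceId) q.set("since_id", i.sinceId);
  if (i.untilId) q.set("until_id", i.untilId);
  if (i.sortOrder) q.set("sort_order", i.sortOrder);
  if (i.nextToken) q.set("next_token", i.nextToken);
  return q;
}
```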
### `x_user`
Get user profile information
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `username` | string | Yes | Username to look up without @ symbol \(e.g., elonmusk, openai\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `user` | object | X user profile information |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
### `x_get_tweets_by_ids`
Look up multiple tweets by their IDs (up to 100)
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `ids` | string | Yes | Comma-separated tweet IDs \(up to 100\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `tweets` | array | Array of tweets matching the provided IDs |
| ↳ `id` | string | Tweet ID |
| ↳ `text` | string | Tweet text content |
| ↳ `createdAt` | string | Tweet creation timestamp |
| ↳ `authorId` | string | Author user ID |
| ↳ `conversationId` | string | Conversation thread ID |
| ↳ `inReplyToUserId` | string | User ID being replied to |
| ↳ `publicMetrics` | object | Engagement metrics |
| ↳ `retweetCount` | number | Number of retweets |
| ↳ `replyCount` | number | Number of replies |
| ↳ `likeCount` | number | Number of likes |
| ↳ `quoteCount` | number | Number of quotes |
| `includes` | object | Additional data including user profiles |
| ↳ `users` | array | Array of user objects referenced in tweets |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
### `x_get_quote_tweets`
Get tweets that quote a specific tweet
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `tweetId` | string | Yes | The tweet ID to get quote tweets for |
| `maxResults` | number | No | Maximum number of results \(10-100, default 10\) |
| `paginationToken` | string | No | Pagination token for next page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `tweets` | array | Array of quote tweets |
| ↳ `id` | string | Tweet ID |
| ↳ `text` | string | Tweet text content |
| ↳ `createdAt` | string | Tweet creation timestamp |
| ↳ `authorId` | string | Author user ID |
| ↳ `conversationId` | string | Conversation thread ID |
| ↳ `inReplyToUserId` | string | User ID being replied to |
| ↳ `publicMetrics` | object | Engagement metrics |
| ↳ `retweetCount` | number | Number of retweets |
| ↳ `replyCount` | number | Number of replies |
| ↳ `likeCount` | number | Number of likes |
| ↳ `quoteCount` | number | Number of quotes |
| `meta` | object | Pagination metadata |
| ↳ `resultCount` | number | Number of results returned |
| ↳ `nextToken` | string | Token for next page |
### `x_hide_reply`
Hide or unhide a reply to a tweet authored by the authenticated user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `tweetId` | string | Yes | The reply tweet ID to hide or unhide |
| `hidden` | boolean | Yes | Set to true to hide the reply, false to unhide |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `hidden` | boolean | Whether the reply is now hidden |
### `x_get_user_tweets`
Get tweets authored by a specific user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The user ID whose tweets to retrieve |
| `maxResults` | number | No | Maximum number of results \(5-100, default 10\) |
| `paginationToken` | string | No | Pagination token for next page of results |
| `startTime` | string | No | Oldest UTC timestamp in ISO 8601 format |
| `endTime` | string | No | Newest UTC timestamp in ISO 8601 format |
| `sinceId` | string | No | Returns tweets with ID greater than this |
| `untilId` | string | No | Returns tweets with ID less than this |
| `exclude` | string | No | Comma-separated types to exclude: "retweets", "replies" |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `tweets` | array | Array of tweets by the user |
| ↳ `id` | string | Tweet ID |
| ↳ `text` | string | Tweet text content |
| ↳ `createdAt` | string | Tweet creation timestamp |
| ↳ `authorId` | string | Author user ID |
| ↳ `conversationId` | string | Conversation thread ID |
| ↳ `inReplyToUserId` | string | User ID being replied to |
| ↳ `publicMetrics` | object | Engagement metrics |
| ↳ `retweetCount` | number | Number of retweets |
| ↳ `replyCount` | number | Number of replies |
| ↳ `likeCount` | number | Number of likes |
| ↳ `quoteCount` | number | Number of quotes |
| `includes` | object | Additional data including user profiles |
| ↳ `users` | array | Array of user objects referenced in tweets |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
| `meta` | object | Pagination metadata |
| ↳ `resultCount` | number | Number of results returned |
| ↳ `newestId` | string | ID of the newest tweet |
| ↳ `oldestId` | string | ID of the oldest tweet |
| ↳ `nextToken` | string | Token for next page |
| ↳ `previousToken` | string | Token for previous page |
### `x_get_user_mentions`
Get tweets that mention a specific user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The user ID whose mentions to retrieve |
| `maxResults` | number | No | Maximum number of results \(5-100, default 10\) |
| `paginationToken` | string | No | Pagination token for next page of results |
| `startTime` | string | No | Oldest UTC timestamp in ISO 8601 format |
| `endTime` | string | No | Newest UTC timestamp in ISO 8601 format |
| `sinceId` | string | No | Returns tweets with ID greater than this |
| `untilId` | string | No | Returns tweets with ID less than this |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `tweets` | array | Array of tweets mentioning the user |
| ↳ `id` | string | Tweet ID |
| ↳ `text` | string | Tweet text content |
| ↳ `createdAt` | string | Tweet creation timestamp |
| ↳ `authorId` | string | Author user ID |
| ↳ `conversationId` | string | Conversation thread ID |
| ↳ `inReplyToUserId` | string | User ID being replied to |
| ↳ `publicMetrics` | object | Engagement metrics |
| ↳ `retweetCount` | number | Number of retweets |
| ↳ `replyCount` | number | Number of replies |
| ↳ `likeCount` | number | Number of likes |
| ↳ `quoteCount` | number | Number of quotes |
| `includes` | object | Additional data including user profiles |
| ↳ `users` | array | Array of user objects referenced in tweets |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
| `meta` | object | Pagination metadata |
| ↳ `resultCount` | number | Number of results returned |
| ↳ `newestId` | string | ID of the newest tweet |
| ↳ `oldestId` | string | ID of the oldest tweet |
| ↳ `nextToken` | string | Token for next page |
| ↳ `previousToken` | string | Token for previous page |
### `x_get_user_timeline`
Get the reverse chronological home timeline for the authenticated user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The authenticated user ID |
| `maxResults` | number | No | Maximum number of results \(1-100, default 10\) |
| `paginationToken` | string | No | Pagination token for next page of results |
| `startTime` | string | No | Oldest UTC timestamp in ISO 8601 format |
| `endTime` | string | No | Newest UTC timestamp in ISO 8601 format |
| `sinceId` | string | No | Returns tweets with ID greater than this |
| `untilId` | string | No | Returns tweets with ID less than this |
| `exclude` | string | No | Comma-separated types to exclude: "retweets", "replies" |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `tweets` | array | Array of timeline tweets |
| ↳ `id` | string | Tweet ID |
| ↳ `text` | string | Tweet text content |
| ↳ `createdAt` | string | Tweet creation timestamp |
| ↳ `authorId` | string | Author user ID |
| ↳ `conversationId` | string | Conversation thread ID |
| ↳ `inReplyToUserId` | string | User ID being replied to |
| ↳ `publicMetrics` | object | Engagement metrics |
| ↳ `retweetCount` | number | Number of retweets |
| ↳ `replyCount` | number | Number of replies |
| ↳ `likeCount` | number | Number of likes |
| ↳ `quoteCount` | number | Number of quotes |
| `includes` | object | Additional data including user profiles |
| ↳ `users` | array | Array of user objects referenced in tweets |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
| `meta` | object | Pagination metadata |
| ↳ `resultCount` | number | Number of results returned |
| ↳ `newestId` | string | ID of the newest tweet |
| ↳ `oldestId` | string | ID of the oldest tweet |
| ↳ `nextToken` | string | Token for next page |
| ↳ `previousToken` | string | Token for previous page |
### `x_manage_like`
Like or unlike a tweet on X
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The authenticated user ID |
| `tweetId` | string | Yes | The tweet ID to like or unlike |
| `action` | string | Yes | Action to perform: "like" or "unlike" |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `liked` | boolean | Whether the tweet is now liked |
### `x_manage_retweet`
Retweet or unretweet a tweet on X
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The authenticated user ID |
| `tweetId` | string | Yes | The tweet ID to retweet or unretweet |
| `action` | string | Yes | Action to perform: "retweet" or "unretweet" |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `retweeted` | boolean | Whether the tweet is now retweeted |
### `x_get_liked_tweets`
Get tweets liked by a specific user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The user ID whose liked tweets to retrieve |
| `maxResults` | number | No | Maximum number of results \(5-100\) |
| `paginationToken` | string | No | Pagination token for next page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `tweets` | array | Array of liked tweets |
| ↳ `id` | string | Tweet ID |
| ↳ `text` | string | Tweet content |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `authorId` | string | Author user ID |
| `meta` | object | Pagination metadata |
| ↳ `resultCount` | number | Number of results returned |
| ↳ `nextToken` | string | Token for next page |
### `x_get_liking_users`
Get the list of users who liked a specific tweet
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `tweetId` | string | Yes | The tweet ID to get liking users for |
| `maxResults` | number | No | Maximum number of results \(1-100, default 100\) |
| `paginationToken` | string | No | Pagination token for next page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `users` | array | Array of users who liked the tweet |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
| `meta` | object | Pagination metadata |
| ↳ `resultCount` | number | Number of results returned |
| ↳ `nextToken` | string | Token for next page |
### `x_get_retweeted_by`
Get the list of users who retweeted a specific tweet
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `tweetId` | string | Yes | The tweet ID to get retweeters for |
| `maxResults` | number | No | Maximum number of results \(1-100, default 100\) |
| `paginationToken` | string | No | Pagination token for next page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `users` | array | Array of users who retweeted the tweet |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
| `meta` | object | Metadata |
| ↳ `resultCount` | number | Number of results returned |
| ↳ `nextToken` | string | Token for next page |
### `x_get_bookmarks`
Get bookmarked tweets for the authenticated user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The authenticated user ID |
| `maxResults` | number | No | Maximum number of results \(1-100\) |
| `paginationToken` | string | No | Pagination token for next page of results |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `tweets` | array | Array of bookmarked tweets |
| ↳ `id` | string | Tweet ID |
| ↳ `text` | string | Tweet text content |
| ↳ `createdAt` | string | Tweet creation timestamp |
| ↳ `authorId` | string | Author user ID |
| ↳ `conversationId` | string | Conversation thread ID |
| ↳ `inReplyToUserId` | string | User ID being replied to |
| ↳ `publicMetrics` | object | Engagement metrics |
| ↳ `retweetCount` | number | Number of retweets |
| ↳ `replyCount` | number | Number of replies |
| ↳ `likeCount` | number | Number of likes |
| ↳ `quoteCount` | number | Number of quotes |
| `includes` | object | Additional data including user profiles |
| ↳ `users` | array | Array of user objects referenced in tweets |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
| `meta` | object | Pagination metadata |
| ↳ `resultCount` | number | Number of results returned |
| ↳ `newestId` | string | ID of the newest tweet |
| ↳ `oldestId` | string | ID of the oldest tweet |
| ↳ `nextToken` | string | Token for next page |
| ↳ `previousToken` | string | Token for previous page |
### `x_create_bookmark`
Bookmark a tweet for the authenticated user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The authenticated user ID |
| `tweetId` | string | Yes | The tweet ID to bookmark |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `bookmarked` | boolean | Whether the tweet was successfully bookmarked |
### `x_delete_bookmark`
Remove a tweet from the authenticated user's bookmarks
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The authenticated user ID |
| `tweetId` | string | Yes | The tweet ID to remove from bookmarks |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `bookmarked` | boolean | Whether the tweet is still bookmarked \(should be false after deletion\) |
### `x_get_me`
Get the authenticated user's profile information
#### Input
No input parameters.
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `user` | object | Authenticated user profile |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
### `x_search_users`
Search for X users by name, username, or bio
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `query` | string | Yes | Search keyword \(1-50 chars, matches name, username, or bio\) |
| `maxResults` | number | No | Maximum number of results \(1-1000, default 100\) |
| `nextToken` | string | No | Pagination token for next page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `users` | array | Array of users matching the search query |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
| `meta` | object | Search metadata |
| ↳ `resultCount` | number | Number of results returned |
| ↳ `nextToken` | string | Pagination token for next page |
### `x_get_followers`
Get the list of followers for a user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The user ID whose followers to retrieve |
| `maxResults` | number | No | Maximum number of results \(1-1000, default 100\) |
| `paginationToken` | string | No | Pagination token for next page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `users` | array | Array of follower user profiles |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
| `meta` | object | Pagination metadata |
| ↳ `resultCount` | number | Number of results returned |
| ↳ `nextToken` | string | Token for next page |
### `x_get_following`
Get the list of users that a user is following
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The user ID whose following list to retrieve |
| `maxResults` | number | No | Maximum number of results \(1-1000, default 100\) |
| `paginationToken` | string | No | Pagination token for next page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `users` | array | Array of users being followed |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
| `meta` | object | Pagination metadata |
| ↳ `resultCount` | number | Number of results returned |
| ↳ `nextToken` | string | Token for next page |
### `x_manage_follow`
Follow or unfollow a user on X
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The authenticated user ID |
| `targetUserId` | string | Yes | The user ID to follow or unfollow |
| `action` | string | Yes | Action to perform: "follow" or "unfollow" |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `following` | boolean | Whether you are now following the user |
| `pendingFollow` | boolean | Whether the follow request is pending \(for protected accounts\) |
### `x_manage_block`
Block or unblock a user on X
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The authenticated user ID |
| `targetUserId` | string | Yes | The user ID to block or unblock |
| `action` | string | Yes | Action to perform: "block" or "unblock" |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `blocking` | boolean | Whether you are now blocking the user |
### `x_get_blocking`
Get the list of users blocked by the authenticated user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The authenticated user ID |
| `maxResults` | number | No | Maximum number of results \(1-1000\) |
| `paginationToken` | string | No | Pagination token for next page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `users` | array | Array of blocked user profiles |
| ↳ `id` | string | User ID |
| ↳ `username` | string | Username without @ symbol |
| ↳ `name` | string | Display name |
| ↳ `description` | string | User bio |
| ↳ `profileImageUrl` | string | Profile image URL |
| ↳ `verified` | boolean | Whether the user is verified |
| ↳ `metrics` | object | User statistics |
| ↳ `followersCount` | number | Number of followers |
| ↳ `followingCount` | number | Number of users following |
| ↳ `tweetCount` | number | Total number of tweets |
| `meta` | object | Pagination metadata |
| ↳ `resultCount` | number | Number of results returned |
| ↳ `nextToken` | string | Token for next page |
### `x_manage_mute`
Mute or unmute a user on X
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `userId` | string | Yes | The authenticated user ID |
| `targetUserId` | string | Yes | The user ID to mute or unmute |
| `action` | string | Yes | Action to perform: "mute" or "unmute" |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `muting` | boolean | Whether you are now muting the user |
### `x_get_trends_by_woeid`
Get trending topics for a specific location by WOEID (e.g., 1 for worldwide, 23424977 for US)
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `woeid` | string | Yes | Yahoo Where On Earth ID \(e.g., "1" for worldwide, "23424977" for US, "23424975" for UK\) |
| `maxTrends` | number | No | Maximum number of trends to return \(1-50, default 20\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `trends` | array | Array of trending topics |
| ↳ `trendName` | string | Name of the trending topic |
| ↳ `tweetCount` | number | Number of tweets for this trend |
### `x_get_personalized_trends`
Get personalized trending topics for the authenticated user
#### Input
No input parameters.
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `trends` | array | Array of personalized trending topics |
| ↳ `trendName` | string | Name of the trending topic |
| ↳ `postCount` | number | Number of posts for this trend |
| ↳ `category` | string | Category of the trend |
| ↳ `trendingSince` | string | ISO 8601 timestamp of when the topic started trending |
### `x_get_usage`
Get the API usage data for your X project
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `days` | number | No | Number of days of usage data to return \(1-90, default 7\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `capResetDay` | number | Day of month when usage cap resets |
| `projectId` | string | The project ID |
| `projectCap` | number | The project tweet consumption cap |
| `projectUsage` | number | Total tweets consumed in current period |
| `dailyProjectUsage` | array | Daily project usage breakdown |
| ↳ `date` | string | Usage date in ISO 8601 format |
| ↳ `usage` | number | Number of tweets consumed |
| `dailyClientAppUsage` | array | Daily per-app usage breakdown |
| ↳ `clientAppId` | string | Client application ID |
| ↳ `usage` | array | Daily usage entries for this app |
| ↳ `date` | string | Usage date in ISO 8601 format |
| ↳ `usage` | number | Number of tweets consumed |

---
title: Authentication
description: API key types, generation, and how to authenticate requests
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
To access the Sim API, you need an API key. Sim supports two types of API keys — **personal keys** and **workspace keys** — each with different billing and access behaviors.
## Key Types
| | **Personal Keys** | **Workspace Keys** |
| --- | --- | --- |
| **Billed to** | Your individual account | Workspace owner |
| **Scope** | Across workspaces you have access to | Shared across the workspace |
| **Managed by** | Each user individually | Workspace admins |
| **Permissions** | Must be enabled at workspace level | Require admin permissions |
<Callout type="info">
Workspace admins can disable personal API key usage for their workspace. If disabled, only workspace keys can be used.
</Callout>
## Generating API Keys
To generate a key, open the Sim dashboard and navigate to **Settings**, then go to **Sim Keys** and click **Create**.
<Callout type="warn">
API keys are only shown once when generated. Store your key securely — you will not be able to view it again.
</Callout>
## Using API Keys
Pass your API key in the `X-API-Key` header with every request:
<Tabs items={['curl', 'TypeScript', 'Python']}>
<Tab value="curl">
```bash
curl -X POST https://www.sim.ai/api/workflows/{workflowId}/execute \
-H "Content-Type: application/json" \
-H "X-API-Key: YOUR_API_KEY" \
-d '{"inputs": {}}'
```
</Tab>
<Tab value="TypeScript">
```typescript
const response = await fetch(
'https://www.sim.ai/api/workflows/{workflowId}/execute',
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-API-Key': process.env.SIM_API_KEY!,
},
body: JSON.stringify({ inputs: {} }),
}
)
```
</Tab>
<Tab value="Python">
```python
import os

import requests
response = requests.post(
"https://www.sim.ai/api/workflows/{workflowId}/execute",
headers={
"Content-Type": "application/json",
"X-API-Key": os.environ["SIM_API_KEY"],
},
json={"inputs": {}},
)
```
</Tab>
</Tabs>
## Where Keys Are Used
API keys authenticate access to:
- **Workflow execution** — run deployed workflows via the API
- **Logs API** — query workflow execution logs and metrics
- **MCP servers** — authenticate connections to deployed MCP servers
- **SDKs** — the [Python](/api-reference/python) and [TypeScript](/api-reference/typescript) SDKs use API keys for all operations
## Security
- Keys use the `sk-sim-` prefix and are encrypted at rest
- Keys can be revoked at any time from the dashboard
- Use environment variables to store keys — never hardcode them in source code
- For browser-based applications, use a backend proxy to avoid exposing keys to the client
<Callout type="warn">
Never expose your API key in client-side code. Use a server-side proxy to make authenticated requests on behalf of your frontend.
</Callout>

---
title: Getting Started
description: Base URL, first API call, response format, error handling, and pagination
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Step, Steps } from 'fumadocs-ui/components/steps'
## Base URL
All API requests are made to:
```
https://www.sim.ai
```
## Quick Start
<Steps>
<Step>
### Get your API key
Go to the Sim platform and navigate to **Settings**, then go to **Sim Keys** and click **Create**. See [Authentication](/api-reference/authentication) for details on key types.
</Step>
<Step>
### Find your workflow ID
Open a workflow in the Sim editor. The workflow ID is in the URL:
```
https://www.sim.ai/workspace/{workspaceId}/w/{workflowId}
```
You can also use the [List Workflows](/api-reference/workflows/listWorkflows) endpoint to get all workflow IDs in a workspace.
</Step>
<Step>
### Deploy your workflow
A workflow must be deployed before it can be executed via the API. Click the **Deploy** button in the editor toolbar, or use the dashboard to manage deployments.
</Step>
<Step>
### Make your first request
<Tabs items={['curl', 'TypeScript', 'Python']}>
<Tab value="curl">
```bash
curl -X POST https://www.sim.ai/api/workflows/{workflowId}/execute \
-H "Content-Type: application/json" \
-H "X-API-Key: YOUR_API_KEY" \
-d '{"inputs": {}}'
```
</Tab>
<Tab value="TypeScript">
```typescript
const response = await fetch(
`https://www.sim.ai/api/workflows/${workflowId}/execute`,
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-API-Key': process.env.SIM_API_KEY!,
},
body: JSON.stringify({ inputs: {} }),
}
)
const data = await response.json()
console.log(data.output)
```
</Tab>
<Tab value="Python">
```python
import requests
import os
response = requests.post(
f"https://www.sim.ai/api/workflows/{workflow_id}/execute",
headers={
"Content-Type": "application/json",
"X-API-Key": os.environ["SIM_API_KEY"],
},
json={"inputs": {}},
)
data = response.json()
print(data["output"])
```
</Tab>
</Tabs>
</Step>
</Steps>
## Sync vs Async Execution
By default, workflow executions are **synchronous** — the API blocks until the workflow completes and returns the result directly.
For long-running workflows, use **asynchronous execution** by passing `async: true`:
```bash
curl -X POST https://www.sim.ai/api/workflows/{workflowId}/execute \
-H "Content-Type: application/json" \
-H "X-API-Key: YOUR_API_KEY" \
-d '{"inputs": {}, "async": true}'
```
This returns immediately with a `taskId`:
```json
{
"success": true,
"taskId": "job_abc123",
"status": "queued"
}
```
Poll the [Get Job Status](/api-reference/workflows/getJobStatus) endpoint until the status is `completed` or `failed`:
```bash
curl https://www.sim.ai/api/jobs/{taskId} \
-H "X-API-Key: YOUR_API_KEY"
```
<Callout type="info">
Job status transitions follow: `queued` → `processing` → `completed` or `failed`. The `output` field is only present when status is `completed`.
</Callout>
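A client-side polling loop can wrap this flow. The sketch below is illustrative rather than part of any SDK: `get_status` stands in for whatever function issues the `GET /api/jobs/{taskId}` request (with your `X-API-Key` header) and returns the parsed JSON body.

```python
import time


def wait_for_job(get_status, task_id, interval=2.0, timeout=300.0):
    """Poll a job-status fetcher until the job reaches a terminal state.

    `get_status(task_id)` must return the JSON body of
    GET /api/jobs/{taskId} as a dict with a "status" key.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = get_status(task_id)
        if job["status"] in ("completed", "failed"):
            return job
        time.sleep(interval)  # back off between polls
    raise TimeoutError(f"job {task_id} did not finish within {timeout}s")
```

For long-running workflows, consider increasing `interval` so you do not spend your rate-limit budget on polling.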
## Response Format
Successful responses include an `output` object with your workflow results and a `limits` object with your current rate limit and usage status:
```json
{
"success": true,
"output": {
"result": "Hello, world!"
},
"limits": {
"workflowExecutionRateLimit": {
"sync": {
"requestsPerMinute": 60,
"maxBurst": 10,
"remaining": 59,
"resetAt": "2025-01-01T00:01:00Z"
},
"async": {
"requestsPerMinute": 30,
"maxBurst": 5,
"remaining": 30,
"resetAt": "2025-01-01T00:01:00Z"
}
},
"usage": {
"currentPeriodCost": 1.25,
"limit": 50.00,
"plan": "pro",
"isExceeded": false
}
}
}
```
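For example, a client can inspect the `limits` object before scheduling further work. This helper is purely illustrative and follows the response shape shown above:

```python
def remaining_budget(response):
    """Summarize rate-limit and spend headroom from an execution response."""
    sync = response["limits"]["workflowExecutionRateLimit"]["sync"]
    usage = response["limits"]["usage"]
    return {
        "sync_requests_remaining": sync["remaining"],
        "resets_at": sync["resetAt"],
        "cost_headroom": usage["limit"] - usage["currentPeriodCost"],
        "exceeded": usage["isExceeded"],
    }
```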
## Error Handling
The API uses standard HTTP status codes. Error responses include a human-readable `error` message:
```json
{
"error": "Workflow not found"
}
```
| Status | Meaning | What to do |
| --- | --- | --- |
| `400` | Invalid request parameters | Check the `details` array for specific field errors |
| `401` | Missing or invalid API key | Verify your `X-API-Key` header |
| `403` | Access denied | Check you have permission for this resource |
| `404` | Resource not found | Verify the ID exists and belongs to your workspace |
| `429` | Rate limit exceeded | Wait for the duration in the `Retry-After` header |
<Callout type="info">
Use the [Get Usage Limits](/api-reference/usage/getUsageLimits) endpoint to check your current rate limit status and billing usage at any time.
</Callout>
## Rate Limits
Rate limits depend on your subscription plan and apply separately to synchronous and asynchronous executions. Every execution response includes a `limits` object showing your current rate limit status.
When rate limited, the API returns a `429` response with a `Retry-After` header indicating how many seconds to wait before retrying.
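A minimal retry wrapper that honors `Retry-After` might look like the following. Here `post` is any zero-argument callable that issues the request (for instance a `functools.partial` around `requests.post`); the function itself is a sketch, not part of the API:

```python
import time

def post_with_retry_after(post, max_attempts=5, default_delay=1.0):
    """Call post() and, on HTTP 429, sleep for the Retry-After duration
    before trying again, up to max_attempts total attempts."""
    resp = post()
    for _ in range(max_attempts - 1):
        if resp.status_code != 429:
            break
        # Retry-After is expressed in seconds to wait before retrying.
        time.sleep(float(resp.headers.get("Retry-After", default_delay)))
        resp = post()
    return resp
```

The official SDKs layer exponential backoff and jitter on top of this basic behavior.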
## Pagination
List endpoints (workflows, logs, audit logs) use **cursor-based pagination**:
```bash
# First page
curl "https://www.sim.ai/api/v1/logs?limit=20" \
-H "X-API-Key: YOUR_API_KEY"
# Next page — use the nextCursor from the previous response
curl "https://www.sim.ai/api/v1/logs?limit=20&cursor=abc123" \
-H "X-API-Key: YOUR_API_KEY"
```
The response includes a `nextCursor` field. When `nextCursor` is absent or `null`, you have reached the last page.
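The cursor loop can be wrapped in a small generator. `fetch_page` stands in for the HTTP call, and the `"data"` item key is an assumption (the exact key is endpoint-specific); `nextCursor` follows the response field described above:

```python
def iter_pages(fetch_page):
    """Yield every item from a cursor-paginated endpoint.

    fetch_page(cursor): returns the parsed JSON for one page, e.g.
    GET /api/v1/logs?limit=20[&cursor=...]. Iteration stops when
    nextCursor is absent or null.
    """
    cursor = None
    while True:
        page = fetch_page(cursor)
        yield from page.get("data", [])  # "data" is an assumed item key
        cursor = page.get("nextCursor")
        if not cursor:
            break
```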


@@ -0,0 +1,16 @@
{
"title": "API Reference",
"root": true,
"pages": [
"getting-started",
"authentication",
"---SDKs---",
"python",
"typescript",
"---Endpoints---",
"(generated)/workflows",
"(generated)/logs",
"(generated)/usage",
"(generated)/audit-logs"
]
}


@@ -0,0 +1,766 @@
---
title: Python
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Card, Cards } from 'fumadocs-ui/components/card'
import { Step, Steps } from 'fumadocs-ui/components/steps'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
The official Python SDK for Sim lets you execute workflows programmatically from your Python applications.
<Callout type="info">
The Python SDK supports Python 3.8+ and includes asynchronous execution, automatic rate limiting with exponential backoff, and usage tracking.
</Callout>
## Installation
Install the SDK using pip:
```bash
pip install simstudio-sdk
```
## Quick Start
Here's a simple example to get started:
```python
from simstudio import SimStudioClient
# Initialize the client
client = SimStudioClient(
api_key="your-api-key-here",
base_url="https://sim.ai" # optional, defaults to https://sim.ai
)
# Execute a workflow
try:
result = client.execute_workflow("workflow-id")
print("Workflow executed successfully:", result)
except Exception as error:
print("Workflow execution failed:", error)
```
## API Reference
### SimStudioClient
#### Constructor
```python
SimStudioClient(api_key: str, base_url: str = "https://sim.ai")
```
**Parameters:**
- `api_key` (str): Your Sim API key
- `base_url` (str, optional): Base URL for the Sim API
#### Methods
##### execute_workflow()
Execute a workflow with optional input data.
```python
result = client.execute_workflow(
"workflow-id",
input_data={"message": "Hello, world!"},
timeout=30.0 # 30 seconds
)
```
**Parameters:**
- `workflow_id` (str): The ID of the workflow to execute
- `input_data` (dict, optional): Input data to pass to the workflow
- `timeout` (float, optional): Timeout in seconds (default: 30.0)
- `stream` (bool, optional): Enable streaming responses (default: False)
- `selected_outputs` (list[str], optional): Block outputs to stream, in `blockName.attribute` format (e.g., `["agent1.content"]`)
- `async_execution` (bool, optional): Execute asynchronously (default: False)
**Returns:** `WorkflowExecutionResult | AsyncExecutionResult`
When `async_execution=True`, returns immediately with a task ID for polling. Otherwise, waits for completion.
##### get_workflow_status()
Get the status of a workflow (deployment status, etc.).
```python
status = client.get_workflow_status("workflow-id")
print("Is deployed:", status.is_deployed)
```
**Parameters:**
- `workflow_id` (str): The ID of the workflow
**Returns:** `WorkflowStatus`
##### validate_workflow()
Validate that a workflow is ready for execution.
```python
is_ready = client.validate_workflow("workflow-id")
if is_ready:
# Workflow is deployed and ready
pass
```
**Parameters:**
- `workflow_id` (str): The ID of the workflow
**Returns:** `bool`
##### get_job_status()
Get the status of an async job execution.
```python
status = client.get_job_status("task-id-from-async-execution")
print("Status:", status["status"]) # 'queued', 'processing', 'completed', 'failed'
if status["status"] == "completed":
print("Output:", status["output"])
```
**Parameters:**
- `task_id` (str): The task ID returned from async execution
**Returns:** `Dict[str, Any]`
**Response fields:**
- `success` (bool): Whether the request succeeded
- `taskId` (str): The task ID
- `status` (str): One of `'queued'`, `'processing'`, `'completed'`, `'failed'`, `'cancelled'`
- `metadata` (dict): Contains `startedAt`, `completedAt`, and `duration`
- `output` (any, optional): The workflow output (when completed)
- `error` (any, optional): Error details (when failed)
- `estimatedDuration` (int, optional): Estimated duration in milliseconds (when processing/queued)
##### execute_with_retry()
Execute a workflow with automatic retry on rate-limit errors, using exponential backoff.
```python
result = client.execute_with_retry(
"workflow-id",
input_data={"message": "Hello"},
timeout=30.0,
max_retries=3, # Maximum number of retries
initial_delay=1.0, # Initial delay in seconds
max_delay=30.0, # Maximum delay in seconds
backoff_multiplier=2.0 # Exponential backoff multiplier
)
```
**Parameters:**
- `workflow_id` (str): The ID of the workflow to execute
- `input_data` (dict, optional): Input data to pass to the workflow
- `timeout` (float, optional): Timeout in seconds
- `stream` (bool, optional): Enable streaming responses
- `selected_outputs` (list, optional): Block outputs to stream
- `async_execution` (bool, optional): Execute asynchronously
- `max_retries` (int, optional): Maximum number of retries (default: 3)
- `initial_delay` (float, optional): Initial delay in seconds (default: 1.0)
- `max_delay` (float, optional): Maximum delay in seconds (default: 30.0)
- `backoff_multiplier` (float, optional): Backoff multiplier (default: 2.0)
**Returns:** `WorkflowExecutionResult | AsyncExecutionResult`
The retry logic uses exponential backoff (1s → 2s → 4s → 8s...) with ±25% jitter to avoid thundering-herd effects. If the API provides a `retry-after` header, that value is used instead.
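The delay schedule just described can be sketched as a pure calculation (parameter names mirror `execute_with_retry`; the helper itself is illustrative, not the SDK's internal code):

```python
import random

def backoff_delay(attempt, initial_delay=1.0, max_delay=30.0,
                  backoff_multiplier=2.0, jitter=0.25):
    """Delay in seconds before retry number `attempt` (0-based):
    exponential growth capped at max_delay, with +/-25% jitter."""
    base = min(initial_delay * backoff_multiplier ** attempt, max_delay)
    return base * random.uniform(1.0 - jitter, 1.0 + jitter)
```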
##### get_rate_limit_info()
Get the current rate-limit information from the last API response.
```python
rate_limit_info = client.get_rate_limit_info()
if rate_limit_info:
print("Limit:", rate_limit_info.limit)
print("Remaining:", rate_limit_info.remaining)
print("Reset:", datetime.fromtimestamp(rate_limit_info.reset))
```
**Returns:** `RateLimitInfo | None`
##### get_usage_limits()
Get the current usage limits and quota information for your account.
```python
limits = client.get_usage_limits()
print("Sync requests remaining:", limits.rate_limit["sync"]["remaining"])
print("Async requests remaining:", limits.rate_limit["async"]["remaining"])
print("Current period cost:", limits.usage["currentPeriodCost"])
print("Plan:", limits.usage["plan"])
```
**Returns:** `UsageLimits`
**Response structure:**
```python
{
"success": bool,
"rateLimit": {
"sync": {
"isLimited": bool,
"limit": int,
"remaining": int,
"resetAt": str
},
"async": {
"isLimited": bool,
"limit": int,
"remaining": int,
"resetAt": str
},
"authType": str # 'api' or 'manual'
},
"usage": {
"currentPeriodCost": float,
"limit": float,
"plan": str # e.g., 'free', 'pro'
}
}
```
##### set_api_key()
Update the API key.
```python
client.set_api_key("new-api-key")
```
##### set_base_url()
Update the base URL.
```python
client.set_base_url("https://my-custom-domain.com")
```
##### close()
Close the underlying HTTP session.
```python
client.close()
```
## Data Classes
### WorkflowExecutionResult
```python
@dataclass
class WorkflowExecutionResult:
success: bool
output: Optional[Any] = None
error: Optional[str] = None
logs: Optional[List[Any]] = None
metadata: Optional[Dict[str, Any]] = None
trace_spans: Optional[List[Any]] = None
total_duration: Optional[float] = None
```
### AsyncExecutionResult
```python
@dataclass
class AsyncExecutionResult:
success: bool
task_id: str
status: str # 'queued'
created_at: str
links: Dict[str, str] # e.g., {"status": "/api/jobs/{taskId}"}
```
### WorkflowStatus
```python
@dataclass
class WorkflowStatus:
is_deployed: bool
deployed_at: Optional[str] = None
needs_redeployment: bool = False
```
### RateLimitInfo
```python
@dataclass
class RateLimitInfo:
limit: int
remaining: int
reset: int
retry_after: Optional[int] = None
```
### UsageLimits
```python
@dataclass
class UsageLimits:
success: bool
rate_limit: Dict[str, Any]
usage: Dict[str, Any]
```
### SimStudioError
```python
class SimStudioError(Exception):
def __init__(self, message: str, code: Optional[str] = None, status: Optional[int] = None):
super().__init__(message)
self.code = code
self.status = status
```
**Common error codes:**
- `UNAUTHORIZED`: Invalid API key
- `TIMEOUT`: Request timed out
- `RATE_LIMIT_EXCEEDED`: Rate limit exceeded
- `USAGE_LIMIT_EXCEEDED`: Usage limit exceeded
- `EXECUTION_ERROR`: Workflow execution failed
## Examples
### Basic Workflow Execution
<Steps>
<Step title="Initialize the client">
Set up the SimStudioClient with your API key.
</Step>
<Step title="Validate the workflow">
Check whether the workflow is deployed and ready for execution.
</Step>
<Step title="Execute the workflow">
Run the workflow with your input data.
</Step>
<Step title="Handle the result">
Process the execution result and handle any errors.
</Step>
</Steps>
```python
import os
from simstudio import SimStudioClient
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
def run_workflow():
try:
# Check if workflow is ready
is_ready = client.validate_workflow("my-workflow-id")
if not is_ready:
raise Exception("Workflow is not deployed or ready")
# Execute the workflow
result = client.execute_workflow(
"my-workflow-id",
input_data={
"message": "Process this data",
"user_id": "12345"
}
)
if result.success:
print("Output:", result.output)
print("Duration:", result.metadata.get("duration") if result.metadata else None)
else:
print("Workflow failed:", result.error)
except Exception as error:
print("Error:", error)
run_workflow()
```
### Error Handling
Handle the different types of errors that can occur during workflow execution:
```python
from simstudio import SimStudioClient, SimStudioError
import os
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
def execute_with_error_handling():
try:
result = client.execute_workflow("workflow-id")
return result
except SimStudioError as error:
if error.code == "UNAUTHORIZED":
print("Invalid API key")
elif error.code == "TIMEOUT":
print("Workflow execution timed out")
elif error.code == "USAGE_LIMIT_EXCEEDED":
print("Usage limit exceeded")
elif error.code == "INVALID_JSON":
print("Invalid JSON in request body")
else:
print(f"Workflow error: {error}")
raise
except Exception as error:
print(f"Unexpected error: {error}")
raise
```
### Context Manager Usage
Use the client as a context manager to handle resource cleanup automatically:
```python
from simstudio import SimStudioClient
import os
# Using context manager to automatically close the session
with SimStudioClient(api_key=os.getenv("SIM_API_KEY")) as client:
result = client.execute_workflow("workflow-id")
print("Result:", result)
# Session is automatically closed here
```
### Batch Workflow Execution
Execute multiple workflows efficiently:
```python
from simstudio import SimStudioClient
import os
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
def execute_workflows_batch(workflow_data_pairs):
"""Execute multiple workflows with different input data."""
results = []
for workflow_id, input_data in workflow_data_pairs:
try:
# Validate workflow before execution
if not client.validate_workflow(workflow_id):
print(f"Skipping {workflow_id}: not deployed")
continue
result = client.execute_workflow(workflow_id, input_data)
results.append({
"workflow_id": workflow_id,
"success": result.success,
"output": result.output,
"error": result.error
})
except Exception as error:
results.append({
"workflow_id": workflow_id,
"success": False,
"error": str(error)
})
return results
# Example usage
workflows = [
("workflow-1", {"type": "analysis", "data": "sample1"}),
("workflow-2", {"type": "processing", "data": "sample2"}),
]
results = execute_workflows_batch(workflows)
for result in results:
print(f"Workflow {result['workflow_id']}: {'Success' if result['success'] else 'Failed'}")
```
### Asynchronous Workflow Execution
Execute workflows asynchronously for long-running tasks:
```python
import os
import time
from simstudio import SimStudioClient
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
def execute_async():
try:
# Start async execution
result = client.execute_workflow(
"workflow-id",
input_data={"data": "large dataset"},
async_execution=True # Execute asynchronously
)
# Check if result is an async execution
if hasattr(result, 'task_id'):
print(f"Task ID: {result.task_id}")
print(f"Status endpoint: {result.links['status']}")
# Poll for completion
status = client.get_job_status(result.task_id)
while status["status"] in ["queued", "processing"]:
print(f"Current status: {status['status']}")
time.sleep(2) # Wait 2 seconds
status = client.get_job_status(result.task_id)
if status["status"] == "completed":
print("Workflow completed!")
print(f"Output: {status['output']}")
print(f"Duration: {status['metadata']['duration']}")
else:
print(f"Workflow failed: {status['error']}")
except Exception as error:
print(f"Error: {error}")
execute_async()
```
### Rate Limiting and Retries
Handle rate limits automatically with exponential backoff:
```python
import os
from simstudio import SimStudioClient, SimStudioError
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
def execute_with_retry_handling():
try:
# Automatically retries on rate limit
result = client.execute_with_retry(
"workflow-id",
input_data={"message": "Process this"},
max_retries=5,
initial_delay=1.0,
max_delay=60.0,
backoff_multiplier=2.0
)
print(f"Success: {result}")
except SimStudioError as error:
if error.code == "RATE_LIMIT_EXCEEDED":
print("Rate limit exceeded after all retries")
# Check rate limit info
rate_limit_info = client.get_rate_limit_info()
if rate_limit_info:
from datetime import datetime
reset_time = datetime.fromtimestamp(rate_limit_info.reset)
print(f"Rate limit resets at: {reset_time}")
execute_with_retry_handling()
```
### Usage Monitoring
Monitor your account usage and limits:
```python
import os
from simstudio import SimStudioClient
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
def check_usage():
try:
limits = client.get_usage_limits()
print("=== Rate Limits ===")
print("Sync requests:")
print(f" Limit: {limits.rate_limit['sync']['limit']}")
print(f" Remaining: {limits.rate_limit['sync']['remaining']}")
print(f" Resets at: {limits.rate_limit['sync']['resetAt']}")
print(f" Is limited: {limits.rate_limit['sync']['isLimited']}")
print("\nAsync requests:")
print(f" Limit: {limits.rate_limit['async']['limit']}")
print(f" Remaining: {limits.rate_limit['async']['remaining']}")
print(f" Resets at: {limits.rate_limit['async']['resetAt']}")
print(f" Is limited: {limits.rate_limit['async']['isLimited']}")
print("\n=== Usage ===")
print(f"Current period cost: ${limits.usage['currentPeriodCost']:.2f}")
print(f"Limit: ${limits.usage['limit']:.2f}")
print(f"Plan: {limits.usage['plan']}")
percent_used = (limits.usage['currentPeriodCost'] / limits.usage['limit']) * 100
print(f"Usage: {percent_used:.1f}%")
if percent_used > 80:
print("⚠️ Warning: You are approaching your usage limit!")
except Exception as error:
print(f"Error checking usage: {error}")
check_usage()
```
### Streaming Workflow Execution
Execute workflows with real-time streaming responses:
```python
from simstudio import SimStudioClient
import os
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
def execute_with_streaming():
"""Execute workflow with streaming enabled."""
try:
# Enable streaming for specific block outputs
result = client.execute_workflow(
"workflow-id",
input_data={"message": "Count to five"},
stream=True,
selected_outputs=["agent1.content"] # Use blockName.attribute format
)
print("Workflow result:", result)
except Exception as error:
print("Error:", error)
execute_with_streaming()
```
The streaming response follows the Server-Sent Events (SSE) format:
```
data: {"blockId":"7b7735b9-19e5-4bd6-818b-46aae2596e9f","chunk":"One"}
data: {"blockId":"7b7735b9-19e5-4bd6-818b-46aae2596e9f","chunk":", two"}
data: {"event":"done","success":true,"output":{},"metadata":{"duration":610}}
data: [DONE]
```
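Each `data:` line can be decoded with a small parser (a sketch based on the event shapes above; the function name is ours):

```python
import json

def parse_sse_line(line):
    """Decode one SSE line; return the JSON payload as a dict,
    or None for blank lines and the final [DONE] sentinel."""
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):]
    if payload == "[DONE]":
        return None
    return json.loads(payload)
```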
**Flask streaming example:**
```python
from flask import Flask, Response, stream_with_context
import requests
import json
import os
app = Flask(__name__)
@app.route('/stream-workflow')
def stream_workflow():
"""Stream workflow execution to the client."""
def generate():
response = requests.post(
'https://sim.ai/api/workflows/WORKFLOW_ID/execute',
headers={
'Content-Type': 'application/json',
'X-API-Key': os.getenv('SIM_API_KEY')
},
json={
'message': 'Generate a story',
'stream': True,
'selectedOutputs': ['agent1.content']
},
stream=True
)
for line in response.iter_lines():
if line:
decoded_line = line.decode('utf-8')
if decoded_line.startswith('data: '):
data = decoded_line[6:] # Remove 'data: ' prefix
if data == '[DONE]':
break
try:
parsed = json.loads(data)
if 'chunk' in parsed:
yield f"data: {json.dumps(parsed)}\n\n"
elif parsed.get('event') == 'done':
yield f"data: {json.dumps(parsed)}\n\n"
print("Execution complete:", parsed.get('metadata'))
except json.JSONDecodeError:
pass
return Response(
stream_with_context(generate()),
mimetype='text/event-stream'
)
if __name__ == '__main__':
app.run(debug=True)
```
### Environment Configuration
Configure the client using environment variables:
<Tabs items={['Development', 'Production']}>
<Tab value="Development">
```python
import os
from simstudio import SimStudioClient
# Development configuration
client = SimStudioClient(
api_key=os.getenv("SIM_API_KEY"),
base_url=os.getenv("SIM_BASE_URL", "https://sim.ai")
)
```
</Tab>
<Tab value="Production">
```python
import os
from simstudio import SimStudioClient
# Production configuration with error handling
api_key = os.getenv("SIM_API_KEY")
if not api_key:
raise ValueError("SIM_API_KEY environment variable is required")
client = SimStudioClient(
api_key=api_key,
base_url=os.getenv("SIM_BASE_URL", "https://sim.ai")
)
```
</Tab>
</Tabs>
## Getting Your API Key
<Steps>
<Step title="Log in to Sim">
Navigate to [Sim](https://sim.ai) and log in to your account.
</Step>
<Step title="Open your workflow">
Navigate to the workflow you want to execute programmatically.
</Step>
<Step title="Deploy your workflow">
Click "Deploy" to deploy your workflow if it hasn't been deployed yet.
</Step>
<Step title="Create or select an API key">
During the deployment process, select or create an API key.
</Step>
<Step title="Copy the API key">
Copy the API key to use in your Python application.
</Step>
</Steps>
## Requirements
- Python 3.8+
- requests >= 2.25.0
## License
Apache-2.0

File diff suppressed because it is too large.


@@ -0,0 +1,24 @@
{
"title": "Sim Documentation",
"pages": [
"./introduction/index",
"./getting-started/index",
"./quick-reference/index",
"triggers",
"blocks",
"tools",
"connections",
"mcp",
"copilot",
"skills",
"knowledgebase",
"variables",
"credentials",
"execution",
"permissions",
"self-hosting",
"./enterprise/index",
"./keyboard-shortcuts/index"
],
"defaultOpen": false
}


@@ -0,0 +1,94 @@
---
title: Authentication
description: API key types, generation, and how to authenticate requests
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
To access the Sim API, you need an API key. Sim supports two types of API keys — **personal keys** and **workspace keys** — each with different billing and access behaviors.
## Key Types
| | **Personal Keys** | **Workspace Keys** |
| --- | --- | --- |
| **Billed to** | Your individual account | Workspace owner |
| **Scope** | Across workspaces you have access to | Shared across the workspace |
| **Managed by** | Each user individually | Workspace admins |
| **Permissions** | Must be enabled at workspace level | Require admin permissions |
<Callout type="info">
Workspace admins can disable personal API key usage for their workspace. If disabled, only workspace keys can be used.
</Callout>
## Generating API Keys
To generate a key, open the Sim dashboard and navigate to **Settings**, then go to **Sim Keys** and click **Create**.
<Callout type="warn">
API keys are only shown once when generated. Store your key securely — you will not be able to view it again.
</Callout>
## Using API Keys
Pass your API key in the `X-API-Key` header with every request:
<Tabs items={['curl', 'TypeScript', 'Python']}>
<Tab value="curl">
```bash
curl -X POST https://www.sim.ai/api/workflows/{workflowId}/execute \
-H "Content-Type: application/json" \
-H "X-API-Key: YOUR_API_KEY" \
-d '{"inputs": {}}'
```
</Tab>
<Tab value="TypeScript">
```typescript
const response = await fetch(
'https://www.sim.ai/api/workflows/{workflowId}/execute',
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-API-Key': process.env.SIM_API_KEY!,
},
body: JSON.stringify({ inputs: {} }),
}
)
```
</Tab>
<Tab value="Python">
```python
import requests
import os
response = requests.post(
"https://www.sim.ai/api/workflows/{workflowId}/execute",
headers={
"Content-Type": "application/json",
"X-API-Key": os.environ["SIM_API_KEY"],
},
json={"inputs": {}},
)
```
</Tab>
</Tabs>
## Where Keys Are Used
API keys authenticate access to:
- **Workflow execution** — run deployed workflows via the API
- **Logs API** — query workflow execution logs and metrics
- **MCP servers** — authenticate connections to deployed MCP servers
- **SDKs** — the [Python](/api-reference/python) and [TypeScript](/api-reference/typescript) SDKs use API keys for all operations
## Security
- Keys use the `sk-sim-` prefix and are encrypted at rest
- Keys can be revoked at any time from the dashboard
- Use environment variables to store keys — never hardcode them in source code
- For browser-based applications, use a backend proxy to avoid exposing keys to the client
<Callout type="warn">
Never expose your API key in client-side code. Use a server-side proxy to make authenticated requests on behalf of your frontend.
</Callout>



@@ -0,0 +1,766 @@
---
title: Python
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Card, Cards } from 'fumadocs-ui/components/card'
import { Step, Steps } from 'fumadocs-ui/components/steps'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
The official Python SDK for Sim lets you execute workflows programmatically from your Python applications.
<Callout type="info">
The Python SDK supports Python 3.8+ and includes asynchronous execution, automatic rate limiting with exponential backoff, and usage tracking.
</Callout>
## Installation
Install the SDK using pip:
```bash
pip install simstudio-sdk
```
## Quick Start
Here's a simple example to get started:
```python
from simstudio import SimStudioClient
# Initialize the client
client = SimStudioClient(
api_key="your-api-key-here",
base_url="https://sim.ai" # optional, defaults to https://sim.ai
)
# Execute a workflow
try:
result = client.execute_workflow("workflow-id")
print("Workflow executed successfully:", result)
except Exception as error:
print("Workflow execution failed:", error)
```
## API Reference
### SimStudioClient
#### Constructor
```python
SimStudioClient(api_key: str, base_url: str = "https://sim.ai")
```
**Parameters:**
- `api_key` (str): Your Sim API key
- `base_url` (str, optional): Base URL for the Sim API
#### Methods
##### execute_workflow()
Execute a workflow with optional input data.
```python
result = client.execute_workflow(
"workflow-id",
input_data={"message": "Hello, world!"},
timeout=30.0 # 30 seconds
)
```
**Parameters:**
- `workflow_id` (str): The ID of the workflow to execute
- `input_data` (dict, optional): Input data to pass to the workflow
- `timeout` (float, optional): Timeout in seconds (default: 30.0)
- `stream` (bool, optional): Enable streaming responses (default: False)
- `selected_outputs` (list[str], optional): Block outputs to stream, in `blockName.attribute` format (e.g. `["agent1.content"]`)
- `async_execution` (bool, optional): Execute asynchronously (default: False)
**Returns:** `WorkflowExecutionResult | AsyncExecutionResult`
When `async_execution=True`, returns immediately with a task ID for polling. Otherwise, waits for the execution to complete.
##### get_workflow_status()
Get the status of a workflow (deployment status, etc.).
```python
status = client.get_workflow_status("workflow-id")
print("Is deployed:", status.is_deployed)
```
**Parameters:**
- `workflow_id` (str): The ID of the workflow
**Returns:** `WorkflowStatus`
##### validate_workflow()
Validate that a workflow is ready for execution.
```python
is_ready = client.validate_workflow("workflow-id")
if is_ready:
# Workflow is deployed and ready
pass
```
**Parameters:**
- `workflow_id` (str): The ID of the workflow
**Returns:** `bool`
##### get_job_status()
Get the status of an asynchronous job execution.
```python
status = client.get_job_status("task-id-from-async-execution")
print("Status:", status["status"]) # 'queued', 'processing', 'completed', 'failed'
if status["status"] == "completed":
print("Output:", status["output"])
```
**Parameters:**
- `task_id` (str): The task ID returned from an async execution
**Returns:** `Dict[str, Any]`
**Response fields:**
- `success` (bool): Whether the request succeeded
- `taskId` (str): The task ID
- `status` (str): One of `'queued'`, `'processing'`, `'completed'`, `'failed'`, `'cancelled'`
- `metadata` (dict): Contains `startedAt`, `completedAt`, and `duration`
- `output` (any, optional): The workflow output (when completed)
- `error` (any, optional): Error details (when failed)
- `estimatedDuration` (int, optional): Estimated duration in milliseconds (while queued/processing)
##### execute_with_retry()
Execute a workflow with automatic retry on rate-limit errors, using exponential backoff.
```python
result = client.execute_with_retry(
"workflow-id",
input_data={"message": "Hello"},
timeout=30.0,
max_retries=3, # Maximum number of retries
initial_delay=1.0, # Initial delay in seconds
max_delay=30.0, # Maximum delay in seconds
backoff_multiplier=2.0 # Exponential backoff multiplier
)
```
**Parameters:**
- `workflow_id` (str): The ID of the workflow to execute
- `input_data` (dict, optional): Input data to pass to the workflow
- `timeout` (float, optional): Timeout in seconds
- `stream` (bool, optional): Enable streaming responses
- `selected_outputs` (list, optional): Block outputs to stream
- `async_execution` (bool, optional): Execute asynchronously
- `max_retries` (int, optional): Maximum number of retries (default: 3)
- `initial_delay` (float, optional): Initial delay in seconds (default: 1.0)
- `max_delay` (float, optional): Maximum delay in seconds (default: 30.0)
- `backoff_multiplier` (float, optional): Backoff multiplier (default: 2.0)
**Returns:** `WorkflowExecutionResult | AsyncExecutionResult`
The retry logic uses exponential backoff (1s → 2s → 4s → 8s...) with ±25% jitter to avoid thundering-herd effects. If the API provides a `retry-after` header, that value is used instead.
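The delay schedule can be sketched as follows (an illustrative approximation of the behavior described above, not the SDK's internal implementation; the function name is hypothetical):

```python
import random

def backoff_delay(attempt, initial_delay=1.0, max_delay=30.0, backoff_multiplier=2.0):
    """Delay in seconds before retry number `attempt` (0-based), with +/-25% jitter."""
    base = min(initial_delay * (backoff_multiplier ** attempt), max_delay)
    return base + base * random.uniform(-0.25, 0.25)

# Bases grow 1s -> 2s -> 4s -> 8s ..., capped at max_delay; jitter spreads retries out
for attempt in range(5):
    print(f"retry {attempt}: ~{backoff_delay(attempt):.2f}s")
```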
##### get_rate_limit_info()
Get the current rate limit information from the last API response.
```python
rate_limit_info = client.get_rate_limit_info()
if rate_limit_info:
print("Limit:", rate_limit_info.limit)
print("Remaining:", rate_limit_info.remaining)
print("Reset:", datetime.fromtimestamp(rate_limit_info.reset))
```
**Returns:** `RateLimitInfo | None`
##### get_usage_limits()
Get the current usage limits and quota information for your account.
```python
limits = client.get_usage_limits()
print("Sync requests remaining:", limits.rate_limit["sync"]["remaining"])
print("Async requests remaining:", limits.rate_limit["async"]["remaining"])
print("Current period cost:", limits.usage["currentPeriodCost"])
print("Plan:", limits.usage["plan"])
```
**Returns:** `UsageLimits`
**Response structure:**
```python
{
"success": bool,
"rateLimit": {
"sync": {
"isLimited": bool,
"limit": int,
"remaining": int,
"resetAt": str
},
"async": {
"isLimited": bool,
"limit": int,
"remaining": int,
"resetAt": str
},
"authType": str # 'api' or 'manual'
},
"usage": {
"currentPeriodCost": float,
"limit": float,
"plan": str # e.g., 'free', 'pro'
}
}
```
##### set_api_key()
Update the API key.
```python
client.set_api_key("new-api-key")
```
##### set_base_url()
Update the base URL.
```python
client.set_base_url("https://my-custom-domain.com")
```
##### close()
Close the underlying HTTP session.
```python
client.close()
```
## Data Classes
### WorkflowExecutionResult
```python
@dataclass
class WorkflowExecutionResult:
success: bool
output: Optional[Any] = None
error: Optional[str] = None
logs: Optional[List[Any]] = None
metadata: Optional[Dict[str, Any]] = None
trace_spans: Optional[List[Any]] = None
total_duration: Optional[float] = None
```
### AsyncExecutionResult
```python
@dataclass
class AsyncExecutionResult:
success: bool
task_id: str
status: str # 'queued'
created_at: str
links: Dict[str, str] # e.g., {"status": "/api/jobs/{taskId}"}
```
### WorkflowStatus
```python
@dataclass
class WorkflowStatus:
is_deployed: bool
deployed_at: Optional[str] = None
needs_redeployment: bool = False
```
### RateLimitInfo
```python
@dataclass
class RateLimitInfo:
limit: int
remaining: int
reset: int
retry_after: Optional[int] = None
```
### UsageLimits
```python
@dataclass
class UsageLimits:
success: bool
rate_limit: Dict[str, Any]
usage: Dict[str, Any]
```
### SimStudioError
```python
class SimStudioError(Exception):
def __init__(self, message: str, code: Optional[str] = None, status: Optional[int] = None):
super().__init__(message)
self.code = code
self.status = status
```
**Common error codes:**
- `UNAUTHORIZED`: Invalid API key
- `TIMEOUT`: Request timed out
- `RATE_LIMIT_EXCEEDED`: Rate limit exceeded
- `USAGE_LIMIT_EXCEEDED`: Usage limit exceeded
- `EXECUTION_ERROR`: Workflow execution failed
## Examples
### Basic Workflow Execution
<Steps>
<Step title="Initialize the client">
Set up the SimStudioClient with your API key.
</Step>
<Step title="Validate the workflow">
Check that the workflow is deployed and ready for execution.
</Step>
<Step title="Execute the workflow">
Run the workflow with your input data.
</Step>
<Step title="Handle the result">
Process the execution result and handle any errors.
</Step>
</Steps>
```python
import os
from simstudio import SimStudioClient
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
def run_workflow():
try:
# Check if workflow is ready
is_ready = client.validate_workflow("my-workflow-id")
if not is_ready:
raise Exception("Workflow is not deployed or ready")
# Execute the workflow
result = client.execute_workflow(
"my-workflow-id",
input_data={
"message": "Process this data",
"user_id": "12345"
}
)
if result.success:
print("Output:", result.output)
print("Duration:", result.metadata.get("duration") if result.metadata else None)
else:
print("Workflow failed:", result.error)
except Exception as error:
print("Error:", error)
run_workflow()
```
### Error Handling
Handle different types of errors that can occur during workflow execution:
```python
from simstudio import SimStudioClient, SimStudioError
import os
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
def execute_with_error_handling():
try:
result = client.execute_workflow("workflow-id")
return result
except SimStudioError as error:
if error.code == "UNAUTHORIZED":
print("Invalid API key")
elif error.code == "TIMEOUT":
print("Workflow execution timed out")
elif error.code == "USAGE_LIMIT_EXCEEDED":
print("Usage limit exceeded")
elif error.code == "INVALID_JSON":
print("Invalid JSON in request body")
else:
print(f"Workflow error: {error}")
raise
except Exception as error:
print(f"Unexpected error: {error}")
raise
```
### Context Manager Usage
Use the client as a context manager to automatically handle resource cleanup:
```python
from simstudio import SimStudioClient
import os
# Using context manager to automatically close the session
with SimStudioClient(api_key=os.getenv("SIM_API_KEY")) as client:
result = client.execute_workflow("workflow-id")
print("Result:", result)
# Session is automatically closed here
```
### Batch Workflow Execution
Execute multiple workflows efficiently:
```python
from simstudio import SimStudioClient
import os
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
def execute_workflows_batch(workflow_data_pairs):
"""Execute multiple workflows with different input data."""
results = []
for workflow_id, input_data in workflow_data_pairs:
try:
# Validate workflow before execution
if not client.validate_workflow(workflow_id):
print(f"Skipping {workflow_id}: not deployed")
continue
result = client.execute_workflow(workflow_id, input_data)
results.append({
"workflow_id": workflow_id,
"success": result.success,
"output": result.output,
"error": result.error
})
except Exception as error:
results.append({
"workflow_id": workflow_id,
"success": False,
"error": str(error)
})
return results
# Example usage
workflows = [
("workflow-1", {"type": "analysis", "data": "sample1"}),
("workflow-2", {"type": "processing", "data": "sample2"}),
]
results = execute_workflows_batch(workflows)
for result in results:
print(f"Workflow {result['workflow_id']}: {'Success' if result['success'] else 'Failed'}")
```
### Asynchronous Workflow Execution
Execute workflows asynchronously for long-running tasks:
```python
import os
import time
from simstudio import SimStudioClient
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
def execute_async():
try:
# Start async execution
result = client.execute_workflow(
"workflow-id",
input_data={"data": "large dataset"},
async_execution=True # Execute asynchronously
)
# Check if result is an async execution
if hasattr(result, 'task_id'):
print(f"Task ID: {result.task_id}")
print(f"Status endpoint: {result.links['status']}")
# Poll for completion
status = client.get_job_status(result.task_id)
while status["status"] in ["queued", "processing"]:
print(f"Current status: {status['status']}")
time.sleep(2) # Wait 2 seconds
status = client.get_job_status(result.task_id)
if status["status"] == "completed":
print("Workflow completed!")
print(f"Output: {status['output']}")
print(f"Duration: {status['metadata']['duration']}")
else:
print(f"Workflow failed: {status['error']}")
except Exception as error:
print(f"Error: {error}")
execute_async()
```
### Rate Limiting and Retries
Handle rate limits automatically with exponential backoff:
```python
import os
from simstudio import SimStudioClient, SimStudioError
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
def execute_with_retry_handling():
try:
# Automatically retries on rate limit
result = client.execute_with_retry(
"workflow-id",
input_data={"message": "Process this"},
max_retries=5,
initial_delay=1.0,
max_delay=60.0,
backoff_multiplier=2.0
)
print(f"Success: {result}")
except SimStudioError as error:
if error.code == "RATE_LIMIT_EXCEEDED":
print("Rate limit exceeded after all retries")
# Check rate limit info
rate_limit_info = client.get_rate_limit_info()
if rate_limit_info:
from datetime import datetime
reset_time = datetime.fromtimestamp(rate_limit_info.reset)
print(f"Rate limit resets at: {reset_time}")
execute_with_retry_handling()
```
### Usage Monitoring
Monitor your account's usage and limits:
```python
import os
from simstudio import SimStudioClient
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
def check_usage():
try:
limits = client.get_usage_limits()
print("=== Rate Limits ===")
print("Sync requests:")
print(f" Limit: {limits.rate_limit['sync']['limit']}")
print(f" Remaining: {limits.rate_limit['sync']['remaining']}")
print(f" Resets at: {limits.rate_limit['sync']['resetAt']}")
print(f" Is limited: {limits.rate_limit['sync']['isLimited']}")
print("\nAsync requests:")
print(f" Limit: {limits.rate_limit['async']['limit']}")
print(f" Remaining: {limits.rate_limit['async']['remaining']}")
print(f" Resets at: {limits.rate_limit['async']['resetAt']}")
print(f" Is limited: {limits.rate_limit['async']['isLimited']}")
print("\n=== Usage ===")
print(f"Current period cost: ${limits.usage['currentPeriodCost']:.2f}")
print(f"Limit: ${limits.usage['limit']:.2f}")
print(f"Plan: {limits.usage['plan']}")
percent_used = (limits.usage['currentPeriodCost'] / limits.usage['limit']) * 100
print(f"Usage: {percent_used:.1f}%")
if percent_used > 80:
print("⚠️ Warning: You are approaching your usage limit!")
except Exception as error:
print(f"Error checking usage: {error}")
check_usage()
```
### Streaming Workflow Execution
Execute workflows with real-time streaming responses:
```python
from simstudio import SimStudioClient
import os
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
def execute_with_streaming():
"""Execute workflow with streaming enabled."""
try:
# Enable streaming for specific block outputs
result = client.execute_workflow(
"workflow-id",
input_data={"message": "Count to five"},
stream=True,
selected_outputs=["agent1.content"] # Use blockName.attribute format
)
print("Workflow result:", result)
except Exception as error:
print("Error:", error)
execute_with_streaming()
```
The streaming response follows the Server-Sent Events (SSE) format:
```
data: {"blockId":"7b7735b9-19e5-4bd6-818b-46aae2596e9f","chunk":"One"}
data: {"blockId":"7b7735b9-19e5-4bd6-818b-46aae2596e9f","chunk":", two"}
data: {"event":"done","success":true,"output":{},"metadata":{"duration":610}}
data: [DONE]
```
**Flask streaming example:**
```python
from flask import Flask, Response, stream_with_context
import requests
import json
import os
app = Flask(__name__)
@app.route('/stream-workflow')
def stream_workflow():
"""Stream workflow execution to the client."""
def generate():
response = requests.post(
'https://sim.ai/api/workflows/WORKFLOW_ID/execute',
headers={
'Content-Type': 'application/json',
'X-API-Key': os.getenv('SIM_API_KEY')
},
json={
'message': 'Generate a story',
'stream': True,
'selectedOutputs': ['agent1.content']
},
stream=True
)
for line in response.iter_lines():
if line:
decoded_line = line.decode('utf-8')
if decoded_line.startswith('data: '):
data = decoded_line[6:] # Remove 'data: ' prefix
if data == '[DONE]':
break
try:
parsed = json.loads(data)
if 'chunk' in parsed:
yield f"data: {json.dumps(parsed)}\n\n"
elif parsed.get('event') == 'done':
yield f"data: {json.dumps(parsed)}\n\n"
print("Execution complete:", parsed.get('metadata'))
except json.JSONDecodeError:
pass
return Response(
stream_with_context(generate()),
mimetype='text/event-stream'
)
if __name__ == '__main__':
app.run(debug=True)
```
### Environment Configuration
Configure the client using environment variables:
<Tabs items={['Development', 'Production']}>
<Tab value="Development">
```python
import os
from simstudio import SimStudioClient
# Development configuration
client = SimStudioClient(
    api_key=os.getenv("SIM_API_KEY"),
    base_url=os.getenv("SIM_BASE_URL", "https://sim.ai")
)
```
</Tab>
<Tab value="Production">
```python
import os
from simstudio import SimStudioClient
# Production configuration with error handling
api_key = os.getenv("SIM_API_KEY")
if not api_key:
raise ValueError("SIM_API_KEY environment variable is required")
client = SimStudioClient(
api_key=api_key,
base_url=os.getenv("SIM_BASE_URL", "https://sim.ai")
)
```
</Tab>
</Tabs>
## Getting Your API Key
<Steps>
<Step title="Log in to Sim">
Go to [Sim](https://sim.ai) and log in to your account.
</Step>
<Step title="Open your workflow">
Navigate to the workflow you want to execute programmatically.
</Step>
<Step title="Deploy your workflow">
Click "Deploy" to deploy your workflow if it hasn't been deployed yet.
</Step>
<Step title="Create or select an API key">
During the deployment process, select or create an API key.
</Step>
<Step title="Copy the API key">
Copy the API key to use in your Python application.
</Step>
</Steps>
## Requirements
- Python 3.8+
- requests >= 2.25.0
## License
Apache-2.0

File diff suppressed because it is too large


@@ -0,0 +1,24 @@
{
"title": "Sim Documentation",
"pages": [
"./introduction/index",
"./getting-started/index",
"./quick-reference/index",
"triggers",
"blocks",
"tools",
"connections",
"mcp",
"copilot",
"skills",
"knowledgebase",
"variables",
"credentials",
"execution",
"permissions",
"self-hosting",
"./enterprise/index",
"./keyboard-shortcuts/index"
],
"defaultOpen": false
}


@@ -0,0 +1,94 @@
---
title: Authentication
description: API key types, generation, and how to authenticate requests
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
To access the Sim API, you need an API key. Sim supports two types of API keys — **personal keys** and **workspace keys** — each with different billing and access behaviors.
## Key Types
| | **Personal Keys** | **Workspace Keys** |
| --- | --- | --- |
| **Billed to** | Your individual account | Workspace owner |
| **Scope** | Across workspaces you have access to | Shared across the workspace |
| **Managed by** | Each user individually | Workspace admins |
| **Permissions** | Must be enabled at workspace level | Require admin permissions |
<Callout type="info">
Workspace admins can disable personal API key usage for their workspace. If disabled, only workspace keys can be used.
</Callout>
## Generating API Keys
To generate a key, open the Sim dashboard and navigate to **Settings**, then go to **Sim Keys** and click **Create**.
<Callout type="warn">
API keys are only shown once when generated. Store your key securely — you will not be able to view it again.
</Callout>
## Using API Keys
Pass your API key in the `X-API-Key` header with every request:
<Tabs items={['curl', 'TypeScript', 'Python']}>
<Tab value="curl">
```bash
curl -X POST https://www.sim.ai/api/workflows/{workflowId}/execute \
-H "Content-Type: application/json" \
-H "X-API-Key: YOUR_API_KEY" \
-d '{"inputs": {}}'
```
</Tab>
<Tab value="TypeScript">
```typescript
const response = await fetch(
'https://www.sim.ai/api/workflows/{workflowId}/execute',
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-API-Key': process.env.SIM_API_KEY!,
},
body: JSON.stringify({ inputs: {} }),
}
)
```
</Tab>
<Tab value="Python">
```python
import requests
import os
response = requests.post(
"https://www.sim.ai/api/workflows/{workflowId}/execute",
headers={
"Content-Type": "application/json",
"X-API-Key": os.environ["SIM_API_KEY"],
},
json={"inputs": {}},
)
```
</Tab>
</Tabs>
## Where Keys Are Used
API keys authenticate access to:
- **Workflow execution** — run deployed workflows via the API
- **Logs API** — query workflow execution logs and metrics
- **MCP servers** — authenticate connections to deployed MCP servers
- **SDKs** — the [Python](/api-reference/python) and [TypeScript](/api-reference/typescript) SDKs use API keys for all operations
## Security
- Keys use the `sk-sim-` prefix and are encrypted at rest
- Keys can be revoked at any time from the dashboard
- Use environment variables to store keys — never hardcode them in source code
- For browser-based applications, use a backend proxy to avoid exposing keys to the client
<Callout type="warn">
Never expose your API key in client-side code. Use a server-side proxy to make authenticated requests on behalf of your frontend.
</Callout>
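As a minimal sketch of such a proxy (Flask assumed; the `/api/run/...` route name and fallback body are illustrative, not part of Sim):

```python
import os

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
SIM_API_KEY = os.environ.get("SIM_API_KEY", "")  # stays on the server, never sent to the browser

@app.route("/api/run/<workflow_id>", methods=["POST"])
def run_workflow(workflow_id):
    # Forward the browser's payload and attach the API key server-side
    resp = requests.post(
        f"https://www.sim.ai/api/workflows/{workflow_id}/execute",
        headers={"Content-Type": "application/json", "X-API-Key": SIM_API_KEY},
        json=request.get_json(silent=True) or {"inputs": {}},
        timeout=60,
    )
    return jsonify(resp.json()), resp.status_code
```

The frontend then calls your own `/api/run/...` endpoint instead of the Sim API directly.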


@@ -0,0 +1,210 @@
---
title: Getting Started
description: Base URL, first API call, response format, error handling, and pagination
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Step, Steps } from 'fumadocs-ui/components/steps'
## Base URL
All API requests are made to:
```
https://www.sim.ai
```
## Quick Start
<Steps>
<Step>
### Get your API key
Go to the Sim platform and navigate to **Settings**, then go to **Sim Keys** and click **Create**. See [Authentication](/api-reference/authentication) for details on key types.
</Step>
<Step>
### Find your workflow ID
Open a workflow in the Sim editor. The workflow ID is in the URL:
```
https://www.sim.ai/workspace/{workspaceId}/w/{workflowId}
```
You can also use the [List Workflows](/api-reference/workflows/listWorkflows) endpoint to get all workflow IDs in a workspace.
</Step>
<Step>
### Deploy your workflow
A workflow must be deployed before it can be executed via the API. Click the **Deploy** button in the editor toolbar, or use the dashboard to manage deployments.
</Step>
<Step>
### Make your first request
<Tabs items={['curl', 'TypeScript', 'Python']}>
<Tab value="curl">
```bash
curl -X POST https://www.sim.ai/api/workflows/{workflowId}/execute \
-H "Content-Type: application/json" \
-H "X-API-Key: YOUR_API_KEY" \
-d '{"inputs": {}}'
```
</Tab>
<Tab value="TypeScript">
```typescript
const response = await fetch(
`https://www.sim.ai/api/workflows/${workflowId}/execute`,
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-API-Key': process.env.SIM_API_KEY!,
},
body: JSON.stringify({ inputs: {} }),
}
)
const data = await response.json()
console.log(data.output)
```
</Tab>
<Tab value="Python">
```python
import requests
import os
response = requests.post(
f"https://www.sim.ai/api/workflows/{workflow_id}/execute",
headers={
"Content-Type": "application/json",
"X-API-Key": os.environ["SIM_API_KEY"],
},
json={"inputs": {}},
)
data = response.json()
print(data["output"])
```
</Tab>
</Tabs>
</Step>
</Steps>
## Sync vs Async Execution
By default, workflow executions are **synchronous** — the API blocks until the workflow completes and returns the result directly.
For long-running workflows, use **asynchronous execution** by passing `async: true`:
```bash
curl -X POST https://www.sim.ai/api/workflows/{workflowId}/execute \
-H "Content-Type: application/json" \
-H "X-API-Key: YOUR_API_KEY" \
-d '{"inputs": {}, "async": true}'
```
This returns immediately with a `taskId`:
```json
{
"success": true,
"taskId": "job_abc123",
"status": "queued"
}
```
Poll the [Get Job Status](/api-reference/workflows/getJobStatus) endpoint until the status is `completed` or `failed`:
```bash
curl https://www.sim.ai/api/jobs/{taskId} \
-H "X-API-Key: YOUR_API_KEY"
```
<Callout type="info">
Job status transitions follow: `queued` → `processing` → `completed` or `failed`. The `output` field is only present when status is `completed`.
</Callout>
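In Python, the polling loop might look like this (a sketch against the endpoint above; the poll interval, timeout, and helper name are arbitrary choices):

```python
import os
import time

import requests

TERMINAL_STATUSES = {"completed", "failed", "cancelled"}

def wait_for_job(task_id, poll_interval=2.0, timeout=300.0):
    """Poll /api/jobs/{taskId} until the job reaches a terminal status."""
    headers = {"X-API-Key": os.environ["SIM_API_KEY"]}
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = requests.get(f"https://www.sim.ai/api/jobs/{task_id}", headers=headers).json()
        if job["status"] in TERMINAL_STATUSES:
            return job
        time.sleep(poll_interval)
    raise TimeoutError(f"job {task_id} did not finish within {timeout}s")
```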
## Response Format
Successful responses include an `output` object with your workflow results and a `limits` object with your current rate limit and usage status:
```json
{
"success": true,
"output": {
"result": "Hello, world!"
},
"limits": {
"workflowExecutionRateLimit": {
"sync": {
"requestsPerMinute": 60,
"maxBurst": 10,
"remaining": 59,
"resetAt": "2025-01-01T00:01:00Z"
},
"async": {
"requestsPerMinute": 30,
"maxBurst": 5,
"remaining": 30,
"resetAt": "2025-01-01T00:01:00Z"
}
},
"usage": {
"currentPeriodCost": 1.25,
"limit": 50.00,
"plan": "pro",
"isExceeded": false
}
}
}
```
## Error Handling
The API uses standard HTTP status codes. Error responses include a human-readable `error` message:
```json
{
"error": "Workflow not found"
}
```
| Status | Meaning | What to do |
| --- | --- | --- |
| `400` | Invalid request parameters | Check the `details` array for specific field errors |
| `401` | Missing or invalid API key | Verify your `X-API-Key` header |
| `403` | Access denied | Check you have permission for this resource |
| `404` | Resource not found | Verify the ID exists and belongs to your workspace |
| `429` | Rate limit exceeded | Wait for the duration in the `Retry-After` header |
<Callout type="info">
Use the [Get Usage Limits](/api-reference/usage/getUsageLimits) endpoint to check your current rate limit status and billing usage at any time.
</Callout>
## Rate Limits
Rate limits depend on your subscription plan and apply separately to synchronous and asynchronous executions. Every execution response includes a `limits` object showing your current rate limit status.
When rate limited, the API returns a `429` response with a `Retry-After` header indicating how many seconds to wait before retrying.
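A minimal client-side sketch of honoring that header (hypothetical helper, not an SDK function; the SDKs handle this for you via `execute_with_retry`):

```python
import time

import requests

def post_with_retry(url, max_attempts=3, **kwargs):
    """POST, retrying on 429 and honoring the Retry-After header when present."""
    for _ in range(max_attempts):
        resp = requests.post(url, **kwargs)
        if resp.status_code != 429:
            return resp
        try:
            delay = float(resp.headers.get("Retry-After", 1))
        except ValueError:
            delay = 1.0  # fall back if the header is not a plain number of seconds
        time.sleep(delay)
    return resp  # still rate-limited after all attempts
```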
## Pagination
List endpoints (workflows, logs, audit logs) use **cursor-based pagination**:
```bash
# First page
curl "https://www.sim.ai/api/v1/logs?limit=20" \
-H "X-API-Key: YOUR_API_KEY"
# Next page — use the nextCursor from the previous response
curl "https://www.sim.ai/api/v1/logs?limit=20&cursor=abc123" \
-H "X-API-Key: YOUR_API_KEY"
```
The response includes a `nextCursor` field. When `nextCursor` is absent or `null`, you have reached the last page.
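The same loop can be sketched in Python (the field holding each page's items is assumed to be `data` here; check the endpoint's schema):

```python
import os

import requests

def iter_logs(limit=20):
    """Yield log entries page by page, following nextCursor until it is absent."""
    headers = {"X-API-Key": os.environ["SIM_API_KEY"]}
    cursor = None
    while True:
        params = {"limit": limit}
        if cursor:
            params["cursor"] = cursor
        page = requests.get(
            "https://www.sim.ai/api/v1/logs", headers=headers, params=params
        ).json()
        yield from page.get("data", [])  # assumed field name for the page's items
        cursor = page.get("nextCursor")
        if not cursor:
            break
```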


@@ -0,0 +1,16 @@
{
"title": "API Reference",
"root": true,
"pages": [
"getting-started",
"authentication",
"---SDKs---",
"python",
"typescript",
"---Endpoints---",
"(generated)/workflows",
"(generated)/logs",
"(generated)/usage",
"(generated)/audit-logs"
]
}


@@ -0,0 +1,766 @@
---
title: Python
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Card, Cards } from 'fumadocs-ui/components/card'
import { Step, Steps } from 'fumadocs-ui/components/steps'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
The official Python SDK for Sim lets you execute workflows programmatically from your Python applications.
<Callout type="info">
The Python SDK supports Python 3.8+ and provides asynchronous execution, automatic rate-limit retries with exponential backoff, and usage tracking.
</Callout>
## Installation
Install the SDK using pip:
```bash
pip install simstudio-sdk
```
## Quick Start
Here's a simple example to get you started:
```python
from simstudio import SimStudioClient
# Initialize the client
client = SimStudioClient(
api_key="your-api-key-here",
base_url="https://sim.ai" # optional, defaults to https://sim.ai
)
# Execute a workflow
try:
result = client.execute_workflow("workflow-id")
print("Workflow executed successfully:", result)
except Exception as error:
print("Workflow execution failed:", error)
```
## API Reference
### SimStudioClient
#### Constructor
```python
SimStudioClient(api_key: str, base_url: str = "https://sim.ai")
```
**Parameters:**
- `api_key` (str): Your Sim API key
- `base_url` (str, optional): Base URL for the Sim API
#### Methods
##### execute_workflow()
Execute a workflow with optional input data.
```python
result = client.execute_workflow(
"workflow-id",
input_data={"message": "Hello, world!"},
timeout=30.0 # 30 seconds
)
```
**Parameters:**
- `workflow_id` (str): The ID of the workflow to execute
- `input_data` (dict, optional): Input data to pass to the workflow
- `timeout` (float, optional): Timeout in seconds (default: 30.0)
- `stream` (bool, optional): Enable streaming responses (default: False)
- `selected_outputs` (list[str], optional): Block outputs to stream, in `blockName.attribute` format (e.g. `["agent1.content"]`)
- `async_execution` (bool, optional): Execute asynchronously (default: False)
**Returns:** `WorkflowExecutionResult | AsyncExecutionResult`
When `async_execution=True`, returns immediately with a task ID for polling. Otherwise, waits for the execution to complete.
##### get_workflow_status()
Get the status of a workflow (deployment status, etc.).
```python
status = client.get_workflow_status("workflow-id")
print("Is deployed:", status.is_deployed)
```
**Parameters:**
- `workflow_id` (str): The ID of the workflow
**Returns:** `WorkflowStatus`
##### validate_workflow()
Validate that a workflow is ready for execution.
```python
is_ready = client.validate_workflow("workflow-id")
if is_ready:
# Workflow is deployed and ready
pass
```
**Parameters:**
- `workflow_id` (str): The ID of the workflow
**Returns:** `bool`
##### get_job_status()
Get the status of an asynchronous job execution.
```python
status = client.get_job_status("task-id-from-async-execution")
print("Status:", status["status"]) # 'queued', 'processing', 'completed', 'failed'
if status["status"] == "completed":
print("Output:", status["output"])
```
**Parameters:**
- `task_id` (str): The task ID returned from an async execution
**Returns:** `Dict[str, Any]`
**Response fields:**
- `success` (bool): Whether the request succeeded
- `taskId` (str): The task ID
- `status` (str): One of `'queued'`, `'processing'`, `'completed'`, `'failed'`, `'cancelled'`
- `metadata` (dict): Contains `startedAt`, `completedAt`, and `duration`
- `output` (any, optional): The workflow output (when completed)
- `error` (any, optional): Error details (when failed)
- `estimatedDuration` (int, optional): Estimated duration in milliseconds (while queued/processing)
##### execute_with_retry()
Execute a workflow with automatic retry on rate-limit errors, using exponential backoff.
```python
result = client.execute_with_retry(
"workflow-id",
input_data={"message": "Hello"},
timeout=30.0,
max_retries=3, # Maximum number of retries
initial_delay=1.0, # Initial delay in seconds
max_delay=30.0, # Maximum delay in seconds
backoff_multiplier=2.0 # Exponential backoff multiplier
)
```
**Parameters:**
- `workflow_id` (str): The ID of the workflow to execute
- `input_data` (dict, optional): Input data to pass to the workflow
- `timeout` (float, optional): Timeout in seconds
- `stream` (bool, optional): Enable streaming responses
- `selected_outputs` (list, optional): Block outputs to stream
- `async_execution` (bool, optional): Execute asynchronously
- `max_retries` (int, optional): Maximum number of retries (default: 3)
- `initial_delay` (float, optional): Initial delay in seconds (default: 1.0)
- `max_delay` (float, optional): Maximum delay in seconds (default: 30.0)
- `backoff_multiplier` (float, optional): Backoff multiplier (default: 2.0)
**Returns:** `WorkflowExecutionResult | AsyncExecutionResult`
The retry logic uses exponential backoff (1s → 2s → 4s → 8s...) with ±25% jitter to avoid thundering-herd effects. If the API provides a `retry-after` header, that value is used instead.
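As a quick illustration of that schedule (a hypothetical helper, not the SDK's internal code):

```python
import random

def backoff_delay(attempt, initial_delay=1.0, max_delay=30.0, backoff_multiplier=2.0):
    """Delay in seconds before retry number `attempt` (0-based), with +/-25% jitter."""
    base = min(initial_delay * (backoff_multiplier ** attempt), max_delay)
    return base + base * random.uniform(-0.25, 0.25)

# Bases grow 1s -> 2s -> 4s -> 8s ..., capped at max_delay; jitter spreads retries out
for attempt in range(5):
    print(f"retry {attempt}: ~{backoff_delay(attempt):.2f}s")
```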
##### get_rate_limit_info()
Get the current rate limit information from the last API response.
```python
rate_limit_info = client.get_rate_limit_info()
if rate_limit_info:
print("Limit:", rate_limit_info.limit)
print("Remaining:", rate_limit_info.remaining)
print("Reset:", datetime.fromtimestamp(rate_limit_info.reset))
```
**Returns:** `RateLimitInfo | None`
##### get_usage_limits()
Get the current usage limits and quota information for your account.
```python
limits = client.get_usage_limits()
print("Sync requests remaining:", limits.rate_limit["sync"]["remaining"])
print("Async requests remaining:", limits.rate_limit["async"]["remaining"])
print("Current period cost:", limits.usage["currentPeriodCost"])
print("Plan:", limits.usage["plan"])
```
**Returns:** `UsageLimits`
**Response structure:**
```python
{
"success": bool,
"rateLimit": {
"sync": {
"isLimited": bool,
"limit": int,
"remaining": int,
"resetAt": str
},
"async": {
"isLimited": bool,
"limit": int,
"remaining": int,
"resetAt": str
},
"authType": str # 'api' or 'manual'
},
"usage": {
"currentPeriodCost": float,
"limit": float,
"plan": str # e.g., 'free', 'pro'
}
}
```
##### set_api_key()
Update the API key.
```python
client.set_api_key("new-api-key")
```
##### set_base_url()
Update the base URL.
```python
client.set_base_url("https://my-custom-domain.com")
```
##### close()
Close the underlying HTTP session.
```python
client.close()
```
## Data Classes
### WorkflowExecutionResult
```python
@dataclass
class WorkflowExecutionResult:
success: bool
output: Optional[Any] = None
error: Optional[str] = None
logs: Optional[List[Any]] = None
metadata: Optional[Dict[str, Any]] = None
trace_spans: Optional[List[Any]] = None
total_duration: Optional[float] = None
```
### AsyncExecutionResult
```python
@dataclass
class AsyncExecutionResult:
success: bool
task_id: str
status: str # 'queued'
created_at: str
links: Dict[str, str] # e.g., {"status": "/api/jobs/{taskId}"}
```
### WorkflowStatus
```python
@dataclass
class WorkflowStatus:
is_deployed: bool
deployed_at: Optional[str] = None
needs_redeployment: bool = False
```
### RateLimitInfo
```python
@dataclass
class RateLimitInfo:
limit: int
remaining: int
reset: int
retry_after: Optional[int] = None
```
### UsageLimits
```python
@dataclass
class UsageLimits:
success: bool
rate_limit: Dict[str, Any]
usage: Dict[str, Any]
```
### SimStudioError
```python
class SimStudioError(Exception):
def __init__(self, message: str, code: Optional[str] = None, status: Optional[int] = None):
super().__init__(message)
self.code = code
self.status = status
```
**Common error codes:**
- `UNAUTHORIZED`: invalid API key
- `TIMEOUT`: the request timed out
- `RATE_LIMIT_EXCEEDED`: rate limit exceeded
- `USAGE_LIMIT_EXCEEDED`: usage limit exceeded
- `EXECUTION_ERROR`: workflow execution failed
## Examples
### Basic Workflow Execution
<Steps>
<Step title="Initialize the client">
Set up the SimStudioClient with your API key.
</Step>
<Step title="Validate the workflow">
Check that the workflow is deployed and ready to execute.
</Step>
<Step title="Execute the workflow">
Run the workflow with your input data.
</Step>
<Step title="Handle the result">
Process the execution result and handle any errors.
</Step>
</Steps>
```python
import os
from simstudio import SimStudioClient

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def run_workflow():
    try:
        # Check if workflow is ready
        is_ready = client.validate_workflow("my-workflow-id")
        if not is_ready:
            raise Exception("Workflow is not deployed or ready")

        # Execute the workflow
        result = client.execute_workflow(
            "my-workflow-id",
            input_data={
                "message": "Process this data",
                "user_id": "12345"
            }
        )

        if result.success:
            print("Output:", result.output)
            print("Duration:", result.metadata.get("duration") if result.metadata else None)
        else:
            print("Workflow failed:", result.error)
    except Exception as error:
        print("Error:", error)

run_workflow()
```
### Error Handling
Handle the different types of errors that can occur during workflow execution:
```python
from simstudio import SimStudioClient, SimStudioError
import os

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def execute_with_error_handling():
    try:
        result = client.execute_workflow("workflow-id")
        return result
    except SimStudioError as error:
        if error.code == "UNAUTHORIZED":
            print("Invalid API key")
        elif error.code == "TIMEOUT":
            print("Workflow execution timed out")
        elif error.code == "USAGE_LIMIT_EXCEEDED":
            print("Usage limit exceeded")
        elif error.code == "INVALID_JSON":
            print("Invalid JSON in request body")
        else:
            print(f"Workflow error: {error}")
        raise
    except Exception as error:
        print(f"Unexpected error: {error}")
        raise
```
### Using a Context Manager
Use the client as a context manager to handle resource cleanup automatically:
```python
from simstudio import SimStudioClient
import os

# Using context manager to automatically close the session
with SimStudioClient(api_key=os.getenv("SIM_API_KEY")) as client:
    result = client.execute_workflow("workflow-id")
    print("Result:", result)
# Session is automatically closed here
```
### Batch Workflow Execution
Execute multiple workflows efficiently:
```python
from simstudio import SimStudioClient
import os

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def execute_workflows_batch(workflow_data_pairs):
    """Execute multiple workflows with different input data."""
    results = []
    for workflow_id, input_data in workflow_data_pairs:
        try:
            # Validate workflow before execution
            if not client.validate_workflow(workflow_id):
                print(f"Skipping {workflow_id}: not deployed")
                continue
            result = client.execute_workflow(workflow_id, input_data)
            results.append({
                "workflow_id": workflow_id,
                "success": result.success,
                "output": result.output,
                "error": result.error
            })
        except Exception as error:
            results.append({
                "workflow_id": workflow_id,
                "success": False,
                "error": str(error)
            })
    return results

# Example usage
workflows = [
    ("workflow-1", {"type": "analysis", "data": "sample1"}),
    ("workflow-2", {"type": "processing", "data": "sample2"}),
]

results = execute_workflows_batch(workflows)
for result in results:
    print(f"Workflow {result['workflow_id']}: {'Success' if result['success'] else 'Failed'}")
```
### Asynchronous Workflow Execution
Execute workflows asynchronously for long-running tasks:
```python
import os
import time
from simstudio import SimStudioClient

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def execute_async():
    try:
        # Start async execution
        result = client.execute_workflow(
            "workflow-id",
            input_data={"data": "large dataset"},
            async_execution=True  # Execute asynchronously
        )

        # Check if result is an async execution
        if hasattr(result, 'task_id'):
            print(f"Task ID: {result.task_id}")
            print(f"Status endpoint: {result.links['status']}")

            # Poll for completion
            status = client.get_job_status(result.task_id)
            while status["status"] in ["queued", "processing"]:
                print(f"Current status: {status['status']}")
                time.sleep(2)  # Wait 2 seconds
                status = client.get_job_status(result.task_id)

            if status["status"] == "completed":
                print("Workflow completed!")
                print(f"Output: {status['output']}")
                print(f"Duration: {status['metadata']['duration']}")
            else:
                print(f"Workflow failed: {status['error']}")
    except Exception as error:
        print(f"Error: {error}")

execute_async()
```
### Rate Limiting and Retries
Handle rate limits automatically with exponential backoff:
```python
import os
from simstudio import SimStudioClient, SimStudioError

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def execute_with_retry_handling():
    try:
        # Automatically retries on rate limit
        result = client.execute_with_retry(
            "workflow-id",
            input_data={"message": "Process this"},
            max_retries=5,
            initial_delay=1.0,
            max_delay=60.0,
            backoff_multiplier=2.0
        )
        print(f"Success: {result}")
    except SimStudioError as error:
        if error.code == "RATE_LIMIT_EXCEEDED":
            print("Rate limit exceeded after all retries")
            # Check rate limit info
            rate_limit_info = client.get_rate_limit_info()
            if rate_limit_info:
                from datetime import datetime
                reset_time = datetime.fromtimestamp(rate_limit_info.reset)
                print(f"Rate limit resets at: {reset_time}")

execute_with_retry_handling()
```
### Usage Monitoring
Monitor your account usage and limits:
```python
import os
from simstudio import SimStudioClient

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def check_usage():
    try:
        limits = client.get_usage_limits()

        print("=== Rate Limits ===")
        print("Sync requests:")
        print(f"  Limit: {limits.rate_limit['sync']['limit']}")
        print(f"  Remaining: {limits.rate_limit['sync']['remaining']}")
        print(f"  Resets at: {limits.rate_limit['sync']['resetAt']}")
        print(f"  Is limited: {limits.rate_limit['sync']['isLimited']}")

        print("\nAsync requests:")
        print(f"  Limit: {limits.rate_limit['async']['limit']}")
        print(f"  Remaining: {limits.rate_limit['async']['remaining']}")
        print(f"  Resets at: {limits.rate_limit['async']['resetAt']}")
        print(f"  Is limited: {limits.rate_limit['async']['isLimited']}")

        print("\n=== Usage ===")
        print(f"Current period cost: ${limits.usage['currentPeriodCost']:.2f}")
        print(f"Limit: ${limits.usage['limit']:.2f}")
        print(f"Plan: {limits.usage['plan']}")

        percent_used = (limits.usage['currentPeriodCost'] / limits.usage['limit']) * 100
        print(f"Usage: {percent_used:.1f}%")
        if percent_used > 80:
            print("⚠️ Warning: You are approaching your usage limit!")
    except Exception as error:
        print(f"Error checking usage: {error}")

check_usage()
```
### Streaming Workflow Execution
Execute workflows with real-time streaming responses:
```python
from simstudio import SimStudioClient
import os

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def execute_with_streaming():
    """Execute workflow with streaming enabled."""
    try:
        # Enable streaming for specific block outputs
        result = client.execute_workflow(
            "workflow-id",
            input_data={"message": "Count to five"},
            stream=True,
            selected_outputs=["agent1.content"]  # Use blockName.attribute format
        )
        print("Workflow result:", result)
    except Exception as error:
        print("Error:", error)

execute_with_streaming()
```
Streaming responses follow the Server-Sent Events (SSE) format:
```
data: {"blockId":"7b7735b9-19e5-4bd6-818b-46aae2596e9f","chunk":"One"}
data: {"blockId":"7b7735b9-19e5-4bd6-818b-46aae2596e9f","chunk":", two"}
data: {"event":"done","success":true,"output":{},"metadata":{"duration":610}}
data: [DONE]
```
**Flask streaming example:**
```python
from flask import Flask, Response, stream_with_context
import requests
import json
import os

app = Flask(__name__)

@app.route('/stream-workflow')
def stream_workflow():
    """Stream workflow execution to the client."""
    def generate():
        response = requests.post(
            'https://sim.ai/api/workflows/WORKFLOW_ID/execute',
            headers={
                'Content-Type': 'application/json',
                'X-API-Key': os.getenv('SIM_API_KEY')
            },
            json={
                'message': 'Generate a story',
                'stream': True,
                'selectedOutputs': ['agent1.content']
            },
            stream=True
        )

        for line in response.iter_lines():
            if line:
                decoded_line = line.decode('utf-8')
                if decoded_line.startswith('data: '):
                    data = decoded_line[6:]  # Remove 'data: ' prefix
                    if data == '[DONE]':
                        break
                    try:
                        parsed = json.loads(data)
                        if 'chunk' in parsed:
                            yield f"data: {json.dumps(parsed)}\n\n"
                        elif parsed.get('event') == 'done':
                            yield f"data: {json.dumps(parsed)}\n\n"
                            print("Execution complete:", parsed.get('metadata'))
                    except json.JSONDecodeError:
                        pass

    return Response(
        stream_with_context(generate()),
        mimetype='text/event-stream'
    )

if __name__ == '__main__':
    app.run(debug=True)
```
### Environment Configuration
Configure the client using environment variables:
<Tabs items={['Development', 'Production']}>
<Tab value="Development">
```python
import os
from simstudio import SimStudioClient

# Development configuration
client = SimStudioClient(
    api_key=os.getenv("SIM_API_KEY"),
    base_url=os.getenv("SIM_BASE_URL", "https://sim.ai")
)
```
</Tab>
<Tab value="Production">
```python
import os
from simstudio import SimStudioClient

# Production configuration with error handling
api_key = os.getenv("SIM_API_KEY")
if not api_key:
    raise ValueError("SIM_API_KEY environment variable is required")

client = SimStudioClient(
    api_key=api_key,
    base_url=os.getenv("SIM_BASE_URL", "https://sim.ai")
)
```
</Tab>
</Tabs>
## Getting Your API Key
<Steps>
<Step title="Log in to Sim">
Navigate to [Sim](https://sim.ai) and log in to your account.
</Step>
<Step title="Open your workflow">
Navigate to the workflow you want to execute programmatically.
</Step>
<Step title="Deploy your workflow">
Click "Deploy" to deploy your workflow if it is not already deployed.
</Step>
<Step title="Create or select an API key">
During the deployment process, select or create an API key.
</Step>
<Step title="Copy the API key">
Copy the API key to use in your Python application.
</Step>
</Steps>
## Requirements
- Python 3.8+
- requests >= 2.25.0
## License
Apache-2.0

File diff suppressed because it is too large


@@ -0,0 +1,24 @@
{
  "title": "Sim Documentation",
  "pages": [
    "./introduction/index",
    "./getting-started/index",
    "./quick-reference/index",
    "triggers",
    "blocks",
    "tools",
    "connections",
    "mcp",
    "copilot",
    "skills",
    "knowledgebase",
    "variables",
    "credentials",
    "execution",
    "permissions",
    "self-hosting",
    "./enterprise/index",
    "./keyboard-shortcuts/index"
  ],
  "defaultOpen": false
}


@@ -0,0 +1,94 @@
---
title: Authentication
description: API key types, generation, and how to authenticate requests
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
To access the Sim API, you need an API key. Sim supports two types of API keys — **personal keys** and **workspace keys** — each with different billing and access behaviors.
## Key Types
| | **Personal Keys** | **Workspace Keys** |
| --- | --- | --- |
| **Billed to** | Your individual account | Workspace owner |
| **Scope** | Across workspaces you have access to | Shared across the workspace |
| **Managed by** | Each user individually | Workspace admins |
| **Permissions** | Must be enabled at workspace level | Require admin permissions |
<Callout type="info">
Workspace admins can disable personal API key usage for their workspace. If disabled, only workspace keys can be used.
</Callout>
## Generating API Keys
To generate a key, open the Sim platform and navigate to **Settings**, then go to **Sim Keys** and click **Create**.
<Callout type="warn">
API keys are only shown once when generated. Store your key securely — you will not be able to view it again.
</Callout>
## Using API Keys
Pass your API key in the `X-API-Key` header with every request:
<Tabs items={['curl', 'TypeScript', 'Python']}>
<Tab value="curl">
```bash
curl -X POST https://www.sim.ai/api/workflows/{workflowId}/execute \
-H "Content-Type: application/json" \
-H "X-API-Key: YOUR_API_KEY" \
-d '{"inputs": {}}'
```
</Tab>
<Tab value="TypeScript">
```typescript
const response = await fetch(
  'https://www.sim.ai/api/workflows/{workflowId}/execute',
  {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': process.env.SIM_API_KEY!,
    },
    body: JSON.stringify({ inputs: {} }),
  }
)
```
</Tab>
<Tab value="Python">
```python
import os
import requests

response = requests.post(
    "https://www.sim.ai/api/workflows/{workflowId}/execute",
    headers={
        "Content-Type": "application/json",
        "X-API-Key": os.environ["SIM_API_KEY"],
    },
    json={"inputs": {}},
)
</Tab>
</Tabs>
## Where Keys Are Used
API keys authenticate access to:
- **Workflow execution** — run deployed workflows via the API
- **Logs API** — query workflow execution logs and metrics
- **MCP servers** — authenticate connections to deployed MCP servers
- **SDKs** — the [Python](/api-reference/python) and [TypeScript](/api-reference/typescript) SDKs use API keys for all operations
## Security
- Keys use the `sk-sim-` prefix and are encrypted at rest
- Keys can be revoked at any time from the dashboard
- Use environment variables to store keys — never hardcode them in source code
- For browser-based applications, use a backend proxy to avoid exposing keys to the client
<Callout type="warn">
Never expose your API key in client-side code. Use a server-side proxy to make authenticated requests on behalf of your frontend.
</Callout>


@@ -0,0 +1,210 @@
---
title: Getting Started
description: Base URL, first API call, response format, error handling, and pagination
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Step, Steps } from 'fumadocs-ui/components/steps'
## Base URL
All API requests are made to:
```
https://www.sim.ai
```
## Quick Start
<Steps>
<Step>
### Get your API key
Go to the Sim platform and navigate to **Settings**, then go to **Sim Keys** and click **Create**. See [Authentication](/api-reference/authentication) for details on key types.
</Step>
<Step>
### Find your workflow ID
Open a workflow in the Sim editor. The workflow ID is in the URL:
```
https://www.sim.ai/workspace/{workspaceId}/w/{workflowId}
```
You can also use the [List Workflows](/api-reference/workflows/listWorkflows) endpoint to get all workflow IDs in a workspace.
</Step>
<Step>
### Deploy your workflow
A workflow must be deployed before it can be executed via the API. Click the **Deploy** button in the editor toolbar, or use the dashboard to manage deployments.
</Step>
<Step>
### Make your first request
<Tabs items={['curl', 'TypeScript', 'Python']}>
<Tab value="curl">
```bash
curl -X POST https://www.sim.ai/api/workflows/{workflowId}/execute \
-H "Content-Type: application/json" \
-H "X-API-Key: YOUR_API_KEY" \
-d '{"inputs": {}}'
```
</Tab>
<Tab value="TypeScript">
```typescript
const response = await fetch(
  `https://www.sim.ai/api/workflows/${workflowId}/execute`,
  {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': process.env.SIM_API_KEY!,
    },
    body: JSON.stringify({ inputs: {} }),
  }
)
const data = await response.json()
console.log(data.output)
</Tab>
<Tab value="Python">
```python
import requests
import os

response = requests.post(
    f"https://www.sim.ai/api/workflows/{workflow_id}/execute",
    headers={
        "Content-Type": "application/json",
        "X-API-Key": os.environ["SIM_API_KEY"],
    },
    json={"inputs": {}},
)
data = response.json()
print(data["output"])
```
</Tab>
</Tabs>
</Step>
</Steps>
## Sync vs Async Execution
By default, workflow executions are **synchronous** — the API blocks until the workflow completes and returns the result directly.
For long-running workflows, use **asynchronous execution** by passing `async: true`:
```bash
curl -X POST https://www.sim.ai/api/workflows/{workflowId}/execute \
-H "Content-Type: application/json" \
-H "X-API-Key: YOUR_API_KEY" \
-d '{"inputs": {}, "async": true}'
```
This returns immediately with a `taskId`:
```json
{
  "success": true,
  "taskId": "job_abc123",
  "status": "queued"
}
```
Poll the [Get Job Status](/api-reference/workflows/getJobStatus) endpoint until the status is `completed` or `failed`:
```bash
curl https://www.sim.ai/api/jobs/{taskId} \
-H "X-API-Key: YOUR_API_KEY"
```
<Callout type="info">
Job status transitions follow: `queued` → `processing` → `completed` or `failed`. The `output` field is only present when status is `completed`.
</Callout>
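The polling loop implied by these transitions can be sketched in Python. The `poll_job` helper and its `get_status` callable are illustrative names, not part of the API; `get_status` stands in for an authenticated GET to `/api/jobs/{taskId}`:

```python
import time

def poll_job(get_status, task_id, interval=2.0, max_attempts=30):
    """Poll a job-status endpoint until it leaves 'queued'/'processing'.

    `get_status` is any callable returning the parsed Get Job Status JSON,
    e.g. a function that GETs /api/jobs/{taskId} with your API key.
    """
    for _ in range(max_attempts):
        status = get_status(task_id)
        if status["status"] not in ("queued", "processing"):
            return status  # 'completed' or 'failed'
        time.sleep(interval)
    raise TimeoutError(f"Job {task_id} did not finish in time")
```

Only read `output` from the returned status when `status["status"] == "completed"`.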
## Response Format
Successful responses include an `output` object with your workflow results and a `limits` object with your current rate limit and usage status:
```json
{
  "success": true,
  "output": {
    "result": "Hello, world!"
  },
  "limits": {
    "workflowExecutionRateLimit": {
      "sync": {
        "requestsPerMinute": 60,
        "maxBurst": 10,
        "remaining": 59,
        "resetAt": "2025-01-01T00:01:00Z"
      },
      "async": {
        "requestsPerMinute": 30,
        "maxBurst": 5,
        "remaining": 30,
        "resetAt": "2025-01-01T00:01:00Z"
      }
    },
    "usage": {
      "currentPeriodCost": 1.25,
      "limit": 50.00,
      "plan": "pro",
      "isExceeded": false
    }
  }
}
```
## Error Handling
The API uses standard HTTP status codes. Error responses include a human-readable `error` message:
```json
{
  "error": "Workflow not found"
}
```
| Status | Meaning | What to do |
| --- | --- | --- |
| `400` | Invalid request parameters | Check the `details` array for specific field errors |
| `401` | Missing or invalid API key | Verify your `X-API-Key` header |
| `403` | Access denied | Check you have permission for this resource |
| `404` | Resource not found | Verify the ID exists and belongs to your workspace |
| `429` | Rate limit exceeded | Wait for the duration in the `Retry-After` header |
<Callout type="info">
Use the [Get Usage Limits](/api-reference/usage/getUsageLimits) endpoint to check your current rate limit status and billing usage at any time.
</Callout>
## Rate Limits
Rate limits depend on your subscription plan and apply separately to synchronous and asynchronous executions. Every execution response includes a `limits` object showing your current rate limit status.
When rate limited, the API returns a `429` response with a `Retry-After` header indicating how many seconds to wait before retrying.
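A minimal retry sketch that honors `Retry-After`; the `send` callable is a hypothetical stand-in for a function that performs the request and returns the status code, headers, and parsed body (not part of any Sim SDK):

```python
import time

def request_with_retry(send, max_retries=3):
    """Call `send()` and retry on HTTP 429, honoring the Retry-After header.

    `send` is any callable returning (status_code, headers, body).
    """
    for attempt in range(max_retries + 1):
        status, headers, body = send()
        if status != 429 or attempt == max_retries:
            return status, body
        # 429: wait the number of seconds the server asked for
        time.sleep(float(headers.get("Retry-After", 1)))
```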
## Pagination
List endpoints (workflows, logs, audit logs) use **cursor-based pagination**:
```bash
# First page
curl "https://www.sim.ai/api/v1/logs?limit=20" \
-H "X-API-Key: YOUR_API_KEY"
# Next page — use the nextCursor from the previous response
curl "https://www.sim.ai/api/v1/logs?limit=20&cursor=abc123" \
-H "X-API-Key: YOUR_API_KEY"
```
The response includes a `nextCursor` field. When `nextCursor` is absent or `null`, you have reached the last page.
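The cursor loop can be sketched as a small generator; `fetch_page` is a stand-in for a GET such as `/api/v1/logs?limit=20&cursor=...` that returns the parsed JSON body:

```python
def iter_pages(fetch_page):
    """Yield successive pages from a cursor-paginated list endpoint.

    `fetch_page(cursor)` performs the request (cursor=None for the first
    page). Iteration stops when `nextCursor` is absent or null.
    """
    cursor = None
    while True:
        page = fetch_page(cursor)
        yield page
        cursor = page.get("nextCursor")
        if not cursor:
            break
```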


@@ -0,0 +1,16 @@
{
  "title": "API Reference",
  "root": true,
  "pages": [
    "getting-started",
    "authentication",
    "---SDKs---",
    "python",
    "typescript",
    "---Endpoints---",
    "(generated)/workflows",
    "(generated)/logs",
    "(generated)/usage",
    "(generated)/audit-logs"
  ]
}


@@ -0,0 +1,766 @@
---
title: Python
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Card, Cards } from 'fumadocs-ui/components/card'
import { Step, Steps } from 'fumadocs-ui/components/steps'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
The official Python SDK lets you execute workflows programmatically from your Python applications.
<Callout type="info">
The Python SDK supports Python 3.8+ and includes async execution support, automatic rate limiting with exponential backoff, and usage tracking.
</Callout>
## Installation
Install the SDK using pip:
```bash
pip install simstudio-sdk
```
## Quick Start
Here's a simple example to get you started:
```python
from simstudio import SimStudioClient

# Initialize the client
client = SimStudioClient(
    api_key="your-api-key-here",
    base_url="https://sim.ai"  # optional, defaults to https://sim.ai
)

# Execute a workflow
try:
    result = client.execute_workflow("workflow-id")
    print("Workflow executed successfully:", result)
except Exception as error:
    print("Workflow execution failed:", error)
```
## API Reference
### SimStudioClient
#### Constructor
```python
SimStudioClient(api_key: str, base_url: str = "https://sim.ai")
```
**Parameters:**
- `api_key` (str): your Sim API key
- `base_url` (str, optional): base URL for the Sim API
#### Methods
##### execute_workflow()
Execute a workflow with optional input data.
```python
result = client.execute_workflow(
    "workflow-id",
    input_data={"message": "Hello, world!"},
    timeout=30.0  # 30 seconds
)
```
**Parameters:**
- `workflow_id` (str): ID of the workflow to execute
- `input_data` (dict, optional): input data to pass to the workflow
- `timeout` (float, optional): timeout in seconds (default: 30.0)
- `stream` (bool, optional): enable streaming responses (default: False)
- `selected_outputs` (list[str], optional): block outputs to stream, in `blockName.attribute` format (e.g., `["agent1.content"]`)
- `async_execution` (bool, optional): execute asynchronously (default: False)
**Returns:** `WorkflowExecutionResult | AsyncExecutionResult`
When `async_execution=True`, returns a task ID immediately for polling. Otherwise, waits for completion.
##### get_workflow_status()
Get the status of a workflow (deployment status, etc.).
```python
status = client.get_workflow_status("workflow-id")
print("Is deployed:", status.is_deployed)
```
**Parameters:**
- `workflow_id` (str): ID of the workflow
**Returns:** `WorkflowStatus`
##### validate_workflow()
Validate that a workflow is ready to execute.
```python
is_ready = client.validate_workflow("workflow-id")
if is_ready:
    # Workflow is deployed and ready
    pass
```
**Parameters:**
- `workflow_id` (str): ID of the workflow
**Returns:** `bool`
##### get_job_status()
Get the status of an async task execution.
```python
status = client.get_job_status("task-id-from-async-execution")
print("Status:", status["status"])  # 'queued', 'processing', 'completed', 'failed'

if status["status"] == "completed":
    print("Output:", status["output"])
```
**Parameters:**
- `task_id` (str): task ID returned from an async execution
**Returns:** `Dict[str, Any]`
**Response fields:**
- `success` (bool): whether the request succeeded
- `taskId` (str): the task ID
- `status` (str): one of `'queued'`, `'processing'`, `'completed'`, `'failed'`, `'cancelled'`
- `metadata` (dict): contains `startedAt`, `completedAt`, and `duration`
- `output` (any, optional): workflow output (when completed)
- `error` (any, optional): error details (when failed)
- `estimatedDuration` (int, optional): estimated duration in milliseconds (when processing/queued)
##### execute_with_retry()
Execute a workflow with automatic retries on rate limit errors, using exponential backoff.
```python
result = client.execute_with_retry(
    "workflow-id",
    input_data={"message": "Hello"},
    timeout=30.0,
    max_retries=3,          # Maximum number of retries
    initial_delay=1.0,      # Initial delay in seconds
    max_delay=30.0,         # Maximum delay in seconds
    backoff_multiplier=2.0  # Exponential backoff multiplier
)
```
**Parameters:**
- `workflow_id` (str): ID of the workflow to execute
- `input_data` (dict, optional): input data to pass to the workflow
- `timeout` (float, optional): timeout in seconds
- `stream` (bool, optional): enable streaming responses
- `selected_outputs` (list, optional): block outputs to stream
- `async_execution` (bool, optional): execute asynchronously
- `max_retries` (int, optional): maximum number of retries (default: 3)
- `initial_delay` (float, optional): initial delay in seconds (default: 1.0)
- `max_delay` (float, optional): maximum delay in seconds (default: 30.0)
- `backoff_multiplier` (float, optional): backoff multiplier (default: 2.0)
**Returns:** `WorkflowExecutionResult | AsyncExecutionResult`
The retry logic uses exponential backoff (1s → 2s → 4s → 8s ...) with ±25% jitter to prevent thundering-herd effects. If the API provides a `retry-after` header, that value is used instead.
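The retry schedule described for `execute_with_retry()` can be sketched as follows. This is illustrative only; the SDK's internal implementation may differ, and a `retry-after` header overrides the computed delay:

```python
import random

def backoff_delays(max_retries=3, initial_delay=1.0, max_delay=30.0,
                   backoff_multiplier=2.0, jitter=0.25, rng=random.random):
    """Exponential backoff capped at max_delay, with +/-25% jitter.

    Sketch of the documented schedule, not the SDK's actual code.
    """
    delays = []
    for attempt in range(max_retries):
        base = min(initial_delay * backoff_multiplier ** attempt, max_delay)
        delays.append(base * (1 + jitter * (2 * rng() - 1)))
    return delays
```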
##### get_rate_limit_info()
Get the current rate limit information from the last API response.
```python
rate_limit_info = client.get_rate_limit_info()

if rate_limit_info:
    print("Limit:", rate_limit_info.limit)
    print("Remaining:", rate_limit_info.remaining)
    print("Reset:", datetime.fromtimestamp(rate_limit_info.reset))
```
**Returns:** `RateLimitInfo | None`
##### get_usage_limits()
Get your account's current usage limits and quota information.
```python
limits = client.get_usage_limits()
print("Sync requests remaining:", limits.rate_limit["sync"]["remaining"])
print("Async requests remaining:", limits.rate_limit["async"]["remaining"])
print("Current period cost:", limits.usage["currentPeriodCost"])
print("Plan:", limits.usage["plan"])
```
**Returns:** `UsageLimits`
**Response structure:**
```python
{
    "success": bool,
    "rateLimit": {
        "sync": {
            "isLimited": bool,
            "limit": int,
            "remaining": int,
            "resetAt": str
        },
        "async": {
            "isLimited": bool,
            "limit": int,
            "remaining": int,
            "resetAt": str
        },
        "authType": str  # 'api' or 'manual'
    },
    "usage": {
        "currentPeriodCost": float,
        "limit": float,
        "plan": str  # e.g., 'free', 'pro'
    }
}
```
##### set_api_key()
Updates the API key.
```python
client.set_api_key("new-api-key")
```
##### set_base_url()
Updates the base URL.
```python
client.set_base_url("https://my-custom-domain.com")
```
##### close()
Closes the underlying HTTP session.
```python
client.close()
```
## Data Classes
### WorkflowExecutionResult
```python
@dataclass
class WorkflowExecutionResult:
    success: bool
    output: Optional[Any] = None
    error: Optional[str] = None
    logs: Optional[List[Any]] = None
    metadata: Optional[Dict[str, Any]] = None
    trace_spans: Optional[List[Any]] = None
    total_duration: Optional[float] = None
```
### AsyncExecutionResult
```python
@dataclass
class AsyncExecutionResult:
    success: bool
    task_id: str
    status: str  # 'queued'
    created_at: str
    links: Dict[str, str]  # e.g., {"status": "/api/jobs/{taskId}"}
```
### WorkflowStatus
```python
@dataclass
class WorkflowStatus:
    is_deployed: bool
    deployed_at: Optional[str] = None
    needs_redeployment: bool = False
```
### RateLimitInfo
```python
@dataclass
class RateLimitInfo:
    limit: int
    remaining: int
    reset: int
    retry_after: Optional[int] = None
```
### UsageLimits
```python
@dataclass
class UsageLimits:
    success: bool
    rate_limit: Dict[str, Any]
    usage: Dict[str, Any]
```
### SimStudioError
```python
class SimStudioError(Exception):
    def __init__(self, message: str, code: Optional[str] = None, status: Optional[int] = None):
        super().__init__(message)
        self.code = code
        self.status = status
```
**Common error codes:**
- `UNAUTHORIZED`: invalid API key
- `TIMEOUT`: the request timed out
- `RATE_LIMIT_EXCEEDED`: rate limit exceeded
- `USAGE_LIMIT_EXCEEDED`: usage limit exceeded
- `EXECUTION_ERROR`: workflow execution failed
## Examples
### Basic Workflow Execution
<Steps>
<Step title="Initialize the client">
Set up the SimStudioClient with your API key.
</Step>
<Step title="Validate the workflow">
Check that the workflow is deployed and ready to execute.
</Step>
<Step title="Execute the workflow">
Run the workflow with your input data.
</Step>
<Step title="Handle the result">
Process the execution result and handle any errors.
</Step>
</Steps>
```python
import os
from simstudio import SimStudioClient

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def run_workflow():
    try:
        # Check if workflow is ready
        is_ready = client.validate_workflow("my-workflow-id")
        if not is_ready:
            raise Exception("Workflow is not deployed or ready")

        # Execute the workflow
        result = client.execute_workflow(
            "my-workflow-id",
            input_data={
                "message": "Process this data",
                "user_id": "12345"
            }
        )

        if result.success:
            print("Output:", result.output)
            print("Duration:", result.metadata.get("duration") if result.metadata else None)
        else:
            print("Workflow failed:", result.error)
    except Exception as error:
        print("Error:", error)

run_workflow()
```
### Error Handling
Handle the different types of errors that can occur during workflow execution:
```python
from simstudio import SimStudioClient, SimStudioError
import os

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def execute_with_error_handling():
    try:
        result = client.execute_workflow("workflow-id")
        return result
    except SimStudioError as error:
        if error.code == "UNAUTHORIZED":
            print("Invalid API key")
        elif error.code == "TIMEOUT":
            print("Workflow execution timed out")
        elif error.code == "USAGE_LIMIT_EXCEEDED":
            print("Usage limit exceeded")
        elif error.code == "INVALID_JSON":
            print("Invalid JSON in request body")
        else:
            print(f"Workflow error: {error}")
        raise
    except Exception as error:
        print(f"Unexpected error: {error}")
        raise
```
### Using a Context Manager
Use the client as a context manager to handle resource cleanup automatically:
```python
from simstudio import SimStudioClient
import os

# Using context manager to automatically close the session
with SimStudioClient(api_key=os.getenv("SIM_API_KEY")) as client:
    result = client.execute_workflow("workflow-id")
    print("Result:", result)
# Session is automatically closed here
```
### Batch Workflow Execution
Execute multiple workflows efficiently:
```python
from simstudio import SimStudioClient
import os

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def execute_workflows_batch(workflow_data_pairs):
    """Execute multiple workflows with different input data."""
    results = []
    for workflow_id, input_data in workflow_data_pairs:
        try:
            # Validate workflow before execution
            if not client.validate_workflow(workflow_id):
                print(f"Skipping {workflow_id}: not deployed")
                continue
            result = client.execute_workflow(workflow_id, input_data)
            results.append({
                "workflow_id": workflow_id,
                "success": result.success,
                "output": result.output,
                "error": result.error
            })
        except Exception as error:
            results.append({
                "workflow_id": workflow_id,
                "success": False,
                "error": str(error)
            })
    return results

# Example usage
workflows = [
    ("workflow-1", {"type": "analysis", "data": "sample1"}),
    ("workflow-2", {"type": "processing", "data": "sample2"}),
]

results = execute_workflows_batch(workflows)
for result in results:
    print(f"Workflow {result['workflow_id']}: {'Success' if result['success'] else 'Failed'}")
```
### Asynchronous Workflow Execution
Execute workflows asynchronously for long-running tasks:
```python
import os
import time
from simstudio import SimStudioClient

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def execute_async():
    try:
        # Start async execution
        result = client.execute_workflow(
            "workflow-id",
            input_data={"data": "large dataset"},
            async_execution=True  # Execute asynchronously
        )

        # Check if result is an async execution
        if hasattr(result, 'task_id'):
            print(f"Task ID: {result.task_id}")
            print(f"Status endpoint: {result.links['status']}")

            # Poll for completion
            status = client.get_job_status(result.task_id)
            while status["status"] in ["queued", "processing"]:
                print(f"Current status: {status['status']}")
                time.sleep(2)  # Wait 2 seconds
                status = client.get_job_status(result.task_id)

            if status["status"] == "completed":
                print("Workflow completed!")
                print(f"Output: {status['output']}")
                print(f"Duration: {status['metadata']['duration']}")
            else:
                print(f"Workflow failed: {status['error']}")
    except Exception as error:
        print(f"Error: {error}")

execute_async()
```
### Rate Limiting and Retries
Handle rate limits automatically with exponential backoff:
```python
import os
from simstudio import SimStudioClient, SimStudioError

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def execute_with_retry_handling():
    try:
        # Automatically retries on rate limit
        result = client.execute_with_retry(
            "workflow-id",
            input_data={"message": "Process this"},
            max_retries=5,
            initial_delay=1.0,
            max_delay=60.0,
            backoff_multiplier=2.0
        )
        print(f"Success: {result}")
    except SimStudioError as error:
        if error.code == "RATE_LIMIT_EXCEEDED":
            print("Rate limit exceeded after all retries")
            # Check rate limit info
            rate_limit_info = client.get_rate_limit_info()
            if rate_limit_info:
                from datetime import datetime
                reset_time = datetime.fromtimestamp(rate_limit_info.reset)
                print(f"Rate limit resets at: {reset_time}")

execute_with_retry_handling()
```
### Usage Monitoring
Monitor your account usage and limits:
```python
import os
from simstudio import SimStudioClient

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def check_usage():
    try:
        limits = client.get_usage_limits()

        print("=== Rate Limits ===")
        print("Sync requests:")
        print(f"  Limit: {limits.rate_limit['sync']['limit']}")
        print(f"  Remaining: {limits.rate_limit['sync']['remaining']}")
        print(f"  Resets at: {limits.rate_limit['sync']['resetAt']}")
        print(f"  Is limited: {limits.rate_limit['sync']['isLimited']}")

        print("\nAsync requests:")
        print(f"  Limit: {limits.rate_limit['async']['limit']}")
        print(f"  Remaining: {limits.rate_limit['async']['remaining']}")
        print(f"  Resets at: {limits.rate_limit['async']['resetAt']}")
        print(f"  Is limited: {limits.rate_limit['async']['isLimited']}")

        print("\n=== Usage ===")
        print(f"Current period cost: ${limits.usage['currentPeriodCost']:.2f}")
        print(f"Limit: ${limits.usage['limit']:.2f}")
        print(f"Plan: {limits.usage['plan']}")

        percent_used = (limits.usage['currentPeriodCost'] / limits.usage['limit']) * 100
        print(f"Usage: {percent_used:.1f}%")
        if percent_used > 80:
            print("⚠️ Warning: You are approaching your usage limit!")
    except Exception as error:
        print(f"Error checking usage: {error}")

check_usage()
```
### Streaming Workflow Execution
Execute workflows with real-time streaming responses:
```python
from simstudio import SimStudioClient
import os

client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))

def execute_with_streaming():
    """Execute workflow with streaming enabled."""
    try:
        # Enable streaming for specific block outputs
        result = client.execute_workflow(
            "workflow-id",
            input_data={"message": "Count to five"},
            stream=True,
            selected_outputs=["agent1.content"]  # Use blockName.attribute format
        )
        print("Workflow result:", result)
    except Exception as error:
        print("Error:", error)

execute_with_streaming()
```
Streaming responses follow the Server-Sent Events (SSE) format:
```
data: {"blockId":"7b7735b9-19e5-4bd6-818b-46aae2596e9f","chunk":"One"}
data: {"blockId":"7b7735b9-19e5-4bd6-818b-46aae2596e9f","chunk":", two"}
data: {"event":"done","success":true,"output":{},"metadata":{"duration":610}}
data: [DONE]
```
**Flask streaming example:**
```python
from flask import Flask, Response, stream_with_context
import requests
import json
import os

app = Flask(__name__)

@app.route('/stream-workflow')
def stream_workflow():
    """Stream workflow execution to the client."""
    def generate():
        response = requests.post(
            'https://sim.ai/api/workflows/WORKFLOW_ID/execute',
            headers={
                'Content-Type': 'application/json',
                'X-API-Key': os.getenv('SIM_API_KEY')
            },
            json={
                'message': 'Generate a story',
                'stream': True,
                'selectedOutputs': ['agent1.content']
            },
            stream=True
        )

        for line in response.iter_lines():
            if line:
                decoded_line = line.decode('utf-8')
                if decoded_line.startswith('data: '):
                    data = decoded_line[6:]  # Remove 'data: ' prefix
                    if data == '[DONE]':
                        break
                    try:
                        parsed = json.loads(data)
                        if 'chunk' in parsed:
                            yield f"data: {json.dumps(parsed)}\n\n"
                        elif parsed.get('event') == 'done':
                            yield f"data: {json.dumps(parsed)}\n\n"
                            print("Execution complete:", parsed.get('metadata'))
                    except json.JSONDecodeError:
                        pass

    return Response(
        stream_with_context(generate()),
        mimetype='text/event-stream'
    )

if __name__ == '__main__':
    app.run(debug=True)
```
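The Flask handler above parses SSE lines inline. The same logic can be factored into transport-independent helpers (`parse_sse_line` and `accumulate_chunks` are hypothetical names, not part of the SDK) that operate on any iterable of decoded lines, such as `response.iter_lines()`:

```python
import json

def parse_sse_line(raw: str):
    """Parse one SSE line; return the decoded JSON payload, or None for [DONE] and non-data lines."""
    if not raw.startswith("data: "):
        return None
    payload = raw[len("data: "):]
    if payload == "[DONE]":
        return None
    try:
        return json.loads(payload)
    except json.JSONDecodeError:
        return None

def accumulate_chunks(lines) -> str:
    """Join the streamed text chunks from an iterable of decoded SSE lines."""
    parts = []
    for raw in lines:
        event = parse_sse_line(raw)
        if event and "chunk" in event:
            parts.append(event["chunk"])
    return "".join(parts)

# The sample stream shown earlier reassembles into the full text:
sample = [
    'data: {"blockId":"7b7735b9-19e5-4bd6-818b-46aae2596e9f","chunk":"One"}',
    'data: {"blockId":"7b7735b9-19e5-4bd6-818b-46aae2596e9f","chunk":", two"}',
    'data: {"event":"done","success":true,"output":{},"metadata":{"duration":610}}',
    'data: [DONE]',
]
print(accumulate_chunks(sample))  # → One, two
```

Keeping the parser separate from the HTTP transport makes it straightforward to unit-test against canned event streams.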
### Environment Configuration
Configure the client with environment variables:
<Tabs items={['Development', 'Production']}>
<Tab value="Development">
```python
import os
from simstudio import SimStudioClient
# Development configuration
client = SimStudioClient(
    api_key=os.getenv("SIM_API_KEY"),
    base_url=os.getenv("SIM_BASE_URL", "https://sim.ai")
)
```
</Tab>
<Tab value="Production">
```python
import os
from simstudio import SimStudioClient
# Production configuration with error handling
api_key = os.getenv("SIM_API_KEY")
if not api_key:
    raise ValueError("SIM_API_KEY environment variable is required")

client = SimStudioClient(
    api_key=api_key,
    base_url=os.getenv("SIM_BASE_URL", "https://sim.ai")
)
```
</Tab>
</Tabs>
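Both tabs read the same two variables, so the lookup and validation can be centralized in one place. This sketch uses a hypothetical `load_sim_config` helper (not part of the SDK) that fails fast when the key is missing:

```python
import os

def load_sim_config() -> dict:
    """Read client settings from the environment, failing fast when the API key is missing."""
    api_key = os.environ.get("SIM_API_KEY")
    if not api_key:
        raise ValueError("SIM_API_KEY environment variable is required")
    return {
        "api_key": api_key,
        "base_url": os.environ.get("SIM_BASE_URL", "https://sim.ai"),
    }

# Usage: client = SimStudioClient(**load_sim_config())
```

Because the helper returns keyword arguments, it works unchanged for both the development and production configurations above.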
## Getting Your API Key
<Steps>
<Step title="Sign in to Sim">
Go to [Sim](https://sim.ai) and sign in to your account.
</Step>
<Step title="Open your workflow">
Navigate to the workflow you want to execute programmatically.
</Step>
<Step title="Deploy your workflow">
If it is not already deployed, click "Deploy" to deploy your workflow.
</Step>
<Step title="Create or select an API key">
During deployment, select or create an API key.
</Step>
<Step title="Copy the API key">
Copy the API key for use in your Python application.
</Step>
</Steps>
## System Requirements
- Python 3.8+
- requests >= 2.25.0
## License
Apache-2.0

File diff suppressed because it is too large


@@ -0,0 +1,24 @@
{
"title": "Sim Documentation",
"pages": [
"./introduction/index",
"./getting-started/index",
"./quick-reference/index",
"triggers",
"blocks",
"tools",
"connections",
"mcp",
"copilot",
"skills",
"knowledgebase",
"variables",
"credentials",
"execution",
"permissions",
"self-hosting",
"./enterprise/index",
"./keyboard-shortcuts/index"
],
"defaultOpen": false
}


@@ -2,7 +2,8 @@ import type { InferPageType } from 'fumadocs-core/source'
import type { PageData, source } from '@/lib/source'
export async function getLLMText(page: InferPageType<typeof source>) {
const data = page.data as PageData
const data = page.data as unknown as PageData
if (typeof data.getText !== 'function') return ''
const processed = await data.getText('processed')
return `# ${data.title} (${page.url})

apps/docs/lib/openapi.ts Normal file

@@ -0,0 +1,132 @@
import { readFileSync } from 'node:fs'
import { join } from 'node:path'
import { createOpenAPI } from 'fumadocs-openapi/server'
export const openapi = createOpenAPI({
input: ['./openapi.json'],
})
interface OpenAPIOperation {
path: string
method: string
}
function resolveRef(ref: string, spec: Record<string, unknown>): unknown {
const parts = ref.replace('#/', '').split('/')
let current: unknown = spec
for (const part of parts) {
if (current && typeof current === 'object') {
current = (current as Record<string, unknown>)[part]
} else {
return undefined
}
}
return current
}
function resolveRefs(obj: unknown, spec: Record<string, unknown>, depth = 0): unknown {
if (depth > 10) return obj
if (Array.isArray(obj)) {
return obj.map((item) => resolveRefs(item, spec, depth + 1))
}
if (obj && typeof obj === 'object') {
const record = obj as Record<string, unknown>
if ('$ref' in record && typeof record.$ref === 'string') {
const resolved = resolveRef(record.$ref, spec)
return resolveRefs(resolved, spec, depth + 1)
}
const result: Record<string, unknown> = {}
for (const [key, value] of Object.entries(record)) {
result[key] = resolveRefs(value, spec, depth + 1)
}
return result
}
return obj
}
function formatSchema(schema: unknown): string {
return JSON.stringify(schema, null, 2)
}
let cachedSpec: Record<string, unknown> | null = null
function getSpec(): Record<string, unknown> {
if (!cachedSpec) {
const specPath = join(process.cwd(), 'openapi.json')
cachedSpec = JSON.parse(readFileSync(specPath, 'utf8')) as Record<string, unknown>
}
return cachedSpec
}
export function getApiSpecContent(
title: string,
description: string | undefined,
operations: OpenAPIOperation[]
): string {
const spec = getSpec()
if (!operations || operations.length === 0) {
return `# ${title}\n\n${description || ''}`
}
const op = operations[0]
const method = op.method.toUpperCase()
const pathObj = (spec.paths as Record<string, Record<string, unknown>>)?.[op.path]
const operation = pathObj?.[op.method.toLowerCase()] as Record<string, unknown> | undefined
if (!operation) {
return `# ${title}\n\n${description || ''}`
}
const resolved = resolveRefs(operation, spec) as Record<string, unknown>
const lines: string[] = []
lines.push(`# ${title}`)
lines.push(`\`${method} ${op.path}\``)
if (resolved.description) {
lines.push(`## Description\n${resolved.description}`)
}
const parameters = resolved.parameters as Array<Record<string, unknown>> | undefined
if (parameters && parameters.length > 0) {
lines.push('## Parameters')
for (const param of parameters) {
const required = param.required ? ' (required)' : ''
const schemaType = param.schema
? `\`${(param.schema as Record<string, unknown>).type || 'string'}\``
: ''
lines.push(
`- **${param.name}** (${param.in})${required}${schemaType}: ${param.description || ''}`
)
}
}
const requestBody = resolved.requestBody as Record<string, unknown> | undefined
if (requestBody) {
lines.push('## Request Body')
if (requestBody.description) {
lines.push(String(requestBody.description))
}
const content = requestBody.content as Record<string, Record<string, unknown>> | undefined
const jsonContent = content?.['application/json']
if (jsonContent?.schema) {
lines.push(`\`\`\`json\n${formatSchema(jsonContent.schema)}\n\`\`\``)
}
}
const responses = resolved.responses as Record<string, Record<string, unknown>> | undefined
if (responses) {
lines.push('## Responses')
for (const [status, response] of Object.entries(responses)) {
lines.push(`### ${status}${response.description || ''}`)
const content = response.content as Record<string, Record<string, unknown>> | undefined
const jsonContent = content?.['application/json']
if (jsonContent?.schema) {
lines.push(`\`\`\`json\n${formatSchema(jsonContent.schema)}\n\`\`\``)
}
}
}
return lines.join('\n\n')
}


@@ -1,13 +1,93 @@
import { type InferPageType, loader } from 'fumadocs-core/source'
import { createElement, Fragment } from 'react'
import { type InferPageType, loader, multiple } from 'fumadocs-core/source'
import type { DocData, DocMethods } from 'fumadocs-mdx/runtime/types'
import { openapiSource } from 'fumadocs-openapi/server'
import { docs } from '@/.source/server'
import { i18n } from './i18n'
import { openapi } from './openapi'
export const source = loader({
baseUrl: '/',
source: docs.toFumadocsSource(),
i18n,
})
const METHOD_COLORS: Record<string, string> = {
GET: 'text-green-600 dark:text-green-400',
HEAD: 'text-green-600 dark:text-green-400',
OPTIONS: 'text-green-600 dark:text-green-400',
POST: 'text-blue-600 dark:text-blue-400',
PUT: 'text-yellow-600 dark:text-yellow-400',
PATCH: 'text-orange-600 dark:text-orange-400',
DELETE: 'text-red-600 dark:text-red-400',
}
/**
* Custom openapi plugin that places method badges BEFORE the page name
* in the sidebar (like Mintlify/Gumloop) instead of after.
*/
function openapiPluginBadgeLeft() {
return {
name: 'fumadocs:openapi-badge-left',
enforce: 'pre' as const,
transformPageTree: {
file(
this: {
storage: {
read: (path: string) => { format: string; data: Record<string, unknown> } | undefined
}
},
node: { name: React.ReactNode },
filePath: string | undefined
) {
if (!filePath) return node
const file = this.storage.read(filePath)
if (!file || file.format !== 'page') return node
const openApiData = file.data._openapi as { method?: string; webhook?: boolean } | undefined
if (!openApiData || typeof openApiData !== 'object') return node
if (openApiData.webhook) {
node.name = createElement(
Fragment,
null,
node.name,
' ',
createElement(
'span',
{
className:
'ms-auto border border-current px-1 rounded-lg text-xs text-nowrap font-mono',
},
'Webhook'
)
)
} else if (openApiData.method) {
const method = openApiData.method.toUpperCase()
const colorClass = METHOD_COLORS[method] ?? METHOD_COLORS.GET
node.name = createElement(
Fragment,
null,
createElement(
'span',
{ className: `font-mono font-medium me-1.5 text-[10px] text-nowrap ${colorClass}` },
method
),
node.name
)
}
return node
},
},
}
}
export const source = loader(
multiple({
docs: docs.toFumadocsSource(),
openapi: await openapiSource(openapi, {
baseDir: 'en/api-reference/(generated)',
groupBy: 'tag',
}),
}),
{
baseUrl: '/',
i18n,
plugins: [openapiPluginBadgeLeft() as never],
}
)
/** Full page data type including MDX content and metadata */
export type PageData = DocData &

apps/docs/openapi.json Normal file

File diff suppressed because it is too large

@@ -17,15 +17,17 @@
"class-variance-authority": "^0.7.1",
"clsx": "^2.1.1",
"drizzle-orm": "^0.44.5",
"fumadocs-core": "16.2.3",
"fumadocs-mdx": "14.1.0",
"fumadocs-ui": "16.2.3",
"fumadocs-core": "16.6.7",
"fumadocs-mdx": "14.2.8",
"fumadocs-openapi": "10.3.13",
"fumadocs-ui": "16.6.7",
"lucide-react": "^0.511.0",
"next": "16.1.6",
"next-themes": "^0.4.6",
"postgres": "^3.4.5",
"react": "19.2.1",
"react-dom": "19.2.1",
"shiki": "4.0.0",
"tailwind-merge": "^3.0.2"
},
"devDependencies": {


@@ -38,7 +38,7 @@ export default function StructuredData() {
url: 'https://sim.ai',
name: 'Sim - AI Agent Workflow Builder',
description:
'Open-source AI agent workflow builder. 60,000+ developers build and deploy agentic workflows. SOC2 and HIPAA compliant.',
'Open-source AI agent workflow builder. 70,000+ developers build and deploy agentic workflows. SOC2 and HIPAA compliant.',
publisher: {
'@id': 'https://sim.ai/#organization',
},
@@ -87,7 +87,7 @@ export default function StructuredData() {
'@id': 'https://sim.ai/#software',
name: 'Sim - AI Agent Workflow Builder',
description:
'Open-source AI agent workflow builder used by 60,000+ developers. Build agentic workflows with visual drag-and-drop interface. SOC2 and HIPAA compliant. Integrate with 100+ apps.',
'Open-source AI agent workflow builder used by 70,000+ developers. Build agentic workflows with visual drag-and-drop interface. SOC2 and HIPAA compliant. Integrate with 100+ apps.',
applicationCategory: 'DeveloperApplication',
applicationSubCategory: 'AI Development Tools',
operatingSystem: 'Web, Windows, macOS, Linux',
@@ -187,7 +187,7 @@ export default function StructuredData() {
name: 'What is Sim?',
acceptedAnswer: {
'@type': 'Answer',
text: 'Sim is an open-source AI agent workflow builder used by 60,000+ developers at trail-blazing startups to Fortune 500 companies. It provides a visual drag-and-drop interface for building and deploying agentic workflows. Sim is SOC2 and HIPAA compliant.',
text: 'Sim is an open-source AI agent workflow builder used by 70,000+ developers at trail-blazing startups to Fortune 500 companies. It provides a visual drag-and-drop interface for building and deploying agentic workflows. Sim is SOC2 and HIPAA compliant.',
},
},
{


@@ -19,6 +19,7 @@ import { validateUrlWithDNS } from '@/lib/core/security/input-validation.server'
import { SSE_HEADERS } from '@/lib/core/utils/sse'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { markExecutionCancelled } from '@/lib/execution/cancellation'
import { decrementSSEConnections, incrementSSEConnections } from '@/lib/monitoring/sse-connections'
import { checkWorkspaceAccess } from '@/lib/workspaces/permissions/utils'
import { getWorkspaceBilledAccountUserId } from '@/lib/workspaces/utils'
import {
@@ -630,9 +631,11 @@ async function handleMessageStream(
}
const encoder = new TextEncoder()
let messageStreamDecremented = false
const stream = new ReadableStream({
async start(controller) {
incrementSSEConnections('a2a-message')
const sendEvent = (event: string, data: unknown) => {
try {
const jsonRpcResponse = {
@@ -841,9 +844,19 @@ async function handleMessageStream(
})
} finally {
await releaseLock(lockKey, lockValue)
if (!messageStreamDecremented) {
messageStreamDecremented = true
decrementSSEConnections('a2a-message')
}
controller.close()
}
},
cancel() {
if (!messageStreamDecremented) {
messageStreamDecremented = true
decrementSSEConnections('a2a-message')
}
},
})
return new NextResponse(stream, {
@@ -1016,16 +1029,34 @@ async function handleTaskResubscribe(
let pollTimeoutId: ReturnType<typeof setTimeout> | null = null
const abortSignal = request.signal
abortSignal.addEventListener('abort', () => {
abortSignal.addEventListener(
'abort',
() => {
isCancelled = true
if (pollTimeoutId) {
clearTimeout(pollTimeoutId)
pollTimeoutId = null
}
},
{ once: true }
)
let sseDecremented = false
const cleanup = () => {
isCancelled = true
if (pollTimeoutId) {
clearTimeout(pollTimeoutId)
pollTimeoutId = null
}
})
if (!sseDecremented) {
sseDecremented = true
decrementSSEConnections('a2a-resubscribe')
}
}
const stream = new ReadableStream({
async start(controller) {
incrementSSEConnections('a2a-resubscribe')
const sendEvent = (event: string, data: unknown): boolean => {
if (isCancelled || abortSignal.aborted) return false
try {
@@ -1041,14 +1072,6 @@ async function handleTaskResubscribe(
}
}
const cleanup = () => {
isCancelled = true
if (pollTimeoutId) {
clearTimeout(pollTimeoutId)
pollTimeoutId = null
}
}
if (
!sendEvent('status', {
kind: 'status',
@@ -1160,11 +1183,7 @@ async function handleTaskResubscribe(
poll()
},
cancel() {
isCancelled = true
if (pollTimeoutId) {
clearTimeout(pollTimeoutId)
pollTimeoutId = null
}
cleanup()
},
})


@@ -1,7 +1,7 @@
/**
* @vitest-environment node
*/
import { createMockRequest, setupCommonApiMocks } from '@sim/testing'
import { createMockRequest } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
const handlerMocks = vi.hoisted(() => ({
@@ -14,6 +14,7 @@ const handlerMocks = vi.hoisted(() => ({
session: { id: 'anon-session' },
},
})),
isAuthDisabled: false,
}))
vi.mock('better-auth/next-js', () => ({
@@ -32,18 +33,22 @@ vi.mock('@/lib/auth/anonymous', () => ({
createAnonymousGetSessionResponse: handlerMocks.createAnonymousGetSessionResponse,
}))
vi.mock('@/lib/core/config/feature-flags', () => ({
get isAuthDisabled() {
return handlerMocks.isAuthDisabled
},
}))
import { GET } from '@/app/api/auth/[...all]/route'
describe('auth catch-all route (DISABLE_AUTH get-session)', () => {
beforeEach(() => {
vi.resetModules()
setupCommonApiMocks()
handlerMocks.betterAuthGET.mockReset()
handlerMocks.betterAuthPOST.mockReset()
handlerMocks.ensureAnonymousUserExists.mockReset()
handlerMocks.createAnonymousGetSessionResponse.mockClear()
vi.clearAllMocks()
handlerMocks.isAuthDisabled = false
})
it('returns anonymous session in better-auth response envelope when auth is disabled', async () => {
vi.doMock('@/lib/core/config/feature-flags', () => ({ isAuthDisabled: true }))
handlerMocks.isAuthDisabled = true
const req = createMockRequest(
'GET',
@@ -51,7 +56,6 @@ describe('auth catch-all route (DISABLE_AUTH get-session)', () => {
{},
'http://localhost:3000/api/auth/get-session'
)
const { GET } = await import('@/app/api/auth/[...all]/route')
const res = await GET(req as any)
const json = await res.json()
@@ -67,10 +71,11 @@ describe('auth catch-all route (DISABLE_AUTH get-session)', () => {
})
it('delegates to better-auth handler when auth is enabled', async () => {
vi.doMock('@/lib/core/config/feature-flags', () => ({ isAuthDisabled: false }))
handlerMocks.isAuthDisabled = false
const { NextResponse } = await import('next/server')
handlerMocks.betterAuthGET.mockResolvedValueOnce(
new (await import('next/server')).NextResponse(JSON.stringify({ data: { ok: true } }), {
new NextResponse(JSON.stringify({ data: { ok: true } }), {
headers: { 'content-type': 'application/json' },
}) as any
)
@@ -81,7 +86,6 @@ describe('auth catch-all route (DISABLE_AUTH get-session)', () => {
{},
'http://localhost:3000/api/auth/get-session'
)
const { GET } = await import('@/app/api/auth/[...all]/route')
const res = await GET(req as any)
const json = await res.json()


@@ -3,63 +3,45 @@
*
* @vitest-environment node
*/
import {
createMockRequest,
mockConsoleLogger,
mockCryptoUuid,
mockDrizzleOrm,
mockUuid,
setupCommonApiMocks,
} from '@sim/testing'
import { createMockRequest } from '@sim/testing'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
const { mockForgetPassword, mockLogger } = vi.hoisted(() => {
const logger = {
info: vi.fn(),
warn: vi.fn(),
error: vi.fn(),
debug: vi.fn(),
trace: vi.fn(),
fatal: vi.fn(),
child: vi.fn(),
}
return {
mockForgetPassword: vi.fn(),
mockLogger: logger,
}
})
vi.mock('@/lib/core/utils/urls', () => ({
getBaseUrl: vi.fn(() => 'https://app.example.com'),
}))
/** Setup auth API mocks for testing authentication routes */
function setupAuthApiMocks(
options: {
operations?: {
forgetPassword?: { success?: boolean; error?: string }
resetPassword?: { success?: boolean; error?: string }
}
} = {}
) {
setupCommonApiMocks()
mockUuid()
mockCryptoUuid()
mockConsoleLogger()
mockDrizzleOrm()
const { operations = {} } = options
const defaultOperations = {
forgetPassword: { success: true, error: 'Forget password error', ...operations.forgetPassword },
resetPassword: { success: true, error: 'Reset password error', ...operations.resetPassword },
}
const createAuthMethod = (config: { success?: boolean; error?: string }) => {
return vi.fn().mockImplementation(() => {
if (config.success) {
return Promise.resolve()
}
return Promise.reject(new Error(config.error))
})
}
vi.doMock('@/lib/auth', () => ({
auth: {
api: {
forgetPassword: createAuthMethod(defaultOperations.forgetPassword),
resetPassword: createAuthMethod(defaultOperations.resetPassword),
},
vi.mock('@/lib/auth', () => ({
auth: {
api: {
forgetPassword: mockForgetPassword,
},
}))
}
},
}))
vi.mock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue(mockLogger),
}))
import { POST } from '@/app/api/auth/forget-password/route'
describe('Forget Password API Route', () => {
beforeEach(() => {
vi.resetModules()
vi.clearAllMocks()
mockForgetPassword.mockResolvedValue(undefined)
})
afterEach(() => {
@@ -67,27 +49,18 @@ describe('Forget Password API Route', () => {
})
it('should send password reset email successfully with same-origin redirectTo', async () => {
setupAuthApiMocks({
operations: {
forgetPassword: { success: true },
},
})
const req = createMockRequest('POST', {
email: 'test@example.com',
redirectTo: 'https://app.example.com/reset',
})
const { POST } = await import('@/app/api/auth/forget-password/route')
const response = await POST(req)
const data = await response.json()
expect(response.status).toBe(200)
expect(data.success).toBe(true)
const auth = await import('@/lib/auth')
expect(auth.auth.api.forgetPassword).toHaveBeenCalledWith({
expect(mockForgetPassword).toHaveBeenCalledWith({
body: {
email: 'test@example.com',
redirectTo: 'https://app.example.com/reset',
@@ -97,50 +70,32 @@ describe('Forget Password API Route', () => {
})
it('should reject external redirectTo URL', async () => {
setupAuthApiMocks({
operations: {
forgetPassword: { success: true },
},
})
const req = createMockRequest('POST', {
email: 'test@example.com',
redirectTo: 'https://evil.com/phishing',
})
const { POST } = await import('@/app/api/auth/forget-password/route')
const response = await POST(req)
const data = await response.json()
expect(response.status).toBe(400)
expect(data.message).toBe('Redirect URL must be a valid same-origin URL')
const auth = await import('@/lib/auth')
expect(auth.auth.api.forgetPassword).not.toHaveBeenCalled()
expect(mockForgetPassword).not.toHaveBeenCalled()
})
it('should send password reset email without redirectTo', async () => {
setupAuthApiMocks({
operations: {
forgetPassword: { success: true },
},
})
const req = createMockRequest('POST', {
email: 'test@example.com',
})
const { POST } = await import('@/app/api/auth/forget-password/route')
const response = await POST(req)
const data = await response.json()
expect(response.status).toBe(200)
expect(data.success).toBe(true)
const auth = await import('@/lib/auth')
expect(auth.auth.api.forgetPassword).toHaveBeenCalledWith({
expect(mockForgetPassword).toHaveBeenCalledWith({
body: {
email: 'test@example.com',
redirectTo: undefined,
@@ -150,97 +105,64 @@ describe('Forget Password API Route', () => {
})
it('should handle missing email', async () => {
setupAuthApiMocks()
const req = createMockRequest('POST', {})
const { POST } = await import('@/app/api/auth/forget-password/route')
const response = await POST(req)
const data = await response.json()
expect(response.status).toBe(400)
expect(data.message).toBe('Email is required')
const auth = await import('@/lib/auth')
expect(auth.auth.api.forgetPassword).not.toHaveBeenCalled()
expect(mockForgetPassword).not.toHaveBeenCalled()
})
it('should handle empty email', async () => {
setupAuthApiMocks()
const req = createMockRequest('POST', {
email: '',
})
const { POST } = await import('@/app/api/auth/forget-password/route')
const response = await POST(req)
const data = await response.json()
expect(response.status).toBe(400)
expect(data.message).toBe('Please provide a valid email address')
const auth = await import('@/lib/auth')
expect(auth.auth.api.forgetPassword).not.toHaveBeenCalled()
expect(mockForgetPassword).not.toHaveBeenCalled()
})
it('should handle auth service error with message', async () => {
const errorMessage = 'User not found'
setupAuthApiMocks({
operations: {
forgetPassword: {
success: false,
error: errorMessage,
},
},
})
mockForgetPassword.mockRejectedValue(new Error(errorMessage))
const req = createMockRequest('POST', {
email: 'nonexistent@example.com',
})
const { POST } = await import('@/app/api/auth/forget-password/route')
const response = await POST(req)
const data = await response.json()
expect(response.status).toBe(500)
expect(data.message).toBe(errorMessage)
const logger = await import('@sim/logger')
const mockLogger = logger.createLogger('ForgetPasswordTest')
expect(mockLogger.error).toHaveBeenCalledWith('Error requesting password reset:', {
error: expect.any(Error),
})
})
it('should handle unknown error', async () => {
setupAuthApiMocks()
vi.doMock('@/lib/auth', () => ({
auth: {
api: {
forgetPassword: vi.fn().mockRejectedValue('Unknown error'),
},
},
}))
mockForgetPassword.mockRejectedValue('Unknown error')
const req = createMockRequest('POST', {
email: 'test@example.com',
})
const { POST } = await import('@/app/api/auth/forget-password/route')
const response = await POST(req)
const data = await response.json()
expect(response.status).toBe(500)
expect(data.message).toBe('Failed to send password reset email. Please try again later.')
const logger = await import('@sim/logger')
const mockLogger = logger.createLogger('ForgetPasswordTest')
expect(mockLogger.error).toHaveBeenCalled()
})
})


@@ -3,52 +3,81 @@
*
* @vitest-environment node
*/
import { createMockLogger, createMockRequest } from '@sim/testing'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
import { createMockRequest } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
describe('OAuth Connections API Route', () => {
const mockGetSession = vi.fn()
const mockDb = {
const {
mockGetSession,
mockDb,
mockLogger,
mockParseProvider,
mockEvaluateScopeCoverage,
mockJwtDecode,
mockEq,
} = vi.hoisted(() => {
const db = {
select: vi.fn().mockReturnThis(),
from: vi.fn().mockReturnThis(),
where: vi.fn().mockReturnThis(),
limit: vi.fn(),
}
const mockLogger = createMockLogger()
const mockParseProvider = vi.fn()
const mockEvaluateScopeCoverage = vi.fn()
const logger = {
info: vi.fn(),
warn: vi.fn(),
error: vi.fn(),
debug: vi.fn(),
trace: vi.fn(),
fatal: vi.fn(),
child: vi.fn(),
}
return {
mockGetSession: vi.fn(),
mockDb: db,
mockLogger: logger,
mockParseProvider: vi.fn(),
mockEvaluateScopeCoverage: vi.fn(),
mockJwtDecode: vi.fn(),
mockEq: vi.fn((field: unknown, value: unknown) => ({ field, value, type: 'eq' })),
}
})
const mockUUID = 'mock-uuid-12345678-90ab-cdef-1234-567890abcdef'
vi.mock('@/lib/auth', () => ({
getSession: mockGetSession,
}))
vi.mock('@sim/db', () => ({
db: mockDb,
account: { userId: 'userId', providerId: 'providerId' },
user: { email: 'email', id: 'id' },
eq: mockEq,
}))
vi.mock('drizzle-orm', () => ({
eq: mockEq,
}))
vi.mock('jwt-decode', () => ({
jwtDecode: mockJwtDecode,
}))
vi.mock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue(mockLogger),
}))
vi.mock('@/lib/oauth/utils', () => ({
parseProvider: mockParseProvider,
evaluateScopeCoverage: mockEvaluateScopeCoverage,
}))
import { GET } from '@/app/api/auth/oauth/connections/route'
describe('OAuth Connections API Route', () => {
beforeEach(() => {
vi.resetModules()
vi.clearAllMocks()
vi.stubGlobal('crypto', {
randomUUID: vi.fn().mockReturnValue(mockUUID),
})
vi.doMock('@/lib/auth', () => ({
getSession: mockGetSession,
}))
vi.doMock('@sim/db', () => ({
db: mockDb,
account: { userId: 'userId', providerId: 'providerId' },
user: { email: 'email', id: 'id' },
eq: vi.fn((field, value) => ({ field, value, type: 'eq' })),
}))
vi.doMock('drizzle-orm', () => ({
eq: vi.fn((field, value) => ({ field, value, type: 'eq' })),
}))
vi.doMock('jwt-decode', () => ({
jwtDecode: vi.fn(),
}))
vi.doMock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue(mockLogger),
}))
mockDb.select.mockReturnThis()
mockDb.from.mockReturnThis()
mockDb.where.mockReturnThis()
mockParseProvider.mockImplementation((providerId: string) => ({
baseProvider: providerId.split('-')[0] || providerId,
@@ -64,15 +93,6 @@ describe('OAuth Connections API Route', () => {
requiresReauthorization: false,
})
)
vi.doMock('@/lib/oauth/utils', () => ({
parseProvider: mockParseProvider,
evaluateScopeCoverage: mockEvaluateScopeCoverage,
}))
})
afterEach(() => {
vi.clearAllMocks()
})
it('should return connections successfully', async () => {
@@ -111,7 +131,6 @@ describe('OAuth Connections API Route', () => {
mockDb.limit.mockResolvedValueOnce(mockUserRecord)
const req = createMockRequest('GET')
const { GET } = await import('@/app/api/auth/oauth/connections/route')
const response = await GET(req)
const data = await response.json()
@@ -136,7 +155,6 @@ describe('OAuth Connections API Route', () => {
mockGetSession.mockResolvedValueOnce(null)
const req = createMockRequest('GET')
const { GET } = await import('@/app/api/auth/oauth/connections/route')
const response = await GET(req)
const data = await response.json()
@@ -161,7 +179,6 @@ describe('OAuth Connections API Route', () => {
mockDb.limit.mockResolvedValueOnce([])
const req = createMockRequest('GET')
const { GET } = await import('@/app/api/auth/oauth/connections/route')
const response = await GET(req)
const data = await response.json()
@@ -180,7 +197,6 @@ describe('OAuth Connections API Route', () => {
mockDb.where.mockRejectedValueOnce(new Error('Database error'))
const req = createMockRequest('GET')
const { GET } = await import('@/app/api/auth/oauth/connections/route')
const response = await GET(req)
const data = await response.json()
@@ -191,9 +207,6 @@ describe('OAuth Connections API Route', () => {
})
it('should decode ID token for display name', async () => {
const { jwtDecode } = await import('jwt-decode')
const mockJwtDecode = jwtDecode as any
mockGetSession.mockResolvedValueOnce({
user: { id: 'user-123' },
})
@@ -224,7 +237,6 @@ describe('OAuth Connections API Route', () => {
mockDb.limit.mockResolvedValueOnce([])
const req = createMockRequest('GET')
const { GET } = await import('@/app/api/auth/oauth/connections/route')
const response = await GET(req)
const data = await response.json()


@@ -4,66 +4,89 @@
* @vitest-environment node
*/
import { createMockLogger } from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
import { beforeEach, describe, expect, it, vi } from 'vitest'
const { mockCheckSessionOrInternalAuth, mockEvaluateScopeCoverage, mockLogger } = vi.hoisted(() => {
const logger = {
info: vi.fn(),
warn: vi.fn(),
error: vi.fn(),
debug: vi.fn(),
trace: vi.fn(),
fatal: vi.fn(),
child: vi.fn(),
}
return {
mockCheckSessionOrInternalAuth: vi.fn(),
mockEvaluateScopeCoverage: vi.fn(),
mockLogger: logger,
}
})
vi.mock('@/lib/auth/hybrid', () => ({
checkSessionOrInternalAuth: mockCheckSessionOrInternalAuth,
}))
vi.mock('@/lib/oauth', () => ({
evaluateScopeCoverage: mockEvaluateScopeCoverage,
}))
vi.mock('@/lib/core/utils/request', () => ({
generateRequestId: vi.fn().mockReturnValue('mock-request-id'),
}))
vi.mock('@/lib/credentials/oauth', () => ({
syncWorkspaceOAuthCredentialsForUser: vi.fn(),
}))
vi.mock('@/lib/workflows/utils', () => ({
authorizeWorkflowByWorkspacePermission: vi.fn(),
}))
vi.mock('@/lib/workspaces/permissions/utils', () => ({
checkWorkspaceAccess: vi.fn(),
}))
vi.mock('@sim/db/schema', () => ({
account: {
userId: 'userId',
providerId: 'providerId',
id: 'id',
scope: 'scope',
updatedAt: 'updatedAt',
},
credential: {
id: 'id',
workspaceId: 'workspaceId',
type: 'type',
displayName: 'displayName',
providerId: 'providerId',
accountId: 'accountId',
},
credentialMember: {
id: 'id',
credentialId: 'credentialId',
userId: 'userId',
status: 'status',
},
user: { email: 'email', id: 'id' },
}))
vi.mock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue(mockLogger),
}))
import { GET } from '@/app/api/auth/oauth/credentials/route'
describe('OAuth Credentials API Route', () => {
const mockGetSession = vi.fn()
const mockParseProvider = vi.fn()
const mockEvaluateScopeCoverage = vi.fn()
const mockDb = {
select: vi.fn().mockReturnThis(),
from: vi.fn().mockReturnThis(),
where: vi.fn().mockReturnThis(),
limit: vi.fn(),
}
const mockLogger = createMockLogger()
const mockUUID = 'mock-uuid-12345678-90ab-cdef-1234-567890abcdef'
function createMockRequestWithQuery(method = 'GET', queryParams = ''): NextRequest {
const url = `http://localhost:3000/api/auth/oauth/credentials${queryParams}`
return new NextRequest(new URL(url), { method })
}
beforeEach(() => {
vi.resetModules()
vi.stubGlobal('crypto', {
randomUUID: vi.fn().mockReturnValue(mockUUID),
})
vi.doMock('@/lib/auth', () => ({
getSession: mockGetSession,
}))
vi.doMock('@/lib/oauth/utils', () => ({
parseProvider: mockParseProvider,
evaluateScopeCoverage: mockEvaluateScopeCoverage,
}))
vi.doMock('@sim/db', () => ({
db: mockDb,
}))
vi.doMock('@sim/db/schema', () => ({
account: { userId: 'userId', providerId: 'providerId' },
user: { email: 'email', id: 'id' },
}))
vi.doMock('drizzle-orm', () => ({
and: vi.fn((...conditions) => ({ conditions, type: 'and' })),
eq: vi.fn((field, value) => ({ field, value, type: 'eq' })),
}))
vi.doMock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue(mockLogger),
}))
mockParseProvider.mockImplementation((providerId: string) => ({
baseProvider: providerId.split('-')[0] || providerId,
}))
vi.clearAllMocks()
mockEvaluateScopeCoverage.mockImplementation(
(_providerId: string, grantedScopes: string[]) => ({
@@ -76,17 +99,14 @@ describe('OAuth Credentials API Route', () => {
)
})
afterEach(() => {
vi.clearAllMocks()
})
it('should handle unauthenticated user', async () => {
mockGetSession.mockResolvedValueOnce(null)
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: false,
error: 'Authentication required',
})
const req = createMockRequestWithQuery('GET', '?provider=google')
const { GET } = await import('@/app/api/auth/oauth/credentials/route')
const response = await GET(req)
const data = await response.json()
@@ -96,14 +116,14 @@ describe('OAuth Credentials API Route', () => {
})
it('should handle missing provider parameter', async () => {
mockGetSession.mockResolvedValueOnce({
user: { id: 'user-123' },
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: true,
userId: 'user-123',
authType: 'session',
})
const req = createMockRequestWithQuery('GET')
const { GET } = await import('@/app/api/auth/oauth/credentials/route')
const response = await GET(req)
const data = await response.json()
@@ -113,22 +133,14 @@ describe('OAuth Credentials API Route', () => {
})
it('should handle no credentials found', async () => {
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: true,
userId: 'user-123',
authType: 'session',
})
mockParseProvider.mockReturnValueOnce({
baseProvider: 'github',
})
mockDb.select.mockReturnValueOnce(mockDb)
mockDb.from.mockReturnValueOnce(mockDb)
mockDb.where.mockResolvedValueOnce([])
const req = createMockRequestWithQuery('GET', '?provider=github')
const { GET } = await import('@/app/api/auth/oauth/credentials/route')
const response = await GET(req)
const data = await response.json()
@@ -137,14 +149,14 @@ describe('OAuth Credentials API Route', () => {
})
it('should return empty credentials when no workspace context', async () => {
mockCheckSessionOrInternalAuth.mockResolvedValueOnce({
success: true,
userId: 'user-123',
authType: 'session',
})
const req = createMockRequestWithQuery('GET', '?provider=google-email')
const { GET } = await import('@/app/api/auth/oauth/credentials/route')
const response = await GET(req)
const data = await response.json()
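
The credential tests above rely on Vitest's per-test mocking flow: `vi.resetModules()` in `beforeEach` clears the module cache, so the dynamic `await import(...)` inside each test re-evaluates the route module with the currently registered `vi.doMock` factories. A minimal sketch of that cache behaviour, with hypothetical `doMock`/`importModule`/`resetModules` helpers standing in for the Vitest internals:

```typescript
// A module-cache sketch: why vi.resetModules() + a fresh dynamic import
// is needed before a test can observe newly registered mock factories.
type Factory = () => unknown

const factories = new Map<string, Factory>() // registered mock factories
const cache = new Map<string, unknown>()     // evaluated module cache

// Register (or replace) a mock factory, analogous to vi.doMock.
function doMock(id: string, factory: Factory): void {
  factories.set(id, factory)
}

// Evaluate on first use, then serve from cache, analogous to import().
function importModule(id: string): unknown {
  if (!cache.has(id)) cache.set(id, factories.get(id)?.())
  return cache.get(id)
}

// Drop every cached evaluation, analogous to vi.resetModules().
function resetModules(): void {
  cache.clear()
}

doMock('@/lib/auth', () => ({ userId: 'user-123' }))
const first = importModule('@/lib/auth') as { userId: string }

doMock('@/lib/auth', () => ({ userId: 'user-456' }))
const stale = importModule('@/lib/auth') as { userId: string } // still the cached object
resetModules()
const fresh = importModule('@/lib/auth') as { userId: string } // re-evaluated, new factory
```

Without the `resetModules()` call, the second import would keep serving the first evaluation, which is why each test in the suite re-imports the route after the cache reset.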


@@ -3,76 +3,102 @@
*
* @vitest-environment node
*/
import { createMockRequest } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
const { mockGetSession, mockDb, mockSelectChain, mockLogger, mockSyncAllWebhooksForCredentialSet } =
vi.hoisted(() => {
const selectChain = {
from: vi.fn().mockReturnThis(),
innerJoin: vi.fn().mockReturnThis(),
where: vi.fn().mockResolvedValue([]),
}
const db = {
delete: vi.fn().mockReturnThis(),
where: vi.fn(),
select: vi.fn().mockReturnValue(selectChain),
}
const logger = {
info: vi.fn(),
warn: vi.fn(),
error: vi.fn(),
debug: vi.fn(),
trace: vi.fn(),
fatal: vi.fn(),
child: vi.fn(),
}
return {
mockGetSession: vi.fn(),
mockDb: db,
mockSelectChain: selectChain,
mockLogger: logger,
mockSyncAllWebhooksForCredentialSet: vi.fn().mockResolvedValue({}),
}
})
vi.mock('@/lib/auth', () => ({
getSession: mockGetSession,
}))
vi.mock('@sim/db', () => ({
db: mockDb,
}))
vi.mock('@sim/db/schema', () => ({
account: { userId: 'userId', providerId: 'providerId' },
credentialSetMember: {
id: 'id',
credentialSetId: 'credentialSetId',
userId: 'userId',
status: 'status',
},
credentialSet: { id: 'id', providerId: 'providerId' },
}))
vi.mock('drizzle-orm', () => ({
and: vi.fn((...conditions: unknown[]) => ({ conditions, type: 'and' })),
eq: vi.fn((field: unknown, value: unknown) => ({ field, value, type: 'eq' })),
like: vi.fn((field: unknown, value: unknown) => ({ field, value, type: 'like' })),
or: vi.fn((...conditions: unknown[]) => ({ conditions, type: 'or' })),
}))
vi.mock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue(mockLogger),
}))
vi.mock('@/lib/core/utils/request', () => ({
generateRequestId: vi.fn().mockReturnValue('test-request-id'),
}))
vi.mock('@/lib/webhooks/utils.server', () => ({
syncAllWebhooksForCredentialSet: mockSyncAllWebhooksForCredentialSet,
}))
vi.mock('@/lib/audit/log', () => ({
recordAudit: vi.fn(),
AuditAction: {
CREDENTIAL_SET_CREATED: 'credential_set.created',
CREDENTIAL_SET_UPDATED: 'credential_set.updated',
CREDENTIAL_SET_DELETED: 'credential_set.deleted',
OAUTH_CONNECTED: 'oauth.connected',
OAUTH_DISCONNECTED: 'oauth.disconnected',
},
AuditResourceType: {
CREDENTIAL_SET: 'credential_set',
OAUTH_CONNECTION: 'oauth_connection',
},
}))
import { POST } from '@/app/api/auth/oauth/disconnect/route'
describe('OAuth Disconnect API Route', () => {
beforeEach(() => {
vi.clearAllMocks()
mockDb.delete.mockReturnThis()
mockSelectChain.from.mockReturnThis()
mockSelectChain.innerJoin.mockReturnThis()
mockSelectChain.where.mockResolvedValue([])
})
it('should disconnect provider successfully', async () => {
@@ -87,8 +113,6 @@ describe('OAuth Disconnect API Route', () => {
provider: 'google',
})
const response = await POST(req)
const data = await response.json()
@@ -110,8 +134,6 @@ describe('OAuth Disconnect API Route', () => {
providerId: 'google-email',
})
const response = await POST(req)
const data = await response.json()
@@ -127,8 +149,6 @@ describe('OAuth Disconnect API Route', () => {
provider: 'google',
})
const response = await POST(req)
const data = await response.json()
@@ -144,8 +164,6 @@ describe('OAuth Disconnect API Route', () => {
const req = createMockRequest('POST', {})
const response = await POST(req)
const data = await response.json()
@@ -166,8 +184,6 @@ describe('OAuth Disconnect API Route', () => {
provider: 'google',
})
const response = await POST(req)
const data = await response.json()
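
The disconnect suite above moves its stubs into `vi.hoisted` so the hoisted `vi.mock` factories can reference them before the test file's own statements run. The database stub at its core is just a self-returning chain; a plain-TypeScript sketch of that shape (hypothetical `createSelectChain` helper; the real tests build the chain with `vi.fn().mockReturnThis()`):

```typescript
// A chainable query-builder stub: every builder step returns the chain itself,
// and the terminal where() resolves whatever rows the test wants the route to see.
type Row = { id: string; providerId: string }

interface SelectChain {
  from: (table: unknown) => SelectChain
  innerJoin: (table: unknown, on: unknown) => SelectChain
  where: (cond: unknown) => Promise<Row[]>
}

function createSelectChain(rows: Row[]): SelectChain {
  const chain: SelectChain = {
    from: () => chain,       // mirrors vi.fn().mockReturnThis()
    innerJoin: () => chain,  // mirrors vi.fn().mockReturnThis()
    where: async () => rows, // mirrors vi.fn().mockResolvedValue(rows)
  }
  return chain
}

const db = {
  select: () => createSelectChain([{ id: 'set-1', providerId: 'google' }]),
}

// Route code can now run a builder chain of any length against the stub:
const rowsPromise = db
  .select()
  .from('credentialSet')
  .innerJoin('credentialSetMember', 'member.credentialSetId = set.id')
  .where('providerId = google')
```

Because every intermediate step returns the same object, the stub survives refactors that reorder or add builder calls, and only the terminal `where()` needs per-test configuration.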

Some files were not shown because too many files have changed in this diff.