Mirror of https://github.com/simstudioai/sim.git, synced 2026-04-28 03:00:29 -04:00

Compare commits: fix/log-so...v0.6.56 (216 commits)
@@ -14,6 +14,20 @@ When the user asks you to create a block:
2. Configure all subBlocks with proper types, conditions, and dependencies
3. Wire up tools correctly

## Hard Rule: No Guessed Tool Outputs

Blocks depend on tool outputs. If the underlying tool response schema is not documented or live-verified, you MUST tell the user instead of guessing block outputs.

- Do NOT invent block outputs for undocumented tool responses
- Do NOT describe unknown JSON shapes as if they were confirmed
- Do NOT wire fields into the block just because they seem likely to exist

If the tool outputs are not known, do one of these instead:
1. Ask the user for sample tool responses
2. Ask the user for test credentials so the tool responses can be verified
3. Limit the block to operations whose outputs are documented
4. Leave uncertain outputs out and explicitly tell the user what remains unknown

## Block Configuration Structure

```typescript
@@ -575,6 +589,8 @@ Use `type: 'json'` with a descriptive string when:
- It represents a list/array of items
- The shape varies by operation

If the output shape is unknown because the underlying tool response is undocumented, you MUST tell the user and stop. Unknown is not the same as variable. Never guess block outputs.

## V2 Block Pattern

When creating V2 blocks (alongside legacy V1):
@@ -829,3 +845,4 @@ After creating the block, you MUST validate it against every tool it references:
- Type coercions in `tools.config.params` for any params that need conversion (Number(), Boolean(), JSON.parse())
3. **Verify block outputs** cover the key fields returned by all tools
4. **Verify conditions** — each subBlock should only show for the operations that actually use it
5. **If any tool outputs are still unknown**, explicitly tell the user instead of guessing block outputs
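The coercions called out in the checklist can be sketched as follows; the param names are illustrative, not from the Sim codebase:

```typescript
// Hypothetical params mapping for tools.config.params.
// SubBlock values typically arrive as strings and must be coerced before the tool call.
const coerceParams = (raw: Record<string, string>) => ({
  limit: Number(raw.limit),
  // Boolean('false') === true, so compare against the literal string instead
  includeArchived: raw.includeArchived === 'true',
  filters: raw.filters ? JSON.parse(raw.filters) : undefined,
})
```

The string-comparison form for booleans matters: `Boolean('false')` is `true`, which is a common source of silently wrong params.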

@@ -15,6 +15,21 @@ When the user asks you to create a connector:
3. Create the connector directory and config
4. Register it in the connector registry

## Hard Rule: No Guessed Response Or Document Schemas

If the service docs do not clearly show the document list response, document fetch response, pagination shape, or metadata fields, you MUST tell the user instead of guessing.

- Do NOT invent document fields
- Do NOT guess pagination cursors or next-page fields
- Do NOT infer metadata/tag mappings from unrelated endpoints
- Do NOT fabricate `ExternalDocument` content structure from partial docs

If the source schema is unknown, do one of these instead:
1. Ask the user for sample API responses
2. Ask the user for test credentials so you can verify live payloads
3. Implement only the documented parts of the connector
4. Leave the connector incomplete and explicitly say which fields remain unknown

## Directory Structure

Create files in `apps/sim/connectors/{service}/`:
@@ -92,6 +107,8 @@ export const {service}Connector: ConnectorConfig = {
}
```

Only map fields in `listDocuments`, `getDocument`, `validateConfig`, and `mapTags` when the source payload shape is documented or live-verified. If not, tell the user and stop rather than guessing.

### API key connector example

```typescript

@@ -29,6 +29,21 @@ Before writing any code:
- Required vs optional parameters
- Response structures

### Hard Rule: No Guessed Response Schemas

If the official docs do not clearly show the response JSON shape for an endpoint, you MUST stop and tell the user exactly which outputs are unknown.

- Do NOT guess response field names
- Do NOT infer nested JSON paths from related endpoints
- Do NOT invent output properties just because they seem likely
- Do NOT implement `transformResponse` against unverified payload shapes

If response schemas are missing or incomplete, do one of the following before proceeding:
1. Ask the user for sample responses
2. Ask the user for test credentials so you can verify the live payload
3. Reduce the scope to only endpoints whose response shapes are documented
4. Leave the tool unimplemented and explicitly report why

## Step 2: Create Tools

### Directory Structure
@@ -103,6 +118,7 @@ export const {service}{Action}Tool: ToolConfig<Params, Response> = {
- Set `optional: true` for outputs that may not exist
- Never output raw JSON dumps - extract meaningful fields
- When using `type: 'json'` and you know the object shape, define `properties` with the inner fields so downstream consumers know the structure. Only use bare `type: 'json'` when the shape is truly dynamic
- If you do not know the response JSON shape from docs or verified examples, you MUST tell the user and stop. Never guess outputs or response mappings.

## Step 3: Create Block

@@ -450,6 +466,8 @@ If creating V2 versions (API-aligned outputs):
- [ ] Verified block subBlocks cover all required tool params with correct conditions
- [ ] Verified block outputs match what the tools actually return
- [ ] Verified `tools.config.params` correctly maps and coerces all param types
- [ ] Verified every tool output and `transformResponse` path against documented or live-verified JSON responses
- [ ] If any response schema remained unknown, explicitly told the user instead of guessing

## Example Command

@@ -14,6 +14,21 @@ When the user asks you to create tools for a service:
2. Create the tools directory structure
3. Generate properly typed tool configurations

## Hard Rule: No Guessed Response Schemas

If the docs do not clearly show the response JSON for a tool, you MUST tell the user exactly which outputs are unknown and stop short of guessing.

- Do NOT invent response field names
- Do NOT infer nested paths from nearby endpoints
- Do NOT guess array item shapes
- Do NOT write `transformResponse` against unverified payloads

If the response shape is unknown, do one of these instead:
1. Ask the user for sample responses
2. Ask the user for test credentials so you can verify live responses
3. Implement only the endpoints whose outputs are documented
4. Leave the tool unimplemented and explicitly say why

## Directory Structure

Create files in `apps/sim/tools/{service}/`:
@@ -187,6 +202,8 @@ items: {

Only use bare `type: 'json'` without `properties` when the shape is truly dynamic or unknown.

If the response shape is unknown because the docs do not provide it, you MUST tell the user and stop. Unknown is not the same as dynamic. Never guess outputs.
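When the item shape is documented, the preferred form can be sketched like this; the declaration shape and field names are illustrative, not taken from the Sim codebase:

```typescript
// Hypothetical output declaration: a documented list exposed as type 'json'
// with `properties`, so downstream consumers can see the inner field shape.
const itemsOutput = {
  items: {
    type: 'json',
    description: 'List of matching records',
    properties: {
      id: { type: 'string', description: 'Record ID' },
      name: { type: 'string', description: 'Display name' },
    },
  },
} as const
```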

## Critical Rules for transformResponse

### Handle Nullable Fields
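A minimal sketch of nullable-field handling in a `transformResponse`-style mapper; the payload shape here is invented purely for illustration:

```typescript
// Illustrative raw payload with nullable/missing fields
interface RawUser {
  name?: string | null
  profile?: { email?: string | null } | null
}

// Optional chaining plus nullish coalescing keeps the mapper from
// throwing when intermediate objects are null or absent.
const mapUser = (raw: RawUser) => ({
  name: raw.name ?? '',
  email: raw.profile?.email ?? null,
})
```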
@@ -441,7 +458,9 @@ After creating all tools, you MUST validate every tool before finishing:
- All output fields match what the API actually returns
- No fields are missing from outputs that the API provides
- No extra fields are defined in outputs that the API doesn't return
- Every output field and JSON path is backed by docs or live-verified sample responses
3. **Verify consistency** across tools:
- Shared types in `types.ts` match all tools that use them
- Tool IDs in the barrel export match the tool file definitions
- Error handling is consistent (error checks, meaningful messages)
4. **If any response schema is still unknown**, explicitly tell the user instead of guessing

@@ -14,6 +14,21 @@ You are an expert at creating webhook triggers for Sim. You understand the trigg
3. Create a provider handler if custom auth, formatting, or subscriptions are needed
4. Register triggers and connect them to the block

## Hard Rule: No Guessed Webhook Payload Schemas

If the service docs do not clearly show the webhook payload JSON for an event, you MUST tell the user instead of guessing trigger outputs or `formatInput` mappings.

- Do NOT invent payload field names
- Do NOT guess nested event object paths
- Do NOT infer output fields from the UI or marketing docs
- Do NOT write `formatInput` against unverified webhook bodies

If the payload shape is unknown, do one of these instead:
1. Ask the user for sample webhook payloads
2. Ask the user for a test webhook source so you can inspect a real event
3. Implement only the event registration/setup portions whose payloads are documented
4. Leave the trigger unimplemented and explicitly say which payload fields are unknown

## Directory Structure

```
.agents/skills/cleanup/SKILL.md (Normal file, 25 lines)
@@ -0,0 +1,25 @@
---
name: cleanup
description: Run all code quality skills in sequence — effects, memo, callbacks, state, React Query, and emcn design review
---

# Cleanup

Arguments:
- scope: what to review (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.

User arguments: $ARGUMENTS

## Steps

Run each of these skills in order on the specified scope, passing through the scope and fix arguments. After each skill completes, move to the next. Do not skip any.

1. `/you-might-not-need-an-effect $ARGUMENTS`
2. `/you-might-not-need-a-memo $ARGUMENTS`
3. `/you-might-not-need-a-callback $ARGUMENTS`
4. `/you-might-not-need-state $ARGUMENTS`
5. `/react-query-best-practices $ARGUMENTS`
6. `/emcn-design-review $ARGUMENTS`

After all skills have run, output a summary of what was found and fixed (or proposed) across all six passes.

.agents/skills/emcn-design-review/SKILL.md (Normal file, 335 lines)
@@ -0,0 +1,335 @@
---
name: emcn-design-review
description: Review UI code for alignment with the emcn design system — components, tokens, patterns, and conventions
---

# EMCN Design Review

Arguments:
- scope: what to review (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.

User arguments: $ARGUMENTS

## Context

This codebase uses **emcn**, a custom component library built on Radix UI primitives with CVA (class-variance-authority) variants and CSS variable design tokens. All UI must use emcn components and tokens — never raw HTML elements or hardcoded colors.

## Steps

1. Read the emcn barrel export at `apps/sim/components/emcn/components/index.ts` to know what's available
2. Read `apps/sim/app/_styles/globals.css` for the full set of CSS variable tokens
3. Analyze the specified scope against every rule below
4. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.

---

## Imports

- Import components from `@/components/emcn`, never from subpaths
- Import icons from `@/components/emcn/icons` or `lucide-react`
- Import `cn` from `@/lib/core/utils/cn` for conditional class merging
- Import app-specific wrappers (Select, VerifiedBadge) from `@/components/ui`

```tsx
// Good
import { Button, Modal, Badge } from '@/components/emcn'
// Bad
import { Button } from '@/components/emcn/components/button/button'
```

---

## Design Tokens (CSS Variables)

Never use raw color values. Always use CSS variable tokens via Tailwind arbitrary values: `text-[var(--text-primary)]`, not `text-gray-500` or `#333`. The CSS variable pattern is canonical (1,700+ uses) — do not use Tailwind semantic classes like `text-muted-foreground`.

### Text hierarchy

| Token | Use |
|-------|-----|
| `text-[var(--text-primary)]` | Main content text |
| `text-[var(--text-secondary)]` | Secondary/supporting text |
| `text-[var(--text-tertiary)]` | Tertiary text |
| `text-[var(--text-muted)]` | Disabled, placeholder text |
| `text-[var(--text-icon)]` | Icon tinting |
| `text-[var(--text-inverse)]` | Text on dark backgrounds |
| `text-[var(--text-error)]` | Error/warning messages |

### Surfaces (elevation)

| Token | Use |
|-------|-----|
| `bg-[var(--bg)]` | Page background |
| `bg-[var(--surface-2)]` through `bg-[var(--surface-7)]` | Increasing elevation |
| `bg-[var(--surface-hover)]` | Hover state backgrounds |
| `bg-[var(--surface-active)]` | Active/selected backgrounds |

### Borders

| Token | Use |
|-------|-----|
| `border-[var(--border)]` | Default borders |
| `border-[var(--border-1)]` | Stronger borders (inputs, cards) |
| `border-[var(--border-muted)]` | Subtle dividers |

### Status

| Token | Use |
|-------|-----|
| `--success` | Success states |
| `--error` | Error states |
| `--caution` | Warning states |

### Brand

| Token | Use |
|-------|-----|
| `--brand-secondary` | Brand color |
| `--brand-accent` | Accent/CTA color |

### Shadows

Use shadow tokens, never raw box-shadow values:
- `shadow-subtle`, `shadow-medium`, `shadow-overlay`
- `shadow-kbd`, `shadow-card`

### Z-Index

Use z-index tokens for layering:
- `z-[var(--z-dropdown)]` (100), `z-[var(--z-modal)]` (200), `z-[var(--z-popover)]` (300), `z-[var(--z-tooltip)]` (400), `z-[var(--z-toast)]` (500)

---

## Component Usage Rules

### Buttons

Available variants: `default`, `primary`, `destructive`, `ghost`, `outline`, `active`, `secondary`, `tertiary`, `subtle`, `ghost-secondary`, `3d`

| Action type | Variant | Frequency |
|-------------|---------|-----------|
| Toolbar, icon-only, utility actions | `ghost` | Most common (28%) |
| Primary action (create, save, submit) | `primary` | Very common (24%) |
| Cancel, close, secondary action | `default` | Common |
| Delete, remove, destructive action | `destructive` | Targeted use only |
| Active/selected state | `active` | Targeted use only |
| Toggle, mode switch | `outline` | Moderate |

Sizes: `sm` (compact, 32% of buttons) or `md` (default, used when no size specified). Never create custom button styles — use an existing variant.

Buttons without an explicit variant prop get `default` styling. This is acceptable for cancel/secondary actions.

### Modals (Dialogs)

Use `Modal` + subcomponents. Never build custom dialog overlays.

```tsx
<Modal open={open} onOpenChange={setOpen}>
  <ModalContent size="sm">
    <ModalHeader>Title</ModalHeader>
    <ModalBody>Content</ModalBody>
    <ModalFooter>
      <Button variant="default" onClick={() => setOpen(false)}>Cancel</Button>
      <Button variant="primary" onClick={handleSubmit}>Save</Button>
    </ModalFooter>
  </ModalContent>
</Modal>
```

Modal sizes by frequency: `sm` (440px, most common — confirmations and simple dialogs), `md` (500px, forms), `lg` (600px, content-heavy), `xl` (800px, rare), `full` (1200px, rare).

Footer buttons: Cancel on left (`variant="default"`), primary action on right. This pattern is followed 100% across the codebase.

### Delete/Remove Confirmations

Always use Modal with `size="sm"`. The established pattern:

```tsx
<Modal open={open} onOpenChange={setOpen}>
  <ModalContent size="sm">
    <ModalHeader>Delete {itemType}</ModalHeader>
    <ModalBody>
      <p>Description of consequences</p>
      <p className="text-[var(--text-error)]">Warning about irreversibility</p>
    </ModalBody>
    <ModalFooter>
      <Button variant="default" onClick={() => setOpen(false)}>Cancel</Button>
      <Button variant="destructive" onClick={handleDelete} disabled={isDeleting}>
        Delete
      </Button>
    </ModalFooter>
  </ModalContent>
</Modal>
```

Rules:
- Title: "Delete {ItemType}" or "Remove {ItemType}" (use "Remove" for membership/association changes)
- Include consequence description
- Use `text-[var(--text-error)]` for warning text when the action is irreversible
- `variant="destructive"` for the action button (100% compliance)
- `variant="default"` for cancel (100% compliance)
- Cancel left, destructive right (100% compliance)
- For high-risk deletes (workspaces), require typing the name to confirm
- Include recovery info if soft-delete: "You can restore it from Recently Deleted in Settings"

### Toast Notifications

Use the imperative `toast` API from `@/components/emcn`. Never build custom notification UI.

```tsx
import { toast } from '@/components/emcn'

toast.success('Item saved')
toast.error('Something went wrong')
toast.success('Deleted', { action: { label: 'Undo', onClick: handleUndo } })
```

Variants: `default`, `success`, `error`. Auto-dismiss after 5s. Supports optional action buttons with callbacks.

### Badges

Use semantic color variants for status:

| Status | Variant | Usage |
|--------|---------|-------|
| Error, failed, disconnected | `red` | Most common (15 uses) |
| Metadata, roles, auth types, scopes | `gray-secondary` | Very common (12 uses) |
| Type annotations (TS types, field types) | `type` | Very common (12 uses) |
| Success, active, enabled, running | `green` | Common (7 uses) |
| Neutral, default, unknown | `gray` | Common (6 uses) |
| Outline, parameters, public | `outline` | Moderate (6 uses) |
| Warning, processing | `amber` | Moderate (5 uses) |
| Paused, warning | `orange` | Occasional |
| Info, queued | `blue` | Occasional |
| Data types (arrays) | `purple` | Occasional |
| Generic with border | `default` | Occasional |

Use `dot` prop for status indicators (19 instances in codebase). `icon` prop is available but rarely used.

### Tooltips

Use `Tooltip` from emcn with namespace pattern:

```tsx
<Tooltip.Root>
  <Tooltip.Trigger asChild>
    <Button variant="ghost">{icon}</Button>
  </Tooltip.Trigger>
  <Tooltip.Content>Helpful text</Tooltip.Content>
</Tooltip.Root>
```

Use tooltips for icon-only buttons and truncated text. Don't tooltip self-explanatory elements.

### Popovers

Use for filters, option menus, and nested navigation:

```tsx
<Popover open={open} onOpenChange={setOpen} size="sm">
  <PopoverTrigger asChild>
    <Button variant="ghost">Trigger</Button>
  </PopoverTrigger>
  <PopoverContent side="bottom" align="end" minWidth={160}>
    <PopoverSection>Section Title</PopoverSection>
    <PopoverItem active={isActive} onClick={handleClick}>
      Item Label
    </PopoverItem>
    <PopoverDivider />
  </PopoverContent>
</Popover>
```

### Dropdown Menus

Use for context menus and action menus:

```tsx
<DropdownMenu>
  <DropdownMenuTrigger asChild>
    <Button variant="ghost">
      <MoreHorizontal className="h-[14px] w-[14px]" />
    </Button>
  </DropdownMenuTrigger>
  <DropdownMenuContent align="end">
    <DropdownMenuItem onClick={handleEdit}>Edit</DropdownMenuItem>
    <DropdownMenuSeparator />
    <DropdownMenuItem onClick={handleDelete} className="text-[var(--text-error)]">
      Delete
    </DropdownMenuItem>
  </DropdownMenuContent>
</DropdownMenu>
```

Destructive items go last, after a separator, in error color.

### Forms

Use `FormField` wrapper for labeled inputs:

```tsx
<FormField label="Name" htmlFor="name" error={errors.name} optional>
  <Input id="name" value={name} onChange={e => setName(e.target.value)} />
</FormField>
```

Rules:
- Use `Input` from emcn, never raw `<input>` (exception: hidden file inputs)
- Use `Textarea` from emcn, never raw `<textarea>`
- Use `FormField` for label + input + error layout
- Mark optional fields with `optional` prop
- Show errors inline below the input
- Use `Combobox` for searchable selects
- Use `TagInput` for multi-value inputs

### Loading States

Use `Skeleton` for content placeholders:

```tsx
<Skeleton className="h-5 w-[200px] rounded-md" />
```

Rules:
- Mirror the actual UI structure with skeletons
- Match exact dimensions of the final content
- Use `rounded-md` to match component radius
- Stack multiple skeletons for lists

### Icons

Standard sizing — `h-[14px] w-[14px]` is the dominant pattern (400+ uses):

```tsx
<Icon className="h-[14px] w-[14px] text-[var(--text-icon)]" />
```

Size scale by frequency:
1. `h-[14px] w-[14px]` — default for inline icons (most common)
2. `h-[16px] w-[16px]` — slightly larger inline icons
3. `h-3 w-3` (12px) — compact/tight spaces
4. `h-4 w-4` (16px) — Tailwind equivalent, also common
5. `h-3.5 w-3.5` (14px) — Tailwind equivalent of 14px
6. `h-5 w-5` (20px) — larger icons, section headers

Use `text-[var(--text-icon)]` for icon color (113+ uses in codebase).

---

## Styling Rules

1. **Use `cn()` for conditional classes**: `cn('base', condition && 'conditional')` — never template literal concatenation like `` `base ${condition ? 'active' : ''}` ``
2. **Inline styles**: Avoid. Exception: dynamic values that can't be expressed as Tailwind classes (e.g., `style={{ width: dynamicVar }}` or CSS variable references). Never use inline styles for colors or static values.
3. **Never hardcode colors**: Use CSS variable tokens. Never `text-gray-500`, `bg-red-100`, `#fff`, or `rgb()`. Always `text-[var(--text-*)]`, `bg-[var(--surface-*)]`, etc.
4. **Never use Tailwind semantic color classes**: Use `text-[var(--text-muted)]` not `text-muted-foreground`. The CSS variable pattern is canonical.
5. **Never use global styles**: Keep all styling local to components
6. **Hover states**: Use `hover-hover:` pseudo-class for hover-capable devices
7. **Transitions**: Use `transition-colors` for color changes, `transition-colors duration-100` for fast hover
8. **Border radius**: `rounded-lg` (large cards), `rounded-md` (medium), `rounded-sm` (small), `rounded-xs` (tiny)
9. **Typography**: Use semantic sizes — `text-small` (13px), `text-caption` (12px), `text-xs` (11px), `text-micro` (10px)
10. **Font weight**: Use `font-medium` for emphasis, avoid `font-bold` unless for headings
11. **Spacing**: Use Tailwind gap/padding utilities. Common patterns: `gap-2`, `gap-3`, `px-4 py-2.5`
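Rule 1 in practice. The `cn` implementation below is a minimal stand-in assumed for illustration; the real helper lives at `@/lib/core/utils/cn` and may merge classes differently:

```typescript
// Minimal stand-in for the project's `cn` helper: joins truthy class strings.
const cn = (...classes: Array<string | false | null | undefined>) =>
  classes.filter(Boolean).join(' ')

const isActive: boolean = true
const hasError: boolean = false

// Conditional classes composed with cn() instead of template-literal concatenation
const className = cn(
  'flex items-center gap-2',
  isActive && 'bg-[var(--surface-active)]',
  hasError && 'text-[var(--text-error)]'
)
```

Falsy entries drop out cleanly, so there are no stray spaces or `'false'` strings in the final class list.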

---

## Anti-patterns to flag

- Raw HTML `<button>` instead of Button component (exception: inside Radix primitives)
- Raw HTML `<input>` instead of Input component (exception: hidden file inputs, read-only checkboxes in markdown)
- Hardcoded Tailwind default colors (`text-gray-*`, `bg-red-*`, `text-blue-*`)
- Hex values in className (`bg-[#fff]`, `text-[#333]`)
- Tailwind semantic classes (`text-muted-foreground`) instead of CSS variables (`text-[var(--text-muted)]`)
- Custom modal/dialog implementations instead of `Modal`
- Custom toast/notification implementations instead of `toast`
- Inline styles for colors or static values (dynamic values are acceptable)
- Template literal className concatenation instead of `cn()`
- Wrong button variant for the action type
- Missing loading/skeleton states
- Missing error states on forms
- Importing from emcn subpaths instead of barrel export
- Using arbitrary z-index (`z-50`, `z-[9999]`) instead of z-index tokens
- Custom shadows instead of shadow tokens
- Icon sizes that don't follow the established scale (default to `h-[14px] w-[14px]`)
.agents/skills/react-query-best-practices/SKILL.md (Normal file, 54 lines)
@@ -0,0 +1,54 @@
---
name: react-query-best-practices
description: Audit React Query usage for best practices — key factories, staleTime, mutations, and server state ownership
---

# React Query Best Practices

Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/hooks/queries/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.

User arguments: $ARGUMENTS

## Context

This codebase uses React Query (TanStack Query) as the single source of truth for all server state. All query hooks live in `hooks/queries/`. Zustand is used only for client-only UI state. Server data must never be duplicated into useState or Zustand outside of mutation callbacks that coordinate cross-store state.

## References

Read these before analyzing:
1. https://tkdodo.eu/blog/practical-react-query — foundational defaults, custom hooks, avoiding local state copies
2. https://tkdodo.eu/blog/effective-react-query-keys — key factory pattern, hierarchical keys, fuzzy invalidation
3. https://tkdodo.eu/blog/react-query-as-a-state-manager — React Query IS your server state manager

## Rules to enforce

### Query key factories

- Every file in `hooks/queries/` must have a hierarchical key factory with an `all` root key
- Keys must include intermediate plural keys (`lists`, `details`) for prefix invalidation
- Key factories are colocated with their query hooks, not in a global keys file
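A sketch of such a factory, using an illustrative `workflow` entity (not necessarily one from this codebase):

```typescript
// Hierarchical key factory with an `all` root and plural intermediate keys.
const workflowKeys = {
  all: ['workflows'] as const,
  lists: () => [...workflowKeys.all, 'list'] as const,
  list: (filter: string) => [...workflowKeys.lists(), { filter }] as const,
  details: () => [...workflowKeys.all, 'detail'] as const,
  detail: (id: string) => [...workflowKeys.details(), id] as const,
}
```

Because every key extends `workflowKeys.all`, invalidating `workflowKeys.lists()` fuzzy-matches every list key by prefix without touching detail queries.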

### Query hooks
- Every `queryFn` must forward `signal` for request cancellation
- Every query must have an explicit `staleTime` (default 0 is almost never correct)
- `keepPreviousData` / `placeholderData` only on variable-key queries (where params change), never on static keys
- Use `enabled` to prevent queries from running without required params
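A minimal sketch of query options that satisfy these rules (the `todos` endpoint, the 30-second `staleTime`, and the function name are hypothetical choices, not codebase conventions):

```typescript
// Hypothetical query options: forwards `signal`, sets an explicit staleTime,
// and gates execution on the required param via `enabled`.
export function todoListOptions(filter: string) {
  return {
    queryKey: ['todos', 'list', { filter }] as const,
    queryFn: ({ signal }: { signal?: AbortSignal }) =>
      fetch(`/api/todos?filter=${encodeURIComponent(filter)}`, { signal }).then((r) => r.json()),
    staleTime: 30_000, // explicit; never rely on the default of 0
    enabled: filter.length > 0, // don't run without the required param
  }
}
```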

### Mutations
- Use `onSettled` (not `onSuccess`) for cache reconciliation — it fires on both success and error
- For optimistic updates: save previous data in `onMutate`, roll back in `onError`
- Use targeted invalidation (`entityKeys.lists()`) not broad (`entityKeys.all`) when possible
- Don't include mutation objects in `useCallback` deps — `.mutate()` is stable
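The optimistic-update lifecycle can be sketched with plain callbacks. Note this is a stand-in: the cache here is a plain `Map`, not the real QueryClient, and `todos` is a hypothetical entity:

```typescript
// Stand-in cache to illustrate the onMutate / onError / onSettled contract.
const cache = new Map<string, string[]>()
cache.set('todos', ['a', 'b'])

const addTodoCallbacks = {
  // Snapshot the previous value, apply the optimistic update, return the snapshot.
  onMutate(newTodo: string) {
    const previous = cache.get('todos') ?? []
    cache.set('todos', [...previous, newTodo])
    return { previous }
  },
  // Roll back to the snapshot captured in onMutate.
  onError(_err: unknown, _newTodo: string, context: { previous: string[] }) {
    cache.set('todos', context.previous)
  },
  // Reconcile here, not in onSuccess: onSettled fires on success AND error.
  // (The real React Query callback also receives data, error, and variables.)
  onSettled(invalidate: () => void) {
    invalidate()
  },
}
```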

### Server state ownership
- Never copy query data into useState. Use query data directly in components.
- Never copy query data into Zustand stores (exception: mutation callbacks that coordinate cross-store state like temp ID replacement)
- The query cache is not a local state manager — `setQueryData` is for optimistic updates only
- Forms are the one deliberate exception: copy server data into local form state with `staleTime: Infinity`

## Steps

1. Read the references above to understand the guidelines
2. Analyze the specified scope against the rules listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.
@@ -52,6 +52,20 @@ Fetch the official API docs for the service. This is the **source of truth** for

Use Context7 (resolve-library-id → query-docs) or WebFetch to retrieve documentation. If both fail, note which claims are based on training knowledge vs verified docs.

### Hard Rule: No Guessed Source Schemas

If the service docs do not clearly show document list responses, document fetch responses, metadata fields, or pagination shapes, you MUST tell the user instead of guessing.

- Do NOT infer document fields from unrelated endpoints
- Do NOT guess pagination cursors or response wrappers
- Do NOT assume metadata keys that are not documented
- Do NOT treat probable shapes as validated

If a schema is unknown, validation must explicitly recommend:
1. sample API responses,
2. live test credentials, or
3. trimming the connector to only documented fields.

## Step 3: Validate API Endpoints

For **every** API call in the connector (`listDocuments`, `getDocument`, `validateConfig`, and any helper functions), verify against the API docs:
@@ -93,6 +107,7 @@ For **every** API call in the connector (`listDocuments`, `getDocument`, `valida
- [ ] Field names extracted match what the API actually returns
- [ ] Nullable fields are handled with `?? null` or `|| undefined`
- [ ] Error responses are checked before accessing data fields
- [ ] Every extracted field and pagination value is backed by official docs or live-verified sample payloads

## Step 4: Validate OAuth Scopes (if OAuth connector)
@@ -304,6 +319,7 @@ After fixing, confirm:
1. `bun run lint` passes
2. TypeScript compiles clean
3. Re-read all modified files to verify fixes are correct
4. Any remaining unknown source schemas were explicitly reported to the user instead of guessed

## Checklist Summary
@@ -41,6 +41,20 @@ Fetch the official API docs for the service. This is the **source of truth** for
- Pagination patterns (which param name, which response field)
- Rate limits and error formats

### Hard Rule: No Guessed Response Schemas

If the official docs do not clearly show the response JSON shape for an endpoint, you MUST tell the user instead of guessing.

- Do NOT assume field names from nearby endpoints
- Do NOT infer nested JSON paths without evidence
- Do NOT treat "likely" fields as confirmed outputs
- Do NOT accept implementation guesses as valid just because they are defensive

If a response schema is unknown, the validation must explicitly call that out and require:
1. sample responses from the user,
2. live test credentials for verification, or
3. trimming the tool/block down to only documented fields.

## Step 3: Validate Tools

For **every** tool file, check:
@@ -81,6 +95,7 @@ For **every** tool file, check:
- [ ] All optional arrays use `?? []`
- [ ] Error cases are handled: checks for missing/empty data and returns meaningful error
- [ ] Does NOT do raw JSON dumps — extracts meaningful, individual fields
- [ ] Every extracted field is backed by official docs or live-verified sample payloads

### Outputs
- [ ] All output fields match what the API actually returns
@@ -267,6 +282,7 @@ After fixing, confirm:
1. `bun run lint` passes with no fixes needed
2. TypeScript compiles clean (no type errors)
3. Re-read all modified files to verify fixes are correct
4. Any remaining unknown response schemas were explicitly reported to the user instead of guessed

## Checklist Summary
@@ -44,6 +44,20 @@ Fetch the service's official webhook documentation. This is the **source of trut
- Webhook subscription API (create/delete endpoints, if applicable)
- Retry behavior and delivery guarantees

### Hard Rule: No Guessed Webhook Payload Schemas

If the official docs do not clearly show the webhook payload JSON for an event, you MUST tell the user instead of guessing.

- Do NOT invent payload field names
- Do NOT infer nested payload paths without evidence
- Do NOT treat likely event shapes as verified
- Do NOT accept `formatInput` mappings that are not backed by docs or live payloads

If a payload schema is unknown, validation must explicitly recommend:
1. sample webhook payloads,
2. a live test webhook source, or
3. trimming the trigger to only documented outputs.

## Step 3: Validate Trigger Definitions

### utils.ts
@@ -93,6 +107,7 @@ Fetch the service's official webhook documentation. This is the **source of trut
- [ ] Nested output paths exist at the correct depth (e.g., `resource.id` actually has `resource: { id: ... }`)
- [ ] `null` is used for missing optional fields (not empty strings or empty objects)
- [ ] Returns `{ input: { ... } }` — not a bare object
- [ ] Every mapped payload field is backed by official docs or live-verified webhook payloads

### Idempotency
- [ ] `extractIdempotencyId` returns a stable, unique key per delivery
@@ -195,6 +210,7 @@ After fixing, confirm:
1. `bun run type-check` passes
2. Re-read all modified files to verify fixes are correct
3. Provider handler tests pass (if they exist): `bun test {service}`
4. Any remaining unknown webhook payload schemas were explicitly reported to the user instead of guessed

## Checklist Summary
51
.agents/skills/you-might-not-need-a-callback/SKILL.md
Normal file
@@ -0,0 +1,51 @@
---
name: you-might-not-need-a-callback
description: Analyze and fix useCallback anti-patterns in your code
---

# You Might Not Need a Callback

Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.

User arguments: $ARGUMENTS

## References

Read before analyzing:
1. https://react.dev/reference/react/useCallback — official docs on when useCallback is actually needed

## When useCallback IS needed

- Passing a callback to a child wrapped in `React.memo` (to preserve referential equality)
- The callback is a dependency of another hook (`useEffect`, `useMemo`)
- The callback is used in a custom hook that documents referential stability requirements

## Anti-patterns to detect

1. **useCallback on functions not passed as props or deps**: If the function is only called within the same component and isn't in any dependency array, useCallback adds overhead for no benefit. Just declare the function normally.
2. **useCallback with exhaustive deps that change every render**: If the dependency array includes values that change on every render, useCallback recalculates every time. The memoization is wasted. Either stabilize the deps (use refs) or remove the useCallback.
3. **useCallback on event handlers passed to native elements**: `<button onClick={handleClick}>` — native elements don't benefit from stable references. Only child components wrapped in React.memo do.
4. **useCallback wrapping a function that creates new objects/arrays**: If the callback returns `{ ...newObj }` or `[...newArr]`, memoizing the callback doesn't prevent the child from re-rendering due to new return values. The memoization is at the wrong level.
5. **useCallback with an empty dep array when deps are needed**: Stale closures — the callback captures outdated values. Either add proper deps or use refs for values that shouldn't trigger re-creation.
6. **Pairing useCallback with React.memo unnecessarily**: If the child component is cheap to render, neither useCallback nor React.memo adds value. Only optimize when you've measured a performance problem.
7. **useCallback in custom hooks that don't need stable references**: Not every hook return needs to be memoized. Only stabilize callbacks when consumers depend on referential equality.

## Codebase-specific notes

This codebase uses a ref pattern for stable callbacks in hooks:

```tsx
const idRef = useRef(id)
useEffect(() => { idRef.current = id }, [id])
const fetchData = useCallback(async () => {
  // use idRef.current instead of id
}, []) // empty deps because refs are used
```

This pattern is correct — don't flag it as an anti-pattern.

## Steps

1. Read the reference above
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.
33
.agents/skills/you-might-not-need-a-memo/SKILL.md
Normal file
@@ -0,0 +1,33 @@
---
name: you-might-not-need-a-memo
description: Analyze and fix useMemo/React.memo anti-patterns in your code
---

# You Might Not Need a Memo

Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.

User arguments: $ARGUMENTS

## References

Read before analyzing:
1. https://overreacted.io/before-you-memo/ — two techniques to avoid memo entirely

## Anti-patterns to detect

1. **Wrapping a slow component in React.memo when state can be moved down**: If a component re-renders because of state it doesn't use, move that state into a smaller child component instead of memoizing. The slow component stops re-rendering without memo.
2. **Wrapping in React.memo when children can be lifted up**: If a parent owns state that changes frequently, extract the stateful part and pass the expensive subtree as `children`. Children passed as props don't re-render when the parent's state changes.
3. **useMemo on cheap computations**: Filtering or mapping a small array, string concatenation, simple arithmetic — these don't need memoization. Only memoize when you've measured a performance problem.
4. **useMemo with constantly-changing deps**: If the dependency array changes on every render, useMemo does nothing — it recalculates every time. Fix the deps or remove the memo.
5. **useMemo to create objects/arrays passed as props**: Instead of memoizing to prevent child re-renders, consider whether the child even needs referential stability. If the child doesn't use React.memo or pass it to a dep array, the memo is wasted.
6. **React.memo on components that always receive new props**: If the parent always passes new objects, arrays, or callbacks, React.memo's shallow comparison always fails. Fix the parent instead of memoizing the child.
7. **useMemo for derived state**: If you're computing a value from props or state, just compute it inline during render. React renders are fast. `const fullName = first + ' ' + last` doesn't need useMemo.
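Anti-patterns 3 and 7 usually reduce to plain expressions; a quick sketch (all names and data are hypothetical):

```typescript
// Cheap derivations: compute inline during render instead of wrapping in useMemo.
const first = 'Ada'
const last = 'Lovelace'
const fullName = `${first} ${last}` // simple string concat, no useMemo needed

const items = [3, 1, 2]
// Small array: memoizing this sort would cost more than it saves.
const sorted = [...items].sort((a, b) => a - b)
```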

## Steps

1. Read the reference above to understand the two core techniques (move state down, lift content up)
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.
38
.agents/skills/you-might-not-need-state/SKILL.md
Normal file
@@ -0,0 +1,38 @@
---
name: you-might-not-need-state
description: Analyze and fix unnecessary useState, derived state, and server-state-in-local-state anti-patterns
---

# You Might Not Need State

Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.

User arguments: $ARGUMENTS

## Context

This codebase uses React Query for all server state and Zustand for client-only global state. useState should only be used for ephemeral UI concerns (open/closed, hover, local form input). Server data should never be copied into useState or Zustand — React Query is the single source of truth.

## References

Read these before analyzing:
1. https://react.dev/learn/choosing-the-state-structure — 5 principles for structuring state
2. https://tkdodo.eu/blog/dont-over-use-state — never store derived/computed values in state
3. https://tkdodo.eu/blog/putting-props-to-use-state — never mirror props into state via useEffect

## Anti-patterns to detect

1. **Derived state stored in useState**: If a value can be computed from props, other state, or query data, compute it inline during render instead of storing it in state.
2. **Server state copied into useState**: Never `useState` + `useEffect` to sync React Query data into local state. Use query data directly. The only exception is forms where users edit server data.
3. **Props mirrored into state**: Never `useState(prop)` + `useEffect(() => setState(prop))`. Use the prop directly, or use a key to reset component state.
4. **Chained useEffect state updates**: Never chain Effects that set state to trigger other Effects. Calculate all derived values in the event handler or inline during render.
5. **Storing objects when an ID suffices**: Store `selectedId` not a copy of the selected object. Derive the object: `items.find(i => i.id === selectedId)`.
6. **State that duplicates Zustand or React Query**: If the data already lives in a store or query cache, don't create a parallel useState.
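Anti-pattern 5 in practice: store the ID, derive the object during render (the data below is hypothetical):

```typescript
// Store only the ID in state; derive the selected object during render.
const items = [
  { id: 1, name: 'First' },
  { id: 2, name: 'Second' },
]
const selectedId = 2 // this is what useState would hold
const selected = items.find((i) => i.id === selectedId) // derived, never stored
```

Because `selected` is recomputed from `items` on every render, it can never drift out of sync the way a stored copy can.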

## Steps

1. Read the references above to understand the guidelines
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.
@@ -1,17 +1,17 @@
---
-description: Create webhook triggers for a Sim integration using the generic trigger builder
+description: Create webhook or polling triggers for a Sim integration
argument-hint: <service-name>
---

# Add Trigger

-You are an expert at creating webhook triggers for Sim. You understand the trigger system, the generic `buildTriggerSubBlocks` helper, and how triggers connect to blocks.
+You are an expert at creating webhook and polling triggers for Sim. You understand the trigger system, the generic `buildTriggerSubBlocks` helper, polling infrastructure, and how triggers connect to blocks.

## Your Task

-1. Research what webhook events the service supports
-2. Create the trigger files using the generic builder
-3. Create a provider handler if custom auth, formatting, or subscriptions are needed
+1. Research what webhook events the service supports — if the service lacks reliable webhooks, use polling
+2. Create the trigger files using the generic builder (webhook) or manual config (polling)
+3. Create a provider handler (webhook) or polling handler (polling)
4. Register triggers and connect them to the block

## Directory Structure
@@ -146,23 +146,37 @@ export const TRIGGER_REGISTRY: TriggerRegistry = {

### Block file (`apps/sim/blocks/blocks/{service}.ts`)

Wire triggers into the block so the trigger UI appears and `generate-docs.ts` discovers them. Two changes are needed:

1. **Spread trigger subBlocks** at the end of the block's `subBlocks` array
2. **Add `triggers` property** after `outputs` with `enabled: true` and `available: [...]`

```typescript
import { getTrigger } from '@/triggers'

export const {Service}Block: BlockConfig = {
  // ...
-  triggers: {
-    enabled: true,
-    available: ['{service}_event_a', '{service}_event_b'],
-  },
  subBlocks: [
    // Regular tool subBlocks first...
    ...getTrigger('{service}_event_a').subBlocks,
    ...getTrigger('{service}_event_b').subBlocks,
  ],
  // ... tools, inputs, outputs ...
+  triggers: {
+    enabled: true,
+    available: ['{service}_event_a', '{service}_event_b'],
+  },
}
```

**Versioned blocks (V1 + V2):** Many integrations have a hidden V1 block and a visible V2 block. Where you add the trigger wiring depends on how V2 inherits from V1:

- **V2 uses `...V1Block` spread** (e.g., Google Calendar): Add trigger to V1 — V2 inherits both `subBlocks` and `triggers` automatically.
- **V2 defines its own `subBlocks`** (e.g., Google Sheets): Add trigger to V2 (the visible block). V1 is hidden and doesn't need it.
- **Single block, no V2** (e.g., Google Drive): Add trigger directly.

`generate-docs.ts` deduplicates by base type (first match wins). If V1 is processed first without triggers, the V2 triggers won't appear in `integrations.json`. Always verify by checking the output after running the script.

## Provider Handler

All provider-specific webhook logic lives in a single handler file: `apps/sim/lib/webhooks/providers/{service}.ts`.
@@ -327,6 +341,121 @@ export function buildOutputs(): Record<string, TriggerOutput> {
}
```

## Polling Triggers

Use polling when the service lacks reliable webhooks (e.g., Google Sheets, Google Drive, Google Calendar, Gmail, RSS, IMAP). Polling triggers do NOT use `buildTriggerSubBlocks` — they define subBlocks manually.

### Directory Structure

```
apps/sim/triggers/{service}/
├── index.ts     # Barrel export
└── poller.ts    # TriggerConfig with polling: true

apps/sim/lib/webhooks/polling/
└── {service}.ts # PollingProviderHandler implementation
```
### Polling Handler (`apps/sim/lib/webhooks/polling/{service}.ts`)

```typescript
import { pollingIdempotency } from '@/lib/core/idempotency/service'
import type { PollingProviderHandler, PollWebhookContext } from '@/lib/webhooks/polling/types'
import { markWebhookFailed, markWebhookSuccess, resolveOAuthCredential, updateWebhookProviderConfig } from '@/lib/webhooks/polling/utils'
import { processPolledWebhookEvent } from '@/lib/webhooks/processor'

export const {service}PollingHandler: PollingProviderHandler = {
  provider: '{service}',
  label: '{Service}',

  async pollWebhook(ctx: PollWebhookContext): Promise<'success' | 'failure'> {
    const { webhookData, workflowData, requestId, logger } = ctx
    const webhookId = webhookData.id

    try {
      // For OAuth services:
      const accessToken = await resolveOAuthCredential(webhookData, '{service}', requestId, logger)
      const config = webhookData.providerConfig as unknown as {Service}WebhookConfig

      // First poll: seed state, emit nothing
      if (!config.lastCheckedTimestamp) {
        await updateWebhookProviderConfig(webhookId, { lastCheckedTimestamp: new Date().toISOString() }, logger)
        await markWebhookSuccess(webhookId, logger)
        return 'success'
      }

      // Fetch changes since last poll, process with idempotency
      // ...

      await markWebhookSuccess(webhookId, logger)
      return 'success'
    } catch (error) {
      logger.error(`[${requestId}] Error processing {service} webhook ${webhookId}:`, error)
      await markWebhookFailed(webhookId, logger)
      return 'failure'
    }
  },
}
```

**Key patterns:**
- First poll seeds state and emits nothing (avoids flooding with existing data)
- Use `pollingIdempotency.executeWithIdempotency(provider, key, callback)` for dedup
- Use `processPolledWebhookEvent(webhookData, workflowData, payload, requestId)` to fire the workflow
- Use `updateWebhookProviderConfig(webhookId, partialConfig, logger)` for read-merge-write on state
- Use the latest server-side timestamp from API responses (not wall clock) to avoid clock skew
### Trigger Config (`apps/sim/triggers/{service}/poller.ts`)

```typescript
import { {Service}Icon } from '@/components/icons'
import type { TriggerConfig } from '@/triggers/types'

export const {service}PollingTrigger: TriggerConfig = {
  id: '{service}_poller',
  name: '{Service} Trigger',
  provider: '{service}',
  description: 'Triggers when ...',
  version: '1.0.0',
  icon: {Service}Icon,
  polling: true, // REQUIRED — routes to polling infrastructure

  subBlocks: [
    { id: 'triggerCredentials', type: 'oauth-input', title: 'Credentials', serviceId: '{service}', requiredScopes: [], required: true, mode: 'trigger', supportsCredentialSets: true },
    // ... service-specific config fields (dropdowns, inputs, switches) ...
    { id: 'triggerInstructions', type: 'text', title: 'Setup Instructions', hideFromPreview: true, mode: 'trigger', defaultValue: '...' },
  ],

  outputs: {
    // Must match the payload shape from processPolledWebhookEvent
  },
}
```
### Registration (3 places)

1. **`apps/sim/triggers/constants.ts`** — add provider to `POLLING_PROVIDERS` Set
2. **`apps/sim/lib/webhooks/polling/registry.ts`** — import handler, add to `POLLING_HANDLERS`
3. **`apps/sim/triggers/registry.ts`** — import trigger config, add to `TRIGGER_REGISTRY`

### Helm Cron Job

Add to `helm/sim/values.yaml` under the existing polling cron jobs:

```yaml
{service}WebhookPoll:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Forbid
  url: "http://sim:3000/api/webhooks/poll/{service}"
```

### Reference Implementations

- Simple: `apps/sim/lib/webhooks/polling/rss.ts` + `apps/sim/triggers/rss/poller.ts`
- Complex (OAuth, attachments): `apps/sim/lib/webhooks/polling/gmail.ts` + `apps/sim/triggers/gmail/poller.ts`
- Cursor-based (changes API): `apps/sim/lib/webhooks/polling/google-drive.ts`
- Timestamp-based: `apps/sim/lib/webhooks/polling/google-calendar.ts`

## Checklist

### Trigger Definition
@@ -352,7 +481,17 @@ export function buildOutputs(): Record<string, TriggerOutput> {
- [ ] NO changes to `route.ts`, `provider-subscriptions.ts`, or `deploy.ts`
- [ ] API key field uses `password: true`

### Polling Trigger (if applicable)
- [ ] Handler implements `PollingProviderHandler` at `lib/webhooks/polling/{service}.ts`
- [ ] Trigger config has `polling: true` and defines subBlocks manually (no `buildTriggerSubBlocks`)
- [ ] Provider string matches across: trigger config, handler, `POLLING_PROVIDERS`, polling registry
- [ ] First poll seeds state and emits nothing
- [ ] Added provider to `POLLING_PROVIDERS` in `triggers/constants.ts`
- [ ] Added handler to `POLLING_HANDLERS` in `lib/webhooks/polling/registry.ts`
- [ ] Added cron job to `helm/sim/values.yaml`
- [ ] Payload shape matches trigger `outputs` schema

### Testing
- [ ] `bun run type-check` passes
- [ ] Manually verify `formatInput` output keys match trigger `outputs` keys
- [ ] Manually verify output keys match trigger `outputs` keys
- [ ] Trigger UI shows correctly in the block
25
.claude/commands/cleanup.md
Normal file
@@ -0,0 +1,25 @@
---
description: Run all code quality skills in sequence — effects, memo, callbacks, state, React Query, and emcn design review
argument-hint: [scope] [fix=true|false]
---

# Cleanup

Arguments:
- scope: what to review (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.

User arguments: $ARGUMENTS

## Steps

Run each of these skills in order on the specified scope, passing through the scope and fix arguments. After each skill completes, move to the next. Do not skip any.

1. `/you-might-not-need-an-effect $ARGUMENTS`
2. `/you-might-not-need-a-memo $ARGUMENTS`
3. `/you-might-not-need-a-callback $ARGUMENTS`
4. `/you-might-not-need-state $ARGUMENTS`
5. `/react-query-best-practices $ARGUMENTS`
6. `/emcn-design-review $ARGUMENTS`

After all skills have run, output a summary of what was found and fixed (or proposed) across all six passes.
79
.claude/commands/emcn-design-review.md
Normal file
@@ -0,0 +1,79 @@
---
description: Review UI code for alignment with the emcn design system — components, tokens, patterns, and conventions
argument-hint: [scope] [fix=true|false]
---

# EMCN Design Review

Arguments:
- scope: what to review (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.

User arguments: $ARGUMENTS

## Context

This codebase uses **emcn**, a custom component library built on Radix UI primitives with CVA variants and CSS variable design tokens. All UI must use emcn components and tokens.

## Steps

1. Read the emcn barrel export at `apps/sim/components/emcn/components/index.ts` to know what's available
2. Read `apps/sim/app/_styles/globals.css` for CSS variable tokens
3. Analyze the specified scope against every rule below
4. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.

---

## Imports

- Import from `@/components/emcn` barrel, never subpaths
- Icons from `@/components/emcn/icons` or `lucide-react`
- Use `cn` from `@/lib/core/utils/cn` for conditional classes

## Design Tokens

Use the CSS variable pattern (`text-[var(--text-primary)]`), never Tailwind semantics (`text-muted-foreground`) or hardcoded colors (`text-gray-500`, `#333`).

**Text**: `--text-primary`, `--text-secondary`, `--text-tertiary`, `--text-muted`, `--text-icon`, `--text-inverse`, `--text-error`
**Surfaces**: `--bg`, `--surface-2` through `--surface-7`, `--surface-hover`, `--surface-active`
**Borders**: `--border`, `--border-1`, `--border-muted`
**Z-Index**: `--z-dropdown` (100), `--z-modal` (200), `--z-popover` (300), `--z-tooltip` (400), `--z-toast` (500)
**Shadows**: `shadow-subtle`, `shadow-medium`, `shadow-overlay`, `shadow-card`

## Buttons

| Action | Variant |
|--------|---------|
| Toolbar, icon-only | `ghost` (most common, 28%) |
| Create, save, submit | `primary` (24%) |
| Cancel, close | `default` |
| Delete, remove | `destructive` |
| Selected state | `active` |
| Toggle | `outline` |

## Delete/Remove Confirmations

Modal `size="sm"`, title "Delete/Remove {ItemType}", `variant="destructive"` action button, `variant="default"` cancel. Cancel left, action right (100% compliance). Use `text-[var(--text-error)]` for irreversible warnings.

## Toast

`toast.success()`, `toast.error()`, `toast()` from `@/components/emcn`. Never custom notification UI.

## Badges

`red`=error/failed, `gray-secondary`=metadata/roles, `type`=type annotations, `green`=success/active, `gray`=neutral, `amber`=processing, `orange`=paused, `blue`=info. Use the `dot` prop for status indicators.

## Icons

Default: `h-[14px] w-[14px]` (400+ uses). Color: `text-[var(--text-icon)]`. Scale: 14px > 16px > 12px > 20px.

## Anti-patterns to flag

- Raw `<button>`/`<input>` instead of emcn components
- Hardcoded colors (`text-gray-*`, `#hex`, `rgb()`)
- Tailwind semantics (`text-muted-foreground`) instead of CSS variables
- Template literal className instead of `cn()`
- Inline styles for colors/static values (dynamic values OK)
- Importing from emcn subpaths instead of barrel
- Arbitrary z-index instead of tokens
- Wrong button variant for action type
54
.claude/commands/react-query-best-practices.md
Normal file
@@ -0,0 +1,54 @@
---
description: Audit React Query usage for best practices — key factories, staleTime, mutations, and server state ownership
argument-hint: [scope] [fix=true|false]
---

# React Query Best Practices

Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/hooks/queries/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.

User arguments: $ARGUMENTS

## Context

This codebase uses React Query (TanStack Query) as the single source of truth for all server state. All query hooks live in `hooks/queries/`. Zustand is used only for client-only UI state. Server data must never be duplicated into useState or Zustand outside of mutation callbacks that coordinate cross-store state.

## References

Read these before analyzing:
1. https://tkdodo.eu/blog/practical-react-query — foundational defaults, custom hooks, avoiding local state copies
2. https://tkdodo.eu/blog/effective-react-query-keys — key factory pattern, hierarchical keys, fuzzy invalidation
3. https://tkdodo.eu/blog/react-query-as-a-state-manager — React Query IS your server state manager

## Rules to enforce

### Query key factories
- Every file in `hooks/queries/` must have a hierarchical key factory with an `all` root key
- Keys must include intermediate plural keys (`lists`, `details`) for prefix invalidation
- Key factories are colocated with their query hooks, not in a global keys file
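A minimal factory satisfying these rules might look like the following sketch (the `itemKeys` name and the `workspaceId` filter are illustrative, not taken from this codebase):

```typescript
// Hypothetical key factory for an "items" entity — hierarchical, with an
// `all` root and plural intermediate keys so prefix invalidation works.
export const itemKeys = {
  all: ['items'] as const,
  lists: () => [...itemKeys.all, 'list'] as const,
  list: (workspaceId: string) => [...itemKeys.lists(), { workspaceId }] as const,
  details: () => [...itemKeys.all, 'detail'] as const,
  detail: (id: string) => [...itemKeys.details(), id] as const,
}
```

Invalidating with `itemKeys.lists()` then fuzzily matches every `itemKeys.list(...)` key, while `itemKeys.all` sweeps the whole entity.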

### Query hooks
- Every `queryFn` must forward `signal` for request cancellation
- Every query must have an explicit `staleTime` (default 0 is almost never correct)
- `keepPreviousData` / `placeholderData` only on variable-key queries (where params change), never on static keys
- Use `enabled` to prevent queries from running without required params
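As a sketch, the options object such a hook would hand to `useQuery` (the `/api/items` route and `workspaceId` param are hypothetical):

```typescript
// Hypothetical query options: signal forwarded, explicit staleTime,
// and `enabled` gating on a required param.
const workspaceId: string | undefined = 'ws_1'

const itemsListOptions = {
  queryKey: ['items', 'list', { workspaceId }] as const,
  // React Query injects `signal`; forwarding it to fetch aborts the
  // request when the query is cancelled or the component unmounts.
  queryFn: ({ signal }: { signal?: AbortSignal }) =>
    fetch(`/api/items?workspaceId=${workspaceId}`, { signal }).then((r) => r.json()),
  staleTime: 30_000, // explicit — never rely on the default of 0
  enabled: Boolean(workspaceId), // don't run without the required param
}
```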

### Mutations
- Use `onSettled` (not `onSuccess`) for cache reconciliation — it fires on both success and error
- For optimistic updates: save previous data in `onMutate`, roll back in `onError`
- Use targeted invalidation (`entityKeys.lists()`) not broad (`entityKeys.all`) when possible
- Don't include mutation objects in `useCallback` deps — `.mutate()` is stable
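The optimistic-update lifecycle these rules describe, sketched against a plain `Map` standing in for the query cache (the real `QueryClient` API differs — this only shows the snapshot/rollback/settle shape, and the `entity:list` key is made up):

```typescript
// Hypothetical cache stand-in for illustration.
const cache = new Map<string, unknown>()
cache.set('entity:list', [{ id: '1', name: 'old' }])

// onMutate: snapshot previous data, write optimistically, return context.
function onMutate(next: { id: string; name: string }) {
  const previous = cache.get('entity:list')
  cache.set('entity:list', [next])
  return { previous }
}

// onError: roll back to the snapshot carried in onMutate's context.
function onError(_err: unknown, _vars: unknown, ctx?: { previous: unknown }) {
  if (ctx) cache.set('entity:list', ctx.previous)
}

// onSettled: reconcile regardless of outcome — this is where you would
// invalidate the targeted list key, since it fires on success AND error.
function onSettled() {}
```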

### Server state ownership
- Never copy query data into useState. Use query data directly in components.
- Never copy query data into Zustand stores (exception: mutation callbacks that coordinate cross-store state like temp ID replacement)
- The query cache is not a local state manager — `setQueryData` is for optimistic updates only
- Forms are the one deliberate exception: copy server data into local form state with `staleTime: Infinity`

## Steps

1. Read the references above to understand the guidelines
2. Analyze the specified scope against the rules listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.

52
.claude/commands/you-might-not-need-a-callback.md
Normal file
@@ -0,0 +1,52 @@
---
description: Analyze and fix useCallback anti-patterns in your code
argument-hint: [scope] [fix=true|false]
---

# You Might Not Need a Callback

Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.

User arguments: $ARGUMENTS

## References

Read before analyzing:
1. https://react.dev/reference/react/useCallback — official docs on when useCallback is actually needed

## The one rule that matters

`useCallback` is only useful when **something observes the reference**. Ask: does anything care if this function gets a new identity on re-render?

Observers that care about reference stability:
- A `useEffect` that lists the function in its deps array
- A `useMemo` that lists the function in its deps array
- Another `useCallback` that lists the function in its deps array
- A child component wrapped in `React.memo` that receives the function as a prop

If none of those apply — if the function is only called inline, or passed to a non-memoized child, or assigned to a native element event — the reference is unobserved and `useCallback` adds overhead with zero benefit.
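The reference-identity mechanics can be seen without React at all — each "render" recreates the component's inline functions, and `Object.is` (the comparison `React.memo` and dep arrays use) treats each one as new:

```typescript
// Simulate two renders of a component that defines a handler inline.
// The `render` function and its `count` argument are illustrative only.
function render(count: number) {
  const handler = () => count // a new function object on every call
  return handler
}

const first = render(1)
const second = render(1)

// Same behavior, different identity — a memoized child or an effect dep
// array would see a "changed" value and re-run, though nothing changed.
const sameReference = Object.is(first, second) // false
```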

## Anti-patterns to detect

1. **No observer tracks the reference**: The function is only called inline in the same component, or passed to a non-memoized child, or used as a native element handler (`<button onClick={fn}>`). Nothing re-runs or bails out based on reference identity. Remove `useCallback`.
2. **useCallback with deps that change every render**: If a dep is a plain object/array created inline, or state that changes on every interaction, memoization buys nothing — the function gets a new identity anyway.
3. **useCallback on handlers passed only to native elements**: `<button onClick={fn}>` — React never does reference equality on native element props. No benefit.
4. **useCallback wrapping functions that return new objects/arrays**: Stable function identity, unstable return value — memoization is at the wrong level. Use `useMemo` on the return value instead, or restructure.
5. **useCallback with empty deps when deps are needed**: Stale closure — reads initial values forever. This is a correctness bug, not just a performance issue.
6. **Pairing useCallback + React.memo on trivially cheap renders**: If the child renders in < 1ms and re-renders rarely, the memo infrastructure costs more than it saves.
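Anti-pattern 5 is the one that breaks correctness. Its closure mechanics, shown without React (the factory call stands in for `useCallback(() => count, [])` capturing the first render's value):

```typescript
// A handler memoized with empty deps is created once, on the first
// "render", and closes over the values from that render forever.
let count = 0
const makeHandler = (captured: number) => () => captured

const staleHandler = makeHandler(count) // first render: captures 0
count = 5                               // later state update

// The memoized handler still reads the value from creation time.
const seen = staleHandler() // 0, not 5 — a correctness bug
```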

## Patterns that ARE correct — do not flag

- `useCallback` whose result is in a `useEffect` dep array — prevents the effect from re-running on every render
- `useCallback` whose result is in a `useMemo` dep array — prevents the memo from recomputing on every render
- `useCallback` whose result is a dep of another `useCallback` — stabilizes a callback chain
- `useCallback` passed to a `React.memo`-wrapped child — the whole point of the pattern
- This codebase's ref pattern: `useRef` + callback with empty deps that reads the ref inside — correct, do not flag

## Steps

1. Read the reference above
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.

33
.claude/commands/you-might-not-need-a-memo.md
Normal file
@@ -0,0 +1,33 @@
---
description: Analyze and fix useMemo/React.memo anti-patterns in your code
argument-hint: [scope] [fix=true|false]
---

# You Might Not Need a Memo

Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.

User arguments: $ARGUMENTS

## References

Read before analyzing:
1. https://overreacted.io/before-you-memo/ — two techniques to avoid memo entirely

## Anti-patterns to detect

1. **State can be moved down instead of memoizing**: Move state into a smaller child so the slow component stops re-rendering without memo.
2. **Children can be lifted up**: Extract stateful part, pass expensive subtree as `children` — children as props don't re-render when parent state changes.
3. **useMemo on cheap computations**: Small array filters, string concat, arithmetic don't need memoization.
4. **useMemo with constantly-changing deps**: Deps change every render = useMemo does nothing.
5. **useMemo to stabilize props for non-memoized children**: If the child isn't wrapped in React.memo, stable references don't matter.
6. **React.memo on components that always receive new props**: Fix the parent instead.
7. **useMemo for derived state**: Just compute inline during render.
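For anti-patterns 3 and 7 the fix is usually just an inline expression. A sketch with hypothetical data:

```typescript
// Cheap derivation — no useMemo, no extra state; compute during render.
const items = [
  { id: '1', name: 'alpha', active: true },
  { id: '2', name: 'beta', active: false },
]
const query = 'al'

// Recomputing this each render costs microseconds; memoizing it costs a
// dep comparison plus cache storage and buys nothing.
const visible = items.filter((i) => i.active && i.name.includes(query))
```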

## Steps

1. Read the reference above
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.

38
.claude/commands/you-might-not-need-state.md
Normal file
@@ -0,0 +1,38 @@
---
description: Analyze and fix unnecessary useState, derived state, and server-state-in-local-state anti-patterns
argument-hint: [scope] [fix=true|false]
---

# You Might Not Need State

Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.

User arguments: $ARGUMENTS

## Context

This codebase uses React Query for all server state and Zustand for client-only global state. useState should only be used for ephemeral UI concerns (open/closed, hover, local form input). Server data should never be copied into useState or Zustand — React Query is the single source of truth.

## References

Read these before analyzing:
1. https://react.dev/learn/choosing-the-state-structure — 5 principles for structuring state
2. https://tkdodo.eu/blog/dont-over-use-state — never store derived/computed values in state
3. https://tkdodo.eu/blog/putting-props-to-use-state — never mirror props into state via useEffect

## Anti-patterns to detect

1. **Derived state stored in useState**: If a value can be computed from props, other state, or query data, compute it inline during render instead of storing it in state.
2. **Server state copied into useState**: Never `useState` + `useEffect` to sync React Query data into local state. Use query data directly. The only exception is forms where users edit server data.
3. **Props mirrored into state**: Never `useState(prop)` + `useEffect(() => setState(prop))`. Use the prop directly, or use a key to reset component state.
4. **Chained useEffect state updates**: Never chain Effects that set state to trigger other Effects. Calculate all derived values in the event handler or inline during render.
5. **Storing objects when an ID suffices**: Store `selectedId` not a copy of the selected object. Derive the object: `items.find(i => i.id === selectedId)`.
6. **State that duplicates Zustand or React Query**: If the data already lives in a store or query cache, don't create a parallel useState.
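Anti-pattern 5 as a sketch (the `items` data is hypothetical): keep only the ID in state, derive the object during render.

```typescript
const items = [
  { id: 'a', name: 'First' },
  { id: 'b', name: 'Second' },
]

// ✗ Bad: useState holding the selected object — goes stale when items update.
// ✓ Good: useState holds only the ID; the object is derived each render.
const selectedId = 'b'
const selected = items.find((i) => i.id === selectedId)
```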

## Steps

1. Read the references above to understand the guidelines
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.

71
.claude/rules/constitution.md
Normal file
@@ -0,0 +1,71 @@
# Sim — Language & Positioning

When editing user-facing copy (landing pages, docs, metadata, marketing), follow these rules.

## Identity

Sim is the **AI workspace** where teams build and run AI agents. Not a workflow tool, not an agent framework, not an automation platform.

**Short definition:** Sim is the open-source AI workspace where teams build, deploy, and manage AI agents.

**Full definition:** Sim is the open-source AI workspace where teams build, deploy, and manage AI agents. Connect 1,000+ integrations and every major LLM to create agents that automate real work — visually, conversationally, or with code.

## Audience

**Primary:** Teams building AI agents for their organization — IT, operations, and technical teams who need governance, security, lifecycle management, and collaboration.

**Secondary:** Individual builders and developers who care about speed, flexibility, and open source.

## Required Language

| Concept | Use | Never use |
|---------|-----|-----------|
| The product | "AI workspace" | "workflow tool", "automation platform", "agent framework" |
| Building | "build agents", "create agents" | "create workflows" (unless describing the workflow module specifically) |
| Visual builder | "workflow builder" or "visual builder" | "canvas", "graph editor" |
| Mothership | "Mothership" (capitalized) | "chat", "AI assistant", "copilot" |
| Deployment | "deploy", "ship" | "publish", "activate" |
| Audience | "teams", "builders" | "users", "customers" (in marketing copy) |
| What agents do | "automate real work" | "automate tasks", "automate workflows" |
| Our advantage | "open-source AI workspace" | "open-source platform" |

## Tone

- **Direct.** Short sentences. Active voice. Lead with what it does.
- **Concrete.** Name specific things — "Slack bots, compliance agents, data pipelines" — not abstractions.
- **Confident, not loud.** No exclamation marks or superlatives.
- **Simple.** If a 16-year-old can't understand the sentence, rewrite it.

## Claim Hierarchy

When describing Sim, always lead with the most differentiated claim:

1. **What it is:** "The AI workspace for teams"
2. **What you do:** "Build, deploy, and manage AI agents"
3. **How:** "Visually, conversationally, or with code"
4. **Scale:** "1,000+ integrations, every major LLM"
5. **Trust:** "Open source. SOC2. Trusted by 100,000+ builders."

## Module Descriptions

| Module | One-liner |
|--------|-----------|
| **Mothership** | Your AI command center. Build and manage everything in natural language. |
| **Workflows** | The visual builder. Connect blocks, models, and integrations into agent logic. |
| **Knowledge Base** | Your agents' memory. Upload docs, sync sources, build vector databases. |
| **Tables** | A database, built in. Store, query, and wire structured data into agent runs. |
| **Files** | Upload, create, and share. One store for your team and every agent. |
| **Logs** | Full visibility, every run. Trace execution block by block. |

## What We Never Say

- Never call Sim "just a workflow tool"
- Never compare only on integration count — we win on AI-native capabilities
- Never use "no-code" as the primary descriptor — say "visually, conversationally, or with code"
- Never promise unshipped features
- Never use jargon ("RAG", "vector database", "MCP") without plain-English explanation on public pages
- Avoid "agentic workforce" as a primary term — use "AI agents"

## Vision

Sim becomes the default environment where teams build AI agents — not a tool you visit for one task, but a workspace you live in. Workflows are one module; Mothership is another. The workspace is the constant; the interface adapts.

@@ -1,7 +1,10 @@
# Global Standards

## Logging
Import `createLogger` from `sim/logger`. Use `logger.info`, `logger.warn`, `logger.error` instead of `console.log`.
Import `createLogger` from `@sim/logger`. Use `logger.info`, `logger.warn`, `logger.error` instead of `console.log`. Inside API routes wrapped with `withRouteHandler`, loggers automatically include the request ID.

## API Route Handlers
All API route handlers must be wrapped with `withRouteHandler` from `@/lib/core/utils/with-route-handler`. Never export a bare `async function GET/POST/...` — always use `export const METHOD = withRouteHandler(...)`.

## Comments
Use TSDoc for documentation. No `====` separators. No non-TSDoc comments.
@@ -10,7 +13,7 @@ Use TSDoc for documentation. No `====` separators. No non-TSDoc comments.
Never update global styles. Keep all styling local to components.

## ID Generation
Never use `crypto.randomUUID()`, `nanoid`, or the `uuid` package directly. Use the utilities from `@/lib/core/utils/uuid`:
Never use `crypto.randomUUID()`, `nanoid`, or the `uuid` package directly. Use the utilities from `@sim/utils/id`:

- `generateId()` — UUID v4, use by default
- `generateShortId(size?)` — short URL-safe ID (default 21 chars), for compact identifiers
@@ -24,11 +27,32 @@ import { v4 as uuidv4 } from 'uuid'
const id = crypto.randomUUID()

// ✓ Good
import { generateId, generateShortId } from '@/lib/core/utils/uuid'
import { generateId, generateShortId } from '@sim/utils/id'
const uuid = generateId()
const shortId = generateShortId()
const tiny = generateShortId(8)
```

## Common Utilities
Use shared helpers from `@sim/utils` instead of writing inline implementations:

- `sleep(ms)` — async delay. Never write `new Promise(resolve => setTimeout(resolve, ms))`
- `toError(value)` — normalize unknown caught values to `Error`. Never write `e instanceof Error ? e : new Error(String(e))`
- `toError(value).message` — get error message safely. Never write `e instanceof Error ? e.message : String(e)`

```typescript
// ✗ Bad
await new Promise(resolve => setTimeout(resolve, 1000))
const msg = error instanceof Error ? error.message : String(error)
const err = error instanceof Error ? error : new Error(String(error))

// ✓ Good
import { sleep } from '@sim/utils/helpers'
import { toError } from '@sim/utils/errors'
await sleep(1000)
const msg = toError(error).message
const err = toError(error)
```

## Package Manager
Use `bun` and `bunx`, not `npm` and `npx`.

@@ -13,8 +13,12 @@ Use Vitest. Test files: `feature.ts` → `feature.test.ts`
These modules are mocked globally — do NOT re-mock them in test files unless you need to override behavior:

- `@sim/db` → `databaseMock`
- `@sim/db/schema` → `schemaMock`
- `drizzle-orm` → `drizzleOrmMock`
- `@sim/logger` → `loggerMock`
- `@/lib/auth` → `authMock`
- `@/lib/auth/hybrid` → `hybridAuthMock` (with default session-delegating behavior)
- `@/lib/core/utils/request` → `requestUtilsMock`
- `@/stores/console/store`, `@/stores/terminal`, `@/stores/execution/store`
- `@/blocks/registry`
- `@trigger.dev/sdk`
@@ -102,10 +106,6 @@ vi.mock('@/lib/workspaces/utils', () => ({
}))
```

### NEVER use `mockAuth()`, `mockConsoleLogger()`, or `setupCommonApiMocks()` from `@sim/testing`

These helpers internally use `vi.doMock()` which is slow. Use direct `vi.hoisted()` + `vi.mock()` instead.

### Mock heavy transitive dependencies

If a module under test imports `@/blocks` (200+ files), `@/tools/registry`, or other heavy modules, mock them:
@@ -135,83 +135,129 @@ await new Promise(r => setTimeout(r, 1))
vi.useFakeTimers()
```

## Mock Pattern Reference
## Centralized Mocks (prefer over local declarations)

`@sim/testing` exports ready-to-use mock modules for common dependencies. Import and pass directly to `vi.mock()` — no `vi.hoisted()` boilerplate needed. Each paired `*MockFns` object exposes the underlying `vi.fn()`s for per-test overrides.

| Module mocked | Import | Factory form |
|---|---|---|
| `@/app/api/auth/oauth/utils` | `authOAuthUtilsMock`, `authOAuthUtilsMockFns` | `vi.mock('@/app/api/auth/oauth/utils', () => authOAuthUtilsMock)` |
| `@/app/api/knowledge/utils` | `knowledgeApiUtilsMock`, `knowledgeApiUtilsMockFns` | `vi.mock('@/app/api/knowledge/utils', () => knowledgeApiUtilsMock)` |
| `@/app/api/workflows/utils` | `workflowsApiUtilsMock`, `workflowsApiUtilsMockFns` | `vi.mock('@/app/api/workflows/utils', () => workflowsApiUtilsMock)` |
| `@sim/audit` | `auditMock`, `auditMockFns` | `vi.mock('@sim/audit', () => auditMock)` |
| `@/lib/auth` | `authMock`, `authMockFns` | `vi.mock('@/lib/auth', () => authMock)` |
| `@/lib/auth/hybrid` | `hybridAuthMock`, `hybridAuthMockFns` | `vi.mock('@/lib/auth/hybrid', () => hybridAuthMock)` |
| `@/lib/copilot/request/http` | `copilotHttpMock`, `copilotHttpMockFns` | `vi.mock('@/lib/copilot/request/http', () => copilotHttpMock)` |
| `@/lib/core/config/env` | `envMock`, `createEnvMock(overrides)` | `vi.mock('@/lib/core/config/env', () => envMock)` |
| `@/lib/core/config/feature-flags` | `featureFlagsMock` | `vi.mock('@/lib/core/config/feature-flags', () => featureFlagsMock)` |
| `@/lib/core/config/redis` | `redisConfigMock`, `redisConfigMockFns` | `vi.mock('@/lib/core/config/redis', () => redisConfigMock)` |
| `@/lib/core/security/encryption` | `encryptionMock`, `encryptionMockFns` | `vi.mock('@/lib/core/security/encryption', () => encryptionMock)` |
| `@/lib/core/security/input-validation.server` | `inputValidationMock`, `inputValidationMockFns` | `vi.mock('@/lib/core/security/input-validation.server', () => inputValidationMock)` |
| `@/lib/core/utils/request` | `requestUtilsMock`, `requestUtilsMockFns` | `vi.mock('@/lib/core/utils/request', () => requestUtilsMock)` |
| `@/lib/core/utils/urls` | `urlsMock`, `urlsMockFns` | `vi.mock('@/lib/core/utils/urls', () => urlsMock)` |
| `@/lib/execution/preprocessing` | `executionPreprocessingMock`, `executionPreprocessingMockFns` | `vi.mock('@/lib/execution/preprocessing', () => executionPreprocessingMock)` |
| `@/lib/logs/execution/logging-session` | `loggingSessionMock`, `loggingSessionMockFns`, `LoggingSessionMock` | `vi.mock('@/lib/logs/execution/logging-session', () => loggingSessionMock)` |
| `@/lib/workflows/orchestration` | `workflowsOrchestrationMock`, `workflowsOrchestrationMockFns` | `vi.mock('@/lib/workflows/orchestration', () => workflowsOrchestrationMock)` |
| `@/lib/workflows/persistence/utils` | `workflowsPersistenceUtilsMock`, `workflowsPersistenceUtilsMockFns` | `vi.mock('@/lib/workflows/persistence/utils', () => workflowsPersistenceUtilsMock)` |
| `@/lib/workflows/utils` | `workflowsUtilsMock`, `workflowsUtilsMockFns` | `vi.mock('@/lib/workflows/utils', () => workflowsUtilsMock)` |
| `@/lib/workspaces/permissions/utils` | `permissionsMock`, `permissionsMockFns` | `vi.mock('@/lib/workspaces/permissions/utils', () => permissionsMock)` |
| `@sim/db/schema` | `schemaMock` | `vi.mock('@sim/db/schema', () => schemaMock)` |

### Auth mocking (API routes)

```typescript
const { mockGetSession } = vi.hoisted(() => ({
  mockGetSession: vi.fn(),
}))
import { authMock, authMockFns } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'

vi.mock('@/lib/auth', () => ({
  auth: { api: { getSession: vi.fn() } },
  getSession: mockGetSession,
}))
vi.mock('@/lib/auth', () => authMock)

// In tests:
mockGetSession.mockResolvedValue({ user: { id: 'user-1', email: 'test@example.com' } })
mockGetSession.mockResolvedValue(null) // unauthenticated
import { GET } from '@/app/api/my-route/route'

beforeEach(() => {
  vi.clearAllMocks()
  authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-1' } })
})
```

Only define a local `vi.mock('@/lib/auth', ...)` if the module under test consumes exports outside the centralized shape (e.g., `auth.api.verifyOneTimeToken`, `auth.api.resetPassword`).

### Hybrid auth mocking

```typescript
const { mockCheckSessionOrInternalAuth } = vi.hoisted(() => ({
  mockCheckSessionOrInternalAuth: vi.fn(),
}))
import { hybridAuthMock, hybridAuthMockFns } from '@sim/testing'

vi.mock('@/lib/auth/hybrid', () => ({
  checkSessionOrInternalAuth: mockCheckSessionOrInternalAuth,
}))
vi.mock('@/lib/auth/hybrid', () => hybridAuthMock)

// In tests:
mockCheckSessionOrInternalAuth.mockResolvedValue({
hybridAuthMockFns.mockCheckSessionOrInternalAuth.mockResolvedValue({
  success: true, userId: 'user-1', authType: 'session',
})
```

### Database chain mocking

```typescript
const { mockSelect, mockFrom, mockWhere } = vi.hoisted(() => ({
  mockSelect: vi.fn(),
  mockFrom: vi.fn(),
  mockWhere: vi.fn(),
}))
Use the centralized `dbChainMock` + `dbChainMockFns` helpers — no `vi.hoisted()` or chain-wiring boilerplate needed.

vi.mock('@sim/db', () => ({
  db: { select: mockSelect },
}))
```typescript
import { dbChainMock, dbChainMockFns, resetDbChainMock } from '@sim/testing'

vi.mock('@sim/db', () => dbChainMock)
// Spread for custom exports: vi.mock('@sim/db', () => ({ ...dbChainMock, myTable: {...} }))

beforeEach(() => {
  mockSelect.mockReturnValue({ from: mockFrom })
  mockFrom.mockReturnValue({ where: mockWhere })
  mockWhere.mockResolvedValue([{ id: '1', name: 'test' }])
  vi.clearAllMocks()
  resetDbChainMock() // only needed if tests use permanent (non-`Once`) overrides
})

it('reads a row', async () => {
  dbChainMockFns.limit.mockResolvedValueOnce([{ id: '1', name: 'test' }])
  // exercise code that hits db.select().from().where().limit()
  expect(dbChainMockFns.where).toHaveBeenCalled()
})
```

**Default chains supported:**
- `select()/selectDistinct()/selectDistinctOn() → from() → where()/innerJoin()/leftJoin() → where() → limit()/orderBy()/returning()/groupBy()/for()`
- `insert() → values() → returning()/onConflictDoUpdate()/onConflictDoNothing()`
- `update() → set() → where() → limit()/orderBy()/returning()/for()`
- `delete() → where() → limit()/orderBy()/returning()/for()`
- `db.execute()` resolves `[]`
- `db.transaction(cb)` calls cb with `dbChainMock.db`

`.for('update')` (Postgres row-level locking) is supported on `where` builders. It returns a thenable with `.limit` / `.orderBy` / `.returning` / `.groupBy` attached, so both `await .where().for('update')` (terminal) and `await .where().for('update').limit(1)` (chained) work. Override the terminal result with `dbChainMockFns.for.mockResolvedValueOnce([...])`; for the chained form, mock the downstream terminal (e.g. `dbChainMockFns.limit.mockResolvedValueOnce([...])`).

All terminals default to `Promise.resolve([])`. Override per-test with `dbChainMockFns.<terminal>.mockResolvedValueOnce(...)`.

Use `resetDbChainMock()` in `beforeEach` only when tests replace wiring with `.mockReturnValue` / `.mockResolvedValue` (permanent). Tests using only `...Once` variants don't need it.

## @sim/testing Package

Always prefer over local test data.

| Category | Utilities |
|----------|-----------|
| **Mocks** | `loggerMock`, `databaseMock`, `drizzleOrmMock`, `setupGlobalFetchMock()` |
| **Module mocks** | See "Centralized Mocks" table above |
| **Logger helpers** | `loggerMock`, `createMockLogger()`, `getLoggerCalls()`, `clearLoggerMocks()` |
| **Database helpers** | `databaseMock`, `drizzleOrmMock`, `createMockDb()`, `createMockSql()`, `createMockSqlOperators()` |
| **Fetch helpers** | `setupGlobalFetchMock()`, `createMockFetch()`, `createMockResponse()`, `mockFetchError()` |
| **Factories** | `createSession()`, `createWorkflowRecord()`, `createBlock()`, `createExecutionContext()` |
| **Builders** | `WorkflowBuilder`, `ExecutionContextBuilder` |
| **Assertions** | `expectWorkflowAccessGranted()`, `expectBlockExecuted()` |
| **Requests** | `createMockRequest()`, `createEnvMock()` |
| **Requests** | `createMockRequest()`, `createMockFormDataRequest()` |

## Rules Summary

1. `@vitest-environment node` unless DOM is required
2. `vi.hoisted()` + `vi.mock()` + static imports — never `vi.resetModules()` + `vi.doMock()` + dynamic imports
3. `vi.mock()` calls before importing mocked modules
4. `@sim/testing` utilities over local mocks
2. Prefer centralized mocks from `@sim/testing` (see table above) over local `vi.hoisted()` + `vi.mock()` boilerplate
3. `vi.hoisted()` + `vi.mock()` + static imports — never `vi.resetModules()` + `vi.doMock()` + dynamic imports
4. `vi.mock()` calls before importing mocked modules
5. `beforeEach(() => vi.clearAllMocks())` to reset state — no redundant `afterEach`
6. No `vi.importActual()` — mock everything explicitly
7. No `mockAuth()`, `mockConsoleLogger()`, `setupCommonApiMocks()` — use direct mocks
8. Mock heavy deps (`@/blocks`, `@/tools/registry`, `@/triggers`) in tests that don't need them
9. Use absolute imports in test files
10. Avoid real timers — use 1ms delays or `vi.useFakeTimers()`
7. Mock heavy deps (`@/blocks`, `@/tools/registry`, `@/triggers`) in tests that don't need them
8. Use absolute imports in test files
9. Avoid real timers — use 1ms delays or `vi.useFakeTimers()`

@@ -1,12 +1,12 @@
# Add Trigger

You are an expert at creating webhook triggers for Sim. You understand the trigger system, the generic `buildTriggerSubBlocks` helper, and how triggers connect to blocks.
You are an expert at creating webhook and polling triggers for Sim. You understand the trigger system, the generic `buildTriggerSubBlocks` helper, polling infrastructure, and how triggers connect to blocks.

## Your Task

1. Research what webhook events the service supports
2. Create the trigger files using the generic builder
3. Create a provider handler if custom auth, formatting, or subscriptions are needed
1. Research what webhook events the service supports — if the service lacks reliable webhooks, use polling
2. Create the trigger files using the generic builder (webhook) or manual config (polling)
3. Create a provider handler (webhook) or polling handler (polling)
4. Register triggers and connect them to the block

## Directory Structure
@@ -141,23 +141,37 @@ export const TRIGGER_REGISTRY: TriggerRegistry = {
|
||||
|
||||
### Block file (`apps/sim/blocks/blocks/{service}.ts`)
|
||||
|
||||
Wire triggers into the block so the trigger UI appears and `generate-docs.ts` discovers them. Two changes are needed:
|
||||
|
||||
1. **Spread trigger subBlocks** at the end of the block's `subBlocks` array
|
||||
2. **Add `triggers` property** after `outputs` with `enabled: true` and `available: [...]`
|
||||
|
||||
```typescript
|
||||
import { getTrigger } from '@/triggers'
|
||||
|
||||
export const {Service}Block: BlockConfig = {
|
||||
// ...
|
||||
triggers: {
|
||||
enabled: true,
|
||||
available: ['{service}_event_a', '{service}_event_b'],
|
||||
},
|
||||
subBlocks: [
|
||||
// Regular tool subBlocks first...
|
||||
...getTrigger('{service}_event_a').subBlocks,
|
||||
...getTrigger('{service}_event_b').subBlocks,
|
||||
],
|
||||
// ... tools, inputs, outputs ...
|
||||
triggers: {
|
||||
enabled: true,
|
||||
available: ['{service}_event_a', '{service}_event_b'],
|
||||
},
|
||||
}
|
||||
```
|
||||
|
||||
**Versioned blocks (V1 + V2):** Many integrations have a hidden V1 block and a visible V2 block. Where you add the trigger wiring depends on how V2 inherits from V1:

- **V2 uses `...V1Block` spread** (e.g., Google Calendar): Add the trigger to V1 — V2 inherits both `subBlocks` and `triggers` automatically.
- **V2 defines its own `subBlocks`** (e.g., Google Sheets): Add the trigger to V2 (the visible block). V1 is hidden and doesn't need it.
- **Single block, no V2** (e.g., Google Drive): Add the trigger directly.

`generate-docs.ts` deduplicates by base type (first match wins). If V1 is processed first without triggers, the V2 triggers won't appear in `integrations.json`. Always verify by checking the output after running the script.
## Provider Handler

All provider-specific webhook logic lives in a single handler file: `apps/sim/lib/webhooks/providers/{service}.ts`.
## Polling Triggers

Use polling when the service lacks reliable webhooks (e.g., Google Sheets, Google Drive, Google Calendar, Gmail, RSS, IMAP). Polling triggers do NOT use `buildTriggerSubBlocks` — they define subBlocks manually.
### Directory Structure

```
apps/sim/triggers/{service}/
├── index.ts     # Barrel export
└── poller.ts    # TriggerConfig with polling: true

apps/sim/lib/webhooks/polling/
└── {service}.ts # PollingProviderHandler implementation
```
### Polling Handler (`apps/sim/lib/webhooks/polling/{service}.ts`)
```typescript
import { pollingIdempotency } from '@/lib/core/idempotency/service'
import type { PollingProviderHandler, PollWebhookContext } from '@/lib/webhooks/polling/types'
import {
  markWebhookFailed,
  markWebhookSuccess,
  resolveOAuthCredential,
  updateWebhookProviderConfig,
} from '@/lib/webhooks/polling/utils'
import { processPolledWebhookEvent } from '@/lib/webhooks/processor'

export const {service}PollingHandler: PollingProviderHandler = {
  provider: '{service}',
  label: '{Service}',

  async pollWebhook(ctx: PollWebhookContext): Promise<'success' | 'failure'> {
    const { webhookData, workflowData, requestId, logger } = ctx
    const webhookId = webhookData.id

    try {
      // For OAuth services:
      const accessToken = await resolveOAuthCredential(webhookData, '{service}', requestId, logger)
      const config = webhookData.providerConfig as unknown as {Service}WebhookConfig

      // First poll: seed state, emit nothing
      if (!config.lastCheckedTimestamp) {
        await updateWebhookProviderConfig(webhookId, { lastCheckedTimestamp: new Date().toISOString() }, logger)
        await markWebhookSuccess(webhookId, logger)
        return 'success'
      }

      // Fetch changes since last poll, process with idempotency
      // ...

      await markWebhookSuccess(webhookId, logger)
      return 'success'
    } catch (error) {
      logger.error(`[${requestId}] Error processing {service} webhook ${webhookId}:`, error)
      await markWebhookFailed(webhookId, logger)
      return 'failure'
    }
  },
}
```
**Key patterns:**
- First poll seeds state and emits nothing (avoids flooding with existing data)
- Use `pollingIdempotency.executeWithIdempotency(provider, key, callback)` for dedup
- Use `processPolledWebhookEvent(webhookData, workflowData, payload, requestId)` to fire the workflow
- Use `updateWebhookProviderConfig(webhookId, partialConfig, logger)` for read-merge-write on state
- Use the latest server-side timestamp from API responses (not wall clock) to avoid clock skew
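The clock-skew point can be sketched as a pure cursor update over ISO 8601 server timestamps (the `PolledEvent` type and `nextCursor` name are illustrative, not project code):

```typescript
interface PolledEvent {
  id: string
  // Server-assigned timestamp from the API response (ISO 8601, UTC)
  updatedAt: string
}

// Advance the poll cursor to the latest server-side timestamp seen.
// Never use the local wall clock: skew between our server and the API
// would otherwise drop or replay events. ISO 8601 UTC strings of the
// same format compare correctly as plain strings.
function nextCursor(previous: string, events: PolledEvent[]): string {
  return events.reduce(
    (latest, e) => (e.updatedAt > latest ? e.updatedAt : latest),
    previous
  )
}
```

Note the cursor never moves backwards: an empty batch or late-arriving older events leave `previous` untouched.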
### Trigger Config (`apps/sim/triggers/{service}/poller.ts`)
```typescript
import { {Service}Icon } from '@/components/icons'
import type { TriggerConfig } from '@/triggers/types'

export const {service}PollingTrigger: TriggerConfig = {
  id: '{service}_poller',
  name: '{Service} Trigger',
  provider: '{service}',
  description: 'Triggers when ...',
  version: '1.0.0',
  icon: {Service}Icon,
  polling: true, // REQUIRED — routes to polling infrastructure

  subBlocks: [
    { id: 'triggerCredentials', type: 'oauth-input', title: 'Credentials', serviceId: '{service}', requiredScopes: [], required: true, mode: 'trigger', supportsCredentialSets: true },
    // ... service-specific config fields (dropdowns, inputs, switches) ...
    { id: 'triggerInstructions', type: 'text', title: 'Setup Instructions', hideFromPreview: true, mode: 'trigger', defaultValue: '...' },
  ],

  outputs: {
    // Must match the payload shape from processPolledWebhookEvent
  },
}
```
### Registration (3 places)

1. **`apps/sim/triggers/constants.ts`** — add the provider to the `POLLING_PROVIDERS` Set
2. **`apps/sim/lib/webhooks/polling/registry.ts`** — import the handler, add it to `POLLING_HANDLERS`
3. **`apps/sim/triggers/registry.ts`** — import the trigger config, add it to `TRIGGER_REGISTRY`
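Conceptually, the three registrations are a provider Set, a handler map, and a trigger map that must agree on the provider string. A self-contained sketch of how they fit together (the `myservice` entries and the `isPollingWired` helper are illustrative, not real project code):

```typescript
// 1. triggers/constants.ts: which providers route to polling
const POLLING_PROVIDERS = new Set(['rss', 'gmail', 'myservice'])

// 2. lib/webhooks/polling/registry.ts: provider to handler lookup
interface PollingHandlerLike {
  provider: string
  pollWebhook(ctx: unknown): Promise<'success' | 'failure'>
}
const POLLING_HANDLERS: Record<string, PollingHandlerLike> = {
  myservice: {
    provider: 'myservice',
    async pollWebhook() {
      return 'success'
    },
  },
}

// 3. triggers/registry.ts: trigger id to config lookup
const TRIGGER_REGISTRY: Record<string, { id: string; polling?: boolean }> = {
  myservice_poller: { id: 'myservice_poller', polling: true },
}

// A trigger only polls when all three registrations agree.
function isPollingWired(provider: string, triggerId: string): boolean {
  return (
    POLLING_PROVIDERS.has(provider) &&
    provider in POLLING_HANDLERS &&
    triggerId in TRIGGER_REGISTRY
  )
}
```

A mismatch in any one place silently breaks the routing, which is why the checklist below insists the same `'{service}'` string appears in all three.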
### Helm Cron Job

Add to `helm/sim/values.yaml` under the existing polling cron jobs:
```yaml
{service}WebhookPoll:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Forbid
  url: "http://sim:3000/api/webhooks/poll/{service}"
```
### Reference Implementations

- Simple: `apps/sim/lib/webhooks/polling/rss.ts` + `apps/sim/triggers/rss/poller.ts`
- Complex (OAuth, attachments): `apps/sim/lib/webhooks/polling/gmail.ts` + `apps/sim/triggers/gmail/poller.ts`
- Cursor-based (changes API): `apps/sim/lib/webhooks/polling/google-drive.ts`
- Timestamp-based: `apps/sim/lib/webhooks/polling/google-calendar.ts`
## Checklist

### Trigger Definition

- [ ] NO changes to `route.ts`, `provider-subscriptions.ts`, or `deploy.ts`
- [ ] API key field uses `password: true`
### Polling Trigger (if applicable)
- [ ] Handler implements `PollingProviderHandler` at `lib/webhooks/polling/{service}.ts`
- [ ] Trigger config has `polling: true` and defines subBlocks manually (no `buildTriggerSubBlocks`)
- [ ] Provider string matches across: trigger config, handler, `POLLING_PROVIDERS`, polling registry
- [ ] First poll seeds state and emits nothing
- [ ] Added provider to `POLLING_PROVIDERS` in `triggers/constants.ts`
- [ ] Added handler to `POLLING_HANDLERS` in `lib/webhooks/polling/registry.ts`
- [ ] Added cron job to `helm/sim/values.yaml`
- [ ] Payload shape matches the trigger `outputs` schema
### Testing
- [ ] `bun run type-check` passes
- [ ] Manually verify output keys match the trigger `outputs` keys
- [ ] Trigger UI shows correctly in the block
.cursor/commands/cleanup.md (new file, 20 lines)
# Cleanup

Arguments:
- scope: what to review (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.

User arguments: $ARGUMENTS

## Steps

Run each of these skills in order on the specified scope, passing through the scope and fix arguments. After each skill completes, move to the next. Do not skip any.

1. `/you-might-not-need-an-effect $ARGUMENTS`
2. `/you-might-not-need-a-memo $ARGUMENTS`
3. `/you-might-not-need-a-callback $ARGUMENTS`
4. `/you-might-not-need-state $ARGUMENTS`
5. `/react-query-best-practices $ARGUMENTS`
6. `/emcn-design-review $ARGUMENTS`

After all skills have run, output a summary of what was found and fixed (or proposed) across all six passes.
.cursor/commands/emcn-design-review.md (new file, 74 lines)
# EMCN Design Review

Arguments:
- scope: what to review (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.

User arguments: $ARGUMENTS

## Context

This codebase uses **emcn**, a custom component library built on Radix UI primitives with CVA variants and CSS variable design tokens. All UI must use emcn components and tokens.

## Steps

1. Read the emcn barrel export at `apps/sim/components/emcn/components/index.ts` to know what's available
2. Read `apps/sim/app/_styles/globals.css` for CSS variable tokens
3. Analyze the specified scope against every rule below
4. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.

---

## Imports

- Import from `@/components/emcn` barrel, never subpaths
- Icons from `@/components/emcn/icons` or `lucide-react`
- Use `cn` from `@/lib/core/utils/cn` for conditional classes

## Design Tokens

Use the CSS variable pattern (`text-[var(--text-primary)]`), never Tailwind semantics (`text-muted-foreground`) or hardcoded colors (`text-gray-500`, `#333`).

**Text**: `--text-primary`, `--text-secondary`, `--text-tertiary`, `--text-muted`, `--text-icon`, `--text-inverse`, `--text-error`
**Surfaces**: `--bg`, `--surface-2` through `--surface-7`, `--surface-hover`, `--surface-active`
**Borders**: `--border`, `--border-1`, `--border-muted`
**Z-Index**: `--z-dropdown` (100), `--z-modal` (200), `--z-popover` (300), `--z-tooltip` (400), `--z-toast` (500)
**Shadows**: `shadow-subtle`, `shadow-medium`, `shadow-overlay`, `shadow-card`

## Buttons

| Action | Variant |
|--------|---------|
| Toolbar, icon-only | `ghost` (most common, 28%) |
| Create, save, submit | `primary` (24%) |
| Cancel, close | `default` |
| Delete, remove | `destructive` |
| Selected state | `active` |
| Toggle | `outline` |

## Delete/Remove Confirmations

Modal `size="sm"`, title "Delete/Remove {ItemType}", `variant="destructive"` action button, `variant="default"` cancel. Cancel left, action right (100% compliance). Use `text-[var(--text-error)]` for irreversible warnings.

## Toast

`toast.success()`, `toast.error()`, `toast()` from `@/components/emcn`. Never custom notification UI.

## Badges

`red`=error/failed, `gray-secondary`=metadata/roles, `type`=type annotations, `green`=success/active, `gray`=neutral, `amber`=processing, `orange`=paused, `blue`=info. Use the `dot` prop for status indicators.

## Icons

Default: `h-[14px] w-[14px]` (400+ uses). Color: `text-[var(--text-icon)]`. Scale: 14px > 16px > 12px > 20px.

## Anti-patterns to flag

- Raw `<button>`/`<input>` instead of emcn components
- Hardcoded colors (`text-gray-*`, `#hex`, `rgb()`)
- Tailwind semantics (`text-muted-foreground`) instead of CSS variables
- Template literal className instead of `cn()`
- Inline styles for colors/static values (dynamic values OK)
- Importing from emcn subpaths instead of barrel
- Arbitrary z-index instead of tokens
- Wrong button variant for action type
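The token and `cn()` rules combined, as a minimal sketch (this toy `cn` only drops falsy parts; the real helper at `@/lib/core/utils/cn` may do more, such as merging Tailwind classes):

```typescript
// Toy cn(): join truthy class parts with spaces.
function cn(...parts: Array<string | false | null | undefined>): string {
  return parts.filter(Boolean).join(' ')
}

// Bad: hardcoded color inside a template literal
// `text-gray-500 ${isActive ? 'font-medium' : ''}`

// Good: CSS variable token, cn() for the conditional part
const isActive = true
const className = cn('text-[var(--text-secondary)]', isActive && 'font-medium')
```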
.cursor/commands/react-query-best-practices.md (new file, 49 lines)
# React Query Best Practices

Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/hooks/queries/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.

User arguments: $ARGUMENTS

## Context

This codebase uses React Query (TanStack Query) as the single source of truth for all server state. All query hooks live in `hooks/queries/`. Zustand is used only for client-only UI state. Server data must never be duplicated into useState or Zustand outside of mutation callbacks that coordinate cross-store state.
## References

Read these before analyzing:
1. https://tkdodo.eu/blog/practical-react-query — foundational defaults, custom hooks, avoiding local state copies
2. https://tkdodo.eu/blog/effective-react-query-keys — key factory pattern, hierarchical keys, fuzzy invalidation
3. https://tkdodo.eu/blog/react-query-as-a-state-manager — React Query IS your server state manager
## Rules to enforce

### Query key factories
- Every file in `hooks/queries/` must have a hierarchical key factory with an `all` root key
- Keys must include intermediate plural keys (`lists`, `details`) for prefix invalidation
- Key factories are colocated with their query hooks, not in a global keys file
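The factory shape these rules describe, sketched for an illustrative `workflows` entity:

```typescript
// Hierarchical key factory: `all` root, intermediate plural keys for
// prefix ("fuzzy") invalidation, leaf keys for individual queries.
// Colocate this with the entity's query hooks in hooks/queries/.
const workflowKeys = {
  all: ['workflows'] as const,
  lists: () => [...workflowKeys.all, 'list'] as const,
  list: (filters: string) => [...workflowKeys.lists(), { filters }] as const,
  details: () => [...workflowKeys.all, 'detail'] as const,
  detail: (id: string) => [...workflowKeys.details(), id] as const,
}
```

Invalidating with `workflowKeys.lists()` then matches every `list(...)` key by prefix without touching detail queries.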
### Query hooks
- Every `queryFn` must forward `signal` for request cancellation
- Every query must have an explicit `staleTime` (the default of 0 is almost never correct)
- `keepPreviousData` / `placeholderData` only on variable-key queries (where params change), never on static keys
- Use `enabled` to prevent queries from running without required params
### Mutations
- Use `onSettled` (not `onSuccess`) for cache reconciliation — it fires on both success and error
- For optimistic updates: save previous data in `onMutate`, roll back in `onError`
- Use targeted invalidation (`entityKeys.lists()`) not broad (`entityKeys.all`) when possible
- Don't include mutation objects in `useCallback` deps — `.mutate()` is stable
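The snapshot-and-rollback contract for optimistic updates, modeled without React Query so the flow is visible (a plain `Map` stands in for the query cache; all names here are illustrative):

```typescript
type Cache = Map<string, string[]>

// onMutate: write optimistically, return a snapshot for rollback.
function onMutate(cache: Cache, key: string, newItem: string): { previous: string[] | undefined } {
  const previous = cache.get(key)
  cache.set(key, [...(previous ?? []), newItem])
  return { previous }
}

// onError: restore the snapshot captured in onMutate.
function onError(cache: Cache, key: string, context: { previous: string[] | undefined }): void {
  if (context.previous) cache.set(key, context.previous)
  else cache.delete(key)
}
```

In real React Query code the object returned from `onMutate` arrives as the `context` argument of `onError`, and `onSettled` then invalidates to reconcile with the server.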
### Server state ownership
- Never copy query data into useState. Use query data directly in components.
- Never copy query data into Zustand stores (exception: mutation callbacks that coordinate cross-store state like temp ID replacement)
- The query cache is not a local state manager — `setQueryData` is for optimistic updates only
- Forms are the one deliberate exception: copy server data into local form state with `staleTime: Infinity`

## Steps

1. Read the references above to understand the guidelines
2. Analyze the specified scope against the rules listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.
.cursor/commands/you-might-not-need-a-callback.md (new file, 30 lines)
# You Might Not Need a Callback

Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.

User arguments: $ARGUMENTS

## References

Read before analyzing:
1. https://react.dev/reference/react/useCallback — official docs on when useCallback is actually needed

## Anti-patterns to detect

1. **useCallback on functions not passed as props or deps**: No benefit if only called within the same component.
2. **useCallback with deps that change every render**: Memoization is wasted.
3. **useCallback on handlers passed to native elements**: `<button onClick={fn}>` doesn't benefit from stable references.
4. **useCallback wrapping functions that return new objects/arrays**: Memoization at the wrong level.
5. **useCallback with empty deps when deps are needed**: Stale closures.
6. **Pairing useCallback + React.memo unnecessarily**: Only optimize when you've measured a problem.
7. **useCallback in hooks that don't need stable references**: Not every hook return needs memoization.

Note: This codebase uses a ref pattern for stable callbacks (`useRef` + empty deps). That pattern is correct — don't flag it.

## Steps

1. Read the reference above
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.
.cursor/commands/you-might-not-need-a-memo.md (new file, 28 lines)
# You Might Not Need a Memo

Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.

User arguments: $ARGUMENTS

## References

Read before analyzing:
1. https://overreacted.io/before-you-memo/ — two techniques to avoid memo entirely

## Anti-patterns to detect

1. **State can be moved down instead of memoizing**: Move state into a smaller child so the slow component stops re-rendering without memo.
2. **Children can be lifted up**: Extract the stateful part, pass the expensive subtree as `children` — children passed as props don't re-render when parent state changes.
3. **useMemo on cheap computations**: Small array filters, string concat, and arithmetic don't need memoization.
4. **useMemo with constantly-changing deps**: Deps that change every render mean useMemo does nothing.
5. **useMemo to stabilize props for non-memoized children**: If the child isn't wrapped in React.memo, stable references don't matter.
6. **React.memo on components that always receive new props**: Fix the parent instead.
7. **useMemo for derived state**: Just compute inline during render.

## Steps

1. Read the reference above
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.
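Anti-pattern 7 in code form: a derived value is just an expression, so a plain function (or an inline expression during render) replaces the memo (types and names here are illustrative):

```typescript
interface Todo {
  title: string
  done: boolean
}

// Instead of: const visible = useMemo(() => todos.filter(t => !t.done), [todos])
// just compute the value during render:
function visibleTodos(todos: Todo[]): Todo[] {
  return todos.filter((t) => !t.done)
}
```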
.cursor/commands/you-might-not-need-state.md (new file, 33 lines)
# You Might Not Need State

Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.

User arguments: $ARGUMENTS

## Context

This codebase uses React Query for all server state and Zustand for client-only global state. useState should only be used for ephemeral UI concerns (open/closed, hover, local form input). Server data should never be copied into useState or Zustand — React Query is the single source of truth.

## References

Read these before analyzing:
1. https://react.dev/learn/choosing-the-state-structure — 5 principles for structuring state
2. https://tkdodo.eu/blog/dont-over-use-state — never store derived/computed values in state
3. https://tkdodo.eu/blog/putting-props-to-use-state — never mirror props into state via useEffect

## Anti-patterns to detect

1. **Derived state stored in useState**: If a value can be computed from props, other state, or query data, compute it inline during render instead of storing it in state.
2. **Server state copied into useState**: Never `useState` + `useEffect` to sync React Query data into local state. Use query data directly. The only exception is forms where users edit server data.
3. **Props mirrored into state**: Never `useState(prop)` + `useEffect(() => setState(prop))`. Use the prop directly, or use a key to reset component state.
4. **Chained useEffect state updates**: Never chain Effects that set state to trigger other Effects. Calculate all derived values in the event handler or inline during render.
5. **Storing objects when an ID suffices**: Store `selectedId`, not a copy of the selected object. Derive the object: `items.find(i => i.id === selectedId)`.
6. **State that duplicates Zustand or React Query**: If the data already lives in a store or query cache, don't create a parallel useState.

## Steps

1. Read the references above to understand the guidelines
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.
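Anti-pattern 5 as code: keep only the id in state and derive the object at render time (types and names are illustrative):

```typescript
interface Item {
  id: string
  name: string
}

// State holds only selectedId; the object is derived, never duplicated,
// so it can never drift out of sync with the source list.
function selectedItem(items: Item[], selectedId: string | null): Item | undefined {
  return items.find((i) => i.id === selectedId)
}
```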
.cursor/rules/constitution.mdc (new file, 76 lines)
---
description: Sim product language, positioning, and tone guidelines
globs: ["apps/sim/app/(landing)/**", "apps/sim/app/(home)/**", "apps/docs/**", "apps/sim/app/manifest.ts", "apps/sim/app/sitemap.ts", "apps/sim/app/robots.ts", "apps/sim/app/llms.txt/**", "apps/sim/app/llms-full.txt/**", "apps/sim/app/(landing)/**/structured-data*", "apps/docs/**/structured-data*", "**/metadata*", "**/seo*"]
---

# Sim — Language & Positioning

When editing user-facing copy (landing pages, docs, metadata, marketing), follow these rules.

## Identity

Sim is the **AI workspace** where teams build and run AI agents. Not a workflow tool, not an agent framework, not an automation platform.

**Short definition:** Sim is the open-source AI workspace where teams build, deploy, and manage AI agents.

**Full definition:** Sim is the open-source AI workspace where teams build, deploy, and manage AI agents. Connect 1,000+ integrations and every major LLM to create agents that automate real work — visually, conversationally, or with code.

## Audience

**Primary:** Teams building AI agents for their organization — IT, operations, and technical teams who need governance, security, lifecycle management, and collaboration.

**Secondary:** Individual builders and developers who care about speed, flexibility, and open source.

## Required Language

| Concept | Use | Never use |
|---------|-----|-----------|
| The product | "AI workspace" | "workflow tool", "automation platform", "agent framework" |
| Building | "build agents", "create agents" | "create workflows" (unless describing the workflow module specifically) |
| Visual builder | "workflow builder" or "visual builder" | "canvas", "graph editor" |
| Mothership | "Mothership" (capitalized) | "chat", "AI assistant", "copilot" |
| Deployment | "deploy", "ship" | "publish", "activate" |
| Audience | "teams", "builders" | "users", "customers" (in marketing copy) |
| What agents do | "automate real work" | "automate tasks", "automate workflows" |
| Our advantage | "open-source AI workspace" | "open-source platform" |

## Tone

- **Direct.** Short sentences. Active voice. Lead with what it does.
- **Concrete.** Name specific things — "Slack bots, compliance agents, data pipelines" — not abstractions.
- **Confident, not loud.** No exclamation marks or superlatives.
- **Simple.** If a 16-year-old can't understand the sentence, rewrite it.

## Claim Hierarchy

When describing Sim, always lead with the most differentiated claim:

1. **What it is:** "The AI workspace for teams"
2. **What you do:** "Build, deploy, and manage AI agents"
3. **How:** "Visually, conversationally, or with code"
4. **Scale:** "1,000+ integrations, every major LLM"
5. **Trust:** "Open source. SOC2. Trusted by 100,000+ builders."

## Module Descriptions

| Module | One-liner |
|--------|-----------|
| **Mothership** | Your AI command center. Build and manage everything in natural language. |
| **Workflows** | The visual builder. Connect blocks, models, and integrations into agent logic. |
| **Knowledge Base** | Your agents' memory. Upload docs, sync sources, build vector databases. |
| **Tables** | A database, built in. Store, query, and wire structured data into agent runs. |
| **Files** | Upload, create, and share. One store for your team and every agent. |
| **Logs** | Full visibility, every run. Trace execution block by block. |

## What We Never Say

- Never call Sim "just a workflow tool"
- Never compare only on integration count — we win on AI-native capabilities
- Never use "no-code" as the primary descriptor — say "visually, conversationally, or with code"
- Never promise unshipped features
- Never use jargon ("RAG", "vector database", "MCP") without a plain-English explanation on public pages
- Avoid "agentic workforce" as a primary term — use "AI agents"

## Vision

Sim becomes the default environment where teams build AI agents — not a tool you visit for one task, but a workspace you live in. Workflows are one module; Mothership is another. The workspace is the constant; the interface adapts.
Use TSDoc for documentation. No `====` separators. No non-TSDoc comments.
Never update global styles. Keep all styling local to components.

## ID Generation
Never use `crypto.randomUUID()`, `nanoid`, or the `uuid` package directly. Use the utilities from `@sim/utils/id`:

- `generateId()` — UUID v4, use by default
- `generateShortId(size?)` — short URL-safe ID (default 21 chars), for compact identifiers
```typescript
// ✗ Bad
import { v4 as uuidv4 } from 'uuid'
const id = crypto.randomUUID()

// ✓ Good
import { generateId, generateShortId } from '@sim/utils/id'
const uuid = generateId()
const shortId = generateShortId()
const tiny = generateShortId(8)
```

## Common Utilities
Use shared helpers from `@sim/utils` instead of writing inline implementations:

- `sleep(ms)` — async delay. Never write `new Promise(resolve => setTimeout(resolve, ms))`
- `toError(value)` — normalize unknown caught values to `Error`. Never write `e instanceof Error ? e : new Error(String(e))`
- `toError(value).message` — get the error message safely. Never write `e instanceof Error ? e.message : String(e)`

```typescript
// ✗ Bad
await new Promise(resolve => setTimeout(resolve, 1000))
const msg = error instanceof Error ? error.message : String(error)
const err = error instanceof Error ? error : new Error(String(error))

// ✓ Good
import { sleep } from '@sim/utils/helpers'
import { toError } from '@sim/utils/errors'
await sleep(1000)
const msg = toError(error).message
const err = toError(error)
```

## Package Manager
Use `bun` and `bunx`, not `npm` and `npx`.
.cursor/rules/sim-sandbox.mdc (new file, 85 lines)
---
|
||||
description: Isolated-vm sandbox worker security policy. Hard rules for anything that lives in the worker child process that runs user code.
|
||||
globs: ["apps/sim/lib/execution/isolated-vm-worker.cjs", "apps/sim/lib/execution/isolated-vm.ts", "apps/sim/lib/execution/sandbox/**", "apps/sim/sandbox-tasks/**"]
|
||||
---
|
||||
|
||||
# Sim Sandbox — Worker Security Policy
|
||||
|
||||
The isolated-vm worker child process at
|
||||
`apps/sim/lib/execution/isolated-vm-worker.cjs` runs untrusted user code inside
|
||||
V8 isolates. The process itself is a trust boundary. Everything in this rule is
|
||||
about what must **never** live in that process.
|
||||
|
||||
## Hard rules
|
||||
|
||||
1. **No app credentials in the worker process**. The worker must not hold, load,
|
||||
or receive via IPC: database URLs, Redis URLs, AWS keys, Stripe keys,
|
||||
session-signing keys, encryption keys, OAuth client secrets, internal API
|
||||
secrets, or any LLM / email / search provider API keys. If you catch yourself
|
||||
`require`'ing `@/lib/auth`, `@sim/db`, `@/lib/uploads/core/storage-service`,
|
||||
or anything that imports `env` directly inside the worker, stop and use a
|
||||
host-side broker instead.
|
||||
|
||||
2. **Host-side brokers own all credentialed work**. The worker can only access
|
||||
resources through `ivm.Reference` / `ivm.Callback` bridges back to the host
|
||||
process. Today the only broker is `workspaceFileBroker`
|
||||
(`apps/sim/lib/execution/sandbox/brokers/workspace-file.ts`); adding a new
|
||||
one requires co-reviewing this file.
|
||||
|
||||
3. **Host-side brokers must scope every resource access to a single tenant**.
|
||||
The `SandboxBrokerContext` always carries `workspaceId`. Any new broker that
|
||||
accesses storage, DB, or an external API must use `ctx.workspaceId` to scope
|
||||
the lookup — never accept a raw path, key, or URL from isolate code without
|
||||
validation.
|
||||
|
||||
4. **Nothing that runs in the isolate is trusted, even if we wrote it**. The
|
||||
task `bootstrap` and `finalize` strings in `apps/sim/sandbox-tasks/` execute
|
||||
inside the isolate. They must treat `globalThis` as adversarial — no pulling
|
||||
values from it that might have been mutated by user code. The hardening
|
||||
script in `executeTask` undefines dangerous globals before user code runs.
|
||||
|
||||
## Why
|
||||
|
||||
A V8 JIT bug (Chrome ships these roughly monthly) gives an attacker a native
|
||||
code primitive inside the process that owns whatever that process can reach.
|
||||
If the worker only holds `isolated-vm` + a single narrow workspace-file broker,
|
||||
a V8 escape leaks one tenant's files. If the worker holds a Stripe key or a DB
|
||||
connection, a V8 escape leaks the service.
|
||||
|
||||
The original `doc-worker.cjs` vulnerability (CVE-class, 225 production secrets
|
||||
leaked via `/proc/1/environ`) was the forcing function for this architecture.
|
||||
Keep the blast radius small.
|
||||
|
||||
## Checklist for changes to `isolated-vm-worker.cjs`
|
||||
|
||||
Before landing any change that adds a new `require(...)` or `process.send(...)`
|
||||
payload or `ivm.Reference` wrapper in the worker:
|
||||
|
||||
- [ ] Does it load a credential, key, connection string, or secret? If yes,
|
||||
move it host-side and expose as a broker.
|
||||
- [ ] Does it import from `@/lib/auth`, `@sim/db`, `@/lib/uploads/core/*`,
|
||||
`@/lib/core/config/env`, or any module that reads `process.env` of the
|
||||
main app? If yes, same — move host-side.
|
||||
- [ ] Does it expose a resource that's workspace-scoped without taking a
|
||||
`workspaceId`? If yes, re-scope.
|
||||
- [ ] Did you update the broker limits (`IVM_MAX_BROKER_ARGS_JSON_CHARS`,
|
||||
`IVM_MAX_BROKER_RESULT_JSON_CHARS`, `IVM_MAX_BROKERS_PER_EXECUTION`) if
|
||||
the new broker can emit large payloads or fire frequently?

## What the worker *may* hold

- `isolated-vm` module
- Node built-ins: `node:fs` (only for reading the checked-in bundle `.cjs`
  files) and `node:path`
- The three prebuilt library bundles under
  `apps/sim/lib/execution/sandbox/bundles/*.cjs`
- IPC message handlers for `execute`, `cancel`, `fetchResponse`,
  `brokerResponse`

The worker deliberately has **no host-side logger**. All errors and
diagnostics flow through IPC back to the host, which has `@sim/logger`. Do
not add `createLogger` or console-based logging to the worker — it would
require pulling the main app's config / env, which is exactly what this
rule is preventing.
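In practice that means the worker just normalizes an error into a plain IPC message (the message shape below is illustrative, not the worker's real protocol) and hands it to its send channel; only the host turns it into a log line.

```typescript
// Illustrative diagnostic message shape — not the real IPC protocol.
interface WorkerDiagnostic {
  type: 'diagnostic'
  level: 'error' | 'warn'
  message: string
}

// The send function is injected (in the worker it would be process.send),
// which keeps the worker free of any logger or env-reading config module.
function reportError(send: (msg: WorkerDiagnostic) => void, err: unknown): void {
  const message = err instanceof Error ? err.message : String(err)
  send({ type: 'diagnostic', level: 'error', message })
}
```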

Anything else is suspect.
|
||||
@@ -3,6 +3,7 @@ description: Testing patterns with Vitest and @sim/testing
globs: ["apps/sim/**/*.test.ts", "apps/sim/**/*.test.tsx"]
---

# Testing Patterns

Use Vitest. Test files: `feature.ts` → `feature.test.ts`
@@ -12,8 +13,12 @@ Use Vitest. Test files: `feature.ts` → `feature.test.ts`
These modules are mocked globally — do NOT re-mock them in test files unless you need to override behavior:

- `@sim/db` → `databaseMock`
- `@sim/db/schema` → `schemaMock`
- `drizzle-orm` → `drizzleOrmMock`
- `@sim/logger` → `loggerMock`
- `@/lib/auth` → `authMock`
- `@/lib/auth/hybrid` → `hybridAuthMock` (with default session-delegating behavior)
- `@/lib/core/utils/request` → `requestUtilsMock`
- `@/stores/console/store`, `@/stores/terminal`, `@/stores/execution/store`
- `@/blocks/registry`
- `@trigger.dev/sdk`
@@ -101,10 +106,6 @@ vi.mock('@/lib/workspaces/utils', () => ({
}))
```

### NEVER use `mockAuth()`, `mockConsoleLogger()`, or `setupCommonApiMocks()` from `@sim/testing`

These helpers internally use `vi.doMock()` which is slow. Use direct `vi.hoisted()` + `vi.mock()` instead.

### Mock heavy transitive dependencies

If a module under test imports `@/blocks` (200+ files), `@/tools/registry`, or other heavy modules, mock them:
@@ -134,38 +135,61 @@ await new Promise(r => setTimeout(r, 1))
vi.useFakeTimers()
```

## Mock Pattern Reference
## Centralized Mocks (prefer over local declarations)

`@sim/testing` exports ready-to-use mock modules for common dependencies. Import and pass directly to `vi.mock()` — no `vi.hoisted()` boilerplate needed. Each paired `*MockFns` object exposes the underlying `vi.fn()`s for per-test overrides.

| Module mocked | Import | Factory form |
|---|---|---|
| `@/app/api/auth/oauth/utils` | `authOAuthUtilsMock`, `authOAuthUtilsMockFns` | `vi.mock('@/app/api/auth/oauth/utils', () => authOAuthUtilsMock)` |
| `@/app/api/knowledge/utils` | `knowledgeApiUtilsMock`, `knowledgeApiUtilsMockFns` | `vi.mock('@/app/api/knowledge/utils', () => knowledgeApiUtilsMock)` |
| `@/app/api/workflows/utils` | `workflowsApiUtilsMock`, `workflowsApiUtilsMockFns` | `vi.mock('@/app/api/workflows/utils', () => workflowsApiUtilsMock)` |
| `@sim/audit` | `auditMock`, `auditMockFns` | `vi.mock('@sim/audit', () => auditMock)` |
| `@/lib/auth` | `authMock`, `authMockFns` | `vi.mock('@/lib/auth', () => authMock)` |
| `@/lib/auth/hybrid` | `hybridAuthMock`, `hybridAuthMockFns` | `vi.mock('@/lib/auth/hybrid', () => hybridAuthMock)` |
| `@/lib/copilot/request/http` | `copilotHttpMock`, `copilotHttpMockFns` | `vi.mock('@/lib/copilot/request/http', () => copilotHttpMock)` |
| `@/lib/core/config/env` | `envMock`, `createEnvMock(overrides)` | `vi.mock('@/lib/core/config/env', () => envMock)` |
| `@/lib/core/config/feature-flags` | `featureFlagsMock` | `vi.mock('@/lib/core/config/feature-flags', () => featureFlagsMock)` |
| `@/lib/core/config/redis` | `redisConfigMock`, `redisConfigMockFns` | `vi.mock('@/lib/core/config/redis', () => redisConfigMock)` |
| `@/lib/core/security/encryption` | `encryptionMock`, `encryptionMockFns` | `vi.mock('@/lib/core/security/encryption', () => encryptionMock)` |
| `@/lib/core/security/input-validation.server` | `inputValidationMock`, `inputValidationMockFns` | `vi.mock('@/lib/core/security/input-validation.server', () => inputValidationMock)` |
| `@/lib/core/utils/request` | `requestUtilsMock`, `requestUtilsMockFns` | `vi.mock('@/lib/core/utils/request', () => requestUtilsMock)` |
| `@/lib/core/utils/urls` | `urlsMock`, `urlsMockFns` | `vi.mock('@/lib/core/utils/urls', () => urlsMock)` |
| `@/lib/execution/preprocessing` | `executionPreprocessingMock`, `executionPreprocessingMockFns` | `vi.mock('@/lib/execution/preprocessing', () => executionPreprocessingMock)` |
| `@/lib/logs/execution/logging-session` | `loggingSessionMock`, `loggingSessionMockFns`, `LoggingSessionMock` | `vi.mock('@/lib/logs/execution/logging-session', () => loggingSessionMock)` |
| `@/lib/workflows/orchestration` | `workflowsOrchestrationMock`, `workflowsOrchestrationMockFns` | `vi.mock('@/lib/workflows/orchestration', () => workflowsOrchestrationMock)` |
| `@/lib/workflows/persistence/utils` | `workflowsPersistenceUtilsMock`, `workflowsPersistenceUtilsMockFns` | `vi.mock('@/lib/workflows/persistence/utils', () => workflowsPersistenceUtilsMock)` |
| `@/lib/workflows/utils` | `workflowsUtilsMock`, `workflowsUtilsMockFns` | `vi.mock('@/lib/workflows/utils', () => workflowsUtilsMock)` |
| `@/lib/workspaces/permissions/utils` | `permissionsMock`, `permissionsMockFns` | `vi.mock('@/lib/workspaces/permissions/utils', () => permissionsMock)` |
| `@sim/db/schema` | `schemaMock` | `vi.mock('@sim/db/schema', () => schemaMock)` |

### Auth mocking (API routes)

```typescript
const { mockGetSession } = vi.hoisted(() => ({
  mockGetSession: vi.fn(),
}))
import { authMock, authMockFns } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'

vi.mock('@/lib/auth', () => ({
  auth: { api: { getSession: vi.fn() } },
  getSession: mockGetSession,
}))
vi.mock('@/lib/auth', () => authMock)

// In tests:
mockGetSession.mockResolvedValue({ user: { id: 'user-1', email: 'test@example.com' } })
mockGetSession.mockResolvedValue(null) // unauthenticated
import { GET } from '@/app/api/my-route/route'

beforeEach(() => {
  vi.clearAllMocks()
  authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-1' } })
})
```

Only define a local `vi.mock('@/lib/auth', ...)` if the module under test consumes exports outside the centralized shape (e.g., `auth.api.verifyOneTimeToken`, `auth.api.resetPassword`).

### Hybrid auth mocking

```typescript
const { mockCheckSessionOrInternalAuth } = vi.hoisted(() => ({
  mockCheckSessionOrInternalAuth: vi.fn(),
}))
import { hybridAuthMock, hybridAuthMockFns } from '@sim/testing'

vi.mock('@/lib/auth/hybrid', () => ({
  checkSessionOrInternalAuth: mockCheckSessionOrInternalAuth,
}))
vi.mock('@/lib/auth/hybrid', () => hybridAuthMock)

// In tests:
mockCheckSessionOrInternalAuth.mockResolvedValue({
hybridAuthMockFns.mockCheckSessionOrInternalAuth.mockResolvedValue({
  success: true, userId: 'user-1', authType: 'session',
})
```
@@ -196,21 +220,23 @@ Always prefer over local test data.

| Category | Utilities |
|----------|-----------|
| **Mocks** | `loggerMock`, `databaseMock`, `drizzleOrmMock`, `setupGlobalFetchMock()` |
| **Module mocks** | See "Centralized Mocks" table above |
| **Logger helpers** | `loggerMock`, `createMockLogger()`, `getLoggerCalls()`, `clearLoggerMocks()` |
| **Database helpers** | `databaseMock`, `drizzleOrmMock`, `createMockDb()`, `createMockSql()`, `createMockSqlOperators()` |
| **Fetch helpers** | `setupGlobalFetchMock()`, `createMockFetch()`, `createMockResponse()`, `mockFetchError()` |
| **Factories** | `createSession()`, `createWorkflowRecord()`, `createBlock()`, `createExecutionContext()` |
| **Builders** | `WorkflowBuilder`, `ExecutionContextBuilder` |
| **Assertions** | `expectWorkflowAccessGranted()`, `expectBlockExecuted()` |
| **Requests** | `createMockRequest()`, `createEnvMock()` |
| **Requests** | `createMockRequest()`, `createMockFormDataRequest()` |

## Rules Summary

1. `@vitest-environment node` unless DOM is required
2. `vi.hoisted()` + `vi.mock()` + static imports — never `vi.resetModules()` + `vi.doMock()` + dynamic imports
3. `vi.mock()` calls before importing mocked modules
4. `@sim/testing` utilities over local mocks
2. Prefer centralized mocks from `@sim/testing` (see table above) over local `vi.hoisted()` + `vi.mock()` boilerplate
3. `vi.hoisted()` + `vi.mock()` + static imports — never `vi.resetModules()` + `vi.doMock()` + dynamic imports
4. `vi.mock()` calls before importing mocked modules
5. `beforeEach(() => vi.clearAllMocks())` to reset state — no redundant `afterEach`
6. No `vi.importActual()` — mock everything explicitly
7. No `mockAuth()`, `mockConsoleLogger()`, `setupCommonApiMocks()` — use direct mocks
8. Mock heavy deps (`@/blocks`, `@/tools/registry`, `@/triggers`) in tests that don't need them
9. Use absolute imports in test files
10. Avoid real timers — use 1ms delays or `vi.useFakeTimers()`
7. Mock heavy deps (`@/blocks`, `@/tools/registry`, `@/triggers`) in tests that don't need them
8. Use absolute imports in test files
9. Avoid real timers — use 1ms delays or `vi.useFakeTimers()`
@@ -71,7 +71,7 @@ fi

# Set up environment variables if .env doesn't exist for the sim app
if [ ! -f "apps/sim/.env" ]; then
  echo "📄 Creating .env file from template..."
  echo "📄 Creating apps/sim/.env from template..."
  if [ -f "apps/sim/.env.example" ]; then
    cp apps/sim/.env.example apps/sim/.env
  else
@@ -79,6 +79,18 @@ if [ ! -f "apps/sim/.env" ]; then
  fi
fi

# Set up env for the realtime server (must match the shared values in apps/sim/.env)
if [ ! -f "apps/realtime/.env" ] && [ -f "apps/realtime/.env.example" ]; then
  echo "📄 Creating apps/realtime/.env from template..."
  cp apps/realtime/.env.example apps/realtime/.env
fi

# Set up packages/db/.env for drizzle-kit and migration scripts
if [ ! -f "packages/db/.env" ] && [ -f "packages/db/.env.example" ]; then
  echo "📄 Creating packages/db/.env from template..."
  cp packages/db/.env.example packages/db/.env
fi

# Generate schema and run database migrations
echo "🗃️ Running database schema generation and migrations..."
echo "Generating schema..."
28
.github/CODEOWNERS
vendored
Normal file
@@ -0,0 +1,28 @@
# Copilot/Mothership chat streaming entrypoints and replay surfaces.
/apps/sim/app/api/copilot/chat/ @simstudioai/mothership
/apps/sim/app/api/copilot/confirm/ @simstudioai/mothership
/apps/sim/app/api/copilot/chats/ @simstudioai/mothership
/apps/sim/app/api/mothership/chat/ @simstudioai/mothership
/apps/sim/app/api/mothership/chats/ @simstudioai/mothership
/apps/sim/app/api/mothership/execute/ @simstudioai/mothership
/apps/sim/app/api/v1/copilot/chat/ @simstudioai/mothership

# Server-side stream orchestration, persistence, and protocol.
/apps/sim/lib/copilot/chat/ @simstudioai/mothership
/apps/sim/lib/copilot/async-runs/ @simstudioai/mothership
/apps/sim/lib/copilot/request/ @simstudioai/mothership
/apps/sim/lib/copilot/generated/ @simstudioai/mothership
/apps/sim/lib/copilot/constants.ts @simstudioai/mothership
/apps/sim/lib/core/utils/sse.ts @simstudioai/mothership

# Stream-time tool execution, confirmations, resource persistence, and handlers.
/apps/sim/lib/copilot/tool-executor/ @simstudioai/mothership
/apps/sim/lib/copilot/tools/ @simstudioai/mothership
/apps/sim/lib/copilot/persistence/ @simstudioai/mothership
/apps/sim/lib/copilot/resources/ @simstudioai/mothership

# Client-side stream consumption, hydration, and reconnect.
/apps/sim/app/workspace/*/home/hooks/index.ts @simstudioai/mothership
/apps/sim/app/workspace/*/home/hooks/use-chat.ts @simstudioai/mothership
/apps/sim/app/workspace/*/home/hooks/use-file-preview-sessions.ts @simstudioai/mothership
/apps/sim/hooks/queries/tasks.ts @simstudioai/mothership
104
.github/workflows/ci.yml
vendored
@@ -16,6 +16,7 @@ permissions:
jobs:
  test-build:
    name: Test and Build
    if: github.ref != 'refs/heads/dev' || github.event_name == 'pull_request'
    uses: ./.github/workflows/test-build.yml
    secrets: inherit

@@ -45,11 +46,72 @@ jobs:
            echo "ℹ️ Not a release commit"
          fi

  # Build AMD64 images and push to ECR immediately (+ GHCR for main)
  # Dev: build all 3 images for ECR only (no GHCR, no ARM64)
  build-dev:
    name: Build Dev ECR
    needs: [detect-version]
    if: github.event_name == 'push' && github.ref == 'refs/heads/dev'
    runs-on: blacksmith-8vcpu-ubuntu-2404
    permissions:
      contents: read
      id-token: write
    strategy:
      fail-fast: false
      matrix:
        include:
          - dockerfile: ./docker/app.Dockerfile
            ecr_repo_secret: ECR_APP
          - dockerfile: ./docker/db.Dockerfile
            ecr_repo_secret: ECR_MIGRATIONS
          - dockerfile: ./docker/realtime.Dockerfile
            ecr_repo_secret: ECR_REALTIME
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.DEV_AWS_ROLE_TO_ASSUME }}
          aws-region: ${{ secrets.DEV_AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Set up Docker Buildx
        uses: useblacksmith/setup-docker-builder@v1

      - name: Resolve ECR repo name
        id: ecr-repo
        run: echo "name=$ECR_REPO" >> $GITHUB_OUTPUT
        env:
          ECR_REPO: ${{ matrix.ecr_repo_secret == 'ECR_APP' && secrets.ECR_APP || matrix.ecr_repo_secret == 'ECR_MIGRATIONS' && secrets.ECR_MIGRATIONS || matrix.ecr_repo_secret == 'ECR_REALTIME' && secrets.ECR_REALTIME || '' }}

      - name: Build and push
        uses: useblacksmith/build-push-action@v2
        with:
          context: .
          file: ${{ matrix.dockerfile }}
          platforms: linux/amd64
          push: true
          tags: ${{ steps.login-ecr.outputs.registry }}/${{ steps.ecr-repo.outputs.name }}:dev
          provenance: false
          sbom: false

  # Main/staging: build AMD64 images and push to ECR + GHCR
  build-amd64:
    name: Build AMD64
    needs: [test-build, detect-version]
    if: github.event_name == 'push' && (github.ref == 'refs/heads/main' || github.ref == 'refs/heads/staging' || github.ref == 'refs/heads/dev')
    if: >-
      github.event_name == 'push' &&
      (github.ref == 'refs/heads/main' || github.ref == 'refs/heads/staging')
    runs-on: blacksmith-8vcpu-ubuntu-2404
    permissions:
      contents: read
@@ -70,13 +132,13 @@ jobs:
            ecr_repo_secret: ECR_REALTIME
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        uses: actions/checkout@v6

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ github.ref == 'refs/heads/main' && secrets.AWS_ROLE_TO_ASSUME || github.ref == 'refs/heads/dev' && secrets.DEV_AWS_ROLE_TO_ASSUME || secrets.STAGING_AWS_ROLE_TO_ASSUME }}
          aws-region: ${{ github.ref == 'refs/heads/main' && secrets.AWS_REGION || github.ref == 'refs/heads/dev' && secrets.DEV_AWS_REGION || secrets.STAGING_AWS_REGION }}
          role-to-assume: ${{ github.ref == 'refs/heads/main' && secrets.AWS_ROLE_TO_ASSUME || secrets.STAGING_AWS_ROLE_TO_ASSUME }}
          aws-region: ${{ github.ref == 'refs/heads/main' && secrets.AWS_REGION || secrets.STAGING_AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
@@ -99,33 +161,33 @@ jobs:
      - name: Set up Docker Buildx
        uses: useblacksmith/setup-docker-builder@v1

      - name: Resolve ECR repo name
        id: ecr-repo
        run: echo "name=$ECR_REPO" >> $GITHUB_OUTPUT
        env:
          ECR_REPO: ${{ matrix.ecr_repo_secret == 'ECR_APP' && secrets.ECR_APP || matrix.ecr_repo_secret == 'ECR_MIGRATIONS' && secrets.ECR_MIGRATIONS || matrix.ecr_repo_secret == 'ECR_REALTIME' && secrets.ECR_REALTIME || '' }}

      - name: Generate tags
        id: meta
        run: |
          ECR_REGISTRY="${{ steps.login-ecr.outputs.registry }}"
          ECR_REPO="${{ secrets[matrix.ecr_repo_secret] }}"
          ECR_REPO="${{ steps.ecr-repo.outputs.name }}"
          GHCR_IMAGE="${{ matrix.ghcr_image }}"

          # ECR tags (always build for ECR)
          if [ "${{ github.ref }}" = "refs/heads/main" ]; then
            ECR_TAG="latest"
          elif [ "${{ github.ref }}" = "refs/heads/dev" ]; then
            ECR_TAG="dev"
          else
            ECR_TAG="staging"
          fi
          ECR_IMAGE="${ECR_REGISTRY}/${ECR_REPO}:${ECR_TAG}"

          # Build tags list
          TAGS="${ECR_IMAGE}"

          # Add GHCR tags only for main branch
          if [ "${{ github.ref }}" = "refs/heads/main" ]; then
            GHCR_AMD64="${GHCR_IMAGE}:latest-amd64"
            GHCR_SHA="${GHCR_IMAGE}:${{ github.sha }}-amd64"
            TAGS="${TAGS},$GHCR_AMD64,$GHCR_SHA"

            # Add version tag if this is a release commit
            if [ "${{ needs.detect-version.outputs.is_release }}" = "true" ]; then
              VERSION="${{ needs.detect-version.outputs.version }}"
              GHCR_VERSION="${GHCR_IMAGE}:${VERSION}-amd64"
@@ -150,7 +212,7 @@ jobs:
  # Build ARM64 images for GHCR (main branch only, runs in parallel)
  build-ghcr-arm64:
    name: Build ARM64 (GHCR Only)
    needs: [test-build, detect-version]
    needs: [detect-version]
    runs-on: blacksmith-8vcpu-ubuntu-2404-arm
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    permissions:
@@ -169,7 +231,7 @@ jobs:

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        uses: actions/checkout@v6

      - name: Login to GHCR
        uses: docker/login-action@v3
@@ -256,6 +318,14 @@ jobs:
            docker manifest push "${IMAGE_BASE}:${VERSION}"
          fi

  # Run database migrations for dev
  migrate-dev:
    name: Migrate Dev DB
    needs: [build-dev]
    if: github.event_name == 'push' && github.ref == 'refs/heads/dev'
    uses: ./.github/workflows/migrations.yml
    secrets: inherit

  # Check if docs changed
  check-docs-changes:
    name: Check Docs Changes
@@ -264,10 +334,10 @@ jobs:
    outputs:
      docs_changed: ${{ steps.filter.outputs.docs }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/checkout@v6
        with:
          fetch-depth: 2 # Need at least 2 commits to detect changes
      - uses: dorny/paths-filter@v3
      - uses: dorny/paths-filter@v4
        id: filter
        with:
          filters: |
@@ -294,7 +364,7 @@ jobs:
      contents: write
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        uses: actions/checkout@v6
        with:
          fetch-depth: 0
2
.github/workflows/docs-embeddings.yml
vendored
@@ -15,7 +15,7 @@ jobs:

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        uses: actions/checkout@v6

      - name: Setup Bun
        uses: oven-sh/setup-bun@v2
4
.github/workflows/i18n.yml
vendored
@@ -14,7 +14,7 @@ jobs:

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        uses: actions/checkout@v6
        with:
          ref: staging
          token: ${{ secrets.GH_PAT }}
@@ -115,7 +115,7 @@ jobs:

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        uses: actions/checkout@v6
        with:
          ref: staging
4
.github/workflows/images.yml
vendored
@@ -31,7 +31,7 @@ jobs:

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        uses: actions/checkout@v6

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
@@ -117,7 +117,7 @@ jobs:

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        uses: actions/checkout@v6

      - name: Login to GHCR
        uses: docker/login-action@v3
4
.github/workflows/migrations.yml
vendored
@@ -14,7 +14,7 @@ jobs:

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        uses: actions/checkout@v6

      - name: Setup Bun
        uses: oven-sh/setup-bun@v2
@@ -38,5 +38,5 @@ jobs:
      - name: Apply migrations
        working-directory: ./packages/db
        env:
          DATABASE_URL: ${{ github.ref == 'refs/heads/main' && secrets.DATABASE_URL || secrets.STAGING_DATABASE_URL }}
          DATABASE_URL: ${{ github.ref == 'refs/heads/main' && secrets.DATABASE_URL || github.ref == 'refs/heads/dev' && secrets.DEV_DATABASE_URL || secrets.STAGING_DATABASE_URL }}
        run: bunx drizzle-kit migrate --config=./drizzle.config.ts
2
.github/workflows/publish-cli.yml
vendored
@@ -14,7 +14,7 @@ jobs:
    runs-on: blacksmith-4vcpu-ubuntu-2404
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        uses: actions/checkout@v6

      - name: Setup Bun
        uses: oven-sh/setup-bun@v2
2
.github/workflows/publish-python-sdk.yml
vendored
@@ -14,7 +14,7 @@ jobs:
    runs-on: blacksmith-4vcpu-ubuntu-2404
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        uses: actions/checkout@v6

      - name: Setup Python
        uses: actions/setup-python@v5
2
.github/workflows/publish-ts-sdk.yml
vendored
@@ -14,7 +14,7 @@ jobs:
    runs-on: blacksmith-4vcpu-ubuntu-2404
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        uses: actions/checkout@v6

      - name: Setup Bun
        uses: oven-sh/setup-bun@v2
15
.github/workflows/test-build.yml
vendored
@@ -14,7 +14,7 @@ jobs:

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        uses: actions/checkout@v6

      - name: Setup Bun
        uses: oven-sh/setup-bun@v2
@@ -103,9 +103,18 @@ jobs:
      - name: Lint code
        run: bun run lint:check

      - name: Enforce monorepo boundaries
        run: bun run check:boundaries

      - name: Verify realtime prune graph
        run: bun run check:realtime-prune

      - name: Type-check realtime server
        run: bunx turbo run type-check --filter=@sim/realtime

      - name: Run tests with coverage
        env:
          NODE_OPTIONS: '--no-warnings'
          NODE_OPTIONS: '--no-warnings --max-old-space-size=8192'
          NEXT_PUBLIC_APP_URL: 'https://www.sim.ai'
          DATABASE_URL: 'postgresql://postgres:postgres@localhost:5432/simstudio'
          ENCRYPTION_KEY: '7cf672e460e430c1fba707575c2b0e2ad5a99dddf9b7b7e3b5646e630861db1c' # dummy key for CI only
@@ -127,7 +136,7 @@ jobs:

      - name: Build application
        env:
          NODE_OPTIONS: '--no-warnings'
          NODE_OPTIONS: '--no-warnings --max-old-space-size=8192'
          NEXT_PUBLIC_APP_URL: 'https://www.sim.ai'
          DATABASE_URL: 'postgresql://postgres:postgres@localhost:5432/simstudio'
          STRIPE_SECRET_KEY: 'dummy_key_for_ci_only'
47
AGENTS.md
@@ -7,7 +7,7 @@ You are a professional software engineer. All code must follow best practices: a
- **Logging**: Import `createLogger` from `@sim/logger`. Use `logger.info`, `logger.warn`, `logger.error` instead of `console.log`
- **Comments**: Use TSDoc for documentation. No `====` separators. No non-TSDoc comments
- **Styling**: Never update global styles. Keep all styling local to components
- **ID Generation**: Never use `crypto.randomUUID()`, `nanoid`, or `uuid` package. Use `generateId()` (UUID v4) or `generateShortId()` (compact) from `@/lib/core/utils/uuid`
- **ID Generation**: Never use `crypto.randomUUID()`, `nanoid`, or `uuid` package. Use `generateId()` (UUID v4) or `generateShortId()` (compact) from `@sim/utils/id`
- **Package Manager**: Use `bun` and `bunx`, not `npm` and `npx`

## Architecture

@@ -20,19 +20,42 @@ You are a professional software engineer. All code must follow best practices: a

### Root Structure
```
apps/sim/
├── app/          # Next.js app router (pages, API routes)
├── blocks/       # Block definitions and registry
├── components/   # Shared UI (emcn/, ui/)
├── executor/     # Workflow execution engine
├── hooks/        # Shared hooks (queries/, selectors/)
├── lib/          # App-wide utilities
├── providers/    # LLM provider integrations
├── stores/       # Zustand stores
├── tools/        # Tool definitions
└── triggers/     # Trigger definitions
apps/
├── sim/                   # Next.js app (UI + API routes + workflow editor)
│   ├── app/               # Next.js app router (pages, API routes)
│   ├── blocks/            # Block definitions and registry
│   ├── components/        # Shared UI (emcn/, ui/)
│   ├── executor/          # Workflow execution engine
│   ├── hooks/             # Shared hooks (queries/, selectors/)
│   ├── lib/               # App-wide utilities
│   ├── providers/         # LLM provider integrations
│   ├── stores/            # Zustand stores
│   ├── tools/             # Tool definitions
│   └── triggers/          # Trigger definitions
└── realtime/              # Bun Socket.IO server (collaborative canvas)
    └── src/               # auth, config, database, handlers, middleware,
                           # rooms, routes, internal/webhook-cleanup.ts

packages/
├── audit/                 # @sim/audit — recordAudit + AuditAction + AuditResourceType
├── auth/                  # @sim/auth — @sim/auth/verify (shared Better Auth verifier)
├── db/                    # @sim/db — drizzle schema + client
├── logger/                # @sim/logger
├── realtime-protocol/     # @sim/realtime-protocol — socket operation constants + zod schemas
├── security/              # @sim/security — safeCompare
├── tsconfig/              # shared tsconfig presets
├── utils/                 # @sim/utils
├── workflow-authz/        # @sim/workflow-authz — authorizeWorkflowByWorkspacePermission
├── workflow-persistence/  # @sim/workflow-persistence — raw load/save + subflow helpers
└── workflow-types/        # @sim/workflow-types — pure BlockState/Loop/Parallel/... types
```

### Package boundaries
- `apps/* → packages/*` only. Packages never import from `apps/*`.
- Each package has explicit subpath `exports` maps; no barrels that accidentally pull in heavy halves.
- `apps/realtime` intentionally avoids Next.js, React, the block/tool registry, provider SDKs, and the executor. CI enforces this via `scripts/check-monorepo-boundaries.ts` and `scripts/check-realtime-prune-graph.ts`.
- Auth is shared across services via the Better Auth "Shared Database Session" pattern: both apps read the same `BETTER_AUTH_SECRET` and point at the same DB via `@sim/db`.

### Naming Conventions
- Components: PascalCase (`WorkflowList`)
- Hooks: `use` prefix (`useWorkflowOperations`)
41
CLAUDE.md
@@ -4,10 +4,12 @@ You are a professional software engineer. All code must follow best practices: a
|
||||
|
||||
## Global Standards
|
||||
|
||||
- **Logging**: Import `createLogger` from `@sim/logger`. Use `logger.info`, `logger.warn`, `logger.error` instead of `console.log`
|
||||
- **Logging**: Import `createLogger` from `@sim/logger`. Use `logger.info`, `logger.warn`, `logger.error` instead of `console.log`. Inside API routes wrapped with `withRouteHandler`, loggers automatically include the request ID — no manual `withMetadata({ requestId })` needed
|
||||
- **API Route Handlers**: All API route handlers (`GET`, `POST`, `PUT`, `DELETE`, `PATCH`) must be wrapped with `withRouteHandler` from `@/lib/core/utils/with-route-handler`. This provides request ID tracking, automatic error logging for 4xx/5xx responses, and unhandled error catching. See "API Route Pattern" section below
|
||||
- **Comments**: Use TSDoc for documentation. No `====` separators. No non-TSDoc comments
|
||||
- **Styling**: Never update global styles. Keep all styling local to components
|
||||
- **ID Generation**: Never use `crypto.randomUUID()`, `nanoid`, or `uuid` package. Use `generateId()` (UUID v4) or `generateShortId()` (compact) from `@/lib/core/utils/uuid`
|
||||
- **ID Generation**: Never use `crypto.randomUUID()`, `nanoid`, or `uuid` package. Use `generateId()` (UUID v4) or `generateShortId()` (compact) from `@sim/utils/id`
|
||||
- **Common Utilities**: Use shared helpers from `@sim/utils` instead of inline implementations. `sleep(ms)` from `@sim/utils/helpers` for delays, `toError(e)` from `@sim/utils/errors` to normalize caught values.
|
||||
- **Package Manager**: Use `bun` and `bunx`, not `npm` and `npx`
|
||||
|
||||
## Architecture

@@ -92,6 +94,41 @@ export function Component({ requiredProp, optionalProp = false }: ComponentProps

Extract when: 50+ lines, used in 2+ files, or has own state/logic. Keep inline when: < 10 lines, single use, purely presentational.

## API Route Pattern

Every API route handler must be wrapped with `withRouteHandler`. This sets up `AsyncLocalStorage`-based request context so all loggers in the request lifecycle automatically include the request ID.

```typescript
import { createLogger } from '@sim/logger'
import type { NextRequest } from 'next/server'
import { NextResponse } from 'next/server'
import { withRouteHandler } from '@/lib/core/utils/with-route-handler'

const logger = createLogger('MyAPI')

// Simple route
export const GET = withRouteHandler(async (request: NextRequest) => {
  logger.info('Handling request') // automatically includes {requestId=...}
  return NextResponse.json({ ok: true })
})

// Route with params
export const DELETE = withRouteHandler(async (
  request: NextRequest,
  { params }: { params: Promise<{ id: string }> }
) => {
  const { id } = await params
  return NextResponse.json({ deleted: id })
})

// Composing with other middleware (withRouteHandler wraps the outermost layer)
export const POST = withRouteHandler(withAdminAuth(async (request) => {
  return NextResponse.json({ ok: true })
}))
```

Never export a bare `async function GET/POST/...` — always use `export const METHOD = withRouteHandler(...)`.
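A rough sketch of how a wrapper like this can thread a request ID through `AsyncLocalStorage` — the names and internals below are illustrative assumptions, not the actual `with-route-handler` implementation:

```typescript
import { AsyncLocalStorage } from 'node:async_hooks'

// Illustrative request context; the real implementation may carry more fields.
interface RequestContext {
  requestId: string
}

const store = new AsyncLocalStorage<RequestContext>()

// A context-aware logger: anything called inside store.run() sees the request ID.
function logLine(message: string): string {
  const ctx = store.getStore()
  return ctx ? `[requestId=${ctx.requestId}] ${message}` : message
}

// Hypothetical wrapper: each invocation of the handler runs in a fresh context,
// so async work spawned by the handler inherits the same requestId.
function withRouteHandler<A extends unknown[], R>(
  handler: (...args: A) => Promise<R>
): (...args: A) => Promise<R> {
  return (...args) => {
    const requestId = Math.random().toString(36).slice(2, 10)
    return store.run({ requestId }, () => handler(...args))
  }
}
```

Because the context lives in `AsyncLocalStorage` rather than a function argument, deeply nested helpers can log with the request ID without it being passed down explicitly.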
## Hooks

```typescript
14
README.md
@@ -74,10 +74,6 @@ docker compose -f docker-compose.prod.yml up -d

Open [http://localhost:3000](http://localhost:3000)

#### Background worker note

The Docker Compose stack starts a dedicated worker container by default. If `REDIS_URL` is not configured, the worker will start, log that it is idle, and do no queue processing. This is expected. Queue-backed API, webhook, and schedule execution requires Redis; installs without Redis continue to use the inline execution path.

Sim also supports local models via [Ollama](https://ollama.ai) and [vLLM](https://docs.vllm.ai/) — see the [Docker self-hosting docs](https://docs.sim.ai/self-hosting/docker) for setup details.

### Self-hosted: Manual Setup

@@ -123,12 +119,10 @@ cd packages/db && bun run db:migrate
5. Start development servers:

```bash
bun run dev:full # Starts Next.js app, realtime socket server, and the BullMQ worker
bun run dev:full # Starts Next.js app and realtime socket server
```

If `REDIS_URL` is not configured, the worker will remain idle and execution continues inline.

Or run separately: `bun run dev` (Next.js), `cd apps/sim && bun run dev:sockets` (realtime), and `cd apps/sim && bun run worker` (BullMQ worker).
Or run separately: `bun run dev` (Next.js) and `cd apps/sim && bun run dev:sockets` (realtime).

## Copilot API Keys

@@ -148,13 +142,15 @@ See the [environment variables reference](https://docs.sim.ai/self-hosting/envir
- **Database**: PostgreSQL with [Drizzle ORM](https://orm.drizzle.team)
- **Authentication**: [Better Auth](https://better-auth.com)
- **UI**: [Shadcn](https://ui.shadcn.com/), [Tailwind CSS](https://tailwindcss.com)
- **State Management**: [Zustand](https://zustand-demo.pmnd.rs/)
- **Streaming Markdown**: [Streamdown](https://github.com/vercel/streamdown)
- **State Management**: [Zustand](https://zustand-demo.pmnd.rs/), [TanStack Query](https://tanstack.com/query)
- **Flow Editor**: [ReactFlow](https://reactflow.dev/)
- **Docs**: [Fumadocs](https://fumadocs.vercel.app/)
- **Monorepo**: [Turborepo](https://turborepo.org/)
- **Realtime**: [Socket.io](https://socket.io/)
- **Background Jobs**: [Trigger.dev](https://trigger.dev/)
- **Remote Code Execution**: [E2B](https://www.e2b.dev/)
- **Isolated Code Execution**: [isolated-vm](https://github.com/laverdet/isolated-vm)

## Contributing
@@ -17,9 +17,10 @@ import { ResponseSection } from '@/components/ui/response-section'
import { i18n } from '@/lib/i18n'
import { getApiSpecContent, openapi } from '@/lib/openapi'
import { type PageData, source } from '@/lib/source'
import { DOCS_BASE_URL } from '@/lib/urls'

const SUPPORTED_LANGUAGES: Set<string> = new Set(i18n.languages)
const BASE_URL = 'https://docs.sim.ai'
const BASE_URL = DOCS_BASE_URL

const OG_LOCALE_MAP: Record<string, string> = {
en: 'en_US',
@@ -280,12 +281,12 @@ export async function generateMetadata(props: {
title: data.title,
description:
data.description ||
'Documentation for Sim — the open-source platform to build AI agents and run your agentic workforce.',
'Documentation for Sim — the open-source AI workspace where teams build, deploy, and manage AI agents.',
keywords: [
'AI agents',
'agentic workforce',
'AI agent platform',
'agentic workflows',
'AI workspace',
'AI agent builder',
'build AI agents',
'LLM orchestration',
'AI automation',
'knowledge base',
@@ -300,7 +301,7 @@ export async function generateMetadata(props: {
title: data.title,
description:
data.description ||
'Documentation for Sim — the open-source platform to build AI agents and run your agentic workforce.',
'Documentation for Sim — the open-source AI workspace where teams build, deploy, and manage AI agents.',
url: fullUrl,
siteName: 'Sim Documentation',
type: 'article',
@@ -322,7 +323,7 @@ export async function generateMetadata(props: {
title: data.title,
description:
data.description ||
'Documentation for Sim — the open-source platform to build AI agents and run your agentic workforce.',
'Documentation for Sim — the open-source AI workspace where teams build, deploy, and manage AI agents.',
images: [ogImageUrl],
creator: '@simdotai',
site: '@simdotai',
@@ -3,7 +3,6 @@ import { defineI18nUI } from 'fumadocs-ui/i18n'
import { DocsLayout } from 'fumadocs-ui/layouts/docs'
import { RootProvider } from 'fumadocs-ui/provider/next'
import { Geist_Mono, Inter } from 'next/font/google'
import Script from 'next/script'
import {
SidebarFolder,
SidebarItem,
@@ -13,6 +12,7 @@ import { Navbar } from '@/components/navbar/navbar'
import { SimLogoFull } from '@/components/ui/sim-logo'
import { i18n } from '@/lib/i18n'
import { source } from '@/lib/source'
import { DOCS_BASE_URL } from '@/lib/urls'
import '../global.css'

const inter = Inter({
@@ -66,15 +66,15 @@ export default async function Layout({ children, params }: LayoutProps) {
'@type': 'WebSite',
name: 'Sim Documentation',
description:
'Documentation for Sim — the open-source platform to build AI agents and run your agentic workforce. Connect 1,000+ integrations and LLMs to deploy and orchestrate agentic workflows.',
url: 'https://docs.sim.ai',
'Documentation for Sim — the open-source AI workspace where teams build, deploy, and manage AI agents. Connect 1,000+ integrations and every major LLM.',
url: DOCS_BASE_URL,
publisher: {
'@type': 'Organization',
name: 'Sim',
url: 'https://sim.ai',
logo: {
'@type': 'ImageObject',
url: 'https://docs.sim.ai/static/logo.png',
url: `${DOCS_BASE_URL}/static/logo.png`,
},
},
inLanguage: lang,
@@ -82,7 +82,7 @@ export default async function Layout({ children, params }: LayoutProps) {
'@type': 'SearchAction',
target: {
'@type': 'EntryPoint',
urlTemplate: 'https://docs.sim.ai/api/search?q={search_term_string}',
urlTemplate: `${DOCS_BASE_URL}/api/search?q={search_term_string}`,
},
'query-input': 'required name=search_term_string',
},
@@ -101,7 +101,6 @@ export default async function Layout({ children, params }: LayoutProps) {
/>
</head>
<body className='flex min-h-screen flex-col font-sans'>
<Script src='https://assets.onedollarstats.com/stonks.js' strategy='lazyOnload' />
<RootProvider i18n={provider(lang)}>
<Navbar />
<DocsLayout
@@ -1,4 +1,5 @@
import { DocsBody, DocsPage } from 'fumadocs-ui/page'
import { DocsPage } from 'fumadocs-ui/page'
import Link from 'next/link'

export const metadata = {
title: 'Page Not Found',
@@ -7,17 +8,21 @@ export const metadata = {
export default function NotFound() {
return (
<DocsPage>
<DocsBody>
<div className='flex min-h-[60vh] flex-col items-center justify-center text-center'>
<h1 className='mb-4 bg-gradient-to-b from-[#47d991] to-[#33c482] bg-clip-text font-bold text-8xl text-transparent'>
404
</h1>
<h2 className='mb-2 font-semibold text-2xl text-foreground'>Page Not Found</h2>
<p className='text-muted-foreground'>
The page you're looking for doesn't exist or has been moved.
</p>
</div>
</DocsBody>
<div className='flex min-h-[70vh] flex-col items-center justify-center gap-4 text-center'>
<h1 className='bg-gradient-to-b from-[#47d991] to-[#33c482] bg-clip-text font-bold text-8xl text-transparent'>
404
</h1>
<h2 className='font-semibold text-2xl text-foreground'>Page Not Found</h2>
<p className='text-muted-foreground'>
The page you're looking for doesn't exist or has been moved.
</p>
<Link
href='/'
className='ml-1 flex items-center rounded-[8px] bg-[#33c482] px-2.5 py-1.5 text-[13px] text-white transition-colors duration-200 hover:bg-[#2DAC72]'
>
Go home
</Link>
</div>
</DocsPage>
)
}
@@ -1,5 +1,6 @@
import type { ReactNode } from 'react'
import type { Viewport } from 'next'
import { DOCS_BASE_URL } from '@/lib/urls'

export default function RootLayout({ children }: { children: ReactNode }) {
return children
@@ -12,31 +13,29 @@ export const viewport: Viewport = {
}

export const metadata = {
metadataBase: new URL('https://docs.sim.ai'),
metadataBase: new URL(DOCS_BASE_URL),
title: {
default: 'Sim Documentation — Build AI Agents & Run Your Agentic Workforce',
default: 'Sim Documentation — The AI Workspace for Teams',
template: '%s | Sim Docs',
},
description:
'Documentation for Sim — the open-source platform to build AI agents and run your agentic workforce. Connect 1,000+ integrations and LLMs to deploy and orchestrate agentic workflows.',
'Documentation for Sim — the open-source AI workspace where teams build, deploy, and manage AI agents. Connect 1,000+ integrations and every major LLM.',
applicationName: 'Sim Docs',
generator: 'Next.js',
referrer: 'origin-when-cross-origin' as const,
keywords: [
'AI workspace',
'AI agent builder',
'AI agents',
'agentic workforce',
'AI agent platform',
'build AI agents',
'open-source AI agents',
'agentic workflows',
'LLM orchestration',
'AI integrations',
'knowledge base',
'AI automation',
'workflow builder',
'AI workflow orchestration',
'visual workflow builder',
'enterprise AI',
'AI agent deployment',
'intelligent automation',
'AI tools',
],
authors: [{ name: 'Sim Team', url: 'https://sim.ai' }],
@@ -63,14 +62,14 @@ export const metadata = {
type: 'website',
locale: 'en_US',
alternateLocale: ['es_ES', 'fr_FR', 'de_DE', 'ja_JP', 'zh_CN'],
url: 'https://docs.sim.ai',
url: DOCS_BASE_URL,
siteName: 'Sim Documentation',
title: 'Sim Documentation — Build AI Agents & Run Your Agentic Workforce',
title: 'Sim Documentation — The AI Workspace for Teams',
description:
'Documentation for Sim — the open-source platform to build AI agents and run your agentic workforce. Connect 1,000+ integrations and LLMs to deploy and orchestrate agentic workflows.',
'Documentation for Sim — the open-source AI workspace where teams build, deploy, and manage AI agents. Connect 1,000+ integrations and every major LLM.',
images: [
{
url: 'https://docs.sim.ai/api/og?title=Sim%20Documentation',
url: `${DOCS_BASE_URL}/api/og?title=Sim%20Documentation`,
width: 1200,
height: 630,
alt: 'Sim Documentation',
@@ -79,12 +78,12 @@ export const metadata = {
},
twitter: {
card: 'summary_large_image',
title: 'Sim Documentation — Build AI Agents & Run Your Agentic Workforce',
title: 'Sim Documentation — The AI Workspace for Teams',
description:
'Documentation for Sim — the open-source platform to build AI agents and run your agentic workforce. Connect 1,000+ integrations and LLMs to deploy and orchestrate agentic workflows.',
'Documentation for Sim — the open-source AI workspace where teams build, deploy, and manage AI agents. Connect 1,000+ integrations and every major LLM.',
creator: '@simdotai',
site: '@simdotai',
images: ['https://docs.sim.ai/api/og?title=Sim%20Documentation'],
images: [`${DOCS_BASE_URL}/api/og?title=Sim%20Documentation`],
},
robots: {
index: true,
@@ -98,15 +97,15 @@ export const metadata = {
},
},
alternates: {
canonical: 'https://docs.sim.ai',
canonical: DOCS_BASE_URL,
languages: {
'x-default': 'https://docs.sim.ai',
en: 'https://docs.sim.ai',
es: 'https://docs.sim.ai/es',
fr: 'https://docs.sim.ai/fr',
de: 'https://docs.sim.ai/de',
ja: 'https://docs.sim.ai/ja',
zh: 'https://docs.sim.ai/zh',
'x-default': DOCS_BASE_URL,
en: DOCS_BASE_URL,
es: `${DOCS_BASE_URL}/es`,
fr: `${DOCS_BASE_URL}/fr`,
de: `${DOCS_BASE_URL}/de`,
ja: `${DOCS_BASE_URL}/ja`,
zh: `${DOCS_BASE_URL}/zh`,
},
},
}
@@ -1,9 +1,10 @@
import { source } from '@/lib/source'
import { DOCS_BASE_URL } from '@/lib/urls'

export const revalidate = false

export async function GET() {
const baseUrl = 'https://docs.sim.ai'
const baseUrl = DOCS_BASE_URL

try {
const pages = source.getPages().filter((page) => {
@@ -37,9 +38,9 @@ export async function GET() {

const manifest = `# Sim Documentation

> The open-source platform to build AI agents and run your agentic workforce.
> The open-source AI workspace where teams build, deploy, and manage AI agents.

Sim is the open-source platform to build AI agents and run your agentic workforce. Connect 1,000+ integrations and LLMs to deploy and orchestrate agentic workflows. Create agents, workflows, knowledge bases, tables, and docs. Trusted by over 100,000 builders.
Sim is the open-source AI workspace where teams build, deploy, and manage AI agents. Connect 1,000+ integrations and every major LLM to create agents that automate real work — visually, conversationally, or with code. Trusted by over 100,000 builders.

## Documentation Overview

@@ -61,7 +62,7 @@ ${Object.entries(sections)

- Full documentation content: ${baseUrl}/llms-full.txt
- Individual page content: ${baseUrl}/llms.mdx/[page-path]
- API documentation: ${baseUrl}/sdks/
- API documentation: ${baseUrl}/api-reference/
- Tool integrations: ${baseUrl}/tools/

## Statistics
@@ -1,70 +1,18 @@
|
||||
import { DOCS_BASE_URL } from '@/lib/urls'
|
||||
|
||||
export const revalidate = false
|
||||
|
||||
export async function GET() {
|
||||
const baseUrl = 'https://docs.sim.ai'
|
||||
const baseUrl = DOCS_BASE_URL
|
||||
|
||||
const robotsTxt = `# Robots.txt for Sim Documentation
|
||||
|
||||
User-agent: *
|
||||
Allow: /
|
||||
|
||||
# Search engine crawlers
|
||||
User-agent: Googlebot
|
||||
Allow: /
|
||||
|
||||
User-agent: Bingbot
|
||||
Allow: /
|
||||
|
||||
User-agent: Slurp
|
||||
Allow: /
|
||||
|
||||
User-agent: DuckDuckBot
|
||||
Allow: /
|
||||
|
||||
User-agent: Baiduspider
|
||||
Allow: /
|
||||
|
||||
User-agent: YandexBot
|
||||
Allow: /
|
||||
|
||||
# AI and LLM crawlers - explicitly allowed for documentation indexing
|
||||
User-agent: GPTBot
|
||||
Allow: /
|
||||
|
||||
User-agent: ChatGPT-User
|
||||
Allow: /
|
||||
|
||||
User-agent: CCBot
|
||||
Allow: /
|
||||
|
||||
User-agent: anthropic-ai
|
||||
Allow: /
|
||||
|
||||
User-agent: Claude-Web
|
||||
Allow: /
|
||||
|
||||
User-agent: Applebot
|
||||
Allow: /
|
||||
|
||||
User-agent: PerplexityBot
|
||||
Allow: /
|
||||
|
||||
User-agent: Diffbot
|
||||
Allow: /
|
||||
|
||||
User-agent: FacebookBot
|
||||
Allow: /
|
||||
|
||||
User-agent: cohere-ai
|
||||
Allow: /
|
||||
|
||||
# Disallow admin and internal paths (if any exist)
|
||||
Disallow: /.next/
|
||||
Disallow: /api/internal/
|
||||
Disallow: /_next/static/
|
||||
Disallow: /admin/
|
||||
|
||||
# Allow but don't prioritize these
|
||||
Allow: /
|
||||
Allow: /api/search
|
||||
Allow: /llms.txt
|
||||
Allow: /llms-full.txt
|
||||
@@ -73,23 +21,12 @@ Allow: /llms.mdx/
|
||||
# Sitemaps
|
||||
Sitemap: ${baseUrl}/sitemap.xml
|
||||
|
||||
# Crawl delay for aggressive bots (optional)
|
||||
# Crawl-delay: 1
|
||||
|
||||
# Additional resources for AI indexing
|
||||
# See https://github.com/AnswerDotAI/llms-txt for more info
|
||||
# LLM-friendly content:
|
||||
# Manifest: ${baseUrl}/llms.txt
|
||||
# Full content: ${baseUrl}/llms-full.txt
|
||||
# Individual pages: ${baseUrl}/llms.mdx/[page-path]
|
||||
|
||||
# Multi-language documentation available at:
|
||||
# ${baseUrl}/en - English
|
||||
# ${baseUrl}/es - Español
|
||||
# ${baseUrl}/fr - Français
|
||||
# ${baseUrl}/de - Deutsch
|
||||
# ${baseUrl}/ja - 日本語
|
||||
# ${baseUrl}/zh - 简体中文`
|
||||
# Individual pages: ${baseUrl}/llms.mdx/[page-path]`
|
||||
|
||||
return new Response(robotsTxt, {
|
||||
headers: {
|
||||
|
||||
42
apps/docs/app/sitemap.ts
Normal file
@@ -0,0 +1,42 @@
|
||||
import type { MetadataRoute } from 'next'
|
||||
import { i18n } from '@/lib/i18n'
|
||||
import { source } from '@/lib/source'
|
||||
import { DOCS_BASE_URL } from '@/lib/urls'
|
||||
|
||||
export const revalidate = 3600
|
||||
|
||||
export default function sitemap(): MetadataRoute.Sitemap {
|
||||
const baseUrl = DOCS_BASE_URL
|
||||
const languages = source.getLanguages()
|
||||
|
||||
const pagesBySlug = new Map<string, Map<string, string>>()
|
||||
for (const { language, pages } of languages) {
|
||||
for (const page of pages) {
|
||||
const key = page.slugs.join('/')
|
||||
if (!pagesBySlug.has(key)) {
|
||||
pagesBySlug.set(key, new Map())
|
||||
}
|
||||
pagesBySlug.get(key)!.set(language, `${baseUrl}${page.url}`)
|
||||
}
|
||||
}
|
||||
|
||||
const entries: MetadataRoute.Sitemap = []
|
||||
for (const [, localeMap] of pagesBySlug) {
|
||||
const defaultUrl = localeMap.get(i18n.defaultLanguage)
|
||||
if (!defaultUrl) continue
|
||||
|
||||
const langAlternates: Record<string, string> = {}
|
||||
for (const [lang, url] of localeMap) {
|
||||
langAlternates[lang] = url
|
||||
}
|
||||
|
||||
langAlternates['x-default'] = defaultUrl
|
||||
|
||||
entries.push({
|
||||
url: defaultUrl,
|
||||
alternates: { languages: langAlternates },
|
||||
})
|
||||
}
|
||||
|
||||
return entries
|
||||
}
|
||||
@@ -1,62 +0,0 @@
|
||||
import { i18n } from '@/lib/i18n'
|
||||
import { source } from '@/lib/source'
|
||||
|
||||
export const revalidate = 3600
|
||||
|
||||
export async function GET() {
|
||||
const baseUrl = 'https://docs.sim.ai'
|
||||
|
||||
const allPages = source.getPages()
|
||||
|
||||
const getPriority = (url: string): string => {
|
||||
if (url === '/introduction' || url === '/') return '1.0'
|
||||
if (url === '/getting-started') return '0.9'
|
||||
if (url.match(/^\/[^/]+$/)) return '0.8'
|
||||
if (url.includes('/sdks/') || url.includes('/tools/')) return '0.7'
|
||||
return '0.6'
|
||||
}
|
||||
|
||||
const urls = allPages
|
||||
.flatMap((page) => {
|
||||
const urlWithoutLang = page.url.replace(/^\/[a-z]{2}\//, '/')
|
||||
|
||||
return i18n.languages.map((lang) => {
|
||||
const url =
|
||||
lang === i18n.defaultLanguage
|
||||
? `${baseUrl}${urlWithoutLang}`
|
||||
: `${baseUrl}/${lang}${urlWithoutLang}`
|
||||
|
||||
return ` <url>
|
||||
<loc>${url}</loc>
|
||||
<priority>${getPriority(urlWithoutLang)}</priority>
|
||||
${i18n.languages.length > 1 ? generateAlternateLinks(baseUrl, urlWithoutLang) : ''}
|
||||
</url>`
|
||||
})
|
||||
})
|
||||
.join('\n')
|
||||
|
||||
const sitemap = `<?xml version="1.0" encoding="UTF-8"?>
|
||||
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:xhtml="http://www.w3.org/1999/xhtml">
|
||||
${urls}
|
||||
</urlset>`
|
||||
|
||||
return new Response(sitemap, {
|
||||
headers: {
|
||||
'Content-Type': 'application/xml',
|
||||
'Cache-Control': 'public, max-age=3600, s-maxage=3600',
|
||||
},
|
||||
})
|
||||
}
|
||||
|
||||
function generateAlternateLinks(baseUrl: string, urlWithoutLang: string): string {
|
||||
const langLinks = i18n.languages
|
||||
.map((lang) => {
|
||||
const url =
|
||||
lang === i18n.defaultLanguage
|
||||
? `${baseUrl}${urlWithoutLang}`
|
||||
: `${baseUrl}/${lang}${urlWithoutLang}`
|
||||
return ` <xhtml:link rel="alternate" hreflang="${lang}" href="${url}" />`
|
||||
})
|
||||
.join('\n')
|
||||
return `${langLinks}\n <xhtml:link rel="alternate" hreflang="x-default" href="${baseUrl}${urlWithoutLang}" />`
|
||||
}
|
||||
@@ -28,6 +28,47 @@ export function AgentMailIcon(props: SVGProps<SVGSVGElement>) {
  )
}

export function AgentPhoneIcon(props: SVGProps<SVGSVGElement>) {
  return (
    <svg {...props} viewBox='0 0 150 150' xmlns='http://www.w3.org/2000/svg'>
      <path
        fill='#23AF58'
        stroke='#007F3F'
        strokeWidth='0.15'
        strokeMiterlimit='10'
        d='m139.6 53.3c-1.4-2.3-4.9-3.3-7.6-4.8-2.7-1.3-4.2-2.4-5.7-3.6-1.9-1-2.5-2.7-3.3-3.2s-2.7-1.4-4.5 1.3c-2 2.7-4.5 6.6-6.6 11.1-2.3 5.4-6.3 14.9-6.3 18.9 0.5 4.9 3.1 4.6 6.1 7.2 2.5 2.1 2.8 5.8 1.5 12.5-1.3 6.6-4 12.8-7.8 19.2-3.3 5.1-5.8 8.7-10 9.1-5.3 0.5-12.5-3.1-16.8-5.6-1-0.6-2.5-0.9-3.8-0.2-1.3 0.5-2.2 1.6-3.2 3.3-1.5 2.5-4.6 7.7-5.8 12.2-0.5 3 0 6.4 2.9 9 1.4 1.2 2.8 2.5 4.4 3.4 5 2.8 9.6 4.5 16.5 4.9 5.3 0.2 9.3-1 13.4-3.1 2.4-1.3 6.6-4.2 9.6-7.3l1.1-1.2c2.8-3.1 8.8-10 11.6-14.5 2.3-3.5 4.8-7.4 6.9-12.3 2.9-6.7 4.4-14 5-17.9 1.2-7 2.4-17.5 3.4-31.1 0.1-4.3-0.3-6.1-1-7.3zm-4.5 6.7c-0.5 9.5-1.9 23.3-3.1 30.1-0.9 4.5-2.4 9.6-3.8 13.4-1.1 2.6-3.1 7-5.6 10.8-3.4 5.3-8.4 11.6-12 15.8-6.4 6.6-10.2 9.6-14.2 10.8-2.2 0.9-3.8 1.2-7 1.2-3.4-0.1-8-0.7-11.3-2.2-3-1.2-7-4-6.9-6.8 0.4-3.2 3.3-9.6 5.2-11.9 0.2-0.3 0.5-0.3 0.7-0.2 2.5 1.1 6 3.2 9.6 4.5 2.4 0.9 4.8 1.4 7.3 1.4 3.9 0 6.7-1.2 9.5-3.2 5.6-4.6 9-10.8 12.1-17.5 2-4.3 4.1-11.6 4.4-18.3 0.1-4.9-1.1-8.9-4.5-12.2-1.1-0.7-3-2.1-3-2.8 0-4.2 3.9-13 8.9-22.9 0.2-0.7 0.5-1 1.1-0.7 1.1 0.6 3 1.4 4.6 2.4 2.1 1 5.4 2.4 7.1 3.9 0.9 0.4 1 3 0.9 4.4z'
      />
      <path
        fill='#23AF58'
        d='m104.7 27.8c-1.3-1.5-3.3-1.3-6.2-1.5l-1.9 0.2-7-0.2-31.5 0.2 1.5-9.3c2-1.1 5.1-3.5 5.8-6.3 1-2.8 0.2-5.9-2-7.4-2.3-1.9-5.8-2.4-9.3-0.8-1.6 1-4.7 3.4-5.4 6.9-0.8 4.1 2.4 6.7 4.7 7.9l-1.5 9.1-17.2 0.9c-12.3 1.1-16.3 1.2-20.6 4.3-2 1.3-3 4.5-3.4 9.8-0.6 11.3-0.7 18.7-0.6 28.3 0.4 11.2 0 36.6 3 39.8l-1.2 0.3c-3.8 0.6-4 6.2-0.5 6.6l15.5-1 69.7-7.6c2.5-0.4 4.3-0.9 4.6-4.3l3.7-71.5c0-1.9 0.2-3.6-0.2-4.4zm-49.6-17.3c0.3-2.2 2.4-3 3.3-2.8 0.7 0.4 1 1.8 0 2.8-1.5 2-3.3 1.7-3.3 0zm40 90.2c-4 1-5.5 1.5-11.5 2.4-7.7 1-19.7 2.1-31.2 3.4l-33.8 2.9c-0.7 0.2-1-0.4-1-1-0.6-6.5-1.2-20.5-1.5-39.5l0.3-23.3c0.6-7.5 0.7-8.7 4.6-9.7 5.1-0.9 7.4-1.4 14.9-1.8l19.5-0.5 41.1-0.5c1.4 0 1.9 0.4 1.9 1.5l-3.3 66.1z'
      />
      <path
        fill='#23AF58'
        d='m38.9 52.4c-1.8 0-4 1.1-4.5 3.3-1 3.9 1 7.6 4.5 7.7 3.8 0 5-3.8 4.7-6.3-0.2-2-2-4.7-4.7-4.7z'
      />
      <path
        fill='#23AF58'
        d='m73.5 53.9c-1.8 0-4.3 1.5-4.4 4.5-0.1 3.2 2 5.3 4.3 5.3 2.5 0 4.2-1.7 4.2-4.8 0-3.2-1.7-4.8-4.1-5z'
      />
      <path
        fill='#23AF58'
        d='m72.1 77.1c-2.7 3.4-7.2 7.4-14.7 8.3-7.3 0.3-13.9-2.9-20-8.5-3.5-3.4-8 0-6.2 2.7 1.7 2.5 6.4 6.6 10.4 8.8 3.5 2 7.3 3.3 13.8 3.5 4.7 0 9.2-0.8 12.7-2.4 2.9-1.1 5-2.8 6-3.8 2.3-2.1 3.8-4.1 3.5-7.3-0.9-2.5-3.6-2.8-5.5-1.3z'
      />
    </svg>
  )
}
export function CrowdStrikeIcon(props: SVGProps<SVGSVGElement>) {
  return (
    <svg {...props} viewBox='0 0 768 500' fill='none' xmlns='http://www.w3.org/2000/svg'>
      <path
        d='m152.8 23.6c-.8.8.3 4.4 1.3 4.4.5 0 .9.5.9 1.2 0 1.5 7.2 15.9 8.8 17.6.6.7 1.2 1.7 1.2 2.2 0 1.3 8.6 13.7 12.8 18.4 10 11.2 28.2 28.1 35.2 32.7 1.4.9 3.9 2.9 5.5 4.3 1.7 1.5 4.8 3.9 7 5.4s4.9 3.5 5.9 4.4c1.1 1 3.8 3 6 4.5 2.3 1.6 5 3.6 6 4.5 1.1 1 3.8 3 6 4.5 2.3 1.5 4.3 3 4.6 3.3s3.7 3 7.5 6c3.9 3 7.5 5.9 8.1 6.5.6.5 4.6 4.1 8.9 8 14.6 13.1 25.8 25.3 32.6 35.5 6.6 10 9.2 14.4 15.1 25.8 3.1 6.2 7.7 14.4 10 18.3 2.4 3.9 5.4 8.9 6.7 11.2s3 4.8 3.8 5.5c.7.7 1.3 1.8 1.3 2.3s.5 1.5 1 2.2c.6.7 5.3 7.7 10.6 15.7 16.9 25.6 40.1 46 62.9 55.1 10.8 4.3 33.4 6 63 4.7 20.6-.8 44.2-.2 48.3 1.3 1.3.5 4.2.9 6.5.9 2.3.1 6 .7 8.2 1.5s4.9 1.5 6 1.5 3.3.7 4.9 1.5c1.5.8 3.5 1.5 4.3 1.5 1.6 0 7.1 2.4 19.8 8.6 18.3 9.1 33.1 19.9 48.7 35.6 10.4 10.5 10.8 10.8 11.4 8.2.8-3.1-.2-13.7-1.5-16.1-.5-1-2-4.1-3.3-6.8-2.5-5.6-7.2-12.3-14.2-20.4-2.7-3.3-4.6-6.5-4.6-7.9 0-4.1-3.9-10.5-8.5-13.9-5.8-4.3-23.6-13.3-26.3-13.3-.5 0-2.3-.7-3.8-1.5-1.6-.8-3.7-1.5-4.7-1.5-.9 0-2.5-.4-3.5-.9-.9-.5-5.1-1.9-9.2-3.1-13.7-4.1-22.5-7.2-25.6-9.1-3.3-2-6.4-7.2-6.4-10.7 0-2.6 3.8-14.4 5-15.6.6-.6 1-1.7 1-2.5 0-.9.6-2.8 1.4-4.3.8-1.4 1.9-5.8 2.6-9.7 3.3-19.4-7.2-31.8-41-48.7-4.5-2.2-12.7-5.9-16.5-7.5-1.1-.4-4.1-1.7-6.7-2.8-2.6-1.2-5.4-2.1-6.2-2.1s-1.8-.5-2.1-1c-.3-.6-1.3-1-2.2-1-.8 0-2.9-.6-4.6-1.4-1.8-.8-10.4-3.8-19.2-6.6-8.8-2.9-16.7-5.6-17.6-6-.9-.5-3.4-1.2-5.5-1.6-2.2-.3-4.3-1-4.9-1.4-.5-.4-2.6-1.1-4.5-1.4-1.9-.4-4.4-1.1-5.5-1.6-1.1-.4-4-1.3-6.5-2-2.5-.6-6.3-1.6-8.5-2.1-2.2-.6-4.9-1.5-6-1.9-1.1-.5-3.6-1.2-5.5-1.6-1.9-.3-4.1-1-5-1.4-.8-.4-4.9-1.8-9-3s-8.2-2.5-9-2.9c-.9-.5-3.1-1.2-5-1.6s-3.9-1-4.5-1.4c-.5-.4-4.4-1.8-8.5-3.1-4.1-1.2-7.9-2.6-8.5-3-.5-.4-3.9-1.7-7.5-3s-6.9-2.7-7.4-3.2c-.6-.4-1.6-.8-2.4-.8-2 0-11.4-4.3-35.2-15.9-16.7-8.2-32.1-16.6-35.5-19.3-.5-.4-4.6-3.1-9-6s-8.4-5.6-9-6c-.5-.4-5.2-3.9-10.4-7.8-18.1-13.5-44.4-38.8-55.5-53.5-2.1-2.8-3.9-5.1-4-5.3-.2-.1-.5.1-.8.4zm447.2 303c10.2 3.4 13.5 6 15.9 12.1 2.4 5.9-1.6 7.3-6.5 2.2-1.6-1.7-4.5-4-6.4-5.2s-4.1-2.7-4.8-3.4-1.9-1.3-2.7-1.3c-1.3 0-2.5-2.1-2.5-4.6 0-1.8 1.4-1.8 7 .2zm-519-240c0 1.1 8.5 17.9 10 19.7.6.7 2.7 3.4 4.7 6.2 7.3 9.8 18.7 21.5 33.9 34.5 3.8 3.3 14.2 11.1 17.5 13.2 1.4.9 3.2 2.3 4 3 .8.8 3.2 2.5 5.4 3.8s4.2 2.7 4.5 3c.6.8 30.1 18.3 39.5 23.5 7.4 4.2 15.4 8.2 43.5 21.9 16.5 8.1 19.6 9.7 31.7 17 9.1 5.5 23.7 16.9 31 24.2 4.1 4.1 7.6 7.4 7.8 7.4.3 0-.1-1.1-.7-2.5s-1.5-2.5-2-2.5c-.4 0-.8-.6-.8-1.3 0-.8-.9-2.5-2-3.8s-2.3-2.9-2.7-3.4c-7.3-9.6-13.3-15.4-31.7-31-2.5-2.2-19-13.4-26.7-18.2-6.1-3.9-18.4-10.8-30.9-17.5-3-1.7-5.9-3.4-6.5-3.8-.9-.7-5.2-3-19.5-10.8-9-4.8-31.8-18.9-35.5-21.9-.5-.5-2.8-2-5-3.3s-4.4-2.8-5-3.2c-.5-.4-5.9-4.4-12-8.9-6-4.5-11.2-8.5-11.5-8.8-.3-.4-2.7-2.4-5.5-4.5-5.6-4.2-12.8-10.8-26.2-24-5.1-5-9.3-8.6-9.3-8zm113.6 179.1c-1 1 15.8 16.6 26.9 24.9 5.5 4.1 10.5 7.8 11 8.2 2.6 2 11.6 7.2 12.4 7.2.5 0 1.6.6 2.3 1.2.7.7 2.9 2 4.8 3 13.3 6.3 19 8.8 20.4 8.8.8 0 1.7.4 2 .8.8 1.3 32.3 11.2 35.8 11.2 1 0 2.6.4 3.6 1 .9.5 3.7 1.4 6.2 1.9 8.7 1.9 13.5 3.1 15.5 4 1.1.5 5.4 1.9 9.5 3.2s7.9 2.6 8.5 3.1c.5.4 1.5.8 2.3.8s2.8.6 4.5 1.4c16.4 7.1 20.8 8.8 21.4 8.3.3-.4-.7-1.7-2.3-2.9-2.5-2-6.9-5.9-16.4-14.8-1.5-1.4-4.2-3.8-6-5.4-5-4.3-26-19.9-30.5-22.6-2.2-1.3-4.2-2.7-4.5-3-.3-.4-1.2-1-2-1.4s-4.2-2.2-7.5-4.1c-6.2-3.6-18.9-9.9-26-12.9-2.2-.9-4.7-2.1-5.5-2.5-.9-.5-3-1.2-4.8-1.5-1.7-.4-3.4-1.2-3.7-1.7-.4-.5-1.6-.9-2.8-.9-2.2.1-2.2.1-.2 1.2 1.1.6 2.2 1.4 2.5 1.8.3.3 2.5 1.8 5 3.3 5.3 3.1 15 11.7 15 13.3 0 .6-.7 1.7-1.5 2.4-1.2 1-4.1.9-14.5-.4-7.2-.9-14.1-2.1-15.3-2.6-1.2-.4-4.7-1.6-7.7-2.5-15.6-4.7-47-22.1-56.1-31-.9-.8-1.9-1.2-2.3-.8z'
        fill='currentColor'
      />
    </svg>
  )
}
export function SearchIcon(props: SVGProps<SVGSVGElement>) {
  return (
    <svg
@@ -2076,6 +2117,21 @@ export function BrandfetchIcon(props: SVGProps<SVGSVGElement>) {
  )
}
export function BrightDataIcon(props: SVGProps<SVGSVGElement>) {
  return (
    <svg {...props} viewBox='54 93 22 52' fill='none' xmlns='http://www.w3.org/2000/svg'>
      <path
        d='M62 95.21c.19 2.16 1.85 3.24 2.82 4.74.25.38.48.11.67-.16.21-.31.6-1.21 1.15-1.28-.35 1.38-.04 3.15.16 4.45.49 3.05-1.22 5.64-4.07 6.18-3.38.65-6.22-2.21-5.6-5.62.23-1.24 1.37-2.5.77-3.7-.85-1.7.54-.52.79-.22 1.04 1.2 1.21.09 1.45-.55.24-.63.31-1.31.47-1.97.19-.77.55-1.4 1.39-1.87z'
        fill='currentColor'
      />
      <path
        d='M66.70 123.37c0 3.69.04 7.38-.03 11.07-.02 1.04.31 1.48 1.32 1.49.29 0 .59.12.88.13.93.01 1.18.47 1.16 1.37-.05 2.19 0 2.19-2.24 2.19-3.48 0-6.96-.04-10.44.03-1.09.02-1.47-.33-1.3-1.36.02-.12.02-.26 0-.38-.28-1.39.39-1.96 1.7-1.9 1.36.06 1.76-.51 1.74-1.88-.09-5.17-.08-10.35 0-15.53.02-1.22-.32-1.87-1.52-2.17-.57-.14-1.47-.11-1.57-.85-.15-1.04-.05-2.11.01-3.17.02-.34.44-.35.73-.39 2.81-.39 5.63-.77 8.44-1.18.92-.14 1.15.2 1.14 1.09-.04 3.8-.02 7.62-.02 11.44z'
        fill='currentColor'
      />
    </svg>
  )
}
export function BrowserUseIcon(props: SVGProps<SVGSVGElement>) {
  return (
    <svg
@@ -3554,7 +3610,7 @@ export function FireworksIcon(props: SVGProps<SVGSVGElement>) {
    >
      <path
        d='M314.333 110.167L255.98 251.729l-58.416-141.562h-37.459l64 154.75c5.23 12.854 17.771 21.312 31.646 21.312s26.417-8.437 31.646-21.27l64.396-154.792h-37.459zm24.917 215.666L446 216.583l-14.562-34.77-116.584 119.562c-9.708 9.958-12.541 24.833-7.146 37.646 5.292 12.73 17.792 21.083 31.584 21.083l.042.063L506 359.75l-14.562-34.77-152.146.853h-.042zM66 216.5l14.563-34.77 116.583 119.562a34.592 34.592 0 017.146 37.646C199 351.667 186.5 360.02 172.708 360.02l-166.666-.375-.042.042 14.563-34.771 152.145.875L66 216.5z'
        fill='currentColor'
        fill='#5019c5'
      />
    </svg>
  )
@@ -3576,6 +3632,29 @@ export function OpenRouterIcon(props: SVGProps<SVGSVGElement>) {
  )
}
export function MondayIcon(props: SVGProps<SVGSVGElement>) {
  return (
    <svg
      {...props}
      viewBox='0 -50 256 256'
      xmlns='http://www.w3.org/2000/svg'
      preserveAspectRatio='xMidYMid'
    >
      <g>
        <path
          d='M31.8458633,153.488694 C20.3244423,153.513586 9.68073708,147.337265 3.98575204,137.321731 C-1.62714067,127.367831 -1.29055839,115.129325 4.86093879,105.498969 L62.2342919,15.4033556 C68.2125882,5.54538256 79.032489,-0.333585033 90.5563073,0.0146553508 C102.071737,0.290611552 112.546041,6.74705604 117.96667,16.9106216 C123.315033,27.0238906 122.646488,39.1914174 116.240607,48.6847625 L58.9037201,138.780375 C52.9943022,147.988884 42.7873202,153.537154 31.8458633,153.488694 L31.8458633,153.488694 Z'
          fill='#F62B54'
        />
        <path
          d='M130.25575,153.488484 C118.683837,153.488484 108.035731,147.301291 102.444261,137.358197 C96.8438154,127.431292 97.1804475,115.223704 103.319447,105.620522 L160.583402,15.7315506 C166.47539,5.73210989 177.327374,-0.284878136 188.929728,0.0146553508 C200.598885,0.269918151 211.174058,6.7973526 216.522421,17.0078646 C221.834319,27.2183766 221.056375,39.4588356 214.456008,48.9278699 L157.204209,138.816842 C151.313487,147.985468 141.153618,153.5168 130.25575,153.488484 Z'
          fill='#FFCC00'
        />
        <ellipse fill='#00CA72' cx='226.465527' cy='125.324379' rx='29.5375538' ry='28.9176274' />
      </g>
    </svg>
  )
}
export function MongoDBIcon(props: SVGProps<SVGSVGElement>) {
  return (
    <svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 128 128'>
@@ -4614,6 +4693,78 @@ export function DynamoDBIcon(props: SVGProps<SVGSVGElement>) {
  )
}
export function IAMIcon(props: SVGProps<SVGSVGElement>) {
|
||||
return (
|
||||
<svg {...props} viewBox='0 0 80 80' xmlns='http://www.w3.org/2000/svg'>
|
||||
<defs>
|
||||
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id='iamGradient'>
|
||||
<stop stopColor='#BD0816' offset='0%' />
|
||||
<stop stopColor='#FF5252' offset='100%' />
|
||||
</linearGradient>
|
||||
</defs>
|
||||
<rect fill='url(#iamGradient)' width='80' height='80' />
|
||||
<path
|
||||
d='M14,59 L66,59 L66,21 L14,21 L14,59 Z M68,20 L68,60 C68,60.552 67.553,61 67,61 L13,61 C12.447,61 12,60.552 12,60 L12,20 C12,19.448 12.447,19 13,19 L67,19 C67.553,19 68,19.448 68,20 L68,20 Z M44,48 L59,48 L59,46 L44,46 L44,48 Z M57,42 L62,42 L62,40 L57,40 L57,42 Z M44,42 L52,42 L52,40 L44,40 L44,42 Z M29,46 C29,45.449 28.552,45 28,45 C27.448,45 27,45.449 27,46 C27,46.551 27.448,47 28,47 C28.552,47 29,46.551 29,46 L29,46 Z M31,46 C31,47.302 30.161,48.401 29,48.816 L29,51 L27,51 L27,48.815 C25.839,48.401 25,47.302 25,46 C25,44.346 26.346,43 28,43 C29.654,43 31,44.346 31,46 L31,46 Z M19,53.993 L36.994,54 L36.996,50 L33,50 L33,48 L36.996,48 L36.998,45 L33,45 L33,43 L36.999,43 L37,40.007 L19.006,40 L19,53.993 Z M22,38.001 L34,38.006 L34,31 C34.001,28.697 31.197,26.677 28,26.675 L27.996,26.675 C24.804,26.675 22.004,28.696 22.002,31 L22,38.001 Z M17,54.992 L17.006,39 C17.006,38.734 17.111,38.48 17.299,38.292 C17.486,38.105 17.741,38 18.006,38 L20,38.001 L20.002,31 C20.004,27.512 23.59,24.675 27.996,24.675 L28,24.675 C32.412,24.677 36.001,27.515 36,31 L36,38.007 L38,38.008 C38.553,38.008 39,38.456 39,39.008 L38.994,55 C38.994,55.266 38.889,55.52 38.701,55.708 C38.514,55.895 38.259,56 37.994,56 L18,55.992 C17.447,55.992 17,55.544 17,54.992 L17,54.992 Z M60,36 L62,36 L62,34 L60,34 L60,36 Z M44,36 L55,36 L55,34 L44,34 L44,36 Z'
fill='#FFFFFF'
/>
</svg>
)
}

export function IdentityCenterIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 80 80' xmlns='http://www.w3.org/2000/svg'>
<defs>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id='identityCenterGradient'>
<stop stopColor='#BD0816' offset='0%' />
<stop stopColor='#FF5252' offset='100%' />
</linearGradient>
</defs>
<rect fill='url(#identityCenterGradient)' width='80' height='80' />
<path
d='M46.694,46.8194562 C47.376,46.1374562 47.376,45.0294562 46.694,44.3474562 C46.353,44.0074562 45.906,43.8374562 45.459,43.8374562 C45.01,43.8374562 44.563,44.0074562 44.222,44.3474562 C43.542,45.0284562 43.542,46.1384562 44.222,46.8194562 C44.905,47.5014562 46.013,47.4994562 46.694,46.8194562 M47.718,47.1374562 L51.703,51.1204562 L50.996,51.8274562 L49.868,50.6994562 L48.793,51.7754562 L48.086,51.0684562 L49.161,49.9924562 L47.011,47.8444562 C46.545,48.1654562 46.003,48.3294562 45.458,48.3294562 C44.755,48.3294562 44.051,48.0624562 43.515,47.5264562 C42.445,46.4554562 42.445,44.7124562 43.515,43.6404562 C44.586,42.5714562 46.329,42.5694562 47.401,43.6404562 C48.351,44.5904562 48.455,46.0674562 47.718,47.1374562 M53,44.1014562 C53,46.1684562 51.505,47.0934562 50.023,47.0934562 L50.023,46.0934562 C50.487,46.0934562 52,45.9494562 52,44.1014562 C52,43.0044562 51.353,42.3894562 49.905,42.1084562 C49.68,42.0654562 49.514,41.8754562 49.501,41.6484562 C49.446,40.7444562 48.987,40.1124562 48.384,40.1124562 C48.084,40.1124562 47.854,40.2424562 47.616,40.5464562 C47.506,40.6884562 47.324,40.7594562 47.147,40.7324562 C46.968,40.7054562 46.818,40.5844562 46.755,40.4144562 C46.577,39.9434562 46.211,39.4334562 45.723,38.9774562 C45.231,38.5094562 43.883,37.5074562 41.972,38.2734562 C40.885,38.7054562 40.034,39.9494562 40.034,41.1074562 C40.034,41.2354562 40.043,41.3624562 40.058,41.4884562 C40.061,41.5094562 40.062,41.5304562 40.062,41.5514562 C40.062,41.7994562 39.882,42.0064562 39.645,42.0464562 C38.886,42.2394562 38,42.7454562 38,44.0554562 L38.005,44.2104562 C38.069,45.3254562 39.252,45.9954562 40.358,45.9984562 L41,45.9984562 L41,46.9984562 L40.357,46.9984562 C38.536,46.9944562 37.095,45.8194562 37.006,44.2644562 C37.003,44.1944562 37,44.1244562 37,44.0554562 C37,42.6944562 37.752,41.6484562 39.035,41.1884562 C39.034,41.1614562 39.034,41.1344562 39.034,41.1074562 C39.034,39.5434562 40.138,37.9254562 41.602,37.3434562 C43.298,36.6654562 45.095,37.0034562 46.409,38.2494562 
C46.706,38.5274562 47.076,38.9264562 47.372,39.4134562 C47.673,39.2124562 48.008,39.1124562 48.384,39.1124562 C49.257,39.1124562 50.231,39.7714562 50.458,41.2074562 C52.145,41.6324562 53,42.6054562 53,44.1014562 M27,53 L27,27 L53,27 L53,34 L51,34 L51,29 L29,29 L29,51 L51,51 L51,46 L53,46 L53,53 Z'
fill='#FFFFFF'
/>
</svg>
)
}

export function STSIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 80 80' xmlns='http://www.w3.org/2000/svg'>
<defs>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id='stsGradient'>
<stop stopColor='#BD0816' offset='0%' />
<stop stopColor='#FF5252' offset='100%' />
</linearGradient>
</defs>
<rect fill='url(#stsGradient)' width='80' height='80' />
<path
d='M14,59 L66,59 L66,21 L14,21 L14,59 Z M68,20 L68,60 C68,60.552 67.553,61 67,61 L13,61 C12.447,61 12,60.552 12,60 L12,20 C12,19.448 12.447,19 13,19 L67,19 C67.553,19 68,19.448 68,20 L68,20 Z M44,48 L59,48 L59,46 L44,46 L44,48 Z M57,42 L62,42 L62,40 L57,40 L57,42 Z M44,42 L52,42 L52,40 L44,40 L44,42 Z M29,46 C29,45.449 28.552,45 28,45 C27.448,45 27,45.449 27,46 C27,46.551 27.448,47 28,47 C28.552,47 29,46.551 29,46 L29,46 Z M31,46 C31,47.302 30.161,48.401 29,48.816 L29,51 L27,51 L27,48.815 C25.839,48.401 25,47.302 25,46 C25,44.346 26.346,43 28,43 C29.654,43 31,44.346 31,46 L31,46 Z M19,53.993 L36.994,54 L36.996,50 L33,50 L33,48 L36.996,48 L36.998,45 L33,45 L33,43 L36.999,43 L37,40.007 L19.006,40 L19,53.993 Z M22,38.001 L34,38.006 L34,31 C34.001,28.697 31.197,26.677 28,26.675 L27.996,26.675 C24.804,26.675 22.004,28.696 22.002,31 L22,38.001 Z M17,54.992 L17.006,39 C17.006,38.734 17.111,38.48 17.299,38.292 C17.486,38.105 17.741,38 18.006,38 L20,38.001 L20.002,31 C20.004,27.512 23.59,24.675 27.996,24.675 L28,24.675 C32.412,24.677 36.001,27.515 36,31 L36,38.007 L38,38.008 C38.553,38.008 39,38.456 39,39.008 L38.994,55 C38.994,55.266 38.889,55.52 38.701,55.708 C38.514,55.895 38.259,56 37.994,56 L18,55.992 C17.447,55.992 17,55.544 17,54.992 L17,54.992 Z M60,36 L62,36 L62,34 L60,34 L60,36 Z M44,36 L55,36 L55,34 L44,34 L44,36 Z'
fill='#FFFFFF'
/>
</svg>
)
}

export function SESIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 80 80' xmlns='http://www.w3.org/2000/svg'>
<defs>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id='sesGradient'>
<stop stopColor='#BD0816' offset='0%' />
<stop stopColor='#FF5252' offset='100%' />
</linearGradient>
</defs>
<rect fill='url(#sesGradient)' width='80' height='80' />
<path
d='M57,60.999875 C57,59.373846 55.626,57.9998214 54,57.9998214 C52.374,57.9998214 51,59.373846 51,60.999875 C51,62.625904 52.374,63.9999286 54,63.9999286 C55.626,63.9999286 57,62.625904 57,60.999875 L57,60.999875 Z M40,59.9998571 C38.374,59.9998571 37,61.3738817 37,62.9999107 C37,64.6259397 38.374,65.9999643 40,65.9999643 C41.626,65.9999643 43,64.6259397 43,62.9999107 C43,61.3738817 41.626,59.9998571 40,59.9998571 L40,59.9998571 Z M26,57.9998214 C24.374,57.9998214 23,59.373846 23,60.999875 C23,62.625904 24.374,63.9999286 26,63.9999286 C27.626,63.9999286 29,62.625904 29,60.999875 C29,59.373846 27.626,57.9998214 26,57.9998214 L26,57.9998214 Z M28.605,42.9995536 L51.395,42.9995536 L43.739,36.1104305 L40.649,38.7584778 C40.463,38.9194807 40.23,38.9994821 39.999,38.9994821 C39.768,38.9994821 39.535,38.9194807 39.349,38.7584778 L36.26,36.1104305 L28.605,42.9995536 Z M27,28.1732888 L27,41.7545313 L34.729,34.7984071 L27,28.1732888 Z M51.297,26.9992678 L28.703,26.9992678 L39.999,36.6824408 L51.297,26.9992678 Z M53,41.7545313 L53,28.1732888 L45.271,34.7974071 L53,41.7545313 Z M59,60.999875 C59,63.7099234 56.71,65.9999643 54,65.9999643 C51.29,65.9999643 49,63.7099234 49,60.999875 C49,58.6308327 50.75,56.5837961 53,56.1057876 L53,52.9997321 L41,52.9997321 L41,58.1058233 C43.25,58.5838319 45,60.6308684 45,62.9999107 C45,65.7099591 42.71,68 40,68 C37.29,68 35,65.7099591 35,62.9999107 C35,60.6308684 36.75,58.5838319 39,58.1058233 L39,52.9997321 L27,52.9997321 L27,56.1057876 C29.25,56.5837961 31,58.6308327 31,60.999875 C31,63.7099234 28.71,65.9999643 26,65.9999643 C23.29,65.9999643 21,63.7099234 21,60.999875 C21,58.6308327 22.75,56.5837961 25,56.1057876 L25,51.9997143 C25,51.4477044 25.447,50.9996964 26,50.9996964 L39,50.9996964 L39,44.9995893 L26,44.9995893 C25.447,44.9995893 25,44.5515813 25,43.9995714 L25,25.99925 C25,25.4472401 25.447,24.9992321 26,24.9992321 L54,24.9992321 C54.553,24.9992321 55,25.4472401 55,25.99925 L55,43.9995714 C55,44.5515813 54.553,44.9995893 
54,44.9995893 L41,44.9995893 L41,50.9996964 L54,50.9996964 C54.553,50.9996964 55,51.4477044 55,51.9997143 L55,56.1057876 C57.25,56.5837961 59,58.6308327 59,60.999875 L59,60.999875 Z M68,39.9995 C68,45.9066055 66.177,51.5597064 62.727,56.3447919 L61.104,55.174771 C64.307,50.7316916 66,45.4845979 66,39.9995 C66,25.664244 54.337,14.0000357 40.001,14.0000357 C25.664,14.0000357 14,25.664244 14,39.9995 C14,45.4845979 15.693,50.7316916 18.896,55.174771 L17.273,56.3447919 C13.823,51.5597064 12,45.9066055 12,39.9995 C12,24.5612243 24.561,12 39.999,12 C55.438,12 68,24.5612243 68,39.9995 L68,39.9995 Z'
fill='#FFFFFF'
/>
</svg>
)
}

export function SecretsManagerIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 80 80' xmlns='http://www.w3.org/2000/svg'>
@@ -4824,6 +4975,17 @@ export function WordpressIcon(props: SVGProps<SVGSVGElement>) {
)
}

export function AgiloftIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 47.3 47.2' xmlns='http://www.w3.org/2000/svg'>
<path d='M47.3,21.4H0v-4.3l4.3-4.2h43V21.4z' fill='#263A5C' />
<path d='M47.3,8.6H8.6L17.2,0h30.1V8.6z' fill='#001028' />
<path d='M0,25.7h47.3V30L43,34.4H0V25.7z' fill='#4A6587' />
<path d='M0,38.7h38.8l-8.6,8.5H0V38.7z' fill='#6D8DAF' />
</svg>
)
}

export function AhrefsIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1065 1300'>

@@ -1,3 +1,5 @@
import { DOCS_BASE_URL } from '@/lib/urls'

interface StructuredDataProps {
title: string
description: string
@@ -15,7 +17,7 @@ export function StructuredData({
dateModified,
breadcrumb,
}: StructuredDataProps) {
const baseUrl = 'https://docs.sim.ai'
const baseUrl = DOCS_BASE_URL

const articleStructuredData = {
'@context': 'https://schema.org',
@@ -70,10 +72,11 @@ export function StructuredData({
'@context': 'https://schema.org',
'@type': 'SoftwareApplication',
name: 'Sim',
applicationCategory: 'DeveloperApplication',
applicationCategory: 'BusinessApplication',
applicationSubCategory: 'AI Workspace',
operatingSystem: 'Any',
description:
'Sim is the open-source platform to build AI agents and run your agentic workforce. Connect 1,000+ integrations and LLMs to deploy and orchestrate agentic workflows. Create agents, workflows, knowledge bases, tables, and docs.',
'Sim is the open-source AI workspace where teams build, deploy, and manage AI agents. Connect 1,000+ integrations and every major LLM to create agents that automate real work.',
url: baseUrl,
author: {
'@type': 'Organization',
@@ -84,8 +87,9 @@
category: 'Developer Tools',
},
featureList: [
'AI agent creation',
'Agentic workflow orchestration',
'AI workspace for teams',
'Mothership — natural language agent creation',
'Visual workflow builder',
'1,000+ integrations',
'LLM orchestration (OpenAI, Anthropic, Google, xAI, Mistral, Perplexity)',
'Knowledge base creation',

@@ -1,6 +1,6 @@
'use client'

import { useState } from 'react'
import { useRef, useState } from 'react'
import { cn, getAssetUrl } from '@/lib/utils'
import { Lightbox } from './lightbox'

@@ -50,11 +50,14 @@ export function ActionImage({ src, alt, enableLightbox = true }: ActionImageProp
}

export function ActionVideo({ src, alt, enableLightbox = true }: ActionVideoProps) {
const videoRef = useRef<HTMLVideoElement>(null)
const startTimeRef = useRef(0)
const [isLightboxOpen, setIsLightboxOpen] = useState(false)
const resolvedSrc = getAssetUrl(src)

const handleClick = () => {
if (enableLightbox) {
startTimeRef.current = videoRef.current?.currentTime ?? 0
setIsLightboxOpen(true)
}
}
@@ -62,6 +65,7 @@ export function ActionVideo({ src, alt, enableLightbox = true }: ActionVideoProp
return (
<>
<video
ref={videoRef}
src={resolvedSrc}
autoPlay
loop
@@ -80,6 +84,7 @@ export function ActionVideo({ src, alt, enableLightbox = true }: ActionVideoProp
src={src}
alt={alt}
type='video'
startTime={startTimeRef.current}
/>
)}
</>

@@ -1,195 +0,0 @@
import { memo } from 'react'

const RX = '2.59574'

interface BlockRect {
opacity: number
width: string
height: string
fill: string
x?: string
y?: string
transform?: string
}

const RECTS = {
topRight: [
{ opacity: 1, x: '0', y: '0', width: '16.8626', height: '33.7252', fill: '#2ABBF8' },
{ opacity: 0.6, x: '0', y: '0', width: '85.3433', height: '16.8626', fill: '#2ABBF8' },
{ opacity: 1, x: '0', y: '0', width: '16.8626', height: '16.8626', fill: '#2ABBF8' },
{ opacity: 0.6, x: '34.2403', y: '0', width: '34.2403', height: '33.7252', fill: '#2ABBF8' },
{ opacity: 1, x: '34.2403', y: '0', width: '16.8626', height: '16.8626', fill: '#2ABBF8' },
{
opacity: 1,
x: '51.6188',
y: '16.8626',
width: '16.8626',
height: '16.8626',
fill: '#2ABBF8',
},
{ opacity: 1, x: '68.4812', y: '0', width: '54.6502', height: '16.8626', fill: '#00F701' },
{ opacity: 0.6, x: '106.268', y: '0', width: '34.2403', height: '33.7252', fill: '#00F701' },
{ opacity: 0.6, x: '106.268', y: '0', width: '51.103', height: '16.8626', fill: '#00F701' },
{
opacity: 1,
x: '123.6484',
y: '16.8626',
width: '16.8626',
height: '16.8626',
fill: '#00F701',
},
{ opacity: 0.6, x: '157.371', y: '0', width: '34.2403', height: '16.8626', fill: '#FFCC02' },
{ opacity: 1, x: '157.371', y: '0', width: '16.8626', height: '16.8626', fill: '#FFCC02' },
{ opacity: 0.6, x: '208.993', y: '0', width: '68.4805', height: '16.8626', fill: '#FA4EDF' },
{ opacity: 0.6, x: '209.137', y: '0', width: '16.8626', height: '33.7252', fill: '#FA4EDF' },
{ opacity: 0.6, x: '243.233', y: '0', width: '34.2403', height: '33.7252', fill: '#FA4EDF' },
{ opacity: 1, x: '243.233', y: '0', width: '16.8626', height: '16.8626', fill: '#FA4EDF' },
{ opacity: 0.6, x: '260.096', y: '0', width: '34.04', height: '16.8626', fill: '#FA4EDF' },
{
opacity: 1,
x: '260.611',
y: '16.8626',
width: '16.8626',
height: '16.8626',
fill: '#FA4EDF',
},
],
bottomLeft: [
{ opacity: 1, x: '0', y: '0', width: '16.8626', height: '33.7252', fill: '#2ABBF8' },
{ opacity: 0.6, x: '0', y: '0', width: '85.3433', height: '16.8626', fill: '#2ABBF8' },
{ opacity: 1, x: '0', y: '0', width: '16.8626', height: '16.8626', fill: '#2ABBF8' },
{ opacity: 0.6, x: '34.2403', y: '0', width: '34.2403', height: '33.7252', fill: '#2ABBF8' },
{ opacity: 1, x: '34.2403', y: '0', width: '16.8626', height: '16.8626', fill: '#2ABBF8' },
{
opacity: 1,
x: '51.6188',
y: '16.8626',
width: '16.8626',
height: '16.8626',
fill: '#2ABBF8',
},
{ opacity: 1, x: '68.4812', y: '0', width: '54.6502', height: '16.8626', fill: '#00F701' },
{ opacity: 0.6, x: '106.268', y: '0', width: '34.2403', height: '33.7252', fill: '#00F701' },
{ opacity: 0.6, x: '106.268', y: '0', width: '51.103', height: '16.8626', fill: '#00F701' },
{
opacity: 1,
x: '123.6484',
y: '16.8626',
width: '16.8626',
height: '16.8626',
fill: '#00F701',
},
],
bottomRight: [
{
opacity: 0.6,
width: '16.8626',
height: '33.726',
fill: '#FA4EDF',
transform: 'matrix(0 1 1 0 0 0)',
},
{
opacity: 0.6,
width: '34.241',
height: '16.8626',
fill: '#FA4EDF',
transform: 'matrix(0 1 1 0 16.891 0)',
},
{
opacity: 0.6,
width: '16.8626',
height: '68.482',
fill: '#FA4EDF',
transform: 'matrix(-1 0 0 1 33.739 16.888)',
},
{
opacity: 0.6,
width: '16.8626',
height: '33.726',
fill: '#FA4EDF',
transform: 'matrix(0 1 1 0 0 33.776)',
},
{
opacity: 1,
width: '16.8626',
height: '16.8626',
fill: '#FA4EDF',
transform: 'matrix(-1 0 0 1 33.739 34.272)',
},
{
opacity: 0.6,
width: '16.8626',
height: '34.24',
fill: '#2ABBF8',
transform: 'matrix(-1 0 0 1 33.787 68)',
},
{
opacity: 0.4,
width: '16.8626',
height: '16.8626',
fill: '#1A8FCC',
transform: 'matrix(-1 0 0 1 33.787 85)',
},
],
} as const satisfies Record<string, readonly BlockRect[]>

const GLOBAL_OPACITY = 0.55

const BlockGroup = memo(function BlockGroup({
width,
height,
viewBox,
rects,
}: {
width: number
height: number
viewBox: string
rects: readonly BlockRect[]
}) {
return (
<svg
width={width}
height={height}
viewBox={viewBox}
fill='none'
xmlns='http://www.w3.org/2000/svg'
className='h-auto w-full'
style={{ opacity: GLOBAL_OPACITY }}
>
{rects.map((r, i) => (
<rect
key={i}
x={r.x}
y={r.y}
width={r.width}
height={r.height}
rx={RX}
fill={r.fill}
transform={r.transform}
opacity={r.opacity}
/>
))}
</svg>
)
})

export function AnimatedBlocks() {
return (
<div
className='pointer-events-none fixed inset-0 z-0 hidden overflow-hidden lg:block'
aria-hidden='true'
>
<div className='absolute top-[93px] right-0 w-[calc(140px+10.76vw)] max-w-[295px]'>
<BlockGroup width={295} height={34} viewBox='0 0 295 34' rects={RECTS.topRight} />
</div>

<div className='-left-24 absolute bottom-0 w-[calc(140px+10.76vw)] max-w-[295px] rotate-180'>
<BlockGroup width={295} height={34} viewBox='0 0 295 34' rects={RECTS.bottomLeft} />
</div>

<div className='-bottom-2 absolute right-0 w-[calc(16px+1.25vw)] max-w-[34px]'>
<BlockGroup width={34} height={102} viewBox='0 0 34 102' rects={RECTS.bottomRight} />
</div>
</div>
)
}
@@ -2,6 +2,7 @@

import { useState } from 'react'
import { ChevronRight } from 'lucide-react'
import { cn } from '@/lib/utils'

interface FAQItem {
question: string
@@ -31,9 +32,10 @@ function FAQItemRow({
className='flex w-full cursor-pointer items-center gap-3 px-4 py-2.5 text-left font-[470] text-[0.875rem] text-[rgba(0,0,0,0.8)] transition-colors hover:bg-[rgba(0,0,0,0.02)] dark:text-[rgba(255,255,255,0.85)] dark:hover:bg-[rgba(255,255,255,0.03)]'
>
<ChevronRight
className={`h-3.5 w-3.5 shrink-0 text-[rgba(0,0,0,0.3)] transition-transform duration-200 dark:text-[rgba(255,255,255,0.3)] ${
isOpen ? 'rotate-90' : ''
}`}
className={cn(
'h-3.5 w-3.5 shrink-0 text-[rgba(0,0,0,0.3)] transition-transform duration-200 dark:text-[rgba(255,255,255,0.3)]',
isOpen && 'rotate-90'
)}
/>
{item.question}
</button>
@@ -81,11 +83,10 @@ export function FAQ({ items, title = 'Common Questions' }: FAQProps) {
{items.map((item, index) => (
<div
key={index}
className={
index !== items.length - 1
? 'border-[rgba(0,0,0,0.08)] border-b dark:border-[rgba(255,255,255,0.08)]'
: ''
}
className={cn(
index !== items.length - 1 &&
'border-[rgba(0,0,0,0.08)] border-b dark:border-[rgba(255,255,255,0.08)]'
)}
>
<FAQItemRow
item={item}

@@ -6,6 +6,8 @@ import type { ComponentType, SVGProps } from 'react'
import {
A2AIcon,
AgentMailIcon,
AgentPhoneIcon,
AgiloftIcon,
AhrefsIcon,
AirtableIcon,
AirweaveIcon,
@@ -22,6 +24,7 @@ import {
BoxCompanyIcon,
BrainIcon,
BrandfetchIcon,
BrightDataIcon,
BrowserUseIcon,
CalComIcon,
CalendlyIcon,
@@ -32,6 +35,7 @@ import {
CloudflareIcon,
CloudWatchIcon,
ConfluenceIcon,
CrowdStrikeIcon,
CursorIcon,
DagsterIcon,
DatabricksIcon,
@@ -87,6 +91,8 @@ import {
HubspotIcon,
HuggingFaceIcon,
HunterIOIcon,
IAMIcon,
IdentityCenterIcon,
ImageIcon,
IncidentioIcon,
InfisicalIcon,
@@ -115,6 +121,7 @@ import {
MicrosoftSharepointIcon,
MicrosoftTeamsIcon,
MistralIcon,
MondayIcon,
MongoDBIcon,
MySQLIcon,
Neo4jIcon,
@@ -147,6 +154,7 @@ import {
RootlyIcon,
S3Icon,
SalesforceIcon,
SESIcon,
SearchIcon,
SecretsManagerIcon,
SendgridIcon,
@@ -161,6 +169,7 @@ import {
SmtpIcon,
SQSIcon,
SshIcon,
STSIcon,
STTIcon,
StagehandIcon,
StripeIcon,
@@ -196,6 +205,8 @@ type IconComponent = ComponentType<SVGProps<SVGSVGElement>>
export const blockTypeToIconMap: Record<string, IconComponent> = {
a2a: A2AIcon,
agentmail: AgentMailIcon,
agentphone: AgentPhoneIcon,
agiloft: AgiloftIcon,
ahrefs: AhrefsIcon,
airtable: AirtableIcon,
airweave: AirweaveIcon,
@@ -210,6 +221,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
attio: AttioIcon,
box: BoxCompanyIcon,
brandfetch: BrandfetchIcon,
brightdata: BrightDataIcon,
browser_use: BrowserUseIcon,
calcom: CalComIcon,
calendly: CalendlyIcon,
@@ -219,7 +231,10 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
cloudflare: CloudflareIcon,
cloudformation: CloudFormationIcon,
cloudwatch: CloudWatchIcon,
confluence: ConfluenceIcon,
confluence_v2: ConfluenceIcon,
crowdstrike: CrowdStrikeIcon,
cursor: CursorIcon,
cursor_v2: CursorIcon,
dagster: DagsterIcon,
databricks: DatabricksIcon,
@@ -237,19 +252,25 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
enrich: EnrichSoIcon,
evernote: EvernoteIcon,
exa: ExaAIIcon,
extend: ExtendIcon,
extend_v2: ExtendIcon,
fathom: FathomIcon,
file: DocumentIcon,
file_v3: DocumentIcon,
firecrawl: FirecrawlIcon,
fireflies: FirefliesIcon,
fireflies_v2: FirefliesIcon,
gamma: GammaIcon,
github: GithubIcon,
github_v2: GithubIcon,
gitlab: GitLabIcon,
gmail: GmailIcon,
gmail_v2: GmailIcon,
gong: GongIcon,
google_ads: GoogleAdsIcon,
google_bigquery: GoogleBigQueryIcon,
google_books: GoogleBooksIcon,
google_calendar: GoogleCalendarIcon,
google_calendar_v2: GoogleCalendarIcon,
google_contacts: GoogleContactsIcon,
google_docs: GoogleDocsIcon,
@@ -260,7 +281,9 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
google_meet: GoogleMeetIcon,
google_pagespeed: GooglePagespeedIcon,
google_search: GoogleIcon,
google_sheets: GoogleSheetsIcon,
google_sheets_v2: GoogleSheetsIcon,
google_slides: GoogleSlidesIcon,
google_slides_v2: GoogleSlidesIcon,
google_tasks: GoogleTasksIcon,
google_translate: GoogleTranslateIcon,
@@ -274,20 +297,25 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
hubspot: HubspotIcon,
huggingface: HuggingFaceIcon,
hunter: HunterIOIcon,
iam: IAMIcon,
identity_center: IdentityCenterIcon,
image_generator: ImageIcon,
imap: MailServerIcon,
incidentio: IncidentioIcon,
infisical: InfisicalIcon,
intercom: IntercomIcon,
intercom_v2: IntercomIcon,
jina: JinaAIIcon,
jira: JiraIcon,
jira_service_management: JiraServiceManagementIcon,
kalshi: KalshiIcon,
kalshi_v2: KalshiIcon,
ketch: KetchIcon,
knowledge: PackageSearchIcon,
langsmith: LangsmithIcon,
launchdarkly: LaunchDarklyIcon,
lemlist: LemlistIcon,
linear: LinearIcon,
linear_v2: LinearIcon,
linkedin: LinkedInIcon,
linkup: LinkupIcon,
@@ -299,13 +327,17 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
memory: BrainIcon,
microsoft_ad: AzureIcon,
microsoft_dataverse: MicrosoftDataverseIcon,
microsoft_excel: MicrosoftExcelIcon,
microsoft_excel_v2: MicrosoftExcelIcon,
microsoft_planner: MicrosoftPlannerIcon,
microsoft_teams: MicrosoftTeamsIcon,
mistral_parse: MistralIcon,
mistral_parse_v3: MistralIcon,
monday: MondayIcon,
mongodb: MongoDBIcon,
mysql: MySQLIcon,
neo4j: Neo4jIcon,
notion: NotionIcon,
notion_v2: NotionIcon,
obsidian: ObsidianIcon,
okta: OktaIcon,
@@ -322,12 +354,14 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
postgresql: PostgresIcon,
posthog: PosthogIcon,
profound: ProfoundIcon,
pulse: PulseIcon,
pulse_v2: PulseIcon,
qdrant: QdrantIcon,
quiver: QuiverIcon,
rds: RDSIcon,
reddit: RedditIcon,
redis: RedisIcon,
reducto: ReductoIcon,
reducto_v2: ReductoIcon,
resend: ResendIcon,
revenuecat: RevenueCatIcon,
@@ -341,6 +375,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
sentry: SentryIcon,
serper: SerperIcon,
servicenow: ServiceNowIcon,
ses: SESIcon,
sftp: SftpIcon,
sharepoint: MicrosoftSharepointIcon,
shopify: ShopifyIcon,
@@ -352,11 +387,14 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
ssh: SshIcon,
stagehand: StagehandIcon,
stripe: StripeIcon,
sts: STSIcon,
stt: STTIcon,
stt_v2: STTIcon,
supabase: SupabaseIcon,
tailscale: TailscaleIcon,
tavily: TavilyIcon,
telegram: TelegramIcon,
textract: TextractIcon,
textract_v2: TextractIcon,
tinybird: TinybirdIcon,
translate: TranslateIcon,
@@ -367,7 +405,9 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
typeform: TypeformIcon,
upstash: UpstashIcon,
vercel: VercelIcon,
video_generator: VideoIcon,
video_generator_v2: VideoIcon,
vision: EyeIcon,
vision_v2: EyeIcon,
wealthbox: WealthboxIcon,
webflow: WebflowIcon,

@@ -1,6 +1,5 @@
'use client'

import { useEffect, useState } from 'react'
import { Check } from 'lucide-react'
import { useParams, usePathname, useRouter } from 'next/navigation'
import {
@@ -25,24 +24,9 @@ export function LanguageDropdown() {
const params = useParams()
const router = useRouter()

const [currentLang, setCurrentLang] = useState(() => {
const langFromParams = params?.lang as string
return langFromParams && Object.keys(languages).includes(langFromParams) ? langFromParams : 'en'
})

useEffect(() => {
const langFromParams = params?.lang as string

if (langFromParams && Object.keys(languages).includes(langFromParams)) {
if (langFromParams !== currentLang) {
setCurrentLang(langFromParams)
}
} else {
if (currentLang !== 'en') {
setCurrentLang('en')
}
}
}, [params])
const langFromParams = params?.lang as string
const currentLang =
langFromParams && Object.keys(languages).includes(langFromParams) ? langFromParams : 'en'

const handleLanguageChange = (locale: string) => {
if (locale === currentLang) return

@@ -1,6 +1,6 @@
'use client'

import { useEffect, useRef } from 'react'
import { useEffect, useLayoutEffect, useRef } from 'react'
import { getAssetUrl } from '@/lib/utils'

interface LightboxProps {
@@ -9,10 +9,12 @@ interface LightboxProps {
src: string
alt: string
type: 'image' | 'video'
startTime?: number
}

export function Lightbox({ isOpen, onClose, src, alt, type }: LightboxProps) {
export function Lightbox({ isOpen, onClose, src, alt, type, startTime }: LightboxProps) {
const overlayRef = useRef<HTMLDivElement>(null)
const videoRef = useRef<HTMLVideoElement>(null)

useEffect(() => {
const handleKeyDown = (event: KeyboardEvent) => {
@@ -40,6 +42,12 @@ export function Lightbox({ isOpen, onClose, src, alt, type }: LightboxProps) {
}
}, [isOpen, onClose])

useLayoutEffect(() => {
if (isOpen && type === 'video' && videoRef.current && startTime != null && startTime > 0) {
videoRef.current.currentTime = startTime
}
}, [isOpen, startTime, type])

if (!isOpen) return null

return (
@@ -61,6 +69,7 @@ export function Lightbox({ isOpen, onClose, src, alt, type }: LightboxProps) {
/>
) : (
<video
ref={videoRef}
src={getAssetUrl(src)}
autoPlay
loop

@@ -1,7 +1,7 @@
'use client'

import { useState } from 'react'
import { getAssetUrl } from '@/lib/utils'
import { useRef, useState } from 'react'
import { cn, getAssetUrl } from '@/lib/utils'
import { Lightbox } from './lightbox'

interface VideoProps {
@@ -12,6 +12,8 @@ interface VideoProps {
muted?: boolean
playsInline?: boolean
enableLightbox?: boolean
width?: number
height?: number
}

export function Video({
@@ -22,11 +24,16 @@ export function Video({
muted = true,
playsInline = true,
enableLightbox = true,
width,
height,
}: VideoProps) {
const videoRef = useRef<HTMLVideoElement>(null)
const startTimeRef = useRef(0)
const [isLightboxOpen, setIsLightboxOpen] = useState(false)

const handleVideoClick = () => {
if (enableLightbox) {
startTimeRef.current = videoRef.current?.currentTime ?? 0
setIsLightboxOpen(true)
}
}
@@ -34,11 +41,17 @@ export function Video({
return (
<>
<video
ref={videoRef}
autoPlay={autoPlay}
loop={loop}
muted={muted}
playsInline={playsInline}
className={`${className} ${enableLightbox ? 'cursor-pointer transition-opacity hover:opacity-95' : ''}`}
width={width}
height={height}
className={cn(
className,
enableLightbox && 'cursor-pointer transition-opacity hover:opacity-95'
)}
src={getAssetUrl(src)}
onClick={handleVideoClick}
/>
@@ -50,6 +63,7 @@ export function Video({
src={src}
alt={`Video: ${src}`}
type='video'
startTime={startTimeRef.current}
/>
)}
</>

@@ -21,7 +21,17 @@ Use your own API keys for AI model providers instead of the
| OpenAI | Knowledge base embeddings, Agent block |
| Anthropic | Agent block |
| Google | Agent block |
| Mistral | Knowledge base OCR |
| Mistral | Knowledge base OCR, Agent block |
| Fireworks | Agent block |
| Firecrawl | Web scraping, crawling, search, and extraction |
| Exa | AI-powered search and research |
| Serper | Google Search API |
| Linkup | Web search and content retrieval |
| Parallel AI | Web search, extraction, and deep research |
| Perplexity | AI-powered chat and web search |
| Jina AI | Web reading and search |
| Google Cloud | Translate, Maps, PageSpeed, and Books APIs |
| Brandfetch | Brand assets, logos, colors, and company information |

### Setup


@@ -105,9 +105,108 @@ The model breakdown shows:
The prices shown reflect the rates as of September 10, 2025. Check the providers' documentation for current pricing.
</Callout>

## Hosted Tool Pricing

When workflows use tool blocks with Sim's hosted API keys, costs are charged per operation. Use your own keys via BYOK to pay the providers directly.

<Tabs items={['Firecrawl', 'Exa', 'Serper', 'Perplexity', 'Linkup', 'Parallel AI', 'Jina AI', 'Google Cloud', 'Brandfetch']}>
<Tab>
**Firecrawl** - Web scraping, crawling, search, and extraction

| Operation | Cost |
|-----------|------|
| Scrape | $0.001 per credit used |
| Crawl | $0.001 per credit used |
| Search | $0.001 per credit used |
| Extract | $0.001 per credit used |
| Map | $0.001 per credit used |
</Tab>

<Tab>
**Exa** - AI-powered search and research

| Operation | Cost |
|-----------|------|
| Search | Dynamic (returned by API) |
| Get Contents | Dynamic (returned by API) |
| Find Similar Links | Dynamic (returned by API) |
| Answer | Dynamic (returned by API) |
</Tab>

<Tab>
**Serper** - Google Search API

| Operation | Cost |
|-----------|------|
| Search (≤10 results) | $0.001 |
| Search (>10 results) | $0.002 |
</Tab>

<Tab>
**Perplexity** - AI-powered chat and web search

| Operation | Cost |
|-----------|------|
| Search | $0.005 per request |
| Chat | Token-based (varies by model) |
</Tab>

<Tab>
**Linkup** - Web search and content retrieval

| Operation | Cost |
|-----------|------|
| Standard search | ~$0.006 |
| Deep search | ~$0.055 |
</Tab>

<Tab>
**Parallel AI** - Web search, extraction, and deep research

| Operation | Cost |
|-----------|------|
| Search (≤10 results) | $0.005 |
| Search (>10 results) | $0.005 + $0.001 per additional result |
| Extract | $0.001 per URL |
| Deep Research | $0.005–$2.40 (varies by processor tier) |
</Tab>

<Tab>
**Jina AI** - Web reading and search

| Operation | Cost |
|-----------|------|
| Read URL | $0.20 per 1M tokens |
| Search | $0.20 per 1M tokens (minimum 10K tokens) |
</Tab>

<Tab>
**Google Cloud** - Translate, Maps, PageSpeed, and Books APIs

| Operation | Cost |
|-----------|------|
| Translate / Detect | $0.00002 per character |
| Maps (Geocode, Directions, Distance Matrix, Elevation, Timezone, Reverse Geocode, Geolocate, Validate Address) | $0.005 per request |
| Maps (Snap to Roads) | $0.01 per request |
| Maps (Place Details) | $0.017 per request |
| Maps (Places Search) | $0.032 per request |
| PageSpeed | Free |
| Books (Search, Details) | Free |
</Tab>

<Tab>
**Brandfetch** - Brand assets, logos, colors, and company information

| Operation | Cost |
|-----------|------|
| Search | Free |
| Get Brand | $0.04 per request |
</Tab>
</Tabs>
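Taken together, these per-operation rates make it easy to estimate a workflow's hosted-tool spend up front. A minimal sketch in Python: the price table is hard-coded from the tabs above (rates as of September 10, 2025), and the `estimate_cost` helper is illustrative, not part of any Sim SDK.

```python
# Flat per-operation prices in USD, copied from the tables above.
# Dynamic or token-based operations (e.g. Exa search, Perplexity chat)
# are omitted because their cost is only known after the call returns.
HOSTED_TOOL_PRICES = {
    ("serper", "search_small"): 0.001,   # Search with <=10 results
    ("serper", "search_large"): 0.002,   # Search with >10 results
    ("perplexity", "search"): 0.005,
    ("linkup", "standard_search"): 0.006,
    ("parallel_ai", "extract_per_url"): 0.001,
    ("brandfetch", "get_brand"): 0.04,
}

def estimate_cost(operations):
    """Sum the flat cost of a list of (provider, operation) tuples."""
    return round(sum(HOSTED_TOOL_PRICES[op] for op in operations), 6)

# Example: three small Serper searches plus one Brandfetch lookup
print(estimate_cost([("serper", "search_small")] * 3 + [("brandfetch", "get_brand")]))  # prints 0.043
```

For operations with dynamic pricing, check the actual cost in the workflow logs after execution instead.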

## Bring Your Own Key (BYOK)

You can use your own API keys for hosted models (OpenAI, Anthropic, Google, Mistral) under **Settings → BYOK** to pay base rates. Keys are encrypted and apply workspace-wide.
You can use your own API keys for supported providers (OpenAI, Anthropic, Google, Mistral, Fireworks, Firecrawl, Exa, Serper, Linkup, Parallel AI, Perplexity, Jina AI, Google Cloud, Brandfetch) under **Settings → BYOK** to pay base rates. Keys are encrypted and apply workspace-wide.

## Cost Optimization Strategies


@@ -51,7 +51,7 @@ Welcome to Sim, a visual workflow builder for AI applications. Create
  <Card title="MCP Integration" href="/mcp">
    Connect external services with the Model Context Protocol
  </Card>
  <Card title="SDKs" href="/sdks">
  <Card title="SDKs" href="/api-reference">
    Integrate Sim into your applications
  </Card>
</Cards>
@@ -0,0 +1,9 @@
{
  "pages": [
    "listPausedExecutions",
    "getPausedExecution",
    "getPausedExecutionByResumePath",
    "getPauseContext",
    "resumeExecution"
  ]
}
@@ -10,6 +10,7 @@
    "typescript",
    "---Endpoints---",
    "(generated)/workflows",
    "(generated)/human-in-the-loop",
    "(generated)/logs",
    "(generated)/usage",
    "(generated)/audit-logs",

@@ -65,14 +65,14 @@ Execute a workflow with optional input data.
```python
result = client.execute_workflow(
    "workflow-id",
    input_data={"message": "Hello, world!"},
    input={"message": "Hello, world!"},
    timeout=30.0 # 30 seconds
)
```

**Parameters:**
- `workflow_id` (str): The ID of the workflow to execute
- `input_data` (dict, optional): Input data to pass to the workflow
- `input` (dict, optional): Input data to pass to the workflow
- `timeout` (float, optional): Timeout in seconds (default: 30.0)
- `stream` (bool, optional): Enable streaming responses (default: False)
- `selected_outputs` (list[str], optional): Block outputs to stream in `blockName.attribute` format (e.g., `["agent1.content"]`)
@@ -144,7 +144,7 @@ Execute a workflow with automatic retry on rate limit errors using exponential b
```python
result = client.execute_with_retry(
    "workflow-id",
    input_data={"message": "Hello"},
    input={"message": "Hello"},
    timeout=30.0,
    max_retries=3, # Maximum number of retries
    initial_delay=1.0, # Initial delay in seconds
```
@@ -155,7 +155,7 @@ result = client.execute_with_retry(

**Parameters:**
- `workflow_id` (str): The ID of the workflow to execute
- `input_data` (dict, optional): Input data to pass to the workflow
- `input` (dict, optional): Input data to pass to the workflow
- `timeout` (float, optional): Timeout in seconds
- `stream` (bool, optional): Enable streaming responses
- `selected_outputs` (list, optional): Block outputs to stream
@@ -359,7 +359,7 @@ def run_workflow():
    # Execute the workflow
    result = client.execute_workflow(
        "my-workflow-id",
        input_data={
        input={
            "message": "Process this data",
            "user_id": "12345"
        }
@@ -488,7 +488,7 @@ def execute_async():
    # Start async execution
    result = client.execute_workflow(
        "workflow-id",
        input_data={"data": "large dataset"},
        input={"data": "large dataset"},
        async_execution=True # Execute asynchronously
    )

@@ -533,7 +533,7 @@ def execute_with_retry_handling():
    # Automatically retries on rate limit
    result = client.execute_with_retry(
        "workflow-id",
        input_data={"message": "Process this"},
        input={"message": "Process this"},
        max_retries=5,
        initial_delay=1.0,
        max_delay=60.0,
@@ -615,7 +615,7 @@ def execute_with_streaming():
    # Enable streaming for specific block outputs
    result = client.execute_workflow(
        "workflow-id",
        input_data={"message": "Count to five"},
        input={"message": "Count to five"},
        stream=True,
        selected_outputs=["agent1.content"] # Use blockName.attribute format
    )
@@ -758,4 +758,15 @@ Configure the client using environment variables:

## License

Apache-2.0
Apache-2.0

import { FAQ } from '@/components/ui/faq'

<FAQ items={[
  { question: "Do I need to deploy a workflow before I can execute it via the SDK?", answer: "Yes. Workflows must be deployed before they can be executed through the SDK. You can use the validate_workflow() method to check whether a workflow is deployed and ready. If it returns False, deploy the workflow from the Sim UI first and create or select an API key during deployment." },
  { question: "What is the difference between sync and async execution?", answer: "Sync execution (the default) blocks until the workflow completes and returns the full result. Async execution (async_execution=True) returns immediately with a task ID that you can poll using get_job_status(). Use async mode for long-running workflows to avoid request timeouts. Async job statuses include queued, processing, completed, failed, and cancelled." },
  { question: "How does the SDK handle rate limiting?", answer: "The SDK provides built-in rate limiting support through the execute_with_retry() method. It uses exponential backoff (1s, 2s, 4s, 8s...) with 25% jitter to avoid thundering herd problems. If the API returns a retry-after header, that value is used instead. You can configure max_retries, initial_delay, max_delay, and backoff_multiplier. Use get_rate_limit_info() to check your current rate limit status." },
  { question: "Can I use the Python SDK as a context manager?", answer: "Yes. The SimStudioClient supports Python's context manager protocol. Use it with the 'with' statement to automatically close the underlying HTTP session when you are done, which is especially useful for scripts that create and discard client instances." },
  { question: "How do I handle different types of errors from the SDK?", answer: "The SDK raises SimStudioError with a code property for API-specific errors. Common error codes are UNAUTHORIZED (invalid API key), TIMEOUT (request timed out), RATE_LIMIT_EXCEEDED (too many requests), USAGE_LIMIT_EXCEEDED (billing limit reached), and EXECUTION_ERROR (workflow failed). Use the error code to implement targeted error handling and recovery logic." },
  { question: "How do I monitor my API usage and remaining quota?", answer: "Use the get_usage_limits() method to check your current usage. It returns sync and async rate limit details (limit, remaining, reset time, whether you are currently limited), plus your current period cost, usage limit, and plan tier. This lets you monitor consumption and alert before hitting limits." },
]} />
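The retry schedule described in the FAQ above can be sketched as follows. This is an illustration of the documented behavior (exponential growth capped at a maximum, 25% jitter, and a retry-after override), not the SDK's internal code; the exact jitter formula is an assumption.

```python
import random

def backoff_delay(attempt, initial_delay=1.0, max_delay=60.0,
                  backoff_multiplier=2.0, jitter=0.25, retry_after=None):
    """Seconds to sleep before retry `attempt` (0-based).

    Mirrors the documented schedule: 1s, 2s, 4s, 8s, ... capped at
    max_delay, with +/-25% jitter. A server-supplied retry-after value
    takes precedence over the computed delay.
    """
    if retry_after is not None:
        return float(retry_after)
    base = min(initial_delay * backoff_multiplier ** attempt, max_delay)
    return base * (1 + random.uniform(-jitter, jitter))
```

With jitter disabled the sequence is exactly 1, 2, 4, 8, ... seconds; in practice you would parse the `retry-after` header from a rate-limited response and pass it through.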
@@ -78,16 +78,15 @@ new SimStudioClient(config: SimStudioConfig)
Execute a workflow with optional input data.

```typescript
const result = await client.executeWorkflow('workflow-id', {
  input: { message: 'Hello, world!' },
const result = await client.executeWorkflow('workflow-id', { message: 'Hello, world!' }, {
  timeout: 30000 // 30 seconds
});
```

**Parameters:**
- `workflowId` (string): The ID of the workflow to execute
- `input` (any, optional): Input data to pass to the workflow
- `options` (ExecutionOptions, optional):
  - `input` (any): Input data to pass to the workflow
  - `timeout` (number): Timeout in milliseconds (default: 30000)
  - `stream` (boolean): Enable streaming responses (default: false)
  - `selectedOutputs` (string[]): Block outputs to stream in `blockName.attribute` format (e.g., `["agent1.content"]`)
@@ -158,8 +157,7 @@ if (status.status === 'completed') {
Execute a workflow with automatic retry on rate limit errors using exponential backoff.

```typescript
const result = await client.executeWithRetry('workflow-id', {
  input: { message: 'Hello' },
const result = await client.executeWithRetry('workflow-id', { message: 'Hello' }, {
  timeout: 30000
}, {
  maxRetries: 3, // Maximum number of retries
```
@@ -171,6 +169,7 @@ const result = await client.executeWithRetry('workflow-id', {

**Parameters:**
- `workflowId` (string): The ID of the workflow to execute
- `input` (any, optional): Input data to pass to the workflow
- `options` (ExecutionOptions, optional): Same as `executeWorkflow()`
- `retryOptions` (RetryOptions, optional):
  - `maxRetries` (number): Maximum number of retries (default: 3)
@@ -389,10 +388,8 @@ async function runWorkflow() {

  // Execute the workflow
  const result = await client.executeWorkflow('my-workflow-id', {
    input: {
      message: 'Process this data',
      userId: '12345'
    }
  });

  if (result.success) {
@@ -508,8 +505,7 @@ app.post('/execute-workflow', async (req, res) => {
  try {
    const { workflowId, input } = req.body;

    const result = await client.executeWorkflow(workflowId, {
      input,
    const result = await client.executeWorkflow(workflowId, input, {
      timeout: 60000
    });

@@ -555,8 +551,7 @@ export default async function handler(
  try {
    const { workflowId, input } = req.body;

    const result = await client.executeWorkflow(workflowId, {
      input,
    const result = await client.executeWorkflow(workflowId, input, {
      timeout: 30000
    });

@@ -586,9 +581,7 @@ const client = new SimStudioClient({
async function executeClientSideWorkflow() {
  try {
    const result = await client.executeWorkflow('workflow-id', {
      input: {
        userInput: 'Hello from browser'
      }
    });

    console.log('Workflow result:', result);
@@ -642,10 +635,8 @@ Alternatively, you can manually provide files using the URL format:

  // Include files under the field name from your API trigger's input format
  const result = await client.executeWorkflow('workflow-id', {
    input: {
      documents: files, // Must match your workflow's "files" field name
      instructions: 'Analyze these documents'
    }
  });

  console.log('Result:', result);
@@ -669,10 +660,8 @@ Alternatively, you can manually provide files using the URL format:

  // Include files under the field name from your API trigger's input format
  const result = await client.executeWorkflow('workflow-id', {
    input: {
      documents: [file], // Must match your workflow's "files" field name
      query: 'Summarize this document'
    }
  });
```
</Tab>
@@ -712,8 +701,7 @@ export function useWorkflow(): UseWorkflowResult {
    setResult(null);

    try {
      const workflowResult = await client.executeWorkflow(workflowId, {
        input,
      const workflowResult = await client.executeWorkflow(workflowId, input, {
        timeout: 30000
      });
      setResult(workflowResult);
@@ -774,8 +762,7 @@ const client = new SimStudioClient({
async function executeAsync() {
  try {
    // Start async execution
    const result = await client.executeWorkflow('workflow-id', {
      input: { data: 'large dataset' },
    const result = await client.executeWorkflow('workflow-id', { data: 'large dataset' }, {
      async: true // Execute asynchronously
    });

@@ -823,9 +810,7 @@ const client = new SimStudioClient({
async function executeWithRetryHandling() {
  try {
    // Automatically retries on rate limit
    const result = await client.executeWithRetry('workflow-id', {
      input: { message: 'Process this' }
    }, {
    const result = await client.executeWithRetry('workflow-id', { message: 'Process this' }, {}, {
      maxRetries: 5,
      initialDelay: 1000,
      maxDelay: 60000,
@@ -908,8 +893,7 @@ const client = new SimStudioClient({
async function executeWithStreaming() {
  try {
    // Enable streaming for specific block outputs
    const result = await client.executeWorkflow('workflow-id', {
      input: { message: 'Count to five' },
    const result = await client.executeWorkflow('workflow-id', { message: 'Count to five' }, {
      stream: true,
      selectedOutputs: ['agent1.content'] // Use blockName.attribute format
    });
@@ -1033,3 +1017,14 @@ function StreamingWorkflow() {
## License

Apache-2.0

import { FAQ } from '@/components/ui/faq'

<FAQ items={[
  { question: "Do I need to deploy a workflow before I can execute it via the SDK?", answer: "Yes. Workflows must be deployed before they can be executed through the SDK. You can use the validateWorkflow() method to check whether a workflow is deployed and ready. If it returns false, deploy the workflow from the Sim UI first and create or select an API key during deployment." },
  { question: "What is the difference between sync and async execution?", answer: "Sync execution (the default) blocks until the workflow completes and returns the full result. Async execution returns immediately with a task ID that you can poll using getJobStatus(). Use async mode for long-running workflows to avoid request timeouts. Async job statuses include queued, processing, completed, failed, and cancelled." },
  { question: "How does streaming work with the SDK?", answer: "Enable streaming by setting stream: true and specifying selectedOutputs with block names and attributes in blockName.attribute format (e.g., ['agent1.content']). The response uses Server-Sent Events (SSE) format, sending incremental chunks as the workflow executes. Each chunk includes the blockId and the text content. A final done event includes the execution metadata." },
  { question: "How does the SDK handle rate limiting?", answer: "The SDK provides built-in rate limiting support through the executeWithRetry() method. It uses exponential backoff (1s, 2s, 4s, 8s...) with 25% jitter to avoid thundering herd problems. If the API returns a retry-after header, that value is used instead. You can configure maxRetries, initialDelay, maxDelay, and backoffMultiplier. Use getRateLimitInfo() to check your current rate limit status." },
  { question: "Is it safe to use the SDK in browser-side code?", answer: "You can use the SDK in the browser, but you should not expose your API key in client-side code. In production, use a backend proxy server to handle SDK calls, or use a public API key with limited permissions. The SDK works with both Node.js and browser environments, but sensitive keys should stay server-side." },
  { question: "How do I send files to a workflow through the SDK?", answer: "File objects are automatically detected and converted to base64 format. Include them in the input object under the field name that matches your workflow's API trigger input format. In the browser, pass File objects directly from file inputs. In Node.js, create File objects from buffers. You can also provide files as URL references with type, data, name, and mime fields." },
]} />

@@ -2,10 +2,12 @@
title: Function
---

import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'

The Function block executes custom JavaScript or TypeScript code in your workflows. Transform data, perform calculations, or implement custom logic.
The Function block executes custom JavaScript, TypeScript, or Python code in your workflows. Transform data, perform calculations, or implement custom logic.

<div className="flex justify-center">
  <Image
@@ -41,6 +43,8 @@ Input → Function (Validate & Sanitize) → API (Save to Database)

### Example: Loyalty Score Calculator

<Tabs items={['JavaScript', 'Python']}>
<Tab value="JavaScript">
```javascript title="loyalty-calculator.js"
// Process customer data and calculate loyalty score
const { purchaseHistory, accountAge, supportTickets } = <agent>;
@@ -64,6 +68,120 @@ return {
  metrics: { spendScore, frequencyScore, supportScore }
};
```
</Tab>
<Tab value="Python">
```python title="loyalty-calculator.py"
import json

# Reference outputs from other blocks using angle bracket syntax
data = json.loads('<agent>')
purchase_history = data["purchaseHistory"]
account_age = data["accountAge"]
support_tickets = data["supportTickets"]

# Calculate metrics
total_spent = sum(p["amount"] for p in purchase_history)
purchase_frequency = len(purchase_history) / (account_age / 365)
ticket_ratio = support_tickets["resolved"] / support_tickets["total"]

# Calculate loyalty score (0-100)
spend_score = min(total_spent / 1000 * 30, 30)
frequency_score = min(purchase_frequency * 20, 40)
support_score = ticket_ratio * 30

loyalty_score = round(spend_score + frequency_score + support_score)

tier = "Platinum" if loyalty_score >= 80 else "Gold" if loyalty_score >= 60 else "Silver"

result = {
    "customer": data["name"],
    "loyaltyScore": loyalty_score,
    "loyaltyTier": tier,
    "metrics": {
        "spendScore": spend_score,
        "frequencyScore": frequency_score,
        "supportScore": support_score
    }
}
print(json.dumps(result))
```
</Tab>
</Tabs>

## Python Support

The Function block supports Python as an alternative to JavaScript. Python code runs in a secure [E2B](https://e2b.dev) cloud sandbox.

<div className="flex justify-center">
  <Image
    src="/static/blocks/function-python.png"
    alt="Function block with Python selected"
    width={400}
    height={500}
    className="my-6"
  />
</div>

### Enabling Python

Select **Python** from the language dropdown in the Function block. Python execution requires E2B to be enabled on your Sim instance.

<Callout type="warn">
If you don't see Python as an option in the language dropdown, E2B is not enabled. This only applies to self-hosted instances — E2B is enabled by default on sim.ai.
</Callout>

<Callout type="info">
Python code always runs in the E2B sandbox, even for simple scripts without imports. This ensures a secure, isolated execution environment.
</Callout>

### Returning Results

In Python, print your result as JSON to stdout. The Function block captures stdout and makes it available via `<function.result>`:

```python title="example.py"
import json

data = {"status": "processed", "count": 42}
print(json.dumps(data))
```

### Available Libraries

The E2B sandbox includes the Python standard library (`json`, `re`, `datetime`, `math`, `os`, `collections`, etc.) and common packages like `matplotlib` for visualization. Charts generated with matplotlib are captured as images automatically.

<Callout type="info">
The exact set of pre-installed packages depends on the E2B sandbox configuration. If a package you need isn't available, consider calling an external API from your code instead.
</Callout>

### Matplotlib Charts

When your Python code generates matplotlib figures, they are automatically captured and returned as base64-encoded PNG images in the output:

```python title="chart.py"
import matplotlib.pyplot as plt
import json

data = json.loads('<api.data>')

plt.figure(figsize=(10, 6))
plt.bar(data["labels"], data["values"])
plt.title("Monthly Revenue")
plt.xlabel("Month")
plt.ylabel("Revenue ($)")
plt.savefig("chart.png")
plt.show()
```

{/* TODO: Screenshot of Python code execution output in the logs panel */}

### JavaScript vs. Python

| | JavaScript | Python |
|--|-----------|--------|
| **Execution** | Local VM (fast) or E2B sandbox (with imports) | Always E2B sandbox |
| **Returning results** | `return { ... }` | `print(json.dumps({ ... }))` |
| **HTTP requests** | `fetch()` built-in | `requests` or `httpx` |
| **Best for** | Quick transforms, JSON manipulation | Data science, charting, complex math |

## Best Practices


@@ -78,7 +78,7 @@ Defines the fields approvers fill in when responding. This data becomes availabl
|
||||
}
|
||||
```
|
||||
|
||||
Access resume data in downstream blocks using `<blockId.resumeInput.fieldName>`.
|
||||
Access resume data in downstream blocks using `<blockId.fieldName>`.
|
||||
|
||||
## Approval Methods
|
||||
|
||||
@@ -93,11 +93,12 @@ Access resume data in downstream blocks using `<blockId.resumeInput.fieldName>`.
|
||||
<Tab>
|
||||
### REST API
|
||||
|
||||
Programmatically resume workflows using the resume endpoint. The `contextId` is available from the block's `resumeEndpoint` output or from the paused execution detail.
|
||||
Programmatically resume workflows using the resume endpoint. The `contextId` is available from the block's `resumeEndpoint` output or from the `_resume` object in the paused execution response.
|
||||
|
||||
```bash
|
||||
POST /api/resume/{workflowId}/{executionId}/{contextId}
|
||||
Content-Type: application/json
|
||||
X-API-Key: your-api-key
|
||||
|
||||
{
|
||||
"input": {
|
||||
@@ -107,23 +108,56 @@ Access resume data in downstream blocks using `<blockId.resumeInput.fieldName>`.
|
||||
}
|
||||
```
|
||||
|
||||
The response includes a new `executionId` for the resumed execution:
|
||||
The resume endpoint automatically respects the execution mode used in the original execute call:
|
||||
|
||||
- **Sync mode** (default) — The response waits for the remaining workflow to complete and returns the full result:
|
||||
|
||||
```json
|
||||
{
|
||||
"status": "started",
|
||||
"success": true,
|
||||
"status": "completed",
|
||||
"executionId": "<resumeExecutionId>",
|
||||
"message": "Resume execution started."
|
||||
"output": { ... },
|
||||
"metadata": { "duration": 1234, "startTime": "...", "endTime": "..." }
|
||||
}
|
||||
```
|
||||
|
||||
To poll execution progress after resuming, connect to the SSE stream:
|
||||
If the resumed workflow hits another HITL block, the response returns `"status": "paused"` with new `_resume` URLs in the output.
|
||||
|
||||
```bash
|
||||
GET /api/workflows/{workflowId}/executions/{resumeExecutionId}/stream
|
||||
- **Stream mode** (`stream: true` on the original execute call) — The resume response streams SSE events with `selectedOutputs` chunks, just like the initial execution.
|
||||
|
||||
- **Async mode** (`X-Execution-Mode: async` on the original execute call) — The resume dispatches execution to a background worker and returns immediately with `202`, including a `jobId` and `statusUrl` for polling:
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"async": true,
|
||||
"jobId": "<jobId>",
|
||||
"executionId": "<resumeExecutionId>",
|
||||
"message": "Resume execution queued",
|
||||
"statusUrl": "/api/jobs/<jobId>"
|
||||
}
|
||||
```
|
||||
|
||||
Build custom approval UIs or integrate with existing systems.
|
||||
#### Polling execution status
|
||||
|
||||
Poll the `statusUrl` from the async response to check when the resume completes:
|
||||
|
||||
```bash
|
||||
GET /api/jobs/{jobId}
|
||||
X-API-Key: your-api-key
|
||||
```
|
||||
|
||||
Returns job status and, when completed, the full workflow output.
|
||||
|
||||
To check on a paused execution's pause points and resume links:
|
||||
|
||||
```bash
|
||||
GET /api/resume/{workflowId}/{executionId}
|
||||
X-API-Key: your-api-key
|
||||
```
|
||||
|
||||
Returns the paused execution detail with all pause points, their statuses, and resume links. Returns `404` when the execution has completed and is no longer paused.
|
||||
</Tab>
|
||||
<Tab>
|
||||
### Webhook
|
||||
@@ -132,6 +166,53 @@ Access resume data in downstream blocks using `<blockId.resumeInput.fieldName>`.
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
## API Execute Behavior
|
||||
|
||||
When triggering a workflow via the execute API (`POST /api/workflows/{id}/execute`), HITL blocks cause the execution to pause and return the `_resume` data in the response:
|
||||
|
||||
<Tabs items={['Sync (JSON)', 'Stream (SSE)', 'Async']}>
|
||||
<Tab>
|
||||
The response includes the full pause data with resume URLs:
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"executionId": "<executionId>",
|
||||
"output": {
|
||||
"data": {
|
||||
"operation": "human",
|
||||
"_resume": {
|
||||
"apiUrl": "/api/resume/{workflowId}/{executionId}/{contextId}",
|
||||
"uiUrl": "/resume/{workflowId}/{executionId}",
|
||||
"contextId": "<contextId>",
|
||||
"executionId": "<executionId>",
|
||||
"workflowId": "<workflowId>"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
</Tab>
|
||||
<Tab>
|
||||
Blocks before the HITL stream their `selectedOutputs` normally. When execution pauses, the final SSE event includes `status: "paused"` and the `_resume` data:
|
||||
|
||||
```
|
||||
data: {"blockId":"agent1","chunk":"streamed content..."}
|
||||
data: {"event":"final","data":{"success":true,"output":{...,"_resume":{...}},"status":"paused"}}
|
||||
data: "[DONE]"
|
||||
```
|
||||
|
||||
On resume, blocks after the HITL stream their `selectedOutputs` the same way.
<Callout type="info">
HITL blocks are automatically excluded from the `selectedOutputs` dropdown since their data is always included in the pause response.
</Callout>
</Tab>
<Tab>
Returns `202` immediately. Use the polling endpoint to check when the execution pauses.
</Tab>
</Tabs>

## Common Use Cases

**Content Approval** - Review AI-generated content before publishing
@@ -161,9 +242,9 @@ Agent (Generate) → Human in the Loop (QA) → Gmail (Send)
**`response`** - Display data shown to the approver (json)
**`submission`** - Form submission data from the approver (json)
**`submittedAt`** - ISO timestamp when the workflow was resumed
**`resumeInput.*`** - All fields defined in Resume Form become available after the workflow resumes
**`<fieldName>`** - All fields defined in Resume Form become available at the top level after the workflow resumes

Access using `<blockId.resumeInput.fieldName>`.
Access using `<blockId.fieldName>`.

## Example

@@ -187,7 +268,7 @@ Access using `<blockId.resumeInput.fieldName>`.
**Downstream Usage:**
```javascript
// Condition block
<approval1.resumeInput.approved> === true
<approval1.approved> === true
```
The example below shows an approval portal as seen by an approver after the workflow is paused. Approvers can review the data and provide inputs as a part of the workflow resumption. The approval portal can be accessed directly via the unique URL, `<blockId.url>`.

@@ -204,7 +285,7 @@ The example below shows an approval portal as seen by an approver after the work
<FAQ items={[
  { question: "How long does the workflow stay paused?", answer: "The workflow pauses indefinitely until a human provides input through the approval portal, the REST API, or a webhook. There is no automatic timeout — it will wait until someone responds." },
  { question: "What notification channels can I use to alert approvers?", answer: "You can configure notifications through Slack, Gmail, Microsoft Teams, SMS (via Twilio), or custom webhooks. Include the approval URL in your notification message so approvers can access the portal directly." },
  { question: "How do I access the approver's input in downstream blocks?", answer: "Use the syntax <blockId.resumeInput.fieldName> to reference specific fields from the resume form. For example, if your block ID is 'approval1' and the form has an 'approved' field, use <approval1.resumeInput.approved>." },
  { question: "How do I access the approver's input in downstream blocks?", answer: "Use the syntax <blockId.fieldName> to reference specific fields from the resume form. For example, if your block name is 'approval1' and the form has an 'approved' field, use <approval1.approved>." },
  { question: "Can I chain multiple Human in the Loop blocks for multi-stage approvals?", answer: "Yes. You can place multiple Human in the Loop blocks in sequence to create multi-stage approval workflows. Each block pauses independently and can have its own notification configuration and resume form fields." },
  { question: "Can I resume the workflow programmatically without the portal?", answer: "Yes. Each block exposes a resume API endpoint that you can call with a POST request containing the form data as JSON. This lets you build custom approval UIs or integrate with existing systems like Jira or ServiceNow." },
  { question: "What outputs are available after the workflow resumes?", answer: "The block outputs include the approval portal URL, the resume API endpoint URL, the display data shown to the approver, the form submission data, the raw resume input, and an ISO timestamp of when the workflow was resumed." },
@@ -1,225 +1,71 @@
---
title: Copilot
description: Your per-workflow AI assistant for building and editing workflows.
---

import { Callout } from 'fumadocs-ui/components/callout'
import { Card, Cards } from 'fumadocs-ui/components/card'
import { Image } from '@/components/ui/image'
import { MessageCircle, Hammer, ListChecks, Zap, Globe, Paperclip, History, RotateCcw, Brain } from 'lucide-react'
import { Video } from '@/components/ui/video'
import { FAQ } from '@/components/ui/faq'

Copilot is your in-editor assistant that helps you build and edit workflows. It can:
Copilot is the AI assistant built into every workflow editor. It is scoped to the workflow you have open — it reads the current structure, makes changes directly, and saves checkpoints so you can revert if needed.

- **Explain**: Answer questions about Sim and your current workflow
- **Guide**: Suggest edits and best practices
- **Build**: Add blocks, wire connections, and configure settings
- **Debug**: Analyze execution issues and optimize performance
For workspace-wide tasks (managing multiple workflows, running research, working with tables, scheduling jobs), use [Mothership](/mothership).

<Callout type="info">
Copilot is a Sim-managed service. For self-hosted deployments:
1. Go to [sim.ai](https://sim.ai) → Settings → Copilot and generate a Copilot API key
2. Set `COPILOT_API_KEY` in your self-hosted environment
Copilot is a Sim-managed service. For self-hosted deployments, go to [sim.ai](https://sim.ai) → Settings → Copilot, generate a Copilot API key, then set `COPILOT_API_KEY` in your self-hosted environment.
</Callout>

## Modes
<Video src="copilot/copilot.mp4" width={700} height={450} />

Switch between modes using the mode selector at the bottom of the input area.
## What Copilot Can Do
<Cards>
  <Card
    title={
      <span className="inline-flex items-center gap-2">
        <MessageCircle className="h-4 w-4 text-muted-foreground" />
        Ask
      </span>
    }
  >
    <div className="m-0 text-sm">
      Q&A mode for explanations, guidance, and suggestions without making changes to your workflow.
    </div>
  </Card>
  <Card
    title={
      <span className="inline-flex items-center gap-2">
        <Hammer className="h-4 w-4 text-muted-foreground" />
        Build
      </span>
    }
  >
    <div className="m-0 text-sm">
      Workflow building mode. Copilot can add blocks, wire connections, edit configurations, and debug issues.
    </div>
  </Card>
  <Card
    title={
      <span className="inline-flex items-center gap-2">
        <ListChecks className="h-4 w-4 text-muted-foreground" />
        Plan
      </span>
    }
  >
    <div className="m-0 text-sm">
      Creates a step-by-step implementation plan for your workflow without making any changes. Helps you think through the approach before building.
    </div>
  </Card>
</Cards>
Copilot can read and modify the workflow you are currently editing:

## Models
- Add, configure, and connect blocks
- Edit existing block configurations
- Delete blocks and connections
- Debug failures by reading execution logs
- Answer questions about the workflow or how Sim works

Select your preferred AI model using the model selector at the bottom right of the input area.
## Chat History

**Available Models:**
- Claude 4.6 Opus (default), 4.5 Opus, Sonnet, Haiku
- GPT 5.2 Codex, Pro
- Gemini 3 Pro

Choose based on your needs: faster models for simple tasks, more capable models for complex workflows.

## Context Menu (@)

Use the `@` symbol to reference resources and give Copilot more context:

| Reference | Description |
|-----------|-------------|
| **Chats** | Previous copilot conversations |
| **Workflows** | Any workflow in your workspace |
| **Workflow Blocks** | Blocks in the current workflow |
| **Blocks** | Block types and templates |
| **Knowledge** | Uploaded documents and knowledge bases |
| **Docs** | Sim documentation |
| **Templates** | Workflow templates |
| **Logs** | Execution logs and results |

Type `@` in the input field to open the context menu, then search or browse to find what you need.
## Slash Commands (/)

Use slash commands for quick actions:

| Command | Description |
|---------|-------------|
| `/fast` | Fast mode execution |
| `/research` | Research and exploration mode |
| `/actions` | Execute agent actions |

**Web Commands:**

| Command | Description |
|---------|-------------|
| `/search` | Search the web |
| `/read` | Read a specific URL |
| `/scrape` | Scrape web page content |
| `/crawl` | Crawl multiple pages |

Type `/` in the input field to see available commands.

## Chat Management

### Starting a New Chat

Click the **+** button in the Copilot header to start a fresh conversation.

### Chat History

Click **History** to view previous conversations grouped by date. You can:
- Click a chat to resume it
- Delete chats you no longer need

### Editing Messages

Hover over any of your messages and click **Edit** to modify and resend it. This is useful for refining your prompts.

### Message Queue

If you send a message while Copilot is still responding, it gets queued. You can:
- View queued messages in the expandable queue panel
- Send a queued message immediately (aborts current response)
- Remove messages from the queue
Click **History** (clock icon) in the Copilot header to see past conversations for this workflow. Click any chat to resume it, or click **+** to start a new one.
## File Attachments

Click the attachment icon to upload files with your message. Supported file types include:
- Images (preview thumbnails shown)
- PDFs
- Text files, JSON, XML
- Other document formats
Click the attachment icon in the input to upload files alongside your message. Copilot can read images, PDFs, and text-based files as context.

Files are displayed as clickable thumbnails that open in a new tab.
## Checkpoints

## Checkpoints & Changes
When Copilot modifies a workflow, it saves a checkpoint of the previous state.

When Copilot makes changes to your workflow, it saves checkpoints so you can revert if needed.
To revert: hover over a Copilot message and click the checkpoints icon, then click **Revert** on the state you want to restore. Reverting cannot be undone.

### Viewing Checkpoints
## Thinking

Hover over a Copilot message and click the checkpoints icon to see saved workflow states for that message.
For complex requests, Copilot may show its reasoning in an expandable thinking block before responding. The block shows how long the thinking took and collapses after the response is complete.

### Reverting Changes
## Usage

Click **Revert** on any checkpoint to restore your workflow to that state. A confirmation dialog will warn that this action cannot be undone.
Copilot usage is billed per token and counts toward your plan's credit usage. If you reach your limit, enable on-demand billing from Settings → Subscription.

### Accepting Changes

When Copilot proposes changes, you can:
- **Accept**: Apply the proposed changes (`Mod+Shift+Enter`)
- **Reject**: Dismiss the changes and keep your current workflow

## Thinking Blocks

For complex requests, Copilot may show its reasoning process in expandable thinking blocks:

- Blocks auto-expand while Copilot is thinking
- Click to manually expand/collapse
- Shows duration of the thinking process
- Helps you understand how Copilot arrived at its solution

## Options Selection

When Copilot presents multiple options, you can select using:

| Control | Action |
|---------|--------|
| **1-9** | Select option by number |
| **Arrow Up/Down** | Navigate between options |
| **Enter** | Select highlighted option |

Selected options are highlighted; unselected options appear struck through.

## Keyboard Shortcuts

| Shortcut | Action |
|----------|--------|
| `@` | Open context menu |
| `/` | Open slash commands |
| `Arrow Up/Down` | Navigate menu items |
| `Enter` | Select menu item |
| `Esc` | Close menus |
| `Mod+Shift+Enter` | Accept Copilot changes |

## Usage Limits

Copilot usage is billed per token from the underlying LLM and counts toward your plan's credit usage. If you reach your usage limit, enable on-demand billing from Settings → Subscription to continue using Copilot beyond your plan's included credits.

<Callout type="info">
See the [Cost Calculation page](/execution/costs) for billing and plan details.
</Callout>
## Copilot MCP

You can use Copilot as an MCP server in your favorite editor or AI client. This lets you build, test, deploy, and manage Sim workflows directly from tools like Cursor, Claude Code, Claude Desktop, and VS Code.
You can use Copilot as an MCP server to build, test, and manage Sim workflows from external editors — Cursor, Claude Code, Claude Desktop, and VS Code.

### Generating a Copilot API Key

To connect to the Copilot MCP server, you need a **Copilot API key**:

1. Go to [sim.ai](https://sim.ai) and sign in
2. Navigate to **Settings** → **Copilot**
3. Click **Generate API Key**
4. Copy the key — it is only shown once

The key will look like `sk-sim-copilot-...`. You will use this in the configuration below.
The key will look like `sk-sim-copilot-...`.

### Cursor

Add the following to your `.cursor/mcp.json` (project-level) or global Cursor MCP settings:
Add to `.cursor/mcp.json`:

```json
{
@@ -234,12 +80,8 @@ Add the following to your `.cursor/mcp.json` (project-level) or global Cursor MC
}
```

Replace `YOUR_COPILOT_API_KEY` with the key you generated above.

### Claude Code

Run the following command to add the Copilot MCP server:

```bash
claude mcp add sim-copilot \
  --transport http \
@@ -247,11 +89,9 @@ claude mcp add sim-copilot \
  --header "X-API-Key: YOUR_COPILOT_API_KEY"
```

Replace `YOUR_COPILOT_API_KEY` with your key.

### Claude Desktop

Claude Desktop requires [`mcp-remote`](https://www.npmjs.com/package/mcp-remote) to connect to HTTP-based MCP servers. Add the following to your Claude Desktop config file (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):
Claude Desktop requires [`mcp-remote`](https://www.npmjs.com/package/mcp-remote). Add to `~/Library/Application Support/Claude/claude_desktop_config.json`:

```json
{
@@ -270,11 +110,9 @@ Claude Desktop requires [`mcp-remote`](https://www.npmjs.com/package/mcp-remote)
}
```

Replace `YOUR_COPILOT_API_KEY` with your key.

### VS Code

Add the following to your VS Code `settings.json` or workspace `.vscode/settings.json`:
Add to `settings.json` or `.vscode/settings.json`:

```json
{
@@ -292,21 +130,14 @@ Add the following to your VS Code `settings.json` or workspace `.vscode/settings
}
```

Replace `YOUR_COPILOT_API_KEY` with your key.

<Callout type="info">
For self-hosted deployments, replace `https://www.sim.ai` with your self-hosted Sim URL.
</Callout>

import { FAQ } from '@/components/ui/faq'

<FAQ items={[
  { question: "What is the difference between Ask, Build, and Plan mode?", answer: "Copilot has three modes. Ask mode is a read-only Q&A mode for explanations, guidance, and suggestions without making any changes to your workflow. Build mode allows Copilot to actively modify your workflow by adding blocks, wiring connections, editing configurations, and debugging issues. Plan mode creates a step-by-step implementation plan for your request without making any changes, so you can review the approach before committing. Use Ask when you want to learn or explore ideas, Plan when you want to see a proposed approach first, and Build when you want Copilot to make changes directly." },
  { question: "Does Copilot have access to my full workflow when answering questions?", answer: "Copilot has access to the workflow you are currently editing as context. You can also use the @ context menu to reference other workflows, previous chats, execution logs, knowledge bases, documentation, and templates to give Copilot additional context for your request." },
  { question: "How do I use Copilot from an external editor like Cursor or VS Code?", answer: "You can use Copilot as an MCP server from external editors. First, generate a Copilot API key from Settings > Copilot on sim.ai. Then add the MCP server configuration to your editor using the endpoint https://www.sim.ai/api/mcp/copilot with your API key in the X-API-Key header. Configuration examples are available for Cursor, Claude Code, Claude Desktop, and VS Code." },
  { question: "Can I revert changes that Copilot made to my workflow?", answer: "Yes. When Copilot makes changes in Build mode, it saves checkpoints of your workflow state. You can hover over a Copilot message and click the checkpoints icon to see saved states, then click Revert on any checkpoint to restore your workflow. Note that reverting cannot be undone, so review the checkpoint before confirming." },
  { question: "How does Copilot billing work?", answer: "Copilot usage is billed per token from the underlying LLM and counts toward your plan's credit usage. More capable models like Claude Opus cost more per token than lighter models like Haiku. If you reach your usage limit, you can enable on-demand billing from Settings > Subscription to continue using Copilot." },
  { question: "What do the slash commands like /research and /search do?", answer: "Slash commands trigger specialized behaviors. /fast enables fast mode execution, /research activates a research and exploration mode, and /actions executes agent actions. Web commands like /search, /read, /scrape, and /crawl let Copilot interact with the web to search for information, read URLs, scrape page content, or crawl multiple pages to gather context for your request." },
  { question: "How do I set up Copilot for a self-hosted deployment?", answer: "For self-hosted deployments, go to sim.ai > Settings > Copilot and generate a Copilot API key. Then set the COPILOT_API_KEY environment variable in your self-hosted environment. Copilot is a Sim-managed service, so the self-hosted instance communicates with Sim's servers to process requests." },
  { question: "How is Copilot different from Mothership?", answer: "Copilot is scoped to the workflow you have open — it reads and edits that workflow's blocks and connections. Mothership has access to your entire workspace and can build workflows, manage tables, run research, schedule jobs, and take actions across integrations." },
  { question: "Can Copilot access other workflows or workspace data?", answer: "Copilot is scoped to the current workflow. For tasks that span multiple workflows or require workspace-level context, use Mothership." },
  { question: "Can I revert changes Copilot made?", answer: "Yes. Copilot saves a checkpoint before each change. Hover over the message and click the checkpoints icon to see saved states, then click Revert to restore one. Reverting cannot be undone." },
  { question: "How does Copilot billing work?", answer: "Copilot usage is billed per token and counts toward your plan's credit usage. If you reach your limit, enable on-demand billing from Settings → Subscription." },
  { question: "How do I set up Copilot for a self-hosted deployment?", answer: "Go to sim.ai → Settings → Copilot and generate a Copilot API key. Set the COPILOT_API_KEY environment variable in your self-hosted environment. Copilot runs on Sim's infrastructure regardless of where you host the application." },
]} />

@@ -1,203 +1,121 @@
---
title: Credentials
description: Manage secrets, API keys, and OAuth connections for your workflows
title: Secrets
description: Manage API keys and environment variables for your workflows
---

import { Callout } from 'fumadocs-ui/components/callout'
import { Image } from '@/components/ui/image'
import { Step, Steps } from 'fumadocs-ui/components/steps'
import { FAQ } from '@/components/ui/faq'

Credentials provide a secure way to manage API keys, tokens, and third-party service connections across your workflows. Instead of hardcoding sensitive values into your workflow, you store them as credentials and reference them at runtime.
Secrets are key-value pairs that store sensitive data like API keys, tokens, and passwords. Instead of hardcoding values into your workflows, you store them as secrets and reference them by name at runtime.

Sim supports two categories of credentials: **secrets** for static values like API keys, and **OAuth accounts** for authenticated service connections like Google or Slack.
## Managing Secrets

## Getting Started

To manage credentials, open your workspace **Settings** and navigate to the **Secrets** tab.
To manage secrets, open your workspace **Settings** and navigate to the **Secrets** tab.

<Image
  src="/static/credentials/settings-secrets.png"
  alt="Settings modal showing the Secrets tab with a list of saved credentials"
  src="/static/secrets/secrets-list.png"
  alt="Secrets tab showing Workspace and Personal sections with inline key-value rows"
  width={700}
  height={200}
  height={500}
/>
From here you can search, create, and delete both secrets and OAuth connections.
Secrets are organized into two sections:

## Secrets
- **Workspace** — shared with all members of your workspace
- **Personal** — private to you

Secrets are key-value pairs that store sensitive data like API keys, tokens, and passwords. Each secret has a **key** (used to reference it in workflows) and a **value** (the actual secret).
### Adding a Secret

### Creating a Secret
Type a key name (e.g. `OPENAI_API_KEY`) into the **Key** column and its value into the **Value** column in the last empty row. A new empty row appears automatically as you type. Existing values are masked by default.

<Image
  src="/static/credentials/create-secret.png"
  alt="Create Secret dialog with fields for key, value, description, and scope toggle"
  width={500}
  height={400}
/>
When you're done, click **Save** to persist all changes.

<Steps>
  <Step>
    Click **+ Add** and select **Secret** as the type
  </Step>
  <Step>
    Enter a **Key** name (letters, numbers, and underscores only, e.g. `OPENAI_API_KEY`)
  </Step>
  <Step>
    Enter the **Value**
  </Step>
  <Step>
    Optionally add a **Description** to help your team understand what the secret is for
  </Step>
  <Step>
    Choose the **Scope** — Workspace or Personal
  </Step>
  <Step>
    Click **Create**
  </Step>
</Steps>
<Callout type="info">
Keys must use only letters, numbers, and underscores — no spaces or special characters.
</Callout>

### Using Secrets in Workflows
### Bulk Import

To reference a secret in any input field, type `{{` to open the dropdown. It will show your available secrets grouped by scope.
You can populate multiple secrets at once by pasting `.env`-style content into any key or value field. The parser supports standard `KEY=VALUE` pairs, `export KEY=VALUE`, quoted values, and inline comments.
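The accepted input can be sketched roughly as follows. This is a simplified approximation of the importer's rules, not its actual implementation; note the naive comment-stripping here would also truncate values that legitimately contain `#`:

```shell
# A .env-style snippet of the kind the bulk importer accepts.
cat > /tmp/secrets.env <<'EOF'
OPENAI_API_KEY=sk-example-123
export STRIPE_SECRET_KEY="sk_test_abc" # inline comment
# full-line comment
EOF

# Roughly the same parsing rules: strip comments, drop `export `,
# trim whitespace, split on the first `=`, and unquote the value.
while IFS= read -r line; do
  line=${line%%#*}
  line=$(printf '%s' "$line" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//' -e 's/^export //')
  [ -z "$line" ] && continue
  key=${line%%=*}
  val=${line#*=}
  val=${val#\"}; val=${val%\"}
  echo "$key=$val"
done < /tmp/secrets.env
```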
### Editing and Deleting

Click directly into any key or value cell to edit it. To delete a secret, click the trash icon on its row and save.

## Using Secrets in Workflows

To reference a secret in any input field, type `{{` to open the variable dropdown. Your available secrets are listed grouped by scope (workspace, then personal).

<Image
  src="/static/credentials/secret-dropdown.png"
  alt="Typing {{ in a code block opens a dropdown showing available workspace secrets"
  alt="Typing {{ in an input opens a dropdown showing available secrets"
  width={400}
  height={250}
/>

Select the secret you want to use. The reference will appear highlighted in blue, indicating it will be resolved at runtime.
Select the secret you want to use. The reference appears highlighted in blue and is resolved to its actual value at runtime.

<Image
  src="/static/credentials/secret-resolved.png"
  alt="A resolved secret reference shown in blue text as {{OPENAI_API_KEY}}"
  alt="A resolved secret reference shown as {{OPENAI_API_KEY}}"
  width={400}
  height={200}
/>

<Callout type="warn">
Secret values are never exposed in the workflow editor or logs. They are only resolved during execution.
Secret values are never exposed in the workflow editor or execution logs — they are only resolved during execution.
</Callout>

### Bulk Import
## Secret Details

You can import multiple secrets at once by pasting `.env`-style content:

1. Click **+ Add**, then switch to **Bulk** mode
2. Paste your environment variables in `KEY=VALUE` format
3. Choose the scope for all imported secrets
4. Click **Create**

The parser supports standard `KEY=VALUE` pairs, quoted values, comments (`#`), and blank lines.

## OAuth Accounts

OAuth accounts are authenticated connections to third-party services like Google, Slack, GitHub, and more. Sim handles the OAuth flow, token storage, and automatic refresh.

You can connect **multiple accounts per provider** — for example, two separate Gmail accounts for different workflows.

### Connecting an OAuth Account
Click **Details** on any secret row to open its detail view.

<Image
  src="/static/credentials/create-oauth.png"
  alt="Create Secret dialog with OAuth Account type selected, showing display name and provider dropdown"
  width={500}
  src="/static/secrets/secret-details.png"
  alt="Secret details view showing Display Name, Description, and Members sections"
  width={700}
  height={400}
/>

<Steps>
  <Step>
    Click **+ Add** and select **OAuth Account** as the type
  </Step>
  <Step>
    Enter a **Display name** to identify this connection (e.g. "Work Gmail" or "Marketing Slack")
  </Step>
  <Step>
    Optionally add a **Description**
  </Step>
  <Step>
    Select the **Account** provider from the dropdown
  </Step>
  <Step>
    Click **Connect** and complete the authorization flow
  </Step>
</Steps>
From here you can:

### Using OAuth Accounts in Workflows
- Edit the **Display Name** and **Description**
- Manage **Members** — invite teammates by email and assign them an **Admin** or **Member** role

Blocks that require authentication (e.g. Gmail, Slack, Google Sheets) display a credential selector dropdown. Select the OAuth account you want the block to use.

<Image
  src="/static/credentials/oauth-selector.png"
  alt="Gmail block showing the account selector dropdown with a connected account and option to connect another"
  width={500}
  height={350}
/>

You can also connect additional accounts directly from the block by selecting **Connect another account** at the bottom of the dropdown.

<Callout type="info">
If a block requires an OAuth connection and none is selected, the workflow will fail at that step.
</Callout>
Click **Save** to apply changes, or **Back** to return to the list.
## Workspace vs. Personal
|
||||
|
||||
Credentials can be scoped to your **workspace** (shared with your team) or kept **personal** (private to you).
|
||||
|
||||
| | Workspace | Personal |
|
||||
|---|---|---|
|
||||
| **Visibility** | All workspace members | Only you |
|
||||
| **Use in workflows** | Any member can use | Only you can use |
|
||||
| **Best for** | Production workflows, shared services | Testing, personal API keys |
|
||||
| **Who can edit** | Workspace admins | Only you |
|
||||
| **Auto-shared** | Yes — all members get access on creation | No — only you have access |
|
||||
|
||||
<Callout type="info">
|
||||
When a workspace and personal secret share the same key name, the **workspace secret takes precedence**.
|
||||
When a workspace secret and a personal secret share the same key name, the **workspace secret takes precedence**.
|
||||
</Callout>
|
||||
|
||||
### Resolution Order
|
||||
|
||||
When a workflow runs, Sim resolves secrets in this order:
|
||||
When a workflow runs, secrets resolve in this order:
|
||||
|
||||
1. **Workspace secrets** are checked first
|
||||
2. **Personal secrets** are used as a fallback — from the user who triggered the run (manual) or the workflow owner (automated runs via API, webhook, or schedule)
|
||||
|
||||
## Access Control

Each credential has role-based access control:

- **Admin** — can view, edit, delete, and manage who has access
- **Member** — can use the credential in workflows (read-only)

When you create a workspace secret, all current workspace members are automatically granted access. Personal secrets are only accessible to you by default.

### Sharing a Credential

To share a credential with specific team members:

1. Click **Details** on the credential
2. Invite members by email
3. Assign them an **Admin** or **Member** role

## Best Practices

- **Use workspace secrets for production** so workflows work regardless of who triggers them
- **Use personal secrets for development** to keep test keys separate
- **Name keys descriptively** — `STRIPE_SECRET_KEY` over `KEY1`
- **Connect multiple OAuth accounts** when you need different permissions or identities per workflow
- **Never hardcode secrets** in workflow input fields — always use `{{KEY}}` references
<FAQ items={[
  { question: "Are my secrets encrypted at rest?", answer: "Yes. Secret values are encrypted before being stored in the database using server-side encryption, so raw values are never persisted in plaintext. They are also never exposed in the workflow editor, logs, or API responses." },
  { question: "What happens if both a workspace secret and a personal secret have the same key name?", answer: "The workspace secret takes precedence. During execution, the resolver checks workspace secrets first and uses personal secrets only as a fallback. This ensures production workflows use the shared, team-managed value." },
  { question: "Who determines which personal secret is used for automated runs?", answer: "For manual runs, the personal secrets of the user who clicked Run are used as the fallback. For automated runs triggered by API, webhook, or schedule, the personal secrets of the workflow owner are used instead." },
  { question: "Does Sim handle OAuth token refresh automatically?", answer: "Yes. When an OAuth token is used during execution, the platform checks whether the access token has expired and automatically refreshes it using the stored refresh token before making the API call. You do not need to handle token refresh manually." },
  { question: "Can I connect multiple OAuth accounts for the same provider?", answer: "Yes. You can connect multiple accounts per provider (for example, two separate Gmail accounts). Each block that requires OAuth lets you select which specific account to use from the credential dropdown. This is useful when different workflows or blocks need different permissions or identities." },
  { question: "Can I import secrets from a .env file?", answer: "Yes. Paste .env-style content (KEY=VALUE format) into any key or value field and the secrets will be auto-populated. The parser supports export KEY=VALUE, quoted values, and comments." },
  { question: "What happens if I delete a secret that is used in a workflow?", answer: "The workflow will fail at any block that references the deleted secret during execution because the value cannot be resolved. Update any references before deleting a secret." },
]} />
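The `.env` import behavior described in the FAQ can be approximated with a small parser. This is a sketch of the format, not Sim's actual parser; it handles `export KEY=VALUE`, quoted values, comment lines, and blank lines:

```typescript
// Illustrative .env parser: supports an optional `export` prefix,
// surrounding quotes, full-line comments, and blank lines.
function parseEnv(content: string): Record<string, string> {
  const secrets: Record<string, string> = {}
  for (const rawLine of content.split('\n')) {
    const line = rawLine.trim()
    if (!line || line.startsWith('#')) continue // skip blanks and comments
    const match = line.match(/^(?:export\s+)?([A-Za-z_][A-Za-z0-9_]*)\s*=\s*(.*)$/)
    if (!match) continue // silently skip malformed lines
    let value = match[2].trim()
    // Strip matching surrounding single or double quotes.
    if (
      value.length >= 2 &&
      ((value.startsWith('"') && value.endsWith('"')) ||
        (value.startsWith("'") && value.endsWith("'")))
    ) {
      value = value.slice(1, -1)
    }
    secrets[match[1]] = value
  }
  return secrets
}

parseEnv('# prod keys\nexport STRIPE_SECRET_KEY="sk_live_123"\nREGION=us-east-1')
```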
@@ -1,5 +1,5 @@
{
  "title": "Secrets",
  "pages": ["index"],
  "defaultOpen": false
}
216
apps/docs/content/docs/en/enterprise/access-control.mdx
Normal file
@@ -0,0 +1,216 @@
---
title: Access Control
description: Restrict which models, blocks, and platform features each group of users can access
---

import { Callout } from 'fumadocs-ui/components/callout'
import { FAQ } from '@/components/ui/faq'
import { Image } from '@/components/ui/image'

Access Control lets workspace admins define permission groups that restrict what each set of workspace members can do — which AI model providers they can use, which workflow blocks they can place, and which platform features are visible to them. Permission groups are scoped to a single workspace: a user can be in different groups (or no group) in different workspaces. Restrictions are enforced both in the workflow executor and in Mothership, based on the workflow's workspace.

---

## How it works

Access control is built around **permission groups**. Each group belongs to a specific workspace and has a name, an optional description, and a configuration that defines what its members can and cannot do. A user can belong to at most one permission group **per workspace**, but can belong to different groups in different workspaces.

When a user runs a workflow or uses Mothership, Sim reads their group's configuration and applies it:

- **In the executor:** If a workflow uses a disallowed block type or model provider, execution halts immediately with an error. This applies to both manual runs and scheduled or API-triggered deployments.
- **In Mothership:** Disallowed blocks are filtered out of the block list so they cannot be added to a workflow. Disallowed tool types (MCP, custom tools, skills) are skipped if Mothership attempts to use them.

---
## Setup

### 1. Open Access Control settings

Go to **Settings → Enterprise → Access Control** in the workspace you want to manage. Each workspace has its own set of permission groups.

<Image src="/static/enterprise/access-control-groups.png" alt="Access Control settings showing a list of permission groups: Contractors, Sales, Engineering, and Marketing, each with Details and Delete actions" width={900} height={500} />

### 2. Create a permission group

Click **+ Create** and enter a name (required) and optional description. You can also enable **Auto-add new members** — when active, any new member who joins this workspace is automatically added to this group. Only one group per workspace can have this setting enabled at a time.

### 3. Configure permissions

Click **Details** on a group, then open **Configure Permissions**. There are three tabs.

#### Model Providers

Controls which AI model providers members of this group can use.

<Image src="/static/enterprise/access-control-model-providers.png" alt="Model Providers tab showing a grid of AI providers including Ollama, vLLM, OpenAI, Anthropic, Google, Azure OpenAI, and others with checkboxes to allow or restrict access" width={900} height={500} />

The list shows all providers available in Sim.

- **All checked (default):** All providers are allowed.
- **Subset checked:** Only the selected providers are allowed. Any workflow block or agent using a provider not on the list will fail at execution time.

#### Blocks

Controls which workflow blocks members can place and execute.

<Image src="/static/enterprise/access-control-blocks.png" alt="Blocks tab showing Core Blocks (Agent, API, Condition, Function, Knowledge, etc.) and Tools (integrations like 1Password, A2A, Ahrefs, Airtable, and more) with checkboxes to allow or restrict each" width={900} height={500} />

Blocks are split into two sections: **Core Blocks** (Agent, API, Condition, Function, etc.) and **Tools** (all integration blocks).

- **All checked (default):** All blocks are allowed.
- **Subset checked:** Only the selected blocks are allowed. Workflows that already contain a disallowed block will fail when run — they are not automatically modified.

<Callout type="info">
The `start_trigger` block (the entry point of every workflow) is always allowed and cannot be restricted.
</Callout>

#### Platform

Controls visibility of platform features and modules.

<Image src="/static/enterprise/access-control-platform.png" alt="Platform tab showing feature toggles grouped by category: Sidebar (Knowledge Base, Tables, Templates), Workflow Panel (Copilot), Settings Tabs, Tools, Deploy Tabs, Features, Logs, and Collaboration" width={900} height={500} />

Each checkbox maps to a specific feature; checking it hides or disables that feature for group members.

**Sidebar**

| Feature | Effect when checked |
|---------|-------------------|
| Knowledge Base | Hides the Knowledge Base section from the sidebar |
| Tables | Hides the Tables section from the sidebar |
| Templates | Hides the Templates section from the sidebar |

**Workflow Panel**

| Feature | Effect when checked |
|---------|-------------------|
| Copilot | Hides the Copilot panel inside the workflow editor |

**Settings Tabs**

| Feature | Effect when checked |
|---------|-------------------|
| Integrations | Hides the Integrations tab in Settings |
| Secrets | Hides the Secrets tab in Settings |
| API Keys | Hides the Sim Keys tab in Settings |
| Files | Hides the Files tab in Settings |

**Tools**

| Feature | Effect when checked |
|---------|-------------------|
| MCP Tools | Disables the use of MCP tools in workflows and agents |
| Custom Tools | Disables the use of custom tools in workflows and agents |
| Skills | Disables the use of Sim Skills in workflows and agents |

**Deploy Tabs**

| Feature | Effect when checked |
|---------|-------------------|
| API | Hides the API deployment tab |
| MCP | Hides the MCP deployment tab |
| A2A | Hides the A2A deployment tab |
| Chat | Hides the Chat deployment tab |
| Template | Hides the Template deployment tab |

**Features**

| Feature | Effect when checked |
|---------|-------------------|
| Sim Mailer | Hides the Sim Mailer (Inbox) feature |
| Public API | Disables public API access for deployed workflows |

**Logs**

| Feature | Effect when checked |
|---------|-------------------|
| Trace Spans | Hides trace span details in execution logs |

**Collaboration**

| Feature | Effect when checked |
|---------|-------------------|
| Invitations | Disables the ability to invite new members to the workspace |

### 4. Add members

Open the group's **Details** view and add members by searching for users by name or email. Only users who already have workspace-level access can be added. A user can only belong to one group per workspace — adding a user to a new group within the same workspace removes them from their current group for that workspace.
---

## Enforcement

### Workflow execution

Restrictions are enforced at the point of execution, not at save time. If a group's configuration changes after a workflow is built:

- **Block restrictions:** Any workflow run that reaches a disallowed block halts immediately with an error. The workflow is not modified — only execution is blocked.
- **Model provider restrictions:** Any block or agent that uses a disallowed provider halts immediately with an error.
- **Tool restrictions (MCP, custom tools, skills):** Agents that use a disallowed tool type halt immediately with an error.

This applies regardless of how the workflow is triggered — manually, via API, via schedule, or via webhook.
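Conceptually, the executor's pre-flight check behaves like the sketch below. The `GroupConfig` shape and function name are hypothetical, for illustration only:

```typescript
interface GroupConfig {
  allowedBlocks: string[] | null    // null means all blocks are allowed
  allowedProviders: string[] | null // null means all providers are allowed
}

// Illustrative pre-flight validation: throw before a disallowed block runs.
function checkBlock(blockType: string, provider: string | null, config: GroupConfig): void {
  // The workflow entry point is always allowed and cannot be restricted.
  if (blockType === 'start_trigger') return
  if (config.allowedBlocks && !config.allowedBlocks.includes(blockType)) {
    throw new Error(`Block type "${blockType}" is not permitted by your permission group`)
  }
  if (provider && config.allowedProviders && !config.allowedProviders.includes(provider)) {
    throw new Error(`Model provider "${provider}" is not permitted by your permission group`)
  }
}

const config: GroupConfig = { allowedBlocks: ['agent', 'function'], allowedProviders: ['openai'] }
checkBlock('agent', 'openai', config) // passes
// checkBlock('api', null, config)    // would throw: block not permitted
```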
### Mothership

When a user opens Mothership, their permission group is read before any block or tool suggestions are made:

- Blocks not in the allowed list are filtered out of the block picker entirely — they do not appear as options.
- If Mothership generates a workflow step that would use a disallowed tool (MCP, custom, or skills), that step is skipped and the reason is noted.

---

## User membership rules

- A user can belong to **at most one** permission group **per workspace**, but may be in different groups across different workspaces.
- Moving a user to a new group within a workspace automatically removes them from their previous group in that workspace.
- Users not assigned to any group in a workspace have no restrictions applied in that workspace (all blocks, providers, and features are available to them there).
- If **Auto-add new members** is enabled on a group, new members of that workspace are automatically placed in the group. Only one group per workspace can have this setting active.

---
<FAQ items={[
  {
    question: "Who can create and manage permission groups?",
    answer: "Any workspace admin on an Enterprise-entitled workspace can create, edit, and delete permission groups for that workspace. The workspace's billed account must be on the Enterprise plan."
  },
  {
    question: "What happens to a workflow that was built before a block was restricted?",
    answer: "The workflow is not modified — it still exists and can be edited. However, any run that reaches a disallowed block will halt immediately with an error. The block must be removed or the user's group configuration must be updated before the workflow can run successfully."
  },
  {
    question: "Can a user be in multiple permission groups?",
    answer: "A user can belong to at most one permission group per workspace, but can belong to different groups in different workspaces. Adding a user to a new group within the same workspace automatically removes them from their previous group in that workspace."
  },
  {
    question: "What does a user see if they have no permission group assigned in a workspace?",
    answer: "Users with no group in a given workspace have no restrictions in that workspace. All blocks, model providers, and platform features are fully available to them there. Restrictions only apply in the specific workspaces where they are assigned to a group."
  },
  {
    question: "Does Mothership respect the same restrictions as the executor?",
    answer: "Yes. Mothership reads the user's permission group for the active workspace before suggesting blocks or tools. Disallowed blocks are filtered out of the block picker, and disallowed tool types are skipped during workflow generation."
  },
  {
    question: "Can I restrict access to specific workflows or workspaces?",
    answer: "Access Control operates at the feature and block level within a workspace. To restrict who can access the workspace itself, use workspace invitations and permissions. To apply different restrictions to different workflows, put them in different workspaces with distinct permission groups."
  },
  {
    question: "What is Auto-add new members?",
    answer: "When a group has Auto-add new members enabled, any new member who joins the workspace is automatically added to that group. Only one group per workspace can have this setting enabled at a time."
  }
]} />
---

## Self-hosted setup

Self-hosted deployments use environment variables instead of the billing/plan check.

### Environment variables

```bash
ACCESS_CONTROL_ENABLED=true
NEXT_PUBLIC_ACCESS_CONTROL_ENABLED=true
```

You can also set a server-level block allowlist using the `ALLOWED_INTEGRATIONS` environment variable. This is applied as an additional constraint on top of any permission group configuration — a block must be allowed by both the environment allowlist and the user's group to be usable.

```bash
# Only these block types are available across the entire instance
ALLOWED_INTEGRATIONS=slack,gmail,agent,function,condition
```

Once enabled, permission groups are managed through **Settings → Enterprise → Access Control** the same way as on Sim Cloud.
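The interaction between the instance-level allowlist and a group's configuration amounts to a set intersection. A sketch, with hypothetical names:

```typescript
// A block is usable only if both the server allowlist (ALLOWED_INTEGRATIONS)
// and the user's permission group allow it. null means "no restriction".
function effectiveAllowlist(
  serverAllowlist: string[] | null,
  groupAllowlist: string[] | null
): string[] | null {
  if (!serverAllowlist) return groupAllowlist
  if (!groupAllowlist) return serverAllowlist
  return serverAllowlist.filter((block) => groupAllowlist.includes(block))
}

// ALLOWED_INTEGRATIONS=slack,gmail,agent,function,condition
const server = ['slack', 'gmail', 'agent', 'function', 'condition']
const group = ['agent', 'function', 'airtable'] // airtable is blocked at the server level
effectiveAllowlist(server, group) // ['agent', 'function']
```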
143
apps/docs/content/docs/en/enterprise/audit-logs.mdx
Normal file
@@ -0,0 +1,143 @@
---
title: Audit Logs
description: Track every action taken across your organization's workspaces
---

import { FAQ } from '@/components/ui/faq'
import { Image } from '@/components/ui/image'

Audit logs give your organization a tamper-evident record of every significant action taken across workspaces — who did what, when, and on which resource. Use them for security reviews, compliance investigations, and incident response.

---

## Viewing audit logs

### In the UI

Go to **Settings → Enterprise → Audit Logs** in your workspace. Logs are displayed in a table with the following columns:

<Image src="/static/enterprise/audit-logs.png" alt="Audit Logs settings showing a table of events with columns for Timestamp, Event, Description, and Actor, along with search and filter controls" width={900} height={500} />

| Column | Description |
|--------|-------------|
| **Timestamp** | When the action occurred. |
| **Event** | The action taken, e.g. `workflow.created`. |
| **Description** | A human-readable summary of the action. |
| **Actor** | The email address of the user who performed the action. |

Use the search bar, event type filter, and date range selector to narrow results.
### Via API

Audit logs are also accessible through the Sim API for integration with external SIEM or log management tools.

```http
GET /api/v1/audit-logs
Authorization: Bearer <api-key>
```

**Query parameters:**

| Parameter | Type | Description |
|-----------|------|-------------|
| `action` | string | Filter by event type (e.g. `workflow.created`) |
| `resourceType` | string | Filter by resource type (e.g. `workflow`) |
| `resourceId` | string | Filter by a specific resource ID |
| `workspaceId` | string | Filter by workspace |
| `actorId` | string | Filter by user ID (must be an org member) |
| `startDate` | string | ISO 8601 date — return logs on or after this date |
| `endDate` | string | ISO 8601 date — return logs on or before this date |
| `includeDeparted` | boolean | Include logs from members who have since left the organization (default `false`) |
| `limit` | number | Results per page (1–100, default 50) |
| `cursor` | string | Opaque cursor for fetching the next page |

**Example response:**

```json
{
  "data": [
    {
      "id": "abc123",
      "action": "workflow.created",
      "resourceType": "workflow",
      "resourceId": "wf_xyz",
      "resourceName": "Customer Onboarding",
      "description": "Created workflow \"Customer Onboarding\"",
      "actorId": "usr_abc",
      "actorName": "Alice Smith",
      "actorEmail": "alice@company.com",
      "workspaceId": "ws_def",
      "metadata": {},
      "createdAt": "2026-04-20T21:16:00.000Z"
    }
  ],
  "nextCursor": "eyJpZCI6ImFiYzEyMyJ9"
}
```

Paginate by passing the `nextCursor` value as the `cursor` parameter in the next request. When `nextCursor` is absent, you have reached the last page.

The API accepts both personal and workspace-scoped API keys. Rate limits apply — the response includes `X-RateLimit-*` headers with your current limit and remaining quota.
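Cursor pagination against the endpoint above can be driven by a small loop. A sketch using the documented parameters and a standard `fetch`; the base URL and types here are assumptions:

```typescript
interface AuditLogPage {
  data: Array<{ id: string; action: string; createdAt: string }>
  nextCursor?: string
}

// Fetch every audit log page for a workspace, following nextCursor until absent.
async function fetchAllAuditLogs(baseUrl: string, apiKey: string, workspaceId: string) {
  const logs: AuditLogPage['data'] = []
  let cursor: string | undefined
  do {
    const params = new URLSearchParams({ workspaceId, limit: '100' })
    if (cursor) params.set('cursor', cursor)
    const res = await fetch(`${baseUrl}/api/v1/audit-logs?${params}`, {
      headers: { Authorization: `Bearer ${apiKey}` },
    })
    if (!res.ok) throw new Error(`Audit log request failed: ${res.status}`)
    const page = (await res.json()) as AuditLogPage
    logs.push(...page.data)
    cursor = page.nextCursor // absent on the last page
  } while (cursor)
  return logs
}
```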
---

## Event types

Audit log events follow a `resource.action` naming pattern. The table below lists the main categories.

| Category | Example events |
|----------|---------------|
| **Workflows** | `workflow.created`, `workflow.deleted`, `workflow.deployed`, `workflow.locked` |
| **Workspaces** | `workspace.created`, `workspace.updated`, `workspace.deleted` |
| **Members** | `member.invited`, `member.removed`, `member.role_changed` |
| **Permission groups** | `permission_group.created`, `permission_group.updated`, `permission_group.deleted` |
| **Environments** | `environment.updated`, `environment.deleted` |
| **Knowledge bases** | `knowledge_base.created`, `knowledge_base.deleted`, `connector.synced` |
| **Tables** | `table.created`, `table.updated`, `table.deleted` |
| **API keys** | `api_key.created`, `api_key.revoked` |
| **Credentials** | `credential.created`, `credential.deleted`, `oauth.disconnected` |
| **Organization** | `organization.updated`, `org_member.added`, `org_member.role_changed` |
---

<FAQ items={[
  {
    question: "Who can view audit logs?",
    answer: "Organization owners and admins can view audit logs. On Sim Cloud, you must be on the Enterprise plan."
  },
  {
    question: "Are audit logs tamper-proof?",
    answer: "Audit log entries are append-only and cannot be modified or deleted through the Sim interface or API. They represent a reliable record of actions taken in your organization."
  },
  {
    question: "Can I export audit logs?",
    answer: "Yes. Use the API to export logs programmatically. Paginate through all records using the cursor parameter and store them in your own data warehouse or SIEM."
  },
  {
    question: "Are logs scoped to a single workspace or the whole organization?",
    answer: "Audit logs are scoped to your organization and include activity across all workspaces within it. You can filter by workspaceId to narrow results to a specific workspace."
  },
  {
    question: "What information is included in each log entry?",
    answer: "Each entry includes the event type, a description, the actor's name and email, the affected resource, the workspace, and a timestamp. IP addresses and user agents are not exposed through the API."
  },
  {
    question: "Can I filter logs by a specific user?",
    answer: "Yes. Pass the actorId query parameter to filter logs by a specific user. The actor must be a current or former member of your organization."
  }
]} />
---

## Self-hosted setup

Self-hosted deployments use environment variables instead of the billing/plan check.

### Environment variables

```bash
AUDIT_LOGS_ENABLED=true
NEXT_PUBLIC_AUDIT_LOGS_ENABLED=true
```

Once enabled, audit logs are viewable in **Settings → Enterprise → Audit Logs** and accessible via the API.
114
apps/docs/content/docs/en/enterprise/data-retention.mdx
Normal file
@@ -0,0 +1,114 @@
---
title: Data Retention
description: Control how long execution logs, deleted resources, and copilot data are kept before permanent deletion
---

import { FAQ } from '@/components/ui/faq'
import { Image } from '@/components/ui/image'

Data Retention lets organization owners and admins on Enterprise plans configure how long three categories of data are kept before they are permanently deleted. The configuration applies to every workspace in the organization.

---

## Setup

Go to **Settings → Enterprise → Data Retention** in your workspace.

<Image src="/static/enterprise/data-retention.png" alt="Data Retention settings showing three dropdowns — Log retention, Soft deletion cleanup, and Task cleanup — each set to Forever" width={900} height={500} />

You will see three independent settings, each with the same set of options: **1 day, 3 days, 7 days, 14 days, 30 days, 60 days, 90 days, 180 days, 1 year, 5 years,** or **Forever**.

Setting a period to **Forever** means that category of data is never automatically deleted.

---
## Settings

### Log retention

Controls how long **workflow execution logs** are kept.

When the retention period expires, execution log records are permanently deleted, along with any files associated with those executions stored in cloud storage.

### Soft deletion cleanup

Controls how long **soft-deleted resources** remain recoverable before permanent removal.

When you delete a workflow, folder, knowledge base, table, or file, it is initially soft-deleted and can be recovered from Recently Deleted. Once the soft deletion cleanup period expires, those resources are permanently removed and cannot be recovered.

Resources covered:

- Workflows
- Workflow folders
- Knowledge bases
- Tables
- Files
- MCP server configurations
- Agent memory

### Task cleanup

Controls how long **Mothership data** is kept, including:

- Copilot chats and run history
- Run checkpoints and async tool calls
- Inbox tasks (Sim Mailer)

Each setting is independent. You can configure a short log retention period alongside a long soft deletion cleanup period, or any combination that fits your compliance requirements.
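The expiry check behind each setting reduces to a cutoff comparison. A sketch with hypothetical names, not Sim's actual cleanup job:

```typescript
// A record is eligible for deletion once it is older than the retention period.
// retentionDays = null represents the "Forever" setting.
function isExpired(createdAt: Date, retentionDays: number | null, now: Date = new Date()): boolean {
  if (retentionDays === null) return false // Forever: never auto-deleted
  const cutoff = now.getTime() - retentionDays * 24 * 60 * 60 * 1000
  return createdAt.getTime() < cutoff
}

const now = new Date('2026-04-20T00:00:00Z')
isExpired(new Date('2026-01-01T00:00:00Z'), 30, now) // true: older than 30 days
isExpired(new Date('2026-04-15T00:00:00Z'), 30, now) // false: still within the window
isExpired(new Date('2020-01-01T00:00:00Z'), null, now) // false: Forever
```

Note that in practice deletion happens when the scheduled cleanup job next runs after a record crosses the cutoff, not at the exact moment of expiry.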
---

## Organization-wide configuration

Retention is configured at the **organization level**. A single configuration applies to every workspace in the organization — there are no per-workspace overrides.

---

## Defaults

By default, all three settings are unconfigured — no data is automatically deleted in any category until you configure it. Setting a period to **Forever** has the same effect as leaving it unconfigured, but makes the intent explicit and allows you to change it later without starting from scratch.
<FAQ items={[
  {
    question: "Who can configure data retention settings?",
    answer: "Only organization owners and admins can configure data retention settings. On Sim Cloud, the organization must be on an Enterprise plan."
  },
  {
    question: "Is deletion immediate once the retention period expires?",
    answer: "No. Deletion runs on a scheduled cleanup job. Data is deleted when the job next runs after the retention period has elapsed — not at the exact moment it expires."
  },
  {
    question: "Can deleted data be recovered after the soft deletion cleanup period?",
    answer: "No. Once the soft deletion cleanup period expires and the cleanup job runs, resources are permanently deleted and cannot be recovered."
  },
  {
    question: "Does the retention period apply to all workspaces in my organization?",
    answer: "Yes. Retention is configured once per organization and applies to every workspace in the organization."
  },
  {
    question: "What happens if I shorten the retention period?",
    answer: "The next cleanup job will delete any data that is older than the new, shorter period — including data that would have been kept under the previous setting. Shortening the period is irreversible for data that falls outside the new window."
  },
  {
    question: "What is the minimum retention period?",
    answer: "1 day (24 hours)."
  },
  {
    question: "What is the maximum retention period?",
    answer: "5 years."
  }
]} />
---

## Self-hosted setup

### Environment variables

```bash
NEXT_PUBLIC_DATA_RETENTION_ENABLED=true
```

Once enabled, data retention settings are configurable through **Settings → Enterprise → Data Retention** the same way as on Sim Cloud.
@@ -3,7 +3,6 @@ title: Enterprise
description: Enterprise features for business organizations
---

import { FAQ } from '@/components/ui/faq'

Sim Enterprise provides advanced features for organizations with enhanced security, compliance, and management requirements.

@@ -12,7 +11,7 @@

## Access Control

Define permission groups on a workspace to control what features and integrations its members can use. Permission groups are scoped to a single workspace — a user can belong to different groups (or no group) in different workspaces.

### Features

@@ -22,101 +21,64 @@

### Setup

1. Navigate to **Settings** → **Access Control** in the workspace you want to manage
2. Create a permission group with your desired restrictions
3. Add workspace members to the permission group

Any workspace admin on an Enterprise-entitled workspace can manage permission groups. Users not assigned to any group have full access. Restrictions are enforced at both UI and execution time, based on the workflow's workspace.

See the [Access Control guide](/docs/enterprise/access-control) for full details.

---
## Single Sign-On (SSO)

Enterprise authentication with SAML 2.0 and OIDC support. Works with Okta, Azure AD (Entra ID), Google Workspace, ADFS, and any standard OIDC or SAML 2.0 provider.

### Supported Providers

- Okta
- Azure AD / Entra ID
- Google Workspace
- OneLogin
- Any SAML 2.0 or OIDC provider

### Setup

1. Navigate to **Settings** → **SSO** in your workspace
2. Choose your identity provider
3. Configure the connection using your IdP's metadata
4. Enable SSO for your organization

See the [SSO setup guide](/docs/enterprise/sso) for step-by-step instructions and provider-specific configuration.

---

## Whitelabeling

Replace Sim's default branding — logos, product name, and favicons — with your own. See the [whitelabeling guide](/docs/enterprise/whitelabeling).

---

## Audit Logs

Track configuration and security-relevant actions across your organization for compliance and monitoring. See the [audit logs guide](/docs/enterprise/audit-logs).

---

## Data Retention

Configure how long execution logs, soft-deleted resources, and Mothership data are kept before permanent deletion. See the [data retention guide](/docs/enterprise/data-retention).
---
|
||||
|
||||
<FAQ items={[
  { question: "Who can manage Enterprise features?", answer: "Workspace admins on an Enterprise-entitled workspace. Access Control, SSO, whitelabeling, audit logs, and data retention are all configured per workspace under Settings → Enterprise." },
  { question: "Which SSO providers are supported?", answer: "Sim supports SAML 2.0 and OIDC, which works with virtually any enterprise identity provider including Okta, Azure AD (Entra ID), Google Workspace, ADFS, and OneLogin." },
  { question: "How do access control permission groups work?", answer: "Permission groups are created per workspace and let you restrict which AI providers, workflow blocks, and platform features are available to specific members of that workspace. Each user can belong to at most one group per workspace. Users not assigned to any group have full access. Restrictions are enforced at both the UI level and at execution time based on the workflow's workspace." },
]} />

---
## Self-hosted setup

Self-hosted deployments enable enterprise features via environment variables instead of billing.
| Variable | Description |
|----------|-------------|
| `ORGANIZATIONS_ENABLED`, `NEXT_PUBLIC_ORGANIZATIONS_ENABLED` | Team and organization management |
| `ACCESS_CONTROL_ENABLED`, `NEXT_PUBLIC_ACCESS_CONTROL_ENABLED` | Permission groups |
| `SSO_ENABLED`, `NEXT_PUBLIC_SSO_ENABLED` | SAML and OIDC sign-in |
| `WHITELABELING_ENABLED`, `NEXT_PUBLIC_WHITELABELING_ENABLED` | Custom branding |
| `AUDIT_LOGS_ENABLED`, `NEXT_PUBLIC_AUDIT_LOGS_ENABLED` | Audit logging |
| `NEXT_PUBLIC_DATA_RETENTION_ENABLED` | Data retention configuration |
| `CREDENTIAL_SETS_ENABLED`, `NEXT_PUBLIC_CREDENTIAL_SETS_ENABLED` | Polling groups for email triggers |
| `INBOX_ENABLED`, `NEXT_PUBLIC_INBOX_ENABLED` | Sim Mailer inbox |
| `DISABLE_INVITATIONS`, `NEXT_PUBLIC_DISABLE_INVITATIONS` | Disable invitations; manage membership via Admin API |
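For example, a minimal `.env` for a deployment that needs SSO and access control might look like this (values are illustrative; variable names are from the table above):

```shell
# Illustrative .env for a self-hosted instance with SSO and access control.
# Enable only the features you need; server and NEXT_PUBLIC_ variants must match.
SSO_ENABLED=true
NEXT_PUBLIC_SSO_ENABLED=true
ACCESS_CONTROL_ENABLED=true
NEXT_PUBLIC_ACCESS_CONTROL_ENABLED=true
# Access control requires organizations, so enable them as well
ORGANIZATIONS_ENABLED=true
NEXT_PUBLIC_ORGANIZATIONS_ENABLED=true
```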
### Organization Management

When billing is disabled, use the Admin API to manage organizations:

```bash
# Create an organization
curl -X POST https://your-instance/api/v1/admin/organizations \
  -H "x-admin-key: YOUR_ADMIN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "My Organization", "ownerId": "user-id-here"}'

# Add a member
curl -X POST https://your-instance/api/v1/admin/organizations/{orgId}/members \
  -H "x-admin-key: YOUR_ADMIN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"userId": "user-id-here", "role": "admin"}'
```
### Workspace Members

When invitations are disabled, use the Admin API to manage workspace memberships directly:

```bash
# Add a user to a workspace
curl -X POST https://your-instance/api/v1/admin/workspaces/{workspaceId}/members \
  -H "x-admin-key: YOUR_ADMIN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"userId": "user-id-here", "permissions": "write"}'

# Remove a user from a workspace
curl -X DELETE "https://your-instance/api/v1/admin/workspaces/{workspaceId}/members?userId=user-id-here" \
  -H "x-admin-key: YOUR_ADMIN_API_KEY"
```
### Notes

- Enabling `ACCESS_CONTROL_ENABLED` automatically enables organizations, as access control requires organization membership.
- When `DISABLE_INVITATIONS` is set, users cannot send invitations. Use the Admin API to manage workspace and organization memberships instead.
Once enabled, each feature is configured through the same Settings UI as Sim Cloud. When invitations are disabled, use the Admin API (`x-admin-key` header) to manage organization and workspace membership.

<FAQ items={[
  { question: "What are the minimum requirements to self-host Sim?", answer: "The Docker Compose production setup includes the Sim application (8 GB memory limit), a realtime collaboration server (1 GB memory limit), and a PostgreSQL database with pgvector. A machine with at least 16 GB of RAM and 4 CPU cores is recommended. You will also need Docker and Docker Compose installed." },
  { question: "Can I run Sim completely offline with local AI models?", answer: "Yes. Sim supports Ollama and VLLM for running local AI models. A separate Docker Compose configuration (docker-compose.ollama.yml) is available for deploying with Ollama. This lets you run workflows without any external API calls, keeping all data on your infrastructure." },
  { question: "How does data privacy work with self-hosted deployments?", answer: "When self-hosted, all data stays on your infrastructure. Workflow definitions, execution logs, credentials, and user data are stored in your PostgreSQL database. If you use local AI models through Ollama or VLLM, no data leaves your network. When using external AI providers, only the data sent in prompts goes to those providers." },
  { question: "Do I need a paid license to self-host Sim?", answer: "The core Sim platform is open source under Apache 2.0 and can be self-hosted for free. Enterprise features like SSO (SAML/OIDC), access control with permission groups, and organization management require an Enterprise subscription for production use. These features can be enabled via environment variables for development and evaluation without a license." },
  { question: "Which SSO providers are supported?", answer: "Sim supports SAML 2.0 and OIDC protocols, which means it works with virtually any enterprise identity provider including Okta, Azure AD (Entra ID), Google Workspace, and OneLogin. Configuration is done through Settings in the workspace UI." },
  { question: "How do I manage users when invitations are disabled?", answer: "Use the Admin API with your admin API key. You can create organizations, add members to organizations with specific roles, add users to workspaces with defined permissions, and remove users. All management is done through REST API calls authenticated with the x-admin-key header." },
  { question: "Can I scale Sim horizontally for high availability?", answer: "The Docker Compose setup is designed for single-node deployments. For production scaling, you can deploy on Kubernetes with multiple application replicas behind a load balancer. The database can be scaled independently using managed PostgreSQL services. Redis can be configured for session and cache management across multiple instances." },
  { question: "How do access control permission groups work?", answer: "Permission groups let you restrict which AI providers, workflow blocks, and platform features are available to specific team members. Users not assigned to any group have full access. Restrictions are enforced at both the UI level (hiding restricted options) and at execution time (blocking unauthorized operations). Enabling access control automatically enables organization management." },
]} />
apps/docs/content/docs/en/enterprise/meta.json
Normal file
@@ -0,0 +1,5 @@
{
  "title": "Enterprise",
  "pages": ["index", "sso", "access-control", "whitelabeling", "audit-logs", "data-retention"],
  "defaultOpen": false
}
apps/docs/content/docs/en/enterprise/sso.mdx
Normal file
@@ -0,0 +1,326 @@
---
title: Single Sign-On (SSO)
description: Configure SAML 2.0 or OIDC-based single sign-on for your organization
---

import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { FAQ } from '@/components/ui/faq'
import { Image } from '@/components/ui/image'

Single Sign-On lets your team sign in to Sim through your company's identity provider instead of managing separate passwords. Sim supports both OIDC and SAML 2.0.

---
## Setup

### 1. Open SSO settings

Go to **Settings → Enterprise → Single Sign-On** in your workspace.

### 2. Choose a protocol

| Protocol | Use when |
|----------|----------|
| **OIDC** | Your IdP supports OpenID Connect — Okta, Microsoft Entra ID, Auth0, Google Workspace |
| **SAML 2.0** | Your IdP is SAML-only — ADFS, Shibboleth, or older enterprise IdPs |

### 3. Fill in the form

<Image src="/static/enterprise/sso-form.png" alt="Single Sign-On configuration form showing Provider Type (OIDC), Provider ID, Issuer URL, Domain, Client ID, Client Secret, Scopes, and Callback URL fields" width={900} height={500} />

**Fields required for both protocols:**

| Field | What to enter |
|-------|--------------|
| **Provider ID** | A short slug identifying this connection, e.g. `okta` or `azure-ad`. Letters, numbers, and dashes only. |
| **Issuer URL** | The identity provider's issuer URL. Must be HTTPS. |
| **Domain** | Your organization's email domain, e.g. `company.com`. Users with this domain will be routed through SSO at sign-in. |

**OIDC additional fields:**

| Field | What to enter |
|-------|--------------|
| **Client ID** | The application client ID from your IdP. |
| **Client Secret** | The client secret from your IdP. |
| **Scopes** | Comma-separated OIDC scopes. Default: `openid,profile,email`. |

<Callout type="info">
For OIDC, Sim automatically fetches endpoints (`authorization_endpoint`, `token_endpoint`, `userinfo_endpoint`, `jwks_uri`) from your issuer's `/.well-known/openid-configuration` discovery document. You only need to provide the issuer URL.
</Callout>
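As a sanity check before saving, the discovery document is served publicly at `{issuer}/.well-known/openid-configuration`. For an Okta issuer it looks roughly like this (abbreviated; exact endpoints vary by provider):

```json
{
  "issuer": "https://dev-1234567.okta.com/oauth2/default",
  "authorization_endpoint": "https://dev-1234567.okta.com/oauth2/default/v1/authorize",
  "token_endpoint": "https://dev-1234567.okta.com/oauth2/default/v1/token",
  "userinfo_endpoint": "https://dev-1234567.okta.com/oauth2/default/v1/userinfo",
  "jwks_uri": "https://dev-1234567.okta.com/oauth2/default/v1/keys"
}
```

If that URL returns a 404, the Issuer URL you entered is wrong.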
**SAML additional fields:**

| Field | What to enter |
|-------|--------------|
| **Entry Point URL** | The IdP's SSO service URL where Sim sends authentication requests. |
| **Identity Provider Certificate** | The Base-64 encoded X.509 certificate from your IdP for verifying assertions. |

### 4. Copy the Callback URL

The **Callback URL** shown in the form is the endpoint your identity provider must redirect users back to after authentication. Copy it and register it in your IdP before saving.

**OIDC providers** (Okta, Microsoft Entra ID, Google Workspace, Auth0):
```
https://sim.ai/api/auth/sso/callback/{provider-id}
```

**SAML providers** (ADFS, Shibboleth):
```
https://sim.ai/api/auth/sso/saml2/callback/{provider-id}
```
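Both URLs are derived mechanically from the Provider ID you chose in step 3. A quick sketch, using `okta` as an example Provider ID:

```shell
# The callback URL embeds the Provider ID you configured in Sim
provider_id="okta"
oidc_callback="https://sim.ai/api/auth/sso/callback/${provider_id}"
saml_callback="https://sim.ai/api/auth/sso/saml2/callback/${provider_id}"
echo "$oidc_callback"
echo "$saml_callback"
```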
### 5. Save and test

Click **Save**. To test, sign out and use the **Sign in with SSO** button on the login page. Enter an email address at your configured domain — Sim will redirect you to your identity provider.

---

## Provider Guides

<Tabs items={['Okta', 'Microsoft Entra ID', 'Google Workspace', 'ADFS']}>

<Tab value="Okta">

### Okta (OIDC)

**In Okta** ([official docs](https://help.okta.com/en-us/content/topics/apps/apps_app_integration_wizard_oidc.htm)):

1. Go to **Applications → Create App Integration**
2. Select **OIDC - OpenID Connect**, then **Web Application**
3. Set the **Sign-in redirect URI** to your Sim callback URL:
   ```
   https://sim.ai/api/auth/sso/callback/okta
   ```
4. Under **Assignments**, grant access to the relevant users or groups
5. Copy the **Client ID** and **Client Secret** from the app's **General** tab
6. Your Okta domain is the hostname of your admin console, e.g. `dev-1234567.okta.com`

**In Sim:**

| Field | Value |
|-------|-------|
| Provider Type | OIDC |
| Provider ID | `okta` |
| Issuer URL | `https://dev-1234567.okta.com/oauth2/default` |
| Domain | `company.com` |
| Client ID | From Okta app |
| Client Secret | From Okta app |

The issuer URL uses Okta's default authorization server, which is pre-configured on every Okta org. If you created a custom authorization server, replace `default` with your server name.

</Tab>

<Tab value="Microsoft Entra ID">

### Microsoft Entra ID (OIDC)

**In Azure** ([official docs](https://learn.microsoft.com/en-us/entra/identity-platform/quickstart-register-app)):

1. Go to **Microsoft Entra ID → App registrations → New registration**
2. Under **Redirect URI**, select **Web** and enter your Sim callback URL:
   ```
   https://sim.ai/api/auth/sso/callback/azure-ad
   ```
3. After registration, go to **Certificates & secrets → New client secret** and copy the value immediately — it won't be shown again
4. Go to **Overview** and copy the **Application (client) ID** and **Directory (tenant) ID**

**In Sim:**

| Field | Value |
|-------|-------|
| Provider Type | OIDC |
| Provider ID | `azure-ad` |
| Issuer URL | `https://login.microsoftonline.com/{tenant-id}/v2.0` |
| Domain | `company.com` |
| Client ID | Application (client) ID |
| Client Secret | Secret value |

</Tab>

<Tab value="Google Workspace">

### Google Workspace (OIDC)

**In Google Cloud Console** ([official docs](https://developers.google.com/identity/openid-connect/openid-connect)):

1. Go to **APIs & Services → Credentials → Create Credentials → OAuth 2.0 Client ID**
2. Set the application type to **Web application**
3. Add your Sim callback URL to **Authorized redirect URIs**:
   ```
   https://sim.ai/api/auth/sso/callback/google-workspace
   ```
4. Copy the **Client ID** and **Client Secret**

**In Sim:**

| Field | Value |
|-------|-------|
| Provider Type | OIDC |
| Provider ID | `google-workspace` |
| Issuer URL | `https://accounts.google.com` |
| Domain | `company.com` |
| Client ID | From Google Cloud Console |
| Client Secret | From Google Cloud Console |

<Callout type="info">
To restrict sign-in to your Google Workspace domain, configure the OAuth consent screen and set the **User type** to **Internal**, which limits access to users within your Google Workspace organization.
</Callout>

</Tab>

<Tab value="ADFS">

### ADFS (SAML 2.0)

**In ADFS** ([official docs](https://learn.microsoft.com/en-us/windows-server/identity/ad-fs/operations/create-a-relying-party-trust)):

1. Open **AD FS Management → Relying Party Trusts → Add Relying Party Trust**
2. Choose **Claims aware**, then **Enter data about the relying party manually**
3. Set the **Relying party identifier** (Entity ID) to your Sim base URL:
   ```
   https://sim.ai
   ```
4. Add an endpoint: **SAML Assertion Consumer Service** (HTTP POST) with the URL:
   ```
   https://sim.ai/api/auth/sso/saml2/callback/adfs
   ```
5. Export the **Token-signing certificate** from **Certificates**: right-click → **View Certificate → Details → Copy to File**, choose **Base-64 encoded X.509 (.CER)**. The exported `.cer` file is already PEM-encoded text — open it in a text editor and paste its contents into Sim.
6. Note the **ADFS Federation Service endpoint URL** (e.g. `https://adfs.company.com/adfs/ls`)

**In Sim:**

| Field | Value |
|-------|-------|
| Provider Type | SAML |
| Provider ID | `adfs` |
| Issuer URL | `https://sim.ai` |
| Domain | `company.com` |
| Entry Point URL | `https://adfs.company.com/adfs/ls` |
| Certificate | Contents of the exported certificate file |

<Callout type="info">
For ADFS, the **Issuer URL** field is the SP entity ID — the identifier ADFS uses to identify Sim as a relying party. It must match the **Relying party identifier** you registered in ADFS.
</Callout>

</Tab>

</Tabs>

---

## How sign-in works after setup

Once SSO is configured, users with your domain (`company.com`) can sign in through your identity provider:

1. User goes to `sim.ai` and clicks **Sign in with SSO**
2. They enter their work email (e.g. `alice@company.com`)
3. Sim redirects them to your identity provider
4. After authenticating, they are returned to Sim and land in the workspace

Users who sign in via SSO for the first time are automatically provisioned and added to your organization — no manual invite required.
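The routing in steps 2 and 3 keys off the domain part of the email address; conceptually:

```shell
# Sim matches the email's domain against registered SSO providers (conceptual sketch,
# not Sim's actual routing code)
email="alice@company.com"
domain="${email#*@}"   # strip everything up to and including the "@"
echo "Routing $email via the SSO provider registered for $domain"
```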
<Callout type="info">
Password-based login remains available. Forcing all organization members to use SSO exclusively is not yet supported.
</Callout>

---
<FAQ items={[
  {
    question: "Which SSO providers are supported?",
    answer: "Any identity provider that supports OIDC or SAML 2.0. This includes Okta, Microsoft Entra ID (Azure AD), Google Workspace, Auth0, OneLogin, JumpCloud, Ping Identity, ADFS, Shibboleth, and more."
  },
  {
    question: "What is the Domain field used for?",
    answer: "The domain (e.g. company.com) is how Sim routes users to the right identity provider. When a user enters their email on the SSO sign-in page, Sim matches their email domain to a registered SSO provider and redirects them there."
  },
  {
    question: "Do I need to provide OIDC endpoints manually?",
    answer: "No. For OIDC providers, Sim automatically fetches the authorization, token, and JWKS endpoints from the discovery document at {issuer}/.well-known/openid-configuration. You only need to provide the issuer URL."
  },
  {
    question: "What happens when a user signs in with SSO for the first time?",
    answer: "Sim creates an account for them automatically and adds them to your organization. No manual invite is needed. They are assigned the member role by default."
  },
  {
    question: "Can I still use email/password login after enabling SSO?",
    answer: "Yes. Enabling SSO does not disable password-based login. Users can still sign in with their email and password if they have one. Forced SSO (requiring all users on the domain to use SSO) is not yet supported."
  },
  {
    question: "Who can configure SSO on Sim Cloud?",
    answer: "Organization owners and admins can configure SSO. You must be on the Enterprise plan."
  },
  {
    question: "What is the Callback URL?",
    answer: "The Callback URL (also called Redirect URI or ACS URL) is the endpoint in Sim that receives the authentication response from your identity provider. For OIDC providers it follows the format: https://sim.ai/api/auth/sso/callback/{provider-id}. For SAML providers it is: https://sim.ai/api/auth/sso/saml2/callback/{provider-id}. You must register this URL in your identity provider before SSO will work."
  },
  {
    question: "How do I update or replace an existing SSO configuration?",
    answer: "Open Settings → Enterprise → Single Sign-On and click Edit. Update the fields and save. The existing provider configuration is replaced."
  }
]} />

---
## Self-hosted setup

Self-hosted deployments use environment variables instead of the billing/plan check.

### Environment variables

```bash
# Required
SSO_ENABLED=true
NEXT_PUBLIC_SSO_ENABLED=true

# Required if you want users auto-added to your organization on first SSO sign-in
ORGANIZATIONS_ENABLED=true
NEXT_PUBLIC_ORGANIZATIONS_ENABLED=true
```

You can register providers through the **Settings UI** (same as cloud) or by running the registration script directly against your database.

### Script-based registration

Use this when you need to register an SSO provider without going through the UI — for example, during initial deployment or CI/CD automation.

```bash
# OIDC example (Okta)
SSO_ENABLED=true \
NEXT_PUBLIC_APP_URL=https://your-instance.com \
SSO_PROVIDER_TYPE=oidc \
SSO_PROVIDER_ID=okta \
SSO_ISSUER=https://dev-1234567.okta.com/oauth2/default \
SSO_DOMAIN=company.com \
SSO_USER_EMAIL=admin@company.com \
SSO_OIDC_CLIENT_ID=your-client-id \
SSO_OIDC_CLIENT_SECRET=your-client-secret \
bun run packages/db/scripts/register-sso-provider.ts
```

```bash
# SAML example (ADFS)
SSO_ENABLED=true \
NEXT_PUBLIC_APP_URL=https://your-instance.com \
SSO_PROVIDER_TYPE=saml \
SSO_PROVIDER_ID=adfs \
SSO_ISSUER=https://your-instance.com \
SSO_DOMAIN=company.com \
SSO_USER_EMAIL=admin@company.com \
SSO_SAML_ENTRY_POINT=https://adfs.company.com/adfs/ls \
SSO_SAML_CERT="-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----" \
bun run packages/db/scripts/register-sso-provider.ts
```

The script outputs the callback URL to configure in your IdP once it completes.

To remove a provider:

```bash
SSO_USER_EMAIL=admin@company.com \
bun run packages/db/scripts/deregister-sso-provider.ts
```
apps/docs/content/docs/en/enterprise/whitelabeling.mdx
Normal file
@@ -0,0 +1,103 @@
---
title: Whitelabeling
description: Replace Sim branding with your own logo, colors, and links
---

import { FAQ } from '@/components/ui/faq'
import { Image } from '@/components/ui/image'

Whitelabeling lets you replace Sim's default branding — logo, colors, and support links — with your own. Members of your organization see your brand instead of Sim's throughout the workspace.

---
## Setup

### 1. Open Whitelabeling settings

Go to **Settings → Enterprise → Whitelabeling** in your workspace.

<Image src="/static/enterprise/whitelabeling.png" alt="Whitelabeling settings showing brand identity fields (Logo, Wordmark, Brand name), color pickers for primary and accent colors, and link fields for support email and documentation URL" width={900} height={500} />

### 2. Configure brand identity

| Field | Description |
|-------|-------------|
| **Logo** | Shown in the collapsed sidebar. Square image (PNG, JPEG, SVG, or WebP). Max 5 MB. |
| **Wordmark** | Shown in the expanded sidebar. Wide image (PNG, JPEG, SVG, or WebP). Max 5 MB. |
| **Brand name** | Replaces "Sim" in the sidebar and select UI elements. Max 64 characters. |

### 3. Configure colors

All colors must be valid hex values (e.g. `#701ffc`).

| Field | Description |
|-------|-------------|
| **Primary color** | Main accent color used for buttons and active states. |
| **Primary hover color** | Color shown when hovering over primary elements. |
| **Accent color** | Secondary accent for highlights and secondary interactive elements. |
| **Accent hover color** | Color shown when hovering over accent elements. |
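If a color is rejected, check that it is a 6-digit hex value; a quick sketch of the kind of check Sim applies (not Sim's actual validation code):

```shell
# Check that a brand color is a 6-digit hex value like #701ffc
color="#701ffc"
if printf '%s' "$color" | grep -Eq '^#[0-9a-fA-F]{6}$'; then
  echo "valid"
else
  echo "invalid"
fi
```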
### 4. Configure links

Replace Sim's default support and legal links with your own.

| Field | Description |
|-------|-------------|
| **Support email** | Shown in help prompts. Must be a valid email address. |
| **Documentation URL** | Link to your internal documentation. Must be a valid URL. |
| **Terms of service URL** | Link to your terms page. Must be a valid URL. |
| **Privacy policy URL** | Link to your privacy page. Must be a valid URL. |

### 5. Save

Click **Save changes**. The new branding is applied immediately for all members of your organization.
---

## What gets replaced

Whitelabeling replaces the following visual elements:

- **Sidebar logo and wordmark** — your uploaded images replace the Sim logo
- **Brand name** — appears in the sidebar and select UI labels
- **Primary and accent colors** — applied to buttons, active states, and highlights
- **Support and legal links** — help prompts and footer links point to your URLs

Whitelabeling applies only to members of your organization. Public-facing pages (login, marketing) are not affected.

---
<FAQ items={[
  {
    question: "Who can configure whitelabeling?",
    answer: "Organization owners and admins can configure whitelabeling. On Sim Cloud, you must be on the Enterprise plan."
  },
  {
    question: "What image formats are supported?",
    answer: "PNG, JPEG, SVG, and WebP. Maximum file size is 5 MB for both the logo and wordmark."
  },
  {
    question: "What is the difference between the logo and the wordmark?",
    answer: "The logo is a square image shown in the collapsed sidebar. The wordmark is a wide image shown in the expanded sidebar alongside member names and navigation items."
  },
  {
    question: "Do members outside my organization see the custom branding?",
    answer: "No. Custom branding is scoped to your organization. Members see your branding when signed in to your organization's workspace."
  }
]} />

---
## Self-hosted setup

Self-hosted deployments use environment variables instead of the billing/plan check.

### Environment variables

```bash
WHITELABELING_ENABLED=true
NEXT_PUBLIC_WHITELABELING_ENABLED=true
```

Once enabled, configure branding through **Settings → Enterprise → Whitelabeling** the same way.
apps/docs/content/docs/en/execution/api-deployment.mdx
Normal file
@@ -0,0 +1,343 @@
---
title: API Deployment
---

import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'

Deploy your workflow as a REST API endpoint that any application can call directly. Supports synchronous, streaming, and asynchronous execution modes.
## Deploying a Workflow

Open your workflow and click **Deploy**. The **General** tab opens first and shows you the current deployment state:

<Image src="/static/api-deployment/api-versions.png" alt="General tab of the Workflow Deployment modal showing a live workflow preview, a Versions table with v2 (live) and v1, and Undeploy / Update buttons" width={800} height={500} />

The **General** tab contains:

- **Live Workflow** — a read-only minimap of the workflow snapshot that is currently deployed
- **Versions** — a table of every deployment you've published, showing version number, who deployed it, and when
- **Deploy / Update / Undeploy** — action buttons at the bottom right

Click **Deploy** to publish your workflow for the first time, or **Update** to push a new snapshot after making changes. The green dot next to a version indicates it is the currently live version.

Once deployed, your workflow is available at:

```
POST https://sim.ai/api/workflows/{workflow-id}/execute
```

<Callout type="info">
API executions always run against the active deployment snapshot. After changing your workflow on the canvas, click **Update** to publish a new version.
</Callout>

### Keeping Track of Changes

When you modify the workflow canvas after deploying, an **Update deployment** badge appears at the bottom of the screen as a reminder that your live version is out of date:

<Image src="/static/api-deployment/api-update-button.png" alt="Canvas toolbar showing the Update and Run buttons with an Update deployment tooltip" width={400} height={200} />

You can click the **Update** button directly from the canvas toolbar — you don't need to open the Deploy modal every time.
## Version Control

Every time you deploy or update, a new version is recorded in the Versions table. You can manage past versions using the context menu (⋮) next to any row:

<Image src="/static/api-deployment/api-versions-menu.png" alt="Versions table showing v2 (live) and v1 with a context menu open offering Rename, Add description, Promote to live, and Load deployment options" width={800} height={400} />

| Action | Description |
|--------|-------------|
| **Rename** | Give the version a human-readable name (e.g., "Added memory") |
| **Add description** | Attach a note describing what changed in this version |
| **Promote to live** | Make this older version the active one without re-deploying |
| **Load deployment** | Load the workflow snapshot from this version back onto your canvas |

**Promote to live** is useful for rolling back — if a new deployment has an issue, promote the previous version to restore the last known-good state instantly.
## Making API Calls

Switch to the **API** tab in the Deploy modal to see ready-to-use code for all three execution modes:

<Image src="/static/api-deployment/api-tab.png" alt="API tab showing cURL, Python, JavaScript, and TypeScript language options, with Run workflow, Run workflow (stream response), and Run workflow (async) code sections" width={800} height={500} />

The language selector at the top lets you switch between **cURL**, **Python**, **JavaScript**, and **TypeScript**. Each mode — synchronous, streaming, and async — has its own code block that you can copy directly. The code is pre-filled with your workflow ID and a masked version of your API key.

At the bottom of the tab, two buttons give you quick access to key settings:

- **Edit API Info** — set a description and choose between API key auth or public access
- **Generate API Key** — create a new API key scoped to your workspace

## Authentication

By default, API endpoints require an API key passed in the `x-api-key` header. Generate keys in **Settings → Sim Keys** or via the **Generate API Key** button in the API tab.

```bash
curl -X POST https://sim.ai/api/workflows/{workflow-id}/execute \
  -H "Content-Type: application/json" \
  -H "x-api-key: $SIM_API_KEY" \
  -d '{ "input": "Hello" }'
```
### API Info and Public Access
|
||||
|
||||
Click **Edit API Info** to add a description and change the access mode:
|
||||
|
||||
<Image src="/static/api-deployment/api-info.png" alt="Edit API Info modal with a Description textarea and an Access section toggling between API Key and Public modes" width={800} height={400} />

| Access Mode | Description |
|-------------|-------------|
| **API Key** (default) | Requires a valid API key in the `x-api-key` header |
| **Public** | No authentication required — anyone with the URL can call the endpoint |

The **Description** field documents what the workflow API does. This is useful for teams, or when exposing the workflow to tools and services that surface API metadata.

<Callout type="warn">
Public endpoints can be called by anyone with the URL. Only use this for workflows that don't expose sensitive data or perform sensitive actions.
</Callout>

## Execution Modes

### Synchronous

The default mode. Send a request and wait for the complete response:

<Tabs items={['cURL', 'Python', 'TypeScript']}>
<Tab value="cURL">
```bash
curl -X POST https://sim.ai/api/workflows/{workflow-id}/execute \
  -H "Content-Type: application/json" \
  -H "x-api-key: $SIM_API_KEY" \
  -d '{ "input": "Summarize this article" }'
```
</Tab>
<Tab value="Python">
```python
import requests, os

response = requests.post(
    "https://sim.ai/api/workflows/{workflow-id}/execute",
    headers={
        "Content-Type": "application/json",
        "x-api-key": os.environ["SIM_API_KEY"]
    },
    json={"input": "Summarize this article"}
)
print(response.json())
```
</Tab>
<Tab value="TypeScript">
```typescript
const response = await fetch('https://sim.ai/api/workflows/{workflow-id}/execute', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'x-api-key': process.env.SIM_API_KEY!
  },
  body: JSON.stringify({ input: 'Summarize this article' })
});
console.log(await response.json());
```
</Tab>
</Tabs>

### Streaming

Stream the response token-by-token as it is generated. Add `"stream": true` to your request body and specify which block output fields to stream using `selectedOutputs`.

Use the **Select outputs** dropdown in the API tab to choose which fields to stream:

<Image src="/static/api-deployment/api-select-outputs.png" alt="Select outputs dropdown open showing Agent 1 block with selectable output fields: content, model, tokens, toolCalls, providerTiming, cost" width={800} height={400} />

The dropdown groups available outputs by block. The most common choice is `content` from an Agent block, which streams the generated text. You can select fields from multiple blocks simultaneously.

The `selectedOutputs` values in the request body follow the format `blockName.field` (e.g., `agent_1.content`).

<Tabs items={['cURL', 'Python', 'TypeScript']}>
<Tab value="cURL">
```bash
curl -X POST https://sim.ai/api/workflows/{workflow-id}/execute \
  -H "Content-Type: application/json" \
  -H "x-api-key: $SIM_API_KEY" \
  -d '{
    "input": "Write a long essay",
    "stream": true,
    "selectedOutputs": ["agent_1.content"]
  }'
```
</Tab>
<Tab value="Python">
```python
import requests, os

response = requests.post(
    "https://sim.ai/api/workflows/{workflow-id}/execute",
    headers={
        "Content-Type": "application/json",
        "x-api-key": os.environ["SIM_API_KEY"]
    },
    json={
        "input": "Write a long essay",
        "stream": True,
        "selectedOutputs": ["agent_1.content"]
    },
    stream=True
)
for line in response.iter_lines():
    if line:
        print(line.decode())
```
</Tab>
<Tab value="TypeScript">
```typescript
const response = await fetch('https://sim.ai/api/workflows/{workflow-id}/execute', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'x-api-key': process.env.SIM_API_KEY!
  },
  body: JSON.stringify({
    input: 'Write a long essay',
    stream: true,
    selectedOutputs: ['agent_1.content']
  })
});

const reader = response.body!.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log(decoder.decode(value));
}
```
</Tab>
</Tabs>

### Asynchronous

For long-running workflows, async mode returns a job ID immediately so you don't need to hold the connection open. Add the `X-Execution-Mode: async` header to your request. The API returns HTTP 202 with a job ID and status URL. Poll the status URL until the job completes.

<Tabs items={['Start Job', 'Check Status']}>
<Tab value="Start Job">
```bash
curl -X POST https://sim.ai/api/workflows/{workflow-id}/execute \
  -H "Content-Type: application/json" \
  -H "x-api-key: $SIM_API_KEY" \
  -H "X-Execution-Mode: async" \
  -d '{ "input": "Process this large dataset" }'
```

**Response** (HTTP 202):
```json
{
  "success": true,
  "async": true,
  "jobId": "run_abc123",
  "executionId": "exec_xyz",
  "message": "Workflow execution queued",
  "statusUrl": "https://sim.ai/api/jobs/run_abc123"
}
```
</Tab>
<Tab value="Check Status">
```bash
curl https://sim.ai/api/jobs/{jobId} \
  -H "x-api-key: $SIM_API_KEY"
```

**While processing:**
```json
{
  "success": true,
  "taskId": "run_abc123",
  "status": "processing",
  "metadata": {
    "createdAt": "2025-09-10T12:00:00.000Z",
    "startedAt": "2025-09-10T12:00:01.000Z"
  },
  "estimatedDuration": 300000
}
```

**When completed:**
```json
{
  "success": true,
  "taskId": "run_abc123",
  "status": "completed",
  "metadata": {
    "createdAt": "2025-09-10T12:00:00.000Z",
    "startedAt": "2025-09-10T12:00:01.000Z",
    "completedAt": "2025-09-10T12:00:05.000Z",
    "duration": 4000
  },
  "output": { "result": "..." }
}
```
</Tab>
</Tabs>

#### Job Status Values

| Status | Description |
|--------|-------------|
| `queued` | Job is waiting to be picked up |
| `processing` | Workflow is actively executing |
| `completed` | Finished successfully — `output` field contains the result |
| `failed` | Execution failed — `error` field contains the message |

Poll the `statusUrl` from the initial response until the status is `completed` or `failed`.
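
A minimal polling loop can be sketched in Python, in the same `requests` style as the examples above (the helper names are illustrative, not part of the Sim API):

```python
import time

# Terminal job states from the status table above; polling stops on either.
TERMINAL_STATUSES = {"completed", "failed"}

def is_terminal(job: dict) -> bool:
    """True once a job has reached a final state and polling can stop."""
    return job.get("status") in TERMINAL_STATUSES

def poll_job(status_url: str, api_key: str, interval: float = 2.0, timeout: float = 600.0) -> dict:
    """Poll the statusUrl returned by the async execute call until the job finishes."""
    import requests  # deferred so is_terminal stays usable without the dependency

    job: dict = {}
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = requests.get(status_url, headers={"x-api-key": api_key}).json()
        if is_terminal(job):
            return job
        time.sleep(interval)
    raise TimeoutError(f"job still {job.get('status')!r} after {timeout}s")
```

A fixed interval is the simplest choice; for long-running jobs you may prefer to grow the interval between checks over time.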

#### Execution Time Limits

| Plan | Sync Limit | Async Limit |
|------|-----------|-------------|
| **Community** | 5 minutes | 90 minutes |
| **Pro / Max / Team / Enterprise** | 50 minutes | 90 minutes |

If a job exceeds its time limit it is automatically marked as `failed`.

#### Job Retention

Completed and failed job results are retained for **24 hours**. After that, the status endpoint returns `404`. Retrieve and store results on your end if you need them longer.

#### Capacity Limits

If the execution queue is full, the API returns `503`:

```json
{
  "error": "Service temporarily at capacity",
  "retryAfterSeconds": 10
}
```
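
A client can honor the `retryAfterSeconds` hint before retrying, falling back to exponential backoff when the field is absent. A Python sketch (the function names are illustrative):

```python
import time

def backoff_seconds(body: dict, attempt: int) -> float:
    """Use the server's retryAfterSeconds hint, else exponential backoff (1s, 2s, 4s, ...)."""
    return float(body.get("retryAfterSeconds", 2 ** attempt))

def execute_with_retry(url: str, payload: dict, api_key: str, max_attempts: int = 5):
    """POST to the execute endpoint, backing off while the queue is at capacity (503)."""
    import requests  # deferred import; matches the client used in the examples above

    for attempt in range(max_attempts):
        resp = requests.post(url, json=payload, headers={"x-api-key": api_key})
        if resp.status_code != 503:
            return resp
        time.sleep(backoff_seconds(resp.json(), attempt))
    raise RuntimeError("execution queue still at capacity after retries")
```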

<Callout type="info">
Async mode always runs against the deployed version. It does not support draft state, block overrides, or partial execution options like `runFromBlock` or `stopAfterBlockId`.
</Callout>

## API Key Management

Generate and manage API keys in **Settings → Sim Keys**:

- **Create** new keys for different applications or environments
- **Revoke** keys that are no longer needed
- Keys are scoped to your workspace

## Rate Limits

API calls are subject to rate limits based on your plan. Rate limit details are returned in response headers (`X-RateLimit-*`) and in the response body. Use async mode for high-volume or long-running workloads.
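
Since the exact header names can vary, a client can collect them generically. A small Python sketch (the helper name is illustrative):

```python
def rate_limit_headers(headers: dict) -> dict:
    """Pick out the X-RateLimit-* headers from a response, case-insensitively."""
    return {k: v for k, v in headers.items() if k.lower().startswith("x-ratelimit")}
```

Log these values when you approach your limit so you can decide whether to throttle requests or switch to async mode.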

For detailed rate limit information and the logs/webhooks API, see [External API](/execution/api).

<FAQ items={[
  { question: "What is the difference between the General tab and the API tab?", answer: "The General tab manages your deployment lifecycle — deploying, updating, rolling back, and viewing version history. The API tab gives you ready-to-use code samples and lets you configure the endpoint's description and access mode." },
  { question: "Can I deploy the same workflow as both an API and a chat?", answer: "Yes. A workflow can be simultaneously deployed as an API, chat, MCP tool, and more. Each deployment type runs against the same active snapshot." },
  { question: "How do I choose between sync, streaming, and async?", answer: "Use sync for quick workflows that finish in seconds. Use streaming when you want to show progressive output to users as it's generated. Use async for long-running workflows where holding a connection open isn't practical." },
  { question: "How do I select multiple outputs for streaming?", answer: "Open the Select outputs dropdown in the API tab and check each output field you want to stream. You can choose fields from multiple blocks. The selected fields are reflected as an array in the selectedOutputs request body parameter." },
  { question: "How does Promote to live work?", answer: "Promote to live sets an older version as the active deployment without creating a new version. Subsequent API calls immediately run against the promoted snapshot. This is the fastest way to roll back to a previous state." },
  { question: "How long are async job results available?", answer: "Completed and failed job results are retained for 24 hours. After that, the status endpoint returns 404. Retrieve and store results on your end if you need them longer." },
  { question: "What happens if my API key is compromised?", answer: "Revoke the key immediately in Settings → Sim Keys and generate a new one. Revoked keys stop working instantly." },
]} />

@@ -6,7 +6,7 @@ import { Callout } from 'fumadocs-ui/components/callout'
 import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
 import { Video } from '@/components/ui/video'

-Sim provides a comprehensive external API for querying workflow execution logs and setting up webhooks for real-time notifications when workflows complete.
+Sim provides a comprehensive external API for querying workflow run logs and setting up webhooks for real-time notifications when workflows complete.

 ## Authentication

@@ -21,7 +21,7 @@ You can generate API keys from the Sim platform and navigate to **Settings**, th

 ## Logs API

-All API responses include information about your workflow execution limits and usage:
+All API responses include information about your workflow run limits and usage:

 ```json
 "limits": {
@@ -48,11 +48,11 @@ All API responses include information about your workflow execution limits and u
 }
 ```

-**Note:** Rate limits use a token bucket algorithm. `remaining` can exceed `requestsPerMinute` up to `maxBurst` when you haven't used your full allowance recently, allowing for burst traffic. The rate limits in the response body are for workflow executions. The rate limits for calling this API endpoint are in the response headers (`X-RateLimit-*`).
+**Note:** Rate limits use a token bucket algorithm. `remaining` can exceed `requestsPerMinute` up to `maxBurst` when you haven't used your full allowance recently, allowing for burst traffic. The rate limits in the response body are for workflow runs. The rate limits for calling this API endpoint are in the response headers (`X-RateLimit-*`).

 ### Query Logs

-Query workflow execution logs with extensive filtering options.
+Query workflow run logs with extensive filtering options.

 <Tabs items={['Request', 'Response']}>
 <Tab value="Request">
@@ -70,11 +70,11 @@ Query workflow execution logs with extensive filtering options.
 - `level` - Filter by level: `info`, `error`
 - `startDate` - ISO timestamp for date range start
 - `endDate` - ISO timestamp for date range end
-- `executionId` - Exact execution ID match
-- `minDurationMs` - Minimum execution duration in milliseconds
-- `maxDurationMs` - Maximum execution duration in milliseconds
-- `minCost` - Minimum execution cost
-- `maxCost` - Maximum execution cost
+- `executionId` - Exact run ID match
+- `minDurationMs` - Minimum run duration in milliseconds
+- `maxDurationMs` - Maximum run duration in milliseconds
+- `minCost` - Minimum run cost
+- `maxCost` - Maximum run cost
 - `model` - Filter by AI model used

 **Pagination:**
@@ -213,9 +213,9 @@ Retrieve detailed information about a specific log entry.
 </Tab>
 </Tabs>

-### Get Execution Details
+### Get Run Details

-Retrieve execution details including the workflow state snapshot.
+Retrieve run details including the workflow state snapshot.

 <Tabs items={['Request', 'Response']}>
 <Tab value="Request">
@@ -248,7 +248,7 @@ Retrieve execution details including the workflow state snapshot.

 ## Notifications

-Get real-time notifications when workflow executions complete via webhook, email, or Slack. Notifications are configured at the workspace level from the Logs page.
+Get real-time notifications when workflow runs complete via webhook, email, or Slack. Notifications are configured at the workspace level from the Logs page.

 ### Configuration

@@ -256,7 +256,7 @@ Configure notifications from the Logs page by clicking the menu button and selec

 **Notification Channels:**
 - **Webhook**: Send HTTP POST requests to your endpoint
-- **Email**: Receive email notifications with execution details
+- **Email**: Receive email notifications with run details
 - **Slack**: Post messages to a Slack channel

 **Workflow Selection:**
@@ -269,38 +269,38 @@ Configure notifications from the Logs page by clicking the menu button and selec

 **Optional Data:**
 - `includeFinalOutput`: Include the workflow's final output
-- `includeTraceSpans`: Include detailed execution trace spans
+- `includeTraceSpans`: Include detailed trace spans
 - `includeRateLimits`: Include rate limit information (sync/async limits and remaining)
 - `includeUsageData`: Include billing period usage and limits

 ### Alert Rules

-Instead of receiving notifications for every execution, configure alert rules to be notified only when issues are detected:
+Instead of receiving notifications for every run, configure alert rules to be notified only when issues are detected:

 **Consecutive Failures**
-- Alert after X consecutive failed executions (e.g., 3 failures in a row)
-- Resets when an execution succeeds
+- Alert after X consecutive failed runs (e.g., 3 failures in a row)
+- Resets when a run succeeds

 **Failure Rate**
 - Alert when failure rate exceeds X% over the last Y hours
-- Requires minimum 5 executions in the window
+- Requires minimum 5 runs in the window
 - Only triggers after the full time window has elapsed

 **Latency Threshold**
-- Alert when any execution takes longer than X seconds
+- Alert when any run takes longer than X seconds
 - Useful for catching slow or hanging workflows

 **Latency Spike**
-- Alert when execution is X% slower than the average
+- Alert when a run is X% slower than the average
 - Compares against the average duration over the configured time window
-- Requires minimum 5 executions to establish baseline
+- Requires minimum 5 runs to establish baseline

 **Cost Threshold**
-- Alert when a single execution costs more than $X
+- Alert when a single run costs more than $X
 - Useful for catching expensive LLM calls

 **No Activity**
-- Alert when no executions occur within X hours
+- Alert when no runs occur within X hours
 - Useful for monitoring scheduled workflows that should run regularly

 **Error Count**
@@ -317,7 +317,7 @@ For webhooks, additional options are available:

 ### Payload Structure

-When a workflow execution completes, Sim sends the following payload (via webhook POST, email, or Slack):
+When a workflow run completes, Sim sends the following payload (via webhook POST, email, or Slack):

 ```json
 {
@@ -456,7 +456,7 @@ Failed webhook deliveries are retried with exponential backoff and jitter:
 - Deliveries timeout after 30 seconds

 <Callout type="info">
-Webhook deliveries are processed asynchronously and don't affect workflow execution performance.
+Webhook deliveries are processed asynchronously and don't affect workflow run performance.
 </Callout>

 ## Best Practices
@@ -596,11 +596,11 @@ app.listen(3000, () => {
 import { FAQ } from '@/components/ui/faq'

 <FAQ items={[
-  { question: "How do I trigger async execution via the API?", answer: "Set the X-Execution-Mode header to 'async' on your POST request to /api/workflows/{id}/execute. The API returns a 202 response with a jobId, executionId, and a statusUrl you can poll to check when the job completes. Async mode does not support draft state, workflow overrides, or selective output options." },
+  { question: "How do I trigger an async run via the API?", answer: "Set the X-Execution-Mode header to 'async' on your POST request to /api/workflows/{id}/execute. The API returns a 202 response with a jobId, executionId, and a statusUrl you can poll to check when the job completes. Async mode does not support draft state, workflow overrides, or selective output options." },
   { question: "What authentication methods does the API support?", answer: "The API supports two authentication methods: API keys passed in the x-api-key header, and session-based authentication for logged-in users. API keys can be generated from Settings > Sim Keys in the platform. Workflows with public API access enabled can also be called without authentication." },
   { question: "How does the webhook retry policy work?", answer: "Failed webhook deliveries are retried up to 5 times with exponential backoff: 5 seconds, 15 seconds, 1 minute, 3 minutes, and 10 minutes, plus up to 10% jitter. Only HTTP 5xx and 429 responses trigger retries. Each delivery times out after 30 seconds." },
-  { question: "What rate limits apply to the Logs API?", answer: "Rate limits use a token bucket algorithm. Free plans get 30 requests/minute with 60 burst capacity, Pro gets 100/200, Team gets 200/400, and Enterprise gets 500/1000. These are separate from workflow execution rate limits, which are shown in the response body." },
+  { question: "What rate limits apply to the Logs API?", answer: "Rate limits use a token bucket algorithm. Free plans get 30 requests/minute with 60 burst capacity, Pro gets 100/200, Team gets 200/400, and Enterprise gets 500/1000. These are separate from workflow run rate limits, which are shown in the response body." },
   { question: "How do I verify that a webhook is from Sim?", answer: "Configure a webhook secret when setting up notifications. Sim signs each delivery with HMAC-SHA256 using the format 't={timestamp},v1={signature}' in the sim-signature header. Compute the HMAC of '{timestamp}.{body}' with your secret and compare it to the signature value." },
   { question: "What alert rules are available for notifications?", answer: "You can configure alerts for consecutive failures, failure rate thresholds, latency thresholds, latency spikes (percentage above average), cost thresholds, no-activity periods, and error counts within a time window. All alert types include a 1-hour cooldown to prevent notification spam." },
-  { question: "Can I filter which executions trigger notifications?", answer: "Yes. You can filter notifications by specific workflows (or select all), log level (info or error), and trigger type (api, webhook, schedule, manual, chat). You can also choose whether to include final output, trace spans, rate limits, and usage data in the notification payload." },
+  { question: "Can I filter which runs trigger notifications?", answer: "Yes. You can filter notifications by specific workflows (or select all), log level (info or error), and trigger type (api, webhook, schedule, manual, chat). You can also choose whether to include final output, trace spans, rate limits, and usage data in the notification payload." },
 ]} />

@@ -6,7 +6,7 @@ import { Callout } from 'fumadocs-ui/components/callout'
 import { Card, Cards } from 'fumadocs-ui/components/card'
 import { Image } from '@/components/ui/image'

-Understanding how workflows execute in Sim is key to building efficient and reliable automations. The execution engine automatically handles dependencies, concurrency, and data flow to ensure your workflows run smoothly and predictably.
+Understanding how workflows run in Sim is key to building efficient and reliable automations. The execution engine automatically handles dependencies, concurrency, and data flow to ensure your workflows run smoothly and predictably.

 ## How Workflows Execute

@@ -14,7 +14,7 @@ Sim's execution engine processes workflows intelligently by analyzing dependenci

 ### Concurrent Execution by Default

-Multiple blocks run concurrently when they don't depend on each other. This parallel execution dramatically improves performance without requiring manual configuration.
+Multiple blocks run concurrently when they don't depend on each other. This dramatically improves performance without requiring manual configuration.

 <Image
   src="/static/execution/concurrency.png"
@@ -49,7 +49,7 @@ Workflows can branch in multiple directions using routing blocks. The execution
   height={500}
 />

-This workflow demonstrates how execution can follow different paths based on conditions or AI decisions, with each path executing independently.
+This workflow demonstrates how a run can follow different paths based on conditions or AI decisions, with each path running independently.

 ## Block Types

@@ -57,7 +57,7 @@ Sim provides different types of blocks that serve specific purposes in your work

 <Cards>
 <Card title="Triggers" href="/triggers">
-**Starter blocks** initiate workflows and **Webhook blocks** respond to external events. Every workflow needs a trigger to begin execution.
+**Starter blocks** initiate workflows and **Webhook blocks** respond to external events. Every workflow needs a trigger to begin a run.
 </Card>

 <Card title="Processing Blocks" href="/blocks">
@@ -73,37 +73,37 @@ Sim provides different types of blocks that serve specific purposes in your work
 </Card>
 </Cards>

-All blocks execute automatically based on their dependencies - you don't need to manually manage execution order or timing.
+All blocks run automatically based on their dependencies - you don't need to manually manage run order or timing.

-## Execution Monitoring
+## Run Monitoring

-When workflows run, Sim provides real-time visibility into the execution process:
+When workflows run, Sim provides real-time visibility into the process:

-- **Live Block States**: See which blocks are currently executing, completed, or failed
-- **Execution Logs**: Detailed logs appear in real-time showing inputs, outputs, and any errors
-- **Performance Metrics**: Track execution time and costs for each block
-- **Path Visualization**: Understand which execution paths were taken through your workflow
+- **Live Block States**: See which blocks are currently running, completed, or failed
+- **Run Logs**: Detailed logs appear in real-time showing inputs, outputs, and any errors
+- **Performance Metrics**: Track run time and costs for each block
+- **Path Visualization**: Understand which paths were taken through your workflow

 <Callout type="info">
-All execution details are captured and available for review even after workflows complete, helping with debugging and optimization.
+All run details are captured and available for review even after workflows complete, helping with debugging and optimization.
 </Callout>

-## Key Execution Principles
+## Key Principles

 Understanding these core principles will help you build better workflows:

 1. **Dependency-Based Execution**: Blocks only run when all their dependencies have completed
 2. **Automatic Parallelization**: Independent blocks run concurrently without configuration
 3. **Smart Data Flow**: Outputs flow automatically to connected blocks
-4. **Error Handling**: Failed blocks stop their execution path but don't affect independent paths
-5. **Response Blocks as Exit Points**: When a Response block executes, the entire workflow stops and the API response is sent immediately. Multiple Response blocks can exist on different branches — the first one to execute wins
-6. **State Persistence**: All block outputs and execution details are preserved for debugging
-7. **Cycle Protection**: Workflows that call other workflows (via Workflow blocks, MCP tools, or API blocks) are tracked with a call chain. If the chain exceeds 25 hops, execution is stopped to prevent infinite loops
+4. **Error Handling**: Failed blocks stop their run path but don't affect independent paths
+5. **Response Blocks as Exit Points**: When a Response block runs, the entire workflow stops and the API response is sent immediately. Multiple Response blocks can exist on different branches — the first one to run wins
+6. **State Persistence**: All block outputs and run details are preserved for debugging
+7. **Cycle Protection**: Workflows that call other workflows (via Workflow blocks, MCP tools, or API blocks) are tracked with a call chain. If the chain exceeds 25 hops, the run is stopped to prevent infinite loops

 ## Next Steps

 Now that you understand execution basics, explore:
 - **[Block Types](/blocks)** - Learn about specific block capabilities
-- **[Logging](/execution/logging)** - Monitor workflow executions and debug issues
+- **[Logging](/execution/logging)** - Monitor workflow runs and debug issues
 - **[Cost Calculation](/execution/costs)** - Understand and optimize workflow costs
 - **[Triggers](/triggers)** - Set up different ways to run your workflows

apps/docs/content/docs/en/execution/chat.mdx (new file, 184 lines)
@@ -0,0 +1,184 @@

---
title: Chat Deployment
---

import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'

Deploy your workflow as a conversational chat interface that users can interact with via a shareable link or embedded widget. Chat supports multi-turn conversations, file uploads, and voice input.

<Image src="/static/chat/chat-live.png" alt="A deployed chat interface showing a conversation with Friendly Assistant" width={800} height={500} />

Every chat message triggers a fresh workflow execution, with the full conversation history passed in as context. Responses stream back to the user in real time.

<Callout type="info">
Chat executions run against your workflow's active deployment snapshot. Publish a new deployment after making canvas changes so the chat uses the updated version.
</Callout>

## Creating a Chat

Open your workflow, click **Deploy**, and select the **Chat** tab. You'll see the chat configuration panel:

<Image src="/static/chat/chat-deploy-config.png" alt="Chat deployment configuration panel showing URL, Title, Output, Access control, and Welcome message fields" width={800} height={500} />

Configure the following fields, then click **Launch Chat**:

| Field | Description |
|-------|-------------|
| **URL** | Slug that forms the public URL, e.g. `https://www.sim.ai/chat/your-slug`. Lowercase letters, numbers, and hyphens only. Must be unique across all workspaces. |
| **Title** | Display name shown in the chat header. |
| **Output** | Output fields from your workflow blocks returned as the chat response. At least one must be selected. |
| **Welcome Message** | Greeting shown before the user sends their first message. Defaults to `"Hi there! How can I help you today?"`. |
| **Access Control** | Controls who can access the chat. See [Access Control](#access-control) below. |

### Output Selection

<Image src="/static/chat/chat-deploy-output.png" alt="Output dropdown showing Agent 1 block with selectable fields: content, model, tokens, toolCalls, providerTiming, cost" width={800} height={400} />

The output dropdown groups available fields by block. For an Agent block, you can choose from `content`, `model`, `tokens`, `toolCalls`, `providerTiming`, and `cost`. In most cases, selecting `content` from the final Agent block is all you need — it streams the agent's text response directly to the user.

## Access Control

<Image src="/static/chat/chat-deploy-access-email.png" alt="Access control section with Email tab selected, showing an Allowed emails field with @sim.ai domain added" width={800} height={300} />

| Mode | Description |
|------|-------------|
| **Public** | Anyone with the link can chat — no authentication required |
| **Password** | Users must enter a password before they can start chatting |
| **Email** | Only specific email addresses or domains can access. Users verify with a 6-digit OTP sent to their email |
| **SSO** | OIDC-based single sign-on (enterprise only) |

**Email access:** Add individual addresses (`user@example.com`) or entire domains (`@example.com`) to the **Allowed emails** field. Users receive a one-time 6-digit OTP to their inbox — once verified, they can chat for the duration of their session.

**Password access:** A password field appears when this mode is selected. Share the password with users directly; they enter it before the conversation begins.

**SSO:** Uses OIDC to authenticate users through your identity provider. Available on enterprise plans.

## Sharing
|
||||
|
||||
### Direct Link
|
||||
|
||||
```
|
||||
https://www.sim.ai/chat/your-slug
|
||||
```
|
||||
|
||||
### Iframe
|
||||
|
||||
```html
|
||||
<iframe
|
||||
src="https://www.sim.ai/chat/your-slug"
|
||||
width="100%"
|
||||
height="600"
|
||||
frameborder="0"
|
||||
title="Chat"
|
||||
></iframe>
|
||||
```
|
||||
|
||||
## API Submission
|
||||
|
||||
You can also send messages to a chat programmatically. Responses are streamed using server-sent events (SSE).
|
||||
|
||||
<Tabs items={['cURL', 'TypeScript']}>
|
||||
<Tab value="cURL">
|
||||
```bash
|
||||
curl -X POST https://www.sim.ai/api/chat/your-slug \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"input": "Hello, I need help with my order",
|
||||
"conversationId": "optional-conversation-id"
|
||||
}'
|
||||
```
|
||||
</Tab>
|
||||
<Tab value="TypeScript">
|
||||
```typescript
|
||||
const response = await fetch('https://www.sim.ai/api/chat/your-slug', {
|
||||
method: 'POST',
|
||||
headers: { 'Content-Type': 'application/json' },
|
||||
body: JSON.stringify({
|
||||
input: 'Hello, I need help with my order',
|
||||
conversationId: 'optional-conversation-id'
|
||||
})
|
||||
});
|
||||
|
||||
// Response is an SSE stream
|
||||
const reader = response.body?.getReader();
|
||||
const decoder = new TextDecoder();
|
||||
|
||||
while (true) {
|
||||
const { done, value } = await reader!.read();
|
||||
if (done) break;
|
||||
console.log(decoder.decode(value));
|
||||
}
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
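
Decoded chunks arrive as `data:`-prefixed lines separated by blank lines, per the SSE format. A minimal helper for pulling the payloads out of a decoded chunk (a sketch only; the exact payload shape of each event - plain text or JSON - depends on your configured outputs):

```typescript
// Extract the payloads from a decoded SSE chunk.
// Sketch only: assumes standard "data: ..." framing.
function parseSseChunk(chunk: string): string[] {
  return chunk
    .split('\n')
    .filter((line) => line.startsWith('data: '))
    .map((line) => line.slice('data: '.length));
}
```

Call this on each `decoder.decode(value)` result to accumulate the streamed response.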

### With File Uploads

```bash
curl -X POST https://www.sim.ai/api/chat/your-slug \
  -H "Content-Type: application/json" \
  -d '{
    "input": "What does this document say?",
    "files": [{
      "name": "report.pdf",
      "type": "application/pdf",
      "size": 1048576,
      "data": "data:application/pdf;base64,..."
    }]
  }'
```
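
From Node, a `files` entry can be built from raw bytes. A hedged sketch (the helper name is ours; only the `name`/`type`/`size`/`data` shape comes from the example above):

```typescript
// Build one entry for the `files` array from raw file bytes.
// The data URI matches the cURL example: data:{mime};base64,{data}.
function toFilePayload(content: Buffer, name: string, type: string) {
  return {
    name,
    type,
    size: content.length,
    data: `data:${type};base64,${content.toString('base64')}`,
  };
}
```

Pass the result inside the `files` array of the request body.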

### Protected Chats

For password-protected chats, include the password in the request body:

```bash
curl -X POST https://www.sim.ai/api/chat/your-slug \
  -H "Content-Type: application/json" \
  -d '{ "password": "secret", "input": "Hello" }'
```

For email-protected chats, authenticate with OTP first:

```bash
# Step 1: Request OTP - sends a 6-digit code to the email address
curl -X POST https://www.sim.ai/api/chat/your-slug/otp \
  -H "Content-Type: application/json" \
  -d '{ "email": "allowed@example.com" }'

# Step 2: Verify OTP - save the Set-Cookie header for subsequent requests
curl -X PUT https://www.sim.ai/api/chat/your-slug/otp \
  -H "Content-Type: application/json" \
  -c cookies.txt \
  -d '{ "email": "allowed@example.com", "otp": "123456" }'

# Step 3: Send messages using the auth cookie from Step 2
curl -X POST https://www.sim.ai/api/chat/your-slug \
  -H "Content-Type: application/json" \
  -b cookies.txt \
  -d '{ "input": "Hello" }'
```

## Troubleshooting

**Chat returns 403** - The deployment is inactive. Open the Deploy modal and re-deploy the workflow.

**"At least one output block is required"** - No output field is selected in the Output dropdown. Open the Deploy modal, go to the Chat tab, and select at least one output from a block.

**OTP email not arriving** - Confirm the email address is on the allowed list and check spam folders. OTP codes expire after 15 minutes and can be resent after a 30-second cooldown.

**Chat not loading in iframe** - Check that your site's Content Security Policy allows iframes from `sim.ai`.

**Responses not updating after workflow changes** - Chat uses the active deployment snapshot. Publish a new deployment from the Deploy modal to pick up your latest changes.

<FAQ items={[
  { question: "How is chat different from API deployment?", answer: "API deployment exposes your workflow as a REST endpoint for programmatic use. Chat wraps the workflow in a hosted conversational UI with streaming, file uploads, voice input, and access control - no application code required to use it." },
  { question: "Which output field should I select?", answer: "For workflows built around Agent blocks, select the content field from the final Agent block - this streams the agent's text response to the user. You can select multiple fields if your workflow produces structured output you want to expose." },
  { question: "How does conversation history work?", answer: "Each message triggers a new workflow execution. The full conversation history - all prior user messages and assistant responses - is passed as context so your workflow can maintain continuity across turns." },
  { question: "How does email OTP authentication work?", answer: "When a user opens an email-protected chat, they enter their email address. If it matches the allowed list, Sim sends a 6-digit OTP to that address. The user enters the code, and a session cookie is set for the duration of their visit." },
  { question: "Is there a message length limit?", answer: "There is no hard limit on message length. Very long messages may impact response time depending on your workflow's model context window." },
  { question: "Can I use chat with any workflow?", answer: "Yes, any workflow can be deployed as a chat. The chat sends the user's message as the workflow input and streams the selected block outputs back as the response." },
]} />

import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Image } from '@/components/ui/image'

Sim automatically calculates costs for all workflow runs, providing transparent pricing based on AI model usage and run charges. Understanding these costs helps you optimize workflows and manage your budget effectively.

## Credits

## How Costs Are Calculated

Every workflow run includes two cost components:

**Base Run Charge**: 1 credit ($0.005) per run

**AI Model Usage**: Variable cost based on token consumption
```javascript
modelCost = (inputTokens × inputPrice + outputTokens × outputPrice) / 1,000,000
totalCredits = baseRunCharge + modelCost × 200
```
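
A worked example of the formula, using illustrative per-million-token prices (not actual provider rates):

```typescript
// Illustrative prices only - not actual provider rates.
const inputTokens = 10_000;
const outputTokens = 2_000;
const inputPrice = 2.5; // $ per 1M input tokens (assumed)
const outputPrice = 10; // $ per 1M output tokens (assumed)

const modelCost = (inputTokens * inputPrice + outputTokens * outputPrice) / 1_000_000;
const baseRunCharge = 1; // 1 credit ($0.005) per run
const totalCredits = baseRunCharge + modelCost * 200; // 200 credits per dollar

console.log(totalCredits); // 1 + 0.045 * 200 = 10 credits ($0.05)
```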

<Callout type="info">
AI model prices are per million tokens. The calculation divides by 1,000,000 to get the actual cost. Workflows without AI blocks only incur the base run charge.
</Callout>

## Model Breakdown in Logs

The model breakdown shows:
- **Token Usage**: Input and output token counts for each model
- **Cost Breakdown**: Individual costs per model and operation
- **Model Distribution**: Which models were used and how many times
- **Total Cost**: Aggregate cost for the entire workflow run

## Pricing Options

<Callout type="info">
Pricing shown reflects rates as of September 10, 2025. Check provider documentation for current pricing.
</Callout>

## Hosted Tool Pricing

When workflows use tool blocks with Sim's hosted API keys, costs are charged per operation. Use your own keys via BYOK to pay providers directly instead.

<Tabs items={['Firecrawl', 'Exa', 'Serper', 'Perplexity', 'Linkup', 'Parallel AI', 'Jina AI', 'Google Cloud', 'Brandfetch']}>
<Tab>
**Firecrawl** - Web scraping, crawling, search, and extraction

| Operation | Cost |
|-----------|------|
| Scrape | $0.001 per credit used |
| Crawl | $0.001 per credit used |
| Search | $0.001 per credit used |
| Extract | $0.001 per credit used |
| Map | $0.001 per credit used |
</Tab>

<Tab>
**Exa** - AI-powered search and research

| Operation | Cost |
|-----------|------|
| Search | Dynamic (returned by API) |
| Get Contents | Dynamic (returned by API) |
| Find Similar Links | Dynamic (returned by API) |
| Answer | Dynamic (returned by API) |
</Tab>

<Tab>
**Serper** - Google search API

| Operation | Cost |
|-----------|------|
| Search (≤10 results) | $0.001 |
| Search (>10 results) | $0.002 |
</Tab>

<Tab>
**Perplexity** - AI-powered chat and web search

| Operation | Cost |
|-----------|------|
| Search | $0.005 per request |
| Chat | Token-based (varies by model) |
</Tab>

<Tab>
**Linkup** - Web search and content retrieval

| Operation | Cost |
|-----------|------|
| Standard search | ~$0.006 |
| Deep search | ~$0.055 |
</Tab>

<Tab>
**Parallel AI** - Web search, extraction, and deep research

| Operation | Cost |
|-----------|------|
| Search (≤10 results) | $0.005 |
| Search (>10 results) | $0.005 + $0.001 per additional result |
| Extract | $0.001 per URL |
| Deep Research | $0.005–$2.40 (varies by processor tier) |
</Tab>

<Tab>
**Jina AI** - Web reading and search

| Operation | Cost |
|-----------|------|
| Read URL | $0.20 per 1M tokens |
| Search | $0.20 per 1M tokens (minimum 10K tokens) |
</Tab>

<Tab>
**Google Cloud** - Translate, Maps, PageSpeed, and Books APIs

| Operation | Cost |
|-----------|------|
| Translate / Detect | $0.00002 per character |
| Maps (Geocode, Directions, Distance Matrix, Elevation, Timezone, Reverse Geocode, Geolocate, Validate Address) | $0.005 per request |
| Maps (Snap to Roads) | $0.01 per request |
| Maps (Place Details) | $0.017 per request |
| Maps (Places Search) | $0.032 per request |
| PageSpeed | Free |
| Books (Search, Details) | Free |
</Tab>

<Tab>
**Brandfetch** - Brand assets, logos, colors, and company info

| Operation | Cost |
|-----------|------|
| Search | Free |
| Get Brand | $0.04 per request |
</Tab>
</Tabs>

## Bring Your Own Key (BYOK)

Use your own API keys for supported providers instead of Sim's hosted keys to pay base prices with no markup.

### Supported Providers

| OpenAI | Knowledge Base embeddings, Agent block |
| Anthropic | Agent block |
| Google | Agent block |
| Mistral | Knowledge Base OCR, Agent block |
| Fireworks | Agent block |
| Firecrawl | Web scraping, crawling, search, and extraction |
| Exa | AI-powered search and research |
| Serper | Google search API |
| Linkup | Web search and content retrieval |
| Parallel AI | Web search, extraction, and deep research |
| Perplexity | AI-powered chat and web search |
| Jina AI | Web reading and search |
| Google Cloud | Translate, Maps, PageSpeed, and Books APIs |
| Brandfetch | Brand assets, logos, colors, and company info |

### Setup

## Plans

Sim has two paid plan tiers - **Pro** and **Max**. Either can be used individually or with a team. Team plans pool credits across all seats in the organization.

| Plan | Price | Credits Included | Daily Refresh |
|------|-------|------------------|---------------|
| **Community** | $0 | 1,000 (one-time) | - |
| **Pro** | $25/mo | 6,000/mo | +50/day |
| **Max** | $100/mo | 25,000/mo | +200/day |
| **Enterprise** | Custom | Custom | - |

To use Pro or Max with a team, select **Get For Team** in subscription settings and choose the tier and number of seats. Credits are pooled across the organization at the per-seat rate (e.g. Max for Teams with 3 seats = 75,000 credits/mo pooled).

### Daily Refresh Credits

Paid plans include a small daily credit allowance that does not count toward your plan limit. Each day, usage up to the daily refresh amount is excluded from billable usage. This allowance resets every 24 hours and does not carry over - use it or lose it.

| Plan | Daily Refresh |
|------|---------------|

## Plan Limits

### Workspaces

| Plan | Personal Workspaces | Shared (Organization) Workspaces |
|------|---------------------|----------------------------------|
| **Free** | 1 | — |
| **Pro** | Up to 3 | — |
| **Max** | Up to 10 | — |
| **Team / Enterprise** | Unlimited | Unlimited |

Team and Enterprise plans unlock shared workspaces that belong to your organization. Members invited to a shared workspace automatically join the organization and count toward your seat total. When a Team or Enterprise subscription is cancelled or downgraded, existing shared workspaces remain accessible to current members, but new invites are disabled until the organization is upgraded again.

### Rate Limits

| Plan | Sync (req/min) | Async (req/min) |

Max (individual) shares the same rate limits as team plans. Team plans (Pro or Max for Teams) use the Max-tier rate limits.

### Concurrent Execution Limits

| Plan | Concurrent Executions |
|------|----------------------|
| **Free** | 5 |
| **Pro** | 50 |
| **Max / Team** | 200 |
| **Enterprise** | 200 (customizable) |

Concurrent execution limits control how many workflow executions can run simultaneously within a workspace. When the limit is reached, new executions are queued and admitted as running executions complete. Manual runs from the editor are not subject to these limits.

### File Storage

| Plan | Storage |

Team plans (Pro or Max for Teams) use 500 GB.

### Run Time Limits

| Plan | Sync | Async |
|------|------|-------|
| **Free** | 5 minutes | 90 minutes |
| **Pro / Max / Team / Enterprise** | 50 minutes | 90 minutes |

**Sync runs** complete immediately and return results directly. These are triggered via the API with `async: false` (default) or through the UI.
**Async runs** (triggered via API with `async: true`, webhooks, or schedules) run in the background.

<Callout type="info">
If a workflow exceeds its time limit, it will be terminated and marked as failed with a timeout error. Design long-running workflows to use async runs or break them into smaller workflows.
</Callout>

## Billing Model

Sim uses a **base subscription + overage** billing model:

### How It Works

**Pro Plan ($25/month - 6,000 credits):**
- Monthly subscription includes 6,000 credits of usage
- Usage under 6,000 credits → No additional charges
- Usage over 6,000 credits (with on-demand enabled) → Pay the overage at month end

### Threshold Billing

When on-demand is enabled and unbilled overage reaches $100, Sim automatically bills the full unbilled amount.

**Example:**
- Day 10: $120 overage → Bill $120 immediately
- Day 15: Additional $60 usage ($180 total) → Already billed, no action
- Day 20: Another $80 usage ($260 total, $140 unbilled) → Bill $140 immediately

This spreads large overage charges throughout the month instead of one large bill at period end.
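
The rule can be sketched as a small function (illustrative only, not Sim's billing code):

```typescript
// Bill the full unbilled amount whenever it reaches the threshold.
function runThresholdBilling(overages: number[], threshold = 100): number[] {
  const bills: number[] = [];
  let unbilled = 0;
  for (const amount of overages) {
    unbilled += amount;
    if (unbilled >= threshold) {
      bills.push(unbilled);
      unbilled = 0;
    }
  }
  return bills;
}

// The example above: $120, then $60, then $80 of overage
console.log(runThresholdBilling([120, 60, 80])); // [ 120, 140 ]
```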

- `limit` is derived from individual limits (Free/Pro/Max) or pooled organization limits (Team/Enterprise)
- `plan` is the highest-priority active plan associated with your user

## Purchasing Additional Credits

Pro and Team plan users can buy additional credits at any time in **Settings → Subscription → Credit Balance**:

- **Range**: $10 to $1,000 per purchase
- **Conversion**: 1 credit = $0.005 (a $10 purchase adds 2,000 credits)
- **Availability**: Credits are added immediately after payment
- **Expiration**: Purchased credits do not expire
- **Refunds**: Purchases are non-refundable
- **Team plans**: Only organization owners and admins can purchase credits. Purchased credits are added to the team's shared pool.

<Callout type="info">
Enterprise users should contact support for credit adjustments.
</Callout>
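
The conversion is linear, so purchase math is straightforward (a sketch):

```typescript
// 1 credit = $0.005, so every dollar buys 200 credits.
const CREDIT_PRICE_USD = 0.005;

function purchasedCredits(usd: number): number {
  return Math.round(usd / CREDIT_PRICE_USD);
}

console.log(purchasedCredits(10)); // 2000
```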

## Cost Optimization Strategies

- **Model Selection**: Choose models based on task complexity. Simple tasks can use GPT-4.1-nano, while complex reasoning might need o1 or Claude Opus.

## Next Steps

- Review your current usage in [Settings → Subscription](https://sim.ai/settings/subscription)
- Learn about [Logging](/execution/logging) to track run details
- Explore the [External API](/execution/api) for programmatic cost monitoring
- Check out [workflow optimization techniques](/blocks) to reduce costs

import { FAQ } from '@/components/ui/faq'

<FAQ items={[
  { question: "How much does a single workflow run cost?", answer: "Every run incurs a base charge of 1 credit ($0.005). On top of that, any AI model usage is billed based on token consumption. Workflows that do not use AI blocks only pay the base run charge." },
  { question: "What is the credit-to-dollar conversion rate?", answer: "1 credit equals $0.005. All plan limits, usage meters, and billing thresholds in the Sim UI are displayed in credits." },
  { question: "Do unused daily refresh credits carry over?", answer: "No. Daily refresh credits reset every 24 hours and do not accumulate. If you do not use them within the day, they are lost." },
  { question: "What happens when I exceed my plan's credit limit?", answer: "By default, your usage is capped at your plan's included credits and runs will stop. If you enable on-demand billing or manually raise your usage limit in Settings, you can continue running workflows and pay for the overage at the end of the billing period." },
  { question: "How does the 1.1x hosted model multiplier work?", answer: "When you use Sim's hosted API keys (instead of bringing your own), a 1.1x multiplier is applied to the base model pricing for Agent blocks. This covers infrastructure and API management costs. You can avoid this multiplier by using your own API keys via the BYOK feature." },
  { question: "Are there any free options for AI models?", answer: "Yes. If you run local models through Ollama or VLLM, there are no API costs for those model calls. You still pay the base run charge of 1 credit per run." },
  { question: "When does threshold billing trigger?", answer: "When on-demand billing is enabled and your unbilled overage reaches $100, Sim automatically bills the full unbilled amount. This spreads large charges throughout the month instead of accumulating one large bill at period end." },
]} />

Use `url` for direct downloads or `base64` for inline processing.

- **Dropbox** - Dropbox file operations

<Callout type="info">
Files are automatically available to downstream blocks. The engine handles all file transfer and format conversion.
</Callout>

## Best Practices

2. **Check file types** - Ensure the file type matches what the receiving block expects. The Vision block needs images, while the File block handles documents.

3. **Consider file size** - Large files increase run time. For very large files, consider using storage blocks (S3, Supabase) for intermediate storage.

import { FAQ } from '@/components/ui/faq'

<FAQ items={[
  { question: "What is the maximum file size for uploads?", answer: "The maximum file size for files processed during a workflow run is 20 MB. Files exceeding this limit will be rejected with an error indicating the actual file size. For larger files, use storage blocks like S3 or Supabase for intermediate storage." },
  { question: "What file input formats are supported via the API?", answer: "When triggering a workflow via API, you can send files as base64-encoded data (using a data URI with the format 'data:{mime};base64,{data}') or as a URL pointing to a publicly accessible file. In both cases, include the file name and MIME type in the request." },
  { question: "How are files passed between blocks internally?", answer: "Files are represented as standardized UserFile objects with name, url, base64, type, and size properties. Most blocks accept the full file object and extract what they need automatically, so you typically pass the entire object rather than individual properties." },
  { question: "Which blocks can output files?", answer: "Gmail outputs email attachments, Slack outputs downloaded files, TTS generates audio files, and Video Generator and Image Generator produce media files. Storage blocks like S3, Supabase, Google Drive, and Dropbox can also retrieve files for use in downstream blocks." },
  { question: "Do I need to extract base64 or URL from file objects manually?", answer: "No. Most blocks accept the full file object and handle the format conversion automatically. Simply pass the entire file reference (e.g., <gmail.attachments[0]>) and the receiving block will extract the data it needs." },
  { question: "How do file fields work in the Start block's input format?", answer: "When you define a field with type 'file[]' in the Start block's input format, the engine automatically processes incoming file data (base64 or URL) and uploads it to storage, converting it into UserFile objects before the workflow runs." },
]} />
@@ -7,10 +7,10 @@ import { Card, Cards } from 'fumadocs-ui/components/card'
|
||||
import { Image } from '@/components/ui/image'
|
||||
import { FAQ } from '@/components/ui/faq'
|
||||
|
||||
Sim's execution engine brings your workflows to life by processing blocks in the correct order, managing data flow, and handling errors gracefully, so you can understand exactly how workflows are executed in Sim.
|
||||
Sim's execution engine brings your workflows to life by processing blocks in the correct order, managing data flow, and handling errors gracefully, so you can understand exactly how workflows run in Sim.
|
||||
|
||||
<Callout type="info">
|
||||
Every workflow execution follows a deterministic path based on your block connections and logic, ensuring predictable and reliable results.
|
||||
Every workflow run follows a deterministic path based on your block connections and logic, ensuring predictable and reliable results.
|
||||
</Callout>
|
||||
|
||||
## Documentation Overview
|
||||
@@ -22,33 +22,42 @@ Sim's execution engine brings your workflows to life by processing blocks in the

</Card>

<Card title="Logging" href="/execution/logging">
Monitor workflow runs with comprehensive logging and real-time visibility
</Card>

<Card title="Cost Calculation" href="/execution/costs">
Understand how workflow run costs are calculated and optimized
</Card>

<Card title="External API" href="/execution/api">
Access run logs and set up webhooks programmatically via REST API
</Card>

<Card title="API Deployment" href="/execution/api-deployment">
Deploy your workflow as a REST API endpoint with sync, streaming, and async modes
</Card>

<Card title="Chat Deployment" href="/execution/chat">
Deploy your workflow as a conversational chat interface with streaming, file uploads, and voice
</Card>

</Cards>

## Key Concepts

### Topological Execution

Blocks run in dependency order, similar to how a spreadsheet recalculates cells. The execution engine automatically determines which blocks can run based on completed dependencies.

### Path Tracking

The engine actively tracks run paths through your workflow. Router and Condition blocks dynamically update these paths, ensuring only relevant blocks run.

### Layer-Based Processing

Instead of executing blocks one-by-one, the engine identifies layers of blocks that can run in parallel, optimizing performance for complex workflows.
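The layering idea can be sketched as a breadth-first topological pass over the dependency graph. This is a minimal illustration of the technique, not Sim's actual engine code, and the block ids are hypothetical:

```typescript
// Compute execution layers: each layer contains blocks whose
// dependencies have all completed in earlier layers.
type Graph = Record<string, string[]>; // blockId -> ids of blocks it depends on

function executionLayers(deps: Graph): string[][] {
  const layers: string[][] = [];
  const done = new Set<string>();
  const pending = new Set(Object.keys(deps));
  while (pending.size > 0) {
    // Every pending block whose dependencies are all done can run in parallel.
    const layer = [...pending].filter((id) => deps[id].every((d) => done.has(d)));
    if (layer.length === 0) throw new Error('cycle detected');
    for (const id of layer) {
      done.add(id);
      pending.delete(id);
    }
    layers.push(layer.sort());
  }
  return layers;
}

// A diamond-shaped workflow: start -> (a, b) -> end
const layers = executionLayers({ start: [], a: ['start'], b: ['start'], end: ['a', 'b'] });
// layers: [['start'], ['a', 'b'], ['end']] — 'a' and 'b' share a layer and can run concurrently
```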

### Run Context

Each workflow maintains a rich context during a run containing:

- Block outputs and states
- Active run paths
- Loop and parallel iteration tracking
- Environment variables
- Routing decisions
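The bullets above suggest a context shape roughly like the following. The field names and types here are illustrative assumptions, not Sim's actual internal types:

```typescript
// Hypothetical sketch of a run context, mirroring the list above.
interface RunContext {
  blockOutputs: Record<string, unknown>; // outputs and states per block
  activePaths: Set<string>; // block ids currently on active run paths
  iterations: Record<string, number>; // loop / parallel iteration counters
  env: Record<string, string>; // environment variables
  routingDecisions: Record<string, string>; // router block id -> chosen target
}

const ctx: RunContext = {
  blockOutputs: { agent1: { text: 'hello' } },
  activePaths: new Set(['start', 'agent1']),
  iterations: { loop1: 3 },
  env: { OPENAI_API_KEY: 'redacted' },
  routingDecisions: { router1: 'agent1' },
};
```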
@@ -56,23 +65,57 @@ Each workflow maintains a rich context during a run containing:

## Deployment Snapshots

API, Chat, Schedule, and Webhook runs use the workflow’s active deployment snapshot. Manual runs from the editor use the current draft canvas state, letting you test changes before deploying. Publish a new deployment whenever you change the canvas so every trigger uses the updated version.

<div className="flex justify-center my-6">
<Image
src="/static/execution/deployment-versions.png"
alt="Deployment versions table"
width={500}
height={280}
className="rounded-xl border border-border shadow-sm"
/>
</div>

### Version History

The **General** tab in the Deploy modal shows a version history table for every deployment. Each row shows the version name, who deployed it, and when.

<div className="flex justify-center">
<Image
src="/static/execution/deployment-versions-table.png"
alt="Version history table with multiple deployment versions"
width={600}
height={650}
className="my-6"
/>
</div>

From the version table you can:

- **Rename** a version to give it a meaningful label (e.g., "v2 — added error handling")
- **Add a description** with notes about what changed in that deployment
- **Promote to live** to roll back to an older version — this makes the selected version the active deployment without changing your draft canvas
- **Load into editor** to restore a previous version's workflow into the canvas for editing and redeploying
- **Preview a version** by selecting a row to view that version's workflow in the canvas preview, then toggle between **Live** and the selected version

<div className="flex justify-center">
<Image
src="/static/execution/deployment-version-preview.png"
alt="Previewing a selected deployment version"
width={600}
height={650}
className="my-6"
/>
</div>

<Callout type="info">
Promoting an old version takes effect immediately — all API, Chat, Schedule, and Webhook runs will use the promoted version. Your draft canvas is not affected.
</Callout>
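The dispatch rule — trigger-based runs use the active snapshot, manual editor runs use the draft — can be sketched as a single lookup. This is an illustration of the rule only; the type and field names are assumptions, not Sim's internals:

```typescript
// Which workflow state does a run resolve to?
type Trigger = 'manual' | 'api' | 'chat' | 'schedule' | 'webhook';

interface Workflow {
  draft: string; // current canvas state (stand-in for the real state object)
  activeSnapshot: string; // frozen state captured at deploy time
}

function resolveState(wf: Workflow, trigger: Trigger): string {
  // Manual editor runs test the draft; every other trigger uses the deployment.
  return trigger === 'manual' ? wf.draft : wf.activeSnapshot;
}

const wf: Workflow = { draft: 'v3-draft', activeSnapshot: 'v2-deployed' };
// resolveState(wf, 'manual')  -> 'v3-draft'
// resolveState(wf, 'webhook') -> 'v2-deployed'
```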

## Programmatic Access

Run workflows from your applications using our official SDKs:

```bash
# TypeScript/JavaScript
@@ -107,21 +150,21 @@ const result = await client.executeWorkflow('workflow-id', {

- Use parallel execution for independent operations
- Cache results with Memory blocks when appropriate

### Monitor Runs
- Review logs regularly to understand performance patterns
- Track costs for AI model usage
- Use workflow snapshots to debug issues

## What's Next?

Start with [Execution Basics](/execution/basics) to understand how workflows run, then explore [Logging](/execution/logging) to monitor your runs and [Cost Calculation](/execution/costs) to optimize your spending.

<FAQ items={[
{ question: "What are the run timeout limits?", answer: "Synchronous runs (API, chat) have a default timeout of 5 minutes on the Free plan and 50 minutes on Pro, Team, and Enterprise plans. Asynchronous runs (schedules, webhooks) allow up to 90 minutes across all plans. These limits are configurable by the platform administrator." },
{ question: "How does parallel execution work?", answer: "The engine identifies layers of blocks with no dependencies on each other and runs them concurrently. Within loops and parallel blocks, the engine supports up to 20 parallel branches by default and up to 1,000 loop iterations. Nested subflows (loops inside parallels, or vice versa) are supported up to 10 levels deep." },
{ question: "Can I cancel a running workflow?", answer: "Yes. The engine supports cancellation through an abort signal mechanism. When you cancel a run, the engine checks for cancellation between blocks (at roughly 500ms intervals when using Redis-backed cancellation). Any in-progress blocks complete, and the run returns with a cancelled status." },
{ question: "What is a deployment snapshot?", answer: "A deployment snapshot is a frozen copy of your workflow at the time you click Deploy. Trigger-based runs (API, chat, schedule, webhook) use the active snapshot, not your draft canvas. Manual runs from the editor use the current draft canvas state, so you can test changes before deploying. You can view, compare, and roll back snapshots from the Deploy modal." },
{ question: "How are run costs calculated?", answer: "Costs are tracked per block based on the AI model used. Each block log records input tokens, output tokens, and the computed cost using the model's pricing. The total workflow cost is the sum of all block-level costs for that run. You can review costs in the run logs." },
{ question: "What happens when a block fails during a run?", answer: "When a block throws an error, the engine captures the error message in the block log, finalizes any incomplete logs with timing data, and halts the run with a failure status. If the failing block has an error output handle connected to another block, that error path is followed instead of halting entirely." },
{ question: "Can I re-run part of a workflow without starting from scratch?", answer: "Yes. The run-from-block feature lets you select a specific block and re-run from that point. The engine computes which upstream blocks need to be re-run (the dirty set) and preserves cached outputs from blocks that have not changed, so only the affected portion of the workflow re-runs." },
]} />
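The run-from-block dirty set mentioned in the FAQ can be pictured as a reachability walk over the block graph. This is one plausible reading sketched for illustration — the selected block plus everything downstream re-runs while other blocks keep cached outputs — and is not Sim's actual implementation:

```typescript
// Hypothetical dirty-set computation for run-from-block.
type Edges = Record<string, string[]>; // blockId -> downstream block ids

function dirtySet(edges: Edges, start: string): Set<string> {
  const dirty = new Set<string>([start]);
  const stack = [start];
  while (stack.length > 0) {
    for (const next of edges[stack.pop()!] ?? []) {
      if (!dirty.has(next)) {
        dirty.add(next);
        stack.push(next);
      }
    }
  }
  return dirty;
}

// start -> a -> b, start -> c; re-running from 'a' leaves 'start' and 'c' cached
const dirty = dirtySet({ start: ['a', 'c'], a: ['b'] }, 'a');
// dirty: Set { 'a', 'b' }
```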

@@ -6,7 +6,7 @@ import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Image } from '@/components/ui/image'

Sim provides comprehensive logging for all workflow runs, giving you complete visibility into how each run behaves, what data flows through it, and where issues occur.

## Logging System

@@ -14,7 +14,7 @@ Sim offers two complementary logging interfaces to match different workflows and

### Real-Time Console

During manual or chat workflow runs, logs appear in real-time in the Console panel on the right side of the workflow editor:

<div className="flex justify-center">
<Image
@@ -27,14 +27,14 @@ During manual or chat workflow execution, logs appear in real-time in the Consol
</div>

The console shows:
- Block progress with active block highlighting
- Real-time outputs as blocks complete
- Timing for each block
- Success/error status indicators

### Logs Page

All workflow runs—whether triggered manually, via API, Chat, Schedule, or Webhook—are logged to the dedicated Logs page:

<div className="flex justify-center">
<Image
@@ -72,7 +72,7 @@ View the complete data flow for each block with tabs to switch between:

<Tabs items={['Output', 'Input']}>
<Tab>
**Output Tab** shows the block's result:
- Structured data with JSON formatting
- Markdown rendering for AI-generated content
- Copy button for easy data extraction
@@ -87,17 +87,17 @@ View the complete data flow for each block:
</Tab>
</Tabs>

### Run Timeline

For workflow-level logs, view detailed run metrics:
- Start and end timestamps
- Total workflow duration
- Individual block run times
- Performance bottleneck identification

## Workflow Snapshots

For any logged run, click "View Snapshot" to see the exact workflow state at the time of the run:

<div className="flex justify-center">
<Image
@@ -111,12 +111,12 @@ For any logged execution, click "View Snapshot" to see the exact workflow state

The snapshot provides:
- Frozen canvas showing the workflow structure
- Block states and connections as they were during the run
- Click any block to see its inputs and outputs
- Useful for debugging workflows that have since been modified

<Callout type="info">
Workflow snapshots are only available for runs after the enhanced logging system was introduced. Older migrated logs show a "Logged State Not Found" message.
</Callout>

## Log Retention
@@ -134,11 +134,11 @@ The snapshot provides:

### For Production
- Monitor the Logs page regularly for errors or performance issues
- Set up filters to focus on specific workflows or time periods
- Use live mode during critical deployments to watch runs in real-time

### For Debugging
- Always check the run timeline to identify slow blocks
- Compare inputs between working and failing runs
- Use workflow snapshots to see the exact state when issues occurred

## Next Steps
@@ -150,10 +150,10 @@ The snapshot provides:

import { FAQ } from '@/components/ui/faq'

<FAQ items={[
{ question: "How long are run logs retained?", answer: "Free plans retain logs for 7 days — after that, logs are archived to cloud storage and deleted from the database. Pro, Team, and Enterprise plans retain logs indefinitely with no automatic cleanup." },
{ question: "What data is captured in each run log?", answer: "Each log entry includes the run ID, workflow ID, trigger type, start and end timestamps, total duration in milliseconds, cost breakdown (total cost, token counts, and per-model breakdowns), run data with trace spans, final output, and any associated files. The log details sidebar lets you inspect block-level inputs and outputs." },
{ question: "Are API keys visible in the logs?", answer: "No. API keys and credentials are automatically redacted in the log input tab for security. You can safely inspect block inputs without exposing sensitive values." },
{ question: "What is a workflow snapshot?", answer: "A workflow snapshot is a frozen copy of the workflow's structure (blocks, connections, and configuration) captured at the time of a run. It lets you see the exact state of the workflow when a particular run happened, which is useful for debugging workflows that have been modified since." },
{ question: "Can I access logs programmatically?", answer: "Yes. The External API provides endpoints to query logs with filtering by workflow, time range, trigger type, duration, cost, and model. You can also set up webhook, email, or Slack notifications for real-time alerts when runs complete." },
{ question: "What does Live mode do on the Logs page?", answer: "Live mode automatically refreshes the Logs page in real-time so new log entries appear as they are recorded, without requiring manual page refreshes. This is useful during deployments or when monitoring active workflows." },
]} />
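The log filters the External API exposes (workflow, time range, trigger type, duration, cost) can be mirrored client-side once entries are fetched. The entry shape and field names below are assumptions for illustration, not the API's actual response schema:

```typescript
// Illustrative client-side filter over run log entries.
interface LogEntry {
  workflowId: string;
  trigger: string;
  durationMs: number;
  cost: number;
}

function filterLogs(
  logs: LogEntry[],
  opts: { trigger?: string; minDurationMs?: number }
): LogEntry[] {
  return logs.filter(
    (l) =>
      (opts.trigger === undefined || l.trigger === opts.trigger) &&
      (opts.minDurationMs === undefined || l.durationMs >= opts.minDurationMs)
  );
}

// Find slow webhook runs worth investigating
const slowWebhookRuns = filterLogs(
  [
    { workflowId: 'wf1', trigger: 'webhook', durationMs: 12000, cost: 0.02 },
    { workflowId: 'wf1', trigger: 'manual', durationMs: 800, cost: 0.01 },
  ],
  { trigger: 'webhook', minDurationMs: 5000 }
);
// slowWebhookRuns contains only the 12s webhook run
```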
@@ -1,3 +1,3 @@
{
"pages": ["index", "basics", "files", "api", "api-deployment", "chat", "logging", "costs"]
}

@@ -170,17 +170,17 @@ Build, test, and refine workflows quickly with immediate feedback

## Next Steps

<Cards>
<Card title="Explore Blocks" href="/blocks">
Discover API, Function, Condition, and other blocks
</Card>
<Card title="Browse Integrations" href="/tools">
Connect 1,000+ services including Gmail, Slack, Notion, and more
</Card>
<Card title="Add Custom Logic" href="/blocks/function">
Write custom functions for advanced data processing
</Card>
<Card title="Deploy Your Agent" href="/execution">
Make your agent accessible via REST API or webhooks
</Card>
</Cards>

@@ -188,7 +188,7 @@ Build, test, and refine workflows quickly with immediate feedback

**Need detailed explanations?** Visit the [Blocks documentation](/blocks) for comprehensive guides on each component.

**Looking for integrations?** Explore the [Tools documentation](/tools) to see all 1,000+ available integrations.

**Ready to go live?** Learn about [Execution and Deployment](/execution) to make your workflows production-ready.

@@ -199,5 +199,5 @@ Build, test, and refine workflows quickly with immediate feedback

{ question: "Can I use a different AI model instead of GPT-4o?", answer: "Yes. The Agent block supports models from OpenAI, Anthropic, Google, Groq, Cerebras, DeepSeek, Mistral, xAI, and more. You can select any available model from the dropdown. If you self-host, you can also use local models through Ollama." },
{ question: "Can I import workflows from other tools?", answer: "Sim does not currently support importing workflows from other automation platforms. However, you can use the Copilot feature to describe what you want in natural language and have it build the workflow for you, which is often faster than manual recreation." },
{ question: "What if my workflow does not produce the expected output?", answer: "Use the Chat panel to test iteratively and inspect outputs from each block. You can click the dropdown to view different block outputs and pinpoint where the issue is. The execution logs (accessible from the Logs tab) show detailed information about each step including token usage, costs, and any errors." },
{ question: "Where do I go after completing this tutorial?", answer: "Explore the Blocks documentation to learn about Condition, Router, Function, and API blocks. Browse the Tools section to discover 1,000+ integrations you can add to your agents. When you are ready to deploy, check the Execution docs for REST API, webhook, and scheduled trigger options." },
]} />

@@ -6,7 +6,7 @@ import { Card, Cards } from 'fumadocs-ui/components/card'

# Sim Documentation

Welcome to Sim, the open-source AI workspace where teams build, deploy, and manage AI agents. Create agents visually with the workflow builder, conversationally through Mothership, or programmatically with the API — connected to 1,000+ integrations and every major LLM.

## Quick Start

@@ -15,13 +15,13 @@ Welcome to Sim, a visual workflow builder for AI applications. Build powerful AI
Learn what you can build with Sim
</Card>
<Card title="Getting Started" href="/getting-started">
Build your first agent in 10 minutes
</Card>
<Card title="Blocks" href="/blocks">
Learn about the building blocks
</Card>
<Card title="Tools & Integrations" href="/tools">
Explore 1,000+ integrations
</Card>
</Cards>

@@ -35,10 +35,10 @@ Welcome to Sim, a visual workflow builder for AI applications. Build powerful AI
Work with workflow and environment variables
</Card>
<Card title="Execution" href="/execution">
Monitor agent runs and manage costs
</Card>
<Card title="Triggers" href="/triggers">
Start agents via API, webhooks, or schedules
</Card>
</Cards>

@@ -51,7 +51,7 @@ Welcome to Sim, a visual workflow builder for AI applications. Build powerful AI
<Card title="MCP Integration" href="/mcp">
Connect external services with Model Context Protocol
</Card>
<Card title="SDKs" href="/api-reference">
Integrate Sim into your applications
</Card>
</Cards>
140
apps/docs/content/docs/en/integrations/index.mdx
Normal file
@@ -0,0 +1,140 @@
---
title: Integrations
description: Connect third-party services and OAuth accounts for your workflows
---

import { Callout } from 'fumadocs-ui/components/callout'
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'

Integrations are authenticated connections to third-party services like Gmail, Slack, GitHub, Dropbox, and more. Sim handles the OAuth flow, token storage, and automatic token refresh — you connect once and select the account in any block that needs it.

You can connect **multiple accounts per service** — for example, two separate Gmail accounts for different workflows.

## Managing Integrations

To manage integrations, open your workspace **Settings** and navigate to the **Integrations** tab.

<Image
src="/static/integrations/integrations-list.png"
alt="Integrations tab showing connected accounts with service icons, names, and Details/Disconnect buttons"
width={700}
height={500}
/>

The list shows all your connected accounts with the service icon, display name, and provider. Each entry has a **Details** button and a **Disconnect** button.

## Connecting an Account

Click **+ Connect** in the top right to open the connection modal.

<Image
src="/static/integrations/connect-service-picker.png"
alt="Connect Integration modal showing a searchable list of available services"
width={500}
height={400}
/>

Search for or select the service you want to connect, then fill in the connection details:

<Image
src="/static/integrations/connect-modal.png"
alt="Connect Gmail modal showing permissions requested, display name field, and description field"
width={500}
height={450}
/>

1. Review the **Permissions requested** — these are the scopes Sim will request from the provider
2. Enter a **Display name** to identify this connection (e.g. "Work Gmail" or "Marketing Slack")
3. Optionally add a **Description**
4. Click **Connect** and complete the authorization flow

## Using Integrations in Workflows

Blocks that require authentication (e.g. Gmail, Slack, Google Sheets) display a credential selector. Select the connected account you want that block to use.

<Image
src="/static/credentials/oauth-selector.png"
alt="Gmail block showing the account selector dropdown with connected accounts"
width={500}
height={350}
/>

You can also connect additional accounts directly from the block by selecting **Connect another [service] account** at the bottom of the dropdown.

<Callout type="info">
If a block requires an integration and none is selected, the workflow will fail at that step.
</Callout>

## Using a Credential ID

Each integration has a unique credential ID you can use to reference it dynamically. This is useful when you have multiple accounts for the same service and want to switch between them programmatically — for example, routing different workflow runs to different Gmail accounts based on a variable.

To copy a credential ID, open **Details** on any integration and click the clipboard icon next to the Display Name.

<Image
src="/static/integrations/copy-credential-id.png"
alt="Integration details showing the Copy credential ID tooltip on the clipboard icon next to the Display Name"
width={700}
height={150}
/>

In any block that requires an integration, click **Switch to manual ID** next to the credential selector to switch from the dropdown to a text field.

<Image
src="/static/integrations/switch-to-manual-id.png"
alt="Block showing the Switch to manual ID button next to the account selector"
width={500}
height={200}
/>

Paste or reference the credential ID in that field. You can use a `{{SECRET}}` reference or a block output variable to make it dynamic.

<Image
src="/static/integrations/manual-credential-id.png"
alt="Block showing the Enter credential ID text field after switching to manual mode"
width={500}
height={200}
/>
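Routing runs to different accounts then reduces to producing the right credential ID from a variable. A minimal sketch — the team names and credential IDs below are placeholders, not real values:

```typescript
// Hypothetical mapping from a workflow variable (team) to a Gmail credential ID.
const gmailCredentials: Record<string, string> = {
  support: 'cred_support_gmail',
  sales: 'cred_sales_gmail',
};

function credentialFor(team: string): string {
  const id = gmailCredentials[team];
  if (!id) throw new Error(`no Gmail credential mapped for team: ${team}`);
  // This value is what you would feed into the block's manual credential ID field.
  return id;
}

// credentialFor('sales') -> 'cred_sales_gmail'
```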

## Integration Details

Click **Details** on any integration to open its detail view.

<Image
src="/static/integrations/integration-details.png"
alt="Integration details view showing Display Name, Description, Members, Reconnect, and Disconnect"
width={700}
height={420}
/>

From here you can:

- Edit the **Display Name** and **Description**
- Manage **Members** — invite teammates by email and assign them an **Admin** or **Member** role
- **Reconnect** — re-authorize the connection if it has expired or if you need to update permissions
- **Disconnect** — remove the integration entirely

Click **Save** to apply changes, or **Back** to return to the list.

<Callout type="warn">
If you disconnect an integration that is used in a workflow, that workflow will fail at any block referencing it. Update blocks before disconnecting.
</Callout>

## Access Control

Each integration has role-based access:

- **Admin** — can view, edit, disconnect, reconnect, and manage member access
- **Member** — can use the integration in workflows (read-only)

When you connect an integration, you are automatically set as its Admin. You can share it with teammates from the Details view.

<FAQ items={[
|
||||
{ question: "Does Sim handle OAuth token refresh automatically?", answer: "Yes. When an integration is used during execution, Sim checks whether the access token has expired and automatically refreshes it using the stored refresh token before making the API call. You do not need to handle token refresh manually." },
|
||||
{ question: "Can I connect multiple accounts for the same service?", answer: "Yes. You can connect multiple accounts per service (for example, two separate Gmail accounts). Each block lets you select which account to use from the credential dropdown. This is useful when different workflows need different identities or permissions." },
|
||||
{ question: "What is a credential ID and when should I use it?", answer: "Each integration has a unique credential ID that you can use instead of the dropdown selector. This lets you pass the credential dynamically — for example, from a variable or a previous block's output — so the same workflow can use different accounts depending on the context. Copy the ID from the Details view and use Switch to manual ID in any block to paste or reference it." },
|
||||
{ question: "What happens if an OAuth token can no longer be refreshed?", answer: "If a refresh fails (e.g. the user revoked access or the refresh token expired), the workflow will fail at the block using that integration. Open Settings → Integrations, find the connection, and use the Reconnect button to re-authorize it." },
|
||||
{ question: "Are OAuth tokens encrypted at rest?", answer: "Yes. OAuth tokens are encrypted before being stored in the database and are never exposed in the workflow editor, logs, or API responses." },
|
||||
{ question: "What happens if I disconnect an integration that is used in a workflow?", answer: "Any block referencing the disconnected integration will fail at runtime. Make sure to update those blocks before disconnecting, or reconnect the integration to restore access." },
|
||||
]} />
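The refresh behavior described in the FAQ above can be sketched in TypeScript. This is an illustrative model only, not Sim's actual internals: the names `StoredCredential`, `refreshAccessToken`, and `ensureFreshToken` are hypothetical.

```typescript
// Hypothetical shape of a stored OAuth credential (not Sim's real schema).
interface StoredCredential {
  accessToken: string
  refreshToken: string
  expiresAt: number // epoch milliseconds
}

// Stand-in for a provider token-endpoint call; a real implementation would
// POST the refresh token to the provider's OAuth token endpoint.
async function refreshAccessToken(cred: StoredCredential): Promise<StoredCredential> {
  return { ...cred, accessToken: 'refreshed-token', expiresAt: Date.now() + 3_600_000 }
}

// Before each API call a block makes: reuse the token if it is still valid
// (with a safety margin so in-flight requests don't expire), else refresh it.
async function ensureFreshToken(
  cred: StoredCredential,
  marginMs = 60_000
): Promise<StoredCredential> {
  if (Date.now() + marginMs < cred.expiresAt) {
    return cred
  }
  return refreshAccessToken(cred)
}
```

The safety margin matters in practice: a token that is technically valid at dispatch time can expire mid-request, so refreshing slightly early avoids spurious 401s.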
|
||||
5 apps/docs/content/docs/en/integrations/meta.json Normal file
@@ -0,0 +1,5 @@
+{
+  "title": "Integrations",
+  "pages": ["index", "google-service-account"],
+  "defaultOpen": false
+}
@@ -8,7 +8,7 @@ import { Image } from '@/components/ui/image'
 import { Video } from '@/components/ui/video'
 import { FAQ } from '@/components/ui/faq'
 
-Sim is an open-source visual workflow builder for building and deploying AI agent workflows. Design intelligent automation systems using a no-code interface—connect AI models, databases, APIs, and business tools through an intuitive drag-and-drop canvas. Whether you're building chatbots, automating business processes, or orchestrating complex data pipelines, Sim provides the tools to bring your AI workflows to life.
+Sim is the open-source AI workspace where teams build, deploy, and manage AI agents. Create agents visually with the workflow builder, conversationally through Mothership, or programmatically with the API. Connect AI models, databases, APIs, and 1,000+ business tools to build agents that automate real work — from chatbots and compliance agents to data pipelines and ITSM automation.
 
 <div className="flex justify-center">
   <Image
@@ -40,8 +40,8 @@ Orchestrate complex multi-service interactions. Create unified API endpoints, im
 
 ## How It Works
 
-**Visual Workflow Editor**
-Design workflows using an intuitive drag-and-drop canvas. Connect AI models, databases, APIs, and third-party services through a visual, no-code interface that makes complex automation logic easy to understand and maintain.
+**Visual Workflow Builder**
+Design agent logic using an intuitive drag-and-drop canvas. Connect AI models, databases, APIs, and third-party services through a visual interface that makes complex automation easy to understand and maintain.
 
 **Modular Block System**
 Build with specialized components: processing blocks (AI agents, API calls, custom functions), logic blocks (conditional branching, loops, routers), and output blocks (responses, evaluators). Each block handles a specific task in your workflow.
@@ -58,7 +58,7 @@ Enable your team to build together. Multiple users can edit workflows simultaneo
 
 ## Integrations
 
-Sim provides native integrations with 160+ services across multiple categories:
+Sim provides native integrations with 1,000+ services across multiple categories:
 
 - **AI Models**: OpenAI, Anthropic, Google Gemini, Groq, Cerebras, local models via Ollama or VLLM
 - **Communication**: Gmail, Slack, Microsoft Teams, Telegram, WhatsApp
@@ -100,17 +100,17 @@ Deploy on your own infrastructure using Docker Compose or Kubernetes. Maintain c
 
 ## Next Steps
 
-Ready to build your first AI workflow?
+Ready to build your first AI agent?
 
 <Cards>
   <Card title="Getting Started" href="/getting-started">
-    Create your first workflow in 10 minutes
+    Build your first agent in 10 minutes
   </Card>
-  <Card title="Workflow Blocks" href="/blocks">
+  <Card title="Blocks" href="/blocks">
     Learn about the building blocks
   </Card>
   <Card title="Tools & Integrations" href="/tools">
-    Explore 160+ built-in integrations
+    Explore 1,000+ integrations
   </Card>
   <Card title="Team Permissions" href="/permissions/roles-and-permissions">
     Set up workspace roles and permissions
@@ -121,9 +121,9 @@ Ready to build your first AI workflow?
 { question: "Is Sim free to use?", answer: "Sim offers a free Community plan with 1,000 one-time credits to get started. Paid plans start at $25/month (Pro) with 5,000 credits and go up to $100/month (Max) with 20,000 credits. Annual billing is available at a 15% discount. You can also self-host Sim for free on your own infrastructure." },
 { question: "Is Sim open source?", answer: "Yes. Sim is open source under the Apache 2.0 license. The full source code is available on GitHub and you can self-host it, contribute to development, or modify it for your own needs. Enterprise features (SSO, access control) have a separate license that requires a subscription for production use." },
 { question: "Which AI models and providers are supported?", answer: "Sim supports 15+ providers including OpenAI, Anthropic, Google Gemini, Groq, Cerebras, DeepSeek, Mistral, xAI, and OpenRouter. You can also run local models through Ollama or VLLM at no API cost. Bring Your Own Key (BYOK) is supported so you can use your own API keys at base provider pricing with no markup." },
-{ question: "Do I need coding experience to use Sim?", answer: "No. Sim is a no-code visual builder where you design workflows by dragging blocks onto a canvas and connecting them. For advanced use cases, the Function block lets you write custom JavaScript, but it is entirely optional." },
+{ question: "Do I need coding experience to use Sim?", answer: "No. Sim lets you build agents visually by dragging blocks onto a canvas and connecting them, or conversationally through Mothership using natural language. For advanced use cases, the Function block lets you write custom JavaScript, and the full API/SDK is available for programmatic access." },
 { question: "Can I self-host Sim?", answer: "Yes. Sim provides Docker Compose configurations for self-hosted deployments. The stack includes the Sim application, a PostgreSQL database with pgvector, and a realtime collaboration server. You can also integrate local AI models via Ollama for a fully offline setup." },
 { question: "Is there a limit on how many workflows I can create?", answer: "There is no limit on the number of workflows you can create on any plan. Usage limits apply to execution credits, rate limits, and file storage, which vary by plan tier." },
-{ question: "What integrations are available?", answer: "Sim offers 160+ native integrations across categories including AI models, communication tools (Gmail, Slack, Teams, Telegram), productivity apps (Notion, Google Workspace, Airtable), development tools (GitHub, Jira, Linear), search services (Google Search, Perplexity, Exa), and databases (PostgreSQL, Supabase, Pinecone). For anything not built in, you can use the MCP (Model Context Protocol) support to connect custom services." },
-{ question: "How does Sim compare to other workflow automation tools?", answer: "Sim is purpose-built for AI agent workflows rather than general task automation. It provides a visual canvas for orchestrating LLM-powered agents with built-in support for tool use, structured outputs, conditional branching, and real-time collaboration. The Copilot feature also lets you build and modify workflows using natural language." },
+{ question: "What integrations are available?", answer: "Sim offers 1,000+ native integrations across categories including AI models, communication tools (Gmail, Slack, Teams, Telegram), productivity apps (Notion, Google Workspace, Airtable), development tools (GitHub, Jira, Linear), search services (Google Search, Perplexity, Exa), and databases (PostgreSQL, Supabase, Pinecone). For anything not built in, you can use the MCP (Model Context Protocol) support to connect custom services." },
+{ question: "How does Sim compare to other AI agent builders?", answer: "Sim is an AI workspace — not just a workflow tool or an agent framework. It combines a visual workflow builder, Mothership for natural-language agent creation, knowledge bases, tables, and full observability in one environment. Teams build agents visually, conversationally, or with code, then deploy and manage them with enterprise governance, real-time collaboration, and staging-to-production workflows." },
 ]} />