mirror of
https://github.com/simstudioai/sim.git
synced 2026-04-28 03:00:29 -04:00
Compare commits: 83 commits
@@ -16,17 +16,34 @@ User arguments: $ARGUMENTS

Read before analyzing:

1. https://react.dev/reference/react/useCallback — official docs on when useCallback is actually needed

## The one rule that matters

`useCallback` is only useful when **something observes the reference**. Ask: does anything care if this function gets a new identity on re-render?

Observers that care about reference stability:

- A `useEffect` that lists the function in its deps array
- A `useMemo` that lists the function in its deps array
- Another `useCallback` that lists the function in its deps array
- A child component wrapped in `React.memo` that receives the function as a prop

If none of those apply — if the function is only called inline, or passed to a non-memoized child, or assigned to a native element event — the reference is unobserved and `useCallback` adds overhead with zero benefit.
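The rule can be sketched without React. The minimal observer below re-runs only when a dependency changes by `===`, which is the same comparison `useEffect` and `useMemo` apply to their deps arrays (this is an illustrative stand-in, not React's implementation):

```typescript
// A minimal dependency-tracking observer: re-runs its callback only when
// some dep differs by reference (===) from the previous run.
function createMemo() {
  let lastDeps: unknown[] | null = null;
  let runs = 0;
  return {
    run(deps: unknown[], fn: () => void) {
      const changed = lastDeps === null || deps.some((d, i) => d !== lastDeps![i]);
      if (changed) {
        runs += 1;
        fn();
        lastDeps = deps;
      }
    },
    get runs() {
      return runs;
    },
  };
}

// Unstable identity: a fresh arrow function on every "render" re-triggers the observer.
const memo = createMemo();
for (let i = 0; i < 3; i++) {
  const handler = () => {};        // new reference each iteration
  memo.run([handler], () => {});
}
console.log(memo.runs); // 3

// Stable identity (what useCallback provides): the observer runs once.
const memo2 = createMemo();
const stable = () => {};
for (let i = 0; i < 3; i++) {
  memo2.run([stable], () => {});
}
console.log(memo2.runs); // 1
```

If nothing like `memo.run` ever receives the function, stabilizing its identity changes nothing observable.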
## Anti-patterns to detect

1. **No observer tracks the reference**: The function is only called inline in the same component, or passed to a non-memoized child, or used as a native element handler (`<button onClick={fn}>`). Nothing re-runs or bails out based on reference identity. Remove `useCallback`.
2. **useCallback with deps that change every render**: If a dep is a plain object/array created inline, or state that changes on every interaction, memoization buys nothing — the function gets a new identity anyway.
3. **useCallback on handlers passed only to native elements**: `<button onClick={fn}>` — React never does reference equality on native element props. No benefit.
4. **useCallback wrapping functions that return new objects/arrays**: Stable function identity, unstable return value — memoization is at the wrong level. Use `useMemo` on the return value instead, or restructure.
5. **useCallback with empty deps when deps are needed**: Stale closure — reads initial values forever. This is a correctness bug, not just a performance issue.
6. **Pairing useCallback + React.memo on trivially cheap renders**: If the child renders in < 1ms and re-renders rarely, the memo infrastructure costs more than it saves.
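Anti-pattern 5 can be sketched without React. The hypothetical `render` function below simulates one render per call; caching the first closure forever mimics `useCallback(fn, [])` when the callback actually depends on changing state:

```typescript
// Simulates useCallback(fn, []) across renders: the callback created on the
// first "render" is cached and returned on every subsequent render.
let cachedCallback: (() => number) | null = null;

function render(count: number): () => number {
  const readCount = () => count;                // closes over this render's count
  cachedCallback = cachedCallback ?? readCount; // empty deps: the first closure wins
  return cachedCallback;
}

const cb1 = render(0);
render(1);
const cb3 = render(2);

console.log(cb1()); // 0
console.log(cb3()); // 0 — still the render-0 closure: the stale closure bug
```

Every render hands back the render-0 closure, so the callback reads `count === 0` forever even though later renders saw 1 and 2.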
## Patterns that ARE correct — do not flag

- `useCallback` whose result is in a `useEffect` dep array — prevents the effect from re-running on every render
- `useCallback` whose result is in a `useMemo` dep array — prevents the memo from recomputing on every render
- `useCallback` whose result is a dep of another `useCallback` — stabilises a callback chain
- `useCallback` passed to a `React.memo`-wrapped child — the whole point of the pattern
- This codebase's ref pattern: `useRef` + callback with empty deps that reads the ref inside — correct, do not flag

## Steps
@@ -1,7 +1,10 @@

# Global Standards

## Logging

Import `createLogger` from `@sim/logger`. Use `logger.info`, `logger.warn`, `logger.error` instead of `console.log`. Inside API routes wrapped with `withRouteHandler`, loggers automatically include the request ID.
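A local stand-in can illustrate the module-scoped logger pattern the rule describes. The real `@sim/logger` API is not shown in this document, so the signature below is an assumption for illustration only:

```typescript
// Hypothetical sketch of a createLogger-style helper: one logger per module,
// with leveled methods replacing bare console.log calls. Not the real
// @sim/logger implementation.
type Level = 'info' | 'warn' | 'error';

function createLogger(module: string) {
  const format = (level: Level, msg: string) => `[${level.toUpperCase()}] [${module}] ${msg}`;
  return {
    info: (msg: string) => format('info', msg),
    warn: (msg: string) => format('warn', msg),
    error: (msg: string) => format('error', msg),
  };
}

const logger = createLogger('WorkflowRoute');
console.log(logger.info('execution started')); // [INFO] [WorkflowRoute] execution started
```

The module name baked in at creation time is what makes leveled loggers greppable in a way scattered `console.log` calls are not.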
## API Route Handlers

All API route handlers must be wrapped with `withRouteHandler` from `@/lib/core/utils/with-route-handler`. Never export a bare `async function GET/POST/...` — always use `export const METHOD = withRouteHandler(...)`.
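The wrapper shape the rule describes can be sketched as follows. The real `withRouteHandler` lives in `@/lib/core/utils/with-route-handler` and is not shown in this document; this toy version only illustrates the export convention and uses plain objects instead of real request/response types:

```typescript
// Hypothetical sketch: a higher-order function that wraps every route handler,
// so cross-cutting concerns (here, just error normalization) apply uniformly.
type RouteResult = { status: number; body: unknown };
type Handler = (req: { url: string }) => Promise<RouteResult> | RouteResult;

function withRouteHandler(handler: Handler) {
  return async (req: { url: string }): Promise<RouteResult> => {
    try {
      return await handler(req);
    } catch {
      // Uncaught handler errors become a uniform 500 instead of crashing the route.
      return { status: 500, body: { error: 'Internal error' } };
    }
  };
}

// ✓ Good: the wrapped form (in a real route file this would be `export const GET = ...`)
const GET = withRouteHandler(async () => ({ status: 200, body: { ok: true } }));
```

Because every exported method goes through the wrapper, behaviors like the request-ID logging mentioned above have a single place to hook in.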
## Comments

Use TSDoc for documentation. No `====` separators. No non-TSDoc comments.

@@ -10,7 +13,7 @@ Use TSDoc for documentation. No `====` separators. No non-TSDoc comments.

Never update global styles. Keep all styling local to components.

## ID Generation

Never use `crypto.randomUUID()`, `nanoid`, or the `uuid` package directly. Use the utilities from `@sim/utils/id`:

- `generateId()` — UUID v4, use by default
- `generateShortId(size?)` — short URL-safe ID (default 21 chars), for compact identifiers

@@ -24,11 +27,32 @@

```typescript
// ✗ Bad
import { v4 as uuidv4 } from 'uuid'
const id = crypto.randomUUID()

// ✓ Good
import { generateId, generateShortId } from '@sim/utils/id'
const uuid = generateId()
const shortId = generateShortId()
const tiny = generateShortId(8)
```

## Common Utilities

Use shared helpers from `@sim/utils` instead of writing inline implementations:

- `sleep(ms)` — async delay. Never write `new Promise(resolve => setTimeout(resolve, ms))`
- `toError(value)` — normalize unknown caught values to `Error`. Never write `e instanceof Error ? e : new Error(String(e))`
- `toError(value).message` — get error message safely. Never write `e instanceof Error ? e.message : String(e)`

```typescript
// ✗ Bad
await new Promise(resolve => setTimeout(resolve, 1000))
const msg = error instanceof Error ? error.message : String(error)
const err = error instanceof Error ? error : new Error(String(error))

// ✓ Good
import { sleep } from '@sim/utils/helpers'
import { toError } from '@sim/utils/errors'
await sleep(1000)
const msg = toError(error).message
const err = toError(error)
```

## Package Manager

Use `bun` and `bunx`, not `npm` and `npx`.
@@ -13,8 +13,12 @@ Use Vitest. Test files: `feature.ts` → `feature.test.ts`

These modules are mocked globally — do NOT re-mock them in test files unless you need to override behavior:

- `@sim/db` → `databaseMock`
- `@sim/db/schema` → `schemaMock`
- `drizzle-orm` → `drizzleOrmMock`
- `@sim/logger` → `loggerMock`
- `@/lib/auth` → `authMock`
- `@/lib/auth/hybrid` → `hybridAuthMock` (with default session-delegating behavior)
- `@/lib/core/utils/request` → `requestUtilsMock`
- `@/stores/console/store`, `@/stores/terminal`, `@/stores/execution/store`
- `@/blocks/registry`
- `@trigger.dev/sdk`
@@ -102,10 +106,6 @@

```typescript
vi.mock('@/lib/workspaces/utils', () => ({
  // ...
}))
```

### NEVER use `mockAuth()`, `mockConsoleLogger()`, or `setupCommonApiMocks()` from `@sim/testing`

These helpers internally use `vi.doMock()` which is slow. Use direct `vi.hoisted()` + `vi.mock()` instead.

### Mock heavy transitive dependencies

If a module under test imports `@/blocks` (200+ files), `@/tools/registry`, or other heavy modules, mock them:

@@ -135,83 +135,129 @@ await new Promise(r => setTimeout(r, 1))

```typescript
vi.useFakeTimers()
```
## Centralized Mocks (prefer over local declarations)

`@sim/testing` exports ready-to-use mock modules for common dependencies. Import and pass directly to `vi.mock()` — no `vi.hoisted()` boilerplate needed. Each paired `*MockFns` object exposes the underlying `vi.fn()`s for per-test overrides.

| Module mocked | Import | Factory form |
|---|---|---|
| `@/app/api/auth/oauth/utils` | `authOAuthUtilsMock`, `authOAuthUtilsMockFns` | `vi.mock('@/app/api/auth/oauth/utils', () => authOAuthUtilsMock)` |
| `@/app/api/knowledge/utils` | `knowledgeApiUtilsMock`, `knowledgeApiUtilsMockFns` | `vi.mock('@/app/api/knowledge/utils', () => knowledgeApiUtilsMock)` |
| `@/app/api/workflows/utils` | `workflowsApiUtilsMock`, `workflowsApiUtilsMockFns` | `vi.mock('@/app/api/workflows/utils', () => workflowsApiUtilsMock)` |
| `@sim/audit` | `auditMock`, `auditMockFns` | `vi.mock('@sim/audit', () => auditMock)` |
| `@/lib/auth` | `authMock`, `authMockFns` | `vi.mock('@/lib/auth', () => authMock)` |
| `@/lib/auth/hybrid` | `hybridAuthMock`, `hybridAuthMockFns` | `vi.mock('@/lib/auth/hybrid', () => hybridAuthMock)` |
| `@/lib/copilot/request/http` | `copilotHttpMock`, `copilotHttpMockFns` | `vi.mock('@/lib/copilot/request/http', () => copilotHttpMock)` |
| `@/lib/core/config/env` | `envMock`, `createEnvMock(overrides)` | `vi.mock('@/lib/core/config/env', () => envMock)` |
| `@/lib/core/config/feature-flags` | `featureFlagsMock` | `vi.mock('@/lib/core/config/feature-flags', () => featureFlagsMock)` |
| `@/lib/core/config/redis` | `redisConfigMock`, `redisConfigMockFns` | `vi.mock('@/lib/core/config/redis', () => redisConfigMock)` |
| `@/lib/core/security/encryption` | `encryptionMock`, `encryptionMockFns` | `vi.mock('@/lib/core/security/encryption', () => encryptionMock)` |
| `@/lib/core/security/input-validation.server` | `inputValidationMock`, `inputValidationMockFns` | `vi.mock('@/lib/core/security/input-validation.server', () => inputValidationMock)` |
| `@/lib/core/utils/request` | `requestUtilsMock`, `requestUtilsMockFns` | `vi.mock('@/lib/core/utils/request', () => requestUtilsMock)` |
| `@/lib/core/utils/urls` | `urlsMock`, `urlsMockFns` | `vi.mock('@/lib/core/utils/urls', () => urlsMock)` |
| `@/lib/execution/preprocessing` | `executionPreprocessingMock`, `executionPreprocessingMockFns` | `vi.mock('@/lib/execution/preprocessing', () => executionPreprocessingMock)` |
| `@/lib/logs/execution/logging-session` | `loggingSessionMock`, `loggingSessionMockFns`, `LoggingSessionMock` | `vi.mock('@/lib/logs/execution/logging-session', () => loggingSessionMock)` |
| `@/lib/workflows/orchestration` | `workflowsOrchestrationMock`, `workflowsOrchestrationMockFns` | `vi.mock('@/lib/workflows/orchestration', () => workflowsOrchestrationMock)` |
| `@/lib/workflows/persistence/utils` | `workflowsPersistenceUtilsMock`, `workflowsPersistenceUtilsMockFns` | `vi.mock('@/lib/workflows/persistence/utils', () => workflowsPersistenceUtilsMock)` |
| `@/lib/workflows/utils` | `workflowsUtilsMock`, `workflowsUtilsMockFns` | `vi.mock('@/lib/workflows/utils', () => workflowsUtilsMock)` |
| `@/lib/workspaces/permissions/utils` | `permissionsMock`, `permissionsMockFns` | `vi.mock('@/lib/workspaces/permissions/utils', () => permissionsMock)` |
| `@sim/db/schema` | `schemaMock` | `vi.mock('@sim/db/schema', () => schemaMock)` |
### Auth mocking (API routes)

```typescript
import { authMock, authMockFns } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'

vi.mock('@/lib/auth', () => authMock)

import { GET } from '@/app/api/my-route/route'

beforeEach(() => {
  vi.clearAllMocks()
  authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-1' } })
})
```

Only define a local `vi.mock('@/lib/auth', ...)` if the module under test consumes exports outside the centralized shape (e.g., `auth.api.verifyOneTimeToken`, `auth.api.resetPassword`).
### Hybrid auth mocking

```typescript
import { hybridAuthMock, hybridAuthMockFns } from '@sim/testing'

vi.mock('@/lib/auth/hybrid', () => hybridAuthMock)

// In tests:
hybridAuthMockFns.mockCheckSessionOrInternalAuth.mockResolvedValue({
  success: true, userId: 'user-1', authType: 'session',
})
```
### Database chain mocking

Use the centralized `dbChainMock` + `dbChainMockFns` helpers — no `vi.hoisted()` or chain-wiring boilerplate needed.

```typescript
import { dbChainMock, dbChainMockFns, resetDbChainMock } from '@sim/testing'

vi.mock('@sim/db', () => dbChainMock)
// Spread for custom exports: vi.mock('@sim/db', () => ({ ...dbChainMock, myTable: {...} }))

beforeEach(() => {
  vi.clearAllMocks()
  resetDbChainMock() // only needed if tests use permanent (non-`Once`) overrides
})

it('reads a row', async () => {
  dbChainMockFns.limit.mockResolvedValueOnce([{ id: '1', name: 'test' }])
  // exercise code that hits db.select().from().where().limit()
  expect(dbChainMockFns.where).toHaveBeenCalled()
})
```
**Default chains supported:**

- `select()/selectDistinct()/selectDistinctOn() → from() → where()/innerJoin()/leftJoin() → where() → limit()/orderBy()/returning()/groupBy()/for()`
- `insert() → values() → returning()/onConflictDoUpdate()/onConflictDoNothing()`
- `update() → set() → where() → limit()/orderBy()/returning()/for()`
- `delete() → where() → limit()/orderBy()/returning()/for()`
- `db.execute()` resolves `[]`
- `db.transaction(cb)` calls cb with `dbChainMock.db`

`.for('update')` (Postgres row-level locking) is supported on `where` builders. It returns a thenable with `.limit` / `.orderBy` / `.returning` / `.groupBy` attached, so both `await .where().for('update')` (terminal) and `await .where().for('update').limit(1)` (chained) work. Override the terminal result with `dbChainMockFns.for.mockResolvedValueOnce([...])`; for the chained form, mock the downstream terminal (e.g. `dbChainMockFns.limit.mockResolvedValueOnce([...])`).

All terminals default to `Promise.resolve([])`. Override per-test with `dbChainMockFns.<terminal>.mockResolvedValueOnce(...)`.

Use `resetDbChainMock()` in `beforeEach` only when tests replace wiring with `.mockReturnValue` / `.mockResolvedValue` (permanent). Tests using only `...Once` variants don't need it.
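The thenable-with-chain-methods trick behind `.for('update')` can be sketched in isolation. The names below are illustrative, not the actual `@sim/testing` internals:

```typescript
// An object that is awaitable directly (terminal form) but also carries chain
// methods (chained form), mimicking how the `.for('update')` mock supports
// both `await .where().for('update')` and `await .where().for('update').limit(1)`.
function forResult(
  terminal: Array<{ id: string }>,
  limitResult: Array<{ id: string }>
) {
  return {
    // Having a callable `then` makes `await` treat this object like a promise.
    then(resolve: (rows: Array<{ id: string }>) => void) {
      resolve(terminal);
    },
    // Chain methods return their own promises, so chaining keeps working.
    limit: async (_n: number) => limitResult,
  };
}

const r = forResult([{ id: 'a' }], [{ id: 'b' }]);

(async () => {
  const rows = await r;             // terminal form: resolves via then()
  const limited = await r.limit(1); // chained form: downstream terminal wins
  console.log(rows[0].id, limited[0].id);
})();
```

Because `await` only cares that the object has a `then` method, the same value can serve as both the end of a chain and a link in it.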
## @sim/testing Package

Always prefer over local test data.

| Category | Utilities |
|----------|-----------|
| **Mocks** | `loggerMock`, `databaseMock`, `drizzleOrmMock`, `setupGlobalFetchMock()` |
| **Module mocks** | See "Centralized Mocks" table above |
| **Logger helpers** | `loggerMock`, `createMockLogger()`, `getLoggerCalls()`, `clearLoggerMocks()` |
| **Database helpers** | `databaseMock`, `drizzleOrmMock`, `createMockDb()`, `createMockSql()`, `createMockSqlOperators()` |
| **Fetch helpers** | `setupGlobalFetchMock()`, `createMockFetch()`, `createMockResponse()`, `mockFetchError()` |
| **Factories** | `createSession()`, `createWorkflowRecord()`, `createBlock()`, `createExecutionContext()` |
| **Builders** | `WorkflowBuilder`, `ExecutionContextBuilder` |
| **Assertions** | `expectWorkflowAccessGranted()`, `expectBlockExecuted()` |
| **Requests** | `createMockRequest()`, `createMockFormDataRequest()` |
## Rules Summary

1. `@vitest-environment node` unless DOM is required
2. Prefer centralized mocks from `@sim/testing` (see table above) over local `vi.hoisted()` + `vi.mock()` boilerplate
3. `vi.hoisted()` + `vi.mock()` + static imports — never `vi.resetModules()` + `vi.doMock()` + dynamic imports
4. `vi.mock()` calls before importing mocked modules
5. `beforeEach(() => vi.clearAllMocks())` to reset state — no redundant `afterEach`
6. No `vi.importActual()` — mock everything explicitly
7. Mock heavy deps (`@/blocks`, `@/tools/registry`, `@/triggers`) in tests that don't need them
8. Use absolute imports in test files
9. Avoid real timers — use 1ms delays or `vi.useFakeTimers()`
.cursor/rules/sim-sandbox.mdc (new file, 85 lines)

@@ -0,0 +1,85 @@

---
description: Isolated-vm sandbox worker security policy. Hard rules for anything that lives in the worker child process that runs user code.
globs: ["apps/sim/lib/execution/isolated-vm-worker.cjs", "apps/sim/lib/execution/isolated-vm.ts", "apps/sim/lib/execution/sandbox/**", "apps/sim/sandbox-tasks/**"]
---
# Sim Sandbox — Worker Security Policy

The isolated-vm worker child process at `apps/sim/lib/execution/isolated-vm-worker.cjs` runs untrusted user code inside V8 isolates. The process itself is a trust boundary. Everything in this rule is about what must **never** live in that process.
## Hard rules

1. **No app credentials in the worker process**. The worker must not hold, load, or receive via IPC: database URLs, Redis URLs, AWS keys, Stripe keys, session-signing keys, encryption keys, OAuth client secrets, internal API secrets, or any LLM / email / search provider API keys. If you catch yourself `require`'ing `@/lib/auth`, `@sim/db`, `@/lib/uploads/core/storage-service`, or anything that imports `env` directly inside the worker, stop and use a host-side broker instead.

2. **Host-side brokers own all credentialed work**. The worker can only access resources through `ivm.Reference` / `ivm.Callback` bridges back to the host process. Today the only broker is `workspaceFileBroker` (`apps/sim/lib/execution/sandbox/brokers/workspace-file.ts`); adding a new one requires co-reviewing this file.

3. **Host-side brokers must scope every resource access to a single tenant**. The `SandboxBrokerContext` always carries `workspaceId`. Any new broker that accesses storage, DB, or an external API must use `ctx.workspaceId` to scope the lookup — never accept a raw path, key, or URL from isolate code without validation.

4. **Nothing that runs in the isolate is trusted, even if we wrote it**. The task `bootstrap` and `finalize` strings in `apps/sim/sandbox-tasks/` execute inside the isolate. They must treat `globalThis` as adversarial — no pulling values from it that might have been mutated by user code. The hardening script in `executeTask` undefines dangerous globals before user code runs.
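Rule 3's tenant scoping can be sketched as follows. The broker shape and helper names here are illustrative stand-ins, not the actual `workspaceFileBroker` code:

```typescript
// Illustrative broker: every lookup is namespaced by ctx.workspaceId, and the
// isolate-supplied name is validated before touching storage.
interface SandboxBrokerContext {
  workspaceId: string; // the field the real SandboxBrokerContext always carries
}

// Hypothetical storage stand-in keyed by workspace-scoped paths.
const storage = new Map<string, string>([
  ['ws-1/readme.txt', 'tenant one file'],
  ['ws-2/readme.txt', 'tenant two file'],
]);

function readWorkspaceFile(ctx: SandboxBrokerContext, name: string): string | null {
  // Reject raw paths from isolate code: no separators, no traversal sequences.
  if (name.includes('/') || name.includes('\\') || name.includes('..')) {
    return null;
  }
  // Scope the lookup to the calling tenant — never trust an isolate-supplied key alone.
  return storage.get(`${ctx.workspaceId}/${name}`) ?? null;
}

console.log(readWorkspaceFile({ workspaceId: 'ws-1' }, 'readme.txt'));         // tenant one file
console.log(readWorkspaceFile({ workspaceId: 'ws-1' }, '../ws-2/readme.txt')); // null
```

The tenant prefix comes from host-controlled context, so even a compromised isolate can only name files inside its own workspace.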
## Why

A V8 JIT bug (Chrome ships these roughly monthly) gives an attacker a native code primitive inside the process that owns whatever that process can reach. If the worker only holds `isolated-vm` + a single narrow workspace-file broker, a V8 escape leaks one tenant's files. If the worker holds a Stripe key or a DB connection, a V8 escape leaks the service.

The original `doc-worker.cjs` vulnerability (CVE-class, 225 production secrets leaked via `/proc/1/environ`) was the forcing function for this architecture. Keep the blast radius small.
## Checklist for changes to `isolated-vm-worker.cjs`

Before landing any change that adds a new `require(...)` or `process.send(...)` payload or `ivm.Reference` wrapper in the worker:

- [ ] Does it load a credential, key, connection string, or secret? If yes, move it host-side and expose as a broker.
- [ ] Does it import from `@/lib/auth`, `@sim/db`, `@/lib/uploads/core/*`, `@/lib/core/config/env`, or any module that reads `process.env` of the main app? If yes, same — move host-side.
- [ ] Does it expose a resource that's workspace-scoped without taking a `workspaceId`? If yes, re-scope.
- [ ] Did you update the broker limits (`IVM_MAX_BROKER_ARGS_JSON_CHARS`, `IVM_MAX_BROKER_RESULT_JSON_CHARS`, `IVM_MAX_BROKERS_PER_EXECUTION`) if the new broker can emit large payloads or fire frequently?
## What the worker *may* hold

- `isolated-vm` module
- Node built-ins: `node:fs` (only for reading the checked-in bundle `.cjs` files) and `node:path`
- The three prebuilt library bundles under `apps/sim/lib/execution/sandbox/bundles/*.cjs`
- IPC message handlers for `execute`, `cancel`, `fetchResponse`, `brokerResponse`

The worker deliberately has **no host-side logger**. All errors and diagnostics flow through IPC back to the host, which has `@sim/logger`. Do not add `createLogger` or console-based logging to the worker — it would require pulling the main app's config / env, which is exactly what this rule is preventing.

Anything else is suspect.
@@ -3,6 +3,7 @@ description: Testing patterns with Vitest and @sim/testing
|
||||
globs: ["apps/sim/**/*.test.ts", "apps/sim/**/*.test.tsx"]
|
||||
---
|
||||
|
||||
|
||||
# Testing Patterns
|
||||
|
||||
Use Vitest. Test files: `feature.ts` → `feature.test.ts`
|
||||
@@ -12,8 +13,12 @@ Use Vitest. Test files: `feature.ts` → `feature.test.ts`
|
||||
These modules are mocked globally — do NOT re-mock them in test files unless you need to override behavior:
|
||||
|
||||
- `@sim/db` → `databaseMock`
|
||||
- `@sim/db/schema` → `schemaMock`
|
||||
- `drizzle-orm` → `drizzleOrmMock`
|
||||
- `@sim/logger` → `loggerMock`
|
||||
- `@/lib/auth` → `authMock`
|
||||
- `@/lib/auth/hybrid` → `hybridAuthMock` (with default session-delegating behavior)
|
||||
- `@/lib/core/utils/request` → `requestUtilsMock`
|
||||
- `@/stores/console/store`, `@/stores/terminal`, `@/stores/execution/store`
|
||||
- `@/blocks/registry`
|
||||
- `@trigger.dev/sdk`
|
||||
@@ -101,10 +106,6 @@ vi.mock('@/lib/workspaces/utils', () => ({
|
||||
}))
|
||||
```
|
||||
|
||||
### NEVER use `mockAuth()`, `mockConsoleLogger()`, or `setupCommonApiMocks()` from `@sim/testing`
|
||||
|
||||
These helpers internally use `vi.doMock()` which is slow. Use direct `vi.hoisted()` + `vi.mock()` instead.
|
||||
|
||||
### Mock heavy transitive dependencies
|
||||
|
||||
If a module under test imports `@/blocks` (200+ files), `@/tools/registry`, or other heavy modules, mock them:
|
||||
@@ -134,38 +135,61 @@ await new Promise(r => setTimeout(r, 1))
|
||||
vi.useFakeTimers()
|
||||
```
|
||||
|
||||
## Mock Pattern Reference
|
||||
## Centralized Mocks (prefer over local declarations)
|
||||
|
||||
`@sim/testing` exports ready-to-use mock modules for common dependencies. Import and pass directly to `vi.mock()` — no `vi.hoisted()` boilerplate needed. Each paired `*MockFns` object exposes the underlying `vi.fn()`s for per-test overrides.
|
||||
|
||||
| Module mocked | Import | Factory form |
|
||||
|---|---|---|
|
||||
| `@/app/api/auth/oauth/utils` | `authOAuthUtilsMock`, `authOAuthUtilsMockFns` | `vi.mock('@/app/api/auth/oauth/utils', () => authOAuthUtilsMock)` |
|
||||
| `@/app/api/knowledge/utils` | `knowledgeApiUtilsMock`, `knowledgeApiUtilsMockFns` | `vi.mock('@/app/api/knowledge/utils', () => knowledgeApiUtilsMock)` |
|
||||
| `@/app/api/workflows/utils` | `workflowsApiUtilsMock`, `workflowsApiUtilsMockFns` | `vi.mock('@/app/api/workflows/utils', () => workflowsApiUtilsMock)` |
|
||||
| `@sim/audit` | `auditMock`, `auditMockFns` | `vi.mock('@sim/audit', () => auditMock)` |
|
||||
| `@/lib/auth` | `authMock`, `authMockFns` | `vi.mock('@/lib/auth', () => authMock)` |
|
||||
| `@/lib/auth/hybrid` | `hybridAuthMock`, `hybridAuthMockFns` | `vi.mock('@/lib/auth/hybrid', () => hybridAuthMock)` |
|
||||
| `@/lib/copilot/request/http` | `copilotHttpMock`, `copilotHttpMockFns` | `vi.mock('@/lib/copilot/request/http', () => copilotHttpMock)` |
|
||||
| `@/lib/core/config/env` | `envMock`, `createEnvMock(overrides)` | `vi.mock('@/lib/core/config/env', () => envMock)` |
| `@/lib/core/config/feature-flags` | `featureFlagsMock` | `vi.mock('@/lib/core/config/feature-flags', () => featureFlagsMock)` |
| `@/lib/core/config/redis` | `redisConfigMock`, `redisConfigMockFns` | `vi.mock('@/lib/core/config/redis', () => redisConfigMock)` |
| `@/lib/core/security/encryption` | `encryptionMock`, `encryptionMockFns` | `vi.mock('@/lib/core/security/encryption', () => encryptionMock)` |
| `@/lib/core/security/input-validation.server` | `inputValidationMock`, `inputValidationMockFns` | `vi.mock('@/lib/core/security/input-validation.server', () => inputValidationMock)` |
| `@/lib/core/utils/request` | `requestUtilsMock`, `requestUtilsMockFns` | `vi.mock('@/lib/core/utils/request', () => requestUtilsMock)` |
| `@/lib/core/utils/urls` | `urlsMock`, `urlsMockFns` | `vi.mock('@/lib/core/utils/urls', () => urlsMock)` |
| `@/lib/execution/preprocessing` | `executionPreprocessingMock`, `executionPreprocessingMockFns` | `vi.mock('@/lib/execution/preprocessing', () => executionPreprocessingMock)` |
| `@/lib/logs/execution/logging-session` | `loggingSessionMock`, `loggingSessionMockFns`, `LoggingSessionMock` | `vi.mock('@/lib/logs/execution/logging-session', () => loggingSessionMock)` |
| `@/lib/workflows/orchestration` | `workflowsOrchestrationMock`, `workflowsOrchestrationMockFns` | `vi.mock('@/lib/workflows/orchestration', () => workflowsOrchestrationMock)` |
| `@/lib/workflows/persistence/utils` | `workflowsPersistenceUtilsMock`, `workflowsPersistenceUtilsMockFns` | `vi.mock('@/lib/workflows/persistence/utils', () => workflowsPersistenceUtilsMock)` |
| `@/lib/workflows/utils` | `workflowsUtilsMock`, `workflowsUtilsMockFns` | `vi.mock('@/lib/workflows/utils', () => workflowsUtilsMock)` |
| `@/lib/workspaces/permissions/utils` | `permissionsMock`, `permissionsMockFns` | `vi.mock('@/lib/workspaces/permissions/utils', () => permissionsMock)` |
| `@sim/db/schema` | `schemaMock` | `vi.mock('@sim/db/schema', () => schemaMock)` |
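
All the factory-style entries in the table (for example `createEnvMock(overrides)`) follow the same defaults-plus-overrides shape. A minimal stand-in sketch of that shape; the default values below are assumptions for illustration, not the real `@sim/testing` code:

```typescript
// Illustrative sketch of the factory pattern only; not the actual @sim/testing
// implementation. The default values here are assumed for the example.
type EnvShape = Record<string, string>

const defaultEnv: EnvShape = {
  NODE_ENV: 'test',
  DATABASE_URL: 'postgres://localhost:5432/sim_test',
}

// Merge caller overrides on top of the shared defaults.
function createEnvMock(overrides: Partial<EnvShape> = {}) {
  return { env: { ...defaultEnv, ...overrides } }
}

const envMock = createEnvMock({ DATABASE_URL: 'postgres://ci-host:5432/sim' })
```

A test then hands the resulting object straight to `vi.mock('@/lib/core/config/env', () => envMock)`.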

### Auth mocking (API routes)

```typescript
const { mockGetSession } = vi.hoisted(() => ({
  mockGetSession: vi.fn(),
}))
import { authMock, authMockFns } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'

vi.mock('@/lib/auth', () => ({
  auth: { api: { getSession: vi.fn() } },
  getSession: mockGetSession,
}))
vi.mock('@/lib/auth', () => authMock)

// In tests:
mockGetSession.mockResolvedValue({ user: { id: 'user-1', email: 'test@example.com' } })
mockGetSession.mockResolvedValue(null) // unauthenticated
import { GET } from '@/app/api/my-route/route'

beforeEach(() => {
  vi.clearAllMocks()
  authMockFns.mockGetSession.mockResolvedValue({ user: { id: 'user-1' } })
})
```

Only define a local `vi.mock('@/lib/auth', ...)` if the module under test consumes exports outside the centralized shape (e.g., `auth.api.verifyOneTimeToken`, `auth.api.resetPassword`).

### Hybrid auth mocking

```typescript
const { mockCheckSessionOrInternalAuth } = vi.hoisted(() => ({
  mockCheckSessionOrInternalAuth: vi.fn(),
}))
import { hybridAuthMock, hybridAuthMockFns } from '@sim/testing'

vi.mock('@/lib/auth/hybrid', () => ({
  checkSessionOrInternalAuth: mockCheckSessionOrInternalAuth,
}))
vi.mock('@/lib/auth/hybrid', () => hybridAuthMock)

// In tests:
mockCheckSessionOrInternalAuth.mockResolvedValue({
hybridAuthMockFns.mockCheckSessionOrInternalAuth.mockResolvedValue({
  success: true, userId: 'user-1', authType: 'session',
})
```

@@ -196,21 +220,23 @@ Always prefer over local test data.

| Category | Utilities |
|----------|-----------|
| **Mocks** | `loggerMock`, `databaseMock`, `drizzleOrmMock`, `setupGlobalFetchMock()` |
| **Module mocks** | See "Centralized Mocks" table above |
| **Logger helpers** | `loggerMock`, `createMockLogger()`, `getLoggerCalls()`, `clearLoggerMocks()` |
| **Database helpers** | `databaseMock`, `drizzleOrmMock`, `createMockDb()`, `createMockSql()`, `createMockSqlOperators()` |
| **Fetch helpers** | `setupGlobalFetchMock()`, `createMockFetch()`, `createMockResponse()`, `mockFetchError()` |
| **Factories** | `createSession()`, `createWorkflowRecord()`, `createBlock()`, `createExecutionContext()` |
| **Builders** | `WorkflowBuilder`, `ExecutionContextBuilder` |
| **Assertions** | `expectWorkflowAccessGranted()`, `expectBlockExecuted()` |
| **Requests** | `createMockRequest()`, `createEnvMock()` |
| **Requests** | `createMockRequest()`, `createMockFormDataRequest()` |

## Rules Summary

1. `@vitest-environment node` unless DOM is required
2. `vi.hoisted()` + `vi.mock()` + static imports — never `vi.resetModules()` + `vi.doMock()` + dynamic imports
3. `vi.mock()` calls before importing mocked modules
4. `@sim/testing` utilities over local mocks
2. Prefer centralized mocks from `@sim/testing` (see table above) over local `vi.hoisted()` + `vi.mock()` boilerplate
3. `vi.hoisted()` + `vi.mock()` + static imports — never `vi.resetModules()` + `vi.doMock()` + dynamic imports
4. `vi.mock()` calls before importing mocked modules
5. `beforeEach(() => vi.clearAllMocks())` to reset state — no redundant `afterEach`
6. No `vi.importActual()` — mock everything explicitly
7. No `mockAuth()`, `mockConsoleLogger()`, `setupCommonApiMocks()` — use direct mocks
8. Mock heavy deps (`@/blocks`, `@/tools/registry`, `@/triggers`) in tests that don't need them
9. Use absolute imports in test files
10. Avoid real timers — use 1ms delays or `vi.useFakeTimers()`
7. Mock heavy deps (`@/blocks`, `@/tools/registry`, `@/triggers`) in tests that don't need them
8. Use absolute imports in test files
9. Avoid real timers — use 1ms delays or `vi.useFakeTimers()`

@@ -1,4 +1,4 @@
FROM oven/bun:1.3.11-alpine
FROM oven/bun:1.3.13-alpine

# Install necessary packages for development
RUN apk add --no-cache \

@@ -71,7 +71,7 @@ fi

# Set up environment variables if .env doesn't exist for the sim app
if [ ! -f "apps/sim/.env" ]; then
  echo "📄 Creating .env file from template..."
  echo "📄 Creating apps/sim/.env from template..."
  if [ -f "apps/sim/.env.example" ]; then
    cp apps/sim/.env.example apps/sim/.env
  else
@@ -79,6 +79,18 @@ if [ ! -f "apps/sim/.env" ]; then
  fi
fi

# Set up env for the realtime server (must match the shared values in apps/sim/.env)
if [ ! -f "apps/realtime/.env" ] && [ -f "apps/realtime/.env.example" ]; then
  echo "📄 Creating apps/realtime/.env from template..."
  cp apps/realtime/.env.example apps/realtime/.env
fi

# Set up packages/db/.env for drizzle-kit and migration scripts
if [ ! -f "packages/db/.env" ] && [ -f "packages/db/.env.example" ]; then
  echo "📄 Creating packages/db/.env from template..."
  cp packages/db/.env.example packages/db/.env
fi

# Generate schema and run database migrations
echo "🗃️ Running database schema generation and migrations..."
echo "Generating schema..."

.github/CONTRIBUTING.md (vendored, 259 changes)

@@ -2,8 +2,15 @@

Thank you for your interest in contributing to Sim! Our goal is to provide developers with a powerful, user-friendly platform for building, testing, and optimizing agentic workflows. We welcome contributions in all forms—from bug fixes and design improvements to brand-new features.

> **Project Overview:**
> Sim is a monorepo using Turborepo, containing the main application (`apps/sim/`), documentation (`apps/docs/`), and shared packages (`packages/`). The main application is built with Next.js (app router), ReactFlow, Zustand, Shadcn, and Tailwind CSS. Please ensure your contributions follow our best practices for clarity, maintainability, and consistency.
> **Project Overview:**
> Sim is a Turborepo monorepo with two deployable apps and a set of shared packages:
>
> - `apps/sim/` — the main Next.js application (App Router, ReactFlow, Zustand, Shadcn, Tailwind CSS).
> - `apps/realtime/` — a small Bun + Socket.IO server that powers the collaborative canvas. Shares DB and Better Auth secrets with `apps/sim` via `@sim/*` packages.
> - `apps/docs/` — Fumadocs-based documentation site.
> - `packages/` — shared workspace packages (`@sim/db`, `@sim/auth`, `@sim/audit`, `@sim/workflow-types`, `@sim/workflow-persistence`, `@sim/workflow-authz`, `@sim/realtime-protocol`, `@sim/security`, `@sim/logger`, `@sim/utils`, `@sim/testing`, `@sim/tsconfig`).
>
> Strict one-way dependency flow: `apps/* → packages/*`. Packages never import from apps. Please ensure your contributions follow this and our best practices for clarity, maintainability, and consistency.

---

@@ -24,14 +31,17 @@ Thank you for your interest in contributing to Sim! Our goal is to provide devel

We strive to keep our workflow as simple as possible. To contribute:

1. **Fork the Repository**
   Click the **Fork** button on GitHub to create your own copy of the project.

2. **Clone Your Fork**

   ```bash
   git clone https://github.com/<your-username>/sim.git
   cd sim
   ```

3. **Create a Feature Branch**
   Create a new branch with a descriptive name:

   ```bash

@@ -40,21 +50,23 @@ We strive to keep our workflow as simple as possible. To contribute:

   Use a clear naming convention to indicate the type of work (e.g., `feat/`, `fix/`, `docs/`).

4. **Make Your Changes**
   Ensure your changes are small, focused, and adhere to our coding guidelines.

5. **Commit Your Changes**
   Write clear, descriptive commit messages that follow the [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/#specification) specification. This allows us to maintain a coherent project history and generate changelogs automatically. For example:

   - `feat(api): add new endpoint for user authentication`
   - `fix(ui): resolve button alignment issue`
   - `docs: update contribution guidelines`
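
The Conventional Commits format above lends itself to a quick local check. The helper below is a sketch, and its accepted type list is an assumption rather than the project's enforced set:

```shell
# Hypothetical helper, not part of the repo: returns success when a commit
# message matches the Conventional Commits shape `type(scope): subject`.
is_conventional() {
  printf '%s\n' "$1" | grep -Eq '^(feat|fix|docs|chore|refactor|test|perf|build|ci|style)(\([a-z0-9-]+\))?!?: .+'
}
```

For example, `is_conventional 'feat(api): add new endpoint'` succeeds, while a message with no `type:` prefix fails.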

6. **Push Your Branch**

   ```bash
   git push origin feat/your-feature-name
   ```

7. **Create a Pull Request**
   Open a pull request against the `staging` branch on GitHub. Please provide a clear description of the changes and reference any relevant issues (e.g., `fixes #123`).

---

@@ -65,7 +77,7 @@ If you discover a bug or have a feature request, please open an issue in our Git

- Provide a clear, descriptive title.
- Include as many details as possible (steps to reproduce, screenshots, etc.).
- **Tag Your Issue Appropriately:**
  Use the following labels to help us categorize your issue:
  - **active:** Actively working on it right now.
  - **bug:** Something isn't working.

@@ -82,12 +94,11 @@ If you discover a bug or have a feature request, please open an issue in our Git

Before creating a pull request:

- **Ensure Your Branch Is Up-to-Date:**
  Rebase your branch onto the latest `staging` branch to prevent merge conflicts.
- **Follow the Guidelines:**
  Make sure your changes are well-tested, follow our coding standards, and include relevant documentation if necessary.

- **Reference Issues:**
  If your PR addresses an existing issue, include `refs #<issue-number>` or `fixes #<issue-number>` in your PR description.

Our maintainers will review your pull request and provide feedback. We aim to make the review process as smooth and timely as possible.

@@ -166,27 +177,27 @@ To use local models with Sim:

1. Install Ollama and pull models:

   ```bash
   # Install Ollama (if not already installed)
   curl -fsSL https://ollama.ai/install.sh | sh

   # Pull a model (e.g., gemma3:4b)
   ollama pull gemma3:4b
   ```

2. Start Sim with local model support:

   ```bash
   # With NVIDIA GPU support
   docker compose --profile local-gpu -f docker-compose.ollama.yml up -d

   # Without GPU (CPU only)
   docker compose --profile local-cpu -f docker-compose.ollama.yml up -d

   # If hosting on a server, update the environment variables in the docker-compose.prod.yml file
   # to include the server's public IP then start again (OLLAMA_URL to i.e. http://1.1.1.1:11434)
   docker compose -f docker-compose.prod.yml up -d
   ```

### Option 3: Using VS Code / Cursor Dev Containers

@@ -201,61 +212,104 @@ Dev Containers provide a consistent and easy-to-use development environment:

2. **Setup Steps:**

   - Clone the repository:

     ```bash
     git clone https://github.com/<your-username>/sim.git
     cd sim
     ```
   - Open the project in VS Code/Cursor
   - When prompted, click "Reopen in Container" (or press F1 and select "Remote-Containers: Reopen in Container")
   - Wait for the container to build and initialize
   - Open the project in VS Code/Cursor.
   - When prompted, click "Reopen in Container" (or press F1 and select "Remote-Containers: Reopen in Container").
   - Wait for the container to build and initialize.

3. **Start Developing:**

   - Run `bun run dev:full` in the terminal or use the `sim-start` alias
   - This starts both the main application and the realtime socket server
   - All dependencies and configurations are automatically set up
   - Your changes will be automatically hot-reloaded
   - Run `bun run dev:full` in the terminal or use the `sim-start` alias.
   - This starts both the main application and the realtime socket server.
   - All dependencies and configurations are automatically set up.
   - Your changes will be automatically hot-reloaded.

4. **GitHub Codespaces:**
   - This setup also works with GitHub Codespaces if you prefer development in the browser
   - Just click "Code" → "Codespaces" → "Create codespace on staging"
   - This setup also works with GitHub Codespaces if you prefer development in the browser.
   - Just click "Code" → "Codespaces" → "Create codespace on staging".

### Option 4: Manual Setup

If you prefer not to use Docker or Dev Containers:
If you prefer not to use Docker or Dev Containers. **All commands run from the repository root unless explicitly noted.**

1. **Clone and Install:**

1. **Clone the Repository:**
   ```bash
   git clone https://github.com/<your-username>/sim.git
   cd sim
   bun install
   ```

2. **Set Up Environment:**
   Bun workspaces handle dependency resolution for all apps and packages from the root `bun install`.

   - Navigate to the app directory:
     ```bash
     cd apps/sim
     ```
   - Copy `.env.example` to `.env`
   - Configure required variables (DATABASE_URL, BETTER_AUTH_SECRET, BETTER_AUTH_URL)
2. **Set Up Environment Files:**

3. **Set Up Database:**
   We use **per-app `.env` files** (the Turborepo-canonical pattern), not a single root `.env`. Three files are needed for local dev:

   ```bash
   bunx drizzle-kit push
   # Main app — large, app-specific (OAuth secrets, LLM keys, Stripe, etc.)
   cp apps/sim/.env.example apps/sim/.env

   # Realtime server — small, only the values shared with the main app
   cp apps/realtime/.env.example apps/realtime/.env

   # DB tooling (drizzle-kit, db:migrate)
   cp packages/db/.env.example packages/db/.env
   ```

4. **Run the Development Server:**
   At minimum, each `.env` needs `DATABASE_URL`. `apps/sim/.env` and `apps/realtime/.env` additionally need matching values for `BETTER_AUTH_URL`, `BETTER_AUTH_SECRET`, `INTERNAL_API_SECRET`, and `NEXT_PUBLIC_APP_URL`. `apps/sim/.env` also needs `ENCRYPTION_KEY` and `API_ENCRYPTION_KEY`. Generate any 32-char secrets with `openssl rand -hex 32`.

   The same `BETTER_AUTH_SECRET`, `INTERNAL_API_SECRET`, and `DATABASE_URL` must appear in both `apps/sim/.env` and `apps/realtime/.env` so the two services share auth and DB. After editing `apps/sim/.env`, you can mirror the shared subset into the realtime env in one shot:

   ```bash
   grep -E '^(DATABASE_URL|BETTER_AUTH_URL|BETTER_AUTH_SECRET|INTERNAL_API_SECRET|NEXT_PUBLIC_APP_URL|REDIS_URL)=' apps/sim/.env > apps/realtime/.env
   grep -E '^DATABASE_URL=' apps/sim/.env > packages/db/.env
   ```
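
The matching-values requirement described above can also be verified with a small helper. This function is an illustrative sketch and not something that ships in the repository:

```shell
# Sketch: fail if any of the shared keys differ between the two env files.
check_shared_env() {
  sim="$1"
  realtime="$2"
  for key in DATABASE_URL BETTER_AUTH_URL BETTER_AUTH_SECRET INTERNAL_API_SECRET NEXT_PUBLIC_APP_URL; do
    a=$(grep "^${key}=" "$sim" | head -n 1)
    b=$(grep "^${key}=" "$realtime" | head -n 1)
    # Compare the full KEY=value line from each file.
    [ "$a" = "$b" ] || { echo "mismatch: $key"; return 1; }
  done
  echo "shared env values match"
}
```

Run it as `check_shared_env apps/sim/.env apps/realtime/.env` after editing either file.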

3. **Run Database Migrations:**

   Migrations live in `packages/db/migrations/`. Run them via the dedicated workspace script:

   ```bash
   cd packages/db && bun run db:migrate && cd ../..
   ```

   For ad-hoc schema iteration during development you can also use `bun run db:push` from `packages/db`, but `db:migrate` is the canonical command for both local and CI/CD setups.

4. **Run the Development Servers:**

   ```bash
   bun run dev:full
   ```

   This command starts both the main application and the realtime socket server required for full functionality.
   This launches both apps with coloured prefixes:

   - `[App]` — Next.js on `http://localhost:3000`
   - `[Realtime]` — Socket.IO on `http://localhost:3002`

   Or run them separately:

   ```bash
   bun run dev # Next.js app only
   bun run dev:sockets # realtime server only
   ```

5. **Make Your Changes and Test Locally.**

   Before opening a PR, run the same checks CI runs:

   ```bash
   bun run type-check # TypeScript across every workspace
   bun run lint:check # Biome lint across every workspace
   bun run test # Vitest across every workspace
   ```

### Email Template Development

When working on email templates, you can preview them using a local email preview server:

@@ -263,18 +317,19 @@ When working on email templates, you can preview them using a local email previe

1. **Run the Email Preview Server:**

   ```bash
   bun run email:dev
   cd apps/sim && bun run email:dev
   ```

2. **Access the Preview:**

   - Open `http://localhost:3000` in your browser
   - You'll see a list of all email templates
   - Click on any template to view and test it with various parameters
   - Open `http://localhost:3000` in your browser.
   - You'll see a list of all email templates.
   - Click on any template to view and test it with various parameters.

3. **Templates Location:**
   - Email templates are located in `sim/app/emails/`
   - After making changes to templates, they will automatically update in the preview
   - Email templates live in `apps/sim/components/emails/`.
   - Changes hot-reload automatically in the preview.

---

@@ -282,28 +337,41 @@ When working on email templates, you can preview them using a local email previe

Sim is built in a modular fashion where blocks and tools extend the platform's functionality. To maintain consistency and quality, please follow the guidelines below when adding a new block or tool.

> **Use the skill guides for step-by-step recipes.** The repository ships opinionated, end-to-end guides under `.agents/skills/` that cover the exact file layout, conventions, registry wiring, and gotchas for each kind of contribution. Read the relevant SKILL.md before you start writing code:
>
> | Adding… | Read |
> | ------- | ---- |
> | A new integration end-to-end (tools + block + icon + optional triggers + all registrations) | [`.agents/skills/add-integration/SKILL.md`](../.agents/skills/add-integration/SKILL.md) |
> | Just a block (or aligning an existing block with its tools) | [`.agents/skills/add-block/SKILL.md`](../.agents/skills/add-block/SKILL.md) |
> | Just tool configs for a service | [`.agents/skills/add-tools/SKILL.md`](../.agents/skills/add-tools/SKILL.md) |
> | A webhook trigger for a service | [`.agents/skills/add-trigger/SKILL.md`](../.agents/skills/add-trigger/SKILL.md) |
> | A knowledge-base connector (sync docs from an external source) | [`.agents/skills/add-connector/SKILL.md`](../.agents/skills/add-connector/SKILL.md) |
>
> The shorter overview below is a high-level reference; the SKILL.md files are the authoritative source of truth and stay in sync with the codebase.

### Where to Add Your Code

- **Blocks:** Create your new block file under the `/apps/sim/blocks/blocks` directory. The name of the file should match the provider name (e.g., `pinecone.ts`).
- **Tools:** Create a new directory under `/apps/sim/tools` with the same name as the provider (e.g., `/apps/sim/tools/pinecone`).
- **Blocks:** Create your new block file under the `apps/sim/blocks/blocks/` directory. The name of the file should match the provider name (e.g., `pinecone.ts`).
- **Tools:** Create a new directory under `apps/sim/tools/` with the same name as the provider (e.g., `apps/sim/tools/pinecone`).

In addition, you will need to update the registries:

- **Block Registry:** Update the blocks index (`/apps/sim/blocks/index.ts`) to include your new block.
- **Tool Registry:** Update the tools registry (`/apps/sim/tools/index.ts`) to add your new tool.
- **Block Registry:** Add your block to `apps/sim/blocks/registry.ts`. (`apps/sim/blocks/index.ts` re-exports lookups from the registry; you do not need to edit it.)
- **Tool Registry:** Add your tool to `apps/sim/tools/index.ts`.

### How to Create a New Block

1. **Create a New File:**
   Create a file for your block named after the provider (e.g., `pinecone.ts`) in the `/apps/sim/blocks/blocks` directory.
1. **Create a New File:**
   Create a file for your block named after the provider (e.g., `pinecone.ts`) in the `apps/sim/blocks/blocks/` directory.

2. **Create a New Icon:**
   Create a new icon for your block in the `/apps/sim/components/icons.tsx` file. The icon should follow the same naming convention as the block (e.g., `PineconeIcon`).
   Create a new icon for your block in `apps/sim/components/icons.tsx`. The icon should follow the same naming convention as the block (e.g., `PineconeIcon`).

3. **Define the Block Configuration:**
   Your block should export a constant of type `BlockConfig`. For example:

```typescript:/apps/sim/blocks/blocks/pinecone.ts
```typescript
// apps/sim/blocks/blocks/pinecone.ts
import { PineconeIcon } from '@/components/icons'
import type { BlockConfig } from '@/blocks/types'
import type { PineconeResponse } from '@/tools/pinecone/types'

@@ -321,7 +389,7 @@ In addition, you will need to update the registries:
    {
      id: 'operation',
      title: 'Operation',
      type: 'dropdown'
      type: 'dropdown',
      required: true,
      options: [
        { label: 'Generate Embeddings', id: 'generate' },
@@ -332,7 +400,7 @@ In addition, you will need to update the registries:
    {
      id: 'apiKey',
      title: 'API Key',
      type: 'short-input'
      type: 'short-input',
      placeholder: 'Your Pinecone API key',
      password: true,
      required: true,
@@ -370,10 +438,11 @@ In addition, you will need to update the registries:
}
```

4. **Register Your Block:**
   Add your block to the blocks registry (`/apps/sim/blocks/registry.ts`):
4. **Register Your Block:**
   Add your block to the blocks registry (`apps/sim/blocks/registry.ts`):

```typescript:/apps/sim/blocks/registry.ts
```typescript
// apps/sim/blocks/registry.ts
import { PineconeBlock } from '@/blocks/blocks/pinecone'

// Registry of all available blocks

@@ -385,24 +454,25 @@ In addition, you will need to update the registries:

The block will be automatically available to the application through the registry.

5. **Test Your Block:**
   Ensure that the block displays correctly in the UI and that its functionality works as expected.

### How to Create a New Tool

1. **Create a New Directory:**
   Create a directory under `/apps/sim/tools` with the same name as the provider (e.g., `/apps/sim/tools/pinecone`).
1. **Create a New Directory:**
   Create a directory under `apps/sim/tools/` with the same name as the provider (e.g., `apps/sim/tools/pinecone`).

2. **Create Tool Files:**
   Create separate files for each tool functionality with descriptive names (e.g., `fetch.ts`, `generate_embeddings.ts`, `search_text.ts`) in your tool directory.

3. **Create a Types File:**
   Create a `types.ts` file in your tool directory to define and export all types related to your tools.

4. **Create an Index File:**
   Create an `index.ts` file in your tool directory that imports and exports all tools:

```typescript:/apps/sim/tools/pinecone/index.ts
```typescript
// apps/sim/tools/pinecone/index.ts
import { fetchTool } from './fetch'
import { generateEmbeddingsTool } from './generate_embeddings'
import { searchTextTool } from './search_text'
@@ -410,10 +480,11 @@ In addition, you will need to update the registries:
export { fetchTool, generateEmbeddingsTool, searchTextTool }
```

5. **Define the Tool Configuration:**
   Your tool should export a constant with a naming convention of `{toolName}Tool`. The tool ID should follow the format `{provider}_{tool_name}`. For example:

```typescript:/apps/sim/tools/pinecone/fetch.ts
```typescript
// apps/sim/tools/pinecone/fetch.ts
import { ToolConfig, ToolResponse } from '@/tools/types'
import { PineconeParams, PineconeResponse } from '@/tools/pinecone/types'

@@ -449,11 +520,12 @@ In addition, you will need to update the registries:
}
```

6. **Register Your Tool:**
   Update the tools registry in `/apps/sim/tools/index.ts` to include your new tool:
6. **Register Your Tool:**
   Update the tools registry in `apps/sim/tools/index.ts` to include your new tool:

```typescript:/apps/sim/tools/index.ts
import { fetchTool, generateEmbeddingsTool, searchTextTool } from '/@tools/pinecone'
```typescript
// apps/sim/tools/index.ts
import { fetchTool, generateEmbeddingsTool, searchTextTool } from '@/tools/pinecone'
// ... other imports

export const tools: Record<string, ToolConfig> = {
@@ -464,13 +536,14 @@ In addition, you will need to update the registries:
}
```

7. **Test Your Tool:**
   Ensure that your tool functions correctly by making test requests and verifying the responses.

8. **Generate Documentation:**
   Run the documentation generator to create docs for your new tool:
8. **Generate Documentation:**
   Run the documentation generator (from `apps/sim`) to create docs for your new tool:

```bash
./scripts/generate-docs.sh
cd apps/sim && bun run generate-docs
```

### Naming Conventions

@@ -480,7 +553,7 @@ Maintaining consistent naming across the codebase is critical for auto-generatio

- **Block Files:** Name should match the provider (e.g., `pinecone.ts`)
- **Block Export:** Should be named `{Provider}Block` (e.g., `PineconeBlock`)
- **Icons:** Should be named `{Provider}Icon` (e.g., `PineconeIcon`)
- **Tool Directories:** Should match the provider name (e.g., `/tools/pinecone/`)
- **Tool Directories:** Should match the provider name (e.g., `tools/pinecone/`)
- **Tool Files:** Should be named after their function (e.g., `fetch.ts`, `search_text.ts`)
- **Tool Exports:** Should be named `{toolName}Tool` (e.g., `fetchTool`)
- **Tool IDs:** Should follow the format `{provider}_{tool_name}` (e.g., `pinecone_fetch`)
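
The ID and registry conventions above can be sketched as a tiny helper. `ToolConfig` here is pared down to two fields for illustration; the real interface in `apps/sim/tools/types.ts` is richer:

```typescript
// Simplified sketch; not the real ToolConfig from apps/sim/tools/types.ts.
interface ToolConfig {
  id: string // format: {provider}_{tool_name}, e.g. 'pinecone_fetch'
  name: string
}

// Lowercase provider and tool name joined by an underscore.
const TOOL_ID_PATTERN = /^[a-z0-9]+_[a-z0-9_]+$/

function makeToolId(provider: string, toolName: string): string {
  const id = `${provider}_${toolName}`
  if (!TOOL_ID_PATTERN.test(id)) throw new Error(`invalid tool id: ${id}`)
  return id
}

const fetchTool: ToolConfig = { id: makeToolId('pinecone', 'fetch'), name: 'Pinecone Fetch' }
```

The registry in `tools/index.ts` is then keyed by these IDs, which is what keeps doc generation and block wiring consistent.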

@@ -489,12 +562,12 @@ Maintaining consistent naming across the codebase is critical for auto-generatio

Sim implements a sophisticated parameter visibility system that controls how parameters are exposed to users and LLMs in agent workflows. Each parameter can have one of four visibility levels:

| Visibility | User Sees | LLM Sees | How It Gets Set |
| ------------- | --------- | -------- | ------------------------------ |
| `user-only` | ✅ Yes | ❌ No | User provides in UI |
| `user-or-llm` | ✅ Yes | ✅ Yes | User provides OR LLM generates |
| `llm-only` | ❌ No | ✅ Yes | LLM generates only |
| `hidden` | ❌ No | ❌ No | Application injects at runtime |

#### Visibility Guidelines
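
The four levels in the table partition parameters into two overlapping sets. A minimal sketch of that partition; the param shape here is simplified and hypothetical, not the real block-config type:

```typescript
type Visibility = 'user-only' | 'user-or-llm' | 'llm-only' | 'hidden'

// Hypothetical, pared-down param shape used only for this illustration.
interface ParamSketch {
  id: string
  visibility: Visibility
}

// Params the UI should render for the user.
const userVisible = (params: ParamSketch[]) =>
  params.filter((p) => p.visibility === 'user-only' || p.visibility === 'user-or-llm').map((p) => p.id)

// Params the LLM is allowed to see and fill in.
const llmVisible = (params: ParamSketch[]) =>
  params.filter((p) => p.visibility === 'user-or-llm' || p.visibility === 'llm-only').map((p) => p.id)

const example: ParamSketch[] = [
  { id: 'apiKey', visibility: 'user-only' },
  { id: 'searchQuery', visibility: 'user-or-llm' },
  { id: 'generatedFilter', visibility: 'llm-only' },
  { id: 'workflowId', visibility: 'hidden' },
]
```

Note that `hidden` params land in neither set: the application injects them at runtime.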
|
||||
|
||||
|
||||
2
.github/workflows/docs-embeddings.yml
vendored
2
.github/workflows/docs-embeddings.yml
vendored
@@ -20,7 +20,7 @@ jobs:
|
||||
- name: Setup Bun
|
||||
uses: oven-sh/setup-bun@v2
|
||||
with:
|
||||
bun-version: 1.3.11
|
||||
bun-version: 1.3.13
|
||||
|
||||
- name: Setup Node
|
||||
uses: actions/setup-node@v4
|
||||
|
||||
4
.github/workflows/i18n.yml
vendored
4
.github/workflows/i18n.yml
vendored
@@ -23,7 +23,7 @@ jobs:
|
||||
- name: Setup Bun
|
||||
uses: oven-sh/setup-bun@v2
|
||||
with:
|
||||
bun-version: 1.3.11
|
||||
bun-version: 1.3.13
|
||||
|
||||
- name: Cache Bun dependencies
|
||||
uses: actions/cache@v4
|
||||
@@ -122,7 +122,7 @@ jobs:
|
||||
- name: Setup Bun
|
||||
uses: oven-sh/setup-bun@v2
|
||||
with:
|
||||
bun-version: 1.3.11
|
||||
bun-version: 1.3.13
|
||||
|
||||
- name: Cache Bun dependencies
|
||||
uses: actions/cache@v4
|
||||
|
||||
2
.github/workflows/migrations.yml
vendored
2
.github/workflows/migrations.yml
vendored
@@ -19,7 +19,7 @@ jobs:
|
||||
- name: Setup Bun
|
||||
uses: oven-sh/setup-bun@v2
|
||||
with:
|
||||
bun-version: 1.3.11
|
||||
bun-version: 1.3.13
|
||||
|
||||
- name: Cache Bun dependencies
|
||||
uses: actions/cache@v4
|
||||
|
||||
2
.github/workflows/publish-cli.yml
vendored
2
.github/workflows/publish-cli.yml
vendored
@@ -19,7 +19,7 @@ jobs:
|
||||
- name: Setup Bun
|
||||
uses: oven-sh/setup-bun@v2
|
||||
with:
|
||||
bun-version: 1.3.11
|
||||
bun-version: 1.3.13
|
||||
|
||||
- name: Setup Node.js for npm publishing
|
||||
uses: actions/setup-node@v4
|
||||
|
||||
2
.github/workflows/publish-ts-sdk.yml
vendored
2
.github/workflows/publish-ts-sdk.yml
vendored
@@ -19,7 +19,7 @@ jobs:
|
||||
- name: Setup Bun
|
||||
uses: oven-sh/setup-bun@v2
|
||||
with:
|
||||
bun-version: 1.3.11
|
||||
bun-version: 1.3.13
|
||||
|
||||
- name: Setup Node.js for npm publishing
|
||||
uses: actions/setup-node@v4
|
||||
|
||||
.github/workflows/test-build.yml (vendored, 11 changes)
@@ -19,7 +19,7 @@ jobs:
       - name: Setup Bun
         uses: oven-sh/setup-bun@v2
         with:
-          bun-version: 1.3.11
+          bun-version: 1.3.13
 
       - name: Setup Node
         uses: actions/setup-node@v4

@@ -103,6 +103,15 @@ jobs:
       - name: Lint code
         run: bun run lint:check
 
+      - name: Enforce monorepo boundaries
+        run: bun run check:boundaries
+
+      - name: Verify realtime prune graph
+        run: bun run check:realtime-prune
+
+      - name: Type-check realtime server
+        run: bunx turbo run type-check --filter=@sim/realtime
+
       - name: Run tests with coverage
         env:
           NODE_OPTIONS: '--no-warnings --max-old-space-size=8192'
AGENTS.md (47 changes)
@@ -7,7 +7,7 @@ You are a professional software engineer. All code must follow best practices: a
 - **Logging**: Import `createLogger` from `@sim/logger`. Use `logger.info`, `logger.warn`, `logger.error` instead of `console.log`
 - **Comments**: Use TSDoc for documentation. No `====` separators. No non-TSDoc comments
 - **Styling**: Never update global styles. Keep all styling local to components
-- **ID Generation**: Never use `crypto.randomUUID()`, `nanoid`, or `uuid` package. Use `generateId()` (UUID v4) or `generateShortId()` (compact) from `@/lib/core/utils/uuid`
+- **ID Generation**: Never use `crypto.randomUUID()`, `nanoid`, or `uuid` package. Use `generateId()` (UUID v4) or `generateShortId()` (compact) from `@sim/utils/id`
 - **Package Manager**: Use `bun` and `bunx`, not `npm` and `npx`
 
 ## Architecture

@@ -20,19 +20,42 @@ You are a professional software engineer. All code must follow best practices: a
 
 ### Root Structure
 ```
-apps/sim/
-├── app/          # Next.js app router (pages, API routes)
-├── blocks/       # Block definitions and registry
-├── components/   # Shared UI (emcn/, ui/)
-├── executor/     # Workflow execution engine
-├── hooks/        # Shared hooks (queries/, selectors/)
-├── lib/          # App-wide utilities
-├── providers/    # LLM provider integrations
-├── stores/       # Zustand stores
-├── tools/        # Tool definitions
-└── triggers/     # Trigger definitions
+apps/
+├── sim/                  # Next.js app (UI + API routes + workflow editor)
+│   ├── app/              # Next.js app router (pages, API routes)
+│   ├── blocks/           # Block definitions and registry
+│   ├── components/       # Shared UI (emcn/, ui/)
+│   ├── executor/         # Workflow execution engine
+│   ├── hooks/            # Shared hooks (queries/, selectors/)
+│   ├── lib/              # App-wide utilities
+│   ├── providers/        # LLM provider integrations
+│   ├── stores/           # Zustand stores
+│   ├── tools/            # Tool definitions
+│   └── triggers/         # Trigger definitions
+└── realtime/             # Bun Socket.IO server (collaborative canvas)
+    └── src/              # auth, config, database, handlers, middleware,
+                          # rooms, routes, internal/webhook-cleanup.ts
+
+packages/
+├── audit/                # @sim/audit — recordAudit + AuditAction + AuditResourceType
+├── auth/                 # @sim/auth — @sim/auth/verify (shared Better Auth verifier)
+├── db/                   # @sim/db — drizzle schema + client
+├── logger/               # @sim/logger
+├── realtime-protocol/    # @sim/realtime-protocol — socket operation constants + zod schemas
+├── security/             # @sim/security — safeCompare
+├── tsconfig/             # shared tsconfig presets
+├── utils/                # @sim/utils
+├── workflow-authz/       # @sim/workflow-authz — authorizeWorkflowByWorkspacePermission
+├── workflow-persistence/ # @sim/workflow-persistence — raw load/save + subflow helpers
+└── workflow-types/       # @sim/workflow-types — pure BlockState/Loop/Parallel/... types
 ```
 
+### Package boundaries
+- `apps/* → packages/*` only. Packages never import from `apps/*`.
+- Each package has explicit subpath `exports` maps; no barrels that accidentally pull in heavy halves.
+- `apps/realtime` intentionally avoids Next.js, React, the block/tool registry, provider SDKs, and the executor. CI enforces this via `scripts/check-monorepo-boundaries.ts` and `scripts/check-realtime-prune-graph.ts`.
+- Auth is shared across services via the Better Auth "Shared Database Session" pattern: both apps read the same `BETTER_AUTH_SECRET` and point at the same DB via `@sim/db`.
+
 ### Naming Conventions
 - Components: PascalCase (`WorkflowList`)
 - Hooks: `use` prefix (`useWorkflowOperations`)
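The ID-generation rule above now points at `@sim/utils/id`. As a purely illustrative, self-contained stand-in (not the package's actual implementation), this sketch shows the UUID v4 shape that `generateId()` is documented to return:

```typescript
// Illustrative stand-in for the generateId() convention described above.
// The real @sim/utils/id implementation may differ; this only demonstrates
// the UUID v4 string shape the rule refers to.
function generateIdSketch(): string {
  return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, (c) => {
    const r = Math.floor(Math.random() * 16)
    // The 'y' nibble encodes the RFC 4122 variant (8, 9, a, or b)
    const v = c === 'x' ? r : (r % 4) + 8
    return v.toString(16)
  })
}

const id = generateIdSketch()
```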
CLAUDE.md (41 changes)
@@ -4,10 +4,12 @@ You are a professional software engineer. All code must follow best practices: a
 
 ## Global Standards
 
-- **Logging**: Import `createLogger` from `@sim/logger`. Use `logger.info`, `logger.warn`, `logger.error` instead of `console.log`
+- **Logging**: Import `createLogger` from `@sim/logger`. Use `logger.info`, `logger.warn`, `logger.error` instead of `console.log`. Inside API routes wrapped with `withRouteHandler`, loggers automatically include the request ID — no manual `withMetadata({ requestId })` needed
+- **API Route Handlers**: All API route handlers (`GET`, `POST`, `PUT`, `DELETE`, `PATCH`) must be wrapped with `withRouteHandler` from `@/lib/core/utils/with-route-handler`. This provides request ID tracking, automatic error logging for 4xx/5xx responses, and unhandled error catching. See "API Route Pattern" section below
 - **Comments**: Use TSDoc for documentation. No `====` separators. No non-TSDoc comments
 - **Styling**: Never update global styles. Keep all styling local to components
-- **ID Generation**: Never use `crypto.randomUUID()`, `nanoid`, or `uuid` package. Use `generateId()` (UUID v4) or `generateShortId()` (compact) from `@/lib/core/utils/uuid`
+- **ID Generation**: Never use `crypto.randomUUID()`, `nanoid`, or `uuid` package. Use `generateId()` (UUID v4) or `generateShortId()` (compact) from `@sim/utils/id`
+- **Common Utilities**: Use shared helpers from `@sim/utils` instead of inline implementations. `sleep(ms)` from `@sim/utils/helpers` for delays, `toError(e)` from `@sim/utils/errors` to normalize caught values.
 - **Package Manager**: Use `bun` and `bunx`, not `npm` and `npx`
 
 ## Architecture

@@ -92,6 +94,41 @@ export function Component({ requiredProp, optionalProp = false }: ComponentProps
 
 Extract when: 50+ lines, used in 2+ files, or has own state/logic. Keep inline when: < 10 lines, single use, purely presentational.
 
+## API Route Pattern
+
+Every API route handler must be wrapped with `withRouteHandler`. This sets up `AsyncLocalStorage`-based request context so all loggers in the request lifecycle automatically include the request ID.
+
+```typescript
+import { createLogger } from '@sim/logger'
+import type { NextRequest } from 'next/server'
+import { NextResponse } from 'next/server'
+import { withRouteHandler } from '@/lib/core/utils/with-route-handler'
+
+const logger = createLogger('MyAPI')
+
+// Simple route
+export const GET = withRouteHandler(async (request: NextRequest) => {
+  logger.info('Handling request') // automatically includes {requestId=...}
+  return NextResponse.json({ ok: true })
+})
+
+// Route with params
+export const DELETE = withRouteHandler(async (
+  request: NextRequest,
+  { params }: { params: Promise<{ id: string }> }
+) => {
+  const { id } = await params
+  return NextResponse.json({ deleted: id })
+})
+
+// Composing with other middleware (withRouteHandler wraps the outermost layer)
+export const POST = withRouteHandler(withAdminAuth(async (request) => {
+  return NextResponse.json({ ok: true })
+}))
+```
+
+Never export a bare `async function GET/POST/...` — always use `export const METHOD = withRouteHandler(...)`.
+
 ## Hooks
 
 ```typescript
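The `withRouteHandler` pattern in the hunk above relies on `AsyncLocalStorage` so that every logger in a request's call tree can read the request ID without it being threaded through arguments. A minimal, self-contained sketch of that mechanism (the names `requestContext`, `currentRequestId`, and `withRequestContext` are illustrative, not Sim's actual internals):

```typescript
import { AsyncLocalStorage } from 'node:async_hooks'

// Hypothetical request context; the real withRouteHandler internals may differ.
const requestContext = new AsyncLocalStorage<{ requestId: string }>()

/** Reads the request ID from wherever we are in the call tree. */
function currentRequestId(): string | undefined {
  return requestContext.getStore()?.requestId
}

/** Runs a handler with a request ID bound for its whole call tree. */
function withRequestContext<T>(requestId: string, handler: () => T): T {
  return requestContext.run({ requestId }, handler)
}

// A function called inside the handler sees the ID without receiving it as an argument.
const seen = withRequestContext('req-123', () => currentRequestId())
```

Outside `run()`, `getStore()` returns `undefined`, which is why the context cannot leak between requests.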
@@ -142,13 +142,15 @@ See the [environment variables reference](https://docs.sim.ai/self-hosting/envir
 - **Database**: PostgreSQL with [Drizzle ORM](https://orm.drizzle.team)
 - **Authentication**: [Better Auth](https://better-auth.com)
 - **UI**: [Shadcn](https://ui.shadcn.com/), [Tailwind CSS](https://tailwindcss.com)
-- **State Management**: [Zustand](https://zustand-demo.pmnd.rs/)
+- **Streaming Markdown**: [Streamdown](https://github.com/vercel/streamdown)
+- **State Management**: [Zustand](https://zustand-demo.pmnd.rs/), [TanStack Query](https://tanstack.com/query)
 - **Flow Editor**: [ReactFlow](https://reactflow.dev/)
 - **Docs**: [Fumadocs](https://fumadocs.vercel.app/)
 - **Monorepo**: [Turborepo](https://turborepo.org/)
 - **Realtime**: [Socket.io](https://socket.io/)
 - **Background Jobs**: [Trigger.dev](https://trigger.dev/)
+- **Remote Code Execution**: [E2B](https://www.e2b.dev/)
 - **Isolated Code Execution**: [isolated-vm](https://github.com/laverdet/isolated-vm)
 
 ## Contributing
@@ -1,4 +1,5 @@
-import { DocsBody, DocsPage } from 'fumadocs-ui/page'
+import { DocsPage } from 'fumadocs-ui/page'
+import Link from 'next/link'
 
 export const metadata = {
   title: 'Page Not Found',
@@ -7,17 +8,21 @@ export const metadata = {
 export default function NotFound() {
   return (
     <DocsPage>
-      <DocsBody>
-        <div className='flex min-h-[60vh] flex-col items-center justify-center text-center'>
-          <h1 className='mb-4 bg-gradient-to-b from-[#47d991] to-[#33c482] bg-clip-text font-bold text-8xl text-transparent'>
-            404
-          </h1>
-          <h2 className='mb-2 font-semibold text-2xl text-foreground'>Page Not Found</h2>
-          <p className='text-muted-foreground'>
-            The page you're looking for doesn't exist or has been moved.
-          </p>
-        </div>
-      </DocsBody>
+      <div className='flex min-h-[70vh] flex-col items-center justify-center gap-4 text-center'>
+        <h1 className='bg-gradient-to-b from-[#47d991] to-[#33c482] bg-clip-text font-bold text-8xl text-transparent'>
+          404
+        </h1>
+        <h2 className='font-semibold text-2xl text-foreground'>Page Not Found</h2>
+        <p className='text-muted-foreground'>
+          The page you're looking for doesn't exist or has been moved.
+        </p>
+        <Link
+          href='/'
+          className='ml-1 flex items-center rounded-[8px] bg-[#33c482] px-2.5 py-1.5 text-[13px] text-white transition-colors duration-200 hover:bg-[#2DAC72]'
+        >
+          Go home
+        </Link>
+      </div>
     </DocsPage>
   )
 }
@@ -62,7 +62,7 @@ ${Object.entries(sections)
 
 - Full documentation content: ${baseUrl}/llms-full.txt
 - Individual page content: ${baseUrl}/llms.mdx/[page-path]
-- API documentation: ${baseUrl}/sdks/
+- API documentation: ${baseUrl}/api-reference/
 - Tool integrations: ${baseUrl}/tools/
 
 ## Statistics
@@ -28,6 +28,36 @@ export function AgentMailIcon(props: SVGProps<SVGSVGElement>) {
   )
 }
 
+export function AgentPhoneIcon(props: SVGProps<SVGSVGElement>) {
+  return (
+    <svg {...props} viewBox='0 0 150 150' xmlns='http://www.w3.org/2000/svg'>
+      <path
+        fill='#23AF58'
+        stroke='#007F3F'
+        strokeWidth='0.15'
+        strokeMiterlimit='10'
+        d='m139.6 53.3c-1.4-2.3-4.9-3.3-7.6-4.8-2.7-1.3-4.2-2.4-5.7-3.6-1.9-1-2.5-2.7-3.3-3.2s-2.7-1.4-4.5 1.3c-2 2.7-4.5 6.6-6.6 11.1-2.3 5.4-6.3 14.9-6.3 18.9 0.5 4.9 3.1 4.6 6.1 7.2 2.5 2.1 2.8 5.8 1.5 12.5-1.3 6.6-4 12.8-7.8 19.2-3.3 5.1-5.8 8.7-10 9.1-5.3 0.5-12.5-3.1-16.8-5.6-1-0.6-2.5-0.9-3.8-0.2-1.3 0.5-2.2 1.6-3.2 3.3-1.5 2.5-4.6 7.7-5.8 12.2-0.5 3 0 6.4 2.9 9 1.4 1.2 2.8 2.5 4.4 3.4 5 2.8 9.6 4.5 16.5 4.9 5.3 0.2 9.3-1 13.4-3.1 2.4-1.3 6.6-4.2 9.6-7.3l1.1-1.2c2.8-3.1 8.8-10 11.6-14.5 2.3-3.5 4.8-7.4 6.9-12.3 2.9-6.7 4.4-14 5-17.9 1.2-7 2.4-17.5 3.4-31.1 0.1-4.3-0.3-6.1-1-7.3zm-4.5 6.7c-0.5 9.5-1.9 23.3-3.1 30.1-0.9 4.5-2.4 9.6-3.8 13.4-1.1 2.6-3.1 7-5.6 10.8-3.4 5.3-8.4 11.6-12 15.8-6.4 6.6-10.2 9.6-14.2 10.8-2.2 0.9-3.8 1.2-7 1.2-3.4-0.1-8-0.7-11.3-2.2-3-1.2-7-4-6.9-6.8 0.4-3.2 3.3-9.6 5.2-11.9 0.2-0.3 0.5-0.3 0.7-0.2 2.5 1.1 6 3.2 9.6 4.5 2.4 0.9 4.8 1.4 7.3 1.4 3.9 0 6.7-1.2 9.5-3.2 5.6-4.6 9-10.8 12.1-17.5 2-4.3 4.1-11.6 4.4-18.3 0.1-4.9-1.1-8.9-4.5-12.2-1.1-0.7-3-2.1-3-2.8 0-4.2 3.9-13 8.9-22.9 0.2-0.7 0.5-1 1.1-0.7 1.1 0.6 3 1.4 4.6 2.4 2.1 1 5.4 2.4 7.1 3.9 0.9 0.4 1 3 0.9 4.4z'
+      />
+      <path
+        fill='#23AF58'
+        d='m104.7 27.8c-1.3-1.5-3.3-1.3-6.2-1.5l-1.9 0.2-7-0.2-31.5 0.2 1.5-9.3c2-1.1 5.1-3.5 5.8-6.3 1-2.8 0.2-5.9-2-7.4-2.3-1.9-5.8-2.4-9.3-0.8-1.6 1-4.7 3.4-5.4 6.9-0.8 4.1 2.4 6.7 4.7 7.9l-1.5 9.1-17.2 0.9c-12.3 1.1-16.3 1.2-20.6 4.3-2 1.3-3 4.5-3.4 9.8-0.6 11.3-0.7 18.7-0.6 28.3 0.4 11.2 0 36.6 3 39.8l-1.2 0.3c-3.8 0.6-4 6.2-0.5 6.6l15.5-1 69.7-7.6c2.5-0.4 4.3-0.9 4.6-4.3l3.7-71.5c0-1.9 0.2-3.6-0.2-4.4zm-49.6-17.3c0.3-2.2 2.4-3 3.3-2.8 0.7 0.4 1 1.8 0 2.8-1.5 2-3.3 1.7-3.3 0zm40 90.2c-4 1-5.5 1.5-11.5 2.4-7.7 1-19.7 2.1-31.2 3.4l-33.8 2.9c-0.7 0.2-1-0.4-1-1-0.6-6.5-1.2-20.5-1.5-39.5l0.3-23.3c0.6-7.5 0.7-8.7 4.6-9.7 5.1-0.9 7.4-1.4 14.9-1.8l19.5-0.5 41.1-0.5c1.4 0 1.9 0.4 1.9 1.5l-3.3 66.1z'
+      />
+      <path
+        fill='#23AF58'
+        d='m38.9 52.4c-1.8 0-4 1.1-4.5 3.3-1 3.9 1 7.6 4.5 7.7 3.8 0 5-3.8 4.7-6.3-0.2-2-2-4.7-4.7-4.7z'
+      />
+      <path
+        fill='#23AF58'
+        d='m73.5 53.9c-1.8 0-4.3 1.5-4.4 4.5-0.1 3.2 2 5.3 4.3 5.3 2.5 0 4.2-1.7 4.2-4.8 0-3.2-1.7-4.8-4.1-5z'
+      />
+      <path
+        fill='#23AF58'
+        d='m72.1 77.1c-2.7 3.4-7.2 7.4-14.7 8.3-7.3 0.3-13.9-2.9-20-8.5-3.5-3.4-8 0-6.2 2.7 1.7 2.5 6.4 6.6 10.4 8.8 3.5 2 7.3 3.3 13.8 3.5 4.7 0 9.2-0.8 12.7-2.4 2.9-1.1 5-2.8 6-3.8 2.3-2.1 3.8-4.1 3.5-7.3-0.9-2.5-3.6-2.8-5.5-1.3z'
+      />
+    </svg>
+  )
+}
+
 export function CrowdStrikeIcon(props: SVGProps<SVGSVGElement>) {
   return (
     <svg {...props} viewBox='0 0 768 500' fill='none' xmlns='http://www.w3.org/2000/svg'>

@@ -3602,6 +3632,29 @@ export function OpenRouterIcon(props: SVGProps<SVGSVGElement>) {
   )
 }
 
+export function MondayIcon(props: SVGProps<SVGSVGElement>) {
+  return (
+    <svg
+      {...props}
+      viewBox='0 -50 256 256'
+      xmlns='http://www.w3.org/2000/svg'
+      preserveAspectRatio='xMidYMid'
+    >
+      <g>
+        <path
+          d='M31.8458633,153.488694 C20.3244423,153.513586 9.68073708,147.337265 3.98575204,137.321731 C-1.62714067,127.367831 -1.29055839,115.129325 4.86093879,105.498969 L62.2342919,15.4033556 C68.2125882,5.54538256 79.032489,-0.333585033 90.5563073,0.0146553508 C102.071737,0.290611552 112.546041,6.74705604 117.96667,16.9106216 C123.315033,27.0238906 122.646488,39.1914174 116.240607,48.6847625 L58.9037201,138.780375 C52.9943022,147.988884 42.7873202,153.537154 31.8458633,153.488694 L31.8458633,153.488694 Z'
+          fill='#F62B54'
+        />
+        <path
+          d='M130.25575,153.488484 C118.683837,153.488484 108.035731,147.301291 102.444261,137.358197 C96.8438154,127.431292 97.1804475,115.223704 103.319447,105.620522 L160.583402,15.7315506 C166.47539,5.73210989 177.327374,-0.284878136 188.929728,0.0146553508 C200.598885,0.269918151 211.174058,6.7973526 216.522421,17.0078646 C221.834319,27.2183766 221.056375,39.4588356 214.456008,48.9278699 L157.204209,138.816842 C151.313487,147.985468 141.153618,153.5168 130.25575,153.488484 Z'
+          fill='#FFCC00'
+        />
+        <ellipse fill='#00CA72' cx='226.465527' cy='125.324379' rx='29.5375538' ry='28.9176274' />
+      </g>
+    </svg>
+  )
+}
+
 export function MongoDBIcon(props: SVGProps<SVGSVGElement>) {
   return (
     <svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 128 128'>

@@ -4658,6 +4711,24 @@ export function IAMIcon(props: SVGProps<SVGSVGElement>) {
   )
 }
 
+export function IdentityCenterIcon(props: SVGProps<SVGSVGElement>) {
+  return (
+    <svg {...props} viewBox='0 0 80 80' xmlns='http://www.w3.org/2000/svg'>
+      <defs>
+        <linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id='identityCenterGradient'>
+          <stop stopColor='#BD0816' offset='0%' />
+          <stop stopColor='#FF5252' offset='100%' />
+        </linearGradient>
+      </defs>
+      <rect fill='url(#identityCenterGradient)' width='80' height='80' />
+      <path
+        d='M46.694,46.8194562 C47.376,46.1374562 47.376,45.0294562 46.694,44.3474562 C46.353,44.0074562 45.906,43.8374562 45.459,43.8374562 C45.01,43.8374562 44.563,44.0074562 44.222,44.3474562 C43.542,45.0284562 43.542,46.1384562 44.222,46.8194562 C44.905,47.5014562 46.013,47.4994562 46.694,46.8194562 M47.718,47.1374562 L51.703,51.1204562 L50.996,51.8274562 L49.868,50.6994562 L48.793,51.7754562 L48.086,51.0684562 L49.161,49.9924562 L47.011,47.8444562 C46.545,48.1654562 46.003,48.3294562 45.458,48.3294562 C44.755,48.3294562 44.051,48.0624562 43.515,47.5264562 C42.445,46.4554562 42.445,44.7124562 43.515,43.6404562 C44.586,42.5714562 46.329,42.5694562 47.401,43.6404562 C48.351,44.5904562 48.455,46.0674562 47.718,47.1374562 M53,44.1014562 C53,46.1684562 51.505,47.0934562 50.023,47.0934562 L50.023,46.0934562 C50.487,46.0934562 52,45.9494562 52,44.1014562 C52,43.0044562 51.353,42.3894562 49.905,42.1084562 C49.68,42.0654562 49.514,41.8754562 49.501,41.6484562 C49.446,40.7444562 48.987,40.1124562 48.384,40.1124562 C48.084,40.1124562 47.854,40.2424562 47.616,40.5464562 C47.506,40.6884562 47.324,40.7594562 47.147,40.7324562 C46.968,40.7054562 46.818,40.5844562 46.755,40.4144562 C46.577,39.9434562 46.211,39.4334562 45.723,38.9774562 C45.231,38.5094562 43.883,37.5074562 41.972,38.2734562 C40.885,38.7054562 40.034,39.9494562 40.034,41.1074562 C40.034,41.2354562 40.043,41.3624562 40.058,41.4884562 C40.061,41.5094562 40.062,41.5304562 40.062,41.5514562 C40.062,41.7994562 39.882,42.0064562 39.645,42.0464562 C38.886,42.2394562 38,42.7454562 38,44.0554562 L38.005,44.2104562 C38.069,45.3254562 39.252,45.9954562 40.358,45.9984562 L41,45.9984562 L41,46.9984562 L40.357,46.9984562 C38.536,46.9944562 37.095,45.8194562 37.006,44.2644562 C37.003,44.1944562 37,44.1244562 37,44.0554562 C37,42.6944562 37.752,41.6484562 39.035,41.1884562 C39.034,41.1614562 39.034,41.1344562 39.034,41.1074562 C39.034,39.5434562 40.138,37.9254562 41.602,37.3434562 C43.298,36.6654562 45.095,37.0034562 46.409,38.2494562 C46.706,38.5274562 47.076,38.9264562 47.372,39.4134562 C47.673,39.2124562 48.008,39.1124562 48.384,39.1124562 C49.257,39.1124562 50.231,39.7714562 50.458,41.2074562 C52.145,41.6324562 53,42.6054562 53,44.1014562 M27,53 L27,27 L53,27 L53,34 L51,34 L51,29 L29,29 L29,51 L51,51 L51,46 L53,46 L53,53 Z'
+        fill='#FFFFFF'
+      />
+    </svg>
+  )
+}
+
 export function STSIcon(props: SVGProps<SVGSVGElement>) {
   return (
     <svg {...props} viewBox='0 0 80 80' xmlns='http://www.w3.org/2000/svg'>

@@ -4676,6 +4747,24 @@ export function STSIcon(props: SVGProps<SVGSVGElement>) {
   )
 }
 
+export function SESIcon(props: SVGProps<SVGSVGElement>) {
+  return (
+    <svg {...props} viewBox='0 0 80 80' xmlns='http://www.w3.org/2000/svg'>
+      <defs>
+        <linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id='sesGradient'>
+          <stop stopColor='#BD0816' offset='0%' />
+          <stop stopColor='#FF5252' offset='100%' />
+        </linearGradient>
+      </defs>
+      <rect fill='url(#sesGradient)' width='80' height='80' />
+      <path
+        d='M57,60.999875 C57,59.373846 55.626,57.9998214 54,57.9998214 C52.374,57.9998214 51,59.373846 51,60.999875 C51,62.625904 52.374,63.9999286 54,63.9999286 C55.626,63.9999286 57,62.625904 57,60.999875 L57,60.999875 Z M40,59.9998571 C38.374,59.9998571 37,61.3738817 37,62.9999107 C37,64.6259397 38.374,65.9999643 40,65.9999643 C41.626,65.9999643 43,64.6259397 43,62.9999107 C43,61.3738817 41.626,59.9998571 40,59.9998571 L40,59.9998571 Z M26,57.9998214 C24.374,57.9998214 23,59.373846 23,60.999875 C23,62.625904 24.374,63.9999286 26,63.9999286 C27.626,63.9999286 29,62.625904 29,60.999875 C29,59.373846 27.626,57.9998214 26,57.9998214 L26,57.9998214 Z M28.605,42.9995536 L51.395,42.9995536 L43.739,36.1104305 L40.649,38.7584778 C40.463,38.9194807 40.23,38.9994821 39.999,38.9994821 C39.768,38.9994821 39.535,38.9194807 39.349,38.7584778 L36.26,36.1104305 L28.605,42.9995536 Z M27,28.1732888 L27,41.7545313 L34.729,34.7984071 L27,28.1732888 Z M51.297,26.9992678 L28.703,26.9992678 L39.999,36.6824408 L51.297,26.9992678 Z M53,41.7545313 L53,28.1732888 L45.271,34.7974071 L53,41.7545313 Z M59,60.999875 C59,63.7099234 56.71,65.9999643 54,65.9999643 C51.29,65.9999643 49,63.7099234 49,60.999875 C49,58.6308327 50.75,56.5837961 53,56.1057876 L53,52.9997321 L41,52.9997321 L41,58.1058233 C43.25,58.5838319 45,60.6308684 45,62.9999107 C45,65.7099591 42.71,68 40,68 C37.29,68 35,65.7099591 35,62.9999107 C35,60.6308684 36.75,58.5838319 39,58.1058233 L39,52.9997321 L27,52.9997321 L27,56.1057876 C29.25,56.5837961 31,58.6308327 31,60.999875 C31,63.7099234 28.71,65.9999643 26,65.9999643 C23.29,65.9999643 21,63.7099234 21,60.999875 C21,58.6308327 22.75,56.5837961 25,56.1057876 L25,51.9997143 C25,51.4477044 25.447,50.9996964 26,50.9996964 L39,50.9996964 L39,44.9995893 L26,44.9995893 C25.447,44.9995893 25,44.5515813 25,43.9995714 L25,25.99925 C25,25.4472401 25.447,24.9992321 26,24.9992321 L54,24.9992321 C54.553,24.9992321 55,25.4472401 55,25.99925 L55,43.9995714 C55,44.5515813 54.553,44.9995893 54,44.9995893 L41,44.9995893 L41,50.9996964 L54,50.9996964 C54.553,50.9996964 55,51.4477044 55,51.9997143 L55,56.1057876 C57.25,56.5837961 59,58.6308327 59,60.999875 L59,60.999875 Z M68,39.9995 C68,45.9066055 66.177,51.5597064 62.727,56.3447919 L61.104,55.174771 C64.307,50.7316916 66,45.4845979 66,39.9995 C66,25.664244 54.337,14.0000357 40.001,14.0000357 C25.664,14.0000357 14,25.664244 14,39.9995 C14,45.4845979 15.693,50.7316916 18.896,55.174771 L17.273,56.3447919 C13.823,51.5597064 12,45.9066055 12,39.9995 C12,24.5612243 24.561,12 39.999,12 C55.438,12 68,24.5612243 68,39.9995 L68,39.9995 Z'
+        fill='#FFFFFF'
+      />
+    </svg>
+  )
+}
+
 export function SecretsManagerIcon(props: SVGProps<SVGSVGElement>) {
   return (
     <svg {...props} viewBox='0 0 80 80' xmlns='http://www.w3.org/2000/svg'>
@@ -1,6 +1,6 @@
 'use client'
 
-import { useState } from 'react'
+import { useRef, useState } from 'react'
 import { cn, getAssetUrl } from '@/lib/utils'
 import { Lightbox } from './lightbox'
 
@@ -50,11 +50,14 @@ export function ActionImage({ src, alt, enableLightbox = true }: ActionImageProp
 }
 
 export function ActionVideo({ src, alt, enableLightbox = true }: ActionVideoProps) {
+  const videoRef = useRef<HTMLVideoElement>(null)
+  const startTimeRef = useRef(0)
   const [isLightboxOpen, setIsLightboxOpen] = useState(false)
   const resolvedSrc = getAssetUrl(src)
 
   const handleClick = () => {
     if (enableLightbox) {
+      startTimeRef.current = videoRef.current?.currentTime ?? 0
       setIsLightboxOpen(true)
     }
   }
@@ -62,6 +65,7 @@ export function ActionVideo({ src, alt, enableLightbox = true }: ActionVideoProp
   return (
     <>
       <video
+        ref={videoRef}
         src={resolvedSrc}
         autoPlay
         loop
@@ -80,6 +84,7 @@ export function ActionVideo({ src, alt, enableLightbox = true }: ActionVideoProp
           src={src}
          alt={alt}
          type='video'
+         startTime={startTimeRef.current}
        />
      )}
    </>
@@ -1,195 +0,0 @@
-import { memo } from 'react'
-
-const RX = '2.59574'
-
-interface BlockRect {
-  opacity: number
-  width: string
-  height: string
-  fill: string
-  x?: string
-  y?: string
-  transform?: string
-}
-
-const RECTS = {
-  topRight: [
-    { opacity: 1, x: '0', y: '0', width: '16.8626', height: '33.7252', fill: '#2ABBF8' },
-    { opacity: 0.6, x: '0', y: '0', width: '85.3433', height: '16.8626', fill: '#2ABBF8' },
-    { opacity: 1, x: '0', y: '0', width: '16.8626', height: '16.8626', fill: '#2ABBF8' },
-    { opacity: 0.6, x: '34.2403', y: '0', width: '34.2403', height: '33.7252', fill: '#2ABBF8' },
-    { opacity: 1, x: '34.2403', y: '0', width: '16.8626', height: '16.8626', fill: '#2ABBF8' },
-    {
-      opacity: 1,
-      x: '51.6188',
-      y: '16.8626',
-      width: '16.8626',
-      height: '16.8626',
-      fill: '#2ABBF8',
-    },
-    { opacity: 1, x: '68.4812', y: '0', width: '54.6502', height: '16.8626', fill: '#00F701' },
-    { opacity: 0.6, x: '106.268', y: '0', width: '34.2403', height: '33.7252', fill: '#00F701' },
-    { opacity: 0.6, x: '106.268', y: '0', width: '51.103', height: '16.8626', fill: '#00F701' },
-    {
-      opacity: 1,
-      x: '123.6484',
-      y: '16.8626',
-      width: '16.8626',
-      height: '16.8626',
-      fill: '#00F701',
-    },
-    { opacity: 0.6, x: '157.371', y: '0', width: '34.2403', height: '16.8626', fill: '#FFCC02' },
-    { opacity: 1, x: '157.371', y: '0', width: '16.8626', height: '16.8626', fill: '#FFCC02' },
-    { opacity: 0.6, x: '208.993', y: '0', width: '68.4805', height: '16.8626', fill: '#FA4EDF' },
-    { opacity: 0.6, x: '209.137', y: '0', width: '16.8626', height: '33.7252', fill: '#FA4EDF' },
-    { opacity: 0.6, x: '243.233', y: '0', width: '34.2403', height: '33.7252', fill: '#FA4EDF' },
-    { opacity: 1, x: '243.233', y: '0', width: '16.8626', height: '16.8626', fill: '#FA4EDF' },
-    { opacity: 0.6, x: '260.096', y: '0', width: '34.04', height: '16.8626', fill: '#FA4EDF' },
-    {
-      opacity: 1,
-      x: '260.611',
-      y: '16.8626',
-      width: '16.8626',
-      height: '16.8626',
-      fill: '#FA4EDF',
-    },
-  ],
-  bottomLeft: [
-    { opacity: 1, x: '0', y: '0', width: '16.8626', height: '33.7252', fill: '#2ABBF8' },
-    { opacity: 0.6, x: '0', y: '0', width: '85.3433', height: '16.8626', fill: '#2ABBF8' },
-    { opacity: 1, x: '0', y: '0', width: '16.8626', height: '16.8626', fill: '#2ABBF8' },
-    { opacity: 0.6, x: '34.2403', y: '0', width: '34.2403', height: '33.7252', fill: '#2ABBF8' },
-    { opacity: 1, x: '34.2403', y: '0', width: '16.8626', height: '16.8626', fill: '#2ABBF8' },
-    {
-      opacity: 1,
-      x: '51.6188',
-      y: '16.8626',
-      width: '16.8626',
-      height: '16.8626',
-      fill: '#2ABBF8',
-    },
-    { opacity: 1, x: '68.4812', y: '0', width: '54.6502', height: '16.8626', fill: '#00F701' },
-    { opacity: 0.6, x: '106.268', y: '0', width: '34.2403', height: '33.7252', fill: '#00F701' },
-    { opacity: 0.6, x: '106.268', y: '0', width: '51.103', height: '16.8626', fill: '#00F701' },
-    {
-      opacity: 1,
-      x: '123.6484',
-      y: '16.8626',
-      width: '16.8626',
-      height: '16.8626',
-      fill: '#00F701',
-    },
-  ],
-  bottomRight: [
-    {
-      opacity: 0.6,
-      width: '16.8626',
-      height: '33.726',
-      fill: '#FA4EDF',
-      transform: 'matrix(0 1 1 0 0 0)',
-    },
-    {
-      opacity: 0.6,
-      width: '34.241',
-      height: '16.8626',
-      fill: '#FA4EDF',
-      transform: 'matrix(0 1 1 0 16.891 0)',
-    },
-    {
-      opacity: 0.6,
-      width: '16.8626',
-      height: '68.482',
-      fill: '#FA4EDF',
-      transform: 'matrix(-1 0 0 1 33.739 16.888)',
-    },
-    {
-      opacity: 0.6,
-      width: '16.8626',
-      height: '33.726',
-      fill: '#FA4EDF',
-      transform: 'matrix(0 1 1 0 0 33.776)',
-    },
-    {
-      opacity: 1,
-      width: '16.8626',
-      height: '16.8626',
-      fill: '#FA4EDF',
-      transform: 'matrix(-1 0 0 1 33.739 34.272)',
-    },
-    {
-      opacity: 0.6,
-      width: '16.8626',
-      height: '34.24',
-      fill: '#2ABBF8',
-      transform: 'matrix(-1 0 0 1 33.787 68)',
-    },
-    {
-      opacity: 0.4,
-      width: '16.8626',
-      height: '16.8626',
-      fill: '#1A8FCC',
-      transform: 'matrix(-1 0 0 1 33.787 85)',
-    },
-  ],
-} as const satisfies Record<string, readonly BlockRect[]>
-
-const GLOBAL_OPACITY = 0.55
-
-const BlockGroup = memo(function BlockGroup({
-  width,
-  height,
-  viewBox,
-  rects,
-}: {
-  width: number
-  height: number
-  viewBox: string
-  rects: readonly BlockRect[]
-}) {
-  return (
-    <svg
-      width={width}
-      height={height}
-      viewBox={viewBox}
-      fill='none'
-      xmlns='http://www.w3.org/2000/svg'
-      className='h-auto w-full'
-      style={{ opacity: GLOBAL_OPACITY }}
-    >
-      {rects.map((r, i) => (
-        <rect
-          key={i}
-          x={r.x}
-          y={r.y}
-          width={r.width}
-          height={r.height}
-          rx={RX}
-          fill={r.fill}
-          transform={r.transform}
-          opacity={r.opacity}
-        />
-      ))}
-    </svg>
-  )
-})
-
-export function AnimatedBlocks() {
-  return (
-    <div
-      className='pointer-events-none fixed inset-0 z-0 hidden overflow-hidden lg:block'
-      aria-hidden='true'
-    >
-      <div className='absolute top-[93px] right-0 w-[calc(140px+10.76vw)] max-w-[295px]'>
-        <BlockGroup width={295} height={34} viewBox='0 0 295 34' rects={RECTS.topRight} />
-      </div>
-
-      <div className='-left-24 absolute bottom-0 w-[calc(140px+10.76vw)] max-w-[295px] rotate-180'>
-        <BlockGroup width={295} height={34} viewBox='0 0 295 34' rects={RECTS.bottomLeft} />
-      </div>
-
-      <div className='-bottom-2 absolute right-0 w-[calc(16px+1.25vw)] max-w-[34px]'>
-        <BlockGroup width={34} height={102} viewBox='0 0 34 102' rects={RECTS.bottomRight} />
-      </div>
-    </div>
-  )
-}
@@ -2,6 +2,7 @@
 
 import { useState } from 'react'
 import { ChevronRight } from 'lucide-react'
+import { cn } from '@/lib/utils'
 
 interface FAQItem {
   question: string
@@ -31,9 +32,10 @@ function FAQItemRow({
         className='flex w-full cursor-pointer items-center gap-3 px-4 py-2.5 text-left font-[470] text-[0.875rem] text-[rgba(0,0,0,0.8)] transition-colors hover:bg-[rgba(0,0,0,0.02)] dark:text-[rgba(255,255,255,0.85)] dark:hover:bg-[rgba(255,255,255,0.03)]'
       >
         <ChevronRight
-          className={`h-3.5 w-3.5 shrink-0 text-[rgba(0,0,0,0.3)] transition-transform duration-200 dark:text-[rgba(255,255,255,0.3)] ${
-            isOpen ? 'rotate-90' : ''
-          }`}
+          className={cn(
+            'h-3.5 w-3.5 shrink-0 text-[rgba(0,0,0,0.3)] transition-transform duration-200 dark:text-[rgba(255,255,255,0.3)]',
+            isOpen && 'rotate-90'
+          )}
         />
         {item.question}
       </button>
@@ -81,11 +83,10 @@ export function FAQ({ items, title = 'Common Questions' }: FAQProps) {
       {items.map((item, index) => (
         <div
           key={index}
-          className={
-            index !== items.length - 1
-              ? 'border-[rgba(0,0,0,0.08)] border-b dark:border-[rgba(255,255,255,0.08)]'
-              : ''
-          }
+          className={cn(
+            index !== items.length - 1 &&
+              'border-[rgba(0,0,0,0.08)] border-b dark:border-[rgba(255,255,255,0.08)]'
+          )}
         >
           <FAQItemRow
             item={item}
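The refactor above swaps template-literal class strings for `cn(...)`. As a minimal sketch of what a `cn`-style combiner does (the project's real helper likely builds on clsx/tailwind-merge; this stand-in only shows the falsy-filtering behavior the diff relies on):

```typescript
// Illustrative cn() stand-in: keeps truthy class strings, drops false/null/undefined.
type ClassValue = string | false | null | undefined

function cn(...values: ClassValue[]): string {
  return values.filter(Boolean).join(' ')
}

// Mirrors the ChevronRight usage in the diff: the conditional class is
// appended only when isOpen is truthy, with no stray whitespace otherwise.
const isOpen = true
const chevronClasses = cn(
  'h-3.5 w-3.5 shrink-0 transition-transform duration-200',
  isOpen && 'rotate-90'
)
```

This is why `isOpen && 'rotate-90'` is safe inside `cn(...)`: when `isOpen` is false the expression evaluates to `false` and is dropped, whereas the old template literal left a trailing space.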
@@ -6,6 +6,7 @@ import type { ComponentType, SVGProps } from 'react'
import {
A2AIcon,
AgentMailIcon,
AgentPhoneIcon,
AgiloftIcon,
AhrefsIcon,
AirtableIcon,
@@ -91,6 +92,7 @@ import {
HuggingFaceIcon,
HunterIOIcon,
IAMIcon,
IdentityCenterIcon,
ImageIcon,
IncidentioIcon,
InfisicalIcon,
@@ -119,6 +121,7 @@ import {
MicrosoftSharepointIcon,
MicrosoftTeamsIcon,
MistralIcon,
MondayIcon,
MongoDBIcon,
MySQLIcon,
Neo4jIcon,
@@ -151,6 +154,7 @@ import {
RootlyIcon,
S3Icon,
SalesforceIcon,
SESIcon,
SearchIcon,
SecretsManagerIcon,
SendgridIcon,
@@ -201,6 +205,7 @@ type IconComponent = ComponentType<SVGProps<SVGSVGElement>>
export const blockTypeToIconMap: Record<string, IconComponent> = {
a2a: A2AIcon,
agentmail: AgentMailIcon,
agentphone: AgentPhoneIcon,
agiloft: AgiloftIcon,
ahrefs: AhrefsIcon,
airtable: AirtableIcon,
@@ -226,8 +231,10 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
cloudflare: CloudflareIcon,
cloudformation: CloudFormationIcon,
cloudwatch: CloudWatchIcon,
confluence: ConfluenceIcon,
confluence_v2: ConfluenceIcon,
crowdstrike: CrowdStrikeIcon,
cursor: CursorIcon,
cursor_v2: CursorIcon,
dagster: DagsterIcon,
databricks: DatabricksIcon,
@@ -245,19 +252,25 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
enrich: EnrichSoIcon,
evernote: EvernoteIcon,
exa: ExaAIIcon,
extend: ExtendIcon,
extend_v2: ExtendIcon,
fathom: FathomIcon,
file: DocumentIcon,
file_v3: DocumentIcon,
firecrawl: FirecrawlIcon,
fireflies: FirefliesIcon,
fireflies_v2: FirefliesIcon,
gamma: GammaIcon,
github: GithubIcon,
github_v2: GithubIcon,
gitlab: GitLabIcon,
gmail: GmailIcon,
gmail_v2: GmailIcon,
gong: GongIcon,
google_ads: GoogleAdsIcon,
google_bigquery: GoogleBigQueryIcon,
google_books: GoogleBooksIcon,
google_calendar: GoogleCalendarIcon,
google_calendar_v2: GoogleCalendarIcon,
google_contacts: GoogleContactsIcon,
google_docs: GoogleDocsIcon,
@@ -268,7 +281,9 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
google_meet: GoogleMeetIcon,
google_pagespeed: GooglePagespeedIcon,
google_search: GoogleIcon,
google_sheets: GoogleSheetsIcon,
google_sheets_v2: GoogleSheetsIcon,
google_slides: GoogleSlidesIcon,
google_slides_v2: GoogleSlidesIcon,
google_tasks: GoogleTasksIcon,
google_translate: GoogleTranslateIcon,
@@ -283,20 +298,24 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
huggingface: HuggingFaceIcon,
hunter: HunterIOIcon,
iam: IAMIcon,
identity_center: IdentityCenterIcon,
image_generator: ImageIcon,
imap: MailServerIcon,
incidentio: IncidentioIcon,
infisical: InfisicalIcon,
intercom: IntercomIcon,
intercom_v2: IntercomIcon,
jina: JinaAIIcon,
jira: JiraIcon,
jira_service_management: JiraServiceManagementIcon,
kalshi: KalshiIcon,
kalshi_v2: KalshiIcon,
ketch: KetchIcon,
knowledge: PackageSearchIcon,
langsmith: LangsmithIcon,
launchdarkly: LaunchDarklyIcon,
lemlist: LemlistIcon,
linear: LinearIcon,
linear_v2: LinearIcon,
linkedin: LinkedInIcon,
linkup: LinkupIcon,
@@ -308,13 +327,17 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
memory: BrainIcon,
microsoft_ad: AzureIcon,
microsoft_dataverse: MicrosoftDataverseIcon,
microsoft_excel: MicrosoftExcelIcon,
microsoft_excel_v2: MicrosoftExcelIcon,
microsoft_planner: MicrosoftPlannerIcon,
microsoft_teams: MicrosoftTeamsIcon,
mistral_parse: MistralIcon,
mistral_parse_v3: MistralIcon,
monday: MondayIcon,
mongodb: MongoDBIcon,
mysql: MySQLIcon,
neo4j: Neo4jIcon,
notion: NotionIcon,
notion_v2: NotionIcon,
obsidian: ObsidianIcon,
okta: OktaIcon,
@@ -331,12 +354,14 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
postgresql: PostgresIcon,
posthog: PosthogIcon,
profound: ProfoundIcon,
pulse: PulseIcon,
pulse_v2: PulseIcon,
qdrant: QdrantIcon,
quiver: QuiverIcon,
rds: RDSIcon,
reddit: RedditIcon,
redis: RedisIcon,
reducto: ReductoIcon,
reducto_v2: ReductoIcon,
resend: ResendIcon,
revenuecat: RevenueCatIcon,
@@ -350,6 +375,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
sentry: SentryIcon,
serper: SerperIcon,
servicenow: ServiceNowIcon,
ses: SESIcon,
sftp: SftpIcon,
sharepoint: MicrosoftSharepointIcon,
shopify: ShopifyIcon,
@@ -362,11 +388,13 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
stagehand: StagehandIcon,
stripe: StripeIcon,
sts: STSIcon,
stt: STTIcon,
stt_v2: STTIcon,
supabase: SupabaseIcon,
tailscale: TailscaleIcon,
tavily: TavilyIcon,
telegram: TelegramIcon,
textract: TextractIcon,
textract_v2: TextractIcon,
tinybird: TinybirdIcon,
translate: TranslateIcon,
@@ -377,7 +405,9 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
typeform: TypeformIcon,
upstash: UpstashIcon,
vercel: VercelIcon,
video_generator: VideoIcon,
video_generator_v2: VideoIcon,
vision: EyeIcon,
vision_v2: EyeIcon,
wealthbox: WealthboxIcon,
webflow: WebflowIcon,

@@ -1,6 +1,5 @@
'use client'

import { useEffect, useState } from 'react'
import { Check } from 'lucide-react'
import { useParams, usePathname, useRouter } from 'next/navigation'
import {
@@ -25,24 +24,9 @@ export function LanguageDropdown() {
const params = useParams()
const router = useRouter()

const [currentLang, setCurrentLang] = useState(() => {
const langFromParams = params?.lang as string
return langFromParams && Object.keys(languages).includes(langFromParams) ? langFromParams : 'en'
})

useEffect(() => {
const langFromParams = params?.lang as string

if (langFromParams && Object.keys(languages).includes(langFromParams)) {
if (langFromParams !== currentLang) {
setCurrentLang(langFromParams)
}
} else {
if (currentLang !== 'en') {
setCurrentLang('en')
}
}
}, [params])
const langFromParams = params?.lang as string
const currentLang =
langFromParams && Object.keys(languages).includes(langFromParams) ? langFromParams : 'en'

const handleLanguageChange = (locale: string) => {
if (locale === currentLang) return

@@ -1,6 +1,6 @@
'use client'

import { useEffect, useRef } from 'react'
import { useEffect, useLayoutEffect, useRef } from 'react'
import { getAssetUrl } from '@/lib/utils'

interface LightboxProps {
@@ -9,10 +9,12 @@ interface LightboxProps {
src: string
alt: string
type: 'image' | 'video'
startTime?: number
}

export function Lightbox({ isOpen, onClose, src, alt, type }: LightboxProps) {
export function Lightbox({ isOpen, onClose, src, alt, type, startTime }: LightboxProps) {
const overlayRef = useRef<HTMLDivElement>(null)
const videoRef = useRef<HTMLVideoElement>(null)

useEffect(() => {
const handleKeyDown = (event: KeyboardEvent) => {
@@ -40,6 +42,12 @@ export function Lightbox({ isOpen, onClose, src, alt, type }: LightboxProps) {
}
}, [isOpen, onClose])

useLayoutEffect(() => {
if (isOpen && type === 'video' && videoRef.current && startTime != null && startTime > 0) {
videoRef.current.currentTime = startTime
}
}, [isOpen, startTime, type])

if (!isOpen) return null

return (
@@ -61,6 +69,7 @@ export function Lightbox({ isOpen, onClose, src, alt, type }: LightboxProps) {
/>
) : (
<video
ref={videoRef}
src={getAssetUrl(src)}
autoPlay
loop

@@ -1,7 +1,7 @@
'use client'

import { useState } from 'react'
import { getAssetUrl } from '@/lib/utils'
import { useRef, useState } from 'react'
import { cn, getAssetUrl } from '@/lib/utils'
import { Lightbox } from './lightbox'

interface VideoProps {
@@ -12,6 +12,8 @@ interface VideoProps {
muted?: boolean
playsInline?: boolean
enableLightbox?: boolean
width?: number
height?: number
}

export function Video({
@@ -22,11 +24,16 @@ export function Video({
muted = true,
playsInline = true,
enableLightbox = true,
width,
height,
}: VideoProps) {
const videoRef = useRef<HTMLVideoElement>(null)
const startTimeRef = useRef(0)
const [isLightboxOpen, setIsLightboxOpen] = useState(false)

const handleVideoClick = () => {
if (enableLightbox) {
startTimeRef.current = videoRef.current?.currentTime ?? 0
setIsLightboxOpen(true)
}
}
@@ -34,11 +41,17 @@ export function Video({
return (
<>
<video
ref={videoRef}
autoPlay={autoPlay}
loop={loop}
muted={muted}
playsInline={playsInline}
className={`${className} ${enableLightbox ? 'cursor-pointer transition-opacity hover:opacity-95' : ''}`}
width={width}
height={height}
className={cn(
className,
enableLightbox && 'cursor-pointer transition-opacity hover:opacity-95'
)}
src={getAssetUrl(src)}
onClick={handleVideoClick}
/>
@@ -50,6 +63,7 @@ export function Video({
src={src}
alt={`Video: ${src}`}
type='video'
startTime={startTimeRef.current}
/>
)}
</>

@@ -51,7 +51,7 @@ Willkommen bei Sim, einem visuellen Workflow-Builder für KI-Anwendungen. Erstel
<Card title="MCP-Integration" href="/mcp">
Externe Dienste mit dem Model Context Protocol verbinden
</Card>
<Card title="SDKs" href="/sdks">
<Card title="SDKs" href="/api-reference">
Sim in Ihre Anwendungen integrieren
</Card>
</Cards>
@@ -65,14 +65,14 @@ Execute a workflow with optional input data.
```python
result = client.execute_workflow(
    "workflow-id",
    input_data={"message": "Hello, world!"},
    input={"message": "Hello, world!"},
    timeout=30.0  # 30 seconds
)
```

**Parameters:**
- `workflow_id` (str): The ID of the workflow to execute
- `input_data` (dict, optional): Input data to pass to the workflow
- `input` (dict, optional): Input data to pass to the workflow
- `timeout` (float, optional): Timeout in seconds (default: 30.0)
- `stream` (bool, optional): Enable streaming responses (default: False)
- `selected_outputs` (list[str], optional): Block outputs to stream in `blockName.attribute` format (e.g., `["agent1.content"]`)
@@ -144,7 +144,7 @@ Execute a workflow with automatic retry on rate limit errors using exponential b
```python
result = client.execute_with_retry(
    "workflow-id",
    input_data={"message": "Hello"},
    input={"message": "Hello"},
    timeout=30.0,
    max_retries=3,  # Maximum number of retries
    initial_delay=1.0,  # Initial delay in seconds
@@ -155,7 +155,7 @@ result = client.execute_with_retry(

**Parameters:**
- `workflow_id` (str): The ID of the workflow to execute
- `input_data` (dict, optional): Input data to pass to the workflow
- `input` (dict, optional): Input data to pass to the workflow
- `timeout` (float, optional): Timeout in seconds
- `stream` (bool, optional): Enable streaming responses
- `selected_outputs` (list, optional): Block outputs to stream
@@ -359,7 +359,7 @@ def run_workflow():
    # Execute the workflow
    result = client.execute_workflow(
        "my-workflow-id",
        input_data={
        input={
            "message": "Process this data",
            "user_id": "12345"
        }
@@ -488,7 +488,7 @@ def execute_async():
    # Start async execution
    result = client.execute_workflow(
        "workflow-id",
        input_data={"data": "large dataset"},
        input={"data": "large dataset"},
        async_execution=True  # Execute asynchronously
    )

@@ -533,7 +533,7 @@ def execute_with_retry_handling():
    # Automatically retries on rate limit
    result = client.execute_with_retry(
        "workflow-id",
        input_data={"message": "Process this"},
        input={"message": "Process this"},
        max_retries=5,
        initial_delay=1.0,
        max_delay=60.0,
@@ -615,7 +615,7 @@ def execute_with_streaming():
    # Enable streaming for specific block outputs
    result = client.execute_workflow(
        "workflow-id",
        input_data={"message": "Count to five"},
        input={"message": "Count to five"},
        stream=True,
        selected_outputs=["agent1.content"]  # Use blockName.attribute format
    )
@@ -758,4 +758,15 @@ Configure the client using environment variables:

## License

Apache-2.0
Apache-2.0

import { FAQ } from '@/components/ui/faq'

<FAQ items={[
{ question: "Do I need to deploy a workflow before I can execute it via the SDK?", answer: "Yes. Workflows must be deployed before they can be executed through the SDK. You can use the validate_workflow() method to check whether a workflow is deployed and ready. If it returns False, deploy the workflow from the Sim UI first and create or select an API key during deployment." },
{ question: "What is the difference between sync and async execution?", answer: "Sync execution (the default) blocks until the workflow completes and returns the full result. Async execution (async_execution=True) returns immediately with a task ID that you can poll using get_job_status(). Use async mode for long-running workflows to avoid request timeouts. Async job statuses include queued, processing, completed, failed, and cancelled." },
{ question: "How does the SDK handle rate limiting?", answer: "The SDK provides built-in rate limiting support through the execute_with_retry() method. It uses exponential backoff (1s, 2s, 4s, 8s...) with 25% jitter to avoid thundering herd problems. If the API returns a retry-after header, that value is used instead. You can configure max_retries, initial_delay, max_delay, and backoff_multiplier. Use get_rate_limit_info() to check your current rate limit status." },
{ question: "Can I use the Python SDK as a context manager?", answer: "Yes. The SimStudioClient supports Python's context manager protocol. Use it with the 'with' statement to automatically close the underlying HTTP session when you are done, which is especially useful for scripts that create and discard client instances." },
{ question: "How do I handle different types of errors from the SDK?", answer: "The SDK raises SimStudioError with a code property for API-specific errors. Common error codes are UNAUTHORIZED (invalid API key), TIMEOUT (request timed out), RATE_LIMIT_EXCEEDED (too many requests), USAGE_LIMIT_EXCEEDED (billing limit reached), and EXECUTION_ERROR (workflow failed). Use the error code to implement targeted error handling and recovery logic." },
{ question: "How do I monitor my API usage and remaining quota?", answer: "Use the get_usage_limits() method to check your current usage. It returns sync and async rate limit details (limit, remaining, reset time, whether you are currently limited), plus your current period cost, usage limit, and plan tier. This lets you monitor consumption and alert before hitting limits." },
]} />
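The retry FAQ above describes exponential backoff (1s, 2s, 4s, 8s...) capped at a maximum delay, with 25% jitter. A minimal sketch of that delay schedule — the function name and defaults here are illustrative, not part of the SDK:

```python
import random


def backoff_delay(attempt: int, initial_delay: float = 1.0,
                  max_delay: float = 60.0, multiplier: float = 2.0) -> float:
    """Delay before retry number `attempt` (0-based): exponential growth
    capped at max_delay, plus up to 25% random jitter."""
    base = min(initial_delay * (multiplier ** attempt), max_delay)
    return base + base * random.uniform(0.0, 0.25)


# Attempts 0..3 produce roughly 1s, 2s, 4s, 8s before jitter
delays = [backoff_delay(n) for n in range(4)]
```

If the API returned a `retry-after` header, the SDK would use that value instead of the computed one.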
@@ -78,16 +78,15 @@ new SimStudioClient(config: SimStudioConfig)
Execute a workflow with optional input data.

```typescript
const result = await client.executeWorkflow('workflow-id', {
  input: { message: 'Hello, world!' },
const result = await client.executeWorkflow('workflow-id', { message: 'Hello, world!' }, {
  timeout: 30000 // 30 seconds
});
```

**Parameters:**
- `workflowId` (string): The ID of the workflow to execute
- `input` (any, optional): Input data to pass to the workflow
- `options` (ExecutionOptions, optional):
  - `input` (any): Input data to pass to the workflow
  - `timeout` (number): Timeout in milliseconds (default: 30000)
  - `stream` (boolean): Enable streaming responses (default: false)
  - `selectedOutputs` (string[]): Block outputs to stream in `blockName.attribute` format (e.g., `["agent1.content"]`)
@@ -158,8 +157,7 @@ if (status.status === 'completed') {
Execute a workflow with automatic retry on rate limit errors using exponential backoff.

```typescript
const result = await client.executeWithRetry('workflow-id', {
  input: { message: 'Hello' },
const result = await client.executeWithRetry('workflow-id', { message: 'Hello' }, {
  timeout: 30000
}, {
  maxRetries: 3, // Maximum number of retries
@@ -171,6 +169,7 @@ const result = await client.executeWithRetry('workflow-id', {

**Parameters:**
- `workflowId` (string): The ID of the workflow to execute
- `input` (any, optional): Input data to pass to the workflow
- `options` (ExecutionOptions, optional): Same as `executeWorkflow()`
- `retryOptions` (RetryOptions, optional):
  - `maxRetries` (number): Maximum number of retries (default: 3)
@@ -389,10 +388,8 @@ async function runWorkflow() {

// Execute the workflow
const result = await client.executeWorkflow('my-workflow-id', {
  input: {
    message: 'Process this data',
    userId: '12345'
  }
});

if (result.success) {
@@ -508,8 +505,7 @@ app.post('/execute-workflow', async (req, res) => {
try {
  const { workflowId, input } = req.body;

  const result = await client.executeWorkflow(workflowId, {
    input,
  const result = await client.executeWorkflow(workflowId, input, {
    timeout: 60000
  });

@@ -555,8 +551,7 @@ export default async function handler(
try {
  const { workflowId, input } = req.body;

  const result = await client.executeWorkflow(workflowId, {
    input,
  const result = await client.executeWorkflow(workflowId, input, {
    timeout: 30000
  });

@@ -586,9 +581,7 @@ const client = new SimStudioClient({
async function executeClientSideWorkflow() {
  try {
    const result = await client.executeWorkflow('workflow-id', {
      input: {
        userInput: 'Hello from browser'
      }
    });

    console.log('Workflow result:', result);
@@ -642,10 +635,8 @@ Alternatively, you can manually provide files using the URL format:

// Include files under the field name from your API trigger's input format
const result = await client.executeWorkflow('workflow-id', {
  input: {
    documents: files, // Must match your workflow's "files" field name
    instructions: 'Analyze these documents'
  }
});

console.log('Result:', result);
@@ -669,10 +660,8 @@ Alternatively, you can manually provide files using the URL format:

// Include files under the field name from your API trigger's input format
const result = await client.executeWorkflow('workflow-id', {
  input: {
    documents: [file], // Must match your workflow's "files" field name
    query: 'Summarize this document'
  }
});
```
</Tab>
@@ -712,8 +701,7 @@ export function useWorkflow(): UseWorkflowResult {
setResult(null);

try {
  const workflowResult = await client.executeWorkflow(workflowId, {
    input,
  const workflowResult = await client.executeWorkflow(workflowId, input, {
    timeout: 30000
  });
  setResult(workflowResult);
@@ -774,8 +762,7 @@ const client = new SimStudioClient({
async function executeAsync() {
  try {
    // Start async execution
    const result = await client.executeWorkflow('workflow-id', {
      input: { data: 'large dataset' },
    const result = await client.executeWorkflow('workflow-id', { data: 'large dataset' }, {
      async: true // Execute asynchronously
    });

@@ -823,9 +810,7 @@ const client = new SimStudioClient({
async function executeWithRetryHandling() {
  try {
    // Automatically retries on rate limit
    const result = await client.executeWithRetry('workflow-id', {
      input: { message: 'Process this' }
    }, {
    const result = await client.executeWithRetry('workflow-id', { message: 'Process this' }, {}, {
      maxRetries: 5,
      initialDelay: 1000,
      maxDelay: 60000,
@@ -908,8 +893,7 @@ const client = new SimStudioClient({
async function executeWithStreaming() {
  try {
    // Enable streaming for specific block outputs
    const result = await client.executeWorkflow('workflow-id', {
      input: { message: 'Count to five' },
    const result = await client.executeWorkflow('workflow-id', { message: 'Count to five' }, {
      stream: true,
      selectedOutputs: ['agent1.content'] // Use blockName.attribute format
    });
@@ -1033,3 +1017,14 @@ function StreamingWorkflow() {
## License

Apache-2.0

import { FAQ } from '@/components/ui/faq'

<FAQ items={[
{ question: "Do I need to deploy a workflow before I can execute it via the SDK?", answer: "Yes. Workflows must be deployed before they can be executed through the SDK. You can use the validateWorkflow() method to check whether a workflow is deployed and ready. If it returns false, deploy the workflow from the Sim UI first and create or select an API key during deployment." },
{ question: "What is the difference between sync and async execution?", answer: "Sync execution (the default) blocks until the workflow completes and returns the full result. Async execution returns immediately with a task ID that you can poll using getJobStatus(). Use async mode for long-running workflows to avoid request timeouts. Async job statuses include queued, processing, completed, failed, and cancelled." },
{ question: "How does streaming work with the SDK?", answer: "Enable streaming by setting stream: true and specifying selectedOutputs with block names and attributes in blockName.attribute format (e.g., ['agent1.content']). The response uses Server-Sent Events (SSE) format, sending incremental chunks as the workflow executes. Each chunk includes the blockId and the text content. A final done event includes the execution metadata." },
{ question: "How does the SDK handle rate limiting?", answer: "The SDK provides built-in rate limiting support through the executeWithRetry() method. It uses exponential backoff (1s, 2s, 4s, 8s...) with 25% jitter to avoid thundering herd problems. If the API returns a retry-after header, that value is used instead. You can configure maxRetries, initialDelay, maxDelay, and backoffMultiplier. Use getRateLimitInfo() to check your current rate limit status." },
{ question: "Is it safe to use the SDK in browser-side code?", answer: "You can use the SDK in the browser, but you should not expose your API key in client-side code. In production, use a backend proxy server to handle SDK calls, or use a public API key with limited permissions. The SDK works with both Node.js and browser environments, but sensitive keys should stay server-side." },
{ question: "How do I send files to a workflow through the SDK?", answer: "File objects are automatically detected and converted to base64 format. Include them in the input object under the field name that matches your workflow's API trigger input format. In the browser, pass File objects directly from file inputs. In Node.js, create File objects from buffers. You can also provide files as URL references with type, data, name, and mime fields." },
]} />

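The streaming FAQ above describes an SSE response: `data:` lines carrying JSON chunks with a block identifier and text content, followed by a final done event. A minimal sketch of parsing such a body — the exact field names (`blockId`, `chunk`, `event`) are assumptions based on that description, not a documented wire format:

```python
import json


def parse_sse_events(raw: str) -> list[dict]:
    """Split an SSE body into parsed JSON payloads.

    Assumes each event is a single `data: {...}` line and events are
    separated by blank lines, per the SSE text/event-stream format.
    """
    events = []
    for block in raw.strip().split("\n\n"):
        for line in block.splitlines():
            if line.startswith("data: "):
                events.append(json.loads(line[len("data: "):]))
    return events


# Hypothetical two-event stream: one content chunk, then the done event
sample = (
    'data: {"blockId": "agent1", "chunk": "Hello"}\n\n'
    'data: {"event": "done"}\n\n'
)
events = parse_sse_events(sample)
```

A real client would read the response incrementally rather than buffering the whole body, but the per-event parsing is the same.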
@@ -2,10 +2,12 @@
title: Function
---

import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'

The Function block executes custom JavaScript or TypeScript code in your workflows. Transform data, perform calculations, or implement custom logic.
The Function block executes custom JavaScript, TypeScript, or Python code in your workflows. Transform data, perform calculations, or implement custom logic.

<div className="flex justify-center">
<Image
@@ -41,6 +43,8 @@ Input → Function (Validate & Sanitize) → API (Save to Database)

### Example: Loyalty Score Calculator

<Tabs items={['JavaScript', 'Python']}>
<Tab value="JavaScript">
```javascript title="loyalty-calculator.js"
// Process customer data and calculate loyalty score
const { purchaseHistory, accountAge, supportTickets } = <agent>;
@@ -64,6 +68,120 @@ return {
metrics: { spendScore, frequencyScore, supportScore }
};
```
</Tab>
<Tab value="Python">
```python title="loyalty-calculator.py"
import json

# Reference outputs from other blocks using angle bracket syntax
data = json.loads('<agent>')
purchase_history = data["purchaseHistory"]
account_age = data["accountAge"]
support_tickets = data["supportTickets"]

# Calculate metrics
total_spent = sum(p["amount"] for p in purchase_history)
purchase_frequency = len(purchase_history) / (account_age / 365)
ticket_ratio = support_tickets["resolved"] / support_tickets["total"]

# Calculate loyalty score (0-100)
spend_score = min(total_spent / 1000 * 30, 30)
frequency_score = min(purchase_frequency * 20, 40)
support_score = ticket_ratio * 30

loyalty_score = round(spend_score + frequency_score + support_score)

tier = "Platinum" if loyalty_score >= 80 else "Gold" if loyalty_score >= 60 else "Silver"

result = {
    "customer": data["name"],
    "loyaltyScore": loyalty_score,
    "loyaltyTier": tier,
    "metrics": {
        "spendScore": spend_score,
        "frequencyScore": frequency_score,
        "supportScore": support_score
    }
}
print(json.dumps(result))
```
</Tab>
</Tabs>

## Python Support

The Function block supports Python as an alternative to JavaScript. Python code runs in a secure [E2B](https://e2b.dev) cloud sandbox.

<div className="flex justify-center">
<Image
src="/static/blocks/function-python.png"
alt="Function block with Python selected"
width={400}
height={500}
className="my-6"
/>
</div>

### Enabling Python

Select **Python** from the language dropdown in the Function block. Python execution requires E2B to be enabled on your Sim instance.

<Callout type="warn">
If you don't see Python as an option in the language dropdown, E2B is not enabled. This only applies to self-hosted instances — E2B is enabled by default on sim.ai.
</Callout>

<Callout type="info">
Python code always runs in the E2B sandbox, even for simple scripts without imports. This ensures a secure, isolated execution environment.
</Callout>

### Returning Results

In Python, print your result as JSON to stdout. The Function block captures stdout and makes it available via `<function.result>`:

```python title="example.py"
import json

data = {"status": "processed", "count": 42}
print(json.dumps(data))
```

### Available Libraries

The E2B sandbox includes the Python standard library (`json`, `re`, `datetime`, `math`, `os`, `collections`, etc.) and common packages like `matplotlib` for visualization. Charts generated with matplotlib are captured as images automatically.

<Callout type="info">
The exact set of pre-installed packages depends on the E2B sandbox configuration. If a package you need isn't available, consider calling an external API from your code instead.
</Callout>

### Matplotlib Charts

When your Python code generates matplotlib figures, they are automatically captured and returned as base64-encoded PNG images in the output:

```python title="chart.py"
import matplotlib.pyplot as plt
import json

data = json.loads('<api.data>')

plt.figure(figsize=(10, 6))
plt.bar(data["labels"], data["values"])
plt.title("Monthly Revenue")
plt.xlabel("Month")
plt.ylabel("Revenue ($)")
plt.savefig("chart.png")
plt.show()
```

{/* TODO: Screenshot of Python code execution output in the logs panel */}

### JavaScript vs. Python

| | JavaScript | Python |
|--|-----------|--------|
| **Execution** | Local VM (fast) or E2B sandbox (with imports) | Always E2B sandbox |
| **Returning results** | `return { ... }` | `print(json.dumps({ ... }))` |
| **HTTP requests** | `fetch()` built-in | `requests` or `httpx` |
| **Best for** | Quick transforms, JSON manipulation | Data science, charting, complex math |

## Best Practices

@@ -1,225 +1,71 @@
---
title: Copilot
description: Your per-workflow AI assistant for building and editing workflows.
---

import { Callout } from 'fumadocs-ui/components/callout'
import { Card, Cards } from 'fumadocs-ui/components/card'
import { Image } from '@/components/ui/image'
import { MessageCircle, Hammer, ListChecks, Zap, Globe, Paperclip, History, RotateCcw, Brain } from 'lucide-react'
import { Video } from '@/components/ui/video'
import { FAQ } from '@/components/ui/faq'

Copilot is your in-editor assistant that helps you build and edit workflows. It can:
Copilot is the AI assistant built into every workflow editor. It is scoped to the workflow you have open — it reads the current structure, makes changes directly, and saves checkpoints so you can revert if needed.

- **Explain**: Answer questions about Sim and your current workflow
- **Guide**: Suggest edits and best practices
- **Build**: Add blocks, wire connections, and configure settings
- **Debug**: Analyze execution issues and optimize performance
For workspace-wide tasks (managing multiple workflows, running research, working with tables, scheduling jobs), use [Mothership](/mothership).

<Callout type="info">
Copilot is a Sim-managed service. For self-hosted deployments:
1. Go to [sim.ai](https://sim.ai) → Settings → Copilot and generate a Copilot API key
2. Set `COPILOT_API_KEY` in your self-hosted environment
Copilot is a Sim-managed service. For self-hosted deployments, go to [sim.ai](https://sim.ai) → Settings → Copilot, generate a Copilot API key, then set `COPILOT_API_KEY` in your self-hosted environment.
</Callout>

## Modes
<Video src="copilot/copilot.mp4" width={700} height={450} />

Switch between modes using the mode selector at the bottom of the input area.
## What Copilot Can Do

<Cards>
  <Card
    title={
      <span className="inline-flex items-center gap-2">
        <MessageCircle className="h-4 w-4 text-muted-foreground" />
        Ask
      </span>
    }
  >
    <div className="m-0 text-sm">
      Q&A mode for explanations, guidance, and suggestions without making changes to your workflow.
    </div>
  </Card>
  <Card
    title={
      <span className="inline-flex items-center gap-2">
        <Hammer className="h-4 w-4 text-muted-foreground" />
        Build
      </span>
    }
  >
    <div className="m-0 text-sm">
      Workflow building mode. Copilot can add blocks, wire connections, edit configurations, and debug issues.
    </div>
  </Card>
  <Card
    title={
      <span className="inline-flex items-center gap-2">
        <ListChecks className="h-4 w-4 text-muted-foreground" />
        Plan
      </span>
    }
  >
    <div className="m-0 text-sm">
      Creates a step-by-step implementation plan for your workflow without making any changes. Helps you think through the approach before building.
    </div>
  </Card>
</Cards>
Copilot can read and modify the workflow you are currently editing:

## Models
- Add, configure, and connect blocks
- Edit existing block configurations
- Delete blocks and connections
- Debug failures by reading execution logs
- Answer questions about the workflow or how Sim works

Select your preferred AI model using the model selector at the bottom right of the input area.
## Chat History

**Available Models:**
- Claude 4.6 Opus (default), 4.5 Opus, Sonnet, Haiku
- GPT 5.2 Codex, Pro
- Gemini 3 Pro

Choose based on your needs: faster models for simple tasks, more capable models for complex workflows.

## Context Menu (@)

Use the `@` symbol to reference resources and give Copilot more context:

| Reference | Description |
|-----------|-------------|
| **Chats** | Previous copilot conversations |
| **Workflows** | Any workflow in your workspace |
| **Workflow Blocks** | Blocks in the current workflow |
| **Blocks** | Block types and templates |
| **Knowledge** | Uploaded documents and knowledge bases |
| **Docs** | Sim documentation |
| **Templates** | Workflow templates |
| **Logs** | Execution logs and results |

Type `@` in the input field to open the context menu, then search or browse to find what you need.

## Slash Commands (/)

Use slash commands for quick actions:

| Command | Description |
|---------|-------------|
| `/fast` | Fast mode execution |
| `/research` | Research and exploration mode |
| `/actions` | Execute agent actions |

**Web Commands:**

| Command | Description |
|---------|-------------|
| `/search` | Search the web |
| `/read` | Read a specific URL |
| `/scrape` | Scrape web page content |
| `/crawl` | Crawl multiple pages |

Type `/` in the input field to see available commands.

## Chat Management

### Starting a New Chat

Click the **+** button in the Copilot header to start a fresh conversation.

### Chat History

Click **History** to view previous conversations grouped by date. You can:
- Click a chat to resume it
- Delete chats you no longer need

### Editing Messages

Hover over any of your messages and click **Edit** to modify and resend it. This is useful for refining your prompts.

### Message Queue

If you send a message while Copilot is still responding, it gets queued. You can:
- View queued messages in the expandable queue panel
- Send a queued message immediately (aborts current response)
- Remove messages from the queue
Click **History** (clock icon) in the Copilot header to see past conversations for this workflow. Click any chat to resume it, or click **+** to start a new one.

## File Attachments

Click the attachment icon to upload files with your message. Supported file types include:
- Images (preview thumbnails shown)
- PDFs
- Text files, JSON, XML
- Other document formats
Click the attachment icon in the input to upload files alongside your message. Copilot can read images, PDFs, and text-based files as context.

Files are displayed as clickable thumbnails that open in a new tab.
## Checkpoints

## Checkpoints & Changes
When Copilot modifies a workflow, it saves a checkpoint of the previous state.

When Copilot makes changes to your workflow, it saves checkpoints so you can revert if needed.
To revert: hover over a Copilot message and click the checkpoints icon, then click **Revert** on the state you want to restore. Reverting cannot be undone.

### Viewing Checkpoints
## Thinking

Hover over a Copilot message and click the checkpoints icon to see saved workflow states for that message.
For complex requests, Copilot may show its reasoning in an expandable thinking block before responding. The block shows how long the thinking took and collapses after the response is complete.

### Reverting Changes
## Usage

Click **Revert** on any checkpoint to restore your workflow to that state. A confirmation dialog will warn that this action cannot be undone.
Copilot usage is billed per token and counts toward your plan's credit usage. If you reach your limit, enable on-demand billing from Settings → Subscription.

### Accepting Changes

When Copilot proposes changes, you can:
- **Accept**: Apply the proposed changes (`Mod+Shift+Enter`)
- **Reject**: Dismiss the changes and keep your current workflow

## Thinking Blocks

For complex requests, Copilot may show its reasoning process in expandable thinking blocks:

- Blocks auto-expand while Copilot is thinking
- Click to manually expand/collapse
- Shows duration of the thinking process
- Helps you understand how Copilot arrived at its solution

## Options Selection

When Copilot presents multiple options, you can select using:

| Control | Action |
|---------|--------|
| **1-9** | Select option by number |
| **Arrow Up/Down** | Navigate between options |
| **Enter** | Select highlighted option |

Selected options are highlighted; unselected options appear struck through.

## Keyboard Shortcuts

| Shortcut | Action |
|----------|--------|
| `@` | Open context menu |
| `/` | Open slash commands |
| `Arrow Up/Down` | Navigate menu items |
| `Enter` | Select menu item |
| `Esc` | Close menus |
| `Mod+Shift+Enter` | Accept Copilot changes |

## Usage Limits

Copilot usage is billed per token from the underlying LLM and counts toward your plan's credit usage. If you reach your usage limit, enable on-demand billing from Settings → Subscription to continue using Copilot beyond your plan's included credits.

<Callout type="info">
See the [Cost Calculation page](/execution/costs) for billing and plan details.
</Callout>
## Copilot MCP

You can use Copilot as an MCP server in your favorite editor or AI client. This lets you build, test, deploy, and manage Sim workflows directly from tools like Cursor, Claude Code, Claude Desktop, and VS Code.
You can use Copilot as an MCP server to build, test, and manage Sim workflows from external editors — Cursor, Claude Code, Claude Desktop, and VS Code.

### Generating a Copilot API Key

To connect to the Copilot MCP server, you need a **Copilot API key**:

1. Go to [sim.ai](https://sim.ai) and sign in
2. Navigate to **Settings** → **Copilot**
3. Click **Generate API Key**
4. Copy the key — it is only shown once

The key will look like `sk-sim-copilot-...`. You will use this in the configuration below.
The key will look like `sk-sim-copilot-...`.

### Cursor

Add the following to your `.cursor/mcp.json` (project-level) or global Cursor MCP settings:
Add to `.cursor/mcp.json`:

```json
{
@@ -234,12 +80,8 @@ Add the following to your `.cursor/mcp.json` (project-level) or global Cursor MC
}
```

Replace `YOUR_COPILOT_API_KEY` with the key you generated above.

### Claude Code

Run the following command to add the Copilot MCP server:

```bash
claude mcp add sim-copilot \
  --transport http \
@@ -247,11 +89,9 @@ claude mcp add sim-copilot \
  --header "X-API-Key: YOUR_COPILOT_API_KEY"
```

Replace `YOUR_COPILOT_API_KEY` with your key.

### Claude Desktop

Claude Desktop requires [`mcp-remote`](https://www.npmjs.com/package/mcp-remote) to connect to HTTP-based MCP servers. Add the following to your Claude Desktop config file (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):
Claude Desktop requires [`mcp-remote`](https://www.npmjs.com/package/mcp-remote). Add to `~/Library/Application Support/Claude/claude_desktop_config.json`:

```json
{
@@ -270,11 +110,9 @@ Claude Desktop requires [`mcp-remote`](https://www.npmjs.com/package/mcp-remote)
}
```

Replace `YOUR_COPILOT_API_KEY` with your key.

### VS Code

Add the following to your VS Code `settings.json` or workspace `.vscode/settings.json`:
Add to `settings.json` or `.vscode/settings.json`:

```json
{
@@ -292,21 +130,14 @@ Add the following to your VS Code `settings.json` or workspace `.vscode/settings
}
```

Replace `YOUR_COPILOT_API_KEY` with your key.

<Callout type="info">
For self-hosted deployments, replace `https://www.sim.ai` with your self-hosted Sim URL.
</Callout>

import { FAQ } from '@/components/ui/faq'

<FAQ items={[
  { question: "What is the difference between Ask, Build, and Plan mode?", answer: "Copilot has three modes. Ask mode is a read-only Q&A mode for explanations, guidance, and suggestions without making any changes to your workflow. Build mode allows Copilot to actively modify your workflow by adding blocks, wiring connections, editing configurations, and debugging issues. Plan mode creates a step-by-step implementation plan for your request without making any changes, so you can review the approach before committing. Use Ask when you want to learn or explore ideas, Plan when you want to see a proposed approach first, and Build when you want Copilot to make changes directly." },
  { question: "Does Copilot have access to my full workflow when answering questions?", answer: "Copilot has access to the workflow you are currently editing as context. You can also use the @ context menu to reference other workflows, previous chats, execution logs, knowledge bases, documentation, and templates to give Copilot additional context for your request." },
  { question: "How do I use Copilot from an external editor like Cursor or VS Code?", answer: "You can use Copilot as an MCP server from external editors. First, generate a Copilot API key from Settings > Copilot on sim.ai. Then add the MCP server configuration to your editor using the endpoint https://www.sim.ai/api/mcp/copilot with your API key in the X-API-Key header. Configuration examples are available for Cursor, Claude Code, Claude Desktop, and VS Code." },
  { question: "Can I revert changes that Copilot made to my workflow?", answer: "Yes. When Copilot makes changes in Build mode, it saves checkpoints of your workflow state. You can hover over a Copilot message and click the checkpoints icon to see saved states, then click Revert on any checkpoint to restore your workflow. Note that reverting cannot be undone, so review the checkpoint before confirming." },
  { question: "How does Copilot billing work?", answer: "Copilot usage is billed per token from the underlying LLM and counts toward your plan's credit usage. More capable models like Claude Opus cost more per token than lighter models like Haiku. If you reach your usage limit, you can enable on-demand billing from Settings > Subscription to continue using Copilot." },
  { question: "What do the slash commands like /research and /search do?", answer: "Slash commands trigger specialized behaviors. /fast enables fast mode execution, /research activates a research and exploration mode, and /actions executes agent actions. Web commands like /search, /read, /scrape, and /crawl let Copilot interact with the web to search for information, read URLs, scrape page content, or crawl multiple pages to gather context for your request." },
  { question: "How do I set up Copilot for a self-hosted deployment?", answer: "For self-hosted deployments, go to sim.ai > Settings > Copilot and generate a Copilot API key. Then set the COPILOT_API_KEY environment variable in your self-hosted environment. Copilot is a Sim-managed service, so the self-hosted instance communicates with Sim's servers to process requests." },
  { question: "How is Copilot different from Mothership?", answer: "Copilot is scoped to the workflow you have open — it reads and edits that workflow's blocks and connections. Mothership has access to your entire workspace and can build workflows, manage tables, run research, schedule jobs, and take actions across integrations." },
  { question: "Can Copilot access other workflows or workspace data?", answer: "Copilot is scoped to the current workflow. For tasks that span multiple workflows or require workspace-level context, use Mothership." },
  { question: "Can I revert changes Copilot made?", answer: "Yes. Copilot saves a checkpoint before each change. Hover over the message and click the checkpoints icon to see saved states, then click Revert to restore one. Reverting cannot be undone." },
  { question: "How does Copilot billing work?", answer: "Copilot usage is billed per token and counts toward your plan's credit usage. If you reach your limit, enable on-demand billing from Settings → Subscription." },
  { question: "How do I set up Copilot for a self-hosted deployment?", answer: "Go to sim.ai → Settings → Copilot and generate a Copilot API key. Set the COPILOT_API_KEY environment variable in your self-hosted environment. Copilot runs on Sim's infrastructure regardless of where you host the application." },
]} />

@@ -1,203 +1,121 @@
---
title: Credentials
description: Manage secrets, API keys, and OAuth connections for your workflows
title: Secrets
description: Manage API keys and environment variables for your workflows
---

import { Callout } from 'fumadocs-ui/components/callout'
import { Image } from '@/components/ui/image'
import { Step, Steps } from 'fumadocs-ui/components/steps'
import { FAQ } from '@/components/ui/faq'

Credentials provide a secure way to manage API keys, tokens, and third-party service connections across your workflows. Instead of hardcoding sensitive values into your workflow, you store them as credentials and reference them at runtime.
Secrets are key-value pairs that store sensitive data like API keys, tokens, and passwords. Instead of hardcoding values into your workflows, you store them as secrets and reference them by name at runtime.

Sim supports two categories of credentials: **secrets** for static values like API keys, and **OAuth accounts** for authenticated service connections like Google or Slack.
## Managing Secrets

## Getting Started

To manage credentials, open your workspace **Settings** and navigate to the **Secrets** tab.
To manage secrets, open your workspace **Settings** and navigate to the **Secrets** tab.

<Image
  src="/static/credentials/settings-secrets.png"
  alt="Settings modal showing the Secrets tab with a list of saved credentials"
  src="/static/secrets/secrets-list.png"
  alt="Secrets tab showing Workspace and Personal sections with inline key-value rows"
  width={700}
  height={200}
  height={500}
/>

From here you can search, create, and delete both secrets and OAuth connections.
Secrets are organized into two sections:

## Secrets
- **Workspace** — shared with all members of your workspace
- **Personal** — private to you

Secrets are key-value pairs that store sensitive data like API keys, tokens, and passwords. Each secret has a **key** (used to reference it in workflows) and a **value** (the actual secret).
### Adding a Secret

### Creating a Secret
Type a key name (e.g. `OPENAI_API_KEY`) into the **Key** column and its value into the **Value** column in the last empty row. A new empty row appears automatically as you type. Existing values are masked by default.

<Image
  src="/static/credentials/create-secret.png"
  alt="Create Secret dialog with fields for key, value, description, and scope toggle"
  width={500}
  height={400}
/>
When you're done, click **Save** to persist all changes.

<Steps>
  <Step>
    Click **+ Add** and select **Secret** as the type
  </Step>
  <Step>
    Enter a **Key** name (letters, numbers, and underscores only, e.g. `OPENAI_API_KEY`)
  </Step>
  <Step>
    Enter the **Value**
  </Step>
  <Step>
    Optionally add a **Description** to help your team understand what the secret is for
  </Step>
  <Step>
    Choose the **Scope** — Workspace or Personal
  </Step>
  <Step>
    Click **Create**
  </Step>
</Steps>
<Callout type="info">
Keys must use only letters, numbers, and underscores — no spaces or special characters.
</Callout>

### Using Secrets in Workflows
### Bulk Import

To reference a secret in any input field, type `{{` to open the dropdown. It will show your available secrets grouped by scope.
You can populate multiple secrets at once by pasting `.env`-style content into any key or value field. The parser supports standard `KEY=VALUE` pairs, `export KEY=VALUE`, quoted values, and inline comments.

### Editing and Deleting

Click directly into any key or value cell to edit it. To delete a secret, click the trash icon on its row and save.

## Using Secrets in Workflows

To reference a secret in any input field, type `{{` to open the variable dropdown. Your available secrets are listed grouped by scope (workspace, then personal).

<Image
  src="/static/credentials/secret-dropdown.png"
  alt="Typing {{ in a code block opens a dropdown showing available workspace secrets"
  alt="Typing {{ in an input opens a dropdown showing available secrets"
  width={400}
  height={250}
/>

Select the secret you want to use. The reference will appear highlighted in blue, indicating it will be resolved at runtime.
Select the secret you want to use. The reference appears highlighted in blue and is resolved to its actual value at runtime.

<Image
  src="/static/credentials/secret-resolved.png"
  alt="A resolved secret reference shown in blue text as {{OPENAI_API_KEY}}"
  alt="A resolved secret reference shown as {{OPENAI_API_KEY}}"
  width={400}
  height={200}
/>

<Callout type="warn">
Secret values are never exposed in the workflow editor or logs. They are only resolved during execution.
Secret values are never exposed in the workflow editor or execution logs — they are only resolved during execution.
</Callout>

### Bulk Import
## Secret Details

You can import multiple secrets at once by pasting `.env`-style content:

1. Click **+ Add**, then switch to **Bulk** mode
2. Paste your environment variables in `KEY=VALUE` format
3. Choose the scope for all imported secrets
4. Click **Create**

The parser supports standard `KEY=VALUE` pairs, quoted values, comments (`#`), and blank lines.
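For example, a pasted block like the following would be accepted (the keys and values here are placeholders, not real credentials):

```bash
# Comment lines start with #
OPENAI_API_KEY=sk-placeholder
STRIPE_SECRET_KEY="sk_live_placeholder"

DATABASE_URL='postgres://user:pass@localhost:5432/app'
```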

## OAuth Accounts

OAuth accounts are authenticated connections to third-party services like Google, Slack, GitHub, and more. Sim handles the OAuth flow, token storage, and automatic refresh.

You can connect **multiple accounts per provider** — for example, two separate Gmail accounts for different workflows.

### Connecting an OAuth Account
Click **Details** on any secret row to open its detail view.

<Image
  src="/static/credentials/create-oauth.png"
  alt="Create Secret dialog with OAuth Account type selected, showing display name and provider dropdown"
  width={500}
  src="/static/secrets/secret-details.png"
  alt="Secret details view showing Display Name, Description, and Members sections"
  width={700}
  height={400}
/>

<Steps>
  <Step>
    Click **+ Add** and select **OAuth Account** as the type
  </Step>
  <Step>
    Enter a **Display name** to identify this connection (e.g. "Work Gmail" or "Marketing Slack")
  </Step>
  <Step>
    Optionally add a **Description**
  </Step>
  <Step>
    Select the **Account** provider from the dropdown
  </Step>
  <Step>
    Click **Connect** and complete the authorization flow
  </Step>
</Steps>
From here you can:

### Using OAuth Accounts in Workflows
- Edit the **Display Name** and **Description**
- Manage **Members** — invite teammates by email and assign them an **Admin** or **Member** role

Blocks that require authentication (e.g. Gmail, Slack, Google Sheets) display a credential selector dropdown. Select the OAuth account you want the block to use.

<Image
  src="/static/credentials/oauth-selector.png"
  alt="Gmail block showing the account selector dropdown with a connected account and option to connect another"
  width={500}
  height={350}
/>

You can also connect additional accounts directly from the block by selecting **Connect another account** at the bottom of the dropdown.

<Callout type="info">
If a block requires an OAuth connection and none is selected, the workflow will fail at that step.
</Callout>
Click **Save** to apply changes, or **Back** to return to the list.

## Workspace vs. Personal

Credentials can be scoped to your **workspace** (shared with your team) or kept **personal** (private to you).

| | Workspace | Personal |
|---|---|---|
| **Visibility** | All workspace members | Only you |
| **Use in workflows** | Any member can use | Only you can use |
| **Best for** | Production workflows, shared services | Testing, personal API keys |
| **Who can edit** | Workspace admins | Only you |
| **Auto-shared** | Yes — all members get access on creation | No — only you have access |

<Callout type="info">
When a workspace and personal secret share the same key name, the **workspace secret takes precedence**.
When a workspace secret and a personal secret share the same key name, the **workspace secret takes precedence**.
</Callout>

### Resolution Order

When a workflow runs, Sim resolves secrets in this order:
When a workflow runs, secrets resolve in this order:

1. **Workspace secrets** are checked first
2. **Personal secrets** are used as a fallback — from the user who triggered the run (manual) or the workflow owner (automated runs via API, webhook, or schedule)
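The two steps above amount to a simple precedence check. A hypothetical sketch (the function and variable names are illustrative, not Sim's actual internals):

```python
# Hypothetical sketch of the documented precedence: workspace secrets
# are checked first, personal secrets act as the fallback.
def resolve_secret(key, workspace_secrets, personal_secrets):
    if key in workspace_secrets:
        return workspace_secrets[key]
    return personal_secrets.get(key)

workspace = {"OPENAI_API_KEY": "ws-value"}
personal = {"OPENAI_API_KEY": "personal-value", "DEV_TOKEN": "dev-value"}

print(resolve_secret("OPENAI_API_KEY", workspace, personal))  # ws-value
print(resolve_secret("DEV_TOKEN", workspace, personal))       # dev-value
```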

## Access Control

Each credential has role-based access control:

- **Admin** — can view, edit, delete, and manage who has access
- **Member** — can use the credential in workflows (read-only)

When you create a workspace secret, all current workspace members are automatically granted access. Personal secrets are only accessible to you by default.

### Sharing a Credential

To share a credential with specific team members:

1. Click **Details** on the credential
2. Invite members by email
3. Assign them an **Admin** or **Member** role

## Best Practices

- **Use workspace credentials for production** so workflows work regardless of who triggers them
- **Use personal credentials for development** to keep your test keys separate
- **Use workspace secrets for production** so workflows work regardless of who triggers them
- **Use personal secrets for development** to keep test keys separate
- **Name keys descriptively** — `STRIPE_SECRET_KEY` over `KEY1`
- **Connect multiple OAuth accounts** when you need different permissions or identities per workflow
- **Never hardcode secrets** in workflow input fields — always use `{{KEY}}` references

<FAQ items={[
  { question: "Are my secrets encrypted at rest?", answer: "Yes. Secret values and OAuth tokens are encrypted before being stored in the database. The platform uses server-side encryption so that raw secret values are never persisted in plaintext. Secret values are also never exposed in the workflow editor, logs, or API responses." },
  { question: "What happens if both a workspace secret and a personal secret have the same key name?", answer: "The workspace secret takes precedence. During execution, the resolver checks workspace secrets first and uses personal secrets only as a fallback. This ensures that production workflows use the shared, team-managed value." },
  { question: "Are my secrets encrypted at rest?", answer: "Yes. Secret values are encrypted before being stored in the database using server-side encryption, so raw values are never persisted in plaintext. They are also never exposed in the workflow editor, logs, or API responses." },
  { question: "What happens if both a workspace secret and a personal secret have the same key name?", answer: "The workspace secret takes precedence. During execution, the resolver checks workspace secrets first and uses personal secrets only as a fallback. This ensures production workflows use the shared, team-managed value." },
  { question: "Who determines which personal secret is used for automated runs?", answer: "For manual runs, the personal secrets of the user who clicked Run are used as fallback. For automated runs triggered by API, webhook, or schedule, the personal secrets of the workflow owner are used instead." },
  { question: "Does Sim handle OAuth token refresh automatically?", answer: "Yes. When an OAuth token is used during execution, the platform checks whether the access token has expired and automatically refreshes it using the stored refresh token before making the API call. You do not need to handle token refresh manually." },
  { question: "Can I connect multiple OAuth accounts for the same provider?", answer: "Yes. You can connect multiple accounts per provider (for example, two separate Gmail accounts). Each block that requires OAuth lets you select which specific account to use from the credential dropdown. This is useful when different workflows or blocks need different permissions or identities." },
  { question: "What happens if I delete a credential that is used in a workflow?", answer: "If a block references a deleted credential, the workflow will fail at that block during execution because the credential cannot be resolved. Make sure to update any blocks that reference a credential before deleting it." },
  { question: "Can I import secrets from a .env file?", answer: "Yes. The bulk import feature lets you paste .env-style content in KEY=VALUE format. The parser supports quoted values, comments (lines starting with #), and blank lines. All imported secrets are created with the scope you choose (workspace or personal)." },
  { question: "Can I import secrets from a .env file?", answer: "Yes. Paste .env-style content (KEY=VALUE format) into any key or value field and the secrets will be auto-populated. The parser supports export KEY=VALUE, quoted values, and inline comments." },
  { question: "What happens if I delete a secret that is used in a workflow?", answer: "The workflow will fail at any block that references the deleted secret during execution because the value cannot be resolved. Update any references before deleting a secret." },
]} />

@@ -1,5 +1,5 @@
{
  "title": "Credentials",
  "pages": ["index", "google-service-account"],
  "title": "Secrets",
  "pages": ["index"],
  "defaultOpen": false
}

216 apps/docs/content/docs/en/enterprise/access-control.mdx Normal file
@@ -0,0 +1,216 @@
---
title: Access Control
description: Restrict which models, blocks, and platform features each group of users can access
---

import { Callout } from 'fumadocs-ui/components/callout'
import { FAQ } from '@/components/ui/faq'
import { Image } from '@/components/ui/image'

Access Control lets workspace admins define permission groups that restrict what each set of workspace members can do — which AI model providers they can use, which workflow blocks they can place, and which platform features are visible to them. Permission groups are scoped to a single workspace: a user can be in different groups (or no group) in different workspaces. Restrictions are enforced both in the workflow executor and in Mothership, based on the workflow's workspace.

---

## How it works

Access control is built around **permission groups**. Each group belongs to a specific workspace and has a name, an optional description, and a configuration that defines what its members can and cannot do. A user can belong to at most one permission group **per workspace**, but can belong to different groups in different workspaces.

When a user runs a workflow or uses Mothership, Sim reads their group's configuration and applies it:

- **In the executor:** If a workflow uses a disallowed block type or model provider, execution halts immediately with an error. This applies to both manual runs and scheduled or API-triggered deployments.
- **In Mothership:** Disallowed blocks are filtered out of the block list so they cannot be added to a workflow. Disallowed tool types (MCP, custom tools, skills) are skipped if Mothership attempts to use them.
|
||||
|
||||
---
|
||||
|
||||
## Setup
|
||||
|
||||
### 1. Open Access Control settings
|
||||
|
||||
Go to **Settings → Enterprise → Access Control** in the workspace you want to manage. Each workspace has its own set of permission groups.
|
||||
|
||||
<Image src="/static/enterprise/access-control-groups.png" alt="Access Control settings showing a list of permission groups: Contractors, Sales, Engineering, and Marketing, each with Details and Delete actions" width={900} height={500} />
|
||||
|
||||
### 2. Create a permission group
|
||||
|
||||
Click **+ Create** and enter a name (required) and optional description. You can also enable **Auto-add new members** — when active, any new member who joins this workspace is automatically added to this group. Only one group per workspace can have this setting enabled at a time.
|
||||
|
||||
### 3. Configure permissions
|
||||
|
||||
Click **Details** on a group, then open **Configure Permissions**. There are three tabs.
|
||||
|
||||
#### Model Providers
|
||||
|
||||
Controls which AI model providers members of this group can use.
|
||||
|
||||
<Image src="/static/enterprise/access-control-model-providers.png" alt="Model Providers tab showing a grid of AI providers including Ollama, vLLM, OpenAI, Anthropic, Google, Azure OpenAI, and others with checkboxes to allow or restrict access" width={900} height={500} />

The list shows all providers available in Sim.
|
||||
|
||||
- **All checked (default):** All providers are allowed.
|
||||
- **Subset checked:** Only the selected providers are allowed. Any workflow block or agent using a provider not on the list will fail at execution time.
|
||||
|
||||
#### Blocks
|
||||
|
||||
Controls which workflow blocks members can place and execute.
|
||||
|
||||
<Image src="/static/enterprise/access-control-blocks.png" alt="Blocks tab showing Core Blocks (Agent, API, Condition, Function, Knowledge, etc.) and Tools (integrations like 1Password, A2A, Ahrefs, Airtable, and more) with checkboxes to allow or restrict each" width={900} height={500} />

Blocks are split into two sections: **Core Blocks** (Agent, API, Condition, Function, etc.) and **Tools** (all integration blocks).
|
||||
|
||||
- **All checked (default):** All blocks are allowed.
|
||||
- **Subset checked:** Only the selected blocks are allowed. Workflows that already contain a disallowed block will fail when run — they are not automatically modified.
|
||||
|
||||
<Callout type="info">
|
||||
The `start_trigger` block (the entry point of every workflow) is always allowed and cannot be restricted.
|
||||
</Callout>
|
||||
|
||||
#### Platform
|
||||
|
||||
Controls visibility of platform features and modules.
|
||||
|
||||
<Image src="/static/enterprise/access-control-platform.png" alt="Platform tab showing feature toggles grouped by category: Sidebar (Knowledge Base, Tables, Templates), Workflow Panel (Copilot), Settings Tabs, Tools, Deploy Tabs, Features, Logs, and Collaboration" width={900} height={500} />

Each checkbox maps to a specific feature; checking it hides or disables that feature for group members.
|
||||
|
||||
**Sidebar**
|
||||
|
||||
| Feature | Effect when checked |
|
||||
|---------|-------------------|
|
||||
| Knowledge Base | Hides the Knowledge Base section from the sidebar |
|
||||
| Tables | Hides the Tables section from the sidebar |
|
||||
| Templates | Hides the Templates section from the sidebar |
|
||||
|
||||
**Workflow Panel**
|
||||
|
||||
| Feature | Effect when checked |
|
||||
|---------|-------------------|
|
||||
| Copilot | Hides the Copilot panel inside the workflow editor |
|
||||
|
||||
**Settings Tabs**
|
||||
|
||||
| Feature | Effect when checked |
|
||||
|---------|-------------------|
|
||||
| Integrations | Hides the Integrations tab in Settings |
|
||||
| Secrets | Hides the Secrets tab in Settings |
|
||||
| API Keys | Hides the Sim Keys tab in Settings |
|
||||
| Files | Hides the Files tab in Settings |
|
||||
|
||||
**Tools**
|
||||
|
||||
| Feature | Effect when checked |
|
||||
|---------|-------------------|
|
||||
| MCP Tools | Disables the use of MCP tools in workflows and agents |
|
||||
| Custom Tools | Disables the use of custom tools in workflows and agents |
|
||||
| Skills | Disables the use of Sim Skills in workflows and agents |
|
||||
|
||||
**Deploy Tabs**
|
||||
|
||||
| Feature | Effect when checked |
|
||||
|---------|-------------------|
|
||||
| API | Hides the API deployment tab |
|
||||
| MCP | Hides the MCP deployment tab |
|
||||
| A2A | Hides the A2A deployment tab |
|
||||
| Chat | Hides the Chat deployment tab |
|
||||
| Template | Hides the Template deployment tab |
|
||||
|
||||
**Features**
|
||||
|
||||
| Feature | Effect when checked |
|
||||
|---------|-------------------|
|
||||
| Sim Mailer | Hides the Sim Mailer (Inbox) feature |
|
||||
| Public API | Disables public API access for deployed workflows |
|
||||
|
||||
**Logs**
|
||||
|
||||
| Feature | Effect when checked |
|
||||
|---------|-------------------|
|
||||
| Trace Spans | Hides trace span details in execution logs |
|
||||
|
||||
**Collaboration**
|
||||
|
||||
| Feature | Effect when checked |
|
||||
|---------|-------------------|
|
||||
| Invitations | Disables the ability to invite new members to the workspace |
|
||||
|
||||
### 4. Add members
|
||||
|
||||
Open the group's **Details** view and add members by searching for users by name or email. Only users who already have workspace-level access can be added. A user can only belong to one group per workspace — adding a user to a new group within the same workspace removes them from their current group for that workspace.
|
||||
|
||||
---
|
||||
|
||||
## Enforcement
|
||||
|
||||
### Workflow execution
|
||||
|
||||
Restrictions are enforced at the point of execution, not at save time. If a group's configuration changes after a workflow is built:
|
||||
|
||||
- **Block restrictions:** Any workflow run that reaches a disallowed block halts immediately with an error. The workflow is not modified — only execution is blocked.
|
||||
- **Model provider restrictions:** Any block or agent that uses a disallowed provider halts immediately with an error.
|
||||
- **Tool restrictions (MCP, custom tools, skills):** Agents that use a disallowed tool type halt immediately with an error.
|
||||
|
||||
This applies regardless of how the workflow is triggered — manually, via API, via schedule, or via webhook.
|
||||
|
||||
### Mothership
|
||||
|
||||
When a user opens Mothership, their permission group is read before any block or tool suggestions are made:
|
||||
|
||||
- Blocks not in the allowed list are filtered out of the block picker entirely — they do not appear as options.
|
||||
- If Mothership generates a workflow step that would use a disallowed tool (MCP, custom, or skills), that step is skipped and the reason is noted.
|
||||
|
||||
---
|
||||
|
||||
## User membership rules
|
||||
|
||||
- A user can belong to **at most one** permission group **per workspace**, but may be in different groups across different workspaces.
|
||||
- Moving a user to a new group within a workspace automatically removes them from their previous group in that workspace.
|
||||
- Users not assigned to any group in a workspace have no restrictions applied in that workspace (all blocks, providers, and features are available to them there).
|
||||
- If **Auto-add new members** is enabled on a group, new members of that workspace are automatically placed in the group. Only one group per workspace can have this setting active.
|
||||
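The one-group-per-workspace rule above amounts to a replace-on-assign invariant. A minimal sketch within a single workspace (the group-to-members mapping is a hypothetical representation, not Sim's data model):

```python
def assign_to_group(groups: dict[str, set[str]], user_id: str, target_group: str) -> dict[str, set[str]]:
    """Within one workspace: remove the user from any existing group, then add to the target."""
    updated = {name: members - {user_id} for name, members in groups.items()}
    updated.setdefault(target_group, set()).add(user_id)
    return updated
```

Because assignment always removes the previous membership first, no user can end up in two groups in the same workspace.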
|
||||
---
|
||||
|
||||
<FAQ items={[
|
||||
{
|
||||
question: "Who can create and manage permission groups?",
|
||||
answer: "Any workspace admin on an Enterprise-entitled workspace can create, edit, and delete permission groups for that workspace. The workspace's billed account must be on the Enterprise plan."
|
||||
},
|
||||
{
|
||||
question: "What happens to a workflow that was built before a block was restricted?",
|
||||
answer: "The workflow is not modified — it still exists and can be edited. However, any run that reaches a disallowed block will halt immediately with an error. The block must be removed or the user's group configuration must be updated before the workflow can run successfully."
|
||||
},
|
||||
{
|
||||
question: "Can a user be in multiple permission groups?",
|
||||
answer: "A user can belong to at most one permission group per workspace, but can belong to different groups in different workspaces. Adding a user to a new group within the same workspace automatically removes them from their previous group in that workspace."
|
||||
},
|
||||
{
|
||||
question: "What does a user see if they have no permission group assigned in a workspace?",
|
||||
answer: "Users with no group in a given workspace have no restrictions in that workspace. All blocks, model providers, and platform features are fully available to them there. Restrictions only apply in the specific workspaces where they are assigned to a group."
|
||||
},
|
||||
{
|
||||
question: "Does Mothership respect the same restrictions as the executor?",
|
||||
answer: "Yes. Mothership reads the user's permission group for the active workspace before suggesting blocks or tools. Disallowed blocks are filtered out of the block picker, and disallowed tool types are skipped during workflow generation."
|
||||
},
|
||||
{
|
||||
question: "Can I restrict access to specific workflows or workspaces?",
|
||||
answer: "Access Control operates at the feature and block level within a workspace. To restrict who can access the workspace itself, use workspace invitations and permissions. To apply different restrictions to different workflows, put them in different workspaces with distinct permission groups."
|
||||
},
|
||||
{
|
||||
question: "What is Auto-add new members?",
|
||||
answer: "When a group has Auto-add new members enabled, any new member who joins the workspace is automatically added to that group. Only one group per workspace can have this setting enabled at a time."
|
||||
}
|
||||
]} />
|
||||
|
||||
---
|
||||
|
||||
## Self-hosted setup
|
||||
|
||||
Self-hosted deployments use environment variables instead of the billing/plan check.
|
||||
|
||||
### Environment variables
|
||||
|
||||
```bash
|
||||
ACCESS_CONTROL_ENABLED=true
|
||||
NEXT_PUBLIC_ACCESS_CONTROL_ENABLED=true
|
||||
```
|
||||
|
||||
You can also set a server-level block allowlist using the `ALLOWED_INTEGRATIONS` environment variable. This is applied as an additional constraint on top of any permission group configuration — a block must be allowed by both the environment allowlist and the user's group to be usable.
|
||||
|
||||
```bash
|
||||
# Only these block types are available across the entire instance
|
||||
ALLOWED_INTEGRATIONS=slack,gmail,agent,function,condition
|
||||
```
|
||||
|
||||
Once enabled, permission groups are managed through **Settings → Enterprise → Access Control** the same way as Sim Cloud.
|
||||
143
apps/docs/content/docs/en/enterprise/audit-logs.mdx
Normal file

@@ -0,0 +1,143 @@
|
||||
---
|
||||
title: Audit Logs
|
||||
description: Track every action taken across your organization's workspaces
|
||||
---
|
||||
|
||||
import { FAQ } from '@/components/ui/faq'
|
||||
import { Image } from '@/components/ui/image'
|
||||
|
||||
Audit logs give your organization a tamper-evident record of every significant action taken across workspaces — who did what, when, and on which resource. Use them for security reviews, compliance investigations, and incident response.
|
||||
|
||||
---
|
||||
|
||||
## Viewing audit logs
|
||||
|
||||
### In the UI
|
||||
|
||||
Go to **Settings → Enterprise → Audit Logs** in your workspace. Logs are displayed in a table with the following columns:
|
||||
|
||||
<Image src="/static/enterprise/audit-logs.png" alt="Audit Logs settings showing a table of events with columns for Timestamp, Event, Description, and Actor, along with search and filter controls" width={900} height={500} />
|
||||
|
||||
| Column | Description |
|
||||
|--------|-------------|
|
||||
| **Timestamp** | When the action occurred. |
|
||||
| **Event** | The action taken, e.g. `workflow.created`. |
|
||||
| **Description** | A human-readable summary of the action. |
|
||||
| **Actor** | The email address of the user who performed the action. |
|
||||
|
||||
Use the search bar, event type filter, and date range selector to narrow results.
|
||||
|
||||
### Via API
|
||||
|
||||
Audit logs are also accessible through the Sim API for integration with external SIEM or log management tools.
|
||||
|
||||
```http
|
||||
GET /api/v1/audit-logs
|
||||
Authorization: Bearer <api-key>
|
||||
```
|
||||
|
||||
**Query parameters:**
|
||||
|
||||
| Parameter | Type | Description |
|
||||
|-----------|------|-------------|
|
||||
| `action` | string | Filter by event type (e.g. `workflow.created`) |
|
||||
| `resourceType` | string | Filter by resource type (e.g. `workflow`) |
|
||||
| `resourceId` | string | Filter by a specific resource ID |
|
||||
| `workspaceId` | string | Filter by workspace |
|
||||
| `actorId` | string | Filter by user ID (must be an org member) |
|
||||
| `startDate` | string | ISO 8601 date — return logs on or after this date |
|
||||
| `endDate` | string | ISO 8601 date — return logs on or before this date |
|
||||
| `includeDeparted` | boolean | Include logs from members who have since left the organization (default `false`) |
|
||||
| `limit` | number | Results per page (1–100, default 50) |
|
||||
| `cursor` | string | Opaque cursor for fetching the next page |
|
||||
|
||||
**Example response:**
|
||||
|
||||
```json
|
||||
{
|
||||
"data": [
|
||||
{
|
||||
"id": "abc123",
|
||||
"action": "workflow.created",
|
||||
"resourceType": "workflow",
|
||||
"resourceId": "wf_xyz",
|
||||
"resourceName": "Customer Onboarding",
|
||||
"description": "Created workflow \"Customer Onboarding\"",
|
||||
"actorId": "usr_abc",
|
||||
"actorName": "Alice Smith",
|
||||
"actorEmail": "alice@company.com",
|
||||
"workspaceId": "ws_def",
|
||||
"metadata": {},
|
||||
"createdAt": "2026-04-20T21:16:00.000Z"
|
||||
}
|
||||
],
|
||||
"nextCursor": "eyJpZCI6ImFiYzEyMyJ9"
|
||||
}
|
||||
```
|
||||
|
||||
Paginate by passing the `nextCursor` value as the `cursor` parameter in the next request. When `nextCursor` is absent, you have reached the last page.
|
||||
|
||||
The API accepts both personal and workspace-scoped API keys. Rate limits apply — the response includes `X-RateLimit-*` headers with your current limit and remaining quota.
|
||||
|
||||
---
|
||||
|
||||
## Event types
|
||||
|
||||
Audit log events follow a `resource.action` naming pattern. The table below lists the main categories.
|
||||
|
||||
| Category | Example events |
|
||||
|----------|---------------|
|
||||
| **Workflows** | `workflow.created`, `workflow.deleted`, `workflow.deployed`, `workflow.locked` |
|
||||
| **Workspaces** | `workspace.created`, `workspace.updated`, `workspace.deleted` |
|
||||
| **Members** | `member.invited`, `member.removed`, `member.role_changed` |
|
||||
| **Permission groups** | `permission_group.created`, `permission_group.updated`, `permission_group.deleted` |
|
||||
| **Environments** | `environment.updated`, `environment.deleted` |
|
||||
| **Knowledge bases** | `knowledge_base.created`, `knowledge_base.deleted`, `connector.synced` |
|
||||
| **Tables** | `table.created`, `table.updated`, `table.deleted` |
|
||||
| **API keys** | `api_key.created`, `api_key.revoked` |
|
||||
| **Credentials** | `credential.created`, `credential.deleted`, `oauth.disconnected` |
|
||||
| **Organization** | `organization.updated`, `org_member.added`, `org_member.role_changed` |
|
||||
|
||||
---
|
||||
|
||||
<FAQ items={[
|
||||
{
|
||||
question: "Who can view audit logs?",
|
||||
answer: "Organization owners and admins can view audit logs. On Sim Cloud, you must be on the Enterprise plan."
|
||||
},
|
||||
{
|
||||
question: "Are audit logs tamper-proof?",
|
||||
answer: "Audit log entries are append-only and cannot be modified or deleted through the Sim interface or API. They represent a reliable record of actions taken in your organization."
|
||||
},
|
||||
{
|
||||
question: "Can I export audit logs?",
|
||||
answer: "Yes. Use the API to export logs programmatically. Paginate through all records using the cursor parameter and store them in your own data warehouse or SIEM."
|
||||
},
|
||||
{
|
||||
question: "Are logs scoped to a single workspace or the whole organization?",
|
||||
answer: "Audit logs are scoped to your organization and include activity across all workspaces within it. You can filter by workspaceId to narrow results to a specific workspace."
|
||||
},
|
||||
{
|
||||
question: "What information is included in each log entry?",
|
||||
answer: "Each entry includes the event type, a description, the actor's name and email, the affected resource, the workspace, and a timestamp. IP addresses and user agents are not exposed through the API."
|
||||
},
|
||||
{
|
||||
question: "Can I filter logs by a specific user?",
|
||||
answer: "Yes. Pass the actorId query parameter to filter logs by a specific user. The actor must be a current or former member of your organization."
|
||||
}
|
||||
]} />
|
||||
|
||||
---
|
||||
|
||||
## Self-hosted setup
|
||||
|
||||
Self-hosted deployments use environment variables instead of the billing/plan check.
|
||||
|
||||
### Environment variables
|
||||
|
||||
```bash
|
||||
AUDIT_LOGS_ENABLED=true
|
||||
NEXT_PUBLIC_AUDIT_LOGS_ENABLED=true
|
||||
```
|
||||
|
||||
Once enabled, audit logs are viewable in **Settings → Enterprise → Audit Logs** and accessible via the API.
|
||||
114
apps/docs/content/docs/en/enterprise/data-retention.mdx
Normal file

@@ -0,0 +1,114 @@
|
||||
---
|
||||
title: Data Retention
|
||||
description: Control how long execution logs, deleted resources, and copilot data are kept before permanent deletion
|
||||
---
|
||||
|
||||
import { FAQ } from '@/components/ui/faq'
|
||||
import { Image } from '@/components/ui/image'
|
||||
|
||||
Data Retention lets organization owners and admins on Enterprise plans configure how long three categories of data are kept before they are permanently deleted. The configuration applies to every workspace in the organization.
|
||||
|
||||
---
|
||||
|
||||
## Setup
|
||||
|
||||
Go to **Settings → Enterprise → Data Retention** in your workspace.
|
||||
|
||||
<Image src="/static/enterprise/data-retention.png" alt="Data Retention settings showing three dropdowns — Log retention, Soft deletion cleanup, and Task cleanup — each set to Forever" width={900} height={500} />
|
||||
|
||||
You will see three independent settings, each with the same set of options: **1 day, 3 days, 7 days, 14 days, 30 days, 60 days, 90 days, 180 days, 1 year, 5 years,** or **Forever**.
|
||||
|
||||
Setting a period to **Forever** means that category of data is never automatically deleted.
|
||||
|
||||
---
|
||||
|
||||
## Settings
|
||||
|
||||
### Log retention
|
||||
|
||||
Controls how long **workflow execution logs** are kept.
|
||||
|
||||
When the retention period expires, execution log records are permanently deleted, along with any files associated with those executions stored in cloud storage.
|
||||
|
||||
### Soft deletion cleanup
|
||||
|
||||
Controls how long **soft-deleted resources** remain recoverable before permanent removal.
|
||||
|
||||
When you delete a workflow, folder, knowledge base, table, or file, it is initially soft-deleted and can be recovered from Recently Deleted. Once the soft deletion cleanup period expires, those resources are permanently removed and cannot be recovered.
|
||||
|
||||
Resources covered:
|
||||
|
||||
- Workflows
|
||||
- Workflow folders
|
||||
- Knowledge bases
|
||||
- Tables
|
||||
- Files
|
||||
- MCP server configurations
|
||||
- Agent memory
|
||||
|
||||
### Task cleanup
|
||||
|
||||
Controls how long **Mothership data** is kept, including:
|
||||
|
||||
- Copilot chats and run history
|
||||
- Run checkpoints and async tool calls
|
||||
- Inbox tasks (Sim Mailer)
|
||||
|
||||
Each setting is independent. You can configure a short log retention period alongside a long soft deletion cleanup period, or any combination that fits your compliance requirements.
|
||||
|
||||
---
|
||||
|
||||
## Organization-wide configuration
|
||||
|
||||
Retention is configured at the **organization level**. A single configuration applies to every workspace in the organization — there are no per-workspace overrides.
|
||||
|
||||
---
|
||||
|
||||
## Defaults
|
||||
|
||||
By default, all three settings are unconfigured and no data is automatically deleted in any category. Setting a period to **Forever** has the same effect as leaving it unconfigured, but makes the intent explicit.
|
||||
|
||||
---
|
||||
|
||||
<FAQ items={[
|
||||
{
|
||||
question: "Who can configure data retention settings?",
|
||||
answer: "Only organization owners and admins can configure data retention settings. On Sim Cloud, the organization must be on an Enterprise plan."
|
||||
},
|
||||
{
|
||||
question: "Is deletion immediate once the retention period expires?",
|
||||
answer: "No. Deletion runs on a scheduled cleanup job. Data is deleted when the job next runs after the retention period has elapsed — not at the exact moment it expires."
|
||||
},
|
||||
{
|
||||
question: "Can deleted data be recovered after the soft deletion cleanup period?",
|
||||
answer: "No. Once the soft deletion cleanup period expires and the cleanup job runs, resources are permanently deleted and cannot be recovered."
|
||||
},
|
||||
{
|
||||
question: "Does the retention period apply to all workspaces in my organization?",
|
||||
answer: "Yes. Retention is configured once per organization and applies to every workspace in the organization."
|
||||
},
|
||||
{
|
||||
question: "What happens if I shorten the retention period?",
|
||||
answer: "The next cleanup job will delete any data that is older than the new, shorter period — including data that would have been kept under the previous setting. Shortening the period is irreversible for data that falls outside the new window."
|
||||
},
|
||||
{
|
||||
question: "What is the minimum retention period?",
|
||||
answer: "1 day (24 hours)."
|
||||
},
|
||||
{
|
||||
question: "What is the maximum retention period?",
|
||||
answer: "5 years."
|
||||
}
|
||||
]} />
|
||||
|
||||
---
|
||||
|
||||
## Self-hosted setup
|
||||
|
||||
### Environment variables
|
||||
|
||||
```bash
|
||||
NEXT_PUBLIC_DATA_RETENTION_ENABLED=true
|
||||
```
|
||||
|
||||
Once enabled, data retention settings are configurable through **Settings → Enterprise → Data Retention** the same way as Sim Cloud.
|
||||
@@ -3,7 +3,6 @@ title: Enterprise
|
||||
description: Enterprise features for business organizations
|
||||
---
|
||||
|
||||
import { Callout } from 'fumadocs-ui/components/callout'
|
||||
import { FAQ } from '@/components/ui/faq'
|
||||
|
||||
Sim Enterprise provides advanced features for organizations with enhanced security, compliance, and management requirements.
|
||||
@@ -12,7 +11,7 @@ Sim Enterprise provides advanced features for organizations with enhanced securi
|
||||
|
||||
## Access Control
|
||||
|
||||
Define permission groups to control what features and integrations team members can use.
|
||||
Define permission groups on a workspace to control what features and integrations its members can use. Permission groups are scoped to a single workspace — a user can belong to different groups (or no group) in different workspaces.
|
||||
|
||||
### Features
|
||||
|
||||
@@ -22,104 +21,64 @@ Define permission groups to control what features and integrations team members
|
||||
|
||||
### Setup
|
||||
|
||||
1. Navigate to **Settings** → **Access Control** in your workspace
|
||||
1. Navigate to **Settings** → **Access Control** in the workspace you want to manage
|
||||
2. Create a permission group with your desired restrictions
|
||||
3. Add team members to the permission group
|
||||
3. Add workspace members to the permission group
|
||||
|
||||
<Callout type="info">
|
||||
Users not assigned to any permission group have full access. Permission restrictions are enforced at both UI and execution time.
|
||||
</Callout>
|
||||
Any workspace admin on an Enterprise-entitled workspace can manage permission groups. Users not assigned to any group have full access. Restrictions are enforced at both UI and execution time, based on the workflow's workspace.
|
||||
|
||||
See the [Access Control guide](/docs/enterprise/access-control) for full details.
|
||||
|
||||
---
|
||||
|
||||
## Single Sign-On (SSO)
|
||||
|
||||
Enterprise authentication with SAML 2.0 and OIDC support for centralized identity management.
|
||||
Enterprise authentication with SAML 2.0 and OIDC support. Works with Okta, Azure AD (Entra ID), Google Workspace, ADFS, and any standard OIDC or SAML 2.0 provider.
|
||||
|
||||
### Supported Providers
|
||||
|
||||
- Okta
|
||||
- Azure AD / Entra ID
|
||||
- Google Workspace
|
||||
- OneLogin
|
||||
- Any SAML 2.0 or OIDC provider
|
||||
|
||||
### Setup
|
||||
|
||||
1. Navigate to **Settings** → **SSO** in your workspace
|
||||
2. Choose your identity provider
|
||||
3. Configure the connection using your IdP's metadata
|
||||
4. Enable SSO for your organization
|
||||
|
||||
<Callout type="info">
|
||||
Once SSO is enabled, team members authenticate through your identity provider instead of email/password.
|
||||
</Callout>
|
||||
See the [SSO setup guide](/docs/enterprise/sso) for step-by-step instructions and provider-specific configuration.
|
||||
|
||||
---
|
||||
|
||||
## Self-Hosted Configuration
|
||||
## Whitelabeling
|
||||
|
||||
For self-hosted deployments, enterprise features can be enabled via environment variables without requiring billing.
|
||||
Replace Sim's default branding — logos, product name, and favicons — with your own. See the [whitelabeling guide](/docs/enterprise/whitelabeling).
|
||||
|
||||
### Environment Variables
|
||||
---
|
||||
|
||||
## Audit Logs
|
||||
|
||||
Track configuration and security-relevant actions across your organization for compliance and monitoring. See the [audit logs guide](/docs/enterprise/audit-logs).
|
||||
|
||||
---
|
||||
|
||||
## Data Retention
|
||||
|
||||
Configure how long execution logs, soft-deleted resources, and Mothership data are kept before permanent deletion. See the [data retention guide](/docs/enterprise/data-retention).
|
||||
|
||||
---
|
||||
|
||||
<FAQ items={[
|
||||
{ question: "Who can manage Enterprise features?", answer: "Workspace admins on an Enterprise-entitled workspace. Access Control, SSO, whitelabeling, audit logs, and data retention are all configured per workspace under Settings → Enterprise." },
|
||||
{ question: "Which SSO providers are supported?", answer: "Sim supports SAML 2.0 and OIDC, which works with virtually any enterprise identity provider including Okta, Azure AD (Entra ID), Google Workspace, ADFS, and OneLogin." },
|
||||
{ question: "How do access control permission groups work?", answer: "Permission groups are created per workspace and let you restrict which AI providers, workflow blocks, and platform features are available to specific members of that workspace. Each user can belong to at most one group per workspace. Users not assigned to any group have full access. Restrictions are enforced at both the UI level and at execution time based on the workflow's workspace." },
|
||||
]} />
|
||||
|
||||
---
|
||||
|
||||
## Self-hosted setup
|
||||
|
||||
Self-hosted deployments enable enterprise features via environment variables instead of billing.
|
||||
|
||||
| Variable | Description |
|
||||
|----------|-------------|
|
||||
| `ORGANIZATIONS_ENABLED`, `NEXT_PUBLIC_ORGANIZATIONS_ENABLED` | Enable team/organization management |
|
||||
| `ACCESS_CONTROL_ENABLED`, `NEXT_PUBLIC_ACCESS_CONTROL_ENABLED` | Permission groups for access restrictions |
|
||||
| `SSO_ENABLED`, `NEXT_PUBLIC_SSO_ENABLED` | Single Sign-On with SAML/OIDC |
|
||||
| `CREDENTIAL_SETS_ENABLED`, `NEXT_PUBLIC_CREDENTIAL_SETS_ENABLED` | Polling Groups for email triggers |
|
||||
| `INBOX_ENABLED`, `NEXT_PUBLIC_INBOX_ENABLED` | Sim Mailer inbox for outbound email |
|
||||
| `WHITELABELING_ENABLED`, `NEXT_PUBLIC_WHITELABELING_ENABLED` | Custom branding and white-labeling |
|
||||
| `AUDIT_LOGS_ENABLED`, `NEXT_PUBLIC_AUDIT_LOGS_ENABLED` | Audit logging for compliance and monitoring |
|
||||
| `DISABLE_INVITATIONS`, `NEXT_PUBLIC_DISABLE_INVITATIONS` | Globally disable workspace/organization invitations |
|
||||
| `ORGANIZATIONS_ENABLED`, `NEXT_PUBLIC_ORGANIZATIONS_ENABLED` | Team and organization management |
|
||||
| `ACCESS_CONTROL_ENABLED`, `NEXT_PUBLIC_ACCESS_CONTROL_ENABLED` | Permission groups |
|
||||
| `SSO_ENABLED`, `NEXT_PUBLIC_SSO_ENABLED` | SAML and OIDC sign-in |
|
||||
| `WHITELABELING_ENABLED`, `NEXT_PUBLIC_WHITELABELING_ENABLED` | Custom branding |
|
||||
| `AUDIT_LOGS_ENABLED`, `NEXT_PUBLIC_AUDIT_LOGS_ENABLED` | Audit logging |
|
||||
| `NEXT_PUBLIC_DATA_RETENTION_ENABLED` | Data retention configuration |
|
||||
| `CREDENTIAL_SETS_ENABLED`, `NEXT_PUBLIC_CREDENTIAL_SETS_ENABLED` | Polling groups for email triggers |
|
||||
| `INBOX_ENABLED`, `NEXT_PUBLIC_INBOX_ENABLED` | Sim Mailer inbox |
|
||||
| `DISABLE_INVITATIONS`, `NEXT_PUBLIC_DISABLE_INVITATIONS` | Disable invitations; manage membership via Admin API |
|
||||
|
||||
### Organization Management
|
||||
|
||||
When billing is disabled, use the Admin API to manage organizations:
|
||||
|
||||
```bash
|
||||
# Create an organization
|
||||
curl -X POST https://your-instance/api/v1/admin/organizations \
|
||||
-H "x-admin-key: YOUR_ADMIN_API_KEY" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"name": "My Organization", "ownerId": "user-id-here"}'
|
||||
|
||||
# Add a member
|
||||
curl -X POST https://your-instance/api/v1/admin/organizations/{orgId}/members \
|
||||
-H "x-admin-key: YOUR_ADMIN_API_KEY" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"userId": "user-id-here", "role": "admin"}'
|
||||
```
|
||||
|
||||
### Workspace Members
|
||||
|
||||
When invitations are disabled, use the Admin API to manage workspace memberships directly:
|
||||
|
||||
```bash
|
||||
# Add a user to a workspace
|
||||
curl -X POST https://your-instance/api/v1/admin/workspaces/{workspaceId}/members \
|
||||
-H "x-admin-key: YOUR_ADMIN_API_KEY" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"userId": "user-id-here", "permissions": "write"}'
|
||||
|
||||
# Remove a user from a workspace
|
||||
curl -X DELETE "https://your-instance/api/v1/admin/workspaces/{workspaceId}/members?userId=user-id-here" \
|
||||
-H "x-admin-key: YOUR_ADMIN_API_KEY"
|
||||
```
|
||||
|
||||
### Notes

- Enabling `ACCESS_CONTROL_ENABLED` automatically enables organizations, as access control requires organization membership.
- When `DISABLE_INVITATIONS` is set, users cannot send invitations. Use the Admin API to manage workspace and organization memberships instead.

<FAQ items={[
  { question: "What are the minimum requirements to self-host Sim?", answer: "The Docker Compose production setup includes the Sim application (8 GB memory limit), a realtime collaboration server (1 GB memory limit), and a PostgreSQL database with pgvector. A machine with at least 16 GB of RAM and 4 CPU cores is recommended. You will also need Docker and Docker Compose installed." },
  { question: "Can I run Sim completely offline with local AI models?", answer: "Yes. Sim supports Ollama and VLLM for running local AI models. A separate Docker Compose configuration (docker-compose.ollama.yml) is available for deploying with Ollama. This lets you run workflows without any external API calls, keeping all data on your infrastructure." },
  { question: "How does data privacy work with self-hosted deployments?", answer: "When self-hosted, all data stays on your infrastructure. Workflow definitions, execution logs, credentials, and user data are stored in your PostgreSQL database. If you use local AI models through Ollama or VLLM, no data leaves your network. When using external AI providers, only the data sent in prompts goes to those providers." },
  { question: "Do I need a paid license to self-host Sim?", answer: "The core Sim platform is open source under Apache 2.0 and can be self-hosted for free. Enterprise features like SSO (SAML/OIDC), access control with permission groups, and organization management require an Enterprise subscription for production use. These features can be enabled via environment variables for development and evaluation without a license." },
  { question: "Which SSO providers are supported?", answer: "Sim supports SAML 2.0 and OIDC protocols, which means it works with virtually any enterprise identity provider including Okta, Azure AD (Entra ID), Google Workspace, and OneLogin. Configuration is done through Settings in the workspace UI." },
  { question: "How do I manage users when invitations are disabled?", answer: "Use the Admin API with your admin API key. You can create organizations, add members to organizations with specific roles, add users to workspaces with defined permissions, and remove users. All management is done through REST API calls authenticated with the x-admin-key header." },
  { question: "Can I scale Sim horizontally for high availability?", answer: "The Docker Compose setup is designed for single-node deployments. For production scaling, you can deploy on Kubernetes with multiple application replicas behind a load balancer. The database can be scaled independently using managed PostgreSQL services. Redis can be configured for session and cache management across multiple instances." },
  { question: "How do access control permission groups work?", answer: "Permission groups let you restrict which AI providers, workflow blocks, and platform features are available to specific team members. Users not assigned to any group have full access. Restrictions are enforced at both the UI level (hiding restricted options) and at execution time (blocking unauthorized operations). Enabling access control automatically enables organization management." },
]} />

Once enabled, each feature is configured through the same Settings UI as Sim Cloud. When invitations are disabled, use the Admin API (`x-admin-key` header) to manage organization and workspace membership.

5
apps/docs/content/docs/en/enterprise/meta.json
Normal file
@@ -0,0 +1,5 @@

{
  "title": "Enterprise",
  "pages": ["index", "sso", "access-control", "whitelabeling", "audit-logs", "data-retention"],
  "defaultOpen": false
}

326
apps/docs/content/docs/en/enterprise/sso.mdx
Normal file
@@ -0,0 +1,326 @@

---
title: Single Sign-On (SSO)
description: Configure SAML 2.0 or OIDC-based single sign-on for your organization
---

import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { FAQ } from '@/components/ui/faq'
import { Image } from '@/components/ui/image'

Single Sign-On lets your team sign in to Sim through your company's identity provider instead of managing separate passwords. Sim supports both OIDC and SAML 2.0.

---

## Setup

### 1. Open SSO settings

Go to **Settings → Enterprise → Single Sign-On** in your workspace.

### 2. Choose a protocol

| Protocol | Use when |
|----------|----------|
| **OIDC** | Your IdP supports OpenID Connect — Okta, Microsoft Entra ID, Auth0, Google Workspace |
| **SAML 2.0** | Your IdP is SAML-only — ADFS, Shibboleth, or older enterprise IdPs |

### 3. Fill in the form

<Image src="/static/enterprise/sso-form.png" alt="Single Sign-On configuration form showing Provider Type (OIDC), Provider ID, Issuer URL, Domain, Client ID, Client Secret, Scopes, and Callback URL fields" width={900} height={500} />

**Fields required for both protocols:**

| Field | What to enter |
|-------|--------------|
| **Provider ID** | A short slug identifying this connection, e.g. `okta` or `azure-ad`. Letters, numbers, and dashes only. |
| **Issuer URL** | The identity provider's issuer URL. Must be HTTPS. |
| **Domain** | Your organization's email domain, e.g. `company.com`. Users with this domain will be routed through SSO at sign-in. |

**OIDC additional fields:**

| Field | What to enter |
|-------|--------------|
| **Client ID** | The application client ID from your IdP. |
| **Client Secret** | The client secret from your IdP. |
| **Scopes** | Comma-separated OIDC scopes. Default: `openid,profile,email`. |

<Callout type="info">
For OIDC, Sim automatically fetches endpoints (`authorization_endpoint`, `token_endpoint`, `userinfo_endpoint`, `jwks_uri`) from your issuer's `/.well-known/openid-configuration` discovery document. You only need to provide the issuer URL.
</Callout>

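Conceptually, discovery boils down to reading four fields out of a JSON document fetched from `{issuer}/.well-known/openid-configuration`. A minimal sketch (the fetching itself is omitted; `doc` is the parsed discovery JSON, and this is illustrative, not Sim's internal implementation):

```typescript
// The endpoint fields Sim reads from an OIDC discovery document.
const REQUIRED = ['authorization_endpoint', 'token_endpoint', 'userinfo_endpoint', 'jwks_uri'] as const;

function discoveryEndpoints(doc: Record<string, unknown>): Record<string, string> {
  // A discovery document missing any of these cannot be used for OIDC login.
  const missing = REQUIRED.filter((k) => typeof doc[k] !== 'string');
  if (missing.length) throw new Error(`discovery document missing: ${missing.join(', ')}`);
  return Object.fromEntries(REQUIRED.map((k) => [k, doc[k] as string]));
}
```

If SSO setup fails with a discovery error, fetching the `/.well-known/openid-configuration` URL yourself and checking these four fields is a quick way to diagnose the issuer.
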
**SAML additional fields:**

| Field | What to enter |
|-------|--------------|
| **Entry Point URL** | The IdP's SSO service URL where Sim sends authentication requests. |
| **Identity Provider Certificate** | The Base-64 encoded X.509 certificate from your IdP for verifying assertions. |

### 4. Copy the Callback URL

The **Callback URL** shown in the form is the endpoint your identity provider must redirect users back to after authentication. Copy it and register it in your IdP before saving.

**OIDC providers** (Okta, Microsoft Entra ID, Google Workspace, Auth0):
```
https://sim.ai/api/auth/sso/callback/{provider-id}
```

**SAML providers** (ADFS, Shibboleth):
```
https://sim.ai/api/auth/sso/saml2/callback/{provider-id}
```

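The two formats differ only in the path segment between `auth/` and `callback/`. If you provision several providers programmatically, a tiny helper avoids typos (illustrative, not part of Sim's API; `baseUrl` is your instance origin, `https://sim.ai` for cloud):

```typescript
// Build the callback URL to register in your IdP, following the two formats above.
function ssoCallbackUrl(baseUrl: string, providerId: string, protocol: 'oidc' | 'saml'): string {
  const path = protocol === 'saml' ? 'sso/saml2/callback' : 'sso/callback';
  return `${baseUrl.replace(/\/$/, '')}/api/auth/${path}/${providerId}`;
}
```
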
### 5. Save and test

Click **Save**. To test, sign out and use the **Sign in with SSO** button on the login page. Enter an email address at your configured domain — Sim will redirect you to your identity provider.

---

## Provider Guides

<Tabs items={['Okta', 'Microsoft Entra ID', 'Google Workspace', 'ADFS']}>

<Tab value="Okta">

### Okta (OIDC)

**In Okta** ([official docs](https://help.okta.com/en-us/content/topics/apps/apps_app_integration_wizard_oidc.htm)):

1. Go to **Applications → Create App Integration**
2. Select **OIDC - OpenID Connect**, then **Web Application**
3. Set the **Sign-in redirect URI** to your Sim callback URL:
   ```
   https://sim.ai/api/auth/sso/callback/okta
   ```
4. Under **Assignments**, grant access to the relevant users or groups
5. Copy the **Client ID** and **Client Secret** from the app's **General** tab
6. Your Okta domain is the hostname of your admin console, e.g. `dev-1234567.okta.com`

**In Sim:**

| Field | Value |
|-------|-------|
| Provider Type | OIDC |
| Provider ID | `okta` |
| Issuer URL | `https://dev-1234567.okta.com/oauth2/default` |
| Domain | `company.com` |
| Client ID | From Okta app |
| Client Secret | From Okta app |

The issuer URL uses Okta's default authorization server, which is pre-configured on every Okta org. If you created a custom authorization server, replace `default` with your server name.

</Tab>

<Tab value="Microsoft Entra ID">

### Microsoft Entra ID (OIDC)

**In Azure** ([official docs](https://learn.microsoft.com/en-us/entra/identity-platform/quickstart-register-app)):

1. Go to **Microsoft Entra ID → App registrations → New registration**
2. Under **Redirect URI**, select **Web** and enter your Sim callback URL:
   ```
   https://sim.ai/api/auth/sso/callback/azure-ad
   ```
3. After registration, go to **Certificates & secrets → New client secret** and copy the value immediately — it won't be shown again
4. Go to **Overview** and copy the **Application (client) ID** and **Directory (tenant) ID**

**In Sim:**

| Field | Value |
|-------|-------|
| Provider Type | OIDC |
| Provider ID | `azure-ad` |
| Issuer URL | `https://login.microsoftonline.com/{tenant-id}/v2.0` |
| Domain | `company.com` |
| Client ID | Application (client) ID |
| Client Secret | Secret value |

</Tab>

<Tab value="Google Workspace">

### Google Workspace (OIDC)

**In Google Cloud Console** ([official docs](https://developers.google.com/identity/openid-connect/openid-connect)):

1. Go to **APIs & Services → Credentials → Create Credentials → OAuth 2.0 Client ID**
2. Set the application type to **Web application**
3. Add your Sim callback URL to **Authorized redirect URIs**:
   ```
   https://sim.ai/api/auth/sso/callback/google-workspace
   ```
4. Copy the **Client ID** and **Client Secret**

**In Sim:**

| Field | Value |
|-------|-------|
| Provider Type | OIDC |
| Provider ID | `google-workspace` |
| Issuer URL | `https://accounts.google.com` |
| Domain | `company.com` |
| Client ID | From Google Cloud Console |
| Client Secret | From Google Cloud Console |

<Callout type="info">
To restrict sign-in to your Google Workspace domain, set the OAuth consent screen's **User type** to **Internal**, which limits access to users within your Google Workspace organization.
</Callout>

</Tab>

<Tab value="ADFS">

### ADFS (SAML 2.0)

**In ADFS** ([official docs](https://learn.microsoft.com/en-us/windows-server/identity/ad-fs/operations/create-a-relying-party-trust)):

1. Open **AD FS Management → Relying Party Trusts → Add Relying Party Trust**
2. Choose **Claims aware**, then **Enter data about the relying party manually**
3. Set the **Relying party identifier** (Entity ID) to your Sim base URL:
   ```
   https://sim.ai
   ```
4. Add an endpoint: **SAML Assertion Consumer Service** (HTTP POST) with the URL:
   ```
   https://sim.ai/api/auth/sso/saml2/callback/adfs
   ```
5. Export the **Token-signing certificate** from **Certificates**: right-click → **View Certificate → Details → Copy to File**, choose **Base-64 encoded X.509 (.CER)**. The `.cer` file is PEM-encoded — rename it to `.pem` before pasting its contents into Sim.
6. Note the **ADFS Federation Service endpoint URL** (e.g. `https://adfs.company.com/adfs/ls`)

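Before pasting the exported certificate into Sim, you can sanity-check that the file really is Base-64/PEM with `openssl` (the filename is illustrative; here a throwaway self-signed certificate stands in for the real export so the commands run end to end — substitute the `.pem` you exported from ADFS):

```shell
# Stand-in for the real token-signing export (skip this step with a real cert).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/token-signing.pem \
  -subj "/CN=adfs.company.com" 2>/dev/null

# If this prints a subject and validity dates, the file is valid PEM
# and safe to paste into Sim's certificate field.
openssl x509 -in /tmp/token-signing.pem -noout -subject -dates
```
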
**In Sim:**

| Field | Value |
|-------|-------|
| Provider Type | SAML |
| Provider ID | `adfs` |
| Issuer URL | `https://sim.ai` |
| Domain | `company.com` |
| Entry Point URL | `https://adfs.company.com/adfs/ls` |
| Certificate | Contents of the `.pem` file |

<Callout type="info">
For ADFS, the **Issuer URL** field is the SP entity ID — the identifier ADFS uses to identify Sim as a relying party. It must match the **Relying party identifier** you registered in ADFS.
</Callout>

</Tab>

</Tabs>

---

## How sign-in works after setup

Once SSO is configured, users with your domain (`company.com`) can sign in through your identity provider:

1. User goes to `sim.ai` and clicks **Sign in with SSO**
2. They enter their work email (e.g. `alice@company.com`)
3. Sim redirects them to your identity provider
4. After authenticating, they are returned to Sim and added to your organization automatically
5. They land in the workspace

Users who sign in via SSO for the first time are automatically provisioned and added to your organization — no manual invite required.

<Callout type="info">
Password-based login remains available. Forcing all organization members to use SSO exclusively is not yet supported.
</Callout>

---

<FAQ items={[
  {
    question: "Which SSO providers are supported?",
    answer: "Any identity provider that supports OIDC or SAML 2.0. This includes Okta, Microsoft Entra ID (Azure AD), Google Workspace, Auth0, OneLogin, JumpCloud, Ping Identity, ADFS, Shibboleth, and more."
  },
  {
    question: "What is the Domain field used for?",
    answer: "The domain (e.g. company.com) is how Sim routes users to the right identity provider. When a user enters their email on the SSO sign-in page, Sim matches their email domain to a registered SSO provider and redirects them there."
  },
  {
    question: "Do I need to provide OIDC endpoints manually?",
    answer: "No. For OIDC providers, Sim automatically fetches the authorization, token, and JWKS endpoints from the discovery document at {issuer}/.well-known/openid-configuration. You only need to provide the issuer URL."
  },
  {
    question: "What happens when a user signs in with SSO for the first time?",
    answer: "Sim creates an account for them automatically and adds them to your organization. No manual invite is needed. They are assigned the member role by default."
  },
  {
    question: "Can I still use email/password login after enabling SSO?",
    answer: "Yes. Enabling SSO does not disable password-based login. Users can still sign in with their email and password if they have one. Forced SSO (requiring all users on the domain to use SSO) is not yet supported."
  },
  {
    question: "Who can configure SSO on Sim Cloud?",
    answer: "Organization owners and admins can configure SSO. You must be on the Enterprise plan."
  },
  {
    question: "What is the Callback URL?",
    answer: "The Callback URL (also called Redirect URI or ACS URL) is the endpoint in Sim that receives the authentication response from your identity provider. For OIDC providers it follows the format: https://sim.ai/api/auth/sso/callback/{provider-id}. For SAML providers it is: https://sim.ai/api/auth/sso/saml2/callback/{provider-id}. You must register this URL in your identity provider before SSO will work."
  },
  {
    question: "How do I update or replace an existing SSO configuration?",
    answer: "Open Settings → Enterprise → Single Sign-On and click Edit. Update the fields and save. The existing provider configuration is replaced."
  }
]} />

---

## Self-hosted setup

Self-hosted deployments use environment variables instead of the billing/plan check.

### Environment variables

```bash
# Required
SSO_ENABLED=true
NEXT_PUBLIC_SSO_ENABLED=true

# Required if you want users auto-added to your organization on first SSO sign-in
ORGANIZATIONS_ENABLED=true
NEXT_PUBLIC_ORGANIZATIONS_ENABLED=true
```

You can register providers through the **Settings UI** (same as cloud) or by running the registration script directly against your database.

### Script-based registration

Use this when you need to register an SSO provider without going through the UI — for example, during initial deployment or CI/CD automation.

```bash
# OIDC example (Okta)
SSO_ENABLED=true \
NEXT_PUBLIC_APP_URL=https://your-instance.com \
SSO_PROVIDER_TYPE=oidc \
SSO_PROVIDER_ID=okta \
SSO_ISSUER=https://dev-1234567.okta.com/oauth2/default \
SSO_DOMAIN=company.com \
SSO_USER_EMAIL=admin@company.com \
SSO_OIDC_CLIENT_ID=your-client-id \
SSO_OIDC_CLIENT_SECRET=your-client-secret \
bun run packages/db/scripts/register-sso-provider.ts
```

```bash
# SAML example (ADFS)
SSO_ENABLED=true \
NEXT_PUBLIC_APP_URL=https://your-instance.com \
SSO_PROVIDER_TYPE=saml \
SSO_PROVIDER_ID=adfs \
SSO_ISSUER=https://your-instance.com \
SSO_DOMAIN=company.com \
SSO_USER_EMAIL=admin@company.com \
SSO_SAML_ENTRY_POINT=https://adfs.company.com/adfs/ls \
SSO_SAML_CERT="-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----" \
bun run packages/db/scripts/register-sso-provider.ts
```

The script outputs the callback URL to configure in your IdP once it completes.

To remove a provider:

```bash
SSO_USER_EMAIL=admin@company.com \
bun run packages/db/scripts/deregister-sso-provider.ts
```

103
apps/docs/content/docs/en/enterprise/whitelabeling.mdx
Normal file
@@ -0,0 +1,103 @@

---
title: Whitelabeling
description: Replace Sim branding with your own logo, colors, and links
---

import { FAQ } from '@/components/ui/faq'
import { Image } from '@/components/ui/image'

Whitelabeling lets you replace Sim's default branding — logo, colors, and support links — with your own. Members of your organization see your brand instead of Sim's throughout the workspace.

---

## Setup

### 1. Open Whitelabeling settings

Go to **Settings → Enterprise → Whitelabeling** in your workspace.

<Image src="/static/enterprise/whitelabeling.png" alt="Whitelabeling settings showing brand identity fields (Logo, Wordmark, Brand name), color pickers for primary and accent colors, and link fields for support email and documentation URL" width={900} height={500} />

### 2. Configure brand identity

| Field | Description |
|-------|-------------|
| **Logo** | Shown in the collapsed sidebar. Square image (PNG, JPEG, SVG, or WebP). Max 5 MB. |
| **Wordmark** | Shown in the expanded sidebar. Wide image (PNG, JPEG, SVG, or WebP). Max 5 MB. |
| **Brand name** | Replaces "Sim" in the sidebar and select UI elements. Max 64 characters. |

### 3. Configure colors

All colors must be valid hex values (e.g. `#701ffc`).

| Field | Description |
|-------|-------------|
| **Primary color** | Main accent color used for buttons and active states. |
| **Primary hover color** | Color shown when hovering over primary elements. |
| **Accent color** | Secondary accent for highlights and secondary interactive elements. |
| **Accent hover color** | Color shown when hovering over accent elements. |

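The "valid hex" requirement means exactly `#` followed by six hex digits. If you generate these values in a theming pipeline, a pre-check like the following avoids a rejected save (illustrative; the exact validation Sim performs may differ):

```typescript
// Accepts exactly "#" followed by six hex digits, e.g. "#701ffc".
function isValidBrandColor(value: string): boolean {
  return /^#[0-9a-fA-F]{6}$/.test(value);
}
```
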
### 4. Configure links

Replace Sim's default support and legal links with your own.

| Field | Description |
|-------|-------------|
| **Support email** | Shown in help prompts. Must be a valid email address. |
| **Documentation URL** | Link to your internal documentation. Must be a valid URL. |
| **Terms of service URL** | Link to your terms page. Must be a valid URL. |
| **Privacy policy URL** | Link to your privacy page. Must be a valid URL. |

### 5. Save

Click **Save changes**. The new branding is applied immediately for all members of your organization.

---

## What gets replaced

Whitelabeling replaces the following visual elements:

- **Sidebar logo and wordmark** — your uploaded images replace the Sim logo
- **Brand name** — appears in the sidebar and select UI labels
- **Primary and accent colors** — applied to buttons, active states, and highlights
- **Support and legal links** — help prompts and footer links point to your URLs

Whitelabeling applies only to members of your organization. Public-facing pages (login, marketing) are not affected.

---

<FAQ items={[
  {
    question: "Who can configure whitelabeling?",
    answer: "Organization owners and admins can configure whitelabeling. On Sim Cloud, you must be on the Enterprise plan."
  },
  {
    question: "What image formats are supported?",
    answer: "PNG, JPEG, SVG, and WebP. Maximum file size is 5 MB for both the logo and wordmark."
  },
  {
    question: "What is the difference between the logo and the wordmark?",
    answer: "The logo is a square image shown in the collapsed sidebar. The wordmark is a wide image shown in the expanded sidebar alongside member names and navigation items."
  },
  {
    question: "Do members outside my organization see the custom branding?",
    answer: "No. Custom branding is scoped to your organization. Members see your branding when signed in to your organization's workspace."
  }
]} />

---

## Self-hosted setup

Self-hosted deployments use environment variables instead of the billing/plan check.

### Environment variables

```bash
WHITELABELING_ENABLED=true
NEXT_PUBLIC_WHITELABELING_ENABLED=true
```

Once enabled, configure branding through **Settings → Enterprise → Whitelabeling** the same way.

343
apps/docs/content/docs/en/execution/api-deployment.mdx
Normal file
@@ -0,0 +1,343 @@

---
title: API Deployment
---

import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'

Deploy your workflow as a REST API endpoint that any application can call directly. Supports synchronous, streaming, and asynchronous execution modes.

## Deploying a Workflow

Open your workflow and click **Deploy**. The **General** tab opens first and shows you the current deployment state:

<Image src="/static/api-deployment/api-versions.png" alt="General tab of the Workflow Deployment modal showing a live workflow preview, a Versions table with v2 (live) and v1, and Undeploy / Update buttons" width={800} height={500} />

The **General** tab contains:

- **Live Workflow** — a read-only minimap of the workflow snapshot that is currently deployed
- **Versions** — a table of every deployment you've published, showing version number, who deployed it, and when
- **Deploy / Update / Undeploy** — action buttons at the bottom right

Click **Deploy** to publish your workflow for the first time, or **Update** to push a new snapshot after making changes. The green dot next to a version indicates it is the currently live version.

Once deployed, your workflow is available at:

```
POST https://sim.ai/api/workflows/{workflow-id}/execute
```

<Callout type="info">
API executions always run against the active deployment snapshot. After changing your workflow on the canvas, click **Update** to publish a new version.
</Callout>

### Keeping Track of Changes

When you modify the workflow canvas after deploying, an **Update deployment** badge appears at the bottom of the screen as a reminder that your live version is out of date:

<Image src="/static/api-deployment/api-update-button.png" alt="Canvas toolbar showing the Update and Run buttons with an Update deployment tooltip" width={400} height={200} />

You can click the **Update** button directly from the canvas toolbar — you don't need to open the Deploy modal every time.

## Version Control

Every time you deploy or update, a new version is recorded in the Versions table. You can manage past versions using the context menu (⋮) next to any row:

<Image src="/static/api-deployment/api-versions-menu.png" alt="Versions table showing v2 (live) and v1 with a context menu open offering Rename, Add description, Promote to live, and Load deployment options" width={800} height={400} />

| Action | Description |
|--------|-------------|
| **Rename** | Give the version a human-readable name (e.g., "Added memory") |
| **Add description** | Attach a note describing what changed in this version |
| **Promote to live** | Make this older version the active one without re-deploying |
| **Load deployment** | Load the workflow snapshot from this version back onto your canvas |

**Promote to live** is useful for rolling back — if a new deployment has an issue, promote the previous version to restore the last known-good state instantly.

## Making API Calls

Switch to the **API** tab in the Deploy modal to see ready-to-use code for all three execution modes:

<Image src="/static/api-deployment/api-tab.png" alt="API tab showing cURL, Python, JavaScript, and TypeScript language options, with Run workflow, Run workflow (stream response), and Run workflow (async) code sections" width={800} height={500} />

The language selector at the top lets you switch between **cURL**, **Python**, **JavaScript**, and **TypeScript**. Each mode — synchronous, streaming, and async — has its own code block that you can copy directly. The code is pre-filled with your workflow ID and a masked version of your API key.

At the bottom of the tab, two buttons give you quick access to key settings:

- **Edit API Info** — set a description and choose between API key auth or public access
- **Generate API Key** — create a new API key scoped to your workspace

## Authentication

By default, API endpoints require an API key passed in the `x-api-key` header. Generate keys in **Settings → Sim Keys** or via the **Generate API Key** button in the API tab.

```bash
curl -X POST https://sim.ai/api/workflows/{workflow-id}/execute \
  -H "Content-Type: application/json" \
  -H "x-api-key: $SIM_API_KEY" \
  -d '{ "input": "Hello" }'
```

### API Info and Public Access

Click **Edit API Info** to add a description and change the access mode:

<Image src="/static/api-deployment/api-info.png" alt="Edit API Info modal with a Description textarea and an Access section toggling between API Key and Public modes" width={800} height={400} />

| Access Mode | Description |
|-------------|-------------|
| **API Key** (default) | Requires a valid API key in the `x-api-key` header |
| **Public** | No authentication required — anyone with the URL can call the endpoint |

The **Description** field documents what the workflow API does. This is useful for teams, or when exposing the workflow to tools and services that surface API metadata.

<Callout type="warn">
Public endpoints can be called by anyone with the URL. Only use this for workflows that don't expose sensitive data or perform sensitive actions.
</Callout>

## Execution Modes

### Synchronous

The default mode. Send a request and wait for the complete response:

<Tabs items={['cURL', 'Python', 'TypeScript']}>
<Tab value="cURL">
```bash
curl -X POST https://sim.ai/api/workflows/{workflow-id}/execute \
  -H "Content-Type: application/json" \
  -H "x-api-key: $SIM_API_KEY" \
  -d '{ "input": "Summarize this article" }'
```
</Tab>
<Tab value="Python">
```python
import requests, os

response = requests.post(
    "https://sim.ai/api/workflows/{workflow-id}/execute",
    headers={
        "Content-Type": "application/json",
        "x-api-key": os.environ["SIM_API_KEY"]
    },
    json={"input": "Summarize this article"}
)
print(response.json())
```
</Tab>
<Tab value="TypeScript">
```typescript
const response = await fetch('https://sim.ai/api/workflows/{workflow-id}/execute', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'x-api-key': process.env.SIM_API_KEY!
  },
  body: JSON.stringify({ input: 'Summarize this article' })
});
console.log(await response.json());
```
</Tab>
</Tabs>

### Streaming
|
||||
|
||||
Stream the response token-by-token as it is generated. Add `"stream": true` to your request body and specify which block output fields to stream using `selectedOutputs`.
|
||||
|
||||
Use the **Select outputs** dropdown in the API tab to choose which fields to stream:
|
||||
|
||||
<Image src="/static/api-deployment/api-select-outputs.png" alt="Select outputs dropdown open showing Agent 1 block with selectable output fields: content, model, tokens, toolCalls, providerTiming, cost" width={800} height={400} />
|
||||
|
||||
The dropdown groups available outputs by block. The most common choice is `content` from an Agent block, which streams the generated text. You can select fields from multiple blocks simultaneously.
|
||||
|
||||
The `selectedOutputs` values in the request body follow the format `blockName.field` (e.g., `agent_1.content`).
|
||||
|
||||
<Tabs items={['cURL', 'Python', 'TypeScript']}>
|
||||
<Tab value="cURL">
|
||||
```bash
|
||||
curl -X POST https://sim.ai/api/workflows/{workflow-id}/execute \
|
||||
-H "Content-Type: application/json" \
|
||||
-H "x-api-key: $SIM_API_KEY" \
|
||||
-d '{
|
||||
"input": "Write a long essay",
|
||||
"stream": true,
|
||||
"selectedOutputs": ["agent_1.content"]
|
||||
}'
|
||||
```
|
||||
</Tab>
|
||||
<Tab value="Python">
```python
import os
import requests

response = requests.post(
    "https://sim.ai/api/workflows/{workflow-id}/execute",
    headers={
        "Content-Type": "application/json",
        "x-api-key": os.environ["SIM_API_KEY"]
    },
    json={
        "input": "Write a long essay",
        "stream": True,
        "selectedOutputs": ["agent_1.content"]
    },
    stream=True
)
for line in response.iter_lines():
    if line:
        print(line.decode())
```
</Tab>
<Tab value="TypeScript">
```typescript
const response = await fetch('https://sim.ai/api/workflows/{workflow-id}/execute', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'x-api-key': process.env.SIM_API_KEY!
  },
  body: JSON.stringify({
    input: 'Write a long essay',
    stream: true,
    selectedOutputs: ['agent_1.content']
  })
});

const reader = response.body!.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log(decoder.decode(value));
}
```
</Tab>
</Tabs>

### Asynchronous

For long-running workflows, async mode returns a job ID immediately so you don't need to hold the connection open. Add the `X-Execution-Mode: async` header to your request. The API returns HTTP 202 with a job ID and status URL. Poll the status URL until the job completes.

<Tabs items={['Start Job', 'Check Status']}>
<Tab value="Start Job">
```bash
curl -X POST https://sim.ai/api/workflows/{workflow-id}/execute \
  -H "Content-Type: application/json" \
  -H "x-api-key: $SIM_API_KEY" \
  -H "X-Execution-Mode: async" \
  -d '{ "input": "Process this large dataset" }'
```

**Response** (HTTP 202):
```json
{
  "success": true,
  "async": true,
  "jobId": "run_abc123",
  "executionId": "exec_xyz",
  "message": "Workflow execution queued",
  "statusUrl": "https://sim.ai/api/jobs/run_abc123"
}
```
</Tab>
<Tab value="Check Status">
```bash
curl https://sim.ai/api/jobs/{jobId} \
  -H "x-api-key: $SIM_API_KEY"
```

**While processing:**
```json
{
  "success": true,
  "taskId": "run_abc123",
  "status": "processing",
  "metadata": {
    "createdAt": "2025-09-10T12:00:00.000Z",
    "startedAt": "2025-09-10T12:00:01.000Z"
  },
  "estimatedDuration": 300000
}
```

**When completed:**
```json
{
  "success": true,
  "taskId": "run_abc123",
  "status": "completed",
  "metadata": {
    "createdAt": "2025-09-10T12:00:00.000Z",
    "startedAt": "2025-09-10T12:00:01.000Z",
    "completedAt": "2025-09-10T12:00:05.000Z",
    "duration": 4000
  },
  "output": { "result": "..." }
}
```
</Tab>
</Tabs>

#### Job Status Values

| Status | Description |
|--------|-------------|
| `queued` | Job is waiting to be picked up |
| `processing` | Workflow is actively executing |
| `completed` | Finished successfully — `output` field contains the result |
| `failed` | Execution failed — `error` field contains the message |

Poll the `statusUrl` from the initial response until the status is `completed` or `failed`.
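
The polling loop described above can be sketched in Python. `poll_job` is a hypothetical helper, not part of Sim; the status-fetching call is injected, so you can plug in something like `lambda: requests.get(status_url, headers={"x-api-key": api_key}).json()`:

```python
import time

TERMINAL_STATUSES = {"completed", "failed"}

def poll_job(fetch_status, poll_interval=2.0, timeout=600.0, sleep=time.sleep):
    """Poll until the job reaches a terminal status.

    fetch_status: zero-argument callable returning the parsed JSON
    status payload from the statusUrl endpoint.
    """
    waited = 0.0
    while True:
        payload = fetch_status()
        if payload.get("status") in TERMINAL_STATUSES:
            return payload
        if waited >= timeout:
            raise TimeoutError("job did not reach a terminal status in time")
        sleep(poll_interval)
        waited += poll_interval
```

Check the returned payload's `status` field: read `output` on `completed`, `error` on `failed`.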

#### Execution Time Limits

| Plan | Sync Limit | Async Limit |
|------|-----------|-------------|
| **Community** | 5 minutes | 90 minutes |
| **Pro / Max / Team / Enterprise** | 50 minutes | 90 minutes |

If a job exceeds its time limit, it is automatically marked as `failed`.

#### Job Retention

Completed and failed job results are retained for **24 hours**. After that, the status endpoint returns `404`. Retrieve and store results on your end if you need them longer.

#### Capacity Limits

If the execution queue is full, the API returns `503`:

```json
{
  "error": "Service temporarily at capacity",
  "retryAfterSeconds": 10
}
```
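
A retry wrapper for this case might look like the following sketch. `execute_with_retry` is a hypothetical helper; the request call is injected so the backoff logic stays self-contained:

```python
import time

def execute_with_retry(do_request, max_attempts=5, sleep=time.sleep):
    """do_request: zero-argument callable returning (status_code, payload_dict)."""
    for _ in range(max_attempts):
        status, payload = do_request()
        if status != 503:
            return status, payload
        # The 503 body suggests how long to wait before trying again.
        sleep(payload.get("retryAfterSeconds", 10))
    raise RuntimeError("service still at capacity after retries")
```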

<Callout type="info">
Async mode always runs against the deployed version. It does not support draft state, block overrides, or partial execution options like `runFromBlock` or `stopAfterBlockId`.
</Callout>

## API Key Management

Generate and manage API keys in **Settings → Sim Keys**:

- **Create** new keys for different applications or environments
- **Revoke** keys that are no longer needed
- Keys are scoped to your workspace

## Rate Limits

API calls are subject to rate limits based on your plan. Rate limit details are returned in response headers (`X-RateLimit-*`) and in the response body. Use async mode for high-volume or long-running workloads.
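
To see how close you are to the limit, you can collect the rate-limit headers from any response. The exact header names may vary, so this sketch simply gathers everything under the `X-RateLimit-` prefix:

```python
def rate_limit_info(headers):
    """Collect X-RateLimit-* response headers, case-insensitively."""
    return {
        name.lower(): value
        for name, value in headers.items()
        if name.lower().startswith("x-ratelimit-")
    }
```

With `requests`, you can pass `response.headers` directly; it supports `.items()` like a plain dict.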

For detailed rate limit information and the logs/webhooks API, see [External API](/execution/api).

<FAQ items={[
  { question: "What is the difference between the General tab and the API tab?", answer: "The General tab manages your deployment lifecycle — deploying, updating, rolling back, and viewing version history. The API tab gives you ready-to-use code samples and lets you configure the endpoint's description and access mode." },
  { question: "Can I deploy the same workflow as both an API and a chat?", answer: "Yes. A workflow can be simultaneously deployed as an API, chat, MCP tool, and more. Each deployment type runs against the same active snapshot." },
  { question: "How do I choose between sync, streaming, and async?", answer: "Use sync for quick workflows that finish in seconds. Use streaming when you want to show progressive output to users as it's generated. Use async for long-running workflows where holding a connection open isn't practical." },
  { question: "How do I select multiple outputs for streaming?", answer: "Open the Select outputs dropdown in the API tab and check each output field you want to stream. You can choose fields from multiple blocks. The selected fields are reflected as an array in the selectedOutputs request body parameter." },
  { question: "How does Promote to live work?", answer: "Promote to live sets an older version as the active deployment without creating a new version. Subsequent API calls immediately run against the promoted snapshot. This is the fastest way to roll back to a previous state." },
  { question: "How long are async job results available?", answer: "Completed and failed job results are retained for 24 hours. After that, the status endpoint returns 404. Retrieve and store results on your end if you need them longer." },
  { question: "What happens if my API key is compromised?", answer: "Revoke the key immediately in Settings → Sim Keys and generate a new one. Revoked keys stop working instantly." },
]} />

184  apps/docs/content/docs/en/execution/chat.mdx  Normal file
@@ -0,0 +1,184 @@
---
title: Chat Deployment
---

import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'

Deploy your workflow as a conversational chat interface that users can interact with via a shareable link or embedded widget. Chat supports multi-turn conversations, file uploads, and voice input.

<Image src="/static/chat/chat-live.png" alt="A deployed chat interface showing a conversation with Friendly Assistant" width={800} height={500} />

Every chat message triggers a fresh workflow execution, with the full conversation history passed in as context. Responses stream back to the user in real time.

<Callout type="info">
Chat executions run against your workflow's active deployment snapshot. Publish a new deployment after making canvas changes so the chat uses the updated version.
</Callout>

## Creating a Chat

Open your workflow, click **Deploy**, and select the **Chat** tab. You'll see the chat configuration panel:

<Image src="/static/chat/chat-deploy-config.png" alt="Chat deployment configuration panel showing URL, Title, Output, Access control, and Welcome message fields" width={800} height={500} />

Configure the following fields, then click **Launch Chat**:

| Field | Description |
|-------|-------------|
| **URL** | Slug that forms the public URL, e.g. `https://www.sim.ai/chat/your-slug`. Lowercase letters, numbers, and hyphens only. Must be unique across all workspaces. |
| **Title** | Display name shown in the chat header. |
| **Output** | Output fields from your workflow blocks returned as the chat response. At least one must be selected. |
| **Welcome Message** | Greeting shown before the user sends their first message. Defaults to `"Hi there! How can I help you today?"`. |
| **Access Control** | Controls who can access the chat. See [Access Control](#access-control) below. |

### Output Selection

<Image src="/static/chat/chat-deploy-output.png" alt="Output dropdown showing Agent 1 block with selectable fields: content, model, tokens, toolCalls, providerTiming, cost" width={800} height={400} />

The output dropdown groups available fields by block. For an Agent block, you can choose from `content`, `model`, `tokens`, `toolCalls`, `providerTiming`, and `cost`. In most cases, selecting `content` from the final Agent block is all you need — it streams the agent's text response directly to the user.

## Access Control

<Image src="/static/chat/chat-deploy-access-email.png" alt="Access control section with Email tab selected, showing an Allowed emails field with @sim.ai domain added" width={800} height={300} />

| Mode | Description |
|------|-------------|
| **Public** | Anyone with the link can chat — no authentication required |
| **Password** | Users must enter a password before they can start chatting |
| **Email** | Only specific email addresses or domains can access. Users verify with a 6-digit OTP sent to their email |
| **SSO** | OIDC-based single sign-on (enterprise only) |

**Email access:** Add individual addresses (`user@example.com`) or entire domains (`@example.com`) to the **Allowed emails** field. Users receive a one-time 6-digit OTP to their inbox — once verified, they can chat for the duration of their session.

**Password access:** A password field appears when this mode is selected. Share the password with users directly; they enter it before the conversation begins.

**SSO:** Uses OIDC to authenticate users through your identity provider. Available on enterprise plans.

## Sharing

### Direct Link

```
https://www.sim.ai/chat/your-slug
```

### Iframe

```html
<iframe
  src="https://www.sim.ai/chat/your-slug"
  width="100%"
  height="600"
  frameborder="0"
  title="Chat"
></iframe>
```

## API Submission

You can also send messages to a chat programmatically. Responses are streamed using server-sent events (SSE).

<Tabs items={['cURL', 'TypeScript']}>
<Tab value="cURL">
```bash
curl -X POST https://www.sim.ai/api/chat/your-slug \
  -H "Content-Type: application/json" \
  -d '{
    "input": "Hello, I need help with my order",
    "conversationId": "optional-conversation-id"
  }'
```
</Tab>
<Tab value="TypeScript">
```typescript
const response = await fetch('https://www.sim.ai/api/chat/your-slug', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    input: 'Hello, I need help with my order',
    conversationId: 'optional-conversation-id'
  })
});

// Response is an SSE stream
const reader = response.body?.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader!.read();
  if (done) break;
  console.log(decoder.decode(value));
}
```
</Tab>
</Tabs>
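
The raw stream arrives as SSE frames. A minimal sketch for extracting just the `data:` payloads from the byte stream (assumes standard SSE line framing; `sse_data_lines` is a hypothetical helper, not a Sim API):

```python
def sse_data_lines(chunks):
    """Yield the payload of each `data:` line from an SSE byte stream.

    chunks: iterable of bytes objects as read from the response body.
    """
    buffer = ""
    for chunk in chunks:
        buffer += chunk.decode()
        while "\n" in buffer:
            line, buffer = buffer.split("\n", 1)
            line = line.rstrip("\r")
            if line.startswith("data:"):
                yield line[len("data:"):].strip()
```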

### With File Uploads

```bash
curl -X POST https://www.sim.ai/api/chat/your-slug \
  -H "Content-Type: application/json" \
  -d '{
    "input": "What does this document say?",
    "files": [{
      "name": "report.pdf",
      "type": "application/pdf",
      "size": 1048576,
      "data": "data:application/pdf;base64,..."
    }]
  }'
```
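
Each `files` entry carries the file inline as a base64 data URI. A small sketch for building one entry from raw bytes (`file_payload` is a hypothetical helper, not part of Sim):

```python
import base64

def file_payload(name, mime_type, content):
    """Build one entry for the `files` array from raw bytes."""
    encoded = base64.b64encode(content).decode()
    return {
        "name": name,
        "type": mime_type,
        "size": len(content),  # size in bytes, before base64 encoding
        "data": f"data:{mime_type};base64,{encoded}",
    }
```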

### Protected Chats

For password-protected chats, include the password in the request body:

```bash
curl -X POST https://www.sim.ai/api/chat/your-slug \
  -H "Content-Type: application/json" \
  -d '{ "password": "secret", "input": "Hello" }'
```

For email-protected chats, authenticate with OTP first:

```bash
# Step 1: Request OTP — sends a 6-digit code to the email address
curl -X POST https://www.sim.ai/api/chat/your-slug/otp \
  -H "Content-Type: application/json" \
  -d '{ "email": "allowed@example.com" }'

# Step 2: Verify OTP — save the Set-Cookie header for subsequent requests
curl -X PUT https://www.sim.ai/api/chat/your-slug/otp \
  -H "Content-Type: application/json" \
  -c cookies.txt \
  -d '{ "email": "allowed@example.com", "otp": "123456" }'

# Step 3: Send messages using the auth cookie from Step 2
curl -X POST https://www.sim.ai/api/chat/your-slug \
  -H "Content-Type: application/json" \
  -b cookies.txt \
  -d '{ "input": "Hello" }'
```

## Troubleshooting

**Chat returns 403** — The deployment is inactive. Open the Deploy modal and re-deploy the workflow.

**"At least one output block is required"** — No output field is selected in the Output dropdown. Open the Deploy modal, go to the Chat tab, and select at least one output from a block.

**OTP email not arriving** — Confirm the email address is on the allowed list and check spam folders. OTP codes expire after 15 minutes and can be resent after a 30-second cooldown.

**Chat not loading in iframe** — Check that your site's Content Security Policy allows iframes from `sim.ai`.

**Responses not updating after workflow changes** — Chat uses the active deployment snapshot. Publish a new deployment from the Deploy modal to pick up your latest changes.

<FAQ items={[
  { question: "How is chat different from API deployment?", answer: "API deployment exposes your workflow as a REST endpoint for programmatic use. Chat wraps the workflow in a hosted conversational UI with streaming, file uploads, voice input, and access control — no application code required to use it." },
  { question: "Which output field should I select?", answer: "For workflows built around Agent blocks, select the content field from the final Agent block — this streams the agent's text response to the user. You can select multiple fields if your workflow produces structured output you want to expose." },
  { question: "How does conversation history work?", answer: "Each message triggers a new workflow execution. The full conversation history — all prior user messages and assistant responses — is passed as context so your workflow can maintain continuity across turns." },
  { question: "How does email OTP authentication work?", answer: "When a user opens an email-protected chat, they enter their email address. If it matches the allowed list, Sim sends a 6-digit OTP to that address. The user enters the code, and a session cookie is set for the duration of their visit." },
  { question: "Is there a message length limit?", answer: "There is no hard limit on message length. Very long messages may impact response time depending on your workflow's model context window." },
  { question: "Can I use chat with any workflow?", answer: "Yes, any workflow can be deployed as a chat. The chat sends the user's message as the workflow input and streams the selected block outputs back as the response." },
]} />
@@ -308,6 +308,17 @@ By default, your usage is capped at the credits included in your plan. To allow

## Plan Limits

### Workspaces

| Plan | Personal Workspaces | Shared (Organization) Workspaces |
|------|---------------------|----------------------------------|
| **Free** | 1 | — |
| **Pro** | Up to 3 | — |
| **Max** | Up to 10 | — |
| **Team / Enterprise** | Unlimited | Unlimited |

Team and Enterprise plans unlock shared workspaces that belong to your organization. Members invited to a shared workspace automatically join the organization and count toward your seat total. When a Team or Enterprise subscription is cancelled or downgraded, existing shared workspaces remain accessible to current members but new invites are disabled until the organization is upgraded again.

### Rate Limits

| Plan | Sync (req/min) | Async (req/min) |
@@ -367,12 +378,12 @@ Sim uses a **base subscription + overage** billing model:

### Threshold Billing

-When on-demand is enabled and unbilled overage reaches $50, Sim automatically bills the full unbilled amount.
+When on-demand is enabled and unbilled overage reaches $100, Sim automatically bills the full unbilled amount.

**Example:**
-- Day 10: $70 overage → Bill $70 immediately
-- Day 15: Additional $35 usage ($105 total) → Already billed, no action
-- Day 20: Another $50 usage ($155 total, $85 unbilled) → Bill $85 immediately
+- Day 10: $120 overage → Bill $120 immediately
+- Day 15: Additional $60 usage ($180 total) → Already billed, no action
+- Day 20: Another $80 usage ($260 total, $140 unbilled) → Bill $140 immediately

This spreads large overage charges throughout the month instead of one large bill at period end.
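
The updated $100 threshold rule can be sketched as a small simulation; `threshold_bills` is a hypothetical helper used only to illustrate the billing logic, not part of Sim:

```python
def threshold_bills(daily_overage, threshold=100):
    """Bill the full unbilled amount whenever it reaches the threshold."""
    bills, unbilled = [], 0
    for amount in daily_overage:
        unbilled += amount
        if unbilled >= threshold:
            bills.append(unbilled)  # bill everything accrued so far
            unbilled = 0
    return bills
```

Running it on the walkthrough amounts (`[120, 60, 80]`) yields bills of $120 and $140; anything still under the threshold is billed at period end.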

@@ -441,6 +452,21 @@ curl -X GET -H "X-API-Key: YOUR_API_KEY" -H "Content-Type: application/json" htt

- `limit` is derived from individual limits (Free/Pro/Max) or pooled organization limits (Team/Enterprise)
- `plan` is the highest-priority active plan associated with your user

## Purchasing Additional Credits

Pro and Team plan users can buy additional credits at any time in **Settings → Subscription → Credit Balance**:

- **Range**: $10 to $1,000 per purchase
- **Conversion**: 1 credit = $0.005 (a $10 purchase adds 2,000 credits)
- **Availability**: Credits are added immediately after payment
- **Expiration**: Purchased credits do not expire
- **Refunds**: Purchases are non-refundable
- **Team plans**: Only organization owners and admins can purchase credits. Purchased credits are added to the team's shared pool.
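
The conversion above is simple arithmetic; a sketch with the documented bounds baked in (`dollars_to_credits` is a hypothetical helper, not part of Sim):

```python
def dollars_to_credits(dollars, price_per_credit=0.005):
    """1 credit = $0.005, so a $10 purchase adds 2,000 credits."""
    if not 10 <= dollars <= 1000:
        raise ValueError("purchases must be between $10 and $1,000")
    # round() guards against floating-point drift in the division
    return round(dollars / price_per_credit)
```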

<Callout type="info">
Enterprise users should contact support for credit adjustments.
</Callout>

## Cost Optimization Strategies

- **Model Selection**: Choose models based on task complexity. Simple tasks can use GPT-4.1-nano while complex reasoning might need o1 or Claude Opus.
@@ -465,5 +491,5 @@ import { FAQ } from '@/components/ui/faq'
  { question: "What happens when I exceed my plan's credit limit?", answer: "By default, your usage is capped at your plan's included credits and runs will stop. If you enable on-demand billing or manually raise your usage limit in Settings, you can continue running workflows and pay for the overage at the end of the billing period." },
  { question: "How does the 1.1x hosted model multiplier work?", answer: "When you use Sim's hosted API keys (instead of bringing your own), a 1.1x multiplier is applied to the base model pricing for Agent blocks. This covers infrastructure and API management costs. You can avoid this multiplier by using your own API keys via the BYOK feature." },
  { question: "Are there any free options for AI models?", answer: "Yes. If you run local models through Ollama or VLLM, there are no API costs for those model calls. You still pay the base run charge of 1 credit per run." },
-  { question: "When does threshold billing trigger?", answer: "When on-demand billing is enabled and your unbilled overage reaches $50, Sim automatically bills the full unbilled amount. This spreads large charges throughout the month instead of accumulating one large bill at period end." },
+  { question: "When does threshold billing trigger?", answer: "When on-demand billing is enabled and your unbilled overage reaches $100, Sim automatically bills the full unbilled amount. This spreads large charges throughout the month instead of accumulating one large bill at period end." },
]} />

@@ -32,6 +32,15 @@ Sim's execution engine brings your workflows to life by processing blocks in the
  <Card title="External API" href="/execution/api">
    Access run logs and set up webhooks programmatically via REST API
  </Card>

  <Card title="API Deployment" href="/execution/api-deployment">
    Deploy your workflow as a REST API endpoint with sync, streaming, and async modes
  </Card>

  <Card title="Chat Deployment" href="/execution/chat">
    Deploy your workflow as a conversational chat interface with streaming, file uploads, and voice
  </Card>

</Cards>

## Key Concepts
@@ -58,17 +67,51 @@ Each workflow maintains a rich context during a run containing:

API, Chat, Schedule, and Webhook runs use the workflow’s active deployment snapshot. Manual runs from the editor use the current draft canvas state, letting you test changes before deploying. Publish a new deployment whenever you change the canvas so every trigger uses the updated version.

-<div className='flex justify-center my-6'>
+<div className="flex justify-center my-6">
  <Image
-    src='/static/execution/deployment-versions.png'
-    alt='Deployment versions table'
+    src="/static/execution/deployment-versions.png"
+    alt="Deployment versions table"
    width={500}
    height={280}
-    className='rounded-xl border border-border shadow-sm'
+    className="rounded-xl border border-border shadow-sm"
  />
</div>

-The Deploy modal keeps a full version history—inspect any snapshot, compare it against your draft, and promote or roll back with one click when you need to restore a prior release.
### Version History

The **General** tab in the Deploy modal shows a version history table for every deployment. Each row lists the version name, who deployed it, and when.

<div className="flex justify-center">
  <Image
    src="/static/execution/deployment-versions-table.png"
    alt="Version history table with multiple deployment versions"
    width={600}
    height={650}
    className="my-6"
  />
</div>

From the version table you can:

- **Rename** a version to give it a meaningful label (e.g., "v2 — added error handling")
- **Add a description** with notes about what changed in that deployment
- **Promote to live** to roll back to an older version — this makes the selected version the active deployment without changing your draft canvas
- **Load into editor** to restore a previous version's workflow into the canvas for editing and redeploying
- **Preview a version** by selecting a row to view that version's workflow in the canvas preview, then toggle between **Live** and the selected version

<div className="flex justify-center">
  <Image
    src="/static/execution/deployment-version-preview.png"
    alt="Previewing a selected deployment version"
    width={600}
    height={650}
    className="my-6"
  />
</div>

<Callout type="info">
Promoting an old version takes effect immediately — all API, Chat, Schedule, and Webhook executions will use the promoted version. Your draft canvas is not affected.
</Callout>

## Programmatic Access

@@ -1,3 +1,3 @@
{
-  "pages": ["index", "basics", "files", "api", "logging", "costs"]
+  "pages": ["index", "basics", "files", "api", "api-deployment", "chat", "logging", "costs"]
}

@@ -51,7 +51,7 @@ Welcome to Sim, the open-source AI workspace where teams build, deploy, and mana
  <Card title="MCP Integration" href="/mcp">
    Connect external services with Model Context Protocol
  </Card>
-  <Card title="SDKs" href="/sdks">
+  <Card title="SDKs" href="/api-reference">
    Integrate Sim into your applications
  </Card>
</Cards>
140  apps/docs/content/docs/en/integrations/index.mdx  Normal file
@@ -0,0 +1,140 @@
---
title: Integrations
description: Connect third-party services and OAuth accounts for your workflows
---

import { Callout } from 'fumadocs-ui/components/callout'
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'

Integrations are authenticated connections to third-party services like Gmail, Slack, GitHub, Dropbox, and more. Sim handles the OAuth flow, token storage, and automatic token refresh — you connect once and select the account in any block that needs it.

You can connect **multiple accounts per service** — for example, two separate Gmail accounts for different workflows.

## Managing Integrations

To manage integrations, open your workspace **Settings** and navigate to the **Integrations** tab.

<Image
  src="/static/integrations/integrations-list.png"
  alt="Integrations tab showing connected accounts with service icons, names, and Details/Disconnect buttons"
  width={700}
  height={500}
/>

The list shows all your connected accounts with the service icon, display name, and provider. Each entry has a **Details** button and a **Disconnect** button.

## Connecting an Account

Click **+ Connect** in the top right to open the connection modal.

<Image
  src="/static/integrations/connect-service-picker.png"
  alt="Connect Integration modal showing a searchable list of available services"
  width={500}
  height={400}
/>

Search for or select the service you want to connect, then fill in the connection details:

<Image
  src="/static/integrations/connect-modal.png"
  alt="Connect Gmail modal showing permissions requested, display name field, and description field"
  width={500}
  height={450}
/>

1. Review the **Permissions requested** — these are the scopes Sim will request from the provider
2. Enter a **Display name** to identify this connection (e.g. "Work Gmail" or "Marketing Slack")
3. Optionally add a **Description**
4. Click **Connect** and complete the authorization flow

## Using Integrations in Workflows

Blocks that require authentication (e.g. Gmail, Slack, Google Sheets) display a credential selector. Select the connected account you want that block to use.

<Image
  src="/static/credentials/oauth-selector.png"
  alt="Gmail block showing the account selector dropdown with connected accounts"
  width={500}
  height={350}
/>

You can also connect additional accounts directly from the block by selecting **Connect another [service] account** at the bottom of the dropdown.

<Callout type="info">
If a block requires an integration and none is selected, the workflow will fail at that step.
</Callout>

## Using a Credential ID

Each integration has a unique credential ID you can use to reference it dynamically. This is useful when you have multiple accounts for the same service and want to switch between them programmatically — for example, routing different workflow runs to different Gmail accounts based on a variable.

To copy a credential ID, open **Details** on any integration and click the clipboard icon next to the Display Name.

<Image
  src="/static/integrations/copy-credential-id.png"
  alt="Integration details showing the Copy credential ID tooltip on the clipboard icon next to the Display Name"
  width={700}
  height={150}
/>

In any block that requires an integration, click **Switch to manual ID** next to the credential selector to switch from the dropdown to a text field.

<Image
  src="/static/integrations/switch-to-manual-id.png"
  alt="Block showing the Switch to manual ID button next to the account selector"
  width={500}
  height={200}
/>

Paste or reference the credential ID in that field. You can use a `{{SECRET}}` reference or a block output variable to make it dynamic.

<Image
  src="/static/integrations/manual-credential-id.png"
  alt="Block showing the Enter credential ID text field after switching to manual mode"
  width={500}
  height={200}
/>

## Integration Details

Click **Details** on any integration to open its detail view.

<Image
  src="/static/integrations/integration-details.png"
  alt="Integration details view showing Display Name, Description, Members, Reconnect, and Disconnect"
  width={700}
  height={420}
/>

From here you can:

- Edit the **Display Name** and **Description**
- Manage **Members** — invite teammates by email and assign them an **Admin** or **Member** role
- **Reconnect** — re-authorize the connection if it has expired or if you need to update permissions
- **Disconnect** — remove the integration entirely

Click **Save** to apply changes, or **Back** to return to the list.

<Callout type="warn">
If you disconnect an integration that is used in a workflow, that workflow will fail at any block referencing it. Update blocks before disconnecting.
</Callout>

## Access Control

Each integration has role-based access:

- **Admin** — can view, edit, disconnect, reconnect, and manage member access
- **Member** — can use the integration in workflows (read-only)

When you connect an integration, you are automatically set as its Admin. You can share it with teammates from the Details view.

<FAQ items={[
  { question: "Does Sim handle OAuth token refresh automatically?", answer: "Yes. When an integration is used during execution, Sim checks whether the access token has expired and automatically refreshes it using the stored refresh token before making the API call. You do not need to handle token refresh manually." },
  { question: "Can I connect multiple accounts for the same service?", answer: "Yes. You can connect multiple accounts per service (for example, two separate Gmail accounts). Each block lets you select which account to use from the credential dropdown. This is useful when different workflows need different identities or permissions." },
  { question: "What is a credential ID and when should I use it?", answer: "Each integration has a unique credential ID that you can use instead of the dropdown selector. This lets you pass the credential dynamically — for example, from a variable or a previous block's output — so the same workflow can use different accounts depending on the context. Copy the ID from the Details view and use Switch to manual ID in any block to paste or reference it." },
  { question: "What happens if an OAuth token can no longer be refreshed?", answer: "If a refresh fails (e.g. the user revoked access or the refresh token expired), the workflow will fail at the block using that integration. Open Settings → Integrations, find the connection, and use the Reconnect button to re-authorize it." },
  { question: "Are OAuth tokens encrypted at rest?", answer: "Yes. OAuth tokens are encrypted before being stored in the database and are never exposed in the workflow editor, logs, or API responses." },
  { question: "What happens if I disconnect an integration that is used in a workflow?", answer: "Any block referencing the disconnected integration will fail at runtime. Make sure to update those blocks before disconnecting, or reconnect the integration to restore access." },
]} />

5
apps/docs/content/docs/en/integrations/meta.json
Normal file
@@ -0,0 +1,5 @@
{
  "title": "Integrations",
  "pages": ["index", "google-service-account"],
  "defaultOpen": false
}
@@ -5,13 +5,16 @@ description: Automatically sync documents from external sources into your knowle

import { Callout } from 'fumadocs-ui/components/callout'
import { Step, Steps } from 'fumadocs-ui/components/steps'
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'

Connectors let you pull documents directly from external services into your knowledge base. Instead of manually uploading files, a connector continuously syncs content from sources like Notion, Google Drive, GitHub, Slack, and more — keeping your knowledge base up to date automatically.
Connectors continuously sync documents from external services into your knowledge base, so you never have to upload files manually. New content is added, changed content is re-processed, and deleted content is removed — all automatically.

## Available Connectors

Sim ships with 30 built-in connectors spanning productivity tools, cloud storage, development platforms, and more.
<Image src="/static/connectors/connectors-sources.png" alt="Connect Source picker showing a searchable list of available connectors including Airtable, Asana, Confluence, Discord, Dropbox, Evernote, Fireflies, GitHub, and Gmail" width={800} height={500} />

Sim ships with 30 built-in connectors:

| Category | Connectors |
|----------|-----------|
@@ -29,24 +32,25 @@ Sim ships with 30 built-in connectors spanning productivity tools, cloud storage

## Adding a Connector

From inside a knowledge base, click **+ New connector** in the top right to open the connector picker. Select a service, then complete the setup steps:

<Steps>
<Step>

### Select a source

Open a knowledge base and click **Add Connector**. You'll see the full list of available connectors — pick the service you want to sync from.

</Step>
<Step>

### Authenticate

Most connectors use **OAuth** — select an existing credential from the dropdown, or click **Connect new account** to authorize through the service's login flow. Tokens are refreshed automatically, so you won't need to re-authenticate unless you revoke access.
Most connectors use **OAuth** — select an existing credential from the dropdown or click **Connect new account** to authorize through the service. Tokens are refreshed automatically.

A few connectors (Evernote, Obsidian, Fireflies) use **API keys** instead. Paste your key or developer token directly, and it will be stored securely.
A few connectors use **API keys** instead:

| Connector | Where to get the key |
|-----------|---------------------|
| **Evernote** | Developer Token (starts with `S=`) from your Evernote account settings |
| **Obsidian** | Install the [Local REST API](https://github.com/coddingtonbear/obsidian-local-rest-api) plugin, then copy the key from its settings |
| **Fireflies** | Generate from the Integrations page in your Fireflies account |

<Callout type="info">
  If you rotate an API key in the external service, you'll need to update it in Sim as well. OAuth tokens are refreshed automatically, but API keys are not.
  If you rotate an API key in the external service, update it in Sim as well — OAuth tokens refresh automatically, but API keys do not.
</Callout>

</Step>
@@ -54,103 +58,135 @@ A few connectors (Evernote, Obsidian, Fireflies) use **API keys** instead. Paste

### Configure

Each connector has its own configuration fields that control what gets synced. Some examples:
Each connector has source-specific fields that control what gets synced. Examples:

- **Notion**: Choose between syncing an entire workspace, a specific database, or a single page tree
- **GitHub**: Specify a repository, branch, and optional file extension filter
- **Confluence**: Enter your Atlassian domain and optionally filter by space key or content type
- **Obsidian**: Provide your vault URL and optionally restrict to a folder path
- **Notion** — sync an entire workspace, a specific database, or a single page tree
- **GitHub** — specify a repository, branch, and optional file extension filter
- **Confluence** — enter your Atlassian domain and optionally filter by space key or content type
- **Obsidian** — provide your vault URL (`https://127.0.0.1:27124` by default) and optionally restrict to a folder path
- **Fireflies** — optionally filter by host email or cap the number of transcripts synced

All configuration is validated when you save — if a repository doesn't exist or a domain is unreachable, you'll get an immediate error.
Configuration is validated on save — if a repository doesn't exist or a domain is unreachable, you'll see an error immediately.

</Step>
<Step>

### Choose sync frequency

Select how often the connector should re-sync:

| Frequency | Description |
|-----------|-------------|
| Frequency | Notes |
|-----------|-------|
| Every hour | Best for fast-moving sources |
| Every 6 hours | Good balance for most use cases |
| Every 6 hours | Good balance for most sources |
| **Daily** (default) | Suitable for content that changes infrequently |
| Weekly | For stable, rarely-updated sources |
| Manual only | Sync only when you trigger it |
| Manual only | Sync only when you trigger it manually |

Sub-hourly frequencies require a Max or Enterprise plan.

</Step>
<Step>

### Configure metadata tags (optional)

If the connector supports metadata tags, you'll see checkboxes for each tag type (e.g., Labels, Last Modified, Notebook). All are enabled by default — uncheck any you don't need.
If the connector supports metadata tags, you'll see checkboxes for each available tag type (e.g., Labels, Last Modified, Notebook). All are enabled by default — uncheck any you don't need.

See the [Metadata Tags](#metadata-tags) section below for details.
Tag slots are shared across all documents in a knowledge base. See [Tags](/knowledgebase/tags) for details.

</Step>
<Step>

### Connect & Sync

Click **Connect & Sync** to save the connector and trigger the first sync immediately. Documents will begin appearing in your knowledge base as they are processed.
Click **Connect & Sync** to save the connector and trigger the first sync. Documents will start appearing as they're processed.

</Step>
</Steps>

## How Syncing Works
## Managing Connectors

On each sync, the connector fetches documents from the external service and compares them against what's already in your knowledge base. Only documents that have actually changed are reprocessed — new content is added, updated content is re-chunked and re-embedded, and documents that no longer exist in the source are removed.
Open **Connected Sources** from the knowledge base to see all active connectors. Each card shows the connector's status, the last sync time and document count, and the next scheduled sync:

This means syncing is efficient even for large document sets. A connector with thousands of documents will only do meaningful work when something changes.
<Image src="/static/connectors/connectors-sync-history.png" alt="Connected Sources panel showing a Google Docs connector with Active status, last sync details, and a sync history log with dated entries" width={800} height={450} />

### Handling Failures
The action buttons on each connector card:

If a single document fails to fetch (e.g., due to a permission issue or timeout), the sync continues with the remaining documents. The failed document will be retried on the next sync cycle.
| Button | Action |
|--------|--------|
| **↻** (Refresh) | Trigger a manual sync immediately. Unavailable while a sync is in progress or the connector is disabled; a 5-minute cooldown applies after each manual trigger |
| **⚙** (Settings) | Open the edit modal to change source config or sync frequency |
| **⏸ / ▶** (Pause / Resume) | Pause scheduled syncs without removing the connector. Resume works from both paused and disabled states |
| **🗑** (Delete) | Remove the connector. A confirmation modal appears with an option to also delete all synced documents |
| **∨** (Chevron) | Expand to show sync history |

If an entire sync fails (e.g., the service is down or credentials expired), the connector automatically backs off and retries later. The backoff resets as soon as a sync succeeds.
### Editing a Connector

## Metadata Tags
Click the settings icon to open the edit modal. It has two tabs:

Connectors can automatically populate [tags](/docs/knowledgebase/tags) with metadata from the source, letting you filter documents in the Knowledge block based on information from the external service.
**Settings** — change any source-specific config fields (e.g., switch the GitHub branch) and update the sync frequency.

For example, a Notion connector might tag documents with their **Labels**, **Last Modified** date, and **Created** date. A GitHub connector might tag documents with their **Repository** and **File Path**. This metadata becomes available for [tag-based filtering](/docs/knowledgebase/tags) in your workflows.
**Documents** — browse all documents this connector has synced and manage exclusions (see [Excluding Documents](#excluding-documents) below).

### Opting Out
### Sync History

You can disable specific metadata tags during connector setup. Disabled tags won't be populated, leaving those tag slots available for other connectors or manual tagging.
Expand any connector card by clicking the chevron to see a log of recent syncs:

<Callout type="info">
  Tag slots are shared across all documents in a knowledge base. If you have multiple connectors, each one's metadata tags draw from the same pool of available slots.
</Callout>
- Each row shows the date/time and a summary of what changed: **+N** (added, green), **~N** (updated, amber), **-N** (deleted, red), **!N** (failed, red), or **No changes**
- A spinner indicates a sync currently in progress
- Error rows show a red icon and the failure message

The log retains the most recent 10 sync runs.

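The summary notation in each history row maps directly to a counts-to-string rule. A minimal sketch of that formatting logic, where the `format_sync_summary` helper is hypothetical and not part of Sim:

```python
def format_sync_summary(added: int, updated: int, deleted: int, failed: int) -> str:
    """Build a compact change summary in the +N / ~N / -N / !N notation.

    Hypothetical helper: mirrors the sync-history row format described
    above, falling back to "No changes" when every count is zero.
    """
    parts = []
    if added:
        parts.append(f"+{added}")
    if updated:
        parts.append(f"~{updated}")
    if deleted:
        parts.append(f"-{deleted}")
    if failed:
        parts.append(f"!{failed}")
    return " ".join(parts) if parts else "No changes"
```

Zero counts are simply omitted, which matches rows that show only the kinds of changes that actually occurred.
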
## Excluding Documents

You can manually exclude specific documents from a connector's sync. Excluded documents are skipped on every subsequent sync, even if they change in the source. This is useful for filtering out templates, drafts, or other content you don't want in your knowledge base.
Sometimes a connector syncs documents you don't want in your knowledge base — drafts, templates, confidential pages, and so on. You can exclude them individually.

## Source Links
<Image src="/static/connectors/connectors-excluded.png" alt="Edit Google Docs modal showing the Documents tab with Active (37) and Excluded (0) filter buttons and a 'No excluded documents' message" width={800} height={450} />

Every synced document retains a link back to the original in the external service. This lets you trace any knowledge base document to its source — whether that's a Notion page, a GitHub file, a Confluence article, or a Slack conversation.
To exclude a document, open the connector's settings modal, go to the **Documents** tab, and click **Exclude** next to any document. Excluded documents are skipped on every subsequent sync even if the source content changes.

To reverse an exclusion, switch to the **Excluded** tab and click **Restore** — the document will be pulled in on the next sync.

## How Syncing Works

On each run the connector fetches documents from the source and compares them against what's already stored. Only changed documents are reprocessed — new content is added, updated content is re-chunked and re-embedded, deleted content is removed. A connector syncing thousands of documents will only do real work when something actually changes.

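The change detection above can be pictured as a hash comparison. This is a conceptual sketch only; the `diff_documents` helper and the hashing scheme are assumptions, not Sim's actual implementation:

```python
import hashlib

def diff_documents(source: dict[str, str], stored: dict[str, str]) -> dict[str, list[str]]:
    """Decide which documents need work on a sync run.

    Conceptual sketch: `source` maps document IDs to their current content,
    `stored` maps IDs to the content hash recorded at the last sync.
    Only added or changed documents would need re-chunking and re-embedding.
    """
    def h(text: str) -> str:
        return hashlib.sha256(text.encode()).hexdigest()

    added = [doc_id for doc_id in source if doc_id not in stored]
    updated = [doc_id for doc_id in source
               if doc_id in stored and h(source[doc_id]) != stored[doc_id]]
    deleted = [doc_id for doc_id in stored if doc_id not in source]
    return {"added": added, "updated": updated, "deleted": deleted}
```

Comparing cheap hashes first is what keeps a sync over thousands of unchanged documents close to a no-op.
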
### Connector Status

| Status | Meaning |
|--------|---------|
| **Active** | Running normally on schedule |
| **Syncing** | A sync is currently in progress |
| **Paused** | Scheduled syncs are suspended; manual sync is still available |
| **Error** | The last sync failed; will retry on the next scheduled run with backoff |
| **Disabled** | Syncing has been paused automatically after 10 consecutive failures |

A disabled connector requires intervention — either reconnect the OAuth account or use the Resume button to re-enable syncing.

### Handling Failures

If a single document fails (e.g., a permission issue or timeout), the sync continues and retries that document next time. If an entire sync fails, the connector backs off and retries with increasing delays. After 10 consecutive full-sync failures the connector is automatically set to **Disabled** to avoid spinning indefinitely.

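The status transitions described above amount to a consecutive-failure counter. A minimal sketch under those stated rules, where the `ConnectorState` class is hypothetical and not Sim's code:

```python
from dataclasses import dataclass

@dataclass
class ConnectorState:
    """Conceptual bookkeeping implied by the docs above.

    Consecutive full-sync failures accumulate, any success resets the
    counter, and the tenth consecutive failure flips the status to
    Disabled (which then requires Resume or a reconnect).
    """
    consecutive_failures: int = 0
    status: str = "Active"

    def record_sync(self, succeeded: bool) -> None:
        if succeeded:
            self.consecutive_failures = 0
            self.status = "Active"
        else:
            self.consecutive_failures += 1
            self.status = "Error"
            if self.consecutive_failures >= 10:
                self.status = "Disabled"  # manual intervention needed
```
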
## Metadata Tags

Connectors can auto-populate [tags](/knowledgebase/tags) with metadata from the source — for example, a Notion connector can tag documents with their Labels and Last Modified date; a GitHub connector can tag documents with Repository and File Path. These tags are then available for filtered search in the Knowledge block.

You can disable specific tag types during setup or at any time from the connector settings to free up tag slots for manual tagging or other connectors.

<Callout type="info">
  Tag slots are shared across all documents in a knowledge base. If multiple connectors each populate tags, they draw from the same pool of 17 slots.
</Callout>

## Multiple Connectors

You can add multiple connectors to a single knowledge base. For example, you might sync internal documentation from Confluence alongside code from GitHub and meeting notes from Fireflies — all searchable together through the Knowledge block.
<Image src="/static/connectors/connectors-list.png" alt="Knowledge base document list showing synced Google Docs documents with Name, Size, Tokens, Chunks, Uploaded date, Status, and Tags columns" width={800} height={300} />

Each connector manages its own documents independently. Metadata tag slots are shared across the knowledge base, so keep an eye on slot usage if you're combining several connectors that each populate tags.

## Common Use Cases

- **Internal knowledge base**: Sync your team's Notion workspace and Confluence spaces so AI agents can answer questions about internal processes, policies, and documentation
- **Customer support**: Connect HubSpot or Salesforce alongside your help docs from WordPress or Google Docs to give support agents full context on customers and product information
- **Engineering assistant**: Sync a GitHub repository and Jira or Linear issues so an AI agent can reference code, specs, and ticket history when answering developer questions
- **Meeting intelligence**: Pull in Fireflies transcripts alongside Slack conversations to build a searchable archive of decisions and discussions
- **Research and notes**: Sync Evernote notebooks or an Obsidian vault to make your personal notes available to AI workflows
You can add as many connectors as you need to a single knowledge base. Each manages its own documents independently, and all content is searchable together through the Knowledge block. Keep tag slot usage in mind when combining connectors that each populate metadata tags.

<FAQ items={[
  { question: "How often do connectors sync?", answer: "You can choose from hourly, every 6 hours, daily (default), weekly, or manual-only sync frequencies. Each connector can have its own schedule." },
  { question: "What happens if a source document is deleted?", answer: "On the next sync, the connector detects that the document no longer exists in the source and removes it from your knowledge base automatically." },
  { question: "Can I connect multiple services to one knowledge base?", answer: "Yes. You can add as many connectors as you need to a single knowledge base. Each connector manages its documents independently." },
  { question: "Do I need to re-authenticate connectors?", answer: "OAuth-based connectors refresh tokens automatically. API key-based connectors (Evernote, Obsidian, Fireflies) need manual updates if you rotate the key." },
  { question: "What if a connector sync fails?", answer: "If a single document fails, the rest of the sync continues. If the entire sync fails (e.g., service is down), the connector backs off and retries automatically." },
  { question: "Can I exclude specific documents from syncing?", answer: "Yes. You can manually exclude documents from any connector. Excluded documents are skipped on every subsequent sync, even if they change in the source." },
  { question: "Do metadata tags count against a limit?", answer: "Tag slots are shared across all documents in a knowledge base. If you have multiple connectors, their metadata tags draw from the same pool of available slots." },
  { question: "How often do connectors sync?", answer: "You choose from hourly, every 6 hours, daily (default), weekly, or manual-only. Sub-hourly frequencies require a Max or Enterprise plan. Each connector has its own schedule." },
  { question: "What happens if a source document is deleted?", answer: "On the next sync the connector detects the document is gone and removes it from your knowledge base automatically." },
  { question: "What happens when I delete a connector?", answer: "The connector is removed and future syncs stop. You're given the option to also delete all documents that were synced by that connector. If you don't check that option, they stay in the knowledge base as-is." },
  { question: "What does the Disabled status mean?", answer: "After 10 consecutive full-sync failures, the connector is automatically disabled to stop retrying. Reconnect the OAuth account or click Resume to re-enable it." },
  { question: "Do metadata tags count against a limit?", answer: "Yes. Tag slots are shared across all documents in a knowledge base — 17 slots total. Multiple connectors draw from the same pool, so plan accordingly if several connectors each auto-populate tags." },
  { question: "Do I need to re-authenticate connectors?", answer: "OAuth connectors refresh tokens automatically. API key connectors (Evernote, Obsidian, Fireflies) need manual updates if you rotate the key in the external service." },
]} />

@@ -2,117 +2,112 @@
title: Tags and Filtering
---

import { Video } from '@/components/ui/video'
import { Callout } from 'fumadocs-ui/components/callout'
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'

Tags provide a powerful way to organize your documents and create precise filtering for your vector searches. By combining tag-based filtering with semantic search, you can retrieve exactly the content you need from your knowledgebase.
Tags let you attach structured metadata to documents so the Knowledge block can filter results precisely — by department, date, priority, status, or any dimension you define.

## Adding Tags to Documents
## How Tags Work

You can add custom tags to any document in your knowledgebase to organize and categorize your content for easier retrieval.
Tags have two layers:

<div className="mx-auto w-full overflow-hidden rounded-lg">
  <Video src="knowledgebase-tag.mp4" width={700} height={450} />
</div>
1. **Tag definitions** — created at the knowledge base level. A definition has a name (e.g., "Department") and a type (Text, Number, Date, or Boolean). Definitions are shared across all documents.
2. **Tag values** — set per document. Once a definition exists, you assign a value to it on each document that needs it (e.g., `Department = "engineering"`).

### Tag Management
- **Custom tags**: Create your own tag system that fits your workflow
- **Multiple tags per document**: Apply as many tags as needed to each document. Each knowledgebase has 17 tag slots total: 7 text, 5 number, 2 date, and 3 boolean slots, shared by all documents in the knowledgebase
- **Tag organization**: Group related documents with consistent tagging
## Tag Slots

### Tag Best Practices
- **Consistent naming**: Use standardized tag names across your documents
- **Descriptive tags**: Use clear, meaningful tag names
- **Regular cleanup**: Remove unused or outdated tags periodically
Each knowledge base has **17 tag slots** distributed across four types:

## Using Tags in Knowledge Blocks
| Type | Slots | Accepted values |
|------|-------|-----------------|
| **Text** | 7 | Any string — matching is case-insensitive |
| **Number** | 5 | Any valid number |
| **Date** | 2 | `YYYY-MM-DD` format |
| **Boolean** | 3 | `true` or `false` |

Tags become powerful when combined with the Knowledge block in your workflows. You can filter your searches to specific tagged content, ensuring your AI agents get the most relevant information.
The type dropdown in the creation form shows current slot usage for each type (e.g., `Text (0/7)` means none of the 7 text slots are in use yet).

<div className="mx-auto w-full overflow-hidden rounded-lg">
  <Video src="knowledgebase-tag2.mp4" width={700} height={450} />
</div>
<Callout type="info">
  Slots are shared across all documents and connectors in a knowledge base. Connectors that auto-populate metadata tags draw from the same pool. Plan your schema with that in mind.
</Callout>

## Defining Tags

Tag definitions live at the knowledge base level. To manage them, click the knowledge base name in the header to open the context menu and select **Tags**:

<Image src="/static/tags/tags-kb-menu.png" alt="Knowledge base header showing the dropdown menu with Rename, Tags, and Delete options" width={700} height={400} />

This opens the Tags modal, which lists all defined tags and shows how many documents each one is assigned to. Click **Add Tag** to define a new one:

<Image src="/static/tags/tags-create.png" alt="Tags modal showing 0 defined tags, a Tag Name input field, and a Type dropdown set to Text (0/7), with Cancel and Create Tag buttons" width={700} height={450} />

Enter a **Tag Name** and pick a **Type**, then click **Create Tag**. The name must be unique within the knowledge base. The type dropdown only shows types that still have available slots. Press Enter to submit or Escape to cancel.

To delete a tag definition, click the trash icon next to it. Deleting a definition removes the tag value from every document it was assigned to — the modal shows you which documents are affected before you confirm.

Clicking any existing tag definition opens a dialog showing all documents that have a value set for it, along with their current tag values.

## Setting Tag Values on Documents

Once a definition exists, you assign values document by document. Right-click any document (or click the `…` menu) to open the document context menu, then select **Tags**:

This opens the tag panel for that document, where you can set a value for each defined tag.

## Viewing Tags in the Document List

The **Tags** column in the document list shows the current tag values for each document at a glance. Documents with no tags assigned show `– – –`:

<Image src="/static/tags/tags-document-list.png" alt="Knowledge base document list showing Name, Size, Tokens, Chunks, Uploaded, Status, and Tags columns — Document1.txt shows no tags (– – –) while Document2.txt shows the value 'Waleed'" width={900} height={200} />

Use the **Filter** and **Sort** controls in the top right to narrow the list by tag values or sort by them.

## Using Tags in the Knowledge Block

In a workflow, open the Knowledge block and configure **Tag Filters** to restrict which documents are searched:

<Image src="/static/tags/tags-knowledge-block.png" alt="Knowledge block editor showing Operation: search, Knowledge Base: test, Search Query field (optional), Number of Results, and a Tag Filters section with Filter 1 containing Tag: Name, Operator: equals, and a Value field" width={900} height={500} />

Each filter has three parts:
- **Tag** — select a tag definition from the knowledge base
- **Operator** — depends on the tag type (see below)
- **Value** — the value to match against

Add as many filters as you need with the **+** button. Multiple filters are combined with AND logic — a document must match all filters to be included in the search.

### Operators by Type

| Type | Available operators |
|------|-------------------|
| **Text** | `equals`, `not equals`, `contains`, `does not contain`, `starts with`, `ends with` |
| **Number** | `equals`, `not equals`, `greater than`, `greater than or equal`, `less than`, `less than or equal`, `between` |
| **Date** | `equals`, `after`, `on or after`, `before`, `on or before`, `between` |
| **Boolean** | `is`, `is not` |

Tag values in filter fields can be static strings or workflow variable references (e.g., `<start.department>`), so filtering can adapt dynamically at runtime.

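To make the text operators concrete, here is a minimal sketch of how a set of AND-combined text filters could be evaluated, using the case-insensitive matching noted for text tags. The helper names are hypothetical, not Sim's implementation:

```python
# Hypothetical evaluation of text-tag filters with AND logic.
# Matching is case-insensitive, per the text-tag rules above.
TEXT_OPERATORS = {
    "equals": lambda tag, value: tag == value,
    "not equals": lambda tag, value: tag != value,
    "contains": lambda tag, value: value in tag,
    "does not contain": lambda tag, value: value not in tag,
    "starts with": lambda tag, value: tag.startswith(value),
    "ends with": lambda tag, value: tag.endswith(value),
}

def matches_all(doc_tags: dict[str, str], filters: list[tuple[str, str, str]]) -> bool:
    """Return True only if the document satisfies every (tag, operator, value) filter."""
    for tag_name, operator, value in filters:
        tag = doc_tags.get(tag_name, "").lower()
        if not TEXT_OPERATORS[operator](tag, value.lower()):
            return False
    return True
```

Number, date, and boolean operators would follow the same shape with type-appropriate comparisons.
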
## Search Modes

The Knowledge block supports three different search modes depending on what you provide:
The Knowledge block behaves differently depending on what you provide:

### 1. Tag-Only Search
When you **only provide tags** (no search query):
- **Direct retrieval**: Fetches all documents that have the specified tags
- **No vector search**: Results are based purely on tag matching
- **Fast performance**: Quick retrieval without semantic processing
- **Exact matching**: Only documents with all specified tags are returned
| What you provide | Behavior |
|-----------------|-----------|
| **Tags only** (no search query) | Fetches all documents that match the tag filters — pure tag matching, no vector search |
| **Query only** (no tag filters) | Semantic vector search across all documents in the knowledge base |
| **Both tags and query** | Tag filters run first to narrow the document set, then vector search runs within that subset |

**Use case**: When you need all documents from a specific category or project
The combined mode is the most precise — tag filtering cuts down the candidate set cheaply before the more expensive vector similarity comparison runs.

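The three behaviors in the table can be sketched as a single dispatch. This is a conceptual illustration only; `tag_match` and `vector_search` are hypothetical stand-ins for Sim's internals:

```python
def knowledge_search(docs, query=None, filters=None, tag_match=None, vector_search=None):
    """Sketch of the three search modes: tags only, query only, or both.

    `tag_match(doc, filters)` and `vector_search(query, docs)` are
    hypothetical callables, not real Sim APIs.
    """
    candidates = list(docs)
    if filters:
        # Tag filtering always runs first, cheaply narrowing the set.
        candidates = [d for d in candidates if tag_match(d, filters)]
    if query is None:
        return candidates  # tags only: pure tag matching, no vector search
    # Query only, or query within the tag-filtered subset.
    return vector_search(query, candidates)
```
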
### 2. Vector Search Only
When you **only provide a search query** (no tags):
- **Semantic search**: Finds content based on meaning and context
- **Full knowledgebase**: Searches across all documents
- **Relevance ranking**: Results ordered by semantic similarity
- **Natural language**: Use questions or phrases to find relevant content
## Connector-Populated Tags

**Use case**: When you need the most relevant content regardless of organization
Connectors can auto-populate tags with metadata from the source. A Notion connector might set **Last Modified** and **Labels**; a GitHub connector might set **Repository** and **File Path**. These work exactly like manually defined tags and are available in Knowledge block filters.

### 3. Combined Tag Filtering + Vector Search
When you **provide both tags and a search query**:
1. **First**: Filter documents to only those with the specified tags
2. **Then**: Perform vector search within that filtered subset
3. **Result**: Semantically relevant content from your tagged documents only

**Use case**: When you need relevant content from a specific category or project

### Search Configuration

#### Tag Filtering
- **Multiple tags**: Use multiple tags with AND or OR logic to control whether documents must match all or any of the specified tags
- **Tag combinations**: Mix different tag types for precise filtering
- **Case sensitivity**: Tag matching is case-insensitive
- **Partial matching**: Text fields support partial matching operators such as contains, starts_with, and ends_with in addition to exact matching

#### Vector Search Parameters
- **Query complexity**: Natural language questions work best
- **Result limits**: Configure how many chunks to retrieve
- **Relevance threshold**: Set minimum similarity scores
- **Context window**: Adjust chunk size for your use case

## Integration with Workflows
|
||||
|
||||
### Knowledge Block Configuration
|
||||
1. **Select knowledgebase**: Choose which knowledgebase to search
|
||||
2. **Add tags**: Specify filtering tags (optional)
|
||||
3. **Enter query**: Add your search query (optional)
|
||||
4. **Configure results**: Set number of chunks to retrieve
|
||||
5. **Test search**: Preview results before using in workflow
|
||||
|
||||
### Dynamic Tag Usage
|
||||
- **Variable tags**: Use workflow variables as tag values
|
||||
- **Conditional filtering**: Apply different tags based on workflow logic
|
||||
- **Context-aware search**: Adjust tags based on conversation context
|
||||
- **Multi-step filtering**: Refine searches through workflow steps
|
||||
|
||||
### Performance Optimization
|
||||
- **Efficient filtering**: Tag filtering happens before vector search for better performance
|
||||
- **Caching**: Frequently used tag combinations are cached for speed
|
||||
- **Parallel processing**: Multiple tag searches can run simultaneously
|
||||
- **Resource management**: Automatic optimization of search resources
|
||||
|
||||
## Getting Started with Tags
|
||||
|
||||
1. **Plan your tag structure**: Decide on consistent naming conventions
|
||||
2. **Start tagging**: Add relevant tags to your existing documents
|
||||
3. **Test combinations**: Experiment with tag + search query combinations
|
||||
4. **Integrate into workflows**: Use the Knowledge block with your tagging strategy
|
||||
5. **Refine over time**: Adjust your tagging approach based on search results
|
||||
|
||||
Tags transform your knowledgebase from a simple document store into a precisely organized, searchable intelligence system that your AI workflows can navigate with surgical precision.
You can disable specific metadata tag types during connector setup or in connector settings to free up slots for manual use. See [Connectors](/knowledgebase/connectors) for details.

<FAQ items={[
{ question: "How many tag slots are available per knowledgebase?", answer: "Each knowledgebase supports up to 17 tag slots total across four field types: 7 text slots, 5 number slots, 2 date slots, and 3 boolean slots. These slots are shared across all documents in the knowledgebase." },
{ question: "What tag field types are supported?", answer: "Four field types are supported: text (free-form string values), number (numeric values), date (date values in YYYY-MM-DD format), and boolean (true/false values). Each type has its own pool of available slots." },
{ question: "Is tag matching case-sensitive?", answer: "No, tag matching is case-insensitive. You can use any capitalization when filtering by tags and it will match regardless of how the tag value was originally entered." },
{ question: "How does combined tag and vector search work?", answer: "When you provide both tags and a search query, tag filtering is applied first to narrow down the document set, then vector search runs within that filtered subset. This approach is more efficient because it reduces the number of vectors that need similarity comparison." },
{ question: "What is the default number of results returned from a knowledge search?", answer: "The default is 10 results. You can configure this with the topK parameter, which accepts values from 1 to 100." },
{ question: "What embedding model does Sim use for knowledge base search?", answer: "Sim uses OpenAI's text-embedding-3-small model with 1536 dimensions for generating document embeddings and performing vector similarity search." },
{ question: "Can I rename a tag definition?", answer: "No. Tag definitions cannot be renamed after creation. Delete the old definition and create a new one with the correct name. Deleting will remove the tag value from all documents it was assigned to." },
{ question: "Can tag filter values come from workflow variables?", answer: "Yes. Enter a variable reference like <start.department> as the filter value. It resolves to the actual value at runtime, so a single workflow can filter different documents on each run." },
{ question: "What happens to tag values when I delete a tag definition?", answer: "Deleting a definition removes the tag value from every document it was assigned to and frees the slot. The modal shows you which documents are affected before you confirm." },
]} />
@@ -3,6 +3,7 @@ title: Deploy Workflows as MCP
description: Expose your workflows as MCP tools for external AI assistants and applications
---

import { Image } from '@/components/ui/image'
import { Video } from '@/components/ui/video'
import { Callout } from 'fumadocs-ui/components/callout'
import { FAQ } from '@/components/ui/faq'

@@ -18,10 +19,47 @@ MCP servers group your workflow tools together. Create and manage them in worksp
</div>

1. Navigate to **Settings → MCP Servers**

<div className="flex justify-center">
<Image
src="/static/blocks/mcp-servers-settings.png"
alt="MCP Servers settings page"
width={700}
height={450}
className="my-6"
/>
</div>

2. Click **Add**
3. Enter a name and optional description
4. Choose access: **API Key** (private, requires `X-API-Key` header) or **Public** (no authentication)
5. Optionally select deployed workflows to add as tools immediately

<div className="flex justify-center">
<Image
src="/static/blocks/mcp-server-add-modal.png"
alt="Add New MCP Server modal"
width={550}
height={380}
className="my-6"
/>
</div>

6. Click **Add Server**
7. Click **Details** to view the MCP server

<div className="flex justify-center">
<Image
src="/static/blocks/mcp-server-details.png"
alt="MCP Server details view"
width={700}
height={450}
className="my-6"
/>
</div>

8. Copy the server URL for use in your MCP clients
9. View and manage all tools added to the server
## Adding a Workflow as a Tool

@@ -33,9 +71,21 @@ Once your workflow is deployed, you can expose it as an MCP tool:

1. Open your deployed workflow
2. Click **Deploy** and go to the **MCP** tab

<div className="flex justify-center">
<Image
src="/static/blocks/mcp-deploy-modal.png"
alt="Workflow Deployment MCP tab"
width={380}
height={470}
className="my-6"
/>
</div>

3. Configure the tool name and description
4. Add descriptions for each parameter (helps AI understand inputs)
5. Select which MCP servers to add it to
6. Click **Save Tool**

<Callout type="info">
The workflow must be deployed before it can be added as an MCP tool.

@@ -54,9 +104,50 @@ Your workflow's input format fields become tool parameters. Add descriptions to

## Connecting MCP Clients

Sim generates a ready-to-paste configuration for every supported client. To get it:

1. Navigate to **Settings → MCP Servers**
2. Click **Details** on your server
3. Under **MCP Client**, select your client — **Cursor**, **Claude Code**, **Claude Desktop**, **VS Code**, or **Sim**
4. Copy the configuration, replacing `$SIM_API_KEY` with your Sim API key

<div className="flex justify-center">
<Image
src="/static/blocks/mcp-client-config.png"
alt="MCP client configuration panel"
width={700}
height={450}
className="my-6"
/>
</div>

### Cursor

Cursor supports direct URL configuration. Add to your Cursor MCP settings (`.cursor/mcp.json`):
```json
{
  "mcpServers": {
    "my-sim-workflows": {
      "url": "YOUR_SERVER_URL",
      "headers": { "X-API-Key": "$SIM_API_KEY" }
    }
  }
}
```

Cursor also provides a one-click install button in the server detail view.

### Claude Code

Run this command in your terminal:

```bash
claude mcp add "my-sim-workflows" --url "YOUR_SERVER_URL" --header "X-API-Key:$SIM_API_KEY"
```

### Claude Desktop

Add to your Claude Desktop config (`~/Library/Application Support/Claude/claude_desktop_config.json`):

@@ -64,17 +155,33 @@ Add to your Claude Desktop config (`~/Library/Application Support/Claude/claude_

```json
{
  "mcpServers": {
    "my-sim-workflows": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "YOUR_SERVER_URL", "--header", "X-API-Key:$SIM_API_KEY"]
    }
  }
}
```

### VS Code

Add to your VS Code MCP settings (`.vscode/mcp.json`):

```json
{
  "mcpServers": {
    "my-sim-workflows": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "YOUR_SERVER_URL", "--header", "X-API-Key:$SIM_API_KEY"]
    }
  }
}
```

<Callout type="info">
For public servers, omit the `X-API-Key` header and `--header` arguments. Public servers don't require authentication.
</Callout>

<Callout type="warn">
`$SIM_API_KEY` is a placeholder. For Claude Desktop and VS Code configs, replace it with your actual API key since these clients don't expand environment variables in JSON config files. Claude Code and Cursor handle variable expansion natively.
</Callout>

## Server Management
@@ -18,27 +18,83 @@ MCP is an open standard that enables AI assistants to securely connect to extern
- Execute custom tools and scripts
- Maintain secure, controlled access to external resources

## Adding an MCP Server as a Tool

MCP servers provide collections of tools that your agents can use.

<div className="mx-auto w-full overflow-hidden rounded-lg">
<Video src="mcp/settings-mcp-tools.mp4" width={700} height={450} />
</div>

To add one:

1. Navigate to **Settings → MCP Tools**

<div className="flex justify-center">
<Image
src="/static/blocks/mcp-settings.png"
alt="MCP Tools settings page"
width={700}
height={450}
className="my-6"
/>
</div>

2. Click **Add** to open the configuration modal
3. Enter a **Server Name** and **Server URL**
4. Add any required **Headers** (e.g. API keys)
5. Click **Add MCP** to save

<div className="flex justify-center">
<Image
src="/static/blocks/mcp-add-modal.png"
alt="Add New MCP Server modal"
width={450}
height={290}
className="my-6"
/>
</div>

<Callout type="info">
You can also configure MCP servers directly from the toolbar in an Agent block for quick setup.
</Callout>

### Server Configuration Options

| Field | Description |
|-------|-------------|
| **Name** | Display name for the server |
| **URL** | The MCP server endpoint |
| **Transport** | Currently supports `streamable-http` |
| **Headers** | Key-value pairs for authentication or custom headers |
| **Timeout** | Connection timeout in milliseconds (default: 30,000) |

### Environment Variables in Configuration

Server URLs and headers support environment variable substitution using `{{VAR_NAME}}` syntax. This keeps sensitive values like API keys out of the server configuration.

```
URL: https://api.example.com/mcp
Authorization: Bearer {{MCP_API_TOKEN}}
```

When you type `{{` in the URL or header fields, a dropdown appears showing available workspace environment variables.
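The substitution itself amounts to a simple template expansion, roughly like this (a sketch of the described behavior, not Sim's implementation; leaving unknown variables intact is an assumption made here so misconfigurations stay visible):

```python
import re

def substitute(template, env):
    # Replace every {{VAR_NAME}} with the matching workspace variable.
    # Unknown variables are left intact (illustrative choice).
    def repl(m):
        name = m.group(1)
        return str(env.get(name, m.group(0)))
    return re.sub(r"\{\{(\w+)\}\}", repl, template)

env = {"MCP_API_TOKEN": "sk-123"}
print(substitute("Bearer {{MCP_API_TOKEN}}", env))  # → Bearer sk-123
print(substitute("Bearer {{MISSING}}", env))        # → Bearer {{MISSING}}
```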
### Testing and Validation

Click **Test Connection** before saving to verify the server is reachable and discover available tools. The test response shows the number of tools found and the protocol version.

After saving, each server displays its available tools with parameter names, types, and required flags. If a server's tools change (e.g., after a server update), click **Refresh** to fetch the latest schemas. This automatically updates any agent blocks using those tools.

Tool validation badges appear on servers with issues — for example, if a tool was removed from the server but is still referenced in a workflow. Click the badge to see which workflows are affected.

### Domain Allowlisting

Self-hosted deployments can restrict which MCP server domains are allowed by setting the `ALLOWED_MCP_DOMAINS` environment variable (comma-separated list). When set, only servers on approved domains can be added. When unset, all domains are allowed.
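The allowlist check reduces to comparing the server URL's hostname against the comma-separated list, roughly like this (a sketch of the behavior described above, not the actual code):

```python
from urllib.parse import urlparse

def is_allowed(server_url, allowed_mcp_domains):
    # Empty/unset ALLOWED_MCP_DOMAINS means every domain is allowed.
    if not allowed_mcp_domains:
        return True
    allowed = {d.strip().lower() for d in allowed_mcp_domains.split(",") if d.strip()}
    host = (urlparse(server_url).hostname or "").lower()
    return host in allowed

print(is_allowed("https://mcp.example.com/mcp", "mcp.example.com, tools.internal"))  # → True
print(is_allowed("https://evil.example.net/mcp", "mcp.example.com"))                 # → False
print(is_allowed("https://anything.io/mcp", ""))                                     # → True
```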
### Refresh Tools

To auto-refresh an MCP tool already in use by an agent, go to **Settings → MCP Tools**, open the server's details, and click **Refresh**. This fetches the latest tool schemas and automatically updates any agent blocks using those tools with the new parameter definitions.

## Using MCP Tools in Agents

@@ -46,7 +102,7 @@ Once MCP servers are configured, their tools become available within your agent

<div className="flex justify-center">
<Image
src="/static/blocks/mcp-agent-dropdown.png"
alt="Using MCP Tool in Agent Block"
width={700}
height={450}
@@ -55,9 +111,25 @@ Once MCP servers are configured, their tools become available within your agent
</div>

1. Open an **Agent** block
2. In the **Tools** section, click **Add tool…**
3. Under **MCP Servers**, click a server to see its tools

<div className="flex justify-center">
<Image
src="/static/blocks/mcp-agent-tools.png"
alt="MCP tools list for a selected server"
width={400}
height={400}
className="my-6"
/>
</div>

4. Select individual tools, or choose **Use all N tools** to add every tool from that server
5. The agent can now access these tools during execution

<Callout type="info">
If you haven't configured a server yet, click **Add MCP Server** at the top of the dropdown to open the setup modal without leaving the block.
</Callout>

## Standalone MCP Tool Block

@@ -65,7 +137,7 @@ For more granular control, you can use the dedicated MCP Tool block to execute s

<div className="flex justify-center">
<Image
src="/static/blocks/mcp-tool-block.png"
alt="Standalone MCP Tool Block"
width={700}
height={450}
@@ -79,17 +151,14 @@ The MCP Tool block allows you to:
- Use the tool's output in subsequent workflow steps
- Chain multiple MCP tools together

## When to Use MCP Tool vs Agent

| | **Agent with MCP tools** | **MCP Tool block** |
|---|---|---|
| **Execution** | AI decides which tools to call | Deterministic — runs the tool you pick |
| **Parameters** | AI chooses at runtime | You set them explicitly |
| **Best for** | Dynamic, conversational flows | Structured, repeatable steps |
| **Reasoning** | Handles complex multi-step logic | One tool, one call |

## Permission Requirements

@@ -151,6 +220,6 @@ import { FAQ } from '@/components/ui/faq'
{ question: "Who can configure MCP servers in a workspace?", answer: "Users with Write permission can configure (add and update) MCP servers in workspace settings. Only Admin permission is required to delete MCP servers. Users with Read permission can view available MCP tools and execute them in agents and MCP Tool blocks. This means all workspace members with at least Read access can use MCP tools in their workflows." },
{ question: "Can I use MCP servers from multiple workspaces?", answer: "MCP servers are configured per workspace. Each workspace maintains its own set of MCP server connections. If you need the same MCP server in multiple workspaces, you need to configure it separately in each workspace's settings." },
{ question: "How do I update MCP tool schemas after a server changes its available tools?", answer: "Click the Refresh button on the MCP server in your workspace settings. This fetches the latest tool schemas from the server and automatically updates any agent blocks that use those tools with the new parameter definitions." },
{ question: "Can permission groups restrict access to MCP tools?", answer: "Yes. On Enterprise-entitled workspaces, any workspace admin can create a permission group that disables MCP tools for its members using the disableMcpTools option. When this is enabled, affected users will not be able to add or use MCP tools in workflows that belong to that workspace." },
{ question: "What happens if an MCP server goes offline during workflow execution?", answer: "If the MCP server is unreachable during execution, the tool call will fail and return an error. In an Agent block, the AI may attempt to handle the failure gracefully. In a standalone MCP Tool block, the workflow step will fail. Check MCP server logs and verify the server is running and accessible to troubleshoot connectivity issues." },
]} />
@@ -11,6 +11,7 @@
"tools",
"connections",
"---Features---",
"mothership",
"mcp",
"copilot",
"mailer",
@@ -18,12 +19,13 @@
"knowledgebase",
"tables",
"variables",
"integrations",
"credentials",
"---Platform---",
"execution",
"permissions",
"self-hosting",
"./enterprise/index",
"enterprise",
"./keyboard-shortcuts/index"
],
"defaultOpen": false

120 apps/docs/content/docs/en/mothership/files.mdx Normal file
@@ -0,0 +1,120 @@
---
title: Files & Documents
description: Upload, create, edit, and generate files — documents, presentations, images, and more.
---

import { Image } from '@/components/ui/image'
import { Video } from '@/components/ui/video'
import { FAQ } from '@/components/ui/faq'

<Video src="mothership/files-pipeline-deals-summarizer.mp4" width={700} height={450} />

Describe a document, presentation, image, or visualization and Mothership creates it — streaming the content live into the resource panel as it writes. Attach any file to your message and Mothership reads it, processes it, and saves it to your workspace.

## Uploading Files to the Workspace

Attach any file directly to your Mothership message — drag it into the input, paste it, or click the attachment icon. Mothership reads the file as context and saves it to your workspace.

Use this to:
- Hand Mothership a document and ask it to process, summarize, or extract data from it
- Upload a CSV and have it create a table from it
- Drop in a PDF and ask Mothership to turn it into a knowledge base document
- Attach a design mockup and ask Mothership to describe it or generate code from it

Uploaded files appear in the Files panel in the sidebar and are accessible to all workflows in the workspace. Mothership can also fetch a file directly from a URL and save it for you: "Download the JSON at [URL] and save it to the workspace."

## Creating Documents

Mothership can write any text-based file — markdown, plain text, code files, CSV, JSON, or any other format:

- "Write a technical spec for the new auth system as a markdown file"
- "Create a CSV of our test accounts with columns for name, email, and plan tier"
- "Write a Python script that calls our workflow API and processes the response"
- "Draft a postmortem for the outage last Tuesday and save it as a markdown file"
- "Write a personalized outbound email for Acme Corp based on their recent funding announcement"
- "Draft a weekly ops digest summarizing workflow run counts, errors, and top failures for the past 7 days"

Files are saved to your workspace and accessible from the Files panel in the sidebar.

## Editing Existing Files

Open a file using `@filename` or the **+** menu, then describe the change:

- "Update the pricing section to reflect the new tiers"
- "Refactor this Python script to use async/await"
- "Add a section on error handling to this spec"
- "Rewrite the introduction of this report to be more concise"

## Presentations

<Image src="/static/mothership/pptx-example.png" alt="Mothership resource panel showing a generated Mothership-Use-Cases.pptx file open with the title slide and first use case slide visible" width={900} height={500} />

Mothership can generate `.pptx` files:

- "Create a pitch deck for Q3 review — 8 slides covering growth, retention, and roadmap"
- "Turn this research report into a 10-slide presentation"
- "Build a deck that walks through our API onboarding flow"
- "Build a battle card deck for our top 3 competitors — one slide each covering positioning, pricing, and how we win"
- "Create an account plan for Acme Corp — their priorities, our solution fit, and proposed next steps"

The file is saved to your workspace and can be downloaded.

## Images

Mothership can generate images using AI, and can use an existing image as a reference to guide the output:

**Generating images:**
- "Generate a banner image for the new feature announcement — dark background, clean typography"
- "Create a diagram showing the data flow through our webhook pipeline"
- "Make a social card for the blog post with the title and author name"

**Using a reference image:**
- Attach an existing image to your message, then describe what you want: "Generate a new version of this banner with a blue color scheme instead of green"
- "Create a variation of this diagram with the boxes rearranged horizontally [attach image]"

<Image src="/static/mothership/image-example.png" alt="Mothership resource panel showing a generated hero image of a Mothership-branded blimp flying over San Francisco at golden hour, alongside the chat response linking the file" width={900} height={500} />

Generated images are saved as workspace files.

## Charts and Visualizations

Mothership can generate charts and data visualizations from data you describe or reference:

- "Plot the workflow run counts from the metrics table as a bar chart grouped by week"
- "Create a line chart of token usage over the past 30 days from this data [paste data]"
- "Generate a pie chart showing the distribution of lead sources from the leads table"

<Image src="/static/mothership/chart-example.png" alt="Mothership resource panel showing a generated chart file with bar charts for backend 5xx errors and error rate over time" width={900} height={500} />

Visualizations are saved as files and rendered in the resource panel.

## Calculations & Data Processing

For one-off calculations and data transformations, describe what you need and Mothership runs it directly in the chat:

- "Parse this JSON and extract all records where status is 'failed'"
- "Calculate the p95 latency from these timing values: [paste values]"
- "Convert these Unix timestamps to ISO 8601"
- "Deduplicate this list of emails, case-insensitive"

Results come back directly in the chat. Ask Mothership to save the output as a file if you need it.
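Under the hood, requests like these are ordinary short scripts. As an illustration of what the p95 and deduplication examples above come down to (using a nearest-rank p95, one of several common conventions):

```python
import math

def p95(values):
    # Nearest-rank p95: sort, then take the value at the 95th-percentile rank.
    ordered = sorted(values)
    rank = max(1, math.ceil(0.95 * len(ordered)))
    return ordered[rank - 1]

def dedupe_emails(emails):
    # Case-insensitive de-duplication that preserves first-seen order and casing.
    seen, out = set(), []
    for e in emails:
        key = e.strip().lower()
        if key not in seen:
            seen.add(key)
            out.append(e.strip())
    return out

print(p95([120, 80, 95, 300, 110, 90, 105, 98, 102, 97]))  # → 300
print(dedupe_emails(["A@x.com", "b@x.com", "a@X.com"]))    # → ['A@x.com', 'b@x.com']
```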
## File Viewer Modes

When a file opens in the resource panel, you can switch between three views:

<Video src="mothership/toggle-file-view.mp4" width={700} height={450} />

| Mode | What it shows |
|------|--------------|
| **Editor** | Raw editable text |
| **Preview** | Rendered output (markdown, HTML) |
| **Split** | Editor and preview side by side |

<FAQ items={[
{ question: "Where are uploaded and generated files stored?", answer: "All files — uploaded, created, or generated — go to your workspace's Files section. They're accessible from the sidebar and can be referenced in any workflow." },
{ question: "Can I use files created in Mothership in workflows?", answer: "Yes. Workspace files can be referenced in workflows using the File block or by passing them as inputs." },
{ question: "What file types can I upload to Mothership?", answer: "You can attach images, PDFs, text files, JSON, XML, and other document formats directly to a Mothership message." },
{ question: "What can Mothership calculate or process?", answer: "Anything expressible as a short script — parsing JSON, number crunching, string transformations, deduplication, format conversions. Results come back in the chat." },
{ question: "Is there a file size limit for generated files?", answer: "There is no hard limit on generated file size, but very large files may take longer to stream." },
]} />
|
||||
73
apps/docs/content/docs/en/mothership/index.mdx
Normal file
73
apps/docs/content/docs/en/mothership/index.mdx
Normal file
@@ -0,0 +1,73 @@
|
||||
---
|
||||
title: Mothership
|
||||
description: Your AI command center. Build and manage your entire workspace in natural language.
|
||||
---
|
||||
|
||||
import { Image } from '@/components/ui/image'
|
||||
import { Video } from '@/components/ui/video'
|
||||
import { FAQ } from '@/components/ui/faq'
|
||||
|
||||
<Video src="mothership/create-workflow.mp4" width={700} height={450} />
|
||||
|
||||
Describe what you want and Mothership handles it. Build a workflow, run research, generate a presentation, query a table, schedule a recurring job, send a Slack message — Mothership knows your entire workspace and takes action directly.
|
||||
|
||||
## What You Can Do
|
||||
|
||||
| Area | What Mothership can do |
|
||||
|------|-----------------------|
|
||||
| **[Workflows](/mothership/workflows)** | Build, edit, run, debug, deploy, and organize workflows |
|
||||
| **[Research](/mothership/research)** | Search the web, read pages, crawl sites, produce research reports |
|
||||
| **[Files & Documents](/mothership/files)** | Upload, create, edit, and generate documents, presentations, and images |
|
||||
| **[Tables](/mothership/tables)** | Create, query, update, and export workspace tables |
|
||||
| **[Automation & Configuration](/mothership/tasks)** | Schedule jobs, take immediate actions, connect integrations, manage tools |
|
||||
| **[Knowledge Bases](/mothership/knowledge)** | Create knowledge bases, add documents, and query content in plain language |

## How It Works

Mothership receives a snapshot of your entire workspace with every message — all workflows, tables, knowledge bases, files, credentials, jobs, and integrations. This is why you can refer to things by name without specifying IDs or paths:

- "Run the invoice workflow"
- "Add a row to the leads table"
- "Deploy the summarizer as a chat"

No configuration, no context-setting. Just describe what you want:

- "Build a lead enrichment workflow that scores inbound signups and writes the results to the leads table"
- "Research our top 5 competitors and save a battle card for each one"
- "Schedule a daily job that checks for new high-fit prospects and posts them to #outbound in Slack"
- "Create a workflow that takes a contract PDF, extracts the key terms, and emails a summary to legal"

For complex tasks, Mothership delegates to specialized subagents automatically. You'll see them appear as collapsible sections in the chat while they work — building, researching, writing files, executing actions.

{/* TODO: Screenshot of the Mothership chat showing a subagent section expanded mid-task — e.g., the Build or Research subagent actively working, with its collapsible header and steps visible in the thread. */}

## Adding Context

Bring any workspace object into the conversation via the **+** menu, `@`-mentions, or drag-and-drop from the sidebar. Mothership also opens resources automatically when it creates or modifies them.

<Video src="mothership/context-menu.mp4" width={700} height={450} />

{/* TODO: Screenshot of the resource panel with multiple tabs open — a workflow tab, a table tab, and a file tab — showing different resource types side by side. */}

| What to add | How it appears |
|-------------|---------------|
| **Workflow** | Interactive canvas in the resource panel |
| **Table** | Full table editor in the resource panel |
| **File** | File viewer with editor, split, and preview modes |
| **Knowledge Base** | Knowledge base management UI |
| **Folder** | Folder contents |
| **Past task** | A previous Mothership conversation |

## Layout

Mothership has two panes. On the left: the chat thread, where your messages and Mothership's responses appear. On the right: the resource panel, where workflows, tables, files, and knowledge bases open as tabs. The panel is resizable; tabs are draggable and closeable.

<Video src="mothership/split-view.mp4" width={700} height={450} />

<FAQ items={[
  { question: "How is Mothership different from Copilot?", answer: "Copilot is scoped to a single workflow — it helps you build and edit that workflow. Mothership has access to your entire workspace and can build workflows, manage data, run research, schedule jobs, take actions across integrations, and more." },
  { question: "What model does Mothership use?", answer: "Mothership always uses Claude Opus 4.6. There is no model selector." },
  { question: "How do I reference an existing workflow or table?", answer: "Type @ followed by the name in the input, use the + menu, or drag the item from the sidebar into the chat." },
  { question: "How long can a Mothership task run?", answer: "Up to one hour. For tasks that exceed that, set up a scheduled job or break the work into steps." },
  { question: "Can Mothership work on multiple things at once?", answer: "Mothership processes one message at a time. You can queue messages — they will be processed in order." },
]} />
85
apps/docs/content/docs/en/mothership/knowledge.mdx
Normal file
@@ -0,0 +1,85 @@
---
title: Knowledge Bases
description: Create, populate, and query knowledge bases from Mothership.
---

import { Image } from '@/components/ui/image'
import { Video } from '@/components/ui/video'
import { FAQ } from '@/components/ui/faq'

<Video src="mothership/kb.mp4" width={700} height={450} />

Create a knowledge base, add documents to it, and query it in plain language — all through conversation. Knowledge bases you create in Mothership are immediately available to Agent blocks in any workflow.

## Creating Knowledge Bases

Describe the knowledge base and Mothership creates it:

- "Create a knowledge base called 'Product Docs'"
- "Set up a knowledge base for our support team — call it 'Support KB'"
- "Create a competitive intelligence knowledge base"
- "Create a knowledge base from our sales playbook and attach it to the outbound agent workflow"
- "Set up a customer success knowledge base — I'll add our onboarding guides and past case studies to it"

## Adding Documents

Add documents by attaching files to your message, pasting text, or pointing Mothership at a URL:

- "Add this PDF to the Product Docs knowledge base [attach file]"
- "Add the following text to the Support KB as a new document: [paste content]"
- "Fetch the page at [URL] and add it to the competitive intelligence knowledge base"
- "Add these three uploaded case studies to the customer success knowledge base"

Mothership processes and indexes each document automatically. Once indexed, the content is searchable by any Agent block that has the knowledge base attached.

{/* TODO: Screenshot of Mothership confirming a document was added and indexed — showing the document name and its indexed status in the knowledge base. */}

## Querying Knowledge Bases

Ask Mothership a question and it searches the specified knowledge base to answer:

- "What does the Product Docs knowledge base say about our refund policy?"
- "Search the Support KB for anything related to SSO setup errors"
- "What are the key differences between our Pro and Enterprise plans, based on the product docs?"
- "Find everything in the competitive intelligence knowledge base about [competitor]'s pricing"

## Connectors

For knowledge bases that should stay current automatically, connectors sync content from external services on a schedule — no manual uploads needed. New content is added, changed content is re-processed, and deleted content is removed on every run.

Connectors are configured through the knowledge base settings, not through Mothership chat. Once connected, all synced content is immediately searchable by Mothership and by any Agent block with the knowledge base attached.

Sim ships with 30 built-in connectors, including Notion, Google Drive, Slack, GitHub, Confluence, HubSpot, Salesforce, Gmail, and more.

Examples of what you can sync:

- **Notion** — sync a workspace, a database, or a specific page tree
- **Google Drive / Dropbox / OneDrive** — sync documents from cloud storage
- **GitHub** — sync a repository's markdown and code files
- **Slack** — sync channel history
- **Confluence / Jira** — sync your internal wiki or issue tracker
- **HubSpot / Salesforce** — sync CRM records into a searchable knowledge base

See [Connectors](/knowledgebase/connectors) for setup steps, sync frequency options, and managing connector status.

## Managing Knowledge Bases

List, inspect, and clean up knowledge bases in plain language:

- "What knowledge bases are in this workspace?"
- "How many documents are in the Support KB?"
- "Remove the outdated pricing doc from the Product Docs knowledge base"
- "Delete the old-competitive-intel knowledge base"

## Using Knowledge Bases in Workflows

Knowledge bases created in Mothership are immediately available to Agent blocks in any workflow. Attach a knowledge base to an Agent block and it will use semantic search to retrieve relevant content at runtime.
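
For intuition, semantic retrieval boils down to embedding the query and the document chunks as vectors and returning the closest matches. The sketch below is illustrative only: the helper names and vectors are invented for this example, not Sim's internal retrieval API.

```typescript
// Illustrative sketch of semantic search: query and chunks are compared
// by cosine similarity, and the top-k closest chunks are retrieved.
// (Invented helper names; not Sim's internal implementation.)
function cosine(a: number[], b: number[]): number {
  let dot = 0;
  let na = 0;
  let nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function topChunks(
  query: number[],
  chunks: { text: string; vec: number[] }[],
  k: number
): string[] {
  return [...chunks]
    .sort((x, y) => cosine(query, y.vec) - cosine(query, x.vec))
    .slice(0, k)
    .map((c) => c.text);
}
```

In practice the embedding model, chunking, and ranking are handled by Sim; this only shows why "relevant content" means "closest in embedding space" rather than keyword matches.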

See [Knowledge Base](/knowledgebase) for full details on document processing settings, search configuration, and connector syncing.

<FAQ items={[
  { question: "How do I attach a knowledge base to a workflow?", answer: "Open the Agent block in the workflow editor, find the Knowledge Base setting, and select the knowledge base by name. Mothership can also do this for you: 'Attach the Product Docs knowledge base to the research agent block in the content pipeline workflow.'" },
  { question: "What file types can I add to a knowledge base?", answer: "PDFs, markdown files, plain text, and web pages fetched from URLs. Mothership handles the parsing and indexing automatically." },
  { question: "Can I add documents to a knowledge base from a workflow run?", answer: "Yes. The Knowledge Base write tool is available to Agent blocks and can be used to add documents programmatically during a workflow run." },
  { question: "How do I keep a knowledge base up to date?", answer: "Set up a connector to sync from an external source automatically — see Connectors above. For one-off updates, ask Mothership to add or replace the document directly." },
]} />
4
apps/docs/content/docs/en/mothership/meta.json
Normal file
@@ -0,0 +1,4 @@
{
  "title": "Mothership",
  "pages": ["index", "workflows", "research", "files", "tables", "tasks", "knowledge"]
}
44
apps/docs/content/docs/en/mothership/research.mdx
Normal file
@@ -0,0 +1,44 @@
---
title: Research
description: Ask Mothership to research anything — it searches, reads, and synthesizes from the web.
---

import { Image } from '@/components/ui/image'
import { Video } from '@/components/ui/video'
import { FAQ } from '@/components/ui/faq'

<Video src="mothership/research-agent.mp4" width={700} height={450} />

Ask Mothership to research anything and it figures out the best approach — searching the web, reading specific pages, crawling sites, looking up technical docs. Just describe what you want to know.

## Asking Questions

Ask anything — about a company, a competitor, a market, a technical question, or a specific URL:

- "What did Salesforce, HubSpot, and Gong each ship in the past 30 days? Summarize the key product updates."
- "What's Acme Corp's tech stack, recent hires, and open engineering roles?"
- "Find everything published about [competitor] in the past 90 days — press, product changes, job postings."
- "What are the current rate limits on the Anthropic API?"
- "Read [URL] and tell me what changed in this release"
- "What does Stripe's API say about handling webhooks with idempotency keys?"
- "Who are the main players in AI-powered revenue operations, and how do they differentiate?"

Mothership returns an answer directly in the chat. For anything that needs a longer written output, ask it to save the result as a file.

## Research Reports

When you need a structured, saved document rather than a chat answer, ask Mothership to write it up. It searches, reads, and cross-references multiple sources until it has enough to produce a full report. The output is saved as a file in your workspace and opened in the resource panel.

{/* TODO: Screenshot of a completed research report open in the resource panel as a file — showing a structured markdown document with sections, findings, and citations. */}

- "Research the top 10 AI SDR tools — pricing, features, positioning, and what customers say. Save as a competitive analysis."
- "Do a full market landscape for AI in healthcare diagnostics — major players, funding, use cases, and regulatory environment."
- "Research how our top 5 competitors handle multi-tenant auth — pricing, architecture, and any known vulnerabilities. Write it up as a report."
- "Find every public case study on AI agents in financial compliance from the past 2 years. Summarize the key outcomes and save as a markdown file."
- "Build a battle card for [competitor] — their positioning, pricing, strengths, weaknesses, and how we win against them."

<FAQ items={[
  { question: "Does Mothership have access to real-time information?", answer: "Yes. Mothership queries live internet data. Results reflect current information, not a training cutoff." },
  { question: "Can Mothership read pages that require login?", answer: "No. Mothership can only read publicly accessible pages." },
  { question: "Where are research reports saved?", answer: "Reports are saved as files in your workspace and opened in the resource panel. You can find them in the Files section of the sidebar." },
]} />
60
apps/docs/content/docs/en/mothership/tables.mdx
Normal file
@@ -0,0 +1,60 @@
---
title: Tables
description: Create, query, and manage workspace tables from Mothership.
---

import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'

<Image src="/static/mothership/table-example.png" alt="Mothership resource panel showing the pipeline_deals table with company, deal_owner, stage, and amount columns, alongside a chat summary of total pipeline value and breakdown by stage" width={900} height={500} />

Create a table from a description or a CSV, query it in plain language, add or update rows, and export the results — all through conversation. Tables open in the resource panel as soon as they're created or referenced.

## Creating Tables

Describe the schema and Mothership creates the table:

- "Create a leads table with columns for name, email, company, status, and created date"
- "Create a table that matches the structure of this CSV [attach file]"
- "Set up an errors table with: id (text), message (text), workflow (text), timestamp (date), resolved (boolean)"
- "Create a prospect table for outbound — company, domain, employee count, industry, ICP score, and last contacted date"
- "Set up an enrichment results table to store output from the lead enrichment workflow: email, company, title, LinkedIn URL, fit score"

## Querying Data

Ask questions about table contents in plain language:

- "How many rows in the leads table have status 'qualified'?"
- "Show me all records from the past 7 days where score is above 0.8"
- "What are the top 5 most common error messages in the failures table?"
- "Are there any duplicate emails in the contacts table?"
- "How many prospects have an ICP score above 0.75 and haven't been contacted in the past 30 days?"
- "What's the conversion rate from 'contacted' to 'meeting booked' in the pipeline table this month?"

Mothership translates the question into a structured query and returns the results.
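
For intuition, a question like "show me all records from the past 7 days where score is above 0.8" reduces to a structured filter over the table's rows. The row shape and function name below are invented for illustration; Sim's actual query format is internal.

```typescript
// Minimal sketch: a plain-language question becomes a structured filter.
// Row shape and helper name are invented for this example.
type Row = { email: string; score: number; created_at: Date };

function recentHighScores(rows: Row[], now: Date): Row[] {
  // "past 7 days" becomes a cutoff timestamp; "score above 0.8" a predicate.
  const weekAgo = new Date(now.getTime() - 7 * 24 * 60 * 60 * 1000);
  return rows.filter((r) => r.score > 0.8 && r.created_at >= weekAgo);
}
```

The point is that the query is deterministic once translated: the natural language only chooses the columns, comparisons, and time window.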

## Adding and Updating Rows

Add individual rows, bulk-update based on a condition, or delete records — all in plain language:

- "Add a row to the leads table: Acme Corp, jane@acme.com, status pending"
- "Mark all rows in the queue table as processed where created_at is before today"
- "Update the price column for all rows where tier is 'pro' to 49"
- "Delete all rows in the test_events table"

## Exporting

Export a full table or a filtered subset as a CSV. The file is saved to your workspace and can be downloaded or referenced in other workflows:

- "Export the leads table to a CSV"
- "Export all rows where status is 'closed' and save as a file"
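
Sim handles the serialization for you; for reference, a minimal sketch of what a CSV export does with quoting (commas, quotes, and newlines inside values must be escaped):

```typescript
// Minimal CSV serialization sketch (illustrative; not Sim's exporter).
function toCsv(rows: Record<string, unknown>[]): string {
  if (rows.length === 0) return "";
  const headers = Object.keys(rows[0]);
  const escape = (v: unknown): string => {
    const s = String(v ?? "");
    // Quote fields containing commas, quotes, or newlines; double inner quotes.
    return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
  };
  const lines = rows.map((r) => headers.map((h) => escape(r[h])).join(","));
  return [headers.join(","), ...lines].join("\n");
}
```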

## Using Tables in Workflows

Tables created in Mothership are immediately available in workflows via the [Table tool](/tools/table). Reference a table by name — no additional configuration needed.

<FAQ items={[
  { question: "Can Mothership join or combine data from multiple tables?", answer: "Yes. Describe the relationship and what you want — Mothership will query both tables and combine the results." },
  { question: "Is there a row limit?", answer: "There is no hard row limit set by Mothership. Performance for very large tables may vary." },
  { question: "Can I use a table created in Mothership as a workflow data source?", answer: "Yes. All workspace tables are accessible to the Table tool in any workflow." },
]} />
131
apps/docs/content/docs/en/mothership/tasks.mdx
Normal file
@@ -0,0 +1,131 @@
---
title: Automation & Configuration
description: Schedule recurring jobs, take immediate actions, connect integrations, and configure your workspace.
---

import { Callout } from 'fumadocs-ui/components/callout'
import { Image } from '@/components/ui/image'
import { Video } from '@/components/ui/video'
import { FAQ } from '@/components/ui/faq'

<Video src="mothership/job-create.mp4" width={700} height={450} />

Mothership can act on your behalf right now — send a message, create an issue, call an API — or on a schedule, running a prompt automatically every hour, day, or week. It can also connect integrations, set environment variables, add MCP servers, and create custom tools.

## Scheduled Jobs

A scheduled job is a Mothership task that runs on a cron schedule. On each run, Mothership reads the current workspace state and executes the job's prompt as if you had just sent it.

### Creating a Job

Describe the recurring task and how often it should run:

- "Every morning at 8am, check the leads table for new entries and post a summary to #sales in Slack"
- "Every Monday at 9am, pull last week's workflow run counts and write a report to the workspace"
- "Run the data sync workflow every 6 hours"
- "On the first of every month, export the billing table to CSV and email it to finance@example.com"
- "Every weekday at 7am, check for new funding announcements from companies in our ICP and post the top 5 to #market-intel in Slack"
- "Every Sunday night, run the lead enrichment workflow on all prospects added in the past week and update their scores in the table"
- "Daily at 6am, pull the previous day's workflow errors, summarize the top issues, and post to #eng-alerts"

Mothership sets the cron expression and stores the job prompt. The first run happens at the next scheduled time.
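
The schedules above correspond to standard five-field cron expressions (minute, hour, day of month, month, day of week). A few plausible translations, shown for intuition only; Mothership generates the expression for you:

```typescript
// Natural-language schedules and the standard cron expressions they imply.
// Fields: minute hour day-of-month month day-of-week.
const cronExamples: Record<string, string> = {
  "Every morning at 8am": "0 8 * * *",
  "Every Monday at 9am": "0 9 * * 1",
  "Every 6 hours": "0 */6 * * *",
  "First of every month": "0 0 1 * *",
  "Every weekday at 7am": "0 7 * * 1-5",
};
```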

### Viewing Job Logs

- "Show me the last 5 runs of the weekly report job"
- "Did the sync job run successfully this morning?"
- "What did the Monday digest job do last week?"

Logs show run time, status (completed, failed), and a summary of what the agent did.

### Managing Jobs

- "Pause the morning summary job"
- "Change the sync job to run every 3 hours instead of 6"
- "Delete the onboarding digest job"
- "What scheduled jobs are currently active?"

## Taking Direct Action

For requests that should happen right now — without building a workflow — just ask. Mothership acts immediately using the credentials connected to your workspace.

{/* TODO: Screenshot of Mothership chat showing the "Taking action" subagent label active during a direct action — e.g., posting to Slack or sending an email. Shows the subagent inline in the chat thread. */}

| Request | What happens |
|---------|-------------|
| "Send a Slack message to #eng that the deploy finished" | Posts to Slack immediately |
| "Email the Q3 report to jane@example.com" | Sends via connected Gmail or Outlook |
| "Create a GitHub issue: auth tokens not rotating on logout" | Opens an issue in the specified repo |
| "Add a contact to HubSpot: Acme Corp, ceo@acme.com" | Creates the contact via HubSpot API |
| "Call the webhook at [URL] with this JSON payload" | Makes the HTTP request |

If an integration isn't connected, Mothership walks you through connecting it.

## Connecting Integrations

Mothership can connect new OAuth integrations and API credentials on demand:

- "Connect my Google account"
- "Add the Slack workspace for our team"
- "Set up GitHub with my personal access token"

{/* TODO: Screenshot of Mothership walking through connecting an integration — e.g., the Integration subagent active with an OAuth prompt or confirmation that a credential was connected. */}

Once connected, credentials are available to Mothership for direct actions and scheduled jobs, and to all workflows in the workspace.

<Callout type="info">
  Connected credentials are shared across the workspace. Any workflow that uses the same integration will automatically use the same credential.
</Callout>

See [Credentials](/credentials) for managing connected accounts.

## Environment Variables

Environment variables are workspace-scoped values — API keys, connection strings, and configuration that workflows reference via `{{ENV_VAR}}` syntax rather than hardcoding. Set them once and every workflow in the workspace can use them.

- "Set the DATABASE_URL environment variable to 'postgres://...'"
- "Add an OPENAI_API_KEY environment variable"
- "Add a WEBHOOK_SECRET variable for the inbound webhook workflow"
- "Update the SCORING_API_URL variable to point to the new endpoint"
- "What environment variables are currently set?"
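
Conceptually, `{{ENV_VAR}}` references resolve by substitution when a block runs. A sketch of the idea (not Sim's actual implementation; leaving unknown variables untouched is an assumption made for this example):

```typescript
// Sketch: resolve {{ENV_VAR}} placeholders against workspace-scoped values.
// Unknown variables are left as-is here (an assumption for illustration).
function resolveEnvVars(template: string, env: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in env ? env[name] : match
  );
}
```

For example, a block configured with `POST {{SCORING_API_URL}}/score` would call the URL stored in `SCORING_API_URL` at runtime, without the key ever appearing in the workflow definition.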

{/* TODO: Screenshot of Mothership confirming an environment variable was set — e.g., a response message showing the variable name was saved. */}

## MCP Servers

MCP (Model Context Protocol) servers expose tools from external services that Agent blocks can call inside workflows. Connecting an MCP server makes all of its tools available in the workflow editor's tool picker — no custom integration code required.

Mothership can add and manage MCP servers connected to your workspace:

- "Add the Stripe MCP server using my API key"
- "Remove the old analytics MCP server"
- "What MCP servers are connected to this workspace?"
- "Update the endpoint for the internal tools MCP server to [URL]"

Once added, MCP tools appear in the workflow editor's tool picker and can be called from any Agent block.

{/* TODO: Screenshot of Mothership confirming an MCP server was added or updated — showing the server name and its status. */}

## Custom Tools

Custom tools are single HTTP endpoints you define manually — useful for internal APIs and services that don't have a built-in Sim integration or an MCP server. Once created, they appear in the workflow editor alongside built-in tools and can be called from any Agent block.

Mothership can build custom tools from a description:

- "Create a custom tool that calls our internal scoring API at [URL] with a POST request and returns the score field"
- "Build a tool for our Zendesk instance that creates a ticket with a subject and body"
- "Create a tool that hits our internal enrichment API with a domain and returns company size, industry, and funding stage"
- "Add a tool that calls our CRM's REST API to look up a contact by email and return their account owner"
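
A custom tool boils down to a name, a description the agent can read, and an HTTP request template. The shape below is hypothetical (Sim's actual custom tool schema may differ); the URL and field names are placeholders:

```typescript
// Hypothetical custom tool definition, for illustration only.
// Sim's real schema may differ; the URL below is a placeholder.
const scoringTool = {
  name: "score_lead",
  description: "Calls the internal scoring API and returns the score field",
  request: {
    method: "POST",
    url: "https://internal.example.com/score",
    // Placeholder templating syntax for wiring agent input into the request.
    body: { domain: "{{input.domain}}" },
  },
};
```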

{/* TODO: Screenshot of Mothership with the Custom Tool subagent active — showing it building a tool definition. */}

<FAQ items={[
  { question: "What's the difference between a scheduled job and a deployed workflow?", answer: "A scheduled job runs a Mothership prompt on a cron schedule — Mothership decides what to do each time based on current workspace state. A deployed workflow runs a fixed, deterministic graph of blocks. Use jobs when you want Mothership to reason and adapt; use workflows when you want predictable, auditable execution." },
  { question: "Can a scheduled job trigger a workflow?", answer: "Yes. Include it in the job prompt: 'Run the invoice sync workflow and then post the results to Slack.'" },
  { question: "How do I know what integrations are connected?", answer: "Ask Mothership: 'What integrations are connected to this workspace?' or check Settings → Credentials." },
  { question: "Can direct actions be undone?", answer: "No. Actions like sending emails, posting to Slack, or creating records are immediate and irreversible. Confirm the details before asking Mothership to act." },
  { question: "Are environment variables visible to all workflows?", answer: "Yes. Environment variables are workspace-scoped and available to every workflow via {{ENV_VAR}} syntax." },
  { question: "What's the difference between MCP servers and custom tools?", answer: "MCP servers expose a set of tools from an external service over the MCP protocol — you connect an existing MCP-compatible server. Custom tools are single HTTP endpoints you define manually — useful for internal APIs that don't have an MCP server." },
]} />
118
apps/docs/content/docs/en/mothership/workflows.mdx
Normal file
@@ -0,0 +1,118 @@
---
title: Workflows
description: Create, edit, run, debug, deploy, and organize workflows from Mothership.
---

import { Image } from '@/components/ui/image'
import { Video } from '@/components/ui/video'
import { FAQ } from '@/components/ui/faq'

<Video src="mothership/create-workflow.mp4" width={700} height={450} />

Describe a workflow and Mothership builds it. Reference an existing one by name and it edits it. No canvas navigation required — every change appears in the resource panel in real time.

## Creating Workflows

Describe what the workflow should do — what triggers it, what it should do, which integrations it needs, and what it should return. Mothership builds it and opens the canvas in the resource panel.

- "Build a workflow that takes a URL, scrapes the page, summarizes it with Claude, and sends the summary to a Slack channel"
- "Create a workflow triggered by a webhook that extracts invoice data from a PDF and writes it to the billing table"
- "Build an outbound workflow: take a company name and domain, enrich it with firmographic data, score the fit, and draft a personalized cold email"
- "Create a lead enrichment workflow that takes an email from a form submission, looks up the company, and writes the enriched record to the leads table"
- "Build a customer onboarding workflow: when a new user signs up, send a welcome email, create a HubSpot contact, and post a notification to #new-customers in Slack"

## Editing Workflows

{/* TODO: Screenshot of Mothership with the Edit subagent active and a change applied to an open workflow — e.g., a new block added or a configuration updated, visible on the canvas in the resource panel. */}

Open an existing workflow with `@workflow-name` or the **+** menu, then describe the change. Mothership reads the current structure before modifying it — you don't need to explain what already exists.

- "Add a condition that routes to a different branch if the confidence score is below 0.7"
- "Replace the GPT-4o model with Claude Opus 4.6 on the summarizer block"
- "Add a Slack notification at the end that includes the output"

## Running Workflows

<Video src="mothership/run-workflow.mp4" width={700} height={450} />

Ask Mothership to run a workflow and it handles the execution:

- "Run the data sync workflow"
- "Run the invoice processor with this PDF [attach file]"
- "Test the lead scoring workflow with these inputs: name=Acme, score=0.4"

Execution streams back to the chat. The workflow in the resource panel shows live block-by-block state.

## Reading Logs

Mothership can retrieve and interpret execution logs for any workflow in the workspace:

- "Show me the last 10 runs of the pipeline workflow"
- "Why did the invoice workflow fail yesterday?"
- "What did the extractor block return in the most recent run?"

Logs include per-block execution state, outputs, errors, and timing.

## Debugging

When a workflow fails, tell Mothership to debug it:

- "Debug the last failed run of the content pipeline"
- "The summarizer block is returning empty output — figure out why"

Mothership reads the failure logs, identifies the cause, applies a fix, and can re-run to confirm.

{/* TODO: Screenshot of the Debug subagent section in the Mothership chat showing it reading logs and applying a fix. */}

## Deploying

Mothership can deploy a workflow as any of the three deployment types:

| Deployment type | What it creates |
|----------------|----------------|
| **API** | A REST endpoint at `https://sim.ai/api/workflows/{id}/execute` |
| **Chat** | A hosted conversational interface with a shareable URL |
| **MCP tool** | An MCP server that exposes the workflow as a tool |

Ask: "Deploy the invoice workflow as an API and generate an API key."

Mothership can also roll back: "Revert the billing workflow to the version from last Tuesday."

See [API Deployment](/execution/api-deployment) and [Chat Deployment](/execution/chat) for full details on each deployment type.
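
Calling an API deployment is a plain HTTP request. In the sketch below, the URL pattern comes from the table above, but the `X-API-Key` header name and the JSON payload shape are assumptions for illustration; check the API Deployment docs for the exact contract.

```typescript
// Build the execute URL for an API-deployed workflow (pattern from the table above).
function executeUrl(workflowId: string): string {
  return `https://sim.ai/api/workflows/${workflowId}/execute`;
}

// Invoke the deployment. The "X-API-Key" header and payload shape are
// assumptions for illustration, not the documented contract.
async function executeWorkflow(workflowId: string, apiKey: string, input: unknown) {
  const res = await fetch(executeUrl(workflowId), {
    method: "POST",
    headers: { "Content-Type": "application/json", "X-API-Key": apiKey },
    body: JSON.stringify(input),
  });
  if (!res.ok) throw new Error(`Workflow execution failed: ${res.status}`);
  return res.json();
}
```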

## Organizing Workflows

Mothership can create and manage folders to keep your workspace organized.

**Folders:**
- "Create a folder called 'Data Pipelines'"
- "Move the invoice workflow into the billing folder"
- "Move the billing folder inside the finance folder"
- "Delete the old-experiments folder"

**Renaming and moving:**
- "Rename the 'test_v2' workflow to 'lead-scorer'"
- "Move the summarizer workflow to the research folder"

{/* TODO: Screenshot showing Mothership confirming a folder or workflow organization action — e.g., a message confirming "Moved 'invoice-processor' into 'billing' folder" with the resource panel showing the folder open. */}

## Workflow Variables

Mothership can set global variables on a workflow — values accessible across all blocks in that workflow at runtime:

- "Set the API_ENDPOINT variable on the sync workflow to 'https://api.example.com/v2'"
- "Update the MAX_RETRIES variable on the pipeline workflow to 5"

Variables set this way are available via `<variable.VARIABLE_NAME>` syntax inside any block in the workflow.
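
Conceptually, `<variable.NAME>` references resolve by substitution when a block runs, much like environment variables but scoped to one workflow. A sketch of the idea (assumed behavior, not Sim's implementation):

```typescript
// Sketch: resolve <variable.NAME> references in a block's configuration.
// Unresolved names are left as-is here (an assumption for illustration).
function resolveWorkflowVars(text: string, vars: Record<string, string>): string {
  return text.replace(/<variable\.(\w+)>/g, (match, name) =>
    name in vars ? vars[name] : match
  );
}
```

So a block configured with `GET <variable.API_ENDPOINT>/status` would call `https://api.example.com/v2/status` after the first example above is applied.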

## Deleting Workflows

- "Delete the old_api_prototype workflow"
- "Delete all workflows in the deprecated folder"

<FAQ items={[
  { question: "Can Mothership edit a workflow while it's deployed?", answer: "Yes. Editing a workflow does not affect the live deployment. The deployed version is a snapshot — you need to ask Mothership to redeploy to push changes to production." },
  { question: "Can I run a workflow with specific inputs from Mothership?", answer: "Yes. Describe the inputs in your message and Mothership passes them to the workflow's start block." },
  { question: "How does Mothership know what my workflow does?", answer: "When you reference a workflow, Mothership loads its full structure — every block, connection, and configuration — before acting on it." },
  { question: "What happens to a workflow's deployment when I delete it?", answer: "The workflow and all its deployments are permanently removed. Any API endpoints, chat URLs, or MCP tools that pointed to that workflow will stop working." },
]} />
@@ -2,10 +2,31 @@
title: "Roles and Permissions"
---

import { Callout } from 'fumadocs-ui/components/callout'
import { Video } from '@/components/ui/video'

When you invite team members to your organization or workspace, you'll need to choose what level of access to give them. This guide explains what each permission level allows users to do.

## Workspaces and Organizations

Sim has two kinds of workspaces:

- **Personal workspaces** live under your individual account. The number you can create depends on your plan.
- **Shared (organization) workspaces** live under an organization and are available on Team and Enterprise plans. Any organization Owner or Admin can create them. Members invited to a shared workspace automatically join the organization and count toward your seat total.

### Workspace Limits by Plan

| Plan | Personal Workspaces | Shared Workspaces |
|------|---------------------|-------------------|
| **Free** | 1 | — |
| **Pro** | Up to 3 | — |
| **Max** | Up to 10 | — |
| **Team / Enterprise** | Unlimited | Unlimited (seat-gated invites) |

<Callout type="info">
  When a Team or Enterprise subscription is cancelled or downgraded, existing shared workspaces stay accessible to current members. New invitations are blocked until the organization is upgraded again.
</Callout>

## How to Invite Someone to a Workspace

<div className="mx-auto w-full overflow-hidden rounded-lg">

@@ -88,6 +109,10 @@ Every workspace has one **Owner** (the person who created it) plus any number of
- Can do everything except delete the workspace or remove the owner
- Can be removed from the workspace by the owner or other admins

<Callout type="info">
  For shared (organization) workspaces, the organization's Owner and Admins are treated as Admins of every workspace in the organization, even without an explicit per-workspace invite.
</Callout>

---

## Common Scenarios

@@ -145,28 +170,41 @@ Periodically review who has access to what, especially when team members change

## Organization Roles

An organization has three roles: **Owner**, **Admin**, and **Member**.

### Organization Owner
**What they can do:**
- Everything an Admin can do
- Transfer organization ownership to another user
|
||||
- Only one Owner exists per organization
|
||||
|
||||
### Organization Admin
|
||||
**What they can do:**
|
||||
- Invite and remove team members from the organization
|
||||
- Create new workspaces
|
||||
- Manage billing and subscription settings
|
||||
- Access all workspaces within the organization
|
||||
- Create new shared workspaces under the organization
|
||||
- Manage billing, seat count, and subscription settings
|
||||
- Access all shared workspaces within the organization as a workspace Admin
|
||||
- Promote members to Admin or demote Admins to Member
|
||||
|
||||
<Callout type="info">
|
||||
Owners and Admins have the same day-to-day permissions. The only action reserved for the Owner is transferring ownership.
|
||||
</Callout>
|
||||
|
||||
### Organization Member
|
||||
**What they can do:**
|
||||
- Access workspaces they've been specifically invited to
|
||||
- Access shared workspaces they've been specifically invited to
|
||||
- View the list of organization members
|
||||
- Cannot invite new people or manage organization settings
|
||||
- Cannot invite new people, create shared workspaces, or manage organization settings
|
||||
|
||||
import { FAQ } from '@/components/ui/faq'
|
||||
|
||||
<FAQ items={[
|
||||
{ question: "What is the difference between organization roles and workspace permissions?", answer: "Organization roles (Admin or Member) control who can manage the organization itself, including inviting people, creating workspaces, and handling billing. Workspace permissions (Read, Write, Admin) control what a user can do within a specific workspace, such as viewing, editing, or managing workflows. A user needs both an organization role and a workspace permission to work within a workspace." },
|
||||
{ question: "Can I restrict which integrations or model providers a team member can use?", answer: "Yes. Organization admins can create permission groups with fine-grained controls, including restricting allowed integrations and allowed model providers to specific lists. You can also disable access to MCP tools, custom tools, skills, and various platform features like the knowledge base, API keys, or Copilot on a per-group basis." },
|
||||
{ question: "What is the difference between organization roles and workspace permissions?", answer: "Organization roles (Owner, Admin, or Member) control who can manage the organization itself, including inviting people, creating shared workspaces, and handling billing. Workspace permissions (Read, Write, Admin) control what a user can do within a specific workspace, such as viewing, editing, or managing workflows. A user needs both an organization role and a workspace permission to work within a shared workspace." },
|
||||
{ question: "How many workspaces can I create?", answer: "Free users get 1 personal workspace. Pro users get up to 3 personal workspaces. Max users get up to 10 personal workspaces. Team and Enterprise plans support unlimited shared workspaces under the organization — new invites are gated by your seat count." },
|
||||
{ question: "What happens to my shared workspaces if I cancel or downgrade my Team plan?", answer: "Existing shared workspaces remain accessible to current members, but new invitations are disabled until you upgrade back to a Team or Enterprise plan. No workspaces or members are deleted — the organization is simply dormant until billing is re-enabled." },
|
||||
{ question: "Can I restrict which integrations or model providers a team member can use?", answer: "Yes, on Enterprise-entitled workspaces. Any workspace admin can create permission groups with fine-grained controls, including restricting allowed integrations and allowed model providers to specific lists. You can also disable access to MCP tools, custom tools, skills, and various platform features like the knowledge base, API keys, or Copilot on a per-group basis. Permission groups are scoped per workspace — a user can belong to different groups in different workspaces." },
|
||||
{ question: "What happens when a personal environment variable has the same name as a workspace variable?", answer: "The personal environment variable takes priority. When a workflow runs, if both a personal and workspace variable share the same name, the personal value is used. This allows individual users to override shared workspace configuration when needed." },
|
||||
{ question: "Can an Admin remove the workspace owner?", answer: "No. The workspace owner cannot be removed from the workspace by anyone. Only the workspace owner can delete the workspace or transfer ownership to another user. Admins can do everything else, including inviting and removing other users and managing workspace settings." },
|
||||
{ question: "What are permission groups and how do they work?", answer: "Permission groups are an advanced access control feature that lets organization admins define granular restrictions beyond the standard Read/Write/Admin roles. A permission group can hide UI sections (like trace spans, knowledge base, API keys, or deployment options), disable features (MCP tools, custom tools, skills, invitations), and restrict which integrations and model providers members can access. Members can be assigned to groups, and new members can be auto-added." },
|
||||
{ question: "What are permission groups and how do they work?", answer: "Permission groups are an Enterprise access control feature that lets workspace admins define granular restrictions beyond the standard Read/Write/Admin roles. Groups are scoped to a single workspace: each user can be in at most one group per workspace, and a user can be in different groups across different workspaces. A permission group can hide UI sections (like trace spans, knowledge base, API keys, or deployment options), disable features (MCP tools, custom tools, skills, invitations), and restrict which integrations and model providers its members can access. Members can be assigned manually, and new members can be auto-added on join. Execution-time enforcement is based on the workflow's workspace, not the user's current UI context." },
|
||||
{ question: "How should I set up permissions for a new team member?", answer: "Start with the lowest permission level they need. Invite them to the organization as a Member, then add them to the relevant workspace with Read permission if they only need visibility, Write if they need to create and run workflows, or Admin if they need to manage the workspace and its users. You can always increase permissions later." },
|
||||
]} />
|
||||
@@ -1,4 +0,0 @@
|
||||
{
|
||||
"title": "SDKs",
|
||||
"pages": ["python", "typescript"]
|
||||
}
|
||||
@@ -1,772 +0,0 @@
|
||||
---
|
||||
title: Python
|
||||
---
|
||||
|
||||
import { Callout } from 'fumadocs-ui/components/callout'
|
||||
import { Card, Cards } from 'fumadocs-ui/components/card'
|
||||
import { Step, Steps } from 'fumadocs-ui/components/steps'
|
||||
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
|
||||
|
||||
The official Python SDK for Sim lets you execute workflows programmatically from your Python applications.
|
||||
|
||||
<Callout type="info">
|
||||
The Python SDK requires Python 3.8+ and includes async execution, automatic rate limiting with exponential backoff, and usage tracking.
|
||||
</Callout>
|
||||
|
||||
## Installation
|
||||
|
||||
Install the SDK using pip:
|
||||
|
||||
```bash
|
||||
pip install simstudio-sdk
|
||||
```
|
||||
|
||||
## Quick Start
|
||||
|
||||
Here's a simple example to get you started:
|
||||
|
||||
```python
|
||||
from simstudio import SimStudioClient
|
||||
|
||||
# Initialize the client
|
||||
client = SimStudioClient(
|
||||
api_key="your-api-key-here",
|
||||
base_url="https://sim.ai" # optional, defaults to https://sim.ai
|
||||
)
|
||||
|
||||
# Execute a workflow
|
||||
try:
|
||||
result = client.execute_workflow("workflow-id")
|
||||
print("Workflow executed successfully:", result)
|
||||
except Exception as error:
|
||||
print("Workflow execution failed:", error)
|
||||
```
|
||||
|
||||
## API Reference
|
||||
|
||||
### SimStudioClient
|
||||
|
||||
#### Constructor
|
||||
|
||||
```python
|
||||
SimStudioClient(api_key: str, base_url: str = "https://sim.ai")
|
||||
```
|
||||
|
||||
**Parameters:**
|
||||
- `api_key` (str): Your Sim API key
|
||||
- `base_url` (str, optional): Base URL for the Sim API
|
||||
|
||||
#### Methods
|
||||
|
||||
##### execute_workflow()
|
||||
|
||||
Execute a workflow with optional input data.
|
||||
|
||||
```python
|
||||
result = client.execute_workflow(
|
||||
"workflow-id",
|
||||
input={"message": "Hello, world!"},
|
||||
timeout=30.0 # 30 seconds
|
||||
)
|
||||
```
|
||||
|
||||
**Parameters:**
|
||||
- `workflow_id` (str): The ID of the workflow to execute
|
||||
- `input` (dict, optional): Input data to pass to the workflow
|
||||
- `timeout` (float, optional): Timeout in seconds (default: 30.0)
|
||||
- `stream` (bool, optional): Enable streaming responses (default: False)
|
||||
- `selected_outputs` (list[str], optional): Block outputs to stream in `blockName.attribute` format (e.g., `["agent1.content"]`)
|
||||
- `async_execution` (bool, optional): Execute asynchronously (default: False)
|
||||
|
||||
**Returns:** `WorkflowExecutionResult | AsyncExecutionResult`
|
||||
|
||||
When `async_execution=True`, returns immediately with a task ID for polling. Otherwise, waits for completion.
|
||||
|
||||
##### get_workflow_status()
|
||||
|
||||
Get the status of a workflow (deployment status, etc.).
|
||||
|
||||
```python
|
||||
status = client.get_workflow_status("workflow-id")
|
||||
print("Is deployed:", status.is_deployed)
|
||||
```
|
||||
|
||||
**Parameters:**
|
||||
- `workflow_id` (str): The ID of the workflow
|
||||
|
||||
**Returns:** `WorkflowStatus`
|
||||
|
||||
##### validate_workflow()
|
||||
|
||||
Validate that a workflow is ready for execution.
|
||||
|
||||
```python
|
||||
is_ready = client.validate_workflow("workflow-id")
|
||||
if is_ready:
|
||||
# Workflow is deployed and ready
|
||||
pass
|
||||
```
|
||||
|
||||
**Parameters:**
|
||||
- `workflow_id` (str): The ID of the workflow
|
||||
|
||||
**Returns:** `bool`
|
||||
|
||||
##### get_job_status()
|
||||
|
||||
Get the status of an async job execution.
|
||||
|
||||
```python
|
||||
status = client.get_job_status("task-id-from-async-execution")
|
||||
print("Status:", status["status"]) # 'queued', 'processing', 'completed', 'failed'
|
||||
if status["status"] == "completed":
|
||||
print("Output:", status["output"])
|
||||
```
|
||||
|
||||
**Parameters:**
|
||||
- `task_id` (str): The task ID returned from async execution
|
||||
|
||||
**Returns:** `Dict[str, Any]`
|
||||
|
||||
**Response fields:**
|
||||
- `success` (bool): Whether the request was successful
|
||||
- `taskId` (str): The task ID
|
||||
- `status` (str): One of `'queued'`, `'processing'`, `'completed'`, `'failed'`, `'cancelled'`
|
||||
- `metadata` (dict): Contains `startedAt`, `completedAt`, and `duration`
|
||||
- `output` (any, optional): The workflow output (when completed)
|
||||
- `error` (any, optional): Error details (when failed)
|
||||
- `estimatedDuration` (int, optional): Estimated duration in milliseconds (when processing/queued)
|
||||
|
||||
##### execute_with_retry()
|
||||
|
||||
Execute a workflow with automatic retry on rate limit errors using exponential backoff.
|
||||
|
||||
```python
|
||||
result = client.execute_with_retry(
|
||||
"workflow-id",
|
||||
input={"message": "Hello"},
|
||||
timeout=30.0,
|
||||
max_retries=3, # Maximum number of retries
|
||||
initial_delay=1.0, # Initial delay in seconds
|
||||
max_delay=30.0, # Maximum delay in seconds
|
||||
backoff_multiplier=2.0 # Exponential backoff multiplier
|
||||
)
|
||||
```
|
||||
|
||||
**Parameters:**
|
||||
- `workflow_id` (str): The ID of the workflow to execute
|
||||
- `input` (dict, optional): Input data to pass to the workflow
|
||||
- `timeout` (float, optional): Timeout in seconds
|
||||
- `stream` (bool, optional): Enable streaming responses
|
||||
- `selected_outputs` (list, optional): Block outputs to stream
|
||||
- `async_execution` (bool, optional): Execute asynchronously
|
||||
- `max_retries` (int, optional): Maximum number of retries (default: 3)
|
||||
- `initial_delay` (float, optional): Initial delay in seconds (default: 1.0)
|
||||
- `max_delay` (float, optional): Maximum delay in seconds (default: 30.0)
|
||||
- `backoff_multiplier` (float, optional): Backoff multiplier (default: 2.0)
|
||||
|
||||
**Returns:** `WorkflowExecutionResult | AsyncExecutionResult`
|
||||
|
||||
The retry logic uses exponential backoff (1s → 2s → 4s → 8s...) with ±25% jitter to avoid thundering-herd retries. If the API provides a `retry-after` header, that value is used instead.
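The backoff schedule can be sketched as follows (an illustration of the documented behavior, not the SDK's internal code; `backoff_delay` is a name introduced here for the example):

```python
import random

def backoff_delay(attempt, initial_delay=1.0, max_delay=30.0,
                  backoff_multiplier=2.0, jitter=0.25):
    """Delay before retry number `attempt` (0-based): exponential growth,
    capped at max_delay, with +/-25% jitter applied."""
    base = min(initial_delay * (backoff_multiplier ** attempt), max_delay)
    # Jitter spreads simultaneous retries from many clients over time
    return base * (1 + random.uniform(-jitter, jitter))

# Without jitter the schedule is 1s, 2s, 4s, 8s, 16s, then capped at 30s
print([backoff_delay(n, jitter=0) for n in range(6)])
# → [1.0, 2.0, 4.0, 8.0, 16.0, 30.0]
```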
|
||||
|
||||
##### get_rate_limit_info()
|
||||
|
||||
Get the current rate limit information from the last API response.
|
||||
|
||||
```python
|
||||
rate_limit_info = client.get_rate_limit_info()
|
||||
if rate_limit_info:
|
||||
print("Limit:", rate_limit_info.limit)
|
||||
print("Remaining:", rate_limit_info.remaining)
|
||||
print("Reset:", datetime.fromtimestamp(rate_limit_info.reset))
|
||||
```
|
||||
|
||||
**Returns:** `RateLimitInfo | None`
|
||||
|
||||
##### get_usage_limits()
|
||||
|
||||
Get current usage limits and quota information for your account.
|
||||
|
||||
```python
|
||||
limits = client.get_usage_limits()
|
||||
print("Sync requests remaining:", limits.rate_limit["sync"]["remaining"])
|
||||
print("Async requests remaining:", limits.rate_limit["async"]["remaining"])
|
||||
print("Current period cost:", limits.usage["currentPeriodCost"])
|
||||
print("Plan:", limits.usage["plan"])
|
||||
```
|
||||
|
||||
**Returns:** `UsageLimits`
|
||||
|
||||
**Response structure:**
|
||||
```python
|
||||
{
|
||||
"success": bool,
|
||||
"rateLimit": {
|
||||
"sync": {
|
||||
"isLimited": bool,
|
||||
"limit": int,
|
||||
"remaining": int,
|
||||
"resetAt": str
|
||||
},
|
||||
"async": {
|
||||
"isLimited": bool,
|
||||
"limit": int,
|
||||
"remaining": int,
|
||||
"resetAt": str
|
||||
},
|
||||
"authType": str # 'api' or 'manual'
|
||||
},
|
||||
"usage": {
|
||||
"currentPeriodCost": float,
|
||||
"limit": float,
|
||||
"plan": str # e.g., 'free', 'pro'
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
##### set_api_key()
|
||||
|
||||
Update the API key.
|
||||
|
||||
```python
|
||||
client.set_api_key("new-api-key")
|
||||
```
|
||||
|
||||
##### set_base_url()
|
||||
|
||||
Update the base URL.
|
||||
|
||||
```python
|
||||
client.set_base_url("https://my-custom-domain.com")
|
||||
```
|
||||
|
||||
##### close()
|
||||
|
||||
Close the underlying HTTP session.
|
||||
|
||||
```python
|
||||
client.close()
|
||||
```
|
||||
|
||||
## Data Classes
|
||||
|
||||
### WorkflowExecutionResult
|
||||
|
||||
```python
|
||||
@dataclass
|
||||
class WorkflowExecutionResult:
|
||||
success: bool
|
||||
output: Optional[Any] = None
|
||||
error: Optional[str] = None
|
||||
logs: Optional[List[Any]] = None
|
||||
metadata: Optional[Dict[str, Any]] = None
|
||||
trace_spans: Optional[List[Any]] = None
|
||||
total_duration: Optional[float] = None
|
||||
```
|
||||
|
||||
### AsyncExecutionResult
|
||||
|
||||
```python
|
||||
@dataclass
|
||||
class AsyncExecutionResult:
|
||||
success: bool
|
||||
task_id: str
|
||||
status: str # 'queued'
|
||||
created_at: str
|
||||
links: Dict[str, str] # e.g., {"status": "/api/jobs/{taskId}"}
|
||||
```
|
||||
|
||||
### WorkflowStatus
|
||||
|
||||
```python
|
||||
@dataclass
|
||||
class WorkflowStatus:
|
||||
is_deployed: bool
|
||||
deployed_at: Optional[str] = None
|
||||
needs_redeployment: bool = False
|
||||
```
|
||||
|
||||
### RateLimitInfo
|
||||
|
||||
```python
|
||||
@dataclass
|
||||
class RateLimitInfo:
|
||||
limit: int
|
||||
remaining: int
|
||||
reset: int
|
||||
retry_after: Optional[int] = None
|
||||
```
|
||||
|
||||
### UsageLimits
|
||||
|
||||
```python
|
||||
@dataclass
|
||||
class UsageLimits:
|
||||
success: bool
|
||||
rate_limit: Dict[str, Any]
|
||||
usage: Dict[str, Any]
|
||||
```
|
||||
|
||||
### SimStudioError
|
||||
|
||||
```python
|
||||
class SimStudioError(Exception):
|
||||
def __init__(self, message: str, code: Optional[str] = None, status: Optional[int] = None):
|
||||
super().__init__(message)
|
||||
self.code = code
|
||||
self.status = status
|
||||
```
|
||||
|
||||
**Common error codes:**
|
||||
- `UNAUTHORIZED`: Invalid API key
|
||||
- `TIMEOUT`: Request timed out
|
||||
- `RATE_LIMIT_EXCEEDED`: Rate limit exceeded
|
||||
- `USAGE_LIMIT_EXCEEDED`: Usage limit exceeded
|
||||
- `EXECUTION_ERROR`: Workflow execution failed
|
||||
|
||||
## Examples
|
||||
|
||||
### Basic Workflow Execution
|
||||
|
||||
<Steps>
|
||||
<Step title="Initialize the client">
|
||||
Set up the SimStudioClient with your API key.
|
||||
</Step>
|
||||
<Step title="Validate the workflow">
|
||||
Check if the workflow is deployed and ready for execution.
|
||||
</Step>
|
||||
<Step title="Execute the workflow">
|
||||
Run the workflow with your input data.
|
||||
</Step>
|
||||
<Step title="Handle the result">
|
||||
Process the execution result and handle any errors.
|
||||
</Step>
|
||||
</Steps>
|
||||
|
||||
```python
|
||||
import os
|
||||
from simstudio import SimStudioClient
|
||||
|
||||
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
|
||||
|
||||
def run_workflow():
|
||||
try:
|
||||
# Check if workflow is ready
|
||||
is_ready = client.validate_workflow("my-workflow-id")
|
||||
if not is_ready:
|
||||
raise Exception("Workflow is not deployed or ready")
|
||||
|
||||
# Execute the workflow
|
||||
result = client.execute_workflow(
|
||||
"my-workflow-id",
|
||||
input={
|
||||
"message": "Process this data",
|
||||
"user_id": "12345"
|
||||
}
|
||||
)
|
||||
|
||||
if result.success:
|
||||
print("Output:", result.output)
|
||||
print("Duration:", result.metadata.get("duration") if result.metadata else None)
|
||||
else:
|
||||
print("Workflow failed:", result.error)
|
||||
|
||||
except Exception as error:
|
||||
print("Error:", error)
|
||||
|
||||
run_workflow()
|
||||
```
|
||||
|
||||
### Error Handling
|
||||
|
||||
Handle different types of errors that may occur during workflow execution:
|
||||
|
||||
```python
|
||||
from simstudio import SimStudioClient, SimStudioError
|
||||
import os
|
||||
|
||||
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
|
||||
|
||||
def execute_with_error_handling():
|
||||
try:
|
||||
result = client.execute_workflow("workflow-id")
|
||||
return result
|
||||
except SimStudioError as error:
|
||||
if error.code == "UNAUTHORIZED":
|
||||
print("Invalid API key")
|
||||
elif error.code == "TIMEOUT":
|
||||
print("Workflow execution timed out")
|
||||
elif error.code == "USAGE_LIMIT_EXCEEDED":
|
||||
print("Usage limit exceeded")
|
||||
elif error.code == "INVALID_JSON":
|
||||
print("Invalid JSON in request body")
|
||||
else:
|
||||
print(f"Workflow error: {error}")
|
||||
raise
|
||||
except Exception as error:
|
||||
print(f"Unexpected error: {error}")
|
||||
raise
|
||||
```
|
||||
|
||||
### Context Manager Usage
|
||||
|
||||
Use the client as a context manager to automatically handle resource cleanup:
|
||||
|
||||
```python
|
||||
from simstudio import SimStudioClient
|
||||
import os
|
||||
|
||||
# Using context manager to automatically close the session
|
||||
with SimStudioClient(api_key=os.getenv("SIM_API_KEY")) as client:
|
||||
result = client.execute_workflow("workflow-id")
|
||||
print("Result:", result)
|
||||
# Session is automatically closed here
|
||||
```
|
||||
|
||||
### Batch Workflow Execution
|
||||
|
||||
Execute multiple workflows efficiently:
|
||||
|
||||
```python
|
||||
from simstudio import SimStudioClient
|
||||
import os
|
||||
|
||||
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
|
||||
|
||||
def execute_workflows_batch(workflow_data_pairs):
|
||||
"""Execute multiple workflows with different input data."""
|
||||
results = []
|
||||
|
||||
for workflow_id, input_data in workflow_data_pairs:
|
||||
try:
|
||||
# Validate workflow before execution
|
||||
if not client.validate_workflow(workflow_id):
|
||||
print(f"Skipping {workflow_id}: not deployed")
|
||||
continue
|
||||
|
||||
result = client.execute_workflow(workflow_id, input_data)
|
||||
results.append({
|
||||
"workflow_id": workflow_id,
|
||||
"success": result.success,
|
||||
"output": result.output,
|
||||
"error": result.error
|
||||
})
|
||||
|
||||
except Exception as error:
|
||||
results.append({
|
||||
"workflow_id": workflow_id,
|
||||
"success": False,
|
||||
"error": str(error)
|
||||
})
|
||||
|
||||
return results
|
||||
|
||||
# Example usage
|
||||
workflows = [
|
||||
("workflow-1", {"type": "analysis", "data": "sample1"}),
|
||||
("workflow-2", {"type": "processing", "data": "sample2"}),
|
||||
]
|
||||
|
||||
results = execute_workflows_batch(workflows)
|
||||
for result in results:
|
||||
print(f"Workflow {result['workflow_id']}: {'Success' if result['success'] else 'Failed'}")
|
||||
```
|
||||
|
||||
### Async Workflow Execution
|
||||
|
||||
Execute workflows asynchronously for long-running tasks:
|
||||
|
||||
```python
|
||||
import os
|
||||
import time
|
||||
from simstudio import SimStudioClient
|
||||
|
||||
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
|
||||
|
||||
def execute_async():
|
||||
try:
|
||||
# Start async execution
|
||||
result = client.execute_workflow(
|
||||
"workflow-id",
|
||||
input={"data": "large dataset"},
|
||||
async_execution=True # Execute asynchronously
|
||||
)
|
||||
|
||||
# Check if result is an async execution
|
||||
if hasattr(result, 'task_id'):
|
||||
print(f"Task ID: {result.task_id}")
|
||||
print(f"Status endpoint: {result.links['status']}")
|
||||
|
||||
# Poll for completion
|
||||
status = client.get_job_status(result.task_id)
|
||||
|
||||
while status["status"] in ["queued", "processing"]:
|
||||
print(f"Current status: {status['status']}")
|
||||
time.sleep(2) # Wait 2 seconds
|
||||
status = client.get_job_status(result.task_id)
|
||||
|
||||
if status["status"] == "completed":
|
||||
print("Workflow completed!")
|
||||
print(f"Output: {status['output']}")
|
||||
print(f"Duration: {status['metadata']['duration']}")
|
||||
else:
|
||||
print(f"Workflow failed: {status['error']}")
|
||||
|
||||
except Exception as error:
|
||||
print(f"Error: {error}")
|
||||
|
||||
execute_async()
|
||||
```
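The polling loop above has no upper bound on waiting. A small helper with a deadline avoids hanging on stuck jobs (a sketch, not part of the SDK; `wait_for_job` is a name introduced here, and `get_status` stands in for `client.get_job_status`):

```python
import time

def wait_for_job(get_status, task_id, poll_interval=2.0, max_wait=300.0):
    """Poll until the job leaves 'queued'/'processing' or max_wait elapses.

    `get_status` is any callable with the shape of client.get_job_status,
    returning the status dict shown earlier.
    """
    deadline = time.monotonic() + max_wait
    while True:
        status = get_status(task_id)
        if status["status"] not in ("queued", "processing"):
            return status  # 'completed', 'failed', or 'cancelled'
        if time.monotonic() >= deadline:
            raise TimeoutError(f"Job {task_id} still {status['status']} after {max_wait}s")
        time.sleep(poll_interval)
```

With the SDK this might be called as `wait_for_job(client.get_job_status, result.task_id)` after starting an async execution.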
|
||||
|
||||
### Rate Limiting and Retry
|
||||
|
||||
Handle rate limits automatically with exponential backoff:
|
||||
|
||||
```python
|
||||
import os
|
||||
from simstudio import SimStudioClient, SimStudioError
|
||||
|
||||
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
|
||||
|
||||
def execute_with_retry_handling():
|
||||
try:
|
||||
# Automatically retries on rate limit
|
||||
result = client.execute_with_retry(
|
||||
"workflow-id",
|
||||
input={"message": "Process this"},
|
||||
max_retries=5,
|
||||
initial_delay=1.0,
|
||||
max_delay=60.0,
|
||||
backoff_multiplier=2.0
|
||||
)
|
||||
|
||||
print(f"Success: {result}")
|
||||
except SimStudioError as error:
|
||||
if error.code == "RATE_LIMIT_EXCEEDED":
|
||||
print("Rate limit exceeded after all retries")
|
||||
|
||||
# Check rate limit info
|
||||
rate_limit_info = client.get_rate_limit_info()
|
||||
if rate_limit_info:
|
||||
from datetime import datetime
|
||||
reset_time = datetime.fromtimestamp(rate_limit_info.reset)
|
||||
print(f"Rate limit resets at: {reset_time}")
|
||||
|
||||
execute_with_retry_handling()
|
||||
```
|
||||
|
||||
### Usage Monitoring
|
||||
|
||||
Monitor your account usage and limits:
|
||||
|
||||
```python
|
||||
import os
|
||||
from simstudio import SimStudioClient
|
||||
|
||||
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
|
||||
|
||||
def check_usage():
|
||||
try:
|
||||
limits = client.get_usage_limits()
|
||||
|
||||
print("=== Rate Limits ===")
|
||||
print("Sync requests:")
|
||||
print(f" Limit: {limits.rate_limit['sync']['limit']}")
|
||||
print(f" Remaining: {limits.rate_limit['sync']['remaining']}")
|
||||
print(f" Resets at: {limits.rate_limit['sync']['resetAt']}")
|
||||
print(f" Is limited: {limits.rate_limit['sync']['isLimited']}")
|
||||
|
||||
print("\nAsync requests:")
|
||||
print(f" Limit: {limits.rate_limit['async']['limit']}")
|
||||
print(f" Remaining: {limits.rate_limit['async']['remaining']}")
|
||||
print(f" Resets at: {limits.rate_limit['async']['resetAt']}")
|
||||
print(f" Is limited: {limits.rate_limit['async']['isLimited']}")
|
||||
|
||||
print("\n=== Usage ===")
|
||||
print(f"Current period cost: ${limits.usage['currentPeriodCost']:.2f}")
|
||||
print(f"Limit: ${limits.usage['limit']:.2f}")
|
||||
print(f"Plan: {limits.usage['plan']}")
|
||||
|
||||
percent_used = (limits.usage['currentPeriodCost'] / limits.usage['limit']) * 100
|
||||
print(f"Usage: {percent_used:.1f}%")
|
||||
|
||||
if percent_used > 80:
|
||||
print("⚠️ Warning: You are approaching your usage limit!")
|
||||
|
||||
except Exception as error:
|
||||
print(f"Error checking usage: {error}")
|
||||
|
||||
check_usage()
|
||||
```
|
||||
|
||||
### Streaming Workflow Execution
|
||||
|
||||
Execute workflows with real-time streaming responses:
|
||||
|
||||
```python
|
||||
from simstudio import SimStudioClient
|
||||
import os
|
||||
|
||||
client = SimStudioClient(api_key=os.getenv("SIM_API_KEY"))
|
||||
|
||||
def execute_with_streaming():
|
||||
"""Execute workflow with streaming enabled."""
|
||||
try:
|
||||
# Enable streaming for specific block outputs
|
||||
result = client.execute_workflow(
|
||||
"workflow-id",
|
||||
input={"message": "Count to five"},
|
||||
stream=True,
|
||||
selected_outputs=["agent1.content"] # Use blockName.attribute format
|
||||
)
|
||||
|
||||
print("Workflow result:", result)
|
||||
except Exception as error:
|
||||
print("Error:", error)
|
||||
|
||||
execute_with_streaming()
|
||||
```
|
||||
|
||||
The streaming response follows the Server-Sent Events (SSE) format:
|
||||
|
||||
```
|
||||
data: {"blockId":"7b7735b9-19e5-4bd6-818b-46aae2596e9f","chunk":"One"}
|
||||
|
||||
data: {"blockId":"7b7735b9-19e5-4bd6-818b-46aae2596e9f","chunk":", two"}
|
||||
|
||||
data: {"event":"done","success":true,"output":{},"metadata":{"duration":610}}
|
||||
|
||||
data: [DONE]
|
||||
```
|
||||
|
||||
**Flask Streaming Example:**
|
||||
|
||||
```python
|
||||
from flask import Flask, Response, stream_with_context
|
||||
import requests
|
||||
import json
|
||||
import os
|
||||
|
||||
app = Flask(__name__)
|
||||
|
||||
@app.route('/stream-workflow')
|
||||
def stream_workflow():
|
||||
"""Stream workflow execution to the client."""
|
||||
|
||||
def generate():
|
||||
response = requests.post(
|
||||
'https://sim.ai/api/workflows/WORKFLOW_ID/execute',
|
||||
headers={
|
||||
'Content-Type': 'application/json',
|
||||
'X-API-Key': os.getenv('SIM_API_KEY')
|
||||
},
|
||||
json={
|
||||
'message': 'Generate a story',
|
||||
'stream': True,
|
||||
'selectedOutputs': ['agent1.content']
|
||||
},
|
||||
stream=True
|
||||
)
|
||||
|
||||
for line in response.iter_lines():
|
||||
if line:
|
||||
decoded_line = line.decode('utf-8')
|
||||
if decoded_line.startswith('data: '):
|
||||
data = decoded_line[6:] # Remove 'data: ' prefix
|
||||
|
||||
if data == '[DONE]':
|
||||
break
|
||||
|
||||
try:
|
||||
parsed = json.loads(data)
|
||||
if 'chunk' in parsed:
|
||||
yield f"data: {json.dumps(parsed)}\n\n"
|
||||
elif parsed.get('event') == 'done':
|
||||
yield f"data: {json.dumps(parsed)}\n\n"
|
||||
print("Execution complete:", parsed.get('metadata'))
|
||||
except json.JSONDecodeError:
|
||||
pass
|
||||
|
||||
return Response(
|
||||
stream_with_context(generate()),
|
||||
mimetype='text/event-stream'
|
||||
)
|
||||
|
||||
if __name__ == '__main__':
|
||||
app.run(debug=True)
|
||||
```
|
||||
|
||||
### Environment Configuration
|
||||
|
||||
Configure the client using environment variables:
|
||||
|
||||
<Tabs items={['Development', 'Production']}>
|
||||
<Tab value="Development">
|
||||
```python
|
||||
import os
|
||||
from simstudio import SimStudioClient
|
||||
|
||||
# Development configuration
|
||||
client = SimStudioClient(
|
||||
api_key=os.getenv("SIM_API_KEY"),
|
||||
base_url=os.getenv("SIM_BASE_URL", "https://sim.ai")
|
||||
)
|
||||
```
|
||||
</Tab>
|
||||
<Tab value="Production">
|
||||
```python
|
||||
import os
|
||||
from simstudio import SimStudioClient
|
||||
|
||||
# Production configuration with error handling
|
||||
api_key = os.getenv("SIM_API_KEY")
|
||||
if not api_key:
|
||||
raise ValueError("SIM_API_KEY environment variable is required")
|
||||
|
||||
client = SimStudioClient(
|
||||
api_key=api_key,
|
||||
base_url=os.getenv("SIM_BASE_URL", "https://sim.ai")
|
||||
)
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>

## Getting Your API Key

<Steps>
<Step title="Log in to Sim">
Navigate to [Sim](https://sim.ai) and log in to your account.
</Step>
<Step title="Open your workflow">
Navigate to the workflow you want to execute programmatically.
</Step>
<Step title="Deploy your workflow">
Click on "Deploy" to deploy your workflow if it hasn't been deployed yet.
</Step>
<Step title="Create or select an API key">
During the deployment process, select or create an API key.
</Step>
<Step title="Copy the API key">
Copy the API key to use in your Python application.
</Step>
</Steps>

## Requirements

- Python 3.8+
- requests >= 2.25.0

## License

Apache-2.0

import { FAQ } from '@/components/ui/faq'

<FAQ items={[
{ question: "Do I need to deploy a workflow before I can execute it via the SDK?", answer: "Yes. Workflows must be deployed before they can be executed through the SDK. You can use the validate_workflow() method to check whether a workflow is deployed and ready. If it returns False, deploy the workflow from the Sim UI first and create or select an API key during deployment." },
{ question: "What is the difference between sync and async execution?", answer: "Sync execution (the default) blocks until the workflow completes and returns the full result. Async execution (async_execution=True) returns immediately with a task ID that you can poll using get_job_status(). Use async mode for long-running workflows to avoid request timeouts. Async job statuses include queued, processing, completed, failed, and cancelled." },
{ question: "How does the SDK handle rate limiting?", answer: "The SDK provides built-in rate limiting support through the execute_with_retry() method. It uses exponential backoff (1s, 2s, 4s, 8s...) with 25% jitter to avoid thundering herd problems. If the API returns a retry-after header, that value is used instead. You can configure max_retries, initial_delay, max_delay, and backoff_multiplier. Use get_rate_limit_info() to check your current rate limit status." },
{ question: "Can I use the Python SDK as a context manager?", answer: "Yes. The SimStudioClient supports Python's context manager protocol. Use it with the 'with' statement to automatically close the underlying HTTP session when you are done, which is especially useful for scripts that create and discard client instances." },
{ question: "How do I handle different types of errors from the SDK?", answer: "The SDK raises SimStudioError with a code property for API-specific errors. Common error codes are UNAUTHORIZED (invalid API key), TIMEOUT (request timed out), RATE_LIMIT_EXCEEDED (too many requests), USAGE_LIMIT_EXCEEDED (billing limit reached), and EXECUTION_ERROR (workflow failed). Use the error code to implement targeted error handling and recovery logic." },
{ question: "How do I monitor my API usage and remaining quota?", answer: "Use the get_usage_limits() method to check your current usage. It returns sync and async rate limit details (limit, remaining, reset time, whether you are currently limited), plus your current period cost, usage limit, and plan tier. This lets you monitor consumption and alert before hitting limits." },
]} />
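
The retry schedule described in the rate-limiting answer above (exponential backoff of 1s, 2s, 4s, 8s... with 25% jitter) can be sketched in plain Python. Parameter names mirror the documented `execute_with_retry` options; whether the SDK applies the jitter before or after capping at `max_delay` is an assumption here.

```python
import random

def backoff_delays(max_retries=4, initial_delay=1.0, max_delay=30.0,
                   backoff_multiplier=2.0, jitter=0.25, seed=None):
    """Sketch of the documented backoff schedule: each attempt waits
    initial_delay * backoff_multiplier**attempt, perturbed by up to
    +/- jitter, and capped at max_delay (ordering assumed)."""
    rng = random.Random(seed)
    delays = []
    delay = initial_delay
    for _ in range(max_retries):
        jittered = delay * (1 + rng.uniform(-jitter, jitter))
        delays.append(min(jittered, max_delay))
        delay *= backoff_multiplier
    return delays
```

With `jitter=0` this yields exactly the 1s, 2s, 4s, 8s sequence from the FAQ; with the default jitter each delay lands within 25% of that baseline.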

File diff suppressed because it is too large

@@ -140,7 +140,7 @@ import { FAQ } from '@/components/ui/faq'
 { question: "How does the agent decide when to load a skill?", answer: "The agent sees an available_skills section in its system prompt listing each skill's name and description. When the agent determines that a skill is relevant to the current task, it calls the load_skill tool with the skill name. The full skill content is then returned as a tool response. This is why writing a specific, keyword-rich description is critical -- it is the only thing the agent reads before deciding whether to activate a skill." },
 { question: "Do skills work with all LLM providers?", answer: "Yes. The load_skill mechanism uses standard tool-calling, which is supported by all LLM providers in Sim. No provider-specific configuration is needed. The skill system works the same way whether you are using Anthropic, OpenAI, Google, or any other supported provider." },
 { question: "When should I use skills vs. agent instructions?", answer: "Use skills for knowledge that applies across multiple workflows or changes frequently. Skills are reusable packages that can be attached to any agent. Use agent instructions for task-specific context that is unique to a single agent and workflow. If you find yourself copying the same instructions into multiple agents, that content should be a skill instead." },
-{ question: "Can permission groups disable skills for certain users?", answer: "Yes. Organization admins can create permission groups with the disableSkills option enabled. When a user is assigned to such a permission group, the skills dropdown in agent blocks will be disabled and they will not be able to add or use skills in their workflows." },
+{ question: "Can permission groups disable skills for certain users?", answer: "Yes. On Enterprise-entitled workspaces, any workspace admin can create a permission group with the disableSkills option enabled. When a user is assigned to such a group in a workspace, the skills dropdown in agent blocks is disabled and they cannot add or use skills in workflows belonging to that workspace." },
 { question: "What is the recommended maximum length for skill content?", answer: "Keep skills focused and under 500 lines. If a skill grows too large, split it into multiple specialized skills. Shorter, focused skills are more effective because the agent can load exactly what it needs. A broad skill with too much content can overwhelm the agent and reduce the quality of its responses." },
 { question: "Where do I create and manage skills?", answer: "Go to Settings and select Skills under the Tools section. From there you can add new skills with a name (kebab-case identifier, max 64 characters), description (max 1024 characters), and content (full instructions in markdown). You can also edit or delete existing skills from this page." },
 ]} />

629 apps/docs/content/docs/en/tools/agentphone.mdx Normal file
@@ -0,0 +1,629 @@
---
title: AgentPhone
description: Provision numbers, send SMS and iMessage, and place voice calls with AgentPhone
---

import { BlockInfoCard } from "@/components/ui/block-info-card"

<BlockInfoCard
  type="agentphone"
  color="linear-gradient(135deg, #1a1a1a 0%, #0a2a14 100%)"
/>

{/* MANUAL-CONTENT-START:intro */}
[AgentPhone](https://agentphone.to/) is an API-first voice and messaging platform built for AI agents. AgentPhone lets you provision real phone numbers, place outbound AI voice calls, send SMS and iMessage, manage conversations and contacts, and monitor usage — all through a simple REST API designed for programmatic access.

**Why AgentPhone?**
- **Agent-Native Telephony:** Purpose-built for AI agents — provision numbers, place calls, and send messages without carrier contracts or telephony plumbing.
- **Voice + Messaging in One API:** Drive outbound AI voice calls alongside SMS, MMS, and iMessage from the same account and phone numbers.
- **Conversation & Transcript Management:** Every call returns an ordered transcript; every message thread is tracked as a conversation with full history and metadata.
- **Contacts Built In:** Create, search, update, and delete contacts on the account so your agents can reference people by name instead of raw phone numbers.
- **Usage Visibility:** Inspect plan limits, current counts, and daily/monthly aggregation so workflows can stay inside guardrails.

**Using AgentPhone in Sim**

Sim's AgentPhone integration connects your agentic workflows directly to AgentPhone using an API key. With 22 operations spanning numbers, calls, conversations, contacts, and usage, you can build powerful voice and messaging automations without writing backend code.

**Key benefits of using AgentPhone in Sim:**
- **Dynamic number provisioning:** Reserve US or Canadian numbers on the fly — per agent, per customer, or per workflow — and release them when no longer needed.
- **Outbound AI voice calls:** Place calls from an agent with an optional greeting, voice override, or system prompt, and read the full transcript back as structured data once the call completes.
- **Two-way messaging:** Send SMS, MMS, or iMessage, fetch conversation history, and react to incoming iMessages — all from inside your workflow.
- **Contact and metadata management:** Keep an account-level contact list and attach custom JSON metadata to conversations so downstream blocks can branch on state.
- **Operational insight:** Pull current usage stats and daily/monthly breakdowns to monitor consumption and enforce plan limits before making the next call.

Whether you're building an outbound AI voice agent, running automated SMS follow-ups, managing two-way customer conversations, or monitoring phone usage across your organization, AgentPhone in Sim gives you direct, secure access to the full AgentPhone API — no middleware required. Simply configure your API key, select the operation you need, and let Sim handle the rest.
{/* MANUAL-CONTENT-END */}


## Usage Instructions

Give your workflow a phone. Provision SMS- and voice-enabled numbers, send messages and tapback reactions, place outbound voice calls, manage conversations and contacts, and track usage — all through a single AgentPhone API key.


## Tools

### `agentphone_create_call`

Initiate an outbound voice call from an AgentPhone agent

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | AgentPhone API key |
| `agentId` | string | Yes | Agent that will handle the call |
| `toNumber` | string | Yes | Phone number to call in E.164 format \(e.g. +14155551234\) |
| `fromNumberId` | string | No | Phone number ID to use as caller ID. Must belong to the agent. If omitted, the agent's first assigned number is used. |
| `initialGreeting` | string | No | Optional greeting spoken when the recipient answers |
| `voice` | string | No | Voice ID override for this call \(defaults to the agent's configured voice\) |
| `systemPrompt` | string | No | When provided, uses a built-in LLM for the conversation instead of forwarding to your webhook |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Unique call identifier |
| `agentId` | string | Agent handling the call |
| `status` | string | Initial call status |
| `toNumber` | string | Destination phone number |
| `fromNumber` | string | Caller ID used for the call |
| `phoneNumberId` | string | ID of the phone number used as caller ID |
| `direction` | string | Call direction \(outbound\) |
| `startedAt` | string | ISO 8601 timestamp |

### `agentphone_create_contact`

Create a new contact in AgentPhone

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | AgentPhone API key |
| `phoneNumber` | string | Yes | Phone number in E.164 format \(e.g. +14155551234\) |
| `name` | string | Yes | Contact's full name |
| `email` | string | No | Contact's email address |
| `notes` | string | No | Freeform notes stored on the contact |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Contact ID |
| `phoneNumber` | string | Phone number in E.164 format |
| `name` | string | Contact name |
| `email` | string | Contact email address |
| `notes` | string | Freeform notes |
| `createdAt` | string | ISO 8601 creation timestamp |
| `updatedAt` | string | ISO 8601 update timestamp |

### `agentphone_create_number`

Provision a new SMS- and voice-enabled phone number

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | AgentPhone API key |
| `country` | string | No | Two-letter country code \(e.g. US, CA\). Defaults to US. |
| `areaCode` | string | No | Preferred area code \(US/CA only, e.g. "415"\). Best-effort — may be ignored if unavailable. |
| `agentId` | string | No | Optionally attach the number to an agent immediately |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Unique phone number ID |
| `phoneNumber` | string | Provisioned phone number in E.164 format |
| `country` | string | Two-letter country code |
| `status` | string | Number status \(e.g. active\) |
| `type` | string | Number type \(e.g. sms\) |
| `agentId` | string | Agent the number is attached to |
| `createdAt` | string | ISO 8601 timestamp when the number was created |

### `agentphone_delete_contact`

Delete a contact by ID

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | AgentPhone API key |
| `contactId` | string | Yes | Contact ID |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | ID of the deleted contact |
| `deleted` | boolean | Whether the contact was deleted successfully |

### `agentphone_get_call`

Fetch a call and its full transcript

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | AgentPhone API key |
| `callId` | string | Yes | ID of the call to retrieve |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Call ID |
| `agentId` | string | Agent that handled the call |
| `phoneNumberId` | string | Phone number ID |
| `phoneNumber` | string | Phone number used for the call |
| `fromNumber` | string | Caller phone number |
| `toNumber` | string | Recipient phone number |
| `direction` | string | inbound or outbound |
| `status` | string | Call status |
| `startedAt` | string | ISO 8601 timestamp |
| `endedAt` | string | ISO 8601 timestamp |
| `durationSeconds` | number | Call duration in seconds |
| `lastTranscriptSnippet` | string | Last transcript snippet |
| `recordingUrl` | string | Recording audio URL |
| `recordingAvailable` | boolean | Whether a recording is available |
| `transcripts` | array | Ordered transcript turns for the call |
| ↳ `id` | string | Transcript turn ID |
| ↳ `transcript` | string | User utterance |
| ↳ `confidence` | number | Speech recognition confidence |
| ↳ `response` | string | Agent response \(when available\) |
| ↳ `createdAt` | string | ISO 8601 timestamp |

### `agentphone_get_call_transcript`

Get the full ordered transcript for a call

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | AgentPhone API key |
| `callId` | string | Yes | ID of the call to retrieve the transcript for |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `callId` | string | Call ID |
| `transcript` | array | Ordered transcript turns for the call |
| ↳ `role` | string | Speaker role \(user or agent\) |
| ↳ `content` | string | Turn content |
| ↳ `createdAt` | string | ISO 8601 timestamp |

### `agentphone_get_contact`

Fetch a single contact by ID

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | AgentPhone API key |
| `contactId` | string | Yes | Contact ID |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Contact ID |
| `phoneNumber` | string | Phone number in E.164 format |
| `name` | string | Contact name |
| `email` | string | Contact email address |
| `notes` | string | Freeform notes |
| `createdAt` | string | ISO 8601 creation timestamp |
| `updatedAt` | string | ISO 8601 update timestamp |

### `agentphone_get_conversation`

Get a conversation along with its recent messages

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | AgentPhone API key |
| `conversationId` | string | Yes | Conversation ID |
| `messageLimit` | number | No | Number of recent messages to include \(default 50, max 100\) |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Conversation ID |
| `agentId` | string | Agent ID |
| `phoneNumberId` | string | Phone number ID |
| `phoneNumber` | string | Phone number |
| `participant` | string | External participant phone number |
| `lastMessageAt` | string | ISO 8601 timestamp |
| `messageCount` | number | Number of messages in the conversation |
| `metadata` | json | Custom metadata stored on the conversation |
| `createdAt` | string | ISO 8601 timestamp |
| `messages` | array | Recent messages in the conversation |
| ↳ `id` | string | Message ID |
| ↳ `body` | string | Message text |
| ↳ `fromNumber` | string | Sender phone number |
| ↳ `toNumber` | string | Recipient phone number |
| ↳ `direction` | string | inbound or outbound |
| ↳ `channel` | string | sms, mms, or imessage |
| ↳ `mediaUrl` | string | Attached media URL |
| ↳ `receivedAt` | string | ISO 8601 timestamp |

### `agentphone_get_conversation_messages`

Get paginated messages for a conversation

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | AgentPhone API key |
| `conversationId` | string | Yes | Conversation ID |
| `limit` | number | No | Number of messages to return \(default 50, max 200\) |
| `before` | string | No | Return messages received before this ISO 8601 timestamp |
| `after` | string | No | Return messages received after this ISO 8601 timestamp |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `data` | array | Messages in the conversation |
| ↳ `id` | string | Message ID |
| ↳ `body` | string | Message text |
| ↳ `fromNumber` | string | Sender phone number |
| ↳ `toNumber` | string | Recipient phone number |
| ↳ `direction` | string | inbound or outbound |
| ↳ `channel` | string | sms, mms, or imessage |
| ↳ `mediaUrl` | string | Attached media URL |
| ↳ `receivedAt` | string | ISO 8601 timestamp |
| `hasMore` | boolean | Whether more messages are available |
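
The `hasMore` flag and `before` cursor above support walking a long thread page by page. A sketch of that loop — `fetch_page` is a hypothetical callable wrapping the Sim tool call, and the assumption that pages arrive newest-first (so the oldest `receivedAt` on a page becomes the next cursor) is ours, not stated by the API docs:

```python
def fetch_all_messages(fetch_page, page_size=200):
    """Collect every message in a conversation by following the
    documented pagination fields: pass `before` to page backwards
    and stop when `hasMore` is false. `fetch_page(limit, before)`
    must return {"data": [...], "hasMore": bool}."""
    messages, before = [], None
    while True:
        page = fetch_page(limit=page_size, before=before)
        messages.extend(page["data"])
        if not page["hasMore"] or not page["data"]:
            return messages
        # Assumed newest-first ordering: the last item is the oldest.
        before = page["data"][-1]["receivedAt"]
```

The same pattern applies to `agentphone_get_number_messages` and the other list operations that return `hasMore`.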

### `agentphone_get_number_messages`

Fetch messages received on a specific phone number

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | AgentPhone API key |
| `numberId` | string | Yes | ID of the phone number |
| `limit` | number | No | Number of messages to return \(default 50, max 200\) |
| `before` | string | No | Return messages received before this ISO 8601 timestamp |
| `after` | string | No | Return messages received after this ISO 8601 timestamp |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `data` | array | Messages received on the number |
| ↳ `from_` | string | Sender phone number \(E.164\) |
| ↳ `id` | string | Message ID |
| ↳ `to` | string | Recipient phone number \(E.164\) |
| ↳ `body` | string | Message text |
| ↳ `direction` | string | inbound or outbound |
| ↳ `channel` | string | Channel \(sms, mms, etc.\) |
| ↳ `receivedAt` | string | ISO 8601 timestamp |
| `hasMore` | boolean | Whether more messages are available |

### `agentphone_get_usage`

Retrieve current usage statistics for the AgentPhone account

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | AgentPhone API key |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `plan` | json | Plan name and limits \(name, limits: numbers/messagesPerMonth/voiceMinutesPerMonth/maxCallDurationMinutes/concurrentCalls\) |
| `numbers` | json | Phone number usage \(used, limit, remaining\) |
| `stats` | json | Usage stats: totalMessages, messagesLast24h/7d/30d, totalCalls, callsLast24h/7d/30d, totalWebhookDeliveries, successfulWebhookDeliveries, failedWebhookDeliveries |
| `periodStart` | string | Billing period start |
| `periodEnd` | string | Billing period end |
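
A workflow can inspect this payload before provisioning a number or sending a batch of messages. A sketch using only the fields named in the table above — treating `messagesLast30d` as a proxy for the billing-period count is an assumption, since the exact period accounting is not specified here:

```python
def usage_headroom(usage):
    """Summarize remaining capacity from an `agentphone_get_usage`
    payload so a workflow can stop before hitting plan limits."""
    plan_limits = usage["plan"]["limits"]
    stats = usage["stats"]
    return {
        "numbers_remaining": usage["numbers"]["remaining"],
        # Assumption: last-30-day count approximates the billing period.
        "messages_remaining": max(
            0, plan_limits["messagesPerMonth"] - stats["messagesLast30d"]
        ),
    }
```

A condition block can then branch on `numbers_remaining > 0` before calling `agentphone_create_number`.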

### `agentphone_get_usage_daily`

Get a daily breakdown of usage (messages, calls, webhooks) for the last N days

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | AgentPhone API key |
| `days` | number | No | Number of days to return \(1-365, default 30\) |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `data` | array | Daily usage entries |
| ↳ `date` | string | Day \(YYYY-MM-DD\) |
| ↳ `messages` | number | Messages that day |
| ↳ `calls` | number | Calls that day |
| ↳ `webhooks` | number | Webhook deliveries that day |
| `days` | number | Number of days returned |

### `agentphone_get_usage_monthly`

Get monthly usage aggregation (messages, calls, webhooks) for the last N months

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | AgentPhone API key |
| `months` | number | No | Number of months to return \(1-24, default 6\) |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `data` | array | Monthly usage entries |
| ↳ `month` | string | Month \(YYYY-MM\) |
| ↳ `messages` | number | Messages that month |
| ↳ `calls` | number | Calls that month |
| ↳ `webhooks` | number | Webhook deliveries that month |
| `months` | number | Number of months returned |

### `agentphone_list_calls`

List voice calls for this AgentPhone account

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | AgentPhone API key |
| `limit` | number | No | Number of results to return \(default 20, max 100\) |
| `offset` | number | No | Number of results to skip \(min 0\) |
| `status` | string | No | Filter by status \(completed, in-progress, failed\) |
| `direction` | string | No | Filter by direction \(inbound, outbound\) |
| `type` | string | No | Filter by call type \(pstn, web\) |
| `search` | string | No | Search by phone number \(matches fromNumber or toNumber\) |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `data` | array | Calls |
| ↳ `id` | string | Call ID |
| ↳ `agentId` | string | Agent that handled the call |
| ↳ `phoneNumberId` | string | Phone number ID used for the call |
| ↳ `phoneNumber` | string | Phone number used for the call |
| ↳ `fromNumber` | string | Caller phone number |
| ↳ `toNumber` | string | Recipient phone number |
| ↳ `direction` | string | inbound or outbound |
| ↳ `status` | string | Call status |
| ↳ `startedAt` | string | ISO 8601 timestamp |
| ↳ `endedAt` | string | ISO 8601 timestamp |
| ↳ `durationSeconds` | number | Call duration in seconds |
| ↳ `lastTranscriptSnippet` | string | Last transcript snippet |
| ↳ `recordingUrl` | string | Recording audio URL |
| ↳ `recordingAvailable` | boolean | Whether a recording is available |
| `hasMore` | boolean | Whether more results are available |
| `total` | number | Total number of matching calls |

### `agentphone_list_contacts`

List contacts for this AgentPhone account

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | AgentPhone API key |
| `search` | string | No | Filter by name or phone number \(case-insensitive contains\) |
| `limit` | number | No | Number of results to return \(default 50\) |
| `offset` | number | No | Number of results to skip \(min 0\) |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `data` | array | Contacts |
| ↳ `id` | string | Contact ID |
| ↳ `phoneNumber` | string | Phone number in E.164 format |
| ↳ `name` | string | Contact name |
| ↳ `email` | string | Contact email address |
| ↳ `notes` | string | Freeform notes |
| ↳ `createdAt` | string | ISO 8601 creation timestamp |
| ↳ `updatedAt` | string | ISO 8601 update timestamp |
| `hasMore` | boolean | Whether more results are available |
| `total` | number | Total number of contacts |

### `agentphone_list_conversations`

List conversations (message threads) for this AgentPhone account

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | AgentPhone API key |
| `limit` | number | No | Number of results to return \(default 20, max 100\) |
| `offset` | number | No | Number of results to skip \(min 0\) |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `data` | array | Conversations |
| ↳ `id` | string | Conversation ID |
| ↳ `agentId` | string | Agent ID |
| ↳ `phoneNumberId` | string | Phone number ID |
| ↳ `phoneNumber` | string | Phone number |
| ↳ `participant` | string | External participant phone number |
| ↳ `lastMessageAt` | string | ISO 8601 timestamp |
| ↳ `lastMessagePreview` | string | Last message preview |
| ↳ `messageCount` | number | Number of messages in the conversation |
| ↳ `metadata` | json | Custom metadata stored on the conversation |
| ↳ `createdAt` | string | ISO 8601 timestamp |
| `hasMore` | boolean | Whether more results are available |
| `total` | number | Total number of conversations |

### `agentphone_list_numbers`

List all phone numbers provisioned for this AgentPhone account

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | AgentPhone API key |
| `limit` | number | No | Number of results to return \(default 20, max 100\) |
| `offset` | number | No | Number of results to skip \(min 0\) |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `data` | array | Phone numbers |
| ↳ `id` | string | Phone number ID |
| ↳ `phoneNumber` | string | Phone number in E.164 format |
| ↳ `country` | string | Two-letter country code |
| ↳ `status` | string | Number status |
| ↳ `type` | string | Number type \(e.g. sms\) |
| ↳ `agentId` | string | Attached agent ID |
| ↳ `createdAt` | string | ISO 8601 creation timestamp |
| `hasMore` | boolean | Whether more results are available |
| `total` | number | Total number of phone numbers |

### `agentphone_react_to_message`

Send an iMessage tapback reaction to a message (iMessage only)

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | AgentPhone API key |
| `messageId` | string | Yes | ID of the message to react to |
| `reaction` | string | Yes | Reaction type: love, like, dislike, laugh, emphasize, or question |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Reaction ID |
| `reactionType` | string | Reaction type applied |
| `messageId` | string | ID of the message that was reacted to |
| `channel` | string | Channel \(imessage\) |
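
Since `reaction` only accepts the six tapback types listed above and the operation is iMessage-only, a workflow that maps free-form agent output to a reaction may want to validate first. A small sketch grounded in the two constraints stated in the tables:

```python
TAPBACKS = {"love", "like", "dislike", "laugh", "emphasize", "question"}

def can_react(message, reaction):
    """Return True only when `reaction` is one of the documented
    tapback types and the target message arrived over iMessage
    (the `channel` field from the message tables above)."""
    return reaction in TAPBACKS and message.get("channel") == "imessage"
```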

### `agentphone_release_number`

Release (delete) a phone number. This action is irreversible.

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | AgentPhone API key |
| `numberId` | string | Yes | ID of the phone number to release |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | ID of the released phone number |
| `released` | boolean | Whether the number was released successfully |

### `agentphone_send_message`

Send an outbound SMS or iMessage from an AgentPhone agent

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | AgentPhone API key |
| `agentId` | string | Yes | Agent sending the message |
| `toNumber` | string | Yes | Recipient phone number in E.164 format \(e.g. +14155551234\) |
| `body` | string | Yes | Message text to send |
| `mediaUrl` | string | No | Optional URL of an image, video, or file to attach |
| `numberId` | string | No | Phone number ID to send from. If omitted, the agent's first assigned number is used. |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Message ID |
| `status` | string | Delivery status |
| `channel` | string | sms, mms, or imessage |
| `fromNumber` | string | Sender phone number |
| `toNumber` | string | Recipient phone number |
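
Both `agentphone_send_message` and `agentphone_create_call` require `toNumber` in E.164 format (e.g. +14155551234). E.164 is a leading `+` followed by up to 15 digits with no leading zero, so a workflow can pre-validate before spending a message or call:

```python
import re

# E.164: "+" then a country-code digit 1-9, then up to 14 more digits.
E164 = re.compile(r"\+[1-9]\d{1,14}")

def is_e164(number):
    """Cheap validity check for the `toNumber` inputs above."""
    return bool(E164.fullmatch(number))
```

Formatted numbers like "(415) 555-1234" should be normalized to +14155551234 before the tool call; this check only verifies shape, not that the number is reachable.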
|
||||
|
||||
### `agentphone_update_contact`

Update a contact

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | AgentPhone API key |
| `contactId` | string | Yes | Contact ID |
| `phoneNumber` | string | No | New phone number in E.164 format |
| `name` | string | No | New contact name |
| `email` | string | No | New email address |
| `notes` | string | No | New freeform notes |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Contact ID |
| `phoneNumber` | string | Phone number in E.164 format |
| `name` | string | Contact name |
| `email` | string | Contact email address |
| `notes` | string | Freeform notes |
| `createdAt` | string | ISO 8601 creation timestamp |
| `updatedAt` | string | ISO 8601 update timestamp |

### `agentphone_update_conversation`

Update conversation metadata (stored state). Pass null to clear existing metadata.

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | AgentPhone API key |
| `conversationId` | string | Yes | Conversation ID |
| `metadata` | json | No | Custom key-value metadata to store on the conversation. Pass null to clear existing metadata. |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Conversation ID |
| `agentId` | string | Agent ID |
| `phoneNumberId` | string | Phone number ID |
| `phoneNumber` | string | Phone number |
| `participant` | string | External participant phone number |
| `lastMessageAt` | string | ISO 8601 timestamp |
| `messageCount` | number | Number of messages |
| `metadata` | json | Custom metadata stored on the conversation |
| `createdAt` | string | ISO 8601 timestamp |
| `messages` | array | Messages in the conversation |
| ↳ `id` | string | Message ID |
| ↳ `body` | string | Message body |
| ↳ `fromNumber` | string | Sender phone number |
| ↳ `toNumber` | string | Recipient phone number |
| ↳ `direction` | string | inbound or outbound |
| ↳ `channel` | string | Channel \(sms, mms, etc.\) |
| ↳ `mediaUrl` | string | Media URL if any |
| ↳ `receivedAt` | string | ISO 8601 timestamp |

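The three-way behavior of the `metadata` field (omitted, null, or an object) can be modeled as below. This is a sketch of the documented semantics, not AgentPhone code; `apply_metadata_update` is a hypothetical name, and treating a supplied object as a whole-value replace rather than a key-wise merge is an assumption.

```python
_OMITTED = object()  # sentinel meaning "field not present in the request"

def apply_metadata_update(current, metadata=_OMITTED):
    """Model `agentphone_update_conversation`'s `metadata` field.

    Omitted -> stored metadata is left unchanged; null (None) -> cleared;
    an object -> stored (whole-value replace is assumed here, not a merge).
    """
    if metadata is _OMITTED:
        return current
    if metadata is None:
        return None
    return dict(metadata)
```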
@@ -38,7 +38,7 @@ Integrate Ashby into the workflow. Manage candidates (list, get, create, update,

### `ashby_add_candidate_tag`

Adds a tag to a candidate in Ashby.
Adds a tag to a candidate in Ashby and returns the updated candidate.

#### Input

@@ -52,7 +52,37 @@ Adds a tag to a candidate in Ashby.

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Whether the tag was successfully added |
| `candidates` | json | List of candidates with rich fields \(id, name, primaryEmailAddress, primaryPhoneNumber, emailAddresses\[\], phoneNumbers\[\], socialLinks\[\], linkedInUrl, githubUrl, profileUrl, position, company, school, timezone, location with locationComponents\[\], tags\[\], applicationIds\[\], customFields\[\], resumeFileHandle, fileHandles\[\], source with sourceType, creditedToUser, fraudStatus, createdAt, updatedAt\) |
| `jobs` | json | List of jobs \(id, title, confidential, status, employmentType, locationId, departmentId, defaultInterviewPlanId, interviewPlanIds\[\], customFields\[\], jobPostingIds\[\], customRequisitionId, brandId, hiringTeam\[\], author, createdAt, updatedAt, openedAt, closedAt, location with address, openings\[\] with latestVersion, compensation with compensationTiers\[\]\) |
| `applications` | json | List of applications \(id, status, customFields\[\], candidate summary, currentInterviewStage, source with sourceType, archiveReason with customFields\[\], archivedAt, job summary, creditedToUser, hiringTeam\[\], appliedViaJobPostingId, submitterClientIp, submitterUserAgent, createdAt, updatedAt\) |
| `notes` | json | List of notes \(id, content, author, isPrivate, createdAt\) |
| `offers` | json | List of offers \(id, decidedAt, applicationId, acceptanceStatus, offerStatus, latestVersion with id/startDate/salary/createdAt/openingId/customFields\[\]/fileHandles\[\]/author/approvalStatus\) |
| `archiveReasons` | json | List of archive reasons \(id, text, reasonType \[RejectedByCandidate/RejectedByOrg/Other\], isArchived\) |
| `sources` | json | List of sources \(id, title, isArchived, sourceType \{id, title, isArchived\}\) |
| `customFields` | json | List of custom field definitions \(id, title, isPrivate, fieldType, objectType, isArchived, isRequired, selectableValues\[\] \{label, value, isArchived\}\) |
| `departments` | json | List of departments \(id, name, externalName, isArchived, parentId, createdAt, updatedAt\) |
| `locations` | json | List of locations \(id, name, externalName, isArchived, isRemote, workplaceType, parentLocationId, type, address with addressCountry/Region/Locality/postalCode/streetAddress\) |
| `jobPostings` | json | List of job postings \(id, title, jobId, departmentName, teamName, locationName, locationIds, workplaceType, employmentType, isListed, publishedDate, applicationDeadline, externalLink, applyLink, compensationTierSummary, shouldDisplayCompensationOnJobBoard, updatedAt\) |
| `openings` | json | List of openings \(id, openedAt, closedAt, isArchived, archivedAt, closeReasonId, openingState, latestVersion with identifier/description/authorId/createdAt/teamId/jobIds\[\]/targetHireDate/targetStartDate/isBackfill/employmentType/locationIds\[\]/hiringTeam\[\]/customFields\[\]\) |
| `users` | json | List of users \(id, firstName, lastName, email, globalRole, isEnabled, updatedAt, managerId\) |
| `interviewSchedules` | json | List of interview schedules \(id, applicationId, interviewStageId, interviewEvents\[\] with interviewerUserIds/startTime/endTime/feedbackLink/location/meetingLink/hasSubmittedFeedback, status, scheduledBy, createdAt, updatedAt\) |
| `tags` | json | List of candidate tags \(id, title, isArchived\) |
| `id` | string | Resource UUID |
| `name` | string | Resource name |
| `title` | string | Job title or job posting title |
| `status` | string | Status |
| `candidate` | json | Candidate details \(id, name, primaryEmailAddress, primaryPhoneNumber, emailAddresses\[\], phoneNumbers\[\], socialLinks\[\], customFields\[\], source, creditedToUser, createdAt, updatedAt\) |
| `job` | json | Job details \(id, title, status, employmentType, locationId, departmentId, hiringTeam\[\], author, location, openings\[\], compensation, createdAt, updatedAt\) |
| `application` | json | Application details \(id, status, customFields\[\], candidate, currentInterviewStage, source, archiveReason, job, hiringTeam\[\], createdAt, updatedAt\) |
| `offer` | json | Offer details \(id, decidedAt, applicationId, acceptanceStatus, offerStatus, latestVersion\) |
| `jobPosting` | json | Job posting details \(id, title, descriptionPlain, descriptionHtml, descriptionSocial, descriptionParts, departmentName, teamName, teamNameHierarchy\[\], jobId, locationName, locationIds, linkedData, address, isRemote, workplaceType, employmentType, isListed, publishedDate, applicationDeadline, externalLink, applyLink, compensation, updatedAt\) |
| `content` | string | Note content |
| `author` | json | Note author \(id, firstName, lastName, email, globalRole, isEnabled\) |
| `isPrivate` | boolean | Whether the note is private |
| `createdAt` | string | ISO 8601 creation timestamp |
| `moreDataAvailable` | boolean | Whether more pages exist |
| `nextCursor` | string | Pagination cursor for next page |
| `syncToken` | string | Sync token for incremental updates |

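The Ashby output tables above all end with the same pagination fields, `moreDataAvailable` and `nextCursor`, which drive a standard cursor loop. The sketch below assumes those field names only; `fetch_page` is a caller-supplied stand-in for whatever mechanism actually executes the tool, and the `candidates` key is used just as a representative list payload.

```python
def fetch_all(fetch_page):
    """Drain a cursor-paginated Ashby-style result.

    `fetch_page(cursor)` must return a dict carrying the list payload plus
    `moreDataAvailable` (bool) and `nextCursor` (str), mirroring the output
    tables above. Pass cursor=None for the first page.
    """
    items, cursor = [], None
    while True:
        page = fetch_page(cursor)
        items.extend(page.get("candidates", []))
        if not page.get("moreDataAvailable"):
            return items
        cursor = page["nextCursor"]
```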
### `ashby_change_application_stage`

@@ -71,8 +101,37 @@ Moves an application to a different interview stage. Requires an archive reason

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `applicationId` | string | Application UUID |
| `stageId` | string | New interview stage UUID |
| `candidates` | json | List of candidates with rich fields \(id, name, primaryEmailAddress, primaryPhoneNumber, emailAddresses\[\], phoneNumbers\[\], socialLinks\[\], linkedInUrl, githubUrl, profileUrl, position, company, school, timezone, location with locationComponents\[\], tags\[\], applicationIds\[\], customFields\[\], resumeFileHandle, fileHandles\[\], source with sourceType, creditedToUser, fraudStatus, createdAt, updatedAt\) |
| `jobs` | json | List of jobs \(id, title, confidential, status, employmentType, locationId, departmentId, defaultInterviewPlanId, interviewPlanIds\[\], customFields\[\], jobPostingIds\[\], customRequisitionId, brandId, hiringTeam\[\], author, createdAt, updatedAt, openedAt, closedAt, location with address, openings\[\] with latestVersion, compensation with compensationTiers\[\]\) |
| `applications` | json | List of applications \(id, status, customFields\[\], candidate summary, currentInterviewStage, source with sourceType, archiveReason with customFields\[\], archivedAt, job summary, creditedToUser, hiringTeam\[\], appliedViaJobPostingId, submitterClientIp, submitterUserAgent, createdAt, updatedAt\) |
| `notes` | json | List of notes \(id, content, author, isPrivate, createdAt\) |
| `offers` | json | List of offers \(id, decidedAt, applicationId, acceptanceStatus, offerStatus, latestVersion with id/startDate/salary/createdAt/openingId/customFields\[\]/fileHandles\[\]/author/approvalStatus\) |
| `archiveReasons` | json | List of archive reasons \(id, text, reasonType \[RejectedByCandidate/RejectedByOrg/Other\], isArchived\) |
| `sources` | json | List of sources \(id, title, isArchived, sourceType \{id, title, isArchived\}\) |
| `customFields` | json | List of custom field definitions \(id, title, isPrivate, fieldType, objectType, isArchived, isRequired, selectableValues\[\] \{label, value, isArchived\}\) |
| `departments` | json | List of departments \(id, name, externalName, isArchived, parentId, createdAt, updatedAt\) |
| `locations` | json | List of locations \(id, name, externalName, isArchived, isRemote, workplaceType, parentLocationId, type, address with addressCountry/Region/Locality/postalCode/streetAddress\) |
| `jobPostings` | json | List of job postings \(id, title, jobId, departmentName, teamName, locationName, locationIds, workplaceType, employmentType, isListed, publishedDate, applicationDeadline, externalLink, applyLink, compensationTierSummary, shouldDisplayCompensationOnJobBoard, updatedAt\) |
| `openings` | json | List of openings \(id, openedAt, closedAt, isArchived, archivedAt, closeReasonId, openingState, latestVersion with identifier/description/authorId/createdAt/teamId/jobIds\[\]/targetHireDate/targetStartDate/isBackfill/employmentType/locationIds\[\]/hiringTeam\[\]/customFields\[\]\) |
| `users` | json | List of users \(id, firstName, lastName, email, globalRole, isEnabled, updatedAt, managerId\) |
| `interviewSchedules` | json | List of interview schedules \(id, applicationId, interviewStageId, interviewEvents\[\] with interviewerUserIds/startTime/endTime/feedbackLink/location/meetingLink/hasSubmittedFeedback, status, scheduledBy, createdAt, updatedAt\) |
| `tags` | json | List of candidate tags \(id, title, isArchived\) |
| `id` | string | Resource UUID |
| `name` | string | Resource name |
| `title` | string | Job title or job posting title |
| `status` | string | Status |
| `candidate` | json | Candidate details \(id, name, primaryEmailAddress, primaryPhoneNumber, emailAddresses\[\], phoneNumbers\[\], socialLinks\[\], customFields\[\], source, creditedToUser, createdAt, updatedAt\) |
| `job` | json | Job details \(id, title, status, employmentType, locationId, departmentId, hiringTeam\[\], author, location, openings\[\], compensation, createdAt, updatedAt\) |
| `application` | json | Application details \(id, status, customFields\[\], candidate, currentInterviewStage, source, archiveReason, job, hiringTeam\[\], createdAt, updatedAt\) |
| `offer` | json | Offer details \(id, decidedAt, applicationId, acceptanceStatus, offerStatus, latestVersion\) |
| `jobPosting` | json | Job posting details \(id, title, descriptionPlain, descriptionHtml, descriptionSocial, descriptionParts, departmentName, teamName, teamNameHierarchy\[\], jobId, locationName, locationIds, linkedData, address, isRemote, workplaceType, employmentType, isListed, publishedDate, applicationDeadline, externalLink, applyLink, compensation, updatedAt\) |
| `content` | string | Note content |
| `author` | json | Note author \(id, firstName, lastName, email, globalRole, isEnabled\) |
| `isPrivate` | boolean | Whether the note is private |
| `createdAt` | string | ISO 8601 creation timestamp |
| `moreDataAvailable` | boolean | Whether more pages exist |
| `nextCursor` | string | Pagination cursor for next page |
| `syncToken` | string | Sync token for incremental updates |

### `ashby_create_application`

@@ -95,7 +154,37 @@ Creates a new application for a candidate on a job. Optionally specify interview

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `applicationId` | string | Created application UUID |
| `candidates` | json | List of candidates with rich fields \(id, name, primaryEmailAddress, primaryPhoneNumber, emailAddresses\[\], phoneNumbers\[\], socialLinks\[\], linkedInUrl, githubUrl, profileUrl, position, company, school, timezone, location with locationComponents\[\], tags\[\], applicationIds\[\], customFields\[\], resumeFileHandle, fileHandles\[\], source with sourceType, creditedToUser, fraudStatus, createdAt, updatedAt\) |
| `jobs` | json | List of jobs \(id, title, confidential, status, employmentType, locationId, departmentId, defaultInterviewPlanId, interviewPlanIds\[\], customFields\[\], jobPostingIds\[\], customRequisitionId, brandId, hiringTeam\[\], author, createdAt, updatedAt, openedAt, closedAt, location with address, openings\[\] with latestVersion, compensation with compensationTiers\[\]\) |
| `applications` | json | List of applications \(id, status, customFields\[\], candidate summary, currentInterviewStage, source with sourceType, archiveReason with customFields\[\], archivedAt, job summary, creditedToUser, hiringTeam\[\], appliedViaJobPostingId, submitterClientIp, submitterUserAgent, createdAt, updatedAt\) |
| `notes` | json | List of notes \(id, content, author, isPrivate, createdAt\) |
| `offers` | json | List of offers \(id, decidedAt, applicationId, acceptanceStatus, offerStatus, latestVersion with id/startDate/salary/createdAt/openingId/customFields\[\]/fileHandles\[\]/author/approvalStatus\) |
| `archiveReasons` | json | List of archive reasons \(id, text, reasonType \[RejectedByCandidate/RejectedByOrg/Other\], isArchived\) |
| `sources` | json | List of sources \(id, title, isArchived, sourceType \{id, title, isArchived\}\) |
| `customFields` | json | List of custom field definitions \(id, title, isPrivate, fieldType, objectType, isArchived, isRequired, selectableValues\[\] \{label, value, isArchived\}\) |
| `departments` | json | List of departments \(id, name, externalName, isArchived, parentId, createdAt, updatedAt\) |
| `locations` | json | List of locations \(id, name, externalName, isArchived, isRemote, workplaceType, parentLocationId, type, address with addressCountry/Region/Locality/postalCode/streetAddress\) |
| `jobPostings` | json | List of job postings \(id, title, jobId, departmentName, teamName, locationName, locationIds, workplaceType, employmentType, isListed, publishedDate, applicationDeadline, externalLink, applyLink, compensationTierSummary, shouldDisplayCompensationOnJobBoard, updatedAt\) |
| `openings` | json | List of openings \(id, openedAt, closedAt, isArchived, archivedAt, closeReasonId, openingState, latestVersion with identifier/description/authorId/createdAt/teamId/jobIds\[\]/targetHireDate/targetStartDate/isBackfill/employmentType/locationIds\[\]/hiringTeam\[\]/customFields\[\]\) |
| `users` | json | List of users \(id, firstName, lastName, email, globalRole, isEnabled, updatedAt, managerId\) |
| `interviewSchedules` | json | List of interview schedules \(id, applicationId, interviewStageId, interviewEvents\[\] with interviewerUserIds/startTime/endTime/feedbackLink/location/meetingLink/hasSubmittedFeedback, status, scheduledBy, createdAt, updatedAt\) |
| `tags` | json | List of candidate tags \(id, title, isArchived\) |
| `id` | string | Resource UUID |
| `name` | string | Resource name |
| `title` | string | Job title or job posting title |
| `status` | string | Status |
| `candidate` | json | Candidate details \(id, name, primaryEmailAddress, primaryPhoneNumber, emailAddresses\[\], phoneNumbers\[\], socialLinks\[\], customFields\[\], source, creditedToUser, createdAt, updatedAt\) |
| `job` | json | Job details \(id, title, status, employmentType, locationId, departmentId, hiringTeam\[\], author, location, openings\[\], compensation, createdAt, updatedAt\) |
| `application` | json | Application details \(id, status, customFields\[\], candidate, currentInterviewStage, source, archiveReason, job, hiringTeam\[\], createdAt, updatedAt\) |
| `offer` | json | Offer details \(id, decidedAt, applicationId, acceptanceStatus, offerStatus, latestVersion\) |
| `jobPosting` | json | Job posting details \(id, title, descriptionPlain, descriptionHtml, descriptionSocial, descriptionParts, departmentName, teamName, teamNameHierarchy\[\], jobId, locationName, locationIds, linkedData, address, isRemote, workplaceType, employmentType, isListed, publishedDate, applicationDeadline, externalLink, applyLink, compensation, updatedAt\) |
| `content` | string | Note content |
| `author` | json | Note author \(id, firstName, lastName, email, globalRole, isEnabled\) |
| `isPrivate` | boolean | Whether the note is private |
| `createdAt` | string | ISO 8601 creation timestamp |
| `moreDataAvailable` | boolean | Whether more pages exist |
| `nextCursor` | string | Pagination cursor for next page |
| `syncToken` | string | Sync token for incremental updates |

### `ashby_create_candidate`

@@ -107,7 +196,7 @@ Creates a new candidate record in Ashby.

| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `name` | string | Yes | The candidate full name |
| `email` | string | Yes | Primary email address for the candidate |
| `email` | string | No | Primary email address for the candidate |
| `phoneNumber` | string | No | Primary phone number for the candidate |
| `linkedInUrl` | string | No | LinkedIn profile URL |
| `githubUrl` | string | No | GitHub profile URL |

@@ -117,17 +206,37 @@ Creates a new candidate record in Ashby.

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Created candidate UUID |
| `name` | string | Full name |
| `primaryEmailAddress` | object | Primary email contact info |
| ↳ `value` | string | Email address |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary email |
| `primaryPhoneNumber` | object | Primary phone contact info |
| ↳ `value` | string | Phone number |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary phone |
| `candidates` | json | List of candidates with rich fields \(id, name, primaryEmailAddress, primaryPhoneNumber, emailAddresses\[\], phoneNumbers\[\], socialLinks\[\], linkedInUrl, githubUrl, profileUrl, position, company, school, timezone, location with locationComponents\[\], tags\[\], applicationIds\[\], customFields\[\], resumeFileHandle, fileHandles\[\], source with sourceType, creditedToUser, fraudStatus, createdAt, updatedAt\) |
| `jobs` | json | List of jobs \(id, title, confidential, status, employmentType, locationId, departmentId, defaultInterviewPlanId, interviewPlanIds\[\], customFields\[\], jobPostingIds\[\], customRequisitionId, brandId, hiringTeam\[\], author, createdAt, updatedAt, openedAt, closedAt, location with address, openings\[\] with latestVersion, compensation with compensationTiers\[\]\) |
| `applications` | json | List of applications \(id, status, customFields\[\], candidate summary, currentInterviewStage, source with sourceType, archiveReason with customFields\[\], archivedAt, job summary, creditedToUser, hiringTeam\[\], appliedViaJobPostingId, submitterClientIp, submitterUserAgent, createdAt, updatedAt\) |
| `notes` | json | List of notes \(id, content, author, isPrivate, createdAt\) |
| `offers` | json | List of offers \(id, decidedAt, applicationId, acceptanceStatus, offerStatus, latestVersion with id/startDate/salary/createdAt/openingId/customFields\[\]/fileHandles\[\]/author/approvalStatus\) |
| `archiveReasons` | json | List of archive reasons \(id, text, reasonType \[RejectedByCandidate/RejectedByOrg/Other\], isArchived\) |
| `sources` | json | List of sources \(id, title, isArchived, sourceType \{id, title, isArchived\}\) |
| `customFields` | json | List of custom field definitions \(id, title, isPrivate, fieldType, objectType, isArchived, isRequired, selectableValues\[\] \{label, value, isArchived\}\) |
| `departments` | json | List of departments \(id, name, externalName, isArchived, parentId, createdAt, updatedAt\) |
| `locations` | json | List of locations \(id, name, externalName, isArchived, isRemote, workplaceType, parentLocationId, type, address with addressCountry/Region/Locality/postalCode/streetAddress\) |
| `jobPostings` | json | List of job postings \(id, title, jobId, departmentName, teamName, locationName, locationIds, workplaceType, employmentType, isListed, publishedDate, applicationDeadline, externalLink, applyLink, compensationTierSummary, shouldDisplayCompensationOnJobBoard, updatedAt\) |
| `openings` | json | List of openings \(id, openedAt, closedAt, isArchived, archivedAt, closeReasonId, openingState, latestVersion with identifier/description/authorId/createdAt/teamId/jobIds\[\]/targetHireDate/targetStartDate/isBackfill/employmentType/locationIds\[\]/hiringTeam\[\]/customFields\[\]\) |
| `users` | json | List of users \(id, firstName, lastName, email, globalRole, isEnabled, updatedAt, managerId\) |
| `interviewSchedules` | json | List of interview schedules \(id, applicationId, interviewStageId, interviewEvents\[\] with interviewerUserIds/startTime/endTime/feedbackLink/location/meetingLink/hasSubmittedFeedback, status, scheduledBy, createdAt, updatedAt\) |
| `tags` | json | List of candidate tags \(id, title, isArchived\) |
| `id` | string | Resource UUID |
| `name` | string | Resource name |
| `title` | string | Job title or job posting title |
| `status` | string | Status |
| `candidate` | json | Candidate details \(id, name, primaryEmailAddress, primaryPhoneNumber, emailAddresses\[\], phoneNumbers\[\], socialLinks\[\], customFields\[\], source, creditedToUser, createdAt, updatedAt\) |
| `job` | json | Job details \(id, title, status, employmentType, locationId, departmentId, hiringTeam\[\], author, location, openings\[\], compensation, createdAt, updatedAt\) |
| `application` | json | Application details \(id, status, customFields\[\], candidate, currentInterviewStage, source, archiveReason, job, hiringTeam\[\], createdAt, updatedAt\) |
| `offer` | json | Offer details \(id, decidedAt, applicationId, acceptanceStatus, offerStatus, latestVersion\) |
| `jobPosting` | json | Job posting details \(id, title, descriptionPlain, descriptionHtml, descriptionSocial, descriptionParts, departmentName, teamName, teamNameHierarchy\[\], jobId, locationName, locationIds, linkedData, address, isRemote, workplaceType, employmentType, isListed, publishedDate, applicationDeadline, externalLink, applyLink, compensation, updatedAt\) |
| `content` | string | Note content |
| `author` | json | Note author \(id, firstName, lastName, email, globalRole, isEnabled\) |
| `isPrivate` | boolean | Whether the note is private |
| `createdAt` | string | ISO 8601 creation timestamp |
| `moreDataAvailable` | boolean | Whether more pages exist |
| `nextCursor` | string | Pagination cursor for next page |
| `syncToken` | string | Sync token for incremental updates |

### `ashby_create_note`

@@ -147,7 +256,15 @@ Creates a note on a candidate in Ashby. Supports plain text and HTML content (bo

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `noteId` | string | Created note UUID |
| `id` | string | Created note UUID |
| `createdAt` | string | ISO 8601 creation timestamp |
| `isPrivate` | boolean | Whether the note is private |
| `content` | string | Note content |
| `author` | object | Author of the note |
| ↳ `id` | string | Author user UUID |
| ↳ `firstName` | string | Author first name |
| ↳ `lastName` | string | Author last name |
| ↳ `email` | string | Author email |

### `ashby_get_application`

@@ -164,28 +281,37 @@ Retrieves full details about a single application by its ID.

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Application UUID |
| `status` | string | Application status \(Active, Hired, Archived, Lead\) |
| `candidate` | object | Associated candidate |
| ↳ `id` | string | Candidate UUID |
| ↳ `name` | string | Candidate name |
| `job` | object | Associated job |
| ↳ `id` | string | Job UUID |
| ↳ `title` | string | Job title |
| `currentInterviewStage` | object | Current interview stage |
| ↳ `id` | string | Stage UUID |
| ↳ `title` | string | Stage title |
| ↳ `type` | string | Stage type |
| `source` | object | Application source |
| ↳ `id` | string | Source UUID |
| ↳ `title` | string | Source title |
| `archiveReason` | object | Reason for archival |
| ↳ `id` | string | Reason UUID |
| ↳ `text` | string | Reason text |
| ↳ `reasonType` | string | Reason type |
| `archivedAt` | string | ISO 8601 archive timestamp |
| `candidates` | json | List of candidates with rich fields \(id, name, primaryEmailAddress, primaryPhoneNumber, emailAddresses\[\], phoneNumbers\[\], socialLinks\[\], linkedInUrl, githubUrl, profileUrl, position, company, school, timezone, location with locationComponents\[\], tags\[\], applicationIds\[\], customFields\[\], resumeFileHandle, fileHandles\[\], source with sourceType, creditedToUser, fraudStatus, createdAt, updatedAt\) |
| `jobs` | json | List of jobs \(id, title, confidential, status, employmentType, locationId, departmentId, defaultInterviewPlanId, interviewPlanIds\[\], customFields\[\], jobPostingIds\[\], customRequisitionId, brandId, hiringTeam\[\], author, createdAt, updatedAt, openedAt, closedAt, location with address, openings\[\] with latestVersion, compensation with compensationTiers\[\]\) |
| `applications` | json | List of applications \(id, status, customFields\[\], candidate summary, currentInterviewStage, source with sourceType, archiveReason with customFields\[\], archivedAt, job summary, creditedToUser, hiringTeam\[\], appliedViaJobPostingId, submitterClientIp, submitterUserAgent, createdAt, updatedAt\) |
| `notes` | json | List of notes \(id, content, author, isPrivate, createdAt\) |
| `offers` | json | List of offers \(id, decidedAt, applicationId, acceptanceStatus, offerStatus, latestVersion with id/startDate/salary/createdAt/openingId/customFields\[\]/fileHandles\[\]/author/approvalStatus\) |
| `archiveReasons` | json | List of archive reasons \(id, text, reasonType \[RejectedByCandidate/RejectedByOrg/Other\], isArchived\) |
| `sources` | json | List of sources \(id, title, isArchived, sourceType \{id, title, isArchived\}\) |
| `customFields` | json | List of custom field definitions \(id, title, isPrivate, fieldType, objectType, isArchived, isRequired, selectableValues\[\] \{label, value, isArchived\}\) |
| `departments` | json | List of departments \(id, name, externalName, isArchived, parentId, createdAt, updatedAt\) |
| `locations` | json | List of locations \(id, name, externalName, isArchived, isRemote, workplaceType, parentLocationId, type, address with addressCountry/Region/Locality/postalCode/streetAddress\) |
| `jobPostings` | json | List of job postings \(id, title, jobId, departmentName, teamName, locationName, locationIds, workplaceType, employmentType, isListed, publishedDate, applicationDeadline, externalLink, applyLink, compensationTierSummary, shouldDisplayCompensationOnJobBoard, updatedAt\) |
| `openings` | json | List of openings \(id, openedAt, closedAt, isArchived, archivedAt, closeReasonId, openingState, latestVersion with identifier/description/authorId/createdAt/teamId/jobIds\[\]/targetHireDate/targetStartDate/isBackfill/employmentType/locationIds\[\]/hiringTeam\[\]/customFields\[\]\) |
| `users` | json | List of users \(id, firstName, lastName, email, globalRole, isEnabled, updatedAt, managerId\) |
| `interviewSchedules` | json | List of interview schedules \(id, applicationId, interviewStageId, interviewEvents\[\] with interviewerUserIds/startTime/endTime/feedbackLink/location/meetingLink/hasSubmittedFeedback, status, scheduledBy, createdAt, updatedAt\) |
| `tags` | json | List of candidate tags \(id, title, isArchived\) |
| `id` | string | Resource UUID |
| `name` | string | Resource name |
| `title` | string | Job title or job posting title |
| `status` | string | Status |
| `candidate` | json | Candidate details \(id, name, primaryEmailAddress, primaryPhoneNumber, emailAddresses\[\], phoneNumbers\[\], socialLinks\[\], customFields\[\], source, creditedToUser, createdAt, updatedAt\) |
| `job` | json | Job details \(id, title, status, employmentType, locationId, departmentId, hiringTeam\[\], author, location, openings\[\], compensation, createdAt, updatedAt\) |
| `application` | json | Application details \(id, status, customFields\[\], candidate, currentInterviewStage, source, archiveReason, job, hiringTeam\[\], createdAt, updatedAt\) |
| `offer` | json | Offer details \(id, decidedAt, applicationId, acceptanceStatus, offerStatus, latestVersion\) |
| `jobPosting` | json | Job posting details \(id, title, descriptionPlain, descriptionHtml, descriptionSocial, descriptionParts, departmentName, teamName, teamNameHierarchy\[\], jobId, locationName, locationIds, linkedData, address, isRemote, workplaceType, employmentType, isListed, publishedDate, applicationDeadline, externalLink, applyLink, compensation, updatedAt\) |
| `content` | string | Note content |
| `author` | json | Note author \(id, firstName, lastName, email, globalRole, isEnabled\) |
| `isPrivate` | boolean | Whether the note is private |
| `createdAt` | string | ISO 8601 creation timestamp |
| `updatedAt` | string | ISO 8601 last update timestamp |
| `moreDataAvailable` | boolean | Whether more pages exist |
| `nextCursor` | string | Pagination cursor for next page |
| `syncToken` | string | Sync token for incremental updates |

### `ashby_get_candidate`

@@ -202,27 +328,37 @@ Retrieves full details about a single candidate by their ID.

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Candidate UUID |
| `name` | string | Full name |
| `primaryEmailAddress` | object | Primary email contact info |
| ↳ `value` | string | Email address |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary email |
| `primaryPhoneNumber` | object | Primary phone contact info |
| ↳ `value` | string | Phone number |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary phone |
| `profileUrl` | string | URL to the candidate's Ashby profile |
| `position` | string | Current position or title |
| `company` | string | Current company |
| `linkedInUrl` | string | LinkedIn profile URL |
| `githubUrl` | string | GitHub profile URL |
| `tags` | array | Tags applied to the candidate |
| ↳ `id` | string | Tag UUID |
| ↳ `title` | string | Tag title |
| `applicationIds` | array | IDs of associated applications |
| `candidates` | json | List of candidates with rich fields \(id, name, primaryEmailAddress, primaryPhoneNumber, emailAddresses\[\], phoneNumbers\[\], socialLinks\[\], linkedInUrl, githubUrl, profileUrl, position, company, school, timezone, location with locationComponents\[\], tags\[\], applicationIds\[\], customFields\[\], resumeFileHandle, fileHandles\[\], source with sourceType, creditedToUser, fraudStatus, createdAt, updatedAt\) |
| `jobs` | json | List of jobs \(id, title, confidential, status, employmentType, locationId, departmentId, defaultInterviewPlanId, interviewPlanIds\[\], customFields\[\], jobPostingIds\[\], customRequisitionId, brandId, hiringTeam\[\], author, createdAt, updatedAt, openedAt, closedAt, location with address, openings\[\] with latestVersion, compensation with compensationTiers\[\]\) |
| `applications` | json | List of applications \(id, status, customFields\[\], candidate summary, currentInterviewStage, source with sourceType, archiveReason with customFields\[\], archivedAt, job summary, creditedToUser, hiringTeam\[\], appliedViaJobPostingId, submitterClientIp, submitterUserAgent, createdAt, updatedAt\) |
| `notes` | json | List of notes \(id, content, author, isPrivate, createdAt\) |
| `offers` | json | List of offers \(id, decidedAt, applicationId, acceptanceStatus, offerStatus, latestVersion with id/startDate/salary/createdAt/openingId/customFields\[\]/fileHandles\[\]/author/approvalStatus\) |
| `archiveReasons` | json | List of archive reasons \(id, text, reasonType \[RejectedByCandidate/RejectedByOrg/Other\], isArchived\) |
| `sources` | json | List of sources \(id, title, isArchived, sourceType \{id, title, isArchived\}\) |
| `customFields` | json | List of custom field definitions \(id, title, isPrivate, fieldType, objectType, isArchived, isRequired, selectableValues\[\] \{label, value, isArchived\}\) |
| `departments` | json | List of departments \(id, name, externalName, isArchived, parentId, createdAt, updatedAt\) |
| `locations` | json | List of locations \(id, name, externalName, isArchived, isRemote, workplaceType, parentLocationId, type, address with addressCountry/Region/Locality/postalCode/streetAddress\) |
| `jobPostings` | json | List of job postings \(id, title, jobId, departmentName, teamName, locationName, locationIds, workplaceType, employmentType, isListed, publishedDate, applicationDeadline, externalLink, applyLink, compensationTierSummary, shouldDisplayCompensationOnJobBoard, updatedAt\) |
| `openings` | json | List of openings \(id, openedAt, closedAt, isArchived, archivedAt, closeReasonId, openingState, latestVersion with identifier/description/authorId/createdAt/teamId/jobIds\[\]/targetHireDate/targetStartDate/isBackfill/employmentType/locationIds\[\]/hiringTeam\[\]/customFields\[\]\) |
| `users` | json | List of users \(id, firstName, lastName, email, globalRole, isEnabled, updatedAt, managerId\) |
| `interviewSchedules` | json | List of interview schedules \(id, applicationId, interviewStageId, interviewEvents\[\] with interviewerUserIds/startTime/endTime/feedbackLink/location/meetingLink/hasSubmittedFeedback, status, scheduledBy, createdAt, updatedAt\) |
| `tags` | json | List of candidate tags \(id, title, isArchived\) |
| `id` | string | Resource UUID |
| `name` | string | Resource name |
| `title` | string | Job title or job posting title |
| `status` | string | Status |
| `candidate` | json | Candidate details \(id, name, primaryEmailAddress, primaryPhoneNumber, emailAddresses\[\], phoneNumbers\[\], socialLinks\[\], customFields\[\], source, creditedToUser, createdAt, updatedAt\) |
| `job` | json | Job details \(id, title, status, employmentType, locationId, departmentId, hiringTeam\[\], author, location, openings\[\], compensation, createdAt, updatedAt\) |
| `application` | json | Application details \(id, status, customFields\[\], candidate, currentInterviewStage, source, archiveReason, job, hiringTeam\[\], createdAt, updatedAt\) |
| `offer` | json | Offer details \(id, decidedAt, applicationId, acceptanceStatus, offerStatus, latestVersion\) |
| `jobPosting` | json | Job posting details \(id, title, descriptionPlain, descriptionHtml, descriptionSocial, descriptionParts, departmentName, teamName, teamNameHierarchy\[\], jobId, locationName, locationIds, linkedData, address, isRemote, workplaceType, employmentType, isListed, publishedDate, applicationDeadline, externalLink, applyLink, compensation, updatedAt\) |
| `content` | string | Note content |
| `author` | json | Note author \(id, firstName, lastName, email, globalRole, isEnabled\) |
| `isPrivate` | boolean | Whether the note is private |
| `createdAt` | string | ISO 8601 creation timestamp |
| `updatedAt` | string | ISO 8601 last update timestamp |
| `moreDataAvailable` | boolean | Whether more pages exist |
| `nextCursor` | string | Pagination cursor for next page |
| `syncToken` | string | Sync token for incremental updates |

### `ashby_get_job`

@@ -239,16 +375,37 @@ Retrieves full details about a single job by its ID.

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Job UUID |
| `title` | string | Job title |
| `status` | string | Job status \(Open, Closed, Draft, Archived\) |
| `employmentType` | string | Employment type \(FullTime, PartTime, Intern, Contract, Temporary\) |
| `departmentId` | string | Department UUID |
| `locationId` | string | Location UUID |
| `descriptionPlain` | string | Job description in plain text |
| `isArchived` | boolean | Whether the job is archived |
| `candidates` | json | List of candidates with rich fields \(id, name, primaryEmailAddress, primaryPhoneNumber, emailAddresses\[\], phoneNumbers\[\], socialLinks\[\], linkedInUrl, githubUrl, profileUrl, position, company, school, timezone, location with locationComponents\[\], tags\[\], applicationIds\[\], customFields\[\], resumeFileHandle, fileHandles\[\], source with sourceType, creditedToUser, fraudStatus, createdAt, updatedAt\) |
| `jobs` | json | List of jobs \(id, title, confidential, status, employmentType, locationId, departmentId, defaultInterviewPlanId, interviewPlanIds\[\], customFields\[\], jobPostingIds\[\], customRequisitionId, brandId, hiringTeam\[\], author, createdAt, updatedAt, openedAt, closedAt, location with address, openings\[\] with latestVersion, compensation with compensationTiers\[\]\) |
| `applications` | json | List of applications \(id, status, customFields\[\], candidate summary, currentInterviewStage, source with sourceType, archiveReason with customFields\[\], archivedAt, job summary, creditedToUser, hiringTeam\[\], appliedViaJobPostingId, submitterClientIp, submitterUserAgent, createdAt, updatedAt\) |
| `notes` | json | List of notes \(id, content, author, isPrivate, createdAt\) |
| `offers` | json | List of offers \(id, decidedAt, applicationId, acceptanceStatus, offerStatus, latestVersion with id/startDate/salary/createdAt/openingId/customFields\[\]/fileHandles\[\]/author/approvalStatus\) |
| `archiveReasons` | json | List of archive reasons \(id, text, reasonType \[RejectedByCandidate/RejectedByOrg/Other\], isArchived\) |
| `sources` | json | List of sources \(id, title, isArchived, sourceType \{id, title, isArchived\}\) |
| `customFields` | json | List of custom field definitions \(id, title, isPrivate, fieldType, objectType, isArchived, isRequired, selectableValues\[\] \{label, value, isArchived\}\) |
| `departments` | json | List of departments \(id, name, externalName, isArchived, parentId, createdAt, updatedAt\) |
| `locations` | json | List of locations \(id, name, externalName, isArchived, isRemote, workplaceType, parentLocationId, type, address with addressCountry/Region/Locality/postalCode/streetAddress\) |
| `jobPostings` | json | List of job postings \(id, title, jobId, departmentName, teamName, locationName, locationIds, workplaceType, employmentType, isListed, publishedDate, applicationDeadline, externalLink, applyLink, compensationTierSummary, shouldDisplayCompensationOnJobBoard, updatedAt\) |
| `openings` | json | List of openings \(id, openedAt, closedAt, isArchived, archivedAt, closeReasonId, openingState, latestVersion with identifier/description/authorId/createdAt/teamId/jobIds\[\]/targetHireDate/targetStartDate/isBackfill/employmentType/locationIds\[\]/hiringTeam\[\]/customFields\[\]\) |
| `users` | json | List of users \(id, firstName, lastName, email, globalRole, isEnabled, updatedAt, managerId\) |
| `interviewSchedules` | json | List of interview schedules \(id, applicationId, interviewStageId, interviewEvents\[\] with interviewerUserIds/startTime/endTime/feedbackLink/location/meetingLink/hasSubmittedFeedback, status, scheduledBy, createdAt, updatedAt\) |
| `tags` | json | List of candidate tags \(id, title, isArchived\) |
| `id` | string | Resource UUID |
| `name` | string | Resource name |
| `title` | string | Job title or job posting title |
| `status` | string | Status |
| `candidate` | json | Candidate details \(id, name, primaryEmailAddress, primaryPhoneNumber, emailAddresses\[\], phoneNumbers\[\], socialLinks\[\], customFields\[\], source, creditedToUser, createdAt, updatedAt\) |
| `job` | json | Job details \(id, title, status, employmentType, locationId, departmentId, hiringTeam\[\], author, location, openings\[\], compensation, createdAt, updatedAt\) |
| `application` | json | Application details \(id, status, customFields\[\], candidate, currentInterviewStage, source, archiveReason, job, hiringTeam\[\], createdAt, updatedAt\) |
| `offer` | json | Offer details \(id, decidedAt, applicationId, acceptanceStatus, offerStatus, latestVersion\) |
| `jobPosting` | json | Job posting details \(id, title, descriptionPlain, descriptionHtml, descriptionSocial, descriptionParts, departmentName, teamName, teamNameHierarchy\[\], jobId, locationName, locationIds, linkedData, address, isRemote, workplaceType, employmentType, isListed, publishedDate, applicationDeadline, externalLink, applyLink, compensation, updatedAt\) |
| `content` | string | Note content |
| `author` | json | Note author \(id, firstName, lastName, email, globalRole, isEnabled\) |
| `isPrivate` | boolean | Whether the note is private |
| `createdAt` | string | ISO 8601 creation timestamp |
| `updatedAt` | string | ISO 8601 last update timestamp |
| `moreDataAvailable` | boolean | Whether more pages exist |
| `nextCursor` | string | Pagination cursor for next page |
| `syncToken` | string | Sync token for incremental updates |

### `ashby_get_job_posting`

@@ -260,6 +417,8 @@ Retrieves full details about a single job posting by its ID.
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `jobPostingId` | string | Yes | The UUID of the job posting to fetch |
| `expandApplicationFormDefinition` | boolean | No | Include application form definition in the response |
| `expandSurveyFormDefinitions` | boolean | No | Include survey form definitions in the response |

#### Output

@@ -267,14 +426,56 @@ Retrieves full details about a single job posting by its ID.
| --------- | ---- | ----------- |
| `id` | string | Job posting UUID |
| `title` | string | Job posting title |
| `jobId` | string | Associated job UUID |
| `locationName` | string | Location name |
| `descriptionPlain` | string | Full description in plain text |
| `descriptionHtml` | string | Full description in HTML |
| `descriptionSocial` | string | Shortened description for social sharing \(max 200 chars\) |
| `descriptionParts` | object | Description broken into opening, body, and closing sections |
| ↳ `descriptionOpening` | object | Opening \(from Job Boards theme settings\) |
| ↳ `html` | string | HTML content |
| ↳ `plain` | string | Plain text content |
| ↳ `descriptionBody` | object | Main description body |
| ↳ `html` | string | HTML content |
| ↳ `plain` | string | Plain text content |
| ↳ `descriptionClosing` | object | Closing \(from Job Boards theme settings\) |
| ↳ `html` | string | HTML content |
| ↳ `plain` | string | Plain text content |
| `departmentName` | string | Department name |
| `employmentType` | string | Employment type \(e.g. FullTime, PartTime, Contract\) |
| `descriptionPlain` | string | Job posting description in plain text |
| `isListed` | boolean | Whether the posting is publicly listed |
| `teamName` | string | Team name |
| `teamNameHierarchy` | array | Hierarchy of team names from root to team |
| `jobId` | string | Associated job UUID |
| `locationName` | string | Primary location name |
| `locationIds` | object | Primary and secondary location UUIDs |
| ↳ `primaryLocationId` | string | Primary location UUID |
| ↳ `secondaryLocationIds` | array | Secondary location UUIDs |
| `address` | object | Postal address of the posting location |
| ↳ `postalAddress` | object | Structured postal address |
| ↳ `addressCountry` | string | Country |
| ↳ `addressRegion` | string | State or region |
| ↳ `addressLocality` | string | City or locality |
| ↳ `postalCode` | string | Postal code |
| ↳ `streetAddress` | string | Street address |
| `isRemote` | boolean | Whether the posting is remote |
| `workplaceType` | string | Workplace type \(OnSite, Remote, Hybrid\) |
| `employmentType` | string | Employment type \(FullTime, PartTime, Intern, Contract, Temporary\) |
| `isListed` | boolean | Whether publicly listed on the job board |
| `suppressDescriptionOpening` | boolean | Whether the theme opening is hidden on this posting |
| `suppressDescriptionClosing` | boolean | Whether the theme closing is hidden on this posting |
| `publishedDate` | string | ISO 8601 published date |
| `applicationDeadline` | string | ISO 8601 application deadline |
| `externalLink` | string | External link to the job posting |
| `applyLink` | string | Direct apply link |
| `compensation` | object | Compensation details for the posting |
| ↳ `compensationTierSummary` | string | Human-readable tier summary |
| ↳ `summaryComponents` | array | Structured compensation components |
| ↳ `summary` | string | Component summary |
| ↳ `compensationTypeLabel` | string | Component type label \(Salary, Commission, Bonus, Equity, etc.\) |
| ↳ `interval` | string | Payment interval \(e.g. annual, hourly\) |
| ↳ `currencyCode` | string | ISO 4217 currency code |
| ↳ `minValue` | number | Minimum value |
| ↳ `maxValue` | number | Maximum value |
| ↳ `shouldDisplayCompensationOnJobBoard` | boolean | Whether compensation is shown on the job board |
| `applicationLimitCalloutHtml` | string | HTML callout shown when application limit is reached |
| `updatedAt` | string | ISO 8601 last update timestamp |

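The structured `summaryComponents` rows above map directly to a small display formatter. A minimal sketch, assuming the component shape from the table; the `CompensationComponent` type name and the output format are illustrative, not part of the Ashby API:

```typescript
// Shape of one entry in compensation.summaryComponents, per the table above
// (hypothetical type name; fields follow the documented output).
interface CompensationComponent {
  compensationTypeLabel: string
  interval: string
  currencyCode: string
  minValue: number
  maxValue: number
}

// Render one human-readable line per component, e.g.
// "Salary: USD 120000-150000 (annual)". Formatting is illustrative only.
function describeCompensation(components: CompensationComponent[]): string[] {
  return components.map(
    (c) => `${c.compensationTypeLabel}: ${c.currencyCode} ${c.minValue}-${c.maxValue} (${c.interval})`
  )
}
```

In practice a consumer would feed `compensation.summaryComponents` from the tool output straight into such a formatter, or prefer the pre-built `compensationTierSummary` string when no custom layout is needed.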
### `ashby_get_offer`

@@ -291,20 +492,41 @@ Retrieves full details about a single offer by its ID.

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Offer UUID |
| `offerStatus` | string | Offer status \(e.g. WaitingOnCandidateResponse, CandidateAccepted\) |
| `acceptanceStatus` | string | Acceptance status \(e.g. Accepted, Declined, Pending\) |
| `applicationId` | string | Associated application UUID |
| `startDate` | string | Offer start date |
| `salary` | object | Salary details |
| ↳ `currencyCode` | string | ISO 4217 currency code |
| ↳ `value` | number | Salary amount |
| `openingId` | string | Associated opening UUID |
| `createdAt` | string | ISO 8601 creation timestamp \(from latest version\) |
| `candidates` | json | List of candidates with rich fields \(id, name, primaryEmailAddress, primaryPhoneNumber, emailAddresses\[\], phoneNumbers\[\], socialLinks\[\], linkedInUrl, githubUrl, profileUrl, position, company, school, timezone, location with locationComponents\[\], tags\[\], applicationIds\[\], customFields\[\], resumeFileHandle, fileHandles\[\], source with sourceType, creditedToUser, fraudStatus, createdAt, updatedAt\) |
| `jobs` | json | List of jobs \(id, title, confidential, status, employmentType, locationId, departmentId, defaultInterviewPlanId, interviewPlanIds\[\], customFields\[\], jobPostingIds\[\], customRequisitionId, brandId, hiringTeam\[\], author, createdAt, updatedAt, openedAt, closedAt, location with address, openings\[\] with latestVersion, compensation with compensationTiers\[\]\) |
| `applications` | json | List of applications \(id, status, customFields\[\], candidate summary, currentInterviewStage, source with sourceType, archiveReason with customFields\[\], archivedAt, job summary, creditedToUser, hiringTeam\[\], appliedViaJobPostingId, submitterClientIp, submitterUserAgent, createdAt, updatedAt\) |
| `notes` | json | List of notes \(id, content, author, isPrivate, createdAt\) |
| `offers` | json | List of offers \(id, decidedAt, applicationId, acceptanceStatus, offerStatus, latestVersion with id/startDate/salary/createdAt/openingId/customFields\[\]/fileHandles\[\]/author/approvalStatus\) |
| `archiveReasons` | json | List of archive reasons \(id, text, reasonType \[RejectedByCandidate/RejectedByOrg/Other\], isArchived\) |
| `sources` | json | List of sources \(id, title, isArchived, sourceType \{id, title, isArchived\}\) |
| `customFields` | json | List of custom field definitions \(id, title, isPrivate, fieldType, objectType, isArchived, isRequired, selectableValues\[\] \{label, value, isArchived\}\) |
| `departments` | json | List of departments \(id, name, externalName, isArchived, parentId, createdAt, updatedAt\) |
| `locations` | json | List of locations \(id, name, externalName, isArchived, isRemote, workplaceType, parentLocationId, type, address with addressCountry/Region/Locality/postalCode/streetAddress\) |
| `jobPostings` | json | List of job postings \(id, title, jobId, departmentName, teamName, locationName, locationIds, workplaceType, employmentType, isListed, publishedDate, applicationDeadline, externalLink, applyLink, compensationTierSummary, shouldDisplayCompensationOnJobBoard, updatedAt\) |
| `openings` | json | List of openings \(id, openedAt, closedAt, isArchived, archivedAt, closeReasonId, openingState, latestVersion with identifier/description/authorId/createdAt/teamId/jobIds\[\]/targetHireDate/targetStartDate/isBackfill/employmentType/locationIds\[\]/hiringTeam\[\]/customFields\[\]\) |
| `users` | json | List of users \(id, firstName, lastName, email, globalRole, isEnabled, updatedAt, managerId\) |
| `interviewSchedules` | json | List of interview schedules \(id, applicationId, interviewStageId, interviewEvents\[\] with interviewerUserIds/startTime/endTime/feedbackLink/location/meetingLink/hasSubmittedFeedback, status, scheduledBy, createdAt, updatedAt\) |
| `tags` | json | List of candidate tags \(id, title, isArchived\) |
| `id` | string | Resource UUID |
| `name` | string | Resource name |
| `title` | string | Job title or job posting title |
| `status` | string | Status |
| `candidate` | json | Candidate details \(id, name, primaryEmailAddress, primaryPhoneNumber, emailAddresses\[\], phoneNumbers\[\], socialLinks\[\], customFields\[\], source, creditedToUser, createdAt, updatedAt\) |
| `job` | json | Job details \(id, title, status, employmentType, locationId, departmentId, hiringTeam\[\], author, location, openings\[\], compensation, createdAt, updatedAt\) |
| `application` | json | Application details \(id, status, customFields\[\], candidate, currentInterviewStage, source, archiveReason, job, hiringTeam\[\], createdAt, updatedAt\) |
| `offer` | json | Offer details \(id, decidedAt, applicationId, acceptanceStatus, offerStatus, latestVersion\) |
| `jobPosting` | json | Job posting details \(id, title, descriptionPlain, descriptionHtml, descriptionSocial, descriptionParts, departmentName, teamName, teamNameHierarchy\[\], jobId, locationName, locationIds, linkedData, address, isRemote, workplaceType, employmentType, isListed, publishedDate, applicationDeadline, externalLink, applyLink, compensation, updatedAt\) |
| `content` | string | Note content |
| `author` | json | Note author \(id, firstName, lastName, email, globalRole, isEnabled\) |
| `isPrivate` | boolean | Whether the note is private |
| `createdAt` | string | ISO 8601 creation timestamp |
| `moreDataAvailable` | boolean | Whether more pages exist |
| `nextCursor` | string | Pagination cursor for next page |
| `syncToken` | string | Sync token for incremental updates |

### `ashby_list_applications`

Lists all applications in an Ashby organization with pagination and optional filters for status, job, candidate, and creation date.
Lists all applications in an Ashby organization with pagination and optional filters for status, job, and creation date.

#### Input

@@ -315,7 +537,6 @@ Lists all applications in an Ashby organization with pagination and optional fil
| `perPage` | number | No | Number of results per page \(default 100\) |
| `status` | string | No | Filter by application status: Active, Hired, Archived, or Lead |
| `jobId` | string | No | Filter applications by a specific job UUID |
| `candidateId` | string | No | Filter applications by a specific candidate UUID |
| `createdAfter` | string | No | Filter to applications created after this ISO 8601 timestamp \(e.g. 2024-01-01T00:00:00Z\) |

#### Output
@@ -323,23 +544,6 @@ Lists all applications in an Ashby organization with pagination and optional fil
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `applications` | array | List of applications |
| ↳ `id` | string | Application UUID |
| ↳ `status` | string | Application status \(Active, Hired, Archived, Lead\) |
| ↳ `candidate` | object | Associated candidate |
| ↳ `id` | string | Candidate UUID |
| ↳ `name` | string | Candidate name |
| ↳ `job` | object | Associated job |
| ↳ `id` | string | Job UUID |
| ↳ `title` | string | Job title |
| ↳ `currentInterviewStage` | object | Current interview stage |
| ↳ `id` | string | Stage UUID |
| ↳ `title` | string | Stage title |
| ↳ `type` | string | Stage type |
| ↳ `source` | object | Application source |
| ↳ `id` | string | Source UUID |
| ↳ `title` | string | Source title |
| ↳ `createdAt` | string | ISO 8601 creation timestamp |
| ↳ `updatedAt` | string | ISO 8601 last update timestamp |
| `moreDataAvailable` | boolean | Whether more pages of results exist |
| `nextCursor` | string | Opaque cursor for fetching the next page |

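The `moreDataAvailable`/`nextCursor` pair drives cursor pagination: feed `nextCursor` back into the next call until `moreDataAvailable` is false. A minimal sketch of that loop; `listApplications` is a hypothetical stand-in for the tool call, mocked with two fixed pages so the loop is runnable:

```typescript
// Generic page shape using the pagination fields documented above.
interface Page<T> {
  results: T[]
  moreDataAvailable: boolean
  nextCursor?: string
}

// Hypothetical stand-in for the ashby_list_applications tool call,
// mocked with two fixed pages for illustration.
async function listApplications(cursor?: string): Promise<Page<{ id: string }>> {
  if (!cursor) {
    return { results: [{ id: 'a1' }, { id: 'a2' }], moreDataAvailable: true, nextCursor: 'c2' }
  }
  return { results: [{ id: 'a3' }], moreDataAvailable: false }
}

// Drain every page by passing nextCursor back until moreDataAvailable is false.
async function listAllApplicationIds(): Promise<string[]> {
  const ids: string[] = []
  let cursor: string | undefined
  let more = true
  while (more) {
    const page = await listApplications(cursor)
    ids.push(...page.results.map((r) => r.id))
    more = page.moreDataAvailable
    cursor = page.nextCursor
  }
  return ids
}
```

The same loop applies to every `ashby_list_*` tool in this section, since they all expose the same `moreDataAvailable`/`nextCursor` contract.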
@@ -352,6 +556,7 @@ Lists all archive reasons configured in Ashby.
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `includeArchived` | boolean | No | Whether to include archived archive reasons in the response \(default false\) |

#### Output

@@ -360,7 +565,7 @@ Lists all archive reasons configured in Ashby.
| `archiveReasons` | array | List of archive reasons |
| ↳ `id` | string | Archive reason UUID |
| ↳ `text` | string | Archive reason text |
| ↳ `reasonType` | string | Reason type |
| ↳ `reasonType` | string | Reason type \(RejectedByCandidate, RejectedByOrg, Other\) |
| ↳ `isArchived` | boolean | Whether the reason is archived |

### `ashby_list_candidate_tags`
@@ -372,6 +577,10 @@ Lists all candidate tags configured in Ashby.
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Ashby API Key |
| `includeArchived` | boolean | No | Whether to include archived candidate tags \(default false\) |
| `cursor` | string | No | Opaque pagination cursor from a previous response nextCursor value |
| `syncToken` | string | No | Sync token from a previous response to fetch only changed results |
| `perPage` | number | No | Number of results per page \(default 100\) |

#### Output

@@ -381,6 +590,9 @@ Lists all candidate tags configured in Ashby.
| ↳ `id` | string | Tag UUID |
| ↳ `title` | string | Tag title |
| ↳ `isArchived` | boolean | Whether the tag is archived |
| `moreDataAvailable` | boolean | Whether more pages of results exist |
| `nextCursor` | string | Opaque cursor for fetching the next page |
| `syncToken` | string | Sync token to use for incremental updates in future requests |

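`syncToken` enables incremental listing: store the token returned by a full listing, then pass it back on a later call to receive only records that changed since. A sketch of that flow with a mocked stand-in for the tool call; the mock data and `tok-*` token values are illustrative assumptions:

```typescript
interface Tag {
  id: string
  title: string
  isArchived: boolean
}

// Response shape using the output fields documented above.
interface TagPage {
  candidateTags: Tag[]
  moreDataAvailable: boolean
  nextCursor?: string
  syncToken: string
}

// Hypothetical stand-in for ashby_list_candidate_tags, mocked so the
// sync-token flow can be shown end to end.
function listCandidateTags(opts: { syncToken?: string }): TagPage {
  if (opts.syncToken === 'tok-1') {
    // Incremental call: only the tag that changed since the last sync.
    return {
      candidateTags: [{ id: 't2', title: 'Referral', isArchived: false }],
      moreDataAvailable: false,
      syncToken: 'tok-2',
    }
  }
  // Initial full listing.
  return {
    candidateTags: [
      { id: 't1', title: 'Strong', isArchived: false },
      { id: 't2', title: 'Referral', isArchived: false },
    ],
    moreDataAvailable: false,
    syncToken: 'tok-1',
  }
}
```

Typical usage: one full listing to seed a local cache, then periodic calls with the stored `syncToken` (updating it from each response) to keep the cache current without refetching everything.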
### `ashby_list_candidates`

@@ -399,18 +611,6 @@ Lists all candidates in an Ashby organization with cursor-based pagination.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `candidates` | array | List of candidates |
| ↳ `id` | string | Candidate UUID |
| ↳ `name` | string | Full name |
| ↳ `primaryEmailAddress` | object | Primary email contact info |
| ↳ `value` | string | Email address |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary email |
| ↳ `primaryPhoneNumber` | object | Primary phone contact info |
| ↳ `value` | string | Phone number |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary phone |
| ↳ `createdAt` | string | ISO 8601 creation timestamp |
| ↳ `updatedAt` | string | ISO 8601 last update timestamp |
| `moreDataAvailable` | boolean | Whether more pages of results exist |
| `nextCursor` | string | Opaque cursor for fetching the next page |

@@ -431,9 +631,15 @@ Lists all custom field definitions configured in Ashby.
| `customFields` | array | List of custom field definitions |
| ↳ `id` | string | Custom field UUID |
| ↳ `title` | string | Custom field title |
| ↳ `fieldType` | string | Field type \(e.g. String, Number, Boolean\) |
| ↳ `objectType` | string | Object type the field applies to \(e.g. Candidate, Application, Job\) |
| ↳ `isPrivate` | boolean | Whether the custom field is private |
| ↳ `fieldType` | string | Field data type \(MultiValueSelect, NumberRange, String, Date, ValueSelect, Number, Currency, Boolean, LongText, CompensationRange\) |
| ↳ `objectType` | string | Object type the field applies to \(Application, Candidate, Employee, Job, Offer, Opening, Talent_Project\) |
| ↳ `isArchived` | boolean | Whether the custom field is archived |
| ↳ `isRequired` | boolean | Whether a value is required |
| ↳ `selectableValues` | array | Selectable values for MultiValueSelect fields \(empty for other field types\) |
| ↳ `label` | string | Display label |
| ↳ `value` | string | Stored value |
| ↳ `isArchived` | boolean | Whether archived |

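For select-type fields, `selectableValues` maps stored values to display labels. A sketch of that lookup; the `resolveLabel` helper and its fallback-to-raw-value behavior are illustrative assumptions, not documented Ashby behavior:

```typescript
// Shapes follow the customFields output rows above.
interface SelectableValue {
  label: string
  value: string
  isArchived: boolean
}

interface CustomField {
  id: string
  title: string
  fieldType: string
  selectableValues: SelectableValue[] // empty for non-select field types
}

// Resolve a stored value to its display label, skipping archived options;
// fall back to the raw stored value when no active option matches.
function resolveLabel(field: CustomField, stored: string): string {
  const match = field.selectableValues.find((v) => v.value === stored && !v.isArchived)
  return match ? match.label : stored
}
```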
### `ashby_list_departments`

@@ -452,8 +658,11 @@ Lists all departments in Ashby.
| `departments` | array | List of departments |
| ↳ `id` | string | Department UUID |
| ↳ `name` | string | Department name |
| ↳ `externalName` | string | Candidate-facing name used on job boards |
| ↳ `isArchived` | boolean | Whether the department is archived |
| ↳ `parentId` | string | Parent department UUID |
| ↳ `createdAt` | string | ISO 8601 creation timestamp |
| ↳ `updatedAt` | string | ISO 8601 last update timestamp |

### `ashby_list_interviews`

@@ -475,10 +684,24 @@ Lists interview schedules in Ashby, optionally filtered by application or interv
| --------- | ---- | ----------- |
| `interviewSchedules` | array | List of interview schedules |
| ↳ `id` | string | Interview schedule UUID |
| ↳ `status` | string | Schedule status \(NeedsScheduling, WaitingOnCandidateBooking, Scheduled, Complete, Cancelled, OnHold, etc.\) |
| ↳ `applicationId` | string | Associated application UUID |
| ↳ `interviewStageId` | string | Interview stage UUID |
| ↳ `status` | string | Schedule status |
| ↳ `createdAt` | string | ISO 8601 creation timestamp |
| ↳ `updatedAt` | string | ISO 8601 last update timestamp |
| ↳ `interviewEvents` | array | Scheduled interview events on this schedule |
| ↳ `id` | string | Event UUID |
| ↳ `interviewId` | string | Interview template UUID |
| ↳ `interviewScheduleId` | string | Parent schedule UUID |
| ↳ `interviewerUserIds` | array | User UUIDs of interviewers assigned to the event |
| ↳ `createdAt` | string | Event creation timestamp |
| ↳ `updatedAt` | string | Event last updated timestamp |
| ↳ `startTime` | string | Event start time |
| ↳ `endTime` | string | Event end time |
| ↳ `feedbackLink` | string | URL to submit feedback for the event |
| ↳ `location` | string | Physical location |
| ↳ `meetingLink` | string | Virtual meeting URL |
| ↳ `hasSubmittedFeedback` | boolean | Whether any feedback has been submitted |
| `moreDataAvailable` | boolean | Whether more pages of results exist |
| `nextCursor` | string | Opaque cursor for fetching the next page |

@@ -500,11 +723,22 @@ Lists all job postings in Ashby.
| ↳ `id` | string | Job posting UUID |
| ↳ `title` | string | Job posting title |
| ↳ `jobId` | string | Associated job UUID |
| ↳ `locationName` | string | Location name |
| ↳ `departmentName` | string | Department name |
| ↳ `employmentType` | string | Employment type \(e.g. FullTime, PartTime, Contract\) |
| ↳ `teamName` | string | Team name |
| ↳ `locationName` | string | Primary location display name |
| ↳ `locationIds` | object | Primary and secondary location UUIDs |
| ↳ `primaryLocationId` | string | Primary location UUID |
| ↳ `secondaryLocationIds` | array | Secondary location UUIDs |
| ↳ `workplaceType` | string | Workplace type \(OnSite, Remote, Hybrid\) |
| ↳ `employmentType` | string | Employment type \(FullTime, PartTime, Intern, Contract, Temporary\) |
| ↳ `isListed` | boolean | Whether the posting is publicly listed |
| ↳ `publishedDate` | string | ISO 8601 published date |
| ↳ `applicationDeadline` | string | ISO 8601 application deadline |
| ↳ `externalLink` | string | External link to the job posting |
| ↳ `applyLink` | string | Direct apply link for the job posting |
| ↳ `compensationTierSummary` | string | Compensation tier summary for job boards |
| ↳ `shouldDisplayCompensationOnJobBoard` | boolean | Whether compensation is shown on the job board |
| ↳ `updatedAt` | string | ISO 8601 last update timestamp |

### `ashby_list_jobs`

@@ -524,14 +758,6 @@ Lists all jobs in an Ashby organization. By default returns Open, Closed, and Ar
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `jobs` | array | List of jobs |
| ↳ `id` | string | Job UUID |
| ↳ `title` | string | Job title |
| ↳ `status` | string | Job status \(Open, Closed, Archived, Draft\) |
| ↳ `employmentType` | string | Employment type \(FullTime, PartTime, Intern, Contract, Temporary\) |
| ↳ `departmentId` | string | Department UUID |
| ↳ `locationId` | string | Location UUID |
| ↳ `createdAt` | string | ISO 8601 creation timestamp |
| ↳ `updatedAt` | string | ISO 8601 last update timestamp |
| `moreDataAvailable` | boolean | Whether more pages of results exist |
| `nextCursor` | string | Opaque cursor for fetching the next page |

@@ -552,12 +778,18 @@ Lists all locations configured in Ashby.
| `locations` | array | List of locations |
| ↳ `id` | string | Location UUID |
| ↳ `name` | string | Location name |
| ↳ `externalName` | string | Candidate-facing name used on job boards |
| ↳ `isArchived` | boolean | Whether the location is archived |
| ↳ `isRemote` | boolean | Whether this is a remote location |
| ↳ `address` | object | Location address |
| ↳ `city` | string | City |
| ↳ `region` | string | State or region |
| ↳ `country` | string | Country |
| ↳ `isRemote` | boolean | Whether the location is remote \(use workplaceType instead\) |
| ↳ `workplaceType` | string | Workplace type \(OnSite, Hybrid, Remote\) |
| ↳ `parentLocationId` | string | Parent location UUID |
| ↳ `type` | string | Location component type \(Location, LocationHierarchy\) |
| ↳ `address` | object | Location postal address |
| ↳ `addressCountry` | string | Country |
| ↳ `addressRegion` | string | State or region |
| ↳ `addressLocality` | string | City or locality |
| ↳ `postalCode` | string | Postal code |
| ↳ `streetAddress` | string | Street address |

### `ashby_list_notes`

@@ -579,6 +811,7 @@ Lists all notes on a candidate with pagination support.
| `notes` | array | List of notes on the candidate |
| ↳ `id` | string | Note UUID |
| ↳ `content` | string | Note content |
| ↳ `isPrivate` | boolean | Whether the note is private |
| ↳ `author` | object | Note author |
| ↳ `id` | string | Author user UUID |
| ↳ `firstName` | string | First name |
@@ -605,16 +838,6 @@ Lists all offers with their latest version in an Ashby organization.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `offers` | array | List of offers |
| ↳ `id` | string | Offer UUID |
| ↳ `offerStatus` | string | Offer status |
| ↳ `acceptanceStatus` | string | Acceptance status |
| ↳ `applicationId` | string | Associated application UUID |
| ↳ `startDate` | string | Offer start date |
| ↳ `salary` | object | Salary details |
| ↳ `currencyCode` | string | ISO 4217 currency code |
| ↳ `value` | number | Salary amount |
| ↳ `openingId` | string | Associated opening UUID |
| ↳ `createdAt` | string | ISO 8601 creation timestamp |
| `moreDataAvailable` | boolean | Whether more pages of results exist |
| `nextCursor` | string | Opaque cursor for fetching the next page |

@@ -634,12 +857,6 @@ Lists all openings in Ashby with pagination.

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `openings` | array | List of openings |
| ↳ `id` | string | Opening UUID |
| ↳ `openingState` | string | Opening state \(Approved, Closed, Draft, Filled, Open\) |
| ↳ `isArchived` | boolean | Whether the opening is archived |
| ↳ `openedAt` | string | ISO 8601 opened timestamp |
| ↳ `closedAt` | string | ISO 8601 closed timestamp |
| `moreDataAvailable` | boolean | Whether more pages of results exist |
| `nextCursor` | string | Opaque cursor for fetching the next page |

@@ -661,6 +878,10 @@ Lists all candidate sources configured in Ashby.
| ↳ `id` | string | Source UUID |
| ↳ `title` | string | Source title |
| ↳ `isArchived` | boolean | Whether the source is archived |
| ↳ `sourceType` | object | Source type grouping |
| ↳ `id` | string | Source type UUID |
| ↳ `title` | string | Source type title |
| ↳ `isArchived` | boolean | Whether archived |

### `ashby_list_users`

@@ -679,18 +900,12 @@ Lists all users in Ashby with pagination.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `users` | array | List of users |
| ↳ `id` | string | User UUID |
| ↳ `firstName` | string | First name |
| ↳ `lastName` | string | Last name |
| ↳ `email` | string | Email address |
| ↳ `isEnabled` | boolean | Whether the user account is enabled |
| ↳ `globalRole` | string | User role \(Organization Admin, Elevated Access, Limited Access, External Recruiter\) |
| `moreDataAvailable` | boolean | Whether more pages of results exist |
| `nextCursor` | string | Opaque cursor for fetching the next page |

### `ashby_remove_candidate_tag`

Removes a tag from a candidate in Ashby.
Removes a tag from a candidate in Ashby and returns the updated candidate.

#### Input

@@ -704,7 +919,37 @@ Removes a tag from a candidate in Ashby.

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Whether the tag was successfully removed |
| `candidates` | json | List of candidates with rich fields \(id, name, primaryEmailAddress, primaryPhoneNumber, emailAddresses\[\], phoneNumbers\[\], socialLinks\[\], linkedInUrl, githubUrl, profileUrl, position, company, school, timezone, location with locationComponents\[\], tags\[\], applicationIds\[\], customFields\[\], resumeFileHandle, fileHandles\[\], source with sourceType, creditedToUser, fraudStatus, createdAt, updatedAt\) |
| `jobs` | json | List of jobs \(id, title, confidential, status, employmentType, locationId, departmentId, defaultInterviewPlanId, interviewPlanIds\[\], customFields\[\], jobPostingIds\[\], customRequisitionId, brandId, hiringTeam\[\], author, createdAt, updatedAt, openedAt, closedAt, location with address, openings\[\] with latestVersion, compensation with compensationTiers\[\]\) |
| `applications` | json | List of applications \(id, status, customFields\[\], candidate summary, currentInterviewStage, source with sourceType, archiveReason with customFields\[\], archivedAt, job summary, creditedToUser, hiringTeam\[\], appliedViaJobPostingId, submitterClientIp, submitterUserAgent, createdAt, updatedAt\) |
| `notes` | json | List of notes \(id, content, author, isPrivate, createdAt\) |
| `offers` | json | List of offers \(id, decidedAt, applicationId, acceptanceStatus, offerStatus, latestVersion with id/startDate/salary/createdAt/openingId/customFields\[\]/fileHandles\[\]/author/approvalStatus\) |
| `archiveReasons` | json | List of archive reasons \(id, text, reasonType \[RejectedByCandidate/RejectedByOrg/Other\], isArchived\) |
| `sources` | json | List of sources \(id, title, isArchived, sourceType \{id, title, isArchived\}\) |
| `customFields` | json | List of custom field definitions \(id, title, isPrivate, fieldType, objectType, isArchived, isRequired, selectableValues\[\] \{label, value, isArchived\}\) |
| `departments` | json | List of departments \(id, name, externalName, isArchived, parentId, createdAt, updatedAt\) |
| `locations` | json | List of locations \(id, name, externalName, isArchived, isRemote, workplaceType, parentLocationId, type, address with addressCountry/Region/Locality/postalCode/streetAddress\) |
| `jobPostings` | json | List of job postings \(id, title, jobId, departmentName, teamName, locationName, locationIds, workplaceType, employmentType, isListed, publishedDate, applicationDeadline, externalLink, applyLink, compensationTierSummary, shouldDisplayCompensationOnJobBoard, updatedAt\) |
| `openings` | json | List of openings \(id, openedAt, closedAt, isArchived, archivedAt, closeReasonId, openingState, latestVersion with identifier/description/authorId/createdAt/teamId/jobIds\[\]/targetHireDate/targetStartDate/isBackfill/employmentType/locationIds\[\]/hiringTeam\[\]/customFields\[\]\) |
| `users` | json | List of users \(id, firstName, lastName, email, globalRole, isEnabled, updatedAt, managerId\) |
| `interviewSchedules` | json | List of interview schedules \(id, applicationId, interviewStageId, interviewEvents\[\] with interviewerUserIds/startTime/endTime/feedbackLink/location/meetingLink/hasSubmittedFeedback, status, scheduledBy, createdAt, updatedAt\) |
| `tags` | json | List of candidate tags \(id, title, isArchived\) |
| `id` | string | Resource UUID |
| `name` | string | Resource name |
| `title` | string | Job title or job posting title |
| `status` | string | Status |
| `candidate` | json | Candidate details \(id, name, primaryEmailAddress, primaryPhoneNumber, emailAddresses\[\], phoneNumbers\[\], socialLinks\[\], customFields\[\], source, creditedToUser, createdAt, updatedAt\) |
| `job` | json | Job details \(id, title, status, employmentType, locationId, departmentId, hiringTeam\[\], author, location, openings\[\], compensation, createdAt, updatedAt\) |
| `application` | json | Application details \(id, status, customFields\[\], candidate, currentInterviewStage, source, archiveReason, job, hiringTeam\[\], createdAt, updatedAt\) |
| `offer` | json | Offer details \(id, decidedAt, applicationId, acceptanceStatus, offerStatus, latestVersion\) |
| `jobPosting` | json | Job posting details \(id, title, descriptionPlain, descriptionHtml, descriptionSocial, descriptionParts, departmentName, teamName, teamNameHierarchy\[\], jobId, locationName, locationIds, linkedData, address, isRemote, workplaceType, employmentType, isListed, publishedDate, applicationDeadline, externalLink, applyLink, compensation, updatedAt\) |
| `content` | string | Note content |
| `author` | json | Note author \(id, firstName, lastName, email, globalRole, isEnabled\) |
| `isPrivate` | boolean | Whether the note is private |
| `createdAt` | string | ISO 8601 creation timestamp |
| `moreDataAvailable` | boolean | Whether more pages exist |
| `nextCursor` | string | Pagination cursor for next page |
| `syncToken` | string | Sync token for incremental updates |

### `ashby_search_candidates`

@@ -723,18 +968,6 @@ Searches for candidates by name and/or email with AND logic. Results are limited
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `candidates` | array | Matching candidates \(max 100 results\) |
| ↳ `id` | string | Candidate UUID |
| ↳ `name` | string | Full name |
| ↳ `primaryEmailAddress` | object | Primary email contact info |
| ↳ `value` | string | Email address |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary email |
| ↳ `primaryPhoneNumber` | object | Primary phone contact info |
| ↳ `value` | string | Phone number |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary phone |
| ↳ `createdAt` | string | ISO 8601 creation timestamp |
| ↳ `updatedAt` | string | ISO 8601 last update timestamp |

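For orientation, here is one hypothetical `candidates` entry assembled from the fields in the table above. All values are invented, not taken from the Ashby API:

```json
{
  "id": "f9e8c1a2-0000-0000-0000-000000000000",
  "name": "Ada Example",
  "primaryEmailAddress": {
    "value": "ada@example.com",
    "type": "Personal",
    "isPrimary": true
  },
  "primaryPhoneNumber": {
    "value": "+1 555 0100",
    "type": "Personal",
    "isPrimary": true
  },
  "createdAt": "2024-01-15T09:30:00.000Z",
  "updatedAt": "2024-02-01T12:00:00.000Z"
}
```
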
### `ashby_update_candidate`

@@ -758,26 +991,36 @@ Updates an existing candidate record in Ashby. Only provided fields are changed.

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Candidate UUID |
| `name` | string | Full name |
| `primaryEmailAddress` | object | Primary email contact info |
| ↳ `value` | string | Email address |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary email |
| `primaryPhoneNumber` | object | Primary phone contact info |
| ↳ `value` | string | Phone number |
| ↳ `type` | string | Contact type \(Personal, Work, Other\) |
| ↳ `isPrimary` | boolean | Whether this is the primary phone |
| `profileUrl` | string | URL to the candidate's Ashby profile |
| `position` | string | Current position or title |
| `company` | string | Current company |
| `linkedInUrl` | string | LinkedIn profile URL |
| `githubUrl` | string | GitHub profile URL |
| `tags` | array | Tags applied to the candidate |
| ↳ `id` | string | Tag UUID |
| ↳ `title` | string | Tag title |
| `applicationIds` | array | IDs of associated applications |
| `candidates` | json | List of candidates with rich fields \(id, name, primaryEmailAddress, primaryPhoneNumber, emailAddresses\[\], phoneNumbers\[\], socialLinks\[\], linkedInUrl, githubUrl, profileUrl, position, company, school, timezone, location with locationComponents\[\], tags\[\], applicationIds\[\], customFields\[\], resumeFileHandle, fileHandles\[\], source with sourceType, creditedToUser, fraudStatus, createdAt, updatedAt\) |
| `jobs` | json | List of jobs \(id, title, confidential, status, employmentType, locationId, departmentId, defaultInterviewPlanId, interviewPlanIds\[\], customFields\[\], jobPostingIds\[\], customRequisitionId, brandId, hiringTeam\[\], author, createdAt, updatedAt, openedAt, closedAt, location with address, openings\[\] with latestVersion, compensation with compensationTiers\[\]\) |
| `applications` | json | List of applications \(id, status, customFields\[\], candidate summary, currentInterviewStage, source with sourceType, archiveReason with customFields\[\], archivedAt, job summary, creditedToUser, hiringTeam\[\], appliedViaJobPostingId, submitterClientIp, submitterUserAgent, createdAt, updatedAt\) |
| `notes` | json | List of notes \(id, content, author, isPrivate, createdAt\) |
| `offers` | json | List of offers \(id, decidedAt, applicationId, acceptanceStatus, offerStatus, latestVersion with id/startDate/salary/createdAt/openingId/customFields\[\]/fileHandles\[\]/author/approvalStatus\) |
| `archiveReasons` | json | List of archive reasons \(id, text, reasonType \[RejectedByCandidate/RejectedByOrg/Other\], isArchived\) |
| `sources` | json | List of sources \(id, title, isArchived, sourceType \{id, title, isArchived\}\) |
| `customFields` | json | List of custom field definitions \(id, title, isPrivate, fieldType, objectType, isArchived, isRequired, selectableValues\[\] \{label, value, isArchived\}\) |
| `departments` | json | List of departments \(id, name, externalName, isArchived, parentId, createdAt, updatedAt\) |
| `locations` | json | List of locations \(id, name, externalName, isArchived, isRemote, workplaceType, parentLocationId, type, address with addressCountry/Region/Locality/postalCode/streetAddress\) |
| `jobPostings` | json | List of job postings \(id, title, jobId, departmentName, teamName, locationName, locationIds, workplaceType, employmentType, isListed, publishedDate, applicationDeadline, externalLink, applyLink, compensationTierSummary, shouldDisplayCompensationOnJobBoard, updatedAt\) |
| `openings` | json | List of openings \(id, openedAt, closedAt, isArchived, archivedAt, closeReasonId, openingState, latestVersion with identifier/description/authorId/createdAt/teamId/jobIds\[\]/targetHireDate/targetStartDate/isBackfill/employmentType/locationIds\[\]/hiringTeam\[\]/customFields\[\]\) |
| `users` | json | List of users \(id, firstName, lastName, email, globalRole, isEnabled, updatedAt, managerId\) |
| `interviewSchedules` | json | List of interview schedules \(id, applicationId, interviewStageId, interviewEvents\[\] with interviewerUserIds/startTime/endTime/feedbackLink/location/meetingLink/hasSubmittedFeedback, status, scheduledBy, createdAt, updatedAt\) |
| `tags` | json | List of candidate tags \(id, title, isArchived\) |
| `id` | string | Resource UUID |
| `name` | string | Resource name |
| `title` | string | Job title or job posting title |
| `status` | string | Status |
| `candidate` | json | Candidate details \(id, name, primaryEmailAddress, primaryPhoneNumber, emailAddresses\[\], phoneNumbers\[\], socialLinks\[\], customFields\[\], source, creditedToUser, createdAt, updatedAt\) |
| `job` | json | Job details \(id, title, status, employmentType, locationId, departmentId, hiringTeam\[\], author, location, openings\[\], compensation, createdAt, updatedAt\) |
| `application` | json | Application details \(id, status, customFields\[\], candidate, currentInterviewStage, source, archiveReason, job, hiringTeam\[\], createdAt, updatedAt\) |
| `offer` | json | Offer details \(id, decidedAt, applicationId, acceptanceStatus, offerStatus, latestVersion\) |
| `jobPosting` | json | Job posting details \(id, title, descriptionPlain, descriptionHtml, descriptionSocial, descriptionParts, departmentName, teamName, teamNameHierarchy\[\], jobId, locationName, locationIds, linkedData, address, isRemote, workplaceType, employmentType, isListed, publishedDate, applicationDeadline, externalLink, applyLink, compensation, updatedAt\) |
| `content` | string | Note content |
| `author` | json | Note author \(id, firstName, lastName, email, globalRole, isEnabled\) |
| `isPrivate` | boolean | Whether the note is private |
| `createdAt` | string | ISO 8601 creation timestamp |
| `updatedAt` | string | ISO 8601 last update timestamp |
| `moreDataAvailable` | boolean | Whether more pages exist |
| `nextCursor` | string | Pagination cursor for next page |
| `syncToken` | string | Sync token for incremental updates |

@@ -57,9 +57,12 @@ Run a CloudWatch Log Insights query against one or more log groups

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `results` | array | Query result rows |
| `statistics` | object | Query statistics \(bytesScanned, recordsMatched, recordsScanned\) |
| `status` | string | Query completion status |
| `results` | array | Query result rows \(each row is a key/value map of field name to value\) |
| `statistics` | object | Query statistics |
| ↳ `bytesScanned` | number | Total bytes of log data scanned |
| ↳ `recordsMatched` | number | Number of log records that matched the query |
| ↳ `recordsScanned` | number | Total log records scanned |
| `status` | string | Query completion status \(Complete, Failed, Cancelled, or Timeout\) |

### `cloudwatch_describe_log_groups`

@@ -80,6 +83,11 @@ List available CloudWatch log groups
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `logGroups` | array | List of CloudWatch log groups with metadata |
| ↳ `logGroupName` | string | Log group name |
| ↳ `arn` | string | Log group ARN |
| ↳ `storedBytes` | number | Total stored bytes |
| ↳ `retentionInDays` | number | Retention period in days \(if set\) |
| ↳ `creationTime` | number | Creation time in epoch milliseconds |

### `cloudwatch_get_log_events`

@@ -103,6 +111,9 @@ Retrieve log events from a specific CloudWatch log stream
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `events` | array | Log events with timestamp, message, and ingestion time |
| ↳ `timestamp` | number | Event timestamp in epoch milliseconds |
| ↳ `message` | string | Log event message |
| ↳ `ingestionTime` | number | Ingestion time in epoch milliseconds |

### `cloudwatch_describe_log_streams`

@@ -123,7 +134,12 @@ List log streams within a CloudWatch log group

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `logStreams` | array | List of log streams with metadata |
| `logStreams` | array | List of log streams with metadata, sorted by last event time \(most recent first\) unless a prefix filter is applied |
| ↳ `logStreamName` | string | Log stream name |
| ↳ `lastEventTimestamp` | number | Timestamp of the last log event in epoch milliseconds |
| ↳ `firstEventTimestamp` | number | Timestamp of the first log event in epoch milliseconds |
| ↳ `creationTime` | number | Stream creation time in epoch milliseconds |
| ↳ `storedBytes` | number | Total stored bytes |

### `cloudwatch_list_metrics`

@@ -146,6 +162,9 @@ List available CloudWatch metrics
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `metrics` | array | List of metrics with namespace, name, and dimensions |
| ↳ `namespace` | string | Metric namespace \(e.g., AWS/EC2\) |
| ↳ `metricName` | string | Metric name \(e.g., CPUUtilization\) |
| ↳ `dimensions` | array | Array of name/value dimension pairs |

### `cloudwatch_get_metric_statistics`

@@ -170,8 +189,15 @@ Get statistics for a CloudWatch metric over a time range

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `label` | string | Metric label |
| `datapoints` | array | Datapoints with timestamp and statistics values |
| `label` | string | Metric label returned by CloudWatch |
| `datapoints` | array | Datapoints sorted by timestamp with statistics values |
| ↳ `timestamp` | number | Datapoint timestamp in epoch milliseconds |
| ↳ `average` | number | Average statistic value |
| ↳ `sum` | number | Sum statistic value |
| ↳ `minimum` | number | Minimum statistic value |
| ↳ `maximum` | number | Maximum statistic value |
| ↳ `sampleCount` | number | Sample count statistic value |
| ↳ `unit` | string | Unit of the metric |

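As a sketch, one hypothetical result in the shape described above. The values are invented for illustration only:

```json
{
  "label": "CPUUtilization",
  "datapoints": [
    {
      "timestamp": 1718000000000,
      "average": 42.5,
      "sum": 2550.0,
      "minimum": 12.0,
      "maximum": 87.3,
      "sampleCount": 60,
      "unit": "Percent"
    }
  ]
}
```
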
### `cloudwatch_put_metric_data`

@@ -222,5 +248,13 @@ List and filter CloudWatch alarms
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `alarms` | array | List of CloudWatch alarms with state and configuration |
| ↳ `alarmName` | string | Alarm name |
| ↳ `alarmArn` | string | Alarm ARN |
| ↳ `stateValue` | string | Current state \(OK, ALARM, INSUFFICIENT_DATA\) |
| ↳ `stateReason` | string | Human-readable reason for the state |
| ↳ `metricName` | string | Metric name \(MetricAlarm only\) |
| ↳ `namespace` | string | Metric namespace \(MetricAlarm only\) |
| ↳ `threshold` | number | Threshold value \(MetricAlarm only\) |
| ↳ `stateUpdatedTimestamp` | number | Epoch ms when state last changed |

162 apps/docs/content/docs/en/tools/custom-tools.mdx (Normal file)
@@ -0,0 +1,162 @@
---
title: Custom Tools
description: Create your own tools with JavaScript code and use them in Agent blocks
---

import { Callout } from 'fumadocs-ui/components/callout'
import { Step, Steps } from 'fumadocs-ui/components/steps'
import { FAQ } from '@/components/ui/faq'

Custom tools let you write your own JavaScript functions and make them available as callable tools in Agent blocks. This is useful when you need functionality that isn't covered by Sim's built-in integrations — for example, calling an internal API, performing a custom calculation, or transforming data in a specific way.

## How Custom Tools Work

A custom tool has two parts:

1. **Schema** — A JSON definition describing the tool's name, description, and parameters (using the OpenAI function-calling format). This tells the AI agent what the tool does and what inputs it expects.
2. **Code** — A JavaScript function body that runs when the agent calls the tool. Parameters defined in the schema are available as variables in your code.

When an Agent block has access to a custom tool, the AI model decides when to call it based on the schema description and the conversation context — just like built-in tools.

## Creating a Custom Tool

<Steps>
<Step>

### Open Custom Tools settings

Navigate to **Settings → Custom Tools** in your workspace and click **Add**.

</Step>
<Step>

### Define the schema

In the **Schema** tab, define your tool using JSON in the OpenAI function-calling format:

```json
{
  "type": "function",
  "function": {
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
      "type": "object",
      "properties": {
        "city": {
          "type": "string",
          "description": "The city name"
        },
        "units": {
          "type": "string",
          "enum": ["celsius", "fahrenheit"],
          "description": "Temperature units"
        }
      },
      "required": ["city"]
    }
  }
}
```

<Callout type="info">
You can use the AI wand button to generate a schema from a natural language description of what the tool should do.
</Callout>

</Step>
<Step>

### Write the code

Switch to the **Code** tab and write the JavaScript function body. Parameters from your schema are available directly as variables:

```javascript
const response = await fetch(
  `https://api.openweathermap.org/data/2.5/weather?q=${city}&units=${units === 'celsius' ? 'metric' : 'imperial'}&appid={{OPENWEATHER_API_KEY}}`
);

const data = await response.json();

return {
  temperature: data.main.temp,
  description: data.weather[0].description,
  humidity: data.main.humidity
};
```

<Callout type="info">
You can also use the AI wand to generate code from a description. Environment variables are referenced with `{{KEY}}` syntax.
</Callout>

</Step>
<Step>

### Save

Click **Save** to create the tool. It's now available to use in any Agent block across your workspace.

</Step>
</Steps>

## Using Custom Tools in Workflows

Once created, custom tools appear alongside built-in tools when configuring an Agent block:

1. Open an Agent block
2. Click **Add Tools**
3. Find your custom tool in the tool list
4. The agent will call the tool when it determines it's relevant to the task

## Code Environment

### Available Features

- **Async/await** — Your code runs in an async context, so you can use `await` directly
- **fetch()** — Make HTTP requests to external APIs
- **Node.js built-ins** — Access to `crypto`, `Buffer`, and other standard modules
- **Environment variables** — Use `{{KEY}}` syntax to inject secrets

### Limitations

- **No npm packages** — External libraries like `axios` or `lodash` are not available. Use built-in APIs instead
- **Parameters by name** — Schema parameters are available directly as variables (e.g., `city`), not via a `params` object

### Returning Results

Return a value from your code to send it back to the agent:

```javascript
const result = await fetch(`https://api.example.com/data?q=${query}`);
const data = await result.json();
return data;
```

The returned value becomes the tool output that the agent sees and can use in its response.

## Managing Custom Tools
|
||||
|
||||
From **Settings → Custom Tools** you can:
|
||||
|
||||
- **Search** tools by name, function name, or description
|
||||
- **Edit** any tool's schema or code
|
||||
- **Delete** tools that are no longer needed
|
||||
|
||||
<Callout type="warn">
|
||||
Deleting a custom tool removes it from all Agent blocks that reference it. Make sure no active workflows depend on the tool before deleting.
|
||||
</Callout>
|
||||
|
||||
## Permissions
|
||||
|
||||
| Action | Required Permission |
|
||||
|--------|-------------------|
|
||||
| View custom tools | **Read**, **Write**, or **Admin** |
|
||||
| Create or edit tools | **Write** or **Admin** |
|
||||
| Delete tools | **Admin** |
|
||||
|
||||
<FAQ items={[
|
||||
{ question: "Can I use custom tools in standalone blocks (not agents)?", answer: "No. Custom tools are designed for use within Agent blocks, where the AI model decides when to call them. For deterministic tool execution, use the Function block instead." },
|
||||
{ question: "How do I pass API keys to my custom tool code?", answer: "Use environment variables with double curly brace syntax: {{MY_API_KEY}}. Create the environment variable in Settings → Secrets, and it will be injected at execution time without appearing in logs." },
|
||||
{ question: "Can I use external npm packages?", answer: "No. Custom tool code runs in a sandboxed environment with access to built-in Node.js modules and fetch(), but not external packages. For complex dependencies, consider calling an external API that wraps the functionality you need." },
|
||||
{ question: "What's the difference between custom tools and the Function block?", answer: "Custom tools are called by AI agents when they decide the tool is relevant — the agent chooses when to use it. Function blocks run deterministically at a fixed point in the workflow. Use custom tools for agent-driven actions and Function blocks for predictable data transformations." },
|
||||
{ question: "Are custom tools shared across the workspace?", answer: "Yes. Custom tools are workspace-scoped, so all workspace members can use them in their workflows." },
|
||||
]} />
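The environment-variable placeholder mentioned in the FAQ above is substituted before the code runs. A hypothetical sketch (the variable name and URL are illustrative):

```javascript
// Hypothetical sketch: "{{MY_API_KEY}}" is replaced with the secret's value
// before execution, so by the time this code runs it is an ordinary string.
function authHeaders(token) {
  return { Authorization: `Bearer ${token}` };
}

// Illustrative tool body using the injected secret.
async function runTool() {
  const response = await fetch("https://api.example.com/data", {
    headers: authHeaders("{{MY_API_KEY}}"),
  });
  return await response.json();
}
```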
@@ -1,6 +1,6 @@

---
title: Amazon DynamoDB
description: Connect to Amazon DynamoDB
description: Get, put, query, scan, update, and delete items in Amazon DynamoDB tables
---

import { BlockInfoCard } from "@/components/ui/block-info-card"

@@ -55,7 +55,7 @@ Get an item from a DynamoDB table by primary key

| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `tableName` | string | Yes | DynamoDB table name \(e.g., "Users", "Orders"\) |
| `key` | object | Yes | Primary key of the item to retrieve \(e.g., \{"pk": "USER#123"\} or \{"pk": "ORDER#456", "sk": "ITEM#789"\}\) |
| `key` | json | Yes | Primary key of the item to retrieve \(e.g., \{"pk": "USER#123"\} or \{"pk": "ORDER#456", "sk": "ITEM#789"\}\) |
| `consistentRead` | boolean | No | Use strongly consistent read |

#### Output

@@ -63,7 +63,7 @@ Get an item from a DynamoDB table by primary key

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Operation status message |
| `item` | object | Retrieved item |
| `item` | json | Retrieved item |

### `dynamodb_put`

@@ -77,14 +77,17 @@ Put an item into a DynamoDB table

| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `tableName` | string | Yes | DynamoDB table name \(e.g., "Users", "Orders"\) |
| `item` | object | Yes | Item to put into the table \(e.g., \{"pk": "USER#123", "name": "John", "email": "john@example.com"\}\) |
| `item` | json | Yes | Item to put into the table \(e.g., \{"pk": "USER#123", "name": "John", "email": "john@example.com"\}\) |
| `conditionExpression` | string | No | Condition that must be met for the put to succeed \(e.g., "attribute_not_exists\(pk\)" to prevent overwrites\) |
| `expressionAttributeNames` | json | No | Attribute name mappings for reserved words used in conditionExpression \(e.g., \{"#name": "name"\}\) |
| `expressionAttributeValues` | json | No | Expression attribute values used in conditionExpression \(e.g., \{":expected": "value"\}\) |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Operation status message |
| `item` | object | Created item |
| `item` | json | Created item |

### `dynamodb_query`

@@ -100,10 +103,12 @@ Query items from a DynamoDB table using key conditions

| `tableName` | string | Yes | DynamoDB table name \(e.g., "Users", "Orders"\) |
| `keyConditionExpression` | string | Yes | Key condition expression \(e.g., "pk = :pk" or "pk = :pk AND sk BEGINS_WITH :prefix"\) |
| `filterExpression` | string | No | Filter expression for results \(e.g., "age > :minAge AND #status = :status"\) |
| `expressionAttributeNames` | object | No | Attribute name mappings for reserved words \(e.g., \{"#status": "status"\}\) |
| `expressionAttributeValues` | object | No | Expression attribute values \(e.g., \{":pk": "USER#123", ":minAge": 18\}\) |
| `expressionAttributeNames` | json | No | Attribute name mappings for reserved words \(e.g., \{"#status": "status"\}\) |
| `expressionAttributeValues` | json | No | Expression attribute values \(e.g., \{":pk": "USER#123", ":minAge": 18\}\) |
| `indexName` | string | No | Secondary index name to query \(e.g., "GSI1", "email-index"\) |
| `limit` | number | No | Maximum number of items to return \(e.g., 10, 50, 100\) |
| `exclusiveStartKey` | json | No | Pagination token from a previous query's lastEvaluatedKey to continue fetching results |
| `scanIndexForward` | boolean | No | Sort order for the sort key: true for ascending \(default\), false for descending |

#### Output

@@ -112,6 +117,7 @@ Query items from a DynamoDB table using key conditions

| `message` | string | Operation status message |
| `items` | array | Array of items returned |
| `count` | number | Number of items returned |
| `lastEvaluatedKey` | json | Pagination token to pass as exclusiveStartKey to fetch the next page of results |

### `dynamodb_scan`

@@ -127,9 +133,10 @@ Scan all items in a DynamoDB table

| `tableName` | string | Yes | DynamoDB table name \(e.g., "Users", "Orders"\) |
| `filterExpression` | string | No | Filter expression for results \(e.g., "age > :minAge AND #status = :status"\) |
| `projectionExpression` | string | No | Attributes to retrieve \(e.g., "pk, sk, #name, email"\) |
| `expressionAttributeNames` | object | No | Attribute name mappings for reserved words \(e.g., \{"#name": "name", "#status": "status"\}\) |
| `expressionAttributeValues` | object | No | Expression attribute values \(e.g., \{":minAge": 18, ":status": "active"\}\) |
| `expressionAttributeNames` | json | No | Attribute name mappings for reserved words \(e.g., \{"#name": "name", "#status": "status"\}\) |
| `expressionAttributeValues` | json | No | Expression attribute values \(e.g., \{":minAge": 18, ":status": "active"\}\) |
| `limit` | number | No | Maximum number of items to return \(e.g., 10, 50, 100\) |
| `exclusiveStartKey` | json | No | Pagination token from a previous scan's lastEvaluatedKey to continue fetching results |

#### Output

@@ -138,6 +145,7 @@ Scan all items in a DynamoDB table

| `message` | string | Operation status message |
| `items` | array | Array of items returned |
| `count` | number | Number of items returned |
| `lastEvaluatedKey` | json | Pagination token to pass as exclusiveStartKey to fetch the next page of results |

### `dynamodb_update`

@@ -151,10 +159,10 @@ Update an item in a DynamoDB table

| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `tableName` | string | Yes | DynamoDB table name \(e.g., "Users", "Orders"\) |
| `key` | object | Yes | Primary key of the item to update \(e.g., \{"pk": "USER#123"\} or \{"pk": "ORDER#456", "sk": "ITEM#789"\}\) |
| `key` | json | Yes | Primary key of the item to update \(e.g., \{"pk": "USER#123"\} or \{"pk": "ORDER#456", "sk": "ITEM#789"\}\) |
| `updateExpression` | string | Yes | Update expression \(e.g., "SET #name = :name, age = :age" or "SET #count = #count + :inc"\) |
| `expressionAttributeNames` | object | No | Attribute name mappings for reserved words \(e.g., \{"#name": "name", "#count": "count"\}\) |
| `expressionAttributeValues` | object | No | Expression attribute values \(e.g., \{":name": "John", ":age": 30, ":inc": 1\}\) |
| `expressionAttributeNames` | json | No | Attribute name mappings for reserved words \(e.g., \{"#name": "name", "#count": "count"\}\) |
| `expressionAttributeValues` | json | No | Expression attribute values \(e.g., \{":name": "John", ":age": 30, ":inc": 1\}\) |
| `conditionExpression` | string | No | Condition that must be met for the update to succeed \(e.g., "attribute_exists\(pk\)" or "version = :expectedVersion"\) |

#### Output

@@ -162,7 +170,7 @@ Update an item in a DynamoDB table

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Operation status message |
| `item` | object | Updated item |
| `item` | json | Updated item with all attributes |

### `dynamodb_delete`

@@ -176,8 +184,10 @@ Delete an item from a DynamoDB table

| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `tableName` | string | Yes | DynamoDB table name \(e.g., "Users", "Orders"\) |
| `key` | object | Yes | Primary key of the item to delete \(e.g., \{"pk": "USER#123"\} or \{"pk": "ORDER#456", "sk": "ITEM#789"\}\) |
| `key` | json | Yes | Primary key of the item to delete \(e.g., \{"pk": "USER#123"\} or \{"pk": "ORDER#456", "sk": "ITEM#789"\}\) |
| `conditionExpression` | string | No | Condition that must be met for the delete to succeed \(e.g., "attribute_exists\(pk\)"\) |
| `expressionAttributeNames` | json | No | Attribute name mappings for reserved words used in conditionExpression \(e.g., \{"#status": "status"\}\) |
| `expressionAttributeValues` | json | No | Expression attribute values used in conditionExpression \(e.g., \{":status": "active"\}\) |

#### Output

@@ -204,6 +214,6 @@ Introspect DynamoDB to list tables or get detailed schema information for a spec

| --------- | ---- | ----------- |
| `message` | string | Operation status message |
| `tables` | array | List of table names in the region |
| `tableDetails` | object | Detailed schema information for a specific table |
| `tableDetails` | json | Detailed schema information for a specific table |

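Both `dynamodb_query` and `dynamodb_scan` now return a `lastEvaluatedKey` that is fed back as `exclusiveStartKey` on the next call. A rough sketch of that contract (the `runQuery` callback is hypothetical, standing in for either tool):

```javascript
// Keep passing lastEvaluatedKey back as exclusiveStartKey until it is absent.
// `runQuery` is a hypothetical async function that returns one page of results
// shaped like the documented output: { items, count, lastEvaluatedKey }.
async function queryAll(runQuery, params) {
  const items = [];
  let exclusiveStartKey;
  do {
    const page = await runQuery({ ...params, exclusiveStartKey });
    items.push(...page.items);
    exclusiveStartKey = page.lastEvaluatedKey;
  } while (exclusiveStartKey);
  return items;
}
```

The same loop shape applies to any tool here that documents a pagination token.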
@@ -409,7 +409,7 @@ Retrieve interaction statistics for users by date range from Gong. Only includes

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `peopleInteractionStats` | array | Email address of the Gong user |
| `peopleInteractionStats` | array | Interaction statistics per user. Applicable stat names: 'Longest Monologue', 'Longest Customer Story', 'Interactivity', 'Patience', 'Question Rate'. |
| ↳ `userId` | string | Gong's unique numeric identifier for the user |
| ↳ `userEmailAddress` | string | Email address of the Gong user |
| ↳ `personInteractionStats` | array | List of interaction stat measurements for this user |

@@ -656,7 +656,7 @@ Retrieve coaching metrics for a manager from Gong.

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `requestId` | string | A Gong request reference ID for troubleshooting purposes |
| `coachingData` | array | The manager user information |
| `coachingData` | array | A list of coaching data entries, one per manager's team |
| ↳ `manager` | object | The manager user information |
| ↳ `id` | string | Gong unique numeric identifier for the user |
| ↳ `emailAddress` | string | Email address of the Gong user |

@@ -68,7 +68,7 @@ Get detailed information about an IAM user

| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `userName` | string | Yes | The name of the IAM user to retrieve |
| `userName` | string | No | The name of the IAM user to retrieve \(defaults to the caller if omitted\) |

#### Output

@@ -440,4 +440,80 @@ Remove an IAM user from a group

| --------- | ---- | ----------- |
| `message` | string | Operation status message |

### `iam_list_attached_role_policies`

List all managed policies attached to an IAM role

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `roleName` | string | Yes | Name of the IAM role |
| `pathPrefix` | string | No | Path prefix to filter policies \(e.g., /application/\) |
| `maxItems` | number | No | Maximum number of policies to return \(1-1000\) |
| `marker` | string | No | Pagination marker from a previous request |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `attachedPolicies` | json | List of attached policies with policyName and policyArn |
| `isTruncated` | boolean | Whether there are more results available |
| `marker` | string | Pagination marker for the next page of results |
| `count` | number | Number of attached policies returned |

### `iam_list_attached_user_policies`

List all managed policies attached to an IAM user

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `userName` | string | Yes | Name of the IAM user |
| `pathPrefix` | string | No | Path prefix to filter policies \(e.g., /application/\) |
| `maxItems` | number | No | Maximum number of policies to return \(1-1000\) |
| `marker` | string | No | Pagination marker from a previous request |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `attachedPolicies` | json | List of attached policies with policyName and policyArn |
| `isTruncated` | boolean | Whether there are more results available |
| `marker` | string | Pagination marker for the next page of results |
| `count` | number | Number of attached policies returned |

### `iam_simulate_principal_policy`

Simulate whether a user, role, or group is allowed to perform specific AWS actions — useful for pre-flight access checks

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `policySourceArn` | string | Yes | ARN of the user, group, or role to simulate \(e.g., arn:aws:iam::123456789012:user/alice\) |
| `actionNames` | string | Yes | Comma-separated list of AWS actions to simulate \(e.g., s3:GetObject,ec2:DescribeInstances\) |
| `resourceArns` | string | No | Comma-separated list of resource ARNs to simulate against \(defaults to * if not provided\) |
| `maxResults` | number | No | Maximum number of simulation results to return \(1-1000\) |
| `marker` | string | No | Pagination marker from a previous request |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `evaluationResults` | json | Simulation results per action: evalActionName, evalResourceName, evalDecision \(allowed/explicitDeny/implicitDeny\), matchedStatements \(sourcePolicyId, sourcePolicyType\), missingContextValues |
| `isTruncated` | boolean | Whether there are more results available |
| `marker` | string | Pagination marker for the next page of results |
| `count` | number | Number of evaluation results returned |

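The simulation output lends itself to a small post-processing step for pre-flight checks. A hedged sketch (the `deniedActions` helper is hypothetical, assuming `evaluationResults` has the shape documented above):

```javascript
// Hypothetical helper: reduce the simulation's evaluationResults to the
// list of actions that would not be allowed for the simulated principal.
function deniedActions(evaluationResults) {
  return evaluationResults
    .filter((r) => r.evalDecision !== "allowed")
    .map((r) => r.evalActionName);
}
```

An empty array from the helper means every simulated action was allowed.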
340 apps/docs/content/docs/en/tools/identity_center.mdx Normal file
@@ -0,0 +1,340 @@
---
title: AWS Identity Center
description: Manage temporary elevated access in AWS IAM Identity Center
---

import { BlockInfoCard } from "@/components/ui/block-info-card"

<BlockInfoCard
  type="identity_center"
  color="linear-gradient(45deg, #BD0816 0%, #FF5252 100%)"
/>

{/* MANUAL-CONTENT-START:intro */}
[AWS IAM Identity Center](https://aws.amazon.com/iam/identity-center/) (formerly AWS Single Sign-On) is the recommended service for managing workforce access to multiple AWS accounts and applications. It provides a central place to assign users and groups temporary, permission-scoped access to AWS accounts using permission sets — without creating long-lived IAM credentials.

With AWS IAM Identity Center, you can:

- **Provision account assignments**: Grant a user or group access to a specific AWS account with a specific permission set — the core primitive of temporary elevated access
- **Revoke access on demand**: Delete account assignments to immediately remove elevated permissions when they are no longer needed
- **Look up users by email**: Resolve a federated identity (email address) to an Identity Store user ID for programmatic access provisioning
- **List permission sets**: Enumerate the available permission sets (e.g., ReadOnly, PowerUser, AdministratorAccess) defined in your Identity Center instance
- **Monitor assignment status**: Poll the provisioning status of create/delete operations, which are asynchronous in AWS
- **List accounts in your organization**: Enumerate all AWS accounts in your AWS Organizations structure to populate access request dropdowns
- **Manage groups**: List groups and resolve group IDs by display name for group-based access grants

In Sim, the AWS Identity Center integration is designed to power **TEAM (Temporary Elevated Access Management)** workflows — automated pipelines where users request elevated access, approvers approve or deny it, access is provisioned with a time limit, and auto-revocation removes it when the window expires. This replaces manual console-based access management with auditable, agent-driven workflows that integrate with Slack, email, ticketing systems, and CloudTrail for full traceability.
{/* MANUAL-CONTENT-END */}

## Usage Instructions

Provision and revoke temporary access to AWS accounts via IAM Identity Center (SSO). Assign permission sets to users or groups, look up users by email, and list accounts and permission sets for access request workflows.

## Tools

### `identity_center_list_instances`

List all AWS IAM Identity Center instances in your account

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `maxResults` | number | No | Maximum number of instances to return \(1-100\) |
| `nextToken` | string | No | Pagination token from a previous request |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `instances` | json | List of Identity Center instances with instanceArn, identityStoreId, name, status, statusReason |
| `nextToken` | string | Pagination token for the next page of results |
| `count` | number | Number of instances returned |

### `identity_center_list_accounts`

List all AWS accounts in your organization

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `maxResults` | number | No | Maximum number of accounts to return |
| `nextToken` | string | No | Pagination token from a previous request |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `accounts` | json | List of AWS accounts with id, arn, name, email, status |
| `nextToken` | string | Pagination token for the next page of results |
| `count` | number | Number of accounts returned |

### `identity_center_describe_account`

Retrieve details about a specific AWS account by its ID

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `accountId` | string | Yes | AWS account ID to describe |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | AWS account ID |
| `arn` | string | AWS account ARN |
| `name` | string | Account name |
| `email` | string | Root email address of the account |
| `status` | string | Account status \(ACTIVE, SUSPENDED, etc.\) |
| `joinedTimestamp` | string | Date the account joined the organization |

### `identity_center_list_permission_sets`

List all permission sets defined in an IAM Identity Center instance

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `instanceArn` | string | Yes | ARN of the Identity Center instance |
| `maxResults` | number | No | Maximum number of permission sets to return |
| `nextToken` | string | No | Pagination token from a previous request |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `permissionSets` | json | List of permission sets with permissionSetArn, name, description, sessionDuration |
| `nextToken` | string | Pagination token for the next page of results |
| `count` | number | Number of permission sets returned |

### `identity_center_get_user`

Look up a user in the Identity Store by email address

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `identityStoreId` | string | Yes | Identity Store ID \(from the Identity Center instance\) |
| `email` | string | Yes | Email address of the user to look up |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `userId` | string | Identity Store user ID \(use as principalId\) |
| `userName` | string | Username in the Identity Store |
| `displayName` | string | Display name of the user |
| `email` | string | Email address of the user |

### `identity_center_get_group`

Look up a group in the Identity Store by display name

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `identityStoreId` | string | Yes | Identity Store ID \(from the Identity Center instance\) |
| `displayName` | string | Yes | Display name of the group to look up |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `groupId` | string | Identity Store group ID \(use as principalId\) |
| `displayName` | string | Display name of the group |
| `description` | string | Group description |

### `identity_center_list_groups`

List all groups in the Identity Store

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `identityStoreId` | string | Yes | Identity Store ID \(from the Identity Center instance\) |
| `maxResults` | number | No | Maximum number of groups to return |
| `nextToken` | string | No | Pagination token from a previous request |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `groups` | json | List of groups with groupId, displayName, description |
| `nextToken` | string | Pagination token for the next page of results |
| `count` | number | Number of groups returned |

### `identity_center_create_account_assignment`

Grant a user or group access to an AWS account via a permission set (temporary elevated access)

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `instanceArn` | string | Yes | ARN of the Identity Center instance |
| `accountId` | string | Yes | AWS account ID to grant access to |
| `permissionSetArn` | string | Yes | ARN of the permission set to assign |
| `principalType` | string | Yes | Type of principal: USER or GROUP |
| `principalId` | string | Yes | Identity Store ID of the user or group |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Status message |
| `status` | string | Provisioning status: IN_PROGRESS, FAILED, or SUCCEEDED |
| `requestId` | string | Request ID to use with Check Assignment Status |
| `accountId` | string | Target AWS account ID |
| `permissionSetArn` | string | Permission set ARN |
| `principalType` | string | Principal type \(USER or GROUP\) |
| `principalId` | string | Principal ID |
| `failureReason` | string | Reason for failure if status is FAILED |
| `createdDate` | string | Date the request was created |

### `identity_center_delete_account_assignment`

Revoke a user's or group's access to an AWS account by removing a permission set assignment

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `instanceArn` | string | Yes | ARN of the Identity Center instance |
| `accountId` | string | Yes | AWS account ID to revoke access from |
| `permissionSetArn` | string | Yes | ARN of the permission set to remove |
| `principalType` | string | Yes | Type of principal: USER or GROUP |
| `principalId` | string | Yes | Identity Store ID of the user or group |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Status message |
| `status` | string | Deprovisioning status: IN_PROGRESS, FAILED, or SUCCEEDED |
| `requestId` | string | Request ID to use with Check Assignment Status |
| `accountId` | string | Target AWS account ID |
| `permissionSetArn` | string | Permission set ARN |
| `principalType` | string | Principal type \(USER or GROUP\) |
| `principalId` | string | Principal ID |
| `failureReason` | string | Reason for failure if status is FAILED |
| `createdDate` | string | Date the request was created |

### `identity_center_check_assignment_status`

Check the provisioning status of an account assignment creation request

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `instanceArn` | string | Yes | ARN of the Identity Center instance |
| `requestId` | string | Yes | Request ID returned from Create or Delete Account Assignment |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Human-readable status message |
| `status` | string | Current status: IN_PROGRESS, FAILED, or SUCCEEDED |
| `requestId` | string | The request ID that was checked |
| `accountId` | string | Target AWS account ID |
| `permissionSetArn` | string | Permission set ARN |
| `principalType` | string | Principal type \(USER or GROUP\) |
| `principalId` | string | Principal ID |
| `failureReason` | string | Reason for failure if status is FAILED |
| `createdDate` | string | Date the request was created |

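Create and delete assignment calls are asynchronous, so TEAM workflows typically poll the status tools until a terminal state is reached. A minimal sketch, assuming a hypothetical `checkStatus` callback that wraps the assignment-status tool:

```javascript
// Poll the asynchronous provisioning status: retry while IN_PROGRESS,
// resolve on SUCCEEDED, and fail fast on FAILED or after maxAttempts.
async function waitForAssignment(checkStatus, requestId, { maxAttempts = 10 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const { status, failureReason } = await checkStatus(requestId);
    if (status === "SUCCEEDED") return { status };
    if (status === "FAILED") throw new Error(failureReason ?? "provisioning failed");
    // In a real workflow, pause between attempts before polling again.
  }
  throw new Error("timed out waiting for assignment");
}
```

In a Sim workflow, the pause between attempts would typically be a delay step rather than code.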
### `identity_center_check_assignment_deletion_status`
|
||||
|
||||
Check the deprovisioning status of an account assignment deletion request
|
||||
|
||||
#### Input
|
||||
|
||||
| Parameter | Type | Required | Description |
|
||||
| --------- | ---- | -------- | ----------- |
|
||||
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
|
||||
| `accessKeyId` | string | Yes | AWS access key ID |
|
||||
| `secretAccessKey` | string | Yes | AWS secret access key |
|
||||
| `instanceArn` | string | Yes | ARN of the Identity Center instance |
|
||||
| `requestId` | string | Yes | Request ID returned from Delete Account Assignment |
|
||||
|
||||
#### Output
|
||||
|
||||
| Parameter | Type | Description |
|
||||
| --------- | ---- | ----------- |
|
||||
| `message` | string | Human-readable status message |
|
||||
| `status` | string | Current deletion status: IN_PROGRESS, FAILED, or SUCCEEDED |
|
||||
| `requestId` | string | The deletion request ID that was checked |
|
||||
| `accountId` | string | Target AWS account ID |
|
||||
| `permissionSetArn` | string | Permission set ARN |
|
||||
| `principalType` | string | Principal type \(USER or GROUP\) |
|
||||
| `principalId` | string | Principal ID |
|
||||
| `failureReason` | string | Reason for failure if status is FAILED |
|
||||
| `createdDate` | string | Date the request was created |
|
||||
|
||||
### `identity_center_list_account_assignments`

List all account assignments for a specific user or group across all accounts

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `instanceArn` | string | Yes | ARN of the Identity Center instance |
| `principalId` | string | Yes | Identity Store ID of the user or group |
| `principalType` | string | Yes | Type of principal: USER or GROUP |
| `maxResults` | number | No | Maximum number of assignments to return |
| `nextToken` | string | No | Pagination token from a previous request |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `assignments` | json | List of account assignments with accountId, permissionSetArn, principalType, principalId |
| `nextToken` | string | Pagination token for the next page of results |
| `count` | number | Number of assignments returned |

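Tools that return `nextToken` follow the usual AWS pagination pattern: feed the token back into the next call until none is returned. A sketch of draining all pages, where `fetchPage` is a stand-in for the actual tool invocation (shown synchronously for brevity; a real call would be async):

```typescript
// One page of results, shaped like the list_account_assignments output.
interface Page {
  assignments: string[];
  nextToken?: string;
}

// Drain every page by threading nextToken back into each request.
// fetchPage stands in for the tool call (which would be async in practice).
function listAllAssignments(fetchPage: (nextToken?: string) => Page): string[] {
  const all: string[] = [];
  let token: string | undefined;
  do {
    const page = fetchPage(token);
    all.push(...page.assignments);
    token = page.nextToken;
  } while (token !== undefined);
  return all;
}
```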
@@ -3,6 +3,7 @@
"index",
"a2a",
"agentmail",
"agentphone",
"agiloft",
"ahrefs",
"airtable",
@@ -31,6 +32,7 @@
"confluence",
"crowdstrike",
"cursor",
"custom-tools",
"dagster",
"databricks",
"datadog",
@@ -85,6 +87,7 @@
"huggingface",
"hunter",
"iam",
"identity_center",
"image_generator",
"imap",
"incidentio",
@@ -114,6 +117,7 @@
"microsoft_planner",
"microsoft_teams",
"mistral_parse",
"monday",
"mongodb",
"mysql",
"neo4j",
@@ -152,6 +156,7 @@
"sentry",
"serper",
"servicenow",
"ses",
"sftp",
"sharepoint",
"shopify",

387
apps/docs/content/docs/en/tools/monday.mdx
Normal file
@@ -0,0 +1,387 @@
---
title: Monday
description: Manage Monday.com boards, items, and groups
---

import { BlockInfoCard } from "@/components/ui/block-info-card"

<BlockInfoCard
  type="monday"
  color="#FFFFFF"
/>

## Usage Instructions

Integrate with Monday.com to list boards, get board details, fetch and search items, create and update items, archive or delete items, create subitems, move items between groups, add updates, and create groups.

## Tools

### `monday_list_boards`

List boards from your Monday.com account

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `limit` | number | No | Maximum number of boards to return \(default 25, max 500\) |
| `page` | number | No | Page number for pagination \(starts at 1\) |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `boards` | array | List of Monday.com boards |
| ↳ `id` | string | Board ID |
| ↳ `name` | string | Board name |
| ↳ `description` | string | Board description |
| ↳ `state` | string | Board state \(active, archived, deleted\) |
| ↳ `boardKind` | string | Board kind \(public, private, share\) |
| ↳ `itemsCount` | number | Number of items on the board |
| ↳ `url` | string | Board URL |
| ↳ `updatedAt` | string | Last updated timestamp |
| `count` | number | Number of boards returned |

### `monday_get_board`

Get a specific Monday.com board with its groups and columns

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `boardId` | string | Yes | The ID of the board to retrieve |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `board` | json | Board details |
| ↳ `id` | string | Board ID |
| ↳ `name` | string | Board name |
| ↳ `description` | string | Board description |
| ↳ `state` | string | Board state |
| ↳ `boardKind` | string | Board kind \(public, private, share\) |
| ↳ `itemsCount` | number | Number of items |
| ↳ `url` | string | Board URL |
| ↳ `updatedAt` | string | Last updated timestamp |
| `groups` | array | Groups on the board |
| ↳ `id` | string | Group ID |
| ↳ `title` | string | Group title |
| ↳ `color` | string | Group color \(hex\) |
| ↳ `archived` | boolean | Whether the group is archived |
| ↳ `deleted` | boolean | Whether the group is deleted |
| ↳ `position` | string | Group position |
| `columns` | array | Columns on the board |
| ↳ `id` | string | Column ID |
| ↳ `title` | string | Column title |
| ↳ `type` | string | Column type |

### `monday_get_item`

Get a specific item by ID from Monday.com

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `itemId` | string | Yes | The ID of the item to retrieve |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `item` | json | The requested item |
| ↳ `id` | string | Item ID |
| ↳ `name` | string | Item name |
| ↳ `state` | string | Item state |
| ↳ `boardId` | string | Board ID |
| ↳ `groupId` | string | Group ID |
| ↳ `groupTitle` | string | Group title |
| ↳ `columnValues` | array | Column values |
| ↳ `id` | string | Column ID |
| ↳ `text` | string | Text value |
| ↳ `value` | string | Raw JSON value |
| ↳ `type` | string | Column type |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `updatedAt` | string | Last updated timestamp |
| ↳ `url` | string | Item URL |

### `monday_get_items`

Get items from a Monday.com board

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `boardId` | string | Yes | The ID of the board to get items from |
| `groupId` | string | No | Filter items by group ID |
| `limit` | number | No | Maximum number of items to return \(default 25, max 500\) |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `items` | array | List of items from the board |
| ↳ `id` | string | Item ID |
| ↳ `name` | string | Item name |
| ↳ `state` | string | Item state \(active, archived, deleted\) |
| ↳ `boardId` | string | Board ID |
| ↳ `groupId` | string | Group ID |
| ↳ `groupTitle` | string | Group title |
| ↳ `columnValues` | array | Column values for the item |
| ↳ `id` | string | Column ID |
| ↳ `text` | string | Human-readable text value |
| ↳ `value` | string | Raw JSON value |
| ↳ `type` | string | Column type |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `updatedAt` | string | Last updated timestamp |
| ↳ `url` | string | Item URL |
| `count` | number | Number of items returned |

### `monday_search_items`

Search for items on a Monday.com board by column values

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `boardId` | string | Yes | The ID of the board to search |
| `columns` | string | Yes | JSON array of column filters, e.g. \[\{"column_id":"status","column_values":\["Done"\]\}\] |
| `limit` | number | No | Maximum number of items to return \(default 25, max 500\) |
| `cursor` | string | No | Pagination cursor from a previous search response |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `items` | array | Matching items |
| ↳ `id` | string | Item ID |
| ↳ `name` | string | Item name |
| ↳ `state` | string | Item state |
| ↳ `boardId` | string | Board ID |
| ↳ `groupId` | string | Group ID |
| ↳ `groupTitle` | string | Group title |
| ↳ `columnValues` | array | Column values |
| ↳ `id` | string | Column ID |
| ↳ `text` | string | Text value |
| ↳ `value` | string | Raw JSON value |
| ↳ `type` | string | Column type |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `updatedAt` | string | Last updated timestamp |
| ↳ `url` | string | Item URL |
| `count` | number | Number of items returned |
| `cursor` | string | Pagination cursor for fetching the next page |

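Because `columns` is a JSON string, it is easy to malform by hand-concatenation. Serializing from typed filters avoids that; a small illustrative helper (not part of the tool):

```typescript
// Shape Monday.com expects for column-value search filters.
interface ColumnFilter {
  column_id: string;
  column_values: string[];
}

// Build the JSON string for the `columns` parameter from typed filters.
function buildColumnFilters(filters: ColumnFilter[]): string {
  return JSON.stringify(filters);
}

const columns = buildColumnFilters([
  { column_id: "status", column_values: ["Done"] },
]);
// columns === '[{"column_id":"status","column_values":["Done"]}]'
```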
### `monday_create_item`

Create a new item on a Monday.com board

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `boardId` | string | Yes | The ID of the board to create the item on |
| `itemName` | string | Yes | The name of the new item |
| `groupId` | string | No | The group ID to create the item in |
| `columnValues` | string | No | JSON string of column values to set \(e.g., \{"status":"Done","date":"2024-01-01"\}\) |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `item` | json | The created item |
| ↳ `id` | string | Item ID |
| ↳ `name` | string | Item name |
| ↳ `state` | string | Item state |
| ↳ `boardId` | string | Board ID |
| ↳ `groupId` | string | Group ID |
| ↳ `groupTitle` | string | Group title |
| ↳ `columnValues` | array | Column values |
| ↳ `id` | string | Column ID |
| ↳ `text` | string | Text value |
| ↳ `value` | string | Raw JSON value |
| ↳ `type` | string | Column type |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `updatedAt` | string | Last updated timestamp |
| ↳ `url` | string | Item URL |

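When `columnValues` is assembled upstream as a raw string, validating it before the call gives clearer failures than a rejected API request. A defensive-parse sketch (hypothetical helper):

```typescript
// Validate a hand-written columnValues JSON string before passing it on.
// Returns the parsed object, or throws with a readable message.
function parseColumnValues(raw: string): Record<string, unknown> {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch (e) {
    throw new Error(`columnValues is not valid JSON: ${(e as Error).message}`);
  }
  if (typeof parsed !== "object" || parsed === null || Array.isArray(parsed)) {
    throw new Error("columnValues must be a JSON object keyed by column ID");
  }
  return parsed as Record<string, unknown>;
}
```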
### `monday_update_item`

Update column values of an item on a Monday.com board

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `boardId` | string | Yes | The ID of the board containing the item |
| `itemId` | string | Yes | The ID of the item to update |
| `columnValues` | string | Yes | JSON string of column values to update \(e.g., \{"status":"Done","date":"2024-01-01"\}\) |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `item` | json | The updated item |
| ↳ `id` | string | Item ID |
| ↳ `name` | string | Item name |
| ↳ `state` | string | Item state |
| ↳ `boardId` | string | Board ID |
| ↳ `groupId` | string | Group ID |
| ↳ `groupTitle` | string | Group title |
| ↳ `columnValues` | array | Column values |
| ↳ `id` | string | Column ID |
| ↳ `text` | string | Text value |
| ↳ `value` | string | Raw JSON value |
| ↳ `type` | string | Column type |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `updatedAt` | string | Last updated timestamp |
| ↳ `url` | string | Item URL |

### `monday_delete_item`

Delete an item from a Monday.com board

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `itemId` | string | Yes | The ID of the item to delete |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | The ID of the deleted item |

### `monday_archive_item`

Archive an item on a Monday.com board

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `itemId` | string | Yes | The ID of the item to archive |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | The ID of the archived item |

### `monday_move_item_to_group`

Move an item to a different group on a Monday.com board

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `itemId` | string | Yes | The ID of the item to move |
| `groupId` | string | Yes | The ID of the target group |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `item` | json | The moved item with updated group |
| ↳ `id` | string | Item ID |
| ↳ `name` | string | Item name |
| ↳ `state` | string | Item state |
| ↳ `boardId` | string | Board ID |
| ↳ `groupId` | string | Group ID |
| ↳ `groupTitle` | string | Group title |
| ↳ `columnValues` | array | Column values |
| ↳ `id` | string | Column ID |
| ↳ `text` | string | Text value |
| ↳ `value` | string | Raw JSON value |
| ↳ `type` | string | Column type |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `updatedAt` | string | Last updated timestamp |
| ↳ `url` | string | Item URL |

### `monday_create_subitem`

Create a subitem under a parent item on Monday.com

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `parentItemId` | string | Yes | The ID of the parent item |
| `itemName` | string | Yes | The name of the new subitem |
| `columnValues` | string | No | JSON string of column values to set |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `item` | json | The created subitem |
| ↳ `id` | string | Item ID |
| ↳ `name` | string | Item name |
| ↳ `state` | string | Item state |
| ↳ `boardId` | string | Board ID |
| ↳ `groupId` | string | Group ID |
| ↳ `groupTitle` | string | Group title |
| ↳ `columnValues` | array | Column values |
| ↳ `id` | string | Column ID |
| ↳ `text` | string | Text value |
| ↳ `value` | string | Raw JSON value |
| ↳ `type` | string | Column type |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `updatedAt` | string | Last updated timestamp |
| ↳ `url` | string | Item URL |

### `monday_create_update`

Add an update (comment) to a Monday.com item

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `itemId` | string | Yes | The ID of the item to add the update to |
| `body` | string | Yes | The update text content \(supports HTML\) |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `update` | json | The created update |
| ↳ `id` | string | Update ID |
| ↳ `body` | string | Update body \(HTML\) |
| ↳ `textBody` | string | Plain text body |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `creatorId` | string | Creator user ID |
| ↳ `itemId` | string | Item ID |

### `monday_create_group`

Create a new group on a Monday.com board

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `boardId` | string | Yes | The ID of the board to create the group on |
| `groupName` | string | Yes | The name of the new group \(max 255 characters\) |
| `groupColor` | string | No | The group color as a hex code \(e.g., "#ff642e"\) |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `group` | json | The created group |
| ↳ `id` | string | Group ID |
| ↳ `title` | string | Group title |
| ↳ `color` | string | Group color \(hex\) |
| ↳ `archived` | boolean | Whether archived |
| ↳ `deleted` | boolean | Whether deleted |
| ↳ `position` | string | Group position |

241
apps/docs/content/docs/en/tools/ses.mdx
Normal file
@@ -0,0 +1,241 @@
---
title: AWS SES
description: Send emails and manage templates with AWS Simple Email Service
---

import { BlockInfoCard } from "@/components/ui/block-info-card"

<BlockInfoCard
  type="ses"
  color="linear-gradient(45deg, #BD0816 0%, #FF5252 100%)"
/>

{/* MANUAL-CONTENT-START:intro */}
[Amazon Simple Email Service (SES)](https://aws.amazon.com/ses/) is a cloud-based email sending service designed for high-volume, transactional, and marketing email delivery. It provides a cost-effective, scalable way to send email without managing your own mail server infrastructure.

With AWS SES, you can:

- **Send simple emails**: Deliver one-off emails with plain text or HTML body content to individual recipients
- **Send templated emails**: Use pre-defined templates with variable substitution (e.g., `{{name}}`, `{{link}}`) for personalized emails at scale
- **Send bulk emails**: Deliver templated emails to large lists of recipients in a single API call, with per-destination data overrides
- **Manage email templates**: Create, retrieve, list, and delete reusable email templates for transactional and marketing campaigns
- **Monitor account health**: Retrieve your account's sending quota, send rate, and whether sending is currently enabled

In Sim, the AWS SES integration is designed for workflows that need reliable, programmatic email delivery — from access request notifications and approval alerts to bulk outreach and automated reporting. It pairs naturally with the IAM Identity Center integration for TEAM (Temporary Elevated Access Management) workflows, where email notifications are sent when access is provisioned, approved, or revoked.
{/* MANUAL-CONTENT-END */}

## Usage Instructions

Integrate AWS SES v2 into the workflow. Send simple, templated, and bulk emails. Manage email templates and retrieve account sending quota and verified identity information.

## Tools

### `ses_send_email`

Send an email via AWS SES using simple or HTML content

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `fromAddress` | string | Yes | Verified sender email address |
| `toAddresses` | string | Yes | Comma-separated list of recipient email addresses |
| `subject` | string | Yes | Email subject line |
| `bodyText` | string | No | Plain text email body |
| `bodyHtml` | string | No | HTML email body |
| `ccAddresses` | string | No | Comma-separated list of CC email addresses |
| `bccAddresses` | string | No | Comma-separated list of BCC email addresses |
| `replyToAddresses` | string | No | Comma-separated list of reply-to email addresses |
| `configurationSetName` | string | No | SES configuration set name for tracking |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `messageId` | string | SES message ID for the sent email |

### `ses_send_templated_email`

Send an email using an SES email template with dynamic template data

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `fromAddress` | string | Yes | Verified sender email address |
| `toAddresses` | string | Yes | Comma-separated list of recipient email addresses |
| `templateName` | string | Yes | Name of the SES email template to use |
| `templateData` | string | Yes | JSON string of key-value pairs for template variable substitution |
| `ccAddresses` | string | No | Comma-separated list of CC email addresses |
| `bccAddresses` | string | No | Comma-separated list of BCC email addresses |
| `configurationSetName` | string | No | SES configuration set name for tracking |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `messageId` | string | SES message ID for the sent email |

### `ses_send_bulk_email`

Send emails to multiple recipients using an SES template with per-recipient data

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `fromAddress` | string | Yes | Verified sender email address |
| `templateName` | string | Yes | Name of the SES email template to use |
| `destinations` | string | Yes | JSON array of destination objects with toAddresses \(string\[\]\) and optional templateData \(JSON string\); falls back to defaultTemplateData when omitted |
| `defaultTemplateData` | string | No | Default JSON template data used when a destination does not specify its own |
| `configurationSetName` | string | No | SES configuration set name for tracking |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `results` | array | Per-destination send results with status and messageId |
| `successCount` | number | Number of successfully sent emails |
| `failureCount` | number | Number of failed email sends |

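The `destinations` parameter nests a JSON string (`templateData`) inside a JSON array, which is awkward to build by hand. A sketch of assembling it from typed recipient records (the field shapes mirror the table above; the helper itself is illustrative):

```typescript
// One bulk-send destination: recipients plus optional per-recipient data.
interface Destination {
  toAddresses: string[];
  templateData?: string; // JSON string; tool falls back to defaultTemplateData
}

// Build the `destinations` JSON array from typed recipient records.
function buildDestinations(
  recipients: { email: string; data?: Record<string, unknown> }[]
): string {
  const destinations: Destination[] = recipients.map((r) => ({
    toAddresses: [r.email],
    // Only include templateData when per-recipient data is supplied.
    ...(r.data ? { templateData: JSON.stringify(r.data) } : {}),
  }));
  return JSON.stringify(destinations);
}
```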
### `ses_list_identities`

List all verified email identities (email addresses and domains) in your SES account

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `pageSize` | number | No | Maximum number of identities to return \(1-1000\) |
| `nextToken` | string | No | Pagination token from a previous list response |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `identities` | array | List of email identities with name, type, sending status, and verification status |
| `nextToken` | string | Pagination token for the next page of results |
| `count` | number | Number of identities returned |

### `ses_get_account`

Get SES account sending quota and status information

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `sendingEnabled` | boolean | Whether email sending is enabled for the account |
| `max24HourSend` | number | Maximum emails allowed per 24-hour period |
| `maxSendRate` | number | Maximum emails allowed per second |
| `sentLast24Hours` | number | Number of emails sent in the last 24 hours |

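The quota fields compose into a simple headroom check before a large send. A sketch of the arithmetic (helper names are illustrative):

```typescript
// Remaining sends allowed in the current 24-hour window.
function remainingQuota(max24HourSend: number, sentLast24Hours: number): number {
  return Math.max(0, max24HourSend - sentLast24Hours);
}

// Can a batch of `batchSize` emails go out without exceeding the daily quota?
function canSendBatch(
  sendingEnabled: boolean,
  max24HourSend: number,
  sentLast24Hours: number,
  batchSize: number
): boolean {
  return (
    sendingEnabled && remainingQuota(max24HourSend, sentLast24Hours) >= batchSize
  );
}
```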
### `ses_create_template`

Create a new SES email template for use with templated email sending

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `templateName` | string | Yes | Unique name for the email template |
| `subjectPart` | string | Yes | Subject line template \(supports \{\{variable\}\} substitution\) |
| `textPart` | string | No | Plain text version of the template body |
| `htmlPart` | string | No | HTML version of the template body |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Confirmation message for the created template |

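SES substitutes `{{variable}}` placeholders from the template data at send time. For previewing a template locally, a rough approximation can help; note that SES's real template engine supports more than this sketch, which only handles flat placeholders and, by assumption, leaves unknown ones intact:

```typescript
// Minimal local preview of SES-style {{variable}} substitution.
// Flat placeholders only; unknown variables are left untouched, which is
// an assumption of this sketch rather than documented SES behavior.
function renderTemplate(part: string, data: Record<string, string>): string {
  return part.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in data ? data[name] : match
  );
}

const subject = renderTemplate("Access granted for {{account}}", {
  account: "prod-123",
});
// subject === "Access granted for prod-123"
```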
### `ses_get_template`

Retrieve the content and details of an SES email template

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `templateName` | string | Yes | Name of the template to retrieve |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `templateName` | string | Name of the template |
| `subjectPart` | string | Subject line of the template |
| `textPart` | string | Plain text body of the template |
| `htmlPart` | string | HTML body of the template |

### `ses_list_templates`

List all SES email templates in your account

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `pageSize` | number | No | Maximum number of templates to return |
| `nextToken` | string | No | Pagination token from a previous list response |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `templates` | array | List of email templates with name and creation timestamp |
| `nextToken` | string | Pagination token for the next page of results |
| `count` | number | Number of templates returned |

### `ses_delete_template`

Delete an existing SES email template

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `templateName` | string | Yes | Name of the template to delete |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Confirmation message for the deleted template |

@@ -87,7 +87,7 @@ Read a specific page from a SharePoint site
|
||||
| ↳ `createdDateTime` | string | When the page was created |
|
||||
| ↳ `lastModifiedDateTime` | string | When the page was last modified |
|
||||
| `pages` | array | List of SharePoint pages |
|
||||
| ↳ `page` | object | The unique ID of the page |
|
||||
| ↳ `page` | object | page output from the tool |
|
||||
| ↳ `id` | string | The unique ID of the page |
|
||||
| ↳ `name` | string | The name of the page |
|
||||
| ↳ `title` | string | The title of the page |
|
||||
@@ -95,7 +95,7 @@ Read a specific page from a SharePoint site
|
||||
| ↳ `pageLayout` | string | The layout type of the page |
|
||||
| ↳ `createdDateTime` | string | When the page was created |
|
||||
| ↳ `lastModifiedDateTime` | string | When the page was last modified |
|
||||
| ↳ `content` | object | Extracted text content from the page |
|
||||
| ↳ `content` | object | content output from the tool |
|
||||
| ↳ `content` | string | Extracted text content from the page |
|
||||
| ↳ `canvasLayout` | object | Raw SharePoint canvas layout structure |
|
||||
| `content` | object | Content of the SharePoint page |
|
||||
@@ -127,9 +127,9 @@ List details of all SharePoint sites
|
||||
| ↳ `createdDateTime` | string | When the site was created |
|
||||
| ↳ `lastModifiedDateTime` | string | When the site was last modified |
|
||||
| ↳ `isPersonalSite` | boolean | Whether this is a personal site |
|
||||
| ↳ `root` | object | Server relative URL |
|
||||
| ↳ `root` | object | root output from the tool |
|
||||
| ↳ `serverRelativeUrl` | string | Server relative URL |
|
||||
| ↳ `siteCollection` | object | Site collection hostname |
|
||||
| ↳ `siteCollection` | object | siteCollection output from the tool |
|
||||
| ↳ `hostname` | string | Site collection hostname |
|
||||
| `sites` | array | List of all accessible SharePoint sites |
|
||||
| ↳ `id` | string | The unique ID of the site |
|
||||
|
||||
@@ -46,6 +46,7 @@ Assume an IAM role and receive temporary security credentials
|
||||
| `roleArn` | string | Yes | ARN of the IAM role to assume |
|
||||
| `roleSessionName` | string | Yes | Identifier for the assumed role session |
|
||||
| `durationSeconds` | number | No | Duration of the session in seconds \(900-43200, default 3600\) |
|
||||
| `policy` | string | No | JSON IAM policy to further restrict session permissions \(max 2048 chars\) |
|
||||
| `externalId` | string | No | External ID for cross-account access |
|
||||
| `serialNumber` | string | No | MFA device serial number or ARN |
|
||||
| `tokenCode` | string | No | MFA token code \(6 digits\) |
|
||||
@@ -61,6 +62,7 @@ Assume an IAM role and receive temporary security credentials
|
||||
| `assumedRoleArn` | string | ARN of the assumed role |
|
||||
| `assumedRoleId` | string | Assumed role ID with session name |
|
||||
| `packedPolicySize` | number | Percentage of allowed policy size used |
|
||||
| `sourceIdentity` | string | Source identity set on the role session, if any |
|
||||
|
||||
### `sts_get_caller_identity`
|
||||
|
||||
|
||||
@@ -66,7 +66,7 @@ Integrate AWS Textract into your workflow to extract text, tables, forms, and ke
| ↳ `Confidence` | number | Confidence score \(0-100\) |
| ↳ `Page` | number | Page number |
| ↳ `Geometry` | object | Location and bounding box information |
-| ↳ `BoundingBox` | object | Height as ratio of document height |
+| ↳ `BoundingBox` | object | BoundingBox output from the tool |
| ↳ `Height` | number | Height as ratio of document height |
| ↳ `Left` | number | Left position as ratio of document width |
| ↳ `Top` | number | Top position as ratio of document height |
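Since Textract expresses `Left`/`Top`/`Height` (and, in the full API response, `Width`, which the excerpt above omits) as ratios of the page dimensions, converting a bounding box to pixel coordinates is a simple multiplication. A sketch, assuming a `Width` key is present alongside the documented fields:

```python
def bounding_box_to_pixels(box: dict, page_width: int, page_height: int) -> dict:
    # Textract BoundingBox values are ratios of the page size, so pixel
    # coordinates are just ratio * dimension, rounded to whole pixels.
    return {
        "left": round(box["Left"] * page_width),
        "top": round(box["Top"] * page_height),
        "width": round(box.get("Width", 0) * page_width),
        "height": round(box["Height"] * page_height),
    }
```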
@@ -96,7 +96,7 @@ Retrieve insights and analytics for Typeform forms

| Parameter | Type | Description |
| --------- | ---- | ----------- |
-| `fields` | array | Number of users who dropped off at this field |
+| `fields` | array | Analytics data for individual form fields |
| ↳ `dropoffs` | number | Number of users who dropped off at this field |
| ↳ `id` | string | Unique field ID |
| ↳ `label` | string | Field label |
@@ -104,15 +104,15 @@ Retrieve insights and analytics for Typeform forms
| ↳ `title` | string | Field title/question |
| ↳ `type` | string | Field type \(e.g., short_text, multiple_choice\) |
| ↳ `views` | number | Number of times this field was viewed |
-| `form` | object | Average completion time for this platform |
-| ↳ `platforms` | array | Average completion time for this platform |
+| `form` | object | Form-level analytics and performance data |
+| ↳ `platforms` | array | Platform-specific analytics data |
| ↳ `average_time` | number | Average completion time for this platform |
| ↳ `completion_rate` | number | Completion rate for this platform |
| ↳ `platform` | string | Platform name \(e.g., desktop, mobile\) |
| ↳ `responses_count` | number | Number of responses from this platform |
| ↳ `total_visits` | number | Total visits from this platform |
| ↳ `unique_visits` | number | Unique visits from this platform |
-| ↳ `summary` | object | Overall average completion time |
+| ↳ `summary` | object | Overall form performance summary |
| ↳ `average_time` | number | Overall average completion time |
| ↳ `completion_rate` | number | Overall completion rate |
| ↳ `responses_count` | number | Total number of responses |
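The per-field `views` and `dropoffs` counters above combine naturally into a dropoff rate per field. A minimal sketch of that computation over the `fields` array — the function name is illustrative:

```python
def field_dropoff_rates(fields: list[dict]) -> dict[str, float]:
    # Dropoff rate = dropoffs / views per field, keyed by field id;
    # fields that were never viewed report 0.0 to avoid division by zero.
    rates = {}
    for field in fields:
        views = field.get("views", 0)
        rates[field["id"]] = round(field.get("dropoffs", 0) / views, 4) if views else 0.0
    return rates
```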
60 apps/docs/content/docs/en/triggers/airtable.mdx Normal file
@@ -0,0 +1,60 @@
---
title: Airtable
description: Available Airtable triggers for automating workflows
---

import { BlockInfoCard } from "@/components/ui/block-info-card"

<BlockInfoCard
  type="airtable"
  color="#E0E0E0"
/>

Airtable provides 1 trigger for automating workflows based on events.

## Triggers

### Airtable Webhook

Trigger workflow from Airtable record changes like create, update, and delete events (requires Airtable credentials)

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | This trigger requires airtable credentials to access your account. |
| `baseId` | string | Yes | The ID of the Airtable Base this webhook will monitor. |
| `tableId` | string | Yes | The ID of the table within the Base that the webhook will monitor. |
| `includeCellValues` | boolean | No | Enable to receive the complete record data in the payload, not just changes. |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `payloads` | array | The payloads of the Airtable changes |
| ↳ `timestamp` | string | Timestamp of the change |
| ↳ `baseTransactionNumber` | number | Transaction number |
| `latestPayload` | object | The most recent payload from Airtable |
| ↳ `timestamp` | string | ISO 8601 timestamp of the change |
| ↳ `baseTransactionNumber` | number | Transaction number |
| ↳ `payloadFormat` | string | Payload format version \(e.g., v0\) |
| ↳ `actionMetadata` | object | Metadata about who made the change |
| ↳ `source` | string | Source of the change \(e.g., client, publicApi\) |
| ↳ `sourceMetadata` | object | Source metadata including user info |
| ↳ `user` | object | User who made the change |
| ↳ `id` | string | User ID |
| ↳ `email` | string | User email |
| ↳ `name` | string | User name |
| ↳ `permissionLevel` | string | User permission level |
| ↳ `changedTablesById` | object | Tables that were changed \(keyed by table ID\) |
| ↳ `changedRecordsById` | object | Changed records keyed by record ID |
| ↳ `current` | object | Current state of the record |
| ↳ `cellValuesByFieldId` | object | Cell values keyed by field ID |
| ↳ `createdRecordsById` | object | Created records by ID |
| ↳ `destroyedRecordIds` | array | Array of destroyed record IDs |
| `airtableChanges` | array | Changes made to the Airtable table |
| ↳ `tableId` | string | Table ID |
| ↳ `recordId` | string | Record ID |
| ↳ `changeType` | string | Type of change \(created, changed, destroyed\) |
| ↳ `cellValuesByFieldId` | object | Cell values by field ID |
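The nested `changedTablesById` structure in an Airtable payload can be flattened into the per-record change list shape documented above. A sketch of that transformation, assuming the payload keys shown in the output table:

```python
def flatten_airtable_payload(payload: dict) -> list[dict]:
    # Walks changedTablesById and emits one entry per created, changed,
    # or destroyed record, similar in shape to the airtableChanges output.
    changes = []
    for table_id, table in payload.get("changedTablesById", {}).items():
        for rec_id, rec in table.get("createdRecordsById", {}).items():
            changes.append({"tableId": table_id, "recordId": rec_id,
                            "changeType": "created",
                            "cellValuesByFieldId": rec.get("cellValuesByFieldId", {})})
        for rec_id, rec in table.get("changedRecordsById", {}).items():
            # Changed records nest the new values under "current".
            changes.append({"tableId": table_id, "recordId": rec_id,
                            "changeType": "changed",
                            "cellValuesByFieldId": rec.get("current", {}).get("cellValuesByFieldId", {})})
        for rec_id in table.get("destroyedRecordIds", []):
            changes.append({"tableId": table_id, "recordId": rec_id,
                            "changeType": "destroyed", "cellValuesByFieldId": {}})
    return changes
```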
193 apps/docs/content/docs/en/triggers/ashby.mdx Normal file
@@ -0,0 +1,193 @@
---
title: Ashby
description: Available Ashby triggers for automating workflows
---

import { BlockInfoCard } from "@/components/ui/block-info-card"

<BlockInfoCard
  type="ashby"
  color="#5D4ED6"
/>

Ashby provides 6 triggers for automating workflows based on events.

## Triggers

### Ashby Application Submitted

Trigger workflow when a new application is submitted

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | API Key |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `action` | string | The webhook event type \(e.g., applicationSubmit, candidateHire\) |
| `application` | object | application output from the tool |
| ↳ `id` | string | Application UUID |
| ↳ `createdAt` | string | Application creation timestamp \(ISO 8601\) |
| ↳ `updatedAt` | string | Application last update timestamp \(ISO 8601\) |
| ↳ `status` | string | Application status \(Active, Hired, Archived, Lead\) |
| ↳ `candidate` | object | candidate output from the tool |
| ↳ `id` | string | Candidate UUID |
| ↳ `name` | string | Candidate name |
| ↳ `currentInterviewStage` | object | currentInterviewStage output from the tool |
| ↳ `id` | string | Current interview stage UUID |
| ↳ `title` | string | Current interview stage title |
| ↳ `job` | object | job output from the tool |
| ↳ `id` | string | Job UUID |
| ↳ `title` | string | Job title |

---

### Ashby Candidate Deleted

Trigger workflow when a candidate is deleted

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | API Key |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `action` | string | The webhook event type \(e.g., applicationSubmit, candidateHire\) |
| `candidate` | object | candidate output from the tool |
| ↳ `id` | string | Deleted candidate UUID |

---

### Ashby Candidate Hired

Trigger workflow when a candidate is hired

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | API Key |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `action` | string | The webhook event type \(e.g., applicationSubmit, candidateHire\) |
| `application` | object | application output from the tool |
| ↳ `id` | string | Application UUID |
| ↳ `createdAt` | string | Application creation timestamp \(ISO 8601\) |
| ↳ `updatedAt` | string | Application last update timestamp \(ISO 8601\) |
| ↳ `status` | string | Application status \(Hired\) |
| ↳ `candidate` | object | candidate output from the tool |
| ↳ `id` | string | Candidate UUID |
| ↳ `name` | string | Candidate name |
| ↳ `currentInterviewStage` | object | currentInterviewStage output from the tool |
| ↳ `id` | string | Current interview stage UUID |
| ↳ `title` | string | Current interview stage title |
| ↳ `job` | object | job output from the tool |
| ↳ `id` | string | Job UUID |
| ↳ `title` | string | Job title |
| `offer` | object | offer output from the tool |
| ↳ `id` | string | Accepted offer UUID |
| ↳ `applicationId` | string | Associated application UUID |
| ↳ `acceptanceStatus` | string | Offer acceptance status |
| ↳ `offerStatus` | string | Offer process status |
| ↳ `decidedAt` | string | Offer decision timestamp \(ISO 8601\) |
| ↳ `latestVersion` | object | latestVersion output from the tool |
| ↳ `id` | string | Latest offer version UUID |

---

### Ashby Candidate Stage Change

Trigger workflow when a candidate changes interview stages

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | API Key |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `action` | string | The webhook event type \(e.g., applicationSubmit, candidateHire\) |
| `application` | object | application output from the tool |
| ↳ `id` | string | Application UUID |
| ↳ `createdAt` | string | Application creation timestamp \(ISO 8601\) |
| ↳ `updatedAt` | string | Application last update timestamp \(ISO 8601\) |
| ↳ `status` | string | Application status \(Active, Hired, Archived, Lead\) |
| ↳ `candidate` | object | candidate output from the tool |
| ↳ `id` | string | Candidate UUID |
| ↳ `name` | string | Candidate name |
| ↳ `currentInterviewStage` | object | currentInterviewStage output from the tool |
| ↳ `id` | string | Current interview stage UUID |
| ↳ `title` | string | Current interview stage title |
| ↳ `job` | object | job output from the tool |
| ↳ `id` | string | Job UUID |
| ↳ `title` | string | Job title |

---

### Ashby Job Created

Trigger workflow when a new job is created

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | API Key |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `action` | string | The webhook event type \(e.g., applicationSubmit, candidateHire\) |
| `job` | object | job output from the tool |
| ↳ `id` | string | Job UUID |
| ↳ `title` | string | Job title |
| ↳ `confidential` | boolean | Whether the job is confidential |
| ↳ `status` | string | Job status \(Open, Closed, Draft, Archived\) |
| ↳ `employmentType` | string | Employment type \(Full-time, Part-time, etc.\) |

---

### Ashby Offer Created

Trigger workflow when a new offer is created

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | API Key |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `action` | string | The webhook event type \(e.g., applicationSubmit, candidateHire\) |
| `offer` | object | offer output from the tool |
| ↳ `id` | string | Offer UUID |
| ↳ `applicationId` | string | Associated application UUID |
| ↳ `acceptanceStatus` | string | Offer acceptance status \(Accepted, Declined, Pending, Created, Cancelled, WaitingOnResponse\) |
| ↳ `offerStatus` | string | Offer process status \(WaitingOnApprovalStart, WaitingOnOfferApproval, WaitingOnCandidateResponse, CandidateAccepted, CandidateRejected, OfferCancelled\) |
| ↳ `decidedAt` | string | Offer decision timestamp \(ISO 8601\). Typically null at creation; populated after candidate responds. |
| ↳ `latestVersion` | object | latestVersion output from the tool |
| ↳ `id` | string | Latest offer version UUID |
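Every Ashby trigger above carries an `action` field identifying the event type, so a receiving workflow can dispatch on it. A minimal sketch — only `applicationSubmit` and `candidateHire` are action values confirmed by the tables above; the handler names are hypothetical:

```python
def route_ashby_event(event: dict) -> str:
    # Dispatch on the webhook's `action` field; unknown or missing
    # actions fall through to "ignore". Handler names are illustrative.
    handlers = {
        "applicationSubmit": "handle_application_submitted",
        "candidateHire": "handle_candidate_hired",
    }
    return handlers.get(event.get("action"), "ignore")
```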
513 apps/docs/content/docs/en/triggers/attio.mdx Normal file
@@ -0,0 +1,513 @@
---
title: Attio
description: Available Attio triggers for automating workflows
---

import { BlockInfoCard } from "@/components/ui/block-info-card"

<BlockInfoCard
  type="attio"
  color="#1D1E20"
/>

Attio provides 22 triggers for automating workflows based on events.

## Triggers

### Attio Comment Created

Trigger workflow when a new comment is created in Attio

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | Attio Account |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `eventType` | string | The type of event \(e.g. record.created, note.created\) |
| `workspaceId` | string | The workspace ID |
| `threadId` | string | The thread ID |
| `commentId` | string | The comment ID |
| `objectId` | string | The object type ID |
| `recordId` | string | The record ID |
| `listId` | string | The list ID \(if comment is on a list entry\) |
| `entryId` | string | The list entry ID \(if comment is on a list entry\) |

---

### Attio Comment Deleted

Trigger workflow when a comment is deleted in Attio

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | Attio Account |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `eventType` | string | The type of event \(e.g. record.created, note.created\) |
| `workspaceId` | string | The workspace ID |
| `threadId` | string | The thread ID |
| `commentId` | string | The comment ID |
| `objectId` | string | The object type ID |
| `recordId` | string | The record ID |
| `listId` | string | The list ID \(if comment is on a list entry\) |
| `entryId` | string | The list entry ID \(if comment is on a list entry\) |

---

### Attio Comment Resolved

Trigger workflow when a comment thread is resolved in Attio

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | Attio Account |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `eventType` | string | The type of event \(e.g. record.created, note.created\) |
| `workspaceId` | string | The workspace ID |
| `threadId` | string | The thread ID |
| `commentId` | string | The comment ID |
| `objectId` | string | The object type ID |
| `recordId` | string | The record ID |
| `listId` | string | The list ID \(if comment is on a list entry\) |
| `entryId` | string | The list entry ID \(if comment is on a list entry\) |

---

### Attio Comment Unresolved

Trigger workflow when a comment thread is unresolved in Attio

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | Attio Account |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `eventType` | string | The type of event \(e.g. record.created, note.created\) |
| `workspaceId` | string | The workspace ID |
| `threadId` | string | The thread ID |
| `commentId` | string | The comment ID |
| `objectId` | string | The object type ID |
| `recordId` | string | The record ID |
| `listId` | string | The list ID \(if comment is on a list entry\) |
| `entryId` | string | The list entry ID \(if comment is on a list entry\) |

---

### Attio List Created

Trigger workflow when a list is created in Attio

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | Attio Account |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `eventType` | string | The type of event \(e.g. record.created, note.created\) |
| `workspaceId` | string | The workspace ID |
| `listId` | string | The list ID |

---

### Attio List Deleted

Trigger workflow when a list is deleted in Attio

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | Attio Account |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `eventType` | string | The type of event \(e.g. record.created, note.created\) |
| `workspaceId` | string | The workspace ID |
| `listId` | string | The list ID |

---

### Attio List Entry Created

Trigger workflow when a new list entry is created in Attio

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | Attio Account |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `eventType` | string | The type of event \(e.g. record.created, note.created\) |
| `workspaceId` | string | The workspace ID |
| `listId` | string | The list ID |
| `entryId` | string | The list entry ID |

---

### Attio List Entry Deleted

Trigger workflow when a list entry is deleted in Attio

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | Attio Account |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `eventType` | string | The type of event \(e.g. record.created, note.created\) |
| `workspaceId` | string | The workspace ID |
| `listId` | string | The list ID |
| `entryId` | string | The list entry ID |

---

### Attio List Entry Updated

Trigger workflow when a list entry is updated in Attio

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | Attio Account |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `eventType` | string | The type of event \(e.g. record.created, note.created\) |
| `workspaceId` | string | The workspace ID |
| `listId` | string | The list ID |
| `entryId` | string | The list entry ID |
| `attributeId` | string | The ID of the attribute that was updated on the list entry |

---

### Attio List Updated

Trigger workflow when a list is updated in Attio

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | Attio Account |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `eventType` | string | The type of event \(e.g. record.created, note.created\) |
| `workspaceId` | string | The workspace ID |
| `listId` | string | The list ID |

---

### Attio Note Created

Trigger workflow when a new note is created in Attio

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | Attio Account |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `eventType` | string | The type of event \(e.g. record.created, note.created\) |
| `workspaceId` | string | The workspace ID |
| `noteId` | string | The note ID |
| `parentObjectId` | string | The parent object type ID |
| `parentRecordId` | string | The parent record ID |

---

### Attio Note Deleted

Trigger workflow when a note is deleted in Attio

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | Attio Account |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `eventType` | string | The type of event \(e.g. record.created, note.created\) |
| `workspaceId` | string | The workspace ID |
| `noteId` | string | The note ID |
| `parentObjectId` | string | The parent object type ID |
| `parentRecordId` | string | The parent record ID |

---

### Attio Note Updated

Trigger workflow when a note is updated in Attio

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | Attio Account |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `eventType` | string | The type of event \(e.g. record.created, note.created\) |
| `workspaceId` | string | The workspace ID |
| `noteId` | string | The note ID |
| `parentObjectId` | string | The parent object type ID |
| `parentRecordId` | string | The parent record ID |

---

### Attio Record Created

Trigger workflow when a new record is created in Attio

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | Attio Account |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `eventType` | string | The type of event \(e.g. record.created, note.created\) |
| `workspaceId` | string | The workspace ID |
| `objectId` | string | The object type ID \(e.g. people, companies\) |
| `recordId` | string | The record ID |

---

### Attio Record Deleted

Trigger workflow when a record is deleted in Attio

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | Attio Account |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `eventType` | string | The type of event \(e.g. record.created, note.created\) |
| `workspaceId` | string | The workspace ID |
| `objectId` | string | The object type ID \(e.g. people, companies\) |
| `recordId` | string | The record ID |

---

### Attio Record Merged

Trigger workflow when two records are merged in Attio

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | Attio Account |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `eventType` | string | The type of event \(e.g. record.created, note.created\) |
| `workspaceId` | string | The workspace ID |
| `objectId` | string | The object type ID of the surviving record |
| `recordId` | string | The surviving record ID after merge |
| `duplicateObjectId` | string | The object type ID of the merged-away record |
| `duplicateRecordId` | string | The record ID that was merged away |

---

### Attio Record Updated

Trigger workflow when a record is updated in Attio

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | Attio Account |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `eventType` | string | The type of event \(e.g. record.created, note.created\) |
| `workspaceId` | string | The workspace ID |
| `objectId` | string | The object type ID \(e.g. people, companies\) |
| `recordId` | string | The record ID |
| `attributeId` | string | The ID of the attribute that was updated on the record |

---

### Attio Task Created

Trigger workflow when a new task is created in Attio

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | Attio Account |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `eventType` | string | The type of event \(e.g. record.created, note.created\) |
| `workspaceId` | string | The workspace ID |
| `taskId` | string | The task ID |

---

### Attio Task Deleted

Trigger workflow when a task is deleted in Attio

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | Attio Account |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `eventType` | string | The type of event \(e.g. record.created, note.created\) |
| `workspaceId` | string | The workspace ID |
| `taskId` | string | The task ID |

---

### Attio Task Updated

Trigger workflow when a task is updated in Attio

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | Attio Account |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `eventType` | string | The type of event \(e.g. record.created, note.created\) |
| `workspaceId` | string | The workspace ID |
| `taskId` | string | The task ID |

---

### Attio Webhook (All Events)

Trigger workflow on any Attio webhook event

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | Attio Account |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `eventType` | string | The type of event \(e.g. record.created, note.created\) |
| `id` | json | The event ID object containing resource identifiers |
| `parentObjectId` | string | The parent object type ID \(if applicable\) |
| `parentRecordId` | string | The parent record ID \(if applicable\) |

---

### Attio Workspace Member Created

Trigger workflow when a new member is added to the Attio workspace

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | Attio Account |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `eventType` | string | The type of event \(e.g. record.created, note.created\) |
| `workspaceId` | string | The workspace ID |
| `workspaceMemberId` | string | The workspace member ID |
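Attio event types follow a `resource.verb` naming pattern (e.g. `record.created`, `note.created`), which makes it easy to fan a catch-all webhook out to per-resource handlers. A small sketch of splitting an event type into its two parts:

```python
def split_attio_event_type(event_type: str) -> tuple[str, str]:
    # Attio event types shown above look like "resource.verb"
    # (e.g. "record.created"); split on the last dot.
    resource, _, verb = event_type.rpartition(".")
    return resource, verb
```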
293 apps/docs/content/docs/en/triggers/calcom.mdx Normal file
@@ -0,0 +1,293 @@
---
title: Cal.com
description: Available Cal.com triggers for automating workflows
---

import { BlockInfoCard } from "@/components/ui/block-info-card"

<BlockInfoCard
  type="calcom"
  color="#FFFFFE"
/>

Cal.com provides 9 triggers for automating workflows based on events.

## Triggers

### CalCom Booking Cancelled

Trigger workflow when a booking is cancelled in Cal.com

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Used to verify webhook requests via X-Cal-Signature-256 header. |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `triggerEvent` | string | The webhook event type |
| `createdAt` | string | When the webhook event was created \(ISO 8601\) |
| `payload` | object | payload output from the tool |
| ↳ `title` | string | Booking title |
| ↳ `description` | string | Booking description |
| ↳ `eventTypeId` | number | Event type ID |
| ↳ `startTime` | string | Booking start time \(ISO 8601\) |
| ↳ `endTime` | string | Booking end time \(ISO 8601\) |
| ↳ `uid` | string | Unique booking identifier |
| ↳ `bookingId` | number | Numeric booking ID |
| ↳ `status` | string | Booking status |
| ↳ `location` | string | Meeting location or URL |
| ↳ `cancellationReason` | string | Reason for cancellation |
| ↳ `responses` | json | Booking form responses |
| ↳ `metadata` | json | Custom metadata attached to the booking |
---
|
||||
|
||||
### CalCom Booking Created
|
||||
|
||||
Trigger workflow when a new booking is created in Cal.com
|
||||
|
||||
#### Configuration
|
||||
|
||||
| Parameter | Type | Required | Description |
|
||||
| --------- | ---- | -------- | ----------- |
|
||||
| `webhookSecret` | string | No | Used to verify webhook requests via X-Cal-Signature-256 header. |
|
||||
|
||||
#### Output
|
||||
|
||||
| Parameter | Type | Description |
|
||||
| --------- | ---- | ----------- |
|
||||
| `triggerEvent` | string | The webhook event type |
|
||||
| `createdAt` | string | When the webhook event was created \(ISO 8601\) |
|
||||
| `payload` | object | payload output from the tool |
|
||||
| ↳ `title` | string | Booking title |
|
||||
| ↳ `description` | string | Booking description |
|
||||
| ↳ `eventTypeId` | number | Event type ID |
|
||||
| ↳ `startTime` | string | Booking start time \(ISO 8601\) |
|
||||
| ↳ `endTime` | string | Booking end time \(ISO 8601\) |
|
||||
| ↳ `uid` | string | Unique booking identifier |
|
||||
| ↳ `bookingId` | number | Numeric booking ID |
|
||||
| ↳ `status` | string | Booking status |
|
||||
| ↳ `location` | string | Meeting location or URL |
|
||||
| ↳ `responses` | json | Booking form responses \(dynamic - fields depend on your event type configuration\) |
|
||||
| ↳ `metadata` | json | Custom metadata attached to the booking \(dynamic - user-defined key-value pairs\) |
|
||||
| ↳ `videoCallData` | json | Video call details \(structure varies by provider\) |
|
||||
|
||||
|
||||
---
|
||||
|
||||
### CalCom Booking Paid
|
||||
|
||||
Trigger workflow when payment is completed for a paid booking
|
||||
|
||||
#### Configuration
|
||||
|
||||
| Parameter | Type | Required | Description |
|
||||
| --------- | ---- | -------- | ----------- |
|
||||
| `webhookSecret` | string | No | Used to verify webhook requests via X-Cal-Signature-256 header. |
|
||||
|
||||
#### Output
|
||||
|
||||
| Parameter | Type | Description |
|
||||
| --------- | ---- | ----------- |
|
||||
| `triggerEvent` | string | The webhook event type \(BOOKING_PAID\) |
|
||||
| `createdAt` | string | When the webhook event was created \(ISO 8601\) |
|
||||
| `payload` | object | payload output from the tool |
|
||||
| ↳ `title` | string | Booking title |
|
||||
| ↳ `description` | string | Booking description |
|
||||
| ↳ `eventTypeId` | number | Event type ID |
|
||||
| ↳ `startTime` | string | Booking start time \(ISO 8601\) |
|
||||
| ↳ `endTime` | string | Booking end time \(ISO 8601\) |
|
||||
| ↳ `uid` | string | Unique booking identifier |
|
||||
| ↳ `bookingId` | number | Numeric booking ID |
|
||||
| ↳ `status` | string | Booking status |
|
||||
| ↳ `location` | string | Meeting location or URL |
|
||||
| ↳ `payment` | object | Payment details |
|
||||
| ↳ `id` | string | Payment ID |
|
||||
| ↳ `amount` | number | Payment amount |
|
||||
| ↳ `currency` | string | Payment currency |
|
||||
| ↳ `success` | boolean | Whether payment succeeded |
|
||||
| ↳ `metadata` | json | Custom metadata attached to the booking |
|
||||
|
||||
|
||||
---
|
||||
|
||||
### CalCom Booking Rejected
|
||||
|
||||
Trigger workflow when a booking request is rejected by the host
|
||||
|
||||
#### Configuration
|
||||
|
||||
| Parameter | Type | Required | Description |
|
||||
| --------- | ---- | -------- | ----------- |
|
||||
| `webhookSecret` | string | No | Used to verify webhook requests via X-Cal-Signature-256 header. |
|
||||
|
||||
#### Output
|
||||
|
||||
| Parameter | Type | Description |
|
||||
| --------- | ---- | ----------- |
|
||||
| `triggerEvent` | string | The webhook event type \(BOOKING_REJECTED\) |
|
||||
| `createdAt` | string | When the webhook event was created \(ISO 8601\) |
|
||||
| `payload` | object | payload output from the tool |
|
||||
| ↳ `title` | string | Booking title |
|
||||
| ↳ `description` | string | Booking description |
|
||||
| ↳ `eventTypeId` | number | Event type ID |
|
||||
| ↳ `startTime` | string | Requested start time \(ISO 8601\) |
|
||||
| ↳ `endTime` | string | Requested end time \(ISO 8601\) |
|
||||
| ↳ `uid` | string | Unique booking identifier |
|
||||
| ↳ `bookingId` | number | Numeric booking ID |
|
||||
| ↳ `status` | string | Booking status \(rejected\) |
|
||||
| ↳ `rejectionReason` | string | Reason for rejection provided by host |
|
||||
| ↳ `metadata` | json | Custom metadata attached to the booking |
|
||||
|
||||
|
||||
---
|
||||
|
||||
### CalCom Booking Requested
|
||||
|
||||
Trigger workflow when a booking request is submitted (pending confirmation)
|
||||
|
||||
#### Configuration
|
||||
|
||||
| Parameter | Type | Required | Description |
|
||||
| --------- | ---- | -------- | ----------- |
|
||||
| `webhookSecret` | string | No | Used to verify webhook requests via X-Cal-Signature-256 header. |
|
||||
|
||||
#### Output
|
||||
|
||||
| Parameter | Type | Description |
|
||||
| --------- | ---- | ----------- |
|
||||
| `triggerEvent` | string | The webhook event type \(BOOKING_REQUESTED\) |
|
||||
| `createdAt` | string | When the webhook event was created \(ISO 8601\) |
|
||||
| `payload` | object | payload output from the tool |
|
||||
| ↳ `title` | string | Booking title |
|
||||
| ↳ `description` | string | Booking description |
|
||||
| ↳ `eventTypeId` | number | Event type ID |
|
||||
| ↳ `startTime` | string | Requested start time \(ISO 8601\) |
|
||||
| ↳ `endTime` | string | Requested end time \(ISO 8601\) |
|
||||
| ↳ `uid` | string | Unique booking identifier |
|
||||
| ↳ `bookingId` | number | Numeric booking ID |
|
||||
| ↳ `status` | string | Booking status \(pending\) |
|
||||
| ↳ `location` | string | Meeting location or URL |
|
||||
| ↳ `responses` | json | Booking form responses |
|
||||
| ↳ `metadata` | json | Custom metadata attached to the booking |
|
||||
|
||||
|
||||
---
|
||||
|
||||
### CalCom Booking Rescheduled
|
||||
|
||||
Trigger workflow when a booking is rescheduled in Cal.com
|
||||
|
||||
#### Configuration
|
||||
|
||||
| Parameter | Type | Required | Description |
|
||||
| --------- | ---- | -------- | ----------- |
|
||||
| `webhookSecret` | string | No | Used to verify webhook requests via X-Cal-Signature-256 header. |
|
||||
|
||||
#### Output
|
||||
|
||||
| Parameter | Type | Description |
|
||||
| --------- | ---- | ----------- |
|
||||
| `triggerEvent` | string | The webhook event type |
|
||||
| `createdAt` | string | When the webhook event was created \(ISO 8601\) |
|
||||
| `payload` | object | payload output from the tool |
|
||||
| ↳ `title` | string | Booking title |
|
||||
| ↳ `description` | string | Booking description |
|
||||
| ↳ `eventTypeId` | number | Event type ID |
|
||||
| ↳ `startTime` | string | New booking start time \(ISO 8601\) |
|
||||
| ↳ `endTime` | string | New booking end time \(ISO 8601\) |
|
||||
| ↳ `uid` | string | Unique booking identifier |
|
||||
| ↳ `bookingId` | number | Numeric booking ID |
|
||||
| ↳ `status` | string | Booking status |
|
||||
| ↳ `location` | string | Meeting location or URL |
|
||||
| ↳ `rescheduleId` | number | Previous booking ID |
|
||||
| ↳ `rescheduleUid` | string | Previous booking UID |
|
||||
| ↳ `rescheduleStartTime` | string | Original start time \(ISO 8601\) |
|
||||
| ↳ `rescheduleEndTime` | string | Original end time \(ISO 8601\) |
|
||||
| ↳ `responses` | json | Booking form responses |
|
||||
| ↳ `metadata` | json | Custom metadata attached to the booking |
|
||||
|
||||
|
||||
---
|
||||
|
||||
### CalCom Meeting Ended
|
||||
|
||||
Trigger workflow when a Cal.com meeting ends
|
||||
|
||||
#### Configuration
|
||||
|
||||
| Parameter | Type | Required | Description |
|
||||
| --------- | ---- | -------- | ----------- |
|
||||
| `webhookSecret` | string | No | Used to verify webhook requests via X-Cal-Signature-256 header. |
|
||||
|
||||
#### Output
|
||||
|
||||
| Parameter | Type | Description |
|
||||
| --------- | ---- | ----------- |
|
||||
| `triggerEvent` | string | The webhook event type \(MEETING_ENDED\) |
|
||||
| `createdAt` | string | When the webhook event was created \(ISO 8601\) |
|
||||
| `payload` | object | payload output from the tool |
|
||||
| ↳ `title` | string | Meeting title |
|
||||
| ↳ `eventTypeId` | number | Event type ID |
|
||||
| ↳ `startTime` | string | Meeting start time \(ISO 8601\) |
|
||||
| ↳ `endTime` | string | Meeting end time \(ISO 8601\) |
|
||||
| ↳ `uid` | string | Unique booking identifier |
|
||||
| ↳ `bookingId` | number | Numeric booking ID |
|
||||
| ↳ `duration` | number | Actual meeting duration in minutes |
|
||||
| ↳ `videoCallData` | json | Video call details |
|
||||
|
||||
|
||||
---
|
||||
|
||||
### CalCom Recording Ready
|
||||
|
||||
Trigger workflow when a meeting recording is ready for download
|
||||
|
||||
#### Configuration
|
||||
|
||||
| Parameter | Type | Required | Description |
|
||||
| --------- | ---- | -------- | ----------- |
|
||||
| `webhookSecret` | string | No | Used to verify webhook requests via X-Cal-Signature-256 header. |
|
||||
|
||||
#### Output
|
||||
|
||||
| Parameter | Type | Description |
|
||||
| --------- | ---- | ----------- |
|
||||
| `triggerEvent` | string | The webhook event type \(RECORDING_READY\) |
|
||||
| `createdAt` | string | When the webhook event was created \(ISO 8601\) |
|
||||
| `payload` | object | payload output from the tool |
|
||||
| ↳ `title` | string | Meeting title |
|
||||
| ↳ `eventTypeId` | number | Event type ID |
|
||||
| ↳ `startTime` | string | Meeting start time \(ISO 8601\) |
|
||||
| ↳ `endTime` | string | Meeting end time \(ISO 8601\) |
|
||||
| ↳ `uid` | string | Unique booking identifier |
|
||||
| ↳ `bookingId` | number | Numeric booking ID |
|
||||
| ↳ `recordingUrl` | string | URL to download the recording |
|
||||
| ↳ `transcription` | string | Meeting transcription text \(if available\) |
|
||||
|
||||
|
||||
---
|
||||
|
||||
### CalCom Webhook (All Events)
|
||||
|
||||
Trigger workflow on any Cal.com webhook event (configure event types in Cal.com)
|
||||
|
||||
#### Configuration
|
||||
|
||||
| Parameter | Type | Required | Description |
|
||||
| --------- | ---- | -------- | ----------- |
|
||||
| `webhookSecret` | string | No | Used to verify webhook requests via X-Cal-Signature-256 header. |
|
||||
|
||||
#### Output
|
||||
|
||||
| Parameter | Type | Description |
|
||||
| --------- | ---- | ----------- |
|
||||
| `triggerEvent` | string | The webhook event type \(e.g., BOOKING_CREATED, MEETING_ENDED\) |
|
||||
| `createdAt` | string | When the webhook event was created \(ISO 8601\) |
|
||||
| `payload` | json | Complete webhook payload \(structure varies by event type\) |
|
||||
|
||||
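Each Cal.com trigger above accepts an optional `webhookSecret` that is checked against the `X-Cal-Signature-256` header. A minimal verification sketch, assuming the header carries a hex-encoded HMAC-SHA256 digest of the raw request body (confirm the exact scheme against Cal.com's webhook documentation):

```python
import hashlib
import hmac


def verify_cal_signature(raw_body: bytes, signature_header: str, webhook_secret: str) -> bool:
    """Return True if X-Cal-Signature-256 matches an HMAC-SHA256 of the raw body."""
    expected = hmac.new(webhook_secret.encode(), raw_body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information during comparison
    return hmac.compare_digest(expected, signature_header)
```

Verify against the raw bytes of the request before JSON parsing; re-serialized JSON will generally not reproduce the original digest.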
181
apps/docs/content/docs/en/triggers/calendly.mdx
Normal file
@@ -0,0 +1,181 @@
---
title: Calendly
description: Available Calendly triggers for automating workflows
---

import { BlockInfoCard } from "@/components/ui/block-info-card"

<BlockInfoCard
  type="calendly"
  color="#FFFFFF"
/>

Calendly provides 4 triggers for automating workflows based on events.

## Triggers

### Calendly Invitee Canceled

Trigger workflow when someone cancels a scheduled event on Calendly

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Personal Access Token |
| `organization` | string | Yes | Organization URI for the webhook subscription. Get this from |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `event` | string | Event type \(invitee.created or invitee.canceled\) |
| `created_at` | string | Webhook event creation timestamp |
| `created_by` | string | URI of the Calendly user who created this webhook |
| `payload` | object | payload output from the tool |
| ↳ `uri` | string | Invitee URI |
| ↳ `email` | string | Invitee email address |
| ↳ `name` | string | Invitee full name |
| ↳ `first_name` | string | Invitee first name |
| ↳ `last_name` | string | Invitee last name |
| ↳ `status` | string | Invitee status \(active or canceled\) |
| ↳ `timezone` | string | Invitee timezone |
| ↳ `event` | string | Scheduled event URI |
| ↳ `text_reminder_number` | string | Phone number for text reminders |
| ↳ `rescheduled` | boolean | Whether this invitee rescheduled |
| ↳ `old_invitee` | string | URI of the old invitee \(if rescheduled\) |
| ↳ `new_invitee` | string | URI of the new invitee \(if rescheduled\) |
| ↳ `cancel_url` | string | URL to cancel the event |
| ↳ `reschedule_url` | string | URL to reschedule the event |
| ↳ `created_at` | string | Invitee creation timestamp |
| ↳ `updated_at` | string | Invitee last update timestamp |
| ↳ `canceled` | boolean | Whether the event was canceled |
| ↳ `cancellation` | object | Cancellation details |
| ↳ `canceled_by` | string | Who canceled the event |
| ↳ `reason` | string | Cancellation reason |
| ↳ `payment` | object | Payment details |
| ↳ `id` | string | Payment ID |
| ↳ `provider` | string | Payment provider |
| ↳ `amount` | number | Payment amount |
| ↳ `currency` | string | Payment currency |
| ↳ `terms` | string | Payment terms |
| ↳ `successful` | boolean | Whether payment was successful |
| ↳ `no_show` | object | No-show details |
| ↳ `created_at` | string | No-show marked timestamp |
| ↳ `reconfirmation` | object | Reconfirmation details |
| ↳ `created_at` | string | Reconfirmation timestamp |
| ↳ `confirmed_at` | string | Confirmation timestamp |

---

### Calendly Invitee Created

Trigger workflow when someone schedules a new event on Calendly

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Personal Access Token |
| `organization` | string | Yes | Organization URI for the webhook subscription. Get this from |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `event` | string | Event type \(invitee.created or invitee.canceled\) |
| `created_at` | string | Webhook event creation timestamp |
| `created_by` | string | URI of the Calendly user who created this webhook |
| `payload` | object | payload output from the tool |
| ↳ `uri` | string | Invitee URI |
| ↳ `email` | string | Invitee email address |
| ↳ `name` | string | Invitee full name |
| ↳ `first_name` | string | Invitee first name |
| ↳ `last_name` | string | Invitee last name |
| ↳ `status` | string | Invitee status \(active or canceled\) |
| ↳ `timezone` | string | Invitee timezone |
| ↳ `event` | string | Scheduled event URI |
| ↳ `text_reminder_number` | string | Phone number for text reminders |
| ↳ `rescheduled` | boolean | Whether this invitee rescheduled |
| ↳ `old_invitee` | string | URI of the old invitee \(if rescheduled\) |
| ↳ `new_invitee` | string | URI of the new invitee \(if rescheduled\) |
| ↳ `cancel_url` | string | URL to cancel the event |
| ↳ `reschedule_url` | string | URL to reschedule the event |
| ↳ `created_at` | string | Invitee creation timestamp |
| ↳ `updated_at` | string | Invitee last update timestamp |
| ↳ `canceled` | boolean | Whether the event was canceled |
| ↳ `cancellation` | object | Cancellation details |
| ↳ `canceled_by` | string | Who canceled the event |
| ↳ `reason` | string | Cancellation reason |
| ↳ `payment` | object | Payment details |
| ↳ `id` | string | Payment ID |
| ↳ `provider` | string | Payment provider |
| ↳ `amount` | number | Payment amount |
| ↳ `currency` | string | Payment currency |
| ↳ `terms` | string | Payment terms |
| ↳ `successful` | boolean | Whether payment was successful |
| ↳ `no_show` | object | No-show details |
| ↳ `created_at` | string | No-show marked timestamp |
| ↳ `reconfirmation` | object | Reconfirmation details |
| ↳ `created_at` | string | Reconfirmation timestamp |
| ↳ `confirmed_at` | string | Confirmation timestamp |

---

### Calendly Routing Form Submitted

Trigger workflow when someone submits a Calendly routing form

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Personal Access Token |
| `organization` | string | Yes | Organization URI for the webhook subscription. Get this from |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `event` | string | Event type \(routing_form_submission.created\) |
| `created_at` | string | Webhook event creation timestamp |
| `created_by` | string | URI of the Calendly user who created this webhook |
| `payload` | object | payload output from the tool |
| ↳ `uri` | string | Routing form submission URI |
| ↳ `routing_form` | string | Routing form URI |
| ↳ `submitter` | object | Submitter details |
| ↳ `uri` | string | Submitter URI |
| ↳ `email` | string | Submitter email address |
| ↳ `name` | string | Submitter full name |
| ↳ `submitter_type` | string | Type of submitter |
| ↳ `result` | object | Routing result details |
| ↳ `type` | string | Result type \(event_type, custom_message, or external_url\) |
| ↳ `value` | string | Result value \(event type URI, message, or URL\) |
| ↳ `created_at` | string | Submission creation timestamp |
| ↳ `updated_at` | string | Submission last update timestamp |

---

### Calendly Webhook

Trigger workflow from any Calendly webhook event

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Personal Access Token |
| `organization` | string | Yes | Organization URI for the webhook subscription. Get this from |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `event` | string | Event type \(invitee.created, invitee.canceled, or routing_form_submission.created\) |
| `created_at` | string | Webhook event creation timestamp |
| `created_by` | string | URI of the Calendly user who created this webhook |
| `payload` | object | Complete event payload \(structure varies by event type\) |
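All four Calendly triggers share the same top-level `event` / `created_at` / `created_by` / `payload` envelope, so a single webhook endpoint can dispatch on the `event` field. A sketch of that dispatch (the handler names are illustrative, not part of any API):

```python
def route_calendly_event(body: dict) -> str:
    """Choose a handler name from the top-level Calendly `event` field."""
    handlers = {
        "invitee.created": "handle_invitee_created",
        "invitee.canceled": "handle_invitee_canceled",
        "routing_form_submission.created": "handle_routing_form",
    }
    # Unknown or missing event types are ignored rather than rejected,
    # so new Calendly event types do not break the endpoint.
    return handlers.get(body.get("event", ""), "ignore")
```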
163
apps/docs/content/docs/en/triggers/circleback.mdx
Normal file
@@ -0,0 +1,163 @@
---
title: Circleback
description: Available Circleback triggers for automating workflows
---

import { BlockInfoCard } from "@/components/ui/block-info-card"

<BlockInfoCard
  type="circleback"
  color="linear-gradient(180deg, #E0F7FA 0%, #FFFFFF 100%)"
/>

Circleback provides 3 triggers for automating workflows based on events.

## Triggers

### Circleback Meeting Completed

Trigger workflow when a meeting is processed and ready in Circleback

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Validates that webhook deliveries originate from Circleback using HMAC-SHA256. |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | number | Circleback meeting ID |
| `name` | string | Meeting title/name |
| `url` | string | URL of the virtual meeting \(Zoom, Google Meet, Teams, etc.\) |
| `createdAt` | string | ISO8601 timestamp when meeting was created |
| `duration` | number | Meeting duration in seconds |
| `recordingUrl` | string | Recording URL \(valid for 24 hours, if enabled\) |
| `tags` | array | Array of tag strings |
| `icalUid` | string | Calendar event identifier |
| `attendees` | array | Array of attendee objects with name and email |
| ↳ `name` | string | Attendee name |
| ↳ `email` | string | Attendee email address |
| `notes` | string | Meeting notes in Markdown format |
| `actionItems` | array | Array of action item objects |
| ↳ `id` | number | Action item ID |
| ↳ `title` | string | Action item title |
| ↳ `description` | string | Action item description |
| ↳ `assignee` | object | Person assigned to the action item \(or null\) |
| ↳ `name` | string | Assignee name |
| ↳ `email` | string | Assignee email |
| ↳ `status` | string | Status: PENDING or DONE |
| `transcript` | array | Array of transcript segments |
| ↳ `speaker` | string | Speaker name |
| ↳ `text` | string | Transcript text |
| ↳ `timestamp` | number | Timestamp in seconds |
| `insights` | object | User-created insights keyed by insight name |
| `meeting` | object | Full meeting payload object |
| ↳ `id` | number | Meeting ID |
| ↳ `name` | string | Meeting name |
| ↳ `url` | string | Meeting URL |
| ↳ `duration` | number | Duration in seconds |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `recordingUrl` | string | Recording URL |

---

### Circleback Meeting Notes Ready

Trigger workflow when meeting notes and action items are ready

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Validates that webhook deliveries originate from Circleback using HMAC-SHA256. |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | number | Circleback meeting ID |
| `name` | string | Meeting title/name |
| `url` | string | URL of the virtual meeting \(Zoom, Google Meet, Teams, etc.\) |
| `createdAt` | string | ISO8601 timestamp when meeting was created |
| `duration` | number | Meeting duration in seconds |
| `recordingUrl` | string | Recording URL \(valid for 24 hours, if enabled\) |
| `tags` | array | Array of tag strings |
| `icalUid` | string | Calendar event identifier |
| `attendees` | array | Array of attendee objects with name and email |
| ↳ `name` | string | Attendee name |
| ↳ `email` | string | Attendee email address |
| `notes` | string | Meeting notes in Markdown format |
| `actionItems` | array | Array of action item objects |
| ↳ `id` | number | Action item ID |
| ↳ `title` | string | Action item title |
| ↳ `description` | string | Action item description |
| ↳ `assignee` | object | Person assigned to the action item \(or null\) |
| ↳ `name` | string | Assignee name |
| ↳ `email` | string | Assignee email |
| ↳ `status` | string | Status: PENDING or DONE |
| `transcript` | array | Array of transcript segments |
| ↳ `speaker` | string | Speaker name |
| ↳ `text` | string | Transcript text |
| ↳ `timestamp` | number | Timestamp in seconds |
| `insights` | object | User-created insights keyed by insight name |
| `meeting` | object | Full meeting payload object |
| ↳ `id` | number | Meeting ID |
| ↳ `name` | string | Meeting name |
| ↳ `url` | string | Meeting URL |
| ↳ `duration` | number | Duration in seconds |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `recordingUrl` | string | Recording URL |

---

### Circleback Webhook

Generic webhook trigger for all Circleback events

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Validates that webhook deliveries originate from Circleback using HMAC-SHA256. |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | number | Circleback meeting ID |
| `name` | string | Meeting title/name |
| `url` | string | URL of the virtual meeting \(Zoom, Google Meet, Teams, etc.\) |
| `createdAt` | string | ISO8601 timestamp when meeting was created |
| `duration` | number | Meeting duration in seconds |
| `recordingUrl` | string | Recording URL \(valid for 24 hours, if enabled\) |
| `tags` | array | Array of tag strings |
| `icalUid` | string | Calendar event identifier |
| `attendees` | array | Array of attendee objects with name and email |
| ↳ `name` | string | Attendee name |
| ↳ `email` | string | Attendee email address |
| `notes` | string | Meeting notes in Markdown format |
| `actionItems` | array | Array of action item objects |
| ↳ `id` | number | Action item ID |
| ↳ `title` | string | Action item title |
| ↳ `description` | string | Action item description |
| ↳ `assignee` | object | Person assigned to the action item \(or null\) |
| ↳ `name` | string | Assignee name |
| ↳ `email` | string | Assignee email |
| ↳ `status` | string | Status: PENDING or DONE |
| `transcript` | array | Array of transcript segments |
| ↳ `speaker` | string | Speaker name |
| ↳ `text` | string | Transcript text |
| ↳ `timestamp` | number | Timestamp in seconds |
| `insights` | object | User-created insights keyed by insight name |
| `meeting` | object | Full meeting payload object |
| ↳ `id` | number | Meeting ID |
| ↳ `name` | string | Meeting name |
| ↳ `url` | string | Meeting URL |
| ↳ `duration` | number | Duration in seconds |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `recordingUrl` | string | Recording URL |
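All three Circleback triggers emit the same meeting payload, and a common downstream step is pulling out the open action items. A sketch that collects `PENDING` items and their assignee emails, using the field names from the output tables above and tolerating a null `assignee`:

```python
def pending_action_items(payload: dict) -> list:
    """Collect PENDING action items from a Circleback meeting payload."""
    items = []
    for item in payload.get("actionItems", []):
        if item.get("status") != "PENDING":
            continue
        # assignee may be null per the payload schema
        assignee = item.get("assignee") or {}
        items.append({"title": item.get("title"), "assignee_email": assignee.get("email")})
    return items
```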
692
apps/docs/content/docs/en/triggers/confluence.mdx
Normal file
@@ -0,0 +1,692 @@
|
||||
---
|
||||
title: Confluence
|
||||
description: Available Confluence triggers for automating workflows
|
||||
---
|
||||
|
||||
import { BlockInfoCard } from "@/components/ui/block-info-card"
|
||||
|
||||
<BlockInfoCard
|
||||
type="confluence"
|
||||
color="#E0E0E0"
|
||||
/>
|
||||
|
||||
Confluence provides 23 triggers for automating workflows based on events.
|
||||
|
||||
## Triggers
|
||||
|
||||
### Confluence Attachment Created
|
||||
|
||||
Trigger workflow when an attachment is uploaded in Confluence
|
||||
|
||||
#### Configuration
|
||||
|
||||
| Parameter | Type | Required | Description |
|
||||
| --------- | ---- | -------- | ----------- |
|
||||
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
|
||||
| `confluenceDomain` | string | No | Your Confluence Cloud domain |
|
||||
| `confluenceEmail` | string | No | Your Atlassian account email. Required together with API token to download attachment files. |
|
||||
| `confluenceApiToken` | string | No | API token from https://id.atlassian.com/manage-profile/security/api-tokens. Required to download attachment file content. |
|
||||
| `includeFileContent` | boolean | No | Download and include actual file content from attachments. Requires email, API token, and domain. |
|
||||
|
||||
#### Output
|
||||
|
||||
| Parameter | Type | Description |
|
||||
| --------- | ---- | ----------- |
|
||||
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
|
||||
| `userAccountId` | string | Account ID of the user who triggered the event |
|
||||
| `accountType` | string | Account type \(e.g., customer\) |
|
||||
| `id` | number | Content ID |
|
||||
| `title` | string | Content title |
|
||||
| `contentType` | string | Content type \(page, blogpost, comment, attachment\) |
|
||||
| `version` | number | Version number |
|
||||
| `spaceKey` | string | Space key the content belongs to |
|
||||
| `creatorAccountId` | string | Account ID of the creator |
|
||||
| `lastModifierAccountId` | string | Account ID of the last modifier |
|
||||
| `self` | string | URL link to the content |
|
||||
| `creationDate` | number | Creation timestamp \(Unix epoch milliseconds\) |
|
||||
| `modificationDate` | number | Last modification timestamp \(Unix epoch milliseconds\) |
|
||||
| `attachment` | object | attachment output from the tool |
|
||||
| ↳ `mediaType` | string | MIME type of the attachment |
|
||||
| ↳ `fileSize` | number | File size in bytes |
|
||||
| ↳ `parent` | object | parent output from the tool |
|
||||
| ↳ `id` | number | Container page/blog ID |
|
||||
| ↳ `title` | string | Container page/blog title |
|
||||
| ↳ `contentType` | string | Container content type |
|
||||
| `files` | file[] | Attachment file content downloaded from Confluence \(if includeFileContent is enabled with credentials\) |
|
||||
|
||||
|
||||
---
|
||||
|
||||
### Confluence Attachment Removed

Trigger workflow when an attachment is removed in Confluence

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |
| `confluenceEmail` | string | No | Your Atlassian account email. Required together with the API token to download attachment files. |
| `confluenceApiToken` | string | No | API token from https://id.atlassian.com/manage-profile/security/api-tokens. Required to download attachment file content. |
| `includeFileContent` | boolean | No | Download and include actual file content from attachments. Requires email, API token, and domain. |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |
| `id` | number | Content ID |
| `title` | string | Content title |
| `contentType` | string | Content type \(page, blogpost, comment, attachment\) |
| `version` | number | Version number |
| `spaceKey` | string | Space key the content belongs to |
| `creatorAccountId` | string | Account ID of the creator |
| `lastModifierAccountId` | string | Account ID of the last modifier |
| `self` | string | URL link to the content |
| `creationDate` | number | Creation timestamp \(Unix epoch milliseconds\) |
| `modificationDate` | number | Last modification timestamp \(Unix epoch milliseconds\) |
| `attachment` | object | Attachment details |
| ↳ `mediaType` | string | MIME type of the attachment |
| ↳ `fileSize` | number | File size in bytes |
| ↳ `parent` | object | Parent container details |
| ↳ `id` | number | Container page/blog ID |
| ↳ `title` | string | Container page/blog title |
| ↳ `contentType` | string | Container content type |
| `files` | file[] | Attachment file content downloaded from Confluence \(if `includeFileContent` is enabled with credentials\) |

---

### Confluence Attachment Updated

Trigger workflow when an attachment is updated in Confluence

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |
| `confluenceEmail` | string | No | Your Atlassian account email. Required together with the API token to download attachment files. |
| `confluenceApiToken` | string | No | API token from https://id.atlassian.com/manage-profile/security/api-tokens. Required to download attachment file content. |
| `includeFileContent` | boolean | No | Download and include actual file content from attachments. Requires email, API token, and domain. |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |
| `id` | number | Content ID |
| `title` | string | Content title |
| `contentType` | string | Content type \(page, blogpost, comment, attachment\) |
| `version` | number | Version number |
| `spaceKey` | string | Space key the content belongs to |
| `creatorAccountId` | string | Account ID of the creator |
| `lastModifierAccountId` | string | Account ID of the last modifier |
| `self` | string | URL link to the content |
| `creationDate` | number | Creation timestamp \(Unix epoch milliseconds\) |
| `modificationDate` | number | Last modification timestamp \(Unix epoch milliseconds\) |
| `attachment` | object | Attachment details |
| ↳ `mediaType` | string | MIME type of the attachment |
| ↳ `fileSize` | number | File size in bytes |
| ↳ `parent` | object | Parent container details |
| ↳ `id` | number | Container page/blog ID |
| ↳ `title` | string | Container page/blog title |
| ↳ `contentType` | string | Container content type |
| `files` | file[] | Attachment file content downloaded from Confluence \(if `includeFileContent` is enabled with credentials\) |

---

### Confluence Blog Post Created

Trigger workflow when a blog post is created in Confluence

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |

---

### Confluence Blog Post Removed

Trigger workflow when a blog post is removed in Confluence

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |

---

### Confluence Blog Post Restored

Trigger workflow when a blog post is restored from trash in Confluence

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |

---

### Confluence Blog Post Updated

Trigger workflow when a blog post is updated in Confluence

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |

---

### Confluence Comment Created

Trigger workflow when a comment is created in Confluence

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |
| `id` | number | Content ID |
| `title` | string | Content title |
| `contentType` | string | Content type \(page, blogpost, comment, attachment\) |
| `version` | number | Version number |
| `spaceKey` | string | Space key the content belongs to |
| `creatorAccountId` | string | Account ID of the creator |
| `lastModifierAccountId` | string | Account ID of the last modifier |
| `self` | string | URL link to the content |
| `creationDate` | number | Creation timestamp \(Unix epoch milliseconds\) |
| `modificationDate` | number | Last modification timestamp \(Unix epoch milliseconds\) |
| `comment` | object | Comment details |
| ↳ `parent` | object | Parent content details |
| ↳ `id` | number | Parent page/blog ID |
| ↳ `title` | string | Parent page/blog title |
| ↳ `contentType` | string | Parent content type \(page or blogpost\) |
| ↳ `spaceKey` | string | Space key of the parent |
| ↳ `self` | string | URL link to the parent content |

---

### Confluence Comment Removed

Trigger workflow when a comment is removed in Confluence

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |
| `id` | number | Content ID |
| `title` | string | Content title |
| `contentType` | string | Content type \(page, blogpost, comment, attachment\) |
| `version` | number | Version number |
| `spaceKey` | string | Space key the content belongs to |
| `creatorAccountId` | string | Account ID of the creator |
| `lastModifierAccountId` | string | Account ID of the last modifier |
| `self` | string | URL link to the content |
| `creationDate` | number | Creation timestamp \(Unix epoch milliseconds\) |
| `modificationDate` | number | Last modification timestamp \(Unix epoch milliseconds\) |
| `comment` | object | Comment details |
| ↳ `parent` | object | Parent content details |
| ↳ `id` | number | Parent page/blog ID |
| ↳ `title` | string | Parent page/blog title |
| ↳ `contentType` | string | Parent content type \(page or blogpost\) |
| ↳ `spaceKey` | string | Space key of the parent |
| ↳ `self` | string | URL link to the parent content |

---

### Confluence Comment Updated

Trigger workflow when a comment is updated in Confluence

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |
| `id` | number | Content ID |
| `title` | string | Content title |
| `contentType` | string | Content type \(page, blogpost, comment, attachment\) |
| `version` | number | Version number |
| `spaceKey` | string | Space key the content belongs to |
| `creatorAccountId` | string | Account ID of the creator |
| `lastModifierAccountId` | string | Account ID of the last modifier |
| `self` | string | URL link to the content |
| `creationDate` | number | Creation timestamp \(Unix epoch milliseconds\) |
| `modificationDate` | number | Last modification timestamp \(Unix epoch milliseconds\) |
| `comment` | object | Comment details |
| ↳ `parent` | object | Parent content details |
| ↳ `id` | number | Parent page/blog ID |
| ↳ `title` | string | Parent page/blog title |
| ↳ `contentType` | string | Parent content type \(page or blogpost\) |
| ↳ `spaceKey` | string | Space key of the parent |
| ↳ `self` | string | URL link to the parent content |

---

### Confluence Label Added

Trigger workflow when a label is added to content in Confluence

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |
| `label` | object | Label details |
| ↳ `name` | string | Label name |
| ↳ `id` | string | Label ID |
| ↳ `prefix` | string | Label prefix \(global, my, team\) |
| `content` | object | Details of the labeled content |
| ↳ `id` | number | Content ID the label was added to or removed from |
| ↳ `title` | string | Content title |
| ↳ `contentType` | string | Content type \(page, blogpost\) |

---

### Confluence Label Removed

Trigger workflow when a label is removed from content in Confluence

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |
| `label` | object | Label details |
| ↳ `name` | string | Label name |
| ↳ `id` | string | Label ID |
| ↳ `prefix` | string | Label prefix \(global, my, team\) |
| `content` | object | Details of the labeled content |
| ↳ `id` | number | Content ID the label was added to or removed from |
| ↳ `title` | string | Content title |
| ↳ `contentType` | string | Content type \(page, blogpost\) |

---

### Confluence Page Created

Trigger workflow when a new page is created in Confluence

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |

---

### Confluence Page Moved

Trigger workflow when a page is moved in Confluence

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |

---

### Confluence Page Permissions Updated

Trigger workflow when page permissions are changed in Confluence

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |
| `id` | number | Content ID |
| `title` | string | Content title |
| `contentType` | string | Content type \(page, blogpost, comment, attachment\) |
| `version` | number | Version number |
| `spaceKey` | string | Space key the content belongs to |
| `creatorAccountId` | string | Account ID of the creator |
| `lastModifierAccountId` | string | Account ID of the last modifier |
| `self` | string | URL link to the content |
| `creationDate` | number | Creation timestamp \(Unix epoch milliseconds\) |
| `modificationDate` | number | Last modification timestamp \(Unix epoch milliseconds\) |
| `page` | object | Page details |
| ↳ `permissions` | json | Updated permissions object for the page |

---

### Confluence Page Removed

Trigger workflow when a page is removed or trashed in Confluence

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |

---

### Confluence Page Restored

Trigger workflow when a page is restored from trash in Confluence

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |

---

### Confluence Page Updated

Trigger workflow when a page is updated in Confluence

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |

---

### Confluence Space Created

Trigger workflow when a new space is created in Confluence

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |
| `space` | object | Space details |
| ↳ `key` | string | Space key |
| ↳ `name` | string | Space name |
| ↳ `self` | string | URL link to the space |

---

### Confluence Space Removed

Trigger workflow when a space is removed in Confluence

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |
| `space` | object | Space details |
| ↳ `key` | string | Space key |
| ↳ `name` | string | Space name |
| ↳ `self` | string | URL link to the space |

---

### Confluence Space Updated

Trigger workflow when a space is updated in Confluence

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |
| `space` | object | Space details |
| ↳ `key` | string | Space key |
| ↳ `name` | string | Space name |
| ↳ `self` | string | URL link to the space |

---

### Confluence User Created

Trigger workflow when a new user is added to Confluence

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |
| `user` | object | New user details |
| ↳ `accountId` | string | Account ID of the new user |
| ↳ `accountType` | string | Account type \(e.g., atlassian, app\) |
| ↳ `displayName` | string | Display name of the user |
| ↳ `emailAddress` | string | Email address of the user \(may not be available due to GDPR/privacy settings\) |
| ↳ `publicName` | string | Public name of the user |
| ↳ `self` | string | URL link to the user profile |

---

### Confluence Webhook (All Events)

Trigger workflow on any Confluence webhook event

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Optional secret to validate webhook deliveries from Confluence using HMAC signature |
| `confluenceDomain` | string | No | Your Confluence Cloud domain |
| `confluenceEmail` | string | No | Your Atlassian account email. Required together with the API token to download attachment files. |
| `confluenceApiToken` | string | No | API token from https://id.atlassian.com/manage-profile/security/api-tokens. Required to download attachment file content. |
| `includeFileContent` | boolean | No | Download and include actual file content from attachments. Requires email, API token, and domain. |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `timestamp` | number | Timestamp of the webhook event \(Unix epoch milliseconds\) |
| `userAccountId` | string | Account ID of the user who triggered the event |
| `accountType` | string | Account type \(e.g., customer\) |
| `page` | json | Page object \(present in page events\) |
| `comment` | json | Comment object \(present in comment events\) |
| `blog` | json | Blog post object \(present in blog events\) |
| `attachment` | json | Attachment object \(present in attachment events\) |
| `space` | json | Space object \(present in space events\) |
| `label` | json | Label object \(present in label events\) |
| `content` | json | Content object \(present in label events\) |
| `user` | json | User object \(present in user events\) |
| `files` | file[] | Attachment file content \(present in attachment events when `includeFileContent` is enabled\) |
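The optional `webhookSecret` lets a receiver confirm that a delivery really came from Confluence: the sender signs the raw request body with the shared secret, and the receiver recomputes the HMAC and compares. A minimal sketch, assuming an HMAC-SHA256 hex digest (the exact header name and digest encoding depend on how the webhook was registered):

```python
import hashlib
import hmac


def verify_signature(secret: str, body: bytes, received_sig: str) -> bool:
    """Recompute the HMAC-SHA256 digest of the raw request body and
    compare it against the signature sent with the delivery."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # constant-time comparison avoids leaking information via timing
    return hmac.compare_digest(expected, received_sig)
```

Always verify against the raw body bytes as received; re-serializing parsed JSON can change whitespace or key order and break the digest.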
apps/docs/content/docs/en/triggers/fathom.mdx (new file, 95 lines)
@@ -0,0 +1,95 @@
|
||||
---
|
||||
title: Fathom
|
||||
description: Available Fathom triggers for automating workflows
|
||||
---
|
||||
|
||||
import { BlockInfoCard } from "@/components/ui/block-info-card"
|
||||
|
||||
<BlockInfoCard
|
||||
type="fathom"
|
||||
color="#181C1E"
|
||||
/>
|
||||
|
||||
Fathom provides 2 triggers for automating workflows based on events.
|
||||
|
||||
## Triggers
|
||||
|
||||
### Fathom New Meeting Content
|
||||
|
||||
Trigger workflow when new meeting content is ready in Fathom
|
||||
|
||||
#### Configuration
|
||||
|
||||
| Parameter | Type | Required | Description |
|
||||
| --------- | ---- | -------- | ----------- |
|
||||
| `apiKey` | string | Yes | Required to create the webhook in Fathom. |
|
||||
| `triggeredFor` | string | No | Which recording types should trigger this webhook. |
|
||||
| `includeSummary` | boolean | No | Include the meeting summary in the webhook payload. |
|
||||
| `includeTranscript` | boolean | No | Include the full transcript in the webhook payload. |
|
||||
| `includeActionItems` | boolean | No | Include action items extracted from the meeting. |
|
||||
| `includeCrmMatches` | boolean | No | Include matched CRM contacts, companies, and deals from your linked CRM. |
|
||||
|
||||
#### Output
|
||||
|
||||
| Parameter | Type | Description |
|
||||
| --------- | ---- | ----------- |
|
||||
| `title` | string | Meeting title |
|
||||
| `meeting_title` | string | Calendar event title |
|
||||
| `recording_id` | number | Unique recording ID |
|
||||
| `url` | string | URL to view the meeting in Fathom |
|
||||
| `share_url` | string | Shareable URL for the meeting |
|
||||
| `created_at` | string | ISO 8601 creation timestamp |
|
||||
| `scheduled_start_time` | string | Scheduled start time |
|
||||
| `scheduled_end_time` | string | Scheduled end time |
|
||||
| `recording_start_time` | string | Recording start time |
| `recording_end_time` | string | Recording end time |
| `transcript_language` | string | Language of the transcript |
| `calendar_invitees_domains_type` | string | Domain type: only_internal or one_or_more_external |
| `recorded_by` | object | Recorder details |
| `calendar_invitees` | array | Array of calendar invitees with name and email |
| `default_summary` | object | Meeting summary |
| `transcript` | array | Array of transcript entries with speaker, text, and timestamp |
| `action_items` | array | Array of action items extracted from the meeting |
| `crm_matches` | json | Matched CRM contacts, companies, and deals from linked CRM |

---

### Fathom Webhook

Generic webhook trigger for all Fathom events

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Required to create the webhook in Fathom. |
| `triggeredFor` | string | No | Which recording types should trigger this webhook. |
| `includeSummary` | boolean | No | Include the meeting summary in the webhook payload. |
| `includeTranscript` | boolean | No | Include the full transcript in the webhook payload. |
| `includeActionItems` | boolean | No | Include action items extracted from the meeting. |
| `includeCrmMatches` | boolean | No | Include matched CRM contacts, companies, and deals from your linked CRM. |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `title` | string | Meeting title |
| `meeting_title` | string | Calendar event title |
| `recording_id` | number | Unique recording ID |
| `url` | string | URL to view the meeting in Fathom |
| `share_url` | string | Shareable URL for the meeting |
| `created_at` | string | ISO 8601 creation timestamp |
| `scheduled_start_time` | string | Scheduled start time |
| `scheduled_end_time` | string | Scheduled end time |
| `recording_start_time` | string | Recording start time |
| `recording_end_time` | string | Recording end time |
| `transcript_language` | string | Language of the transcript |
| `calendar_invitees_domains_type` | string | Domain type: only_internal or one_or_more_external |
| `recorded_by` | object | Recorder details |
| `calendar_invitees` | array | Array of calendar invitees with name and email |
| `default_summary` | object | Meeting summary |
| `transcript` | array | Array of transcript entries with speaker, text, and timestamp |
| `action_items` | array | Array of action items extracted from the meeting |
| `crm_matches` | json | Matched CRM contacts, companies, and deals from linked CRM |
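The `transcript` output above is an array of entries with speaker, text, and timestamp fields. A minimal sketch of rendering those entries as readable lines; the exact payload shape is an assumption based on the output table:

```python
def format_transcript(entries):
    """Render Fathom-style transcript entries as '[timestamp] speaker: text' lines.

    Field names (speaker, text, timestamp) follow the output table above;
    check the actual webhook payload before relying on this shape.
    """
    lines = []
    for entry in entries:
        speaker = entry.get("speaker", "Unknown")
        timestamp = entry.get("timestamp", "")
        text = entry.get("text", "")
        lines.append(f"[{timestamp}] {speaker}: {text}")
    return "\n".join(lines)


sample = [
    {"speaker": "Alice", "text": "Let's review the roadmap.", "timestamp": "00:00:05"},
    {"speaker": "Bob", "text": "Sounds good.", "timestamp": "00:00:12"},
]
print(format_transcript(sample))
```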
35 apps/docs/content/docs/en/triggers/fireflies.mdx Normal file
@@ -0,0 +1,35 @@
---
title: Fireflies
description: Available Fireflies triggers for automating workflows
---

import { BlockInfoCard } from "@/components/ui/block-info-card"

<BlockInfoCard
type="fireflies"
color="#100730"
/>

Fireflies provides 1 trigger for automating workflows based on events.

## Triggers

### Fireflies Transcription Complete

Trigger workflow when a Fireflies meeting transcription is complete

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `webhookSecret` | string | No | Secret key for HMAC signature verification \(set in Fireflies dashboard\) |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `meetingId` | string | The ID of the transcribed meeting |
| `eventType` | string | The type of event \(e.g. Transcription completed, meeting.transcribed\) |
| `clientReferenceId` | string | Custom reference ID if set during upload |
| `timestamp` | number | Unix timestamp in milliseconds when the event was fired \(V2 webhooks\) |
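The `webhookSecret` parameter above is used for HMAC signature verification. A stdlib-only sketch of what the receiving side of such a check typically looks like; the `sha256=<hex>` header format and header name are assumptions, so confirm them against the signature Fireflies actually sends:

```python
import hashlib
import hmac


def verify_hmac_signature(secret: str, raw_body: bytes, signature_header: str) -> bool:
    """Compare an HMAC-SHA256 digest of the raw request body to the header value.

    Assumes the provider sends 'sha256=<hex digest>'; uses compare_digest to
    avoid timing side channels.
    """
    expected = "sha256=" + hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

The important detail is hashing the raw request bytes, not a re-serialized JSON object, since re-serialization can reorder keys and change the digest.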
1186 apps/docs/content/docs/en/triggers/github.mdx Normal file
File diff suppressed because it is too large
52 apps/docs/content/docs/en/triggers/gmail.mdx Normal file
@@ -0,0 +1,52 @@
---
title: Gmail
description: Available Gmail triggers for automating workflows
---

import { BlockInfoCard } from "@/components/ui/block-info-card"

<BlockInfoCard
type="gmail"
color="#E0E0E0"
/>

Gmail provides 1 trigger for automating workflows based on events.

All triggers below are **polling-based** — they check for new data on a schedule rather than receiving push notifications.

## Triggers

### Gmail Email Trigger

Triggers when new emails are received in Gmail (requires Gmail credentials)

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | This trigger requires Google email credentials to access your account. |
| `labelIds` | string | No | Choose which Gmail labels to monitor. Leave empty to monitor all emails. |
| `labelFilterBehavior` | string | Yes | Include only emails with selected labels, or exclude emails with selected labels |
| `searchQuery` | string | No | Optional Gmail search query to filter emails. Use the same format as Gmail search box \(e.g., |
| `markAsRead` | boolean | No | Automatically mark emails as read after processing |
| `includeAttachments` | boolean | No | Download and include email attachments in the trigger payload |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `email` | object | email output from the tool |
| ↳ `id` | string | Gmail message ID |
| ↳ `threadId` | string | Gmail thread ID |
| ↳ `subject` | string | Email subject line |
| ↳ `from` | string | Sender email address |
| ↳ `to` | string | Recipient email address |
| ↳ `cc` | string | CC recipients |
| ↳ `date` | string | Email date in ISO format |
| ↳ `bodyText` | string | Plain text email body |
| ↳ `bodyHtml` | string | HTML email body |
| ↳ `labels` | string | Email labels array |
| ↳ `hasAttachments` | boolean | Whether email has attachments |
| ↳ `attachments` | file[] | Array of email attachments as files \(if includeAttachments is enabled\) |
| `timestamp` | string | Event timestamp |
109 apps/docs/content/docs/en/triggers/gong.mdx Normal file
@@ -0,0 +1,109 @@
---
title: Gong
description: Available Gong triggers for automating workflows
---

import { BlockInfoCard } from "@/components/ui/block-info-card"

<BlockInfoCard
type="gong"
color="#8039DF"
/>

Gong provides 2 triggers for automating workflows based on events.

## Triggers

### Gong Call Completed

Trigger workflow when a call is completed and processed in Gong

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `gongJwtPublicKeyPem` | string | No | Required only when your Gong rule uses **Signed JWT header**. Sim verifies RS256, `webhook_url`, and `body_sha256` per Gong. If empty, only the webhook URL path authenticates the request. |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `eventType` | string | Constant identifier for automation-rule webhooks \(`gong.automation_rule`\). Gong does not send distinct event names in the payload. |
| `callId` | string | Gong call ID \(same value as metaData.id when present\) |
| `isTest` | boolean | Whether this is a test webhook from the Gong UI |
| `callData` | json | Full call data object |
| `metaData` | object | metaData output from the tool |
| ↳ `id` | string | Gong call ID |
| ↳ `url` | string | URL to the call in Gong |
| ↳ `title` | string | Call title |
| ↳ `scheduled` | string | Scheduled start time \(ISO 8601\) |
| ↳ `started` | string | Actual start time \(ISO 8601\) |
| ↳ `duration` | number | Call duration in seconds |
| ↳ `primaryUserId` | string | Primary Gong user ID |
| ↳ `workspaceId` | string | Gong workspace ID |
| ↳ `direction` | string | Call direction \(Inbound, Outbound, etc.\) |
| ↳ `system` | string | Communication platform used \(e.g. Zoom, Teams\) |
| ↳ `scope` | string | Call scope \(Internal, External, or Unknown\) |
| ↳ `media` | string | Media type \(Video or Audio\) |
| ↳ `language` | string | Language code \(ISO-639-2B\) |
| ↳ `sdrDisposition` | string | SDR disposition classification \(when present\) |
| ↳ `clientUniqueId` | string | Call identifier from the origin recording system \(when present\) |
| ↳ `customData` | string | Custom metadata from call creation \(when present\) |
| ↳ `purpose` | string | Call purpose \(when present\) |
| ↳ `meetingUrl` | string | Web conference provider URL \(when present\) |
| ↳ `isPrivate` | boolean | Whether the call is private \(when present\) |
| ↳ `calendarEventId` | string | Calendar event identifier \(when present\) |
| `parties` | array | Array of call participants with name, email, title, and affiliation |
| `context` | array | Array of CRM context objects \(Salesforce opportunities, accounts, etc.\) |
| `trackers` | array | Keyword and smart trackers from call content \(same shape as Gong extensive-calls `content.trackers`\) |
| `topics` | array | Topic segments with durations from call content \(`content.topics`\) |
| `highlights` | array | AI-generated highlights from call content \(`content.highlights`\) |

---

### Gong Webhook

Generic webhook trigger for all Gong events

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `gongJwtPublicKeyPem` | string | No | Required only when your Gong rule uses **Signed JWT header**. Sim verifies RS256, `webhook_url`, and `body_sha256` per Gong. If empty, only the webhook URL path authenticates the request. |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `eventType` | string | Constant identifier for automation-rule webhooks \(`gong.automation_rule`\). Gong does not send distinct event names in the payload. |
| `callId` | string | Gong call ID \(same value as metaData.id when present\) |
| `isTest` | boolean | Whether this is a test webhook from the Gong UI |
| `callData` | json | Full call data object |
| `metaData` | object | metaData output from the tool |
| ↳ `id` | string | Gong call ID |
| ↳ `url` | string | URL to the call in Gong |
| ↳ `title` | string | Call title |
| ↳ `scheduled` | string | Scheduled start time \(ISO 8601\) |
| ↳ `started` | string | Actual start time \(ISO 8601\) |
| ↳ `duration` | number | Call duration in seconds |
| ↳ `primaryUserId` | string | Primary Gong user ID |
| ↳ `workspaceId` | string | Gong workspace ID |
| ↳ `direction` | string | Call direction \(Inbound, Outbound, etc.\) |
| ↳ `system` | string | Communication platform used \(e.g. Zoom, Teams\) |
| ↳ `scope` | string | Call scope \(Internal, External, or Unknown\) |
| ↳ `media` | string | Media type \(Video or Audio\) |
| ↳ `language` | string | Language code \(ISO-639-2B\) |
| ↳ `sdrDisposition` | string | SDR disposition classification \(when present\) |
| ↳ `clientUniqueId` | string | Call identifier from the origin recording system \(when present\) |
| ↳ `customData` | string | Custom metadata from call creation \(when present\) |
| ↳ `purpose` | string | Call purpose \(when present\) |
| ↳ `meetingUrl` | string | Web conference provider URL \(when present\) |
| ↳ `isPrivate` | boolean | Whether the call is private \(when present\) |
| ↳ `calendarEventId` | string | Calendar event identifier \(when present\) |
| `parties` | array | Array of call participants with name, email, title, and affiliation |
| `context` | array | Array of CRM context objects \(Salesforce opportunities, accounts, etc.\) |
| `trackers` | array | Keyword and smart trackers from call content \(same shape as Gong extensive-calls `content.trackers`\) |
| `topics` | array | Topic segments with durations from call content \(`content.topics`\) |
| `highlights` | array | AI-generated highlights from call content \(`content.highlights`\) |
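The `gongJwtPublicKeyPem` description mentions verifying a `body_sha256` claim in Gong's signed JWT header. A stdlib-only sketch of that one claim check, decoding the JWT payload *without* signature verification for illustration; production code must first verify the RS256 signature against the PEM public key with a real JWT library, and the hex-SHA-256 encoding of `body_sha256` is an assumption to confirm against Gong's docs:

```python
import base64
import hashlib
import json


def jwt_claims(token: str) -> dict:
    """Decode JWT claims WITHOUT verifying the signature (illustration only)."""
    payload_b64 = token.split(".")[1]
    # base64url payloads drop padding; restore it before decoding
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))


def body_matches(claims: dict, raw_body: bytes) -> bool:
    """Check the body_sha256 claim against the raw request body."""
    return claims.get("body_sha256") == hashlib.sha256(raw_body).hexdigest()
```

As with any body-hash scheme, the digest must be computed over the exact bytes received, before any JSON parsing or re-serialization.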
54 apps/docs/content/docs/en/triggers/google-calendar.mdx Normal file
@@ -0,0 +1,54 @@
---
title: Google Calendar
description: Available Google Calendar triggers for automating workflows
---

import { BlockInfoCard } from "@/components/ui/block-info-card"

<BlockInfoCard
type="google_calendar"
color="#E0E0E0"
/>

Google Calendar provides 1 trigger for automating workflows based on events.

All triggers below are **polling-based** — they check for new data on a schedule rather than receiving push notifications.

## Triggers

### Google Calendar Event Trigger

Triggers when events are created, updated, or cancelled in Google Calendar

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | Connect your Google account to access Google Calendar. |
| `calendarId` | file-selector | No | The calendar to monitor for event changes. |
| `manualCalendarId` | string | No | The calendar to monitor for event changes. |
| `eventTypeFilter` | string | No | Only trigger for specific event types. Defaults to all events. |
| `searchTerm` | string | No | Optional: Filter events by text match across title, description, location, and attendees. |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `event` | object | event output from the tool |
| ↳ `id` | string | Calendar event ID |
| ↳ `status` | string | Event status \(confirmed, tentative, cancelled\) |
| ↳ `eventType` | string | Change type: "created", "updated", or "cancelled" |
| ↳ `summary` | string | Event title |
| ↳ `eventDescription` | string | Event description |
| ↳ `location` | string | Event location |
| ↳ `htmlLink` | string | Link to event in Google Calendar |
| ↳ `start` | json | Event start time |
| ↳ `end` | json | Event end time |
| ↳ `created` | string | Event creation time |
| ↳ `updated` | string | Event last updated time |
| ↳ `attendees` | json | Event attendees |
| ↳ `creator` | json | Event creator |
| ↳ `organizer` | json | Event organizer |
| `calendarId` | string | Calendar ID |
| `timestamp` | string | Event processing timestamp in ISO format |
52 apps/docs/content/docs/en/triggers/google-drive.mdx Normal file
@@ -0,0 +1,52 @@
---
title: Google Drive
description: Available Google Drive triggers for automating workflows
---

import { BlockInfoCard } from "@/components/ui/block-info-card"

<BlockInfoCard
type="google_drive"
color="#E0E0E0"
/>

Google Drive provides 1 trigger for automating workflows based on events.

All triggers below are **polling-based** — they check for new data on a schedule rather than receiving push notifications.

## Triggers

### Google Drive File Trigger

Triggers when files are created, modified, or deleted in Google Drive

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | Connect your Google account to access Google Drive. |
| `folderId` | file-selector | No | Optional: The folder to monitor. Leave empty to monitor all files in Drive. |
| `manualFolderId` | string | No | Optional: The folder ID from the Google Drive URL to monitor. Leave empty to monitor all files. |
| `mimeTypeFilter` | string | No | Optional: Only trigger for specific file types. |
| `eventTypeFilter` | string | No | Only trigger for specific change types. Defaults to all changes. |
| `includeSharedDrives` | boolean | No | Include files from shared \(team\) drives. |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `file` | object | file output from the tool |
| ↳ `id` | string | Google Drive file ID |
| ↳ `name` | string | File name |
| ↳ `mimeType` | string | File MIME type |
| ↳ `modifiedTime` | string | Last modified time \(ISO\) |
| ↳ `createdTime` | string | File creation time \(ISO\) |
| ↳ `size` | string | File size in bytes |
| ↳ `webViewLink` | string | URL to view file in browser |
| ↳ `parents` | json | Parent folder IDs |
| ↳ `lastModifyingUser` | json | User who last modified the file |
| ↳ `shared` | boolean | Whether file is shared |
| ↳ `starred` | boolean | Whether file is starred |
| `eventType` | string | Change type: "created", "modified", or "deleted" |
| `timestamp` | string | Event timestamp in ISO format |
46 apps/docs/content/docs/en/triggers/google-sheets.mdx Normal file
@@ -0,0 +1,46 @@
---
title: Google Sheets
description: Available Google Sheets triggers for automating workflows
---

import { BlockInfoCard } from "@/components/ui/block-info-card"

<BlockInfoCard
type="google_sheets"
color="#E0E0E0"
/>

Google Sheets provides 1 trigger for automating workflows based on events.

All triggers below are **polling-based** — they check for new data on a schedule rather than receiving push notifications.

## Triggers

### Google Sheets New Row Trigger

Triggers when new rows are added to a Google Sheet

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `triggerCredentials` | string | Yes | Connect your Google account to access Google Sheets. |
| `spreadsheetId` | file-selector | Yes | The spreadsheet to monitor for new rows. |
| `manualSpreadsheetId` | string | Yes | The spreadsheet to monitor for new rows. |
| `sheetName` | sheet-selector | Yes | The sheet tab to monitor for new rows. |
| `manualSheetName` | string | Yes | The sheet tab to monitor for new rows. |
| `valueRenderOption` | string | No | How values are rendered. Formatted returns display strings, Unformatted returns raw numbers/booleans, Formula returns the formula text. |
| `dateTimeRenderOption` | string | No | How dates and times are rendered. Only applies when Value Render is not |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `row` | json | Row data mapped to column headers from row 1 |
| `rawRow` | json | Raw row values as an array |
| `headers` | json | Column headers from row 1 |
| `rowNumber` | number | The 1-based row number of the new row |
| `spreadsheetId` | string | The spreadsheet ID |
| `sheetName` | string | The sheet tab name |
| `timestamp` | string | Event timestamp in ISO format |
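The output table above distinguishes `row` (values keyed by the row-1 headers) from `rawRow` (the bare value array). A minimal sketch of that mapping, a plausible reconstruction rather than the trigger's actual implementation; note that the Sheets API omits trailing empty cells, so short rows are padded:

```python
def map_row(headers, raw_row):
    """Map a raw Sheets row onto its column headers.

    Pads short rows with empty strings so trailing blank cells still
    appear under their headers; extra cells beyond the headers are dropped.
    """
    padded = list(raw_row) + [""] * (len(headers) - len(raw_row))
    return dict(zip(headers, padded))


print(map_row(["Name", "Email", "Plan"], ["Ada", "ada@example.com"]))
```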
41 apps/docs/content/docs/en/triggers/google_forms.mdx Normal file
@@ -0,0 +1,41 @@
---
title: Google Forms
description: Available Google Forms triggers for automating workflows
---

import { BlockInfoCard } from "@/components/ui/block-info-card"

<BlockInfoCard
type="google_forms"
color="#E0E0E0"
/>

Google Forms provides 1 trigger for automating workflows based on events.

## Triggers

### Google Forms Webhook

Trigger workflow from Google Form submissions (via Apps Script forwarder)

#### Configuration

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `token` | string | Yes | We validate requests using this secret. Send it as Authorization: Bearer <token> or a custom header. |
| `secretHeaderName` | string | No | If set, the webhook will validate this header equals your Shared Secret instead of Authorization. |
| `triggerFormId` | string | No | Optional, for clarity and matching in workflows. Not required for webhook to work. |
| `includeRawPayload` | boolean | No | Include the original payload from Apps Script in the workflow input. |
| `setupScript` | string | No | Copy this code and paste it into your Google Forms Apps Script editor |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `responseId` | string | Unique response identifier \(if available\) |
| `createTime` | string | Response creation timestamp |
| `lastSubmittedTime` | string | Last submitted timestamp |
| `formId` | string | Google Form ID |
| `answers` | object | Normalized map of question -> answer |
| `raw` | object | Original payload \(when enabled\) |
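The `token` and `secretHeaderName` parameters above describe shared-secret validation: compare a custom header when one is configured, otherwise expect `Authorization: Bearer <token>`. A stdlib-only sketch of that receiving-side check; case-insensitive header matching is an assumption, and the constant-time comparison guards against timing attacks:

```python
import hmac


def request_authorized(headers, shared_secret, secret_header_name=None):
    """Validate a forwarded request against the shared secret.

    Mirrors the configuration above: if secret_header_name is set, that
    header's value must equal the secret; otherwise the secret is expected
    as 'Authorization: Bearer <token>'.
    """
    normalized = {k.lower(): v for k, v in headers.items()}
    if secret_header_name:
        presented = normalized.get(secret_header_name.lower(), "")
    else:
        auth = normalized.get("authorization", "")
        presented = auth.removeprefix("Bearer ")
    return hmac.compare_digest(presented, shared_secret)
```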
Some files were not shown because too many files have changed in this diff