Compare commits

..

7 Commits

802 changed files with 25068 additions and 87192 deletions

View File

@@ -183,109 +183,6 @@ export const {ServiceName}Block: BlockConfig = {
}
```
## File Input Handling
When your block accepts file uploads, use the basic/advanced mode pattern with `normalizeFileInput`.
### Basic/Advanced File Pattern
```typescript
// Basic mode: Visual file upload
{
id: 'uploadFile',
title: 'File',
type: 'file-upload',
canonicalParamId: 'file', // Both map to 'file' param
placeholder: 'Upload file',
mode: 'basic',
multiple: false,
required: true,
condition: { field: 'operation', value: 'upload' },
},
// Advanced mode: Reference from other blocks
{
id: 'fileRef',
title: 'File',
type: 'short-input',
canonicalParamId: 'file', // Both map to 'file' param
placeholder: 'Reference file (e.g., {{file_block.output}})',
mode: 'advanced',
required: true,
condition: { field: 'operation', value: 'upload' },
},
```
**Critical constraints:**
- `canonicalParamId` must NOT match any subblock's `id` in the same block
- Values are stored under subblock `id`, not `canonicalParamId`
### Normalizing File Input in tools.config
Use `normalizeFileInput` to handle all input variants:
```typescript
import { normalizeFileInput } from '@/blocks/utils'
tools: {
access: ['service_upload'],
config: {
tool: (params) => {
// Check all field IDs: uploadFile (basic), fileRef (advanced), fileContent (legacy)
const normalizedFile = normalizeFileInput(
params.uploadFile || params.fileRef || params.fileContent,
{ single: true }
)
if (normalizedFile) {
params.file = normalizedFile
}
return `service_${params.operation}`
},
},
}
```
**Why this pattern?**
- Values come through as `params.uploadFile` or `params.fileRef` (the subblock IDs)
- `canonicalParamId` only controls UI/schema mapping, not runtime values
- `normalizeFileInput` handles JSON strings from advanced mode template resolution
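For example, an advanced-mode reference such as `{{file_block.output}}` may resolve to a JSON string rather than an object. A minimal sketch of that last point (values hypothetical, same `normalizeFileInput` call in both cases):
```typescript
import { normalizeFileInput } from '@/blocks/utils'

// Hypothetical values: basic mode passes an object, advanced-mode template
// resolution can pass the same data as a JSON string.
const fromBasic = { name: 'report.pdf', url: 'https://...', type: 'application/pdf' }
const fromAdvanced = '{"name":"report.pdf","url":"https://...","type":"application/pdf"}'

const fileA = normalizeFileInput(fromBasic, { single: true })
const fileB = normalizeFileInput(fromAdvanced, { single: true }) // both normalize to the same shape
```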
### File Input Types in `inputs`
Use `type: 'json'` for file inputs:
```typescript
inputs: {
uploadFile: { type: 'json', description: 'Uploaded file (UserFile)' },
fileRef: { type: 'json', description: 'File reference from previous block' },
// Legacy field for backwards compatibility
fileContent: { type: 'string', description: 'Legacy: base64 encoded content' },
}
```
### Multiple Files
For multiple file uploads:
```typescript
{
id: 'attachments',
title: 'Attachments',
type: 'file-upload',
multiple: true, // Allow multiple files
maxSize: 25, // Max size in MB per file
acceptedTypes: 'image/*,application/pdf,.doc,.docx',
}
// In tools.config:
const normalizedFiles = normalizeFileInput(
params.attachments || params.attachmentRefs,
// No { single: true } - returns array
)
if (normalizedFiles) {
params.files = normalizedFiles
}
```
## Condition Syntax
Controls when a field is shown based on other field values.
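A minimal sketch matching the examples above (field names hypothetical):
```typescript
{
  id: 'folderId',
  title: 'Folder',
  type: 'short-input',
  // Only shown when the 'operation' field is set to 'upload'
  condition: { field: 'operation', value: 'upload' },
},
```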

View File

@@ -206,15 +206,10 @@ export const {Service}Block: BlockConfig = {
}
```
**Critical Canonical Param Rules:**
- `canonicalParamId` must NOT match any subblock's `id` in the block
- `canonicalParamId` must be unique per operation/condition context
- Only use `canonicalParamId` to link basic/advanced alternatives for the same logical parameter
- `mode` only controls UI visibility, NOT serialization. Without `canonicalParamId`, both basic and advanced field values would be sent
- Every subblock `id` must be unique within the block. Duplicate IDs cause conflicts even with different conditions
- **Required consistency:** If one subblock in a canonical group has `required: true`, ALL subblocks in that group must have `required: true` (prevents bypassing validation by switching modes)
- **Inputs section:** Must list canonical param IDs (e.g., `fileId`), NOT raw subblock IDs (e.g., `fileSelector`, `manualFileId`)
- **Params function:** Must use canonical param IDs, NOT raw subblock IDs (raw IDs are deleted after canonical transformation)
**Critical:**
- `canonicalParamId` must NOT match any other subblock's `id`, must be unique per block, and should only be used to link basic/advanced alternatives for the same parameter.
- `mode` only controls UI visibility, NOT serialization. Without `canonicalParamId`, both basic and advanced field values would be sent.
- Every subblock `id` must be unique within the block. Duplicate IDs cause conflicts even with different conditions.
## Step 4: Add Icon
@@ -462,230 +457,7 @@ You can usually find this in the service's brand/press kit page, or copy it from
Paste the SVG code here and I'll convert it to a React component.
```
## File Handling
When your integration handles file uploads or downloads, follow these patterns to work with `UserFile` objects consistently.
### What is a UserFile?
A `UserFile` is the standard file representation in Sim:
```typescript
interface UserFile {
id: string // Unique identifier
name: string // Original filename
url: string // Presigned URL for download
size: number // File size in bytes
type: string // MIME type (e.g., 'application/pdf')
base64?: string // Optional base64 content (if small file)
key?: string // Internal storage key
context?: object // Storage context metadata
}
```
### File Input Pattern (Uploads)
For tools that accept file uploads, **always route through an internal API endpoint** rather than calling external APIs directly. This lets the server retrieve the actual file content from storage (via `downloadFileFromStorage`) before forwarding it to the external API.
#### 1. Block SubBlocks for File Input
Use the basic/advanced mode pattern:
```typescript
// Basic mode: File upload UI
{
id: 'uploadFile',
title: 'File',
type: 'file-upload',
canonicalParamId: 'file', // Maps to 'file' param
placeholder: 'Upload file',
mode: 'basic',
multiple: false,
required: true,
condition: { field: 'operation', value: 'upload' },
},
// Advanced mode: Reference from previous block
{
id: 'fileRef',
title: 'File',
type: 'short-input',
canonicalParamId: 'file', // Same canonical param
placeholder: 'Reference file (e.g., {{file_block.output}})',
mode: 'advanced',
required: true,
condition: { field: 'operation', value: 'upload' },
},
```
**Critical:** `canonicalParamId` must NOT match any subblock `id`.
#### 2. Normalize File Input in Block Config
In `tools.config.tool`, use `normalizeFileInput` to handle all input variants:
```typescript
import { normalizeFileInput } from '@/blocks/utils'
tools: {
config: {
tool: (params) => {
// Normalize file from basic (uploadFile), advanced (fileRef), or legacy (fileContent)
const normalizedFile = normalizeFileInput(
params.uploadFile || params.fileRef || params.fileContent,
{ single: true }
)
if (normalizedFile) {
params.file = normalizedFile
}
return `{service}_${params.operation}`
},
},
}
```
#### 3. Create Internal API Route
Create `apps/sim/app/api/tools/{service}/{action}/route.ts`:
```typescript
import { createLogger } from '@sim/logger'
import { NextResponse, type NextRequest } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { FileInputSchema, type RawFileInput } from '@/lib/uploads/utils/file-schemas'
import { processFilesToUserFiles } from '@/lib/uploads/utils/file-utils'
import { downloadFileFromStorage } from '@/lib/uploads/utils/file-utils.server'
const logger = createLogger('{Service}UploadAPI')
const RequestSchema = z.object({
accessToken: z.string(),
file: FileInputSchema.optional().nullable(),
// Legacy field for backwards compatibility
fileContent: z.string().optional().nullable(),
// ... other params
})
export async function POST(request: NextRequest) {
const requestId = generateRequestId()
const authResult = await checkInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success) {
return NextResponse.json({ success: false, error: 'Unauthorized' }, { status: 401 })
}
const body = await request.json()
const data = RequestSchema.parse(body)
let fileBuffer: Buffer
let fileName: string
// Prefer UserFile input, fall back to legacy base64
if (data.file) {
const userFiles = processFilesToUserFiles([data.file as RawFileInput], requestId, logger)
if (userFiles.length === 0) {
return NextResponse.json({ success: false, error: 'Invalid file' }, { status: 400 })
}
const userFile = userFiles[0]
fileBuffer = await downloadFileFromStorage(userFile, requestId, logger)
fileName = userFile.name
} else if (data.fileContent) {
// Legacy: base64 string (backwards compatibility)
fileBuffer = Buffer.from(data.fileContent, 'base64')
fileName = 'file'
} else {
return NextResponse.json({ success: false, error: 'File required' }, { status: 400 })
}
// Now call external API with fileBuffer
const response = await fetch('https://api.{service}.com/upload', {
method: 'POST',
headers: { Authorization: `Bearer ${data.accessToken}` },
body: new Uint8Array(fileBuffer), // Convert Buffer for fetch
})
// ... handle response
}
```
#### 4. Update Tool to Use Internal Route
```typescript
export const {service}UploadTool: ToolConfig<Params, Response> = {
id: '{service}_upload',
// ...
params: {
file: { type: 'file', required: false, visibility: 'user-or-llm' },
fileContent: { type: 'string', required: false, visibility: 'hidden' }, // Legacy
},
request: {
url: '/api/tools/{service}/upload', // Internal route
method: 'POST',
body: (params) => ({
accessToken: params.accessToken,
file: params.file,
fileContent: params.fileContent,
}),
},
}
```
### File Output Pattern (Downloads)
For tools that return files, use `FileToolProcessor` to store files and return `UserFile` objects.
#### In Tool transformResponse
```typescript
import { FileToolProcessor } from '@/executor/utils/file-tool-processor'
transformResponse: async (response, context) => {
const data = await response.json()
// Process file outputs to UserFile objects
const fileProcessor = new FileToolProcessor(context)
const file = await fileProcessor.processFileData({
data: data.content, // base64 or buffer
mimeType: data.mimeType,
filename: data.filename,
})
return {
success: true,
output: { file },
}
}
```
#### In API Route (for complex file handling)
```typescript
// Return file data that FileToolProcessor can handle
return NextResponse.json({
success: true,
output: {
file: {
data: base64Content,
mimeType: 'application/pdf',
filename: 'document.pdf',
},
},
})
```
### Key Helpers Reference
| Helper | Location | Purpose |
|--------|----------|---------|
| `normalizeFileInput` | `@/blocks/utils` | Normalize file params in block config |
| `processFilesToUserFiles` | `@/lib/uploads/utils/file-utils` | Convert raw inputs to UserFile[] |
| `downloadFileFromStorage` | `@/lib/uploads/utils/file-utils.server` | Get file Buffer from UserFile |
| `FileToolProcessor` | `@/executor/utils/file-tool-processor` | Process tool output files |
| `isUserFile` | `@/lib/core/utils/user-file` | Type guard for UserFile objects |
| `FileInputSchema` | `@/lib/uploads/utils/file-schemas` | Zod schema for file validation |
### Common Gotchas
## Common Gotchas
1. **OAuth serviceId must match** - The `serviceId` in oauth-input must match the OAuth provider configuration
2. **Tool IDs are snake_case** - `stripe_create_payment`, not `stripeCreatePayment`
@@ -693,5 +465,3 @@ return NextResponse.json({
4. **Alphabetical ordering** - Keep imports and registry entries alphabetically sorted
5. **Required can be conditional** - Use `required: { field: 'op', value: 'create' }` instead of always true
6. **DependsOn clears options** - When a dependency changes, selector options are refetched
7. **Never pass Buffer directly to fetch** - Convert to `new Uint8Array(buffer)` for TypeScript compatibility
8. **Always handle legacy file params** - Keep hidden `fileContent` params for backwards compatibility

View File

@@ -157,36 +157,6 @@ dependsOn: { all: ['authMethod'], any: ['credential', 'botToken'] }
- `'both'` - Show in both modes (default)
- `'trigger'` - Only when block is used as trigger
### `canonicalParamId` - Link basic/advanced alternatives
Use `canonicalParamId` to map multiple UI inputs to a single logical parameter:
```typescript
// Basic mode: Visual selector
{
id: 'fileSelector',
type: 'file-selector',
mode: 'basic',
canonicalParamId: 'fileId',
required: true,
},
// Advanced mode: Manual input
{
id: 'manualFileId',
type: 'short-input',
mode: 'advanced',
canonicalParamId: 'fileId',
required: true,
},
```
**Critical Rules:**
- `canonicalParamId` must NOT match any subblock's `id`
- `canonicalParamId` must be unique per operation/condition context
- **Required consistency:** All subblocks in a canonical group must have the same `required` status
- **Inputs section:** Must list canonical param IDs (e.g., `fileId`), NOT raw subblock IDs
- **Params function:** Must use canonical param IDs (raw IDs are deleted after canonical transformation)
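A minimal sketch of how the `inputs` section and params function would reference the canonical `fileId` from the example above rather than the raw subblock IDs (the exact location of the params function in the block config is assumed):
```typescript
// Sketch only: reference the canonical 'fileId', never 'fileSelector' or 'manualFileId'.
inputs: {
  fileId: { type: 'string', description: 'ID of the file to operate on' },
},
tools: {
  config: {
    params: (params) => ({
      // Raw subblock IDs are deleted after canonical transformation,
      // so only params.fileId is available here.
      fileId: params.fileId,
    }),
  },
},
```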
**Register in `blocks/registry.ts`:**
```typescript
@@ -225,52 +195,6 @@ import { {service}WebhookTrigger } from '@/triggers/{service}'
{service}_webhook: {service}WebhookTrigger,
```
## File Handling
When integrations handle file uploads/downloads, use `UserFile` objects consistently.
### File Input (Uploads)
1. **Block subBlocks:** Use basic/advanced mode pattern with `canonicalParamId`
2. **Normalize in block config:** Use `normalizeFileInput` from `@/blocks/utils`
3. **Internal API route:** Create a route that uses `downloadFileFromStorage` to get file content
4. **Tool routes to internal API:** Don't call external APIs directly with files
```typescript
// In block tools.config:
import { normalizeFileInput } from '@/blocks/utils'
const normalizedFile = normalizeFileInput(
params.uploadFile || params.fileRef || params.fileContent,
{ single: true }
)
if (normalizedFile) params.file = normalizedFile
```
### File Output (Downloads)
Use `FileToolProcessor` in tool `transformResponse` to store files:
```typescript
import { FileToolProcessor } from '@/executor/utils/file-tool-processor'
const processor = new FileToolProcessor(context)
const file = await processor.processFileData({
data: base64Content,
mimeType: 'application/pdf',
filename: 'doc.pdf',
})
```
### Key Helpers
| Helper | Location | Purpose |
|--------|----------|---------|
| `normalizeFileInput` | `@/blocks/utils` | Normalize file params in block config |
| `processFilesToUserFiles` | `@/lib/uploads/utils/file-utils` | Convert raw inputs to UserFile[] |
| `downloadFileFromStorage` | `@/lib/uploads/utils/file-utils.server` | Get Buffer from UserFile |
| `FileToolProcessor` | `@/executor/utils/file-tool-processor` | Process tool output files |
## Checklist
- [ ] Look up API docs for the service
@@ -283,5 +207,3 @@ const file = await processor.processFileData({
- [ ] Register block in `blocks/registry.ts`
- [ ] (Optional) Create triggers in `triggers/{service}/`
- [ ] (Optional) Register triggers in `triggers/registry.ts`
- [ ] (If file uploads) Create internal API route with `downloadFileFromStorage`
- [ ] (If file uploads) Use `normalizeFileInput` in block config

View File

@@ -155,36 +155,6 @@ dependsOn: { all: ['authMethod'], any: ['credential', 'botToken'] }
- `'both'` - Show in both modes (default)
- `'trigger'` - Only when block is used as trigger
### `canonicalParamId` - Link basic/advanced alternatives
Use `canonicalParamId` to map multiple UI inputs to a single logical parameter:
```typescript
// Basic mode: Visual selector
{
id: 'fileSelector',
type: 'file-selector',
mode: 'basic',
canonicalParamId: 'fileId',
required: true,
},
// Advanced mode: Manual input
{
id: 'manualFileId',
type: 'short-input',
mode: 'advanced',
canonicalParamId: 'fileId',
required: true,
},
```
**Critical Rules:**
- `canonicalParamId` must NOT match any subblock's `id`
- `canonicalParamId` must be unique per operation/condition context
- **Required consistency:** All subblocks in a canonical group must have the same `required` status
- **Inputs section:** Must list canonical param IDs (e.g., `fileId`), NOT raw subblock IDs
- **Params function:** Must use canonical param IDs (raw IDs are deleted after canonical transformation)
**Register in `blocks/registry.ts`:**
```typescript
@@ -223,52 +193,6 @@ import { {service}WebhookTrigger } from '@/triggers/{service}'
{service}_webhook: {service}WebhookTrigger,
```
## File Handling
When integrations handle file uploads/downloads, use `UserFile` objects consistently.
### File Input (Uploads)
1. **Block subBlocks:** Use basic/advanced mode pattern with `canonicalParamId`
2. **Normalize in block config:** Use `normalizeFileInput` from `@/blocks/utils`
3. **Internal API route:** Create a route that uses `downloadFileFromStorage` to get file content
4. **Tool routes to internal API:** Don't call external APIs directly with files
```typescript
// In block tools.config:
import { normalizeFileInput } from '@/blocks/utils'
const normalizedFile = normalizeFileInput(
params.uploadFile || params.fileRef || params.fileContent,
{ single: true }
)
if (normalizedFile) params.file = normalizedFile
```
### File Output (Downloads)
Use `FileToolProcessor` in tool `transformResponse` to store files:
```typescript
import { FileToolProcessor } from '@/executor/utils/file-tool-processor'
const processor = new FileToolProcessor(context)
const file = await processor.processFileData({
data: base64Content,
mimeType: 'application/pdf',
filename: 'doc.pdf',
})
```
### Key Helpers
| Helper | Location | Purpose |
|--------|----------|---------|
| `normalizeFileInput` | `@/blocks/utils` | Normalize file params in block config |
| `processFilesToUserFiles` | `@/lib/uploads/utils/file-utils` | Convert raw inputs to UserFile[] |
| `downloadFileFromStorage` | `@/lib/uploads/utils/file-utils.server` | Get Buffer from UserFile |
| `FileToolProcessor` | `@/executor/utils/file-tool-processor` | Process tool output files |
## Checklist
- [ ] Look up API docs for the service
@@ -281,5 +205,3 @@ const file = await processor.processFileData({
- [ ] Register block in `blocks/registry.ts`
- [ ] (Optional) Create triggers in `triggers/{service}/`
- [ ] (Optional) Register triggers in `triggers/registry.ts`
- [ ] (If file uploads) Create internal API route with `downloadFileFromStorage`
- [ ] (If file uploads) Use `normalizeFileInput` in block config

View File

@@ -265,23 +265,6 @@ Register in `blocks/registry.ts` (alphabetically).
**dependsOn:** `['field']` or `{ all: ['a'], any: ['b', 'c'] }`
**File Input Pattern (basic/advanced mode):**
```typescript
// Basic: file-upload UI
{ id: 'uploadFile', type: 'file-upload', canonicalParamId: 'file', mode: 'basic' },
// Advanced: reference from other blocks
{ id: 'fileRef', type: 'short-input', canonicalParamId: 'file', mode: 'advanced' },
```
In `tools.config.tool`, normalize with:
```typescript
import { normalizeFileInput } from '@/blocks/utils'
const file = normalizeFileInput(params.uploadFile || params.fileRef, { single: true })
if (file) params.file = file
```
For file uploads, create an internal API route (`/api/tools/{service}/upload`) that uses `downloadFileFromStorage` to get file content from `UserFile` objects.
### 3. Icon (`components/icons.tsx`)
```typescript
@@ -310,5 +293,3 @@ Register in `triggers/registry.ts`.
- [ ] Create block in `blocks/blocks/{service}.ts`
- [ ] Register block in `blocks/registry.ts`
- [ ] (Optional) Create and register triggers
- [ ] (If file uploads) Create internal API route with `downloadFileFromStorage`
- [ ] (If file uploads) Use `normalizeFileInput` in block config

View File

@@ -1131,32 +1131,6 @@ export function AirtableIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function AirweaveIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg
{...props}
width='143'
height='143'
viewBox='0 0 143 143'
fill='none'
xmlns='http://www.w3.org/2000/svg'
>
<path
d='M89.8854 128.872C79.9165 123.339 66.7502 115.146 60.5707 107.642L60.0432 107.018C58.7836 105.5 57.481 104.014 56.1676 102.593C51.9152 97.9641 47.3614 93.7978 42.646 90.2021C40.7405 88.7487 38.7704 87.3492 36.8111 86.0789C35.7991 85.4222 34.8302 84.8193 33.9151 84.2703C31.6221 82.903 28.8338 82.5263 26.2716 83.2476C23.8385 83.9366 21.89 85.5406 20.7596 87.7476C18.5634 92.0323 20.0814 97.3289 24.2046 99.805C27.5204 101.786 30.7608 104.111 33.8398 106.717C34.2381 107.05 34.3996 107.578 34.2596 108.062C33.1292 112.185 31.9989 118.957 31.5682 121.67C30.6424 127.429 33.4737 133.081 38.5982 135.751L38.7812 135.848C41.0204 137 43.6472 136.946 45.8219 135.697C47.9858 134.459 49.353 132.231 49.4822 129.733C49.536 128.657 49.6006 127.58 49.676 126.59C49.719 126.062 50.042 125.632 50.5264 125.459C50.6772 125.406 50.8494 125.373 51.0001 125.373C51.3554 125.373 51.6784 125.513 51.9475 125.782C56.243 130.185 60.8829 134.169 65.7167 137.625C70.3674 140.951 75.8686 142.706 81.639 142.706C83.7383 142.706 85.8376 142.469 87.8938 141.995L88.1199 141.942C90.9943 141.274 93.029 139.024 93.4488 136.085C93.8687 133.146 92.4476 130.315 89.8747 128.883H89.8639L89.8854 128.872Z'
fill='currentColor'
/>
<path
d='M142.551 58.1747L142.529 58.0563C142.045 55.591 140.118 53.7069 137.598 53.2548C135.112 52.8134 132.754 53.8577 131.484 55.9893L131.408 56.1077C126.704 64.1604 120.061 71.6101 111.653 78.2956C109.446 80.0504 107.293 81.902 105.226 83.8075C103.644 85.2717 101.265 85.53 99.4452 84.4212C97.6474 83.3339 95.8495 82.1389 94.1055 80.8686C90.3268 78.1233 86.6772 74.9475 83.2753 71.4271C81.4989 69.597 79.798 67.6915 78.1939 65.7321C76.0408 63.1161 73.7477 60.5539 71.3685 58.1316C66.3195 52.9857 56.6089 45.9127 53.7453 43.878C53.3792 43.6304 53.1639 43.2428 53.0993 42.8014C53.0455 42.3601 53.1639 41.9509 53.4546 41.6064C55.274 39.4318 56.9965 37.1818 58.5683 34.921C60.2369 32.5311 60.786 29.6028 60.0862 26.8899C59.408 24.2523 57.6424 22.11 55.134 20.8827C50.9139 18.7942 45.8972 20.0968 43.2273 23.9293C40.8373 27.3636 38.0167 30.7332 34.8732 33.9306C34.5718 34.232 34.1304 34.3397 33.7213 34.1889C30.5239 33.1447 27.2296 32.2942 23.9461 31.659C23.7093 31.616 23.354 31.5514 22.9126 31.4975C16.4102 30.5286 10.1123 33.7798 7.21639 39.5717L7.1195 39.7548C6.18289 41.628 6.26902 43.8349 7.32405 45.6651C8.40061 47.5167 10.3277 48.701 12.4592 48.8194C13.4604 48.8732 14.4401 48.9378 15.3659 49.0024C15.7966 49.0347 16.1411 49.2823 16.3025 49.6914C16.4533 50.1112 16.3671 50.5419 16.0657 50.8541C12.147 54.8804 8.60515 59.1974 5.5262 63.6867C1.1446 70.0814 -0.481008 78.2095 1.08 85.9822L1.10154 86.1006C1.70441 89.0719 4.05131 91.2035 7.07644 91.5264C9.98315 91.8386 12.6099 90.3208 13.7619 87.6724L13.8265 87.5109C18.6925 75.8625 26.7559 65.5168 37.7907 56.7536C38.3182 56.3445 39.0072 56.28 39.567 56.5922C45.3373 59.768 50.8601 63.902 55.9738 68.8864C56.5982 69.4893 56.6089 70.5013 56.0168 71.1257C53.4761 73.8063 51.0862 76.6054 48.9115 79.469C47.2106 81.7083 47.5335 84.8949 49.6221 86.7358L53.3254 89.9977L53.2824 90.0409C53.8637 90.5576 54.445 91.0744 55.0264 91.5911L55.8123 92.194C56.9319 93.1844 58.3529 93.6365 59.8386 93.4858C61.3027 93.3351 62.67 92.56 63.5635 91.3758C65.1353 89.2873 66.8578 87.2525 68.6556 85.304C68.957 84.9702 69.3661 84.798 69.8075 84.7872C70.2705 84.7872 70.6257 84.9379 70.9164 85.2286C75.8147 90.0624 81.1114 94.3686 86.6772 97.9966C88.8626 99.4176 89.4978 102.26 88.1306 104.477C86.9248 106.448 85.7729 108.493 84.7179 110.539C83.5014 112.918 83.2968 115.738 84.1688 118.257C84.9978 120.68 86.7095 122.585 88.981 123.64C90.2514 124.232 91.5971 124.534 92.9859 124.534C96.5062 124.534 99.682 122.596 101.286 119.452C102.729 116.61 104.419 113.8 106.281 111.131C107.369 109.559 109.36 108.838 111.255 109.322C115.26 110.355 120.643 111.421 124.454 112.143C128.308 112.864 132.119 111.023 133.96 107.578L134.143 107.233C135.521 104.628 135.531 101.506 134.164 98.8901C132.786 96.2526 130.181 94.4655 127.21 94.121C126.478 94.0349 125.778 93.9488 125.11 93.8626C124.97 93.8411 124.852 93.8196 124.744 93.798L123.356 93.4751L124.357 92.4523C124.432 92.377 124.529 92.2801 124.658 92.194C128.771 88.8028 132.571 85.1963 135.962 81.4714C141.668 75.1951 144.122 66.4965 142.518 58.1747H142.529H142.551Z'
fill='currentColor'
/>
<path
d='M56.6506 14.3371C65.5861 19.6338 77.4067 27.3743 82.9833 34.1674C83.64 34.9532 84.2967 35.7391 84.9534 36.4927C86.1591 37.8815 86.2991 39.8731 85.2979 41.4233C83.4892 44.2116 81.4115 46.9569 79.1399 49.5945C77.4713 51.5107 77.4067 54.3098 78.9785 56.2476L79.0431 56.323C79.2261 56.5598 79.4306 56.8074 79.6136 57.0442C81.2931 59.1758 83.0801 61.2213 84.9211 63.1375C85.9007 64.1603 87.2249 64.7309 88.6352 64.7309L88.7644 65.5275L88.7429 64.7309C90.207 64.6986 91.6173 64.0526 92.5969 62.933C94.8362 60.4031 96.9247 57.744 98.8302 55.0633C100.133 53.2224 102.63 52.8026 104.525 54.1052C106.463 55.4402 108.465 56.7105 110.457 57.8839C112.793 59.2511 115.614 59.5095 118.165 58.5621C120.749 57.604 122.762 55.5694 123.656 52.9533C125.055 48.9055 123.257 44.2547 119.382 41.9078C116.755 40.3145 114.15 38.5166 111.674 36.5788C110.382 35.5561 109.833 33.8767 110.296 32.2941C111.437 28.3001 112.481 23.1218 113.148 19.4831C113.837 15.7259 112.147 11.8826 108.939 9.94477L108.562 9.72944C105.871 8.12537 102.587 8.00696 99.7668 9.40649C96.9247 10.8168 95.03 13.5405 94.6855 16.6733L94.6639 16.867C94.6209 17.2546 94.384 17.5453 94.018 17.6637C93.652 17.7821 93.2859 17.6852 93.0168 17.4269C89.0012 13.1422 84.738 9.25576 80.3134 5.8646C74.3708 1.31075 66.7811 -0.583999 59.4928 0.675575L59.1805 0.729423C56.1124 1.2677 53.7547 3.60383 53.1949 6.68279C52.6351 9.72946 53.9915 12.7223 56.6722 14.3048H56.6614L56.6506 14.3371Z'
fill='currentColor'
/>
</svg>
)
}
export function GoogleDocsIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg

View File

@@ -7,7 +7,6 @@ import {
A2AIcon,
AhrefsIcon,
AirtableIcon,
AirweaveIcon,
ApifyIcon,
ApolloIcon,
ArxivIcon,
@@ -142,7 +141,6 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
a2a: A2AIcon,
ahrefs: AhrefsIcon,
airtable: AirtableIcon,
airweave: AirweaveIcon,
apify: ApifyIcon,
apollo: ApolloIcon,
arxiv: ArxivIcon,
@@ -165,9 +163,9 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
elevenlabs: ElevenLabsIcon,
enrich: EnrichSoIcon,
exa: ExaAIIcon,
file_v3: DocumentIcon,
file_v2: DocumentIcon,
firecrawl: FirecrawlIcon,
fireflies_v2: FirefliesIcon,
fireflies: FirefliesIcon,
github_v2: GithubIcon,
gitlab: GitLabIcon,
gmail_v2: GmailIcon,
@@ -179,7 +177,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
google_maps: GoogleMapsIcon,
google_search: GoogleIcon,
google_sheets_v2: GoogleSheetsIcon,
google_slides_v2: GoogleSlidesIcon,
google_slides: GoogleSlidesIcon,
google_vault: GoogleVaultIcon,
grafana: GrafanaIcon,
grain: GrainIcon,
@@ -208,7 +206,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
microsoft_excel_v2: MicrosoftExcelIcon,
microsoft_planner: MicrosoftPlannerIcon,
microsoft_teams: MicrosoftTeamsIcon,
mistral_parse_v3: MistralIcon,
mistral_parse_v2: MistralIcon,
mongodb: MongoDBIcon,
mysql: MySQLIcon,
neo4j: Neo4jIcon,
@@ -223,11 +221,11 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
polymarket: PolymarketIcon,
postgresql: PostgresIcon,
posthog: PosthogIcon,
pulse_v2: PulseIcon,
pulse: PulseIcon,
qdrant: QdrantIcon,
rds: RDSIcon,
reddit: RedditIcon,
reducto_v2: ReductoIcon,
reducto: ReductoIcon,
resend: ResendIcon,
s3: S3Icon,
salesforce: SalesforceIcon,
@@ -246,11 +244,11 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
ssh: SshIcon,
stagehand: StagehandIcon,
stripe: StripeIcon,
stt_v2: STTIcon,
stt: STTIcon,
supabase: SupabaseIcon,
tavily: TavilyIcon,
telegram: TelegramIcon,
textract_v2: TextractIcon,
textract: TextractIcon,
tinybird: TinybirdIcon,
translate: TranslateIcon,
trello: TrelloIcon,
@@ -259,7 +257,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
twilio_voice: TwilioIcon,
typeform: TypeformIcon,
video_generator_v2: VideoIcon,
vision_v2: EyeIcon,
vision: EyeIcon,
wealthbox: WealthboxIcon,
webflow: WebflowIcon,
whatsapp: WhatsAppIcon,

View File

@@ -6,7 +6,7 @@ description: Mehrere Dateien lesen und parsen
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="file_v3"
type="file"
color="#40916C"
/>

View File

@@ -6,7 +6,7 @@ description: Interagieren Sie mit Fireflies.ai-Besprechungstranskripten und -auf
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="fireflies_v2"
type="fireflies"
color="#100730"
/>

View File

@@ -6,7 +6,7 @@ description: Text aus PDF-Dokumenten extrahieren
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="mistral_parse_v3"
type="mistral_parse"
color="#000000"
/>

View File

@@ -56,7 +56,7 @@ Switch between modes using the mode selector at the bottom of the input area.
Select your preferred AI model using the model selector at the bottom right of the input area.
**Available Models:**
- Claude 4.6 Opus (default), 4.5 Opus, Sonnet, Haiku
- Claude 4.5 Opus, Sonnet (default), Haiku
- GPT 5.2 Codex, Pro
- Gemini 3 Pro

View File

@@ -213,25 +213,6 @@ Different subscription plans have different usage limits:
| **Team** | $40/seat (pooled, adjustable) | 300 sync, 2,500 async |
| **Enterprise** | Custom | Custom |
## Execution Time Limits
Workflows have maximum execution time limits based on your subscription plan:
| Plan | Sync Execution | Async Execution |
|------|----------------|-----------------|
| **Free** | 5 minutes | 10 minutes |
| **Pro** | 50 minutes | 90 minutes |
| **Team** | 50 minutes | 90 minutes |
| **Enterprise** | 50 minutes | 90 minutes |
**Sync executions** run immediately and return results directly. These are triggered via the API with `async: false` (default) or through the UI.
**Async executions** (triggered via API with `async: true`, webhooks, or schedules) run in the background. Async time limits are up to 2x the sync limit, capped at 90 minutes.
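As a rough sketch, an async execution could be triggered like this (the endpoint and API-key header follow the workflow execution examples in the API docs; passing `async` in the request body is an assumption):
```typescript
// Sketch only: endpoint shape taken from the workflow execution examples;
// where the `async` flag is passed is an assumption.
const res = await fetch('https://sim.ai/api/workflows/YOUR_WORKFLOW_ID/execute', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json', 'x-api-key': 'YOUR_API_KEY' },
  body: JSON.stringify({ async: true }),
})
```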
<Callout type="info">
If a workflow exceeds its time limit, it will be terminated and marked as failed with a timeout error. Design long-running workflows to use async execution or break them into smaller workflows.
</Callout>
## Billing Model
Sim uses a **base subscription + overage** billing model:

View File

@@ -1,168 +0,0 @@
---
title: Passing Files
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
Sim makes it easy to work with files throughout your workflows. Blocks can receive files, process them, and pass them to other blocks seamlessly.
## File Objects
When blocks output files (like Gmail attachments, generated images, or parsed documents), they return a standardized file object:
```json
{
"name": "report.pdf",
"url": "https://...",
"base64": "JVBERi0xLjQK...",
"type": "application/pdf",
"size": 245678
}
```
You can access any of these properties when referencing files from previous blocks.
## The File Block
The **File block** is the universal entry point for files in your workflows. It accepts files from any source and outputs standardized file objects that work with all integrations.
**Inputs:**
- **Uploaded files** - Drag and drop or select files directly
- **External URLs** - Any publicly accessible file URL
- **Files from other blocks** - Pass files from Gmail attachments, Slack downloads, etc.
**Outputs:**
- A list of `UserFile` objects with consistent structure (`name`, `url`, `base64`, `type`, `size`)
- `combinedContent` - Extracted text content from all files (for documents)
**Example usage:**
```
// Get all files from the File block
<file.files>
// Get the first file
<file.files[0]>
// Get combined text content from parsed documents
<file.combinedContent>
```
The File block automatically:
- Detects file types from URLs and extensions
- Extracts text from PDFs, CSVs, and documents
- Generates base64 encoding for binary files
- Creates presigned URLs for secure access
Use the File block when you need to normalize files from different sources before passing them to other blocks like Vision, STT, or email integrations.
## Passing Files Between Blocks
Reference files from previous blocks using the tag dropdown. Click in any file input field and type `<` to see available outputs.
**Common patterns:**
```
// Single file from a block
<gmail.attachments[0]>
// Pass the whole file object
<file_parser.files[0]>
// Access specific properties
<gmail.attachments[0].name>
<gmail.attachments[0].base64>
```
Most blocks accept the full file object and extract what they need automatically. You don't need to manually extract `base64` or `url` in most cases.
## Triggering Workflows with Files
When calling a workflow via API that expects file input, include files in your request:
<Tabs items={['Base64', 'URL']}>
<Tab value="Base64">
```bash
curl -X POST "https://sim.ai/api/workflows/YOUR_WORKFLOW_ID/execute" \
-H "Content-Type: application/json" \
-H "x-api-key: YOUR_API_KEY" \
-d '{
"document": {
"name": "report.pdf",
"base64": "JVBERi0xLjQK...",
"type": "application/pdf"
}
}'
```
</Tab>
<Tab value="URL">
```bash
curl -X POST "https://sim.ai/api/workflows/YOUR_WORKFLOW_ID/execute" \
-H "Content-Type: application/json" \
-H "x-api-key: YOUR_API_KEY" \
-d '{
"document": {
"name": "report.pdf",
"url": "https://example.com/report.pdf",
"type": "application/pdf"
}
}'
```
</Tab>
</Tabs>
The workflow's Start block should have an input field configured to receive the file parameter.
## Receiving Files in API Responses
When a workflow outputs files, they're included in the response:
```json
{
"success": true,
"output": {
"generatedFile": {
"name": "output.png",
"url": "https://...",
"base64": "iVBORw0KGgo...",
"type": "image/png",
"size": 34567
}
}
}
```
Use `url` for direct downloads or `base64` for inline processing.
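For instance, a minimal TypeScript sketch of consuming such a response (field names follow the example above):
```typescript
// Sketch: `response` is the parsed workflow execution response shown above.
declare const response: {
  success: boolean
  output: {
    generatedFile: { name: string; url: string; base64: string; type: string; size: number }
  }
}

const file = response.output.generatedFile

// Option 1: download via the presigned URL
const bytes = await fetch(file.url).then((r) => r.arrayBuffer())

// Option 2: decode the inline base64 content (Node.js)
const buffer = Buffer.from(file.base64, 'base64')
```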
## Blocks That Work with Files
**File inputs:**
- **File** - Parse documents, images, and text files
- **Vision** - Analyze images with AI models
- **Mistral Parser** - Extract text from PDFs
**File outputs:**
- **Gmail** - Email attachments
- **Slack** - Downloaded files
- **TTS** - Generated audio files
- **Video Generator** - Generated videos
- **Image Generator** - Generated images
**File storage:**
- **Supabase** - Upload/download from storage
- **S3** - AWS S3 operations
- **Google Drive** - Drive file operations
- **Dropbox** - Dropbox file operations
<Callout type="info">
Files are automatically available to downstream blocks. The execution engine handles all file transfer and format conversion.
</Callout>
## Best Practices
1. **Use file objects directly** - Pass the full file object rather than extracting individual properties. Blocks handle the conversion automatically.
2. **Check file types** - Ensure the file type matches what the receiving block expects. The Vision block needs images, the File block handles documents.
3. **Consider file size** - Large files increase execution time. For very large files, consider using storage blocks (S3, Supabase) for intermediate storage.

View File

@@ -1,3 +1,3 @@
{
"pages": ["index", "basics", "files", "api", "logging", "costs"]
"pages": ["index", "basics", "api", "logging", "costs"]
}

View File

@@ -10,7 +10,6 @@
"connections",
"mcp",
"copilot",
"skills",
"knowledgebase",
"variables",
"execution",

View File

@@ -180,11 +180,6 @@ A quick lookup for everyday actions in the Sim workflow editor. For keyboard sho
<td>Right-click → **Enable/Disable**</td>
<td><ActionImage src="/static/quick-reference/disable-block.png" alt="Disable block" /></td>
</tr>
<tr>
<td>Lock/Unlock a block</td>
<td>Hover block → Click lock icon (Admin only)</td>
<td><ActionImage src="/static/quick-reference/lock-block.png" alt="Lock block" /></td>
</tr>
<tr>
<td>Toggle handle orientation</td>
<td>Right-click → **Toggle Handles**</td>

View File

@@ -1,83 +0,0 @@
---
title: Agent Skills
---
import { Callout } from 'fumadocs-ui/components/callout'
Agent Skills are reusable packages of instructions that give your AI agents specialized capabilities. Based on the open [Agent Skills](https://agentskills.io) format, skills let you capture domain expertise, workflows, and best practices that agents can load on demand.
## How Skills Work
Skills use **progressive disclosure** to keep agent context lean:
1. **Discovery** — Only skill names and descriptions are included in the agent's system prompt (~50-100 tokens each)
2. **Activation** — When the agent decides a skill is relevant, it calls the `load_skill` tool to load the full instructions into context
3. **Execution** — The agent follows the loaded instructions to complete the task
This means you can attach many skills to an agent without bloating its context window. The agent only loads what it needs.
## Creating Skills
Go to **Settings** (gear icon) and select **Skills** under the Tools section.
Click **Add** to create a new skill with three fields:
| Field | Description |
|-------|-------------|
| **Name** | A kebab-case identifier (e.g. `sql-expert`, `code-reviewer`). Max 64 characters. |
| **Description** | A short explanation of what the skill does and when to use it. This is what the agent reads to decide whether to activate the skill. Max 1024 characters. |
| **Content** | The full skill instructions in markdown. This is loaded when the agent activates the skill. |
<Callout type="info">
The description is critical — it's the only thing the agent sees before deciding to load a skill. Be specific about when and why the skill should be used.
</Callout>
### Writing Good Skill Content
Skill content follows the same conventions as [SKILL.md files](https://agentskills.io/specification):
```markdown
# SQL Expert
## When to use this skill
Use when the user asks you to write, optimize, or debug SQL queries.
## Instructions
1. Always ask which database engine (PostgreSQL, MySQL, SQLite)
2. Use CTEs over subqueries for readability
3. Add index recommendations when relevant
4. Explain query plans for optimization requests
## Common Patterns
...
```
## Adding Skills to an Agent
Open any **Agent** block and find the **Skills** dropdown below the tools section. Select the skills you want the agent to have access to.
Selected skills appear as chips that you can click to edit or remove.
### What Happens at Runtime
When the workflow runs:
1. The agent's system prompt includes an `<available_skills>` section listing each skill's name and description
2. A `load_skill` tool is automatically added to the agent's available tools
3. When the agent determines a skill is relevant to the current task, it calls `load_skill` with the skill name
4. The full skill content is returned as a tool response, giving the agent detailed instructions
This works across all supported LLM providers — the `load_skill` tool uses standard tool-calling, so no provider-specific configuration is needed.
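As a rough sketch, the tool call an agent emits might look like the following; only the tool name `load_skill` comes from the description above, and the argument key is an assumption:
```typescript
// Hypothetical shape of the standard tool call the agent emits when it
// decides a skill is relevant (argument key assumed).
const toolCall = {
  name: 'load_skill',
  arguments: { name: 'sql-expert' },
}
// The tool response returns the full skill content, which the agent then follows.
```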
## Tips
- **Keep descriptions actionable** — Instead of "Helps with SQL", write "Write optimized SQL queries for PostgreSQL, MySQL, and SQLite, including index recommendations and query plan analysis"
- **One skill per domain** — A focused `sql-expert` skill works better than a broad `database-everything` skill
- **Use markdown structure** — Headers, lists, and code blocks help the agent parse and follow instructions
- **Test iteratively** — Run your workflow and check if the agent activates the skill when expected
## Learn More
- [Agent Skills specification](https://agentskills.io) — The open format for portable agent skills
- [Example skills](https://github.com/anthropics/skills) — Browse community skill examples
- [Best practices](https://agentskills.io/what-are-skills) — Writing effective skills

View File

@@ -1,52 +0,0 @@
---
title: Airweave
description: Search your synced data collections
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="airweave"
color="#6366F1"
/>
## Usage Instructions
Search across your synced data sources using Airweave. Supports semantic search with hybrid, neural, or keyword retrieval strategies. Optionally generate AI-powered answers from search results.
## Tools
### `airweave_search`
Search your synced data collections using Airweave. Supports semantic search with hybrid, neural, or keyword retrieval strategies. Optionally generate AI-powered answers from search results.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Airweave API Key for authentication |
| `collectionId` | string | Yes | The readable ID of the collection to search |
| `query` | string | Yes | The search query text |
| `limit` | number | No | Maximum number of results to return \(default: 100\) |
| `retrievalStrategy` | string | No | Retrieval strategy: hybrid \(default\), neural, or keyword |
| `expandQuery` | boolean | No | Generate query variations to improve recall |
| `rerank` | boolean | No | Reorder results for improved relevance using LLM |
| `generateAnswer` | boolean | No | Generate a natural-language answer to the query |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `results` | array | Search results with content, scores, and metadata from your synced data |
| ↳ `entity_id` | string | Unique identifier for the search result entity |
| ↳ `source_name` | string | Name of the data source \(e.g., "GitHub", "Slack"\) |
| ↳ `md_content` | string | Markdown-formatted content of the result |
| ↳ `score` | number | Relevance score from the search |
| ↳ `metadata` | object | Additional metadata associated with the result |
| ↳ `breadcrumbs` | array | Navigation path to the result within its source |
| ↳ `url` | string | URL to the original content |
| `completion` | string | AI-generated answer to the query \(when generateAnswer is enabled\) |

View File

@@ -49,25 +49,10 @@ Retrieve content from Confluence pages using the Confluence API.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `ts` | string | Timestamp of retrieval |
| `pageId` | string | Confluence page ID |
| `title` | string | Page title |
| `content` | string | Page content with HTML tags stripped |
| `status` | string | Page status \(current, archived, trashed, draft\) |
| `spaceId` | string | ID of the space containing the page |
| `parentId` | string | ID of the parent page |
| `authorId` | string | Account ID of the page author |
| `createdAt` | string | ISO 8601 timestamp when the page was created |
| `url` | string | URL to view the page in Confluence |
| `body` | object | Raw page body content in storage format |
| ↳ `value` | string | The content value in the specified format |
| ↳ `representation` | string | Content representation type |
| `version` | object | Page version information |
| ↳ `number` | number | Version number |
| ↳ `message` | string | Version message |
| ↳ `minorEdit` | boolean | Whether this is a minor edit |
| ↳ `authorId` | string | Account ID of the version author |
| ↳ `createdAt` | string | ISO 8601 timestamp of version creation |
| `title` | string | Page title |
### `confluence_update`
@@ -91,25 +76,6 @@ Update a Confluence page using the Confluence API.
| `ts` | string | Timestamp of update |
| `pageId` | string | Confluence page ID |
| `title` | string | Updated page title |
| `status` | string | Page status |
| `spaceId` | string | Space ID |
| `body` | object | Page body content in storage format |
| ↳ `storage` | object | Body in storage format \(Confluence markup\) |
| ↳ `value` | string | The content value in the specified format |
| ↳ `representation` | string | Content representation type |
| ↳ `view` | object | Body in view format \(rendered HTML\) |
| ↳ `value` | string | The content value in the specified format |
| ↳ `representation` | string | Content representation type |
| ↳ `atlas_doc_format` | object | Body in Atlassian Document Format \(ADF\) |
| ↳ `value` | string | The content value in the specified format |
| ↳ `representation` | string | Content representation type |
| `version` | object | Page version information |
| ↳ `number` | number | Version number |
| ↳ `message` | string | Version message |
| ↳ `minorEdit` | boolean | Whether this is a minor edit |
| ↳ `authorId` | string | Account ID of the version author |
| ↳ `createdAt` | string | ISO 8601 timestamp of version creation |
| `url` | string | URL to view the page in Confluence |
| `success` | boolean | Update operation success status |
### `confluence_create_page`
@@ -134,30 +100,11 @@ Create a new page in a Confluence space.
| `ts` | string | Timestamp of creation |
| `pageId` | string | Created page ID |
| `title` | string | Page title |
| `status` | string | Page status |
| `spaceId` | string | Space ID |
| `parentId` | string | Parent page ID |
| `body` | object | Page body content |
| ↳ `storage` | object | Body in storage format \(Confluence markup\) |
| ↳ `value` | string | The content value in the specified format |
| ↳ `representation` | string | Content representation type |
| ↳ `view` | object | Body in view format \(rendered HTML\) |
| ↳ `value` | string | The content value in the specified format |
| ↳ `representation` | string | Content representation type |
| ↳ `atlas_doc_format` | object | Body in Atlassian Document Format \(ADF\) |
| ↳ `value` | string | The content value in the specified format |
| ↳ `representation` | string | Content representation type |
| `version` | object | Page version information |
| ↳ `number` | number | Version number |
| ↳ `message` | string | Version message |
| ↳ `minorEdit` | boolean | Whether this is a minor edit |
| ↳ `authorId` | string | Account ID of the version author |
| ↳ `createdAt` | string | ISO 8601 timestamp of version creation |
| `url` | string | Page URL |
### `confluence_delete_page`
Delete a Confluence page. By default moves to trash; use purge=true to permanently delete.
Delete a Confluence page (moves it to trash where it can be restored).
#### Input
@@ -165,7 +112,6 @@ Delete a Confluence page. By default moves to trash; use purge=true to permanent
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `pageId` | string | Yes | Confluence page ID to delete |
| `purge` | boolean | No | If true, permanently deletes the page instead of moving to trash \(default: false\) |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
@@ -176,229 +122,6 @@ Delete a Confluence page. By default moves to trash; use purge=true to permanent
| `pageId` | string | Deleted page ID |
| `deleted` | boolean | Deletion status |
### `confluence_list_pages_in_space`
List all pages within a specific Confluence space. Supports pagination and filtering by status.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `spaceId` | string | Yes | The ID of the Confluence space to list pages from |
| `limit` | number | No | Maximum number of pages to return \(default: 50, max: 250\) |
| `status` | string | No | Filter pages by status: current, archived, trashed, or draft |
| `bodyFormat` | string | No | Format for page body content: storage, atlas_doc_format, or view. If not specified, body is not included. |
| `cursor` | string | No | Pagination cursor from previous response to get the next page of results |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `pages` | array | Array of pages in the space |
| ↳ `id` | string | Unique page identifier |
| ↳ `title` | string | Page title |
| ↳ `status` | string | Page status \(e.g., current, archived, trashed, draft\) |
| ↳ `spaceId` | string | ID of the space containing the page |
| ↳ `parentId` | string | ID of the parent page \(null if top-level\) |
| ↳ `authorId` | string | Account ID of the page author |
| ↳ `createdAt` | string | ISO 8601 timestamp when the page was created |
| ↳ `version` | object | Page version information |
| ↳ `number` | number | Version number |
| ↳ `message` | string | Version message |
| ↳ `minorEdit` | boolean | Whether this is a minor edit |
| ↳ `authorId` | string | Account ID of the version author |
| ↳ `createdAt` | string | ISO 8601 timestamp of version creation |
| ↳ `body` | object | Page body content \(if bodyFormat was specified\) |
| ↳ `storage` | object | Body in storage format \(Confluence markup\) |
| ↳ `value` | string | The content value in the specified format |
| ↳ `representation` | string | Content representation type |
| ↳ `view` | object | Body in view format \(rendered HTML\) |
| ↳ `value` | string | The content value in the specified format |
| ↳ `representation` | string | Content representation type |
| ↳ `atlas_doc_format` | object | Body in Atlassian Document Format \(ADF\) |
| ↳ `value` | string | The content value in the specified format |
| ↳ `representation` | string | Content representation type |
| ↳ `webUrl` | string | URL to view the page in Confluence |
| `nextCursor` | string | Cursor for fetching the next page of results |
### `confluence_get_page_children`
Get all child pages of a specific Confluence page. Useful for navigating page hierarchies.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `pageId` | string | Yes | The ID of the parent page to get children from |
| `limit` | number | No | Maximum number of child pages to return \(default: 50, max: 250\) |
| `cursor` | string | No | Pagination cursor from previous response to get the next page of results |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `parentId` | string | ID of the parent page |
| `children` | array | Array of child pages |
| ↳ `id` | string | Child page ID |
| ↳ `title` | string | Child page title |
| ↳ `status` | string | Page status |
| ↳ `spaceId` | string | Space ID |
| ↳ `childPosition` | number | Position among siblings |
| ↳ `webUrl` | string | URL to view the page |
| `nextCursor` | string | Cursor for fetching the next page of results |
### `confluence_get_page_ancestors`
Get the ancestor (parent) pages of a specific Confluence page. Returns the full hierarchy from the page up to the root.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `pageId` | string | Yes | The ID of the page to get ancestors for |
| `limit` | number | No | Maximum number of ancestors to return \(default: 25, max: 250\) |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `pageId` | string | ID of the page whose ancestors were retrieved |
| `ancestors` | array | Array of ancestor pages, ordered from direct parent to root |
| ↳ `id` | string | Ancestor page ID |
| ↳ `title` | string | Ancestor page title |
| ↳ `status` | string | Page status |
| ↳ `spaceId` | string | Space ID |
| ↳ `webUrl` | string | URL to view the page |
### `confluence_list_page_versions`
List all versions (revision history) of a Confluence page.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `pageId` | string | Yes | The ID of the page to get versions for |
| `limit` | number | No | Maximum number of versions to return \(default: 50, max: 250\) |
| `cursor` | string | No | Pagination cursor from previous response |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `pageId` | string | ID of the page |
| `versions` | array | Array of page versions |
| ↳ `number` | number | Version number |
| ↳ `message` | string | Version message |
| ↳ `minorEdit` | boolean | Whether this is a minor edit |
| ↳ `authorId` | string | Account ID of the version author |
| ↳ `createdAt` | string | ISO 8601 timestamp of version creation |
| `nextCursor` | string | Cursor for fetching the next page of results |
### `confluence_get_page_version`
Get details about a specific version of a Confluence page.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `pageId` | string | Yes | The ID of the page |
| `versionNumber` | number | Yes | The version number to retrieve \(e.g., 1, 2, 3\) |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `pageId` | string | ID of the page |
| `version` | object | Detailed version information |
| ↳ `number` | number | Version number |
| ↳ `message` | string | Version message |
| ↳ `minorEdit` | boolean | Whether this is a minor edit |
| ↳ `authorId` | string | Account ID of the version author |
| ↳ `createdAt` | string | ISO 8601 timestamp of version creation |
| ↳ `contentTypeModified` | boolean | Whether the content type was modified in this version |
| ↳ `collaborators` | array | List of collaborator account IDs for this version |
| ↳ `prevVersion` | number | Previous version number |
| ↳ `nextVersion` | number | Next version number |
### `confluence_list_page_properties`
List all custom properties (metadata) attached to a Confluence page.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `pageId` | string | Yes | The ID of the page to list properties from |
| `limit` | number | No | Maximum number of properties to return \(default: 50, max: 250\) |
| `cursor` | string | No | Pagination cursor from previous response |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `pageId` | string | ID of the page |
| `properties` | array | Array of content properties |
| ↳ `id` | string | Property ID |
| ↳ `key` | string | Property key |
| ↳ `value` | json | Property value \(can be any JSON\) |
| ↳ `version` | object | Version information |
| ↳ `number` | number | Version number |
| ↳ `message` | string | Version message |
| ↳ `minorEdit` | boolean | Whether this is a minor edit |
| ↳ `authorId` | string | Account ID of the version author |
| ↳ `createdAt` | string | ISO 8601 timestamp of version creation |
| `nextCursor` | string | Cursor for fetching the next page of results |
### `confluence_create_page_property`
Create a new custom property (metadata) on a Confluence page.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `pageId` | string | Yes | The ID of the page to add the property to |
| `key` | string | Yes | The key/name for the property |
| `value` | json | Yes | The value for the property \(can be any JSON value\) |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `pageId` | string | ID of the page |
| `propertyId` | string | ID of the created property |
| `key` | string | Property key |
| `value` | json | Property value |
| `version` | object | Version information |
| ↳ `number` | number | Version number |
| ↳ `message` | string | Version message |
| ↳ `minorEdit` | boolean | Whether this is a minor edit |
| ↳ `authorId` | string | Account ID of the version author |
| ↳ `createdAt` | string | ISO 8601 timestamp of version creation |
### `confluence_search`
Search for content across Confluence pages, blog posts, and other content.
@@ -432,211 +155,6 @@ Search for content across Confluence pages, blog posts, and other content.
| ↳ `lastModified` | string | ISO 8601 timestamp of last modification |
| ↳ `entityType` | string | Entity type identifier \(e.g., content, space\) |
### `confluence_search_in_space`
Search for content within a specific Confluence space. Optionally filter by text query and content type.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `spaceKey` | string | Yes | The key of the Confluence space to search in \(e.g., "ENG", "HR"\) |
| `query` | string | No | Text search query. If not provided, returns all content in the space. |
| `contentType` | string | No | Filter by content type: page, blogpost, attachment, or comment |
| `limit` | number | No | Maximum number of results to return \(default: 25, max: 250\) |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `spaceKey` | string | The space key that was searched |
| `totalSize` | number | Total number of matching results |
| `results` | array | Array of search results |
| ↳ `id` | string | Unique content identifier |
| ↳ `title` | string | Content title |
| ↳ `type` | string | Content type \(e.g., page, blogpost, attachment, comment\) |
| ↳ `status` | string | Content status \(e.g., current\) |
| ↳ `url` | string | URL to view the content in Confluence |
| ↳ `excerpt` | string | Text excerpt matching the search query |
| ↳ `spaceKey` | string | Key of the space containing the content |
| ↳ `space` | object | Space information for the content |
| ↳ `id` | string | Space identifier |
| ↳ `key` | string | Space key |
| ↳ `name` | string | Space name |
| ↳ `lastModified` | string | ISO 8601 timestamp of last modification |
| ↳ `entityType` | string | Entity type identifier \(e.g., content, space\) |
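A minimal, illustrative parameter set for a space-scoped search (the space key and query are placeholders):
```typescript
const searchInSpaceParams = {
  domain: 'yourcompany.atlassian.net',
  spaceKey: 'ENG',
  query: 'deployment runbook', // omit to list all content in the space
  contentType: 'page',         // page, blogpost, attachment, or comment
  limit: 25,
}
```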
### `confluence_list_blogposts`
List all blog posts across all accessible Confluence spaces.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `limit` | number | No | Maximum number of blog posts to return \(default: 25, max: 250\) |
| `status` | string | No | Filter by status: current, archived, trashed, or draft |
| `sort` | string | No | Sort order: created-date, -created-date, modified-date, -modified-date, title, -title |
| `cursor` | string | No | Pagination cursor from previous response |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `blogPosts` | array | Array of blog posts |
| ↳ `id` | string | Blog post ID |
| ↳ `title` | string | Blog post title |
| ↳ `status` | string | Blog post status |
| ↳ `spaceId` | string | Space ID |
| ↳ `authorId` | string | Author account ID |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `version` | object | Version information |
| ↳ `number` | number | Version number |
| ↳ `message` | string | Version message |
| ↳ `minorEdit` | boolean | Whether this is a minor edit |
| ↳ `authorId` | string | Account ID of the version author |
| ↳ `createdAt` | string | ISO 8601 timestamp of version creation |
| ↳ `webUrl` | string | URL to view the blog post |
| `nextCursor` | string | Cursor for fetching the next page of results |
### `confluence_get_blogpost`
Get a specific Confluence blog post by ID, including its content.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `blogPostId` | string | Yes | The ID of the blog post to retrieve |
| `bodyFormat` | string | No | Format for blog post body: storage, atlas_doc_format, or view |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `id` | string | Blog post ID |
| `title` | string | Blog post title |
| `status` | string | Blog post status |
| `spaceId` | string | Space ID |
| `authorId` | string | Author account ID |
| `createdAt` | string | Creation timestamp |
| `version` | object | Version information |
| ↳ `number` | number | Version number |
| ↳ `message` | string | Version message |
| ↳ `minorEdit` | boolean | Whether this is a minor edit |
| ↳ `authorId` | string | Account ID of the version author |
| ↳ `createdAt` | string | ISO 8601 timestamp of version creation |
| `body` | object | Blog post body content in requested format\(s\) |
| ↳ `storage` | object | Body in storage format \(Confluence markup\) |
| ↳ `value` | string | The content value in the specified format |
| ↳ `representation` | string | Content representation type |
| ↳ `view` | object | Body in view format \(rendered HTML\) |
| ↳ `value` | string | The content value in the specified format |
| ↳ `representation` | string | Content representation type |
| ↳ `atlas_doc_format` | object | Body in Atlassian Document Format \(ADF\) |
| ↳ `value` | string | The content value in the specified format |
| ↳ `representation` | string | Content representation type |
| `webUrl` | string | URL to view the blog post |
### `confluence_create_blogpost`
Create a new blog post in a Confluence space.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `spaceId` | string | Yes | The ID of the space to create the blog post in |
| `title` | string | Yes | Title of the blog post |
| `content` | string | Yes | Blog post content in Confluence storage format \(HTML\) |
| `status` | string | No | Blog post status: current \(default\) or draft |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `id` | string | Created blog post ID |
| `title` | string | Blog post title |
| `status` | string | Blog post status |
| `spaceId` | string | Space ID |
| `authorId` | string | Author account ID |
| `body` | object | Blog post body content |
| ↳ `storage` | object | Body in storage format \(Confluence markup\) |
| ↳ `value` | string | The content value in the specified format |
| ↳ `representation` | string | Content representation type |
| ↳ `view` | object | Body in view format \(rendered HTML\) |
| ↳ `value` | string | The content value in the specified format |
| ↳ `representation` | string | Content representation type |
| ↳ `atlas_doc_format` | object | Body in Atlassian Document Format \(ADF\) |
| ↳ `value` | string | The content value in the specified format |
| ↳ `representation` | string | Content representation type |
| `version` | object | Blog post version information |
| ↳ `number` | number | Version number |
| ↳ `message` | string | Version message |
| ↳ `minorEdit` | boolean | Whether this is a minor edit |
| ↳ `authorId` | string | Account ID of the version author |
| ↳ `createdAt` | string | ISO 8601 timestamp of version creation |
| `webUrl` | string | URL to view the blog post |
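As a sketch, the create call takes content in Confluence storage format; the markup below is a trivial placeholder:
```typescript
const createBlogPostParams = {
  domain: 'yourcompany.atlassian.net',
  spaceId: '98765',
  title: 'Release Notes 1.4.0',
  content: '<p>Highlights from this release.</p>', // storage format (XHTML-based)
  status: 'current', // or 'draft'
}
```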
### `confluence_list_blogposts_in_space`
List all blog posts within a specific Confluence space.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `spaceId` | string | Yes | The ID of the Confluence space to list blog posts from |
| `limit` | number | No | Maximum number of blog posts to return \(default: 25, max: 250\) |
| `status` | string | No | Filter by status: current, archived, trashed, or draft |
| `bodyFormat` | string | No | Format for blog post body: storage, atlas_doc_format, or view |
| `cursor` | string | No | Pagination cursor from previous response |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `blogPosts` | array | Array of blog posts in the space |
| ↳ `id` | string | Blog post ID |
| ↳ `title` | string | Blog post title |
| ↳ `status` | string | Blog post status |
| ↳ `spaceId` | string | Space ID |
| ↳ `authorId` | string | Author account ID |
| ↳ `createdAt` | string | Creation timestamp |
| ↳ `version` | object | Version information |
| ↳ `number` | number | Version number |
| ↳ `message` | string | Version message |
| ↳ `minorEdit` | boolean | Whether this is a minor edit |
| ↳ `authorId` | string | Account ID of the version author |
| ↳ `createdAt` | string | ISO 8601 timestamp of version creation |
| ↳ `body` | object | Blog post body content |
| ↳ `storage` | object | Body in storage format \(Confluence markup\) |
| ↳ `value` | string | The content value in the specified format |
| ↳ `representation` | string | Content representation type |
| ↳ `view` | object | Body in view format \(rendered HTML\) |
| ↳ `value` | string | The content value in the specified format |
| ↳ `representation` | string | Content representation type |
| ↳ `atlas_doc_format` | object | Body in Atlassian Document Format \(ADF\) |
| ↳ `value` | string | The content value in the specified format |
| ↳ `representation` | string | Content representation type |
| ↳ `webUrl` | string | URL to view the blog post |
| `nextCursor` | string | Cursor for fetching the next page of results |
### `confluence_create_comment`
Add a comment to a Confluence page.
@@ -669,8 +187,6 @@ List all comments on a Confluence page.
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `pageId` | string | Yes | Confluence page ID to list comments from |
| `limit` | number | No | Maximum number of comments to return \(default: 25\) |
| `bodyFormat` | string | No | Format for the comment body: storage, atlas_doc_format, view, or export_view \(default: storage\) |
| `cursor` | string | No | Pagination cursor from previous response |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
@@ -696,7 +212,6 @@ List all comments on a Confluence page.
| ↳ `minorEdit` | boolean | Whether this is a minor edit |
| ↳ `authorId` | string | Account ID of the version author |
| ↳ `createdAt` | string | ISO 8601 timestamp of version creation |
| `nextCursor` | string | Cursor for fetching the next page of results |
### `confluence_update_comment`
@@ -776,8 +291,7 @@ List all attachments on a Confluence page.
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `pageId` | string | Yes | Confluence page ID to list attachments from |
| `limit` | number | No | Maximum number of attachments to return \(default: 50, max: 250\) |
| `cursor` | string | No | Pagination cursor from previous response |
| `limit` | number | No | Maximum number of attachments to return \(default: 25\) |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
@@ -802,7 +316,6 @@ List all attachments on a Confluence page.
| ↳ `minorEdit` | boolean | Whether this is a minor edit |
| ↳ `authorId` | string | Account ID of the version author |
| ↳ `createdAt` | string | ISO 8601 timestamp of version creation |
| `nextCursor` | string | Cursor for fetching the next page of results |
### `confluence_delete_attachment`
@@ -834,8 +347,6 @@ List all labels on a Confluence page.
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `pageId` | string | Yes | Confluence page ID to list labels from |
| `limit` | number | No | Maximum number of labels to return \(default: 25, max: 250\) |
| `cursor` | string | No | Pagination cursor from previous response |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
@@ -847,30 +358,6 @@ List all labels on a Confluence page.
| ↳ `id` | string | Unique label identifier |
| ↳ `name` | string | Label name |
| ↳ `prefix` | string | Label prefix/type \(e.g., global, my, team\) |
| `nextCursor` | string | Cursor for fetching the next page of results |
### `confluence_add_label`
Add a label to a Confluence page for organization and categorization.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `pageId` | string | Yes | Confluence page ID to add the label to |
| `labelName` | string | Yes | Name of the label to add |
| `prefix` | string | No | Label prefix: global \(default\), my, team, or system |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `pageId` | string | Page ID that the label was added to |
| `labelName` | string | Name of the added label |
| `labelId` | string | ID of the added label |
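For reference, adding a label only needs a handful of fields; the values here are placeholders:
```typescript
const addLabelParams = {
  domain: 'yourcompany.atlassian.net',
  pageId: '123456789',
  labelName: 'runbook',
  prefix: 'global', // global (default), my, team, or system
}
```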
### `confluence_get_space`
@@ -888,19 +375,13 @@ Get details about a specific Confluence space.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | ISO 8601 timestamp of the operation |
| `ts` | string | Timestamp of retrieval |
| `spaceId` | string | Space ID |
| `name` | string | Space name |
| `key` | string | Space key |
| `type` | string | Space type \(global, personal\) |
| `status` | string | Space status \(current, archived\) |
| `url` | string | URL to view the space in Confluence |
| `authorId` | string | Account ID of the space creator |
| `createdAt` | string | ISO 8601 timestamp when the space was created |
| `homepageId` | string | ID of the space homepage |
| `description` | object | Space description content |
| ↳ `value` | string | Description text content |
| ↳ `representation` | string | Content representation format \(e.g., plain, view, storage\) |
| `type` | string | Space type |
| `status` | string | Space status |
| `url` | string | Space URL |
### `confluence_list_spaces`
@@ -911,8 +392,7 @@ List all Confluence spaces accessible to the user.
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Confluence domain \(e.g., yourcompany.atlassian.net\) |
| `limit` | number | No | Maximum number of spaces to return \(default: 25, max: 250\) |
| `cursor` | string | No | Pagination cursor from previous response |
| `limit` | number | No | Maximum number of spaces to return \(default: 25\) |
| `cloudId` | string | No | Confluence Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
@@ -932,6 +412,5 @@ List all Confluence spaces accessible to the user.
| ↳ `description` | object | Space description |
| ↳ `value` | string | Description text content |
| ↳ `representation` | string | Content representation format \(e.g., plain, view, storage\) |
| `nextCursor` | string | Cursor for fetching the next page of results |

View File

@@ -63,7 +63,6 @@ Send a message to a Discord channel
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Success or error message |
| `files` | file[] | Files attached to the message |
| `data` | object | Discord message data |
| ↳ `id` | string | Message ID |
| ↳ `content` | string | Message content |

View File

@@ -43,8 +43,7 @@ Upload a file to Dropbox
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `path` | string | Yes | The path in Dropbox where the file should be saved \(e.g., /folder/document.pdf\) |
| `file` | file | No | The file to upload \(UserFile object\) |
| `fileContent` | string | No | Legacy: base64 encoded file content |
| `fileContent` | string | Yes | The base64 encoded content of the file to upload |
| `fileName` | string | No | Optional filename \(used if path is a folder\) |
| `mode` | string | No | Write mode: add \(default\) or overwrite |
| `autorename` | boolean | No | If true, rename the file if there is a conflict |
@@ -67,7 +66,7 @@ Upload a file to Dropbox
### `dropbox_download`
Download a file from Dropbox with metadata and content
Download a file from Dropbox and get a temporary link
#### Input
@@ -79,8 +78,11 @@ Download a file from Dropbox with metadata and content
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `file` | file | Downloaded file stored in execution files |
| `metadata` | json | The file metadata |
| `file` | object | The file metadata |
| ↳ `id` | string | Unique identifier for the file |
| ↳ `name` | string | Name of the file |
| ↳ `path_display` | string | Display path of the file |
| ↳ `size` | number | Size of the file in bytes |
| `temporaryLink` | string | Temporary link to download the file \(valid for ~4 hours\) |
| `content` | string | Base64 encoded file content \(if fetched\) |

View File

@@ -6,7 +6,7 @@ description: Read and parse multiple files
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="file_v3"
type="file_v2"
color="#40916C"
/>
@@ -27,7 +27,7 @@ The File Parser tool is particularly useful for scenarios where your agents need
## Usage Instructions
Upload files directly or import from external URLs to get UserFile objects for use in other blocks.
Integrate File into the workflow. A file can be uploaded manually or referenced by URL.
@@ -41,15 +41,14 @@ Parse one or more uploaded files or files from URLs (text, PDF, CSV, images, etc
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `filePath` | string | No | Path to the file\(s\). Can be a single path, URL, or an array of paths. |
| `file` | file | No | Uploaded file\(s\) to parse |
| `filePath` | string | Yes | Path to the file\(s\). Can be a single path, URL, or an array of paths. |
| `fileType` | string | No | Type of file to parse \(auto-detected if not specified\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `files` | file[] | Parsed files as UserFile objects |
| `combinedContent` | string | Combined content of all parsed files |
| `files` | array | Array of parsed files with content, metadata, and file properties |
| `combinedContent` | string | All file contents merged into a single text string |
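A URL-based invocation could look like the sketch below; the URLs are placeholders, and the `result` object is hypothetical, shaped only by the output table above:
```typescript
// Placeholder URLs; `filePath` also accepts a single path or URL.
const fileParserParams = {
  filePath: [
    'https://example.com/report.pdf',
    'https://example.com/data.csv',
  ],
}

// Hypothetical result shaped like the documented output.
declare const result: { files: unknown[]; combinedContent: string }
console.log(result.combinedContent.length)
```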

View File

@@ -6,7 +6,7 @@ description: Interact with Fireflies.ai meeting transcripts and recordings
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="fireflies_v2"
type="fireflies"
color="#100730"
/>

View File

@@ -692,7 +692,6 @@ Get the content of a file from a GitHub repository. Supports files up to 1MB. Co
| `download_url` | string | Direct download URL |
| `git_url` | string | Git blob API URL |
| `_links` | json | Related links |
| `file` | file | Downloaded file stored in execution files |
### `github_create_file`

View File

@@ -291,7 +291,11 @@ Download a file from Google Drive with complete metadata (exports Google Workspa
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `file` | file | Downloaded file stored in execution files |
| `file` | object | Downloaded file data |
| ↳ `name` | string | File name |
| ↳ `mimeType` | string | MIME type of the file |
| ↳ `data` | string | File content as base64-encoded string |
| ↳ `size` | number | File size in bytes |
| `metadata` | object | Complete file metadata from Google Drive |
| ↳ `id` | string | Google Drive file ID |
| ↳ `kind` | string | Resource type identifier |

View File

@@ -6,7 +6,7 @@ description: Read, write, and create presentations
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="google_slides_v2"
type="google_slides"
color="#E0E0E0"
/>

View File

@@ -333,28 +333,6 @@ Get all attachments from a Jira issue
| `issueKey` | string | Issue key |
| `attachments` | array | Array of attachments with id, filename, size, mimeType, created, author |
### `jira_add_attachment`
Add attachments to a Jira issue
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Jira domain \(e.g., yourcompany.atlassian.net\) |
| `issueKey` | string | Yes | Jira issue key to add attachments to \(e.g., PROJ-123\) |
| `files` | file[] | Yes | Files to attach to the Jira issue |
| `cloudId` | string | No | Jira Cloud ID for the instance. If not provided, it will be fetched using the domain. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | Timestamp of the operation |
| `issueKey` | string | Issue key |
| `attachmentIds` | json | IDs of uploaded attachments |
| `files` | file[] | Uploaded attachment files |
### `jira_delete_attachment`
Delete an attachment from a Jira issue

View File

@@ -320,7 +320,6 @@ Search for issues in Linear using full-text search
| `teamId` | string | No | Filter by team ID |
| `includeArchived` | boolean | No | Include archived issues in search results |
| `first` | number | No | Number of results to return \(default: 50\) |
| `after` | string | No | Cursor for pagination |
#### Output
@@ -755,10 +754,6 @@ List all labels in Linear workspace or team
| ↳ `name` | string | Label name |
| ↳ `color` | string | Label color \(hex\) |
| ↳ `description` | string | Label description |
| ↳ `isGroup` | boolean | Whether this label is a group |
| ↳ `createdAt` | string | Creation timestamp \(ISO 8601\) |
| ↳ `updatedAt` | string | Last update timestamp \(ISO 8601\) |
| ↳ `archivedAt` | string | Archive timestamp \(ISO 8601\) |
| ↳ `team` | object | Team object |
| ↳ `id` | string | Team ID |
| ↳ `name` | string | Team name |
@@ -785,10 +780,6 @@ Create a new label in Linear
| ↳ `name` | string | Label name |
| ↳ `color` | string | Label color \(hex\) |
| ↳ `description` | string | Label description |
| ↳ `isGroup` | boolean | Whether this label is a group |
| ↳ `createdAt` | string | Creation timestamp \(ISO 8601\) |
| ↳ `updatedAt` | string | Last update timestamp \(ISO 8601\) |
| ↳ `archivedAt` | string | Archive timestamp \(ISO 8601\) |
| ↳ `team` | object | Team object |
| ↳ `id` | string | Team ID |
| ↳ `name` | string | Team name |
@@ -815,10 +806,6 @@ Update an existing label in Linear
| ↳ `name` | string | Label name |
| ↳ `color` | string | Label color \(hex\) |
| ↳ `description` | string | Label description |
| ↳ `isGroup` | boolean | Whether this label is a group |
| ↳ `createdAt` | string | Creation timestamp \(ISO 8601\) |
| ↳ `updatedAt` | string | Last update timestamp \(ISO 8601\) |
| ↳ `archivedAt` | string | Archive timestamp \(ISO 8601\) |
| ↳ `team` | object | Team object |
| ↳ `id` | string | Team ID |
| ↳ `name` | string | Team name |
@@ -862,13 +849,9 @@ List all workflow states (statuses) in Linear
| `states` | array | Array of workflow states |
| ↳ `id` | string | State ID |
| ↳ `name` | string | State name \(e.g., "Todo", "In Progress"\) |
| ↳ `description` | string | State description |
| ↳ `type` | string | State type \(triage, backlog, unstarted, started, completed, canceled\) |
| ↳ `type` | string | State type \(unstarted, started, completed, canceled\) |
| ↳ `color` | string | State color \(hex\) |
| ↳ `position` | number | State position in workflow |
| ↳ `createdAt` | string | Creation timestamp \(ISO 8601\) |
| ↳ `updatedAt` | string | Last update timestamp \(ISO 8601\) |
| ↳ `archivedAt` | string | Archive timestamp \(ISO 8601\) |
| ↳ `team` | object | Team object |
| ↳ `id` | string | Team ID |
| ↳ `name` | string | Team name |
@@ -894,17 +877,11 @@ Create a new workflow state (status) in Linear
| --------- | ---- | ----------- |
| `state` | object | The created workflow state |
| ↳ `id` | string | State ID |
| ↳ `name` | string | State name \(e.g., "Todo", "In Progress"\) |
| ↳ `description` | string | State description |
| ↳ `type` | string | State type \(triage, backlog, unstarted, started, completed, canceled\) |
| ↳ `color` | string | State color \(hex\) |
| ↳ `position` | number | State position in workflow |
| ↳ `createdAt` | string | Creation timestamp \(ISO 8601\) |
| ↳ `updatedAt` | string | Last update timestamp \(ISO 8601\) |
| ↳ `archivedAt` | string | Archive timestamp \(ISO 8601\) |
| ↳ `team` | object | Team object |
| ↳ `id` | string | Team ID |
| ↳ `name` | string | Team name |
| ↳ `name` | string | State name |
| ↳ `type` | string | State type |
| ↳ `color` | string | State color |
| ↳ `position` | number | State position |
| ↳ `team` | object | Team this state belongs to |
### `linear_update_workflow_state`
@@ -926,17 +903,10 @@ Update an existing workflow state in Linear
| --------- | ---- | ----------- |
| `state` | object | The updated workflow state |
| ↳ `id` | string | State ID |
| ↳ `name` | string | State name \(e.g., "Todo", "In Progress"\) |
| ↳ `description` | string | State description |
| ↳ `type` | string | State type \(triage, backlog, unstarted, started, completed, canceled\) |
| ↳ `color` | string | State color \(hex\) |
| ↳ `position` | number | State position in workflow |
| ↳ `createdAt` | string | Creation timestamp \(ISO 8601\) |
| ↳ `updatedAt` | string | Last update timestamp \(ISO 8601\) |
| ↳ `archivedAt` | string | Archive timestamp \(ISO 8601\) |
| ↳ `team` | object | Team object |
| ↳ `id` | string | Team ID |
| ↳ `name` | string | Team name |
| ↳ `name` | string | State name |
| ↳ `type` | string | State type |
| ↳ `color` | string | State color |
| ↳ `position` | number | State position |
### `linear_list_cycles`
@@ -965,7 +935,6 @@ List cycles (sprints/iterations) in Linear
| ↳ `endsAt` | string | End date \(ISO 8601\) |
| ↳ `completedAt` | string | Completion date \(ISO 8601\) |
| ↳ `progress` | number | Progress percentage \(0-1\) |
| ↳ `createdAt` | string | Creation timestamp \(ISO 8601\) |
| ↳ `team` | object | Team object |
| ↳ `id` | string | Team ID |
| ↳ `name` | string | Team name |
@@ -992,7 +961,6 @@ Get a single cycle by ID from Linear
| ↳ `endsAt` | string | End date \(ISO 8601\) |
| ↳ `completedAt` | string | Completion date \(ISO 8601\) |
| ↳ `progress` | number | Progress percentage \(0-1\) |
| ↳ `createdAt` | string | Creation timestamp \(ISO 8601\) |
| ↳ `team` | object | Team object |
| ↳ `id` | string | Team ID |
| ↳ `name` | string | Team name |
@@ -1018,14 +986,9 @@ Create a new cycle (sprint/iteration) in Linear
| ↳ `id` | string | Cycle ID |
| ↳ `number` | number | Cycle number |
| ↳ `name` | string | Cycle name |
| ↳ `startsAt` | string | Start date \(ISO 8601\) |
| ↳ `endsAt` | string | End date \(ISO 8601\) |
| ↳ `completedAt` | string | Completion date \(ISO 8601\) |
| ↳ `progress` | number | Progress percentage \(0-1\) |
| ↳ `createdAt` | string | Creation timestamp \(ISO 8601\) |
| ↳ `team` | object | Team object |
| ↳ `id` | string | Team ID |
| ↳ `name` | string | Team name |
| ↳ `startsAt` | string | Start date |
| ↳ `endsAt` | string | End date |
| ↳ `team` | object | Team this cycle belongs to |
### `linear_get_active_cycle`
@@ -1045,14 +1008,10 @@ Get the currently active cycle for a team
| ↳ `id` | string | Cycle ID |
| ↳ `number` | number | Cycle number |
| ↳ `name` | string | Cycle name |
| ↳ `startsAt` | string | Start date \(ISO 8601\) |
| ↳ `endsAt` | string | End date \(ISO 8601\) |
| ↳ `completedAt` | string | Completion date \(ISO 8601\) |
| ↳ `progress` | number | Progress percentage \(0-1\) |
| ↳ `createdAt` | string | Creation timestamp \(ISO 8601\) |
| ↳ `team` | object | Team object |
| ↳ `id` | string | Team ID |
| ↳ `name` | string | Team name |
| ↳ `startsAt` | string | Start date |
| ↳ `endsAt` | string | End date |
| ↳ `progress` | number | Progress percentage |
| ↳ `team` | object | Team this cycle belongs to |
### `linear_create_attachment`
@@ -1063,8 +1022,7 @@ Add an attachment to an issue in Linear
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `issueId` | string | Yes | Issue ID to attach to |
| `url` | string | No | URL of the attachment |
| `file` | file | No | File to attach |
| `url` | string | Yes | URL of the attachment |
| `title` | string | Yes | Attachment title |
| `subtitle` | string | No | Attachment subtitle/description |
@@ -1375,12 +1333,8 @@ Create a new customer in Linear
| ↳ `domains` | array | Associated domains |
| ↳ `externalIds` | array | External IDs from other systems |
| ↳ `logoUrl` | string | Logo URL |
| ↳ `slugId` | string | Unique URL slug |
| ↳ `approximateNeedCount` | number | Number of customer needs |
| ↳ `revenue` | number | Annual revenue |
| ↳ `size` | number | Organization size |
| ↳ `createdAt` | string | Creation timestamp \(ISO 8601\) |
| ↳ `updatedAt` | string | Last update timestamp \(ISO 8601\) |
| ↳ `archivedAt` | string | Archive timestamp \(ISO 8601\) |
### `linear_list_customers`
@@ -1408,12 +1362,8 @@ List all customers in Linear
| ↳ `domains` | array | Associated domains |
| ↳ `externalIds` | array | External IDs from other systems |
| ↳ `logoUrl` | string | Logo URL |
| ↳ `slugId` | string | Unique URL slug |
| ↳ `approximateNeedCount` | number | Number of customer needs |
| ↳ `revenue` | number | Annual revenue |
| ↳ `size` | number | Organization size |
| ↳ `createdAt` | string | Creation timestamp \(ISO 8601\) |
| ↳ `updatedAt` | string | Last update timestamp \(ISO 8601\) |
| ↳ `archivedAt` | string | Archive timestamp \(ISO 8601\) |
### `linear_create_customer_request`
@@ -1529,12 +1479,8 @@ Get a single customer by ID in Linear
| ↳ `domains` | array | Associated domains |
| ↳ `externalIds` | array | External IDs from other systems |
| ↳ `logoUrl` | string | Logo URL |
| ↳ `slugId` | string | Unique URL slug |
| ↳ `approximateNeedCount` | number | Number of customer needs |
| ↳ `revenue` | number | Annual revenue |
| ↳ `size` | number | Organization size |
| ↳ `createdAt` | string | Creation timestamp \(ISO 8601\) |
| ↳ `updatedAt` | string | Last update timestamp \(ISO 8601\) |
| ↳ `archivedAt` | string | Archive timestamp \(ISO 8601\) |
### `linear_update_customer`
@@ -1566,12 +1512,8 @@ Update a customer in Linear
| ↳ `domains` | array | Associated domains |
| ↳ `externalIds` | array | External IDs from other systems |
| ↳ `logoUrl` | string | Logo URL |
| ↳ `slugId` | string | Unique URL slug |
| ↳ `approximateNeedCount` | number | Number of customer needs |
| ↳ `revenue` | number | Annual revenue |
| ↳ `size` | number | Organization size |
| ↳ `createdAt` | string | Creation timestamp \(ISO 8601\) |
| ↳ `updatedAt` | string | Last update timestamp \(ISO 8601\) |
| ↳ `archivedAt` | string | Archive timestamp \(ISO 8601\) |
### `linear_delete_customer`
@@ -1617,8 +1559,8 @@ Create a new customer status in Linear
| --------- | ---- | -------- | ----------- |
| `name` | string | Yes | Customer status name |
| `color` | string | Yes | Status color \(hex code\) |
| `description` | string | No | Status description |
| `displayName` | string | No | Display name for the status |
| `description` | string | No | Status description |
| `position` | number | No | Position in status list |
#### Output
@@ -1628,12 +1570,11 @@ Create a new customer status in Linear
| `customerStatus` | object | The created customer status |
| ↳ `id` | string | Customer status ID |
| ↳ `name` | string | Status name |
| ↳ `displayName` | string | Display name |
| ↳ `description` | string | Status description |
| ↳ `color` | string | Status color \(hex\) |
| ↳ `position` | number | Position in list |
| ↳ `type` | string | Status type \(active, inactive\) |
| ↳ `createdAt` | string | Creation timestamp \(ISO 8601\) |
| ↳ `updatedAt` | string | Last updated timestamp \(ISO 8601\) |
| ↳ `archivedAt` | string | Archive timestamp \(ISO 8601\) |
### `linear_update_customer_status`
@@ -1647,8 +1588,8 @@ Update a customer status in Linear
| `statusId` | string | Yes | Customer status ID to update |
| `name` | string | No | Updated status name |
| `color` | string | No | Updated status color |
| `description` | string | No | Updated description |
| `displayName` | string | No | Updated display name |
| `description` | string | No | Updated description |
| `position` | number | No | Updated position |
#### Output
@@ -1656,15 +1597,6 @@ Update a customer status in Linear
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `customerStatus` | object | The updated customer status |
| ↳ `id` | string | Customer status ID |
| ↳ `name` | string | Status name |
| ↳ `description` | string | Status description |
| ↳ `color` | string | Status color \(hex\) |
| ↳ `position` | number | Position in list |
| ↳ `type` | string | Status type \(active, inactive\) |
| ↳ `createdAt` | string | Creation timestamp \(ISO 8601\) |
| ↳ `updatedAt` | string | Last updated timestamp \(ISO 8601\) |
| ↳ `archivedAt` | string | Archive timestamp \(ISO 8601\) |
### `linear_delete_customer_status`
@@ -1690,25 +1622,19 @@ List all customer statuses in Linear
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `first` | number | No | Number of statuses to return \(default: 50\) |
| `after` | string | No | Cursor for pagination |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `pageInfo` | object | Pagination information |
| ↳ `hasNextPage` | boolean | Whether there are more results |
| ↳ `endCursor` | string | Cursor for the next page |
| `customerStatuses` | array | List of customer statuses |
| ↳ `id` | string | Customer status ID |
| ↳ `name` | string | Status name |
| ↳ `displayName` | string | Display name |
| ↳ `description` | string | Status description |
| ↳ `color` | string | Status color \(hex\) |
| ↳ `position` | number | Position in list |
| ↳ `type` | string | Status type \(active, inactive\) |
| ↳ `createdAt` | string | Creation timestamp \(ISO 8601\) |
| ↳ `updatedAt` | string | Last updated timestamp \(ISO 8601\) |
| ↳ `archivedAt` | string | Archive timestamp \(ISO 8601\) |
### `linear_create_customer_tier`
@@ -1784,16 +1710,11 @@ List all customer tiers in Linear
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `first` | number | No | Number of tiers to return \(default: 50\) |
| `after` | string | No | Cursor for pagination |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `pageInfo` | object | Pagination information |
| ↳ `hasNextPage` | boolean | Whether there are more results |
| ↳ `endCursor` | string | Cursor for the next page |
| `customerTiers` | array | List of customer tiers |
| ↳ `id` | string | Customer tier ID |
| ↳ `name` | string | Tier name |
@@ -1839,14 +1760,6 @@ Create a new project label in Linear
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `projectLabel` | object | The created project label |
| ↳ `id` | string | Project label ID |
| ↳ `name` | string | Label name |
| ↳ `description` | string | Label description |
| ↳ `color` | string | Label color \(hex\) |
| ↳ `isGroup` | boolean | Whether this label is a group |
| ↳ `createdAt` | string | Creation timestamp \(ISO 8601\) |
| ↳ `updatedAt` | string | Last update timestamp \(ISO 8601\) |
| ↳ `archivedAt` | string | Archive timestamp \(ISO 8601\) |
### `linear_update_project_label`
@@ -1866,14 +1779,6 @@ Update a project label in Linear
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `projectLabel` | object | The updated project label |
| ↳ `id` | string | Project label ID |
| ↳ `name` | string | Label name |
| ↳ `description` | string | Label description |
| ↳ `color` | string | Label color \(hex\) |
| ↳ `isGroup` | boolean | Whether this label is a group |
| ↳ `createdAt` | string | Creation timestamp \(ISO 8601\) |
| ↳ `updatedAt` | string | Last update timestamp \(ISO 8601\) |
| ↳ `archivedAt` | string | Archive timestamp \(ISO 8601\) |
### `linear_delete_project_label`
@@ -1900,25 +1805,12 @@ List all project labels in Linear
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `projectId` | string | No | Optional project ID to filter labels for a specific project |
| `first` | number | No | Number of labels to return \(default: 50\) |
| `after` | string | No | Cursor for pagination |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `pageInfo` | object | Pagination information |
| ↳ `hasNextPage` | boolean | Whether there are more results |
| ↳ `endCursor` | string | Cursor for the next page |
| `projectLabels` | array | List of project labels |
| ↳ `id` | string | Project label ID |
| ↳ `name` | string | Label name |
| ↳ `description` | string | Label description |
| ↳ `color` | string | Label color \(hex\) |
| ↳ `isGroup` | boolean | Whether this label is a group |
| ↳ `createdAt` | string | Creation timestamp \(ISO 8601\) |
| ↳ `updatedAt` | string | Last update timestamp \(ISO 8601\) |
| ↳ `archivedAt` | string | Archive timestamp \(ISO 8601\) |
### `linear_add_label_to_project`
@@ -1974,16 +1866,6 @@ Create a new project milestone in Linear
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `projectMilestone` | object | The created project milestone |
| ↳ `id` | string | Project milestone ID |
| ↳ `name` | string | Milestone name |
| ↳ `description` | string | Milestone description |
| ↳ `projectId` | string | Project ID |
| ↳ `targetDate` | string | Target date \(YYYY-MM-DD\) |
| ↳ `progress` | number | Progress percentage \(0-1\) |
| ↳ `sortOrder` | number | Sort order within the project |
| ↳ `status` | string | Milestone status \(done, next, overdue, unstarted\) |
| ↳ `createdAt` | string | Creation timestamp \(ISO 8601\) |
| ↳ `archivedAt` | string | Archive timestamp \(ISO 8601\) |
### `linear_update_project_milestone`
@@ -2003,16 +1885,6 @@ Update a project milestone in Linear
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `projectMilestone` | object | The updated project milestone |
| ↳ `id` | string | Project milestone ID |
| ↳ `name` | string | Milestone name |
| ↳ `description` | string | Milestone description |
| ↳ `projectId` | string | Project ID |
| ↳ `targetDate` | string | Target date \(YYYY-MM-DD\) |
| ↳ `progress` | number | Progress percentage \(0-1\) |
| ↳ `sortOrder` | number | Sort order within the project |
| ↳ `status` | string | Milestone status \(done, next, overdue, unstarted\) |
| ↳ `createdAt` | string | Creation timestamp \(ISO 8601\) |
| ↳ `archivedAt` | string | Archive timestamp \(ISO 8601\) |
### `linear_delete_project_milestone`
@@ -2039,27 +1911,12 @@ List all milestones for a project in Linear
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `projectId` | string | Yes | Project ID to list milestones for |
| `first` | number | No | Number of milestones to return \(default: 50\) |
| `after` | string | No | Cursor for pagination |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `pageInfo` | object | Pagination information |
| ↳ `hasNextPage` | boolean | Whether there are more results |
| ↳ `endCursor` | string | Cursor for the next page |
| `projectMilestones` | array | List of project milestones |
| ↳ `id` | string | Project milestone ID |
| ↳ `name` | string | Milestone name |
| ↳ `description` | string | Milestone description |
| ↳ `projectId` | string | Project ID |
| ↳ `targetDate` | string | Target date \(YYYY-MM-DD\) |
| ↳ `progress` | number | Progress percentage \(0-1\) |
| ↳ `sortOrder` | number | Sort order within the project |
| ↳ `status` | string | Milestone status \(done, next, overdue, unstarted\) |
| ↳ `createdAt` | string | Creation timestamp \(ISO 8601\) |
| ↳ `archivedAt` | string | Archive timestamp \(ISO 8601\) |
### `linear_create_project_status`
@@ -2081,16 +1938,6 @@ Create a new project status in Linear
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `projectStatus` | object | The created project status |
| ↳ `id` | string | Project status ID |
| ↳ `name` | string | Status name |
| ↳ `description` | string | Status description |
| ↳ `color` | string | Status color \(hex\) |
| ↳ `indefinite` | boolean | Whether this status is indefinite |
| ↳ `position` | number | Position in list |
| ↳ `type` | string | Status type \(backlog, planned, started, paused, completed, canceled\) |
| ↳ `createdAt` | string | Creation timestamp \(ISO 8601\) |
| ↳ `updatedAt` | string | Last updated timestamp \(ISO 8601\) |
| ↳ `archivedAt` | string | Archive timestamp \(ISO 8601\) |
### `linear_update_project_status`
@@ -2112,16 +1959,6 @@ Update a project status in Linear
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `projectStatus` | object | The updated project status |
| ↳ `id` | string | Project status ID |
| ↳ `name` | string | Status name |
| ↳ `description` | string | Status description |
| ↳ `color` | string | Status color \(hex\) |
| ↳ `indefinite` | boolean | Whether this status is indefinite |
| ↳ `position` | number | Position in list |
| ↳ `type` | string | Status type \(backlog, planned, started, paused, completed, canceled\) |
| ↳ `createdAt` | string | Creation timestamp \(ISO 8601\) |
| ↳ `updatedAt` | string | Last updated timestamp \(ISO 8601\) |
| ↳ `archivedAt` | string | Archive timestamp \(ISO 8601\) |
### `linear_delete_project_status`
@@ -2147,26 +1984,11 @@ List all project statuses in Linear
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `first` | number | No | Number of statuses to return \(default: 50\) |
| `after` | string | No | Cursor for pagination |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `pageInfo` | object | Pagination information |
| ↳ `hasNextPage` | boolean | Whether there are more results |
| ↳ `endCursor` | string | Cursor for the next page |
| `projectStatuses` | array | List of project statuses |
| ↳ `id` | string | Project status ID |
| ↳ `name` | string | Status name |
| ↳ `description` | string | Status description |
| ↳ `color` | string | Status color \(hex\) |
| ↳ `indefinite` | boolean | Whether this status is indefinite |
| ↳ `position` | number | Position in list |
| ↳ `type` | string | Status type \(backlog, planned, started, paused, completed, canceled\) |
| ↳ `createdAt` | string | Creation timestamp \(ISO 8601\) |
| ↳ `updatedAt` | string | Last updated timestamp \(ISO 8601\) |
| ↳ `archivedAt` | string | Archive timestamp \(ISO 8601\) |

View File

@@ -4,7 +4,6 @@
"a2a",
"ahrefs",
"airtable",
"airweave",
"apify",
"apollo",
"arxiv",

View File

@@ -81,7 +81,6 @@ Write or update content in a Microsoft Teams chat
| `createdTime` | string | Timestamp when message was created |
| `url` | string | Web URL to the message |
| `updatedContent` | boolean | Whether content was successfully updated |
| `files` | file[] | Files attached to the message |
### `microsoft_teams_read_channel`
@@ -133,7 +132,6 @@ Write or send a message to a Microsoft Teams channel
| `createdTime` | string | Timestamp when message was created |
| `url` | string | Web URL to the message |
| `updatedContent` | boolean | Whether content was successfully updated |
| `files` | file[] | Files attached to the message |
### `microsoft_teams_update_chat_message`

View File

@@ -6,7 +6,7 @@ description: Extract text from PDF documents
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="mistral_parse_v3"
type="mistral_parse_v2"
color="#000000"
/>
@@ -35,12 +35,13 @@ Integrate Mistral Parse into the workflow. Can extract text from uploaded PDF do
### `mistral_parser`
Parse PDF documents using Mistral OCR API
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `filePath` | string | No | URL to a PDF document to be processed |
| `file` | file | No | Document file to be processed |
| `filePath` | string | Yes | URL to a PDF document to be processed |
| `fileUpload` | object | No | File upload data from file-upload component |
| `resultType` | string | No | Type of parsed result \(markdown, text, or json\). Defaults to markdown. |
| `includeImageBase64` | boolean | No | Include base64-encoded images in the response |
@@ -54,8 +55,27 @@ Integrate Mistral Parse into the workflow. Can extract text from uploaded PDF do
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `pages` | array | Array of page objects from Mistral OCR |
| `model` | string | Mistral OCR model identifier |
| `usage_info` | json | Usage statistics from the API |
| `document_annotation` | string | Structured annotation data |
| ↳ `index` | number | Page index \(zero-based\) |
| ↳ `markdown` | string | Extracted markdown content |
| ↳ `images` | array | Images extracted from this page with bounding boxes |
| ↳ `id` | string | Image identifier \(e.g., img-0.jpeg\) |
| ↳ `top_left_x` | number | Top-left X coordinate in pixels |
| ↳ `top_left_y` | number | Top-left Y coordinate in pixels |
| ↳ `bottom_right_x` | number | Bottom-right X coordinate in pixels |
| ↳ `bottom_right_y` | number | Bottom-right Y coordinate in pixels |
| ↳ `image_base64` | string | Base64-encoded image data \(when include_image_base64=true\) |
| ↳ `dimensions` | object | Page dimensions |
| ↳ `dpi` | number | Dots per inch |
| ↳ `height` | number | Page height in pixels |
| ↳ `width` | number | Page width in pixels |
| ↳ `tables` | array | Extracted tables as HTML/markdown \(when table_format is set\). Referenced via placeholders like \[tbl-0.html\] |
| ↳ `hyperlinks` | array | Array of URL strings detected in the page \(e.g., \["https://...", "mailto:..."\]\) |
| ↳ `header` | string | Page header content \(when extract_header=true\) |
| ↳ `footer` | string | Page footer content \(when extract_footer=true\) |
| `model` | string | Mistral OCR model identifier \(e.g., mistral-ocr-latest\) |
| `usage_info` | object | Usage and processing statistics |
| ↳ `pages_processed` | number | Total number of pages processed |
| ↳ `doc_size_bytes` | number | Document file size in bytes |
| `document_annotation` | string | Structured annotation data as JSON string \(when applicable\) |
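A sketch of how the documented output could be consumed; the `result` object here is hypothetical and reflects only the fields listed above:
```typescript
// Hypothetical result shaped like the documented output table.
declare const result: {
  pages: Array<{ index: number; markdown: string; hyperlinks?: string[] }>
  model: string
  usage_info: { pages_processed: number; doc_size_bytes: number }
}

const fullText = result.pages.map((p) => p.markdown).join('\n\n')
console.log(`${result.usage_info.pages_processed} pages parsed by ${result.model}`)
```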

View File

@@ -113,26 +113,6 @@ Create a new page in Notion
| `last_edited_time` | string | ISO 8601 last edit timestamp |
| `title` | string | Page title |
### `notion_update_page`
Update properties of a Notion page
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `pageId` | string | Yes | The UUID of the Notion page to update |
| `properties` | json | Yes | JSON object of properties to update |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Page UUID |
| `url` | string | Notion page URL |
| `last_edited_time` | string | ISO 8601 last edit timestamp |
| `title` | string | Page title |
### `notion_query_database`
Query and filter Notion database entries with advanced filtering

View File

@@ -152,7 +152,6 @@ Retrieve files from Pipedrive with optional filters
| `person_id` | string | No | Filter files by person ID \(e.g., "456"\) |
| `org_id` | string | No | Filter files by organization ID \(e.g., "789"\) |
| `limit` | string | No | Number of results to return \(e.g., "50", default: 100, max: 500\) |
| `downloadFiles` | boolean | No | Download file contents into file outputs |
#### Output
@@ -169,7 +168,6 @@ Retrieve files from Pipedrive with optional filters
| ↳ `person_id` | number | Associated person ID |
| ↳ `org_id` | number | Associated organization ID |
| ↳ `url` | string | File download URL |
| `downloadedFiles` | file[] | Downloaded files from Pipedrive |
| `total_items` | number | Total number of files returned |
| `success` | boolean | Operation success status |

View File

@@ -6,12 +6,12 @@ description: Extract text from documents using Pulse OCR
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="pulse_v2"
type="pulse"
color="#E0E0E0"
/>
{/* MANUAL-CONTENT-START:intro */}
The [Pulse](https://www.runpulse.com) tool enables seamless extraction of text and structured content from a wide variety of documents—including PDFs, images, and Office files—using state-of-the-art OCR (Optical Character Recognition) powered by Pulse. Designed for automated agentic workflows, Pulse Parser makes it easy to unlock valuable information trapped in unstructured documents and integrate the extracted content directly into your workflow.
The [Pulse](https://www.pulseapi.com/) tool enables seamless extraction of text and structured content from a wide variety of documents—including PDFs, images, and Office files—using state-of-the-art OCR (Optical Character Recognition) powered by Pulse. Designed for automated agentic workflows, Pulse Parser makes it easy to unlock valuable information trapped in unstructured documents and integrate the extracted content directly into your workflow.
With Pulse, you can:
@@ -31,7 +31,7 @@ If you need accurate, scalable, and developer-friendly document parsing capabili
## Usage Instructions
Integrate Pulse into the workflow. Extract text from PDF documents, images, and Office files via upload or file references.
Integrate Pulse into the workflow. Extract text from PDF documents, images, and Office files via URL or upload.
@@ -39,12 +39,13 @@ Integrate Pulse into the workflow. Extract text from PDF documents, images, and
### `pulse_parser`
Parse documents (PDF, images, Office docs) using Pulse OCR API
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `filePath` | string | No | URL to a document to be processed |
| `file` | file | No | Document file to be processed |
| `filePath` | string | Yes | URL to a document to be processed |
| `fileUpload` | object | No | File upload data from file-upload component |
| `pages` | string | No | Page range to process \(1-indexed, e.g., "1-2,5"\) |
| `extractFigure` | boolean | No | Enable figure extraction from the document |
@@ -56,6 +57,16 @@ Integrate Pulse into the workflow. Extract text from PDF documents, images, and
#### Output
This tool does not produce any outputs.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `markdown` | string | Extracted content in markdown format |
| `page_count` | number | Number of pages in the document |
| `job_id` | string | Unique job identifier |
| `bounding_boxes` | json | Bounding box layout information |
| `extraction_url` | string | URL for extraction results \(for large documents\) |
| `html` | string | HTML content if requested |
| `structured_output` | json | Structured output if schema was provided |
| `chunks` | json | Chunked content if chunking was enabled |
| `figures` | json | Extracted figures if figure extraction was enabled |
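As an illustration only, a URL-based request and the fields most workflows read back might look like this; the URL is a placeholder and the `result` type is an assumption based on the table above:
```typescript
// Placeholder URL; `pages` uses 1-indexed ranges per the input table.
const pulseParams = {
  filePath: 'https://example.com/invoice.pdf',
  pages: '1-2,5',
  extractFigure: true,
}

// Hypothetical result limited to fields from the documented output.
declare const result: { markdown: string; page_count: number; job_id: string }
console.log(`job ${result.job_id}: ${result.page_count} pages`)
```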

View File

@@ -6,7 +6,7 @@ description: Extract text from PDF documents
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="reducto_v2"
type="reducto"
color="#5c0c5c"
/>
@@ -29,7 +29,7 @@ Looking for reliable and scalable PDF parsing? Reducto is optimized for develope
## Usage Instructions
Integrate Reducto Parse into the workflow. Can extract text from uploaded PDF documents or file references.
Integrate Reducto Parse into the workflow. Can extract text from uploaded PDF documents, or from a URL.
@@ -37,12 +37,13 @@ Integrate Reducto Parse into the workflow. Can extract text from uploaded PDF do
### `reducto_parser`
Parse PDF documents using Reducto OCR API
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `filePath` | string | No | URL to a PDF document to be processed |
| `file` | file | No | Document file to be processed |
| `filePath` | string | Yes | URL to a PDF document to be processed |
| `fileUpload` | object | No | File upload data from file-upload component |
| `pages` | array | No | Specific pages to process \(1-indexed page numbers\) |
| `tableOutputFormat` | string | No | Table output format \(html or markdown\). Defaults to markdown. |
@@ -50,6 +51,13 @@ Integrate Reducto Parse into the workflow. Can extract text from uploaded PDF do
#### Output
This tool does not produce any outputs.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `job_id` | string | Unique identifier for the processing job |
| `duration` | number | Processing time in seconds |
| `usage` | json | Resource consumption data |
| `result` | json | Parsed document content with chunks and blocks |
| `pdf_url` | string | Storage URL of converted PDF |
| `studio_link` | string | Link to Reducto studio interface |
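A minimal sketch of the inputs, using a placeholder URL and the documented defaults:
```typescript
const reductoParams = {
  filePath: 'https://example.com/contract.pdf', // placeholder URL
  pages: [1, 2, 5],                // 1-indexed page numbers
  tableOutputFormat: 'markdown',   // or 'html'
}
```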

View File

@@ -78,7 +78,6 @@ Retrieve an object from an AWS S3 bucket
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `url` | string | Pre-signed URL for downloading the S3 object |
| `file` | file | Downloaded file stored in execution files |
| `metadata` | object | File metadata including type, size, name, and last modified date |
### `s3_list_objects`

View File

@@ -62,7 +62,7 @@ Send an email using SendGrid API
| `bcc` | string | No | BCC email address |
| `replyTo` | string | No | Reply-to email address |
| `replyToName` | string | No | Reply-to name |
| `attachments` | file[] | No | Files to attach to the email \(UserFile objects\) |
| `attachments` | file[] | No | Files to attach to the email as an array of attachment objects |
| `templateId` | string | No | SendGrid template ID to use |
| `dynamicTemplateData` | json | No | JSON object of dynamic template data |

View File

@@ -97,7 +97,6 @@ Download a file from a remote SFTP server
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Whether the download was successful |
| `file` | file | Downloaded file stored in execution files |
| `fileName` | string | Name of the downloaded file |
| `content` | string | File content \(text or base64 encoded\) |
| `size` | number | File size in bytes |

View File

@@ -144,7 +144,6 @@ Send messages to Slack channels or direct messages. Supports Slack mrkdwn format
| `ts` | string | Message timestamp |
| `channel` | string | Channel ID where message was sent |
| `fileCount` | number | Number of files uploaded \(when files are attached\) |
| `files` | file[] | Files attached to the message |
### `slack_canvas`

View File

@@ -170,7 +170,6 @@ Download a file from a remote SSH server
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `downloaded` | boolean | Whether the file was downloaded successfully |
| `file` | file | Downloaded file stored in execution files |
| `fileContent` | string | File content \(base64 encoded for binary files\) |
| `fileName` | string | Name of the downloaded file |
| `remotePath` | string | Source path on the remote server |

View File

@@ -6,7 +6,7 @@ description: Convert speech to text using AI
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="stt_v2"
type="stt"
color="#181C1E"
/>
@@ -50,6 +50,8 @@ Transcribe audio and video files to text using leading AI providers. Supports mu
### `stt_whisper`
Transcribe audio to text using OpenAI Whisper
#### Input
| Parameter | Type | Required | Description |
@@ -69,10 +71,22 @@ Transcribe audio and video files to text using leading AI providers. Supports mu
#### Output
This tool does not produce any outputs.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `transcript` | string | Full transcribed text |
| `segments` | array | Timestamped segments |
| ↳ `text` | string | Transcribed text for this segment |
| ↳ `start` | number | Start time in seconds |
| ↳ `end` | number | End time in seconds |
| ↳ `speaker` | string | Speaker identifier \(if diarization enabled\) |
| ↳ `confidence` | number | Confidence score \(0-1\) |
| `language` | string | Detected or specified language |
| `duration` | number | Audio duration in seconds |
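A minimal sketch of reading the transcription output; the `result` shape is taken from the table above and is otherwise hypothetical:
```typescript
declare const result: {
  transcript: string
  segments: Array<{ text: string; start: number; end: number; speaker?: string }>
  language: string
  duration: number
}

for (const seg of result.segments) {
  console.log(`[${seg.start.toFixed(1)}s-${seg.end.toFixed(1)}s] ${seg.text}`)
}
```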
### `stt_deepgram`
Transcribe audio to text using Deepgram
#### Input
| Parameter | Type | Required | Description |
@@ -89,10 +103,23 @@ This tool does not produce any outputs.
#### Output
This tool does not produce any outputs.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `transcript` | string | Full transcribed text |
| `segments` | array | Timestamped segments with speaker labels |
| ↳ `text` | string | Transcribed text for this segment |
| ↳ `start` | number | Start time in seconds |
| ↳ `end` | number | End time in seconds |
| ↳ `speaker` | string | Speaker identifier \(if diarization enabled\) |
| ↳ `confidence` | number | Confidence score \(0-1\) |
| `language` | string | Detected or specified language |
| `duration` | number | Audio duration in seconds |
| `confidence` | number | Overall confidence score |
### `stt_elevenlabs`
Transcribe audio to text using ElevenLabs
#### Input
| Parameter | Type | Required | Description |
@@ -108,10 +135,18 @@ This tool does not produce any outputs.
#### Output
This tool does not produce any outputs.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `transcript` | string | Full transcribed text |
| `segments` | array | Timestamped segments |
| `language` | string | Detected or specified language |
| `duration` | number | Audio duration in seconds |
| `confidence` | number | Overall confidence score |
### `stt_assemblyai`
Transcribe audio to text using AssemblyAI with advanced NLP features
#### Input
| Parameter | Type | Required | Description |
@@ -132,10 +167,35 @@ This tool does not produce any outputs.
#### Output
This tool does not produce any outputs.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `transcript` | string | Full transcribed text |
| `segments` | array | Timestamped segments with speaker labels |
| ↳ `text` | string | Transcribed text for this segment |
| ↳ `start` | number | Start time in seconds |
| ↳ `end` | number | End time in seconds |
| ↳ `speaker` | string | Speaker identifier \(if diarization enabled\) |
| ↳ `confidence` | number | Confidence score \(0-1\) |
| `language` | string | Detected or specified language |
| `duration` | number | Audio duration in seconds |
| `confidence` | number | Overall confidence score |
| `sentiment` | array | Sentiment analysis results |
| ↳ `text` | string | Text that was analyzed |
| ↳ `sentiment` | string | Sentiment \(POSITIVE, NEGATIVE, NEUTRAL\) |
| ↳ `confidence` | number | Confidence score |
| ↳ `start` | number | Start time in milliseconds |
| ↳ `end` | number | End time in milliseconds |
| `entities` | array | Detected entities |
| ↳ `entity_type` | string | Entity type \(e.g., person_name, location, organization\) |
| ↳ `text` | string | Entity text |
| ↳ `start` | number | Start time in milliseconds |
| ↳ `end` | number | End time in milliseconds |
| `summary` | string | Auto-generated summary |
### `stt_gemini`
Transcribe audio to text using Google Gemini with multimodal capabilities
#### Input
| Parameter | Type | Required | Description |
@@ -151,6 +211,12 @@ This tool does not produce any outputs.
#### Output
This tool does not produce any outputs.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `transcript` | string | Full transcribed text |
| `segments` | array | Timestamped segments |
| `language` | string | Detected or specified language |
| `duration` | number | Audio duration in seconds |
| `confidence` | number | Overall confidence score |
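
A minimal consumption sketch for the transcription shape documented in the output tables above; the `TranscriptSegment`/`Transcription` interfaces and the `speaker_0` fallback are assumptions for illustration, not part of the tool schemas:

```typescript
// Hypothetical types mirroring the output tables above (assumed, not the canonical definitions).
interface TranscriptSegment {
  text: string
  start: number // seconds
  end: number // seconds
  speaker?: string // present when diarization is enabled
  confidence?: number // 0-1
}

interface Transcription {
  transcript: string
  segments?: TranscriptSegment[]
  language?: string
  duration?: number
  confidence?: number
}

// Group segment text by speaker, falling back to a single label when diarization is off.
function groupBySpeaker(result: Transcription): Record<string, string[]> {
  const grouped: Record<string, string[]> = {}
  for (const segment of result.segments ?? []) {
    const speaker = segment.speaker ?? 'speaker_0'
    ;(grouped[speaker] ??= []).push(segment.text)
  }
  return grouped
}
```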

View File

@@ -354,7 +354,6 @@ Send documents (PDF, ZIP, DOC, etc.) to Telegram channels or users through the T
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Success or error message |
| `files` | file[] | Files attached to the message |
| `data` | object | Telegram message data including document |
| ↳ `message_id` | number | Unique Telegram message identifier |
| ↳ `from` | object | Information about the sender |

View File

@@ -6,7 +6,7 @@ description: Extract text, tables, and forms from documents
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="textract_v2"
type="textract"
color="linear-gradient(135deg, #055F4E 0%, #56C0A7 100%)"
/>
@@ -35,6 +35,8 @@ Integrate AWS Textract into your workflow to extract text, tables, forms, and ke
### `textract_parser`
Parse documents using AWS Textract OCR and document analysis
#### Input
| Parameter | Type | Required | Description |
@@ -44,8 +46,8 @@ Integrate AWS Textract into your workflow to extract text, tables, forms, and ke
| `region` | string | Yes | AWS region for Textract service \(e.g., us-east-1\) |
| `processingMode` | string | No | Document type: single-page or multi-page. Defaults to single-page. |
| `filePath` | string | No | URL to a document to be processed \(JPEG, PNG, or single-page PDF\). |
| `file` | file | No | Document file to be processed \(JPEG, PNG, or single-page PDF\). |
| `s3Uri` | string | No | S3 URI for multi-page processing \(s3://bucket/key\). |
| `fileUpload` | object | No | File upload data from file-upload component |
| `featureTypes` | array | No | Feature types to detect: TABLES, FORMS, QUERIES, SIGNATURES, LAYOUT. If not specified, only text detection is performed. |
| `items` | string | No | Feature type |
| `queries` | array | No | Custom queries to extract specific information. Only used when featureTypes includes QUERIES. |
@@ -56,6 +58,39 @@ Integrate AWS Textract into your workflow to extract text, tables, forms, and ke
#### Output
This tool does not produce any outputs.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `blocks` | array | Array of Block objects containing detected text, tables, forms, and other elements |
| ↳ `BlockType` | string | Type of block \(PAGE, LINE, WORD, TABLE, CELL, KEY_VALUE_SET, etc.\) |
| ↳ `Id` | string | Unique identifier for the block |
| ↳ `Text` | string | The text content \(for LINE and WORD blocks\) |
| ↳ `TextType` | string | Type of text \(PRINTED or HANDWRITING\) |
| ↳ `Confidence` | number | Confidence score \(0-100\) |
| ↳ `Page` | number | Page number |
| ↳ `Geometry` | object | Location and bounding box information |
| ↳ `BoundingBox` | object | Bounding box position and size, expressed as ratios of the document dimensions |
| ↳ `Height` | number | Height as ratio of document height |
| ↳ `Left` | number | Left position as ratio of document width |
| ↳ `Top` | number | Top position as ratio of document height |
| ↳ `Width` | number | Width as ratio of document width |
| ↳ `Polygon` | array | Polygon coordinates |
| ↳ `X` | number | X coordinate |
| ↳ `Y` | number | Y coordinate |
| ↳ `Relationships` | array | Relationships to other blocks |
| ↳ `Type` | string | Relationship type \(CHILD, VALUE, ANSWER, etc.\) |
| ↳ `Ids` | array | IDs of related blocks |
| ↳ `EntityTypes` | array | Entity types for KEY_VALUE_SET \(KEY or VALUE\) |
| ↳ `SelectionStatus` | string | For checkboxes: SELECTED or NOT_SELECTED |
| ↳ `RowIndex` | number | Row index for table cells |
| ↳ `ColumnIndex` | number | Column index for table cells |
| ↳ `RowSpan` | number | Row span for merged cells |
| ↳ `ColumnSpan` | number | Column span for merged cells |
| ↳ `Query` | object | Query information for QUERY blocks |
| ↳ `Text` | string | Query text |
| ↳ `Alias` | string | Query alias |
| ↳ `Pages` | array | Pages to search |
| `documentMetadata` | object | Metadata about the analyzed document |
| ↳ `pages` | number | Number of pages in the document |
| `modelVersion` | string | Version of the Textract model used for processing |
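
A small sketch of walking the `blocks` array described above to pull plain text out of LINE blocks; the `TextractBlock` interface covers only the fields used here and is an assumption, not the full Textract Block type:

```typescript
// Hypothetical minimal Block shape; only the fields used below are modeled (assumption).
interface TextractBlock {
  BlockType: string
  Text?: string
  Page?: number
  Confidence?: number
}

// Concatenate LINE blocks into plain text per page, skipping low-confidence lines.
function linesByPage(blocks: TextractBlock[], minConfidence = 80): Map<number, string> {
  const pages = new Map<number, string>()
  for (const block of blocks) {
    if (block.BlockType !== 'LINE' || !block.Text) continue
    if ((block.Confidence ?? 0) < minConfidence) continue
    const page = block.Page ?? 1
    pages.set(page, `${pages.get(page) ?? ''}${block.Text}\n`)
  }
  return pages
}
```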

View File

@@ -122,7 +122,6 @@ Retrieve call recording information and transcription (if enabled via TwiML).
| `channels` | number | Number of channels \(1 for mono, 2 for dual\) |
| `source` | string | How the recording was created |
| `mediaUrl` | string | URL to download the recording media file |
| `file` | file | Downloaded recording media file |
| `price` | string | Cost of the recording |
| `priceUnit` | string | Currency of the price |
| `uri` | string | Relative URI of the recording resource |

View File

@@ -75,7 +75,6 @@ Download files uploaded in Typeform responses
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `fileUrl` | string | Direct download URL for the uploaded file |
| `file` | file | Downloaded file stored in execution files |
| `contentType` | string | MIME type of the uploaded file |
| `filename` | string | Original filename of the uploaded file |

View File

@@ -57,14 +57,14 @@ Generate videos using Runway Gen-4 with world consistency and visual references
| `duration` | number | No | Video duration in seconds \(5 or 10, default: 5\) |
| `aspectRatio` | string | No | Aspect ratio: 16:9 \(landscape\), 9:16 \(portrait\), or 1:1 \(square\) |
| `resolution` | string | No | Video resolution \(720p output\). Note: Gen-4 Turbo outputs at 720p natively |
| `visualReference` | file | Yes | Reference image REQUIRED for Gen-4 \(UserFile object\). Gen-4 only supports image-to-video, not text-only generation |
| `visualReference` | json | Yes | Reference image REQUIRED for Gen-4 \(UserFile object\). Gen-4 only supports image-to-video, not text-only generation |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `videoUrl` | string | Generated video URL |
| `videoFile` | file | Video file object with metadata |
| `videoFile` | json | Video file object with metadata |
| `duration` | number | Video duration in seconds |
| `width` | number | Video width in pixels |
| `height` | number | Video height in pixels |
@@ -93,7 +93,7 @@ Generate videos using Google Veo 3/3.1 with native audio generation
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `videoUrl` | string | Generated video URL |
| `videoFile` | file | Video file object with metadata |
| `videoFile` | json | Video file object with metadata |
| `duration` | number | Video duration in seconds |
| `width` | number | Video width in pixels |
| `height` | number | Video height in pixels |
@@ -123,7 +123,7 @@ Generate videos using Luma Dream Machine with advanced camera controls
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `videoUrl` | string | Generated video URL |
| `videoFile` | file | Video file object with metadata |
| `videoFile` | json | Video file object with metadata |
| `duration` | number | Video duration in seconds |
| `width` | number | Video width in pixels |
| `height` | number | Video height in pixels |
@@ -151,7 +151,7 @@ Generate videos using MiniMax Hailuo through MiniMax Platform API with advanced
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `videoUrl` | string | Generated video URL |
| `videoFile` | file | Video file object with metadata |
| `videoFile` | json | Video file object with metadata |
| `duration` | number | Video duration in seconds |
| `width` | number | Video width in pixels |
| `height` | number | Video height in pixels |
@@ -181,7 +181,7 @@ Generate videos using Fal.ai platform with access to multiple models including V
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `videoUrl` | string | Generated video URL |
| `videoFile` | file | Video file object with metadata |
| `videoFile` | json | Video file object with metadata |
| `duration` | number | Video duration in seconds |
| `width` | number | Video width in pixels |
| `height` | number | Video height in pixels |
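
Because `videoFile` is now typed as `json` rather than `file`, downstream blocks may receive either a structured file object or a plain URL string; a hedged normalization sketch follows (the `VideoFileMeta` field names are assumptions about the UserFile shape, not a documented contract):

```typescript
// Hypothetical metadata shape; the actual UserFile fields may differ (assumption).
interface VideoFileMeta {
  url?: string
  name?: string
  size?: number
}

// Prefer a URL on the structured file object, falling back to the top-level videoUrl output.
function resolveVideoUrl(output: {
  videoUrl?: string
  videoFile?: VideoFileMeta | string
}): string | undefined {
  if (typeof output.videoFile === 'string') return output.videoFile
  return output.videoFile?.url ?? output.videoUrl
}
```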

View File

@@ -6,7 +6,7 @@ description: Analyze images with vision models
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="vision_v2"
type="vision"
color="#4D5FFF"
/>
@@ -35,6 +35,8 @@ Integrate Vision into the workflow. Can analyze images with vision models.
### `vision_tool`
Process and analyze images using advanced vision models. Capable of understanding image content, extracting text, identifying objects, and providing detailed visual descriptions.
#### Input
| Parameter | Type | Required | Description |
@@ -47,6 +49,14 @@ Integrate Vision into the workflow. Can analyze images with vision models.
#### Output
This tool does not produce any outputs.
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | The analyzed content and description of the image |
| `model` | string | The vision model that was used for analysis |
| `tokens` | number | Total tokens used for the analysis |
| `usage` | object | Detailed token usage breakdown |
| ↳ `input_tokens` | number | Tokens used for input processing |
| ↳ `output_tokens` | number | Tokens used for response generation |
| ↳ `total_tokens` | number | Total tokens consumed |
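
A brief sketch of reading the `usage` breakdown documented above, e.g. for cost accounting; the `VisionResult` interface and the per-token rates are illustrative placeholders, not values from this repository:

```typescript
// Hypothetical output shape mirroring the table above (assumption).
interface VisionResult {
  content: string
  model: string
  tokens: number
  usage?: {
    input_tokens: number
    output_tokens: number
    total_tokens: number
  }
}

// Estimate spend from the usage breakdown; rates are illustrative placeholders.
function estimateCost(result: VisionResult, inputRate = 3e-6, outputRate = 15e-6): number {
  const input = result.usage?.input_tokens ?? 0
  const output = result.usage?.output_tokens ?? 0
  return input * inputRate + output * outputRate
}
```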

View File

@@ -335,7 +335,6 @@ Get all recordings for a specific Zoom meeting
| `meetingId` | string | Yes | The meeting ID or meeting UUID \(e.g., "1234567890" or "4444AAABBBccccc12345=="\) |
| `includeFolderItems` | boolean | No | Include items within a folder |
| `ttl` | number | No | Time to live for download URLs in seconds \(max 604800\) |
| `downloadFiles` | boolean | No | Download recording files into file outputs |
#### Output
@@ -365,7 +364,6 @@ Get all recordings for a specific Zoom meeting
| ↳ `download_url` | string | URL to download the recording |
| ↳ `status` | string | Recording status |
| ↳ `recording_type` | string | Type of recording \(shared_screen, audio_only, etc.\) |
| `files` | file[] | Downloaded recording files |
### `zoom_delete_recording`

View File

@@ -6,7 +6,7 @@ description: Leer y analizar múltiples archivos
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="file_v3"
type="file"
color="#40916C"
/>

View File

@@ -6,7 +6,7 @@ description: Interactúa con transcripciones y grabaciones de reuniones de Firef
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="fireflies_v2"
type="fireflies"
color="#100730"
/>

View File

@@ -6,7 +6,7 @@ description: Extraer texto de documentos PDF
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="mistral_parse_v3"
type="mistral_parse"
color="#000000"
/>

View File

@@ -6,7 +6,7 @@ description: Lire et analyser plusieurs fichiers
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="file_v3"
type="file"
color="#40916C"
/>

View File

@@ -7,7 +7,7 @@ description: Interagissez avec les transcriptions et enregistrements de réunion
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="fireflies_v2"
type="fireflies"
color="#100730"
/>

View File

@@ -6,7 +6,7 @@ description: Extraire du texte à partir de documents PDF
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="mistral_parse_v3"
type="mistral_parse"
color="#000000"
/>

View File

@@ -6,7 +6,7 @@ description: 複数のファイルを読み込んで解析する
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="file_v3"
type="file"
color="#40916C"
/>

View File

@@ -6,7 +6,7 @@ description: Fireflies.aiの会議文字起こしと録画を操作
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="fireflies_v2"
type="fireflies"
color="#100730"
/>

View File

@@ -6,7 +6,7 @@ description: PDFドキュメントからテキストを抽出する
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="mistral_parse_v3"
type="mistral_parse"
color="#000000"
/>

View File

@@ -6,7 +6,7 @@ description: 读取并解析多个文件
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="file_v3"
type="file"
color="#40916C"
/>

View File

@@ -6,7 +6,7 @@ description: 与 Fireflies.ai 会议转录和录音进行交互
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="fireflies_v2"
type="fireflies"
color="#100730"
/>

View File

@@ -6,7 +6,7 @@ description: 从 PDF 文档中提取文本
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="mistral_parse_v3"
type="mistral_parse"
color="#000000"
/>

Binary file not shown.


View File

@@ -1,7 +1,7 @@
'use client'
import { useBrandConfig } from '@/lib/branding/branding'
import { inter } from '@/app/_styles/fonts/inter/inter'
import { useBrandConfig } from '@/ee/whitelabeling'
export interface SupportFooterProps {
/** Position style - 'fixed' for pages without AuthLayout, 'absolute' for pages with AuthLayout */

View File

@@ -11,7 +11,7 @@ import {
Database,
DollarSign,
HardDrive,
Timer,
Workflow,
} from 'lucide-react'
import { useRouter } from 'next/navigation'
import { cn } from '@/lib/core/utils/cn'
@@ -44,7 +44,7 @@ interface PricingTier {
const FREE_PLAN_FEATURES: PricingFeature[] = [
{ icon: DollarSign, text: '$20 usage limit' },
{ icon: HardDrive, text: '5GB file storage' },
{ icon: Timer, text: '5 min execution limit' },
{ icon: Workflow, text: 'Public template access' },
{ icon: Database, text: 'Limited log retention' },
{ icon: Code2, text: 'CLI/SDK Access' },
]

View File

@@ -7,10 +7,10 @@ import Image from 'next/image'
import Link from 'next/link'
import { useRouter } from 'next/navigation'
import { GithubIcon } from '@/components/icons'
import { useBrandConfig } from '@/lib/branding/branding'
import { isHosted } from '@/lib/core/config/feature-flags'
import { soehne } from '@/app/_styles/fonts/soehne/soehne'
import { getFormattedGitHubStars } from '@/app/(landing)/actions/github'
import { useBrandConfig } from '@/ee/whitelabeling'
import { useBrandedButtonClass } from '@/hooks/use-branded-button-class'
const logger = createLogger('nav')

View File

@@ -1,6 +0,0 @@
import { type NextRequest, NextResponse } from 'next/server'
import { createMcpAuthorizationServerMetadataResponse } from '@/lib/mcp/oauth-discovery'
export async function GET(request: NextRequest): Promise<NextResponse> {
return createMcpAuthorizationServerMetadataResponse(request)
}

View File

@@ -1,6 +0,0 @@
import { type NextRequest, NextResponse } from 'next/server'
import { createMcpAuthorizationServerMetadataResponse } from '@/lib/mcp/oauth-discovery'
export async function GET(request: NextRequest): Promise<NextResponse> {
return createMcpAuthorizationServerMetadataResponse(request)
}

View File

@@ -1,6 +0,0 @@
import { type NextRequest, NextResponse } from 'next/server'
import { createMcpAuthorizationServerMetadataResponse } from '@/lib/mcp/oauth-discovery'
export async function GET(request: NextRequest): Promise<NextResponse> {
return createMcpAuthorizationServerMetadataResponse(request)
}

View File

@@ -1,6 +0,0 @@
import { type NextRequest, NextResponse } from 'next/server'
import { createMcpProtectedResourceMetadataResponse } from '@/lib/mcp/oauth-discovery'
export async function GET(request: NextRequest): Promise<NextResponse> {
return createMcpProtectedResourceMetadataResponse(request)
}

View File

@@ -1,6 +0,0 @@
import { type NextRequest, NextResponse } from 'next/server'
import { createMcpProtectedResourceMetadataResponse } from '@/lib/mcp/oauth-discovery'
export async function GET(request: NextRequest): Promise<NextResponse> {
return createMcpProtectedResourceMetadataResponse(request)
}

View File

@@ -14,8 +14,9 @@ import {
parseWorkflowSSEChunk,
} from '@/lib/a2a/utils'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { getBrandConfig } from '@/lib/branding/branding'
import { acquireLock, getRedisClient, releaseLock } from '@/lib/core/config/redis'
import { validateUrlWithDNS } from '@/lib/core/security/input-validation.server'
import { validateExternalUrl } from '@/lib/core/security/input-validation'
import { SSE_HEADERS } from '@/lib/core/utils/sse'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { markExecutionCancelled } from '@/lib/execution/cancellation'
@@ -34,7 +35,6 @@ import {
type PushNotificationSetParams,
type TaskIdParams,
} from '@/app/api/a2a/serve/[agentId]/utils'
import { getBrandConfig } from '@/ee/whitelabeling'
const logger = createLogger('A2AServeAPI')
@@ -1119,7 +1119,7 @@ async function handlePushNotificationSet(
)
}
const urlValidation = await validateUrlWithDNS(
const urlValidation = validateExternalUrl(
params.pushNotificationConfig.url,
'Push notification URL'
)

View File

@@ -18,7 +18,6 @@ const UpdateCostSchema = z.object({
model: z.string().min(1, 'Model is required'),
inputTokens: z.number().min(0).default(0),
outputTokens: z.number().min(0).default(0),
source: z.enum(['copilot', 'mcp_copilot']).default('copilot'),
})
/**
@@ -76,14 +75,12 @@ export async function POST(req: NextRequest) {
)
}
const { userId, cost, model, inputTokens, outputTokens, source } = validation.data
const isMcp = source === 'mcp_copilot'
const { userId, cost, model, inputTokens, outputTokens } = validation.data
logger.info(`[${requestId}] Processing cost update`, {
userId,
cost,
model,
source,
})
// Check if user stats record exists (same as ExecutionLogger)
@@ -99,7 +96,7 @@ export async function POST(req: NextRequest) {
return NextResponse.json({ error: 'User stats record not found' }, { status: 500 })
}
const updateFields: Record<string, unknown> = {
const updateFields = {
totalCost: sql`total_cost + ${cost}`,
currentPeriodCost: sql`current_period_cost + ${cost}`,
totalCopilotCost: sql`total_copilot_cost + ${cost}`,
@@ -108,24 +105,17 @@ export async function POST(req: NextRequest) {
lastActive: new Date(),
}
// Also increment MCP-specific counters when source is mcp_copilot
if (isMcp) {
updateFields.totalMcpCopilotCost = sql`total_mcp_copilot_cost + ${cost}`
updateFields.currentPeriodMcpCopilotCost = sql`current_period_mcp_copilot_cost + ${cost}`
}
await db.update(userStats).set(updateFields).where(eq(userStats.userId, userId))
logger.info(`[${requestId}] Updated user stats record`, {
userId,
addedCost: cost,
source,
})
// Log usage for complete audit trail
await logModelUsage({
userId,
source: isMcp ? 'mcp_copilot' : 'copilot',
source: 'copilot',
model,
inputTokens,
outputTokens,

View File

@@ -1,7 +1,7 @@
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { SIM_AGENT_API_URL } from '@/lib/copilot/constants'
import { SIM_AGENT_API_URL_DEFAULT } from '@/lib/copilot/constants'
import { env } from '@/lib/core/config/env'
const GenerateApiKeySchema = z.object({
@@ -17,6 +17,9 @@ export async function POST(req: NextRequest) {
const userId = session.user.id
// Move environment variable access inside the function
const SIM_AGENT_API_URL = env.SIM_AGENT_API_URL || SIM_AGENT_API_URL_DEFAULT
const body = await req.json().catch(() => ({}))
const validationResult = GenerateApiKeySchema.safeParse(body)

View File

@@ -1,6 +1,6 @@
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { SIM_AGENT_API_URL } from '@/lib/copilot/constants'
import { SIM_AGENT_API_URL_DEFAULT } from '@/lib/copilot/constants'
import { env } from '@/lib/core/config/env'
export async function GET(request: NextRequest) {
@@ -12,6 +12,8 @@ export async function GET(request: NextRequest) {
const userId = session.user.id
const SIM_AGENT_API_URL = env.SIM_AGENT_API_URL || SIM_AGENT_API_URL_DEFAULT
const res = await fetch(`${SIM_AGENT_API_URL}/api/validate-key/get-api-keys`, {
method: 'POST',
headers: {
@@ -66,6 +68,8 @@ export async function DELETE(request: NextRequest) {
return NextResponse.json({ error: 'id is required' }, { status: 400 })
}
const SIM_AGENT_API_URL = env.SIM_AGENT_API_URL || SIM_AGENT_API_URL_DEFAULT
const res = await fetch(`${SIM_AGENT_API_URL}/api/validate-key/delete`, {
method: 'POST',
headers: {

File diff suppressed because it is too large

View File

@@ -1,130 +0,0 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import {
getStreamMeta,
readStreamEvents,
type StreamMeta,
} from '@/lib/copilot/orchestrator/stream-buffer'
import { authenticateCopilotRequestSessionOnly } from '@/lib/copilot/request-helpers'
import { SSE_HEADERS } from '@/lib/core/utils/sse'
const logger = createLogger('CopilotChatStreamAPI')
const POLL_INTERVAL_MS = 250
const MAX_STREAM_MS = 10 * 60 * 1000
function encodeEvent(event: Record<string, any>): Uint8Array {
return new TextEncoder().encode(`data: ${JSON.stringify(event)}\n\n`)
}
export async function GET(request: NextRequest) {
const { userId: authenticatedUserId, isAuthenticated } =
await authenticateCopilotRequestSessionOnly()
if (!isAuthenticated || !authenticatedUserId) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const url = new URL(request.url)
const streamId = url.searchParams.get('streamId') || ''
const fromParam = url.searchParams.get('from') || '0'
const fromEventId = Number(fromParam || 0)
// If batch=true, return buffered events as JSON instead of SSE
const batchMode = url.searchParams.get('batch') === 'true'
const toParam = url.searchParams.get('to')
const toEventId = toParam ? Number(toParam) : undefined
if (!streamId) {
return NextResponse.json({ error: 'streamId is required' }, { status: 400 })
}
const meta = (await getStreamMeta(streamId)) as StreamMeta | null
logger.info('[Resume] Stream lookup', {
streamId,
fromEventId,
toEventId,
batchMode,
hasMeta: !!meta,
metaStatus: meta?.status,
})
if (!meta) {
return NextResponse.json({ error: 'Stream not found' }, { status: 404 })
}
if (meta.userId && meta.userId !== authenticatedUserId) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 403 })
}
// Batch mode: return all buffered events as JSON
if (batchMode) {
const events = await readStreamEvents(streamId, fromEventId)
const filteredEvents = toEventId ? events.filter((e) => e.eventId <= toEventId) : events
logger.info('[Resume] Batch response', {
streamId,
fromEventId,
toEventId,
eventCount: filteredEvents.length,
})
return NextResponse.json({
success: true,
events: filteredEvents,
status: meta.status,
})
}
const startTime = Date.now()
const stream = new ReadableStream({
async start(controller) {
let lastEventId = Number.isFinite(fromEventId) ? fromEventId : 0
const flushEvents = async () => {
const events = await readStreamEvents(streamId, lastEventId)
if (events.length > 0) {
logger.info('[Resume] Flushing events', {
streamId,
fromEventId: lastEventId,
eventCount: events.length,
})
}
for (const entry of events) {
lastEventId = entry.eventId
const payload = {
...entry.event,
eventId: entry.eventId,
streamId: entry.streamId,
}
controller.enqueue(encodeEvent(payload))
}
}
try {
await flushEvents()
while (Date.now() - startTime < MAX_STREAM_MS) {
const currentMeta = await getStreamMeta(streamId)
if (!currentMeta) break
await flushEvents()
if (currentMeta.status === 'complete' || currentMeta.status === 'error') {
break
}
if (request.signal.aborted) {
break
}
await new Promise((resolve) => setTimeout(resolve, POLL_INTERVAL_MS))
}
} catch (error) {
logger.warn('Stream replay failed', {
streamId,
error: error instanceof Error ? error.message : String(error),
})
} finally {
controller.close()
}
},
})
return new Response(stream, { headers: SSE_HEADERS })
}

View File

@@ -1,7 +1,6 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { REDIS_TOOL_CALL_PREFIX, REDIS_TOOL_CALL_TTL_SECONDS } from '@/lib/copilot/constants'
import {
authenticateCopilotRequestSessionOnly,
createBadRequestResponse,
@@ -24,8 +23,7 @@ const ConfirmationSchema = z.object({
})
/**
* Write the user's tool decision to Redis. The server-side orchestrator's
* waitForToolDecision() polls Redis for this value.
* Update tool call status in Redis
*/
async function updateToolCallStatus(
toolCallId: string,
@@ -34,24 +32,57 @@ async function updateToolCallStatus(
): Promise<boolean> {
const redis = getRedisClient()
if (!redis) {
logger.warn('Redis client not available for tool confirmation')
logger.warn('updateToolCallStatus: Redis client not available')
return false
}
try {
const key = `${REDIS_TOOL_CALL_PREFIX}${toolCallId}`
const payload = {
const key = `tool_call:${toolCallId}`
const timeout = 600000 // 10 minutes timeout for user confirmation
const pollInterval = 100 // Poll every 100ms
const startTime = Date.now()
logger.info('Polling for tool call in Redis', { toolCallId, key, timeout })
// Poll until the key exists or timeout
while (Date.now() - startTime < timeout) {
const exists = await redis.exists(key)
if (exists) {
break
}
// Wait before next poll
await new Promise((resolve) => setTimeout(resolve, pollInterval))
}
// Final check if key exists after polling
const exists = await redis.exists(key)
if (!exists) {
logger.warn('Tool call not found in Redis after polling timeout', {
toolCallId,
key,
timeout,
pollDuration: Date.now() - startTime,
})
return false
}
// Store both status and message as JSON
const toolCallData = {
status,
message: message || null,
timestamp: new Date().toISOString(),
}
await redis.set(key, JSON.stringify(payload), 'EX', REDIS_TOOL_CALL_TTL_SECONDS)
await redis.set(key, JSON.stringify(toolCallData), 'EX', 86400) // Keep 24 hour expiry
return true
} catch (error) {
logger.error('Failed to update tool call status', {
logger.error('Failed to update tool call status in Redis', {
toolCallId,
status,
error: error instanceof Error ? error.message : String(error),
message,
error: error instanceof Error ? error.message : 'Unknown error',
})
return false
}

View File

@@ -1,28 +0,0 @@
import { type NextRequest, NextResponse } from 'next/server'
import { authenticateCopilotRequestSessionOnly } from '@/lib/copilot/request-helpers'
import { routeExecution } from '@/lib/copilot/tools/server/router'
/**
* GET /api/copilot/credentials
* Returns connected OAuth credentials for the authenticated user.
* Used by the copilot store for credential masking.
*/
export async function GET(_req: NextRequest) {
const { userId, isAuthenticated } = await authenticateCopilotRequestSessionOnly()
if (!isAuthenticated || !userId) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
try {
const result = await routeExecution('get_credentials', {}, { userId })
return NextResponse.json({ success: true, result })
} catch (error) {
return NextResponse.json(
{
success: false,
error: error instanceof Error ? error.message : 'Failed to load credentials',
},
{ status: 500 }
)
}
}

View File

@@ -0,0 +1,54 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import {
authenticateCopilotRequestSessionOnly,
createBadRequestResponse,
createInternalServerErrorResponse,
createRequestTracker,
createUnauthorizedResponse,
} from '@/lib/copilot/request-helpers'
import { routeExecution } from '@/lib/copilot/tools/server/router'
const logger = createLogger('ExecuteCopilotServerToolAPI')
const ExecuteSchema = z.object({
toolName: z.string(),
payload: z.unknown().optional(),
})
export async function POST(req: NextRequest) {
const tracker = createRequestTracker()
try {
const { userId, isAuthenticated } = await authenticateCopilotRequestSessionOnly()
if (!isAuthenticated || !userId) {
return createUnauthorizedResponse()
}
const body = await req.json()
try {
const preview = JSON.stringify(body).slice(0, 300)
logger.debug(`[${tracker.requestId}] Incoming request body preview`, { preview })
} catch {}
const { toolName, payload } = ExecuteSchema.parse(body)
logger.info(`[${tracker.requestId}] Executing server tool`, { toolName })
const result = await routeExecution(toolName, payload, { userId })
try {
const resultPreview = JSON.stringify(result).slice(0, 300)
logger.debug(`[${tracker.requestId}] Server tool result preview`, { toolName, resultPreview })
} catch {}
return NextResponse.json({ success: true, result })
} catch (error) {
if (error instanceof z.ZodError) {
logger.debug(`[${tracker.requestId}] Zod validation error`, { issues: error.issues })
return createBadRequestResponse('Invalid request body for execute-copilot-server-tool')
}
logger.error(`[${tracker.requestId}] Failed to execute server tool:`, error)
const errorMessage = error instanceof Error ? error.message : 'Failed to execute server tool'
return createInternalServerErrorResponse(errorMessage)
}
}

View File

@@ -0,0 +1,247 @@
import { db } from '@sim/db'
import { account, workflow } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import {
createBadRequestResponse,
createInternalServerErrorResponse,
createRequestTracker,
createUnauthorizedResponse,
} from '@/lib/copilot/request-helpers'
import { generateRequestId } from '@/lib/core/utils/request'
import { getEffectiveDecryptedEnv } from '@/lib/environment/utils'
import { refreshTokenIfNeeded } from '@/app/api/auth/oauth/utils'
import { resolveEnvVarReferences } from '@/executor/utils/reference-validation'
import { executeTool } from '@/tools'
import { getTool, resolveToolId } from '@/tools/utils'
const logger = createLogger('CopilotExecuteToolAPI')
const ExecuteToolSchema = z.object({
toolCallId: z.string(),
toolName: z.string(),
arguments: z.record(z.any()).optional().default({}),
workflowId: z.string().optional(),
})
export async function POST(req: NextRequest) {
const tracker = createRequestTracker()
try {
const session = await getSession()
if (!session?.user?.id) {
return createUnauthorizedResponse()
}
const userId = session.user.id
const body = await req.json()
try {
const preview = JSON.stringify(body).slice(0, 300)
logger.debug(`[${tracker.requestId}] Incoming execute-tool request`, { preview })
} catch {}
const { toolCallId, toolName, arguments: toolArgs, workflowId } = ExecuteToolSchema.parse(body)
const resolvedToolName = resolveToolId(toolName)
logger.info(`[${tracker.requestId}] Executing tool`, {
toolCallId,
toolName,
resolvedToolName,
workflowId,
hasArgs: Object.keys(toolArgs).length > 0,
})
const toolConfig = getTool(resolvedToolName)
if (!toolConfig) {
// Find similar tool names to help debug
const { tools: allTools } = await import('@/tools/registry')
const allToolNames = Object.keys(allTools)
const prefix = toolName.split('_').slice(0, 2).join('_')
const similarTools = allToolNames
.filter((name) => name.startsWith(`${prefix.split('_')[0]}_`))
.slice(0, 10)
logger.warn(`[${tracker.requestId}] Tool not found in registry`, {
toolName,
prefix,
similarTools,
totalToolsInRegistry: allToolNames.length,
})
return NextResponse.json(
{
success: false,
error: `Tool not found: ${toolName}. Similar tools: ${similarTools.join(', ')}`,
toolCallId,
},
{ status: 404 }
)
}
// Get the workspaceId from the workflow (env vars are stored at workspace level)
let workspaceId: string | undefined
if (workflowId) {
const workflowResult = await db
.select({ workspaceId: workflow.workspaceId })
.from(workflow)
.where(eq(workflow.id, workflowId))
.limit(1)
workspaceId = workflowResult[0]?.workspaceId ?? undefined
}
// Get decrypted environment variables early so we can resolve all {{VAR}} references
const decryptedEnvVars = await getEffectiveDecryptedEnv(userId, workspaceId)
logger.info(`[${tracker.requestId}] Fetched environment variables`, {
workflowId,
workspaceId,
envVarCount: Object.keys(decryptedEnvVars).length,
envVarKeys: Object.keys(decryptedEnvVars),
})
// Build execution params starting with LLM-provided arguments
// Resolve all {{ENV_VAR}} references in the arguments (deep for nested objects)
const executionParams: Record<string, any> = resolveEnvVarReferences(
toolArgs,
decryptedEnvVars,
{ deep: true }
) as Record<string, any>
logger.info(`[${tracker.requestId}] Resolved env var references in arguments`, {
toolName,
originalArgKeys: Object.keys(toolArgs),
resolvedArgKeys: Object.keys(executionParams),
})
// Resolve OAuth access token if required
if (toolConfig.oauth?.required && toolConfig.oauth.provider) {
const provider = toolConfig.oauth.provider
logger.info(`[${tracker.requestId}] Resolving OAuth token`, { provider })
try {
// Find the account for this provider and user
const accounts = await db
.select()
.from(account)
.where(and(eq(account.providerId, provider), eq(account.userId, userId)))
.limit(1)
if (accounts.length > 0) {
const acc = accounts[0]
const requestId = generateRequestId()
const { accessToken } = await refreshTokenIfNeeded(requestId, acc as any, acc.id)
if (accessToken) {
executionParams.accessToken = accessToken
logger.info(`[${tracker.requestId}] OAuth token resolved`, { provider })
} else {
logger.warn(`[${tracker.requestId}] No access token available`, { provider })
return NextResponse.json(
{
success: false,
error: `OAuth token not available for ${provider}. Please reconnect your account.`,
toolCallId,
},
{ status: 400 }
)
}
} else {
logger.warn(`[${tracker.requestId}] No account found for provider`, { provider })
return NextResponse.json(
{
success: false,
error: `No ${provider} account connected. Please connect your account first.`,
toolCallId,
},
{ status: 400 }
)
}
} catch (error) {
logger.error(`[${tracker.requestId}] Failed to resolve OAuth token`, {
provider,
error: error instanceof Error ? error.message : String(error),
})
return NextResponse.json(
{
success: false,
error: `Failed to get OAuth token for ${provider}`,
toolCallId,
},
{ status: 500 }
)
}
}
// Check if tool requires an API key that wasn't resolved via {{ENV_VAR}} reference
const needsApiKey = toolConfig.params?.apiKey?.required
if (needsApiKey && !executionParams.apiKey) {
logger.warn(`[${tracker.requestId}] No API key found for tool`, { toolName })
return NextResponse.json(
{
success: false,
error: `API key not provided for ${toolName}. Use {{YOUR_API_KEY_ENV_VAR}} to reference your environment variable.`,
toolCallId,
},
{ status: 400 }
)
}
// Add execution context
executionParams._context = {
workflowId,
userId,
}
// Special handling for function_execute - inject environment variables
if (toolName === 'function_execute') {
executionParams.envVars = decryptedEnvVars
executionParams.workflowVariables = {} // No workflow variables in copilot context
executionParams.blockData = {} // No block data in copilot context
executionParams.blockNameMapping = {} // No block mapping in copilot context
executionParams.language = executionParams.language || 'javascript'
executionParams.timeout = executionParams.timeout || 30000
logger.info(`[${tracker.requestId}] Injected env vars for function_execute`, {
envVarCount: Object.keys(decryptedEnvVars).length,
})
}
// Execute the tool
logger.info(`[${tracker.requestId}] Executing tool with resolved credentials`, {
toolName,
hasAccessToken: !!executionParams.accessToken,
hasApiKey: !!executionParams.apiKey,
})
const result = await executeTool(resolvedToolName, executionParams)
logger.info(`[${tracker.requestId}] Tool execution complete`, {
toolName,
success: result.success,
hasOutput: !!result.output,
})
return NextResponse.json({
success: true,
toolCallId,
result: {
success: result.success,
output: result.output,
error: result.error,
},
})
} catch (error) {
if (error instanceof z.ZodError) {
logger.debug(`[${tracker.requestId}] Zod validation error`, { issues: error.issues })
return createBadRequestResponse('Invalid request body for execute-tool')
}
logger.error(`[${tracker.requestId}] Failed to execute tool:`, error)
const errorMessage = error instanceof Error ? error.message : 'Failed to execute tool'
return createInternalServerErrorResponse(errorMessage)
}
}

View File

@@ -1,6 +1,6 @@
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { SIM_AGENT_API_URL } from '@/lib/copilot/constants'
import { SIM_AGENT_API_URL_DEFAULT } from '@/lib/copilot/constants'
import {
authenticateCopilotRequestSessionOnly,
createBadRequestResponse,
@@ -10,6 +10,8 @@ import {
} from '@/lib/copilot/request-helpers'
import { env } from '@/lib/core/config/env'
const SIM_AGENT_API_URL = env.SIM_AGENT_API_URL || SIM_AGENT_API_URL_DEFAULT
const BodySchema = z.object({
messageId: z.string(),
diffCreated: z.boolean(),

View File

@@ -0,0 +1,123 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { SIM_AGENT_API_URL_DEFAULT } from '@/lib/copilot/constants'
import {
authenticateCopilotRequestSessionOnly,
createBadRequestResponse,
createInternalServerErrorResponse,
createRequestTracker,
createUnauthorizedResponse,
} from '@/lib/copilot/request-helpers'
import { env } from '@/lib/core/config/env'
const logger = createLogger('CopilotMarkToolCompleteAPI')
const SIM_AGENT_API_URL = env.SIM_AGENT_API_URL || SIM_AGENT_API_URL_DEFAULT
const MarkCompleteSchema = z.object({
id: z.string(),
name: z.string(),
status: z.number().int(),
message: z.any().optional(),
data: z.any().optional(),
})
/**
* POST /api/copilot/tools/mark-complete
* Proxy to Sim Agent: POST /api/tools/mark-complete
*/
export async function POST(req: NextRequest) {
const tracker = createRequestTracker()
try {
const { userId, isAuthenticated } = await authenticateCopilotRequestSessionOnly()
if (!isAuthenticated || !userId) {
return createUnauthorizedResponse()
}
const body = await req.json()
// Log raw body shape for diagnostics (avoid dumping huge payloads)
try {
const bodyPreview = JSON.stringify(body).slice(0, 300)
logger.debug(`[${tracker.requestId}] Incoming mark-complete raw body preview`, {
preview: `${bodyPreview}${bodyPreview.length === 300 ? '...' : ''}`,
})
} catch {}
const parsed = MarkCompleteSchema.parse(body)
const messagePreview = (() => {
try {
const s =
typeof parsed.message === 'string' ? parsed.message : JSON.stringify(parsed.message)
return s ? `${s.slice(0, 200)}${s.length > 200 ? '...' : ''}` : undefined
} catch {
return undefined
}
})()
logger.info(`[${tracker.requestId}] Forwarding tool mark-complete`, {
userId,
toolCallId: parsed.id,
toolName: parsed.name,
status: parsed.status,
hasMessage: parsed.message !== undefined,
hasData: parsed.data !== undefined,
messagePreview,
agentUrl: `${SIM_AGENT_API_URL}/api/tools/mark-complete`,
})
const agentRes = await fetch(`${SIM_AGENT_API_URL}/api/tools/mark-complete`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
...(env.COPILOT_API_KEY ? { 'x-api-key': env.COPILOT_API_KEY } : {}),
},
body: JSON.stringify(parsed),
})
// Attempt to parse agent response JSON
let agentJson: any = null
let agentText: string | null = null
try {
agentJson = await agentRes.json()
} catch (_) {
try {
agentText = await agentRes.text()
} catch {}
}
logger.info(`[${tracker.requestId}] Agent responded to mark-complete`, {
status: agentRes.status,
ok: agentRes.ok,
responseJsonPreview: agentJson ? JSON.stringify(agentJson).slice(0, 300) : undefined,
responseTextPreview: agentText ? agentText.slice(0, 300) : undefined,
})
if (agentRes.ok) {
return NextResponse.json({ success: true })
}
const errorMessage =
agentJson?.error || agentText || `Agent responded with status ${agentRes.status}`
const status = agentRes.status >= 500 ? 500 : 400
logger.warn(`[${tracker.requestId}] Mark-complete failed`, {
status,
error: errorMessage,
})
return NextResponse.json({ success: false, error: errorMessage }, { status })
} catch (error) {
if (error instanceof z.ZodError) {
logger.warn(`[${tracker.requestId}] Invalid mark-complete request body`, {
issues: error.issues,
})
return createBadRequestResponse('Invalid request body for mark-complete')
}
logger.error(`[${tracker.requestId}] Failed to proxy mark-complete:`, error)
return createInternalServerErrorResponse('Failed to mark tool as complete')
}
}

View File

@@ -28,7 +28,6 @@ const DEFAULT_ENABLED_MODELS: Record<CopilotModelId, boolean> = {
'claude-4-sonnet': false,
'claude-4.5-haiku': true,
'claude-4.5-sonnet': true,
'claude-4.6-opus': true,
'claude-4.5-opus': true,
'claude-4.1-opus': false,
'gemini-3-pro': true,

View File

@@ -21,7 +21,6 @@ const UpdateCreatorProfileSchema = z.object({
name: z.string().min(1, 'Name is required').max(100, 'Max 100 characters').optional(),
profileImageUrl: z.string().optional().or(z.literal('')),
details: CreatorProfileDetailsSchema.optional(),
verified: z.boolean().optional(), // Verification status (super users only)
})
// Helper to check if user has permission to manage profile
@@ -98,29 +97,11 @@ export async function PUT(request: NextRequest, { params }: { params: Promise<{
return NextResponse.json({ error: 'Profile not found' }, { status: 404 })
}
// Verification changes require super user permission
if (data.verified !== undefined) {
const { verifyEffectiveSuperUser } = await import('@/lib/templates/permissions')
const { effectiveSuperUser } = await verifyEffectiveSuperUser(session.user.id)
if (!effectiveSuperUser) {
logger.warn(`[${requestId}] Non-super user attempted to change creator verification: ${id}`)
return NextResponse.json(
{ error: 'Only super users can change verification status' },
{ status: 403 }
)
}
}
// For non-verified updates, check regular permissions
const hasNonVerifiedUpdates =
data.name !== undefined || data.profileImageUrl !== undefined || data.details !== undefined
if (hasNonVerifiedUpdates) {
const canEdit = await hasPermission(session.user.id, existing[0])
if (!canEdit) {
logger.warn(`[${requestId}] User denied permission to update profile: ${id}`)
return NextResponse.json({ error: 'Access denied' }, { status: 403 })
}
// Check permissions
const canEdit = await hasPermission(session.user.id, existing[0])
if (!canEdit) {
logger.warn(`[${requestId}] User denied permission to update profile: ${id}`)
return NextResponse.json({ error: 'Access denied' }, { status: 403 })
}
const updateData: any = {
@@ -130,7 +111,6 @@ export async function PUT(request: NextRequest, { params }: { params: Promise<{
if (data.name !== undefined) updateData.name = data.name
if (data.profileImageUrl !== undefined) updateData.profileImageUrl = data.profileImageUrl
if (data.details !== undefined) updateData.details = data.details
if (data.verified !== undefined) updateData.verified = data.verified
const updated = await db
.update(templateCreators)

View File

@@ -0,0 +1,113 @@
import { db } from '@sim/db'
import { templateCreators } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { generateRequestId } from '@/lib/core/utils/request'
import { verifyEffectiveSuperUser } from '@/lib/templates/permissions'
const logger = createLogger('CreatorVerificationAPI')
export const revalidate = 0
// POST /api/creators/[id]/verify - Verify a creator (super users only)
export async function POST(request: NextRequest, { params }: { params: Promise<{ id: string }> }) {
const requestId = generateRequestId()
const { id } = await params
try {
const session = await getSession()
if (!session?.user?.id) {
logger.warn(`[${requestId}] Unauthorized verification attempt for creator: ${id}`)
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
// Check if user is a super user
const { effectiveSuperUser } = await verifyEffectiveSuperUser(session.user.id)
if (!effectiveSuperUser) {
logger.warn(`[${requestId}] Non-super user attempted to verify creator: ${id}`)
return NextResponse.json({ error: 'Only super users can verify creators' }, { status: 403 })
}
// Check if creator exists
const existingCreator = await db
.select()
.from(templateCreators)
.where(eq(templateCreators.id, id))
.limit(1)
if (existingCreator.length === 0) {
logger.warn(`[${requestId}] Creator not found for verification: ${id}`)
return NextResponse.json({ error: 'Creator not found' }, { status: 404 })
}
// Update creator verified status to true
await db
.update(templateCreators)
.set({ verified: true, updatedAt: new Date() })
.where(eq(templateCreators.id, id))
logger.info(`[${requestId}] Creator verified: ${id} by super user: ${session.user.id}`)
return NextResponse.json({
message: 'Creator verified successfully',
creatorId: id,
})
} catch (error) {
logger.error(`[${requestId}] Error verifying creator ${id}`, error)
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
// DELETE /api/creators/[id]/verify - Unverify a creator (super users only)
export async function DELETE(
request: NextRequest,
{ params }: { params: Promise<{ id: string }> }
) {
const requestId = generateRequestId()
const { id } = await params
try {
const session = await getSession()
if (!session?.user?.id) {
logger.warn(`[${requestId}] Unauthorized unverification attempt for creator: ${id}`)
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
// Check if user is a super user
const { effectiveSuperUser } = await verifyEffectiveSuperUser(session.user.id)
if (!effectiveSuperUser) {
logger.warn(`[${requestId}] Non-super user attempted to unverify creator: ${id}`)
return NextResponse.json({ error: 'Only super users can unverify creators' }, { status: 403 })
}
// Check if creator exists
const existingCreator = await db
.select()
.from(templateCreators)
.where(eq(templateCreators.id, id))
.limit(1)
if (existingCreator.length === 0) {
logger.warn(`[${requestId}] Creator not found for unverification: ${id}`)
return NextResponse.json({ error: 'Creator not found' }, { status: 404 })
}
// Update creator verified status to false
await db
.update(templateCreators)
.set({ verified: false, updatedAt: new Date() })
.where(eq(templateCreators.id, id))
logger.info(`[${requestId}] Creator unverified: ${id} by super user: ${session.user.id}`)
return NextResponse.json({
message: 'Creator unverified successfully',
creatorId: id,
})
} catch (error) {
logger.error(`[${requestId}] Error unverifying creator ${id}`, error)
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}

View File

@@ -1,16 +1,13 @@
import { asyncJobs, db } from '@sim/db'
import { db } from '@sim/db'
import { workflowExecutionLogs } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, inArray, lt, sql } from 'drizzle-orm'
import { and, eq, lt, sql } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { verifyCronAuth } from '@/lib/auth/internal'
import { JOB_RETENTION_HOURS, JOB_STATUS } from '@/lib/core/async-jobs'
import { getMaxExecutionTimeout } from '@/lib/core/execution-limits'
const logger = createLogger('CleanupStaleExecutions')
const STALE_THRESHOLD_MS = getMaxExecutionTimeout() + 5 * 60 * 1000
const STALE_THRESHOLD_MINUTES = Math.ceil(STALE_THRESHOLD_MS / 60000)
const STALE_THRESHOLD_MINUTES = 30
const MAX_INT32 = 2_147_483_647
export async function GET(request: NextRequest) {
@@ -81,102 +78,12 @@ export async function GET(request: NextRequest) {
logger.info(`Stale execution cleanup completed. Cleaned: ${cleaned}, Failed: ${failed}`)
// Clean up stale async jobs (stuck in processing)
let asyncJobsMarkedFailed = 0
try {
const staleAsyncJobs = await db
.update(asyncJobs)
.set({
status: JOB_STATUS.FAILED,
completedAt: new Date(),
error: `Job terminated: stuck in processing for more than ${STALE_THRESHOLD_MINUTES} minutes`,
updatedAt: new Date(),
})
.where(
and(eq(asyncJobs.status, JOB_STATUS.PROCESSING), lt(asyncJobs.startedAt, staleThreshold))
)
.returning({ id: asyncJobs.id })
asyncJobsMarkedFailed = staleAsyncJobs.length
if (asyncJobsMarkedFailed > 0) {
logger.info(`Marked ${asyncJobsMarkedFailed} stale async jobs as failed`)
}
} catch (error) {
logger.error('Failed to clean up stale async jobs:', {
error: error instanceof Error ? error.message : String(error),
})
}
// Clean up stale pending jobs (never started, e.g., due to server crash before startJob())
let stalePendingJobsMarkedFailed = 0
try {
const stalePendingJobs = await db
.update(asyncJobs)
.set({
status: JOB_STATUS.FAILED,
completedAt: new Date(),
error: `Job terminated: stuck in pending state for more than ${STALE_THRESHOLD_MINUTES} minutes (never started)`,
updatedAt: new Date(),
})
.where(
and(eq(asyncJobs.status, JOB_STATUS.PENDING), lt(asyncJobs.createdAt, staleThreshold))
)
.returning({ id: asyncJobs.id })
stalePendingJobsMarkedFailed = stalePendingJobs.length
if (stalePendingJobsMarkedFailed > 0) {
logger.info(`Marked ${stalePendingJobsMarkedFailed} stale pending jobs as failed`)
}
} catch (error) {
logger.error('Failed to clean up stale pending jobs:', {
error: error instanceof Error ? error.message : String(error),
})
}
// Delete completed/failed jobs older than retention period
const retentionThreshold = new Date(Date.now() - JOB_RETENTION_HOURS * 60 * 60 * 1000)
let asyncJobsDeleted = 0
try {
const deletedJobs = await db
.delete(asyncJobs)
.where(
and(
inArray(asyncJobs.status, [JOB_STATUS.COMPLETED, JOB_STATUS.FAILED]),
lt(asyncJobs.completedAt, retentionThreshold)
)
)
.returning({ id: asyncJobs.id })
asyncJobsDeleted = deletedJobs.length
if (asyncJobsDeleted > 0) {
logger.info(
`Deleted ${asyncJobsDeleted} old async jobs (retention: ${JOB_RETENTION_HOURS}h)`
)
}
} catch (error) {
logger.error('Failed to delete old async jobs:', {
error: error instanceof Error ? error.message : String(error),
})
}
return NextResponse.json({
success: true,
executions: {
found: staleExecutions.length,
cleaned,
failed,
thresholdMinutes: STALE_THRESHOLD_MINUTES,
},
asyncJobs: {
staleProcessingMarkedFailed: asyncJobsMarkedFailed,
stalePendingMarkedFailed: stalePendingJobsMarkedFailed,
oldDeleted: asyncJobsDeleted,
staleThresholdMinutes: STALE_THRESHOLD_MINUTES,
retentionHours: JOB_RETENTION_HOURS,
},
found: staleExecutions.length,
cleaned,
failed,
thresholdMinutes: STALE_THRESHOLD_MINUTES,
})
} catch (error) {
logger.error('Error in stale execution cleanup job:', error)

View File

@@ -6,11 +6,7 @@ import { createLogger } from '@sim/logger'
import binaryExtensionsList from 'binary-extensions'
import { type NextRequest, NextResponse } from 'next/server'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import {
secureFetchWithPinnedIP,
validateUrlWithDNS,
} from '@/lib/core/security/input-validation.server'
import { sanitizeUrlForLog } from '@/lib/core/utils/logging'
import { secureFetchWithPinnedIP, validateUrlWithDNS } from '@/lib/core/security/input-validation'
import { isSupportedFileType, parseFile } from '@/lib/file-parsers'
import { isUsingCloudStorage, type StorageContext, StorageService } from '@/lib/uploads'
import { uploadExecutionFile } from '@/lib/uploads/contexts/execution'
@@ -23,7 +19,6 @@ import {
getMimeTypeFromExtension,
getViewerUrl,
inferContextFromKey,
isInternalFileUrl,
} from '@/lib/uploads/utils/file-utils'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
import { verifyFileAccess } from '@/app/api/files/authorization'
@@ -220,7 +215,7 @@ async function parseFileSingle(
}
}
if (isInternalFileUrl(filePath)) {
if (filePath.includes('/api/files/serve/')) {
return handleCloudFile(filePath, fileType, undefined, userId, executionContext)
}
@@ -251,7 +246,7 @@ function validateFilePath(filePath: string): { isValid: boolean; error?: string
return { isValid: false, error: 'Invalid path: tilde character not allowed' }
}
if (filePath.startsWith('/') && !isInternalFileUrl(filePath)) {
if (filePath.startsWith('/') && !filePath.startsWith('/api/files/serve/')) {
return { isValid: false, error: 'Path outside allowed directory' }
}
@@ -425,7 +420,7 @@ async function handleExternalUrl(
return parseResult
} catch (error) {
logger.error(`Error handling external URL ${sanitizeUrlForLog(url)}:`, error)
logger.error(`Error handling external URL ${url}:`, error)
return {
success: false,
error: `Error fetching URL: ${(error as Error).message}`,

View File

@@ -1,7 +1,7 @@
import { createLogger } from '@sim/logger'
import { runs } from '@trigger.dev/sdk'
import { type NextRequest, NextResponse } from 'next/server'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { getJobQueue, JOB_STATUS } from '@/lib/core/async-jobs'
import { generateRequestId } from '@/lib/core/utils/request'
import { createErrorResponse } from '@/app/api/workflows/utils'
@@ -15,6 +15,8 @@ export async function GET(
const requestId = generateRequestId()
try {
logger.debug(`[${requestId}] Getting status for task: ${taskId}`)
const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
if (!authResult.success || !authResult.userId) {
logger.warn(`[${requestId}] Unauthorized task status request`)
@@ -23,60 +25,76 @@ export async function GET(
const authenticatedUserId = authResult.userId
const jobQueue = await getJobQueue()
const job = await jobQueue.getJob(taskId)
const run = await runs.retrieve(taskId)
if (!job) {
return createErrorResponse('Task not found', 404)
}
logger.debug(`[${requestId}] Task ${taskId} status: ${run.status}`)
if (job.metadata?.workflowId) {
const payload = run.payload as any
if (payload?.workflowId) {
const { verifyWorkflowAccess } = await import('@/socket/middleware/permissions')
const accessCheck = await verifyWorkflowAccess(
authenticatedUserId,
job.metadata.workflowId as string
)
const accessCheck = await verifyWorkflowAccess(authenticatedUserId, payload.workflowId)
if (!accessCheck.hasAccess) {
logger.warn(`[${requestId}] Access denied to workflow ${job.metadata.workflowId}`)
logger.warn(`[${requestId}] User ${authenticatedUserId} denied access to task ${taskId}`, {
workflowId: payload.workflowId,
})
return createErrorResponse('Access denied', 403)
}
logger.debug(`[${requestId}] User ${authenticatedUserId} has access to task ${taskId}`)
} else {
if (payload?.userId && payload.userId !== authenticatedUserId) {
logger.warn(
`[${requestId}] User ${authenticatedUserId} attempted to access task ${taskId} owned by ${payload.userId}`
)
return createErrorResponse('Access denied', 403)
}
if (!payload?.userId) {
logger.warn(
`[${requestId}] Task ${taskId} has no ownership information in payload. Denying access for security.`
)
return createErrorResponse('Access denied', 403)
}
} else if (job.metadata?.userId && job.metadata.userId !== authenticatedUserId) {
logger.warn(`[${requestId}] Access denied to user ${job.metadata.userId}`)
return createErrorResponse('Access denied', 403)
} else if (!job.metadata?.userId && !job.metadata?.workflowId) {
logger.warn(`[${requestId}] Access denied to job ${taskId}`)
return createErrorResponse('Access denied', 403)
}
const mappedStatus = job.status === JOB_STATUS.PENDING ? 'queued' : job.status
const statusMap = {
QUEUED: 'queued',
WAITING_FOR_DEPLOY: 'queued',
EXECUTING: 'processing',
RESCHEDULED: 'processing',
FROZEN: 'processing',
COMPLETED: 'completed',
CANCELED: 'cancelled',
FAILED: 'failed',
CRASHED: 'failed',
INTERRUPTED: 'failed',
SYSTEM_FAILURE: 'failed',
EXPIRED: 'failed',
} as const
const mappedStatus = statusMap[run.status as keyof typeof statusMap] || 'unknown'
const response: any = {
success: true,
taskId,
status: mappedStatus,
metadata: {
startedAt: job.startedAt,
startedAt: run.startedAt,
},
}
if (job.status === JOB_STATUS.COMPLETED) {
response.output = job.output
response.metadata.completedAt = job.completedAt
if (job.startedAt && job.completedAt) {
response.metadata.duration = job.completedAt.getTime() - job.startedAt.getTime()
}
if (mappedStatus === 'completed') {
response.output = run.output // This contains the workflow execution results
response.metadata.completedAt = run.finishedAt
response.metadata.duration = run.durationMs
}
if (job.status === JOB_STATUS.FAILED) {
response.error = job.error
response.metadata.completedAt = job.completedAt
if (job.startedAt && job.completedAt) {
response.metadata.duration = job.completedAt.getTime() - job.startedAt.getTime()
}
if (mappedStatus === 'failed') {
response.error = run.error
response.metadata.completedAt = run.finishedAt
response.metadata.duration = run.durationMs
}
if (job.status === JOB_STATUS.PROCESSING || job.status === JOB_STATUS.PENDING) {
response.estimatedDuration = 180000
if (mappedStatus === 'processing' || mappedStatus === 'queued') {
response.estimatedDuration = 180000 // 3 minutes max from our config
}
return NextResponse.json(response)
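A hedged sketch of how a caller might poll this status route until the run settles. The URL and auth header are placeholders (the route's path is not visible in this diff); the status values and response fields mirror the handler above:

```typescript
type TaskStatus = 'queued' | 'processing' | 'completed' | 'failed' | 'cancelled' | 'unknown'

interface TaskStatusResponse {
  success: boolean
  taskId: string
  status: TaskStatus
  output?: unknown
  error?: string
  metadata: { startedAt?: string; completedAt?: string; duration?: number }
}

// Poll until the task reaches a terminal state. '/api/jobs/<id>' is a placeholder path.
async function waitForTask(taskId: string, apiKey: string): Promise<TaskStatusResponse> {
  for (;;) {
    const res = await fetch(`/api/jobs/${taskId}`, { headers: { 'x-api-key': apiKey } })
    if (!res.ok) throw new Error(`Status check failed: ${res.status}`)
    const body = (await res.json()) as TaskStatusResponse
    if (body.status === 'completed' || body.status === 'failed' || body.status === 'cancelled') {
      return body
    }
    await new Promise((resolve) => setTimeout(resolve, 2_000)) // re-check while queued/processing
  }
}
```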

View File

@@ -1,6 +0,0 @@
import { type NextRequest, NextResponse } from 'next/server'
import { createMcpAuthorizationServerMetadataResponse } from '@/lib/mcp/oauth-discovery'
export async function GET(request: NextRequest): Promise<NextResponse> {
return createMcpAuthorizationServerMetadataResponse(request)
}

View File

@@ -1,6 +0,0 @@
import { type NextRequest, NextResponse } from 'next/server'
import { createMcpProtectedResourceMetadataResponse } from '@/lib/mcp/oauth-discovery'
export async function GET(request: NextRequest): Promise<NextResponse> {
return createMcpProtectedResourceMetadataResponse(request)
}

View File

@@ -1,776 +0,0 @@
import { Server } from '@modelcontextprotocol/sdk/server/index.js'
import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js'
import {
CallToolRequestSchema,
type CallToolResult,
ErrorCode,
type JSONRPCError,
type ListToolsResult,
ListToolsRequestSchema,
McpError,
type RequestId,
} from '@modelcontextprotocol/sdk/types.js'
import { db } from '@sim/db'
import { userStats } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { randomUUID } from 'node:crypto'
import { eq, sql } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getHighestPrioritySubscription } from '@/lib/billing/core/subscription'
import { getCopilotModel } from '@/lib/copilot/config'
import { SIM_AGENT_API_URL, SIM_AGENT_VERSION } from '@/lib/copilot/constants'
import { RateLimiter } from '@/lib/core/rate-limiter'
import { env } from '@/lib/core/config/env'
import { orchestrateCopilotStream } from '@/lib/copilot/orchestrator'
import { orchestrateSubagentStream } from '@/lib/copilot/orchestrator/subagent'
import {
executeToolServerSide,
prepareExecutionContext,
} from '@/lib/copilot/orchestrator/tool-executor'
import { DIRECT_TOOL_DEFS, SUBAGENT_TOOL_DEFS } from '@/lib/copilot/tools/mcp/definitions'
import { resolveWorkflowIdForUser } from '@/lib/workflows/utils'
const logger = createLogger('CopilotMcpAPI')
const mcpRateLimiter = new RateLimiter()
export const dynamic = 'force-dynamic'
export const runtime = 'nodejs'
interface CopilotKeyAuthResult {
success: boolean
userId?: string
error?: string
}
/**
* Validates a copilot API key by forwarding it to the Go copilot service's
* `/api/validate-key` endpoint. Returns the associated userId on success.
*/
async function authenticateCopilotApiKey(apiKey: string): Promise<CopilotKeyAuthResult> {
try {
const internalSecret = env.INTERNAL_API_SECRET
if (!internalSecret) {
logger.error('INTERNAL_API_SECRET not configured')
return { success: false, error: 'Server configuration error' }
}
const res = await fetch(`${SIM_AGENT_API_URL}/api/validate-key`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-api-key': internalSecret,
},
body: JSON.stringify({ targetApiKey: apiKey }),
signal: AbortSignal.timeout(10_000),
})
if (!res.ok) {
const body = await res.json().catch(() => null)
const upstream = (body as Record<string, unknown>)?.message
const status = res.status
if (status === 401 || status === 403) {
return {
success: false,
error: `Invalid Copilot API key. Generate a new key in Settings → Copilot and set it in the x-api-key header.`,
}
}
if (status === 402) {
return {
success: false,
error: `Usage limit exceeded for this Copilot API key. Upgrade your plan or wait for your quota to reset.`,
}
}
return { success: false, error: String(upstream ?? 'Copilot API key validation failed') }
}
const data = (await res.json()) as { ok?: boolean; userId?: string }
if (!data.ok || !data.userId) {
return {
success: false,
error: 'Invalid Copilot API key. Generate a new key in Settings → Copilot.',
}
}
return { success: true, userId: data.userId }
} catch (error) {
logger.error('Copilot API key validation failed', { error })
return {
success: false,
error: 'Could not validate Copilot API key — the authentication service is temporarily unreachable. This is NOT a problem with the API key itself; please retry shortly.',
}
}
}
/**
* MCP Server instructions that guide LLMs on how to use the Sim copilot tools.
* This is included in the initialize response to help external LLMs understand
* the workflow lifecycle and best practices.
*/
const MCP_SERVER_INSTRUCTIONS = `
## Sim Workflow Copilot
Sim is a workflow automation platform. Workflows are visual pipelines of connected blocks (Agent, Function, Condition, API, integrations, etc.). The Agent block is the core — an LLM with tools, memory, structured output, and knowledge bases.
### Workflow Lifecycle (Happy Path)
1. \`list_workspaces\` → know where to work
2. \`create_workflow(name, workspaceId)\` → get a workflowId
3. \`sim_build(request, workflowId)\` → plan and build in one pass
4. \`sim_test(request, workflowId)\` → verify it works
5. \`sim_deploy("deploy as api", workflowId)\` → make it accessible externally (optional)
For fine-grained control, use \`sim_plan\` → \`sim_edit\` instead of \`sim_build\`. Pass the plan object from sim_plan EXACTLY as-is to sim_edit's context.plan field.
### Working with Existing Workflows
When the user refers to a workflow by name or description ("the email one", "my Slack bot"):
1. Use \`sim_discovery\` to find it by functionality
2. Or use \`list_workflows\` and match by name
3. Then pass the workflowId to other tools
### Organization
- \`rename_workflow\` — rename a workflow
- \`move_workflow\` — move a workflow into a folder (or root with null)
- \`move_folder\` — nest a folder inside another (or root with null)
- \`create_folder(name, parentId)\` — create nested folder hierarchies
### Key Rules
- You can test workflows immediately after building — deployment is only needed for external access (API, chat, MCP).
- All copilot tools (build, plan, edit, deploy, test, debug) require workflowId.
- If the user reports errors → use \`sim_debug\` first, don't guess.
- Variable syntax: \`<blockname.field>\` for block outputs, \`{{ENV_VAR}}\` for env vars.
`
type HeaderMap = Record<string, string | string[] | undefined>
function createError(id: RequestId, code: ErrorCode | number, message: string): JSONRPCError {
return {
jsonrpc: '2.0',
id,
error: { code, message },
}
}
function normalizeRequestHeaders(request: NextRequest): HeaderMap {
const headers: HeaderMap = {}
request.headers.forEach((value, key) => {
headers[key.toLowerCase()] = value
})
return headers
}
function readHeader(headers: HeaderMap | undefined, name: string): string | undefined {
if (!headers) return undefined
const value = headers[name.toLowerCase()]
if (Array.isArray(value)) {
return value[0]
}
return value
}
class NextResponseCapture {
private _status = 200
private _headers = new Headers()
private _controller: ReadableStreamDefaultController<Uint8Array> | null = null
private _pendingChunks: Uint8Array[] = []
private _closeHandlers: Array<() => void> = []
private _errorHandlers: Array<(error: Error) => void> = []
private _headersWritten = false
private _ended = false
private _headersPromise: Promise<void>
private _resolveHeaders: (() => void) | null = null
private _endedPromise: Promise<void>
private _resolveEnded: (() => void) | null = null
readonly readable: ReadableStream<Uint8Array>
constructor() {
this._headersPromise = new Promise<void>((resolve) => {
this._resolveHeaders = resolve
})
this._endedPromise = new Promise<void>((resolve) => {
this._resolveEnded = resolve
})
this.readable = new ReadableStream<Uint8Array>({
start: (controller) => {
this._controller = controller
if (this._pendingChunks.length > 0) {
for (const chunk of this._pendingChunks) {
controller.enqueue(chunk)
}
this._pendingChunks = []
}
},
cancel: () => {
this._ended = true
this._resolveEnded?.()
this.triggerCloseHandlers()
},
})
}
private markHeadersWritten(): void {
if (this._headersWritten) return
this._headersWritten = true
this._resolveHeaders?.()
}
private triggerCloseHandlers(): void {
for (const handler of this._closeHandlers) {
try {
handler()
} catch (error) {
this.triggerErrorHandlers(error instanceof Error ? error : new Error(String(error)))
}
}
}
private triggerErrorHandlers(error: Error): void {
for (const errorHandler of this._errorHandlers) {
errorHandler(error)
}
}
private normalizeChunk(chunk: unknown): Uint8Array | null {
if (typeof chunk === 'string') {
return new TextEncoder().encode(chunk)
}
if (chunk instanceof Uint8Array) {
return chunk
}
if (chunk === undefined || chunk === null) {
return null
}
return new TextEncoder().encode(String(chunk))
}
writeHead(status: number, headers?: Record<string, string | number | string[]>): this {
this._status = status
if (headers) {
Object.entries(headers).forEach(([key, value]) => {
if (Array.isArray(value)) {
this._headers.set(key, value.join(', '))
} else {
this._headers.set(key, String(value))
}
})
}
this.markHeadersWritten()
return this
}
flushHeaders(): this {
this.markHeadersWritten()
return this
}
write(chunk: unknown): boolean {
const normalized = this.normalizeChunk(chunk)
if (!normalized) return true
this.markHeadersWritten()
if (this._controller) {
try {
this._controller.enqueue(normalized)
} catch (error) {
this.triggerErrorHandlers(error instanceof Error ? error : new Error(String(error)))
}
} else {
this._pendingChunks.push(normalized)
}
return true
}
end(chunk?: unknown): this {
if (chunk !== undefined) this.write(chunk)
this.markHeadersWritten()
if (this._ended) return this
this._ended = true
this._resolveEnded?.()
if (this._controller) {
try {
this._controller.close()
} catch (error) {
this.triggerErrorHandlers(error instanceof Error ? error : new Error(String(error)))
}
}
this.triggerCloseHandlers()
return this
}
async waitForHeaders(timeoutMs = 30000): Promise<void> {
if (this._headersWritten) return
await Promise.race([
this._headersPromise,
new Promise<void>((resolve) => {
setTimeout(resolve, timeoutMs)
}),
])
}
async waitForEnd(timeoutMs = 30000): Promise<void> {
if (this._ended) return
await Promise.race([
this._endedPromise,
new Promise<void>((resolve) => {
setTimeout(resolve, timeoutMs)
}),
])
}
on(event: 'close' | 'error', handler: (() => void) | ((error: Error) => void)): this {
if (event === 'close') {
this._closeHandlers.push(handler as () => void)
}
if (event === 'error') {
this._errorHandlers.push(handler as (error: Error) => void)
}
return this
}
toNextResponse(): NextResponse {
return new NextResponse(this.readable, {
status: this._status,
headers: this._headers,
})
}
}
function buildMcpServer(): Server {
const server = new Server(
{
name: 'sim-copilot',
version: '1.0.0',
},
{
capabilities: { tools: {} },
instructions: MCP_SERVER_INSTRUCTIONS,
}
)
server.setRequestHandler(ListToolsRequestSchema, async () => {
const directTools = DIRECT_TOOL_DEFS.map((tool) => ({
name: tool.name,
description: tool.description,
inputSchema: tool.inputSchema,
}))
const subagentTools = SUBAGENT_TOOL_DEFS.map((tool) => ({
name: tool.name,
description: tool.description,
inputSchema: tool.inputSchema,
}))
const result: ListToolsResult = {
tools: [...directTools, ...subagentTools],
}
return result
})
server.setRequestHandler(CallToolRequestSchema, async (request, extra) => {
const headers = (extra.requestInfo?.headers || {}) as HeaderMap
const apiKeyHeader = readHeader(headers, 'x-api-key')
if (!apiKeyHeader) {
return {
content: [
{
type: 'text' as const,
text: 'AUTHENTICATION ERROR: No Copilot API key provided. The user must set their Copilot API key in the x-api-key header. They can generate one in the Sim app under Settings → Copilot. Do NOT retry — this will fail until the key is configured.',
},
],
isError: true,
}
}
const authResult = await authenticateCopilotApiKey(apiKeyHeader)
if (!authResult.success || !authResult.userId) {
logger.warn('MCP copilot key auth failed', { method: request.method })
return {
content: [
{
type: 'text' as const,
text: `AUTHENTICATION ERROR: ${authResult.error} Do NOT retry — this will fail until the user fixes their Copilot API key.`,
},
],
isError: true,
}
}
const rateLimitResult = await mcpRateLimiter.checkRateLimitWithSubscription(
authResult.userId,
await getHighestPrioritySubscription(authResult.userId),
'api-endpoint',
false
)
if (!rateLimitResult.allowed) {
return {
content: [
{
type: 'text' as const,
text: `RATE LIMIT: Too many requests. Please wait and retry after ${rateLimitResult.resetAt.toISOString()}.`,
},
],
isError: true,
}
}
const params = request.params as { name?: string; arguments?: Record<string, unknown> } | undefined
if (!params?.name) {
throw new McpError(ErrorCode.InvalidParams, 'Tool name required')
}
const result = await handleToolsCall(
{
name: params.name,
arguments: params.arguments,
},
authResult.userId
)
trackMcpCopilotCall(authResult.userId)
return result
})
return server
}
async function handleMcpRequestWithSdk(
request: NextRequest,
parsedBody: unknown
): Promise<NextResponse> {
const server = buildMcpServer()
const transport = new StreamableHTTPServerTransport({
sessionIdGenerator: undefined,
enableJsonResponse: true,
})
const responseCapture = new NextResponseCapture()
const requestAdapter = {
method: request.method,
headers: normalizeRequestHeaders(request),
}
await server.connect(transport)
try {
await transport.handleRequest(requestAdapter as any, responseCapture as any, parsedBody)
await responseCapture.waitForHeaders()
await responseCapture.waitForEnd()
return responseCapture.toNextResponse()
} finally {
await server.close().catch(() => {})
await transport.close().catch(() => {})
}
}
export async function GET() {
// Return 405 to signal that server-initiated SSE notifications are not
// supported. Without this, clients like mcp-remote will repeatedly
// reconnect trying to open an SSE stream, flooding the logs with GETs.
return new NextResponse(null, { status: 405 })
}
export async function POST(request: NextRequest) {
try {
let parsedBody: unknown
try {
parsedBody = await request.json()
} catch {
return NextResponse.json(createError(0, ErrorCode.ParseError, 'Invalid JSON body'), {
status: 400,
})
}
return await handleMcpRequestWithSdk(request, parsedBody)
} catch (error) {
logger.error('Error handling MCP request', { error })
return NextResponse.json(createError(0, ErrorCode.InternalError, 'Internal error'), {
status: 500,
})
}
}
export async function DELETE(request: NextRequest) {
void request
return NextResponse.json(createError(0, -32000, 'Method not allowed.'), { status: 405 })
}
/**
* Increment MCP copilot call counter in userStats (fire-and-forget).
*/
function trackMcpCopilotCall(userId: string): void {
db.update(userStats)
.set({
totalMcpCopilotCalls: sql`total_mcp_copilot_calls + 1`,
lastActive: new Date(),
})
.where(eq(userStats.userId, userId))
.then(() => {})
.catch((error) => {
logger.error('Failed to track MCP copilot call', { error, userId })
})
}
async function handleToolsCall(
params: { name: string; arguments?: Record<string, unknown> },
userId: string
): Promise<CallToolResult> {
const args = params.arguments || {}
const directTool = DIRECT_TOOL_DEFS.find((tool) => tool.name === params.name)
if (directTool) {
return handleDirectToolCall(directTool, args, userId)
}
const subagentTool = SUBAGENT_TOOL_DEFS.find((tool) => tool.name === params.name)
if (subagentTool) {
return handleSubagentToolCall(subagentTool, args, userId)
}
throw new McpError(ErrorCode.MethodNotFound, `Tool not found: ${params.name}`)
}
async function handleDirectToolCall(
toolDef: (typeof DIRECT_TOOL_DEFS)[number],
args: Record<string, unknown>,
userId: string
): Promise<CallToolResult> {
try {
const execContext = await prepareExecutionContext(userId, (args.workflowId as string) || '')
const toolCall = {
id: randomUUID(),
name: toolDef.toolId,
status: 'pending' as const,
params: args as Record<string, any>,
startTime: Date.now(),
}
const result = await executeToolServerSide(toolCall, execContext)
return {
content: [
{
type: 'text',
text: JSON.stringify(result.output ?? result, null, 2),
},
],
isError: !result.success,
}
} catch (error) {
logger.error('Direct tool execution failed', { tool: toolDef.name, error })
return {
content: [
{
type: 'text',
text: `Tool execution failed: ${error instanceof Error ? error.message : String(error)}`,
},
],
isError: true,
}
}
}
/**
* Build mode uses the main chat orchestrator with the 'fast' command instead of
* the subagent endpoint. In Go, 'build' is not a registered subagent — it's a mode
* (ModeFast) on the main chat processor that bypasses subagent orchestration and
* executes all tools directly.
*/
async function handleBuildToolCall(
args: Record<string, unknown>,
userId: string
): Promise<CallToolResult> {
try {
const requestText = (args.request as string) || JSON.stringify(args)
const { model } = getCopilotModel('chat')
const workflowId = args.workflowId as string | undefined
const resolved = workflowId ? { workflowId } : await resolveWorkflowIdForUser(userId)
if (!resolved?.workflowId) {
return {
content: [
{
type: 'text',
text: JSON.stringify(
{
success: false,
error: 'workflowId is required for build. Call create_workflow first.',
},
null,
2
),
},
],
isError: true,
}
}
const chatId = randomUUID()
const requestPayload = {
message: requestText,
workflowId: resolved.workflowId,
userId,
model,
mode: 'agent',
commands: ['fast'],
messageId: randomUUID(),
version: SIM_AGENT_VERSION,
headless: true,
chatId,
source: 'mcp',
}
const result = await orchestrateCopilotStream(requestPayload, {
userId,
workflowId: resolved.workflowId,
chatId,
autoExecuteTools: true,
timeout: 300000,
interactive: false,
})
const responseData = {
success: result.success,
content: result.content,
toolCalls: result.toolCalls,
error: result.error,
}
return {
content: [{ type: 'text', text: JSON.stringify(responseData, null, 2) }],
isError: !result.success,
}
} catch (error) {
logger.error('Build tool call failed', { error })
return {
content: [
{
type: 'text',
text: `Build failed: ${error instanceof Error ? error.message : String(error)}`,
},
],
isError: true,
}
}
}
async function handleSubagentToolCall(
toolDef: (typeof SUBAGENT_TOOL_DEFS)[number],
args: Record<string, unknown>,
userId: string
): Promise<CallToolResult> {
if (toolDef.agentId === 'build') {
return handleBuildToolCall(args, userId)
}
try {
const requestText =
(args.request as string) ||
(args.message as string) ||
(args.error as string) ||
JSON.stringify(args)
const context = (args.context as Record<string, unknown>) || {}
if (args.plan && !context.plan) {
context.plan = args.plan
}
const { model } = getCopilotModel('chat')
const result = await orchestrateSubagentStream(
toolDef.agentId,
{
message: requestText,
workflowId: args.workflowId,
workspaceId: args.workspaceId,
context,
model,
headless: true,
source: 'mcp',
},
{
userId,
workflowId: args.workflowId as string | undefined,
workspaceId: args.workspaceId as string | undefined,
}
)
let responseData: unknown
if (result.structuredResult) {
responseData = {
success: result.structuredResult.success ?? result.success,
type: result.structuredResult.type,
summary: result.structuredResult.summary,
data: result.structuredResult.data,
}
} else if (result.error) {
responseData = {
success: false,
error: result.error,
errors: result.errors,
}
} else {
responseData = {
success: result.success,
content: result.content,
}
}
return {
content: [
{
type: 'text',
text: JSON.stringify(responseData, null, 2),
},
],
isError: !result.success,
}
} catch (error) {
logger.error('Subagent tool call failed', {
tool: toolDef.name,
agentId: toolDef.agentId,
error,
})
return {
content: [
{
type: 'text',
text: `Subagent call failed: ${error instanceof Error ? error.message : String(error)}`,
},
],
isError: true,
}
}
}
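For orientation, a sketch of what a raw JSON-RPC call against the MCP route removed above would have looked like. The endpoint URL is a placeholder; the x-api-key header, the tools/call method, and the JSON response mode come from the handler:

```typescript
// Illustrative client call against the removed copilot MCP route.
async function callCopilotTool(apiKey: string, name: string, args: Record<string, unknown>) {
  const res = await fetch('https://sim.example.com/api/copilot/mcp', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Streamable HTTP transports typically expect both content types to be acceptable.
      Accept: 'application/json, text/event-stream',
      'x-api-key': apiKey, // validated via the Go copilot service's /api/validate-key
    },
    body: JSON.stringify({
      jsonrpc: '2.0',
      id: 1,
      method: 'tools/call',
      params: { name, arguments: args },
    }),
  })
  // enableJsonResponse: true means the transport answers with a plain JSON body.
  return res.json()
}
```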

View File

@@ -21,7 +21,6 @@ import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { generateInternalToken } from '@/lib/auth/internal'
import { getMaxExecutionTimeout } from '@/lib/core/execution-limits'
import { getBaseUrl } from '@/lib/core/utils/urls'
const logger = createLogger('WorkflowMcpServeAPI')
@@ -265,7 +264,7 @@ async function handleToolsCall(
method: 'POST',
headers,
body: JSON.stringify({ input: params.arguments || {}, triggerType: 'mcp' }),
signal: AbortSignal.timeout(getMaxExecutionTimeout()),
signal: AbortSignal.timeout(600000), // 10 minute timeout
})
const executeResult = await response.json()

View File

@@ -1,8 +1,5 @@
import { createLogger } from '@sim/logger'
import type { NextRequest } from 'next/server'
import { getHighestPrioritySubscription } from '@/lib/billing/core/plan'
import { getExecutionTimeout } from '@/lib/core/execution-limits'
import type { SubscriptionPlan } from '@/lib/core/rate-limiter/types'
import { getParsedBody, withMcpAuth } from '@/lib/mcp/middleware'
import { mcpService } from '@/lib/mcp/service'
import type { McpTool, McpToolCall, McpToolResult } from '@/lib/mcp/types'
@@ -10,6 +7,7 @@ import {
categorizeError,
createMcpErrorResponse,
createMcpSuccessResponse,
MCP_CONSTANTS,
validateStringParam,
} from '@/lib/mcp/utils'
@@ -173,16 +171,13 @@ export const POST = withMcpAuth('read')(
arguments: args,
}
const userSubscription = await getHighestPrioritySubscription(userId)
const executionTimeout = getExecutionTimeout(
userSubscription?.plan as SubscriptionPlan | undefined,
'sync'
)
const result = await Promise.race([
mcpService.executeTool(userId, serverId, toolCall, workspaceId),
new Promise<never>((_, reject) =>
setTimeout(() => reject(new Error('Tool execution timeout')), executionTimeout)
setTimeout(
() => reject(new Error('Tool execution timeout')),
MCP_CONSTANTS.EXECUTION_TIMEOUT
)
),
])
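The Promise.race construction above is the usual timeout-wrapper idiom. A generic sketch of it (not part of this PR), with the caveat that the losing promise keeps running and the timer is never cleared:

```typescript
// Generic form of the timeout pattern used above (illustrative only).
function withTimeout<T>(work: Promise<T>, ms: number, label = 'operation'): Promise<T> {
  return Promise.race([
    work,
    new Promise<never>((_, reject) =>
      setTimeout(() => reject(new Error(`${label} timeout`)), ms)
    ),
  ])
}
```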

View File

@@ -24,7 +24,6 @@ const configSchema = z.object({
hideFilesTab: z.boolean().optional(),
disableMcpTools: z.boolean().optional(),
disableCustomTools: z.boolean().optional(),
disableSkills: z.boolean().optional(),
hideTemplates: z.boolean().optional(),
disableInvitations: z.boolean().optional(),
hideDeployApi: z.boolean().optional(),

View File

@@ -25,7 +25,6 @@ const configSchema = z.object({
hideFilesTab: z.boolean().optional(),
disableMcpTools: z.boolean().optional(),
disableCustomTools: z.boolean().optional(),
disableSkills: z.boolean().optional(),
hideTemplates: z.boolean().optional(),
disableInvitations: z.boolean().optional(),
hideDeployApi: z.boolean().optional(),

View File

@@ -1,9 +1,10 @@
import { db, workflowDeploymentVersion, workflowSchedule } from '@sim/db'
import { createLogger } from '@sim/logger'
import { tasks } from '@trigger.dev/sdk'
import { and, eq, isNull, lt, lte, not, or, sql } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { verifyCronAuth } from '@/lib/auth/internal'
import { getJobQueue, shouldExecuteInline } from '@/lib/core/async-jobs'
import { isTriggerDevEnabled } from '@/lib/core/config/feature-flags'
import { generateRequestId } from '@/lib/core/utils/request'
import { executeScheduleJob } from '@/background/schedule-execution'
@@ -54,67 +55,72 @@ export async function GET(request: NextRequest) {
logger.debug(`[${requestId}] Successfully queried schedules: ${dueSchedules.length} found`)
logger.info(`[${requestId}] Processing ${dueSchedules.length} due scheduled workflows`)
const jobQueue = await getJobQueue()
if (isTriggerDevEnabled) {
const triggerPromises = dueSchedules.map(async (schedule) => {
const queueTime = schedule.lastQueuedAt ?? queuedAt
const queuePromises = dueSchedules.map(async (schedule) => {
const queueTime = schedule.lastQueuedAt ?? queuedAt
try {
const payload = {
scheduleId: schedule.id,
workflowId: schedule.workflowId,
blockId: schedule.blockId || undefined,
cronExpression: schedule.cronExpression || undefined,
lastRanAt: schedule.lastRanAt?.toISOString(),
failedCount: schedule.failedCount || 0,
now: queueTime.toISOString(),
scheduledFor: schedule.nextRunAt?.toISOString(),
}
const payload = {
scheduleId: schedule.id,
workflowId: schedule.workflowId,
blockId: schedule.blockId || undefined,
cronExpression: schedule.cronExpression || undefined,
lastRanAt: schedule.lastRanAt?.toISOString(),
failedCount: schedule.failedCount || 0,
now: queueTime.toISOString(),
scheduledFor: schedule.nextRunAt?.toISOString(),
}
try {
const jobId = await jobQueue.enqueue('schedule-execution', payload, {
metadata: { workflowId: schedule.workflowId },
})
logger.info(
`[${requestId}] Queued schedule execution task ${jobId} for workflow ${schedule.workflowId}`
)
if (shouldExecuteInline()) {
void (async () => {
try {
await jobQueue.startJob(jobId)
const output = await executeScheduleJob(payload)
await jobQueue.completeJob(jobId, output)
} catch (error) {
const errorMessage = error instanceof Error ? error.message : String(error)
logger.error(
`[${requestId}] Schedule execution failed for workflow ${schedule.workflowId}`,
{ jobId, error: errorMessage }
)
try {
await jobQueue.markJobFailed(jobId, errorMessage)
} catch (markFailedError) {
logger.error(`[${requestId}] Failed to mark job as failed`, {
jobId,
error:
markFailedError instanceof Error
? markFailedError.message
: String(markFailedError),
})
}
}
})()
const handle = await tasks.trigger('schedule-execution', payload)
logger.info(
`[${requestId}] Queued schedule execution task ${handle.id} for workflow ${schedule.workflowId}`
)
return handle
} catch (error) {
logger.error(
`[${requestId}] Failed to trigger schedule execution for workflow ${schedule.workflowId}`,
error
)
return null
}
} catch (error) {
logger.error(
`[${requestId}] Failed to queue schedule execution for workflow ${schedule.workflowId}`,
error
})
await Promise.allSettled(triggerPromises)
logger.info(`[${requestId}] Queued ${dueSchedules.length} schedule executions to Trigger.dev`)
} else {
const directExecutionPromises = dueSchedules.map(async (schedule) => {
const queueTime = schedule.lastQueuedAt ?? queuedAt
const payload = {
scheduleId: schedule.id,
workflowId: schedule.workflowId,
blockId: schedule.blockId || undefined,
cronExpression: schedule.cronExpression || undefined,
lastRanAt: schedule.lastRanAt?.toISOString(),
failedCount: schedule.failedCount || 0,
now: queueTime.toISOString(),
scheduledFor: schedule.nextRunAt?.toISOString(),
}
void executeScheduleJob(payload).catch((error) => {
logger.error(
`[${requestId}] Direct schedule execution failed for workflow ${schedule.workflowId}`,
error
)
})
logger.info(
`[${requestId}] Queued direct schedule execution for workflow ${schedule.workflowId} (Trigger.dev disabled)`
)
}
})
})
await Promise.allSettled(queuePromises)
await Promise.allSettled(directExecutionPromises)
logger.info(`[${requestId}] Queued ${dueSchedules.length} schedule executions`)
logger.info(
`[${requestId}] Queued ${dueSchedules.length} direct schedule executions (Trigger.dev disabled)`
)
}
return NextResponse.json({
message: 'Scheduled workflow executions processed',

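The hunk above interleaves two variants (a Trigger.dev-triggered path and an internal job-queue path) without diff markers, which makes it hard to follow. Condensed, the job-queue variant per due schedule reads roughly as follows; this is a sketch assembled from the lines above, not a verbatim excerpt:

```typescript
// Condensed sketch of the job-queue code path in the hunk above.
const jobId = await jobQueue.enqueue('schedule-execution', payload, {
  metadata: { workflowId: payload.workflowId },
})
if (shouldExecuteInline()) {
  // No external runner configured: execute the job in-process, fire-and-forget.
  void (async () => {
    try {
      await jobQueue.startJob(jobId)
      const output = await executeScheduleJob(payload)
      await jobQueue.completeJob(jobId, output)
    } catch (error) {
      const message = error instanceof Error ? error.message : String(error)
      await jobQueue.markJobFailed(jobId, message)
    }
  })()
}
```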
View File

@@ -1,182 +0,0 @@
import { db } from '@sim/db'
import { skill } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, desc, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { upsertSkills } from '@/lib/workflows/skills/operations'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
const logger = createLogger('SkillsAPI')
const SkillSchema = z.object({
skills: z.array(
z.object({
id: z.string().optional(),
name: z
.string()
.min(1, 'Skill name is required')
.max(64)
.regex(/^[a-z0-9]+(-[a-z0-9]+)*$/, 'Name must be kebab-case (e.g. my-skill)'),
description: z.string().min(1, 'Description is required').max(1024),
content: z.string().min(1, 'Content is required').max(50000, 'Content is too large'),
})
),
workspaceId: z.string().optional(),
})
/** GET - Fetch all skills for a workspace */
export async function GET(request: NextRequest) {
const requestId = generateRequestId()
const searchParams = request.nextUrl.searchParams
const workspaceId = searchParams.get('workspaceId')
try {
const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success || !authResult.userId) {
logger.warn(`[${requestId}] Unauthorized skills access attempt`)
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const userId = authResult.userId
if (!workspaceId) {
logger.warn(`[${requestId}] Missing workspaceId`)
return NextResponse.json({ error: 'workspaceId is required' }, { status: 400 })
}
const userPermission = await getUserEntityPermissions(userId, 'workspace', workspaceId)
if (!userPermission) {
logger.warn(`[${requestId}] User ${userId} does not have access to workspace ${workspaceId}`)
return NextResponse.json({ error: 'Access denied' }, { status: 403 })
}
const result = await db
.select()
.from(skill)
.where(eq(skill.workspaceId, workspaceId))
.orderBy(desc(skill.createdAt))
return NextResponse.json({ data: result }, { status: 200 })
} catch (error) {
logger.error(`[${requestId}] Error fetching skills:`, error)
return NextResponse.json({ error: 'Failed to fetch skills' }, { status: 500 })
}
}
/** POST - Create or update skills */
export async function POST(req: NextRequest) {
const requestId = generateRequestId()
try {
const authResult = await checkSessionOrInternalAuth(req, { requireWorkflowId: false })
if (!authResult.success || !authResult.userId) {
logger.warn(`[${requestId}] Unauthorized skills update attempt`)
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const userId = authResult.userId
const body = await req.json()
try {
const { skills, workspaceId } = SkillSchema.parse(body)
if (!workspaceId) {
logger.warn(`[${requestId}] Missing workspaceId in request body`)
return NextResponse.json({ error: 'workspaceId is required' }, { status: 400 })
}
const userPermission = await getUserEntityPermissions(userId, 'workspace', workspaceId)
if (!userPermission || (userPermission !== 'admin' && userPermission !== 'write')) {
logger.warn(
`[${requestId}] User ${userId} does not have write permission for workspace ${workspaceId}`
)
return NextResponse.json({ error: 'Write permission required' }, { status: 403 })
}
const resultSkills = await upsertSkills({
skills,
workspaceId,
userId,
requestId,
})
return NextResponse.json({ success: true, data: resultSkills })
} catch (validationError) {
if (validationError instanceof z.ZodError) {
logger.warn(`[${requestId}] Invalid skills data`, {
errors: validationError.errors,
})
return NextResponse.json(
{ error: 'Invalid request data', details: validationError.errors },
{ status: 400 }
)
}
if (validationError instanceof Error && validationError.message.includes('already exists')) {
return NextResponse.json({ error: validationError.message }, { status: 409 })
}
throw validationError
}
} catch (error) {
logger.error(`[${requestId}] Error updating skills`, error)
return NextResponse.json({ error: 'Failed to update skills' }, { status: 500 })
}
}
/** DELETE - Delete a skill by ID */
export async function DELETE(request: NextRequest) {
const requestId = generateRequestId()
const searchParams = request.nextUrl.searchParams
const skillId = searchParams.get('id')
const workspaceId = searchParams.get('workspaceId')
try {
const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success || !authResult.userId) {
logger.warn(`[${requestId}] Unauthorized skill deletion attempt`)
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const userId = authResult.userId
if (!skillId) {
logger.warn(`[${requestId}] Missing skill ID for deletion`)
return NextResponse.json({ error: 'Skill ID is required' }, { status: 400 })
}
if (!workspaceId) {
logger.warn(`[${requestId}] Missing workspaceId for deletion`)
return NextResponse.json({ error: 'workspaceId is required' }, { status: 400 })
}
const userPermission = await getUserEntityPermissions(userId, 'workspace', workspaceId)
if (!userPermission || (userPermission !== 'admin' && userPermission !== 'write')) {
logger.warn(
`[${requestId}] User ${userId} does not have write permission for workspace ${workspaceId}`
)
return NextResponse.json({ error: 'Write permission required' }, { status: 403 })
}
const existingSkill = await db.select().from(skill).where(eq(skill.id, skillId)).limit(1)
if (existingSkill.length === 0) {
logger.warn(`[${requestId}] Skill not found: ${skillId}`)
return NextResponse.json({ error: 'Skill not found' }, { status: 404 })
}
if (existingSkill[0].workspaceId !== workspaceId) {
logger.warn(`[${requestId}] Skill ${skillId} does not belong to workspace ${workspaceId}`)
return NextResponse.json({ error: 'Skill not found' }, { status: 404 })
}
await db.delete(skill).where(and(eq(skill.id, skillId), eq(skill.workspaceId, workspaceId)))
logger.info(`[${requestId}] Deleted skill: ${skillId}`)
return NextResponse.json({ success: true })
} catch (error) {
logger.error(`[${requestId}] Error deleting skill:`, error)
return NextResponse.json({ error: 'Failed to delete skill' }, { status: 500 })
}
}
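For reference, illustrative calls against the skills route removed above. The '/api/skills' path is a guess; the query parameters and body shape come from the handlers:

```typescript
// Fetch all skills in a workspace (GET handler above requires workspaceId).
async function listSkills(workspaceId: string) {
  const res = await fetch(`/api/skills?workspaceId=${encodeURIComponent(workspaceId)}`)
  if (!res.ok) throw new Error(`Failed to fetch skills: ${res.status}`)
  return (await res.json()).data
}

// Create or update a single skill (POST handler above).
async function upsertSkill(workspaceId: string, name: string, description: string, content: string) {
  // Per the zod schema: name must be kebab-case, description <= 1024 chars, content <= 50000 chars.
  const res = await fetch('/api/skills', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ workspaceId, skills: [{ name, description, content }] }),
  })
  if (!res.ok) throw new Error(`Failed to save skill: ${res.status}`)
  return res.json()
}
```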

View File

@@ -0,0 +1,101 @@
import { db } from '@sim/db'
import { templates } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { generateRequestId } from '@/lib/core/utils/request'
import { verifyEffectiveSuperUser } from '@/lib/templates/permissions'
const logger = createLogger('TemplateApprovalAPI')
export const revalidate = 0
/**
* POST /api/templates/[id]/approve - Approve a template (super users only)
*/
export async function POST(request: NextRequest, { params }: { params: Promise<{ id: string }> }) {
const requestId = generateRequestId()
const { id } = await params
try {
const session = await getSession()
if (!session?.user?.id) {
logger.warn(`[${requestId}] Unauthorized template approval attempt for ID: ${id}`)
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { effectiveSuperUser } = await verifyEffectiveSuperUser(session.user.id)
if (!effectiveSuperUser) {
logger.warn(`[${requestId}] Non-super user attempted to approve template: ${id}`)
return NextResponse.json({ error: 'Only super users can approve templates' }, { status: 403 })
}
const existingTemplate = await db.select().from(templates).where(eq(templates.id, id)).limit(1)
if (existingTemplate.length === 0) {
logger.warn(`[${requestId}] Template not found for approval: ${id}`)
return NextResponse.json({ error: 'Template not found' }, { status: 404 })
}
await db
.update(templates)
.set({ status: 'approved', updatedAt: new Date() })
.where(eq(templates.id, id))
logger.info(`[${requestId}] Template approved: ${id} by super user: ${session.user.id}`)
return NextResponse.json({
message: 'Template approved successfully',
templateId: id,
})
} catch (error) {
logger.error(`[${requestId}] Error approving template ${id}`, error)
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
/**
* DELETE /api/templates/[id]/approve - Unapprove a template (super users only)
*/
export async function DELETE(
_request: NextRequest,
{ params }: { params: Promise<{ id: string }> }
) {
const requestId = generateRequestId()
const { id } = await params
try {
const session = await getSession()
if (!session?.user?.id) {
logger.warn(`[${requestId}] Unauthorized template rejection attempt for ID: ${id}`)
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { effectiveSuperUser } = await verifyEffectiveSuperUser(session.user.id)
if (!effectiveSuperUser) {
logger.warn(`[${requestId}] Non-super user attempted to reject template: ${id}`)
return NextResponse.json({ error: 'Only super users can reject templates' }, { status: 403 })
}
const existingTemplate = await db.select().from(templates).where(eq(templates.id, id)).limit(1)
if (existingTemplate.length === 0) {
logger.warn(`[${requestId}] Template not found for rejection: ${id}`)
return NextResponse.json({ error: 'Template not found' }, { status: 404 })
}
await db
.update(templates)
.set({ status: 'rejected', updatedAt: new Date() })
.where(eq(templates.id, id))
logger.info(`[${requestId}] Template rejected: ${id} by super user: ${session.user.id}`)
return NextResponse.json({
message: 'Template rejected successfully',
templateId: id,
})
} catch (error) {
logger.error(`[${requestId}] Error rejecting template ${id}`, error)
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
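A quick usage sketch for this approval route. The path comes from the JSDoc above; note that DELETE sets the status to 'rejected', matching the handler rather than the "unapprove" wording:

```typescript
// Super-user only; relies on the caller's session cookie (the route uses getSession()).
async function setTemplateApproval(templateId: string, approved: boolean): Promise<void> {
  const res = await fetch(`/api/templates/${templateId}/approve`, {
    method: approved ? 'POST' : 'DELETE', // DELETE sets status to 'rejected' per the handler
  })
  if (!res.ok) throw new Error(`Approval update failed: ${res.status}`)
}
```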

View File

@@ -0,0 +1,55 @@
import { db } from '@sim/db'
import { templates } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { generateRequestId } from '@/lib/core/utils/request'
import { verifyEffectiveSuperUser } from '@/lib/templates/permissions'
const logger = createLogger('TemplateRejectionAPI')
export const revalidate = 0
/**
* POST /api/templates/[id]/reject - Reject a template (super users only)
*/
export async function POST(request: NextRequest, { params }: { params: Promise<{ id: string }> }) {
const requestId = generateRequestId()
const { id } = await params
try {
const session = await getSession()
if (!session?.user?.id) {
logger.warn(`[${requestId}] Unauthorized template rejection attempt for ID: ${id}`)
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { effectiveSuperUser } = await verifyEffectiveSuperUser(session.user.id)
if (!effectiveSuperUser) {
logger.warn(`[${requestId}] Non-super user attempted to reject template: ${id}`)
return NextResponse.json({ error: 'Only super users can reject templates' }, { status: 403 })
}
const existingTemplate = await db.select().from(templates).where(eq(templates.id, id)).limit(1)
if (existingTemplate.length === 0) {
logger.warn(`[${requestId}] Template not found for rejection: ${id}`)
return NextResponse.json({ error: 'Template not found' }, { status: 404 })
}
await db
.update(templates)
.set({ status: 'rejected', updatedAt: new Date() })
.where(eq(templates.id, id))
logger.info(`[${requestId}] Template rejected: ${id} by super user: ${session.user.id}`)
return NextResponse.json({
message: 'Template rejected successfully',
templateId: id,
})
} catch (error) {
logger.error(`[${requestId}] Error rejecting template ${id}`, error)
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}

View File

@@ -106,7 +106,6 @@ const updateTemplateSchema = z.object({
creatorId: z.string().optional(), // Creator profile ID
tags: z.array(z.string()).max(10, 'Maximum 10 tags allowed').optional(),
updateState: z.boolean().optional(), // Explicitly request state update from current workflow
status: z.enum(['approved', 'rejected', 'pending']).optional(), // Status change (super users only)
})
// PUT /api/templates/[id] - Update a template
@@ -132,7 +131,7 @@ export async function PUT(request: NextRequest, { params }: { params: Promise<{
)
}
const { name, details, creatorId, tags, updateState, status } = validationResult.data
const { name, details, creatorId, tags, updateState } = validationResult.data
const existingTemplate = await db.select().from(templates).where(eq(templates.id, id)).limit(1)
@@ -143,44 +142,21 @@ export async function PUT(request: NextRequest, { params }: { params: Promise<{
const template = existingTemplate[0]
// Status changes require super user permission
if (status !== undefined) {
const { verifyEffectiveSuperUser } = await import('@/lib/templates/permissions')
const { effectiveSuperUser } = await verifyEffectiveSuperUser(session.user.id)
if (!effectiveSuperUser) {
logger.warn(`[${requestId}] Non-super user attempted to change template status: ${id}`)
return NextResponse.json(
{ error: 'Only super users can change template status' },
{ status: 403 }
)
}
if (!template.creatorId) {
logger.warn(`[${requestId}] Template ${id} has no creator, denying update`)
return NextResponse.json({ error: 'Access denied' }, { status: 403 })
}
// For non-status updates, verify creator permission
const hasNonStatusUpdates =
name !== undefined ||
details !== undefined ||
creatorId !== undefined ||
tags !== undefined ||
updateState
const { verifyCreatorPermission } = await import('@/lib/templates/permissions')
const { hasPermission, error: permissionError } = await verifyCreatorPermission(
session.user.id,
template.creatorId,
'admin'
)
if (hasNonStatusUpdates) {
if (!template.creatorId) {
logger.warn(`[${requestId}] Template ${id} has no creator, denying update`)
return NextResponse.json({ error: 'Access denied' }, { status: 403 })
}
const { verifyCreatorPermission } = await import('@/lib/templates/permissions')
const { hasPermission, error: permissionError } = await verifyCreatorPermission(
session.user.id,
template.creatorId,
'admin'
)
if (!hasPermission) {
logger.warn(`[${requestId}] User denied permission to update template ${id}`)
return NextResponse.json({ error: permissionError || 'Access denied' }, { status: 403 })
}
if (!hasPermission) {
logger.warn(`[${requestId}] User denied permission to update template ${id}`)
return NextResponse.json({ error: permissionError || 'Access denied' }, { status: 403 })
}
const updateData: any = {
@@ -191,7 +167,6 @@ export async function PUT(request: NextRequest, { params }: { params: Promise<{
if (details !== undefined) updateData.details = details
if (tags !== undefined) updateData.tags = tags
if (creatorId !== undefined) updateData.creatorId = creatorId
if (status !== undefined) updateData.status = status
if (updateState && template.workflowId) {
const { verifyWorkflowAccess } = await import('@/socket/middleware/permissions')

View File

@@ -4,7 +4,6 @@ import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { createA2AClient, extractTextContent, isTerminalState } from '@/lib/a2a/utils'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { validateUrlWithDNS } from '@/lib/core/security/input-validation.server'
import { generateRequestId } from '@/lib/core/utils/request'
export const dynamic = 'force-dynamic'
@@ -96,14 +95,6 @@ export async function POST(request: NextRequest) {
if (validatedData.files && validatedData.files.length > 0) {
for (const file of validatedData.files) {
if (file.type === 'url') {
const urlValidation = await validateUrlWithDNS(file.data, 'fileUrl')
if (!urlValidation.isValid) {
return NextResponse.json(
{ success: false, error: urlValidation.error },
{ status: 400 }
)
}
const filePart: FilePart = {
kind: 'file',
file: {

Some files were not shown because too many files have changed in this diff.