Compare commits


23 Commits

Author SHA1 Message Date
Siddharth Ganesan
3b2b411a6a Improvement 2026-01-31 16:00:53 -08:00
Siddharth Ganesan
034819a7c1 mcp v1 2026-01-31 12:26:06 -08:00
Siddharth Ganesan
3d57125993 Add mcp 2026-01-31 11:38:26 -08:00
Siddharth Ganesan
c9e182216e Stuff 2026-01-30 17:01:15 -08:00
Siddharth Ganesan
f00c710d58 Ss tests 2026-01-30 16:52:23 -08:00
Siddharth Ganesan
e37c33eb9f Basic ss tes 2026-01-30 16:24:27 -08:00
Siddharth Ganesan
eedcde0ce1 v1 2026-01-30 12:15:41 -08:00
Siddharth Ganesan
deccca0276 v0 2026-01-30 11:33:08 -08:00
Siddharth Ganesan
5add92a613 Use b64 2026-01-29 18:10:47 -08:00
Siddharth Ganesan
4ab3e23cf7 Works 2026-01-29 17:35:34 -08:00
Siddharth Ganesan
aa893d56d8 Fix media 2026-01-29 17:20:38 -08:00
Siddharth Ganesan
599ffb77e6 v1 2026-01-29 17:19:29 -08:00
Siddharth Ganesan
86c3b82339 Add anvanced mode to messages 2026-01-29 13:19:48 -08:00
Siddharth Ganesan
d44c75f486 Add toggle, haven't tested 2026-01-29 13:17:27 -08:00
Siddharth Ganesan
2b026ded16 fix(copilot): hosted api key validation + credential validation (#3000)
* Fix

* Fix greptile

* Fix validation

* Fix comments

* Lint

* Fix

* remove passed in workspace id ref

* Fix comments

---------

Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
2026-01-29 10:48:59 -08:00
Siddharth Ganesan
dca0758054 fix(executor): conditional deactivation for loops/parallels (#3069)
* Fix deactivation

* Remove comments
2026-01-29 10:43:30 -08:00
Waleed
ae17c90bdf chore(readme): update readme.md (#3066) 2026-01-28 23:51:34 -08:00
Waleed
1256a15266 fix(posthog): move session recording proxy to middleware for large payload support (#3065)
Next.js rewrites can strip request bodies for large payloads (1MB+),
causing 400 errors from CloudFront. PostHog session recordings require
up to 64MB per message. Moving the proxy to middleware ensures proper
body passthrough.

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-28 23:49:57 -08:00
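The middleware approach described in the commit above can be sketched as a pure path-mapping helper. This is a hypothetical sketch, not the PR's actual code: the `/ingest` prefix, the `toPostHogUrl` name, and the PostHog ingestion host are all assumptions; only the mechanism (rewriting in `middleware.ts` so request bodies pass through intact) comes from the commit message.

```typescript
// Hypothetical sketch: map an incoming /ingest/* path to the PostHog ingestion host.
// In middleware.ts this would back a NextResponse.rewrite() call, which forwards the
// request body intact (unlike next.config.js rewrites for very large payloads).

const POSTHOG_HOST = 'https://us.i.posthog.com' // assumed ingestion host

export function toPostHogUrl(pathname: string, search = ''): string {
  // Strip the local /ingest prefix and graft the remainder onto the PostHog host
  const upstreamPath = pathname.replace(/^\/ingest/, '') || '/'
  return `${POSTHOG_HOST}${upstreamPath}${search}`
}

// In middleware.ts (sketch):
// export function middleware(req: NextRequest) {
//   return NextResponse.rewrite(new URL(toPostHogUrl(req.nextUrl.pathname, req.nextUrl.search)))
// }
// export const config = { matcher: '/ingest/:path*' }
```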
Waleed
0b2b7ed9c8 fix(oauth): use createElement for icon components to fix React hooks error (#3064) 2026-01-28 23:37:00 -08:00
Vikhyath Mondreti
0d8d9fb238 fix(type): logs workspace delivery (#3063) 2026-01-28 21:54:20 -08:00
Vikhyath Mondreti
e0f1e66f4f feat(child-workflows): nested execution snapshots (#3059)
* feat(child-workflows): nested execution snapshots

* cleanup typing

* address bugbot comments and fix tests

* do not cascade delete logs/snapshots

* fix few more inconsitencies

* fix external logs route

* add fallback color
2026-01-28 19:40:52 -08:00
Emir Karabeg
20bb7cdec6 improvement(preview): include current workflow badge in breadcrumb in workflow snapshot (#3062)
* feat(preview): add workflow context badge for nested navigation

Adds a badge next to the Back button when viewing nested workflows
to help users identify which workflow they are currently viewing.
This is especially helpful when navigating deeply into nested
workflow blocks.

Changes:
- Added workflowName field to WorkflowStackEntry interface
- Capture workflow name from metadata when drilling down
- Display workflow name badge next to Back button

Co-authored-by: emir <emir@simstudio.ai>

* added workflow name and desc to metadata for workflow preview

* added copy and search icon in code in preview editor

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: waleed <walif6@gmail.com>
2026-01-28 19:33:19 -08:00
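The breadcrumb change above can be illustrated with a small stack sketch. The `WorkflowStackEntry` and `workflowName` names mirror the commit message, but the class below is illustrative only; the real component code is not shown in this diff.

```typescript
// Hypothetical sketch of the stack bookkeeping behind the breadcrumb badge:
// each drill-down into a nested workflow pushes an entry whose name (captured
// from metadata) is shown in a badge next to the Back button.
interface WorkflowStackEntry {
  workflowId: string
  workflowName?: string // captured from workflow metadata when drilling down
}

class WorkflowStack {
  private entries: WorkflowStackEntry[] = []

  drillDown(workflowId: string, workflowName?: string): void {
    this.entries.push({ workflowId, workflowName })
  }

  back(): WorkflowStackEntry | undefined {
    return this.entries.pop()
  }

  // Name shown in the badge for the workflow currently being viewed, if any
  currentBadge(): string | undefined {
    return this.entries[this.entries.length - 1]?.workflowName
  }
}
```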
Waleed
1469e9c66c feat(youtube): add captions, trending, and video categories tools with enhanced API coverage (#3060)
* feat(youtube): add captions, trending, and video categories tools with enhanced API coverage

* fix(youtube): remove captions tool (requires OAuth), fix tinybird defaults, encode pageToken
2026-01-28 19:08:33 -08:00
105 changed files with 19018 additions and 1225 deletions

View File

@@ -172,31 +172,6 @@ Key environment variables for self-hosted deployments. See [`.env.example`](apps
 | `API_ENCRYPTION_KEY` | Yes | Encrypts API keys (`openssl rand -hex 32`) |
 | `COPILOT_API_KEY` | No | API key from sim.ai for Copilot features |
-## Troubleshooting
-### Ollama models not showing in dropdown (Docker)
-If you're running Ollama on your host machine and Sim in Docker, change `OLLAMA_URL` from `localhost` to `host.docker.internal`:
-```bash
-OLLAMA_URL=http://host.docker.internal:11434 docker compose -f docker-compose.prod.yml up -d
-```
-See [Using an External Ollama Instance](#using-an-external-ollama-instance) for details.
-### Database connection issues
-Ensure PostgreSQL has the pgvector extension installed. When using Docker, wait for the database to be healthy before running migrations.
-### Port conflicts
-If ports 3000, 3002, or 5432 are in use, configure alternatives:
-```bash
-# Custom ports
-NEXT_PUBLIC_APP_URL=http://localhost:3100 POSTGRES_PORT=5433 docker compose up -d
-```
 ## Tech Stack
 - **Framework**: [Next.js](https://nextjs.org/) (App Router)

View File

@@ -26,78 +26,41 @@ In Sim, the YouTube integration enables your agents to programmatically search a
 ## Usage Instructions
-Integrate YouTube into the workflow. Can search for videos, get video details, get channel information, get all videos from a channel, get channel playlists, get playlist items, find related videos, and get video comments.
+Integrate YouTube into the workflow. Can search for videos, get trending videos, get video details, get video captions, get video categories, get channel information, get all videos from a channel, get channel playlists, get playlist items, and get video comments.
 ## Tools
-### `youtube_search`
-Search for videos on YouTube using the YouTube Data API. Supports advanced filtering by channel, date range, duration, category, quality, captions, and more.
+### `youtube_captions`
+List available caption tracks (subtitles/transcripts) for a YouTube video. Returns information about each caption including language, type, and whether it is auto-generated.
 #### Input
 | Parameter | Type | Required | Description |
 | --------- | ---- | -------- | ----------- |
-| `query` | string | Yes | Search query for YouTube videos |
-| `maxResults` | number | No | Maximum number of videos to return \(1-50\) |
-| `apiKey` | string | Yes | YouTube API Key |
-| `channelId` | string | No | Filter results to a specific YouTube channel ID |
-| `publishedAfter` | string | No | Only return videos published after this date \(RFC 3339 format: "2024-01-01T00:00:00Z"\) |
-| `publishedBefore` | string | No | Only return videos published before this date \(RFC 3339 format: "2024-01-01T00:00:00Z"\) |
-| `videoDuration` | string | No | Filter by video length: "short" \(&lt;4 min\), "medium" \(4-20 min\), "long" \(&gt;20 min\), "any" |
-| `order` | string | No | Sort results by: "date", "rating", "relevance" \(default\), "title", "videoCount", "viewCount" |
-| `videoCategoryId` | string | No | Filter by YouTube category ID \(e.g., "10" for Music, "20" for Gaming\) |
-| `videoDefinition` | string | No | Filter by video quality: "high" \(HD\), "standard", "any" |
-| `videoCaption` | string | No | Filter by caption availability: "closedCaption" \(has captions\), "none" \(no captions\), "any" |
-| `regionCode` | string | No | Return results relevant to a specific region \(ISO 3166-1 alpha-2 country code, e.g., "US", "GB"\) |
-| `relevanceLanguage` | string | No | Return results most relevant to a language \(ISO 639-1 code, e.g., "en", "es"\) |
-| `safeSearch` | string | No | Content filtering level: "moderate" \(default\), "none", "strict" |
-#### Output
-| Parameter | Type | Description |
-| --------- | ---- | ----------- |
-| `items` | array | Array of YouTube videos matching the search query |
-| ↳ `videoId` | string | YouTube video ID |
-| ↳ `title` | string | Video title |
-| ↳ `description` | string | Video description |
-| ↳ `thumbnail` | string | Video thumbnail URL |
-| `totalResults` | number | Total number of search results available |
-| `nextPageToken` | string | Token for accessing the next page of results |
-### `youtube_video_details`
-Get detailed information about a specific YouTube video.
-#### Input
-| Parameter | Type | Required | Description |
-| --------- | ---- | -------- | ----------- |
-| `videoId` | string | Yes | YouTube video ID |
+| `videoId` | string | Yes | YouTube video ID to get captions for |
 | `apiKey` | string | Yes | YouTube API Key |
 #### Output
 | Parameter | Type | Description |
 | --------- | ---- | ----------- |
-| `videoId` | string | YouTube video ID |
-| `title` | string | Video title |
-| `description` | string | Video description |
-| `channelId` | string | Channel ID |
-| `channelTitle` | string | Channel name |
-| `publishedAt` | string | Published date and time |
-| `duration` | string | Video duration in ISO 8601 format |
-| `viewCount` | number | Number of views |
-| `likeCount` | number | Number of likes |
-| `commentCount` | number | Number of comments |
-| `thumbnail` | string | Video thumbnail URL |
-| `tags` | array | Video tags |
+| `items` | array | Array of available caption tracks for the video |
+| ↳ `captionId` | string | Caption track ID |
+| ↳ `language` | string | Language code of the caption \(e.g., |
+| ↳ `name` | string | Name/label of the caption track |
+| ↳ `trackKind` | string | Type of caption track: |
+| ↳ `lastUpdated` | string | When the caption was last updated |
+| ↳ `isCC` | boolean | Whether this is a closed caption track |
+| ↳ `isAutoSynced` | boolean | Whether the caption timing was automatically synced |
+| ↳ `audioTrackType` | string | Type of audio track this caption is for |
+| `totalResults` | number | Total number of caption tracks available |
 ### `youtube_channel_info`
-Get detailed information about a YouTube channel.
+Get detailed information about a YouTube channel including statistics, branding, and content details.
 #### Input
@@ -114,43 +77,20 @@ Get detailed information about a YouTube channel.
 | `channelId` | string | YouTube channel ID |
 | `title` | string | Channel name |
 | `description` | string | Channel description |
-| `subscriberCount` | number | Number of subscribers |
+| `subscriberCount` | number | Number of subscribers \(0 if hidden\) |
-| `videoCount` | number | Number of videos |
+| `videoCount` | number | Number of public videos |
 | `viewCount` | number | Total channel views |
 | `publishedAt` | string | Channel creation date |
-| `thumbnail` | string | Channel thumbnail URL |
+| `thumbnail` | string | Channel thumbnail/avatar URL |
-| `customUrl` | string | Channel custom URL |
+| `customUrl` | string | Channel custom URL \(handle\) |
+| `country` | string | Country the channel is associated with |
+| `uploadsPlaylistId` | string | Playlist ID containing all channel uploads \(use with playlist_items\) |
+| `bannerImageUrl` | string | Channel banner image URL |
+| `hiddenSubscriberCount` | boolean | Whether the subscriber count is hidden |
-### `youtube_channel_videos`
-Get all videos from a specific YouTube channel, with sorting options.
-#### Input
-| Parameter | Type | Required | Description |
-| --------- | ---- | -------- | ----------- |
-| `channelId` | string | Yes | YouTube channel ID to get videos from |
-| `maxResults` | number | No | Maximum number of videos to return \(1-50\) |
-| `order` | string | No | Sort order: "date" \(newest first\), "rating", "relevance", "title", "viewCount" |
-| `pageToken` | string | No | Page token for pagination |
-| `apiKey` | string | Yes | YouTube API Key |
-#### Output
-| Parameter | Type | Description |
-| --------- | ---- | ----------- |
-| `items` | array | Array of videos from the channel |
-| ↳ `videoId` | string | YouTube video ID |
-| ↳ `title` | string | Video title |
-| ↳ `description` | string | Video description |
-| ↳ `thumbnail` | string | Video thumbnail URL |
-| ↳ `publishedAt` | string | Video publish date |
-| `totalResults` | number | Total number of videos in the channel |
-| `nextPageToken` | string | Token for accessing the next page of results |
 ### `youtube_channel_playlists`
-Get all playlists from a specific YouTube channel.
+Get all public playlists from a specific YouTube channel.
 #### Input
@@ -172,19 +112,80 @@ Get all playlists from a specific YouTube channel.
 | ↳ `thumbnail` | string | Playlist thumbnail URL |
 | ↳ `itemCount` | number | Number of videos in playlist |
 | ↳ `publishedAt` | string | Playlist creation date |
+| ↳ `channelTitle` | string | Channel name |
 | `totalResults` | number | Total number of playlists in the channel |
 | `nextPageToken` | string | Token for accessing the next page of results |
-### `youtube_playlist_items`
-Get videos from a YouTube playlist.
-#### Input
-| Parameter | Type | Required | Description |
-| --------- | ---- | -------- | ----------- |
-| `playlistId` | string | Yes | YouTube playlist ID |
-| `maxResults` | number | No | Maximum number of videos to return |
+### `youtube_channel_videos`
+Search for videos from a specific YouTube channel with sorting options. For complete channel video list, use channel_info to get uploadsPlaylistId, then use playlist_items.
+#### Input
+| Parameter | Type | Required | Description |
+| --------- | ---- | -------- | ----------- |
+| `channelId` | string | Yes | YouTube channel ID to get videos from |
+| `maxResults` | number | No | Maximum number of videos to return \(1-50\) |
+| `order` | string | No | Sort order: "date" \(newest first, default\), "rating", "relevance", "title", "viewCount" |
+| `pageToken` | string | No | Page token for pagination |
+| `apiKey` | string | Yes | YouTube API Key |
+#### Output
+| Parameter | Type | Description |
+| --------- | ---- | ----------- |
+| `items` | array | Array of videos from the channel |
+| ↳ `videoId` | string | YouTube video ID |
+| ↳ `title` | string | Video title |
+| ↳ `description` | string | Video description |
+| ↳ `thumbnail` | string | Video thumbnail URL |
+| ↳ `publishedAt` | string | Video publish date |
+| ↳ `channelTitle` | string | Channel name |
+| `totalResults` | number | Total number of videos in the channel |
+| `nextPageToken` | string | Token for accessing the next page of results |
+### `youtube_comments`
+Get top-level comments from a YouTube video with author details and engagement.
+#### Input
+| Parameter | Type | Required | Description |
+| --------- | ---- | -------- | ----------- |
+| `videoId` | string | Yes | YouTube video ID |
+| `maxResults` | number | No | Maximum number of comments to return \(1-100\) |
+| `order` | string | No | Order of comments: "time" \(newest first\) or "relevance" \(most relevant first\) |
+| `pageToken` | string | No | Page token for pagination |
+| `apiKey` | string | Yes | YouTube API Key |
+#### Output
+| Parameter | Type | Description |
+| --------- | ---- | ----------- |
+| `items` | array | Array of top-level comments from the video |
+| ↳ `commentId` | string | Comment ID |
+| ↳ `authorDisplayName` | string | Comment author display name |
+| ↳ `authorChannelUrl` | string | Comment author channel URL |
+| ↳ `authorProfileImageUrl` | string | Comment author profile image URL |
+| ↳ `textDisplay` | string | Comment text \(HTML formatted\) |
+| ↳ `textOriginal` | string | Comment text \(plain text\) |
+| ↳ `likeCount` | number | Number of likes on the comment |
+| ↳ `publishedAt` | string | When the comment was posted |
+| ↳ `updatedAt` | string | When the comment was last edited |
+| ↳ `replyCount` | number | Number of replies to this comment |
+| `totalResults` | number | Total number of comment threads available |
+| `nextPageToken` | string | Token for accessing the next page of results |
+### `youtube_playlist_items`
+Get videos from a YouTube playlist. Can be used with a channel uploads playlist to get all channel videos.
+#### Input
+| Parameter | Type | Required | Description |
+| --------- | ---- | -------- | ----------- |
+| `playlistId` | string | Yes | YouTube playlist ID. Use uploadsPlaylistId from channel_info to get all channel videos. |
+| `maxResults` | number | No | Maximum number of videos to return \(1-50\) |
 | `pageToken` | string | No | Page token for pagination |
 | `apiKey` | string | Yes | YouTube API Key |
@@ -198,22 +199,65 @@ Get videos from a YouTube playlist.
 | ↳ `description` | string | Video description |
 | ↳ `thumbnail` | string | Video thumbnail URL |
 | ↳ `publishedAt` | string | Date added to playlist |
-| ↳ `channelTitle` | string | Channel name |
+| ↳ `channelTitle` | string | Playlist owner channel name |
-| ↳ `position` | number | Position in playlist |
+| ↳ `position` | number | Position in playlist \(0-indexed\) |
+| ↳ `videoOwnerChannelId` | string | Channel ID of the video owner |
+| ↳ `videoOwnerChannelTitle` | string | Channel name of the video owner |
 | `totalResults` | number | Total number of items in playlist |
 | `nextPageToken` | string | Token for accessing the next page of results |
-### `youtube_comments`
-Get comments from a YouTube video.
+### `youtube_search`
+Search for videos on YouTube using the YouTube Data API. Supports advanced filtering by channel, date range, duration, category, quality, captions, live streams, and more.
 #### Input
 | Parameter | Type | Required | Description |
 | --------- | ---- | -------- | ----------- |
-| `videoId` | string | Yes | YouTube video ID |
-| `maxResults` | number | No | Maximum number of comments to return |
-| `order` | string | No | Order of comments: time or relevance |
+| `query` | string | Yes | Search query for YouTube videos |
+| `maxResults` | number | No | Maximum number of videos to return \(1-50\) |
+| `pageToken` | string | No | Page token for pagination \(use nextPageToken from previous response\) |
+| `apiKey` | string | Yes | YouTube API Key |
+| `channelId` | string | No | Filter results to a specific YouTube channel ID |
+| `publishedAfter` | string | No | Only return videos published after this date \(RFC 3339 format: "2024-01-01T00:00:00Z"\) |
+| `publishedBefore` | string | No | Only return videos published before this date \(RFC 3339 format: "2024-01-01T00:00:00Z"\) |
+| `videoDuration` | string | No | Filter by video length: "short" \(&lt;4 min\), "medium" \(4-20 min\), "long" \(&gt;20 min\), "any" |
+| `order` | string | No | Sort results by: "date", "rating", "relevance" \(default\), "title", "videoCount", "viewCount" |
+| `videoCategoryId` | string | No | Filter by YouTube category ID \(e.g., "10" for Music, "20" for Gaming\). Use video_categories to list IDs. |
+| `videoDefinition` | string | No | Filter by video quality: "high" \(HD\), "standard", "any" |
+| `videoCaption` | string | No | Filter by caption availability: "closedCaption" \(has captions\), "none" \(no captions\), "any" |
+| `eventType` | string | No | Filter by live broadcast status: "live" \(currently live\), "upcoming" \(scheduled\), "completed" \(past streams\) |
+| `regionCode` | string | No | Return results relevant to a specific region \(ISO 3166-1 alpha-2 country code, e.g., "US", "GB"\) |
+| `relevanceLanguage` | string | No | Return results most relevant to a language \(ISO 639-1 code, e.g., "en", "es"\) |
+| `safeSearch` | string | No | Content filtering level: "moderate" \(default\), "none", "strict" |
+#### Output
+| Parameter | Type | Description |
+| --------- | ---- | ----------- |
+| `items` | array | Array of YouTube videos matching the search query |
+| ↳ `videoId` | string | YouTube video ID |
+| ↳ `title` | string | Video title |
+| ↳ `description` | string | Video description |
+| ↳ `thumbnail` | string | Video thumbnail URL |
+| ↳ `channelId` | string | Channel ID that uploaded the video |
+| ↳ `channelTitle` | string | Channel name |
+| ↳ `publishedAt` | string | Video publish date |
+| ↳ `liveBroadcastContent` | string | Live broadcast status: |
+| `totalResults` | number | Total number of search results available |
+| `nextPageToken` | string | Token for accessing the next page of results |
+### `youtube_trending`
+Get the most popular/trending videos on YouTube. Can filter by region and video category.
+#### Input
+| Parameter | Type | Required | Description |
+| --------- | ---- | -------- | ----------- |
+| `regionCode` | string | No | ISO 3166-1 alpha-2 country code to get trending videos for \(e.g., "US", "GB", "JP"\). Defaults to US. |
+| `videoCategoryId` | string | No | Filter by video category ID \(e.g., "10" for Music, "20" for Gaming, "17" for Sports\) |
+| `maxResults` | number | No | Maximum number of trending videos to return \(1-50\) |
 | `pageToken` | string | No | Page token for pagination |
 | `apiKey` | string | Yes | YouTube API Key |
@@ -221,17 +265,84 @@ Get comments from a YouTube video.
 | Parameter | Type | Description |
 | --------- | ---- | ----------- |
-| `items` | array | Array of comments from the video |
-| ↳ `commentId` | string | Comment ID |
-| ↳ `authorDisplayName` | string | Comment author name |
-| ↳ `authorChannelUrl` | string | Comment author channel URL |
-| ↳ `textDisplay` | string | Comment text \(HTML formatted\) |
-| ↳ `textOriginal` | string | Comment text \(plain text\) |
+| `items` | array | Array of trending videos |
+| ↳ `videoId` | string | YouTube video ID |
+| ↳ `title` | string | Video title |
+| ↳ `description` | string | Video description |
+| ↳ `thumbnail` | string | Video thumbnail URL |
+| ↳ `channelId` | string | Channel ID |
+| ↳ `channelTitle` | string | Channel name |
+| ↳ `publishedAt` | string | Video publish date |
+| ↳ `viewCount` | number | Number of views |
 | ↳ `likeCount` | number | Number of likes |
-| ↳ `publishedAt` | string | Comment publish date |
-| ↳ `updatedAt` | string | Comment last updated date |
-| ↳ `replyCount` | number | Number of replies |
-| `totalResults` | number | Total number of comments |
+| ↳ `commentCount` | number | Number of comments |
+| ↳ `duration` | string | Video duration in ISO 8601 format |
+| `totalResults` | number | Total number of trending videos available |
 | `nextPageToken` | string | Token for accessing the next page of results |
+### `youtube_video_categories`
+Get a list of video categories available on YouTube. Use this to discover valid category IDs for filtering search and trending results.
+#### Input
+| Parameter | Type | Required | Description |
+| --------- | ---- | -------- | ----------- |
+| `regionCode` | string | No | ISO 3166-1 alpha-2 country code to get categories for \(e.g., "US", "GB", "JP"\). Defaults to US. |
+| `hl` | string | No | Language for category titles \(e.g., "en", "es", "fr"\). Defaults to English. |
+| `apiKey` | string | Yes | YouTube API Key |
+#### Output
+| Parameter | Type | Description |
+| --------- | ---- | ----------- |
+| `items` | array | Array of video categories available in the specified region |
+| ↳ `categoryId` | string | Category ID to use in search/trending filters \(e.g., |
+| ↳ `title` | string | Human-readable category name |
+| ↳ `assignable` | boolean | Whether videos can be tagged with this category |
+| `totalResults` | number | Total number of categories available |
+### `youtube_video_details`
+Get detailed information about a specific YouTube video including statistics, content details, live streaming info, and metadata.
+#### Input
+| Parameter | Type | Required | Description |
+| --------- | ---- | -------- | ----------- |
+| `videoId` | string | Yes | YouTube video ID |
+| `apiKey` | string | Yes | YouTube API Key |
+#### Output
+| Parameter | Type | Description |
+| --------- | ---- | ----------- |
+| `videoId` | string | YouTube video ID |
+| `title` | string | Video title |
+| `description` | string | Video description |
+| `channelId` | string | Channel ID |
+| `channelTitle` | string | Channel name |
+| `publishedAt` | string | Published date and time |
+| `duration` | string | Video duration in ISO 8601 format \(e.g., |
+| `viewCount` | number | Number of views |
+| `likeCount` | number | Number of likes |
+| `commentCount` | number | Number of comments |
+| `favoriteCount` | number | Number of times added to favorites |
+| `thumbnail` | string | Video thumbnail URL |
+| `tags` | array | Video tags |
+| `categoryId` | string | YouTube video category ID |
+| `definition` | string | Video definition: |
+| `caption` | string | Whether captions are available: |
+| `licensedContent` | boolean | Whether the video is licensed content |
+| `privacyStatus` | string | Video privacy status: |
+| `liveBroadcastContent` | string | Live broadcast status: |
+| `defaultLanguage` | string | Default language of the video metadata |
+| `defaultAudioLanguage` | string | Default audio language of the video |
+| `isLiveContent` | boolean | Whether this video is or was a live stream |
+| `scheduledStartTime` | string | Scheduled start time for upcoming live streams \(ISO 8601\) |
+| `actualStartTime` | string | When the live stream actually started \(ISO 8601\) |
+| `actualEndTime` | string | When the live stream ended \(ISO 8601\) |
+| `concurrentViewers` | number | Current number of viewers \(only for active live streams\) |
+| `activeLiveChatId` | string | Live chat ID for the stream \(only for active live streams\) |
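The channel_info → uploadsPlaylistId → playlist_items flow described in the docs above maps directly onto the underlying YouTube Data API v3 endpoints. Below is a minimal sketch under stated assumptions: the helper names are illustrative, not Sim's code; only the endpoint shapes (`channels.list` with `part=contentDetails` exposing `relatedPlaylists.uploads`, and `playlistItems.list` paging via `pageToken`/`nextPageToken`) come from the public API.

```typescript
// Sketch: list every upload of a channel via the YouTube Data API v3.
// channels.list(part=contentDetails) exposes relatedPlaylists.uploads — the
// "uploadsPlaylistId" surfaced by the channel_info tool — and playlistItems.list
// then pages through it 50 items at a time. Note the pageToken is URL-encoded,
// matching the fix mentioned in the commit above.
const API = 'https://www.googleapis.com/youtube/v3'

export function channelInfoUrl(channelId: string, apiKey: string): string {
  return `${API}/channels?part=contentDetails&id=${encodeURIComponent(channelId)}&key=${apiKey}`
}

export function playlistItemsUrl(playlistId: string, apiKey: string, pageToken?: string): string {
  const page = pageToken ? `&pageToken=${encodeURIComponent(pageToken)}` : ''
  return `${API}/playlistItems?part=snippet&maxResults=50&playlistId=${encodeURIComponent(playlistId)}&key=${apiKey}${page}`
}

// Usage sketch (requires network access and a real API key):
// const { items } = await (await fetch(channelInfoUrl(id, key))).json()
// const uploads = items[0].contentDetails.relatedPlaylists.uploads
// let pageToken: string | undefined
// do {
//   const page = await (await fetch(playlistItemsUrl(uploads, key, pageToken))).json()
//   pageToken = page.nextPageToken // undefined on the last page
// } while (pageToken)
```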

View File

@@ -1,13 +1,13 @@
 import { db } from '@sim/db'
-import { copilotChats } from '@sim/db/schema'
+import { copilotChats, permissions, workflow } from '@sim/db/schema'
 import { createLogger } from '@sim/logger'
-import { and, desc, eq } from 'drizzle-orm'
+import { and, asc, desc, eq, inArray, or } from 'drizzle-orm'
 import { type NextRequest, NextResponse } from 'next/server'
 import { z } from 'zod'
 import { getSession } from '@/lib/auth'
 import { generateChatTitle } from '@/lib/copilot/chat-title'
 import { getCopilotModel } from '@/lib/copilot/config'
-import { SIM_AGENT_API_URL_DEFAULT, SIM_AGENT_VERSION } from '@/lib/copilot/constants'
+import { SIM_AGENT_VERSION } from '@/lib/copilot/constants'
 import { COPILOT_MODEL_IDS, COPILOT_REQUEST_MODES } from '@/lib/copilot/models'
 import {
   authenticateCopilotRequestSessionOnly,
@@ -23,10 +23,10 @@ import { CopilotFiles } from '@/lib/uploads'
 import { createFileContent } from '@/lib/uploads/utils/file-utils'
 import { tools } from '@/tools/registry'
 import { getLatestVersionTools, stripVersionSuffix } from '@/tools/utils'
+import { orchestrateCopilotStream } from '@/lib/copilot/orchestrator'

 const logger = createLogger('CopilotChatAPI')

-const SIM_AGENT_API_URL = env.SIM_AGENT_API_URL || SIM_AGENT_API_URL_DEFAULT

 const FileAttachmentSchema = z.object({
   id: z.string(),
@@ -40,7 +40,8 @@
   message: z.string().min(1, 'Message is required'),
   userMessageId: z.string().optional(), // ID from frontend for the user message
   chatId: z.string().optional(),
-  workflowId: z.string().min(1, 'Workflow ID is required'),
+  workflowId: z.string().optional(),
+  workflowName: z.string().optional(),
   model: z.enum(COPILOT_MODEL_IDS).optional().default('claude-4.5-opus'),
   mode: z.enum(COPILOT_REQUEST_MODES).optional().default('agent'),
   prefetch: z.boolean().optional(),
@@ -78,6 +79,54 @@ const ChatMessageSchema = z.object({
   commands: z.array(z.string()).optional(),
 })

+async function resolveWorkflowId(
+  userId: string,
+  workflowId?: string,
+  workflowName?: string
+): Promise<{ workflowId: string; workflowName?: string } | null> {
+  // If workflowId provided, use it directly
+  if (workflowId) {
+    return { workflowId }
+  }
+
+  // Get user's accessible workflows
+  const workspaceIds = await db
+    .select({ entityId: permissions.entityId })
+    .from(permissions)
+    .where(and(eq(permissions.userId, userId), eq(permissions.entityType, 'workspace')))
+  const workspaceIdList = workspaceIds.map((row) => row.entityId)
+
+  const workflowConditions = [eq(workflow.userId, userId)]
+  if (workspaceIdList.length > 0) {
+    workflowConditions.push(inArray(workflow.workspaceId, workspaceIdList))
+  }
+
+  const workflows = await db
+    .select()
+    .from(workflow)
+    .where(or(...workflowConditions))
+    .orderBy(asc(workflow.sortOrder), asc(workflow.createdAt), asc(workflow.id))
+
+  if (workflows.length === 0) {
+    return null
+  }
+
+  // If workflowName provided, find matching workflow
+  if (workflowName) {
+    const match = workflows.find(
+      (w) => String(w.name || '').trim().toLowerCase() === workflowName.toLowerCase()
+    )
+    if (match) {
+      return { workflowId: match.id, workflowName: match.name || undefined }
+    }
+    return null
+  }
+
+  // Default to first workflow
+  return { workflowId: workflows[0].id, workflowName: workflows[0].name || undefined }
+}
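The resolution order that resolveWorkflowId implements can be isolated as a pure function for clarity: an explicit workflowId always wins, then a case-insensitive name match, then the user's first workflow. This is a self-contained sketch (the `pickWorkflow` name and `WorkflowRow` shape are illustrative, not from the PR); the database queries are replaced by an in-memory array.

```typescript
// Sketch of resolveWorkflowId's precedence, decoupled from the DB:
// 1) explicit workflowId, 2) case-insensitive workflowName match,
// 3) first accessible workflow; null when nothing matches.
interface WorkflowRow {
  id: string
  name?: string | null
}

export function pickWorkflow(
  workflows: WorkflowRow[],
  workflowId?: string,
  workflowName?: string
): { workflowId: string; workflowName?: string } | null {
  if (workflowId) return { workflowId }
  if (workflows.length === 0) return null
  if (workflowName) {
    const match = workflows.find(
      (w) => String(w.name || '').trim().toLowerCase() === workflowName.toLowerCase()
    )
    return match ? { workflowId: match.id, workflowName: match.name || undefined } : null
  }
  return { workflowId: workflows[0].id, workflowName: workflows[0].name || undefined }
}
```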
/** /**
* POST /api/copilot/chat * POST /api/copilot/chat
* Send messages to sim agent and handle chat persistence * Send messages to sim agent and handle chat persistence
@@ -100,7 +149,8 @@ export async function POST(req: NextRequest) {
message, message,
userMessageId, userMessageId,
chatId, chatId,
workflowId, workflowId: providedWorkflowId,
workflowName,
model, model,
mode, mode,
prefetch, prefetch,
@@ -113,6 +163,16 @@ export async function POST(req: NextRequest) {
       contexts,
       commands,
     } = ChatMessageSchema.parse(body)
+
+    // Resolve workflowId - if not provided, use first workflow or find by name
+    const resolved = await resolveWorkflowId(authenticatedUserId, providedWorkflowId, workflowName)
+    if (!resolved) {
+      return createBadRequestResponse(
+        'No workflows found. Create a workflow first or provide a valid workflowId.'
+      )
+    }
+    const workflowId = resolved.workflowId
+
     // Ensure we have a consistent user message ID for this request
     const userMessageIdToUse = userMessageId || crypto.randomUUID()
     try {
@@ -465,77 +525,19 @@ export async function POST(req: NextRequest) {
        })
      } catch {}

-     const simAgentResponse = await fetch(`${SIM_AGENT_API_URL}/api/chat-completion-streaming`, {
-       method: 'POST',
-       headers: {
-         'Content-Type': 'application/json',
-         ...(env.COPILOT_API_KEY ? { 'x-api-key': env.COPILOT_API_KEY } : {}),
-       },
-       body: JSON.stringify(requestPayload),
-     })
-
-     if (!simAgentResponse.ok) {
-       if (simAgentResponse.status === 401 || simAgentResponse.status === 402) {
-         // Rethrow status only; client will render appropriate assistant message
-         return new NextResponse(null, { status: simAgentResponse.status })
-       }
-       const errorText = await simAgentResponse.text().catch(() => '')
-       logger.error(`[${tracker.requestId}] Sim agent API error:`, {
-         status: simAgentResponse.status,
-         error: errorText,
-       })
-       return NextResponse.json(
-         { error: `Sim agent API error: ${simAgentResponse.statusText}` },
-         { status: simAgentResponse.status }
-       )
-     }
-
-     // If streaming is requested, forward the stream and update chat later
-     if (stream && simAgentResponse.body) {
-       // Create user message to save
-       const userMessage = {
-         id: userMessageIdToUse, // Consistent ID used for request and persistence
-         role: 'user',
-         content: message,
-         timestamp: new Date().toISOString(),
-         ...(fileAttachments && fileAttachments.length > 0 && { fileAttachments }),
-         ...(Array.isArray(contexts) && contexts.length > 0 && { contexts }),
-         ...(Array.isArray(contexts) &&
-           contexts.length > 0 && {
-             contentBlocks: [{ type: 'contexts', contexts: contexts as any, timestamp: Date.now() }],
-           }),
-       }
-
-       // Create a pass-through stream that captures the response
+     if (stream) {
        const transformedStream = new ReadableStream({
          async start(controller) {
            const encoder = new TextEncoder()
-           let assistantContent = ''
-           const toolCalls: any[] = []
-           let buffer = ''
-           const isFirstDone = true
-           let responseIdFromStart: string | undefined
-           let responseIdFromDone: string | undefined
-           // Track tool call progress to identify a safe done event
-           const announcedToolCallIds = new Set<string>()
-           const startedToolExecutionIds = new Set<string>()
-           const completedToolExecutionIds = new Set<string>()
-           let lastDoneResponseId: string | undefined
-           let lastSafeDoneResponseId: string | undefined
-
-           // Send chatId as first event
            if (actualChatId) {
-             const chatIdEvent = `data: ${JSON.stringify({
-               type: 'chat_id',
-               chatId: actualChatId,
-             })}\n\n`
-             controller.enqueue(encoder.encode(chatIdEvent))
-             logger.debug(`[${tracker.requestId}] Sent initial chatId event to client`)
+             controller.enqueue(
+               encoder.encode(
+                 `data: ${JSON.stringify({ type: 'chat_id', chatId: actualChatId })}\n\n`
+               )
+             )
            }
-
-           // Start title generation in parallel if needed
            if (actualChatId && !currentChat?.title && conversationHistory.length === 0) {
              generateChatTitle(message)
                .then(async (title) => {
@@ -547,311 +549,61 @@ export async function POST(req: NextRequest) {
                      updatedAt: new Date(),
                    })
                    .where(eq(copilotChats.id, actualChatId!))
-
-                 const titleEvent = `data: ${JSON.stringify({
-                   type: 'title_updated',
-                   title: title,
-                 })}\n\n`
-                 controller.enqueue(encoder.encode(titleEvent))
-                 logger.info(`[${tracker.requestId}] Generated and saved title: ${title}`)
+                 controller.enqueue(
+                   encoder.encode(`data: ${JSON.stringify({ type: 'title_updated', title })}\n\n`)
+                 )
                }
              })
              .catch((error) => {
                logger.error(`[${tracker.requestId}] Title generation failed:`, error)
              })
-           } else {
-             logger.debug(`[${tracker.requestId}] Skipping title generation`)
            }
-
-           // Forward the sim agent stream and capture assistant response
-           const reader = simAgentResponse.body!.getReader()
-           const decoder = new TextDecoder()

            try {
-             while (true) {
-               const { done, value } = await reader.read()
-               if (done) {
-                 break
-               }
-
-               // Decode and parse SSE events for logging and capturing content
-               const decodedChunk = decoder.decode(value, { stream: true })
-               buffer += decodedChunk
-               const lines = buffer.split('\n')
-               buffer = lines.pop() || '' // Keep incomplete line in buffer
-
-               for (const line of lines) {
-                 if (line.trim() === '') continue // Skip empty lines
-                 if (line.startsWith('data: ') && line.length > 6) {
-                   try {
-                     const jsonStr = line.slice(6)
-                     // Check if the JSON string is unusually large (potential streaming issue)
-                     if (jsonStr.length > 50000) {
-                       // 50KB limit
-                       logger.warn(`[${tracker.requestId}] Large SSE event detected`, {
-                         size: jsonStr.length,
-                         preview: `${jsonStr.substring(0, 100)}...`,
-                       })
-                     }
-                     const event = JSON.parse(jsonStr)
-                     // Log different event types comprehensively
-                     switch (event.type) {
-                       case 'content':
-                         if (event.data) {
-                           assistantContent += event.data
-                         }
-                         break
-                       case 'reasoning':
-                         logger.debug(
-                           `[${tracker.requestId}] Reasoning chunk received (${(event.data || event.content || '').length} chars)`
-                         )
-                         break
-                       case 'tool_call':
-                         if (!event.data?.partial) {
-                           toolCalls.push(event.data)
-                           if (event.data?.id) {
-                             announcedToolCallIds.add(event.data.id)
-                           }
-                         }
-                         break
-                       case 'tool_generating':
-                         if (event.toolCallId) {
-                           startedToolExecutionIds.add(event.toolCallId)
-                         }
-                         break
-                       case 'tool_result':
-                         if (event.toolCallId) {
-                           completedToolExecutionIds.add(event.toolCallId)
-                         }
-                         break
-                       case 'tool_error':
-                         logger.error(`[${tracker.requestId}] Tool error:`, {
-                           toolCallId: event.toolCallId,
-                           toolName: event.toolName,
-                           error: event.error,
-                           success: event.success,
-                         })
-                         if (event.toolCallId) {
-                           completedToolExecutionIds.add(event.toolCallId)
-                         }
-                         break
-                       case 'start':
-                         if (event.data?.responseId) {
-                           responseIdFromStart = event.data.responseId
-                         }
-                         break
-                       case 'done':
-                         if (event.data?.responseId) {
-                           responseIdFromDone = event.data.responseId
-                           lastDoneResponseId = responseIdFromDone
-                           // Mark this done as safe only if no tool call is currently in progress or pending
-                           const announced = announcedToolCallIds.size
-                           const completed = completedToolExecutionIds.size
-                           const started = startedToolExecutionIds.size
-                           const hasToolInProgress = announced > completed || started > completed
-                           if (!hasToolInProgress) {
-                             lastSafeDoneResponseId = responseIdFromDone
-                           }
-                         }
-                         break
-                       case 'error':
-                         break
-                       default:
-                     }
-
-                     // Emit to client: rewrite 'error' events into user-friendly assistant message
-                     if (event?.type === 'error') {
-                       try {
-                         const displayMessage: string =
-                           (event?.data && (event.data.displayMessage as string)) ||
-                           'Sorry, I encountered an error. Please try again.'
-                         const formatted = `_${displayMessage}_`
-                         // Accumulate so it persists to DB as assistant content
-                         assistantContent += formatted
-                         // Send as content chunk
-                         try {
-                           controller.enqueue(
-                             encoder.encode(
-                               `data: ${JSON.stringify({ type: 'content', data: formatted })}\n\n`
-                             )
-                           )
-                         } catch (enqueueErr) {
-                           reader.cancel()
-                           break
-                         }
-                         // Then close this response cleanly for the client
-                         try {
-                           controller.enqueue(
-                             encoder.encode(`data: ${JSON.stringify({ type: 'done' })}\n\n`)
-                           )
-                         } catch (enqueueErr) {
-                           reader.cancel()
-                           break
-                         }
-                       } catch {}
-                       // Do not forward the original error event
-                     } else {
-                       // Forward original event to client
-                       try {
-                         controller.enqueue(encoder.encode(`data: ${jsonStr}\n\n`))
-                       } catch (enqueueErr) {
-                         reader.cancel()
-                         break
-                       }
-                     }
-                   } catch (e) {
-                     // Enhanced error handling for large payloads and parsing issues
-                     const lineLength = line.length
-                     const isLargePayload = lineLength > 10000
-                     if (isLargePayload) {
-                       logger.error(
-                         `[${tracker.requestId}] Failed to parse large SSE event (${lineLength} chars)`,
-                         {
-                           error: e,
-                           preview: `${line.substring(0, 200)}...`,
-                           size: lineLength,
-                         }
-                       )
-                     } else {
-                       logger.warn(
-                         `[${tracker.requestId}] Failed to parse SSE event: "${line.substring(0, 200)}..."`,
-                         e
-                       )
-                     }
-                   }
-                 } else if (line.trim() && line !== 'data: [DONE]') {
-                   logger.debug(`[${tracker.requestId}] Non-SSE line from sim agent: "${line}"`)
-                 }
-               }
-             }
-
-             // Process any remaining buffer
-             if (buffer.trim()) {
-               logger.debug(`[${tracker.requestId}] Processing remaining buffer: "${buffer}"`)
-               if (buffer.startsWith('data: ')) {
-                 try {
-                   const jsonStr = buffer.slice(6)
-                   const event = JSON.parse(jsonStr)
-                   if (event.type === 'content' && event.data) {
-                     assistantContent += event.data
-                   }
-                   // Forward remaining event, applying same error rewrite behavior
-                   if (event?.type === 'error') {
-                     const displayMessage: string =
-                       (event?.data && (event.data.displayMessage as string)) ||
-                       'Sorry, I encountered an error. Please try again.'
-                     const formatted = `_${displayMessage}_`
-                     assistantContent += formatted
-                     try {
-                       controller.enqueue(
-                         encoder.encode(
-                           `data: ${JSON.stringify({ type: 'content', data: formatted })}\n\n`
-                         )
-                       )
-                       controller.enqueue(
-                         encoder.encode(`data: ${JSON.stringify({ type: 'done' })}\n\n`)
-                       )
-                     } catch (enqueueErr) {
-                       reader.cancel()
-                     }
-                   } else {
-                     try {
-                       controller.enqueue(encoder.encode(`data: ${jsonStr}\n\n`))
-                     } catch (enqueueErr) {
-                       reader.cancel()
-                     }
-                   }
-                 } catch (e) {
-                   logger.warn(`[${tracker.requestId}] Failed to parse final buffer: "${buffer}"`)
-                 }
-               }
-             }
-
-             // Log final streaming summary
-             logger.info(`[${tracker.requestId}] Streaming complete summary:`, {
-               totalContentLength: assistantContent.length,
-               toolCallsCount: toolCalls.length,
-               hasContent: assistantContent.length > 0,
-               toolNames: toolCalls.map((tc) => tc?.name).filter(Boolean),
-             })
-
-             // NOTE: Messages are saved by the client via update-messages endpoint with full contentBlocks.
-             // Server only updates conversationId here to avoid overwriting client's richer save.
-             if (currentChat) {
-               // Persist only a safe conversationId to avoid continuing from a state that expects tool outputs
-               const previousConversationId = currentChat?.conversationId as string | undefined
-               const responseId = lastSafeDoneResponseId || previousConversationId || undefined
-
-               if (responseId) {
-                 await db
-                   .update(copilotChats)
-                   .set({
-                     updatedAt: new Date(),
-                     conversationId: responseId,
-                   })
-                   .where(eq(copilotChats.id, actualChatId!))
-                 logger.info(
-                   `[${tracker.requestId}] Updated conversationId for chat ${actualChatId}`,
-                   {
-                     updatedConversationId: responseId,
-                   }
-                 )
-               }
-             }
+             const result = await orchestrateCopilotStream(requestPayload, {
+               userId: authenticatedUserId,
+               workflowId,
+               chatId: actualChatId,
+               autoExecuteTools: true,
+               interactive: true,
+               onEvent: async (event) => {
+                 try {
+                   controller.enqueue(encoder.encode(`data: ${JSON.stringify(event)}\n\n`))
+                 } catch {
+                   controller.error('Failed to forward SSE event')
+                 }
+               },
+             })
+
+             if (currentChat && result.conversationId) {
+               await db
+                 .update(copilotChats)
+                 .set({
+                   updatedAt: new Date(),
+                   conversationId: result.conversationId,
+                 })
+                 .where(eq(copilotChats.id, actualChatId!))
+             }
            } catch (error) {
-             logger.error(`[${tracker.requestId}] Error processing stream:`, error)
-
-             // Send an error event to the client before closing so it knows what happened
-             try {
-               const errorMessage =
-                 error instanceof Error && error.message === 'terminated'
-                   ? 'Connection to AI service was interrupted. Please try again.'
-                   : 'An unexpected error occurred while processing the response.'
-               const encoder = new TextEncoder()
-
-               // Send error as content so it shows in the chat
-               controller.enqueue(
-                 encoder.encode(
-                   `data: ${JSON.stringify({ type: 'content', data: `\n\n_${errorMessage}_` })}\n\n`
-                 )
-               )
-               // Send done event to properly close the stream on client
-               controller.enqueue(encoder.encode(`data: ${JSON.stringify({ type: 'done' })}\n\n`))
-             } catch (enqueueError) {
-               // Stream might already be closed, that's ok
-               logger.warn(
-                 `[${tracker.requestId}] Could not send error event to client:`,
-                 enqueueError
-               )
-             }
+             logger.error(`[${tracker.requestId}] Orchestration error:`, error)
+             controller.enqueue(
+               encoder.encode(
+                 `data: ${JSON.stringify({
+                   type: 'error',
+                   data: {
+                     displayMessage:
+                       'An unexpected error occurred while processing the response.',
+                   },
+                 })}\n\n`
+               )
+             )
            } finally {
-             try {
-               controller.close()
-             } catch {
-               // Controller might already be closed
-             }
+             controller.close()
            }
          },
        })

-       const response = new Response(transformedStream, {
+       return new Response(transformedStream, {
          headers: {
            'Content-Type': 'text/event-stream',
            'Cache-Control': 'no-cache',
@@ -859,43 +611,31 @@ export async function POST(req: NextRequest) {
            'X-Accel-Buffering': 'no',
          },
        })
-
-      logger.info(`[${tracker.requestId}] Returning streaming response to client`, {
-        duration: tracker.getDuration(),
-        chatId: actualChatId,
-        headers: {
-          'Content-Type': 'text/event-stream',
-          'Cache-Control': 'no-cache',
-          Connection: 'keep-alive',
-        },
-      })
-
-      return response
     }

-    // For non-streaming responses
-    const responseData = await simAgentResponse.json()
-    logger.info(`[${tracker.requestId}] Non-streaming response from sim agent:`, {
+    const nonStreamingResult = await orchestrateCopilotStream(requestPayload, {
+      userId: authenticatedUserId,
+      workflowId,
+      chatId: actualChatId,
+      autoExecuteTools: true,
+      interactive: true,
+    })
+
+    const responseData = {
+      content: nonStreamingResult.content,
+      toolCalls: nonStreamingResult.toolCalls,
+      model: selectedModel,
+      provider: providerConfig?.provider || env.COPILOT_PROVIDER || 'openai',
+    }
+
+    logger.info(`[${tracker.requestId}] Non-streaming response from orchestrator:`, {
      hasContent: !!responseData.content,
      contentLength: responseData.content?.length || 0,
      model: responseData.model,
      provider: responseData.provider,
      toolCallsCount: responseData.toolCalls?.length || 0,
-      hasTokens: !!responseData.tokens,
    })

-    // Log tool calls if present
-    if (responseData.toolCalls?.length > 0) {
-      responseData.toolCalls.forEach((toolCall: any) => {
-        logger.info(`[${tracker.requestId}] Tool call in response:`, {
-          id: toolCall.id,
-          name: toolCall.name,
-          success: toolCall.success,
-          result: `${JSON.stringify(toolCall.result).substring(0, 200)}...`,
-        })
-      })
-    }
-
    // Save messages if we have a chat
    if (currentChat && responseData.content) {
      const userMessage = {
@@ -947,6 +687,9 @@ export async function POST(req: NextRequest) {
          .set({
            messages: updatedMessages,
            updatedAt: new Date(),
+           ...(nonStreamingResult.conversationId
+             ? { conversationId: nonStreamingResult.conversationId }
+             : {}),
          })
          .where(eq(copilotChats.id, actualChatId!))
      }

View File

@@ -56,7 +56,7 @@ export async function GET(_request: NextRequest, { params }: { params: Promise<{
        deploymentVersionName: workflowDeploymentVersion.name,
      })
      .from(workflowExecutionLogs)
-     .innerJoin(workflow, eq(workflowExecutionLogs.workflowId, workflow.id))
+     .leftJoin(workflow, eq(workflowExecutionLogs.workflowId, workflow.id))
      .leftJoin(
        workflowDeploymentVersion,
        eq(workflowDeploymentVersion.id, workflowExecutionLogs.deploymentVersionId)
@@ -65,7 +65,7 @@ export async function GET(_request: NextRequest, { params }: { params: Promise<{
        permissions,
        and(
          eq(permissions.entityType, 'workspace'),
-         eq(permissions.entityId, workflow.workspaceId),
+         eq(permissions.entityId, workflowExecutionLogs.workspaceId),
          eq(permissions.userId, userId)
        )
      )
@@ -77,17 +77,19 @@ export async function GET(_request: NextRequest, { params }: { params: Promise<{
      return NextResponse.json({ error: 'Not found' }, { status: 404 })
    }

-   const workflowSummary = {
-     id: log.workflowId,
-     name: log.workflowName,
-     description: log.workflowDescription,
-     color: log.workflowColor,
-     folderId: log.workflowFolderId,
-     userId: log.workflowUserId,
-     workspaceId: log.workflowWorkspaceId,
-     createdAt: log.workflowCreatedAt,
-     updatedAt: log.workflowUpdatedAt,
-   }
+   const workflowSummary = log.workflowId
+     ? {
+         id: log.workflowId,
+         name: log.workflowName,
+         description: log.workflowDescription,
+         color: log.workflowColor,
+         folderId: log.workflowFolderId,
+         userId: log.workflowUserId,
+         workspaceId: log.workflowWorkspaceId,
+         createdAt: log.workflowCreatedAt,
+         updatedAt: log.workflowUpdatedAt,
+       }
+     : null

    const response = {
      id: log.id,

View File

@@ -1,5 +1,5 @@
 import { db } from '@sim/db'
-import { subscription, user, workflow, workflowExecutionLogs } from '@sim/db/schema'
+import { subscription, user, workflowExecutionLogs, workspace } from '@sim/db/schema'
 import { createLogger } from '@sim/logger'
 import { and, eq, inArray, lt, sql } from 'drizzle-orm'
 import { type NextRequest, NextResponse } from 'next/server'
@@ -40,17 +40,17 @@ export async function GET(request: NextRequest) {
    const freeUserIds = freeUsers.map((u) => u.userId)

-   const workflowsQuery = await db
-     .select({ id: workflow.id })
-     .from(workflow)
-     .where(inArray(workflow.userId, freeUserIds))
+   const workspacesQuery = await db
+     .select({ id: workspace.id })
+     .from(workspace)
+     .where(inArray(workspace.billedAccountUserId, freeUserIds))

-   if (workflowsQuery.length === 0) {
-     logger.info('No workflows found for free users')
-     return NextResponse.json({ message: 'No workflows found for cleanup' })
+   if (workspacesQuery.length === 0) {
+     logger.info('No workspaces found for free users')
+     return NextResponse.json({ message: 'No workspaces found for cleanup' })
    }

-   const workflowIds = workflowsQuery.map((w) => w.id)
+   const workspaceIds = workspacesQuery.map((w) => w.id)

    const results = {
      enhancedLogs: {
@@ -77,7 +77,7 @@ export async function GET(request: NextRequest) {
    let batchesProcessed = 0
    let hasMoreLogs = true

-   logger.info(`Starting enhanced logs cleanup for ${workflowIds.length} workflows`)
+   logger.info(`Starting enhanced logs cleanup for ${workspaceIds.length} workspaces`)

    while (hasMoreLogs && batchesProcessed < MAX_BATCHES) {
      const oldEnhancedLogs = await db
@@ -99,7 +99,7 @@ export async function GET(request: NextRequest) {
        .from(workflowExecutionLogs)
        .where(
          and(
-           inArray(workflowExecutionLogs.workflowId, workflowIds),
+           inArray(workflowExecutionLogs.workspaceId, workspaceIds),
            lt(workflowExecutionLogs.createdAt, retentionDate)
          )
        )
@@ -127,7 +127,7 @@ export async function GET(request: NextRequest) {
        customKey: enhancedLogKey,
        metadata: {
          logId: String(log.id),
-         workflowId: String(log.workflowId),
+         workflowId: String(log.workflowId ?? ''),
          executionId: String(log.executionId),
          logType: 'enhanced',
          archivedAt: new Date().toISOString(),

View File

@@ -6,10 +6,11 @@ import {
   workflowExecutionSnapshots,
 } from '@sim/db/schema'
 import { createLogger } from '@sim/logger'
-import { and, eq } from 'drizzle-orm'
+import { and, eq, inArray } from 'drizzle-orm'
 import { type NextRequest, NextResponse } from 'next/server'
 import { checkHybridAuth } from '@/lib/auth/hybrid'
 import { generateRequestId } from '@/lib/core/utils/request'
+import type { TraceSpan, WorkflowExecutionLog } from '@/lib/logs/types'

 const logger = createLogger('LogsByExecutionIdAPI')
@@ -48,14 +49,15 @@ export async function GET(
        endedAt: workflowExecutionLogs.endedAt,
        totalDurationMs: workflowExecutionLogs.totalDurationMs,
        cost: workflowExecutionLogs.cost,
+       executionData: workflowExecutionLogs.executionData,
      })
      .from(workflowExecutionLogs)
-     .innerJoin(workflow, eq(workflowExecutionLogs.workflowId, workflow.id))
+     .leftJoin(workflow, eq(workflowExecutionLogs.workflowId, workflow.id))
      .innerJoin(
        permissions,
        and(
          eq(permissions.entityType, 'workspace'),
-         eq(permissions.entityId, workflow.workspaceId),
+         eq(permissions.entityId, workflowExecutionLogs.workspaceId),
          eq(permissions.userId, authenticatedUserId)
        )
      )
@@ -78,10 +80,42 @@ export async function GET(
      return NextResponse.json({ error: 'Workflow state snapshot not found' }, { status: 404 })
    }

+   const executionData = workflowLog.executionData as WorkflowExecutionLog['executionData']
+   const traceSpans = (executionData?.traceSpans as TraceSpan[]) || []
+   const childSnapshotIds = new Set<string>()
+
+   const collectSnapshotIds = (spans: TraceSpan[]) => {
+     spans.forEach((span) => {
+       const snapshotId = span.childWorkflowSnapshotId
+       if (typeof snapshotId === 'string') {
+         childSnapshotIds.add(snapshotId)
+       }
+       if (span.children?.length) {
+         collectSnapshotIds(span.children)
+       }
+     })
+   }
+
+   if (traceSpans.length > 0) {
+     collectSnapshotIds(traceSpans)
+   }
+
+   const childWorkflowSnapshots =
+     childSnapshotIds.size > 0
+       ? await db
+           .select()
+           .from(workflowExecutionSnapshots)
+           .where(inArray(workflowExecutionSnapshots.id, Array.from(childSnapshotIds)))
+       : []
+
+   const childSnapshotMap = childWorkflowSnapshots.reduce<Record<string, unknown>>((acc, snap) => {
+     acc[snap.id] = snap.stateData
+     return acc
+   }, {})
+
    const response = {
      executionId,
      workflowId: workflowLog.workflowId,
      workflowState: snapshot.stateData,
+     childWorkflowSnapshots: childSnapshotMap,
      executionMetadata: {
        trigger: workflowLog.trigger,
        startedAt: workflowLog.startedAt.toISOString(),

View File

@@ -1,7 +1,7 @@
 import { db } from '@sim/db'
 import { permissions, workflow, workflowExecutionLogs } from '@sim/db/schema'
 import { createLogger } from '@sim/logger'
-import { and, desc, eq } from 'drizzle-orm'
+import { and, desc, eq, sql } from 'drizzle-orm'
 import { type NextRequest, NextResponse } from 'next/server'
 import { getSession } from '@/lib/auth'
 import { buildFilterConditions, LogFilterParamsSchema } from '@/lib/logs/filters'
@@ -41,7 +41,7 @@ export async function GET(request: NextRequest) {
      totalDurationMs: workflowExecutionLogs.totalDurationMs,
      cost: workflowExecutionLogs.cost,
      executionData: workflowExecutionLogs.executionData,
-     workflowName: workflow.name,
+     workflowName: sql<string>`COALESCE(${workflow.name}, 'Deleted Workflow')`,
    }

    const workspaceCondition = eq(workflowExecutionLogs.workspaceId, params.workspaceId)
@@ -74,7 +74,7 @@ export async function GET(request: NextRequest) {
    const rows = await db
      .select(selectColumns)
      .from(workflowExecutionLogs)
-     .innerJoin(workflow, eq(workflowExecutionLogs.workflowId, workflow.id))
+     .leftJoin(workflow, eq(workflowExecutionLogs.workflowId, workflow.id))
      .innerJoin(
        permissions,
        and(

View File

@@ -116,7 +116,7 @@ export async function GET(request: NextRequest) {
        workflowDeploymentVersion,
        eq(workflowDeploymentVersion.id, workflowExecutionLogs.deploymentVersionId)
      )
-     .innerJoin(workflow, eq(workflowExecutionLogs.workflowId, workflow.id))
+     .leftJoin(workflow, eq(workflowExecutionLogs.workflowId, workflow.id))
      .innerJoin(
        permissions,
        and(
@@ -190,7 +190,7 @@ export async function GET(request: NextRequest) {
        pausedExecutions,
        eq(pausedExecutions.executionId, workflowExecutionLogs.executionId)
      )
-     .innerJoin(workflow, eq(workflowExecutionLogs.workflowId, workflow.id))
+     .leftJoin(workflow, eq(workflowExecutionLogs.workflowId, workflow.id))
      .innerJoin(
        permissions,
        and(
@@ -314,17 +314,19 @@ export async function GET(request: NextRequest) {
        } catch {}
      }

-     const workflowSummary = {
-       id: log.workflowId,
-       name: log.workflowName,
-       description: log.workflowDescription,
-       color: log.workflowColor,
-       folderId: log.workflowFolderId,
-       userId: log.workflowUserId,
-       workspaceId: log.workflowWorkspaceId,
-       createdAt: log.workflowCreatedAt,
-       updatedAt: log.workflowUpdatedAt,
-     }
+     const workflowSummary = log.workflowId
+       ? {
+           id: log.workflowId,
+           name: log.workflowName,
+           description: log.workflowDescription,
+           color: log.workflowColor,
+           folderId: log.workflowFolderId,
+           userId: log.workflowUserId,
+           workspaceId: log.workflowWorkspaceId,
+           createdAt: log.workflowCreatedAt,
+           updatedAt: log.workflowUpdatedAt,
+         }
+       : null

      return {
        id: log.id,

View File

@@ -72,7 +72,7 @@ export async function GET(request: NextRequest) {
        maxTime: sql<string>`MAX(${workflowExecutionLogs.startedAt})`,
      })
      .from(workflowExecutionLogs)
-     .innerJoin(workflow, eq(workflowExecutionLogs.workflowId, workflow.id))
+     .leftJoin(workflow, eq(workflowExecutionLogs.workflowId, workflow.id))
      .innerJoin(
        permissions,
        and(
@@ -103,8 +103,8 @@ export async function GET(request: NextRequest) {
    const statsQuery = await db
      .select({
-       workflowId: workflowExecutionLogs.workflowId,
-       workflowName: workflow.name,
+       workflowId: sql<string>`COALESCE(${workflowExecutionLogs.workflowId}, 'deleted')`,
+       workflowName: sql<string>`COALESCE(${workflow.name}, 'Deleted Workflow')`,
        segmentIndex:
          sql<number>`FLOOR(EXTRACT(EPOCH FROM (${workflowExecutionLogs.startedAt} - ${startTimeIso}::timestamp)) * 1000 / ${segmentMs})`.as(
            'segment_index'
@@ -120,7 +120,7 @@ export async function GET(request: NextRequest) {
        ),
      })
      .from(workflowExecutionLogs)
-     .innerJoin(workflow, eq(workflowExecutionLogs.workflowId, workflow.id))
+     .leftJoin(workflow, eq(workflowExecutionLogs.workflowId, workflow.id))
      .innerJoin(
        permissions,
        and(
@@ -130,7 +130,11 @@ export async function GET(request: NextRequest) {
        )
      )
      .where(whereCondition)
-     .groupBy(workflowExecutionLogs.workflowId, workflow.name, sql`segment_index`)
+     .groupBy(
+       sql`COALESCE(${workflowExecutionLogs.workflowId}, 'deleted')`,
+       sql`COALESCE(${workflow.name}, 'Deleted Workflow')`,
+       sql`segment_index`
+     )

    const workflowMap = new Map<
      string,

View File

@@ -0,0 +1,824 @@
import {
type CallToolResult,
ErrorCode,
type InitializeResult,
isJSONRPCNotification,
isJSONRPCRequest,
type JSONRPCError,
type JSONRPCMessage,
type JSONRPCResponse,
type ListToolsResult,
type RequestId,
} from '@modelcontextprotocol/sdk/types.js'
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { getCopilotModel } from '@/lib/copilot/config'
import { orchestrateSubagentStream } from '@/lib/copilot/orchestrator/subagent'
import { executeToolServerSide, prepareExecutionContext } from '@/lib/copilot/orchestrator/tool-executor'
const logger = createLogger('CopilotMcpAPI')
export const dynamic = 'force-dynamic'
/**
* MCP Server instructions that guide LLMs on how to use the Sim copilot tools.
* This is included in the initialize response to help external LLMs understand
* the workflow lifecycle and best practices.
*/
const MCP_SERVER_INSTRUCTIONS = `
## Sim Workflow Copilot - Usage Guide
You are interacting with Sim's workflow automation platform. These tools orchestrate specialized AI agents that build workflows. Follow these guidelines carefully.
---
## Platform Knowledge
Sim is a workflow automation platform. Workflows are visual pipelines of blocks.
### Block Types
**Core Logic:**
- **Agent** - The heart of Sim (LLM block with tools, memory, structured output, knowledge bases)
- **Function** - JavaScript code execution
- **Condition** - If/else branching
- **Router** - AI-powered content-based routing
- **Loop** - While/do-while iteration
- **Parallel** - Simultaneous execution
- **API** - HTTP requests
**Integrations (3rd Party):**
- OAuth: Slack, Gmail, Google Calendar, Sheets, Outlook, Linear, GitHub, Notion
- API: Stripe, Twilio, SendGrid, any REST API
### The Agent Block
The Agent block is the core of intelligent workflows:
- **Tools** - Add integrations, custom tools, web search to give it capabilities
- **Memory** - Multi-turn conversations with persistent context
- **Structured Output** - JSON schema for reliable parsing
- **Knowledge Bases** - RAG-powered document retrieval
**Design principle:** Put tools INSIDE agents rather than using standalone tool blocks.
### Triggers
| Type | Description |
|------|-------------|
| Manual/Chat | User sends message in UI (start block: input, files, conversationId) |
| API | REST endpoint with custom input schema |
| Webhook | External services POST to trigger URL |
| Schedule | Cron-based (hourly, daily, weekly) |
### Deployments
| Type | Trigger | Use Case |
|------|---------|----------|
| API | Start block | REST endpoint for programmatic access |
| Chat | Start block | Managed chat UI with auth options |
| MCP | Start block | Expose as MCP tool for AI agents |
| General | Schedule/Webhook | Activate triggers to run automatically |
**Undeployed workflows only run in the builder UI.**
### Variable Syntax
Reference outputs from previous blocks: \`<blockname.field>\`
Reference environment variables: \`{{ENV_VAR_NAME}}\`
Rules:
- Block names must be lowercase, no spaces, no special characters
- Use dot notation for nested fields: \`<blockname.field.subfield>\`
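Example (the block name \`fetcher\`, its \`status\` field, and the \`API_TOKEN\` variable below are hypothetical, for illustration only):
\`\`\`
<fetcher.status>   references the "status" field output by a block named "fetcher"
{{API_TOKEN}}      references the API_TOKEN environment variable
\`\`\`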
---
## Workflow Lifecycle
1. **Create**: For NEW workflows, FIRST call create_workflow to get a workflowId
2. **Plan**: Use copilot_plan with the workflowId to plan the workflow
3. **Edit**: Use copilot_edit with the workflowId AND the plan to build the workflow
4. **Deploy**: ALWAYS deploy after building using copilot_deploy before testing/running
5. **Test**: Use copilot_test to verify the workflow works correctly
6. **Share**: Provide the user with the workflow URL after completion
---
## CRITICAL: Always Pass workflowId
- For NEW workflows: Call create_workflow FIRST, then use the returned workflowId
- For EXISTING workflows: Pass the workflowId to all copilot tools
- copilot_plan, copilot_edit, copilot_deploy, copilot_test, copilot_debug all REQUIRE workflowId
---
## CRITICAL: How to Handle Plans
The copilot_plan tool returns a structured plan object. You MUST:
1. **Do NOT modify the plan**: Pass the plan object EXACTLY as returned to copilot_edit
2. **Do NOT interpret or summarize the plan**: The edit agent needs the raw plan data
3. **Pass the plan in the context.plan field**: \`{ "context": { "plan": <plan_object> } }\`
4. **Include ALL plan data**: Block configurations, connections, credentials, everything
Example flow:
\`\`\`
1. copilot_plan({ request: "build a workflow...", workflowId: "abc123" })
-> Returns: { "plan": { "blocks": [...], "connections": [...], ... } }
2. copilot_edit({
workflowId: "abc123",
message: "Execute the plan",
context: { "plan": <EXACT plan object from step 1> }
})
\`\`\`
**Why this matters**: The plan contains technical details (block IDs, field mappings, API schemas) that the edit agent needs verbatim. Summarizing or rephrasing loses critical information.
---
## CRITICAL: Error Handling
**If the user says "doesn't work", "broke", "failed", "error" → ALWAYS use copilot_debug FIRST.**
Don't guess. Don't plan. Debug first to find the actual problem.
---
## Important Rules
- ALWAYS deploy a workflow before attempting to run or test it
- Workflows must be deployed to have an "active deployment" for execution
- After building, call copilot_deploy with the appropriate deployment type (api, chat, or mcp)
- Return the workflow URL to the user so they can access it in Sim
---
## Quick Operations (use direct tools)
- list_workflows, list_workspaces, list_folders, get_workflow: Fast database queries
- create_workflow: Create new workflow and get workflowId (CALL THIS FIRST for new workflows)
- create_folder: Create new resources
## Workflow Building (use copilot tools)
- copilot_plan: Plan workflow changes (REQUIRES workflowId) - returns a plan object
- copilot_edit: Execute the plan (REQUIRES workflowId AND plan from copilot_plan)
- copilot_deploy: Deploy workflows (REQUIRES workflowId)
- copilot_test: Test workflow execution (REQUIRES workflowId)
- copilot_debug: Diagnose errors (REQUIRES workflowId) - USE THIS FIRST for issues
`
/**
* Direct tools that execute immediately without LLM orchestration.
* These are fast database queries that don't need AI reasoning.
*/
const DIRECT_TOOL_DEFS: Array<{
name: string
description: string
inputSchema: { type: 'object'; properties?: Record<string, unknown>; required?: string[] }
toolId: string
}> = [
{
name: 'list_workflows',
toolId: 'list_user_workflows',
description: 'List all workflows the user has access to. Returns workflow IDs, names, and workspace info.',
inputSchema: {
type: 'object',
properties: {
workspaceId: {
type: 'string',
description: 'Optional workspace ID to filter workflows.',
},
folderId: {
type: 'string',
description: 'Optional folder ID to filter workflows.',
},
},
},
},
{
name: 'list_workspaces',
toolId: 'list_user_workspaces',
description: 'List all workspaces the user has access to. Returns workspace IDs, names, and roles.',
inputSchema: {
type: 'object',
properties: {},
},
},
{
name: 'list_folders',
toolId: 'list_folders',
description: 'List all folders in a workspace.',
inputSchema: {
type: 'object',
properties: {
workspaceId: {
type: 'string',
description: 'Workspace ID to list folders from.',
},
},
required: ['workspaceId'],
},
},
{
name: 'get_workflow',
toolId: 'get_workflow_from_name',
description: 'Get a workflow by name or ID. Returns the full workflow definition.',
inputSchema: {
type: 'object',
properties: {
name: {
type: 'string',
description: 'Workflow name to search for.',
},
workflowId: {
type: 'string',
description: 'Workflow ID to retrieve directly.',
},
},
},
},
{
name: 'create_workflow',
toolId: 'create_workflow',
description: 'Create a new workflow. Returns the new workflow ID.',
inputSchema: {
type: 'object',
properties: {
name: {
type: 'string',
description: 'Name for the new workflow.',
},
workspaceId: {
type: 'string',
description: 'Optional workspace ID. Uses default workspace if not provided.',
},
folderId: {
type: 'string',
description: 'Optional folder ID to place the workflow in.',
},
description: {
type: 'string',
description: 'Optional description for the workflow.',
},
},
required: ['name'],
},
},
{
name: 'create_folder',
toolId: 'create_folder',
description: 'Create a new folder in a workspace.',
inputSchema: {
type: 'object',
properties: {
name: {
type: 'string',
description: 'Name for the new folder.',
},
workspaceId: {
type: 'string',
description: 'Optional workspace ID. Uses default workspace if not provided.',
},
parentId: {
type: 'string',
description: 'Optional parent folder ID for nested folders.',
},
},
required: ['name'],
},
},
]
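As a sketch of how an MCP client would invoke one of the direct tools defined above, here is what a JSON-RPC 2.0 `tools/call` message looks like. The request shape follows the MCP spec; the workspace ID is a made-up placeholder.

```typescript
// Illustrative sketch only: builds a JSON-RPC `tools/call` request for the
// `list_workflows` direct tool defined above. Nothing here touches a server.
function buildToolsCallRequest(
  id: number,
  name: string,
  args: Record<string, unknown>
) {
  return {
    jsonrpc: '2.0' as const,
    id,
    method: 'tools/call',
    params: { name, arguments: args },
  }
}

const req = buildToolsCallRequest(1, 'list_workflows', {
  workspaceId: 'ws_123', // hypothetical workspace ID
})
```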
const SUBAGENT_TOOL_DEFS: Array<{
name: string
description: string
inputSchema: { type: 'object'; properties?: Record<string, unknown>; required?: string[] }
agentId: string
}> = [
{
name: 'copilot_discovery',
agentId: 'discovery',
description: `Find workflows by their contents or functionality when the user doesn't know the exact name or ID.
USE THIS WHEN:
- User describes a workflow by what it does: "the one that sends emails", "my Slack notification workflow"
- User refers to workflow contents: "the workflow with the OpenAI block"
- User needs to search/match workflows by functionality or description
DO NOT USE (use direct tools instead):
- User knows the workflow name → use get_workflow
- User wants to list all workflows → use list_workflows
- User wants to list workspaces → use list_workspaces
- User wants to list folders → use list_folders`,
inputSchema: {
type: 'object',
properties: {
request: { type: 'string' },
workspaceId: { type: 'string' },
context: { type: 'object' },
},
required: ['request'],
},
},
{
name: 'copilot_plan',
agentId: 'plan',
description: `Plan workflow changes by gathering required information.
USE THIS WHEN:
- Building a new workflow
- Modifying an existing workflow
- You need to understand what blocks and integrations are available
- The workflow requires multiple blocks or connections
WORKFLOW ID (REQUIRED):
- For NEW workflows: First call create_workflow to get a workflowId, then pass it here
- For EXISTING workflows: Always pass the workflowId parameter
This tool gathers information about available blocks, credentials, and the current workflow state.
RETURNS: A plan object containing block configurations, connections, and technical details.
IMPORTANT: Pass the returned plan EXACTLY to copilot_edit - do not modify or summarize it.`,
inputSchema: {
type: 'object',
properties: {
request: { type: 'string', description: 'What you want to build or modify in the workflow.' },
workflowId: {
type: 'string',
description: 'REQUIRED. The workflow ID. For new workflows, call create_workflow first to get this.',
},
context: { type: 'object' },
},
required: ['request', 'workflowId'],
},
},
{
name: 'copilot_edit',
agentId: 'edit',
description: `Execute a workflow plan and apply edits.
USE THIS WHEN:
- You have a plan from copilot_plan that needs to be executed
- Building or modifying a workflow based on the plan
- Making changes to blocks, connections, or configurations
WORKFLOW ID (REQUIRED):
- You MUST provide the workflowId parameter
- For new workflows, get the workflowId from create_workflow first
PLAN (REQUIRED):
- Pass the EXACT plan object from copilot_plan in the context.plan field
- Do NOT modify, summarize, or interpret the plan - pass it verbatim
- The plan contains technical details the edit agent needs exactly as-is
IMPORTANT: After copilot_edit completes, you MUST call copilot_deploy before the workflow can be run or tested.`,
inputSchema: {
type: 'object',
properties: {
message: { type: 'string', description: 'Optional additional instructions for the edit.' },
workflowId: {
type: 'string',
description: 'REQUIRED. The workflow ID to edit. Get this from create_workflow for new workflows.',
},
plan: {
type: 'object',
description: 'The plan object from copilot_plan. Pass it EXACTLY as returned, do not modify.',
},
context: {
type: 'object',
description: 'Additional context. Put the plan in context.plan if not using the plan field directly.',
},
},
required: ['workflowId'],
},
},
{
name: 'copilot_debug',
agentId: 'debug',
description: `Diagnose errors or unexpected workflow behavior.
WORKFLOW ID (REQUIRED): Always provide the workflowId of the workflow to debug.`,
inputSchema: {
type: 'object',
properties: {
error: { type: 'string', description: 'The error message or description of the issue.' },
workflowId: { type: 'string', description: 'REQUIRED. The workflow ID to debug.' },
context: { type: 'object' },
},
required: ['error', 'workflowId'],
},
},
{
name: 'copilot_deploy',
agentId: 'deploy',
description: `Deploy or manage workflow deployments.
CRITICAL: You MUST deploy a workflow after building before it can be run or tested.
Workflows without an active deployment will fail with "no active deployment" error.
WORKFLOW ID (REQUIRED):
- Always provide the workflowId parameter
- This must match the workflow you built with copilot_edit
USE THIS:
- After copilot_edit completes to activate the workflow
- To update deployment settings
- To redeploy after making changes
DEPLOYMENT TYPES:
- "deploy as api" - REST API endpoint
- "deploy as chat" - Chat interface
- "deploy as mcp" - MCP server`,
inputSchema: {
type: 'object',
properties: {
request: {
type: 'string',
description: 'The deployment request, e.g. "deploy as api" or "deploy as chat"',
},
workflowId: {
type: 'string',
description: 'REQUIRED. The workflow ID to deploy.',
},
context: { type: 'object' },
},
required: ['request', 'workflowId'],
},
},
{
name: 'copilot_auth',
agentId: 'auth',
description: 'Handle OAuth connection flows.',
inputSchema: {
type: 'object',
properties: {
request: { type: 'string' },
context: { type: 'object' },
},
required: ['request'],
},
},
{
name: 'copilot_knowledge',
agentId: 'knowledge',
description: 'Create and manage knowledge bases.',
inputSchema: {
type: 'object',
properties: {
request: { type: 'string' },
context: { type: 'object' },
},
required: ['request'],
},
},
{
name: 'copilot_custom_tool',
agentId: 'custom_tool',
description: 'Create or manage custom tools.',
inputSchema: {
type: 'object',
properties: {
request: { type: 'string' },
context: { type: 'object' },
},
required: ['request'],
},
},
{
name: 'copilot_info',
agentId: 'info',
description: 'Inspect blocks, outputs, and workflow metadata.',
inputSchema: {
type: 'object',
properties: {
request: { type: 'string' },
workflowId: { type: 'string' },
context: { type: 'object' },
},
required: ['request'],
},
},
{
name: 'copilot_workflow',
agentId: 'workflow',
description: 'Manage workflow environment and configuration.',
inputSchema: {
type: 'object',
properties: {
request: { type: 'string' },
workflowId: { type: 'string' },
context: { type: 'object' },
},
required: ['request'],
},
},
{
name: 'copilot_research',
agentId: 'research',
description: 'Research external APIs and documentation.',
inputSchema: {
type: 'object',
properties: {
request: { type: 'string' },
context: { type: 'object' },
},
required: ['request'],
},
},
{
name: 'copilot_tour',
agentId: 'tour',
description: 'Explain platform features and usage.',
inputSchema: {
type: 'object',
properties: {
request: { type: 'string' },
context: { type: 'object' },
},
required: ['request'],
},
},
{
name: 'copilot_test',
agentId: 'test',
description: `Run workflows and verify outputs.
PREREQUISITE: The workflow MUST be deployed first using copilot_deploy.
Undeployed workflows will fail with "no active deployment" error.
WORKFLOW ID (REQUIRED):
- Always provide the workflowId parameter
USE THIS:
- After deploying to verify the workflow works correctly
- To test with sample inputs
- To validate workflow behavior before sharing with user`,
inputSchema: {
type: 'object',
properties: {
request: { type: 'string' },
workflowId: {
type: 'string',
description: 'REQUIRED. The workflow ID to test.',
},
context: { type: 'object' },
},
required: ['request', 'workflowId'],
},
},
{
name: 'copilot_superagent',
agentId: 'superagent',
description: 'Execute direct external actions (email, Slack, etc.).',
inputSchema: {
type: 'object',
properties: {
request: { type: 'string' },
context: { type: 'object' },
},
required: ['request'],
},
},
]
function createResponse(id: RequestId, result: unknown): JSONRPCResponse {
return {
jsonrpc: '2.0',
id,
result: result as JSONRPCResponse['result'],
}
}
function createError(id: RequestId, code: ErrorCode | number, message: string): JSONRPCError {
return {
jsonrpc: '2.0',
id,
error: { code, message },
}
}
export async function GET() {
return NextResponse.json({
name: 'copilot-subagents',
version: '1.0.0',
protocolVersion: '2024-11-05',
capabilities: { tools: {} },
})
}
export async function POST(request: NextRequest) {
try {
const auth = await checkHybridAuth(request, { requireWorkflowId: false })
if (!auth.success || !auth.userId) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const body = (await request.json()) as JSONRPCMessage
if (isJSONRPCNotification(body)) {
return new NextResponse(null, { status: 202 })
}
if (!isJSONRPCRequest(body)) {
return NextResponse.json(
createError(0, ErrorCode.InvalidRequest, 'Invalid JSON-RPC message'),
{ status: 400 }
)
}
const { id, method, params } = body
switch (method) {
case 'initialize': {
const result: InitializeResult = {
protocolVersion: '2024-11-05',
capabilities: { tools: {} },
serverInfo: { name: 'sim-copilot', version: '1.0.0' },
instructions: MCP_SERVER_INSTRUCTIONS,
}
return NextResponse.json(createResponse(id, result))
}
case 'ping':
return NextResponse.json(createResponse(id, {}))
case 'tools/list':
return handleToolsList(id)
case 'tools/call':
return handleToolsCall(
id,
params as { name: string; arguments?: Record<string, unknown> },
auth.userId
)
default:
return NextResponse.json(
createError(id, ErrorCode.MethodNotFound, `Method not found: ${method}`),
{ status: 404 }
)
}
} catch (error) {
logger.error('Error handling MCP request', { error })
return NextResponse.json(createError(0, ErrorCode.InternalError, 'Internal error'), {
status: 500,
})
}
}
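The `initialize` branch above returns a fixed handshake payload. A minimal sketch of the exchange, mirroring the `InitializeResult` the handler constructs (request field names per the MCP protocol; no network call is made here):

```typescript
// Sketch of the initialize handshake handled by the POST route above.
const initializeRequest = {
  jsonrpc: '2.0' as const,
  id: 1,
  method: 'initialize',
  params: { protocolVersion: '2024-11-05', capabilities: {} },
}

// Expected result body, per the handler's InitializeResult:
const expectedResult = {
  protocolVersion: '2024-11-05',
  capabilities: { tools: {} },
  serverInfo: { name: 'sim-copilot', version: '1.0.0' },
}
```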
async function handleToolsList(id: RequestId): Promise<NextResponse> {
const directTools = DIRECT_TOOL_DEFS.map((tool) => ({
name: tool.name,
description: tool.description,
inputSchema: tool.inputSchema,
}))
const subagentTools = SUBAGENT_TOOL_DEFS.map((tool) => ({
name: tool.name,
description: tool.description,
inputSchema: tool.inputSchema,
}))
const result: ListToolsResult = {
tools: [...directTools, ...subagentTools],
}
return NextResponse.json(createResponse(id, result))
}
async function handleToolsCall(
id: RequestId,
params: { name: string; arguments?: Record<string, unknown> },
userId: string
): Promise<NextResponse> {
const args = params.arguments || {}
// Check if this is a direct tool (fast, no LLM)
const directTool = DIRECT_TOOL_DEFS.find((tool) => tool.name === params.name)
if (directTool) {
return handleDirectToolCall(id, directTool, args, userId)
}
// Check if this is a subagent tool (uses LLM orchestration)
const subagentTool = SUBAGENT_TOOL_DEFS.find((tool) => tool.name === params.name)
if (subagentTool) {
return handleSubagentToolCall(id, subagentTool, args, userId)
}
return NextResponse.json(
createError(id, ErrorCode.MethodNotFound, `Tool not found: ${params.name}`),
{ status: 404 }
)
}
async function handleDirectToolCall(
id: RequestId,
toolDef: (typeof DIRECT_TOOL_DEFS)[number],
args: Record<string, unknown>,
userId: string
): Promise<NextResponse> {
try {
const execContext = await prepareExecutionContext(userId, (args.workflowId as string) || '')
const toolCall = {
id: crypto.randomUUID(),
name: toolDef.toolId,
status: 'pending' as const,
params: args as Record<string, any>,
startTime: Date.now(),
}
const result = await executeToolServerSide(toolCall, execContext)
const response: CallToolResult = {
content: [
{
type: 'text',
text: JSON.stringify(result.output ?? result, null, 2),
},
],
isError: !result.success,
}
return NextResponse.json(createResponse(id, response))
} catch (error) {
logger.error('Direct tool execution failed', { tool: toolDef.name, error })
return NextResponse.json(
createError(id, ErrorCode.InternalError, `Tool execution failed: ${error}`),
{ status: 500 }
)
}
}
async function handleSubagentToolCall(
id: RequestId,
toolDef: (typeof SUBAGENT_TOOL_DEFS)[number],
args: Record<string, unknown>,
userId: string
): Promise<NextResponse> {
const requestText =
(args.request as string) ||
(args.message as string) ||
(args.error as string) ||
JSON.stringify(args)
const context = (args.context as Record<string, unknown>) || {}
if (args.plan && !context.plan) {
context.plan = args.plan
}
const { model } = getCopilotModel('chat')
const result = await orchestrateSubagentStream(
toolDef.agentId,
{
message: requestText,
workflowId: args.workflowId,
workspaceId: args.workspaceId,
context,
model,
// Signal to the copilot backend that this is a headless request
// so it can enforce workflowId requirements on tools
headless: true,
},
{
userId,
workflowId: args.workflowId as string | undefined,
workspaceId: args.workspaceId as string | undefined,
}
)
// When a respond tool (plan_respond, edit_respond, etc.) was used,
// return only the structured result - not the full result with all internal tool calls.
// This provides clean output for MCP consumers.
let responseData: unknown
if (result.structuredResult) {
responseData = {
success: result.structuredResult.success ?? result.success,
type: result.structuredResult.type,
summary: result.structuredResult.summary,
data: result.structuredResult.data,
}
} else if (result.error) {
responseData = {
success: false,
error: result.error,
errors: result.errors,
}
} else {
// Fallback: return content if no structured result
responseData = {
success: result.success,
content: result.content,
}
}
const response: CallToolResult = {
content: [
{
type: 'text',
text: JSON.stringify(responseData, null, 2),
},
],
isError: !result.success,
}
return NextResponse.json(createResponse(id, response))
}
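The three-branch response shaping in `handleSubagentToolCall` can be restated as a pure function, which makes the precedence (structured result, then error, then raw content fallback) easy to check in isolation. This is a sketch with simplified types, not the route's actual code:

```typescript
// Mirrors the branching above: structuredResult wins, then error, then content.
interface SubagentResult {
  success: boolean
  content?: string
  error?: string
  errors?: string[]
  structuredResult?: { success?: boolean; type?: string; summary?: string; data?: unknown }
}

function shapeResponse(result: SubagentResult): Record<string, unknown> {
  if (result.structuredResult) {
    return {
      success: result.structuredResult.success ?? result.success,
      type: result.structuredResult.type,
      summary: result.structuredResult.summary,
      data: result.structuredResult.data,
    }
  }
  if (result.error) {
    return { success: false, error: result.error, errors: result.errors }
  }
  return { success: result.success, content: result.content }
}
```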

View File

@@ -9,7 +9,7 @@ import { hasAccessControlAccess } from '@/lib/billing'
 import {
   type PermissionGroupConfig,
   parsePermissionGroupConfig,
-} from '@/ee/access-control/lib/types'
+} from '@/lib/permission-groups/types'
 const logger = createLogger('PermissionGroup')

View File

@@ -10,7 +10,7 @@ import {
   DEFAULT_PERMISSION_GROUP_CONFIG,
   type PermissionGroupConfig,
   parsePermissionGroupConfig,
-} from '@/ee/access-control/lib/types'
+} from '@/lib/permission-groups/types'
 const logger = createLogger('PermissionGroups')

View File

@@ -4,7 +4,7 @@ import { and, eq } from 'drizzle-orm'
 import { NextResponse } from 'next/server'
 import { getSession } from '@/lib/auth'
 import { isOrganizationOnEnterprisePlan } from '@/lib/billing'
-import { parsePermissionGroupConfig } from '@/ee/access-control/lib/types'
+import { parsePermissionGroupConfig } from '@/lib/permission-groups/types'
 export async function GET(req: Request) {
   const session = await getSession()

View File

@@ -0,0 +1,157 @@
import { db } from '@sim/db'
import { permissions, workflow } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, asc, eq, inArray, or } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { authenticateV1Request } from '@/app/api/v1/auth'
import { getCopilotModel } from '@/lib/copilot/config'
import { SIM_AGENT_VERSION } from '@/lib/copilot/constants'
import { COPILOT_REQUEST_MODES } from '@/lib/copilot/models'
import { orchestrateCopilotStream } from '@/lib/copilot/orchestrator'
const logger = createLogger('CopilotHeadlessAPI')
const RequestSchema = z.object({
message: z.string().min(1, 'message is required'),
workflowId: z.string().optional(),
workflowName: z.string().optional(),
chatId: z.string().optional(),
mode: z.enum(COPILOT_REQUEST_MODES).optional().default('agent'),
model: z.string().optional(),
autoExecuteTools: z.boolean().optional().default(true),
timeout: z.number().optional().default(300000),
})
async function resolveWorkflowId(
userId: string,
workflowId?: string,
workflowName?: string
): Promise<{ workflowId: string; workflowName?: string } | null> {
// If workflowId provided, use it directly
if (workflowId) {
return { workflowId }
}
// Get user's accessible workflows
const workspaceIds = await db
.select({ entityId: permissions.entityId })
.from(permissions)
.where(and(eq(permissions.userId, userId), eq(permissions.entityType, 'workspace')))
const workspaceIdList = workspaceIds.map((row) => row.entityId)
const workflowConditions = [eq(workflow.userId, userId)]
if (workspaceIdList.length > 0) {
workflowConditions.push(inArray(workflow.workspaceId, workspaceIdList))
}
const workflows = await db
.select()
.from(workflow)
.where(or(...workflowConditions))
.orderBy(asc(workflow.sortOrder), asc(workflow.createdAt), asc(workflow.id))
if (workflows.length === 0) {
return null
}
// If workflowName provided, find matching workflow
if (workflowName) {
const match = workflows.find(
(w) => String(w.name || '').trim().toLowerCase() === workflowName.toLowerCase()
)
if (match) {
return { workflowId: match.id, workflowName: match.name || undefined }
}
return null
}
// Default to first workflow
return { workflowId: workflows[0].id, workflowName: workflows[0].name || undefined }
}
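The resolution precedence implemented by `resolveWorkflowId` (explicit ID wins, then a trimmed case-insensitive name match, then the first accessible workflow) can be sketched without the database layer. This is a simplified illustration, not the function above:

```typescript
// Pure sketch of the precedence: id > name match > first workflow > null.
interface Wf {
  id: string
  name: string
}

function pickWorkflow(workflows: Wf[], workflowId?: string, workflowName?: string): string | null {
  if (workflowId) return workflowId
  if (workflows.length === 0) return null
  if (workflowName) {
    const match = workflows.find(
      (w) => w.name.trim().toLowerCase() === workflowName.toLowerCase()
    )
    return match ? match.id : null
  }
  return workflows[0].id
}
```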
/**
* POST /api/v1/copilot/chat
* Headless copilot endpoint for server-side orchestration.
*
* workflowId is optional - if not provided:
* - If workflowName is provided, finds that workflow
* - Otherwise uses the user's first workflow as context
* - The copilot can still operate on any workflow using list_user_workflows
*/
export async function POST(req: NextRequest) {
const auth = await authenticateV1Request(req)
if (!auth.authenticated || !auth.userId) {
return NextResponse.json({ success: false, error: auth.error || 'Unauthorized' }, { status: 401 })
}
try {
const body = await req.json()
const parsed = RequestSchema.parse(body)
const defaults = getCopilotModel('chat')
const selectedModel = parsed.model || defaults.model
// Resolve workflow ID
const resolved = await resolveWorkflowId(auth.userId, parsed.workflowId, parsed.workflowName)
if (!resolved) {
return NextResponse.json(
{ success: false, error: 'No workflows found. Create a workflow first or provide a valid workflowId.' },
{ status: 400 }
)
}
// Transform mode to transport mode (same as client API)
// build and agent both map to 'agent' on the backend
const effectiveMode = parsed.mode === 'agent' ? 'build' : parsed.mode
const transportMode = effectiveMode === 'build' ? 'agent' : effectiveMode
// Always generate a chatId - required for artifacts system to work with subagents
const chatId = parsed.chatId || crypto.randomUUID()
const requestPayload = {
message: parsed.message,
workflowId: resolved.workflowId,
userId: auth.userId,
stream: true,
streamToolCalls: true,
model: selectedModel,
mode: transportMode,
messageId: crypto.randomUUID(),
version: SIM_AGENT_VERSION,
headless: true, // Enable cross-workflow operations via workflowId params
chatId,
}
const result = await orchestrateCopilotStream(requestPayload, {
userId: auth.userId,
workflowId: resolved.workflowId,
chatId,
autoExecuteTools: parsed.autoExecuteTools,
timeout: parsed.timeout,
interactive: false,
})
return NextResponse.json({
success: result.success,
content: result.content,
toolCalls: result.toolCalls,
chatId: result.chatId || chatId, // Return the chatId for conversation continuity
conversationId: result.conversationId,
error: result.error,
})
} catch (error) {
if (error instanceof z.ZodError) {
return NextResponse.json(
{ success: false, error: 'Invalid request', details: error.errors },
{ status: 400 }
)
}
logger.error('Headless copilot request failed', {
error: error instanceof Error ? error.message : String(error),
})
return NextResponse.json({ success: false, error: 'Internal server error' }, { status: 500 })
}
}
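A caller of `POST /api/v1/copilot/chat` only has to supply `message`; everything else falls back to the defaults in `RequestSchema`. A sketch of a valid request body (workflow name is a made-up example):

```typescript
// Example request body matching RequestSchema above. Only `message` is
// required; mode defaults to 'agent' and autoExecuteTools to true.
const exampleBody = {
  message: 'Add a Slack notification step after the email block',
  workflowName: 'Email Digest', // hypothetical workflow name
  mode: 'agent',
  autoExecuteTools: true,
}
```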

View File

@@ -133,9 +133,7 @@ export async function GET(request: NextRequest, { params }: { params: Promise<{
     const finalWorkflowData = {
       ...workflowData,
       state: {
-        // Default values for expected properties
         deploymentStatuses: {},
-        // Data from normalized tables
         blocks: normalizedData.blocks,
         edges: normalizedData.edges,
         loops: normalizedData.loops,
@@ -143,8 +141,11 @@ export async function GET(request: NextRequest, { params }: { params: Promise<{
         lastSaved: Date.now(),
         isDeployed: workflowData.isDeployed || false,
         deployedAt: workflowData.deployedAt,
+        metadata: {
+          name: workflowData.name,
+          description: workflowData.description,
+        },
       },
-      // Include workflow variables
       variables: workflowData.variables || {},
     }
@@ -166,6 +167,10 @@ export async function GET(request: NextRequest, { params }: { params: Promise<{
         lastSaved: Date.now(),
         isDeployed: workflowData.isDeployed || false,
         deployedAt: workflowData.deployedAt,
+        metadata: {
+          name: workflowData.name,
+          description: workflowData.description,
+        },
       },
       variables: workflowData.variables || {},
     }

View File

@@ -215,6 +215,7 @@ export async function GET(request: NextRequest, { params }: { params: Promise<{
     }
     for (const log of logs) {
+      if (!log.workflowId) continue // Skip logs for deleted workflows
       const idx = Math.min(
         segments - 1,
         Math.max(0, Math.floor((log.startedAt.getTime() - start.getTime()) / segmentMs))

View File

@@ -1,5 +1,9 @@
 import { memo } from 'react'
 import { cn } from '@/lib/core/utils/cn'
+import {
+  DELETED_WORKFLOW_COLOR,
+  DELETED_WORKFLOW_LABEL,
+} from '@/app/workspace/[workspaceId]/logs/utils'
 import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
 import { StatusBar, type StatusBarSegment } from '..'
@@ -61,22 +65,32 @@ export function WorkflowsList({
       <div>
         {filteredExecutions.map((workflow, idx) => {
           const isSelected = expandedWorkflowId === workflow.workflowId
+          const isDeletedWorkflow = workflow.workflowName === DELETED_WORKFLOW_LABEL
+          const workflowColor = isDeletedWorkflow
+            ? DELETED_WORKFLOW_COLOR
+            : workflows[workflow.workflowId]?.color || '#64748b'
+          const canToggle = !isDeletedWorkflow
           return (
             <div
               key={workflow.workflowId}
               className={cn(
-                'flex h-[44px] cursor-pointer items-center gap-[16px] px-[24px] hover:bg-[var(--surface-3)] dark:hover:bg-[var(--surface-4)]',
+                'flex h-[44px] items-center gap-[16px] px-[24px] hover:bg-[var(--surface-3)] dark:hover:bg-[var(--surface-4)]',
+                canToggle ? 'cursor-pointer' : 'cursor-default',
                 isSelected && 'bg-[var(--surface-3)] dark:bg-[var(--surface-4)]'
               )}
-              onClick={() => onToggleWorkflow(workflow.workflowId)}
+              onClick={() => {
+                if (canToggle) {
+                  onToggleWorkflow(workflow.workflowId)
+                }
+              }}
             >
               {/* Workflow name with color */}
               <div className='flex w-[160px] flex-shrink-0 items-center gap-[8px] pr-[8px]'>
                 <div
                   className='h-[10px] w-[10px] flex-shrink-0 rounded-[3px]'
                   style={{
-                    backgroundColor: workflows[workflow.workflowId]?.color || '#64748b',
+                    backgroundColor: workflowColor,
                   }}
                 />
                 <span className='min-w-0 truncate font-medium text-[12px] text-[var(--text-primary)]'>

View File

@@ -80,6 +80,9 @@ export function ExecutionSnapshot({
   }, [executionId, closeMenu])
   const workflowState = data?.workflowState as WorkflowState | undefined
+  const childWorkflowSnapshots = data?.childWorkflowSnapshots as
+    | Record<string, WorkflowState>
+    | undefined
   const renderContent = () => {
     if (isLoading) {
@@ -148,6 +151,7 @@ export function ExecutionSnapshot({
       key={executionId}
       workflowState={workflowState}
       traceSpans={traceSpans}
+      childWorkflowSnapshots={childWorkflowSnapshots}
       className={className}
       height={height}
       width={width}

View File

@@ -26,6 +26,8 @@ import {
 } from '@/app/workspace/[workspaceId]/logs/components'
 import { useLogDetailsResize } from '@/app/workspace/[workspaceId]/logs/hooks'
 import {
+  DELETED_WORKFLOW_COLOR,
+  DELETED_WORKFLOW_LABEL,
   formatDate,
   getDisplayStatus,
   StatusBadge,
@@ -386,22 +388,25 @@ export const LogDetails = memo(function LogDetails({
           </div>
           {/* Workflow Card */}
-          {log.workflow && (
-            <div className='flex w-0 min-w-0 flex-1 flex-col gap-[8px]'>
-              <div className='font-medium text-[12px] text-[var(--text-tertiary)]'>
-                Workflow
-              </div>
-              <div className='flex min-w-0 items-center gap-[8px]'>
-                <div
-                  className='h-[10px] w-[10px] flex-shrink-0 rounded-[3px]'
-                  style={{ backgroundColor: log.workflow?.color }}
-                />
-                <span className='min-w-0 flex-1 truncate font-medium text-[14px] text-[var(--text-secondary)]'>
-                  {log.workflow.name}
-                </span>
-              </div>
-            </div>
-          )}
+          <div className='flex w-0 min-w-0 flex-1 flex-col gap-[8px]'>
+            <div className='font-medium text-[12px] text-[var(--text-tertiary)]'>
+              Workflow
+            </div>
+            <div className='flex min-w-0 items-center gap-[8px]'>
+              <div
+                className='h-[10px] w-[10px] flex-shrink-0 rounded-[3px]'
+                style={{
+                  backgroundColor:
+                    log.workflow?.color ||
+                    (!log.workflowId ? DELETED_WORKFLOW_COLOR : undefined),
+                }}
+              />
+              <span className='min-w-0 flex-1 truncate font-medium text-[14px] text-[var(--text-secondary)]'>
+                {log.workflow?.name ||
+                  (!log.workflowId ? DELETED_WORKFLOW_LABEL : 'Unknown')}
+              </span>
+            </div>
+          </div>
         </div>
         {/* Execution ID */}

View File

@@ -7,6 +7,8 @@ import { List, type RowComponentProps, useListRef } from 'react-window'
 import { Badge, buttonVariants } from '@/components/emcn'
 import { cn } from '@/lib/core/utils/cn'
 import {
+  DELETED_WORKFLOW_COLOR,
+  DELETED_WORKFLOW_LABEL,
   formatDate,
   formatDuration,
   getDisplayStatus,
@@ -33,6 +35,11 @@ interface LogRowProps {
 const LogRow = memo(
   function LogRow({ log, isSelected, onClick, onContextMenu, selectedRowRef }: LogRowProps) {
     const formattedDate = useMemo(() => formatDate(log.createdAt), [log.createdAt])
+    const isDeletedWorkflow = !log.workflow?.id && !log.workflowId
+    const workflowName = isDeletedWorkflow
+      ? DELETED_WORKFLOW_LABEL
+      : log.workflow?.name || 'Unknown'
+    const workflowColor = isDeletedWorkflow ? DELETED_WORKFLOW_COLOR : log.workflow?.color
     const handleClick = useCallback(() => onClick(log), [onClick, log])
@@ -78,10 +85,15 @@
         >
           <div
             className='h-[10px] w-[10px] flex-shrink-0 rounded-[3px]'
-            style={{ backgroundColor: log.workflow?.color }}
+            style={{ backgroundColor: workflowColor }}
           />
-          <span className='min-w-0 truncate font-medium text-[12px] text-[var(--text-primary)]'>
-            {log.workflow?.name || 'Unknown'}
+          <span
+            className={cn(
+              'min-w-0 truncate font-medium text-[12px]',
+              isDeletedWorkflow ? 'text-[var(--text-tertiary)]' : 'text-[var(--text-primary)]'
+            )}
+          >
+            {workflowName}
           </span>
         </div>
@@ -27,6 +27,9 @@ export const LOG_COLUMN_ORDER: readonly LogColumnKey[] = [
   'duration',
 ] as const
+
+export const DELETED_WORKFLOW_LABEL = 'Deleted Workflow'
+export const DELETED_WORKFLOW_COLOR = 'var(--text-tertiary)'
 export type LogStatus = 'error' | 'pending' | 'running' | 'info' | 'cancelled'
 /**
@@ -1,5 +1,6 @@
'use client' 'use client'
import { createLogger } from '@sim/logger'
import { memo, useEffect, useMemo, useRef, useState } from 'react' import { memo, useEffect, useMemo, useRef, useState } from 'react'
import clsx from 'clsx' import clsx from 'clsx'
import { ChevronUp, LayoutList } from 'lucide-react' import { ChevronUp, LayoutList } from 'lucide-react'
@@ -25,6 +26,7 @@ import { getBlock } from '@/blocks/registry'
import type { CopilotToolCall } from '@/stores/panel' import type { CopilotToolCall } from '@/stores/panel'
import { useCopilotStore } from '@/stores/panel' import { useCopilotStore } from '@/stores/panel'
import { CLASS_TOOL_METADATA } from '@/stores/panel/copilot/store' import { CLASS_TOOL_METADATA } from '@/stores/panel/copilot/store'
import { COPILOT_SERVER_ORCHESTRATED } from '@/lib/copilot/orchestrator/config'
import type { SubAgentContentBlock } from '@/stores/panel/copilot/types' import type { SubAgentContentBlock } from '@/stores/panel/copilot/types'
import { useWorkflowStore } from '@/stores/workflows/workflow/store' import { useWorkflowStore } from '@/stores/workflows/workflow/store'
@@ -1259,12 +1261,36 @@ function shouldShowRunSkipButtons(toolCall: CopilotToolCall): boolean {
return false return false
} }
const toolCallLogger = createLogger('CopilotToolCall')
async function sendToolDecision(toolCallId: string, status: 'accepted' | 'rejected') {
try {
await fetch('/api/copilot/confirm', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ toolCallId, status }),
})
} catch (error) {
toolCallLogger.warn('Failed to send tool decision', {
toolCallId,
status,
error: error instanceof Error ? error.message : String(error),
})
}
}
async function handleRun( async function handleRun(
toolCall: CopilotToolCall, toolCall: CopilotToolCall,
setToolCallState: any, setToolCallState: any,
onStateChange?: any, onStateChange?: any,
editedParams?: any editedParams?: any
) { ) {
if (COPILOT_SERVER_ORCHESTRATED) {
setToolCallState(toolCall, 'executing')
onStateChange?.('executing')
await sendToolDecision(toolCall.id, 'accepted')
return
}
const instance = getClientTool(toolCall.id) const instance = getClientTool(toolCall.id)
if (!instance && isIntegrationTool(toolCall.name)) { if (!instance && isIntegrationTool(toolCall.name)) {
@@ -1309,6 +1335,12 @@ async function handleRun(
} }
async function handleSkip(toolCall: CopilotToolCall, setToolCallState: any, onStateChange?: any) { async function handleSkip(toolCall: CopilotToolCall, setToolCallState: any, onStateChange?: any) {
if (COPILOT_SERVER_ORCHESTRATED) {
setToolCallState(toolCall, 'rejected')
onStateChange?.('rejected')
await sendToolDecision(toolCall.id, 'rejected')
return
}
const instance = getClientTool(toolCall.id) const instance = getClientTool(toolCall.id)
if (!instance && isIntegrationTool(toolCall.name)) { if (!instance && isIntegrationTool(toolCall.name)) {
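When `COPILOT_SERVER_ORCHESTRATED` is set, the handlers above no longer execute the tool client-side: they flip local UI state optimistically and report the user's choice to `/api/copilot/confirm`. The request that `sendToolDecision` issues can be sketched as a pure builder (the endpoint path, method, header, and body fields come from the hunk above; the builder function and types are illustrative, not part of the codebase):

```typescript
type Decision = 'accepted' | 'rejected'

interface DecisionRequest {
  url: string
  init: {
    method: 'POST'
    headers: Record<string, string>
    body: string
  }
}

// Builds the same request shape the diff passes to fetch().
function buildDecisionRequest(toolCallId: string, status: Decision): DecisionRequest {
  return {
    url: '/api/copilot/confirm',
    init: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      // Same JSON payload sendToolDecision serializes
      body: JSON.stringify({ toolCallId, status }),
    },
  }
}
```

Note that in the diff a failed POST is only logged, not retried: the UI state was already updated before the request, so a lost decision degrades gracefully instead of blocking the client.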
@@ -1,6 +1,6 @@
 'use client'
-import { useCallback, useEffect, useMemo, useState } from 'react'
+import { createElement, useCallback, useEffect, useMemo, useState } from 'react'
 import { createLogger } from '@sim/logger'
 import { ExternalLink, Users } from 'lucide-react'
 import { Button, Combobox } from '@/components/emcn/components'
@@ -203,7 +203,7 @@ export function CredentialSelector({
     if (!baseProviderConfig) {
       return <ExternalLink className='h-3 w-3' />
     }
-    return baseProviderConfig.icon({ className: 'h-3 w-3' })
+    return createElement(baseProviderConfig.icon, { className: 'h-3 w-3' })
   }, [])
   const getProviderName = useCallback((providerName: OAuthProvider) => {
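The one-line change above matters because `baseProviderConfig.icon({ ... })` invokes the component as a plain function, so any hooks inside the icon run as part of `CredentialSelector`'s own render, and the parent's hook count then depends on which branch executed. `createElement` defers the call so React renders the icon as its own component with its own hook list. A toy simulation (not React itself; all names here are illustrative) of why the bookkeeping differs:

```typescript
// Each render gets its own hook counter; calling a component function
// directly charges its hooks to whatever component is currently rendering.
let hookCount: number | null = null

function useHook(): void {
  if (hookCount === null) throw new Error('hook called outside render')
  hookCount++
}

type FC = () => void

// Render one component with a fresh hook counter; return the hooks it "owns".
function render(component: FC): number {
  const saved = hookCount
  hookCount = 0
  component()
  const used = hookCount
  hookCount = saved
  return used
}

const Icon: FC = () => {
  useHook() // e.g. an icon component that reads a theme context
}

// Direct call (the old `icon({ ... })` style): Icon's hook is charged to
// the parent, so the parent's hook count varies with the branch taken -
// exactly the instability the Rules of Hooks forbid.
const directCallHooks = render(() => {
  useHook()
  Icon()
})

// Deferred (what createElement arranges): the child renders separately,
// so the parent's hook count stays stable.
const deferredHooks = render(() => {
  useHook()
  render(Icon)
})
```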
@@ -7,13 +7,24 @@ import {
   useRef,
   useState,
 } from 'react'
+import { createLogger } from '@sim/logger'
 import { isEqual } from 'lodash'
-import { ChevronDown, ChevronsUpDown, ChevronUp, Plus } from 'lucide-react'
-import { Button, Popover, PopoverContent, PopoverItem, PopoverTrigger } from '@/components/emcn'
+import { ArrowLeftRight, ChevronDown, ChevronsUpDown, ChevronUp, Plus } from 'lucide-react'
+import { useParams } from 'next/navigation'
+import {
+  Button,
+  Popover,
+  PopoverContent,
+  PopoverItem,
+  PopoverTrigger,
+  Tooltip,
+} from '@/components/emcn'
 import { Trash } from '@/components/emcn/icons/trash'
 import { cn } from '@/lib/core/utils/cn'
 import { EnvVarDropdown } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/env-var-dropdown'
+import { FileUpload } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/file-upload/file-upload'
 import { formatDisplayText } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/formatted-text'
+import { ShortInput } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/short-input/short-input'
 import { TagDropdown } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/tag-dropdown/tag-dropdown'
 import { useSubBlockInput } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-input'
 import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-value'
@@ -21,19 +32,32 @@ import type { WandControlHandlers } from '@/app/workspace/[workspaceId]/w/[workf
 import { useAccessibleReferencePrefixes } from '@/app/workspace/[workspaceId]/w/[workflowId]/hooks/use-accessible-reference-prefixes'
 import { useWand } from '@/app/workspace/[workspaceId]/w/[workflowId]/hooks/use-wand'
 import type { SubBlockConfig } from '@/blocks/types'
+import { supportsVision } from '@/providers/utils'
+import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
+import { useSubBlockStore } from '@/stores/workflows/subblock/store'
+
+const logger = createLogger('MessagesInput')
 const MIN_TEXTAREA_HEIGHT_PX = 80
+
+/** Workspace file record from API */
+interface WorkspaceFile {
+  id: string
+  name: string
+  path: string
+  type: string
+}
 const MAX_TEXTAREA_HEIGHT_PX = 320
 /** Pattern to match complete message objects in JSON */
 const COMPLETE_MESSAGE_PATTERN =
-  /"role"\s*:\s*"(system|user|assistant)"[^}]*"content"\s*:\s*"((?:[^"\\]|\\.)*)"/g
+  /"role"\s*:\s*"(system|user|assistant|attachment)"[^}]*"content"\s*:\s*"((?:[^"\\]|\\.)*)"/g
 /** Pattern to match incomplete content at end of buffer */
 const INCOMPLETE_CONTENT_PATTERN = /"content"\s*:\s*"((?:[^"\\]|\\.)*)$/
 /** Pattern to match role before content */
-const ROLE_BEFORE_CONTENT_PATTERN = /"role"\s*:\s*"(system|user|assistant)"[^{]*$/
+const ROLE_BEFORE_CONTENT_PATTERN = /"role"\s*:\s*"(system|user|assistant|attachment)"[^{]*$/
 /**
  * Unescapes JSON string content
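The broadened patterns can be exercised against a partially streamed buffer. The two regexes below are copied from the hunk above; the buffer is illustrative:

```typescript
// Copied from the diff: matches complete { role, content } pairs globally.
const COMPLETE_MESSAGE_PATTERN =
  /"role"\s*:\s*"(system|user|assistant|attachment)"[^}]*"content"\s*:\s*"((?:[^"\\]|\\.)*)"/g

// Copied from the diff: matches a content string still being streamed.
const INCOMPLETE_CONTENT_PATTERN = /"content"\s*:\s*"((?:[^"\\]|\\.)*)$/

// Illustrative buffer: one complete message plus one mid-stream message.
const buffer =
  '[{"role": "system", "content": "You are helpful"}, {"role": "user", "content": "Hel'

const complete: Array<[string, string]> = []
COMPLETE_MESSAGE_PATTERN.lastIndex = 0 // reset: the pattern is global and stateful
let m: RegExpExecArray | null
while ((m = COMPLETE_MESSAGE_PATTERN.exec(buffer)) !== null) {
  complete.push([m[1], m[2]])
}

// The trailing "Hel" has no closing quote, so only the tail pattern sees it.
const partial = INCOMPLETE_CONTENT_PATTERN.exec(buffer)
```

Resetting `lastIndex` before the loop matters: a `g`-flagged regex resumes from wherever its previous `exec` left off, so reusing the module-level constant across buffers without a reset would silently skip matches.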
@@ -41,41 +65,46 @@ const ROLE_BEFORE_CONTENT_PATTERN = /"role"\s*:\s*"(system|user|assistant)"[^{]*
 const unescapeContent = (str: string): string =>
   str.replace(/\\n/g, '\n').replace(/\\"/g, '"').replace(/\\\\/g, '\\')
+
+/**
+ * Attachment content (files, images, documents)
+ */
+interface AttachmentContent {
+  /** Source type: how the data was provided */
+  sourceType: 'url' | 'base64' | 'file'
+  /** The URL or base64 data */
+  data: string
+  /** MIME type (e.g., 'image/png', 'application/pdf', 'audio/mp3') */
+  mimeType?: string
+  /** Optional filename for file uploads */
+  fileName?: string
+  /** Optional workspace file ID (used by wand to select existing files) */
+  fileId?: string
+}
 /**
  * Interface for individual message in the messages array
  */
 interface Message {
-  role: 'system' | 'user' | 'assistant'
+  role: 'system' | 'user' | 'assistant' | 'attachment'
   content: string
+  attachment?: AttachmentContent
 }
 /**
  * Props for the MessagesInput component
  */
 interface MessagesInputProps {
+  /** Unique identifier for the block */
   blockId: string
+  /** Unique identifier for the sub-block */
   subBlockId: string
+  /** Configuration object for the sub-block */
   config: SubBlockConfig
+  /** Whether component is in preview mode */
   isPreview?: boolean
+  /** Value to display in preview mode */
   previewValue?: Message[] | null
+  /** Whether the input is disabled */
   disabled?: boolean
+  /** Ref to expose wand control handlers to parent */
   wandControlRef?: React.MutableRefObject<WandControlHandlers | null>
 }
 /**
  * MessagesInput component for managing LLM message history
+ *
+ * @remarks
+ * - Manages an array of messages with role and content
+ * - Each message can be edited, removed, or reordered
+ * - Stores data in LLM-compatible format: [{ role, content }]
  */
 export function MessagesInput({
   blockId,
@@ -86,10 +115,163 @@ export function MessagesInput({
   disabled = false,
   wandControlRef,
 }: MessagesInputProps) {
+  const params = useParams()
+  const workspaceId = params?.workspaceId as string
   const [messages, setMessages] = useSubBlockValue<Message[]>(blockId, subBlockId, false)
   const [localMessages, setLocalMessages] = useState<Message[]>([{ role: 'user', content: '' }])
   const accessiblePrefixes = useAccessibleReferencePrefixes(blockId)
   const [openPopoverIndex, setOpenPopoverIndex] = useState<number | null>(null)
+  const { activeWorkflowId } = useWorkflowRegistry()
+
+  // Local attachment mode state - basic = FileUpload, advanced = URL/base64 textarea
+  const [attachmentMode, setAttachmentMode] = useState<'basic' | 'advanced'>('basic')
+
+  // Workspace files for wand context
+  const [workspaceFiles, setWorkspaceFiles] = useState<WorkspaceFile[]>([])
+
+  // Fetch workspace files for wand context
+  const loadWorkspaceFiles = useCallback(async () => {
+    if (!workspaceId || isPreview) return
+    try {
+      const response = await fetch(`/api/workspaces/${workspaceId}/files`)
+      const data = await response.json()
+      if (data.success) {
+        setWorkspaceFiles(data.files || [])
+      }
+    } catch (error) {
+      logger.error('Error loading workspace files:', error)
+    }
+  }, [workspaceId, isPreview])
+
+  // Load workspace files on mount
+  useEffect(() => {
+    void loadWorkspaceFiles()
+  }, [loadWorkspaceFiles])
+
+  // Build sources string for wand - available workspace files
+  const sourcesInfo = useMemo(() => {
+    if (workspaceFiles.length === 0) {
+      return 'No workspace files available. The user can upload files manually after generation.'
+    }
+    const filesList = workspaceFiles
+      .filter(
+        (f) =>
+          f.type.startsWith('image/') ||
+          f.type.startsWith('audio/') ||
+          f.type.startsWith('video/') ||
+          f.type === 'application/pdf'
+      )
+      .map((f) => `  - id: "${f.id}", name: "${f.name}", type: "${f.type}"`)
+      .join('\n')
+    if (!filesList) {
+      return 'No files in workspace. The user can upload files manually after generation.'
+    }
+    return `AVAILABLE WORKSPACE FILES (optional - you don't have to select one):\n${filesList}\n\nTo use a file, include "fileId": "<id>" in the attachment object. If not selecting a file, omit the fileId field.`
+  }, [workspaceFiles])
+
+  // Get indices of attachment messages for subscription
+  const attachmentIndices = useMemo(
+    () =>
+      localMessages
+        .map((msg, index) => (msg.role === 'attachment' ? index : -1))
+        .filter((i) => i !== -1),
+    [localMessages]
+  )
+
+  // Subscribe to model value to check vision capability
+  const modelSupportsVision = useSubBlockStore(
+    useCallback(
+      (state) => {
+        if (!activeWorkflowId) return true // Default to allowing attachments
+        const blockValues = state.workflowValues[activeWorkflowId]?.[blockId] ?? {}
+        const modelValue = blockValues.model as string | undefined
+        if (!modelValue) return true // No model selected, allow attachments
+        return supportsVision(modelValue)
+      },
+      [activeWorkflowId, blockId]
+    )
+  )
+
+  // Determine available roles based on model capabilities
+  const availableRoles = useMemo(() => {
+    const baseRoles: Array<'system' | 'user' | 'assistant' | 'attachment'> = [
+      'system',
+      'user',
+      'assistant',
+    ]
+    if (modelSupportsVision) {
+      baseRoles.push('attachment')
+    }
+    return baseRoles
+  }, [modelSupportsVision])
+
+  // Subscribe to file upload values for all attachment messages
+  const fileUploadValues = useSubBlockStore(
+    useCallback(
+      (state) => {
+        if (!activeWorkflowId) return {}
+        const blockValues = state.workflowValues[activeWorkflowId]?.[blockId] ?? {}
+        const result: Record<number, { name: string; path: string; type: string; size: number }> =
+          {}
+        for (const index of attachmentIndices) {
+          const fileUploadKey = `${subBlockId}-attachment-${index}`
+          const fileValue = blockValues[fileUploadKey]
+          if (fileValue && typeof fileValue === 'object' && 'path' in fileValue) {
+            result[index] = fileValue as { name: string; path: string; type: string; size: number }
+          }
+        }
+        return result
+      },
+      [activeWorkflowId, blockId, subBlockId, attachmentIndices]
+    )
+  )
+
+  // Effect to sync FileUpload values to message attachment objects
+  useEffect(() => {
+    if (!activeWorkflowId || isPreview) return
+    let hasChanges = false
+    const updatedMessages = localMessages.map((msg, index) => {
+      if (msg.role !== 'attachment') return msg
+      const uploadedFile = fileUploadValues[index]
+      if (uploadedFile) {
+        const newAttachment: AttachmentContent = {
+          sourceType: 'file',
+          data: uploadedFile.path,
+          mimeType: uploadedFile.type,
+          fileName: uploadedFile.name,
+        }
+        // Only update if different
+        if (
+          msg.attachment?.data !== newAttachment.data ||
+          msg.attachment?.sourceType !== newAttachment.sourceType ||
+          msg.attachment?.mimeType !== newAttachment.mimeType ||
+          msg.attachment?.fileName !== newAttachment.fileName
+        ) {
+          hasChanges = true
+          return {
+            ...msg,
+            content: uploadedFile.name || msg.content,
+            attachment: newAttachment,
+          }
+        }
+      }
+      return msg
+    })
+    if (hasChanges) {
+      setLocalMessages(updatedMessages)
+      setMessages(updatedMessages)
+    }
+  }, [activeWorkflowId, localMessages, isPreview, setMessages, fileUploadValues])
+
   const subBlockInput = useSubBlockInput({
     blockId,
     subBlockId,
@@ -98,43 +280,40 @@ export function MessagesInput({
     disabled,
   })
-  /**
-   * Gets the current messages as JSON string for wand context
-   */
   const getMessagesJson = useCallback((): string => {
     if (localMessages.length === 0) return ''
-    // Filter out empty messages for cleaner context
     const nonEmptyMessages = localMessages.filter((m) => m.content.trim() !== '')
     if (nonEmptyMessages.length === 0) return ''
     return JSON.stringify(nonEmptyMessages, null, 2)
   }, [localMessages])
-  /**
-   * Streaming buffer for accumulating JSON content
-   */
   const streamBufferRef = useRef<string>('')
-  /**
-   * Parses and validates messages from JSON content
-   */
   const parseMessages = useCallback((content: string): Message[] | null => {
     try {
       const parsed = JSON.parse(content)
       if (Array.isArray(parsed)) {
         const validMessages: Message[] = parsed
           .filter(
-            (m): m is { role: string; content: string } =>
+            (m): m is { role: string; content: string; attachment?: AttachmentContent } =>
               typeof m === 'object' &&
               m !== null &&
               typeof m.role === 'string' &&
               typeof m.content === 'string'
           )
-          .map((m) => ({
-            role: (['system', 'user', 'assistant'].includes(m.role)
-              ? m.role
-              : 'user') as Message['role'],
-            content: m.content,
-          }))
+          .map((m) => {
+            const role = ['system', 'user', 'assistant', 'attachment'].includes(m.role)
+              ? m.role
+              : 'user'
+            const message: Message = {
+              role: role as Message['role'],
+              content: m.content,
+            }
+            if (m.attachment) {
+              message.attachment = m.attachment
+            }
+            return message
+          })
         return validMessages.length > 0 ? validMessages : null
       }
     } catch {
@@ -143,26 +322,19 @@ export function MessagesInput({
     return null
   }, [])
-  /**
-   * Extracts messages from streaming JSON buffer
-   * Uses simple pattern matching for efficiency
-   */
   const extractStreamingMessages = useCallback(
     (buffer: string): Message[] => {
-      // Try complete JSON parse first
       const complete = parseMessages(buffer)
       if (complete) return complete
       const result: Message[] = []
-      // Reset regex lastIndex for global pattern
       COMPLETE_MESSAGE_PATTERN.lastIndex = 0
       let match
       while ((match = COMPLETE_MESSAGE_PATTERN.exec(buffer)) !== null) {
         result.push({ role: match[1] as Message['role'], content: unescapeContent(match[2]) })
       }
-      // Check for incomplete message at end (content still streaming)
       const lastContentIdx = buffer.lastIndexOf('"content"')
       if (lastContentIdx !== -1) {
         const tail = buffer.slice(lastContentIdx)
@@ -172,7 +344,6 @@ export function MessagesInput({
       const roleMatch = head.match(ROLE_BEFORE_CONTENT_PATTERN)
       if (roleMatch) {
         const content = unescapeContent(incomplete[1])
-        // Only add if not duplicate of last complete message
         if (result.length === 0 || result[result.length - 1].content !== content) {
           result.push({ role: roleMatch[1] as Message['role'], content })
         }
@@ -185,12 +356,10 @@ export function MessagesInput({
     [parseMessages]
   )
-  /**
-   * Wand hook for AI-assisted content generation
-   */
   const wandHook = useWand({
     wandConfig: config.wandConfig,
     currentValue: getMessagesJson(),
+    sources: sourcesInfo,
     onStreamStart: () => {
       streamBufferRef.current = ''
       setLocalMessages([{ role: 'system', content: '' }])
@@ -205,10 +374,50 @@ export function MessagesInput({
     onGeneratedContent: (content) => {
       const validMessages = parseMessages(content)
       if (validMessages) {
+        // Process attachment messages - only allow fileId to set files, sanitize other attempts
+        validMessages.forEach((msg, index) => {
+          if (msg.role === 'attachment') {
+            // Check if this is an existing file with valid data (preserve it)
+            const hasExistingFile =
+              msg.attachment?.sourceType === 'file' &&
+              msg.attachment?.data?.startsWith('/api/') &&
+              msg.attachment?.fileName
+            if (hasExistingFile) {
+              // Preserve existing file data as-is
+              return
+            }
+            // Check if wand provided a fileId to select a workspace file
+            if (msg.attachment?.fileId) {
+              const file = workspaceFiles.find((f) => f.id === msg.attachment?.fileId)
+              if (file) {
+                // Set the file value in SubBlockStore so FileUpload picks it up
+                const fileUploadKey = `${subBlockId}-attachment-${index}`
+                const uploadedFile = {
+                  name: file.name,
+                  path: file.path,
+                  type: file.type,
+                  size: 0, // Size not available from workspace files list
+                }
+                useSubBlockStore.getState().setValue(blockId, fileUploadKey, uploadedFile)
+                // Clear the attachment object - the FileUpload will sync the file data via useEffect
+                // DON'T set attachment.data here as it would appear in the ShortInput (advanced mode)
+                msg.attachment = undefined
+                return
+              }
+            }
+            // Sanitize: clear any attachment object that isn't a valid existing file or fileId match
+            // This prevents the LLM from setting arbitrary data/variable references
+            msg.attachment = undefined
+          }
+        })
         setLocalMessages(validMessages)
         setMessages(validMessages)
       } else {
-        // Fallback: treat as raw system prompt
         const trimmed = content.trim()
         if (trimmed) {
           const fallback: Message[] = [{ role: 'system', content: trimmed }]
@@ -219,9 +428,6 @@
     },
   })
-  /**
-   * Expose wand control handlers to parent via ref
-   */
   useImperativeHandle(
     wandControlRef,
     () => ({
@@ -249,9 +455,6 @@
     }
   }, [isPreview, previewValue, messages])
-  /**
-   * Gets the current messages array
-   */
   const currentMessages = useMemo<Message[]>(() => {
     if (isPreview && previewValue && Array.isArray(previewValue)) {
       return previewValue
@@ -269,9 +472,6 @@
     startHeight: number
   } | null>(null)
-  /**
-   * Updates a specific message's content
-   */
   const updateMessageContent = useCallback(
     (index: number, content: string) => {
       if (isPreview || disabled) return
@@ -287,17 +487,27 @@
     [localMessages, setMessages, isPreview, disabled]
   )
-  /**
-   * Updates a specific message's role
-   */
   const updateMessageRole = useCallback(
-    (index: number, role: 'system' | 'user' | 'assistant') => {
+    (index: number, role: 'system' | 'user' | 'assistant' | 'attachment') => {
       if (isPreview || disabled) return
       const updatedMessages = [...localMessages]
-      updatedMessages[index] = {
-        ...updatedMessages[index],
-        role,
+      if (role === 'attachment') {
+        updatedMessages[index] = {
+          ...updatedMessages[index],
+          role,
+          content: updatedMessages[index].content || '',
+          attachment: updatedMessages[index].attachment || {
+            sourceType: 'file',
+            data: '',
+          },
+        }
+      } else {
+        const { attachment: _, ...rest } = updatedMessages[index]
+        updatedMessages[index] = {
+          ...rest,
+          role,
+        }
       }
       setLocalMessages(updatedMessages)
       setMessages(updatedMessages)
@@ -305,9 +515,6 @@
     [localMessages, setMessages, isPreview, disabled]
   )
-  /**
-   * Adds a message after the specified index
-   */
   const addMessageAfter = useCallback(
     (index: number) => {
       if (isPreview || disabled) return
@@ -320,9 +527,6 @@
     [localMessages, setMessages, isPreview, disabled]
   )
-  /**
-   * Deletes a message at the specified index
-   */
   const deleteMessage = useCallback(
     (index: number) => {
       if (isPreview || disabled) return
@@ -335,9 +539,6 @@
     [localMessages, setMessages, isPreview, disabled]
   )
-  /**
-   * Moves a message up in the list
-   */
   const moveMessageUp = useCallback(
     (index: number) => {
       if (isPreview || disabled || index === 0) return
@@ -352,9 +553,6 @@
     [localMessages, setMessages, isPreview, disabled]
   )
-  /**
-   * Moves a message down in the list
-   */
   const moveMessageDown = useCallback(
     (index: number) => {
       if (isPreview || disabled || index === localMessages.length - 1) return
@@ -369,18 +567,11 @@
     [localMessages, setMessages, isPreview, disabled]
   )
-  /**
-   * Capitalizes the first letter of the role
-   */
   const formatRole = (role: string): string => {
     return role.charAt(0).toUpperCase() + role.slice(1)
   }
-  /**
-   * Handles header click to focus the textarea
-   */
   const handleHeaderClick = useCallback((index: number, e: React.MouseEvent) => {
-    // Don't focus if clicking on interactive elements
     const target = e.target as HTMLElement
     if (target.closest('button') || target.closest('[data-radix-popper-content-wrapper]')) {
       return
@@ -570,50 +761,52 @@
               className='flex cursor-pointer items-center justify-between px-[8px] pt-[6px]'
               onClick={(e) => handleHeaderClick(index, e)}
             >
-              <Popover
-                open={openPopoverIndex === index}
-                onOpenChange={(open) => setOpenPopoverIndex(open ? index : null)}
-              >
-                <PopoverTrigger asChild>
-                  <button
-                    type='button'
-                    disabled={isPreview || disabled}
-                    className={cn(
-                      'group -ml-1.5 -my-1 flex items-center gap-1 rounded px-1.5 py-1 font-medium text-[13px] text-[var(--text-primary)] leading-none transition-colors hover:bg-[var(--surface-5)] hover:text-[var(--text-secondary)]',
-                      (isPreview || disabled) &&
-                        'cursor-default hover:bg-transparent hover:text-[var(--text-primary)]'
-                    )}
-                    onClick={(e) => e.stopPropagation()}
-                    aria-label='Select message role'
-                  >
-                    {formatRole(message.role)}
-                    {!isPreview && !disabled && (
-                      <ChevronDown
-                        className={cn(
-                          'h-3 w-3 flex-shrink-0 transition-transform duration-100',
-                          openPopoverIndex === index && 'rotate-180'
-                        )}
-                      />
-                    )}
-                  </button>
-                </PopoverTrigger>
-                <PopoverContent minWidth={140} align='start'>
-                  <div className='flex flex-col gap-[2px]'>
-                    {(['system', 'user', 'assistant'] as const).map((role) => (
-                      <PopoverItem
-                        key={role}
-                        active={message.role === role}
-                        onClick={() => {
-                          updateMessageRole(index, role)
-                          setOpenPopoverIndex(null)
-                        }}
-                      >
-                        <span>{formatRole(role)}</span>
-                      </PopoverItem>
-                    ))}
-                  </div>
-                </PopoverContent>
-              </Popover>
+              <div className='flex items-center'>
+                <Popover
+                  open={openPopoverIndex === index}
+                  onOpenChange={(open) => setOpenPopoverIndex(open ? index : null)}
+                >
+                  <PopoverTrigger asChild>
+                    <button
+                      type='button'
+                      disabled={isPreview || disabled}
+                      className={cn(
+                        'group -ml-1.5 -my-1 flex items-center gap-1 rounded px-1.5 py-1 font-medium text-[13px] text-[var(--text-primary)] leading-none transition-colors hover:bg-[var(--surface-5)] hover:text-[var(--text-secondary)]',
+                        (isPreview || disabled) &&
+                          'cursor-default hover:bg-transparent hover:text-[var(--text-primary)]'
+                      )}
+                      onClick={(e) => e.stopPropagation()}
+                      aria-label='Select message role'
+                    >
+                      {formatRole(message.role)}
+                      {!isPreview && !disabled && (
+                        <ChevronDown
+                          className={cn(
+                            'h-3 w-3 flex-shrink-0 transition-transform duration-100',
+                            openPopoverIndex === index && 'rotate-180'
+                          )}
+                        />
+                      )}
+                    </button>
+                  </PopoverTrigger>
+                  <PopoverContent minWidth={140} align='start'>
+                    <div className='flex flex-col gap-[2px]'>
+                      {availableRoles.map((role) => (
+                        <PopoverItem
+                          key={role}
+                          active={message.role === role}
+                          onClick={() => {
+                            updateMessageRole(index, role)
+                            setOpenPopoverIndex(null)
+                          }}
+                        >
+                          <span>{formatRole(role)}</span>
+                        </PopoverItem>
+                      ))}
+                    </div>
+                  </PopoverContent>
+                </Popover>
+              </div>
               {!isPreview && !disabled && (
                 <div className='flex items-center'>
@@ -657,6 +850,43 @@ export function MessagesInput({
</Button> </Button>
</> </>
)} )}
{/* Mode toggle for attachment messages */}
{message.role === 'attachment' && (
<Tooltip.Root>
<Tooltip.Trigger asChild>
<Button
variant='ghost'
onClick={(e: React.MouseEvent) => {
e.stopPropagation()
setAttachmentMode((m) => (m === 'basic' ? 'advanced' : 'basic'))
}}
disabled={disabled}
className='-my-1 -mr-1 h-6 w-6 p-0'
aria-label={
attachmentMode === 'advanced'
? 'Switch to file upload'
: 'Switch to URL/text input'
}
>
<ArrowLeftRight
className={cn(
'h-3 w-3',
attachmentMode === 'advanced'
? 'text-[var(--text-primary)]'
: 'text-[var(--text-secondary)]'
)}
/>
</Button>
</Tooltip.Trigger>
<Tooltip.Content side='top'>
<p>
{attachmentMode === 'advanced'
? 'Switch to file upload'
: 'Switch to URL/text input'}
</p>
</Tooltip.Content>
</Tooltip.Root>
)}
<Button <Button
variant='ghost' variant='ghost'
onClick={(e: React.MouseEvent) => { onClick={(e: React.MouseEvent) => {
@@ -673,98 +903,152 @@ export function MessagesInput({
)} )}
</div> </div>
-      {/* Content Input with overlay for variable highlighting */}
-      <div className='relative w-full overflow-hidden'>
-        <textarea
-          ref={(el) => {
-            textareaRefs.current[fieldId] = el
-          }}
-          className='relative z-[2] m-0 box-border h-auto min-h-[80px] w-full resize-none overflow-y-auto overflow-x-hidden whitespace-pre-wrap break-words border-none bg-transparent px-[8px] py-[8px] font-medium font-sans text-sm text-transparent leading-[1.5] caret-[var(--text-primary)] outline-none [-ms-overflow-style:none] [scrollbar-width:none] placeholder:text-[var(--text-muted)] focus:outline-none focus-visible:outline-none disabled:cursor-not-allowed [&::-webkit-scrollbar]:hidden'
-          placeholder='Enter message content...'
-          value={message.content}
-          onChange={fieldHandlers.onChange}
-          onKeyDown={(e) => {
-            if (e.key === 'Tab' && !isPreview && !disabled) {
-              e.preventDefault()
-              const direction = e.shiftKey ? -1 : 1
-              const nextIndex = index + direction
-
-              if (nextIndex >= 0 && nextIndex < currentMessages.length) {
-                const nextFieldId = `message-${nextIndex}`
-                const nextTextarea = textareaRefs.current[nextFieldId]
-                if (nextTextarea) {
-                  nextTextarea.focus()
-                  nextTextarea.selectionStart = nextTextarea.value.length
-                  nextTextarea.selectionEnd = nextTextarea.value.length
-                }
-              }
-              return
-            }
-
-            fieldHandlers.onKeyDown(e)
-          }}
-          onDrop={fieldHandlers.onDrop}
-          onDragOver={fieldHandlers.onDragOver}
-          onFocus={fieldHandlers.onFocus}
-          onScroll={(e) => {
-            const overlay = overlayRefs.current[fieldId]
-            if (overlay) {
-              overlay.scrollTop = e.currentTarget.scrollTop
-              overlay.scrollLeft = e.currentTarget.scrollLeft
-            }
-          }}
-          disabled={isPreview || disabled}
-        />
-        <div
-          ref={(el) => {
-            overlayRefs.current[fieldId] = el
-          }}
-          className='pointer-events-none absolute top-0 left-0 z-[1] m-0 box-border w-full overflow-y-auto overflow-x-hidden whitespace-pre-wrap break-words border-none bg-transparent px-[8px] py-[8px] font-medium font-sans text-[var(--text-primary)] text-sm leading-[1.5] [-ms-overflow-style:none] [scrollbar-width:none] [&::-webkit-scrollbar]:hidden'
-        >
-          {formatDisplayText(message.content, {
-            accessiblePrefixes,
-            highlightAll: !accessiblePrefixes,
-          })}
-          {message.content.endsWith('\n') && '\u200B'}
-        </div>
-        {/* Env var dropdown for this message */}
-        <EnvVarDropdown
-          visible={fieldState.showEnvVars && !isPreview && !disabled}
-          onSelect={handleEnvSelect}
-          searchTerm={fieldState.searchTerm}
-          inputValue={message.content}
-          cursorPosition={fieldState.cursorPosition}
-          onClose={() => subBlockInput.fieldHelpers.hideFieldDropdowns(fieldId)}
-          workspaceId={subBlockInput.workspaceId}
-          maxHeight='192px'
-          inputRef={textareaRefObject}
-        />
-        {/* Tag dropdown for this message */}
-        <TagDropdown
-          visible={fieldState.showTags && !isPreview && !disabled}
-          onSelect={handleTagSelect}
-          blockId={blockId}
-          activeSourceBlockId={fieldState.activeSourceBlockId}
-          inputValue={message.content}
-          cursorPosition={fieldState.cursorPosition}
-          onClose={() => subBlockInput.fieldHelpers.hideFieldDropdowns(fieldId)}
-          inputRef={textareaRefObject}
-        />
-        {!isPreview && !disabled && (
-          <div
-            className='absolute right-1 bottom-1 z-[3] flex h-4 w-4 cursor-ns-resize items-center justify-center rounded-[4px] border border-[var(--border-1)] bg-[var(--surface-5)] dark:bg-[var(--surface-5)]'
-            onMouseDown={(e) => handleResizeStart(fieldId, e)}
-            onDragStart={(e) => {
-              e.preventDefault()
-            }}
-          >
-            <ChevronsUpDown className='h-3 w-3 text-[var(--text-muted)]' />
-          </div>
-        )}
-      </div>
+      {/* Content Input - different for attachment vs text messages */}
+      {message.role === 'attachment' ? (
+        <div className='relative w-full px-[8px] py-[8px]'>
+          {attachmentMode === 'basic' ? (
+            <FileUpload
+              blockId={blockId}
+              subBlockId={`${subBlockId}-attachment-${index}`}
+              acceptedTypes='image/*,audio/*,video/*,application/pdf,.doc,.docx,.txt'
+              multiple={false}
+              isPreview={isPreview}
+              disabled={disabled}
+            />
+          ) : (
+            <ShortInput
+              blockId={blockId}
+              subBlockId={`${subBlockId}-attachment-ref-${index}`}
+              placeholder='Reference file from previous block...'
+              config={{
+                id: `${subBlockId}-attachment-ref-${index}`,
+                type: 'short-input',
+              }}
+              value={
+                // Only show value for variable references, not file uploads
+                message.attachment?.sourceType === 'file'
+                  ? ''
+                  : message.attachment?.data || ''
+              }
+              onChange={(newValue: string) => {
+                const updatedMessages = [...localMessages]
+                if (updatedMessages[index].role === 'attachment') {
+                  // Determine sourceType based on content
+                  let sourceType: 'url' | 'base64' = 'url'
+                  if (newValue.startsWith('data:') || newValue.includes(';base64,')) {
+                    sourceType = 'base64'
+                  }
+                  updatedMessages[index] = {
+                    ...updatedMessages[index],
+                    content: newValue.substring(0, 50),
+                    attachment: {
+                      ...updatedMessages[index].attachment,
+                      sourceType,
+                      data: newValue,
+                    },
+                  }
+                  setLocalMessages(updatedMessages)
+                  setMessages(updatedMessages)
+                }
+              }}
+              isPreview={isPreview}
+              disabled={disabled}
+            />
+          )}
+        </div>
+      ) : (
+        <div className='relative w-full overflow-hidden'>
+          <textarea
+            ref={(el) => {
+              textareaRefs.current[fieldId] = el
+            }}
+            className='relative z-[2] m-0 box-border h-auto min-h-[80px] w-full resize-none overflow-y-auto overflow-x-hidden whitespace-pre-wrap break-words border-none bg-transparent px-[8px] py-[8px] font-medium font-sans text-sm text-transparent leading-[1.5] caret-[var(--text-primary)] outline-none [-ms-overflow-style:none] [scrollbar-width:none] placeholder:text-[var(--text-muted)] focus:outline-none focus-visible:outline-none disabled:cursor-not-allowed [&::-webkit-scrollbar]:hidden'
+            placeholder='Enter message content...'
+            value={message.content}
+            onChange={fieldHandlers.onChange}
+            onKeyDown={(e) => {
+              if (e.key === 'Tab' && !isPreview && !disabled) {
+                e.preventDefault()
+                const direction = e.shiftKey ? -1 : 1
+                const nextIndex = index + direction
+                if (nextIndex >= 0 && nextIndex < currentMessages.length) {
+                  const nextFieldId = `message-${nextIndex}`
+                  const nextTextarea = textareaRefs.current[nextFieldId]
+                  if (nextTextarea) {
+                    nextTextarea.focus()
+                    nextTextarea.selectionStart = nextTextarea.value.length
+                    nextTextarea.selectionEnd = nextTextarea.value.length
+                  }
+                }
+                return
+              }
+              fieldHandlers.onKeyDown(e)
+            }}
+            onDrop={fieldHandlers.onDrop}
+            onDragOver={fieldHandlers.onDragOver}
+            onFocus={fieldHandlers.onFocus}
+            onScroll={(e) => {
+              const overlay = overlayRefs.current[fieldId]
+              if (overlay) {
+                overlay.scrollTop = e.currentTarget.scrollTop
+                overlay.scrollLeft = e.currentTarget.scrollLeft
+              }
+            }}
+            disabled={isPreview || disabled}
+          />
+          <div
+            ref={(el) => {
+              overlayRefs.current[fieldId] = el
+            }}
+            className='pointer-events-none absolute top-0 left-0 z-[1] m-0 box-border w-full overflow-y-auto overflow-x-hidden whitespace-pre-wrap break-words border-none bg-transparent px-[8px] py-[8px] font-medium font-sans text-[var(--text-primary)] text-sm leading-[1.5] [-ms-overflow-style:none] [scrollbar-width:none] [&::-webkit-scrollbar]:hidden'
+          >
+            {formatDisplayText(message.content, {
+              accessiblePrefixes,
+              highlightAll: !accessiblePrefixes,
+            })}
+            {message.content.endsWith('\n') && '\u200B'}
+          </div>
+          {/* Env var dropdown for this message */}
+          <EnvVarDropdown
+            visible={fieldState.showEnvVars && !isPreview && !disabled}
+            onSelect={handleEnvSelect}
+            searchTerm={fieldState.searchTerm}
+            inputValue={message.content}
+            cursorPosition={fieldState.cursorPosition}
+            onClose={() => subBlockInput.fieldHelpers.hideFieldDropdowns(fieldId)}
+            workspaceId={subBlockInput.workspaceId}
+            maxHeight='192px'
+            inputRef={textareaRefObject}
+          />
+          {/* Tag dropdown for this message */}
+          <TagDropdown
+            visible={fieldState.showTags && !isPreview && !disabled}
+            onSelect={handleTagSelect}
+            blockId={blockId}
+            activeSourceBlockId={fieldState.activeSourceBlockId}
+            inputValue={message.content}
+            cursorPosition={fieldState.cursorPosition}
+            onClose={() => subBlockInput.fieldHelpers.hideFieldDropdowns(fieldId)}
+            inputRef={textareaRefObject}
+          />
+          {!isPreview && !disabled && (
+            <div
+              className='absolute right-1 bottom-1 z-[3] flex h-4 w-4 cursor-ns-resize items-center justify-center rounded-[4px] border border-[var(--border-1)] bg-[var(--surface-5)] dark:bg-[var(--surface-5)]'
+              onMouseDown={(e) => handleResizeStart(fieldId, e)}
+              onDragStart={(e) => {
+                e.preventDefault()
+              }}
+            >
+              <ChevronsUpDown className='h-3 w-3 text-[var(--text-muted)]' />
+            </div>
+          )}
+        </div>
+      )}
       </>
     )
   })()}
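The attachment `onChange` handler above infers whether a typed value is a URL or a base64 payload before storing it. As a standalone sketch (the `detectSourceType` helper name is illustrative, not part of this diff):

```typescript
// Hypothetical extraction of the heuristic used in the attachment onChange
// handler: data URIs ('data:...') and strings containing ';base64,' are
// treated as base64 payloads; anything else is assumed to be a URL.
function detectSourceType(value: string): 'url' | 'base64' {
  return value.startsWith('data:') || value.includes(';base64,') ? 'base64' : 'url'
}
```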


@@ -23,6 +23,7 @@ interface SelectorComboboxProps {
   readOnly?: boolean
   onOptionChange?: (value: string) => void
   allowSearch?: boolean
+  missingOptionLabel?: string
 }
 export function SelectorCombobox({
@@ -37,6 +38,7 @@ export function SelectorCombobox({
   readOnly,
   onOptionChange,
   allowSearch = true,
+  missingOptionLabel,
 }: SelectorComboboxProps) {
   const [storeValueRaw, setStoreValue] = useSubBlockValue<string | null | undefined>(
     blockId,
@@ -60,7 +62,16 @@ export function SelectorCombobox({
     detailId: activeValue,
   })
   const optionMap = useSelectorOptionMap(options, detailOption ?? undefined)
-  const selectedLabel = activeValue ? (optionMap.get(activeValue)?.label ?? activeValue) : ''
+  const hasMissingOption =
+    Boolean(activeValue) &&
+    Boolean(missingOptionLabel) &&
+    !isLoading &&
+    !optionMap.get(activeValue!)
+  const selectedLabel = activeValue
+    ? hasMissingOption
+      ? missingOptionLabel
+      : (optionMap.get(activeValue)?.label ?? activeValue)
+    : ''
   const [inputValue, setInputValue] = useState(selectedLabel)
   const previousActiveValue = useRef<string | undefined>(activeValue)
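The missing-option fallback above reduces to a small pure function: when a stored value no longer resolves to a loaded option (and loading has finished), show the fallback label instead of the raw ID. A sketch under assumed types (`resolveSelectedLabel` and the option shape are illustrative, not from this diff):

```typescript
// Mirrors the hasMissingOption/selectedLabel logic from SelectorCombobox:
// - no selection -> empty label
// - selection present but absent from options after loading -> fallback label
// - otherwise -> the option's label, or the raw value if no label exists
function resolveSelectedLabel(
  activeValue: string | undefined,
  optionMap: Map<string, { label: string }>,
  isLoading: boolean,
  missingOptionLabel?: string
): string {
  if (!activeValue) return ''
  const hasMissingOption =
    Boolean(missingOptionLabel) && !isLoading && !optionMap.get(activeValue)
  return hasMissingOption
    ? (missingOptionLabel as string)
    : (optionMap.get(activeValue)?.label ?? activeValue)
}
```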


@@ -1,4 +1,4 @@
-import { useCallback, useEffect, useMemo, useState } from 'react'
+import { createElement, useCallback, useEffect, useMemo, useState } from 'react'
 import { ExternalLink } from 'lucide-react'
 import { Button, Combobox } from '@/components/emcn/components'
 import {
@@ -22,7 +22,7 @@ const getProviderIcon = (providerName: OAuthProvider) => {
   if (!baseProviderConfig) {
     return <ExternalLink className='h-3 w-3' />
   }
-  return baseProviderConfig.icon({ className: 'h-3 w-3' })
+  return createElement(baseProviderConfig.icon, { className: 'h-3 w-3' })
 }
 const getProviderName = (providerName: OAuthProvider) => {


@@ -1,6 +1,7 @@
 'use client'
 import { useMemo } from 'react'
+import { DELETED_WORKFLOW_LABEL } from '@/app/workspace/[workspaceId]/logs/utils'
 import { SelectorCombobox } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/selector-combobox/selector-combobox'
 import type { SubBlockConfig } from '@/blocks/types'
 import type { SelectorContext } from '@/hooks/selectors/types'
@@ -40,6 +41,7 @@ export function WorkflowSelectorInput({
       isPreview={isPreview}
       previewValue={previewValue}
       placeholder={subBlock.placeholder || 'Select workflow...'}
+      missingOptionLabel={DELETED_WORKFLOW_LABEL}
     />
   )
 }


@@ -63,6 +63,8 @@ export interface WandConfig {
 interface UseWandProps {
   wandConfig?: WandConfig
   currentValue?: string
+  /** Additional context about available sources/references for the prompt */
+  sources?: string
   onGeneratedContent: (content: string) => void
   onStreamChunk?: (chunk: string) => void
   onStreamStart?: () => void
@@ -72,6 +74,7 @@ interface UseWandProps {
 export function useWand({
   wandConfig,
   currentValue,
+  sources,
   onGeneratedContent,
   onStreamChunk,
   onStreamStart,
@@ -154,6 +157,12 @@ export function useWand({
       if (systemPrompt.includes('{context}')) {
         systemPrompt = systemPrompt.replace('{context}', contextInfo)
       }
+      if (systemPrompt.includes('{sources}')) {
+        systemPrompt = systemPrompt.replace(
+          '{sources}',
+          sources || 'No upstream sources available'
+        )
+      }
       const userMessage = prompt
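The placeholder substitution above only rewrites tokens that actually appear in the template, with `{sources}` falling back to a default string. A self-contained sketch (`buildSystemPrompt` is an illustrative name, not part of the hook):

```typescript
// Mirrors the useWand template handling: {context} and {sources} are each
// replaced only when present, and {sources} gets a default when no upstream
// sources are provided.
function buildSystemPrompt(
  template: string,
  contextInfo: string,
  sources?: string
): string {
  let systemPrompt = template
  if (systemPrompt.includes('{context}')) {
    systemPrompt = systemPrompt.replace('{context}', contextInfo)
  }
  if (systemPrompt.includes('{sources}')) {
    systemPrompt = systemPrompt.replace(
      '{sources}',
      sources || 'No upstream sources available'
    )
  }
  return systemPrompt
}
```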


@@ -4,11 +4,14 @@ import { useCallback, useEffect, useMemo, useRef, useState } from 'react'
 import {
   ArrowDown,
   ArrowUp,
+  Check,
   ChevronDown as ChevronDownIcon,
   ChevronUp,
+  Clipboard,
   ExternalLink,
   Maximize2,
   RepeatIcon,
+  Search,
   SplitIcon,
   X,
 } from 'lucide-react'
@@ -34,6 +37,7 @@ import {
   isSubBlockFeatureEnabled,
   isSubBlockVisibleForMode,
 } from '@/lib/workflows/subblocks/visibility'
+import { DELETED_WORKFLOW_LABEL } from '@/app/workspace/[workspaceId]/logs/utils'
 import { SubBlock } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components'
 import { PreviewContextMenu } from '@/app/workspace/[workspaceId]/w/components/preview/components/preview-context-menu'
 import { PreviewWorkflow } from '@/app/workspace/[workspaceId]/w/components/preview/components/preview-workflow'
@@ -690,6 +694,7 @@ interface ExecutionData {
   output?: unknown
   status?: string
   durationMs?: number
+  childWorkflowSnapshotId?: string
 }
 interface WorkflowVariable {
@@ -714,6 +719,8 @@ interface PreviewEditorProps {
   parallels?: Record<string, Parallel>
   /** When true, shows "Not Executed" badge if no executionData is provided */
   isExecutionMode?: boolean
+  /** Child workflow snapshots keyed by snapshot ID (execution mode only) */
+  childWorkflowSnapshots?: Record<string, WorkflowState>
   /** Optional close handler - if not provided, no close button is shown */
   onClose?: () => void
   /** Callback to drill down into a nested workflow block */
@@ -739,6 +746,7 @@ function PreviewEditorContent({
   loops,
   parallels,
   isExecutionMode = false,
+  childWorkflowSnapshots,
   onClose,
   onDrillDown,
 }: PreviewEditorProps) {
@@ -768,17 +776,35 @@ function PreviewEditorContent({
   const { data: childWorkflowState, isLoading: isLoadingChildWorkflow } = useWorkflowState(
     childWorkflowId ?? undefined
   )
+  const childWorkflowSnapshotId = executionData?.childWorkflowSnapshotId
+  const childWorkflowSnapshotState = childWorkflowSnapshotId
+    ? childWorkflowSnapshots?.[childWorkflowSnapshotId]
+    : undefined
+  const resolvedChildWorkflowState = isExecutionMode
+    ? childWorkflowSnapshotState
+    : childWorkflowState
+  const resolvedIsLoadingChildWorkflow = isExecutionMode ? false : isLoadingChildWorkflow
+  const isMissingChildWorkflow =
+    Boolean(childWorkflowId) && !resolvedIsLoadingChildWorkflow && !resolvedChildWorkflowState
   /** Drills down into the child workflow or opens it in a new tab */
   const handleExpandChildWorkflow = useCallback(() => {
-    if (!childWorkflowId || !childWorkflowState) return
+    if (!childWorkflowId) return
     if (isExecutionMode && onDrillDown) {
-      onDrillDown(block.id, childWorkflowState)
+      if (!childWorkflowSnapshotState) return
+      onDrillDown(block.id, childWorkflowSnapshotState)
     } else if (workspaceId) {
       window.open(`/workspace/${workspaceId}/w/${childWorkflowId}`, '_blank', 'noopener,noreferrer')
     }
-  }, [childWorkflowId, childWorkflowState, isExecutionMode, onDrillDown, block.id, workspaceId])
+  }, [
+    childWorkflowId,
+    childWorkflowSnapshotState,
+    isExecutionMode,
+    onDrillDown,
+    block.id,
+    workspaceId,
+  ])
   const contentRef = useRef<HTMLDivElement>(null)
   const subBlocksRef = useRef<HTMLDivElement>(null)
@@ -813,6 +839,13 @@ function PreviewEditorContent({
   } = useContextMenu()
   const [contextMenuData, setContextMenuData] = useState({ content: '', copyOnly: false })
+  const [copiedSection, setCopiedSection] = useState<'input' | 'output' | null>(null)
+
+  const handleCopySection = useCallback((content: string, section: 'input' | 'output') => {
+    navigator.clipboard.writeText(content)
+    setCopiedSection(section)
+    setTimeout(() => setCopiedSection(null), 1500)
+  }, [])
   const openContextMenu = useCallback(
     (e: React.MouseEvent, content: string, copyOnly: boolean) => {
@@ -862,9 +895,6 @@
     }
   }, [contextMenuData.content])
-  /**
-   * Handles mouse down event on the resize handle to initiate resizing
-   */
   const handleConnectionsResizeMouseDown = useCallback(
     (e: React.MouseEvent) => {
       setIsResizing(true)
@@ -874,18 +904,12 @@
     [connectionsHeight]
   )
-  /**
-   * Toggle connections collapsed state
-   */
   const toggleConnectionsCollapsed = useCallback(() => {
     setConnectionsHeight((prev) =>
       prev <= MIN_CONNECTIONS_HEIGHT ? DEFAULT_CONNECTIONS_HEIGHT : MIN_CONNECTIONS_HEIGHT
     )
   }, [])
-  /**
-   * Sets up resize event listeners during resize operations
-   */
   useEffect(() => {
     if (!isResizing) return
@@ -1205,7 +1229,11 @@
             }
             emptyMessage='No input data'
           >
-            <div onContextMenu={handleExecutionContextMenu} ref={contentRef}>
+            <div
+              onContextMenu={handleExecutionContextMenu}
+              ref={contentRef}
+              className='relative'
+            >
               <Code.Viewer
                 code={formatValueAsJson(executionData.input)}
                 language='json'
@@ -1215,6 +1243,49 @@
                 currentMatchIndex={currentMatchIndex}
                 onMatchCountChange={handleMatchCountChange}
               />
+              {/* Action buttons overlay */}
+              {!isSearchActive && (
+                <div className='absolute top-[7px] right-[6px] z-10 flex gap-[4px]'>
+                  <Tooltip.Root>
+                    <Tooltip.Trigger asChild>
+                      <Button
+                        type='button'
+                        variant='ghost'
+                        onClick={(e) => {
+                          e.stopPropagation()
+                          handleCopySection(formatValueAsJson(executionData.input), 'input')
+                        }}
+                        className='h-[20px] w-[20px] cursor-pointer border border-[var(--border-1)] bg-transparent p-0 backdrop-blur-sm hover:bg-[var(--surface-4)]'
+                      >
+                        {copiedSection === 'input' ? (
+                          <Check className='h-[10px] w-[10px] text-[var(--text-success)]' />
+                        ) : (
+                          <Clipboard className='h-[10px] w-[10px]' />
+                        )}
+                      </Button>
+                    </Tooltip.Trigger>
+                    <Tooltip.Content side='top'>
+                      {copiedSection === 'input' ? 'Copied' : 'Copy'}
+                    </Tooltip.Content>
+                  </Tooltip.Root>
+                  <Tooltip.Root>
+                    <Tooltip.Trigger asChild>
+                      <Button
+                        type='button'
+                        variant='ghost'
+                        onClick={(e) => {
+                          e.stopPropagation()
+                          activateSearch()
+                        }}
+                        className='h-[20px] w-[20px] cursor-pointer border border-[var(--border-1)] bg-transparent p-0 backdrop-blur-sm hover:bg-[var(--surface-4)]'
+                      >
+                        <Search className='h-[10px] w-[10px]' />
+                      </Button>
+                    </Tooltip.Trigger>
+                    <Tooltip.Content side='top'>Search</Tooltip.Content>
+                  </Tooltip.Root>
+                </div>
+              )}
             </div>
           </CollapsibleSection>
         )}
@@ -1231,7 +1302,7 @@
             emptyMessage='No output data'
             isError={executionData.status === 'error'}
           >
-            <div onContextMenu={handleExecutionContextMenu}>
+            <div onContextMenu={handleExecutionContextMenu} className='relative'>
              <Code.Viewer
                code={formatValueAsJson(executionData.output)}
                language='json'
@@ -1244,6 +1315,49 @@
                currentMatchIndex={currentMatchIndex}
                onMatchCountChange={handleMatchCountChange}
              />
+              {/* Action buttons overlay */}
+              {!isSearchActive && (
+                <div className='absolute top-[7px] right-[6px] z-10 flex gap-[4px]'>
+                  <Tooltip.Root>
+                    <Tooltip.Trigger asChild>
+                      <Button
+                        type='button'
+                        variant='ghost'
+                        onClick={(e) => {
+                          e.stopPropagation()
+                          handleCopySection(formatValueAsJson(executionData.output), 'output')
+                        }}
+                        className='h-[20px] w-[20px] cursor-pointer border border-[var(--border-1)] bg-transparent p-0 backdrop-blur-sm hover:bg-[var(--surface-4)]'
+                      >
+                        {copiedSection === 'output' ? (
+                          <Check className='h-[10px] w-[10px] text-[var(--text-success)]' />
+                        ) : (
+                          <Clipboard className='h-[10px] w-[10px]' />
+                        )}
+                      </Button>
+                    </Tooltip.Trigger>
+                    <Tooltip.Content side='top'>
+                      {copiedSection === 'output' ? 'Copied' : 'Copy'}
+                    </Tooltip.Content>
+                  </Tooltip.Root>
+                  <Tooltip.Root>
+                    <Tooltip.Trigger asChild>
+                      <Button
+                        type='button'
+                        variant='ghost'
+                        onClick={(e) => {
+                          e.stopPropagation()
+                          activateSearch()
+                        }}
+                        className='h-[20px] w-[20px] cursor-pointer border border-[var(--border-1)] bg-transparent p-0 backdrop-blur-sm hover:bg-[var(--surface-4)]'
+                      >
+                        <Search className='h-[10px] w-[10px]' />
+                      </Button>
+                    </Tooltip.Trigger>
+                    <Tooltip.Content side='top'>Search</Tooltip.Content>
+                  </Tooltip.Root>
+                </div>
+              )}
             </div>
           </CollapsibleSection>
         )}
@@ -1256,7 +1370,7 @@
               Workflow Preview
             </div>
             <div className='relative h-[160px] overflow-hidden rounded-[4px] border border-[var(--border)]'>
-              {isLoadingChildWorkflow ? (
+              {resolvedIsLoadingChildWorkflow ? (
                 <div className='flex h-full items-center justify-center bg-[var(--surface-3)]'>
                   <div
                     className='h-[18px] w-[18px] animate-spin rounded-full'
@@ -1269,11 +1383,11 @@
                     }}
                   />
                 </div>
-              ) : childWorkflowState ? (
+              ) : resolvedChildWorkflowState ? (
                 <>
                   <div className='[&_*:active]:!cursor-grabbing [&_*]:!cursor-grab [&_.react-flow__handle]:!hidden h-full w-full'>
                     <PreviewWorkflow
-                      workflowState={childWorkflowState}
+                      workflowState={resolvedChildWorkflowState}
                       height={160}
                       width='100%'
                       isPannable={true}
@@ -1305,7 +1419,9 @@
               ) : (
                 <div className='flex h-full items-center justify-center bg-[var(--surface-3)]'>
                   <span className='text-[13px] text-[var(--text-tertiary)]'>
-                    Unable to load preview
+                    {isMissingChildWorkflow
+                      ? DELETED_WORKFLOW_LABEL
+                      : 'Unable to load preview'}
                   </span>
                 </div>
               )}
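The snapshot-resolution logic introduced above amounts to: in execution mode only trust the immutable snapshot map keyed by `childWorkflowSnapshotId`, otherwise fall back to the live workflow fetch. A sketch with a deliberately simplified `WorkflowState` (the real type is much richer; names here only mirror the diff):

```typescript
// Simplified stand-in for the component's WorkflowState (assumption for the
// sketch only).
type WorkflowState = { blocks: string[] }

// In execution mode, a missing snapshot yields undefined (rendered as the
// deleted-workflow label); outside execution mode the live state is used.
function resolveChildWorkflow(
  isExecutionMode: boolean,
  snapshotId: string | undefined,
  snapshots: Record<string, WorkflowState>,
  liveState: WorkflowState | undefined
): WorkflowState | undefined {
  const snapshotState = snapshotId ? snapshots[snapshotId] : undefined
  return isExecutionMode ? snapshotState : liveState
}
```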


@@ -9,6 +9,7 @@ import {
   isSubBlockFeatureEnabled,
   isSubBlockVisibleForMode,
 } from '@/lib/workflows/subblocks/visibility'
+import { DELETED_WORKFLOW_LABEL } from '@/app/workspace/[workspaceId]/logs/utils'
 import { getDisplayValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/workflow-block/workflow-block'
 import { getBlock } from '@/blocks'
 import { SELECTOR_TYPES_HYDRATION_REQUIRED, type SubBlockConfig } from '@/blocks/types'
@@ -112,7 +113,7 @@ function resolveWorkflowName(
   if (!rawValue || typeof rawValue !== 'string') return null
   const workflowMap = useWorkflowRegistry.getState().workflows
-  return workflowMap[rawValue]?.name ?? null
+  return workflowMap[rawValue]?.name ?? DELETED_WORKFLOW_LABEL
 }
 /**


@@ -19,6 +19,8 @@ interface TraceSpan {
   status?: string
   duration?: number
   children?: TraceSpan[]
+  childWorkflowSnapshotId?: string
+  childWorkflowId?: string
 }
 interface BlockExecutionData {
@@ -28,6 +30,7 @@ interface BlockExecutionData {
   durationMs: number
   /** Child trace spans for nested workflow blocks */
   children?: TraceSpan[]
+  childWorkflowSnapshotId?: string
 }
 /** Represents a level in the workflow navigation stack */
@@ -35,6 +38,7 @@ interface WorkflowStackEntry {
   workflowState: WorkflowState
   traceSpans: TraceSpan[]
   blockExecutions: Record<string, BlockExecutionData>
+  workflowName: string
 }
 /**
@@ -89,6 +93,7 @@ export function buildBlockExecutions(spans: TraceSpan[]): Record<string, BlockEx
         status: span.status || 'unknown',
         durationMs: span.duration || 0,
         children: span.children,
+        childWorkflowSnapshotId: span.childWorkflowSnapshotId,
       }
     }
   }
@@ -103,6 +108,8 @@ interface PreviewProps {
   traceSpans?: TraceSpan[]
   /** Pre-computed block executions (optional - will be built from traceSpans if not provided) */
   blockExecutions?: Record<string, BlockExecutionData>
+  /** Child workflow snapshots keyed by snapshot ID (execution mode only) */
+  childWorkflowSnapshots?: Record<string, WorkflowState>
   /** Additional CSS class names */
   className?: string
   /** Height of the component */
@@ -135,6 +142,7 @@ export function Preview({
   workflowState: rootWorkflowState,
   traceSpans: rootTraceSpans,
   blockExecutions: providedBlockExecutions,
+  childWorkflowSnapshots,
   className,
   height = '100%',
   width = '100%',
@@ -144,7 +152,6 @@ export function Preview({
   initialSelectedBlockId,
   autoSelectLeftmost = true,
 }: PreviewProps) {
-  /** Initialize pinnedBlockId synchronously to ensure sidebar is present from first render */
   const [pinnedBlockId, setPinnedBlockId] = useState<string | null>(() => {
     if (initialSelectedBlockId) return initialSelectedBlockId
     if (autoSelectLeftmost) {
@@ -153,17 +160,14 @@ export function Preview({
     return null
   })
-  /** Stack for nested workflow navigation. Empty means we're at the root level. */
   const [workflowStack, setWorkflowStack] = useState<WorkflowStackEntry[]>([])
-  /** Block executions for the root level */
   const rootBlockExecutions = useMemo(() => {
     if (providedBlockExecutions) return providedBlockExecutions
     if (!rootTraceSpans || !Array.isArray(rootTraceSpans)) return {}
     return buildBlockExecutions(rootTraceSpans)
   }, [providedBlockExecutions, rootTraceSpans])
-  /** Current block executions - either from stack or root */
   const blockExecutions = useMemo(() => {
     if (workflowStack.length > 0) {
       return workflowStack[workflowStack.length - 1].blockExecutions
@@ -171,7 +175,6 @@ export function Preview({
     return rootBlockExecutions
   }, [workflowStack, rootBlockExecutions])
-  /** Current workflow state - either from stack or root */
   const workflowState = useMemo(() => {
     if (workflowStack.length > 0) {
       return workflowStack[workflowStack.length - 1].workflowState
@@ -179,41 +182,39 @@ export function Preview({
     return rootWorkflowState
   }, [workflowStack, rootWorkflowState])
-  /** Whether we're in execution mode (have trace spans/block executions) */
   const isExecutionMode = useMemo(() => {
     return Object.keys(blockExecutions).length > 0
   }, [blockExecutions])
-  /** Handler to drill down into a nested workflow block */
   const handleDrillDown = useCallback(
     (blockId: string, childWorkflowState: WorkflowState) => {
       const blockExecution = blockExecutions[blockId]
       const childTraceSpans = extractChildTraceSpans(blockExecution)
       const childBlockExecutions = buildBlockExecutions(childTraceSpans)
+      const workflowName = childWorkflowState.metadata?.name || 'Nested Workflow'
       setWorkflowStack((prev) => [
         ...prev,
         {
           workflowState: childWorkflowState,
           traceSpans: childTraceSpans,
           blockExecutions: childBlockExecutions,
+          workflowName,
         },
       ])
-      /** Set pinned block synchronously to avoid double fitView from sidebar resize */
       const leftmostId = getLeftmostBlockId(childWorkflowState)
       setPinnedBlockId(leftmostId)
     },
     [blockExecutions]
   )
-  /** Handler to go back up the stack */
   const handleGoBack = useCallback(() => {
setWorkflowStack((prev) => prev.slice(0, -1)) setWorkflowStack((prev) => prev.slice(0, -1))
setPinnedBlockId(null) setPinnedBlockId(null)
}, []) }, [])
/** Handlers for node interactions - memoized to prevent unnecessary re-renders */
const handleNodeClick = useCallback((blockId: string) => { const handleNodeClick = useCallback((blockId: string) => {
setPinnedBlockId(blockId) setPinnedBlockId(blockId)
}, []) }, [])
@@ -232,6 +233,8 @@ export function Preview({
const isNested = workflowStack.length > 0
const currentWorkflowName = isNested ? workflowStack[workflowStack.length - 1].workflowName : null
return (
<div
style={{ height, width }}
@@ -242,20 +245,27 @@ export function Preview({
)}
>
{isNested && (
-<div className='absolute top-[12px] left-[12px] z-20'>
+<div className='absolute top-[12px] left-[12px] z-20 flex items-center gap-[6px]'>
<Tooltip.Root>
<Tooltip.Trigger asChild>
<Button
variant='ghost'
onClick={handleGoBack}
-className='flex h-[30px] items-center gap-[5px] border border-[var(--border)] bg-[var(--surface-2)] px-[10px] hover:bg-[var(--surface-4)]'
+className='flex h-[28px] items-center gap-[5px] rounded-[6px] border border-[var(--border)] bg-[var(--surface-2)] px-[10px] text-[var(--text-secondary)] shadow-sm hover:bg-[var(--surface-4)] hover:text-[var(--text-primary)]'
>
-<ArrowLeft className='h-[13px] w-[13px]' />
+<ArrowLeft className='h-[12px] w-[12px]' />
-<span className='font-medium text-[13px]'>Back</span>
+<span className='font-medium text-[12px]'>Back</span>
</Button>
</Tooltip.Trigger>
<Tooltip.Content side='bottom'>Go back to parent workflow</Tooltip.Content>
</Tooltip.Root>
{currentWorkflowName && (
<div className='flex h-[28px] max-w-[200px] items-center rounded-[6px] border border-[var(--border)] bg-[var(--surface-2)] px-[10px] shadow-sm'>
<span className='truncate font-medium text-[12px] text-[var(--text-secondary)]'>
{currentWorkflowName}
</span>
</div>
)}
</div>
)}
@@ -284,6 +294,7 @@ export function Preview({
loops={workflowState.loops}
parallels={workflowState.parallels}
isExecutionMode={isExecutionMode}
childWorkflowSnapshots={childWorkflowSnapshots}
onClose={handleEditorClose}
onDrillDown={handleDrillDown}
/>
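The drill-down/back handlers in this diff are built on a plain stack of workflow snapshots: drilling down pushes an entry, going back pops, and the top of the stack (or the root when empty) is the current level. A framework-free sketch of the same pattern (the `Entry` shape is illustrative, not the actual `WorkflowStackEntry` type):

```typescript
// Minimal sketch of stack-based drill-down navigation. Only the field
// needed to show the pattern is modeled.
interface Entry {
  workflowName: string
}

// Pushing returns a new array (immutable update, as with setWorkflowStack).
function drillDown(stack: Entry[], child: Entry): Entry[] {
  return [...stack, child]
}

// Popping the top entry returns to the parent level.
function goBack(stack: Entry[]): Entry[] {
  return stack.slice(0, -1)
}

// The current level is the top of the stack, or the root when empty.
function currentName(stack: Entry[], rootName: string): string {
  return stack.length > 0 ? stack[stack.length - 1].workflowName : rootName
}

const root: Entry[] = []
const nested = drillDown(root, { workflowName: 'Sub Workflow' })
console.log(currentName(nested, 'Root')) // Sub Workflow
console.log(currentName(goBack(nested), 'Root')) // Root
```

Because each operation returns a new array, the pattern composes cleanly with React state setters such as `setWorkflowStack((prev) => ...)`.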

View File

@@ -25,9 +25,11 @@ import {
import { Input as BaseInput, Skeleton } from '@/components/ui'
import { useSession } from '@/lib/auth/auth-client'
import { getSubscriptionStatus } from '@/lib/billing/client'
import type { PermissionGroupConfig } from '@/lib/permission-groups/types'
import { getUserColor } from '@/lib/workspaces/colors'
import { getUserRole } from '@/lib/workspaces/organization'
import { getAllBlocks } from '@/blocks'
import { useOrganization, useOrganizations } from '@/hooks/queries/organization'
import {
type PermissionGroup,
useBulkAddPermissionGroupMembers,
@@ -37,9 +39,7 @@ import {
usePermissionGroups,
useRemovePermissionGroupMember,
useUpdatePermissionGroup,
-} from '@/ee/access-control/hooks/permission-groups'
+} from '@/hooks/queries/permission-groups'
-import type { PermissionGroupConfig } from '@/ee/access-control/lib/types'
-import { useOrganization, useOrganizations } from '@/hooks/queries/organization'
import { useSubscriptionData } from '@/hooks/queries/subscription'
import { PROVIDER_DEFINITIONS } from '@/providers/models'
import { getAllProviderIds } from '@/providers/utils'

View File

@@ -1,3 +1,4 @@
export { AccessControl } from './access-control/access-control'
export { ApiKeys } from './api-keys/api-keys'
export { BYOK } from './byok/byok'
export { Copilot } from './copilot/copilot'
@@ -9,6 +10,7 @@ export { Files as FileUploads } from './files/files'
export { General } from './general/general'
export { Integrations } from './integrations/integrations'
export { MCP } from './mcp/mcp'
export { SSO } from './sso/sso'
export { Subscription } from './subscription/subscription'
export { TeamManagement } from './team-management/team-management'
export { WorkflowMcpServers } from './workflow-mcp-servers/workflow-mcp-servers'

View File

@@ -1,6 +1,6 @@
'use client'
-import { useEffect, useRef, useState } from 'react'
+import { createElement, useEffect, useRef, useState } from 'react'
import { createLogger } from '@sim/logger'
import { Check, ChevronDown, ExternalLink, Search } from 'lucide-react'
import { useRouter, useSearchParams } from 'next/navigation'
@@ -339,9 +339,7 @@ export function Integrations({ onOpenChange, registerCloseHandler }: Integration
>
<div className='flex items-center gap-[12px]'>
<div className='flex h-9 w-9 flex-shrink-0 items-center justify-center overflow-hidden rounded-[6px] bg-[var(--surface-5)]'>
-{typeof service.icon === 'function'
-? service.icon({ className: 'h-4 w-4' })
-: service.icon}
+{createElement(service.icon, { className: 'h-4 w-4' })}
</div>
<div className='flex flex-col justify-center gap-[1px]'>
<span className='font-medium text-[14px]'>{service.name}</span>

View File

@@ -11,13 +11,55 @@ import { isBillingEnabled } from '@/lib/core/config/feature-flags'
import { cn } from '@/lib/core/utils/cn'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { getUserRole } from '@/lib/workspaces/organization/utils'
-import { useConfigureSSO, useSSOProviders } from '@/ee/sso/hooks/sso'
-import { SSO_TRUSTED_PROVIDERS } from '@/ee/sso/lib/constants'
import { useOrganizations } from '@/hooks/queries/organization'
+import { useConfigureSSO, useSSOProviders } from '@/hooks/queries/sso'
import { useSubscriptionData } from '@/hooks/queries/subscription'
const logger = createLogger('SSO')
const TRUSTED_SSO_PROVIDERS = [
'okta',
'okta-saml',
'okta-prod',
'okta-dev',
'okta-staging',
'okta-test',
'azure-ad',
'azure-active-directory',
'azure-corp',
'azure-enterprise',
'adfs',
'adfs-company',
'adfs-corp',
'adfs-enterprise',
'auth0',
'auth0-prod',
'auth0-dev',
'auth0-staging',
'onelogin',
'onelogin-prod',
'onelogin-corp',
'jumpcloud',
'jumpcloud-prod',
'jumpcloud-corp',
'ping-identity',
'ping-federate',
'pingone',
'shibboleth',
'shibboleth-idp',
'google-workspace',
'google-sso',
'saml',
'saml2',
'saml-sso',
'oidc',
'oidc-sso',
'openid-connect',
'custom-sso',
'enterprise-sso',
'company-sso',
]
interface SSOProvider {
id: string
providerId: string
@@ -523,7 +565,7 @@ export function SSO() {
<Combobox
value={formData.providerId}
onChange={(value: string) => handleInputChange('providerId', value)}
-options={SSO_TRUSTED_PROVIDERS.map((id) => ({
+options={TRUSTED_SSO_PROVIDERS.map((id) => ({
label: id,
value: id,
}))}

View File

@@ -41,6 +41,7 @@ import { getEnv, isTruthy } from '@/lib/core/config/env'
import { isHosted } from '@/lib/core/config/feature-flags'
import { getUserRole } from '@/lib/workspaces/organization'
import {
AccessControl,
ApiKeys,
BYOK,
Copilot,
@@ -52,15 +53,15 @@ import {
General,
Integrations,
MCP,
SSO,
Subscription,
TeamManagement,
WorkflowMcpServers,
} from '@/app/workspace/[workspaceId]/w/components/sidebar/components/settings-modal/components'
import { TemplateProfile } from '@/app/workspace/[workspaceId]/w/components/sidebar/components/settings-modal/components/template-profile/template-profile'
-import { AccessControl } from '@/ee/access-control'
-import { SSO, ssoKeys, useSSOProviders } from '@/ee/sso'
import { generalSettingsKeys, useGeneralSettings } from '@/hooks/queries/general-settings'
import { organizationKeys, useOrganizations } from '@/hooks/queries/organization'
+import { ssoKeys, useSSOProviders } from '@/hooks/queries/sso'
import { subscriptionKeys, useSubscriptionData } from '@/hooks/queries/subscription'
import { usePermissionConfig } from '@/hooks/use-permission-config'
import { useSettingsModalStore } from '@/stores/modals/settings/store'

View File

@@ -66,7 +66,10 @@ function generateSignature(secret: string, timestamp: number, body: string): str
async function buildPayload(
log: WorkflowExecutionLog,
subscription: typeof workspaceNotificationSubscription.$inferSelect
-): Promise<NotificationPayload> {
+): Promise<NotificationPayload | null> {
// Skip notifications for deleted workflows
if (!log.workflowId) return null
const workflowData = await db
.select({ name: workflowTable.name, userId: workflowTable.userId })
.from(workflowTable)
@@ -526,6 +529,13 @@ export async function executeNotificationDelivery(params: NotificationDeliveryPa
const attempts = claimed[0].attempts
const payload = await buildPayload(log, subscription)
// Skip delivery for deleted workflows
if (!payload) {
await updateDeliveryStatus(deliveryId, 'failed', 'Workflow was deleted')
logger.info(`Skipping delivery ${deliveryId} - workflow was deleted`)
return
}
let result: { success: boolean; status?: number; error?: string }
switch (notificationType) {
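The hunk header above references a `generateSignature(secret, timestamp, body)` helper whose body is not shown in this diff. Timestamped HMAC signing of webhook payloads commonly looks like the sketch below; the `${timestamp}.${body}` layout and the function name are assumptions, not this repo's actual implementation:

```typescript
import { createHmac } from 'node:crypto'

// Hypothetical sketch of timestamped HMAC webhook signing. Binding the
// timestamp into the signed string lets receivers reject replayed payloads.
function signPayload(secret: string, timestamp: number, body: string): string {
  const hmac = createHmac('sha256', secret)
  hmac.update(`${timestamp}.${body}`)
  return hmac.digest('hex')
}

const sig = signPayload('webhook-secret', 1706000000, '{"event":"workflow.completed"}')
console.log(sig.length) // 64 hex characters for SHA-256
```

The receiver recomputes the same HMAC over the received timestamp and body and compares digests, typically also enforcing a maximum timestamp age.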

View File

@@ -85,7 +85,9 @@ export const AgentBlock: BlockConfig<AgentResponse> = {
id: 'messages',
title: 'Messages',
type: 'messages-input',
canonicalParamId: 'messages',
placeholder: 'Enter messages...',
mode: 'basic',
wandConfig: {
enabled: true,
maintainHistory: true,
@@ -93,10 +95,12 @@ export const AgentBlock: BlockConfig<AgentResponse> = {
Current messages: {context}
{sources}
RULES:
1. Generate ONLY a valid JSON array - no markdown, no explanations
-2. Each message object must have "role" (system/user/assistant) and "content" (string)
+2. Each message object must have "role" and "content" properties
-3. You can generate any number of messages as needed
+3. Valid roles are: "system", "user", "assistant", "attachment"
4. Content can be as long as necessary - don't truncate
5. If editing existing messages, preserve structure unless asked to change it
6. For new agents, create DETAILED, PROFESSIONAL system prompts that include:
@@ -106,6 +110,16 @@ RULES:
- Critical thinking or quality guidelines
- How to handle edge cases and uncertainty
ATTACHMENTS:
- Use role "attachment" to include images, audio, video, or documents in a multimodal conversation
- IMPORTANT: If an attachment message in the current context has an "attachment" object with file data, ALWAYS preserve that entire "attachment" object exactly as-is
- When creating NEW attachment messages, you can either:
1. Just set role to "attachment" with descriptive content - user will upload the file manually
2. Select a file from the available workspace files by including "fileId" in the attachment object (optional)
- You do NOT have to select a file - it's completely optional
- Example without file: {"role": "attachment", "content": "Analyze this image for text and objects"}
- Example with file selection: {"role": "attachment", "content": "Analyze this image", "attachment": {"fileId": "abc123"}}
EXAMPLES:
Research agent:
@@ -114,14 +128,23 @@ Research agent:
Code reviewer:
[{"role": "system", "content": "You are a Senior Code Reviewer with expertise in software architecture, security, and best practices. Your role is to provide thorough, constructive code reviews that improve code quality and help developers grow.\\n\\n## Review Methodology\\n\\n1. **Security First**: Check for vulnerabilities including injection attacks, authentication flaws, data exposure, and insecure dependencies.\\n\\n2. **Code Quality**: Evaluate readability, maintainability, adherence to DRY/SOLID principles, and appropriate abstraction levels.\\n\\n3. **Performance**: Identify potential bottlenecks, unnecessary computations, memory leaks, and optimization opportunities.\\n\\n4. **Testing**: Assess test coverage, edge case handling, and testability of the code structure.\\n\\n## Output Format\\n\\n### Summary\\nBrief overview of the code's purpose and overall assessment.\\n\\n### Critical Issues\\nSecurity vulnerabilities or bugs that must be fixed before merging.\\n\\n### Improvements\\nSuggested enhancements with clear explanations of why and how.\\n\\n### Positive Aspects\\nHighlight well-written code to reinforce good practices.\\n\\nBe specific with line references. Provide code examples for suggested changes. Balance critique with encouragement."}, {"role": "user", "content": "<start.input>"}]
-Writing assistant:
+Image analysis agent:
-[{"role": "system", "content": "You are a skilled Writing Editor and Coach. Your role is to help users improve their writing through constructive feedback, editing suggestions, and guidance on style, clarity, and structure.\\n\\n## Editing Approach\\n\\n1. **Clarity**: Ensure ideas are expressed clearly and concisely. Eliminate jargon unless appropriate for the audience.\\n\\n2. **Structure**: Evaluate logical flow, paragraph organization, and transitions between ideas.\\n\\n3. **Voice & Tone**: Maintain consistency and appropriateness for the intended audience and purpose.\\n\\n4. **Grammar & Style**: Correct errors while respecting the author's voice.\\n\\n## Output Format\\n\\n### Overall Impression\\nBrief assessment of the piece's strengths and areas for improvement.\\n\\n### Structural Feedback\\nComments on organization, flow, and logical progression.\\n\\n### Line-Level Edits\\nSpecific suggestions with explanations, not just corrections.\\n\\n### Revised Version\\nWhen appropriate, provide an edited version demonstrating improvements.\\n\\nBe encouraging while honest. Explain the reasoning behind suggestions to help the writer improve."}, {"role": "user", "content": "<start.input>"}]
+[{"role": "system", "content": "You are an expert image analyst. Describe images in detail, identify objects, text, and patterns. Provide structured analysis."}, {"role": "attachment", "content": "Analyze this image"}]
Return ONLY the JSON array.`,
placeholder: 'Describe what you want to create or change...',
generationType: 'json-object',
},
},
{
id: 'messagesRaw',
title: 'Messages',
type: 'code',
canonicalParamId: 'messages',
placeholder: '[{"role": "system", "content": "..."}, {"role": "user", "content": "..."}]',
language: 'json',
mode: 'advanced',
},
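The `messagesRaw` sub-block above accepts raw JSON in advanced mode, keyed to the same canonical `messages` param as the basic editor. A hedged sketch of the kind of validation such raw input needs, mirroring the role rules in the prompt (the function name and shape are illustrative, not code from this repo):

```typescript
// Illustrative validator for raw JSON messages entered in advanced mode.
// Role names mirror the prompt rules: system, user, assistant, attachment.
type Role = 'system' | 'user' | 'assistant' | 'attachment'

interface Message {
  role: Role
  content: string
}

const VALID_ROLES: ReadonlySet<string> = new Set(['system', 'user', 'assistant', 'attachment'])

function parseMessages(raw: string): Message[] {
  const parsed = JSON.parse(raw)
  if (!Array.isArray(parsed)) throw new Error('Messages must be a JSON array')
  for (const m of parsed) {
    if (typeof m !== 'object' || m === null) throw new Error('Each message must be an object')
    if (!VALID_ROLES.has(m.role)) throw new Error(`Invalid role: ${m.role}`)
    if (typeof m.content !== 'string') throw new Error('content must be a string')
  }
  return parsed as Message[]
}

const ok = parseMessages('[{"role": "system", "content": "You are helpful."}]')
console.log(ok.length) // 1
```

Validating at the boundary keeps a malformed advanced-mode string from reaching the provider call with a confusing downstream error.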
{
id: 'model',
title: 'Model',

View File

@@ -9,7 +9,7 @@ export const YouTubeBlock: BlockConfig<YouTubeResponse> = {
description: 'Interact with YouTube videos, channels, and playlists',
authMode: AuthMode.ApiKey,
longDescription:
-'Integrate YouTube into the workflow. Can search for videos, get video details, get channel information, get all videos from a channel, get channel playlists, get playlist items, find related videos, and get video comments.',
+'Integrate YouTube into the workflow. Can search for videos, get trending videos, get video details, get video categories, get channel information, get all videos from a channel, get channel playlists, get playlist items, and get video comments.',
docsLink: 'https://docs.sim.ai/tools/youtube',
category: 'tools',
bgColor: '#FF0000',
@@ -21,7 +21,9 @@ export const YouTubeBlock: BlockConfig<YouTubeResponse> = {
type: 'dropdown',
options: [
{ label: 'Search Videos', id: 'youtube_search' },
{ label: 'Get Trending Videos', id: 'youtube_trending' },
{ label: 'Get Video Details', id: 'youtube_video_details' },
{ label: 'Get Video Categories', id: 'youtube_video_categories' },
{ label: 'Get Channel Info', id: 'youtube_channel_info' },
{ label: 'Get Channel Videos', id: 'youtube_channel_videos' },
{ label: 'Get Channel Playlists', id: 'youtube_channel_playlists' },
@@ -49,6 +51,13 @@ export const YouTubeBlock: BlockConfig<YouTubeResponse> = {
integer: true,
condition: { field: 'operation', value: 'youtube_search' },
},
{
id: 'pageToken',
title: 'Page Token',
type: 'short-input',
placeholder: 'Token for pagination (from nextPageToken)',
condition: { field: 'operation', value: 'youtube_search' },
},
{
id: 'channelId',
title: 'Filter by Channel ID',
@@ -56,6 +65,19 @@ export const YouTubeBlock: BlockConfig<YouTubeResponse> = {
placeholder: 'Filter results to a specific channel',
condition: { field: 'operation', value: 'youtube_search' },
},
{
id: 'eventType',
title: 'Live Stream Filter',
type: 'dropdown',
options: [
{ label: 'All Videos', id: '' },
{ label: 'Currently Live', id: 'live' },
{ label: 'Upcoming Streams', id: 'upcoming' },
{ label: 'Past Streams', id: 'completed' },
],
value: () => '',
condition: { field: 'operation', value: 'youtube_search' },
},
{
id: 'publishedAfter',
title: 'Published After',
@@ -131,7 +153,7 @@ Return ONLY the timestamp string - no explanations, no quotes, no extra text.`,
id: 'videoCategoryId',
title: 'Category ID',
type: 'short-input',
-placeholder: '10 for Music, 20 for Gaming',
+placeholder: 'Use Get Video Categories to find IDs',
condition: { field: 'operation', value: 'youtube_search' },
},
{
@@ -163,7 +185,10 @@ Return ONLY the timestamp string - no explanations, no quotes, no extra text.`,
title: 'Region Code',
type: 'short-input',
placeholder: 'US, GB, JP',
-condition: { field: 'operation', value: 'youtube_search' },
+condition: {
+field: 'operation',
+value: ['youtube_search', 'youtube_trending', 'youtube_video_categories'],
+},
},
{
id: 'relevanceLanguage',
@@ -184,6 +209,31 @@ Return ONLY the timestamp string - no explanations, no quotes, no extra text.`,
value: () => 'moderate',
condition: { field: 'operation', value: 'youtube_search' },
},
// Get Trending Videos operation inputs
{
id: 'maxResults',
title: 'Max Results',
type: 'slider',
min: 1,
max: 50,
step: 1,
integer: true,
condition: { field: 'operation', value: 'youtube_trending' },
},
{
id: 'videoCategoryId',
title: 'Category ID',
type: 'short-input',
placeholder: 'Use Get Video Categories to find IDs',
condition: { field: 'operation', value: 'youtube_trending' },
},
{
id: 'pageToken',
title: 'Page Token',
type: 'short-input',
placeholder: 'Token for pagination (from nextPageToken)',
condition: { field: 'operation', value: 'youtube_trending' },
},
// Get Video Details operation inputs
{
id: 'videoId',
@@ -193,6 +243,14 @@ Return ONLY the timestamp string - no explanations, no quotes, no extra text.`,
required: true,
condition: { field: 'operation', value: 'youtube_video_details' },
},
// Get Video Categories operation inputs
{
id: 'hl',
title: 'Language',
type: 'short-input',
placeholder: 'en, es, fr (for category names)',
condition: { field: 'operation', value: 'youtube_video_categories' },
},
// Get Channel Info operation inputs
{
id: 'channelId',
@@ -241,6 +299,13 @@ Return ONLY the timestamp string - no explanations, no quotes, no extra text.`,
value: () => 'date',
condition: { field: 'operation', value: 'youtube_channel_videos' },
},
{
id: 'pageToken',
title: 'Page Token',
type: 'short-input',
placeholder: 'Token for pagination (from nextPageToken)',
condition: { field: 'operation', value: 'youtube_channel_videos' },
},
// Get Channel Playlists operation inputs
{
id: 'channelId',
@@ -260,6 +325,13 @@ Return ONLY the timestamp string - no explanations, no quotes, no extra text.`,
integer: true,
condition: { field: 'operation', value: 'youtube_channel_playlists' },
},
{
id: 'pageToken',
title: 'Page Token',
type: 'short-input',
placeholder: 'Token for pagination (from nextPageToken)',
condition: { field: 'operation', value: 'youtube_channel_playlists' },
},
// Get Playlist Items operation inputs
{
id: 'playlistId',
@@ -279,6 +351,13 @@ Return ONLY the timestamp string - no explanations, no quotes, no extra text.`,
integer: true,
condition: { field: 'operation', value: 'youtube_playlist_items' },
},
{
id: 'pageToken',
title: 'Page Token',
type: 'short-input',
placeholder: 'Token for pagination (from nextPageToken)',
condition: { field: 'operation', value: 'youtube_playlist_items' },
},
// Get Video Comments operation inputs
{
id: 'videoId',
@@ -309,6 +388,13 @@ Return ONLY the timestamp string - no explanations, no quotes, no extra text.`,
value: () => 'relevance',
condition: { field: 'operation', value: 'youtube_comments' },
},
{
id: 'pageToken',
title: 'Page Token',
type: 'short-input',
placeholder: 'Token for pagination (from nextPageToken)',
condition: { field: 'operation', value: 'youtube_comments' },
},
// API Key (common to all operations)
{
id: 'apiKey',
@@ -321,13 +407,15 @@ Return ONLY the timestamp string - no explanations, no quotes, no extra text.`,
],
tools: {
access: [
-'youtube_search',
-'youtube_video_details',
'youtube_channel_info',
-'youtube_channel_videos',
'youtube_channel_playlists',
-'youtube_playlist_items',
+'youtube_channel_videos',
'youtube_comments',
+'youtube_playlist_items',
+'youtube_search',
+'youtube_trending',
+'youtube_video_categories',
+'youtube_video_details',
],
config: {
tool: (params) => {
@@ -339,8 +427,12 @@ Return ONLY the timestamp string - no explanations, no quotes, no extra text.`,
switch (params.operation) {
case 'youtube_search':
return 'youtube_search'
case 'youtube_trending':
return 'youtube_trending'
case 'youtube_video_details':
return 'youtube_video_details'
case 'youtube_video_categories':
return 'youtube_video_categories'
case 'youtube_channel_info':
return 'youtube_channel_info'
case 'youtube_channel_videos':
@@ -363,6 +455,7 @@ Return ONLY the timestamp string - no explanations, no quotes, no extra text.`,
// Search Videos
query: { type: 'string', description: 'Search query' },
maxResults: { type: 'number', description: 'Maximum number of results' },
pageToken: { type: 'string', description: 'Page token for pagination' },
// Search Filters
publishedAfter: { type: 'string', description: 'Published after date (RFC 3339)' },
publishedBefore: { type: 'string', description: 'Published before date (RFC 3339)' },
@@ -370,9 +463,11 @@ Return ONLY the timestamp string - no explanations, no quotes, no extra text.`,
videoCategoryId: { type: 'string', description: 'YouTube category ID' },
videoDefinition: { type: 'string', description: 'Video quality filter' },
videoCaption: { type: 'string', description: 'Caption availability filter' },
eventType: { type: 'string', description: 'Live stream filter (live/upcoming/completed)' },
regionCode: { type: 'string', description: 'Region code (ISO 3166-1)' }, regionCode: { type: 'string', description: 'Region code (ISO 3166-1)' },
relevanceLanguage: { type: 'string', description: 'Language code (ISO 639-1)' }, relevanceLanguage: { type: 'string', description: 'Language code (ISO 639-1)' },
safeSearch: { type: 'string', description: 'Safe search level' }, safeSearch: { type: 'string', description: 'Safe search level' },
hl: { type: 'string', description: 'Language for category names' },
// Video Details & Comments // Video Details & Comments
videoId: { type: 'string', description: 'YouTube video ID' }, videoId: { type: 'string', description: 'YouTube video ID' },
// Channel Info // Channel Info
@@ -384,7 +479,7 @@ Return ONLY the timestamp string - no explanations, no quotes, no extra text.`,
order: { type: 'string', description: 'Sort order' },
},
outputs: {
// Search Videos, Trending, Playlist Items, Captions, Categories
items: { type: 'json', description: 'List of items returned' },
totalResults: { type: 'number', description: 'Total number of results' },
nextPageToken: { type: 'string', description: 'Token for next page' },
@@ -399,11 +494,33 @@ Return ONLY the timestamp string - no explanations, no quotes, no extra text.`,
viewCount: { type: 'number', description: 'View count' },
likeCount: { type: 'number', description: 'Like count' },
commentCount: { type: 'number', description: 'Comment count' },
favoriteCount: { type: 'number', description: 'Favorite count' },
thumbnail: { type: 'string', description: 'Thumbnail URL' },
tags: { type: 'json', description: 'Video tags' },
categoryId: { type: 'string', description: 'Video category ID' },
definition: { type: 'string', description: 'Video definition (hd/sd)' },
caption: { type: 'string', description: 'Has captions (true/false)' },
licensedContent: { type: 'boolean', description: 'Is licensed content' },
privacyStatus: { type: 'string', description: 'Privacy status' },
liveBroadcastContent: { type: 'string', description: 'Live broadcast status' },
defaultLanguage: { type: 'string', description: 'Default language' },
defaultAudioLanguage: { type: 'string', description: 'Default audio language' },
// Live Streaming Details
isLiveContent: { type: 'boolean', description: 'Whether video is/was a live stream' },
scheduledStartTime: { type: 'string', description: 'Scheduled start time for live streams' },
actualStartTime: { type: 'string', description: 'Actual start time of live stream' },
actualEndTime: { type: 'string', description: 'End time of live stream' },
concurrentViewers: { type: 'number', description: 'Current viewers (live only)' },
activeLiveChatId: { type: 'string', description: 'Live chat ID' },
// Channel Info
subscriberCount: { type: 'number', description: 'Subscriber count' },
videoCount: { type: 'number', description: 'Total video count' },
customUrl: { type: 'string', description: 'Channel custom URL' },
country: { type: 'string', description: 'Channel country' },
uploadsPlaylistId: { type: 'string', description: 'Uploads playlist ID' },
bannerImageUrl: { type: 'string', description: 'Channel banner URL' },
hiddenSubscriberCount: { type: 'boolean', description: 'Is subscriber count hidden' },
// Video Categories
assignable: { type: 'boolean', description: 'Whether category can be assigned' },
},
}


@@ -1,46 +0,0 @@
The Sim Enterprise License (the "Enterprise License")
Copyright (c) 2026-present Sim Studio, Inc.
With regard to the Sim Software:
This software and associated documentation files (the "Software") may only be
used in production, if you (and any entity that you represent) have agreed to,
and are in compliance with, the Sim Terms of Service available at
https://sim.ai/terms (or other agreement governing the use of the Software,
as mutually agreed by you and Sim Studio, Inc. ("Sim")), and otherwise
have a valid Sim Enterprise subscription ("Enterprise Subscription")
for the correct number of seats as defined in your agreement.
Subject to the foregoing sentence, you are free to modify this Software and
publish patches to the Software. You agree that Sim and/or its licensors
(as applicable) retain all right, title and interest in and to all such
modifications and/or patches, and all such modifications and/or patches may
only be used, copied, modified, displayed, distributed, or otherwise exploited
with a valid Enterprise Subscription.
Notwithstanding the foregoing, you may copy and modify the Software for
development and testing purposes, without requiring a subscription. You agree
that Sim and/or its licensors (as applicable) retain all right, title
and interest in and to all such modifications.
You are not granted any other rights beyond what is expressly stated herein.
Subject to the foregoing, it is forbidden to copy, merge, publish, distribute,
sublicense, and/or sell the Software.
This Enterprise License applies only to the part of this Software that is not
distributed under the Apache License 2.0. Any part of this Software distributed
under the Apache License 2.0 is copyrighted under that license. The full text
of this Enterprise License shall be included in all copies or substantial
portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
For all third party components incorporated into the Sim Software, those
components are licensed under the original license provided by the owner of the
applicable component.


@@ -1,69 +0,0 @@
# Sim Enterprise Edition
This directory contains enterprise features that require a valid Sim Enterprise license for production use.
## Features
- **SSO**: SAML and OIDC single sign-on authentication
- **Access Control**: Permission groups and role-based access control
## Structure
```
ee/
├── LICENSE
├── README.md
├── index.ts # Main barrel export
├── sso/
│ ├── index.ts
│ ├── components/ # SSO settings UI
│ ├── hooks/ # React Query hooks
│ └── lib/ # Utilities and constants
└── access-control/
├── index.ts
├── components/ # Access control settings UI
├── hooks/ # React Query hooks
└── lib/ # Types and utilities
```
**Note:** API routes remain in `app/api/` as required by Next.js routing conventions:
- SSO API: `app/api/auth/sso/`
- Permission Groups API: `app/api/permission-groups/`
## Licensing
Code in this directory is **NOT** covered by the Apache 2.0 license. See [LICENSE](./LICENSE) for the Sim Enterprise License terms.
The rest of the Sim codebase outside this directory is licensed under Apache 2.0.
## For Open Source Users
You may delete this directory to use Sim under the Apache 2.0 license only. The application will continue to function without enterprise features.
## Development & Testing
You may copy and modify this software for development and testing purposes without requiring an Enterprise subscription. Production use requires a valid license.
## Enabling Enterprise Features
Enterprise features are controlled by environment variables and subscription status:
- `NEXT_PUBLIC_SSO_ENABLED` - Enable SSO for self-hosted instances
- `NEXT_PUBLIC_ACCESS_CONTROL_ENABLED` - Enable access control for self-hosted instances
On the hosted platform (sim.ai), these features are automatically available with an Enterprise subscription.
## Usage
```typescript
// Import enterprise components
import { SSO, AccessControl } from '@/ee'
// Or import specific features
import { SSO, useSSOProviders } from '@/ee/sso'
import { AccessControl, usePermissionGroups } from '@/ee/access-control'
```
## Contact
For Enterprise licensing inquiries, contact [sales@sim.ai](mailto:sales@sim.ai).


@@ -1,26 +0,0 @@
export { AccessControl } from './components/access-control'
export {
type BulkAddMembersData,
type CreatePermissionGroupData,
type DeletePermissionGroupParams,
type PermissionGroup,
type PermissionGroupMember,
permissionGroupKeys,
type UpdatePermissionGroupData,
type UserPermissionConfig,
useAddPermissionGroupMember,
useBulkAddPermissionGroupMembers,
useCreatePermissionGroup,
useDeletePermissionGroup,
usePermissionGroup,
usePermissionGroupMembers,
usePermissionGroups,
useRemovePermissionGroupMember,
useUpdatePermissionGroup,
useUserPermissionConfig,
} from './hooks/permission-groups'
export type { PermissionGroupConfig } from './lib/types'
export {
DEFAULT_PERMISSION_GROUP_CONFIG,
parsePermissionGroupConfig,
} from './lib/types'


@@ -1,34 +0,0 @@
/**
* Sim Enterprise Edition
*
* This module contains enterprise features that require a valid
* Sim Enterprise license for production use.
*
* See LICENSE in this directory for terms.
*/
export type { PermissionGroupConfig } from './access-control'
// Access Control (Permission Groups)
export {
AccessControl,
type BulkAddMembersData,
type CreatePermissionGroupData,
type DeletePermissionGroupParams,
type PermissionGroup,
type PermissionGroupMember,
permissionGroupKeys,
type UpdatePermissionGroupData,
type UserPermissionConfig,
useAddPermissionGroupMember,
useBulkAddPermissionGroupMembers,
useCreatePermissionGroup,
useDeletePermissionGroup,
usePermissionGroup,
usePermissionGroupMembers,
usePermissionGroups,
useRemovePermissionGroupMember,
useUpdatePermissionGroup,
useUserPermissionConfig,
} from './access-control'
// SSO (Single Sign-On)
export { SSO, ssoKeys, useConfigureSSO, useDeleteSSO, useSSOProviders } from './sso'


@@ -1,8 +0,0 @@
export { SSO } from './components/sso'
export {
ssoKeys,
useConfigureSSO,
useDeleteSSO,
useSSOProviders,
} from './hooks/sso'
export * from './lib/constants'


@@ -6,6 +6,7 @@ interface ChildWorkflowErrorOptions {
childWorkflowName: string
childTraceSpans?: TraceSpan[]
executionResult?: ExecutionResult
childWorkflowSnapshotId?: string
cause?: Error
}
@@ -16,6 +17,7 @@ export class ChildWorkflowError extends Error {
readonly childTraceSpans: TraceSpan[]
readonly childWorkflowName: string
readonly executionResult?: ExecutionResult
readonly childWorkflowSnapshotId?: string
constructor(options: ChildWorkflowErrorOptions) {
super(options.message, { cause: options.cause })
@@ -23,6 +25,7 @@ export class ChildWorkflowError extends Error {
this.childWorkflowName = options.childWorkflowName
this.childTraceSpans = options.childTraceSpans ?? []
this.executionResult = options.executionResult
this.childWorkflowSnapshotId = options.childWorkflowSnapshotId
}
static isChildWorkflowError(error: unknown): error is ChildWorkflowError {
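The pattern in this diff can be summarized as a minimal sketch (simplified shapes, assumed field names, and the `cause` option omitted for brevity): a typed `Error` subclass carries child-workflow metadata, and a static type guard lets callers narrow an `unknown` error before reading that metadata.

```typescript
// Simplified sketch of the ChildWorkflowError pattern: an Error subclass
// carrying child-workflow metadata plus a static type guard for narrowing.
interface ChildWorkflowErrorOptions {
  message: string
  childWorkflowName: string
  childWorkflowSnapshotId?: string // hypothetical snapshot id, as in the diff
}

class ChildWorkflowError extends Error {
  readonly childWorkflowName: string
  readonly childWorkflowSnapshotId?: string

  constructor(options: ChildWorkflowErrorOptions) {
    super(options.message)
    this.name = 'ChildWorkflowError'
    this.childWorkflowName = options.childWorkflowName
    this.childWorkflowSnapshotId = options.childWorkflowSnapshotId
  }

  // Type guard: lets catch blocks narrow `unknown` to ChildWorkflowError.
  static isChildWorkflowError(error: unknown): error is ChildWorkflowError {
    return error instanceof ChildWorkflowError
  }
}

// Usage: after the guard, metadata fields are accessible with full typing.
const err: unknown = new ChildWorkflowError({
  message: 'child failed',
  childWorkflowName: 'Child Workflow',
  childWorkflowSnapshotId: 'snap-123',
})
if (ChildWorkflowError.isChildWorkflowError(err)) {
  console.log(err.childWorkflowSnapshotId) // 'snap-123'
}
```

The guard is why `BlockExecutor` below can attach `childTraceSpans` and `childWorkflowSnapshotId` to the error output without unsafe casts.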


@@ -237,6 +237,9 @@ export class BlockExecutor {
if (ChildWorkflowError.isChildWorkflowError(error)) {
errorOutput.childTraceSpans = error.childTraceSpans
errorOutput.childWorkflowName = error.childWorkflowName
if (error.childWorkflowSnapshotId) {
errorOutput.childWorkflowSnapshotId = error.childWorkflowSnapshotId
}
}
this.state.setBlockOutput(node.id, errorOutput, duration)


@@ -2417,4 +2417,177 @@ describe('EdgeManager', () => {
expect(successReady).toContain(targetId)
})
})
describe('Condition with loop downstream - deactivation propagation', () => {
it('should deactivate nodes after loop when condition branch containing loop is deactivated', () => {
// Scenario: condition → (if) → sentinel_start → loopBody → sentinel_end → (loop_exit) → after_loop
// → (else) → other_branch
// When condition takes "else" path, the entire if-branch including nodes after the loop should be deactivated
const conditionId = 'condition'
const sentinelStartId = 'sentinel-start'
const loopBodyId = 'loop-body'
const sentinelEndId = 'sentinel-end'
const afterLoopId = 'after-loop'
const otherBranchId = 'other-branch'
const conditionNode = createMockNode(conditionId, [
{ target: sentinelStartId, sourceHandle: 'condition-if' },
{ target: otherBranchId, sourceHandle: 'condition-else' },
])
const sentinelStartNode = createMockNode(
sentinelStartId,
[{ target: loopBodyId }],
[conditionId]
)
const loopBodyNode = createMockNode(
loopBodyId,
[{ target: sentinelEndId }],
[sentinelStartId]
)
const sentinelEndNode = createMockNode(
sentinelEndId,
[
{ target: sentinelStartId, sourceHandle: 'loop_continue' },
{ target: afterLoopId, sourceHandle: 'loop_exit' },
],
[loopBodyId]
)
const afterLoopNode = createMockNode(afterLoopId, [], [sentinelEndId])
const otherBranchNode = createMockNode(otherBranchId, [], [conditionId])
const nodes = new Map<string, DAGNode>([
[conditionId, conditionNode],
[sentinelStartId, sentinelStartNode],
[loopBodyId, loopBodyNode],
[sentinelEndId, sentinelEndNode],
[afterLoopId, afterLoopNode],
[otherBranchId, otherBranchNode],
])
const dag = createMockDAG(nodes)
const edgeManager = new EdgeManager(dag)
// Condition selects "else" branch, deactivating the "if" branch (which contains the loop)
const readyNodes = edgeManager.processOutgoingEdges(conditionNode, { selectedOption: 'else' })
// Only otherBranch should be ready
expect(readyNodes).toContain(otherBranchId)
expect(readyNodes).not.toContain(sentinelStartId)
// afterLoop should NOT be ready - its incoming edge from sentinel_end should be deactivated
expect(readyNodes).not.toContain(afterLoopId)
// Verify that countActiveIncomingEdges returns 0 for afterLoop
// (meaning the loop_exit edge was properly deactivated)
// Note: isNodeReady returns true when all edges are deactivated (no pending deps),
// but the node won't be in readyNodes since it wasn't reached via an active path
expect(edgeManager.isNodeReady(afterLoopNode)).toBe(true) // All edges deactivated = no blocking deps
})
it('should deactivate nodes after parallel when condition branch containing parallel is deactivated', () => {
// Similar scenario with parallel instead of loop
const conditionId = 'condition'
const parallelStartId = 'parallel-start'
const parallelBodyId = 'parallel-body'
const parallelEndId = 'parallel-end'
const afterParallelId = 'after-parallel'
const otherBranchId = 'other-branch'
const conditionNode = createMockNode(conditionId, [
{ target: parallelStartId, sourceHandle: 'condition-if' },
{ target: otherBranchId, sourceHandle: 'condition-else' },
])
const parallelStartNode = createMockNode(
parallelStartId,
[{ target: parallelBodyId }],
[conditionId]
)
const parallelBodyNode = createMockNode(
parallelBodyId,
[{ target: parallelEndId }],
[parallelStartId]
)
const parallelEndNode = createMockNode(
parallelEndId,
[{ target: afterParallelId, sourceHandle: 'parallel_exit' }],
[parallelBodyId]
)
const afterParallelNode = createMockNode(afterParallelId, [], [parallelEndId])
const otherBranchNode = createMockNode(otherBranchId, [], [conditionId])
const nodes = new Map<string, DAGNode>([
[conditionId, conditionNode],
[parallelStartId, parallelStartNode],
[parallelBodyId, parallelBodyNode],
[parallelEndId, parallelEndNode],
[afterParallelId, afterParallelNode],
[otherBranchId, otherBranchNode],
])
const dag = createMockDAG(nodes)
const edgeManager = new EdgeManager(dag)
// Condition selects "else" branch
const readyNodes = edgeManager.processOutgoingEdges(conditionNode, { selectedOption: 'else' })
expect(readyNodes).toContain(otherBranchId)
expect(readyNodes).not.toContain(parallelStartId)
expect(readyNodes).not.toContain(afterParallelId)
// isNodeReady returns true when all edges are deactivated (no pending deps)
expect(edgeManager.isNodeReady(afterParallelNode)).toBe(true)
})
it('should still correctly handle normal loop exit (not deactivate when loop runs)', () => {
// When a loop actually executes and exits normally, after_loop should become ready
const sentinelStartId = 'sentinel-start'
const loopBodyId = 'loop-body'
const sentinelEndId = 'sentinel-end'
const afterLoopId = 'after-loop'
const sentinelStartNode = createMockNode(sentinelStartId, [{ target: loopBodyId }])
const loopBodyNode = createMockNode(
loopBodyId,
[{ target: sentinelEndId }],
[sentinelStartId]
)
const sentinelEndNode = createMockNode(
sentinelEndId,
[
{ target: sentinelStartId, sourceHandle: 'loop_continue' },
{ target: afterLoopId, sourceHandle: 'loop_exit' },
],
[loopBodyId]
)
const afterLoopNode = createMockNode(afterLoopId, [], [sentinelEndId])
const nodes = new Map<string, DAGNode>([
[sentinelStartId, sentinelStartNode],
[loopBodyId, loopBodyNode],
[sentinelEndId, sentinelEndNode],
[afterLoopId, afterLoopNode],
])
const dag = createMockDAG(nodes)
const edgeManager = new EdgeManager(dag)
// Simulate sentinel_end completing with loop_exit (loop is done)
const readyNodes = edgeManager.processOutgoingEdges(sentinelEndNode, {
selectedRoute: 'loop_exit',
})
// afterLoop should be ready
expect(readyNodes).toContain(afterLoopId)
})
})
})


@@ -243,7 +243,7 @@ export class EdgeManager {
}
for (const [, outgoingEdge] of targetNode.outgoingEdges) {
if (!this.isBackwardsEdge(outgoingEdge.sourceHandle)) {
this.deactivateEdgeAndDescendants(
targetId,
outgoingEdge.target,


@@ -25,6 +25,8 @@ import {
validateModelProvider,
} from '@/executor/utils/permission-check'
import { executeProviderRequest } from '@/providers'
import { transformAttachmentMessages } from '@/providers/attachment'
import type { ProviderId } from '@/providers/types'
import { getProviderFromModel, transformBlockTool } from '@/providers/utils'
import type { SerializedBlock } from '@/serializer/types'
import { executeTool } from '@/tools'
@@ -58,7 +60,15 @@ export class AgentBlockHandler implements BlockHandler {
const providerId = getProviderFromModel(model)
const formattedTools = await this.formatTools(ctx, filteredInputs.tools || [])
const streamingConfig = this.getStreamingConfig(ctx, block)
const rawMessages = await this.buildMessages(ctx, filteredInputs)
// Transform attachment messages to provider-specific format (async for file fetching)
const messages = rawMessages
? await transformAttachmentMessages(rawMessages, {
providerId: providerId as ProviderId,
model,
})
: undefined
const providerRequest = this.buildProviderRequest({
ctx,
@@ -806,17 +816,44 @@ export class AgentBlockHandler implements BlockHandler {
return messages.length > 0 ? messages : undefined
}
private extractValidMessages(messages?: Message[] | string): Message[] {
if (!messages) return []
// Handle raw JSON string input (from advanced mode)
let messageArray: unknown[]
if (typeof messages === 'string') {
const trimmed = messages.trim()
if (!trimmed) return []
try {
const parsed = JSON.parse(trimmed)
if (!Array.isArray(parsed)) {
logger.warn('Parsed messages JSON is not an array', { parsed })
return []
}
messageArray = parsed
} catch (error) {
logger.warn('Failed to parse messages JSON string', {
error,
messages: trimmed.substring(0, 100),
})
return []
}
} else if (Array.isArray(messages)) {
messageArray = messages
} else {
return []
}
return messageArray.filter((msg): msg is Message => {
if (!msg || typeof msg !== 'object') return false
const m = msg as Record<string, unknown>
return (
'role' in m &&
'content' in m &&
typeof m.role === 'string' &&
['system', 'user', 'assistant', 'attachment'].includes(m.role)
)
})
}
private processMemories(memories: any): Message[] {
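The normalization above can be shown as a standalone sketch (the `Message` shape is reduced to the fields the filter checks; logging is dropped): advanced mode may hand the handler a raw JSON string, so the function parses defensively and degrades to an empty list on any malformed input.

```typescript
// Standalone sketch of string-or-array message normalization.
interface Message {
  role: string
  content: string
}

function extractValidMessages(messages?: Message[] | string): Message[] {
  if (!messages) return []

  // Advanced mode passes a raw JSON string; parse it defensively.
  let messageArray: unknown[]
  if (typeof messages === 'string') {
    const trimmed = messages.trim()
    if (!trimmed) return []
    try {
      const parsed = JSON.parse(trimmed)
      if (!Array.isArray(parsed)) return []
      messageArray = parsed
    } catch {
      return []
    }
  } else if (Array.isArray(messages)) {
    messageArray = messages
  } else {
    return []
  }

  // Keep only well-formed messages with a recognized role.
  return messageArray.filter((msg): msg is Message => {
    if (!msg || typeof msg !== 'object') return false
    const m = msg as Record<string, unknown>
    return (
      typeof m.role === 'string' &&
      'content' in m &&
      ['system', 'user', 'assistant', 'attachment'].includes(m.role)
    )
  })
}

// Unknown roles are dropped; invalid JSON yields an empty list:
extractValidMessages('[{"role":"user","content":"hi"},{"role":"bogus","content":"x"}]')
extractValidMessages('not json')
```

Returning `[]` instead of throwing keeps a typo in the advanced-mode JSON from failing the whole block execution.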


@@ -6,8 +6,8 @@ export interface AgentInputs {
systemPrompt?: string
userPrompt?: string | object
memories?: any // Legacy memory block output
// New message array input (from messages-input subblock or raw JSON from advanced mode)
messages?: Message[] | string
// Memory configuration
memoryType?: 'none' | 'conversation' | 'sliding_window' | 'sliding_window_tokens'
conversationId?: string // Required for all non-none memory types
@@ -42,9 +42,25 @@ export interface ToolInput {
customToolId?: string
}
/**
* Attachment content (files, images, documents)
*/
export interface AttachmentContent {
/** Source type: how the data was provided */
sourceType: 'url' | 'base64' | 'file'
/** The URL or base64 data */
data: string
/** MIME type (e.g., 'image/png', 'application/pdf', 'audio/mp3') */
mimeType?: string
/** Optional filename for file uploads */
fileName?: string
}
export interface Message {
role: 'system' | 'user' | 'assistant' | 'attachment'
content: string
/** Attachment content for 'attachment' role messages */
attachment?: AttachmentContent
executionId?: string
function_call?: any
tool_calls?: any[]
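A hypothetical message list using the new `attachment` role and `AttachmentContent` shape might look like this (the URL, MIME type, and prompt text are illustrative, not from the source):

```typescript
// Illustrative construction of a message list with an 'attachment' entry.
interface AttachmentContent {
  sourceType: 'url' | 'base64' | 'file'
  data: string
  mimeType?: string
  fileName?: string
}

interface Message {
  role: 'system' | 'user' | 'assistant' | 'attachment'
  content: string
  attachment?: AttachmentContent
}

const messages: Message[] = [
  { role: 'system', content: 'You describe images.' },
  {
    // Attachment messages carry the file reference in `attachment`,
    // not in `content`.
    role: 'attachment',
    content: '',
    attachment: {
      sourceType: 'url',
      data: 'https://example.com/diagram.png',
      mimeType: 'image/png',
    },
  },
  { role: 'user', content: 'What does this diagram show?' },
]

const attachments = messages.filter((m) => m.role === 'attachment')
console.log(attachments.length) // 1
```

A provider-side transform (as in `transformAttachmentMessages` in the handler diff) would then fold such entries into each provider's multimodal content format.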


@@ -198,6 +198,7 @@ describe('WorkflowBlockHandler', () => {
expect(result).toEqual({
success: true,
childWorkflowId: 'child-id',
childWorkflowName: 'Child Workflow',
result: { data: 'test result' },
childTraceSpans: [],
@@ -235,6 +236,7 @@ describe('WorkflowBlockHandler', () => {
expect(result).toEqual({
success: true,
childWorkflowId: 'child-id',
childWorkflowName: 'Child Workflow',
result: { nested: 'data' },
childTraceSpans: [],


@@ -1,4 +1,5 @@
import { createLogger } from '@sim/logger'
import { snapshotService } from '@/lib/logs/execution/snapshot/service'
import { buildTraceSpans } from '@/lib/logs/execution/trace-spans/trace-spans'
import type { TraceSpan } from '@/lib/logs/types'
import type { BlockOutput } from '@/blocks/types'
@@ -57,6 +58,7 @@ export class WorkflowBlockHandler implements BlockHandler {
const workflowMetadata = workflows[workflowId]
let childWorkflowName = workflowMetadata?.name || workflowId
let childWorkflowSnapshotId: string | undefined
try {
const currentDepth = (ctx.workflowId?.split('_sub_').length || 1) - 1
if (currentDepth >= DEFAULTS.MAX_WORKFLOW_DEPTH) {
@@ -107,6 +109,12 @@ export class WorkflowBlockHandler implements BlockHandler {
childWorkflowInput = inputs.input
}
const childSnapshotResult = await snapshotService.createSnapshotWithDeduplication(
workflowId,
childWorkflow.workflowState
)
childWorkflowSnapshotId = childSnapshotResult.snapshot.id
const subExecutor = new Executor({
workflow: childWorkflow.serializedState,
workflowInput: childWorkflowInput,
@@ -139,7 +147,8 @@ export class WorkflowBlockHandler implements BlockHandler {
workflowId,
childWorkflowName,
duration,
childTraceSpans,
childWorkflowSnapshotId
)
return mappedResult
@@ -172,6 +181,7 @@ export class WorkflowBlockHandler implements BlockHandler {
childWorkflowName,
childTraceSpans,
executionResult,
childWorkflowSnapshotId,
cause: error instanceof Error ? error : undefined,
})
}
@@ -279,6 +289,10 @@ export class WorkflowBlockHandler implements BlockHandler {
)
const workflowVariables = (workflowData.variables as Record<string, any>) || {}
const workflowStateWithVariables = {
...workflowState,
variables: workflowVariables,
}
if (Object.keys(workflowVariables).length > 0) {
logger.info(
@@ -290,6 +304,7 @@ export class WorkflowBlockHandler implements BlockHandler {
name: workflowData.name,
serializedState: serializedWorkflow,
variables: workflowVariables,
workflowState: workflowStateWithVariables,
rawBlocks: workflowState.blocks,
}
}
@@ -358,11 +373,16 @@ export class WorkflowBlockHandler implements BlockHandler {
)
const workflowVariables = (wfData?.variables as Record<string, any>) || {}
const workflowStateWithVariables = {
...deployedState,
variables: workflowVariables,
}
return {
name: wfData?.name || DEFAULTS.WORKFLOW_NAME,
serializedState: serializedWorkflow,
variables: workflowVariables,
workflowState: workflowStateWithVariables,
rawBlocks: deployedState.blocks,
}
}
@@ -504,7 +524,8 @@ export class WorkflowBlockHandler implements BlockHandler {
childWorkflowId: string,
childWorkflowName: string,
duration: number,
childTraceSpans?: WorkflowTraceSpan[],
childWorkflowSnapshotId?: string
): BlockOutput {
const success = childResult.success !== false
const result = childResult.output || {}
@@ -515,12 +536,15 @@ export class WorkflowBlockHandler implements BlockHandler {
message: `"${childWorkflowName}" failed: ${childResult.error || 'Child workflow execution failed'}`,
childWorkflowName,
childTraceSpans: childTraceSpans || [],
childWorkflowSnapshotId,
})
}
return {
success: true,
childWorkflowName,
childWorkflowId,
...(childWorkflowSnapshotId ? { childWorkflowSnapshotId } : {}),
result,
childTraceSpans: childTraceSpans || [],
} as Record<string, any>
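The `createSnapshotWithDeduplication` call above suggests content-addressed storage: identical child-workflow states map to the same snapshot id instead of being stored twice. A minimal sketch of that idea, with an assumed in-memory store and return shape (the real `snapshotService` API is not shown in this diff):

```typescript
// Sketch of snapshot deduplication via content hashing (assumed shapes).
import { createHash } from 'node:crypto'

const store = new Map<string, Record<string, unknown>>()

function createSnapshotWithDeduplication(
  workflowId: string,
  state: Record<string, unknown>
): { snapshot: { id: string }; reused: boolean } {
  // Hash workflow id + serialized state so identical states share an id.
  const id = createHash('sha256')
    .update(workflowId + JSON.stringify(state))
    .digest('hex')
    .slice(0, 16)
  const reused = store.has(id)
  if (!reused) store.set(id, state)
  return { snapshot: { id }, reused }
}

const a = createSnapshotWithDeduplication('wf-1', { blocks: {} })
const b = createSnapshotWithDeduplication('wf-1', { blocks: {} })
console.log(a.snapshot.id === b.snapshot.id) // true
```

Under this scheme, repeated executions of an unchanged child workflow reference one snapshot, which keeps the `childWorkflowSnapshotId` threaded through results and errors cheap to record.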


@@ -1,6 +1,6 @@
import type { TraceSpan } from '@/lib/logs/types'
import type { PermissionGroupConfig } from '@/lib/permission-groups/types'
import type { BlockOutput } from '@/blocks/types'
import type { PermissionGroupConfig } from '@/ee/access-control/lib/types'
import type { RunFromBlockContext } from '@/executor/utils/run-from-block'
import type { SerializedBlock, SerializedWorkflow } from '@/serializer/types'


@@ -7,7 +7,7 @@ import { isAccessControlEnabled, isHosted } from '@/lib/core/config/feature-flag
import {
type PermissionGroupConfig,
parsePermissionGroupConfig,
} from '@/lib/permission-groups/types'
import type { ExecutionContext } from '@/executor/types'
import { getProviderFromModel } from '@/providers/utils'


@@ -210,6 +210,7 @@ export interface ExecutionSnapshotData {
executionId: string
workflowId: string
workflowState: Record<string, unknown>
childWorkflowSnapshots?: Record<string, Record<string, unknown>>
executionMetadata: {
trigger: string
startedAt: string


@@ -1,5 +1,5 @@
import { useMutation, useQuery, useQueryClient } from '@tanstack/react-query'
import type { PermissionGroupConfig } from '@/lib/permission-groups/types'
import { fetchJson } from '@/hooks/selectors/helpers'
export interface PermissionGroup {

View File

@@ -1,12 +1,12 @@
 import { useMemo } from 'react'
 import { getEnv, isTruthy } from '@/lib/core/config/env'
 import { isAccessControlEnabled, isHosted } from '@/lib/core/config/feature-flags'
-import { useUserPermissionConfig } from '@/ee/access-control/hooks/permission-groups'
 import {
   DEFAULT_PERMISSION_GROUP_CONFIG,
   type PermissionGroupConfig,
-} from '@/ee/access-control/lib/types'
+} from '@/lib/permission-groups/types'
 import { useOrganizations } from '@/hooks/queries/organization'
+import { useUserPermissionConfig } from '@/hooks/queries/permission-groups'
 export interface PermissionConfigResult {
   config: PermissionGroupConfig

View File

@@ -59,8 +59,8 @@ import { sendEmail } from '@/lib/messaging/email/mailer'
 import { getFromEmailAddress, getPersonalEmailFrom } from '@/lib/messaging/email/utils'
 import { quickValidateEmail } from '@/lib/messaging/email/validation'
 import { syncAllWebhooksForCredentialSet } from '@/lib/webhooks/utils.server'
-import { SSO_TRUSTED_PROVIDERS } from '@/ee/sso/lib/constants'
 import { createAnonymousSession, ensureAnonymousUserExists } from './anonymous'
+import { SSO_TRUSTED_PROVIDERS } from './sso/constants'
 const logger = createLogger('Auth')

View File

@@ -0,0 +1,42 @@
/**
 * Feature flag for server-side copilot orchestration.
 */
export const COPILOT_SERVER_ORCHESTRATED = true

export const INTERRUPT_TOOL_NAMES = [
  'set_global_workflow_variables',
  'run_workflow',
  'manage_mcp_tool',
  'manage_custom_tool',
  'deploy_mcp',
  'deploy_chat',
  'deploy_api',
  'create_workspace_mcp_server',
  'set_environment_variables',
  'make_api_request',
  'oauth_request_access',
  'navigate_ui',
  'knowledge_base',
] as const

export const INTERRUPT_TOOL_SET = new Set<string>(INTERRUPT_TOOL_NAMES)

export const SUBAGENT_TOOL_NAMES = [
  'debug',
  'edit',
  'plan',
  'test',
  'deploy',
  'auth',
  'research',
  'knowledge',
  'custom_tool',
  'tour',
  'info',
  'workflow',
  'evaluate',
  'superagent',
] as const

export const SUBAGENT_TOOL_SET = new Set<string>(SUBAGENT_TOOL_NAMES)
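These two sets gate how the orchestrator dispatches tool calls: interrupt tools pause for a user decision, subagent tools are delegated to the copilot backend, and anything else executes server-side. A minimal standalone sketch of that routing (the `routeTool` helper and the trimmed-down sets are illustrative, not part of the diff):

```typescript
// Hypothetical dispatcher sketch; the sets mirror a subset of the constants above.
const INTERRUPT_TOOL_SET = new Set(['run_workflow', 'deploy_api', 'make_api_request'])
const SUBAGENT_TOOL_SET = new Set(['debug', 'edit', 'plan'])

type Route = 'confirm' | 'delegate' | 'execute'

function routeTool(name: string): Route {
  if (INTERRUPT_TOOL_SET.has(name)) return 'confirm' // pause for user confirmation
  if (SUBAGENT_TOOL_SET.has(name)) return 'delegate' // copilot backend runs it
  return 'execute' // run server-side immediately
}

console.log(routeTool('run_workflow')) // 'confirm'
console.log(routeTool('plan')) // 'delegate'
console.log(routeTool('get_blocks')) // 'execute'
```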

View File

@@ -0,0 +1,181 @@
import { createLogger } from '@sim/logger'
import { SIM_AGENT_API_URL_DEFAULT } from '@/lib/copilot/constants'
import { env } from '@/lib/core/config/env'
import { parseSSEStream } from '@/lib/copilot/orchestrator/sse-parser'
import {
handleSubagentRouting,
sseHandlers,
subAgentHandlers,
} from '@/lib/copilot/orchestrator/sse-handlers'
import { prepareExecutionContext } from '@/lib/copilot/orchestrator/tool-executor'
import type {
ExecutionContext,
OrchestratorOptions,
OrchestratorResult,
SSEEvent,
StreamingContext,
ToolCallSummary,
} from '@/lib/copilot/orchestrator/types'
const logger = createLogger('CopilotOrchestrator')
const SIM_AGENT_API_URL = env.SIM_AGENT_API_URL || SIM_AGENT_API_URL_DEFAULT
export interface OrchestrateStreamOptions extends OrchestratorOptions {
userId: string
workflowId: string
chatId?: string
}
/**
* Orchestrate a copilot SSE stream and execute tool calls server-side.
*/
export async function orchestrateCopilotStream(
requestPayload: Record<string, any>,
options: OrchestrateStreamOptions
): Promise<OrchestratorResult> {
const { userId, workflowId, chatId, timeout = 300000, abortSignal } = options
const execContext = await prepareExecutionContext(userId, workflowId)
const context: StreamingContext = {
chatId,
conversationId: undefined,
messageId: requestPayload?.messageId || crypto.randomUUID(),
accumulatedContent: '',
contentBlocks: [],
toolCalls: new Map(),
currentThinkingBlock: null,
isInThinkingBlock: false,
subAgentParentToolCallId: undefined,
subAgentContent: {},
subAgentToolCalls: {},
pendingContent: '',
streamComplete: false,
wasAborted: false,
errors: [],
}
try {
const response = await fetch(`${SIM_AGENT_API_URL}/api/chat-completion-streaming`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
...(env.COPILOT_API_KEY ? { 'x-api-key': env.COPILOT_API_KEY } : {}),
},
body: JSON.stringify(requestPayload),
signal: abortSignal,
})
if (!response.ok) {
const errorText = await response.text().catch(() => '')
throw new Error(`Copilot backend error (${response.status}): ${errorText || response.statusText}`)
}
if (!response.body) {
throw new Error('Copilot backend response missing body')
}
const reader = response.body.getReader()
const decoder = new TextDecoder()
const timeoutId = setTimeout(() => {
context.errors.push('Request timed out')
context.streamComplete = true
reader.cancel().catch(() => {})
}, timeout)
try {
for await (const event of parseSSEStream(reader, decoder, abortSignal)) {
if (abortSignal?.aborted) {
context.wasAborted = true
break
}
await forwardEvent(event, options)
if (event.type === 'subagent_start') {
const toolCallId = event.data?.tool_call_id
if (toolCallId) {
context.subAgentParentToolCallId = toolCallId
context.subAgentContent[toolCallId] = ''
context.subAgentToolCalls[toolCallId] = []
}
continue
}
if (event.type === 'subagent_end') {
context.subAgentParentToolCallId = undefined
continue
}
if (handleSubagentRouting(event, context)) {
const handler = subAgentHandlers[event.type]
if (handler) {
await handler(event, context, execContext, options)
}
if (context.streamComplete) break
continue
}
const handler = sseHandlers[event.type]
if (handler) {
await handler(event, context, execContext, options)
}
if (context.streamComplete) break
}
} finally {
clearTimeout(timeoutId)
}
const result = buildResult(context)
await options.onComplete?.(result)
return result
} catch (error) {
const err = error instanceof Error ? error : new Error('Copilot orchestration failed')
logger.error('Copilot orchestration failed', { error: err.message })
await options.onError?.(err)
return {
success: false,
content: '',
contentBlocks: [],
toolCalls: [],
chatId: context.chatId,
conversationId: context.conversationId,
error: err.message,
}
}
}
async function forwardEvent(event: SSEEvent, options: OrchestratorOptions): Promise<void> {
try {
await options.onEvent?.(event)
} catch (error) {
logger.warn('Failed to forward SSE event', {
type: event.type,
error: error instanceof Error ? error.message : String(error),
})
}
}
function buildResult(context: StreamingContext): OrchestratorResult {
const toolCalls: ToolCallSummary[] = Array.from(context.toolCalls.values()).map((toolCall) => ({
id: toolCall.id,
name: toolCall.name,
status: toolCall.status,
params: toolCall.params,
result: toolCall.result?.output,
error: toolCall.error,
durationMs:
toolCall.endTime && toolCall.startTime ? toolCall.endTime - toolCall.startTime : undefined,
}))
return {
success: context.errors.length === 0,
content: context.accumulatedContent,
contentBlocks: context.contentBlocks,
toolCalls,
chatId: context.chatId,
conversationId: context.conversationId,
errors: context.errors.length ? context.errors : undefined,
}
}

View File

@@ -0,0 +1,138 @@
import { db } from '@sim/db'
import { copilotChats } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { getRedisClient } from '@/lib/core/config/redis'
const logger = createLogger('CopilotOrchestratorPersistence')
/**
* Create a new copilot chat record.
*/
export async function createChat(params: {
userId: string
workflowId: string
model: string
}): Promise<{ id: string }> {
const [chat] = await db
.insert(copilotChats)
.values({
userId: params.userId,
workflowId: params.workflowId,
model: params.model,
messages: [],
})
.returning({ id: copilotChats.id })
return { id: chat.id }
}
/**
* Load an existing chat for a user.
*/
export async function loadChat(chatId: string, userId: string) {
const [chat] = await db
.select()
.from(copilotChats)
.where(and(eq(copilotChats.id, chatId), eq(copilotChats.userId, userId)))
.limit(1)
return chat || null
}
/**
* Save chat messages and metadata.
*/
export async function saveMessages(
chatId: string,
messages: any[],
options?: {
title?: string
conversationId?: string
planArtifact?: string | null
config?: { mode?: string; model?: string }
}
): Promise<void> {
await db
.update(copilotChats)
.set({
messages,
updatedAt: new Date(),
...(options?.title ? { title: options.title } : {}),
...(options?.conversationId ? { conversationId: options.conversationId } : {}),
...(options?.planArtifact !== undefined ? { planArtifact: options.planArtifact } : {}),
...(options?.config ? { config: options.config } : {}),
})
.where(eq(copilotChats.id, chatId))
}
/**
* Update the conversationId for a chat without overwriting messages.
*/
export async function updateChatConversationId(chatId: string, conversationId: string): Promise<void> {
await db
.update(copilotChats)
.set({
conversationId,
updatedAt: new Date(),
})
.where(eq(copilotChats.id, chatId))
}
/**
* Set a tool call confirmation status in Redis.
*/
export async function setToolConfirmation(
toolCallId: string,
status: 'accepted' | 'rejected' | 'background' | 'pending',
message?: string
): Promise<boolean> {
const redis = getRedisClient()
if (!redis) {
logger.warn('Redis client not available for tool confirmation')
return false
}
const key = `tool_call:${toolCallId}`
const payload = {
status,
message: message || null,
timestamp: new Date().toISOString(),
}
try {
await redis.set(key, JSON.stringify(payload), 'EX', 86400)
return true
} catch (error) {
logger.error('Failed to set tool confirmation', {
toolCallId,
error: error instanceof Error ? error.message : String(error),
})
return false
}
}
/**
* Get a tool call confirmation status from Redis.
*/
export async function getToolConfirmation(toolCallId: string): Promise<{
status: string
message?: string
timestamp?: string
} | null> {
const redis = getRedisClient()
if (!redis) return null
try {
const data = await redis.get(`tool_call:${toolCallId}`)
if (!data) return null
return JSON.parse(data) as { status: string; message?: string; timestamp?: string }
} catch (error) {
logger.error('Failed to read tool confirmation', {
toolCallId,
error: error instanceof Error ? error.message : String(error),
})
return null
}
}
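The confirmation helpers above define a small key/payload contract: `tool_call:<id>` mapped to a JSON payload with a 24-hour expiry (`EX 86400`). A sketch of that contract with an in-memory map standing in for Redis (the store and function bodies here are illustrative; the real module uses an ioredis client):

```typescript
// In-memory stand-in for the Redis confirmation store (key: `tool_call:<id>`,
// value: JSON payload; the real implementation adds a 24h TTL).
type Confirmation = { status: string; message: string | null; timestamp: string }
const store = new Map<string, string>()

function setToolConfirmation(toolCallId: string, status: string, message?: string): void {
  const payload: Confirmation = {
    status,
    message: message ?? null,
    timestamp: new Date().toISOString(),
  }
  store.set(`tool_call:${toolCallId}`, JSON.stringify(payload))
}

function getToolConfirmation(toolCallId: string): Confirmation | null {
  const data = store.get(`tool_call:${toolCallId}`)
  return data ? (JSON.parse(data) as Confirmation) : null
}

setToolConfirmation('tc_123', 'accepted')
console.log(getToolConfirmation('tc_123')?.status) // 'accepted'
console.log(getToolConfirmation('missing')) // null
```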

View File

@@ -0,0 +1,439 @@
import { createLogger } from '@sim/logger'
import type {
ContentBlock,
ExecutionContext,
OrchestratorOptions,
SSEEvent,
StreamingContext,
ToolCallState,
} from '@/lib/copilot/orchestrator/types'
import { executeToolServerSide, markToolComplete } from '@/lib/copilot/orchestrator/tool-executor'
import { getToolConfirmation } from '@/lib/copilot/orchestrator/persistence'
import { INTERRUPT_TOOL_SET, SUBAGENT_TOOL_SET } from '@/lib/copilot/orchestrator/config'
const logger = createLogger('CopilotSseHandlers')
/**
* Respond tools are internal to the copilot's subagent system.
* They're used by subagents to signal completion and should NOT be executed by the sim side.
* The copilot backend handles these internally.
*/
const RESPOND_TOOL_SET = new Set([
'plan_respond',
'edit_respond',
'debug_respond',
'info_respond',
'research_respond',
'deploy_respond',
'superagent_respond',
'discovery_respond',
])
export type SSEHandler = (
event: SSEEvent,
context: StreamingContext,
execContext: ExecutionContext,
options: OrchestratorOptions
) => void | Promise<void>
function addContentBlock(
context: StreamingContext,
block: Omit<ContentBlock, 'timestamp'>
): void {
context.contentBlocks.push({
...block,
timestamp: Date.now(),
})
}
async function executeToolAndReport(
toolCallId: string,
context: StreamingContext,
execContext: ExecutionContext,
options?: OrchestratorOptions
): Promise<void> {
const toolCall = context.toolCalls.get(toolCallId)
if (!toolCall) return
if (toolCall.status === 'executing') return
toolCall.status = 'executing'
try {
const result = await executeToolServerSide(toolCall, execContext)
toolCall.status = result.success ? 'success' : 'error'
toolCall.result = result
toolCall.error = result.error
toolCall.endTime = Date.now()
// If create_workflow was successful, update the execution context with the new workflowId
// This ensures subsequent tools in the same stream have access to the workflowId
if (
toolCall.name === 'create_workflow' &&
result.success &&
result.output?.workflowId &&
!execContext.workflowId
) {
execContext.workflowId = result.output.workflowId
if (result.output.workspaceId) {
execContext.workspaceId = result.output.workspaceId
}
}
await markToolComplete(
toolCall.id,
toolCall.name,
result.success ? 200 : 500,
result.error || (result.success ? 'Tool completed' : 'Tool failed'),
result.output
)
await options?.onEvent?.({
type: 'tool_result',
toolCallId: toolCall.id,
data: {
id: toolCall.id,
name: toolCall.name,
success: result.success,
result: result.output,
},
})
} catch (error) {
toolCall.status = 'error'
toolCall.error = error instanceof Error ? error.message : String(error)
toolCall.endTime = Date.now()
await markToolComplete(toolCall.id, toolCall.name, 500, toolCall.error)
await options?.onEvent?.({
type: 'tool_error',
toolCallId: toolCall.id,
data: {
id: toolCall.id,
name: toolCall.name,
error: toolCall.error,
},
})
}
}
async function waitForToolDecision(
toolCallId: string,
timeoutMs: number
): Promise<{ status: string; message?: string } | null> {
const start = Date.now()
while (Date.now() - start < timeoutMs) {
const decision = await getToolConfirmation(toolCallId)
if (decision?.status) {
return decision
}
await new Promise((resolve) => setTimeout(resolve, 100))
}
return null
}
export const sseHandlers: Record<string, SSEHandler> = {
chat_id: (event, context) => {
context.chatId = event.data?.chatId
},
title_updated: () => {},
tool_result: (event, context) => {
const toolCallId = event.toolCallId || event.data?.id
if (!toolCallId) return
const current = context.toolCalls.get(toolCallId)
if (!current) return
// Determine success: explicit success field, or if there's result data without explicit failure
const hasExplicitSuccess = event.data?.success !== undefined || event.data?.result?.success !== undefined
const explicitSuccess = event.data?.success ?? event.data?.result?.success
const hasResultData = event.data?.result !== undefined || event.data?.data !== undefined
const hasError = !!event.data?.error || !!event.data?.result?.error
// If explicitly set, use that; otherwise infer from data presence
const success = hasExplicitSuccess ? !!explicitSuccess : (hasResultData && !hasError)
current.status = success ? 'success' : 'error'
current.endTime = Date.now()
if (hasResultData) {
current.result = {
success,
output: event.data?.result || event.data?.data,
}
}
if (hasError) {
current.error = event.data?.error || event.data?.result?.error
}
},
tool_error: (event, context) => {
const toolCallId = event.toolCallId || event.data?.id
if (!toolCallId) return
const current = context.toolCalls.get(toolCallId)
if (!current) return
current.status = 'error'
current.error = event.data?.error || 'Tool execution failed'
current.endTime = Date.now()
},
tool_generating: (event, context) => {
const toolCallId = event.toolCallId || event.data?.toolCallId || event.data?.id
const toolName = event.toolName || event.data?.toolName || event.data?.name
if (!toolCallId || !toolName) return
if (!context.toolCalls.has(toolCallId)) {
context.toolCalls.set(toolCallId, {
id: toolCallId,
name: toolName,
status: 'pending',
startTime: Date.now(),
})
}
},
tool_call: async (event, context, execContext, options) => {
const toolData = event.data || {}
const toolCallId = toolData.id || event.toolCallId
const toolName = toolData.name || event.toolName
if (!toolCallId || !toolName) return
const args = toolData.arguments || toolData.input || event.data?.input
const isPartial = toolData.partial === true
const existing = context.toolCalls.get(toolCallId)
const toolCall: ToolCallState = existing
? { ...existing, status: 'pending', params: args || existing.params }
: {
id: toolCallId,
name: toolName,
status: 'pending',
params: args,
startTime: Date.now(),
}
context.toolCalls.set(toolCallId, toolCall)
addContentBlock(context, { type: 'tool_call', toolCall })
if (isPartial) return
// Subagent tools are executed by the copilot backend, not sim side
if (SUBAGENT_TOOL_SET.has(toolName)) {
return
}
// Respond tools are internal to copilot's subagent system - skip execution
// The copilot backend handles these internally to signal subagent completion
if (RESPOND_TOOL_SET.has(toolName)) {
toolCall.status = 'success'
toolCall.endTime = Date.now()
toolCall.result = { success: true, output: 'Internal respond tool - handled by copilot backend' }
return
}
const isInterruptTool = INTERRUPT_TOOL_SET.has(toolName)
const isInteractive = options.interactive === true
if (isInterruptTool && isInteractive) {
const decision = await waitForToolDecision(toolCallId, options.timeout || 600000)
if (decision?.status === 'accepted' || decision?.status === 'success') {
await executeToolAndReport(toolCallId, context, execContext, options)
return
}
if (decision?.status === 'rejected' || decision?.status === 'error') {
toolCall.status = 'rejected'
toolCall.endTime = Date.now()
await markToolComplete(
toolCall.id,
toolCall.name,
400,
decision.message || 'Tool execution rejected',
{ skipped: true, reason: 'user_rejected' }
)
await options.onEvent?.({
type: 'tool_result',
toolCallId: toolCall.id,
data: {
id: toolCall.id,
name: toolCall.name,
success: false,
result: { skipped: true, reason: 'user_rejected' },
},
})
return
}
if (decision?.status === 'background') {
toolCall.status = 'skipped'
toolCall.endTime = Date.now()
await markToolComplete(
toolCall.id,
toolCall.name,
202,
decision.message || 'Tool execution moved to background',
{ background: true }
)
await options.onEvent?.({
type: 'tool_result',
toolCallId: toolCall.id,
data: {
id: toolCall.id,
name: toolCall.name,
success: true,
result: { background: true },
},
})
return
}
}
if (options.autoExecuteTools !== false) {
await executeToolAndReport(toolCallId, context, execContext, options)
}
},
reasoning: (event, context) => {
const phase = event.data?.phase || event.data?.data?.phase
if (phase === 'start') {
context.isInThinkingBlock = true
context.currentThinkingBlock = {
type: 'thinking',
content: '',
timestamp: Date.now(),
}
return
}
if (phase === 'end') {
if (context.currentThinkingBlock) {
context.contentBlocks.push(context.currentThinkingBlock)
}
context.isInThinkingBlock = false
context.currentThinkingBlock = null
return
}
const chunk = typeof event.data === 'string' ? event.data : event.data?.data || event.data?.content
if (!chunk || !context.currentThinkingBlock) return
context.currentThinkingBlock.content = `${context.currentThinkingBlock.content || ''}${chunk}`
},
content: (event, context) => {
const chunk = typeof event.data === 'string' ? event.data : event.data?.content || event.data?.data
if (!chunk) return
context.accumulatedContent += chunk
addContentBlock(context, { type: 'text', content: chunk })
},
done: (event, context) => {
if (event.data?.responseId) {
context.conversationId = event.data.responseId
}
context.streamComplete = true
},
start: (event, context) => {
if (event.data?.responseId) {
context.conversationId = event.data.responseId
}
},
error: (event, context) => {
const message =
event.data?.message || event.data?.error || (typeof event.data === 'string' ? event.data : null)
if (message) {
context.errors.push(message)
}
context.streamComplete = true
},
}
export const subAgentHandlers: Record<string, SSEHandler> = {
content: (event, context) => {
const parentToolCallId = context.subAgentParentToolCallId
if (!parentToolCallId || !event.data) return
const chunk = typeof event.data === 'string' ? event.data : event.data?.content || ''
if (!chunk) return
context.subAgentContent[parentToolCallId] = (context.subAgentContent[parentToolCallId] || '') + chunk
addContentBlock(context, { type: 'subagent_text', content: chunk })
},
tool_call: async (event, context, execContext, options) => {
const parentToolCallId = context.subAgentParentToolCallId
if (!parentToolCallId) return
const toolData = event.data || {}
const toolCallId = toolData.id || event.toolCallId
const toolName = toolData.name || event.toolName
if (!toolCallId || !toolName) return
const isPartial = toolData.partial === true
const args = toolData.arguments || toolData.input || event.data?.input
const toolCall: ToolCallState = {
id: toolCallId,
name: toolName,
status: 'pending',
params: args,
startTime: Date.now(),
}
// Store in both places - subAgentToolCalls for tracking and toolCalls for executeToolAndReport
if (!context.subAgentToolCalls[parentToolCallId]) {
context.subAgentToolCalls[parentToolCallId] = []
}
context.subAgentToolCalls[parentToolCallId].push(toolCall)
context.toolCalls.set(toolCallId, toolCall)
if (isPartial) return
// Respond tools are internal to copilot's subagent system - skip execution
if (RESPOND_TOOL_SET.has(toolName)) {
toolCall.status = 'success'
toolCall.endTime = Date.now()
toolCall.result = { success: true, output: 'Internal respond tool - handled by copilot backend' }
return
}
if (options.autoExecuteTools !== false) {
await executeToolAndReport(toolCallId, context, execContext, options)
}
},
tool_result: (event, context) => {
const parentToolCallId = context.subAgentParentToolCallId
if (!parentToolCallId) return
const toolCallId = event.toolCallId || event.data?.id
if (!toolCallId) return
// Update in subAgentToolCalls
const toolCalls = context.subAgentToolCalls[parentToolCallId] || []
const subAgentToolCall = toolCalls.find((tc) => tc.id === toolCallId)
// Also update in main toolCalls (where we added it for execution)
const mainToolCall = context.toolCalls.get(toolCallId)
// Use same success inference logic as main handler
const hasExplicitSuccess =
event.data?.success !== undefined || event.data?.result?.success !== undefined
const explicitSuccess = event.data?.success ?? event.data?.result?.success
const hasResultData = event.data?.result !== undefined || event.data?.data !== undefined
const hasError = !!event.data?.error || !!event.data?.result?.error
const success = hasExplicitSuccess ? !!explicitSuccess : hasResultData && !hasError
const status = success ? 'success' : 'error'
const endTime = Date.now()
const result = hasResultData
? { success, output: event.data?.result || event.data?.data }
: undefined
if (subAgentToolCall) {
subAgentToolCall.status = status
subAgentToolCall.endTime = endTime
if (result) subAgentToolCall.result = result
if (hasError) subAgentToolCall.error = event.data?.error || event.data?.result?.error
}
if (mainToolCall) {
mainToolCall.status = status
mainToolCall.endTime = endTime
if (result) mainToolCall.result = result
if (hasError) mainToolCall.error = event.data?.error || event.data?.result?.error
}
},
}
export function handleSubagentRouting(event: SSEEvent, context: StreamingContext): boolean {
if (!event.subagent) return false
if (!context.subAgentParentToolCallId) {
logger.warn('Subagent event missing parent tool call', {
type: event.type,
subagent: event.subagent,
})
return false
}
return true
}
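Both `tool_result` handlers above share one success-inference rule: an explicit `success` flag wins; otherwise success is inferred from the presence of result data and the absence of an error. Extracted as a standalone sketch (the `inferSuccess` name is ours, not the module's):

```typescript
// Success-inference rule used by the tool_result handlers above.
function inferSuccess(data: { success?: boolean; result?: any; data?: any; error?: string }): boolean {
  const hasExplicitSuccess = data.success !== undefined || data.result?.success !== undefined
  const explicitSuccess = data.success ?? data.result?.success
  const hasResultData = data.result !== undefined || data.data !== undefined
  const hasError = !!data.error || !!data.result?.error
  return hasExplicitSuccess ? !!explicitSuccess : hasResultData && !hasError
}

console.log(inferSuccess({ success: false, result: { ok: true } })) // false: explicit flag wins
console.log(inferSuccess({ result: { rows: 3 } })) // true: data present, no error
console.log(inferSuccess({ data: {}, error: 'boom' })) // false: error present
```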

View File

@@ -0,0 +1,72 @@
import { createLogger } from '@sim/logger'
import type { SSEEvent } from '@/lib/copilot/orchestrator/types'
const logger = createLogger('CopilotSseParser')
/**
* Parses SSE streams from the copilot backend into typed events.
*/
export async function* parseSSEStream(
reader: ReadableStreamDefaultReader<Uint8Array>,
decoder: TextDecoder,
abortSignal?: AbortSignal
): AsyncGenerator<SSEEvent> {
let buffer = ''
try {
while (true) {
if (abortSignal?.aborted) {
logger.info('SSE stream aborted by signal')
break
}
const { done, value } = await reader.read()
if (done) break
buffer += decoder.decode(value, { stream: true })
const lines = buffer.split('\n')
buffer = lines.pop() || ''
for (const line of lines) {
if (!line.trim()) continue
if (!line.startsWith('data: ')) continue
const jsonStr = line.slice(6)
if (jsonStr === '[DONE]') continue
try {
const event = JSON.parse(jsonStr) as SSEEvent
if (event?.type) {
yield event
}
} catch (error) {
logger.warn('Failed to parse SSE event', {
preview: jsonStr.slice(0, 200),
error: error instanceof Error ? error.message : String(error),
})
}
}
}
if (buffer.trim() && buffer.startsWith('data: ')) {
try {
const event = JSON.parse(buffer.slice(6)) as SSEEvent
if (event?.type) {
yield event
}
} catch (error) {
logger.warn('Failed to parse final SSE buffer', {
preview: buffer.slice(0, 200),
error: error instanceof Error ? error.message : String(error),
})
}
}
} finally {
try {
reader.releaseLock()
} catch {
logger.warn('Failed to release SSE reader lock')
}
}
}
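The parser above relies on standard SSE framing: events arrive as `data: <json>` lines, network chunks can split a line anywhere, and the trailing partial line is carried over to the next read. A self-contained sketch of that buffering rule (the `parseChunk` helper is illustrative, not the module's API):

```typescript
// Sketch of the SSE framing logic: accumulate chunks, split on newlines,
// keep the last (possibly partial) line as the carry-over buffer.
function parseChunk(buffer: string, chunk: string): { events: any[]; rest: string } {
  const events: any[] = []
  const lines = (buffer + chunk).split('\n')
  const rest = lines.pop() || '' // partial line: keep for the next chunk
  for (const line of lines) {
    if (!line.startsWith('data: ')) continue
    const jsonStr = line.slice(6)
    if (jsonStr === '[DONE]') continue
    try {
      const event = JSON.parse(jsonStr)
      if (event?.type) events.push(event)
    } catch {
      // malformed frame: skip it, as the real parser does (it logs a warning)
    }
  }
  return { events, rest }
}

// A frame split across two chunks is reassembled via the carried-over buffer.
const first = parseChunk('', 'data: {"type":"content","data":"hi"}\ndata: {"ty')
const second = parseChunk(first.rest, 'pe":"done"}\n')
console.log(first.events.length, second.events[0].type) // 1 'done'
```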

View File

@@ -0,0 +1,239 @@
import { createLogger } from '@sim/logger'
import { SIM_AGENT_API_URL_DEFAULT } from '@/lib/copilot/constants'
import { parseSSEStream } from '@/lib/copilot/orchestrator/sse-parser'
import {
sseHandlers,
subAgentHandlers,
handleSubagentRouting,
} from '@/lib/copilot/orchestrator/sse-handlers'
import { prepareExecutionContext } from '@/lib/copilot/orchestrator/tool-executor'
import type {
ExecutionContext,
OrchestratorOptions,
SSEEvent,
StreamingContext,
ToolCallSummary,
} from '@/lib/copilot/orchestrator/types'
import { env } from '@/lib/core/config/env'
import { getEffectiveDecryptedEnv } from '@/lib/environment/utils'
const logger = createLogger('CopilotSubagentOrchestrator')
const SIM_AGENT_API_URL = env.SIM_AGENT_API_URL || SIM_AGENT_API_URL_DEFAULT
export interface SubagentOrchestratorOptions extends OrchestratorOptions {
userId: string
workflowId?: string
workspaceId?: string
}
export interface SubagentOrchestratorResult {
success: boolean
content: string
toolCalls: ToolCallSummary[]
structuredResult?: {
type?: string
summary?: string
data?: any
success?: boolean
}
error?: string
errors?: string[]
}
export async function orchestrateSubagentStream(
agentId: string,
requestPayload: Record<string, any>,
options: SubagentOrchestratorOptions
): Promise<SubagentOrchestratorResult> {
const { userId, workflowId, workspaceId, timeout = 300000, abortSignal } = options
const execContext = await buildExecutionContext(userId, workflowId, workspaceId)
const context: StreamingContext = {
chatId: undefined,
conversationId: undefined,
messageId: requestPayload?.messageId || crypto.randomUUID(),
accumulatedContent: '',
contentBlocks: [],
toolCalls: new Map(),
currentThinkingBlock: null,
isInThinkingBlock: false,
subAgentParentToolCallId: undefined,
subAgentContent: {},
subAgentToolCalls: {},
pendingContent: '',
streamComplete: false,
wasAborted: false,
errors: [],
}
let structuredResult: SubagentOrchestratorResult['structuredResult']
try {
const response = await fetch(`${SIM_AGENT_API_URL}/api/subagent/${agentId}`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
...(env.COPILOT_API_KEY ? { 'x-api-key': env.COPILOT_API_KEY } : {}),
},
body: JSON.stringify({ ...requestPayload, stream: true, userId }),
signal: abortSignal,
})
if (!response.ok) {
const errorText = await response.text().catch(() => '')
throw new Error(
`Copilot backend error (${response.status}): ${errorText || response.statusText}`
)
}
if (!response.body) {
throw new Error('Copilot backend response missing body')
}
const reader = response.body.getReader()
const decoder = new TextDecoder()
const timeoutId = setTimeout(() => {
context.errors.push('Request timed out')
context.streamComplete = true
reader.cancel().catch(() => {})
}, timeout)
try {
for await (const event of parseSSEStream(reader, decoder, abortSignal)) {
if (abortSignal?.aborted) {
context.wasAborted = true
break
}
await forwardEvent(event, options)
if (event.type === 'structured_result' || event.type === 'subagent_result') {
structuredResult = normalizeStructuredResult(event.data)
context.streamComplete = true
continue
}
// Handle subagent_start/subagent_end events to track nested subagent calls
if (event.type === 'subagent_start') {
const toolCallId = event.data?.tool_call_id
if (toolCallId) {
context.subAgentParentToolCallId = toolCallId
context.subAgentContent[toolCallId] = ''
context.subAgentToolCalls[toolCallId] = []
}
continue
}
if (event.type === 'subagent_end') {
context.subAgentParentToolCallId = undefined
continue
}
// For direct subagent calls, events may have the subagent field set (e.g., subagent: "discovery")
// but no subagent_start event because this IS the top-level agent. Skip subagent routing
// for events where the subagent field matches the current agentId - these are top-level events.
const isTopLevelSubagentEvent = event.subagent === agentId && !context.subAgentParentToolCallId
// Only route to subagent handlers for nested subagent events (not matching current agentId)
if (!isTopLevelSubagentEvent && handleSubagentRouting(event, context)) {
const handler = subAgentHandlers[event.type]
if (handler) {
await handler(event, context, execContext, options)
}
if (context.streamComplete) break
continue
}
// Process as a regular SSE event (including top-level subagent events)
const handler = sseHandlers[event.type]
if (handler) {
await handler(event, context, execContext, options)
}
if (context.streamComplete) break
}
} finally {
clearTimeout(timeoutId)
}
const result = buildResult(context, structuredResult)
await options.onComplete?.(result)
return result
} catch (error) {
const err = error instanceof Error ? error : new Error('Subagent orchestration failed')
logger.error('Subagent orchestration failed', { error: err.message, agentId })
await options.onError?.(err)
return {
success: false,
content: context.accumulatedContent,
toolCalls: [],
error: err.message,
}
}
}
async function forwardEvent(event: SSEEvent, options: OrchestratorOptions): Promise<void> {
try {
await options.onEvent?.(event)
} catch (error) {
logger.warn('Failed to forward SSE event', {
type: event.type,
error: error instanceof Error ? error.message : String(error),
})
}
}
function normalizeStructuredResult(data: any): SubagentOrchestratorResult['structuredResult'] {
if (!data || typeof data !== 'object') {
return undefined
}
return {
type: data.result_type || data.type,
summary: data.summary,
data: data.data ?? data,
success: data.success,
}
}
async function buildExecutionContext(
userId: string,
workflowId?: string,
workspaceId?: string
): Promise<ExecutionContext> {
if (workflowId) {
return prepareExecutionContext(userId, workflowId)
}
const decryptedEnvVars = await getEffectiveDecryptedEnv(userId, workspaceId)
return {
userId,
workflowId: workflowId || '',
workspaceId,
decryptedEnvVars,
}
}
function buildResult(
context: StreamingContext,
structuredResult?: SubagentOrchestratorResult['structuredResult']
): SubagentOrchestratorResult {
const toolCalls: ToolCallSummary[] = Array.from(context.toolCalls.values()).map((toolCall) => ({
id: toolCall.id,
name: toolCall.name,
status: toolCall.status,
params: toolCall.params,
result: toolCall.result?.output,
error: toolCall.error,
durationMs:
toolCall.endTime && toolCall.startTime ? toolCall.endTime - toolCall.startTime : undefined,
}))
return {
success: context.errors.length === 0 && !context.wasAborted,
content: context.accumulatedContent,
toolCalls,
structuredResult,
errors: context.errors.length ? context.errors : undefined,
}
}
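The routing logic in the loop above distinguishes top-level events (tagged with the agent's own id while no nested subagent is active) from nested subagent events. A simplified sketch of that decision, omitting the warning path for orphaned subagent events (`routeEvent` is a hypothetical name):

```typescript
// Simplified routing rule for direct subagent calls: events tagged with the
// current agent's id and no active nested parent are top-level; events tagged
// with some other subagent while a nested parent is active go to nested handlers.
function routeEvent(
  event: { subagent?: string },
  agentId: string,
  nestedParentId?: string
): 'top-level' | 'nested' {
  const isTopLevel = event.subagent === agentId && !nestedParentId
  if (!isTopLevel && event.subagent && nestedParentId) return 'nested'
  return 'top-level'
}

console.log(routeEvent({ subagent: 'discovery' }, 'discovery')) // 'top-level'
console.log(routeEvent({ subagent: 'debug' }, 'discovery', 'tc_1')) // 'nested'
console.log(routeEvent({}, 'discovery')) // 'top-level'
```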

File diff suppressed because it is too large

View File

@@ -0,0 +1,129 @@
import type { CopilotProviderConfig } from '@/lib/copilot/types'
export type SSEEventType =
| 'chat_id'
| 'title_updated'
| 'content'
| 'reasoning'
| 'tool_call'
| 'tool_generating'
| 'tool_result'
| 'tool_error'
| 'subagent_start'
| 'subagent_end'
| 'structured_result'
| 'subagent_result'
| 'done'
| 'error'
| 'start'
export interface SSEEvent {
type: SSEEventType
data?: any
subagent?: string
toolCallId?: string
toolName?: string
}
export type ToolCallStatus = 'pending' | 'executing' | 'success' | 'error' | 'skipped' | 'rejected'
export interface ToolCallState {
id: string
name: string
status: ToolCallStatus
params?: Record<string, any>
result?: ToolCallResult
error?: string
startTime?: number
endTime?: number
}
export interface ToolCallResult {
success: boolean
output?: any
error?: string
}
export type ContentBlockType = 'text' | 'thinking' | 'tool_call' | 'subagent_text'
export interface ContentBlock {
type: ContentBlockType
content?: string
toolCall?: ToolCallState
timestamp: number
}
export interface StreamingContext {
chatId?: string
conversationId?: string
messageId: string
accumulatedContent: string
contentBlocks: ContentBlock[]
toolCalls: Map<string, ToolCallState>
currentThinkingBlock: ContentBlock | null
isInThinkingBlock: boolean
subAgentParentToolCallId?: string
subAgentContent: Record<string, string>
subAgentToolCalls: Record<string, ToolCallState[]>
pendingContent: string
streamComplete: boolean
wasAborted: boolean
errors: string[]
}
export interface OrchestratorRequest {
message: string
workflowId: string
userId: string
chatId?: string
mode?: 'agent' | 'ask' | 'plan'
model?: string
conversationId?: string
contexts?: Array<{ type: string; content: string }>
fileAttachments?: any[]
commands?: string[]
provider?: CopilotProviderConfig
streamToolCalls?: boolean
version?: string
prefetch?: boolean
userName?: string
}
export interface OrchestratorOptions {
autoExecuteTools?: boolean
timeout?: number
onEvent?: (event: SSEEvent) => void | Promise<void>
onComplete?: (result: OrchestratorResult) => void | Promise<void>
onError?: (error: Error) => void | Promise<void>
abortSignal?: AbortSignal
interactive?: boolean
}
export interface OrchestratorResult {
success: boolean
content: string
contentBlocks: ContentBlock[]
toolCalls: ToolCallSummary[]
chatId?: string
conversationId?: string
error?: string
errors?: string[]
}
export interface ToolCallSummary {
id: string
name: string
status: ToolCallStatus
params?: Record<string, any>
result?: any
error?: string
durationMs?: number
}
export interface ExecutionContext {
userId: string
workflowId: string
workspaceId?: string
decryptedEnvVars?: Record<string, string>
}
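ToolCallSummary.durationMs is derived in buildResult (earlier in this diff) only when both timestamps were recorded. A minimal sketch of that derivation against these types (the helper name is illustrative; note it mirrors the original's truthiness check, so a start time of 0 also yields undefined):

```typescript
interface ToolCallTiming {
  startTime?: number
  endTime?: number
}

// Mirrors buildResult: a duration is reported only when both ends of the
// interval are present (truthy), otherwise the field stays undefined.
function durationMs(t: ToolCallTiming): number | undefined {
  return t.endTime && t.startTime ? t.endTime - t.startTime : undefined
}
```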


@@ -10,6 +10,7 @@ import {
type KnowledgeBaseArgs,
} from '@/lib/copilot/tools/shared/schemas'
import { useCopilotStore } from '@/stores/panel/copilot/store'
import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
/**
* Client tool for knowledge base operations
@@ -102,7 +103,19 @@ export class KnowledgeBaseClientTool extends BaseClientTool {
const logger = createLogger('KnowledgeBaseClientTool')
try {
this.setState(ClientToolCallState.executing)
const payload: KnowledgeBaseArgs = { ...(args || { operation: 'list' }) }
// Get the workspace ID from the workflow registry hydration state
const { hydration } = useWorkflowRegistry.getState()
const workspaceId = hydration.workspaceId
// Build payload with workspace ID included in args
const payload: KnowledgeBaseArgs = {
...(args || { operation: 'list' }),
args: {
...(args?.args || {}),
workspaceId: workspaceId || undefined,
},
}
const res = await fetch('/api/copilot/execute-copilot-server-tool', {
method: 'POST',


@@ -5,16 +5,20 @@ import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import type { BaseServerTool } from '@/lib/copilot/tools/server/base-tool'
import { validateSelectorIds } from '@/lib/copilot/validation/selector-validator'
import type { PermissionGroupConfig } from '@/lib/permission-groups/types'
import { getBlockOutputs } from '@/lib/workflows/blocks/block-outputs'
import { extractAndPersistCustomTools } from '@/lib/workflows/persistence/custom-tools-persistence'
import { applyAutoLayout } from '@/lib/workflows/autolayout'
import {
loadWorkflowFromNormalizedTables,
saveWorkflowToNormalizedTables,
} from '@/lib/workflows/persistence/utils'
import { isValidKey } from '@/lib/workflows/sanitization/key-validation'
import { validateWorkflowState } from '@/lib/workflows/sanitization/validation'
import { buildCanonicalIndex, isCanonicalPair } from '@/lib/workflows/subblocks/visibility'
import { TriggerUtils } from '@/lib/workflows/triggers/triggers'
import { getAllBlocks, getBlock } from '@/blocks/registry'
import type { BlockConfig, SubBlockConfig } from '@/blocks/types'
import type { PermissionGroupConfig } from '@/ee/access-control/lib/types'
import { EDGE, normalizeName, RESERVED_BLOCK_NAMES } from '@/executor/constants'
import { getUserPermissionConfig } from '@/executor/utils/permission-check'
import { generateLoopBlocks, generateParallelBlocks } from '@/stores/workflows/workflow/utils'
@@ -1397,6 +1401,101 @@ function filterDisallowedTools(
return allowedTools
}
/**
* Normalizes block IDs in operations to ensure they are valid UUIDs.
* The LLM may generate human-readable IDs like "web_search" or "research_agent"
* which need to be converted to proper UUIDs for database compatibility.
*
* Returns the normalized operations and a mapping from old IDs to new UUIDs.
*/
function normalizeBlockIdsInOperations(operations: EditWorkflowOperation[]): {
normalizedOperations: EditWorkflowOperation[]
idMapping: Map<string, string>
} {
const logger = createLogger('EditWorkflowServerTool')
const idMapping = new Map<string, string>()
// First pass: collect all non-UUID block_ids from add/insert operations
for (const op of operations) {
if (op.operation_type === 'add' || op.operation_type === 'insert_into_subflow') {
if (op.block_id && !UUID_REGEX.test(op.block_id)) {
const newId = crypto.randomUUID()
idMapping.set(op.block_id, newId)
logger.debug('Normalizing block ID', { oldId: op.block_id, newId })
}
}
}
if (idMapping.size === 0) {
return { normalizedOperations: operations, idMapping }
}
logger.info('Normalizing block IDs in operations', {
normalizedCount: idMapping.size,
mappings: Object.fromEntries(idMapping),
})
// Helper to replace an ID if it's in the mapping
const replaceId = (id: string | undefined): string | undefined => {
if (!id) return id
return idMapping.get(id) ?? id
}
// Second pass: update all references to use new UUIDs
const normalizedOperations = operations.map((op) => {
const normalized: EditWorkflowOperation = {
...op,
block_id: replaceId(op.block_id) ?? op.block_id,
}
if (op.params) {
normalized.params = { ...op.params }
// Update subflowId references (for insert_into_subflow)
if (normalized.params.subflowId) {
normalized.params.subflowId = replaceId(normalized.params.subflowId)
}
// Update connection references
if (normalized.params.connections) {
const normalizedConnections: Record<string, any> = {}
for (const [handle, targets] of Object.entries(normalized.params.connections)) {
if (typeof targets === 'string') {
normalizedConnections[handle] = replaceId(targets)
} else if (Array.isArray(targets)) {
normalizedConnections[handle] = targets.map((t) => {
if (typeof t === 'string') return replaceId(t)
if (t && typeof t === 'object' && t.block) {
return { ...t, block: replaceId(t.block) }
}
return t
})
} else if (targets && typeof targets === 'object' && (targets as any).block) {
normalizedConnections[handle] = { ...targets, block: replaceId((targets as any).block) }
} else {
normalizedConnections[handle] = targets
}
}
normalized.params.connections = normalizedConnections
}
// Update nestedNodes block IDs
if (normalized.params.nestedNodes) {
const normalizedNestedNodes: Record<string, any> = {}
for (const [childId, childBlock] of Object.entries(normalized.params.nestedNodes)) {
const newChildId = replaceId(childId) ?? childId
normalizedNestedNodes[newChildId] = childBlock
}
normalized.params.nestedNodes = normalizedNestedNodes
}
}
return normalized
})
return { normalizedOperations, idMapping }
}
/**
* Apply operations directly to the workflow JSON state
*/
@@ -1416,6 +1515,11 @@ function applyOperationsToWorkflowState(
// Log initial state
const logger = createLogger('EditWorkflowServerTool')
// Normalize block IDs to UUIDs before processing
const { normalizedOperations } = normalizeBlockIdsInOperations(operations)
operations = normalizedOperations
logger.info('Applying operations to workflow:', {
totalOperations: operations.length,
operationTypes: operations.reduce((acc: any, op) => {
@@ -2508,6 +2612,10 @@ async function validateWorkflowSelectorIds(
for (const subBlockConfig of blockConfig.subBlocks) {
if (!SELECTOR_TYPES.has(subBlockConfig.type)) continue
// Skip oauth-input - credentials are pre-validated before edit application
// This allows existing collaborator credentials to remain untouched
if (subBlockConfig.type === 'oauth-input') continue
const subBlockValue = blockData.subBlocks?.[subBlockConfig.id]?.value
if (!subBlockValue) continue
@@ -2573,6 +2681,295 @@ async function validateWorkflowSelectorIds(
return errors
}
/**
* Pre-validates credential and apiKey inputs in operations before they are applied.
* - Validates oauth-input (credential) IDs belong to the user
* - Filters out apiKey inputs for hosted models when isHosted is true
* - Also validates credentials and apiKeys in nestedNodes (blocks inside loop/parallel)
* Returns validation errors for any removed inputs.
*/
async function preValidateCredentialInputs(
operations: EditWorkflowOperation[],
context: { userId: string },
workflowState?: Record<string, unknown>
): Promise<{ filteredOperations: EditWorkflowOperation[]; errors: ValidationError[] }> {
const { isHosted } = await import('@/lib/core/config/feature-flags')
const { getHostedModels } = await import('@/providers/utils')
const logger = createLogger('PreValidateCredentials')
const errors: ValidationError[] = []
// Collect credential and apiKey inputs that need validation/filtering
const credentialInputs: Array<{
operationIndex: number
blockId: string
blockType: string
fieldName: string
value: string
nestedBlockId?: string
}> = []
const hostedApiKeyInputs: Array<{
operationIndex: number
blockId: string
blockType: string
model: string
nestedBlockId?: string
}> = []
const hostedModelsLower = isHosted ? new Set(getHostedModels().map((m) => m.toLowerCase())) : null
/**
* Collect credential inputs from a block's inputs based on its block config
*/
function collectCredentialInputs(
blockConfig: ReturnType<typeof getBlock>,
inputs: Record<string, unknown>,
opIndex: number,
blockId: string,
blockType: string,
nestedBlockId?: string
) {
if (!blockConfig) return
for (const subBlockConfig of blockConfig.subBlocks) {
if (subBlockConfig.type !== 'oauth-input') continue
const inputValue = inputs[subBlockConfig.id]
if (!inputValue || typeof inputValue !== 'string' || inputValue.trim() === '') continue
credentialInputs.push({
operationIndex: opIndex,
blockId,
blockType,
fieldName: subBlockConfig.id,
value: inputValue,
nestedBlockId,
})
}
}
/**
* Check if apiKey should be filtered for a block with the given model
*/
function collectHostedApiKeyInput(
inputs: Record<string, unknown>,
modelValue: string | undefined,
opIndex: number,
blockId: string,
blockType: string,
nestedBlockId?: string
) {
if (!hostedModelsLower || !inputs.apiKey) return
if (!modelValue || typeof modelValue !== 'string') return
if (hostedModelsLower.has(modelValue.toLowerCase())) {
hostedApiKeyInputs.push({
operationIndex: opIndex,
blockId,
blockType,
model: modelValue,
nestedBlockId,
})
}
}
operations.forEach((op, opIndex) => {
// Process main block inputs
if (op.params?.inputs && op.params?.type) {
const blockConfig = getBlock(op.params.type)
if (blockConfig) {
// Collect credentials from main block
collectCredentialInputs(
blockConfig,
op.params.inputs as Record<string, unknown>,
opIndex,
op.block_id,
op.params.type
)
// Check for apiKey inputs on hosted models
let modelValue = (op.params.inputs as Record<string, unknown>).model as string | undefined
// For edit operations, if model is not being changed, check existing block's model
if (
!modelValue &&
op.operation_type === 'edit' &&
(op.params.inputs as Record<string, unknown>).apiKey &&
workflowState
) {
const existingBlock = (workflowState.blocks as Record<string, unknown>)?.[op.block_id] as
| Record<string, unknown>
| undefined
const existingSubBlocks = existingBlock?.subBlocks as Record<string, unknown> | undefined
const existingModelSubBlock = existingSubBlocks?.model as
| Record<string, unknown>
| undefined
modelValue = existingModelSubBlock?.value as string | undefined
}
collectHostedApiKeyInput(
op.params.inputs as Record<string, unknown>,
modelValue,
opIndex,
op.block_id,
op.params.type
)
}
}
// Process nested nodes (blocks inside loop/parallel containers)
const nestedNodes = op.params?.nestedNodes as
| Record<string, Record<string, unknown>>
| undefined
if (nestedNodes) {
Object.entries(nestedNodes).forEach(([childId, childBlock]) => {
const childType = childBlock.type as string | undefined
const childInputs = childBlock.inputs as Record<string, unknown> | undefined
if (!childType || !childInputs) return
const childBlockConfig = getBlock(childType)
if (!childBlockConfig) return
// Collect credentials from nested block
collectCredentialInputs(
childBlockConfig,
childInputs,
opIndex,
op.block_id,
childType,
childId
)
// Check for apiKey inputs on hosted models in nested block
const modelValue = childInputs.model as string | undefined
collectHostedApiKeyInput(childInputs, modelValue, opIndex, op.block_id, childType, childId)
})
}
})
const hasCredentialsToValidate = credentialInputs.length > 0
const hasHostedApiKeysToFilter = hostedApiKeyInputs.length > 0
if (!hasCredentialsToValidate && !hasHostedApiKeysToFilter) {
return { filteredOperations: operations, errors }
}
// Deep clone operations so we can modify them
const filteredOperations = structuredClone(operations)
// Filter out apiKey inputs for hosted models and add validation errors
if (hasHostedApiKeysToFilter) {
logger.info('Filtering apiKey inputs for hosted models', { count: hostedApiKeyInputs.length })
for (const apiKeyInput of hostedApiKeyInputs) {
const op = filteredOperations[apiKeyInput.operationIndex]
// Handle nested block apiKey filtering
if (apiKeyInput.nestedBlockId) {
const nestedNodes = op.params?.nestedNodes as
| Record<string, Record<string, unknown>>
| undefined
const nestedBlock = nestedNodes?.[apiKeyInput.nestedBlockId]
const nestedInputs = nestedBlock?.inputs as Record<string, unknown> | undefined
if (nestedInputs?.apiKey) {
nestedInputs.apiKey = undefined
logger.debug('Filtered apiKey for hosted model in nested block', {
parentBlockId: apiKeyInput.blockId,
nestedBlockId: apiKeyInput.nestedBlockId,
model: apiKeyInput.model,
})
errors.push({
blockId: apiKeyInput.nestedBlockId,
blockType: apiKeyInput.blockType,
field: 'apiKey',
value: '[redacted]',
error: `Cannot set API key for hosted model "${apiKeyInput.model}" - API keys are managed by the platform when using hosted models`,
})
}
} else if (op.params?.inputs?.apiKey) {
// Handle main block apiKey filtering
op.params.inputs.apiKey = undefined
logger.debug('Filtered apiKey for hosted model', {
blockId: apiKeyInput.blockId,
model: apiKeyInput.model,
})
errors.push({
blockId: apiKeyInput.blockId,
blockType: apiKeyInput.blockType,
field: 'apiKey',
value: '[redacted]',
error: `Cannot set API key for hosted model "${apiKeyInput.model}" - API keys are managed by the platform when using hosted models`,
})
}
}
}
// Validate credential inputs
if (hasCredentialsToValidate) {
logger.info('Pre-validating credential inputs', {
credentialCount: credentialInputs.length,
userId: context.userId,
})
const allCredentialIds = credentialInputs.map((c) => c.value)
const validationResult = await validateSelectorIds('oauth-input', allCredentialIds, context)
const invalidSet = new Set(validationResult.invalid)
if (invalidSet.size > 0) {
for (const credInput of credentialInputs) {
if (!invalidSet.has(credInput.value)) continue
const op = filteredOperations[credInput.operationIndex]
// Handle nested block credential removal
if (credInput.nestedBlockId) {
const nestedNodes = op.params?.nestedNodes as
| Record<string, Record<string, unknown>>
| undefined
const nestedBlock = nestedNodes?.[credInput.nestedBlockId]
const nestedInputs = nestedBlock?.inputs as Record<string, unknown> | undefined
if (nestedInputs?.[credInput.fieldName]) {
delete nestedInputs[credInput.fieldName]
logger.info('Removed invalid credential from nested block', {
parentBlockId: credInput.blockId,
nestedBlockId: credInput.nestedBlockId,
field: credInput.fieldName,
invalidValue: credInput.value,
})
}
} else if (op.params?.inputs?.[credInput.fieldName]) {
// Handle main block credential removal
delete op.params.inputs[credInput.fieldName]
logger.info('Removed invalid credential from operation', {
blockId: credInput.blockId,
field: credInput.fieldName,
invalidValue: credInput.value,
})
}
const warningInfo = validationResult.warning ? `. ${validationResult.warning}` : ''
const errorBlockId = credInput.nestedBlockId ?? credInput.blockId
errors.push({
blockId: errorBlockId,
blockType: credInput.blockType,
field: credInput.fieldName,
value: credInput.value,
error: `Invalid credential ID "${credInput.value}" - credential does not exist or user doesn't have access${warningInfo}`,
})
}
logger.warn('Filtered out invalid credentials', {
invalidCount: invalidSet.size,
})
}
}
return { filteredOperations, errors }
}
async function getCurrentWorkflowStateFromDb(
workflowId: string
): Promise<{ workflowState: any; subBlockValues: Record<string, Record<string, any>> }> {
@@ -2657,12 +3054,29 @@ export const editWorkflowServerTool: BaseServerTool<EditWorkflowParams, any> = {
// Get permission config for the user
const permissionConfig = context?.userId ? await getUserPermissionConfig(context.userId) : null
// Pre-validate credential and apiKey inputs before applying operations
// This filters out invalid credentials and apiKeys for hosted models
let operationsToApply = operations
const credentialErrors: ValidationError[] = []
if (context?.userId) {
const { filteredOperations, errors: credErrors } = await preValidateCredentialInputs(
operations,
{ userId: context.userId },
workflowState
)
operationsToApply = filteredOperations
credentialErrors.push(...credErrors)
}
// Apply operations directly to the workflow state
const {
state: modifiedWorkflowState,
validationErrors,
skippedItems,
} = applyOperationsToWorkflowState(workflowState, operationsToApply, permissionConfig)
// Add credential validation errors
validationErrors.push(...credentialErrors)
// Get workspaceId for selector validation // Get workspaceId for selector validation
let workspaceId: string | undefined let workspaceId: string | undefined
@@ -2757,10 +3171,60 @@ export const editWorkflowServerTool: BaseServerTool<EditWorkflowParams, any> = {
const skippedMessages =
skippedItems.length > 0 ? skippedItems.map((item) => item.reason) : undefined
// Persist the workflow state to the database
const finalWorkflowState = validation.sanitizedState || modifiedWorkflowState
// Apply autolayout to position blocks properly
const layoutResult = applyAutoLayout(finalWorkflowState.blocks, finalWorkflowState.edges, {
horizontalSpacing: 250,
verticalSpacing: 100,
padding: { x: 100, y: 100 },
})
const layoutedBlocks = layoutResult.success && layoutResult.blocks
? layoutResult.blocks
: finalWorkflowState.blocks
if (!layoutResult.success) {
logger.warn('Autolayout failed, using default positions', {
workflowId,
error: layoutResult.error,
})
}
const workflowStateForDb = {
blocks: layoutedBlocks,
edges: finalWorkflowState.edges,
loops: generateLoopBlocks(layoutedBlocks as any),
parallels: generateParallelBlocks(layoutedBlocks as any),
lastSaved: Date.now(),
isDeployed: false,
}
const saveResult = await saveWorkflowToNormalizedTables(workflowId, workflowStateForDb as any)
if (!saveResult.success) {
logger.error('Failed to persist workflow state to database', {
workflowId,
error: saveResult.error,
})
throw new Error(`Failed to save workflow: ${saveResult.error}`)
}
// Update workflow's lastSynced timestamp
await db
.update(workflowTable)
.set({
lastSynced: new Date(),
updatedAt: new Date(),
})
.where(eq(workflowTable.id, workflowId))
logger.info('Workflow state persisted to database', { workflowId })
// Return the modified workflow state with autolayout applied
return {
success: true,
workflowState: { ...finalWorkflowState, blocks: layoutedBlocks },
// Include input validation errors so the LLM can see what was rejected
...(inputErrors && {
inputValidationErrors: inputErrors,
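The hosted-model apiKey filtering above reduces to case-insensitive set membership. A sketch of the comparison, with isHosted and the model list as stand-ins for the imported feature flag and getHostedModels():

```typescript
// Stand-ins for isHosted / getHostedModels() imported in the tool above.
const isHosted = true
const hostedModels = ['GPT-4o', 'Claude-Sonnet-4']

// Lowercased set built once, as in preValidateCredentialInputs.
const hostedModelsLower = isHosted ? new Set(hostedModels.map((m) => m.toLowerCase())) : null

// apiKey inputs are dropped only when the block's model is platform-hosted.
function shouldFilterApiKey(model: string | undefined, hasApiKey: boolean): boolean {
  if (!hostedModelsLower || !hasApiKey || !model) return false
  return hostedModelsLower.has(model.toLowerCase())
}
```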


@@ -50,6 +50,8 @@ function prepareLogData(
export async function emitWorkflowExecutionCompleted(log: WorkflowExecutionLog): Promise<void> {
try {
if (!log.workflowId) return
const workflowData = await db
.select({ workspaceId: workflow.workspaceId })
.from(workflow)


@@ -293,7 +293,10 @@ export class ExecutionLogger implements IExecutionLoggerService {
}
try {
// Skip workflow lookup if workflow was deleted
const wf = updatedLog.workflowId
? (await db.select().from(workflow).where(eq(workflow.id, updatedLog.workflowId)))[0]
: undefined
if (wf) {
const [usr] = await db
.select({ id: userTable.id, email: userTable.email, name: userTable.name })
@@ -461,7 +464,7 @@ export class ExecutionLogger implements IExecutionLoggerService {
* Maintains same logic as original execution logger for billing consistency
*/
private async updateUserStats(
workflowId: string | null,
costSummary: {
totalCost: number
totalInputCost: number
@@ -494,6 +497,11 @@ export class ExecutionLogger implements IExecutionLoggerService {
return
}
if (!workflowId) {
logger.debug('Workflow was deleted, skipping user stats update')
return
}
try {
// Get the workflow record to get the userId
const [workflowRecord] = await db


@@ -1,8 +1,8 @@
import { createHash } from 'crypto'
import { db } from '@sim/db'
import { workflowExecutionLogs, workflowExecutionSnapshots } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, lt, notExists } from 'drizzle-orm'
import { v4 as uuidv4 } from 'uuid'
import type {
SnapshotService as ISnapshotService,
@@ -121,7 +121,17 @@ export class SnapshotService implements ISnapshotService {
const deletedSnapshots = await db
.delete(workflowExecutionSnapshots)
.where(
and(
lt(workflowExecutionSnapshots.createdAt, cutoffDate),
notExists(
db
.select({ id: workflowExecutionLogs.id })
.from(workflowExecutionLogs)
.where(eq(workflowExecutionLogs.stateSnapshotId, workflowExecutionSnapshots.id))
)
)
)
.returning({ id: workflowExecutionSnapshots.id })
const deletedCount = deletedSnapshots.length
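The cleanup change above adds a notExists guard so snapshots still referenced by an execution log survive the age-based purge. In plain TypeScript terms (an in-memory stand-in for the two tables, not the Drizzle query):

```typescript
interface Snapshot { id: string; createdAt: number }
interface ExecLog { stateSnapshotId: string }

// A snapshot is deleted only if it is older than the cutoff AND no execution
// log references it, matching the and(lt(...), notExists(...)) predicate above.
function snapshotsToDelete(snapshots: Snapshot[], logs: ExecLog[], cutoff: number): Snapshot[] {
  const referenced = new Set(logs.map((l) => l.stateSnapshotId))
  return snapshots.filter((s) => s.createdAt < cutoff && !referenced.has(s.id))
}

const snaps = [{ id: 'a', createdAt: 1 }, { id: 'b', createdAt: 1 }, { id: 'c', createdAt: 9 }]
const logs = [{ stateSnapshotId: 'b' }]
const out = snapshotsToDelete(snaps, logs, 5)
```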


@@ -112,6 +112,26 @@ export function buildTraceSpans(result: ExecutionResult): {
const duration = log.durationMs || 0
let output = log.output || {}
let childWorkflowSnapshotId: string | undefined
let childWorkflowId: string | undefined
if (output && typeof output === 'object') {
const outputRecord = output as Record<string, unknown>
childWorkflowSnapshotId =
typeof outputRecord.childWorkflowSnapshotId === 'string'
? outputRecord.childWorkflowSnapshotId
: undefined
childWorkflowId =
typeof outputRecord.childWorkflowId === 'string' ? outputRecord.childWorkflowId : undefined
if (childWorkflowSnapshotId || childWorkflowId) {
const {
childWorkflowSnapshotId: _childSnapshotId,
childWorkflowId: _childWorkflowId,
...outputRest
} = outputRecord
output = outputRest
}
}
if (log.error) {
output = {
@@ -134,6 +154,8 @@ export function buildTraceSpans(result: ExecutionResult): {
blockId: log.blockId,
input: log.input || {},
output: output,
...(childWorkflowSnapshotId ? { childWorkflowSnapshotId } : {}),
...(childWorkflowId ? { childWorkflowId } : {}),
...(log.loopId && { loopId: log.loopId }),
...(log.parallelId && { parallelId: log.parallelId }),
...(log.iterationIndex !== undefined && { iterationIndex: log.iterationIndex }),


@@ -69,7 +69,7 @@ export interface ExecutionStatus {
export interface WorkflowExecutionSnapshot {
id: string
workflowId: string | null
stateHash: string
stateData: WorkflowState
createdAt: string
@@ -80,7 +80,7 @@ export type WorkflowExecutionSnapshotSelect = WorkflowExecutionSnapshot
export interface WorkflowExecutionLog {
id: string
workflowId: string | null
executionId: string
stateSnapshotId: string
level: 'info' | 'error'
@@ -178,6 +178,8 @@ export interface TraceSpan {
blockId?: string
input?: Record<string, unknown>
output?: Record<string, unknown>
childWorkflowSnapshotId?: string
childWorkflowId?: string
model?: string
cost?: {
input?: number


@@ -325,18 +325,6 @@ const nextConfig: NextConfig = {
return redirects
},
async rewrites() {
return [
{
source: '/ingest/static/:path*',
destination: 'https://us-assets.i.posthog.com/static/:path*',
},
{
source: '/ingest/:path*',
destination: 'https://us.i.posthog.com/:path*',
},
]
},
}
export default nextConfig


@@ -109,9 +109,15 @@ export const anthropicProvider: ProviderConfig = {
],
})
} else {
// Handle content that's already in array format (from transformAttachmentMessages)
const content = Array.isArray(msg.content)
? msg.content
: msg.content
? [{ type: 'text', text: msg.content }]
: []
messages.push({
role: msg.role === 'assistant' ? 'assistant' : 'user',
content,
})
}
})
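The anthropic fix above collapses three possible content shapes into the block-array form: an array passes through untouched, a string is wrapped in a text block, and empty content becomes an empty array. A sketch of that ternary in isolation:

```typescript
type TextBlock = { type: 'text'; text: string }

// Content may already be an array (from transformAttachmentMessages), a plain
// string, or null/empty; all three collapse to the block-array shape.
function toContentBlocks(content: string | TextBlock[] | null): TextBlock[] {
  return Array.isArray(content) ? content : content ? [{ type: 'text', text: content }] : []
}
```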


@@ -0,0 +1,397 @@
/**
* Centralized attachment content transformation for all providers.
*
* Strategy: Always normalize to base64 first, then create provider-specific formats.
* This eliminates URL accessibility issues and simplifies provider handling.
*/
import { createLogger } from '@sim/logger'
import { bufferToBase64 } from '@/lib/uploads/utils/file-utils'
import { downloadFileFromUrl } from '@/lib/uploads/utils/file-utils.server'
import { supportsVision } from '@/providers/models'
import type { ProviderId } from '@/providers/types'
const logger = createLogger('AttachmentTransformer')
/**
* Generic message type for attachment transformation.
*/
interface TransformableMessage {
role: string
content: string | any[] | null
attachment?: AttachmentContent
[key: string]: any
}
/**
* Attachment content (files, images, documents)
*/
export interface AttachmentContent {
sourceType: 'url' | 'base64' | 'file'
data: string
mimeType?: string
fileName?: string
}
/**
* Normalized attachment data (always base64)
*/
interface NormalizedAttachment {
base64: string
mimeType: string
}
/**
* Configuration for attachment transformation
*/
interface AttachmentTransformConfig {
providerId: ProviderId
model: string
}
/**
* Checks if a model supports attachments (vision/multimodal content).
*/
export function modelSupportsAttachments(model: string): boolean {
return supportsVision(model)
}
/**
* Transforms messages with 'attachment' role into provider-compatible format.
*/
export async function transformAttachmentMessages<T extends TransformableMessage>(
messages: T[],
config: AttachmentTransformConfig
): Promise<T[]> {
const { providerId, model } = config
const supportsAttachments = modelSupportsAttachments(model)
if (!supportsAttachments) {
return transformAttachmentsToText(messages) as T[]
}
const result: T[] = []
for (const msg of messages) {
if (msg.role !== 'attachment') {
result.push(msg)
continue
}
const attachmentContent = await createProviderAttachmentContent(msg, providerId)
if (!attachmentContent) {
logger.warn('Could not create attachment content for message', { msg })
continue
}
// Merge with previous user message or create new one
const lastMessage = result[result.length - 1]
if (lastMessage && lastMessage.role === 'user') {
const existingContent = ensureContentArray(lastMessage, providerId)
existingContent.push(attachmentContent)
lastMessage.content = existingContent as any
} else {
result.push({
role: 'user',
content: [attachmentContent] as any,
} as T)
}
}
// Ensure all user messages have consistent content format
return result.map((msg) => {
if (msg.role === 'user' && typeof msg.content === 'string') {
return {
...msg,
content: [createTextContent(msg.content, providerId)] as any,
}
}
return msg
})
}
/**
* Transforms attachment messages to text placeholders for non-vision models
*/
function transformAttachmentsToText<T extends TransformableMessage>(messages: T[]): T[] {
const result: T[] = []
for (const msg of messages) {
if (msg.role !== 'attachment') {
result.push(msg)
continue
}
const attachment = msg.attachment
const mimeType = attachment?.mimeType || 'unknown type'
const fileName = attachment?.fileName || 'file'
const lastMessage = result[result.length - 1]
if (lastMessage && lastMessage.role === 'user') {
const currentContent = typeof lastMessage.content === 'string' ? lastMessage.content : ''
lastMessage.content = `${currentContent}\n[Attached file: ${fileName} (${mimeType}) - Note: This model does not support file/image inputs]`
} else {
result.push({
role: 'user',
content: `[Attached file: ${fileName} (${mimeType}) - Note: This model does not support file/image inputs]`,
} as T)
}
}
return result
}
/**
* Ensures a user message has content as an array for multimodal support
*/
function ensureContentArray(msg: TransformableMessage, providerId: ProviderId): any[] {
if (Array.isArray(msg.content)) {
return msg.content
}
if (typeof msg.content === 'string' && msg.content) {
return [createTextContent(msg.content, providerId)]
}
return []
}
/**
* Creates provider-specific text content block
*/
export function createTextContent(text: string, providerId: ProviderId): any {
switch (providerId) {
case 'google':
case 'vertex':
return { text }
default:
return { type: 'text', text }
}
}
/**
* Normalizes attachment data to base64.
* Fetches URLs and converts to base64, extracts base64 from data URLs.
*/
async function normalizeToBase64(
attachment: AttachmentContent
): Promise<NormalizedAttachment | null> {
const { sourceType, data, mimeType } = attachment
if (!data || !data.trim()) {
logger.warn('Empty attachment data')
return null
}
const trimmedData = data.trim()
// Already base64
if (sourceType === 'base64') {
// Handle data URL format: data:mime;base64,xxx
if (trimmedData.startsWith('data:')) {
const match = trimmedData.match(/^data:([^;]+);base64,(.+)$/)
if (match) {
return { base64: match[2], mimeType: match[1] }
}
}
// Raw base64
return { base64: trimmedData, mimeType: mimeType || 'application/octet-stream' }
}
// URL or file path - need to fetch
if (sourceType === 'url' || sourceType === 'file') {
try {
logger.info('Fetching attachment for base64 conversion', {
url: trimmedData.substring(0, 50),
})
const buffer = await downloadFileFromUrl(trimmedData)
const base64 = bufferToBase64(buffer)
return { base64, mimeType: mimeType || 'application/octet-stream' }
} catch (error) {
logger.error('Failed to fetch attachment', { error, url: trimmedData.substring(0, 50) })
return null
}
}
return null
}
/**
* Creates provider-specific attachment content from an attachment message.
* First normalizes to base64, then creates the provider format.
*/
async function createProviderAttachmentContent(
msg: TransformableMessage,
providerId: ProviderId
): Promise<any> {
const attachment = msg.attachment
if (!attachment) return null
// Normalize to base64 first
const normalized = await normalizeToBase64(attachment)
if (!normalized) {
return createTextContent('[Failed to load attachment]', providerId)
}
const { base64, mimeType } = normalized
switch (providerId) {
case 'anthropic':
return createAnthropicContent(base64, mimeType)
case 'google':
case 'vertex':
return createGeminiContent(base64, mimeType)
case 'mistral':
return createMistralContent(base64, mimeType)
case 'bedrock':
return createBedrockContent(base64, mimeType)
default:
// OpenAI format (OpenAI, Azure, xAI, DeepSeek, Cerebras, Groq, OpenRouter, Ollama, vLLM)
return createOpenAIContent(base64, mimeType)
}
}
/**
* OpenAI-compatible content (images and audio via base64)
*/
function createOpenAIContent(base64: string, mimeType: string): any {
const isImage = mimeType.startsWith('image/')
const isAudio = mimeType.startsWith('audio/')
if (isImage) {
return {
type: 'image_url',
image_url: {
url: `data:${mimeType};base64,${base64}`,
detail: 'auto',
},
}
}
if (isAudio) {
return {
type: 'input_audio',
input_audio: {
data: base64,
format: mimeType === 'audio/wav' ? 'wav' : 'mp3',
},
}
}
// OpenAI Chat API doesn't support other file types directly
// For PDFs/docs, return a text placeholder
logger.warn(`OpenAI does not support ${mimeType} attachments in Chat API`)
return {
type: 'text',
text: `[Attached file: ${mimeType} - OpenAI Chat API only supports images and audio]`,
}
}
/**
* Anthropic-compatible content (images and PDFs)
*/
function createAnthropicContent(base64: string, mimeType: string): any {
const isImage = mimeType.startsWith('image/')
const isPdf = mimeType === 'application/pdf'
if (isImage) {
return {
type: 'image',
source: {
type: 'base64',
media_type: mimeType,
data: base64,
},
}
}
if (isPdf) {
return {
type: 'document',
source: {
type: 'base64',
media_type: 'application/pdf',
data: base64,
},
}
}
return {
type: 'text',
text: `[Attached file: ${mimeType} - Anthropic supports images and PDFs only]`,
}
}
/**
* Google Gemini-compatible content (inlineData format)
*/
function createGeminiContent(base64: string, mimeType: string): any {
// Gemini supports a wide range of file types via inlineData
return {
inlineData: {
mimeType,
data: base64,
},
}
}
/**
* Mistral-compatible content (images only, data URL format)
*/
function createMistralContent(base64: string, mimeType: string): any {
const isImage = mimeType.startsWith('image/')
if (isImage) {
// Mistral uses direct string for image_url, not nested object
return {
type: 'image_url',
image_url: `data:${mimeType};base64,${base64}`,
}
}
return {
type: 'text',
text: `[Attached file: ${mimeType} - Mistral supports images only]`,
}
}
/**
* AWS Bedrock-compatible content (images and PDFs)
*/
function createBedrockContent(base64: string, mimeType: string): any {
const isImage = mimeType.startsWith('image/')
const isPdf = mimeType === 'application/pdf'
// Determine image format from mimeType
const getImageFormat = (mime: string): string => {
if (mime.includes('jpeg') || mime.includes('jpg')) return 'jpeg'
if (mime.includes('png')) return 'png'
if (mime.includes('gif')) return 'gif'
if (mime.includes('webp')) return 'webp'
return 'png'
}
if (isImage) {
// Return a marker object that the Bedrock provider will convert to proper format
return {
type: 'bedrock_image',
format: getImageFormat(mimeType),
data: base64,
}
}
if (isPdf) {
return {
type: 'bedrock_document',
format: 'pdf',
data: base64,
}
}
return {
type: 'text',
text: `[Attached file: ${mimeType} - Bedrock supports images and PDFs only]`,
}
}
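The data-URL branch of `normalizeToBase64` above can be sketched as a standalone helper (the name `normalizeBase64Input` is illustrative, not a project export): a `data:<mime>;base64,<payload>` string yields its embedded MIME type and payload, while anything else is treated as raw base64 with a caller-supplied or fallback MIME type.

```typescript
// Standalone sketch of the data-URL normalization used by normalizeToBase64.
interface Normalized {
  base64: string
  mimeType: string
}

function normalizeBase64Input(data: string, mimeType?: string): Normalized {
  const trimmed = data.trim()
  // data:<mime>;base64,<payload> carries its own MIME type
  const match = trimmed.match(/^data:([^;]+);base64,(.+)$/)
  if (match) {
    return { base64: match[2], mimeType: match[1] }
  }
  // Raw base64: fall back to the caller's MIME type or a generic one
  return { base64: trimmed, mimeType: mimeType ?? 'application/octet-stream' }
}
```

Because the MIME type embedded in a data URL wins over the caller's hint, a mislabeled attachment still ends up with the type its payload actually declares.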


@@ -16,6 +16,7 @@ import type { StreamingExecution } from '@/executor/types'
import { MAX_TOOL_ITERATIONS } from '@/providers'
import {
checkForForcedToolUsage,
convertToBedrockContentBlocks,
createReadableStreamFromBedrockStream,
generateToolUseId,
getBedrockInferenceProfileId,
@@ -116,9 +117,11 @@ export const bedrockProvider: ProviderConfig = {
}
} else {
const role: ConversationRole = msg.role === 'assistant' ? 'assistant' : 'user'
// Handle multimodal content arrays
const contentBlocks = convertToBedrockContentBlocks(msg.content || '')
messages.push({
role,
content: contentBlocks,
})
}
}


@@ -1,9 +1,199 @@
import type {
ContentBlock,
ConverseStreamOutput,
ImageFormat,
} from '@aws-sdk/client-bedrock-runtime'
import { createLogger } from '@sim/logger'
import { trackForcedToolUsage } from '@/providers/utils'
const logger = createLogger('BedrockUtils')
/**
* Converts message content (string or array) to Bedrock ContentBlock array.
* Handles multimodal content including images and documents.
*/
export function convertToBedrockContentBlocks(content: string | any[]): ContentBlock[] {
// Simple string content
if (typeof content === 'string') {
return [{ text: content || '' }]
}
// Array content - could be multimodal
if (!Array.isArray(content)) {
return [{ text: String(content) || '' }]
}
const blocks: ContentBlock[] = []
for (const item of content) {
if (!item) continue
// Text content
if (item.type === 'text' && item.text) {
blocks.push({ text: item.text })
continue
}
// Gemini-style text (just { text: "..." })
if (typeof item.text === 'string' && !item.type) {
blocks.push({ text: item.text })
continue
}
// Bedrock image content (from agent handler)
if (item.type === 'bedrock_image') {
const imageBlock = createBedrockImageBlock(item)
if (imageBlock) {
blocks.push(imageBlock)
}
continue
}
// Bedrock document content (from agent handler)
if (item.type === 'bedrock_document') {
const docBlock = createBedrockDocumentBlock(item)
if (docBlock) {
blocks.push(docBlock)
}
continue
}
// OpenAI-style image_url (fallback for direct OpenAI format)
if (item.type === 'image_url' && item.image_url) {
const url = typeof item.image_url === 'string' ? item.image_url : item.image_url?.url
if (url) {
const imageBlock = createBedrockImageBlockFromUrl(url)
if (imageBlock) {
blocks.push(imageBlock)
}
}
continue
}
// Unknown type - log warning and skip
logger.warn('Unknown content block type in Bedrock conversion:', { type: item.type })
}
// Ensure at least one text block
if (blocks.length === 0) {
blocks.push({ text: '' })
}
return blocks
}
/**
* Creates a Bedrock image ContentBlock from a bedrock_image item
*/
function createBedrockImageBlock(item: {
format: string
sourceType?: string
data?: string
url?: string
}): ContentBlock | null {
const format = (item.format || 'png') as ImageFormat
// The attachment transformer emits { type: 'bedrock_image', format, data }
// without a sourceType, so treat any block carrying data as base64.
if (item.data && item.sourceType !== 'url') {
// Convert base64 to Uint8Array
const bytes = base64ToUint8Array(item.data)
return {
image: {
format,
source: { bytes },
},
}
}
if (item.sourceType === 'url' && item.url) {
// For URLs, we need to fetch the image and convert to bytes
// This is a limitation - Bedrock doesn't support URL sources directly
// The provider layer should handle this, or we log a warning
logger.warn('Bedrock does not support image URLs directly. Image will be skipped.', {
url: item.url,
})
// Return a text placeholder
return { text: `[Image from URL: ${item.url}]` }
}
return null
}
/**
* Creates a Bedrock document ContentBlock from a bedrock_document item
*/
function createBedrockDocumentBlock(item: {
format: string
sourceType?: string
data?: string
url?: string
}): ContentBlock | null {
// As with images, the transformer emits base64 data without a sourceType.
if (item.data && item.sourceType !== 'url') {
const bytes = base64ToUint8Array(item.data)
return {
document: {
format: 'pdf',
name: 'document',
source: { bytes },
},
}
}
if (item.sourceType === 'url' && item.url) {
logger.warn('Bedrock does not support document URLs directly. Document will be skipped.', {
url: item.url,
})
return { text: `[Document from URL: ${item.url}]` }
}
return null
}
/**
* Creates a Bedrock image ContentBlock from a data URL or regular URL
*/
function createBedrockImageBlockFromUrl(url: string): ContentBlock | null {
// Check if it's a data URL (base64)
if (url.startsWith('data:')) {
const match = url.match(/^data:image\/(\w+);base64,(.+)$/)
if (match) {
let format: ImageFormat = match[1] as ImageFormat
// Normalize jpg to jpeg
if (format === ('jpg' as ImageFormat)) {
format = 'jpeg'
}
const base64Data = match[2]
const bytes = base64ToUint8Array(base64Data)
return {
image: {
format,
source: { bytes },
},
}
}
}
// Regular URL - Bedrock doesn't support this directly
logger.warn('Bedrock does not support image URLs directly. Image will be skipped.', { url })
return { text: `[Image from URL: ${url}]` }
}
/**
* Converts a base64 string to Uint8Array
*/
function base64ToUint8Array(base64: string): Uint8Array {
// Handle browser and Node.js environments
if (typeof Buffer !== 'undefined') {
return Buffer.from(base64, 'base64')
}
// Browser fallback
const binaryString = atob(base64)
const bytes = new Uint8Array(binaryString.length)
for (let i = 0; i < binaryString.length; i++) {
bytes[i] = binaryString.charCodeAt(i)
}
return bytes
}
export interface BedrockStreamUsage {
inputTokens: number
outputTokens: number
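The dual-environment decoding used by `base64ToUint8Array` above can be exercised with a small round trip (the helper name `decodeBase64` is illustrative; the logic mirrors the Buffer/atob branches in the file).

```typescript
// Mirrors base64ToUint8Array: Buffer in Node, atob elsewhere.
function decodeBase64(base64: string): Uint8Array {
  if (typeof Buffer !== 'undefined') {
    // Node path: Buffer is a Uint8Array subclass
    return Buffer.from(base64, 'base64')
  }
  // Browser fallback path
  const binaryString = atob(base64)
  const bytes = new Uint8Array(binaryString.length)
  for (let i = 0; i < binaryString.length; i++) {
    bytes[i] = binaryString.charCodeAt(i)
  }
  return bytes
}
```

Either branch yields the same bytes, which is what lets the Bedrock blocks accept the transformer's base64 payloads regardless of runtime.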


@@ -72,6 +72,75 @@ export function cleanSchemaForGemini(schema: SchemaUnion): SchemaUnion {
return cleanedSchema
}
/**
* Converts an array of content items to Gemini-compatible Part array.
* Handles various formats from the attachment transformer.
*/
function convertContentArrayToGeminiParts(contentArray: any[]): Part[] {
const parts: Part[] = []
for (const item of contentArray) {
if (!item) continue
// Gemini-native text format: { text: "..." }
if (typeof item.text === 'string') {
parts.push({ text: item.text })
continue
}
// OpenAI-style text: { type: 'text', text: '...' }
if (item.type === 'text' && typeof item.text === 'string') {
parts.push({ text: item.text })
continue
}
// Gemini-native inlineData format (from attachment transformer)
if (item.inlineData) {
parts.push({ inlineData: item.inlineData })
continue
}
// Gemini-native fileData format (from attachment transformer)
if (item.fileData) {
parts.push({ fileData: item.fileData })
continue
}
// OpenAI-style image_url - convert to Gemini format
if (item.type === 'image_url' && item.image_url) {
const url = typeof item.image_url === 'string' ? item.image_url : item.image_url?.url
if (url) {
// Check if it's a data URL (base64)
if (url.startsWith('data:')) {
const match = url.match(/^data:([^;]+);base64,(.+)$/)
if (match) {
parts.push({
inlineData: {
mimeType: match[1],
data: match[2],
},
})
}
} else {
// External URL
parts.push({
fileData: {
mimeType: 'image/jpeg', // Default, Gemini will detect actual type
fileUri: url,
},
})
}
}
continue
}
// Unknown type - log warning
logger.warn('Unknown content item type in Gemini conversion:', { type: item.type })
}
return parts
}
/**
* Extracts text content from a Gemini response candidate.
* Filters out thought parts (model reasoning) from the output.
@@ -180,7 +249,13 @@ export function convertToGeminiFormat(request: ProviderRequest): {
} else if (message.role === 'user' || message.role === 'assistant') {
const geminiRole = message.role === 'user' ? 'user' : 'model'
// Handle multimodal content (arrays with text/image/file parts)
if (Array.isArray(message.content)) {
const parts: Part[] = convertContentArrayToGeminiParts(message.content)
if (parts.length > 0) {
contents.push({ role: geminiRole, parts })
}
} else if (message.content) {
contents.push({ role: geminiRole, parts: [{ text: message.content }] })
}
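The `image_url` branch of `convertContentArrayToGeminiParts` above can be isolated into a small sketch (the local `Part` union approximates the Gemini SDK type; `imageUrlToPart` is an illustrative name): a base64 data URL becomes an `inlineData` part, while a plain URL becomes a `fileData` part.

```typescript
// Local approximation of the Gemini Part shapes used above.
type Part =
  | { text: string }
  | { inlineData: { mimeType: string; data: string } }
  | { fileData: { mimeType: string; fileUri: string } }

function imageUrlToPart(url: string): Part {
  if (url.startsWith('data:')) {
    const match = url.match(/^data:([^;]+);base64,(.+)$/)
    if (match) {
      // Embedded base64 payload travels inline with its MIME type
      return { inlineData: { mimeType: match[1], data: match[2] } }
    }
  }
  // External URL: default the MIME type and let Gemini detect the real one
  return { fileData: { mimeType: 'image/jpeg', fileUri: url } }
}
```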


@@ -34,6 +34,8 @@ export interface ModelCapabilities {
toolUsageControl?: boolean
computerUse?: boolean
nativeStructuredOutputs?: boolean
/** Whether the model supports vision/multimodal inputs (images, audio, video, PDFs) */
vision?: boolean
maxOutputTokens?: {
/** Maximum tokens for streaming requests */
max: number
@@ -120,6 +122,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
capabilities: {
temperature: { min: 0, max: 2 },
vision: true,
},
contextWindow: 128000,
},
@@ -132,6 +135,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
updatedAt: '2025-12-11',
},
capabilities: {
vision: true,
reasoningEffort: {
values: ['none', 'minimal', 'low', 'medium', 'high', 'xhigh'],
},
@@ -150,6 +154,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
updatedAt: '2025-11-14',
},
capabilities: {
vision: true,
reasoningEffort: {
values: ['none', 'low', 'medium', 'high'],
},
@@ -222,6 +227,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
updatedAt: '2025-08-07',
},
capabilities: {
vision: true,
reasoningEffort: {
values: ['minimal', 'low', 'medium', 'high'],
},
@@ -240,6 +246,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
updatedAt: '2025-08-07',
},
capabilities: {
vision: true,
reasoningEffort: {
values: ['minimal', 'low', 'medium', 'high'],
},
@@ -258,6 +265,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
updatedAt: '2025-08-07',
},
capabilities: {
vision: true,
reasoningEffort: {
values: ['minimal', 'low', 'medium', 'high'],
},
@@ -287,6 +295,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
updatedAt: '2025-06-17',
},
capabilities: {
vision: true,
reasoningEffort: {
values: ['low', 'medium', 'high'],
},
@@ -302,6 +311,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
updatedAt: '2025-06-17',
},
capabilities: {
vision: true,
reasoningEffort: {
values: ['low', 'medium', 'high'],
},
@@ -317,6 +327,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
updatedAt: '2025-06-17',
},
capabilities: {
vision: true,
reasoningEffort: {
values: ['low', 'medium', 'high'],
},
@@ -333,6 +344,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
capabilities: {
temperature: { min: 0, max: 2 },
vision: true,
},
contextWindow: 1000000,
},
@@ -346,6 +358,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
capabilities: {
temperature: { min: 0, max: 2 },
vision: true,
},
contextWindow: 1000000,
},
@@ -359,6 +372,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
capabilities: {
temperature: { min: 0, max: 2 },
vision: true,
},
contextWindow: 1000000,
},
@@ -385,6 +399,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
capabilities: {
temperature: { min: 0, max: 2 },
vision: true,
},
contextWindow: 128000,
},
@@ -397,6 +412,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
updatedAt: '2025-12-11',
},
capabilities: {
vision: true,
reasoningEffort: {
values: ['none', 'minimal', 'low', 'medium', 'high', 'xhigh'],
},
@@ -415,6 +431,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
updatedAt: '2025-11-14',
},
capabilities: {
vision: true,
reasoningEffort: {
values: ['none', 'low', 'medium', 'high'],
},
@@ -433,6 +450,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
updatedAt: '2025-11-14',
},
capabilities: {
vision: true,
reasoningEffort: {
values: ['none', 'low', 'medium', 'high'],
},
@@ -451,6 +469,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
updatedAt: '2025-11-14',
},
capabilities: {
vision: true,
reasoningEffort: {
values: ['none', 'low', 'medium', 'high'],
},
@@ -469,6 +488,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
updatedAt: '2025-11-14',
},
capabilities: {
vision: true,
reasoningEffort: {
values: ['none', 'medium', 'high'],
},
@@ -487,6 +507,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
updatedAt: '2025-08-07',
},
capabilities: {
vision: true,
reasoningEffort: {
values: ['minimal', 'low', 'medium', 'high'],
},
@@ -505,6 +526,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
updatedAt: '2025-08-07',
},
capabilities: {
vision: true,
reasoningEffort: {
values: ['minimal', 'low', 'medium', 'high'],
},
@@ -523,6 +545,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
updatedAt: '2025-08-07',
},
capabilities: {
vision: true,
reasoningEffort: {
values: ['minimal', 'low', 'medium', 'high'],
},
@@ -552,6 +575,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
updatedAt: '2025-06-15',
},
capabilities: {
vision: true,
reasoningEffort: {
values: ['low', 'medium', 'high'],
},
@@ -567,6 +591,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
updatedAt: '2025-06-15',
},
capabilities: {
vision: true,
reasoningEffort: {
values: ['low', 'medium', 'high'],
},
@@ -581,7 +606,9 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
output: 8.0,
updatedAt: '2025-06-15',
},
capabilities: {
vision: true,
},
contextWindow: 1000000,
},
{
@@ -620,6 +647,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true,
maxOutputTokens: { max: 64000, default: 8192 },
vision: true,
},
contextWindow: 200000,
},
@@ -635,6 +663,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true,
maxOutputTokens: { max: 64000, default: 8192 },
vision: true,
},
contextWindow: 200000,
},
@@ -649,6 +678,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
capabilities: {
temperature: { min: 0, max: 1 },
maxOutputTokens: { max: 64000, default: 8192 },
vision: true,
},
contextWindow: 200000,
},
@@ -664,6 +694,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true,
maxOutputTokens: { max: 64000, default: 8192 },
vision: true,
},
contextWindow: 200000,
},
@@ -679,6 +710,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true,
maxOutputTokens: { max: 64000, default: 8192 },
vision: true,
},
contextWindow: 200000,
},
@@ -693,6 +725,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
capabilities: {
temperature: { min: 0, max: 1 },
maxOutputTokens: { max: 64000, default: 8192 },
vision: true,
},
contextWindow: 200000,
},
@@ -708,6 +741,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
temperature: { min: 0, max: 1 },
computerUse: true,
maxOutputTokens: { max: 8192, default: 8192 },
vision: true,
},
contextWindow: 200000,
},
@@ -723,6 +757,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
temperature: { min: 0, max: 1 },
computerUse: true,
maxOutputTokens: { max: 8192, default: 8192 },
vision: true,
},
contextWindow: 200000,
},
@@ -736,6 +771,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
modelPatterns: [/^gemini/],
capabilities: {
toolUsageControl: true,
vision: true,
},
icon: GeminiIcon,
models: [
@@ -847,6 +883,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
icon: VertexIcon,
capabilities: {
toolUsageControl: true,
vision: true,
},
models: [
{
@@ -1005,6 +1042,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
icon: xAIIcon,
capabilities: {
toolUsageControl: true,
vision: true,
},
models: [
{
@@ -1277,7 +1315,9 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
output: 0.34,
updatedAt: '2026-01-27',
},
capabilities: {
vision: true,
},
contextWindow: 131072,
},
{
@@ -1287,7 +1327,9 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
output: 0.6,
updatedAt: '2026-01-27',
},
capabilities: {
vision: true,
},
contextWindow: 131072,
},
{
@@ -1369,6 +1411,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
capabilities: {
temperature: { min: 0, max: 1 },
vision: true,
},
contextWindow: 256000,
},
@@ -1381,6 +1424,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
capabilities: {
temperature: { min: 0, max: 1 },
vision: true,
},
contextWindow: 256000,
},
@@ -1453,6 +1497,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
capabilities: {
temperature: { min: 0, max: 1 },
vision: true,
},
contextWindow: 128000,
},
@@ -1465,6 +1510,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
capabilities: {
temperature: { min: 0, max: 1 },
vision: true,
},
contextWindow: 128000,
},
@@ -1489,6 +1535,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
capabilities: {
temperature: { min: 0, max: 1 },
vision: true,
},
contextWindow: 128000,
},
@@ -1501,6 +1548,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
capabilities: {
temperature: { min: 0, max: 1 },
vision: true,
},
contextWindow: 128000,
},
@@ -1549,6 +1597,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
capabilities: {
temperature: { min: 0, max: 1 },
vision: true,
},
contextWindow: 128000,
},
@@ -1561,6 +1610,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
capabilities: {
temperature: { min: 0, max: 1 },
vision: true,
},
contextWindow: 128000,
},
@@ -1585,6 +1635,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
capabilities: {
temperature: { min: 0, max: 1 },
vision: true,
},
contextWindow: 256000,
},
@@ -1597,6 +1648,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
capabilities: {
temperature: { min: 0, max: 1 },
vision: true,
},
contextWindow: 256000,
},
@@ -1609,6 +1661,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
capabilities: {
temperature: { min: 0, max: 1 },
vision: true,
},
contextWindow: 256000,
},
@@ -1621,6 +1674,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
capabilities: {
temperature: { min: 0, max: 1 },
vision: true,
},
contextWindow: 256000,
},
@@ -1645,6 +1699,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
capabilities: {
temperature: { min: 0, max: 1 },
vision: true,
},
contextWindow: 256000,
},
@@ -1657,6 +1712,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
},
capabilities: {
temperature: { min: 0, max: 1 },
vision: true,
},
contextWindow: 256000,
},
@@ -1710,6 +1766,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true,
maxOutputTokens: { max: 64000, default: 8192 },
vision: true,
},
contextWindow: 200000,
},
@@ -1724,6 +1781,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true,
maxOutputTokens: { max: 64000, default: 8192 },
vision: true,
},
contextWindow: 200000,
},
@@ -1738,6 +1796,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true,
maxOutputTokens: { max: 64000, default: 8192 },
vision: true,
},
contextWindow: 200000,
},
@@ -1752,6 +1811,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
temperature: { min: 0, max: 1 },
nativeStructuredOutputs: true,
maxOutputTokens: { max: 64000, default: 8192 },
vision: true,
},
contextWindow: 200000,
}, },
@@ -1764,6 +1824,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
}, },
capabilities: { capabilities: {
temperature: { min: 0, max: 1 }, temperature: { min: 0, max: 1 },
vision: true,
}, },
contextWindow: 1000000, contextWindow: 1000000,
}, },
@@ -1776,6 +1837,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
}, },
capabilities: { capabilities: {
temperature: { min: 0, max: 1 }, temperature: { min: 0, max: 1 },
vision: true,
}, },
contextWindow: 1000000, contextWindow: 1000000,
}, },
@@ -1788,6 +1850,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
}, },
capabilities: { capabilities: {
temperature: { min: 0, max: 1 }, temperature: { min: 0, max: 1 },
vision: true,
}, },
contextWindow: 1000000, contextWindow: 1000000,
}, },
@@ -1800,6 +1863,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
}, },
capabilities: { capabilities: {
temperature: { min: 0, max: 1 }, temperature: { min: 0, max: 1 },
vision: true,
}, },
contextWindow: 300000, contextWindow: 300000,
}, },
@@ -1812,6 +1876,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
}, },
capabilities: { capabilities: {
temperature: { min: 0, max: 1 }, temperature: { min: 0, max: 1 },
vision: true,
}, },
contextWindow: 300000, contextWindow: 300000,
}, },
@@ -1836,6 +1901,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
}, },
capabilities: { capabilities: {
temperature: { min: 0, max: 1 }, temperature: { min: 0, max: 1 },
vision: true,
}, },
contextWindow: 1000000, contextWindow: 1000000,
}, },
@@ -1848,6 +1914,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
}, },
capabilities: { capabilities: {
temperature: { min: 0, max: 1 }, temperature: { min: 0, max: 1 },
vision: true,
}, },
contextWindow: 3500000, contextWindow: 3500000,
}, },
@@ -1872,6 +1939,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
}, },
capabilities: { capabilities: {
temperature: { min: 0, max: 1 }, temperature: { min: 0, max: 1 },
vision: true,
}, },
contextWindow: 128000, contextWindow: 128000,
}, },
@@ -1884,6 +1952,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
}, },
capabilities: { capabilities: {
temperature: { min: 0, max: 1 }, temperature: { min: 0, max: 1 },
vision: true,
}, },
contextWindow: 128000, contextWindow: 128000,
}, },
@@ -1956,6 +2025,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
}, },
capabilities: { capabilities: {
temperature: { min: 0, max: 1 }, temperature: { min: 0, max: 1 },
vision: true,
}, },
contextWindow: 128000, contextWindow: 128000,
}, },
@@ -1992,6 +2062,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
}, },
capabilities: { capabilities: {
temperature: { min: 0, max: 1 }, temperature: { min: 0, max: 1 },
vision: true,
}, },
contextWindow: 128000, contextWindow: 128000,
}, },
@@ -2016,6 +2087,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
}, },
capabilities: { capabilities: {
temperature: { min: 0, max: 1 }, temperature: { min: 0, max: 1 },
vision: true,
}, },
contextWindow: 128000, contextWindow: 128000,
}, },
@@ -2028,6 +2100,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
}, },
capabilities: { capabilities: {
temperature: { min: 0, max: 1 }, temperature: { min: 0, max: 1 },
vision: true,
}, },
contextWindow: 128000, contextWindow: 128000,
}, },
@@ -2040,6 +2113,7 @@ export const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
}, },
capabilities: { capabilities: {
temperature: { min: 0, max: 1 }, temperature: { min: 0, max: 1 },
vision: true,
}, },
contextWindow: 128000, contextWindow: 128000,
}, },
@@ -2211,6 +2285,32 @@ export function getMaxTemperature(modelId: string): number | undefined {
   return capabilities?.temperature?.max
 }
 
+/**
+ * Checks if a model supports vision/multimodal inputs (images, audio, video, PDFs)
+ */
+export function supportsVision(modelId: string): boolean {
+  const capabilities = getModelCapabilities(modelId)
+  return !!capabilities?.vision
+}
+
+/**
+ * Returns a list of all vision-capable models
+ */
+export function getVisionModels(): string[] {
+  const models: string[] = []
+  for (const provider of Object.values(PROVIDER_DEFINITIONS)) {
+    // Check if the provider has vision capability at the provider level
+    const providerHasVision = provider.capabilities?.vision
+    for (const model of provider.models) {
+      // Model has vision if either the model or provider has vision capability
+      if (model.capabilities.vision || providerHasVision) {
+        models.push(model.id)
+      }
+    }
+  }
+  return models
+}
+
 export function supportsToolUsageControl(providerId: string): boolean {
   return getProvidersWithToolUsageControl().includes(providerId)
 }
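A minimal sketch of how the two new capability helpers behave, run against a hypothetical two-provider registry (the provider and model ids below are illustrative stand-ins, not the real `PROVIDER_DEFINITIONS`):

```typescript
// Hypothetical registry mirroring the PROVIDER_DEFINITIONS shape the helpers walk.
interface ModelDef {
  id: string
  capabilities: { vision?: boolean }
}
interface ProviderDef {
  capabilities?: { vision?: boolean }
  models: ModelDef[]
}

const DEFS: Record<string, ProviderDef> = {
  acme: {
    models: [
      { id: 'acme-vision-1', capabilities: { vision: true } }, // model-level flag
      { id: 'acme-text-1', capabilities: {} }, // no vision
    ],
  },
  lens: {
    // Provider-level flag: covers every model under this provider
    capabilities: { vision: true },
    models: [{ id: 'lens-mini', capabilities: {} }],
  },
}

function supportsVisionSketch(modelId: string): boolean {
  for (const provider of Object.values(DEFS)) {
    const model = provider.models.find((m) => m.id === modelId)
    if (model) return !!(model.capabilities.vision || provider.capabilities?.vision)
  }
  return false
}

function getVisionModelsSketch(): string[] {
  const out: string[] = []
  for (const provider of Object.values(DEFS)) {
    for (const model of provider.models) {
      // Same rule as the diff: model-level OR provider-level vision flag
      if (model.capabilities.vision || provider.capabilities?.vision) out.push(model.id)
    }
  }
  return out
}
```

The provider-level fallback is the design choice worth noting: a provider that is vision-capable across the board does not need `vision: true` repeated on every model entry.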

View File

@@ -111,9 +111,25 @@ export interface ProviderToolConfig {
   usageControl?: ToolUsageControl
 }
 
+/**
+ * Attachment content (files, images, documents)
+ */
+export interface AttachmentContent {
+  /** Source type: how the data was provided */
+  sourceType: 'url' | 'base64' | 'file'
+  /** The URL or base64 data */
+  data: string
+  /** MIME type (e.g., 'image/png', 'application/pdf', 'audio/mp3') */
+  mimeType?: string
+  /** Optional filename for file uploads */
+  fileName?: string
+}
+
 export interface Message {
-  role: 'system' | 'user' | 'assistant' | 'function' | 'tool'
+  role: 'system' | 'user' | 'assistant' | 'function' | 'tool' | 'attachment'
   content: string | null
+  /** Attachment content for 'attachment' role messages */
+  attachment?: AttachmentContent
   name?: string
   function_call?: {
     name: string
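Constructing a message under the new `attachment` role can be sketched like this; the interfaces are restated so the snippet stands alone, and the payload values are placeholders:

```typescript
// Restated from the diff so the example is self-contained.
interface AttachmentContent {
  sourceType: 'url' | 'base64' | 'file'
  data: string
  mimeType?: string
  fileName?: string
}
interface Message {
  role: 'system' | 'user' | 'assistant' | 'function' | 'tool' | 'attachment'
  content: string | null
  attachment?: AttachmentContent
}

// An 'attachment' message carries its payload in `attachment`, not `content`.
const imageMessage: Message = {
  role: 'attachment',
  content: null,
  attachment: {
    sourceType: 'base64',
    data: 'iVBORw0KGgo=', // truncated placeholder payload
    mimeType: 'image/png',
    fileName: 'chart.png',
  },
}
```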

View File

@@ -23,9 +23,11 @@ import {
   getReasoningEffortValuesForModel as getReasoningEffortValuesForModelFromDefinitions,
   getThinkingLevelsForModel as getThinkingLevelsForModelFromDefinitions,
   getVerbosityValuesForModel as getVerbosityValuesForModelFromDefinitions,
+  getVisionModels,
   PROVIDER_DEFINITIONS,
   supportsTemperature as supportsTemperatureFromDefinitions,
   supportsToolUsageControl as supportsToolUsageControlFromDefinitions,
+  supportsVision,
   updateOllamaModels as updateOllamaModelsInDefinitions,
 } from '@/providers/models'
 import type { ProviderId, ProviderToolConfig } from '@/providers/types'
@@ -1152,3 +1154,6 @@ export function checkForForcedToolUsageOpenAI(
   return { hasUsedForcedTool, usedForcedTools: updatedUsedForcedTools }
 }
+
+// Re-export vision capability functions
+export { supportsVision, getVisionModels }

View File

@@ -134,6 +134,24 @@ function handleSecurityFiltering(request: NextRequest): NextResponse | null {
 export async function proxy(request: NextRequest) {
   const url = request.nextUrl
 
+  if (url.pathname.startsWith('/ingest/')) {
+    const hostname = url.pathname.startsWith('/ingest/static/')
+      ? 'us-assets.i.posthog.com'
+      : 'us.i.posthog.com'
+    const targetPath = url.pathname.replace(/^\/ingest/, '')
+    const targetUrl = `https://${hostname}${targetPath}${url.search}`
+
+    return NextResponse.rewrite(new URL(targetUrl), {
+      request: {
+        headers: new Headers({
+          ...Object.fromEntries(request.headers),
+          host: hostname,
+        }),
+      },
+    })
+  }
+
   const sessionCookie = getSessionCookie(request)
   const hasActiveSession = isAuthDisabled || !!sessionCookie
@@ -195,6 +213,7 @@ export async function proxy(request: NextRequest) {
 export const config = {
   matcher: [
+    '/ingest/:path*', // PostHog proxy for session recording
     '/', // Root path for self-hosted redirect logic
     '/terms', // Whitelabel terms redirect
     '/privacy', // Whitelabel privacy redirect
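The routing decision added above can be isolated as a pure function for illustration. The helper name is hypothetical; the real middleware returns `NextResponse.rewrite(...)` rather than a string, but the host selection and path stripping are the same:

```typescript
// Sketch: static assets route to the PostHog assets host, everything else
// under /ingest/ routes to the events host; non-ingest paths are untouched.
function resolveIngestTarget(pathname: string, search = ''): string | null {
  if (!pathname.startsWith('/ingest/')) return null
  const hostname = pathname.startsWith('/ingest/static/')
    ? 'us-assets.i.posthog.com'
    : 'us.i.posthog.com'
  // Strip the /ingest prefix so the upstream sees its native paths
  const targetPath = pathname.replace(/^\/ingest/, '')
  return `https://${hostname}${targetPath}${search}`
}
```

Doing this in middleware rather than a Next.js rewrite rule is the point of the commit: rewrites can strip large request bodies, while middleware passes the session-recording payload through intact.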

View File

@@ -102,7 +102,7 @@ export interface TraceSpan {
 export interface WorkflowLog {
   id: string
-  workflowId: string
+  workflowId: string | null
   executionId?: string | null
   deploymentVersion?: number | null
   deploymentVersionName?: string | null

View File

@@ -54,6 +54,7 @@ import { TestClientTool } from '@/lib/copilot/tools/client/other/test'
 import { TourClientTool } from '@/lib/copilot/tools/client/other/tour'
 import { WorkflowClientTool } from '@/lib/copilot/tools/client/other/workflow'
 import { createExecutionContext, getTool } from '@/lib/copilot/tools/client/registry'
+import { COPILOT_SERVER_ORCHESTRATED } from '@/lib/copilot/orchestrator/config'
 import { GetCredentialsClientTool } from '@/lib/copilot/tools/client/user/get-credentials'
 import { SetEnvironmentVariablesClientTool } from '@/lib/copilot/tools/client/user/set-environment-variables'
 import { CheckDeploymentStatusClientTool } from '@/lib/copilot/tools/client/workflow/check-deployment-status'
@@ -1198,6 +1199,18 @@ const sseHandlers: Record<string, SSEHandler> = {
         }
       } catch {}
     }
+
+    if (COPILOT_SERVER_ORCHESTRATED && current.name === 'edit_workflow') {
+      try {
+        const resultPayload =
+          data?.result || data?.data?.result || data?.data?.data || data?.data || {}
+        const workflowState = resultPayload?.workflowState
+        if (workflowState) {
+          const diffStore = useWorkflowDiffStore.getState()
+          void diffStore.setProposedChanges(workflowState)
+        }
+      } catch {}
+    }
   }
 
   // Update inline content block state
@@ -1362,6 +1375,10 @@ const sseHandlers: Record<string, SSEHandler> = {
       return
     }
 
+    if (COPILOT_SERVER_ORCHESTRATED) {
+      return
+    }
+
     // Prefer interface-based registry to determine interrupt and execute
     try {
       const def = name ? getTool(name) : undefined
@@ -3820,6 +3837,9 @@ export const useCopilotStore = create<CopilotStore>()(
       setEnabledModels: (models) => set({ enabledModels: models }),
       executeIntegrationTool: async (toolCallId: string) => {
+        if (COPILOT_SERVER_ORCHESTRATED) {
+          return
+        }
         const { toolCallsById, workflowId } = get()
         const toolCall = toolCallsById[toolCallId]
         if (!toolCall || !workflowId) return
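The nested-payload fallback chain used when unwrapping an `edit_workflow` SSE result can be sketched in isolation (the helper name and the payload shapes are illustrative; the store passes the result on to the diff store rather than returning it):

```typescript
// Sketch: try each nesting level the server might use, in priority order,
// falling back to an empty object so the optional chain below is safe.
function extractWorkflowState(data: any): unknown {
  const resultPayload =
    data?.result || data?.data?.result || data?.data?.data || data?.data || {}
  return resultPayload?.workflowState
}
```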

View File

@@ -1648,6 +1648,8 @@ import {
   youtubeCommentsTool,
   youtubePlaylistItemsTool,
   youtubeSearchTool,
+  youtubeTrendingTool,
+  youtubeVideoCategoriesTool,
   youtubeVideoDetailsTool,
 } from '@/tools/youtube'
 import {
@@ -1982,13 +1984,15 @@ export const tools: Record<string, ToolConfig> = {
   typeform_create_form: typeformCreateFormTool,
   typeform_update_form: typeformUpdateFormTool,
   typeform_delete_form: typeformDeleteFormTool,
-  youtube_search: youtubeSearchTool,
-  youtube_video_details: youtubeVideoDetailsTool,
   youtube_channel_info: youtubeChannelInfoTool,
-  youtube_playlist_items: youtubePlaylistItemsTool,
-  youtube_comments: youtubeCommentsTool,
-  youtube_channel_videos: youtubeChannelVideosTool,
   youtube_channel_playlists: youtubeChannelPlaylistsTool,
+  youtube_channel_videos: youtubeChannelVideosTool,
+  youtube_comments: youtubeCommentsTool,
+  youtube_playlist_items: youtubePlaylistItemsTool,
+  youtube_search: youtubeSearchTool,
+  youtube_trending: youtubeTrendingTool,
+  youtube_video_categories: youtubeVideoCategoriesTool,
+  youtube_video_details: youtubeVideoDetailsTool,
   notion_read: notionReadTool,
   notion_read_database: notionReadDatabaseTool,
   notion_write: notionWriteTool,

View File

@@ -7,8 +7,9 @@ export const youtubeChannelInfoTool: ToolConfig<
 > = {
   id: 'youtube_channel_info',
   name: 'YouTube Channel Info',
-  description: 'Get detailed information about a YouTube channel.',
-  version: '1.0.0',
+  description:
+    'Get detailed information about a YouTube channel including statistics, branding, and content details.',
+  version: '1.1.0',
   params: {
     channelId: {
       type: 'string',
@@ -33,11 +34,11 @@ export const youtubeChannelInfoTool: ToolConfig<
   request: {
     url: (params: YouTubeChannelInfoParams) => {
       let url =
-        'https://www.googleapis.com/youtube/v3/channels?part=snippet,statistics,contentDetails'
+        'https://www.googleapis.com/youtube/v3/channels?part=snippet,statistics,contentDetails,brandingSettings'
       if (params.channelId) {
-        url += `&id=${params.channelId}`
+        url += `&id=${encodeURIComponent(params.channelId)}`
       } else if (params.username) {
-        url += `&forUsername=${params.username}`
+        url += `&forUsername=${encodeURIComponent(params.username)}`
       }
       url += `&key=${params.apiKey}`
       return url
@@ -63,6 +64,11 @@ export const youtubeChannelInfoTool: ToolConfig<
           viewCount: 0,
           publishedAt: '',
           thumbnail: '',
+          customUrl: null,
+          country: null,
+          uploadsPlaylistId: null,
+          bannerImageUrl: null,
+          hiddenSubscriberCount: false,
         },
         error: 'Channel not found',
       }
@@ -72,19 +78,23 @@ export const youtubeChannelInfoTool: ToolConfig<
     return {
       success: true,
       output: {
-        channelId: item.id,
-        title: item.snippet?.title || '',
-        description: item.snippet?.description || '',
+        channelId: item.id ?? '',
+        title: item.snippet?.title ?? '',
+        description: item.snippet?.description ?? '',
         subscriberCount: Number(item.statistics?.subscriberCount || 0),
         videoCount: Number(item.statistics?.videoCount || 0),
         viewCount: Number(item.statistics?.viewCount || 0),
-        publishedAt: item.snippet?.publishedAt || '',
+        publishedAt: item.snippet?.publishedAt ?? '',
         thumbnail:
           item.snippet?.thumbnails?.high?.url ||
           item.snippet?.thumbnails?.medium?.url ||
           item.snippet?.thumbnails?.default?.url ||
           '',
-        customUrl: item.snippet?.customUrl,
+        customUrl: item.snippet?.customUrl ?? null,
+        country: item.snippet?.country ?? null,
+        uploadsPlaylistId: item.contentDetails?.relatedPlaylists?.uploads ?? null,
+        bannerImageUrl: item.brandingSettings?.image?.bannerExternalUrl ?? null,
+        hiddenSubscriberCount: item.statistics?.hiddenSubscriberCount ?? false,
       },
     }
   },
@@ -104,11 +114,11 @@ export const youtubeChannelInfoTool: ToolConfig<
     },
     subscriberCount: {
       type: 'number',
-      description: 'Number of subscribers',
+      description: 'Number of subscribers (0 if hidden)',
     },
     videoCount: {
       type: 'number',
-      description: 'Number of videos',
+      description: 'Number of public videos',
     },
     viewCount: {
       type: 'number',
@@ -120,12 +130,31 @@ export const youtubeChannelInfoTool: ToolConfig<
     },
     thumbnail: {
       type: 'string',
-      description: 'Channel thumbnail URL',
+      description: 'Channel thumbnail/avatar URL',
     },
     customUrl: {
       type: 'string',
-      description: 'Channel custom URL',
+      description: 'Channel custom URL (handle)',
       optional: true,
     },
+    country: {
+      type: 'string',
+      description: 'Country the channel is associated with',
+      optional: true,
+    },
+    uploadsPlaylistId: {
+      type: 'string',
+      description: 'Playlist ID containing all channel uploads (use with playlist_items)',
+      optional: true,
+    },
+    bannerImageUrl: {
+      type: 'string',
+      description: 'Channel banner image URL',
+      optional: true,
+    },
+    hiddenSubscriberCount: {
+      type: 'boolean',
+      description: 'Whether the subscriber count is hidden',
+    },
   },
 }
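A sketch of the updated URL construction, showing why the new `encodeURIComponent` guards matter for user-supplied ids and usernames (`EXAMPLE_KEY` is a placeholder, not a real API key):

```typescript
// Mirrors the diff's url builder: brandingSettings is now requested, and
// channelId/username are percent-encoded before interpolation.
function buildChannelInfoUrl(params: {
  channelId?: string
  username?: string
  apiKey: string
}): string {
  let url =
    'https://www.googleapis.com/youtube/v3/channels?part=snippet,statistics,contentDetails,brandingSettings'
  if (params.channelId) {
    url += `&id=${encodeURIComponent(params.channelId)}`
  } else if (params.username) {
    url += `&forUsername=${encodeURIComponent(params.username)}`
  }
  url += `&key=${params.apiKey}`
  return url
}
```

Without the encoding, a username containing `&`, `#`, or whitespace would corrupt the query string; with it, such input becomes a harmless percent-escaped parameter value.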

View File

@@ -10,8 +10,8 @@ export const youtubeChannelPlaylistsTool: ToolConfig<
 > = {
   id: 'youtube_channel_playlists',
   name: 'YouTube Channel Playlists',
-  description: 'Get all playlists from a specific YouTube channel.',
-  version: '1.0.0',
+  description: 'Get all public playlists from a specific YouTube channel.',
+  version: '1.1.0',
   params: {
     channelId: {
       type: 'string',
@@ -47,7 +47,7 @@ export const youtubeChannelPlaylistsTool: ToolConfig<
       )}&key=${params.apiKey}`
       url += `&maxResults=${Number(params.maxResults || 10)}`
       if (params.pageToken) {
-        url += `&pageToken=${params.pageToken}`
+        url += `&pageToken=${encodeURIComponent(params.pageToken)}`
       }
       return url
     },
@@ -60,36 +60,49 @@ export const youtubeChannelPlaylistsTool: ToolConfig<
   transformResponse: async (response: Response): Promise<YouTubeChannelPlaylistsResponse> => {
     const data = await response.json()
 
-    if (!data.items) {
+    if (data.error) {
       return {
         success: false,
         output: {
           items: [],
           totalResults: 0,
+          nextPageToken: null,
+        },
+        error: data.error.message || 'Failed to fetch channel playlists',
+      }
+    }
+
+    if (!data.items || data.items.length === 0) {
+      return {
+        success: true,
+        output: {
+          items: [],
+          totalResults: 0,
+          nextPageToken: null,
         },
-        error: 'No playlists found',
       }
     }
 
     const items = (data.items || []).map((item: any) => ({
-      playlistId: item.id,
-      title: item.snippet?.title || '',
-      description: item.snippet?.description || '',
+      playlistId: item.id ?? '',
+      title: item.snippet?.title ?? '',
+      description: item.snippet?.description ?? '',
       thumbnail:
         item.snippet?.thumbnails?.medium?.url ||
         item.snippet?.thumbnails?.default?.url ||
         item.snippet?.thumbnails?.high?.url ||
         '',
-      itemCount: item.contentDetails?.itemCount || 0,
-      publishedAt: item.snippet?.publishedAt || '',
+      itemCount: Number(item.contentDetails?.itemCount || 0),
+      publishedAt: item.snippet?.publishedAt ?? '',
+      channelTitle: item.snippet?.channelTitle ?? '',
     }))
 
     return {
       success: true,
       output: {
         items,
-        totalResults: data.pageInfo?.totalResults || 0,
-        nextPageToken: data.nextPageToken,
+        totalResults: data.pageInfo?.totalResults || items.length,
+        nextPageToken: data.nextPageToken ?? null,
       },
     }
   },
@@ -107,6 +120,7 @@ export const youtubeChannelPlaylistsTool: ToolConfig<
         thumbnail: { type: 'string', description: 'Playlist thumbnail URL' },
         itemCount: { type: 'number', description: 'Number of videos in playlist' },
         publishedAt: { type: 'string', description: 'Playlist creation date' },
+        channelTitle: { type: 'string', description: 'Channel name' },
       },
     },
   },
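The new three-branch response handling (API error becomes a failure, an empty list becomes a successful empty result, anything else is mapped) can be sketched against an already-parsed payload. The helper name and the reduced output shape are illustrative; the real tool works on a fetch `Response` and maps more fields:

```typescript
// Sketch of the transformResponse pattern from the diff, reduced to two fields.
interface PlaylistItemOut {
  playlistId: string
  title: string
}

function transformPlaylists(data: any): {
  success: boolean
  items: PlaylistItemOut[]
  error?: string
} {
  // Branch 1: an explicit API error object means the call failed
  if (data.error) {
    return {
      success: false,
      items: [],
      error: data.error.message || 'Failed to fetch channel playlists',
    }
  }
  // Branch 2: a channel with no playlists is a valid, empty result
  // (previously this surfaced as a 'No playlists found' error)
  if (!data.items || data.items.length === 0) {
    return { success: true, items: [] }
  }
  // Branch 3: map items, defaulting every field so missing data never throws
  const items = data.items.map((item: any) => ({
    playlistId: item.id ?? '',
    title: item.snippet?.title ?? '',
  }))
  return { success: true, items }
}
```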

View File

@@ -10,8 +10,9 @@ export const youtubeChannelVideosTool: ToolConfig<
 > = {
   id: 'youtube_channel_videos',
   name: 'YouTube Channel Videos',
-  description: 'Get all videos from a specific YouTube channel, with sorting options.',
-  version: '1.0.0',
+  description:
+    'Search for videos from a specific YouTube channel with sorting options. For complete channel video list, use channel_info to get uploadsPlaylistId, then use playlist_items.',
+  version: '1.1.0',
   params: {
     channelId: {
       type: 'string',
@@ -30,7 +31,8 @@ export const youtubeChannelVideosTool: ToolConfig<
       type: 'string',
       required: false,
       visibility: 'user-or-llm',
-      description: 'Sort order: "date" (newest first), "rating", "relevance", "title", "viewCount"',
+      description:
+        'Sort order: "date" (newest first, default), "rating", "relevance", "title", "viewCount"',
     },
     pageToken: {
       type: 'string',
@@ -52,11 +54,9 @@ export const youtubeChannelVideosTool: ToolConfig<
         params.channelId
       )}&key=${params.apiKey}`
       url += `&maxResults=${Number(params.maxResults || 10)}`
-      if (params.order) {
-        url += `&order=${params.order}`
-      }
+      url += `&order=${params.order || 'date'}`
       if (params.pageToken) {
-        url += `&pageToken=${params.pageToken}`
+        url += `&pageToken=${encodeURIComponent(params.pageToken)}`
       }
       return url
     },
@@ -68,23 +68,38 @@ export const youtubeChannelVideosTool: ToolConfig<
   transformResponse: async (response: Response): Promise<YouTubeChannelVideosResponse> => {
     const data = await response.json()
 
+    if (data.error) {
+      return {
+        success: false,
+        output: {
+          items: [],
+          totalResults: 0,
+          nextPageToken: null,
+        },
+        error: data.error.message || 'Failed to fetch channel videos',
+      }
+    }
+
     const items = (data.items || []).map((item: any) => ({
-      videoId: item.id?.videoId,
-      title: item.snippet?.title,
-      description: item.snippet?.description,
+      videoId: item.id?.videoId ?? '',
+      title: item.snippet?.title ?? '',
+      description: item.snippet?.description ?? '',
       thumbnail:
         item.snippet?.thumbnails?.medium?.url ||
         item.snippet?.thumbnails?.default?.url ||
         item.snippet?.thumbnails?.high?.url ||
         '',
-      publishedAt: item.snippet?.publishedAt || '',
+      publishedAt: item.snippet?.publishedAt ?? '',
+      channelTitle: item.snippet?.channelTitle ?? '',
    }))
 
     return {
       success: true,
       output: {
         items,
-        totalResults: data.pageInfo?.totalResults || 0,
-        nextPageToken: data.nextPageToken,
+        totalResults: data.pageInfo?.totalResults || items.length,
+        nextPageToken: data.nextPageToken ?? null,
       },
     }
   },
@@ -101,6 +116,7 @@ export const youtubeChannelVideosTool: ToolConfig<
         description: { type: 'string', description: 'Video description' },
         thumbnail: { type: 'string', description: 'Video thumbnail URL' },
         publishedAt: { type: 'string', description: 'Video publish date' },
+        channelTitle: { type: 'string', description: 'Channel name' },
       },
     },
   },

View File

@@ -4,8 +4,8 @@ import type { YouTubeCommentsParams, YouTubeCommentsResponse } from '@/tools/you
 export const youtubeCommentsTool: ToolConfig<YouTubeCommentsParams, YouTubeCommentsResponse> = {
   id: 'youtube_comments',
   name: 'YouTube Video Comments',
-  description: 'Get comments from a YouTube video.',
-  version: '1.0.0',
+  description: 'Get top-level comments from a YouTube video with author details and engagement.',
+  version: '1.1.0',
   params: {
     videoId: {
       type: 'string',
@@ -18,14 +18,14 @@ export const youtubeCommentsTool: ToolConfig<YouTubeCommentsParams, YouTubeComme
       required: false,
       visibility: 'user-only',
       default: 20,
-      description: 'Maximum number of comments to return',
+      description: 'Maximum number of comments to return (1-100)',
     },
     order: {
       type: 'string',
       required: false,
-      visibility: 'user-only',
+      visibility: 'user-or-llm',
       default: 'relevance',
-      description: 'Order of comments: time or relevance',
+      description: 'Order of comments: "time" (newest first) or "relevance" (most relevant first)',
     },
     pageToken: {
       type: 'string',
@@ -43,11 +43,11 @@ export const youtubeCommentsTool: ToolConfig<YouTubeCommentsParams, YouTubeComme
   request: {
     url: (params: YouTubeCommentsParams) => {
-      let url = `https://www.googleapis.com/youtube/v3/commentThreads?part=snippet,replies&videoId=${params.videoId}&key=${params.apiKey}`
+      let url = `https://www.googleapis.com/youtube/v3/commentThreads?part=snippet,replies&videoId=${encodeURIComponent(params.videoId)}&key=${params.apiKey}`
       url += `&maxResults=${Number(params.maxResults || 20)}`
       url += `&order=${params.order || 'relevance'}`
       if (params.pageToken) {
-        url += `&pageToken=${params.pageToken}`
+        url += `&pageToken=${encodeURIComponent(params.pageToken)}`
       }
       return url
     },
@@ -60,18 +60,31 @@ export const youtubeCommentsTool: ToolConfig<YouTubeCommentsParams, YouTubeComme
   transformResponse: async (response: Response): Promise<YouTubeCommentsResponse> => {
     const data = await response.json()
 
+    if (data.error) {
+      return {
+        success: false,
+        output: {
+          items: [],
+          totalResults: 0,
+          nextPageToken: null,
+        },
+        error: data.error.message || 'Failed to fetch comments',
+      }
+    }
+
     const items = (data.items || []).map((item: any) => {
       const topLevelComment = item.snippet?.topLevelComment?.snippet
       return {
-        commentId: item.snippet?.topLevelComment?.id || item.id,
-        authorDisplayName: topLevelComment?.authorDisplayName || '',
-        authorChannelUrl: topLevelComment?.authorChannelUrl || '',
-        textDisplay: topLevelComment?.textDisplay || '',
-        textOriginal: topLevelComment?.textOriginal || '',
-        likeCount: topLevelComment?.likeCount || 0,
-        publishedAt: topLevelComment?.publishedAt || '',
-        updatedAt: topLevelComment?.updatedAt || '',
-        replyCount: item.snippet?.totalReplyCount || 0,
+        commentId: item.snippet?.topLevelComment?.id ?? item.id ?? '',
+        authorDisplayName: topLevelComment?.authorDisplayName ?? '',
+        authorChannelUrl: topLevelComment?.authorChannelUrl ?? '',
+        authorProfileImageUrl: topLevelComment?.authorProfileImageUrl ?? '',
+        textDisplay: topLevelComment?.textDisplay ?? '',
+        textOriginal: topLevelComment?.textOriginal ?? '',
+        likeCount: Number(topLevelComment?.likeCount || 0),
+        publishedAt: topLevelComment?.publishedAt ?? '',
+        updatedAt: topLevelComment?.updatedAt ?? '',
+        replyCount: Number(item.snippet?.totalReplyCount || 0),
       }
     })
 
@@ -79,8 +92,8 @@ export const youtubeCommentsTool: ToolConfig<YouTubeCommentsParams, YouTubeComme
       success: true,
       output: {
         items,
-        totalResults: data.pageInfo?.totalResults || 0,
-        nextPageToken: data.nextPageToken,
+        totalResults: data.pageInfo?.totalResults || items.length,
+        nextPageToken: data.nextPageToken ?? null,
       },
     }
   },
@@ -88,25 +101,29 @@ export const youtubeCommentsTool: ToolConfig<YouTubeCommentsParams, YouTubeComme
   outputs: {
     items: {
       type: 'array',
-      description: 'Array of comments from the video',
+      description: 'Array of top-level comments from the video',
       items: {
         type: 'object',
         properties: {
           commentId: { type: 'string', description: 'Comment ID' },
-          authorDisplayName: { type: 'string', description: 'Comment author name' },
+          authorDisplayName: { type: 'string', description: 'Comment author display name' },
           authorChannelUrl: { type: 'string', description: 'Comment author channel URL' },
+          authorProfileImageUrl: {
+            type: 'string',
+            description: 'Comment author profile image URL',
+          },
           textDisplay: { type: 'string', description: 'Comment text (HTML formatted)' },
           textOriginal: { type: 'string', description: 'Comment text (plain text)' },
-          likeCount: { type: 'number', description: 'Number of likes' },
-          publishedAt: { type: 'string', description: 'Comment publish date' },
-          updatedAt: { type: 'string', description: 'Comment last updated date' },
-          replyCount: { type: 'number', description: 'Number of replies', optional: true },
+          likeCount: { type: 'number', description: 'Number of likes on the comment' },
+          publishedAt: { type: 'string', description: 'When the comment was posted' },
+          updatedAt: { type: 'string', description: 'When the comment was last edited' },
+          replyCount: { type: 'number', description: 'Number of replies to this comment' },
         },
       },
     },
     totalResults: {
       type: 'number',
-      description: 'Total number of comments',
+      description: 'Total number of comment threads available',
     },
     nextPageToken: {
       type: 'string',
File: @/tools/youtube/index.ts

@@ -4,6 +4,8 @@ import { youtubeChannelVideosTool } from '@/tools/youtube/channel_videos'
 import { youtubeCommentsTool } from '@/tools/youtube/comments'
 import { youtubePlaylistItemsTool } from '@/tools/youtube/playlist_items'
 import { youtubeSearchTool } from '@/tools/youtube/search'
+import { youtubeTrendingTool } from '@/tools/youtube/trending'
+import { youtubeVideoCategoriesTool } from '@/tools/youtube/video_categories'
 import { youtubeVideoDetailsTool } from '@/tools/youtube/video_details'

 export { youtubeSearchTool }
@@ -13,3 +15,5 @@ export { youtubePlaylistItemsTool }
 export { youtubeCommentsTool }
 export { youtubeChannelVideosTool }
 export { youtubeChannelPlaylistsTool }
+export { youtubeTrendingTool }
+export { youtubeVideoCategoriesTool }

File: @/tools/youtube/playlist_items.ts

@@ -10,21 +10,23 @@ export const youtubePlaylistItemsTool: ToolConfig<
 > = {
   id: 'youtube_playlist_items',
   name: 'YouTube Playlist Items',
-  description: 'Get videos from a YouTube playlist.',
-  version: '1.0.0',
+  description:
+    'Get videos from a YouTube playlist. Can be used with a channel uploads playlist to get all channel videos.',
+  version: '1.1.0',
   params: {
     playlistId: {
       type: 'string',
       required: true,
       visibility: 'user-or-llm',
-      description: 'YouTube playlist ID',
+      description:
+        'YouTube playlist ID. Use uploadsPlaylistId from channel_info to get all channel videos.',
     },
     maxResults: {
       type: 'number',
       required: false,
       visibility: 'user-only',
       default: 10,
-      description: 'Maximum number of videos to return',
+      description: 'Maximum number of videos to return (1-50)',
     },
     pageToken: {
       type: 'string',
@@ -42,10 +44,10 @@ export const youtubePlaylistItemsTool: ToolConfig<
   request: {
     url: (params: YouTubePlaylistItemsParams) => {
-      let url = `https://www.googleapis.com/youtube/v3/playlistItems?part=snippet,contentDetails&playlistId=${params.playlistId}&key=${params.apiKey}`
+      let url = `https://www.googleapis.com/youtube/v3/playlistItems?part=snippet,contentDetails&playlistId=${encodeURIComponent(params.playlistId)}&key=${params.apiKey}`
       url += `&maxResults=${Number(params.maxResults || 10)}`
       if (params.pageToken) {
-        url += `&pageToken=${params.pageToken}`
+        url += `&pageToken=${encodeURIComponent(params.pageToken)}`
       }
       return url
     },
@@ -58,26 +60,40 @@ export const youtubePlaylistItemsTool: ToolConfig<
   transformResponse: async (response: Response): Promise<YouTubePlaylistItemsResponse> => {
     const data = await response.json()
+    if (data.error) {
+      return {
+        success: false,
+        output: {
+          items: [],
+          totalResults: 0,
+          nextPageToken: null,
+        },
+        error: data.error.message || 'Failed to fetch playlist items',
+      }
+    }
     const items = (data.items || []).map((item: any, index: number) => ({
-      videoId: item.contentDetails?.videoId || item.snippet?.resourceId?.videoId,
-      title: item.snippet?.title || '',
-      description: item.snippet?.description || '',
+      videoId: item.contentDetails?.videoId ?? item.snippet?.resourceId?.videoId ?? '',
+      title: item.snippet?.title ?? '',
+      description: item.snippet?.description ?? '',
       thumbnail:
         item.snippet?.thumbnails?.medium?.url ||
         item.snippet?.thumbnails?.default?.url ||
         item.snippet?.thumbnails?.high?.url ||
         '',
-      publishedAt: item.snippet?.publishedAt || '',
-      channelTitle: item.snippet?.channelTitle || '',
+      publishedAt: item.snippet?.publishedAt ?? '',
+      channelTitle: item.snippet?.channelTitle ?? '',
       position: item.snippet?.position ?? index,
+      videoOwnerChannelId: item.snippet?.videoOwnerChannelId ?? null,
+      videoOwnerChannelTitle: item.snippet?.videoOwnerChannelTitle ?? null,
     }))
     return {
       success: true,
       output: {
         items,
-        totalResults: data.pageInfo?.totalResults || 0,
-        nextPageToken: data.nextPageToken,
+        totalResults: data.pageInfo?.totalResults || items.length,
+        nextPageToken: data.nextPageToken ?? null,
       },
     }
   },
@@ -94,8 +110,18 @@ export const youtubePlaylistItemsTool: ToolConfig<
           description: { type: 'string', description: 'Video description' },
           thumbnail: { type: 'string', description: 'Video thumbnail URL' },
           publishedAt: { type: 'string', description: 'Date added to playlist' },
-          channelTitle: { type: 'string', description: 'Channel name' },
-          position: { type: 'number', description: 'Position in playlist' },
+          channelTitle: { type: 'string', description: 'Playlist owner channel name' },
+          position: { type: 'number', description: 'Position in playlist (0-indexed)' },
+          videoOwnerChannelId: {
+            type: 'string',
+            description: 'Channel ID of the video owner',
+            optional: true,
+          },
+          videoOwnerChannelTitle: {
+            type: 'string',
+            description: 'Channel name of the video owner',
+            optional: true,
+          },
         },
       },
     },

File: @/tools/youtube/search.ts

@@ -5,8 +5,8 @@ export const youtubeSearchTool: ToolConfig<YouTubeSearchParams, YouTubeSearchRes
   id: 'youtube_search',
   name: 'YouTube Search',
   description:
-    'Search for videos on YouTube using the YouTube Data API. Supports advanced filtering by channel, date range, duration, category, quality, captions, and more.',
-  version: '1.0.0',
+    'Search for videos on YouTube using the YouTube Data API. Supports advanced filtering by channel, date range, duration, category, quality, captions, live streams, and more.',
+  version: '1.2.0',
   params: {
     query: {
       type: 'string',
@@ -21,13 +21,18 @@ export const youtubeSearchTool: ToolConfig<YouTubeSearchParams, YouTubeSearchRes
       default: 5,
       description: 'Maximum number of videos to return (1-50)',
     },
+    pageToken: {
+      type: 'string',
+      required: false,
+      visibility: 'user-only',
+      description: 'Page token for pagination (use nextPageToken from previous response)',
+    },
     apiKey: {
       type: 'string',
       required: true,
       visibility: 'user-only',
       description: 'YouTube API Key',
     },
-    // Priority 1: Essential filters
     channelId: {
       type: 'string',
       required: false,
@@ -66,9 +71,9 @@ export const youtubeSearchTool: ToolConfig<YouTubeSearchParams, YouTubeSearchRes
       type: 'string',
       required: false,
       visibility: 'user-or-llm',
-      description: 'Filter by YouTube category ID (e.g., "10" for Music, "20" for Gaming)',
+      description:
+        'Filter by YouTube category ID (e.g., "10" for Music, "20" for Gaming). Use video_categories to list IDs.',
     },
-    // Priority 2: Very useful filters
     videoDefinition: {
       type: 'string',
       required: false,
@@ -82,6 +87,13 @@ export const youtubeSearchTool: ToolConfig<YouTubeSearchParams, YouTubeSearchRes
       description:
         'Filter by caption availability: "closedCaption" (has captions), "none" (no captions), "any"',
     },
+    eventType: {
+      type: 'string',
+      required: false,
+      visibility: 'user-or-llm',
+      description:
+        'Filter by live broadcast status: "live" (currently live), "upcoming" (scheduled), "completed" (past streams)',
+    },
     regionCode: {
       type: 'string',
       required: false,
@@ -110,7 +122,9 @@ export const youtubeSearchTool: ToolConfig<YouTubeSearchParams, YouTubeSearchRes
       )}`
       url += `&maxResults=${Number(params.maxResults || 5)}`
-      // Add Priority 1 filters if provided
+      if (params.pageToken) {
+        url += `&pageToken=${encodeURIComponent(params.pageToken)}`
+      }
       if (params.channelId) {
         url += `&channelId=${encodeURIComponent(params.channelId)}`
       }
@@ -129,14 +143,15 @@ export const youtubeSearchTool: ToolConfig<YouTubeSearchParams, YouTubeSearchRes
       if (params.videoCategoryId) {
         url += `&videoCategoryId=${params.videoCategoryId}`
       }
-      // Add Priority 2 filters if provided
       if (params.videoDefinition) {
         url += `&videoDefinition=${params.videoDefinition}`
       }
       if (params.videoCaption) {
         url += `&videoCaption=${params.videoCaption}`
       }
+      if (params.eventType) {
+        url += `&eventType=${params.eventType}`
+      }
       if (params.regionCode) {
         url += `&regionCode=${params.regionCode}`
       }
@@ -157,22 +172,39 @@ export const youtubeSearchTool: ToolConfig<YouTubeSearchParams, YouTubeSearchRes
   transformResponse: async (response: Response): Promise<YouTubeSearchResponse> => {
     const data = await response.json()
+    if (data.error) {
+      return {
+        success: false,
+        output: {
+          items: [],
+          totalResults: 0,
+          nextPageToken: null,
+        },
+        error: data.error.message || 'Search failed',
+      }
+    }
     const items = (data.items || []).map((item: any) => ({
-      videoId: item.id?.videoId,
-      title: item.snippet?.title,
-      description: item.snippet?.description,
+      videoId: item.id?.videoId ?? '',
+      title: item.snippet?.title ?? '',
+      description: item.snippet?.description ?? '',
       thumbnail:
         item.snippet?.thumbnails?.default?.url ||
         item.snippet?.thumbnails?.medium?.url ||
         item.snippet?.thumbnails?.high?.url ||
         '',
+      channelId: item.snippet?.channelId ?? '',
+      channelTitle: item.snippet?.channelTitle ?? '',
+      publishedAt: item.snippet?.publishedAt ?? '',
+      liveBroadcastContent: item.snippet?.liveBroadcastContent ?? 'none',
     }))
     return {
       success: true,
       output: {
         items,
         totalResults: data.pageInfo?.totalResults || 0,
-        nextPageToken: data.nextPageToken,
+        nextPageToken: data.nextPageToken ?? null,
       },
     }
   },
@@ -188,6 +220,13 @@ export const youtubeSearchTool: ToolConfig<YouTubeSearchParams, YouTubeSearchRes
           title: { type: 'string', description: 'Video title' },
           description: { type: 'string', description: 'Video description' },
           thumbnail: { type: 'string', description: 'Video thumbnail URL' },
+          channelId: { type: 'string', description: 'Channel ID that uploaded the video' },
+          channelTitle: { type: 'string', description: 'Channel name' },
+          publishedAt: { type: 'string', description: 'Video publish date' },
+          liveBroadcastContent: {
+            type: 'string',
+            description: 'Live broadcast status: "none", "live", or "upcoming"',
+          },
         },
       },
     },
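The pageToken plumbing this diff adds to the search tool can be sketched in isolation. The helper below is illustrative only (`buildSearchUrl` and `SearchUrlParams` are not part of the codebase); it mirrors the URL construction in the hunk above with a subset of the parameters:

```typescript
// Illustrative helper mirroring the search URL construction in the diff.
// Parameter names follow the YouTube Data API v3 search.list endpoint.
interface SearchUrlParams {
  query: string
  apiKey: string
  maxResults?: number
  pageToken?: string
  eventType?: 'completed' | 'live' | 'upcoming'
}

function buildSearchUrl(params: SearchUrlParams): string {
  // User-supplied values that may contain reserved characters are encoded;
  // enum-like values (eventType) are appended as-is, matching the diff.
  let url = `https://www.googleapis.com/youtube/v3/search?part=snippet&type=video&q=${encodeURIComponent(params.query)}&key=${params.apiKey}`
  url += `&maxResults=${Number(params.maxResults || 5)}`
  if (params.pageToken) {
    url += `&pageToken=${encodeURIComponent(params.pageToken)}`
  }
  if (params.eventType) {
    url += `&eventType=${params.eventType}`
  }
  return url
}
```

A caller would pass `nextPageToken` from one response as `pageToken` on the next request until it comes back null.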

File: @/tools/youtube/trending.ts

@@ -0,0 +1,139 @@
+import type { ToolConfig } from '@/tools/types'
+import type { YouTubeTrendingParams, YouTubeTrendingResponse } from '@/tools/youtube/types'
+
+export const youtubeTrendingTool: ToolConfig<YouTubeTrendingParams, YouTubeTrendingResponse> = {
+  id: 'youtube_trending',
+  name: 'YouTube Trending Videos',
+  description:
+    'Get the most popular/trending videos on YouTube. Can filter by region and video category.',
+  version: '1.0.0',
+  params: {
+    regionCode: {
+      type: 'string',
+      required: false,
+      visibility: 'user-or-llm',
+      description:
+        'ISO 3166-1 alpha-2 country code to get trending videos for (e.g., "US", "GB", "JP"). Defaults to US.',
+    },
+    videoCategoryId: {
+      type: 'string',
+      required: false,
+      visibility: 'user-or-llm',
+      description:
+        'Filter by video category ID (e.g., "10" for Music, "20" for Gaming, "17" for Sports)',
+    },
+    maxResults: {
+      type: 'number',
+      required: false,
+      visibility: 'user-only',
+      default: 10,
+      description: 'Maximum number of trending videos to return (1-50)',
+    },
+    pageToken: {
+      type: 'string',
+      required: false,
+      visibility: 'user-only',
+      description: 'Page token for pagination',
+    },
+    apiKey: {
+      type: 'string',
+      required: true,
+      visibility: 'user-only',
+      description: 'YouTube API Key',
+    },
+  },
+  request: {
+    url: (params: YouTubeTrendingParams) => {
+      let url = `https://www.googleapis.com/youtube/v3/videos?part=snippet,statistics,contentDetails&chart=mostPopular&key=${params.apiKey}`
+      url += `&maxResults=${Number(params.maxResults || 10)}`
+      url += `&regionCode=${params.regionCode || 'US'}`
+      if (params.videoCategoryId) {
+        url += `&videoCategoryId=${params.videoCategoryId}`
+      }
+      if (params.pageToken) {
+        url += `&pageToken=${encodeURIComponent(params.pageToken)}`
+      }
+      return url
+    },
+    method: 'GET',
+    headers: () => ({
+      'Content-Type': 'application/json',
+    }),
+  },
+  transformResponse: async (response: Response): Promise<YouTubeTrendingResponse> => {
+    const data = await response.json()
+
+    if (data.error) {
+      return {
+        success: false,
+        output: {
+          items: [],
+          totalResults: 0,
+          nextPageToken: null,
+        },
+        error: data.error.message || 'Failed to fetch trending videos',
+      }
+    }
+
+    const items = (data.items || []).map((item: any) => ({
+      videoId: item.id ?? '',
+      title: item.snippet?.title ?? '',
+      description: item.snippet?.description ?? '',
+      thumbnail:
+        item.snippet?.thumbnails?.high?.url ||
+        item.snippet?.thumbnails?.medium?.url ||
+        item.snippet?.thumbnails?.default?.url ||
+        '',
+      channelId: item.snippet?.channelId ?? '',
+      channelTitle: item.snippet?.channelTitle ?? '',
+      publishedAt: item.snippet?.publishedAt ?? '',
+      viewCount: Number(item.statistics?.viewCount || 0),
+      likeCount: Number(item.statistics?.likeCount || 0),
+      commentCount: Number(item.statistics?.commentCount || 0),
+      duration: item.contentDetails?.duration ?? '',
+    }))
+
+    return {
+      success: true,
+      output: {
+        items,
+        totalResults: data.pageInfo?.totalResults || items.length,
+        nextPageToken: data.nextPageToken ?? null,
+      },
+    }
+  },
+  outputs: {
+    items: {
+      type: 'array',
+      description: 'Array of trending videos',
+      items: {
+        type: 'object',
+        properties: {
+          videoId: { type: 'string', description: 'YouTube video ID' },
+          title: { type: 'string', description: 'Video title' },
+          description: { type: 'string', description: 'Video description' },
+          thumbnail: { type: 'string', description: 'Video thumbnail URL' },
+          channelId: { type: 'string', description: 'Channel ID' },
+          channelTitle: { type: 'string', description: 'Channel name' },
+          publishedAt: { type: 'string', description: 'Video publish date' },
+          viewCount: { type: 'number', description: 'Number of views' },
+          likeCount: { type: 'number', description: 'Number of likes' },
+          commentCount: { type: 'number', description: 'Number of comments' },
+          duration: { type: 'string', description: 'Video duration in ISO 8601 format' },
+        },
+      },
+    },
+    totalResults: {
+      type: 'number',
+      description: 'Total number of trending videos available',
+    },
+    nextPageToken: {
+      type: 'string',
+      description: 'Token for accessing the next page of results',
+      optional: true,
+    },
+  },
+}
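Both the trending and video_details tools return `contentDetails.duration` in ISO 8601 form (e.g. "PT4M13S"). A minimal sketch of converting that to seconds, should a consumer need to sort or filter by length (the helper name is illustrative, not part of this diff, and it only handles the common `PT#H#M#S` shape, not multi-day `P1DT…` durations):

```typescript
// Convert an ISO 8601 duration like "PT4M13S" into total seconds.
// Handles only the PT-prefixed hours/minutes/seconds form that the
// YouTube API typically returns; anything else yields 0.
function iso8601DurationToSeconds(duration: string): number {
  const match = /^PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?$/.exec(duration)
  if (!match) return 0
  const [, h, m, s] = match
  return Number(h ?? 0) * 3600 + Number(m ?? 0) * 60 + Number(s ?? 0)
}
```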

File: @/tools/youtube/types.ts

@@ -16,6 +16,7 @@ export interface YouTubeSearchParams {
   regionCode?: string
   relevanceLanguage?: string
   safeSearch?: 'moderate' | 'none' | 'strict'
+  eventType?: 'completed' | 'live' | 'upcoming'
 }

 export interface YouTubeSearchResponse extends ToolResponse {
@@ -25,9 +26,13 @@ export interface YouTubeSearchResponse extends ToolResponse {
       title: string
       description: string
       thumbnail: string
+      channelId: string
+      channelTitle: string
+      publishedAt: string
+      liveBroadcastContent: string
     }>
     totalResults: number
-    nextPageToken?: string
+    nextPageToken?: string | null
   }
 }
@@ -48,8 +53,24 @@ export interface YouTubeVideoDetailsResponse extends ToolResponse {
     viewCount: number
     likeCount: number
     commentCount: number
+    favoriteCount: number
     thumbnail: string
-    tags?: string[]
+    tags: string[]
+    categoryId: string | null
+    definition: string | null
+    caption: string | null
+    licensedContent: boolean | null
+    privacyStatus: string | null
+    liveBroadcastContent: string | null
+    defaultLanguage: string | null
+    defaultAudioLanguage: string | null
+    // Live streaming details
+    isLiveContent: boolean
+    scheduledStartTime: string | null
+    actualStartTime: string | null
+    actualEndTime: string | null
+    concurrentViewers: number | null
+    activeLiveChatId: string | null
   }
 }
@@ -69,7 +90,11 @@ export interface YouTubeChannelInfoResponse extends ToolResponse {
     viewCount: number
     publishedAt: string
     thumbnail: string
-    customUrl?: string
+    customUrl: string | null
+    country: string | null
+    uploadsPlaylistId: string | null
+    bannerImageUrl: string | null
+    hiddenSubscriberCount: boolean
   }
 }
@@ -90,9 +115,11 @@ export interface YouTubePlaylistItemsResponse extends ToolResponse {
       publishedAt: string
       channelTitle: string
       position: number
+      videoOwnerChannelId: string | null
+      videoOwnerChannelTitle: string | null
     }>
     totalResults: number
-    nextPageToken?: string
+    nextPageToken?: string | null
   }
 }
@@ -110,15 +137,16 @@ export interface YouTubeCommentsResponse extends ToolResponse {
       commentId: string
       authorDisplayName: string
       authorChannelUrl: string
+      authorProfileImageUrl: string
       textDisplay: string
       textOriginal: string
       likeCount: number
       publishedAt: string
       updatedAt: string
-      replyCount?: number
+      replyCount: number
     }>
     totalResults: number
-    nextPageToken?: string
+    nextPageToken?: string | null
   }
 }
@@ -138,9 +166,10 @@ export interface YouTubeChannelVideosResponse extends ToolResponse {
       description: string
       thumbnail: string
       publishedAt: string
+      channelTitle: string
     }>
     totalResults: number
-    nextPageToken?: string
+    nextPageToken?: string | null
   }
 }
@@ -160,9 +189,55 @@ export interface YouTubeChannelPlaylistsResponse extends ToolResponse {
       thumbnail: string
       itemCount: number
       publishedAt: string
+      channelTitle: string
     }>
     totalResults: number
-    nextPageToken?: string
+    nextPageToken?: string | null
   }
 }
+
+export interface YouTubeTrendingParams {
+  apiKey: string
+  regionCode?: string
+  videoCategoryId?: string
+  maxResults?: number
+  pageToken?: string
+}
+
+export interface YouTubeTrendingResponse extends ToolResponse {
+  output: {
+    items: Array<{
+      videoId: string
+      title: string
+      description: string
+      thumbnail: string
+      channelId: string
+      channelTitle: string
+      publishedAt: string
+      viewCount: number
+      likeCount: number
+      commentCount: number
+      duration: string
+    }>
+    totalResults: number
+    nextPageToken?: string | null
+  }
+}
+
+export interface YouTubeVideoCategoriesParams {
+  apiKey: string
+  regionCode?: string
+  hl?: string
+}
+
+export interface YouTubeVideoCategoriesResponse extends ToolResponse {
+  output: {
+    items: Array<{
+      categoryId: string
+      title: string
+      assignable: boolean
+    }>
+    totalResults: number
+  }
+}
@@ -174,3 +249,5 @@ export type YouTubeResponse =
   | YouTubeCommentsResponse
   | YouTubeChannelVideosResponse
   | YouTubeChannelPlaylistsResponse
+  | YouTubeTrendingResponse
+  | YouTubeVideoCategoriesResponse

File: @/tools/youtube/video_categories.ts

@@ -0,0 +1,108 @@
+import type { ToolConfig } from '@/tools/types'
+import type {
+  YouTubeVideoCategoriesParams,
+  YouTubeVideoCategoriesResponse,
+} from '@/tools/youtube/types'
+
+export const youtubeVideoCategoriesTool: ToolConfig<
+  YouTubeVideoCategoriesParams,
+  YouTubeVideoCategoriesResponse
+> = {
+  id: 'youtube_video_categories',
+  name: 'YouTube Video Categories',
+  description:
+    'Get a list of video categories available on YouTube. Use this to discover valid category IDs for filtering search and trending results.',
+  version: '1.0.0',
+  params: {
+    regionCode: {
+      type: 'string',
+      required: false,
+      visibility: 'user-or-llm',
+      description:
+        'ISO 3166-1 alpha-2 country code to get categories for (e.g., "US", "GB", "JP"). Defaults to US.',
+    },
+    hl: {
+      type: 'string',
+      required: false,
+      visibility: 'user-only',
+      description: 'Language for category titles (e.g., "en", "es", "fr"). Defaults to English.',
+    },
+    apiKey: {
+      type: 'string',
+      required: true,
+      visibility: 'user-only',
+      description: 'YouTube API Key',
+    },
+  },
+  request: {
+    url: (params: YouTubeVideoCategoriesParams) => {
+      let url = `https://www.googleapis.com/youtube/v3/videoCategories?part=snippet&key=${params.apiKey}`
+      url += `&regionCode=${params.regionCode || 'US'}`
+      if (params.hl) {
+        url += `&hl=${params.hl}`
+      }
+      return url
+    },
+    method: 'GET',
+    headers: () => ({
+      'Content-Type': 'application/json',
+    }),
+  },
+  transformResponse: async (response: Response): Promise<YouTubeVideoCategoriesResponse> => {
+    const data = await response.json()
+
+    if (data.error) {
+      return {
+        success: false,
+        output: {
+          items: [],
+          totalResults: 0,
+        },
+        error: data.error.message || 'Failed to fetch video categories',
+      }
+    }
+
+    const items = (data.items || [])
+      .filter((item: any) => item.snippet?.assignable !== false)
+      .map((item: any) => ({
+        categoryId: item.id ?? '',
+        title: item.snippet?.title ?? '',
+        assignable: item.snippet?.assignable ?? false,
+      }))
+
+    return {
+      success: true,
+      output: {
+        items,
+        totalResults: items.length,
+      },
+    }
+  },
+  outputs: {
+    items: {
+      type: 'array',
+      description: 'Array of video categories available in the specified region',
+      items: {
+        type: 'object',
+        properties: {
+          categoryId: {
+            type: 'string',
+            description: 'Category ID to use in search/trending filters (e.g., "10" for Music)',
+          },
+          title: { type: 'string', description: 'Human-readable category name' },
+          assignable: {
+            type: 'boolean',
+            description: 'Whether videos can be tagged with this category',
+          },
+        },
+      },
+    },
+    totalResults: {
+      type: 'number',
+      description: 'Total number of categories available',
+    },
+  },
+}
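The filter-then-map step in the transform above drops categories whose `snippet.assignable` is explicitly false but keeps ones where the flag is missing. A standalone sketch of that logic (the `toAssignableCategories` helper and the sample payload are illustrative, not real API output):

```typescript
// Illustrative reimplementation of the assignable-category filtering
// used in the video_categories transform, against a hand-made payload.
interface CategoryItem {
  id?: string
  snippet?: { title?: string; assignable?: boolean }
}

function toAssignableCategories(items: CategoryItem[]) {
  return items
    // Keep items unless assignable is explicitly false; a missing flag passes.
    .filter((item) => item.snippet?.assignable !== false)
    .map((item) => ({
      categoryId: item.id ?? '',
      title: item.snippet?.title ?? '',
      // A missing flag that survived the filter is reported as false here,
      // matching the `?? false` default in the diff.
      assignable: item.snippet?.assignable ?? false,
    }))
}
```

Note the asymmetry: an item with no `assignable` flag passes the filter yet is reported as `assignable: false`, which mirrors the diff's behavior exactly.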

File: @/tools/youtube/video_details.ts

@@ -7,8 +7,9 @@ export const youtubeVideoDetailsTool: ToolConfig<
 > = {
   id: 'youtube_video_details',
   name: 'YouTube Video Details',
-  description: 'Get detailed information about a specific YouTube video.',
-  version: '1.0.0',
+  description:
+    'Get detailed information about a specific YouTube video including statistics, content details, live streaming info, and metadata.',
+  version: '1.2.0',
   params: {
     videoId: {
       type: 'string',
@@ -26,7 +27,7 @@ export const youtubeVideoDetailsTool: ToolConfig<
   request: {
     url: (params: YouTubeVideoDetailsParams) => {
-      return `https://www.googleapis.com/youtube/v3/videos?part=snippet,statistics,contentDetails&id=${params.videoId}&key=${params.apiKey}`
+      return `https://www.googleapis.com/youtube/v3/videos?part=snippet,statistics,contentDetails,status,liveStreamingDetails&id=${encodeURIComponent(params.videoId)}&key=${params.apiKey}`
     },
     method: 'GET',
     headers: () => ({
@@ -51,32 +52,68 @@ export const youtubeVideoDetailsTool: ToolConfig<
           viewCount: 0,
           likeCount: 0,
           commentCount: 0,
+          favoriteCount: 0,
           thumbnail: '',
+          tags: [],
+          categoryId: null,
+          definition: null,
+          caption: null,
+          licensedContent: null,
+          privacyStatus: null,
+          liveBroadcastContent: null,
+          defaultLanguage: null,
+          defaultAudioLanguage: null,
+          isLiveContent: false,
+          scheduledStartTime: null,
+          actualStartTime: null,
+          actualEndTime: null,
+          concurrentViewers: null,
+          activeLiveChatId: null,
         },
         error: 'Video not found',
       }
     }

     const item = data.items[0]
+    const liveDetails = item.liveStreamingDetails

     return {
       success: true,
       output: {
-        videoId: item.id,
-        title: item.snippet?.title || '',
-        description: item.snippet?.description || '',
-        channelId: item.snippet?.channelId || '',
-        channelTitle: item.snippet?.channelTitle || '',
-        publishedAt: item.snippet?.publishedAt || '',
-        duration: item.contentDetails?.duration || '',
+        videoId: item.id ?? '',
+        title: item.snippet?.title ?? '',
+        description: item.snippet?.description ?? '',
+        channelId: item.snippet?.channelId ?? '',
+        channelTitle: item.snippet?.channelTitle ?? '',
+        publishedAt: item.snippet?.publishedAt ?? '',
+        duration: item.contentDetails?.duration ?? '',
         viewCount: Number(item.statistics?.viewCount || 0),
         likeCount: Number(item.statistics?.likeCount || 0),
         commentCount: Number(item.statistics?.commentCount || 0),
+        favoriteCount: Number(item.statistics?.favoriteCount || 0),
        thumbnail:
          item.snippet?.thumbnails?.high?.url ||
          item.snippet?.thumbnails?.medium?.url ||
          item.snippet?.thumbnails?.default?.url ||
          '',
-        tags: item.snippet?.tags || [],
+        tags: item.snippet?.tags ?? [],
+        categoryId: item.snippet?.categoryId ?? null,
+        definition: item.contentDetails?.definition ?? null,
+        caption: item.contentDetails?.caption ?? null,
+        licensedContent: item.contentDetails?.licensedContent ?? null,
+        privacyStatus: item.status?.privacyStatus ?? null,
+        liveBroadcastContent: item.snippet?.liveBroadcastContent ?? null,
+        defaultLanguage: item.snippet?.defaultLanguage ?? null,
+        defaultAudioLanguage: item.snippet?.defaultAudioLanguage ?? null,
+        // Live streaming details
+        isLiveContent: liveDetails !== undefined,
+        scheduledStartTime: liveDetails?.scheduledStartTime ?? null,
+        actualStartTime: liveDetails?.actualStartTime ?? null,
+        actualEndTime: liveDetails?.actualEndTime ?? null,
+        concurrentViewers: liveDetails?.concurrentViewers
+          ? Number(liveDetails.concurrentViewers)
+          : null,
+        activeLiveChatId: liveDetails?.activeLiveChatId ?? null,
       },
     }
   },
@@ -108,7 +145,7 @@ export const youtubeVideoDetailsTool: ToolConfig<
     },
     duration: {
       type: 'string',
-      description: 'Video duration in ISO 8601 format',
+      description: 'Video duration in ISO 8601 format (e.g., "PT4M13S" for 4 min 13 sec)',
     },
     viewCount: {
       type: 'number',
@@ -122,6 +159,10 @@ export const youtubeVideoDetailsTool: ToolConfig<
     commentCount: {
       type: 'number',
       description: 'Number of comments',
     },
+    favoriteCount: {
+      type: 'number',
+      description: 'Number of times added to favorites',
+    },
     thumbnail: {
       type: 'string',
       description: 'Video thumbnail URL',
@@ -132,6 +173,74 @@ export const youtubeVideoDetailsTool: ToolConfig<
       items: {
         type: 'string',
       },
+    },
+    categoryId: {
+      type: 'string',
+      description: 'YouTube video category ID',
+      optional: true,
+    },
+    definition: {
+      type: 'string',
+      description: 'Video definition: "hd" or "sd"',
+      optional: true,
+    },
+    caption: {
+      type: 'string',
+      description: 'Whether captions are available: "true" or "false"',
+      optional: true,
+    },
+    licensedContent: {
+      type: 'boolean',
+      description: 'Whether the video is licensed content',
+      optional: true,
+    },
+    privacyStatus: {
+      type: 'string',
+      description: 'Video privacy status: "public", "private", or "unlisted"',
+      optional: true,
+    },
+    liveBroadcastContent: {
+      type: 'string',
+      description: 'Live broadcast status: "live", "upcoming", or "none"',
+      optional: true,
+    },
+    defaultLanguage: {
+      type: 'string',
+      description: 'Default language of the video metadata',
+      optional: true,
+    },
+    defaultAudioLanguage: {
+      type: 'string',
+      description: 'Default audio language of the video',
+      optional: true,
+    },
+    isLiveContent: {
+      type: 'boolean',
+      description: 'Whether this video is or was a live stream',
+    },
+    scheduledStartTime: {
+      type: 'string',
+      description: 'Scheduled start time for upcoming live streams (ISO 8601)',
+      optional: true,
+    },
+    actualStartTime: {
+      type: 'string',
+      description: 'When the live stream actually started (ISO 8601)',
+      optional: true,
+    },
+    actualEndTime: {
+      type: 'string',
+      description: 'When the live stream ended (ISO 8601)',
+      optional: true,
+    },
+    concurrentViewers: {
+      type: 'number',
+      description: 'Current number of viewers (only for active live streams)',
+      optional: true,
+    },
+    activeLiveChatId: {
+      type: 'string',
+      description: 'Live chat ID for the stream (only for active live streams)',
       optional: true,
     },
   },
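The YouTube API returns statistics fields (viewCount, likeCount, concurrentViewers, and the favoriteCount added here) as strings, which is why the transforms above wrap them in `Number(x || 0)`. A sketch of that coercion in isolation (`toCount` is an illustrative name, not a function in the diff):

```typescript
// Coerce a statistics value as the diff does: missing or empty strings
// become 0, numeric strings become numbers. Note that a non-numeric
// string would still produce NaN, same as the pattern in the diff.
function toCount(raw: string | undefined): number {
  return Number(raw || 0)
}
```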

Some files were not shown because too many files have changed in this diff.