Compare commits


18 Commits

Author SHA1 Message Date
Waleed
7d0fdefb22 v0.6.18: file operations block, profound integration, edge connection improvements, copy logs, knowledgebase robustness 2026-03-30 21:35:41 -07:00
Vikhyath Mondreti
90f592797a fix(file): use file-upload subblock (#3862)
* fix(file): use file-upload subblock

* fix preview + logs url for notifs

* fix color for profound

* remove canonical param from desc
2026-03-30 21:29:01 -07:00
Waleed
d091441e39 feat(logs): add copy link and deep-link support for log entries (#3863)
* feat(logs): add copy link and deep link support for log entries

* fix(logs): move Link icon to emcn and handle clipboard rejections

* feat(notifications): use executionId deep-link for View Log URLs

Switch buildLogUrl from ?search= to ?executionId= so email and Slack
'View Log' buttons open the logs page with the specific execution
auto-selected and the details panel expanded.
2026-03-30 21:17:58 -07:00
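The buildLogUrl switch described in the commit above can be sketched as follows (the function shape and `/logs` base path are assumptions; only the query-parameter change is taken from the commit message):

```typescript
// Sketch of the deep-link change: 'View Log' URLs move from a free-text
// search filter to pinning the exact execution via executionId.
function buildLogUrl(baseUrl: string, executionId: string): string {
  const url = new URL('/logs', baseUrl)
  // Before the change: url.searchParams.set('search', executionId)
  url.searchParams.set('executionId', executionId)
  return url.toString()
}
```

With this shape, `buildLogUrl('https://example.com', 'abc123')` yields `https://example.com/logs?executionId=abc123`, which the logs page can resolve to the auto-selected execution with the details panel expanded.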
Waleed
7d4dd26760 fix(knowledge): fix document processing stuck in processing state (#3857)
* fix(knowledge): fix document processing stuck in processing state

* fix(knowledge): use Promise.allSettled for document dispatch and fix Copilot OAuth context

- Change Promise.all to Promise.allSettled in processDocumentsWithQueue so
  one failed dispatch doesn't abort the entire batch
- Add writeOAuthReturnContext before showing LazyOAuthRequiredModal from
  Copilot tools so useOAuthReturnForWorkflow can handle the return
- Add consumeOAuthReturnContext on modal close to clean up stale context

* fix(knowledge): fix type error in useCredentialRefreshTriggers call

Pass empty string instead of undefined for connectorProviderId fallback
to match the hook's string parameter type.

* upgrade turbo

* fix(knowledge): fix type error in connectors-section useCredentialRefreshTriggers call

Same string narrowing fix as add-connector-modal — pass empty string
fallback for providerId.
2026-03-30 20:35:08 -07:00
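The Promise.all to Promise.allSettled change in this commit can be sketched as below; `dispatchDocument` and `processBatch` are stand-ins for the real functions, with one document failing to illustrate the behavior:

```typescript
type Doc = { id: string }

// Stand-in for the real per-document dispatch; one doc fails to illustrate.
async function dispatchDocument(doc: Doc): Promise<string> {
  if (doc.id === 'bad') throw new Error(`dispatch failed: ${doc.id}`)
  return doc.id
}

// With Promise.all, the single rejection above would reject the whole batch
// and abort the remaining dispatches. Promise.allSettled lets every other
// document finish dispatching independently.
async function processBatch(docs: Doc[]) {
  const results = await Promise.allSettled(docs.map(dispatchDocument))
  return {
    dispatched: results.filter((r) => r.status === 'fulfilled').length,
    failed: results.filter((r) => r.status === 'rejected').length,
  }
}
```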
Vikhyath Mondreti
0abeac77e1 improvement(platform): standardize perms, audit logging, lifecycle across admin, copilot, ui actions (#3858)
* improvement(platform): standardize perms, audit logging, lifecycle mgmt across admin, copilot, ui actions

* address comments

* improve error codes

* address bugbot comments

* fix test
2026-03-30 20:25:38 -07:00
Waleed
e9c94fa462 feat(logs): add copy link and deep link support for log entries (#3855)
* feat(logs): add copy link and deep link support for log entries

* fix(logs): fetch next page when deep linked log is beyond initial page

* fix(logs): move Link icon to emcn and handle clipboard rejections

* fix(logs): track isFetching reactively and drop empty-list early-return

- Remove the empty-list early-return guard that prevented clearing the
  pending ref when filters return no results
- Use isFetching directly in the condition and add it to
  the effect deps so the effect re-triggers after a background refetch

* fix(logs): guard deep-link ref clear until query has succeeded

Only clear pendingExecutionIdRef when the query status is 'success',
preventing premature clearing before the initial fetch completes.
On mount, the query is disabled (isInitialized.current starts false),
so hasNextPage is false but no data has loaded yet — the ref was being
cleared in the same effect pass that set it.

* fix(logs): guard fetchNextPage call until query has succeeded

Add logsQuery.status === 'success' to the fetchNextPage branch so it
mirrors the clear branch. On mount the query is disabled (isFetching is
false, status is pending), causing the effect to call fetchNextPage()
before the query is initialized — now both branches require success.
2026-03-30 18:42:27 -07:00
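Framework details aside, the guard logic the two fixes above converge on can be sketched as a pure decision function (state names are stand-ins for the hook's actual values):

```typescript
type QueryStatus = 'pending' | 'success' | 'error'

interface DeepLinkState {
  pendingExecutionId: string | null
  foundInLoadedPages: boolean
  status: QueryStatus
  isFetching: boolean
  hasNextPage: boolean
}

// Both branches require status === 'success': on mount the query is still
// disabled (status 'pending'), so neither fetchNextPage nor the ref clear
// can fire before any data has loaded.
function deepLinkAction(s: DeepLinkState): 'select' | 'fetchNext' | 'clear' | 'wait' {
  if (!s.pendingExecutionId) return 'wait'
  if (s.foundInLoadedPages) return 'select'
  if (s.status !== 'success' || s.isFetching) return 'wait'
  return s.hasNextPage ? 'fetchNext' : 'clear'
}
```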
Waleed
72eea64bf6 improvement(tour): align product tour tooltip styling with emcn and fix spotlight overflow (#3854) 2026-03-30 16:59:53 -07:00
Waleed
27460f847c fix(atlassian): harden cloud ID resolution for Confluence and Jira (#3853) 2026-03-30 16:54:45 -07:00
Vikhyath Mondreti
c7643198dc fix(mothership): hang condition (#3852) 2026-03-30 16:47:54 -07:00
Waleed
e5aef6184a feat(profound): add Profound AI visibility and analytics integration (#3849)
* feat(profound): add Profound AI visibility and analytics integration

* fix(profound): fix import ordering and JSON formatting for CI lint

* fix(profound): gate metrics mapping on current operation to prevent stale overrides

* fix(profound): guard JSON.parse on filters, fix offset=0 falsy check, remove duplicate prompt_answers in FILTER_OPS

* lint

* fix(docs): fix import ordering and trailing newline for docs lint

* fix(scripts): sort generated imports to match Biome's organizeImports order

* fix(profound): use != null checks for limit param across all tools

* fix(profound): flatten block output type to 'json' to pass block validation test

* fix(profound): remove invalid 'required' field from block inputs (not part of ParamConfig)

* fix(profound): rename tool files from kebab-case to snake_case for docs generator compatibility

* lint

* fix(docs): let biome auto-fix import order, revert custom sort in generator

* fix(landing): fix import order in sim icon-mapping via biome

* fix(scripts): match Biome's exact import sort order in docs generator

* fix(generate-docs): produce Biome-compatible JSON output

The generator wrote multi-line arrays for short string arrays (like tags)
and omitted trailing newlines, causing Biome format check failures in CI.
Post-process integrations.json to collapse short arrays onto single lines
and add trailing newlines to both integrations.json and meta.json.
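That post-processing step can be sketched like this (the regex and line-length threshold are illustrative, not the generator's actual values):

```typescript
// Collapse multi-line arrays of short strings onto one line and ensure a
// trailing newline, so generated JSON matches Biome's formatter output.
function formatForBiome(json: string): string {
  const collapsed = json.replace(
    /\[\n\s*((?:"[^"\n]*",?\n\s*)+)\]/g,
    (match, body: string) => {
      const items = body.match(/"[^"\n]*"/g) ?? []
      const oneLine = `[${items.join(', ')}]`
      // Only collapse when the result stays short enough for one line.
      return oneLine.length <= 80 ? oneLine : match
    }
  )
  return collapsed.endsWith('\n') ? collapsed : `${collapsed}\n`
}
```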

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-30 16:30:06 -07:00
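Several of the robustness fixes in this PR follow the same pattern of defensive parameter handling; a minimal sketch (parameter names come from the commit messages, everything else is assumed):

```typescript
interface Filter { field: string; operator: string; value: string }

// Guarded JSON.parse: a malformed filters string becomes null instead of
// throwing mid-request.
function parseFilters(raw: string | undefined): Filter[] | null {
  if (!raw) return null
  try {
    const parsed = JSON.parse(raw)
    return Array.isArray(parsed) ? parsed : null
  } catch {
    return null
  }
}

// `!= null` rather than a truthiness check, so a limit or offset of 0 is
// kept: `if (offset)` would silently drop offset = 0 (the first page).
function buildQuery(limit?: number | null, offset?: number | null): string {
  const params = new URLSearchParams()
  if (limit != null) params.set('limit', String(limit))
  if (offset != null) params.set('offset', String(offset))
  return params.toString()
}
```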
Waleed
4ae5b1b620 improvement(workflow): use DOM hit-testing for edge drop-on-block detection (#3851) 2026-03-30 16:20:30 -07:00
Waleed
5c334874eb fix(auth): use standard 'Unauthorized' error in hybrid auth responses (#3850) 2026-03-30 16:11:04 -07:00
Theodore Li
d3d58a9615 Feat/improved logging (#3833)
* feat(logs): add additional metadata for workflow execution logs

* Revert "Feat(logs) upgrade mothership chat messages to error (#3772)"

This reverts commit 9d1b9763c5.

* Fix lint, address greptile comments

* improvement(sidebar): expand sidebar by hovering and clicking the edge (#3830)

* improvement(sidebar): expand sidebar by hovering and clicking the edge

* improvement(sidebar): add keyboard shortcuts for new workflow/task, center search modal, fix edge ARIA

* improvement(sidebar): use Tooltip.Shortcut for inline shortcut display

* fix(sidebar): change new workflow shortcut from Mod+Shift+W to Mod+Shift+P to avoid browser close-window conflict

* fix(hotkeys): fall back to event.code for international keyboard layout compatibility

* fix(sidebar): guard add-workflow shortcut with canEdit and isCreatingWorkflow checks

* feat(ui): handle image paste (#3826)

* feat(ui): handle image paste

* Fix lint

* Fix type error

---------

Co-authored-by: Theodore Li <theo@sim.ai>

* feat(files): interactive markdown checkbox toggling in preview (#3829)

* feat(files): interactive markdown checkbox toggling in preview

* fix(files): handle ordered-list checkboxes and fix index drift

* lint

* fix(files): remove counter offset that prevented checkbox toggling

* fix(files): apply task-list styling to ordered lists too

* fix(files): render single pass when interactive to avoid index drift

* fix(files): move useMemo above conditional return to fix Rules of Hooks

* fix(files): pass content directly to preview when not streaming to avoid stale frame

* improvement(home): position @ mention popup at caret and fix icon consistency (#3831)

* improvement(home): position @ mention popup at caret and fix icon consistency

* fix(home): pin mirror div to document origin and guard button anchor

* chore(auth): restore hybrid.ts to staging

* improvement(ui): sidebar (#3832)

* Fix logger tests

* Add metadata to mothership logs

---------

Co-authored-by: Theodore Li <theo@sim.ai>
Co-authored-by: Waleed <walif6@gmail.com>
Co-authored-by: Theodore Li <theo@sim.ai>
2026-03-30 19:02:17 -04:00
Waleed
1d59eca90a fix(analytics): use getBaseDomain for Profound host field (#3848)
request.url resolves to internal ALB hostname on ECS, not the public domain
2026-03-30 15:36:41 -07:00
Theodore Li
e1359b09d6 feat(block) add block write and append operations (#3665)
* Add file write and delete operations

* Add file block write operation

* Fix lint

* Allow loop-in-loop workflow edits

* Fix type error

* Remove file id input, output link correctly

* Add append tool

* fix lint

* Address feedback

* Handle writing to same file name gracefully

* Removed  mime type from append block

* Add lock for file append operation

---------

Co-authored-by: Theodore Li <theo@sim.ai>
2026-03-30 18:08:51 -04:00
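The "lock for file append operation" in the last commit is commonly implemented as a per-file promise chain; a sketch under that assumption (the real implementation may use storage-level locking instead):

```typescript
// Serialize appends per file name so concurrent file_append calls cannot
// interleave: each append waits for the previous one on the same file.
const appendLocks = new Map<string, Promise<void>>()

async function withAppendLock<T>(fileName: string, fn: () => Promise<T>): Promise<T> {
  const previous = appendLocks.get(fileName) ?? Promise.resolve()
  let release!: () => void
  const current = new Promise<void>((resolve) => { release = resolve })
  // Chain this append after the previous one for the same file.
  appendLocks.set(fileName, previous.then(() => current))
  await previous
  try {
    return await fn()
  } finally {
    release()
  }
}
```

A production version would also evict settled entries from the map to bound its size; that cleanup is omitted here for brevity.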
Waleed
35b3646330 fix(sidebar): cmd+click opens in new tab, shift+click for range select (#3846)
* fix(sidebar): cmd+click opens in new tab, shift+click for range select

* comment cleanup

* fix(sidebar): drop stale metaKey param from workflow and task selection hooks
2026-03-30 11:02:33 -07:00
Waleed
73e00f53e1 v0.6.17: trigger.dev CI, workers FF 2026-03-30 09:33:30 -07:00
Waleed
5c47ea58f8 chore(trigger): update @trigger.dev/sdk and @trigger.dev/build to 4.4.3 (#3843)
* chore(trigger): update @trigger.dev/sdk and @trigger.dev/build to 4.4.3

* fix(webhooks): execute non-polling webhooks inline when BullMQ is disabled
2026-03-30 09:27:15 -07:00
146 changed files with 7886 additions and 3765 deletions

View File

@@ -1285,6 +1285,17 @@ export function StartIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function ProfoundIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg width='1em' height='1em' viewBox='0 0 55 55' xmlns='http://www.w3.org/2000/svg' {...props}>
<path
fill='currentColor'
d='M0 36.685V21.349a7.017 7.017 0 0 1 2.906-5.69l19.742-14.25A7.443 7.443 0 0 1 27.004 0h.062c1.623 0 3.193.508 4.501 1.452l19.684 14.207a7.016 7.016 0 0 1 2.906 5.69v12.302a7.013 7.013 0 0 1-2.907 5.689L31.527 53.562A7.605 7.605 0 0 1 27.078 55a7.641 7.641 0 0 1-4.465-1.44c-2.581-1.859-6.732-4.855-6.732-4.855V29.777c0-.249.28-.393.482-.248l10.538 7.605c.106.077.249.077.355 0l13.005-9.386a.306.306 0 0 0 0-.496l-13.005-9.386a.303.303 0 0 0-.355 0L.482 36.933A.304.304 0 0 1 0 36.685Z'
/>
</svg>
)
}
export function PineconeIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg

View File

@@ -126,6 +126,7 @@ import {
PolymarketIcon,
PostgresIcon,
PosthogIcon,
ProfoundIcon,
PulseIcon,
QdrantIcon,
QuiverIcon,
@@ -302,6 +303,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
polymarket: PolymarketIcon,
postgresql: PostgresIcon,
posthog: PosthogIcon,
profound: ProfoundIcon,
pulse_v2: PulseIcon,
qdrant: QdrantIcon,
quiver: QuiverIcon,

View File

@@ -1,6 +1,6 @@
---
title: File
description: Read and parse multiple files
description: Read and write workspace files
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
@@ -27,7 +27,7 @@ The File Parser tool is particularly useful for scenarios where your agents need
## Usage Instructions
Upload files directly or import from external URLs to get UserFile objects for use in other blocks.
Read and parse files from uploads or URLs, write new workspace files, or append content to existing files.
@@ -52,4 +52,45 @@ Parse one or more uploaded files or files from URLs (text, PDF, CSV, images, etc
| `files` | file[] | Parsed files as UserFile objects |
| `combinedContent` | string | Combined content of all parsed files |
### `file_write`
Create a new workspace file. If a file with the same name already exists, a numeric suffix is added automatically.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `fileName` | string | Yes | File name \(e.g., "data.csv"\). If a file with this name exists, a numeric suffix is added automatically. |
| `content` | string | Yes | The text content to write to the file. |
| `contentType` | string | No | MIME type for new files \(e.g., "text/plain"\). Auto-detected from file extension if omitted. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | File ID |
| `name` | string | File name |
| `size` | number | File size in bytes |
| `url` | string | URL to access the file |
### `file_append`
Append content to an existing workspace file. The file must already exist. Content is added to the end of the file.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `fileName` | string | Yes | Name of an existing workspace file to append to. |
| `content` | string | Yes | The text content to append to the file. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | File ID |
| `name` | string | File name |
| `size` | number | File size in bytes |
| `url` | string | URL to access the file |

View File

@@ -121,6 +121,7 @@
"polymarket",
"postgresql",
"posthog",
"profound",
"pulse",
"qdrant",
"quiver",

View File

@@ -0,0 +1,626 @@
---
title: Profound
description: AI visibility and analytics with Profound
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="profound"
color="#000000"
/>
{/* MANUAL-CONTENT-START:intro */}
[Profound](https://tryprofound.com/) is an AI visibility and analytics platform that helps brands understand how they appear across AI-powered search engines, chatbots, and assistants. It tracks mentions, citations, sentiment, bot traffic, and referral patterns across platforms like ChatGPT, Perplexity, Google AI Overviews, and more.
With the Profound integration in Sim, you can:
- **Monitor AI Visibility**: Track share of voice, visibility scores, and mention counts across AI platforms for your brand and competitors.
- **Analyze Sentiment**: Measure how positively or negatively your brand is discussed in AI-generated responses.
- **Track Citations**: See which URLs are being cited by AI models and your citation share relative to competitors.
- **Monitor Bot Traffic**: Analyze AI crawler activity on your domain, including GPTBot, ClaudeBot, and other AI agents, with hourly granularity.
- **Track Referral Traffic**: Monitor human visits arriving from AI platforms to your website.
- **Explore Prompt Data**: Access raw prompt-answer pairs, query fanouts, and prompt volume trends across AI platforms.
- **Optimize Content**: Get AEO (Answer Engine Optimization) scores and actionable recommendations to improve how AI models reference your content.
- **Manage Categories & Assets**: List and explore your tracked categories, assets (brands), topics, tags, personas, and regions.
These tools let your agents automate AI visibility monitoring, competitive intelligence, and content optimization workflows. To use the Profound integration, you'll need a Profound account with API access.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Track how your brand appears across AI platforms. Monitor visibility scores, sentiment, citations, bot traffic, referrals, content optimization, and prompt volumes with Profound.
## Tools
### `profound_list_categories`
List all organization categories in Profound
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Profound API Key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `categories` | json | List of organization categories |
| ↳ `id` | string | Category ID |
| ↳ `name` | string | Category name |
### `profound_list_regions`
List all organization regions in Profound
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Profound API Key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `regions` | json | List of organization regions |
| ↳ `id` | string | Region ID \(UUID\) |
| ↳ `name` | string | Region name |
### `profound_list_models`
List all AI models/platforms tracked in Profound
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Profound API Key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `models` | json | List of AI models/platforms |
| ↳ `id` | string | Model ID \(UUID\) |
| ↳ `name` | string | Model/platform name |
### `profound_list_domains`
List all organization domains in Profound
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Profound API Key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `domains` | json | List of organization domains |
| ↳ `id` | string | Domain ID \(UUID\) |
| ↳ `name` | string | Domain name |
| ↳ `createdAt` | string | When the domain was added |
### `profound_list_assets`
List all organization assets (companies/brands) across all categories in Profound
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Profound API Key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `assets` | json | List of organization assets with category info |
| ↳ `id` | string | Asset ID |
| ↳ `name` | string | Asset/company name |
| ↳ `website` | string | Asset website URL |
| ↳ `alternateDomains` | json | Alternate domain names |
| ↳ `isOwned` | boolean | Whether this asset is owned by the organization |
| ↳ `createdAt` | string | When the asset was created |
| ↳ `logoUrl` | string | URL of the asset logo |
| ↳ `categoryId` | string | Category ID the asset belongs to |
| ↳ `categoryName` | string | Category name |
### `profound_list_personas`
List all organization personas across all categories in Profound
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Profound API Key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `personas` | json | List of organization personas with profile details |
| ↳ `id` | string | Persona ID |
| ↳ `name` | string | Persona name |
| ↳ `categoryId` | string | Category ID |
| ↳ `categoryName` | string | Category name |
| ↳ `persona` | json | Persona profile with behavior, employment, and demographics |
### `profound_category_topics`
List topics for a specific category in Profound
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Profound API Key |
| `categoryId` | string | Yes | Category ID \(UUID\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `topics` | json | List of topics in the category |
| ↳ `id` | string | Topic ID \(UUID\) |
| ↳ `name` | string | Topic name |
### `profound_category_tags`
List tags for a specific category in Profound
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Profound API Key |
| `categoryId` | string | Yes | Category ID \(UUID\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `tags` | json | List of tags in the category |
| ↳ `id` | string | Tag ID \(UUID\) |
| ↳ `name` | string | Tag name |
### `profound_category_prompts`
List prompts for a specific category in Profound
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Profound API Key |
| `categoryId` | string | Yes | Category ID \(UUID\) |
| `limit` | number | No | Maximum number of results \(default 10000, max 10000\) |
| `cursor` | string | No | Pagination cursor from previous response |
| `orderDir` | string | No | Sort direction: asc or desc \(default desc\) |
| `promptType` | string | No | Comma-separated prompt types to filter: visibility, sentiment |
| `topicId` | string | No | Comma-separated topic IDs \(UUIDs\) to filter by |
| `tagId` | string | No | Comma-separated tag IDs \(UUIDs\) to filter by |
| `regionId` | string | No | Comma-separated region IDs \(UUIDs\) to filter by |
| `platformId` | string | No | Comma-separated platform IDs \(UUIDs\) to filter by |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `totalRows` | number | Total number of prompts |
| `nextCursor` | string | Cursor for next page of results |
| `prompts` | json | List of prompts |
| ↳ `id` | string | Prompt ID |
| ↳ `prompt` | string | Prompt text |
| ↳ `promptType` | string | Prompt type \(visibility or sentiment\) |
| ↳ `topicId` | string | Topic ID |
| ↳ `topicName` | string | Topic name |
| ↳ `tags` | json | Associated tags |
| ↳ `regions` | json | Associated regions |
| ↳ `platforms` | json | Associated platforms |
| ↳ `createdAt` | string | When the prompt was created |
### `profound_category_assets`
List assets (companies/brands) for a specific category in Profound
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Profound API Key |
| `categoryId` | string | Yes | Category ID \(UUID\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `assets` | json | List of assets in the category |
| ↳ `id` | string | Asset ID |
| ↳ `name` | string | Asset/company name |
| ↳ `website` | string | Website URL |
| ↳ `alternateDomains` | json | Alternate domain names |
| ↳ `isOwned` | boolean | Whether the asset is owned by the organization |
| ↳ `createdAt` | string | When the asset was created |
| ↳ `logoUrl` | string | URL of the asset logo |
### `profound_category_personas`
List personas for a specific category in Profound
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Profound API Key |
| `categoryId` | string | Yes | Category ID \(UUID\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `personas` | json | List of personas in the category |
| ↳ `id` | string | Persona ID |
| ↳ `name` | string | Persona name |
| ↳ `persona` | json | Persona profile with behavior, employment, and demographics |
### `profound_visibility_report`
Query AI visibility report for a category in Profound
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Profound API Key |
| `categoryId` | string | Yes | Category ID \(UUID\) |
| `startDate` | string | Yes | Start date \(YYYY-MM-DD or ISO 8601\) |
| `endDate` | string | Yes | End date \(YYYY-MM-DD or ISO 8601\) |
| `metrics` | string | Yes | Comma-separated metrics: share_of_voice, mentions_count, visibility_score, executions, average_position |
| `dimensions` | string | No | Comma-separated dimensions: date, region, topic, model, asset_name, prompt, tag, persona |
| `dateInterval` | string | No | Date interval: hour, day, week, month, year |
| `filters` | string | No | JSON array of filter objects, e.g. \[\{"field":"asset_name","operator":"is","value":"Company"\}\] |
| `limit` | number | No | Maximum number of results \(default 10000, max 50000\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `totalRows` | number | Total number of rows in the report |
| `data` | json | Report data rows with metrics and dimension values |
| ↳ `metrics` | json | Array of metric values matching requested metrics order |
| ↳ `dimensions` | json | Array of dimension values matching requested dimensions order |
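The `filters` parameter in these report tools is a JSON string rather than a structured object; in calling code it would typically be built with `JSON.stringify` using the field names documented above (the helper below is illustrative, not part of the integration):

```typescript
interface ProfoundFilter { field: string; operator: string; value: string }

// Serialize filter objects into the single string parameter the report
// tools expect, e.g. for `profound_visibility_report`.
function buildFilters(filters: ProfoundFilter[]): string {
  return JSON.stringify(filters)
}

const visibilityFilters = buildFilters([
  { field: 'asset_name', operator: 'is', value: 'Company' },
])
```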
### `profound_sentiment_report`
Query sentiment report for a category in Profound
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Profound API Key |
| `categoryId` | string | Yes | Category ID \(UUID\) |
| `startDate` | string | Yes | Start date \(YYYY-MM-DD or ISO 8601\) |
| `endDate` | string | Yes | End date \(YYYY-MM-DD or ISO 8601\) |
| `metrics` | string | Yes | Comma-separated metrics: positive, negative, occurrences |
| `dimensions` | string | No | Comma-separated dimensions: theme, date, region, topic, model, asset_name, tag, prompt, sentiment_type, persona |
| `dateInterval` | string | No | Date interval: hour, day, week, month, year |
| `filters` | string | No | JSON array of filter objects, e.g. \[\{"field":"asset_name","operator":"is","value":"Company"\}\] |
| `limit` | number | No | Maximum number of results \(default 10000, max 50000\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `totalRows` | number | Total number of rows in the report |
| `data` | json | Report data rows with metrics and dimension values |
| ↳ `metrics` | json | Array of metric values matching requested metrics order |
| ↳ `dimensions` | json | Array of dimension values matching requested dimensions order |
### `profound_citations_report`
Query citations report for a category in Profound
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Profound API Key |
| `categoryId` | string | Yes | Category ID \(UUID\) |
| `startDate` | string | Yes | Start date \(YYYY-MM-DD or ISO 8601\) |
| `endDate` | string | Yes | End date \(YYYY-MM-DD or ISO 8601\) |
| `metrics` | string | Yes | Comma-separated metrics: count, citation_share |
| `dimensions` | string | No | Comma-separated dimensions: hostname, path, date, region, topic, model, tag, prompt, url, root_domain, persona, citation_category |
| `dateInterval` | string | No | Date interval: hour, day, week, month, year |
| `filters` | string | No | JSON array of filter objects, e.g. \[\{"field":"hostname","operator":"is","value":"example.com"\}\] |
| `limit` | number | No | Maximum number of results \(default 10000, max 50000\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `totalRows` | number | Total number of rows in the report |
| `data` | json | Report data rows with metrics and dimension values |
| ↳ `metrics` | json | Array of metric values matching requested metrics order |
| ↳ `dimensions` | json | Array of dimension values matching requested dimensions order |
### `profound_query_fanouts`
Query fanout report showing how AI models expand prompts into sub-queries in Profound
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Profound API Key |
| `categoryId` | string | Yes | Category ID \(UUID\) |
| `startDate` | string | Yes | Start date \(YYYY-MM-DD or ISO 8601\) |
| `endDate` | string | Yes | End date \(YYYY-MM-DD or ISO 8601\) |
| `metrics` | string | Yes | Comma-separated metrics: fanouts_per_execution, total_fanouts, share |
| `dimensions` | string | No | Comma-separated dimensions: prompt, query, model, region, date |
| `dateInterval` | string | No | Date interval: hour, day, week, month, year |
| `filters` | string | No | JSON array of filter objects |
| `limit` | number | No | Maximum number of results \(default 10000, max 50000\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `totalRows` | number | Total number of rows in the report |
| `data` | json | Report data rows with metrics and dimension values |
| ↳ `metrics` | json | Array of metric values matching requested metrics order |
| ↳ `dimensions` | json | Array of dimension values matching requested dimensions order |
### `profound_prompt_answers`
Get raw prompt answers data for a category in Profound
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Profound API Key |
| `categoryId` | string | Yes | Category ID \(UUID\) |
| `startDate` | string | Yes | Start date \(YYYY-MM-DD or ISO 8601\) |
| `endDate` | string | Yes | End date \(YYYY-MM-DD or ISO 8601\) |
| `filters` | string | No | JSON array of filter objects, e.g. \[\{"field":"prompt_type","operator":"is","value":"visibility"\}\] |
| `limit` | number | No | Maximum number of results \(default 10000, max 50000\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `totalRows` | number | Total number of answer rows |
| `data` | json | Raw prompt answer data |
| ↳ `prompt` | string | The prompt text |
| ↳ `promptType` | string | Prompt type \(visibility or sentiment\) |
| ↳ `response` | string | AI model response text |
| ↳ `mentions` | json | Companies/assets mentioned in the response |
| ↳ `citations` | json | URLs cited in the response |
| ↳ `topic` | string | Topic name |
| ↳ `region` | string | Region name |
| ↳ `model` | string | AI model/platform name |
| ↳ `asset` | string | Asset name |
| ↳ `createdAt` | string | Timestamp when the answer was collected |
### `profound_bots_report`
Query bot traffic report with hourly granularity for a domain in Profound
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Profound API Key |
| `domain` | string | Yes | Domain to query bot traffic for \(e.g. example.com\) |
| `startDate` | string | Yes | Start date \(YYYY-MM-DD or ISO 8601\) |
| `endDate` | string | No | End date \(YYYY-MM-DD or ISO 8601\). Defaults to now |
| `metrics` | string | Yes | Comma-separated metrics: count, citations, indexing, training, last_visit |
| `dimensions` | string | No | Comma-separated dimensions: date, hour, path, bot_name, bot_provider, bot_type |
| `dateInterval` | string | No | Date interval: hour, day, week, month, year |
| `filters` | string | No | JSON array of filter objects, e.g. \[\{"field":"bot_name","operator":"is","value":"GPTBot"\}\] |
| `limit` | number | No | Maximum number of results \(default 10000, max 50000\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `totalRows` | number | Total number of rows in the report |
| `data` | json | Report data rows with metrics and dimension values |
| ↳ `metrics` | json | Array of metric values matching requested metrics order |
| ↳ `dimensions` | json | Array of dimension values matching requested dimensions order |
### `profound_referrals_report`
Query human referral traffic report with hourly granularity for a domain in Profound
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Profound API Key |
| `domain` | string | Yes | Domain to query referral traffic for \(e.g. example.com\) |
| `startDate` | string | Yes | Start date \(YYYY-MM-DD or ISO 8601\) |
| `endDate` | string | No | End date \(YYYY-MM-DD or ISO 8601\). Defaults to now |
| `metrics` | string | Yes | Comma-separated metrics: visits, last_visit |
| `dimensions` | string | No | Comma-separated dimensions: date, hour, path, referral_source, referral_type |
| `dateInterval` | string | No | Date interval: hour, day, week, month, year |
| `filters` | string | No | JSON array of filter objects, e.g. \[\{"field":"referral_source","operator":"is","value":"openai"\}\] |
| `limit` | number | No | Maximum number of results \(default 10000, max 50000\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `totalRows` | number | Total number of rows in the report |
| `data` | json | Report data rows with metrics and dimension values |
| ↳ `metrics` | json | Array of metric values matching requested metrics order |
| ↳ `dimensions` | json | Array of dimension values matching requested dimensions order |
### `profound_raw_logs`
Get raw traffic logs with filters for a domain in Profound
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Profound API Key |
| `domain` | string | Yes | Domain to query logs for \(e.g. example.com\) |
| `startDate` | string | Yes | Start date \(YYYY-MM-DD or ISO 8601\) |
| `endDate` | string | No | End date \(YYYY-MM-DD or ISO 8601\). Defaults to now |
| `dimensions` | string | No | Comma-separated dimensions: timestamp, method, host, path, status_code, ip, user_agent, referer, bytes_sent, duration_ms, query_params |
| `filters` | string | No | JSON array of filter objects, e.g. \[\{"field":"path","operator":"contains","value":"/blog"\}\] |
| `limit` | number | No | Maximum number of results \(default 10000, max 50000\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `totalRows` | number | Total number of log entries |
| `data` | json | Log data rows with metrics and dimension values |
| ↳ `metrics` | json | Array of metric values \(count\) |
| ↳ `dimensions` | json | Array of dimension values matching requested dimensions order |
### `profound_bot_logs`
Get identified bot visit logs with filters for a domain in Profound
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Profound API Key |
| `domain` | string | Yes | Domain to query bot logs for \(e.g. example.com\) |
| `startDate` | string | Yes | Start date \(YYYY-MM-DD or ISO 8601\) |
| `endDate` | string | No | End date \(YYYY-MM-DD or ISO 8601\). Defaults to now |
| `dimensions` | string | No | Comma-separated dimensions: timestamp, method, host, path, status_code, ip, user_agent, referer, bytes_sent, duration_ms, query_params, bot_name, bot_provider, bot_types |
| `filters` | string | No | JSON array of filter objects, e.g. \[\{"field":"bot_name","operator":"is","value":"GPTBot"\}\] |
| `limit` | number | No | Maximum number of results \(default 10000, max 50000\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `totalRows` | number | Total number of bot log entries |
| `data` | json | Bot log data rows with metrics and dimension values |
| ↳ `metrics` | json | Array of metric values \(count\) |
| ↳ `dimensions` | json | Array of dimension values matching requested dimensions order |
### `profound_list_optimizations`
List content optimization entries for an asset in Profound
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Profound API Key |
| `assetId` | string | Yes | Asset ID \(UUID\) |
| `limit` | number | No | Maximum number of results \(default 10000, max 50000\) |
| `offset` | number | No | Offset for pagination \(default 0\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `totalRows` | number | Total number of optimization entries |
| `optimizations` | json | List of content optimization entries |
| ↳ `id` | string | Optimization ID \(UUID\) |
| ↳ `title` | string | Content title |
| ↳ `createdAt` | string | When the optimization was created |
| ↳ `extractedInput` | string | Extracted input text |
| ↳ `type` | string | Content type: file, text, or url |
| ↳ `status` | string | Optimization status |
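Since each call returns at most `limit` rows, larger result sets can be walked with `limit`/`offset`. A sketch of that loop — `fetchPage` is a local stub standing in for an actual `profound_list_optimizations` call, and the sample data is fabricated:

```typescript
// Paging sketch for profound_list_optimizations. `fetchPage` stubs the tool
// call with local data; a real integration would issue the request instead.
type Optimization = { id: string; title: string; status: string }
type Page = { totalRows: number; optimizations: Optimization[] }

const all: Optimization[] = Array.from({ length: 5 }, (_, i) => ({
  id: `id-${i}`,
  title: `Doc ${i}`,
  status: 'complete',
}))

function fetchPage(limit: number, offset: number): Page {
  return { totalRows: all.length, optimizations: all.slice(offset, offset + limit) }
}

function fetchAll(pageSize: number): Optimization[] {
  const out: Optimization[] = []
  let offset = 0
  for (;;) {
    const page = fetchPage(pageSize, offset)
    out.push(...page.optimizations)
    offset += pageSize
    // totalRows tells us when we have walked the whole set.
    if (offset >= page.totalRows) break
  }
  return out
}

console.log(fetchAll(2).length) // → 5
```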
### `profound_optimization_analysis`
Get detailed content optimization analysis for a specific content item in Profound
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Profound API Key |
| `assetId` | string | Yes | Asset ID \(UUID\) |
| `contentId` | string | Yes | Content/optimization ID \(UUID\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | json | The analyzed content |
| ↳ `format` | string | Content format: markdown or html |
| ↳ `value` | string | Content text |
| `aeoContentScore` | json | AEO content score with target zone |
| ↳ `value` | number | AEO score value |
| ↳ `targetZone` | json | Target zone range |
| ↳ `low` | number | Low end of target range |
| ↳ `high` | number | High end of target range |
| `analysis` | json | Analysis breakdown by category |
| ↳ `breakdown` | json | Array of scoring breakdowns |
| ↳ `title` | string | Category title |
| ↳ `weight` | number | Category weight |
| ↳ `score` | number | Category score |
| `recommendations` | json | Content optimization recommendations |
| ↳ `title` | string | Recommendation title |
| ↳ `status` | string | Status: done or pending |
| ↳ `impact` | json | Impact details with section and score |
| ↳ `suggestion` | json | Suggestion text and rationale |
| ↳ `text` | string | Suggestion text |
| ↳ `rationale` | string | Why this recommendation matters |
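The nested output above is plain JSON, so recommendations can be filtered directly. A sketch of surfacing the pending items — the `Recommendation` type is shaped after the output table, and the sample payload is fabricated for illustration:

```typescript
// Reading the `recommendations` portion of an optimization analysis.
// The type follows the documented output fields; the data is made up.
type Recommendation = {
  title: string
  status: 'done' | 'pending'
  suggestion: { text: string; rationale: string }
}

const recommendations: Recommendation[] = [
  {
    title: 'Add FAQ section',
    status: 'pending',
    suggestion: { text: 'Add an FAQ', rationale: 'Improves answer coverage' },
  },
  {
    title: 'Shorten intro',
    status: 'done',
    suggestion: { text: 'Trim to two sentences', rationale: 'Faster to the point' },
  },
]

// Keep only recommendations that still need action.
const pending = recommendations.filter((r) => r.status === 'pending')
console.log(pending.map((r) => r.title)) // one pending title remains
```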
### `profound_prompt_volume`
Query prompt volume data to understand search demand across AI platforms in Profound
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Profound API Key |
| `startDate` | string | Yes | Start date \(YYYY-MM-DD or ISO 8601\) |
| `endDate` | string | Yes | End date \(YYYY-MM-DD or ISO 8601\) |
| `metrics` | string | Yes | Comma-separated metrics: volume, change |
| `dimensions` | string | No | Comma-separated dimensions: keyword, date, platform, country_code, matching_type, frequency |
| `dateInterval` | string | No | Date interval: hour, day, week, month, year |
| `filters` | string | No | JSON array of filter objects, e.g. \[\{"field":"keyword","operator":"contains","value":"best"\}\] |
| `limit` | number | No | Maximum number of results \(default 10000, max 50000\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `totalRows` | number | Total number of rows in the report |
| `data` | json | Volume data rows with metrics and dimension values |
| ↳ `metrics` | json | Array of metric values matching requested metrics order |
| ↳ `dimensions` | json | Array of dimension values matching requested dimensions order |
### `profound_citation_prompts`
Get prompts that cite a specific domain across AI platforms in Profound
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Profound API Key |
| `inputDomain` | string | Yes | Domain to look up citations for \(e.g. ramp.com\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `data` | json | Citation prompt data for the queried domain |


@@ -126,6 +126,7 @@ import {
PolymarketIcon,
PostgresIcon,
PosthogIcon,
ProfoundIcon,
PulseIcon,
QdrantIcon,
QuiverIcon,
@@ -302,6 +303,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
polymarket: PolymarketIcon,
postgresql: PostgresIcon,
posthog: PosthogIcon,
profound: ProfoundIcon,
pulse_v2: PulseIcon,
qdrant: QdrantIcon,
quiver: QuiverIcon,


@@ -2993,13 +2993,26 @@
"type": "file_v3",
"slug": "file",
"name": "File",
"description": "Read and parse multiple files",
"longDescription": "Upload files directly or import from external URLs to get UserFile objects for use in other blocks.",
"description": "Read and write workspace files",
"longDescription": "Read and parse files from uploads or URLs, write new workspace files, or append content to existing files.",
"bgColor": "#40916C",
"iconName": "DocumentIcon",
"docsUrl": "https://docs.sim.ai/tools/file",
"operations": [],
"operationCount": 0,
"operations": [
{
"name": "Read",
"description": "Parse one or more uploaded files or files from URLs (text, PDF, CSV, images, etc.)"
},
{
"name": "Write",
"description": "Create a new workspace file. If a file with the same name already exists, a numeric suffix is added."
},
{
"name": "Append",
"description": "Append content to an existing workspace file. The file must already exist. Content is added to the end of the file."
}
],
"operationCount": 3,
"triggers": [],
"triggerCount": 0,
"authType": "none",
@@ -8611,6 +8624,121 @@
"integrationType": "analytics",
"tags": ["data-analytics", "monitoring"]
},
{
"type": "profound",
"slug": "profound",
"name": "Profound",
"description": "AI visibility and analytics with Profound",
"longDescription": "Track how your brand appears across AI platforms. Monitor visibility scores, sentiment, citations, bot traffic, referrals, content optimization, and prompt volumes with Profound.",
"bgColor": "#000000",
"iconName": "ProfoundIcon",
"docsUrl": "https://docs.sim.ai/tools/profound",
"operations": [
{
"name": "List Categories",
"description": "List all organization categories in Profound"
},
{
"name": "List Regions",
"description": "List all organization regions in Profound"
},
{
"name": "List Models",
"description": "List all AI models/platforms tracked in Profound"
},
{
"name": "List Domains",
"description": "List all organization domains in Profound"
},
{
"name": "List Assets",
"description": "List all organization assets (companies/brands) across all categories in Profound"
},
{
"name": "List Personas",
"description": "List all organization personas across all categories in Profound"
},
{
"name": "Category Topics",
"description": "List topics for a specific category in Profound"
},
{
"name": "Category Tags",
"description": "List tags for a specific category in Profound"
},
{
"name": "Category Prompts",
"description": "List prompts for a specific category in Profound"
},
{
"name": "Category Assets",
"description": "List assets (companies/brands) for a specific category in Profound"
},
{
"name": "Category Personas",
"description": "List personas for a specific category in Profound"
},
{
"name": "Visibility Report",
"description": "Query AI visibility report for a category in Profound"
},
{
"name": "Sentiment Report",
"description": "Query sentiment report for a category in Profound"
},
{
"name": "Citations Report",
"description": "Query citations report for a category in Profound"
},
{
"name": "Query Fanouts",
"description": "Query fanout report showing how AI models expand prompts into sub-queries in Profound"
},
{
"name": "Prompt Answers",
"description": "Get raw prompt answers data for a category in Profound"
},
{
"name": "Bots Report",
"description": "Query bot traffic report with hourly granularity for a domain in Profound"
},
{
"name": "Referrals Report",
"description": "Query human referral traffic report with hourly granularity for a domain in Profound"
},
{
"name": "Raw Logs",
"description": "Get raw traffic logs with filters for a domain in Profound"
},
{
"name": "Bot Logs",
"description": "Get identified bot visit logs with filters for a domain in Profound"
},
{
"name": "List Optimizations",
"description": "List content optimization entries for an asset in Profound"
},
{
"name": "Optimization Analysis",
"description": "Get detailed content optimization analysis for a specific content item in Profound"
},
{
"name": "Prompt Volume",
"description": "Query prompt volume data to understand search demand across AI platforms in Profound"
},
{
"name": "Citation Prompts",
"description": "Get prompts that cite a specific domain across AI platforms in Profound"
}
],
"operationCount": 24,
"triggers": [],
"triggerCount": 0,
"authType": "api-key",
"category": "tools",
"integrationType": "analytics",
"tags": ["seo", "data-analytics"]
},
{
"type": "pulse_v2",
"slug": "pulse",


@@ -1,5 +1,4 @@
import type { Metadata } from 'next'
import Link from 'next/link'
import { getNavBlogPosts } from '@/lib/blog/registry'
import { martianMono } from '@/app/_styles/fonts/martian-mono/martian-mono'
import { season } from '@/app/_styles/fonts/season/season'


@@ -15,12 +15,12 @@ const {
mockLimit,
mockUpdate,
mockSet,
mockDelete,
mockCreateSuccessResponse,
mockCreateErrorResponse,
mockEncryptSecret,
mockCheckChatAccess,
mockDeployWorkflow,
mockPerformChatUndeploy,
mockLogger,
} = vi.hoisted(() => {
const logger = {
@@ -40,12 +40,12 @@ const {
mockLimit: vi.fn(),
mockUpdate: vi.fn(),
mockSet: vi.fn(),
mockDelete: vi.fn(),
mockCreateSuccessResponse: vi.fn(),
mockCreateErrorResponse: vi.fn(),
mockEncryptSecret: vi.fn(),
mockCheckChatAccess: vi.fn(),
mockDeployWorkflow: vi.fn(),
mockPerformChatUndeploy: vi.fn(),
mockLogger: logger,
}
})
@@ -66,7 +66,6 @@ vi.mock('@sim/db', () => ({
db: {
select: mockSelect,
update: mockUpdate,
delete: mockDelete,
},
}))
vi.mock('@sim/db/schema', () => ({
@@ -88,6 +87,9 @@ vi.mock('@/app/api/chat/utils', () => ({
vi.mock('@/lib/workflows/persistence/utils', () => ({
deployWorkflow: mockDeployWorkflow,
}))
vi.mock('@/lib/workflows/orchestration', () => ({
performChatUndeploy: mockPerformChatUndeploy,
}))
vi.mock('drizzle-orm', () => ({
and: vi.fn((...conditions: unknown[]) => ({ type: 'and', conditions })),
eq: vi.fn((field: unknown, value: unknown) => ({ field, value, type: 'eq' })),
@@ -106,7 +108,7 @@ describe('Chat Edit API Route', () => {
mockWhere.mockReturnValue({ limit: mockLimit })
mockUpdate.mockReturnValue({ set: mockSet })
mockSet.mockReturnValue({ where: mockWhere })
mockDelete.mockReturnValue({ where: mockWhere })
mockPerformChatUndeploy.mockResolvedValue({ success: true })
mockCreateSuccessResponse.mockImplementation((data) => {
return new Response(JSON.stringify(data), {
@@ -428,7 +430,11 @@ describe('Chat Edit API Route', () => {
const response = await DELETE(req, { params: Promise.resolve({ id: 'chat-123' }) })
expect(response.status).toBe(200)
expect(mockDelete).toHaveBeenCalled()
expect(mockPerformChatUndeploy).toHaveBeenCalledWith({
chatId: 'chat-123',
userId: 'user-id',
workspaceId: 'workspace-123',
})
const data = await response.json()
expect(data.message).toBe('Chat deployment deleted successfully')
})
@@ -451,7 +457,11 @@ describe('Chat Edit API Route', () => {
expect(response.status).toBe(200)
expect(mockCheckChatAccess).toHaveBeenCalledWith('chat-123', 'admin-user-id')
expect(mockDelete).toHaveBeenCalled()
expect(mockPerformChatUndeploy).toHaveBeenCalledWith({
chatId: 'chat-123',
userId: 'admin-user-id',
workspaceId: 'workspace-123',
})
})
})
})


@@ -9,6 +9,7 @@ import { getSession } from '@/lib/auth'
import { isDev } from '@/lib/core/config/feature-flags'
import { encryptSecret } from '@/lib/core/security/encryption'
import { getEmailDomain } from '@/lib/core/utils/urls'
import { performChatUndeploy } from '@/lib/workflows/orchestration'
import { deployWorkflow } from '@/lib/workflows/persistence/utils'
import { checkChatAccess } from '@/app/api/chat/utils'
import { createErrorResponse, createSuccessResponse } from '@/app/api/workflows/utils'
@@ -270,33 +271,25 @@ export async function DELETE(
return createErrorResponse('Unauthorized', 401)
}
const {
hasAccess,
chat: chatRecord,
workspaceId: chatWorkspaceId,
} = await checkChatAccess(chatId, session.user.id)
const { hasAccess, workspaceId: chatWorkspaceId } = await checkChatAccess(
chatId,
session.user.id
)
if (!hasAccess) {
return createErrorResponse('Chat not found or access denied', 404)
}
await db.delete(chat).where(eq(chat.id, chatId))
logger.info(`Chat "${chatId}" deleted successfully`)
recordAudit({
workspaceId: chatWorkspaceId || null,
actorId: session.user.id,
actorName: session.user.name,
actorEmail: session.user.email,
action: AuditAction.CHAT_DELETED,
resourceType: AuditResourceType.CHAT,
resourceId: chatId,
resourceName: chatRecord?.title || chatId,
description: `Deleted chat deployment "${chatRecord?.title || chatId}"`,
request: _request,
const result = await performChatUndeploy({
chatId,
userId: session.user.id,
workspaceId: chatWorkspaceId,
})
if (!result.success) {
return createErrorResponse(result.error || 'Failed to delete chat', 500)
}
return createSuccessResponse({
message: 'Chat deployment deleted successfully',
})


@@ -3,7 +3,7 @@
*
* @vitest-environment node
*/
import { auditMock, createEnvMock } from '@sim/testing'
import { createEnvMock } from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
@@ -12,66 +12,51 @@ const {
mockFrom,
mockWhere,
mockLimit,
mockInsert,
mockValues,
mockReturning,
mockCreateSuccessResponse,
mockCreateErrorResponse,
mockEncryptSecret,
mockCheckWorkflowAccessForChatCreation,
mockDeployWorkflow,
mockPerformChatDeploy,
mockGetSession,
mockUuidV4,
} = vi.hoisted(() => ({
mockSelect: vi.fn(),
mockFrom: vi.fn(),
mockWhere: vi.fn(),
mockLimit: vi.fn(),
mockInsert: vi.fn(),
mockValues: vi.fn(),
mockReturning: vi.fn(),
mockCreateSuccessResponse: vi.fn(),
mockCreateErrorResponse: vi.fn(),
mockEncryptSecret: vi.fn(),
mockCheckWorkflowAccessForChatCreation: vi.fn(),
mockDeployWorkflow: vi.fn(),
mockPerformChatDeploy: vi.fn(),
mockGetSession: vi.fn(),
mockUuidV4: vi.fn(),
}))
vi.mock('@/lib/audit/log', () => auditMock)
vi.mock('@sim/db', () => ({
db: {
select: mockSelect,
insert: mockInsert,
},
}))
vi.mock('@sim/db/schema', () => ({
chat: { userId: 'userId', identifier: 'identifier' },
chat: { userId: 'userId', identifier: 'identifier', archivedAt: 'archivedAt' },
workflow: { id: 'id', userId: 'userId', isDeployed: 'isDeployed' },
}))
vi.mock('drizzle-orm', () => ({
and: vi.fn((...conditions: unknown[]) => ({ type: 'and', conditions })),
eq: vi.fn((field: unknown, value: unknown) => ({ field, value, type: 'eq' })),
isNull: vi.fn((field: unknown) => ({ type: 'isNull', field })),
}))
vi.mock('@/app/api/workflows/utils', () => ({
createSuccessResponse: mockCreateSuccessResponse,
createErrorResponse: mockCreateErrorResponse,
}))
vi.mock('@/lib/core/security/encryption', () => ({
encryptSecret: mockEncryptSecret,
}))
vi.mock('uuid', () => ({
v4: mockUuidV4,
}))
vi.mock('@/app/api/chat/utils', () => ({
checkWorkflowAccessForChatCreation: mockCheckWorkflowAccessForChatCreation,
}))
vi.mock('@/lib/workflows/persistence/utils', () => ({
deployWorkflow: mockDeployWorkflow,
vi.mock('@/lib/workflows/orchestration', () => ({
performChatDeploy: mockPerformChatDeploy,
}))
vi.mock('@/lib/auth', () => ({
@@ -94,10 +79,6 @@ describe('Chat API Route', () => {
mockSelect.mockReturnValue({ from: mockFrom })
mockFrom.mockReturnValue({ where: mockWhere })
mockWhere.mockReturnValue({ limit: mockLimit })
mockInsert.mockReturnValue({ values: mockValues })
mockValues.mockReturnValue({ returning: mockReturning })
mockUuidV4.mockReturnValue('test-uuid')
mockCreateSuccessResponse.mockImplementation((data) => {
return new Response(JSON.stringify(data), {
@@ -113,12 +94,10 @@ describe('Chat API Route', () => {
})
})
mockEncryptSecret.mockResolvedValue({ encrypted: 'encrypted-password' })
mockDeployWorkflow.mockResolvedValue({
mockPerformChatDeploy.mockResolvedValue({
success: true,
version: 1,
deployedAt: new Date(),
chatId: 'test-uuid',
chatUrl: 'http://localhost:3000/chat/test-chat',
})
})
@@ -277,7 +256,6 @@ describe('Chat API Route', () => {
hasAccess: true,
workflow: { userId: 'user-id', workspaceId: null, isDeployed: true },
})
mockReturning.mockResolvedValue([{ id: 'test-uuid' }])
const req = new NextRequest('http://localhost:3000/api/chat', {
method: 'POST',
@@ -287,6 +265,13 @@ describe('Chat API Route', () => {
expect(response.status).toBe(200)
expect(mockCheckWorkflowAccessForChatCreation).toHaveBeenCalledWith('workflow-123', 'user-id')
expect(mockPerformChatDeploy).toHaveBeenCalledWith(
expect.objectContaining({
workflowId: 'workflow-123',
userId: 'user-id',
identifier: 'test-chat',
})
)
})
it('should allow chat deployment when user has workspace admin permission', async () => {
@@ -309,7 +294,6 @@ describe('Chat API Route', () => {
hasAccess: true,
workflow: { userId: 'other-user-id', workspaceId: 'workspace-123', isDeployed: true },
})
mockReturning.mockResolvedValue([{ id: 'test-uuid' }])
const req = new NextRequest('http://localhost:3000/api/chat', {
method: 'POST',
@@ -319,6 +303,12 @@ describe('Chat API Route', () => {
expect(response.status).toBe(200)
expect(mockCheckWorkflowAccessForChatCreation).toHaveBeenCalledWith('workflow-123', 'user-id')
expect(mockPerformChatDeploy).toHaveBeenCalledWith(
expect.objectContaining({
workflowId: 'workflow-123',
workspaceId: 'workspace-123',
})
)
})
it('should reject when workflow is in workspace but user lacks admin permission', async () => {
@@ -383,7 +373,7 @@ describe('Chat API Route', () => {
expect(mockCheckWorkflowAccessForChatCreation).toHaveBeenCalledWith('workflow-123', 'user-id')
})
it('should auto-deploy workflow if not already deployed', async () => {
it('should call performChatDeploy for undeployed workflow', async () => {
mockGetSession.mockResolvedValue({
user: { id: 'user-id', email: 'user@example.com' },
})
@@ -403,7 +393,6 @@ describe('Chat API Route', () => {
hasAccess: true,
workflow: { userId: 'user-id', workspaceId: null, isDeployed: false },
})
mockReturning.mockResolvedValue([{ id: 'test-uuid' }])
const req = new NextRequest('http://localhost:3000/api/chat', {
method: 'POST',
@@ -412,10 +401,12 @@ describe('Chat API Route', () => {
const response = await POST(req)
expect(response.status).toBe(200)
expect(mockDeployWorkflow).toHaveBeenCalledWith({
workflowId: 'workflow-123',
deployedBy: 'user-id',
})
expect(mockPerformChatDeploy).toHaveBeenCalledWith(
expect.objectContaining({
workflowId: 'workflow-123',
userId: 'user-id',
})
)
})
})
})


@@ -3,14 +3,9 @@ import { chat } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, isNull } from 'drizzle-orm'
import type { NextRequest } from 'next/server'
import { v4 as uuidv4 } from 'uuid'
import { z } from 'zod'
import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import { getSession } from '@/lib/auth'
import { isDev } from '@/lib/core/config/feature-flags'
import { encryptSecret } from '@/lib/core/security/encryption'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { deployWorkflow } from '@/lib/workflows/persistence/utils'
import { performChatDeploy } from '@/lib/workflows/orchestration'
import { checkWorkflowAccessForChatCreation } from '@/app/api/chat/utils'
import { createErrorResponse, createSuccessResponse } from '@/app/api/workflows/utils'
@@ -109,7 +104,6 @@ export async function POST(request: NextRequest) {
)
}
// Check identifier availability and workflow access in parallel
const [existingIdentifier, { hasAccess, workflow: workflowRecord }] = await Promise.all([
db
.select()
@@ -127,121 +121,27 @@ export async function POST(request: NextRequest) {
return createErrorResponse('Workflow not found or access denied', 404)
}
// Always deploy/redeploy the workflow to ensure latest version
const result = await deployWorkflow({
workflowId,
deployedBy: session.user.id,
})
if (!result.success) {
return createErrorResponse(result.error || 'Failed to deploy workflow', 500)
}
logger.info(
`${workflowRecord.isDeployed ? 'Redeployed' : 'Auto-deployed'} workflow ${workflowId} for chat (v${result.version})`
)
// Encrypt password if provided
let encryptedPassword = null
if (authType === 'password' && password) {
const { encrypted } = await encryptSecret(password)
encryptedPassword = encrypted
}
// Create the chat deployment
const id = uuidv4()
// Log the values we're inserting
logger.info('Creating chat deployment with values:', {
workflowId,
identifier,
title,
authType,
hasPassword: !!encryptedPassword,
emailCount: allowedEmails?.length || 0,
outputConfigsCount: outputConfigs.length,
})
// Merge customizations with the additional fields
const mergedCustomizations = {
...(customizations || {}),
primaryColor: customizations?.primaryColor || 'var(--brand-hover)',
welcomeMessage: customizations?.welcomeMessage || 'Hi there! How can I help you today?',
}
await db.insert(chat).values({
id,
const result = await performChatDeploy({
workflowId,
userId: session.user.id,
identifier,
title,
description: description || null,
customizations: mergedCustomizations,
isActive: true,
description,
customizations,
authType,
password: encryptedPassword,
allowedEmails: authType === 'email' || authType === 'sso' ? allowedEmails : [],
password,
allowedEmails,
outputConfigs,
createdAt: new Date(),
updatedAt: new Date(),
workspaceId: workflowRecord.workspaceId,
})
// Return successful response with chat URL
// Generate chat URL using path-based routing instead of subdomains
const baseUrl = getBaseUrl()
let chatUrl: string
try {
const url = new URL(baseUrl)
let host = url.host
if (host.startsWith('www.')) {
host = host.substring(4)
}
chatUrl = `${url.protocol}//${host}/chat/${identifier}`
} catch (error) {
logger.warn('Failed to parse baseUrl, falling back to defaults:', {
baseUrl,
error: error instanceof Error ? error.message : 'Unknown error',
})
// Fallback based on environment
if (isDev) {
chatUrl = `http://localhost:3000/chat/${identifier}`
} else {
chatUrl = `https://sim.ai/chat/${identifier}`
}
if (!result.success) {
return createErrorResponse(result.error || 'Failed to deploy chat', 500)
}
logger.info(`Chat "${title}" deployed successfully at ${chatUrl}`)
try {
const { PlatformEvents } = await import('@/lib/core/telemetry')
PlatformEvents.chatDeployed({
chatId: id,
workflowId,
authType,
hasOutputConfigs: outputConfigs.length > 0,
})
} catch (_e) {
// Silently fail
}
recordAudit({
workspaceId: workflowRecord.workspaceId || null,
actorId: session.user.id,
actorName: session.user.name,
actorEmail: session.user.email,
action: AuditAction.CHAT_DEPLOYED,
resourceType: AuditResourceType.CHAT,
resourceId: id,
resourceName: title,
description: `Deployed chat "${title}"`,
metadata: { workflowId, identifier, authType },
request,
})
return createSuccessResponse({
id,
chatUrl,
id: result.chatId,
chatUrl: result.chatUrl,
message: 'Chat deployment created successfully',
})
} catch (validationError) {


@@ -15,7 +15,6 @@ import {
requestChatTitle,
SSE_RESPONSE_HEADERS,
} from '@/lib/copilot/chat-streaming'
import { appendCopilotLogContext } from '@/lib/copilot/logging'
import { COPILOT_REQUEST_MODES } from '@/lib/copilot/models'
import { orchestrateCopilotStream } from '@/lib/copilot/orchestrator'
import { getStreamMeta, readStreamEvents } from '@/lib/copilot/orchestrator/stream/buffer'
@@ -184,36 +183,31 @@ export async function POST(req: NextRequest) {
const wf = await getWorkflowById(workflowId)
resolvedWorkspaceId = wf?.workspaceId ?? undefined
} catch {
logger.warn(
appendCopilotLogContext('Failed to resolve workspaceId from workflow', {
requestId: tracker.requestId,
messageId: userMessageId,
})
)
logger
.withMetadata({ requestId: tracker.requestId, messageId: userMessageId })
.warn('Failed to resolve workspaceId from workflow')
}
const userMessageIdToUse = userMessageId || crypto.randomUUID()
const reqLogger = logger.withMetadata({
requestId: tracker.requestId,
messageId: userMessageIdToUse,
})
try {
logger.error(
appendCopilotLogContext('Received chat POST', {
requestId: tracker.requestId,
messageId: userMessageIdToUse,
}),
{
workflowId,
hasContexts: Array.isArray(normalizedContexts),
contextsCount: Array.isArray(normalizedContexts) ? normalizedContexts.length : 0,
contextsPreview: Array.isArray(normalizedContexts)
? normalizedContexts.map((c: any) => ({
kind: c?.kind,
chatId: c?.chatId,
workflowId: c?.workflowId,
executionId: (c as any)?.executionId,
label: c?.label,
}))
: undefined,
}
)
reqLogger.info('Received chat POST', {
workflowId,
hasContexts: Array.isArray(normalizedContexts),
contextsCount: Array.isArray(normalizedContexts) ? normalizedContexts.length : 0,
contextsPreview: Array.isArray(normalizedContexts)
? normalizedContexts.map((c: any) => ({
kind: c?.kind,
chatId: c?.chatId,
workflowId: c?.workflowId,
executionId: (c as any)?.executionId,
label: c?.label,
}))
: undefined,
})
} catch {}
let currentChat: any = null
@@ -251,40 +245,22 @@ export async function POST(req: NextRequest) {
actualChatId
)
agentContexts = processed
logger.error(
appendCopilotLogContext('Contexts processed for request', {
requestId: tracker.requestId,
messageId: userMessageIdToUse,
}),
{
processedCount: agentContexts.length,
kinds: agentContexts.map((c) => c.type),
lengthPreview: agentContexts.map((c) => c.content?.length ?? 0),
}
)
reqLogger.info('Contexts processed for request', {
processedCount: agentContexts.length,
kinds: agentContexts.map((c) => c.type),
lengthPreview: agentContexts.map((c) => c.content?.length ?? 0),
})
if (
Array.isArray(normalizedContexts) &&
normalizedContexts.length > 0 &&
agentContexts.length === 0
) {
logger.warn(
appendCopilotLogContext(
'Contexts provided but none processed. Check executionId for logs contexts.',
{
requestId: tracker.requestId,
messageId: userMessageIdToUse,
}
)
reqLogger.warn(
'Contexts provided but none processed. Check executionId for logs contexts.'
)
}
} catch (e) {
logger.error(
appendCopilotLogContext('Failed to process contexts', {
requestId: tracker.requestId,
messageId: userMessageIdToUse,
}),
e
)
reqLogger.error('Failed to process contexts', e)
}
}
@@ -313,13 +289,7 @@ export async function POST(req: NextRequest) {
if (result.status === 'fulfilled' && result.value) {
agentContexts.push(result.value)
} else if (result.status === 'rejected') {
logger.error(
appendCopilotLogContext('Failed to resolve resource attachment', {
requestId: tracker.requestId,
messageId: userMessageIdToUse,
}),
result.reason
)
reqLogger.error('Failed to resolve resource attachment', result.reason)
}
}
}
@@ -358,26 +328,20 @@ export async function POST(req: NextRequest) {
)
try {
logger.error(
appendCopilotLogContext('About to call Sim Agent', {
requestId: tracker.requestId,
messageId: userMessageIdToUse,
}),
{
hasContext: agentContexts.length > 0,
contextCount: agentContexts.length,
hasFileAttachments: Array.isArray(requestPayload.fileAttachments),
messageLength: message.length,
mode: effectiveMode,
hasTools: Array.isArray(requestPayload.tools),
toolCount: Array.isArray(requestPayload.tools) ? requestPayload.tools.length : 0,
hasBaseTools: Array.isArray(requestPayload.baseTools),
baseToolCount: Array.isArray(requestPayload.baseTools)
? requestPayload.baseTools.length
: 0,
hasCredentials: !!requestPayload.credentials,
}
)
reqLogger.info('About to call Sim Agent', {
hasContext: agentContexts.length > 0,
contextCount: agentContexts.length,
hasFileAttachments: Array.isArray(requestPayload.fileAttachments),
messageLength: message.length,
mode: effectiveMode,
hasTools: Array.isArray(requestPayload.tools),
toolCount: Array.isArray(requestPayload.tools) ? requestPayload.tools.length : 0,
hasBaseTools: Array.isArray(requestPayload.baseTools),
baseToolCount: Array.isArray(requestPayload.baseTools)
? requestPayload.baseTools.length
: 0,
hasCredentials: !!requestPayload.credentials,
})
} catch {}
if (stream && actualChatId) {
@@ -521,16 +485,10 @@ export async function POST(req: NextRequest) {
.where(eq(copilotChats.id, actualChatId))
}
} catch (error) {
logger.error(
appendCopilotLogContext('Failed to persist chat messages', {
requestId: tracker.requestId,
messageId: userMessageIdToUse,
}),
{
chatId: actualChatId,
error: error instanceof Error ? error.message : 'Unknown error',
}
)
reqLogger.error('Failed to persist chat messages', {
chatId: actualChatId,
error: error instanceof Error ? error.message : 'Unknown error',
})
}
},
},
@@ -572,19 +530,13 @@ export async function POST(req: NextRequest) {
provider: typeof requestPayload?.provider === 'string' ? requestPayload.provider : undefined,
}
logger.error(
appendCopilotLogContext('Non-streaming response from orchestrator', {
requestId: tracker.requestId,
messageId: userMessageIdToUse,
}),
{
hasContent: !!responseData.content,
contentLength: responseData.content?.length || 0,
model: responseData.model,
provider: responseData.provider,
toolCallsCount: responseData.toolCalls?.length || 0,
}
)
reqLogger.info('Non-streaming response from orchestrator', {
hasContent: !!responseData.content,
contentLength: responseData.content?.length || 0,
model: responseData.model,
provider: responseData.provider,
toolCallsCount: responseData.toolCalls?.length || 0,
})
// Save messages if we have a chat
if (currentChat && responseData.content) {
@@ -617,12 +569,7 @@ export async function POST(req: NextRequest) {
// Start title generation in parallel if this is first message (non-streaming)
if (actualChatId && !currentChat.title && conversationHistory.length === 0) {
logger.error(
appendCopilotLogContext('Starting title generation for non-streaming response', {
requestId: tracker.requestId,
messageId: userMessageIdToUse,
})
)
reqLogger.info('Starting title generation for non-streaming response')
requestChatTitle({ message, model: selectedModel, provider, messageId: userMessageIdToUse })
.then(async (title) => {
if (title) {
@@ -633,22 +580,11 @@ export async function POST(req: NextRequest) {
updatedAt: new Date(),
})
.where(eq(copilotChats.id, actualChatId!))
logger.error(
appendCopilotLogContext(`Generated and saved title: ${title}`, {
requestId: tracker.requestId,
messageId: userMessageIdToUse,
})
)
reqLogger.info(`Generated and saved title: ${title}`)
}
})
.catch((error) => {
logger.error(
appendCopilotLogContext('Title generation failed', {
requestId: tracker.requestId,
messageId: userMessageIdToUse,
}),
error
)
reqLogger.error('Title generation failed', error)
})
}
@@ -662,17 +598,11 @@ export async function POST(req: NextRequest) {
.where(eq(copilotChats.id, actualChatId!))
}
logger.error(
appendCopilotLogContext('Returning non-streaming response', {
requestId: tracker.requestId,
messageId: userMessageIdToUse,
}),
{
duration: tracker.getDuration(),
chatId: actualChatId,
responseLength: responseData.content?.length || 0,
}
)
reqLogger.info('Returning non-streaming response', {
duration: tracker.getDuration(),
chatId: actualChatId,
responseLength: responseData.content?.length || 0,
})
return NextResponse.json({
success: true,
@@ -696,33 +626,25 @@ export async function POST(req: NextRequest) {
const duration = tracker.getDuration()
if (error instanceof z.ZodError) {
logger.error(
appendCopilotLogContext('Validation error', {
requestId: tracker.requestId,
messageId: pendingChatStreamID ?? undefined,
}),
{
duration,
errors: error.errors,
}
)
logger
.withMetadata({ requestId: tracker.requestId, messageId: pendingChatStreamID ?? undefined })
.error('Validation error', {
duration,
errors: error.errors,
})
return NextResponse.json(
{ error: 'Invalid request data', details: error.errors },
{ status: 400 }
)
}
logger.error(
appendCopilotLogContext('Error handling copilot chat', {
requestId: tracker.requestId,
messageId: pendingChatStreamID ?? undefined,
}),
{
duration,
error: error instanceof Error ? error.message : 'Unknown error',
stack: error instanceof Error ? error.stack : undefined,
}
)
logger
.withMetadata({ requestId: tracker.requestId, messageId: pendingChatStreamID ?? undefined })
.error('Error handling copilot chat', {
duration,
error: error instanceof Error ? error.message : 'Unknown error',
stack: error instanceof Error ? error.stack : undefined,
})
return NextResponse.json(
{ error: error instanceof Error ? error.message : 'Internal server error' },
@@ -767,16 +689,13 @@ export async function GET(req: NextRequest) {
status: meta?.status || 'unknown',
}
} catch (err) {
logger.warn(
appendCopilotLogContext('Failed to read stream snapshot for chat', {
messageId: chat.conversationId || undefined,
}),
{
chatId,
conversationId: chat.conversationId,
error: err instanceof Error ? err.message : String(err),
}
)
logger
.withMetadata({ messageId: chat.conversationId || undefined })
.warn('Failed to read stream snapshot for chat', {
chatId,
conversationId: chat.conversationId,
error: err instanceof Error ? err.message : String(err),
})
}
}
@@ -795,11 +714,9 @@ export async function GET(req: NextRequest) {
...(streamSnapshot ? { streamSnapshot } : {}),
}
logger.error(
appendCopilotLogContext(`Retrieved chat ${chatId}`, {
messageId: chat.conversationId || undefined,
})
)
logger
.withMetadata({ messageId: chat.conversationId || undefined })
.info(`Retrieved chat ${chatId}`)
return NextResponse.json({ success: true, chat: transformedChat })
}
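
The recurring change in these hunks swaps per-call `logger.error(appendCopilotLogContext(...), …)` wrappers for a child logger created once per request via `withMetadata`. A minimal sketch of that child-logger pattern, with hypothetical shapes — the real `@sim/logger` API may differ:

```typescript
type Meta = Record<string, unknown>

interface MiniLogger {
  entries: Meta[]
  info(msg: string, extra?: Meta): void
  withMetadata(more: Meta): MiniLogger
}

// withMetadata returns a new logger whose bound metadata is merged into
// every entry it emits; call sites then pass only line-specific fields.
function createLogger(name: string, meta: Meta = {}, sink: Meta[] = []): MiniLogger {
  return {
    entries: sink,
    info(msg, extra = {}) {
      sink.push({ name, msg, ...meta, ...extra })
    },
    withMetadata(more) {
      return createLogger(name, { ...meta, ...more }, sink)
    },
  }
}

const logger = createLogger('CopilotChatAPI')
const reqLogger = logger.withMetadata({ requestId: 'req-1', messageId: 'msg-1' })
reqLogger.info('Retrieved chat', { chatId: 'chat-9' })
```

Every `reqLogger` call carries `requestId`/`messageId` automatically, which is why the per-call context wrappers could be deleted.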

View File

@@ -1,6 +1,5 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { appendCopilotLogContext } from '@/lib/copilot/logging'
import {
getStreamMeta,
readStreamEvents,
@@ -36,24 +35,21 @@ export async function GET(request: NextRequest) {
const toParam = url.searchParams.get('to')
const toEventId = toParam ? Number(toParam) : undefined
logger.error(
appendCopilotLogContext('[Resume] Received resume request', {
messageId: streamId || undefined,
}),
{
streamId: streamId || undefined,
fromEventId,
toEventId,
batchMode,
}
)
const reqLogger = logger.withMetadata({ messageId: streamId || undefined })
reqLogger.info('[Resume] Received resume request', {
streamId: streamId || undefined,
fromEventId,
toEventId,
batchMode,
})
if (!streamId) {
return NextResponse.json({ error: 'streamId is required' }, { status: 400 })
}
const meta = (await getStreamMeta(streamId)) as StreamMeta | null
logger.error(appendCopilotLogContext('[Resume] Stream lookup', { messageId: streamId }), {
reqLogger.info('[Resume] Stream lookup', {
streamId,
fromEventId,
toEventId,
@@ -72,7 +68,7 @@ export async function GET(request: NextRequest) {
if (batchMode) {
const events = await readStreamEvents(streamId, fromEventId)
const filteredEvents = toEventId ? events.filter((e) => e.eventId <= toEventId) : events
logger.error(appendCopilotLogContext('[Resume] Batch response', { messageId: streamId }), {
reqLogger.info('[Resume] Batch response', {
streamId,
fromEventId,
toEventId,
@@ -124,14 +120,11 @@ export async function GET(request: NextRequest) {
const flushEvents = async () => {
const events = await readStreamEvents(streamId, lastEventId)
if (events.length > 0) {
logger.error(
appendCopilotLogContext('[Resume] Flushing events', { messageId: streamId }),
{
streamId,
fromEventId: lastEventId,
eventCount: events.length,
}
)
reqLogger.info('[Resume] Flushing events', {
streamId,
fromEventId: lastEventId,
eventCount: events.length,
})
}
for (const entry of events) {
lastEventId = entry.eventId
@@ -178,7 +171,7 @@ export async function GET(request: NextRequest) {
}
} catch (error) {
if (!controllerClosed && !request.signal.aborted) {
logger.warn(appendCopilotLogContext('Stream replay failed', { messageId: streamId }), {
reqLogger.warn('Stream replay failed', {
streamId,
error: error instanceof Error ? error.message : String(error),
})

View File

@@ -100,7 +100,7 @@ const emailTemplates = {
trigger: 'api',
duration: '2.3s',
cost: '$0.0042',
logUrl: 'https://sim.ai/workspace/ws_123/logs?search=exec_abc123',
logUrl: 'https://sim.ai/workspace/ws_123/logs?executionId=exec_abc123',
}),
'workflow-notification-error': () =>
renderWorkflowNotificationEmail({
@@ -109,7 +109,7 @@ const emailTemplates = {
trigger: 'webhook',
duration: '1.1s',
cost: '$0.0021',
logUrl: 'https://sim.ai/workspace/ws_123/logs?search=exec_abc123',
logUrl: 'https://sim.ai/workspace/ws_123/logs?executionId=exec_abc123',
}),
'workflow-notification-alert': () =>
renderWorkflowNotificationEmail({
@@ -118,7 +118,7 @@ const emailTemplates = {
trigger: 'schedule',
duration: '45.2s',
cost: '$0.0156',
logUrl: 'https://sim.ai/workspace/ws_123/logs?search=exec_abc123',
logUrl: 'https://sim.ai/workspace/ws_123/logs?executionId=exec_abc123',
alertReason: '3 consecutive failures detected',
}),
'workflow-notification-full': () =>
@@ -128,7 +128,7 @@ const emailTemplates = {
trigger: 'api',
duration: '12.5s',
cost: '$0.0234',
logUrl: 'https://sim.ai/workspace/ws_123/logs?search=exec_abc123',
logUrl: 'https://sim.ai/workspace/ws_123/logs?executionId=exec_abc123',
finalOutput: { processed: 150, skipped: 3, status: 'completed' },
rateLimits: {
sync: { requestsPerMinute: 60, remaining: 45 },

View File

@@ -6,7 +6,14 @@
import { auditMock, createMockRequest, type MockUser } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
const { mockGetSession, mockGetUserEntityPermissions, mockLogger, mockDbRef } = vi.hoisted(() => {
const {
mockGetSession,
mockGetUserEntityPermissions,
mockLogger,
mockDbRef,
mockPerformDeleteFolder,
mockCheckForCircularReference,
} = vi.hoisted(() => {
const logger = {
info: vi.fn(),
warn: vi.fn(),
@@ -21,6 +28,8 @@ const { mockGetSession, mockGetUserEntityPermissions, mockLogger, mockDbRef } =
mockGetUserEntityPermissions: vi.fn(),
mockLogger: logger,
mockDbRef: { current: null as any },
mockPerformDeleteFolder: vi.fn(),
mockCheckForCircularReference: vi.fn(),
}
})
@@ -39,6 +48,12 @@ vi.mock('@sim/db', () => ({
return mockDbRef.current
},
}))
vi.mock('@/lib/workflows/orchestration', () => ({
performDeleteFolder: mockPerformDeleteFolder,
}))
vi.mock('@/lib/workflows/utils', () => ({
checkForCircularReference: mockCheckForCircularReference,
}))
import { DELETE, PUT } from '@/app/api/folders/[id]/route'
@@ -144,6 +159,11 @@ describe('Individual Folder API Route', () => {
mockGetUserEntityPermissions.mockResolvedValue('admin')
mockDbRef.current = createFolderDbMock()
mockPerformDeleteFolder.mockResolvedValue({
success: true,
deletedItems: { folders: 1, workflows: 0 },
})
mockCheckForCircularReference.mockResolvedValue(false)
})
describe('PUT /api/folders/[id]', () => {
@@ -369,13 +389,17 @@ describe('Individual Folder API Route', () => {
it('should prevent circular references when updating parent', async () => {
mockAuthenticatedUser()
const circularCheckResults = [{ parentId: 'folder-2' }, { parentId: 'folder-3' }]
mockDbRef.current = createFolderDbMock({
folderLookupResult: { id: 'folder-3', parentId: null, name: 'Folder 3' },
circularCheckResults,
})
mockDbRef.current = createFolderDbMock({
folderLookupResult: {
id: 'folder-3',
parentId: null,
name: 'Folder 3',
workspaceId: 'workspace-123',
},
})
mockCheckForCircularReference.mockResolvedValue(true)
const req = createMockRequest('PUT', {
name: 'Updated Folder 3',
parentId: 'folder-1',
@@ -388,6 +412,7 @@ describe('Individual Folder API Route', () => {
const data = await response.json()
expect(data).toHaveProperty('error', 'Cannot create circular folder reference')
expect(mockCheckForCircularReference).toHaveBeenCalledWith('folder-3', 'folder-1')
})
})
@@ -409,6 +434,12 @@ describe('Individual Folder API Route', () => {
const data = await response.json()
expect(data).toHaveProperty('success', true)
expect(data).toHaveProperty('deletedItems')
expect(mockPerformDeleteFolder).toHaveBeenCalledWith({
folderId: 'folder-1',
workspaceId: 'workspace-123',
userId: TEST_USER.id,
folderName: 'Test Folder',
})
})
it('should return 401 for unauthenticated delete requests', async () => {
@@ -472,6 +503,7 @@ describe('Individual Folder API Route', () => {
const data = await response.json()
expect(data).toHaveProperty('success', true)
expect(mockPerformDeleteFolder).toHaveBeenCalled()
})
it('should handle database errors during deletion', async () => {

View File

@@ -1,12 +1,12 @@
import { db } from '@sim/db'
import { workflow, workflowFolder } from '@sim/db/schema'
import { workflowFolder } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, isNull } from 'drizzle-orm'
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import { getSession } from '@/lib/auth'
import { archiveWorkflowsByIdsInWorkspace } from '@/lib/workflows/lifecycle'
import { performDeleteFolder } from '@/lib/workflows/orchestration'
import { checkForCircularReference } from '@/lib/workflows/utils'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
const logger = createLogger('FoldersIDAPI')
@@ -130,7 +130,6 @@ export async function DELETE(
return NextResponse.json({ error: 'Folder not found' }, { status: 404 })
}
// Check if user has admin permissions for the workspace (admin-only for deletions)
const workspacePermission = await getUserEntityPermissions(
session.user.id,
'workspace',
@@ -144,170 +143,25 @@ export async function DELETE(
)
}
// Check if deleting this folder would delete the last workflow(s) in the workspace
const workflowsInFolder = await countWorkflowsInFolderRecursively(
id,
existingFolder.workspaceId
)
const totalWorkflowsInWorkspace = await db
.select({ id: workflow.id })
.from(workflow)
.where(and(eq(workflow.workspaceId, existingFolder.workspaceId), isNull(workflow.archivedAt)))
if (workflowsInFolder > 0 && workflowsInFolder >= totalWorkflowsInWorkspace.length) {
return NextResponse.json(
{ error: 'Cannot delete folder containing the only workflow(s) in the workspace' },
{ status: 400 }
)
}
// Recursively delete folder and all its contents
const deletionStats = await deleteFolderRecursively(id, existingFolder.workspaceId)
logger.info('Deleted folder and all contents:', {
id,
deletionStats,
})
recordAudit({
workspaceId: existingFolder.workspaceId,
actorId: session.user.id,
actorName: session.user.name,
actorEmail: session.user.email,
action: AuditAction.FOLDER_DELETED,
resourceType: AuditResourceType.FOLDER,
resourceId: id,
resourceName: existingFolder.name,
description: `Deleted folder "${existingFolder.name}"`,
metadata: {
affected: {
workflows: deletionStats.workflows,
subfolders: deletionStats.folders - 1,
},
},
request,
})
const result = await performDeleteFolder({
folderId: id,
workspaceId: existingFolder.workspaceId,
userId: session.user.id,
folderName: existingFolder.name,
})
if (!result.success) {
const status =
result.errorCode === 'not_found' ? 404 : result.errorCode === 'validation' ? 400 : 500
return NextResponse.json({ error: result.error }, { status })
}
return NextResponse.json({
success: true,
deletedItems: deletionStats,
deletedItems: result.deletedItems,
})
} catch (error) {
logger.error('Error deleting folder:', { error })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
// Helper function to recursively delete a folder and all its contents
async function deleteFolderRecursively(
folderId: string,
workspaceId: string
): Promise<{ folders: number; workflows: number }> {
const stats = { folders: 0, workflows: 0 }
// Get all child folders first (workspace-scoped, not user-scoped)
const childFolders = await db
.select({ id: workflowFolder.id })
.from(workflowFolder)
.where(and(eq(workflowFolder.parentId, folderId), eq(workflowFolder.workspaceId, workspaceId)))
// Recursively delete child folders
for (const childFolder of childFolders) {
const childStats = await deleteFolderRecursively(childFolder.id, workspaceId)
stats.folders += childStats.folders
stats.workflows += childStats.workflows
}
// Delete all workflows in this folder (workspace-scoped, not user-scoped)
// The database cascade will handle deleting related workflow_blocks, workflow_edges, workflow_subflows
const workflowsInFolder = await db
.select({ id: workflow.id })
.from(workflow)
.where(
and(
eq(workflow.folderId, folderId),
eq(workflow.workspaceId, workspaceId),
isNull(workflow.archivedAt)
)
)
if (workflowsInFolder.length > 0) {
await archiveWorkflowsByIdsInWorkspace(
workspaceId,
workflowsInFolder.map((entry) => entry.id),
{ requestId: `folder-${folderId}` }
)
stats.workflows += workflowsInFolder.length
}
// Delete this folder
await db.delete(workflowFolder).where(eq(workflowFolder.id, folderId))
stats.folders += 1
return stats
}
/**
* Counts the number of workflows in a folder and all its subfolders recursively.
*/
async function countWorkflowsInFolderRecursively(
folderId: string,
workspaceId: string
): Promise<number> {
let count = 0
const workflowsInFolder = await db
.select({ id: workflow.id })
.from(workflow)
.where(
and(
eq(workflow.folderId, folderId),
eq(workflow.workspaceId, workspaceId),
isNull(workflow.archivedAt)
)
)
count += workflowsInFolder.length
const childFolders = await db
.select({ id: workflowFolder.id })
.from(workflowFolder)
.where(and(eq(workflowFolder.parentId, folderId), eq(workflowFolder.workspaceId, workspaceId)))
for (const childFolder of childFolders) {
count += await countWorkflowsInFolderRecursively(childFolder.id, workspaceId)
}
return count
}
// Helper function to check for circular references
async function checkForCircularReference(folderId: string, parentId: string): Promise<boolean> {
let currentParentId: string | null = parentId
const visited = new Set<string>()
while (currentParentId) {
if (visited.has(currentParentId)) {
return true // Circular reference detected
}
if (currentParentId === folderId) {
return true // Would create a cycle
}
visited.add(currentParentId)
// Get the parent of the current parent
const parent: { parentId: string | null } | undefined = await db
.select({ parentId: workflowFolder.parentId })
.from(workflowFolder)
.where(eq(workflowFolder.id, currentParentId))
.then((rows) => rows[0])
currentParentId = parent?.parentId || null
}
return false
}
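
The removed `checkForCircularReference` walk (now imported from `@/lib/workflows/utils`) can be sketched against an in-memory parent map — the `parents` Map below is a hypothetical stand-in for the `workflowFolder` table lookup:

```typescript
// Walk up from the proposed parent; if we ever reach the folder being
// moved (or revisit a node), the reparenting would create a cycle.
function wouldCreateCycle(
  parents: Map<string, string | null>,
  folderId: string,
  newParentId: string
): boolean {
  const visited = new Set<string>()
  let current: string | null = newParentId
  while (current) {
    if (current === folderId) return true // folder would become its own ancestor
    if (visited.has(current)) return true // pre-existing cycle in the chain
    visited.add(current)
    current = parents.get(current) ?? null
  }
  return false
}
```

The route returns 400 ("Cannot create circular folder reference") when this check is true, which is what the updated test asserts via `mockCheckForCircularReference.mockResolvedValue(true)`.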

View File

@@ -12,7 +12,6 @@ import {
createSSEStream,
SSE_RESPONSE_HEADERS,
} from '@/lib/copilot/chat-streaming'
import { appendCopilotLogContext } from '@/lib/copilot/logging'
import type { OrchestratorResult } from '@/lib/copilot/orchestrator/types'
import { processContextsServer, resolveActiveResourceContext } from '@/lib/copilot/process-contents'
import { createRequestTracker, createUnauthorizedResponse } from '@/lib/copilot/request-helpers'
@@ -112,27 +111,22 @@ export async function POST(req: NextRequest) {
const userMessageId = providedMessageId || crypto.randomUUID()
userMessageIdForLogs = userMessageId
const reqLogger = logger.withMetadata({
requestId: tracker.requestId,
messageId: userMessageId,
})
logger.error(
appendCopilotLogContext('Received mothership chat start request', {
requestId: tracker.requestId,
messageId: userMessageId,
}),
{
workspaceId,
chatId,
createNewChat,
hasContexts: Array.isArray(contexts) && contexts.length > 0,
contextsCount: Array.isArray(contexts) ? contexts.length : 0,
hasResourceAttachments:
Array.isArray(resourceAttachments) && resourceAttachments.length > 0,
resourceAttachmentCount: Array.isArray(resourceAttachments)
? resourceAttachments.length
: 0,
hasFileAttachments: Array.isArray(fileAttachments) && fileAttachments.length > 0,
fileAttachmentCount: Array.isArray(fileAttachments) ? fileAttachments.length : 0,
}
)
reqLogger.info('Received mothership chat start request', {
workspaceId,
chatId,
createNewChat,
hasContexts: Array.isArray(contexts) && contexts.length > 0,
contextsCount: Array.isArray(contexts) ? contexts.length : 0,
hasResourceAttachments: Array.isArray(resourceAttachments) && resourceAttachments.length > 0,
resourceAttachmentCount: Array.isArray(resourceAttachments) ? resourceAttachments.length : 0,
hasFileAttachments: Array.isArray(fileAttachments) && fileAttachments.length > 0,
fileAttachmentCount: Array.isArray(fileAttachments) ? fileAttachments.length : 0,
})
try {
await assertActiveWorkspaceAccess(workspaceId, authenticatedUserId)
@@ -174,13 +168,7 @@ export async function POST(req: NextRequest) {
actualChatId
)
} catch (e) {
logger.error(
appendCopilotLogContext('Failed to process contexts', {
requestId: tracker.requestId,
messageId: userMessageId,
}),
e
)
reqLogger.error('Failed to process contexts', e)
}
}
@@ -205,13 +193,7 @@ export async function POST(req: NextRequest) {
if (result.status === 'fulfilled' && result.value) {
agentContexts.push(result.value)
} else if (result.status === 'rejected') {
logger.error(
appendCopilotLogContext('Failed to resolve resource attachment', {
requestId: tracker.requestId,
messageId: userMessageId,
}),
result.reason
)
reqLogger.error('Failed to resolve resource attachment', result.reason)
}
}
}
@@ -399,16 +381,10 @@ export async function POST(req: NextRequest) {
})
}
} catch (error) {
logger.error(
appendCopilotLogContext('Failed to persist chat messages', {
requestId: tracker.requestId,
messageId: userMessageId,
}),
{
chatId: actualChatId,
error: error instanceof Error ? error.message : 'Unknown error',
}
)
reqLogger.error('Failed to persist chat messages', {
chatId: actualChatId,
error: error instanceof Error ? error.message : 'Unknown error',
})
}
},
},
@@ -423,15 +399,11 @@ export async function POST(req: NextRequest) {
)
}
logger.error(
appendCopilotLogContext('Error handling mothership chat', {
requestId: tracker.requestId,
messageId: userMessageIdForLogs,
}),
{
error: error instanceof Error ? error.message : 'Unknown error',
}
)
logger
.withMetadata({ requestId: tracker.requestId, messageId: userMessageIdForLogs })
.error('Error handling mothership chat', {
error: error instanceof Error ? error.message : 'Unknown error',
})
return NextResponse.json(
{ error: error instanceof Error ? error.message : 'Internal server error' },

View File

@@ -5,6 +5,7 @@ import { and, eq, sql } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { releasePendingChatStream } from '@/lib/copilot/chat-streaming'
import { taskPubSub } from '@/lib/copilot/task-events'
const logger = createLogger('MothershipChatStopAPI')
@@ -58,6 +59,8 @@ export async function POST(req: NextRequest) {
const { chatId, streamId, content, contentBlocks } = StopSchema.parse(await req.json())
await releasePendingChatStream(chatId, streamId)
const setClause: Record<string, unknown> = {
conversationId: null,
updatedAt: new Date(),

View File

@@ -5,7 +5,6 @@ import { and, eq, sql } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getAccessibleCopilotChat } from '@/lib/copilot/chat-lifecycle'
import { appendCopilotLogContext } from '@/lib/copilot/logging'
import { getStreamMeta, readStreamEvents } from '@/lib/copilot/orchestrator/stream/buffer'
import {
authenticateCopilotRequestSessionOnly,
@@ -63,16 +62,13 @@ export async function GET(
status: meta?.status || 'unknown',
}
} catch (error) {
logger.warn(
appendCopilotLogContext('Failed to read stream snapshot for mothership chat', {
messageId: chat.conversationId || undefined,
}),
{
chatId,
conversationId: chat.conversationId,
error: error instanceof Error ? error.message : String(error),
}
)
logger
.withMetadata({ messageId: chat.conversationId || undefined })
.warn('Failed to read stream snapshot for mothership chat', {
chatId,
conversationId: chat.conversationId,
error: error instanceof Error ? error.message : String(error),
})
}
}

View File

@@ -4,7 +4,6 @@ import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { createRunSegment } from '@/lib/copilot/async-runs/repository'
import { buildIntegrationToolSchemas } from '@/lib/copilot/chat-payload'
import { appendCopilotLogContext } from '@/lib/copilot/logging'
import { orchestrateCopilotStream } from '@/lib/copilot/orchestrator'
import { generateWorkspaceContext } from '@/lib/copilot/workspace-context'
import {
@@ -53,6 +52,7 @@ export async function POST(req: NextRequest) {
const effectiveChatId = chatId || crypto.randomUUID()
messageId = crypto.randomUUID()
const reqLogger = logger.withMetadata({ messageId })
const [workspaceContext, integrationTools, userPermission] = await Promise.all([
generateWorkspaceContext(workspaceId, userId),
buildIntegrationToolSchemas(userId, messageId),
@@ -96,7 +96,7 @@ export async function POST(req: NextRequest) {
})
if (!result.success) {
logger.error(appendCopilotLogContext('Mothership execute failed', { messageId }), {
reqLogger.error('Mothership execute failed', {
error: result.error,
errors: result.errors,
})
@@ -135,7 +135,7 @@ export async function POST(req: NextRequest) {
)
}
logger.error(appendCopilotLogContext('Mothership execute error', { messageId }), {
logger.withMetadata({ messageId }).error('Mothership execute error', {
error: error instanceof Error ? error.message : 'Unknown error',
})

View File

@@ -1,6 +1,7 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { deleteSkill, listSkills, upsertSkills } from '@/lib/workflows/skills/operations'
@@ -96,6 +97,18 @@ export async function POST(req: NextRequest) {
requestId,
})
for (const skill of resultSkills) {
recordAudit({
workspaceId,
actorId: userId,
action: AuditAction.SKILL_CREATED,
resourceType: AuditResourceType.SKILL,
resourceId: skill.id,
resourceName: skill.name,
description: `Created/updated skill "${skill.name}"`,
})
}
return NextResponse.json({ success: true, data: resultSkills })
} catch (validationError) {
if (validationError instanceof z.ZodError) {
@@ -158,6 +171,15 @@ export async function DELETE(request: NextRequest) {
return NextResponse.json({ error: 'Skill not found' }, { status: 404 })
}
recordAudit({
workspaceId,
actorId: authResult.userId,
action: AuditAction.SKILL_DELETED,
resourceType: AuditResourceType.SKILL,
resourceId: skillId,
description: `Deleted skill`,
})
logger.info(`[${requestId}] Deleted skill: ${skillId}`)
return NextResponse.json({ success: true })
} catch (error) {

View File

@@ -4,6 +4,7 @@ import { createLogger } from '@sim/logger'
import { and, desc, eq, isNull, or } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { upsertCustomTools } from '@/lib/workflows/custom-tools/operations'
@@ -166,6 +167,18 @@ export async function POST(req: NextRequest) {
requestId,
})
for (const tool of resultTools) {
recordAudit({
workspaceId,
actorId: userId,
action: AuditAction.CUSTOM_TOOL_CREATED,
resourceType: AuditResourceType.CUSTOM_TOOL,
resourceId: tool.id,
resourceName: tool.title,
description: `Created/updated custom tool "${tool.title}"`,
})
}
return NextResponse.json({ success: true, data: resultTools })
} catch (validationError) {
if (validationError instanceof z.ZodError) {
@@ -265,6 +278,15 @@ export async function DELETE(request: NextRequest) {
// Delete the tool
await db.delete(customTools).where(eq(customTools.id, toolId))
recordAudit({
workspaceId: tool.workspaceId || undefined,
actorId: userId,
action: AuditAction.CUSTOM_TOOL_DELETED,
resourceType: AuditResourceType.CUSTOM_TOOL,
resourceId: toolId,
description: `Deleted custom tool`,
})
logger.info(`[${requestId}] Deleted tool: ${toolId}`)
return NextResponse.json({ success: true })
} catch (error) {

View File

@@ -0,0 +1,166 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { acquireLock, releaseLock } from '@/lib/core/config/redis'
import { ensureAbsoluteUrl } from '@/lib/core/utils/urls'
import {
downloadWorkspaceFile,
getWorkspaceFileByName,
updateWorkspaceFileContent,
uploadWorkspaceFile,
} from '@/lib/uploads/contexts/workspace/workspace-file-manager'
import { getFileExtension, getMimeTypeFromExtension } from '@/lib/uploads/utils/file-utils'
export const dynamic = 'force-dynamic'
const logger = createLogger('FileManageAPI')
export async function POST(request: NextRequest) {
const auth = await checkInternalAuth(request, { requireWorkflowId: false })
if (!auth.success) {
return NextResponse.json({ success: false, error: auth.error }, { status: 401 })
}
const { searchParams } = new URL(request.url)
const userId = auth.userId || searchParams.get('userId')
if (!userId) {
return NextResponse.json({ success: false, error: 'userId is required' }, { status: 400 })
}
let body: Record<string, unknown>
try {
body = await request.json()
} catch {
return NextResponse.json({ success: false, error: 'Invalid JSON body' }, { status: 400 })
}
const workspaceId = (body.workspaceId as string) || searchParams.get('workspaceId')
if (!workspaceId) {
return NextResponse.json({ success: false, error: 'workspaceId is required' }, { status: 400 })
}
const operation = body.operation as string
try {
switch (operation) {
case 'write': {
const fileName = body.fileName as string | undefined
const content = body.content as string | undefined
const contentType = body.contentType as string | undefined
if (!fileName) {
return NextResponse.json(
{ success: false, error: 'fileName is required for write operation' },
{ status: 400 }
)
}
if (!content && content !== '') {
return NextResponse.json(
{ success: false, error: 'content is required for write operation' },
{ status: 400 }
)
}
const mimeType = contentType || getMimeTypeFromExtension(getFileExtension(fileName))
const fileBuffer = Buffer.from(content ?? '', 'utf-8')
const result = await uploadWorkspaceFile(
workspaceId,
userId,
fileBuffer,
fileName,
mimeType
)
logger.info('File created', {
fileId: result.id,
name: fileName,
size: fileBuffer.length,
})
return NextResponse.json({
success: true,
data: {
id: result.id,
name: result.name,
size: fileBuffer.length,
url: ensureAbsoluteUrl(result.url),
},
})
}
case 'append': {
const fileName = body.fileName as string | undefined
const content = body.content as string | undefined
if (!fileName) {
return NextResponse.json(
{ success: false, error: 'fileName is required for append operation' },
{ status: 400 }
)
}
if (!content && content !== '') {
return NextResponse.json(
{ success: false, error: 'content is required for append operation' },
{ status: 400 }
)
}
const existing = await getWorkspaceFileByName(workspaceId, fileName)
if (!existing) {
return NextResponse.json(
{ success: false, error: `File not found: "${fileName}"` },
{ status: 404 }
)
}
const lockKey = `file-append:${workspaceId}:${existing.id}`
const lockValue = `${Date.now()}-${Math.random().toString(36).slice(2)}`
const acquired = await acquireLock(lockKey, lockValue, 30)
if (!acquired) {
return NextResponse.json(
{ success: false, error: 'File is busy, please retry' },
{ status: 409 }
)
}
try {
const existingBuffer = await downloadWorkspaceFile(existing)
const finalContent = existingBuffer.toString('utf-8') + content
const fileBuffer = Buffer.from(finalContent, 'utf-8')
await updateWorkspaceFileContent(workspaceId, existing.id, userId, fileBuffer)
logger.info('File appended', {
fileId: existing.id,
name: existing.name,
size: fileBuffer.length,
})
return NextResponse.json({
success: true,
data: {
id: existing.id,
name: existing.name,
size: fileBuffer.length,
url: ensureAbsoluteUrl(existing.path),
},
})
} finally {
await releaseLock(lockKey, lockValue)
}
}
default:
return NextResponse.json(
{ success: false, error: `Unknown operation: ${operation}. Supported: write, append` },
{ status: 400 }
)
}
} catch (error) {
const message = error instanceof Error ? error.message : 'Unknown error'
logger.error('File operation failed', { operation, error: message })
return NextResponse.json({ success: false, error: message }, { status: 500 })
}
}
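
The `append` branch above read-modify-writes the file under a value-tagged lock that is released in `finally`, so a concurrent append gets a 409 instead of silently clobbering the first write. A sketch of that pattern with an in-memory lock standing in for the Redis `acquireLock`/`releaseLock` helpers (hypothetical shapes; the real helpers use Redis with a TTL):

```typescript
// In-memory stand-in: the lock stores a caller-supplied token so that
// only the current holder can release it (a stale holder is a no-op).
const locks = new Map<string, string>()

async function acquireLock(key: string, value: string, _ttlSec: number): Promise<boolean> {
  if (locks.has(key)) return false
  locks.set(key, value)
  return true
}

async function releaseLock(key: string, value: string): Promise<void> {
  if (locks.get(key) === value) locks.delete(key)
}

// Append under the lock, releasing in finally even if the write throws.
async function appendWithLock(
  store: Map<string, string>,
  fileId: string,
  content: string
): Promise<boolean> {
  const lockKey = `file-append:${fileId}`
  const lockValue = `${Date.now()}-${Math.random().toString(36).slice(2)}`
  if (!(await acquireLock(lockKey, lockValue, 30))) return false // route maps this to 409
  try {
    store.set(fileId, (store.get(fileId) ?? '') + content)
    return true
  } finally {
    await releaseLock(lockKey, lockValue)
  }
}
```

The unique `lockValue` matters: without it, a request whose lock expired could release a lock that a later request now holds.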

View File

@@ -1,25 +1,7 @@
import { db, workflowDeploymentVersion } from '@sim/db'
import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { generateRequestId } from '@/lib/core/utils/request'
import { removeMcpToolsForWorkflow, syncMcpToolsForWorkflow } from '@/lib/mcp/workflow-mcp-sync'
import {
cleanupWebhooksForWorkflow,
restorePreviousVersionWebhooks,
saveTriggerWebhooksForDeploy,
} from '@/lib/webhooks/deploy'
import { getActiveWorkflowRecord } from '@/lib/workflows/active-context'
import {
activateWorkflowVersionById,
deployWorkflow,
loadWorkflowFromNormalizedTables,
undeployWorkflow,
} from '@/lib/workflows/persistence/utils'
import {
cleanupDeploymentVersion,
createSchedulesForDeploy,
validateWorkflowSchedules,
} from '@/lib/workflows/schedules'
import { performFullDeploy, performFullUndeploy } from '@/lib/workflows/orchestration'
import { withAdminAuthParams } from '@/app/api/v1/admin/middleware'
import {
badRequestResponse,
@@ -31,12 +13,19 @@ import type { AdminDeployResult, AdminUndeployResult } from '@/app/api/v1/admin/
const logger = createLogger('AdminWorkflowDeployAPI')
const ADMIN_ACTOR_ID = 'admin-api'
interface RouteParams {
id: string
}
/**
* POST — Deploy a workflow via admin API.
*
* `userId` is set to the workflow owner so that webhook credential resolution
* (OAuth token lookups for providers like Airtable, Attio, etc.) uses a real
* user. `actorId` is set to `'admin-api'` so that the `deployedBy` field on
* the deployment version and audit log entries are correctly attributed to an
* admin action rather than the workflow owner.
*/
export const POST = withAdminAuthParams<RouteParams>(async (request, context) => {
const { id: workflowId } = await context.params
const requestId = generateRequestId()
@@ -48,140 +37,28 @@ export const POST = withAdminAuthParams<RouteParams>(async (request, context) =>
return notFoundResponse('Workflow')
}
const normalizedData = await loadWorkflowFromNormalizedTables(workflowId)
if (!normalizedData) {
return badRequestResponse('Workflow has no saved state')
}
const scheduleValidation = validateWorkflowSchedules(normalizedData.blocks)
if (!scheduleValidation.isValid) {
return badRequestResponse(`Invalid schedule configuration: ${scheduleValidation.error}`)
}
const [currentActiveVersion] = await db
.select({ id: workflowDeploymentVersion.id })
.from(workflowDeploymentVersion)
.where(
and(
eq(workflowDeploymentVersion.workflowId, workflowId),
eq(workflowDeploymentVersion.isActive, true)
)
)
.limit(1)
const previousVersionId = currentActiveVersion?.id
const rollbackDeployment = async () => {
if (previousVersionId) {
await restorePreviousVersionWebhooks({
request,
workflow: workflowData,
userId: workflowRecord.userId,
previousVersionId,
requestId,
})
const reactivateResult = await activateWorkflowVersionById({
workflowId,
deploymentVersionId: previousVersionId,
})
if (reactivateResult.success) {
return
}
}
await undeployWorkflow({ workflowId })
}
const deployResult = await deployWorkflow({
const result = await performFullDeploy({
workflowId,
deployedBy: ADMIN_ACTOR_ID,
workflowName: workflowRecord.name,
})
if (!deployResult.success) {
return internalErrorResponse(deployResult.error || 'Failed to deploy workflow')
}
if (!deployResult.deploymentVersionId) {
await undeployWorkflow({ workflowId })
return internalErrorResponse('Failed to resolve deployment version')
}
const workflowData = workflowRecord as Record<string, unknown>
const triggerSaveResult = await saveTriggerWebhooksForDeploy({
request,
workflowId,
workflow: workflowData,
userId: workflowRecord.userId,
blocks: normalizedData.blocks,
workflowName: workflowRecord.name,
requestId,
deploymentVersionId: deployResult.deploymentVersionId,
previousVersionId,
request,
actorId: 'admin-api',
})
if (!triggerSaveResult.success) {
await cleanupDeploymentVersion({
workflowId,
workflow: workflowData,
requestId,
deploymentVersionId: deployResult.deploymentVersionId,
})
await rollbackDeployment()
return internalErrorResponse(
triggerSaveResult.error?.message || 'Failed to sync trigger configuration'
)
if (!result.success) {
if (result.errorCode === 'not_found') return notFoundResponse('Workflow state')
if (result.errorCode === 'validation') return badRequestResponse(result.error!)
return internalErrorResponse(result.error || 'Failed to deploy workflow')
}
const scheduleResult = await createSchedulesForDeploy(
workflowId,
normalizedData.blocks,
db,
deployResult.deploymentVersionId
)
if (!scheduleResult.success) {
logger.error(
`[${requestId}] Admin API: Schedule creation failed for workflow ${workflowId}: ${scheduleResult.error}`
)
await cleanupDeploymentVersion({
workflowId,
workflow: workflowData,
requestId,
deploymentVersionId: deployResult.deploymentVersionId,
})
await rollbackDeployment()
return internalErrorResponse(scheduleResult.error || 'Failed to create schedule')
}
if (previousVersionId && previousVersionId !== deployResult.deploymentVersionId) {
try {
logger.info(`[${requestId}] Admin API: Cleaning up previous version ${previousVersionId}`)
await cleanupDeploymentVersion({
workflowId,
workflow: workflowData,
requestId,
deploymentVersionId: previousVersionId,
skipExternalCleanup: true,
})
} catch (cleanupError) {
logger.error(
`[${requestId}] Admin API: Failed to clean up previous version ${previousVersionId}`,
cleanupError
)
}
}
logger.info(
`[${requestId}] Admin API: Deployed workflow ${workflowId} as v${deployResult.version}`
)
// Sync MCP tools with the latest parameter schema
await syncMcpToolsForWorkflow({ workflowId, requestId, context: 'deploy' })
logger.info(`[${requestId}] Admin API: Deployed workflow ${workflowId} as v${result.version}`)
const response: AdminDeployResult = {
isDeployed: true,
version: deployResult.version!,
deployedAt: deployResult.deployedAt!.toISOString(),
warnings: triggerSaveResult.warnings,
version: result.version!,
deployedAt: result.deployedAt!.toISOString(),
warnings: result.warnings,
}
return singleResponse(response)
@@ -191,7 +68,7 @@ export const POST = withAdminAuthParams<RouteParams>(async (request, context) =>
}
})
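The handlers above branch on `result.errorCode` to choose an HTTP status, which implies a discriminated result contract from the orchestration helpers. A sketch of that assumed shape (inferred from the branching, not the actual `performFullDeploy` signature):

```typescript
// Assumed result shape, inferred from the error branching in these routes.
type OrchestrationResult =
  | { success: true; version: number; deployedAt: Date; warnings?: string[] }
  | { success: false; errorCode: 'not_found' | 'validation' | 'internal'; error?: string }

// Hypothetical helper mirroring the inline errorCode-to-status ternaries.
function statusForError(errorCode: 'not_found' | 'validation' | 'internal'): number {
  if (errorCode === 'not_found') return 404
  if (errorCode === 'validation') return 400
  return 500
}

const failed: OrchestrationResult = { success: false, errorCode: 'validation', error: 'bad cron' }
if (!failed.success) console.log(statusForError(failed.errorCode)) // 400
```

A discriminated union like this lets callers narrow on `success` before touching `version` or `errorCode`, which is why the handlers can use non-null assertions like `result.version!` only on the success path.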
export const DELETE = withAdminAuthParams<RouteParams>(async (request, context) => {
export const DELETE = withAdminAuthParams<RouteParams>(async (_request, context) => {
const { id: workflowId } = await context.params
const requestId = generateRequestId()
@@ -202,19 +79,17 @@ export const DELETE = withAdminAuthParams<RouteParams>(async (request, context)
return notFoundResponse('Workflow')
}
const result = await undeployWorkflow({ workflowId })
const result = await performFullUndeploy({
workflowId,
userId: workflowRecord.userId,
requestId,
actorId: 'admin-api',
})
if (!result.success) {
return internalErrorResponse(result.error || 'Failed to undeploy workflow')
}
await cleanupWebhooksForWorkflow(
workflowId,
workflowRecord as Record<string, unknown>,
requestId
)
await removeMcpToolsForWorkflow(workflowId, requestId)
logger.info(`Admin API: Undeployed workflow ${workflowId}`)
const response: AdminUndeployResult = {

View File

@@ -13,12 +13,12 @@
*/
import { db } from '@sim/db'
import { templates, workflowBlocks, workflowEdges } from '@sim/db/schema'
import { workflowBlocks, workflowEdges } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { count, eq } from 'drizzle-orm'
import { NextResponse } from 'next/server'
import { getActiveWorkflowRecord } from '@/lib/workflows/active-context'
import { archiveWorkflow } from '@/lib/workflows/lifecycle'
import { performDeleteWorkflow } from '@/lib/workflows/orchestration'
import { withAdminAuthParams } from '@/app/api/v1/admin/middleware'
import {
internalErrorResponse,
@@ -69,7 +69,7 @@ export const GET = withAdminAuthParams<RouteParams>(async (request, context) =>
}
})
export const DELETE = withAdminAuthParams<RouteParams>(async (request, context) => {
export const DELETE = withAdminAuthParams<RouteParams>(async (_request, context) => {
const { id: workflowId } = await context.params
try {
@@ -79,12 +79,18 @@ export const DELETE = withAdminAuthParams<RouteParams>(async (request, context)
return notFoundResponse('Workflow')
}
await db.update(templates).set({ workflowId: null }).where(eq(templates.workflowId, workflowId))
await archiveWorkflow(workflowId, {
const result = await performDeleteWorkflow({
workflowId,
userId: workflowData.userId,
skipLastWorkflowGuard: true,
requestId: `admin-workflow-${workflowId}`,
actorId: 'admin-api',
})
if (!result.success) {
return internalErrorResponse(result.error || 'Failed to delete workflow')
}
logger.info(`Admin API: Deleted workflow ${workflowId} (${workflowData.name})`)
return NextResponse.json({ success: true, workflowId })

View File

@@ -1,16 +1,7 @@
import { db, workflowDeploymentVersion } from '@sim/db'
import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { generateRequestId } from '@/lib/core/utils/request'
import { syncMcpToolsForWorkflow } from '@/lib/mcp/workflow-mcp-sync'
import { restorePreviousVersionWebhooks, saveTriggerWebhooksForDeploy } from '@/lib/webhooks/deploy'
import { getActiveWorkflowRecord } from '@/lib/workflows/active-context'
import { activateWorkflowVersion } from '@/lib/workflows/persistence/utils'
import {
cleanupDeploymentVersion,
createSchedulesForDeploy,
validateWorkflowSchedules,
} from '@/lib/workflows/schedules'
import { performActivateVersion } from '@/lib/workflows/orchestration'
import { withAdminAuthParams } from '@/app/api/v1/admin/middleware'
import {
badRequestResponse,
@@ -18,7 +9,6 @@ import {
notFoundResponse,
singleResponse,
} from '@/app/api/v1/admin/responses'
import type { BlockState } from '@/stores/workflows/workflow/types'
const logger = createLogger('AdminWorkflowActivateVersionAPI')
@@ -43,144 +33,22 @@ export const POST = withAdminAuthParams<RouteParams>(async (request, context) =>
return badRequestResponse('Invalid version number')
}
const [versionRow] = await db
.select({
id: workflowDeploymentVersion.id,
state: workflowDeploymentVersion.state,
})
.from(workflowDeploymentVersion)
.where(
and(
eq(workflowDeploymentVersion.workflowId, workflowId),
eq(workflowDeploymentVersion.version, versionNum)
)
)
.limit(1)
if (!versionRow?.state) {
return notFoundResponse('Deployment version')
}
const [currentActiveVersion] = await db
.select({ id: workflowDeploymentVersion.id })
.from(workflowDeploymentVersion)
.where(
and(
eq(workflowDeploymentVersion.workflowId, workflowId),
eq(workflowDeploymentVersion.isActive, true)
)
)
.limit(1)
const previousVersionId = currentActiveVersion?.id
const deployedState = versionRow.state as { blocks?: Record<string, BlockState> }
const blocks = deployedState.blocks
if (!blocks || typeof blocks !== 'object') {
return internalErrorResponse('Invalid deployed state structure')
}
const workflowData = workflowRecord as Record<string, unknown>
const scheduleValidation = validateWorkflowSchedules(blocks)
if (!scheduleValidation.isValid) {
return badRequestResponse(`Invalid schedule configuration: ${scheduleValidation.error}`)
}
const triggerSaveResult = await saveTriggerWebhooksForDeploy({
request,
const result = await performActivateVersion({
workflowId,
workflow: workflowData,
version: versionNum,
userId: workflowRecord.userId,
blocks,
workflow: workflowRecord as Record<string, unknown>,
requestId,
deploymentVersionId: versionRow.id,
previousVersionId,
forceRecreateSubscriptions: true,
request,
actorId: 'admin-api',
})
if (!triggerSaveResult.success) {
logger.error(
`[${requestId}] Admin API: Failed to sync triggers for workflow ${workflowId}`,
triggerSaveResult.error
)
return internalErrorResponse(
triggerSaveResult.error?.message || 'Failed to sync trigger configuration'
)
}
const scheduleResult = await createSchedulesForDeploy(workflowId, blocks, db, versionRow.id)
if (!scheduleResult.success) {
await cleanupDeploymentVersion({
workflowId,
workflow: workflowData,
requestId,
deploymentVersionId: versionRow.id,
})
if (previousVersionId) {
await restorePreviousVersionWebhooks({
request,
workflow: workflowData,
userId: workflowRecord.userId,
previousVersionId,
requestId,
})
}
return internalErrorResponse(scheduleResult.error || 'Failed to sync schedules')
}
const result = await activateWorkflowVersion({ workflowId, version: versionNum })
if (!result.success) {
await cleanupDeploymentVersion({
workflowId,
workflow: workflowData,
requestId,
deploymentVersionId: versionRow.id,
})
if (previousVersionId) {
await restorePreviousVersionWebhooks({
request,
workflow: workflowData,
userId: workflowRecord.userId,
previousVersionId,
requestId,
})
}
if (result.error === 'Deployment version not found') {
return notFoundResponse('Deployment version')
}
if (result.errorCode === 'not_found') return notFoundResponse('Deployment version')
if (result.errorCode === 'validation') return badRequestResponse(result.error!)
return internalErrorResponse(result.error || 'Failed to activate version')
}
if (previousVersionId && previousVersionId !== versionRow.id) {
try {
logger.info(
`[${requestId}] Admin API: Cleaning up previous version ${previousVersionId} webhooks/schedules`
)
await cleanupDeploymentVersion({
workflowId,
workflow: workflowData,
requestId,
deploymentVersionId: previousVersionId,
skipExternalCleanup: true,
})
logger.info(`[${requestId}] Admin API: Previous version cleanup completed`)
} catch (cleanupError) {
logger.error(
`[${requestId}] Admin API: Failed to clean up previous version ${previousVersionId}`,
cleanupError
)
}
}
await syncMcpToolsForWorkflow({
workflowId,
requestId,
state: versionRow.state,
context: 'activate',
})
logger.info(
`[${requestId}] Admin API: Activated version ${versionNum} for workflow ${workflowId}`
)
@@ -189,14 +57,12 @@ export const POST = withAdminAuthParams<RouteParams>(async (request, context) =>
success: true,
version: versionNum,
deployedAt: result.deployedAt!.toISOString(),
warnings: triggerSaveResult.warnings,
warnings: result.warnings,
})
} catch (error) {
logger.error(
`[${requestId}] Admin API: Failed to activate version for workflow ${workflowId}`,
{
error,
}
{ error }
)
return internalErrorResponse('Failed to activate deployment version')
}

View File

@@ -2,7 +2,6 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { createRunSegment } from '@/lib/copilot/async-runs/repository'
import { appendCopilotLogContext } from '@/lib/copilot/logging'
import { COPILOT_REQUEST_MODES } from '@/lib/copilot/models'
import { orchestrateCopilotStream } from '@/lib/copilot/orchestrator'
import { getWorkflowById, resolveWorkflowIdForUser } from '@/lib/workflows/utils'
@@ -84,17 +83,15 @@ export async function POST(req: NextRequest) {
const chatId = parsed.chatId || crypto.randomUUID()
messageId = crypto.randomUUID()
logger.error(
appendCopilotLogContext('Received headless copilot chat start request', { messageId }),
{
workflowId: resolved.workflowId,
workflowName: parsed.workflowName,
chatId,
mode: transportMode,
autoExecuteTools: parsed.autoExecuteTools,
timeout: parsed.timeout,
}
)
const reqLogger = logger.withMetadata({ messageId })
reqLogger.info('Received headless copilot chat start request', {
workflowId: resolved.workflowId,
workflowName: parsed.workflowName,
chatId,
mode: transportMode,
autoExecuteTools: parsed.autoExecuteTools,
timeout: parsed.timeout,
})
const requestPayload = {
message: parsed.message,
workflowId: resolved.workflowId,
@@ -144,7 +141,7 @@ export async function POST(req: NextRequest) {
)
}
logger.error(appendCopilotLogContext('Headless copilot request failed', { messageId }), {
logger.withMetadata({ messageId }).error('Headless copilot request failed', {
error: error instanceof Error ? error.message : String(error),
})
return NextResponse.json({ success: false, error: 'Internal server error' }, { status: 500 })
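The `withMetadata` pattern in this file replaces the string-prefix helper `appendCopilotLogContext` with a child logger that carries bound context into every subsequent call. A minimal sketch of how such a child logger might work (the real `@sim/logger` implementation may differ):

```typescript
// Minimal child-logger sketch: withMetadata returns a new logger whose
// entries merge the bound metadata with per-call fields. Illustrative only.
type Meta = Record<string, unknown>

class SketchLogger {
  constructor(
    private name: string,
    private meta: Meta = {}
  ) {}

  withMetadata(extra: Meta): SketchLogger {
    // Child loggers accumulate context; later keys win on collision.
    return new SketchLogger(this.name, { ...this.meta, ...extra })
  }

  info(message: string, fields: Meta = {}): Meta {
    const entry = { level: 'info', logger: this.name, message, ...this.meta, ...fields }
    console.log(JSON.stringify(entry))
    return entry
  }
}

const base = new SketchLogger('CopilotChatAPI')
const reqLogger = base.withMetadata({ messageId: 'abc-123' })
reqLogger.info('Received headless copilot chat start request', { chatId: 'c1' })
// every entry from reqLogger now carries messageId without repeating it per call
```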

View File

@@ -1,26 +1,9 @@
import { db, workflow, workflowDeploymentVersion } from '@sim/db'
import { db, workflow } from '@sim/db'
import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { eq } from 'drizzle-orm'
import type { NextRequest } from 'next/server'
import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import { generateRequestId } from '@/lib/core/utils/request'
import { removeMcpToolsForWorkflow, syncMcpToolsForWorkflow } from '@/lib/mcp/workflow-mcp-sync'
import {
cleanupWebhooksForWorkflow,
restorePreviousVersionWebhooks,
saveTriggerWebhooksForDeploy,
} from '@/lib/webhooks/deploy'
import {
activateWorkflowVersionById,
deployWorkflow,
loadWorkflowFromNormalizedTables,
undeployWorkflow,
} from '@/lib/workflows/persistence/utils'
import {
cleanupDeploymentVersion,
createSchedulesForDeploy,
validateWorkflowSchedules,
} from '@/lib/workflows/schedules'
import { performFullDeploy, performFullUndeploy } from '@/lib/workflows/orchestration'
import { validateWorkflowPermissions } from '@/lib/workflows/utils'
import {
checkNeedsRedeployment,
@@ -97,164 +80,22 @@ export async function POST(request: NextRequest, { params }: { params: Promise<{
return createErrorResponse('Unable to determine deploying user', 400)
}
const normalizedData = await loadWorkflowFromNormalizedTables(id)
if (!normalizedData) {
return createErrorResponse('Failed to load workflow state', 500)
}
const scheduleValidation = validateWorkflowSchedules(normalizedData.blocks)
if (!scheduleValidation.isValid) {
logger.warn(
`[${requestId}] Schedule validation failed for workflow ${id}: ${scheduleValidation.error}`
)
return createErrorResponse(`Invalid schedule configuration: ${scheduleValidation.error}`, 400)
}
const [currentActiveVersion] = await db
.select({ id: workflowDeploymentVersion.id })
.from(workflowDeploymentVersion)
.where(
and(
eq(workflowDeploymentVersion.workflowId, id),
eq(workflowDeploymentVersion.isActive, true)
)
)
.limit(1)
const previousVersionId = currentActiveVersion?.id
const rollbackDeployment = async () => {
if (previousVersionId) {
await restorePreviousVersionWebhooks({
request,
workflow: workflowData as Record<string, unknown>,
userId: actorUserId,
previousVersionId,
requestId,
})
const reactivateResult = await activateWorkflowVersionById({
workflowId: id,
deploymentVersionId: previousVersionId,
})
if (reactivateResult.success) {
return
}
}
await undeployWorkflow({ workflowId: id })
}
const deployResult = await deployWorkflow({
const result = await performFullDeploy({
workflowId: id,
deployedBy: actorUserId,
workflowName: workflowData!.name,
})
if (!deployResult.success) {
return createErrorResponse(deployResult.error || 'Failed to deploy workflow', 500)
}
const deployedAt = deployResult.deployedAt!
const deploymentVersionId = deployResult.deploymentVersionId
if (!deploymentVersionId) {
await undeployWorkflow({ workflowId: id })
return createErrorResponse('Failed to resolve deployment version', 500)
}
const triggerSaveResult = await saveTriggerWebhooksForDeploy({
request,
workflowId: id,
workflow: workflowData,
userId: actorUserId,
blocks: normalizedData.blocks,
workflowName: workflowData!.name || undefined,
requestId,
deploymentVersionId,
previousVersionId,
request,
})
if (!triggerSaveResult.success) {
await cleanupDeploymentVersion({
workflowId: id,
workflow: workflowData as Record<string, unknown>,
requestId,
deploymentVersionId,
})
await rollbackDeployment()
return createErrorResponse(
triggerSaveResult.error?.message || 'Failed to save trigger configuration',
triggerSaveResult.error?.status || 500
)
}
let scheduleInfo: { scheduleId?: string; cronExpression?: string; nextRunAt?: Date } = {}
const scheduleResult = await createSchedulesForDeploy(
id,
normalizedData.blocks,
db,
deploymentVersionId
)
if (!scheduleResult.success) {
logger.error(
`[${requestId}] Failed to create schedule for workflow ${id}: ${scheduleResult.error}`
)
await cleanupDeploymentVersion({
workflowId: id,
workflow: workflowData as Record<string, unknown>,
requestId,
deploymentVersionId,
})
await rollbackDeployment()
return createErrorResponse(scheduleResult.error || 'Failed to create schedule', 500)
}
if (scheduleResult.scheduleId) {
scheduleInfo = {
scheduleId: scheduleResult.scheduleId,
cronExpression: scheduleResult.cronExpression,
nextRunAt: scheduleResult.nextRunAt,
}
logger.info(
`[${requestId}] Schedule created for workflow ${id}: ${scheduleResult.scheduleId}`
)
}
if (previousVersionId && previousVersionId !== deploymentVersionId) {
try {
logger.info(`[${requestId}] Cleaning up previous version ${previousVersionId} DB records`)
await cleanupDeploymentVersion({
workflowId: id,
workflow: workflowData as Record<string, unknown>,
requestId,
deploymentVersionId: previousVersionId,
skipExternalCleanup: true,
})
} catch (cleanupError) {
logger.error(
`[${requestId}] Failed to clean up previous version ${previousVersionId}`,
cleanupError
)
// Non-fatal - continue with success response
}
if (!result.success) {
const status =
result.errorCode === 'validation' ? 400 : result.errorCode === 'not_found' ? 404 : 500
return createErrorResponse(result.error || 'Failed to deploy workflow', status)
}
logger.info(`[${requestId}] Workflow deployed successfully: ${id}`)
// Sync MCP tools with the latest parameter schema
await syncMcpToolsForWorkflow({ workflowId: id, requestId, context: 'deploy' })
recordAudit({
workspaceId: workflowData?.workspaceId || null,
actorId: actorUserId,
actorName: session?.user?.name,
actorEmail: session?.user?.email,
action: AuditAction.WORKFLOW_DEPLOYED,
resourceType: AuditResourceType.WORKFLOW,
resourceId: id,
resourceName: workflowData?.name,
description: `Deployed workflow "${workflowData?.name || id}"`,
metadata: { version: deploymentVersionId },
request,
})
const responseApiKeyInfo = workflowData!.workspaceId
? 'Workspace API keys'
: 'Personal API keys'
@@ -262,25 +103,13 @@ export async function POST(request: NextRequest, { params }: { params: Promise<{
return createSuccessResponse({
apiKey: responseApiKeyInfo,
isDeployed: true,
deployedAt,
schedule: scheduleInfo.scheduleId
? {
id: scheduleInfo.scheduleId,
cronExpression: scheduleInfo.cronExpression,
nextRunAt: scheduleInfo.nextRunAt,
}
: undefined,
warnings: triggerSaveResult.warnings,
deployedAt: result.deployedAt,
warnings: result.warnings,
})
} catch (error: any) {
logger.error(`[${requestId}] Error deploying workflow: ${id}`, {
error: error.message,
stack: error.stack,
name: error.name,
cause: error.cause,
fullError: error,
})
return createErrorResponse(error.message || 'Failed to deploy workflow', 500)
} catch (error: unknown) {
const message = error instanceof Error ? error.message : 'Failed to deploy workflow'
logger.error(`[${requestId}] Error deploying workflow: ${id}`, { error })
return createErrorResponse(message, 500)
}
}
@@ -328,60 +157,36 @@ export async function PATCH(request: NextRequest, { params }: { params: Promise<
}
export async function DELETE(
request: NextRequest,
_request: NextRequest,
{ params }: { params: Promise<{ id: string }> }
) {
const requestId = generateRequestId()
const { id } = await params
try {
const {
error,
session,
workflow: workflowData,
} = await validateWorkflowPermissions(id, requestId, 'admin')
const { error, session } = await validateWorkflowPermissions(id, requestId, 'admin')
if (error) {
return createErrorResponse(error.message, error.status)
}
const result = await undeployWorkflow({ workflowId: id })
const result = await performFullUndeploy({
workflowId: id,
userId: session!.user.id,
requestId,
})
if (!result.success) {
return createErrorResponse(result.error || 'Failed to undeploy workflow', 500)
}
await cleanupWebhooksForWorkflow(id, workflowData as Record<string, unknown>, requestId)
await removeMcpToolsForWorkflow(id, requestId)
logger.info(`[${requestId}] Workflow undeployed successfully: ${id}`)
try {
const { PlatformEvents } = await import('@/lib/core/telemetry')
PlatformEvents.workflowUndeployed({ workflowId: id })
} catch (_e) {
// Silently fail
}
recordAudit({
workspaceId: workflowData?.workspaceId || null,
actorId: session!.user.id,
actorName: session?.user?.name,
actorEmail: session?.user?.email,
action: AuditAction.WORKFLOW_UNDEPLOYED,
resourceType: AuditResourceType.WORKFLOW,
resourceId: id,
resourceName: workflowData?.name,
description: `Undeployed workflow "${workflowData?.name || id}"`,
request,
})
return createSuccessResponse({
isDeployed: false,
deployedAt: null,
apiKey: null,
})
} catch (error: any) {
logger.error(`[${requestId}] Error undeploying workflow: ${id}`, error)
return createErrorResponse(error.message || 'Failed to undeploy workflow', 500)
} catch (error: unknown) {
const message = error instanceof Error ? error.message : 'Failed to undeploy workflow'
logger.error(`[${requestId}] Error undeploying workflow: ${id}`, { error })
return createErrorResponse(message, 500)
}
}

View File

@@ -3,19 +3,10 @@ import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import type { NextRequest } from 'next/server'
import { z } from 'zod'
import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import { generateRequestId } from '@/lib/core/utils/request'
import { syncMcpToolsForWorkflow } from '@/lib/mcp/workflow-mcp-sync'
import { restorePreviousVersionWebhooks, saveTriggerWebhooksForDeploy } from '@/lib/webhooks/deploy'
import { activateWorkflowVersion } from '@/lib/workflows/persistence/utils'
import {
cleanupDeploymentVersion,
createSchedulesForDeploy,
validateWorkflowSchedules,
} from '@/lib/workflows/schedules'
import { performActivateVersion } from '@/lib/workflows/orchestration'
import { validateWorkflowPermissions } from '@/lib/workflows/utils'
import { createErrorResponse, createSuccessResponse } from '@/app/api/workflows/utils'
import type { BlockState } from '@/stores/workflows/workflow/types'
const logger = createLogger('WorkflowDeploymentVersionAPI')
@@ -129,140 +120,25 @@ export async function PATCH(
return createErrorResponse('Unable to determine activating user', 400)
}
const [versionRow] = await db
.select({
id: workflowDeploymentVersion.id,
state: workflowDeploymentVersion.state,
})
.from(workflowDeploymentVersion)
.where(
and(
eq(workflowDeploymentVersion.workflowId, id),
eq(workflowDeploymentVersion.version, versionNum)
)
)
.limit(1)
if (!versionRow?.state) {
return createErrorResponse('Deployment version not found', 404)
}
const [currentActiveVersion] = await db
.select({ id: workflowDeploymentVersion.id })
.from(workflowDeploymentVersion)
.where(
and(
eq(workflowDeploymentVersion.workflowId, id),
eq(workflowDeploymentVersion.isActive, true)
)
)
.limit(1)
const previousVersionId = currentActiveVersion?.id
const deployedState = versionRow.state as { blocks?: Record<string, BlockState> }
const blocks = deployedState.blocks
if (!blocks || typeof blocks !== 'object') {
return createErrorResponse('Invalid deployed state structure', 500)
}
const scheduleValidation = validateWorkflowSchedules(blocks)
if (!scheduleValidation.isValid) {
return createErrorResponse(
`Invalid schedule configuration: ${scheduleValidation.error}`,
400
)
}
const triggerSaveResult = await saveTriggerWebhooksForDeploy({
request,
const activateResult = await performActivateVersion({
workflowId: id,
workflow: workflowData as Record<string, unknown>,
version: versionNum,
userId: actorUserId,
blocks,
workflow: workflowData as Record<string, unknown>,
requestId,
deploymentVersionId: versionRow.id,
previousVersionId,
forceRecreateSubscriptions: true,
request,
})
if (!triggerSaveResult.success) {
return createErrorResponse(
triggerSaveResult.error?.message || 'Failed to sync trigger configuration',
triggerSaveResult.error?.status || 500
)
if (!activateResult.success) {
const status =
activateResult.errorCode === 'not_found'
? 404
: activateResult.errorCode === 'validation'
? 400
: 500
return createErrorResponse(activateResult.error || 'Failed to activate deployment', status)
}
const scheduleResult = await createSchedulesForDeploy(id, blocks, db, versionRow.id)
if (!scheduleResult.success) {
await cleanupDeploymentVersion({
workflowId: id,
workflow: workflowData as Record<string, unknown>,
requestId,
deploymentVersionId: versionRow.id,
})
if (previousVersionId) {
await restorePreviousVersionWebhooks({
request,
workflow: workflowData as Record<string, unknown>,
userId: actorUserId,
previousVersionId,
requestId,
})
}
return createErrorResponse(scheduleResult.error || 'Failed to sync schedules', 500)
}
const result = await activateWorkflowVersion({ workflowId: id, version: versionNum })
if (!result.success) {
await cleanupDeploymentVersion({
workflowId: id,
workflow: workflowData as Record<string, unknown>,
requestId,
deploymentVersionId: versionRow.id,
})
if (previousVersionId) {
await restorePreviousVersionWebhooks({
request,
workflow: workflowData as Record<string, unknown>,
userId: actorUserId,
previousVersionId,
requestId,
})
}
return createErrorResponse(result.error || 'Failed to activate deployment', 400)
}
if (previousVersionId && previousVersionId !== versionRow.id) {
try {
logger.info(
`[${requestId}] Cleaning up previous version ${previousVersionId} webhooks/schedules`
)
await cleanupDeploymentVersion({
workflowId: id,
workflow: workflowData as Record<string, unknown>,
requestId,
deploymentVersionId: previousVersionId,
skipExternalCleanup: true,
})
logger.info(`[${requestId}] Previous version cleanup completed`)
} catch (cleanupError) {
logger.error(
`[${requestId}] Failed to clean up previous version ${previousVersionId}`,
cleanupError
)
}
}
await syncMcpToolsForWorkflow({
workflowId: id,
requestId,
state: versionRow.state,
context: 'activate',
})
// Apply name/description updates if provided alongside activation
let updatedName: string | null | undefined
let updatedDescription: string | null | undefined
if (name !== undefined || description !== undefined) {
@@ -298,23 +174,10 @@ export async function PATCH(
}
}
recordAudit({
workspaceId: workflowData?.workspaceId,
actorId: actorUserId,
actorName: session?.user?.name,
actorEmail: session?.user?.email,
action: AuditAction.WORKFLOW_DEPLOYMENT_ACTIVATED,
resourceType: AuditResourceType.WORKFLOW,
resourceId: id,
description: `Activated deployment version ${versionNum}`,
metadata: { version: versionNum },
request,
})
return createSuccessResponse({
success: true,
deployedAt: result.deployedAt,
warnings: triggerSaveResult.warnings,
deployedAt: activateResult.deployedAt,
warnings: activateResult.warnings,
...(updatedName !== undefined && { name: updatedName }),
...(updatedDescription !== undefined && { description: updatedDescription }),
})

View File

@@ -82,14 +82,16 @@ vi.mock('@/background/workflow-execution', () => ({
executeWorkflowJob: vi.fn(),
}))
vi.mock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue({
vi.mock('@sim/logger', () => {
const createMockLogger = (): Record<string, any> => ({
info: vi.fn(),
warn: vi.fn(),
error: vi.fn(),
debug: vi.fn(),
}),
}))
withMetadata: vi.fn(() => createMockLogger()),
})
return { createLogger: vi.fn(() => createMockLogger()) }
})
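The mock rework above matters because the production code now chains `logger.withMetadata(...).error(...)`: a flat mock exposing only `info/warn/error/debug` would return `undefined` from `withMetadata` and crash the tests. Outside of vitest, the recursive shape can be illustrated with plain stubs:

```typescript
// Plain-TS illustration of the recursive mock factory: withMetadata returns
// a fresh mock, so arbitrarily deep chains never hit undefined.
type MockLogger = {
  info: (...args: unknown[]) => void
  warn: (...args: unknown[]) => void
  error: (...args: unknown[]) => void
  debug: (...args: unknown[]) => void
  withMetadata: (meta: Record<string, unknown>) => MockLogger
}

function createMockLogger(): MockLogger {
  const noop = () => {}
  return {
    info: noop,
    warn: noop,
    error: noop,
    debug: noop,
    withMetadata: () => createMockLogger(),
  }
}

// Chained calls stay safe at any depth:
createMockLogger().withMetadata({ requestId: 'r1' }).withMetadata({ userId: 'u1' }).info('ok')
```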
vi.mock('uuid', () => ({
validate: vi.fn().mockReturnValue(true),

View File

@@ -187,6 +187,13 @@ type AsyncExecutionParams = {
async function handleAsyncExecution(params: AsyncExecutionParams): Promise<NextResponse> {
const { requestId, workflowId, userId, workspaceId, input, triggerType, executionId, callChain } =
params
const asyncLogger = logger.withMetadata({
requestId,
workflowId,
workspaceId,
userId,
executionId,
})
const correlation = {
executionId,
@@ -233,10 +240,7 @@ async function handleAsyncExecution(params: AsyncExecutionParams): Promise<NextR
metadata: { workflowId, userId, correlation },
})
logger.info(`[${requestId}] Queued async workflow execution`, {
workflowId,
jobId,
})
asyncLogger.info('Queued async workflow execution', { jobId })
if (shouldExecuteInline() && jobQueue) {
const inlineJobQueue = jobQueue
@@ -247,14 +251,14 @@ async function handleAsyncExecution(params: AsyncExecutionParams): Promise<NextR
await inlineJobQueue.completeJob(jobId, output)
} catch (error) {
const errorMessage = error instanceof Error ? error.message : String(error)
logger.error(`[${requestId}] Async workflow execution failed`, {
asyncLogger.error('Async workflow execution failed', {
jobId,
error: errorMessage,
})
try {
await inlineJobQueue.markJobFailed(jobId, errorMessage)
} catch (markFailedError) {
logger.error(`[${requestId}] Failed to mark job as failed`, {
asyncLogger.error('Failed to mark job as failed', {
jobId,
error:
markFailedError instanceof Error
@@ -289,7 +293,7 @@ async function handleAsyncExecution(params: AsyncExecutionParams): Promise<NextR
)
}
logger.error(`[${requestId}] Failed to queue async execution`, error)
asyncLogger.error('Failed to queue async execution', error)
return NextResponse.json(
{ error: `Failed to queue async execution: ${error.message}` },
{ status: 500 }
@@ -352,11 +356,12 @@ async function handleExecutePost(
): Promise<NextResponse | Response> {
const requestId = generateRequestId()
const { id: workflowId } = await params
let reqLogger = logger.withMetadata({ requestId, workflowId })
const incomingCallChain = parseCallChain(req.headers.get(SIM_VIA_HEADER))
const callChainError = validateCallChain(incomingCallChain)
if (callChainError) {
logger.warn(`[${requestId}] Call chain rejected for workflow ${workflowId}: ${callChainError}`)
reqLogger.warn(`Call chain rejected: ${callChainError}`)
return NextResponse.json({ error: callChainError }, { status: 409 })
}
const callChain = buildNextCallChain(incomingCallChain, workflowId)
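`parseCallChain` and `validateCallChain` guard workflow-to-workflow invocation via the `SIM_VIA_HEADER` header, rejecting bad chains with a 409. A hedged sketch of such a guard (the header format, depth limit, and exact rules here are assumptions, not confirmed by this diff):

```typescript
// Sketch: parse a comma-separated chain of workflow IDs from a via-style
// header, then reject cycles and excessive depth. Limits are assumed.
const MAX_CALL_DEPTH = 10 // assumed limit, not from the source

function parseChain(header: string | null): string[] {
  if (!header) return []
  return header.split(',').map((s) => s.trim()).filter(Boolean)
}

function validateChain(chain: string[], nextWorkflowId: string): string | null {
  if (chain.includes(nextWorkflowId)) {
    return `Cycle detected: workflow ${nextWorkflowId} is already in the call chain`
  }
  if (chain.length >= MAX_CALL_DEPTH) {
    return `Call chain exceeds maximum depth of ${MAX_CALL_DEPTH}`
  }
  return null // valid; caller appends nextWorkflowId and forwards the header
}

console.log(validateChain(['wf-a', 'wf-b'], 'wf-a')) // cycle error message
console.log(validateChain(['wf-a'], 'wf-c'))         // null
```

Validating before building the next chain means a workflow that (directly or transitively) invokes itself fails fast instead of recursing until a timeout.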
@@ -414,12 +419,12 @@ async function handleExecutePost(
body = JSON.parse(text)
}
} catch (error) {
logger.warn(`[${requestId}] Failed to parse request body, using defaults`)
reqLogger.warn('Failed to parse request body, using defaults')
}
const validation = ExecuteWorkflowSchema.safeParse(body)
if (!validation.success) {
logger.warn(`[${requestId}] Invalid request body:`, validation.error.errors)
reqLogger.warn('Invalid request body:', validation.error.errors)
return NextResponse.json(
{
error: 'Invalid request body',
@@ -589,9 +594,10 @@ async function handleExecutePost(
)
}
logger.info(`[${requestId}] Starting server-side execution`, {
workflowId,
userId,
const executionId = uuidv4()
reqLogger = reqLogger.withMetadata({ userId, executionId })
reqLogger.info('Starting server-side execution', {
hasInput: !!input,
triggerType,
authType: auth.authType,
@@ -600,8 +606,6 @@ async function handleExecutePost(
enableSSE,
isAsyncMode,
})
const executionId = uuidv4()
let loggingTriggerType: CoreTriggerType = 'manual'
if (CORE_TRIGGER_TYPES.includes(triggerType as CoreTriggerType)) {
loggingTriggerType = triggerType as CoreTriggerType
@@ -657,10 +661,11 @@ async function handleExecutePost(
const workflow = preprocessResult.workflowRecord!
if (!workflow.workspaceId) {
logger.error(`[${requestId}] Workflow ${workflowId} has no workspaceId`)
reqLogger.error('Workflow has no workspaceId')
return NextResponse.json({ error: 'Workflow has no associated workspace' }, { status: 500 })
}
const workspaceId = workflow.workspaceId
reqLogger = reqLogger.withMetadata({ workspaceId, userId: actorUserId })
if (auth.apiKeyType === 'workspace' && auth.workspaceId !== workspaceId) {
return NextResponse.json(
@@ -669,11 +674,7 @@ async function handleExecutePost(
)
}
logger.info(`[${requestId}] Preprocessing passed`, {
workflowId,
actorUserId,
workspaceId,
})
reqLogger.info('Preprocessing passed')
if (isAsyncMode) {
return handleAsyncExecution({
@@ -744,7 +745,7 @@ async function handleExecutePost(
)
}
} catch (fileError) {
logger.error(`[${requestId}] Failed to process input file fields:`, fileError)
reqLogger.error('Failed to process input file fields:', fileError)
await loggingSession.safeStart({
userId: actorUserId,
@@ -772,7 +773,7 @@ async function handleExecutePost(
sanitizedWorkflowStateOverride || cachedWorkflowData || undefined
if (!enableSSE) {
logger.info(`[${requestId}] Using non-SSE execution (direct JSON response)`)
reqLogger.info('Using non-SSE execution (direct JSON response)')
const metadata: ExecutionMetadata = {
requestId,
executionId,
@@ -866,7 +867,7 @@ async function handleExecutePost(
const errorMessage = error instanceof Error ? error.message : 'Unknown error'
logger.error(`[${requestId}] Queued non-SSE execution failed: ${errorMessage}`)
reqLogger.error(`Queued non-SSE execution failed: ${errorMessage}`)
return NextResponse.json(
{
@@ -908,7 +909,7 @@ async function handleExecutePost(
timeoutController.timeoutMs
) {
const timeoutErrorMessage = getTimeoutErrorMessage(null, timeoutController.timeoutMs)
logger.info(`[${requestId}] Non-SSE execution timed out`, {
reqLogger.info('Non-SSE execution timed out', {
timeoutMs: timeoutController.timeoutMs,
})
await loggingSession.markAsFailed(timeoutErrorMessage)
@@ -962,7 +963,7 @@ async function handleExecutePost(
} catch (error: unknown) {
const errorMessage = error instanceof Error ? error.message : 'Unknown error'
logger.error(`[${requestId}] Non-SSE execution failed: ${errorMessage}`)
reqLogger.error(`Non-SSE execution failed: ${errorMessage}`)
const executionResult = hasExecutionResult(error) ? error.executionResult : undefined
@@ -985,7 +986,7 @@ async function handleExecutePost(
timeoutController.cleanup()
if (executionId) {
void cleanupExecutionBase64Cache(executionId).catch((error) => {
logger.error(`[${requestId}] Failed to cleanup base64 cache`, { error })
reqLogger.error('Failed to cleanup base64 cache', { error })
})
}
}
@@ -1039,9 +1040,9 @@ async function handleExecutePost(
})
}
logger.info(`[${requestId}] Using SSE console log streaming (manual execution)`)
reqLogger.info('Using SSE console log streaming (manual execution)')
} else {
logger.info(`[${requestId}] Using streaming API response`)
reqLogger.info('Using streaming API response')
const resolvedSelectedOutputs = resolveOutputIds(
selectedOutputs,
@@ -1135,7 +1136,7 @@ async function handleExecutePost(
iterationContext?: IterationContext,
childWorkflowContext?: ChildWorkflowContext
) => {
logger.info(`[${requestId}] 🔷 onBlockStart called:`, { blockId, blockName, blockType })
reqLogger.info('onBlockStart called', { blockId, blockName, blockType })
sendEvent({
type: 'block:started',
timestamp: new Date().toISOString(),
@@ -1184,7 +1185,7 @@ async function handleExecutePost(
: {}
if (hasError) {
logger.info(`[${requestId}] ✗ onBlockComplete (error) called:`, {
reqLogger.info('onBlockComplete (error) called', {
blockId,
blockName,
blockType,
@@ -1219,7 +1220,7 @@ async function handleExecutePost(
},
})
} else {
logger.info(`[${requestId}] ✓ onBlockComplete called:`, {
reqLogger.info('onBlockComplete called', {
blockId,
blockName,
blockType,
@@ -1284,7 +1285,7 @@ async function handleExecutePost(
data: { blockId },
})
} catch (error) {
logger.error(`[${requestId}] Error streaming block content:`, error)
reqLogger.error('Error streaming block content:', error)
} finally {
try {
await reader.cancel().catch(() => {})
@@ -1360,9 +1361,7 @@ async function handleExecutePost(
if (result.status === 'paused') {
if (!result.snapshotSeed) {
logger.error(`[${requestId}] Missing snapshot seed for paused execution`, {
executionId,
})
reqLogger.error('Missing snapshot seed for paused execution')
await loggingSession.markAsFailed('Missing snapshot seed for paused execution')
} else {
try {
@@ -1374,8 +1373,7 @@ async function handleExecutePost(
executorUserId: result.metadata?.userId,
})
} catch (pauseError) {
logger.error(`[${requestId}] Failed to persist pause result`, {
executionId,
reqLogger.error('Failed to persist pause result', {
error: pauseError instanceof Error ? pauseError.message : String(pauseError),
})
await loggingSession.markAsFailed(
@@ -1390,7 +1388,7 @@ async function handleExecutePost(
if (result.status === 'cancelled') {
if (timeoutController.isTimedOut() && timeoutController.timeoutMs) {
const timeoutErrorMessage = getTimeoutErrorMessage(null, timeoutController.timeoutMs)
logger.info(`[${requestId}] Workflow execution timed out`, {
reqLogger.info('Workflow execution timed out', {
timeoutMs: timeoutController.timeoutMs,
})
@@ -1408,7 +1406,7 @@ async function handleExecutePost(
})
finalMetaStatus = 'error'
} else {
logger.info(`[${requestId}] Workflow execution was cancelled`)
reqLogger.info('Workflow execution was cancelled')
sendEvent({
type: 'execution:cancelled',
@@ -1452,7 +1450,7 @@ async function handleExecutePost(
? error.message
: 'Unknown error'
logger.error(`[${requestId}] SSE execution failed: ${errorMessage}`, { isTimeout })
reqLogger.error(`SSE execution failed: ${errorMessage}`, { isTimeout })
const executionResult = hasExecutionResult(error) ? error.executionResult : undefined
@@ -1475,7 +1473,7 @@ async function handleExecutePost(
try {
await eventWriter.close()
} catch (closeError) {
logger.warn(`[${requestId}] Failed to close event writer`, {
reqLogger.warn('Failed to close event writer', {
error: closeError instanceof Error ? closeError.message : String(closeError),
})
}
@@ -1496,7 +1494,7 @@ async function handleExecutePost(
},
cancel() {
isStreamClosed = true
logger.info(`[${requestId}] Client disconnected from SSE stream`)
reqLogger.info('Client disconnected from SSE stream')
},
})
@@ -1518,7 +1516,7 @@ async function handleExecutePost(
)
}
logger.error(`[${requestId}] Failed to start workflow execution:`, error)
reqLogger.error('Failed to start workflow execution:', error)
return NextResponse.json(
{ error: error.message || 'Failed to start workflow execution' },
{ status: 500 }
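The hunks above replace `logger.error(`[${requestId}] …`)` call sites with a request-scoped `reqLogger` built via `withMetadata`, enriched as context (userId, executionId, workspaceId) becomes known. A minimal sketch of that pattern, assuming a `withMetadata` child-logger API — the class below is a hypothetical stand-in, not the real `@sim/logger`:

```typescript
type Metadata = Record<string, unknown>

class Logger {
  constructor(private meta: Metadata = {}) {}

  // Returns a child logger carrying the merged metadata, so call sites
  // no longer interpolate `[${requestId}]` into every message.
  withMetadata(extra: Metadata): Logger {
    return new Logger({ ...this.meta, ...extra })
  }

  format(message: string): string {
    const tags = Object.entries(this.meta)
      .map(([k, v]) => `${k}=${String(v)}`)
      .join(' ')
    return tags ? `${message} (${tags})` : message
  }

  info(message: string, data?: Metadata): void {
    console.log(this.format(message), data ?? '')
  }
}

const logger = new Logger()
let reqLogger = logger.withMetadata({ requestId: 'req-1', workflowId: 'wf-1' })
// Enrich the same logger later in the handler, as the diff does:
reqLogger = reqLogger.withMetadata({ executionId: 'exec-1' })
reqLogger.info('Starting server-side execution')
```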


@@ -5,14 +5,7 @@
* @vitest-environment node
*/
import {
auditMock,
envMock,
loggerMock,
requestUtilsMock,
setupGlobalFetchMock,
telemetryMock,
} from '@sim/testing'
import { auditMock, envMock, loggerMock, requestUtilsMock, telemetryMock } from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
@@ -21,7 +14,7 @@ const mockCheckSessionOrInternalAuth = vi.fn()
const mockLoadWorkflowFromNormalizedTables = vi.fn()
const mockGetWorkflowById = vi.fn()
const mockAuthorizeWorkflowByWorkspacePermission = vi.fn()
const mockArchiveWorkflow = vi.fn()
const mockPerformDeleteWorkflow = vi.fn()
const mockDbUpdate = vi.fn()
const mockDbSelect = vi.fn()
@@ -72,8 +65,8 @@ vi.mock('@/lib/workflows/utils', () => ({
}) => mockAuthorizeWorkflowByWorkspacePermission(params),
}))
vi.mock('@/lib/workflows/lifecycle', () => ({
archiveWorkflow: (...args: unknown[]) => mockArchiveWorkflow(...args),
vi.mock('@/lib/workflows/orchestration', () => ({
performDeleteWorkflow: (...args: unknown[]) => mockPerformDeleteWorkflow(...args),
}))
vi.mock('@sim/db', () => ({
@@ -294,18 +287,7 @@ describe('Workflow By ID API Route', () => {
workspacePermission: 'admin',
})
mockDbSelect.mockReturnValue({
from: vi.fn().mockReturnValue({
where: vi.fn().mockResolvedValue([{ id: 'workflow-123' }, { id: 'workflow-456' }]),
}),
})
mockArchiveWorkflow.mockResolvedValue({
archived: true,
workflow: mockWorkflow,
})
setupGlobalFetchMock({ ok: true })
mockPerformDeleteWorkflow.mockResolvedValue({ success: true })
const req = new NextRequest('http://localhost:3000/api/workflows/workflow-123', {
method: 'DELETE',
@@ -317,6 +299,12 @@ describe('Workflow By ID API Route', () => {
expect(response.status).toBe(200)
const data = await response.json()
expect(data.success).toBe(true)
expect(mockPerformDeleteWorkflow).toHaveBeenCalledWith(
expect.objectContaining({
workflowId: 'workflow-123',
userId: 'user-123',
})
)
})
it('should allow admin to delete workspace workflow', async () => {
@@ -337,19 +325,7 @@ describe('Workflow By ID API Route', () => {
workspacePermission: 'admin',
})
// Mock db.select() to return multiple workflows so deletion is allowed
mockDbSelect.mockReturnValue({
from: vi.fn().mockReturnValue({
where: vi.fn().mockResolvedValue([{ id: 'workflow-123' }, { id: 'workflow-456' }]),
}),
})
mockArchiveWorkflow.mockResolvedValue({
archived: true,
workflow: mockWorkflow,
})
setupGlobalFetchMock({ ok: true })
mockPerformDeleteWorkflow.mockResolvedValue({ success: true })
const req = new NextRequest('http://localhost:3000/api/workflows/workflow-123', {
method: 'DELETE',
@@ -381,11 +357,10 @@ describe('Workflow By ID API Route', () => {
workspacePermission: 'admin',
})
// Mock db.select() to return only 1 workflow (the one being deleted)
mockDbSelect.mockReturnValue({
from: vi.fn().mockReturnValue({
where: vi.fn().mockResolvedValue([{ id: 'workflow-123' }]),
}),
mockPerformDeleteWorkflow.mockResolvedValue({
success: false,
error: 'Cannot delete the only workflow in the workspace',
errorCode: 'validation',
})
const req = new NextRequest('http://localhost:3000/api/workflows/workflow-123', {


@@ -1,13 +1,12 @@
import { db } from '@sim/db'
import { templates, workflow } from '@sim/db/schema'
import { workflow } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, isNull, ne } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import { AuthType, checkHybridAuth, checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { archiveWorkflow } from '@/lib/workflows/lifecycle'
import { performDeleteWorkflow } from '@/lib/workflows/orchestration'
import { loadWorkflowFromNormalizedTables } from '@/lib/workflows/persistence/utils'
import { authorizeWorkflowByWorkspacePermission, getWorkflowById } from '@/lib/workflows/utils'
@@ -184,28 +183,12 @@ export async function DELETE(
)
}
// Check if this is the last workflow in the workspace
if (workflowData.workspaceId) {
const totalWorkflowsInWorkspace = await db
.select({ id: workflow.id })
.from(workflow)
.where(and(eq(workflow.workspaceId, workflowData.workspaceId), isNull(workflow.archivedAt)))
if (totalWorkflowsInWorkspace.length <= 1) {
return NextResponse.json(
{ error: 'Cannot delete the only workflow in the workspace' },
{ status: 400 }
)
}
}
// Check if workflow has published templates before deletion
const { searchParams } = new URL(request.url)
const checkTemplates = searchParams.get('check-templates') === 'true'
const deleteTemplatesParam = searchParams.get('deleteTemplates')
if (checkTemplates) {
// Return template information for frontend to handle
const { templates } = await import('@sim/db/schema')
const publishedTemplates = await db
.select({
id: templates.id,
@@ -229,49 +212,22 @@ export async function DELETE(
})
}
// Handle template deletion based on user choice
if (deleteTemplatesParam !== null) {
const deleteTemplates = deleteTemplatesParam === 'delete'
const result = await performDeleteWorkflow({
workflowId,
userId,
requestId,
templateAction: deleteTemplatesParam === 'delete' ? 'delete' : 'orphan',
})
if (deleteTemplates) {
// Delete all templates associated with this workflow
await db.delete(templates).where(eq(templates.workflowId, workflowId))
logger.info(`[${requestId}] Deleted templates for workflow ${workflowId}`)
} else {
// Orphan the templates (set workflowId to null)
await db
.update(templates)
.set({ workflowId: null })
.where(eq(templates.workflowId, workflowId))
logger.info(`[${requestId}] Orphaned templates for workflow ${workflowId}`)
}
}
const archiveResult = await archiveWorkflow(workflowId, { requestId })
if (!archiveResult.workflow) {
return NextResponse.json({ error: 'Workflow not found' }, { status: 404 })
if (!result.success) {
const status =
result.errorCode === 'not_found' ? 404 : result.errorCode === 'validation' ? 400 : 500
return NextResponse.json({ error: result.error }, { status })
}
const elapsed = Date.now() - startTime
logger.info(`[${requestId}] Successfully archived workflow ${workflowId} in ${elapsed}ms`)
recordAudit({
workspaceId: workflowData.workspaceId || null,
actorId: userId,
actorName: auth.userName,
actorEmail: auth.userEmail,
action: AuditAction.WORKFLOW_DELETED,
resourceType: AuditResourceType.WORKFLOW,
resourceId: workflowId,
resourceName: workflowData.name,
description: `Archived workflow "${workflowData.name}"`,
metadata: {
archived: archiveResult.archived,
deleteTemplates: deleteTemplatesParam === 'delete',
},
request,
})
return NextResponse.json({ success: true }, { status: 200 })
} catch (error: any) {
const elapsed = Date.now() - startTime


@@ -61,7 +61,7 @@ export const navTourSteps: Step[] = [
target: '[data-tour="nav-tasks"]',
title: 'Tasks',
content:
'Tasks that work for you. Mothership can create, edit, and delete resource throughout the platform. It can also perform actions on your behalf, like sending emails, creating tasks, and more.',
'Tasks that work for you. Mothership can create, edit, and delete resources throughout the platform. It can also perform actions on your behalf, like sending emails, creating tasks, and more.',
placement: 'right',
disableBeacon: true,
},


@@ -1,6 +1,6 @@
'use client'
import { createContext, useCallback, useContext, useEffect, useState } from 'react'
import { createContext, useCallback, useContext } from 'react'
import type { TooltipRenderProps } from 'react-joyride'
import { TourTooltip } from '@/components/emcn'
@@ -59,18 +59,14 @@ export function TourTooltipAdapter({
closeProps,
}: TooltipRenderProps) {
const { isTooltipVisible, isEntrance, totalSteps } = useContext(TourStateContext)
const [targetEl, setTargetEl] = useState<HTMLElement | null>(null)
useEffect(() => {
const { target } = step
if (typeof target === 'string') {
setTargetEl(document.querySelector<HTMLElement>(target))
} else if (target instanceof HTMLElement) {
setTargetEl(target)
} else {
setTargetEl(null)
}
}, [step])
const { target } = step
const targetEl =
typeof target === 'string'
? document.querySelector<HTMLElement>(target)
: target instanceof HTMLElement
? target
: null
/**
* Forwards the Joyride tooltip ref safely, handling both


@@ -114,6 +114,16 @@ export function useTour({
[steps.length, stopTour, cancelPendingTransitions, scheduleReveal]
)
useEffect(() => {
if (!run) return
const html = document.documentElement
const prev = html.style.scrollbarGutter
html.style.scrollbarGutter = 'stable'
return () => {
html.style.scrollbarGutter = prev
}
}, [run])
/** Stop the tour when disabled becomes true (e.g. navigating away from the relevant page) */
useEffect(() => {
if (disabled && run) {


@@ -1737,6 +1737,8 @@ export function useChat(
}
if (options?.error) {
pendingRecoveryMessageRef.current = null
setPendingRecoveryMessage(null)
setMessageQueue([])
return
}


@@ -900,7 +900,11 @@ export function KnowledgeBase({
onClick={() => setShowConnectorsModal(true)}
className='flex shrink-0 cursor-pointer items-center gap-1.5 rounded-md px-2 py-1 text-[var(--text-secondary)] text-caption shadow-[inset_0_0_0_1px_var(--border)] transition-colors hover-hover:bg-[var(--surface-3)]'
>
{ConnectorIcon && <ConnectorIcon className='h-[14px] w-[14px]' />}
{connector.status === 'syncing' ? (
<Loader2 className='h-[14px] w-[14px] animate-spin' />
) : (
ConnectorIcon && <ConnectorIcon className='h-[14px] w-[14px]' />
)}
{def?.name || connector.connectorType}
</button>
)


@@ -19,21 +19,17 @@ import {
ModalHeader,
Tooltip,
} from '@/components/emcn'
import { useSession } from '@/lib/auth/auth-client'
import { consumeOAuthReturnContext, writeOAuthReturnContext } from '@/lib/credentials/client-state'
import {
getCanonicalScopesForProvider,
getProviderIdFromServiceId,
type OAuthProvider,
} from '@/lib/oauth'
import { consumeOAuthReturnContext } from '@/lib/credentials/client-state'
import { getProviderIdFromServiceId, type OAuthProvider } from '@/lib/oauth'
import { ConnectorSelectorField } from '@/app/workspace/[workspaceId]/knowledge/[id]/components/add-connector-modal/components/connector-selector-field'
import { OAuthRequiredModal } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/credential-selector/components/oauth-required-modal'
import { ConnectCredentialModal } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/credential-selector/components/connect-credential-modal'
import { getDependsOnFields } from '@/blocks/utils'
import { CONNECTOR_REGISTRY } from '@/connectors/registry'
import type { ConnectorConfig, ConnectorConfigField } from '@/connectors/types'
import { useCreateConnector } from '@/hooks/queries/kb/connectors'
import { useOAuthCredentials } from '@/hooks/queries/oauth/oauth-credentials'
import type { SelectorKey } from '@/hooks/selectors/types'
import { useCredentialRefreshTriggers } from '@/hooks/use-credential-refresh-triggers'
const SYNC_INTERVALS = [
{ label: 'Every hour', value: 60 },
@@ -69,7 +65,6 @@ export function AddConnectorModal({ open, onOpenChange, knowledgeBaseId }: AddCo
const [searchTerm, setSearchTerm] = useState('')
const { workspaceId } = useParams<{ workspaceId: string }>()
const { data: session } = useSession()
const { mutate: createConnector, isPending: isCreating } = useCreateConnector()
const connectorConfig = selectedType ? CONNECTOR_REGISTRY[selectedType] : null
@@ -82,10 +77,16 @@ export function AddConnectorModal({ open, onOpenChange, knowledgeBaseId }: AddCo
[connectorConfig]
)
const { data: credentials = [], isLoading: credentialsLoading } = useOAuthCredentials(
connectorProviderId ?? undefined,
{ enabled: Boolean(connectorConfig) && !isApiKeyMode, workspaceId }
)
const {
data: credentials = [],
isLoading: credentialsLoading,
refetch: refetchCredentials,
} = useOAuthCredentials(connectorProviderId ?? undefined, {
enabled: Boolean(connectorConfig) && !isApiKeyMode,
workspaceId,
})
useCredentialRefreshTriggers(refetchCredentials, connectorProviderId ?? '', workspaceId)
const effectiveCredentialId =
selectedCredentialId ?? (credentials.length === 1 ? credentials[0].id : null)
@@ -263,51 +264,9 @@ export function AddConnectorModal({ open, onOpenChange, knowledgeBaseId }: AddCo
)
}
const handleConnectNewAccount = useCallback(async () => {
if (!connectorConfig || !connectorProviderId || !workspaceId) return
const userName = session?.user?.name
const integrationName = connectorConfig.name
const displayName = userName ? `${userName}'s ${integrationName}` : integrationName
try {
const res = await fetch('/api/credentials/draft', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
workspaceId,
providerId: connectorProviderId,
displayName,
}),
})
if (!res.ok) {
setError('Failed to prepare credential. Please try again.')
return
}
} catch {
setError('Failed to prepare credential. Please try again.')
return
}
writeOAuthReturnContext({
origin: 'kb-connectors',
knowledgeBaseId,
displayName,
providerId: connectorProviderId,
preCount: credentials.length,
workspaceId,
requestedAt: Date.now(),
})
const handleConnectNewAccount = useCallback(() => {
setShowOAuthModal(true)
}, [
connectorConfig,
connectorProviderId,
workspaceId,
session?.user?.name,
knowledgeBaseId,
credentials.length,
])
}, [])
const filteredEntries = useMemo(() => {
const term = searchTerm.toLowerCase().trim()
@@ -396,40 +355,40 @@ export function AddConnectorModal({ open, onOpenChange, knowledgeBaseId }: AddCo
) : (
<div className='flex flex-col gap-2'>
<Label>Account</Label>
{credentialsLoading ? (
<div className='flex items-center gap-2 text-[var(--text-muted)] text-small'>
<Loader2 className='h-4 w-4 animate-spin' />
Loading credentials...
</div>
) : (
<Combobox
size='sm'
options={[
...credentials.map(
(cred): ComboboxOption => ({
label: cred.name || cred.provider,
value: cred.id,
icon: connectorConfig.icon,
})
),
{
label: 'Connect new account',
value: '__connect_new__',
icon: Plus,
onSelect: () => {
void handleConnectNewAccount()
},
<Combobox
size='sm'
options={[
...credentials.map(
(cred): ComboboxOption => ({
label: cred.name || cred.provider,
value: cred.id,
icon: connectorConfig.icon,
})
),
{
label:
credentials.length > 0
? `Connect another ${connectorConfig.name} account`
: `Connect ${connectorConfig.name} account`,
value: '__connect_new__',
icon: Plus,
onSelect: () => {
void handleConnectNewAccount()
},
]}
value={effectiveCredentialId ?? undefined}
onChange={(value) => setSelectedCredentialId(value)}
placeholder={
credentials.length === 0
? `No ${connectorConfig.name} accounts`
: 'Select account'
}
/>
)}
},
]}
value={effectiveCredentialId ?? undefined}
onChange={(value) => setSelectedCredentialId(value)}
onOpenChange={(isOpen) => {
if (isOpen) void refetchCredentials()
}}
placeholder={
credentials.length === 0
? `No ${connectorConfig.name} accounts`
: 'Select account'
}
isLoading={credentialsLoading}
/>
</div>
)}
@@ -590,20 +549,23 @@ export function AddConnectorModal({ open, onOpenChange, knowledgeBaseId }: AddCo
)}
</ModalContent>
</Modal>
{connectorConfig && connectorConfig.auth.mode === 'oauth' && connectorProviderId && (
<OAuthRequiredModal
isOpen={showOAuthModal}
onClose={() => {
consumeOAuthReturnContext()
setShowOAuthModal(false)
}}
provider={connectorProviderId}
toolName={connectorConfig.name}
requiredScopes={getCanonicalScopesForProvider(connectorProviderId)}
newScopes={[]}
serviceId={connectorConfig.auth.provider}
/>
)}
{showOAuthModal &&
connectorConfig &&
connectorConfig.auth.mode === 'oauth' &&
connectorProviderId && (
<ConnectCredentialModal
isOpen={showOAuthModal}
onClose={() => {
consumeOAuthReturnContext()
setShowOAuthModal(false)
}}
provider={connectorProviderId}
serviceId={connectorConfig.auth.provider}
workspaceId={workspaceId}
knowledgeBaseId={knowledgeBaseId}
credentialCount={credentials.length}
/>
)}
</>
)
}


@@ -36,6 +36,7 @@ import {
} from '@/lib/oauth'
import { getMissingRequiredScopes } from '@/lib/oauth/utils'
import { EditConnectorModal } from '@/app/workspace/[workspaceId]/knowledge/[id]/components/edit-connector-modal/edit-connector-modal'
import { ConnectCredentialModal } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/credential-selector/components/connect-credential-modal'
import { OAuthRequiredModal } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/credential-selector/components/oauth-required-modal'
import { CONNECTOR_REGISTRY } from '@/connectors/registry'
import type { ConnectorData, SyncLogData } from '@/hooks/queries/kb/connectors'
@@ -46,6 +47,7 @@ import {
useUpdateConnector,
} from '@/hooks/queries/kb/connectors'
import { useOAuthCredentials } from '@/hooks/queries/oauth/oauth-credentials'
import { useCredentialRefreshTriggers } from '@/hooks/use-credential-refresh-triggers'
const logger = createLogger('ConnectorsSection')
@@ -328,11 +330,16 @@ function ConnectorCard({
const requiredScopes =
connectorDef?.auth.mode === 'oauth' ? (connectorDef.auth.requiredScopes ?? []) : []
const { data: credentials } = useOAuthCredentials(providerId, { workspaceId })
const { data: credentials, refetch: refetchCredentials } = useOAuthCredentials(providerId, {
workspaceId,
})
useCredentialRefreshTriggers(refetchCredentials, providerId ?? '', workspaceId)
const missingScopes = useMemo(() => {
if (!credentials || !connector.credentialId) return []
const credential = credentials.find((c) => c.id === connector.credentialId)
if (!credential) return []
return getMissingRequiredScopes(credential, requiredScopes)
}, [credentials, connector.credentialId, requiredScopes])
@@ -484,15 +491,17 @@ function ConnectorCard({
<Button
variant='active'
onClick={() => {
writeOAuthReturnContext({
origin: 'kb-connectors',
knowledgeBaseId,
displayName: connectorDef?.name ?? connector.connectorType,
providerId: providerId!,
preCount: credentials?.length ?? 0,
workspaceId,
requestedAt: Date.now(),
})
if (connector.credentialId) {
writeOAuthReturnContext({
origin: 'kb-connectors',
knowledgeBaseId,
displayName: connectorDef?.name ?? connector.connectorType,
providerId: providerId!,
preCount: credentials?.length ?? 0,
workspaceId,
requestedAt: Date.now(),
})
}
setShowOAuthModal(true)
}}
className='w-full px-2 py-1 font-medium text-caption'
@@ -510,7 +519,22 @@ function ConnectorCard({
</div>
)}
{showOAuthModal && serviceId && providerId && (
{showOAuthModal && serviceId && providerId && !connector.credentialId && (
<ConnectCredentialModal
isOpen={showOAuthModal}
onClose={() => {
consumeOAuthReturnContext()
setShowOAuthModal(false)
}}
provider={providerId as OAuthProvider}
serviceId={serviceId}
workspaceId={workspaceId}
knowledgeBaseId={knowledgeBaseId}
credentialCount={credentials?.length ?? 0}
/>
)}
{showOAuthModal && serviceId && providerId && connector.credentialId && (
<OAuthRequiredModal
isOpen={showOAuthModal}
onClose={() => {


@@ -8,7 +8,7 @@ import {
DropdownMenuSeparator,
DropdownMenuTrigger,
} from '@/components/emcn'
import { Copy, Eye, ListFilter, SquareArrowUpRight, X } from '@/components/emcn/icons'
import { Copy, Eye, Link, ListFilter, SquareArrowUpRight, X } from '@/components/emcn/icons'
import type { WorkflowLog } from '@/stores/logs/filters/types'
interface LogRowContextMenuProps {
@@ -17,6 +17,7 @@ interface LogRowContextMenuProps {
onClose: () => void
log: WorkflowLog | null
onCopyExecutionId: () => void
onCopyLink: () => void
onOpenWorkflow: () => void
onOpenPreview: () => void
onToggleWorkflowFilter: () => void
@@ -35,6 +36,7 @@ export const LogRowContextMenu = memo(function LogRowContextMenu({
onClose,
log,
onCopyExecutionId,
onCopyLink,
onOpenWorkflow,
onOpenPreview,
onToggleWorkflowFilter,
@@ -71,6 +73,10 @@ export const LogRowContextMenu = memo(function LogRowContextMenu({
<Copy />
Copy Execution ID
</DropdownMenuItem>
<DropdownMenuItem disabled={!hasExecutionId} onSelect={onCopyLink}>
<Link />
Copy Link
</DropdownMenuItem>
<DropdownMenuSeparator />
<DropdownMenuItem disabled={!hasWorkflow} onSelect={onOpenWorkflow}>


@@ -266,16 +266,17 @@ export default function Logs() {
isSidebarOpen: false,
})
const isInitialized = useRef<boolean>(false)
const pendingExecutionIdRef = useRef<string | null>(null)
const [searchQuery, setSearchQuery] = useState('')
const debouncedSearchQuery = useDebounce(searchQuery, 300)
useEffect(() => {
const urlSearch = new URLSearchParams(window.location.search).get('search') || ''
if (urlSearch && urlSearch !== searchQuery) {
setSearchQuery(urlSearch)
}
// eslint-disable-next-line react-hooks/exhaustive-deps
const params = new URLSearchParams(window.location.search)
const urlSearch = params.get('search')
if (urlSearch) setSearchQuery(urlSearch)
const urlExecutionId = params.get('executionId')
if (urlExecutionId) pendingExecutionIdRef.current = urlExecutionId
}, [])
const isLive = true
@@ -298,7 +299,6 @@ export default function Logs() {
const [contextMenuOpen, setContextMenuOpen] = useState(false)
const [contextMenuPosition, setContextMenuPosition] = useState({ x: 0, y: 0 })
const [contextMenuLog, setContextMenuLog] = useState<WorkflowLog | null>(null)
const contextMenuRef = useRef<HTMLDivElement>(null)
const [isPreviewOpen, setIsPreviewOpen] = useState(false)
const [previewLogId, setPreviewLogId] = useState<string | null>(null)
@@ -417,28 +417,30 @@ export default function Logs() {
useFolders(workspaceId)
logsRef.current = sortedLogs
selectedLogIndexRef.current = selectedLogIndex
selectedLogIdRef.current = selectedLogId
logsRefetchRef.current = logsQuery.refetch
activeLogRefetchRef.current = activeLogQuery.refetch
logsQueryRef.current = {
isFetching: logsQuery.isFetching,
hasNextPage: logsQuery.hasNextPage ?? false,
fetchNextPage: logsQuery.fetchNextPage,
}
useEffect(() => {
logsRef.current = sortedLogs
}, [sortedLogs])
useEffect(() => {
selectedLogIndexRef.current = selectedLogIndex
}, [selectedLogIndex])
useEffect(() => {
selectedLogIdRef.current = selectedLogId
}, [selectedLogId])
useEffect(() => {
logsRefetchRef.current = logsQuery.refetch
}, [logsQuery.refetch])
useEffect(() => {
activeLogRefetchRef.current = activeLogQuery.refetch
}, [activeLogQuery.refetch])
useEffect(() => {
logsQueryRef.current = {
isFetching: logsQuery.isFetching,
hasNextPage: logsQuery.hasNextPage ?? false,
fetchNextPage: logsQuery.fetchNextPage,
if (!pendingExecutionIdRef.current) return
const targetExecutionId = pendingExecutionIdRef.current
const found = sortedLogs.find((l) => l.executionId === targetExecutionId)
if (found) {
pendingExecutionIdRef.current = null
dispatch({ type: 'TOGGLE_LOG', logId: found.id })
} else if (!logsQuery.hasNextPage && logsQuery.status === 'success') {
pendingExecutionIdRef.current = null
} else if (!logsQuery.isFetching && logsQuery.status === 'success') {
logsQueryRef.current.fetchNextPage()
}
}, [logsQuery.isFetching, logsQuery.hasNextPage, logsQuery.fetchNextPage])
}, [sortedLogs, logsQuery.hasNextPage, logsQuery.isFetching, logsQuery.status])
useEffect(() => {
const timers = refreshTimersRef.current
@@ -490,10 +492,17 @@ export default function Logs() {
const handleCopyExecutionId = useCallback(() => {
if (contextMenuLog?.executionId) {
navigator.clipboard.writeText(contextMenuLog.executionId)
navigator.clipboard.writeText(contextMenuLog.executionId).catch(() => {})
}
}, [contextMenuLog])
const handleCopyLink = useCallback(() => {
if (contextMenuLog?.executionId) {
const url = `${window.location.origin}/workspace/${workspaceId}/logs?executionId=${contextMenuLog.executionId}`
navigator.clipboard.writeText(url).catch(() => {})
}
}, [contextMenuLog, workspaceId])
const handleOpenWorkflow = useCallback(() => {
const wfId = contextMenuLog?.workflow?.id || contextMenuLog?.workflowId
if (wfId) {
@@ -1165,6 +1174,7 @@ export default function Logs() {
onClose={handleCloseContextMenu}
log={contextMenuLog}
onCopyExecutionId={handleCopyExecutionId}
onCopyLink={handleCopyLink}
onOpenWorkflow={handleOpenWorkflow}
onOpenPreview={handleOpenPreview}
onToggleWorkflowFilter={handleToggleWorkflowFilter}


@@ -14,6 +14,7 @@ import {
ModalHeader,
} from '@/components/emcn'
import { client } from '@/lib/auth/auth-client'
import type { OAuthReturnContext } from '@/lib/credentials/client-state'
import { writeOAuthReturnContext } from '@/lib/credentials/client-state'
import {
getCanonicalScopesForProvider,
@@ -27,17 +28,22 @@ import { useCreateCredentialDraft } from '@/hooks/queries/credentials'
const logger = createLogger('ConnectCredentialModal')
export interface ConnectCredentialModalProps {
interface ConnectCredentialModalBaseProps {
isOpen: boolean
onClose: () => void
provider: OAuthProvider
serviceId: string
workspaceId: string
workflowId: string
/** Number of existing credentials for this provider — used to detect a successful new connection. */
credentialCount: number
}
export type ConnectCredentialModalProps = ConnectCredentialModalBaseProps &
(
| { workflowId: string; knowledgeBaseId?: never }
| { workflowId?: never; knowledgeBaseId: string }
)
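The prop type above uses an XOR union so callers must pass exactly one of `workflowId` or `knowledgeBaseId`. A minimal sketch of the pattern (base fields trimmed for illustration), with the runtime narrowing that the `returnContext` branch later in this file relies on:

```typescript
interface BaseProps {
  isOpen: boolean
  credentialCount: number
}

// `?: never` on the absent field makes the two variants mutually exclusive
// at the type level while keeping the property readable on the union.
type ModalProps = BaseProps &
  (
    | { workflowId: string; knowledgeBaseId?: never }
    | { workflowId?: never; knowledgeBaseId: string }
  )

function originOf(props: ModalProps): 'workflow' | 'kb-connectors' {
  return props.knowledgeBaseId ? 'kb-connectors' : 'workflow'
}
```

Because both variants declare both keys, `props.knowledgeBaseId` is accessible on the union without a cast, and passing both ids (or neither) is a compile error.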
export function ConnectCredentialModal({
isOpen,
onClose,
@@ -45,6 +51,7 @@ export function ConnectCredentialModal({
serviceId,
workspaceId,
workflowId,
knowledgeBaseId,
credentialCount,
}: ConnectCredentialModalProps) {
const [displayName, setDisplayName] = useState('')
@@ -97,15 +104,19 @@ export function ConnectCredentialModal({
try {
await createDraft.mutateAsync({ workspaceId, providerId, displayName: trimmedName })
writeOAuthReturnContext({
origin: 'workflow',
workflowId,
const baseContext = {
displayName: trimmedName,
providerId,
preCount: credentialCount,
workspaceId,
requestedAt: Date.now(),
})
}
const returnContext: OAuthReturnContext = knowledgeBaseId
? { ...baseContext, origin: 'kb-connectors' as const, knowledgeBaseId }
: { ...baseContext, origin: 'workflow' as const, workflowId: workflowId! }
writeOAuthReturnContext(returnContext)
if (providerId === 'trello') {
window.location.href = '/api/auth/trello/authorize'
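The `writeOAuthReturnContext` call here pairs with the `consumeOAuthReturnContext` calls added in the selector components below. A minimal in-memory sketch of that one-shot contract (the real module presumably persists across the OAuth redirect; the storage here is illustrative only):

```typescript
interface ReturnContext {
  origin: string
  providerId: string
  requestedAt: number
}

let pending: ReturnContext | null = null

// Stash context before kicking off the OAuth flow.
function writeOAuthReturnContext(ctx: ReturnContext): void {
  pending = ctx
}

// One-shot read: consuming clears the slot so stale context from an
// abandoned flow cannot leak into the next one.
function consumeOAuthReturnContext(): ReturnContext | null {
  const ctx = pending
  pending = null
  return ctx
}
```

Calling consume from every modal `onClose` is what keeps an aborted connection attempt from leaving context behind.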


@@ -7,6 +7,7 @@ import { Button, Combobox } from '@/components/emcn/components'
import { getSubscriptionAccessState } from '@/lib/billing/client'
import { getEnv, isTruthy } from '@/lib/core/config/env'
import { getPollingProviderFromOAuth } from '@/lib/credential-sets/providers'
import { consumeOAuthReturnContext, writeOAuthReturnContext } from '@/lib/credentials/client-state'
import {
getCanonicalScopesForProvider,
getProviderIdFromServiceId,
@@ -357,7 +358,18 @@ export function CredentialSelector({
</div>
<Button
variant='active'
onClick={() => setShowOAuthModal(true)}
onClick={() => {
writeOAuthReturnContext({
origin: 'workflow',
workflowId: activeWorkflowId || '',
displayName: selectedCredential?.name ?? getProviderName(provider),
providerId: effectiveProviderId,
preCount: credentials.length,
workspaceId,
requestedAt: Date.now(),
})
setShowOAuthModal(true)
}}
className='w-full px-2 py-1 font-medium text-caption'
>
Update access
@@ -380,7 +392,10 @@ export function CredentialSelector({
{showOAuthModal && (
<OAuthRequiredModal
isOpen={showOAuthModal}
onClose={() => setShowOAuthModal(false)}
onClose={() => {
consumeOAuthReturnContext()
setShowOAuthModal(false)
}}
provider={provider}
toolName={getProviderName(provider)}
requiredScopes={getCanonicalScopesForProvider(effectiveProviderId)}


@@ -7,9 +7,9 @@ import { useParams } from 'next/navigation'
import { Button, Combobox } from '@/components/emcn/components'
import { Progress } from '@/components/ui/progress'
import { cn } from '@/lib/core/utils/cn'
import type { WorkspaceFileRecord } from '@/lib/uploads/contexts/workspace'
import { getExtensionFromMimeType } from '@/lib/uploads/utils/file-utils'
import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-value'
import { useWorkspaceFiles } from '@/hooks/queries/workspace-files'
import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
import { useWorkflowStore } from '@/stores/workflows/workflow/store'
@@ -150,8 +150,6 @@ export function FileUpload({
const [storeValue, setStoreValue] = useSubBlockValue(blockId, subBlockId)
const [uploadingFiles, setUploadingFiles] = useState<UploadingFile[]>([])
const [uploadProgress, setUploadProgress] = useState(0)
const [workspaceFiles, setWorkspaceFiles] = useState<WorkspaceFileRecord[]>([])
const [loadingWorkspaceFiles, setLoadingWorkspaceFiles] = useState(false)
const [uploadError, setUploadError] = useState<string | null>(null)
const [inputValue, setInputValue] = useState('')
@@ -163,26 +161,14 @@ export function FileUpload({
const params = useParams()
const workspaceId = params?.workspaceId as string
const {
data: workspaceFiles = [],
isLoading: loadingWorkspaceFiles,
refetch: refetchWorkspaceFiles,
} = useWorkspaceFiles(isPreview ? '' : workspaceId)
const value = isPreview ? previewValue : storeValue
const loadWorkspaceFiles = async () => {
if (!workspaceId || isPreview) return
try {
setLoadingWorkspaceFiles(true)
const response = await fetch(`/api/workspaces/${workspaceId}/files`)
const data = await response.json()
if (data.success) {
setWorkspaceFiles(data.files || [])
}
} catch (error) {
logger.error('Error loading workspace files:', error)
} finally {
setLoadingWorkspaceFiles(false)
}
}
/**
* Checks if a file's MIME type matches the accepted types
* Supports exact matches, wildcard patterns (e.g., 'image/*'), and '*' for all types
@@ -226,10 +212,6 @@ export function FileUpload({
return !isAlreadySelected
})
useEffect(() => {
void loadWorkspaceFiles()
}, [workspaceId])
/**
* Opens file dialog
*/
@@ -394,7 +376,7 @@ export function FileUpload({
setUploadError(null)
if (workspaceId) {
void loadWorkspaceFiles()
void refetchWorkspaceFiles()
}
if (uploadedFiles.length === 1) {
@@ -726,7 +708,7 @@ export function FileUpload({
value={inputValue}
onChange={handleComboboxChange}
onOpenChange={(open) => {
if (open) void loadWorkspaceFiles()
if (open) void refetchWorkspaceFiles()
}}
placeholder={loadingWorkspaceFiles ? 'Loading files...' : '+ Add More'}
disabled={disabled || loadingWorkspaceFiles}
@@ -746,7 +728,7 @@ export function FileUpload({
onInputChange={handleComboboxChange}
onClear={(e) => handleRemoveFile(filesArray[0], e)}
onOpenChange={(open) => {
if (open) void loadWorkspaceFiles()
if (open) void refetchWorkspaceFiles()
}}
disabled={disabled}
isLoading={loadingWorkspaceFiles}
@@ -763,7 +745,7 @@ export function FileUpload({
value={inputValue}
onChange={handleComboboxChange}
onOpenChange={(open) => {
if (open) void loadWorkspaceFiles()
if (open) void refetchWorkspaceFiles()
}}
placeholder={loadingWorkspaceFiles ? 'Loading files...' : 'Select or upload file'}
disabled={disabled || loadingWorkspaceFiles}

View File

@@ -4,6 +4,7 @@ import { createElement, useCallback, useMemo, useRef, useState } from 'react'
import { ExternalLink } from 'lucide-react'
import { useParams } from 'next/navigation'
import { Button, Combobox } from '@/components/emcn/components'
import { consumeOAuthReturnContext, writeOAuthReturnContext } from '@/lib/credentials/client-state'
import {
getCanonicalScopesForProvider,
getProviderIdFromServiceId,
@@ -222,7 +223,18 @@ export function ToolCredentialSelector({
</div>
<Button
variant='active'
onClick={() => setShowOAuthModal(true)}
onClick={() => {
writeOAuthReturnContext({
origin: 'workflow',
workflowId: effectiveWorkflowId || '',
displayName: selectedCredential?.name ?? getProviderName(provider),
providerId: effectiveProviderId,
preCount: credentials.length,
workspaceId,
requestedAt: Date.now(),
})
setShowOAuthModal(true)
}}
className='w-full px-2 py-1 font-medium text-caption'
>
Update access
@@ -245,7 +257,10 @@ export function ToolCredentialSelector({
{showOAuthModal && (
<OAuthRequiredModal
isOpen={showOAuthModal}
onClose={() => setShowOAuthModal(false)}
onClose={() => {
consumeOAuthReturnContext()
setShowOAuthModal(false)
}}
provider={provider}
toolName={getProviderName(provider)}
requiredScopes={getCanonicalScopesForProvider(effectiveProviderId)}


@@ -19,6 +19,7 @@ import { createLogger } from '@sim/logger'
import { useShallow } from 'zustand/react/shallow'
import { useSession } from '@/lib/auth/auth-client'
import type { OAuthConnectEventDetail } from '@/lib/copilot/tools/client/base-tool'
import { consumeOAuthReturnContext, writeOAuthReturnContext } from '@/lib/credentials/client-state'
import type { OAuthProvider } from '@/lib/oauth'
import { BLOCK_DIMENSIONS, CONTAINER_DIMENSIONS } from '@/lib/workflows/blocks/block-dimensions'
import { TriggerUtils } from '@/lib/workflows/triggers/triggers'
@@ -263,7 +264,7 @@ const WorkflowContent = React.memo(
const params = useParams()
const router = useRouter()
const reactFlowInstance = useReactFlow()
const { screenToFlowPosition, getNodes, setNodes, getIntersectingNodes } = reactFlowInstance
const { screenToFlowPosition, getNodes, setNodes } = reactFlowInstance
const { fitViewToBounds, getViewportCenter } = useCanvasViewport(reactFlowInstance, {
embedded,
})
@@ -478,6 +479,17 @@ const WorkflowContent = React.memo(
const handleOpenOAuthConnect = (event: Event) => {
const detail = (event as CustomEvent<OAuthConnectEventDetail>).detail
if (!detail) return
writeOAuthReturnContext({
origin: 'workflow',
workflowId: workflowIdParam,
displayName: detail.providerName,
providerId: detail.providerId,
preCount: 0,
workspaceId,
requestedAt: Date.now(),
})
setOauthModal({
provider: detail.providerId as OAuthProvider,
serviceId: detail.serviceId,
@@ -490,7 +502,7 @@ const WorkflowContent = React.memo(
window.addEventListener('open-oauth-connect', handleOpenOAuthConnect as EventListener)
return () =>
window.removeEventListener('open-oauth-connect', handleOpenOAuthConnect as EventListener)
}, [])
}, [workflowIdParam, workspaceId])
const { diffAnalysis, isShowingDiff, isDiffReady, reapplyDiffMarkers, hasActiveDiff } =
useWorkflowDiffStore(
@@ -2849,38 +2861,29 @@ const WorkflowContent = React.memo(
)
/**
* Finds the best node at a given flow position for drop-on-block connection.
* Skips subflow containers as they have their own connection logic.
* Finds the node under the cursor using DOM hit-testing for pixel-perfect
* detection that matches exactly what the user sees on screen.
* Uses the same approach as ReactFlow's internal handle detection.
*/
const findNodeAtPosition = useCallback(
(position: { x: number; y: number }) => {
const cursorRect = {
x: position.x - 1,
y: position.y - 1,
width: 2,
height: 2,
const findNodeAtScreenPosition = useCallback(
(clientX: number, clientY: number) => {
const elements = document.elementsFromPoint(clientX, clientY)
const nodes = getNodes()
for (const el of elements) {
const nodeEl = el.closest('.react-flow__node') as HTMLElement | null
if (!nodeEl) continue
const nodeId = nodeEl.getAttribute('data-id')
if (!nodeId) continue
const node = nodes.find((n) => n.id === nodeId)
if (node && node.type !== 'subflowNode') return node
}
const intersecting = getIntersectingNodes(cursorRect, true).filter(
(node) => node.type !== 'subflowNode'
)
if (intersecting.length === 0) return undefined
if (intersecting.length === 1) return intersecting[0]
return intersecting.reduce((closest, node) => {
const getDistance = (n: Node) => {
const absPos = getNodeAbsolutePosition(n.id)
const dims = getBlockDimensions(n.id)
const centerX = absPos.x + dims.width / 2
const centerY = absPos.y + dims.height / 2
return Math.hypot(position.x - centerX, position.y - centerY)
}
return getDistance(node) < getDistance(closest) ? node : closest
})
return undefined
},
[getIntersectingNodes, getNodeAbsolutePosition, getBlockDimensions]
[getNodes]
)
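The selection logic inside `findNodeAtScreenPosition` is separable from the DOM: `elementsFromPoint` yields elements topmost-first, so the first resolvable node wins, and subflow containers are skipped. A pure sketch of that walk (the hit list is pre-resolved to node ids here so the DOM lookup stays out of the picture):

```typescript
interface NodeLike {
  id: string
  type?: string
}

// Given node ids hit under the cursor (topmost-first, null for non-node
// elements), return the first real node that is not a subflow container.
function pickNode(hitNodeIds: Array<string | null>, nodes: NodeLike[]): NodeLike | undefined {
  for (const nodeId of hitNodeIds) {
    if (!nodeId) continue
    const node = nodes.find((n) => n.id === nodeId)
    if (node && node.type !== 'subflowNode') return node
  }
  return undefined
}
```

Compared with the old `getIntersectingNodes` + distance tiebreak, hit-testing needs no geometry at all: stacking order already encodes "what the user sees".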
/**
@@ -3005,15 +3008,9 @@ const WorkflowContent = React.memo(
return
}
// Get cursor position in flow coordinates
// Find node under cursor using DOM hit-testing
const clientPos = 'changedTouches' in event ? event.changedTouches[0] : event
const flowPosition = screenToFlowPosition({
x: clientPos.clientX,
y: clientPos.clientY,
})
// Find node under cursor
const targetNode = findNodeAtPosition(flowPosition)
const targetNode = findNodeAtScreenPosition(clientPos.clientX, clientPos.clientY)
// Create connection if valid target found (handle-to-body case)
if (targetNode && targetNode.id !== source.nodeId) {
@@ -3027,7 +3024,7 @@ const WorkflowContent = React.memo(
connectionSourceRef.current = null
},
[screenToFlowPosition, findNodeAtPosition, onConnect]
[findNodeAtScreenPosition, onConnect]
)
/** Handles node drag to detect container intersections and update highlighting. */
@@ -4118,7 +4115,10 @@ const WorkflowContent = React.memo(
<Suspense fallback={null}>
<LazyOAuthRequiredModal
isOpen={true}
onClose={() => setOauthModal(null)}
onClose={() => {
consumeOAuthReturnContext()
setOauthModal(null)
}}
provider={oauthModal.provider}
toolName={oauthModal.providerName}
serviceId={oauthModal.serviceId}

View File

@@ -35,7 +35,7 @@ interface WorkflowItemProps {
active: boolean
level: number
dragDisabled?: boolean
onWorkflowClick: (workflowId: string, shiftKey: boolean, metaKey: boolean) => void
onWorkflowClick: (workflowId: string, shiftKey: boolean) => void
onDragStart?: () => void
onDragEnd?: () => void
}
@@ -368,13 +368,15 @@ export function WorkflowItem({
return
}
const isModifierClick = e.shiftKey || e.metaKey || e.ctrlKey
if (e.metaKey || e.ctrlKey) {
return
}
if (isModifierClick) {
if (e.shiftKey) {
e.preventDefault()
}
onWorkflowClick(workflow.id, e.shiftKey, e.metaKey || e.ctrlKey)
onWorkflowClick(workflow.id, e.shiftKey)
},
[shouldPreventClickRef, workflow.id, onWorkflowClick, isEditing]
)


@@ -9,8 +9,9 @@ interface UseTaskSelectionProps {
}
/**
* Hook for managing task selection with support for single, range, and toggle selection.
* Handles shift-click for range selection and cmd/ctrl-click for toggle.
* Hook for managing task selection with support for single and range selection.
* Handles shift-click for range selection.
* cmd/ctrl+click is handled by the browser (opens in new tab) and never reaches this handler.
* Uses the last selected task as the anchor point for range selections.
* Selecting tasks clears workflow/folder selections and vice versa.
*/
@@ -18,16 +19,14 @@ export function useTaskSelection({ taskIds }: UseTaskSelectionProps) {
const selectedTasks = useFolderStore((s) => s.selectedTasks)
const handleTaskClick = useCallback(
(taskId: string, shiftKey: boolean, metaKey: boolean) => {
(taskId: string, shiftKey: boolean) => {
const {
selectTaskOnly,
selectTaskRange,
toggleTaskSelection,
lastSelectedTaskId: anchor,
} = useFolderStore.getState()
if (metaKey) {
toggleTaskSelection(taskId)
} else if (shiftKey && anchor && anchor !== taskId) {
if (shiftKey && anchor && anchor !== taskId) {
selectTaskRange(taskIds, anchor, taskId)
} else if (shiftKey) {
toggleTaskSelection(taskId)


@@ -60,18 +60,15 @@ export function useWorkflowSelection({
}, [workflowAncestorFolderIds])
/**
* Handle workflow click with support for shift-click range selection and cmd/ctrl-click toggle.
* Handle workflow click with support for shift-click range selection.
* cmd/ctrl+click is handled by the browser (opens in new tab) and never reaches this handler.
*
* @param workflowId - ID of clicked workflow
* @param shiftKey - Whether shift key was pressed
* @param metaKey - Whether cmd (Mac) or ctrl (Windows) key was pressed
*/
const handleWorkflowClick = useCallback(
(workflowId: string, shiftKey: boolean, metaKey: boolean) => {
if (metaKey) {
toggleWorkflowSelection(workflowId)
deselectConflictingFolders()
} else if (shiftKey && activeWorkflowId && activeWorkflowId !== workflowId) {
(workflowId: string, shiftKey: boolean) => {
if (shiftKey && activeWorkflowId && activeWorkflowId !== workflowId) {
selectRange(workflowIds, activeWorkflowId, workflowId)
deselectConflictingFolders()
} else if (shiftKey) {


@@ -151,7 +151,7 @@ const SidebarTaskItem = memo(function SidebarTaskItem({
isUnread: boolean
isMenuOpen: boolean
showCollapsedTooltips: boolean
onMultiSelectClick: (taskId: string, shiftKey: boolean, metaKey: boolean) => void
onMultiSelectClick: (taskId: string, shiftKey: boolean) => void
onContextMenu: (e: React.MouseEvent, taskId: string) => void
onMorePointerDown: () => void
onMoreClick: (e: React.MouseEvent<HTMLButtonElement>, taskId: string) => void
@@ -167,9 +167,10 @@ const SidebarTaskItem = memo(function SidebarTaskItem({
)}
onClick={(e) => {
if (task.id === 'new') return
if (e.shiftKey || e.metaKey || e.ctrlKey) {
if (e.metaKey || e.ctrlKey) return
if (e.shiftKey) {
e.preventDefault()
onMultiSelectClick(task.id, e.shiftKey, e.metaKey || e.ctrlKey)
onMultiSelectClick(task.id, true)
} else {
useFolderStore.setState({
selectedTasks: new Set<string>(),
@@ -1058,8 +1059,6 @@ export const Sidebar = memo(function Sidebar() {
[handleCreateWorkflow]
)
const noop = useCallback(() => {}, [])
const handleExpandSidebar = useCallback(
(e: React.MouseEvent) => {
e.preventDefault()
@@ -1330,8 +1329,11 @@ export const Sidebar = memo(function Sidebar() {
!hasOverflowTop && 'border-transparent'
)}
>
<div className='tasks-section flex flex-shrink-0 flex-col' data-tour='nav-tasks'>
<div className='flex h-[18px] flex-shrink-0 items-center justify-between px-4'>
<div
className='tasks-section mx-2 flex flex-shrink-0 flex-col'
data-tour='nav-tasks'
>
<div className='flex h-[18px] flex-shrink-0 items-center justify-between px-2'>
<div className='font-base text-[var(--text-icon)] text-small'>All tasks</div>
{!isCollapsed && (
<div className='flex items-center justify-center gap-2'>
@@ -1452,10 +1454,10 @@ export const Sidebar = memo(function Sidebar() {
</div>
<div
className='workflows-section relative mt-3.5 flex flex-col'
className='workflows-section relative mx-2 mt-3.5 flex flex-col'
data-tour='nav-workflows'
>
<div className='flex h-[18px] flex-shrink-0 items-center justify-between px-4'>
<div className='flex h-[18px] flex-shrink-0 items-center justify-between px-2'>
<div className='font-base text-[var(--text-icon)] text-small'>Workflows</div>
{!isCollapsed && (
<div className='flex items-center justify-center gap-2'>


@@ -247,7 +247,7 @@ function formatCost(cost?: Record<string, unknown>): string {
}
function buildLogUrl(workspaceId: string, executionId: string): string {
return `${getBaseUrl()}/workspace/${workspaceId}/logs?search=${encodeURIComponent(executionId)}`
return `${getBaseUrl()}/workspace/${workspaceId}/logs?executionId=${encodeURIComponent(executionId)}`
}
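The new deep-link format can be sketched as a standalone function (parameterized by base URL for illustration; the real code calls `getBaseUrl()`). Encoding the execution id keeps opaque ids safe as a query param:

```typescript
// Build a logs deep link that auto-selects a specific execution;
// ?executionId= replaces the old ?search= so the details panel opens directly.
function buildLogUrl(baseUrl: string, workspaceId: string, executionId: string): string {
  return `${baseUrl}/workspace/${workspaceId}/logs?executionId=${encodeURIComponent(executionId)}`
}
```

Email and Slack "View Log" buttons share this builder, so both land on the expanded log entry rather than a text search.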
function formatAlertReason(alertConfig: AlertConfig): string {


@@ -250,9 +250,9 @@ export const FileV2Block: BlockConfig<FileParserOutput> = {
export const FileV3Block: BlockConfig<FileParserV3Output> = {
type: 'file_v3',
name: 'File',
description: 'Read and parse multiple files',
description: 'Read and write workspace files',
longDescription:
'Upload files directly or import from external URLs to get UserFile objects for use in other blocks.',
'Read and parse files from uploads or URLs, write new workspace files, or append content to existing files.',
docsLink: 'https://docs.sim.ai/tools/file',
category: 'tools',
integrationType: IntegrationType.FileStorage,
@@ -260,6 +260,17 @@ export const FileV3Block: BlockConfig<FileParserV3Output> = {
bgColor: '#40916C',
icon: DocumentIcon,
subBlocks: [
{
id: 'operation',
title: 'Operation',
type: 'dropdown' as SubBlockType,
options: [
{ label: 'Read', id: 'file_parser_v3' },
{ label: 'Write', id: 'file_write' },
{ label: 'Append', id: 'file_append' },
],
value: () => 'file_parser_v3',
},
{
id: 'file',
title: 'Files',
@@ -270,7 +281,8 @@ export const FileV3Block: BlockConfig<FileParserV3Output> = {
multiple: true,
mode: 'basic',
maxSize: 100,
required: true,
required: { field: 'operation', value: 'file_parser_v3' },
condition: { field: 'operation', value: 'file_parser_v3' },
},
{
id: 'fileUrl',
@@ -279,15 +291,105 @@ export const FileV3Block: BlockConfig<FileParserV3Output> = {
canonicalParamId: 'fileInput',
placeholder: 'https://example.com/document.pdf',
mode: 'advanced',
required: true,
required: { field: 'operation', value: 'file_parser_v3' },
condition: { field: 'operation', value: 'file_parser_v3' },
},
{
id: 'fileName',
title: 'File Name',
type: 'short-input' as SubBlockType,
placeholder: 'File name (e.g., data.csv)',
condition: { field: 'operation', value: 'file_write' },
required: { field: 'operation', value: 'file_write' },
},
{
id: 'content',
title: 'Content',
type: 'long-input' as SubBlockType,
placeholder: 'File content to write...',
condition: { field: 'operation', value: 'file_write' },
required: { field: 'operation', value: 'file_write' },
},
{
id: 'contentType',
title: 'Content Type',
type: 'short-input' as SubBlockType,
placeholder: 'text/plain (auto-detected from extension)',
condition: { field: 'operation', value: 'file_write' },
mode: 'advanced',
},
{
id: 'appendFile',
title: 'File',
type: 'file-upload' as SubBlockType,
canonicalParamId: 'appendFileInput',
acceptedTypes: '.txt,.md,.json,.csv,.xml,.html,.htm,.yaml,.yml,.log,.rtf',
placeholder: 'Select or upload a workspace file',
mode: 'basic',
condition: { field: 'operation', value: 'file_append' },
required: { field: 'operation', value: 'file_append' },
},
{
id: 'appendFileName',
title: 'File',
type: 'short-input' as SubBlockType,
canonicalParamId: 'appendFileInput',
placeholder: 'File name (e.g., notes.md)',
mode: 'advanced',
condition: { field: 'operation', value: 'file_append' },
required: { field: 'operation', value: 'file_append' },
},
{
id: 'appendContent',
title: 'Content',
type: 'long-input' as SubBlockType,
placeholder: 'Content to append...',
condition: { field: 'operation', value: 'file_append' },
required: { field: 'operation', value: 'file_append' },
},
],
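Every subblock above is gated on the `operation` dropdown via `condition` (and `required` uses the same shape). A sketch of how such gating presumably evaluates — the predicate name and signature are illustrative, not the block framework's API:

```typescript
interface Condition {
  field: string
  value: string | string[]
}

// A subblock is visible when its condition field's current value matches
// the allowed value (or one of the allowed values); no condition = always on.
function isVisible(cond: Condition | undefined, values: Record<string, string>): boolean {
  if (!cond) return true
  const current = values[cond.field]
  return Array.isArray(cond.value) ? cond.value.includes(current) : cond.value === current
}
```

This is what lets one File block present three distinct forms (read / write / append) without three block types.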
tools: {
access: ['file_parser_v3'],
access: ['file_parser_v3', 'file_write', 'file_append'],
config: {
tool: () => 'file_parser_v3',
tool: (params) => params.operation || 'file_parser_v3',
params: (params) => {
// Use canonical 'fileInput' param directly
const operation = params.operation || 'file_parser_v3'
if (operation === 'file_write') {
return {
fileName: params.fileName,
content: params.content,
contentType: params.contentType,
workspaceId: params._context?.workspaceId,
}
}
if (operation === 'file_append') {
const appendInput = params.appendFileInput
if (!appendInput) {
throw new Error('File is required for append')
}
let fileName: string
if (typeof appendInput === 'string') {
fileName = appendInput.trim()
} else {
const normalized = normalizeFileInput(appendInput, { single: true })
const file = normalized as Record<string, unknown> | null
fileName = (file?.name as string) ?? ''
}
if (!fileName) {
throw new Error('Could not determine file name')
}
return {
fileName,
content: params.appendContent,
workspaceId: params._context?.workspaceId,
}
}
const fileInput = params.fileInput
if (!fileInput) {
logger.error('No file input provided')
@@ -326,17 +428,39 @@ export const FileV3Block: BlockConfig<FileParserV3Output> = {
},
},
inputs: {
fileInput: { type: 'json', description: 'File input (canonical param)' },
fileType: { type: 'string', description: 'File type' },
operation: { type: 'string', description: 'Operation to perform (read, write, or append)' },
fileInput: { type: 'json', description: 'File input for read' },
fileType: { type: 'string', description: 'File type for read' },
fileName: { type: 'string', description: 'Name for a new file (write)' },
content: { type: 'string', description: 'File content to write' },
contentType: { type: 'string', description: 'MIME content type for write' },
appendFileInput: { type: 'json', description: 'File to append to' },
appendContent: { type: 'string', description: 'Content to append to file' },
},
outputs: {
files: {
type: 'file[]',
description: 'Parsed files as UserFile objects',
description: 'Parsed files as UserFile objects (read)',
},
combinedContent: {
type: 'string',
description: 'All file contents merged into a single text string',
description: 'All file contents merged into a single text string (read)',
},
id: {
type: 'string',
description: 'File ID (write)',
},
name: {
type: 'string',
description: 'File name (write)',
},
size: {
type: 'number',
description: 'File size in bytes (write)',
},
url: {
type: 'string',
description: 'URL to access the file (write)',
},
},
}
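The append branch's filename resolution above (string input vs. uploaded-file object) can be isolated as a pure helper — a sketch that skips the `normalizeFileInput` normalization step and assumes the file object carries a `name`:

```typescript
interface FileRef {
  name?: string
}

// Resolve the target file name for append: a typed name (trimmed) or the
// name on an uploaded-file object; throws the same errors as the block config.
function resolveAppendFileName(input: string | FileRef | null | undefined): string {
  if (input == null) throw new Error('File is required for append')
  const fileName = typeof input === 'string' ? input.trim() : (input.name ?? '')
  if (!fileName) throw new Error('Could not determine file name')
  return fileName
}
```

Throwing from the params builder surfaces a clear validation error before the tool ever runs.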


@@ -0,0 +1,406 @@
import { ProfoundIcon } from '@/components/icons'
import type { BlockConfig } from '@/blocks/types'
import { AuthMode, IntegrationType } from '@/blocks/types'
const CATEGORY_REPORT_OPS = [
'visibility_report',
'sentiment_report',
'citations_report',
'prompt_answers',
'query_fanouts',
] as const
const DOMAIN_REPORT_OPS = ['bots_report', 'referrals_report', 'raw_logs', 'bot_logs'] as const
const ALL_REPORT_OPS = [...CATEGORY_REPORT_OPS, ...DOMAIN_REPORT_OPS] as const
const CATEGORY_ID_OPS = [
...CATEGORY_REPORT_OPS,
'category_topics',
'category_tags',
'category_prompts',
'category_assets',
'category_personas',
] as const
const DATE_REQUIRED_CATEGORY_OPS = [
'visibility_report',
'sentiment_report',
'citations_report',
'prompt_answers',
'query_fanouts',
'prompt_volume',
] as const
const DATE_REQUIRED_ALL_OPS = [...DATE_REQUIRED_CATEGORY_OPS, ...DOMAIN_REPORT_OPS] as const
const METRICS_REPORT_OPS = [
'visibility_report',
'sentiment_report',
'citations_report',
'bots_report',
'referrals_report',
'query_fanouts',
'prompt_volume',
] as const
const DIMENSION_OPS = [
'visibility_report',
'sentiment_report',
'citations_report',
'bots_report',
'referrals_report',
'query_fanouts',
'raw_logs',
'bot_logs',
'prompt_volume',
] as const
const FILTER_OPS = [...ALL_REPORT_OPS, 'prompt_volume'] as const
export const ProfoundBlock: BlockConfig = {
type: 'profound',
name: 'Profound',
description: 'AI visibility and analytics with Profound',
longDescription:
'Track how your brand appears across AI platforms. Monitor visibility scores, sentiment, citations, bot traffic, referrals, content optimization, and prompt volumes with Profound.',
docsLink: 'https://docs.sim.ai/tools/profound',
category: 'tools',
integrationType: IntegrationType.Analytics,
tags: ['seo', 'data-analytics'],
bgColor: '#000000',
icon: ProfoundIcon,
authMode: AuthMode.ApiKey,
subBlocks: [
{
id: 'operation',
title: 'Operation',
type: 'dropdown',
options: [
{ label: 'List Categories', id: 'list_categories' },
{ label: 'List Regions', id: 'list_regions' },
{ label: 'List Models', id: 'list_models' },
{ label: 'List Domains', id: 'list_domains' },
{ label: 'List Assets', id: 'list_assets' },
{ label: 'List Personas', id: 'list_personas' },
{ label: 'Category Topics', id: 'category_topics' },
{ label: 'Category Tags', id: 'category_tags' },
{ label: 'Category Prompts', id: 'category_prompts' },
{ label: 'Category Assets', id: 'category_assets' },
{ label: 'Category Personas', id: 'category_personas' },
{ label: 'Visibility Report', id: 'visibility_report' },
{ label: 'Sentiment Report', id: 'sentiment_report' },
{ label: 'Citations Report', id: 'citations_report' },
{ label: 'Query Fanouts', id: 'query_fanouts' },
{ label: 'Prompt Answers', id: 'prompt_answers' },
{ label: 'Bots Report', id: 'bots_report' },
{ label: 'Referrals Report', id: 'referrals_report' },
{ label: 'Raw Logs', id: 'raw_logs' },
{ label: 'Bot Logs', id: 'bot_logs' },
{ label: 'List Optimizations', id: 'list_optimizations' },
{ label: 'Optimization Analysis', id: 'optimization_analysis' },
{ label: 'Prompt Volume', id: 'prompt_volume' },
{ label: 'Citation Prompts', id: 'citation_prompts' },
],
value: () => 'visibility_report',
},
{
id: 'apiKey',
title: 'API Key',
type: 'short-input',
placeholder: 'Enter your Profound API key',
required: true,
password: true,
},
// Category ID - for category-based operations
{
id: 'categoryId',
title: 'Category ID',
type: 'short-input',
placeholder: 'Category UUID',
required: { field: 'operation', value: [...CATEGORY_ID_OPS] },
condition: { field: 'operation', value: [...CATEGORY_ID_OPS] },
},
// Domain - for domain-based operations
{
id: 'domain',
title: 'Domain',
type: 'short-input',
placeholder: 'e.g. example.com',
required: { field: 'operation', value: [...DOMAIN_REPORT_OPS] },
condition: { field: 'operation', value: [...DOMAIN_REPORT_OPS] },
},
// Input domain - for citation prompts
{
id: 'inputDomain',
title: 'Domain',
type: 'short-input',
placeholder: 'e.g. ramp.com',
required: { field: 'operation', value: 'citation_prompts' },
condition: { field: 'operation', value: 'citation_prompts' },
},
// Asset ID - for content optimization
{
id: 'assetId',
title: 'Asset ID',
type: 'short-input',
placeholder: 'Asset UUID',
required: { field: 'operation', value: ['list_optimizations', 'optimization_analysis'] },
condition: { field: 'operation', value: ['list_optimizations', 'optimization_analysis'] },
},
// Content ID - for optimization analysis
{
id: 'contentId',
title: 'Content ID',
type: 'short-input',
placeholder: 'Content/optimization UUID',
required: { field: 'operation', value: 'optimization_analysis' },
condition: { field: 'operation', value: 'optimization_analysis' },
},
// Date fields
{
id: 'startDate',
title: 'Start Date',
type: 'short-input',
placeholder: 'YYYY-MM-DD',
required: { field: 'operation', value: [...DATE_REQUIRED_ALL_OPS] },
condition: { field: 'operation', value: [...DATE_REQUIRED_ALL_OPS] },
wandConfig: {
enabled: true,
prompt: 'Generate a date in YYYY-MM-DD format. Return ONLY the date string.',
generationType: 'timestamp',
},
},
{
id: 'endDate',
title: 'End Date',
type: 'short-input',
placeholder: 'YYYY-MM-DD',
required: { field: 'operation', value: [...DATE_REQUIRED_CATEGORY_OPS] },
condition: { field: 'operation', value: [...DATE_REQUIRED_ALL_OPS] },
wandConfig: {
enabled: true,
prompt: 'Generate a date in YYYY-MM-DD format. Return ONLY the date string.',
generationType: 'timestamp',
},
},
// Per-operation metrics fields
{
id: 'visibilityMetrics',
title: 'Metrics',
type: 'short-input',
placeholder: 'share_of_voice, visibility_score, mentions_count',
required: { field: 'operation', value: 'visibility_report' },
condition: { field: 'operation', value: 'visibility_report' },
},
{
id: 'sentimentMetrics',
title: 'Metrics',
type: 'short-input',
placeholder: 'positive, negative, occurrences',
required: { field: 'operation', value: 'sentiment_report' },
condition: { field: 'operation', value: 'sentiment_report' },
},
{
id: 'citationsMetrics',
title: 'Metrics',
type: 'short-input',
placeholder: 'count, citation_share',
required: { field: 'operation', value: 'citations_report' },
condition: { field: 'operation', value: 'citations_report' },
},
{
id: 'botsMetrics',
title: 'Metrics',
type: 'short-input',
placeholder: 'count, citations, indexing, training',
required: { field: 'operation', value: 'bots_report' },
condition: { field: 'operation', value: 'bots_report' },
},
{
id: 'referralsMetrics',
title: 'Metrics',
type: 'short-input',
placeholder: 'visits, last_visit',
required: { field: 'operation', value: 'referrals_report' },
condition: { field: 'operation', value: 'referrals_report' },
},
{
id: 'fanoutsMetrics',
title: 'Metrics',
type: 'short-input',
placeholder: 'fanouts_per_execution, total_fanouts, share',
required: { field: 'operation', value: 'query_fanouts' },
condition: { field: 'operation', value: 'query_fanouts' },
},
{
id: 'volumeMetrics',
title: 'Metrics',
type: 'short-input',
placeholder: 'volume, change',
required: { field: 'operation', value: 'prompt_volume' },
condition: { field: 'operation', value: 'prompt_volume' },
},
// Advanced fields
{
id: 'dimensions',
title: 'Dimensions',
type: 'short-input',
placeholder: 'e.g. date, asset_name, model',
condition: { field: 'operation', value: [...DIMENSION_OPS] },
mode: 'advanced',
},
{
id: 'dateInterval',
title: 'Date Interval',
type: 'dropdown',
options: [
{ label: 'Day', id: 'day' },
{ label: 'Hour', id: 'hour' },
{ label: 'Week', id: 'week' },
{ label: 'Month', id: 'month' },
{ label: 'Year', id: 'year' },
],
condition: { field: 'operation', value: [...METRICS_REPORT_OPS] },
mode: 'advanced',
},
{
id: 'filters',
title: 'Filters',
type: 'long-input',
placeholder: '[{"field":"asset_name","operator":"is","value":"Company"}]',
condition: { field: 'operation', value: [...FILTER_OPS] },
mode: 'advanced',
wandConfig: {
enabled: true,
prompt:
'Generate a JSON array of filter objects. Each object has "field", "operator", and "value" keys. Return ONLY valid JSON.',
generationType: 'json-object',
},
},
{
id: 'limit',
title: 'Limit',
type: 'short-input',
placeholder: '10000',
condition: {
field: 'operation',
value: [...FILTER_OPS, 'category_prompts', 'list_optimizations'],
},
mode: 'advanced',
},
// Category prompts specific fields
{
id: 'cursor',
title: 'Cursor',
type: 'short-input',
placeholder: 'Pagination cursor from previous response',
condition: { field: 'operation', value: 'category_prompts' },
mode: 'advanced',
},
{
id: 'promptType',
title: 'Prompt Type',
type: 'short-input',
placeholder: 'visibility, sentiment',
condition: { field: 'operation', value: 'category_prompts' },
mode: 'advanced',
},
// Optimization list specific
{
id: 'offset',
title: 'Offset',
type: 'short-input',
placeholder: '0',
condition: { field: 'operation', value: 'list_optimizations' },
mode: 'advanced',
},
],
tools: {
access: [
'profound_list_categories',
'profound_list_regions',
'profound_list_models',
'profound_list_domains',
'profound_list_assets',
'profound_list_personas',
'profound_category_topics',
'profound_category_tags',
'profound_category_prompts',
'profound_category_assets',
'profound_category_personas',
'profound_visibility_report',
'profound_sentiment_report',
'profound_citations_report',
'profound_query_fanouts',
'profound_prompt_answers',
'profound_bots_report',
'profound_referrals_report',
'profound_raw_logs',
'profound_bot_logs',
'profound_list_optimizations',
'profound_optimization_analysis',
'profound_prompt_volume',
'profound_citation_prompts',
],
config: {
tool: (params) => `profound_${params.operation}`,
params: (params) => {
const result: Record<string, unknown> = {}
const metricsMap: Record<string, string> = {
visibility_report: 'visibilityMetrics',
sentiment_report: 'sentimentMetrics',
citations_report: 'citationsMetrics',
bots_report: 'botsMetrics',
referrals_report: 'referralsMetrics',
query_fanouts: 'fanoutsMetrics',
prompt_volume: 'volumeMetrics',
}
const metricsField = metricsMap[params.operation as string]
if (metricsField && params[metricsField]) {
result.metrics = params[metricsField]
}
if (params.limit != null) result.limit = Number(params.limit)
if (params.offset != null) result.offset = Number(params.offset)
return result
},
},
},
inputs: {
apiKey: { type: 'string' },
categoryId: { type: 'string' },
domain: { type: 'string' },
inputDomain: { type: 'string' },
assetId: { type: 'string' },
contentId: { type: 'string' },
startDate: { type: 'string' },
endDate: { type: 'string' },
metrics: { type: 'string' },
dimensions: { type: 'string' },
dateInterval: { type: 'string' },
filters: { type: 'string' },
limit: { type: 'number' },
offset: { type: 'number' },
cursor: { type: 'string' },
promptType: { type: 'string' },
},
outputs: {
response: {
type: 'json',
},
},
}
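The `tools.config` section above routes each operation to a `profound_${operation}` tool and collapses the operation-specific metrics subblocks into a single `metrics` param. A dependency-free sketch of that mapping (field names taken from the block config above; the surrounding block-framework types are omitted):

```typescript
// Sketch of the ProfoundBlock tool/params mapping shown above.
// Each operation's metrics subblock collapses into one `metrics` param.
const metricsMap: Record<string, string> = {
  visibility_report: 'visibilityMetrics',
  sentiment_report: 'sentimentMetrics',
  citations_report: 'citationsMetrics',
  bots_report: 'botsMetrics',
  referrals_report: 'referralsMetrics',
  query_fanouts: 'fanoutsMetrics',
  prompt_volume: 'volumeMetrics',
}

function toolId(params: Record<string, unknown>): string {
  return `profound_${params.operation}`
}

function buildParams(params: Record<string, unknown>): Record<string, unknown> {
  const result: Record<string, unknown> = {}
  const metricsField = metricsMap[params.operation as string]
  if (metricsField && params[metricsField]) {
    result.metrics = params[metricsField]
  }
  // Numeric coercion: short-input subblocks deliver strings.
  if (params.limit != null) result.limit = Number(params.limit)
  if (params.offset != null) result.offset = Number(params.offset)
  return result
}

const example = {
  operation: 'sentiment_report',
  sentimentMetrics: 'positive, negative',
  limit: '100',
}
console.log(toolId(example)) // profound_sentiment_report
console.log(buildParams(example)) // { metrics: 'positive, negative', limit: 100 }
```

Operations without an entry in `metricsMap` (e.g. `list_categories`) simply produce no `metrics` key, so list-style tools receive only the pagination params they declare.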

@@ -137,6 +137,7 @@ import { PipedriveBlock } from '@/blocks/blocks/pipedrive'
import { PolymarketBlock } from '@/blocks/blocks/polymarket'
import { PostgreSQLBlock } from '@/blocks/blocks/postgresql'
import { PostHogBlock } from '@/blocks/blocks/posthog'
import { ProfoundBlock } from '@/blocks/blocks/profound'
import { PulseBlock, PulseV2Block } from '@/blocks/blocks/pulse'
import { QdrantBlock } from '@/blocks/blocks/qdrant'
import { QuiverBlock } from '@/blocks/blocks/quiver'
@@ -357,6 +358,7 @@ export const registry: Record<string, BlockConfig> = {
perplexity: PerplexityBlock,
pinecone: PineconeBlock,
pipedrive: PipedriveBlock,
profound: ProfoundBlock,
polymarket: PolymarketBlock,
postgresql: PostgreSQLBlock,
posthog: PostHogBlock,

@@ -45,12 +45,12 @@ function TourCard({
return (
<>
<div className='flex items-center justify-between gap-2 px-4 pt-4 pb-2'>
<h3 className='min-w-0 font-medium text-[var(--text-primary)] text-sm leading-none'>
<h3 className='min-w-0 font-medium text-[var(--text-primary)] text-small leading-none'>
{title}
</h3>
<Button
variant='ghost'
className='h-[16px] w-[16px] flex-shrink-0 p-0'
className='relative h-[16px] w-[16px] flex-shrink-0 p-0 before:absolute before:inset-[-14px] before:content-[""]'
onClick={onClose}
aria-label='Close tour'
>
@@ -60,24 +60,23 @@ function TourCard({
</div>
<div className='px-4 pt-1 pb-3'>
<p className='text-[12px] text-[var(--text-secondary)] leading-[1.6]'>{description}</p>
<p className='text-[var(--text-secondary)] text-caption leading-[1.6]'>{description}</p>
</div>
<div className='flex items-center justify-between border-[var(--border)] border-t px-4 py-3'>
<span className='text-[11px] text-[var(--text-muted)] [font-variant-numeric:tabular-nums]'>
<div className='flex items-center justify-between rounded-b-xl border-[var(--border)] border-t bg-[color-mix(in_srgb,var(--surface-3)_50%,transparent)] px-4 py-2'>
<span className='text-[var(--text-muted)] text-xs [font-variant-numeric:tabular-nums]'>
{step} / {totalSteps}
</span>
<div className='flex items-center gap-1.5'>
<div className={cn(isFirst && 'invisible')}>
<Button
variant='default'
size='sm'
onClick={onBack}
tabIndex={isFirst ? -1 : undefined}
>
Back
</Button>
</div>
<Button
variant='default'
size='sm'
onClick={onBack}
tabIndex={isFirst ? -1 : undefined}
className={cn(isFirst && 'invisible')}
>
Back
</Button>
<Button variant='tertiary' size='sm' onClick={onNext}>
{isLast ? 'Done' : 'Next'}
</Button>
@@ -156,7 +155,7 @@ function TourTooltip({
const isCentered = placement === 'center'
const cardClasses = cn(
'w-[260px] overflow-hidden rounded-[8px] bg-[var(--bg)]',
'w-[260px] overflow-hidden rounded-xl bg-[var(--bg)]',
isEntrance && 'animate-tour-tooltip-in motion-reduce:animate-none',
className
)
@@ -181,7 +180,7 @@ function TourTooltip({
<div
className={cn(
cardClasses,
'pointer-events-auto relative border border-[var(--border)] shadow-sm'
'pointer-events-auto relative shadow-overlay ring-1 ring-foreground/10'
)}
>
{cardContent}
@@ -202,10 +201,7 @@ function TourTooltip({
sideOffset={10}
collisionPadding={12}
avoidCollisions
className='z-[10000300] outline-none'
style={{
filter: 'drop-shadow(0 0 0.5px var(--border)) drop-shadow(0 1px 2px rgba(0,0,0,0.1))',
}}
className='z-[10000300] outline-none drop-shadow-tour'
onOpenAutoFocus={(e) => e.preventDefault()}
onCloseAutoFocus={(e) => e.preventDefault()}
>

@@ -42,6 +42,7 @@ export { Key } from './key'
export { KeySquare } from './key-square'
export { Layout } from './layout'
export { Library } from './library'
export { Link } from './link'
export { ListFilter } from './list-filter'
export { Loader } from './loader'
export { Lock } from './lock'

@@ -0,0 +1,26 @@
import type { SVGProps } from 'react'
/**
* Link icon component
* @param props - SVG properties including className, size, etc.
*/
export function Link(props: SVGProps<SVGSVGElement>) {
return (
<svg
xmlns='http://www.w3.org/2000/svg'
width='24'
height='24'
viewBox='0 0 24 24'
fill='none'
stroke='currentColor'
strokeWidth='2'
strokeLinecap='round'
strokeLinejoin='round'
aria-hidden='true'
{...props}
>
<path d='M10 13a5 5 0 0 0 7.54.54l3-3a5 5 0 0 0-7.07-7.07l-1.72 1.71' />
<path d='M14 11a5 5 0 0 0-7.54-.54l-3 3a5 5 0 0 0 7.07 7.07l1.71-1.71' />
</svg>
)
}

@@ -1285,6 +1285,17 @@ export function StartIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function ProfoundIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg width='1em' height='1em' viewBox='0 0 55 55' xmlns='http://www.w3.org/2000/svg' {...props}>
<path
fill='currentColor'
d='M0 36.685V21.349a7.017 7.017 0 0 1 2.906-5.69l19.742-14.25A7.443 7.443 0 0 1 27.004 0h.062c1.623 0 3.193.508 4.501 1.452l19.684 14.207a7.016 7.016 0 0 1 2.906 5.69v12.302a7.013 7.013 0 0 1-2.907 5.689L31.527 53.562A7.605 7.605 0 0 1 27.078 55a7.641 7.641 0 0 1-4.465-1.44c-2.581-1.859-6.732-4.855-6.732-4.855V29.777c0-.249.28-.393.482-.248l10.538 7.605c.106.077.249.077.355 0l13.005-9.386a.306.306 0 0 0 0-.496l-13.005-9.386a.303.303 0 0 0-.355 0L.482 36.933A.304.304 0 0 1 0 36.685Z'
/>
</svg>
)
}
export function PineconeIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg

@@ -1,4 +1,4 @@
import { createLogger } from '@sim/logger'
import { createLogger, type Logger } from '@sim/logger'
import { redactApiKeys } from '@/lib/core/security/redaction'
import { getBaseUrl } from '@/lib/core/utils/urls'
import {
@@ -49,12 +49,22 @@ import { SYSTEM_SUBBLOCK_IDS } from '@/triggers/constants'
const logger = createLogger('BlockExecutor')
export class BlockExecutor {
private execLogger: Logger
constructor(
private blockHandlers: BlockHandler[],
private resolver: VariableResolver,
private contextExtensions: ContextExtensions,
private state: BlockStateWriter
) {}
) {
this.execLogger = logger.withMetadata({
workflowId: this.contextExtensions.metadata?.workflowId,
workspaceId: this.contextExtensions.workspaceId,
executionId: this.contextExtensions.executionId,
userId: this.contextExtensions.userId,
requestId: this.contextExtensions.metadata?.requestId,
})
}
async execute(
ctx: ExecutionContext,
@@ -273,7 +283,7 @@ export class BlockExecutor {
}
}
logger.error(
this.execLogger.error(
phase === 'input_resolution' ? 'Failed to resolve block inputs' : 'Block execution failed',
{
blockId: node.id,
@@ -306,7 +316,7 @@ export class BlockExecutor {
if (blockLog) {
blockLog.errorHandled = true
}
logger.info('Block has error port - returning error output instead of throwing', {
this.execLogger.info('Block has error port - returning error output instead of throwing', {
blockId: node.id,
error: errorMessage,
})
@@ -358,7 +368,7 @@ export class BlockExecutor {
blockName = `${blockName} (iteration ${loopScope.iteration})`
iterationIndex = loopScope.iteration
} else {
logger.warn('Loop scope not found for block', { blockId, loopId })
this.execLogger.warn('Loop scope not found for block', { blockId, loopId })
}
}
}
@@ -462,7 +472,7 @@ export class BlockExecutor {
ctx.childWorkflowContext
)
} catch (error) {
logger.warn('Block start callback failed', {
this.execLogger.warn('Block start callback failed', {
blockId,
blockType,
error: error instanceof Error ? error.message : String(error),
@@ -508,7 +518,7 @@ export class BlockExecutor {
ctx.childWorkflowContext
)
} catch (error) {
logger.warn('Block completion callback failed', {
this.execLogger.warn('Block completion callback failed', {
blockId,
blockType,
error: error instanceof Error ? error.message : String(error),
@@ -633,7 +643,7 @@ export class BlockExecutor {
try {
await ctx.onStream?.(clientStreamingExec)
} catch (error) {
logger.error('Error in onStream callback', { blockId, error })
this.execLogger.error('Error in onStream callback', { blockId, error })
// Cancel the client stream to release the tee'd buffer
await processedClientStream.cancel().catch(() => {})
}
@@ -663,7 +673,7 @@ export class BlockExecutor {
stream: processedStream,
})
} catch (error) {
logger.error('Error in onStream callback', { blockId, error })
this.execLogger.error('Error in onStream callback', { blockId, error })
await processedStream.cancel().catch(() => {})
}
}
@@ -687,7 +697,7 @@ export class BlockExecutor {
const tail = decoder.decode()
if (tail) chunks.push(tail)
} catch (error) {
logger.error('Error reading executor stream for block', { blockId, error })
this.execLogger.error('Error reading executor stream for block', { blockId, error })
} finally {
try {
await reader.cancel().catch(() => {})
@@ -718,7 +728,10 @@ export class BlockExecutor {
}
return
} catch (error) {
logger.warn('Failed to parse streamed content for response format', { blockId, error })
this.execLogger.warn('Failed to parse streamed content for response format', {
blockId,
error,
})
}
}

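The BlockExecutor change above binds execution metadata (workflowId, executionId, userId, …) to a child logger once in the constructor, so every later call site logs those fields without restating them. A minimal sketch of that `withMetadata` pattern — the real `@sim/logger` API is assumed here, not reproduced:

```typescript
// Minimal child-logger sketch: withMetadata returns a logger whose
// bound fields are merged into every subsequent log call.
type Fields = Record<string, unknown>

class Logger {
  constructor(
    private name: string,
    private bound: Fields = {}
  ) {}

  withMetadata(fields: Fields): Logger {
    // Child inherits parent fields; new fields win on key collision.
    return new Logger(this.name, { ...this.bound, ...fields })
  }

  error(message: string, fields: Fields = {}): void {
    console.error(`[${this.name}] ${message}`, { ...this.bound, ...fields })
  }
}

const logger = new Logger('BlockExecutor')
const execLogger = logger.withMetadata({ workflowId: 'wf_1', executionId: 'exec_1' })

// Every call now carries workflowId/executionId without restating them:
execLogger.error('Block execution failed', { blockId: 'block_42' })
```

This is why the diffs above mechanically replace `logger.error(...)` with `this.execLogger.error(...)`: the message and per-call fields stay the same, and the execution context rides along from the constructor.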
@@ -1,4 +1,4 @@
import { createLogger } from '@sim/logger'
import { createLogger, type Logger } from '@sim/logger'
import { isExecutionCancelled, isRedisCancellationEnabled } from '@/lib/execution/cancellation'
import { BlockType } from '@/executor/constants'
import type { DAG } from '@/executor/dag/builder'
@@ -34,6 +34,7 @@ export class ExecutionEngine {
private readonly CANCELLATION_CHECK_INTERVAL_MS = 500
private abortPromise: Promise<void> | null = null
private abortResolve: (() => void) | null = null
private execLogger: Logger
constructor(
private context: ExecutionContext,
@@ -43,6 +44,13 @@ export class ExecutionEngine {
) {
this.allowResumeTriggers = this.context.metadata.resumeFromSnapshot === true
this.useRedisCancellation = isRedisCancellationEnabled() && !!this.context.executionId
this.execLogger = logger.withMetadata({
workflowId: this.context.workflowId,
workspaceId: this.context.workspaceId,
executionId: this.context.executionId,
userId: this.context.userId,
requestId: this.context.metadata.requestId,
})
this.initializeAbortHandler()
}
@@ -88,7 +96,9 @@ export class ExecutionEngine {
const cancelled = await isExecutionCancelled(this.context.executionId!)
if (cancelled) {
this.cancelledFlag = true
logger.info('Execution cancelled via Redis', { executionId: this.context.executionId })
this.execLogger.info('Execution cancelled via Redis', {
executionId: this.context.executionId,
})
}
return cancelled
}
@@ -169,7 +179,7 @@ export class ExecutionEngine {
this.finalizeIncompleteLogs()
const errorMessage = normalizeError(error)
logger.error('Execution failed', { error: errorMessage })
this.execLogger.error('Execution failed', { error: errorMessage })
const executionResult: ExecutionResult = {
success: false,
@@ -270,7 +280,7 @@ export class ExecutionEngine {
private initializeQueue(triggerBlockId?: string): void {
if (this.context.runFromBlockContext) {
const { startBlockId } = this.context.runFromBlockContext
logger.info('Initializing queue for run-from-block mode', {
this.execLogger.info('Initializing queue for run-from-block mode', {
startBlockId,
dirtySetSize: this.context.runFromBlockContext.dirtySet.size,
})
@@ -282,7 +292,7 @@ export class ExecutionEngine {
const remainingEdges = (this.context.metadata as any).remainingEdges
if (remainingEdges && Array.isArray(remainingEdges) && remainingEdges.length > 0) {
logger.info('Removing edges from resumed pause blocks', {
this.execLogger.info('Removing edges from resumed pause blocks', {
edgeCount: remainingEdges.length,
edges: remainingEdges,
})
@@ -294,13 +304,13 @@ export class ExecutionEngine {
targetNode.incomingEdges.delete(edge.source)
if (this.edgeManager.isNodeReady(targetNode)) {
logger.info('Node became ready after edge removal', { nodeId: targetNode.id })
this.execLogger.info('Node became ready after edge removal', { nodeId: targetNode.id })
this.addToQueue(targetNode.id)
}
}
}
logger.info('Edge removal complete, queued ready nodes', {
this.execLogger.info('Edge removal complete, queued ready nodes', {
queueLength: this.readyQueue.length,
queuedNodes: this.readyQueue,
})
@@ -309,7 +319,7 @@ export class ExecutionEngine {
}
if (pendingBlocks && pendingBlocks.length > 0) {
logger.info('Initializing queue from pending blocks (resume mode)', {
this.execLogger.info('Initializing queue from pending blocks (resume mode)', {
pendingBlocks,
allowResumeTriggers: this.allowResumeTriggers,
dagNodeCount: this.dag.nodes.size,
@@ -319,7 +329,7 @@ export class ExecutionEngine {
this.addToQueue(nodeId)
}
logger.info('Pending blocks queued', {
this.execLogger.info('Pending blocks queued', {
queueLength: this.readyQueue.length,
queuedNodes: this.readyQueue,
})
@@ -341,7 +351,7 @@ export class ExecutionEngine {
if (startNode) {
this.addToQueue(startNode.id)
} else {
logger.warn('No start node found in DAG')
this.execLogger.warn('No start node found in DAG')
}
}
@@ -373,7 +383,7 @@ export class ExecutionEngine {
}
} catch (error) {
const errorMessage = normalizeError(error)
logger.error('Node execution failed', { nodeId, error: errorMessage })
this.execLogger.error('Node execution failed', { nodeId, error: errorMessage })
throw error
}
}
@@ -385,7 +395,7 @@ export class ExecutionEngine {
): Promise<void> {
const node = this.dag.nodes.get(nodeId)
if (!node) {
logger.error('Node not found during completion', { nodeId })
this.execLogger.error('Node not found during completion', { nodeId })
return
}
@@ -409,7 +419,7 @@ export class ExecutionEngine {
// shouldContinue: true means more iterations, shouldExit: true means loop is done
const shouldContinueLoop = output.shouldContinue === true
if (!shouldContinueLoop) {
logger.info('Stopping execution after target block', { nodeId })
this.execLogger.info('Stopping execution after target block', { nodeId })
this.stoppedEarlyFlag = true
return
}
@@ -417,7 +427,7 @@ export class ExecutionEngine {
const readyNodes = this.edgeManager.processOutgoingEdges(node, output, false)
logger.info('Processing outgoing edges', {
this.execLogger.info('Processing outgoing edges', {
nodeId,
outgoingEdgesCount: node.outgoingEdges.size,
outgoingEdges: Array.from(node.outgoingEdges.entries()).map(([id, e]) => ({
@@ -435,7 +445,7 @@ export class ExecutionEngine {
if (this.context.pendingDynamicNodes && this.context.pendingDynamicNodes.length > 0) {
const dynamicNodes = this.context.pendingDynamicNodes
this.context.pendingDynamicNodes = []
logger.info('Adding dynamically expanded parallel nodes', { dynamicNodes })
this.execLogger.info('Adding dynamically expanded parallel nodes', { dynamicNodes })
this.addMultipleToQueue(dynamicNodes)
}
}
@@ -482,7 +492,7 @@ export class ExecutionEngine {
}
return parsedSnapshot.state
} catch (error) {
logger.warn('Failed to serialize execution state', {
this.execLogger.warn('Failed to serialize execution state', {
error: error instanceof Error ? error.message : String(error),
})
return undefined

@@ -1,4 +1,4 @@
import { createLogger } from '@sim/logger'
import { createLogger, type Logger } from '@sim/logger'
import { StartBlockPath } from '@/lib/workflows/triggers/triggers'
import type { DAG } from '@/executor/dag/builder'
import { DAGBuilder } from '@/executor/dag/builder'
@@ -52,6 +52,7 @@ export class DAGExecutor {
private workflowVariables: Record<string, unknown>
private contextExtensions: ContextExtensions
private dagBuilder: DAGBuilder
private execLogger: Logger
constructor(options: DAGExecutorOptions) {
this.workflow = options.workflow
@@ -60,6 +61,13 @@ export class DAGExecutor {
this.workflowVariables = options.workflowVariables ?? {}
this.contextExtensions = options.contextExtensions ?? {}
this.dagBuilder = new DAGBuilder()
this.execLogger = logger.withMetadata({
workflowId: this.contextExtensions.metadata?.workflowId,
workspaceId: this.contextExtensions.workspaceId,
executionId: this.contextExtensions.executionId,
userId: this.contextExtensions.userId,
requestId: this.contextExtensions.metadata?.requestId,
})
}
async execute(workflowId: string, triggerBlockId?: string): Promise<ExecutionResult> {
@@ -79,7 +87,9 @@ export class DAGExecutor {
_pendingBlocks: string[],
context: ExecutionContext
): Promise<ExecutionResult> {
logger.warn('Debug mode (continueExecution) is not yet implemented in the refactored executor')
this.execLogger.warn(
'Debug mode (continueExecution) is not yet implemented in the refactored executor'
)
return {
success: false,
output: {},
@@ -163,7 +173,7 @@ export class DAGExecutor {
parallelExecutions: filteredParallelExecutions,
}
logger.info('Executing from block', {
this.execLogger.info('Executing from block', {
workflowId,
startBlockId,
effectiveStartBlockId,
@@ -247,7 +257,7 @@ export class DAGExecutor {
if (overrides?.runFromBlockContext) {
const { dirtySet } = overrides.runFromBlockContext
executedBlocks = new Set([...executedBlocks].filter((id) => !dirtySet.has(id)))
logger.info('Cleared executed status for dirty blocks', {
this.execLogger.info('Cleared executed status for dirty blocks', {
dirtySetSize: dirtySet.size,
remainingExecutedBlocks: executedBlocks.size,
})
@@ -332,7 +342,7 @@ export class DAGExecutor {
if (this.contextExtensions.resumeFromSnapshot) {
context.metadata.resumeFromSnapshot = true
logger.info('Resume from snapshot enabled', {
this.execLogger.info('Resume from snapshot enabled', {
resumePendingQueue: this.contextExtensions.resumePendingQueue,
remainingEdges: this.contextExtensions.remainingEdges,
triggerBlockId,
@@ -341,14 +351,14 @@ export class DAGExecutor {
if (this.contextExtensions.remainingEdges) {
;(context.metadata as any).remainingEdges = this.contextExtensions.remainingEdges
logger.info('Set remaining edges for resume', {
this.execLogger.info('Set remaining edges for resume', {
edgeCount: this.contextExtensions.remainingEdges.length,
})
}
if (this.contextExtensions.resumePendingQueue?.length) {
context.metadata.pendingBlocks = [...this.contextExtensions.resumePendingQueue]
logger.info('Set pending blocks from resume queue', {
this.execLogger.info('Set pending blocks from resume queue', {
pendingBlocks: context.metadata.pendingBlocks,
skipStarterBlockInit: true,
})
@@ -409,7 +419,7 @@ export class DAGExecutor {
if (triggerBlockId) {
const triggerBlock = this.workflow.blocks.find((b) => b.id === triggerBlockId)
if (!triggerBlock) {
logger.error('Specified trigger block not found in workflow', {
this.execLogger.error('Specified trigger block not found in workflow', {
triggerBlockId,
})
throw new Error(`Trigger block not found: ${triggerBlockId}`)
@@ -431,7 +441,7 @@ export class DAGExecutor {
})
if (!startResolution?.block) {
logger.warn('No start block found in workflow')
this.execLogger.warn('No start block found in workflow')
return
}
}

@@ -8,6 +8,7 @@
import { createLogger } from '@sim/logger'
import { env } from '@/lib/core/config/env'
import { isHosted } from '@/lib/core/config/feature-flags'
import { getBaseDomain } from '@/lib/core/utils/urls'
const logger = createLogger('ProfoundAnalytics')
@@ -97,7 +98,7 @@ export function sendToProfound(request: Request, statusCode: number): void {
buffer.push({
timestamp: new Date().toISOString(),
method: request.method,
host: url.hostname,
host: getBaseDomain(),
path: url.pathname,
status_code: statusCode,
ip:

@@ -26,6 +26,11 @@ export const AuditAction = {
CHAT_UPDATED: 'chat.updated',
CHAT_DELETED: 'chat.deleted',
// Custom Tools
CUSTOM_TOOL_CREATED: 'custom_tool.created',
CUSTOM_TOOL_UPDATED: 'custom_tool.updated',
CUSTOM_TOOL_DELETED: 'custom_tool.deleted',
// Billing
CREDIT_PURCHASED: 'credit.purchased',
@@ -99,8 +104,10 @@ export const AuditAction = {
NOTIFICATION_UPDATED: 'notification.updated',
NOTIFICATION_DELETED: 'notification.deleted',
// OAuth
// OAuth / Credentials
OAUTH_DISCONNECTED: 'oauth.disconnected',
CREDENTIAL_RENAMED: 'credential.renamed',
CREDENTIAL_DELETED: 'credential.deleted',
// Password
PASSWORD_RESET: 'password.reset',
@@ -124,6 +131,11 @@ export const AuditAction = {
PERMISSION_GROUP_MEMBER_ADDED: 'permission_group_member.added',
PERMISSION_GROUP_MEMBER_REMOVED: 'permission_group_member.removed',
// Skills
SKILL_CREATED: 'skill.created',
SKILL_UPDATED: 'skill.updated',
SKILL_DELETED: 'skill.deleted',
// Schedules
SCHEDULE_UPDATED: 'schedule.updated',
@@ -173,6 +185,7 @@ export const AuditResourceType = {
CHAT: 'chat',
CONNECTOR: 'connector',
CREDENTIAL_SET: 'credential_set',
CUSTOM_TOOL: 'custom_tool',
DOCUMENT: 'document',
ENVIRONMENT: 'environment',
FILE: 'file',
@@ -186,6 +199,7 @@ export const AuditResourceType = {
PASSWORD: 'password',
PERMISSION_GROUP: 'permission_group',
SCHEDULE: 'schedule',
SKILL: 'skill',
TABLE: 'table',
TEMPLATE: 'template',
WEBHOOK: 'webhook',

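The audit-constant additions above extend a plain object of action strings. Assuming `AuditAction` is declared with an `as const` assertion (the declaration itself is outside this hunk), a union type of every action string can be derived from the object, so a helper rejects unknown actions at compile time — a sketch with an abbreviated, hypothetical subset of the map:

```typescript
// Assumed shape (abbreviated): a const object of audit action strings.
const AuditAction = {
  CUSTOM_TOOL_CREATED: 'custom_tool.created',
  CUSTOM_TOOL_UPDATED: 'custom_tool.updated',
  CUSTOM_TOOL_DELETED: 'custom_tool.deleted',
  SKILL_CREATED: 'skill.created',
  CREDENTIAL_RENAMED: 'credential.renamed',
} as const

// Union of the value strings:
// 'custom_tool.created' | 'custom_tool.updated' | ... | 'credential.renamed'
type AuditActionType = (typeof AuditAction)[keyof typeof AuditAction]

// An audit helper typed against the union: passing an arbitrary string
// (or a typo'd action) is a compile-time error.
function audit(action: AuditActionType, resourceId: string): string {
  return `${action}:${resourceId}`
}

console.log(audit(AuditAction.SKILL_CREATED, 'skill_1')) // skill.created:skill_1
```

Adding a new entry such as `SKILL_DELETED: 'skill.deleted'` then widens the union automatically, with no separate enum to keep in sync.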
@@ -154,7 +154,7 @@ export async function checkSessionOrInternalAuth(
return {
success: false,
error: 'Authentication required - provide session or internal JWT',
error: 'Unauthorized',
}
} catch (error) {
logger.error('Error in session/internal authentication:', error)
@@ -225,7 +225,7 @@ export async function checkHybridAuth(
// No authentication found
return {
success: false,
error: 'Authentication required - provide session, API key, or internal JWT',
error: 'Unauthorized',
}
} catch (error) {
logger.error('Error in hybrid authentication:', error)

@@ -3,13 +3,15 @@
*/
import { beforeEach, describe, expect, it, vi } from 'vitest'
vi.mock('@sim/logger', () => ({
createLogger: vi.fn(() => ({
vi.mock('@sim/logger', () => {
const createMockLogger = (): Record<string, any> => ({
info: vi.fn(),
warn: vi.fn(),
error: vi.fn(),
})),
}))
withMetadata: vi.fn(() => createMockLogger()),
})
return { createLogger: vi.fn(() => createMockLogger()) }
})
vi.mock('@/lib/billing/core/subscription', () => ({
getUserSubscriptionState: vi.fn(),

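The test-mock change above exists because the executor now calls `logger.withMetadata(...).warn(...)`: the old flat mock exposed only `info`/`warn`/`error`, so any chained call would throw. The fix makes `withMetadata` return a fresh mock logger. A dependency-free sketch of that recursive shape (plain closures stand in for `vi.fn`):

```typescript
// Recursive mock: withMetadata returns a fresh mock logger, so chained
// calls like logger.withMetadata({...}).warn(...) never throw in tests.
type MockLogger = {
  info: (...args: unknown[]) => void
  warn: (...args: unknown[]) => void
  error: (...args: unknown[]) => void
  withMetadata: (fields: Record<string, unknown>) => MockLogger
}

function createMockLogger(): MockLogger {
  return {
    info: () => {},
    warn: () => {},
    error: () => {},
    withMetadata: () => createMockLogger(),
  }
}

// The old flat mock ({ info, warn, error } only) would crash on this chain:
const logger = createMockLogger()
logger.withMetadata({ messageId: 'msg_1' }).warn('Failed to build schema', { toolId: 't' })
```

Because each `withMetadata` call yields a complete logger, chains of any depth stay safe, matching how the production code layers request metadata onto base loggers.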
@@ -1,6 +1,5 @@
import { createLogger } from '@sim/logger'
import { getUserSubscriptionState } from '@/lib/billing/core/subscription'
import { appendCopilotLogContext } from '@/lib/copilot/logging'
import { getCopilotToolDescription } from '@/lib/copilot/tool-descriptions'
import { isHosted } from '@/lib/core/config/feature-flags'
import { createMcpToolId } from '@/lib/mcp/utils'
@@ -50,6 +49,7 @@ export async function buildIntegrationToolSchemas(
userId: string,
messageId?: string
): Promise<ToolSchema[]> {
const reqLogger = logger.withMetadata({ messageId })
const integrationTools: ToolSchema[] = []
try {
const { createUserToolSchema } = await import('@/tools/params')
@@ -60,15 +60,10 @@ export async function buildIntegrationToolSchemas(
const subscriptionState = await getUserSubscriptionState(userId)
shouldAppendEmailTagline = subscriptionState.isFree
} catch (error) {
logger.warn(
appendCopilotLogContext('Failed to load subscription state for copilot tool descriptions', {
messageId,
}),
{
userId,
error: error instanceof Error ? error.message : String(error),
}
)
reqLogger.warn('Failed to load subscription state for copilot tool descriptions', {
userId,
error: error instanceof Error ? error.message : String(error),
})
}
for (const [toolId, toolConfig] of Object.entries(latestTools)) {
@@ -92,17 +87,14 @@ export async function buildIntegrationToolSchemas(
}),
})
} catch (toolError) {
logger.warn(
appendCopilotLogContext('Failed to build schema for tool, skipping', { messageId }),
{
toolId,
error: toolError instanceof Error ? toolError.message : String(toolError),
}
)
reqLogger.warn('Failed to build schema for tool, skipping', {
toolId,
error: toolError instanceof Error ? toolError.message : String(toolError),
})
}
}
} catch (error) {
logger.warn(appendCopilotLogContext('Failed to build tool schemas', { messageId }), {
reqLogger.warn('Failed to build tool schemas', {
error: error instanceof Error ? error.message : String(error),
})
}
@@ -182,6 +174,8 @@ export async function buildCopilotRequestPayload(
let integrationTools: ToolSchema[] = []
const payloadLogger = logger.withMetadata({ messageId: userMessageId })
if (effectiveMode === 'build') {
integrationTools = await buildIntegrationToolSchemas(userId, userMessageId)
@@ -201,23 +195,13 @@ export async function buildCopilotRequestPayload(
})
}
if (mcpTools.length > 0) {
logger.error(
appendCopilotLogContext('Added MCP tools to copilot payload', {
messageId: userMessageId,
}),
{ count: mcpTools.length }
)
payloadLogger.info('Added MCP tools to copilot payload', { count: mcpTools.length })
}
}
} catch (error) {
logger.warn(
appendCopilotLogContext('Failed to discover MCP tools for copilot', {
messageId: userMessageId,
}),
{
error: error instanceof Error ? error.message : String(error),
}
)
payloadLogger.warn('Failed to discover MCP tools for copilot', {
error: error instanceof Error ? error.message : String(error),
})
}
}
}

@@ -4,7 +4,6 @@ import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import { createRunSegment, updateRunStatus } from '@/lib/copilot/async-runs/repository'
import { SIM_AGENT_API_URL } from '@/lib/copilot/constants'
import { appendCopilotLogContext } from '@/lib/copilot/logging'
import type { OrchestrateStreamOptions } from '@/lib/copilot/orchestrator'
import { orchestrateCopilotStream } from '@/lib/copilot/orchestrator'
import {
@@ -229,20 +228,17 @@ export async function requestChatTitle(params: {
const payload = await response.json().catch(() => ({}))
if (!response.ok) {
logger.warn(
appendCopilotLogContext('Failed to generate chat title via copilot backend', { messageId }),
{
status: response.status,
error: payload,
}
)
logger.withMetadata({ messageId }).warn('Failed to generate chat title via copilot backend', {
status: response.status,
error: payload,
})
return null
}
const title = typeof payload?.title === 'string' ? payload.title.trim() : ''
return title || null
} catch (error) {
logger.error(appendCopilotLogContext('Error generating chat title', { messageId }), error)
logger.withMetadata({ messageId }).error('Error generating chat title', error)
return null
}
}
@@ -285,6 +281,7 @@ export function createSSEStream(params: StreamingOrchestrationParams): ReadableS
} = params
const messageId =
typeof requestPayload.messageId === 'string' ? requestPayload.messageId : streamId
const reqLogger = logger.withMetadata({ requestId, messageId })
let eventWriter: ReturnType<typeof createStreamEventWriter> | null = null
let clientDisconnected = false
@@ -306,17 +303,11 @@ export function createSSEStream(params: StreamingOrchestrationParams): ReadableS
if (!clientDisconnectedController.signal.aborted) {
clientDisconnectedController.abort()
}
logger.info(
appendCopilotLogContext('Client disconnected from live SSE stream', {
requestId,
messageId,
}),
{
streamId,
runId,
reason,
}
)
reqLogger.info('Client disconnected from live SSE stream', {
streamId,
runId,
reason,
})
}
await resetStreamBuffer(streamId)
@@ -334,15 +325,9 @@ export function createSSEStream(params: StreamingOrchestrationParams): ReadableS
provider: (requestPayload.provider as string | undefined) || null,
requestContext: { requestId },
}).catch((error) => {
logger.warn(
appendCopilotLogContext('Failed to create copilot run segment', {
requestId,
messageId,
}),
{
error: error instanceof Error ? error.message : String(error),
}
)
reqLogger.warn('Failed to create copilot run segment', {
error: error instanceof Error ? error.message : String(error),
})
})
}
eventWriter = createStreamEventWriter(streamId)
@@ -362,16 +347,10 @@ export function createSSEStream(params: StreamingOrchestrationParams): ReadableS
await redis.del(getStreamAbortKey(streamId))
}
} catch (error) {
logger.warn(
appendCopilotLogContext('Failed to poll distributed stream abort', {
requestId,
messageId,
}),
{
streamId,
error: error instanceof Error ? error.message : String(error),
}
)
reqLogger.warn('Failed to poll distributed stream abort', {
streamId,
error: error instanceof Error ? error.message : String(error),
})
}
})()
}, STREAM_ABORT_POLL_MS)
@@ -388,14 +367,11 @@ export function createSSEStream(params: StreamingOrchestrationParams): ReadableS
await eventWriter.flush()
}
} catch (error) {
logger.error(
appendCopilotLogContext('Failed to persist stream event', { requestId, messageId }),
{
eventType: event.type,
eventId,
error: error instanceof Error ? error.message : String(error),
}
)
reqLogger.error('Failed to persist stream event', {
eventType: event.type,
eventId,
error: error instanceof Error ? error.message : String(error),
})
// Keep the live SSE stream going even if durable buffering hiccups.
}
@@ -414,7 +390,7 @@ export function createSSEStream(params: StreamingOrchestrationParams): ReadableS
try {
await pushEvent(event)
} catch (error) {
logger.error(appendCopilotLogContext('Failed to push event', { requestId, messageId }), {
reqLogger.error('Failed to push event', {
eventType: event.type,
error: error instanceof Error ? error.message : String(error),
})
@@ -437,10 +413,7 @@ export function createSSEStream(params: StreamingOrchestrationParams): ReadableS
}
})
.catch((error) => {
logger.error(
appendCopilotLogContext('Title generation failed', { requestId, messageId }),
error
)
reqLogger.error('Title generation failed', error)
})
}
@@ -467,9 +440,7 @@ export function createSSEStream(params: StreamingOrchestrationParams): ReadableS
})
if (abortController.signal.aborted) {
logger.error(
appendCopilotLogContext('Stream aborted by explicit stop', { requestId, messageId })
)
reqLogger.info('Stream aborted by explicit stop')
await eventWriter.close().catch(() => {})
await setStreamMeta(streamId, { status: 'cancelled', userId, executionId, runId })
await updateRunStatus(runId, 'cancelled', { completedAt: new Date() }).catch(() => {})
@@ -483,23 +454,14 @@ export function createSSEStream(params: StreamingOrchestrationParams): ReadableS
'An unexpected error occurred while processing the response.'
if (clientDisconnected) {
logger.error(
appendCopilotLogContext('Stream failed after client disconnect', {
requestId,
messageId,
}),
{
error: errorMessage,
}
)
reqLogger.info('Stream failed after client disconnect', {
error: errorMessage,
})
}
logger.error(
appendCopilotLogContext('Orchestration returned failure', { requestId, messageId }),
{
error: errorMessage,
}
)
reqLogger.error('Orchestration returned failure', {
error: errorMessage,
})
await pushEventBestEffort({
type: 'error',
error: errorMessage,
@@ -526,42 +488,25 @@ export function createSSEStream(params: StreamingOrchestrationParams): ReadableS
await setStreamMeta(streamId, { status: 'complete', userId, executionId, runId })
await updateRunStatus(runId, 'complete', { completedAt: new Date() }).catch(() => {})
if (clientDisconnected) {
logger.info(
appendCopilotLogContext('Orchestration completed after client disconnect', {
requestId,
messageId,
}),
{
streamId,
runId,
}
)
reqLogger.info('Orchestration completed after client disconnect', {
streamId,
runId,
})
}
} catch (error) {
if (abortController.signal.aborted) {
logger.error(
appendCopilotLogContext('Stream aborted by explicit stop', { requestId, messageId })
)
reqLogger.info('Stream aborted by explicit stop')
await eventWriter.close().catch(() => {})
await setStreamMeta(streamId, { status: 'cancelled', userId, executionId, runId })
await updateRunStatus(runId, 'cancelled', { completedAt: new Date() }).catch(() => {})
return
}
if (clientDisconnected) {
logger.error(
appendCopilotLogContext('Stream errored after client disconnect', {
requestId,
messageId,
}),
{
error: error instanceof Error ? error.message : 'Stream error',
}
)
reqLogger.info('Stream errored after client disconnect', {
error: error instanceof Error ? error.message : 'Stream error',
})
}
logger.error(
appendCopilotLogContext('Orchestration error', { requestId, messageId }),
error
)
reqLogger.error('Orchestration error', error)
const errorMessage = error instanceof Error ? error.message : 'Stream error'
await pushEventBestEffort({
type: 'error',
@@ -583,7 +528,7 @@ export function createSSEStream(params: StreamingOrchestrationParams): ReadableS
error: errorMessage,
}).catch(() => {})
} finally {
logger.info(appendCopilotLogContext('Closing live SSE stream', { requestId, messageId }), {
reqLogger.info('Closing live SSE stream', {
streamId,
runId,
clientDisconnected,
@@ -611,16 +556,10 @@ export function createSSEStream(params: StreamingOrchestrationParams): ReadableS
}
},
cancel() {
logger.info(
appendCopilotLogContext('ReadableStream cancel received from client', {
requestId,
messageId,
}),
{
streamId,
runId,
}
)
reqLogger.info('ReadableStream cancel received from client', {
streamId,
runId,
})
if (!clientDisconnected) {
clientDisconnected = true
if (!clientDisconnectedController.signal.aborted) {

View File

@@ -1,62 +0,0 @@
import type { ClientContentBlock, ClientStreamingContext } from '@/lib/copilot/client-sse/types'
/**
* Appends plain text to the active text block, or starts a new one when needed.
*/
export function appendTextBlock(context: ClientStreamingContext, content: string): void {
if (!content) return
context.accumulatedContent += content
if (context.currentTextBlock?.type === 'text') {
context.currentTextBlock.content = `${context.currentTextBlock.content || ''}${content}`
return
}
const block: ClientContentBlock = {
type: 'text',
content,
timestamp: Date.now(),
}
context.currentTextBlock = block
context.contentBlocks.push(block)
}
/**
* Starts a new thinking block when the stream enters a reasoning segment.
*/
export function beginThinkingBlock(context: ClientStreamingContext): void {
if (context.currentThinkingBlock) {
context.isInThinkingBlock = true
context.currentTextBlock = null
return
}
const block: ClientContentBlock = {
type: 'thinking',
content: '',
timestamp: Date.now(),
startTime: Date.now(),
}
context.currentThinkingBlock = block
context.contentBlocks.push(block)
context.currentTextBlock = null
context.isInThinkingBlock = true
}
/**
* Closes the active thinking block and records its visible duration.
*/
export function finalizeThinkingBlock(context: ClientStreamingContext): void {
if (!context.currentThinkingBlock) {
context.isInThinkingBlock = false
return
}
const startTime = context.currentThinkingBlock.startTime ?? context.currentThinkingBlock.timestamp
context.currentThinkingBlock.duration = Math.max(0, Date.now() - startTime)
context.currentThinkingBlock = null
context.isInThinkingBlock = false
}

File diff suppressed because it is too large

View File

@@ -14,7 +14,6 @@ import {
updateRunStatus,
} from '@/lib/copilot/async-runs/repository'
import { SIM_AGENT_API_URL, SIM_AGENT_VERSION } from '@/lib/copilot/constants'
import { appendCopilotLogContext } from '@/lib/copilot/logging'
import {
isToolAvailableOnSimSide,
prepareExecutionContext,
@@ -128,15 +127,11 @@ export async function orchestrateCopilotStream(
messageId,
})
const continuationWorkerId = `sim-resume:${crypto.randomUUID()}`
const withLogContext = (message: string) =>
appendCopilotLogContext(message, {
requestId: context.requestId,
messageId,
})
const reqLogger = logger.withMetadata({ requestId: context.requestId, messageId })
let claimedToolCallIds: string[] = []
let claimedByWorkerId: string | null = null
logger.error(withLogContext('Starting copilot orchestration'), {
reqLogger.info('Starting copilot orchestration', {
goRoute,
workflowId,
workspaceId,
@@ -155,7 +150,7 @@ export async function orchestrateCopilotStream(
for (;;) {
context.streamComplete = false
logger.error(withLogContext('Starting orchestration loop iteration'), {
reqLogger.info('Starting orchestration loop iteration', {
route,
hasPendingAsyncContinuation: Boolean(context.awaitingAsyncContinuation),
claimedToolCallCount: claimedToolCallIds.length,
@@ -168,7 +163,7 @@ export async function orchestrateCopilotStream(
const d = (event.data ?? {}) as Record<string, unknown>
const response = (d.response ?? {}) as Record<string, unknown>
if (response.async_pause) {
logger.error(withLogContext('Detected async pause from copilot backend'), {
reqLogger.info('Detected async pause from copilot backend', {
route,
checkpointId:
typeof (response.async_pause as Record<string, unknown>)?.checkpointId ===
@@ -201,7 +196,7 @@ export async function orchestrateCopilotStream(
loopOptions
)
logger.error(withLogContext('Completed orchestration loop iteration'), {
reqLogger.info('Completed orchestration loop iteration', {
route,
streamComplete: context.streamComplete,
wasAborted: context.wasAborted,
@@ -210,7 +205,7 @@ export async function orchestrateCopilotStream(
})
if (claimedToolCallIds.length > 0) {
logger.error(withLogContext('Marking async tool calls as delivered'), {
reqLogger.info('Marking async tool calls as delivered', {
toolCallIds: claimedToolCallIds,
})
await Promise.all(
@@ -223,7 +218,7 @@ export async function orchestrateCopilotStream(
}
if (options.abortSignal?.aborted || context.wasAborted) {
logger.error(withLogContext('Stopping orchestration because request was aborted'), {
reqLogger.info('Stopping orchestration because request was aborted', {
pendingToolCallCount: Array.from(context.toolCalls.values()).filter(
(toolCall) => toolCall.status === 'pending' || toolCall.status === 'executing'
).length,
@@ -241,13 +236,13 @@ export async function orchestrateCopilotStream(
const continuation = context.awaitingAsyncContinuation
if (!continuation) {
logger.error(withLogContext('No async continuation pending; finishing orchestration'))
reqLogger.info('No async continuation pending; finishing orchestration')
break
}
let resumeReady = false
let resumeRetries = 0
logger.error(withLogContext('Processing async continuation'), {
reqLogger.info('Processing async continuation', {
checkpointId: continuation.checkpointId,
runId: continuation.runId,
pendingToolCallIds: continuation.pendingToolCallIds,
@@ -267,26 +262,19 @@ export async function orchestrateCopilotStream(
if (localPendingPromise) {
localPendingPromises.push(localPendingPromise)
logger.info(
withLogContext(
'Waiting for local async tool completion before retrying resume claim'
),
{
toolCallId,
runId: continuation.runId,
workerId: resumeWorkerId,
}
)
reqLogger.info('Waiting for local async tool completion before retrying resume claim', {
toolCallId,
runId: continuation.runId,
workerId: resumeWorkerId,
})
continue
}
if (durableRow && isTerminalAsyncStatus(durableRow.status)) {
if (durableRow.claimedBy && durableRow.claimedBy !== resumeWorkerId) {
missingToolCallIds.push(toolCallId)
logger.warn(
withLogContext(
'Async tool continuation is waiting on a claim held by another worker'
),
reqLogger.warn(
'Async tool continuation is waiting on a claim held by another worker',
{
toolCallId,
runId: continuation.runId,
@@ -312,15 +300,12 @@ export async function orchestrateCopilotStream(
isTerminalToolCallStatus(toolState.status) &&
!isToolAvailableOnSimSide(toolState.name)
) {
logger.info(
withLogContext('Including Go-handled tool in resume payload (no Sim-side row)'),
{
toolCallId,
toolName: toolState.name,
status: toolState.status,
runId: continuation.runId,
}
)
reqLogger.info('Including Go-handled tool in resume payload (no Sim-side row)', {
toolCallId,
toolName: toolState.name,
status: toolState.status,
runId: continuation.runId,
})
readyTools.push({
toolCallId,
toolState,
@@ -330,7 +315,7 @@ export async function orchestrateCopilotStream(
continue
}
logger.warn(withLogContext('Skipping already-claimed or missing async tool resume'), {
reqLogger.warn('Skipping already-claimed or missing async tool resume', {
toolCallId,
runId: continuation.runId,
durableStatus: durableRow?.status,
@@ -340,13 +325,10 @@ export async function orchestrateCopilotStream(
}
if (localPendingPromises.length > 0) {
logger.info(
withLogContext('Waiting for local pending async tools before resuming continuation'),
{
checkpointId: continuation.checkpointId,
pendingPromiseCount: localPendingPromises.length,
}
)
reqLogger.info('Waiting for local pending async tools before resuming continuation', {
checkpointId: continuation.checkpointId,
pendingPromiseCount: localPendingPromises.length,
})
await Promise.allSettled(localPendingPromises)
continue
}
@@ -354,23 +336,18 @@ export async function orchestrateCopilotStream(
if (missingToolCallIds.length > 0) {
if (resumeRetries < 3) {
resumeRetries++
logger.info(
withLogContext('Retrying async resume after some tool calls were not yet ready'),
{
checkpointId: continuation.checkpointId,
runId: continuation.runId,
workerId: resumeWorkerId,
retry: resumeRetries,
missingToolCallIds,
}
)
reqLogger.info('Retrying async resume after some tool calls were not yet ready', {
checkpointId: continuation.checkpointId,
runId: continuation.runId,
workerId: resumeWorkerId,
retry: resumeRetries,
missingToolCallIds,
})
await new Promise((resolve) => setTimeout(resolve, 250 * resumeRetries))
continue
}
logger.error(
withLogContext(
'Async continuation failed because pending tool calls never became ready'
),
reqLogger.error(
'Async continuation failed because pending tool calls never became ready',
{
checkpointId: continuation.checkpointId,
runId: continuation.runId,
@@ -385,26 +362,20 @@ export async function orchestrateCopilotStream(
if (readyTools.length === 0) {
if (resumeRetries < 3 && continuation.pendingToolCallIds.length > 0) {
resumeRetries++
logger.info(
withLogContext('Retrying async resume because no tool calls were ready yet'),
{
checkpointId: continuation.checkpointId,
runId: continuation.runId,
workerId: resumeWorkerId,
retry: resumeRetries,
}
)
reqLogger.info('Retrying async resume because no tool calls were ready yet', {
checkpointId: continuation.checkpointId,
runId: continuation.runId,
workerId: resumeWorkerId,
retry: resumeRetries,
})
await new Promise((resolve) => setTimeout(resolve, 250 * resumeRetries))
continue
}
logger.error(
withLogContext('Async continuation failed because no tool calls were ready'),
{
checkpointId: continuation.checkpointId,
runId: continuation.runId,
requestedToolCallIds: continuation.pendingToolCallIds,
}
)
reqLogger.error('Async continuation failed because no tool calls were ready', {
checkpointId: continuation.checkpointId,
runId: continuation.runId,
requestedToolCallIds: continuation.pendingToolCallIds,
})
throw new Error('Failed to resume async tool continuation: no tool calls were ready')
}
@@ -425,16 +396,13 @@ export async function orchestrateCopilotStream(
if (claimFailures.length > 0) {
if (newlyClaimedToolCallIds.length > 0) {
logger.info(
withLogContext('Releasing async tool claims after claim contention during resume'),
{
checkpointId: continuation.checkpointId,
runId: continuation.runId,
workerId: resumeWorkerId,
newlyClaimedToolCallIds,
claimFailures,
}
)
reqLogger.info('Releasing async tool claims after claim contention during resume', {
checkpointId: continuation.checkpointId,
runId: continuation.runId,
workerId: resumeWorkerId,
newlyClaimedToolCallIds,
claimFailures,
})
await Promise.all(
newlyClaimedToolCallIds.map((toolCallId) =>
releaseCompletedAsyncToolClaim(toolCallId, resumeWorkerId).catch(() => null)
@@ -443,7 +411,7 @@ export async function orchestrateCopilotStream(
}
if (resumeRetries < 3) {
resumeRetries++
logger.error(withLogContext('Retrying async resume after claim contention'), {
reqLogger.info('Retrying async resume after claim contention', {
checkpointId: continuation.checkpointId,
runId: continuation.runId,
workerId: resumeWorkerId,
@@ -453,14 +421,11 @@ export async function orchestrateCopilotStream(
await new Promise((resolve) => setTimeout(resolve, 250 * resumeRetries))
continue
}
logger.error(
withLogContext('Async continuation failed because tool claims could not be acquired'),
{
checkpointId: continuation.checkpointId,
runId: continuation.runId,
claimFailures,
}
)
reqLogger.error('Async continuation failed because tool claims could not be acquired', {
checkpointId: continuation.checkpointId,
runId: continuation.runId,
claimFailures,
})
throw new Error(
`Failed to resume async tool continuation: unable to claim tool calls (${claimFailures.join(', ')})`
)
@@ -474,7 +439,7 @@ export async function orchestrateCopilotStream(
]
claimedByWorkerId = claimedToolCallIds.length > 0 ? resumeWorkerId : null
logger.error(withLogContext('Resuming async tool continuation'), {
reqLogger.info('Resuming async tool continuation', {
checkpointId: continuation.checkpointId,
runId: continuation.runId,
workerId: resumeWorkerId,
@@ -514,10 +479,8 @@ export async function orchestrateCopilotStream(
!isTerminalAsyncStatus(durableStatus) &&
!isDeliveredAsyncStatus(durableStatus)
) {
logger.warn(
withLogContext(
'Async tool row was claimed for resume without terminal durable state'
),
reqLogger.warn(
'Async tool row was claimed for resume without terminal durable state',
{
toolCallId: tool.toolCallId,
status: durableStatus,
@@ -540,7 +503,7 @@ export async function orchestrateCopilotStream(
checkpointId: continuation.checkpointId,
results,
}
logger.error(withLogContext('Prepared async continuation payload for resume endpoint'), {
reqLogger.info('Prepared async continuation payload for resume endpoint', {
route,
checkpointId: continuation.checkpointId,
resultCount: results.length,
@@ -550,7 +513,7 @@ export async function orchestrateCopilotStream(
}
if (!resumeReady) {
logger.warn(withLogContext('Async continuation loop exited without resume payload'), {
reqLogger.warn('Async continuation loop exited without resume payload', {
checkpointId: continuation.checkpointId,
runId: continuation.runId,
})
@@ -569,7 +532,7 @@ export async function orchestrateCopilotStream(
usage: context.usage,
cost: context.cost,
}
logger.error(withLogContext('Completing copilot orchestration'), {
reqLogger.info('Completing copilot orchestration', {
success: result.success,
chatId: result.chatId,
hasRequestId: Boolean(result.requestId),
@@ -581,7 +544,7 @@ export async function orchestrateCopilotStream(
} catch (error) {
const err = error instanceof Error ? error : new Error('Copilot orchestration failed')
if (claimedToolCallIds.length > 0 && claimedByWorkerId) {
logger.warn(withLogContext('Releasing async tool claims after delivery failure'), {
reqLogger.warn('Releasing async tool claims after delivery failure', {
toolCallIds: claimedToolCallIds,
workerId: claimedByWorkerId,
})
@@ -591,7 +554,7 @@ export async function orchestrateCopilotStream(
)
)
}
logger.error(withLogContext('Copilot orchestration failed'), {
reqLogger.error('Copilot orchestration failed', {
error: err.message,
})
await options.onError?.(err)
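The hunks above all make the same substitution: instead of wrapping every message in `appendCopilotLogContext(message, { requestId, messageId })`, a child logger is created once via `logger.withMetadata({ requestId, messageId })` and the request-scoped fields ride along on every call. A minimal sketch of that child-logger pattern — the `Logger` class and field-merging behavior here are assumptions for illustration, not the real `@sim/logger` implementation:

```typescript
type Fields = Record<string, unknown>

// Illustrative logger: withMetadata returns a child whose bound fields are
// merged into every entry, so call sites only pass call-specific fields.
class Logger {
  constructor(private base: Fields = {}) {}

  withMetadata(fields: Fields): Logger {
    return new Logger({ ...this.base, ...fields })
  }

  info(message: string, fields: Fields = {}): Fields {
    // A real logger would write to a sink; returning the merged entry
    // makes the pattern easy to inspect in this sketch.
    return { level: 'info', message, ...this.base, ...fields }
  }
}

const logger = new Logger()
const reqLogger = logger.withMetadata({ requestId: 'req-1', messageId: 'msg-1' })
const entry = reqLogger.info('Starting copilot orchestration', { route: '/chat' })
console.log(JSON.stringify(entry))
```

The payoff in the diff is visible at each call site: the repeated `{ requestId, messageId }` boilerplate disappears, and a forgotten context wrapper can no longer drop the correlation fields.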

View File

@@ -1,7 +1,6 @@
import { createLogger } from '@sim/logger'
import { upsertAsyncToolCall } from '@/lib/copilot/async-runs/repository'
import { STREAM_TIMEOUT_MS } from '@/lib/copilot/constants'
import { appendCopilotLogContext } from '@/lib/copilot/logging'
import {
asRecord,
getEventData,
@@ -83,15 +82,12 @@ function abortPendingToolIfStreamDead(
},
context.messageId
).catch((err) => {
logger.error(
appendCopilotLogContext('markToolComplete fire-and-forget failed (stream aborted)', {
messageId: context.messageId,
}),
{
logger
.withMetadata({ messageId: context.messageId })
.error('markToolComplete fire-and-forget failed (stream aborted)', {
toolCallId: toolCall.id,
error: err instanceof Error ? err.message : String(err),
}
)
})
})
return true
}
@@ -136,15 +132,12 @@ function handleClientCompletion(
{ background: true },
context.messageId
).catch((err) => {
logger.error(
appendCopilotLogContext('markToolComplete fire-and-forget failed (client background)', {
messageId: context.messageId,
}),
{
logger
.withMetadata({ messageId: context.messageId })
.error('markToolComplete fire-and-forget failed (client background)', {
toolCallId: toolCall.id,
error: err instanceof Error ? err.message : String(err),
}
)
})
})
markToolResultSeen(toolCallId)
return
@@ -160,15 +153,12 @@ function handleClientCompletion(
undefined,
context.messageId
).catch((err) => {
logger.error(
appendCopilotLogContext('markToolComplete fire-and-forget failed (client rejected)', {
messageId: context.messageId,
}),
{
logger
.withMetadata({ messageId: context.messageId })
.error('markToolComplete fire-and-forget failed (client rejected)', {
toolCallId: toolCall.id,
error: err instanceof Error ? err.message : String(err),
}
)
})
})
markToolResultSeen(toolCallId)
return
@@ -184,15 +174,12 @@ function handleClientCompletion(
completion.data,
context.messageId
).catch((err) => {
logger.error(
appendCopilotLogContext('markToolComplete fire-and-forget failed (client cancelled)', {
messageId: context.messageId,
}),
{
logger
.withMetadata({ messageId: context.messageId })
.error('markToolComplete fire-and-forget failed (client cancelled)', {
toolCallId: toolCall.id,
error: err instanceof Error ? err.message : String(err),
}
)
})
})
markToolResultSeen(toolCallId)
return
@@ -209,16 +196,13 @@ function handleClientCompletion(
completion?.data,
context.messageId
).catch((err) => {
logger.error(
appendCopilotLogContext('markToolComplete fire-and-forget failed (client completion)', {
messageId: context.messageId,
}),
{
logger
.withMetadata({ messageId: context.messageId })
.error('markToolComplete fire-and-forget failed (client completion)', {
toolCallId: toolCall.id,
toolName: toolCall.name,
error: err instanceof Error ? err.message : String(err),
}
)
})
})
markToolResultSeen(toolCallId)
}
@@ -252,16 +236,13 @@ async function emitSyntheticToolResult(
error: !success ? completion?.message : undefined,
} as SSEEvent)
} catch (error) {
logger.warn(
appendCopilotLogContext('Failed to emit synthetic tool_result', {
messageId: context.messageId,
}),
{
logger
.withMetadata({ messageId: context.messageId })
.warn('Failed to emit synthetic tool_result', {
toolCallId,
toolName,
error: error instanceof Error ? error.message : String(error),
}
)
})
}
}
@@ -328,17 +309,14 @@ export const sseHandlers: Record<string, SSEHandler> = {
const rid = typeof event.data === 'string' ? event.data : undefined
if (rid) {
context.requestId = rid
logger.error(
appendCopilotLogContext('Mapped copilot message to Go trace ID', {
messageId: context.messageId,
}),
{
logger
.withMetadata({ messageId: context.messageId })
.info('Mapped copilot message to Go trace ID', {
goTraceId: rid,
chatId: context.chatId,
executionId: context.executionId,
runId: context.runId,
}
)
})
}
},
title_updated: () => {},
@@ -485,29 +463,23 @@ export const sseHandlers: Record<string, SSEHandler> = {
args,
})
} catch (err) {
logger.warn(
appendCopilotLogContext('Failed to persist async tool row before execution', {
messageId: context.messageId,
}),
{
logger
.withMetadata({ messageId: context.messageId })
.warn('Failed to persist async tool row before execution', {
toolCallId,
toolName,
error: err instanceof Error ? err.message : String(err),
}
)
})
}
return executeToolAndReport(toolCallId, context, execContext, options)
})().catch((err) => {
logger.error(
appendCopilotLogContext('Parallel tool execution failed', {
messageId: context.messageId,
}),
{
logger
.withMetadata({ messageId: context.messageId })
.error('Parallel tool execution failed', {
toolCallId,
toolName,
error: err instanceof Error ? err.message : String(err),
}
)
})
return {
status: 'error',
message: err instanceof Error ? err.message : String(err),
@@ -546,16 +518,13 @@ export const sseHandlers: Record<string, SSEHandler> = {
args,
status: 'running',
}).catch((err) => {
logger.warn(
appendCopilotLogContext('Failed to persist async tool row for client-executable tool', {
messageId: context.messageId,
}),
{
logger
.withMetadata({ messageId: context.messageId })
.warn('Failed to persist async tool row for client-executable tool', {
toolCallId,
toolName,
error: err instanceof Error ? err.message : String(err),
}
)
})
})
const clientWaitSignal = buildClientToolAbortSignal(options)
const completion = await waitForToolCompletion(
@@ -746,29 +715,23 @@ export const subAgentHandlers: Record<string, SSEHandler> = {
args,
})
} catch (err) {
logger.warn(
appendCopilotLogContext('Failed to persist async subagent tool row before execution', {
messageId: context.messageId,
}),
{
logger
.withMetadata({ messageId: context.messageId })
.warn('Failed to persist async subagent tool row before execution', {
toolCallId,
toolName,
error: err instanceof Error ? err.message : String(err),
}
)
})
}
return executeToolAndReport(toolCallId, context, execContext, options)
})().catch((err) => {
logger.error(
appendCopilotLogContext('Parallel subagent tool execution failed', {
messageId: context.messageId,
}),
{
logger
.withMetadata({ messageId: context.messageId })
.error('Parallel subagent tool execution failed', {
toolCallId,
toolName,
error: err instanceof Error ? err.message : String(err),
}
)
})
return {
status: 'error',
message: err instanceof Error ? err.message : String(err),
@@ -802,17 +765,13 @@ export const subAgentHandlers: Record<string, SSEHandler> = {
args,
status: 'running',
}).catch((err) => {
logger.warn(
appendCopilotLogContext(
'Failed to persist async tool row for client-executable subagent tool',
{ messageId: context.messageId }
),
{
logger
.withMetadata({ messageId: context.messageId })
.warn('Failed to persist async tool row for client-executable subagent tool', {
toolCallId,
toolName,
error: err instanceof Error ? err.message : String(err),
}
)
})
})
const clientWaitSignal = buildClientToolAbortSignal(options)
const completion = await waitForToolCompletion(
@@ -881,15 +840,12 @@ export const subAgentHandlers: Record<string, SSEHandler> = {
export function handleSubagentRouting(event: SSEEvent, context: StreamingContext): boolean {
if (!event.subagent) return false
if (!context.subAgentParentToolCallId) {
logger.warn(
appendCopilotLogContext('Subagent event missing parent tool call', {
messageId: context.messageId,
}),
{
logger
.withMetadata({ messageId: context.messageId })
.warn('Subagent event missing parent tool call', {
type: event.type,
subagent: event.subagent,
}
)
})
return false
}
return true
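Many hunks in this file share a second pattern: a fire-and-forget call such as `markToolComplete(...)` whose returned promise is not awaited, with a `.catch` that logs the failure under per-message metadata so the rejection never becomes unhandled. A minimal sketch of that shape — `markToolComplete` and the failure sink here are illustrative stand-ins, not the real API:

```typescript
// Stand-in for the real backend call; rejects for one input so the
// .catch path is exercised.
async function markToolComplete(toolCallId: string): Promise<void> {
  if (toolCallId === 'bad') throw new Error('backend unavailable')
}

const failures: string[] = []

// Returns the settled chain only so this sketch can await it; the
// production code in the diff discards it, relying on .catch to absorb
// the rejection.
function reportCompletion(toolCallId: string, messageId: string): Promise<void> {
  return markToolComplete(toolCallId).catch((err) => {
    // Mirrors logger.withMetadata({ messageId }).error(...) in the diff.
    failures.push(
      `[${messageId}] markToolComplete failed: ${err instanceof Error ? err.message : String(err)}`
    )
  })
}

Promise.all([reportCompletion('ok', 'msg-1'), reportCompletion('bad', 'msg-1')]).then(() => {
  console.log(failures.join('\n'))
})
```

Without the `.catch`, a rejected fire-and-forget promise would surface as an unhandled rejection and, on current Node.js, crash the process — which is why every such call site in the diff attaches one.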

View File

@@ -3,7 +3,6 @@ import { userTableRows } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import { completeAsyncToolCall, markAsyncToolRunning } from '@/lib/copilot/async-runs/repository'
import { appendCopilotLogContext } from '@/lib/copilot/logging'
import { waitForToolConfirmation } from '@/lib/copilot/orchestrator/persistence'
import { asRecord, markToolResultSeen } from '@/lib/copilot/orchestrator/sse/utils'
import { executeToolServerSide, markToolComplete } from '@/lib/copilot/orchestrator/tool-executor'
@@ -187,15 +186,12 @@ async function maybeWriteOutputToFile(
contentType
)
logger.error(
appendCopilotLogContext('Tool output written to file', { messageId: context.messageId }),
{
toolName,
fileName,
size: buffer.length,
fileId: uploaded.id,
}
)
logger.withMetadata({ messageId: context.messageId }).info('Tool output written to file', {
toolName,
fileName,
size: buffer.length,
fileId: uploaded.id,
})
return {
success: true,
@@ -209,16 +205,13 @@ async function maybeWriteOutputToFile(
}
} catch (err) {
const message = err instanceof Error ? err.message : String(err)
logger.warn(
appendCopilotLogContext('Failed to write tool output to file', {
messageId: context.messageId,
}),
{
logger
.withMetadata({ messageId: context.messageId })
.warn('Failed to write tool output to file', {
toolName,
outputPath,
error: message,
}
)
})
return {
success: false,
error: `Failed to write output file: ${message}`,
@@ -306,7 +299,7 @@ function reportCancelledTool(
data: Record<string, unknown> = { cancelled: true }
): void {
markToolComplete(toolCall.id, toolCall.name, 499, message, data, messageId).catch((err) => {
logger.error(appendCopilotLogContext('markToolComplete failed (cancelled)', { messageId }), {
logger.withMetadata({ messageId }).error('markToolComplete failed (cancelled)', {
toolCallId: toolCall.id,
toolName: toolCall.name,
error: err instanceof Error ? err.message : String(err),
@@ -401,14 +394,11 @@ async function maybeWriteOutputToTable(
}
})
logger.error(
appendCopilotLogContext('Tool output written to table', { messageId: context.messageId }),
{
toolName,
tableId: outputTable,
rowCount: rows.length,
}
)
logger.withMetadata({ messageId: context.messageId }).info('Tool output written to table', {
toolName,
tableId: outputTable,
rowCount: rows.length,
})
return {
success: true,
@@ -419,16 +409,13 @@ async function maybeWriteOutputToTable(
},
}
} catch (err) {
logger.warn(
appendCopilotLogContext('Failed to write tool output to table', {
messageId: context.messageId,
}),
{
logger
.withMetadata({ messageId: context.messageId })
.warn('Failed to write tool output to table', {
toolName,
outputTable,
error: err instanceof Error ? err.message : String(err),
}
)
})
return {
success: false,
error: `Failed to write to table: ${err instanceof Error ? err.message : String(err)}`,
@@ -528,16 +515,13 @@ async function maybeWriteReadCsvToTable(
}
})
logger.error(
appendCopilotLogContext('Read output written to table', { messageId: context.messageId }),
{
toolName,
tableId: outputTable,
tableName: table.name,
rowCount: rows.length,
filePath,
}
)
logger.withMetadata({ messageId: context.messageId }).info('Read output written to table', {
toolName,
tableId: outputTable,
tableName: table.name,
rowCount: rows.length,
filePath,
})
return {
success: true,
@@ -549,16 +533,13 @@ async function maybeWriteReadCsvToTable(
},
}
} catch (err) {
logger.warn(
appendCopilotLogContext('Failed to write read output to table', {
messageId: context.messageId,
}),
{
logger
.withMetadata({ messageId: context.messageId })
.warn('Failed to write read output to table', {
toolName,
outputTable,
error: err instanceof Error ? err.message : String(err),
}
)
})
return {
success: false,
error: `Failed to import into table: ${err instanceof Error ? err.message : String(err)}`,
@@ -599,14 +580,11 @@ export async function executeToolAndReport(
toolCall.status = 'executing'
await markAsyncToolRunning(toolCall.id, 'sim-stream').catch(() => {})
logger.error(
appendCopilotLogContext('Tool execution started', { messageId: context.messageId }),
{
toolCallId: toolCall.id,
toolName: toolCall.name,
params: toolCall.params,
}
)
logger.withMetadata({ messageId: context.messageId }).info('Tool execution started', {
toolCallId: toolCall.id,
toolName: toolCall.name,
params: toolCall.params,
})
try {
let result = await executeToolServerSide(toolCall, execContext)
@@ -693,24 +671,18 @@ export async function executeToolAndReport(
: raw && typeof raw === 'object'
? JSON.stringify(raw).slice(0, 200)
: undefined
logger.error(
appendCopilotLogContext('Tool execution succeeded', { messageId: context.messageId }),
{
toolCallId: toolCall.id,
toolName: toolCall.name,
outputPreview: preview,
}
)
logger.withMetadata({ messageId: context.messageId }).info('Tool execution succeeded', {
toolCallId: toolCall.id,
toolName: toolCall.name,
outputPreview: preview,
})
} else {
logger.warn(
appendCopilotLogContext('Tool execution failed', { messageId: context.messageId }),
{
toolCallId: toolCall.id,
toolName: toolCall.name,
error: result.error,
params: toolCall.params,
}
)
logger.withMetadata({ messageId: context.messageId }).warn('Tool execution failed', {
toolCallId: toolCall.id,
toolName: toolCall.name,
error: result.error,
params: toolCall.params,
})
}
// If create_workflow was successful, update the execution context with the new workflowId.
@@ -760,16 +732,13 @@ export async function executeToolAndReport(
result.output,
context.messageId
).catch((err) => {
logger.error(
appendCopilotLogContext('markToolComplete fire-and-forget failed', {
messageId: context.messageId,
}),
{
logger
.withMetadata({ messageId: context.messageId })
.error('markToolComplete fire-and-forget failed', {
toolCallId: toolCall.id,
toolName: toolCall.name,
error: err instanceof Error ? err.message : String(err),
}
)
})
})
const resultEvent: SSEEvent = {
@@ -804,15 +773,12 @@ export async function executeToolAndReport(
if (deleted.length > 0) {
isDeleteOp = true
removeChatResources(execContext.chatId, deleted).catch((err) => {
logger.warn(
appendCopilotLogContext('Failed to remove chat resources after deletion', {
messageId: context.messageId,
}),
{
logger
.withMetadata({ messageId: context.messageId })
.warn('Failed to remove chat resources after deletion', {
chatId: execContext.chatId,
error: err instanceof Error ? err.message : String(err),
}
)
})
})
for (const resource of deleted) {
@@ -835,15 +801,12 @@ export async function executeToolAndReport(
if (resources.length > 0) {
persistChatResources(execContext.chatId, resources).catch((err) => {
logger.warn(
appendCopilotLogContext('Failed to persist chat resources', {
messageId: context.messageId,
}),
{
logger
.withMetadata({ messageId: context.messageId })
.warn('Failed to persist chat resources', {
chatId: execContext.chatId,
error: err instanceof Error ? err.message : String(err),
}
)
})
})
for (const resource of resources) {
@@ -879,15 +842,12 @@ export async function executeToolAndReport(
toolCall.error = error instanceof Error ? error.message : String(error)
toolCall.endTime = Date.now()
logger.error(
appendCopilotLogContext('Tool execution threw', { messageId: context.messageId }),
{
toolCallId: toolCall.id,
toolName: toolCall.name,
error: toolCall.error,
params: toolCall.params,
}
)
logger.withMetadata({ messageId: context.messageId }).error('Tool execution threw', {
toolCallId: toolCall.id,
toolName: toolCall.name,
error: toolCall.error,
params: toolCall.params,
})
markToolResultSeen(toolCall.id)
await completeAsyncToolCall({
@@ -909,16 +869,13 @@ export async function executeToolAndReport(
},
context.messageId
).catch((err) => {
-    logger.error(
-      appendCopilotLogContext('markToolComplete fire-and-forget failed', {
-        messageId: context.messageId,
-      }),
-      {
+    logger
+      .withMetadata({ messageId: context.messageId })
+      .error('markToolComplete fire-and-forget failed', {
        toolCallId: toolCall.id,
        toolName: toolCall.name,
        error: err instanceof Error ? err.message : String(err),
-      }
-    )
+      })
})
const errorEvent: SSEEvent = {


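The hunks above migrate call sites from wrapping messages in `appendCopilotLogContext(...)` to chaining `.withMetadata({ messageId })` before the log call. A minimal sketch of how such a chainable logger could work — the real `@sim/logger` internals are assumed, and returning the structured entry is purely for illustration:

```typescript
type Meta = Record<string, unknown>

class Logger {
  private base: Meta

  constructor(base: Meta = {}) {
    this.base = base
  }

  // Returns a child logger whose metadata is merged into every entry it emits.
  withMetadata(meta: Meta): Logger {
    return new Logger({ ...this.base, ...meta })
  }

  // Returns the structured entry so the sketch is easy to inspect; a real
  // logger would write it to a transport instead.
  warn(message: string, fields: Meta = {}): Meta {
    return { level: 'warn', message, ...this.base, ...fields }
  }
}

const logger = new Logger()
const entry = logger
  .withMetadata({ messageId: 'msg_123' })
  .warn('Failed to forward SSE event', { type: 'tool_call' })
// entry carries messageId alongside the per-call fields
```

The benefit the diff exploits: call sites pass one flat fields object instead of threading context through a helper on every call.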
@@ -2,7 +2,6 @@ import { createLogger } from '@sim/logger'
import { getHighestPrioritySubscription } from '@/lib/billing/core/plan'
import { isPaid } from '@/lib/billing/plan-helpers'
import { ORCHESTRATION_TIMEOUT_MS } from '@/lib/copilot/constants'
-import { appendCopilotLogContext } from '@/lib/copilot/logging'
import {
handleSubagentRouting,
sseHandlers,
@@ -165,13 +164,10 @@ export async function runStreamLoop(
try {
await options.onEvent?.(normalizedEvent)
} catch (error) {
-      logger.warn(
-        appendCopilotLogContext('Failed to forward SSE event', { messageId: context.messageId }),
-        {
-          type: normalizedEvent.type,
-          error: error instanceof Error ? error.message : String(error),
-        }
-      )
+      logger.withMetadata({ messageId: context.messageId }).warn('Failed to forward SSE event', {
+        type: normalizedEvent.type,
+        error: error instanceof Error ? error.message : String(error),
+      })
}
// Let the caller intercept before standard dispatch.
@@ -205,11 +201,9 @@ export async function runStreamLoop(
if (context.subAgentParentStack.length > 0) {
context.subAgentParentStack.pop()
} else {
-        logger.warn(
-          appendCopilotLogContext('subagent_end without matching subagent_start', {
-            messageId: context.messageId,
-          })
-        )
+        logger
+          .withMetadata({ messageId: context.messageId })
+          .warn('subagent_end without matching subagent_start')
}
context.subAgentParentToolCallId =
context.subAgentParentStack.length > 0


@@ -2,13 +2,14 @@ import { db } from '@sim/db'
import { permissions, workspace } from '@sim/db/schema'
import { and, desc, eq, isNull } from 'drizzle-orm'
import { authorizeWorkflowByWorkspacePermission, type getWorkflowById } from '@/lib/workflows/utils'
-import { checkWorkspaceAccess } from '@/lib/workspaces/permissions/utils'
+import { checkWorkspaceAccess, getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
type WorkflowRecord = NonNullable<Awaited<ReturnType<typeof getWorkflowById>>>
export async function ensureWorkflowAccess(
workflowId: string,
-  userId: string
+  userId: string,
+  action: 'read' | 'write' | 'admin' = 'read'
): Promise<{
workflow: WorkflowRecord
workspaceId?: string | null
@@ -16,7 +17,7 @@ export async function ensureWorkflowAccess(
const result = await authorizeWorkflowByWorkspacePermission({
workflowId,
userId,
-    action: 'read',
+    action,
})
if (!result.workflow) {
@@ -56,25 +57,25 @@ export async function getDefaultWorkspaceId(userId: string): Promise<string> {
export async function ensureWorkspaceAccess(
workspaceId: string,
userId: string,
-  requireWrite: boolean
+  level: 'read' | 'write' | 'admin' = 'read'
): Promise<void> {
const access = await checkWorkspaceAccess(workspaceId, userId)
if (!access.exists || !access.hasAccess) {
throw new Error(`Workspace ${workspaceId} not found`)
}
-  const permissionType = access.canWrite
-    ? 'write'
-    : access.workspace?.ownerId === userId
-      ? 'admin'
-      : 'read'
-  const canWrite = permissionType === 'admin' || permissionType === 'write'
+  if (level === 'read') return
-  if (requireWrite && !canWrite) {
+  if (level === 'admin') {
+    if (access.workspace?.ownerId === userId) return
+    const perm = await getUserEntityPermissions(userId, 'workspace', workspaceId)
+    if (perm !== 'admin') {
+      throw new Error('Admin access required for this workspace')
+    }
+    return
+  }
+  if (!access.canWrite) {
    throw new Error('Write or admin access required for this workspace')
  }
-  if (!requireWrite && !canWrite && permissionType !== 'read') {
-    throw new Error('Access denied to workspace')
-  }
}


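The `ensureWorkspaceAccess` change above replaces a boolean `requireWrite` flag with a three-tier level. A self-contained sketch of the resulting semantics, with the `checkWorkspaceAccess` / `getUserEntityPermissions` lookups stubbed into a plain struct since they live elsewhere in the codebase:

```typescript
type Level = 'read' | 'write' | 'admin'

// Stand-in for the resolved access record; field names here are illustrative.
interface WorkspaceAccess {
  hasAccess: boolean
  canWrite: boolean
  ownerId: string | null
  explicitPermission: Level | null
}

function ensureAccess(access: WorkspaceAccess, userId: string, level: Level = 'read'): void {
  if (!access.hasAccess) throw new Error('Workspace not found')
  // 'read' is satisfied by any membership.
  if (level === 'read') return
  if (level === 'admin') {
    // Workspace owners pass implicitly; everyone else needs an explicit admin grant.
    if (access.ownerId === userId) return
    if (access.explicitPermission !== 'admin') {
      throw new Error('Admin access required for this workspace')
    }
    return
  }
  // level === 'write'
  if (!access.canWrite) {
    throw new Error('Write or admin access required for this workspace')
  }
}
```

Callers such as `executeDeployApi` can then express intent directly (`'admin'` for deploys, `'write'` for edits) instead of encoding it in a boolean.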
@@ -2,15 +2,18 @@ import crypto from 'crypto'
import { db } from '@sim/db'
import { chat, workflowMcpTool } from '@sim/db/schema'
import { and, eq, isNull } from 'drizzle-orm'
import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import type { ExecutionContext, ToolCallResult } from '@/lib/copilot/orchestrator/types'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { mcpPubSub } from '@/lib/mcp/pubsub'
-import {
-  generateParameterSchemaForWorkflow,
-  removeMcpToolsForWorkflow,
-} from '@/lib/mcp/workflow-mcp-sync'
+import { generateParameterSchemaForWorkflow } from '@/lib/mcp/workflow-mcp-sync'
import { sanitizeToolName } from '@/lib/mcp/workflow-tool-schema'
-import { deployWorkflow, undeployWorkflow } from '@/lib/workflows/persistence/utils'
+import {
+  performChatDeploy,
+  performChatUndeploy,
+  performFullDeploy,
+  performFullUndeploy,
+} from '@/lib/workflows/orchestration'
import { checkChatAccess, checkWorkflowAccessForChatCreation } from '@/app/api/chat/utils'
import { ensureWorkflowAccess } from '../access'
import type { DeployApiParams, DeployChatParams, DeployMcpParams } from '../param-types'
@@ -25,20 +28,23 @@ export async function executeDeployApi(
return { success: false, error: 'workflowId is required' }
}
const action = params.action === 'undeploy' ? 'undeploy' : 'deploy'
-  const { workflow: workflowRecord } = await ensureWorkflowAccess(workflowId, context.userId)
+  const { workflow: workflowRecord } = await ensureWorkflowAccess(
+    workflowId,
+    context.userId,
+    'admin'
+  )
  if (action === 'undeploy') {
-    const result = await undeployWorkflow({ workflowId })
+    const result = await performFullUndeploy({ workflowId, userId: context.userId })
    if (!result.success) {
      return { success: false, error: result.error || 'Failed to undeploy workflow' }
    }
-    await removeMcpToolsForWorkflow(workflowId, crypto.randomUUID().slice(0, 8))
    return { success: true, output: { workflowId, isDeployed: false } }
  }
}
-  const result = await deployWorkflow({
+  const result = await performFullDeploy({
    workflowId,
-    deployedBy: context.userId,
+    userId: context.userId,
+    workflowName: workflowRecord.name || undefined,
})
if (!result.success) {
@@ -82,11 +88,21 @@ export async function executeDeployChat(
if (!existing.length) {
return { success: false, error: 'No active chat deployment found for this workflow' }
}
-      const { hasAccess } = await checkChatAccess(existing[0].id, context.userId)
+      const { hasAccess, workspaceId: chatWorkspaceId } = await checkChatAccess(
+        existing[0].id,
+        context.userId
+      )
if (!hasAccess) {
return { success: false, error: 'Unauthorized chat access' }
}
-      await db.delete(chat).where(eq(chat.id, existing[0].id))
+      const undeployResult = await performChatUndeploy({
+        chatId: existing[0].id,
+        userId: context.userId,
+        workspaceId: chatWorkspaceId,
+      })
+      if (!undeployResult.success) {
+        return { success: false, error: undeployResult.error || 'Failed to undeploy chat' }
+      }
return {
success: true,
output: {
@@ -99,17 +115,19 @@ export async function executeDeployChat(
}
}
-  const { hasAccess } = await checkWorkflowAccessForChatCreation(workflowId, context.userId)
-  if (!hasAccess) {
+  const { hasAccess, workflow: workflowRecord } = await checkWorkflowAccessForChatCreation(
+    workflowId,
+    context.userId
+  )
+  if (!hasAccess || !workflowRecord) {
return { success: false, error: 'Workflow not found or access denied' }
}
-  const existing = await db
+  const [existingDeployment] = await db
    .select()
    .from(chat)
    .where(and(eq(chat.workflowId, workflowId), isNull(chat.archivedAt)))
    .limit(1)
-  const existingDeployment = existing[0] || null
const identifier = String(params.identifier || existingDeployment?.identifier || '').trim()
const title = String(params.title || existingDeployment?.title || '').trim()
@@ -134,21 +152,14 @@ export async function executeDeployChat(
return { success: false, error: 'Identifier already in use' }
}
-  const deployResult = await deployWorkflow({
-    workflowId,
-    deployedBy: context.userId,
-  })
-  if (!deployResult.success) {
-    return { success: false, error: deployResult.error || 'Failed to deploy workflow' }
-  }
  const existingCustomizations =
    (existingDeployment?.customizations as
      | { primaryColor?: string; welcomeMessage?: string }
      | undefined) || {}
-  const payload = {
+  const result = await performChatDeploy({
+    workflowId,
+    userId: context.userId,
    identifier,
    title,
    description: String(params.description || existingDeployment?.description || ''),
@@ -162,46 +173,22 @@ export async function executeDeployChat(
existingCustomizations.welcomeMessage ||
'Hi there! How can I help you today?',
},
-    authType: params.authType || existingDeployment?.authType || 'public',
+    authType: (params.authType || existingDeployment?.authType || 'public') as
+      | 'public'
+      | 'password'
+      | 'email'
+      | 'sso',
    password: params.password,
-    allowedEmails: params.allowedEmails || existingDeployment?.allowedEmails || [],
-    outputConfigs: params.outputConfigs || existingDeployment?.outputConfigs || [],
-  }
+    allowedEmails: params.allowedEmails || (existingDeployment?.allowedEmails as string[]) || [],
+    outputConfigs: (params.outputConfigs || existingDeployment?.outputConfigs || []) as Array<{
+      blockId: string
+      path: string
+    }>,
+    workspaceId: workflowRecord.workspaceId,
+  })
-  if (existingDeployment) {
-    await db
-      .update(chat)
-      .set({
-        identifier: payload.identifier,
-        title: payload.title,
-        description: payload.description,
-        customizations: payload.customizations,
-        authType: payload.authType,
-        password: payload.password || existingDeployment.password,
-        allowedEmails:
-          payload.authType === 'email' || payload.authType === 'sso' ? payload.allowedEmails : [],
-        outputConfigs: payload.outputConfigs,
-        updatedAt: new Date(),
-      })
-      .where(eq(chat.id, existingDeployment.id))
-  } else {
-    await db.insert(chat).values({
-      id: crypto.randomUUID(),
-      workflowId,
-      userId: context.userId,
-      identifier: payload.identifier,
-      title: payload.title,
-      description: payload.description,
-      customizations: payload.customizations,
-      isActive: true,
-      authType: payload.authType,
-      password: payload.password || null,
-      allowedEmails:
-        payload.authType === 'email' || payload.authType === 'sso' ? payload.allowedEmails : [],
-      outputConfigs: payload.outputConfigs,
-      createdAt: new Date(),
-      updatedAt: new Date(),
-    })
+  if (!result.success) {
+    return { success: false, error: result.error || 'Failed to deploy chat' }
  }
const baseUrl = getBaseUrl()
@@ -214,7 +201,7 @@ export async function executeDeployChat(
isDeployed: true,
isChatDeployed: true,
identifier,
-        chatUrl: `${baseUrl}/chat/${identifier}`,
+        chatUrl: result.chatUrl,
apiEndpoint: `${baseUrl}/api/workflows/${workflowId}/run`,
baseUrl,
},
@@ -234,7 +221,11 @@ export async function executeDeployMcp(
return { success: false, error: 'workflowId is required' }
}
-  const { workflow: workflowRecord } = await ensureWorkflowAccess(workflowId, context.userId)
+  const { workflow: workflowRecord } = await ensureWorkflowAccess(
+    workflowId,
+    context.userId,
+    'admin'
+  )
const workspaceId = workflowRecord.workspaceId
if (!workspaceId) {
return { success: false, error: 'workspaceId is required' }
@@ -263,8 +254,15 @@ export async function executeDeployMcp(
mcpPubSub?.publishWorkflowToolsChanged({ serverId, workspaceId })
// Intentionally omits `isDeployed` — removing from an MCP server does not
// affect the workflow's API deployment.
recordAudit({
workspaceId,
actorId: context.userId,
action: AuditAction.MCP_SERVER_REMOVED,
resourceType: AuditResourceType.MCP_SERVER,
resourceId: serverId,
description: `Undeployed workflow "${workflowId}" from MCP server`,
})
return {
success: true,
output: { workflowId, serverId, action: 'undeploy', removed: true },
@@ -319,6 +317,15 @@ export async function executeDeployMcp(
mcpPubSub?.publishWorkflowToolsChanged({ serverId, workspaceId })
recordAudit({
workspaceId,
actorId: context.userId,
action: AuditAction.MCP_SERVER_UPDATED,
resourceType: AuditResourceType.MCP_SERVER,
resourceId: serverId,
description: `Updated MCP tool "${toolName}" on server`,
})
return {
success: true,
output: { toolId, toolName, toolDescription, updated: true, mcpServerUrl, baseUrl },
@@ -339,6 +346,15 @@ export async function executeDeployMcp(
mcpPubSub?.publishWorkflowToolsChanged({ serverId, workspaceId })
recordAudit({
workspaceId,
actorId: context.userId,
action: AuditAction.MCP_SERVER_ADDED,
resourceType: AuditResourceType.MCP_SERVER,
resourceId: serverId,
description: `Deployed workflow as MCP tool "${toolName}"`,
})
return {
success: true,
output: { toolId, toolName, toolDescription, updated: false, mcpServerUrl, baseUrl },
@@ -357,9 +373,9 @@ export async function executeRedeploy(
if (!workflowId) {
return { success: false, error: 'workflowId is required' }
}
-  await ensureWorkflowAccess(workflowId, context.userId)
+  await ensureWorkflowAccess(workflowId, context.userId, 'admin')
-  const result = await deployWorkflow({ workflowId, deployedBy: context.userId })
+  const result = await performFullDeploy({ workflowId, userId: context.userId })
if (!result.success) {
return { success: false, error: result.error || 'Failed to redeploy workflow' }
}


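The deploy handlers above now fire `recordAudit(...)` without awaiting or wrapping it in try/catch. A hedged sketch of that fire-and-forget contract — the real `recordAudit` writes asynchronously to an audit table and the `AuditAction`/`AuditResourceType` enums are assumed, so plain strings stand in here:

```typescript
interface AuditEntry {
  workspaceId?: string | null
  actorId: string
  action: string
  resourceType: string
  resourceId?: string
  resourceName?: string
  description?: string
}

const auditLog: AuditEntry[] = []

// Fire-and-forget: callers never await this and it must never throw,
// so a failed audit write cannot fail the tool call it decorates.
function recordAudit(entry: AuditEntry): void {
  try {
    auditLog.push(entry) // stand-in for the real async DB insert
  } catch {
    // swallow; audit logging is best-effort
  }
}

recordAudit({
  actorId: 'user_1',
  action: 'MCP_SERVER_ADDED', // stand-in for AuditAction.MCP_SERVER_ADDED
  resourceType: 'mcp_server',
  resourceId: 'srv_1',
  description: 'Deployed workflow as MCP tool',
})
```

This is why the diff can sprinkle `recordAudit` into success paths without changing any handler's error handling.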
@@ -8,12 +8,13 @@ import {
workflowMcpTool,
} from '@sim/db/schema'
import { and, eq, inArray, isNull } from 'drizzle-orm'
import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import type { ExecutionContext, ToolCallResult } from '@/lib/copilot/orchestrator/types'
import { mcpPubSub } from '@/lib/mcp/pubsub'
import { generateParameterSchemaForWorkflow } from '@/lib/mcp/workflow-mcp-sync'
import { sanitizeToolName } from '@/lib/mcp/workflow-tool-schema'
import { hasValidStartBlock } from '@/lib/workflows/triggers/trigger-utils.server'
-import { ensureWorkflowAccess } from '../access'
+import { ensureWorkflowAccess, ensureWorkspaceAccess } from '../access'
import type {
CheckDeploymentStatusParams,
CreateWorkspaceMcpServerParams,
@@ -182,7 +183,11 @@ export async function executeCreateWorkspaceMcpServer(
if (!workflowId) {
return { success: false, error: 'workflowId is required' }
}
-  const { workflow: workflowRecord } = await ensureWorkflowAccess(workflowId, context.userId)
+  const { workflow: workflowRecord } = await ensureWorkflowAccess(
+    workflowId,
+    context.userId,
+    'write'
+  )
const workspaceId = workflowRecord.workspaceId
if (!workspaceId) {
return { success: false, error: 'workspaceId is required' }
@@ -242,6 +247,16 @@ export async function executeCreateWorkspaceMcpServer(
mcpPubSub?.publishWorkflowToolsChanged({ serverId, workspaceId })
}
recordAudit({
workspaceId,
actorId: context.userId,
action: AuditAction.MCP_SERVER_ADDED,
resourceType: AuditResourceType.MCP_SERVER,
resourceId: serverId,
resourceName: name,
description: `Created MCP server "${name}"`,
})
return { success: true, output: { server, addedTools } }
} catch (error) {
return { success: false, error: error instanceof Error ? error.message : String(error) }
@@ -277,7 +292,10 @@ export async function executeUpdateWorkspaceMcpServer(
}
const [existing] = await db
-      .select({ id: workflowMcpServer.id, createdBy: workflowMcpServer.createdBy })
+      .select({
+        id: workflowMcpServer.id,
+        workspaceId: workflowMcpServer.workspaceId,
+      })
.from(workflowMcpServer)
.where(eq(workflowMcpServer.id, serverId))
.limit(1)
@@ -286,8 +304,18 @@ export async function executeUpdateWorkspaceMcpServer(
return { success: false, error: 'MCP server not found' }
}
await ensureWorkspaceAccess(existing.workspaceId, context.userId, 'write')
await db.update(workflowMcpServer).set(updates).where(eq(workflowMcpServer.id, serverId))
recordAudit({
actorId: context.userId,
action: AuditAction.MCP_SERVER_UPDATED,
resourceType: AuditResourceType.MCP_SERVER,
resourceId: params.serverId,
description: `Updated MCP server`,
})
return { success: true, output: { serverId, ...updates, updatedAt: undefined } }
} catch (error) {
return { success: false, error: error instanceof Error ? error.message : String(error) }
@@ -318,10 +346,20 @@ export async function executeDeleteWorkspaceMcpServer(
return { success: false, error: 'MCP server not found' }
}
await ensureWorkspaceAccess(existing.workspaceId, context.userId, 'admin')
await db.delete(workflowMcpServer).where(eq(workflowMcpServer.id, serverId))
mcpPubSub?.publishWorkflowToolsChanged({ serverId, workspaceId: existing.workspaceId })
recordAudit({
actorId: context.userId,
action: AuditAction.MCP_SERVER_REMOVED,
resourceType: AuditResourceType.MCP_SERVER,
resourceId: params.serverId,
description: `Deleted MCP server`,
})
return { success: true, output: { serverId, name: existing.name, deleted: true } }
} catch (error) {
return { success: false, error: error instanceof Error ? error.message : String(error) }
@@ -379,7 +417,7 @@ export async function executeRevertToVersion(
return { success: false, error: 'version is required' }
}
-    await ensureWorkflowAccess(workflowId, context.userId)
+    await ensureWorkflowAccess(workflowId, context.userId, 'admin')
const baseUrl =
process.env.NEXT_PUBLIC_APP_URL || process.env.APP_URL || 'http://localhost:3000'


@@ -2,8 +2,8 @@ import { db } from '@sim/db'
import { credential, mcpServers, pendingCredentialDraft, user } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, isNull, lt } from 'drizzle-orm'
+import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
 import { SIM_AGENT_API_URL } from '@/lib/copilot/constants'
-import { appendCopilotLogContext } from '@/lib/copilot/logging'
import type {
ExecutionContext,
ToolCallResult,
@@ -232,6 +232,16 @@ async function executeManageCustomTool(
})
const created = resultTools.find((tool) => tool.title === title)
recordAudit({
workspaceId,
actorId: context.userId,
action: AuditAction.CUSTOM_TOOL_CREATED,
resourceType: AuditResourceType.CUSTOM_TOOL,
resourceId: created?.id,
resourceName: title,
description: `Created custom tool "${title}"`,
})
return {
success: true,
output: {
@@ -280,6 +290,16 @@ async function executeManageCustomTool(
userId: context.userId,
})
recordAudit({
workspaceId,
actorId: context.userId,
action: AuditAction.CUSTOM_TOOL_UPDATED,
resourceType: AuditResourceType.CUSTOM_TOOL,
resourceId: params.toolId,
resourceName: title,
description: `Updated custom tool "${title}"`,
})
return {
success: true,
output: {
@@ -306,6 +326,15 @@ async function executeManageCustomTool(
return { success: false, error: `Custom tool not found: ${params.toolId}` }
}
recordAudit({
workspaceId,
actorId: context.userId,
action: AuditAction.CUSTOM_TOOL_DELETED,
resourceType: AuditResourceType.CUSTOM_TOOL,
resourceId: params.toolId,
description: 'Deleted custom tool',
})
return {
success: true,
output: {
@@ -322,17 +351,14 @@ async function executeManageCustomTool(
error: `Unsupported operation for manage_custom_tool: ${operation}`,
}
} catch (error) {
-    logger.error(
-      appendCopilotLogContext('manage_custom_tool execution failed', {
-        messageId: context.messageId,
-      }),
-      {
+    logger
+      .withMetadata({ messageId: context.messageId })
+      .error('manage_custom_tool execution failed', {
        operation,
        workspaceId,
        userId: context.userId,
        error: error instanceof Error ? error.message : String(error),
-      }
-    )
+      })
return {
success: false,
error: error instanceof Error ? error.message : 'Failed to manage custom tool',
@@ -465,6 +491,18 @@ async function executeManageMcpTool(
await mcpService.clearCache(workspaceId)
recordAudit({
workspaceId,
actorId: context.userId,
action: AuditAction.MCP_SERVER_ADDED,
resourceType: AuditResourceType.MCP_SERVER,
resourceId: serverId,
resourceName: config.name,
description: existing
? `Updated existing MCP server "${config.name}"`
: `Added MCP server "${config.name}"`,
})
return {
success: true,
output: {
@@ -518,6 +556,15 @@ async function executeManageMcpTool(
await mcpService.clearCache(workspaceId)
recordAudit({
workspaceId,
actorId: context.userId,
action: AuditAction.MCP_SERVER_UPDATED,
resourceType: AuditResourceType.MCP_SERVER,
resourceId: params.serverId,
description: `Updated MCP server "${updated.name}"`,
})
return {
success: true,
output: {
@@ -531,6 +578,13 @@ async function executeManageMcpTool(
}
if (operation === 'delete') {
if (context.userPermission && context.userPermission !== 'admin') {
return {
success: false,
error: `Permission denied: 'delete' on manage_mcp_tool requires admin access. You have '${context.userPermission}' permission.`,
}
}
if (!params.serverId) {
return { success: false, error: "'serverId' is required for 'delete'" }
}
@@ -546,6 +600,15 @@ async function executeManageMcpTool(
await mcpService.clearCache(workspaceId)
recordAudit({
workspaceId,
actorId: context.userId,
action: AuditAction.MCP_SERVER_REMOVED,
resourceType: AuditResourceType.MCP_SERVER,
resourceId: params.serverId,
description: `Deleted MCP server "${deleted.name}"`,
})
return {
success: true,
output: {
@@ -559,16 +622,13 @@ async function executeManageMcpTool(
return { success: false, error: `Unsupported operation for manage_mcp_tool: ${operation}` }
} catch (error) {
-    logger.error(
-      appendCopilotLogContext('manage_mcp_tool execution failed', {
-        messageId: context.messageId,
-      }),
-      {
+    logger
+      .withMetadata({ messageId: context.messageId })
+      .error('manage_mcp_tool execution failed', {
        operation,
        workspaceId,
        error: error instanceof Error ? error.message : String(error),
-      }
-    )
+      })
return {
success: false,
error: error instanceof Error ? error.message : 'Failed to manage MCP server',
@@ -650,6 +710,16 @@ async function executeManageSkill(
})
const created = resultSkills.find((s) => s.name === params.name)
recordAudit({
workspaceId,
actorId: context.userId,
action: AuditAction.SKILL_CREATED,
resourceType: AuditResourceType.SKILL,
resourceId: created?.id,
resourceName: params.name,
description: `Created skill "${params.name}"`,
})
return {
success: true,
output: {
@@ -692,14 +762,26 @@ async function executeManageSkill(
userId: context.userId,
})
const updatedName = params.name || found.name
recordAudit({
workspaceId,
actorId: context.userId,
action: AuditAction.SKILL_UPDATED,
resourceType: AuditResourceType.SKILL,
resourceId: params.skillId,
resourceName: updatedName,
description: `Updated skill "${updatedName}"`,
})
return {
success: true,
output: {
success: true,
operation,
skillId: params.skillId,
-          name: params.name || found.name,
-          message: `Updated skill "${params.name || found.name}"`,
+          name: updatedName,
+          message: `Updated skill "${updatedName}"`,
},
}
}
@@ -714,6 +796,15 @@ async function executeManageSkill(
return { success: false, error: `Skill not found: ${params.skillId}` }
}
recordAudit({
workspaceId,
actorId: context.userId,
action: AuditAction.SKILL_DELETED,
resourceType: AuditResourceType.SKILL,
resourceId: params.skillId,
description: 'Deleted skill',
})
return {
success: true,
output: {
@@ -727,16 +818,11 @@ async function executeManageSkill(
return { success: false, error: `Unsupported operation for manage_skill: ${operation}` }
} catch (error) {
-    logger.error(
-      appendCopilotLogContext('manage_skill execution failed', {
-        messageId: context.messageId,
-      }),
-      {
-        operation,
-        workspaceId,
-        error: error instanceof Error ? error.message : String(error),
-      }
-    )
+    logger.withMetadata({ messageId: context.messageId }).error('manage_skill execution failed', {
+      operation,
+      workspaceId,
+      error: error instanceof Error ? error.message : String(error),
+    })
return {
success: false,
error: error instanceof Error ? error.message : 'Failed to manage skill',
@@ -963,10 +1049,24 @@ const SIM_WORKFLOW_TOOL_HANDLERS: Record<
.update(credential)
.set({ displayName, updatedAt: new Date() })
.where(eq(credential.id, credentialId))
recordAudit({
actorId: c.userId,
action: AuditAction.CREDENTIAL_RENAMED,
resourceType: AuditResourceType.OAUTH,
resourceId: credentialId,
description: `Renamed credential to "${displayName}"`,
})
return { success: true, output: { credentialId, displayName } }
}
case 'delete': {
await db.delete(credential).where(eq(credential.id, credentialId))
recordAudit({
actorId: c.userId,
action: AuditAction.CREDENTIAL_DELETED,
resourceType: AuditResourceType.OAUTH,
resourceId: credentialId,
description: `Deleted credential`,
})
return { success: true, output: { credentialId, deleted: true } }
}
default:
@@ -1007,15 +1107,12 @@ const SIM_WORKFLOW_TOOL_HANDLERS: Record<
},
}
} catch (err) {
-      logger.warn(
-        appendCopilotLogContext('Failed to generate OAuth link, falling back to generic URL', {
-          messageId: c.messageId,
-        }),
-        {
+      logger
+        .withMetadata({ messageId: c.messageId })
+        .warn('Failed to generate OAuth link, falling back to generic URL', {
          providerName,
          error: err instanceof Error ? err.message : String(err),
-        }
-      )
+        })
const workspaceUrl = c.workspaceId
? `${baseUrl}/workspace/${c.workspaceId}`
: `${baseUrl}/workspace`
@@ -1199,12 +1296,9 @@ export async function executeToolServerSide(
const toolConfig = getTool(resolvedToolName)
if (!toolConfig) {
-    logger.warn(
-      appendCopilotLogContext('Tool not found in registry', {
-        messageId: context.messageId,
-      }),
-      { toolName, resolvedToolName }
-    )
+    logger
+      .withMetadata({ messageId: context.messageId })
+      .warn('Tool not found in registry', { toolName, resolvedToolName })
return {
success: false,
error: `Tool not found: ${toolName}`,
@@ -1293,15 +1387,10 @@ async function executeServerToolDirect(
return { success: true, output: result }
} catch (error) {
-    logger.error(
-      appendCopilotLogContext('Server tool execution failed', {
-        messageId: context.messageId,
-      }),
-      {
-        toolName,
-        error: error instanceof Error ? error.message : String(error),
-      }
-    )
+    logger.withMetadata({ messageId: context.messageId }).error('Server tool execution failed', {
+      toolName,
+      error: error instanceof Error ? error.message : String(error),
+    })
return {
success: false,
error: error instanceof Error ? error.message : 'Server tool execution failed',
@@ -1377,7 +1466,7 @@ export async function markToolComplete(
})
if (!response.ok) {
-      logger.warn(appendCopilotLogContext('Mark-complete call failed', { messageId }), {
+      logger.withMetadata({ messageId }).warn('Mark-complete call failed', {
toolCallId,
toolName,
status: response.status,
@@ -1391,7 +1480,7 @@ export async function markToolComplete(
}
} catch (error) {
const isTimeout = error instanceof DOMException && error.name === 'AbortError'
-    logger.error(appendCopilotLogContext('Mark-complete call failed', { messageId }), {
+    logger.withMetadata({ messageId }).error('Mark-complete call failed', {
toolCallId,
toolName,
timedOut: isTimeout,


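The `manage_mcp_tool` file above also adds an early permission gate on the destructive `delete` operation. A small sketch of that gate, assuming `userPermission` is pre-resolved on the execution context (when it is absent, the handler falls through to its later ownership checks):

```typescript
type Permission = 'read' | 'write' | 'admin'

// Illustrative slice of the execution context; only the field the gate reads.
interface GateContext {
  userPermission?: Permission
}

function gateDelete(ctx: GateContext): { success: boolean; error?: string } {
  // Only gate when a permission was resolved; absence falls through to the
  // checks performed later in the handler.
  if (ctx.userPermission && ctx.userPermission !== 'admin') {
    return {
      success: false,
      error: `Permission denied: 'delete' on manage_mcp_tool requires admin access. You have '${ctx.userPermission}' permission.`,
    }
  }
  return { success: true }
}
```

Returning a structured `{ success, error }` rather than throwing matches how the surrounding tool handlers report failures to the orchestrator.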
@@ -3,6 +3,7 @@ import { copilotChats, workflowSchedule } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, isNull } from 'drizzle-orm'
import { v4 as uuidv4 } from 'uuid'
import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import type { ExecutionContext, ToolCallResult } from '@/lib/copilot/orchestrator/types'
import { parseCronToHumanReadable, validateCronExpression } from '@/lib/workflows/schedules/utils'
@@ -150,6 +151,17 @@ export async function executeCreateJob(
logger.info('Job created', { jobId, cronExpression, nextRunAt: nextRunAt.toISOString() })
recordAudit({
workspaceId: context.workspaceId || null,
actorId: context.userId,
action: AuditAction.SCHEDULE_UPDATED,
resourceType: AuditResourceType.SCHEDULE,
resourceId: jobId,
resourceName: title || undefined,
description: `Created job "${title || jobId}"`,
metadata: { operation: 'create', cronExpression },
})
return {
success: true,
output: {
@@ -381,6 +393,19 @@ export async function executeManageJob(
logger.info('Job updated', { jobId: args.jobId, fields: Object.keys(updates) })
recordAudit({
workspaceId: context.workspaceId || null,
actorId: context.userId,
action: AuditAction.SCHEDULE_UPDATED,
resourceType: AuditResourceType.SCHEDULE,
resourceId: args.jobId,
description: `Updated job`,
metadata: {
operation: 'update',
fields: Object.keys(updates).filter((k) => k !== 'updatedAt'),
},
})
return {
success: true,
output: {
@@ -419,6 +444,16 @@ export async function executeManageJob(
logger.info('Job deleted', { jobId: args.jobId })
recordAudit({
workspaceId: context.workspaceId || null,
actorId: context.userId,
action: AuditAction.SCHEDULE_UPDATED,
resourceType: AuditResourceType.SCHEDULE,
resourceId: args.jobId,
description: `Deleted job`,
metadata: { operation: 'delete' },
})
return {
success: true,
output: {
@@ -492,6 +527,16 @@ export async function executeCompleteJob(
logger.info('Job completed', { jobId })
recordAudit({
workspaceId: context.workspaceId || null,
actorId: context.userId,
action: AuditAction.SCHEDULE_UPDATED,
resourceType: AuditResourceType.SCHEDULE,
resourceId: jobId,
description: `Completed job`,
metadata: { operation: 'complete' },
})
return {
success: true,
output: { jobId, message: 'Job marked as completed. No further executions will occur.' },


@@ -2,6 +2,7 @@ import { db } from '@sim/db'
import { workflow, workspaceFiles } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, isNull } from 'drizzle-orm'
import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import { findMothershipUploadRowByChatAndName } from '@/lib/copilot/orchestrator/tool-executor/upload-file-reader'
import type { ExecutionContext, ToolCallResult } from '@/lib/copilot/orchestrator/types'
import { getServePathPrefix } from '@/lib/uploads'
@@ -158,6 +159,17 @@ async function executeImport(
chatId,
})
recordAudit({
workspaceId,
actorId: userId,
action: AuditAction.WORKFLOW_CREATED,
resourceType: AuditResourceType.WORKFLOW,
resourceId: workflowId,
resourceName: dedupedName,
description: `Imported workflow "${dedupedName}" from file`,
metadata: { fileName, source: 'copilot-import' },
})
return {
success: true,
output: {


@@ -1,6 +1,7 @@
import crypto from 'crypto'
import { createLogger } from '@sim/logger'
import { createWorkspaceApiKey } from '@/lib/api-key/auth'
import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import type { ExecutionContext, ToolCallResult } from '@/lib/copilot/orchestrator/types'
import { generateRequestId } from '@/lib/core/utils/request'
import { executeWorkflow } from '@/lib/workflows/executor/execute-workflow'
@@ -8,13 +9,13 @@ import {
getExecutionState,
getLatestExecutionState,
} from '@/lib/workflows/executor/execution-state'
import { performDeleteFolder, performDeleteWorkflow } from '@/lib/workflows/orchestration'
import { loadWorkflowFromNormalizedTables } from '@/lib/workflows/persistence/utils'
import { sanitizeForCopilot } from '@/lib/workflows/sanitization/json-sanitizer'
import {
checkForCircularReference,
createFolderRecord,
createWorkflowRecord,
deleteFolderRecord,
deleteWorkflowRecord,
listFolders,
setWorkflowVariables,
updateFolderRecord,
@@ -121,7 +122,7 @@ export async function executeCreateWorkflow(
params?.workspaceId || context.workspaceId || (await getDefaultWorkspaceId(context.userId))
const folderId = params?.folderId || null
-    await ensureWorkspaceAccess(workspaceId, context.userId, true)
+    await ensureWorkspaceAccess(workspaceId, context.userId, 'write')
assertWorkflowMutationNotAborted(context)
const result = await createWorkflowRecord({
@@ -132,6 +133,28 @@ export async function executeCreateWorkflow(
folderId,
})
recordAudit({
workspaceId,
actorId: context.userId,
action: AuditAction.WORKFLOW_CREATED,
resourceType: AuditResourceType.WORKFLOW,
resourceId: result.workflowId,
resourceName: name,
description: `Created workflow "${name}"`,
})
try {
const { PlatformEvents } = await import('@/lib/core/telemetry')
PlatformEvents.workflowCreated({
workflowId: result.workflowId,
name,
workspaceId,
folderId: folderId ?? undefined,
})
} catch (_e) {
// Telemetry is best-effort
}
const normalized = await loadWorkflowFromNormalizedTables(result.workflowId)
let copilotSanitizedWorkflowState: unknown
if (normalized) {
@@ -175,7 +198,7 @@ export async function executeCreateFolder(
params?.workspaceId || context.workspaceId || (await getDefaultWorkspaceId(context.userId))
const parentId = params?.parentId || null
-    await ensureWorkspaceAccess(workspaceId, context.userId, true)
+    await ensureWorkspaceAccess(workspaceId, context.userId, 'write')
assertWorkflowMutationNotAborted(context)
const result = await createFolderRecord({
@@ -185,6 +208,16 @@ export async function executeCreateFolder(
parentId,
})
recordAudit({
workspaceId,
actorId: context.userId,
action: AuditAction.FOLDER_CREATED,
resourceType: AuditResourceType.FOLDER,
resourceId: result.folderId,
resourceName: name,
description: `Created folder "${name}"`,
})
return { success: true, output: result }
} catch (error) {
return { success: false, error: error instanceof Error ? error.message : String(error) }
@@ -201,7 +234,11 @@ export async function executeRunWorkflow(
return { success: false, error: 'workflowId is required' }
}
const { workflow: workflowRecord } = await ensureWorkflowAccess(workflowId, context.userId)
const { workflow: workflowRecord } = await ensureWorkflowAccess(
workflowId,
context.userId,
'write'
)
const useDraftState = !params.useDeployedState
@@ -236,7 +273,11 @@ export async function executeSetGlobalWorkflowVariables(
const operations: VariableOperation[] = Array.isArray(params.operations)
? params.operations
: []
const { workflow: workflowRecord } = await ensureWorkflowAccess(workflowId, context.userId)
const { workflow: workflowRecord } = await ensureWorkflowAccess(
workflowId,
context.userId,
'write'
)
interface WorkflowVariable {
id: string
@@ -325,6 +366,14 @@ export async function executeSetGlobalWorkflowVariables(
assertWorkflowMutationNotAborted(context)
await setWorkflowVariables(workflowId, nextVarsRecord)
recordAudit({
actorId: context.userId,
action: AuditAction.WORKFLOW_VARIABLES_UPDATED,
resourceType: AuditResourceType.WORKFLOW,
resourceId: workflowId,
description: `Updated workflow variables`,
})
return { success: true, output: { updated: Object.values(byName).length } }
} catch (error) {
return { success: false, error: error instanceof Error ? error.message : String(error) }
@@ -348,7 +397,7 @@ export async function executeRenameWorkflow(
return { success: false, error: 'Workflow name must be 200 characters or less' }
}
await ensureWorkflowAccess(workflowId, context.userId)
await ensureWorkflowAccess(workflowId, context.userId, 'write')
assertWorkflowMutationNotAborted(context)
await updateWorkflowRecord(workflowId, { name })
@@ -368,7 +417,7 @@ export async function executeMoveWorkflow(
return { success: false, error: 'workflowId is required' }
}
await ensureWorkflowAccess(workflowId, context.userId)
await ensureWorkflowAccess(workflowId, context.userId, 'write')
const folderId = params.folderId || null
assertWorkflowMutationNotAborted(context)
await updateWorkflowRecord(workflowId, { folderId })
@@ -395,6 +444,15 @@ export async function executeMoveFolder(
return { success: false, error: 'A folder cannot be moved into itself' }
}
if (parentId) {
const wouldCreateCycle = await checkForCircularReference(folderId, parentId)
if (wouldCreateCycle) {
return { success: false, error: 'Cannot create circular folder reference' }
}
}
const workspaceId = context.workspaceId || (await getDefaultWorkspaceId(context.userId))
await ensureWorkspaceAccess(workspaceId, context.userId, 'write')
assertWorkflowMutationNotAborted(context)
await updateFolderRecord(folderId, { parentId })
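The circular-reference guard added to executeMoveFolder above has to walk the parent chain of the proposed new parent. A minimal sketch against an in-memory folder map (illustrative only — the real checkForCircularReference reads folder rows from the database):

```typescript
// Hypothetical in-memory parent map; the real check queries folder records.
const parents = new Map<string, string | null>()

/** True if moving `folderId` under `newParentId` would create a cycle. */
function wouldCreateCycle(folderId: string, newParentId: string): boolean {
  let current: string | null | undefined = newParentId
  while (current != null) {
    // If we reach the folder being moved while climbing from the new
    // parent, the move would make the folder its own ancestor.
    if (current === folderId) return true
    current = parents.get(current) ?? null
  }
  return false
}
```

The walk terminates because a well-formed tree has finite depth and a `null` root parent; the cycle case is caught before it can be written.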
@@ -417,7 +475,11 @@ export async function executeRunWorkflowUntilBlock(
return { success: false, error: 'stopAfterBlockId is required' }
}
const { workflow: workflowRecord } = await ensureWorkflowAccess(workflowId, context.userId)
const { workflow: workflowRecord } = await ensureWorkflowAccess(
workflowId,
context.userId,
'write'
)
const useDraftState = !params.useDeployedState
@@ -460,7 +522,7 @@ export async function executeGenerateApiKey(
const workspaceId =
params.workspaceId || context.workspaceId || (await getDefaultWorkspaceId(context.userId))
await ensureWorkspaceAccess(workspaceId, context.userId, true)
await ensureWorkspaceAccess(workspaceId, context.userId, 'admin')
assertWorkflowMutationNotAborted(context)
const newKey = await createWorkspaceApiKey({
@@ -469,6 +531,14 @@ export async function executeGenerateApiKey(
name,
})
recordAudit({
workspaceId,
actorId: context.userId,
action: AuditAction.API_KEY_CREATED,
resourceType: AuditResourceType.API_KEY,
description: `Generated API key for workspace`,
})
return {
success: true,
output: {
@@ -511,7 +581,11 @@ export async function executeRunFromBlock(
}
}
const { workflow: workflowRecord } = await ensureWorkflowAccess(workflowId, context.userId)
const { workflow: workflowRecord } = await ensureWorkflowAccess(
workflowId,
context.userId,
'write'
)
const useDraftState = !params.useDeployedState
const result = await executeWorkflow(
@@ -569,7 +643,7 @@ export async function executeUpdateWorkflow(
return { success: false, error: 'At least one of name or description is required' }
}
await ensureWorkflowAccess(workflowId, context.userId)
await ensureWorkflowAccess(workflowId, context.userId, 'write')
assertWorkflowMutationNotAborted(context)
await updateWorkflowRecord(workflowId, updates)
@@ -592,9 +666,17 @@ export async function executeDeleteWorkflow(
return { success: false, error: 'workflowId is required' }
}
const { workflow: workflowRecord } = await ensureWorkflowAccess(workflowId, context.userId)
const { workflow: workflowRecord } = await ensureWorkflowAccess(
workflowId,
context.userId,
'admin'
)
assertWorkflowMutationNotAborted(context)
await deleteWorkflowRecord(workflowId)
const result = await performDeleteWorkflow({ workflowId, userId: context.userId })
if (!result.success) {
return { success: false, error: result.error || 'Failed to delete workflow' }
}
return {
success: true,
@@ -616,7 +698,7 @@ export async function executeDeleteFolder(
}
const workspaceId = context.workspaceId || (await getDefaultWorkspaceId(context.userId))
await ensureWorkspaceAccess(workspaceId, context.userId, true)
await ensureWorkspaceAccess(workspaceId, context.userId, 'admin')
const folders = await listFolders(workspaceId)
const folder = folders.find((f) => f.folderId === folderId)
@@ -625,12 +707,19 @@ export async function executeDeleteFolder(
}
assertWorkflowMutationNotAborted(context)
const deleted = await deleteFolderRecord(folderId)
if (!deleted) {
return { success: false, error: 'Folder not found' }
const result = await performDeleteFolder({
folderId,
workspaceId,
userId: context.userId,
folderName: folder.folderName,
})
if (!result.success) {
return { success: false, error: result.error || 'Failed to delete folder' }
}
return { success: true, output: { folderId, deleted: true } }
return { success: true, output: { folderId, deleted: true, ...result.deletedItems } }
} catch (error) {
return { success: false, error: error instanceof Error ? error.message : String(error) }
}
@@ -653,6 +742,8 @@ export async function executeRenameFolder(
return { success: false, error: 'Folder name must be 200 characters or less' }
}
const workspaceId = context.workspaceId || (await getDefaultWorkspaceId(context.userId))
await ensureWorkspaceAccess(workspaceId, context.userId, 'write')
assertWorkflowMutationNotAborted(context)
await updateFolderRecord(folderId, { name })
@@ -688,7 +779,11 @@ export async function executeRunBlock(
}
}
const { workflow: workflowRecord } = await ensureWorkflowAccess(workflowId, context.userId)
const { workflow: workflowRecord } = await ensureWorkflowAccess(
workflowId,
context.userId,
'write'
)
const useDraftState = !params.useDeployedState
const result = await executeWorkflow(

View File
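The recordAudit calls threaded through the hunks above all pass the same envelope and are deliberately fire-and-forget. A minimal in-memory sketch of such a recorder (field names match the calls above; the sink and helper are illustrative, not Sim's actual implementation):

```typescript
interface AuditEntry {
  workspaceId?: string
  actorId: string
  action: string
  resourceType: string
  resourceId?: string
  resourceName?: string
  description: string
  at?: Date
}

const auditLog: AuditEntry[] = []

/** Best-effort recorder: an audit failure must never fail the mutation itself. */
function recordAudit(entry: AuditEntry): void {
  try {
    auditLog.push({ ...entry, at: new Date() })
  } catch {
    // Swallow: auditing is observability, not control flow.
  }
}
```

Note the call sites above never await or check the result — the same posture the telemetry block takes with its best-effort try/catch.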

@@ -46,7 +46,7 @@ export async function executeListFolders(
context.workspaceId ||
(await getDefaultWorkspaceId(context.userId))
await ensureWorkspaceAccess(workspaceId, context.userId, false)
await ensureWorkspaceAccess(workspaceId, context.userId, 'read')
const folders = await listFolders(workspaceId)

View File
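The boolean third argument replaced across the hunks above ('read', 'write', 'admin' in place of true/false) collapses naturally into an ordered union. A hedged sketch of the idea — names are illustrative, not the actual Sim helpers:

```typescript
// Hypothetical ordering of the permission levels used above.
type PermissionLevel = 'read' | 'write' | 'admin'

const RANK: Record<PermissionLevel, number> = { read: 0, write: 1, admin: 2 }

/** True when a granted level satisfies the required one. */
function satisfies(granted: PermissionLevel, required: PermissionLevel): boolean {
  return RANK[granted] >= RANK[required]
}

/** Back-compat shim: the old boolean flag roughly mapped write-intent to 'write'. */
function fromLegacyFlag(requireWrite: boolean): PermissionLevel {
  return requireWrite ? 'write' : 'read'
}
```

The explicit union lets destructive call sites (executeDeleteWorkflow, executeGenerateApiKey, executeDeleteFolder) demand 'admin' rather than overloading a single write bit.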

@@ -1,6 +1,5 @@
import { createLogger } from '@sim/logger'
import { z } from 'zod'
import { appendCopilotLogContext } from '@/lib/copilot/logging'
import {
assertServerToolNotAborted,
type BaseServerTool,
@@ -125,8 +124,7 @@ export const downloadToWorkspaceFileServerTool: BaseServerTool<
params: DownloadToWorkspaceFileArgs,
context?: ServerToolContext
): Promise<DownloadToWorkspaceFileResult> {
const withMessageId = (message: string) =>
appendCopilotLogContext(message, { messageId: context?.messageId })
const reqLogger = logger.withMetadata({ messageId: context?.messageId })
if (!context?.userId) {
throw new Error('Authentication required')
@@ -178,7 +176,7 @@ export const downloadToWorkspaceFileServerTool: BaseServerTool<
mimeType
)
logger.info(withMessageId('Downloaded remote file to workspace'), {
reqLogger.info('Downloaded remote file to workspace', {
sourceUrl: params.url,
fileId: uploaded.id,
fileName: uploaded.name,
@@ -195,7 +193,7 @@ export const downloadToWorkspaceFileServerTool: BaseServerTool<
}
} catch (error) {
const msg = error instanceof Error ? error.message : 'Unknown error'
logger.error(withMessageId('Failed to download file to workspace'), {
reqLogger.error('Failed to download file to workspace', {
url: params.url,
error: msg,
})

View File
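The withMessageId string wrapper removed above is replaced by a child logger that binds the messageId once. A sketch of the withMetadata pattern, assuming a simple structured logger rather than the actual @sim/logger API:

```typescript
type Meta = Record<string, unknown>

interface Logger {
  info(message: string, fields?: Meta): void
  withMetadata(meta: Meta): Logger
}

/** Child loggers merge their bound metadata into every emitted record. */
function createLogger(bound: Meta = {}, sink: (rec: Meta) => void = console.log): Logger {
  return {
    info(message, fields = {}) {
      sink({ ...bound, ...fields, message })
    },
    withMetadata(meta) {
      // Binding returns a new logger; the parent is untouched.
      return createLogger({ ...bound, ...meta }, sink)
    },
  }
}
```

This is why every call site could drop the per-message wrapper: `reqLogger` carries the messageId for the lifetime of the request.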

@@ -1,5 +1,4 @@
import { createLogger } from '@sim/logger'
import { appendCopilotLogContext } from '@/lib/copilot/logging'
import {
assertServerToolNotAborted,
type BaseServerTool,
@@ -51,11 +50,10 @@ export const workspaceFileServerTool: BaseServerTool<WorkspaceFileArgs, Workspac
params: WorkspaceFileArgs,
context?: ServerToolContext
): Promise<WorkspaceFileResult> {
const withMessageId = (message: string) =>
appendCopilotLogContext(message, { messageId: context?.messageId })
const reqLogger = logger.withMetadata({ messageId: context?.messageId })
if (!context?.userId) {
logger.error(withMessageId('Unauthorized attempt to access workspace files'))
reqLogger.error('Unauthorized attempt to access workspace files')
throw new Error('Authentication required')
}
@@ -94,7 +92,7 @@ export const workspaceFileServerTool: BaseServerTool<WorkspaceFileArgs, Workspac
await generatePptxFromCode(content, workspaceId)
} catch (err) {
const msg = err instanceof Error ? err.message : String(err)
logger.error(withMessageId('PPTX code validation failed'), { error: msg, fileName })
reqLogger.error('PPTX code validation failed', { error: msg, fileName })
return {
success: false,
message: `PPTX generation failed: ${msg}. Fix the pptxgenjs code and retry.`,
@@ -116,7 +114,7 @@ export const workspaceFileServerTool: BaseServerTool<WorkspaceFileArgs, Workspac
contentType
)
logger.info(withMessageId('Workspace file written via copilot'), {
reqLogger.info('Workspace file written via copilot', {
fileId: result.id,
name: fileName,
size: fileBuffer.length,
@@ -177,7 +175,7 @@ export const workspaceFileServerTool: BaseServerTool<WorkspaceFileArgs, Workspac
isPptxUpdate ? PPTX_SOURCE_MIME : undefined
)
logger.info(withMessageId('Workspace file updated via copilot'), {
reqLogger.info('Workspace file updated via copilot', {
fileId,
name: fileRecord.name,
size: fileBuffer.length,
@@ -219,7 +217,7 @@ export const workspaceFileServerTool: BaseServerTool<WorkspaceFileArgs, Workspac
assertServerToolNotAborted(context)
await renameWorkspaceFile(workspaceId, fileId, newName)
logger.info(withMessageId('Workspace file renamed via copilot'), {
reqLogger.info('Workspace file renamed via copilot', {
fileId,
oldName,
newName,
@@ -247,7 +245,7 @@ export const workspaceFileServerTool: BaseServerTool<WorkspaceFileArgs, Workspac
assertServerToolNotAborted(context)
await deleteWorkspaceFile(workspaceId, fileId)
logger.info(withMessageId('Workspace file deleted via copilot'), {
reqLogger.info('Workspace file deleted via copilot', {
fileId,
name: fileRecord.name,
userId: context.userId,
@@ -324,7 +322,7 @@ export const workspaceFileServerTool: BaseServerTool<WorkspaceFileArgs, Workspac
isPptxPatch ? PPTX_SOURCE_MIME : undefined
)
logger.info(withMessageId('Workspace file patched via copilot'), {
reqLogger.info('Workspace file patched via copilot', {
fileId,
name: fileRecord.name,
editCount: edits.length,
@@ -350,7 +348,7 @@ export const workspaceFileServerTool: BaseServerTool<WorkspaceFileArgs, Workspac
}
} catch (error) {
const errorMessage = error instanceof Error ? error.message : 'Unknown error occurred'
logger.error(withMessageId('Error in workspace_file tool'), {
reqLogger.error('Error in workspace_file tool', {
operation,
error: errorMessage,
userId: context.userId,

View File

@@ -1,6 +1,5 @@
import { GoogleGenAI, type Part } from '@google/genai'
import { createLogger } from '@sim/logger'
import { appendCopilotLogContext } from '@/lib/copilot/logging'
import {
assertServerToolNotAborted,
type BaseServerTool,
@@ -61,8 +60,7 @@ export const generateImageServerTool: BaseServerTool<GenerateImageArgs, Generate
params: GenerateImageArgs,
context?: ServerToolContext
): Promise<GenerateImageResult> {
const withMessageId = (message: string) =>
appendCopilotLogContext(message, { messageId: context?.messageId })
const reqLogger = logger.withMetadata({ messageId: context?.messageId })
if (!context?.userId) {
throw new Error('Authentication required')
@@ -97,17 +95,17 @@ export const generateImageServerTool: BaseServerTool<GenerateImageArgs, Generate
parts.push({
inlineData: { mimeType: mime, data: base64 },
})
logger.info(withMessageId('Loaded reference image'), {
reqLogger.info('Loaded reference image', {
fileId,
name: fileRecord.name,
size: buffer.length,
mimeType: mime,
})
} else {
logger.warn(withMessageId('Reference file not found, skipping'), { fileId })
reqLogger.warn('Reference file not found, skipping', { fileId })
}
} catch (err) {
logger.warn(withMessageId('Failed to load reference image, skipping'), {
reqLogger.warn('Failed to load reference image, skipping', {
fileId,
error: err instanceof Error ? err.message : String(err),
})
@@ -121,7 +119,7 @@ export const generateImageServerTool: BaseServerTool<GenerateImageArgs, Generate
parts.push({ text: prompt + sizeInstruction })
logger.info(withMessageId('Generating image with Nano Banana 2'), {
reqLogger.info('Generating image with Nano Banana 2', {
model: NANO_BANANA_MODEL,
aspectRatio,
promptLength: prompt.length,
@@ -186,7 +184,7 @@ export const generateImageServerTool: BaseServerTool<GenerateImageArgs, Generate
imageBuffer,
mimeType
)
logger.info(withMessageId('Generated image overwritten'), {
reqLogger.info('Generated image overwritten', {
fileId: updated.id,
fileName: updated.name,
size: imageBuffer.length,
@@ -212,7 +210,7 @@ export const generateImageServerTool: BaseServerTool<GenerateImageArgs, Generate
mimeType
)
logger.info(withMessageId('Generated image saved'), {
reqLogger.info('Generated image saved', {
fileId: uploaded.id,
fileName: uploaded.name,
size: imageBuffer.length,
@@ -229,7 +227,7 @@ export const generateImageServerTool: BaseServerTool<GenerateImageArgs, Generate
}
} catch (error) {
const msg = error instanceof Error ? error.message : 'Unknown error'
logger.error(withMessageId('Image generation failed'), { error: msg })
reqLogger.error('Image generation failed', { error: msg })
return { success: false, message: `Failed to generate image: ${msg}` }
}
},

View File

@@ -2,7 +2,6 @@ import { db } from '@sim/db'
import { jobExecutionLogs } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, desc, eq } from 'drizzle-orm'
import { appendCopilotLogContext } from '@/lib/copilot/logging'
import type { BaseServerTool, ServerToolContext } from '@/lib/copilot/tools/server/base-tool'
import { checkWorkspaceAccess } from '@/lib/workspaces/permissions/utils'
@@ -86,8 +85,7 @@ function extractOutputAndError(executionData: any): {
export const getJobLogsServerTool: BaseServerTool<GetJobLogsArgs, JobLogEntry[]> = {
name: 'get_job_logs',
async execute(rawArgs: GetJobLogsArgs, context?: ServerToolContext): Promise<JobLogEntry[]> {
const withMessageId = (message: string) =>
appendCopilotLogContext(message, { messageId: context?.messageId })
const reqLogger = logger.withMetadata({ messageId: context?.messageId })
const {
jobId,
@@ -114,7 +112,7 @@ export const getJobLogsServerTool: BaseServerTool<GetJobLogsArgs, JobLogEntry[]>
const clampedLimit = Math.min(Math.max(1, limit), 5)
logger.info(withMessageId('Fetching job logs'), {
reqLogger.info('Fetching job logs', {
jobId,
executionId,
limit: clampedLimit,
@@ -173,7 +171,7 @@ export const getJobLogsServerTool: BaseServerTool<GetJobLogsArgs, JobLogEntry[]>
return entry
})
logger.info(withMessageId('Job logs prepared'), {
reqLogger.info('Job logs prepared', {
jobId,
count: entries.length,
resultSizeKB: Math.round(JSON.stringify(entries).length / 1024),

View File

@@ -3,7 +3,6 @@ import { knowledgeConnector } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, isNull } from 'drizzle-orm'
import { generateInternalToken } from '@/lib/auth/internal'
import { appendCopilotLogContext } from '@/lib/copilot/logging'
import {
assertServerToolNotAborted,
type BaseServerTool,
@@ -48,14 +47,11 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
params: KnowledgeBaseArgs,
context?: ServerToolContext
): Promise<KnowledgeBaseResult> {
const withMessageId = (message: string) =>
appendCopilotLogContext(message, { messageId: context?.messageId })
const reqLogger = logger.withMetadata({ messageId: context?.messageId })
if (!context?.userId) {
logger.error(
withMessageId(
'Unauthorized attempt to access knowledge base - no authenticated user context'
)
reqLogger.error(
'Unauthorized attempt to access knowledge base - no authenticated user context'
)
throw new Error('Authentication required')
}
@@ -105,7 +101,7 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
requestId
)
logger.info(withMessageId('Knowledge base created via copilot'), {
reqLogger.info('Knowledge base created via copilot', {
knowledgeBaseId: newKnowledgeBase.id,
name: newKnowledgeBase.name,
userId: context.userId,
@@ -141,7 +137,7 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
}
}
logger.info(withMessageId('Knowledge base metadata retrieved via copilot'), {
reqLogger.info('Knowledge base metadata retrieved via copilot', {
knowledgeBaseId: knowledgeBase.id,
userId: context.userId,
})
@@ -205,7 +201,7 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
distanceThreshold: strategy.distanceThreshold,
})
logger.info(withMessageId('Knowledge base queried via copilot'), {
reqLogger.info('Knowledge base queried via copilot', {
knowledgeBaseId: args.knowledgeBaseId,
query: args.query.substring(0, 100),
resultCount: results.length,
@@ -296,13 +292,13 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
},
{}
).catch((err) => {
logger.error(withMessageId('Background document processing failed'), {
reqLogger.error('Background document processing failed', {
documentId: doc.id,
error: err instanceof Error ? err.message : String(err),
})
})
logger.info(withMessageId('Workspace file added to knowledge base via copilot'), {
reqLogger.info('Workspace file added to knowledge base via copilot', {
knowledgeBaseId: args.knowledgeBaseId,
documentId: doc.id,
fileName: fileRecord.name,
@@ -352,7 +348,7 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
assertNotAborted()
const updatedKb = await updateKnowledgeBase(args.knowledgeBaseId, updates, requestId)
logger.info(withMessageId('Knowledge base updated via copilot'), {
reqLogger.info('Knowledge base updated via copilot', {
knowledgeBaseId: args.knowledgeBaseId,
userId: context.userId,
})
@@ -391,7 +387,7 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
assertNotAborted()
await deleteKnowledgeBase(args.knowledgeBaseId, requestId)
logger.info(withMessageId('Knowledge base deleted via copilot'), {
reqLogger.info('Knowledge base deleted via copilot', {
knowledgeBaseId: args.knowledgeBaseId,
name: kbToDelete.name,
userId: context.userId,
@@ -468,7 +464,7 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
const tagDefinitions = await getDocumentTagDefinitions(args.knowledgeBaseId)
logger.info(withMessageId('Tag definitions listed via copilot'), {
reqLogger.info('Tag definitions listed via copilot', {
knowledgeBaseId: args.knowledgeBaseId,
count: tagDefinitions.length,
userId: context.userId,
@@ -522,7 +518,7 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
requestId
)
logger.info(withMessageId('Tag definition created via copilot'), {
reqLogger.info('Tag definition created via copilot', {
knowledgeBaseId: args.knowledgeBaseId,
tagId: newTag.id,
displayName: newTag.displayName,
@@ -573,7 +569,7 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
assertNotAborted()
const updatedTag = await updateTagDefinition(args.tagDefinitionId, updateData, requestId)
logger.info(withMessageId('Tag definition updated via copilot'), {
reqLogger.info('Tag definition updated via copilot', {
tagId: args.tagDefinitionId,
knowledgeBaseId: existingTag.knowledgeBaseId,
userId: context.userId,
@@ -614,7 +610,7 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
requestId
)
logger.info(withMessageId('Tag definition deleted via copilot'), {
reqLogger.info('Tag definition deleted via copilot', {
tagId: args.tagDefinitionId,
tagSlot: deleted.tagSlot,
displayName: deleted.displayName,
@@ -696,7 +692,7 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
}
const connector = createRes.data
logger.info(withMessageId('Connector created via copilot'), {
reqLogger.info('Connector created via copilot', {
connectorId: connector.id,
connectorType: args.connectorType,
knowledgeBaseId: args.knowledgeBaseId,
@@ -751,7 +747,7 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
return { success: false, message: updateRes.error ?? 'Failed to update connector' }
}
logger.info(withMessageId('Connector updated via copilot'), {
reqLogger.info('Connector updated via copilot', {
connectorId: args.connectorId,
userId: context.userId,
})
@@ -784,7 +780,7 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
return { success: false, message: deleteRes.error ?? 'Failed to delete connector' }
}
logger.info(withMessageId('Connector deleted via copilot'), {
reqLogger.info('Connector deleted via copilot', {
connectorId: args.connectorId,
userId: context.userId,
})
@@ -817,7 +813,7 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
return { success: false, message: syncRes.error ?? 'Failed to sync connector' }
}
logger.info(withMessageId('Connector sync triggered via copilot'), {
reqLogger.info('Connector sync triggered via copilot', {
connectorId: args.connectorId,
userId: context.userId,
})
@@ -837,7 +833,7 @@ export const knowledgeBaseServerTool: BaseServerTool<KnowledgeBaseArgs, Knowledg
}
} catch (error) {
const errorMessage = error instanceof Error ? error.message : 'Unknown error occurred'
logger.error(withMessageId('Error in knowledge_base tool'), {
reqLogger.error('Error in knowledge_base tool', {
operation,
error: errorMessage,
userId: context.userId,

View File

@@ -1,5 +1,4 @@
import { createLogger } from '@sim/logger'
import { appendCopilotLogContext } from '@/lib/copilot/logging'
import {
assertServerToolNotAborted,
type BaseServerTool,
@@ -123,7 +122,7 @@ export async function routeExecution(
throw new Error(`Unknown server tool: ${toolName}`)
}
logger.debug(appendCopilotLogContext('Routing to tool', { messageId: context?.messageId }), {
logger.withMetadata({ messageId: context?.messageId }).debug('Routing to tool', {
toolName,
})

View File

@@ -1,5 +1,4 @@
import { createLogger } from '@sim/logger'
import { appendCopilotLogContext } from '@/lib/copilot/logging'
import {
assertServerToolNotAborted,
type BaseServerTool,
@@ -242,13 +241,10 @@ async function batchInsertAll(
export const userTableServerTool: BaseServerTool<UserTableArgs, UserTableResult> = {
name: 'user_table',
async execute(params: UserTableArgs, context?: ServerToolContext): Promise<UserTableResult> {
const withMessageId = (message: string) =>
appendCopilotLogContext(message, { messageId: context?.messageId })
const reqLogger = logger.withMetadata({ messageId: context?.messageId })
if (!context?.userId) {
logger.error(
withMessageId('Unauthorized attempt to access user table - no authenticated user context')
)
reqLogger.error('Unauthorized attempt to access user table - no authenticated user context')

throw new Error('Authentication required')
}
@@ -729,7 +725,7 @@ export const userTableServerTool: BaseServerTool<UserTableArgs, UserTableResult>
const coerced = coerceRows(rows, columns, columnMap)
const inserted = await batchInsertAll(table.id, coerced, table, workspaceId, context)
logger.info(withMessageId('Table created from file'), {
reqLogger.info('Table created from file', {
tableId: table.id,
fileName: file.name,
columns: columns.length,
@@ -805,7 +801,7 @@ export const userTableServerTool: BaseServerTool<UserTableArgs, UserTableResult>
const coerced = coerceRows(rows, matchedColumns, columnMap)
const inserted = await batchInsertAll(table.id, coerced, table, workspaceId, context)
logger.info(withMessageId('Rows imported from file'), {
reqLogger.info('Rows imported from file', {
tableId: table.id,
fileName: file.name,
matchedColumns: mappedHeaders.length,
@@ -1003,7 +999,7 @@ export const userTableServerTool: BaseServerTool<UserTableArgs, UserTableResult>
? error.cause.message
: String(error.cause)
: undefined
logger.error(withMessageId('Table operation failed'), {
reqLogger.error('Table operation failed', {
operation,
error: errorMessage,
cause,

View File

@@ -1,5 +1,4 @@
import { createLogger } from '@sim/logger'
import { appendCopilotLogContext } from '@/lib/copilot/logging'
import {
assertServerToolNotAborted,
type BaseServerTool,
@@ -66,7 +65,7 @@ async function collectSandboxFiles(
inputTables?: string[],
messageId?: string
): Promise<SandboxFile[]> {
const withMessageId = (message: string) => appendCopilotLogContext(message, { messageId })
const reqLogger = logger.withMetadata({ messageId })
const sandboxFiles: SandboxFile[] = []
let totalSize = 0
@@ -75,12 +74,12 @@ async function collectSandboxFiles(
for (const fileRef of inputFiles) {
const record = findWorkspaceFileRecord(allFiles, fileRef)
if (!record) {
logger.warn(withMessageId('Sandbox input file not found'), { fileRef })
reqLogger.warn('Sandbox input file not found', { fileRef })
continue
}
const ext = record.name.split('.').pop()?.toLowerCase() ?? ''
if (!TEXT_EXTENSIONS.has(ext)) {
logger.warn(withMessageId('Skipping non-text sandbox input file'), {
reqLogger.warn('Skipping non-text sandbox input file', {
fileId: record.id,
fileName: record.name,
ext,
@@ -88,7 +87,7 @@ async function collectSandboxFiles(
continue
}
if (record.size > MAX_FILE_SIZE) {
logger.warn(withMessageId('Sandbox input file exceeds size limit'), {
reqLogger.warn('Sandbox input file exceeds size limit', {
fileId: record.id,
fileName: record.name,
size: record.size,
@@ -96,9 +95,7 @@ async function collectSandboxFiles(
continue
}
if (totalSize + record.size > MAX_TOTAL_SIZE) {
logger.warn(
withMessageId('Sandbox input total size limit reached, skipping remaining files')
)
reqLogger.warn('Sandbox input total size limit reached, skipping remaining files')
break
}
const buffer = await downloadWorkspaceFile(record)
@@ -119,7 +116,7 @@ async function collectSandboxFiles(
for (const tableId of inputTables) {
const table = await getTableById(tableId)
if (!table) {
logger.warn(withMessageId('Sandbox input table not found'), { tableId })
reqLogger.warn('Sandbox input table not found', { tableId })
continue
}
const { rows } = await queryRows(tableId, workspaceId, { limit: 10000 }, 'sandbox-input')
@@ -134,9 +131,7 @@ async function collectSandboxFiles(
}
const csvContent = csvLines.join('\n')
if (totalSize + csvContent.length > MAX_TOTAL_SIZE) {
logger.warn(
withMessageId('Sandbox input total size limit reached, skipping remaining tables')
)
reqLogger.warn('Sandbox input total size limit reached, skipping remaining tables')
break
}
totalSize += csvContent.length
@@ -157,8 +152,7 @@ export const generateVisualizationServerTool: BaseServerTool<
params: VisualizationArgs,
context?: ServerToolContext
): Promise<VisualizationResult> {
const withMessageId = (message: string) =>
appendCopilotLogContext(message, { messageId: context?.messageId })
const reqLogger = logger.withMetadata({ messageId: context?.messageId })
if (!context?.userId) {
throw new Error('Authentication required')
@@ -243,7 +237,7 @@ export const generateVisualizationServerTool: BaseServerTool<
imageBuffer,
'image/png'
)
logger.info(withMessageId('Chart image overwritten'), {
reqLogger.info('Chart image overwritten', {
fileId: updated.id,
fileName: updated.name,
size: imageBuffer.length,
@@ -267,7 +261,7 @@ export const generateVisualizationServerTool: BaseServerTool<
'image/png'
)
logger.info(withMessageId('Chart image saved'), {
reqLogger.info('Chart image saved', {
fileId: uploaded.id,
fileName: uploaded.name,
size: imageBuffer.length,
@@ -282,7 +276,7 @@ export const generateVisualizationServerTool: BaseServerTool<
}
} catch (error) {
const msg = error instanceof Error ? error.message : 'Unknown error'
logger.error(withMessageId('Visualization generation failed'), { error: msg })
reqLogger.error('Visualization generation failed', { error: msg })
return { success: false, message: `Failed to generate visualization: ${msg}` }
}
},

View File

@@ -1,6 +1,7 @@
import { db } from '@sim/db'
import {
document,
embedding,
knowledgeBase,
knowledgeConnector,
knowledgeConnectorSyncLog,
@@ -658,6 +659,23 @@ export async function executeSync(
if (stuckDocs.length > 0) {
logger.info(`Retrying ${stuckDocs.length} stuck documents`, { connectorId })
try {
const stuckDocIds = stuckDocs.map((doc) => doc.id)
await db.delete(embedding).where(inArray(embedding.documentId, stuckDocIds))
await db
.update(document)
.set({
processingStatus: 'pending',
processingStartedAt: null,
processingCompletedAt: null,
processingError: null,
chunkCount: 0,
tokenCount: 0,
characterCount: 0,
})
.where(inArray(document.id, stuckDocIds))
await processDocumentsWithQueue(
stuckDocs.map((doc) => ({
documentId: doc.id,

View File
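The retry path above wipes stale embeddings and resets stuck documents to 'pending' before requeueing. The "stuck" predicate itself is a time-window check, sketched here with an assumed threshold (the constant and field shapes are illustrative, not Sim's actual values):

```typescript
interface DocRow {
  id: string
  processingStatus: 'pending' | 'processing' | 'completed' | 'failed'
  processingStartedAt: Date | null
}

// Assumed threshold for this sketch; the real cutoff may differ.
const STUCK_AFTER_MS = 10 * 60 * 1000

/** Documents still marked 'processing' long after they started are presumed dead. */
function findStuckDocs(docs: DocRow[], now: Date = new Date()): DocRow[] {
  return docs.filter(
    (d) =>
      d.processingStatus === 'processing' &&
      d.processingStartedAt !== null &&
      now.getTime() - d.processingStartedAt.getTime() > STUCK_AFTER_MS
  )
}
```

Resetting chunkCount, tokenCount, and characterCount alongside the status, as the hunk above does, keeps the re-run from double-counting the partially processed first attempt.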

@@ -156,17 +156,12 @@ export async function dispatchDocumentProcessingJob(payload: DocumentJobData): P
return
}
void processDocumentAsync(
await processDocumentAsync(
payload.knowledgeBaseId,
payload.documentId,
payload.docData,
payload.processingOptions
).catch((error) => {
logger.error(`[${payload.requestId}] Direct document processing failed`, {
documentId: payload.documentId,
error: error instanceof Error ? error.message : String(error),
})
})
)
}
export interface DocumentTagData {
@@ -385,9 +380,27 @@ export async function processDocumentsWithQueue(
}
)
await Promise.all(jobPayloads.map((payload) => dispatchDocumentProcessingJob(payload)))
const results = await Promise.allSettled(
jobPayloads.map((payload) => dispatchDocumentProcessingJob(payload))
)
const failures = results.filter((r): r is PromiseRejectedResult => r.status === 'rejected')
if (failures.length > 0) {
logger.error(`[${requestId}] ${failures.length}/${results.length} document dispatches failed`, {
errors: failures.map((f) =>
f.reason instanceof Error ? f.reason.message : String(f.reason)
),
})
}
logger.info(
`[${requestId}] Document dispatch complete: ${results.length - failures.length}/${results.length} succeeded`
)
if (failures.length === results.length) {
throw new Error(`All ${failures.length} document processing dispatches failed`)
}
logger.info(`[${requestId}] All documents dispatched for processing`)
return
}
@@ -434,6 +447,7 @@ export async function processDocumentAsync(
.set({
processingStatus: 'processing',
processingStartedAt: new Date(),
processingCompletedAt: null,
processingError: null,
})
.where(
@@ -624,8 +638,9 @@ export async function processDocumentAsync(
logger.info(`[${documentId}] Successfully processed document in ${processingTime}ms`)
} catch (error) {
const processingTime = Date.now() - startTime
const errorMessage = error instanceof Error ? error.message : 'Unknown error'
logger.error(`[${documentId}] Failed to process document after ${processingTime}ms:`, {
error: error instanceof Error ? error.message : 'Unknown error',
error: errorMessage,
stack: error instanceof Error ? error.stack : undefined,
filename: docData.filename,
fileUrl: docData.fileUrl,
@@ -636,10 +651,12 @@ export async function processDocumentAsync(
.update(document)
.set({
processingStatus: 'failed',
processingError: error instanceof Error ? error.message : 'Unknown error',
processingError: errorMessage,
processingCompletedAt: new Date(),
})
.where(eq(document.id, documentId))
throw error
}
}
@@ -1527,7 +1544,7 @@ export async function markDocumentAsFailedTimeout(
.update(document)
.set({
processingStatus: 'failed',
processingError: 'Processing timed out - background process may have been terminated',
processingError: 'Processing timed out. Please retry or re-sync the connector.',
processingCompletedAt: now,
})
.where(eq(document.id, documentId))


@@ -101,6 +101,8 @@ async function getEmbeddingConfig(
}
}
const EMBEDDING_REQUEST_TIMEOUT_MS = 60_000
async function callEmbeddingAPI(inputs: string[], config: EmbeddingConfig): Promise<number[][]> {
return retryWithExponentialBackoff(
async () => {
@@ -119,11 +121,15 @@ async function callEmbeddingAPI(inputs: string[], config: EmbeddingConfig): Prom
...(useDimensions && { dimensions: EMBEDDING_DIMENSIONS }),
}
const controller = new AbortController()
const timeout = setTimeout(() => controller.abort(), EMBEDDING_REQUEST_TIMEOUT_MS)
const response = await fetch(config.apiUrl, {
method: 'POST',
headers: config.headers,
body: JSON.stringify(requestBody),
})
signal: controller.signal,
}).finally(() => clearTimeout(timeout))
if (!response.ok) {
const errorText = await response.text()
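The AbortController timeout added above follows a reusable shape. A minimal sketch, with the fetch-like call injected so the pattern is testable offline (the injected function and its signature are illustrative, not the embedding client's API):

```typescript
// Sketch of the timeout pattern above: abort the request if it exceeds the
// deadline, and always clear the timer so it cannot keep the process alive.
// `doFetch` is a hypothetical stand-in for the real fetch call.
export async function fetchWithTimeout<T>(
  doFetch: (signal: AbortSignal) => Promise<T>,
  timeoutMs: number
): Promise<T> {
  const controller = new AbortController()
  const timer = setTimeout(() => controller.abort(), timeoutMs)
  try {
    return await doFetch(controller.signal)
  } finally {
    clearTimeout(timer)
  }
}
```

The `try`/`finally` here plays the same role as the `.finally(() => clearTimeout(timeout))` in the diff: the timer is cleared whether the request resolves, rejects, or is aborted.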


@@ -153,8 +153,9 @@ export class ExecutionLogger implements IExecutionLoggerService {
workflowState,
deploymentVersionId,
} = params
const execLog = logger.withMetadata({ workflowId, workspaceId, executionId })
logger.debug(`Starting workflow execution ${executionId} for workflow ${workflowId}`)
execLog.debug('Starting workflow execution')
// Check if execution log already exists (idempotency check)
const existingLog = await db
@@ -164,9 +165,7 @@ export class ExecutionLogger implements IExecutionLoggerService {
.limit(1)
if (existingLog.length > 0) {
logger.debug(
`Execution log already exists for ${executionId}, skipping duplicate INSERT (idempotent)`
)
execLog.debug('Execution log already exists, skipping duplicate INSERT (idempotent)')
const snapshot = await snapshotService.getSnapshot(existingLog[0].stateSnapshotId)
if (!snapshot) {
throw new Error(`Snapshot ${existingLog[0].stateSnapshotId} not found for existing log`)
@@ -228,7 +227,7 @@ export class ExecutionLogger implements IExecutionLoggerService {
})
.returning()
logger.debug(`Created workflow log ${workflowLog.id} for execution ${executionId}`)
execLog.debug('Created workflow log', { logId: workflowLog.id })
return {
workflowLog: {
@@ -298,13 +297,20 @@ export class ExecutionLogger implements IExecutionLoggerService {
status: statusOverride,
} = params
logger.debug(`Completing workflow execution ${executionId}`, { isResume })
let execLog = logger.withMetadata({ executionId })
execLog.debug('Completing workflow execution', { isResume })
const [existingLog] = await db
.select()
.from(workflowExecutionLogs)
.where(eq(workflowExecutionLogs.executionId, executionId))
.limit(1)
if (existingLog) {
execLog = execLog.withMetadata({
workflowId: existingLog.workflowId ?? undefined,
workspaceId: existingLog.workspaceId ?? undefined,
})
}
const billingUserId = this.extractBillingUserId(existingLog?.executionData)
const existingExecutionData = existingLog?.executionData as
| WorkflowExecutionLog['executionData']
@@ -507,10 +513,10 @@ export class ExecutionLogger implements IExecutionLoggerService {
billingUserId
)
} catch {}
logger.warn('Usage threshold notification check failed (non-fatal)', { error: e })
execLog.warn('Usage threshold notification check failed (non-fatal)', { error: e })
}
logger.debug(`Completed workflow execution ${executionId}`)
execLog.debug('Completed workflow execution')
const completedLog: WorkflowExecutionLog = {
id: updatedLog.id,
@@ -528,10 +534,7 @@ export class ExecutionLogger implements IExecutionLoggerService {
}
emitWorkflowExecutionCompleted(completedLog).catch((error) => {
logger.error('Failed to emit workflow execution completed event', {
error,
executionId,
})
execLog.error('Failed to emit workflow execution completed event', { error })
})
return completedLog
@@ -608,18 +611,20 @@ export class ExecutionLogger implements IExecutionLoggerService {
executionId?: string,
billingUserId?: string | null
): Promise<void> {
const statsLog = logger.withMetadata({ workflowId: workflowId ?? undefined, executionId })
if (!isBillingEnabled) {
logger.debug('Billing is disabled, skipping user stats cost update')
statsLog.debug('Billing is disabled, skipping user stats cost update')
return
}
if (costSummary.totalCost <= 0) {
logger.debug('No cost to update in user stats')
statsLog.debug('No cost to update in user stats')
return
}
if (!workflowId) {
logger.debug('Workflow was deleted, skipping user stats update')
statsLog.debug('Workflow was deleted, skipping user stats update')
return
}
@@ -631,16 +636,14 @@ export class ExecutionLogger implements IExecutionLoggerService {
.limit(1)
if (!workflowRecord) {
logger.error(`Workflow ${workflowId} not found for user stats update`)
statsLog.error('Workflow not found for user stats update')
return
}
const userId = billingUserId?.trim() || null
if (!userId) {
logger.error('Missing billing actor in execution context; skipping stats update', {
workflowId,
statsLog.error('Missing billing actor in execution context; skipping stats update', {
trigger,
executionId,
})
return
}
@@ -702,8 +705,7 @@ export class ExecutionLogger implements IExecutionLoggerService {
// Check if user has hit overage threshold and bill incrementally
await checkAndBillOverageThreshold(userId)
} catch (error) {
logger.error('Error updating user stats with cost information', {
workflowId,
statsLog.error('Error updating user stats with cost information', {
error,
costSummary,
})
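The `logger.withMetadata(...)` refactor above binds context once instead of repeating `workflowId`/`executionId` at every call site. A hypothetical sketch of that child-logger shape — this is not the `@sim/logger` implementation, just the pattern:

```typescript
// Hypothetical sketch of the withMetadata pattern used above: a child logger
// carries bound context so each call site logs workflowId/executionId without
// repeating them. Not the @sim/logger implementation.
type Meta = Record<string, unknown>

export class ContextLogger {
  constructor(private readonly meta: Meta = {}) {}

  // Returns a new logger with the extra fields merged in; the parent is unchanged.
  withMetadata(extra: Meta): ContextLogger {
    return new ContextLogger({ ...this.meta, ...extra })
  }

  debug(message: string, extra: Meta = {}): string {
    return JSON.stringify({ level: 'debug', message, ...this.meta, ...extra })
  }
}
```

Because `withMetadata` returns a new instance, later enrichment (as in `execLog = execLog.withMetadata({ workflowId, workspaceId })` once the row is loaded) composes without mutating earlier log lines.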


@@ -1504,7 +1504,9 @@ export async function refreshOAuthToken(
refreshToken: newRefreshToken || refreshToken, // Return new refresh token if available
}
} catch (error) {
logger.error('Error refreshing token:', { error })
logger.error('Error refreshing token:', {
error: error instanceof Error ? error.message : String(error),
})
return null
}
}
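The change above (logging `error.message` instead of the raw `error`) matters because `Error`'s `message` and `stack` properties are non-enumerable, so a JSON-serializing log sink turns a bare `Error` into `{}`. A small sketch of the normalization:

```typescript
// Why normalize before logging: Error's message and stack are non-enumerable,
// so JSON.stringify(new Error('boom')) yields '{}' and the detail is lost.
export function serializeError(error: unknown): string {
  return error instanceof Error ? error.message : String(error)
}
```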


@@ -15,8 +15,10 @@ import {
import { normalizeVfsSegment } from '@/lib/copilot/vfs/normalize-segment'
import { getPostgresErrorCode } from '@/lib/core/utils/pg-error'
import { generateRestoreName } from '@/lib/core/utils/restore-name'
import { getServePathPrefix } from '@/lib/uploads'
import { downloadFile, hasCloudStorage, uploadFile } from '@/lib/uploads/core/storage-service'
import { getFileMetadataByKey, insertFileMetadata } from '@/lib/uploads/server/metadata'
import { getWorkspaceWithOwner } from '@/lib/workspaces/permissions/utils'
import { isUuid, sanitizeFileName } from '@/executor/constants'
import type { UserFile } from '@/executor/types'
@@ -221,7 +223,6 @@ export async function uploadWorkspaceFile(
logger.error(`Failed to update storage tracking:`, storageError)
}
const { getServePathPrefix } = await import('@/lib/uploads')
const pathPrefix = getServePathPrefix()
const serveUrl = `${pathPrefix}${encodeURIComponent(uploadResult.key)}?context=workspace`
@@ -336,6 +337,47 @@ export async function fileExistsInWorkspace(
}
}
/**
* Look up a single active workspace file by its original name.
* Returns the record if found, or null if no matching file exists.
* Throws on DB errors so callers can distinguish "not found" from "lookup failed."
*/
export async function getWorkspaceFileByName(
workspaceId: string,
fileName: string
): Promise<WorkspaceFileRecord | null> {
const files = await db
.select()
.from(workspaceFiles)
.where(
and(
eq(workspaceFiles.workspaceId, workspaceId),
eq(workspaceFiles.originalName, fileName),
eq(workspaceFiles.context, 'workspace'),
isNull(workspaceFiles.deletedAt)
)
)
.limit(1)
if (files.length === 0) return null
const pathPrefix = getServePathPrefix()
const file = files[0]
return {
id: file.id,
workspaceId: file.workspaceId || workspaceId,
name: file.originalName,
key: file.key,
path: `${pathPrefix}${encodeURIComponent(file.key)}?context=workspace`,
size: file.size,
type: file.contentType,
uploadedBy: file.userId,
deletedAt: file.deletedAt,
uploadedAt: file.uploadedAt,
}
}
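The `path` field above is built by URL-encoding the storage key onto a serve-path prefix. A minimal sketch of that construction (the prefix value in the test is an assumption for illustration; the real one comes from `getServePathPrefix`):

```typescript
// Sketch of the serve-URL construction used above: the storage key is
// URL-encoded (it may contain slashes and spaces) and tagged with the
// workspace context query parameter.
export function buildServeUrl(pathPrefix: string, key: string): string {
  return `${pathPrefix}${encodeURIComponent(key)}?context=workspace`
}
```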
/**
* List all files for a workspace
*/
@@ -368,7 +410,6 @@ export async function listWorkspaceFiles(
)
.orderBy(workspaceFiles.uploadedAt)
const { getServePathPrefix } = await import('@/lib/uploads')
const pathPrefix = getServePathPrefix()
return files.map((file) => ({
@@ -493,7 +534,6 @@ export async function getWorkspaceFile(
if (files.length === 0) return null
const { getServePathPrefix } = await import('@/lib/uploads')
const pathPrefix = getServePathPrefix()
const file = files[0]
@@ -731,7 +771,6 @@ export async function restoreWorkspaceFile(workspaceId: string, fileId: string):
throw new Error('File is not archived')
}
const { getWorkspaceWithOwner } = await import('@/lib/workspaces/permissions/utils')
const ws = await getWorkspaceWithOwner(workspaceId)
if (!ws || ws.archivedAt) {
throw new Error('Cannot restore file into an archived workspace')


@@ -1325,7 +1325,7 @@ export async function queueWebhookExecution(
`[${options.requestId}] Queued ${foundWebhook.provider} webhook execution ${jobId} via inline backend`
)
if (shouldExecuteInline()) {
if (!isBullMQEnabled()) {
void (async () => {
try {
await jobQueue.startJob(jobId)


@@ -0,0 +1,206 @@
import crypto from 'crypto'
import { db } from '@sim/db'
import { chat } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, isNull } from 'drizzle-orm'
import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import { encryptSecret } from '@/lib/core/security/encryption'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { performFullDeploy } from '@/lib/workflows/orchestration/deploy'
const logger = createLogger('ChatDeployOrchestration')
export interface ChatDeployPayload {
workflowId: string
userId: string
identifier: string
title: string
description?: string
customizations?: { primaryColor?: string; welcomeMessage?: string; imageUrl?: string }
authType?: 'public' | 'password' | 'email' | 'sso'
password?: string | null
allowedEmails?: string[]
outputConfigs?: Array<{ blockId: string; path: string }>
workspaceId?: string | null
}
export interface PerformChatDeployResult {
success: boolean
chatId?: string
chatUrl?: string
error?: string
}
/**
* Deploys a chat: deploys the underlying workflow via `performFullDeploy`,
* encrypts passwords, creates or updates the chat record, fires telemetry,
* and records an audit entry. Both the chat API route and the copilot
* `deploy_chat` tool must use this function.
*/
export async function performChatDeploy(
params: ChatDeployPayload
): Promise<PerformChatDeployResult> {
const {
workflowId,
userId,
identifier,
title,
description = '',
authType = 'public',
password,
allowedEmails = [],
outputConfigs = [],
} = params
const customizations = {
primaryColor: params.customizations?.primaryColor || 'var(--brand-hover)',
welcomeMessage: params.customizations?.welcomeMessage || 'Hi there! How can I help you today?',
...(params.customizations?.imageUrl ? { imageUrl: params.customizations.imageUrl } : {}),
}
const deployResult = await performFullDeploy({ workflowId, userId })
if (!deployResult.success) {
return { success: false, error: deployResult.error || 'Failed to deploy workflow' }
}
let encryptedPassword: string | null = null
if (authType === 'password' && password) {
const { encrypted } = await encryptSecret(password)
encryptedPassword = encrypted
}
const [existingDeployment] = await db
.select()
.from(chat)
.where(and(eq(chat.workflowId, workflowId), isNull(chat.archivedAt)))
.limit(1)
let chatId: string
if (existingDeployment) {
chatId = existingDeployment.id
let passwordToStore: string | null
if (authType === 'password') {
passwordToStore = encryptedPassword || existingDeployment.password
} else {
passwordToStore = null
}
await db
.update(chat)
.set({
identifier,
title,
description: description || null,
customizations,
authType,
password: passwordToStore,
allowedEmails: authType === 'email' || authType === 'sso' ? allowedEmails : [],
outputConfigs,
updatedAt: new Date(),
})
.where(eq(chat.id, chatId))
} else {
chatId = crypto.randomUUID()
await db.insert(chat).values({
id: chatId,
workflowId,
userId,
identifier,
title,
description: description || null,
customizations,
isActive: true,
authType,
password: encryptedPassword,
allowedEmails: authType === 'email' || authType === 'sso' ? allowedEmails : [],
outputConfigs,
createdAt: new Date(),
updatedAt: new Date(),
})
}
const baseUrl = getBaseUrl()
let chatUrl: string
try {
const url = new URL(baseUrl)
let host = url.host
if (host.startsWith('www.')) {
host = host.substring(4)
}
chatUrl = `${url.protocol}//${host}/chat/${identifier}`
} catch {
chatUrl = `${baseUrl}/chat/${identifier}`
}
logger.info(`Chat "${title}" deployed successfully at ${chatUrl}`)
try {
const { PlatformEvents } = await import('@/lib/core/telemetry')
PlatformEvents.chatDeployed({
chatId,
workflowId,
authType,
hasOutputConfigs: outputConfigs.length > 0,
})
} catch (_e) {
// Telemetry is best-effort
}
recordAudit({
workspaceId: params.workspaceId || null,
actorId: userId,
action: AuditAction.CHAT_DEPLOYED,
resourceType: AuditResourceType.CHAT,
resourceId: chatId,
resourceName: title,
description: `Deployed chat "${title}"`,
metadata: { workflowId, identifier, authType },
})
return { success: true, chatId, chatUrl }
}
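The chat-URL block inside `performChatDeploy` normalizes the host (dropping a leading `www.`) and falls back to plain concatenation when the base URL cannot be parsed. Extracted as a standalone sketch:

```typescript
// Sketch of the URL normalization above: strip a leading "www." from the
// host, preserve the protocol, and fall back to string concatenation when
// the base URL is not parsable.
export function buildChatUrl(baseUrl: string, identifier: string): string {
  try {
    const url = new URL(baseUrl)
    const host = url.host.startsWith('www.') ? url.host.slice(4) : url.host
    return `${url.protocol}//${host}/chat/${identifier}`
  } catch {
    return `${baseUrl}/chat/${identifier}`
  }
}
```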
export interface PerformChatUndeployParams {
chatId: string
userId: string
workspaceId?: string | null
}
export interface PerformChatUndeployResult {
success: boolean
error?: string
}
/**
* Undeploys a chat: deletes the chat record and records an audit entry.
* Both the chat manage DELETE route and the copilot `deploy_chat` undeploy
* action must use this function.
*/
export async function performChatUndeploy(
params: PerformChatUndeployParams
): Promise<PerformChatUndeployResult> {
const { chatId, userId, workspaceId } = params
const [chatRecord] = await db.select().from(chat).where(eq(chat.id, chatId)).limit(1)
if (!chatRecord) {
return { success: false, error: 'Chat not found' }
}
await db.delete(chat).where(eq(chat.id, chatId))
logger.info(`Chat "${chatId}" deleted successfully`)
recordAudit({
workspaceId: workspaceId || null,
actorId: userId,
action: AuditAction.CHAT_DELETED,
resourceType: AuditResourceType.CHAT,
resourceId: chatId,
resourceName: chatRecord.title || chatId,
description: `Deleted chat deployment "${chatRecord.title || chatId}"`,
})
return { success: true }
}


@@ -0,0 +1,484 @@
import { db, workflowDeploymentVersion, workflow as workflowTable } from '@sim/db'
import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { NextRequest } from 'next/server'
import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import { generateRequestId } from '@/lib/core/utils/request'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { removeMcpToolsForWorkflow, syncMcpToolsForWorkflow } from '@/lib/mcp/workflow-mcp-sync'
import {
cleanupWebhooksForWorkflow,
restorePreviousVersionWebhooks,
saveTriggerWebhooksForDeploy,
} from '@/lib/webhooks/deploy'
import type { OrchestrationErrorCode } from '@/lib/workflows/orchestration/types'
import {
activateWorkflowVersion,
activateWorkflowVersionById,
deployWorkflow,
loadWorkflowFromNormalizedTables,
undeployWorkflow,
} from '@/lib/workflows/persistence/utils'
import {
cleanupDeploymentVersion,
createSchedulesForDeploy,
validateWorkflowSchedules,
} from '@/lib/workflows/schedules'
const logger = createLogger('DeployOrchestration')
export interface PerformFullDeployParams {
workflowId: string
userId: string
workflowName?: string
requestId?: string
/**
* Optional NextRequest for external webhook subscriptions.
* If not provided, a synthetic request is constructed from the base URL.
*/
request?: NextRequest
/**
* Override the actor ID used in audit logs and the `deployedBy` field.
* Defaults to `userId`. Use `'admin-api'` for admin-initiated actions.
*/
actorId?: string
}
export interface PerformFullDeployResult {
success: boolean
deployedAt?: Date
version?: number
deploymentVersionId?: string
error?: string
errorCode?: OrchestrationErrorCode
warnings?: string[]
}
/**
* Performs a full workflow deployment: creates a deployment version, syncs
* trigger webhooks, creates schedules, cleans up the previous version, and
* syncs MCP tools. Both the deploy API route and the copilot deploy tools
* must use this single function so behaviour stays consistent.
*/
export async function performFullDeploy(
params: PerformFullDeployParams
): Promise<PerformFullDeployResult> {
const { workflowId, userId, workflowName } = params
const actorId = params.actorId ?? userId
const requestId = params.requestId ?? generateRequestId()
const request = params.request ?? new NextRequest(new URL('/api/webhooks', getBaseUrl()))
const normalizedData = await loadWorkflowFromNormalizedTables(workflowId)
if (!normalizedData) {
return { success: false, error: 'Failed to load workflow state', errorCode: 'not_found' }
}
const scheduleValidation = validateWorkflowSchedules(normalizedData.blocks)
if (!scheduleValidation.isValid) {
return {
success: false,
error: `Invalid schedule configuration: ${scheduleValidation.error}`,
errorCode: 'validation',
}
}
const [workflowRecord] = await db
.select()
.from(workflowTable)
.where(eq(workflowTable.id, workflowId))
.limit(1)
if (!workflowRecord) {
return { success: false, error: 'Workflow not found', errorCode: 'not_found' }
}
const workflowData = workflowRecord as Record<string, unknown>
const [currentActiveVersion] = await db
.select({ id: workflowDeploymentVersion.id })
.from(workflowDeploymentVersion)
.where(
and(
eq(workflowDeploymentVersion.workflowId, workflowId),
eq(workflowDeploymentVersion.isActive, true)
)
)
.limit(1)
const previousVersionId = currentActiveVersion?.id
const rollbackDeployment = async () => {
if (previousVersionId) {
await restorePreviousVersionWebhooks({
request,
workflow: workflowData,
userId,
previousVersionId,
requestId,
})
const reactivateResult = await activateWorkflowVersionById({
workflowId,
deploymentVersionId: previousVersionId,
})
if (reactivateResult.success) return
}
await undeployWorkflow({ workflowId })
}
const deployResult = await deployWorkflow({
workflowId,
deployedBy: actorId,
workflowName: workflowName || workflowRecord.name || undefined,
})
if (!deployResult.success) {
return { success: false, error: deployResult.error || 'Failed to deploy workflow' }
}
const deployedAt = deployResult.deployedAt!
const deploymentVersionId = deployResult.deploymentVersionId
if (!deploymentVersionId) {
await undeployWorkflow({ workflowId })
return { success: false, error: 'Failed to resolve deployment version' }
}
const triggerSaveResult = await saveTriggerWebhooksForDeploy({
request,
workflowId,
workflow: workflowData,
userId,
blocks: normalizedData.blocks,
requestId,
deploymentVersionId,
previousVersionId,
})
if (!triggerSaveResult.success) {
await cleanupDeploymentVersion({
workflowId,
workflow: workflowData,
requestId,
deploymentVersionId,
})
await rollbackDeployment()
return {
success: false,
error: triggerSaveResult.error?.message || 'Failed to save trigger configuration',
}
}
const scheduleResult = await createSchedulesForDeploy(
workflowId,
normalizedData.blocks,
db,
deploymentVersionId
)
if (!scheduleResult.success) {
logger.error(`[${requestId}] Failed to create schedule: ${scheduleResult.error}`)
await cleanupDeploymentVersion({
workflowId,
workflow: workflowData,
requestId,
deploymentVersionId,
})
await rollbackDeployment()
return { success: false, error: scheduleResult.error || 'Failed to create schedule' }
}
if (previousVersionId && previousVersionId !== deploymentVersionId) {
try {
await cleanupDeploymentVersion({
workflowId,
workflow: workflowData,
requestId,
deploymentVersionId: previousVersionId,
skipExternalCleanup: true,
})
} catch (cleanupError) {
logger.error(`[${requestId}] Failed to clean up previous version`, cleanupError)
}
}
await syncMcpToolsForWorkflow({ workflowId, requestId, context: 'deploy' })
recordAudit({
workspaceId: (workflowData.workspaceId as string) || null,
actorId: actorId,
action: AuditAction.WORKFLOW_DEPLOYED,
resourceType: AuditResourceType.WORKFLOW,
resourceId: workflowId,
resourceName: (workflowData.name as string) || undefined,
description: `Deployed workflow "${(workflowData.name as string) || workflowId}"`,
metadata: { version: deploymentVersionId },
request,
})
return {
success: true,
deployedAt,
version: deployResult.version,
deploymentVersionId,
warnings: triggerSaveResult.warnings,
}
}
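`performFullDeploy` above follows a compensation pattern: each side effect (deployment version, webhooks, schedules) that fails triggers cleanup of what already succeeded, via `cleanupDeploymentVersion` and `rollbackDeployment`. A generic sketch of that shape, with hypothetical step names — this is an illustration of the control flow, not the repository's API:

```typescript
// Compensation sketch of the deploy flow above: each step that succeeds
// pushes an undo action; on failure the undos run in reverse order, and
// rollback itself is best-effort.
export async function runWithRollback(
  steps: Array<{ run: () => Promise<void>; undo: () => Promise<void> }>
): Promise<{ success: boolean; error?: string }> {
  const done: Array<() => Promise<void>> = []
  for (const step of steps) {
    try {
      await step.run()
      done.push(step.undo)
    } catch (error) {
      for (const undo of done.reverse()) {
        try {
          await undo()
        } catch {
          // Rollback is best-effort, mirroring the cleanup catch blocks above.
        }
      }
      return {
        success: false,
        error: error instanceof Error ? error.message : String(error),
      }
    }
  }
  return { success: true }
}
```

Usage: a deploy step followed by a failing webhook step would leave only the deploy's undo executed, matching how `rollbackDeployment` restores the previous version when `saveTriggerWebhooksForDeploy` fails.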
export interface PerformFullUndeployParams {
workflowId: string
userId: string
requestId?: string
/** Override the actor ID used in audit logs. Defaults to `userId`. */
actorId?: string
}
export interface PerformFullUndeployResult {
success: boolean
error?: string
}
/**
* Performs a full workflow undeploy: marks the workflow as undeployed, cleans up
* webhook records and external subscriptions, removes MCP tools, emits a
* telemetry event, and records an audit log entry. Both the deploy API DELETE
* handler and the copilot undeploy tools must use this single function.
*/
export async function performFullUndeploy(
params: PerformFullUndeployParams
): Promise<PerformFullUndeployResult> {
const { workflowId, userId } = params
const actorId = params.actorId ?? userId
const requestId = params.requestId ?? generateRequestId()
const [workflowRecord] = await db
.select()
.from(workflowTable)
.where(eq(workflowTable.id, workflowId))
.limit(1)
if (!workflowRecord) {
return { success: false, error: 'Workflow not found' }
}
const workflowData = workflowRecord as Record<string, unknown>
const result = await undeployWorkflow({ workflowId })
if (!result.success) {
return { success: false, error: result.error || 'Failed to undeploy workflow' }
}
await cleanupWebhooksForWorkflow(workflowId, workflowData, requestId)
await removeMcpToolsForWorkflow(workflowId, requestId)
logger.info(`[${requestId}] Workflow undeployed successfully: ${workflowId}`)
try {
const { PlatformEvents } = await import('@/lib/core/telemetry')
PlatformEvents.workflowUndeployed({ workflowId })
} catch (_e) {
// Telemetry is best-effort
}
recordAudit({
workspaceId: (workflowData.workspaceId as string) || null,
actorId: actorId,
action: AuditAction.WORKFLOW_UNDEPLOYED,
resourceType: AuditResourceType.WORKFLOW,
resourceId: workflowId,
resourceName: (workflowData.name as string) || undefined,
description: `Undeployed workflow "${(workflowData.name as string) || workflowId}"`,
})
return { success: true }
}
export interface PerformActivateVersionParams {
workflowId: string
version: number
userId: string
workflow: Record<string, unknown>
requestId?: string
request?: NextRequest
/** Override the actor ID used in audit logs. Defaults to `userId`. */
actorId?: string
}
export interface PerformActivateVersionResult {
success: boolean
deployedAt?: Date
error?: string
errorCode?: OrchestrationErrorCode
warnings?: string[]
}
/**
* Activates an existing deployment version: validates schedules, syncs trigger
* webhooks (with forced subscription recreation), creates schedules, activates
* the version, cleans up the previous version, syncs MCP tools, and records
* an audit entry. Both the deployment version PATCH handler and the admin
* activate route must use this function.
*/
export async function performActivateVersion(
params: PerformActivateVersionParams
): Promise<PerformActivateVersionResult> {
const { workflowId, version, userId, workflow } = params
const actorId = params.actorId ?? userId
const requestId = params.requestId ?? generateRequestId()
const request = params.request ?? new NextRequest(new URL('/api/webhooks', getBaseUrl()))
const [versionRow] = await db
.select({
id: workflowDeploymentVersion.id,
state: workflowDeploymentVersion.state,
})
.from(workflowDeploymentVersion)
.where(
and(
eq(workflowDeploymentVersion.workflowId, workflowId),
eq(workflowDeploymentVersion.version, version)
)
)
.limit(1)
if (!versionRow?.state) {
return { success: false, error: 'Deployment version not found', errorCode: 'not_found' }
}
const deployedState = versionRow.state as { blocks?: Record<string, unknown> }
const blocks = deployedState.blocks
if (!blocks || typeof blocks !== 'object') {
return { success: false, error: 'Invalid deployed state structure', errorCode: 'validation' }
}
const [currentActiveVersion] = await db
.select({ id: workflowDeploymentVersion.id })
.from(workflowDeploymentVersion)
.where(
and(
eq(workflowDeploymentVersion.workflowId, workflowId),
eq(workflowDeploymentVersion.isActive, true)
)
)
.limit(1)
const previousVersionId = currentActiveVersion?.id
const scheduleValidation = validateWorkflowSchedules(
blocks as Record<string, import('@/stores/workflows/workflow/types').BlockState>
)
if (!scheduleValidation.isValid) {
return {
success: false,
error: `Invalid schedule configuration: ${scheduleValidation.error}`,
errorCode: 'validation',
}
}
const triggerSaveResult = await saveTriggerWebhooksForDeploy({
request,
workflowId,
workflow,
userId,
blocks: blocks as Record<string, import('@/stores/workflows/workflow/types').BlockState>,
requestId,
deploymentVersionId: versionRow.id,
previousVersionId,
forceRecreateSubscriptions: true,
})
if (!triggerSaveResult.success) {
if (previousVersionId) {
await restorePreviousVersionWebhooks({
request,
workflow,
userId,
previousVersionId,
requestId,
})
}
return {
success: false,
error: triggerSaveResult.error?.message || 'Failed to sync trigger configuration',
}
}
const scheduleResult = await createSchedulesForDeploy(
workflowId,
blocks as Record<string, import('@/stores/workflows/workflow/types').BlockState>,
db,
versionRow.id
)
if (!scheduleResult.success) {
await cleanupDeploymentVersion({
workflowId,
workflow,
requestId,
deploymentVersionId: versionRow.id,
})
if (previousVersionId) {
await restorePreviousVersionWebhooks({
request,
workflow,
userId,
previousVersionId,
requestId,
})
}
return { success: false, error: scheduleResult.error || 'Failed to sync schedules' }
}
const result = await activateWorkflowVersion({ workflowId, version })
if (!result.success) {
await cleanupDeploymentVersion({
workflowId,
workflow,
requestId,
deploymentVersionId: versionRow.id,
})
if (previousVersionId) {
await restorePreviousVersionWebhooks({
request,
workflow,
userId,
previousVersionId,
requestId,
})
}
return { success: false, error: result.error || 'Failed to activate version' }
}
if (previousVersionId && previousVersionId !== versionRow.id) {
try {
await cleanupDeploymentVersion({
workflowId,
workflow,
requestId,
deploymentVersionId: previousVersionId,
skipExternalCleanup: true,
})
} catch (cleanupError) {
logger.error(`[${requestId}] Failed to clean up previous version`, cleanupError)
}
}
await syncMcpToolsForWorkflow({
workflowId,
requestId,
state: versionRow.state as { blocks?: Record<string, unknown> },
context: 'activate',
})
recordAudit({
workspaceId: (workflow.workspaceId as string) || null,
actorId: actorId,
action: AuditAction.WORKFLOW_DEPLOYMENT_ACTIVATED,
resourceType: AuditResourceType.WORKFLOW,
resourceId: workflowId,
description: `Activated deployment version ${version}`,
metadata: { version },
})
return {
success: true,
deployedAt: result.deployedAt,
warnings: triggerSaveResult.warnings,
}
}

Some files were not shown because too many files have changed in this diff.