Compare commits


78 Commits

Author SHA1 Message Date
Theodore Li
ff767438bc Fix edge deletion in nested subflows 2026-04-04 19:28:36 -07:00
Waleed Latif
2c12f499a6 fix(subflows): make edges inside subflows directly clickable
Edges inside subflows defaulted to z-index 0, causing the subflow body
area (pointer-events: auto) to intercept clicks. Derive edge z-index
from the container's depth so edges sit just above their parent container
but below canvas blocks and child blocks.
2026-04-04 19:05:51 -07:00
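A minimal sketch of the approach this message describes, with illustrative names and constants (the actual layering values in the canvas code may differ):

```ts
// Hypothetical sketch: derive an edge's z-index from its parent container's
// nesting depth so it renders above the subflow body but below blocks.
const LAYER_STEP = 10 // illustrative spacing between nesting levels

function getEdgeZIndex(containerDepth: number): number {
  // depth 0 = top-level canvas; edges in deeper subflows get progressively higher layers
  return containerDepth * LAYER_STEP + 1 // just above the parent container body
}
```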
Waleed
d0baf5b1df feat(cloudformation): add AWS CloudFormation integration with 7 operations (#3964)
* feat(cloudformation): add AWS CloudFormation integration with 7 operations

* fix(cloudformation): add pagination to list-stack-resources, describe-stacks, and describe-stack-events routes
2026-04-04 18:39:02 -07:00
Theodore Li
855c892f55 feat(block): Add cloudwatch block (#3953)
* feat(block): Add cloudwatch block (#3911)

* feat(block): add cloudwatch integration

* Fix bun lock

* Add logger, use execution timeout

* Switch metric dimensions to map style input

* Fix attribute names for dimension map

* Fix import styling

---------

Co-authored-by: Theodore Li <theo@sim.ai>

* Fix import ordering

---------

Co-authored-by: Theodore Li <theo@sim.ai>
2026-04-04 19:54:12 -04:00
Waleed
8ae4b88d80 fix(integrations): show disabled role combobox for readonly members (#3962) 2026-04-04 16:50:11 -07:00
Waleed
a70ccddef5 fix(kb): fix Linear connector GraphQL type errors and tag slot reuse (#3961)
* fix(kb): fix Linear connector GraphQL type errors and tag slot reuse

* fix(kb): simplify tag slot reuse, revert Linear GraphQL types to String

Clean up newTagSlotMapping into direct assignment, remove unnecessary
comment, and revert ID! back to String! to match Linear SDK types.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(kb): use ID! type for Linear GraphQL filter variables

* fix(kb): verify field type when reusing existing tag slots

Add fieldType check to the tag slot reuse logic so a connector with
a matching displayName but different fieldType falls through to fresh
slot allocation instead of silently reusing an incompatible slot.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(kb): enable search on connector selector dropdowns

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-04 16:50:04 -07:00
Waleed
b4d9b8c396 feat(analytics): posthog audit — remove noise, add 10 new events (#3960)
* feat(analytics): posthog audit — remove noise, add 10 new events

Remove task_marked_read (fires automatically on every task view).

Add workspace_id to task_message_sent for group analytics.

New events:
- search_result_selected: block/tool/trigger/workflow/table/file/
  knowledge_base/workspace/task/page/docs with query_length
- workflow_imported: count + format (json/zip)
- workflow_exported: count + format (json/zip)
- folder_created / folder_deleted
- logs_filter_applied: status/workflow/folder/trigger/time
- knowledge_base_document_deleted
- scheduled_task_created / scheduled_task_deleted

* fix(analytics): use usePostHog + captureEvent in hooks, track custom date range

* fix(analytics): always fire scheduled_task_deleted regardless of workspaceId

* fix(analytics): correct format field logic and add missing useCallback deps
2026-04-04 16:49:52 -07:00
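For reference, a hedged sketch of how one of the events listed above might be captured from a React component with posthog-js; the event name and the `count`/`format` properties come from the list, while the hook wrapper and exact property names are illustrative:

```ts
import { usePostHog } from 'posthog-js/react'

// Illustrative capture of the workflow_imported event described above.
function useTrackWorkflowImport() {
  const posthog = usePostHog()
  return (count: number, format: 'json' | 'zip') => {
    posthog.capture('workflow_imported', { count, format })
  }
}
```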
Waleed
ce53275e9d feat(knowledge): add Live sync option to KB connectors + fix embedding billing (#3959)
* feat(knowledge): add Live sync option to KB connector modal for Max/Enterprise users

Adds a "Live" (every 5 min) sync frequency option gated to Max and Enterprise plan users.
Includes client-side badge + disabled state, shared sync intervals constant, and server-side
plan validation on both POST and PATCH connector routes.

* fix(knowledge): record embedding usage cost for KB document processing

Adds billing tracking to the KB embedding pipeline, which was previously
generating OpenAI API calls with no cost recorded. Token counts are now
captured from the actual API response and recorded via recordUsage after
successful embedding insertion. BYOK workspaces are excluded from billing.
Applies to all execution paths: direct, BullMQ, and Trigger.dev.

* fix(knowledge): simplify embedding billing — use calculateCost, return modelName

- Use calculateCost() from @/providers/utils instead of inline formula, consistent
  with how LLM billing works throughout the platform
- Return modelName from GenerateEmbeddingsResult so billing uses the actual model
  (handles custom Azure deployments) instead of a hardcoded fallback string
- Fix docs-chunker.ts empty-path fallback to satisfy full GenerateEmbeddingsResult type

* fix(knowledge): remove dev bypass from hasLiveSyncAccess

* chore(knowledge): rename sync-intervals to consts, fix stale TSDoc comment

* improvement(knowledge): extract MaxBadge component, capture billing config once per document

* fix(knowledge): add knowledge-base to usage_log_source enum, fix docs-chunker type

* fix(knowledge): generate migration for knowledge-base usage_log_source enum value

* fix(knowledge): add knowledge-base to usage_log_source enum via drizzle-kit

* fix(knowledge): fix search embedding test mocks, parallelize billing lookups

* fix(knowledge): warn when embedding model has no pricing entry

* fix(knowledge): call checkAndBillOverageThreshold after embedding usage
2026-04-04 16:49:42 -07:00
abhinavDhulipala
7971a64e63 fix(setup): db migrate hard fail and correct ini env (#3946) 2026-04-04 16:22:19 -07:00
abhinavDhulipala
f39b4c74dc fix(setup): bun run prepare explicitly (#3947) 2026-04-04 16:13:53 -07:00
Waleed
0ba8ab1ec7 fix(posthog): upgrade SDKs and fix serverless event flushing (#3951)
* fix(posthog): upgrade SDKs and fix serverless event flushing

* fix(posthog): revert flushAt to 20 for long-running ECS container
2026-04-04 16:11:35 -07:00
Waleed
039e57541e fix(csp): allow Cloudflare Turnstile domains for script, frame, and connect (#3948) 2026-04-04 15:54:14 -07:00
Theodore Li
75f8c6ad7e fix(ui): persist active resource tab in url, fix internal markdown links (#3925)
* fix(ui): handle markdown internal links

* Fix lint

* Reference correct scroll container

* Add resource tab to url state, scroll correctly on new tab

* Handle delete all resource by clearing url

---------

Co-authored-by: Theodore Li <theo@sim.ai>
2026-04-04 18:25:35 -04:00
Waleed
c2b12cf21f fix(captcha): use getResponsePromise for Turnstile execute-on-submit flow (#3943) 2026-04-04 12:34:53 -07:00
Waleed
4a9439e952 improvement(models): tighten model metadata and crawl discovery (#3942)
* improvement(models): tighten model metadata and crawl discovery

Made-with: Cursor

* revert hardcoded FF

* fix(models): narrow structured output ranking signal

Made-with: Cursor

* fix(models): remove generic best-for copy

Made-with: Cursor

* fix(models): restore best-for with stricter criteria

Made-with: Cursor

* fix

* models
2026-04-04 11:53:54 -07:00
Waleed
893e322a49 fix(envvars): restore workflowUserId fallback for scheduled execution env var resolution (#3941)
* fix(envvars): restore workflowUserId fallback for scheduled execution env var resolution

* test(envvars): add coverage for env var user resolution branches
2026-04-04 11:22:52 -07:00
Emir Karabeg
b0cb95be2f feat: mothership/copilot feedback (#3940)
* feat: mothership/copilot feedback

* fix(feedback): remove mutation object from useCallback deps
2026-04-04 10:46:49 -07:00
Waleed
6d00d6bf2c fix(modals): center modals in visible content area and remove open/close animation (#3937)
* fix(modals): center modals in visible content area accounting for sidebar and panel

* fix(modals): address pr feedback — comment clarity and document panel assumption

* fix(modals): remove open/close animation from modal content
2026-04-03 20:06:10 -07:00
Waleed
3267d8cc24 fix(modals): center modals in visible content area accounting for sidebar and panel (#3934)
* fix(modals): center modals in visible content area accounting for sidebar and panel

* fix(modals): address pr feedback — comment clarity and document panel assumption
2026-04-03 19:19:36 -07:00
Theodore Li
2e69f85364 Fix "fix in copilot" button (#3931)
* Fix "fix in copilot" button

* Auto send message to copilot for fix in copilot

---------

Co-authored-by: Theodore Li <theo@sim.ai>
2026-04-03 22:11:45 -04:00
Waleed
57e5bac121 fix(mcp): resolve userId before JWT generation for agent block auth (#3932)
* fix(mcp): resolve userId before JWT generation for agent block auth

* test(mcp): add regression test for agent block JWT userId resolution
2026-04-03 19:05:10 -07:00
Theodore Li
8ce0299400 fix(ui) Fix oauth redirect on connector modal (#3926)
* Fix oauth redirect on connector modal

* Fix lint

---------

Co-authored-by: Theodore Li <theo@sim.ai>
2026-04-03 21:58:42 -04:00
Vikhyath Mondreti
a0796f088b improvement(mothership): workflow edits via sockets (#3927)
* improvement(mothership): workflow edits via sockets

* make embedded view join room

* fix cursor positioning bug
2026-04-03 18:44:14 -07:00
Waleed
98fe4cd40b refactor(stores): consolidate variables stores into stores/variables/ (#3930)
* refactor(stores): consolidate variables stores into stores/variables/

Move variable data store from stores/panel/variables/ to stores/variables/
since the panel variables tab no longer exists. Rename the modal UI store
to useVariablesModalStore to eliminate naming collision with the data store.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: remove unused workflowId variable in deleteVariable

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-03 18:43:47 -07:00
Waleed
34d210c66c chore(stores): remove Zustand environment store and dead init scaffolding (#3929) 2026-04-03 17:54:49 -07:00
Waleed
2334f2dca4 fix(loading): remove jarring workflow loading spinners (#3928)
* fix(loading): remove jarring workflow loading spinners

* fix(loading): remove home page skeleton loading state

* fix(loading): remove plain spinner loading states from task and file view
2026-04-03 17:45:30 -07:00
Waleed
65fc138bfc improvement(stores): remove deployment state from Zustand in favor of React Query (#3923) 2026-04-03 17:44:10 -07:00
Waleed
e8f7fe0989 v0.6.22: agentmail, rootly, landing fixes, analytics, credentials block 2026-04-03 01:14:36 -07:00
Waleed
ace87791d8 feat(analytics): add PostHog product analytics (#3910)
* feat(analytics): add PostHog product analytics

* fix(posthog): fix workspace group via URL params, type errors, and clean up comments

* fix(posthog): address PR review - fix pre-tx event, auth_method, paused executions, enterprise cancellation, settings double-fire

* chore(posthog): remove unused identifyServerPerson

* fix(posthog): isolate processQueuedResumes errors, simplify settings posthog deps

* fix(posthog): correctly classify SSO auth_method, fix phantom empty-string workspace groups

* fix(posthog): remove usePostHog from memo'd TemplateCard, fix copilot chat phantom workspace group

* fix(posthog): eliminate all remaining phantom empty-string workspace groups

* fix(posthog): fix cancel route phantom group, remove redundant workspaceId shadow in catch block

* fix(posthog): use ids.length for block_removed guard to handle container blocks with descendants

* chore(posthog): remove unused removedBlockTypes variable

* fix(posthog): remove phantom $set person properties from subscription events

* fix(posthog): add passedKnowledgeBaseName to knowledge_base_opened effect deps

* fix(posthog): capture currentWorkflowId synchronously before async import to avoid stale closure

* fix(posthog): add typed captureEvent wrapper for React components, deduplicate copilot_panel_opened

* feat(posthog): add task_created and task_message_sent events, remove copilot_panel_opened

* feat(posthog): track task_renamed, task_deleted, task_marked_read, task_marked_unread

* feat(analytics): expand posthog event coverage with source tracking and lifecycle events

* fix(analytics): flush posthog events on SIGTERM before ECS task termination

* fix(analytics): fix posthog in useCallback deps and fire block events for bulk operations
2026-04-03 01:00:35 -07:00
Waleed
74af452175 feat(blocks): add Credential block (#3907)
* feat(blocks): add Credential block

* fix(blocks): explicit workspaceId guard in credential handler, clarify hasOAuthSelection

* feat(credential): add list operation with type/provider filters

* feat(credential): restrict to OAuth only, remove env vars and service accounts

* docs(credential): update screenshots

* fix(credential): remove stale isServiceAccount dep from overlayContent memo

* fix(credential): filter to oauth-only in handleComboboxChange matchedCred lookup
2026-04-02 23:15:15 -07:00
Waleed
ec51f73596 feat(email): abandoned checkout email, 80% free tier warning, credits exhausted email (#3908)
* feat(email): send plain personal email on abandoned checkout

* feat(email): lower free tier warning to 80% and add credits exhausted email

* feat(email): use wordmark in email header instead of icon-only logo

* fix(email): restore accidentally deleted social icons in email footer

* fix(email): prevent double email for free users at 80%, fix subject line

* improvement(emails): extract shared plain email styles and proFeatures constant, fix double email on 100% usage

* fix(email): filter subscription-mode checkout, skip already-subscribed users, fix preview text

* fix(email): use notifications type for onboarding followup to respect unsubscribe preferences

* fix(email): use limit instead of currentUsage in credits exhausted email body

* fix(email): use notifications type for abandoned checkout, clarify crosses80 comment

* chore(email): rename _constants.ts to constants.ts

* fix(email): use isProPlan to catch org-level subscriptions in abandoned checkout guard

* fix(email): align onboarding followup delay to 5 days for email/password users
2026-04-02 19:31:29 -07:00
Theodore Li
6866da590c fix(tools) Directly query db for custom tool id (#3875)
* Directly query db for custom tool id

* Switch back to inline imports

* Fix lint

* Fix test

* Fix greptile comments

* Fix lint

* Make userId and workspaceId required

* Add back nullable userId and workspaceId fields

---------

Co-authored-by: Theodore Li <theo@sim.ai>
2026-04-02 22:13:37 -04:00
Waleed
b0c0ee29a8 feat(email): send onboarding followup email 3 days after signup (#3906)
* feat(email): send onboarding followup email 3 days after signup

* fix(email): add trigger guard, idempotency key, and shared task ID constant

* fix(email): increase onboarding followup delay from 3 to 5 days
2026-04-02 18:08:14 -07:00
Waleed
f0d1950477 v0.6.21: concurrency FF, blog theme 2026-04-02 13:08:59 -07:00
Waleed
0fdd8ffb55 v0.6.20: oauth default credential name, models pages, new models, rippling and rootly integrations 2026-04-02 11:44:24 -07:00
Waleed
d581009099 v0.6.19: vllm fixes, loading improvements, reactquery standardization, new gpt 5.4 models, fireworks provider support, launchdarkly, tailscale, extend integrations 2026-03-31 20:17:00 -07:00
Waleed
7d0fdefb22 v0.6.18: file operations block, profound integration, edge connection improvements, copy logs, knowledgebase robustness 2026-03-30 21:35:41 -07:00
Waleed
73e00f53e1 v0.6.17: trigger.dev CI, workers FF 2026-03-30 09:33:30 -07:00
Vikhyath Mondreti
1d7ae906bc v0.6.16: bullmq optionality 2026-03-30 00:12:21 -07:00
Waleed
560fa75155 v0.6.15: workers, security hardening, sidebar improvements, chat fixes, profound 2026-03-29 23:02:19 -07:00
Waleed
14089f7dbb v0.6.14: performance improvements, connectors UX, collapsed sidebar actions 2026-03-27 13:07:59 -07:00
Waleed
e615816dce v0.6.13: emcn standardization, granola and ketch integrations, security hardening, connectors improvements 2026-03-27 00:16:37 -07:00
Waleed
ca87d7ce29 v0.6.12: billing, blogs UI 2026-03-26 01:19:23 -07:00
Waleed
6bebbc5e29 v0.6.11: billing fixes, rippling, hubspot, UI improvements, demo modal 2026-03-25 22:54:56 -07:00
Waleed
7b572f1f61 v0.6.10: tour fix, connectors reliability improvements, tooltip gif fixes 2026-03-24 21:38:19 -07:00
Vikhyath Mondreti
ed9a71f0af v0.6.9: general ux improvements for tables, mothership 2026-03-24 17:03:24 -07:00
Siddharth Ganesan
c78c870fda v0.6.8: mothership tool loop
2026-03-24 04:06:19 -07:00
Waleed
19442f19e2 v0.6.7: kb improvements, edge z index fix, captcha, new trust center, block classifications 2026-03-21 12:43:33 -07:00
Waleed
1731a4d7f0 v0.6.6: landing improvements, styling consistency, mothership table renaming 2026-03-19 23:58:30 -07:00
Waleed
9fcd02fd3b v0.6.5: email validation, integrations page, mothership and custom tool fixes 2026-03-19 16:08:30 -07:00
Waleed
ff7b5b528c v0.6.4: subflows, docusign, ashby new tools, box, workday, billing bug fixes 2026-03-18 23:12:36 -07:00
Waleed
30f2d1a0fc v0.6.3: hubspot integration, kb block improvements 2026-03-18 11:19:55 -07:00
Waleed
4bd0731871 v0.6.2: mothership stability, chat iframe embedding, KB upserts, new blog post 2026-03-18 03:29:39 -07:00
Waleed
4f3bc37fe4 v0.6.1: added better auth admin plugin 2026-03-17 15:16:16 -07:00
Waleed
84d6fdc423 v0.6: mothership, tables, connectors 2026-03-17 12:21:15 -07:00
Vikhyath Mondreti
4c12914d35 v0.5.113: jira, ashby, google ads, grain updates 2026-03-12 22:54:25 -07:00
Waleed
e9bdc57616 v0.5.112: trace spans improvements, fathom integration, jira fixes, canvas navigation updates 2026-03-12 13:30:20 -07:00
Vikhyath Mondreti
36612ae42a v0.5.111: non-polling webhook execs off trigger.dev, gmail subject headers, webhook trigger configs (#3530) 2026-03-11 17:47:28 -07:00
Waleed
1c2c2c65d4 v0.5.110: webhook execution speedups, SSRF patches 2026-03-11 15:00:24 -07:00
Waleed
ecd3536a72 v0.5.109: obsidian and evernote integrations, slack fixes, remove memory instrumentation 2026-03-09 10:40:37 -07:00
Vikhyath Mondreti
8c0a2e04b1 v0.5.108: workflow input params in agent tools, bun upgrade, dropdown selectors for 14 blocks 2026-03-06 21:02:25 -08:00
Waleed
6586c5ce40 v0.5.107: new reddit, slack tools 2026-03-05 22:48:20 -08:00
Vikhyath Mondreti
3ce947566d v0.5.106: condition block and legacy kbs fixes, GPT 5.4 2026-03-05 17:30:05 -08:00
Waleed
70c36cb7aa v0.5.105: slack remove reaction, nested subflow locks fix, servicenow pagination, memory improvements 2026-03-04 22:38:26 -08:00
Waleed
f1ec5fe824 v0.5.104: memory improvements, nested subflows, careers page redirect, brandfetch, google meet 2026-03-03 23:45:29 -08:00
Waleed
e07e3c34cc v0.5.103: memory util instrumentation, API docs, amplitude, google pagespeed insights, pagerduty 2026-03-01 23:27:02 -08:00
Waleed
0d2e6ff31d v0.5.102: new integrations, new tools, ci speedups, memory leak instrumentation 2026-02-28 12:48:10 -08:00
Waleed
4fd0989264 v0.5.101: circular dependency mitigation, confluence enhancements, google tasks and bigquery integrations, workflow lock 2026-02-26 15:04:53 -08:00
Waleed
67f8a687f6 v0.5.100: multiple credentials, 40% speedup, gong, attio, audit log improvements 2026-02-25 00:28:25 -08:00
Waleed
af592349d3 v0.5.99: local dev improvements, live workflow logs in terminal 2026-02-23 00:24:49 -08:00
Waleed
0d86ea01f0 v0.5.98: change detection improvements, rate limit and code execution fixes, removed retired models, hex integration 2026-02-21 18:07:40 -08:00
Waleed
115f04e989 v0.5.97: oidc discovery for copilot mcp 2026-02-21 02:06:25 -08:00
Waleed
34d92fae89 v0.5.96: sim oauth provider, slack ephemeral message tool and blockkit support 2026-02-20 18:22:20 -08:00
Waleed
67aa4bb332 v0.5.95: gemini 3.1 pro, cloudflare, dataverse, revenuecat, redis, upstash, algolia tools; isolated-vm robustness improvements, tables backend (#3271)
* feat(tools): advanced fields for youtube, vercel; added cloudflare and dataverse tools (#3257)

* refactor(vercel): mark optional fields as advanced mode

Move optional/power-user fields behind the advanced toggle:
- List Deployments: project filter, target, state
- Create Deployment: project ID override, redeploy from, target
- List Projects: search
- Create/Update Project: framework, build/output/install commands
- Env Vars: variable type
- Webhooks: project IDs filter
- Checks: path, details URL
- Team Members: role filter
- All operations: team ID scope

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* style(youtube): mark optional params as advanced mode

Hide pagination, sort order, and filter fields behind the advanced
toggle for a cleaner default UX across all YouTube operations.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* added advanced fields for vercel and youtube, added cloudflare and dataverse block

* added desc for dataverse

* add more tools

* ack comment

* more

* ops

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* feat(tables): added tables (#2867)

* updates

* required

* trashy table viewer

* updates

* updates

* filtering ui

* updates

* updates

* updates

* one input mode

* format

* fix lints

* improved errors

* updates

* updates

* changes

* doc strings

* breaking down file

* update comments with ai

* updates

* comments

* changes

* revert

* updates

* dedupe

* updates

* updates

* updates

* refactoring

* renames & refactors

* refactoring

* updates

* undo

* update db

* wand

* updates

* fix comments

* fixes

* simplify comments

* updates

* renames

* better comments

* validation

* updates

* updates

* updates

* fix sorting

* fix appearance

* updating prompt to make it user sort

* rm

* updates

* rename

* comments

* clean comments

* simplification

* updates

* updates

* refactor

* reduced type confusion

* undo

* rename

* undo changes

* undo

* simplify

* updates

* updates

* revert

* updates

* db updates

* type fix

* fix

* fix error handling

* updates

* docs

* docs

* updates

* rename

* dedupe

* revert

* uncook

* updates

* fix

* fix

* fix

* fix

* prepare merge

* readd migrations

* add back missed code

* migrate enrichment logic to general abstraction

* address bugbot concerns

* adhere to size limits for tables

* remove conflicting migration

* add back migrations

* fix tables auth

* fix permissive auth

* fix lint

* reran migrations

* migrate to use tanstack query for all server state

* update table-selector

* update names

* added tables to permission groups, updated subblock types

---------

Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
Co-authored-by: waleed <walif6@gmail.com>

* fix(snapshot): changed insert to upsert when concurrent identical child workflows are running (#3259)

* fix(snapshot): changed insert to upsert when concurrent identical child workflows are running

* fixed ci tests failing

* fix(workflows): disallow duplicate workflow names at the same folder level (#3260)

* feat(tools): added redis, upstash, algolia, and revenuecat (#3261)

* feat(tools): added redis, upstash, algolia, and revenuecat

* ack comment

* feat(models): add gemini-3.1-pro-preview and update gemini-3-pro thinking levels (#3263)

* fix(audit-log): lazily resolve actor name/email when missing (#3262)

* fix(blocks): move type coercions from tools.config.tool to tools.config.params (#3264)

* fix(blocks): move type coercions from tools.config.tool to tools.config.params

Number() coercions in tools.config.tool ran at serialization time before
variable resolution, destroying dynamic references like <block.result.count>
by converting them to NaN/null. Moved all coercions to tools.config.params
which runs at execution time after variables are resolved.

Fixed in 15 blocks: exa, arxiv, sentry, incidentio, wikipedia, ahrefs,
posthog, elasticsearch, dropbox, hunter, lemlist, spotify, youtube, grafana,
parallel. Also added mode: 'advanced' to optional exa fields.

Closes #3258

* fix(blocks): address PR review — move remaining param mutations from tool() to params()

- Moved field mappings from tool() to params() in grafana, posthog,
  lemlist, spotify, dropbox (same dynamic reference bug)
- Fixed parallel.ts excerpts/full_content boolean logic
- Fixed parallel.ts search_queries empty case (must set undefined)
- Fixed elasticsearch.ts timeout not included when already ends with 's'
- Restored dropbox.ts tool() switch for proper default fallback

* fix(blocks): restore field renames to tool() for serialization-time validation

Field renames (e.g. personalApiKey→apiKey) must be in tool() because
validateRequiredFieldsBeforeExecution calls selectToolId()→tool() then
checks renamed field names on params. Only type coercions (Number(),
boolean) stay in params() to avoid destroying dynamic variable references.
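For illustration, the timing issue the messages above describe, in a minimal sketch (illustrative only; the actual block config shape is not shown here):

```ts
// At serialization time a field may still hold an unresolved reference string:
const raw = '<block.result.count>'
Number(raw) // NaN — coercing here destroys the dynamic reference

// At execution time, after variable resolution, the same field holds a real value:
const resolved = '42'
Number(resolved) // 42 — safe to coerce in params(), which runs post-resolution
```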

* improvement(resolver): resolved empty sentinel to not pass through unexecuted valid refs to text inputs (#3266)

* fix(blocks): add required constraint for serviceDeskId in JSM block (#3268)

* fix(blocks): add required constraint for serviceDeskId in JSM block

* fix(blocks): rename custom field values to request field values in JSM create request

* fix(trigger): add isolated-vm support to trigger.dev container builds (#3269)

Scheduled workflow executions running in trigger.dev containers were
failing to spawn isolated-vm workers because the native module wasn't
available in the container. This caused loop condition evaluation to
silently fail and exit after one iteration.

- Add isolated-vm to build.external and additionalPackages in trigger config
- Include isolated-vm-worker.cjs via additionalFiles for child process spawning
- Add fallback path resolution for worker file in trigger.dev environment

* fix(tables): hide tables from sidebar and block registry (#3270)

* fix(tables): hide tables from sidebar and block registry

* fix(trigger): add isolated-vm support to trigger.dev container builds (#3269)

Scheduled workflow executions running in trigger.dev containers were
failing to spawn isolated-vm workers because the native module wasn't
available in the container. This caused loop condition evaluation to
silently fail and exit after one iteration.

- Add isolated-vm to build.external and additionalPackages in trigger config
- Include isolated-vm-worker.cjs via additionalFiles for child process spawning
- Add fallback path resolution for worker file in trigger.dev environment

* lint

* fix(trigger): update node version to align with main app (#3272)

* fix(build): fix corrupted sticky disk cache on blacksmith (#3273)

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Lakee Sivaraya <71339072+lakeesiv@users.noreply.github.com>
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
2026-02-20 13:43:07 -08:00
Waleed
15ace5e63f v0.5.94: vercel integration, folder insertion, migrated tracking redirects to rewrites 2026-02-18 16:53:34 -08:00
Waleed
fdca73679d v0.5.93: NextJS config changes, MCP and Blocks whitelisting, copilot keyboard shortcuts, audit logs 2026-02-18 12:10:05 -08:00
Waleed
da46a387c9 v0.5.92: shortlinks, copilot scrolling stickiness, pagination 2026-02-17 15:13:21 -08:00
Waleed
b7e377ec4b v0.5.91: docs i18n, turborepo upgrade 2026-02-16 00:36:05 -08:00
542 changed files with 39755 additions and 16120 deletions

View File

@@ -74,6 +74,10 @@ docker compose -f docker-compose.prod.yml up -d
Open [http://localhost:3000](http://localhost:3000)
#### Background worker note
The Docker Compose stack starts a dedicated worker container by default. If `REDIS_URL` is not configured, the worker will start, log that it is idle, and do no queue processing. This is expected. Queue-backed API, webhook, and schedule execution requires Redis; installs without Redis continue to use the inline execution path.
Sim also supports local models via [Ollama](https://ollama.ai) and [vLLM](https://docs.vllm.ai/) — see the [Docker self-hosting docs](https://docs.sim.ai/self-hosting/docker) for setup details.
### Self-hosted: Manual Setup
@@ -86,6 +90,7 @@ Sim also supports local models via [Ollama](https://ollama.ai) and [vLLM](https:
git clone https://github.com/simstudioai/sim.git
cd sim
bun install
bun run prepare # Set up pre-commit hooks
```
2. Set up PostgreSQL with pgvector:
@@ -100,6 +105,11 @@ Or install manually via the [pgvector guide](https://github.com/pgvector/pgvecto
```bash
cp apps/sim/.env.example apps/sim/.env
# Create your secrets
perl -i -pe "s/your_encryption_key/$(openssl rand -hex 32)/" apps/sim/.env
perl -i -pe "s/your_internal_api_secret/$(openssl rand -hex 32)/" apps/sim/.env
perl -i -pe "s/your_api_encryption_key/$(openssl rand -hex 32)/" apps/sim/.env
# DB configs for migration
cp packages/db/.env.example packages/db/.env
# Edit both .env files to set DATABASE_URL="postgresql://postgres:your_password@localhost:5432/simstudio"
```
@@ -107,16 +117,18 @@ cp packages/db/.env.example packages/db/.env
4. Run migrations:
```bash
cd packages/db && bunx drizzle-kit migrate --config=./drizzle.config.ts
cd packages/db && bun run db:migrate
```
5. Start development servers:
```bash
bun run dev:full # Starts Next.js app and realtime socket server
bun run dev:full # Starts Next.js app, realtime socket server, and the BullMQ worker
```
Or run separately: `bun run dev` (Next.js) and `cd apps/sim && bun run dev:sockets` (realtime).
If `REDIS_URL` is not configured, the worker will remain idle and execution continues inline.
Or run separately: `bun run dev` (Next.js), `cd apps/sim && bun run dev:sockets` (realtime), and `cd apps/sim && bun run worker` (BullMQ worker).
## Copilot API Keys

View File

@@ -124,6 +124,29 @@ export function ConditionalIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function CredentialIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 24 24' fill='none' xmlns='http://www.w3.org/2000/svg'>
<circle cx='8' cy='15' r='4' stroke='currentColor' strokeWidth='1.75' />
<path d='M11.83 13.17L20 5' stroke='currentColor' strokeWidth='1.75' strokeLinecap='round' />
<path
d='M18 7l2 2'
stroke='currentColor'
strokeWidth='1.75'
strokeLinecap='round'
strokeLinejoin='round'
/>
<path
d='M15 10l2 2'
stroke='currentColor'
strokeWidth='1.75'
strokeLinecap='round'
strokeLinejoin='round'
/>
</svg>
)
}
export function NoteIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg
@@ -4630,6 +4653,59 @@ export function SQSIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function CloudFormationIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg
{...props}
viewBox='0 0 80 80'
version='1.1'
xmlns='http://www.w3.org/2000/svg'
xmlnsXlink='http://www.w3.org/1999/xlink'
>
<g
id='Icon-Architecture/64/Arch_AWS-CloudFormation_64'
stroke='none'
strokeWidth='1'
fill='none'
fillRule='evenodd'
>
<path
d='M53,39.9632039 L58,39.9632039 L58,37.9601375 L53,37.9601375 L53,39.9632039 Z M28,51.9816019 L33,51.9816019 L33,49.9785356 L28,49.9785356 L28,51.9816019 Z M18,51.9816019 L25,51.9816019 L25,49.9785356 L18,49.9785356 L18,51.9816019 Z M18,45.9724029 L30,45.9724029 L30,43.9693366 L18,43.9693366 L18,45.9724029 Z M18,33.9540048 L27,33.9540048 L27,31.9509385 L18,31.9509385 L18,33.9540048 Z M18,39.9632039 L51,39.9632039 L51,37.9601375 L18,37.9601375 L18,39.9632039 Z M37,61.9969337 L14,61.9969337 L14,27.9448058 L37,27.9448058 L37,35.9570712 L39,35.9570712 L39,26.9432726 C39,26.3904263 38.552,25.9417395 38,25.9417395 L13,25.9417395 C12.447,25.9417395 12,26.3904263 12,26.9432726 L12,62.9984668 C12,63.5513131 12.447,64 13,64 L38,64 C38.552,64 39,63.5513131 39,62.9984668 L39,42.9678034 L37,42.9678034 L37,61.9969337 Z M68,36.9586044 C68,43.4305117 62.173,45.6819583 59.092,45.9683968 L43,45.9724029 L43,43.9693366 L59,43.9693366 C59.195,43.9463013 66,43.2121775 66,36.9586044 C66,31.2638867 60.863,30.1081175 59.834,29.9338507 C59.321,29.8467173 58.96,29.3820059 59.004,28.8632117 C59.005,28.8441826 59.007,28.826155 59.009,28.8081274 C58.954,25.5902013 56.981,24.584662 56.126,24.3002266 C54.53,23.769414 52.751,24.2771913 51.81,25.5391231 C51.591,25.8355769 51.229,25.9868085 50.861,25.9307226 C50.497,25.8756383 50.192,25.625255 50.068,25.2767214 C49.447,23.5360568 48.546,22.4083304 47.293,21.1534094 C44.159,18.0386412 39.905,17.1783242 35.925,18.8528877 C33.837,19.7332353 32.012,21.7282894 30.922,24.327268 L29.078,23.5500782 C30.37,20.4743699 32.584,18.0887179 35.15,17.007062 C39.905,15.0049972 44.971,16.0255595 48.704,19.7342369 C49.774,20.8068789 50.66,21.851478 51.35,23.2035478 C52.843,22.0978551 54.857,21.7673492 56.757,22.3993166 C59.189,23.2085554 60.727,25.3207889 60.975,28.1290879 C64.381,28.9884034 68,31.7115721 68,36.9586044 L68,36.9586044 Z'
id='AWS-CloudFormation_Icon_64_Squid'
fill='currentColor'
/>
</g>
</svg>
)
}
export function CloudWatchIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg
{...props}
viewBox='0 0 80 80'
version='1.1'
xmlns='http://www.w3.org/2000/svg'
xmlnsXlink='http://www.w3.org/1999/xlink'
>
<g
id='Icon-Architecture/64/Arch_Amazon-CloudWatch_64'
stroke='none'
strokeWidth='1'
fill='none'
fillRule='evenodd'
transform='translate(40, 40) scale(1.25) translate(-40, -40)'
>
<path
d='M55.0592315,46.7773707 C55.0592315,42.8680281 51.8575588,39.6876305 47.9220646,39.6876305 C43.9865705,39.6876305 40.785903,42.8680281 40.785903,46.7773707 C40.785903,50.6867133 43.9865705,53.8671108 47.9220646,53.8671108 C51.8575588,53.8671108 55.0592315,50.6867133 55.0592315,46.7773707 M57.0697011,46.7773707 C57.0697011,51.7881194 52.9663327,55.8642207 47.9220646,55.8642207 C42.8788018,55.8642207 38.7754334,51.7881194 38.7754334,46.7773707 C38.7754334,41.7666219 42.8788018,37.6905206 47.9220646,37.6905206 C52.9663327,37.6905206 57.0697011,41.7666219 57.0697011,46.7773707 M65.5096522,60.4735504 L58.5011554,54.2026253 C57.9352082,54.9944794 57.2808004,55.7174332 56.5540156,56.3634982 L63.5524601,62.6334248 C64.1495696,63.1686502 65.0784065,63.1187225 65.6182176,62.5255808 C66.155013,61.9324392 66.1067617,61.010773 65.5096522,60.4735504 M47.9220646,57.6616197 C53.9645309,57.6616197 58.8801289,52.7786859 58.8801289,46.7773707 C58.8801289,40.7750569 53.9645309,35.8931217 47.9220646,35.8931217 C41.8806036,35.8931217 36.9650056,40.7750569 36.9650056,46.7773707 C36.9650056,52.7786859 41.8806036,57.6616197 47.9220646,57.6616197 M67.1119965,63.8626459 C66.4264264,64.6165549 65.47849,65 64.5285431,65 C63.7002296,65 62.8699057,64.708422 62.207456,64.1172774 L54.9305615,57.5987107 C52.9070239,58.8968321 50.505518,59.6587296 47.9220646,59.6587296 C40.7718297,59.6587296 34.9545361,53.8800921 34.9545361,46.7773707 C34.9545361,39.6746493 40.7718297,33.8960118 47.9220646,33.8960118 C55.0733048,33.8960118 60.8905985,39.6746493 60.8905985,46.7773707 C60.8905985,48.8154213 60.3990387,50.7366411 59.5465996,52.4511599 L66.8556616,58.9906963 C68.2750531,60.265851 68.3896499,62.4496907 67.1119965,63.8626459 M21.2803274,29.392529 C21.2803274,29.9117776 21.3124949,30.429029 21.3738143,30.9293051 C21.4089975,31.2138932 21.3205368,31.4984814 21.1295422,31.7131707 C20.9777518,31.8839236 20.7736891,31.9967603 20.550527,32.0347054 C18.0786547,32.6687878 14.0104695,34.5880104 14.0104695,40.3456782 C14.0104695,44.6933865 16.4240382,47.0929141 18.4495863,48.3411077 C19.1411878,48.7744806 19.9594489,49.0051468 20.8229456,49.0141338 L32.9450717,49.0251179 L32.9430613,51.0222278 L20.811888,51.0112437 C19.5664021,50.9982625 18.384246,50.6607509 17.3840374,50.0346569 C15.3765836,48.7974474 12,45.8896553 12,40.3456782 C12,33.66235 16.5999543,31.191925 19.3000149,30.319188 C19.2799102,30.0116331 19.2698579,29.702081 19.2698579,29.392529 C19.2698579,23.9324305 22.9982737,18.2696254 27.9420183,16.2215892 C33.7241287,13.8150717 39.8500294,15.0083449 44.3263399,19.4109737 C45.7135638,20.7749998 46.8545053,22.4316024 47.7300648,24.3478294 C48.9061895,23.3802296 50.355738,22.8460027 51.8836949,22.8460027 C54.8863312,22.8460027 58.2659305,25.1097268 58.8680661,30.0605622 C61.6797078,30.7046302 67.6206453,32.9553731 67.6206453,40.422567 C67.6206453,43.4042521 66.6797455,45.8666886 64.8230769,47.7419748 L63.3896121,46.3410022 C64.8632863,44.8531553 65.6101757,42.8620367 65.6101757,40.422567 C65.6101757,33.891019 60.1055101,32.2663701 57.737177,31.8719409 C57.4677741,31.827006 57.2295334,31.6752256 57.0757325,31.4515493 C56.9259525,31.2358614 56.8686541,30.9712444 56.9138897,30.7146157 C56.5851779,26.6604826 54.1605516,24.8431126 51.8836949,24.8431126 C50.4472144,24.8431126 49.1001998,25.5381069 48.1874466,26.7503526 C47.9652897,27.0439277 47.6044105,27.193711 47.2344841,27.139789 C46.8695838,27.085867 46.5629872,26.8362283 46.4373329,26.4917268 C45.6140456,24.2260057 44.4278686,22.3207628 42.9119745,20.8309188 C39.0327735,17.0154404 
33.7281496,15.9809374 28.7170543,18.0649216 C24.5463352,19.7924217 21.2803274,24.7672224 21.2803274,29.392529'
id='Amazon-CloudWatch_Icon_64_Squid'
fill='currentColor'
/>
</g>
</svg>
)
}
export function TextractIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg

View File

@@ -27,7 +27,9 @@ import {
CirclebackIcon,
ClayIcon,
ClerkIcon,
CloudFormationIcon,
CloudflareIcon,
CloudWatchIcon,
ConfluenceIcon,
CursorIcon,
DatabricksIcon,
@@ -211,6 +213,8 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
clay: ClayIcon,
clerk: ClerkIcon,
cloudflare: CloudflareIcon,
cloudformation: CloudFormationIcon,
cloudwatch: CloudWatchIcon,
confluence_v2: ConfluenceIcon,
cursor_v2: CursorIcon,
databricks: DatabricksIcon,

View File

@@ -0,0 +1,150 @@
---
title: Credential
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Image } from '@/components/ui/image'
import { FAQ } from '@/components/ui/faq'
The Credential block has two operations: **Select Credential** picks a single OAuth credential and outputs its ID reference for downstream blocks; **List Credentials** returns all OAuth credentials in the workspace (optionally filtered by provider) as an array for iteration.
<div className="flex justify-center">
<Image
src="/static/blocks/credential.png"
alt="Credential Block"
width={400}
height={300}
className="my-6"
/>
</div>
<Callout>
The Credential block outputs credential **ID references**, not secrets. Downstream blocks receive the ID and resolve the actual OAuth token securely during their own execution.
</Callout>
## Configuration Options
### Operation
| Value | Description |
|---|---|
| **Select Credential** | Pick one OAuth credential and output its reference — use this to wire a single credential into downstream blocks |
| **List Credentials** | Return all OAuth credentials in the workspace as an array — use this with a ForEach loop |
### Credential (Select operation)
Select an OAuth credential from your workspace. The dropdown shows all connected OAuth accounts (Google, GitHub, Slack, etc.).
In advanced mode, paste a credential ID directly. You can copy a credential ID from your workspace's Credentials settings page.
### Provider (List operation)
Filter the returned OAuth credentials by provider. Select one or more providers from the dropdown — only providers you have credentials for will appear. Leave empty to return all OAuth credentials.
| Example | Returns |
|---|---|
| Gmail | Gmail credentials only |
| Slack | Slack credentials only |
| Gmail + Slack | Gmail and Slack credentials |
## Outputs
<Tabs items={['Select Credential', 'List Credentials']}>
<Tab>
| Output | Type | Description |
|---|---|---|
| `credentialId` | `string` | The credential ID — pipe this into other blocks' credential fields |
| `displayName` | `string` | Human-readable name (e.g. "waleed@company.com") |
| `providerId` | `string` | OAuth provider ID (e.g. `google-email`, `slack`) |
</Tab>
<Tab>
| Output | Type | Description |
|---|---|---|
| `credentials` | `json` | Array of OAuth credential objects (see shape below) |
| `count` | `number` | Number of credentials returned |
Each object in the `credentials` array:
| Field | Type | Description |
|---|---|---|
| `credentialId` | `string` | The credential ID |
| `displayName` | `string` | Human-readable name |
| `providerId` | `string` | OAuth provider ID |
</Tab>
</Tabs>
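As a quick reference, the output shapes above can be summarized as a TypeScript sketch (illustrative only; names mirror the tables above):

```ts
// Illustrative view of the Credential block outputs described above.
interface CredentialRef {
  credentialId: string // UUID reference, not a secret
  displayName: string  // e.g. "waleed@company.com"
  providerId: string   // e.g. "google-email", "slack"
}

// Select Credential output
type SelectCredentialOutput = CredentialRef

// List Credentials output
interface ListCredentialsOutput {
  credentials: CredentialRef[]
  count: number
}
```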
## Example Use Cases
**Shared credential across multiple blocks** — Define once, use everywhere
```
Credential (Select, Google) → Gmail (Send) & Google Drive (Upload) & Google Calendar (Create)
```
**Multi-account workflows** — Route to different credentials based on logic
```
Agent (Determine account) → Condition → Credential A or Credential B → Slack (Post)
```
**Iterate over all Gmail accounts**
```
Credential (List, Provider: Gmail) → ForEach Loop → Gmail (Send) using <loop.currentItem.credentialId>
```
<div className="flex justify-center">
<Image
src="/static/blocks/credential-loop.png"
alt="Credential List wired into a ForEach Loop"
width={900}
height={400}
className="my-6"
/>
</div>
## How to wire a Credential block
### Select Credential
1. Drop a **Credential** block and select your OAuth credential from the picker
2. In the downstream block, switch to **advanced mode** on its credential field
3. Enter `<credentialBlockName.credentialId>` as the value
<Tabs items={['Gmail', 'Slack']}>
<Tab>
In the Gmail block's credential field (advanced mode):
```
<myCredential.credentialId>
```
</Tab>
<Tab>
In the Slack block's credential field (advanced mode):
```
<myCredential.credentialId>
```
</Tab>
</Tabs>
### List Credentials
1. Drop a **Credential** block, set Operation to **List Credentials**
2. Optionally select one or more **Providers** to narrow results (only your connected providers appear)
3. Wire `<credentialBlockName.credentials>` into a **ForEach Loop** as the items source
4. Inside the loop, reference `<loop.currentItem.credentialId>` in downstream blocks' credential fields
## Best Practices
- **Define once, reference many times**: When five blocks use the same Google account, use one Credential block and wire all five to `<credential.credentialId>` instead of selecting the account five times
- **Outputs are safe to log**: The `credentialId` output is a UUID reference, not a secret. It is safe to inspect in execution logs
- **Use for environment switching**: Pair with a Condition block to route to a production or staging OAuth credential based on a workflow variable
- **Advanced mode is required**: Downstream blocks must be in advanced mode on their credential field to accept a dynamic reference
- **Use List + ForEach for fan-out**: When you need to run the same action across all accounts of a provider, List Credentials feeds naturally into a ForEach loop
- **Narrow by provider**: Use the Provider multiselect to filter to specific services — only providers you have credentials for are shown
<FAQ items={[
{ question: "Does the Credential block expose my secret or token?", answer: "No. The block outputs a credential ID (a UUID), not the actual OAuth token. Downstream blocks receive the ID and resolve the token securely in their own execution context. Secrets never appear in workflow state, logs, or the canvas." },
{ question: "What credential types does it support?", answer: "OAuth connected accounts only (Google, GitHub, Slack, etc.). Environment variables and service accounts cannot be resolved by ID in downstream blocks, so they are not supported." },
{ question: "How is Select different from just copying a credential ID into advanced mode?", answer: "Functionally identical — both pass the same credential ID to the downstream block. The Credential block adds value when you need to use one credential in many blocks (change it once), or when you want to select between credentials dynamically using a Condition block." },
{ question: "Can I list all OAuth credentials in my workspace?", answer: "Yes. Set the Operation to 'List Credentials'. Optionally filter by provider using the Provider multiselect. Wire the credentials output into a ForEach loop to process each credential individually." },
{ question: "Can I use a Credential block output in a Function block?", answer: "Yes. Reference <credential.credentialId> in your Function block's code. Note that the function will receive the raw UUID string — if you need the resolved token, the downstream block must handle the resolution (as integration blocks do). The Function block does not automatically resolve credential IDs." },
{ question: "What happens if the credential is deleted?", answer: "The Select operation will throw an error at execution time: 'Credential not found'. The List operation will simply omit the deleted credential from the results. Update the Credential block to select a valid credential before re-running." },
]} />

View File

@@ -4,6 +4,7 @@
"agent",
"api",
"condition",
"credential",
"evaluator",
"function",
"guardrails",

View File

@@ -195,6 +195,17 @@ By default, your usage is capped at the credits included in your plan. To allow
Max (individual) shares the same rate limits as team plans. Team plans (Pro or Max for Teams) use the Max-tier rate limits.
### Concurrent Execution Limits
| Plan | Concurrent Executions |
|------|----------------------|
| **Free** | 5 |
| **Pro** | 50 |
| **Max / Team** | 200 |
| **Enterprise** | 200 (customizable) |
Concurrent execution limits control how many workflow executions can run simultaneously within a workspace. When the limit is reached, new executions are queued and admitted as running executions complete. Manual runs from the editor are not subject to these limits.
### File Storage
| Plan | Storage |

View File

@@ -0,0 +1,183 @@
---
title: CloudFormation
description: Manage and inspect AWS CloudFormation stacks, resources, and drift
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="cloudformation"
color="linear-gradient(45deg, #B0084D 0%, #FF4F8B 100%)"
/>
{/* MANUAL-CONTENT-START:intro */}
[AWS CloudFormation](https://aws.amazon.com/cloudformation/) is an infrastructure-as-code service for modeling, provisioning, and managing AWS resources. CloudFormation uses templates to describe the resources you need and their dependencies, so you can launch and configure them together as a stack.
With the CloudFormation integration, you can:
- **Describe Stacks**: List all stacks in a region or get detailed information about a specific stack, including its status, outputs, tags, and drift information
- **List Stack Resources**: Enumerate every resource in a stack with its logical ID, physical ID, type, status, and drift status
- **Describe Stack Events**: View the full event history for a stack to understand what happened during create, update, or delete operations
- **Detect Stack Drift**: Initiate drift detection to check whether any resources in a stack have been modified outside of CloudFormation
- **Drift Detection Status**: Poll the results of a drift detection operation to see which resources have drifted and how many
- **Get Template**: Retrieve the original template body (JSON or YAML) used to create or update a stack
- **Validate Template**: Check a CloudFormation template for syntax errors, required capabilities, parameters, and declared transforms before deploying
In Sim, the CloudFormation integration enables your agents to monitor infrastructure state, detect configuration drift, audit stack resources, and validate templates as part of automated SRE and DevOps workflows. This is especially powerful when combined with CloudWatch for observability and SNS for alerting, creating end-to-end infrastructure monitoring pipelines.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Integrate AWS CloudFormation into workflows. Describe stacks, list resources, detect drift, view stack events, retrieve templates, and validate templates. Requires AWS access key and secret access key.
## Tools
### `cloudformation_describe_stacks`
List and describe CloudFormation stacks
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `awsRegion` | string | Yes | AWS region \(e.g., us-east-1\) |
| `awsAccessKeyId` | string | Yes | AWS access key ID |
| `awsSecretAccessKey` | string | Yes | AWS secret access key |
| `stackName` | string | No | Stack name or ID to describe \(omit to list all stacks\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `stacks` | array | List of CloudFormation stacks with status, outputs, and tags |
### `cloudformation_list_stack_resources`
List all resources in a CloudFormation stack
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `awsRegion` | string | Yes | AWS region \(e.g., us-east-1\) |
| `awsAccessKeyId` | string | Yes | AWS access key ID |
| `awsSecretAccessKey` | string | Yes | AWS secret access key |
| `stackName` | string | Yes | Stack name or ID |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `resources` | array | List of stack resources with type, status, and drift information |
### `cloudformation_detect_stack_drift`
Initiate drift detection on a CloudFormation stack
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `awsRegion` | string | Yes | AWS region \(e.g., us-east-1\) |
| `awsAccessKeyId` | string | Yes | AWS access key ID |
| `awsSecretAccessKey` | string | Yes | AWS secret access key |
| `stackName` | string | Yes | Stack name or ID to detect drift on |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `stackDriftDetectionId` | string | ID to use with Describe Stack Drift Detection Status to check results |
### `cloudformation_describe_stack_drift_detection_status`
Check the status of a stack drift detection operation
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `awsRegion` | string | Yes | AWS region \(e.g., us-east-1\) |
| `awsAccessKeyId` | string | Yes | AWS access key ID |
| `awsSecretAccessKey` | string | Yes | AWS secret access key |
| `stackDriftDetectionId` | string | Yes | The drift detection ID returned by Detect Stack Drift |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `stackId` | string | The stack ID |
| `stackDriftDetectionId` | string | The drift detection ID |
| `stackDriftStatus` | string | Drift status \(DRIFTED, IN_SYNC, NOT_CHECKED\) |
| `detectionStatus` | string | Detection status \(DETECTION_IN_PROGRESS, DETECTION_COMPLETE, DETECTION_FAILED\) |
| `detectionStatusReason` | string | Reason if detection failed |
| `driftedStackResourceCount` | number | Number of resources that have drifted |
| `timestamp` | number | Timestamp of the detection |
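These two drift tools mirror the underlying asynchronous AWS API: start detection, then poll for the result. A minimal sketch of that pattern with the AWS SDK for JavaScript v3 (assumes `@aws-sdk/client-cloudformation` is installed; region, credentials, and stack name are placeholders):

```ts
import {
  CloudFormationClient,
  DetectStackDriftCommand,
  DescribeStackDriftDetectionStatusCommand,
} from '@aws-sdk/client-cloudformation'

const client = new CloudFormationClient({
  region: 'us-east-1',
  credentials: { accessKeyId: 'AKIA...', secretAccessKey: '...' }, // placeholders
})

// Start drift detection and poll until it completes, as the two tools above do.
async function detectDrift(stackName: string) {
  const { StackDriftDetectionId } = await client.send(
    new DetectStackDriftCommand({ StackName: stackName })
  )
  if (!StackDriftDetectionId) throw new Error('No drift detection ID returned')
  for (;;) {
    const status = await client.send(
      new DescribeStackDriftDetectionStatusCommand({ StackDriftDetectionId })
    )
    if (status.DetectionStatus !== 'DETECTION_IN_PROGRESS') {
      return status // StackDriftStatus, DriftedStackResourceCount, etc.
    }
    await new Promise((resolve) => setTimeout(resolve, 5000)) // wait 5s between polls
  }
}
```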
### `cloudformation_describe_stack_events`
Get the event history for a CloudFormation stack
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `awsRegion` | string | Yes | AWS region \(e.g., us-east-1\) |
| `awsAccessKeyId` | string | Yes | AWS access key ID |
| `awsSecretAccessKey` | string | Yes | AWS secret access key |
| `stackName` | string | Yes | Stack name or ID |
| `limit` | number | No | Maximum number of events to return \(default: 50\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `events` | array | List of stack events with resource status and timestamps |
### `cloudformation_get_template`
Retrieve the template body for a CloudFormation stack
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `awsRegion` | string | Yes | AWS region \(e.g., us-east-1\) |
| `awsAccessKeyId` | string | Yes | AWS access key ID |
| `awsSecretAccessKey` | string | Yes | AWS secret access key |
| `stackName` | string | Yes | Stack name or ID |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `templateBody` | string | The template body as a JSON or YAML string |
| `stagesAvailable` | array | Available template stages |
### `cloudformation_validate_template`
Validate a CloudFormation template for syntax and structural correctness
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `awsRegion` | string | Yes | AWS region \(e.g., us-east-1\) |
| `awsAccessKeyId` | string | Yes | AWS access key ID |
| `awsSecretAccessKey` | string | Yes | AWS secret access key |
| `templateBody` | string | Yes | The CloudFormation template body \(JSON or YAML\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `description` | string | Template description |
| `parameters` | array | Template parameters with defaults and descriptions |
| `capabilities` | array | Required capabilities \(e.g., CAPABILITY_IAM\) |
| `capabilitiesReason` | string | Reason capabilities are required |
| `declaredTransforms` | array | Transforms used in the template \(e.g., AWS::Serverless-2016-10-31\) |

View File

@@ -0,0 +1,180 @@
---
title: CloudWatch
description: Query and monitor AWS CloudWatch logs, metrics, and alarms
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="cloudwatch"
color="linear-gradient(45deg, #B0084D 0%, #FF4F8B 100%)"
/>
## Usage Instructions
Integrate AWS CloudWatch into workflows. Run Log Insights queries, list log groups, retrieve log events, list and get metrics, and monitor alarms. Requires AWS access key and secret access key.
## Tools
### `cloudwatch_query_logs`
Run a CloudWatch Log Insights query against one or more log groups
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `awsRegion` | string | Yes | AWS region \(e.g., us-east-1\) |
| `awsAccessKeyId` | string | Yes | AWS access key ID |
| `awsSecretAccessKey` | string | Yes | AWS secret access key |
| `logGroupNames` | array | Yes | Log group names to query |
| `queryString` | string | Yes | CloudWatch Log Insights query string |
| `startTime` | number | Yes | Start time as Unix epoch seconds |
| `endTime` | number | Yes | End time as Unix epoch seconds |
| `limit` | number | No | Maximum number of results to return |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `results` | array | Query result rows |
| `statistics` | object | Query statistics \(bytesScanned, recordsMatched, recordsScanned\) |
| `status` | string | Query completion status |
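Under the hood, Log Insights queries are asynchronous: a query is started and its results polled until complete. A minimal sketch of that pattern with the AWS SDK for JavaScript v3 (assumes `@aws-sdk/client-cloudwatch-logs`; the log group and query string are placeholders):

```ts
import {
  CloudWatchLogsClient,
  StartQueryCommand,
  GetQueryResultsCommand,
} from '@aws-sdk/client-cloudwatch-logs'

const logs = new CloudWatchLogsClient({ region: 'us-east-1' }) // credentials from the default chain

// Run a Log Insights query over the last hour and wait for it to complete.
// e.g. queryLogs('/aws/lambda/my-fn', 'fields @timestamp, @message | sort @timestamp desc | limit 20')
async function queryLogs(logGroupName: string, queryString: string) {
  const endTime = Math.floor(Date.now() / 1000)
  const startTime = endTime - 3600
  const { queryId } = await logs.send(
    new StartQueryCommand({ logGroupNames: [logGroupName], queryString, startTime, endTime, limit: 100 })
  )
  for (;;) {
    const res = await logs.send(new GetQueryResultsCommand({ queryId }))
    if (res.status === 'Complete') return res.results // rows of { field, value } pairs
    if (res.status === 'Failed' || res.status === 'Cancelled') throw new Error(`Query ${res.status}`)
    await new Promise((resolve) => setTimeout(resolve, 1000))
  }
}
```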
### `cloudwatch_describe_log_groups`
List available CloudWatch log groups
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `awsRegion` | string | Yes | AWS region \(e.g., us-east-1\) |
| `awsAccessKeyId` | string | Yes | AWS access key ID |
| `awsSecretAccessKey` | string | Yes | AWS secret access key |
| `prefix` | string | No | Filter log groups by name prefix |
| `limit` | number | No | Maximum number of log groups to return |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `logGroups` | array | List of CloudWatch log groups with metadata |
### `cloudwatch_get_log_events`
Retrieve log events from a specific CloudWatch log stream
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `awsRegion` | string | Yes | AWS region \(e.g., us-east-1\) |
| `awsAccessKeyId` | string | Yes | AWS access key ID |
| `awsSecretAccessKey` | string | Yes | AWS secret access key |
| `logGroupName` | string | Yes | CloudWatch log group name |
| `logStreamName` | string | Yes | CloudWatch log stream name |
| `startTime` | number | No | Start time as Unix epoch seconds |
| `endTime` | number | No | End time as Unix epoch seconds |
| `limit` | number | No | Maximum number of events to return |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `events` | array | Log events with timestamp, message, and ingestion time |
### `cloudwatch_describe_log_streams`
List log streams within a CloudWatch log group
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `awsRegion` | string | Yes | AWS region \(e.g., us-east-1\) |
| `awsAccessKeyId` | string | Yes | AWS access key ID |
| `awsSecretAccessKey` | string | Yes | AWS secret access key |
| `logGroupName` | string | Yes | CloudWatch log group name |
| `prefix` | string | No | Filter log streams by name prefix |
| `limit` | number | No | Maximum number of log streams to return |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `logStreams` | array | List of log streams with metadata |
### `cloudwatch_list_metrics`
List available CloudWatch metrics
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `awsRegion` | string | Yes | AWS region \(e.g., us-east-1\) |
| `awsAccessKeyId` | string | Yes | AWS access key ID |
| `awsSecretAccessKey` | string | Yes | AWS secret access key |
| `namespace` | string | No | Filter by namespace \(e.g., AWS/EC2, AWS/Lambda\) |
| `metricName` | string | No | Filter by metric name |
| `recentlyActive` | boolean | No | Only show metrics active in the last 3 hours |
| `limit` | number | No | Maximum number of metrics to return |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `metrics` | array | List of metrics with namespace, name, and dimensions |
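As a sketch, `cloudwatch_list_metrics` can be narrowed by namespace and metric name; the values below are examples, not defaults.

```typescript
// Hypothetical input for cloudwatch_list_metrics, filtered to EC2 CPU metrics.
const listMetricsInput = {
  awsRegion: 'us-east-1',
  awsAccessKeyId: '<your access key id>',
  awsSecretAccessKey: '<your secret access key>',
  namespace: 'AWS/EC2',
  metricName: 'CPUUtilization',
  recentlyActive: true, // only metrics active in the last 3 hours
  limit: 100,
}
```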
### `cloudwatch_get_metric_statistics`
Get statistics for a CloudWatch metric over a time range
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `awsRegion` | string | Yes | AWS region \(e.g., us-east-1\) |
| `awsAccessKeyId` | string | Yes | AWS access key ID |
| `awsSecretAccessKey` | string | Yes | AWS secret access key |
| `namespace` | string | Yes | Metric namespace \(e.g., AWS/EC2, AWS/Lambda\) |
| `metricName` | string | Yes | Metric name \(e.g., CPUUtilization, Invocations\) |
| `startTime` | number | Yes | Start time as Unix epoch seconds |
| `endTime` | number | Yes | End time as Unix epoch seconds |
| `period` | number | Yes | Granularity in seconds \(e.g., 60, 300, 3600\) |
| `statistics` | array | Yes | Statistics to retrieve \(Average, Sum, Minimum, Maximum, SampleCount\) |
| `dimensions` | string | No | Dimensions as JSON \(e.g., \{"InstanceId": "i-1234"\}\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `label` | string | Metric label |
| `datapoints` | array | Datapoints with timestamp and statistics values |
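A minimal sketch for `cloudwatch_get_metric_statistics`. Note that `dimensions` is passed as a JSON string per the table above; the instance ID is a placeholder.

```typescript
// Hypothetical input for cloudwatch_get_metric_statistics (illustrative values).
const getMetricStatisticsInput = {
  awsRegion: 'us-east-1',
  awsAccessKeyId: '<your access key id>',
  awsSecretAccessKey: '<your secret access key>',
  namespace: 'AWS/EC2',
  metricName: 'CPUUtilization',
  startTime: 1743800400, // epoch seconds
  endTime: 1743804000, // epoch seconds
  period: 300, // 5-minute granularity
  statistics: ['Average', 'Maximum'],
  dimensions: JSON.stringify({ InstanceId: 'i-1234' }), // dimensions as a JSON string
}
```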
### `cloudwatch_describe_alarms`
List and filter CloudWatch alarms
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `awsRegion` | string | Yes | AWS region \(e.g., us-east-1\) |
| `awsAccessKeyId` | string | Yes | AWS access key ID |
| `awsSecretAccessKey` | string | Yes | AWS secret access key |
| `alarmNamePrefix` | string | No | Filter alarms by name prefix |
| `stateValue` | string | No | Filter by alarm state \(OK, ALARM, INSUFFICIENT_DATA\) |
| `alarmType` | string | No | Filter by alarm type \(MetricAlarm, CompositeAlarm\) |
| `limit` | number | No | Maximum number of alarms to return |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `alarms` | array | List of CloudWatch alarms with state and configuration |
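And a sketch for `cloudwatch_describe_alarms` that lists metric alarms currently in the ALARM state; the filter values are illustrative.

```typescript
// Hypothetical input for cloudwatch_describe_alarms (illustrative filters).
const describeAlarmsInput = {
  awsRegion: 'us-east-1',
  awsAccessKeyId: '<your access key id>',
  awsSecretAccessKey: '<your secret access key>',
  stateValue: 'ALARM',
  alarmType: 'MetricAlarm',
  limit: 50,
}
```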

View File

@@ -23,6 +23,8 @@
"clay",
"clerk",
"cloudflare",
"cloudformation",
"cloudwatch",
"confluence",
"cursor",
"databricks",

Two binary image files added (63 KiB and 12 KiB); previews not shown.

View File

@@ -1,8 +1,5 @@
# Database (Required)
DATABASE_URL="postgresql://postgres:password@localhost:5432/postgres"
# PostgreSQL Port (Optional) - defaults to 5432 if not specified
# POSTGRES_PORT=5432
DATABASE_URL="postgresql://postgres:your_password@localhost:5432/simstudio"
# Authentication (Required unless DISABLE_AUTH=true)
BETTER_AUTH_SECRET=your_secret_key # Use `openssl rand -hex 32` to generate, or visit https://www.better-auth.com/docs/installation

View File

@@ -1,16 +1,18 @@
'use client'
import { Suspense, useMemo, useRef, useState } from 'react'
import { Suspense, useEffect, useMemo, useRef, useState } from 'react'
import { Turnstile, type TurnstileInstance } from '@marsidev/react-turnstile'
import { createLogger } from '@sim/logger'
import { Eye, EyeOff, Loader2 } from 'lucide-react'
import Link from 'next/link'
import { useRouter, useSearchParams } from 'next/navigation'
import { usePostHog } from 'posthog-js/react'
import { Input, Label } from '@/components/emcn'
import { client, useSession } from '@/lib/auth/auth-client'
import { getEnv, isFalsy, isTruthy } from '@/lib/core/config/env'
import { cn } from '@/lib/core/utils/cn'
import { quickValidateEmail } from '@/lib/messaging/email/validation'
import { captureEvent } from '@/lib/posthog/client'
import { AUTH_SUBMIT_BTN } from '@/app/(auth)/components/auth-button-classes'
import { SocialLoginButtons } from '@/app/(auth)/components/social-login-buttons'
import { SSOLoginButton } from '@/app/(auth)/components/sso-login-button'
@@ -81,7 +83,12 @@ function SignupFormContent({
const router = useRouter()
const searchParams = useSearchParams()
const { refetch: refetchSession } = useSession()
const posthog = usePostHog()
const [isLoading, setIsLoading] = useState(false)
useEffect(() => {
captureEvent(posthog, 'signup_page_viewed', {})
}, [posthog])
const [showPassword, setShowPassword] = useState(false)
const [password, setPassword] = useState('')
const [passwordErrors, setPasswordErrors] = useState<string[]>([])
@@ -92,8 +99,6 @@ function SignupFormContent({
const [showEmailValidationError, setShowEmailValidationError] = useState(false)
const [formError, setFormError] = useState<string | null>(null)
const turnstileRef = useRef<TurnstileInstance>(null)
const captchaResolveRef = useRef<((token: string) => void) | null>(null)
const captchaRejectRef = useRef<((reason: Error) => void) | null>(null)
const turnstileSiteKey = useMemo(() => getEnv('NEXT_PUBLIC_TURNSTILE_SITE_KEY'), [])
const redirectUrl = useMemo(
() => searchParams.get('redirect') || searchParams.get('callbackUrl') || '',
@@ -251,27 +256,14 @@ function SignupFormContent({
let token: string | undefined
const widget = turnstileRef.current
if (turnstileSiteKey && widget) {
let timeoutId: ReturnType<typeof setTimeout> | undefined
try {
widget.reset()
token = await Promise.race([
new Promise<string>((resolve, reject) => {
captchaResolveRef.current = resolve
captchaRejectRef.current = reject
widget.execute()
}),
new Promise<string>((_, reject) => {
timeoutId = setTimeout(() => reject(new Error('Captcha timed out')), 15_000)
}),
])
widget.execute()
token = await widget.getResponsePromise()
} catch {
setFormError('Captcha verification failed. Please try again.')
setIsLoading(false)
return
} finally {
clearTimeout(timeoutId)
captchaResolveRef.current = null
captchaRejectRef.current = null
}
}
@@ -528,10 +520,7 @@ function SignupFormContent({
<Turnstile
ref={turnstileRef}
siteKey={turnstileSiteKey}
onSuccess={(token) => captchaResolveRef.current?.(token)}
onError={() => captchaRejectRef.current?.(new Error('Captcha verification failed'))}
onExpire={() => captchaRejectRef.current?.(new Error('Captcha token expired'))}
options={{ execution: 'execute' }}
options={{ execution: 'execute', appearance: 'execute' }}
/>
)}

View File

@@ -0,0 +1,15 @@
'use client'
import { useEffect } from 'react'
import { usePostHog } from 'posthog-js/react'
import { captureEvent } from '@/lib/posthog/client'
export function LandingAnalytics() {
const posthog = usePostHog()
useEffect(() => {
captureEvent(posthog, 'landing_page_viewed', {})
}, [posthog])
return null
}

View File

@@ -13,6 +13,7 @@ import {
Templates,
Testimonials,
} from '@/app/(home)/components'
import { LandingAnalytics } from '@/app/(home)/landing-analytics'
/**
* Landing page root component.
@@ -45,6 +46,7 @@ export default async function Landing() {
>
Skip to main content
</a>
<LandingAnalytics />
<StructuredData />
<header>
<Navbar blogPosts={blogPosts} />

View File

@@ -27,7 +27,9 @@ import {
CirclebackIcon,
ClayIcon,
ClerkIcon,
CloudFormationIcon,
CloudflareIcon,
CloudWatchIcon,
ConfluenceIcon,
CursorIcon,
DatabricksIcon,
@@ -211,6 +213,8 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
clay: ClayIcon,
clerk: ClerkIcon,
cloudflare: CloudflareIcon,
cloudformation: CloudFormationIcon,
cloudwatch: CloudWatchIcon,
confluence_v2: ConfluenceIcon,
cursor_v2: CursorIcon,
databricks: DatabricksIcon,

View File

@@ -1912,6 +1912,100 @@
"integrationType": "developer-tools",
"tags": ["cloud", "monitoring"]
},
{
"type": "cloudformation",
"slug": "cloudformation",
"name": "CloudFormation",
"description": "Manage and inspect AWS CloudFormation stacks, resources, and drift",
"longDescription": "Integrate AWS CloudFormation into workflows. Describe stacks, list resources, detect drift, view stack events, retrieve templates, and validate templates. Requires AWS access key and secret access key.",
"bgColor": "linear-gradient(45deg, #B0084D 0%, #FF4F8B 100%)",
"iconName": "CloudFormationIcon",
"docsUrl": "https://docs.sim.ai/tools/cloudformation",
"operations": [
{
"name": "Describe Stacks",
"description": "List and describe CloudFormation stacks"
},
{
"name": "List Stack Resources",
"description": "List all resources in a CloudFormation stack"
},
{
"name": "Describe Stack Events",
"description": "Get the event history for a CloudFormation stack"
},
{
"name": "Detect Stack Drift",
"description": "Initiate drift detection on a CloudFormation stack"
},
{
"name": "Drift Detection Status",
"description": "Check the status of a stack drift detection operation"
},
{
"name": "Get Template",
"description": "Retrieve the template body for a CloudFormation stack"
},
{
"name": "Validate Template",
"description": "Validate a CloudFormation template for syntax and structural correctness"
}
],
"operationCount": 7,
"triggers": [],
"triggerCount": 0,
"authType": "none",
"category": "tools",
"integrationType": "developer-tools",
"tags": ["cloud"]
},
{
"type": "cloudwatch",
"slug": "cloudwatch",
"name": "CloudWatch",
"description": "Query and monitor AWS CloudWatch logs, metrics, and alarms",
"longDescription": "Integrate AWS CloudWatch into workflows. Run Log Insights queries, list log groups, retrieve log events, list and get metrics, and monitor alarms. Requires AWS access key and secret access key.",
"bgColor": "linear-gradient(45deg, #B0084D 0%, #FF4F8B 100%)",
"iconName": "CloudWatchIcon",
"docsUrl": "https://docs.sim.ai/tools/cloudwatch",
"operations": [
{
"name": "Query Logs (Insights)",
"description": "Run a CloudWatch Log Insights query against one or more log groups"
},
{
"name": "Describe Log Groups",
"description": "List available CloudWatch log groups"
},
{
"name": "Get Log Events",
"description": "Retrieve log events from a specific CloudWatch log stream"
},
{
"name": "Describe Log Streams",
"description": "List log streams within a CloudWatch log group"
},
{
"name": "List Metrics",
"description": "List available CloudWatch metrics"
},
{
"name": "Get Metric Statistics",
"description": "Get statistics for a CloudWatch metric over a time range"
},
{
"name": "Describe Alarms",
"description": "List and filter CloudWatch alarms"
}
],
"operationCount": 7,
"triggers": [],
"triggerCount": 0,
"authType": "none",
"category": "tools",
"integrationType": "analytics",
"tags": ["cloud", "monitoring"]
},
{
"type": "confluence_v2",
"slug": "confluence",

View File

@@ -18,6 +18,7 @@ import {
formatPrice,
formatTokenCount,
formatUpdatedAt,
getEffectiveMaxOutputTokens,
getModelBySlug,
getPricingBounds,
getProviderBySlug,
@@ -198,7 +199,8 @@ export default async function ModelPage({
</div>
<p className='max-w-[820px] text-[17px] text-[var(--landing-text-muted)] leading-relaxed'>
{model.summary} {model.bestFor}
{model.summary}
{model.bestFor ? ` ${model.bestFor}` : ''}
</p>
<div className='mt-8 flex flex-wrap gap-3'>
@@ -229,13 +231,11 @@ export default async function ModelPage({
? `${formatPrice(model.pricing.cachedInput)}/1M`
: 'N/A'
}
compact
/>
<StatCard label='Output price' value={`${formatPrice(model.pricing.output)}/1M`} />
<StatCard
label='Context window'
value={model.contextWindow ? formatTokenCount(model.contextWindow) : 'Unknown'}
compact
/>
</section>
@@ -280,12 +280,12 @@ export default async function ModelPage({
label='Max output'
value={
model.capabilities.maxOutputTokens
? `${formatTokenCount(model.capabilities.maxOutputTokens)} tokens`
: 'Standard defaults'
? `${formatTokenCount(getEffectiveMaxOutputTokens(model.capabilities))} tokens`
: 'Not published'
}
/>
<DetailItem label='Provider' value={provider.name} />
<DetailItem label='Best for' value={model.bestFor} />
{model.bestFor ? <DetailItem label='Best for' value={model.bestFor} /> : null}
</div>
</section>

View File

@@ -0,0 +1,49 @@
import { describe, expect, it } from 'vitest'
import { buildModelCapabilityFacts, getEffectiveMaxOutputTokens, getModelBySlug } from './utils'
describe('model catalog capability facts', () => {
it.concurrent(
'shows structured outputs support and published max output tokens for gpt-4o',
() => {
const model = getModelBySlug('openai', 'gpt-4o')
expect(model).not.toBeNull()
expect(model).toBeDefined()
const capabilityFacts = buildModelCapabilityFacts(model!)
const structuredOutputs = capabilityFacts.find((fact) => fact.label === 'Structured outputs')
const maxOutputTokens = capabilityFacts.find((fact) => fact.label === 'Max output tokens')
expect(getEffectiveMaxOutputTokens(model!.capabilities)).toBe(16384)
expect(structuredOutputs?.value).toBe('Supported')
expect(maxOutputTokens?.value).toBe('16k')
}
)
it.concurrent('preserves native structured outputs labeling for claude models', () => {
const model = getModelBySlug('anthropic', 'claude-sonnet-4-6')
expect(model).not.toBeNull()
expect(model).toBeDefined()
const capabilityFacts = buildModelCapabilityFacts(model!)
const structuredOutputs = capabilityFacts.find((fact) => fact.label === 'Structured outputs')
expect(structuredOutputs?.value).toBe('Supported (native)')
})
it.concurrent('does not invent a max output token limit when one is not published', () => {
expect(getEffectiveMaxOutputTokens({})).toBeNull()
})
it.concurrent('keeps best-for copy for clearly differentiated models only', () => {
const researchModel = getModelBySlug('google', 'deep-research-pro-preview-12-2025')
const generalModel = getModelBySlug('xai', 'grok-4-latest')
expect(researchModel).not.toBeNull()
expect(generalModel).not.toBeNull()
expect(researchModel?.bestFor).toContain('research workflows')
expect(generalModel?.bestFor).toBeUndefined()
})
})

View File

@@ -112,7 +112,7 @@ export interface CatalogModel {
capabilities: ModelCapabilities
capabilityTags: string[]
summary: string
bestFor: string
bestFor?: string
searchText: string
}
@@ -190,6 +190,14 @@ export function formatCapabilityBoolean(
return value ? positive : negative
}
function supportsCatalogStructuredOutputs(capabilities: ModelCapabilities): boolean {
return !capabilities.deepResearch
}
export function getEffectiveMaxOutputTokens(capabilities: ModelCapabilities): number | null {
return capabilities.maxOutputTokens ?? null
}
function trimTrailingZeros(value: string): string {
return value.replace(/\.0+$/, '').replace(/(\.\d*?)0+$/, '$1')
}
@@ -326,7 +334,7 @@ function buildCapabilityTags(capabilities: ModelCapabilities): string[] {
tags.push('Tool choice')
}
if (capabilities.nativeStructuredOutputs) {
if (supportsCatalogStructuredOutputs(capabilities)) {
tags.push('Structured outputs')
}
@@ -365,7 +373,7 @@ function buildBestForLine(model: {
pricing: PricingInfo
capabilities: ModelCapabilities
contextWindow: number | null
}): string {
}): string | null {
const { pricing, capabilities, contextWindow } = model
if (capabilities.deepResearch) {
@@ -376,10 +384,6 @@ function buildBestForLine(model: {
return 'Best for reasoning-heavy tasks that need more deliberate model control.'
}
if (pricing.input <= 0.2 && pricing.output <= 1.25) {
return 'Best for cost-sensitive automations, background tasks, and high-volume workloads.'
}
if (contextWindow && contextWindow >= 1000000) {
return 'Best for long-context retrieval, large documents, and high-memory workflows.'
}
@@ -388,7 +392,11 @@ function buildBestForLine(model: {
return 'Best for production workflows that need reliable typed outputs.'
}
return 'Best for general-purpose AI workflows inside Sim.'
if (pricing.input <= 0.2 && pricing.output <= 1.25) {
return 'Best for cost-sensitive automations, background tasks, and high-volume workloads.'
}
return null
}
function buildModelSummary(
@@ -437,6 +445,11 @@ const rawProviders = Object.values(PROVIDER_DEFINITIONS).map((provider) => {
const shortId = stripProviderPrefix(provider.id, model.id)
const mergedCapabilities = { ...provider.capabilities, ...model.capabilities }
const capabilityTags = buildCapabilityTags(mergedCapabilities)
const bestFor = buildBestForLine({
pricing: model.pricing,
capabilities: mergedCapabilities,
contextWindow: model.contextWindow ?? null,
})
const displayName = formatModelDisplayName(provider.id, model.id)
const modelSlug = slugify(shortId)
const href = `/models/${providerSlug}/${modelSlug}`
@@ -461,11 +474,7 @@ const rawProviders = Object.values(PROVIDER_DEFINITIONS).map((provider) => {
model.contextWindow ?? null,
capabilityTags
),
bestFor: buildBestForLine({
pricing: model.pricing,
capabilities: mergedCapabilities,
contextWindow: model.contextWindow ?? null,
}),
...(bestFor ? { bestFor } : {}),
searchText: [
provider.name,
providerDisplayName,
@@ -683,6 +692,7 @@ export function buildModelFaqs(provider: CatalogProvider, model: CatalogModel):
export function buildModelCapabilityFacts(model: CatalogModel): CapabilityFact[] {
const { capabilities } = model
const supportsStructuredOutputs = supportsCatalogStructuredOutputs(capabilities)
return [
{
@@ -711,7 +721,11 @@ export function buildModelCapabilityFacts(model: CatalogModel): CapabilityFact[]
},
{
label: 'Structured outputs',
value: formatCapabilityBoolean(capabilities.nativeStructuredOutputs),
value: supportsStructuredOutputs
? capabilities.nativeStructuredOutputs
? 'Supported (native)'
: 'Supported'
: 'Not supported',
},
{
label: 'Tool choice',
@@ -732,8 +746,8 @@ export function buildModelCapabilityFacts(model: CatalogModel): CapabilityFact[]
{
label: 'Max output tokens',
value: capabilities.maxOutputTokens
? formatTokenCount(capabilities.maxOutputTokens)
: 'Standard defaults',
? formatTokenCount(getEffectiveMaxOutputTokens(capabilities))
: 'Not published',
},
]
}
@@ -752,8 +766,8 @@ export function getProviderCapabilitySummary(provider: CatalogProvider): Capabil
const reasoningCount = provider.models.filter(
(model) => model.capabilities.reasoningEffort || model.capabilities.thinking
).length
const structuredCount = provider.models.filter(
(model) => model.capabilities.nativeStructuredOutputs
const structuredCount = provider.models.filter((model) =>
supportsCatalogStructuredOutputs(model.capabilities)
).length
const deepResearchCount = provider.models.filter(
(model) => model.capabilities.deepResearch

View File

@@ -10,7 +10,7 @@
* @see stores/constants.ts for the source of truth
*/
:root {
--sidebar-width: 248px; /* SIDEBAR_WIDTH.DEFAULT */
--sidebar-width: 0px; /* 0 outside workspace; blocking script always sets actual value on workspace pages */
--panel-width: 320px; /* PANEL_WIDTH.DEFAULT */
--toolbar-triggers-height: 300px; /* TOOLBAR_TRIGGERS_HEIGHT.DEFAULT */
--editor-connections-height: 172px; /* EDITOR_CONNECTIONS_HEIGHT.DEFAULT */

View File

@@ -7,6 +7,7 @@ import { generateAgentCard, generateSkillsFromWorkflow } from '@/lib/a2a/agent-c
import type { AgentCapabilities, AgentSkill } from '@/lib/a2a/types'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { getRedisClient } from '@/lib/core/config/redis'
import { captureServerEvent } from '@/lib/posthog/server'
import { loadWorkflowFromNormalizedTables } from '@/lib/workflows/persistence/utils'
import { checkWorkspaceAccess } from '@/lib/workspaces/permissions/utils'
@@ -180,6 +181,17 @@ export async function DELETE(request: NextRequest, { params }: { params: Promise
logger.info(`Deleted A2A agent: ${agentId}`)
captureServerEvent(
auth.userId,
'a2a_agent_deleted',
{
agent_id: agentId,
workflow_id: existingAgent.workflowId,
workspace_id: existingAgent.workspaceId,
},
{ groups: { workspace: existingAgent.workspaceId } }
)
return NextResponse.json({ success: true })
} catch (error) {
logger.error('Error deleting agent:', error)
@@ -251,6 +263,16 @@ export async function POST(request: NextRequest, { params }: { params: Promise<R
}
logger.info(`Published A2A agent: ${agentId}`)
captureServerEvent(
auth.userId,
'a2a_agent_published',
{
agent_id: agentId,
workflow_id: existingAgent.workflowId,
workspace_id: existingAgent.workspaceId,
},
{ groups: { workspace: existingAgent.workspaceId } }
)
return NextResponse.json({ success: true, isPublished: true })
}
@@ -273,6 +295,16 @@ export async function POST(request: NextRequest, { params }: { params: Promise<R
}
logger.info(`Unpublished A2A agent: ${agentId}`)
captureServerEvent(
auth.userId,
'a2a_agent_unpublished',
{
agent_id: agentId,
workflow_id: existingAgent.workflowId,
workspace_id: existingAgent.workspaceId,
},
{ groups: { workspace: existingAgent.workspaceId } }
)
return NextResponse.json({ success: true, isPublished: false })
}

View File

@@ -14,6 +14,7 @@ import { generateSkillsFromWorkflow } from '@/lib/a2a/agent-card'
import { A2A_DEFAULT_CAPABILITIES } from '@/lib/a2a/constants'
import { sanitizeAgentName } from '@/lib/a2a/utils'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { captureServerEvent } from '@/lib/posthog/server'
import { loadWorkflowFromNormalizedTables } from '@/lib/workflows/persistence/utils'
import { hasValidStartBlockInState } from '@/lib/workflows/triggers/trigger-utils'
import { checkWorkspaceAccess } from '@/lib/workspaces/permissions/utils'
@@ -201,6 +202,16 @@ export async function POST(request: NextRequest) {
logger.info(`Created A2A agent ${agentId} for workflow ${workflowId}`)
captureServerEvent(
auth.userId,
'a2a_agent_created',
{ agent_id: agentId, workflow_id: workflowId, workspace_id: workspaceId },
{
groups: { workspace: workspaceId },
setOnce: { first_a2a_agent_created_at: new Date().toISOString() },
}
)
return NextResponse.json({ success: true, agent }, { status: 201 })
} catch (error) {
logger.error('Error creating agent:', error)

View File

@@ -17,6 +17,7 @@ import {
hasUsableSubscriptionStatus,
} from '@/lib/billing/subscriptions/utils'
import { isBillingEnabled } from '@/lib/core/config/feature-flags'
import { captureServerEvent } from '@/lib/posthog/server'
const logger = createLogger('SwitchPlan')
@@ -173,6 +174,13 @@ export async function POST(request: NextRequest) {
interval: targetInterval,
})
captureServerEvent(
userId,
'subscription_changed',
{ from_plan: sub.plan ?? 'unknown', to_plan: targetPlanName, interval: targetInterval },
{ set: { plan: targetPlanName } }
)
return NextResponse.json({ success: true, plan: targetPlanName, interval: targetInterval })
} catch (error) {
logger.error('Failed to switch subscription', {

View File

@@ -4,7 +4,7 @@ import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { recordUsage } from '@/lib/billing/core/usage-log'
import { checkAndBillOverageThreshold } from '@/lib/billing/threshold-billing'
import { checkInternalApiKey } from '@/lib/copilot/request/http'
import { checkInternalApiKey } from '@/lib/copilot/utils'
import { isBillingEnabled } from '@/lib/core/config/feature-flags'
import { generateRequestId } from '@/lib/core/utils/request'

View File

@@ -2,7 +2,7 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkServerSideUsageLimits } from '@/lib/billing/calculations/usage-monitor'
import { checkInternalApiKey } from '@/lib/copilot/request/http'
import { checkInternalApiKey } from '@/lib/copilot/utils'
const logger = createLogger('CopilotApiKeysValidate')

View File

@@ -1,12 +1,10 @@
import { createLogger } from '@sim/logger'
import { NextResponse } from 'next/server'
import { getLatestRunForStream } from '@/lib/copilot/async-runs/repository'
import { abortActiveStream, waitForPendingChatStream } from '@/lib/copilot/chat-streaming'
import { SIM_AGENT_API_URL } from '@/lib/copilot/constants'
import { authenticateCopilotRequestSessionOnly } from '@/lib/copilot/request/http'
import { abortActiveStream } from '@/lib/copilot/request/session/abort'
import { authenticateCopilotRequestSessionOnly } from '@/lib/copilot/request-helpers'
import { env } from '@/lib/core/config/env'
const logger = createLogger('CopilotChatAbortAPI')
const GO_EXPLICIT_ABORT_TIMEOUT_MS = 3000
export async function POST(request: Request) {
@@ -17,12 +15,7 @@ export async function POST(request: Request) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const body = await request.json().catch((err) => {
logger.warn('Abort request body parse failed; continuing with empty object', {
error: err instanceof Error ? err.message : String(err),
})
return {}
})
const body = await request.json().catch(() => ({}))
const streamId = typeof body.streamId === 'string' ? body.streamId : ''
let chatId = typeof body.chatId === 'string' ? body.chatId : ''
@@ -31,13 +24,7 @@ export async function POST(request: Request) {
}
if (!chatId) {
const run = await getLatestRunForStream(streamId, authenticatedUserId).catch((err) => {
logger.warn('getLatestRunForStream failed while resolving chatId for abort', {
streamId,
error: err instanceof Error ? err.message : String(err),
})
return null
})
const run = await getLatestRunForStream(streamId, authenticatedUserId).catch(() => null)
if (run?.chatId) {
chatId = run.chatId
}
@@ -63,13 +50,15 @@ export async function POST(request: Request) {
if (!response.ok) {
throw new Error(`Explicit abort marker request failed: ${response.status}`)
}
} catch (err) {
logger.warn('Explicit abort marker request failed; proceeding with local abort', {
streamId,
error: err instanceof Error ? err.message : String(err),
})
} catch {
// best effort: local abort should still proceed even if Go marker fails
}
const aborted = await abortActiveStream(streamId)
if (chatId) {
await waitForPendingChatStream(chatId, GO_EXPLICIT_ABORT_TIMEOUT_MS + 1000, streamId).catch(
() => false
)
}
return NextResponse.json({ aborted })
}

View File

@@ -36,11 +36,11 @@ vi.mock('drizzle-orm', () => ({
eq: vi.fn((field: unknown, value: unknown) => ({ field, value, type: 'eq' })),
}))
vi.mock('@/lib/copilot/chat/lifecycle', () => ({
vi.mock('@/lib/copilot/chat-lifecycle', () => ({
getAccessibleCopilotChat: mockGetAccessibleCopilotChat,
}))
vi.mock('@/lib/copilot/tasks', () => ({
vi.mock('@/lib/copilot/task-events', () => ({
taskPubSub: { publishStatusChanged: vi.fn() },
}))

View File

@@ -5,8 +5,8 @@ import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { getAccessibleCopilotChat } from '@/lib/copilot/chat/lifecycle'
import { taskPubSub } from '@/lib/copilot/tasks'
import { getAccessibleCopilotChat } from '@/lib/copilot/chat-lifecycle'
import { taskPubSub } from '@/lib/copilot/task-events'
const logger = createLogger('DeleteChatAPI')

View File

@@ -1,119 +0,0 @@
import { db } from '@sim/db'
import { copilotChats } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, desc, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getAccessibleCopilotChat } from '@/lib/copilot/chat/lifecycle'
import {
authenticateCopilotRequestSessionOnly,
createBadRequestResponse,
createInternalServerErrorResponse,
createUnauthorizedResponse,
} from '@/lib/copilot/request/http'
import { authorizeWorkflowByWorkspacePermission } from '@/lib/workflows/utils'
import { assertActiveWorkspaceAccess } from '@/lib/workspaces/permissions/utils'
const logger = createLogger('CopilotChatAPI')
function transformChat(chat: {
id: string
title: string | null
model: string | null
messages: unknown
planArtifact?: unknown
config?: unknown
conversationId?: string | null
resources?: unknown
createdAt: Date | null
updatedAt: Date | null
}) {
return {
id: chat.id,
title: chat.title,
model: chat.model,
messages: Array.isArray(chat.messages) ? chat.messages : [],
messageCount: Array.isArray(chat.messages) ? chat.messages.length : 0,
planArtifact: chat.planArtifact || null,
config: chat.config || null,
...('conversationId' in chat ? { activeStreamId: chat.conversationId || null } : {}),
...('resources' in chat
? { resources: Array.isArray(chat.resources) ? chat.resources : [] }
: {}),
createdAt: chat.createdAt,
updatedAt: chat.updatedAt,
}
}
export async function GET(req: NextRequest) {
try {
const { searchParams } = new URL(req.url)
const workflowId = searchParams.get('workflowId')
const workspaceId = searchParams.get('workspaceId')
const chatId = searchParams.get('chatId')
const { userId: authenticatedUserId, isAuthenticated } =
await authenticateCopilotRequestSessionOnly()
if (!isAuthenticated || !authenticatedUserId) {
return createUnauthorizedResponse()
}
if (chatId) {
const chat = await getAccessibleCopilotChat(chatId, authenticatedUserId)
if (!chat) {
return NextResponse.json({ success: false, error: 'Chat not found' }, { status: 404 })
}
logger.info(`Retrieved chat ${chatId}`)
return NextResponse.json({ success: true, chat: transformChat(chat) })
}
if (!workflowId && !workspaceId) {
return createBadRequestResponse('workflowId, workspaceId, or chatId is required')
}
if (workspaceId) {
await assertActiveWorkspaceAccess(workspaceId, authenticatedUserId)
}
if (workflowId) {
const authorization = await authorizeWorkflowByWorkspacePermission({
workflowId,
userId: authenticatedUserId,
action: 'read',
})
if (!authorization.allowed) {
return createUnauthorizedResponse()
}
}
const scopeFilter = workflowId
? eq(copilotChats.workflowId, workflowId)
: eq(copilotChats.workspaceId, workspaceId!)
const chats = await db
.select({
id: copilotChats.id,
title: copilotChats.title,
model: copilotChats.model,
messages: copilotChats.messages,
planArtifact: copilotChats.planArtifact,
config: copilotChats.config,
createdAt: copilotChats.createdAt,
updatedAt: copilotChats.updatedAt,
})
.from(copilotChats)
.where(and(eq(copilotChats.userId, authenticatedUserId), scopeFilter))
.orderBy(desc(copilotChats.updatedAt))
const scope = workflowId ? `workflow ${workflowId}` : `workspace ${workspaceId}`
logger.info(`Retrieved ${chats.length} chats for ${scope}`)
return NextResponse.json({
success: true,
chats: chats.map(transformChat),
})
} catch (error) {
logger.error('Error fetching copilot chats:', error)
return createInternalServerErrorResponse('Failed to fetch chats')
}
}

View File

@@ -1,65 +0,0 @@
import { db } from '@sim/db'
import { copilotChats } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { getAccessibleCopilotChat } from '@/lib/copilot/chat/lifecycle'
import { taskPubSub } from '@/lib/copilot/tasks'
const logger = createLogger('RenameChatAPI')
const RenameChatSchema = z.object({
chatId: z.string().min(1),
title: z.string().min(1).max(200),
})
export async function PATCH(request: NextRequest) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ success: false, error: 'Unauthorized' }, { status: 401 })
}
const body = await request.json()
const { chatId, title } = RenameChatSchema.parse(body)
const chat = await getAccessibleCopilotChat(chatId, session.user.id)
if (!chat) {
return NextResponse.json({ success: false, error: 'Chat not found' }, { status: 404 })
}
const now = new Date()
const [updated] = await db
.update(copilotChats)
.set({ title, updatedAt: now, lastSeenAt: now })
.where(and(eq(copilotChats.id, chatId), eq(copilotChats.userId, session.user.id)))
.returning({ id: copilotChats.id, workspaceId: copilotChats.workspaceId })
if (!updated) {
return NextResponse.json({ success: false, error: 'Chat not found' }, { status: 404 })
}
logger.info('Chat renamed', { chatId, title })
if (updated.workspaceId) {
taskPubSub?.publishStatusChanged({
workspaceId: updated.workspaceId,
chatId,
type: 'renamed',
})
}
return NextResponse.json({ success: true })
} catch (error) {
if (error instanceof z.ZodError) {
return NextResponse.json(
{ success: false, error: 'Invalid request data', details: error.errors },
{ status: 400 }
)
}
logger.error('Error renaming chat:', error)
return NextResponse.json({ success: false, error: 'Failed to rename chat' }, { status: 500 })
}
}

View File

@@ -10,8 +10,8 @@ import {
createInternalServerErrorResponse,
createNotFoundResponse,
createUnauthorizedResponse,
} from '@/lib/copilot/request/http'
import type { ChatResource, ResourceType } from '@/lib/copilot/resources/persistence'
} from '@/lib/copilot/request-helpers'
import type { ChatResource, ResourceType } from '@/lib/copilot/resources'
const logger = createLogger('CopilotChatResourcesAPI')

View File

@@ -1,45 +1,46 @@
import { db } from '@sim/db'
import { copilotChats } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq, sql } from 'drizzle-orm'
import { and, desc, eq, sql } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { type ChatLoadResult, resolveOrCreateChat } from '@/lib/copilot/chat/lifecycle'
import { buildCopilotRequestPayload } from '@/lib/copilot/chat/payload'
import {
buildPersistedAssistantMessage,
buildPersistedUserMessage,
} from '@/lib/copilot/chat/persisted-message'
import {
processContextsServer,
resolveActiveResourceContext,
} from '@/lib/copilot/chat/process-contents'
import { COPILOT_REQUEST_MODES } from '@/lib/copilot/constants'
import {
createBadRequestResponse,
createRequestTracker,
createUnauthorizedResponse,
} from '@/lib/copilot/request/http'
import { createSSEStream, SSE_RESPONSE_HEADERS } from '@/lib/copilot/request/lifecycle/start'
import { createRunSegment } from '@/lib/copilot/async-runs/repository'
import { getAccessibleCopilotChat, resolveOrCreateChat } from '@/lib/copilot/chat-lifecycle'
import { buildCopilotRequestPayload } from '@/lib/copilot/chat-payload'
import {
acquirePendingChatStream,
getPendingChatStreamId,
createSSEStream,
releasePendingChatStream,
} from '@/lib/copilot/request/session'
import type { OrchestratorResult } from '@/lib/copilot/request/types'
import { getWorkflowById, resolveWorkflowIdForUser } from '@/lib/workflows/utils'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
import type { ChatContext } from '@/stores/panel'
requestChatTitle,
SSE_RESPONSE_HEADERS,
} from '@/lib/copilot/chat-streaming'
import { COPILOT_REQUEST_MODES } from '@/lib/copilot/models'
import { orchestrateCopilotStream } from '@/lib/copilot/orchestrator'
import { getStreamMeta, readStreamEvents } from '@/lib/copilot/orchestrator/stream/buffer'
import type { OrchestratorResult } from '@/lib/copilot/orchestrator/types'
import { resolveActiveResourceContext } from '@/lib/copilot/process-contents'
import {
authenticateCopilotRequestSessionOnly,
createBadRequestResponse,
createInternalServerErrorResponse,
createRequestTracker,
createUnauthorizedResponse,
} from '@/lib/copilot/request-helpers'
import { captureServerEvent } from '@/lib/posthog/server'
import {
authorizeWorkflowByWorkspacePermission,
resolveWorkflowIdForUser,
} from '@/lib/workflows/utils'
import {
assertActiveWorkspaceAccess,
getUserEntityPermissions,
} from '@/lib/workspaces/permissions/utils'
export const maxDuration = 3600
const logger = createLogger('CopilotChatAPI')
// ---------------------------------------------------------------------------
// Schemas
// ---------------------------------------------------------------------------
const FileAttachmentSchema = z.object({
id: z.string(),
key: z.string(),
@@ -66,6 +67,7 @@ const ChatMessageSchema = z.object({
mode: z.enum(COPILOT_REQUEST_MODES).optional().default('agent'),
prefetch: z.boolean().optional(),
createNewChat: z.boolean().optional().default(false),
stream: z.boolean().optional().default(true),
implicitFeedback: z.string().optional(),
fileAttachments: z.array(FileAttachmentSchema).optional(),
resourceAttachments: z.array(ResourceAttachmentSchema).optional(),
@@ -103,25 +105,27 @@ const ChatMessageSchema = z.object({
userTimezone: z.string().optional(),
})
// ---------------------------------------------------------------------------
// POST /api/copilot/chat
// ---------------------------------------------------------------------------
/**
* POST /api/copilot/chat
* Send messages to sim agent and handle chat persistence
*/
export async function POST(req: NextRequest) {
const tracker = createRequestTracker()
let actualChatId: string | undefined
let chatStreamLockAcquired = false
let userMessageIdToUse = ''
let pendingChatStreamAcquired = false
let pendingChatStreamHandedOff = false
let pendingChatStreamID: string | undefined
try {
// 1. Auth
// Get session to access user information including name
const session = await getSession()
if (!session?.user?.id) {
return createUnauthorizedResponse()
}
const authenticatedUserId = session.user.id
// 2. Parse & validate
const body = await req.json()
const {
message,
@@ -134,6 +138,7 @@ export async function POST(req: NextRequest) {
mode,
prefetch,
createNewChat,
stream,
implicitFeedback,
fileAttachments,
resourceAttachments,
@@ -147,12 +152,17 @@ export async function POST(req: NextRequest) {
? contexts.map((ctx) => {
if (ctx.kind !== 'blocks') return ctx
if (Array.isArray(ctx.blockIds) && ctx.blockIds.length > 0) return ctx
if (ctx.blockId) return { ...ctx, blockIds: [ctx.blockId] }
if (ctx.blockId) {
return {
...ctx,
blockIds: [ctx.blockId],
}
}
return ctx
})
: contexts
// 3. Resolve workflow & workspace
// Copilot route always requires a workflow scope
const resolved = await resolveWorkflowIdForUser(
authenticatedUserId,
providedWorkflowId,
@@ -164,28 +174,63 @@ export async function POST(req: NextRequest) {
'No workflows found. Create a workflow first or provide a valid workflowId.'
)
}
const { workflowId, workflowName: workflowResolvedName } = resolved
const workflowId = resolved.workflowId
const workflowResolvedName = resolved.workflowName
// Resolve workspace from workflow so it can be sent as implicit context to the copilot.
let resolvedWorkspaceId: string | undefined
try {
const { getWorkflowById } = await import('@/lib/workflows/utils')
const wf = await getWorkflowById(workflowId)
resolvedWorkspaceId = wf?.workspaceId ?? undefined
} catch {
logger.warn(`[${tracker.requestId}] Failed to resolve workspaceId from workflow`)
logger
.withMetadata({ requestId: tracker.requestId, messageId: userMessageId })
.warn('Failed to resolve workspaceId from workflow')
}
userMessageIdToUse = userMessageId || crypto.randomUUID()
const selectedModel = model || 'claude-opus-4-6'
captureServerEvent(
authenticatedUserId,
'copilot_chat_sent',
{
workflow_id: workflowId,
workspace_id: resolvedWorkspaceId ?? '',
has_file_attachments: Array.isArray(fileAttachments) && fileAttachments.length > 0,
has_contexts: Array.isArray(contexts) && contexts.length > 0,
mode,
},
{
groups: resolvedWorkspaceId ? { workspace: resolvedWorkspaceId } : undefined,
setOnce: { first_copilot_use_at: new Date().toISOString() },
}
)
logger.info(`[${tracker.requestId}] Received chat POST`, {
workflowId,
contextsCount: Array.isArray(normalizedContexts) ? normalizedContexts.length : 0,
const userMessageIdToUse = userMessageId || crypto.randomUUID()
const reqLogger = logger.withMetadata({
requestId: tracker.requestId,
messageId: userMessageIdToUse,
})
try {
reqLogger.info('Received chat POST', {
workflowId,
hasContexts: Array.isArray(normalizedContexts),
contextsCount: Array.isArray(normalizedContexts) ? normalizedContexts.length : 0,
contextsPreview: Array.isArray(normalizedContexts)
? normalizedContexts.map((c: any) => ({
kind: c?.kind,
chatId: c?.chatId,
workflowId: c?.workflowId,
executionId: (c as any)?.executionId,
label: c?.label,
}))
: undefined,
})
} catch {}
// 4. Resolve or create chat
let currentChat: ChatLoadResult['chat'] = null
let conversationHistory: unknown[] = []
let currentChat: any = null
let conversationHistory: any[] = []
actualChatId = chatId
const selectedModel = model || 'claude-opus-4-6'
if (chatId || createNewChat) {
const chatResult = await resolveOrCreateChat({
@@ -205,48 +250,37 @@ export async function POST(req: NextRequest) {
}
}
if (actualChatId) {
chatStreamLockAcquired = await acquirePendingChatStream(actualChatId, userMessageIdToUse)
if (!chatStreamLockAcquired) {
const activeStreamId = await getPendingChatStreamId(actualChatId)
return NextResponse.json(
{
error: 'A response is already in progress for this chat.',
...(activeStreamId ? { activeStreamId } : {}),
},
{ status: 409 }
)
}
}
// 5. Process contexts
let agentContexts: Array<{ type: string; content: string }> = []
if (Array.isArray(normalizedContexts) && normalizedContexts.length > 0) {
try {
const { processContextsServer } = await import('@/lib/copilot/process-contents')
const processed = await processContextsServer(
normalizedContexts as ChatContext[],
normalizedContexts as any,
authenticatedUserId,
message,
resolvedWorkspaceId,
actualChatId
)
agentContexts = processed
logger.info(`[${tracker.requestId}] Contexts processed`, {
reqLogger.info('Contexts processed for request', {
processedCount: agentContexts.length,
kinds: agentContexts.map((c) => c.type),
lengthPreview: agentContexts.map((c) => c.content?.length ?? 0),
})
if (agentContexts.length === 0) {
logger.warn(
`[${tracker.requestId}] Contexts provided but none processed. Check executionId for logs contexts.`
if (
Array.isArray(normalizedContexts) &&
normalizedContexts.length > 0 &&
agentContexts.length === 0
) {
reqLogger.warn(
'Contexts provided but none processed. Check executionId for logs contexts.'
)
}
} catch (e) {
logger.error(`[${tracker.requestId}] Failed to process contexts`, e)
reqLogger.error('Failed to process contexts', e)
}
}
// 5b. Process resource attachments
if (
Array.isArray(resourceAttachments) &&
resourceAttachments.length > 0 &&
@@ -262,30 +296,26 @@ export async function POST(req: NextRequest) {
actualChatId
)
if (!ctx) return null
return { ...ctx, tag: r.active ? '@active_tab' : '@open_tab' }
return {
...ctx,
tag: r.active ? '@active_tab' : '@open_tab',
}
})
)
for (const result of results) {
if (result.status === 'fulfilled' && result.value) {
agentContexts.push(result.value)
} else if (result.status === 'rejected') {
logger.error(
`[${tracker.requestId}] Failed to resolve resource attachment`,
result.reason
)
reqLogger.error('Failed to resolve resource attachment', result.reason)
}
}
}
// 6. Build copilot request payload
const effectiveMode = mode === 'agent' ? 'build' : mode
const userPermission = resolvedWorkspaceId
? await getUserEntityPermissions(authenticatedUserId, 'workspace', resolvedWorkspaceId).catch(
(err) => {
logger.warn('Failed to load user permissions', {
error: err instanceof Error ? err.message : String(err),
})
return null
}
() => null
)
: null
@@ -309,24 +339,55 @@ export async function POST(req: NextRequest) {
userPermission: userPermission ?? undefined,
userTimezone,
},
{ selectedModel }
{
selectedModel,
}
)
logger.info(`[${tracker.requestId}] About to call Sim Agent`, {
contextCount: agentContexts.length,
hasFileAttachments: Array.isArray(requestPayload.fileAttachments),
messageLength: message.length,
mode,
})
// 7. Persist user message
if (actualChatId) {
const userMsg = buildPersistedUserMessage({
id: userMessageIdToUse,
content: message,
fileAttachments,
contexts: normalizedContexts,
try {
reqLogger.info('About to call Sim Agent', {
hasContext: agentContexts.length > 0,
contextCount: agentContexts.length,
hasFileAttachments: Array.isArray(requestPayload.fileAttachments),
messageLength: message.length,
mode: effectiveMode,
hasTools: Array.isArray(requestPayload.tools),
toolCount: Array.isArray(requestPayload.tools) ? requestPayload.tools.length : 0,
hasBaseTools: Array.isArray(requestPayload.baseTools),
baseToolCount: Array.isArray(requestPayload.baseTools)
? requestPayload.baseTools.length
: 0,
hasCredentials: !!requestPayload.credentials,
})
} catch {}
if (stream && actualChatId) {
const acquired = await acquirePendingChatStream(actualChatId, userMessageIdToUse)
if (!acquired) {
return NextResponse.json(
{
error:
'A response is already in progress for this chat. Wait for it to finish or use Stop.',
},
{ status: 409 }
)
}
pendingChatStreamAcquired = true
pendingChatStreamID = userMessageIdToUse
}
if (actualChatId) {
const userMsg = {
id: userMessageIdToUse,
role: 'user' as const,
content: message,
timestamp: new Date().toISOString(),
...(fileAttachments && fileAttachments.length > 0 && { fileAttachments }),
...(Array.isArray(normalizedContexts) &&
normalizedContexts.length > 0 && {
contexts: normalizedContexts,
}),
}
const [updated] = await db
.update(copilotChats)
@@ -339,66 +400,268 @@ export async function POST(req: NextRequest) {
.returning({ messages: copilotChats.messages })
if (updated) {
const freshMessages: Record<string, unknown>[] = Array.isArray(updated.messages)
? updated.messages
: []
conversationHistory = freshMessages.filter(
(m: Record<string, unknown>) => m.id !== userMessageIdToUse
)
const freshMessages: any[] = Array.isArray(updated.messages) ? updated.messages : []
conversationHistory = freshMessages.filter((m: any) => m.id !== userMessageIdToUse)
}
}
// 8. Create SSE stream with onComplete for assistant message persistence
const executionId = crypto.randomUUID()
const runId = crypto.randomUUID()
const sseStream = createSSEStream({
requestPayload,
userId: authenticatedUserId,
streamId: userMessageIdToUse,
executionId,
runId,
chatId: actualChatId,
currentChat,
isNewChat: conversationHistory.length === 0,
message,
titleModel: selectedModel,
titleProvider: provider,
requestId: tracker.requestId,
workspaceId: resolvedWorkspaceId,
orchestrateOptions: {
if (stream) {
const executionId = crypto.randomUUID()
const runId = crypto.randomUUID()
const sseStream = createSSEStream({
requestPayload,
userId: authenticatedUserId,
workflowId,
chatId: actualChatId,
streamId: userMessageIdToUse,
executionId,
runId,
goRoute: '/api/copilot',
autoExecuteTools: true,
interactive: true,
onComplete: buildOnComplete(actualChatId, userMessageIdToUse, tracker.requestId),
},
chatId: actualChatId,
currentChat,
isNewChat: conversationHistory.length === 0,
message,
titleModel: selectedModel,
titleProvider: provider,
requestId: tracker.requestId,
workspaceId: resolvedWorkspaceId,
pendingChatStreamAlreadyRegistered: Boolean(actualChatId && stream),
orchestrateOptions: {
userId: authenticatedUserId,
workflowId,
chatId: actualChatId,
executionId,
runId,
goRoute: '/api/copilot',
autoExecuteTools: true,
interactive: true,
onComplete: async (result: OrchestratorResult) => {
if (!actualChatId) return
if (!result.success) return
const assistantMessage: Record<string, unknown> = {
id: crypto.randomUUID(),
role: 'assistant' as const,
content: result.content,
timestamp: new Date().toISOString(),
...(result.requestId ? { requestId: result.requestId } : {}),
}
if (result.toolCalls.length > 0) {
assistantMessage.toolCalls = result.toolCalls
}
if (result.contentBlocks.length > 0) {
assistantMessage.contentBlocks = result.contentBlocks.map((block) => {
const stored: Record<string, unknown> = { type: block.type }
if (block.content) stored.content = block.content
if (block.type === 'tool_call' && block.toolCall) {
const state =
block.toolCall.result?.success !== undefined
? block.toolCall.result.success
? 'success'
: 'error'
: block.toolCall.status
const isSubagentTool = !!block.calledBy
const isNonTerminal =
state === 'cancelled' || state === 'pending' || state === 'executing'
stored.toolCall = {
id: block.toolCall.id,
name: block.toolCall.name,
state,
...(isSubagentTool && isNonTerminal ? {} : { result: block.toolCall.result }),
...(isSubagentTool && isNonTerminal
? {}
: block.toolCall.params
? { params: block.toolCall.params }
: {}),
...(block.calledBy ? { calledBy: block.calledBy } : {}),
}
}
return stored
})
}
try {
const [row] = await db
.select({ messages: copilotChats.messages })
.from(copilotChats)
.where(eq(copilotChats.id, actualChatId))
.limit(1)
const msgs: any[] = Array.isArray(row?.messages) ? row.messages : []
const userIdx = msgs.findIndex((m: any) => m.id === userMessageIdToUse)
const alreadyHasResponse =
userIdx >= 0 &&
userIdx + 1 < msgs.length &&
(msgs[userIdx + 1] as any)?.role === 'assistant'
if (!alreadyHasResponse) {
await db
.update(copilotChats)
.set({
messages: sql`${copilotChats.messages} || ${JSON.stringify([assistantMessage])}::jsonb`,
conversationId: sql`CASE WHEN ${copilotChats.conversationId} = ${userMessageIdToUse} THEN NULL ELSE ${copilotChats.conversationId} END`,
updatedAt: new Date(),
})
.where(eq(copilotChats.id, actualChatId))
}
} catch (error) {
reqLogger.error('Failed to persist chat messages', {
chatId: actualChatId,
error: error instanceof Error ? error.message : 'Unknown error',
})
}
},
},
})
pendingChatStreamHandedOff = true
return new Response(sseStream, { headers: SSE_RESPONSE_HEADERS })
}
const nsExecutionId = crypto.randomUUID()
const nsRunId = crypto.randomUUID()
if (actualChatId) {
await createRunSegment({
id: nsRunId,
executionId: nsExecutionId,
chatId: actualChatId,
userId: authenticatedUserId,
workflowId,
streamId: userMessageIdToUse,
}).catch(() => {})
}
const nonStreamingResult = await orchestrateCopilotStream(requestPayload, {
userId: authenticatedUserId,
workflowId,
chatId: actualChatId,
executionId: nsExecutionId,
runId: nsRunId,
goRoute: '/api/copilot',
autoExecuteTools: true,
interactive: true,
})
return new Response(sseStream, { headers: SSE_RESPONSE_HEADERS })
const responseData = {
content: nonStreamingResult.content,
toolCalls: nonStreamingResult.toolCalls,
model: selectedModel,
provider: typeof requestPayload?.provider === 'string' ? requestPayload.provider : undefined,
}
reqLogger.info('Non-streaming response from orchestrator', {
hasContent: !!responseData.content,
contentLength: responseData.content?.length || 0,
model: responseData.model,
provider: responseData.provider,
toolCallsCount: responseData.toolCalls?.length || 0,
})
// Save messages if we have a chat
if (currentChat && responseData.content) {
const userMessage = {
id: userMessageIdToUse, // Consistent ID used for request and persistence
role: 'user',
content: message,
timestamp: new Date().toISOString(),
...(fileAttachments && fileAttachments.length > 0 && { fileAttachments }),
...(Array.isArray(normalizedContexts) &&
normalizedContexts.length > 0 && {
contexts: normalizedContexts,
}),
...(Array.isArray(normalizedContexts) &&
normalizedContexts.length > 0 && {
contentBlocks: [
{ type: 'contexts', contexts: normalizedContexts as any, timestamp: Date.now() },
],
}),
}
const assistantMessage = {
id: crypto.randomUUID(),
role: 'assistant',
content: responseData.content,
timestamp: new Date().toISOString(),
}
const updatedMessages = [...conversationHistory, userMessage, assistantMessage]
// Start title generation in parallel if this is first message (non-streaming)
if (actualChatId && !currentChat.title && conversationHistory.length === 0) {
reqLogger.info('Starting title generation for non-streaming response')
requestChatTitle({ message, model: selectedModel, provider, messageId: userMessageIdToUse })
.then(async (title) => {
if (title) {
await db
.update(copilotChats)
.set({
title,
updatedAt: new Date(),
})
.where(eq(copilotChats.id, actualChatId!))
reqLogger.info(`Generated and saved title: ${title}`)
}
})
.catch((error) => {
reqLogger.error('Title generation failed', error)
})
}
// Update chat in database immediately (without blocking for title)
await db
.update(copilotChats)
.set({
messages: updatedMessages,
updatedAt: new Date(),
})
.where(eq(copilotChats.id, actualChatId!))
}
reqLogger.info('Returning non-streaming response', {
duration: tracker.getDuration(),
chatId: actualChatId,
responseLength: responseData.content?.length || 0,
})
return NextResponse.json({
success: true,
response: responseData,
chatId: actualChatId,
metadata: {
requestId: tracker.requestId,
message,
duration: tracker.getDuration(),
},
})
} catch (error) {
if (chatStreamLockAcquired && actualChatId && userMessageIdToUse) {
await releasePendingChatStream(actualChatId, userMessageIdToUse)
if (
actualChatId &&
pendingChatStreamAcquired &&
!pendingChatStreamHandedOff &&
pendingChatStreamID
) {
await releasePendingChatStream(actualChatId, pendingChatStreamID).catch(() => {})
}
const duration = tracker.getDuration()
if (error instanceof z.ZodError) {
logger.error(`[${tracker.requestId}] Validation error:`, { duration, errors: error.errors })
logger
.withMetadata({ requestId: tracker.requestId, messageId: pendingChatStreamID ?? undefined })
.error('Validation error', {
duration,
errors: error.errors,
})
return NextResponse.json(
{ error: 'Invalid request data', details: error.errors },
{ status: 400 }
)
}
logger.error(`[${tracker.requestId}] Error handling copilot chat:`, {
duration,
error: error instanceof Error ? error.message : 'Unknown error',
stack: error instanceof Error ? error.stack : undefined,
})
logger
.withMetadata({ requestId: tracker.requestId, messageId: pendingChatStreamID ?? undefined })
.error('Error handling copilot chat', {
duration,
error: error instanceof Error ? error.message : 'Unknown error',
stack: error instanceof Error ? error.stack : undefined,
})
return NextResponse.json(
{ error: error instanceof Error ? error.message : 'Internal server error' },
@@ -407,55 +670,132 @@ export async function POST(req: NextRequest) {
}
}
// ---------------------------------------------------------------------------
// onComplete: persist assistant message after streaming finishes
// ---------------------------------------------------------------------------
export async function GET(req: NextRequest) {
try {
const { searchParams } = new URL(req.url)
const workflowId = searchParams.get('workflowId')
const workspaceId = searchParams.get('workspaceId')
const chatId = searchParams.get('chatId')
function buildOnComplete(
chatId: string | undefined,
userMessageId: string,
requestId: string
): (result: OrchestratorResult) => Promise<void> {
return async (result) => {
if (!chatId || !result.success) return
const assistantMessage = buildPersistedAssistantMessage(result, result.requestId)
try {
const [row] = await db
.select({ messages: copilotChats.messages })
.from(copilotChats)
.where(eq(copilotChats.id, chatId))
.limit(1)
const msgs: Record<string, unknown>[] = Array.isArray(row?.messages) ? row.messages : []
const userIdx = msgs.findIndex((m: Record<string, unknown>) => m.id === userMessageId)
const alreadyHasResponse =
userIdx >= 0 &&
userIdx + 1 < msgs.length &&
(msgs[userIdx + 1] as Record<string, unknown>)?.role === 'assistant'
if (!alreadyHasResponse) {
await db
.update(copilotChats)
.set({
messages: sql`${copilotChats.messages} || ${JSON.stringify([assistantMessage])}::jsonb`,
conversationId: sql`CASE WHEN ${copilotChats.conversationId} = ${userMessageId} THEN NULL ELSE ${copilotChats.conversationId} END`,
updatedAt: new Date(),
})
.where(eq(copilotChats.id, chatId))
}
} catch (error) {
logger.error(`[${requestId}] Failed to persist chat messages`, {
chatId,
error: error instanceof Error ? error.message : 'Unknown error',
})
const { userId: authenticatedUserId, isAuthenticated } =
await authenticateCopilotRequestSessionOnly()
if (!isAuthenticated || !authenticatedUserId) {
return createUnauthorizedResponse()
}
if (chatId) {
const chat = await getAccessibleCopilotChat(chatId, authenticatedUserId)
if (!chat) {
return NextResponse.json({ success: false, error: 'Chat not found' }, { status: 404 })
}
let streamSnapshot: {
events: Array<{ eventId: number; streamId: string; event: Record<string, unknown> }>
status: string
} | null = null
if (chat.conversationId) {
try {
const [meta, events] = await Promise.all([
getStreamMeta(chat.conversationId),
readStreamEvents(chat.conversationId, 0),
])
streamSnapshot = {
events: events || [],
status: meta?.status || 'unknown',
}
} catch (err) {
logger
.withMetadata({ messageId: chat.conversationId || undefined })
.warn('Failed to read stream snapshot for chat', {
chatId,
conversationId: chat.conversationId,
error: err instanceof Error ? err.message : String(err),
})
}
}
const transformedChat = {
id: chat.id,
title: chat.title,
model: chat.model,
messages: Array.isArray(chat.messages) ? chat.messages : [],
messageCount: Array.isArray(chat.messages) ? chat.messages.length : 0,
planArtifact: chat.planArtifact || null,
config: chat.config || null,
conversationId: chat.conversationId || null,
resources: Array.isArray(chat.resources) ? chat.resources : [],
createdAt: chat.createdAt,
updatedAt: chat.updatedAt,
...(streamSnapshot ? { streamSnapshot } : {}),
}
logger
.withMetadata({ messageId: chat.conversationId || undefined })
.info(`Retrieved chat ${chatId}`)
return NextResponse.json({ success: true, chat: transformedChat })
}
if (!workflowId && !workspaceId) {
return createBadRequestResponse('workflowId, workspaceId, or chatId is required')
}
if (workspaceId) {
await assertActiveWorkspaceAccess(workspaceId, authenticatedUserId)
}
if (workflowId) {
const authorization = await authorizeWorkflowByWorkspacePermission({
workflowId,
userId: authenticatedUserId,
action: 'read',
})
if (!authorization.allowed) {
return createUnauthorizedResponse()
}
}
const scopeFilter = workflowId
? eq(copilotChats.workflowId, workflowId)
: eq(copilotChats.workspaceId, workspaceId!)
const chats = await db
.select({
id: copilotChats.id,
title: copilotChats.title,
model: copilotChats.model,
messages: copilotChats.messages,
planArtifact: copilotChats.planArtifact,
config: copilotChats.config,
createdAt: copilotChats.createdAt,
updatedAt: copilotChats.updatedAt,
})
.from(copilotChats)
.where(and(eq(copilotChats.userId, authenticatedUserId), scopeFilter))
.orderBy(desc(copilotChats.updatedAt))
const transformedChats = chats.map((chat) => ({
id: chat.id,
title: chat.title,
model: chat.model,
messages: Array.isArray(chat.messages) ? chat.messages : [],
messageCount: Array.isArray(chat.messages) ? chat.messages.length : 0,
planArtifact: chat.planArtifact || null,
config: chat.config || null,
createdAt: chat.createdAt,
updatedAt: chat.updatedAt,
}))
const scope = workflowId ? `workflow ${workflowId}` : `workspace ${workspaceId}`
logger.info(`Retrieved ${transformedChats.length} chats for ${scope}`)
return NextResponse.json({
success: true,
chats: transformedChats,
})
} catch (error) {
logger.error('Error fetching copilot chats', error)
return createInternalServerErrorResponse('Failed to fetch chats')
}
}
// ---------------------------------------------------------------------------
// GET handler (read-only queries, extracted to queries.ts)
// ---------------------------------------------------------------------------
export { GET } from './queries'
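
A minimal sketch of how the persistence callback above is expected to be wired, assuming an orchestrator entry point of roughly this shape (runOrchestrator and its option names are illustrative, not taken from this diff):

// Illustrative wiring only: runOrchestrator and its options are assumed names.
const onComplete = buildOnComplete(chatId ?? undefined, userMessageId, requestId)
await runOrchestrator({
  chatId,
  userMessageId,
  // Invoked once streaming finishes; the callback appends the assistant message via a
  // jsonb concat and skips the write if a response is already persisted for userMessageId.
  onComplete,
})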

View File

@@ -4,67 +4,25 @@
import { NextRequest } from 'next/server'
import { beforeEach, describe, expect, it, vi } from 'vitest'
import {
MothershipStreamV1CompletionStatus,
MothershipStreamV1EventType,
} from '@/lib/copilot/generated/mothership-stream-v1'
const {
getLatestRunForStream,
readEvents,
checkForReplayGap,
authenticateCopilotRequestSessionOnly,
} = vi.hoisted(() => ({
getLatestRunForStream: vi.fn(),
readEvents: vi.fn(),
checkForReplayGap: vi.fn(),
authenticateCopilotRequestSessionOnly: vi.fn(),
const { getStreamMeta, readStreamEvents, authenticateCopilotRequestSessionOnly } = vi.hoisted(
() => ({
getStreamMeta: vi.fn(),
readStreamEvents: vi.fn(),
authenticateCopilotRequestSessionOnly: vi.fn(),
})
)
vi.mock('@/lib/copilot/orchestrator/stream/buffer', () => ({
getStreamMeta,
readStreamEvents,
}))
vi.mock('@/lib/copilot/async-runs/repository', () => ({
getLatestRunForStream,
}))
vi.mock('@/lib/copilot/request/session', () => ({
readEvents,
checkForReplayGap,
createEvent: (event: Record<string, unknown>) => ({
stream: {
streamId: event.streamId,
cursor: event.cursor,
},
seq: event.seq,
trace: { requestId: event.requestId ?? '' },
type: event.type,
payload: event.payload,
}),
encodeSSEEnvelope: (event: Record<string, unknown>) =>
new TextEncoder().encode(`data: ${JSON.stringify(event)}\n\n`),
SSE_RESPONSE_HEADERS: {
'Content-Type': 'text/event-stream',
},
}))
vi.mock('@/lib/copilot/request/http', () => ({
vi.mock('@/lib/copilot/request-helpers', () => ({
authenticateCopilotRequestSessionOnly,
}))
import { GET } from './route'
async function readAllChunks(response: Response): Promise<string[]> {
const reader = response.body?.getReader()
expect(reader).toBeTruthy()
const chunks: string[] = []
while (true) {
const { done, value } = await reader!.read()
if (done) {
break
}
chunks.push(new TextDecoder().decode(value))
}
return chunks
}
import { GET } from '@/app/api/copilot/chat/stream/route'
describe('copilot chat stream replay route', () => {
beforeEach(() => {
@@ -73,54 +31,29 @@ describe('copilot chat stream replay route', () => {
userId: 'user-1',
isAuthenticated: true,
})
readEvents.mockResolvedValue([])
checkForReplayGap.mockResolvedValue(null)
readStreamEvents.mockResolvedValue([])
})
it('stops replay polling when run becomes cancelled', async () => {
getLatestRunForStream
it('stops replay polling when stream meta becomes cancelled', async () => {
getStreamMeta
.mockResolvedValueOnce({
status: 'active',
executionId: 'exec-1',
id: 'run-1',
userId: 'user-1',
})
.mockResolvedValueOnce({
status: 'cancelled',
executionId: 'exec-1',
id: 'run-1',
userId: 'user-1',
})
const response = await GET(
new NextRequest('http://localhost:3000/api/copilot/chat/stream?streamId=stream-1&after=0')
new NextRequest('http://localhost:3000/api/copilot/chat/stream?streamId=stream-1')
)
const chunks = await readAllChunks(response)
expect(chunks.join('')).toContain(
JSON.stringify({
status: MothershipStreamV1CompletionStatus.cancelled,
reason: 'terminal_status',
})
)
expect(getLatestRunForStream).toHaveBeenCalledTimes(2)
})
const reader = response.body?.getReader()
expect(reader).toBeTruthy()
it('emits structured terminal replay error when run metadata disappears', async () => {
getLatestRunForStream
.mockResolvedValueOnce({
status: 'active',
executionId: 'exec-1',
id: 'run-1',
})
.mockResolvedValueOnce(null)
const response = await GET(
new NextRequest('http://localhost:3000/api/copilot/chat/stream?streamId=stream-1&after=0')
)
const chunks = await readAllChunks(response)
const body = chunks.join('')
expect(body).toContain(`"type":"${MothershipStreamV1EventType.error}"`)
expect(body).toContain('"code":"resume_run_unavailable"')
expect(body).toContain(`"type":"${MothershipStreamV1EventType.complete}"`)
const first = await reader!.read()
expect(first.done).toBe(true)
expect(getStreamMeta).toHaveBeenCalledTimes(2)
})
})

View File

@@ -1,18 +1,12 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { getLatestRunForStream } from '@/lib/copilot/async-runs/repository'
import {
MothershipStreamV1CompletionStatus,
MothershipStreamV1EventType,
} from '@/lib/copilot/generated/mothership-stream-v1'
import { authenticateCopilotRequestSessionOnly } from '@/lib/copilot/request/http'
import {
checkForReplayGap,
createEvent,
encodeSSEEnvelope,
readEvents,
SSE_RESPONSE_HEADERS,
} from '@/lib/copilot/request/session'
getStreamMeta,
readStreamEvents,
type StreamMeta,
} from '@/lib/copilot/orchestrator/stream/buffer'
import { authenticateCopilotRequestSessionOnly } from '@/lib/copilot/request-helpers'
import { SSE_HEADERS } from '@/lib/core/utils/sse'
export const maxDuration = 3600
@@ -20,59 +14,8 @@ const logger = createLogger('CopilotChatStreamAPI')
const POLL_INTERVAL_MS = 250
const MAX_STREAM_MS = 60 * 60 * 1000
function isTerminalStatus(
status: string | null | undefined
): status is MothershipStreamV1CompletionStatus {
return (
status === MothershipStreamV1CompletionStatus.complete ||
status === MothershipStreamV1CompletionStatus.error ||
status === MothershipStreamV1CompletionStatus.cancelled
)
}
function buildResumeTerminalEnvelopes(options: {
streamId: string
afterCursor: string
status: MothershipStreamV1CompletionStatus
message?: string
code: string
reason?: string
}) {
const baseSeq = Number(options.afterCursor || '0')
const seq = Number.isFinite(baseSeq) ? baseSeq : 0
const envelopes: ReturnType<typeof createEvent>[] = []
if (options.status === MothershipStreamV1CompletionStatus.error) {
envelopes.push(
createEvent({
streamId: options.streamId,
cursor: String(seq + 1),
seq: seq + 1,
requestId: '',
type: MothershipStreamV1EventType.error,
payload: {
message: options.message || 'Stream recovery failed before completion.',
code: options.code,
},
})
)
}
envelopes.push(
createEvent({
streamId: options.streamId,
cursor: String(seq + envelopes.length + 1),
seq: seq + envelopes.length + 1,
requestId: '',
type: MothershipStreamV1EventType.complete,
payload: {
status: options.status,
...(options.reason ? { reason: options.reason } : {}),
},
})
)
return envelopes
function encodeEvent(event: Record<string, any>): Uint8Array {
return new TextEncoder().encode(`data: ${JSON.stringify(event)}\n\n`)
}
export async function GET(request: NextRequest) {
@@ -85,49 +28,58 @@ export async function GET(request: NextRequest) {
const url = new URL(request.url)
const streamId = url.searchParams.get('streamId') || ''
const afterCursor = url.searchParams.get('after') || ''
const fromParam = url.searchParams.get('from') || '0'
const fromEventId = Number(fromParam || 0)
// If batch=true, return buffered events as JSON instead of SSE
const batchMode = url.searchParams.get('batch') === 'true'
const toParam = url.searchParams.get('to')
const toEventId = toParam ? Number(toParam) : undefined
const reqLogger = logger.withMetadata({ messageId: streamId || undefined })
reqLogger.info('[Resume] Received resume request', {
streamId: streamId || undefined,
fromEventId,
toEventId,
batchMode,
})
if (!streamId) {
return NextResponse.json({ error: 'streamId is required' }, { status: 400 })
}
const run = await getLatestRunForStream(streamId, authenticatedUserId).catch((err) => {
logger.warn('Failed to fetch latest run for stream', {
streamId,
error: err instanceof Error ? err.message : String(err),
})
return null
})
logger.info('[Resume] Stream lookup', {
const meta = (await getStreamMeta(streamId)) as StreamMeta | null
reqLogger.info('[Resume] Stream lookup', {
streamId,
afterCursor,
fromEventId,
toEventId,
batchMode,
hasRun: !!run,
runStatus: run?.status,
hasMeta: !!meta,
metaStatus: meta?.status,
})
if (!run) {
if (!meta) {
return NextResponse.json({ error: 'Stream not found' }, { status: 404 })
}
if (meta.userId && meta.userId !== authenticatedUserId) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 403 })
}
// Batch mode: return all buffered events as JSON
if (batchMode) {
const afterSeq = afterCursor || '0'
const events = await readEvents(streamId, afterSeq)
const batchEvents = events.map((envelope) => ({
eventId: envelope.seq,
streamId: envelope.stream.streamId,
event: envelope,
}))
logger.info('[Resume] Batch response', {
const events = await readStreamEvents(streamId, fromEventId)
const filteredEvents = toEventId ? events.filter((e) => e.eventId <= toEventId) : events
reqLogger.info('[Resume] Batch response', {
streamId,
afterCursor: afterSeq,
eventCount: batchEvents.length,
runStatus: run.status,
fromEventId,
toEventId,
eventCount: filteredEvents.length,
})
return NextResponse.json({
success: true,
events: batchEvents,
status: run.status,
events: filteredEvents,
status: meta.status,
executionId: meta.executionId,
runId: meta.runId,
})
}
@@ -135,9 +87,9 @@ export async function GET(request: NextRequest) {
const stream = new ReadableStream({
async start(controller) {
let cursor = afterCursor || '0'
let lastEventId = Number.isFinite(fromEventId) ? fromEventId : 0
let latestMeta = meta
let controllerClosed = false
let sawTerminalEvent = false
const closeController = () => {
if (controllerClosed) return
@@ -145,14 +97,14 @@ export async function GET(request: NextRequest) {
try {
controller.close()
} catch {
// Controller already closed by runtime/client
// Controller already closed by runtime/client - treat as normal.
}
}
const enqueueEvent = (payload: unknown) => {
const enqueueEvent = (payload: Record<string, any>) => {
if (controllerClosed) return false
try {
controller.enqueue(encodeSSEEnvelope(payload))
controller.enqueue(encodeEvent(payload))
return true
} catch {
controllerClosed = true
@@ -166,96 +118,47 @@ export async function GET(request: NextRequest) {
request.signal.addEventListener('abort', abortListener, { once: true })
const flushEvents = async () => {
const events = await readEvents(streamId, cursor)
const events = await readStreamEvents(streamId, lastEventId)
if (events.length > 0) {
logger.info('[Resume] Flushing events', {
reqLogger.info('[Resume] Flushing events', {
streamId,
afterCursor: cursor,
fromEventId: lastEventId,
eventCount: events.length,
})
}
for (const envelope of events) {
cursor = envelope.stream.cursor ?? String(envelope.seq)
if (envelope.type === MothershipStreamV1EventType.complete) {
sawTerminalEvent = true
for (const entry of events) {
lastEventId = entry.eventId
const payload = {
...entry.event,
eventId: entry.eventId,
streamId: entry.streamId,
executionId: latestMeta?.executionId,
runId: latestMeta?.runId,
}
if (!enqueueEvent(envelope)) {
break
}
}
}
const emitTerminalIfMissing = (
status: MothershipStreamV1CompletionStatus,
options?: { message?: string; code: string; reason?: string }
) => {
if (controllerClosed || sawTerminalEvent) {
return
}
for (const envelope of buildResumeTerminalEnvelopes({
streamId,
afterCursor: cursor,
status,
message: options?.message,
code: options?.code ?? 'resume_terminal',
reason: options?.reason,
})) {
cursor = envelope.stream.cursor ?? String(envelope.seq)
if (envelope.type === MothershipStreamV1EventType.complete) {
sawTerminalEvent = true
}
if (!enqueueEvent(envelope)) {
if (!enqueueEvent(payload)) {
break
}
}
}
try {
const gap = await checkForReplayGap(streamId, afterCursor)
if (gap) {
for (const envelope of gap.envelopes) {
enqueueEvent(envelope)
}
return
}
await flushEvents()
while (!controllerClosed && Date.now() - startTime < MAX_STREAM_MS) {
const currentRun = await getLatestRunForStream(streamId, authenticatedUserId).catch(
(err) => {
logger.warn('Failed to poll latest run for stream', {
streamId,
error: err instanceof Error ? err.message : String(err),
})
return null
}
)
if (!currentRun) {
emitTerminalIfMissing(MothershipStreamV1CompletionStatus.error, {
message: 'The stream could not be recovered because its run metadata is unavailable.',
code: 'resume_run_unavailable',
reason: 'run_unavailable',
})
break
}
const currentMeta = await getStreamMeta(streamId)
if (!currentMeta) break
latestMeta = currentMeta
await flushEvents()
if (controllerClosed) {
break
}
if (isTerminalStatus(currentRun.status)) {
emitTerminalIfMissing(currentRun.status, {
message:
currentRun.status === MothershipStreamV1CompletionStatus.error
? typeof currentRun.error === 'string'
? currentRun.error
: 'The recovered stream ended with an error.'
: undefined,
code: 'resume_terminal_status',
reason: 'terminal_status',
})
if (
currentMeta.status === 'complete' ||
currentMeta.status === 'error' ||
currentMeta.status === 'cancelled'
) {
break
}
@@ -266,24 +169,12 @@ export async function GET(request: NextRequest) {
await new Promise((resolve) => setTimeout(resolve, POLL_INTERVAL_MS))
}
if (!controllerClosed && Date.now() - startTime >= MAX_STREAM_MS) {
emitTerminalIfMissing(MothershipStreamV1CompletionStatus.error, {
message: 'The stream recovery timed out before completion.',
code: 'resume_timeout',
reason: 'timeout',
})
}
} catch (error) {
if (!controllerClosed && !request.signal.aborted) {
logger.warn('Stream replay failed', {
reqLogger.warn('Stream replay failed', {
streamId,
error: error instanceof Error ? error.message : String(error),
})
emitTerminalIfMissing(MothershipStreamV1CompletionStatus.error, {
message: 'The stream replay failed before completion.',
code: 'resume_internal',
reason: 'stream_replay_failed',
})
}
} finally {
request.signal.removeEventListener('abort', abortListener)
@@ -292,5 +183,5 @@ export async function GET(request: NextRequest) {
},
})
return new Response(stream, { headers: SSE_RESPONSE_HEADERS })
return new Response(stream, { headers: SSE_HEADERS })
}
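
Taken together, the rewritten handler supports two consumption patterns: a JSON batch snapshot and an SSE tail. A minimal client-side sketch, assuming a browser environment and a streamId already in scope; the query parameters and response fields follow the handler above:

const base = `/api/copilot/chat/stream?streamId=${streamId}`
// Batch mode: everything buffered so far, returned as JSON.
const snapshot = await fetch(`${base}&batch=true&from=0`).then((r) => r.json())
let lastEventId: number = snapshot.events.at(-1)?.eventId ?? 0
// SSE mode: tail live events from the last seen eventId.
const source = new EventSource(`${base}&from=${lastEventId}`)
source.onmessage = (e) => {
  const event = JSON.parse(e.data)
  lastEventId = event.eventId // each payload carries eventId, streamId, executionId, runId
}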

View File

@@ -327,35 +327,7 @@ describe('Copilot Chat Update Messages API Route', () => {
})
expect(mockSet).toHaveBeenCalledWith({
messages: [
{
id: 'msg-1',
role: 'user',
content: 'Hello',
timestamp: '2024-01-01T10:00:00.000Z',
},
{
id: 'msg-2',
role: 'assistant',
content: 'Hi there!',
timestamp: '2024-01-01T10:01:00.000Z',
contentBlocks: [
{
type: 'text',
content: 'Here is the weather information',
},
{
type: 'tool',
phase: 'call',
toolCall: {
id: 'tool-1',
name: 'get_weather',
state: 'pending',
},
},
],
},
],
messages,
updatedAt: expect.any(Date),
})
})

View File

@@ -4,16 +4,15 @@ import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getAccessibleCopilotChat } from '@/lib/copilot/chat/lifecycle'
import { normalizeMessage, type PersistedMessage } from '@/lib/copilot/chat/persisted-message'
import { COPILOT_MODES } from '@/lib/copilot/constants'
import { getAccessibleCopilotChat } from '@/lib/copilot/chat-lifecycle'
import { COPILOT_MODES } from '@/lib/copilot/models'
import {
authenticateCopilotRequestSessionOnly,
createInternalServerErrorResponse,
createNotFoundResponse,
createRequestTracker,
createUnauthorizedResponse,
} from '@/lib/copilot/request/http'
} from '@/lib/copilot/request-helpers'
const logger = createLogger('CopilotChatUpdateAPI')
@@ -79,15 +78,12 @@ export async function POST(req: NextRequest) {
}
const { chatId, messages, planArtifact, config } = UpdateMessagesSchema.parse(body)
const normalizedMessages: PersistedMessage[] = messages.map((message) =>
normalizeMessage(message as Record<string, unknown>)
)
// Debug: Log what we're about to save
const lastMsgParsed = normalizedMessages[normalizedMessages.length - 1]
const lastMsgParsed = messages[messages.length - 1]
if (lastMsgParsed?.role === 'assistant') {
logger.info(`[${tracker.requestId}] Parsed messages to save`, {
messageCount: normalizedMessages.length,
messageCount: messages.length,
lastMsgId: lastMsgParsed.id,
lastMsgContentLength: lastMsgParsed.content?.length || 0,
lastMsgContentBlockCount: lastMsgParsed.contentBlocks?.length || 0,
@@ -103,8 +99,8 @@ export async function POST(req: NextRequest) {
}
// Update chat with new messages, plan artifact, and config
const updateData: Record<string, unknown> = {
messages: normalizedMessages,
const updateData: Record<string, any> = {
messages: messages,
updatedAt: new Date(),
}
@@ -120,14 +116,14 @@ export async function POST(req: NextRequest) {
logger.info(`[${tracker.requestId}] Successfully updated chat`, {
chatId,
newMessageCount: normalizedMessages.length,
newMessageCount: messages.length,
hasPlanArtifact: !!planArtifact,
hasConfig: !!config,
})
return NextResponse.json({
success: true,
messageCount: normalizedMessages.length,
messageCount: messages.length,
})
} catch (error) {
logger.error(`[${tracker.requestId}] Error updating chat messages:`, error)

View File

@@ -66,7 +66,7 @@ vi.mock('drizzle-orm', () => ({
sql: vi.fn(),
}))
vi.mock('@/lib/copilot/request/http', () => ({
vi.mock('@/lib/copilot/request-helpers', () => ({
authenticateCopilotRequestSessionOnly: mockAuthenticate,
createUnauthorizedResponse: mockCreateUnauthorizedResponse,
createInternalServerErrorResponse: mockCreateInternalServerErrorResponse,

View File

@@ -4,14 +4,14 @@ import { createLogger } from '@sim/logger'
import { and, desc, eq, isNull, or, sql } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { resolveOrCreateChat } from '@/lib/copilot/chat/lifecycle'
import { resolveOrCreateChat } from '@/lib/copilot/chat-lifecycle'
import {
authenticateCopilotRequestSessionOnly,
createBadRequestResponse,
createInternalServerErrorResponse,
createUnauthorizedResponse,
} from '@/lib/copilot/request/http'
import { taskPubSub } from '@/lib/copilot/tasks'
} from '@/lib/copilot/request-helpers'
import { taskPubSub } from '@/lib/copilot/task-events'
import { authorizeWorkflowByWorkspacePermission } from '@/lib/workflows/utils'
import { assertActiveWorkspaceAccess } from '@/lib/workspaces/permissions/utils'
@@ -37,7 +37,7 @@ export async function GET(_request: NextRequest) {
title: copilotChats.title,
workflowId: copilotChats.workflowId,
workspaceId: copilotChats.workspaceId,
activeStreamId: copilotChats.conversationId,
conversationId: copilotChats.conversationId,
updatedAt: copilotChats.updatedAt,
})
.from(copilotChats)

View File

@@ -43,7 +43,7 @@ vi.mock('@/lib/workflows/utils', () => ({
authorizeWorkflowByWorkspacePermission: mockAuthorize,
}))
vi.mock('@/lib/copilot/chat/lifecycle', () => ({
vi.mock('@/lib/copilot/chat-lifecycle', () => ({
getAccessibleCopilotChat: mockGetAccessibleCopilotChat,
}))
@@ -304,7 +304,6 @@ describe('Copilot Checkpoints Revert API Route', () => {
loops: {},
parallels: {},
isDeployed: true,
deploymentStatuses: { production: 'deployed' },
},
}
@@ -349,7 +348,6 @@ describe('Copilot Checkpoints Revert API Route', () => {
loops: {},
parallels: {},
isDeployed: true,
deploymentStatuses: { production: 'deployed' },
lastSaved: 1640995200000,
},
},
@@ -370,7 +368,6 @@ describe('Copilot Checkpoints Revert API Route', () => {
loops: {},
parallels: {},
isDeployed: true,
deploymentStatuses: { production: 'deployed' },
lastSaved: 1640995200000,
}),
}
@@ -473,7 +470,6 @@ describe('Copilot Checkpoints Revert API Route', () => {
edges: undefined,
loops: null,
parallels: undefined,
deploymentStatuses: null,
},
}
@@ -508,7 +504,6 @@ describe('Copilot Checkpoints Revert API Route', () => {
loops: {},
parallels: {},
isDeployed: false,
deploymentStatuses: {},
lastSaved: 1640995200000,
})
})
@@ -768,10 +763,6 @@ describe('Copilot Checkpoints Revert API Route', () => {
parallel1: { branches: ['branch1', 'branch2'] },
},
isDeployed: true,
deploymentStatuses: {
production: 'deployed',
staging: 'pending',
},
deployedAt: '2024-01-01T10:00:00.000Z',
},
}
@@ -816,10 +807,6 @@ describe('Copilot Checkpoints Revert API Route', () => {
parallel1: { branches: ['branch1', 'branch2'] },
},
isDeployed: true,
deploymentStatuses: {
production: 'deployed',
staging: 'pending',
},
deployedAt: '2024-01-01T10:00:00.000Z',
lastSaved: 1640995200000,
})

View File

@@ -4,14 +4,14 @@ import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getAccessibleCopilotChat } from '@/lib/copilot/chat/lifecycle'
import { getAccessibleCopilotChat } from '@/lib/copilot/chat-lifecycle'
import {
authenticateCopilotRequestSessionOnly,
createInternalServerErrorResponse,
createNotFoundResponse,
createRequestTracker,
createUnauthorizedResponse,
} from '@/lib/copilot/request/http'
} from '@/lib/copilot/request-helpers'
import { getInternalApiBaseUrl } from '@/lib/core/utils/urls'
import { authorizeWorkflowByWorkspacePermission } from '@/lib/workflows/utils'
import { isUuidV4 } from '@/executor/constants'
@@ -82,7 +82,6 @@ export async function POST(request: NextRequest) {
loops: checkpointState?.loops || {},
parallels: checkpointState?.parallels || {},
isDeployed: checkpointState?.isDeployed || false,
deploymentStatuses: checkpointState?.deploymentStatuses || {},
lastSaved: Date.now(),
...(checkpointState?.deployedAt &&
checkpointState.deployedAt !== null &&

View File

@@ -62,7 +62,7 @@ vi.mock('drizzle-orm', () => ({
desc: vi.fn((field: unknown) => ({ field, type: 'desc' })),
}))
vi.mock('@/lib/copilot/chat/lifecycle', () => ({
vi.mock('@/lib/copilot/chat-lifecycle', () => ({
getAccessibleCopilotChat: mockGetAccessibleCopilotChat,
}))

View File

@@ -4,14 +4,14 @@ import { createLogger } from '@sim/logger'
import { and, desc, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getAccessibleCopilotChat } from '@/lib/copilot/chat/lifecycle'
import { getAccessibleCopilotChat } from '@/lib/copilot/chat-lifecycle'
import {
authenticateCopilotRequestSessionOnly,
createBadRequestResponse,
createInternalServerErrorResponse,
createRequestTracker,
createUnauthorizedResponse,
} from '@/lib/copilot/request/http'
} from '@/lib/copilot/request-helpers'
import { authorizeWorkflowByWorkspacePermission } from '@/lib/workflows/utils'
const logger = createLogger('WorkflowCheckpointsAPI')

View File

@@ -38,7 +38,7 @@ const {
publishToolConfirmation: vi.fn(),
}))
vi.mock('@/lib/copilot/request/http', () => ({
vi.mock('@/lib/copilot/request-helpers', () => ({
authenticateCopilotRequestSessionOnly,
createBadRequestResponse,
createInternalServerErrorResponse,
@@ -54,7 +54,7 @@ vi.mock('@/lib/copilot/async-runs/repository', () => ({
completeAsyncToolCall,
}))
vi.mock('@/lib/copilot/persistence/tool-confirm', () => ({
vi.mock('@/lib/copilot/orchestrator/persistence', () => ({
publishToolConfirmation,
}))

View File

@@ -1,14 +1,13 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { ASYNC_TOOL_STATUS } from '@/lib/copilot/async-runs/lifecycle'
import {
completeAsyncToolCall,
getAsyncToolCall,
getRunSegment,
upsertAsyncToolCall,
} from '@/lib/copilot/async-runs/repository'
import { publishToolConfirmation } from '@/lib/copilot/persistence/tool-confirm'
import { publishToolConfirmation } from '@/lib/copilot/orchestrator/persistence'
import {
authenticateCopilotRequestSessionOnly,
createBadRequestResponse,
@@ -17,7 +16,7 @@ import {
createRequestTracker,
createUnauthorizedResponse,
type NotificationStatus,
} from '@/lib/copilot/request/http'
} from '@/lib/copilot/request-helpers'
const logger = createLogger('CopilotConfirmAPI')
@@ -43,17 +42,17 @@ async function updateToolCallStatus(
const toolCallId = existing.toolCallId
const durableStatus =
status === 'success'
? ASYNC_TOOL_STATUS.completed
? 'completed'
: status === 'cancelled'
? ASYNC_TOOL_STATUS.cancelled
? 'cancelled'
: status === 'error' || status === 'rejected'
? ASYNC_TOOL_STATUS.failed
: ASYNC_TOOL_STATUS.pending
? 'failed'
: 'pending'
try {
if (
durableStatus === ASYNC_TOOL_STATUS.completed ||
durableStatus === ASYNC_TOOL_STATUS.failed ||
durableStatus === ASYNC_TOOL_STATUS.cancelled
durableStatus === 'completed' ||
durableStatus === 'failed' ||
durableStatus === 'cancelled'
) {
await completeAsyncToolCall({
toolCallId,
@@ -108,25 +107,13 @@ export async function POST(req: NextRequest) {
const body = await req.json()
const { toolCallId, status, message, data } = ConfirmationSchema.parse(body)
const existing = await getAsyncToolCall(toolCallId).catch((err) => {
logger.warn('Failed to fetch async tool call', {
toolCallId,
error: err instanceof Error ? err.message : String(err),
})
return null
})
const existing = await getAsyncToolCall(toolCallId).catch(() => null)
if (!existing) {
return createNotFoundResponse('Tool call not found')
}
const run = await getRunSegment(existing.runId).catch((err) => {
logger.warn('Failed to fetch run segment', {
runId: existing.runId,
error: err instanceof Error ? err.message : String(err),
})
return null
})
const run = await getRunSegment(existing.runId).catch(() => null)
if (!run) {
return createNotFoundResponse('Tool call run not found')
}
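
For reference, a sketch of the confirmation request this handler consumes; the body fields mirror ConfirmationSchema and the status values map onto the durable completed/failed/cancelled/pending states above (the URL path is an assumption, it is not shown in this diff):

await fetch('/api/copilot/confirm', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    toolCallId: 'tool-call-1',
    status: 'success', // one of 'success' | 'cancelled' | 'error' | 'rejected'
    message: 'Applied the edit',
    data: { applied: true },
  }),
})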

View File

@@ -1,5 +1,5 @@
import { type NextRequest, NextResponse } from 'next/server'
import { authenticateCopilotRequestSessionOnly } from '@/lib/copilot/request/http'
import { authenticateCopilotRequestSessionOnly } from '@/lib/copilot/request-helpers'
import { routeExecution } from '@/lib/copilot/tools/server/router'
/**

View File

@@ -57,7 +57,7 @@ vi.mock('drizzle-orm', () => ({
eq: vi.fn((field: unknown, value: unknown) => ({ field, value, type: 'eq' })),
}))
vi.mock('@/lib/copilot/request/http', () => ({
vi.mock('@/lib/copilot/request-helpers', () => ({
authenticateCopilotRequestSessionOnly: mockAuthenticate,
createUnauthorizedResponse: mockCreateUnauthorizedResponse,
createBadRequestResponse: mockCreateBadRequestResponse,

View File

@@ -10,7 +10,8 @@ import {
createInternalServerErrorResponse,
createRequestTracker,
createUnauthorizedResponse,
} from '@/lib/copilot/request/http'
} from '@/lib/copilot/request-helpers'
import { captureServerEvent } from '@/lib/posthog/server'
const logger = createLogger('CopilotFeedbackAPI')
@@ -76,6 +77,12 @@ export async function POST(req: NextRequest) {
duration: tracker.getDuration(),
})
captureServerEvent(authenticatedUserId, 'copilot_feedback_submitted', {
is_positive: isPositiveFeedback,
has_text_feedback: !!feedback,
has_workflow_yaml: !!workflowYaml,
})
return NextResponse.json({
success: true,
feedbackId: feedbackRecord.feedbackId,

View File

@@ -1,14 +1,8 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { SIM_AGENT_API_URL } from '@/lib/copilot/constants'
import { authenticateCopilotRequestSessionOnly } from '@/lib/copilot/request/http'
interface AvailableModel {
id: string
friendlyName: string
provider: string
}
import { authenticateCopilotRequestSessionOnly } from '@/lib/copilot/request-helpers'
import type { AvailableModel } from '@/lib/copilot/types'
import { env } from '@/lib/core/config/env'
const logger = createLogger('CopilotModelsAPI')

View File

@@ -23,7 +23,7 @@ const {
mockFetch: vi.fn(),
}))
vi.mock('@/lib/copilot/request/http', () => ({
vi.mock('@/lib/copilot/request-helpers', () => ({
authenticateCopilotRequestSessionOnly: mockAuthenticateCopilotRequestSessionOnly,
createUnauthorizedResponse: mockCreateUnauthorizedResponse,
createBadRequestResponse: mockCreateBadRequestResponse,

View File

@@ -7,7 +7,7 @@ import {
createInternalServerErrorResponse,
createRequestTracker,
createUnauthorizedResponse,
} from '@/lib/copilot/request/http'
} from '@/lib/copilot/request-helpers'
import { env } from '@/lib/core/config/env'
const BodySchema = z.object({

View File

@@ -4,7 +4,7 @@ import { z } from 'zod'
import {
authenticateCopilotRequestSessionOnly,
createUnauthorizedResponse,
} from '@/lib/copilot/request/http'
} from '@/lib/copilot/request-helpers'
import { env } from '@/lib/core/config/env'
const logger = createLogger('CopilotTrainingExamplesAPI')

View File

@@ -4,7 +4,7 @@ import { z } from 'zod'
import {
authenticateCopilotRequestSessionOnly,
createUnauthorizedResponse,
} from '@/lib/copilot/request/http'
} from '@/lib/copilot/request-helpers'
import { env } from '@/lib/core/config/env'
const logger = createLogger('CopilotTrainingAPI')

View File

@@ -11,6 +11,7 @@ import {
syncPersonalEnvCredentialsForUser,
syncWorkspaceEnvCredentials,
} from '@/lib/credentials/environment'
import { captureServerEvent } from '@/lib/posthog/server'
const logger = createLogger('CredentialByIdAPI')
@@ -236,6 +237,17 @@ export async function DELETE(
envKeys: Object.keys(current),
})
captureServerEvent(
session.user.id,
'credential_deleted',
{
credential_type: 'env_personal',
provider_id: access.credential.envKey,
workspace_id: access.credential.workspaceId,
},
{ groups: { workspace: access.credential.workspaceId } }
)
return NextResponse.json({ success: true }, { status: 200 })
}
@@ -278,10 +290,33 @@ export async function DELETE(
actingUserId: session.user.id,
})
captureServerEvent(
session.user.id,
'credential_deleted',
{
credential_type: 'env_workspace',
provider_id: access.credential.envKey,
workspace_id: access.credential.workspaceId,
},
{ groups: { workspace: access.credential.workspaceId } }
)
return NextResponse.json({ success: true }, { status: 200 })
}
await db.delete(credential).where(eq(credential.id, id))
captureServerEvent(
session.user.id,
'credential_deleted',
{
credential_type: access.credential.type as 'oauth' | 'service_account',
provider_id: access.credential.providerId ?? id,
workspace_id: access.credential.workspaceId,
},
{ groups: { workspace: access.credential.workspaceId } }
)
return NextResponse.json({ success: true }, { status: 200 })
} catch (error) {
logger.error('Failed to delete credential', error)

View File

@@ -10,6 +10,7 @@ import { generateRequestId } from '@/lib/core/utils/request'
import { getWorkspaceMemberUserIds } from '@/lib/credentials/environment'
import { syncWorkspaceOAuthCredentialsForUser } from '@/lib/credentials/oauth'
import { getServiceConfigByProviderId } from '@/lib/oauth'
import { captureServerEvent } from '@/lib/posthog/server'
import { checkWorkspaceAccess } from '@/lib/workspaces/permissions/utils'
import { isValidEnvVarName } from '@/executor/constants'
@@ -600,6 +601,16 @@ export async function POST(request: NextRequest) {
.where(eq(credential.id, credentialId))
.limit(1)
captureServerEvent(
session.user.id,
'credential_connected',
{ credential_type: type, provider_id: resolvedProviderId ?? type, workspace_id: workspaceId },
{
groups: { workspace: workspaceId },
setOnce: { first_credential_connected_at: new Date().toISOString() },
}
)
return NextResponse.json({ credential: created }, { status: 201 })
} catch (error: any) {
if (error?.code === '23505') {

View File

@@ -75,16 +75,6 @@ vi.mock('@/lib/uploads/utils/file-utils', () => ({
vi.mock('@/lib/uploads/setup.server', () => ({}))
vi.mock('@/lib/execution/doc-vm', () => ({
generatePdfFromCode: vi.fn().mockResolvedValue(Buffer.from('%PDF-compiled')),
generateDocxFromCode: vi.fn().mockResolvedValue(Buffer.from('PK\x03\x04compiled')),
generatePptxFromCode: vi.fn().mockResolvedValue(Buffer.from('PK\x03\x04compiled')),
}))
vi.mock('@/lib/uploads/contexts/workspace/workspace-file-manager', () => ({
parseWorkspaceFileKey: vi.fn().mockReturnValue(undefined),
}))
vi.mock('@/app/api/files/utils', () => ({
FileNotFoundError,
createFileResponse: mockCreateFileResponse,

View File

@@ -4,11 +4,7 @@ import { createLogger } from '@sim/logger'
import type { NextRequest } from 'next/server'
import { NextResponse } from 'next/server'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import {
generateDocxFromCode,
generatePdfFromCode,
generatePptxFromCode,
} from '@/lib/execution/doc-vm'
import { generatePptxFromCode } from '@/lib/execution/pptx-vm'
import { CopilotFiles, isUsingCloudStorage } from '@/lib/uploads'
import type { StorageContext } from '@/lib/uploads/config'
import { parseWorkspaceFileKey } from '@/lib/uploads/contexts/workspace/workspace-file-manager'
@@ -26,73 +22,47 @@ import {
const logger = createLogger('FilesServeAPI')
const ZIP_MAGIC = Buffer.from([0x50, 0x4b, 0x03, 0x04])
const PDF_MAGIC = Buffer.from([0x25, 0x50, 0x44, 0x46, 0x2d]) // %PDF-
interface CompilableFormat {
magic: Buffer
compile: (code: string, workspaceId: string) => Promise<Buffer>
contentType: string
}
const COMPILABLE_FORMATS: Record<string, CompilableFormat> = {
'.pptx': {
magic: ZIP_MAGIC,
compile: generatePptxFromCode,
contentType: 'application/vnd.openxmlformats-officedocument.presentationml.presentation',
},
'.docx': {
magic: ZIP_MAGIC,
compile: generateDocxFromCode,
contentType: 'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
},
'.pdf': {
magic: PDF_MAGIC,
compile: generatePdfFromCode,
contentType: 'application/pdf',
},
}
const MAX_COMPILED_DOC_CACHE = 10
const compiledDocCache = new Map<string, Buffer>()
const MAX_COMPILED_PPTX_CACHE = 10
const compiledPptxCache = new Map<string, Buffer>()
function compiledCacheSet(key: string, buffer: Buffer): void {
if (compiledDocCache.size >= MAX_COMPILED_DOC_CACHE) {
compiledDocCache.delete(compiledDocCache.keys().next().value as string)
if (compiledPptxCache.size >= MAX_COMPILED_PPTX_CACHE) {
compiledPptxCache.delete(compiledPptxCache.keys().next().value as string)
}
compiledDocCache.set(key, buffer)
compiledPptxCache.set(key, buffer)
}
async function compileDocumentIfNeeded(
async function compilePptxIfNeeded(
buffer: Buffer,
filename: string,
workspaceId?: string,
raw?: boolean
): Promise<{ buffer: Buffer; contentType: string }> {
if (raw) return { buffer, contentType: getContentType(filename) }
const ext = filename.slice(filename.lastIndexOf('.')).toLowerCase()
const format = COMPILABLE_FORMATS[ext]
if (!format) return { buffer, contentType: getContentType(filename) }
const magicLen = format.magic.length
if (buffer.length >= magicLen && buffer.subarray(0, magicLen).equals(format.magic)) {
const isPptx = filename.toLowerCase().endsWith('.pptx')
if (raw || !isPptx || buffer.subarray(0, 4).equals(ZIP_MAGIC)) {
return { buffer, contentType: getContentType(filename) }
}
const code = buffer.toString('utf-8')
const cacheKey = createHash('sha256')
.update(ext)
.update(code)
.update(workspaceId ?? '')
.digest('hex')
const cached = compiledDocCache.get(cacheKey)
const cached = compiledPptxCache.get(cacheKey)
if (cached) {
return { buffer: cached, contentType: format.contentType }
return {
buffer: cached,
contentType: 'application/vnd.openxmlformats-officedocument.presentationml.presentation',
}
}
const compiled = await format.compile(code, workspaceId || '')
const compiled = await generatePptxFromCode(code, workspaceId || '')
compiledCacheSet(cacheKey, compiled)
return { buffer: compiled, contentType: format.contentType }
return {
buffer: compiled,
contentType: 'application/vnd.openxmlformats-officedocument.presentationml.presentation',
}
}
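// Usage sketch (illustrative): handleLocalFile/handleCloudProxy below pass the stored buffer,
// the display name, and the workspace id. Stored generator source is compiled to a real .pptx
// on first read and then served from the content-hash cache; a buffer that already starts with
// the ZIP magic bytes is returned unchanged.
// const { buffer, contentType } = await compilePptxIfNeeded(storedBuffer, 'deck.pptx', workspaceId)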
const STORAGE_KEY_PREFIX_RE = /^\d{13}-[a-z0-9]{7}-/
@@ -199,7 +169,7 @@ async function handleLocalFile(
const segment = filename.split('/').pop() || filename
const displayName = stripStorageKeyPrefix(segment)
const workspaceId = getWorkspaceIdForCompile(filename)
const { buffer: fileBuffer, contentType } = await compileDocumentIfNeeded(
const { buffer: fileBuffer, contentType } = await compilePptxIfNeeded(
rawBuffer,
displayName,
workspaceId,
@@ -256,7 +226,7 @@ async function handleCloudProxy(
const segment = cloudKey.split('/').pop() || 'download'
const displayName = stripStorageKeyPrefix(segment)
const workspaceId = getWorkspaceIdForCompile(cloudKey)
const { buffer: fileBuffer, contentType } = await compileDocumentIfNeeded(
const { buffer: fileBuffer, contentType } = await compilePptxIfNeeded(
rawBuffer,
displayName,
workspaceId,

View File

@@ -5,6 +5,7 @@ import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { captureServerEvent } from '@/lib/posthog/server'
import { performDeleteFolder } from '@/lib/workflows/orchestration'
import { checkForCircularReference } from '@/lib/workflows/utils'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
@@ -156,6 +157,13 @@ export async function DELETE(
return NextResponse.json({ error: result.error }, { status })
}
captureServerEvent(
session.user.id,
'folder_deleted',
{ workspace_id: existingFolder.workspaceId },
{ groups: { workspace: existingFolder.workspaceId } }
)
return NextResponse.json({
success: true,
deletedItems: result.deletedItems,

View File

@@ -6,6 +6,7 @@ import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import { getSession } from '@/lib/auth'
import { captureServerEvent } from '@/lib/posthog/server'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
const logger = createLogger('FoldersAPI')
@@ -145,6 +146,13 @@ export async function POST(request: NextRequest) {
logger.info('Created new folder:', { id, name, workspaceId, parentId })
captureServerEvent(
session.user.id,
'folder_created',
{ workspace_id: workspaceId },
{ groups: { workspace: workspaceId } }
)
recordAudit({
workspaceId,
actorId: session.user.id,

View File

@@ -24,27 +24,6 @@ vi.mock('@/lib/auth/hybrid', () => ({
vi.mock('@/lib/execution/e2b', () => ({
executeInE2B: mockExecuteInE2B,
executeShellInE2B: vi.fn(),
}))
vi.mock('@/lib/copilot/request/tools/files', () => ({
FORMAT_TO_CONTENT_TYPE: {
json: 'application/json',
csv: 'text/csv',
txt: 'text/plain',
md: 'text/markdown',
html: 'text/html',
},
normalizeOutputWorkspaceFileName: vi.fn((p: string) => p.replace(/^files\//, '')),
resolveOutputFormat: vi.fn(() => 'json'),
}))
vi.mock('@/lib/uploads/contexts/workspace/workspace-file-manager', () => ({
uploadWorkspaceFile: vi.fn(),
}))
vi.mock('@/lib/workflows/utils', () => ({
getWorkflowById: vi.fn(),
}))
vi.mock('@/lib/core/config/feature-flags', () => ({
@@ -53,7 +32,6 @@ vi.mock('@/lib/core/config/feature-flags', () => ({
isProd: false,
isDev: false,
isTest: true,
isEmailVerificationEnabled: false,
}))
import { validateProxyUrl } from '@/lib/core/security/input-validation'

View File

@@ -1,18 +1,11 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import {
FORMAT_TO_CONTENT_TYPE,
normalizeOutputWorkspaceFileName,
resolveOutputFormat,
} from '@/lib/copilot/request/tools/files'
import { isE2bEnabled } from '@/lib/core/config/feature-flags'
import { generateRequestId } from '@/lib/core/utils/request'
import { executeInE2B, executeShellInE2B } from '@/lib/execution/e2b'
import { executeInE2B } from '@/lib/execution/e2b'
import { executeInIsolatedVM } from '@/lib/execution/isolated-vm'
import { CodeLanguage, DEFAULT_CODE_LANGUAGE, isValidCodeLanguage } from '@/lib/execution/languages'
import { uploadWorkspaceFile } from '@/lib/uploads/contexts/workspace/workspace-file-manager'
import { getWorkflowById } from '@/lib/workflows/utils'
import { escapeRegExp, normalizeName, REFERENCE } from '@/executor/constants'
import { type OutputSchema, resolveBlockReference } from '@/executor/utils/block-reference'
import { formatLiteralForCode } from '@/executor/utils/code-formatting'
@@ -587,107 +580,6 @@ function cleanStdout(stdout: string): string {
return stdout
}
async function maybeExportSandboxFileToWorkspace(args: {
authUserId: string
workflowId?: string
workspaceId?: string
outputPath?: string
outputFormat?: string
outputMimeType?: string
outputSandboxPath?: string
exportedFileContent?: string
stdout: string
executionTime: number
}) {
const {
authUserId,
workflowId,
workspaceId,
outputPath,
outputFormat,
outputMimeType,
outputSandboxPath,
exportedFileContent,
stdout,
executionTime,
} = args
if (!outputSandboxPath) return null
if (!outputPath) {
return NextResponse.json(
{
success: false,
error:
'outputSandboxPath requires outputPath. Set outputPath to the destination workspace file, e.g. "files/result.csv".',
output: { result: null, stdout: cleanStdout(stdout), executionTime },
},
{ status: 400 }
)
}
const resolvedWorkspaceId =
workspaceId || (workflowId ? (await getWorkflowById(workflowId))?.workspaceId : undefined)
if (!resolvedWorkspaceId) {
return NextResponse.json(
{
success: false,
error: 'Workspace context required to save sandbox file to workspace',
output: { result: null, stdout: cleanStdout(stdout), executionTime },
},
{ status: 400 }
)
}
if (exportedFileContent === undefined) {
return NextResponse.json(
{
success: false,
error: `Sandbox file "${outputSandboxPath}" was not found or could not be read`,
output: { result: null, stdout: cleanStdout(stdout), executionTime },
},
{ status: 500 }
)
}
const fileName = normalizeOutputWorkspaceFileName(outputPath)
const TEXT_MIMES = new Set(Object.values(FORMAT_TO_CONTENT_TYPE))
const resolvedMimeType =
outputMimeType ||
FORMAT_TO_CONTENT_TYPE[resolveOutputFormat(fileName, outputFormat)] ||
'application/octet-stream'
const isBinary = !TEXT_MIMES.has(resolvedMimeType)
const fileBuffer = isBinary
? Buffer.from(exportedFileContent, 'base64')
: Buffer.from(exportedFileContent, 'utf-8')
const uploaded = await uploadWorkspaceFile(
resolvedWorkspaceId,
authUserId,
fileBuffer,
fileName,
resolvedMimeType
)
return NextResponse.json({
success: true,
output: {
result: {
message: `Sandbox file exported to files/${fileName}`,
fileId: uploaded.id,
fileName,
downloadUrl: uploaded.url,
sandboxPath: outputSandboxPath,
},
stdout: cleanStdout(stdout),
executionTime,
},
resources: [{ type: 'file', id: uploaded.id, title: fileName }],
})
}
export async function POST(req: NextRequest) {
const requestId = generateRequestId()
const startTime = Date.now()
@@ -711,17 +603,12 @@ export async function POST(req: NextRequest) {
params = {},
timeout = DEFAULT_EXECUTION_TIMEOUT_MS,
language = DEFAULT_CODE_LANGUAGE,
outputPath,
outputFormat,
outputMimeType,
outputSandboxPath,
envVars = {},
blockData = {},
blockNameMapping = {},
blockOutputSchemas = {},
workflowVariables = {},
workflowId,
workspaceId,
isCustomTool = false,
_sandboxFiles,
} = body
@@ -765,83 +652,6 @@ export async function POST(req: NextRequest) {
hasImports = jsImports.trim().length > 0 || hasRequireStatements
}
if (lang === CodeLanguage.Shell) {
if (!isE2bEnabled) {
throw new Error(
'Shell execution requires E2B to be enabled. Please contact your administrator to enable E2B.'
)
}
const shellEnvs: Record<string, string> = {}
for (const [k, v] of Object.entries(envVars)) {
shellEnvs[k] = String(v)
}
for (const [k, v] of Object.entries(contextVariables)) {
shellEnvs[k] = String(v)
}
logger.info(`[${requestId}] E2B shell execution`, {
enabled: isE2bEnabled,
hasApiKey: Boolean(process.env.E2B_API_KEY),
envVarCount: Object.keys(shellEnvs).length,
})
const execStart = Date.now()
const {
result: shellResult,
stdout: shellStdout,
sandboxId,
error: shellError,
exportedFileContent,
} = await executeShellInE2B({
code: resolvedCode,
envs: shellEnvs,
timeoutMs: timeout,
sandboxFiles: _sandboxFiles,
outputSandboxPath,
})
const executionTime = Date.now() - execStart
logger.info(`[${requestId}] E2B shell sandbox`, {
sandboxId,
stdoutPreview: shellStdout?.slice(0, 200),
error: shellError,
executionTime,
})
if (shellError) {
return NextResponse.json(
{
success: false,
error: shellError,
output: { result: null, stdout: cleanStdout(shellStdout), executionTime },
},
{ status: 500 }
)
}
if (outputSandboxPath) {
const fileExportResponse = await maybeExportSandboxFileToWorkspace({
authUserId: auth.userId,
workflowId,
workspaceId,
outputPath,
outputFormat,
outputMimeType,
outputSandboxPath,
exportedFileContent,
stdout: shellStdout,
executionTime,
})
if (fileExportResponse) return fileExportResponse
}
return NextResponse.json({
success: true,
output: { result: shellResult ?? null, stdout: cleanStdout(shellStdout), executionTime },
})
}
if (lang === CodeLanguage.Python && !isE2bEnabled) {
throw new Error(
'Python execution requires E2B to be enabled. Please contact your administrator to enable E2B, or use JavaScript instead.'
@@ -909,13 +719,11 @@ export async function POST(req: NextRequest) {
stdout: e2bStdout,
sandboxId,
error: e2bError,
exportedFileContent,
} = await executeInE2B({
code: codeForE2B,
language: CodeLanguage.JavaScript,
timeoutMs: timeout,
sandboxFiles: _sandboxFiles,
outputSandboxPath,
})
const executionTime = Date.now() - execStart
stdout += e2bStdout
@@ -944,22 +752,6 @@ export async function POST(req: NextRequest) {
)
}
if (outputSandboxPath) {
const fileExportResponse = await maybeExportSandboxFileToWorkspace({
authUserId: auth.userId,
workflowId,
workspaceId,
outputPath,
outputFormat,
outputMimeType,
outputSandboxPath,
exportedFileContent,
stdout,
executionTime,
})
if (fileExportResponse) return fileExportResponse
}
return NextResponse.json({
success: true,
output: { result: e2bResult ?? null, stdout: cleanStdout(stdout), executionTime },
@@ -991,13 +783,11 @@ export async function POST(req: NextRequest) {
stdout: e2bStdout,
sandboxId,
error: e2bError,
exportedFileContent,
} = await executeInE2B({
code: codeForE2B,
language: CodeLanguage.Python,
timeoutMs: timeout,
sandboxFiles: _sandboxFiles,
outputSandboxPath,
})
const executionTime = Date.now() - execStart
stdout += e2bStdout
@@ -1026,22 +816,6 @@ export async function POST(req: NextRequest) {
)
}
if (outputSandboxPath) {
const fileExportResponse = await maybeExportSandboxFileToWorkspace({
authUserId: auth.userId,
workflowId,
workspaceId,
outputPath,
outputFormat,
outputMimeType,
outputSandboxPath,
exportedFileContent,
stdout,
executionTime,
})
if (fileExportResponse) return fileExportResponse
}
return NextResponse.json({
success: true,
output: { result: e2bResult ?? null, stdout: cleanStdout(stdout), executionTime },

View File

@@ -4,13 +4,19 @@
import type { NextRequest } from 'next/server'
import { beforeEach, describe, expect, it, vi } from 'vitest'
const { mockCheckHybridAuth, mockGetJobQueue, mockVerifyWorkflowAccess, mockGetWorkflowById } =
vi.hoisted(() => ({
mockCheckHybridAuth: vi.fn(),
mockGetJobQueue: vi.fn(),
mockVerifyWorkflowAccess: vi.fn(),
mockGetWorkflowById: vi.fn(),
}))
const {
mockCheckHybridAuth,
mockGetDispatchJobRecord,
mockGetJobQueue,
mockVerifyWorkflowAccess,
mockGetWorkflowById,
} = vi.hoisted(() => ({
mockCheckHybridAuth: vi.fn(),
mockGetDispatchJobRecord: vi.fn(),
mockGetJobQueue: vi.fn(),
mockVerifyWorkflowAccess: vi.fn(),
mockGetWorkflowById: vi.fn(),
}))
vi.mock('@sim/logger', () => ({
createLogger: () => ({
@@ -26,9 +32,19 @@ vi.mock('@/lib/auth/hybrid', () => ({
}))
vi.mock('@/lib/core/async-jobs', () => ({
JOB_STATUS: {
PENDING: 'pending',
PROCESSING: 'processing',
COMPLETED: 'completed',
FAILED: 'failed',
},
getJobQueue: mockGetJobQueue,
}))
vi.mock('@/lib/core/workspace-dispatch/store', () => ({
getDispatchJobRecord: mockGetDispatchJobRecord,
}))
vi.mock('@/lib/core/utils/request', () => ({
generateRequestId: vi.fn().mockReturnValue('request-1'),
}))
@@ -73,78 +89,72 @@ describe('GET /api/jobs/[jobId]', () => {
})
})
it('returns pending status for a queued job', async () => {
mockGetJobQueue.mockResolvedValue({
getJob: vi.fn().mockResolvedValue({
id: 'job-1',
type: 'workflow-execution',
payload: {},
status: 'pending',
createdAt: new Date('2025-01-01T00:00:00Z'),
attempts: 0,
maxAttempts: 1,
metadata: {
workflowId: 'workflow-1',
},
}),
it('returns dispatcher-aware waiting status with metadata', async () => {
mockGetDispatchJobRecord.mockResolvedValue({
id: 'dispatch-1',
workspaceId: 'workspace-1',
lane: 'runtime',
queueName: 'workflow-execution',
bullmqJobName: 'workflow-execution',
bullmqPayload: {},
metadata: {
workflowId: 'workflow-1',
},
priority: 10,
status: 'waiting',
createdAt: 1000,
admittedAt: 2000,
})
const response = await GET(createMockRequest(), {
params: Promise.resolve({ jobId: 'job-1' }),
params: Promise.resolve({ jobId: 'dispatch-1' }),
})
const body = await response.json()
expect(response.status).toBe(200)
expect(body.status).toBe('pending')
expect(body.status).toBe('waiting')
expect(body.metadata.queueName).toBe('workflow-execution')
expect(body.metadata.lane).toBe('runtime')
expect(body.metadata.workspaceId).toBe('workspace-1')
})
it('returns completed output from job', async () => {
mockGetJobQueue.mockResolvedValue({
getJob: vi.fn().mockResolvedValue({
id: 'job-2',
type: 'workflow-execution',
payload: {},
status: 'completed',
createdAt: new Date('2025-01-01T00:00:00Z'),
startedAt: new Date('2025-01-01T00:00:01Z'),
completedAt: new Date('2025-01-01T00:00:06Z'),
attempts: 1,
maxAttempts: 1,
output: { success: true },
metadata: {
workflowId: 'workflow-1',
},
}),
it('returns completed output from dispatch state', async () => {
mockGetDispatchJobRecord.mockResolvedValue({
id: 'dispatch-2',
workspaceId: 'workspace-1',
lane: 'interactive',
queueName: 'workflow-execution',
bullmqJobName: 'direct-workflow-execution',
bullmqPayload: {},
metadata: {
workflowId: 'workflow-1',
},
priority: 1,
status: 'completed',
createdAt: 1000,
startedAt: 2000,
completedAt: 7000,
output: { success: true },
})
const response = await GET(createMockRequest(), {
params: Promise.resolve({ jobId: 'job-2' }),
params: Promise.resolve({ jobId: 'dispatch-2' }),
})
const body = await response.json()
expect(response.status).toBe(200)
expect(body.status).toBe('completed')
expect(body.output).toEqual({ success: true })
expect(body.metadata.duration).toBe(5000)
})
it('returns 404 when job does not exist', async () => {
it('returns 404 when neither dispatch nor BullMQ job exists', async () => {
mockGetDispatchJobRecord.mockResolvedValue(null)
const response = await GET(createMockRequest(), {
params: Promise.resolve({ jobId: 'missing-job' }),
})
expect(response.status).toBe(404)
})
it('returns 401 for unauthenticated requests', async () => {
mockCheckHybridAuth.mockResolvedValue({
success: false,
error: 'Not authenticated',
})
const response = await GET(createMockRequest(), {
params: Promise.resolve({ jobId: 'job-1' }),
})
expect(response.status).toBe(401)
})
})
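
The route these tests exercise returns the same envelope for dispatcher-tracked and BullMQ-only jobs, so callers can poll it uniformly. A minimal polling sketch, assuming fetch-based access and the field names asserted above:

async function waitForJob(jobId: string, intervalMs = 1000): Promise<unknown> {
  for (;;) {
    const res = await fetch(`/api/jobs/${jobId}`)
    if (res.status === 404) throw new Error('Task not found')
    const body = await res.json()
    if (body.status === 'completed') return body.output
    if (body.status === 'failed') throw new Error(body.error ?? 'Job failed')
    // waiting / pending / processing: keep polling
    await new Promise((resolve) => setTimeout(resolve, intervalMs))
  }
}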

View File

@@ -2,27 +2,13 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { getJobQueue } from '@/lib/core/async-jobs'
import type { Job } from '@/lib/core/async-jobs/types'
import { generateRequestId } from '@/lib/core/utils/request'
import { presentDispatchOrJobStatus } from '@/lib/core/workspace-dispatch/status'
import { getDispatchJobRecord } from '@/lib/core/workspace-dispatch/store'
import { createErrorResponse } from '@/app/api/workflows/utils'
const logger = createLogger('TaskStatusAPI')
function presentJobStatus(job: Job) {
return {
status: job.status,
metadata: {
createdAt: job.createdAt.toISOString(),
startedAt: job.startedAt?.toISOString(),
completedAt: job.completedAt?.toISOString(),
attempts: job.attempts,
maxAttempts: job.maxAttempts,
},
output: job.output,
error: job.error,
}
}
export async function GET(
request: NextRequest,
{ params }: { params: Promise<{ jobId: string }> }
@@ -39,14 +25,15 @@ export async function GET(
const authenticatedUserId = authResult.userId
const dispatchJob = await getDispatchJobRecord(taskId)
const jobQueue = await getJobQueue()
const job = await jobQueue.getJob(taskId)
const job = dispatchJob ? null : await jobQueue.getJob(taskId)
if (!job) {
if (!job && !dispatchJob) {
return createErrorResponse('Task not found', 404)
}
const metadataToCheck = job.metadata
const metadataToCheck = dispatchJob?.metadata ?? job?.metadata
if (metadataToCheck?.workflowId) {
const { verifyWorkflowAccess } = await import('@/socket/middleware/permissions')
@@ -74,7 +61,7 @@ export async function GET(
return createErrorResponse('Access denied', 403)
}
const presented = presentJobStatus(job)
const presented = presentDispatchOrJobStatus(dispatchJob, job)
const response: any = {
success: true,
taskId,
@@ -84,6 +71,9 @@ export async function GET(
if (presented.output !== undefined) response.output = presented.output
if (presented.error !== undefined) response.error = presented.error
if (presented.estimatedDuration !== undefined) {
response.estimatedDuration = presented.estimatedDuration
}
return NextResponse.json(response)
} catch (error: any) {

View File

@@ -13,9 +13,11 @@ import { z } from 'zod'
import { decryptApiKey } from '@/lib/api-key/crypto'
import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { hasLiveSyncAccess } from '@/lib/billing/core/subscription'
import { generateRequestId } from '@/lib/core/utils/request'
import { deleteDocumentStorageFiles } from '@/lib/knowledge/documents/service'
import { cleanupUnusedTagDefinitions } from '@/lib/knowledge/tags/service'
import { captureServerEvent } from '@/lib/posthog/server'
import { refreshAccessTokenIfNeeded } from '@/app/api/auth/oauth/utils'
import { checkKnowledgeBaseAccess, checkKnowledgeBaseWriteAccess } from '@/app/api/knowledge/utils'
import { CONNECTOR_REGISTRY } from '@/connectors/registry'
@@ -115,6 +117,20 @@ export async function PATCH(request: NextRequest, { params }: RouteParams) {
)
}
if (
parsed.data.syncIntervalMinutes !== undefined &&
parsed.data.syncIntervalMinutes > 0 &&
parsed.data.syncIntervalMinutes < 60
) {
const canUseLiveSync = await hasLiveSyncAccess(auth.userId)
if (!canUseLiveSync) {
return NextResponse.json(
{ error: 'Live sync requires a Max or Enterprise plan' },
{ status: 403 }
)
}
}
if (parsed.data.sourceConfig !== undefined) {
const existingRows = await db
.select()
@@ -351,6 +367,19 @@ export async function DELETE(request: NextRequest, { params }: RouteParams) {
`[${requestId}] Deleted connector ${connectorId}${deleteDocuments ? ` and ${docCount} documents` : `, kept ${docCount} documents`}`
)
const kbWorkspaceId = writeCheck.knowledgeBase.workspaceId ?? ''
captureServerEvent(
auth.userId,
'knowledge_base_connector_removed',
{
knowledge_base_id: knowledgeBaseId,
workspace_id: kbWorkspaceId,
connector_type: existingConnector[0].connectorType,
documents_deleted: deleteDocuments ? docCount : 0,
},
kbWorkspaceId ? { groups: { workspace: kbWorkspaceId } } : undefined
)
recordAudit({
workspaceId: writeCheck.knowledgeBase.workspaceId,
actorId: auth.userId,

View File

@@ -7,6 +7,7 @@ import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { dispatchSync } from '@/lib/knowledge/connectors/sync-engine'
import { captureServerEvent } from '@/lib/posthog/server'
import { checkKnowledgeBaseWriteAccess } from '@/app/api/knowledge/utils'
const logger = createLogger('ConnectorManualSyncAPI')
@@ -55,6 +56,18 @@ export async function POST(request: NextRequest, { params }: RouteParams) {
logger.info(`[${requestId}] Manual sync triggered for connector ${connectorId}`)
const kbWorkspaceId = writeCheck.knowledgeBase.workspaceId ?? ''
captureServerEvent(
auth.userId,
'knowledge_base_connector_synced',
{
knowledge_base_id: knowledgeBaseId,
workspace_id: kbWorkspaceId,
connector_type: connectorRows[0].connectorType,
},
kbWorkspaceId ? { groups: { workspace: kbWorkspaceId } } : undefined
)
recordAudit({
workspaceId: writeCheck.knowledgeBase.workspaceId,
actorId: auth.userId,

View File

@@ -7,10 +7,12 @@ import { z } from 'zod'
import { encryptApiKey } from '@/lib/api-key/crypto'
import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { hasLiveSyncAccess } from '@/lib/billing/core/subscription'
import { generateRequestId } from '@/lib/core/utils/request'
import { dispatchSync } from '@/lib/knowledge/connectors/sync-engine'
import { allocateTagSlots } from '@/lib/knowledge/constants'
import { createTagDefinition } from '@/lib/knowledge/tags/service'
import { captureServerEvent } from '@/lib/posthog/server'
import { getCredential } from '@/app/api/auth/oauth/utils'
import { checkKnowledgeBaseAccess, checkKnowledgeBaseWriteAccess } from '@/app/api/knowledge/utils'
import { CONNECTOR_REGISTRY } from '@/connectors/registry'
@@ -96,6 +98,16 @@ export async function POST(request: NextRequest, { params }: { params: Promise<{
const { connectorType, credentialId, apiKey, sourceConfig, syncIntervalMinutes } = parsed.data
if (syncIntervalMinutes > 0 && syncIntervalMinutes < 60) {
const canUseLiveSync = await hasLiveSyncAccess(auth.userId)
if (!canUseLiveSync) {
return NextResponse.json(
{ error: 'Live sync requires a Max or Enterprise plan' },
{ status: 403 }
)
}
}
const connectorConfig = CONNECTOR_REGISTRY[connectorType]
if (!connectorConfig) {
return NextResponse.json(
@@ -150,19 +162,39 @@ export async function POST(request: NextRequest, { params }: { params: Promise<{
}
const tagSlotMapping: Record<string, string> = {}
let newTagSlots: Record<string, string> = {}
if (connectorConfig.tagDefinitions?.length) {
const disabledIds = new Set((sourceConfig.disabledTagIds as string[] | undefined) ?? [])
const enabledDefs = connectorConfig.tagDefinitions.filter((td) => !disabledIds.has(td.id))
const existingDefs = await db
.select({ tagSlot: knowledgeBaseTagDefinitions.tagSlot })
.select({
tagSlot: knowledgeBaseTagDefinitions.tagSlot,
displayName: knowledgeBaseTagDefinitions.displayName,
fieldType: knowledgeBaseTagDefinitions.fieldType,
})
.from(knowledgeBaseTagDefinitions)
.where(eq(knowledgeBaseTagDefinitions.knowledgeBaseId, knowledgeBaseId))
const usedSlots = new Set<string>(existingDefs.map((d) => d.tagSlot))
const { mapping, skipped: skippedTags } = allocateTagSlots(enabledDefs, usedSlots)
const existingByName = new Map(
existingDefs.map((d) => [d.displayName, { tagSlot: d.tagSlot, fieldType: d.fieldType }])
)
const defsNeedingSlots: typeof enabledDefs = []
for (const td of enabledDefs) {
const existing = existingByName.get(td.displayName)
if (existing && existing.fieldType === td.fieldType) {
tagSlotMapping[td.id] = existing.tagSlot
} else {
defsNeedingSlots.push(td)
}
}
const { mapping, skipped: skippedTags } = allocateTagSlots(defsNeedingSlots, usedSlots)
Object.assign(tagSlotMapping, mapping)
newTagSlots = mapping
for (const name of skippedTags) {
logger.warn(`[${requestId}] No available slots for "${name}"`)
@@ -196,7 +228,7 @@ export async function POST(request: NextRequest, { params }: { params: Promise<{
throw new Error('Knowledge base not found')
}
for (const [semanticId, slot] of Object.entries(tagSlotMapping)) {
for (const [semanticId, slot] of Object.entries(newTagSlots)) {
const td = connectorConfig.tagDefinitions!.find((d) => d.id === semanticId)!
await createTagDefinition(
{
@@ -227,6 +259,22 @@ export async function POST(request: NextRequest, { params }: { params: Promise<{
logger.info(`[${requestId}] Created connector ${connectorId} for KB ${knowledgeBaseId}`)
const kbWorkspaceId = writeCheck.knowledgeBase.workspaceId ?? ''
captureServerEvent(
auth.userId,
'knowledge_base_connector_added',
{
knowledge_base_id: knowledgeBaseId,
workspace_id: kbWorkspaceId,
connector_type: connectorType,
sync_interval_minutes: syncIntervalMinutes,
},
{
groups: kbWorkspaceId ? { workspace: kbWorkspaceId } : undefined,
setOnce: { first_connector_added_at: new Date().toISOString() },
}
)
recordAudit({
workspaceId: writeCheck.knowledgeBase.workspaceId,
actorId: auth.userId,
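The slot-reuse change above splits the enabled tag definitions into two buckets: a definition whose displayName and fieldType both match an existing row keeps that row's slot, while everything else falls through to fresh allocation, and only the freshly allocated slots are passed to createTagDefinition afterwards. A condensed sketch of the same flow, with minimal stand-in types:

interface TagDef { id: string; displayName: string; fieldType: string }
interface ExistingDef { tagSlot: string; displayName: string; fieldType: string }

type Allocate = (
  defs: TagDef[],
  used: Set<string>
) => { mapping: Record<string, string>; skipped: string[] }

function planTagSlots(enabledDefs: TagDef[], existingDefs: ExistingDef[], allocate: Allocate) {
  const usedSlots = new Set(existingDefs.map((d) => d.tagSlot))
  const existingByName = new Map(existingDefs.map((d) => [d.displayName, d]))
  const tagSlotMapping: Record<string, string> = {}
  const defsNeedingSlots: TagDef[] = []
  for (const td of enabledDefs) {
    const existing = existingByName.get(td.displayName)
    // Reuse only on a field-type match; a name collision with a different
    // type must fall through to fresh allocation.
    if (existing && existing.fieldType === td.fieldType) {
      tagSlotMapping[td.id] = existing.tagSlot
    } else {
      defsNeedingSlots.push(td)
    }
  }
  const { mapping, skipped } = allocate(defsNeedingSlots, usedSlots)
  Object.assign(tagSlotMapping, mapping)
  // Only `mapping` drives createTagDefinition; reused slots already have rows.
  return { tagSlotMapping, newTagSlots: mapping, skipped }
}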

View File

@@ -10,6 +10,7 @@ import {
retryDocumentProcessing,
updateDocument,
} from '@/lib/knowledge/documents/service'
import { captureServerEvent } from '@/lib/posthog/server'
import { checkDocumentAccess, checkDocumentWriteAccess } from '@/app/api/knowledge/utils'
const logger = createLogger('DocumentByIdAPI')
@@ -285,6 +286,14 @@ export async function DELETE(
request: req,
})
const kbWorkspaceId = accessCheck.knowledgeBase?.workspaceId ?? ''
captureServerEvent(
userId,
'knowledge_base_document_deleted',
{ knowledge_base_id: knowledgeBaseId, workspace_id: kbWorkspaceId },
kbWorkspaceId ? { groups: { workspace: kbWorkspaceId } } : undefined
)
return NextResponse.json({
success: true,
data: result,

View File

@@ -16,6 +16,7 @@ import {
type TagFilterCondition,
} from '@/lib/knowledge/documents/service'
import type { DocumentSortField, SortOrder } from '@/lib/knowledge/documents/types'
import { captureServerEvent } from '@/lib/posthog/server'
import { authorizeWorkflowByWorkspacePermission } from '@/lib/workflows/utils'
import { checkKnowledgeBaseAccess, checkKnowledgeBaseWriteAccess } from '@/app/api/knowledge/utils'
@@ -214,6 +215,8 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const kbWorkspaceId = accessCheck.knowledgeBase?.workspaceId
if (body.bulk === true) {
try {
const validatedData = BulkCreateDocumentsSchema.parse(body)
@@ -240,6 +243,21 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
// Silently fail: best-effort tracking must not block the upload
}
captureServerEvent(
userId,
'knowledge_base_document_uploaded',
{
knowledge_base_id: knowledgeBaseId,
workspace_id: kbWorkspaceId ?? '',
document_count: createdDocuments.length,
upload_type: 'bulk',
},
{
...(kbWorkspaceId ? { groups: { workspace: kbWorkspaceId } } : {}),
setOnce: { first_document_uploaded_at: new Date().toISOString() },
}
)
processDocumentsWithQueue(
createdDocuments,
knowledgeBaseId,
@@ -314,6 +332,21 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
// Silently fail: best-effort tracking must not block the upload
}
captureServerEvent(
userId,
'knowledge_base_document_uploaded',
{
knowledge_base_id: knowledgeBaseId,
workspace_id: kbWorkspaceId ?? '',
document_count: 1,
upload_type: 'single',
},
{
...(kbWorkspaceId ? { groups: { workspace: kbWorkspaceId } } : {}),
setOnce: { first_document_uploaded_at: new Date().toISOString() },
}
)
recordAudit({
workspaceId: accessCheck.knowledgeBase?.workspaceId ?? null,
actorId: userId,

View File

@@ -11,6 +11,7 @@ import {
KnowledgeBaseConflictError,
type KnowledgeBaseScope,
} from '@/lib/knowledge/service'
import { captureServerEvent } from '@/lib/posthog/server'
const logger = createLogger('KnowledgeBaseAPI')
@@ -115,6 +116,20 @@ export async function POST(req: NextRequest) {
// Telemetry should not fail the operation
}
captureServerEvent(
session.user.id,
'knowledge_base_created',
{
knowledge_base_id: newKnowledgeBase.id,
workspace_id: validatedData.workspaceId,
name: validatedData.name,
},
{
groups: { workspace: validatedData.workspaceId },
setOnce: { first_kb_created_at: new Date().toISOString() },
}
)
logger.info(
`[${requestId}] Knowledge base created: ${newKnowledgeBase.id} for user ${session.user.id}`
)

View File

@@ -5,6 +5,7 @@
* @vitest-environment node
*/
import { createEnvMock, databaseMock, loggerMock } from '@sim/testing'
import { mockNextFetchResponse } from '@sim/testing/mocks'
import { beforeEach, describe, expect, it, vi } from 'vitest'
vi.mock('drizzle-orm')
@@ -14,16 +15,6 @@ vi.mock('@/lib/knowledge/documents/utils', () => ({
retryWithExponentialBackoff: (fn: any) => fn(),
}))
vi.stubGlobal(
'fetch',
vi.fn().mockResolvedValue({
ok: true,
json: async () => ({
data: [{ embedding: [0.1, 0.2, 0.3] }],
}),
})
)
vi.mock('@/lib/core/config/env', () => createEnvMock())
import {
@@ -178,17 +169,16 @@ describe('Knowledge Search Utils', () => {
OPENAI_API_KEY: 'test-openai-key',
})
const fetchSpy = vi.mocked(fetch)
fetchSpy.mockResolvedValueOnce({
ok: true,
json: async () => ({
mockNextFetchResponse({
json: {
data: [{ embedding: [0.1, 0.2, 0.3] }],
}),
} as any)
usage: { prompt_tokens: 1, total_tokens: 1 },
},
})
const result = await generateSearchEmbedding('test query')
expect(fetchSpy).toHaveBeenCalledWith(
expect(vi.mocked(fetch)).toHaveBeenCalledWith(
'https://test.openai.azure.com/openai/deployments/text-embedding-ada-002/embeddings?api-version=2024-12-01-preview',
expect.objectContaining({
headers: expect.objectContaining({
@@ -209,17 +199,16 @@ describe('Knowledge Search Utils', () => {
OPENAI_API_KEY: 'test-openai-key',
})
const fetchSpy = vi.mocked(fetch)
fetchSpy.mockResolvedValueOnce({
ok: true,
json: async () => ({
mockNextFetchResponse({
json: {
data: [{ embedding: [0.1, 0.2, 0.3] }],
}),
} as any)
usage: { prompt_tokens: 1, total_tokens: 1 },
},
})
const result = await generateSearchEmbedding('test query')
expect(fetchSpy).toHaveBeenCalledWith(
expect(vi.mocked(fetch)).toHaveBeenCalledWith(
'https://api.openai.com/v1/embeddings',
expect.objectContaining({
headers: expect.objectContaining({
@@ -243,17 +232,16 @@ describe('Knowledge Search Utils', () => {
OPENAI_API_KEY: 'test-openai-key',
})
const fetchSpy = vi.mocked(fetch)
fetchSpy.mockResolvedValueOnce({
ok: true,
json: async () => ({
mockNextFetchResponse({
json: {
data: [{ embedding: [0.1, 0.2, 0.3] }],
}),
} as any)
usage: { prompt_tokens: 1, total_tokens: 1 },
},
})
await generateSearchEmbedding('test query')
expect(fetchSpy).toHaveBeenCalledWith(
expect(vi.mocked(fetch)).toHaveBeenCalledWith(
expect.stringContaining('api-version='),
expect.any(Object)
)
@@ -273,17 +261,16 @@ describe('Knowledge Search Utils', () => {
OPENAI_API_KEY: 'test-openai-key',
})
const fetchSpy = vi.mocked(fetch)
fetchSpy.mockResolvedValueOnce({
ok: true,
json: async () => ({
mockNextFetchResponse({
json: {
data: [{ embedding: [0.1, 0.2, 0.3] }],
}),
} as any)
usage: { prompt_tokens: 1, total_tokens: 1 },
},
})
await generateSearchEmbedding('test query', 'text-embedding-3-small')
expect(fetchSpy).toHaveBeenCalledWith(
expect(vi.mocked(fetch)).toHaveBeenCalledWith(
'https://test.openai.azure.com/openai/deployments/custom-embedding-model/embeddings?api-version=2024-12-01-preview',
expect.any(Object)
)
@@ -311,13 +298,12 @@ describe('Knowledge Search Utils', () => {
KB_OPENAI_MODEL_NAME: 'text-embedding-ada-002',
})
const fetchSpy = vi.mocked(fetch)
fetchSpy.mockResolvedValueOnce({
mockNextFetchResponse({
ok: false,
status: 404,
statusText: 'Not Found',
text: async () => 'Deployment not found',
} as any)
text: 'Deployment not found',
})
await expect(generateSearchEmbedding('test query')).rejects.toThrow('Embedding API failed')
@@ -332,13 +318,12 @@ describe('Knowledge Search Utils', () => {
OPENAI_API_KEY: 'test-openai-key',
})
const fetchSpy = vi.mocked(fetch)
fetchSpy.mockResolvedValueOnce({
mockNextFetchResponse({
ok: false,
status: 429,
statusText: 'Too Many Requests',
text: async () => 'Rate limit exceeded',
} as any)
text: 'Rate limit exceeded',
})
await expect(generateSearchEmbedding('test query')).rejects.toThrow('Embedding API failed')
@@ -356,17 +341,16 @@ describe('Knowledge Search Utils', () => {
KB_OPENAI_MODEL_NAME: 'text-embedding-ada-002',
})
const fetchSpy = vi.mocked(fetch)
fetchSpy.mockResolvedValueOnce({
ok: true,
json: async () => ({
mockNextFetchResponse({
json: {
data: [{ embedding: [0.1, 0.2, 0.3] }],
}),
} as any)
usage: { prompt_tokens: 1, total_tokens: 1 },
},
})
await generateSearchEmbedding('test query')
expect(fetchSpy).toHaveBeenCalledWith(
expect(vi.mocked(fetch)).toHaveBeenCalledWith(
expect.any(String),
expect.objectContaining({
body: JSON.stringify({
@@ -387,17 +371,16 @@ describe('Knowledge Search Utils', () => {
OPENAI_API_KEY: 'test-openai-key',
})
const fetchSpy = vi.mocked(fetch)
fetchSpy.mockResolvedValueOnce({
ok: true,
json: async () => ({
mockNextFetchResponse({
json: {
data: [{ embedding: [0.1, 0.2, 0.3] }],
}),
} as any)
usage: { prompt_tokens: 1, total_tokens: 1 },
},
})
await generateSearchEmbedding('test query', 'text-embedding-3-small')
expect(fetchSpy).toHaveBeenCalledWith(
expect(vi.mocked(fetch)).toHaveBeenCalledWith(
expect.any(String),
expect.objectContaining({
body: JSON.stringify({
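The test rewrite in this file swaps the hand-rolled fetch mocks for the shared mockNextFetchResponse helper from @sim/testing/mocks. Its implementation is not part of this diff, but the call sites imply it wraps vi.mocked(fetch).mockResolvedValueOnce with defaults; a sketch under that assumption:

import { vi } from 'vitest'

// Assumed shape, inferred from the call sites above: `json` and `text`
// become the async body readers, `ok` defaults to true.
function mockNextFetchResponse(init: {
  ok?: boolean
  status?: number
  statusText?: string
  json?: unknown
  text?: string
}) {
  vi.mocked(fetch).mockResolvedValueOnce({
    ok: init.ok ?? true,
    status: init.status ?? 200,
    statusText: init.statusText ?? 'OK',
    json: async () => init.json,
    text: async () => init.text ?? '',
  } as unknown as Response)
}

This keeps each test to the fields it actually cares about (a json payload or an error text) instead of restating the whole Response literal.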

View File

@@ -77,6 +77,7 @@ vi.stubGlobal(
{ embedding: [0.1, 0.2], index: 0 },
{ embedding: [0.3, 0.4], index: 1 },
],
usage: { prompt_tokens: 2, total_tokens: 2 },
}),
})
)
@@ -294,7 +295,7 @@ describe('Knowledge Utils', () => {
it.concurrent('should return same length as input', async () => {
const result = await generateEmbeddings(['a', 'b'])
expect(result.length).toBe(2)
expect(result.embeddings.length).toBe(2)
})
it('should use Azure OpenAI when Azure config is provided', async () => {
@@ -313,6 +314,7 @@ describe('Knowledge Utils', () => {
ok: true,
json: async () => ({
data: [{ embedding: [0.1, 0.2], index: 0 }],
usage: { prompt_tokens: 1, total_tokens: 1 },
}),
} as any)
@@ -342,6 +344,7 @@ describe('Knowledge Utils', () => {
ok: true,
json: async () => ({
data: [{ embedding: [0.1, 0.2], index: 0 }],
usage: { prompt_tokens: 1, total_tokens: 1 },
}),
} as any)

View File

@@ -18,11 +18,14 @@ import { eq, sql } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { validateOAuthAccessToken } from '@/lib/auth/oauth-token'
import { getHighestPrioritySubscription } from '@/lib/billing/core/subscription'
import { createRunSegment } from '@/lib/copilot/async-runs/repository'
import { ORCHESTRATION_TIMEOUT_MS, SIM_AGENT_API_URL } from '@/lib/copilot/constants'
import { runCopilotLifecycle } from '@/lib/copilot/request/lifecycle/run'
import { orchestrateSubagentStream } from '@/lib/copilot/request/subagent'
import { ensureHandlersRegistered, executeTool } from '@/lib/copilot/tool-executor'
import { prepareExecutionContext } from '@/lib/copilot/tools/handlers/context'
import { orchestrateCopilotStream } from '@/lib/copilot/orchestrator'
import { orchestrateSubagentStream } from '@/lib/copilot/orchestrator/subagent'
import {
executeToolServerSide,
prepareExecutionContext,
} from '@/lib/copilot/orchestrator/tool-executor'
import { DIRECT_TOOL_DEFS, SUBAGENT_TOOL_DEFS } from '@/lib/copilot/tools/mcp/definitions'
import { env } from '@/lib/core/config/env'
import { RateLimiter } from '@/lib/core/rate-limiter'
@@ -642,8 +645,7 @@ async function handleDirectToolCall(
startTime: Date.now(),
}
ensureHandlersRegistered()
const result = await executeTool(toolCall.name, toolCall.params || {}, execContext)
const result = await executeToolServerSide(toolCall, execContext)
return {
content: [
@@ -726,10 +728,25 @@ async function handleBuildToolCall(
chatId,
}
const result = await runCopilotLifecycle(requestPayload, {
const executionId = crypto.randomUUID()
const runId = crypto.randomUUID()
const messageId = requestPayload.messageId as string
await createRunSegment({
id: runId,
executionId,
chatId,
userId,
workflowId: resolved.workflowId,
streamId: messageId,
}).catch(() => {})
const result = await orchestrateCopilotStream(requestPayload, {
userId,
workflowId: resolved.workflowId,
chatId,
executionId,
runId,
goRoute: '/api/mcp',
autoExecuteTools: true,
timeout: ORCHESTRATION_TIMEOUT_MS,
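This handler and the mothership execute route later in the diff now create a run segment before starting orchestration, so the stream has durable execution/run ids up front. A minimal sketch of the shared sequence; the wrapper itself is hypothetical, the option names come from the hunks above:

import { createRunSegment } from '@/lib/copilot/async-runs/repository'
import { orchestrateCopilotStream } from '@/lib/copilot/orchestrator'

async function runWithSegment(
  requestPayload: Record<string, unknown>,
  opts: { userId: string; workflowId?: string; chatId?: string; streamId: string; goRoute: string }
) {
  const executionId = crypto.randomUUID()
  const runId = crypto.randomUUID()
  // Best-effort bookkeeping: a failed segment insert must not block the stream.
  await createRunSegment({
    id: runId,
    executionId,
    chatId: opts.chatId,
    userId: opts.userId,
    workflowId: opts.workflowId,
    streamId: opts.streamId,
  }).catch(() => {})
  return orchestrateCopilotStream(requestPayload, {
    userId: opts.userId,
    workflowId: opts.workflowId,
    chatId: opts.chatId,
    executionId,
    runId,
    goRoute: opts.goRoute,
    autoExecuteTools: true,
  })
}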

View File

@@ -18,6 +18,7 @@ import {
createMcpSuccessResponse,
generateMcpServerId,
} from '@/lib/mcp/utils'
import { captureServerEvent } from '@/lib/posthog/server'
const logger = createLogger('McpServersAPI')
@@ -180,6 +181,20 @@ export const POST = withMcpAuth('write')(
// Silently fail
}
const sourceParam = body.source as string | undefined
const source =
sourceParam === 'settings' || sourceParam === 'tool_input' ? sourceParam : undefined
captureServerEvent(
userId,
'mcp_server_connected',
{ workspace_id: workspaceId, server_name: body.name, transport: body.transport, source },
{
groups: { workspace: workspaceId },
setOnce: { first_mcp_connected_at: new Date().toISOString() },
}
)
recordAudit({
workspaceId,
actorId: userId,
@@ -214,6 +229,9 @@ export const DELETE = withMcpAuth('admin')(
try {
const { searchParams } = new URL(request.url)
const serverId = searchParams.get('serverId')
const sourceParam = searchParams.get('source')
const source =
sourceParam === 'settings' || sourceParam === 'tool_input' ? sourceParam : undefined
if (!serverId) {
return createMcpErrorResponse(
@@ -242,6 +260,13 @@ export const DELETE = withMcpAuth('admin')(
logger.info(`[${requestId}] Successfully deleted MCP server: ${serverId}`)
captureServerEvent(
userId,
'mcp_server_disconnected',
{ workspace_id: workspaceId, server_name: deletedServer.name, source },
{ groups: { workspace: workspaceId } }
)
recordAudit({
workspaceId,
actorId: userId,
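The source attribution added here whitelists the caller-supplied value down to 'settings' | 'tool_input' before it reaches analytics; the skills route later in the diff repeats the same two-line narrowing. As a reusable type guard it could look like this (helper name hypothetical):

const EVENT_SOURCES = ['settings', 'tool_input'] as const
type EventSource = (typeof EVENT_SOURCES)[number]

// Returns a typed union member, or undefined for anything unexpected,
// so analytics never records free-form caller input.
function parseEventSource(raw: string | null | undefined): EventSource | undefined {
  return typeof raw === 'string' && (EVENT_SOURCES as readonly string[]).includes(raw)
    ? (raw as EventSource)
    : undefined
}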

View File

@@ -5,26 +5,18 @@ import { eq, sql } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { resolveOrCreateChat } from '@/lib/copilot/chat/lifecycle'
import { buildCopilotRequestPayload } from '@/lib/copilot/chat/payload'
import {
buildPersistedAssistantMessage,
buildPersistedUserMessage,
} from '@/lib/copilot/chat/persisted-message'
import {
processContextsServer,
resolveActiveResourceContext,
} from '@/lib/copilot/chat/process-contents'
import { generateWorkspaceContext } from '@/lib/copilot/chat/workspace-context'
import { createRequestTracker, createUnauthorizedResponse } from '@/lib/copilot/request/http'
import { createSSEStream, SSE_RESPONSE_HEADERS } from '@/lib/copilot/request/lifecycle/start'
import { resolveOrCreateChat } from '@/lib/copilot/chat-lifecycle'
import { buildCopilotRequestPayload } from '@/lib/copilot/chat-payload'
import {
acquirePendingChatStream,
getPendingChatStreamId,
releasePendingChatStream,
} from '@/lib/copilot/request/session'
import type { OrchestratorResult } from '@/lib/copilot/request/types'
import { taskPubSub } from '@/lib/copilot/tasks'
createSSEStream,
SSE_RESPONSE_HEADERS,
} from '@/lib/copilot/chat-streaming'
import type { OrchestratorResult } from '@/lib/copilot/orchestrator/types'
import { processContextsServer, resolveActiveResourceContext } from '@/lib/copilot/process-contents'
import { createRequestTracker, createUnauthorizedResponse } from '@/lib/copilot/request-helpers'
import { taskPubSub } from '@/lib/copilot/task-events'
import { generateWorkspaceContext } from '@/lib/copilot/workspace-context'
import {
assertActiveWorkspaceAccess,
getUserEntityPermissions,
@@ -45,6 +37,7 @@ const FileAttachmentSchema = z.object({
const ResourceAttachmentSchema = z.object({
type: z.enum(['workflow', 'table', 'file', 'knowledgebase']),
id: z.string().min(1),
title: z.string().optional(),
active: z.boolean().optional(),
})
@@ -94,9 +87,7 @@ const MothershipMessageSchema = z.object({
*/
export async function POST(req: NextRequest) {
const tracker = createRequestTracker()
let lockChatId: string | undefined
let lockStreamId = ''
let chatStreamLockAcquired = false
let userMessageIdForLogs: string | undefined
try {
const session = await getSession()
@@ -119,23 +110,27 @@ export async function POST(req: NextRequest) {
} = MothershipMessageSchema.parse(body)
const userMessageId = providedMessageId || crypto.randomUUID()
lockStreamId = userMessageId
userMessageIdForLogs = userMessageId
const reqLogger = logger.withMetadata({
requestId: tracker.requestId,
messageId: userMessageId,
})
// Phase 1: workspace access + chat resolution in parallel
const [accessResult, chatResult] = await Promise.allSettled([
assertActiveWorkspaceAccess(workspaceId, authenticatedUserId),
chatId || createNewChat
? resolveOrCreateChat({
chatId,
userId: authenticatedUserId,
workspaceId,
model: 'claude-opus-4-6',
type: 'mothership',
})
: null,
])
reqLogger.info('Received mothership chat start request', {
workspaceId,
chatId,
createNewChat,
hasContexts: Array.isArray(contexts) && contexts.length > 0,
contextsCount: Array.isArray(contexts) ? contexts.length : 0,
hasResourceAttachments: Array.isArray(resourceAttachments) && resourceAttachments.length > 0,
resourceAttachmentCount: Array.isArray(resourceAttachments) ? resourceAttachments.length : 0,
hasFileAttachments: Array.isArray(fileAttachments) && fileAttachments.length > 0,
fileAttachmentCount: Array.isArray(fileAttachments) ? fileAttachments.length : 0,
})
if (accessResult.status === 'rejected') {
try {
await assertActiveWorkspaceAccess(workspaceId, authenticatedUserId)
} catch {
return NextResponse.json({ error: 'Workspace not found or access denied' }, { status: 403 })
}
@@ -143,12 +138,18 @@ export async function POST(req: NextRequest) {
let conversationHistory: any[] = []
let actualChatId = chatId
if (chatResult.status === 'fulfilled' && chatResult.value) {
const resolved = chatResult.value
currentChat = resolved.chat
actualChatId = resolved.chatId || chatId
conversationHistory = Array.isArray(resolved.conversationHistory)
? resolved.conversationHistory
if (chatId || createNewChat) {
const chatResult = await resolveOrCreateChat({
chatId,
userId: authenticatedUserId,
workspaceId,
model: 'claude-opus-4-6',
type: 'mothership',
})
currentChat = chatResult.chat
actualChatId = chatResult.chatId || chatId
conversationHistory = Array.isArray(chatResult.conversationHistory)
? chatResult.conversationHistory
: []
if (chatId && !currentChat) {
@@ -156,73 +157,76 @@ export async function POST(req: NextRequest) {
}
}
if (actualChatId) {
chatStreamLockAcquired = await acquirePendingChatStream(actualChatId, userMessageId)
if (!chatStreamLockAcquired) {
const activeStreamId = await getPendingChatStreamId(actualChatId)
return NextResponse.json(
{
error: 'A response is already in progress for this chat.',
...(activeStreamId ? { activeStreamId } : {}),
},
{ status: 409 }
let agentContexts: Array<{ type: string; content: string }> = []
if (Array.isArray(contexts) && contexts.length > 0) {
try {
agentContexts = await processContextsServer(
contexts as any,
authenticatedUserId,
message,
workspaceId,
actualChatId
)
} catch (e) {
reqLogger.error('Failed to process contexts', e)
}
lockChatId = actualChatId
}
// Phase 2: contexts + workspace context + user message persistence in parallel
const contextPromise = (async () => {
let agentCtxs: Array<{ type: string; content: string }> = []
if (Array.isArray(contexts) && contexts.length > 0) {
try {
agentCtxs = await processContextsServer(
contexts as any,
authenticatedUserId,
message,
if (Array.isArray(resourceAttachments) && resourceAttachments.length > 0) {
const results = await Promise.allSettled(
resourceAttachments.map(async (r) => {
const ctx = await resolveActiveResourceContext(
r.type,
r.id,
workspaceId,
authenticatedUserId,
actualChatId
)
} catch (e) {
logger.error(`[${tracker.requestId}] Failed to process contexts`, e)
}
}
if (Array.isArray(resourceAttachments) && resourceAttachments.length > 0) {
const results = await Promise.allSettled(
resourceAttachments.map(async (r) => {
const ctx = await resolveActiveResourceContext(
r.type,
r.id,
workspaceId,
authenticatedUserId,
actualChatId
)
if (!ctx) return null
return { ...ctx, tag: r.active ? '@active_tab' : '@open_tab' }
})
)
for (const result of results) {
if (result.status === 'fulfilled' && result.value) {
agentCtxs.push(result.value)
} else if (result.status === 'rejected') {
logger.error(
`[${tracker.requestId}] Failed to resolve resource attachment`,
result.reason
)
if (!ctx) return null
return {
...ctx,
tag: r.active ? '@active_tab' : '@open_tab',
}
})
)
for (const result of results) {
if (result.status === 'fulfilled' && result.value) {
agentContexts.push(result.value)
} else if (result.status === 'rejected') {
reqLogger.error('Failed to resolve resource attachment', result.reason)
}
}
return agentCtxs
})()
}
const userMsgPromise = (async () => {
if (!actualChatId) return
const userMsg = buildPersistedUserMessage({
if (actualChatId) {
const userMsg = {
id: userMessageId,
role: 'user' as const,
content: message,
fileAttachments,
contexts,
})
timestamp: new Date().toISOString(),
...(fileAttachments &&
fileAttachments.length > 0 && {
fileAttachments: fileAttachments.map((f) => ({
id: f.id,
key: f.key,
filename: f.filename,
media_type: f.media_type,
size: f.size,
})),
}),
...(contexts &&
contexts.length > 0 && {
contexts: contexts.map((c) => ({
kind: c.kind,
label: c.label,
...(c.workflowId && { workflowId: c.workflowId }),
...(c.knowledgeId && { knowledgeId: c.knowledgeId }),
...(c.tableId && { tableId: c.tableId }),
...(c.fileId && { fileId: c.fileId }),
})),
}),
}
const [updated] = await db
.update(copilotChats)
.set({
@@ -238,15 +242,11 @@ export async function POST(req: NextRequest) {
conversationHistory = freshMessages.filter((m: any) => m.id !== userMessageId)
taskPubSub?.publishStatusChanged({ workspaceId, chatId: actualChatId, type: 'started' })
}
})()
}
const [agentContexts, [workspaceContext, userPermission]] = await Promise.all([
contextPromise,
Promise.all([
generateWorkspaceContext(workspaceId, authenticatedUserId),
getUserEntityPermissions(authenticatedUserId, 'workspace', workspaceId).catch(() => null),
]),
userMsgPromise,
const [workspaceContext, userPermission] = await Promise.all([
generateWorkspaceContext(workspaceId, authenticatedUserId),
getUserEntityPermissions(authenticatedUserId, 'workspace', workspaceId).catch(() => null),
])
const requestPayload = await buildCopilotRequestPayload(
@@ -267,6 +267,19 @@ export async function POST(req: NextRequest) {
{ selectedModel: '' }
)
if (actualChatId) {
const acquired = await acquirePendingChatStream(actualChatId, userMessageId)
if (!acquired) {
return NextResponse.json(
{
error:
'A response is already in progress for this chat. Wait for it to finish or use Stop.',
},
{ status: 409 }
)
}
}
const executionId = crypto.randomUUID()
const runId = crypto.randomUUID()
const stream = createSSEStream({
@@ -282,6 +295,7 @@ export async function POST(req: NextRequest) {
titleModel: 'claude-opus-4-6',
requestId: tracker.requestId,
workspaceId,
pendingChatStreamAlreadyRegistered: Boolean(actualChatId),
orchestrateOptions: {
userId: authenticatedUserId,
workspaceId,
@@ -295,7 +309,46 @@ export async function POST(req: NextRequest) {
if (!actualChatId) return
if (!result.success) return
const assistantMessage = buildPersistedAssistantMessage(result, result.requestId)
const assistantMessage: Record<string, unknown> = {
id: crypto.randomUUID(),
role: 'assistant' as const,
content: result.content,
timestamp: new Date().toISOString(),
...(result.requestId ? { requestId: result.requestId } : {}),
}
if (result.toolCalls.length > 0) {
assistantMessage.toolCalls = result.toolCalls
}
if (result.contentBlocks.length > 0) {
assistantMessage.contentBlocks = result.contentBlocks.map((block) => {
const stored: Record<string, unknown> = { type: block.type }
if (block.content) stored.content = block.content
if (block.type === 'tool_call' && block.toolCall) {
const state =
block.toolCall.result?.success !== undefined
? block.toolCall.result.success
? 'success'
: 'error'
: block.toolCall.status
const isSubagentTool = !!block.calledBy
const isNonTerminal =
state === 'cancelled' || state === 'pending' || state === 'executing'
stored.toolCall = {
id: block.toolCall.id,
name: block.toolCall.name,
state,
...(isSubagentTool && isNonTerminal ? {} : { result: block.toolCall.result }),
...(isSubagentTool && isNonTerminal
? {}
: block.toolCall.params
? { params: block.toolCall.params }
: {}),
...(block.calledBy ? { calledBy: block.calledBy } : {}),
}
}
return stored
})
}
try {
const [row] = await db
@@ -328,7 +381,7 @@ export async function POST(req: NextRequest) {
})
}
} catch (error) {
logger.error(`[${tracker.requestId}] Failed to persist chat messages`, {
reqLogger.error('Failed to persist chat messages', {
chatId: actualChatId,
error: error instanceof Error ? error.message : 'Unknown error',
})
@@ -339,9 +392,6 @@ export async function POST(req: NextRequest) {
return new Response(stream, { headers: SSE_RESPONSE_HEADERS })
} catch (error) {
if (chatStreamLockAcquired && lockChatId && lockStreamId) {
await releasePendingChatStream(lockChatId, lockStreamId)
}
if (error instanceof z.ZodError) {
return NextResponse.json(
{ error: 'Invalid request data', details: error.errors },
@@ -349,9 +399,11 @@ export async function POST(req: NextRequest) {
)
}
logger.error(`[${tracker.requestId}] Error handling mothership chat:`, {
error: error instanceof Error ? error.message : 'Unknown error',
})
logger
.withMetadata({ requestId: tracker.requestId, messageId: userMessageIdForLogs })
.error('Error handling mothership chat', {
error: error instanceof Error ? error.message : 'Unknown error',
})
return NextResponse.json(
{ error: error instanceof Error ? error.message : 'Internal server error' },
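One structural point in the rewrite above: the pending-stream lock moves from early in the handler (which required releasePendingChatStream in the catch block) to just before the SSE stream is created, and pendingChatStreamAlreadyRegistered tells the stream lifecycle it now owns the release. A minimal sketch of the new ordering, assuming acquirePendingChatStream is exported from the new chat-streaming module and returns false while another response is in flight:

import { NextResponse } from 'next/server'
import { acquirePendingChatStream } from '@/lib/copilot/chat-streaming'

// Acquire the per-chat lock last, after validation and context building,
// so earlier failures cannot leave a stale lock behind.
async function lockChatOrConflict(chatId: string, streamId: string) {
  const acquired = await acquirePendingChatStream(chatId, streamId)
  if (acquired) return null // caller proceeds to createSSEStream
  return NextResponse.json(
    {
      error:
        'A response is already in progress for this chat. Wait for it to finish or use Stop.',
    },
    { status: 409 }
  )
}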

View File

@@ -5,9 +5,8 @@ import { and, eq, sql } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { normalizeMessage, type PersistedMessage } from '@/lib/copilot/chat/persisted-message'
import { releasePendingChatStream } from '@/lib/copilot/request/session'
import { taskPubSub } from '@/lib/copilot/tasks'
import { releasePendingChatStream } from '@/lib/copilot/chat-streaming'
import { taskPubSub } from '@/lib/copilot/task-events'
const logger = createLogger('MothershipChatStopAPI')
@@ -27,25 +26,15 @@ const StoredToolCallSchema = z
display: z
.object({
text: z.string().optional(),
title: z.string().optional(),
phaseLabel: z.string().optional(),
})
.optional(),
calledBy: z.string().optional(),
durationMs: z.number().optional(),
error: z.string().optional(),
})
.nullable()
const ContentBlockSchema = z.object({
type: z.string(),
lane: z.enum(['main', 'subagent']).optional(),
content: z.string().optional(),
channel: z.enum(['assistant', 'thinking']).optional(),
phase: z.enum(['call', 'args_delta', 'result']).optional(),
kind: z.enum(['subagent', 'structured_result', 'subagent_result']).optional(),
lifecycle: z.enum(['start', 'end']).optional(),
status: z.enum(['complete', 'error', 'cancelled']).optional(),
toolCall: StoredToolCallSchema.optional(),
})
@@ -81,14 +70,15 @@ export async function POST(req: NextRequest) {
const hasBlocks = Array.isArray(contentBlocks) && contentBlocks.length > 0
if (hasContent || hasBlocks) {
const normalized = normalizeMessage({
const assistantMessage: Record<string, unknown> = {
id: crypto.randomUUID(),
role: 'assistant',
role: 'assistant' as const,
content,
timestamp: new Date().toISOString(),
...(hasBlocks ? { contentBlocks } : {}),
})
const assistantMessage: PersistedMessage = normalized
}
if (hasBlocks) {
assistantMessage.contentBlocks = contentBlocks
}
setClause.messages = sql`${copilotChats.messages} || ${JSON.stringify([assistantMessage])}::jsonb`
}

View File

@@ -4,15 +4,16 @@ import { createLogger } from '@sim/logger'
import { and, eq, sql } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getAccessibleCopilotChat } from '@/lib/copilot/chat/lifecycle'
import { getAccessibleCopilotChat } from '@/lib/copilot/chat-lifecycle'
import { getStreamMeta, readStreamEvents } from '@/lib/copilot/orchestrator/stream/buffer'
import {
authenticateCopilotRequestSessionOnly,
createBadRequestResponse,
createInternalServerErrorResponse,
createUnauthorizedResponse,
} from '@/lib/copilot/request/http'
import { readEvents } from '@/lib/copilot/request/session/buffer'
import { taskPubSub } from '@/lib/copilot/tasks'
} from '@/lib/copilot/request-helpers'
import { taskPubSub } from '@/lib/copilot/task-events'
import { captureServerEvent } from '@/lib/posthog/server'
const logger = createLogger('MothershipChatAPI')
@@ -46,24 +47,29 @@ export async function GET(
}
let streamSnapshot: {
events: unknown[]
events: Array<{ eventId: number; streamId: string; event: Record<string, unknown> }>
status: string
} | null = null
if (chat.conversationId) {
try {
const events = await readEvents(chat.conversationId, '0')
const [meta, events] = await Promise.all([
getStreamMeta(chat.conversationId),
readStreamEvents(chat.conversationId, 0),
])
streamSnapshot = {
events: events || [],
status: events.length > 0 ? 'active' : 'unknown',
status: meta?.status || 'unknown',
}
} catch (error) {
logger.warn('Failed to read stream snapshot for mothership chat', {
chatId,
conversationId: chat.conversationId,
error: error instanceof Error ? error.message : String(error),
})
logger
.withMetadata({ messageId: chat.conversationId || undefined })
.warn('Failed to read stream snapshot for mothership chat', {
chatId,
conversationId: chat.conversationId,
error: error instanceof Error ? error.message : String(error),
})
}
}
@@ -137,12 +143,32 @@ export async function PATCH(
return NextResponse.json({ success: false, error: 'Chat not found' }, { status: 404 })
}
if (title !== undefined && updatedChat.workspaceId) {
taskPubSub?.publishStatusChanged({
workspaceId: updatedChat.workspaceId,
chatId,
type: 'renamed',
})
if (updatedChat.workspaceId) {
if (title !== undefined) {
taskPubSub?.publishStatusChanged({
workspaceId: updatedChat.workspaceId,
chatId,
type: 'renamed',
})
captureServerEvent(
userId,
'task_renamed',
{ workspace_id: updatedChat.workspaceId },
{
groups: { workspace: updatedChat.workspaceId },
}
)
}
if (isUnread === true) {
captureServerEvent(
userId,
'task_marked_unread',
{ workspace_id: updatedChat.workspaceId },
{
groups: { workspace: updatedChat.workspaceId },
}
)
}
}
return NextResponse.json({ success: true })
@@ -198,6 +224,14 @@ export async function DELETE(
chatId,
type: 'deleted',
})
captureServerEvent(
userId,
'task_deleted',
{ workspace_id: deletedChat.workspaceId },
{
groups: { workspace: deletedChat.workspaceId },
}
)
}
return NextResponse.json({ success: true })
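The snapshot change above stops inferring stream status from the event count and reads the buffer's own metadata alongside the events. A sketch of the parallel read, using the two functions this file now imports:

import { getStreamMeta, readStreamEvents } from '@/lib/copilot/orchestrator/stream/buffer'

async function readStreamSnapshot(conversationId: string) {
  // Fetch meta and buffered events concurrently; status comes from the
  // buffer's meta record rather than `events.length > 0`.
  const [meta, events] = await Promise.all([
    getStreamMeta(conversationId),
    readStreamEvents(conversationId, 0),
  ])
  return { events: events || [], status: meta?.status || 'unknown' }
}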

View File

@@ -1,43 +0,0 @@
import { db } from '@sim/db'
import { copilotChats } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, sql } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import {
authenticateCopilotRequestSessionOnly,
createBadRequestResponse,
createInternalServerErrorResponse,
createUnauthorizedResponse,
} from '@/lib/copilot/request/http'
const logger = createLogger('MarkTaskReadAPI')
const MarkReadSchema = z.object({
chatId: z.string().min(1),
})
export async function POST(request: NextRequest) {
try {
const { userId, isAuthenticated } = await authenticateCopilotRequestSessionOnly()
if (!isAuthenticated || !userId) {
return createUnauthorizedResponse()
}
const body = await request.json()
const { chatId } = MarkReadSchema.parse(body)
await db
.update(copilotChats)
.set({ lastSeenAt: sql`GREATEST(${copilotChats.updatedAt}, NOW())` })
.where(and(eq(copilotChats.id, chatId), eq(copilotChats.userId, userId)))
return NextResponse.json({ success: true })
} catch (error) {
if (error instanceof z.ZodError) {
return createBadRequestResponse('chatId is required')
}
logger.error('Error marking task as read:', error)
return createInternalServerErrorResponse('Failed to mark task as read')
}
}

View File

@@ -9,8 +9,9 @@ import {
createBadRequestResponse,
createInternalServerErrorResponse,
createUnauthorizedResponse,
} from '@/lib/copilot/request/http'
import { taskPubSub } from '@/lib/copilot/tasks'
} from '@/lib/copilot/request-helpers'
import { taskPubSub } from '@/lib/copilot/task-events'
import { captureServerEvent } from '@/lib/posthog/server'
import { assertActiveWorkspaceAccess } from '@/lib/workspaces/permissions/utils'
const logger = createLogger('MothershipChatsAPI')
@@ -38,7 +39,7 @@ export async function GET(request: NextRequest) {
id: copilotChats.id,
title: copilotChats.title,
updatedAt: copilotChats.updatedAt,
activeStreamId: copilotChats.conversationId,
conversationId: copilotChats.conversationId,
lastSeenAt: copilotChats.lastSeenAt,
})
.from(copilotChats)
@@ -95,6 +96,15 @@ export async function POST(request: NextRequest) {
taskPubSub?.publishStatusChanged({ workspaceId, chatId: chat.id, type: 'created' })
captureServerEvent(
userId,
'task_created',
{ workspace_id: workspaceId },
{
groups: { workspace: workspaceId },
}
)
return NextResponse.json({ success: true, id: chat.id })
} catch (error) {
if (error instanceof z.ZodError) {

View File

@@ -7,7 +7,7 @@
* Auth is handled via session cookies (EventSource sends cookies automatically).
*/
import { taskPubSub } from '@/lib/copilot/tasks'
import { taskPubSub } from '@/lib/copilot/task-events'
import { createWorkspaceSSE } from '@/lib/events/sse-endpoint'
export const dynamic = 'force-dynamic'

View File

@@ -2,9 +2,10 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { buildIntegrationToolSchemas } from '@/lib/copilot/chat/payload'
import { generateWorkspaceContext } from '@/lib/copilot/chat/workspace-context'
import { runCopilotLifecycle } from '@/lib/copilot/request/lifecycle/run'
import { createRunSegment } from '@/lib/copilot/async-runs/repository'
import { buildIntegrationToolSchemas } from '@/lib/copilot/chat-payload'
import { orchestrateCopilotStream } from '@/lib/copilot/orchestrator'
import { generateWorkspaceContext } from '@/lib/copilot/workspace-context'
import {
assertActiveWorkspaceAccess,
getUserEntityPermissions,
@@ -71,25 +72,34 @@ export async function POST(req: NextRequest) {
...(userPermission ? { userPermission } : {}),
}
const result = await runCopilotLifecycle(requestPayload, {
const executionId = crypto.randomUUID()
const runId = crypto.randomUUID()
await createRunSegment({
id: runId,
executionId,
chatId: effectiveChatId,
userId,
workspaceId,
streamId: messageId,
}).catch(() => {})
const result = await orchestrateCopilotStream(requestPayload, {
userId,
workspaceId,
chatId: effectiveChatId,
executionId,
runId,
goRoute: '/api/mothership/execute',
autoExecuteTools: true,
interactive: false,
})
if (!result.success) {
logger.error(
messageId
? `Mothership execute failed [messageId:${messageId}]`
: 'Mothership execute failed',
{
error: result.error,
errors: result.errors,
}
)
reqLogger.error('Mothership execute failed', {
error: result.error,
errors: result.errors,
})
return NextResponse.json(
{
error: result.error || 'Mothership execution failed',
@@ -125,12 +135,9 @@ export async function POST(req: NextRequest) {
)
}
logger.error(
messageId ? `Mothership execute error [messageId:${messageId}]` : 'Mothership execute error',
{
error: error instanceof Error ? error.message : 'Unknown error',
}
)
logger.withMetadata({ messageId }).error('Mothership execute error', {
error: error instanceof Error ? error.message : 'Unknown error',
})
return NextResponse.json(
{ error: error instanceof Error ? error.message : 'Internal server error' },

View File

@@ -7,6 +7,7 @@ import { z } from 'zod'
import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import { getSession } from '@/lib/auth'
import { generateRequestId } from '@/lib/core/utils/request'
import { captureServerEvent } from '@/lib/posthog/server'
import { validateCronExpression } from '@/lib/workflows/schedules/utils'
import { authorizeWorkflowByWorkspacePermission } from '@/lib/workflows/utils'
import { verifyWorkspaceMembership } from '@/app/api/workflows/utils'
@@ -298,6 +299,13 @@ export async function DELETE(
request,
})
captureServerEvent(
session.user.id,
'scheduled_task_deleted',
{ workspace_id: workspaceId ?? '' },
workspaceId ? { groups: { workspace: workspaceId } } : undefined
)
return NextResponse.json({ message: 'Schedule deleted successfully' })
} catch (error) {
logger.error(`[${requestId}] Error deleting schedule`, error)

View File

@@ -14,6 +14,7 @@ const {
mockDbReturning,
mockDbUpdate,
mockEnqueue,
mockEnqueueWorkspaceDispatch,
mockStartJob,
mockCompleteJob,
mockMarkJobFailed,
@@ -23,6 +24,7 @@ const {
const mockDbSet = vi.fn().mockReturnValue({ where: mockDbWhere })
const mockDbUpdate = vi.fn().mockReturnValue({ set: mockDbSet })
const mockEnqueue = vi.fn().mockResolvedValue('job-id-1')
const mockEnqueueWorkspaceDispatch = vi.fn().mockResolvedValue('job-id-1')
const mockStartJob = vi.fn().mockResolvedValue(undefined)
const mockCompleteJob = vi.fn().mockResolvedValue(undefined)
const mockMarkJobFailed = vi.fn().mockResolvedValue(undefined)
@@ -40,6 +42,7 @@ const {
mockDbReturning,
mockDbUpdate,
mockEnqueue,
mockEnqueueWorkspaceDispatch,
mockStartJob,
mockCompleteJob,
mockMarkJobFailed,
@@ -72,6 +75,15 @@ vi.mock('@/lib/core/async-jobs', () => ({
shouldExecuteInline: vi.fn().mockReturnValue(false),
}))
vi.mock('@/lib/core/bullmq', () => ({
isBullMQEnabled: vi.fn().mockReturnValue(true),
createBullMQJobData: vi.fn((payload: unknown) => ({ payload })),
}))
vi.mock('@/lib/core/workspace-dispatch', () => ({
enqueueWorkspaceDispatch: mockEnqueueWorkspaceDispatch,
}))
vi.mock('@/lib/workflows/utils', () => ({
getWorkflowById: vi.fn().mockResolvedValue({
id: 'workflow-1',
@@ -234,19 +246,29 @@ describe('Scheduled Workflow Execution API Route', () => {
expect(data).toHaveProperty('executedCount', 2)
})
it('should execute mothership jobs inline', async () => {
it('should queue mothership jobs to BullMQ when available', async () => {
mockDbReturning.mockReturnValueOnce([]).mockReturnValueOnce(SINGLE_JOB)
const response = await GET(createMockRequest())
expect(response.status).toBe(200)
expect(mockExecuteJobInline).toHaveBeenCalledWith(
expect(mockEnqueueWorkspaceDispatch).toHaveBeenCalledWith(
expect.objectContaining({
scheduleId: 'job-1',
cronExpression: '0 * * * *',
failedCount: 0,
workspaceId: 'workspace-1',
lane: 'runtime',
queueName: 'mothership-job-execution',
bullmqJobName: 'mothership-job-execution',
bullmqPayload: {
payload: {
scheduleId: 'job-1',
cronExpression: '0 * * * *',
failedCount: 0,
now: expect.any(String),
},
},
})
)
expect(mockExecuteJobInline).not.toHaveBeenCalled()
})
it('should enqueue preassigned correlation metadata for schedules', async () => {
@@ -255,23 +277,25 @@ describe('Scheduled Workflow Execution API Route', () => {
const response = await GET(createMockRequest())
expect(response.status).toBe(200)
expect(mockEnqueue).toHaveBeenCalledWith(
'schedule-execution',
expect(mockEnqueueWorkspaceDispatch).toHaveBeenCalledWith(
expect.objectContaining({
scheduleId: 'schedule-1',
workflowId: 'workflow-1',
executionId: 'schedule-execution-1',
}),
expect.objectContaining({
metadata: expect.objectContaining({
id: 'schedule-execution-1',
workspaceId: 'workspace-1',
lane: 'runtime',
queueName: 'schedule-execution',
bullmqJobName: 'schedule-execution',
metadata: {
workflowId: 'workflow-1',
correlation: expect.objectContaining({
correlation: {
executionId: 'schedule-execution-1',
requestId: 'test-request-id',
source: 'schedule',
workflowId: 'workflow-1',
}),
}),
scheduleId: 'schedule-1',
triggerType: 'schedule',
scheduledFor: '2025-01-01T00:00:00.000Z',
},
},
})
)
})

View File

@@ -5,7 +5,9 @@ import { type NextRequest, NextResponse } from 'next/server'
import { v4 as uuidv4 } from 'uuid'
import { verifyCronAuth } from '@/lib/auth/internal'
import { getJobQueue, shouldExecuteInline } from '@/lib/core/async-jobs'
import { createBullMQJobData, isBullMQEnabled } from '@/lib/core/bullmq'
import { generateRequestId } from '@/lib/core/utils/request'
import { enqueueWorkspaceDispatch } from '@/lib/core/workspace-dispatch'
import {
executeJobInline,
executeScheduleJob,
@@ -119,13 +121,38 @@ export async function GET(request: NextRequest) {
: null
const resolvedWorkspaceId = resolvedWorkflow?.workspaceId
const jobId = await jobQueue.enqueue('schedule-execution', payload, {
metadata: {
workflowId: schedule.workflowId ?? undefined,
workspaceId: resolvedWorkspaceId ?? undefined,
correlation,
},
})
let jobId: string
if (isBullMQEnabled()) {
if (!resolvedWorkspaceId) {
throw new Error(
`Missing workspace for scheduled workflow ${schedule.workflowId}; refusing to bypass workspace admission`
)
}
jobId = await enqueueWorkspaceDispatch({
id: executionId,
workspaceId: resolvedWorkspaceId,
lane: 'runtime',
queueName: 'schedule-execution',
bullmqJobName: 'schedule-execution',
bullmqPayload: createBullMQJobData(payload, {
workflowId: schedule.workflowId ?? undefined,
correlation,
}),
metadata: {
workflowId: schedule.workflowId ?? undefined,
correlation,
},
})
} else {
jobId = await jobQueue.enqueue('schedule-execution', payload, {
metadata: {
workflowId: schedule.workflowId ?? undefined,
workspaceId: resolvedWorkspaceId ?? undefined,
correlation,
},
})
}
logger.info(
`[${requestId}] Queued schedule execution task ${jobId} for workflow ${schedule.workflowId}`
)
@@ -177,7 +204,7 @@ export async function GET(request: NextRequest) {
}
})
// Mothership jobs execute inline directly.
// Mothership jobs use BullMQ when available, otherwise direct inline execution.
const jobPromises = dueJobs.map(async (job) => {
const queueTime = job.lastQueuedAt ?? queuedAt
const payload = {
@@ -188,7 +215,24 @@ export async function GET(request: NextRequest) {
}
try {
await executeJobInline(payload)
if (isBullMQEnabled()) {
if (!job.sourceWorkspaceId || !job.sourceUserId) {
throw new Error(`Mothership job ${job.id} is missing workspace/user ownership`)
}
await enqueueWorkspaceDispatch({
workspaceId: job.sourceWorkspaceId!,
lane: 'runtime',
queueName: 'mothership-job-execution',
bullmqJobName: 'mothership-job-execution',
bullmqPayload: createBullMQJobData(payload),
metadata: {
userId: job.sourceUserId,
},
})
} else {
await executeJobInline(payload)
}
} catch (error) {
logger.error(`[${requestId}] Job execution failed for ${job.id}`, {
error: error instanceof Error ? error.message : String(error),
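Both branches above enforce one invariant: when BullMQ is enabled, a job must carry workspace ownership so it passes workspace admission, and the route throws rather than quietly bypassing that check. A condensed sketch of the mothership dispatch decision; the extracted function and the MothershipJob type are hypothetical:

import { createBullMQJobData, isBullMQEnabled } from '@/lib/core/bullmq'
import { enqueueWorkspaceDispatch } from '@/lib/core/workspace-dispatch'

interface MothershipJob { id: string; sourceWorkspaceId?: string; sourceUserId?: string }

async function dispatchMothershipJob(
  job: MothershipJob,
  payload: Record<string, unknown>,
  executeInline: (p: Record<string, unknown>) => Promise<void>
) {
  if (!isBullMQEnabled()) {
    await executeInline(payload) // legacy path: direct inline execution
    return
  }
  // Fail loudly instead of enqueueing a job without an owner.
  if (!job.sourceWorkspaceId || !job.sourceUserId) {
    throw new Error(`Mothership job ${job.id} is missing workspace/user ownership`)
  }
  await enqueueWorkspaceDispatch({
    workspaceId: job.sourceWorkspaceId,
    lane: 'runtime',
    queueName: 'mothership-job-execution',
    bullmqJobName: 'mothership-job-execution',
    bullmqPayload: createBullMQJobData(payload),
    metadata: { userId: job.sourceUserId },
  })
}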

View File

@@ -5,6 +5,7 @@ import { and, eq, isNull, or } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { generateRequestId } from '@/lib/core/utils/request'
import { captureServerEvent } from '@/lib/posthog/server'
import { validateCronExpression } from '@/lib/workflows/schedules/utils'
import { authorizeWorkflowByWorkspacePermission } from '@/lib/workflows/utils'
import { verifyWorkspaceMembership } from '@/app/api/workflows/utils'
@@ -277,6 +278,13 @@ export async function POST(req: NextRequest) {
lifecycle,
})
captureServerEvent(
session.user.id,
'scheduled_task_created',
{ workspace_id: workspaceId },
{ groups: { workspace: workspaceId } }
)
return NextResponse.json(
{ schedule: { id, status: 'active', cronExpression, nextRunAt } },
{ status: 201 }

View File

@@ -4,6 +4,7 @@ import { z } from 'zod'
import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { captureServerEvent } from '@/lib/posthog/server'
import { deleteSkill, listSkills, upsertSkills } from '@/lib/workflows/skills/operations'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
@@ -23,6 +24,7 @@ const SkillSchema = z.object({
})
),
workspaceId: z.string().optional(),
source: z.enum(['settings', 'tool_input']).optional(),
})
/** GET - Fetch all skills for a workspace */
@@ -75,7 +77,7 @@ export async function POST(req: NextRequest) {
const body = await req.json()
try {
const { skills, workspaceId } = SkillSchema.parse(body)
const { skills, workspaceId, source } = SkillSchema.parse(body)
if (!workspaceId) {
logger.warn(`[${requestId}] Missing workspaceId in request body`)
@@ -107,6 +109,12 @@ export async function POST(req: NextRequest) {
resourceName: skill.name,
description: `Created/updated skill "${skill.name}"`,
})
captureServerEvent(
userId,
'skill_created',
{ skill_id: skill.id, skill_name: skill.name, workspace_id: workspaceId, source },
{ groups: { workspace: workspaceId } }
)
}
return NextResponse.json({ success: true, data: resultSkills })
@@ -137,6 +145,9 @@ export async function DELETE(request: NextRequest) {
const searchParams = request.nextUrl.searchParams
const skillId = searchParams.get('id')
const workspaceId = searchParams.get('workspaceId')
const sourceParam = searchParams.get('source')
const source =
sourceParam === 'settings' || sourceParam === 'tool_input' ? sourceParam : undefined
try {
const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })
@@ -180,6 +191,13 @@ export async function DELETE(request: NextRequest) {
description: `Deleted skill`,
})
captureServerEvent(
userId,
'skill_deleted',
{ skill_id: skillId, workspace_id: workspaceId, source },
{ groups: { workspace: workspaceId } }
)
logger.info(`[${requestId}] Deleted skill: ${skillId}`)
return NextResponse.json({ success: true })
} catch (error) {

View File

@@ -3,6 +3,7 @@ import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { captureServerEvent } from '@/lib/posthog/server'
import {
deleteTable,
NAME_PATTERN,
@@ -183,6 +184,13 @@ export async function DELETE(request: NextRequest, { params }: TableRouteParams)
await deleteTable(tableId, requestId)
captureServerEvent(
authResult.userId,
'table_deleted',
{ table_id: tableId, workspace_id: table.workspaceId },
{ groups: { workspace: table.workspaceId } }
)
return NextResponse.json({
success: true,
data: {

View File

@@ -3,6 +3,7 @@ import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { captureServerEvent } from '@/lib/posthog/server'
import {
createTable,
getWorkspaceTableLimits,
@@ -141,6 +142,20 @@ export async function POST(request: NextRequest) {
requestId
)
captureServerEvent(
authResult.userId,
'table_created',
{
table_id: table.id,
workspace_id: params.workspaceId,
column_count: params.schema.columns.length,
},
{
groups: { workspace: params.workspaceId },
setOnce: { first_table_created_at: new Date().toISOString() },
}
)
return NextResponse.json({
success: true,
data: {

View File

@@ -3,7 +3,7 @@ import { templates } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { checkInternalApiKey } from '@/lib/copilot/request/http'
import { checkInternalApiKey } from '@/lib/copilot/utils'
import { generateRequestId } from '@/lib/core/utils/request'
import { sanitizeForCopilot } from '@/lib/workflows/sanitization/json-sanitizer'

View File

@@ -0,0 +1,61 @@
import {
CloudFormationClient,
DescribeStackDriftDetectionStatusCommand,
} from '@aws-sdk/client-cloudformation'
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
const logger = createLogger('CloudFormationDescribeStackDriftDetectionStatus')
const DescribeStackDriftDetectionStatusSchema = z.object({
region: z.string().min(1, 'AWS region is required'),
accessKeyId: z.string().min(1, 'AWS access key ID is required'),
secretAccessKey: z.string().min(1, 'AWS secret access key is required'),
stackDriftDetectionId: z.string().min(1, 'Stack drift detection ID is required'),
})
export async function POST(request: NextRequest) {
try {
const auth = await checkInternalAuth(request)
if (!auth.success || !auth.userId) {
return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
}
const body = await request.json()
const validatedData = DescribeStackDriftDetectionStatusSchema.parse(body)
const client = new CloudFormationClient({
region: validatedData.region,
credentials: {
accessKeyId: validatedData.accessKeyId,
secretAccessKey: validatedData.secretAccessKey,
},
})
const command = new DescribeStackDriftDetectionStatusCommand({
StackDriftDetectionId: validatedData.stackDriftDetectionId,
})
const response = await client.send(command)
return NextResponse.json({
success: true,
output: {
stackId: response.StackId ?? '',
stackDriftDetectionId: response.StackDriftDetectionId ?? '',
stackDriftStatus: response.StackDriftStatus,
detectionStatus: response.DetectionStatus ?? 'UNKNOWN',
detectionStatusReason: response.DetectionStatusReason,
driftedStackResourceCount: response.DriftedStackResourceCount,
timestamp: response.Timestamp?.getTime(),
},
})
} catch (error) {
const errorMessage =
error instanceof Error ? error.message : 'Failed to describe stack drift detection status'
logger.error('DescribeStackDriftDetectionStatus failed', { error: errorMessage })
return NextResponse.json({ error: errorMessage }, { status: 500 })
}
}
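All of the new CloudFormation routes share this skeleton: internal auth, a Zod schema requiring region plus static credentials, a short-lived client, one command, and a normalized success/output envelope. A hedged usage sketch for this drift-status route; the URL path is assumed from the handler's name, auth headers are elided, and the detection id is a placeholder:

// Illustrative only; the real route path and auth mechanism are not
// shown in this diff.
const res = await fetch('/api/tools/cloudformation/describe-stack-drift-detection-status', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    region: 'us-east-1',
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    stackDriftDetectionId: 'example-detection-id',
  }),
})
const { output } = await res.json()
// output.detectionStatus reaches 'DETECTION_COMPLETE' once AWS finishes
// the scan kicked off by the detect-stack-drift route below.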

View File

@@ -0,0 +1,78 @@
import {
CloudFormationClient,
DescribeStackEventsCommand,
type StackEvent,
} from '@aws-sdk/client-cloudformation'
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
const logger = createLogger('CloudFormationDescribeStackEvents')
const DescribeStackEventsSchema = z.object({
region: z.string().min(1, 'AWS region is required'),
accessKeyId: z.string().min(1, 'AWS access key ID is required'),
secretAccessKey: z.string().min(1, 'AWS secret access key is required'),
stackName: z.string().min(1, 'Stack name is required'),
limit: z.preprocess(
(v) => (v === '' || v === undefined || v === null ? undefined : v),
z.number({ coerce: true }).int().positive().optional()
),
})
export async function POST(request: NextRequest) {
try {
const auth = await checkInternalAuth(request)
if (!auth.success || !auth.userId) {
return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
}
const body = await request.json()
const validatedData = DescribeStackEventsSchema.parse(body)
const client = new CloudFormationClient({
region: validatedData.region,
credentials: {
accessKeyId: validatedData.accessKeyId,
secretAccessKey: validatedData.secretAccessKey,
},
})
const limit = validatedData.limit ?? 50
const allEvents: StackEvent[] = []
let nextToken: string | undefined
do {
const command = new DescribeStackEventsCommand({
StackName: validatedData.stackName,
...(nextToken && { NextToken: nextToken }),
})
const response = await client.send(command)
allEvents.push(...(response.StackEvents ?? []))
nextToken = allEvents.length >= limit ? undefined : response.NextToken
} while (nextToken)
const events = allEvents.slice(0, limit).map((e) => ({
stackId: e.StackId ?? '',
eventId: e.EventId ?? '',
stackName: e.StackName ?? '',
logicalResourceId: e.LogicalResourceId,
physicalResourceId: e.PhysicalResourceId,
resourceType: e.ResourceType,
resourceStatus: e.ResourceStatus,
resourceStatusReason: e.ResourceStatusReason,
timestamp: e.Timestamp?.getTime(),
}))
return NextResponse.json({
success: true,
output: { events },
})
} catch (error) {
const errorMessage =
error instanceof Error ? error.message : 'Failed to describe CloudFormation stack events'
logger.error('DescribeStackEvents failed', { error: errorMessage })
return NextResponse.json({ error: errorMessage }, { status: 500 })
}
}
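Note the stopping rule here versus the describe-stacks route that follows: this loop stops requesting pages as soon as the accumulated events reach the limit and then slices, while describe-stacks drains every page. The pattern generalizes to a small helper:

// Generic "collect until limit" pagination, matching the loop above:
// stop early once enough items are buffered, even if more pages exist.
async function collectPages<T>(
  fetchPage: (nextToken?: string) => Promise<{ items: T[]; nextToken?: string }>,
  limit: number
): Promise<T[]> {
  const all: T[] = []
  let nextToken: string | undefined
  do {
    const page = await fetchPage(nextToken)
    all.push(...page.items)
    nextToken = all.length >= limit ? undefined : page.nextToken
  } while (nextToken)
  return all.slice(0, limit)
}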

View File

@@ -0,0 +1,86 @@
import {
  CloudFormationClient,
  DescribeStacksCommand,
  type Stack,
} from '@aws-sdk/client-cloudformation'
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'

const logger = createLogger('CloudFormationDescribeStacks')

const DescribeStacksSchema = z.object({
  region: z.string().min(1, 'AWS region is required'),
  accessKeyId: z.string().min(1, 'AWS access key ID is required'),
  secretAccessKey: z.string().min(1, 'AWS secret access key is required'),
  stackName: z.string().optional(),
})

export async function POST(request: NextRequest) {
  try {
    const auth = await checkInternalAuth(request)
    if (!auth.success || !auth.userId) {
      return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
    }

    const body = await request.json()
    const validatedData = DescribeStacksSchema.parse(body)

    const client = new CloudFormationClient({
      region: validatedData.region,
      credentials: {
        accessKeyId: validatedData.accessKeyId,
        secretAccessKey: validatedData.secretAccessKey,
      },
    })

    const allStacks: Stack[] = []
    let nextToken: string | undefined
    do {
      const command = new DescribeStacksCommand({
        ...(validatedData.stackName && { StackName: validatedData.stackName }),
        ...(nextToken && { NextToken: nextToken }),
      })
      const response = await client.send(command)
      allStacks.push(...(response.Stacks ?? []))
      nextToken = response.NextToken
    } while (nextToken)

    const stacks = allStacks.map((s) => ({
      stackName: s.StackName ?? '',
      stackId: s.StackId ?? '',
      stackStatus: s.StackStatus ?? 'UNKNOWN',
      stackStatusReason: s.StackStatusReason,
      creationTime: s.CreationTime?.getTime(),
      lastUpdatedTime: s.LastUpdatedTime?.getTime(),
      description: s.Description,
      enableTerminationProtection: s.EnableTerminationProtection,
      driftInformation: s.DriftInformation
        ? {
            stackDriftStatus: s.DriftInformation.StackDriftStatus,
            lastCheckTimestamp: s.DriftInformation.LastCheckTimestamp?.getTime(),
          }
        : null,
      outputs: (s.Outputs ?? []).map((o) => ({
        outputKey: o.OutputKey ?? '',
        outputValue: o.OutputValue ?? '',
        description: o.Description,
      })),
      tags: (s.Tags ?? []).map((t) => ({
        key: t.Key ?? '',
        value: t.Value ?? '',
      })),
    }))

    return NextResponse.json({
      success: true,
      output: { stacks },
    })
  } catch (error) {
    const errorMessage =
      error instanceof Error ? error.message : 'Failed to describe CloudFormation stacks'
    logger.error('DescribeStacks failed', { error: errorMessage })
    return NextResponse.json({ error: errorMessage }, { status: 500 })
  }
}

@@ -0,0 +1,56 @@
import { CloudFormationClient, DetectStackDriftCommand } from '@aws-sdk/client-cloudformation'
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'

const logger = createLogger('CloudFormationDetectStackDrift')

const DetectStackDriftSchema = z.object({
  region: z.string().min(1, 'AWS region is required'),
  accessKeyId: z.string().min(1, 'AWS access key ID is required'),
  secretAccessKey: z.string().min(1, 'AWS secret access key is required'),
  stackName: z.string().min(1, 'Stack name is required'),
})

export async function POST(request: NextRequest) {
  try {
    const auth = await checkInternalAuth(request)
    if (!auth.success || !auth.userId) {
      return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
    }

    const body = await request.json()
    const validatedData = DetectStackDriftSchema.parse(body)

    const client = new CloudFormationClient({
      region: validatedData.region,
      credentials: {
        accessKeyId: validatedData.accessKeyId,
        secretAccessKey: validatedData.secretAccessKey,
      },
    })

    const command = new DetectStackDriftCommand({
      StackName: validatedData.stackName,
    })
    const response = await client.send(command)

    if (!response.StackDriftDetectionId) {
      throw new Error('No drift detection ID returned')
    }

    return NextResponse.json({
      success: true,
      output: {
        stackDriftDetectionId: response.StackDriftDetectionId,
      },
    })
  } catch (error) {
    const errorMessage =
      error instanceof Error ? error.message : 'Failed to detect CloudFormation stack drift'
    logger.error('DetectStackDrift failed', { error: errorMessage })
    return NextResponse.json({ error: errorMessage }, { status: 500 })
  }
}
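DetectStackDrift only starts an asynchronous detection run and returns its ID; the result has to be polled separately. A minimal polling sketch using the same SDK package (DescribeStackDriftDetectionStatusCommand is a standard @aws-sdk/client-cloudformation command, though no polling route appears in this diff):

import {
  type CloudFormationClient,
  DescribeStackDriftDetectionStatusCommand,
} from '@aws-sdk/client-cloudformation'

// Poll until the detection run finishes, then return the stack-level drift
// status (e.g. 'IN_SYNC' or 'DRIFTED').
async function waitForDriftResult(client: CloudFormationClient, detectionId: string) {
  for (;;) {
    const status = await client.send(
      new DescribeStackDriftDetectionStatusCommand({ StackDriftDetectionId: detectionId })
    )
    if (status.DetectionStatus === 'DETECTION_FAILED') {
      throw new Error(status.DetectionStatusReason ?? 'Drift detection failed')
    }
    if (status.DetectionStatus !== 'DETECTION_IN_PROGRESS') {
      return status.StackDriftStatus
    }
    await new Promise((resolve) => setTimeout(resolve, 3000)) // simple fixed backoff
  }
}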

@@ -0,0 +1,53 @@
import { CloudFormationClient, GetTemplateCommand } from '@aws-sdk/client-cloudformation'
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'

const logger = createLogger('CloudFormationGetTemplate')

const GetTemplateSchema = z.object({
  region: z.string().min(1, 'AWS region is required'),
  accessKeyId: z.string().min(1, 'AWS access key ID is required'),
  secretAccessKey: z.string().min(1, 'AWS secret access key is required'),
  stackName: z.string().min(1, 'Stack name is required'),
})

export async function POST(request: NextRequest) {
  try {
    const auth = await checkInternalAuth(request)
    if (!auth.success || !auth.userId) {
      return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
    }

    const body = await request.json()
    const validatedData = GetTemplateSchema.parse(body)

    const client = new CloudFormationClient({
      region: validatedData.region,
      credentials: {
        accessKeyId: validatedData.accessKeyId,
        secretAccessKey: validatedData.secretAccessKey,
      },
    })

    const command = new GetTemplateCommand({
      StackName: validatedData.stackName,
    })
    const response = await client.send(command)

    return NextResponse.json({
      success: true,
      output: {
        templateBody: response.TemplateBody ?? '',
        stagesAvailable: response.StagesAvailable ?? [],
      },
    })
  } catch (error) {
    const errorMessage =
      error instanceof Error ? error.message : 'Failed to get CloudFormation template'
    logger.error('GetTemplate failed', { error: errorMessage })
    return NextResponse.json({ error: errorMessage }, { status: 500 })
  }
}
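GetTemplate returns TemplateBody verbatim, so it may be JSON or YAML depending on how the stack was created. A tolerant caller-side parse (the YAML fallback is left to a dedicated parser, which this diff does not include):

// Try JSON first; anything that fails is most likely YAML and would need a
// YAML parser (e.g. the 'yaml' npm package — an assumption, not a dependency
// shown in this diff).
function parseTemplateBody(templateBody: string): unknown {
  try {
    return JSON.parse(templateBody)
  } catch {
    return templateBody
  }
}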

@@ -0,0 +1,75 @@
import {
  CloudFormationClient,
  ListStackResourcesCommand,
  type StackResourceSummary,
} from '@aws-sdk/client-cloudformation'
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'

const logger = createLogger('CloudFormationListStackResources')

const ListStackResourcesSchema = z.object({
  region: z.string().min(1, 'AWS region is required'),
  accessKeyId: z.string().min(1, 'AWS access key ID is required'),
  secretAccessKey: z.string().min(1, 'AWS secret access key is required'),
  stackName: z.string().min(1, 'Stack name is required'),
})

export async function POST(request: NextRequest) {
  try {
    const auth = await checkInternalAuth(request)
    if (!auth.success || !auth.userId) {
      return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
    }

    const body = await request.json()
    const validatedData = ListStackResourcesSchema.parse(body)

    const client = new CloudFormationClient({
      region: validatedData.region,
      credentials: {
        accessKeyId: validatedData.accessKeyId,
        secretAccessKey: validatedData.secretAccessKey,
      },
    })

    const allSummaries: StackResourceSummary[] = []
    let nextToken: string | undefined
    do {
      const command = new ListStackResourcesCommand({
        StackName: validatedData.stackName,
        ...(nextToken && { NextToken: nextToken }),
      })
      const response = await client.send(command)
      allSummaries.push(...(response.StackResourceSummaries ?? []))
      nextToken = response.NextToken
    } while (nextToken)

    const resources = allSummaries.map((r) => ({
      logicalResourceId: r.LogicalResourceId ?? '',
      physicalResourceId: r.PhysicalResourceId,
      resourceType: r.ResourceType ?? '',
      resourceStatus: r.ResourceStatus ?? 'UNKNOWN',
      resourceStatusReason: r.ResourceStatusReason,
      lastUpdatedTimestamp: r.LastUpdatedTimestamp?.getTime(),
      driftInformation: r.DriftInformation
        ? {
            stackResourceDriftStatus: r.DriftInformation.StackResourceDriftStatus,
            lastCheckTimestamp: r.DriftInformation.LastCheckTimestamp?.getTime(),
          }
        : null,
    }))

    return NextResponse.json({
      success: true,
      output: { resources },
    })
  } catch (error) {
    const errorMessage =
      error instanceof Error ? error.message : 'Failed to list CloudFormation stack resources'
    logger.error('ListStackResources failed', { error: errorMessage })
    return NextResponse.json({ error: errorMessage }, { status: 500 })
  }
}

@@ -0,0 +1,61 @@
import { CloudFormationClient, ValidateTemplateCommand } from '@aws-sdk/client-cloudformation'
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'

const logger = createLogger('CloudFormationValidateTemplate')

const ValidateTemplateSchema = z.object({
  region: z.string().min(1, 'AWS region is required'),
  accessKeyId: z.string().min(1, 'AWS access key ID is required'),
  secretAccessKey: z.string().min(1, 'AWS secret access key is required'),
  templateBody: z.string().min(1, 'Template body is required'),
})

export async function POST(request: NextRequest) {
  try {
    const auth = await checkInternalAuth(request)
    if (!auth.success || !auth.userId) {
      return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
    }

    const body = await request.json()
    const validatedData = ValidateTemplateSchema.parse(body)

    const client = new CloudFormationClient({
      region: validatedData.region,
      credentials: {
        accessKeyId: validatedData.accessKeyId,
        secretAccessKey: validatedData.secretAccessKey,
      },
    })

    const command = new ValidateTemplateCommand({
      TemplateBody: validatedData.templateBody,
    })
    const response = await client.send(command)

    return NextResponse.json({
      success: true,
      output: {
        description: response.Description,
        parameters: (response.Parameters ?? []).map((p) => ({
          parameterKey: p.ParameterKey,
          defaultValue: p.DefaultValue,
          noEcho: p.NoEcho,
          description: p.Description,
        })),
        capabilities: response.Capabilities ?? [],
        capabilitiesReason: response.CapabilitiesReason,
        declaredTransforms: response.DeclaredTransforms ?? [],
      },
    })
  } catch (error) {
    const errorMessage =
      error instanceof Error ? error.message : 'Failed to validate CloudFormation template'
    logger.error('ValidateTemplate failed', { error: errorMessage })
    return NextResponse.json({ error: errorMessage }, { status: 500 })
  }
}
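The capabilities array tells the caller which acknowledgement flags a subsequent CreateStack/UpdateStack call would need. A small sketch of how a caller might gate deployment on it (a hypothetical helper, not part of this diff):

// True when the validated template creates IAM resources, which AWS requires
// the deployer to acknowledge explicitly.
function requiresIamAcknowledgement(capabilities: string[]): boolean {
  return capabilities.includes('CAPABILITY_IAM') || capabilities.includes('CAPABILITY_NAMED_IAM')
}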

@@ -0,0 +1,96 @@
import {
  type AlarmType,
  CloudWatchClient,
  DescribeAlarmsCommand,
  type StateValue,
} from '@aws-sdk/client-cloudwatch'
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'

const logger = createLogger('CloudWatchDescribeAlarms')

const DescribeAlarmsSchema = z.object({
  region: z.string().min(1, 'AWS region is required'),
  accessKeyId: z.string().min(1, 'AWS access key ID is required'),
  secretAccessKey: z.string().min(1, 'AWS secret access key is required'),
  alarmNamePrefix: z.string().optional(),
  stateValue: z.preprocess(
    (v) => (v === '' ? undefined : v),
    z.enum(['OK', 'ALARM', 'INSUFFICIENT_DATA']).optional()
  ),
  alarmType: z.preprocess(
    (v) => (v === '' ? undefined : v),
    z.enum(['MetricAlarm', 'CompositeAlarm']).optional()
  ),
  limit: z.preprocess(
    (v) => (v === '' || v === undefined || v === null ? undefined : v),
    z.number({ coerce: true }).int().positive().optional()
  ),
})

export async function POST(request: NextRequest) {
  try {
    const auth = await checkInternalAuth(request)
    if (!auth.success || !auth.userId) {
      return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
    }

    const body = await request.json()
    const validatedData = DescribeAlarmsSchema.parse(body)

    const client = new CloudWatchClient({
      region: validatedData.region,
      credentials: {
        accessKeyId: validatedData.accessKeyId,
        secretAccessKey: validatedData.secretAccessKey,
      },
    })

    const command = new DescribeAlarmsCommand({
      ...(validatedData.alarmNamePrefix && { AlarmNamePrefix: validatedData.alarmNamePrefix }),
      ...(validatedData.stateValue && { StateValue: validatedData.stateValue as StateValue }),
      ...(validatedData.alarmType && { AlarmTypes: [validatedData.alarmType as AlarmType] }),
      ...(validatedData.limit !== undefined && { MaxRecords: validatedData.limit }),
    })
    const response = await client.send(command)

    const metricAlarms = (response.MetricAlarms ?? []).map((a) => ({
      alarmName: a.AlarmName ?? '',
      alarmArn: a.AlarmArn ?? '',
      stateValue: a.StateValue ?? 'UNKNOWN',
      stateReason: a.StateReason ?? '',
      metricName: a.MetricName,
      namespace: a.Namespace,
      comparisonOperator: a.ComparisonOperator,
      threshold: a.Threshold,
      evaluationPeriods: a.EvaluationPeriods,
      stateUpdatedTimestamp: a.StateUpdatedTimestamp?.getTime(),
    }))

    // Composite alarms carry no metric fields; keep the keys (as undefined) so
    // both alarm kinds share one output shape.
    const compositeAlarms = (response.CompositeAlarms ?? []).map((a) => ({
      alarmName: a.AlarmName ?? '',
      alarmArn: a.AlarmArn ?? '',
      stateValue: a.StateValue ?? 'UNKNOWN',
      stateReason: a.StateReason ?? '',
      metricName: undefined,
      namespace: undefined,
      comparisonOperator: undefined,
      threshold: undefined,
      evaluationPeriods: undefined,
      stateUpdatedTimestamp: a.StateUpdatedTimestamp?.getTime(),
    }))

    return NextResponse.json({
      success: true,
      output: { alarms: [...metricAlarms, ...compositeAlarms] },
    })
  } catch (error) {
    const errorMessage =
      error instanceof Error ? error.message : 'Failed to describe CloudWatch alarms'
    logger.error('DescribeAlarms failed', { error: errorMessage })
    return NextResponse.json({ error: errorMessage }, { status: 500 })
  }
}
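The z.preprocess wrappers normalize the empty strings that optional form fields submit, so they validate as absent rather than failing the enum. In isolation:

import { z } from 'zod'

// '' becomes undefined before the enum runs, so both parses succeed.
const stateValue = z.preprocess(
  (v) => (v === '' ? undefined : v),
  z.enum(['OK', 'ALARM', 'INSUFFICIENT_DATA']).optional()
)
stateValue.parse('') // => undefined
stateValue.parse('ALARM') // => 'ALARM'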

@@ -0,0 +1,62 @@
import { DescribeLogGroupsCommand } from '@aws-sdk/client-cloudwatch-logs'
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { createCloudWatchLogsClient } from '@/app/api/tools/cloudwatch/utils'

const logger = createLogger('CloudWatchDescribeLogGroups')

const DescribeLogGroupsSchema = z.object({
  region: z.string().min(1, 'AWS region is required'),
  accessKeyId: z.string().min(1, 'AWS access key ID is required'),
  secretAccessKey: z.string().min(1, 'AWS secret access key is required'),
  prefix: z.string().optional(),
  limit: z.preprocess(
    (v) => (v === '' || v === undefined || v === null ? undefined : v),
    z.number({ coerce: true }).int().positive().optional()
  ),
})

export async function POST(request: NextRequest) {
  try {
    const auth = await checkSessionOrInternalAuth(request)
    if (!auth.success || !auth.userId) {
      return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
    }

    const body = await request.json()
    const validatedData = DescribeLogGroupsSchema.parse(body)

    const client = createCloudWatchLogsClient({
      region: validatedData.region,
      accessKeyId: validatedData.accessKeyId,
      secretAccessKey: validatedData.secretAccessKey,
    })

    const command = new DescribeLogGroupsCommand({
      ...(validatedData.prefix && { logGroupNamePrefix: validatedData.prefix }),
      ...(validatedData.limit !== undefined && { limit: validatedData.limit }),
    })
    const response = await client.send(command)

    const logGroups = (response.logGroups ?? []).map((lg) => ({
      logGroupName: lg.logGroupName ?? '',
      arn: lg.arn ?? '',
      storedBytes: lg.storedBytes ?? 0,
      retentionInDays: lg.retentionInDays,
      creationTime: lg.creationTime,
    }))

    return NextResponse.json({
      success: true,
      output: { logGroups },
    })
  } catch (error) {
    const errorMessage =
      error instanceof Error ? error.message : 'Failed to describe CloudWatch log groups'
    logger.error('DescribeLogGroups failed', { error: errorMessage })
    return NextResponse.json({ error: errorMessage }, { status: 500 })
  }
}

@@ -0,0 +1,52 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { createCloudWatchLogsClient, describeLogStreams } from '@/app/api/tools/cloudwatch/utils'

const logger = createLogger('CloudWatchDescribeLogStreams')

const DescribeLogStreamsSchema = z.object({
  region: z.string().min(1, 'AWS region is required'),
  accessKeyId: z.string().min(1, 'AWS access key ID is required'),
  secretAccessKey: z.string().min(1, 'AWS secret access key is required'),
  logGroupName: z.string().min(1, 'Log group name is required'),
  prefix: z.string().optional(),
  limit: z.preprocess(
    (v) => (v === '' || v === undefined || v === null ? undefined : v),
    z.number({ coerce: true }).int().positive().optional()
  ),
})

export async function POST(request: NextRequest) {
  try {
    const auth = await checkSessionOrInternalAuth(request)
    if (!auth.success || !auth.userId) {
      return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
    }

    const body = await request.json()
    const validatedData = DescribeLogStreamsSchema.parse(body)

    const client = createCloudWatchLogsClient({
      region: validatedData.region,
      accessKeyId: validatedData.accessKeyId,
      secretAccessKey: validatedData.secretAccessKey,
    })

    const result = await describeLogStreams(client, validatedData.logGroupName, {
      prefix: validatedData.prefix,
      limit: validatedData.limit,
    })

    return NextResponse.json({
      success: true,
      output: { logStreams: result.logStreams },
    })
  } catch (error) {
    const errorMessage =
      error instanceof Error ? error.message : 'Failed to describe CloudWatch log streams'
    logger.error('DescribeLogStreams failed', { error: errorMessage })
    return NextResponse.json({ error: errorMessage }, { status: 500 })
  }
}

@@ -0,0 +1,60 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { createCloudWatchLogsClient, getLogEvents } from '@/app/api/tools/cloudwatch/utils'

const logger = createLogger('CloudWatchGetLogEvents')

const GetLogEventsSchema = z.object({
  region: z.string().min(1, 'AWS region is required'),
  accessKeyId: z.string().min(1, 'AWS access key ID is required'),
  secretAccessKey: z.string().min(1, 'AWS secret access key is required'),
  logGroupName: z.string().min(1, 'Log group name is required'),
  logStreamName: z.string().min(1, 'Log stream name is required'),
  startTime: z.number({ coerce: true }).int().optional(),
  endTime: z.number({ coerce: true }).int().optional(),
  limit: z.preprocess(
    (v) => (v === '' || v === undefined || v === null ? undefined : v),
    z.number({ coerce: true }).int().positive().optional()
  ),
})

export async function POST(request: NextRequest) {
  try {
    const auth = await checkInternalAuth(request)
    if (!auth.success || !auth.userId) {
      return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
    }

    const body = await request.json()
    const validatedData = GetLogEventsSchema.parse(body)

    const client = createCloudWatchLogsClient({
      region: validatedData.region,
      accessKeyId: validatedData.accessKeyId,
      secretAccessKey: validatedData.secretAccessKey,
    })

    const result = await getLogEvents(
      client,
      validatedData.logGroupName,
      validatedData.logStreamName,
      {
        startTime: validatedData.startTime,
        endTime: validatedData.endTime,
        limit: validatedData.limit,
      }
    )

    return NextResponse.json({
      success: true,
      output: { events: result.events },
    })
  } catch (error) {
    const errorMessage =
      error instanceof Error ? error.message : 'Failed to get CloudWatch log events'
    logger.error('GetLogEvents failed', { error: errorMessage })
    return NextResponse.json({ error: errorMessage }, { status: 500 })
  }
}
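Note the unit asymmetry with the metric-statistics route below, which treats its timestamps as epoch seconds and multiplies by 1000. Assuming the getLogEvents helper (utils.ts, not shown in this diff) forwards startTime/endTime unchanged, the CloudWatch Logs GetLogEvents API expects epoch milliseconds, so a caller would build a window like:

// Hypothetical caller-side window, assuming millisecond pass-through.
const endTime = Date.now()
const startTime = endTime - 15 * 60 * 1000 // last 15 minutes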

@@ -0,0 +1,97 @@
import { CloudWatchClient, GetMetricStatisticsCommand } from '@aws-sdk/client-cloudwatch'
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'

const logger = createLogger('CloudWatchGetMetricStatistics')

const GetMetricStatisticsSchema = z.object({
  region: z.string().min(1, 'AWS region is required'),
  accessKeyId: z.string().min(1, 'AWS access key ID is required'),
  secretAccessKey: z.string().min(1, 'AWS secret access key is required'),
  namespace: z.string().min(1, 'Namespace is required'),
  metricName: z.string().min(1, 'Metric name is required'),
  startTime: z.number({ coerce: true }).int(),
  endTime: z.number({ coerce: true }).int(),
  period: z.number({ coerce: true }).int().min(1),
  statistics: z.array(z.enum(['Average', 'Sum', 'Minimum', 'Maximum', 'SampleCount'])).min(1),
  dimensions: z.string().optional(),
})

export async function POST(request: NextRequest) {
  try {
    const auth = await checkInternalAuth(request)
    if (!auth.success || !auth.userId) {
      return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
    }

    const body = await request.json()
    const validatedData = GetMetricStatisticsSchema.parse(body)

    const client = new CloudWatchClient({
      region: validatedData.region,
      credentials: {
        accessKeyId: validatedData.accessKeyId,
        secretAccessKey: validatedData.secretAccessKey,
      },
    })

    // dimensions accepts two JSON shapes: an array of { name, value } objects
    // or a flat { name: value } map. Both normalize to the AWS Dimension list.
    let parsedDimensions: { Name: string; Value: string }[] | undefined
    if (validatedData.dimensions) {
      try {
        const dims = JSON.parse(validatedData.dimensions)
        if (Array.isArray(dims)) {
          parsedDimensions = dims.map((d: Record<string, string>) => ({
            Name: d.name,
            Value: d.value,
          }))
        } else if (typeof dims === 'object') {
          parsedDimensions = Object.entries(dims).map(([name, value]) => ({
            Name: name,
            Value: String(value),
          }))
        }
      } catch {
        throw new Error('Invalid dimensions JSON')
      }
    }

    const command = new GetMetricStatisticsCommand({
      Namespace: validatedData.namespace,
      MetricName: validatedData.metricName,
      // startTime/endTime arrive as epoch seconds and are converted to Dates here.
      StartTime: new Date(validatedData.startTime * 1000),
      EndTime: new Date(validatedData.endTime * 1000),
      Period: validatedData.period,
      Statistics: validatedData.statistics,
      ...(parsedDimensions && { Dimensions: parsedDimensions }),
    })
    const response = await client.send(command)

    // CloudWatch returns datapoints unordered; sort them chronologically.
    const datapoints = (response.Datapoints ?? [])
      .sort((a, b) => (a.Timestamp?.getTime() ?? 0) - (b.Timestamp?.getTime() ?? 0))
      .map((dp) => ({
        timestamp: dp.Timestamp ? dp.Timestamp.getTime() : 0,
        average: dp.Average,
        sum: dp.Sum,
        minimum: dp.Minimum,
        maximum: dp.Maximum,
        sampleCount: dp.SampleCount,
        unit: dp.Unit,
      }))

    return NextResponse.json({
      success: true,
      output: {
        label: response.Label ?? validatedData.metricName,
        datapoints,
      },
    })
  } catch (error) {
    const errorMessage =
      error instanceof Error ? error.message : 'Failed to get CloudWatch metric statistics'
    logger.error('GetMetricStatistics failed', { error: errorMessage })
    return NextResponse.json({ error: errorMessage }, { status: 500 })
  }
}
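For illustration, both dimension payload shapes that this route accepts (the example values are assumptions, not data from this diff):

// Either string normalizes to [{ Name: 'InstanceId', Value: 'i-0abc123' }].
const asArray = '[{ "name": "InstanceId", "value": "i-0abc123" }]'
const asMap = '{ "InstanceId": "i-0abc123" }'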

Some files were not shown because too many files have changed in this diff.