Compare commits


30 Commits

Author SHA1 Message Date
Vikhyath Mondreti
b09f683072 v0.5.63: ui and performance improvements, more google tools 2026-01-18 15:22:42 -08:00
Vikhyath Mondreti
a8bb0db660 v0.5.62: webhook bug fixes, seeding default subblock values, block selection fixes 2026-01-16 20:27:06 -08:00
Waleed
af82820a28 v0.5.61: webhook improvements, workflow controls, react query for deployment status, chat fixes, reducto and pulse OCR, linear fixes 2026-01-16 18:06:23 -08:00
Waleed
4372841797 v0.5.60: invitation flow improvements, chat fixes, a2a improvements, additional copilot actions 2026-01-15 00:02:18 -08:00
Waleed
5e8c843241 v0.5.59: a2a support, documentation 2026-01-13 13:21:21 -08:00
Waleed
7bf3d73ee6 v0.5.58: export folders, new tools, permissions groups enhancements 2026-01-13 00:56:59 -08:00
Vikhyath Mondreti
7ffc11a738 v0.5.57: subagents, context menu improvements, bug fixes 2026-01-11 11:38:40 -08:00
Waleed
be578e2ed7 v0.5.56: batch operations, access control and permission groups, billing fixes 2026-01-10 00:31:34 -08:00
Waleed
f415e5edc4 v0.5.55: polling groups, bedrock provider, devcontainer fixes, workflow preview enhancements 2026-01-08 23:36:56 -08:00
Waleed
13a6e6c3fa v0.5.54: seo, model blacklist, helm chart updates, fireflies integration, autoconnect improvements, billing fixes 2026-01-07 16:09:45 -08:00
Waleed
f5ab7f21ae v0.5.53: hotkey improvements, added redis fallback, fixes for workflow tool 2026-01-06 23:34:52 -08:00
Waleed
bfb6fffe38 v0.5.52: new port-based router block, combobox expression and variable support 2026-01-06 16:14:10 -08:00
Waleed
4fbec0a43f v0.5.51: triggers, kb, condition block improvements, supabase and grain integration updates 2026-01-06 14:26:46 -08:00
Waleed
585f5e365b v0.5.50: import improvements, ui upgrades, kb styling and performance improvements 2026-01-05 00:35:55 -08:00
Waleed
3792bdd252 v0.5.49: hitl improvements, new email styles, imap trigger, logs context menu (#2672)
* feat(logs-context-menu): consolidated logs utils and types, added logs record context menu (#2659)

* feat(email): welcome email; improvement(emails): ui/ux (#2658)

* feat(email): welcome email; improvement(emails): ui/ux

* improvement(emails): links, accounts, preview

* refactor(emails): file structure and wrapper components

* added envvar for personal emails sent, added isHosted gate

* fixed failing tests, added env mock

* fix: removed comment

---------

Co-authored-by: waleed <walif6@gmail.com>

* fix(logging): hitl + trigger dev crash protection (#2664)

* hitl gaps

* deal with trigger worker crashes

* cleanup import structure

* feat(imap): added support for imap trigger (#2663)

* feat(tools): added support for imap trigger

* feat(imap): added parity, tested

* ack PR comments

* final cleanup

* feat(i18n): update translations (#2665)

Co-authored-by: waleedlatif1 <waleedlatif1@users.noreply.github.com>

* fix(grain): updated grain trigger to auto-establish trigger (#2666)

Co-authored-by: aadamgough <adam@sim.ai>

* feat(admin): routes to manage deployments (#2667)

* feat(admin): routes to manage deployments

* fix naming of deployed by

* feat(time-picker): added timepicker emcn component, added to playground, added searchable prop for dropdown, added more timezones for schedule, updated license and notice date (#2668)

* feat(time-picker): added timepicker emcn component, added to playground, added searchable prop for dropdown, added more timezones for schedule, updated license and notice date

* removed unused params, cleaned up redundant utils

* improvement(invite): aligned styling (#2669)

* improvement(invite): aligned with rest of app

* fix(invite): error handling

* fix: addressed comments

---------

Co-authored-by: Emir Karabeg <78010029+emir-karabeg@users.noreply.github.com>
Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: waleedlatif1 <waleedlatif1@users.noreply.github.com>
Co-authored-by: Adam Gough <77861281+aadamgough@users.noreply.github.com>
Co-authored-by: aadamgough <adam@sim.ai>
2026-01-03 13:19:18 -08:00
Waleed
eb5d1f3e5b v0.5.48: copy-paste workflow blocks, docs updates, mcp tool fixes 2025-12-31 18:00:04 -08:00
Waleed
54ab82c8dd v0.5.47: deploy workflow as mcp, kb chunks tokenizer, UI improvements, jira service management tools 2025-12-30 23:18:58 -08:00
Waleed
f895bf469b v0.5.46: build improvements, greptile, light mode improvements 2025-12-29 02:17:52 -08:00
Waleed
dd3209af06 v0.5.45: light mode fixes, realtime usage indicator, docker build improvements 2025-12-27 19:57:42 -08:00
Waleed
b6ba3b50a7 v0.5.44: keyboard shortcuts, autolayout, light mode, byok, testing improvements 2025-12-26 21:25:19 -08:00
Waleed
b304233062 v0.5.43: export logs, circleback, grain, vertex, code hygiene, schedule improvements 2025-12-23 19:19:18 -08:00
Vikhyath Mondreti
57e4b49bd6 v0.5.42: fix memory migration 2025-12-23 01:24:54 -08:00
Vikhyath Mondreti
e12dd204ed v0.5.41: memory fixes, copilot improvements, knowledgebase improvements, LLM providers standardization 2025-12-23 00:15:18 -08:00
Vikhyath Mondreti
3d9d9cbc54 v0.5.40: supabase ops to allow non-public schemas, jira uuid 2025-12-21 22:28:05 -08:00
Waleed
0f4ec962ad v0.5.39: notion, workflow variables fixes 2025-12-20 20:44:00 -08:00
Waleed
4827866f9a v0.5.38: snap to grid, copilot ux improvements, billing line items 2025-12-20 17:24:38 -08:00
Waleed
3e697d9ed9 v0.5.37: redaction utils consolidation, logs updates, autoconnect improvements, additional kb tag types 2025-12-19 22:31:55 -08:00
Martin Yankov
4431a1a484 fix(helm): add custom egress rules to realtime network policy (#2481)
The realtime service network policy was missing the custom egress rules section
that allows configuration of additional egress rules via values.yaml. This caused
the realtime pods to be unable to connect to external databases (e.g., PostgreSQL
on port 5432) when using external database configurations.

The app network policy already had this section, but the realtime network policy
was missing it, creating an inconsistency and preventing the realtime service
from accessing external databases configured via networkPolicy.egress values.

This fix adds the same custom egress rules template section to the realtime
network policy, matching the app network policy behavior and allowing users to
configure database connectivity via values.yaml.
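
For illustration, the egress rules described above are supplied under networkPolicy.egress in values.yaml; a minimal sketch of what such a rule might look like (the key layout is assumed from the commit description and the standard NetworkPolicy egress shape, not copied from the chart):

networkPolicy:
  egress:
    # Hypothetical rule letting pods reach an external PostgreSQL instance
    - to:
        - ipBlock:
            cidr: 10.0.0.0/16
      ports:
        - protocol: TCP
          port: 5432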
2025-12-19 18:59:08 -08:00
Waleed
4d1a9a3f22 v0.5.36: hitl improvements, opengraph, slack fixes, one-click unsubscribe, auth checks, new db indexes 2025-12-19 01:27:49 -08:00
Vikhyath Mondreti
eb07a080fb v0.5.35: helm updates, copilot improvements, 404 for docs, salesforce fixes, subflow resize clamping 2025-12-18 16:23:19 -08:00
116 changed files with 2293 additions and 19170 deletions

View File

@@ -86,112 +86,27 @@ export async function GET(request: NextRequest) {
)
.limit(candidateLimit)
const knownLocales = ['en', 'es', 'fr', 'de', 'ja', 'zh']
const seenIds = new Set<string>()
const mergedResults = []
const vectorRankMap = new Map<string, number>()
vectorResults.forEach((r, idx) => vectorRankMap.set(r.chunkId, idx + 1))
const keywordRankMap = new Map<string, number>()
keywordResults.forEach((r, idx) => keywordRankMap.set(r.chunkId, idx + 1))
const allChunkIds = new Set([
...vectorResults.map((r) => r.chunkId),
...keywordResults.map((r) => r.chunkId),
])
const k = 60
type ResultWithRRF = (typeof vectorResults)[0] & { rrfScore: number }
const scoredResults: ResultWithRRF[] = []
for (const chunkId of allChunkIds) {
const vectorRank = vectorRankMap.get(chunkId) ?? Number.POSITIVE_INFINITY
const keywordRank = keywordRankMap.get(chunkId) ?? Number.POSITIVE_INFINITY
const rrfScore = 1 / (k + vectorRank) + 1 / (k + keywordRank)
const result =
vectorResults.find((r) => r.chunkId === chunkId) ||
keywordResults.find((r) => r.chunkId === chunkId)
if (result) {
scoredResults.push({ ...result, rrfScore })
for (let i = 0; i < Math.max(vectorResults.length, keywordResults.length); i++) {
if (i < vectorResults.length && !seenIds.has(vectorResults[i].chunkId)) {
mergedResults.push(vectorResults[i])
seenIds.add(vectorResults[i].chunkId)
}
if (i < keywordResults.length && !seenIds.has(keywordResults[i].chunkId)) {
mergedResults.push(keywordResults[i])
seenIds.add(keywordResults[i].chunkId)
}
}
scoredResults.sort((a, b) => b.rrfScore - a.rrfScore)
const localeFilteredResults = scoredResults.filter((result) => {
const firstPart = result.sourceDocument.split('/')[0]
if (knownLocales.includes(firstPart)) {
return firstPart === locale
}
return locale === 'en'
})
const queryLower = query.toLowerCase()
const getTitleBoost = (result: ResultWithRRF): number => {
const fileName = result.sourceDocument
.replace('.mdx', '')
.split('/')
.pop()
?.toLowerCase()
?.replace(/_/g, ' ')
if (fileName === queryLower) return 0.01
if (fileName?.includes(queryLower)) return 0.005
return 0
}
localeFilteredResults.sort((a, b) => {
return b.rrfScore + getTitleBoost(b) - (a.rrfScore + getTitleBoost(a))
})
const pageMap = new Map<string, ResultWithRRF>()
for (const result of localeFilteredResults) {
const pageKey = result.sourceDocument
const existing = pageMap.get(pageKey)
if (!existing || result.rrfScore > existing.rrfScore) {
pageMap.set(pageKey, result)
}
}
const deduplicatedResults = Array.from(pageMap.values())
.sort((a, b) => b.rrfScore + getTitleBoost(b) - (a.rrfScore + getTitleBoost(a)))
.slice(0, limit)
const searchResults = deduplicatedResults.map((result) => {
const filteredResults = mergedResults.slice(0, limit)
const searchResults = filteredResults.map((result) => {
const title = result.headerText || result.sourceDocument.replace('.mdx', '')
const pathParts = result.sourceDocument
.replace('.mdx', '')
.split('/')
.filter((part) => part !== 'index' && !knownLocales.includes(part))
.map((part) => {
return part
.replace(/_/g, ' ')
.split(' ')
.map((word) => {
const acronyms = [
'api',
'mcp',
'sdk',
'url',
'http',
'json',
'xml',
'html',
'css',
'ai',
]
if (acronyms.includes(word.toLowerCase())) {
return word.toUpperCase()
}
return word.charAt(0).toUpperCase() + word.slice(1)
})
.join(' ')
})
.map((part) => part.charAt(0).toUpperCase() + part.slice(1))
return {
id: result.chunkId,

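For context, the removed branch of this hunk ranks chunks with reciprocal rank fusion (RRF) before applying the title boost and per-page dedup, while the replacement simply interleaves the vector and keyword result lists. A minimal standalone sketch of the RRF step, assuming only that each result carries a chunkId (names here are illustrative, not the repo's API):

interface SearchResult {
  chunkId: string
}

// Fuse two ranked lists: each list contributes 1 / (k + rank) per item,
// so items ranked highly by both lists float to the top.
function rrfMerge<T extends SearchResult>(
  vectorResults: T[],
  keywordResults: T[],
  k = 60 // same dampening constant as the removed code
): Array<T & { rrfScore: number }> {
  const rankOf = (list: T[]) =>
    new Map(list.map((r, idx) => [r.chunkId, idx + 1] as [string, number]))
  const vectorRank = rankOf(vectorResults)
  const keywordRank = rankOf(keywordResults)
  const ids = new Set([...vectorRank.keys(), ...keywordRank.keys()])
  const scored: Array<T & { rrfScore: number }> = []
  for (const id of ids) {
    // An item missing from a list gets rank Infinity, so 1 / (k + rank) -> 0.
    const vr = vectorRank.get(id) ?? Number.POSITIVE_INFINITY
    const kr = keywordRank.get(id) ?? Number.POSITIVE_INFINITY
    const result =
      vectorResults.find((r) => r.chunkId === id) ??
      keywordResults.find((r) => r.chunkId === id)!
    scored.push({ ...result, rrfScore: 1 / (k + vr) + 1 / (k + kr) })
  }
  return scored.sort((a, b) => b.rrfScore - a.rrfScore)
}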
View File

@@ -1739,12 +1739,12 @@ export function BrowserUseIcon(props: SVGProps<SVGSVGElement>) {
{...props}
version='1.0'
xmlns='http://www.w3.org/2000/svg'
width='28'
height='28'
width='150pt'
height='150pt'
viewBox='0 0 150 150'
preserveAspectRatio='xMidYMid meet'
>
<g transform='translate(0,150) scale(0.05,-0.05)' fill='currentColor' stroke='none'>
<g transform='translate(0,150) scale(0.05,-0.05)' fill='#000000' stroke='none'>
<path
d='M786 2713 c-184 -61 -353 -217 -439 -405 -76 -165 -65 -539 19 -666
l57 -85 -48 -124 c-203 -517 -79 -930 346 -1155 159 -85 441 -71 585 28 l111

View File

@@ -7,7 +7,7 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="browser_use"
color="#181C1E"
color="#E0E0E0"
/>
{/* MANUAL-CONTENT-START:intro */}

View File

@@ -52,15 +52,6 @@ Read content from a Google Slides presentation
| --------- | ---- | ----------- |
| `slides` | json | Array of slides with their content |
| `metadata` | json | Presentation metadata including ID, title, and URL |
| ↳ `presentationId` | string | The presentation ID |
| ↳ `title` | string | The presentation title |
| ↳ `pageSize` | object | Presentation page size |
| ↳ `width` | json | Page width as a Dimension object |
| ↳ `height` | json | Page height as a Dimension object |
| ↳ `width` | json | Page width as a Dimension object |
| ↳ `height` | json | Page height as a Dimension object |
| ↳ `mimeType` | string | The mime type of the presentation |
| ↳ `url` | string | URL to open the presentation |
### `google_slides_write`
@@ -80,10 +71,6 @@ Write or update content in a Google Slides presentation
| --------- | ---- | ----------- |
| `updatedContent` | boolean | Indicates if presentation content was updated successfully |
| `metadata` | json | Updated presentation metadata including ID, title, and URL |
| ↳ `presentationId` | string | The presentation ID |
| ↳ `title` | string | The presentation title |
| ↳ `mimeType` | string | The mime type of the presentation |
| ↳ `url` | string | URL to open the presentation |
### `google_slides_create`
@@ -103,10 +90,6 @@ Create a new Google Slides presentation
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `metadata` | json | Created presentation metadata including ID, title, and URL |
| ↳ `presentationId` | string | The presentation ID |
| ↳ `title` | string | The presentation title |
| ↳ `mimeType` | string | The mime type of the presentation |
| ↳ `url` | string | URL to open the presentation |
### `google_slides_replace_all_text`
@@ -128,10 +111,6 @@ Find and replace all occurrences of text throughout a Google Slides presentation
| --------- | ---- | ----------- |
| `occurrencesChanged` | number | Number of text occurrences that were replaced |
| `metadata` | json | Operation metadata including presentation ID and URL |
| ↳ `presentationId` | string | The presentation ID |
| ↳ `findText` | string | The text that was searched for |
| ↳ `replaceText` | string | The text that replaced the matches |
| ↳ `url` | string | URL to open the presentation |
### `google_slides_add_slide`
@@ -152,10 +131,6 @@ Add a new slide to a Google Slides presentation with a specified layout
| --------- | ---- | ----------- |
| `slideId` | string | The object ID of the newly created slide |
| `metadata` | json | Operation metadata including presentation ID, layout, and URL |
| ↳ `presentationId` | string | The presentation ID |
| ↳ `layout` | string | The layout used for the new slide |
| ↳ `insertionIndex` | number | The zero-based index where the slide was inserted |
| ↳ `url` | string | URL to open the presentation |
### `google_slides_add_image`
@@ -179,10 +154,6 @@ Insert an image into a specific slide in a Google Slides presentation
| --------- | ---- | ----------- |
| `imageId` | string | The object ID of the newly created image |
| `metadata` | json | Operation metadata including presentation ID and image URL |
| ↳ `presentationId` | string | The presentation ID |
| ↳ `pageObjectId` | string | The page object ID where the image was inserted |
| ↳ `imageUrl` | string | The source image URL |
| ↳ `url` | string | URL to open the presentation |
### `google_slides_get_thumbnail`
@@ -205,10 +176,6 @@ Generate a thumbnail image of a specific slide in a Google Slides presentation
| `width` | number | Width of the thumbnail in pixels |
| `height` | number | Height of the thumbnail in pixels |
| `metadata` | json | Operation metadata including presentation ID and page object ID |
| ↳ `presentationId` | string | The presentation ID |
| ↳ `pageObjectId` | string | The page object ID for the thumbnail |
| ↳ `thumbnailSize` | string | The requested thumbnail size |
| ↳ `mimeType` | string | The thumbnail MIME type |
### `google_slides_get_page`

View File

@@ -1,86 +0,0 @@
/**
* GET /api/copilot/chat/[chatId]/active-stream
*
* Check if a chat has an active stream that can be resumed.
* Used by the client on page load to detect if there's an in-progress stream.
*/
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import {
getActiveStreamForChat,
getChunkCount,
getStreamMeta,
} from '@/lib/copilot/stream-persistence'
const logger = createLogger('CopilotActiveStreamAPI')
export async function GET(
req: NextRequest,
{ params }: { params: Promise<{ chatId: string }> }
) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { chatId } = await params
logger.info('Active stream check', { chatId, userId: session.user.id })
// Look up active stream ID from Redis
const streamId = await getActiveStreamForChat(chatId)
if (!streamId) {
logger.debug('No active stream found', { chatId })
return NextResponse.json({ hasActiveStream: false })
}
// Get stream metadata
const meta = await getStreamMeta(streamId)
if (!meta) {
logger.debug('Stream metadata not found', { streamId, chatId })
return NextResponse.json({ hasActiveStream: false })
}
// Verify the stream is still active
if (meta.status !== 'streaming') {
logger.debug('Stream not active', { streamId, chatId, status: meta.status })
return NextResponse.json({ hasActiveStream: false })
}
// Verify ownership
if (meta.userId !== session.user.id) {
logger.warn('Stream belongs to different user', {
streamId,
chatId,
requesterId: session.user.id,
ownerId: meta.userId,
})
return NextResponse.json({ hasActiveStream: false })
}
// Get current chunk count for client to track progress
const chunkCount = await getChunkCount(streamId)
logger.info('Active stream found', {
streamId,
chatId,
chunkCount,
toolCallsCount: meta.toolCalls.length,
})
return NextResponse.json({
hasActiveStream: true,
streamId,
chunkCount,
toolCalls: meta.toolCalls.filter(
(tc) => tc.state === 'pending' || tc.state === 'executing'
),
createdAt: meta.createdAt,
updatedAt: meta.updatedAt,
})
}

View File

@@ -2,7 +2,6 @@ import { db } from '@sim/db'
import { copilotChats } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, desc, eq } from 'drizzle-orm'
import { after } from 'next/server'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
@@ -17,21 +16,6 @@ import {
createRequestTracker,
createUnauthorizedResponse,
} from '@/lib/copilot/request-helpers'
import {
type RenderEvent,
serializeRenderEvent,
} from '@/lib/copilot/render-events'
import {
appendChunk,
appendContent,
checkAbortSignal,
completeStream,
createStream,
errorStream,
refreshStreamTTL,
updateToolCall,
} from '@/lib/copilot/stream-persistence'
import { transformStream } from '@/lib/copilot/stream-transformer'
import { getCredentialsServerTool } from '@/lib/copilot/tools/server/user/get-credentials'
import type { CopilotProviderConfig } from '@/lib/copilot/types'
import { env } from '@/lib/core/config/env'
@@ -508,205 +492,385 @@ export async function POST(req: NextRequest) {
)
}
// If streaming is requested, start background processing and return streamId immediately
// If streaming is requested, forward the stream and update chat later
if (stream && simAgentResponse.body) {
// Create stream ID for persistence and resumption
const streamId = crypto.randomUUID()
// Initialize stream state in Redis
await createStream({
streamId,
chatId: actualChatId!,
userId: authenticatedUserId,
workflowId,
userMessageId: userMessageIdToUse,
isClientSession: true,
})
// Save user message to database immediately so it's available on refresh
// This is critical for stream resumption - user message must be persisted before stream starts
if (currentChat) {
const existingMessages = Array.isArray(currentChat.messages) ? currentChat.messages : []
const userMessage = {
id: userMessageIdToUse,
role: 'user',
content: message,
timestamp: new Date().toISOString(),
...(fileAttachments && fileAttachments.length > 0 && { fileAttachments }),
...(Array.isArray(contexts) && contexts.length > 0 && { contexts }),
...(Array.isArray(contexts) &&
contexts.length > 0 && {
contentBlocks: [{ type: 'contexts', contexts: contexts as any, timestamp: Date.now() }],
}),
}
await db
.update(copilotChats)
.set({
messages: [...existingMessages, userMessage],
updatedAt: new Date(),
})
.where(eq(copilotChats.id, actualChatId!))
logger.info(`[${tracker.requestId}] Saved user message before streaming`, {
chatId: actualChatId,
messageId: userMessageIdToUse,
})
// Create user message to save
const userMessage = {
id: userMessageIdToUse, // Consistent ID used for request and persistence
role: 'user',
content: message,
timestamp: new Date().toISOString(),
...(fileAttachments && fileAttachments.length > 0 && { fileAttachments }),
...(Array.isArray(contexts) && contexts.length > 0 && { contexts }),
...(Array.isArray(contexts) &&
contexts.length > 0 && {
contentBlocks: [{ type: 'contexts', contexts: contexts as any, timestamp: Date.now() }],
}),
}
// Track last TTL refresh time
const TTL_REFRESH_INTERVAL = 60000 // Refresh TTL every minute
// Create a pass-through stream that captures the response
const transformedStream = new ReadableStream({
async start(controller) {
const encoder = new TextEncoder()
let assistantContent = ''
const toolCalls: any[] = []
let buffer = ''
const isFirstDone = true
let responseIdFromStart: string | undefined
let responseIdFromDone: string | undefined
// Track tool call progress to identify a safe done event
const announcedToolCallIds = new Set<string>()
const startedToolExecutionIds = new Set<string>()
const completedToolExecutionIds = new Set<string>()
let lastDoneResponseId: string | undefined
let lastSafeDoneResponseId: string | undefined
// Capture needed values for background task
const capturedChatId = actualChatId!
const capturedCurrentChat = currentChat
// Generate assistant message ID upfront
const assistantMessageId = crypto.randomUUID()
// Start background processing task using the stream transformer
// This processes the Sim Agent stream, executes tools, and emits render events
// Client will connect to /api/copilot/stream/{streamId} for live updates
const backgroundTask = (async () => {
// Start title generation if needed (runs in parallel)
if (capturedChatId && !capturedCurrentChat?.title && conversationHistory.length === 0) {
generateChatTitle(message)
.then(async (title) => {
if (title) {
await db
.update(copilotChats)
.set({ title, updatedAt: new Date() })
.where(eq(copilotChats.id, capturedChatId))
logger.info(`[${tracker.requestId}] Generated and saved title: ${title}`)
}
})
.catch((error) => {
logger.error(`[${tracker.requestId}] Title generation failed:`, error)
})
}
// Track accumulated content for final persistence
let accumulatedContent = ''
const accumulatedToolCalls: Array<{
id: string
name: string
args: Record<string, unknown>
state: string
result?: unknown
}> = []
try {
// Use the stream transformer to process the Sim Agent stream
await transformStream(simAgentResponse.body!, {
streamId,
chatId: capturedChatId,
userId: authenticatedUserId,
workflowId,
userMessageId: userMessageIdToUse,
assistantMessageId,
// Emit render events to Redis for client consumption
onRenderEvent: async (event: RenderEvent) => {
// Serialize and append to Redis
const serialized = serializeRenderEvent(event)
await appendChunk(streamId, serialized).catch(() => {})
// Also update stream metadata for specific events
switch (event.type) {
case 'text_delta':
accumulatedContent += (event as any).content || ''
appendContent(streamId, (event as any).content || '').catch(() => {})
break
case 'tool_pending':
updateToolCall(streamId, (event as any).toolCallId, {
id: (event as any).toolCallId,
name: (event as any).toolName,
args: (event as any).args || {},
state: 'pending',
}).catch(() => {})
break
case 'tool_executing':
updateToolCall(streamId, (event as any).toolCallId, {
state: 'executing',
}).catch(() => {})
break
case 'tool_success':
updateToolCall(streamId, (event as any).toolCallId, {
state: 'success',
result: (event as any).result,
}).catch(() => {})
accumulatedToolCalls.push({
id: (event as any).toolCallId,
name: (event as any).display?.label || '',
args: {},
state: 'success',
result: (event as any).result,
})
break
case 'tool_error':
updateToolCall(streamId, (event as any).toolCallId, {
state: 'error',
error: (event as any).error,
}).catch(() => {})
accumulatedToolCalls.push({
id: (event as any).toolCallId,
name: (event as any).display?.label || '',
args: {},
state: 'error',
})
break
}
},
// Persist data at key moments
onPersist: async (data) => {
if (data.type === 'message_complete') {
// Stream complete - save final message to DB
await completeStream(streamId, undefined)
}
},
// Check for user-initiated abort
isAborted: () => {
// We'll check Redis for abort signal synchronously cached
// For now, return false - proper abort checking can be async in transformer
return false
},
})
// Update chat with conversationId if available
if (capturedCurrentChat) {
await db
.update(copilotChats)
.set({ updatedAt: new Date() })
.where(eq(copilotChats.id, capturedChatId))
// Send chatId as first event
if (actualChatId) {
const chatIdEvent = `data: ${JSON.stringify({
type: 'chat_id',
chatId: actualChatId,
})}\n\n`
controller.enqueue(encoder.encode(chatIdEvent))
logger.debug(`[${tracker.requestId}] Sent initial chatId event to client`)
}
logger.info(`[${tracker.requestId}] Background stream processing complete`, {
streamId,
contentLength: accumulatedContent.length,
toolCallsCount: accumulatedToolCalls.length,
})
} catch (error) {
logger.error(`[${tracker.requestId}] Background stream error`, { streamId, error })
await errorStream(streamId, error instanceof Error ? error.message : 'Unknown error')
}
})()
// Start title generation in parallel if needed
if (actualChatId && !currentChat?.title && conversationHistory.length === 0) {
generateChatTitle(message)
.then(async (title) => {
if (title) {
await db
.update(copilotChats)
.set({
title,
updatedAt: new Date(),
})
.where(eq(copilotChats.id, actualChatId!))
// Use after() to ensure background task completes even after response is sent
after(() => backgroundTask)
const titleEvent = `data: ${JSON.stringify({
type: 'title_updated',
title: title,
})}\n\n`
controller.enqueue(encoder.encode(titleEvent))
logger.info(`[${tracker.requestId}] Generated and saved title: ${title}`)
}
})
.catch((error) => {
logger.error(`[${tracker.requestId}] Title generation failed:`, error)
})
} else {
logger.debug(`[${tracker.requestId}] Skipping title generation`)
}
// Return streamId immediately - client will connect to stream endpoint
logger.info(`[${tracker.requestId}] Returning streamId for client to connect`, {
streamId,
chatId: capturedChatId,
// Forward the sim agent stream and capture assistant response
const reader = simAgentResponse.body!.getReader()
const decoder = new TextDecoder()
try {
while (true) {
const { done, value } = await reader.read()
if (done) {
break
}
// Decode and parse SSE events for logging and capturing content
const decodedChunk = decoder.decode(value, { stream: true })
buffer += decodedChunk
const lines = buffer.split('\n')
buffer = lines.pop() || '' // Keep incomplete line in buffer
for (const line of lines) {
if (line.trim() === '') continue // Skip empty lines
if (line.startsWith('data: ') && line.length > 6) {
try {
const jsonStr = line.slice(6)
// Check if the JSON string is unusually large (potential streaming issue)
if (jsonStr.length > 50000) {
// 50KB limit
logger.warn(`[${tracker.requestId}] Large SSE event detected`, {
size: jsonStr.length,
preview: `${jsonStr.substring(0, 100)}...`,
})
}
const event = JSON.parse(jsonStr)
// Log different event types comprehensively
switch (event.type) {
case 'content':
if (event.data) {
assistantContent += event.data
}
break
case 'reasoning':
logger.debug(
`[${tracker.requestId}] Reasoning chunk received (${(event.data || event.content || '').length} chars)`
)
break
case 'tool_call':
if (!event.data?.partial) {
toolCalls.push(event.data)
if (event.data?.id) {
announcedToolCallIds.add(event.data.id)
}
}
break
case 'tool_generating':
if (event.toolCallId) {
startedToolExecutionIds.add(event.toolCallId)
}
break
case 'tool_result':
if (event.toolCallId) {
completedToolExecutionIds.add(event.toolCallId)
}
break
case 'tool_error':
logger.error(`[${tracker.requestId}] Tool error:`, {
toolCallId: event.toolCallId,
toolName: event.toolName,
error: event.error,
success: event.success,
})
if (event.toolCallId) {
completedToolExecutionIds.add(event.toolCallId)
}
break
case 'start':
if (event.data?.responseId) {
responseIdFromStart = event.data.responseId
}
break
case 'done':
if (event.data?.responseId) {
responseIdFromDone = event.data.responseId
lastDoneResponseId = responseIdFromDone
// Mark this done as safe only if no tool call is currently in progress or pending
const announced = announcedToolCallIds.size
const completed = completedToolExecutionIds.size
const started = startedToolExecutionIds.size
const hasToolInProgress = announced > completed || started > completed
if (!hasToolInProgress) {
lastSafeDoneResponseId = responseIdFromDone
}
}
break
case 'error':
break
default:
}
// Emit to client: rewrite 'error' events into user-friendly assistant message
if (event?.type === 'error') {
try {
const displayMessage: string =
(event?.data && (event.data.displayMessage as string)) ||
'Sorry, I encountered an error. Please try again.'
const formatted = `_${displayMessage}_`
// Accumulate so it persists to DB as assistant content
assistantContent += formatted
// Send as content chunk
try {
controller.enqueue(
encoder.encode(
`data: ${JSON.stringify({ type: 'content', data: formatted })}\n\n`
)
)
} catch (enqueueErr) {
reader.cancel()
break
}
// Then close this response cleanly for the client
try {
controller.enqueue(
encoder.encode(`data: ${JSON.stringify({ type: 'done' })}\n\n`)
)
} catch (enqueueErr) {
reader.cancel()
break
}
} catch {}
// Do not forward the original error event
} else {
// Forward original event to client
try {
controller.enqueue(encoder.encode(`data: ${jsonStr}\n\n`))
} catch (enqueueErr) {
reader.cancel()
break
}
}
} catch (e) {
// Enhanced error handling for large payloads and parsing issues
const lineLength = line.length
const isLargePayload = lineLength > 10000
if (isLargePayload) {
logger.error(
`[${tracker.requestId}] Failed to parse large SSE event (${lineLength} chars)`,
{
error: e,
preview: `${line.substring(0, 200)}...`,
size: lineLength,
}
)
} else {
logger.warn(
`[${tracker.requestId}] Failed to parse SSE event: "${line.substring(0, 200)}..."`,
e
)
}
}
} else if (line.trim() && line !== 'data: [DONE]') {
logger.debug(`[${tracker.requestId}] Non-SSE line from sim agent: "${line}"`)
}
}
}
// Process any remaining buffer
if (buffer.trim()) {
logger.debug(`[${tracker.requestId}] Processing remaining buffer: "${buffer}"`)
if (buffer.startsWith('data: ')) {
try {
const jsonStr = buffer.slice(6)
const event = JSON.parse(jsonStr)
if (event.type === 'content' && event.data) {
assistantContent += event.data
}
// Forward remaining event, applying same error rewrite behavior
if (event?.type === 'error') {
const displayMessage: string =
(event?.data && (event.data.displayMessage as string)) ||
'Sorry, I encountered an error. Please try again.'
const formatted = `_${displayMessage}_`
assistantContent += formatted
try {
controller.enqueue(
encoder.encode(
`data: ${JSON.stringify({ type: 'content', data: formatted })}\n\n`
)
)
controller.enqueue(
encoder.encode(`data: ${JSON.stringify({ type: 'done' })}\n\n`)
)
} catch (enqueueErr) {
reader.cancel()
}
} else {
try {
controller.enqueue(encoder.encode(`data: ${jsonStr}\n\n`))
} catch (enqueueErr) {
reader.cancel()
}
}
} catch (e) {
logger.warn(`[${tracker.requestId}] Failed to parse final buffer: "${buffer}"`)
}
}
}
// Log final streaming summary
logger.info(`[${tracker.requestId}] Streaming complete summary:`, {
totalContentLength: assistantContent.length,
toolCallsCount: toolCalls.length,
hasContent: assistantContent.length > 0,
toolNames: toolCalls.map((tc) => tc?.name).filter(Boolean),
})
// NOTE: Messages are saved by the client via update-messages endpoint with full contentBlocks.
// Server only updates conversationId here to avoid overwriting client's richer save.
if (currentChat) {
// Persist only a safe conversationId to avoid continuing from a state that expects tool outputs
const previousConversationId = currentChat?.conversationId as string | undefined
const responseId = lastSafeDoneResponseId || previousConversationId || undefined
if (responseId) {
await db
.update(copilotChats)
.set({
updatedAt: new Date(),
conversationId: responseId,
})
.where(eq(copilotChats.id, actualChatId!))
logger.info(
`[${tracker.requestId}] Updated conversationId for chat ${actualChatId}`,
{
updatedConversationId: responseId,
}
)
}
}
} catch (error) {
logger.error(`[${tracker.requestId}] Error processing stream:`, error)
// Send an error event to the client before closing so it knows what happened
try {
const errorMessage =
error instanceof Error && error.message === 'terminated'
? 'Connection to AI service was interrupted. Please try again.'
: 'An unexpected error occurred while processing the response.'
const encoder = new TextEncoder()
// Send error as content so it shows in the chat
controller.enqueue(
encoder.encode(
`data: ${JSON.stringify({ type: 'content', data: `\n\n_${errorMessage}_` })}\n\n`
)
)
// Send done event to properly close the stream on client
controller.enqueue(encoder.encode(`data: ${JSON.stringify({ type: 'done' })}\n\n`))
} catch (enqueueError) {
// Stream might already be closed, that's ok
logger.warn(
`[${tracker.requestId}] Could not send error event to client:`,
enqueueError
)
}
} finally {
try {
controller.close()
} catch {
// Controller might already be closed
}
}
},
})
return NextResponse.json({
success: true,
streamId,
chatId: capturedChatId,
const response = new Response(transformedStream, {
headers: {
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache',
Connection: 'keep-alive',
'X-Accel-Buffering': 'no',
},
})
logger.info(`[${tracker.requestId}] Returning streaming response to client`, {
duration: tracker.getDuration(),
chatId: actualChatId,
headers: {
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache',
Connection: 'keep-alive',
},
})
return response
}
// For non-streaming responses
@@ -735,7 +899,7 @@ export async function POST(req: NextRequest) {
// Save messages if we have a chat
if (currentChat && responseData.content) {
const userMessage = {
id: userMessageIdToUse,
id: userMessageIdToUse, // Consistent ID used for request and persistence
role: 'user',
content: message,
timestamp: new Date().toISOString(),

View File

@@ -1,64 +0,0 @@
/**
* POST /api/copilot/stream/[streamId]/abort
*
* Signal the server to abort an active stream.
* The original request handler will check for this signal and cancel the stream.
*/
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { getStreamMeta, setAbortSignal } from '@/lib/copilot/stream-persistence'
const logger = createLogger('CopilotStreamAbortAPI')
export async function POST(
req: NextRequest,
{ params }: { params: Promise<{ streamId: string }> }
) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { streamId } = await params
logger.info('Stream abort request', { streamId, userId: session.user.id })
const meta = await getStreamMeta(streamId)
if (!meta) {
logger.info('Stream not found for abort', { streamId })
return NextResponse.json({ error: 'Stream not found' }, { status: 404 })
}
// Verify ownership
if (meta.userId !== session.user.id) {
logger.warn('Unauthorized abort attempt', {
streamId,
requesterId: session.user.id,
ownerId: meta.userId,
})
return NextResponse.json({ error: 'Forbidden' }, { status: 403 })
}
// Stream already finished
if (meta.status !== 'streaming') {
logger.info('Stream already finished, nothing to abort', {
streamId,
status: meta.status,
})
return NextResponse.json({
success: true,
message: 'Stream already finished',
})
}
// Set abort signal in Redis
await setAbortSignal(streamId)
logger.info('Abort signal set for stream', { streamId })
return NextResponse.json({ success: true })
}

View File

@@ -1,146 +0,0 @@
import { createLogger } from '@sim/logger'
import { NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import {
clearPendingDiff,
getPendingDiff,
getStreamMeta,
setPendingDiff,
} from '@/lib/copilot/stream-persistence'
const logger = createLogger('CopilotPendingDiffAPI')
/**
* GET /api/copilot/stream/[streamId]/pending-diff
* Retrieve pending diff state for a stream (used for resumption after page refresh)
*/
export async function GET(
request: Request,
{ params }: { params: Promise<{ streamId: string }> }
) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { streamId } = await params
if (!streamId) {
return NextResponse.json({ error: 'Stream ID required' }, { status: 400 })
}
// Verify user owns this stream
const meta = await getStreamMeta(streamId)
if (!meta) {
return NextResponse.json({ error: 'Stream not found' }, { status: 404 })
}
if (meta.userId !== session.user.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 403 })
}
// Get pending diff
const pendingDiff = await getPendingDiff(streamId)
if (!pendingDiff) {
return NextResponse.json({ pendingDiff: null })
}
logger.info('Retrieved pending diff', {
streamId,
toolCallId: pendingDiff.toolCallId,
})
return NextResponse.json({ pendingDiff })
} catch (error) {
logger.error('Failed to get pending diff', { error })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
/**
* POST /api/copilot/stream/[streamId]/pending-diff
* Store pending diff state for a stream
*/
export async function POST(
request: Request,
{ params }: { params: Promise<{ streamId: string }> }
) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { streamId } = await params
if (!streamId) {
return NextResponse.json({ error: 'Stream ID required' }, { status: 400 })
}
// Verify user owns this stream
const meta = await getStreamMeta(streamId)
if (!meta) {
return NextResponse.json({ error: 'Stream not found' }, { status: 404 })
}
if (meta.userId !== session.user.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 403 })
}
const body = await request.json()
const { pendingDiff } = body
if (!pendingDiff || !pendingDiff.toolCallId) {
return NextResponse.json({ error: 'Invalid pending diff data' }, { status: 400 })
}
await setPendingDiff(streamId, pendingDiff)
logger.info('Stored pending diff', {
streamId,
toolCallId: pendingDiff.toolCallId,
})
return NextResponse.json({ success: true })
} catch (error) {
logger.error('Failed to store pending diff', { error })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
/**
* DELETE /api/copilot/stream/[streamId]/pending-diff
* Clear pending diff state for a stream
*/
export async function DELETE(
request: Request,
{ params }: { params: Promise<{ streamId: string }> }
) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { streamId } = await params
if (!streamId) {
return NextResponse.json({ error: 'Stream ID required' }, { status: 400 })
}
// Verify user owns this stream (if it exists - might already be cleaned up)
const meta = await getStreamMeta(streamId)
if (meta && meta.userId !== session.user.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 403 })
}
await clearPendingDiff(streamId)
logger.info('Cleared pending diff', { streamId })
return NextResponse.json({ success: true })
} catch (error) {
logger.error('Failed to clear pending diff', { error })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}

View File

@@ -1,160 +0,0 @@
/**
* GET /api/copilot/stream/[streamId]
*
* Resume an active copilot stream.
* - If stream is still active: returns SSE with replay of missed chunks + live updates via Redis Pub/Sub
* - If stream is completed: returns JSON indicating to load from database
* - If stream not found: returns 404
*/
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import {
getChunks,
getStreamMeta,
subscribeToStream,
} from '@/lib/copilot/stream-persistence'
const logger = createLogger('CopilotStreamResumeAPI')
const SSE_HEADERS = {
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache',
Connection: 'keep-alive',
'X-Accel-Buffering': 'no',
}
export async function GET(
req: NextRequest,
{ params }: { params: Promise<{ streamId: string }> }
) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { streamId } = await params
const fromChunk = parseInt(req.nextUrl.searchParams.get('from') || '0')
logger.info('Stream resume request', { streamId, fromChunk, userId: session.user.id })
const meta = await getStreamMeta(streamId)
if (!meta) {
logger.info('Stream not found or expired', { streamId })
return NextResponse.json(
{
status: 'not_found',
message: 'Stream not found or expired. Reload chat from database.',
},
{ status: 404 }
)
}
// Verify ownership
if (meta.userId !== session.user.id) {
logger.warn('Unauthorized stream access attempt', {
streamId,
requesterId: session.user.id,
ownerId: meta.userId,
})
return NextResponse.json({ error: 'Forbidden' }, { status: 403 })
}
// Stream completed - tell client to load from database
if (meta.status === 'completed') {
logger.info('Stream already completed', { streamId, chatId: meta.chatId })
return NextResponse.json({
status: 'completed',
chatId: meta.chatId,
message: 'Stream completed. Messages saved to database.',
})
}
// Stream errored
if (meta.status === 'error') {
logger.info('Stream encountered error', { streamId, chatId: meta.chatId })
return NextResponse.json({
status: 'error',
chatId: meta.chatId,
message: 'Stream encountered an error.',
})
}
// Stream still active - return SSE with replay + live updates
logger.info('Resuming active stream', { streamId, chatId: meta.chatId })
const encoder = new TextEncoder()
const abortController = new AbortController()
// Handle client disconnect
req.signal.addEventListener('abort', () => {
logger.info('Client disconnected from resumed stream', { streamId })
abortController.abort()
})
const responseStream = new ReadableStream({
async start(controller) {
try {
// 1. Replay missed chunks (single read from Redis LIST)
const missedChunks = await getChunks(streamId, fromChunk)
logger.info('Replaying missed chunks', {
streamId,
fromChunk,
missedChunkCount: missedChunks.length,
})
for (const chunk of missedChunks) {
// Chunks are already in SSE format, just re-encode
controller.enqueue(encoder.encode(chunk))
}
// 2. Subscribe to live chunks via Redis Pub/Sub (blocking, no polling)
await subscribeToStream(
streamId,
(chunk) => {
try {
controller.enqueue(encoder.encode(chunk))
} catch {
// Client disconnected
abortController.abort()
}
},
() => {
// Stream complete - close connection
logger.info('Stream completed during resume', { streamId })
try {
controller.close()
} catch {
// Already closed
}
},
abortController.signal
)
} catch (error) {
logger.error('Error in stream resume', {
streamId,
error: error instanceof Error ? error.message : String(error),
})
try {
controller.close()
} catch {
// Already closed
}
}
},
cancel() {
abortController.abort()
},
})
return new Response(responseStream, {
headers: {
...SSE_HEADERS,
'X-Stream-Id': streamId,
'X-Chat-Id': meta.chatId,
},
})
}

View File

@@ -1,11 +1,10 @@
import { db } from '@sim/db'
import { templateCreators } from '@sim/db/schema'
import { templateCreators, user } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { generateRequestId } from '@/lib/core/utils/request'
import { verifyEffectiveSuperUser } from '@/lib/templates/permissions'
const logger = createLogger('CreatorVerificationAPI')
@@ -24,8 +23,9 @@ export async function POST(request: NextRequest, { params }: { params: Promise<{
}
// Check if user is a super user
const { effectiveSuperUser } = await verifyEffectiveSuperUser(session.user.id)
if (!effectiveSuperUser) {
const currentUser = await db.select().from(user).where(eq(user.id, session.user.id)).limit(1)
if (!currentUser[0]?.isSuperUser) {
logger.warn(`[${requestId}] Non-super user attempted to verify creator: ${id}`)
return NextResponse.json({ error: 'Only super users can verify creators' }, { status: 403 })
}
@@ -76,8 +76,9 @@ export async function DELETE(
}
// Check if user is a super user
const { effectiveSuperUser } = await verifyEffectiveSuperUser(session.user.id)
if (!effectiveSuperUser) {
const currentUser = await db.select().from(user).where(eq(user.id, session.user.id)).limit(1)
if (!currentUser[0]?.isSuperUser) {
logger.warn(`[${requestId}] Non-super user attempted to unverify creator: ${id}`)
return NextResponse.json({ error: 'Only super users can unverify creators' }, { status: 403 })
}

View File

@@ -1,193 +0,0 @@
import { db } from '@sim/db'
import { copilotChats, workflow, workspace } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { verifyEffectiveSuperUser } from '@/lib/templates/permissions'
import { parseWorkflowJson } from '@/lib/workflows/operations/import-export'
import {
loadWorkflowFromNormalizedTables,
saveWorkflowToNormalizedTables,
} from '@/lib/workflows/persistence/utils'
import { sanitizeForExport } from '@/lib/workflows/sanitization/json-sanitizer'
const logger = createLogger('SuperUserImportWorkflow')
interface ImportWorkflowRequest {
workflowId: string
targetWorkspaceId: string
}
/**
* POST /api/superuser/import-workflow
*
* Superuser endpoint to import a workflow by ID along with its copilot chats.
* This creates a copy of the workflow in the target workspace with new IDs.
* Only the workflow structure and copilot chats are copied - no deployments,
* webhooks, triggers, or other sensitive data.
*
* Requires both isSuperUser flag AND superUserModeEnabled setting.
*/
export async function POST(request: NextRequest) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { effectiveSuperUser, isSuperUser, superUserModeEnabled } =
await verifyEffectiveSuperUser(session.user.id)
if (!effectiveSuperUser) {
logger.warn('Non-effective-superuser attempted to access import-workflow endpoint', {
userId: session.user.id,
isSuperUser,
superUserModeEnabled,
})
return NextResponse.json({ error: 'Forbidden: Superuser access required' }, { status: 403 })
}
const body: ImportWorkflowRequest = await request.json()
const { workflowId, targetWorkspaceId } = body
if (!workflowId) {
return NextResponse.json({ error: 'workflowId is required' }, { status: 400 })
}
if (!targetWorkspaceId) {
return NextResponse.json({ error: 'targetWorkspaceId is required' }, { status: 400 })
}
// Verify target workspace exists
const [targetWorkspace] = await db
.select({ id: workspace.id, ownerId: workspace.ownerId })
.from(workspace)
.where(eq(workspace.id, targetWorkspaceId))
.limit(1)
if (!targetWorkspace) {
return NextResponse.json({ error: 'Target workspace not found' }, { status: 404 })
}
// Get the source workflow
const [sourceWorkflow] = await db
.select()
.from(workflow)
.where(eq(workflow.id, workflowId))
.limit(1)
if (!sourceWorkflow) {
return NextResponse.json({ error: 'Source workflow not found' }, { status: 404 })
}
// Load the workflow state from normalized tables
const normalizedData = await loadWorkflowFromNormalizedTables(workflowId)
if (!normalizedData) {
return NextResponse.json(
{ error: 'Workflow has no normalized data - cannot import' },
{ status: 400 }
)
}
// Use existing export logic to create export format
const workflowState = {
blocks: normalizedData.blocks,
edges: normalizedData.edges,
loops: normalizedData.loops,
parallels: normalizedData.parallels,
metadata: {
name: sourceWorkflow.name,
description: sourceWorkflow.description ?? undefined,
color: sourceWorkflow.color,
},
}
const exportData = sanitizeForExport(workflowState)
// Use existing import logic (parseWorkflowJson regenerates IDs automatically)
const { data: importedData, errors } = parseWorkflowJson(JSON.stringify(exportData))
if (!importedData || errors.length > 0) {
return NextResponse.json(
{ error: `Failed to parse workflow: ${errors.join(', ')}` },
{ status: 400 }
)
}
// Create new workflow record
const newWorkflowId = crypto.randomUUID()
const now = new Date()
await db.insert(workflow).values({
id: newWorkflowId,
userId: session.user.id,
workspaceId: targetWorkspaceId,
folderId: null, // Don't copy folder association
name: `[Debug Import] ${sourceWorkflow.name}`,
description: sourceWorkflow.description,
color: sourceWorkflow.color,
lastSynced: now,
createdAt: now,
updatedAt: now,
isDeployed: false, // Never copy deployment status
runCount: 0,
variables: sourceWorkflow.variables || {},
})
// Save using existing persistence logic
const saveResult = await saveWorkflowToNormalizedTables(newWorkflowId, importedData)
if (!saveResult.success) {
// Clean up the workflow record if save failed
await db.delete(workflow).where(eq(workflow.id, newWorkflowId))
return NextResponse.json(
{ error: `Failed to save workflow state: ${saveResult.error}` },
{ status: 500 }
)
}
// Copy copilot chats associated with the source workflow
const sourceCopilotChats = await db
.select()
.from(copilotChats)
.where(eq(copilotChats.workflowId, workflowId))
let copilotChatsImported = 0
for (const chat of sourceCopilotChats) {
await db.insert(copilotChats).values({
userId: session.user.id,
workflowId: newWorkflowId,
title: chat.title ? `[Import] ${chat.title}` : null,
messages: chat.messages,
model: chat.model,
conversationId: null, // Don't copy conversation ID
previewYaml: chat.previewYaml,
planArtifact: chat.planArtifact,
config: chat.config,
createdAt: new Date(),
updatedAt: new Date(),
})
copilotChatsImported++
}
logger.info('Superuser imported workflow', {
userId: session.user.id,
sourceWorkflowId: workflowId,
newWorkflowId,
targetWorkspaceId,
copilotChatsImported,
})
return NextResponse.json({
success: true,
newWorkflowId,
copilotChatsImported,
})
} catch (error) {
logger.error('Error importing workflow', error)
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}

View File

@@ -5,7 +5,7 @@ import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { generateRequestId } from '@/lib/core/utils/request'
import { verifyEffectiveSuperUser } from '@/lib/templates/permissions'
import { verifySuperUser } from '@/lib/templates/permissions'
const logger = createLogger('TemplateApprovalAPI')
@@ -25,8 +25,8 @@ export async function POST(request: NextRequest, { params }: { params: Promise<{
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { effectiveSuperUser } = await verifyEffectiveSuperUser(session.user.id)
if (!effectiveSuperUser) {
const { isSuperUser } = await verifySuperUser(session.user.id)
if (!isSuperUser) {
logger.warn(`[${requestId}] Non-super user attempted to approve template: ${id}`)
return NextResponse.json({ error: 'Only super users can approve templates' }, { status: 403 })
}
@@ -71,8 +71,8 @@ export async function DELETE(
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { effectiveSuperUser } = await verifyEffectiveSuperUser(session.user.id)
if (!effectiveSuperUser) {
const { isSuperUser } = await verifySuperUser(session.user.id)
if (!isSuperUser) {
logger.warn(`[${requestId}] Non-super user attempted to reject template: ${id}`)
return NextResponse.json({ error: 'Only super users can reject templates' }, { status: 403 })
}

View File

@@ -5,7 +5,7 @@ import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { generateRequestId } from '@/lib/core/utils/request'
import { verifyEffectiveSuperUser } from '@/lib/templates/permissions'
import { verifySuperUser } from '@/lib/templates/permissions'
const logger = createLogger('TemplateRejectionAPI')
@@ -25,8 +25,8 @@ export async function POST(request: NextRequest, { params }: { params: Promise<{
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { effectiveSuperUser } = await verifyEffectiveSuperUser(session.user.id)
if (!effectiveSuperUser) {
const { isSuperUser } = await verifySuperUser(session.user.id)
if (!isSuperUser) {
logger.warn(`[${requestId}] Non-super user attempted to reject template: ${id}`)
return NextResponse.json({ error: 'Only super users can reject templates' }, { status: 403 })
}

View File

@@ -3,6 +3,7 @@ import {
templateCreators,
templateStars,
templates,
user,
workflow,
workflowDeploymentVersion,
} from '@sim/db/schema'
@@ -13,7 +14,6 @@ import { v4 as uuidv4 } from 'uuid'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { generateRequestId } from '@/lib/core/utils/request'
import { verifyEffectiveSuperUser } from '@/lib/templates/permissions'
import {
extractRequiredCredentials,
sanitizeCredentials,
@@ -70,8 +70,8 @@ export async function GET(request: NextRequest) {
logger.debug(`[${requestId}] Fetching templates with params:`, params)
// Check if user is a super user
const { effectiveSuperUser } = await verifyEffectiveSuperUser(session.user.id)
const isSuperUser = effectiveSuperUser
const currentUser = await db.select().from(user).where(eq(user.id, session.user.id)).limit(1)
const isSuperUser = currentUser[0]?.isSuperUser || false
// Build query conditions
const conditions = []

View File

@@ -550,8 +550,6 @@ export interface AdminUserBilling {
totalWebhookTriggers: number
totalScheduledExecutions: number
totalChatExecutions: number
totalMcpExecutions: number
totalA2aExecutions: number
totalTokensUsed: number
totalCost: string
currentUsageLimit: string | null

View File

@@ -97,8 +97,6 @@ export const GET = withAdminAuthParams<RouteParams>(async (_, context) => {
totalWebhookTriggers: stats?.totalWebhookTriggers ?? 0,
totalScheduledExecutions: stats?.totalScheduledExecutions ?? 0,
totalChatExecutions: stats?.totalChatExecutions ?? 0,
totalMcpExecutions: stats?.totalMcpExecutions ?? 0,
totalA2aExecutions: stats?.totalA2aExecutions ?? 0,
totalTokensUsed: stats?.totalTokensUsed ?? 0,
totalCost: stats?.totalCost ?? '0',
currentUsageLimit: stats?.currentUsageLimit ?? null,

View File

@@ -19,7 +19,7 @@ export interface RateLimitResult {
export async function checkRateLimit(
request: NextRequest,
endpoint: 'logs' | 'logs-detail' | 'workflows' | 'workflow-detail' = 'logs'
endpoint: 'logs' | 'logs-detail' = 'logs'
): Promise<RateLimitResult> {
try {
const auth = await authenticateV1Request(request)

View File

@@ -1,102 +0,0 @@
import { db } from '@sim/db'
import { permissions, workflow, workflowBlocks } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { extractInputFieldsFromBlocks } from '@/lib/workflows/input-format'
import { createApiResponse, getUserLimits } from '@/app/api/v1/logs/meta'
import { checkRateLimit, createRateLimitResponse } from '@/app/api/v1/middleware'
const logger = createLogger('V1WorkflowDetailsAPI')
export const revalidate = 0
export async function GET(request: NextRequest, { params }: { params: Promise<{ id: string }> }) {
const requestId = crypto.randomUUID().slice(0, 8)
try {
const rateLimit = await checkRateLimit(request, 'workflow-detail')
if (!rateLimit.allowed) {
return createRateLimitResponse(rateLimit)
}
const userId = rateLimit.userId!
const { id } = await params
logger.info(`[${requestId}] Fetching workflow details for ${id}`, { userId })
const rows = await db
.select({
id: workflow.id,
name: workflow.name,
description: workflow.description,
color: workflow.color,
folderId: workflow.folderId,
workspaceId: workflow.workspaceId,
isDeployed: workflow.isDeployed,
deployedAt: workflow.deployedAt,
runCount: workflow.runCount,
lastRunAt: workflow.lastRunAt,
variables: workflow.variables,
createdAt: workflow.createdAt,
updatedAt: workflow.updatedAt,
})
.from(workflow)
.innerJoin(
permissions,
and(
eq(permissions.entityType, 'workspace'),
eq(permissions.entityId, workflow.workspaceId),
eq(permissions.userId, userId)
)
)
.where(eq(workflow.id, id))
.limit(1)
const workflowData = rows[0]
if (!workflowData) {
return NextResponse.json({ error: 'Workflow not found' }, { status: 404 })
}
const blockRows = await db
.select({
id: workflowBlocks.id,
type: workflowBlocks.type,
subBlocks: workflowBlocks.subBlocks,
})
.from(workflowBlocks)
.where(eq(workflowBlocks.workflowId, id))
const blocksRecord = Object.fromEntries(
blockRows.map((block) => [block.id, { type: block.type, subBlocks: block.subBlocks }])
)
const inputs = extractInputFieldsFromBlocks(blocksRecord)
const response = {
id: workflowData.id,
name: workflowData.name,
description: workflowData.description,
color: workflowData.color,
folderId: workflowData.folderId,
workspaceId: workflowData.workspaceId,
isDeployed: workflowData.isDeployed,
deployedAt: workflowData.deployedAt?.toISOString() || null,
runCount: workflowData.runCount,
lastRunAt: workflowData.lastRunAt?.toISOString() || null,
variables: workflowData.variables || {},
inputs,
createdAt: workflowData.createdAt.toISOString(),
updatedAt: workflowData.updatedAt.toISOString(),
}
const limits = await getUserLimits(userId)
const apiResponse = createApiResponse({ data: response }, limits, rateLimit)
return NextResponse.json(apiResponse.body, { headers: apiResponse.headers })
} catch (error: unknown) {
const message = error instanceof Error ? error.message : 'Unknown error'
logger.error(`[${requestId}] Workflow details fetch error`, { error: message })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}

View File

@@ -1,184 +0,0 @@
import { db } from '@sim/db'
import { permissions, workflow } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, asc, eq, gt, or } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { createApiResponse, getUserLimits } from '@/app/api/v1/logs/meta'
import { checkRateLimit, createRateLimitResponse } from '@/app/api/v1/middleware'
const logger = createLogger('V1WorkflowsAPI')
export const dynamic = 'force-dynamic'
export const revalidate = 0
const QueryParamsSchema = z.object({
workspaceId: z.string(),
folderId: z.string().optional(),
deployedOnly: z.coerce.boolean().optional().default(false),
limit: z.coerce.number().min(1).max(100).optional().default(50),
cursor: z.string().optional(),
})
interface CursorData {
sortOrder: number
createdAt: string
id: string
}
function encodeCursor(data: CursorData): string {
return Buffer.from(JSON.stringify(data)).toString('base64')
}
function decodeCursor(cursor: string): CursorData | null {
try {
return JSON.parse(Buffer.from(cursor, 'base64').toString())
} catch {
return null
}
}
export async function GET(request: NextRequest) {
const requestId = crypto.randomUUID().slice(0, 8)
try {
const rateLimit = await checkRateLimit(request, 'workflows')
if (!rateLimit.allowed) {
return createRateLimitResponse(rateLimit)
}
const userId = rateLimit.userId!
const { searchParams } = new URL(request.url)
const rawParams = Object.fromEntries(searchParams.entries())
const validationResult = QueryParamsSchema.safeParse(rawParams)
if (!validationResult.success) {
return NextResponse.json(
{ error: 'Invalid parameters', details: validationResult.error.errors },
{ status: 400 }
)
}
const params = validationResult.data
logger.info(`[${requestId}] Fetching workflows for workspace ${params.workspaceId}`, {
userId,
filters: {
folderId: params.folderId,
deployedOnly: params.deployedOnly,
},
})
const conditions = [
eq(workflow.workspaceId, params.workspaceId),
eq(permissions.entityType, 'workspace'),
eq(permissions.entityId, params.workspaceId),
eq(permissions.userId, userId),
]
if (params.folderId) {
conditions.push(eq(workflow.folderId, params.folderId))
}
if (params.deployedOnly) {
conditions.push(eq(workflow.isDeployed, true))
}
if (params.cursor) {
const cursorData = decodeCursor(params.cursor)
if (cursorData) {
const cursorCondition = or(
gt(workflow.sortOrder, cursorData.sortOrder),
and(
eq(workflow.sortOrder, cursorData.sortOrder),
gt(workflow.createdAt, new Date(cursorData.createdAt))
),
and(
eq(workflow.sortOrder, cursorData.sortOrder),
eq(workflow.createdAt, new Date(cursorData.createdAt)),
gt(workflow.id, cursorData.id)
)
)
if (cursorCondition) {
conditions.push(cursorCondition)
}
}
}
const orderByClause = [asc(workflow.sortOrder), asc(workflow.createdAt), asc(workflow.id)]
const rows = await db
.select({
id: workflow.id,
name: workflow.name,
description: workflow.description,
color: workflow.color,
folderId: workflow.folderId,
workspaceId: workflow.workspaceId,
isDeployed: workflow.isDeployed,
deployedAt: workflow.deployedAt,
runCount: workflow.runCount,
lastRunAt: workflow.lastRunAt,
sortOrder: workflow.sortOrder,
createdAt: workflow.createdAt,
updatedAt: workflow.updatedAt,
})
.from(workflow)
.innerJoin(
permissions,
and(
eq(permissions.entityType, 'workspace'),
eq(permissions.entityId, params.workspaceId),
eq(permissions.userId, userId)
)
)
.where(and(...conditions))
.orderBy(...orderByClause)
.limit(params.limit + 1)
const hasMore = rows.length > params.limit
const data = rows.slice(0, params.limit)
let nextCursor: string | undefined
if (hasMore && data.length > 0) {
const lastWorkflow = data[data.length - 1]
nextCursor = encodeCursor({
sortOrder: lastWorkflow.sortOrder,
createdAt: lastWorkflow.createdAt.toISOString(),
id: lastWorkflow.id,
})
}
const formattedWorkflows = data.map((w) => ({
id: w.id,
name: w.name,
description: w.description,
color: w.color,
folderId: w.folderId,
workspaceId: w.workspaceId,
isDeployed: w.isDeployed,
deployedAt: w.deployedAt?.toISOString() || null,
runCount: w.runCount,
lastRunAt: w.lastRunAt?.toISOString() || null,
createdAt: w.createdAt.toISOString(),
updatedAt: w.updatedAt.toISOString(),
}))
const limits = await getUserLimits(userId)
const response = createApiResponse(
{
data: formattedWorkflows,
nextCursor,
},
limits,
rateLimit
)
return NextResponse.json(response.body, { headers: response.headers })
} catch (error: unknown) {
const message = error instanceof Error ? error.message : 'Unknown error'
logger.error(`[${requestId}] Workflows fetch error`, { error: message })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
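
Note: the removed route pages with a keyset cursor rather than OFFSET. The three-way or(...) condition reproduces a lexicographic "greater than" over (sortOrder, createdAt, id), and the base64 cursor just serializes the last row of the previous page. A condensed sketch of the same scheme, assuming ISO-8601 timestamps:

interface CursorData {
  sortOrder: number
  createdAt: string // ISO-8601, so string comparison agrees with time order
  id: string
}

const encodeCursor = (d: CursorData): string =>
  Buffer.from(JSON.stringify(d)).toString('base64')

const decodeCursor = (cursor: string): CursorData | null => {
  try {
    return JSON.parse(Buffer.from(cursor, 'base64').toString())
  } catch {
    return null
  }
}

// Equivalent predicate to the SQL or(gt, and(eq, gt), and(eq, eq, gt)) condition:
// a row belongs to the next page iff it sorts strictly after the cursor row.
function isAfterCursor(row: CursorData, cursor: CursorData): boolean {
  if (row.sortOrder !== cursor.sortOrder) return row.sortOrder > cursor.sortOrder
  if (row.createdAt !== cursor.createdAt) return row.createdAt > cursor.createdAt
  return row.id > cursor.id
}

Fetching limit + 1 rows and slicing back to limit, as the route does, is what lets it emit nextCursor only when another page actually exists.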

View File

@@ -1,13 +1,10 @@
'use client'
import { Suspense, useEffect, useState } from 'react'
-import { Loader2 } from 'lucide-react'
+import { CheckCircle, Heart, Info, Loader2, XCircle } from 'lucide-react'
import { useSearchParams } from 'next/navigation'
-import { inter } from '@/app/_styles/fonts/inter/inter'
-import { soehne } from '@/app/_styles/fonts/soehne/soehne'
-import { BrandedButton } from '@/app/(auth)/components/branded-button'
-import { SupportFooter } from '@/app/(auth)/components/support-footer'
-import { InviteLayout } from '@/app/invite/components'
+import { Button, Card, CardContent, CardDescription, CardHeader, CardTitle } from '@/components/ui'
+import { useBrandConfig } from '@/lib/branding/branding'
interface UnsubscribeData {
success: boolean
@@ -30,6 +27,7 @@ function UnsubscribeContent() {
const [error, setError] = useState<string | null>(null)
const [processing, setProcessing] = useState(false)
const [unsubscribed, setUnsubscribed] = useState(false)
+const brand = useBrandConfig()
const email = searchParams.get('email')
const token = searchParams.get('token')
@@ -111,7 +109,7 @@ function UnsubscribeContent() {
} else {
setError(result.error || 'Failed to unsubscribe')
}
-} catch {
+} catch (error) {
setError('Failed to process unsubscribe request')
} finally {
setProcessing(false)
@@ -120,171 +118,272 @@ function UnsubscribeContent() {
if (loading) {
return (
-<InviteLayout>
-<div className='space-y-1 text-center'>
-<h1 className={`${soehne.className} font-medium text-[32px] text-black tracking-tight`}>
-Loading
-</h1>
-<p className={`${inter.className} font-[380] text-[16px] text-muted-foreground`}>
-Validating your unsubscribe link...
-</p>
-</div>
-<div className={`${inter.className} mt-8 flex w-full items-center justify-center py-8`}>
-<Loader2 className='h-8 w-8 animate-spin text-muted-foreground' />
-</div>
-<SupportFooter position='absolute' />
-</InviteLayout>
+<div className='before:-z-50 relative flex min-h-screen items-center justify-center before:pointer-events-none before:fixed before:inset-0 before:bg-white'>
+<Card className='w-full max-w-md border shadow-sm'>
+<CardContent className='flex items-center justify-center p-8'>
+<Loader2 className='h-8 w-8 animate-spin text-muted-foreground' />
+</CardContent>
+</Card>
+</div>
)
}
if (error) {
return (
-<InviteLayout>
-<div className='space-y-1 text-center'>
-<h1 className={`${soehne.className} font-medium text-[32px] text-black tracking-tight`}>
-Invalid Unsubscribe Link
-</h1>
-<p className={`${inter.className} font-[380] text-[16px] text-muted-foreground`}>
-{error}
-</p>
-</div>
+<div className='before:-z-50 relative flex min-h-screen items-center justify-center p-4 before:pointer-events-none before:fixed before:inset-0 before:bg-white'>
+<Card className='w-full max-w-md border shadow-sm'>
+<CardHeader className='text-center'>
+<XCircle className='mx-auto mb-2 h-12 w-12 text-red-500' />
+<CardTitle className='text-foreground'>Invalid Unsubscribe Link</CardTitle>
+<CardDescription className='text-muted-foreground'>
+This unsubscribe link is invalid or has expired
+</CardDescription>
+</CardHeader>
+<CardContent className='space-y-4'>
+<div className='rounded-lg border bg-red-50 p-4'>
+<p className='text-red-800 text-sm'>
+<strong>Error:</strong> {error}
+</p>
+</div>
-<div className={`${inter.className} mt-8 w-full max-w-[410px] space-y-3`}>
-<BrandedButton onClick={() => window.history.back()}>Go Back</BrandedButton>
-</div>
+<div className='space-y-3'>
+<p className='text-muted-foreground text-sm'>This could happen if:</p>
+<ul className='ml-4 list-inside list-disc space-y-1 text-muted-foreground text-sm'>
+<li>The link is missing required parameters</li>
+<li>The link has expired or been used already</li>
+<li>The link was copied incorrectly</li>
+</ul>
+</div>
-<SupportFooter position='absolute' />
-</InviteLayout>
+<div className='mt-6 flex flex-col gap-3'>
+<Button
+onClick={() =>
+window.open(
+`mailto:${brand.supportEmail}?subject=Unsubscribe%20Help&body=Hi%2C%20I%20need%20help%20unsubscribing%20from%20emails.%20My%20unsubscribe%20link%20is%20not%20working.`,
+'_blank'
+)
+}
+className='w-full bg-[var(--brand-primary-hex)] font-medium text-white shadow-sm transition-colors duration-200 hover:bg-[var(--brand-primary-hover-hex)]'
+>
+Contact Support
+</Button>
+<Button onClick={() => window.history.back()} variant='outline' className='w-full'>
+Go Back
+</Button>
+</div>
+<div className='mt-4 text-center'>
+<p className='text-muted-foreground text-xs'>
+Need immediate help? Email us at{' '}
+<a
+href={`mailto:${brand.supportEmail}`}
+className='text-muted-foreground hover:underline'
+>
+{brand.supportEmail}
+</a>
+</p>
+</div>
+</CardContent>
+</Card>
+</div>
)
}
if (data?.isTransactional) {
return (
-<InviteLayout>
-<div className='space-y-1 text-center'>
-<h1 className={`${soehne.className} font-medium text-[32px] text-black tracking-tight`}>
-Important Account Emails
-</h1>
-<p className={`${inter.className} font-[380] text-[16px] text-muted-foreground`}>
-Transactional emails like password resets, account confirmations, and security alerts
-cannot be unsubscribed from as they contain essential information for your account.
-</p>
-</div>
+<div className='before:-z-50 relative flex min-h-screen items-center justify-center p-4 before:pointer-events-none before:fixed before:inset-0 before:bg-white'>
+<Card className='w-full max-w-md border shadow-sm'>
+<CardHeader className='text-center'>
+<Info className='mx-auto mb-2 h-12 w-12 text-blue-500' />
+<CardTitle className='text-foreground'>Important Account Emails</CardTitle>
+<CardDescription className='text-muted-foreground'>
+This email contains important information about your account
+</CardDescription>
+</CardHeader>
+<CardContent className='space-y-4'>
+<div className='rounded-lg border bg-blue-50 p-4'>
+<p className='text-blue-800 text-sm'>
+<strong>Transactional emails</strong> like password resets, account confirmations,
+and security alerts cannot be unsubscribed from as they contain essential
+information for your account security and functionality.
+</p>
+</div>
-<div className={`${inter.className} mt-8 w-full max-w-[410px] space-y-3`}>
-<BrandedButton onClick={() => window.close()}>Close</BrandedButton>
-</div>
+<div className='space-y-3'>
+<p className='text-foreground text-sm'>
+If you no longer wish to receive these emails, you can:
+</p>
+<ul className='ml-4 list-inside list-disc space-y-1 text-muted-foreground text-sm'>
+<li>Close your account entirely</li>
+<li>Contact our support team for assistance</li>
+</ul>
+</div>
-<SupportFooter position='absolute' />
-</InviteLayout>
+<div className='mt-6 flex flex-col gap-3'>
+<Button
+onClick={() =>
+window.open(
+`mailto:${brand.supportEmail}?subject=Account%20Help&body=Hi%2C%20I%20need%20help%20with%20my%20account%20emails.`,
+'_blank'
+)
+}
+className='w-full bg-blue-600 text-white hover:bg-blue-700'
+>
+Contact Support
+</Button>
+<Button onClick={() => window.close()} variant='outline' className='w-full'>
+Close
+</Button>
+</div>
+</CardContent>
+</Card>
+</div>
)
}
if (unsubscribed) {
return (
-<InviteLayout>
-<div className='space-y-1 text-center'>
-<h1 className={`${soehne.className} font-medium text-[32px] text-black tracking-tight`}>
-Successfully Unsubscribed
-</h1>
-<p className={`${inter.className} font-[380] text-[16px] text-muted-foreground`}>
-You have been unsubscribed from our emails. You will stop receiving emails within 48
-hours.
-</p>
-</div>
-<div className={`${inter.className} mt-8 w-full max-w-[410px] space-y-3`}>
-<BrandedButton onClick={() => window.close()}>Close</BrandedButton>
-</div>
-<SupportFooter position='absolute' />
-</InviteLayout>
+<div className='before:-z-50 relative flex min-h-screen items-center justify-center before:pointer-events-none before:fixed before:inset-0 before:bg-white'>
+<Card className='w-full max-w-md border shadow-sm'>
+<CardHeader className='text-center'>
+<CheckCircle className='mx-auto mb-2 h-12 w-12 text-green-500' />
+<CardTitle className='text-foreground'>Successfully Unsubscribed</CardTitle>
+<CardDescription className='text-muted-foreground'>
+You have been unsubscribed from our emails. You will stop receiving emails within 48
+hours.
+</CardDescription>
+</CardHeader>
+<CardContent className='text-center'>
+<p className='text-muted-foreground text-sm'>
+If you change your mind, you can always update your email preferences in your account
+settings or contact us at{' '}
+<a
+href={`mailto:${brand.supportEmail}`}
+className='text-muted-foreground hover:underline'
+>
+{brand.supportEmail}
+</a>
+</p>
+</CardContent>
+</Card>
+</div>
)
}
-const isAlreadyUnsubscribedFromAll = data?.currentPreferences.unsubscribeAll
return (
-<InviteLayout>
-<div className='space-y-1 text-center'>
-<h1 className={`${soehne.className} font-medium text-[32px] text-black tracking-tight`}>
-Email Preferences
-</h1>
-<p className={`${inter.className} font-[380] text-[16px] text-muted-foreground`}>
-Choose which emails you'd like to stop receiving.
-</p>
-<p className={`${inter.className} mt-2 font-[380] text-[14px] text-muted-foreground`}>
-{data?.email}
-</p>
-</div>
+<div className='before:-z-50 relative flex min-h-screen items-center justify-center p-4 before:pointer-events-none before:fixed before:inset-0 before:bg-white'>
+<Card className='w-full max-w-md border shadow-sm'>
+<CardHeader className='text-center'>
+<Heart className='mx-auto mb-2 h-12 w-12 text-red-500' />
+<CardTitle className='text-foreground'>We&apos;re sorry to see you go!</CardTitle>
+<CardDescription className='text-muted-foreground'>
+We understand email preferences are personal. Choose which emails you&apos;d like to
+stop receiving from Sim.
+</CardDescription>
+<div className='mt-2 rounded-lg border bg-muted/50 p-3'>
+<p className='text-muted-foreground text-xs'>
+Email: <span className='font-medium text-foreground'>{data?.email}</span>
+</p>
+</div>
+</CardHeader>
+<CardContent className='space-y-4'>
+<div className='space-y-3'>
+<Button
+onClick={() => handleUnsubscribe('all')}
+disabled={processing || data?.currentPreferences.unsubscribeAll}
+variant='destructive'
+className='w-full'
+>
+{data?.currentPreferences.unsubscribeAll ? (
+<CheckCircle className='mr-2 h-4 w-4' />
+) : null}
+{processing
+? 'Unsubscribing...'
+: data?.currentPreferences.unsubscribeAll
+? 'Unsubscribed from All Emails'
+: 'Unsubscribe from All Marketing Emails'}
+</Button>
-<div className={`${inter.className} mt-8 w-full max-w-[410px] space-y-3`}>
-<BrandedButton
-onClick={() => handleUnsubscribe('all')}
-disabled={processing || isAlreadyUnsubscribedFromAll}
-loading={processing}
-loadingText='Unsubscribing'
->
-{isAlreadyUnsubscribedFromAll
-? 'Unsubscribed from All Emails'
-: 'Unsubscribe from All Marketing Emails'}
-</BrandedButton>
+<div className='text-center text-muted-foreground text-sm'>
+or choose specific types:
+</div>
-<div className='py-2 text-center'>
-<span className={`${inter.className} font-[380] text-[14px] text-muted-foreground`}>
-or choose specific types
-</span>
-</div>
+<Button
+onClick={() => handleUnsubscribe('marketing')}
+disabled={
+processing ||
+data?.currentPreferences.unsubscribeAll ||
+data?.currentPreferences.unsubscribeMarketing
+}
+variant='outline'
+className='w-full'
+>
+{data?.currentPreferences.unsubscribeMarketing ? (
+<CheckCircle className='mr-2 h-4 w-4' />
+) : null}
+{data?.currentPreferences.unsubscribeMarketing
+? 'Unsubscribed from Marketing'
+: 'Unsubscribe from Marketing Emails'}
+</Button>
-<BrandedButton
-onClick={() => handleUnsubscribe('marketing')}
-disabled={
-processing ||
-isAlreadyUnsubscribedFromAll ||
-data?.currentPreferences.unsubscribeMarketing
-}
->
-{data?.currentPreferences.unsubscribeMarketing
-? 'Unsubscribed from Marketing'
-: 'Unsubscribe from Marketing Emails'}
-</BrandedButton>
+<Button
+onClick={() => handleUnsubscribe('updates')}
+disabled={
+processing ||
+data?.currentPreferences.unsubscribeAll ||
+data?.currentPreferences.unsubscribeUpdates
+}
+variant='outline'
+className='w-full'
+>
+{data?.currentPreferences.unsubscribeUpdates ? (
+<CheckCircle className='mr-2 h-4 w-4' />
+) : null}
+{data?.currentPreferences.unsubscribeUpdates
+? 'Unsubscribed from Updates'
+: 'Unsubscribe from Product Updates'}
+</Button>
-<BrandedButton
-onClick={() => handleUnsubscribe('updates')}
-disabled={
-processing ||
-isAlreadyUnsubscribedFromAll ||
-data?.currentPreferences.unsubscribeUpdates
-}
->
-{data?.currentPreferences.unsubscribeUpdates
-? 'Unsubscribed from Updates'
-: 'Unsubscribe from Product Updates'}
-</BrandedButton>
+<Button
+onClick={() => handleUnsubscribe('notifications')}
+disabled={
+processing ||
+data?.currentPreferences.unsubscribeAll ||
+data?.currentPreferences.unsubscribeNotifications
+}
+variant='outline'
+className='w-full'
+>
+{data?.currentPreferences.unsubscribeNotifications ? (
+<CheckCircle className='mr-2 h-4 w-4' />
+) : null}
+{data?.currentPreferences.unsubscribeNotifications
+? 'Unsubscribed from Notifications'
+: 'Unsubscribe from Notifications'}
+</Button>
+</div>
-<BrandedButton
-onClick={() => handleUnsubscribe('notifications')}
-disabled={
-processing ||
-isAlreadyUnsubscribedFromAll ||
-data?.currentPreferences.unsubscribeNotifications
-}
->
-{data?.currentPreferences.unsubscribeNotifications
-? 'Unsubscribed from Notifications'
-: 'Unsubscribe from Notifications'}
-</BrandedButton>
-</div>
+<div className='mt-6 space-y-3'>
+<div className='rounded-lg border bg-muted/50 p-3'>
+<p className='text-center text-muted-foreground text-xs'>
+<strong>Note:</strong> You&apos;ll continue receiving important account emails like
+password resets and security alerts.
+</p>
+</div>
-<div className={`${inter.className} mt-6 max-w-[410px] text-center`}>
-<p className='font-[380] text-[13px] text-muted-foreground'>
-You'll continue receiving important account emails like password resets and security
-alerts.
-</p>
-</div>
-<SupportFooter position='absolute' />
-</InviteLayout>
+<p className='text-center text-muted-foreground text-xs'>
+Questions? Contact us at{' '}
+<a
+href={`mailto:${brand.supportEmail}`}
+className='text-muted-foreground hover:underline'
+>
+{brand.supportEmail}
+</a>
+</p>
+</div>
+</CardContent>
+</Card>
+</div>
)
}
@@ -292,20 +391,13 @@ export default function Unsubscribe() {
return (
<Suspense
fallback={
-<InviteLayout>
-<div className='space-y-1 text-center'>
-<h1 className={`${soehne.className} font-medium text-[32px] text-black tracking-tight`}>
-Loading
-</h1>
-<p className={`${inter.className} font-[380] text-[16px] text-muted-foreground`}>
-Validating your unsubscribe link...
-</p>
-</div>
-<div className={`${inter.className} mt-8 flex w-full items-center justify-center py-8`}>
-<Loader2 className='h-8 w-8 animate-spin text-muted-foreground' />
-</div>
-<SupportFooter position='absolute' />
-</InviteLayout>
+<div className='before:-z-50 relative flex min-h-screen items-center justify-center before:pointer-events-none before:fixed before:inset-0 before:bg-white'>
+<Card className='w-full max-w-md border shadow-sm'>
+<CardContent className='flex items-center justify-center p-8'>
+<Loader2 className='h-8 w-8 animate-spin text-muted-foreground' />
+</CardContent>
+</Card>
+</div>
}
>
<UnsubscribeContent />

View File

@@ -2,6 +2,7 @@
import { useRef, useState } from 'react'
import { createLogger } from '@sim/logger'
+import { useQueryClient } from '@tanstack/react-query'
import {
Button,
Label,
@@ -13,7 +14,7 @@ import {
Textarea,
} from '@/components/emcn'
import type { DocumentData } from '@/lib/knowledge/types'
-import { useCreateChunk } from '@/hooks/queries/knowledge'
+import { knowledgeKeys } from '@/hooks/queries/knowledge'
const logger = createLogger('CreateChunkModal')
@@ -30,20 +31,16 @@ export function CreateChunkModal({
document,
knowledgeBaseId,
}: CreateChunkModalProps) {
-const {
-mutate: createChunk,
-isPending: isCreating,
-error: mutationError,
-reset: resetMutation,
-} = useCreateChunk()
+const queryClient = useQueryClient()
const [content, setContent] = useState('')
+const [isCreating, setIsCreating] = useState(false)
+const [error, setError] = useState<string | null>(null)
const [showUnsavedChangesAlert, setShowUnsavedChangesAlert] = useState(false)
const isProcessingRef = useRef(false)
-const error = mutationError?.message ?? null
const hasUnsavedChanges = content.trim().length > 0
-const handleCreateChunk = () => {
+const handleCreateChunk = async () => {
if (!document || content.trim().length === 0 || isProcessingRef.current) {
if (isProcessingRef.current) {
logger.warn('Chunk creation already in progress, ignoring duplicate request')
@@ -51,32 +48,57 @@ export function CreateChunkModal({
return
}
-isProcessingRef.current = true
+try {
+isProcessingRef.current = true
+setIsCreating(true)
+setError(null)
-createChunk(
-{
-knowledgeBaseId,
-documentId: document.id,
-content: content.trim(),
-enabled: true,
-},
-{
-onSuccess: () => {
-isProcessingRef.current = false
-onClose()
-},
-onError: () => {
-isProcessingRef.current = false
-},
+const response = await fetch(
+`/api/knowledge/${knowledgeBaseId}/documents/${document.id}/chunks`,
+{
+method: 'POST',
+headers: {
+'Content-Type': 'application/json',
+},
+body: JSON.stringify({
+content: content.trim(),
+enabled: true,
+}),
+}
+)
+if (!response.ok) {
+const result = await response.json()
+throw new Error(result.error || 'Failed to create chunk')
+}
-)
+const result = await response.json()
+if (result.success && result.data) {
+logger.info('Chunk created successfully:', result.data.id)
+await queryClient.invalidateQueries({
+queryKey: knowledgeKeys.detail(knowledgeBaseId),
+})
+onClose()
+} else {
+throw new Error(result.error || 'Failed to create chunk')
+}
+} catch (err) {
+logger.error('Error creating chunk:', err)
+setError(err instanceof Error ? err.message : 'An error occurred')
+} finally {
+isProcessingRef.current = false
+setIsCreating(false)
+}
}
const onClose = () => {
onOpenChange(false)
setContent('')
+setError(null)
setShowUnsavedChangesAlert(false)
-resetMutation()
}
const handleCloseAttempt = () => {
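
Note: the rewritten handler replaces the useCreateChunk mutation with a plain fetch followed by a manual cache invalidation. A minimal standalone sketch of that pattern, assuming a knowledgeKeys factory shaped like the one imported above (the real one is exported from '@/hooks/queries/knowledge'):

import { QueryClient } from '@tanstack/react-query'

// Assumed key factory shape for the sketch.
const knowledgeKeys = {
  detail: (knowledgeBaseId: string) => ['knowledge', 'detail', knowledgeBaseId] as const,
}

async function createChunkAndRefresh(
  queryClient: QueryClient,
  knowledgeBaseId: string,
  documentId: string,
  content: string
): Promise<unknown> {
  const response = await fetch(
    `/api/knowledge/${knowledgeBaseId}/documents/${documentId}/chunks`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ content: content.trim(), enabled: true }),
    }
  )
  const result = await response.json()
  if (!response.ok || !result.success) {
    throw new Error(result.error || 'Failed to create chunk')
  }
  // Invalidate the knowledge-base detail query so mounted views refetch the new chunk.
  await queryClient.invalidateQueries({ queryKey: knowledgeKeys.detail(knowledgeBaseId) })
  return result.data
}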

View File

@@ -1,8 +1,13 @@
'use client'
import { useState } from 'react'
import { createLogger } from '@sim/logger'
+import { useQueryClient } from '@tanstack/react-query'
import { Button, Modal, ModalBody, ModalContent, ModalFooter, ModalHeader } from '@/components/emcn'
import type { ChunkData } from '@/lib/knowledge/types'
-import { useDeleteChunk } from '@/hooks/queries/knowledge'
+import { knowledgeKeys } from '@/hooks/queries/knowledge'
const logger = createLogger('DeleteChunkModal')
interface DeleteChunkModalProps {
chunk: ChunkData | null
@@ -19,12 +24,44 @@ export function DeleteChunkModal({
isOpen,
onClose,
}: DeleteChunkModalProps) {
-const { mutate: deleteChunk, isPending: isDeleting } = useDeleteChunk()
+const queryClient = useQueryClient()
+const [isDeleting, setIsDeleting] = useState(false)
-const handleDeleteChunk = () => {
+const handleDeleteChunk = async () => {
if (!chunk || isDeleting) return
-deleteChunk({ knowledgeBaseId, documentId, chunkId: chunk.id }, { onSuccess: onClose })
+try {
+setIsDeleting(true)
+const response = await fetch(
+`/api/knowledge/${knowledgeBaseId}/documents/${documentId}/chunks/${chunk.id}`,
+{
+method: 'DELETE',
+}
+)
+if (!response.ok) {
+throw new Error('Failed to delete chunk')
+}
+const result = await response.json()
+if (result.success) {
+logger.info('Chunk deleted successfully:', chunk.id)
+await queryClient.invalidateQueries({
+queryKey: knowledgeKeys.detail(knowledgeBaseId),
+})
+onClose()
+} else {
+throw new Error(result.error || 'Failed to delete chunk')
+}
+} catch (err) {
+logger.error('Error deleting chunk:', err)
+} finally {
+setIsDeleting(false)
+}
}
if (!chunk) return null

View File

@@ -25,7 +25,6 @@ import {
} from '@/hooks/kb/use-knowledge-base-tag-definitions'
import { useNextAvailableSlot } from '@/hooks/kb/use-next-available-slot'
import { type TagDefinitionInput, useTagDefinitions } from '@/hooks/kb/use-tag-definitions'
-import { useUpdateDocumentTags } from '@/hooks/queries/knowledge'
const logger = createLogger('DocumentTagsModal')
@@ -59,6 +58,8 @@ function formatValueForDisplay(value: string, fieldType: string): string {
try {
const date = new Date(value)
if (Number.isNaN(date.getTime())) return value
+// For UTC dates, display the UTC date to prevent timezone shifts
+// e.g., 2002-05-16T00:00:00.000Z should show as "May 16, 2002" not "May 15, 2002"
if (typeof value === 'string' && (value.endsWith('Z') || /[+-]\d{2}:\d{2}$/.test(value))) {
return new Date(
date.getUTCFullYear(),
@@ -95,7 +96,6 @@ export function DocumentTagsModal({
const documentTagHook = useTagDefinitions(knowledgeBaseId, documentId)
const kbTagHook = useKnowledgeBaseTagDefinitions(knowledgeBaseId)
const { getNextAvailableSlot: getServerNextSlot } = useNextAvailableSlot(knowledgeBaseId)
-const { mutateAsync: updateDocumentTags } = useUpdateDocumentTags()
const { saveTagDefinitions, tagDefinitions, fetchTagDefinitions } = documentTagHook
const { tagDefinitions: kbTagDefinitions, fetchTagDefinitions: refreshTagDefinitions } = kbTagHook
@@ -118,6 +118,7 @@ export function DocumentTagsModal({
const definition = definitions.find((def) => def.tagSlot === slot)
if (rawValue !== null && rawValue !== undefined && definition) {
+// Convert value to string for storage
const stringValue = String(rawValue).trim()
if (stringValue) {
tags.push({
@@ -141,34 +142,41 @@ export function DocumentTagsModal({
async (tagsToSave: DocumentTag[]) => {
if (!documentData) return
-const tagData: Record<string, string> = {}
+try {
+const tagData: Record<string, string> = {}
-ALL_TAG_SLOTS.forEach((slot) => {
-const tag = tagsToSave.find((t) => t.slot === slot)
-if (tag?.value.trim()) {
-tagData[slot] = tag.value.trim()
-} else {
-tagData[slot] = ''
+// Only include tags that have values (omit empty ones)
+// Use empty string for slots that should be cleared
+ALL_TAG_SLOTS.forEach((slot) => {
+const tag = tagsToSave.find((t) => t.slot === slot)
+if (tag?.value.trim()) {
+tagData[slot] = tag.value.trim()
+} else {
+// Use empty string to clear a tag (API schema expects string, not null)
+tagData[slot] = ''
+}
+})
+const response = await fetch(`/api/knowledge/${knowledgeBaseId}/documents/${documentId}`, {
+method: 'PUT',
+headers: {
+'Content-Type': 'application/json',
+},
+body: JSON.stringify(tagData),
+})
+if (!response.ok) {
+throw new Error('Failed to update document tags')
+}
-})
-await updateDocumentTags({
-knowledgeBaseId,
-documentId,
-tags: tagData,
-})
-onDocumentUpdate?.(tagData)
-await fetchTagDefinitions()
+onDocumentUpdate?.(tagData as Record<string, string>)
+await fetchTagDefinitions()
+} catch (error) {
+logger.error('Error updating document tags:', error)
+throw error
+}
},
-[
-documentData,
-knowledgeBaseId,
-documentId,
-updateDocumentTags,
-fetchTagDefinitions,
-onDocumentUpdate,
-]
+[documentData, knowledgeBaseId, documentId, fetchTagDefinitions, onDocumentUpdate]
)
const handleRemoveTag = async (index: number) => {
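
Note: the UTC branch above matters because toLocaleDateString renders in the viewer's timezone, so a midnight-UTC timestamp shifts back a calendar day anywhere west of UTC. A small worked sketch of the same rule:

function formatDateTag(value: string): string {
  const date = new Date(value)
  if (Number.isNaN(date.getTime())) return value
  // Timestamps with a Z or ±HH:MM offset are displayed by their UTC calendar date.
  const hasExplicitZone = value.endsWith('Z') || /[+-]\d{2}:\d{2}$/.test(value)
  const display = hasExplicitZone
    ? new Date(date.getUTCFullYear(), date.getUTCMonth(), date.getUTCDate())
    : date
  return display.toLocaleDateString('en-US', { year: 'numeric', month: 'long', day: 'numeric' })
}

// In a UTC-8 locale, new Date('2002-05-16T00:00:00.000Z').toLocaleDateString() prints
// "5/15/2002", while formatDateTag('2002-05-16T00:00:00.000Z') prints "May 16, 2002".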

View File

@@ -2,6 +2,7 @@
import { useEffect, useMemo, useRef, useState } from 'react'
import { createLogger } from '@sim/logger'
+import { useQueryClient } from '@tanstack/react-query'
import { ChevronDown, ChevronUp } from 'lucide-react'
import {
Button,
@@ -18,7 +19,7 @@ import {
import type { ChunkData, DocumentData } from '@/lib/knowledge/types'
import { getAccurateTokenCount, getTokenStrings } from '@/lib/tokenization/estimators'
import { useUserPermissionsContext } from '@/app/workspace/[workspaceId]/providers/workspace-permissions-provider'
-import { useUpdateChunk } from '@/hooks/queries/knowledge'
+import { knowledgeKeys } from '@/hooks/queries/knowledge'
const logger = createLogger('EditChunkModal')
@@ -49,22 +50,17 @@ export function EditChunkModal({
onNavigateToPage,
maxChunkSize,
}: EditChunkModalProps) {
+const queryClient = useQueryClient()
const userPermissions = useUserPermissionsContext()
-const {
-mutate: updateChunk,
-isPending: isSaving,
-error: mutationError,
-reset: resetMutation,
-} = useUpdateChunk()
const [editedContent, setEditedContent] = useState(chunk?.content || '')
+const [isSaving, setIsSaving] = useState(false)
const [isNavigating, setIsNavigating] = useState(false)
+const [error, setError] = useState<string | null>(null)
const [showUnsavedChangesAlert, setShowUnsavedChangesAlert] = useState(false)
const [pendingNavigation, setPendingNavigation] = useState<(() => void) | null>(null)
const [tokenizerOn, setTokenizerOn] = useState(false)
const textareaRef = useRef<HTMLTextAreaElement>(null)
-const error = mutationError?.message ?? null
const hasUnsavedChanges = editedContent !== (chunk?.content || '')
const tokenStrings = useMemo(() => {
@@ -106,15 +102,44 @@ export function EditChunkModal({
const canNavigatePrev = currentChunkIndex > 0 || currentPage > 1
const canNavigateNext = currentChunkIndex < allChunks.length - 1 || currentPage < totalPages
-const handleSaveContent = () => {
+const handleSaveContent = async () => {
if (!chunk || !document) return
-updateChunk({
-knowledgeBaseId,
-documentId: document.id,
-chunkId: chunk.id,
-content: editedContent,
-})
+try {
+setIsSaving(true)
+setError(null)
+const response = await fetch(
+`/api/knowledge/${knowledgeBaseId}/documents/${document.id}/chunks/${chunk.id}`,
+{
+method: 'PUT',
+headers: {
+'Content-Type': 'application/json',
+},
+body: JSON.stringify({
+content: editedContent,
+}),
+}
+)
+if (!response.ok) {
+const result = await response.json()
+throw new Error(result.error || 'Failed to update chunk')
+}
+const result = await response.json()
+if (result.success) {
+await queryClient.invalidateQueries({
+queryKey: knowledgeKeys.detail(knowledgeBaseId),
+})
+}
+} catch (err) {
+logger.error('Error updating chunk:', err)
+setError(err instanceof Error ? err.message : 'An error occurred')
+} finally {
+setIsSaving(false)
+}
}
const navigateToChunk = async (direction: 'prev' | 'next') => {
@@ -140,6 +165,7 @@ export function EditChunkModal({
}
} catch (err) {
logger.error(`Error navigating ${direction}:`, err)
+setError(`Failed to navigate to ${direction === 'prev' ? 'previous' : 'next'} chunk`)
} finally {
setIsNavigating(false)
}
@@ -159,7 +185,6 @@ export function EditChunkModal({
setPendingNavigation(null)
setShowUnsavedChangesAlert(true)
} else {
-resetMutation()
onClose()
}
}
@@ -170,7 +195,6 @@ export function EditChunkModal({
void pendingNavigation()
setPendingNavigation(null)
} else {
-resetMutation()
onClose()
}
}

View File

@@ -48,13 +48,7 @@ import { ActionBar } from '@/app/workspace/[workspaceId]/knowledge/[id]/componen
import { useUserPermissionsContext } from '@/app/workspace/[workspaceId]/providers/workspace-permissions-provider'
import { useContextMenu } from '@/app/workspace/[workspaceId]/w/components/sidebar/hooks'
import { useDocument, useDocumentChunks, useKnowledgeBase } from '@/hooks/kb/use-knowledge'
-import {
-knowledgeKeys,
-useBulkChunkOperation,
-useDeleteDocument,
-useDocumentChunkSearchQuery,
-useUpdateChunk,
-} from '@/hooks/queries/knowledge'
+import { knowledgeKeys, useDocumentChunkSearchQuery } from '@/hooks/queries/knowledge'
const logger = createLogger('Document')
@@ -409,13 +403,11 @@ export function Document({
const [isCreateChunkModalOpen, setIsCreateChunkModalOpen] = useState(false)
const [chunkToDelete, setChunkToDelete] = useState<ChunkData | null>(null)
const [isDeleteModalOpen, setIsDeleteModalOpen] = useState(false)
+const [isBulkOperating, setIsBulkOperating] = useState(false)
const [showDeleteDocumentDialog, setShowDeleteDocumentDialog] = useState(false)
+const [isDeletingDocument, setIsDeletingDocument] = useState(false)
const [contextMenuChunk, setContextMenuChunk] = useState<ChunkData | null>(null)
-const { mutate: updateChunkMutation } = useUpdateChunk()
-const { mutate: deleteDocumentMutation, isPending: isDeletingDocument } = useDeleteDocument()
-const { mutate: bulkChunkMutation, isPending: isBulkOperating } = useBulkChunkOperation()
const {
isOpen: isContextMenuOpen,
position: contextMenuPosition,
@@ -448,23 +440,36 @@ export function Document({
setSelectedChunk(null)
}
-const handleToggleEnabled = (chunkId: string) => {
+const handleToggleEnabled = async (chunkId: string) => {
const chunk = displayChunks.find((c) => c.id === chunkId)
if (!chunk) return
-updateChunkMutation(
-{
-knowledgeBaseId,
-documentId,
-chunkId,
-enabled: !chunk.enabled,
-},
-{
-onSuccess: () => {
-updateChunk(chunkId, { enabled: !chunk.enabled })
-},
+try {
+const response = await fetch(
+`/api/knowledge/${knowledgeBaseId}/documents/${documentId}/chunks/${chunkId}`,
+{
+method: 'PUT',
+headers: {
+'Content-Type': 'application/json',
+},
+body: JSON.stringify({
+enabled: !chunk.enabled,
+}),
+}
+)
+if (!response.ok) {
+throw new Error('Failed to update chunk')
+}
-)
+const result = await response.json()
+if (result.success) {
+updateChunk(chunkId, { enabled: !chunk.enabled })
+}
+} catch (err) {
+logger.error('Error updating chunk:', err)
+}
}
const handleDeleteChunk = (chunkId: string) => {
@@ -510,69 +515,107 @@ export function Document({
/**
* Handles deleting the document
*/
-const handleDeleteDocument = () => {
+const handleDeleteDocument = async () => {
if (!documentData) return
-deleteDocumentMutation(
-{ knowledgeBaseId, documentId },
-{
-onSuccess: () => {
-router.push(`/workspace/${workspaceId}/knowledge/${knowledgeBaseId}`)
-},
+try {
+setIsDeletingDocument(true)
+const response = await fetch(`/api/knowledge/${knowledgeBaseId}/documents/${documentId}`, {
+method: 'DELETE',
+})
+if (!response.ok) {
+throw new Error('Failed to delete document')
+}
-)
+const result = await response.json()
+if (result.success) {
+await queryClient.invalidateQueries({
+queryKey: knowledgeKeys.detail(knowledgeBaseId),
+})
+router.push(`/workspace/${workspaceId}/knowledge/${knowledgeBaseId}`)
+} else {
+throw new Error(result.error || 'Failed to delete document')
+}
+} catch (err) {
+logger.error('Error deleting document:', err)
+setIsDeletingDocument(false)
+}
}
-const performBulkChunkOperation = (
+const performBulkChunkOperation = async (
operation: 'enable' | 'disable' | 'delete',
chunks: ChunkData[]
) => {
if (chunks.length === 0) return
-bulkChunkMutation(
-{
-knowledgeBaseId,
-documentId,
-operation,
-chunkIds: chunks.map((chunk) => chunk.id),
-},
-{
-onSuccess: (result) => {
-if (operation === 'delete') {
-refreshChunks()
-} else {
-result.results.forEach((opResult) => {
-if (opResult.operation === operation) {
-opResult.chunkIds.forEach((chunkId: string) => {
-updateChunk(chunkId, { enabled: operation === 'enable' })
-})
-}
-})
-}
-logger.info(`Successfully ${operation}d ${result.successCount} chunks`)
-setSelectedChunks(new Set())
-},
+try {
+setIsBulkOperating(true)
+const response = await fetch(
+`/api/knowledge/${knowledgeBaseId}/documents/${documentId}/chunks`,
+{
+method: 'PATCH',
+headers: {
+'Content-Type': 'application/json',
+},
+body: JSON.stringify({
+operation,
+chunkIds: chunks.map((chunk) => chunk.id),
+}),
+}
+)
+if (!response.ok) {
+throw new Error(`Failed to ${operation} chunks`)
+}
-)
+const result = await response.json()
+if (result.success) {
+if (operation === 'delete') {
+await refreshChunks()
+} else {
+result.data.results.forEach((opResult: any) => {
+if (opResult.operation === operation) {
+opResult.chunkIds.forEach((chunkId: string) => {
+updateChunk(chunkId, { enabled: operation === 'enable' })
+})
+}
+})
+}
+logger.info(`Successfully ${operation}d ${result.data.successCount} chunks`)
+}
+setSelectedChunks(new Set())
+} catch (err) {
+logger.error(`Error ${operation}ing chunks:`, err)
+} finally {
+setIsBulkOperating(false)
+}
}
-const handleBulkEnable = () => {
+const handleBulkEnable = async () => {
const chunksToEnable = displayChunks.filter(
(chunk) => selectedChunks.has(chunk.id) && !chunk.enabled
)
-performBulkChunkOperation('enable', chunksToEnable)
+await performBulkChunkOperation('enable', chunksToEnable)
}
-const handleBulkDisable = () => {
+const handleBulkDisable = async () => {
const chunksToDisable = displayChunks.filter(
(chunk) => selectedChunks.has(chunk.id) && chunk.enabled
)
-performBulkChunkOperation('disable', chunksToDisable)
+await performBulkChunkOperation('disable', chunksToDisable)
}
-const handleBulkDelete = () => {
+const handleBulkDelete = async () => {
const chunksToDelete = displayChunks.filter((chunk) => selectedChunks.has(chunk.id))
-performBulkChunkOperation('delete', chunksToDelete)
+await performBulkChunkOperation('delete', chunksToDelete)
}
}
const selectedChunksList = displayChunks.filter((chunk) => selectedChunks.has(chunk.id))
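
Note: the bulk handler above posts one PATCH for the whole selection and then applies the per-chunk results locally. The request and response types below are inferred from how result.data is consumed in the hunk; they are an assumption, not a documented contract:

type BulkChunkOperation = 'enable' | 'disable' | 'delete'

// Shape assumed from the reads above (result.data.successCount, result.data.results).
interface BulkChunkResult {
  success: boolean
  data: {
    successCount: number
    results: Array<{ operation: BulkChunkOperation; chunkIds: string[] }>
  }
}

async function bulkChunkOperation(
  knowledgeBaseId: string,
  documentId: string,
  operation: BulkChunkOperation,
  chunkIds: string[]
): Promise<BulkChunkResult> {
  const response = await fetch(
    `/api/knowledge/${knowledgeBaseId}/documents/${documentId}/chunks`,
    {
      method: 'PATCH',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ operation, chunkIds }),
    }
  )
  if (!response.ok) throw new Error(`Failed to ${operation} chunks`)
  return response.json()
}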

View File

@@ -2,6 +2,7 @@
import { useCallback, useEffect, useRef, useState } from 'react'
import { createLogger } from '@sim/logger'
+import { useQueryClient } from '@tanstack/react-query'
import { format } from 'date-fns'
import {
AlertCircle,
@@ -61,12 +62,7 @@ import {
type TagDefinition,
useKnowledgeBaseTagDefinitions,
} from '@/hooks/kb/use-knowledge-base-tag-definitions'
-import {
-useBulkDocumentOperation,
-useDeleteDocument,
-useDeleteKnowledgeBase,
-useUpdateDocument,
-} from '@/hooks/queries/knowledge'
+import { knowledgeKeys } from '@/hooks/queries/knowledge'
const logger = createLogger('KnowledgeBase')
@@ -411,17 +407,12 @@ export function KnowledgeBase({
id,
knowledgeBaseName: passedKnowledgeBaseName,
}: KnowledgeBaseProps) {
+const queryClient = useQueryClient()
const params = useParams()
const workspaceId = params.workspaceId as string
const { removeKnowledgeBase } = useKnowledgeBasesList(workspaceId, { enabled: false })
const userPermissions = useUserPermissionsContext()
-const { mutate: updateDocumentMutation } = useUpdateDocument()
-const { mutate: deleteDocumentMutation } = useDeleteDocument()
-const { mutate: deleteKnowledgeBaseMutation, isPending: isDeleting } =
-useDeleteKnowledgeBase(workspaceId)
-const { mutate: bulkDocumentMutation, isPending: isBulkOperating } = useBulkDocumentOperation()
const [searchQuery, setSearchQuery] = useState('')
const [showTagsModal, setShowTagsModal] = useState(false)
@@ -436,6 +427,8 @@ export function KnowledgeBase({
const [selectedDocuments, setSelectedDocuments] = useState<Set<string>>(new Set())
const [showDeleteDialog, setShowDeleteDialog] = useState(false)
const [showAddDocumentsModal, setShowAddDocumentsModal] = useState(false)
+const [isDeleting, setIsDeleting] = useState(false)
+const [isBulkOperating, setIsBulkOperating] = useState(false)
const [showDeleteDocumentModal, setShowDeleteDocumentModal] = useState(false)
const [documentToDelete, setDocumentToDelete] = useState<string | null>(null)
const [showBulkDeleteModal, setShowBulkDeleteModal] = useState(false)
@@ -557,7 +550,7 @@ export function KnowledgeBase({
/**
* Checks for documents with stale processing states and marks them as failed
*/
-const checkForDeadProcesses = () => {
+const checkForDeadProcesses = async () => {
const now = new Date()
const DEAD_PROCESS_THRESHOLD_MS = 600 * 1000 // 10 minutes
@@ -574,79 +567,116 @@ export function KnowledgeBase({
logger.warn(`Found ${staleDocuments.length} documents with dead processes`)
-staleDocuments.forEach((doc) => {
-updateDocumentMutation(
-{
-knowledgeBaseId: id,
-documentId: doc.id,
-updates: { markFailedDueToTimeout: true },
-},
-{
-onSuccess: () => {
-logger.info(`Successfully marked dead process as failed for document: ${doc.filename}`)
+const markFailedPromises = staleDocuments.map(async (doc) => {
+try {
+const response = await fetch(`/api/knowledge/${id}/documents/${doc.id}`, {
+method: 'PUT',
+headers: {
+'Content-Type': 'application/json',
+},
+body: JSON.stringify({
+markFailedDueToTimeout: true,
+}),
+})
+if (!response.ok) {
+const errorData = await response.json().catch(() => ({ error: 'Unknown error' }))
+logger.error(`Failed to mark document ${doc.id} as failed: ${errorData.error}`)
+return
+}
-)
+const result = await response.json()
+if (result.success) {
+logger.info(`Successfully marked dead process as failed for document: ${doc.filename}`)
+}
+} catch (error) {
+logger.error(`Error marking document ${doc.id} as failed:`, error)
+}
})
+await Promise.allSettled(markFailedPromises)
}
-const handleToggleEnabled = (docId: string) => {
+const handleToggleEnabled = async (docId: string) => {
const document = documents.find((doc) => doc.id === docId)
if (!document) return
const newEnabled = !document.enabled
// Optimistic update
updateDocument(docId, { enabled: newEnabled })
-updateDocumentMutation(
-{
-knowledgeBaseId: id,
-documentId: docId,
-updates: { enabled: newEnabled },
-},
-{
-onError: () => {
-// Rollback on error
-updateDocument(docId, { enabled: !newEnabled })
+try {
+const response = await fetch(`/api/knowledge/${id}/documents/${docId}`, {
+method: 'PUT',
+headers: {
+'Content-Type': 'application/json',
+},
+body: JSON.stringify({
+enabled: newEnabled,
+}),
+})
+if (!response.ok) {
+throw new Error('Failed to update document')
+}
-)
+const result = await response.json()
+if (!result.success) {
+updateDocument(docId, { enabled: !newEnabled })
+}
+} catch (err) {
+updateDocument(docId, { enabled: !newEnabled })
+logger.error('Error updating document:', err)
+}
}
/**
* Handles retrying a failed document processing
*/
-const handleRetryDocument = (docId: string) => {
-// Optimistic update
-updateDocument(docId, {
-processingStatus: 'pending',
-processingError: null,
-processingStartedAt: null,
-processingCompletedAt: null,
-})
+const handleRetryDocument = async (docId: string) => {
+try {
+updateDocument(docId, {
+processingStatus: 'pending',
+processingError: null,
+processingStartedAt: null,
+processingCompletedAt: null,
+})
-updateDocumentMutation(
-{
-knowledgeBaseId: id,
-documentId: docId,
-updates: { retryProcessing: true },
-},
-{
-onSuccess: () => {
-refreshDocuments()
-logger.info(`Document retry initiated successfully for: ${docId}`)
-},
-onError: (err) => {
-logger.error('Error retrying document:', err)
-updateDocument(docId, {
-processingStatus: 'failed',
-processingError:
-err instanceof Error ? err.message : 'Failed to retry document processing',
-})
+const response = await fetch(`/api/knowledge/${id}/documents/${docId}`, {
+method: 'PUT',
+headers: {
+'Content-Type': 'application/json',
+},
+body: JSON.stringify({
+retryProcessing: true,
+}),
+})
+if (!response.ok) {
+throw new Error('Failed to retry document processing')
+}
-)
+const result = await response.json()
+if (!result.success) {
+throw new Error(result.error || 'Failed to retry document processing')
+}
+await refreshDocuments()
+logger.info(`Document retry initiated successfully for: ${docId}`)
+} catch (err) {
+logger.error('Error retrying document:', err)
+const currentDoc = documents.find((doc) => doc.id === docId)
+if (currentDoc) {
+updateDocument(docId, {
+processingStatus: 'failed',
+processingError:
+err instanceof Error ? err.message : 'Failed to retry document processing',
+})
+}
+}
}
/**
@@ -664,32 +694,43 @@ export function KnowledgeBase({
const currentDoc = documents.find((doc) => doc.id === documentId)
const previousName = currentDoc?.filename
// Optimistic update
updateDocument(documentId, { filename: newName })
+queryClient.setQueryData<DocumentData>(knowledgeKeys.document(id, documentId), (previous) =>
+previous ? { ...previous, filename: newName } : previous
+)
-return new Promise<void>((resolve, reject) => {
-updateDocumentMutation(
-{
-knowledgeBaseId: id,
-documentId,
-updates: { filename: newName },
+try {
+const response = await fetch(`/api/knowledge/${id}/documents/${documentId}`, {
+method: 'PUT',
+headers: {
+'Content-Type': 'application/json',
+},
-{
-onSuccess: () => {
-logger.info(`Document renamed: ${documentId}`)
-resolve()
-},
-onError: (err) => {
-// Rollback on error
-if (previousName !== undefined) {
-updateDocument(documentId, { filename: previousName })
-}
-logger.error('Error renaming document:', err)
-reject(err)
-},
-}
-)
-})
+body: JSON.stringify({ filename: newName }),
+})
+if (!response.ok) {
+const result = await response.json()
+throw new Error(result.error || 'Failed to rename document')
+}
+const result = await response.json()
+if (!result.success) {
+throw new Error(result.error || 'Failed to rename document')
+}
+logger.info(`Document renamed: ${documentId}`)
+} catch (err) {
+if (previousName !== undefined) {
+updateDocument(documentId, { filename: previousName })
+queryClient.setQueryData<DocumentData>(
+knowledgeKeys.document(id, documentId),
+(previous) => (previous ? { ...previous, filename: previousName } : previous)
+)
+}
+logger.error('Error renaming document:', err)
+throw err
+}
}
/**
@@ -703,26 +744,35 @@ export function KnowledgeBase({
/**
* Confirms and executes the deletion of a single document
*/
-const confirmDeleteDocument = () => {
+const confirmDeleteDocument = async () => {
if (!documentToDelete) return
-deleteDocumentMutation(
-{ knowledgeBaseId: id, documentId: documentToDelete },
-{
-onSuccess: () => {
-refreshDocuments()
-setSelectedDocuments((prev) => {
-const newSet = new Set(prev)
-newSet.delete(documentToDelete)
-return newSet
-})
-},
-onSettled: () => {
-setShowDeleteDocumentModal(false)
-setDocumentToDelete(null)
-},
+try {
+const response = await fetch(`/api/knowledge/${id}/documents/${documentToDelete}`, {
+method: 'DELETE',
+})
+if (!response.ok) {
+throw new Error('Failed to delete document')
+}
-)
+const result = await response.json()
+if (result.success) {
+refreshDocuments()
+setSelectedDocuments((prev) => {
+const newSet = new Set(prev)
+newSet.delete(documentToDelete)
+return newSet
+})
+}
+} catch (err) {
+logger.error('Error deleting document:', err)
+} finally {
+setShowDeleteDocumentModal(false)
+setDocumentToDelete(null)
+}
}
/**
@@ -768,18 +818,32 @@ export function KnowledgeBase({
/**
* Handles deleting the entire knowledge base
*/
-const handleDeleteKnowledgeBase = () => {
+const handleDeleteKnowledgeBase = async () => {
if (!knowledgeBase) return
-deleteKnowledgeBaseMutation(
-{ knowledgeBaseId: id },
-{
-onSuccess: () => {
-removeKnowledgeBase(id)
-router.push(`/workspace/${workspaceId}/knowledge`)
-},
+try {
+setIsDeleting(true)
+const response = await fetch(`/api/knowledge/${id}`, {
+method: 'DELETE',
+})
+if (!response.ok) {
+throw new Error('Failed to delete knowledge base')
+}
-)
+const result = await response.json()
+if (result.success) {
+removeKnowledgeBase(id)
+router.push(`/workspace/${workspaceId}/knowledge`)
+} else {
+throw new Error(result.error || 'Failed to delete knowledge base')
+}
+} catch (err) {
+logger.error('Error deleting knowledge base:', err)
+setIsDeleting(false)
+}
}
/**
@@ -792,57 +856,93 @@ export function KnowledgeBase({
/**
* Handles bulk enabling of selected documents
*/
-const handleBulkEnable = () => {
+const handleBulkEnable = async () => {
const documentsToEnable = documents.filter(
(doc) => selectedDocuments.has(doc.id) && !doc.enabled
)
if (documentsToEnable.length === 0) return
-bulkDocumentMutation(
-{
-knowledgeBaseId: id,
-operation: 'enable',
-documentIds: documentsToEnable.map((doc) => doc.id),
-},
-{
-onSuccess: (result) => {
-result.updatedDocuments?.forEach((updatedDoc) => {
-updateDocument(updatedDoc.id, { enabled: updatedDoc.enabled })
-})
-logger.info(`Successfully enabled ${result.successCount} documents`)
-setSelectedDocuments(new Set())
+try {
+setIsBulkOperating(true)
+const response = await fetch(`/api/knowledge/${id}/documents`, {
+method: 'PATCH',
+headers: {
+'Content-Type': 'application/json',
+},
+body: JSON.stringify({
+operation: 'enable',
+documentIds: documentsToEnable.map((doc) => doc.id),
+}),
+})
+if (!response.ok) {
+throw new Error('Failed to enable documents')
+}
-)
+const result = await response.json()
+if (result.success) {
+result.data.updatedDocuments.forEach((updatedDoc: { id: string; enabled: boolean }) => {
+updateDocument(updatedDoc.id, { enabled: updatedDoc.enabled })
+})
+logger.info(`Successfully enabled ${result.data.successCount} documents`)
+}
+setSelectedDocuments(new Set())
+} catch (err) {
+logger.error('Error enabling documents:', err)
+} finally {
+setIsBulkOperating(false)
+}
}
/**
* Handles bulk disabling of selected documents
*/
-const handleBulkDisable = () => {
+const handleBulkDisable = async () => {
const documentsToDisable = documents.filter(
(doc) => selectedDocuments.has(doc.id) && doc.enabled
)
if (documentsToDisable.length === 0) return
-bulkDocumentMutation(
-{
-knowledgeBaseId: id,
-operation: 'disable',
-documentIds: documentsToDisable.map((doc) => doc.id),
-},
-{
-onSuccess: (result) => {
-result.updatedDocuments?.forEach((updatedDoc) => {
-updateDocument(updatedDoc.id, { enabled: updatedDoc.enabled })
-})
-logger.info(`Successfully disabled ${result.successCount} documents`)
-setSelectedDocuments(new Set())
+try {
+setIsBulkOperating(true)
+const response = await fetch(`/api/knowledge/${id}/documents`, {
+method: 'PATCH',
+headers: {
+'Content-Type': 'application/json',
+},
+body: JSON.stringify({
+operation: 'disable',
+documentIds: documentsToDisable.map((doc) => doc.id),
+}),
+})
+if (!response.ok) {
+throw new Error('Failed to disable documents')
+}
-)
+const result = await response.json()
+if (result.success) {
+result.data.updatedDocuments.forEach((updatedDoc: { id: string; enabled: boolean }) => {
+updateDocument(updatedDoc.id, { enabled: updatedDoc.enabled })
+})
+logger.info(`Successfully disabled ${result.data.successCount} documents`)
+}
+setSelectedDocuments(new Set())
+} catch (err) {
+logger.error('Error disabling documents:', err)
+} finally {
+setIsBulkOperating(false)
+}
}
/**
@@ -856,28 +956,44 @@ export function KnowledgeBase({
/**
* Confirms and executes the bulk deletion of selected documents
*/
-const confirmBulkDelete = () => {
+const confirmBulkDelete = async () => {
const documentsToDelete = documents.filter((doc) => selectedDocuments.has(doc.id))
if (documentsToDelete.length === 0) return
-bulkDocumentMutation(
-{
-knowledgeBaseId: id,
-operation: 'delete',
-documentIds: documentsToDelete.map((doc) => doc.id),
-},
-{
-onSuccess: (result) => {
-logger.info(`Successfully deleted ${result.successCount} documents`)
-refreshDocuments()
-setSelectedDocuments(new Set())
-},
-onSettled: () => {
-setShowBulkDeleteModal(false)
+try {
+setIsBulkOperating(true)
+const response = await fetch(`/api/knowledge/${id}/documents`, {
+method: 'PATCH',
+headers: {
+'Content-Type': 'application/json',
+},
+body: JSON.stringify({
+operation: 'delete',
+documentIds: documentsToDelete.map((doc) => doc.id),
+}),
+})
+if (!response.ok) {
+throw new Error('Failed to delete documents')
+}
-)
+const result = await response.json()
+if (result.success) {
+logger.info(`Successfully deleted ${result.data.successCount} documents`)
+}
+await refreshDocuments()
+setSelectedDocuments(new Set())
+} catch (err) {
+logger.error('Error deleting documents:', err)
+} finally {
+setIsBulkOperating(false)
+setShowBulkDeleteModal(false)
+}
}
const selectedDocumentsList = documents.filter((doc) => selectedDocuments.has(doc.id))
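
Note: handleToggleEnabled above follows the classic optimistic-update shape: flip local state first, then confirm with the server and roll back on any failure. Condensed into a standalone sketch; applyLocal stands in for the component's updateDocument store setter and is an illustrative name:

async function toggleDocumentEnabled(
  knowledgeBaseId: string,
  docId: string,
  currentEnabled: boolean,
  applyLocal: (docId: string, updates: { enabled: boolean }) => void
): Promise<void> {
  const nextEnabled = !currentEnabled
  applyLocal(docId, { enabled: nextEnabled }) // optimistic: update the UI immediately
  try {
    const response = await fetch(`/api/knowledge/${knowledgeBaseId}/documents/${docId}`, {
      method: 'PUT',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ enabled: nextEnabled }),
    })
    const result = await response.json()
    if (!response.ok || !result.success) {
      throw new Error('Failed to update document')
    }
  } catch {
    applyLocal(docId, { enabled: currentEnabled }) // roll back to the last confirmed state
  }
}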

View File

@@ -22,10 +22,10 @@ import {
type TagDefinition,
useKnowledgeBaseTagDefinitions,
} from '@/hooks/kb/use-knowledge-base-tag-definitions'
-import { useCreateTagDefinition, useDeleteTagDefinition } from '@/hooks/queries/knowledge'
const logger = createLogger('BaseTagsModal')
/** Field type display labels */
const FIELD_TYPE_LABELS: Record<string, string> = {
text: 'Text',
number: 'Number',
@@ -45,6 +45,7 @@ interface DocumentListProps {
totalCount: number
}
/** Displays a list of documents affected by tag operations */
function DocumentList({ documents, totalCount }: DocumentListProps) {
const displayLimit = 5
const hasMore = totalCount > displayLimit
@@ -94,14 +95,13 @@ export function BaseTagsModal({ open, onOpenChange, knowledgeBaseId }: BaseTagsM
const { tagDefinitions: kbTagDefinitions, fetchTagDefinitions: refreshTagDefinitions } =
useKnowledgeBaseTagDefinitions(knowledgeBaseId)
-const createTagMutation = useCreateTagDefinition()
-const deleteTagMutation = useDeleteTagDefinition()
const [deleteTagDialogOpen, setDeleteTagDialogOpen] = useState(false)
const [selectedTag, setSelectedTag] = useState<TagDefinition | null>(null)
const [viewDocumentsDialogOpen, setViewDocumentsDialogOpen] = useState(false)
+const [isDeletingTag, setIsDeletingTag] = useState(false)
const [tagUsageData, setTagUsageData] = useState<TagUsageData[]>([])
const [isCreatingTag, setIsCreatingTag] = useState(false)
+const [isSavingTag, setIsSavingTag] = useState(false)
const [createTagForm, setCreateTagForm] = useState({
displayName: '',
fieldType: 'text',
@@ -177,12 +177,13 @@ export function BaseTagsModal({ open, onOpenChange, knowledgeBaseId }: BaseTagsM
}
const tagNameConflict =
-isCreatingTag && !createTagMutation.isPending && hasTagNameConflict(createTagForm.displayName)
+isCreatingTag && !isSavingTag && hasTagNameConflict(createTagForm.displayName)
const canSaveTag = () => {
return createTagForm.displayName.trim() && !hasTagNameConflict(createTagForm.displayName)
}
/** Get slot usage counts per field type */
const getSlotUsageByFieldType = (fieldType: string): { used: number; max: number } => {
const config = TAG_SLOT_CONFIG[fieldType as keyof typeof TAG_SLOT_CONFIG]
if (!config) return { used: 0, max: 0 }
@@ -190,11 +191,13 @@ export function BaseTagsModal({ open, onOpenChange, knowledgeBaseId }: BaseTagsM
return { used, max: config.maxSlots }
}
/** Check if a field type has available slots */
const hasAvailableSlots = (fieldType: string): boolean => {
const { used, max } = getSlotUsageByFieldType(fieldType)
return used < max
}
/** Field type options for Combobox */
const fieldTypeOptions: ComboboxOption[] = useMemo(() => {
return SUPPORTED_FIELD_TYPES.filter((type) => hasAvailableSlots(type)).map((type) => {
const { used, max } = getSlotUsageByFieldType(type)
@@ -208,17 +211,43 @@ export function BaseTagsModal({ open, onOpenChange, knowledgeBaseId }: BaseTagsM
const saveTagDefinition = async () => {
if (!canSaveTag()) return
+setIsSavingTag(true)
try {
// Check if selected field type has available slots
if (!hasAvailableSlots(createTagForm.fieldType)) {
throw new Error(`No available slots for ${createTagForm.fieldType} type`)
}
-await createTagMutation.mutateAsync({
-knowledgeBaseId,
+// Get the next available slot from the API
+const slotResponse = await fetch(
+`/api/knowledge/${knowledgeBaseId}/next-available-slot?fieldType=${createTagForm.fieldType}`
+)
+if (!slotResponse.ok) {
+throw new Error('Failed to get available slot')
+}
+const slotResult = await slotResponse.json()
+if (!slotResult.success || !slotResult.data?.nextAvailableSlot) {
+throw new Error('No available tag slots for this field type')
+}
+const newTagDefinition = {
+tagSlot: slotResult.data.nextAvailableSlot,
+displayName: createTagForm.displayName.trim(),
+fieldType: createTagForm.fieldType,
+}
+const response = await fetch(`/api/knowledge/${knowledgeBaseId}/tag-definitions`, {
+method: 'POST',
+headers: {
+'Content-Type': 'application/json',
+},
+body: JSON.stringify(newTagDefinition),
+})
+if (!response.ok) {
+throw new Error('Failed to create tag definition')
+}
await Promise.all([refreshTagDefinitions(), fetchTagUsage()])
setCreateTagForm({
@@ -228,17 +257,27 @@ export function BaseTagsModal({ open, onOpenChange, knowledgeBaseId }: BaseTagsM
setIsCreatingTag(false)
} catch (error) {
logger.error('Error creating tag definition:', error)
+} finally {
+setIsSavingTag(false)
}
}
const confirmDeleteTag = async () => {
if (!selectedTag) return
+setIsDeletingTag(true)
try {
-await deleteTagMutation.mutateAsync({
-knowledgeBaseId,
-tagDefinitionId: selectedTag.id,
-})
+const response = await fetch(
+`/api/knowledge/${knowledgeBaseId}/tag-definitions/${selectedTag.id}`,
+{
+method: 'DELETE',
+}
+)
+if (!response.ok) {
+const errorText = await response.text()
+throw new Error(`Failed to delete tag definition: ${response.status} ${errorText}`)
+}
await Promise.all([refreshTagDefinitions(), fetchTagUsage()])
@@ -246,6 +285,8 @@ export function BaseTagsModal({ open, onOpenChange, knowledgeBaseId }: BaseTagsM
setSelectedTag(null)
} catch (error) {
logger.error('Error deleting tag definition:', error)
+} finally {
+setIsDeletingTag(false)
}
}
@@ -392,11 +433,11 @@ export function BaseTagsModal({ open, onOpenChange, knowledgeBaseId }: BaseTagsM
className='flex-1'
disabled={
!canSaveTag() ||
-createTagMutation.isPending ||
+isSavingTag ||
!hasAvailableSlots(createTagForm.fieldType)
}
>
-{createTagMutation.isPending ? 'Creating...' : 'Create Tag'}
+{isSavingTag ? 'Creating...' : 'Create Tag'}
</Button>
</div>
</div>
@@ -440,17 +481,13 @@ export function BaseTagsModal({ open, onOpenChange, knowledgeBaseId }: BaseTagsM
<ModalFooter>
<Button
variant='default'
disabled={deleteTagMutation.isPending}
disabled={isDeletingTag}
onClick={() => setDeleteTagDialogOpen(false)}
>
Cancel
</Button>
<Button
variant='destructive'
onClick={confirmDeleteTag}
disabled={deleteTagMutation.isPending}
>
{deleteTagMutation.isPending ? 'Deleting...' : 'Delete Tag'}
<Button variant='destructive' onClick={confirmDeleteTag} disabled={isDeletingTag}>
{isDeletingTag ? <>Deleting...</> : 'Delete Tag'}
</Button>
</ModalFooter>
</ModalContent>


@@ -3,6 +3,7 @@
import { useEffect, useRef, useState } from 'react'
import { zodResolver } from '@hookform/resolvers/zod'
import { createLogger } from '@sim/logger'
import { useQueryClient } from '@tanstack/react-query'
import { Loader2, RotateCcw, X } from 'lucide-react'
import { useParams } from 'next/navigation'
import { useForm } from 'react-hook-form'
@@ -22,7 +23,7 @@ import { cn } from '@/lib/core/utils/cn'
import { formatFileSize, validateKnowledgeBaseFile } from '@/lib/uploads/utils/file-utils'
import { ACCEPT_ATTRIBUTE } from '@/lib/uploads/utils/validation'
import { useKnowledgeUpload } from '@/app/workspace/[workspaceId]/knowledge/hooks/use-knowledge-upload'
import { useCreateKnowledgeBase, useDeleteKnowledgeBase } from '@/hooks/queries/knowledge'
import { knowledgeKeys } from '@/hooks/queries/knowledge'
const logger = createLogger('CreateBaseModal')
@@ -81,11 +82,10 @@ interface SubmitStatus {
export function CreateBaseModal({ open, onOpenChange }: CreateBaseModalProps) {
const params = useParams()
const workspaceId = params.workspaceId as string
const createKnowledgeBaseMutation = useCreateKnowledgeBase(workspaceId)
const deleteKnowledgeBaseMutation = useDeleteKnowledgeBase(workspaceId)
const queryClient = useQueryClient()
const fileInputRef = useRef<HTMLInputElement>(null)
const [isSubmitting, setIsSubmitting] = useState(false)
const [submitStatus, setSubmitStatus] = useState<SubmitStatus | null>(null)
const [files, setFiles] = useState<FileWithPreview[]>([])
const [fileError, setFileError] = useState<string | null>(null)
@@ -245,14 +245,12 @@ export function CreateBaseModal({ open, onOpenChange }: CreateBaseModalProps) {
})
}
const isSubmitting =
createKnowledgeBaseMutation.isPending || deleteKnowledgeBaseMutation.isPending || isUploading
const onSubmit = async (data: FormValues) => {
setIsSubmitting(true)
setSubmitStatus(null)
try {
const newKnowledgeBase = await createKnowledgeBaseMutation.mutateAsync({
const knowledgeBasePayload = {
name: data.name,
description: data.description || undefined,
workspaceId: workspaceId,
@@ -261,8 +259,29 @@ export function CreateBaseModal({ open, onOpenChange }: CreateBaseModalProps) {
minSize: data.minChunkSize,
overlap: data.overlapSize,
},
}
const response = await fetch('/api/knowledge', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify(knowledgeBasePayload),
})
if (!response.ok) {
const errorData = await response.json()
throw new Error(errorData.error || 'Failed to create knowledge base')
}
const result = await response.json()
if (!result.success) {
throw new Error(result.error || 'Failed to create knowledge base')
}
const newKnowledgeBase = result.data
if (files.length > 0) {
try {
const uploadedFiles = await uploadFiles(files, newKnowledgeBase.id, {
@@ -274,11 +293,15 @@ export function CreateBaseModal({ open, onOpenChange }: CreateBaseModalProps) {
logger.info(`Successfully uploaded ${uploadedFiles.length} files`)
logger.info(`Started processing ${uploadedFiles.length} documents in the background`)
await queryClient.invalidateQueries({
queryKey: knowledgeKeys.list(workspaceId),
})
} catch (uploadError) {
logger.error('File upload failed, deleting knowledge base:', uploadError)
try {
await deleteKnowledgeBaseMutation.mutateAsync({
knowledgeBaseId: newKnowledgeBase.id,
await fetch(`/api/knowledge/${newKnowledgeBase.id}`, {
method: 'DELETE',
})
logger.info(`Deleted orphaned knowledge base: ${newKnowledgeBase.id}`)
} catch (deleteError) {
@@ -286,6 +309,10 @@ export function CreateBaseModal({ open, onOpenChange }: CreateBaseModalProps) {
}
throw uploadError
}
} else {
await queryClient.invalidateQueries({
queryKey: knowledgeKeys.list(workspaceId),
})
}
files.forEach((file) => URL.revokeObjectURL(file.preview))
@@ -298,6 +325,8 @@ export function CreateBaseModal({ open, onOpenChange }: CreateBaseModalProps) {
type: 'error',
message: error instanceof Error ? error.message : 'An unknown error occurred',
})
} finally {
setIsSubmitting(false)
}
}
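// The upload-failure branch above is a compensating action: if uploads fail,
// the freshly created knowledge base is deleted so no orphan remains. A
// generic sketch of that create-then-rollback shape (names illustrative):
async function createWithRollback<T>(
  create: () => Promise<T>,
  use: (created: T) => Promise<void>,
  rollback: (created: T) => Promise<void>
): Promise<T> {
  const created = await create()
  try {
    await use(created)
  } catch (error) {
    try {
      await rollback(created)
    } catch {
      // Swallow rollback failures; the original error is more informative.
    }
    throw error
  }
  return created
}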


@@ -2,6 +2,7 @@
import { useEffect, useState } from 'react'
import { createLogger } from '@sim/logger'
import { useQueryClient } from '@tanstack/react-query'
import { AlertTriangle, ChevronDown, LibraryBig, MoreHorizontal } from 'lucide-react'
import Link from 'next/link'
import {
@@ -14,7 +15,7 @@ import {
} from '@/components/emcn'
import { Trash } from '@/components/emcn/icons/trash'
import { filterButtonClass } from '@/app/workspace/[workspaceId]/knowledge/components/constants'
import { useUpdateKnowledgeBase } from '@/hooks/queries/knowledge'
import { knowledgeKeys } from '@/hooks/queries/knowledge'
const logger = createLogger('KnowledgeHeader')
@@ -53,13 +54,14 @@ interface Workspace {
}
export function KnowledgeHeader({ breadcrumbs, options }: KnowledgeHeaderProps) {
const queryClient = useQueryClient()
const [isActionsPopoverOpen, setIsActionsPopoverOpen] = useState(false)
const [isWorkspacePopoverOpen, setIsWorkspacePopoverOpen] = useState(false)
const [workspaces, setWorkspaces] = useState<Workspace[]>([])
const [isLoadingWorkspaces, setIsLoadingWorkspaces] = useState(false)
const [isUpdatingWorkspace, setIsUpdatingWorkspace] = useState(false)
const updateKnowledgeBase = useUpdateKnowledgeBase()
// Fetch available workspaces
useEffect(() => {
if (!options?.knowledgeBaseId) return
@@ -74,6 +76,7 @@ export function KnowledgeHeader({ breadcrumbs, options }: KnowledgeHeaderProps)
const data = await response.json()
// Filter workspaces where user has write/admin permissions
const availableWorkspaces = data.workspaces
.filter((ws: any) => ws.permissions === 'write' || ws.permissions === 'admin')
.map((ws: any) => ({
@@ -94,27 +97,47 @@ export function KnowledgeHeader({ breadcrumbs, options }: KnowledgeHeaderProps)
}, [options?.knowledgeBaseId])
const handleWorkspaceChange = async (workspaceId: string | null) => {
if (updateKnowledgeBase.isPending || !options?.knowledgeBaseId) return
if (isUpdatingWorkspace || !options?.knowledgeBaseId) return
setIsWorkspacePopoverOpen(false)
try {
setIsUpdatingWorkspace(true)
setIsWorkspacePopoverOpen(false)
updateKnowledgeBase.mutate(
{
knowledgeBaseId: options.knowledgeBaseId,
updates: { workspaceId },
},
{
onSuccess: () => {
logger.info(
`Knowledge base workspace updated: ${options.knowledgeBaseId} -> ${workspaceId}`
)
options.onWorkspaceChange?.(workspaceId)
},
onError: (err) => {
logger.error('Error updating workspace:', err)
const response = await fetch(`/api/knowledge/${options.knowledgeBaseId}`, {
method: 'PUT',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
workspaceId,
}),
})
if (!response.ok) {
const result = await response.json()
throw new Error(result.error || 'Failed to update workspace')
}
)
const result = await response.json()
if (result.success) {
logger.info(
`Knowledge base workspace updated: ${options.knowledgeBaseId} -> ${workspaceId}`
)
await queryClient.invalidateQueries({
queryKey: knowledgeKeys.detail(options.knowledgeBaseId),
})
await options.onWorkspaceChange?.(workspaceId)
} else {
throw new Error(result.error || 'Failed to update workspace')
}
} catch (err) {
logger.error('Error updating workspace:', err)
} finally {
setIsUpdatingWorkspace(false)
}
}
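// Because the mutation above is a plain fetch rather than a React Query
// mutation, the affected cache entries must be invalidated manually. A
// minimal sketch, assuming knowledgeKeys follows the usual key-factory
// convention (the actual factory lives in '@/hooks/queries/knowledge'):
import type { QueryClient } from '@tanstack/react-query'
const knowledgeKeysSketch = {
  detail: (id: string) => ['knowledge', 'detail', id] as const,
}
async function refreshKnowledgeDetail(queryClient: QueryClient, knowledgeBaseId: string) {
  // Marks the detail query stale and refetches any active observers.
  await queryClient.invalidateQueries({
    queryKey: knowledgeKeysSketch.detail(knowledgeBaseId),
  })
}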
const currentWorkspace = workspaces.find((ws) => ws.id === options?.currentWorkspaceId)
@@ -124,6 +147,7 @@ export function KnowledgeHeader({ breadcrumbs, options }: KnowledgeHeaderProps)
<div className={HEADER_STYLES.container}>
<div className={HEADER_STYLES.breadcrumbs}>
{breadcrumbs.map((breadcrumb, index) => {
// Use unique identifier when available; fall back to a content-based key
const key = breadcrumb.id || `${breadcrumb.label}-${breadcrumb.href || index}`
return (
@@ -165,13 +189,13 @@ export function KnowledgeHeader({ breadcrumbs, options }: KnowledgeHeaderProps)
<PopoverTrigger asChild>
<Button
variant='outline'
disabled={isLoadingWorkspaces || updateKnowledgeBase.isPending}
disabled={isLoadingWorkspaces || isUpdatingWorkspace}
className={filterButtonClass}
>
<span className='truncate'>
{isLoadingWorkspaces
? 'Loading...'
: updateKnowledgeBase.isPending
: isUpdatingWorkspace
? 'Updating...'
: currentWorkspace?.name || 'No workspace'}
</span>


@@ -32,7 +32,6 @@ import {
import { useUserPermissionsContext } from '@/app/workspace/[workspaceId]/providers/workspace-permissions-provider'
import { useContextMenu } from '@/app/workspace/[workspaceId]/w/components/sidebar/hooks'
import { useKnowledgeBasesList } from '@/hooks/kb/use-knowledge'
import { useDeleteKnowledgeBase, useUpdateKnowledgeBase } from '@/hooks/queries/knowledge'
import { useDebounce } from '@/hooks/use-debounce'
const logger = createLogger('Knowledge')
@@ -52,12 +51,10 @@ export function Knowledge() {
const params = useParams()
const workspaceId = params.workspaceId as string
const { knowledgeBases, isLoading, error } = useKnowledgeBasesList(workspaceId)
const { knowledgeBases, isLoading, error, removeKnowledgeBase, updateKnowledgeBase } =
useKnowledgeBasesList(workspaceId)
const userPermissions = useUserPermissionsContext()
const { mutateAsync: updateKnowledgeBaseMutation } = useUpdateKnowledgeBase(workspaceId)
const { mutateAsync: deleteKnowledgeBaseMutation } = useDeleteKnowledgeBase(workspaceId)
const [searchQuery, setSearchQuery] = useState('')
const debouncedSearchQuery = useDebounce(searchQuery, 300)
const [isCreateModalOpen, setIsCreateModalOpen] = useState(false)
@@ -115,13 +112,29 @@ export function Knowledge() {
*/
const handleUpdateKnowledgeBase = useCallback(
async (id: string, name: string, description: string) => {
await updateKnowledgeBaseMutation({
knowledgeBaseId: id,
updates: { name, description },
const response = await fetch(`/api/knowledge/${id}`, {
method: 'PUT',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({ name, description }),
})
logger.info(`Knowledge base updated: ${id}`)
if (!response.ok) {
const result = await response.json()
throw new Error(result.error || 'Failed to update knowledge base')
}
const result = await response.json()
if (result.success) {
logger.info(`Knowledge base updated: ${id}`)
updateKnowledgeBase(id, { name, description })
} else {
throw new Error(result.error || 'Failed to update knowledge base')
}
},
[updateKnowledgeBaseMutation]
[updateKnowledgeBase]
)
/**
@@ -129,10 +142,25 @@ export function Knowledge() {
*/
const handleDeleteKnowledgeBase = useCallback(
async (id: string) => {
await deleteKnowledgeBaseMutation({ knowledgeBaseId: id })
logger.info(`Knowledge base deleted: ${id}`)
const response = await fetch(`/api/knowledge/${id}`, {
method: 'DELETE',
})
if (!response.ok) {
const result = await response.json()
throw new Error(result.error || 'Failed to delete knowledge base')
}
const result = await response.json()
if (result.success) {
logger.info(`Knowledge base deleted: ${id}`)
removeKnowledgeBase(id)
} else {
throw new Error(result.error || 'Failed to delete knowledge base')
}
},
[deleteKnowledgeBaseMutation]
[removeKnowledgeBase]
)
/**


@@ -26,6 +26,9 @@ import { CLASS_TOOL_METADATA } from '@/stores/panel/copilot/store'
import type { SubAgentContentBlock } from '@/stores/panel/copilot/types'
import { useWorkflowStore } from '@/stores/workflows/workflow/store'
/**
* Parse special tags from content
*/
/**
* Plan step can be either a string or an object with title and plan
*/
@@ -44,56 +47,6 @@ interface ParsedTags {
cleanContent: string
}
/**
* Extract plan steps from plan_respond tool calls in subagent blocks.
* Returns { steps, isComplete } where steps is in the format expected by the PlanSteps component.
*/
function extractPlanFromBlocks(blocks: SubAgentContentBlock[] | undefined): {
steps: Record<string, PlanStep> | undefined
isComplete: boolean
} {
if (!blocks) return { steps: undefined, isComplete: false }
// Find the plan_respond tool call
const planRespondBlock = blocks.find(
(b) => b.type === 'subagent_tool_call' && b.toolCall?.name === 'plan_respond'
)
if (!planRespondBlock?.toolCall) {
return { steps: undefined, isComplete: false }
}
// Tool call arguments can be in different places depending on the source
// Also handle nested data.arguments structure from the schema
const tc = planRespondBlock.toolCall as any
const args = tc.params || tc.parameters || tc.input || tc.arguments || tc.data?.arguments || {}
const stepsArray = args.steps
if (!Array.isArray(stepsArray) || stepsArray.length === 0) {
return { steps: undefined, isComplete: false }
}
// Convert array format to Record<string, PlanStep> format
// From: [{ number: 1, title: "..." }, { number: 2, title: "..." }]
// To: { "1": "...", "2": "..." }
const steps: Record<string, PlanStep> = {}
for (const step of stepsArray) {
if (step.number !== undefined && step.title) {
steps[String(step.number)] = step.title
}
}
// Check if the tool call is complete (not pending/executing)
const isComplete =
planRespondBlock.toolCall.state === ClientToolCallState.success ||
planRespondBlock.toolCall.state === ClientToolCallState.error
return {
steps: Object.keys(steps).length > 0 ? steps : undefined,
isComplete,
}
}
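// Worked example of the array-to-record conversion above (step shape taken
// from the comments in this function):
// input:  [{ number: 1, title: 'Fetch data' }, { number: 2, title: 'Render' }]
// output: { '1': 'Fetch data', '2': 'Render' }
function stepsArrayToRecord(
  stepsArray: Array<{ number?: number; title?: string }>
): Record<string, string> {
  const steps: Record<string, string> = {}
  for (const step of stepsArray) {
    if (step.number !== undefined && step.title) {
      steps[String(step.number)] = step.title
    }
  }
  return steps
}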
/**
* Try to parse partial JSON for streaming options.
* Attempts to extract complete key-value pairs from incomplete JSON.
@@ -701,20 +654,11 @@ function SubAgentThinkingContent({
}
}
// Extract plan from plan_respond tool call (preferred) or fall back to <plan> tags
const { steps: planSteps, isComplete: planComplete } = extractPlanFromBlocks(blocks)
const allParsed = parseSpecialTags(allRawText)
// Prefer plan_respond tool data over <plan> tags
const hasPlan =
!!(planSteps && Object.keys(planSteps).length > 0) ||
!!(allParsed.plan && Object.keys(allParsed.plan).length > 0)
const planToRender = planSteps || allParsed.plan
const isPlanStreaming = planSteps ? !planComplete : isStreaming
if (!cleanText.trim() && !allParsed.plan) return null
if (!cleanText.trim() && !hasPlan) return null
const hasSpecialTags = hasPlan
const hasSpecialTags = !!(allParsed.plan && Object.keys(allParsed.plan).length > 0)
return (
<div className='space-y-1.5'>
@@ -726,7 +670,9 @@ function SubAgentThinkingContent({
hasSpecialTags={hasSpecialTags}
/>
)}
{hasPlan && planToRender && <PlanSteps steps={planToRender} streaming={isPlanStreaming} />}
{allParsed.plan && Object.keys(allParsed.plan).length > 0 && (
<PlanSteps steps={allParsed.plan} streaming={isStreaming} />
)}
</div>
)
}
@@ -798,19 +744,8 @@ const SubagentContentRenderer = memo(function SubagentContentRenderer({
}
const allParsed = parseSpecialTags(allRawText)
// Extract plan from plan_respond tool call (preferred) or fall back to <plan> tags
const { steps: planSteps, isComplete: planComplete } = extractPlanFromBlocks(
toolCall.subAgentBlocks
)
const hasPlan =
!!(planSteps && Object.keys(planSteps).length > 0) ||
!!(allParsed.plan && Object.keys(allParsed.plan).length > 0)
const planToRender = planSteps || allParsed.plan
const isPlanStreaming = planSteps ? !planComplete : isStreaming
const hasSpecialTags = !!(
hasPlan ||
(allParsed.plan && Object.keys(allParsed.plan).length > 0) ||
(allParsed.options && Object.keys(allParsed.options).length > 0)
)
@@ -822,6 +757,8 @@ const SubagentContentRenderer = memo(function SubagentContentRenderer({
const outerLabel = getSubagentCompletionLabel(toolCall.name)
const durationText = `${outerLabel} for ${formatDuration(duration)}`
const hasPlan = allParsed.plan && Object.keys(allParsed.plan).length > 0
const renderCollapsibleContent = () => (
<>
{segments.map((segment, index) => {
@@ -863,7 +800,7 @@ const SubagentContentRenderer = memo(function SubagentContentRenderer({
return (
<div className='w-full space-y-1.5'>
{renderCollapsibleContent()}
{hasPlan && planToRender && <PlanSteps steps={planToRender} streaming={isPlanStreaming} />}
{hasPlan && <PlanSteps steps={allParsed.plan!} streaming={isStreaming} />}
</div>
)
}
@@ -895,7 +832,7 @@ const SubagentContentRenderer = memo(function SubagentContentRenderer({
</div>
{/* Plan stays outside the collapsible */}
{hasPlan && planToRender && <PlanSteps steps={planToRender} />}
{hasPlan && <PlanSteps steps={allParsed.plan!} />}
</div>
)
})
@@ -1475,11 +1412,7 @@ export function ToolCall({ toolCall: toolCallProp, toolCallId, onStateChange }:
if (
toolCall.name === 'checkoff_todo' ||
toolCall.name === 'mark_todo_in_progress' ||
toolCall.name === 'tool_search_tool_regex' ||
toolCall.name === 'user_memory' ||
toolCall.name === 'edit_respond' ||
toolCall.name === 'debug_respond' ||
toolCall.name === 'plan_respond'
toolCall.name === 'tool_search_tool_regex'
)
return null


@@ -191,10 +191,26 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
}, [isInitialized, messages.length, scrollToBottom])
/**
* Note: We intentionally do NOT abort on component unmount.
* Streams continue server-side and can be resumed when the user returns.
* The server persists chunks to Redis for resumption.
* Cleanup on component unmount (page refresh, navigation, etc.)
* Uses a ref to track sending state to avoid stale closure issues
* Note: Parent workflow.tsx also has useStreamCleanup for page-level cleanup
*/
const isSendingRef = useRef(isSendingMessage)
isSendingRef.current = isSendingMessage
const abortMessageRef = useRef(abortMessage)
abortMessageRef.current = abortMessage
useEffect(() => {
return () => {
// Use refs to check current values, not stale closure values
if (isSendingRef.current) {
abortMessageRef.current()
logger.info('Aborted active message streaming due to component unmount')
}
}
// Empty deps - only run cleanup on actual unmount, not on re-renders
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [])
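// The ref mirroring above is the standard "latest ref" pattern: the unmount
// cleanup closes over stable ref objects instead of captured values, so it
// always reads the state current at unmount time. A reusable sketch:
import { useEffect, useRef } from 'react'
function useLatest<T>(value: T) {
  const ref = useRef(value)
  ref.current = value
  return ref
}
function useAbortOnUnmount(isActive: boolean, abort: () => void) {
  const isActiveRef = useLatest(isActive)
  const abortRef = useLatest(abort)
  useEffect(() => {
    return () => {
      // Reads the latest values, not the ones captured on mount.
      if (isActiveRef.current) abortRef.current()
    }
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, [])
}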
/**
* Container-level click capture to cancel edit mode when clicking outside the current edit area


@@ -452,6 +452,39 @@ console.log(limits);`
</div>
)}
{/* <div>
<div className='mb-[6.5px] flex items-center justify-between'>
<Label className='block pl-[2px] font-medium text-[13px] text-[var(--text-primary)]'>
URL
</Label>
<Tooltip.Root>
<Tooltip.Trigger asChild>
<Button
variant='ghost'
onClick={() => handleCopy('endpoint', info.endpoint)}
aria-label='Copy endpoint'
className='!p-1.5 -my-1.5'
>
{copied.endpoint ? (
<Check className='h-3 w-3' />
) : (
<Clipboard className='h-3 w-3' />
)}
</Button>
</Tooltip.Trigger>
<Tooltip.Content>
<span>{copied.endpoint ? 'Copied' : 'Copy'}</span>
</Tooltip.Content>
</Tooltip.Root>
</div>
<Code.Viewer
code={info.endpoint}
language='javascript'
wrapText
className='!min-h-0 rounded-[4px] border border-[var(--border-1)]'
/>
</div> */}
<div>
<div className='mb-[6.5px] flex items-center justify-between'>
<Label className='block pl-[2px] font-medium text-[13px] text-[var(--text-primary)]'>


@@ -1,260 +0,0 @@
'use client'
import { useCallback, useEffect, useMemo, useRef, useState } from 'react'
import {
Badge,
Button,
Input,
Label,
Modal,
ModalBody,
ModalContent,
ModalFooter,
ModalHeader,
Textarea,
} from '@/components/emcn'
import { normalizeInputFormatValue } from '@/lib/workflows/input-format'
import { isValidStartBlockType } from '@/lib/workflows/triggers/start-block-types'
import type { InputFormatField } from '@/lib/workflows/types'
import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
import { useSubBlockStore } from '@/stores/workflows/subblock/store'
import { useWorkflowStore } from '@/stores/workflows/workflow/store'
type NormalizedField = InputFormatField & { name: string }
interface ApiInfoModalProps {
open: boolean
onOpenChange: (open: boolean) => void
workflowId: string
}
export function ApiInfoModal({ open, onOpenChange, workflowId }: ApiInfoModalProps) {
const blocks = useWorkflowStore((state) => state.blocks)
const setValue = useSubBlockStore((state) => state.setValue)
const subBlockValues = useSubBlockStore((state) =>
workflowId ? (state.workflowValues[workflowId] ?? {}) : {}
)
const workflowMetadata = useWorkflowRegistry((state) =>
workflowId ? state.workflows[workflowId] : undefined
)
const updateWorkflow = useWorkflowRegistry((state) => state.updateWorkflow)
const [description, setDescription] = useState('')
const [paramDescriptions, setParamDescriptions] = useState<Record<string, string>>({})
const [isSaving, setIsSaving] = useState(false)
const [showUnsavedChangesAlert, setShowUnsavedChangesAlert] = useState(false)
const initialDescriptionRef = useRef('')
const initialParamDescriptionsRef = useRef<Record<string, string>>({})
const starterBlockId = useMemo(() => {
for (const [blockId, block] of Object.entries(blocks)) {
if (!block || typeof block !== 'object') continue
const blockType = (block as { type?: string }).type
if (blockType && isValidStartBlockType(blockType)) {
return blockId
}
}
return null
}, [blocks])
const inputFormat = useMemo((): NormalizedField[] => {
if (!starterBlockId) return []
const storeValue = subBlockValues[starterBlockId]?.inputFormat
const normalized = normalizeInputFormatValue(storeValue) as NormalizedField[]
if (normalized.length > 0) return normalized
const startBlock = blocks[starterBlockId]
const blockValue = startBlock?.subBlocks?.inputFormat?.value
return normalizeInputFormatValue(blockValue) as NormalizedField[]
}, [starterBlockId, subBlockValues, blocks])
useEffect(() => {
if (open) {
const normalizedDesc = workflowMetadata?.description?.toLowerCase().trim()
const isDefaultDescription =
!workflowMetadata?.description ||
workflowMetadata.description === workflowMetadata.name ||
normalizedDesc === 'new workflow' ||
normalizedDesc === 'your first workflow - start building here!'
const initialDescription = isDefaultDescription ? '' : workflowMetadata?.description || ''
setDescription(initialDescription)
initialDescriptionRef.current = initialDescription
const descriptions: Record<string, string> = {}
for (const field of inputFormat) {
if (field.description) {
descriptions[field.name] = field.description
}
}
setParamDescriptions(descriptions)
initialParamDescriptionsRef.current = { ...descriptions }
}
}, [open, workflowMetadata, inputFormat])
const hasChanges = useMemo(() => {
if (description.trim() !== initialDescriptionRef.current.trim()) return true
for (const field of inputFormat) {
const currentValue = (paramDescriptions[field.name] || '').trim()
const initialValue = (initialParamDescriptionsRef.current[field.name] || '').trim()
if (currentValue !== initialValue) return true
}
return false
}, [description, paramDescriptions, inputFormat])
const handleParamDescriptionChange = (fieldName: string, value: string) => {
setParamDescriptions((prev) => ({
...prev,
[fieldName]: value,
}))
}
const handleCloseAttempt = useCallback(() => {
if (hasChanges && !isSaving) {
setShowUnsavedChangesAlert(true)
} else {
onOpenChange(false)
}
}, [hasChanges, isSaving, onOpenChange])
const handleDiscardChanges = useCallback(() => {
setShowUnsavedChangesAlert(false)
setDescription(initialDescriptionRef.current)
setParamDescriptions({ ...initialParamDescriptionsRef.current })
onOpenChange(false)
}, [onOpenChange])
const handleSave = useCallback(async () => {
if (!workflowId) return
const activeWorkflowId = useWorkflowRegistry.getState().activeWorkflowId
if (activeWorkflowId !== workflowId) {
return
}
setIsSaving(true)
try {
if (description.trim() !== (workflowMetadata?.description || '')) {
updateWorkflow(workflowId, { description: description.trim() || 'New workflow' })
}
if (starterBlockId) {
const updatedValue = inputFormat.map((field) => ({
...field,
description: paramDescriptions[field.name]?.trim() || undefined,
}))
setValue(starterBlockId, 'inputFormat', updatedValue)
}
onOpenChange(false)
} finally {
setIsSaving(false)
}
}, [
workflowId,
description,
workflowMetadata,
updateWorkflow,
starterBlockId,
inputFormat,
paramDescriptions,
setValue,
onOpenChange,
])
return (
<>
<Modal open={open} onOpenChange={(openState) => !openState && handleCloseAttempt()}>
<ModalContent className='max-w-[480px]'>
<ModalHeader>
<span>Edit API Info</span>
</ModalHeader>
<ModalBody className='space-y-[12px]'>
<div>
<Label className='mb-[6.5px] block pl-[2px] font-medium text-[13px] text-[var(--text-primary)]'>
Description
</Label>
<Textarea
placeholder='Describe what this workflow API does...'
className='min-h-[80px] resize-none'
value={description}
onChange={(e) => setDescription(e.target.value)}
/>
</div>
{inputFormat.length > 0 && (
<div>
<Label className='mb-[6.5px] block pl-[2px] font-medium text-[13px] text-[var(--text-primary)]'>
Parameters ({inputFormat.length})
</Label>
<div className='flex flex-col gap-[8px]'>
{inputFormat.map((field) => (
<div
key={field.name}
className='overflow-hidden rounded-[4px] border border-[var(--border-1)]'
>
<div className='flex items-center justify-between bg-[var(--surface-4)] px-[10px] py-[5px]'>
<div className='flex min-w-0 flex-1 items-center gap-[8px]'>
<span className='block truncate font-medium text-[14px] text-[var(--text-tertiary)]'>
{field.name}
</span>
<Badge size='sm'>{field.type || 'string'}</Badge>
</div>
</div>
<div className='border-[var(--border-1)] border-t px-[10px] pt-[6px] pb-[10px]'>
<div className='flex flex-col gap-[6px]'>
<Label className='text-[13px]'>Description</Label>
<Input
value={paramDescriptions[field.name] || ''}
onChange={(e) =>
handleParamDescriptionChange(field.name, e.target.value)
}
placeholder={`Enter description for ${field.name}`}
/>
</div>
</div>
</div>
))}
</div>
</div>
)}
</ModalBody>
<ModalFooter>
<Button variant='default' onClick={handleCloseAttempt} disabled={isSaving}>
Cancel
</Button>
<Button variant='tertiary' onClick={handleSave} disabled={isSaving || !hasChanges}>
{isSaving ? 'Saving...' : 'Save'}
</Button>
</ModalFooter>
</ModalContent>
</Modal>
<Modal open={showUnsavedChangesAlert} onOpenChange={setShowUnsavedChangesAlert}>
<ModalContent className='max-w-[400px]'>
<ModalHeader>
<span>Unsaved Changes</span>
</ModalHeader>
<ModalBody>
<p className='text-[14px] text-[var(--text-secondary)]'>
You have unsaved changes. Are you sure you want to discard them?
</p>
</ModalBody>
<ModalFooter>
<Button variant='default' onClick={() => setShowUnsavedChangesAlert(false)}>
Keep Editing
</Button>
<Button variant='destructive' onClick={handleDiscardChanges}>
Discard Changes
</Button>
</ModalFooter>
</ModalContent>
</Modal>
</>
)
}
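// The modal above detects unsaved changes by diffing current form state
// against initial values captured in refs when it opens. A minimal reusable
// sketch of that guard (hook name and API are illustrative):
import { useMemo, useRef } from 'react'
function useUnsavedChanges<T>(current: T, serialize: (value: T) => string) {
  const initialRef = useRef<string | null>(null)
  if (initialRef.current === null) {
    // Lazily capture the baseline on first render.
    initialRef.current = serialize(current)
  }
  const hasChanges = useMemo(
    () => serialize(current) !== initialRef.current,
    [current, serialize]
  )
  const reset = () => {
    initialRef.current = serialize(current)
  }
  return { hasChanges, reset }
}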


@@ -43,7 +43,6 @@ import type { WorkflowState } from '@/stores/workflows/workflow/types'
import { A2aDeploy } from './components/a2a/a2a'
import { ApiDeploy } from './components/api/api'
import { ChatDeploy, type ExistingChat } from './components/chat/chat'
import { ApiInfoModal } from './components/general/components/api-info-modal'
import { GeneralDeploy } from './components/general/general'
import { McpDeploy } from './components/mcp/mcp'
import { TemplateDeploy } from './components/template/template'
@@ -111,7 +110,6 @@ export function DeployModal({
const [chatSuccess, setChatSuccess] = useState(false)
const [isCreateKeyModalOpen, setIsCreateKeyModalOpen] = useState(false)
const [isApiInfoModalOpen, setIsApiInfoModalOpen] = useState(false)
const userPermissions = useUserPermissionsContext()
const canManageWorkspaceKeys = userPermissions.canAdmin
const { config: permissionConfig } = usePermissionConfig()
@@ -391,6 +389,11 @@ export function DeployModal({
form?.requestSubmit()
}, [])
const handleA2aFormSubmit = useCallback(() => {
const form = document.getElementById('a2a-deploy-form') as HTMLFormElement
form?.requestSubmit()
}, [])
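// requestSubmit() (unlike form.submit()) runs constraint validation and fires
// the submit event, so React onSubmit handlers still execute. A small guard
// around the pattern used above:
function submitFormById(formId: string) {
  const form = document.getElementById(formId)
  if (form instanceof HTMLFormElement) {
    form.requestSubmit()
  }
}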
const handleA2aPublish = useCallback(() => {
const form = document.getElementById('a2a-deploy-form')
const publishTrigger = form?.querySelector('[data-a2a-publish-trigger]') as HTMLButtonElement
@@ -591,11 +594,7 @@ export function DeployModal({
)}
{activeTab === 'api' && (
<ModalFooter className='items-center justify-between'>
<div>
<Button variant='default' onClick={() => setIsApiInfoModalOpen(true)}>
Edit API Info
</Button>
</div>
<div />
<div className='flex items-center gap-2'>
<Button
variant='tertiary'
@@ -881,14 +880,6 @@ export function DeployModal({
canManageWorkspaceKeys={canManageWorkspaceKeys}
defaultKeyType={defaultKeyType}
/>
{workflowId && (
<ApiInfoModal
open={isApiInfoModalOpen}
onOpenChange={setIsApiInfoModalOpen}
workflowId={workflowId}
/>
)}
</>
)
}


@@ -1,7 +1,7 @@
import type { ReactElement } from 'react'
import { useEffect, useRef, useState } from 'react'
import { createLogger } from '@sim/logger'
import { ChevronDown, ChevronsUpDown, ChevronUp, Plus } from 'lucide-react'
import { ChevronDown, ChevronUp, Plus } from 'lucide-react'
import { useParams } from 'next/navigation'
import Editor from 'react-simple-code-editor'
import { useUpdateNodeInternals } from 'reactflow'
@@ -39,16 +39,6 @@ import { useWorkflowStore } from '@/stores/workflows/workflow/store'
const logger = createLogger('ConditionInput')
/**
* Default height for router textareas in pixels
*/
const ROUTER_DEFAULT_HEIGHT_PX = 100
/**
* Minimum height for router textareas in pixels
*/
const ROUTER_MIN_HEIGHT_PX = 80
/**
* Represents a single conditional block (if/else if/else).
*/
@@ -753,61 +743,6 @@ export function ConditionInput({
}
}, [conditionalBlocks, isRouterMode])
// State for tracking individual router textarea heights
const [routerHeights, setRouterHeights] = useState<{ [key: string]: number }>({})
const isResizing = useRef(false)
/**
* Gets the height for a specific router block, returning default if not set.
*
* @param blockId - ID of the router block
* @returns Height in pixels
*/
const getRouterHeight = (blockId: string): number => {
return routerHeights[blockId] ?? ROUTER_DEFAULT_HEIGHT_PX
}
/**
* Handles mouse-based resize for router textareas.
*
* @param e - Mouse event from the resize handle
* @param blockId - ID of the block being resized
*/
const startRouterResize = (e: React.MouseEvent, blockId: string) => {
if (isPreview || disabled) return
e.preventDefault()
e.stopPropagation()
isResizing.current = true
const startY = e.clientY
const startHeight = getRouterHeight(blockId)
const handleMouseMove = (moveEvent: MouseEvent) => {
if (!isResizing.current) return
const deltaY = moveEvent.clientY - startY
const newHeight = Math.max(ROUTER_MIN_HEIGHT_PX, startHeight + deltaY)
// Update the textarea height directly for smooth resizing
const textarea = inputRefs.current.get(blockId)
if (textarea) {
textarea.style.height = `${newHeight}px`
}
// Update state so the latest height survives re-renders
setRouterHeights((prev) => ({ ...prev, [blockId]: newHeight }))
}
const handleMouseUp = () => {
isResizing.current = false
document.removeEventListener('mousemove', handleMouseMove)
document.removeEventListener('mouseup', handleMouseUp)
}
document.addEventListener('mousemove', handleMouseMove)
document.addEventListener('mouseup', handleMouseUp)
}
// Show loading or empty state if not ready or no blocks
if (!isReady || conditionalBlocks.length === 0) {
return (
@@ -972,24 +907,10 @@ export function ConditionInput({
}}
placeholder='Describe when this route should be taken...'
disabled={disabled || isPreview}
className='min-h-[100px] resize-none rounded-none border-0 px-3 py-2 text-sm placeholder:text-muted-foreground/50 focus-visible:ring-0 focus-visible:ring-offset-0'
rows={4}
style={{ height: `${getRouterHeight(block.id)}px` }}
className='min-h-[60px] resize-none rounded-none border-0 px-3 py-2 text-sm placeholder:text-muted-foreground/50 focus-visible:ring-0 focus-visible:ring-offset-0'
rows={2}
/>
{/* Custom resize handle */}
{!isPreview && !disabled && (
<div
className='absolute right-1 bottom-1 flex h-4 w-4 cursor-ns-resize items-center justify-center rounded-[4px] border border-[var(--border-1)] bg-[var(--surface-5)] dark:bg-[var(--surface-5)]'
onMouseDown={(e) => startRouterResize(e, block.id)}
onDragStart={(e) => {
e.preventDefault()
}}
>
<ChevronsUpDown className='h-3 w-3 text-[var(--text-muted)]' />
</div>
)}
{block.showEnvVars && (
<EnvVarDropdown
visible={block.showEnvVars}


@@ -234,45 +234,48 @@ export function LongInput({
}, [value])
// Handle resize functionality
const startResize = (e: React.MouseEvent) => {
e.preventDefault()
e.stopPropagation()
isResizing.current = true
const startResize = useCallback(
(e: React.MouseEvent) => {
e.preventDefault()
e.stopPropagation()
isResizing.current = true
const startY = e.clientY
const startHeight = height
const startY = e.clientY
const startHeight = height
const handleMouseMove = (moveEvent: MouseEvent) => {
if (!isResizing.current) return
const handleMouseMove = (moveEvent: MouseEvent) => {
if (!isResizing.current) return
const deltaY = moveEvent.clientY - startY
const newHeight = Math.max(MIN_HEIGHT_PX, startHeight + deltaY)
const deltaY = moveEvent.clientY - startY
const newHeight = Math.max(MIN_HEIGHT_PX, startHeight + deltaY)
if (textareaRef.current && overlayRef.current) {
textareaRef.current.style.height = `${newHeight}px`
overlayRef.current.style.height = `${newHeight}px`
}
if (containerRef.current) {
containerRef.current.style.height = `${newHeight}px`
}
// Keep React state in sync so parent layouts (e.g., Editor) update during drag
setHeight(newHeight)
}
const handleMouseUp = () => {
if (textareaRef.current) {
const finalHeight = Number.parseInt(textareaRef.current.style.height, 10) || height
setHeight(finalHeight)
if (textareaRef.current && overlayRef.current) {
textareaRef.current.style.height = `${newHeight}px`
overlayRef.current.style.height = `${newHeight}px`
}
if (containerRef.current) {
containerRef.current.style.height = `${newHeight}px`
}
// Keep React state in sync so parent layouts (e.g., Editor) update during drag
setHeight(newHeight)
}
isResizing.current = false
document.removeEventListener('mousemove', handleMouseMove)
document.removeEventListener('mouseup', handleMouseUp)
}
const handleMouseUp = () => {
if (textareaRef.current) {
const finalHeight = Number.parseInt(textareaRef.current.style.height, 10) || height
setHeight(finalHeight)
}
document.addEventListener('mousemove', handleMouseMove)
document.addEventListener('mouseup', handleMouseUp)
}
isResizing.current = false
document.removeEventListener('mousemove', handleMouseMove)
document.removeEventListener('mouseup', handleMouseUp)
}
document.addEventListener('mousemove', handleMouseMove)
document.addEventListener('mouseup', handleMouseUp)
},
[height]
)
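// Framework-free sketch of the drag-resize technique above: capture the start
// point on mousedown, track movement on document so the drag survives leaving
// the handle, and detach listeners on mouseup.
function attachVerticalResize(handle: HTMLElement, target: HTMLElement, minHeightPx: number) {
  handle.addEventListener('mousedown', (e) => {
    e.preventDefault()
    const startY = e.clientY
    const startHeight = target.offsetHeight
    const onMove = (move: MouseEvent) => {
      const next = Math.max(minHeightPx, startHeight + (move.clientY - startY))
      target.style.height = `${next}px`
    }
    const onUp = () => {
      document.removeEventListener('mousemove', onMove)
      document.removeEventListener('mouseup', onUp)
    }
    document.addEventListener('mousemove', onMove)
    document.addEventListener('mouseup', onUp)
  })
}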
// Expose wand control handlers to parent via ref
useImperativeHandle(


@@ -1,17 +1,281 @@
import { useCallback, useMemo } from 'react'
import type { RefObject } from 'react'
import { useCallback, useMemo, useRef, useState } from 'react'
import { createLogger } from '@sim/logger'
import { useParams } from 'next/navigation'
import { Combobox, Label, Slider, Switch } from '@/components/emcn/components'
import { Combobox, Input, Label, Slider, Switch, Textarea } from '@/components/emcn/components'
import { cn } from '@/lib/core/utils/cn'
import { LongInput } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/long-input/long-input'
import { ShortInput } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/short-input/short-input'
import { formatDisplayText } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/formatted-text'
import {
checkTagTrigger,
TagDropdown,
} from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/components/tag-dropdown/tag-dropdown'
import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-value'
import type { SubBlockConfig } from '@/blocks/types'
import { useAccessibleReferencePrefixes } from '@/app/workspace/[workspaceId]/w/[workflowId]/hooks/use-accessible-reference-prefixes'
import { useMcpTools } from '@/hooks/mcp/use-mcp-tools'
import { formatParameterLabel } from '@/tools/params'
const logger = createLogger('McpDynamicArgs')
interface McpInputWithTagsProps {
value: string
onChange: (value: string) => void
placeholder?: string
disabled?: boolean
isPassword?: boolean
blockId: string
accessiblePrefixes?: Set<string>
}
function McpInputWithTags({
value,
onChange,
placeholder,
disabled,
isPassword,
blockId,
accessiblePrefixes,
}: McpInputWithTagsProps) {
const [showTags, setShowTags] = useState(false)
const [cursorPosition, setCursorPosition] = useState(0)
const [activeSourceBlockId, setActiveSourceBlockId] = useState<string | null>(null)
const inputRef = useRef<HTMLInputElement>(null)
const inputNameRef = useRef(`mcp_input_${Math.random()}`)
const handleChange = (e: React.ChangeEvent<HTMLInputElement>) => {
const newValue = e.target.value
const newCursorPosition = e.target.selectionStart ?? 0
onChange(newValue)
setCursorPosition(newCursorPosition)
const tagTrigger = checkTagTrigger(newValue, newCursorPosition)
setShowTags(tagTrigger.show)
}
const handleDrop = (e: React.DragEvent<HTMLInputElement>) => {
e.preventDefault()
try {
const data = JSON.parse(e.dataTransfer.getData('application/json'))
if (data.type !== 'connectionBlock') return
const dropPosition = inputRef.current?.selectionStart ?? value.length ?? 0
const currentValue = value ?? ''
const newValue = `${currentValue.slice(0, dropPosition)}<${currentValue.slice(dropPosition)}`
onChange(newValue)
setCursorPosition(dropPosition + 1)
setShowTags(true)
if (data.connectionData?.sourceBlockId) {
setActiveSourceBlockId(data.connectionData.sourceBlockId)
}
setTimeout(() => {
if (inputRef.current) {
inputRef.current.selectionStart = dropPosition + 1
inputRef.current.selectionEnd = dropPosition + 1
}
}, 0)
} catch (error) {
logger.error('Failed to parse drop data:', { error })
}
}
const handleDragOver = (e: React.DragEvent<HTMLInputElement>) => {
e.preventDefault()
}
const handleTagSelect = (newValue: string) => {
onChange(newValue)
setShowTags(false)
setActiveSourceBlockId(null)
}
return (
<div className='relative'>
<div className='relative'>
<Input
ref={inputRef}
type={isPassword ? 'password' : 'text'}
value={value || ''}
onChange={handleChange}
onDrop={handleDrop}
onDragOver={handleDragOver}
placeholder={placeholder}
disabled={disabled}
name={inputNameRef.current}
autoComplete='off'
autoCapitalize='off'
spellCheck='false'
data-form-type='other'
data-lpignore='true'
data-1p-ignore
readOnly
onFocus={(e) => {
e.currentTarget.removeAttribute('readOnly')
// Show tag dropdown on focus when input is empty
if (!disabled && (value?.trim() === '' || !value)) {
setShowTags(true)
setCursorPosition(0)
}
}}
className={cn(!isPassword && 'text-transparent caret-foreground')}
/>
{!isPassword && (
<div className='pointer-events-none absolute inset-0 flex items-center overflow-hidden bg-transparent px-[8px] py-[6px] font-medium font-sans text-sm'>
<div className='whitespace-pre'>
{formatDisplayText(value?.toString() || '', {
accessiblePrefixes,
highlightAll: !accessiblePrefixes,
})}
</div>
</div>
)}
</div>
<TagDropdown
visible={showTags}
onSelect={handleTagSelect}
blockId={blockId}
activeSourceBlockId={activeSourceBlockId}
inputValue={value?.toString() ?? ''}
cursorPosition={cursorPosition}
onClose={() => {
setShowTags(false)
setActiveSourceBlockId(null)
}}
inputRef={inputRef as RefObject<HTMLInputElement>}
/>
</div>
)
}
interface McpTextareaWithTagsProps {
value: string
onChange: (value: string) => void
placeholder?: string
disabled?: boolean
blockId: string
accessiblePrefixes?: Set<string>
rows?: number
}
function McpTextareaWithTags({
value,
onChange,
placeholder,
disabled,
blockId,
accessiblePrefixes,
rows = 4,
}: McpTextareaWithTagsProps) {
const [showTags, setShowTags] = useState(false)
const [cursorPosition, setCursorPosition] = useState(0)
const [activeSourceBlockId, setActiveSourceBlockId] = useState<string | null>(null)
const textareaRef = useRef<HTMLTextAreaElement>(null)
const textareaNameRef = useRef(`mcp_textarea_${Math.random()}`)
const handleChange = (e: React.ChangeEvent<HTMLTextAreaElement>) => {
const newValue = e.target.value
const newCursorPosition = e.target.selectionStart ?? 0
onChange(newValue)
setCursorPosition(newCursorPosition)
const tagTrigger = checkTagTrigger(newValue, newCursorPosition)
setShowTags(tagTrigger.show)
}
const handleDrop = (e: React.DragEvent<HTMLTextAreaElement>) => {
e.preventDefault()
try {
const data = JSON.parse(e.dataTransfer.getData('application/json'))
if (data.type !== 'connectionBlock') return
const dropPosition = textareaRef.current?.selectionStart ?? value.length ?? 0
const currentValue = value ?? ''
const newValue = `${currentValue.slice(0, dropPosition)}<${currentValue.slice(dropPosition)}`
onChange(newValue)
setCursorPosition(dropPosition + 1)
setShowTags(true)
if (data.connectionData?.sourceBlockId) {
setActiveSourceBlockId(data.connectionData.sourceBlockId)
}
setTimeout(() => {
if (textareaRef.current) {
textareaRef.current.selectionStart = dropPosition + 1
textareaRef.current.selectionEnd = dropPosition + 1
}
}, 0)
} catch (error) {
logger.error('Failed to parse drop data:', { error })
}
}
const handleDragOver = (e: React.DragEvent<HTMLTextAreaElement>) => {
e.preventDefault()
}
const handleTagSelect = (newValue: string) => {
onChange(newValue)
setShowTags(false)
setActiveSourceBlockId(null)
}
return (
<div className='relative'>
<Textarea
ref={textareaRef}
value={value || ''}
onChange={handleChange}
onDrop={handleDrop}
onDragOver={handleDragOver}
onFocus={() => {
// Show tag dropdown on focus when input is empty
if (!disabled && (value?.trim() === '' || !value)) {
setShowTags(true)
setCursorPosition(0)
}
}}
placeholder={placeholder}
disabled={disabled}
rows={rows}
name={textareaNameRef.current}
autoComplete='off'
autoCapitalize='off'
spellCheck='false'
data-form-type='other'
data-lpignore='true'
data-1p-ignore
className={cn('min-h-[80px] resize-none text-transparent caret-foreground')}
/>
<div className='pointer-events-none absolute inset-0 overflow-auto whitespace-pre-wrap break-words px-[8px] py-[8px] font-medium font-sans text-sm'>
{formatDisplayText(value || '', {
accessiblePrefixes,
highlightAll: !accessiblePrefixes,
})}
</div>
<TagDropdown
visible={showTags}
onSelect={handleTagSelect}
blockId={blockId}
activeSourceBlockId={activeSourceBlockId}
inputValue={value?.toString() ?? ''}
cursorPosition={cursorPosition}
onClose={() => {
setShowTags(false)
setActiveSourceBlockId(null)
}}
inputRef={textareaRef as RefObject<HTMLTextAreaElement>}
/>
</div>
)
}
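// Both wrappers open the tag dropdown when the caret sits after an unclosed
// '<'. A hedged sketch of what a checkTagTrigger-style predicate might look
// like; the real implementation lives in tag-dropdown.tsx and may differ:
function shouldShowTagDropdown(value: string, cursorPosition: number): { show: boolean } {
  const beforeCursor = value.slice(0, cursorPosition)
  const lastOpen = beforeCursor.lastIndexOf('<')
  if (lastOpen === -1) return { show: false }
  // Only trigger while the most recent '<' is still unclosed.
  return { show: !beforeCursor.includes('>', lastOpen) }
}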
interface McpDynamicArgsProps {
blockId: string
subBlockId: string
@@ -20,27 +284,6 @@ interface McpDynamicArgsProps {
previewValue?: any
}
/**
* Creates a minimal SubBlockConfig for MCP tool parameters
*/
function createParamConfig(
paramName: string,
paramSchema: any,
inputType: 'long-input' | 'short-input'
): SubBlockConfig {
const placeholder =
paramSchema.type === 'array'
? `Enter JSON array, e.g. ["item1", "item2"] or comma-separated values`
: paramSchema.description || `Enter ${formatParameterLabel(paramName).toLowerCase()}`
return {
id: paramName,
type: inputType,
title: formatParameterLabel(paramName),
placeholder,
}
}
export function McpDynamicArgs({
blockId,
subBlockId,
@@ -54,6 +297,7 @@ export function McpDynamicArgs({
const [selectedTool] = useSubBlockValue(blockId, 'tool')
const [cachedSchema] = useSubBlockValue(blockId, '_toolSchema')
const [toolArgs, setToolArgs] = useSubBlockValue(blockId, subBlockId)
const accessiblePrefixes = useAccessibleReferencePrefixes(blockId)
const selectedToolConfig = mcpTools.find((tool) => tool.id === selectedTool)
const toolSchema = cachedSchema || selectedToolConfig?.inputSchema
@@ -64,7 +308,7 @@ export function McpDynamicArgs({
try {
return JSON.parse(previewValue)
} catch (error) {
logger.warn('Failed to parse preview value as JSON:', { error })
console.warn('Failed to parse preview value as JSON:', error)
return previewValue
}
}
@@ -74,7 +318,7 @@ export function McpDynamicArgs({
try {
return JSON.parse(toolArgs)
} catch (error) {
logger.warn('Failed to parse toolArgs as JSON:', { error })
console.warn('Failed to parse toolArgs as JSON:', error)
return {}
}
}
@@ -216,23 +460,24 @@ export function McpDynamicArgs({
)
}
case 'long-input': {
const config = createParamConfig(paramName, paramSchema, 'long-input')
case 'long-input':
return (
<LongInput
<McpTextareaWithTags
key={`${paramName}-long`}
blockId={blockId}
subBlockId={`_mcp_${paramName}`}
config={config}
placeholder={config.placeholder}
rows={4}
value={value || ''}
onChange={(newValue) => updateParameter(paramName, newValue)}
isPreview={isPreview}
placeholder={
paramSchema.type === 'array'
? `Enter JSON array, e.g. ["item1", "item2"] or comma-separated values`
: paramSchema.description ||
`Enter ${formatParameterLabel(paramName).toLowerCase()}`
}
disabled={disabled}
blockId={blockId}
accessiblePrefixes={accessiblePrefixes}
rows={4}
/>
)
}
default: {
const isPassword =
@@ -240,16 +485,10 @@ export function McpDynamicArgs({
paramName.toLowerCase().includes('password') ||
paramName.toLowerCase().includes('token')
const isNumeric = paramSchema.type === 'number' || paramSchema.type === 'integer'
const config = createParamConfig(paramName, paramSchema, 'short-input')
return (
<ShortInput
<McpInputWithTags
key={`${paramName}-short`}
blockId={blockId}
subBlockId={`_mcp_${paramName}`}
config={config}
placeholder={config.placeholder}
password={isPassword}
value={value?.toString() || ''}
onChange={(newValue) => {
let processedValue: any = newValue
@@ -267,8 +506,16 @@ export function McpDynamicArgs({
}
updateParameter(paramName, processedValue)
}}
isPreview={isPreview}
placeholder={
paramSchema.type === 'array'
? `Enter JSON array, e.g. ["item1", "item2"] or comma-separated values`
: paramSchema.description ||
`Enter ${formatParameterLabel(paramName).toLowerCase()}`
}
disabled={disabled}
isPassword={isPassword}
blockId={blockId}
accessiblePrefixes={accessiblePrefixes}
/>
)
}
@@ -331,40 +578,26 @@ export function McpDynamicArgs({
tabIndex={-1}
readOnly
/>
<div>
<div className='space-y-4'>
{toolSchema.properties &&
Object.entries(toolSchema.properties).map(([paramName, paramSchema], index, entries) => {
Object.entries(toolSchema.properties).map(([paramName, paramSchema]) => {
const inputType = getInputType(paramSchema as any)
const showLabel = inputType !== 'switch'
const showDivider = index < entries.length - 1
return (
<div key={paramName} className='subblock-row'>
<div className='subblock-content flex flex-col gap-[10px]'>
{showLabel && (
<Label
className={cn(
'font-medium text-sm',
toolSchema.required?.includes(paramName) &&
'after:ml-1 after:text-red-500 after:content-["*"]'
)}
>
{formatParameterLabel(paramName)}
</Label>
)}
{renderParameterInput(paramName, paramSchema as any)}
</div>
{showDivider && (
<div className='subblock-divider px-[2px] pt-[16px] pb-[13px]'>
<div
className='h-[1.25px]'
style={{
backgroundImage:
'repeating-linear-gradient(to right, var(--border) 0px, var(--border) 6px, transparent 6px, transparent 12px)',
}}
/>
</div>
<div key={paramName} className='space-y-2'>
{showLabel && (
<Label
className={cn(
'font-medium text-sm',
toolSchema.required?.includes(paramName) &&
'after:ml-1 after:text-red-500 after:content-["*"]'
)}
>
{formatParameterLabel(paramName)}
</Label>
)}
{renderParameterInput(paramName, paramSchema as any)}
</div>
)
})}


@@ -2069,7 +2069,6 @@ export const ToolInput = memo(function ToolInput({
placeholder: uiComponent.placeholder,
requiredScopes: uiComponent.requiredScopes,
dependsOn: uiComponent.dependsOn,
canonicalParamId: uiComponent.canonicalParamId ?? param.id,
}}
onProjectSelect={onChange}
disabled={disabled}


@@ -34,7 +34,6 @@ interface LogRowContextMenuProps {
onCopyRunId: (runId: string) => void
onClearFilters: () => void
onClearConsole: () => void
onFixInCopilot: (entry: ConsoleEntry) => void
hasActiveFilters: boolean
}
@@ -55,7 +54,6 @@ export function LogRowContextMenu({
onCopyRunId,
onClearFilters,
onClearConsole,
onFixInCopilot,
hasActiveFilters,
}: LogRowContextMenuProps) {
const hasRunId = entry?.executionId != null
@@ -98,21 +96,6 @@ export function LogRowContextMenu({
</>
)}
{/* Fix in Copilot - only for error rows */}
{entry && !entry.success && (
<>
<PopoverItem
onClick={() => {
onFixInCopilot(entry)
onClose()
}}
>
Fix in Copilot
</PopoverItem>
<PopoverDivider />
</>
)}
{/* Filter actions */}
{entry && (
<>


@@ -54,7 +54,6 @@ import { useShowTrainingControls } from '@/hooks/queries/general-settings'
import { useCodeViewerFeatures } from '@/hooks/use-code-viewer'
import { OUTPUT_PANEL_WIDTH, TERMINAL_HEIGHT } from '@/stores/constants'
import { useCopilotTrainingStore } from '@/stores/copilot-training/store'
import { openCopilotWithMessage } from '@/stores/notifications/utils'
import type { ConsoleEntry } from '@/stores/terminal'
import { useTerminalConsoleStore, useTerminalStore } from '@/stores/terminal'
import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
@@ -227,6 +226,7 @@ const isEventFromEditableElement = (e: KeyboardEvent): boolean => {
return false
}
// Check target and walk up ancestors in case editors render nested elements
let el: HTMLElement | null = target
while (el) {
if (isEditable(el)) return true
@@ -1159,17 +1159,6 @@ export const Terminal = memo(function Terminal() {
clearCurrentWorkflowConsole()
}, [clearCurrentWorkflowConsole])
const handleFixInCopilot = useCallback(
(entry: ConsoleEntry) => {
const errorMessage = entry.error ? String(entry.error) : 'Unknown error'
const blockName = entry.blockName || 'Unknown Block'
const message = `${errorMessage}\n\nError in ${blockName}.\n\nPlease fix this.`
openCopilotWithMessage(message)
closeLogRowMenu()
},
[closeLogRowMenu]
)
const handleTrainingClick = useCallback(
(e: React.MouseEvent) => {
e.stopPropagation()
@@ -1960,7 +1949,6 @@ export const Terminal = memo(function Terminal() {
closeLogRowMenu()
}}
onClearConsole={handleClearConsoleFromMenu}
onFixInCopilot={handleFixInCopilot}
hasActiveFilters={hasActiveFilters}
/>
</>


@@ -434,20 +434,13 @@ const WorkflowContent = React.memo(() => {
window.removeEventListener('open-oauth-connect', handleOpenOAuthConnect as EventListener)
}, [])
const {
diffAnalysis,
isShowingDiff,
isDiffReady,
reapplyDiffMarkers,
hasActiveDiff,
restoreDiffFromMarkers,
} = useWorkflowDiffStore(
const { diffAnalysis, isShowingDiff, isDiffReady, reapplyDiffMarkers, hasActiveDiff } =
useWorkflowDiffStore(
useShallow((state) => ({
diffAnalysis: state.diffAnalysis,
isShowingDiff: state.isShowingDiff,
isDiffReady: state.isDiffReady,
reapplyDiffMarkers: state.reapplyDiffMarkers,
restoreDiffFromMarkers: state.restoreDiffFromMarkers,
hasActiveDiff: state.hasActiveDiff,
}))
)
@@ -473,16 +466,6 @@ const WorkflowContent = React.memo(() => {
}
}, [blocks, hasActiveDiff, isDiffReady, reapplyDiffMarkers, isWorkflowReady])
/** Restore diff state from markers on page load if blocks have is_diff markers. */
const hasRestoredDiff = useRef(false)
useEffect(() => {
if (!isWorkflowReady || hasRestoredDiff.current || hasActiveDiff) return
// Check once when workflow becomes ready
hasRestoredDiff.current = true
// Delay slightly to ensure blocks are fully loaded
setTimeout(() => restoreDiffFromMarkers(), 100)
}, [isWorkflowReady, hasActiveDiff, restoreDiffFromMarkers])
/** Reconstructs deleted edges for diff view and filters invalid edges. */
const edgesForDisplay = useMemo(() => {
let edgesToFilter = edges
@@ -709,8 +692,7 @@ const WorkflowContent = React.memo(() => {
parentId?: string,
extent?: 'parent',
autoConnectEdge?: Edge,
triggerMode?: boolean,
presetSubBlockValues?: Record<string, unknown>
triggerMode?: boolean
) => {
setPendingSelection([id])
setSelectedEdges(new Map())
@@ -740,14 +722,6 @@ const WorkflowContent = React.memo(() => {
}
}
// Apply preset subblock values (e.g., from tool-operation search)
if (presetSubBlockValues) {
if (!subBlockValues[id]) {
subBlockValues[id] = {}
}
Object.assign(subBlockValues[id], presetSubBlockValues)
}
collaborativeBatchAddBlocks(
[block],
autoConnectEdge ? [autoConnectEdge] : [],
@@ -1515,7 +1489,7 @@ const WorkflowContent = React.memo(() => {
return
}
const { type, enableTriggerMode, presetOperation } = event.detail
const { type, enableTriggerMode } = event.detail
if (!type) return
if (type === 'connectionBlock') return
@@ -1578,8 +1552,7 @@ const WorkflowContent = React.memo(() => {
undefined,
undefined,
autoConnectEdge,
enableTriggerMode,
presetOperation ? { operation: presetOperation } : undefined
enableTriggerMode
)
}


@@ -8,7 +8,6 @@ import { useParams, useRouter } from 'next/navigation'
import { Dialog, DialogPortal, DialogTitle } from '@/components/ui/dialog'
import { useBrandConfig } from '@/lib/branding/branding'
import { cn } from '@/lib/core/utils/cn'
import { getToolOperationsIndex } from '@/lib/search/tool-operations'
import { getTriggersForSidebar, hasTriggerCapability } from '@/lib/workflows/triggers/trigger-utils'
import { searchItems } from '@/app/workspace/[workspaceId]/w/components/sidebar/components/search-modal/search-utils'
import { SIDEBAR_SCROLL_EVENT } from '@/app/workspace/[workspaceId]/w/components/sidebar/sidebar'
@@ -82,12 +81,10 @@ type SearchItem = {
color?: string
href?: string
shortcut?: string
type: 'block' | 'trigger' | 'tool' | 'tool-operation' | 'workflow' | 'workspace' | 'page' | 'doc'
type: 'block' | 'trigger' | 'tool' | 'workflow' | 'workspace' | 'page' | 'doc'
isCurrent?: boolean
blockType?: string
config?: any
operationId?: string
aliases?: string[]
}
interface SearchResultItemProps {
@@ -104,11 +101,7 @@ const SearchResultItem = memo(function SearchResultItem({
onItemClick,
}: SearchResultItemProps) {
const Icon = item.icon
const showColoredIcon =
item.type === 'block' ||
item.type === 'trigger' ||
item.type === 'tool' ||
item.type === 'tool-operation'
const showColoredIcon = item.type === 'block' || item.type === 'trigger' || item.type === 'tool'
const isWorkflow = item.type === 'workflow'
const isWorkspace = item.type === 'workspace'
@@ -285,24 +278,6 @@ export const SearchModal = memo(function SearchModal({
)
}, [open, isOnWorkflowPage, filterBlocks])
const toolOperations = useMemo(() => {
if (!open || !isOnWorkflowPage) return []
const allowedBlockTypes = new Set(tools.map((t) => t.type))
return getToolOperationsIndex()
.filter((op) => allowedBlockTypes.has(op.blockType))
.map((op) => ({
id: op.id,
name: `${op.serviceName}: ${op.operationName}`,
icon: op.icon,
bgColor: op.bgColor,
blockType: op.blockType,
operationId: op.operationId,
aliases: op.aliases,
}))
}, [open, isOnWorkflowPage, tools])
const pages = useMemo(
(): PageItem[] => [
{
@@ -421,19 +396,6 @@ export const SearchModal = memo(function SearchModal({
})
})
toolOperations.forEach((op) => {
items.push({
id: op.id,
name: op.name,
icon: op.icon,
bgColor: op.bgColor,
type: 'tool-operation',
blockType: op.blockType,
operationId: op.operationId,
aliases: op.aliases,
})
})
docs.forEach((doc) => {
items.push({
id: doc.id,
@@ -445,10 +407,10 @@ export const SearchModal = memo(function SearchModal({
})
return items
}, [workspaces, workflows, pages, blocks, triggers, tools, toolOperations, docs])
}, [workspaces, workflows, pages, blocks, triggers, tools, docs])
const sectionOrder = useMemo<SearchItem['type'][]>(
() => ['block', 'tool', 'tool-operation', 'trigger', 'workflow', 'workspace', 'page', 'doc'],
() => ['block', 'tool', 'trigger', 'workflow', 'workspace', 'page', 'doc'],
[]
)
@@ -495,7 +457,6 @@ export const SearchModal = memo(function SearchModal({
page: [],
trigger: [],
block: [],
'tool-operation': [],
tool: [],
doc: [],
}
@@ -551,17 +512,6 @@ export const SearchModal = memo(function SearchModal({
window.dispatchEvent(event)
}
break
case 'tool-operation':
if (item.blockType && item.operationId) {
const event = new CustomEvent('add-block-from-toolbar', {
detail: {
type: item.blockType,
presetOperation: item.operationId,
},
})
window.dispatchEvent(event)
}
break
case 'workspace':
if (item.isCurrent) {
break
@@ -642,7 +592,6 @@ export const SearchModal = memo(function SearchModal({
page: 'Pages',
trigger: 'Triggers',
block: 'Blocks',
'tool-operation': 'Tool Operations',
tool: 'Tools',
doc: 'Docs',
}
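// The modal talks to the canvas through DOM CustomEvents (see the
// 'add-block-from-toolbar' dispatch above). A typed sketch of that bridge;
// the detail shape mirrors what this diff reads from event.detail:
interface AddBlockDetailSketch {
  type: string
  enableTriggerMode?: boolean
}
function dispatchAddBlock(detail: AddBlockDetailSketch) {
  window.dispatchEvent(new CustomEvent<AddBlockDetailSketch>('add-block-from-toolbar', { detail }))
}
// Listener side (e.g., the workflow canvas):
// window.addEventListener('add-block-from-toolbar', (e) =>
//   handleAddBlock((e as CustomEvent<AddBlockDetailSketch>).detail))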


@@ -8,19 +8,17 @@ export interface SearchableItem {
name: string
description?: string
type: string
aliases?: string[]
[key: string]: any
}
export interface SearchResult<T extends SearchableItem> {
item: T
score: number
matchType: 'exact' | 'prefix' | 'alias' | 'word-boundary' | 'substring' | 'description'
matchType: 'exact' | 'prefix' | 'word-boundary' | 'substring' | 'description'
}
const SCORE_EXACT_MATCH = 10000
const SCORE_PREFIX_MATCH = 5000
const SCORE_ALIAS_MATCH = 3000
const SCORE_WORD_BOUNDARY = 1000
const SCORE_SUBSTRING_MATCH = 100
const DESCRIPTION_WEIGHT = 0.3
@@ -69,39 +67,6 @@ function calculateFieldScore(
return { score: 0, matchType: null }
}
/**
* Check if query matches any alias in the item's aliases array
* Returns the alias score if a match is found, 0 otherwise
*/
function calculateAliasScore(
query: string,
aliases?: string[]
): { score: number; matchType: 'alias' | null } {
if (!aliases || aliases.length === 0) {
return { score: 0, matchType: null }
}
const normalizedQuery = query.toLowerCase().trim()
for (const alias of aliases) {
const normalizedAlias = alias.toLowerCase().trim()
if (normalizedAlias === normalizedQuery) {
return { score: SCORE_ALIAS_MATCH, matchType: 'alias' }
}
if (normalizedAlias.startsWith(normalizedQuery)) {
return { score: SCORE_ALIAS_MATCH * 0.8, matchType: 'alias' }
}
if (normalizedQuery.includes(normalizedAlias) || normalizedAlias.includes(normalizedQuery)) {
return { score: SCORE_ALIAS_MATCH * 0.6, matchType: 'alias' }
}
}
return { score: 0, matchType: null }
}
/**
* Search items using tiered matching algorithm
* Returns items sorted by relevance (highest score first)
@@ -125,20 +90,15 @@ export function searchItems<T extends SearchableItem>(
? calculateFieldScore(normalizedQuery, item.description)
: { score: 0, matchType: null }
const aliasMatch = calculateAliasScore(normalizedQuery, item.aliases)
const nameScore = nameMatch.score
const descScore = descMatch.score * DESCRIPTION_WEIGHT
const aliasScore = aliasMatch.score
const bestScore = Math.max(nameScore, descScore, aliasScore)
const bestScore = Math.max(nameScore, descScore)
if (bestScore > 0) {
let matchType: SearchResult<T>['matchType'] = 'substring'
if (nameScore >= descScore && nameScore >= aliasScore) {
if (nameScore >= descScore) {
matchType = nameMatch.matchType || 'substring'
} else if (aliasScore >= descScore) {
matchType = 'alias'
} else {
matchType = 'description'
}
@@ -165,8 +125,6 @@ export function getMatchTypeLabel(matchType: SearchResult<any>['matchType']): st
return 'Exact match'
case 'prefix':
return 'Starts with'
case 'alias':
return 'Similar to'
case 'word-boundary':
return 'Word match'
case 'substring':
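A minimal usage sketch of the de-aliased scorer (illustrative only, not part of the diff; assumes searchItems(items, query) and the SearchableItem shape shown above — the import path is hypothetical):

import { searchItems, type SearchableItem } from './search-utils' // path assumed

const items: SearchableItem[] = [
  { id: 'slack', name: 'Slack', type: 'block' },
  { id: 'airtable', name: 'Airtable', type: 'block', description: 'Slack-style tables' },
]

// Without alias scoring, ranking reduces to the remaining tiers:
// exact (10000) > prefix (5000) > word boundary (1000) > substring (100),
// with description hits discounted by DESCRIPTION_WEIGHT (0.3).
for (const r of searchItems(items, 'slack')) {
  console.log(r.item.name, r.matchType, r.score) // e.g. "Slack exact 10000"
}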

View File

@@ -1,80 +0,0 @@
'use client'
import { useState } from 'react'
import { createLogger } from '@sim/logger'
import { useQueryClient } from '@tanstack/react-query'
import { useParams } from 'next/navigation'
import { Button, Input as EmcnInput } from '@/components/emcn'
import { workflowKeys } from '@/hooks/queries/workflows'
const logger = createLogger('DebugSettings')
/**
* Debug settings component for superusers.
* Allows importing workflows by ID for debugging purposes.
*/
export function Debug() {
const params = useParams()
const queryClient = useQueryClient()
const workspaceId = params?.workspaceId as string
const [workflowId, setWorkflowId] = useState('')
const [isImporting, setIsImporting] = useState(false)
const handleImport = async () => {
if (!workflowId.trim()) return
setIsImporting(true)
try {
const response = await fetch('/api/superuser/import-workflow', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
workflowId: workflowId.trim(),
targetWorkspaceId: workspaceId,
}),
})
const data = await response.json()
if (response.ok) {
await queryClient.invalidateQueries({ queryKey: workflowKeys.list(workspaceId) })
setWorkflowId('')
logger.info('Workflow imported successfully', {
originalWorkflowId: workflowId.trim(),
newWorkflowId: data.newWorkflowId,
copilotChatsImported: data.copilotChatsImported,
})
}
} catch (error) {
logger.error('Failed to import workflow', error)
} finally {
setIsImporting(false)
}
}
return (
<div className='flex h-full flex-col gap-[16px]'>
<p className='text-[13px] text-[var(--text-secondary)]'>
Import a workflow by ID along with its associated copilot chats.
</p>
<div className='flex gap-[8px]'>
<EmcnInput
value={workflowId}
onChange={(e) => setWorkflowId(e.target.value)}
placeholder='Enter workflow ID'
disabled={isImporting}
/>
<Button
variant='tertiary'
onClick={handleImport}
disabled={isImporting || !workflowId.trim()}
>
{isImporting ? 'Importing...' : 'Import'}
</Button>
</div>
</div>
)
}

View File

@@ -4,7 +4,6 @@ export { BYOK } from './byok/byok'
export { Copilot } from './copilot/copilot'
export { CredentialSets } from './credential-sets/credential-sets'
export { CustomTools } from './custom-tools/custom-tools'
export { Debug } from './debug/debug'
export { EnvironmentVariables } from './environment/environment'
export { Files as FileUploads } from './files/files'
export { General } from './general/general'

View File

@@ -5,7 +5,6 @@ import * as DialogPrimitive from '@radix-ui/react-dialog'
import * as VisuallyHidden from '@radix-ui/react-visually-hidden'
import { useQueryClient } from '@tanstack/react-query'
import {
Bug,
Files,
KeySquare,
LogIn,
@@ -47,7 +46,6 @@ import {
Copilot,
CredentialSets,
CustomTools,
Debug,
EnvironmentVariables,
FileUploads,
General,
@@ -93,15 +91,8 @@ type SettingsSection =
| 'mcp'
| 'custom-tools'
| 'workflow-mcp-servers'
| 'debug'
type NavigationSection =
| 'account'
| 'subscription'
| 'tools'
| 'system'
| 'enterprise'
| 'superuser'
type NavigationSection = 'account' | 'subscription' | 'tools' | 'system' | 'enterprise'
type NavigationItem = {
id: SettingsSection
@@ -113,7 +104,6 @@ type NavigationItem = {
requiresEnterprise?: boolean
requiresHosted?: boolean
selfHostedOverride?: boolean
requiresSuperUser?: boolean
}
const sectionConfig: { key: NavigationSection; title: string }[] = [
@@ -122,7 +112,6 @@ const sectionConfig: { key: NavigationSection; title: string }[] = [
{ key: 'subscription', title: 'Subscription' },
{ key: 'system', title: 'System' },
{ key: 'enterprise', title: 'Enterprise' },
{ key: 'superuser', title: 'Superuser' },
]
const allNavigationItems: NavigationItem[] = [
@@ -191,24 +180,15 @@ const allNavigationItems: NavigationItem[] = [
requiresEnterprise: true,
selfHostedOverride: isSSOEnabled,
},
{
id: 'debug',
label: 'Debug',
icon: Bug,
section: 'superuser',
requiresSuperUser: true,
},
]
export function SettingsModal({ open, onOpenChange }: SettingsModalProps) {
const [activeSection, setActiveSection] = useState<SettingsSection>('general')
const { initialSection, mcpServerId, clearInitialState } = useSettingsModalStore()
const [pendingMcpServerId, setPendingMcpServerId] = useState<string | null>(null)
const [isSuperUser, setIsSuperUser] = useState(false)
const { data: session } = useSession()
const queryClient = useQueryClient()
const { data: organizationsData } = useOrganizations()
const { data: generalSettings } = useGeneralSettings()
const { data: subscriptionData } = useSubscriptionData({ enabled: isBillingEnabled })
const { data: ssoProvidersData, isLoading: isLoadingSSO } = useSSOProviders()
@@ -229,23 +209,6 @@ export function SettingsModal({ open, onOpenChange }: SettingsModalProps) {
const hasEnterprisePlan = subscriptionStatus.isEnterprise
const hasOrganization = !!activeOrganization?.id
// Fetch superuser status
useEffect(() => {
const fetchSuperUserStatus = async () => {
if (!userId) return
try {
const response = await fetch('/api/user/super-user')
if (response.ok) {
const data = await response.json()
setIsSuperUser(data.isSuperUser)
}
} catch {
setIsSuperUser(false)
}
}
fetchSuperUserStatus()
}, [userId])
// Memoize SSO provider ownership check
const isSSOProviderOwner = useMemo(() => {
if (isHosted) return null
@@ -305,13 +268,6 @@ export function SettingsModal({ open, onOpenChange }: SettingsModalProps) {
return false
}
// requiresSuperUser: only show if user is a superuser AND has superuser mode enabled
const superUserModeEnabled = generalSettings?.superUserModeEnabled ?? false
const effectiveSuperUser = isSuperUser && superUserModeEnabled
if (item.requiresSuperUser && !effectiveSuperUser) {
return false
}
return true
})
}, [
@@ -324,8 +280,6 @@ export function SettingsModal({ open, onOpenChange }: SettingsModalProps) {
isOwner,
isAdmin,
permissionConfig,
isSuperUser,
generalSettings?.superUserModeEnabled,
])
// Memoized callbacks to prevent infinite loops in child components
@@ -354,6 +308,9 @@ export function SettingsModal({ open, onOpenChange }: SettingsModalProps) {
[activeSection]
)
// React Query hook automatically loads and syncs settings
useGeneralSettings()
// Apply initial section from store when modal opens
useEffect(() => {
if (open && initialSection) {
@@ -566,7 +523,6 @@ export function SettingsModal({ open, onOpenChange }: SettingsModalProps) {
{activeSection === 'mcp' && <MCP initialServerId={pendingMcpServerId} />}
{activeSection === 'custom-tools' && <CustomTools />}
{activeSection === 'workflow-mcp-servers' && <WorkflowMcpServers />}
{activeSection === 'debug' && <Debug />}
</SModalMainBody>
</SModalMain>
</SModalContent>

View File

@@ -11,7 +11,7 @@ export const BrowserUseBlock: BlockConfig<BrowserUseResponse> = {
'Integrate Browser Use into the workflow. Can navigate the web and perform actions as if a real user was interacting with the browser.',
docsLink: 'https://docs.sim.ai/tools/browser_use',
category: 'tools',
bgColor: '#181C1E',
bgColor: '#E0E0E0',
icon: BrowserUseIcon,
subBlocks: [
{

View File

@@ -34,7 +34,7 @@ export function OTPVerificationEmail({
const brand = getBrandConfig()
return (
<EmailLayout preview={getSubjectByType(type, brand.name, chatTitle)} showUnsubscribe={false}>
<EmailLayout preview={getSubjectByType(type, brand.name, chatTitle)}>
<Text style={baseStyles.paragraph}>Your verification code:</Text>
<Section style={baseStyles.codeContainer}>

View File

@@ -12,7 +12,7 @@ export function ResetPasswordEmail({ username = '', resetLink = '' }: ResetPassw
const brand = getBrandConfig()
return (
<EmailLayout preview={`Reset your ${brand.name} password`} showUnsubscribe={false}>
<EmailLayout preview={`Reset your ${brand.name} password`}>
<Text style={baseStyles.paragraph}>Hello {username},</Text>
<Text style={baseStyles.paragraph}>
A password reset was requested for your {brand.name} account. Click below to set a new

View File

@@ -13,7 +13,7 @@ export function WelcomeEmail({ userName }: WelcomeEmailProps) {
const baseUrl = getBaseUrl()
return (
<EmailLayout preview={`Welcome to ${brand.name}`} showUnsubscribe={false}>
<EmailLayout preview={`Welcome to ${brand.name}`}>
<Text style={{ ...baseStyles.paragraph, marginTop: 0 }}>
{userName ? `Hey ${userName},` : 'Hey,'}
</Text>

View File

@@ -23,7 +23,7 @@ export function CreditPurchaseEmail({
const previewText = `${brand.name}: $${amount.toFixed(2)} in credits added to your account`
return (
<EmailLayout preview={previewText} showUnsubscribe={false}>
<EmailLayout preview={previewText}>
<Text style={{ ...baseStyles.paragraph, marginTop: 0 }}>
{userName ? `Hi ${userName},` : 'Hi,'}
</Text>

View File

@@ -18,10 +18,7 @@ export function EnterpriseSubscriptionEmail({
const effectiveLoginLink = loginLink || `${baseUrl}/login`
return (
<EmailLayout
preview={`Your Enterprise Plan is now active on ${brand.name}`}
showUnsubscribe={false}
>
<EmailLayout preview={`Your Enterprise Plan is now active on ${brand.name}`}>
<Text style={baseStyles.paragraph}>Hello {userName},</Text>
<Text style={baseStyles.paragraph}>
Your <strong>Enterprise Plan</strong> is now active. You have full access to advanced

View File

@@ -31,7 +31,7 @@ export function FreeTierUpgradeEmail({
const previewText = `${brand.name}: You've used ${percentUsed}% of your free credits`
return (
<EmailLayout preview={previewText} showUnsubscribe={true}>
<EmailLayout preview={previewText}>
<Text style={{ ...baseStyles.paragraph, marginTop: 0 }}>
{userName ? `Hi ${userName},` : 'Hi,'}
</Text>

View File

@@ -25,7 +25,7 @@ export function PaymentFailedEmail({
const previewText = `${brand.name}: Payment Failed - Action Required`
return (
<EmailLayout preview={previewText} showUnsubscribe={false}>
<EmailLayout preview={previewText}>
<Text style={{ ...baseStyles.paragraph, marginTop: 0 }}>
{userName ? `Hi ${userName},` : 'Hi,'}
</Text>

View File

@@ -18,7 +18,7 @@ export function PlanWelcomeEmail({ planName, userName, loginLink }: PlanWelcomeE
const previewText = `${brand.name}: Your ${planName} plan is active`
return (
<EmailLayout preview={previewText} showUnsubscribe={true}>
<EmailLayout preview={previewText}>
<Text style={{ ...baseStyles.paragraph, marginTop: 0 }}>
{userName ? `Hi ${userName},` : 'Hi,'}
</Text>

View File

@@ -25,7 +25,7 @@ export function UsageThresholdEmail({
const previewText = `${brand.name}: You're at ${percentUsed}% of your ${planName} monthly budget`
return (
<EmailLayout preview={previewText} showUnsubscribe={true}>
<EmailLayout preview={previewText}>
<Text style={{ ...baseStyles.paragraph, marginTop: 0 }}>
{userName ? `Hi ${userName},` : 'Hi,'}
</Text>

View File

@@ -20,10 +20,7 @@ export function CareersConfirmationEmail({
const baseUrl = getBaseUrl()
return (
<EmailLayout
preview={`Your application to ${brand.name} has been received`}
showUnsubscribe={false}
>
<EmailLayout preview={`Your application to ${brand.name} has been received`}>
<Text style={baseStyles.paragraph}>Hello {name},</Text>
<Text style={baseStyles.paragraph}>
We've received your application for <strong>{position}</strong>. Our team reviews every

View File

@@ -40,7 +40,7 @@ export function CareersSubmissionEmail({
submittedDate = new Date(),
}: CareersSubmissionEmailProps) {
return (
<EmailLayout preview={`New Career Application from ${name}`} hideFooter showUnsubscribe={false}>
<EmailLayout preview={`New Career Application from ${name}`} hideFooter>
<Text
style={{
...baseStyles.paragraph,

View File

@@ -4,29 +4,22 @@ import { getBrandConfig } from '@/lib/branding/branding'
import { isHosted } from '@/lib/core/config/feature-flags'
import { getBaseUrl } from '@/lib/core/utils/urls'
interface UnsubscribeOptions {
unsubscribeToken?: string
email?: string
}
interface EmailFooterProps {
baseUrl?: string
unsubscribe?: UnsubscribeOptions
messageId?: string
/**
* Whether to show unsubscribe link. Defaults to true.
* Set to false for transactional emails where unsubscribe doesn't apply.
*/
showUnsubscribe?: boolean
}
/**
* Email footer component styled to match Stripe's email design.
* Sits in the gray area below the main white card.
*
* For non-transactional emails, the unsubscribe link uses placeholders
* {{UNSUBSCRIBE_TOKEN}} and {{UNSUBSCRIBE_EMAIL}} which are replaced
* by the mailer when sending.
*/
export function EmailFooter({
baseUrl = getBaseUrl(),
messageId,
showUnsubscribe = true,
}: EmailFooterProps) {
export function EmailFooter({ baseUrl = getBaseUrl(), unsubscribe, messageId }: EmailFooterProps) {
const brand = getBrandConfig()
const footerLinkStyle = {
@@ -188,20 +181,19 @@ export function EmailFooter({
{' '}
<a href={`${baseUrl}/terms`} style={footerLinkStyle} rel='noopener noreferrer'>
Terms of Service
</a>{' '}
{' '}
<a
href={
unsubscribe?.unsubscribeToken && unsubscribe?.email
? `${baseUrl}/unsubscribe?token=${unsubscribe.unsubscribeToken}&email=${encodeURIComponent(unsubscribe.email)}`
: `mailto:${brand.supportEmail}?subject=Unsubscribe%20Request&body=Please%20unsubscribe%20me%20from%20all%20emails.`
}
style={footerLinkStyle}
rel='noopener noreferrer'
>
Unsubscribe
</a>
{showUnsubscribe && (
<>
{' '}
{' '}
<a
href={`${baseUrl}/unsubscribe?token={{UNSUBSCRIBE_TOKEN}}&email={{UNSUBSCRIBE_EMAIL}}`}
style={footerLinkStyle}
rel='noopener noreferrer'
>
Unsubscribe
</a>
</>
)}
</td>
<td style={baseStyles.gutter} width={spacing.gutter}>
&nbsp;
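A short wiring sketch (illustrative, not from the diff) for the token-based footer variant above: with a token/email pair the link targets the /unsubscribe page, otherwise it falls back to a support mailto.

function FooterPreview() {
  // href -> https://sim.ai/unsubscribe?token=tok_123&email=user%40example.com
  return (
    <EmailFooter
      baseUrl='https://sim.ai'
      unsubscribe={{ unsubscribeToken: 'tok_123', email: 'user@example.com' }}
    />
  )
}
// Omitting `unsubscribe` renders the mailto fallback instead:
// mailto:{brand.supportEmail}?subject=Unsubscribe%20Request&body=...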

View File

@@ -11,23 +11,13 @@ interface EmailLayoutProps {
children: React.ReactNode
/** Optional: hide footer for internal emails */
hideFooter?: boolean
/**
* Whether to show unsubscribe link in footer.
* Set to false for transactional emails where unsubscribe doesn't apply.
*/
showUnsubscribe: boolean
}
/**
* Shared email layout wrapper providing consistent structure.
* Includes Html, Head, Body, Container with logo header, and Footer.
*/
export function EmailLayout({
preview,
children,
hideFooter = false,
showUnsubscribe,
}: EmailLayoutProps) {
export function EmailLayout({ preview, children, hideFooter = false }: EmailLayoutProps) {
const brand = getBrandConfig()
const baseUrl = getBaseUrl()
@@ -53,7 +43,7 @@ export function EmailLayout({
</Container>
{/* Footer in gray section */}
{!hideFooter && <EmailFooter baseUrl={baseUrl} showUnsubscribe={showUnsubscribe} />}
{!hideFooter && <EmailFooter baseUrl={baseUrl} />}
</Body>
</Html>
)
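Correspondingly, callers no longer thread an unsubscribe flag through the layout. A minimal sketch (illustrative; Text and baseStyles imports assumed from the email components used above):

export function PingEmail() {
  return (
    <EmailLayout preview='Ping from Sim'>
      <Text style={baseStyles.paragraph}>Hello,</Text>
    </EmailLayout>
  )
}
// hideFooter remains the only footer control on EmailLayout; per-email
// unsubscribe data lives on EmailFooter's `unsubscribe` prop.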

View File

@@ -54,7 +54,6 @@ export function BatchInvitationEmail({
return (
<EmailLayout
preview={`You've been invited to join ${organizationName}${hasWorkspaces ? ` and ${workspaceInvitations.length} workspace(s)` : ''}`}
showUnsubscribe={false}
>
<Text style={baseStyles.paragraph}>Hello,</Text>
<Text style={baseStyles.paragraph}>

View File

@@ -36,10 +36,7 @@ export function InvitationEmail({
}
return (
<EmailLayout
preview={`You've been invited to join ${organizationName} on ${brand.name}`}
showUnsubscribe={false}
>
<EmailLayout preview={`You've been invited to join ${organizationName} on ${brand.name}`}>
<Text style={baseStyles.paragraph}>Hello,</Text>
<Text style={baseStyles.paragraph}>
<strong>{inviterName}</strong> invited you to join <strong>{organizationName}</strong> on{' '}

View File

@@ -22,10 +22,7 @@ export function PollingGroupInvitationEmail({
const providerName = provider === 'google-email' ? 'Gmail' : 'Outlook'
return (
<EmailLayout
preview={`You've been invited to join ${pollingGroupName} on ${brand.name}`}
showUnsubscribe={false}
>
<EmailLayout preview={`You've been invited to join ${pollingGroupName} on ${brand.name}`}>
<Text style={baseStyles.paragraph}>Hello,</Text>
<Text style={baseStyles.paragraph}>
<strong>{inviterName}</strong> from <strong>{organizationName}</strong> has invited you to

View File

@@ -41,7 +41,6 @@ export function WorkspaceInvitationEmail({
return (
<EmailLayout
preview={`You've been invited to join the "${workspaceName}" workspace on ${brand.name}!`}
showUnsubscribe={false}
>
<Text style={baseStyles.paragraph}>Hello,</Text>
<Text style={baseStyles.paragraph}>

View File

@@ -73,7 +73,7 @@ export function WorkflowNotificationEmail({
: 'Your workflow completed successfully.'
return (
<EmailLayout preview={previewText} showUnsubscribe={true}>
<EmailLayout preview={previewText}>
<Text style={{ ...baseStyles.paragraph, marginTop: 0 }}>Hello,</Text>
<Text style={baseStyles.paragraph}>{message}</Text>

View File

@@ -32,10 +32,7 @@ export function HelpConfirmationEmail({
const typeLabel = getTypeLabel(type)
return (
<EmailLayout
preview={`Your ${typeLabel.toLowerCase()} has been received`}
showUnsubscribe={false}
>
<EmailLayout preview={`Your ${typeLabel.toLowerCase()} has been received`}>
<Text style={baseStyles.paragraph}>Hello,</Text>
<Text style={baseStyles.paragraph}>
We've received your <strong>{typeLabel.toLowerCase()}</strong> and will get back to you

View File

@@ -1739,12 +1739,12 @@ export function BrowserUseIcon(props: SVGProps<SVGSVGElement>) {
{...props}
version='1.0'
xmlns='http://www.w3.org/2000/svg'
width='28'
height='28'
width='150pt'
height='150pt'
viewBox='0 0 150 150'
preserveAspectRatio='xMidYMid meet'
>
<g transform='translate(0,150) scale(0.05,-0.05)' fill='currentColor' stroke='none'>
<g transform='translate(0,150) scale(0.05,-0.05)' fill='#000000' stroke='none'>
<path
d='M786 2713 c-184 -61 -353 -217 -439 -405 -76 -165 -65 -539 19 -666
l57 -85 -48 -124 c-203 -517 -79 -930 346 -1155 159 -85 441 -71 585 28 l111

View File

@@ -1,4 +1,4 @@
import { keepPreviousData, useMutation, useQuery, useQueryClient } from '@tanstack/react-query'
import { keepPreviousData, useQuery } from '@tanstack/react-query'
import type {
ChunkData,
ChunksPagination,
@@ -332,629 +332,3 @@ export function useDocumentChunkSearchQuery(
placeholderData: keepPreviousData,
})
}
export interface UpdateChunkParams {
knowledgeBaseId: string
documentId: string
chunkId: string
content?: string
enabled?: boolean
}
export async function updateChunk({
knowledgeBaseId,
documentId,
chunkId,
content,
enabled,
}: UpdateChunkParams): Promise<ChunkData> {
const body: Record<string, unknown> = {}
if (content !== undefined) body.content = content
if (enabled !== undefined) body.enabled = enabled
const response = await fetch(
`/api/knowledge/${knowledgeBaseId}/documents/${documentId}/chunks/${chunkId}`,
{
method: 'PUT',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(body),
}
)
if (!response.ok) {
const result = await response.json()
throw new Error(result.error || 'Failed to update chunk')
}
const result = await response.json()
if (!result?.success) {
throw new Error(result?.error || 'Failed to update chunk')
}
return result.data
}
export function useUpdateChunk() {
const queryClient = useQueryClient()
return useMutation({
mutationFn: updateChunk,
onSuccess: (_, { knowledgeBaseId, documentId }) => {
queryClient.invalidateQueries({
queryKey: knowledgeKeys.detail(knowledgeBaseId),
})
queryClient.invalidateQueries({
queryKey: knowledgeKeys.document(knowledgeBaseId, documentId),
})
},
})
}
export interface DeleteChunkParams {
knowledgeBaseId: string
documentId: string
chunkId: string
}
export async function deleteChunk({
knowledgeBaseId,
documentId,
chunkId,
}: DeleteChunkParams): Promise<void> {
const response = await fetch(
`/api/knowledge/${knowledgeBaseId}/documents/${documentId}/chunks/${chunkId}`,
{ method: 'DELETE' }
)
if (!response.ok) {
const result = await response.json()
throw new Error(result.error || 'Failed to delete chunk')
}
const result = await response.json()
if (!result?.success) {
throw new Error(result?.error || 'Failed to delete chunk')
}
}
export function useDeleteChunk() {
const queryClient = useQueryClient()
return useMutation({
mutationFn: deleteChunk,
onSuccess: (_, { knowledgeBaseId, documentId }) => {
queryClient.invalidateQueries({
queryKey: knowledgeKeys.detail(knowledgeBaseId),
})
queryClient.invalidateQueries({
queryKey: knowledgeKeys.document(knowledgeBaseId, documentId),
})
},
})
}
export interface CreateChunkParams {
knowledgeBaseId: string
documentId: string
content: string
enabled?: boolean
}
export async function createChunk({
knowledgeBaseId,
documentId,
content,
enabled = true,
}: CreateChunkParams): Promise<ChunkData> {
const response = await fetch(`/api/knowledge/${knowledgeBaseId}/documents/${documentId}/chunks`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ content, enabled }),
})
if (!response.ok) {
const result = await response.json()
throw new Error(result.error || 'Failed to create chunk')
}
const result = await response.json()
if (!result?.success || !result?.data) {
throw new Error(result?.error || 'Failed to create chunk')
}
return result.data
}
export function useCreateChunk() {
const queryClient = useQueryClient()
return useMutation({
mutationFn: createChunk,
onSuccess: (_, { knowledgeBaseId, documentId }) => {
queryClient.invalidateQueries({
queryKey: knowledgeKeys.detail(knowledgeBaseId),
})
queryClient.invalidateQueries({
queryKey: knowledgeKeys.document(knowledgeBaseId, documentId),
})
},
})
}
export interface UpdateDocumentParams {
knowledgeBaseId: string
documentId: string
updates: {
enabled?: boolean
filename?: string
retryProcessing?: boolean
markFailedDueToTimeout?: boolean
}
}
export async function updateDocument({
knowledgeBaseId,
documentId,
updates,
}: UpdateDocumentParams): Promise<DocumentData> {
const response = await fetch(`/api/knowledge/${knowledgeBaseId}/documents/${documentId}`, {
method: 'PUT',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(updates),
})
if (!response.ok) {
const result = await response.json()
throw new Error(result.error || 'Failed to update document')
}
const result = await response.json()
if (!result?.success) {
throw new Error(result?.error || 'Failed to update document')
}
return result.data
}
export function useUpdateDocument() {
const queryClient = useQueryClient()
return useMutation({
mutationFn: updateDocument,
onSuccess: (_, { knowledgeBaseId, documentId }) => {
queryClient.invalidateQueries({
queryKey: knowledgeKeys.detail(knowledgeBaseId),
})
queryClient.invalidateQueries({
queryKey: knowledgeKeys.document(knowledgeBaseId, documentId),
})
},
})
}
export interface DeleteDocumentParams {
knowledgeBaseId: string
documentId: string
}
export async function deleteDocument({
knowledgeBaseId,
documentId,
}: DeleteDocumentParams): Promise<void> {
const response = await fetch(`/api/knowledge/${knowledgeBaseId}/documents/${documentId}`, {
method: 'DELETE',
})
if (!response.ok) {
const result = await response.json()
throw new Error(result.error || 'Failed to delete document')
}
const result = await response.json()
if (!result?.success) {
throw new Error(result?.error || 'Failed to delete document')
}
}
export function useDeleteDocument() {
const queryClient = useQueryClient()
return useMutation({
mutationFn: deleteDocument,
onSuccess: (_, { knowledgeBaseId }) => {
queryClient.invalidateQueries({
queryKey: knowledgeKeys.detail(knowledgeBaseId),
})
},
})
}
export interface BulkDocumentOperationParams {
knowledgeBaseId: string
operation: 'enable' | 'disable' | 'delete'
documentIds: string[]
}
export interface BulkDocumentOperationResult {
successCount: number
failedCount: number
updatedDocuments?: Array<{ id: string; enabled: boolean }>
}
export async function bulkDocumentOperation({
knowledgeBaseId,
operation,
documentIds,
}: BulkDocumentOperationParams): Promise<BulkDocumentOperationResult> {
const response = await fetch(`/api/knowledge/${knowledgeBaseId}/documents`, {
method: 'PATCH',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ operation, documentIds }),
})
if (!response.ok) {
const result = await response.json()
throw new Error(result.error || `Failed to ${operation} documents`)
}
const result = await response.json()
if (!result?.success) {
throw new Error(result?.error || `Failed to ${operation} documents`)
}
return result.data
}
export function useBulkDocumentOperation() {
const queryClient = useQueryClient()
return useMutation({
mutationFn: bulkDocumentOperation,
onSuccess: (_, { knowledgeBaseId }) => {
queryClient.invalidateQueries({
queryKey: knowledgeKeys.detail(knowledgeBaseId),
})
},
})
}
export interface CreateKnowledgeBaseParams {
name: string
description?: string
workspaceId: string
chunkingConfig: {
maxSize: number
minSize: number
overlap: number
}
}
export async function createKnowledgeBase(
params: CreateKnowledgeBaseParams
): Promise<KnowledgeBaseData> {
const response = await fetch('/api/knowledge', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(params),
})
if (!response.ok) {
const result = await response.json()
throw new Error(result.error || 'Failed to create knowledge base')
}
const result = await response.json()
if (!result?.success || !result?.data) {
throw new Error(result?.error || 'Failed to create knowledge base')
}
return result.data
}
export function useCreateKnowledgeBase(workspaceId?: string) {
const queryClient = useQueryClient()
return useMutation({
mutationFn: createKnowledgeBase,
onSuccess: () => {
queryClient.invalidateQueries({
queryKey: knowledgeKeys.list(workspaceId),
})
},
})
}
export interface UpdateKnowledgeBaseParams {
knowledgeBaseId: string
updates: {
name?: string
description?: string
workspaceId?: string | null
}
}
export async function updateKnowledgeBase({
knowledgeBaseId,
updates,
}: UpdateKnowledgeBaseParams): Promise<KnowledgeBaseData> {
const response = await fetch(`/api/knowledge/${knowledgeBaseId}`, {
method: 'PUT',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(updates),
})
if (!response.ok) {
const result = await response.json()
throw new Error(result.error || 'Failed to update knowledge base')
}
const result = await response.json()
if (!result?.success) {
throw new Error(result?.error || 'Failed to update knowledge base')
}
return result.data
}
export function useUpdateKnowledgeBase(workspaceId?: string) {
const queryClient = useQueryClient()
return useMutation({
mutationFn: updateKnowledgeBase,
onSuccess: (_, { knowledgeBaseId }) => {
queryClient.invalidateQueries({
queryKey: knowledgeKeys.detail(knowledgeBaseId),
})
queryClient.invalidateQueries({
queryKey: knowledgeKeys.list(workspaceId),
})
},
})
}
export interface DeleteKnowledgeBaseParams {
knowledgeBaseId: string
}
export async function deleteKnowledgeBase({
knowledgeBaseId,
}: DeleteKnowledgeBaseParams): Promise<void> {
const response = await fetch(`/api/knowledge/${knowledgeBaseId}`, {
method: 'DELETE',
})
if (!response.ok) {
const result = await response.json()
throw new Error(result.error || 'Failed to delete knowledge base')
}
const result = await response.json()
if (!result?.success) {
throw new Error(result?.error || 'Failed to delete knowledge base')
}
}
export function useDeleteKnowledgeBase(workspaceId?: string) {
const queryClient = useQueryClient()
return useMutation({
mutationFn: deleteKnowledgeBase,
onSuccess: () => {
queryClient.invalidateQueries({
queryKey: knowledgeKeys.list(workspaceId),
})
},
})
}
export interface BulkChunkOperationParams {
knowledgeBaseId: string
documentId: string
operation: 'enable' | 'disable' | 'delete'
chunkIds: string[]
}
export interface BulkChunkOperationResult {
successCount: number
failedCount: number
results: Array<{
operation: string
chunkIds: string[]
}>
}
export async function bulkChunkOperation({
knowledgeBaseId,
documentId,
operation,
chunkIds,
}: BulkChunkOperationParams): Promise<BulkChunkOperationResult> {
const response = await fetch(`/api/knowledge/${knowledgeBaseId}/documents/${documentId}/chunks`, {
method: 'PATCH',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ operation, chunkIds }),
})
if (!response.ok) {
const result = await response.json()
throw new Error(result.error || `Failed to ${operation} chunks`)
}
const result = await response.json()
if (!result?.success) {
throw new Error(result?.error || `Failed to ${operation} chunks`)
}
return result.data
}
export function useBulkChunkOperation() {
const queryClient = useQueryClient()
return useMutation({
mutationFn: bulkChunkOperation,
onSuccess: (_, { knowledgeBaseId, documentId }) => {
queryClient.invalidateQueries({
queryKey: knowledgeKeys.detail(knowledgeBaseId),
})
queryClient.invalidateQueries({
queryKey: knowledgeKeys.document(knowledgeBaseId, documentId),
})
},
})
}
export interface UpdateDocumentTagsParams {
knowledgeBaseId: string
documentId: string
tags: Record<string, string>
}
export async function updateDocumentTags({
knowledgeBaseId,
documentId,
tags,
}: UpdateDocumentTagsParams): Promise<DocumentData> {
const response = await fetch(`/api/knowledge/${knowledgeBaseId}/documents/${documentId}`, {
method: 'PUT',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(tags),
})
if (!response.ok) {
const result = await response.json()
throw new Error(result.error || 'Failed to update document tags')
}
const result = await response.json()
if (!result?.success) {
throw new Error(result?.error || 'Failed to update document tags')
}
return result.data
}
export function useUpdateDocumentTags() {
const queryClient = useQueryClient()
return useMutation({
mutationFn: updateDocumentTags,
onSuccess: (_, { knowledgeBaseId, documentId }) => {
queryClient.invalidateQueries({
queryKey: knowledgeKeys.detail(knowledgeBaseId),
})
queryClient.invalidateQueries({
queryKey: knowledgeKeys.document(knowledgeBaseId, documentId),
})
},
})
}
export interface TagDefinitionData {
id: string
tagSlot: string
displayName: string
fieldType: string
createdAt: string
updatedAt: string
}
export interface CreateTagDefinitionParams {
knowledgeBaseId: string
displayName: string
fieldType: string
}
async function fetchNextAvailableSlot(knowledgeBaseId: string, fieldType: string): Promise<string> {
const response = await fetch(
`/api/knowledge/${knowledgeBaseId}/next-available-slot?fieldType=${fieldType}`
)
if (!response.ok) {
throw new Error('Failed to get available slot')
}
const result = await response.json()
if (!result.success || !result.data?.nextAvailableSlot) {
throw new Error('No available tag slots for this field type')
}
return result.data.nextAvailableSlot
}
export async function createTagDefinition({
knowledgeBaseId,
displayName,
fieldType,
}: CreateTagDefinitionParams): Promise<TagDefinitionData> {
const tagSlot = await fetchNextAvailableSlot(knowledgeBaseId, fieldType)
const response = await fetch(`/api/knowledge/${knowledgeBaseId}/tag-definitions`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ tagSlot, displayName, fieldType }),
})
if (!response.ok) {
const result = await response.json()
throw new Error(result.error || 'Failed to create tag definition')
}
const result = await response.json()
if (!result?.success || !result?.data) {
throw new Error(result?.error || 'Failed to create tag definition')
}
return result.data
}
export function useCreateTagDefinition() {
const queryClient = useQueryClient()
return useMutation({
mutationFn: createTagDefinition,
onSuccess: (_, { knowledgeBaseId }) => {
queryClient.invalidateQueries({
queryKey: knowledgeKeys.detail(knowledgeBaseId),
})
},
})
}
export interface DeleteTagDefinitionParams {
knowledgeBaseId: string
tagDefinitionId: string
}
export async function deleteTagDefinition({
knowledgeBaseId,
tagDefinitionId,
}: DeleteTagDefinitionParams): Promise<void> {
const response = await fetch(
`/api/knowledge/${knowledgeBaseId}/tag-definitions/${tagDefinitionId}`,
{ method: 'DELETE' }
)
if (!response.ok) {
const result = await response.json()
throw new Error(result.error || 'Failed to delete tag definition')
}
const result = await response.json()
if (!result?.success) {
throw new Error(result?.error || 'Failed to delete tag definition')
}
}
export function useDeleteTagDefinition() {
const queryClient = useQueryClient()
return useMutation({
mutationFn: deleteTagDefinition,
onSuccess: (_, { knowledgeBaseId }) => {
queryClient.invalidateQueries({
queryKey: knowledgeKeys.detail(knowledgeBaseId),
})
},
})
}
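A usage sketch for the mutation-hook pattern above (illustrative only; assumes the TanStack Query v5 mutation API and a component tree wrapped in a QueryClientProvider):

function DeleteChunkButton(props: { kbId: string; docId: string; chunkId: string }) {
  const deleteChunk = useDeleteChunk()
  return (
    <button
      disabled={deleteChunk.isPending}
      onClick={() =>
        deleteChunk.mutate(
          { knowledgeBaseId: props.kbId, documentId: props.docId, chunkId: props.chunkId },
          // the hook's onSuccess above already invalidates the KB detail and document queries
          { onError: (err) => console.error(err.message) }
        )
      }
    >
      {deleteChunk.isPending ? 'Deleting…' : 'Delete chunk'}
    </button>
  )
}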

View File

@@ -203,11 +203,10 @@ function resolveProjectSelector(
): SelectorResolution {
const serviceId = subBlock.serviceId
const context = buildBaseContext(args)
const selectorId = subBlock.canonicalParamId ?? subBlock.id
switch (serviceId) {
case 'linear': {
const key: SelectorKey = selectorId === 'teamId' ? 'linear.teams' : 'linear.projects'
const key: SelectorKey = subBlock.id === 'teamId' ? 'linear.teams' : 'linear.projects'
return { key, context, allowSearch: true }
}
case 'jira':

View File

@@ -21,8 +21,6 @@ import {
type BatchToggleEnabledOperation,
type BatchToggleHandlesOperation,
type BatchUpdateParentOperation,
captureLatestEdges,
captureLatestSubBlockValues,
createOperationEntry,
runWithUndoRedoRecordingSuspended,
type UpdateParentOperation,
@@ -30,6 +28,7 @@ import {
} from '@/stores/undo-redo'
import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
import { useSubBlockStore } from '@/stores/workflows/subblock/store'
import { mergeSubblockState } from '@/stores/workflows/utils'
import { useWorkflowStore } from '@/stores/workflows/workflow/store'
import type { BlockState } from '@/stores/workflows/workflow/types'
@@ -446,19 +445,34 @@ export function useUndoRedo() {
break
}
const latestEdges = captureLatestEdges(
useWorkflowStore.getState().edges,
existingBlockIds
)
const latestEdges = useWorkflowStore
.getState()
.edges.filter(
(e) => existingBlockIds.includes(e.source) || existingBlockIds.includes(e.target)
)
batchRemoveOp.data.edgeSnapshots = latestEdges
const latestSubBlockValues = captureLatestSubBlockValues(
useWorkflowStore.getState().blocks,
activeWorkflowId,
existingBlockIds
)
const latestSubBlockValues: Record<string, Record<string, unknown>> = {}
existingBlockIds.forEach((blockId) => {
const merged = mergeSubblockState(
useWorkflowStore.getState().blocks,
activeWorkflowId,
blockId
)
const block = merged[blockId]
if (block?.subBlocks) {
const values: Record<string, unknown> = {}
Object.entries(block.subBlocks).forEach(([subBlockId, subBlock]) => {
if (subBlock.value !== null && subBlock.value !== undefined) {
values[subBlockId] = subBlock.value
}
})
if (Object.keys(values).length > 0) {
latestSubBlockValues[blockId] = values
}
}
})
batchRemoveOp.data.subBlockValues = latestSubBlockValues
;(entry.operation as BatchAddBlocksOperation).data.subBlockValues = latestSubBlockValues
addToQueue({
id: opId,
@@ -1139,20 +1153,6 @@ export function useUndoRedo() {
break
}
const latestEdges = captureLatestEdges(
useWorkflowStore.getState().edges,
existingBlockIds
)
batchOp.data.edgeSnapshots = latestEdges
const latestSubBlockValues = captureLatestSubBlockValues(
useWorkflowStore.getState().blocks,
activeWorkflowId,
existingBlockIds
)
batchOp.data.subBlockValues = latestSubBlockValues
;(entry.inverse as BatchAddBlocksOperation).data.subBlockValues = latestSubBlockValues
addToQueue({
id: opId,
operation: {

View File

@@ -29,11 +29,13 @@ export class DocsChunker {
private readonly baseUrl: string
constructor(options: DocsChunkerOptions = {}) {
// Use the existing TextChunker for chunking logic
this.textChunker = new TextChunker({
chunkSize: options.chunkSize ?? 300, // Max 300 tokens per chunk
minCharactersPerChunk: options.minCharactersPerChunk ?? 1,
chunkOverlap: options.chunkOverlap ?? 50,
})
// Use localhost docs in development, production docs otherwise
this.baseUrl = options.baseUrl ?? 'https://docs.sim.ai'
}
@@ -72,18 +74,24 @@ export class DocsChunker {
const content = await fs.readFile(filePath, 'utf-8')
const relativePath = path.relative(basePath, filePath)
// Parse frontmatter and content
const { data: frontmatter, content: markdownContent } = this.parseFrontmatter(content)
// Extract headers from the content
const headers = this.extractHeaders(markdownContent)
// Generate document URL
const documentUrl = this.generateDocumentUrl(relativePath)
// Split content into chunks
const textChunks = await this.splitContent(markdownContent)
// Generate embeddings for all chunks at once (batch processing)
logger.info(`Generating embeddings for ${textChunks.length} chunks in ${relativePath}`)
const embeddings = textChunks.length > 0 ? await generateEmbeddings(textChunks) : []
const embeddingModel = 'text-embedding-3-small'
// Convert to DocChunk objects with header context and embeddings
const chunks: DocChunk[] = []
let currentPosition = 0
@@ -92,6 +100,7 @@ export class DocsChunker {
const chunkStart = currentPosition
const chunkEnd = currentPosition + chunkText.length
// Find the most relevant header for this chunk
const relevantHeader = this.findRelevantHeader(headers, chunkStart)
const chunk: DocChunk = {
@@ -177,21 +186,11 @@ export class DocsChunker {
/**
* Generate document URL from relative path
* Handles index.mdx files specially - they are served at the parent directory path
*/
private generateDocumentUrl(relativePath: string): string {
// Convert file path to URL path
// e.g., "tools/knowledge.mdx" -> "/tools/knowledge"
// e.g., "triggers/index.mdx" -> "/triggers" (NOT "/triggers/index")
let urlPath = relativePath.replace(/\.mdx$/, '').replace(/\\/g, '/') // Handle Windows paths
// In fumadocs, index.mdx files are served at the parent directory path
// e.g., "triggers/index" -> "triggers"
if (urlPath.endsWith('/index')) {
urlPath = urlPath.slice(0, -6) // Remove "/index"
} else if (urlPath === 'index') {
urlPath = '' // Root index.mdx
}
const urlPath = relativePath.replace(/\.mdx$/, '').replace(/\\/g, '/') // Handle Windows paths
return `${this.baseUrl}/${urlPath}`
}
@@ -202,6 +201,7 @@ export class DocsChunker {
private findRelevantHeader(headers: HeaderInfo[], position: number): HeaderInfo | null {
if (headers.length === 0) return null
// Find the last header that comes before this position
let relevantHeader: HeaderInfo | null = null
for (const header of headers) {
@@ -219,18 +219,23 @@ export class DocsChunker {
* Split content into chunks using the existing TextChunker with table awareness
*/
private async splitContent(content: string): Promise<string[]> {
// Clean the content first
const cleanedContent = this.cleanContent(content)
// Detect table boundaries to avoid splitting them
const tableBoundaries = this.detectTableBoundaries(cleanedContent)
// Use the existing TextChunker
const chunks = await this.textChunker.chunk(cleanedContent)
// Post-process chunks to ensure tables aren't split
const processedChunks = this.mergeTableChunks(
chunks.map((chunk) => chunk.text),
tableBoundaries,
cleanedContent
)
// Ensure no chunk exceeds 300 tokens
const finalChunks = this.enforceSizeLimit(processedChunks)
return finalChunks
@@ -268,6 +273,7 @@ export class DocsChunker {
const [, frontmatterText, markdownContent] = match
const data: Frontmatter = {}
// Simple YAML parsing for title and description
const lines = frontmatterText.split('\n')
for (const line of lines) {
const colonIndex = line.indexOf(':')
@@ -288,6 +294,7 @@ export class DocsChunker {
* Estimate token count (rough approximation)
*/
private estimateTokens(text: string): number {
// Rough approximation: 1 token ≈ 4 characters
return Math.ceil(text.length / 4)
}
@@ -304,13 +311,17 @@ export class DocsChunker {
for (let i = 0; i < lines.length; i++) {
const line = lines[i].trim()
// Detect table start (markdown table row with pipes)
if (line.includes('|') && line.split('|').length >= 3 && !inTable) {
// Check if next line is table separator (contains dashes and pipes)
const nextLine = lines[i + 1]?.trim()
if (nextLine?.includes('|') && nextLine.includes('-')) {
inTable = true
tableStart = i
}
} else if (inTable && (!line.includes('|') || line === '' || line.startsWith('#'))) {
}
// Detect table end (empty line or non-table content)
else if (inTable && (!line.includes('|') || line === '' || line.startsWith('#'))) {
tables.push({
start: this.getCharacterPosition(lines, tableStart),
end: this.getCharacterPosition(lines, i - 1) + lines[i - 1]?.length || 0,
@@ -319,6 +330,7 @@ export class DocsChunker {
}
}
// Handle table at end of content
if (inTable && tableStart >= 0) {
tables.push({
start: this.getCharacterPosition(lines, tableStart),
@@ -355,6 +367,7 @@ export class DocsChunker {
const chunkStart = originalContent.indexOf(chunk, currentPosition)
const chunkEnd = chunkStart + chunk.length
// Check if this chunk intersects with any table
const intersectsTable = tableBoundaries.some(
(table) =>
(chunkStart >= table.start && chunkStart <= table.end) ||
@@ -363,6 +376,7 @@ export class DocsChunker {
)
if (intersectsTable) {
// Find which table(s) this chunk intersects with
const affectedTables = tableBoundaries.filter(
(table) =>
(chunkStart >= table.start && chunkStart <= table.end) ||
@@ -370,10 +384,12 @@ export class DocsChunker {
(chunkStart <= table.start && chunkEnd >= table.end)
)
// Create a chunk that includes the complete table(s)
const minStart = Math.min(chunkStart, ...affectedTables.map((t) => t.start))
const maxEnd = Math.max(chunkEnd, ...affectedTables.map((t) => t.end))
const completeChunk = originalContent.slice(minStart, maxEnd)
// Only add if we haven't already included this content
if (!mergedChunks.some((existing) => existing.includes(completeChunk.trim()))) {
mergedChunks.push(completeChunk.trim())
}
@@ -384,7 +400,7 @@ export class DocsChunker {
currentPosition = chunkEnd
}
return mergedChunks.filter((chunk) => chunk.length > 50)
return mergedChunks.filter((chunk) => chunk.length > 50) // Filter out tiny chunks
}
/**
@@ -397,8 +413,10 @@ export class DocsChunker {
const tokens = this.estimateTokens(chunk)
if (tokens <= 300) {
// Chunk is within limit
finalChunks.push(chunk)
} else {
// Chunk is too large - split it
const lines = chunk.split('\n')
let currentChunk = ''
@@ -408,6 +426,7 @@ export class DocsChunker {
if (this.estimateTokens(testChunk) <= 300) {
currentChunk = testChunk
} else {
// Adding this line would exceed limit
if (currentChunk.trim()) {
finalChunks.push(currentChunk.trim())
}
@@ -415,6 +434,7 @@ export class DocsChunker {
}
}
// Add final chunk if it has content
if (currentChunk.trim()) {
finalChunks.push(currentChunk.trim())
}
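Restated standalone for illustration (both helpers are private on DocsChunker; this matches the variant without index.mdx special-casing), the URL mapping and the 4-characters-per-token heuristic behave as:

const estimateTokens = (text: string): number => Math.ceil(text.length / 4)

const generateDocumentUrl = (baseUrl: string, relativePath: string): string => {
  const urlPath = relativePath.replace(/\.mdx$/, '').replace(/\\/g, '/')
  return `${baseUrl}/${urlPath}`
}

console.log(generateDocumentUrl('https://docs.sim.ai', 'tools/knowledge.mdx'))
// -> https://docs.sim.ai/tools/knowledge
console.log(generateDocumentUrl('https://docs.sim.ai', 'triggers/index.mdx'))
// -> https://docs.sim.ai/triggers/index (no index.mdx special-casing in this variant)
console.log(estimateTokens('a'.repeat(1200))) // -> 300, the per-chunk ceiling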

View File

@@ -3,15 +3,6 @@ import type { CopilotMode, CopilotModelId, CopilotTransportMode } from '@/lib/co
const logger = createLogger('CopilotAPI')
/**
* Response from chat initiation endpoint
*/
export interface ChatInitResponse {
success: boolean
streamId: string
chatId: string
}
/**
* Citation interface for documentation references
*/
@@ -124,16 +115,10 @@ async function handleApiError(response: Response, defaultMessage: string): Promi
/**
* Send a streaming message to the copilot chat API
* This is the main API endpoint that handles all chat operations
*
* Server-first architecture:
* 1. POST to /api/copilot/chat - starts background processing, returns { streamId, chatId }
* 2. Connect to /api/copilot/stream/{streamId} for SSE stream
*
* This ensures stream continues server-side even if client disconnects
*/
export async function sendStreamingMessage(
request: SendMessageRequest
): Promise<StreamingResponse & { streamId?: string; chatId?: string }> {
): Promise<StreamingResponse> {
try {
const { abortSignal, ...requestBody } = request
try {
@@ -153,83 +138,34 @@ export async function sendStreamingMessage(
contextsPreview: preview,
})
} catch {}
// Step 1: Initiate chat - server starts background processing
const initResponse = await fetch('/api/copilot/chat', {
const response = await fetch('/api/copilot/chat', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ ...requestBody, stream: true }),
credentials: 'include',
})
if (!initResponse.ok) {
const errorMessage = await handleApiError(initResponse, 'Failed to initiate chat')
return {
success: false,
error: errorMessage,
status: initResponse.status,
}
}
const initData: ChatInitResponse = await initResponse.json()
if (!initData.success || !initData.streamId) {
return {
success: false,
error: 'Failed to get stream ID from server',
status: 500,
}
}
logger.info('Chat initiated, connecting to stream', {
streamId: initData.streamId,
chatId: initData.chatId,
})
// Step 2: Connect to stream endpoint for SSE
const streamResponse = await fetch(`/api/copilot/stream/${initData.streamId}`, {
method: 'GET',
headers: { Accept: 'text/event-stream' },
signal: abortSignal,
credentials: 'include',
credentials: 'include', // Include cookies for session authentication
})
if (!streamResponse.ok) {
// Handle completed/not found cases
if (streamResponse.status === 404) {
return {
success: false,
error: 'Stream not found or expired',
status: 404,
streamId: initData.streamId,
chatId: initData.chatId,
}
}
const errorMessage = await handleApiError(streamResponse, 'Failed to connect to stream')
if (!response.ok) {
const errorMessage = await handleApiError(response, 'Failed to send streaming message')
return {
success: false,
error: errorMessage,
status: streamResponse.status,
streamId: initData.streamId,
chatId: initData.chatId,
status: response.status,
}
}
if (!streamResponse.body) {
if (!response.body) {
return {
success: false,
error: 'No stream body received',
error: 'No response body received',
status: 500,
streamId: initData.streamId,
chatId: initData.chatId,
}
}
return {
success: true,
stream: streamResponse.body,
streamId: initData.streamId,
chatId: initData.chatId,
stream: response.body,
}
} catch (error) {
// Handle AbortError gracefully - this is expected when user aborts
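A consumer sketch for the single-request variant above (illustrative; the request body shape beyond the fields shown is assumed):

async function streamToConsole() {
  const res = await sendStreamingMessage({ message: 'hello' } as SendMessageRequest)
  if (!res.success || !res.stream) throw new Error(res.error ?? 'no stream')
  const reader = res.stream.getReader()
  const decoder = new TextDecoder()
  for (;;) {
    const { done, value } = await reader.read()
    if (done) break
    console.log(decoder.decode(value, { stream: true })) // raw SSE frames from /api/copilot/chat
  }
}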

View File

@@ -1,743 +0,0 @@
/**
* Client Renderer - Handles render events from the server
*
* This is the client-side counterpart to the stream transformer.
* It receives render events from the server and updates the UI accordingly.
* All business logic (tool execution, persistence) is handled server-side.
* The client just renders.
*/
import { createLogger } from '@sim/logger'
import type { RenderEvent, RenderEventType } from './render-events'
const logger = createLogger('ClientRenderer')
// ============================================================================
// Types
// ============================================================================
export interface RendererState {
// Stream state
streamId: string | null
chatId: string | null
isStreaming: boolean
isComplete: boolean
hasError: boolean
errorMessage: string | null
// Message state
currentMessageId: string | null
content: string
// Thinking state
isThinking: boolean
thinkingContent: string
// Tool calls
toolCalls: Map<string, ToolCallState>
// Plan state
isCapturingPlan: boolean
planContent: string
planTodos: PlanTodo[]
// Options state
isCapturingOptions: boolean
optionsContent: string
options: string[]
// Subagent state
activeSubagents: Map<string, SubagentState>
// Interrupts
pendingInterrupts: Map<string, InterruptState>
}
export interface ToolCallState {
id: string
name: string
args: Record<string, unknown>
status: 'pending' | 'generating' | 'executing' | 'success' | 'error' | 'aborted'
result?: unknown
error?: string
display: {
label: string
description?: string
}
}
export interface SubagentState {
parentToolCallId: string
subagentId: string
label?: string
toolCalls: Map<string, ToolCallState>
}
export interface PlanTodo {
id: string
content: string
status: 'pending' | 'in_progress' | 'completed'
}
export interface InterruptState {
toolCallId: string
toolName: string
options: Array<{
id: string
label: string
description?: string
variant?: 'default' | 'destructive' | 'outline'
}>
message?: string
}
export interface RendererCallbacks {
/** Called when state changes - trigger UI re-render */
onStateChange: (state: RendererState) => void
/** Called when a diff is ready - read workflow from DB */
onDiffReady?: (workflowId: string, toolCallId: string) => void
/** Called when user needs to resolve an interrupt */
onInterruptRequired?: (interrupt: InterruptState) => void
/** Called when stream completes */
onStreamComplete?: () => void
/** Called when stream errors */
onStreamError?: (error: string) => void
}
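/**
 * Illustrative wiring sketch (not part of the original file): a consumer hands
 * the renderer a required state sink plus optional lifecycle hooks, then feeds
 * it decoded render events via processEvent/processEvents.
 */
function createConsoleRenderer(onState: (s: RendererState) => void): ClientRenderer {
  return new ClientRenderer({
    onStateChange: onState, // fires after every handled event
    onStreamError: (error) => logger.error('Stream failed', { error }),
    onStreamComplete: () => logger.info('Stream complete'),
  })
}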
// ============================================================================
// Renderer Class
// ============================================================================
export class ClientRenderer {
private state: RendererState
private callbacks: RendererCallbacks
private eventQueue: RenderEvent[] = []
private isProcessing = false
constructor(callbacks: RendererCallbacks) {
this.callbacks = callbacks
this.state = this.createInitialState()
}
private createInitialState(): RendererState {
return {
streamId: null,
chatId: null,
isStreaming: false,
isComplete: false,
hasError: false,
errorMessage: null,
currentMessageId: null,
content: '',
isThinking: false,
thinkingContent: '',
toolCalls: new Map(),
isCapturingPlan: false,
planContent: '',
planTodos: [],
isCapturingOptions: false,
optionsContent: '',
options: [],
activeSubagents: new Map(),
pendingInterrupts: new Map(),
}
}
/** Reset renderer state for a new stream */
reset(): void {
this.state = this.createInitialState()
this.eventQueue = []
this.isProcessing = false
this.notifyStateChange()
}
/** Get current state (immutable copy) */
getState(): Readonly<RendererState> {
return { ...this.state }
}
/** Process a render event from the server */
async processEvent(event: RenderEvent): Promise<void> {
this.eventQueue.push(event)
await this.processQueue()
}
/** Process multiple events (for replay) */
async processEvents(events: RenderEvent[]): Promise<void> {
this.eventQueue.push(...events)
await this.processQueue()
}
private async processQueue(): Promise<void> {
if (this.isProcessing) return
this.isProcessing = true
try {
while (this.eventQueue.length > 0) {
const event = this.eventQueue.shift()!
await this.handleEvent(event)
}
} finally {
this.isProcessing = false
}
}
private async handleEvent(event: RenderEvent): Promise<void> {
const type = event.type as RenderEventType
switch (type) {
// ========== Stream Lifecycle ==========
case 'stream_start':
this.handleStreamStart(event as any)
break
case 'stream_end':
this.handleStreamEnd()
break
case 'stream_error':
this.handleStreamError(event as any)
break
// ========== Message Lifecycle ==========
case 'message_start':
this.handleMessageStart(event as any)
break
case 'message_saved':
this.handleMessageSaved(event as any)
break
case 'message_end':
this.handleMessageEnd(event as any)
break
// ========== Text Content ==========
case 'text_delta':
this.handleTextDelta(event as any)
break
// ========== Thinking ==========
case 'thinking_start':
this.handleThinkingStart()
break
case 'thinking_delta':
this.handleThinkingDelta(event as any)
break
case 'thinking_end':
this.handleThinkingEnd()
break
// ========== Tool Calls ==========
case 'tool_pending':
this.handleToolPending(event as any)
break
case 'tool_generating':
this.handleToolGenerating(event as any)
break
case 'tool_executing':
this.handleToolExecuting(event as any)
break
case 'tool_success':
this.handleToolSuccess(event as any)
break
case 'tool_error':
this.handleToolError(event as any)
break
case 'tool_aborted':
this.handleToolAborted(event as any)
break
// ========== Interrupts ==========
case 'interrupt_show':
this.handleInterruptShow(event as any)
break
case 'interrupt_resolved':
this.handleInterruptResolved(event as any)
break
// ========== Diffs ==========
case 'diff_ready':
this.handleDiffReady(event as any)
break
// ========== Plans ==========
case 'plan_start':
this.handlePlanStart()
break
case 'plan_delta':
this.handlePlanDelta(event as any)
break
case 'plan_end':
this.handlePlanEnd(event as any)
break
// ========== Options ==========
case 'options_start':
this.handleOptionsStart()
break
case 'options_delta':
this.handleOptionsDelta(event as any)
break
case 'options_end':
this.handleOptionsEnd(event as any)
break
// ========== Subagents ==========
case 'subagent_start':
this.handleSubagentStart(event as any)
break
case 'subagent_tool_pending':
this.handleSubagentToolPending(event as any)
break
case 'subagent_tool_executing':
this.handleSubagentToolExecuting(event as any)
break
case 'subagent_tool_success':
this.handleSubagentToolSuccess(event as any)
break
case 'subagent_tool_error':
this.handleSubagentToolError(event as any)
break
case 'subagent_end':
this.handleSubagentEnd(event as any)
break
// ========== Chat Metadata ==========
case 'chat_id':
this.state.chatId = (event as any).chatId
this.notifyStateChange()
break
case 'title_updated':
// Title updates are handled externally
logger.debug('Title updated', { title: (event as any).title })
break
default:
logger.warn('Unknown render event type', { type })
}
}
// ============================================================================
// Event Handlers
// ============================================================================
private handleStreamStart(event: {
streamId: string
chatId: string
userMessageId: string
assistantMessageId: string
}): void {
this.state.streamId = event.streamId
this.state.chatId = event.chatId
this.state.currentMessageId = event.assistantMessageId
this.state.isStreaming = true
this.state.isComplete = false
this.state.hasError = false
this.notifyStateChange()
}
private handleStreamEnd(): void {
this.state.isStreaming = false
this.state.isComplete = true
this.notifyStateChange()
this.callbacks.onStreamComplete?.()
}
private handleStreamError(event: { error: string }): void {
this.state.isStreaming = false
this.state.hasError = true
this.state.errorMessage = event.error
this.notifyStateChange()
this.callbacks.onStreamError?.(event.error)
}
private handleMessageStart(event: { messageId: string; role: string }): void {
if (event.role === 'assistant') {
this.state.currentMessageId = event.messageId
this.state.content = ''
}
this.notifyStateChange()
}
private handleMessageSaved(event: { messageId: string; refreshFromDb?: boolean }): void {
logger.debug('Message saved', { messageId: event.messageId, refresh: event.refreshFromDb })
// If refreshFromDb is true, the message was saved with special state (like diff markers)
// The client should refresh from DB to get the latest state
}
private handleMessageEnd(event: { messageId: string }): void {
logger.debug('Message end', { messageId: event.messageId })
}
private handleTextDelta(event: { content: string }): void {
this.state.content += event.content
this.notifyStateChange()
}
private handleThinkingStart(): void {
this.state.isThinking = true
this.state.thinkingContent = ''
this.notifyStateChange()
}
private handleThinkingDelta(event: { content: string }): void {
this.state.thinkingContent += event.content
this.notifyStateChange()
}
private handleThinkingEnd(): void {
this.state.isThinking = false
this.notifyStateChange()
}
private handleToolPending(event: {
toolCallId: string
toolName: string
args: Record<string, unknown>
display: { label: string; description?: string }
}): void {
this.state.toolCalls.set(event.toolCallId, {
id: event.toolCallId,
name: event.toolName,
args: event.args,
status: 'pending',
display: event.display,
})
this.notifyStateChange()
}
private handleToolGenerating(event: {
toolCallId: string
argsPartial?: Record<string, unknown>
}): void {
const tool = this.state.toolCalls.get(event.toolCallId)
if (tool) {
tool.status = 'generating'
if (event.argsPartial) {
tool.args = event.argsPartial
}
}
this.notifyStateChange()
}
private handleToolExecuting(event: { toolCallId: string }): void {
const tool = this.state.toolCalls.get(event.toolCallId)
if (tool) {
tool.status = 'executing'
}
this.notifyStateChange()
}
private handleToolSuccess(event: {
toolCallId: string
result?: unknown
display?: { label: string; description?: string }
workflowId?: string
hasDiff?: boolean
}): void {
const tool = this.state.toolCalls.get(event.toolCallId)
if (tool) {
tool.status = 'success'
tool.result = event.result
if (event.display) {
tool.display = event.display
}
}
this.notifyStateChange()
}
private handleToolError(event: {
toolCallId: string
error: string
display?: { label: string; description?: string }
}): void {
const tool = this.state.toolCalls.get(event.toolCallId)
if (tool) {
tool.status = 'error'
tool.error = event.error
if (event.display) {
tool.display = event.display
}
}
this.notifyStateChange()
}
private handleToolAborted(event: { toolCallId: string; reason?: string }): void {
const tool = this.state.toolCalls.get(event.toolCallId)
if (tool) {
tool.status = 'aborted'
tool.error = event.reason
}
this.notifyStateChange()
}
private handleInterruptShow(event: {
toolCallId: string
toolName: string
options: Array<{
id: string
label: string
description?: string
variant?: 'default' | 'destructive' | 'outline'
}>
message?: string
}): void {
this.state.pendingInterrupts.set(event.toolCallId, {
toolCallId: event.toolCallId,
toolName: event.toolName,
options: event.options,
message: event.message,
})
this.notifyStateChange()
this.callbacks.onInterruptRequired?.({
toolCallId: event.toolCallId,
toolName: event.toolName,
options: event.options,
message: event.message,
})
}
private handleInterruptResolved(event: {
toolCallId: string
choice: string
approved: boolean
}): void {
this.state.pendingInterrupts.delete(event.toolCallId)
this.notifyStateChange()
}
private handleDiffReady(event: { workflowId: string; toolCallId: string }): void {
this.callbacks.onDiffReady?.(event.workflowId, event.toolCallId)
}
private handlePlanStart(): void {
this.state.isCapturingPlan = true
this.state.planContent = ''
this.notifyStateChange()
}
private handlePlanDelta(event: { content: string }): void {
this.state.planContent += event.content
this.notifyStateChange()
}
private handlePlanEnd(event: { todos: PlanTodo[] }): void {
this.state.isCapturingPlan = false
this.state.planTodos = event.todos
this.notifyStateChange()
}
private handleOptionsStart(): void {
this.state.isCapturingOptions = true
this.state.optionsContent = ''
this.notifyStateChange()
}
private handleOptionsDelta(event: { content: string }): void {
this.state.optionsContent += event.content
this.notifyStateChange()
}
private handleOptionsEnd(event: { options: string[] }): void {
this.state.isCapturingOptions = false
this.state.options = event.options
this.notifyStateChange()
}
private handleSubagentStart(event: {
parentToolCallId: string
subagentId: string
label?: string
}): void {
this.state.activeSubagents.set(event.parentToolCallId, {
parentToolCallId: event.parentToolCallId,
subagentId: event.subagentId,
label: event.label,
toolCalls: new Map(),
})
this.notifyStateChange()
}
private handleSubagentToolPending(event: {
parentToolCallId: string
toolCallId: string
toolName: string
args: Record<string, unknown>
display: { label: string; description?: string }
}): void {
const subagent = this.state.activeSubagents.get(event.parentToolCallId)
if (subagent) {
subagent.toolCalls.set(event.toolCallId, {
id: event.toolCallId,
name: event.toolName,
args: event.args,
status: 'pending',
display: event.display,
})
}
this.notifyStateChange()
}
private handleSubagentToolExecuting(event: {
parentToolCallId: string
toolCallId: string
}): void {
const subagent = this.state.activeSubagents.get(event.parentToolCallId)
if (subagent) {
const tool = subagent.toolCalls.get(event.toolCallId)
if (tool) {
tool.status = 'executing'
}
}
this.notifyStateChange()
}
private handleSubagentToolSuccess(event: {
parentToolCallId: string
toolCallId: string
result?: unknown
display?: { label: string; description?: string }
}): void {
const subagent = this.state.activeSubagents.get(event.parentToolCallId)
if (subagent) {
const tool = subagent.toolCalls.get(event.toolCallId)
if (tool) {
tool.status = 'success'
tool.result = event.result
if (event.display) {
tool.display = event.display
}
}
}
this.notifyStateChange()
}
private handleSubagentToolError(event: {
parentToolCallId: string
toolCallId: string
error: string
}): void {
const subagent = this.state.activeSubagents.get(event.parentToolCallId)
if (subagent) {
const tool = subagent.toolCalls.get(event.toolCallId)
if (tool) {
tool.status = 'error'
tool.error = event.error
}
}
this.notifyStateChange()
}
private handleSubagentEnd(event: { parentToolCallId: string }): void {
// Keep subagent data around for display; completion is only logged here
logger.debug('Subagent ended', { parentToolCallId: event.parentToolCallId })
this.notifyStateChange()
}
private notifyStateChange(): void {
this.callbacks.onStateChange(this.getState())
}
}
// ============================================================================
// Helper Functions
// ============================================================================
/**
* Parse a render event from an SSE data line
*/
export function parseRenderEvent(line: string): RenderEvent | null {
if (!line.startsWith('data: ')) return null
try {
return JSON.parse(line.slice(6)) as RenderEvent
} catch {
return null
}
}
/**
* Stream events from an SSE endpoint and process them
*/
export async function streamRenderEvents(
url: string,
renderer: ClientRenderer,
options?: {
signal?: AbortSignal
onConnect?: () => void
onError?: (error: Error) => void
}
): Promise<void> {
const response = await fetch(url, {
headers: { Accept: 'text/event-stream' },
signal: options?.signal,
})
if (!response.ok) {
const error = new Error(`Stream failed: ${response.status}`)
options?.onError?.(error)
throw error
}
options?.onConnect?.()
const reader = response.body?.getReader()
if (!reader) {
throw new Error('No response body')
}
const decoder = new TextDecoder()
let buffer = ''
try {
while (true) {
const { done, value } = await reader.read()
if (done) break
buffer += decoder.decode(value, { stream: true })
const lines = buffer.split('\n')
buffer = lines.pop() || ''
for (const line of lines) {
const event = parseRenderEvent(line)
if (event) {
await renderer.processEvent(event)
}
}
}
// Process remaining buffer
if (buffer) {
const event = parseRenderEvent(buffer)
if (event) {
await renderer.processEvent(event)
}
}
} finally {
reader.releaseLock()
}
}
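// ============================================================================
// Example (editor's sketch, not part of the original module): how a client
// might wire a renderer to the SSE endpoint. The ClientRenderer constructor
// shape and the endpoint URL are assumptions; streamRenderEvents and the
// callback names match the code above.
// ============================================================================
async function exampleConsumeStream(): Promise<void> {
const renderer = new ClientRenderer({
onStateChange: (state) => {
// Re-render the UI from the latest snapshot (content, tool calls, plan...)
console.log('content so far:', state.content)
},
onStreamComplete: () => console.log('stream complete'),
onStreamError: (error) => console.error('stream failed:', error),
})
const controller = new AbortController()
await streamRenderEvents('/api/copilot/chat/stream', renderer, {
signal: controller.signal,
onConnect: () => console.log('connected'),
})
}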

View File

@@ -1,470 +0,0 @@
/**
* Render Events - Server → Client SSE Protocol
*
* This defines the SSE event protocol between the copilot server and client.
* The server processes the raw Sim Agent stream, executes tools, persists to DB,
* and emits these render events. The client just renders based on these events.
*
* Benefits:
* - Client is purely a renderer (no parsing, no execution)
* - Persistence happens before render (safe to refresh anytime)
* - Works identically with or without a client (API-only mode)
* - Resume is just replaying render events
*/
// ============================================================================
// Base Types
// ============================================================================
export interface BaseRenderEvent {
type: RenderEventType
/** Monotonically increasing sequence number for ordering */
seq: number
/** Timestamp when event was created */
ts: number
}
export type RenderEventType =
// Stream lifecycle
| 'stream_start'
| 'stream_end'
| 'stream_error'
// Message lifecycle
| 'message_start'
| 'message_saved'
| 'message_end'
// Text content
| 'text_delta'
// Thinking blocks
| 'thinking_start'
| 'thinking_delta'
| 'thinking_end'
// Tool calls
| 'tool_pending'
| 'tool_generating'
| 'tool_executing'
| 'tool_success'
| 'tool_error'
| 'tool_aborted'
// Interrupts (user approval needed)
| 'interrupt_show'
| 'interrupt_resolved'
// Workflow diffs
| 'diff_ready'
| 'diff_accepted'
| 'diff_rejected'
// Plans
| 'plan_start'
| 'plan_delta'
| 'plan_end'
// Options (continue/follow-up suggestions)
| 'options_start'
| 'options_delta'
| 'options_end'
// Subagents
| 'subagent_start'
| 'subagent_tool_pending'
| 'subagent_tool_generating'
| 'subagent_tool_executing'
| 'subagent_tool_success'
| 'subagent_tool_error'
| 'subagent_end'
// Chat metadata
| 'chat_id'
| 'title_updated'
// ============================================================================
// Stream Lifecycle Events
// ============================================================================
export interface StreamStartEvent extends BaseRenderEvent {
type: 'stream_start'
streamId: string
chatId: string
userMessageId: string
assistantMessageId: string
}
export interface StreamEndEvent extends BaseRenderEvent {
type: 'stream_end'
}
export interface StreamErrorEvent extends BaseRenderEvent {
type: 'stream_error'
error: string
code?: string
}
// ============================================================================
// Message Lifecycle Events
// ============================================================================
export interface MessageStartEvent extends BaseRenderEvent {
type: 'message_start'
messageId: string
role: 'user' | 'assistant'
}
export interface MessageSavedEvent extends BaseRenderEvent {
type: 'message_saved'
messageId: string
/** If true, client should refresh message from DB (contains diff markers, etc.) */
refreshFromDb?: boolean
}
export interface MessageEndEvent extends BaseRenderEvent {
type: 'message_end'
messageId: string
}
// ============================================================================
// Text Content Events
// ============================================================================
export interface TextDeltaEvent extends BaseRenderEvent {
type: 'text_delta'
content: string
}
// ============================================================================
// Thinking Block Events
// ============================================================================
export interface ThinkingStartEvent extends BaseRenderEvent {
type: 'thinking_start'
}
export interface ThinkingDeltaEvent extends BaseRenderEvent {
type: 'thinking_delta'
content: string
}
export interface ThinkingEndEvent extends BaseRenderEvent {
type: 'thinking_end'
}
// ============================================================================
// Tool Call Events
// ============================================================================
export interface ToolDisplay {
label: string
description?: string
icon?: string
}
export interface ToolPendingEvent extends BaseRenderEvent {
type: 'tool_pending'
toolCallId: string
toolName: string
args: Record<string, unknown>
display: ToolDisplay
}
export interface ToolGeneratingEvent extends BaseRenderEvent {
type: 'tool_generating'
toolCallId: string
/** Partial args as they stream in */
argsDelta?: string
/** Full args so far */
argsPartial?: Record<string, unknown>
}
export interface ToolExecutingEvent extends BaseRenderEvent {
type: 'tool_executing'
toolCallId: string
display?: ToolDisplay
}
export interface ToolSuccessEvent extends BaseRenderEvent {
type: 'tool_success'
toolCallId: string
result?: unknown
display?: ToolDisplay
/** For edit_workflow: tells client to read diff from DB */
workflowId?: string
hasDiff?: boolean
}
export interface ToolErrorEvent extends BaseRenderEvent {
type: 'tool_error'
toolCallId: string
error: string
display?: ToolDisplay
}
export interface ToolAbortedEvent extends BaseRenderEvent {
type: 'tool_aborted'
toolCallId: string
reason?: string
display?: ToolDisplay
}
// ============================================================================
// Interrupt Events (User Approval)
// ============================================================================
export interface InterruptOption {
id: string
label: string
description?: string
variant?: 'default' | 'destructive' | 'outline'
}
export interface InterruptShowEvent extends BaseRenderEvent {
type: 'interrupt_show'
toolCallId: string
toolName: string
options: InterruptOption[]
/** Optional message to display */
message?: string
}
export interface InterruptResolvedEvent extends BaseRenderEvent {
type: 'interrupt_resolved'
toolCallId: string
choice: string
/** Whether to continue execution */
approved: boolean
}
// ============================================================================
// Workflow Diff Events
// ============================================================================
export interface DiffReadyEvent extends BaseRenderEvent {
type: 'diff_ready'
workflowId: string
toolCallId: string
/** Client should read workflow state from DB which contains diff markers */
}
export interface DiffAcceptedEvent extends BaseRenderEvent {
type: 'diff_accepted'
workflowId: string
}
export interface DiffRejectedEvent extends BaseRenderEvent {
type: 'diff_rejected'
workflowId: string
}
// ============================================================================
// Plan Events
// ============================================================================
export interface PlanTodo {
id: string
content: string
status: 'pending' | 'in_progress' | 'completed'
}
export interface PlanStartEvent extends BaseRenderEvent {
type: 'plan_start'
}
export interface PlanDeltaEvent extends BaseRenderEvent {
type: 'plan_delta'
content: string
}
export interface PlanEndEvent extends BaseRenderEvent {
type: 'plan_end'
todos: PlanTodo[]
}
// ============================================================================
// Options Events (Follow-up Suggestions)
// ============================================================================
export interface OptionsStartEvent extends BaseRenderEvent {
type: 'options_start'
}
export interface OptionsDeltaEvent extends BaseRenderEvent {
type: 'options_delta'
content: string
}
export interface OptionsEndEvent extends BaseRenderEvent {
type: 'options_end'
options: string[]
}
// ============================================================================
// Subagent Events
// ============================================================================
export interface SubagentStartEvent extends BaseRenderEvent {
type: 'subagent_start'
parentToolCallId: string
subagentId: string
label?: string
}
export interface SubagentToolPendingEvent extends BaseRenderEvent {
type: 'subagent_tool_pending'
parentToolCallId: string
toolCallId: string
toolName: string
args: Record<string, unknown>
display: ToolDisplay
}
export interface SubagentToolGeneratingEvent extends BaseRenderEvent {
type: 'subagent_tool_generating'
parentToolCallId: string
toolCallId: string
argsDelta?: string
}
export interface SubagentToolExecutingEvent extends BaseRenderEvent {
type: 'subagent_tool_executing'
parentToolCallId: string
toolCallId: string
}
export interface SubagentToolSuccessEvent extends BaseRenderEvent {
type: 'subagent_tool_success'
parentToolCallId: string
toolCallId: string
result?: unknown
display?: ToolDisplay
}
export interface SubagentToolErrorEvent extends BaseRenderEvent {
type: 'subagent_tool_error'
parentToolCallId: string
toolCallId: string
error: string
}
export interface SubagentEndEvent extends BaseRenderEvent {
type: 'subagent_end'
parentToolCallId: string
}
// ============================================================================
// Chat Metadata Events
// ============================================================================
export interface ChatIdEvent extends BaseRenderEvent {
type: 'chat_id'
chatId: string
}
export interface TitleUpdatedEvent extends BaseRenderEvent {
type: 'title_updated'
title: string
}
// ============================================================================
// Union Type
// ============================================================================
export type RenderEvent =
// Stream lifecycle
| StreamStartEvent
| StreamEndEvent
| StreamErrorEvent
// Message lifecycle
| MessageStartEvent
| MessageSavedEvent
| MessageEndEvent
// Text content
| TextDeltaEvent
// Thinking
| ThinkingStartEvent
| ThinkingDeltaEvent
| ThinkingEndEvent
// Tool calls
| ToolPendingEvent
| ToolGeneratingEvent
| ToolExecutingEvent
| ToolSuccessEvent
| ToolErrorEvent
| ToolAbortedEvent
// Interrupts
| InterruptShowEvent
| InterruptResolvedEvent
// Diffs
| DiffReadyEvent
| DiffAcceptedEvent
| DiffRejectedEvent
// Plans
| PlanStartEvent
| PlanDeltaEvent
| PlanEndEvent
// Options
| OptionsStartEvent
| OptionsDeltaEvent
| OptionsEndEvent
// Subagents
| SubagentStartEvent
| SubagentToolPendingEvent
| SubagentToolGeneratingEvent
| SubagentToolExecutingEvent
| SubagentToolSuccessEvent
| SubagentToolErrorEvent
| SubagentEndEvent
// Chat metadata
| ChatIdEvent
| TitleUpdatedEvent
// ============================================================================
// Helper Functions
// ============================================================================
let seqCounter = 0
/**
* Create a render event with auto-incrementing sequence number
*/
export function createRenderEvent<T extends RenderEventType>(
type: T,
data: Omit<Extract<RenderEvent, { type: T }>, 'type' | 'seq' | 'ts'>
): Extract<RenderEvent, { type: T }> {
return {
type,
seq: ++seqCounter,
ts: Date.now(),
...data,
} as Extract<RenderEvent, { type: T }>
}
/**
* Reset sequence counter (for testing or new streams)
*/
export function resetSeqCounter(): void {
seqCounter = 0
}
/**
* Serialize a render event to SSE format
*/
export function serializeRenderEvent(event: RenderEvent): string {
return `data: ${JSON.stringify(event)}\n\n`
}
/**
* Parse a render event from an SSE data line
*/
export function parseRenderEvent(line: string): RenderEvent | null {
if (!line.startsWith('data: ')) return null
try {
return JSON.parse(line.slice(6)) as RenderEvent
} catch {
return null
}
}
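// ============================================================================
// Example round trip (editor's sketch, not part of the original module):
// create an event, serialize it to SSE on the server, parse it back on the
// client. Uses only the exports defined above.
// ============================================================================
function exampleRoundTrip(): void {
resetSeqCounter() // new stream, seq starts from 1
const event = createRenderEvent('text_delta', { content: 'Hello' })
const wire = serializeRenderEvent(event) // 'data: {"type":"text_delta",...}\n\n'
const parsed = parseRenderEvent(wire.split('\n')[0])
// parsed?.type === 'text_delta', parsed?.seq === 1
console.log(parsed)
}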

View File

@@ -1,438 +0,0 @@
/**
* Server-Side Tool Executor for Copilot
*
* Executes copilot tools server-side when no client session is present.
* Handles routing to appropriate server implementations and marking tools complete.
*/
import { db } from '@sim/db'
import { account, workflow } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { isClientOnlyTool } from '@/lib/copilot/tools/client/ui-config'
import { routeExecution } from '@/lib/copilot/tools/server/router'
import { SIM_AGENT_API_URL_DEFAULT } from '@/lib/copilot/constants'
import { env } from '@/lib/core/config/env'
import { generateRequestId } from '@/lib/core/utils/request'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { getEffectiveDecryptedEnv } from '@/lib/environment/utils'
import { refreshTokenIfNeeded } from '@/app/api/auth/oauth/utils'
import { resolveEnvVarReferences } from '@/executor/utils/reference-validation'
import { executeTool } from '@/tools'
import { getTool, resolveToolId } from '@/tools/utils'
const logger = createLogger('ServerToolExecutor')
const SIM_AGENT_API_URL = env.SIM_AGENT_API_URL || SIM_AGENT_API_URL_DEFAULT
/**
* Context for tool execution
*/
export interface ToolExecutionContext {
userId: string
workflowId: string
chatId: string
streamId: string
workspaceId?: string
}
/**
* Result of tool execution
*/
export interface ToolExecutionResult {
success: boolean
status: number
message?: string
data?: unknown
}
/**
* Tools that have dedicated server implementations in the router
*/
const SERVER_ROUTED_TOOLS = [
'edit_workflow',
'get_workflow_data',
'get_workflow_console',
'get_blocks_and_tools',
'get_blocks_metadata',
'get_block_options',
'get_block_config',
'get_trigger_blocks',
'knowledge_base',
'set_environment_variables',
'get_credentials',
'search_documentation',
'make_api_request',
'search_online',
]
/**
* Tools that execute workflows
*/
const WORKFLOW_EXECUTION_TOOLS = ['run_workflow']
/**
* Tools that handle deployments
*/
const DEPLOYMENT_TOOLS = ['deploy_api', 'deploy_chat', 'deploy_mcp', 'redeploy']
/**
* Execute a tool server-side.
* Returns result to be sent to Sim Agent via mark-complete.
*/
export async function executeToolServerSide(
toolName: string,
toolCallId: string,
args: Record<string, unknown>,
context: ToolExecutionContext
): Promise<ToolExecutionResult> {
logger.info('Executing tool server-side', {
toolName,
toolCallId,
userId: context.userId,
workflowId: context.workflowId,
})
// 1. Check if tool is client-only
if (isClientOnlyTool(toolName)) {
logger.info('Skipping client-only tool', { toolName, toolCallId })
return {
success: true,
status: 200,
message: `Tool "${toolName}" requires a browser session and was skipped in API mode.`,
data: { skipped: true, reason: 'client_only' },
}
}
try {
// 2. Route to the appropriate executor. `return await` (rather than a bare
// `return`) keeps rejections inside this try/catch.
if (SERVER_ROUTED_TOOLS.includes(toolName)) {
return await executeServerRoutedTool(toolName, args, context)
}
if (WORKFLOW_EXECUTION_TOOLS.includes(toolName)) {
return await executeRunWorkflow(args, context)
}
if (DEPLOYMENT_TOOLS.includes(toolName)) {
return await executeDeploymentTool(toolName, args, context)
}
// 3. Try integration tool execution (Slack, Gmail, etc.)
return await executeIntegrationTool(toolName, toolCallId, args, context)
} catch (error) {
logger.error('Tool execution failed', {
toolName,
toolCallId,
error: error instanceof Error ? error.message : String(error),
})
return {
success: false,
status: 500,
message: error instanceof Error ? error.message : 'Tool execution failed',
}
}
}
/**
* Execute a tool that has a dedicated server implementation
*/
async function executeServerRoutedTool(
toolName: string,
args: Record<string, unknown>,
context: ToolExecutionContext
): Promise<ToolExecutionResult> {
try {
const result = await routeExecution(toolName, args, { userId: context.userId })
return {
success: true,
status: 200,
data: result,
}
} catch (error) {
return {
success: false,
status: 500,
message: error instanceof Error ? error.message : 'Server tool execution failed',
}
}
}
/**
* Execute the run_workflow tool
*/
async function executeRunWorkflow(
args: Record<string, unknown>,
context: ToolExecutionContext
): Promise<ToolExecutionResult> {
const workflowId = (args.workflowId as string) || context.workflowId
const input = (args.input as Record<string, unknown>) || {}
logger.info('Executing run_workflow', { workflowId, inputKeys: Object.keys(input) })
try {
const response = await fetch(`${getBaseUrl()}/api/workflows/${workflowId}/execute`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${await generateInternalToken()}`,
},
body: JSON.stringify({
input,
triggerType: 'copilot',
workflowId, // For internal auth
}),
})
if (!response.ok) {
const errorText = await response.text()
return {
success: false,
status: response.status,
message: `Workflow execution failed: ${errorText}`,
}
}
const result = await response.json()
return {
success: true,
status: 200,
data: result,
}
} catch (error) {
return {
success: false,
status: 500,
message: error instanceof Error ? error.message : 'Workflow execution failed',
}
}
}
/**
* Execute a deployment tool
*/
async function executeDeploymentTool(
toolName: string,
args: Record<string, unknown>,
context: ToolExecutionContext
): Promise<ToolExecutionResult> {
// Deployment tools modify workflow state and create deployments
// These can be executed server-side via the server router
try {
const result = await routeExecution(toolName, args, { userId: context.userId })
return {
success: true,
status: 200,
data: result,
}
} catch (error) {
// If the tool isn't in the router, it might need to be added
// For now, return a skip result
logger.warn('Deployment tool not available server-side', { toolName })
return {
success: true,
status: 200,
message: `Deployment tool "${toolName}" executed with limited functionality in API mode.`,
data: { skipped: true, reason: 'limited_api_support' },
}
}
}
/**
* Execute an integration tool (Slack, Gmail, etc.)
* Uses the same logic as /api/copilot/execute-tool
*/
async function executeIntegrationTool(
toolName: string,
toolCallId: string,
args: Record<string, unknown>,
context: ToolExecutionContext
): Promise<ToolExecutionResult> {
const resolvedToolName = resolveToolId(toolName)
const toolConfig = getTool(resolvedToolName)
if (!toolConfig) {
// Tool not found - try server router as fallback
try {
const result = await routeExecution(toolName, args, { userId: context.userId })
return {
success: true,
status: 200,
data: result,
}
} catch {
logger.warn('Tool not found', { toolName, resolvedToolName })
return {
success: true,
status: 200,
message: `Tool "${toolName}" not found. Skipped.`,
data: { skipped: true, reason: 'not_found' },
}
}
}
// Get workspaceId for env vars
let workspaceId = context.workspaceId
if (!workspaceId && context.workflowId) {
const workflowResult = await db
.select({ workspaceId: workflow.workspaceId })
.from(workflow)
.where(eq(workflow.id, context.workflowId))
.limit(1)
workspaceId = workflowResult[0]?.workspaceId ?? undefined
}
// Get decrypted environment variables
const decryptedEnvVars = await getEffectiveDecryptedEnv(context.userId, workspaceId)
// Resolve env var references in arguments
const executionParams: Record<string, unknown> = resolveEnvVarReferences(
args,
decryptedEnvVars,
{
resolveExactMatch: true,
allowEmbedded: true,
trimKeys: true,
onMissing: 'keep',
deep: true,
}
) as Record<string, unknown>
// Resolve OAuth access token if required
if (toolConfig.oauth?.required && toolConfig.oauth.provider) {
const provider = toolConfig.oauth.provider
try {
const accounts = await db
.select()
.from(account)
.where(and(eq(account.providerId, provider), eq(account.userId, context.userId)))
.limit(1)
if (accounts.length > 0) {
const acc = accounts[0]
const requestId = generateRequestId()
const { accessToken } = await refreshTokenIfNeeded(requestId, acc as any, acc.id)
if (accessToken) {
executionParams.accessToken = accessToken
} else {
return {
success: false,
status: 400,
message: `OAuth token not available for ${provider}. Please reconnect your account.`,
}
}
} else {
return {
success: false,
status: 400,
message: `No ${provider} account connected. Please connect your account first.`,
}
}
} catch (error) {
return {
success: false,
status: 500,
message: `Failed to get OAuth token for ${toolConfig.oauth.provider}`,
}
}
}
// Check if tool requires an API key
const needsApiKey = toolConfig.params?.apiKey?.required
if (needsApiKey && !executionParams.apiKey) {
return {
success: false,
status: 400,
message: `API key not provided for ${toolName}.`,
}
}
// Add execution context
executionParams._context = {
workflowId: context.workflowId,
userId: context.userId,
}
// Special handling for function_execute
if (toolName === 'function_execute') {
executionParams.envVars = decryptedEnvVars
executionParams.workflowVariables = {}
executionParams.blockData = {}
executionParams.blockNameMapping = {}
executionParams.language = executionParams.language || 'javascript'
executionParams.timeout = executionParams.timeout || 30000
}
// Execute the tool
const result = await executeTool(resolvedToolName, executionParams, true)
logger.info('Integration tool execution complete', {
toolName,
success: result.success,
})
return {
success: result.success,
status: result.success ? 200 : 500,
message: result.error,
data: result.output,
}
}
/**
* Mark a tool as complete with Sim Agent
*/
export async function markToolComplete(
toolCallId: string,
toolName: string,
result: ToolExecutionResult
): Promise<boolean> {
logger.info('Marking tool complete', {
toolCallId,
toolName,
success: result.success,
status: result.status,
})
try {
const response = await fetch(`${SIM_AGENT_API_URL}/api/tools/mark-complete`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
...(env.COPILOT_API_KEY ? { 'x-api-key': env.COPILOT_API_KEY } : {}),
},
body: JSON.stringify({
id: toolCallId,
name: toolName,
status: result.status,
message: result.message,
data: result.data,
}),
})
if (!response.ok) {
logger.error('Mark complete failed', { toolCallId, status: response.status })
return false
}
return true
} catch (error) {
logger.error('Mark complete error', {
toolCallId,
error: error instanceof Error ? error.message : String(error),
})
return false
}
}
/**
* Generate an internal authentication token for server-to-server calls
*/
async function generateInternalToken(): Promise<string> {
// Use the same pattern as A2A for internal auth
const { generateInternalToken: genToken } = await import('@/app/api/a2a/serve/[agentId]/utils')
return genToken()
}
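// ============================================================================
// Example (editor's sketch, not part of the original module): the expected
// call sequence when the server handles a tool call in API-only mode. All
// IDs below are placeholders.
// ============================================================================
async function exampleHandleToolCall(): Promise<void> {
const context: ToolExecutionContext = {
userId: 'user_123',
workflowId: 'wf_456',
chatId: 'chat_789',
streamId: 'stream_abc',
}
// 'get_workflow_data' is one of the SERVER_ROUTED_TOOLS above
const result = await executeToolServerSide(
'get_workflow_data',
'toolcall_1',
{ workflowId: context.workflowId },
context
)
// Report the outcome back to Sim Agent so the stream can continue
await markToolComplete('toolcall_1', 'get_workflow_data', result)
}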

View File

@@ -1,556 +0,0 @@
/**
* Stream Persistence Service for Copilot
*
* Handles persisting copilot stream state to Redis (ephemeral) and database (permanent).
* Uses Redis LIST for chunk history and Pub/Sub for live updates (no polling).
*
* Redis Key Structure:
* - copilot:stream:{streamId}:meta → StreamMeta JSON (TTL: 10 min)
* - copilot:stream:{streamId}:chunks → LIST of chunks (for replay)
* - copilot:stream:{streamId} → Pub/Sub CHANNEL (for live updates)
* - copilot:active:{chatId} → streamId lookup
* - copilot:abort:{streamId} → abort signal flag
*/
import { db } from '@sim/db'
import { copilotChats } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import type Redis from 'ioredis'
import { getRedisClient } from '@/lib/core/config/redis'
const logger = createLogger('CopilotStreamPersistence')
const STREAM_TTL = 60 * 10 // 10 minutes
/**
* Tool call record stored in stream state
*/
export interface ToolCallRecord {
id: string
name: string
args: Record<string, unknown>
state: 'pending' | 'executing' | 'success' | 'error' | 'skipped'
result?: unknown
error?: string
}
/**
* Pending diff state for edit_workflow tool calls
*/
export interface PendingDiffState {
toolCallId: string
baselineWorkflow: unknown
proposedWorkflow: unknown
diffAnalysis: unknown
}
/**
* Stream metadata stored in Redis
*/
export interface StreamMeta {
id: string
status: 'streaming' | 'completed' | 'error'
chatId: string
userId: string
workflowId: string
userMessageId: string
isClientSession: boolean
toolCalls: ToolCallRecord[]
assistantContent: string
conversationId?: string
createdAt: number
updatedAt: number
/** Pending diff state if edit_workflow tool has changes waiting for review */
pendingDiff?: PendingDiffState
}
/**
* Parameters for creating a new stream
*/
export interface CreateStreamParams {
streamId: string
chatId: string
userId: string
workflowId: string
userMessageId: string
isClientSession: boolean
}
// ============ WRITE OPERATIONS (used by original request handler) ============
/**
* Create a new stream state in Redis
*/
export async function createStream(params: CreateStreamParams): Promise<void> {
const redis = getRedisClient()
if (!redis) {
logger.warn('Redis not available, stream persistence disabled')
return
}
const meta: StreamMeta = {
id: params.streamId,
status: 'streaming',
chatId: params.chatId,
userId: params.userId,
workflowId: params.workflowId,
userMessageId: params.userMessageId,
isClientSession: params.isClientSession,
toolCalls: [],
assistantContent: '',
createdAt: Date.now(),
updatedAt: Date.now(),
}
const metaKey = `copilot:stream:${params.streamId}:meta`
const activeKey = `copilot:active:${params.chatId}`
await redis.setex(metaKey, STREAM_TTL, JSON.stringify(meta))
await redis.setex(activeKey, STREAM_TTL, params.streamId)
logger.info('Created stream state', { streamId: params.streamId, chatId: params.chatId })
}
/**
* Append a chunk to the stream buffer and publish for live subscribers
*/
export async function appendChunk(streamId: string, chunk: string): Promise<void> {
const redis = getRedisClient()
if (!redis) return
const listKey = `copilot:stream:${streamId}:chunks`
const channel = `copilot:stream:${streamId}`
// Push to list for replay, publish for live subscribers
await redis.rpush(listKey, chunk)
await redis.expire(listKey, STREAM_TTL)
await redis.publish(channel, chunk)
}
/**
* Append content to the accumulated assistant content
*/
export async function appendContent(streamId: string, content: string): Promise<void> {
const redis = getRedisClient()
if (!redis) return
const metaKey = `copilot:stream:${streamId}:meta`
const raw = await redis.get(metaKey)
if (!raw) return
const meta: StreamMeta = JSON.parse(raw)
meta.assistantContent += content
meta.updatedAt = Date.now()
await redis.setex(metaKey, STREAM_TTL, JSON.stringify(meta))
}
/**
* Update stream metadata
*/
export async function updateMeta(streamId: string, update: Partial<StreamMeta>): Promise<void> {
const redis = getRedisClient()
if (!redis) return
const metaKey = `copilot:stream:${streamId}:meta`
const raw = await redis.get(metaKey)
if (!raw) return
const meta: StreamMeta = { ...JSON.parse(raw), ...update, updatedAt: Date.now() }
await redis.setex(metaKey, STREAM_TTL, JSON.stringify(meta))
}
/**
* Update a specific tool call in the stream state
*/
export async function updateToolCall(
streamId: string,
toolCallId: string,
update: Partial<ToolCallRecord>
): Promise<void> {
const redis = getRedisClient()
if (!redis) return
const metaKey = `copilot:stream:${streamId}:meta`
const raw = await redis.get(metaKey)
if (!raw) return
const meta: StreamMeta = JSON.parse(raw)
const toolCallIndex = meta.toolCalls.findIndex((tc) => tc.id === toolCallId)
if (toolCallIndex >= 0) {
meta.toolCalls[toolCallIndex] = { ...meta.toolCalls[toolCallIndex], ...update }
} else {
// Add new tool call
meta.toolCalls.push({
id: toolCallId,
name: update.name || 'unknown',
args: update.args || {},
state: update.state || 'pending',
result: update.result,
error: update.error,
})
}
meta.updatedAt = Date.now()
await redis.setex(metaKey, STREAM_TTL, JSON.stringify(meta))
}
/**
* Store pending diff state for a stream (called when edit_workflow creates a diff)
*/
export async function setPendingDiff(
streamId: string,
pendingDiff: PendingDiffState
): Promise<void> {
const redis = getRedisClient()
if (!redis) return
const metaKey = `copilot:stream:${streamId}:meta`
const raw = await redis.get(metaKey)
if (!raw) return
const meta: StreamMeta = JSON.parse(raw)
meta.pendingDiff = pendingDiff
meta.updatedAt = Date.now()
await redis.setex(metaKey, STREAM_TTL, JSON.stringify(meta))
logger.info('Stored pending diff for stream', { streamId, toolCallId: pendingDiff.toolCallId })
}
/**
* Clear pending diff state (called when user accepts/rejects the diff)
*/
export async function clearPendingDiff(streamId: string): Promise<void> {
const redis = getRedisClient()
if (!redis) return
const metaKey = `copilot:stream:${streamId}:meta`
const raw = await redis.get(metaKey)
if (!raw) return
const meta: StreamMeta = JSON.parse(raw)
delete meta.pendingDiff
meta.updatedAt = Date.now()
await redis.setex(metaKey, STREAM_TTL, JSON.stringify(meta))
logger.info('Cleared pending diff for stream', { streamId })
}
/**
* Get pending diff state for a stream
*/
export async function getPendingDiff(streamId: string): Promise<PendingDiffState | null> {
const redis = getRedisClient()
if (!redis) return null
const meta = await getStreamMeta(streamId)
return meta?.pendingDiff || null
}
/**
* Complete the stream - save to database and cleanup Redis
*/
export async function completeStream(streamId: string, conversationId?: string): Promise<void> {
const redis = getRedisClient()
if (!redis) return
const meta = await getStreamMeta(streamId)
if (!meta) return
// Publish completion event for subscribers
await redis.publish(`copilot:stream:${streamId}`, JSON.stringify({ type: 'stream_complete' }))
// Save to database
await saveToDatabase(meta, conversationId)
// Cleanup Redis
await redis.del(`copilot:stream:${streamId}:meta`)
await redis.del(`copilot:stream:${streamId}:chunks`)
await redis.del(`copilot:active:${meta.chatId}`)
await redis.del(`copilot:abort:${streamId}`)
logger.info('Completed stream', { streamId, chatId: meta.chatId })
}
/**
* Mark stream as errored and save partial content
*/
export async function errorStream(streamId: string, error: string): Promise<void> {
const redis = getRedisClient()
if (!redis) return
const meta = await getStreamMeta(streamId)
if (!meta) return
// Update status
meta.status = 'error'
// Publish error event for subscribers
await redis.publish(
`copilot:stream:${streamId}`,
JSON.stringify({ type: 'stream_error', error })
)
// Still save what we have to database
await saveToDatabase(meta)
// Cleanup Redis
await redis.del(`copilot:stream:${streamId}:meta`)
await redis.del(`copilot:stream:${streamId}:chunks`)
await redis.del(`copilot:active:${meta.chatId}`)
await redis.del(`copilot:abort:${streamId}`)
logger.info('Errored stream', { streamId, error })
}
/**
* Save stream content to database as assistant message
*/
async function saveToDatabase(meta: StreamMeta, conversationId?: string): Promise<void> {
try {
const [chat] = await db
.select()
.from(copilotChats)
.where(eq(copilotChats.id, meta.chatId))
.limit(1)
if (!chat) {
logger.warn('Chat not found for stream save', { chatId: meta.chatId })
return
}
const existingMessages = Array.isArray(chat.messages) ? chat.messages : []
// Check if there's already an assistant message after the user message
// This can happen if the client already saved it before disconnecting
const userMessageIndex = existingMessages.findIndex(
(m: any) => m.id === meta.userMessageId && m.role === 'user'
)
// If there's already an assistant message right after the user message,
// the client may have already saved it - check if it's incomplete
if (userMessageIndex >= 0 && userMessageIndex < existingMessages.length - 1) {
const nextMessage = existingMessages[userMessageIndex + 1] as any
if (nextMessage?.role === 'assistant' && !nextMessage?.serverCompleted) {
// Client saved a partial message, update it with the complete content
const updatedMessages = existingMessages.map((m: any, idx: number) => {
if (idx === userMessageIndex + 1) {
return {
...m,
content: meta.assistantContent,
toolCalls: meta.toolCalls,
serverCompleted: true,
}
}
return m
})
await db
.update(copilotChats)
.set({
messages: updatedMessages,
conversationId: conversationId || (chat.conversationId as string | undefined),
updatedAt: new Date(),
})
.where(eq(copilotChats.id, meta.chatId))
logger.info('Updated existing assistant message in database', {
streamId: meta.id,
chatId: meta.chatId,
})
return
}
}
// Build the assistant message
const assistantMessage = {
id: crypto.randomUUID(),
role: 'assistant',
content: meta.assistantContent,
toolCalls: meta.toolCalls,
timestamp: new Date().toISOString(),
serverCompleted: true, // Mark that this was completed server-side
}
const updatedMessages = [...existingMessages, assistantMessage]
await db
.update(copilotChats)
.set({
messages: updatedMessages,
conversationId: conversationId || (chat.conversationId as string | undefined),
updatedAt: new Date(),
})
.where(eq(copilotChats.id, meta.chatId))
logger.info('Saved stream to database', {
streamId: meta.id,
chatId: meta.chatId,
contentLength: meta.assistantContent.length,
toolCallsCount: meta.toolCalls.length,
})
} catch (error) {
logger.error('Failed to save stream to database', { streamId: meta.id, error })
}
}
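// ============================================================================
// Example write lifecycle (editor's sketch, not part of the original module):
// how the original request handler would use the operations above. IDs and
// the chunk payload are placeholders.
// ============================================================================
async function exampleWriteLifecycle(): Promise<void> {
await createStream({
streamId: 'stream_abc',
chatId: 'chat_789',
userId: 'user_123',
workflowId: 'wf_456',
userMessageId: 'msg_user',
isClientSession: false,
})
await appendChunk('stream_abc', 'data: {"type":"text_delta","content":"Hi"}\n\n')
await appendContent('stream_abc', 'Hi')
await completeStream('stream_abc')
}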
// ============ READ OPERATIONS (used by resume handler) ============
/**
* Get stream metadata
*/
export async function getStreamMeta(streamId: string): Promise<StreamMeta | null> {
const redis = getRedisClient()
if (!redis) return null
const raw = await redis.get(`copilot:stream:${streamId}:meta`)
return raw ? JSON.parse(raw) : null
}
/**
* Get chunks from stream history (for replay)
*/
export async function getChunks(streamId: string, fromIndex: number = 0): Promise<string[]> {
const redis = getRedisClient()
if (!redis) return []
const listKey = `copilot:stream:${streamId}:chunks`
return redis.lrange(listKey, fromIndex, -1)
}
/**
* Get the number of chunks in the stream
*/
export async function getChunkCount(streamId: string): Promise<number> {
const redis = getRedisClient()
if (!redis) return 0
const listKey = `copilot:stream:${streamId}:chunks`
return redis.llen(listKey)
}
/**
* Get active stream ID for a chat (if any)
*/
export async function getActiveStreamForChat(chatId: string): Promise<string | null> {
const redis = getRedisClient()
if (!redis) return null
return redis.get(`copilot:active:${chatId}`)
}
// ============ SUBSCRIPTION (for resume handler) ============
/**
* Subscribe to live stream updates.
* Uses Redis Pub/Sub - no polling, fully event-driven.
*
* @param streamId - Stream to subscribe to
* @param onChunk - Callback for each new chunk
* @param onComplete - Callback when stream completes
* @param signal - Optional AbortSignal to cancel subscription
*/
export async function subscribeToStream(
streamId: string,
onChunk: (chunk: string) => void,
onComplete: () => void,
signal?: AbortSignal
): Promise<void> {
const redis = getRedisClient()
if (!redis) {
onComplete()
return
}
// Create a separate Redis connection for subscription
const subscriber = redis.duplicate()
const channel = `copilot:stream:${streamId}`
let isComplete = false
const cleanup = () => {
if (!isComplete) {
isComplete = true
subscriber.unsubscribe(channel).catch(() => {})
subscriber.quit().catch(() => {})
}
}
signal?.addEventListener('abort', cleanup)
// Register handlers before subscribing so that messages published right
// after SUBSCRIBE resolves are not dropped
subscriber.on('message', (ch, message) => {
if (ch !== channel) return
try {
const parsed = JSON.parse(message)
if (parsed.type === 'stream_complete' || parsed.type === 'stream_error') {
cleanup()
onComplete()
return
}
} catch {
// Not a control message, just a chunk
}
onChunk(message)
})
subscriber.on('error', (err) => {
logger.error('Subscriber error', { streamId, error: err })
cleanup()
onComplete()
})
await subscriber.subscribe(channel)
}
// ============ ABORT HANDLING ============
/**
* Set abort signal for a stream.
* The original request handler should check this and cancel if set.
*/
export async function setAbortSignal(streamId: string): Promise<void> {
const redis = getRedisClient()
if (!redis) return
await redis.setex(`copilot:abort:${streamId}`, 60, '1')
// Also publish to channel so handler sees it immediately
await redis.publish(`copilot:stream:${streamId}`, JSON.stringify({ type: 'abort' }))
logger.info('Set abort signal', { streamId })
}
/**
* Check if abort signal is set for a stream
*/
export async function checkAbortSignal(streamId: string): Promise<boolean> {
const redis = getRedisClient()
if (!redis) return false
const val = await redis.get(`copilot:abort:${streamId}`)
return val === '1'
}
/**
* Clear abort signal for a stream
*/
export async function clearAbortSignal(streamId: string): Promise<void> {
const redis = getRedisClient()
if (!redis) return
await redis.del(`copilot:abort:${streamId}`)
}
/**
* Refresh TTL on all stream keys (call periodically during long streams)
*/
export async function refreshStreamTTL(streamId: string, chatId: string): Promise<void> {
const redis = getRedisClient()
if (!redis) return
await redis.expire(`copilot:stream:${streamId}:meta`, STREAM_TTL)
await redis.expire(`copilot:stream:${streamId}:chunks`, STREAM_TTL)
await redis.expire(`copilot:active:${chatId}`, STREAM_TTL)
}
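// ============================================================================
// Example resume flow (editor's sketch, not part of the original module):
// replay buffered chunks, then follow live updates. A production handler
// would also dedupe chunks published between the two steps.
// ============================================================================
async function exampleResume(chatId: string, send: (chunk: string) => void): Promise<void> {
const streamId = await getActiveStreamForChat(chatId)
if (!streamId) return // nothing in flight for this chat
// 1. Replay everything buffered so far
for (const chunk of await getChunks(streamId, 0)) send(chunk)
// 2. Register for live updates; onComplete fires when the stream finishes
await subscribeToStream(streamId, send, () => send('data: {"type":"stream_end"}\n\n'))
}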

View File

@@ -1,953 +0,0 @@
/**
* Stream Transformer - Converts Sim Agent SSE to Render Events
*
* This module processes the raw SSE stream from Sim Agent, executes tools,
* persists to the database, and emits render events for the client.
*
* The client receives only render events and just needs to render them.
*/
import { createLogger } from '@sim/logger'
import { routeExecution } from '@/lib/copilot/tools/server/router'
import { isClientOnlyTool } from '@/lib/copilot/tools/client/ui-config'
import { env } from '@/lib/core/config/env'
import {
type RenderEvent,
type ToolDisplay,
createRenderEvent,
resetSeqCounter,
serializeRenderEvent,
} from './render-events'
import { SIM_AGENT_API_URL_DEFAULT } from './constants'
const logger = createLogger('StreamTransformer')
const SIM_AGENT_API_URL = env.SIM_AGENT_API_URL || SIM_AGENT_API_URL_DEFAULT
// ============================================================================
// Types
// ============================================================================
export interface StreamTransformContext {
streamId: string
chatId: string
userId: string
workflowId?: string
userMessageId: string
assistantMessageId: string
/** Callback to emit render events (sent to client via SSE) */
onRenderEvent: (event: RenderEvent) => Promise<void>
/** Callback to persist state (called at key moments) */
onPersist?: (data: PersistData) => Promise<void>
/** Callback to check if stream is aborted */
isAborted?: () => boolean
}
export interface PersistData {
type: 'content' | 'tool_call' | 'message_complete'
content?: string
toolCall?: {
id: string
name: string
args: Record<string, unknown>
state: 'pending' | 'executing' | 'success' | 'error'
result?: unknown
}
messageComplete?: boolean
}
// Track state during stream processing
interface TransformState {
// Content accumulation
assistantContent: string
// Thinking block state
inThinkingBlock: boolean
thinkingContent: string
// Plan capture
inPlanCapture: boolean
planContent: string
// Options capture
inOptionsCapture: boolean
optionsContent: string
// Tool call tracking
toolCalls: Map<
string,
{
id: string
name: string
args: Record<string, unknown>
state: 'pending' | 'generating' | 'executing' | 'success' | 'error'
result?: unknown
}
>
// Subagent tracking
activeSubagent: string | null // parentToolCallId
subagentToolCalls: Map<string, string> // toolCallId -> parentToolCallId
}
// ============================================================================
// Main Transformer
// ============================================================================
/**
* Process a Sim Agent SSE stream and emit render events
*/
export async function transformStream(
agentStream: ReadableStream<Uint8Array>,
context: StreamTransformContext
): Promise<void> {
const { streamId, chatId, userMessageId, assistantMessageId, onRenderEvent, isAborted } = context
// Reset sequence counter for new stream
resetSeqCounter()
const state: TransformState = {
assistantContent: '',
inThinkingBlock: false,
thinkingContent: '',
inPlanCapture: false,
planContent: '',
inOptionsCapture: false,
optionsContent: '',
toolCalls: new Map(),
activeSubagent: null,
subagentToolCalls: new Map(),
}
// Emit stream start
await emitEvent(onRenderEvent, 'stream_start', {
streamId,
chatId,
userMessageId,
assistantMessageId,
})
// Emit message start for assistant
await emitEvent(onRenderEvent, 'message_start', {
messageId: assistantMessageId,
role: 'assistant',
})
const reader = agentStream.getReader()
const decoder = new TextDecoder()
let buffer = ''
try {
while (true) {
// Check for abort
if (isAborted?.()) {
logger.info('Stream aborted by user', { streamId })
// Abort any in-progress tools
for (const [toolCallId, tool] of state.toolCalls) {
if (tool.state === 'pending' || tool.state === 'executing') {
await emitEvent(onRenderEvent, 'tool_aborted', {
toolCallId,
reason: 'User aborted',
})
}
}
break
}
const { done, value } = await reader.read()
if (done) break
buffer += decoder.decode(value, { stream: true })
// Process complete SSE lines
const lines = buffer.split('\n')
buffer = lines.pop() || '' // Keep incomplete line in buffer
for (const line of lines) {
if (!line.startsWith('data: ') || line.length <= 6) continue
try {
const event = JSON.parse(line.slice(6))
await processSimAgentEvent(event, state, context)
} catch (e) {
logger.warn('Failed to parse SSE event', { line: line.slice(0, 100) })
}
}
}
// Process any remaining buffer
if (buffer.startsWith('data: ')) {
try {
const event = JSON.parse(buffer.slice(6))
await processSimAgentEvent(event, state, context)
} catch {}
}
// Finalize thinking block if still open
if (state.inThinkingBlock) {
await emitEvent(onRenderEvent, 'thinking_end', {})
}
// Finalize plan if still open
if (state.inPlanCapture) {
await finalizePlan(state, context)
}
// Finalize options if still open
if (state.inOptionsCapture) {
await finalizeOptions(state, context)
}
// Emit message end
await emitEvent(onRenderEvent, 'message_end', { messageId: assistantMessageId })
// Emit stream end
await emitEvent(onRenderEvent, 'stream_end', {})
// Persist final message
await context.onPersist?.({
type: 'message_complete',
content: state.assistantContent,
messageComplete: true,
})
// Emit message saved
await emitEvent(onRenderEvent, 'message_saved', {
messageId: assistantMessageId,
refreshFromDb: false,
})
} catch (error) {
logger.error('Stream transform error', { error, streamId })
await emitEvent(onRenderEvent, 'stream_error', {
error: error instanceof Error ? error.message : 'Unknown error',
})
} finally {
reader.releaseLock()
}
}
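// ============================================================================
// Example invocation (editor's sketch, not part of the original module):
// pipe a Sim Agent response body through the transformer, serializing each
// render event onto an SSE writer. IDs and the writer wiring are placeholders.
// ============================================================================
async function exampleTransform(
agentResponse: Response,
writer: WritableStreamDefaultWriter<Uint8Array>
): Promise<void> {
const encoder = new TextEncoder()
await transformStream(agentResponse.body!, {
streamId: 'stream_abc',
chatId: 'chat_789',
userId: 'user_123',
userMessageId: 'msg_user',
assistantMessageId: 'msg_assistant',
onRenderEvent: async (event) => {
await writer.write(encoder.encode(serializeRenderEvent(event)))
},
})
}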
// ============================================================================
// Event Processing
// ============================================================================
async function processSimAgentEvent(
event: any,
state: TransformState,
context: StreamTransformContext
): Promise<void> {
const { onRenderEvent } = context
switch (event.type) {
// ========== Content Events ==========
case 'content':
await handleContent(event, state, context)
break
// ========== Thinking Events ==========
case 'thinking':
await handleThinking(event, state, context)
break
// ========== Tool Call Events ==========
case 'tool_call':
await handleToolCall(event, state, context)
break
case 'tool_generating':
await handleToolGenerating(event, state, context)
break
case 'tool_result':
await handleToolResult(event, state, context)
break
case 'tool_error':
await handleToolError(event, state, context)
break
// ========== Plan Events ==========
case 'plan_capture_start':
state.inPlanCapture = true
state.planContent = ''
await emitEvent(onRenderEvent, 'plan_start', {})
break
case 'plan_capture':
if (state.inPlanCapture && event.data) {
state.planContent += event.data
await emitEvent(onRenderEvent, 'plan_delta', { content: event.data })
}
break
case 'plan_capture_end':
await finalizePlan(state, context)
break
// ========== Options Events ==========
case 'options_stream_start':
state.inOptionsCapture = true
state.optionsContent = ''
await emitEvent(onRenderEvent, 'options_start', {})
break
case 'options_stream':
if (state.inOptionsCapture && event.data) {
state.optionsContent += event.data
await emitEvent(onRenderEvent, 'options_delta', { content: event.data })
}
break
case 'options_stream_end':
await finalizeOptions(state, context)
break
// ========== Subagent Events ==========
case 'subagent_start':
await handleSubagentStart(event, state, context)
break
case 'subagent_end':
await handleSubagentEnd(event, state, context)
break
// ========== Response Events ==========
case 'response_done':
// Final response from Sim Agent
logger.debug('Response done received', { streamId: context.streamId })
break
default:
logger.debug('Unknown Sim Agent event type', { type: event.type })
}
}
// ============================================================================
// Content Handling
// ============================================================================
async function handleContent(
event: any,
state: TransformState,
context: StreamTransformContext
): Promise<void> {
const content = event.data
if (!content) return
state.assistantContent += content
// Check for thinking block markers. The whole chunk is swallowed below, so
// this assumes the markers arrive as standalone deltas from the agent
if (content.includes('<think>') || content.includes('<thinking>')) {
state.inThinkingBlock = true
await context.onRenderEvent(createRenderEvent('thinking_start', {}))
// Don't emit the marker as text
return
}
if (content.includes('</think>') || content.includes('</thinking>')) {
state.inThinkingBlock = false
await context.onRenderEvent(createRenderEvent('thinking_end', {}))
// Don't emit the marker as text
return
}
// Route to appropriate handler
if (state.inThinkingBlock) {
state.thinkingContent += content
await context.onRenderEvent(createRenderEvent('thinking_delta', { content }))
} else {
await context.onRenderEvent(createRenderEvent('text_delta', { content }))
}
}
// ============================================================================
// Thinking Handling
// ============================================================================
async function handleThinking(
event: any,
state: TransformState,
context: StreamTransformContext
): Promise<void> {
const content = event.data || event.thinking
if (!content) return
// Start thinking block if not already
if (!state.inThinkingBlock) {
state.inThinkingBlock = true
await context.onRenderEvent(createRenderEvent('thinking_start', {}))
}
state.thinkingContent += content
await context.onRenderEvent(createRenderEvent('thinking_delta', { content }))
}
// ============================================================================
// Tool Call Handling
// ============================================================================
async function handleToolCall(
event: any,
state: TransformState,
context: StreamTransformContext
): Promise<void> {
const { onRenderEvent } = context
const data = event.data || event
const { id: toolCallId, name: toolName, arguments: args, partial } = data
if (!toolCallId || !toolName) return
// Check if this is a subagent tool call
const isSubagentTool = state.activeSubagent !== null
// Track the tool call
const existingTool = state.toolCalls.get(toolCallId)
if (partial) {
// Streaming args
if (!existingTool) {
state.toolCalls.set(toolCallId, {
id: toolCallId,
name: toolName,
args: args || {},
state: 'generating',
})
if (isSubagentTool) {
state.subagentToolCalls.set(toolCallId, state.activeSubagent!)
}
} else {
existingTool.args = { ...existingTool.args, ...args }
}
if (isSubagentTool) {
await emitEvent(onRenderEvent, 'subagent_tool_generating', {
parentToolCallId: state.activeSubagent!,
toolCallId,
argsDelta: JSON.stringify(args),
})
} else {
await emitEvent(onRenderEvent, 'tool_generating', {
toolCallId,
argsPartial: existingTool?.args || args,
})
}
return
}
// Complete tool call - ready to execute
const finalArgs = args || existingTool?.args || {}
state.toolCalls.set(toolCallId, {
id: toolCallId,
name: toolName,
args: finalArgs,
state: 'pending',
})
if (isSubagentTool) {
state.subagentToolCalls.set(toolCallId, state.activeSubagent!)
}
const display = getToolDisplay(toolName, 'pending')
// Emit pending event
if (isSubagentTool) {
await emitEvent(onRenderEvent, 'subagent_tool_pending', {
parentToolCallId: state.activeSubagent!,
toolCallId,
toolName,
args: finalArgs,
display,
})
} else {
await emitEvent(onRenderEvent, 'tool_pending', {
toolCallId,
toolName,
args: finalArgs,
display,
})
}
// Check if this tool needs user approval (interrupt)
const needsInterrupt = checkToolNeedsInterrupt(toolName, finalArgs)
if (needsInterrupt) {
const options = getInterruptOptions(toolName, finalArgs)
await emitEvent(onRenderEvent, 'interrupt_show', {
toolCallId,
toolName,
options,
})
// Don't execute yet - wait for interrupt resolution
return
}
// Check if this is a client-only tool
if (isClientOnlyTool(toolName)) {
logger.info('Skipping client-only tool on server', { toolName, toolCallId })
// Client will handle this tool
return
}
// Execute tool server-side - NON-BLOCKING for parallel execution
// Fire off the execution and let tool_result event handle the completion
executeToolServerSide(toolCallId, toolName, finalArgs, state, context).catch((err) => {
logger.error('Tool execution failed (async)', { toolCallId, toolName, error: err })
})
}
async function handleToolGenerating(
event: any,
state: TransformState,
context: StreamTransformContext
): Promise<void> {
const toolCallId = event.toolCallId || event.data?.id
if (!toolCallId) return
const isSubagentTool = state.subagentToolCalls.has(toolCallId)
if (isSubagentTool) {
await emitEvent(context.onRenderEvent, 'subagent_tool_generating', {
parentToolCallId: state.subagentToolCalls.get(toolCallId)!,
toolCallId,
argsDelta: event.data,
})
} else {
await emitEvent(context.onRenderEvent, 'tool_generating', {
toolCallId,
argsDelta: event.data,
})
}
}
async function handleToolResult(
event: any,
state: TransformState,
context: StreamTransformContext
): Promise<void> {
const toolCallId = event.toolCallId || event.data?.id
const success = event.success !== false
const result = event.result || event.data?.result
if (!toolCallId) return
const tool = state.toolCalls.get(toolCallId)
// Skip if tool already in terminal state (server-side execution already emitted events)
if (tool && (tool.state === 'success' || tool.state === 'error')) {
logger.debug('Skipping duplicate tool_result event', { toolCallId, currentState: tool.state })
return
}
if (tool) {
tool.state = success ? 'success' : 'error'
tool.result = result
}
const isSubagentTool = state.subagentToolCalls.has(toolCallId)
const display = getToolDisplay(tool?.name || '', success ? 'success' : 'error')
if (isSubagentTool) {
await emitEvent(context.onRenderEvent, success ? 'subagent_tool_success' : 'subagent_tool_error', {
parentToolCallId: state.subagentToolCalls.get(toolCallId)!,
toolCallId,
...(success ? { result, display } : { error: event.error || 'Tool failed' }),
})
} else {
if (success) {
const successEvent: any = {
toolCallId,
result,
display,
}
// Check if this was an edit_workflow that created a diff
if (tool?.name === 'edit_workflow' && result?.workflowState) {
successEvent.workflowId = context.workflowId
successEvent.hasDiff = true
}
await emitEvent(context.onRenderEvent, 'tool_success', successEvent)
} else {
await emitEvent(context.onRenderEvent, 'tool_error', {
toolCallId,
error: event.error || 'Tool failed',
display,
})
}
}
// Persist tool call result
await context.onPersist?.({
type: 'tool_call',
toolCall: {
id: toolCallId,
name: tool?.name || '',
args: tool?.args || {},
state: success ? 'success' : 'error',
result,
},
})
}
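/**
* Handles an explicit tool_error event: marks the tracked call as failed and
* emits tool_error (or subagent_tool_error) with the best available message.
*/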
async function handleToolError(
event: any,
state: TransformState,
context: StreamTransformContext
): Promise<void> {
const toolCallId = event.toolCallId || event.data?.id
const error = event.error || event.data?.error || 'Tool execution failed'
if (!toolCallId) return
const tool = state.toolCalls.get(toolCallId)
if (tool) {
tool.state = 'error'
}
const isSubagentTool = state.subagentToolCalls.has(toolCallId)
const display = getToolDisplay(tool?.name || '', 'error')
if (isSubagentTool) {
await emitEvent(context.onRenderEvent, 'subagent_tool_error', {
parentToolCallId: state.subagentToolCalls.get(toolCallId)!,
toolCallId,
error,
})
} else {
await emitEvent(context.onRenderEvent, 'tool_error', {
toolCallId,
error,
display,
})
}
}
// ============================================================================
// Tool Execution
// ============================================================================
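/**
* Runs a tool on the server, emitting executing/success/error render events
* (or their subagent_* variants) as it progresses, then notifies the Sim Agent
* via markToolComplete and persists the outcome. Callers invoke this without
* awaiting so that multiple tool calls can execute in parallel.
*/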
async function executeToolServerSide(
toolCallId: string,
toolName: string,
args: Record<string, unknown>,
state: TransformState,
context: StreamTransformContext
): Promise<void> {
const { onRenderEvent, userId, workflowId } = context
const isSubagentTool = state.subagentToolCalls.has(toolCallId)
// Update state to executing
const tool = state.toolCalls.get(toolCallId)
if (tool) {
tool.state = 'executing'
}
const display = getToolDisplay(toolName, 'executing')
// Emit executing event
if (isSubagentTool) {
await emitEvent(onRenderEvent, 'subagent_tool_executing', {
parentToolCallId: state.subagentToolCalls.get(toolCallId)!,
toolCallId,
})
} else {
await emitEvent(onRenderEvent, 'tool_executing', {
toolCallId,
display,
})
}
try {
// Add workflowId to args if available
const execArgs = { ...args }
if (workflowId && !execArgs.workflowId) {
execArgs.workflowId = workflowId
}
// Execute the tool via the router
const result = await routeExecution(toolName, execArgs, { userId })
// Update state
if (tool) {
tool.state = 'success'
tool.result = result
}
// Emit success event
const successDisplay = getToolDisplay(toolName, 'success')
if (isSubagentTool) {
await emitEvent(onRenderEvent, 'subagent_tool_success', {
parentToolCallId: state.subagentToolCalls.get(toolCallId)!,
toolCallId,
result,
display: successDisplay,
})
} else {
const successEvent: any = {
toolCallId,
result,
display: successDisplay,
}
// Check if this was an edit_workflow that created a diff
if (toolName === 'edit_workflow' && result?.workflowState) {
successEvent.workflowId = workflowId
successEvent.hasDiff = true
// Emit diff_ready so client knows to read from DB
await emitEvent(onRenderEvent, 'diff_ready', {
workflowId: workflowId || '',
toolCallId,
})
}
await emitEvent(onRenderEvent, 'tool_success', successEvent)
}
// Notify Sim Agent that tool is complete
await markToolComplete(toolCallId, toolName, true, result)
// Persist tool result
await context.onPersist?.({
type: 'tool_call',
toolCall: {
id: toolCallId,
name: toolName,
args,
state: 'success',
result,
},
})
} catch (error) {
const errorMessage = error instanceof Error ? error.message : 'Tool execution failed'
logger.error('Tool execution failed', { toolCallId, toolName, error: errorMessage })
// Update state
if (tool) {
tool.state = 'error'
}
const errorDisplay = getToolDisplay(toolName, 'error')
// Emit error event
if (isSubagentTool) {
await emitEvent(onRenderEvent, 'subagent_tool_error', {
parentToolCallId: state.subagentToolCalls.get(toolCallId)!,
toolCallId,
error: errorMessage,
})
} else {
await emitEvent(onRenderEvent, 'tool_error', {
toolCallId,
error: errorMessage,
display: errorDisplay,
})
}
// Notify Sim Agent that tool failed
await markToolComplete(toolCallId, toolName, false, undefined, errorMessage)
}
}
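/**
* Reports a tool call's outcome back to the Sim Agent so the stream can
* continue. The payload carries an HTTP-style status (200 on success, 500 on
* failure), a message, and the result data on success. Failures here are
* logged but never thrown, so a reporting error cannot fail the tool call
* itself.
*/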
async function markToolComplete(
toolCallId: string,
toolName: string,
success: boolean,
result?: unknown,
error?: string
): Promise<void> {
try {
const response = await fetch(`${SIM_AGENT_API_URL}/api/tools/mark-complete`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
...(env.COPILOT_API_KEY ? { 'x-api-key': env.COPILOT_API_KEY } : {}),
},
body: JSON.stringify({
id: toolCallId,
name: toolName,
status: success ? 200 : 500,
message: success
? (result as Record<string, unknown> | undefined)?.message || 'Success'
: error,
data: success ? result : undefined,
}),
})
if (!response.ok) {
logger.warn('Failed to mark tool complete', { toolCallId, status: response.status })
}
} catch (e) {
logger.error('Error marking tool complete', { toolCallId, error: e })
}
}
// ============================================================================
// Subagent Handling
// ============================================================================
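/**
* Records the parent tool call as the active subagent. While a subagent is
* active, new tool calls are attributed to it (see handleToolCall) and their
* events are emitted as subagent_* variants carrying parentToolCallId.
*/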
async function handleSubagentStart(
event: any,
state: TransformState,
context: StreamTransformContext
): Promise<void> {
const parentToolCallId = event.parentToolCallId || event.data?.parentToolCallId
const subagentId = event.subagentId || event.data?.subagentId || parentToolCallId
const label = event.label || event.data?.label
if (!parentToolCallId) return
state.activeSubagent = parentToolCallId
await emitEvent(context.onRenderEvent, 'subagent_start', {
parentToolCallId,
subagentId,
label,
})
}
async function handleSubagentEnd(
event: any,
state: TransformState,
context: StreamTransformContext
): Promise<void> {
const parentToolCallId = event.parentToolCallId || event.data?.parentToolCallId || state.activeSubagent
if (!parentToolCallId) return
state.activeSubagent = null
await emitEvent(context.onRenderEvent, 'subagent_end', {
parentToolCallId,
})
}
// ============================================================================
// Plan & Options Handling
// ============================================================================
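/**
* Closes out plan capture: parses the buffered plan content into todo items
* and emits a single plan_end event carrying them.
*/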
async function finalizePlan(state: TransformState, context: StreamTransformContext): Promise<void> {
if (!state.inPlanCapture) return
state.inPlanCapture = false
// Parse todos from plan content
const todos = parseTodosFromPlan(state.planContent)
await emitEvent(context.onRenderEvent, 'plan_end', { todos })
}
async function finalizeOptions(
state: TransformState,
context: StreamTransformContext
): Promise<void> {
if (!state.inOptionsCapture) return
state.inOptionsCapture = false
// Parse options from content
const options = parseOptionsFromContent(state.optionsContent)
await emitEvent(context.onRenderEvent, 'options_end', { options })
}
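/**
* Extracts todo items from markdown-style bullet lines. For example, the
* (illustrative) content "- Add a Slack block\n* Wire the trigger" yields two
* pending todos with generated ids of the form "todo_<timestamp>_<index>".
*/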
function parseTodosFromPlan(content: string): Array<{ id: string; content: string; status: 'pending' }> {
const todos: Array<{ id: string; content: string; status: 'pending' }> = []
const lines = content.split('\n')
for (const line of lines) {
const match = line.match(/^[-*]\s+(.+)$/)
if (match) {
todos.push({
id: `todo_${Date.now()}_${todos.length}`,
content: match[1].trim(),
status: 'pending',
})
}
}
return todos
}
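/**
* Parses user-facing options from the captured content. A JSON array (e.g.
* '["Approve", "Reject"]') is filtered to its string entries; anything else
* falls back to one option per non-empty line.
*/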
function parseOptionsFromContent(content: string): string[] {
try {
// Try to parse as JSON array
const parsed = JSON.parse(content)
if (Array.isArray(parsed)) {
return parsed.filter((o) => typeof o === 'string')
}
} catch {}
// Fall back to splitting by newlines
return content
.split('\n')
.map((l) => l.trim())
.filter((l) => l.length > 0)
}
// ============================================================================
// Helpers
// ============================================================================
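/**
* Builds display metadata for a tool call: a human-friendly label from the
* tool-specific table (falling back to the tool name with underscores turned
* into spaces) plus a state description. For example, ('edit_workflow',
* 'executing') yields { label: 'Editing workflow', description: 'Running...' }.
*/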
function getToolDisplay(
toolName: string,
state: 'pending' | 'generating' | 'executing' | 'success' | 'error'
): ToolDisplay {
// Default displays based on state
const stateLabels: Record<string, string> = {
pending: 'Pending...',
generating: 'Preparing...',
executing: 'Running...',
success: 'Completed',
error: 'Failed',
}
// Tool-specific labels
const toolLabels: Record<string, string> = {
edit_workflow: 'Editing workflow',
get_user_workflow: 'Reading workflow',
get_block_config: 'Getting block config',
get_blocks_and_tools: 'Loading blocks',
get_credentials: 'Checking credentials',
run_workflow: 'Running workflow',
knowledge_base: 'Searching knowledge base',
navigate_ui: 'Navigating',
tour: 'Starting tour',
}
return {
label: toolLabels[toolName] || toolName.replace(/_/g, ' '),
description: stateLabels[state],
}
}
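/**
* Determines whether a tool must pause for user approval before executing.
* Currently a fixed allowlist of deploy/delete operations; the args parameter
* is accepted but not yet consulted.
*/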
function checkToolNeedsInterrupt(toolName: string, _args: Record<string, unknown>): boolean {
// Tools that always need user approval
const interruptTools = ['deploy_api', 'deploy_chat', 'deploy_mcp', 'delete_workflow']
return interruptTools.includes(toolName)
}
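/**
* Returns the choices rendered in an approval interrupt. Every tool currently
* shares the same Approve/Cancel pair; both parameters are accepted but not
* yet consulted.
*/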
function getInterruptOptions(
_toolName: string,
_args: Record<string, unknown>
): Array<{ id: string; label: string; description?: string; variant?: 'default' | 'destructive' | 'outline' }> {
// Default interrupt options
return [
{ id: 'approve', label: 'Approve', variant: 'default' },
{ id: 'reject', label: 'Cancel', variant: 'outline' },
]
}
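/**
* Type-safe wrapper around createRenderEvent: the data payload is checked
* against the specific RenderEvent variant for the given type, with type, seq,
* and ts filled in by createRenderEvent. Illustrative usage:
* emitEvent(onRenderEvent, 'tool_executing', { toolCallId, display }).
*/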
async function emitEvent<T extends RenderEvent['type']>(
onRenderEvent: (event: RenderEvent) => Promise<void>,
type: T,
data: Omit<Extract<RenderEvent, { type: T }>, 'type' | 'seq' | 'ts'>
): Promise<void> {
const event = createRenderEvent(type, data)
await onRenderEvent(event)
}

View File

@@ -5,9 +5,6 @@
* Import this module early in the app to ensure all tool configs are available.
*/
// Navigation tools
import './navigation/navigate-ui'
// Other tools (subagents)
import './other/auth'
import './other/custom-tool'
@@ -44,7 +41,6 @@ export {
getToolUIConfig,
hasInterrupt,
type InterruptConfig,
isClientOnlyTool,
isSpecialTool,
isSubagentTool,
type ParamsTableConfig,

View File

@@ -5,7 +5,6 @@ import {
type BaseClientToolMetadata,
ClientToolCallState,
} from '@/lib/copilot/tools/client/base-tool'
import { registerToolUIConfig } from '@/lib/copilot/tools/client/ui-config'
import { useCopilotStore } from '@/stores/panel/copilot/store'
import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
@@ -240,12 +239,3 @@ export class NavigateUIClientTool extends BaseClientTool {
await this.handleAccept(args)
}
}
// Register UI config at module load - clientOnly because this requires browser navigation
registerToolUIConfig(NavigateUIClientTool.id, {
clientOnly: true,
interrupt: {
accept: { text: 'Open', icon: Navigation },
reject: { text: 'Skip', icon: XCircle },
},
})

View File

@@ -33,7 +33,6 @@ export class TourClientTool extends BaseClientTool {
[ClientToolCallState.aborted]: { text: 'Aborted tour', icon: XCircle },
},
uiConfig: {
clientOnly: true, // Tour requires browser UI to guide the user
subagent: {
streamingLabel: 'Touring',
completedLabel: 'Tour complete',

View File

@@ -172,13 +172,6 @@ export interface ToolUIConfig {
* The tool-call component will use this to render specialized content.
*/
customRenderer?: 'code' | 'edit_summary' | 'none'
/**
* Whether this tool requires a client/browser session to execute.
* Client-only tools (like navigate_ui, tour) cannot run in headless/API mode.
* In API-only mode, these tools will be skipped with a message.
*/
clientOnly?: boolean
}
/**
@@ -222,14 +215,6 @@ export function hasInterrupt(toolName: string): boolean {
return !!toolUIConfigs[toolName]?.interrupt
}
/**
* Check if a tool is client-only (requires browser session).
* Client-only tools cannot execute in headless/API mode.
*/
export function isClientOnlyTool(toolName: string): boolean {
return !!toolUIConfigs[toolName]?.clientOnly
}
/**
* Get subagent labels for a tool
*/

View File

@@ -209,17 +209,13 @@ export class SetGlobalWorkflowVariablesClientTool extends BaseClientTool {
}
}
// Convert byName (keyed by name) to record keyed by ID for the API
const variablesRecord: Record<string, any> = {}
for (const v of Object.values(byName)) {
variablesRecord[v.id] = v
}
const variablesArray = Object.values(byName)
// POST full variables record to persist
// POST full variables array to persist
const res = await fetch(`/api/workflows/${payload.workflowId}/variables`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ variables: variablesRecord }),
body: JSON.stringify({ variables: variablesArray }),
})
if (!res.ok) {
const txt = await res.text().catch(() => '')

View File

@@ -1,6 +1,4 @@
import { createLogger } from '@sim/logger'
import crypto from 'crypto'
import { acquireLock, releaseLock } from '@/lib/core/config/redis'
import type { BaseServerTool } from '@/lib/copilot/tools/server/base-tool'
import { getBlockConfigServerTool } from '@/lib/copilot/tools/server/blocks/get-block-config'
import { getBlockOptionsServerTool } from '@/lib/copilot/tools/server/blocks/get-block-options'
@@ -32,15 +30,6 @@ import {
GetTriggerBlocksResult,
} from '@/lib/copilot/tools/shared/schemas'
/** Lock expiry in seconds for edit_workflow operations */
const EDIT_WORKFLOW_LOCK_EXPIRY = 30
/** Maximum wait time in ms before giving up on acquiring the lock */
const EDIT_WORKFLOW_LOCK_TIMEOUT = 15000
/** Delay between lock acquisition retries in ms */
const EDIT_WORKFLOW_LOCK_RETRY_DELAY = 100
// Generic execute response schemas (success path only for this route; errors handled via HTTP status)
export { ExecuteResponseSuccessSchema }
export type ExecuteResponseSuccess = (typeof ExecuteResponseSuccessSchema)['_type']
@@ -64,30 +53,6 @@ serverToolRegistry[getCredentialsServerTool.name] = getCredentialsServerTool
serverToolRegistry[makeApiRequestServerTool.name] = makeApiRequestServerTool
serverToolRegistry[knowledgeBaseServerTool.name] = knowledgeBaseServerTool
/**
* Acquire a lock with retries for workflow-mutating operations
*/
async function acquireLockWithRetry(
lockKey: string,
lockValue: string,
expirySeconds: number,
timeoutMs: number,
retryDelayMs: number
): Promise<boolean> {
const startTime = Date.now()
while (Date.now() - startTime < timeoutMs) {
const acquired = await acquireLock(lockKey, lockValue, expirySeconds)
if (acquired) {
return true
}
// Wait before retrying
await new Promise((resolve) => setTimeout(resolve, retryDelayMs))
}
return false
}
export async function routeExecution(
toolName: string,
payload: unknown,
@@ -128,74 +93,23 @@ export async function routeExecution(
args = KnowledgeBaseInput.parse(args)
}
// For edit_workflow, acquire a per-workflow lock to prevent race conditions
// when multiple edit_workflow calls happen in parallel for the same workflow
let lockKey: string | null = null
let lockValue: string | null = null
const result = await tool.execute(args, context)
if (toolName === 'edit_workflow' && args.workflowId) {
lockKey = `copilot:edit_workflow:lock:${args.workflowId}`
lockValue = crypto.randomUUID()
const acquired = await acquireLockWithRetry(
lockKey,
lockValue,
EDIT_WORKFLOW_LOCK_EXPIRY,
EDIT_WORKFLOW_LOCK_TIMEOUT,
EDIT_WORKFLOW_LOCK_RETRY_DELAY
)
if (!acquired) {
logger.warn('Failed to acquire edit_workflow lock after timeout', {
workflowId: args.workflowId,
timeoutMs: EDIT_WORKFLOW_LOCK_TIMEOUT,
})
throw new Error(
'Workflow is currently being edited by another operation. Please try again shortly.'
)
}
logger.debug('Acquired edit_workflow lock', {
workflowId: args.workflowId,
lockKey,
})
if (toolName === 'get_blocks_and_tools') {
return GetBlocksAndToolsResult.parse(result)
}
if (toolName === 'get_blocks_metadata') {
return GetBlocksMetadataResult.parse(result)
}
if (toolName === 'get_block_options') {
return GetBlockOptionsResult.parse(result)
}
if (toolName === 'get_block_config') {
return GetBlockConfigResult.parse(result)
}
if (toolName === 'get_trigger_blocks') {
return GetTriggerBlocksResult.parse(result)
}
try {
const result = await tool.execute(args, context)
if (toolName === 'get_blocks_and_tools') {
return GetBlocksAndToolsResult.parse(result)
}
if (toolName === 'get_blocks_metadata') {
return GetBlocksMetadataResult.parse(result)
}
if (toolName === 'get_block_options') {
return GetBlockOptionsResult.parse(result)
}
if (toolName === 'get_block_config') {
return GetBlockConfigResult.parse(result)
}
if (toolName === 'get_trigger_blocks') {
return GetTriggerBlocksResult.parse(result)
}
return result
} finally {
// Always release the lock if we acquired one
if (lockKey && lockValue) {
const released = await releaseLock(lockKey, lockValue)
if (released) {
logger.debug('Released edit_workflow lock', {
workflowId: args.workflowId,
lockKey,
})
} else {
logger.warn('Failed to release edit_workflow lock (may have expired)', {
workflowId: args.workflowId,
lockKey,
})
}
}
}
return result
}

View File

@@ -817,8 +817,6 @@ function normalizeResponseFormat(value: any): string {
interface EdgeHandleValidationResult {
valid: boolean
error?: string
/** The normalized handle to use (e.g., simple 'if' normalized to 'condition-{uuid}') */
normalizedHandle?: string
}
/**
@@ -853,6 +851,13 @@ function validateSourceHandleForBlock(
}
case 'condition': {
if (!sourceHandle.startsWith(EDGE.CONDITION_PREFIX)) {
return {
valid: false,
error: `Invalid source handle "${sourceHandle}" for condition block. Must start with "${EDGE.CONDITION_PREFIX}"`,
}
}
const conditionsValue = sourceBlock?.subBlocks?.conditions?.value
if (!conditionsValue) {
return {
@@ -861,8 +866,6 @@ function validateSourceHandleForBlock(
}
}
// validateConditionHandle accepts simple format (if, else-if-0, else),
// legacy format (condition-{blockId}-if), and internal ID format (condition-{uuid})
return validateConditionHandle(sourceHandle, sourceBlock.id, conditionsValue)
}
@@ -876,6 +879,13 @@ function validateSourceHandleForBlock(
}
case 'router_v2': {
if (!sourceHandle.startsWith(EDGE.ROUTER_PREFIX)) {
return {
valid: false,
error: `Invalid source handle "${sourceHandle}" for router_v2 block. Must start with "${EDGE.ROUTER_PREFIX}"`,
}
}
const routesValue = sourceBlock?.subBlocks?.routes?.value
if (!routesValue) {
return {
@@ -884,8 +894,6 @@ function validateSourceHandleForBlock(
}
}
// validateRouterHandle accepts simple format (route-0, route-1),
// legacy format (router-{blockId}-route-1), and internal ID format (router-{uuid})
return validateRouterHandle(sourceHandle, sourceBlock.id, routesValue)
}
@@ -902,12 +910,7 @@ function validateSourceHandleForBlock(
/**
* Validates condition handle references a valid condition in the block.
* Accepts multiple formats:
* - Simple format: "if", "else-if-0", "else-if-1", "else"
* - Legacy semantic format: "condition-{blockId}-if", "condition-{blockId}-else-if"
* - Internal ID format: "condition-{conditionId}"
*
* Returns the normalized handle (condition-{conditionId}) for storage.
* Accepts both internal IDs (condition-blockId-if) and semantic keys (condition-blockId-else-if)
*/
function validateConditionHandle(
sourceHandle: string,
@@ -940,80 +943,48 @@ function validateConditionHandle(
}
}
// Build a map of all valid handle formats -> normalized handle (condition-{conditionId})
const handleToNormalized = new Map<string, string>()
const legacySemanticPrefix = `condition-${blockId}-`
let elseIfIndex = 0
const validHandles = new Set<string>()
const semanticPrefix = `condition-${blockId}-`
let elseIfCount = 0
for (const condition of conditions) {
if (!condition.id) continue
if (condition.id) {
validHandles.add(`condition-${condition.id}`)
}
const normalizedHandle = `condition-${condition.id}`
const title = condition.title?.toLowerCase()
// Always accept internal ID format
handleToNormalized.set(normalizedHandle, normalizedHandle)
if (title === 'if') {
// Simple format: "if"
handleToNormalized.set('if', normalizedHandle)
// Legacy format: "condition-{blockId}-if"
handleToNormalized.set(`${legacySemanticPrefix}if`, normalizedHandle)
validHandles.add(`${semanticPrefix}if`)
} else if (title === 'else if') {
// Simple format: "else-if-0", "else-if-1", etc. (0-indexed)
handleToNormalized.set(`else-if-${elseIfIndex}`, normalizedHandle)
// Legacy format: "condition-{blockId}-else-if" for first, "condition-{blockId}-else-if-2" for second
if (elseIfIndex === 0) {
handleToNormalized.set(`${legacySemanticPrefix}else-if`, normalizedHandle)
} else {
handleToNormalized.set(
`${legacySemanticPrefix}else-if-${elseIfIndex + 1}`,
normalizedHandle
)
}
elseIfIndex++
elseIfCount++
validHandles.add(
elseIfCount === 1 ? `${semanticPrefix}else-if` : `${semanticPrefix}else-if-${elseIfCount}`
)
} else if (title === 'else') {
// Simple format: "else"
handleToNormalized.set('else', normalizedHandle)
// Legacy format: "condition-{blockId}-else"
handleToNormalized.set(`${legacySemanticPrefix}else`, normalizedHandle)
validHandles.add(`${semanticPrefix}else`)
}
}
const normalizedHandle = handleToNormalized.get(sourceHandle)
if (normalizedHandle) {
return { valid: true, normalizedHandle }
if (validHandles.has(sourceHandle)) {
return { valid: true }
}
// Build list of valid simple format options for error message
const simpleOptions: string[] = []
elseIfIndex = 0
for (const condition of conditions) {
const title = condition.title?.toLowerCase()
if (title === 'if') {
simpleOptions.push('if')
} else if (title === 'else if') {
simpleOptions.push(`else-if-${elseIfIndex}`)
elseIfIndex++
} else if (title === 'else') {
simpleOptions.push('else')
}
const validOptions = Array.from(validHandles).slice(0, 5)
const moreCount = validHandles.size - validOptions.length
let validOptionsStr = validOptions.join(', ')
if (moreCount > 0) {
validOptionsStr += `, ... and ${moreCount} more`
}
return {
valid: false,
error: `Invalid condition handle "${sourceHandle}". Valid handles: ${simpleOptions.join(', ')}`,
error: `Invalid condition handle "${sourceHandle}". Valid handles: ${validOptionsStr}`,
}
}
/**
* Validates router handle references a valid route in the block.
* Accepts multiple formats:
* - Simple format: "route-0", "route-1", "route-2" (0-indexed)
* - Legacy semantic format: "router-{blockId}-route-1" (1-indexed)
* - Internal ID format: "router-{routeId}"
*
* Returns the normalized handle (router-{routeId}) for storage.
* Accepts both internal IDs (router-{routeId}) and semantic keys (router-{blockId}-route-1)
*/
function validateRouterHandle(
sourceHandle: string,
@@ -1046,48 +1017,47 @@ function validateRouterHandle(
}
}
// Build a map of all valid handle formats -> normalized handle (router-{routeId})
const handleToNormalized = new Map<string, string>()
const legacySemanticPrefix = `router-${blockId}-`
const validHandles = new Set<string>()
const semanticPrefix = `router-${blockId}-`
for (let i = 0; i < routes.length; i++) {
const route = routes[i]
if (!route.id) continue
const normalizedHandle = `router-${route.id}`
// Accept internal ID format: router-{uuid}
if (route.id) {
validHandles.add(`router-${route.id}`)
}
// Always accept internal ID format: router-{uuid}
handleToNormalized.set(normalizedHandle, normalizedHandle)
// Simple format: route-0, route-1, etc. (0-indexed)
handleToNormalized.set(`route-${i}`, normalizedHandle)
// Legacy 1-indexed route number format: router-{blockId}-route-1
handleToNormalized.set(`${legacySemanticPrefix}route-${i + 1}`, normalizedHandle)
// Accept 1-indexed route number format: router-{blockId}-route-1, router-{blockId}-route-2, etc.
validHandles.add(`${semanticPrefix}route-${i + 1}`)
// Accept normalized title format: router-{blockId}-{normalized-title}
// Normalize: lowercase, replace spaces with dashes, remove special chars
if (route.title && typeof route.title === 'string') {
const normalizedTitle = route.title
.toLowerCase()
.replace(/\s+/g, '-')
.replace(/[^a-z0-9-]/g, '')
if (normalizedTitle) {
handleToNormalized.set(`${legacySemanticPrefix}${normalizedTitle}`, normalizedHandle)
validHandles.add(`${semanticPrefix}${normalizedTitle}`)
}
}
}
const normalizedHandle = handleToNormalized.get(sourceHandle)
if (normalizedHandle) {
return { valid: true, normalizedHandle }
if (validHandles.has(sourceHandle)) {
return { valid: true }
}
// Build list of valid simple format options for error message
const simpleOptions = routes.map((_, i) => `route-${i}`)
const validOptions = Array.from(validHandles).slice(0, 5)
const moreCount = validHandles.size - validOptions.length
let validOptionsStr = validOptions.join(', ')
if (moreCount > 0) {
validOptionsStr += `, ... and ${moreCount} more`
}
return {
valid: false,
error: `Invalid router handle "${sourceHandle}". Valid handles: ${simpleOptions.join(', ')}`,
error: `Invalid router handle "${sourceHandle}". Valid handles: ${validOptionsStr}`,
}
}
@@ -1202,13 +1172,10 @@ function createValidatedEdge(
return false
}
// Use normalized handle if available (e.g., 'if' -> 'condition-{uuid}')
const finalSourceHandle = sourceValidation.normalizedHandle || sourceHandle
modifiedState.edges.push({
id: crypto.randomUUID(),
source: sourceBlockId,
sourceHandle: finalSourceHandle,
sourceHandle,
target: targetBlockId,
targetHandle,
type: 'default',
@@ -1217,11 +1184,7 @@ function createValidatedEdge(
}
/**
* Adds connections as edges for a block.
* Supports multiple target formats:
* - String: "target-block-id"
* - Object: { block: "target-block-id", handle?: "custom-target-handle" }
* - Array of strings or objects
* Adds connections as edges for a block
*/
function addConnectionsAsEdges(
modifiedState: any,
@@ -1231,34 +1194,19 @@ function addConnectionsAsEdges(
skippedItems?: SkippedItem[]
): void {
Object.entries(connections).forEach(([sourceHandle, targets]) => {
if (targets === null) return
const addEdgeForTarget = (targetBlock: string, targetHandle?: string) => {
const targetArray = Array.isArray(targets) ? targets : [targets]
targetArray.forEach((targetId: string) => {
createValidatedEdge(
modifiedState,
blockId,
targetBlock,
targetId,
sourceHandle,
targetHandle || 'target',
'target',
'add_edge',
logger,
skippedItems
)
}
if (typeof targets === 'string') {
addEdgeForTarget(targets)
} else if (Array.isArray(targets)) {
targets.forEach((target: any) => {
if (typeof target === 'string') {
addEdgeForTarget(target)
} else if (target?.block) {
addEdgeForTarget(target.block, target.handle)
}
})
} else if (typeof targets === 'object' && targets?.block) {
addEdgeForTarget(targets.block, targets.handle)
}
})
})
}
@@ -2550,7 +2498,7 @@ export const editWorkflowServerTool: BaseServerTool<EditWorkflowParams, any> = {
name: 'edit_workflow',
async execute(params: EditWorkflowParams, context?: { userId: string }): Promise<any> {
const logger = createLogger('EditWorkflowServerTool')
const { operations, workflowId } = params
const { operations, workflowId, currentUserWorkflow } = params
if (!Array.isArray(operations) || operations.length === 0) {
throw new Error('operations are required and must be an array')
}
@@ -2559,14 +2507,22 @@ export const editWorkflowServerTool: BaseServerTool<EditWorkflowParams, any> = {
logger.info('Executing edit_workflow', {
operationCount: operations.length,
workflowId,
hasCurrentUserWorkflow: !!currentUserWorkflow,
})
// Always fetch from DB to ensure we have the latest state.
// This is critical because multiple edit_workflow calls may execute
// sequentially (via locking), and each must see the previous call's changes.
// The AI-provided currentUserWorkflow may be stale.
const fromDb = await getCurrentWorkflowStateFromDb(workflowId)
const workflowState = fromDb.workflowState
// Get current workflow state
let workflowState: any
if (currentUserWorkflow) {
try {
workflowState = JSON.parse(currentUserWorkflow)
} catch (error) {
logger.error('Failed to parse currentUserWorkflow', error)
throw new Error('Invalid currentUserWorkflow format')
}
} else {
const fromDb = await getCurrentWorkflowStateFromDb(workflowId)
workflowState = fromDb.workflowState
}
// Get permission config for the user
const permissionConfig = context?.userId ? await getUserPermissionConfig(context.userId) : null
@@ -2651,42 +2607,16 @@ export const editWorkflowServerTool: BaseServerTool<EditWorkflowParams, any> = {
logger.warn('No userId in context - skipping custom tools persistence', { workflowId })
}
const finalWorkflowState = validation.sanitizedState || modifiedWorkflowState
logger.info('edit_workflow successfully applied operations', {
operationCount: operations.length,
blocksCount: Object.keys(finalWorkflowState.blocks).length,
edgesCount: finalWorkflowState.edges.length,
blocksCount: Object.keys(modifiedWorkflowState.blocks).length,
edgesCount: modifiedWorkflowState.edges.length,
inputValidationErrors: validationErrors.length,
skippedItemsCount: skippedItems.length,
schemaValidationErrors: validation.errors.length,
validationWarnings: validation.warnings.length,
})
// IMPORTANT: Persist the workflow state to DB BEFORE returning.
// This ensures that subsequent edit_workflow calls (which fetch from DB)
// will see the latest state. Without this, there's a race condition where
// the client persists AFTER the lock is released, and another edit_workflow
// call can see stale state.
try {
const { saveWorkflowToNormalizedTables } = await import(
'@/lib/workflows/persistence/utils'
)
await saveWorkflowToNormalizedTables(workflowId, finalWorkflowState)
logger.info('Persisted workflow state to DB before returning', {
workflowId,
blocksCount: Object.keys(finalWorkflowState.blocks).length,
edgesCount: finalWorkflowState.edges.length,
})
} catch (persistError) {
logger.error('Failed to persist workflow state to DB', {
workflowId,
error: persistError instanceof Error ? persistError.message : String(persistError),
})
// Don't throw - we still want to return the modified state
// The client will also try to persist, which may succeed
}
// Format validation errors for LLM feedback
const inputErrors =
validationErrors.length > 0

View File

@@ -326,32 +326,32 @@ export const env = createEnv({
NEXT_PUBLIC_E2B_ENABLED: z.string().optional(),
NEXT_PUBLIC_COPILOT_TRAINING_ENABLED: z.string().optional(),
NEXT_PUBLIC_ENABLE_PLAYGROUND: z.string().optional(), // Enable component playground at /playground
NEXT_PUBLIC_DOCUMENTATION_URL: z.string().url().optional(), // Custom documentation URL
NEXT_PUBLIC_TERMS_URL: z.string().url().optional(), // Custom terms of service URL
NEXT_PUBLIC_PRIVACY_URL: z.string().url().optional(), // Custom privacy policy URL
// Theme Customization
NEXT_PUBLIC_BRAND_PRIMARY_COLOR: z.string().regex(/^#[0-9A-Fa-f]{6}$/).optional(), // Primary brand color (hex format, e.g., "#701ffc")
NEXT_PUBLIC_BRAND_PRIMARY_HOVER_COLOR: z.string().regex(/^#[0-9A-Fa-f]{6}$/).optional(), // Primary brand hover state (hex format)
NEXT_PUBLIC_BRAND_ACCENT_COLOR: z.string().regex(/^#[0-9A-Fa-f]{6}$/).optional(), // Accent brand color (hex format)
NEXT_PUBLIC_BRAND_ACCENT_HOVER_COLOR: z.string().regex(/^#[0-9A-Fa-f]{6}$/).optional(), // Accent brand hover state (hex format)
NEXT_PUBLIC_BRAND_BACKGROUND_COLOR: z.string().regex(/^#[0-9A-Fa-f]{6}$/).optional(), // Brand background color (hex format)
// Feature Flags
NEXT_PUBLIC_TRIGGER_DEV_ENABLED: z.boolean().optional(), // Client-side gate for async executions UI
NEXT_PUBLIC_SSO_ENABLED: z.boolean().optional(), // Enable SSO login UI components
NEXT_PUBLIC_CREDENTIAL_SETS_ENABLED: z.boolean().optional(), // Enable credential sets (email polling) on self-hosted
NEXT_PUBLIC_ACCESS_CONTROL_ENABLED: z.boolean().optional(), // Enable access control (permission groups) on self-hosted
NEXT_PUBLIC_ORGANIZATIONS_ENABLED: z.boolean().optional(), // Enable organizations on self-hosted (bypasses plan requirements)
NEXT_PUBLIC_DISABLE_INVITATIONS: z.boolean().optional(), // Disable workspace invitations globally (for self-hosted deployments)
NEXT_PUBLIC_EMAIL_PASSWORD_SIGNUP_ENABLED: z.boolean().optional().default(true), // Control visibility of email/password login forms
},
// Variables available on both server and client
shared: {
NODE_ENV: z.enum(['development', 'test', 'production']).optional(), // Runtime environment
NEXT_TELEMETRY_DISABLED: z.string().optional(), // Disable Next.js telemetry collection
},
experimental__runtimeEnv: {

View File

@@ -543,12 +543,6 @@ export class ExecutionLogger implements IExecutionLoggerService {
case 'chat':
updateFields.totalChatExecutions = sql`total_chat_executions + 1`
break
case 'mcp':
updateFields.totalMcpExecutions = sql`total_mcp_executions + 1`
break
case 'a2a':
updateFields.totalA2aExecutions = sql`total_a2a_executions + 1`
break
}
await db.update(userStats).set(updateFields).where(eq(userStats.userId, userId))

View File

@@ -152,20 +152,15 @@ function addUnsubscribeData(
): UnsubscribeData {
const unsubscribeToken = generateUnsubscribeToken(recipientEmail, emailType)
const baseUrl = getBaseUrl()
const encodedEmail = encodeURIComponent(recipientEmail)
const unsubscribeUrl = `${baseUrl}/unsubscribe?token=${unsubscribeToken}&email=${encodedEmail}`
const unsubscribeUrl = `${baseUrl}/unsubscribe?token=${unsubscribeToken}&email=${encodeURIComponent(recipientEmail)}`
return {
headers: {
'List-Unsubscribe': `<${unsubscribeUrl}>`,
'List-Unsubscribe-Post': 'List-Unsubscribe=One-Click',
},
html: html
?.replace(/\{\{UNSUBSCRIBE_TOKEN\}\}/g, unsubscribeToken)
.replace(/\{\{UNSUBSCRIBE_EMAIL\}\}/g, encodedEmail),
text: text
?.replace(/\{\{UNSUBSCRIBE_TOKEN\}\}/g, unsubscribeToken)
.replace(/\{\{UNSUBSCRIBE_EMAIL\}\}/g, encodedEmail),
html: html?.replace(/\{\{UNSUBSCRIBE_TOKEN\}\}/g, unsubscribeToken),
text: text?.replace(/\{\{UNSUBSCRIBE_TOKEN\}\}/g, unsubscribeToken),
}
}
@@ -366,15 +361,15 @@ async function sendBatchWithResend(emails: EmailOptions[]): Promise<BatchSendEma
subject: email.subject,
}
if (email.html) emailData.html = email.html
if (email.text) emailData.text = email.text
if (includeUnsubscribe && emailType !== 'transactional') {
const primaryEmail = Array.isArray(email.to) ? email.to[0] : email.to
const unsubData = addUnsubscribeData(primaryEmail, emailType, email.html, email.text)
emailData.headers = unsubData.headers
if (unsubData.html) emailData.html = unsubData.html
if (unsubData.text) emailData.text = unsubData.text
} else {
if (email.html) emailData.html = email.html
if (email.text) emailData.text = email.text
}
batchEmails.push(emailData)

View File

@@ -114,15 +114,17 @@ describe('unsubscribe utilities', () => {
})
it.concurrent('should handle legacy tokens (2 parts) and default to marketing', () => {
// Generate a real legacy token using the actual hashing logic to ensure backward compatibility
const salt = 'abc123'
const secret = 'test-secret-key'
const { createHash } = require('crypto')
const hash = createHash('sha256').update(`${testEmail}:${salt}:${secret}`).digest('hex')
const legacyToken = `${salt}:${hash}`
// This should return valid since we're using the actual legacy format properly
const result = verifyUnsubscribeToken(testEmail, legacyToken)
expect(result.valid).toBe(true)
expect(result.emailType).toBe('marketing')
expect(result.emailType).toBe('marketing') // Should default to marketing for legacy tokens
})
it.concurrent('should reject malformed tokens', () => {
@@ -224,6 +226,7 @@ describe('unsubscribe utilities', () => {
it('should update email preferences for existing user', async () => {
const userId = 'user-123'
// Mock finding the user
mockDb.select.mockReturnValueOnce({
from: vi.fn().mockReturnValue({
where: vi.fn().mockReturnValue({
@@ -232,6 +235,7 @@ describe('unsubscribe utilities', () => {
}),
})
// Mock getting existing settings
mockDb.select.mockReturnValueOnce({
from: vi.fn().mockReturnValue({
where: vi.fn().mockReturnValue({
@@ -240,6 +244,7 @@ describe('unsubscribe utilities', () => {
}),
})
// Mock insert with upsert
mockDb.insert.mockReturnValue({
values: vi.fn().mockReturnValue({
onConflictDoUpdate: vi.fn().mockResolvedValue(undefined),
@@ -295,6 +300,7 @@ describe('unsubscribe utilities', () => {
await updateEmailPreferences(testEmail, { unsubscribeMarketing: true })
// Verify that the merged preferences are passed
expect(mockInsertValues).toHaveBeenCalledWith(
expect.objectContaining({
emailPreferences: {

View File

@@ -38,6 +38,7 @@ export function verifyUnsubscribeToken(
const parts = token.split(':')
if (parts.length < 2) return { valid: false }
// Handle legacy tokens (without email type)
if (parts.length === 2) {
const [salt, expectedHash] = parts
const hash = createHash('sha256')
@@ -47,6 +48,7 @@ export function verifyUnsubscribeToken(
return { valid: hash === expectedHash, emailType: 'marketing' }
}
// Handle new tokens (with email type)
const [salt, expectedHash, emailType] = parts
if (!salt || !expectedHash || !emailType) return { valid: false }
@@ -99,6 +101,7 @@ export async function updateEmailPreferences(
preferences: EmailPreferences
): Promise<boolean> {
try {
// First, find the user
const userResult = await db
.select({ id: user.id })
.from(user)
@@ -112,6 +115,7 @@ export async function updateEmailPreferences(
const userId = userResult[0].id
// Get existing email preferences
const existingSettings = await db
.select({ emailPreferences: settings.emailPreferences })
.from(settings)
@@ -123,11 +127,13 @@ export async function updateEmailPreferences(
currentEmailPreferences = (existingSettings[0].emailPreferences as EmailPreferences) || {}
}
// Merge email preferences
const updatedEmailPreferences = {
...currentEmailPreferences,
...preferences,
}
// Upsert settings
await db
.insert(settings)
.values({
@@ -162,8 +168,10 @@ export async function isUnsubscribed(
const preferences = await getEmailPreferences(email)
if (!preferences) return false
// Check unsubscribe all first
if (preferences.unsubscribeAll) return true
// Check specific type
switch (emailType) {
case 'marketing':
return preferences.unsubscribeMarketing || false

View File

@@ -1,193 +0,0 @@
import type { ComponentType } from 'react'
import { getAllBlocks } from '@/blocks'
import type { BlockConfig, SubBlockConfig } from '@/blocks/types'
/**
* Represents a searchable tool operation extracted from block configurations.
* Each operation maps to a specific tool that can be invoked when the block
* is configured with that operation selected.
*/
export interface ToolOperationItem {
/** Unique identifier combining block type and operation ID (e.g., "slack_send") */
id: string
/** The block type this operation belongs to (e.g., "slack") */
blockType: string
/** The operation dropdown value (e.g., "send") */
operationId: string
/** Human-readable service name from the block (e.g., "Slack") */
serviceName: string
/** Human-readable operation name from the dropdown label (e.g., "Send Message") */
operationName: string
/** The block's icon component */
icon: ComponentType<{ className?: string }>
/** The block's background color */
bgColor: string
/** Search aliases for common synonyms */
aliases: string[]
}
/**
* Maps common action verbs to their synonyms for better search matching.
* When a user searches for "post message", it should match "send message".
* Based on analysis of 1000+ tool operations in the codebase.
*/
const ACTION_VERB_ALIASES: Record<string, string[]> = {
get: ['read', 'fetch', 'retrieve', 'load', 'obtain'],
read: ['get', 'fetch', 'retrieve', 'load'],
create: ['make', 'new', 'add', 'generate', 'insert'],
add: ['create', 'insert', 'append', 'include'],
update: ['edit', 'modify', 'change', 'patch', 'set'],
set: ['update', 'configure', 'assign'],
delete: ['remove', 'trash', 'destroy', 'erase'],
remove: ['delete', 'clear', 'drop', 'unset'],
list: ['show', 'display', 'view', 'browse', 'enumerate'],
search: ['find', 'query', 'lookup', 'locate'],
query: ['search', 'find', 'lookup'],
send: ['post', 'write', 'deliver', 'transmit', 'publish'],
write: ['send', 'post', 'compose'],
download: ['export', 'save', 'pull', 'fetch'],
upload: ['import', 'push', 'transfer', 'attach'],
execute: ['run', 'invoke', 'trigger', 'perform', 'start'],
check: ['verify', 'validate', 'test', 'inspect'],
cancel: ['abort', 'stop', 'terminate', 'revoke'],
archive: ['store', 'backup', 'preserve'],
copy: ['duplicate', 'clone', 'replicate'],
move: ['transfer', 'relocate', 'migrate'],
share: ['publish', 'distribute', 'broadcast'],
}
/**
* Generates search aliases for an operation name by finding synonyms
* for action verbs in the operation name.
*/
function generateAliases(operationName: string): string[] {
const aliases: string[] = []
const lowerName = operationName.toLowerCase()
for (const [verb, synonyms] of Object.entries(ACTION_VERB_ALIASES)) {
if (lowerName.includes(verb)) {
for (const synonym of synonyms) {
aliases.push(lowerName.replace(verb, synonym))
}
}
}
return aliases
}
/**
* Extracts the operation dropdown subblock from a block's configuration.
* Returns null if no operation dropdown exists.
*/
function findOperationDropdown(block: BlockConfig): SubBlockConfig | null {
return (
block.subBlocks.find(
(sb) => sb.id === 'operation' && sb.type === 'dropdown' && Array.isArray(sb.options)
) ?? null
)
}
/**
* Resolves the tool ID for a given operation using the block's tool config.
* Falls back to checking tools.access if no config.tool function exists.
*/
function resolveToolId(block: BlockConfig, operationId: string): string | null {
if (!block.tools) return null
if (block.tools.config?.tool) {
try {
return block.tools.config.tool({ operation: operationId })
} catch {
return null
}
}
if (block.tools.access?.length === 1) {
return block.tools.access[0]
}
return null
}
/**
* Builds an index of all tool operations from the block registry.
* This index is used by the search modal to enable operation-level discovery.
*
* The function iterates through all blocks that have:
* 1. A tools.access array (indicating they use tools)
* 2. An "operation" dropdown subblock with options
*
* For each operation option, it creates a ToolOperationItem that maps
* the operation to its corresponding tool.
*/
export function buildToolOperationsIndex(): ToolOperationItem[] {
const operations: ToolOperationItem[] = []
const allBlocks = getAllBlocks()
for (const block of allBlocks) {
if (!block.tools?.access?.length || block.hideFromToolbar) {
continue
}
if (block.category !== 'tools') {
continue
}
const operationDropdown = findOperationDropdown(block)
if (!operationDropdown) {
continue
}
const options =
typeof operationDropdown.options === 'function'
? operationDropdown.options()
: operationDropdown.options
if (!options) continue
for (const option of options) {
if (!resolveToolId(block, option.id)) continue
const operationName = option.label
const aliases = generateAliases(operationName)
operations.push({
id: `${block.type}_${option.id}`,
blockType: block.type,
operationId: option.id,
serviceName: block.name,
operationName,
icon: block.icon,
bgColor: block.bgColor,
aliases,
})
}
}
return operations
}
/**
* Cached operations index to avoid rebuilding on every search.
* The index is built lazily on first access.
*/
let cachedOperations: ToolOperationItem[] | null = null
/**
* Returns the tool operations index, building it if necessary.
* The index is cached after first build since block registry is static.
*/
export function getToolOperationsIndex(): ToolOperationItem[] {
if (!cachedOperations) {
cachedOperations = buildToolOperationsIndex()
}
return cachedOperations
}
/**
* Clears the cached operations index.
* Useful for testing or if blocks are dynamically modified.
*/
export function clearToolOperationsCache(): void {
cachedOperations = null
}

View File

@@ -1,41 +1,18 @@
import { db } from '@sim/db'
import { member, settings, templateCreators, templates, user } from '@sim/db/schema'
import { member, templateCreators, templates, user } from '@sim/db/schema'
import { and, eq, or } from 'drizzle-orm'
export type CreatorPermissionLevel = 'member' | 'admin'
/**
* Verifies if a user is an effective super user (database flag AND settings toggle).
* This should be used for features that can be disabled by the user's settings toggle.
* Verifies if a user is a super user.
*
* @param userId - The ID of the user to check
* @returns Object with effectiveSuperUser boolean and component values
* @returns Object with isSuperUser boolean
*/
export async function verifyEffectiveSuperUser(userId: string): Promise<{
effectiveSuperUser: boolean
isSuperUser: boolean
superUserModeEnabled: boolean
}> {
const [currentUser] = await db
.select({ isSuperUser: user.isSuperUser })
.from(user)
.where(eq(user.id, userId))
.limit(1)
const [userSettings] = await db
.select({ superUserModeEnabled: settings.superUserModeEnabled })
.from(settings)
.where(eq(settings.userId, userId))
.limit(1)
const isSuperUser = currentUser?.isSuperUser || false
const superUserModeEnabled = userSettings?.superUserModeEnabled ?? false
return {
effectiveSuperUser: isSuperUser && superUserModeEnabled,
isSuperUser,
superUserModeEnabled,
}
export async function verifySuperUser(userId: string): Promise<{ isSuperUser: boolean }> {
const [currentUser] = await db.select().from(user).where(eq(user.id, userId)).limit(1)
return { isSuperUser: currentUser?.isSuperUser || false }
}
/**

View File

@@ -7,7 +7,6 @@ import type { InputFormatField } from '@/lib/workflows/types'
export interface WorkflowInputField {
name: string
type: string
description?: string
}
/**
@@ -38,7 +37,7 @@ export function extractInputFieldsFromBlocks(
if (Array.isArray(inputFormat)) {
return inputFormat
.filter(
(field: unknown): field is { name: string; type?: string; description?: string } =>
(field: unknown): field is { name: string; type?: string } =>
typeof field === 'object' &&
field !== null &&
'name' in field &&
@@ -48,7 +47,6 @@ export function extractInputFieldsFromBlocks(
.map((field) => ({
name: field.name,
type: field.type || 'string',
...(field.description && { description: field.description }),
}))
}
@@ -59,7 +57,7 @@ export function extractInputFieldsFromBlocks(
if (Array.isArray(legacyFormat)) {
return legacyFormat
.filter(
(field: unknown): field is { name: string; type?: string; description?: string } =>
(field: unknown): field is { name: string; type?: string } =>
typeof field === 'object' &&
field !== null &&
'name' in field &&
@@ -69,7 +67,6 @@ export function extractInputFieldsFromBlocks(
.map((field) => ({
name: field.name,
type: field.type || 'string',
...(field.description && { description: field.description }),
}))
}

View File

@@ -269,12 +269,11 @@ function sanitizeSubBlocks(
}
/**
* Convert internal condition handle (condition-{uuid}) to simple format (if, else-if-0, else)
* Uses 0-indexed numbering for else-if conditions
* Convert internal condition handle (condition-{uuid}) to semantic format (condition-{blockId}-if)
*/
function convertConditionHandleToSimple(
function convertConditionHandleToSemantic(
handle: string,
_blockId: string,
blockId: string,
block: BlockState
): string {
if (!handle.startsWith('condition-')) {
@@ -301,24 +300,27 @@ function convertConditionHandleToSimple(
return handle
}
// Find the condition by ID and generate simple handle
let elseIfIndex = 0
// Find the condition by ID and generate semantic handle
let elseIfCount = 0
for (const condition of conditions) {
const title = condition.title?.toLowerCase()
if (condition.id === conditionId) {
if (title === 'if') {
return 'if'
return `condition-${blockId}-if`
}
if (title === 'else if') {
return `else-if-${elseIfIndex}`
elseIfCount++
return elseIfCount === 1
? `condition-${blockId}-else-if`
: `condition-${blockId}-else-if-${elseIfCount}`
}
if (title === 'else') {
return 'else'
return `condition-${blockId}-else`
}
}
// Count else-ifs as we iterate (for index tracking)
// Count else-ifs as we iterate
if (title === 'else if') {
elseIfIndex++
elseIfCount++
}
}
@@ -327,10 +329,9 @@ function convertConditionHandleToSimple(
}
/**
* Convert internal router handle (router-{uuid}) to simple format (route-0, route-1)
* Uses 0-indexed numbering for routes
* Convert internal router handle (router-{uuid}) to semantic format (router-{blockId}-route-N)
*/
function convertRouterHandleToSimple(handle: string, _blockId: string, block: BlockState): string {
function convertRouterHandleToSemantic(handle: string, blockId: string, block: BlockState): string {
if (!handle.startsWith('router-')) {
return handle
}
@@ -355,10 +356,10 @@ function convertRouterHandleToSimple(handle: string, _blockId: string, block: Bl
return handle
}
// Find the route by ID and generate simple handle (0-indexed)
// Find the route by ID and generate semantic handle (1-indexed)
for (let i = 0; i < routes.length; i++) {
if (routes[i].id === routeId) {
return `route-${i}`
return `router-${blockId}-route-${i + 1}`
}
}
@@ -367,16 +368,15 @@ function convertRouterHandleToSimple(handle: string, _blockId: string, block: Bl
}
/**
* Convert source handle to simple format for condition and router blocks
* Outputs: if, else-if-0, else (for conditions) and route-0, route-1 (for routers)
* Convert source handle to semantic format for condition and router blocks
*/
function convertToSimpleHandle(handle: string, blockId: string, block: BlockState): string {
function convertToSemanticHandle(handle: string, blockId: string, block: BlockState): string {
if (handle.startsWith('condition-') && block.type === 'condition') {
return convertConditionHandleToSimple(handle, blockId, block)
return convertConditionHandleToSemantic(handle, blockId, block)
}
if (handle.startsWith('router-') && block.type === 'router_v2') {
return convertRouterHandleToSimple(handle, blockId, block)
return convertRouterHandleToSemantic(handle, blockId, block)
}
return handle
@@ -400,12 +400,12 @@ function extractConnectionsForBlock(
return undefined
}
// Group by source handle (converting to simple format)
// Group by source handle (converting to semantic format)
for (const edge of outgoingEdges) {
let handle = edge.sourceHandle || 'source'
// Convert internal UUID handles to simple format (if, else-if-0, route-0, etc.)
handle = convertToSimpleHandle(handle, blockId, block)
// Convert internal UUID handles to semantic format
handle = convertToSemanticHandle(handle, blockId, block)
if (!connections[handle]) {
connections[handle] = []

File diff suppressed because it is too large

View File

@@ -156,13 +156,6 @@ export interface CopilotState {
// Message queue for messages sent while another is in progress
messageQueue: QueuedMessage[]
// Stream resumption state
activeStreamId: string | null
isResuming: boolean
// Track if abort was user-initiated (vs browser refresh)
userInitiatedAbort: boolean
}
export interface CopilotActions {
@@ -256,12 +249,6 @@ export interface CopilotActions {
moveUpInQueue: (id: string) => void
sendNow: (id: string) => Promise<void>
clearQueue: () => void
// Stream resumption actions
checkForActiveStream: (chatId: string) => Promise<boolean>
resumeActiveStream: (streamId: string) => Promise<void>
setActiveStreamId: (streamId: string | null) => void
restorePendingDiff: (streamId: string) => Promise<void>
}
export type CopilotStore = CopilotState & CopilotActions

View File

@@ -163,13 +163,12 @@ export const useTerminalConsoleStore = create<ConsoleStore>()(
try {
const errorMessage = String(newEntry.error)
const blockName = newEntry.blockName || 'Unknown Block'
const displayMessage = `${blockName}: ${errorMessage}`
const copilotMessage = `${errorMessage}\n\nError in ${blockName}.\n\nPlease fix this.`
useNotificationStore.getState().addNotification({
level: 'error',
message: displayMessage,
message: errorMessage,
workflowId: entry.workflowId,
action: {
type: 'copilot',

View File

@@ -1,394 +0,0 @@
/**
* @vitest-environment node
*/
import type { Edge } from 'reactflow'
import { beforeEach, describe, expect, it, type Mock, vi } from 'vitest'
import type { BlockState } from '@/stores/workflows/workflow/types'
vi.mock('@/stores/workflows/utils', () => ({
mergeSubblockState: vi.fn(),
}))
import { mergeSubblockState } from '@/stores/workflows/utils'
import { captureLatestEdges, captureLatestSubBlockValues } from './utils'
const mockMergeSubblockState = mergeSubblockState as Mock
describe('captureLatestEdges', () => {
const createEdge = (id: string, source: string, target: string): Edge => ({
id,
source,
target,
})
it('should return edges where blockId is the source', () => {
const edges = [
createEdge('edge-1', 'block-1', 'block-2'),
createEdge('edge-2', 'block-3', 'block-4'),
]
const result = captureLatestEdges(edges, ['block-1'])
expect(result).toEqual([createEdge('edge-1', 'block-1', 'block-2')])
})
it('should return edges where blockId is the target', () => {
const edges = [
createEdge('edge-1', 'block-1', 'block-2'),
createEdge('edge-2', 'block-3', 'block-4'),
]
const result = captureLatestEdges(edges, ['block-2'])
expect(result).toEqual([createEdge('edge-1', 'block-1', 'block-2')])
})
it('should return edges for multiple blocks', () => {
const edges = [
createEdge('edge-1', 'block-1', 'block-2'),
createEdge('edge-2', 'block-3', 'block-4'),
createEdge('edge-3', 'block-2', 'block-5'),
]
const result = captureLatestEdges(edges, ['block-1', 'block-2'])
expect(result).toHaveLength(2)
expect(result).toContainEqual(createEdge('edge-1', 'block-1', 'block-2'))
expect(result).toContainEqual(createEdge('edge-3', 'block-2', 'block-5'))
})
it('should return empty array when no edges match', () => {
const edges = [
createEdge('edge-1', 'block-1', 'block-2'),
createEdge('edge-2', 'block-3', 'block-4'),
]
const result = captureLatestEdges(edges, ['block-99'])
expect(result).toEqual([])
})
it('should return empty array when blockIds is empty', () => {
const edges = [
createEdge('edge-1', 'block-1', 'block-2'),
createEdge('edge-2', 'block-3', 'block-4'),
]
const result = captureLatestEdges(edges, [])
expect(result).toEqual([])
})
it('should return edge when block has both source and target edges', () => {
const edges = [
createEdge('edge-1', 'block-1', 'block-2'),
createEdge('edge-2', 'block-2', 'block-3'),
createEdge('edge-3', 'block-4', 'block-2'),
]
const result = captureLatestEdges(edges, ['block-2'])
expect(result).toHaveLength(3)
expect(result).toContainEqual(createEdge('edge-1', 'block-1', 'block-2'))
expect(result).toContainEqual(createEdge('edge-2', 'block-2', 'block-3'))
expect(result).toContainEqual(createEdge('edge-3', 'block-4', 'block-2'))
})
it('should handle empty edges array', () => {
const result = captureLatestEdges([], ['block-1'])
expect(result).toEqual([])
})
it('should not duplicate edges when block appears in multiple blockIds', () => {
const edges = [createEdge('edge-1', 'block-1', 'block-2')]
const result = captureLatestEdges(edges, ['block-1', 'block-2'])
expect(result).toHaveLength(1)
expect(result).toContainEqual(createEdge('edge-1', 'block-1', 'block-2'))
})
})
describe('captureLatestSubBlockValues', () => {
const workflowId = 'wf-test'
const createBlockState = (
id: string,
subBlocks: Record<string, { id: string; type: string; value: unknown }>
): BlockState =>
({
id,
type: 'function',
name: 'Test Block',
position: { x: 0, y: 0 },
subBlocks: Object.fromEntries(
Object.entries(subBlocks).map(([subId, sb]) => [
subId,
{ id: sb.id, type: sb.type, value: sb.value },
])
),
outputs: {},
enabled: true,
}) as BlockState
beforeEach(() => {
vi.clearAllMocks()
})
it('should capture single block with single subblock value', () => {
const blocks: Record<string, BlockState> = {
'block-1': createBlockState('block-1', {
code: { id: 'code', type: 'code', value: 'console.log("hello")' },
}),
}
mockMergeSubblockState.mockReturnValue(blocks)
const result = captureLatestSubBlockValues(blocks, workflowId, ['block-1'])
expect(result).toEqual({
'block-1': { code: 'console.log("hello")' },
})
})
it('should capture single block with multiple subblock values', () => {
const blocks: Record<string, BlockState> = {
'block-1': createBlockState('block-1', {
code: { id: 'code', type: 'code', value: 'test code' },
model: { id: 'model', type: 'dropdown', value: 'gpt-4' },
temperature: { id: 'temperature', type: 'slider', value: 0.7 },
}),
}
mockMergeSubblockState.mockReturnValue(blocks)
const result = captureLatestSubBlockValues(blocks, workflowId, ['block-1'])
expect(result).toEqual({
'block-1': {
code: 'test code',
model: 'gpt-4',
temperature: 0.7,
},
})
})
it('should capture multiple blocks with values', () => {
const blocks: Record<string, BlockState> = {
'block-1': createBlockState('block-1', {
code: { id: 'code', type: 'code', value: 'code 1' },
}),
'block-2': createBlockState('block-2', {
prompt: { id: 'prompt', type: 'long-input', value: 'hello world' },
}),
}
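// Emulate per-block merging: mergeSubblockState is called once per blockId and returns only that block's state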
mockMergeSubblockState.mockImplementation((_blocks, _wfId, blockId) => {
if (blockId === 'block-1') return { 'block-1': blocks['block-1'] }
if (blockId === 'block-2') return { 'block-2': blocks['block-2'] }
return {}
})
const result = captureLatestSubBlockValues(blocks, workflowId, ['block-1', 'block-2'])
expect(result).toEqual({
'block-1': { code: 'code 1' },
'block-2': { prompt: 'hello world' },
})
})
it('should skip null values', () => {
const blocks: Record<string, BlockState> = {
'block-1': createBlockState('block-1', {
code: { id: 'code', type: 'code', value: 'valid code' },
empty: { id: 'empty', type: 'short-input', value: null },
}),
}
mockMergeSubblockState.mockReturnValue(blocks)
const result = captureLatestSubBlockValues(blocks, workflowId, ['block-1'])
expect(result).toEqual({
'block-1': { code: 'valid code' },
})
expect(result['block-1']).not.toHaveProperty('empty')
})
it('should skip undefined values', () => {
const blocks: Record<string, BlockState> = {
'block-1': createBlockState('block-1', {
code: { id: 'code', type: 'code', value: 'valid code' },
empty: { id: 'empty', type: 'short-input', value: undefined },
}),
}
mockMergeSubblockState.mockReturnValue(blocks)
const result = captureLatestSubBlockValues(blocks, workflowId, ['block-1'])
expect(result).toEqual({
'block-1': { code: 'valid code' },
})
})
it('should return empty object for block with no subBlocks', () => {
const blocks: Record<string, BlockState> = {
'block-1': {
id: 'block-1',
type: 'function',
name: 'Test Block',
position: { x: 0, y: 0 },
subBlocks: {},
outputs: {},
enabled: true,
} as BlockState,
}
mockMergeSubblockState.mockReturnValue(blocks)
const result = captureLatestSubBlockValues(blocks, workflowId, ['block-1'])
expect(result).toEqual({})
})
it('should return empty object for non-existent blockId', () => {
const blocks: Record<string, BlockState> = {
'block-1': createBlockState('block-1', {
code: { id: 'code', type: 'code', value: 'test' },
}),
}
mockMergeSubblockState.mockReturnValue({})
const result = captureLatestSubBlockValues(blocks, workflowId, ['non-existent'])
expect(result).toEqual({})
})
it('should return empty object when blockIds is empty', () => {
const blocks: Record<string, BlockState> = {
'block-1': createBlockState('block-1', {
code: { id: 'code', type: 'code', value: 'test' },
}),
}
const result = captureLatestSubBlockValues(blocks, workflowId, [])
expect(result).toEqual({})
expect(mockMergeSubblockState).not.toHaveBeenCalled()
})
it('should handle various value types (string, number, array)', () => {
const blocks: Record<string, BlockState> = {
'block-1': createBlockState('block-1', {
text: { id: 'text', type: 'short-input', value: 'string value' },
number: { id: 'number', type: 'slider', value: 42 },
array: {
id: 'array',
type: 'table',
value: [
['a', 'b'],
['c', 'd'],
],
},
}),
}
mockMergeSubblockState.mockReturnValue(blocks)
const result = captureLatestSubBlockValues(blocks, workflowId, ['block-1'])
expect(result).toEqual({
'block-1': {
text: 'string value',
number: 42,
array: [
['a', 'b'],
['c', 'd'],
],
},
})
})
it('should only capture values for blockIds in the list', () => {
const blocks: Record<string, BlockState> = {
'block-1': createBlockState('block-1', {
code: { id: 'code', type: 'code', value: 'code 1' },
}),
'block-2': createBlockState('block-2', {
code: { id: 'code', type: 'code', value: 'code 2' },
}),
'block-3': createBlockState('block-3', {
code: { id: 'code', type: 'code', value: 'code 3' },
}),
}
mockMergeSubblockState.mockImplementation((_blocks, _wfId, blockId) => {
if (blockId === 'block-1') return { 'block-1': blocks['block-1'] }
if (blockId === 'block-3') return { 'block-3': blocks['block-3'] }
return {}
})
const result = captureLatestSubBlockValues(blocks, workflowId, ['block-1', 'block-3'])
expect(result).toEqual({
'block-1': { code: 'code 1' },
'block-3': { code: 'code 3' },
})
expect(result).not.toHaveProperty('block-2')
})
it('should handle block without subBlocks property', () => {
const blocks: Record<string, BlockState> = {
'block-1': {
id: 'block-1',
type: 'function',
name: 'Test Block',
position: { x: 0, y: 0 },
outputs: {},
enabled: true,
} as BlockState,
}
mockMergeSubblockState.mockReturnValue(blocks)
const result = captureLatestSubBlockValues(blocks, workflowId, ['block-1'])
expect(result).toEqual({})
})
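// Falsy-but-defined values must be preserved: capture drops only null/undefined, never '' or 0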
it('should handle empty string values', () => {
const blocks: Record<string, BlockState> = {
'block-1': createBlockState('block-1', {
code: { id: 'code', type: 'code', value: '' },
}),
}
mockMergeSubblockState.mockReturnValue(blocks)
const result = captureLatestSubBlockValues(blocks, workflowId, ['block-1'])
expect(result).toEqual({
'block-1': { code: '' },
})
})
it('should handle zero numeric values', () => {
const blocks: Record<string, BlockState> = {
'block-1': createBlockState('block-1', {
temperature: { id: 'temperature', type: 'slider', value: 0 },
}),
}
mockMergeSubblockState.mockReturnValue(blocks)
const result = captureLatestSubBlockValues(blocks, workflowId, ['block-1'])
expect(result).toEqual({
'block-1': { temperature: 0 },
})
})
})

View File

@@ -1,4 +1,3 @@
-import type { Edge } from 'reactflow'
 import { UNDO_REDO_OPERATIONS } from '@/socket/constants'
 import type {
   BatchAddBlocksOperation,
@@ -10,8 +9,6 @@ import type {
   Operation,
   OperationEntry,
 } from '@/stores/undo-redo/types'
-import { mergeSubblockState } from '@/stores/workflows/utils'
-import type { BlockState } from '@/stores/workflows/workflow/types'
 
 export function createOperationEntry(operation: Operation, inverse: Operation): OperationEntry {
   return {
@@ -173,31 +170,3 @@ export function createInverseOperation(operation: Operation): Operation {
     }
   }
 }
-
-export function captureLatestEdges(edges: Edge[], blockIds: string[]): Edge[] {
-  return edges.filter((e) => blockIds.includes(e.source) || blockIds.includes(e.target))
-}
-
-export function captureLatestSubBlockValues(
-  blocks: Record<string, BlockState>,
-  workflowId: string,
-  blockIds: string[]
-): Record<string, Record<string, unknown>> {
-  const values: Record<string, Record<string, unknown>> = {}
-  blockIds.forEach((blockId) => {
-    const merged = mergeSubblockState(blocks, workflowId, blockId)
-    const block = merged[blockId]
-    if (block?.subBlocks) {
-      const blockValues: Record<string, unknown> = {}
-      Object.entries(block.subBlocks).forEach(([subBlockId, subBlock]) => {
-        if (subBlock.value !== null && subBlock.value !== undefined) {
-          blockValues[subBlockId] = subBlock.value
-        }
-      })
-      if (Object.keys(blockValues).length > 0) {
-        values[blockId] = blockValues
-      }
-    }
-  })
-  return values
-}

View File

@@ -20,50 +20,6 @@ import {
   persistWorkflowStateToServer,
 } from './utils'
 
-/** Get the active stream ID from copilot store (lazy import to avoid circular deps) */
-async function getActiveStreamId(): Promise<string | null> {
-  try {
-    const { useCopilotStore } = await import('@/stores/panel/copilot/store')
-    return useCopilotStore.getState().activeStreamId
-  } catch {
-    return null
-  }
-}
-
-/** Save pending diff to server (Redis) via API */
-async function savePendingDiffToServer(
-  streamId: string,
-  pendingDiff: {
-    toolCallId: string
-    baselineWorkflow: unknown
-    proposedWorkflow: unknown
-    diffAnalysis: unknown
-  }
-): Promise<void> {
-  try {
-    await fetch(`/api/copilot/stream/${streamId}/pending-diff`, {
-      method: 'POST',
-      headers: { 'Content-Type': 'application/json' },
-      credentials: 'include',
-      body: JSON.stringify({ pendingDiff }),
-    })
-  } catch (err) {
-    logger.warn('Failed to save pending diff to server', { error: err })
-  }
-}
-
-/** Clear pending diff from server (Redis) via API */
-async function clearPendingDiffFromServer(streamId: string): Promise<void> {
-  try {
-    await fetch(`/api/copilot/stream/${streamId}/pending-diff`, {
-      method: 'DELETE',
-      credentials: 'include',
-    })
-  } catch {
-    // Ignore errors - not critical
-  }
-}
 
 const logger = createLogger('WorkflowDiffStore')
 const diffEngine = new WorkflowDiffEngine()
@@ -213,35 +169,32 @@ export const useWorkflowDiffStore = create<WorkflowDiffState & WorkflowDiffActio
         edges: candidateState.edges?.length || 0,
       })
 
-      // Persist pending diff to Redis for resumption on page refresh
-      getActiveStreamId().then((streamId) => {
-        if (streamId) {
-          findLatestEditWorkflowToolCallId().then((toolCallId) => {
-            if (toolCallId) {
-              savePendingDiffToServer(streamId, {
-                toolCallId,
-                baselineWorkflow: baselineWorkflow,
-                proposedWorkflow: candidateState,
-                diffAnalysis: diffAnalysisResult,
-              })
-            }
-          })
-        }
-      })
+      // BACKGROUND: Broadcast and persist without blocking
+      // These operations happen after the UI has already updated
+      const cleanState = stripWorkflowDiffMarkers(cloneWorkflowState(candidateState))
+
+      // Fire and forget: broadcast to other users (don't await)
+      enqueueReplaceWorkflowState({
+        workflowId: activeWorkflowId,
+        state: cleanState,
+      }).catch((error) => {
+        logger.warn('Failed to broadcast workflow state (non-blocking)', { error })
+      })
 
-      // NOTE: We do NOT broadcast to other users here (to prevent socket errors on refresh).
-      // But we DO persist to database WITH diff markers so the proposed state survives page refresh
-      // and the diff UI can be restored. Final broadcast (without markers) happens when user accepts.
-      persistWorkflowStateToServer(activeWorkflowId, candidateState, { preserveDiffMarkers: true })
+      // Fire and forget: persist to database (don't await)
+      persistWorkflowStateToServer(activeWorkflowId, candidateState)
         .then((persisted) => {
           if (!persisted) {
-            logger.warn('Failed to persist diff preview state')
+            logger.warn('Failed to persist copilot edits (state already applied locally)')
+            // Don't revert - user can retry or state will sync on next save
           } else {
-            logger.info('Diff preview state persisted with markers', { workflowId: activeWorkflowId })
+            logger.info('Workflow diff persisted to database', {
+              workflowId: activeWorkflowId,
+            })
           }
         })
         .catch((error) => {
-          logger.warn('Failed to persist diff preview state', { error })
+          logger.warn('Failed to persist workflow state (non-blocking)', { error })
        })
 
       // Emit event for undo/redo recording
@@ -259,37 +212,6 @@ export const useWorkflowDiffStore = create<WorkflowDiffState & WorkflowDiffActio
       }
     },
 
-    restoreDiffWithBaseline: (baselineWorkflow, proposedWorkflow, diffAnalysis) => {
-      const activeWorkflowId = useWorkflowRegistry.getState().activeWorkflowId
-      if (!activeWorkflowId) {
-        logger.error('Cannot restore diff without an active workflow')
-        return
-      }
-
-      logger.info('Restoring diff with baseline', {
-        workflowId: activeWorkflowId,
-        hasBaseline: !!baselineWorkflow,
-        newBlocks: diffAnalysis.new_blocks?.length || 0,
-        editedBlocks: diffAnalysis.edited_blocks?.length || 0,
-      })
-
-      // Set the diff state with the provided baseline
-      batchedUpdate({
-        hasActiveDiff: true,
-        isShowingDiff: true,
-        isDiffReady: true,
-        baselineWorkflow: baselineWorkflow,
-        baselineWorkflowId: activeWorkflowId,
-        diffAnalysis: diffAnalysis,
-        diffMetadata: null,
-        diffError: null,
-      })
-
-      // The proposed workflow should already be applied (blocks have is_diff markers)
-      // Just re-apply the markers to ensure they're visible
-      setTimeout(() => get().reapplyDiffMarkers(), 0)
-    },
-
     clearDiff: ({ restoreBaseline = true } = {}) => {
       const { baselineWorkflow, baselineWorkflowId } = get()
       const activeWorkflowId = useWorkflowRegistry.getState().activeWorkflowId
@@ -370,13 +292,6 @@
       const baselineForUndo = get().baselineWorkflow
       const triggerMessageId = get()._triggerMessageId
 
-      // Clear pending diff from Redis (fire and forget)
-      getActiveStreamId().then((streamId) => {
-        if (streamId) {
-          clearPendingDiffFromServer(streamId)
-        }
-      })
-
       // Clear diff state FIRST to prevent flash of colors
       // This must happen synchronously before applying the cleaned state
       set({
@@ -397,32 +312,6 @@
       // Now apply the cleaned state
       applyWorkflowStateToStores(activeWorkflowId, stateToApply)
 
-      // Broadcast and persist the accepted changes
-      const cleanStateForBroadcast = stripWorkflowDiffMarkers(cloneWorkflowState(stateToApply))
-
-      // Fire and forget: broadcast to other users
-      enqueueReplaceWorkflowState({
-        workflowId: activeWorkflowId,
-        state: cleanStateForBroadcast,
-      }).catch((error) => {
-        logger.warn('Failed to broadcast accepted workflow state', { error })
-      })
-
-      // Fire and forget: persist to database
-      persistWorkflowStateToServer(activeWorkflowId, stateToApply)
-        .then((persisted) => {
-          if (!persisted) {
-            logger.warn('Failed to persist accepted workflow changes')
-          } else {
-            logger.info('Accepted workflow changes persisted to database', {
-              workflowId: activeWorkflowId,
-            })
-          }
-        })
-        .catch((error) => {
-          logger.warn('Failed to persist accepted workflow state', { error })
-        })
-
       // Emit event for undo/redo recording (unless we're in an undo/redo operation)
       if (!(window as any).__skipDiffRecording) {
         window.dispatchEvent(
@@ -467,19 +356,8 @@
       const activeWorkflowId = useWorkflowRegistry.getState().activeWorkflowId
 
       if (!baselineWorkflow || !baselineWorkflowId) {
-        logger.warn('Reject called without baseline workflow - cannot revert changes')
-        // This can happen if the diff was restored from markers after page refresh
-        // without a saved baseline. Just clear the diff markers without reverting.
+        logger.warn('Reject called without baseline workflow')
        get().clearDiff({ restoreBaseline: false })
-
-        // Show a notification to the user
-        try {
-          const { useNotificationStore } = await import('@/stores/notifications')
-          useNotificationStore.getState().addNotification({
-            level: 'info',
-            message:
-              'Cannot revert: The original workflow state was lost after page refresh. Diff markers have been cleared.',
-          })
-        } catch {}
        return
      }
@@ -505,13 +383,6 @@
       })
 
       const afterReject = cloneWorkflowState(baselineWorkflow)
 
-      // Clear pending diff from Redis (fire and forget)
-      getActiveStreamId().then((streamId) => {
-        if (streamId) {
-          clearPendingDiffFromServer(streamId)
-        }
-      })
-
       // Clear diff state FIRST for instant UI feedback
       set({
         hasActiveDiff: false,
@@ -655,94 +526,6 @@
         logger.info('Re-applied diff markers to workflow blocks')
       }
     },
-
-    restoreDiffFromMarkers: () => {
-      // Check if the workflow has any blocks with is_diff markers
-      // If so, restore the diff store state to show the diff view
-      const { hasActiveDiff } = get()
-      if (hasActiveDiff) {
-        // Already have an active diff
-        return
-      }
-
-      const workflowStore = useWorkflowStore.getState()
-      const blocks = workflowStore.blocks
-
-      const newBlocks: string[] = []
-      const editedBlocks: string[] = []
-      const fieldDiffs: Record<string, { changed_fields: string[] }> = {}
-
-      Object.entries(blocks).forEach(([blockId, block]) => {
-        const isDiff = (block as any).is_diff
-        if (isDiff === 'new') {
-          newBlocks.push(blockId)
-        } else if (isDiff === 'edited') {
-          editedBlocks.push(blockId)
-          // Check for field diffs
-          const blockFieldDiffs = (block as any).field_diffs
-          if (blockFieldDiffs) {
-            fieldDiffs[blockId] = blockFieldDiffs
-          } else {
-            // Check subBlocks for is_diff markers
-            const changedFields: string[] = []
-            Object.entries((block as any).subBlocks || {}).forEach(
-              ([fieldId, subBlock]: [string, any]) => {
-                if (subBlock?.is_diff === 'changed') {
-                  changedFields.push(fieldId)
-                }
-              }
-            )
-            if (changedFields.length > 0) {
-              fieldDiffs[blockId] = { changed_fields: changedFields }
-            }
-          }
-        }
-      })
-
-      if (newBlocks.length === 0 && editedBlocks.length === 0) {
-        // No diff markers found
-        return
-      }
-
-      logger.info('Restoring diff state from markers', {
-        newBlocks: newBlocks.length,
-        editedBlocks: editedBlocks.length,
-      })
-
-      // Restore the diff state
-      // Note: We don't have the baseline, so reject will just clear the diff
-      // Add unchanged_fields to satisfy the type (we don't track unchanged fields on restore)
-      const fieldDiffsWithUnchanged: Record<
-        string,
-        { changed_fields: string[]; unchanged_fields: string[] }
-      > = {}
-      Object.entries(fieldDiffs).forEach(([blockId, diff]) => {
-        fieldDiffsWithUnchanged[blockId] = {
-          changed_fields: diff.changed_fields,
-          unchanged_fields: [],
-        }
-      })
-
-      const diffAnalysis = {
-        new_blocks: newBlocks,
-        edited_blocks: editedBlocks,
-        deleted_blocks: [],
-        field_diffs: fieldDiffsWithUnchanged,
-      }
-
-      const activeWorkflowId = useWorkflowRegistry.getState().activeWorkflowId
-      batchedUpdate({
-        hasActiveDiff: true,
-        isShowingDiff: true,
-        isDiffReady: true,
-        baselineWorkflow: null, // We don't have baseline on restore from markers alone
-        baselineWorkflowId: activeWorkflowId, // Set the workflow ID for later baseline restoration
-        diffAnalysis,
-        diffMetadata: null,
-        diffError: null,
-      })
-    },
   }
 },
 { name: 'workflow-diff-store' }

Some files were not shown because too many files have changed in this diff.