Compare commits

..

50 Commits

Author SHA1 Message Date
Waleed
0ce0f98aa5 v0.5.65: gemini updates, textract integration, ui updates (#2909)
* fix(google): wrap primitive tool responses for Gemini API compatibility (#2900)

* fix(canonical): copilot path + update parent (#2901)

* fix(rss): add top-level title, link, pubDate fields to RSS trigger output (#2902)

* fix(rss): add top-level title, link, pubDate fields to RSS trigger output

* fix(imap): add top-level fields to IMAP trigger output

* improvement(browseruse): add profile id param (#2903)

* improvement(browseruse): add profile id param

* make request a stub since we have directExec

* improvement(executor): upgraded abort controller to handle aborts for loops and parallels (#2880)

* improvement(executor): upgraded abort controller to handle aborts for loops and parallels

* comments

* improvement(files): update execution for passing base64 strings (#2906)

* progress

* improvement(execution): update execution for passing base64 strings

* fix types

* cleanup comments

* path security vuln

* reject promise correctly

* fix redirect case

* remove proxy routes

* fix tests

* use ipaddr

* feat(tools): added textract, added v2 for mistral, updated tag dropdown (#2904)

* feat(tools): added textract

* cleanup

* ack pr comments

* reorder

* removed upload for textract async version

* fix additional fields dropdown in editor, update parser to leave validation to be done on the server

* added mistral v2, files v2, and finalized textract

* updated the rest of the old file patterns, updated mistral outputs for v2

* updated tag dropdown to parse non-operation fields as well

* updated extension finder

* cleanup

* added description for inputs to workflow

* use helper for internal route check

* fix tag dropdown merge conflict change

* remove duplicate code

---------

Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>

* fix(ui): change add inputs button to match output selector (#2907)

* fix(canvas): removed invite to workspace from canvas popover (#2908)

* fix(canvas): removed invite to workspace

* removed unused props

* fix(copilot): legacy tool display names (#2911)

* fix(a2a): canonical merge  (#2912)

* fix canonical merge

* fix empty array case

* fix(change-detection): copilot diffs have extra field (#2913)

* improvement(logs): improved logs ui bugs, added subflow disable UI (#2910)

* improvement(logs): improved logs ui bugs, added subflow disable UI

* added duplicate to action bar for subflows

* feat(broadcast): email v0.5 (#2905)

---------

Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
Co-authored-by: Emir Karabeg <78010029+emir-karabeg@users.noreply.github.com>
2026-01-20 23:54:55 -08:00
Emir Karabeg
294b168ed9 feat(broadcast): email v0.5 (#2905) 2026-01-20 23:42:48 -08:00
Waleed
0dc2c1fe0d improvement(logs): improved logs ui bugs, added subflow disable UI (#2910)
* improvement(logs): improved logs ui bugs, added subflow disable UI

* added duplicate to action bar for subflows
2026-01-20 23:13:05 -08:00
Vikhyath Mondreti
fb90c4e9b1 fix(change-detection): copilot diffs have extra field (#2913) 2026-01-20 22:04:08 -08:00
Vikhyath Mondreti
0af96d06c6 fix(a2a): canonical merge (#2912)
* fix canonical merge

* fix empty array case
2026-01-20 21:58:13 -08:00
Vikhyath Mondreti
1d450578c8 fix(copilot): legacy tool display names (#2911) 2026-01-20 21:16:48 -08:00
Waleed
c6d408c65b fix(canvas): removed invite to workspace from canvas popover (#2908)
* fix(canvas): removed invite to workspace

* removed unused props
2026-01-20 20:29:53 -08:00
Waleed
16716ea26a fix(ui): change add inputs button to match output selector (#2907) 2026-01-20 19:24:59 -08:00
Waleed
563098ca0a feat(tools): added textract, added v2 for mistral, updated tag dropdown (#2904)
* feat(tools): added textract

* cleanup

* ack pr comments

* reorder

* removed upload for textract async version

* fix additional fields dropdown in editor, update parser to leave validation to be done on the server

* added mistral v2, files v2, and finalized textract

* updated the rest of the old file patterns, updated mistral outputs for v2

* updated tag dropdown to parse non-operation fields as well

* updated extension finder

* cleanup

* added description for inputs to workflow

* use helper for internal route check

* fix tag dropdown merge conflict change

* remove duplicate code

---------

Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
2026-01-20 18:41:26 -08:00
Vikhyath Mondreti
1f1f015031 improvement(files): update execution for passing base64 strings (#2906)
* progress

* improvement(execution): update execution for passing base64 strings

* fix types

* cleanup comments

* path security vuln

* reject promise correctly

* fix redirect case

* remove proxy routes

* fix tests

* use ipaddr
2026-01-20 17:49:00 -08:00
Waleed
4afb245fa2 improvement(executor): upgraded abort controller to handle aborts for loops and parallels (#2880)
* improvement(executor): upgraded abort controller to handle aborts for loops and parallels

* comments
2026-01-20 15:40:37 -08:00
Vikhyath Mondreti
8344d68ca8 improvement(browseruse): add profile id param (#2903)
* improvement(browseruse): add profile id param

* make request a stub since we have directExec
2026-01-20 11:08:47 -08:00
Waleed
a26a1a9737 fix(rss): add top-level title, link, pubDate fields to RSS trigger output (#2902)
* fix(rss): add top-level title, link, pubDate fields to RSS trigger output

* fix(imap): add top-level fields to IMAP trigger output
2026-01-20 10:06:13 -08:00
Vikhyath Mondreti
689037a300 fix(canonical): copilot path + update parent (#2901) 2026-01-20 09:43:41 -08:00
Waleed
07f0c01dc4 fix(google): wrap primitive tool responses for Gemini API compatibility (#2900) 2026-01-20 09:27:45 -08:00
Waleed
dff1c9d083 v0.5.64: unsubscribe, search improvements, metrics, additional SSO configuration 2026-01-20 00:34:11 -08:00
Waleed
e4ad31bb6b fix(kb): align bulk chunk operation with API response (#2899)
* fix(kb): align bulk chunk operation with API response

* fix(kb): skip local state update for failed chunks

* fix(kb): correct errors type and refresh on partial failure
2026-01-20 00:24:50 -08:00
Waleed
84691fc873 improvement(modal): fixed popover issue in custom tools modal, removed the ability to update if no changes made (#2897)
* improvement(modal): fixed popover issue in custom tools modal, removed the ability to update if no changes made

* improvement(modal): fixed popover issue in custom tools modal, removed the ability to update if no changes made

* popover fixes, color picker keyboard nav, code simplification

* color standardization

* fix color picker

* set discard alert state when closing modal
2026-01-19 23:52:07 -08:00
Emir Karabeg
2daf34386e fix(copilot): ui/ux (#2891)
* feat(claude): added rules

* fix(copilot): chat loading; refactor(copilot): components, utils, hooks

* fix(copilot): options selection strikethrough

* fix(copilot): options render inside thinking

* fix(copilot): checkpoints, user-input; improvement(code): colors

* fix(copilot): scrolling, tool-call truncation, thinking ui

* fix(copilot): tool call spacing and shimmer/actions on previous messages

* improvement(copilot): queue

* addressed comments
2026-01-19 23:23:21 -08:00
Waleed
ac991d4b54 fix(sso): removed provider specific OIDC logic from SSO registration & deregistration scripts (#2896)
* fix(sso): updated registration & deregistration script for explicit support for Entra ID

* cleanup

* ack PR comment

* ack PR comment

* tested edge cases, ack'd PR comments

* remove trailing slash
2026-01-19 19:23:50 -08:00
Vikhyath Mondreti
b09f683072 v0.5.63: ui and performance improvements, more google tools 2026-01-18 15:22:42 -08:00
Vikhyath Mondreti
a8bb0db660 v0.5.62: webhook bug fixes, seeding default subblock values, block selection fixes 2026-01-16 20:27:06 -08:00
Waleed
af82820a28 v0.5.61: webhook improvements, workflow controls, react query for deployment status, chat fixes, reducto and pulse OCR, linear fixes 2026-01-16 18:06:23 -08:00
Waleed
4372841797 v0.5.60: invitation flow improvements, chat fixes, a2a improvements, additional copilot actions 2026-01-15 00:02:18 -08:00
Waleed
5e8c843241 v0.5.59: a2a support, documentation 2026-01-13 13:21:21 -08:00
Waleed
7bf3d73ee6 v0.5.58: export folders, new tools, permissions groups enhancements 2026-01-13 00:56:59 -08:00
Vikhyath Mondreti
7ffc11a738 v0.5.57: subagents, context menu improvements, bug fixes 2026-01-11 11:38:40 -08:00
Waleed
be578e2ed7 v0.5.56: batch operations, access control and permission groups, billing fixes 2026-01-10 00:31:34 -08:00
Waleed
f415e5edc4 v0.5.55: polling groups, bedrock provider, devcontainer fixes, workflow preview enhancements 2026-01-08 23:36:56 -08:00
Waleed
13a6e6c3fa v0.5.54: seo, model blacklist, helm chart updates, fireflies integration, autoconnect improvements, billing fixes 2026-01-07 16:09:45 -08:00
Waleed
f5ab7f21ae v0.5.53: hotkey improvements, added redis fallback, fixes for workflow tool 2026-01-06 23:34:52 -08:00
Waleed
bfb6fffe38 v0.5.52: new port-based router block, combobox expression and variable support 2026-01-06 16:14:10 -08:00
Waleed
4fbec0a43f v0.5.51: triggers, kb, condition block improvements, supabase and grain integration updates 2026-01-06 14:26:46 -08:00
Waleed
585f5e365b v0.5.50: import improvements, ui upgrades, kb styling and performance improvements 2026-01-05 00:35:55 -08:00
Waleed
3792bdd252 v0.5.49: hitl improvements, new email styles, imap trigger, logs context menu (#2672)
* feat(logs-context-menu): consolidated logs utils and types, added logs record context menu (#2659)

* feat(email): welcome email; improvement(emails): ui/ux (#2658)

* feat(email): welcome email; improvement(emails): ui/ux

* improvement(emails): links, accounts, preview

* refactor(emails): file structure and wrapper components

* added envvar for personal emails sent, added isHosted gate

* fixed failing tests, added env mock

* fix: removed comment

---------

Co-authored-by: waleed <walif6@gmail.com>

* fix(logging): hitl + trigger dev crash protection (#2664)

* hitl gaps

* deal with trigger worker crashes

* cleanup import structure

* feat(imap): added support for imap trigger (#2663)

* feat(tools): added support for imap trigger

* feat(imap): added parity, tested

* ack PR comments

* final cleanup

* feat(i18n): update translations (#2665)

Co-authored-by: waleedlatif1 <waleedlatif1@users.noreply.github.com>

* fix(grain): updated grain trigger to auto-establish trigger (#2666)

Co-authored-by: aadamgough <adam@sim.ai>

* feat(admin): routes to manage deployments (#2667)

* feat(admin): routes to manage deployments

* fix naming of deployed by

* feat(time-picker): added timepicker emcn component, added to playground, added searchable prop for dropdown, added more timezones for schedule, updated license and notice date (#2668)

* feat(time-picker): added timepicker emcn component, added to playground, added searchable prop for dropdown, added more timezones for schedule, updated license and notice date

* removed unused params, cleaned up redundant utils

* improvement(invite): aligned styling (#2669)

* improvement(invite): aligned with rest of app

* fix(invite): error handling

* fix: addressed comments

---------

Co-authored-by: Emir Karabeg <78010029+emir-karabeg@users.noreply.github.com>
Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: waleedlatif1 <waleedlatif1@users.noreply.github.com>
Co-authored-by: Adam Gough <77861281+aadamgough@users.noreply.github.com>
Co-authored-by: aadamgough <adam@sim.ai>
2026-01-03 13:19:18 -08:00
Waleed
eb5d1f3e5b v0.5.48: copy-paste workflow blocks, docs updates, mcp tool fixes 2025-12-31 18:00:04 -08:00
Waleed
54ab82c8dd v0.5.47: deploy workflow as mcp, kb chunks tokenizer, UI improvements, jira service management tools 2025-12-30 23:18:58 -08:00
Waleed
f895bf469b v0.5.46: build improvements, greptile, light mode improvements 2025-12-29 02:17:52 -08:00
Waleed
dd3209af06 v0.5.45: light mode fixes, realtime usage indicator, docker build improvements 2025-12-27 19:57:42 -08:00
Waleed
b6ba3b50a7 v0.5.44: keyboard shortcuts, autolayout, light mode, byok, testing improvements 2025-12-26 21:25:19 -08:00
Waleed
b304233062 v0.5.43: export logs, circleback, grain, vertex, code hygiene, schedule improvements 2025-12-23 19:19:18 -08:00
Vikhyath Mondreti
57e4b49bd6 v0.5.42: fix memory migration 2025-12-23 01:24:54 -08:00
Vikhyath Mondreti
e12dd204ed v0.5.41: memory fixes, copilot improvements, knowledgebase improvements, LLM providers standardization 2025-12-23 00:15:18 -08:00
Vikhyath Mondreti
3d9d9cbc54 v0.5.40: supabase ops to allow non-public schemas, jira uuid 2025-12-21 22:28:05 -08:00
Waleed
0f4ec962ad v0.5.39: notion, workflow variables fixes 2025-12-20 20:44:00 -08:00
Waleed
4827866f9a v0.5.38: snap to grid, copilot ux improvements, billing line items 2025-12-20 17:24:38 -08:00
Waleed
3e697d9ed9 v0.5.37: redaction utils consolidation, logs updates, autoconnect improvements, additional kb tag types 2025-12-19 22:31:55 -08:00
Martin Yankov
4431a1a484 fix(helm): add custom egress rules to realtime network policy (#2481)
The realtime service network policy was missing the custom egress rules section
that allows configuration of additional egress rules via values.yaml. This caused
the realtime pods to be unable to connect to external databases (e.g., PostgreSQL
on port 5432) when using external database configurations.

The app network policy already had this section, but the realtime network policy
was missing it, creating an inconsistency and preventing the realtime service
from accessing external databases configured via networkPolicy.egress values.

This fix adds the same custom egress rules template section to the realtime
network policy, matching the app network policy behavior and allowing users to
configure database connectivity via values.yaml.
2025-12-19 18:59:08 -08:00
Waleed
4d1a9a3f22 v0.5.36: hitl improvements, opengraph, slack fixes, one-click unsubscribe, auth checks, new db indexes 2025-12-19 01:27:49 -08:00
Vikhyath Mondreti
eb07a080fb v0.5.35: helm updates, copilot improvements, 404 for docs, salesforce fixes, subflow resize clamping 2025-12-18 16:23:19 -08:00
287 changed files with 9324 additions and 8060 deletions

View File

@@ -0,0 +1,35 @@
---
paths:
- "apps/sim/components/emcn/**"
---
# EMCN Components
Import from `@/components/emcn`, never from subpaths (except CSS files).
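A hedged sketch of the barrel-import rule (assuming the barrel re-exports a `Button` component), in the same ✓/✗ style used elsewhere in these rules:
```tsx
// ✓ Good - import from the emcn barrel
import { Button } from '@/components/emcn'
// ✗ Bad - subpath import (only CSS files may be imported from subpaths)
import { Button } from '@/components/emcn/button'
```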
## CVA vs Direct Styles
**Use CVA when:** 2+ variants (primary/secondary, sm/md/lg)
```tsx
const buttonVariants = cva('base-classes', {
variants: { variant: { default: '...', primary: '...' } }
})
export { Button, buttonVariants }
```
**Use direct className when:** Single consistent style, no variations
```tsx
function Label({ className, ...props }) {
return <Primitive className={cn('style-classes', className)} {...props} />
}
```
## Rules
- Use Radix UI primitives for accessibility
- Export component and variants (if using CVA)
- TSDoc with usage examples
- Consistent tokens: `font-medium`, `text-[12px]`, `rounded-[4px]`
- `transition-colors` for hover states

13
.claude/rules/global.md Normal file
View File

@@ -0,0 +1,13 @@
# Global Standards
## Logging
Import `createLogger` from `sim/logger`. Use `logger.info`, `logger.warn`, `logger.error` instead of `console.log`.
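A minimal sketch of this rule, assuming the `@sim/logger` package specifier used elsewhere in this changeset and an illustrative logger name:
```typescript
import { createLogger } from '@sim/logger'

// Name the logger after the module so log lines are easy to trace
const logger = createLogger('WorkflowList')

logger.info('Workflows loaded', { count: 12 })
logger.warn('Workflow list is empty')
```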
## Comments
Use TSDoc for documentation. No `====` separators. No non-TSDoc comments.
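An illustrative TSDoc block (the function itself is hypothetical):
```typescript
/**
 * Returns the display label for a workflow.
 *
 * @param name - Raw workflow name
 * @returns Trimmed name, or a fallback label when the name is empty
 */
function getWorkflowLabel(name: string): string {
  return name.trim() || 'Untitled workflow'
}
```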
## Styling
Never update global styles. Keep all styling local to components.
## Package Manager
Use `bun` and `bunx`, not `npm` and `npx`.

View File

@@ -0,0 +1,56 @@
---
paths:
- "apps/sim/**"
---
# Sim App Architecture
## Core Principles
1. **Single Responsibility**: Each component, hook, store has one clear purpose
2. **Composition Over Complexity**: Break down complex logic into smaller pieces
3. **Type Safety First**: TypeScript interfaces for all props, state, return types
4. **Predictable State**: Zustand for global state, useState for UI-only concerns
## Root-Level Structure
```
apps/sim/
├── app/ # Next.js app router (pages, API routes)
├── blocks/ # Block definitions and registry
├── components/ # Shared UI (emcn/, ui/)
├── executor/ # Workflow execution engine
├── hooks/ # Shared hooks (queries/, selectors/)
├── lib/ # App-wide utilities
├── providers/ # LLM provider integrations
├── stores/ # Zustand stores
├── tools/ # Tool definitions
└── triggers/ # Trigger definitions
```
## Feature Organization
Features live under `app/workspace/[workspaceId]/`:
```
feature/
├── components/ # Feature components
├── hooks/ # Feature-scoped hooks
├── utils/ # Feature-scoped utilities (2+ consumers)
├── feature.tsx # Main component
└── page.tsx # Next.js page entry
```
## Naming Conventions
- **Components**: PascalCase (`WorkflowList`)
- **Hooks**: `use` prefix (`useWorkflowOperations`)
- **Files**: kebab-case (`workflow-list.tsx`)
- **Stores**: `stores/feature/store.ts`
- **Constants**: SCREAMING_SNAKE_CASE
- **Interfaces**: PascalCase with suffix (`WorkflowListProps`)
## Utils Rules
- **Never create `utils.ts` for single consumer** - inline it
- **Create `utils.ts` when** 2+ files need the same helper
- **Check existing sources** before duplicating (`lib/` has many utilities)
- **Location**: `lib/` (app-wide) → `feature/utils/` (feature-scoped) → inline (single-use)

View File

@@ -0,0 +1,48 @@
---
paths:
- "apps/sim/**/*.tsx"
---
# Component Patterns
## Structure Order
```typescript
'use client' // Only if using hooks
// Imports (external → internal)
// Constants at module level
const CONFIG = { SPACING: 8 } as const
// Props interface
interface ComponentProps {
requiredProp: string
optionalProp?: boolean
}
export function Component({ requiredProp, optionalProp = false }: ComponentProps) {
// a. Refs
// b. External hooks (useParams, useRouter)
// c. Store hooks
// d. Custom hooks
// e. Local state
// f. useMemo
// g. useCallback
// h. useEffect
// i. Return JSX
}
```
## Rules
1. `'use client'` only when using React hooks
2. Always define props interface
3. Extract constants with `as const`
4. Semantic HTML (`aside`, `nav`, `article`)
5. Optional chain callbacks: `onAction?.(id)`
## Component Extraction
**Extract when:** 50+ lines, used in 2+ files, or has own state/logic
**Keep inline when:** < 10 lines, single use, purely presentational

View File

@@ -0,0 +1,55 @@
---
paths:
- "apps/sim/**/use-*.ts"
- "apps/sim/**/hooks/**/*.ts"
---
# Hook Patterns
## Structure
```typescript
interface UseFeatureProps {
id: string
onSuccess?: (result: Result) => void
}
export function useFeature({ id, onSuccess }: UseFeatureProps) {
// 1. Refs for stable dependencies
const idRef = useRef(id)
const onSuccessRef = useRef(onSuccess)
// 2. State
const [data, setData] = useState<Data | null>(null)
const [isLoading, setIsLoading] = useState(false)
// 3. Sync refs
useEffect(() => {
idRef.current = id
onSuccessRef.current = onSuccess
}, [id, onSuccess])
// 4. Operations (useCallback with empty deps when using refs)
const fetchData = useCallback(async () => {
setIsLoading(true)
try {
const result = await fetch(`/api/${idRef.current}`).then(r => r.json())
setData(result)
onSuccessRef.current?.(result)
} finally {
setIsLoading(false)
}
}, [])
return { data, isLoading, fetchData }
}
```
## Rules
1. Single responsibility per hook
2. Props interface required
3. Refs for stable callback dependencies
4. Wrap returned functions in useCallback
5. Always try/catch async operations
6. Track loading/error states

View File

@@ -0,0 +1,62 @@
---
paths:
- "apps/sim/**/*.ts"
- "apps/sim/**/*.tsx"
---
# Import Patterns
## Absolute Imports
**Always use absolute imports.** Never use relative imports.
```typescript
// ✓ Good
import { useWorkflowStore } from '@/stores/workflows/store'
import { Button } from '@/components/ui/button'
// ✗ Bad
import { useWorkflowStore } from '../../../stores/workflows/store'
```
## Barrel Exports
Use barrel exports (`index.ts`) when a folder has 3+ exports. Import from barrel, not individual files.
```typescript
// ✓ Good
import { Dashboard, Sidebar } from '@/app/workspace/[workspaceId]/logs/components'
// ✗ Bad
import { Dashboard } from '@/app/workspace/[workspaceId]/logs/components/dashboard/dashboard'
```
## No Re-exports
Do not re-export from non-barrel files. Import directly from the source.
```typescript
// ✓ Good - import from where it's declared
import { CORE_TRIGGER_TYPES } from '@/stores/logs/filters/types'
// ✗ Bad - re-exporting in utils.ts then importing from there
import { CORE_TRIGGER_TYPES } from '@/app/workspace/.../utils'
```
## Import Order
1. React/core libraries
2. External libraries
3. UI components (`@/components/emcn`, `@/components/ui`)
4. Utilities (`@/lib/...`)
5. Stores (`@/stores/...`)
6. Feature imports
7. CSS imports
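The list above maps onto an import block like the following sketch; module names are drawn from the examples in this file, `@tanstack/react-query` stands in for any external library, and CSS imports (step 7) would come last when present:
```typescript
import { useCallback, useState } from 'react' // 1. React/core
import { useQuery } from '@tanstack/react-query' // 2. External libraries
import { Button } from '@/components/ui/button' // 3. UI components
import { cn } from '@/lib/utils' // 4. Utilities
import { useWorkflowStore } from '@/stores/workflows/store' // 5. Stores
import { Dashboard } from '@/app/workspace/[workspaceId]/logs/components' // 6. Feature imports
```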
## Type Imports
Use `type` keyword for type-only imports:
```typescript
import type { WorkflowLog } from '@/stores/logs/types'
```

View File

@@ -0,0 +1,209 @@
---
paths:
- "apps/sim/tools/**"
- "apps/sim/blocks/**"
- "apps/sim/triggers/**"
---
# Adding Integrations
## Overview
Adding a new integration typically requires:
1. **Tools** - API operations (`tools/{service}/`)
2. **Block** - UI component (`blocks/blocks/{service}.ts`)
3. **Icon** - SVG icon (`components/icons.tsx`)
4. **Trigger** (optional) - Webhooks/polling (`triggers/{service}/`)
Always look up the service's API docs first.
## 1. Tools (`tools/{service}/`)
```
tools/{service}/
├── index.ts # Export all tools
├── types.ts # Params/response types
├── {action}.ts # Individual tool (e.g., send_message.ts)
└── ...
```
**Tool file structure:**
```typescript
// tools/{service}/{action}.ts
import type { {Service}Params, {Service}Response } from '@/tools/{service}/types'
import type { ToolConfig } from '@/tools/types'
export const {service}{Action}Tool: ToolConfig<{Service}Params, {Service}Response> = {
id: '{service}_{action}',
name: '{Service} {Action}',
description: 'What this tool does',
version: '1.0.0',
oauth: { required: true, provider: '{service}' }, // if OAuth
params: { /* param definitions */ },
request: {
url: '/api/tools/{service}/{action}',
method: 'POST',
headers: () => ({ 'Content-Type': 'application/json' }),
body: (params) => ({ ...params }),
},
transformResponse: async (response) => {
const data = await response.json()
if (!data.success) throw new Error(data.error)
return { success: true, output: data.output }
},
outputs: { /* output definitions */ },
}
```
**Register in `tools/registry.ts`:**
```typescript
import { {service}{Action}Tool } from '@/tools/{service}'
// Add to registry object
{service}_{action}: {service}{Action}Tool,
```
## 2. Block (`blocks/blocks/{service}.ts`)
```typescript
import { {Service}Icon } from '@/components/icons'
import type { BlockConfig } from '@/blocks/types'
import type { {Service}Response } from '@/tools/{service}/types'
export const {Service}Block: BlockConfig<{Service}Response> = {
type: '{service}',
name: '{Service}',
description: 'Short description',
longDescription: 'Detailed description',
category: 'tools',
bgColor: '#hexcolor',
icon: {Service}Icon,
subBlocks: [ /* see SubBlock Properties below */ ],
tools: {
access: ['{service}_{action}', ...],
config: {
tool: (params) => `{service}_${params.operation}`,
params: (params) => ({ ...params }),
},
},
inputs: { /* input definitions */ },
outputs: { /* output definitions */ },
}
```
### SubBlock Properties
```typescript
{
id: 'fieldName', // Unique identifier
title: 'Field Label', // UI label
type: 'short-input', // See SubBlock Types below
placeholder: 'Hint text',
required: true, // See Required below
condition: { ... }, // See Condition below
dependsOn: ['otherField'], // See DependsOn below
mode: 'basic', // 'basic' | 'advanced' | 'both' | 'trigger'
}
```
**SubBlock Types:** `short-input`, `long-input`, `dropdown`, `code`, `switch`, `slider`, `oauth-input`, `channel-selector`, `user-selector`, `file-upload`, etc.
### `condition` - Show/hide based on another field
```typescript
// Show when operation === 'send'
condition: { field: 'operation', value: 'send' }
// Show when operation is 'send' OR 'read'
condition: { field: 'operation', value: ['send', 'read'] }
// Show when operation !== 'send'
condition: { field: 'operation', value: 'send', not: true }
// Complex: NOT in list AND another condition
condition: {
field: 'operation',
value: ['list_channels', 'list_users'],
not: true,
and: { field: 'destinationType', value: 'dm', not: true }
}
```
### `required` - Field validation
```typescript
// Always required
required: true
// Conditionally required (same syntax as condition)
required: { field: 'operation', value: 'send' }
```
### `dependsOn` - Clear field when dependencies change
```typescript
// Clear when credential changes
dependsOn: ['credential']
// Clear when authMethod changes AND (credential OR botToken) changes
dependsOn: { all: ['authMethod'], any: ['credential', 'botToken'] }
```
### `mode` - When to show field
- `'basic'` - Only in basic mode (default UI)
- `'advanced'` - Only in advanced mode (manual input)
- `'both'` - Show in both modes (default)
- `'trigger'` - Only when block is used as trigger
**Register in `blocks/registry.ts`:**
```typescript
import { {Service}Block } from '@/blocks/blocks/{service}'
// Add to registry object (alphabetically)
{service}: {Service}Block,
```
## 3. Icon (`components/icons.tsx`)
```typescript
export function {Service}Icon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
{/* SVG path from service's brand assets */}
</svg>
)
}
```
## 4. Trigger (`triggers/{service}/`) - Optional
```
triggers/{service}/
├── index.ts # Export all triggers
├── webhook.ts # Webhook handler
├── utils.ts # Shared utilities
└── {event}.ts # Specific event handlers
```
**Register in `triggers/registry.ts`:**
```typescript
import { {service}WebhookTrigger } from '@/triggers/{service}'
// Add to TRIGGER_REGISTRY
{service}_webhook: {service}WebhookTrigger,
```
## Checklist
- [ ] Look up API docs for the service
- [ ] Create `tools/{service}/types.ts` with proper types
- [ ] Create tool files for each operation
- [ ] Create `tools/{service}/index.ts` barrel export
- [ ] Register tools in `tools/registry.ts`
- [ ] Add icon to `components/icons.tsx`
- [ ] Create block in `blocks/blocks/{service}.ts`
- [ ] Register block in `blocks/registry.ts`
- [ ] (Optional) Create triggers in `triggers/{service}/`
- [ ] (Optional) Register triggers in `triggers/registry.ts`

View File

@@ -0,0 +1,66 @@
---
paths:
- "apps/sim/hooks/queries/**/*.ts"
---
# React Query Patterns
All React Query hooks live in `hooks/queries/`.
## Query Key Factory
Every query file defines a keys factory:
```typescript
export const entityKeys = {
all: ['entity'] as const,
list: (workspaceId?: string) => [...entityKeys.all, 'list', workspaceId ?? ''] as const,
detail: (id?: string) => [...entityKeys.all, 'detail', id ?? ''] as const,
}
```
## File Structure
```typescript
// 1. Query keys factory
// 2. Types (if needed)
// 3. Private fetch functions
// 4. Exported hooks
```
## Query Hook
```typescript
export function useEntityList(workspaceId?: string, options?: { enabled?: boolean }) {
return useQuery({
queryKey: entityKeys.list(workspaceId),
queryFn: () => fetchEntities(workspaceId as string),
enabled: Boolean(workspaceId) && (options?.enabled ?? true),
staleTime: 60 * 1000,
placeholderData: keepPreviousData,
})
}
```
## Mutation Hook
```typescript
export function useCreateEntity() {
const queryClient = useQueryClient()
return useMutation({
mutationFn: async (variables) => { /* fetch POST */ },
onSuccess: () => queryClient.invalidateQueries({ queryKey: entityKeys.all }),
})
}
```
## Optimistic Updates
For optimistic mutations syncing with Zustand, use `createOptimisticMutationHandlers` from `@/hooks/queries/utils/optimistic-mutation`.
## Naming
- **Keys**: `entityKeys`
- **Query hooks**: `useEntity`, `useEntityList`
- **Mutation hooks**: `useCreateEntity`, `useUpdateEntity`
- **Fetch functions**: `fetchEntity` (private)

View File

@@ -0,0 +1,71 @@
---
paths:
- "apps/sim/**/store.ts"
- "apps/sim/**/stores/**/*.ts"
---
# Zustand Store Patterns
Stores live in `stores/`. Complex stores split into `store.ts` + `types.ts`.
## Basic Store
```typescript
import { create } from 'zustand'
import { devtools } from 'zustand/middleware'
import type { FeatureState } from '@/stores/feature/types'
const initialState = { items: [] as Item[], activeId: null as string | null }
export const useFeatureStore = create<FeatureState>()(
devtools(
(set, get) => ({
...initialState,
setItems: (items) => set({ items }),
addItem: (item) => set((state) => ({ items: [...state.items, item] })),
reset: () => set(initialState),
}),
{ name: 'feature-store' }
)
)
```
## Persisted Store
```typescript
import { create } from 'zustand'
import { persist } from 'zustand/middleware'
export const useFeatureStore = create<FeatureState>()(
persist(
(set) => ({
width: 300,
setWidth: (width) => set({ width }),
_hasHydrated: false,
setHasHydrated: (v) => set({ _hasHydrated: v }),
}),
{
name: 'feature-state',
partialize: (state) => ({ width: state.width }),
onRehydrateStorage: () => (state) => state?.setHasHydrated(true),
}
)
)
```
## Rules
1. Use `devtools` middleware (named stores)
2. Use `persist` only when data should survive reload
3. `partialize` to persist only necessary state
4. `_hasHydrated` pattern for persisted stores needing hydration tracking
5. Immutable updates only
6. `set((state) => ...)` when depending on previous state
7. Provide `reset()` action
## Outside React
```typescript
const items = useFeatureStore.getState().items
useFeatureStore.setState({ items: newItems })
```

View File

@@ -0,0 +1,41 @@
---
paths:
- "apps/sim/**/*.tsx"
- "apps/sim/**/*.css"
---
# Styling Rules
## Tailwind
1. **No inline styles** - Use Tailwind classes
2. **No duplicate dark classes** - Skip `dark:` when value matches light mode
3. **Exact values** - `text-[14px]`, `h-[26px]`
4. **Transitions** - `transition-colors` for interactive states
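An illustrative contrast for rules 1-3 (the class values are hypothetical):
```tsx
// ✓ Good - Tailwind classes with exact values, no redundant dark: duplicate
<span className='h-[26px] text-[14px] transition-colors hover:bg-accent' />
// ✗ Bad - inline style plus a dark: class that repeats the light-mode value
<span style={{ height: 26 }} className='text-[14px] dark:text-[14px]' />
```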
## Conditional Classes
```typescript
import { cn } from '@/lib/utils'
<div className={cn(
'base-classes',
isActive && 'active-classes',
disabled ? 'opacity-60' : 'hover:bg-accent'
)} />
```
## CSS Variables
For dynamic values (widths, heights) synced with stores:
```typescript
// In store
setWidth: (width) => {
set({ width })
document.documentElement.style.setProperty('--sidebar-width', `${width}px`)
}
// In component
<aside style={{ width: 'var(--sidebar-width)' }} />
```

View File

@@ -0,0 +1,58 @@
---
paths:
- "apps/sim/**/*.test.ts"
- "apps/sim/**/*.test.tsx"
---
# Testing Patterns
Use Vitest. Test files: `feature.ts` → `feature.test.ts`
## Structure
```typescript
/**
* @vitest-environment node
*/
import { databaseMock, loggerMock } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
vi.mock('@sim/db', () => databaseMock)
vi.mock('@sim/logger', () => loggerMock)
import { myFunction } from '@/lib/feature'
describe('myFunction', () => {
beforeEach(() => vi.clearAllMocks())
it.concurrent('isolated tests run in parallel', () => { ... })
})
```
## @sim/testing Package
Always prefer over local mocks.
| Category | Utilities |
|----------|-----------|
| **Mocks** | `loggerMock`, `databaseMock`, `setupGlobalFetchMock()` |
| **Factories** | `createSession()`, `createWorkflowRecord()`, `createBlock()`, `createExecutorContext()` |
| **Builders** | `WorkflowBuilder`, `ExecutionContextBuilder` |
| **Assertions** | `expectWorkflowAccessGranted()`, `expectBlockExecuted()` |
## Rules
1. `@vitest-environment node` directive at file top
2. `vi.mock()` calls before importing mocked modules
3. `@sim/testing` utilities over local mocks
4. `it.concurrent` for isolated tests (no shared mutable state)
5. `beforeEach(() => vi.clearAllMocks())` to reset state
## Hoisted Mocks
For mutable mock references:
```typescript
const mockFn = vi.hoisted(() => vi.fn())
vi.mock('@/lib/module', () => ({ myFunction: mockFn }))
mockFn.mockResolvedValue({ data: 'test' })
```

View File

@@ -0,0 +1,21 @@
---
paths:
- "apps/sim/**/*.ts"
- "apps/sim/**/*.tsx"
---
# TypeScript Rules
1. **No `any`** - Use proper types or `unknown` with type guards
2. **Props interface** - Always define for components
3. **Const assertions** - `as const` for constant objects/arrays
4. **Ref types** - Explicit: `useRef<HTMLDivElement>(null)`
5. **Type imports** - `import type { X }` for type-only imports
```typescript
// ✗ Bad
const handleClick = (e: any) => {}
// ✓ Good
const handleClick = (e: React.MouseEvent<HTMLButtonElement>) => {}
```

View File

@@ -8,7 +8,7 @@ alwaysApply: true
You are a professional software engineer. All code must follow best practices: accurate, readable, clean, and efficient.
## Logging
- Import `createLogger` from `sim/logger`. Use `logger.info`, `logger.warn`, `logger.error` instead of `console.log`.
Import `createLogger` from `@sim/logger`. Use `logger.info`, `logger.warn`, `logger.error` instead of `console.log`.
## Comments
Use TSDoc for documentation. No `====` separators. No non-TSDoc comments.

View File

@@ -14,7 +14,7 @@
</p>
<p align="center">
<a href="https://deepwiki.com/simstudioai/sim" target="_blank" rel="noopener noreferrer"><img src="https://deepwiki.com/badge.svg" alt="Ask DeepWiki"></a> <a href="https://cursor.com/link/prompt?text=Help%20me%20set%20up%20Sim%20Studio%20locally.%20Follow%20these%20steps%3A%0A%0A1.%20First%2C%20verify%20Docker%20is%20installed%20and%20running%3A%0A%20%20%20docker%20--version%0A%20%20%20docker%20info%0A%0A2.%20Clone%20the%20repository%3A%0A%20%20%20git%20clone%20https%3A%2F%2Fgithub.com%2Fsimstudioai%2Fsim.git%0A%20%20%20cd%20sim%0A%0A3.%20Start%20the%20services%20with%20Docker%20Compose%3A%0A%20%20%20docker%20compose%20-f%20docker-compose.prod.yml%20up%20-d%0A%0A4.%20Wait%20for%20all%20containers%20to%20be%20healthy%20(this%20may%20take%201-2%20minutes)%3A%0A%20%20%20docker%20compose%20-f%20docker-compose.prod.yml%20ps%0A%0A5.%20Verify%20the%20app%20is%20accessible%20at%20http%3A%2F%2Flocalhost%3A3000%0A%0AIf%20there%20are%20any%20errors%2C%20help%20me%20troubleshoot%20them.%20Common%20issues%3A%0A-%20Port%203000%2C%203002%2C%20or%205432%20already%20in%20use%0A-%20Docker%20not%20running%0A-%20Insufficient%20memory%20(needs%2012GB%2B%20RAM)%0A%0AFor%20local%20AI%20models%20with%20Ollama%2C%20use%20this%20instead%20of%20step%203%3A%0A%20%20%20docker%20compose%20-f%20docker-compose.ollama.yml%20--profile%20setup%20up%20-d"><img src="https://img.shields.io/badge/Set%20Up%20with-Cursor-000000?logo=cursor&logoColor=white" alt="Set Up with Cursor"></a> <a href="https://deepwiki.com/simstudioai/sim" target="_blank" rel="noopener noreferrer"><img src="https://deepwiki.com/badge.svg" alt="Ask DeepWiki"></a> <a href="https://cursor.com/link/prompt?text=Help%20me%20set%20up%20Sim%20locally.%20Follow%20these%20steps%3A%0A%0A1.%20First%2C%20verify%20Docker%20is%20installed%20and%20running%3A%0A%20%20%20docker%20--version%0A%20%20%20docker%20info%0A%0A2.%20Clone%20the%20repository%3A%0A%20%20%20git%20clone%20https%3A%2F%2Fgithub.com%2Fsimstudioai%2Fsim.git%0A%20%20%20cd%20sim%0A%0A3.%20Start%20the%20services%20with%20Docker%20Compose%3A%0A%20%20%20docker%20compose%20-f%20docker-compose.prod.yml%20up%20-d%0A%0A4.%20Wait%20for%20all%20containers%20to%20be%20healthy%20(this%20may%20take%201-2%20minutes)%3A%0A%20%20%20docker%20compose%20-f%20docker-compose.prod.yml%20ps%0A%0A5.%20Verify%20the%20app%20is%20accessible%20at%20http%3A%2F%2Flocalhost%3A3000%0A%0AIf%20there%20are%20any%20errors%2C%20help%20me%20troubleshoot%20them.%20Common%20issues%3A%0A-%20Port%203000%2C%203002%2C%20or%205432%20already%20in%20use%0A-%20Docker%20not%20running%0A-%20Insufficient%20memory%20(needs%2012GB%2B%20RAM)%0A%0AFor%20local%20AI%20models%20with%20Ollama%2C%20use%20this%20instead%20of%20step%203%3A%0A%20%20%20docker%20compose%20-f%20docker-compose.ollama.yml%20--profile%20setup%20up%20-d"><img src="https://img.shields.io/badge/Set%20Up%20with-Cursor-000000?logo=cursor&logoColor=white" alt="Set Up with Cursor"></a>
</p>
### Build Workflows with Ease

View File

@@ -4093,6 +4093,23 @@ export function SQSIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function TextractIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg
{...props}
viewBox='10 14 60 52'
version='1.1'
xmlns='http://www.w3.org/2000/svg'
xmlnsXlink='http://www.w3.org/1999/xlink'
>
<path
d='M22.0624102,50 C24.3763895,53.603 28.4103535,56 33.0003125,56 C40.1672485,56 45.9991964,50.168 45.9991964,43 C45.9991964,35.832 40.1672485,30 33.0003125,30 C27.6033607,30 22.9664021,33.307 21.0024196,38 L23.2143999,38 C25.0393836,34.444 28.7363506,32 33.0003125,32 C39.0652583,32 43.9992143,36.935 43.9992143,43 C43.9992143,49.065 39.0652583,54 33.0003125,54 C29.5913429,54 26.5413702,52.441 24.5213882,50 L22.0624102,50 Z M37.0002768,45 L37.0002768,43 L41.9992321,43 C41.9992321,38.038 37.9622682,34 33.0003125,34 C28.0373568,34 23.9993929,38.038 23.9993929,43 L28.9993482,43 L28.9993482,45 L24.2313908,45 C25.1443826,49.002 28.7253507,52 33.0003125,52 C35.1362934,52 37.0992759,51.249 38.6442621,50 L34.0003036,50 L34.0003036,48 L40.4782457,48 C41.0812403,47.102 41.5202364,46.087 41.7682342,45 L37.0002768,45 Z M21.0024196,48 L23.2143999,48 C22.4434068,46.498 22.0004107,44.801 22.0004107,43 C22.0004107,41.959 22.1554093,40.955 22.4264069,40 L20.3634253,40 C20.1344274,40.965 19.9994286,41.966 19.9994286,43 C19.9994286,44.771 20.3584254,46.46 21.0024196,48 L21.0024196,48 Z M19.7434309,50 L17.0004554,50 L17.0004554,48 L18.8744386,48 C18.5344417,47.04 18.2894438,46.038 18.1494451,45 L15.4144695,45 L16.707458,46.293 L15.2924706,47.707 L12.2924974,44.707 C11.9025009,44.316 11.9025009,43.684 12.2924974,43.293 L15.2924706,40.293 L16.707458,41.707 L15.4144695,43 L18.0004464,43 C18.0004464,41.973 18.1044455,40.97 18.3024437,40 L17.0004554,40 L17.0004554,38 L18.8744386,38 C20.9404202,32.184 26.4833707,28 33.0003125,28 C37.427273,28 41.4002375,29.939 44.148213,33 L59.0000804,33 L59.0000804,35 L45.6661994,35 C47.1351863,37.318 47.9991786,40.058 47.9991786,43 L59.0000804,43 L59.0000804,45 L47.8501799,45 C46.8681887,52.327 40.5912447,58 33.0003125,58 C27.2563638,58 22.2624084,54.752 19.7434309,50 L19.7434309,50 Z M37.0002768,39 C37.0002768,38.448 36.5522808,38 36.0002857,38 L29.9993482,38 C29.4473442,38 28.9993482,38.448 28.9993482,39 L28.9993482,41 L31.0003304,41 L31.0003304,40 L32.0003214,40 L32.0003214,43 L31.0003304,43 L31.0003304,45 L35.0002946,45 L35.0002946,43 L34.0003036,43 L34.0003036,40 L35.0002946,40 L35.0002946,41 L37.0002768,41 L37.0002768,39 Z M49.0001696,40 L59.0000804,40 L59.0000804,38 L49.0001696,38 L49.0001696,40 Z M49.0001696,50 L59.0000804,50 L59.0000804,48 L49.0001696,48 L49.0001696,50 Z M57.0000982,27 L60.5850662,27 L57.0000982,23.414 L57.0000982,27 Z M63.7070383,27.293 C63.8940367,27.48 64.0000357,27.735 64.0000357,28 L64.0000357,63 C64.0000357,63.552 63.5520397,64 63.0000446,64 L32.0003304,64 C31.4473264,64 31.0003304,63.552 31.0003304,63 L31.0003304,59 L33.0003125,59 L33.0003125,62 L62.0000536,62 L62.0000536,29 L56.0001071,29 C55.4471121,29 55.0001161,28.552 55.0001161,28 L55.0001161,22 L33.0003125,22 L33.0003125,27 L31.0003304,27 L31.0003304,21 C31.0003304,20.448 31.4473264,20 32.0003304,20 L56.0001071,20 C56.2651048,20 56.5191025,20.105 56.7071008,20.293 L63.7070383,27.293 Z M68,24.166 L68,61 C68,61.552 67.552004,62 67.0000089,62 L65.0000268,62 L65.0000268,60 L66.0000179,60 L66.0000179,24.612 L58.6170838,18 L36.0002857,18 L36.0002857,19 L34.0003036,19 L34.0003036,17 C34.0003036,16.448 34.4472996,16 35.0003036,16 L59.0000804,16 C59.2460782,16 59.483076,16.091 59.6660744,16.255 L67.666003,23.42 C67.8780011,23.61 68,23.881 68,24.166 L68,24.166 Z'
fill='currentColor'
/>
</svg>
)
}
export function McpIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg

View File

@@ -110,6 +110,7 @@ import {
SupabaseIcon,
TavilyIcon,
TelegramIcon,
TextractIcon,
TinybirdIcon,
TranslateIcon,
TrelloIcon,
@@ -143,7 +144,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
calendly: CalendlyIcon,
circleback: CirclebackIcon,
clay: ClayIcon,
- confluence: ConfluenceIcon,
confluence_v2: ConfluenceIcon,
cursor_v2: CursorIcon,
datadog: DatadogIcon,
discord: DiscordIcon,
@@ -153,7 +154,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
elasticsearch: ElasticsearchIcon,
elevenlabs: ElevenLabsIcon,
exa: ExaAIIcon,
- file: DocumentIcon,
file_v2: DocumentIcon,
firecrawl: FirecrawlIcon,
fireflies: FirefliesIcon,
github_v2: GithubIcon,
@@ -195,7 +196,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
microsoft_excel_v2: MicrosoftExcelIcon,
microsoft_planner: MicrosoftPlannerIcon,
microsoft_teams: MicrosoftTeamsIcon,
- mistral_parse: MistralIcon,
mistral_parse_v2: MistralIcon,
mongodb: MongoDBIcon,
mysql: MySQLIcon,
neo4j: Neo4jIcon,
@@ -237,6 +238,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
supabase: SupabaseIcon,
tavily: TavilyIcon,
telegram: TelegramIcon,
textract: TextractIcon,
tinybird: TinybirdIcon,
translate: TranslateIcon,
trello: TrelloIcon,
@@ -244,7 +246,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
twilio_sms: TwilioIcon,
twilio_voice: TwilioIcon,
typeform: TypeformIcon,
- video_generator: VideoIcon,
video_generator_v2: VideoIcon,
vision: EyeIcon,
wealthbox: WealthboxIcon,
webflow: WebflowIcon,

View File

@@ -6,7 +6,7 @@ description: Interact with Confluence
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
- type="confluence"
type="confluence_v2"
color="#E0E0E0"
/>

View File

@@ -6,7 +6,7 @@ description: Read and parse multiple files
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
- type="file"
type="file_v2"
color="#40916C"
/>
@@ -48,7 +48,7 @@ Parse one or more uploaded files or files from URLs (text, PDF, CSV, images, etc
| Parameter | Type | Description |
| --------- | ---- | ----------- |
- | `files` | array | Array of parsed files |
- | `combinedContent` | string | Combined content of all parsed files |
| `files` | array | Array of parsed files with content, metadata, and file properties |
| `combinedContent` | string | All file contents merged into a single text string |

View File

@@ -106,6 +106,7 @@
"supabase", "supabase",
"tavily", "tavily",
"telegram", "telegram",
"textract",
"tinybird", "tinybird",
"translate", "translate",
"trello", "trello",

View File

@@ -6,7 +6,7 @@ description: Extract text from PDF documents
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
- type="mistral_parse"
type="mistral_parse_v2"
color="#000000"
/>
@@ -54,18 +54,37 @@ Parse PDF documents using Mistral OCR API
| Parameter | Type | Description |
| --------- | ---- | ----------- |
- | `success` | boolean | Whether the PDF was parsed successfully |
- | `content` | string | Extracted content in the requested format \(markdown, text, or JSON\) |
- | `metadata` | object | Processing metadata including jobId, fileType, pageCount, and usage info |
- | ↳ `jobId` | string | Unique job identifier |
- | ↳ `fileType` | string | File type \(e.g., pdf\) |
- | ↳ `fileName` | string | Original file name |
- | ↳ `source` | string | Source type \(url\) |
- | ↳ `pageCount` | number | Number of pages processed |
- | ↳ `model` | string | Mistral model used |
- | ↳ `resultType` | string | Output format \(markdown, text, json\) |
- | ↳ `processedAt` | string | Processing timestamp |
- | ↳ `sourceUrl` | string | Source URL if applicable |
- | ↳ `usageInfo` | object | Usage statistics from OCR processing |
| `pages` | array | Array of page objects from Mistral OCR |
| ↳ `index` | number | Page index \(zero-based\) |
| ↳ `markdown` | string | Extracted markdown content |
| ↳ `images` | array | Images extracted from this page with bounding boxes |
| ↳ `id` | string | Image identifier \(e.g., img-0.jpeg\) |
| ↳ `top_left_x` | number | Top-left X coordinate in pixels |
| ↳ `top_left_y` | number | Top-left Y coordinate in pixels |
| ↳ `bottom_right_x` | number | Bottom-right X coordinate in pixels |
| ↳ `bottom_right_y` | number | Bottom-right Y coordinate in pixels |
| ↳ `image_base64` | string | Base64-encoded image data \(when include_image_base64=true\) |
| ↳ `id` | string | Image identifier \(e.g., img-0.jpeg\) |
| ↳ `top_left_x` | number | Top-left X coordinate in pixels |
| ↳ `top_left_y` | number | Top-left Y coordinate in pixels |
| ↳ `bottom_right_x` | number | Bottom-right X coordinate in pixels |
| ↳ `bottom_right_y` | number | Bottom-right Y coordinate in pixels |
| ↳ `image_base64` | string | Base64-encoded image data \(when include_image_base64=true\) |
| ↳ `dimensions` | object | Page dimensions |
| ↳ `dpi` | number | Dots per inch |
| ↳ `height` | number | Page height in pixels |
| ↳ `width` | number | Page width in pixels |
| ↳ `dpi` | number | Dots per inch |
| ↳ `height` | number | Page height in pixels |
| ↳ `width` | number | Page width in pixels |
| ↳ `tables` | array | Extracted tables as HTML/markdown \(when table_format is set\). Referenced via placeholders like \[tbl-0.html\] |
| ↳ `hyperlinks` | array | Array of URL strings detected in the page \(e.g., \[ |
| ↳ `header` | string | Page header content \(when extract_header=true\) |
| ↳ `footer` | string | Page footer content \(when extract_footer=true\) |
| `model` | string | Mistral OCR model identifier \(e.g., mistral-ocr-latest\) |
| `usage_info` | object | Usage and processing statistics |
| ↳ `pages_processed` | number | Total number of pages processed |
| ↳ `doc_size_bytes` | number | Document file size in bytes |
| `document_annotation` | string | Structured annotation data as JSON string \(when applicable\) |

View File

@@ -58,6 +58,7 @@ Upload a file to an AWS S3 bucket
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `url` | string | URL of the uploaded S3 object |
| `uri` | string | S3 URI of the uploaded object \(s3://bucket/key\) |
| `metadata` | object | Upload metadata including ETag and location |
### `s3_get_object`
@@ -149,6 +150,7 @@ Copy an object within or between AWS S3 buckets
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `url` | string | URL of the copied S3 object |
| `uri` | string | S3 URI of the copied object \(s3://bucket/key\) |
| `metadata` | object | Copy operation metadata |

View File

@@ -0,0 +1,120 @@
---
title: AWS Textract
description: Extract text, tables, and forms from documents
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="textract"
color="linear-gradient(135deg, #055F4E 0%, #56C0A7 100%)"
/>
{/* MANUAL-CONTENT-START:intro */}
[AWS Textract](https://aws.amazon.com/textract/) is a powerful AI service from Amazon Web Services designed to automatically extract printed text, handwriting, tables, forms, key-value pairs, and other structured data from scanned documents and images. Textract leverages advanced optical character recognition (OCR) and document analysis to transform documents into actionable data, enabling automation, analytics, compliance, and more.
With AWS Textract, you can:
- **Extract text from images and documents**: Recognize printed text and handwriting in formats such as PDF, JPEG, PNG, or TIFF
- **Detect and extract tables**: Automatically find tables and output their structured content
- **Parse forms and key-value pairs**: Pull structured data from forms, including fields and their corresponding values
- **Identify signatures and layout features**: Detect signatures, geometric layout, and relationships between document elements
- **Customize extraction with queries**: Extract specific fields and answers using query-based extraction (e.g., "What is the invoice number?")
In Sim, the AWS Textract integration empowers your agents to intelligently process documents as part of their workflows. This unlocks automation scenarios such as data entry from invoices, onboarding documents, contracts, receipts, and more. Your agents can extract relevant data, analyze structured forms, and generate summaries or reports directly from document uploads or URLs. By connecting Sim with AWS Textract, you can reduce manual effort, improve data accuracy, and streamline your business processes with robust document understanding.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Integrate AWS Textract into your workflow to extract text, tables, forms, and key-value pairs from documents. Single-page mode supports JPEG, PNG, and single-page PDF. Multi-page mode supports multi-page PDF and TIFF.
## Tools
### `textract_parser`
Parse documents using AWS Textract OCR and document analysis
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `accessKeyId` | string | Yes | AWS Access Key ID |
| `secretAccessKey` | string | Yes | AWS Secret Access Key |
| `region` | string | Yes | AWS region for Textract service \(e.g., us-east-1\) |
| `processingMode` | string | No | Document type: single-page or multi-page. Defaults to single-page. |
| `filePath` | string | No | URL to a document to be processed \(JPEG, PNG, or single-page PDF\). |
| `s3Uri` | string | No | S3 URI for multi-page processing \(s3://bucket/key\). |
| `fileUpload` | object | No | File upload data from file-upload component |
| `featureTypes` | array | No | Feature types to detect: TABLES, FORMS, QUERIES, SIGNATURES, LAYOUT. If not specified, only text detection is performed. |
| `items` | string | No | Feature type |
| `queries` | array | No | Custom queries to extract specific information. Only used when featureTypes includes QUERIES. |
| `items` | object | No | Query configuration |
| `properties` | string | No | The query text |
| `Text` | string | No | No description |
| `Alias` | string | No | No description |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `blocks` | array | Array of Block objects containing detected text, tables, forms, and other elements |
| ↳ `BlockType` | string | Type of block \(PAGE, LINE, WORD, TABLE, CELL, KEY_VALUE_SET, etc.\) |
| ↳ `Id` | string | Unique identifier for the block |
| ↳ `Text` | string | Query text |
| ↳ `TextType` | string | Type of text \(PRINTED or HANDWRITING\) |
| ↳ `Confidence` | number | Confidence score \(0-100\) |
| ↳ `Page` | number | Page number |
| ↳ `Geometry` | object | Location and bounding box information |
| ↳ `BoundingBox` | object | Height as ratio of document height |
| ↳ `Height` | number | Height as ratio of document height |
| ↳ `Left` | number | Left position as ratio of document width |
| ↳ `Top` | number | Top position as ratio of document height |
| ↳ `Width` | number | Width as ratio of document width |
| ↳ `Height` | number | Height as ratio of document height |
| ↳ `Left` | number | Left position as ratio of document width |
| ↳ `Top` | number | Top position as ratio of document height |
| ↳ `Width` | number | Width as ratio of document width |
| ↳ `Polygon` | array | Polygon coordinates |
| ↳ `X` | number | X coordinate |
| ↳ `Y` | number | Y coordinate |
| ↳ `X` | number | X coordinate |
| ↳ `Y` | number | Y coordinate |
| ↳ `BoundingBox` | object | Height as ratio of document height |
| ↳ `Height` | number | Height as ratio of document height |
| ↳ `Left` | number | Left position as ratio of document width |
| ↳ `Top` | number | Top position as ratio of document height |
| ↳ `Width` | number | Width as ratio of document width |
| ↳ `Height` | number | Height as ratio of document height |
| ↳ `Left` | number | Left position as ratio of document width |
| ↳ `Top` | number | Top position as ratio of document height |
| ↳ `Width` | number | Width as ratio of document width |
| ↳ `Polygon` | array | Polygon coordinates |
| ↳ `X` | number | X coordinate |
| ↳ `Y` | number | Y coordinate |
| ↳ `X` | number | X coordinate |
| ↳ `Y` | number | Y coordinate |
| ↳ `Relationships` | array | Relationships to other blocks |
| ↳ `Type` | string | Relationship type \(CHILD, VALUE, ANSWER, etc.\) |
| ↳ `Ids` | array | IDs of related blocks |
| ↳ `Type` | string | Relationship type \(CHILD, VALUE, ANSWER, etc.\) |
| ↳ `Ids` | array | IDs of related blocks |
| ↳ `EntityTypes` | array | Entity types for KEY_VALUE_SET \(KEY or VALUE\) |
| ↳ `SelectionStatus` | string | For checkboxes: SELECTED or NOT_SELECTED |
| ↳ `RowIndex` | number | Row index for table cells |
| ↳ `ColumnIndex` | number | Column index for table cells |
| ↳ `RowSpan` | number | Row span for merged cells |
| ↳ `ColumnSpan` | number | Column span for merged cells |
| ↳ `Query` | object | Query information for QUERY blocks |
| ↳ `Text` | string | Query text |
| ↳ `Alias` | string | Query alias |
| ↳ `Pages` | array | Pages to search |
| ↳ `Alias` | string | Query alias |
| ↳ `Pages` | array | Pages to search |
| `documentMetadata` | object | Metadata about the analyzed document |
| ↳ `pages` | number | Number of pages in the document |
| `modelVersion` | string | Version of the Textract model used for processing |

View File

@@ -6,7 +6,7 @@ description: Generate videos from text using AI
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
- type="video_generator"
type="video_generator_v2"
color="#181C1E"
/>

View File

@@ -4,7 +4,7 @@ import { eq } from 'drizzle-orm'
import { NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
- const logger = createLogger('SSO-Providers')
const logger = createLogger('SSOProvidersRoute')
export async function GET() {
try {

View File

@@ -6,7 +6,7 @@ import { hasSSOAccess } from '@/lib/billing'
import { env } from '@/lib/core/config/env'
import { REDACTED_MARKER } from '@/lib/core/security/redaction'
- const logger = createLogger('SSO-Register')
const logger = createLogger('SSORegisterRoute')
const mappingSchema = z
.object({
@@ -43,6 +43,10 @@ const ssoRegistrationSchema = z.discriminatedUnion('providerType', [
])
.default(['openid', 'profile', 'email']),
pkce: z.boolean().default(true),
authorizationEndpoint: z.string().url().optional(),
tokenEndpoint: z.string().url().optional(),
userInfoEndpoint: z.string().url().optional(),
jwksEndpoint: z.string().url().optional(),
}),
z.object({
providerType: z.literal('saml'),
@@ -64,12 +68,10 @@ const ssoRegistrationSchema = z.discriminatedUnion('providerType', [
export async function POST(request: NextRequest) {
  try {
-   // SSO plugin must be enabled in Better Auth
    if (!env.SSO_ENABLED) {
      return NextResponse.json({ error: 'SSO is not enabled' }, { status: 400 })
    }
-   // Check plan access (enterprise) or env var override
    const session = await getSession()
    if (!session?.user?.id) {
      return NextResponse.json({ error: 'Authentication required' }, { status: 401 })
@@ -116,7 +118,16 @@ export async function POST(request: NextRequest) {
    }
    if (providerType === 'oidc') {
-     const { clientId, clientSecret, scopes, pkce } = body
+     const {
+       clientId,
+       clientSecret,
+       scopes,
+       pkce,
+       authorizationEndpoint,
+       tokenEndpoint,
+       userInfoEndpoint,
+       jwksEndpoint,
+     } = body
      const oidcConfig: any = {
        clientId,
@@ -127,50 +138,104 @@ export async function POST(request: NextRequest) {
        pkce: pkce ?? true,
      }
-     // Add manual endpoints for providers that might need them
-     // Common patterns for OIDC providers that don't support discovery properly
-     if (
-       issuer.includes('okta.com') ||
-       issuer.includes('auth0.com') ||
-       issuer.includes('identityserver')
-     ) {
-       const baseUrl = issuer.includes('/oauth2/default')
-         ? issuer.replace('/oauth2/default', '')
-         : issuer.replace('/oauth', '').replace('/v2.0', '').replace('/oauth2', '')
-       // Okta-style endpoints
-       if (issuer.includes('okta.com')) {
-         oidcConfig.authorizationEndpoint = `${baseUrl}/oauth2/default/v1/authorize`
-         oidcConfig.tokenEndpoint = `${baseUrl}/oauth2/default/v1/token`
-         oidcConfig.userInfoEndpoint = `${baseUrl}/oauth2/default/v1/userinfo`
-         oidcConfig.jwksEndpoint = `${baseUrl}/oauth2/default/v1/keys`
-       }
-       // Auth0-style endpoints
-       else if (issuer.includes('auth0.com')) {
-         oidcConfig.authorizationEndpoint = `${baseUrl}/authorize`
-         oidcConfig.tokenEndpoint = `${baseUrl}/oauth/token`
-         oidcConfig.userInfoEndpoint = `${baseUrl}/userinfo`
-         oidcConfig.jwksEndpoint = `${baseUrl}/.well-known/jwks.json`
-       }
-       // Generic OIDC endpoints (IdentityServer, etc.)
-       else {
-         oidcConfig.authorizationEndpoint = `${baseUrl}/connect/authorize`
-         oidcConfig.tokenEndpoint = `${baseUrl}/connect/token`
-         oidcConfig.userInfoEndpoint = `${baseUrl}/connect/userinfo`
-         oidcConfig.jwksEndpoint = `${baseUrl}/.well-known/jwks`
-       }
-       logger.info('Using manual OIDC endpoints for provider', {
-         providerId,
-         provider: issuer.includes('okta.com')
-           ? 'Okta'
-           : issuer.includes('auth0.com')
-             ? 'Auth0'
-             : 'Generic',
-         authEndpoint: oidcConfig.authorizationEndpoint,
-       })
-     }
+     oidcConfig.authorizationEndpoint = authorizationEndpoint
+     oidcConfig.tokenEndpoint = tokenEndpoint
+     oidcConfig.userInfoEndpoint = userInfoEndpoint
+     oidcConfig.jwksEndpoint = jwksEndpoint
+     const needsDiscovery =
+       !oidcConfig.authorizationEndpoint || !oidcConfig.tokenEndpoint || !oidcConfig.jwksEndpoint
+     if (needsDiscovery) {
+       const discoveryUrl = `${issuer.replace(/\/$/, '')}/.well-known/openid-configuration`
+       try {
+         logger.info('Fetching OIDC discovery document for missing endpoints', {
+           discoveryUrl,
+           hasAuthEndpoint: !!oidcConfig.authorizationEndpoint,
+           hasTokenEndpoint: !!oidcConfig.tokenEndpoint,
+           hasJwksEndpoint: !!oidcConfig.jwksEndpoint,
+         })
+         const discoveryResponse = await fetch(discoveryUrl, {
+           headers: { Accept: 'application/json' },
+         })
+         if (!discoveryResponse.ok) {
+           logger.error('Failed to fetch OIDC discovery document', {
+             status: discoveryResponse.status,
+             statusText: discoveryResponse.statusText,
+           })
+           return NextResponse.json(
+             {
+               error: `Failed to fetch OIDC discovery document from ${discoveryUrl}. Status: ${discoveryResponse.status}. Provide all endpoints explicitly or verify the issuer URL.`,
+             },
+             { status: 400 }
+           )
+         }
+         const discovery = await discoveryResponse.json()
+         oidcConfig.authorizationEndpoint =
+           oidcConfig.authorizationEndpoint || discovery.authorization_endpoint
+         oidcConfig.tokenEndpoint = oidcConfig.tokenEndpoint || discovery.token_endpoint
+         oidcConfig.userInfoEndpoint = oidcConfig.userInfoEndpoint || discovery.userinfo_endpoint
+         oidcConfig.jwksEndpoint = oidcConfig.jwksEndpoint || discovery.jwks_uri
+         logger.info('Merged OIDC endpoints (user-provided + discovery)', {
+           providerId,
+           issuer,
+           authorizationEndpoint: oidcConfig.authorizationEndpoint,
+           tokenEndpoint: oidcConfig.tokenEndpoint,
+           userInfoEndpoint: oidcConfig.userInfoEndpoint,
+           jwksEndpoint: oidcConfig.jwksEndpoint,
+         })
+       } catch (error) {
+         logger.error('Error fetching OIDC discovery document', {
+           error: error instanceof Error ? error.message : 'Unknown error',
+           discoveryUrl,
+         })
+         return NextResponse.json(
+           {
+             error: `Failed to fetch OIDC discovery document from ${discoveryUrl}. Please verify the issuer URL is correct or provide all endpoints explicitly.`,
+           },
+           { status: 400 }
+         )
+       }
+     } else {
+       logger.info('Using explicitly provided OIDC endpoints (all present)', {
+         providerId,
+         issuer,
+         authorizationEndpoint: oidcConfig.authorizationEndpoint,
+         tokenEndpoint: oidcConfig.tokenEndpoint,
+         userInfoEndpoint: oidcConfig.userInfoEndpoint,
+         jwksEndpoint: oidcConfig.jwksEndpoint,
+       })
+     }
+     if (
+       !oidcConfig.authorizationEndpoint ||
+       !oidcConfig.tokenEndpoint ||
+       !oidcConfig.jwksEndpoint
+     ) {
+       const missing: string[] = []
+       if (!oidcConfig.authorizationEndpoint) missing.push('authorizationEndpoint')
+       if (!oidcConfig.tokenEndpoint) missing.push('tokenEndpoint')
+       if (!oidcConfig.jwksEndpoint) missing.push('jwksEndpoint')
+       logger.error('Missing required OIDC endpoints after discovery merge', {
+         missing,
+         authorizationEndpoint: oidcConfig.authorizationEndpoint,
+         tokenEndpoint: oidcConfig.tokenEndpoint,
+         jwksEndpoint: oidcConfig.jwksEndpoint,
+       })
+       return NextResponse.json(
+         {
+           error: `Missing required OIDC endpoints: ${missing.join(', ')}. Please provide these explicitly or verify the issuer supports OIDC discovery.`,
+         },
+         { status: 400 }
+       )
+     }
      providerConfig.oidcConfig = oidcConfig
    } else if (providerType === 'saml') {
      const {
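
Note (illustrative sketch, not the route code itself): the change above prefers explicitly supplied OIDC endpoints and only falls back to the issuer's discovery document for whatever is missing. A condensed TypeScript sketch of that merge order, with a hypothetical ProvidedEndpoints type standing in for the request body fields:

interface ProvidedEndpoints {
  authorizationEndpoint?: string
  tokenEndpoint?: string
  userInfoEndpoint?: string
  jwksEndpoint?: string
}

// Explicit values win; discovery only fills the gaps.
async function resolveOidcEndpoints(issuer: string, provided: ProvidedEndpoints) {
  const merged = { ...provided }
  if (!merged.authorizationEndpoint || !merged.tokenEndpoint || !merged.jwksEndpoint) {
    const url = `${issuer.replace(/\/$/, '')}/.well-known/openid-configuration`
    const res = await fetch(url, { headers: { Accept: 'application/json' } })
    if (!res.ok) throw new Error(`Discovery failed: ${res.status}`)
    const discovery = await res.json()
    merged.authorizationEndpoint ||= discovery.authorization_endpoint
    merged.tokenEndpoint ||= discovery.token_endpoint
    merged.userInfoEndpoint ||= discovery.userinfo_endpoint
    merged.jwksEndpoint ||= discovery.jwks_uri
  }
  return merged
}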

View File

@@ -1,86 +0,0 @@
/**
* GET /api/copilot/chat/[chatId]/active-stream
*
* Check if a chat has an active stream that can be resumed.
* Used by the client on page load to detect if there's an in-progress stream.
*/
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import {
getActiveStreamForChat,
getChunkCount,
getStreamMeta,
} from '@/lib/copilot/stream-persistence'
const logger = createLogger('CopilotActiveStreamAPI')
export async function GET(
req: NextRequest,
{ params }: { params: Promise<{ chatId: string }> }
) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { chatId } = await params
logger.info('Active stream check', { chatId, userId: session.user.id })
// Look up active stream ID from Redis
const streamId = await getActiveStreamForChat(chatId)
if (!streamId) {
logger.debug('No active stream found', { chatId })
return NextResponse.json({ hasActiveStream: false })
}
// Get stream metadata
const meta = await getStreamMeta(streamId)
if (!meta) {
logger.debug('Stream metadata not found', { streamId, chatId })
return NextResponse.json({ hasActiveStream: false })
}
// Verify the stream is still active
if (meta.status !== 'streaming') {
logger.debug('Stream not active', { streamId, chatId, status: meta.status })
return NextResponse.json({ hasActiveStream: false })
}
// Verify ownership
if (meta.userId !== session.user.id) {
logger.warn('Stream belongs to different user', {
streamId,
chatId,
requesterId: session.user.id,
ownerId: meta.userId,
})
return NextResponse.json({ hasActiveStream: false })
}
// Get current chunk count for client to track progress
const chunkCount = await getChunkCount(streamId)
logger.info('Active stream found', {
streamId,
chatId,
chunkCount,
toolCallsCount: meta.toolCalls.length,
})
return NextResponse.json({
hasActiveStream: true,
streamId,
chunkCount,
toolCalls: meta.toolCalls.filter(
(tc) => tc.state === 'pending' || tc.state === 'executing'
),
createdAt: meta.createdAt,
updatedAt: meta.updatedAt,
})
}
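
Note (for context on the removal above): this endpoint returned hasActiveStream plus resumption metadata. A hypothetical client-side check against it, with the route path and response shape taken from the handler shown above and the helper name purely illustrative:

// Illustrative client call against the removed active-stream endpoint.
async function checkActiveStream(chatId: string) {
  const res = await fetch(`/api/copilot/chat/${chatId}/active-stream`)
  if (!res.ok) throw new Error(`Active stream check failed: ${res.status}`)
  const body = await res.json()
  if (!body.hasActiveStream) return null
  // streamId and chunkCount told the client where to resume from.
  return { streamId: body.streamId as string, chunkCount: body.chunkCount as number }
}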

View File

@@ -2,7 +2,6 @@ import { db } from '@sim/db'
import { copilotChats } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, desc, eq } from 'drizzle-orm'
-import { after } from 'next/server'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
@@ -17,21 +16,6 @@ import {
  createRequestTracker,
  createUnauthorizedResponse,
} from '@/lib/copilot/request-helpers'
-import {
-  type RenderEvent,
-  serializeRenderEvent,
-} from '@/lib/copilot/render-events'
-import {
-  appendChunk,
-  appendContent,
-  checkAbortSignal,
-  completeStream,
-  createStream,
-  errorStream,
-  refreshStreamTTL,
-  updateToolCall,
-} from '@/lib/copilot/stream-persistence'
-import { transformStream } from '@/lib/copilot/stream-transformer'
import { getCredentialsServerTool } from '@/lib/copilot/tools/server/user/get-credentials'
import type { CopilotProviderConfig } from '@/lib/copilot/types'
import { env } from '@/lib/core/config/env'
@@ -508,205 +492,385 @@ export async function POST(req: NextRequest) {
) )
} }
// If streaming is requested, start background processing and return streamId immediately // If streaming is requested, forward the stream and update chat later
if (stream && simAgentResponse.body) { if (stream && simAgentResponse.body) {
// Create stream ID for persistence and resumption // Create user message to save
const streamId = crypto.randomUUID() const userMessage = {
id: userMessageIdToUse, // Consistent ID used for request and persistence
// Initialize stream state in Redis role: 'user',
await createStream({ content: message,
streamId, timestamp: new Date().toISOString(),
chatId: actualChatId!, ...(fileAttachments && fileAttachments.length > 0 && { fileAttachments }),
userId: authenticatedUserId, ...(Array.isArray(contexts) && contexts.length > 0 && { contexts }),
workflowId, ...(Array.isArray(contexts) &&
userMessageId: userMessageIdToUse, contexts.length > 0 && {
isClientSession: true, contentBlocks: [{ type: 'contexts', contexts: contexts as any, timestamp: Date.now() }],
}) }),
// Save user message to database immediately so it's available on refresh
// This is critical for stream resumption - user message must be persisted before stream starts
if (currentChat) {
const existingMessages = Array.isArray(currentChat.messages) ? currentChat.messages : []
const userMessage = {
id: userMessageIdToUse,
role: 'user',
content: message,
timestamp: new Date().toISOString(),
...(fileAttachments && fileAttachments.length > 0 && { fileAttachments }),
...(Array.isArray(contexts) && contexts.length > 0 && { contexts }),
...(Array.isArray(contexts) &&
contexts.length > 0 && {
contentBlocks: [{ type: 'contexts', contexts: contexts as any, timestamp: Date.now() }],
}),
}
await db
.update(copilotChats)
.set({
messages: [...existingMessages, userMessage],
updatedAt: new Date(),
})
.where(eq(copilotChats.id, actualChatId!))
logger.info(`[${tracker.requestId}] Saved user message before streaming`, {
chatId: actualChatId,
messageId: userMessageIdToUse,
})
} }
// Track last TTL refresh time // Create a pass-through stream that captures the response
const TTL_REFRESH_INTERVAL = 60000 // Refresh TTL every minute const transformedStream = new ReadableStream({
async start(controller) {
const encoder = new TextEncoder()
let assistantContent = ''
const toolCalls: any[] = []
let buffer = ''
const isFirstDone = true
let responseIdFromStart: string | undefined
let responseIdFromDone: string | undefined
// Track tool call progress to identify a safe done event
const announcedToolCallIds = new Set<string>()
const startedToolExecutionIds = new Set<string>()
const completedToolExecutionIds = new Set<string>()
let lastDoneResponseId: string | undefined
let lastSafeDoneResponseId: string | undefined
// Capture needed values for background task // Send chatId as first event
const capturedChatId = actualChatId! if (actualChatId) {
const capturedCurrentChat = currentChat const chatIdEvent = `data: ${JSON.stringify({
type: 'chat_id',
// Generate assistant message ID upfront chatId: actualChatId,
const assistantMessageId = crypto.randomUUID() })}\n\n`
controller.enqueue(encoder.encode(chatIdEvent))
// Start background processing task using the stream transformer logger.debug(`[${tracker.requestId}] Sent initial chatId event to client`)
// This processes the Sim Agent stream, executes tools, and emits render events
// Client will connect to /api/copilot/stream/{streamId} for live updates
const backgroundTask = (async () => {
// Start title generation if needed (runs in parallel)
if (capturedChatId && !capturedCurrentChat?.title && conversationHistory.length === 0) {
generateChatTitle(message)
.then(async (title) => {
if (title) {
await db
.update(copilotChats)
.set({ title, updatedAt: new Date() })
.where(eq(copilotChats.id, capturedChatId))
logger.info(`[${tracker.requestId}] Generated and saved title: ${title}`)
}
})
.catch((error) => {
logger.error(`[${tracker.requestId}] Title generation failed:`, error)
})
}
// Track accumulated content for final persistence
let accumulatedContent = ''
const accumulatedToolCalls: Array<{
id: string
name: string
args: Record<string, unknown>
state: string
result?: unknown
}> = []
try {
// Use the stream transformer to process the Sim Agent stream
await transformStream(simAgentResponse.body!, {
streamId,
chatId: capturedChatId,
userId: authenticatedUserId,
workflowId,
userMessageId: userMessageIdToUse,
assistantMessageId,
// Emit render events to Redis for client consumption
onRenderEvent: async (event: RenderEvent) => {
// Serialize and append to Redis
const serialized = serializeRenderEvent(event)
await appendChunk(streamId, serialized).catch(() => {})
// Also update stream metadata for specific events
switch (event.type) {
case 'text_delta':
accumulatedContent += (event as any).content || ''
appendContent(streamId, (event as any).content || '').catch(() => {})
break
case 'tool_pending':
updateToolCall(streamId, (event as any).toolCallId, {
id: (event as any).toolCallId,
name: (event as any).toolName,
args: (event as any).args || {},
state: 'pending',
}).catch(() => {})
break
case 'tool_executing':
updateToolCall(streamId, (event as any).toolCallId, {
state: 'executing',
}).catch(() => {})
break
case 'tool_success':
updateToolCall(streamId, (event as any).toolCallId, {
state: 'success',
result: (event as any).result,
}).catch(() => {})
accumulatedToolCalls.push({
id: (event as any).toolCallId,
name: (event as any).display?.label || '',
args: {},
state: 'success',
result: (event as any).result,
})
break
case 'tool_error':
updateToolCall(streamId, (event as any).toolCallId, {
state: 'error',
error: (event as any).error,
}).catch(() => {})
accumulatedToolCalls.push({
id: (event as any).toolCallId,
name: (event as any).display?.label || '',
args: {},
state: 'error',
})
break
}
},
// Persist data at key moments
onPersist: async (data) => {
if (data.type === 'message_complete') {
// Stream complete - save final message to DB
await completeStream(streamId, undefined)
}
},
// Check for user-initiated abort
isAborted: () => {
// We'll check Redis for abort signal synchronously cached
// For now, return false - proper abort checking can be async in transformer
return false
},
})
// Update chat with conversationId if available
if (capturedCurrentChat) {
await db
.update(copilotChats)
.set({ updatedAt: new Date() })
.where(eq(copilotChats.id, capturedChatId))
} }
logger.info(`[${tracker.requestId}] Background stream processing complete`, { // Start title generation in parallel if needed
streamId, if (actualChatId && !currentChat?.title && conversationHistory.length === 0) {
contentLength: accumulatedContent.length, generateChatTitle(message)
toolCallsCount: accumulatedToolCalls.length, .then(async (title) => {
}) if (title) {
} catch (error) { await db
logger.error(`[${tracker.requestId}] Background stream error`, { streamId, error }) .update(copilotChats)
await errorStream(streamId, error instanceof Error ? error.message : 'Unknown error') .set({
} title,
})() updatedAt: new Date(),
})
.where(eq(copilotChats.id, actualChatId!))
// Use after() to ensure background task completes even after response is sent const titleEvent = `data: ${JSON.stringify({
after(() => backgroundTask) type: 'title_updated',
title: title,
})}\n\n`
controller.enqueue(encoder.encode(titleEvent))
logger.info(`[${tracker.requestId}] Generated and saved title: ${title}`)
}
})
.catch((error) => {
logger.error(`[${tracker.requestId}] Title generation failed:`, error)
})
} else {
logger.debug(`[${tracker.requestId}] Skipping title generation`)
}
// Return streamId immediately - client will connect to stream endpoint // Forward the sim agent stream and capture assistant response
logger.info(`[${tracker.requestId}] Returning streamId for client to connect`, { const reader = simAgentResponse.body!.getReader()
streamId, const decoder = new TextDecoder()
chatId: capturedChatId,
try {
while (true) {
const { done, value } = await reader.read()
if (done) {
break
}
// Decode and parse SSE events for logging and capturing content
const decodedChunk = decoder.decode(value, { stream: true })
buffer += decodedChunk
const lines = buffer.split('\n')
buffer = lines.pop() || '' // Keep incomplete line in buffer
for (const line of lines) {
if (line.trim() === '') continue // Skip empty lines
if (line.startsWith('data: ') && line.length > 6) {
try {
const jsonStr = line.slice(6)
// Check if the JSON string is unusually large (potential streaming issue)
if (jsonStr.length > 50000) {
// 50KB limit
logger.warn(`[${tracker.requestId}] Large SSE event detected`, {
size: jsonStr.length,
preview: `${jsonStr.substring(0, 100)}...`,
})
}
const event = JSON.parse(jsonStr)
// Log different event types comprehensively
switch (event.type) {
case 'content':
if (event.data) {
assistantContent += event.data
}
break
case 'reasoning':
logger.debug(
`[${tracker.requestId}] Reasoning chunk received (${(event.data || event.content || '').length} chars)`
)
break
case 'tool_call':
if (!event.data?.partial) {
toolCalls.push(event.data)
if (event.data?.id) {
announcedToolCallIds.add(event.data.id)
}
}
break
case 'tool_generating':
if (event.toolCallId) {
startedToolExecutionIds.add(event.toolCallId)
}
break
case 'tool_result':
if (event.toolCallId) {
completedToolExecutionIds.add(event.toolCallId)
}
break
case 'tool_error':
logger.error(`[${tracker.requestId}] Tool error:`, {
toolCallId: event.toolCallId,
toolName: event.toolName,
error: event.error,
success: event.success,
})
if (event.toolCallId) {
completedToolExecutionIds.add(event.toolCallId)
}
break
case 'start':
if (event.data?.responseId) {
responseIdFromStart = event.data.responseId
}
break
case 'done':
if (event.data?.responseId) {
responseIdFromDone = event.data.responseId
lastDoneResponseId = responseIdFromDone
// Mark this done as safe only if no tool call is currently in progress or pending
const announced = announcedToolCallIds.size
const completed = completedToolExecutionIds.size
const started = startedToolExecutionIds.size
const hasToolInProgress = announced > completed || started > completed
if (!hasToolInProgress) {
lastSafeDoneResponseId = responseIdFromDone
}
}
break
case 'error':
break
default:
}
// Emit to client: rewrite 'error' events into user-friendly assistant message
if (event?.type === 'error') {
try {
const displayMessage: string =
(event?.data && (event.data.displayMessage as string)) ||
'Sorry, I encountered an error. Please try again.'
const formatted = `_${displayMessage}_`
// Accumulate so it persists to DB as assistant content
assistantContent += formatted
// Send as content chunk
try {
controller.enqueue(
encoder.encode(
`data: ${JSON.stringify({ type: 'content', data: formatted })}\n\n`
)
)
} catch (enqueueErr) {
reader.cancel()
break
}
// Then close this response cleanly for the client
try {
controller.enqueue(
encoder.encode(`data: ${JSON.stringify({ type: 'done' })}\n\n`)
)
} catch (enqueueErr) {
reader.cancel()
break
}
} catch {}
// Do not forward the original error event
} else {
// Forward original event to client
try {
controller.enqueue(encoder.encode(`data: ${jsonStr}\n\n`))
} catch (enqueueErr) {
reader.cancel()
break
}
}
} catch (e) {
// Enhanced error handling for large payloads and parsing issues
const lineLength = line.length
const isLargePayload = lineLength > 10000
if (isLargePayload) {
logger.error(
`[${tracker.requestId}] Failed to parse large SSE event (${lineLength} chars)`,
{
error: e,
preview: `${line.substring(0, 200)}...`,
size: lineLength,
}
)
} else {
logger.warn(
`[${tracker.requestId}] Failed to parse SSE event: "${line.substring(0, 200)}..."`,
e
)
}
}
} else if (line.trim() && line !== 'data: [DONE]') {
logger.debug(`[${tracker.requestId}] Non-SSE line from sim agent: "${line}"`)
}
}
}
// Process any remaining buffer
if (buffer.trim()) {
logger.debug(`[${tracker.requestId}] Processing remaining buffer: "${buffer}"`)
if (buffer.startsWith('data: ')) {
try {
const jsonStr = buffer.slice(6)
const event = JSON.parse(jsonStr)
if (event.type === 'content' && event.data) {
assistantContent += event.data
}
// Forward remaining event, applying same error rewrite behavior
if (event?.type === 'error') {
const displayMessage: string =
(event?.data && (event.data.displayMessage as string)) ||
'Sorry, I encountered an error. Please try again.'
const formatted = `_${displayMessage}_`
assistantContent += formatted
try {
controller.enqueue(
encoder.encode(
`data: ${JSON.stringify({ type: 'content', data: formatted })}\n\n`
)
)
controller.enqueue(
encoder.encode(`data: ${JSON.stringify({ type: 'done' })}\n\n`)
)
} catch (enqueueErr) {
reader.cancel()
}
} else {
try {
controller.enqueue(encoder.encode(`data: ${jsonStr}\n\n`))
} catch (enqueueErr) {
reader.cancel()
}
}
} catch (e) {
logger.warn(`[${tracker.requestId}] Failed to parse final buffer: "${buffer}"`)
}
}
}
// Log final streaming summary
logger.info(`[${tracker.requestId}] Streaming complete summary:`, {
totalContentLength: assistantContent.length,
toolCallsCount: toolCalls.length,
hasContent: assistantContent.length > 0,
toolNames: toolCalls.map((tc) => tc?.name).filter(Boolean),
})
// NOTE: Messages are saved by the client via update-messages endpoint with full contentBlocks.
// Server only updates conversationId here to avoid overwriting client's richer save.
if (currentChat) {
// Persist only a safe conversationId to avoid continuing from a state that expects tool outputs
const previousConversationId = currentChat?.conversationId as string | undefined
const responseId = lastSafeDoneResponseId || previousConversationId || undefined
if (responseId) {
await db
.update(copilotChats)
.set({
updatedAt: new Date(),
conversationId: responseId,
})
.where(eq(copilotChats.id, actualChatId!))
logger.info(
`[${tracker.requestId}] Updated conversationId for chat ${actualChatId}`,
{
updatedConversationId: responseId,
}
)
}
}
} catch (error) {
logger.error(`[${tracker.requestId}] Error processing stream:`, error)
// Send an error event to the client before closing so it knows what happened
try {
const errorMessage =
error instanceof Error && error.message === 'terminated'
? 'Connection to AI service was interrupted. Please try again.'
: 'An unexpected error occurred while processing the response.'
const encoder = new TextEncoder()
// Send error as content so it shows in the chat
controller.enqueue(
encoder.encode(
`data: ${JSON.stringify({ type: 'content', data: `\n\n_${errorMessage}_` })}\n\n`
)
)
// Send done event to properly close the stream on client
controller.enqueue(encoder.encode(`data: ${JSON.stringify({ type: 'done' })}\n\n`))
} catch (enqueueError) {
// Stream might already be closed, that's ok
logger.warn(
`[${tracker.requestId}] Could not send error event to client:`,
enqueueError
)
}
} finally {
try {
controller.close()
} catch {
// Controller might already be closed
}
}
},
}) })
return NextResponse.json({ const response = new Response(transformedStream, {
success: true, headers: {
streamId, 'Content-Type': 'text/event-stream',
chatId: capturedChatId, 'Cache-Control': 'no-cache',
Connection: 'keep-alive',
'X-Accel-Buffering': 'no',
},
}) })
logger.info(`[${tracker.requestId}] Returning streaming response to client`, {
duration: tracker.getDuration(),
chatId: actualChatId,
headers: {
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache',
Connection: 'keep-alive',
},
})
return response
} }
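
Note (illustrative sketch, not the route code itself): the replacement streaming path above reads the Sim Agent response with a reader, splits the buffer on newlines, and re-emits "data: ..." events to the client while rewriting error events into assistant content. Stripped of persistence, logging, and error rewriting, the core SSE framing loop is approximately:

// Condensed sketch of the SSE pass-through pattern used above (details omitted).
async function forwardSse(upstream: ReadableStream<Uint8Array>, emit: (event: unknown) => void) {
  const reader = upstream.getReader()
  const decoder = new TextDecoder()
  let buffer = ''
  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    buffer += decoder.decode(value, { stream: true })
    const lines = buffer.split('\n')
    buffer = lines.pop() || '' // keep any incomplete line for the next chunk
    for (const line of lines) {
      if (!line.startsWith('data: ')) continue
      try {
        emit(JSON.parse(line.slice(6)))
      } catch {
        // ignore malformed events, as the route logs and continues
      }
    }
  }
}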
// For non-streaming responses // For non-streaming responses
@@ -735,7 +899,7 @@ export async function POST(req: NextRequest) {
// Save messages if we have a chat
if (currentChat && responseData.content) {
  const userMessage = {
-   id: userMessageIdToUse,
+   id: userMessageIdToUse, // Consistent ID used for request and persistence
    role: 'user',
    content: message,
    timestamp: new Date().toISOString(),

View File

@@ -224,7 +224,7 @@ export async function POST(req: NextRequest) {
  hasApiKey: !!executionParams.apiKey,
})
-const result = await executeTool(resolvedToolName, executionParams, true)
+const result = await executeTool(resolvedToolName, executionParams)
logger.info(`[${tracker.requestId}] Tool execution complete`, {
  toolName,

View File

@@ -1,64 +0,0 @@
/**
* POST /api/copilot/stream/[streamId]/abort
*
* Signal the server to abort an active stream.
* The original request handler will check for this signal and cancel the stream.
*/
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { getStreamMeta, setAbortSignal } from '@/lib/copilot/stream-persistence'
const logger = createLogger('CopilotStreamAbortAPI')
export async function POST(
req: NextRequest,
{ params }: { params: Promise<{ streamId: string }> }
) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { streamId } = await params
logger.info('Stream abort request', { streamId, userId: session.user.id })
const meta = await getStreamMeta(streamId)
if (!meta) {
logger.info('Stream not found for abort', { streamId })
return NextResponse.json({ error: 'Stream not found' }, { status: 404 })
}
// Verify ownership
if (meta.userId !== session.user.id) {
logger.warn('Unauthorized abort attempt', {
streamId,
requesterId: session.user.id,
ownerId: meta.userId,
})
return NextResponse.json({ error: 'Forbidden' }, { status: 403 })
}
// Stream already finished
if (meta.status !== 'streaming') {
logger.info('Stream already finished, nothing to abort', {
streamId,
status: meta.status,
})
return NextResponse.json({
success: true,
message: 'Stream already finished',
})
}
// Set abort signal in Redis
await setAbortSignal(streamId)
logger.info('Abort signal set for stream', { streamId })
return NextResponse.json({ success: true })
}
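
Note (for context on the removal above): this route only set an abort flag in Redis that the original stream handler checked. A hypothetical client call, with the path taken from the handler shown above:

// Illustrative call against the removed abort endpoint.
async function abortCopilotStream(streamId: string): Promise<boolean> {
  const res = await fetch(`/api/copilot/stream/${streamId}/abort`, { method: 'POST' })
  if (!res.ok) return false
  const body = await res.json()
  return body.success === true
}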

View File

@@ -1,146 +0,0 @@
import { createLogger } from '@sim/logger'
import { NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import {
clearPendingDiff,
getPendingDiff,
getStreamMeta,
setPendingDiff,
} from '@/lib/copilot/stream-persistence'
const logger = createLogger('CopilotPendingDiffAPI')
/**
* GET /api/copilot/stream/[streamId]/pending-diff
* Retrieve pending diff state for a stream (used for resumption after page refresh)
*/
export async function GET(
request: Request,
{ params }: { params: Promise<{ streamId: string }> }
) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { streamId } = await params
if (!streamId) {
return NextResponse.json({ error: 'Stream ID required' }, { status: 400 })
}
// Verify user owns this stream
const meta = await getStreamMeta(streamId)
if (!meta) {
return NextResponse.json({ error: 'Stream not found' }, { status: 404 })
}
if (meta.userId !== session.user.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 403 })
}
// Get pending diff
const pendingDiff = await getPendingDiff(streamId)
if (!pendingDiff) {
return NextResponse.json({ pendingDiff: null })
}
logger.info('Retrieved pending diff', {
streamId,
toolCallId: pendingDiff.toolCallId,
})
return NextResponse.json({ pendingDiff })
} catch (error) {
logger.error('Failed to get pending diff', { error })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
/**
* POST /api/copilot/stream/[streamId]/pending-diff
* Store pending diff state for a stream
*/
export async function POST(
request: Request,
{ params }: { params: Promise<{ streamId: string }> }
) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { streamId } = await params
if (!streamId) {
return NextResponse.json({ error: 'Stream ID required' }, { status: 400 })
}
// Verify user owns this stream
const meta = await getStreamMeta(streamId)
if (!meta) {
return NextResponse.json({ error: 'Stream not found' }, { status: 404 })
}
if (meta.userId !== session.user.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 403 })
}
const body = await request.json()
const { pendingDiff } = body
if (!pendingDiff || !pendingDiff.toolCallId) {
return NextResponse.json({ error: 'Invalid pending diff data' }, { status: 400 })
}
await setPendingDiff(streamId, pendingDiff)
logger.info('Stored pending diff', {
streamId,
toolCallId: pendingDiff.toolCallId,
})
return NextResponse.json({ success: true })
} catch (error) {
logger.error('Failed to store pending diff', { error })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
/**
* DELETE /api/copilot/stream/[streamId]/pending-diff
* Clear pending diff state for a stream
*/
export async function DELETE(
request: Request,
{ params }: { params: Promise<{ streamId: string }> }
) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { streamId } = await params
if (!streamId) {
return NextResponse.json({ error: 'Stream ID required' }, { status: 400 })
}
// Verify user owns this stream (if it exists - might already be cleaned up)
const meta = await getStreamMeta(streamId)
if (meta && meta.userId !== session.user.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 403 })
}
await clearPendingDiff(streamId)
logger.info('Cleared pending diff', { streamId })
return NextResponse.json({ success: true })
} catch (error) {
logger.error('Failed to clear pending diff', { error })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
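
Note (for context on the removal above): the pending-diff routes formed a small CRUD surface keyed by streamId. A hypothetical client wrapper over the three methods, using only the paths and payload shape shown in the handlers above:

// Illustrative wrapper over the removed pending-diff endpoints.
const pendingDiffApi = {
  get: (streamId: string) =>
    fetch(`/api/copilot/stream/${streamId}/pending-diff`).then((r) => r.json()),
  save: (streamId: string, pendingDiff: { toolCallId: string }) =>
    fetch(`/api/copilot/stream/${streamId}/pending-diff`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ pendingDiff }),
    }),
  clear: (streamId: string) =>
    fetch(`/api/copilot/stream/${streamId}/pending-diff`, { method: 'DELETE' }),
}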

View File

@@ -1,160 +0,0 @@
/**
* GET /api/copilot/stream/[streamId]
*
* Resume an active copilot stream.
* - If stream is still active: returns SSE with replay of missed chunks + live updates via Redis Pub/Sub
* - If stream is completed: returns JSON indicating to load from database
* - If stream not found: returns 404
*/
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import {
getChunks,
getStreamMeta,
subscribeToStream,
} from '@/lib/copilot/stream-persistence'
const logger = createLogger('CopilotStreamResumeAPI')
const SSE_HEADERS = {
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache',
Connection: 'keep-alive',
'X-Accel-Buffering': 'no',
}
export async function GET(
req: NextRequest,
{ params }: { params: Promise<{ streamId: string }> }
) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { streamId } = await params
const fromChunk = parseInt(req.nextUrl.searchParams.get('from') || '0')
logger.info('Stream resume request', { streamId, fromChunk, userId: session.user.id })
const meta = await getStreamMeta(streamId)
if (!meta) {
logger.info('Stream not found or expired', { streamId })
return NextResponse.json(
{
status: 'not_found',
message: 'Stream not found or expired. Reload chat from database.',
},
{ status: 404 }
)
}
// Verify ownership
if (meta.userId !== session.user.id) {
logger.warn('Unauthorized stream access attempt', {
streamId,
requesterId: session.user.id,
ownerId: meta.userId,
})
return NextResponse.json({ error: 'Forbidden' }, { status: 403 })
}
// Stream completed - tell client to load from database
if (meta.status === 'completed') {
logger.info('Stream already completed', { streamId, chatId: meta.chatId })
return NextResponse.json({
status: 'completed',
chatId: meta.chatId,
message: 'Stream completed. Messages saved to database.',
})
}
// Stream errored
if (meta.status === 'error') {
logger.info('Stream encountered error', { streamId, chatId: meta.chatId })
return NextResponse.json({
status: 'error',
chatId: meta.chatId,
message: 'Stream encountered an error.',
})
}
// Stream still active - return SSE with replay + live updates
logger.info('Resuming active stream', { streamId, chatId: meta.chatId })
const encoder = new TextEncoder()
const abortController = new AbortController()
// Handle client disconnect
req.signal.addEventListener('abort', () => {
logger.info('Client disconnected from resumed stream', { streamId })
abortController.abort()
})
const responseStream = new ReadableStream({
async start(controller) {
try {
// 1. Replay missed chunks (single read from Redis LIST)
const missedChunks = await getChunks(streamId, fromChunk)
logger.info('Replaying missed chunks', {
streamId,
fromChunk,
missedChunkCount: missedChunks.length,
})
for (const chunk of missedChunks) {
// Chunks are already in SSE format, just re-encode
controller.enqueue(encoder.encode(chunk))
}
// 2. Subscribe to live chunks via Redis Pub/Sub (blocking, no polling)
await subscribeToStream(
streamId,
(chunk) => {
try {
controller.enqueue(encoder.encode(chunk))
} catch {
// Client disconnected
abortController.abort()
}
},
() => {
// Stream complete - close connection
logger.info('Stream completed during resume', { streamId })
try {
controller.close()
} catch {
// Already closed
}
},
abortController.signal
)
} catch (error) {
logger.error('Error in stream resume', {
streamId,
error: error instanceof Error ? error.message : String(error),
})
try {
controller.close()
} catch {
// Already closed
}
}
},
cancel() {
abortController.abort()
},
})
return new Response(responseStream, {
headers: {
...SSE_HEADERS,
'X-Stream-Id': streamId,
'X-Chat-Id': meta.chatId,
},
})
}
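
Note (for context on the removal above): the resume route replayed missed chunks from ?from=N and then switched to live pub/sub delivery over the same SSE response, or returned JSON when the stream had already completed or errored. A hypothetical consumer, grounded only in the handler shown above:

// Illustrative consumer of the removed resume endpoint.
async function resumeStream(streamId: string, fromChunk: number, onChunk: (sse: string) => void) {
  const res = await fetch(`/api/copilot/stream/${streamId}?from=${fromChunk}`)
  if (res.status === 404) return 'not_found'
  if (res.headers.get('content-type')?.includes('application/json')) {
    // completed or errored: the route returns JSON telling the client to reload from the DB
    return (await res.json()).status as string
  }
  const reader = res.body!.getReader()
  const decoder = new TextDecoder()
  while (true) {
    const { done, value } = await reader.read()
    if (done) return 'closed'
    onChunk(decoder.decode(value, { stream: true }))
  }
}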

View File

@@ -6,9 +6,10 @@ import { createLogger } from '@sim/logger'
import binaryExtensionsList from 'binary-extensions'
import { type NextRequest, NextResponse } from 'next/server'
import { checkHybridAuth } from '@/lib/auth/hybrid'
-import { createPinnedUrl, validateUrlWithDNS } from '@/lib/core/security/input-validation'
+import { secureFetchWithPinnedIP, validateUrlWithDNS } from '@/lib/core/security/input-validation'
import { isSupportedFileType, parseFile } from '@/lib/file-parsers'
import { isUsingCloudStorage, type StorageContext, StorageService } from '@/lib/uploads'
+import { uploadExecutionFile } from '@/lib/uploads/contexts/execution'
import { UPLOAD_DIR_SERVER } from '@/lib/uploads/core/setup.server'
import { getFileMetadataByKey } from '@/lib/uploads/server/metadata'
import {
@@ -21,6 +22,7 @@ import {
} from '@/lib/uploads/utils/file-utils'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
import { verifyFileAccess } from '@/app/api/files/authorization'
+import type { UserFile } from '@/executor/types'
import '@/lib/uploads/core/setup.server'
export const dynamic = 'force-dynamic'
@@ -30,6 +32,12 @@ const logger = createLogger('FilesParseAPI')
const MAX_DOWNLOAD_SIZE_BYTES = 100 * 1024 * 1024 // 100 MB
const DOWNLOAD_TIMEOUT_MS = 30000 // 30 seconds
+interface ExecutionContext {
+  workspaceId: string
+  workflowId: string
+  executionId: string
+}
interface ParseResult {
  success: boolean
  content?: string
@@ -37,6 +45,7 @@ interface ParseResult {
  filePath: string
  originalName?: string // Original filename from database (for workspace files)
  viewerUrl?: string | null // Viewer URL for the file if available
+  userFile?: UserFile // UserFile object for the raw file
  metadata?: {
    fileType: string
    size: number
@@ -70,27 +79,45 @@ export async function POST(request: NextRequest) {
    const userId = authResult.userId
    const requestData = await request.json()
-   const { filePath, fileType, workspaceId } = requestData
+   const { filePath, fileType, workspaceId, workflowId, executionId } = requestData
    if (!filePath || (typeof filePath === 'string' && filePath.trim() === '')) {
      return NextResponse.json({ success: false, error: 'No file path provided' }, { status: 400 })
    }
-   logger.info('File parse request received:', { filePath, fileType, workspaceId, userId })
+   // Build execution context if all required fields are present
+   const executionContext: ExecutionContext | undefined =
+     workspaceId && workflowId && executionId
+       ? { workspaceId, workflowId, executionId }
+       : undefined
+   logger.info('File parse request received:', {
+     filePath,
+     fileType,
+     workspaceId,
+     userId,
+     hasExecutionContext: !!executionContext,
+   })
    if (Array.isArray(filePath)) {
      const results = []
-     for (const path of filePath) {
-       if (!path || (typeof path === 'string' && path.trim() === '')) {
+     for (const singlePath of filePath) {
+       if (!singlePath || (typeof singlePath === 'string' && singlePath.trim() === '')) {
          results.push({
            success: false,
            error: 'Empty file path in array',
-           filePath: path || '',
+           filePath: singlePath || '',
          })
          continue
        }
-       const result = await parseFileSingle(path, fileType, workspaceId, userId)
+       const result = await parseFileSingle(
+         singlePath,
+         fileType,
+         workspaceId,
+         userId,
+         executionContext
+       )
        if (result.metadata) {
          result.metadata.processingTime = Date.now() - startTime
        }
@@ -106,6 +133,7 @@ export async function POST(request: NextRequest) {
          fileType: result.metadata?.fileType || 'application/octet-stream',
          size: result.metadata?.size || 0,
          binary: false,
+         file: result.userFile,
        },
        filePath: result.filePath,
        viewerUrl: result.viewerUrl,
@@ -121,7 +149,7 @@ export async function POST(request: NextRequest) {
      })
    }
-   const result = await parseFileSingle(filePath, fileType, workspaceId, userId)
+   const result = await parseFileSingle(filePath, fileType, workspaceId, userId, executionContext)
    if (result.metadata) {
      result.metadata.processingTime = Date.now() - startTime
@@ -137,6 +165,7 @@ export async function POST(request: NextRequest) {
        fileType: result.metadata?.fileType || 'application/octet-stream',
        size: result.metadata?.size || 0,
        binary: false,
+       file: result.userFile,
      },
      filePath: result.filePath,
      viewerUrl: result.viewerUrl,
@@ -164,7 +193,8 @@ async function parseFileSingle(
  filePath: string,
  fileType: string,
  workspaceId: string,
- userId: string
+ userId: string,
+ executionContext?: ExecutionContext
): Promise<ParseResult> {
  logger.info('Parsing file:', filePath)
@@ -186,18 +216,18 @@ async function parseFileSingle(
  }
  if (filePath.includes('/api/files/serve/')) {
-   return handleCloudFile(filePath, fileType, undefined, userId)
+   return handleCloudFile(filePath, fileType, undefined, userId, executionContext)
  }
  if (filePath.startsWith('http://') || filePath.startsWith('https://')) {
-   return handleExternalUrl(filePath, fileType, workspaceId, userId)
+   return handleExternalUrl(filePath, fileType, workspaceId, userId, executionContext)
  }
  if (isUsingCloudStorage()) {
-   return handleCloudFile(filePath, fileType, undefined, userId)
+   return handleCloudFile(filePath, fileType, undefined, userId, executionContext)
  }
- return handleLocalFile(filePath, fileType, userId)
+ return handleLocalFile(filePath, fileType, userId, executionContext)
}
/**
@@ -230,12 +260,14 @@ function validateFilePath(filePath: string): { isValid: boolean; error?: string
/**
 * Handle external URL
 * If workspaceId is provided, checks if file already exists and saves to workspace if not
+ * If executionContext is provided, also stores the file in execution storage and returns UserFile
 */
async function handleExternalUrl(
  url: string,
  fileType: string,
  workspaceId: string,
- userId: string
+ userId: string,
+ executionContext?: ExecutionContext
): Promise<ParseResult> {
  try {
    logger.info('Fetching external URL:', url)
@@ -312,17 +344,13 @@ async function handleExternalUrl(
        if (existingFile) {
          const storageFilePath = `/api/files/serve/${existingFile.key}`
-         return handleCloudFile(storageFilePath, fileType, 'workspace', userId)
+         return handleCloudFile(storageFilePath, fileType, 'workspace', userId, executionContext)
        }
      }
    }
-   const pinnedUrl = createPinnedUrl(url, urlValidation.resolvedIP!)
-   const response = await fetch(pinnedUrl, {
-     signal: AbortSignal.timeout(DOWNLOAD_TIMEOUT_MS),
-     headers: {
-       Host: urlValidation.originalHostname!,
-     },
+   const response = await secureFetchWithPinnedIP(url, urlValidation.resolvedIP!, {
+     timeout: DOWNLOAD_TIMEOUT_MS,
    })
    if (!response.ok) {
      throw new Error(`Failed to fetch URL: ${response.status} ${response.statusText}`)
@@ -341,6 +369,19 @@ async function handleExternalUrl(
logger.info(`Downloaded file from URL: ${url}, size: ${buffer.length} bytes`) logger.info(`Downloaded file from URL: ${url}, size: ${buffer.length} bytes`)
let userFile: UserFile | undefined
const mimeType = response.headers.get('content-type') || getMimeTypeFromExtension(extension)
if (executionContext) {
try {
userFile = await uploadExecutionFile(executionContext, buffer, filename, mimeType, userId)
logger.info(`Stored file in execution storage: ${filename}`, { key: userFile.key })
} catch (uploadError) {
logger.warn(`Failed to store file in execution storage:`, uploadError)
// Continue without userFile - parsing can still work
}
}
if (shouldCheckWorkspace) { if (shouldCheckWorkspace) {
try { try {
const permission = await getUserEntityPermissions(userId, 'workspace', workspaceId) const permission = await getUserEntityPermissions(userId, 'workspace', workspaceId)
@@ -353,8 +394,6 @@ async function handleExternalUrl(
          })
        } else {
          const { uploadWorkspaceFile } = await import('@/lib/uploads/contexts/workspace')
-         const mimeType =
-           response.headers.get('content-type') || getMimeTypeFromExtension(extension)
          await uploadWorkspaceFile(workspaceId, userId, buffer, filename, mimeType)
          logger.info(`Saved URL file to workspace storage: ${filename}`)
        }
@@ -363,17 +402,23 @@ async function handleExternalUrl(
      }
    }
+   let parseResult: ParseResult
    if (extension === 'pdf') {
-     return await handlePdfBuffer(buffer, filename, fileType, url)
-   }
-   if (extension === 'csv') {
-     return await handleCsvBuffer(buffer, filename, fileType, url)
-   }
-   if (isSupportedFileType(extension)) {
-     return await handleGenericTextBuffer(buffer, filename, extension, fileType, url)
-   }
-   return handleGenericBuffer(buffer, filename, extension, fileType)
+     parseResult = await handlePdfBuffer(buffer, filename, fileType, url)
+   } else if (extension === 'csv') {
+     parseResult = await handleCsvBuffer(buffer, filename, fileType, url)
+   } else if (isSupportedFileType(extension)) {
+     parseResult = await handleGenericTextBuffer(buffer, filename, extension, fileType, url)
+   } else {
+     parseResult = handleGenericBuffer(buffer, filename, extension, fileType)
+   }
+   // Attach userFile to the result
+   if (userFile) {
+     parseResult.userFile = userFile
+   }
+   return parseResult
  } catch (error) {
    logger.error(`Error handling external URL ${url}:`, error)
    return {
@@ -386,12 +431,15 @@ async function handleExternalUrl(
/**
 * Handle file stored in cloud storage
+ * If executionContext is provided and file is not already from execution storage,
+ * copies the file to execution storage and returns UserFile
 */
async function handleCloudFile(
  filePath: string,
  fileType: string,
  explicitContext: string | undefined,
- userId: string
+ userId: string,
+ executionContext?: ExecutionContext
): Promise<ParseResult> {
  try {
    const cloudKey = extractStorageKey(filePath)
@@ -438,6 +486,7 @@ async function handleCloudFile(
    const filename = originalFilename || cloudKey.split('/').pop() || cloudKey
    const extension = path.extname(filename).toLowerCase().substring(1)
+   const mimeType = getMimeTypeFromExtension(extension)
    const normalizedFilePath = `/api/files/serve/${encodeURIComponent(cloudKey)}?context=${context}`
    let workspaceIdFromKey: string | undefined
@@ -453,6 +502,39 @@ async function handleCloudFile(
const viewerUrl = getViewerUrl(cloudKey, workspaceIdFromKey) const viewerUrl = getViewerUrl(cloudKey, workspaceIdFromKey)
// Store file in execution storage if executionContext is provided
let userFile: UserFile | undefined
if (executionContext) {
// If file is already from execution context, create UserFile reference without re-uploading
if (context === 'execution') {
userFile = {
id: `file_${Date.now()}_${Math.random().toString(36).substring(2, 9)}`,
name: filename,
url: normalizedFilePath,
size: fileBuffer.length,
type: mimeType,
key: cloudKey,
context: 'execution',
}
logger.info(`Created UserFile reference for existing execution file: ${filename}`)
} else {
// Copy from workspace/other storage to execution storage
try {
userFile = await uploadExecutionFile(
executionContext,
fileBuffer,
filename,
mimeType,
userId
)
logger.info(`Copied file to execution storage: ${filename}`, { key: userFile.key })
} catch (uploadError) {
logger.warn(`Failed to copy file to execution storage:`, uploadError)
}
}
}
let parseResult: ParseResult let parseResult: ParseResult
if (extension === 'pdf') { if (extension === 'pdf') {
parseResult = await handlePdfBuffer(fileBuffer, filename, fileType, normalizedFilePath) parseResult = await handlePdfBuffer(fileBuffer, filename, fileType, normalizedFilePath)
@@ -477,6 +559,11 @@ async function handleCloudFile(
parseResult.viewerUrl = viewerUrl parseResult.viewerUrl = viewerUrl
// Attach userFile to the result
if (userFile) {
parseResult.userFile = userFile
}
return parseResult return parseResult
} catch (error) { } catch (error) {
logger.error(`Error handling cloud file ${filePath}:`, error) logger.error(`Error handling cloud file ${filePath}:`, error)
@@ -500,7 +587,8 @@ async function handleCloudFile(
async function handleLocalFile(
  filePath: string,
  fileType: string,
- userId: string
+ userId: string,
+ executionContext?: ExecutionContext
): Promise<ParseResult> {
  try {
    const filename = filePath.split('/').pop() || filePath
@@ -540,13 +628,32 @@ async function handleLocalFile(
const hash = createHash('md5').update(fileBuffer).digest('hex') const hash = createHash('md5').update(fileBuffer).digest('hex')
const extension = path.extname(filename).toLowerCase().substring(1) const extension = path.extname(filename).toLowerCase().substring(1)
const mimeType = fileType || getMimeTypeFromExtension(extension)
// Store file in execution storage if executionContext is provided
let userFile: UserFile | undefined
if (executionContext) {
try {
userFile = await uploadExecutionFile(
executionContext,
fileBuffer,
filename,
mimeType,
userId
)
logger.info(`Stored local file in execution storage: ${filename}`, { key: userFile.key })
} catch (uploadError) {
logger.warn(`Failed to store local file in execution storage:`, uploadError)
}
}
    return {
      success: true,
      content: result.content,
      filePath,
+     userFile,
      metadata: {
-       fileType: fileType || getMimeTypeFromExtension(extension),
+       fileType: mimeType,
        size: stats.size,
        hash,
        processingTime: 0,
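
Note (illustrative sketch, not the route code itself): with the changes above, callers that pass a full execution context get a file entry (the UserFile stored in execution storage) in output alongside the parsed content. A hypothetical request against this route; the placeholder IDs are assumptions for illustration:

// Illustrative call into the parse route with an execution context.
async function parseWithExecutionContext(filePath: string) {
  const res = await fetch('/api/files/parse', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      filePath,
      workspaceId: 'ws_123', // placeholder IDs for illustration
      workflowId: 'wf_456',
      executionId: 'exec_789',
    }),
  })
  const body = await res.json()
  // body.output.file is the UserFile when the execution-storage upload succeeded
  return body.output?.file
}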

View File

@@ -1,395 +0,0 @@
import { createLogger } from '@sim/logger'
import type { NextRequest } from 'next/server'
import { NextResponse } from 'next/server'
import { z } from 'zod'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { generateInternalToken } from '@/lib/auth/internal'
import { isDev } from '@/lib/core/config/feature-flags'
import { createPinnedUrl, validateUrlWithDNS } from '@/lib/core/security/input-validation'
import { generateRequestId } from '@/lib/core/utils/request'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { executeTool } from '@/tools'
import { getTool, validateRequiredParametersAfterMerge } from '@/tools/utils'
const logger = createLogger('ProxyAPI')
const proxyPostSchema = z.object({
toolId: z.string().min(1, 'toolId is required'),
params: z.record(z.any()).optional().default({}),
executionContext: z
.object({
workflowId: z.string().optional(),
workspaceId: z.string().optional(),
executionId: z.string().optional(),
userId: z.string().optional(),
})
.optional(),
})
/**
* Creates a minimal set of default headers for proxy requests
* @returns Record of HTTP headers
*/
const getProxyHeaders = (): Record<string, string> => {
return {
'User-Agent':
'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/135.0.0.0 Safari/537.36',
Accept: '*/*',
'Accept-Encoding': 'gzip, deflate, br',
'Cache-Control': 'no-cache',
Connection: 'keep-alive',
}
}
/**
* Formats a response with CORS headers
* @param responseData Response data object
* @param status HTTP status code
* @returns NextResponse with CORS headers
*/
const formatResponse = (responseData: any, status = 200) => {
return NextResponse.json(responseData, {
status,
headers: {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Methods': 'GET, POST, PUT, DELETE, OPTIONS',
'Access-Control-Allow-Headers': 'Content-Type, Authorization',
},
})
}
/**
* Creates an error response with consistent formatting
* @param error Error object or message
* @param status HTTP status code
* @param additionalData Additional data to include in the response
* @returns Formatted error response
*/
const createErrorResponse = (error: any, status = 500, additionalData = {}) => {
const errorMessage = error instanceof Error ? error.message : String(error)
const errorStack = error instanceof Error ? error.stack : undefined
logger.error('Creating error response', {
errorMessage,
status,
stack: isDev ? errorStack : undefined,
})
return formatResponse(
{
success: false,
error: errorMessage,
stack: isDev ? errorStack : undefined,
...additionalData,
},
status
)
}
/**
* GET handler for direct external URL proxying
* This allows for GET requests to external APIs
*/
export async function GET(request: Request) {
const url = new URL(request.url)
const targetUrl = url.searchParams.get('url')
const requestId = generateRequestId()
// Vault download proxy: /api/proxy?vaultDownload=1&bucket=...&object=...&credentialId=...
const vaultDownload = url.searchParams.get('vaultDownload')
if (vaultDownload === '1') {
try {
const bucket = url.searchParams.get('bucket')
const objectParam = url.searchParams.get('object')
const credentialId = url.searchParams.get('credentialId')
if (!bucket || !objectParam || !credentialId) {
return createErrorResponse('Missing bucket, object, or credentialId', 400)
}
// Fetch access token using existing token API
const baseUrl = new URL(getBaseUrl())
const tokenUrl = new URL('/api/auth/oauth/token', baseUrl)
// Build headers: forward session cookies if present; include internal auth for server-side
const tokenHeaders: Record<string, string> = { 'Content-Type': 'application/json' }
const incomingCookie = request.headers.get('cookie')
if (incomingCookie) tokenHeaders.Cookie = incomingCookie
try {
const internalToken = await generateInternalToken()
tokenHeaders.Authorization = `Bearer ${internalToken}`
} catch (_e) {
// best-effort internal auth
}
// Optional workflow context for collaboration auth
const workflowId = url.searchParams.get('workflowId') || undefined
const tokenRes = await fetch(tokenUrl.toString(), {
method: 'POST',
headers: tokenHeaders,
body: JSON.stringify({ credentialId, workflowId }),
})
if (!tokenRes.ok) {
const err = await tokenRes.text()
return createErrorResponse(`Failed to fetch access token: ${err}`, 401)
}
const tokenJson = await tokenRes.json()
const accessToken = tokenJson.accessToken
if (!accessToken) {
return createErrorResponse('No access token available', 401)
}
// Avoid double-encoding: incoming object may already be percent-encoded
const objectDecoded = decodeURIComponent(objectParam)
const gcsUrl = `https://storage.googleapis.com/storage/v1/b/${encodeURIComponent(
bucket
)}/o/${encodeURIComponent(objectDecoded)}?alt=media`
const fileRes = await fetch(gcsUrl, {
headers: { Authorization: `Bearer ${accessToken}` },
})
if (!fileRes.ok) {
const errText = await fileRes.text()
return createErrorResponse(errText || 'Failed to download file', fileRes.status)
}
const headers = new Headers()
fileRes.headers.forEach((v, k) => headers.set(k, v))
return new NextResponse(fileRes.body, { status: 200, headers })
} catch (error: any) {
logger.error(`[${requestId}] Vault download proxy failed`, {
error: error instanceof Error ? error.message : String(error),
})
return createErrorResponse('Vault download failed', 500)
}
}
if (!targetUrl) {
logger.error(`[${requestId}] Missing 'url' parameter`)
return createErrorResponse("Missing 'url' parameter", 400)
}
const urlValidation = await validateUrlWithDNS(targetUrl)
if (!urlValidation.isValid) {
logger.warn(`[${requestId}] Blocked proxy request`, {
url: targetUrl.substring(0, 100),
error: urlValidation.error,
})
return createErrorResponse(urlValidation.error || 'Invalid URL', 403)
}
const method = url.searchParams.get('method') || 'GET'
const bodyParam = url.searchParams.get('body')
let body: string | undefined
if (bodyParam && ['POST', 'PUT', 'PATCH'].includes(method.toUpperCase())) {
try {
body = decodeURIComponent(bodyParam)
} catch (error) {
logger.warn(`[${requestId}] Failed to decode body parameter`, error)
}
}
const customHeaders: Record<string, string> = {}
for (const [key, value] of url.searchParams.entries()) {
if (key.startsWith('header.')) {
const headerName = key.substring(7)
customHeaders[headerName] = value
}
}
if (body && !customHeaders['Content-Type']) {
customHeaders['Content-Type'] = 'application/json'
}
logger.info(`[${requestId}] Proxying ${method} request to: ${targetUrl}`)
try {
const pinnedUrl = createPinnedUrl(targetUrl, urlValidation.resolvedIP!)
const response = await fetch(pinnedUrl, {
method: method,
headers: {
...getProxyHeaders(),
...customHeaders,
Host: urlValidation.originalHostname!,
},
body: body || undefined,
})
const contentType = response.headers.get('content-type') || ''
let data
if (contentType.includes('application/json')) {
data = await response.json()
} else {
data = await response.text()
}
const errorMessage = !response.ok
? data && typeof data === 'object' && data.error
? `${data.error.message || JSON.stringify(data.error)}`
: response.statusText || `HTTP error ${response.status}`
: undefined
if (!response.ok) {
logger.error(`[${requestId}] External API error: ${response.status} ${response.statusText}`)
}
return formatResponse({
success: response.ok,
status: response.status,
statusText: response.statusText,
headers: Object.fromEntries(response.headers.entries()),
data,
error: errorMessage,
})
} catch (error: any) {
logger.error(`[${requestId}] Proxy GET request failed`, {
url: targetUrl,
error: error instanceof Error ? error.message : String(error),
stack: error instanceof Error ? error.stack : undefined,
})
return createErrorResponse(error)
}
}
export async function POST(request: NextRequest) {
const requestId = generateRequestId()
const startTime = new Date()
const startTimeISO = startTime.toISOString()
try {
const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
if (!authResult.success) {
logger.error(`[${requestId}] Authentication failed for proxy:`, authResult.error)
return createErrorResponse('Unauthorized', 401)
}
let requestBody
try {
requestBody = await request.json()
} catch (parseError) {
logger.error(`[${requestId}] Failed to parse request body`, {
error: parseError instanceof Error ? parseError.message : String(parseError),
})
throw new Error('Invalid JSON in request body')
}
const validationResult = proxyPostSchema.safeParse(requestBody)
if (!validationResult.success) {
logger.error(`[${requestId}] Request validation failed`, {
errors: validationResult.error.errors,
})
const errorMessages = validationResult.error.errors
.map((err) => `${err.path.join('.')}: ${err.message}`)
.join(', ')
throw new Error(`Validation failed: ${errorMessages}`)
}
const { toolId, params } = validationResult.data
logger.info(`[${requestId}] Processing tool: ${toolId}`)
const tool = getTool(toolId)
if (!tool) {
logger.error(`[${requestId}] Tool not found: ${toolId}`)
throw new Error(`Tool not found: ${toolId}`)
}
try {
validateRequiredParametersAfterMerge(toolId, tool, params)
} catch (validationError) {
logger.warn(`[${requestId}] Tool validation failed for ${toolId}`, {
error: validationError instanceof Error ? validationError.message : String(validationError),
})
const endTime = new Date()
const endTimeISO = endTime.toISOString()
const duration = endTime.getTime() - startTime.getTime()
return createErrorResponse(validationError, 400, {
startTime: startTimeISO,
endTime: endTimeISO,
duration,
})
}
const hasFileOutputs =
tool.outputs &&
Object.values(tool.outputs).some(
(output) => output.type === 'file' || output.type === 'file[]'
)
const result = await executeTool(
toolId,
params,
true, // skipProxy (we're already in the proxy)
!hasFileOutputs, // skipPostProcess (don't skip if tool has file outputs)
undefined // execution context is not available in proxy context
)
if (!result.success) {
logger.warn(`[${requestId}] Tool execution failed for ${toolId}`, {
error: result.error || 'Unknown error',
})
throw new Error(result.error || 'Tool execution failed')
}
const endTime = new Date()
const endTimeISO = endTime.toISOString()
const duration = endTime.getTime() - startTime.getTime()
const responseWithTimingData = {
...result,
startTime: startTimeISO,
endTime: endTimeISO,
duration,
timing: {
startTime: startTimeISO,
endTime: endTimeISO,
duration,
},
}
logger.info(`[${requestId}] Tool executed successfully: ${toolId} (${duration}ms)`)
return formatResponse(responseWithTimingData)
} catch (error: any) {
logger.error(`[${requestId}] Proxy request failed`, {
error: error instanceof Error ? error.message : String(error),
stack: error instanceof Error ? error.stack : undefined,
name: error instanceof Error ? error.name : undefined,
})
const endTime = new Date()
const endTimeISO = endTime.toISOString()
const duration = endTime.getTime() - startTime.getTime()
return createErrorResponse(error, 500, {
startTime: startTimeISO,
endTime: endTimeISO,
duration,
})
}
}
export async function OPTIONS() {
return new NextResponse(null, {
status: 204,
headers: {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Methods': 'GET, POST, PUT, DELETE, OPTIONS',
'Access-Control-Allow-Headers': 'Content-Type, Authorization',
'Access-Control-Max-Age': '86400',
},
})
}

View File

@@ -5,7 +5,11 @@ import { checkHybridAuth } from '@/lib/auth/hybrid'
 import { generateRequestId } from '@/lib/core/utils/request'
 import { getBaseUrl } from '@/lib/core/utils/urls'
 import { StorageService } from '@/lib/uploads'
-import { extractStorageKey, inferContextFromKey } from '@/lib/uploads/utils/file-utils'
+import {
+extractStorageKey,
+inferContextFromKey,
+isInternalFileUrl,
+} from '@/lib/uploads/utils/file-utils'
 import { verifyFileAccess } from '@/app/api/files/authorization'
 export const dynamic = 'force-dynamic'
@@ -47,13 +51,13 @@ export async function POST(request: NextRequest) {
 logger.info(`[${requestId}] Mistral parse request`, {
 filePath: validatedData.filePath,
-isWorkspaceFile: validatedData.filePath.includes('/api/files/serve/'),
+isWorkspaceFile: isInternalFileUrl(validatedData.filePath),
 userId,
 })
 let fileUrl = validatedData.filePath
-if (validatedData.filePath?.includes('/api/files/serve/')) {
+if (isInternalFileUrl(validatedData.filePath)) {
 try {
 const storageKey = extractStorageKey(validatedData.filePath)

View File

@@ -5,7 +5,11 @@ import { checkHybridAuth } from '@/lib/auth/hybrid'
 import { generateRequestId } from '@/lib/core/utils/request'
 import { getBaseUrl } from '@/lib/core/utils/urls'
 import { StorageService } from '@/lib/uploads'
-import { extractStorageKey, inferContextFromKey } from '@/lib/uploads/utils/file-utils'
+import {
+extractStorageKey,
+inferContextFromKey,
+isInternalFileUrl,
+} from '@/lib/uploads/utils/file-utils'
 import { verifyFileAccess } from '@/app/api/files/authorization'
 export const dynamic = 'force-dynamic'
@@ -48,13 +52,13 @@ export async function POST(request: NextRequest) {
 logger.info(`[${requestId}] Pulse parse request`, {
 filePath: validatedData.filePath,
-isWorkspaceFile: validatedData.filePath.includes('/api/files/serve/'),
+isWorkspaceFile: isInternalFileUrl(validatedData.filePath),
 userId,
 })
 let fileUrl = validatedData.filePath
-if (validatedData.filePath?.includes('/api/files/serve/')) {
+if (isInternalFileUrl(validatedData.filePath)) {
 try {
 const storageKey = extractStorageKey(validatedData.filePath)
 const context = inferContextFromKey(storageKey)

View File

@@ -5,7 +5,11 @@ import { checkHybridAuth } from '@/lib/auth/hybrid'
 import { generateRequestId } from '@/lib/core/utils/request'
 import { getBaseUrl } from '@/lib/core/utils/urls'
 import { StorageService } from '@/lib/uploads'
-import { extractStorageKey, inferContextFromKey } from '@/lib/uploads/utils/file-utils'
+import {
+extractStorageKey,
+inferContextFromKey,
+isInternalFileUrl,
+} from '@/lib/uploads/utils/file-utils'
 import { verifyFileAccess } from '@/app/api/files/authorization'
 export const dynamic = 'force-dynamic'
@@ -44,13 +48,13 @@ export async function POST(request: NextRequest) {
 logger.info(`[${requestId}] Reducto parse request`, {
 filePath: validatedData.filePath,
-isWorkspaceFile: validatedData.filePath.includes('/api/files/serve/'),
+isWorkspaceFile: isInternalFileUrl(validatedData.filePath),
 userId,
 })
 let fileUrl = validatedData.filePath
-if (validatedData.filePath?.includes('/api/files/serve/')) {
+if (isInternalFileUrl(validatedData.filePath)) {
 try {
 const storageKey = extractStorageKey(validatedData.filePath)
 const context = inferContextFromKey(storageKey)

View File

@@ -79,11 +79,13 @@ export async function POST(request: NextRequest) {
 // Generate public URL for destination (properly encode the destination key)
 const encodedDestKey = validatedData.destinationKey.split('/').map(encodeURIComponent).join('/')
 const url = `https://${validatedData.destinationBucket}.s3.${validatedData.region}.amazonaws.com/${encodedDestKey}`
+const uri = `s3://${validatedData.destinationBucket}/${validatedData.destinationKey}`
 return NextResponse.json({
 success: true,
 output: {
 url,
+uri,
 copySourceVersionId: result.CopySourceVersionId,
 versionId: result.VersionId,
 etag: result.CopyObjectResult?.ETag,

View File

@@ -117,11 +117,13 @@ export async function POST(request: NextRequest) {
 const encodedKey = validatedData.objectKey.split('/').map(encodeURIComponent).join('/')
 const url = `https://${validatedData.bucketName}.s3.${validatedData.region}.amazonaws.com/${encodedKey}`
+const uri = `s3://${validatedData.bucketName}/${validatedData.objectKey}`
 return NextResponse.json({
 success: true,
 output: {
 url,
+uri,
 etag: result.ETag,
 location: url,
 key: validatedData.objectKey,

View File

@@ -0,0 +1,637 @@
import crypto from 'crypto'
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import {
validateAwsRegion,
validateExternalUrl,
validateS3BucketName,
} from '@/lib/core/security/input-validation'
import { generateRequestId } from '@/lib/core/utils/request'
import { StorageService } from '@/lib/uploads'
import {
extractStorageKey,
inferContextFromKey,
isInternalFileUrl,
} from '@/lib/uploads/utils/file-utils'
import { verifyFileAccess } from '@/app/api/files/authorization'
export const dynamic = 'force-dynamic'
export const maxDuration = 300 // 5 minutes for large multi-page PDF processing
const logger = createLogger('TextractParseAPI')
const QuerySchema = z.object({
Text: z.string().min(1),
Alias: z.string().optional(),
Pages: z.array(z.string()).optional(),
})
const TextractParseSchema = z
.object({
accessKeyId: z.string().min(1, 'AWS Access Key ID is required'),
secretAccessKey: z.string().min(1, 'AWS Secret Access Key is required'),
region: z.string().min(1, 'AWS region is required'),
processingMode: z.enum(['sync', 'async']).optional().default('sync'),
filePath: z.string().optional(),
s3Uri: z.string().optional(),
featureTypes: z
.array(z.enum(['TABLES', 'FORMS', 'QUERIES', 'SIGNATURES', 'LAYOUT']))
.optional(),
queries: z.array(QuerySchema).optional(),
})
.superRefine((data, ctx) => {
const regionValidation = validateAwsRegion(data.region, 'AWS region')
if (!regionValidation.isValid) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
message: regionValidation.error,
path: ['region'],
})
}
})
function getSignatureKey(
key: string,
dateStamp: string,
regionName: string,
serviceName: string
): Buffer {
const kDate = crypto.createHmac('sha256', `AWS4${key}`).update(dateStamp).digest()
const kRegion = crypto.createHmac('sha256', kDate).update(regionName).digest()
const kService = crypto.createHmac('sha256', kRegion).update(serviceName).digest()
const kSigning = crypto.createHmac('sha256', kService).update('aws4_request').digest()
return kSigning
}
function signAwsRequest(
method: string,
host: string,
uri: string,
body: string,
accessKeyId: string,
secretAccessKey: string,
region: string,
service: string,
amzTarget: string
): Record<string, string> {
const date = new Date()
const amzDate = date.toISOString().replace(/[:-]|\.\d{3}/g, '')
const dateStamp = amzDate.slice(0, 8)
const payloadHash = crypto.createHash('sha256').update(body).digest('hex')
const canonicalHeaders =
`content-type:application/x-amz-json-1.1\n` +
`host:${host}\n` +
`x-amz-date:${amzDate}\n` +
`x-amz-target:${amzTarget}\n`
const signedHeaders = 'content-type;host;x-amz-date;x-amz-target'
const canonicalRequest = `${method}\n${uri}\n\n${canonicalHeaders}\n${signedHeaders}\n${payloadHash}`
const algorithm = 'AWS4-HMAC-SHA256'
const credentialScope = `${dateStamp}/${region}/${service}/aws4_request`
const stringToSign = `${algorithm}\n${amzDate}\n${credentialScope}\n${crypto.createHash('sha256').update(canonicalRequest).digest('hex')}`
const signingKey = getSignatureKey(secretAccessKey, dateStamp, region, service)
const signature = crypto.createHmac('sha256', signingKey).update(stringToSign).digest('hex')
const authorizationHeader = `${algorithm} Credential=${accessKeyId}/${credentialScope}, SignedHeaders=${signedHeaders}, Signature=${signature}`
return {
'Content-Type': 'application/x-amz-json-1.1',
Host: host,
'X-Amz-Date': amzDate,
'X-Amz-Target': amzTarget,
Authorization: authorizationHeader,
}
}
async function fetchDocumentBytes(url: string): Promise<{ bytes: string; contentType: string }> {
const response = await fetch(url)
if (!response.ok) {
throw new Error(`Failed to fetch document: ${response.statusText}`)
}
const arrayBuffer = await response.arrayBuffer()
const bytes = Buffer.from(arrayBuffer).toString('base64')
const contentType = response.headers.get('content-type') || 'application/octet-stream'
return { bytes, contentType }
}
function parseS3Uri(s3Uri: string): { bucket: string; key: string } {
const match = s3Uri.match(/^s3:\/\/([^/]+)\/(.+)$/)
if (!match) {
throw new Error(
`Invalid S3 URI format: ${s3Uri}. Expected format: s3://bucket-name/path/to/object`
)
}
const bucket = match[1]
const key = match[2]
const bucketValidation = validateS3BucketName(bucket, 'S3 bucket name')
if (!bucketValidation.isValid) {
throw new Error(bucketValidation.error)
}
if (key.includes('..') || key.startsWith('/')) {
throw new Error('S3 key contains invalid path traversal sequences')
}
return { bucket, key }
}
function sleep(ms: number): Promise<void> {
return new Promise((resolve) => setTimeout(resolve, ms))
}
async function callTextractAsync(
host: string,
amzTarget: string,
body: Record<string, unknown>,
accessKeyId: string,
secretAccessKey: string,
region: string
): Promise<Record<string, unknown>> {
const bodyString = JSON.stringify(body)
const headers = signAwsRequest(
'POST',
host,
'/',
bodyString,
accessKeyId,
secretAccessKey,
region,
'textract',
amzTarget
)
const response = await fetch(`https://${host}/`, {
method: 'POST',
headers,
body: bodyString,
})
if (!response.ok) {
const errorText = await response.text()
let errorMessage = `Textract API error: ${response.statusText}`
try {
const errorJson = JSON.parse(errorText)
if (errorJson.Message) {
errorMessage = errorJson.Message
} else if (errorJson.__type) {
errorMessage = `${errorJson.__type}: ${errorJson.message || errorText}`
}
} catch {
// Use default error message
}
throw new Error(errorMessage)
}
return response.json()
}
async function pollForJobCompletion(
host: string,
jobId: string,
accessKeyId: string,
secretAccessKey: string,
region: string,
useAnalyzeDocument: boolean,
requestId: string
): Promise<Record<string, unknown>> {
const pollIntervalMs = 5000 // 5 seconds between polls
const maxPollTimeMs = 180000 // 3 minutes maximum polling time
const maxAttempts = Math.ceil(maxPollTimeMs / pollIntervalMs)
const getTarget = useAnalyzeDocument
? 'Textract.GetDocumentAnalysis'
: 'Textract.GetDocumentTextDetection'
for (let attempt = 0; attempt < maxAttempts; attempt++) {
const result = await callTextractAsync(
host,
getTarget,
{ JobId: jobId },
accessKeyId,
secretAccessKey,
region
)
const jobStatus = result.JobStatus as string
if (jobStatus === 'SUCCEEDED') {
logger.info(`[${requestId}] Async job completed successfully after ${attempt + 1} polls`)
let allBlocks = (result.Blocks as unknown[]) || []
let nextToken = result.NextToken as string | undefined
while (nextToken) {
const nextResult = await callTextractAsync(
host,
getTarget,
{ JobId: jobId, NextToken: nextToken },
accessKeyId,
secretAccessKey,
region
)
allBlocks = allBlocks.concat((nextResult.Blocks as unknown[]) || [])
nextToken = nextResult.NextToken as string | undefined
}
return {
...result,
Blocks: allBlocks,
}
}
if (jobStatus === 'FAILED') {
throw new Error(`Textract job failed: ${result.StatusMessage || 'Unknown error'}`)
}
if (jobStatus === 'PARTIAL_SUCCESS') {
logger.warn(`[${requestId}] Job completed with partial success: ${result.StatusMessage}`)
let allBlocks = (result.Blocks as unknown[]) || []
let nextToken = result.NextToken as string | undefined
while (nextToken) {
const nextResult = await callTextractAsync(
host,
getTarget,
{ JobId: jobId, NextToken: nextToken },
accessKeyId,
secretAccessKey,
region
)
allBlocks = allBlocks.concat((nextResult.Blocks as unknown[]) || [])
nextToken = nextResult.NextToken as string | undefined
}
return {
...result,
Blocks: allBlocks,
}
}
logger.info(`[${requestId}] Job status: ${jobStatus}, attempt ${attempt + 1}/${maxAttempts}`)
await sleep(pollIntervalMs)
}
throw new Error(
`Timeout waiting for Textract job to complete (max ${maxPollTimeMs / 1000} seconds)`
)
}
export async function POST(request: NextRequest) {
const requestId = generateRequestId()
try {
const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
if (!authResult.success || !authResult.userId) {
logger.warn(`[${requestId}] Unauthorized Textract parse attempt`, {
error: authResult.error || 'Missing userId',
})
return NextResponse.json(
{
success: false,
error: authResult.error || 'Unauthorized',
},
{ status: 401 }
)
}
const userId = authResult.userId
const body = await request.json()
const validatedData = TextractParseSchema.parse(body)
const processingMode = validatedData.processingMode || 'sync'
const featureTypes = validatedData.featureTypes ?? []
const useAnalyzeDocument = featureTypes.length > 0
const host = `textract.${validatedData.region}.amazonaws.com`
logger.info(`[${requestId}] Textract parse request`, {
processingMode,
filePath: validatedData.filePath?.substring(0, 50),
s3Uri: validatedData.s3Uri?.substring(0, 50),
featureTypes,
userId,
})
if (processingMode === 'async') {
if (!validatedData.s3Uri) {
return NextResponse.json(
{
success: false,
error: 'S3 URI is required for multi-page processing (s3://bucket/key)',
},
{ status: 400 }
)
}
const { bucket: s3Bucket, key: s3Key } = parseS3Uri(validatedData.s3Uri)
logger.info(`[${requestId}] Starting async Textract job`, { s3Bucket, s3Key })
const startTarget = useAnalyzeDocument
? 'Textract.StartDocumentAnalysis'
: 'Textract.StartDocumentTextDetection'
const startBody: Record<string, unknown> = {
DocumentLocation: {
S3Object: {
Bucket: s3Bucket,
Name: s3Key,
},
},
}
if (useAnalyzeDocument) {
startBody.FeatureTypes = featureTypes
if (
validatedData.queries &&
validatedData.queries.length > 0 &&
featureTypes.includes('QUERIES')
) {
startBody.QueriesConfig = {
Queries: validatedData.queries.map((q) => ({
Text: q.Text,
Alias: q.Alias,
Pages: q.Pages,
})),
}
}
}
const startResult = await callTextractAsync(
host,
startTarget,
startBody,
validatedData.accessKeyId,
validatedData.secretAccessKey,
validatedData.region
)
const jobId = startResult.JobId as string
if (!jobId) {
throw new Error('Failed to start Textract job: No JobId returned')
}
logger.info(`[${requestId}] Async job started`, { jobId })
const textractData = await pollForJobCompletion(
host,
jobId,
validatedData.accessKeyId,
validatedData.secretAccessKey,
validatedData.region,
useAnalyzeDocument,
requestId
)
logger.info(`[${requestId}] Textract async parse successful`, {
pageCount: (textractData.DocumentMetadata as { Pages?: number })?.Pages ?? 0,
blockCount: (textractData.Blocks as unknown[])?.length ?? 0,
})
return NextResponse.json({
success: true,
output: {
blocks: textractData.Blocks ?? [],
documentMetadata: {
pages: (textractData.DocumentMetadata as { Pages?: number })?.Pages ?? 0,
},
modelVersion: (textractData.AnalyzeDocumentModelVersion ??
textractData.DetectDocumentTextModelVersion) as string | undefined,
},
})
}
if (!validatedData.filePath) {
return NextResponse.json(
{
success: false,
error: 'File path is required for single-page processing',
},
{ status: 400 }
)
}
let fileUrl = validatedData.filePath
const isInternalFilePath = validatedData.filePath && isInternalFileUrl(validatedData.filePath)
if (isInternalFilePath) {
try {
const storageKey = extractStorageKey(validatedData.filePath)
const context = inferContextFromKey(storageKey)
const hasAccess = await verifyFileAccess(storageKey, userId, undefined, context, false)
if (!hasAccess) {
logger.warn(`[${requestId}] Unauthorized presigned URL generation attempt`, {
userId,
key: storageKey,
context,
})
return NextResponse.json(
{
success: false,
error: 'File not found',
},
{ status: 404 }
)
}
fileUrl = await StorageService.generatePresignedDownloadUrl(storageKey, context, 5 * 60)
logger.info(`[${requestId}] Generated presigned URL for ${context} file`)
} catch (error) {
logger.error(`[${requestId}] Failed to generate presigned URL:`, error)
return NextResponse.json(
{
success: false,
error: 'Failed to generate file access URL',
},
{ status: 500 }
)
}
} else if (validatedData.filePath?.startsWith('/')) {
// Reject arbitrary absolute paths that don't contain /api/files/serve/
logger.warn(`[${requestId}] Invalid internal path`, {
userId,
path: validatedData.filePath.substring(0, 50),
})
return NextResponse.json(
{
success: false,
error: 'Invalid file path. Only uploaded files are supported for internal paths.',
},
{ status: 400 }
)
} else {
const urlValidation = validateExternalUrl(fileUrl, 'Document URL')
if (!urlValidation.isValid) {
logger.warn(`[${requestId}] SSRF attempt blocked`, {
userId,
url: fileUrl.substring(0, 100),
error: urlValidation.error,
})
return NextResponse.json(
{
success: false,
error: urlValidation.error,
},
{ status: 400 }
)
}
}
const { bytes, contentType } = await fetchDocumentBytes(fileUrl)
// Track if this is a PDF for better error messaging
const isPdf = contentType.includes('pdf') || fileUrl.toLowerCase().endsWith('.pdf')
const uri = '/'
let textractBody: Record<string, unknown>
let amzTarget: string
if (useAnalyzeDocument) {
amzTarget = 'Textract.AnalyzeDocument'
textractBody = {
Document: {
Bytes: bytes,
},
FeatureTypes: featureTypes,
}
if (
validatedData.queries &&
validatedData.queries.length > 0 &&
featureTypes.includes('QUERIES')
) {
textractBody.QueriesConfig = {
Queries: validatedData.queries.map((q) => ({
Text: q.Text,
Alias: q.Alias,
Pages: q.Pages,
})),
}
}
} else {
amzTarget = 'Textract.DetectDocumentText'
textractBody = {
Document: {
Bytes: bytes,
},
}
}
const bodyString = JSON.stringify(textractBody)
const headers = signAwsRequest(
'POST',
host,
uri,
bodyString,
validatedData.accessKeyId,
validatedData.secretAccessKey,
validatedData.region,
'textract',
amzTarget
)
const textractResponse = await fetch(`https://${host}${uri}`, {
method: 'POST',
headers,
body: bodyString,
})
if (!textractResponse.ok) {
const errorText = await textractResponse.text()
logger.error(`[${requestId}] Textract API error:`, errorText)
let errorMessage = `Textract API error: ${textractResponse.statusText}`
let isUnsupportedFormat = false
try {
const errorJson = JSON.parse(errorText)
if (errorJson.Message) {
errorMessage = errorJson.Message
} else if (errorJson.__type) {
errorMessage = `${errorJson.__type}: ${errorJson.message || errorText}`
}
// Check for unsupported document format error
isUnsupportedFormat =
errorJson.__type === 'UnsupportedDocumentException' ||
errorJson.Message?.toLowerCase().includes('unsupported document') ||
errorText.toLowerCase().includes('unsupported document')
} catch {
isUnsupportedFormat = errorText.toLowerCase().includes('unsupported document')
}
// Provide helpful message for unsupported format (likely multi-page PDF)
if (isUnsupportedFormat && isPdf) {
errorMessage =
'This document format is not supported in Single Page mode. If this is a multi-page PDF, please use "Multi-Page (PDF, TIFF via S3)" mode instead, which requires uploading your document to S3 first. Single Page mode only supports JPEG, PNG, and single-page PDF files.'
}
return NextResponse.json(
{
success: false,
error: errorMessage,
},
{ status: textractResponse.status }
)
}
const textractData = await textractResponse.json()
logger.info(`[${requestId}] Textract parse successful`, {
pageCount: textractData.DocumentMetadata?.Pages ?? 0,
blockCount: textractData.Blocks?.length ?? 0,
})
return NextResponse.json({
success: true,
output: {
blocks: textractData.Blocks ?? [],
documentMetadata: {
pages: textractData.DocumentMetadata?.Pages ?? 0,
},
modelVersion:
textractData.AnalyzeDocumentModelVersion ??
textractData.DetectDocumentTextModelVersion ??
undefined,
},
})
} catch (error) {
if (error instanceof z.ZodError) {
logger.warn(`[${requestId}] Invalid request data`, { errors: error.errors })
return NextResponse.json(
{
success: false,
error: 'Invalid request data',
details: error.errors,
},
{ status: 400 }
)
}
logger.error(`[${requestId}] Error in Textract parse:`, error)
return NextResponse.json(
{
success: false,
error: error instanceof Error ? error.message : 'Internal server error',
},
{ status: 500 }
)
}
}

View File

@@ -12,6 +12,10 @@ import { markExecutionCancelled } from '@/lib/execution/cancellation'
 import { processInputFileFields } from '@/lib/execution/files'
 import { preprocessExecution } from '@/lib/execution/preprocessing'
 import { LoggingSession } from '@/lib/logs/execution/logging-session'
+import {
+cleanupExecutionBase64Cache,
+hydrateUserFilesWithBase64,
+} from '@/lib/uploads/utils/user-file-base64.server'
 import { executeWorkflowCore } from '@/lib/workflows/executor/execution-core'
 import { type ExecutionEvent, encodeSSEEvent } from '@/lib/workflows/executor/execution-events'
 import { PauseResumeManager } from '@/lib/workflows/executor/human-in-the-loop-manager'
@@ -25,7 +29,7 @@ import type { WorkflowExecutionPayload } from '@/background/workflow-execution'
 import { normalizeName } from '@/executor/constants'
 import { ExecutionSnapshot } from '@/executor/execution/snapshot'
 import type { ExecutionMetadata, IterationContext } from '@/executor/execution/types'
-import type { StreamingExecution } from '@/executor/types'
+import type { NormalizedBlockOutput, StreamingExecution } from '@/executor/types'
 import { Serializer } from '@/serializer'
 import { CORE_TRIGGER_TYPES, type CoreTriggerType } from '@/stores/logs/filters/types'
@@ -38,6 +42,8 @@ const ExecuteWorkflowSchema = z.object({
 useDraftState: z.boolean().optional(),
 input: z.any().optional(),
 isClientSession: z.boolean().optional(),
+includeFileBase64: z.boolean().optional().default(true),
+base64MaxBytes: z.number().int().positive().optional(),
 workflowStateOverride: z
 .object({
 blocks: z.record(z.any()),
@@ -214,6 +220,8 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
 useDraftState,
 input: validatedInput,
 isClientSession = false,
+includeFileBase64,
+base64MaxBytes,
 workflowStateOverride,
 } = validation.data
@@ -227,6 +235,8 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
 triggerType,
 stream,
 useDraftState,
+includeFileBase64,
+base64MaxBytes,
 workflowStateOverride,
 workflowId: _workflowId, // Also exclude workflowId used for internal JWT auth
 ...rest
@@ -427,16 +437,31 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
 snapshot,
 callbacks: {},
 loggingSession,
+includeFileBase64,
+base64MaxBytes,
 })
-const hasResponseBlock = workflowHasResponseBlock(result)
+const outputWithBase64 = includeFileBase64
+? ((await hydrateUserFilesWithBase64(result.output, {
+requestId,
+executionId,
+maxBytes: base64MaxBytes,
+})) as NormalizedBlockOutput)
+: result.output
+const resultWithBase64 = { ...result, output: outputWithBase64 }
+// Cleanup base64 cache for this execution
+await cleanupExecutionBase64Cache(executionId)
+const hasResponseBlock = workflowHasResponseBlock(resultWithBase64)
 if (hasResponseBlock) {
-return createHttpResponseFromBlock(result)
+return createHttpResponseFromBlock(resultWithBase64)
 }
 const filteredResult = {
 success: result.success,
-output: result.output,
+output: outputWithBase64,
 error: result.error,
 metadata: result.metadata
 ? {
@@ -498,6 +523,8 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
 selectedOutputs: resolvedSelectedOutputs,
 isSecureMode: false,
 workflowTriggerType: triggerType === 'chat' ? 'chat' : 'api',
+includeFileBase64,
+base64MaxBytes,
 },
 executionId,
 })
@@ -698,6 +725,8 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
 },
 loggingSession,
 abortSignal: abortController.signal,
+includeFileBase64,
+base64MaxBytes,
 })
 if (result.status === 'paused') {
@@ -750,12 +779,21 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
 workflowId,
 data: {
 success: result.success,
-output: result.output,
+output: includeFileBase64
+? await hydrateUserFilesWithBase64(result.output, {
+requestId,
+executionId,
+maxBytes: base64MaxBytes,
+})
+: result.output,
 duration: result.metadata?.duration || 0,
 startTime: result.metadata?.startTime || startTime.toISOString(),
 endTime: result.metadata?.endTime || new Date().toISOString(),
 },
 })
+// Cleanup base64 cache for this execution
+await cleanupExecutionBase64Cache(executionId)
 } catch (error: any) {
 const errorMessage = error.message || 'Unknown error'
 logger.error(`[${requestId}] SSE execution failed: ${errorMessage}`)

View File

@@ -33,6 +33,7 @@ const BlockDataSchema = z.object({
 doWhileCondition: z.string().optional(),
 parallelType: z.enum(['collection', 'count']).optional(),
 type: z.string().optional(),
+canonicalModes: z.record(z.enum(['basic', 'advanced'])).optional(),
 })
 const SubBlockStateSchema = z.object({

View File

@@ -2,7 +2,7 @@
 import { useRef, useState } from 'react'
 import { createLogger } from '@sim/logger'
-import { isUserFile } from '@/lib/core/utils/display-filters'
+import { isUserFileWithMetadata } from '@/lib/core/utils/user-file'
 import type { ChatFile, ChatMessage } from '@/app/chat/components/message/message'
 import { CHAT_ERROR_MESSAGES } from '@/app/chat/constants'
@@ -17,7 +17,7 @@ function extractFilesFromData(
 return files
 }
-if (isUserFile(data)) {
+if (isUserFileWithMetadata(data)) {
 if (!seenIds.has(data.id)) {
 seenIds.add(data.id)
 files.push({
@@ -232,7 +232,7 @@ export function useChatStreaming() {
 return null
 }
-if (isUserFile(value)) {
+if (isUserFileWithMetadata(value)) {
 return null
 }
@@ -285,7 +285,7 @@ export function useChatStreaming() {
 const value = getOutputValue(blockOutputs, config.path)
-if (isUserFile(value)) {
+if (isUserFileWithMetadata(value)) {
 extractedFiles.push({
 id: value.id,
 name: value.name,

View File

@@ -538,15 +538,11 @@ export function Document({
 },
 {
 onSuccess: (result) => {
-if (operation === 'delete') {
+if (operation === 'delete' || result.errorCount > 0) {
 refreshChunks()
 } else {
-result.results.forEach((opResult) => {
-if (opResult.operation === operation) {
-opResult.chunkIds.forEach((chunkId: string) => {
-updateChunk(chunkId, { enabled: operation === 'enable' })
-})
-}
+chunks.forEach((chunk) => {
+updateChunk(chunk.id, { enabled: operation === 'enable' })
 })
 }
 logger.info(`Successfully ${operation}d ${result.successCount} chunks`)

View File

@@ -462,7 +462,7 @@ export function BaseTagsModal({ open, onOpenChange, knowledgeBaseId }: BaseTagsM
 <ModalHeader>Documents using "{selectedTag?.displayName}"</ModalHeader>
 <ModalBody>
 <div className='space-y-[8px]'>
-<p className='text-[12px] text-[var(--text-tertiary)]'>
+<p className='text-[12px] text-[var(--text-secondary)]'>
 {selectedTagUsage?.documentCount || 0} document
 {selectedTagUsage?.documentCount !== 1 ? 's are' : ' is'} currently using this tag
 definition.
@@ -470,7 +470,7 @@
 {selectedTagUsage?.documentCount === 0 ? (
 <div className='rounded-[6px] border p-[16px] text-center'>
-<p className='text-[12px] text-[var(--text-tertiary)]'>
+<p className='text-[12px] text-[var(--text-secondary)]'>
 This tag definition is not being used by any documents. You can safely delete it
 to free up the tag slot.
 </p>

View File

@@ -234,7 +234,7 @@ function ProgressBar({
 {segments.map((segment, index) => (
 <div
 key={index}
-className='absolute h-full'
+className='absolute h-full opacity-70'
 style={{
 left: `${segment.startPercent}%`,
 width: `${segment.widthPercent}%`,

View File

@@ -257,7 +257,7 @@ export const LogDetails = memo(function LogDetails({
 Version
 </span>
 <div className='flex w-0 flex-1 justify-end'>
-<span className='max-w-full truncate rounded-[6px] bg-[#14291B] px-[9px] py-[2px] font-medium text-[#86EFAC] text-[12px]'>
+<span className='max-w-full truncate rounded-[6px] bg-[#bbf7d0] px-[9px] py-[2px] font-medium text-[#15803d] text-[12px] dark:bg-[#14291B] dark:text-[#86EFAC]'>
 {log.deploymentVersionName || `v${log.deploymentVersion}`}
 </span>
 </div>

View File

@@ -19,6 +19,7 @@ import { DatePicker } from '@/components/emcn/components/date-picker/date-picker
 import { cn } from '@/lib/core/utils/cn'
 import { hasActiveFilters } from '@/lib/logs/filters'
 import { getTriggerOptions } from '@/lib/logs/get-trigger-options'
+import { type LogStatus, STATUS_CONFIG } from '@/app/workspace/[workspaceId]/logs/utils'
 import { getBlock } from '@/blocks/registry'
 import { useFolderStore } from '@/stores/folders/store'
 import { useFilterStore } from '@/stores/logs/filters/store'
@@ -211,12 +212,12 @@ export function LogsToolbar({
 }, [level])
 const statusOptions: ComboboxOption[] = useMemo(
-() => [
-{ value: 'error', label: 'Error', icon: getColorIcon('var(--text-error)') },
-{ value: 'info', label: 'Info', icon: getColorIcon('var(--terminal-status-info-color)') },
-{ value: 'running', label: 'Running', icon: getColorIcon('#22c55e') },
-{ value: 'pending', label: 'Pending', icon: getColorIcon('#f59e0b') },
-],
+() =>
+(Object.keys(STATUS_CONFIG) as LogStatus[]).map((status) => ({
+value: status,
+label: STATUS_CONFIG[status].label,
+icon: getColorIcon(STATUS_CONFIG[status].color),
+})),
 []
 )
@@ -242,12 +243,8 @@
 const selectedStatusColor = useMemo(() => {
 if (selectedStatuses.length !== 1) return null
-const status = selectedStatuses[0]
-if (status === 'error') return 'var(--text-error)'
-if (status === 'info') return 'var(--terminal-status-info-color)'
-if (status === 'running') return '#22c55e'
-if (status === 'pending') return '#f59e0b'
-return null
+const status = selectedStatuses[0] as LogStatus
+return STATUS_CONFIG[status]?.color ?? null
 }, [selectedStatuses])
 const workflowOptions: ComboboxOption[] = useMemo(

View File

@@ -5,7 +5,6 @@ import { getIntegrationMetadata } from '@/lib/logs/get-trigger-options'
 import { getBlock } from '@/blocks/registry'
 import { CORE_TRIGGER_TYPES } from '@/stores/logs/filters/types'
-/** Column configuration for logs table - shared between header and rows */
 export const LOG_COLUMNS = {
 date: { width: 'w-[8%]', minWidth: 'min-w-[70px]', label: 'Date' },
 time: { width: 'w-[12%]', minWidth: 'min-w-[90px]', label: 'Time' },
@@ -16,10 +15,8 @@ export const LOG_COLUMNS = {
 duration: { width: 'w-[20%]', minWidth: 'min-w-[100px]', label: 'Duration' },
 } as const
-/** Type-safe column key derived from LOG_COLUMNS */
 export type LogColumnKey = keyof typeof LOG_COLUMNS
-/** Ordered list of column keys for rendering table headers */
 export const LOG_COLUMN_ORDER: readonly LogColumnKey[] = [
 'date',
 'time',
@@ -30,7 +27,6 @@ export const LOG_COLUMN_ORDER: readonly LogColumnKey[] = [
 'duration',
 ] as const
-/** Possible execution status values for workflow logs */
 export type LogStatus = 'error' | 'pending' | 'running' | 'info' | 'cancelled'
 /**
@@ -53,30 +49,28 @@ export function getDisplayStatus(status: string | null | undefined): LogStatus {
 }
 }
-/** Configuration mapping log status to Badge variant and display label */
-const STATUS_VARIANT_MAP: Record<
+export const STATUS_CONFIG: Record<
 LogStatus,
-{ variant: React.ComponentProps<typeof Badge>['variant']; label: string }
+{ variant: React.ComponentProps<typeof Badge>['variant']; label: string; color: string }
 > = {
-error: { variant: 'red', label: 'Error' },
-pending: { variant: 'amber', label: 'Pending' },
-running: { variant: 'green', label: 'Running' },
-cancelled: { variant: 'gray', label: 'Cancelled' },
-info: { variant: 'gray', label: 'Info' },
+error: { variant: 'red', label: 'Error', color: 'var(--text-error)' },
+pending: { variant: 'amber', label: 'Pending', color: '#f59e0b' },
+running: { variant: 'green', label: 'Running', color: '#22c55e' },
+cancelled: { variant: 'orange', label: 'Cancelled', color: '#f97316' },
+info: { variant: 'gray', label: 'Info', color: 'var(--terminal-status-info-color)' },
 }
-/** Configuration mapping core trigger types to Badge color variants */
 const TRIGGER_VARIANT_MAP: Record<string, React.ComponentProps<typeof Badge>['variant']> = {
 manual: 'gray-secondary',
 api: 'blue',
 schedule: 'green',
 chat: 'purple',
 webhook: 'orange',
+mcp: 'cyan',
 a2a: 'teal',
 }
 interface StatusBadgeProps {
-/** The execution status to display */
 status: LogStatus
 }
@@ -86,14 +80,13 @@ interface StatusBadgeProps {
 * @returns A Badge with dot indicator and status label
 */
 export const StatusBadge = React.memo(({ status }: StatusBadgeProps) => {
-const config = STATUS_VARIANT_MAP[status]
+const config = STATUS_CONFIG[status]
 return React.createElement(Badge, { variant: config.variant, dot: true }, config.label)
 })
 StatusBadge.displayName = 'StatusBadge'
 interface TriggerBadgeProps {
-/** The trigger type identifier (e.g., 'manual', 'api', or integration block type) */
 trigger: string
 }

View File

@@ -142,7 +142,7 @@ export const ActionBar = memo(
 </Tooltip.Root>
 )}
-{!isStartBlock && !isResponseBlock && !isSubflowBlock && (
+{!isStartBlock && !isResponseBlock && (
 <Tooltip.Root>
 <Tooltip.Trigger asChild>
 <Button
@@ -213,6 +213,29 @@ export const ActionBar = memo(
 </Tooltip.Root>
 )}
+{isSubflowBlock && (
+<Tooltip.Root>
+<Tooltip.Trigger asChild>
+<Button
+variant='ghost'
+onClick={(e) => {
+e.stopPropagation()
+if (!disabled) {
+collaborativeBatchToggleBlockEnabled([blockId])
+}
+}}
+className={ACTION_BUTTON_STYLES}
+disabled={disabled}
+>
+{isEnabled ? <Circle className={ICON_SIZE} /> : <CircleOff className={ICON_SIZE} />}
+</Button>
+</Tooltip.Trigger>
+<Tooltip.Content side='top'>
+{getTooltipMessage(isEnabled ? 'Disable Block' : 'Enable Block')}
+</Tooltip.Content>
+</Tooltip.Root>
+)}
 <Tooltip.Root>
 <Tooltip.Trigger asChild>
 <Button

View File

@@ -26,7 +26,6 @@ export interface CanvasMenuProps {
 onOpenLogs: () => void
 onToggleVariables: () => void
 onToggleChat: () => void
-onInvite: () => void
 isVariablesOpen?: boolean
 isChatOpen?: boolean
 hasClipboard?: boolean
@@ -55,15 +54,12 @@ export function CanvasMenu({
 onOpenLogs,
 onToggleVariables,
 onToggleChat,
-onInvite,
 isVariablesOpen = false,
 isChatOpen = false,
 hasClipboard = false,
 disableEdit = false,
-disableAdmin = false,
 canUndo = false,
 canRedo = false,
-isInvitationsDisabled = false,
 }: CanvasMenuProps) {
 return (
 <Popover
@@ -179,22 +175,6 @@ export function CanvasMenu({
 >
 {isChatOpen ? 'Close Chat' : 'Open Chat'}
 </PopoverItem>
-{/* Admin action - hidden when invitations are disabled */}
-{!isInvitationsDisabled && (
-<>
-<PopoverDivider />
-<PopoverItem
-disabled={disableAdmin}
-onClick={() => {
-onInvite()
-onClose()
-}}
->
-Invite to Workspace
-</PopoverItem>
-</>
-)}
 </PopoverContent>
 </Popover>
 )

View File

@@ -886,17 +886,16 @@ export function Chat() {
 onMouseDown={(e) => e.stopPropagation()}
 >
 {shouldShowConfigureStartInputsButton && (
-<Badge
-variant='outline'
-className='flex-none cursor-pointer whitespace-nowrap rounded-[6px]'
+<div
+className='flex flex-none cursor-pointer items-center whitespace-nowrap rounded-[6px] border border-[var(--border-1)] bg-[var(--surface-5)] px-[9px] py-[2px] font-medium font-sans text-[12px] text-[var(--text-primary)] hover:bg-[var(--surface-7)] dark:hover:border-[var(--surface-7)] dark:hover:bg-[var(--border-1)]'
 title='Add chat inputs to Start block'
 onMouseDown={(e) => {
 e.stopPropagation()
 handleConfigureStartInputs()
 }}
 >
-<span className='whitespace-nowrap text-[12px]'>Add inputs</span>
-</Badge>
+<span className='whitespace-nowrap'>Add inputs</span>
+</div>
 )}
 <OutputSelect

View File

@@ -129,10 +129,6 @@ export function OutputSelect({
 ? baselineWorkflow.blocks?.[block.id]?.subBlocks?.responseFormat?.value
 : subBlockValues?.[block.id]?.responseFormat
 const responseFormat = parseResponseFormatSafely(responseFormatValue, block.id)
-const operationValue =
-shouldUseBaseline && baselineWorkflow
-? baselineWorkflow.blocks?.[block.id]?.subBlocks?.operation?.value
-: subBlockValues?.[block.id]?.operation
 let outputsToProcess: Record<string, unknown> = {}
@@ -146,10 +142,20 @@
 outputsToProcess = blockConfig?.outputs || {}
 }
 } else {
-const toolOutputs =
-blockConfig && typeof operationValue === 'string'
-? getToolOutputs(blockConfig, operationValue)
-: {}
+// Build subBlocks object for tool selector
+const rawSubBlockValues =
+shouldUseBaseline && baselineWorkflow
+? baselineWorkflow.blocks?.[block.id]?.subBlocks
+: subBlockValues?.[block.id]
+const subBlocks: Record<string, { value: unknown }> = {}
+if (rawSubBlockValues && typeof rawSubBlockValues === 'object') {
+for (const [key, val] of Object.entries(rawSubBlockValues)) {
+// Handle both { value: ... } and raw value formats
+subBlocks[key] = val && typeof val === 'object' && 'value' in val ? val : { value: val }
+}
+}
+const toolOutputs = blockConfig ? getToolOutputs(blockConfig, subBlocks) : {}
 outputsToProcess =
 Object.keys(toolOutputs).length > 0 ? toolOutputs : blockConfig?.outputs || {}
 }

View File

@@ -0,0 +1,22 @@
import { PopoverSection } from '@/components/emcn'
/**
* Skeleton loading component for chat history dropdown
* Displays placeholder content while chats are being loaded
*/
export function ChatHistorySkeleton() {
return (
<>
<PopoverSection>
<div className='h-3 w-12 animate-pulse rounded bg-muted/40' />
</PopoverSection>
<div className='flex flex-col gap-0.5'>
{[1, 2, 3].map((i) => (
<div key={i} className='flex h-[25px] items-center px-[6px]'>
<div className='h-3 w-full animate-pulse rounded bg-muted/40' />
</div>
))}
</div>
</>
)
}

View File

@@ -0,0 +1,79 @@
import { Button } from '@/components/emcn'
type CheckpointConfirmationVariant = 'restore' | 'discard'
interface CheckpointConfirmationProps {
/** Confirmation variant - 'restore' for reverting, 'discard' for edit with checkpoint options */
variant: CheckpointConfirmationVariant
/** Whether an action is currently processing */
isProcessing: boolean
/** Callback when cancel is clicked */
onCancel: () => void
/** Callback when revert is clicked */
onRevert: () => void
/** Callback when continue is clicked (only for 'discard' variant) */
onContinue?: () => void
}
/**
* Inline confirmation for checkpoint operations
* Supports two variants:
* - 'restore': Simple revert confirmation with warning
* - 'discard': Edit with checkpoint options (revert or continue without revert)
*/
export function CheckpointConfirmation({
variant,
isProcessing,
onCancel,
onRevert,
onContinue,
}: CheckpointConfirmationProps) {
const isRestoreVariant = variant === 'restore'
return (
<div className='mt-[8px] rounded-[4px] border border-[var(--border)] bg-[var(--surface-4)] p-[10px]'>
<p className='mb-[8px] text-[12px] text-[var(--text-primary)]'>
{isRestoreVariant ? (
<>
Revert to checkpoint? This will restore your workflow to the state saved at this
checkpoint.{' '}
<span className='text-[var(--text-error)]'>This action cannot be undone.</span>
</>
) : (
'Continue from a previous message?'
)}
</p>
<div className='flex gap-[8px]'>
<Button
onClick={onCancel}
variant='active'
size='sm'
className='flex-1'
disabled={isProcessing}
>
Cancel
</Button>
<Button
onClick={onRevert}
variant='destructive'
size='sm'
className='flex-1'
disabled={isProcessing}
>
{isProcessing ? 'Reverting...' : 'Revert'}
</Button>
{!isRestoreVariant && onContinue && (
<Button
onClick={onContinue}
variant='tertiary'
size='sm'
className='flex-1'
disabled={isProcessing}
>
Continue
</Button>
)}
</div>
</div>
)
}

View File

@@ -1,5 +1,6 @@
+export * from './checkpoint-confirmation'
 export * from './file-display'
-export { default as CopilotMarkdownRenderer } from './markdown-renderer'
+export { CopilotMarkdownRenderer } from './markdown-renderer'
 export * from './smooth-streaming'
 export * from './thinking-block'
 export * from './usage-limit-actions'

View File

@@ -0,0 +1 @@
export { default as CopilotMarkdownRenderer } from './markdown-renderer'

View File

@@ -1,27 +1,17 @@
 import { memo, useEffect, useRef, useState } from 'react'
 import { cn } from '@/lib/core/utils/cn'
-import CopilotMarkdownRenderer from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/components/markdown-renderer'
+import { CopilotMarkdownRenderer } from '../markdown-renderer'
-/**
- * Character animation delay in milliseconds
- */
+/** Character animation delay in milliseconds */
 const CHARACTER_DELAY = 3
-/**
- * Props for the StreamingIndicator component
- */
+/** Props for the StreamingIndicator component */
 interface StreamingIndicatorProps {
 /** Optional class name for layout adjustments */
 className?: string
 }
-/**
- * StreamingIndicator shows animated dots during message streaming
- * Used as a standalone indicator when no content has arrived yet
- *
- * @param props - Component props
- * @returns Animated loading indicator
- */
+/** Shows animated dots during message streaming when no content has arrived */
 export const StreamingIndicator = memo(({ className }: StreamingIndicatorProps) => (
 <div className={cn('flex h-[1.25rem] items-center text-muted-foreground', className)}>
 <div className='flex space-x-0.5'>
@@ -34,9 +24,7 @@ export const StreamingIndicator = memo(({ className }: StreamingIndicatorProps)
 StreamingIndicator.displayName = 'StreamingIndicator'
-/**
- * Props for the SmoothStreamingText component
- */
+/** Props for the SmoothStreamingText component */
 interface SmoothStreamingTextProps {
 /** Content to display with streaming animation */
 content: string
@@ -44,20 +32,12 @@ interface SmoothStreamingTextProps {
 isStreaming: boolean
 }
-/**
- * SmoothStreamingText component displays text with character-by-character animation
- * Creates a smooth streaming effect for AI responses
- *
- * @param props - Component props
- * @returns Streaming text with smooth animation
- */
+/** Displays text with character-by-character animation for smooth streaming */
 export const SmoothStreamingText = memo(
 ({ content, isStreaming }: SmoothStreamingTextProps) => {
-// Initialize with full content when not streaming to avoid flash on page load
 const [displayedContent, setDisplayedContent] = useState(() => (isStreaming ? '' : content))
 const contentRef = useRef(content)
 const timeoutRef = useRef<NodeJS.Timeout | null>(null)
-// Initialize index based on streaming state
 const indexRef = useRef(isStreaming ? 0 : content.length)
 const isAnimatingRef = useRef(false)
@@ -95,7 +75,6 @@
 }
 }
 } else {
-// Streaming ended - show full content immediately
 if (timeoutRef.current) {
 clearTimeout(timeoutRef.current)
 }
@@ -119,7 +98,6 @@
 )
 },
 (prevProps, nextProps) => {
-// Prevent re-renders during streaming unless content actually changed
 return (
 prevProps.content === nextProps.content && prevProps.isStreaming === nextProps.isStreaming
 )

View File

@@ -3,66 +3,45 @@
import { memo, useEffect, useMemo, useRef, useState } from 'react'
import clsx from 'clsx'
import { ChevronUp } from 'lucide-react'
-import CopilotMarkdownRenderer from './markdown-renderer'
+import { CopilotMarkdownRenderer } from '../markdown-renderer'
-/**
- * Removes thinking tags (raw or escaped) from streamed content.
- */
+/** Removes thinking tags (raw or escaped) and special tags from streamed content */
function stripThinkingTags(text: string): string {
return text
.replace(/<\/?thinking[^>]*>/gi, '')
.replace(/&lt;\/?thinking[^&]*&gt;/gi, '')
+.replace(/<options>[\s\S]*?<\/options>/gi, '')
+.replace(/<options>[\s\S]*$/gi, '')
+.replace(/<plan>[\s\S]*?<\/plan>/gi, '')
+.replace(/<plan>[\s\S]*$/gi, '')
.trim()
}
-/**
- * Max height for thinking content before internal scrolling kicks in
- */
-const THINKING_MAX_HEIGHT = 150
-/**
- * Height threshold before gradient fade kicks in
- */
-const GRADIENT_THRESHOLD = 100
-/**
- * Interval for auto-scroll during streaming (ms)
- */
+/** Interval for auto-scroll during streaming (ms) */
const SCROLL_INTERVAL = 50
-/**
- * Timer update interval in milliseconds
- */
+/** Timer update interval in milliseconds */
const TIMER_UPDATE_INTERVAL = 100
-/**
- * Thinking text streaming - much faster than main text
- * Essentially instant with minimal delay
- */
+/** Thinking text streaming delay - faster than main text */
const THINKING_DELAY = 0.5
const THINKING_CHARS_PER_FRAME = 3
-/**
- * Props for the SmoothThinkingText component
- */
+/** Props for the SmoothThinkingText component */
interface SmoothThinkingTextProps {
content: string
isStreaming: boolean
}
/**
- * SmoothThinkingText renders thinking content with fast streaming animation
- * Uses gradient fade at top when content is tall enough
+ * Renders thinking content with fast streaming animation.
 */
const SmoothThinkingText = memo(
({ content, isStreaming }: SmoothThinkingTextProps) => {
-// Initialize with full content when not streaming to avoid flash on page load
const [displayedContent, setDisplayedContent] = useState(() => (isStreaming ? '' : content))
-const [showGradient, setShowGradient] = useState(false)
const contentRef = useRef(content)
const textRef = useRef<HTMLDivElement>(null)
const rafRef = useRef<number | null>(null)
-// Initialize index based on streaming state
const indexRef = useRef(isStreaming ? 0 : content.length)
const lastFrameTimeRef = useRef<number>(0)
const isAnimatingRef = useRef(false)
@@ -88,7 +67,6 @@ const SmoothThinkingText = memo(
if (elapsed >= THINKING_DELAY) {
if (currentIndex < currentContent.length) {
-// Reveal multiple characters per frame for faster streaming
const newIndex = Math.min(
currentIndex + THINKING_CHARS_PER_FRAME,
currentContent.length
@@ -110,7 +88,6 @@ const SmoothThinkingText = memo(
rafRef.current = requestAnimationFrame(animateText)
}
} else {
-// Streaming ended - show full content immediately
if (rafRef.current) {
cancelAnimationFrame(rafRef.current)
}
@@ -127,30 +104,10 @@ const SmoothThinkingText = memo(
}
}, [content, isStreaming])
-// Check if content height exceeds threshold for gradient
-useEffect(() => {
-if (textRef.current && isStreaming) {
-const height = textRef.current.scrollHeight
-setShowGradient(height > GRADIENT_THRESHOLD)
-} else {
-setShowGradient(false)
-}
-}, [displayedContent, isStreaming])
-// Apply vertical gradient fade at the top only when content is tall enough
-const gradientStyle =
-isStreaming && showGradient
-? {
-maskImage: 'linear-gradient(to bottom, transparent 0%, black 30%, black 100%)',
-WebkitMaskImage: 'linear-gradient(to bottom, transparent 0%, black 30%, black 100%)',
-}
-: undefined
return (
<div
ref={textRef}
className='[&_*]:!text-[var(--text-muted)] [&_*]:!text-[12px] [&_*]:!leading-[1.4] [&_p]:!m-0 [&_p]:!mb-1 [&_h1]:!text-[12px] [&_h1]:!font-semibold [&_h1]:!m-0 [&_h1]:!mb-1 [&_h2]:!text-[12px] [&_h2]:!font-semibold [&_h2]:!m-0 [&_h2]:!mb-1 [&_h3]:!text-[12px] [&_h3]:!font-semibold [&_h3]:!m-0 [&_h3]:!mb-1 [&_code]:!text-[11px] [&_ul]:!pl-5 [&_ul]:!my-1 [&_ol]:!pl-6 [&_ol]:!my-1 [&_li]:!my-0.5 [&_li]:!py-0 font-season text-[12px] text-[var(--text-muted)]'
-style={gradientStyle}
>
<CopilotMarkdownRenderer content={displayedContent} />
</div>
@@ -165,9 +122,7 @@ const SmoothThinkingText = memo(
SmoothThinkingText.displayName = 'SmoothThinkingText'
-/**
- * Props for the ThinkingBlock component
- */
+/** Props for the ThinkingBlock component */
interface ThinkingBlockProps {
/** Content of the thinking block */
content: string
@@ -182,13 +137,8 @@ interface ThinkingBlockProps {
}
/**
- * ThinkingBlock component displays AI reasoning/thinking process
- * Shows collapsible content with duration timer
- * Auto-expands during streaming and collapses when complete
- * Auto-collapses when a tool call or other content comes in after it
- *
- * @param props - Component props
- * @returns Thinking block with expandable content and timer
+ * Displays AI reasoning/thinking process with collapsible content and duration timer.
+ * Auto-expands during streaming and collapses when complete.
 */
export function ThinkingBlock({
content,
@@ -197,7 +147,6 @@ export function ThinkingBlock({
label = 'Thought',
hasSpecialTags = false,
}: ThinkingBlockProps) {
-// Strip thinking tags from content on render to handle persisted messages
const cleanContent = useMemo(() => stripThinkingTags(content || ''), [content])
const [isExpanded, setIsExpanded] = useState(false)
@@ -209,12 +158,8 @@ export function ThinkingBlock({
const lastScrollTopRef = useRef(0)
const programmaticScrollRef = useRef(false)
-/**
- * Auto-expands block when streaming with content
- * Auto-collapses when streaming ends OR when following content arrives
- */
+/** Auto-expands during streaming, auto-collapses when streaming ends or following content arrives */
useEffect(() => {
-// Collapse if streaming ended, there's following content, or special tags arrived
if (!isStreaming || hasFollowingContent || hasSpecialTags) {
setIsExpanded(false)
userCollapsedRef.current = false
@@ -227,7 +172,6 @@ export function ThinkingBlock({
}
}, [isStreaming, cleanContent, hasFollowingContent, hasSpecialTags])
-// Reset start time when streaming begins
useEffect(() => {
if (isStreaming && !hasFollowingContent) {
startTimeRef.current = Date.now()
@@ -236,9 +180,7 @@ export function ThinkingBlock({
}
}, [isStreaming, hasFollowingContent])
-// Update duration timer during streaming (stop when following content arrives)
useEffect(() => {
-// Stop timer if not streaming or if there's following content (thinking is done)
if (!isStreaming || hasFollowingContent) return
const interval = setInterval(() => {
@@ -248,7 +190,6 @@ export function ThinkingBlock({
return () => clearInterval(interval)
}, [isStreaming, hasFollowingContent])
-// Handle scroll events to detect user scrolling away
useEffect(() => {
const container = scrollContainerRef.current
if (!container || !isExpanded) return
@@ -267,7 +208,6 @@ export function ThinkingBlock({
setUserHasScrolledAway(true)
}
-// Re-stick if user scrolls back to bottom with intent
if (userHasScrolledAway && isNearBottom && delta > 10) {
setUserHasScrolledAway(false)
}
@@ -281,7 +221,6 @@ export function ThinkingBlock({
return () => container.removeEventListener('scroll', handleScroll)
}, [isExpanded, userHasScrolledAway])
-// Smart auto-scroll: always scroll to bottom while streaming unless user scrolled away
useEffect(() => {
if (!isStreaming || !isExpanded || userHasScrolledAway) return
@@ -302,20 +241,16 @@ export function ThinkingBlock({
return () => window.clearInterval(intervalId)
}, [isStreaming, isExpanded, userHasScrolledAway])
-/**
- * Formats duration in milliseconds to seconds
- * Always shows seconds, rounded to nearest whole second, minimum 1s
- */
+/** Formats duration in milliseconds to seconds (minimum 1s) */
const formatDuration = (ms: number) => {
const seconds = Math.max(1, Math.round(ms / 1000))
return `${seconds}s`
}
const hasContent = cleanContent.length > 0
-// Thinking is "done" when streaming ends OR when there's following content (like a tool call) OR when special tags appear
const isThinkingDone = !isStreaming || hasFollowingContent || hasSpecialTags
const durationText = `${label} for ${formatDuration(duration)}`
-// Convert past tense label to present tense for streaming (e.g., "Thought" → "Thinking")
const getStreamingLabel = (lbl: string) => {
if (lbl === 'Thought') return 'Thinking'
if (lbl.endsWith('ed')) return `${lbl.slice(0, -2)}ing`
@@ -323,11 +258,9 @@ export function ThinkingBlock({
}
const streamingLabel = getStreamingLabel(label)
-// During streaming: show header with shimmer effect + expanded content
if (!isThinkingDone) {
return (
<div>
-{/* Define shimmer keyframes */}
<style>{`
@keyframes thinking-shimmer {
0% { background-position: 150% 0; }
@@ -396,7 +329,6 @@ export function ThinkingBlock({
)
}
-// After done: show collapsible header with duration
return (
<div>
<button
@@ -426,7 +358,6 @@ export function ThinkingBlock({
isExpanded ? 'mt-1.5 max-h-[150px] opacity-100' : 'max-h-0 opacity-0'
)}
>
-{/* Completed thinking text - dimmed with markdown */}
<div className='[&_*]:!text-[var(--text-muted)] [&_*]:!text-[12px] [&_*]:!leading-[1.4] [&_p]:!m-0 [&_p]:!mb-1 [&_h1]:!text-[12px] [&_h1]:!font-semibold [&_h1]:!m-0 [&_h1]:!mb-1 [&_h2]:!text-[12px] [&_h2]:!font-semibold [&_h2]:!m-0 [&_h2]:!mb-1 [&_h3]:!text-[12px] [&_h3]:!font-semibold [&_h3]:!m-0 [&_h3]:!mb-1 [&_code]:!text-[11px] [&_ul]:!pl-5 [&_ul]:!my-1 [&_ol]:!pl-6 [&_ol]:!my-1 [&_li]:!my-0.5 [&_li]:!py-0 font-season text-[12px] text-[var(--text-muted)]'>
<CopilotMarkdownRenderer content={cleanContent} />
</div>

View File

@@ -9,18 +9,20 @@ import {
ToolCall,
} from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components'
import {
+CheckpointConfirmation,
FileAttachmentDisplay,
SmoothStreamingText,
StreamingIndicator,
ThinkingBlock,
UsageLimitActions,
} from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/components'
-import CopilotMarkdownRenderer from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/components/markdown-renderer'
+import { CopilotMarkdownRenderer } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/components/markdown-renderer'
import {
useCheckpointManagement,
useMessageEditing,
} from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/hooks'
import { UserInput } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/user-input'
+import { buildMentionHighlightNodes } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/utils'
import type { CopilotMessage as CopilotMessageType } from '@/stores/panel'
import { useCopilotStore } from '@/stores/panel'
@@ -68,7 +70,6 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
const isUser = message.role === 'user'
const isAssistant = message.role === 'assistant'
-// Store state
const {
messageCheckpoints: allMessageCheckpoints,
messages,
@@ -79,23 +80,18 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
isAborting,
} = useCopilotStore()
-// Get checkpoints for this message if it's a user message
const messageCheckpoints = isUser ? allMessageCheckpoints[message.id] || [] : []
const hasCheckpoints = messageCheckpoints.length > 0 && messageCheckpoints.some((cp) => cp?.id)
-// Check if this is the last user message (for showing abort button)
const isLastUserMessage = useMemo(() => {
if (!isUser) return false
const userMessages = messages.filter((m) => m.role === 'user')
return userMessages.length > 0 && userMessages[userMessages.length - 1]?.id === message.id
}, [isUser, messages, message.id])
-// UI state
const [isHoveringMessage, setIsHoveringMessage] = useState(false)
const cancelEditRef = useRef<(() => void) | null>(null)
-// Checkpoint management hook
const {
showRestoreConfirmation,
showCheckpointDiscardModal,
@@ -118,7 +114,6 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
() => cancelEditRef.current?.()
)
-// Message editing hook
const {
isEditMode,
isExpanded,
@@ -147,27 +142,20 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
cancelEditRef.current = handleCancelEdit
-// Get clean text content with double newline parsing
const cleanTextContent = useMemo(() => {
if (!message.content) return ''
-// Parse out excessive newlines (more than 2 consecutive newlines)
return message.content.replace(/\n{3,}/g, '\n\n')
}, [message.content])
-// Parse special tags from message content (options, plan)
-// Parse during streaming to show options/plan as they stream in
const parsedTags = useMemo(() => {
if (isUser) return null
-// Try message.content first
if (message.content) {
const parsed = parseSpecialTags(message.content)
if (parsed.options || parsed.plan) return parsed
}
-// During streaming, check content blocks for options/plan
-if (isStreaming && message.contentBlocks && message.contentBlocks.length > 0) {
+if (message.contentBlocks && message.contentBlocks.length > 0) {
for (const block of message.contentBlocks) {
if (block.type === 'text' && block.content) {
const parsed = parseSpecialTags(block.content)
@@ -176,23 +164,42 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
}
}
-return message.content ? parseSpecialTags(message.content) : null
-}, [message.content, message.contentBlocks, isUser, isStreaming])
+return null
+}, [message.content, message.contentBlocks, isUser])
+const selectedOptionKey = useMemo(() => {
+if (!parsedTags?.options || isStreaming) return null
+const currentIndex = messages.findIndex((m) => m.id === message.id)
+if (currentIndex === -1 || currentIndex >= messages.length - 1) return null
+const nextMessage = messages[currentIndex + 1]
+if (!nextMessage || nextMessage.role !== 'user') return null
+const nextContent = nextMessage.content?.trim()
+if (!nextContent) return null
+for (const [key, option] of Object.entries(parsedTags.options)) {
+const optionTitle = typeof option === 'string' ? option : option.title
+if (nextContent === optionTitle) {
+return key
+}
+}
+return null
+}, [parsedTags?.options, messages, message.id, isStreaming])
-// Get sendMessage from store for continuation actions
const sendMessage = useCopilotStore((s) => s.sendMessage)
-// Handler for option selection
const handleOptionSelect = useCallback(
(_optionKey: string, optionText: string) => {
-// Send the option text as a message
sendMessage(optionText)
},
[sendMessage]
)
-// Memoize content blocks to avoid re-rendering unchanged blocks
-// No entrance animations to prevent layout shift
+const isActivelyStreaming = isLastMessage && isStreaming
const memoizedContentBlocks = useMemo(() => {
if (!message.contentBlocks || message.contentBlocks.length === 0) {
return null
@@ -202,21 +209,21 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
if (block.type === 'text') {
const isLastTextBlock =
index === message.contentBlocks!.length - 1 && block.type === 'text'
-// Always strip special tags from display (they're rendered separately as options/plan)
const parsed = parseSpecialTags(block.content)
const cleanBlockContent = parsed.cleanContent.replace(/\n{3,}/g, '\n\n')
-// Skip if no content after stripping tags
if (!cleanBlockContent.trim()) return null
-// Use smooth streaming for the last text block if we're streaming
-const shouldUseSmoothing = isStreaming && isLastTextBlock
+const shouldUseSmoothing = isActivelyStreaming && isLastTextBlock
const blockKey = `text-${index}-${block.timestamp || index}`
return (
<div key={blockKey} className='w-full max-w-full'>
{shouldUseSmoothing ? (
-<SmoothStreamingText content={cleanBlockContent} isStreaming={isStreaming} />
+<SmoothStreamingText
+content={cleanBlockContent}
+isStreaming={isActivelyStreaming}
+/>
) : (
<CopilotMarkdownRenderer content={cleanBlockContent} />
)}
@@ -224,9 +231,7 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
)
}
if (block.type === 'thinking') {
-// Check if there are any blocks after this one (tool calls, text, etc.)
const hasFollowingContent = index < message.contentBlocks!.length - 1
-// Check if special tags (options, plan) are present - should also close thinking
const hasSpecialTags = !!(parsedTags?.options || parsedTags?.plan)
const blockKey = `thinking-${index}-${block.timestamp || index}`
@@ -234,7 +239,7 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
<div key={blockKey} className='w-full'>
<ThinkingBlock
content={block.content}
-isStreaming={isStreaming}
+isStreaming={isActivelyStreaming}
hasFollowingContent={hasFollowingContent}
hasSpecialTags={hasSpecialTags}
/>
@@ -246,18 +251,22 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
return (
<div key={blockKey}>
-<ToolCall toolCallId={block.toolCall.id} toolCall={block.toolCall} />
+<ToolCall
+toolCallId={block.toolCall.id}
+toolCall={block.toolCall}
+isCurrentMessage={isLastMessage}
+/>
</div>
)
}
return null
})
-}, [message.contentBlocks, isStreaming, parsedTags])
+}, [message.contentBlocks, isActivelyStreaming, parsedTags, isLastMessage])
if (isUser) {
return (
<div
-className={`w-full max-w-full overflow-hidden transition-opacity duration-200 [max-width:var(--panel-max-width)] ${isDimmed ? 'opacity-40' : 'opacity-100'}`}
+className={`w-full max-w-full flex-none overflow-hidden transition-opacity duration-200 [max-width:var(--panel-max-width)] ${isDimmed ? 'opacity-40' : 'opacity-100'}`}
style={{ '--panel-max-width': `${panelWidth - 16}px` } as React.CSSProperties}
>
{isEditMode ? (
@@ -288,42 +297,15 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
initialContexts={message.contexts}
/>
-{/* Inline Checkpoint Discard Confirmation - shown below input in edit mode */}
+{/* Inline checkpoint confirmation - shown below input in edit mode */}
{showCheckpointDiscardModal && (
-<div className='mt-[8px] rounded-[4px] border border-[var(--border)] bg-[var(--surface-4)] p-[10px]'>
-<p className='mb-[8px] text-[12px] text-[var(--text-primary)]'>
-Continue from a previous message?
-</p>
-<div className='flex gap-[8px]'>
-<Button
-onClick={handleCancelCheckpointDiscard}
-variant='active'
-size='sm'
-className='flex-1'
-disabled={isProcessingDiscard}
->
-Cancel
-</Button>
-<Button
-onClick={handleContinueAndRevert}
-variant='destructive'
-size='sm'
-className='flex-1'
-disabled={isProcessingDiscard}
->
-{isProcessingDiscard ? 'Reverting...' : 'Revert'}
-</Button>
-<Button
-onClick={handleContinueWithoutRevert}
-variant='tertiary'
-size='sm'
-className='flex-1'
-disabled={isProcessingDiscard}
->
-Continue
-</Button>
-</div>
-</div>
+<CheckpointConfirmation
+variant='discard'
+isProcessing={isProcessingDiscard}
+onCancel={handleCancelCheckpointDiscard}
+onRevert={handleContinueAndRevert}
+onContinue={handleContinueWithoutRevert}
+/>
)}
</div>
) : (
@@ -348,46 +330,15 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
ref={messageContentRef}
className={`relative whitespace-pre-wrap break-words px-[2px] py-1 font-medium font-sans text-[var(--text-primary)] text-sm leading-[1.25rem] ${isSendingMessage && isLastUserMessage && isHoveringMessage ? 'pr-7' : ''} ${!isExpanded && needsExpansion ? 'max-h-[60px] overflow-hidden' : 'overflow-visible'}`}
>
-{(() => {
-const text = message.content || ''
-const contexts: any[] = Array.isArray((message as any).contexts)
-? ((message as any).contexts as any[])
-: []
-// Build tokens with their prefixes (@ for mentions, / for commands)
-const tokens = contexts
-.filter((c) => c?.kind !== 'current_workflow' && c?.label)
-.map((c) => {
-const prefix = c?.kind === 'slash_command' ? '/' : '@'
-return `${prefix}${c.label}`
-})
-if (!tokens.length) return text
-const escapeRegex = (s: string) => s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
-const pattern = new RegExp(`(${tokens.map(escapeRegex).join('|')})`, 'g')
-const nodes: React.ReactNode[] = []
-let lastIndex = 0
-let match: RegExpExecArray | null
-while ((match = pattern.exec(text)) !== null) {
-const i = match.index
-const before = text.slice(lastIndex, i)
-if (before) nodes.push(before)
-const mention = match[0]
-nodes.push(
-<span
-key={`mention-${i}-${lastIndex}`}
-className='rounded-[4px] bg-[rgba(50,189,126,0.65)] py-[1px]'
->
-{mention}
-</span>
-)
-lastIndex = i + mention.length
-}
-const tail = text.slice(lastIndex)
-if (tail) nodes.push(tail)
-return nodes
-})()}
+{buildMentionHighlightNodes(
+message.content || '',
+message.contexts || [],
+(token, key) => (
+<span key={key} className='rounded-[4px] bg-[rgba(50,189,126,0.65)] py-[1px]'>
+{token}
+</span>
+)
+)}
</div>
{/* Gradient fade when truncated - applies to entire message box */}
@@ -437,65 +388,30 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
</div>
)}
-{/* Inline Restore Checkpoint Confirmation */}
+{/* Inline restore checkpoint confirmation */}
{showRestoreConfirmation && (
-<div className='mt-[8px] rounded-[4px] border border-[var(--border)] bg-[var(--surface-4)] p-[10px]'>
-<p className='mb-[8px] text-[12px] text-[var(--text-primary)]'>
-Revert to checkpoint? This will restore your workflow to the state saved at this
-checkpoint.{' '}
-<span className='text-[var(--text-error)]'>This action cannot be undone.</span>
-</p>
-<div className='flex gap-[8px]'>
-<Button
-onClick={handleCancelRevert}
-variant='active'
-size='sm'
-className='flex-1'
-disabled={isReverting}
->
-Cancel
-</Button>
-<Button
-onClick={handleConfirmRevert}
-variant='destructive'
-size='sm'
-className='flex-1'
-disabled={isReverting}
->
-{isReverting ? 'Reverting...' : 'Revert'}
-</Button>
-</div>
-</div>
+<CheckpointConfirmation
+variant='restore'
+isProcessing={isReverting}
+onCancel={handleCancelRevert}
+onRevert={handleConfirmRevert}
+/>
)}
</div>
)
}
-// Check if there's any visible content in the blocks
-const hasVisibleContent = useMemo(() => {
-if (!message.contentBlocks || message.contentBlocks.length === 0) return false
-return message.contentBlocks.some((block) => {
-if (block.type === 'text') {
-const parsed = parseSpecialTags(block.content)
-return parsed.cleanContent.trim().length > 0
-}
-return block.type === 'thinking' || block.type === 'tool_call'
-})
-}, [message.contentBlocks])
if (isAssistant) {
return (
<div
-className={`w-full max-w-full overflow-hidden [max-width:var(--panel-max-width)] ${isDimmed ? 'opacity-40' : 'opacity-100'}`}
+className={`w-full max-w-full flex-none overflow-hidden [max-width:var(--panel-max-width)] ${isDimmed ? 'opacity-40' : 'opacity-100'}`}
style={{ '--panel-max-width': `${panelWidth - 16}px` } as React.CSSProperties}
>
-<div className='max-w-full space-y-1 px-[2px]'>
+<div className='max-w-full space-y-[4px] px-[2px] pb-[4px]'>
{/* Content blocks in chronological order */}
-{memoizedContentBlocks}
-{isStreaming && (
-<StreamingIndicator className={!hasVisibleContent ? 'mt-1' : undefined} />
-)}
+{memoizedContentBlocks || (isStreaming && <div className='min-h-0' />)}
+{isStreaming && <StreamingIndicator />}
{message.errorType === 'usage_limit' && (
<div className='flex gap-1.5'>
@@ -534,6 +450,7 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
isLastMessage && !isStreaming && parsedTags.optionsComplete === true
}
streaming={isStreaming || !parsedTags.optionsComplete}
+selectedOptionKey={selectedOptionKey}
/>
)}
</div>
@@ -544,50 +461,22 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
return null
},
(prevProps, nextProps) => {
-// Custom comparison function for better streaming performance
const prevMessage = prevProps.message
const nextMessage = nextProps.message
-// If message IDs are different, always re-render
-if (prevMessage.id !== nextMessage.id) {
-return false
-}
-// If streaming state changed, re-render
-if (prevProps.isStreaming !== nextProps.isStreaming) {
-return false
-}
-// If dimmed state changed, re-render
-if (prevProps.isDimmed !== nextProps.isDimmed) {
-return false
-}
-// If panel width changed, re-render
-if (prevProps.panelWidth !== nextProps.panelWidth) {
-return false
-}
-// If checkpoint count changed, re-render
-if (prevProps.checkpointCount !== nextProps.checkpointCount) {
-return false
-}
-// If isLastMessage changed, re-render (for options visibility)
-if (prevProps.isLastMessage !== nextProps.isLastMessage) {
-return false
-}
+if (prevMessage.id !== nextMessage.id) return false
+if (prevProps.isStreaming !== nextProps.isStreaming) return false
+if (prevProps.isDimmed !== nextProps.isDimmed) return false
+if (prevProps.panelWidth !== nextProps.panelWidth) return false
+if (prevProps.checkpointCount !== nextProps.checkpointCount) return false
+if (prevProps.isLastMessage !== nextProps.isLastMessage) return false
-// For streaming messages, check if content actually changed
if (nextProps.isStreaming) {
const prevBlocks = prevMessage.contentBlocks || []
const nextBlocks = nextMessage.contentBlocks || []
-if (prevBlocks.length !== nextBlocks.length) {
-return false // Content blocks changed
-}
+if (prevBlocks.length !== nextBlocks.length) return false
-// Helper: get last block content by type
const getLastBlockContent = (blocks: any[], type: 'text' | 'thinking'): string | null => {
for (let i = blocks.length - 1; i >= 0; i--) {
const block = blocks[i]
@@ -598,7 +487,6 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
return null
}
-// Re-render if the last text block content changed
const prevLastTextContent = getLastBlockContent(prevBlocks as any[], 'text')
const nextLastTextContent = getLastBlockContent(nextBlocks as any[], 'text')
if (
@@ -609,7 +497,6 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
return false
}
-// Re-render if the last thinking block content changed
const prevLastThinkingContent = getLastBlockContent(prevBlocks as any[], 'thinking')
const nextLastThinkingContent = getLastBlockContent(nextBlocks as any[], 'thinking')
if (
@@ -620,24 +507,18 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
return false
}
-// Check if tool calls changed
const prevToolCalls = prevMessage.toolCalls || []
const nextToolCalls = nextMessage.toolCalls || []
-if (prevToolCalls.length !== nextToolCalls.length) {
-return false // Tool calls count changed
-}
+if (prevToolCalls.length !== nextToolCalls.length) return false
for (let i = 0; i < nextToolCalls.length; i++) {
-if (prevToolCalls[i]?.state !== nextToolCalls[i]?.state) {
-return false // Tool call state changed
-}
+if (prevToolCalls[i]?.state !== nextToolCalls[i]?.state) return false
}
return true
}
-// For non-streaming messages, do a deeper comparison including tool call states
if (
prevMessage.content !== nextMessage.content ||
prevMessage.role !== nextMessage.role ||
@@ -647,16 +528,12 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
return false
}
-// Check tool call states for non-streaming messages too
const prevToolCalls = prevMessage.toolCalls || []
const nextToolCalls = nextMessage.toolCalls || []
for (let i = 0; i < nextToolCalls.length; i++) {
-if (prevToolCalls[i]?.state !== nextToolCalls[i]?.state) {
-return false // Tool call state changed
-}
+if (prevToolCalls[i]?.state !== nextToolCalls[i]?.state) return false
}
-// Check contentBlocks tool call states
const prevContentBlocks = prevMessage.contentBlocks || []
const nextContentBlocks = nextMessage.contentBlocks || []
for (let i = 0; i < nextContentBlocks.length; i++) {
@@ -667,7 +544,7 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
nextBlock?.type === 'tool_call' &&
prevBlock.toolCall?.state !== nextBlock.toolCall?.state
) {
-return false // ContentBlock tool call state changed
+return false
}
}
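
The new buildMentionHighlightNodes helper is only imported above; its implementation is not part of this compare view. A rough sketch of what it plausibly does, reconstructed from the inline highlighting code it replaces (the signature, the MentionContext type, and the key format are assumptions; the real helper in user-input/utils may differ):

// Hypothetical sketch only - inferred from the removed inline IIFE, not the actual utils source.
import type { ReactNode } from 'react'

interface MentionContext {
  kind?: string
  label?: string
}

export function buildMentionHighlightNodes(
  text: string,
  contexts: MentionContext[],
  renderToken: (token: string, key: string) => ReactNode
): ReactNode[] | string {
  // Build "@label" / "/label" tokens, skipping the implicit current-workflow context.
  const tokens = contexts
    .filter((c) => c?.kind !== 'current_workflow' && c?.label)
    .map((c) => `${c.kind === 'slash_command' ? '/' : '@'}${c.label}`)
  if (!tokens.length) return text

  const escapeRegex = (s: string) => s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
  const pattern = new RegExp(`(${tokens.map(escapeRegex).join('|')})`, 'g')

  // Walk the text, emitting plain segments and rendered tokens in order.
  const nodes: ReactNode[] = []
  let lastIndex = 0
  let match: RegExpExecArray | null
  while ((match = pattern.exec(text)) !== null) {
    const before = text.slice(lastIndex, match.index)
    if (before) nodes.push(before)
    nodes.push(renderToken(match[0], `mention-${match.index}`))
    lastIndex = match.index + match[0].length
  }
  const tail = text.slice(lastIndex)
  if (tail) nodes.push(tail)
  return nodes
}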

View File

@@ -15,6 +15,7 @@ const logger = createLogger('useCheckpointManagement')
 * @param messageCheckpoints - Checkpoints for this message
 * @param onRevertModeChange - Callback for revert mode changes
 * @param onEditModeChange - Callback for edit mode changes
+ * @param onCancelEdit - Callback when edit is cancelled
 * @returns Checkpoint management utilities
 */
export function useCheckpointManagement(
@@ -37,17 +38,13 @@ export function useCheckpointManagement(
const { revertToCheckpoint, currentChat } = useCopilotStore()
-/**
- * Handles initiating checkpoint revert
- */
+/** Initiates checkpoint revert confirmation */
const handleRevertToCheckpoint = useCallback(() => {
setShowRestoreConfirmation(true)
onRevertModeChange?.(true)
}, [onRevertModeChange])
-/**
- * Confirms checkpoint revert and updates state
- */
+/** Confirms and executes checkpoint revert */
const handleConfirmRevert = useCallback(async () => {
if (messageCheckpoints.length > 0) {
const latestCheckpoint = messageCheckpoints[0]
@@ -116,18 +113,13 @@ export function useCheckpointManagement(
onRevertModeChange,
])
-/**
- * Cancels checkpoint revert
- */
+/** Cancels checkpoint revert */
const handleCancelRevert = useCallback(() => {
setShowRestoreConfirmation(false)
onRevertModeChange?.(false)
}, [onRevertModeChange])
-/**
- * Handles "Continue and revert" action for checkpoint discard modal
- * Reverts to checkpoint then proceeds with pending edit
- */
+/** Reverts to checkpoint then proceeds with pending edit */
const handleContinueAndRevert = useCallback(async () => {
setIsProcessingDiscard(true)
try {
@@ -184,9 +176,7 @@ export function useCheckpointManagement(
}
}, [messageCheckpoints, revertToCheckpoint, message, messages, onEditModeChange, onCancelEdit])
-/**
- * Cancels checkpoint discard and clears pending edit
- */
+/** Cancels checkpoint discard and clears pending edit */
const handleCancelCheckpointDiscard = useCallback(() => {
setShowCheckpointDiscardModal(false)
onEditModeChange?.(false)
@@ -194,11 +184,11 @@ export function useCheckpointManagement(
pendingEditRef.current = null
}, [onEditModeChange, onCancelEdit])
-/**
- * Continues with edit WITHOUT reverting checkpoint
- */
+/** Continues with edit without reverting checkpoint */
const handleContinueWithoutRevert = useCallback(async () => {
setShowCheckpointDiscardModal(false)
+onEditModeChange?.(false)
+onCancelEdit?.()
if (pendingEditRef.current) {
const { message: msg, fileAttachments, contexts } = pendingEditRef.current
@@ -225,43 +215,34 @@ export function useCheckpointManagement(
}
}, [message, messages, onEditModeChange, onCancelEdit])
-/**
- * Handles keyboard events for restore confirmation (Escape/Enter)
- */
+/** Handles keyboard events for confirmation dialogs */
useEffect(() => {
-if (!showRestoreConfirmation) return
+const isActive = showRestoreConfirmation || showCheckpointDiscardModal
+if (!isActive) return
const handleKeyDown = (event: KeyboardEvent) => {
+if (event.defaultPrevented) return
if (event.key === 'Escape') {
-handleCancelRevert()
+if (showRestoreConfirmation) handleCancelRevert()
+else handleCancelCheckpointDiscard()
} else if (event.key === 'Enter') {
event.preventDefault()
-handleConfirmRevert()
+if (showRestoreConfirmation) handleConfirmRevert()
+else handleContinueAndRevert()
}
}
document.addEventListener('keydown', handleKeyDown)
return () => document.removeEventListener('keydown', handleKeyDown)
-}, [showRestoreConfirmation, handleCancelRevert, handleConfirmRevert])
-/**
- * Handles keyboard events for checkpoint discard modal (Escape/Enter)
- */
-useEffect(() => {
-if (!showCheckpointDiscardModal) return
-const handleCheckpointDiscardKeyDown = async (event: KeyboardEvent) => {
-if (event.key === 'Escape') {
-handleCancelCheckpointDiscard()
-} else if (event.key === 'Enter') {
-event.preventDefault()
-await handleContinueAndRevert()
-}
-}
-document.addEventListener('keydown', handleCheckpointDiscardKeyDown)
-return () => document.removeEventListener('keydown', handleCheckpointDiscardKeyDown)
-}, [showCheckpointDiscardModal, handleCancelCheckpointDiscard, handleContinueAndRevert])
+}, [
+showRestoreConfirmation,
+showCheckpointDiscardModal,
+handleCancelRevert,
+handleConfirmRevert,
+handleCancelCheckpointDiscard,
+handleContinueAndRevert,
+])
return {
// State

View File

@@ -2,24 +2,23 @@
import { useCallback, useEffect, useRef, useState } from 'react'
import { createLogger } from '@sim/logger'
-import type { CopilotMessage } from '@/stores/panel'
+import type { ChatContext, CopilotMessage, MessageFileAttachment } from '@/stores/panel'
import { useCopilotStore } from '@/stores/panel'
const logger = createLogger('useMessageEditing')
-/**
- * Message truncation height in pixels
- */
+/** Ref interface for UserInput component */
+interface UserInputRef {
+focus: () => void
+}
+/** Message truncation height in pixels */
const MESSAGE_TRUNCATION_HEIGHT = 60
-/**
- * Delay before attaching click-outside listener to avoid immediate trigger
- */
+/** Delay before attaching click-outside listener to avoid immediate trigger */
const CLICK_OUTSIDE_DELAY = 100
-/**
- * Delay before aborting when editing during stream
- */
+/** Delay before aborting when editing during stream */
const ABORT_DELAY = 100
interface UseMessageEditingProps {
@@ -32,8 +31,8 @@ interface UseMessageEditingProps {
setShowCheckpointDiscardModal: (show: boolean) => void
pendingEditRef: React.MutableRefObject<{
message: string
-fileAttachments?: any[]
-contexts?: any[]
+fileAttachments?: MessageFileAttachment[]
+contexts?: ChatContext[]
} | null>
/**
 * When true, disables the internal document click-outside handler.
@@ -69,13 +68,11 @@ export function useMessageEditing(props: UseMessageEditingProps) {
const editContainerRef = useRef<HTMLDivElement>(null)
const messageContentRef = useRef<HTMLDivElement>(null)
-const userInputRef = useRef<any>(null)
+const userInputRef = useRef<UserInputRef>(null)
const { sendMessage, isSendingMessage, abortMessage, currentChat } = useCopilotStore()
-/**
- * Checks if message content needs expansion based on height
- */
+/** Checks if message content needs expansion based on height */
useEffect(() => {
if (messageContentRef.current && message.role === 'user') {
const scrollHeight = messageContentRef.current.scrollHeight
@@ -83,9 +80,7 @@ export function useMessageEditing(props: UseMessageEditingProps) {
}
}, [message.content, message.role])
-/**
- * Handles entering edit mode
- */
+/** Enters edit mode */
const handleEditMessage = useCallback(() => {
setIsEditMode(true)
setIsExpanded(false)
@@ -97,18 +92,14 @@ export function useMessageEditing(props: UseMessageEditingProps) {
}, 0)
}, [message.content, onEditModeChange])
-/**
- * Handles canceling edit mode
- */
+/** Cancels edit mode */
const handleCancelEdit = useCallback(() => {
setIsEditMode(false)
setEditedContent(message.content)
onEditModeChange?.(false)
}, [message.content, onEditModeChange])
-/**
- * Handles clicking on message to enter edit mode
- */
+/** Handles message click to enter edit mode */
const handleMessageClick = useCallback(() => {
if (needsExpansion && !isExpanded) {
setIsExpanded(true)
@@ -116,12 +107,13 @@ export function useMessageEditing(props: UseMessageEditingProps) {
handleEditMessage()
}, [needsExpansion, isExpanded, handleEditMessage])
-/**
- * Performs the actual edit operation
- * Truncates messages after edited message and resends with same ID
- */
+/** Performs the edit operation - truncates messages after edited message and resends */
const performEdit = useCallback(
-async (editedMessage: string, fileAttachments?: any[], contexts?: any[]) => {
+async (
+editedMessage: string,
+fileAttachments?: MessageFileAttachment[],
+contexts?: ChatContext[]
+) => {
const currentMessages = messages
const editIndex = currentMessages.findIndex((m) => m.id === message.id)
@@ -134,7 +126,7 @@ export function useMessageEditing(props: UseMessageEditingProps) {
...message,
content: editedMessage,
fileAttachments: fileAttachments || message.fileAttachments,
-contexts: contexts || (message as any).contexts,
+contexts: contexts || message.contexts,
}
useCopilotStore.setState({ messages: [...truncatedMessages, updatedMessage] })
@@ -153,7 +145,7 @@ export function useMessageEditing(props: UseMessageEditingProps) {
timestamp: m.timestamp,
...(m.contentBlocks && { contentBlocks: m.contentBlocks }),
...(m.fileAttachments && { fileAttachments: m.fileAttachments }),
-...((m as any).contexts && { contexts: (m as any).contexts }),
+...(m.contexts && { contexts: m.contexts }),
})),
}),
})
@@ -164,7 +156,7 @@ export function useMessageEditing(props: UseMessageEditingProps) {
await sendMessage(editedMessage, {
fileAttachments: fileAttachments || message.fileAttachments,
-contexts: contexts || (message as any).contexts,
+contexts: contexts || message.contexts,
messageId: message.id,
queueIfBusy: false,
})
@@ -173,12 +165,13 @@ export function useMessageEditing(props: UseMessageEditingProps) {
[messages, message, currentChat, sendMessage, onEditModeChange]
)
-/**
- * Handles submitting edited message
- * Checks for checkpoints and shows confirmation if needed
- */
+/** Submits edited message, checking for checkpoints first */
const handleSubmitEdit = useCallback(
-async (editedMessage: string, fileAttachments?: any[], contexts?: any[]) => {
+async (
+editedMessage: string,
+fileAttachments?: MessageFileAttachment[],
+contexts?: ChatContext[]
+) => {
if (!editedMessage.trim()) return
if (isSendingMessage) {
@@ -204,9 +197,7 @@ export function useMessageEditing(props: UseMessageEditingProps) {
]
)
-/**
- * Keyboard-only exit (Esc). Click-outside is optionally handled by parent.
- */
+/** Keyboard-only exit (Esc) */
useEffect(() => {
if (!isEditMode) return
@@ -222,9 +213,7 @@ export function useMessageEditing(props: UseMessageEditingProps) {
}
}, [isEditMode, handleCancelEdit])
-/**
- * Optional document-level click-outside handler (disabled when parent manages it).
- */
+/** Optional document-level click-outside handler */
useEffect(() => {
if (!isEditMode || disableDocumentClickOutside) return

View File

@@ -1,7 +1,8 @@
-export * from './copilot-message/copilot-message'
-export * from './plan-mode-section/plan-mode-section'
-export * from './queued-messages/queued-messages'
-export * from './todo-list/todo-list'
-export * from './tool-call/tool-call'
-export * from './user-input/user-input'
-export * from './welcome/welcome'
+export * from './chat-history-skeleton'
+export * from './copilot-message'
+export * from './plan-mode-section'
+export * from './queued-messages'
+export * from './todo-list'
+export * from './tool-call'
+export * from './user-input'
+export * from './welcome'

View File

@@ -29,7 +29,7 @@ import { Check, GripHorizontal, Pencil, X } from 'lucide-react'
import { Button, Textarea } from '@/components/emcn'
import { Trash } from '@/components/emcn/icons/trash'
import { cn } from '@/lib/core/utils/cn'
-import CopilotMarkdownRenderer from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/components/markdown-renderer'
+import { CopilotMarkdownRenderer } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/components/markdown-renderer'
/**
 * Shared border and background styles

View File

@@ -31,21 +31,22 @@ export function QueuedMessages() {
if (messageQueue.length === 0) return null if (messageQueue.length === 0) return null
return ( return (
<div className='mx-2 overflow-hidden rounded-t-lg border border-black/[0.08] border-b-0 bg-[var(--bg-secondary)] dark:border-white/[0.08]'> <div className='mx-[14px] overflow-hidden rounded-t-[4px] border border-[var(--border)] border-b-0 bg-[var(--bg-secondary)]'>
{/* Header */} {/* Header */}
<button <button
type='button' type='button'
onClick={() => setIsExpanded(!isExpanded)} onClick={() => setIsExpanded(!isExpanded)}
className='flex w-full items-center justify-between px-2.5 py-1.5 transition-colors hover:bg-[var(--bg-tertiary)]' className='flex w-full items-center justify-between px-[10px] py-[6px] transition-colors hover:bg-[var(--surface-3)]'
> >
<div className='flex items-center gap-1.5'> <div className='flex items-center gap-[6px]'>
{isExpanded ? ( {isExpanded ? (
<ChevronDown className='h-3 w-3 text-[var(--text-tertiary)]' /> <ChevronDown className='h-[14px] w-[14px] text-[var(--text-tertiary)]' />
) : ( ) : (
<ChevronRight className='h-3 w-3 text-[var(--text-tertiary)]' /> <ChevronRight className='h-[14px] w-[14px] text-[var(--text-tertiary)]' />
)} )}
<span className='font-medium text-[var(--text-secondary)] text-xs'> <span className='font-medium text-[12px] text-[var(--text-primary)]'>Queued</span>
{messageQueue.length} Queued <span className='flex-shrink-0 font-medium text-[12px] text-[var(--text-tertiary)]'>
{messageQueue.length}
</span> </span>
</div> </div>
</button> </button>
@@ -56,30 +57,30 @@ export function QueuedMessages() {
{messageQueue.map((msg) => ( {messageQueue.map((msg) => (
<div <div
key={msg.id} key={msg.id}
className='group flex items-center gap-2 border-black/[0.04] border-t px-2.5 py-1.5 hover:bg-[var(--bg-tertiary)] dark:border-white/[0.04]' className='group flex items-center gap-[8px] border-[var(--border)] border-t px-[10px] py-[6px] hover:bg-[var(--surface-3)]'
> >
{/* Radio indicator */} {/* Radio indicator */}
<div className='flex h-3 w-3 shrink-0 items-center justify-center'> <div className='flex h-[14px] w-[14px] shrink-0 items-center justify-center'>
<div className='h-2.5 w-2.5 rounded-full border border-[var(--text-tertiary)]/50' /> <div className='h-[10px] w-[10px] rounded-full border border-[var(--text-tertiary)]/50' />
</div> </div>
{/* Message content */} {/* Message content */}
<div className='min-w-0 flex-1'> <div className='min-w-0 flex-1'>
<p className='truncate text-[var(--text-primary)] text-xs'>{msg.content}</p> <p className='truncate text-[13px] text-[var(--text-primary)]'>{msg.content}</p>
</div> </div>
{/* Actions - always visible */} {/* Actions - always visible */}
<div className='flex shrink-0 items-center gap-0.5'> <div className='flex shrink-0 items-center gap-[4px]'>
<button <button
type='button' type='button'
onClick={(e) => { onClick={(e) => {
e.stopPropagation() e.stopPropagation()
handleSendNow(msg.id) handleSendNow(msg.id)
}} }}
className='rounded p-0.5 text-[var(--text-tertiary)] transition-colors hover:bg-[var(--bg-quaternary)] hover:text-[var(--text-primary)]' className='rounded p-[3px] text-[var(--text-tertiary)] transition-colors hover:bg-[var(--bg-quaternary)] hover:text-[var(--text-primary)]'
title='Send now (aborts current stream)' title='Send now (aborts current stream)'
> >
<ArrowUp className='h-3 w-3' /> <ArrowUp className='h-[14px] w-[14px]' />
</button> </button>
<button <button
type='button' type='button'
@@ -87,10 +88,10 @@ export function QueuedMessages() {
e.stopPropagation() e.stopPropagation()
handleRemove(msg.id) handleRemove(msg.id)
}} }}
className='rounded p-0.5 text-[var(--text-tertiary)] transition-colors hover:bg-red-500/10 hover:text-red-400' className='rounded p-[3px] text-[var(--text-tertiary)] transition-colors hover:bg-red-500/10 hover:text-red-400'
title='Remove from queue' title='Remove from queue'
> >
<Trash2 className='h-3 w-3' /> <Trash2 className='h-[14px] w-[14px]' />
</button> </button>
</div> </div>
</div> </div>
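The queue actions above ("Send now", "Remove from queue") imply a store shaped roughly like the sketch below. This is an assumption for illustration, not the project's actual copilot store, whose real API is not shown in this diff.

```typescript
import { create } from 'zustand'

interface QueuedMessage {
  id: string
  content: string
}

interface MessageQueueState {
  messageQueue: QueuedMessage[]
  enqueue: (msg: QueuedMessage) => void
  /** Remove a message from the queue without sending it. */
  remove: (id: string) => void
  /** Pop a specific message so the caller can abort the current stream and send it right away. */
  takeForSendNow: (id: string) => QueuedMessage | undefined
}

export const useMessageQueueStore = create<MessageQueueState>()((set, get) => ({
  messageQueue: [],
  enqueue: (msg) => set((s) => ({ messageQueue: [...s.messageQueue, msg] })),
  remove: (id) => set((s) => ({ messageQueue: s.messageQueue.filter((m) => m.id !== id) })),
  takeForSendNow: (id) => {
    const msg = get().messageQueue.find((m) => m.id === id)
    if (msg) {
      set((s) => ({ messageQueue: s.messageQueue.filter((m) => m.id !== id) }))
    }
    return msg
  },
}))
```

"Send now" would then call `takeForSendNow`, abort the in-flight stream, and submit the returned message immediately, which matches the button title in the diff.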

View File

@@ -15,7 +15,7 @@ import {
hasInterrupt as hasInterruptFromConfig, hasInterrupt as hasInterruptFromConfig,
isSpecialTool as isSpecialToolFromConfig, isSpecialTool as isSpecialToolFromConfig,
} from '@/lib/copilot/tools/client/ui-config' } from '@/lib/copilot/tools/client/ui-config'
import CopilotMarkdownRenderer from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/components/markdown-renderer' import { CopilotMarkdownRenderer } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/components/markdown-renderer'
import { SmoothStreamingText } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/components/smooth-streaming' import { SmoothStreamingText } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/components/smooth-streaming'
import { ThinkingBlock } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/components/thinking-block' import { ThinkingBlock } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/components/thinking-block'
import { getDisplayValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/workflow-block/workflow-block' import { getDisplayValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/workflow-block/workflow-block'
@@ -26,27 +26,30 @@ import { CLASS_TOOL_METADATA } from '@/stores/panel/copilot/store'
import type { SubAgentContentBlock } from '@/stores/panel/copilot/types' import type { SubAgentContentBlock } from '@/stores/panel/copilot/types'
import { useWorkflowStore } from '@/stores/workflows/workflow/store' import { useWorkflowStore } from '@/stores/workflows/workflow/store'
/** /** Plan step can be a string or an object with title and optional plan content */
* Plan step can be either a string or an object with title and plan
*/
type PlanStep = string | { title: string; plan?: string } type PlanStep = string | { title: string; plan?: string }
/** /** Option can be a string or an object with title and optional description */
* Option can be either a string or an object with title and description
*/
type OptionItem = string | { title: string; description?: string } type OptionItem = string | { title: string; description?: string }
/** Result of parsing special XML tags from message content */
interface ParsedTags { interface ParsedTags {
/** Parsed plan steps, keyed by step number */
plan?: Record<string, PlanStep> plan?: Record<string, PlanStep>
/** Whether the plan tag is complete (has closing tag) */
planComplete?: boolean planComplete?: boolean
/** Parsed options, keyed by option number */
options?: Record<string, OptionItem> options?: Record<string, OptionItem>
/** Whether the options tag is complete (has closing tag) */
optionsComplete?: boolean optionsComplete?: boolean
/** Content with special tags removed */
cleanContent: string cleanContent: string
} }
/** /**
* Extract plan steps from plan_respond tool calls in subagent blocks. * Extracts plan steps from plan_respond tool calls in subagent blocks.
* Returns { steps, isComplete } where steps is in the format expected by PlanSteps component. * @param blocks - The subagent content blocks to search
* @returns Object containing steps in the format expected by PlanSteps component, and completion status
*/ */
function extractPlanFromBlocks(blocks: SubAgentContentBlock[] | undefined): { function extractPlanFromBlocks(blocks: SubAgentContentBlock[] | undefined): {
steps: Record<string, PlanStep> | undefined steps: Record<string, PlanStep> | undefined
@@ -54,7 +57,6 @@ function extractPlanFromBlocks(blocks: SubAgentContentBlock[] | undefined): {
} { } {
if (!blocks) return { steps: undefined, isComplete: false } if (!blocks) return { steps: undefined, isComplete: false }
// Find the plan_respond tool call
const planRespondBlock = blocks.find( const planRespondBlock = blocks.find(
(b) => b.type === 'subagent_tool_call' && b.toolCall?.name === 'plan_respond' (b) => b.type === 'subagent_tool_call' && b.toolCall?.name === 'plan_respond'
) )
@@ -63,8 +65,6 @@ function extractPlanFromBlocks(blocks: SubAgentContentBlock[] | undefined): {
return { steps: undefined, isComplete: false } return { steps: undefined, isComplete: false }
} }
// Tool call arguments can be in different places depending on the source
// Also handle nested data.arguments structure from the schema
const tc = planRespondBlock.toolCall as any const tc = planRespondBlock.toolCall as any
const args = tc.params || tc.parameters || tc.input || tc.arguments || tc.data?.arguments || {} const args = tc.params || tc.parameters || tc.input || tc.arguments || tc.data?.arguments || {}
const stepsArray = args.steps const stepsArray = args.steps
@@ -73,9 +73,6 @@ function extractPlanFromBlocks(blocks: SubAgentContentBlock[] | undefined): {
return { steps: undefined, isComplete: false } return { steps: undefined, isComplete: false }
} }
// Convert array format to Record<string, PlanStep> format
// From: [{ number: 1, title: "..." }, { number: 2, title: "..." }]
// To: { "1": "...", "2": "..." }
const steps: Record<string, PlanStep> = {} const steps: Record<string, PlanStep> = {}
for (const step of stepsArray) { for (const step of stepsArray) {
if (step.number !== undefined && step.title) { if (step.number !== undefined && step.title) {
@@ -83,7 +80,6 @@ function extractPlanFromBlocks(blocks: SubAgentContentBlock[] | undefined): {
} }
} }
// Check if the tool call is complete (not pending/executing)
const isComplete = const isComplete =
planRespondBlock.toolCall.state === ClientToolCallState.success || planRespondBlock.toolCall.state === ClientToolCallState.success ||
planRespondBlock.toolCall.state === ClientToolCallState.error planRespondBlock.toolCall.state === ClientToolCallState.error
@@ -95,8 +91,9 @@ function extractPlanFromBlocks(blocks: SubAgentContentBlock[] | undefined): {
} }
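Condensed from the hunks above, the tolerant argument lookup and array-to-record conversion in extractPlanFromBlocks work roughly as follows. The value stored per step (the title) is inferred from the removed explanatory comment and may differ from the real code.

```typescript
interface RawPlanStep {
  number?: number
  title?: string
}

/**
 * Sketch: tool-call arguments can live under several keys depending on the
 * source, and the steps array is flattened into the Record keyed by step
 * number that the PlanSteps component expects.
 */
function stepsArrayToRecord(toolCall: any): Record<string, string> | undefined {
  // Arguments may arrive as params, parameters, input, arguments, or nested data.arguments.
  const args =
    toolCall.params ||
    toolCall.parameters ||
    toolCall.input ||
    toolCall.arguments ||
    toolCall.data?.arguments ||
    {}
  const stepsArray: RawPlanStep[] | undefined = args.steps
  if (!Array.isArray(stepsArray) || stepsArray.length === 0) return undefined

  // [{ number: 1, title: "..." }, { number: 2, title: "..." }] -> { "1": "...", "2": "..." }
  const steps: Record<string, string> = {}
  for (const step of stepsArray) {
    if (step.number !== undefined && step.title) {
      steps[String(step.number)] = step.title
    }
  }
  return steps
}
```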
/** /**
* Try to parse partial JSON for streaming options. * Parses partial JSON for streaming options, extracting complete key-value pairs from incomplete JSON.
* Attempts to extract complete key-value pairs from incomplete JSON. * @param jsonStr - The potentially incomplete JSON string
* @returns Parsed options record or null if no valid options found
*/ */
function parsePartialOptionsJson(jsonStr: string): Record<string, OptionItem> | null { function parsePartialOptionsJson(jsonStr: string): Record<string, OptionItem> | null {
// Try parsing as-is first (might be complete) // Try parsing as-is first (might be complete)
@@ -107,8 +104,9 @@ function parsePartialOptionsJson(jsonStr: string): Record<string, OptionItem> |
} }
// Try to extract complete key-value pairs from partial JSON // Try to extract complete key-value pairs from partial JSON
// Match patterns like "1": "some text" or "1": {"title": "text"} // Match patterns like "1": "some text" or "1": {"title": "text", "description": "..."}
const result: Record<string, OptionItem> = {} const result: Record<string, OptionItem> = {}
// Match complete string values: "key": "value" // Match complete string values: "key": "value"
const stringPattern = /"(\d+)":\s*"([^"]*?)"/g const stringPattern = /"(\d+)":\s*"([^"]*?)"/g
let match let match
@@ -116,18 +114,24 @@ function parsePartialOptionsJson(jsonStr: string): Record<string, OptionItem> |
result[match[1]] = match[2] result[match[1]] = match[2]
} }
// Match complete object values: "key": {"title": "value"} // Match complete object values with title and optional description
const objectPattern = /"(\d+)":\s*\{[^}]*"title":\s*"([^"]*)"[^}]*\}/g // Pattern matches: "1": {"title": "...", "description": "..."} or "1": {"title": "..."}
const objectPattern =
/"(\d+)":\s*\{\s*"title":\s*"((?:[^"\\]|\\.)*)"\s*(?:,\s*"description":\s*"((?:[^"\\]|\\.)*)")?\s*\}/g
while ((match = objectPattern.exec(jsonStr)) !== null) { while ((match = objectPattern.exec(jsonStr)) !== null) {
result[match[1]] = { title: match[2] } const key = match[1]
const title = match[2].replace(/\\"/g, '"').replace(/\\n/g, '\n')
const description = match[3]?.replace(/\\"/g, '"').replace(/\\n/g, '\n')
result[key] = description ? { title, description } : { title }
} }
return Object.keys(result).length > 0 ? result : null return Object.keys(result).length > 0 ? result : null
} }
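The new object pattern handles an optional description and escaped quotes. Reconstructed from the diff as a self-contained sketch (the initial try of JSON.parse on the complete payload is elided), the extraction behaves like this:

```typescript
type OptionItem = string | { title: string; description?: string }

/**
 * Extracts complete key-value pairs from a possibly incomplete JSON object
 * that is still streaming in. Reconstructed from the diff; the real function
 * first attempts a plain JSON.parse before falling back to these regexes.
 */
function extractPartialOptions(jsonStr: string): Record<string, OptionItem> | null {
  const result: Record<string, OptionItem> = {}

  // Complete string values: "1": "some text"
  const stringPattern = /"(\d+)":\s*"([^"]*?)"/g
  let match: RegExpExecArray | null
  while ((match = stringPattern.exec(jsonStr)) !== null) {
    result[match[1]] = match[2]
  }

  // Complete object values: "1": {"title": "...", "description": "..."} or "1": {"title": "..."}
  const objectPattern =
    /"(\d+)":\s*\{\s*"title":\s*"((?:[^"\\]|\\.)*)"\s*(?:,\s*"description":\s*"((?:[^"\\]|\\.)*)")?\s*\}/g
  while ((match = objectPattern.exec(jsonStr)) !== null) {
    const title = match[2].replace(/\\"/g, '"').replace(/\\n/g, '\n')
    const description = match[3]?.replace(/\\"/g, '"').replace(/\\n/g, '\n')
    result[match[1]] = description ? { title, description } : { title }
  }

  return Object.keys(result).length > 0 ? result : null
}

// Example: a stream cut off mid-object still yields the entries that finished.
extractPartialOptions('{"1": {"title": "Use a webhook"}, "2": {"title": "Poll the A')
// -> { "1": { title: "Use a webhook" } }
```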
/** /**
* Try to parse partial JSON for streaming plan steps. * Parses partial JSON for streaming plan steps, extracting complete key-value pairs from incomplete JSON.
* Attempts to extract complete key-value pairs from incomplete JSON. * @param jsonStr - The potentially incomplete JSON string
* @returns Parsed plan steps record or null if no valid steps found
*/ */
function parsePartialPlanJson(jsonStr: string): Record<string, PlanStep> | null { function parsePartialPlanJson(jsonStr: string): Record<string, PlanStep> | null {
// Try parsing as-is first (might be complete) // Try parsing as-is first (might be complete)
@@ -159,7 +163,10 @@ function parsePartialPlanJson(jsonStr: string): Record<string, PlanStep> | null
} }
/** /**
* Parse <plan> and <options> tags from content * Parses special XML tags (`<plan>` and `<options>`) from message content.
* Handles both complete and streaming/incomplete tags.
* @param content - The message content to parse
* @returns Parsed tags with plan, options, and clean content
*/ */
export function parseSpecialTags(content: string): ParsedTags { export function parseSpecialTags(content: string): ParsedTags {
const result: ParsedTags = { cleanContent: content } const result: ParsedTags = { cleanContent: content }
@@ -167,12 +174,18 @@ export function parseSpecialTags(content: string): ParsedTags {
// Parse <plan> tag - check for complete tag first // Parse <plan> tag - check for complete tag first
const planMatch = content.match(/<plan>([\s\S]*?)<\/plan>/i) const planMatch = content.match(/<plan>([\s\S]*?)<\/plan>/i)
if (planMatch) { if (planMatch) {
// Always strip the tag from display, even if JSON is invalid
result.cleanContent = result.cleanContent.replace(planMatch[0], '').trim()
try { try {
result.plan = JSON.parse(planMatch[1]) result.plan = JSON.parse(planMatch[1])
result.planComplete = true result.planComplete = true
result.cleanContent = result.cleanContent.replace(planMatch[0], '').trim()
} catch { } catch {
// Invalid JSON, ignore // JSON.parse failed - use regex fallback to extract plan from malformed JSON
const fallbackPlan = parsePartialPlanJson(planMatch[1])
if (fallbackPlan) {
result.plan = fallbackPlan
result.planComplete = true
}
} }
} else { } else {
// Check for streaming/incomplete plan tag // Check for streaming/incomplete plan tag
@@ -191,12 +204,18 @@ export function parseSpecialTags(content: string): ParsedTags {
// Parse <options> tag - check for complete tag first // Parse <options> tag - check for complete tag first
const optionsMatch = content.match(/<options>([\s\S]*?)<\/options>/i) const optionsMatch = content.match(/<options>([\s\S]*?)<\/options>/i)
if (optionsMatch) { if (optionsMatch) {
// Always strip the tag from display, even if JSON is invalid
result.cleanContent = result.cleanContent.replace(optionsMatch[0], '').trim()
try { try {
result.options = JSON.parse(optionsMatch[1]) result.options = JSON.parse(optionsMatch[1])
result.optionsComplete = true result.optionsComplete = true
result.cleanContent = result.cleanContent.replace(optionsMatch[0], '').trim()
} catch { } catch {
// Invalid JSON, ignore // JSON.parse failed - use regex fallback to extract options from malformed JSON
const fallbackOptions = parsePartialOptionsJson(optionsMatch[1])
if (fallbackOptions) {
result.options = fallbackOptions
result.optionsComplete = true
}
} }
} else { } else {
// Check for streaming/incomplete options tag // Check for streaming/incomplete options tag
@@ -220,15 +239,15 @@ export function parseSpecialTags(content: string): ParsedTags {
} }
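A quick illustration of the fallback behavior added above; the input and expected output are illustrative, but they follow the ParsedTags shape and the new always-strip logic from the diff.

```typescript
// Illustrative: a complete <options> tag whose JSON is cut off before the outer closing brace.
const content =
  'Here are two ways forward.\n' +
  '<options>{"1": {"title": "Use a webhook"}, "2": {"title": "Poll the API"}</options>'

const parsed = parseSpecialTags(content)
// JSON.parse throws on the missing closing brace, but the tag is still
// stripped from the display text and the regex fallback recovers both options:
// parsed.cleanContent    -> 'Here are two ways forward.'
// parsed.options         -> { '1': { title: 'Use a webhook' }, '2': { title: 'Poll the API' } }
// parsed.optionsComplete -> true
```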
/** /**
* PlanSteps component renders the workflow plan steps from the plan subagent * Renders workflow plan steps as a numbered to-do list.
* Displays as a to-do list with checkmarks and strikethrough text * @param steps - Plan steps keyed by step number
* @param streaming - When true, uses smooth streaming animation for step titles
*/ */
function PlanSteps({ function PlanSteps({
steps, steps,
streaming = false, streaming = false,
}: { }: {
steps: Record<string, PlanStep> steps: Record<string, PlanStep>
/** When true, uses smooth streaming animation for step titles */
streaming?: boolean streaming?: boolean
}) { }) {
const sortedSteps = useMemo(() => { const sortedSteps = useMemo(() => {
@@ -249,7 +268,7 @@ function PlanSteps({
if (sortedSteps.length === 0) return null if (sortedSteps.length === 0) return null
return ( return (
<div className='mt-1.5 overflow-hidden rounded-[6px] border border-[var(--border-1)] bg-[var(--surface-1)]'> <div className='mt-0 overflow-hidden rounded-[6px] border border-[var(--border-1)] bg-[var(--surface-1)]'>
<div className='flex items-center gap-[8px] border-[var(--border-1)] border-b bg-[var(--surface-2)] p-[8px]'> <div className='flex items-center gap-[8px] border-[var(--border-1)] border-b bg-[var(--surface-2)] p-[8px]'>
<LayoutList className='ml-[2px] h-3 w-3 flex-shrink-0 text-[var(--text-tertiary)]' /> <LayoutList className='ml-[2px] h-3 w-3 flex-shrink-0 text-[var(--text-tertiary)]' />
<span className='font-medium text-[12px] text-[var(--text-primary)]'>To-dos</span> <span className='font-medium text-[12px] text-[var(--text-primary)]'>To-dos</span>
@@ -257,7 +276,7 @@ function PlanSteps({
{sortedSteps.length} {sortedSteps.length}
</span> </span>
</div> </div>
<div className='flex flex-col gap-[6px] px-[10px] py-[8px]'> <div className='flex flex-col gap-[6px] px-[10px] py-[6px]'>
{sortedSteps.map(([num, title], index) => { {sortedSteps.map(([num, title], index) => {
const isLastStep = index === sortedSteps.length - 1 const isLastStep = index === sortedSteps.length - 1
return ( return (
@@ -281,9 +300,8 @@ function PlanSteps({
} }
/** /**
* OptionsSelector component renders selectable options from the agent * Renders selectable options from the agent with keyboard navigation and click selection.
* Supports keyboard navigation (arrow up/down, enter) and click selection * After selection, shows the chosen option highlighted and others struck through.
* After selection, shows the chosen option highlighted and others struck through
*/ */
export function OptionsSelector({ export function OptionsSelector({
options, options,
@@ -291,6 +309,7 @@ export function OptionsSelector({
disabled = false, disabled = false,
enableKeyboardNav = false, enableKeyboardNav = false,
streaming = false, streaming = false,
selectedOptionKey = null,
}: { }: {
options: Record<string, OptionItem> options: Record<string, OptionItem>
onSelect: (optionKey: string, optionText: string) => void onSelect: (optionKey: string, optionText: string) => void
@@ -299,6 +318,8 @@ export function OptionsSelector({
enableKeyboardNav?: boolean enableKeyboardNav?: boolean
/** When true, looks enabled but interaction is disabled (for streaming state) */ /** When true, looks enabled but interaction is disabled (for streaming state) */
streaming?: boolean streaming?: boolean
/** Pre-selected option key (for restoring selection from history) */
selectedOptionKey?: string | null
}) { }) {
const isInteractionDisabled = disabled || streaming const isInteractionDisabled = disabled || streaming
const sortedOptions = useMemo(() => { const sortedOptions = useMemo(() => {
@@ -316,8 +337,8 @@ export function OptionsSelector({
}) })
}, [options]) }, [options])
const [hoveredIndex, setHoveredIndex] = useState(0) const [hoveredIndex, setHoveredIndex] = useState(-1)
const [chosenKey, setChosenKey] = useState<string | null>(null) const [chosenKey, setChosenKey] = useState<string | null>(selectedOptionKey)
const containerRef = useRef<HTMLDivElement>(null) const containerRef = useRef<HTMLDivElement>(null)
const isLocked = chosenKey !== null const isLocked = chosenKey !== null
@@ -327,7 +348,8 @@ export function OptionsSelector({
if (isInteractionDisabled || !enableKeyboardNav || isLocked) return if (isInteractionDisabled || !enableKeyboardNav || isLocked) return
const handleKeyDown = (e: KeyboardEvent) => { const handleKeyDown = (e: KeyboardEvent) => {
// Only handle if the container or document body is focused (not when typing in input) if (e.defaultPrevented) return
const activeElement = document.activeElement const activeElement = document.activeElement
const isInputFocused = const isInputFocused =
activeElement?.tagName === 'INPUT' || activeElement?.tagName === 'INPUT' ||
@@ -338,13 +360,14 @@ export function OptionsSelector({
if (e.key === 'ArrowDown') { if (e.key === 'ArrowDown') {
e.preventDefault() e.preventDefault()
setHoveredIndex((prev) => Math.min(prev + 1, sortedOptions.length - 1)) setHoveredIndex((prev) => (prev < 0 ? 0 : Math.min(prev + 1, sortedOptions.length - 1)))
} else if (e.key === 'ArrowUp') { } else if (e.key === 'ArrowUp') {
e.preventDefault() e.preventDefault()
setHoveredIndex((prev) => Math.max(prev - 1, 0)) setHoveredIndex((prev) => (prev < 0 ? sortedOptions.length - 1 : Math.max(prev - 1, 0)))
} else if (e.key === 'Enter') { } else if (e.key === 'Enter') {
e.preventDefault() e.preventDefault()
const selected = sortedOptions[hoveredIndex] const indexToSelect = hoveredIndex < 0 ? 0 : hoveredIndex
const selected = sortedOptions[indexToSelect]
if (selected) { if (selected) {
setChosenKey(selected.key) setChosenKey(selected.key)
onSelect(selected.key, selected.title) onSelect(selected.key, selected.title)
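Flattened into the split view, the new keyboard handling is hard to follow. Restated as a standalone sketch (the check that skips events while an input or textarea is focused is elided), the -1 sentinel means "nothing highlighted yet": the first ArrowDown jumps to the first option, the first ArrowUp to the last, and Enter with no highlight selects the first option.

```typescript
interface SortedOption {
  key: string
  title: string
}

/** Sketch of the OptionsSelector keyboard navigation added in this diff. */
function handleOptionKeyDown(
  e: KeyboardEvent,
  sortedOptions: SortedOption[],
  hoveredIndex: number,
  setHoveredIndex: (update: (prev: number) => number) => void,
  onSelect: (key: string, title: string) => void
): void {
  if (e.defaultPrevented) return

  if (e.key === 'ArrowDown') {
    e.preventDefault()
    setHoveredIndex((prev) => (prev < 0 ? 0 : Math.min(prev + 1, sortedOptions.length - 1)))
  } else if (e.key === 'ArrowUp') {
    e.preventDefault()
    setHoveredIndex((prev) => (prev < 0 ? sortedOptions.length - 1 : Math.max(prev - 1, 0)))
  } else if (e.key === 'Enter') {
    e.preventDefault()
    // With nothing highlighted yet, Enter falls back to the first option.
    const indexToSelect = hoveredIndex < 0 ? 0 : hoveredIndex
    const selected = sortedOptions[indexToSelect]
    if (selected) onSelect(selected.key, selected.title)
  }
}
```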
@@ -368,7 +391,7 @@ export function OptionsSelector({
if (sortedOptions.length === 0) return null if (sortedOptions.length === 0) return null
return ( return (
<div ref={containerRef} className='flex flex-col gap-0.5 pb-0.5'> <div ref={containerRef} className='flex flex-col gap-[4px] pt-[4px]'>
{sortedOptions.map((option, index) => { {sortedOptions.map((option, index) => {
const isHovered = index === hoveredIndex && !isLocked const isHovered = index === hoveredIndex && !isLocked
const isChosen = option.key === chosenKey const isChosen = option.key === chosenKey
@@ -386,6 +409,9 @@ export function OptionsSelector({
onMouseEnter={() => { onMouseEnter={() => {
if (!isLocked && !streaming) setHoveredIndex(index) if (!isLocked && !streaming) setHoveredIndex(index)
}} }}
onMouseLeave={() => {
if (!isLocked && !streaming && sortedOptions.length === 1) setHoveredIndex(-1)
}}
className={clsx( className={clsx(
'group flex cursor-pointer items-start gap-2 rounded-[6px] p-1', 'group flex cursor-pointer items-start gap-2 rounded-[6px] p-1',
'hover:bg-[var(--surface-4)]', 'hover:bg-[var(--surface-4)]',
@@ -421,30 +447,31 @@ export function OptionsSelector({
) )
} }
/** Props for the ToolCall component */
interface ToolCallProps { interface ToolCallProps {
/** Tool call data object */
toolCall?: CopilotToolCall toolCall?: CopilotToolCall
/** Tool call ID for store lookup */
toolCallId?: string toolCallId?: string
/** Callback when tool call state changes */
onStateChange?: (state: any) => void onStateChange?: (state: any) => void
/** Whether this tool call is from the current/latest message. Controls shimmer and action buttons. */
isCurrentMessage?: boolean
} }
/** /** Props for the ShimmerOverlayText component */
* Props for shimmer overlay text component.
*/
interface ShimmerOverlayTextProps { interface ShimmerOverlayTextProps {
/** The text content to display */ /** Text content to display */
text: string text: string
/** Whether the shimmer animation is active */ /** Whether shimmer animation is active */
active?: boolean active?: boolean
/** Additional class names for the wrapper */ /** Additional class names for the wrapper */
className?: string className?: string
/** Whether to use special gradient styling (for important actions) */ /** Whether to use special gradient styling for important actions */
isSpecial?: boolean isSpecial?: boolean
} }
/** /** Action verbs at the start of tool display names, highlighted for visual hierarchy */
* Action verbs that appear at the start of tool display names.
* These will be highlighted in a lighter color for better visual hierarchy.
*/
const ACTION_VERBS = [ const ACTION_VERBS = [
'Analyzing', 'Analyzing',
'Analyzed', 'Analyzed',
@@ -552,7 +579,8 @@ const ACTION_VERBS = [
/** /**
* Splits text into action verb and remainder for two-tone rendering. * Splits text into action verb and remainder for two-tone rendering.
* Returns [actionVerb, remainder] or [null, text] if no match. * @param text - The text to split
* @returns Tuple of [actionVerb, remainder] or [null, text] if no match
*/ */
function splitActionVerb(text: string): [string | null, string] { function splitActionVerb(text: string): [string | null, string] {
for (const verb of ACTION_VERBS) { for (const verb of ACTION_VERBS) {
@@ -572,10 +600,9 @@ function splitActionVerb(text: string): [string | null, string] {
} }
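The verb list feeds a small splitter for two-tone rendering. Its exact matching rules are not fully visible in this hunk, so the sketch below assumes a simple case-sensitive "Verb␣rest" prefix match; the verb list is abbreviated to the first two entries shown in the diff.

```typescript
const ACTION_VERBS = ['Analyzing', 'Analyzed'] // abbreviated; the real list is much longer

/**
 * Splits display text into [actionVerb, remainder] for two-tone rendering,
 * or [null, text] when the text does not start with a known verb.
 * Sketch only: assumes a plain prefix match followed by a space.
 */
function splitActionVerb(text: string): [string | null, string] {
  for (const verb of ACTION_VERBS) {
    if (text.startsWith(`${verb} `)) {
      return [verb, text.slice(verb.length)]
    }
  }
  return [null, text]
}

splitActionVerb('Analyzing workflow blocks') // -> ['Analyzing', ' workflow blocks']
splitActionVerb('Custom tool call')          // -> [null, 'Custom tool call']
```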
/** /**
* Renders text with a subtle white shimmer overlay when active, creating a skeleton-like * Renders text with a shimmer overlay animation when active.
* loading effect that passes over the existing words without replacing them. * Special tools use a gradient color; normal tools highlight action verbs.
* For special tool calls, uses a gradient color. For normal tools, highlights action verbs * Uses CSS truncation to clamp to one line with ellipsis.
* in a lighter color with the rest in default gray.
*/ */
const ShimmerOverlayText = memo(function ShimmerOverlayText({ const ShimmerOverlayText = memo(function ShimmerOverlayText({
text, text,
@@ -585,10 +612,13 @@ const ShimmerOverlayText = memo(function ShimmerOverlayText({
}: ShimmerOverlayTextProps) { }: ShimmerOverlayTextProps) {
const [actionVerb, remainder] = splitActionVerb(text) const [actionVerb, remainder] = splitActionVerb(text)
// Base classes for single-line truncation with ellipsis
const truncateClasses = 'block w-full overflow-hidden text-ellipsis whitespace-nowrap'
// Special tools: use tertiary-2 color for entire text with shimmer // Special tools: use tertiary-2 color for entire text with shimmer
if (isSpecial) { if (isSpecial) {
return ( return (
<span className={`relative inline-block ${className || ''}`}> <span className={`relative ${truncateClasses} ${className || ''}`}>
<span className='text-[var(--brand-tertiary-2)]'>{text}</span> <span className='text-[var(--brand-tertiary-2)]'>{text}</span>
{active ? ( {active ? (
<span <span
@@ -596,7 +626,7 @@ const ShimmerOverlayText = memo(function ShimmerOverlayText({
className='pointer-events-none absolute inset-0 select-none overflow-hidden' className='pointer-events-none absolute inset-0 select-none overflow-hidden'
> >
<span <span
className='block text-transparent' className='block overflow-hidden text-ellipsis whitespace-nowrap text-transparent'
style={{ style={{
backgroundImage: backgroundImage:
'linear-gradient(90deg, rgba(51,196,129,0) 0%, rgba(255,255,255,0.6) 50%, rgba(51,196,129,0) 100%)', 'linear-gradient(90deg, rgba(51,196,129,0) 0%, rgba(255,255,255,0.6) 50%, rgba(51,196,129,0) 100%)',
@@ -627,7 +657,7 @@ const ShimmerOverlayText = memo(function ShimmerOverlayText({
// Light mode: primary (#2d2d2d) vs muted (#737373) for good contrast // Light mode: primary (#2d2d2d) vs muted (#737373) for good contrast
// Dark mode: tertiary (#b3b3b3) vs muted (#787878) for good contrast // Dark mode: tertiary (#b3b3b3) vs muted (#787878) for good contrast
return ( return (
<span className={`relative inline-block ${className || ''}`}> <span className={`relative ${truncateClasses} ${className || ''}`}>
{actionVerb ? ( {actionVerb ? (
<> <>
<span className='text-[var(--text-primary)] dark:text-[var(--text-tertiary)]'> <span className='text-[var(--text-primary)] dark:text-[var(--text-tertiary)]'>
@@ -644,7 +674,7 @@ const ShimmerOverlayText = memo(function ShimmerOverlayText({
className='pointer-events-none absolute inset-0 select-none overflow-hidden' className='pointer-events-none absolute inset-0 select-none overflow-hidden'
> >
<span <span
className='block text-transparent' className='block overflow-hidden text-ellipsis whitespace-nowrap text-transparent'
style={{ style={{
backgroundImage: backgroundImage:
'linear-gradient(90deg, rgba(255,255,255,0) 0%, rgba(255,255,255,0.85) 50%, rgba(255,255,255,0) 100%)', 'linear-gradient(90deg, rgba(255,255,255,0) 0%, rgba(255,255,255,0.85) 50%, rgba(255,255,255,0) 100%)',
@@ -672,8 +702,9 @@ const ShimmerOverlayText = memo(function ShimmerOverlayText({
}) })
/** /**
* Get the outer collapse header label for completed subagent tools. * Gets the collapse header label for completed subagent tools.
* Uses the tool's UI config. * @param toolName - The tool name to get the label for
* @returns The completion label from UI config, defaults to 'Thought'
*/ */
function getSubagentCompletionLabel(toolName: string): string { function getSubagentCompletionLabel(toolName: string): string {
const labels = getSubagentLabelsFromConfig(toolName, false) const labels = getSubagentLabelsFromConfig(toolName, false)
@@ -681,8 +712,9 @@ function getSubagentCompletionLabel(toolName: string): string {
} }
/** /**
* SubAgentThinkingContent renders subagent blocks as simple thinking text (ThinkingBlock). * Renders subagent blocks as thinking text within regular tool calls.
* Used for inline rendering within regular tool calls that have subagent content. * @param blocks - The subagent content blocks to render
* @param isStreaming - Whether streaming animations should be shown (caller should pre-compute currentMessage check)
*/ */
function SubAgentThinkingContent({ function SubAgentThinkingContent({
blocks, blocks,
@@ -717,7 +749,7 @@ function SubAgentThinkingContent({
const hasSpecialTags = hasPlan const hasSpecialTags = hasPlan
return ( return (
<div className='space-y-1.5'> <div className='space-y-[4px]'>
{cleanText.trim() && ( {cleanText.trim() && (
<ThinkingBlock <ThinkingBlock
content={cleanText} content={cleanText}
@@ -731,32 +763,29 @@ function SubAgentThinkingContent({
) )
} }
/** /** Subagents that collapse into summary headers when done streaming */
* Subagents that should collapse when done streaming.
* Default behavior is to NOT collapse (stay expanded like edit, superagent, info, etc.).
* Only plan, debug, and research collapse into summary headers.
*/
const COLLAPSIBLE_SUBAGENTS = new Set(['plan', 'debug', 'research']) const COLLAPSIBLE_SUBAGENTS = new Set(['plan', 'debug', 'research'])
/** /**
* SubagentContentRenderer handles the rendering of subagent content. * Handles rendering of subagent content with streaming and collapse behavior.
* - During streaming: Shows content at top level
* - When done (not streaming): Most subagents stay expanded, only specific ones collapse
* - Exception: plan, debug, research, info subagents collapse into a header
*/ */
const SubagentContentRenderer = memo(function SubagentContentRenderer({ const SubagentContentRenderer = memo(function SubagentContentRenderer({
toolCall, toolCall,
shouldCollapse, shouldCollapse,
isCurrentMessage = true,
}: { }: {
toolCall: CopilotToolCall toolCall: CopilotToolCall
shouldCollapse: boolean shouldCollapse: boolean
/** Whether this is from the current/latest message. Controls shimmer animations. */
isCurrentMessage?: boolean
}) { }) {
const [isExpanded, setIsExpanded] = useState(true) const [isExpanded, setIsExpanded] = useState(true)
const [duration, setDuration] = useState(0) const [duration, setDuration] = useState(0)
const startTimeRef = useRef<number>(Date.now()) const startTimeRef = useRef<number>(Date.now())
const wasStreamingRef = useRef(false) const wasStreamingRef = useRef(false)
const isStreaming = !!toolCall.subAgentStreaming // Only show streaming animations for current message
const isStreaming = isCurrentMessage && !!toolCall.subAgentStreaming
useEffect(() => { useEffect(() => {
if (isStreaming && !wasStreamingRef.current) { if (isStreaming && !wasStreamingRef.current) {
@@ -850,7 +879,11 @@ const SubagentContentRenderer = memo(function SubagentContentRenderer({
} }
return ( return (
<div key={`tool-${segment.block.toolCall.id || index}`}> <div key={`tool-${segment.block.toolCall.id || index}`}>
<ToolCall toolCallId={segment.block.toolCall.id} toolCall={segment.block.toolCall} /> <ToolCall
toolCallId={segment.block.toolCall.id}
toolCall={segment.block.toolCall}
isCurrentMessage={isCurrentMessage}
/>
</div> </div>
) )
} }
@@ -861,7 +894,7 @@ const SubagentContentRenderer = memo(function SubagentContentRenderer({
if (isStreaming || !shouldCollapse) { if (isStreaming || !shouldCollapse) {
return ( return (
<div className='w-full space-y-1.5'> <div className='w-full space-y-[4px]'>
{renderCollapsibleContent()} {renderCollapsibleContent()}
{hasPlan && planToRender && <PlanSteps steps={planToRender} streaming={isPlanStreaming} />} {hasPlan && planToRender && <PlanSteps steps={planToRender} streaming={isPlanStreaming} />}
</div> </div>
@@ -888,30 +921,30 @@ const SubagentContentRenderer = memo(function SubagentContentRenderer({
<div <div
className={clsx( className={clsx(
'overflow-hidden transition-all duration-150 ease-out', 'overflow-hidden transition-all duration-150 ease-out',
isExpanded ? 'mt-1.5 max-h-[5000px] opacity-100' : 'max-h-0 opacity-0' isExpanded ? 'mt-1.5 max-h-[5000px] space-y-[4px] opacity-100' : 'max-h-0 opacity-0'
)} )}
> >
{renderCollapsibleContent()} {renderCollapsibleContent()}
</div> </div>
{/* Plan stays outside the collapsible */} {hasPlan && planToRender && (
{hasPlan && planToRender && <PlanSteps steps={planToRender} />} <div className='mt-[6px]'>
<PlanSteps steps={planToRender} />
</div>
)}
</div> </div>
) )
}) })
/** /**
* Determines if a tool call is "special" and should display with gradient styling. * Determines if a tool call should display with special gradient styling.
* Uses the tool's UI config.
*/ */
function isSpecialToolCall(toolCall: CopilotToolCall): boolean { function isSpecialToolCall(toolCall: CopilotToolCall): boolean {
return isSpecialToolFromConfig(toolCall.name) return isSpecialToolFromConfig(toolCall.name)
} }
/** /**
* WorkflowEditSummary shows a full-width summary of workflow edits (like Cursor's diff). * Displays a summary of workflow edits with added, edited, and deleted blocks.
* Displays: workflow name with stats (+N green, N orange, -N red)
* Expands inline on click to show individual blocks with their icons.
*/ */
const WorkflowEditSummary = memo(function WorkflowEditSummary({ const WorkflowEditSummary = memo(function WorkflowEditSummary({
toolCall, toolCall,
@@ -1169,9 +1202,7 @@ const WorkflowEditSummary = memo(function WorkflowEditSummary({
) )
}) })
/** /** Checks if a tool is server-side executed (not a client tool) */
* Checks if a tool is an integration tool (server-side executed, not a client tool)
*/
function isIntegrationTool(toolName: string): boolean { function isIntegrationTool(toolName: string): boolean {
return !CLASS_TOOL_METADATA[toolName] return !CLASS_TOOL_METADATA[toolName]
} }
@@ -1317,9 +1348,7 @@ function getDisplayName(toolCall: CopilotToolCall): string {
return `${stateVerb} ${formattedName}` return `${stateVerb} ${formattedName}`
} }
/** /** Gets verb prefix based on tool call state */
* Get verb prefix based on tool state
*/
function getStateVerb(state: string): string { function getStateVerb(state: string): string {
switch (state) { switch (state) {
case 'pending': case 'pending':
@@ -1338,8 +1367,7 @@ function getStateVerb(state: string): string {
} }
/** /**
* Format tool name for display * Formats tool name for display (e.g., "google_calendar_list_events" -> "Google Calendar List Events")
* e.g., "google_calendar_list_events" -> "Google Calendar List Events"
*/ */
function formatToolName(name: string): string { function formatToolName(name: string): string {
const baseName = name.replace(/_v\d+$/, '') const baseName = name.replace(/_v\d+$/, '')
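For reference, the display name is built by stripping any version suffix from the tool name, title-casing the base, and prefixing the state verb. A sketch of the formatting step, with the word-casing details assumed from the doc comment's example:

```typescript
/**
 * Sketch of formatToolName: strip a trailing version suffix (e.g. "_v2"),
 * then title-case the remaining snake_case name.
 */
function formatToolName(name: string): string {
  const baseName = name.replace(/_v\d+$/, '')
  return baseName
    .split('_')
    .map((word) => word.charAt(0).toUpperCase() + word.slice(1))
    .join(' ')
}

formatToolName('google_calendar_list_events_v2') // -> 'Google Calendar List Events'
// getDisplayName then composes `${getStateVerb(state)} ${formattedName}`.
```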
@@ -1415,7 +1443,7 @@ function RunSkipButtons({
// Standardized buttons for all interrupt tools: Allow, (Always Allow for client tools only), Skip // Standardized buttons for all interrupt tools: Allow, (Always Allow for client tools only), Skip
return ( return (
<div className='mt-1.5 flex gap-[6px]'> <div className='mt-[10px] flex gap-[6px]'>
<Button onClick={onRun} disabled={isProcessing} variant='tertiary'> <Button onClick={onRun} disabled={isProcessing} variant='tertiary'>
{isProcessing ? 'Allowing...' : 'Allow'} {isProcessing ? 'Allowing...' : 'Allow'}
</Button> </Button>
@@ -1431,7 +1459,12 @@ function RunSkipButtons({
) )
} }
export function ToolCall({ toolCall: toolCallProp, toolCallId, onStateChange }: ToolCallProps) { export function ToolCall({
toolCall: toolCallProp,
toolCallId,
onStateChange,
isCurrentMessage = true,
}: ToolCallProps) {
const [, forceUpdate] = useState({}) const [, forceUpdate] = useState({})
// Get live toolCall from store to ensure we have the latest state // Get live toolCall from store to ensure we have the latest state
const effectiveId = toolCallId || toolCallProp?.id const effectiveId = toolCallId || toolCallProp?.id
@@ -1445,9 +1478,7 @@ export function ToolCall({ toolCall: toolCallProp, toolCallId, onStateChange }:
const isExpandablePending = const isExpandablePending =
toolCall?.state === 'pending' && toolCall?.state === 'pending' &&
(toolCall.name === 'make_api_request' || (toolCall.name === 'make_api_request' || toolCall.name === 'set_global_workflow_variables')
toolCall.name === 'set_global_workflow_variables' ||
toolCall.name === 'run_workflow')
const [expanded, setExpanded] = useState(isExpandablePending) const [expanded, setExpanded] = useState(isExpandablePending)
const [showRemoveAutoAllow, setShowRemoveAutoAllow] = useState(false) const [showRemoveAutoAllow, setShowRemoveAutoAllow] = useState(false)
@@ -1522,6 +1553,7 @@ export function ToolCall({ toolCall: toolCallProp, toolCallId, onStateChange }:
<SubagentContentRenderer <SubagentContentRenderer
toolCall={toolCall} toolCall={toolCall}
shouldCollapse={COLLAPSIBLE_SUBAGENTS.has(toolCall.name)} shouldCollapse={COLLAPSIBLE_SUBAGENTS.has(toolCall.name)}
isCurrentMessage={isCurrentMessage}
/> />
) )
} }
@@ -1550,37 +1582,34 @@ export function ToolCall({ toolCall: toolCallProp, toolCallId, onStateChange }:
} }
// Check if tool has params table config (meaning it's expandable) // Check if tool has params table config (meaning it's expandable)
const hasParamsTable = !!getToolUIConfig(toolCall.name)?.paramsTable const hasParamsTable = !!getToolUIConfig(toolCall.name)?.paramsTable
const isRunWorkflow = toolCall.name === 'run_workflow'
const isExpandableTool = const isExpandableTool =
hasParamsTable || hasParamsTable ||
toolCall.name === 'make_api_request' || toolCall.name === 'make_api_request' ||
toolCall.name === 'set_global_workflow_variables' || toolCall.name === 'set_global_workflow_variables'
toolCall.name === 'run_workflow'
const showButtons = shouldShowRunSkipButtons(toolCall) const showButtons = isCurrentMessage && shouldShowRunSkipButtons(toolCall)
// Check UI config for secondary action // Check UI config for secondary action - only show for current message tool calls
const toolUIConfig = getToolUIConfig(toolCall.name) const toolUIConfig = getToolUIConfig(toolCall.name)
const secondaryAction = toolUIConfig?.secondaryAction const secondaryAction = toolUIConfig?.secondaryAction
const showSecondaryAction = secondaryAction?.showInStates.includes( const showSecondaryAction = secondaryAction?.showInStates.includes(
toolCall.state as ClientToolCallState toolCall.state as ClientToolCallState
) )
const isExecuting =
toolCall.state === (ClientToolCallState.executing as any) ||
toolCall.state === ('executing' as any)
// Legacy fallbacks for tools that haven't migrated to UI config // Legacy fallbacks for tools that haven't migrated to UI config
const showMoveToBackground = const showMoveToBackground =
showSecondaryAction && secondaryAction?.text === 'Move to Background' isCurrentMessage &&
? true ((showSecondaryAction && secondaryAction?.text === 'Move to Background') ||
: !secondaryAction && (!secondaryAction && toolCall.name === 'run_workflow' && isExecuting))
toolCall.name === 'run_workflow' &&
(toolCall.state === (ClientToolCallState.executing as any) ||
toolCall.state === ('executing' as any))
const showWake = const showWake =
showSecondaryAction && secondaryAction?.text === 'Wake' isCurrentMessage &&
? true ((showSecondaryAction && secondaryAction?.text === 'Wake') ||
: !secondaryAction && (!secondaryAction && toolCall.name === 'sleep' && isExecuting))
toolCall.name === 'sleep' &&
(toolCall.state === (ClientToolCallState.executing as any) ||
toolCall.state === ('executing' as any))
const handleStateChange = (state: any) => { const handleStateChange = (state: any) => {
forceUpdate({}) forceUpdate({})
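Because the split view interleaves old and new columns, the refactored gating is easier to read restated. The helper below collapses the new right-hand logic: every secondary action is additionally gated on isCurrentMessage, and the shared isExecuting check replaces the duplicated state comparisons. The UI-config object is reduced here to a boolean plus its text, so this is a simplified sketch rather than the component's actual code.

```typescript
// Stand-in types for illustration; the real ones come from the copilot store and UI config.
type ToolState = 'pending' | 'executing' | 'success' | 'error' | string

interface ToolCallLike {
  name: string
  state: ToolState
}

/**
 * Sketch of the new gating: secondary actions only render for the current
 * message, with legacy fallbacks for run_workflow ("Move to Background")
 * and sleep ("Wake") while executing.
 */
function getSecondaryActionVisibility(
  toolCall: ToolCallLike,
  isCurrentMessage: boolean,
  showSecondaryAction: boolean,
  secondaryActionText?: string
) {
  const isExecuting = toolCall.state === 'executing'

  const showMoveToBackground =
    isCurrentMessage &&
    ((showSecondaryAction && secondaryActionText === 'Move to Background') ||
      (!secondaryActionText && toolCall.name === 'run_workflow' && isExecuting))

  const showWake =
    isCurrentMessage &&
    ((showSecondaryAction && secondaryActionText === 'Wake') ||
      (!secondaryActionText && toolCall.name === 'sleep' && isExecuting))

  return { showMoveToBackground, showWake }
}
```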
@@ -1594,6 +1623,8 @@ export function ToolCall({ toolCall: toolCallProp, toolCallId, onStateChange }:
toolCall.state === ClientToolCallState.pending || toolCall.state === ClientToolCallState.pending ||
toolCall.state === ClientToolCallState.executing toolCall.state === ClientToolCallState.executing
const shouldShowShimmer = isCurrentMessage && isLoadingState
const isSpecial = isSpecialToolCall(toolCall) const isSpecial = isSpecialToolCall(toolCall)
const renderPendingDetails = () => { const renderPendingDetails = () => {
@@ -1903,7 +1934,7 @@ export function ToolCall({ toolCall: toolCallProp, toolCallId, onStateChange }:
</span> </span>
</div> </div>
{/* Input entries */} {/* Input entries */}
<div className='flex flex-col'> <div className='flex flex-col pt-[6px]'>
{inputEntries.map(([key, value], index) => { {inputEntries.map(([key, value], index) => {
const isComplex = isComplexValue(value) const isComplex = isComplexValue(value)
const displayValue = formatValueForDisplay(value) const displayValue = formatValueForDisplay(value)
@@ -1912,8 +1943,8 @@ export function ToolCall({ toolCall: toolCallProp, toolCallId, onStateChange }:
<div <div
key={key} key={key}
className={clsx( className={clsx(
'flex flex-col gap-1.5 px-[10px] py-[8px]', 'flex flex-col gap-[6px] px-[10px] pb-[6px]',
index > 0 && 'border-[var(--border-1)] border-t' index > 0 && 'mt-[6px] border-[var(--border-1)] border-t pt-[6px]'
)} )}
> >
{/* Input key */} {/* Input key */}
@@ -2005,14 +2036,14 @@ export function ToolCall({ toolCall: toolCallProp, toolCallId, onStateChange }:
<div className={isEnvVarsClickable ? 'cursor-pointer' : ''} onClick={handleEnvVarsClick}> <div className={isEnvVarsClickable ? 'cursor-pointer' : ''} onClick={handleEnvVarsClick}>
<ShimmerOverlayText <ShimmerOverlayText
text={displayName} text={displayName}
active={isLoadingState} active={shouldShowShimmer}
isSpecial={isSpecial} isSpecial={isSpecial}
className='font-[470] font-season text-[var(--text-secondary)] text-sm dark:text-[var(--text-muted)]' className='font-[470] font-season text-[var(--text-secondary)] text-sm dark:text-[var(--text-muted)]'
/> />
</div> </div>
<div className='mt-1.5'>{renderPendingDetails()}</div> <div className='mt-[10px]'>{renderPendingDetails()}</div>
{showRemoveAutoAllow && isAutoAllowed && ( {showRemoveAutoAllow && isAutoAllowed && (
<div className='mt-1.5'> <div className='mt-[10px]'>
<Button <Button
onClick={async () => { onClick={async () => {
await removeAutoAllowedTool(toolCall.name) await removeAutoAllowedTool(toolCall.name)
@@ -2037,7 +2068,7 @@ export function ToolCall({ toolCall: toolCallProp, toolCallId, onStateChange }:
{toolCall.subAgentBlocks && toolCall.subAgentBlocks.length > 0 && ( {toolCall.subAgentBlocks && toolCall.subAgentBlocks.length > 0 && (
<SubAgentThinkingContent <SubAgentThinkingContent
blocks={toolCall.subAgentBlocks} blocks={toolCall.subAgentBlocks}
isStreaming={toolCall.subAgentStreaming} isStreaming={isCurrentMessage && toolCall.subAgentStreaming}
/> />
)} )}
</div> </div>
@@ -2062,18 +2093,18 @@ export function ToolCall({ toolCall: toolCallProp, toolCallId, onStateChange }:
> >
<ShimmerOverlayText <ShimmerOverlayText
text={displayName} text={displayName}
active={isLoadingState} active={shouldShowShimmer}
isSpecial={isSpecial} isSpecial={isSpecial}
className='font-[470] font-season text-[var(--text-secondary)] text-sm dark:text-[var(--text-muted)]' className='font-[470] font-season text-[var(--text-secondary)] text-sm dark:text-[var(--text-muted)]'
/> />
</div> </div>
{code && ( {code && (
<div className='mt-1.5'> <div className='mt-[10px]'>
<Code.Viewer code={code} language='javascript' showGutter className='min-h-0' /> <Code.Viewer code={code} language='javascript' showGutter className='min-h-0' />
</div> </div>
)} )}
{showRemoveAutoAllow && isAutoAllowed && ( {showRemoveAutoAllow && isAutoAllowed && (
<div className='mt-1.5'> <div className='mt-[10px]'>
<Button <Button
onClick={async () => { onClick={async () => {
await removeAutoAllowedTool(toolCall.name) await removeAutoAllowedTool(toolCall.name)
@@ -2098,14 +2129,14 @@ export function ToolCall({ toolCall: toolCallProp, toolCallId, onStateChange }:
{toolCall.subAgentBlocks && toolCall.subAgentBlocks.length > 0 && ( {toolCall.subAgentBlocks && toolCall.subAgentBlocks.length > 0 && (
<SubAgentThinkingContent <SubAgentThinkingContent
blocks={toolCall.subAgentBlocks} blocks={toolCall.subAgentBlocks}
isStreaming={toolCall.subAgentStreaming} isStreaming={isCurrentMessage && toolCall.subAgentStreaming}
/> />
)} )}
</div> </div>
) )
} }
const isToolNameClickable = isExpandableTool || isAutoAllowed const isToolNameClickable = (!isRunWorkflow && isExpandableTool) || isAutoAllowed
const handleToolNameClick = () => { const handleToolNameClick = () => {
if (isExpandableTool) { if (isExpandableTool) {
@@ -2116,6 +2147,7 @@ export function ToolCall({ toolCall: toolCallProp, toolCallId, onStateChange }:
} }
const isEditWorkflow = toolCall.name === 'edit_workflow' const isEditWorkflow = toolCall.name === 'edit_workflow'
const shouldShowDetails = isRunWorkflow || (isExpandableTool && expanded)
const hasOperations = Array.isArray(params.operations) && params.operations.length > 0 const hasOperations = Array.isArray(params.operations) && params.operations.length > 0
const hideTextForEditWorkflow = isEditWorkflow && hasOperations const hideTextForEditWorkflow = isEditWorkflow && hasOperations
@@ -2125,15 +2157,15 @@ export function ToolCall({ toolCall: toolCallProp, toolCallId, onStateChange }:
<div className={isToolNameClickable ? 'cursor-pointer' : ''} onClick={handleToolNameClick}> <div className={isToolNameClickable ? 'cursor-pointer' : ''} onClick={handleToolNameClick}>
<ShimmerOverlayText <ShimmerOverlayText
text={displayName} text={displayName}
active={isLoadingState} active={shouldShowShimmer}
isSpecial={isSpecial} isSpecial={isSpecial}
className='font-[470] font-season text-[var(--text-secondary)] text-sm dark:text-[var(--text-muted)]' className='font-[470] font-season text-[var(--text-secondary)] text-sm dark:text-[var(--text-muted)]'
/> />
</div> </div>
)} )}
{isExpandableTool && expanded && <div className='mt-1.5'>{renderPendingDetails()}</div>} {shouldShowDetails && <div className='mt-[10px]'>{renderPendingDetails()}</div>}
{showRemoveAutoAllow && isAutoAllowed && ( {showRemoveAutoAllow && isAutoAllowed && (
<div className='mt-1.5'> <div className='mt-[10px]'>
<Button <Button
onClick={async () => { onClick={async () => {
await removeAutoAllowedTool(toolCall.name) await removeAutoAllowedTool(toolCall.name)
@@ -2154,7 +2186,7 @@ export function ToolCall({ toolCall: toolCallProp, toolCallId, onStateChange }:
editedParams={editedParams} editedParams={editedParams}
/> />
) : showMoveToBackground ? ( ) : showMoveToBackground ? (
<div className='mt-1.5'> <div className='mt-[10px]'>
<Button <Button
onClick={async () => { onClick={async () => {
try { try {
@@ -2175,7 +2207,7 @@ export function ToolCall({ toolCall: toolCallProp, toolCallId, onStateChange }:
</Button> </Button>
</div> </div>
) : showWake ? ( ) : showWake ? (
<div className='mt-1.5'> <div className='mt-[10px]'>
<Button <Button
onClick={async () => { onClick={async () => {
try { try {
@@ -2208,7 +2240,7 @@ export function ToolCall({ toolCall: toolCallProp, toolCallId, onStateChange }:
{toolCall.subAgentBlocks && toolCall.subAgentBlocks.length > 0 && ( {toolCall.subAgentBlocks && toolCall.subAgentBlocks.length > 0 && (
<SubAgentThinkingContent <SubAgentThinkingContent
blocks={toolCall.subAgentBlocks} blocks={toolCall.subAgentBlocks}
isStreaming={toolCall.subAgentStreaming} isStreaming={isCurrentMessage && toolCall.subAgentStreaming}
/> />
)} )}
</div> </div>

View File

@@ -0,0 +1,127 @@
'use client'
import { ArrowUp, Image, Loader2 } from 'lucide-react'
import { Badge, Button } from '@/components/emcn'
import { cn } from '@/lib/core/utils/cn'
import { ModeSelector } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/components/mode-selector/mode-selector'
import { ModelSelector } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/components/model-selector/model-selector'
interface BottomControlsProps {
mode: 'ask' | 'build' | 'plan'
onModeChange?: (mode: 'ask' | 'build' | 'plan') => void
selectedModel: string
onModelSelect: (model: string) => void
isNearTop: boolean
disabled: boolean
hideModeSelector: boolean
canSubmit: boolean
isLoading: boolean
isAborting: boolean
showAbortButton: boolean
onSubmit: () => void
onAbort: () => void
onFileSelect: () => void
}
/**
* Bottom controls section of the user input
* Contains mode selector, model selector, file attachment button, and submit/abort buttons
*/
export function BottomControls({
mode,
onModeChange,
selectedModel,
onModelSelect,
isNearTop,
disabled,
hideModeSelector,
canSubmit,
isLoading,
isAborting,
showAbortButton,
onSubmit,
onAbort,
onFileSelect,
}: BottomControlsProps) {
return (
<div className='flex items-center justify-between gap-2'>
{/* Left side: Mode Selector + Model Selector */}
<div className='flex min-w-0 flex-1 items-center gap-[8px]'>
{!hideModeSelector && (
<ModeSelector
mode={mode}
onModeChange={onModeChange}
isNearTop={isNearTop}
disabled={disabled}
/>
)}
<ModelSelector
selectedModel={selectedModel}
isNearTop={isNearTop}
onModelSelect={onModelSelect}
/>
</div>
{/* Right side: Attach Button + Send Button */}
<div className='flex flex-shrink-0 items-center gap-[10px]'>
<Badge
onClick={onFileSelect}
title='Attach file'
className={cn(
'cursor-pointer rounded-[6px] border-0 bg-transparent p-[0px] dark:bg-transparent',
disabled && 'cursor-not-allowed opacity-50'
)}
>
<Image className='!h-3.5 !w-3.5 scale-x-110' />
</Badge>
{showAbortButton ? (
<Button
onClick={onAbort}
disabled={isAborting}
className={cn(
'h-[20px] w-[20px] rounded-full border-0 p-0 transition-colors',
!isAborting
? 'bg-[var(--c-383838)] hover:bg-[var(--c-575757)] dark:bg-[var(--c-E0E0E0)] dark:hover:bg-[var(--c-CFCFCF)]'
: 'bg-[var(--c-383838)] dark:bg-[var(--c-E0E0E0)]'
)}
title='Stop generation'
>
{isAborting ? (
<Loader2 className='block h-[13px] w-[13px] animate-spin text-white dark:text-black' />
) : (
<svg
className='block h-[13px] w-[13px] fill-white dark:fill-black'
viewBox='0 0 24 24'
xmlns='http://www.w3.org/2000/svg'
>
<rect x='4' y='4' width='16' height='16' rx='3' ry='3' />
</svg>
)}
</Button>
) : (
<Button
onClick={onSubmit}
disabled={!canSubmit}
className={cn(
'h-[22px] w-[22px] rounded-full border-0 p-0 transition-colors',
canSubmit
? 'bg-[var(--c-383838)] hover:bg-[var(--c-575757)] dark:bg-[var(--c-E0E0E0)] dark:hover:bg-[var(--c-CFCFCF)]'
: 'bg-[var(--c-808080)] dark:bg-[var(--c-808080)]'
)}
>
{isLoading ? (
<Loader2 className='block h-3.5 w-3.5 animate-spin text-white dark:text-black' />
) : (
<ArrowUp
className='block h-3.5 w-3.5 text-white dark:text-black'
strokeWidth={2.25}
/>
)}
</Button>
)}
</div>
</div>
)
}

View File

@@ -1,6 +1,7 @@
export { AttachedFilesDisplay } from './attached-files-display/attached-files-display' export { AttachedFilesDisplay } from './attached-files-display'
export { ContextPills } from './context-pills/context-pills' export { BottomControls } from './bottom-controls'
export { type MentionFolderNav, MentionMenu } from './mention-menu/mention-menu' export { ContextPills } from './context-pills'
export { ModeSelector } from './mode-selector/mode-selector' export { type MentionFolderNav, MentionMenu } from './mention-menu'
export { ModelSelector } from './model-selector/model-selector' export { ModeSelector } from './mode-selector'
export { type SlashFolderNav, SlashMenu } from './slash-menu/slash-menu' export { ModelSelector } from './model-selector'
export { type SlashFolderNav, SlashMenu } from './slash-menu'

View File

@@ -1,5 +1,6 @@
import { useCallback, useEffect, useRef, useState } from 'react' import { useCallback, useEffect, useRef, useState } from 'react'
import { import {
escapeRegex,
filterOutContext, filterOutContext,
isContextAlreadySelected, isContextAlreadySelected,
} from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/utils' } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/utils'
@@ -22,9 +23,6 @@ interface UseContextManagementProps {
export function useContextManagement({ message, initialContexts }: UseContextManagementProps) { export function useContextManagement({ message, initialContexts }: UseContextManagementProps) {
const [selectedContexts, setSelectedContexts] = useState<ChatContext[]>(initialContexts ?? []) const [selectedContexts, setSelectedContexts] = useState<ChatContext[]>(initialContexts ?? [])
const initializedRef = useRef(false) const initializedRef = useRef(false)
const escapeRegex = useCallback((value: string) => {
return value.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
}, [])
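The hook now imports escapeRegex from the shared user-input utils instead of defining it inline. The shared util presumably matches the removed local version:

```typescript
/**
 * Escapes regex metacharacters so arbitrary context labels can be embedded
 * in a RegExp safely (moved out of the hook into the shared utils).
 */
export function escapeRegex(value: string): string {
  return value.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
}

escapeRegex('a.b(c)') === 'a\\.b\\(c\\)' // true
```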
// Initialize with initial contexts when they're first provided (for edit mode) // Initialize with initial contexts when they're first provided (for edit mode)
useEffect(() => { useEffect(() => {

View File

@@ -9,19 +9,19 @@ import {
useState, useState,
} from 'react' } from 'react'
import { createLogger } from '@sim/logger' import { createLogger } from '@sim/logger'
import { ArrowUp, AtSign, Image, Loader2 } from 'lucide-react' import { AtSign } from 'lucide-react'
import { useParams } from 'next/navigation' import { useParams } from 'next/navigation'
import { createPortal } from 'react-dom' import { createPortal } from 'react-dom'
import { Badge, Button, Textarea } from '@/components/emcn' import { Badge, Button, Textarea } from '@/components/emcn'
import { useSession } from '@/lib/auth/auth-client' import { useSession } from '@/lib/auth/auth-client'
import type { CopilotModelId } from '@/lib/copilot/models'
import { cn } from '@/lib/core/utils/cn' import { cn } from '@/lib/core/utils/cn'
import { import {
AttachedFilesDisplay, AttachedFilesDisplay,
BottomControls,
ContextPills, ContextPills,
type MentionFolderNav, type MentionFolderNav,
MentionMenu, MentionMenu,
ModelSelector,
ModeSelector,
type SlashFolderNav, type SlashFolderNav,
SlashMenu, SlashMenu,
} from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/components' } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/components'
@@ -44,6 +44,10 @@ import {
useTextareaAutoResize, useTextareaAutoResize,
} from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/hooks' } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/hooks'
import type { MessageFileAttachment } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/hooks/use-file-attachments' import type { MessageFileAttachment } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/hooks/use-file-attachments'
import {
computeMentionHighlightRanges,
extractContextTokens,
} from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/utils'
import type { ChatContext } from '@/stores/panel' import type { ChatContext } from '@/stores/panel'
import { useCopilotStore } from '@/stores/panel' import { useCopilotStore } from '@/stores/panel'
@@ -263,7 +267,6 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
if (q && q.length > 0) { if (q && q.length > 0) {
void mentionData.ensurePastChatsLoaded() void mentionData.ensurePastChatsLoaded()
// workflows and workflow-blocks auto-load from stores
void mentionData.ensureKnowledgeLoaded() void mentionData.ensureKnowledgeLoaded()
void mentionData.ensureBlocksLoaded() void mentionData.ensureBlocksLoaded()
void mentionData.ensureTemplatesLoaded() void mentionData.ensureTemplatesLoaded()
@@ -306,7 +309,7 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
size: f.size, size: f.size,
})) }))
onSubmit(trimmedMessage, fileAttachmentsForApi, contextManagement.selectedContexts as any) onSubmit(trimmedMessage, fileAttachmentsForApi, contextManagement.selectedContexts)
const shouldClearInput = clearOnSubmit && !options.preserveInput && !overrideMessage const shouldClearInput = clearOnSubmit && !options.preserveInput && !overrideMessage
if (shouldClearInput) { if (shouldClearInput) {
@@ -657,7 +660,7 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
const handleModelSelect = useCallback( const handleModelSelect = useCallback(
(model: string) => { (model: string) => {
setSelectedModel(model as any) setSelectedModel(model as CopilotModelId)
}, },
[setSelectedModel] [setSelectedModel]
) )
@@ -677,15 +680,17 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
         return <span>{displayText}</span>
       }

-      const elements: React.ReactNode[] = []
-      const ranges = mentionTokensWithContext.computeMentionRanges()
+      const tokens = extractContextTokens(contexts)
+      const ranges = computeMentionHighlightRanges(message, tokens)

       if (ranges.length === 0) {
         const displayText = message.endsWith('\n') ? `${message}\u200B` : message
         return <span>{displayText}</span>
       }

+      const elements: React.ReactNode[] = []
       let lastIndex = 0
       for (let i = 0; i < ranges.length; i++) {
         const range = ranges[i]
@@ -694,13 +699,12 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
           elements.push(<span key={`text-${i}-${lastIndex}-${range.start}`}>{before}</span>)
         }

-        const mentionText = message.slice(range.start, range.end)
         elements.push(
           <span
             key={`mention-${i}-${range.start}-${range.end}`}
             className='rounded-[4px] bg-[rgba(50,189,126,0.65)] py-[1px]'
           >
-            {mentionText}
+            {range.token}
           </span>
         )
         lastIndex = range.end
@@ -713,7 +717,7 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
       }

       return elements.length > 0 ? elements : <span>{'\u00A0'}</span>
-    }, [message, contextManagement.selectedContexts, mentionTokensWithContext])
+    }, [message, contextManagement.selectedContexts])

     return (
       <div
@@ -855,87 +859,22 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
           </div>

           {/* Bottom Row: Mode Selector + Model Selector + Attach Button + Send Button */}
-          <div className='flex items-center justify-between gap-2'>
-            {/* Left side: Mode Selector + Model Selector */}
-            <div className='flex min-w-0 flex-1 items-center gap-[8px]'>
-              {!hideModeSelector && (
-                <ModeSelector
-                  mode={mode}
-                  onModeChange={onModeChange}
-                  isNearTop={isNearTop}
-                  disabled={disabled}
-                />
-              )}
-              <ModelSelector
-                selectedModel={selectedModel}
-                isNearTop={isNearTop}
-                onModelSelect={handleModelSelect}
-              />
-            </div>
-            {/* Right side: Attach Button + Send Button */}
-            <div className='flex flex-shrink-0 items-center gap-[10px]'>
-              <Badge
-                onClick={fileAttachments.handleFileSelect}
-                title='Attach file'
-                className={cn(
-                  'cursor-pointer rounded-[6px] border-0 bg-transparent p-[0px] dark:bg-transparent',
-                  disabled && 'cursor-not-allowed opacity-50'
-                )}
-              >
-                <Image className='!h-3.5 !w-3.5 scale-x-110' />
-              </Badge>
-              {showAbortButton ? (
-                <Button
-                  onClick={handleAbort}
-                  disabled={isAborting}
-                  className={cn(
-                    'h-[20px] w-[20px] rounded-full border-0 p-0 transition-colors',
-                    !isAborting
-                      ? 'bg-[var(--c-383838)] hover:bg-[var(--c-575757)] dark:bg-[var(--c-E0E0E0)] dark:hover:bg-[var(--c-CFCFCF)]'
-                      : 'bg-[var(--c-383838)] dark:bg-[var(--c-E0E0E0)]'
-                  )}
-                  title='Stop generation'
-                >
-                  {isAborting ? (
-                    <Loader2 className='block h-[13px] w-[13px] animate-spin text-white dark:text-black' />
-                  ) : (
-                    <svg
-                      className='block h-[13px] w-[13px] fill-white dark:fill-black'
-                      viewBox='0 0 24 24'
-                      xmlns='http://www.w3.org/2000/svg'
-                    >
-                      <rect x='4' y='4' width='16' height='16' rx='3' ry='3' />
-                    </svg>
-                  )}
-                </Button>
-              ) : (
-                <Button
-                  onClick={() => {
-                    void handleSubmit()
-                  }}
-                  disabled={!canSubmit}
-                  className={cn(
-                    'h-[22px] w-[22px] rounded-full border-0 p-0 transition-colors',
-                    canSubmit
-                      ? 'bg-[var(--c-383838)] hover:bg-[var(--c-575757)] dark:bg-[var(--c-E0E0E0)] dark:hover:bg-[var(--c-CFCFCF)]'
-                      : 'bg-[var(--c-808080)] dark:bg-[var(--c-808080)]'
-                  )}
-                >
-                  {isLoading ? (
-                    <Loader2 className='block h-3.5 w-3.5 animate-spin text-white dark:text-black' />
-                  ) : (
-                    <ArrowUp
-                      className='block h-3.5 w-3.5 text-white dark:text-black'
-                      strokeWidth={2.25}
-                    />
-                  )}
-                </Button>
-              )}
-            </div>
-          </div>
+          <BottomControls
+            mode={mode}
+            onModeChange={onModeChange}
+            selectedModel={selectedModel}
+            onModelSelect={handleModelSelect}
+            isNearTop={isNearTop}
+            disabled={disabled}
+            hideModeSelector={hideModeSelector}
+            canSubmit={canSubmit}
+            isLoading={isLoading}
+            isAborting={isAborting}
+            showAbortButton={Boolean(showAbortButton)}
+            onSubmit={() => void handleSubmit()}
+            onAbort={handleAbort}
+            onFileSelect={fileAttachments.handleFileSelect}
+          />

           {/* Hidden File Input - enabled during streaming so users can prepare images for the next message */}
           <input
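
The hunk above replaces the inline mode/model/attach/send markup with a single BottomControls element. The props passed at this call site imply roughly the shape sketched below; the prop names come straight from the diff, while the types are assumptions and the actual interface under user-input/components may differ.

// Hypothetical props shape inferred from the BottomControls usage above (not the real source).
interface BottomControlsProps {
  mode: 'ask' | 'build' | 'plan'
  onModeChange?: (mode: 'ask' | 'build' | 'plan') => void
  selectedModel: string
  onModelSelect: (model: string) => void
  isNearTop: boolean
  disabled?: boolean
  hideModeSelector?: boolean
  canSubmit: boolean
  isLoading: boolean
  isAborting: boolean
  showAbortButton: boolean
  onSubmit: () => void
  onAbort: () => void
  onFileSelect: () => void
}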
View File
@@ -1,3 +1,4 @@
+import type { ReactNode } from 'react'
 import {
   FOLDER_CONFIGS,
   type MentionFolderId,
@@ -5,6 +6,102 @@ import {
 import type { MentionDataReturn } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/hooks/use-mention-data'
 import type { ChatContext } from '@/stores/panel'

+/**
+ * Escapes special regex characters in a string
+ * @param value - String to escape
+ * @returns Escaped string safe for use in RegExp
+ */
+export function escapeRegex(value: string): string {
+  return value.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
+}
+
+/**
+ * Extracts mention tokens from contexts for display/matching
+ * Filters out current_workflow contexts and builds prefixed labels
+ * @param contexts - Array of chat contexts
+ * @returns Array of prefixed token strings (e.g., "@workflow", "/web")
+ */
+export function extractContextTokens(contexts: ChatContext[]): string[] {
+  return contexts
+    .filter((c) => c.kind !== 'current_workflow' && c.label)
+    .map((c) => {
+      const prefix = c.kind === 'slash_command' ? '/' : '@'
+      return `${prefix}${c.label}`
+    })
+}
+
+/**
+ * Mention range for text highlighting
+ */
+export interface MentionHighlightRange {
+  start: number
+  end: number
+  token: string
+}
+
+/**
+ * Computes mention ranges in text for highlighting
+ * @param text - Text to search
+ * @param tokens - Prefixed tokens to find (e.g., "@workflow", "/web")
+ * @returns Array of ranges with start, end, and matched token
+ */
+export function computeMentionHighlightRanges(
+  text: string,
+  tokens: string[]
+): MentionHighlightRange[] {
+  if (!tokens.length || !text) return []
+
+  const pattern = new RegExp(`(${tokens.map(escapeRegex).join('|')})`, 'g')
+  const ranges: MentionHighlightRange[] = []
+  let match: RegExpExecArray | null
+
+  while ((match = pattern.exec(text)) !== null) {
+    ranges.push({
+      start: match.index,
+      end: match.index + match[0].length,
+      token: match[0],
+    })
+  }
+
+  return ranges
+}
+
+/**
+ * Builds React nodes with highlighted mention tokens
+ * @param text - Text to render
+ * @param contexts - Chat contexts to highlight
+ * @param createHighlightSpan - Function to create highlighted span element
+ * @returns Array of React nodes with highlighted mentions
+ */
+export function buildMentionHighlightNodes(
+  text: string,
+  contexts: ChatContext[],
+  createHighlightSpan: (token: string, key: string) => ReactNode
+): ReactNode[] {
+  const tokens = extractContextTokens(contexts)
+  if (!tokens.length) return [text]
+
+  const ranges = computeMentionHighlightRanges(text, tokens)
+  if (!ranges.length) return [text]
+
+  const nodes: ReactNode[] = []
+  let lastIndex = 0
+
+  for (const range of ranges) {
+    if (range.start > lastIndex) {
+      nodes.push(text.slice(lastIndex, range.start))
+    }
+    nodes.push(createHighlightSpan(range.token, `mention-${range.start}-${range.end}`))
+    lastIndex = range.end
+  }
+
+  if (lastIndex < text.length) {
+    nodes.push(text.slice(lastIndex))
+  }
+
+  return nodes
+}
+
 /**
  * Gets the data array for a folder ID from mentionData.
  * Uses FOLDER_CONFIGS as the source of truth for key mapping.
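
The helper block added above splits token extraction from range computation. A minimal usage sketch, assuming a simplified ChatContext shape and invented labels, would look like this:

// Illustrative only: the contexts are simplified and the labels are made up.
import {
  computeMentionHighlightRanges,
  extractContextTokens,
} from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/utils'
import type { ChatContext } from '@/stores/panel'

const contexts = [
  { kind: 'workflow', label: 'Invoice Sync' },
  { kind: 'slash_command', label: 'web' },
] as unknown as ChatContext[] // cast because the real ChatContext likely carries more fields

const message = 'Use @Invoice Sync and /web to research this'
const tokens = extractContextTokens(contexts) // ['@Invoice Sync', '/web']
const ranges = computeMentionHighlightRanges(message, tokens)
// [{ start: 4, end: 17, token: '@Invoice Sync' }, { start: 22, end: 26, token: '/web' }]

buildMentionHighlightNodes composes the same two steps and hands each matched token to a caller-supplied span factory, which keeps the React-specific rendering out of this utility module.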
View File
@@ -2,9 +2,7 @@
 import { Button } from '@/components/emcn'

-/**
- * Props for the CopilotWelcome component
- */
+/** Props for the Welcome component */
 interface WelcomeProps {
   /** Callback when a suggested question is clicked */
   onQuestionClick?: (question: string) => void
@@ -12,13 +10,7 @@ interface WelcomeProps {
   mode?: 'ask' | 'build' | 'plan'
 }

-/**
- * Welcome screen component for the copilot
- * Displays suggested questions and capabilities based on current mode
- *
- * @param props - Component props
- * @returns Welcome screen UI
- */
+/** Welcome screen displaying suggested questions based on current mode */
 export function Welcome({ onQuestionClick, mode = 'ask' }: WelcomeProps) {
   const capabilities =
     mode === 'build'
View File
@@ -24,6 +24,7 @@ import {
 import { Trash } from '@/components/emcn/icons/trash'
 import { cn } from '@/lib/core/utils/cn'
 import {
+  ChatHistorySkeleton,
   CopilotMessage,
   PlanModeSection,
   QueuedMessages,
@@ -40,6 +41,7 @@ import {
   useTodoManagement,
 } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/hooks'
 import { useScrollManagement } from '@/app/workspace/[workspaceId]/w/[workflowId]/hooks'
+import type { ChatContext } from '@/stores/panel'
 import { useCopilotStore } from '@/stores/panel'
 import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
@@ -74,10 +76,12 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
     const copilotContainerRef = useRef<HTMLDivElement>(null)
     const cancelEditCallbackRef = useRef<(() => void) | null>(null)
     const [editingMessageId, setEditingMessageId] = useState<string | null>(null)
-    const [isEditingMessage, setIsEditingMessage] = useState(false)
     const [revertingMessageId, setRevertingMessageId] = useState<string | null>(null)
     const [isHistoryDropdownOpen, setIsHistoryDropdownOpen] = useState(false)
+    // Derived state - editing when there's an editingMessageId
+    const isEditingMessage = editingMessageId !== null

     const { activeWorkflowId } = useWorkflowRegistry()
     const {
@@ -106,9 +110,9 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
       areChatsFresh,
       workflowId: copilotWorkflowId,
       setPlanTodos,
+      closePlanTodos,
       clearPlanArtifact,
       savePlanArtifact,
-      setSelectedModel,
       loadAutoAllowedTools,
     } = useCopilotStore()
@@ -126,7 +130,7 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
     // Handle scroll management (80px stickiness for copilot)
     const { scrollAreaRef, scrollToBottom } = useScrollManagement(messages, isSendingMessage, {
-      stickinessThreshold: 80,
+      stickinessThreshold: 40,
     })

     // Handle chat history grouping
@@ -146,15 +150,10 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
       isSendingMessage,
       showPlanTodos,
       planTodos,
-      setPlanTodos,
     })

-    /**
-     * Get markdown content for design document section
-     * Available in all modes once created
-     */
+    /** Gets markdown content for design document section (available in all modes once created) */
     const designDocumentContent = useMemo(() => {
-      // Use streaming content if available
       if (streamingPlanContent) {
         logger.info('[DesignDocument] Using streaming plan content', {
           contentLength: streamingPlanContent.length,
@@ -165,9 +164,7 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
       return ''
     }, [streamingPlanContent])

-    /**
-     * Helper function to focus the copilot input
-     */
+    /** Focuses the copilot input */
     const focusInput = useCallback(() => {
       userInputRef.current?.focus()
     }, [])
@@ -181,24 +178,30 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
       currentInputValue: inputValue,
     })

-    /**
-     * Auto-scroll to bottom when chat loads in
-     */
+    /** Auto-scrolls to bottom when chat loads */
     useEffect(() => {
       if (isInitialized && messages.length > 0) {
         scrollToBottom()
       }
     }, [isInitialized, messages.length, scrollToBottom])

-    /**
-     * Note: We intentionally do NOT abort on component unmount.
-     * Streams continue server-side and can be resumed when user returns.
-     * The server persists chunks to Redis for resumption.
-     */
-
-    /**
-     * Container-level click capture to cancel edit mode when clicking outside the current edit area
-     */
+    /** Cleanup on unmount - aborts active streaming. Uses refs to avoid stale closures */
+    const isSendingRef = useRef(isSendingMessage)
+    isSendingRef.current = isSendingMessage
+    const abortMessageRef = useRef(abortMessage)
+    abortMessageRef.current = abortMessage
+    useEffect(() => {
+      return () => {
+        if (isSendingRef.current) {
+          abortMessageRef.current()
+          logger.info('Aborted active message streaming due to component unmount')
+        }
+      }
+      // eslint-disable-next-line react-hooks/exhaustive-deps
+    }, [])
+
+    /** Cancels edit mode when clicking outside the current edit area */
     const handleCopilotClickCapture = useCallback(
       (event: ReactMouseEvent<HTMLDivElement>) => {
         if (!isEditingMessage) return
@@ -227,10 +230,7 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
       [isEditingMessage, editingMessageId]
     )

-    /**
-     * Handles creating a new chat session
-     * Focuses the input after creation
-     */
+    /** Creates a new chat session and focuses the input */
     const handleStartNewChat = useCallback(() => {
       createNewChat()
       logger.info('Started new chat')
@@ -240,10 +240,7 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
       }, 100)
     }, [createNewChat])

-    /**
-     * Sets the input value and focuses the textarea
-     * @param value - The value to set in the input
-     */
+    /** Sets the input value and focuses the textarea */
     const handleSetInputValueAndFocus = useCallback(
       (value: string) => {
         setInputValue(value)
@@ -254,7 +251,7 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
       [setInputValue]
     )

-    // Expose functions to parent
+    /** Exposes imperative functions to parent */
     useImperativeHandle(
       ref,
       () => ({
@@ -265,10 +262,7 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
       [handleStartNewChat, handleSetInputValueAndFocus, focusInput]
     )

-    /**
-     * Handles aborting the current message streaming
-     * Collapses todos if they are currently shown
-     */
+    /** Aborts current message streaming and collapses todos if shown */
     const handleAbort = useCallback(() => {
       abortMessage()
       if (showPlanTodos) {
@@ -276,20 +270,20 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
       }
     }, [abortMessage, showPlanTodos])

-    /**
-     * Handles message submission to the copilot
-     * @param query - The message text to send
-     * @param fileAttachments - Optional file attachments
-     * @param contexts - Optional context references
-     */
+    /** Closes the plan todos section and clears the todos */
+    const handleClosePlanTodos = useCallback(() => {
+      closePlanTodos()
+      setPlanTodos([])
+    }, [closePlanTodos, setPlanTodos])
+
+    /** Handles message submission to the copilot */
     const handleSubmit = useCallback(
-      async (query: string, fileAttachments?: MessageFileAttachment[], contexts?: any[]) => {
+      async (query: string, fileAttachments?: MessageFileAttachment[], contexts?: ChatContext[]) => {
         // Allow submission even when isSendingMessage - store will queue the message
         if (!query || !activeWorkflowId) return

         if (showPlanTodos) {
-          const store = useCopilotStore.getState()
-          store.setPlanTodos([])
+          setPlanTodos([])
         }

         try {
@@ -303,37 +297,25 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
           logger.error('Failed to send message:', error)
         }
       },
-      [activeWorkflowId, sendMessage, showPlanTodos]
+      [activeWorkflowId, sendMessage, showPlanTodos, setPlanTodos]
     )

-    /**
-     * Handles message edit mode changes
-     * @param messageId - ID of the message being edited
-     * @param isEditing - Whether edit mode is active
-     */
+    /** Handles message edit mode changes */
     const handleEditModeChange = useCallback(
       (messageId: string, isEditing: boolean, cancelCallback?: () => void) => {
         setEditingMessageId(isEditing ? messageId : null)
-        setIsEditingMessage(isEditing)
         cancelEditCallbackRef.current = isEditing ? cancelCallback || null : null
         logger.info('Edit mode changed', { messageId, isEditing, willDimMessages: isEditing })
       },
       []
     )

-    /**
-     * Handles checkpoint revert mode changes
-     * @param messageId - ID of the message being reverted
-     * @param isReverting - Whether revert mode is active
-     */
+    /** Handles checkpoint revert mode changes */
     const handleRevertModeChange = useCallback((messageId: string, isReverting: boolean) => {
       setRevertingMessageId(isReverting ? messageId : null)
     }, [])

-    /**
-     * Handles chat deletion
-     * @param chatId - ID of the chat to delete
-     */
+    /** Handles chat deletion */
     const handleDeleteChat = useCallback(
       async (chatId: string) => {
         try {
@@ -345,38 +327,15 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
       [deleteChat]
     )

-    /**
-     * Handles history dropdown opening state
-     * Loads chats if needed when dropdown opens (non-blocking)
-     * @param open - Whether the dropdown is open
-     */
+    /** Handles history dropdown opening state, loads chats if needed (non-blocking) */
     const handleHistoryDropdownOpen = useCallback(
       (open: boolean) => {
         setIsHistoryDropdownOpen(open)
-        // Fire hook without awaiting - prevents blocking and state issues
         handleHistoryDropdownOpenHook(open)
       },
       [handleHistoryDropdownOpenHook]
     )

-    /**
-     * Skeleton loading component for chat history
-     */
-    const ChatHistorySkeleton = () => (
-      <>
-        <PopoverSection>
-          <div className='h-3 w-12 animate-pulse rounded bg-muted/40' />
-        </PopoverSection>
-        <div className='flex flex-col gap-0.5'>
-          {[1, 2, 3].map((i) => (
-            <div key={i} className='flex h-[25px] items-center px-[6px]'>
-              <div className='h-3 w-full animate-pulse rounded bg-muted/40' />
-            </div>
-          ))}
-        </div>
-      </>
-    )
-
     return (
       <>
         <div
@@ -515,21 +474,18 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
             className='h-full overflow-y-auto overflow-x-hidden px-[8px]'
           >
             <div
-              className={`w-full max-w-full space-y-4 overflow-hidden py-[8px] ${
+              className={`w-full max-w-full space-y-[8px] overflow-hidden py-[8px] ${
                 showPlanTodos && planTodos.length > 0 ? 'pb-14' : 'pb-10'
               }`}
             >
               {messages.map((message, index) => {
-                // Determine if this message should be dimmed
                 let isDimmed = false

-                // Dim messages after the one being edited
                 if (editingMessageId) {
                   const editingIndex = messages.findIndex((m) => m.id === editingMessageId)
                   isDimmed = editingIndex !== -1 && index > editingIndex
                 }

-                // Also dim messages after the one showing restore confirmation
                 if (!isDimmed && revertingMessageId) {
                   const revertingIndex = messages.findIndex(
                     (m) => m.id === revertingMessageId
@@ -537,7 +493,6 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
                   isDimmed = revertingIndex !== -1 && index > revertingIndex
                 }

-                // Get checkpoint count for this message to force re-render when it changes
                 const checkpointCount = messageCheckpoints[message.id]?.length || 0

                 return (
@@ -572,11 +527,7 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
                   <TodoList
                     todos={planTodos}
                     collapsed={todosCollapsed}
-                    onClose={() => {
-                      const store = useCopilotStore.getState()
-                      store.closePlanTodos?.()
-                      useCopilotStore.setState({ planTodos: [] })
-                    }}
+                    onClose={handleClosePlanTodos}
                   />
                 </div>
               )}
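
The unmount cleanup added near the top of this file mirrors the latest isSendingMessage and abortMessage values into refs so the empty-dependency effect never reads stale closures. Stripped of the copilot store, the same pattern can be sketched as a small hook; the names below are placeholders, not part of the codebase.

import { useEffect, useRef } from 'react'

// Placeholder hook illustrating the "latest value in a ref" pattern used above.
function useAbortOnUnmount(isActive: boolean, abort: () => void) {
  // Refs are refreshed on every render, so the cleanup reads current values.
  const isActiveRef = useRef(isActive)
  isActiveRef.current = isActive
  const abortRef = useRef(abort)
  abortRef.current = abort

  useEffect(() => {
    return () => {
      if (isActiveRef.current) {
        abortRef.current()
      }
    }
    // Intentionally empty deps: this cleanup should only run on unmount.
  }, [])
}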
Some files were not shown because too many files have changed in this diff.