Compare commits

..

30 Commits

Author SHA1 Message Date
Vikhyath Mondreti
b09f683072 v0.5.63: ui and performance improvements, more google tools 2026-01-18 15:22:42 -08:00
Vikhyath Mondreti
a8bb0db660 v0.5.62: webhook bug fixes, seeding default subblock values, block selection fixes 2026-01-16 20:27:06 -08:00
Waleed
af82820a28 v0.5.61: webhook improvements, workflow controls, react query for deployment status, chat fixes, reducto and pulse OCR, linear fixes 2026-01-16 18:06:23 -08:00
Waleed
4372841797 v0.5.60: invitation flow improvements, chat fixes, a2a improvements, additional copilot actions 2026-01-15 00:02:18 -08:00
Waleed
5e8c843241 v0.5.59: a2a support, documentation 2026-01-13 13:21:21 -08:00
Waleed
7bf3d73ee6 v0.5.58: export folders, new tools, permissions groups enhancements 2026-01-13 00:56:59 -08:00
Vikhyath Mondreti
7ffc11a738 v0.5.57: subagents, context menu improvements, bug fixes 2026-01-11 11:38:40 -08:00
Waleed
be578e2ed7 v0.5.56: batch operations, access control and permission groups, billing fixes 2026-01-10 00:31:34 -08:00
Waleed
f415e5edc4 v0.5.55: polling groups, bedrock provider, devcontainer fixes, workflow preview enhancements 2026-01-08 23:36:56 -08:00
Waleed
13a6e6c3fa v0.5.54: seo, model blacklist, helm chart updates, fireflies integration, autoconnect improvements, billing fixes 2026-01-07 16:09:45 -08:00
Waleed
f5ab7f21ae v0.5.53: hotkey improvements, added redis fallback, fixes for workflow tool 2026-01-06 23:34:52 -08:00
Waleed
bfb6fffe38 v0.5.52: new port-based router block, combobox expression and variable support 2026-01-06 16:14:10 -08:00
Waleed
4fbec0a43f v0.5.51: triggers, kb, condition block improvements, supabase and grain integration updates 2026-01-06 14:26:46 -08:00
Waleed
585f5e365b v0.5.50: import improvements, ui upgrades, kb styling and performance improvements 2026-01-05 00:35:55 -08:00
Waleed
3792bdd252 v0.5.49: hitl improvements, new email styles, imap trigger, logs context menu (#2672)
* feat(logs-context-menu): consolidated logs utils and types, added logs record context menu (#2659)

* feat(email): welcome email; improvement(emails): ui/ux (#2658)

* feat(email): welcome email; improvement(emails): ui/ux

* improvement(emails): links, accounts, preview

* refactor(emails): file structure and wrapper components

* added envvar for personal emails sent, added isHosted gate

* fixed failing tests, added env mock

* fix: removed comment

---------

Co-authored-by: waleed <walif6@gmail.com>

* fix(logging): hitl + trigger dev crash protection (#2664)

* hitl gaps

* deal with trigger worker crashes

* cleanup import structure

* feat(imap): added support for imap trigger (#2663)

* feat(tools): added support for imap trigger

* feat(imap): added parity, tested

* ack PR comments

* final cleanup

* feat(i18n): update translations (#2665)

Co-authored-by: waleedlatif1 <waleedlatif1@users.noreply.github.com>

* fix(grain): updated grain trigger to auto-establish trigger (#2666)

Co-authored-by: aadamgough <adam@sim.ai>

* feat(admin): routes to manage deployments (#2667)

* feat(admin): routes to manage deployments

* fix naming of deployed by

* feat(time-picker): added timepicker emcn component, added to playground, added searchable prop for dropdown, added more timezones for schedule, updated license and notice date (#2668)

* feat(time-picker): added timepicker emcn component, added to playground, added searchable prop for dropdown, added more timezones for schedule, updated license and notice date

* removed unused params, cleaned up redundant utils

* improvement(invite): aligned styling (#2669)

* improvement(invite): aligned with rest of app

* fix(invite): error handling

* fix: addressed comments

---------

Co-authored-by: Emir Karabeg <78010029+emir-karabeg@users.noreply.github.com>
Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: waleedlatif1 <waleedlatif1@users.noreply.github.com>
Co-authored-by: Adam Gough <77861281+aadamgough@users.noreply.github.com>
Co-authored-by: aadamgough <adam@sim.ai>
2026-01-03 13:19:18 -08:00
Waleed
eb5d1f3e5b v0.5.48: copy-paste workflow blocks, docs updates, mcp tool fixes 2025-12-31 18:00:04 -08:00
Waleed
54ab82c8dd v0.5.47: deploy workflow as mcp, kb chunks tokenizer, UI improvements, jira service management tools 2025-12-30 23:18:58 -08:00
Waleed
f895bf469b v0.5.46: build improvements, greptile, light mode improvements 2025-12-29 02:17:52 -08:00
Waleed
dd3209af06 v0.5.45: light mode fixes, realtime usage indicator, docker build improvements 2025-12-27 19:57:42 -08:00
Waleed
b6ba3b50a7 v0.5.44: keyboard shortcuts, autolayout, light mode, byok, testing improvements 2025-12-26 21:25:19 -08:00
Waleed
b304233062 v0.5.43: export logs, circleback, grain, vertex, code hygiene, schedule improvements 2025-12-23 19:19:18 -08:00
Vikhyath Mondreti
57e4b49bd6 v0.5.42: fix memory migration 2025-12-23 01:24:54 -08:00
Vikhyath Mondreti
e12dd204ed v0.5.41: memory fixes, copilot improvements, knowledgebase improvements, LLM providers standardization 2025-12-23 00:15:18 -08:00
Vikhyath Mondreti
3d9d9cbc54 v0.5.40: supabase ops to allow non-public schemas, jira uuid 2025-12-21 22:28:05 -08:00
Waleed
0f4ec962ad v0.5.39: notion, workflow variables fixes 2025-12-20 20:44:00 -08:00
Waleed
4827866f9a v0.5.38: snap to grid, copilot ux improvements, billing line items 2025-12-20 17:24:38 -08:00
Waleed
3e697d9ed9 v0.5.37: redaction utils consolidation, logs updates, autoconnect improvements, additional kb tag types 2025-12-19 22:31:55 -08:00
Martin Yankov
4431a1a484 fix(helm): add custom egress rules to realtime network policy (#2481)
The realtime service network policy was missing the custom egress rules section
that allows configuration of additional egress rules via values.yaml. This caused
the realtime pods to be unable to connect to external databases (e.g., PostgreSQL
on port 5432) when using external database configurations.

The app network policy already had this section, but the realtime network policy
was missing it, creating an inconsistency and preventing the realtime service
from accessing external databases configured via networkPolicy.egress values.

This fix adds the same custom egress rules template section to the realtime
network policy, matching the app network policy behavior and allowing users to
configure database connectivity via values.yaml.
2025-12-19 18:59:08 -08:00
Waleed
4d1a9a3f22 v0.5.36: hitl improvements, opengraph, slack fixes, one-click unsubscribe, auth checks, new db indexes 2025-12-19 01:27:49 -08:00
Vikhyath Mondreti
eb07a080fb v0.5.35: helm updates, copilot improvements, 404 for docs, salesforce fixes, subflow resize clamping 2025-12-18 16:23:19 -08:00
285 changed files with 4126 additions and 18952 deletions

View File

@@ -1,35 +0,0 @@
---
paths:
- "apps/sim/components/emcn/**"
---
# EMCN Components
Import from `@/components/emcn`, never from subpaths (except CSS files).
## CVA vs Direct Styles
**Use CVA when:** 2+ variants (primary/secondary, sm/md/lg)
```tsx
const buttonVariants = cva('base-classes', {
variants: { variant: { default: '...', primary: '...' } }
})
export { Button, buttonVariants }
```
**Use direct className when:** Single consistent style, no variations
```tsx
function Label({ className, ...props }) {
return <Primitive className={cn('style-classes', className)} {...props} />
}
```
## Rules
- Use Radix UI primitives for accessibility
- Export component and variants (if using CVA)
- TSDoc with usage examples
- Consistent tokens: `font-medium`, `text-[12px]`, `rounded-[4px]`
- `transition-colors` for hover states

View File

@@ -1,13 +0,0 @@
# Global Standards
## Logging
Import `createLogger` from `sim/logger`. Use `logger.info`, `logger.warn`, `logger.error` instead of `console.log`.
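As a hedged illustration (the logger name is hypothetical; the route handlers elsewhere in this diff import from `@sim/logger`):
```typescript
import { createLogger } from '@sim/logger'

// One logger per module, named after the feature or route it serves
const logger = createLogger('WorkflowListAPI')

export function reportStartup(port: number) {
  logger.info('Server started', { port }) // structured fields, not string concatenation
  logger.warn('Falling back to default workspace')
  logger.error('Failed to connect to database')
}
```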
## Comments
Use TSDoc for documentation. No `====` separators. No non-TSDoc comments.
## Styling
Never update global styles. Keep all styling local to components.
## Package Manager
Use `bun` and `bunx`, not `npm` and `npx`.

View File

@@ -1,56 +0,0 @@
---
paths:
- "apps/sim/**"
---
# Sim App Architecture
## Core Principles
1. **Single Responsibility**: Each component, hook, store has one clear purpose
2. **Composition Over Complexity**: Break down complex logic into smaller pieces
3. **Type Safety First**: TypeScript interfaces for all props, state, return types
4. **Predictable State**: Zustand for global state, useState for UI-only concerns
## Root-Level Structure
```
apps/sim/
├── app/ # Next.js app router (pages, API routes)
├── blocks/ # Block definitions and registry
├── components/ # Shared UI (emcn/, ui/)
├── executor/ # Workflow execution engine
├── hooks/ # Shared hooks (queries/, selectors/)
├── lib/ # App-wide utilities
├── providers/ # LLM provider integrations
├── stores/ # Zustand stores
├── tools/ # Tool definitions
└── triggers/ # Trigger definitions
```
## Feature Organization
Features live under `app/workspace/[workspaceId]/`:
```
feature/
├── components/ # Feature components
├── hooks/ # Feature-scoped hooks
├── utils/ # Feature-scoped utilities (2+ consumers)
├── feature.tsx # Main component
└── page.tsx # Next.js page entry
```
## Naming Conventions
- **Components**: PascalCase (`WorkflowList`)
- **Hooks**: `use` prefix (`useWorkflowOperations`)
- **Files**: kebab-case (`workflow-list.tsx`)
- **Stores**: `stores/feature/store.ts`
- **Constants**: SCREAMING_SNAKE_CASE
- **Interfaces**: PascalCase with suffix (`WorkflowListProps`)
## Utils Rules
- **Never create `utils.ts` for single consumer** - inline it
- **Create `utils.ts` when** 2+ files need the same helper
- **Check existing sources** before duplicating (`lib/` has many utilities)
- **Location**: `lib/` (app-wide) → `feature/utils/` (feature-scoped) → inline (single-use)

View File

@@ -1,48 +0,0 @@
---
paths:
- "apps/sim/**/*.tsx"
---
# Component Patterns
## Structure Order
```typescript
'use client' // Only if using hooks
// Imports (external → internal)
// Constants at module level
const CONFIG = { SPACING: 8 } as const
// Props interface
interface ComponentProps {
requiredProp: string
optionalProp?: boolean
}
export function Component({ requiredProp, optionalProp = false }: ComponentProps) {
// a. Refs
// b. External hooks (useParams, useRouter)
// c. Store hooks
// d. Custom hooks
// e. Local state
// f. useMemo
// g. useCallback
// h. useEffect
// i. Return JSX
}
```
## Rules
1. `'use client'` only when using React hooks
2. Always define props interface
3. Extract constants with `as const`
4. Semantic HTML (`aside`, `nav`, `article`)
5. Optional chain callbacks: `onAction?.(id)`
## Component Extraction
**Extract when:** 50+ lines, used in 2+ files, or has own state/logic
**Keep inline when:** < 10 lines, single use, purely presentational

View File

@@ -1,55 +0,0 @@
---
paths:
- "apps/sim/**/use-*.ts"
- "apps/sim/**/hooks/**/*.ts"
---
# Hook Patterns
## Structure
```typescript
interface UseFeatureProps {
id: string
onSuccess?: (result: Result) => void
}
export function useFeature({ id, onSuccess }: UseFeatureProps) {
// 1. Refs for stable dependencies
const idRef = useRef(id)
const onSuccessRef = useRef(onSuccess)
// 2. State
const [data, setData] = useState<Data | null>(null)
const [isLoading, setIsLoading] = useState(false)
// 3. Sync refs
useEffect(() => {
idRef.current = id
onSuccessRef.current = onSuccess
}, [id, onSuccess])
// 4. Operations (useCallback with empty deps when using refs)
const fetchData = useCallback(async () => {
setIsLoading(true)
try {
const result = await fetch(`/api/${idRef.current}`).then(r => r.json())
setData(result)
onSuccessRef.current?.(result)
} finally {
setIsLoading(false)
}
}, [])
return { data, isLoading, fetchData }
}
```
## Rules
1. Single responsibility per hook
2. Props interface required
3. Refs for stable callback dependencies
4. Wrap returned functions in useCallback
5. Always try/catch async operations
6. Track loading/error states

View File

@@ -1,62 +0,0 @@
---
paths:
- "apps/sim/**/*.ts"
- "apps/sim/**/*.tsx"
---
# Import Patterns
## Absolute Imports
**Always use absolute imports.** Never use relative imports.
```typescript
// ✓ Good
import { useWorkflowStore } from '@/stores/workflows/store'
import { Button } from '@/components/ui/button'
// ✗ Bad
import { useWorkflowStore } from '../../../stores/workflows/store'
```
## Barrel Exports
Use barrel exports (`index.ts`) when a folder has 3+ exports. Import from barrel, not individual files.
```typescript
// ✓ Good
import { Dashboard, Sidebar } from '@/app/workspace/[workspaceId]/logs/components'
// ✗ Bad
import { Dashboard } from '@/app/workspace/[workspaceId]/logs/components/dashboard/dashboard'
```
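As a sketch, the barrel itself re-exports each component once. `Dashboard` and its path are taken from the example above; the `Sidebar` path is assumed by analogy:
```typescript
// app/workspace/[workspaceId]/logs/components/index.ts
export { Dashboard } from '@/app/workspace/[workspaceId]/logs/components/dashboard/dashboard'
export { Sidebar } from '@/app/workspace/[workspaceId]/logs/components/sidebar/sidebar'
```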
## No Re-exports
Do not re-export from non-barrel files. Import directly from the source.
```typescript
// ✓ Good - import from where it's declared
import { CORE_TRIGGER_TYPES } from '@/stores/logs/filters/types'
// ✗ Bad - re-exporting in utils.ts then importing from there
import { CORE_TRIGGER_TYPES } from '@/app/workspace/.../utils'
```
## Import Order
1. React/core libraries
2. External libraries
3. UI components (`@/components/emcn`, `@/components/ui`)
4. Utilities (`@/lib/...`)
5. Stores (`@/stores/...`)
6. Feature imports
7. CSS imports
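Put together, an ordered import block might look like this (the react-query line and the CSS path are illustrative; the rest reuse paths shown above):
```typescript
import { useCallback, useState } from 'react'
import { useQuery } from '@tanstack/react-query'
import { Button } from '@/components/ui/button'
import { cn } from '@/lib/utils'
import { useWorkflowStore } from '@/stores/workflows/store'
import { Dashboard } from '@/app/workspace/[workspaceId]/logs/components'
import './feature.css'
```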
## Type Imports
Use `type` keyword for type-only imports:
```typescript
import type { WorkflowLog } from '@/stores/logs/types'
```

View File

@@ -1,209 +0,0 @@
---
paths:
- "apps/sim/tools/**"
- "apps/sim/blocks/**"
- "apps/sim/triggers/**"
---
# Adding Integrations
## Overview
Adding a new integration typically requires:
1. **Tools** - API operations (`tools/{service}/`)
2. **Block** - UI component (`blocks/blocks/{service}.ts`)
3. **Icon** - SVG icon (`components/icons.tsx`)
4. **Trigger** (optional) - Webhooks/polling (`triggers/{service}/`)
Always look up the service's API docs first.
## 1. Tools (`tools/{service}/`)
```
tools/{service}/
├── index.ts # Export all tools
├── types.ts # Params/response types
├── {action}.ts # Individual tool (e.g., send_message.ts)
└── ...
```
**Tool file structure:**
```typescript
// tools/{service}/{action}.ts
import type { {Service}Params, {Service}Response } from '@/tools/{service}/types'
import type { ToolConfig } from '@/tools/types'
export const {service}{Action}Tool: ToolConfig<{Service}Params, {Service}Response> = {
id: '{service}_{action}',
name: '{Service} {Action}',
description: 'What this tool does',
version: '1.0.0',
oauth: { required: true, provider: '{service}' }, // if OAuth
params: { /* param definitions */ },
request: {
url: '/api/tools/{service}/{action}',
method: 'POST',
headers: () => ({ 'Content-Type': 'application/json' }),
body: (params) => ({ ...params }),
},
transformResponse: async (response) => {
const data = await response.json()
if (!data.success) throw new Error(data.error)
return { success: true, output: data.output }
},
outputs: { /* output definitions */ },
}
```
**Register in `tools/registry.ts`:**
```typescript
import { {service}{Action}Tool } from '@/tools/{service}'
// Add to registry object
{service}_{action}: {service}{Action}Tool,
```
## 2. Block (`blocks/blocks/{service}.ts`)
```typescript
import { {Service}Icon } from '@/components/icons'
import type { BlockConfig } from '@/blocks/types'
import type { {Service}Response } from '@/tools/{service}/types'
export const {Service}Block: BlockConfig<{Service}Response> = {
type: '{service}',
name: '{Service}',
description: 'Short description',
longDescription: 'Detailed description',
category: 'tools',
bgColor: '#hexcolor',
icon: {Service}Icon,
subBlocks: [ /* see SubBlock Properties below */ ],
tools: {
access: ['{service}_{action}', ...],
config: {
tool: (params) => `{service}_${params.operation}`,
params: (params) => ({ ...params }),
},
},
inputs: { /* input definitions */ },
outputs: { /* output definitions */ },
}
```
### SubBlock Properties
```typescript
{
id: 'fieldName', // Unique identifier
title: 'Field Label', // UI label
type: 'short-input', // See SubBlock Types below
placeholder: 'Hint text',
required: true, // See Required below
condition: { ... }, // See Condition below
dependsOn: ['otherField'], // See DependsOn below
mode: 'basic', // 'basic' | 'advanced' | 'both' | 'trigger'
}
```
**SubBlock Types:** `short-input`, `long-input`, `dropdown`, `code`, `switch`, `slider`, `oauth-input`, `channel-selector`, `user-selector`, `file-upload`, etc.
### `condition` - Show/hide based on another field
```typescript
// Show when operation === 'send'
condition: { field: 'operation', value: 'send' }
// Show when operation is 'send' OR 'read'
condition: { field: 'operation', value: ['send', 'read'] }
// Show when operation !== 'send'
condition: { field: 'operation', value: 'send', not: true }
// Complex: NOT in list AND another condition
condition: {
field: 'operation',
value: ['list_channels', 'list_users'],
not: true,
and: { field: 'destinationType', value: 'dm', not: true }
}
```
### `required` - Field validation
```typescript
// Always required
required: true
// Conditionally required (same syntax as condition)
required: { field: 'operation', value: 'send' }
```
### `dependsOn` - Clear field when dependencies change
```typescript
// Clear when credential changes
dependsOn: ['credential']
// Clear when authMethod changes AND (credential OR botToken) changes
dependsOn: { all: ['authMethod'], any: ['credential', 'botToken'] }
```
### `mode` - When to show field
- `'basic'` - Only in basic mode (default UI)
- `'advanced'` - Only in advanced mode (manual input)
- `'both'` - Show in both modes (default)
- `'trigger'` - Only when block is used as trigger
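A hedged illustration of how `mode` combines with the other properties (field names are placeholders; types are drawn from the list above):
```typescript
subBlocks: [
  // Basic mode only: default selector UI
  { id: 'channel', title: 'Channel', type: 'channel-selector', mode: 'basic' },
  // Advanced mode only: manual ID entry
  { id: 'channelId', title: 'Channel ID', type: 'short-input', mode: 'advanced' },
  // Only rendered when the block is used as a trigger
  { id: 'event', title: 'Event', type: 'dropdown', mode: 'trigger' },
],
```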
**Register in `blocks/registry.ts`:**
```typescript
import { {Service}Block } from '@/blocks/blocks/{service}'
// Add to registry object (alphabetically)
{service}: {Service}Block,
```
## 3. Icon (`components/icons.tsx`)
```typescript
export function {Service}Icon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
{/* SVG path from service's brand assets */}
</svg>
)
}
```
## 4. Trigger (`triggers/{service}/`) - Optional
```
triggers/{service}/
├── index.ts # Export all triggers
├── webhook.ts # Webhook handler
├── utils.ts # Shared utilities
└── {event}.ts # Specific event handlers
```
**Register in `triggers/registry.ts`:**
```typescript
import { {service}WebhookTrigger } from '@/triggers/{service}'
// Add to TRIGGER_REGISTRY
{service}_webhook: {service}WebhookTrigger,
```
## Checklist
- [ ] Look up API docs for the service
- [ ] Create `tools/{service}/types.ts` with proper types
- [ ] Create tool files for each operation
- [ ] Create `tools/{service}/index.ts` barrel export
- [ ] Register tools in `tools/registry.ts`
- [ ] Add icon to `components/icons.tsx`
- [ ] Create block in `blocks/blocks/{service}.ts`
- [ ] Register block in `blocks/registry.ts`
- [ ] (Optional) Create triggers in `triggers/{service}/`
- [ ] (Optional) Register triggers in `triggers/registry.ts`

View File

@@ -1,66 +0,0 @@
---
paths:
- "apps/sim/hooks/queries/**/*.ts"
---
# React Query Patterns
All React Query hooks live in `hooks/queries/`.
## Query Key Factory
Every query file defines a keys factory:
```typescript
export const entityKeys = {
all: ['entity'] as const,
list: (workspaceId?: string) => [...entityKeys.all, 'list', workspaceId ?? ''] as const,
detail: (id?: string) => [...entityKeys.all, 'detail', id ?? ''] as const,
}
```
## File Structure
```typescript
// 1. Query keys factory
// 2. Types (if needed)
// 3. Private fetch functions
// 4. Exported hooks
```
## Query Hook
```typescript
export function useEntityList(workspaceId?: string, options?: { enabled?: boolean }) {
return useQuery({
queryKey: entityKeys.list(workspaceId),
queryFn: () => fetchEntities(workspaceId as string),
enabled: Boolean(workspaceId) && (options?.enabled ?? true),
staleTime: 60 * 1000,
placeholderData: keepPreviousData,
})
}
```
## Mutation Hook
```typescript
export function useCreateEntity() {
const queryClient = useQueryClient()
return useMutation({
mutationFn: async (variables) => { /* fetch POST */ },
onSuccess: () => queryClient.invalidateQueries({ queryKey: entityKeys.all }),
})
}
```
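A hedged usage sketch combining the two hooks above in a component (entity fields and the returned array shape are placeholders):
```tsx
function EntityPanel({ workspaceId }: { workspaceId: string }) {
  const { data: entities, isLoading } = useEntityList(workspaceId)
  const createEntity = useCreateEntity()

  if (isLoading) return <div>Loading…</div>
  return (
    <button onClick={() => createEntity.mutate({ name: 'New entity' })}>
      Create entity ({entities?.length ?? 0} existing)
    </button>
  )
}
```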
## Optimistic Updates
For optimistic mutations syncing with Zustand, use `createOptimisticMutationHandlers` from `@/hooks/queries/utils/optimistic-mutation`.
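The helper's exact signature isn't shown here; as a generic sketch, the underlying TanStack Query shape it presumably wraps looks like this (endpoint and field names hypothetical):
```typescript
import { useMutation, useQueryClient } from '@tanstack/react-query'

export function useRenameEntity() {
  const queryClient = useQueryClient()
  return useMutation({
    mutationFn: (vars: { id: string; name: string }) =>
      fetch(`/api/entities/${vars.id}`, { method: 'PATCH', body: JSON.stringify(vars) }),
    onMutate: async (vars) => {
      // Snapshot the current cache entry and apply the change optimistically
      await queryClient.cancelQueries({ queryKey: entityKeys.detail(vars.id) })
      const previous = queryClient.getQueryData(entityKeys.detail(vars.id))
      queryClient.setQueryData(entityKeys.detail(vars.id), (old: { name?: string } | undefined) => ({
        ...old,
        name: vars.name,
      }))
      return { previous }
    },
    // Roll back to the snapshot if the request fails
    onError: (_err, vars, context) =>
      queryClient.setQueryData(entityKeys.detail(vars.id), context?.previous),
    // Re-fetch to converge on server state either way
    onSettled: (_data, _err, vars) =>
      queryClient.invalidateQueries({ queryKey: entityKeys.detail(vars.id) }),
  })
}
```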
## Naming
- **Keys**: `entityKeys`
- **Query hooks**: `useEntity`, `useEntityList`
- **Mutation hooks**: `useCreateEntity`, `useUpdateEntity`
- **Fetch functions**: `fetchEntity` (private)

View File

@@ -1,71 +0,0 @@
---
paths:
- "apps/sim/**/store.ts"
- "apps/sim/**/stores/**/*.ts"
---
# Zustand Store Patterns
Stores live in `stores/`. Complex stores split into `store.ts` + `types.ts`.
## Basic Store
```typescript
import { create } from 'zustand'
import { devtools } from 'zustand/middleware'
import type { FeatureState } from '@/stores/feature/types'
const initialState = { items: [] as Item[], activeId: null as string | null }
export const useFeatureStore = create<FeatureState>()(
devtools(
(set, get) => ({
...initialState,
setItems: (items) => set({ items }),
addItem: (item) => set((state) => ({ items: [...state.items, item] })),
reset: () => set(initialState),
}),
{ name: 'feature-store' }
)
)
```
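A hedged sketch of the companion `stores/feature/types.ts` implied by the store above (the `Item` shape is illustrative):
```typescript
// stores/feature/types.ts
export interface Item {
  id: string
  name: string
}

export interface FeatureState {
  items: Item[]
  activeId: string | null
  setItems: (items: Item[]) => void
  addItem: (item: Item) => void
  reset: () => void
}
```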
## Persisted Store
```typescript
import { create } from 'zustand'
import { persist } from 'zustand/middleware'
export const useFeatureStore = create<FeatureState>()(
persist(
(set) => ({
width: 300,
setWidth: (width) => set({ width }),
_hasHydrated: false,
setHasHydrated: (v) => set({ _hasHydrated: v }),
}),
{
name: 'feature-state',
partialize: (state) => ({ width: state.width }),
onRehydrateStorage: () => (state) => state?.setHasHydrated(true),
}
)
)
```
## Rules
1. Use `devtools` middleware (named stores)
2. Use `persist` only when data should survive reload
3. `partialize` to persist only necessary state
4. `_hasHydrated` pattern for persisted stores needing hydration tracking
5. Immutable updates only
6. `set((state) => ...)` when depending on previous state
7. Provide `reset()` action
## Outside React
```typescript
const items = useFeatureStore.getState().items
useFeatureStore.setState({ items: newItems })
```

View File

@@ -1,41 +0,0 @@
---
paths:
- "apps/sim/**/*.tsx"
- "apps/sim/**/*.css"
---
# Styling Rules
## Tailwind
1. **No inline styles** - Use Tailwind classes
2. **No duplicate dark classes** - Skip `dark:` when value matches light mode
3. **Exact values** - `text-[14px]`, `h-[26px]`
4. **Transitions** - `transition-colors` for interactive states
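A hedged before/after for rules 1-3 (class values illustrative):
```tsx
// ✗ Bad - inline style, dark: class duplicating the light value
<div style={{ height: 26 }} className='text-[14px] text-foreground dark:text-foreground' />
// ✓ Good - exact Tailwind values, transition for the interactive state
<div className='h-[26px] text-[14px] text-foreground transition-colors hover:bg-accent' />
```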
## Conditional Classes
```typescript
import { cn } from '@/lib/utils'
<div className={cn(
'base-classes',
isActive && 'active-classes',
disabled ? 'opacity-60' : 'hover:bg-accent'
)} />
```
## CSS Variables
For dynamic values (widths, heights) synced with stores:
```typescript
// In store
setWidth: (width) => {
set({ width })
document.documentElement.style.setProperty('--sidebar-width', `${width}px`)
}
// In component
<aside style={{ width: 'var(--sidebar-width)' }} />
```

View File

@@ -1,58 +0,0 @@
---
paths:
- "apps/sim/**/*.test.ts"
- "apps/sim/**/*.test.tsx"
---
# Testing Patterns
Use Vitest. Test files: `feature.ts` → `feature.test.ts`
## Structure
```typescript
/**
* @vitest-environment node
*/
import { databaseMock, loggerMock } from '@sim/testing'
import { describe, expect, it, vi } from 'vitest'
vi.mock('@sim/db', () => databaseMock)
vi.mock('@sim/logger', () => loggerMock)
import { myFunction } from '@/lib/feature'
describe('myFunction', () => {
beforeEach(() => vi.clearAllMocks())
it.concurrent('isolated tests run in parallel', () => { ... })
})
```
## @sim/testing Package
Always prefer these utilities over local mocks.
| Category | Utilities |
|----------|-----------|
| **Mocks** | `loggerMock`, `databaseMock`, `setupGlobalFetchMock()` |
| **Factories** | `createSession()`, `createWorkflowRecord()`, `createBlock()`, `createExecutorContext()` |
| **Builders** | `WorkflowBuilder`, `ExecutionContextBuilder` |
| **Assertions** | `expectWorkflowAccessGranted()`, `expectBlockExecuted()` |
## Rules
1. `@vitest-environment node` directive at file top
2. `vi.mock()` calls before importing mocked modules
3. `@sim/testing` utilities over local mocks
4. `it.concurrent` for isolated tests (no shared mutable state)
5. `beforeEach(() => vi.clearAllMocks())` to reset state
## Hoisted Mocks
For mutable mock references:
```typescript
const mockFn = vi.hoisted(() => vi.fn())
vi.mock('@/lib/module', () => ({ myFunction: mockFn }))
mockFn.mockResolvedValue({ data: 'test' })
```

View File

@@ -1,21 +0,0 @@
---
paths:
- "apps/sim/**/*.ts"
- "apps/sim/**/*.tsx"
---
# TypeScript Rules
1. **No `any`** - Use proper types or `unknown` with type guards
2. **Props interface** - Always define for components
3. **Const assertions** - `as const` for constant objects/arrays
4. **Ref types** - Explicit: `useRef<HTMLDivElement>(null)`
5. **Type imports** - `import type { X }` for type-only imports
```typescript
// ✗ Bad
const handleClick = (e: any) => {}
// ✓ Good
const handleClick = (e: React.MouseEvent<HTMLButtonElement>) => {}
```

View File

@@ -8,7 +8,7 @@ alwaysApply: true
You are a professional software engineer. All code must follow best practices: accurate, readable, clean, and efficient.
## Logging
Import `createLogger` from `@sim/logger`. Use `logger.info`, `logger.warn`, `logger.error` instead of `console.log`.
Import `createLogger` from `sim/logger`. Use `logger.info`, `logger.warn`, `logger.error` instead of `console.log`.
## Comments
Use TSDoc for documentation. No `====` separators. No non-TSDoc comments.

View File

@@ -86,112 +86,27 @@ export async function GET(request: NextRequest) {
)
.limit(candidateLimit)
const knownLocales = ['en', 'es', 'fr', 'de', 'ja', 'zh']
const seenIds = new Set<string>()
const mergedResults = []
const vectorRankMap = new Map<string, number>()
vectorResults.forEach((r, idx) => vectorRankMap.set(r.chunkId, idx + 1))
const keywordRankMap = new Map<string, number>()
keywordResults.forEach((r, idx) => keywordRankMap.set(r.chunkId, idx + 1))
const allChunkIds = new Set([
...vectorResults.map((r) => r.chunkId),
...keywordResults.map((r) => r.chunkId),
])
const k = 60
type ResultWithRRF = (typeof vectorResults)[0] & { rrfScore: number }
const scoredResults: ResultWithRRF[] = []
for (const chunkId of allChunkIds) {
const vectorRank = vectorRankMap.get(chunkId) ?? Number.POSITIVE_INFINITY
const keywordRank = keywordRankMap.get(chunkId) ?? Number.POSITIVE_INFINITY
const rrfScore = 1 / (k + vectorRank) + 1 / (k + keywordRank)
const result =
vectorResults.find((r) => r.chunkId === chunkId) ||
keywordResults.find((r) => r.chunkId === chunkId)
if (result) {
scoredResults.push({ ...result, rrfScore })
for (let i = 0; i < Math.max(vectorResults.length, keywordResults.length); i++) {
if (i < vectorResults.length && !seenIds.has(vectorResults[i].chunkId)) {
mergedResults.push(vectorResults[i])
seenIds.add(vectorResults[i].chunkId)
}
if (i < keywordResults.length && !seenIds.has(keywordResults[i].chunkId)) {
mergedResults.push(keywordResults[i])
seenIds.add(keywordResults[i].chunkId)
}
}
scoredResults.sort((a, b) => b.rrfScore - a.rrfScore)
const localeFilteredResults = scoredResults.filter((result) => {
const firstPart = result.sourceDocument.split('/')[0]
if (knownLocales.includes(firstPart)) {
return firstPart === locale
}
return locale === 'en'
})
const queryLower = query.toLowerCase()
const getTitleBoost = (result: ResultWithRRF): number => {
const fileName = result.sourceDocument
.replace('.mdx', '')
.split('/')
.pop()
?.toLowerCase()
?.replace(/_/g, ' ')
if (fileName === queryLower) return 0.01
if (fileName?.includes(queryLower)) return 0.005
return 0
}
localeFilteredResults.sort((a, b) => {
return b.rrfScore + getTitleBoost(b) - (a.rrfScore + getTitleBoost(a))
})
const pageMap = new Map<string, ResultWithRRF>()
for (const result of localeFilteredResults) {
const pageKey = result.sourceDocument
const existing = pageMap.get(pageKey)
if (!existing || result.rrfScore > existing.rrfScore) {
pageMap.set(pageKey, result)
}
}
const deduplicatedResults = Array.from(pageMap.values())
.sort((a, b) => b.rrfScore + getTitleBoost(b) - (a.rrfScore + getTitleBoost(a)))
.slice(0, limit)
const searchResults = deduplicatedResults.map((result) => {
const filteredResults = mergedResults.slice(0, limit)
const searchResults = filteredResults.map((result) => {
const title = result.headerText || result.sourceDocument.replace('.mdx', '')
const pathParts = result.sourceDocument
.replace('.mdx', '')
.split('/')
.filter((part) => part !== 'index' && !knownLocales.includes(part))
.map((part) => {
return part
.replace(/_/g, ' ')
.split(' ')
.map((word) => {
const acronyms = [
'api',
'mcp',
'sdk',
'url',
'http',
'json',
'xml',
'html',
'css',
'ai',
]
if (acronyms.includes(word.toLowerCase())) {
return word.toUpperCase()
}
return word.charAt(0).toUpperCase() + word.slice(1)
})
.join(' ')
})
.map((part) => part.charAt(0).toUpperCase() + part.slice(1))
return {
id: result.chunkId,

View File

@@ -1739,12 +1739,12 @@ export function BrowserUseIcon(props: SVGProps<SVGSVGElement>) {
{...props}
version='1.0'
xmlns='http://www.w3.org/2000/svg'
width='28'
height='28'
width='150pt'
height='150pt'
viewBox='0 0 150 150'
preserveAspectRatio='xMidYMid meet'
>
<g transform='translate(0,150) scale(0.05,-0.05)' fill='currentColor' stroke='none'>
<g transform='translate(0,150) scale(0.05,-0.05)' fill='#000000' stroke='none'>
<path
d='M786 2713 c-184 -61 -353 -217 -439 -405 -76 -165 -65 -539 19 -666
l57 -85 -48 -124 c-203 -517 -79 -930 346 -1155 159 -85 441 -71 585 28 l111

View File

@@ -7,7 +7,7 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="browser_use"
color="#181C1E"
color="#E0E0E0"
/>
{/* MANUAL-CONTENT-START:intro */}

View File

@@ -52,15 +52,6 @@ Read content from a Google Slides presentation
| --------- | ---- | ----------- |
| `slides` | json | Array of slides with their content |
| `metadata` | json | Presentation metadata including ID, title, and URL |
| ↳ `presentationId` | string | The presentation ID |
| ↳ `title` | string | The presentation title |
| ↳ `pageSize` | object | Presentation page size |
| ↳ `width` | json | Page width as a Dimension object |
| ↳ `height` | json | Page height as a Dimension object |
| ↳ `width` | json | Page width as a Dimension object |
| ↳ `height` | json | Page height as a Dimension object |
| ↳ `mimeType` | string | The mime type of the presentation |
| ↳ `url` | string | URL to open the presentation |
### `google_slides_write`
@@ -80,10 +71,6 @@ Write or update content in a Google Slides presentation
| --------- | ---- | ----------- |
| `updatedContent` | boolean | Indicates if presentation content was updated successfully |
| `metadata` | json | Updated presentation metadata including ID, title, and URL |
| ↳ `presentationId` | string | The presentation ID |
| ↳ `title` | string | The presentation title |
| ↳ `mimeType` | string | The mime type of the presentation |
| ↳ `url` | string | URL to open the presentation |
### `google_slides_create`
@@ -103,10 +90,6 @@ Create a new Google Slides presentation
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `metadata` | json | Created presentation metadata including ID, title, and URL |
| ↳ `presentationId` | string | The presentation ID |
| ↳ `title` | string | The presentation title |
| ↳ `mimeType` | string | The mime type of the presentation |
| ↳ `url` | string | URL to open the presentation |
### `google_slides_replace_all_text`
@@ -128,10 +111,6 @@ Find and replace all occurrences of text throughout a Google Slides presentation
| --------- | ---- | ----------- |
| `occurrencesChanged` | number | Number of text occurrences that were replaced |
| `metadata` | json | Operation metadata including presentation ID and URL |
| ↳ `presentationId` | string | The presentation ID |
| ↳ `findText` | string | The text that was searched for |
| ↳ `replaceText` | string | The text that replaced the matches |
| ↳ `url` | string | URL to open the presentation |
### `google_slides_add_slide`
@@ -152,10 +131,6 @@ Add a new slide to a Google Slides presentation with a specified layout
| --------- | ---- | ----------- |
| `slideId` | string | The object ID of the newly created slide |
| `metadata` | json | Operation metadata including presentation ID, layout, and URL |
| ↳ `presentationId` | string | The presentation ID |
| ↳ `layout` | string | The layout used for the new slide |
| ↳ `insertionIndex` | number | The zero-based index where the slide was inserted |
| ↳ `url` | string | URL to open the presentation |
### `google_slides_add_image`
@@ -179,10 +154,6 @@ Insert an image into a specific slide in a Google Slides presentation
| --------- | ---- | ----------- |
| `imageId` | string | The object ID of the newly created image |
| `metadata` | json | Operation metadata including presentation ID and image URL |
| ↳ `presentationId` | string | The presentation ID |
| ↳ `pageObjectId` | string | The page object ID where the image was inserted |
| ↳ `imageUrl` | string | The source image URL |
| ↳ `url` | string | URL to open the presentation |
### `google_slides_get_thumbnail`
@@ -205,10 +176,6 @@ Generate a thumbnail image of a specific slide in a Google Slides presentation
| `width` | number | Width of the thumbnail in pixels |
| `height` | number | Height of the thumbnail in pixels |
| `metadata` | json | Operation metadata including presentation ID and page object ID |
| ↳ `presentationId` | string | The presentation ID |
| ↳ `pageObjectId` | string | The page object ID for the thumbnail |
| ↳ `thumbnailSize` | string | The requested thumbnail size |
| ↳ `mimeType` | string | The thumbnail MIME type |
### `google_slides_get_page`

View File

@@ -4,7 +4,7 @@ import { eq } from 'drizzle-orm'
import { NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
const logger = createLogger('SSOProvidersRoute')
const logger = createLogger('SSO-Providers')
export async function GET() {
try {

View File

@@ -6,7 +6,7 @@ import { hasSSOAccess } from '@/lib/billing'
import { env } from '@/lib/core/config/env'
import { REDACTED_MARKER } from '@/lib/core/security/redaction'
const logger = createLogger('SSORegisterRoute')
const logger = createLogger('SSO-Register')
const mappingSchema = z
.object({
@@ -43,10 +43,6 @@ const ssoRegistrationSchema = z.discriminatedUnion('providerType', [
])
.default(['openid', 'profile', 'email']),
pkce: z.boolean().default(true),
authorizationEndpoint: z.string().url().optional(),
tokenEndpoint: z.string().url().optional(),
userInfoEndpoint: z.string().url().optional(),
jwksEndpoint: z.string().url().optional(),
}),
z.object({
providerType: z.literal('saml'),
@@ -68,10 +64,12 @@ const ssoRegistrationSchema = z.discriminatedUnion('providerType', [
export async function POST(request: NextRequest) {
try {
// SSO plugin must be enabled in Better Auth
if (!env.SSO_ENABLED) {
return NextResponse.json({ error: 'SSO is not enabled' }, { status: 400 })
}
// Check plan access (enterprise) or env var override
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Authentication required' }, { status: 401 })
@@ -118,16 +116,7 @@ export async function POST(request: NextRequest) {
}
if (providerType === 'oidc') {
const {
clientId,
clientSecret,
scopes,
pkce,
authorizationEndpoint,
tokenEndpoint,
userInfoEndpoint,
jwksEndpoint,
} = body
const { clientId, clientSecret, scopes, pkce } = body
const oidcConfig: any = {
clientId,
@@ -138,102 +127,48 @@ export async function POST(request: NextRequest) {
pkce: pkce ?? true,
}
oidcConfig.authorizationEndpoint = authorizationEndpoint
oidcConfig.tokenEndpoint = tokenEndpoint
oidcConfig.userInfoEndpoint = userInfoEndpoint
oidcConfig.jwksEndpoint = jwksEndpoint
const needsDiscovery =
!oidcConfig.authorizationEndpoint || !oidcConfig.tokenEndpoint || !oidcConfig.jwksEndpoint
if (needsDiscovery) {
const discoveryUrl = `${issuer.replace(/\/$/, '')}/.well-known/openid-configuration`
try {
logger.info('Fetching OIDC discovery document for missing endpoints', {
discoveryUrl,
hasAuthEndpoint: !!oidcConfig.authorizationEndpoint,
hasTokenEndpoint: !!oidcConfig.tokenEndpoint,
hasJwksEndpoint: !!oidcConfig.jwksEndpoint,
})
const discoveryResponse = await fetch(discoveryUrl, {
headers: { Accept: 'application/json' },
})
if (!discoveryResponse.ok) {
logger.error('Failed to fetch OIDC discovery document', {
status: discoveryResponse.status,
statusText: discoveryResponse.statusText,
})
return NextResponse.json(
{
error: `Failed to fetch OIDC discovery document from ${discoveryUrl}. Status: ${discoveryResponse.status}. Provide all endpoints explicitly or verify the issuer URL.`,
},
{ status: 400 }
)
}
const discovery = await discoveryResponse.json()
oidcConfig.authorizationEndpoint =
oidcConfig.authorizationEndpoint || discovery.authorization_endpoint
oidcConfig.tokenEndpoint = oidcConfig.tokenEndpoint || discovery.token_endpoint
oidcConfig.userInfoEndpoint = oidcConfig.userInfoEndpoint || discovery.userinfo_endpoint
oidcConfig.jwksEndpoint = oidcConfig.jwksEndpoint || discovery.jwks_uri
logger.info('Merged OIDC endpoints (user-provided + discovery)', {
providerId,
issuer,
authorizationEndpoint: oidcConfig.authorizationEndpoint,
tokenEndpoint: oidcConfig.tokenEndpoint,
userInfoEndpoint: oidcConfig.userInfoEndpoint,
jwksEndpoint: oidcConfig.jwksEndpoint,
})
} catch (error) {
logger.error('Error fetching OIDC discovery document', {
error: error instanceof Error ? error.message : 'Unknown error',
discoveryUrl,
})
return NextResponse.json(
{
error: `Failed to fetch OIDC discovery document from ${discoveryUrl}. Please verify the issuer URL is correct or provide all endpoints explicitly.`,
},
{ status: 400 }
)
}
} else {
logger.info('Using explicitly provided OIDC endpoints (all present)', {
providerId,
issuer,
authorizationEndpoint: oidcConfig.authorizationEndpoint,
tokenEndpoint: oidcConfig.tokenEndpoint,
userInfoEndpoint: oidcConfig.userInfoEndpoint,
jwksEndpoint: oidcConfig.jwksEndpoint,
})
}
// Add manual endpoints for providers that might need them
// Common patterns for OIDC providers that don't support discovery properly
if (
!oidcConfig.authorizationEndpoint ||
!oidcConfig.tokenEndpoint ||
!oidcConfig.jwksEndpoint
issuer.includes('okta.com') ||
issuer.includes('auth0.com') ||
issuer.includes('identityserver')
) {
const missing: string[] = []
if (!oidcConfig.authorizationEndpoint) missing.push('authorizationEndpoint')
if (!oidcConfig.tokenEndpoint) missing.push('tokenEndpoint')
if (!oidcConfig.jwksEndpoint) missing.push('jwksEndpoint')
const baseUrl = issuer.includes('/oauth2/default')
? issuer.replace('/oauth2/default', '')
: issuer.replace('/oauth', '').replace('/v2.0', '').replace('/oauth2', '')
logger.error('Missing required OIDC endpoints after discovery merge', {
missing,
authorizationEndpoint: oidcConfig.authorizationEndpoint,
tokenEndpoint: oidcConfig.tokenEndpoint,
jwksEndpoint: oidcConfig.jwksEndpoint,
// Okta-style endpoints
if (issuer.includes('okta.com')) {
oidcConfig.authorizationEndpoint = `${baseUrl}/oauth2/default/v1/authorize`
oidcConfig.tokenEndpoint = `${baseUrl}/oauth2/default/v1/token`
oidcConfig.userInfoEndpoint = `${baseUrl}/oauth2/default/v1/userinfo`
oidcConfig.jwksEndpoint = `${baseUrl}/oauth2/default/v1/keys`
}
// Auth0-style endpoints
else if (issuer.includes('auth0.com')) {
oidcConfig.authorizationEndpoint = `${baseUrl}/authorize`
oidcConfig.tokenEndpoint = `${baseUrl}/oauth/token`
oidcConfig.userInfoEndpoint = `${baseUrl}/userinfo`
oidcConfig.jwksEndpoint = `${baseUrl}/.well-known/jwks.json`
}
// Generic OIDC endpoints (IdentityServer, etc.)
else {
oidcConfig.authorizationEndpoint = `${baseUrl}/connect/authorize`
oidcConfig.tokenEndpoint = `${baseUrl}/connect/token`
oidcConfig.userInfoEndpoint = `${baseUrl}/connect/userinfo`
oidcConfig.jwksEndpoint = `${baseUrl}/.well-known/jwks`
}
logger.info('Using manual OIDC endpoints for provider', {
providerId,
provider: issuer.includes('okta.com')
? 'Okta'
: issuer.includes('auth0.com')
? 'Auth0'
: 'Generic',
authEndpoint: oidcConfig.authorizationEndpoint,
})
return NextResponse.json(
{
error: `Missing required OIDC endpoints: ${missing.join(', ')}. Please provide these explicitly or verify the issuer supports OIDC discovery.`,
},
{ status: 400 }
)
}
providerConfig.oidcConfig = oidcConfig

View File

@@ -224,7 +224,7 @@ export async function POST(req: NextRequest) {
hasApiKey: !!executionParams.apiKey,
})
const result = await executeTool(resolvedToolName, executionParams)
const result = await executeTool(resolvedToolName, executionParams, true)
logger.info(`[${tracker.requestId}] Tool execution complete`, {
toolName,

View File

@@ -1,11 +1,10 @@
import { db } from '@sim/db'
import { templateCreators } from '@sim/db/schema'
import { templateCreators, user } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { generateRequestId } from '@/lib/core/utils/request'
import { verifyEffectiveSuperUser } from '@/lib/templates/permissions'
const logger = createLogger('CreatorVerificationAPI')
@@ -24,8 +23,9 @@ export async function POST(request: NextRequest, { params }: { params: Promise<{
}
// Check if user is a super user
const { effectiveSuperUser } = await verifyEffectiveSuperUser(session.user.id)
if (!effectiveSuperUser) {
const currentUser = await db.select().from(user).where(eq(user.id, session.user.id)).limit(1)
if (!currentUser[0]?.isSuperUser) {
logger.warn(`[${requestId}] Non-super user attempted to verify creator: ${id}`)
return NextResponse.json({ error: 'Only super users can verify creators' }, { status: 403 })
}
@@ -76,8 +76,9 @@ export async function DELETE(
}
// Check if user is a super user
const { effectiveSuperUser } = await verifyEffectiveSuperUser(session.user.id)
if (!effectiveSuperUser) {
const currentUser = await db.select().from(user).where(eq(user.id, session.user.id)).limit(1)
if (!currentUser[0]?.isSuperUser) {
logger.warn(`[${requestId}] Non-super user attempted to unverify creator: ${id}`)
return NextResponse.json({ error: 'Only super users can unverify creators' }, { status: 403 })
}

View File

@@ -6,10 +6,9 @@ import { createLogger } from '@sim/logger'
import binaryExtensionsList from 'binary-extensions'
import { type NextRequest, NextResponse } from 'next/server'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { secureFetchWithPinnedIP, validateUrlWithDNS } from '@/lib/core/security/input-validation'
import { createPinnedUrl, validateUrlWithDNS } from '@/lib/core/security/input-validation'
import { isSupportedFileType, parseFile } from '@/lib/file-parsers'
import { isUsingCloudStorage, type StorageContext, StorageService } from '@/lib/uploads'
import { uploadExecutionFile } from '@/lib/uploads/contexts/execution'
import { UPLOAD_DIR_SERVER } from '@/lib/uploads/core/setup.server'
import { getFileMetadataByKey } from '@/lib/uploads/server/metadata'
import {
@@ -22,7 +21,6 @@ import {
} from '@/lib/uploads/utils/file-utils'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
import { verifyFileAccess } from '@/app/api/files/authorization'
import type { UserFile } from '@/executor/types'
import '@/lib/uploads/core/setup.server'
export const dynamic = 'force-dynamic'
@@ -32,12 +30,6 @@ const logger = createLogger('FilesParseAPI')
const MAX_DOWNLOAD_SIZE_BYTES = 100 * 1024 * 1024 // 100 MB
const DOWNLOAD_TIMEOUT_MS = 30000 // 30 seconds
interface ExecutionContext {
workspaceId: string
workflowId: string
executionId: string
}
interface ParseResult {
success: boolean
content?: string
@@ -45,7 +37,6 @@ interface ParseResult {
filePath: string
originalName?: string // Original filename from database (for workspace files)
viewerUrl?: string | null // Viewer URL for the file if available
userFile?: UserFile // UserFile object for the raw file
metadata?: {
fileType: string
size: number
@@ -79,45 +70,27 @@ export async function POST(request: NextRequest) {
const userId = authResult.userId
const requestData = await request.json()
const { filePath, fileType, workspaceId, workflowId, executionId } = requestData
const { filePath, fileType, workspaceId } = requestData
if (!filePath || (typeof filePath === 'string' && filePath.trim() === '')) {
return NextResponse.json({ success: false, error: 'No file path provided' }, { status: 400 })
}
// Build execution context if all required fields are present
const executionContext: ExecutionContext | undefined =
workspaceId && workflowId && executionId
? { workspaceId, workflowId, executionId }
: undefined
logger.info('File parse request received:', {
filePath,
fileType,
workspaceId,
userId,
hasExecutionContext: !!executionContext,
})
logger.info('File parse request received:', { filePath, fileType, workspaceId, userId })
if (Array.isArray(filePath)) {
const results = []
for (const singlePath of filePath) {
if (!singlePath || (typeof singlePath === 'string' && singlePath.trim() === '')) {
for (const path of filePath) {
if (!path || (typeof path === 'string' && path.trim() === '')) {
results.push({
success: false,
error: 'Empty file path in array',
filePath: singlePath || '',
filePath: path || '',
})
continue
}
const result = await parseFileSingle(
singlePath,
fileType,
workspaceId,
userId,
executionContext
)
const result = await parseFileSingle(path, fileType, workspaceId, userId)
if (result.metadata) {
result.metadata.processingTime = Date.now() - startTime
}
@@ -133,7 +106,6 @@ export async function POST(request: NextRequest) {
fileType: result.metadata?.fileType || 'application/octet-stream',
size: result.metadata?.size || 0,
binary: false,
file: result.userFile,
},
filePath: result.filePath,
viewerUrl: result.viewerUrl,
@@ -149,7 +121,7 @@ export async function POST(request: NextRequest) {
})
}
const result = await parseFileSingle(filePath, fileType, workspaceId, userId, executionContext)
const result = await parseFileSingle(filePath, fileType, workspaceId, userId)
if (result.metadata) {
result.metadata.processingTime = Date.now() - startTime
@@ -165,7 +137,6 @@ export async function POST(request: NextRequest) {
fileType: result.metadata?.fileType || 'application/octet-stream',
size: result.metadata?.size || 0,
binary: false,
file: result.userFile,
},
filePath: result.filePath,
viewerUrl: result.viewerUrl,
@@ -193,8 +164,7 @@ async function parseFileSingle(
filePath: string,
fileType: string,
workspaceId: string,
userId: string,
executionContext?: ExecutionContext
userId: string
): Promise<ParseResult> {
logger.info('Parsing file:', filePath)
@@ -216,18 +186,18 @@ async function parseFileSingle(
}
if (filePath.includes('/api/files/serve/')) {
return handleCloudFile(filePath, fileType, undefined, userId, executionContext)
return handleCloudFile(filePath, fileType, undefined, userId)
}
if (filePath.startsWith('http://') || filePath.startsWith('https://')) {
return handleExternalUrl(filePath, fileType, workspaceId, userId, executionContext)
return handleExternalUrl(filePath, fileType, workspaceId, userId)
}
if (isUsingCloudStorage()) {
return handleCloudFile(filePath, fileType, undefined, userId, executionContext)
return handleCloudFile(filePath, fileType, undefined, userId)
}
return handleLocalFile(filePath, fileType, userId, executionContext)
return handleLocalFile(filePath, fileType, userId)
}
/**
@@ -260,14 +230,12 @@ function validateFilePath(filePath: string): { isValid: boolean; error?: string
/**
* Handle external URL
* If workspaceId is provided, checks if file already exists and saves to workspace if not
* If executionContext is provided, also stores the file in execution storage and returns UserFile
*/
async function handleExternalUrl(
url: string,
fileType: string,
workspaceId: string,
userId: string,
executionContext?: ExecutionContext
userId: string
): Promise<ParseResult> {
try {
logger.info('Fetching external URL:', url)
@@ -344,13 +312,17 @@ async function handleExternalUrl(
if (existingFile) {
const storageFilePath = `/api/files/serve/${existingFile.key}`
return handleCloudFile(storageFilePath, fileType, 'workspace', userId, executionContext)
return handleCloudFile(storageFilePath, fileType, 'workspace', userId)
}
}
}
const response = await secureFetchWithPinnedIP(url, urlValidation.resolvedIP!, {
timeout: DOWNLOAD_TIMEOUT_MS,
const pinnedUrl = createPinnedUrl(url, urlValidation.resolvedIP!)
const response = await fetch(pinnedUrl, {
signal: AbortSignal.timeout(DOWNLOAD_TIMEOUT_MS),
headers: {
Host: urlValidation.originalHostname!,
},
})
if (!response.ok) {
throw new Error(`Failed to fetch URL: ${response.status} ${response.statusText}`)
@@ -369,19 +341,6 @@ async function handleExternalUrl(
logger.info(`Downloaded file from URL: ${url}, size: ${buffer.length} bytes`)
let userFile: UserFile | undefined
const mimeType = response.headers.get('content-type') || getMimeTypeFromExtension(extension)
if (executionContext) {
try {
userFile = await uploadExecutionFile(executionContext, buffer, filename, mimeType, userId)
logger.info(`Stored file in execution storage: ${filename}`, { key: userFile.key })
} catch (uploadError) {
logger.warn(`Failed to store file in execution storage:`, uploadError)
// Continue without userFile - parsing can still work
}
}
if (shouldCheckWorkspace) {
try {
const permission = await getUserEntityPermissions(userId, 'workspace', workspaceId)
@@ -394,6 +353,8 @@ async function handleExternalUrl(
})
} else {
const { uploadWorkspaceFile } = await import('@/lib/uploads/contexts/workspace')
const mimeType =
response.headers.get('content-type') || getMimeTypeFromExtension(extension)
await uploadWorkspaceFile(workspaceId, userId, buffer, filename, mimeType)
logger.info(`Saved URL file to workspace storage: ${filename}`)
}
@@ -402,23 +363,17 @@ async function handleExternalUrl(
}
}
let parseResult: ParseResult
if (extension === 'pdf') {
parseResult = await handlePdfBuffer(buffer, filename, fileType, url)
} else if (extension === 'csv') {
parseResult = await handleCsvBuffer(buffer, filename, fileType, url)
} else if (isSupportedFileType(extension)) {
parseResult = await handleGenericTextBuffer(buffer, filename, extension, fileType, url)
} else {
parseResult = handleGenericBuffer(buffer, filename, extension, fileType)
return await handlePdfBuffer(buffer, filename, fileType, url)
}
if (extension === 'csv') {
return await handleCsvBuffer(buffer, filename, fileType, url)
}
if (isSupportedFileType(extension)) {
return await handleGenericTextBuffer(buffer, filename, extension, fileType, url)
}
// Attach userFile to the result
if (userFile) {
parseResult.userFile = userFile
}
return parseResult
return handleGenericBuffer(buffer, filename, extension, fileType)
} catch (error) {
logger.error(`Error handling external URL ${url}:`, error)
return {
@@ -431,15 +386,12 @@ async function handleExternalUrl(
/**
* Handle file stored in cloud storage
* If executionContext is provided and file is not already from execution storage,
* copies the file to execution storage and returns UserFile
*/
async function handleCloudFile(
filePath: string,
fileType: string,
explicitContext: string | undefined,
userId: string,
executionContext?: ExecutionContext
userId: string
): Promise<ParseResult> {
try {
const cloudKey = extractStorageKey(filePath)
@@ -486,7 +438,6 @@ async function handleCloudFile(
const filename = originalFilename || cloudKey.split('/').pop() || cloudKey
const extension = path.extname(filename).toLowerCase().substring(1)
const mimeType = getMimeTypeFromExtension(extension)
const normalizedFilePath = `/api/files/serve/${encodeURIComponent(cloudKey)}?context=${context}`
let workspaceIdFromKey: string | undefined
@@ -502,39 +453,6 @@ async function handleCloudFile(
const viewerUrl = getViewerUrl(cloudKey, workspaceIdFromKey)
// Store file in execution storage if executionContext is provided
let userFile: UserFile | undefined
if (executionContext) {
// If file is already from execution context, create UserFile reference without re-uploading
if (context === 'execution') {
userFile = {
id: `file_${Date.now()}_${Math.random().toString(36).substring(2, 9)}`,
name: filename,
url: normalizedFilePath,
size: fileBuffer.length,
type: mimeType,
key: cloudKey,
context: 'execution',
}
logger.info(`Created UserFile reference for existing execution file: ${filename}`)
} else {
// Copy from workspace/other storage to execution storage
try {
userFile = await uploadExecutionFile(
executionContext,
fileBuffer,
filename,
mimeType,
userId
)
logger.info(`Copied file to execution storage: ${filename}`, { key: userFile.key })
} catch (uploadError) {
logger.warn(`Failed to copy file to execution storage:`, uploadError)
}
}
}
let parseResult: ParseResult
if (extension === 'pdf') {
parseResult = await handlePdfBuffer(fileBuffer, filename, fileType, normalizedFilePath)
@@ -559,11 +477,6 @@ async function handleCloudFile(
parseResult.viewerUrl = viewerUrl
// Attach userFile to the result
if (userFile) {
parseResult.userFile = userFile
}
return parseResult
} catch (error) {
logger.error(`Error handling cloud file ${filePath}:`, error)
@@ -587,8 +500,7 @@ async function handleCloudFile(
async function handleLocalFile(
filePath: string,
fileType: string,
userId: string,
executionContext?: ExecutionContext
userId: string
): Promise<ParseResult> {
try {
const filename = filePath.split('/').pop() || filePath
@@ -628,32 +540,13 @@ async function handleLocalFile(
const hash = createHash('md5').update(fileBuffer).digest('hex')
const extension = path.extname(filename).toLowerCase().substring(1)
const mimeType = fileType || getMimeTypeFromExtension(extension)
// Store file in execution storage if executionContext is provided
let userFile: UserFile | undefined
if (executionContext) {
try {
userFile = await uploadExecutionFile(
executionContext,
fileBuffer,
filename,
mimeType,
userId
)
logger.info(`Stored local file in execution storage: ${filename}`, { key: userFile.key })
} catch (uploadError) {
logger.warn(`Failed to store local file in execution storage:`, uploadError)
}
}
return {
success: true,
content: result.content,
filePath,
userFile,
metadata: {
fileType: mimeType,
fileType: fileType || getMimeTypeFromExtension(extension),
size: stats.size,
hash,
processingTime: 0,

View File

@@ -0,0 +1,395 @@
import { createLogger } from '@sim/logger'
import type { NextRequest } from 'next/server'
import { NextResponse } from 'next/server'
import { z } from 'zod'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { generateInternalToken } from '@/lib/auth/internal'
import { isDev } from '@/lib/core/config/feature-flags'
import { createPinnedUrl, validateUrlWithDNS } from '@/lib/core/security/input-validation'
import { generateRequestId } from '@/lib/core/utils/request'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { executeTool } from '@/tools'
import { getTool, validateRequiredParametersAfterMerge } from '@/tools/utils'
const logger = createLogger('ProxyAPI')
const proxyPostSchema = z.object({
toolId: z.string().min(1, 'toolId is required'),
params: z.record(z.any()).optional().default({}),
executionContext: z
.object({
workflowId: z.string().optional(),
workspaceId: z.string().optional(),
executionId: z.string().optional(),
userId: z.string().optional(),
})
.optional(),
})
/**
* Creates a minimal set of default headers for proxy requests
* @returns Record of HTTP headers
*/
const getProxyHeaders = (): Record<string, string> => {
return {
'User-Agent':
'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/135.0.0.0 Safari/537.36',
Accept: '*/*',
'Accept-Encoding': 'gzip, deflate, br',
'Cache-Control': 'no-cache',
Connection: 'keep-alive',
}
}
/**
* Formats a response with CORS headers
* @param responseData Response data object
* @param status HTTP status code
* @returns NextResponse with CORS headers
*/
const formatResponse = (responseData: any, status = 200) => {
return NextResponse.json(responseData, {
status,
headers: {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Methods': 'GET, POST, PUT, DELETE, OPTIONS',
'Access-Control-Allow-Headers': 'Content-Type, Authorization',
},
})
}
/**
* Creates an error response with consistent formatting
* @param error Error object or message
* @param status HTTP status code
* @param additionalData Additional data to include in the response
* @returns Formatted error response
*/
const createErrorResponse = (error: any, status = 500, additionalData = {}) => {
const errorMessage = error instanceof Error ? error.message : String(error)
const errorStack = error instanceof Error ? error.stack : undefined
logger.error('Creating error response', {
errorMessage,
status,
stack: isDev ? errorStack : undefined,
})
return formatResponse(
{
success: false,
error: errorMessage,
stack: isDev ? errorStack : undefined,
...additionalData,
},
status
)
}
/**
* GET handler for direct external URL proxying
* This allows for GET requests to external APIs
*/
export async function GET(request: Request) {
const url = new URL(request.url)
const targetUrl = url.searchParams.get('url')
const requestId = generateRequestId()
// Vault download proxy: /api/proxy?vaultDownload=1&bucket=...&object=...&credentialId=...
const vaultDownload = url.searchParams.get('vaultDownload')
if (vaultDownload === '1') {
try {
const bucket = url.searchParams.get('bucket')
const objectParam = url.searchParams.get('object')
const credentialId = url.searchParams.get('credentialId')
if (!bucket || !objectParam || !credentialId) {
return createErrorResponse('Missing bucket, object, or credentialId', 400)
}
// Fetch access token using existing token API
const baseUrl = new URL(getBaseUrl())
const tokenUrl = new URL('/api/auth/oauth/token', baseUrl)
// Build headers: forward session cookies if present; include internal auth for server-side
const tokenHeaders: Record<string, string> = { 'Content-Type': 'application/json' }
const incomingCookie = request.headers.get('cookie')
if (incomingCookie) tokenHeaders.Cookie = incomingCookie
try {
const internalToken = await generateInternalToken()
tokenHeaders.Authorization = `Bearer ${internalToken}`
} catch (_e) {
// best-effort internal auth
}
// Optional workflow context for collaboration auth
const workflowId = url.searchParams.get('workflowId') || undefined
const tokenRes = await fetch(tokenUrl.toString(), {
method: 'POST',
headers: tokenHeaders,
body: JSON.stringify({ credentialId, workflowId }),
})
if (!tokenRes.ok) {
const err = await tokenRes.text()
return createErrorResponse(`Failed to fetch access token: ${err}`, 401)
}
const tokenJson = await tokenRes.json()
const accessToken = tokenJson.accessToken
if (!accessToken) {
return createErrorResponse('No access token available', 401)
}
// Avoid double-encoding: incoming object may already be percent-encoded
const objectDecoded = decodeURIComponent(objectParam)
const gcsUrl = `https://storage.googleapis.com/storage/v1/b/${encodeURIComponent(
bucket
)}/o/${encodeURIComponent(objectDecoded)}?alt=media`
const fileRes = await fetch(gcsUrl, {
headers: { Authorization: `Bearer ${accessToken}` },
})
if (!fileRes.ok) {
const errText = await fileRes.text()
return createErrorResponse(errText || 'Failed to download file', fileRes.status)
}
const headers = new Headers()
fileRes.headers.forEach((v, k) => headers.set(k, v))
return new NextResponse(fileRes.body, { status: 200, headers })
} catch (error: any) {
logger.error(`[${requestId}] Vault download proxy failed`, {
error: error instanceof Error ? error.message : String(error),
})
return createErrorResponse('Vault download failed', 500)
}
}
if (!targetUrl) {
logger.error(`[${requestId}] Missing 'url' parameter`)
return createErrorResponse("Missing 'url' parameter", 400)
}
const urlValidation = await validateUrlWithDNS(targetUrl)
if (!urlValidation.isValid) {
logger.warn(`[${requestId}] Blocked proxy request`, {
url: targetUrl.substring(0, 100),
error: urlValidation.error,
})
return createErrorResponse(urlValidation.error || 'Invalid URL', 403)
}
const method = url.searchParams.get('method') || 'GET'
const bodyParam = url.searchParams.get('body')
let body: string | undefined
if (bodyParam && ['POST', 'PUT', 'PATCH'].includes(method.toUpperCase())) {
try {
body = decodeURIComponent(bodyParam)
} catch (error) {
logger.warn(`[${requestId}] Failed to decode body parameter`, error)
}
}
const customHeaders: Record<string, string> = {}
for (const [key, value] of url.searchParams.entries()) {
if (key.startsWith('header.')) {
const headerName = key.substring(7)
customHeaders[headerName] = value
}
}
if (body && !customHeaders['Content-Type']) {
customHeaders['Content-Type'] = 'application/json'
}
logger.info(`[${requestId}] Proxying ${method} request to: ${targetUrl}`)
try {
const pinnedUrl = createPinnedUrl(targetUrl, urlValidation.resolvedIP!)
const response = await fetch(pinnedUrl, {
method: method,
headers: {
...getProxyHeaders(),
...customHeaders,
Host: urlValidation.originalHostname!,
},
body: body || undefined,
})
const contentType = response.headers.get('content-type') || ''
let data
if (contentType.includes('application/json')) {
data = await response.json()
} else {
data = await response.text()
}
const errorMessage = !response.ok
? data && typeof data === 'object' && data.error
? `${data.error.message || JSON.stringify(data.error)}`
: response.statusText || `HTTP error ${response.status}`
: undefined
if (!response.ok) {
logger.error(`[${requestId}] External API error: ${response.status} ${response.statusText}`)
}
return formatResponse({
success: response.ok,
status: response.status,
statusText: response.statusText,
headers: Object.fromEntries(response.headers.entries()),
data,
error: errorMessage,
})
} catch (error: any) {
logger.error(`[${requestId}] Proxy GET request failed`, {
url: targetUrl,
error: error instanceof Error ? error.message : String(error),
stack: error instanceof Error ? error.stack : undefined,
})
return createErrorResponse(error)
}
}
export async function POST(request: NextRequest) {
const requestId = generateRequestId()
const startTime = new Date()
const startTimeISO = startTime.toISOString()
try {
const authResult = await checkHybridAuth(request, { requireWorkflowId: false })
if (!authResult.success) {
logger.error(`[${requestId}] Authentication failed for proxy:`, authResult.error)
return createErrorResponse('Unauthorized', 401)
}
let requestBody
try {
requestBody = await request.json()
} catch (parseError) {
logger.error(`[${requestId}] Failed to parse request body`, {
error: parseError instanceof Error ? parseError.message : String(parseError),
})
throw new Error('Invalid JSON in request body')
}
const validationResult = proxyPostSchema.safeParse(requestBody)
if (!validationResult.success) {
logger.error(`[${requestId}] Request validation failed`, {
errors: validationResult.error.errors,
})
const errorMessages = validationResult.error.errors
.map((err) => `${err.path.join('.')}: ${err.message}`)
.join(', ')
throw new Error(`Validation failed: ${errorMessages}`)
}
const { toolId, params } = validationResult.data
logger.info(`[${requestId}] Processing tool: ${toolId}`)
const tool = getTool(toolId)
if (!tool) {
logger.error(`[${requestId}] Tool not found: ${toolId}`)
throw new Error(`Tool not found: ${toolId}`)
}
try {
validateRequiredParametersAfterMerge(toolId, tool, params)
} catch (validationError) {
logger.warn(`[${requestId}] Tool validation failed for ${toolId}`, {
error: validationError instanceof Error ? validationError.message : String(validationError),
})
const endTime = new Date()
const endTimeISO = endTime.toISOString()
const duration = endTime.getTime() - startTime.getTime()
return createErrorResponse(validationError, 400, {
startTime: startTimeISO,
endTime: endTimeISO,
duration,
})
}
const hasFileOutputs =
tool.outputs &&
Object.values(tool.outputs).some(
(output) => output.type === 'file' || output.type === 'file[]'
)
const result = await executeTool(
toolId,
params,
true, // skipProxy (we're already in the proxy)
!hasFileOutputs, // skipPostProcess (don't skip if tool has file outputs)
undefined // execution context is not available in proxy context
)
if (!result.success) {
logger.warn(`[${requestId}] Tool execution failed for ${toolId}`, {
error: result.error || 'Unknown error',
})
throw new Error(result.error || 'Tool execution failed')
}
const endTime = new Date()
const endTimeISO = endTime.toISOString()
const duration = endTime.getTime() - startTime.getTime()
const responseWithTimingData = {
...result,
startTime: startTimeISO,
endTime: endTimeISO,
duration,
timing: {
startTime: startTimeISO,
endTime: endTimeISO,
duration,
},
}
logger.info(`[${requestId}] Tool executed successfully: ${toolId} (${duration}ms)`)
return formatResponse(responseWithTimingData)
} catch (error: any) {
logger.error(`[${requestId}] Proxy request failed`, {
error: error instanceof Error ? error.message : String(error),
stack: error instanceof Error ? error.stack : undefined,
name: error instanceof Error ? error.name : undefined,
})
const endTime = new Date()
const endTimeISO = endTime.toISOString()
const duration = endTime.getTime() - startTime.getTime()
return createErrorResponse(error, 500, {
startTime: startTimeISO,
endTime: endTimeISO,
duration,
})
}
}
export async function OPTIONS() {
return new NextResponse(null, {
status: 204,
headers: {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Methods': 'GET, POST, PUT, DELETE, OPTIONS',
'Access-Control-Allow-Headers': 'Content-Type, Authorization',
'Access-Control-Max-Age': '86400',
},
})
}
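A hedged usage sketch for the proxy route above: the GET form proxies an external URL (with optional method, body, and header.* query parameters), while the POST form executes a registered tool by id. The target URL, header, and toolId below are placeholders, not values confirmed by this diff; the response field names come from formatResponse and the timing payload in the handler.

// Hypothetical caller of /api/proxy; URL, header, and toolId are illustrative placeholders
async function callProxy() {
  // GET form: proxy an external request, forwarding custom headers via header.* params
  const qs = new URLSearchParams({
    url: 'https://api.example.com/items',
    method: 'GET',
    'header.Accept': 'application/json',
  })
  const getRes = await fetch(`/api/proxy?${qs.toString()}`)
  const getBody = await getRes.json() // { success, status, statusText, headers, data, error }

  // POST form: execute a registered tool by id through the proxy
  const postRes = await fetch('/api/proxy', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      toolId: 'example_tool', // placeholder; must resolve via getTool()
      params: {},
    }),
  })
  const postBody = await postRes.json() // tool result plus startTime, endTime, duration, timing
  return { getBody, postBody }
}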

View File

@@ -1,193 +0,0 @@
import { db } from '@sim/db'
import { copilotChats, workflow, workspace } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { verifyEffectiveSuperUser } from '@/lib/templates/permissions'
import { parseWorkflowJson } from '@/lib/workflows/operations/import-export'
import {
loadWorkflowFromNormalizedTables,
saveWorkflowToNormalizedTables,
} from '@/lib/workflows/persistence/utils'
import { sanitizeForExport } from '@/lib/workflows/sanitization/json-sanitizer'
const logger = createLogger('SuperUserImportWorkflow')
interface ImportWorkflowRequest {
workflowId: string
targetWorkspaceId: string
}
/**
* POST /api/superuser/import-workflow
*
* Superuser endpoint to import a workflow by ID along with its copilot chats.
* This creates a copy of the workflow in the target workspace with new IDs.
* Only the workflow structure and copilot chats are copied - no deployments,
* webhooks, triggers, or other sensitive data.
*
* Requires both isSuperUser flag AND superUserModeEnabled setting.
*/
export async function POST(request: NextRequest) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { effectiveSuperUser, isSuperUser, superUserModeEnabled } =
await verifyEffectiveSuperUser(session.user.id)
if (!effectiveSuperUser) {
logger.warn('Non-effective-superuser attempted to access import-workflow endpoint', {
userId: session.user.id,
isSuperUser,
superUserModeEnabled,
})
return NextResponse.json({ error: 'Forbidden: Superuser access required' }, { status: 403 })
}
const body: ImportWorkflowRequest = await request.json()
const { workflowId, targetWorkspaceId } = body
if (!workflowId) {
return NextResponse.json({ error: 'workflowId is required' }, { status: 400 })
}
if (!targetWorkspaceId) {
return NextResponse.json({ error: 'targetWorkspaceId is required' }, { status: 400 })
}
// Verify target workspace exists
const [targetWorkspace] = await db
.select({ id: workspace.id, ownerId: workspace.ownerId })
.from(workspace)
.where(eq(workspace.id, targetWorkspaceId))
.limit(1)
if (!targetWorkspace) {
return NextResponse.json({ error: 'Target workspace not found' }, { status: 404 })
}
// Get the source workflow
const [sourceWorkflow] = await db
.select()
.from(workflow)
.where(eq(workflow.id, workflowId))
.limit(1)
if (!sourceWorkflow) {
return NextResponse.json({ error: 'Source workflow not found' }, { status: 404 })
}
// Load the workflow state from normalized tables
const normalizedData = await loadWorkflowFromNormalizedTables(workflowId)
if (!normalizedData) {
return NextResponse.json(
{ error: 'Workflow has no normalized data - cannot import' },
{ status: 400 }
)
}
// Use existing export logic to create export format
const workflowState = {
blocks: normalizedData.blocks,
edges: normalizedData.edges,
loops: normalizedData.loops,
parallels: normalizedData.parallels,
metadata: {
name: sourceWorkflow.name,
description: sourceWorkflow.description ?? undefined,
color: sourceWorkflow.color,
},
}
const exportData = sanitizeForExport(workflowState)
// Use existing import logic (parseWorkflowJson regenerates IDs automatically)
const { data: importedData, errors } = parseWorkflowJson(JSON.stringify(exportData))
if (!importedData || errors.length > 0) {
return NextResponse.json(
{ error: `Failed to parse workflow: ${errors.join(', ')}` },
{ status: 400 }
)
}
// Create new workflow record
const newWorkflowId = crypto.randomUUID()
const now = new Date()
await db.insert(workflow).values({
id: newWorkflowId,
userId: session.user.id,
workspaceId: targetWorkspaceId,
folderId: null, // Don't copy folder association
name: `[Debug Import] ${sourceWorkflow.name}`,
description: sourceWorkflow.description,
color: sourceWorkflow.color,
lastSynced: now,
createdAt: now,
updatedAt: now,
isDeployed: false, // Never copy deployment status
runCount: 0,
variables: sourceWorkflow.variables || {},
})
// Save using existing persistence logic
const saveResult = await saveWorkflowToNormalizedTables(newWorkflowId, importedData)
if (!saveResult.success) {
// Clean up the workflow record if save failed
await db.delete(workflow).where(eq(workflow.id, newWorkflowId))
return NextResponse.json(
{ error: `Failed to save workflow state: ${saveResult.error}` },
{ status: 500 }
)
}
// Copy copilot chats associated with the source workflow
const sourceCopilotChats = await db
.select()
.from(copilotChats)
.where(eq(copilotChats.workflowId, workflowId))
let copilotChatsImported = 0
for (const chat of sourceCopilotChats) {
await db.insert(copilotChats).values({
userId: session.user.id,
workflowId: newWorkflowId,
title: chat.title ? `[Import] ${chat.title}` : null,
messages: chat.messages,
model: chat.model,
conversationId: null, // Don't copy conversation ID
previewYaml: chat.previewYaml,
planArtifact: chat.planArtifact,
config: chat.config,
createdAt: new Date(),
updatedAt: new Date(),
})
copilotChatsImported++
}
logger.info('Superuser imported workflow', {
userId: session.user.id,
sourceWorkflowId: workflowId,
newWorkflowId,
targetWorkspaceId,
copilotChatsImported,
})
return NextResponse.json({
success: true,
newWorkflowId,
copilotChatsImported,
})
} catch (error) {
logger.error('Error importing workflow', error)
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
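For reference, a minimal sketch of how the /api/superuser/import-workflow handler in the hunk above is invoked; the ids are placeholders, the caller must hold a session that passes the superuser check, and the response type mirrors the route's success payload.

async function importWorkflowAsSuperuser(workflowId: string, targetWorkspaceId: string) {
  // Requires an authenticated session for a user that passes verifyEffectiveSuperUser
  const res = await fetch('/api/superuser/import-workflow', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ workflowId, targetWorkspaceId }),
  })
  if (!res.ok) {
    const { error } = await res.json()
    throw new Error(error || 'Import failed')
  }
  // Success payload: { success: true, newWorkflowId, copilotChatsImported }
  return res.json() as Promise<{
    success: boolean
    newWorkflowId: string
    copilotChatsImported: number
  }>
}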

View File

@@ -5,7 +5,7 @@ import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { generateRequestId } from '@/lib/core/utils/request'
import { verifyEffectiveSuperUser } from '@/lib/templates/permissions'
import { verifySuperUser } from '@/lib/templates/permissions'
const logger = createLogger('TemplateApprovalAPI')
@@ -25,8 +25,8 @@ export async function POST(request: NextRequest, { params }: { params: Promise<{
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { effectiveSuperUser } = await verifyEffectiveSuperUser(session.user.id)
if (!effectiveSuperUser) {
const { isSuperUser } = await verifySuperUser(session.user.id)
if (!isSuperUser) {
logger.warn(`[${requestId}] Non-super user attempted to approve template: ${id}`)
return NextResponse.json({ error: 'Only super users can approve templates' }, { status: 403 })
}
@@ -71,8 +71,8 @@ export async function DELETE(
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { effectiveSuperUser } = await verifyEffectiveSuperUser(session.user.id)
if (!effectiveSuperUser) {
const { isSuperUser } = await verifySuperUser(session.user.id)
if (!isSuperUser) {
logger.warn(`[${requestId}] Non-super user attempted to reject template: ${id}`)
return NextResponse.json({ error: 'Only super users can reject templates' }, { status: 403 })
}

View File

@@ -5,7 +5,7 @@ import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { generateRequestId } from '@/lib/core/utils/request'
import { verifyEffectiveSuperUser } from '@/lib/templates/permissions'
import { verifySuperUser } from '@/lib/templates/permissions'
const logger = createLogger('TemplateRejectionAPI')
@@ -25,8 +25,8 @@ export async function POST(request: NextRequest, { params }: { params: Promise<{
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { effectiveSuperUser } = await verifyEffectiveSuperUser(session.user.id)
if (!effectiveSuperUser) {
const { isSuperUser } = await verifySuperUser(session.user.id)
if (!isSuperUser) {
logger.warn(`[${requestId}] Non-super user attempted to reject template: ${id}`)
return NextResponse.json({ error: 'Only super users can reject templates' }, { status: 403 })
}

View File

@@ -3,6 +3,7 @@ import {
templateCreators,
templateStars,
templates,
user,
workflow,
workflowDeploymentVersion,
} from '@sim/db/schema'
@@ -13,7 +14,6 @@ import { v4 as uuidv4 } from 'uuid'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { generateRequestId } from '@/lib/core/utils/request'
import { verifyEffectiveSuperUser } from '@/lib/templates/permissions'
import {
extractRequiredCredentials,
sanitizeCredentials,
@@ -70,8 +70,8 @@ export async function GET(request: NextRequest) {
logger.debug(`[${requestId}] Fetching templates with params:`, params)
// Check if user is a super user
const { effectiveSuperUser } = await verifyEffectiveSuperUser(session.user.id)
const isSuperUser = effectiveSuperUser
const currentUser = await db.select().from(user).where(eq(user.id, session.user.id)).limit(1)
const isSuperUser = currentUser[0]?.isSuperUser || false
// Build query conditions
const conditions = []

View File

@@ -550,8 +550,6 @@ export interface AdminUserBilling {
totalWebhookTriggers: number
totalScheduledExecutions: number
totalChatExecutions: number
totalMcpExecutions: number
totalA2aExecutions: number
totalTokensUsed: number
totalCost: string
currentUsageLimit: string | null

View File

@@ -97,8 +97,6 @@ export const GET = withAdminAuthParams<RouteParams>(async (_, context) => {
totalWebhookTriggers: stats?.totalWebhookTriggers ?? 0,
totalScheduledExecutions: stats?.totalScheduledExecutions ?? 0,
totalChatExecutions: stats?.totalChatExecutions ?? 0,
totalMcpExecutions: stats?.totalMcpExecutions ?? 0,
totalA2aExecutions: stats?.totalA2aExecutions ?? 0,
totalTokensUsed: stats?.totalTokensUsed ?? 0,
totalCost: stats?.totalCost ?? '0',
currentUsageLimit: stats?.currentUsageLimit ?? null,

View File

@@ -19,7 +19,7 @@ export interface RateLimitResult {
export async function checkRateLimit(
request: NextRequest,
endpoint: 'logs' | 'logs-detail' | 'workflows' | 'workflow-detail' = 'logs'
endpoint: 'logs' | 'logs-detail' = 'logs'
): Promise<RateLimitResult> {
try {
const auth = await authenticateV1Request(request)

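The rate limiter above is consumed by the v1 route handlers elsewhere in this diff roughly as follows. This is a sketch of that pattern, using the 'logs' endpoint bucket that exists in both versions of the union; the import path and result fields are taken from the surrounding hunks.

import type { NextRequest } from 'next/server'
import { NextResponse } from 'next/server'
import { checkRateLimit, createRateLimitResponse } from '@/app/api/v1/middleware'

export async function GET(request: NextRequest) {
  // Authenticate and rate limit the v1 request before doing any work
  const rateLimit = await checkRateLimit(request, 'logs')
  if (!rateLimit.allowed) {
    return createRateLimitResponse(rateLimit)
  }
  const userId = rateLimit.userId! // populated when the request is allowed
  return NextResponse.json({ userId })
}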
View File

@@ -1,102 +0,0 @@
import { db } from '@sim/db'
import { permissions, workflow, workflowBlocks } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { extractInputFieldsFromBlocks } from '@/lib/workflows/input-format'
import { createApiResponse, getUserLimits } from '@/app/api/v1/logs/meta'
import { checkRateLimit, createRateLimitResponse } from '@/app/api/v1/middleware'
const logger = createLogger('V1WorkflowDetailsAPI')
export const revalidate = 0
export async function GET(request: NextRequest, { params }: { params: Promise<{ id: string }> }) {
const requestId = crypto.randomUUID().slice(0, 8)
try {
const rateLimit = await checkRateLimit(request, 'workflow-detail')
if (!rateLimit.allowed) {
return createRateLimitResponse(rateLimit)
}
const userId = rateLimit.userId!
const { id } = await params
logger.info(`[${requestId}] Fetching workflow details for ${id}`, { userId })
const rows = await db
.select({
id: workflow.id,
name: workflow.name,
description: workflow.description,
color: workflow.color,
folderId: workflow.folderId,
workspaceId: workflow.workspaceId,
isDeployed: workflow.isDeployed,
deployedAt: workflow.deployedAt,
runCount: workflow.runCount,
lastRunAt: workflow.lastRunAt,
variables: workflow.variables,
createdAt: workflow.createdAt,
updatedAt: workflow.updatedAt,
})
.from(workflow)
.innerJoin(
permissions,
and(
eq(permissions.entityType, 'workspace'),
eq(permissions.entityId, workflow.workspaceId),
eq(permissions.userId, userId)
)
)
.where(eq(workflow.id, id))
.limit(1)
const workflowData = rows[0]
if (!workflowData) {
return NextResponse.json({ error: 'Workflow not found' }, { status: 404 })
}
const blockRows = await db
.select({
id: workflowBlocks.id,
type: workflowBlocks.type,
subBlocks: workflowBlocks.subBlocks,
})
.from(workflowBlocks)
.where(eq(workflowBlocks.workflowId, id))
const blocksRecord = Object.fromEntries(
blockRows.map((block) => [block.id, { type: block.type, subBlocks: block.subBlocks }])
)
const inputs = extractInputFieldsFromBlocks(blocksRecord)
const response = {
id: workflowData.id,
name: workflowData.name,
description: workflowData.description,
color: workflowData.color,
folderId: workflowData.folderId,
workspaceId: workflowData.workspaceId,
isDeployed: workflowData.isDeployed,
deployedAt: workflowData.deployedAt?.toISOString() || null,
runCount: workflowData.runCount,
lastRunAt: workflowData.lastRunAt?.toISOString() || null,
variables: workflowData.variables || {},
inputs,
createdAt: workflowData.createdAt.toISOString(),
updatedAt: workflowData.updatedAt.toISOString(),
}
const limits = await getUserLimits(userId)
const apiResponse = createApiResponse({ data: response }, limits, rateLimit)
return NextResponse.json(apiResponse.body, { headers: apiResponse.headers })
} catch (error: unknown) {
const message = error instanceof Error ? error.message : 'Unknown error'
logger.error(`[${requestId}] Workflow details fetch error`, { error: message })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}

View File

@@ -1,184 +0,0 @@
import { db } from '@sim/db'
import { permissions, workflow } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, asc, eq, gt, or } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { createApiResponse, getUserLimits } from '@/app/api/v1/logs/meta'
import { checkRateLimit, createRateLimitResponse } from '@/app/api/v1/middleware'
const logger = createLogger('V1WorkflowsAPI')
export const dynamic = 'force-dynamic'
export const revalidate = 0
const QueryParamsSchema = z.object({
workspaceId: z.string(),
folderId: z.string().optional(),
deployedOnly: z.coerce.boolean().optional().default(false),
limit: z.coerce.number().min(1).max(100).optional().default(50),
cursor: z.string().optional(),
})
interface CursorData {
sortOrder: number
createdAt: string
id: string
}
function encodeCursor(data: CursorData): string {
return Buffer.from(JSON.stringify(data)).toString('base64')
}
function decodeCursor(cursor: string): CursorData | null {
try {
return JSON.parse(Buffer.from(cursor, 'base64').toString())
} catch {
return null
}
}
export async function GET(request: NextRequest) {
const requestId = crypto.randomUUID().slice(0, 8)
try {
const rateLimit = await checkRateLimit(request, 'workflows')
if (!rateLimit.allowed) {
return createRateLimitResponse(rateLimit)
}
const userId = rateLimit.userId!
const { searchParams } = new URL(request.url)
const rawParams = Object.fromEntries(searchParams.entries())
const validationResult = QueryParamsSchema.safeParse(rawParams)
if (!validationResult.success) {
return NextResponse.json(
{ error: 'Invalid parameters', details: validationResult.error.errors },
{ status: 400 }
)
}
const params = validationResult.data
logger.info(`[${requestId}] Fetching workflows for workspace ${params.workspaceId}`, {
userId,
filters: {
folderId: params.folderId,
deployedOnly: params.deployedOnly,
},
})
const conditions = [
eq(workflow.workspaceId, params.workspaceId),
eq(permissions.entityType, 'workspace'),
eq(permissions.entityId, params.workspaceId),
eq(permissions.userId, userId),
]
if (params.folderId) {
conditions.push(eq(workflow.folderId, params.folderId))
}
if (params.deployedOnly) {
conditions.push(eq(workflow.isDeployed, true))
}
if (params.cursor) {
const cursorData = decodeCursor(params.cursor)
if (cursorData) {
const cursorCondition = or(
gt(workflow.sortOrder, cursorData.sortOrder),
and(
eq(workflow.sortOrder, cursorData.sortOrder),
gt(workflow.createdAt, new Date(cursorData.createdAt))
),
and(
eq(workflow.sortOrder, cursorData.sortOrder),
eq(workflow.createdAt, new Date(cursorData.createdAt)),
gt(workflow.id, cursorData.id)
)
)
if (cursorCondition) {
conditions.push(cursorCondition)
}
}
}
const orderByClause = [asc(workflow.sortOrder), asc(workflow.createdAt), asc(workflow.id)]
const rows = await db
.select({
id: workflow.id,
name: workflow.name,
description: workflow.description,
color: workflow.color,
folderId: workflow.folderId,
workspaceId: workflow.workspaceId,
isDeployed: workflow.isDeployed,
deployedAt: workflow.deployedAt,
runCount: workflow.runCount,
lastRunAt: workflow.lastRunAt,
sortOrder: workflow.sortOrder,
createdAt: workflow.createdAt,
updatedAt: workflow.updatedAt,
})
.from(workflow)
.innerJoin(
permissions,
and(
eq(permissions.entityType, 'workspace'),
eq(permissions.entityId, params.workspaceId),
eq(permissions.userId, userId)
)
)
.where(and(...conditions))
.orderBy(...orderByClause)
.limit(params.limit + 1)
const hasMore = rows.length > params.limit
const data = rows.slice(0, params.limit)
let nextCursor: string | undefined
if (hasMore && data.length > 0) {
const lastWorkflow = data[data.length - 1]
nextCursor = encodeCursor({
sortOrder: lastWorkflow.sortOrder,
createdAt: lastWorkflow.createdAt.toISOString(),
id: lastWorkflow.id,
})
}
const formattedWorkflows = data.map((w) => ({
id: w.id,
name: w.name,
description: w.description,
color: w.color,
folderId: w.folderId,
workspaceId: w.workspaceId,
isDeployed: w.isDeployed,
deployedAt: w.deployedAt?.toISOString() || null,
runCount: w.runCount,
lastRunAt: w.lastRunAt?.toISOString() || null,
createdAt: w.createdAt.toISOString(),
updatedAt: w.updatedAt.toISOString(),
}))
const limits = await getUserLimits(userId)
const response = createApiResponse(
{
data: formattedWorkflows,
nextCursor,
},
limits,
rateLimit
)
return NextResponse.json(response.body, { headers: response.headers })
} catch (error: unknown) {
const message = error instanceof Error ? error.message : 'Unknown error'
logger.error(`[${requestId}] Workflows fetch error`, { error: message })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
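The listing route in the hunk above pages with an opaque keyset cursor (sortOrder, createdAt, id) base64-encoded into nextCursor. A hedged client-side sketch of following that cursor until the listing is exhausted; the /api/v1/workflows path and the data/nextCursor envelope fields are assumptions inferred from the handler and its createApiResponse call, and authentication (handled by authenticateV1Request) is omitted.

interface WorkflowSummary {
  id: string
  name: string
  isDeployed: boolean
}

// Assumed route path and envelope shape; see the caveats in the note above
async function listAllWorkflows(workspaceId: string): Promise<WorkflowSummary[]> {
  const all: WorkflowSummary[] = []
  let cursor: string | undefined
  do {
    const qs = new URLSearchParams({ workspaceId, limit: '100' })
    if (cursor) qs.set('cursor', cursor)
    const res = await fetch(`/api/v1/workflows?${qs.toString()}`)
    if (!res.ok) throw new Error(`Workflow listing failed: ${res.status}`)
    const body = await res.json()
    all.push(...body.data)
    cursor = body.nextCursor // undefined once the last page has been returned
  } while (cursor)
  return all
}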

View File

@@ -12,10 +12,6 @@ import { markExecutionCancelled } from '@/lib/execution/cancellation'
import { processInputFileFields } from '@/lib/execution/files'
import { preprocessExecution } from '@/lib/execution/preprocessing'
import { LoggingSession } from '@/lib/logs/execution/logging-session'
import {
cleanupExecutionBase64Cache,
hydrateUserFilesWithBase64,
} from '@/lib/uploads/utils/user-file-base64.server'
import { executeWorkflowCore } from '@/lib/workflows/executor/execution-core'
import { type ExecutionEvent, encodeSSEEvent } from '@/lib/workflows/executor/execution-events'
import { PauseResumeManager } from '@/lib/workflows/executor/human-in-the-loop-manager'
@@ -29,7 +25,7 @@ import type { WorkflowExecutionPayload } from '@/background/workflow-execution'
import { normalizeName } from '@/executor/constants'
import { ExecutionSnapshot } from '@/executor/execution/snapshot'
import type { ExecutionMetadata, IterationContext } from '@/executor/execution/types'
import type { NormalizedBlockOutput, StreamingExecution } from '@/executor/types'
import type { StreamingExecution } from '@/executor/types'
import { Serializer } from '@/serializer'
import { CORE_TRIGGER_TYPES, type CoreTriggerType } from '@/stores/logs/filters/types'
@@ -42,8 +38,6 @@ const ExecuteWorkflowSchema = z.object({
useDraftState: z.boolean().optional(),
input: z.any().optional(),
isClientSession: z.boolean().optional(),
includeFileBase64: z.boolean().optional().default(true),
base64MaxBytes: z.number().int().positive().optional(),
workflowStateOverride: z
.object({
blocks: z.record(z.any()),
@@ -220,8 +214,6 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
useDraftState,
input: validatedInput,
isClientSession = false,
includeFileBase64,
base64MaxBytes,
workflowStateOverride,
} = validation.data
@@ -235,8 +227,6 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
triggerType,
stream,
useDraftState,
includeFileBase64,
base64MaxBytes,
workflowStateOverride,
workflowId: _workflowId, // Also exclude workflowId used for internal JWT auth
...rest
@@ -437,31 +427,16 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
snapshot,
callbacks: {},
loggingSession,
includeFileBase64,
base64MaxBytes,
})
const outputWithBase64 = includeFileBase64
? ((await hydrateUserFilesWithBase64(result.output, {
requestId,
executionId,
maxBytes: base64MaxBytes,
})) as NormalizedBlockOutput)
: result.output
const resultWithBase64 = { ...result, output: outputWithBase64 }
// Cleanup base64 cache for this execution
await cleanupExecutionBase64Cache(executionId)
const hasResponseBlock = workflowHasResponseBlock(resultWithBase64)
const hasResponseBlock = workflowHasResponseBlock(result)
if (hasResponseBlock) {
return createHttpResponseFromBlock(resultWithBase64)
return createHttpResponseFromBlock(result)
}
const filteredResult = {
success: result.success,
output: outputWithBase64,
output: result.output,
error: result.error,
metadata: result.metadata
? {
@@ -523,8 +498,6 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
selectedOutputs: resolvedSelectedOutputs,
isSecureMode: false,
workflowTriggerType: triggerType === 'chat' ? 'chat' : 'api',
includeFileBase64,
base64MaxBytes,
},
executionId,
})
@@ -725,8 +698,6 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
},
loggingSession,
abortSignal: abortController.signal,
includeFileBase64,
base64MaxBytes,
})
if (result.status === 'paused') {
@@ -779,21 +750,12 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
workflowId,
data: {
success: result.success,
output: includeFileBase64
? await hydrateUserFilesWithBase64(result.output, {
requestId,
executionId,
maxBytes: base64MaxBytes,
})
: result.output,
output: result.output,
duration: result.metadata?.duration || 0,
startTime: result.metadata?.startTime || startTime.toISOString(),
endTime: result.metadata?.endTime || new Date().toISOString(),
},
})
// Cleanup base64 cache for this execution
await cleanupExecutionBase64Cache(executionId)
} catch (error: any) {
const errorMessage = error.message || 'Unknown error'
logger.error(`[${requestId}] SSE execution failed: ${errorMessage}`)

View File

@@ -33,7 +33,6 @@ const BlockDataSchema = z.object({
doWhileCondition: z.string().optional(),
parallelType: z.enum(['collection', 'count']).optional(),
type: z.string().optional(),
canonicalModes: z.record(z.enum(['basic', 'advanced'])).optional(),
})
const SubBlockStateSchema = z.object({

View File

@@ -2,7 +2,7 @@
import { useRef, useState } from 'react'
import { createLogger } from '@sim/logger'
import { isUserFileWithMetadata } from '@/lib/core/utils/user-file'
import { isUserFile } from '@/lib/core/utils/display-filters'
import type { ChatFile, ChatMessage } from '@/app/chat/components/message/message'
import { CHAT_ERROR_MESSAGES } from '@/app/chat/constants'
@@ -17,7 +17,7 @@ function extractFilesFromData(
return files
}
if (isUserFileWithMetadata(data)) {
if (isUserFile(data)) {
if (!seenIds.has(data.id)) {
seenIds.add(data.id)
files.push({
@@ -232,7 +232,7 @@ export function useChatStreaming() {
return null
}
if (isUserFileWithMetadata(value)) {
if (isUserFile(value)) {
return null
}
@@ -285,7 +285,7 @@ export function useChatStreaming() {
const value = getOutputValue(blockOutputs, config.path)
if (isUserFileWithMetadata(value)) {
if (isUserFile(value)) {
extractedFiles.push({
id: value.id,
name: value.name,

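A hedged sketch of the dedupe-by-id extraction pattern the hook above applies when walking block outputs; the UserFileLike shape and the guard used here are simplified assumptions standing in for the real isUserFile and ChatFile types.

// Simplified stand-ins (assumptions) for the real UserFile/ChatFile shapes and isUserFile guard
interface UserFileLike {
  id: string
  name: string
}

function isUserFileLike(value: unknown): value is UserFileLike {
  return (
    typeof value === 'object' &&
    value !== null &&
    typeof (value as UserFileLike).id === 'string' &&
    typeof (value as UserFileLike).name === 'string'
  )
}

// Walk arbitrary output data and collect each file once, keyed by id
function collectFiles(data: unknown, seenIds: Set<string>, files: UserFileLike[]): UserFileLike[] {
  if (isUserFileLike(data)) {
    if (!seenIds.has(data.id)) {
      seenIds.add(data.id)
      files.push({ id: data.id, name: data.name })
    }
    return files
  }
  if (Array.isArray(data)) {
    for (const item of data) collectFiles(item, seenIds, files)
  } else if (typeof data === 'object' && data !== null) {
    for (const value of Object.values(data)) collectFiles(value, seenIds, files)
  }
  return files
}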
View File

@@ -1,13 +1,10 @@
'use client'
import { Suspense, useEffect, useState } from 'react'
import { Loader2 } from 'lucide-react'
import { CheckCircle, Heart, Info, Loader2, XCircle } from 'lucide-react'
import { useSearchParams } from 'next/navigation'
import { inter } from '@/app/_styles/fonts/inter/inter'
import { soehne } from '@/app/_styles/fonts/soehne/soehne'
import { BrandedButton } from '@/app/(auth)/components/branded-button'
import { SupportFooter } from '@/app/(auth)/components/support-footer'
import { InviteLayout } from '@/app/invite/components'
import { Button, Card, CardContent, CardDescription, CardHeader, CardTitle } from '@/components/ui'
import { useBrandConfig } from '@/lib/branding/branding'
interface UnsubscribeData {
success: boolean
@@ -30,6 +27,7 @@ function UnsubscribeContent() {
const [error, setError] = useState<string | null>(null)
const [processing, setProcessing] = useState(false)
const [unsubscribed, setUnsubscribed] = useState(false)
const brand = useBrandConfig()
const email = searchParams.get('email')
const token = searchParams.get('token')
@@ -111,7 +109,7 @@ function UnsubscribeContent() {
} else {
setError(result.error || 'Failed to unsubscribe')
}
} catch {
} catch (error) {
setError('Failed to process unsubscribe request')
} finally {
setProcessing(false)
@@ -120,171 +118,272 @@ function UnsubscribeContent() {
if (loading) {
return (
<InviteLayout>
<div className='space-y-1 text-center'>
<h1 className={`${soehne.className} font-medium text-[32px] text-black tracking-tight`}>
Loading
</h1>
<p className={`${inter.className} font-[380] text-[16px] text-muted-foreground`}>
Validating your unsubscribe link...
</p>
</div>
<div className={`${inter.className} mt-8 flex w-full items-center justify-center py-8`}>
<Loader2 className='h-8 w-8 animate-spin text-muted-foreground' />
</div>
<SupportFooter position='absolute' />
</InviteLayout>
<div className='before:-z-50 relative flex min-h-screen items-center justify-center before:pointer-events-none before:fixed before:inset-0 before:bg-white'>
<Card className='w-full max-w-md border shadow-sm'>
<CardContent className='flex items-center justify-center p-8'>
<Loader2 className='h-8 w-8 animate-spin text-muted-foreground' />
</CardContent>
</Card>
</div>
)
}
if (error) {
return (
<InviteLayout>
<div className='space-y-1 text-center'>
<h1 className={`${soehne.className} font-medium text-[32px] text-black tracking-tight`}>
Invalid Unsubscribe Link
</h1>
<p className={`${inter.className} font-[380] text-[16px] text-muted-foreground`}>
{error}
</p>
</div>
<div className='before:-z-50 relative flex min-h-screen items-center justify-center p-4 before:pointer-events-none before:fixed before:inset-0 before:bg-white'>
<Card className='w-full max-w-md border shadow-sm'>
<CardHeader className='text-center'>
<XCircle className='mx-auto mb-2 h-12 w-12 text-red-500' />
<CardTitle className='text-foreground'>Invalid Unsubscribe Link</CardTitle>
<CardDescription className='text-muted-foreground'>
This unsubscribe link is invalid or has expired
</CardDescription>
</CardHeader>
<CardContent className='space-y-4'>
<div className='rounded-lg border bg-red-50 p-4'>
<p className='text-red-800 text-sm'>
<strong>Error:</strong> {error}
</p>
</div>
<div className={`${inter.className} mt-8 w-full max-w-[410px] space-y-3`}>
<BrandedButton onClick={() => window.history.back()}>Go Back</BrandedButton>
</div>
<div className='space-y-3'>
<p className='text-muted-foreground text-sm'>This could happen if:</p>
<ul className='ml-4 list-inside list-disc space-y-1 text-muted-foreground text-sm'>
<li>The link is missing required parameters</li>
<li>The link has expired or been used already</li>
<li>The link was copied incorrectly</li>
</ul>
</div>
<SupportFooter position='absolute' />
</InviteLayout>
<div className='mt-6 flex flex-col gap-3'>
<Button
onClick={() =>
window.open(
`mailto:${brand.supportEmail}?subject=Unsubscribe%20Help&body=Hi%2C%20I%20need%20help%20unsubscribing%20from%20emails.%20My%20unsubscribe%20link%20is%20not%20working.`,
'_blank'
)
}
className='w-full bg-[var(--brand-primary-hex)] font-medium text-white shadow-sm transition-colors duration-200 hover:bg-[var(--brand-primary-hover-hex)]'
>
Contact Support
</Button>
<Button onClick={() => window.history.back()} variant='outline' className='w-full'>
Go Back
</Button>
</div>
<div className='mt-4 text-center'>
<p className='text-muted-foreground text-xs'>
Need immediate help? Email us at{' '}
<a
href={`mailto:${brand.supportEmail}`}
className='text-muted-foreground hover:underline'
>
{brand.supportEmail}
</a>
</p>
</div>
</CardContent>
</Card>
</div>
)
}
if (data?.isTransactional) {
return (
<InviteLayout>
<div className='space-y-1 text-center'>
<h1 className={`${soehne.className} font-medium text-[32px] text-black tracking-tight`}>
Important Account Emails
</h1>
<p className={`${inter.className} font-[380] text-[16px] text-muted-foreground`}>
Transactional emails like password resets, account confirmations, and security alerts
cannot be unsubscribed from as they contain essential information for your account.
</p>
</div>
<div className='before:-z-50 relative flex min-h-screen items-center justify-center p-4 before:pointer-events-none before:fixed before:inset-0 before:bg-white'>
<Card className='w-full max-w-md border shadow-sm'>
<CardHeader className='text-center'>
<Info className='mx-auto mb-2 h-12 w-12 text-blue-500' />
<CardTitle className='text-foreground'>Important Account Emails</CardTitle>
<CardDescription className='text-muted-foreground'>
This email contains important information about your account
</CardDescription>
</CardHeader>
<CardContent className='space-y-4'>
<div className='rounded-lg border bg-blue-50 p-4'>
<p className='text-blue-800 text-sm'>
<strong>Transactional emails</strong> like password resets, account confirmations,
and security alerts cannot be unsubscribed from as they contain essential
information for your account security and functionality.
</p>
</div>
<div className={`${inter.className} mt-8 w-full max-w-[410px] space-y-3`}>
<BrandedButton onClick={() => window.close()}>Close</BrandedButton>
</div>
<div className='space-y-3'>
<p className='text-foreground text-sm'>
If you no longer wish to receive these emails, you can:
</p>
<ul className='ml-4 list-inside list-disc space-y-1 text-muted-foreground text-sm'>
<li>Close your account entirely</li>
<li>Contact our support team for assistance</li>
</ul>
</div>
<SupportFooter position='absolute' />
</InviteLayout>
<div className='mt-6 flex flex-col gap-3'>
<Button
onClick={() =>
window.open(
`mailto:${brand.supportEmail}?subject=Account%20Help&body=Hi%2C%20I%20need%20help%20with%20my%20account%20emails.`,
'_blank'
)
}
className='w-full bg-blue-600 text-white hover:bg-blue-700'
>
Contact Support
</Button>
<Button onClick={() => window.close()} variant='outline' className='w-full'>
Close
</Button>
</div>
</CardContent>
</Card>
</div>
)
}
if (unsubscribed) {
return (
<InviteLayout>
<div className='space-y-1 text-center'>
<h1 className={`${soehne.className} font-medium text-[32px] text-black tracking-tight`}>
Successfully Unsubscribed
</h1>
<p className={`${inter.className} font-[380] text-[16px] text-muted-foreground`}>
You have been unsubscribed from our emails. You will stop receiving emails within 48
hours.
</p>
</div>
<div className={`${inter.className} mt-8 w-full max-w-[410px] space-y-3`}>
<BrandedButton onClick={() => window.close()}>Close</BrandedButton>
</div>
<SupportFooter position='absolute' />
</InviteLayout>
<div className='before:-z-50 relative flex min-h-screen items-center justify-center before:pointer-events-none before:fixed before:inset-0 before:bg-white'>
<Card className='w-full max-w-md border shadow-sm'>
<CardHeader className='text-center'>
<CheckCircle className='mx-auto mb-2 h-12 w-12 text-green-500' />
<CardTitle className='text-foreground'>Successfully Unsubscribed</CardTitle>
<CardDescription className='text-muted-foreground'>
You have been unsubscribed from our emails. You will stop receiving emails within 48
hours.
</CardDescription>
</CardHeader>
<CardContent className='text-center'>
<p className='text-muted-foreground text-sm'>
If you change your mind, you can always update your email preferences in your account
settings or contact us at{' '}
<a
href={`mailto:${brand.supportEmail}`}
className='text-muted-foreground hover:underline'
>
{brand.supportEmail}
</a>
</p>
</CardContent>
</Card>
</div>
)
}
const isAlreadyUnsubscribedFromAll = data?.currentPreferences.unsubscribeAll
return (
<InviteLayout>
<div className='space-y-1 text-center'>
<h1 className={`${soehne.className} font-medium text-[32px] text-black tracking-tight`}>
Email Preferences
</h1>
<p className={`${inter.className} font-[380] text-[16px] text-muted-foreground`}>
Choose which emails you'd like to stop receiving.
</p>
<p className={`${inter.className} mt-2 font-[380] text-[14px] text-muted-foreground`}>
{data?.email}
</p>
</div>
<div className='before:-z-50 relative flex min-h-screen items-center justify-center p-4 before:pointer-events-none before:fixed before:inset-0 before:bg-white'>
<Card className='w-full max-w-md border shadow-sm'>
<CardHeader className='text-center'>
<Heart className='mx-auto mb-2 h-12 w-12 text-red-500' />
<CardTitle className='text-foreground'>We&apos;re sorry to see you go!</CardTitle>
<CardDescription className='text-muted-foreground'>
We understand email preferences are personal. Choose which emails you&apos;d like to
stop receiving from Sim.
</CardDescription>
<div className='mt-2 rounded-lg border bg-muted/50 p-3'>
<p className='text-muted-foreground text-xs'>
Email: <span className='font-medium text-foreground'>{data?.email}</span>
</p>
</div>
</CardHeader>
<CardContent className='space-y-4'>
<div className='space-y-3'>
<Button
onClick={() => handleUnsubscribe('all')}
disabled={processing || data?.currentPreferences.unsubscribeAll}
variant='destructive'
className='w-full'
>
{data?.currentPreferences.unsubscribeAll ? (
<CheckCircle className='mr-2 h-4 w-4' />
) : null}
{processing
? 'Unsubscribing...'
: data?.currentPreferences.unsubscribeAll
? 'Unsubscribed from All Emails'
: 'Unsubscribe from All Marketing Emails'}
</Button>
<div className={`${inter.className} mt-8 w-full max-w-[410px] space-y-3`}>
<BrandedButton
onClick={() => handleUnsubscribe('all')}
disabled={processing || isAlreadyUnsubscribedFromAll}
loading={processing}
loadingText='Unsubscribing'
>
{isAlreadyUnsubscribedFromAll
? 'Unsubscribed from All Emails'
: 'Unsubscribe from All Marketing Emails'}
</BrandedButton>
<div className='text-center text-muted-foreground text-sm'>
or choose specific types:
</div>
<div className='py-2 text-center'>
<span className={`${inter.className} font-[380] text-[14px] text-muted-foreground`}>
or choose specific types
</span>
</div>
<Button
onClick={() => handleUnsubscribe('marketing')}
disabled={
processing ||
data?.currentPreferences.unsubscribeAll ||
data?.currentPreferences.unsubscribeMarketing
}
variant='outline'
className='w-full'
>
{data?.currentPreferences.unsubscribeMarketing ? (
<CheckCircle className='mr-2 h-4 w-4' />
) : null}
{data?.currentPreferences.unsubscribeMarketing
? 'Unsubscribed from Marketing'
: 'Unsubscribe from Marketing Emails'}
</Button>
<BrandedButton
onClick={() => handleUnsubscribe('marketing')}
disabled={
processing ||
isAlreadyUnsubscribedFromAll ||
data?.currentPreferences.unsubscribeMarketing
}
>
{data?.currentPreferences.unsubscribeMarketing
? 'Unsubscribed from Marketing'
: 'Unsubscribe from Marketing Emails'}
</BrandedButton>
<Button
onClick={() => handleUnsubscribe('updates')}
disabled={
processing ||
data?.currentPreferences.unsubscribeAll ||
data?.currentPreferences.unsubscribeUpdates
}
variant='outline'
className='w-full'
>
{data?.currentPreferences.unsubscribeUpdates ? (
<CheckCircle className='mr-2 h-4 w-4' />
) : null}
{data?.currentPreferences.unsubscribeUpdates
? 'Unsubscribed from Updates'
: 'Unsubscribe from Product Updates'}
</Button>
<BrandedButton
onClick={() => handleUnsubscribe('updates')}
disabled={
processing ||
isAlreadyUnsubscribedFromAll ||
data?.currentPreferences.unsubscribeUpdates
}
>
{data?.currentPreferences.unsubscribeUpdates
? 'Unsubscribed from Updates'
: 'Unsubscribe from Product Updates'}
</BrandedButton>
<Button
onClick={() => handleUnsubscribe('notifications')}
disabled={
processing ||
data?.currentPreferences.unsubscribeAll ||
data?.currentPreferences.unsubscribeNotifications
}
variant='outline'
className='w-full'
>
{data?.currentPreferences.unsubscribeNotifications ? (
<CheckCircle className='mr-2 h-4 w-4' />
) : null}
{data?.currentPreferences.unsubscribeNotifications
? 'Unsubscribed from Notifications'
: 'Unsubscribe from Notifications'}
</Button>
</div>
<BrandedButton
onClick={() => handleUnsubscribe('notifications')}
disabled={
processing ||
isAlreadyUnsubscribedFromAll ||
data?.currentPreferences.unsubscribeNotifications
}
>
{data?.currentPreferences.unsubscribeNotifications
? 'Unsubscribed from Notifications'
: 'Unsubscribe from Notifications'}
</BrandedButton>
</div>
<div className='mt-6 space-y-3'>
<div className='rounded-lg border bg-muted/50 p-3'>
<p className='text-center text-muted-foreground text-xs'>
<strong>Note:</strong> You&apos;ll continue receiving important account emails like
password resets and security alerts.
</p>
</div>
<div className={`${inter.className} mt-6 max-w-[410px] text-center`}>
<p className='font-[380] text-[13px] text-muted-foreground'>
You'll continue receiving important account emails like password resets and security
alerts.
</p>
</div>
<SupportFooter position='absolute' />
</InviteLayout>
<p className='text-center text-muted-foreground text-xs'>
Questions? Contact us at{' '}
<a
href={`mailto:${brand.supportEmail}`}
className='text-muted-foreground hover:underline'
>
{brand.supportEmail}
</a>
</p>
</div>
</CardContent>
</Card>
</div>
)
}
@@ -292,20 +391,13 @@ export default function Unsubscribe() {
return (
<Suspense
fallback={
<InviteLayout>
<div className='space-y-1 text-center'>
<h1 className={`${soehne.className} font-medium text-[32px] text-black tracking-tight`}>
Loading
</h1>
<p className={`${inter.className} font-[380] text-[16px] text-muted-foreground`}>
Validating your unsubscribe link...
</p>
</div>
<div className={`${inter.className} mt-8 flex w-full items-center justify-center py-8`}>
<Loader2 className='h-8 w-8 animate-spin text-muted-foreground' />
</div>
<SupportFooter position='absolute' />
</InviteLayout>
<div className='before:-z-50 relative flex min-h-screen items-center justify-center before:pointer-events-none before:fixed before:inset-0 before:bg-white'>
<Card className='w-full max-w-md border shadow-sm'>
<CardContent className='flex items-center justify-center p-8'>
<Loader2 className='h-8 w-8 animate-spin text-muted-foreground' />
</CardContent>
</Card>
</div>
}
>
<UnsubscribeContent />

View File

@@ -2,6 +2,7 @@
import { useRef, useState } from 'react'
import { createLogger } from '@sim/logger'
import { useQueryClient } from '@tanstack/react-query'
import {
Button,
Label,
@@ -13,7 +14,7 @@ import {
Textarea,
} from '@/components/emcn'
import type { DocumentData } from '@/lib/knowledge/types'
import { useCreateChunk } from '@/hooks/queries/knowledge'
import { knowledgeKeys } from '@/hooks/queries/knowledge'
const logger = createLogger('CreateChunkModal')
@@ -30,20 +31,16 @@ export function CreateChunkModal({
document,
knowledgeBaseId,
}: CreateChunkModalProps) {
const {
mutate: createChunk,
isPending: isCreating,
error: mutationError,
reset: resetMutation,
} = useCreateChunk()
const queryClient = useQueryClient()
const [content, setContent] = useState('')
const [isCreating, setIsCreating] = useState(false)
const [error, setError] = useState<string | null>(null)
const [showUnsavedChangesAlert, setShowUnsavedChangesAlert] = useState(false)
const isProcessingRef = useRef(false)
const error = mutationError?.message ?? null
const hasUnsavedChanges = content.trim().length > 0
const handleCreateChunk = () => {
const handleCreateChunk = async () => {
if (!document || content.trim().length === 0 || isProcessingRef.current) {
if (isProcessingRef.current) {
logger.warn('Chunk creation already in progress, ignoring duplicate request')
@@ -51,32 +48,57 @@ export function CreateChunkModal({
return
}
isProcessingRef.current = true
try {
isProcessingRef.current = true
setIsCreating(true)
setError(null)
createChunk(
{
knowledgeBaseId,
documentId: document.id,
content: content.trim(),
enabled: true,
},
{
onSuccess: () => {
isProcessingRef.current = false
onClose()
},
onError: () => {
isProcessingRef.current = false
},
const response = await fetch(
`/api/knowledge/${knowledgeBaseId}/documents/${document.id}/chunks`,
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
content: content.trim(),
enabled: true,
}),
}
)
if (!response.ok) {
const result = await response.json()
throw new Error(result.error || 'Failed to create chunk')
}
)
const result = await response.json()
if (result.success && result.data) {
logger.info('Chunk created successfully:', result.data.id)
await queryClient.invalidateQueries({
queryKey: knowledgeKeys.detail(knowledgeBaseId),
})
onClose()
} else {
throw new Error(result.error || 'Failed to create chunk')
}
} catch (err) {
logger.error('Error creating chunk:', err)
setError(err instanceof Error ? err.message : 'An error occurred')
} finally {
isProcessingRef.current = false
setIsCreating(false)
}
}
const onClose = () => {
onOpenChange(false)
setContent('')
setError(null)
setShowUnsavedChangesAlert(false)
resetMutation()
}
const handleCloseAttempt = () => {

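The modal above follows a plain fetch-then-invalidate pattern: POST the chunk, then invalidate the knowledge-base query so React Query refetches. A compact sketch of that pattern extracted into a helper; the knowledgeKeys.detail key factory and the endpoint path are taken from the component above, everything else is a simplification.

import type { QueryClient } from '@tanstack/react-query'
import { knowledgeKeys } from '@/hooks/queries/knowledge'

// Create a chunk via the REST endpoint, then refresh any cached knowledge-base detail data
async function createChunkAndRefresh(
  queryClient: QueryClient,
  knowledgeBaseId: string,
  documentId: string,
  content: string
) {
  const response = await fetch(
    `/api/knowledge/${knowledgeBaseId}/documents/${documentId}/chunks`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ content: content.trim(), enabled: true }),
    }
  )
  const result = await response.json()
  if (!response.ok || !result.success) {
    throw new Error(result.error || 'Failed to create chunk')
  }
  // Invalidation triggers a refetch of the knowledge-base detail query
  await queryClient.invalidateQueries({ queryKey: knowledgeKeys.detail(knowledgeBaseId) })
  return result.data
}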
View File

@@ -1,8 +1,13 @@
'use client'
import { useState } from 'react'
import { createLogger } from '@sim/logger'
import { useQueryClient } from '@tanstack/react-query'
import { Button, Modal, ModalBody, ModalContent, ModalFooter, ModalHeader } from '@/components/emcn'
import type { ChunkData } from '@/lib/knowledge/types'
import { useDeleteChunk } from '@/hooks/queries/knowledge'
import { knowledgeKeys } from '@/hooks/queries/knowledge'
const logger = createLogger('DeleteChunkModal')
interface DeleteChunkModalProps {
chunk: ChunkData | null
@@ -19,12 +24,44 @@ export function DeleteChunkModal({
isOpen,
onClose,
}: DeleteChunkModalProps) {
const { mutate: deleteChunk, isPending: isDeleting } = useDeleteChunk()
const queryClient = useQueryClient()
const [isDeleting, setIsDeleting] = useState(false)
const handleDeleteChunk = () => {
const handleDeleteChunk = async () => {
if (!chunk || isDeleting) return
deleteChunk({ knowledgeBaseId, documentId, chunkId: chunk.id }, { onSuccess: onClose })
try {
setIsDeleting(true)
const response = await fetch(
`/api/knowledge/${knowledgeBaseId}/documents/${documentId}/chunks/${chunk.id}`,
{
method: 'DELETE',
}
)
if (!response.ok) {
throw new Error('Failed to delete chunk')
}
const result = await response.json()
if (result.success) {
logger.info('Chunk deleted successfully:', chunk.id)
await queryClient.invalidateQueries({
queryKey: knowledgeKeys.detail(knowledgeBaseId),
})
onClose()
} else {
throw new Error(result.error || 'Failed to delete chunk')
}
} catch (err) {
logger.error('Error deleting chunk:', err)
} finally {
setIsDeleting(false)
}
}
if (!chunk) return null

View File

@@ -25,7 +25,6 @@ import {
} from '@/hooks/kb/use-knowledge-base-tag-definitions'
import { useNextAvailableSlot } from '@/hooks/kb/use-next-available-slot'
import { type TagDefinitionInput, useTagDefinitions } from '@/hooks/kb/use-tag-definitions'
import { useUpdateDocumentTags } from '@/hooks/queries/knowledge'
const logger = createLogger('DocumentTagsModal')
@@ -59,6 +58,8 @@ function formatValueForDisplay(value: string, fieldType: string): string {
try {
const date = new Date(value)
if (Number.isNaN(date.getTime())) return value
// For UTC dates, display the UTC date to prevent timezone shifts
// e.g., 2002-05-16T00:00:00.000Z should show as "May 16, 2002" not "May 15, 2002"
if (typeof value === 'string' && (value.endsWith('Z') || /[+-]\d{2}:\d{2}$/.test(value))) {
return new Date(
date.getUTCFullYear(),
@@ -95,7 +96,6 @@ export function DocumentTagsModal({
const documentTagHook = useTagDefinitions(knowledgeBaseId, documentId)
const kbTagHook = useKnowledgeBaseTagDefinitions(knowledgeBaseId)
const { getNextAvailableSlot: getServerNextSlot } = useNextAvailableSlot(knowledgeBaseId)
const { mutateAsync: updateDocumentTags } = useUpdateDocumentTags()
const { saveTagDefinitions, tagDefinitions, fetchTagDefinitions } = documentTagHook
const { tagDefinitions: kbTagDefinitions, fetchTagDefinitions: refreshTagDefinitions } = kbTagHook
@@ -118,6 +118,7 @@ export function DocumentTagsModal({
const definition = definitions.find((def) => def.tagSlot === slot)
if (rawValue !== null && rawValue !== undefined && definition) {
// Convert value to string for storage
const stringValue = String(rawValue).trim()
if (stringValue) {
tags.push({
@@ -141,34 +142,41 @@ export function DocumentTagsModal({
async (tagsToSave: DocumentTag[]) => {
if (!documentData) return
const tagData: Record<string, string> = {}
try {
const tagData: Record<string, string> = {}
ALL_TAG_SLOTS.forEach((slot) => {
const tag = tagsToSave.find((t) => t.slot === slot)
if (tag?.value.trim()) {
tagData[slot] = tag.value.trim()
} else {
tagData[slot] = ''
// Only include tags that have values (omit empty ones)
// Use empty string for slots that should be cleared
ALL_TAG_SLOTS.forEach((slot) => {
const tag = tagsToSave.find((t) => t.slot === slot)
if (tag?.value.trim()) {
tagData[slot] = tag.value.trim()
} else {
// Use empty string to clear a tag (API schema expects string, not null)
tagData[slot] = ''
}
})
const response = await fetch(`/api/knowledge/${knowledgeBaseId}/documents/${documentId}`, {
method: 'PUT',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify(tagData),
})
if (!response.ok) {
throw new Error('Failed to update document tags')
}
})
await updateDocumentTags({
knowledgeBaseId,
documentId,
tags: tagData,
})
onDocumentUpdate?.(tagData)
await fetchTagDefinitions()
onDocumentUpdate?.(tagData as Record<string, string>)
await fetchTagDefinitions()
} catch (error) {
logger.error('Error updating document tags:', error)
throw error
}
},
[
documentData,
knowledgeBaseId,
documentId,
updateDocumentTags,
fetchTagDefinitions,
onDocumentUpdate,
]
[documentData, knowledgeBaseId, documentId, fetchTagDefinitions, onDocumentUpdate]
)
const handleRemoveTag = async (index: number) => {

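A small sketch of how the tag payload above is assembled: every known slot is written, with the trimmed value when present and an empty string when the slot should be cleared, since the API schema expects strings rather than null. The slot names below are illustrative; the real ALL_TAG_SLOTS constant comes from the tag-definition hooks.

interface DocumentTag {
  slot: string
  value: string
}

// Illustrative slot list; stand-in for the real ALL_TAG_SLOTS constant
const ALL_TAG_SLOTS = ['tag1', 'tag2', 'tag3'] as const

function buildTagPayload(tagsToSave: DocumentTag[]): Record<string, string> {
  const tagData: Record<string, string> = {}
  for (const slot of ALL_TAG_SLOTS) {
    const tag = tagsToSave.find((t) => t.slot === slot)
    // Empty string clears the slot; a trimmed value sets it
    tagData[slot] = tag?.value.trim() ? tag.value.trim() : ''
  }
  return tagData
}

// Example: tag1 is set, the remaining slots are cleared
const payload = buildTagPayload([{ slot: 'tag1', value: ' q3-report ' }])
// => { tag1: 'q3-report', tag2: '', tag3: '' }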
View File

@@ -2,6 +2,7 @@
import { useEffect, useMemo, useRef, useState } from 'react'
import { createLogger } from '@sim/logger'
import { useQueryClient } from '@tanstack/react-query'
import { ChevronDown, ChevronUp } from 'lucide-react'
import {
Button,
@@ -18,7 +19,7 @@ import {
import type { ChunkData, DocumentData } from '@/lib/knowledge/types'
import { getAccurateTokenCount, getTokenStrings } from '@/lib/tokenization/estimators'
import { useUserPermissionsContext } from '@/app/workspace/[workspaceId]/providers/workspace-permissions-provider'
import { useUpdateChunk } from '@/hooks/queries/knowledge'
import { knowledgeKeys } from '@/hooks/queries/knowledge'
const logger = createLogger('EditChunkModal')
@@ -49,22 +50,17 @@ export function EditChunkModal({
onNavigateToPage,
maxChunkSize,
}: EditChunkModalProps) {
const queryClient = useQueryClient()
const userPermissions = useUserPermissionsContext()
const {
mutate: updateChunk,
isPending: isSaving,
error: mutationError,
reset: resetMutation,
} = useUpdateChunk()
const [editedContent, setEditedContent] = useState(chunk?.content || '')
const [isSaving, setIsSaving] = useState(false)
const [isNavigating, setIsNavigating] = useState(false)
const [error, setError] = useState<string | null>(null)
const [showUnsavedChangesAlert, setShowUnsavedChangesAlert] = useState(false)
const [pendingNavigation, setPendingNavigation] = useState<(() => void) | null>(null)
const [tokenizerOn, setTokenizerOn] = useState(false)
const textareaRef = useRef<HTMLTextAreaElement>(null)
const error = mutationError?.message ?? null
const hasUnsavedChanges = editedContent !== (chunk?.content || '')
const tokenStrings = useMemo(() => {
@@ -106,15 +102,44 @@ export function EditChunkModal({
const canNavigatePrev = currentChunkIndex > 0 || currentPage > 1
const canNavigateNext = currentChunkIndex < allChunks.length - 1 || currentPage < totalPages
const handleSaveContent = () => {
const handleSaveContent = async () => {
if (!chunk || !document) return
updateChunk({
knowledgeBaseId,
documentId: document.id,
chunkId: chunk.id,
content: editedContent,
})
try {
setIsSaving(true)
setError(null)
const response = await fetch(
`/api/knowledge/${knowledgeBaseId}/documents/${document.id}/chunks/${chunk.id}`,
{
method: 'PUT',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
content: editedContent,
}),
}
)
if (!response.ok) {
const result = await response.json()
throw new Error(result.error || 'Failed to update chunk')
}
const result = await response.json()
if (result.success) {
await queryClient.invalidateQueries({
queryKey: knowledgeKeys.detail(knowledgeBaseId),
})
}
} catch (err) {
logger.error('Error updating chunk:', err)
setError(err instanceof Error ? err.message : 'An error occurred')
} finally {
setIsSaving(false)
}
}
const navigateToChunk = async (direction: 'prev' | 'next') => {
@@ -140,6 +165,7 @@ export function EditChunkModal({
}
} catch (err) {
logger.error(`Error navigating ${direction}:`, err)
setError(`Failed to navigate to ${direction === 'prev' ? 'previous' : 'next'} chunk`)
} finally {
setIsNavigating(false)
}
@@ -159,7 +185,6 @@ export function EditChunkModal({
setPendingNavigation(null)
setShowUnsavedChangesAlert(true)
} else {
resetMutation()
onClose()
}
}
@@ -170,7 +195,6 @@ export function EditChunkModal({
void pendingNavigation()
setPendingNavigation(null)
} else {
resetMutation()
onClose()
}
}

View File

@@ -48,13 +48,7 @@ import { ActionBar } from '@/app/workspace/[workspaceId]/knowledge/[id]/componen
import { useUserPermissionsContext } from '@/app/workspace/[workspaceId]/providers/workspace-permissions-provider'
import { useContextMenu } from '@/app/workspace/[workspaceId]/w/components/sidebar/hooks'
import { useDocument, useDocumentChunks, useKnowledgeBase } from '@/hooks/kb/use-knowledge'
import {
knowledgeKeys,
useBulkChunkOperation,
useDeleteDocument,
useDocumentChunkSearchQuery,
useUpdateChunk,
} from '@/hooks/queries/knowledge'
import { knowledgeKeys, useDocumentChunkSearchQuery } from '@/hooks/queries/knowledge'
const logger = createLogger('Document')
@@ -409,13 +403,11 @@ export function Document({
const [isCreateChunkModalOpen, setIsCreateChunkModalOpen] = useState(false)
const [chunkToDelete, setChunkToDelete] = useState<ChunkData | null>(null)
const [isDeleteModalOpen, setIsDeleteModalOpen] = useState(false)
const [isBulkOperating, setIsBulkOperating] = useState(false)
const [showDeleteDocumentDialog, setShowDeleteDocumentDialog] = useState(false)
const [isDeletingDocument, setIsDeletingDocument] = useState(false)
const [contextMenuChunk, setContextMenuChunk] = useState<ChunkData | null>(null)
const { mutate: updateChunkMutation } = useUpdateChunk()
const { mutate: deleteDocumentMutation, isPending: isDeletingDocument } = useDeleteDocument()
const { mutate: bulkChunkMutation, isPending: isBulkOperating } = useBulkChunkOperation()
const {
isOpen: isContextMenuOpen,
position: contextMenuPosition,
@@ -448,23 +440,36 @@ export function Document({
setSelectedChunk(null)
}
const handleToggleEnabled = (chunkId: string) => {
const handleToggleEnabled = async (chunkId: string) => {
const chunk = displayChunks.find((c) => c.id === chunkId)
if (!chunk) return
updateChunkMutation(
{
knowledgeBaseId,
documentId,
chunkId,
enabled: !chunk.enabled,
},
{
onSuccess: () => {
updateChunk(chunkId, { enabled: !chunk.enabled })
},
try {
const response = await fetch(
`/api/knowledge/${knowledgeBaseId}/documents/${documentId}/chunks/${chunkId}`,
{
method: 'PUT',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
enabled: !chunk.enabled,
}),
}
)
if (!response.ok) {
throw new Error('Failed to update chunk')
}
)
const result = await response.json()
if (result.success) {
updateChunk(chunkId, { enabled: !chunk.enabled })
}
} catch (err) {
logger.error('Error updating chunk:', err)
}
}
const handleDeleteChunk = (chunkId: string) => {
@@ -510,65 +515,107 @@ export function Document({
/**
* Handles deleting the document
*/
const handleDeleteDocument = () => {
const handleDeleteDocument = async () => {
if (!documentData) return
deleteDocumentMutation(
{ knowledgeBaseId, documentId },
{
onSuccess: () => {
router.push(`/workspace/${workspaceId}/knowledge/${knowledgeBaseId}`)
},
try {
setIsDeletingDocument(true)
const response = await fetch(`/api/knowledge/${knowledgeBaseId}/documents/${documentId}`, {
method: 'DELETE',
})
if (!response.ok) {
throw new Error('Failed to delete document')
}
)
const result = await response.json()
if (result.success) {
await queryClient.invalidateQueries({
queryKey: knowledgeKeys.detail(knowledgeBaseId),
})
router.push(`/workspace/${workspaceId}/knowledge/${knowledgeBaseId}`)
} else {
throw new Error(result.error || 'Failed to delete document')
}
} catch (err) {
logger.error('Error deleting document:', err)
setIsDeletingDocument(false)
}
}
const performBulkChunkOperation = (
const performBulkChunkOperation = async (
operation: 'enable' | 'disable' | 'delete',
chunks: ChunkData[]
) => {
if (chunks.length === 0) return
bulkChunkMutation(
{
knowledgeBaseId,
documentId,
operation,
chunkIds: chunks.map((chunk) => chunk.id),
},
{
onSuccess: (result) => {
if (operation === 'delete' || result.errorCount > 0) {
refreshChunks()
} else {
chunks.forEach((chunk) => {
updateChunk(chunk.id, { enabled: operation === 'enable' })
})
}
logger.info(`Successfully ${operation}d ${result.successCount} chunks`)
setSelectedChunks(new Set())
},
try {
setIsBulkOperating(true)
const response = await fetch(
`/api/knowledge/${knowledgeBaseId}/documents/${documentId}/chunks`,
{
method: 'PATCH',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
operation,
chunkIds: chunks.map((chunk) => chunk.id),
}),
}
)
if (!response.ok) {
throw new Error(`Failed to ${operation} chunks`)
}
)
const result = await response.json()
if (result.success) {
if (operation === 'delete') {
await refreshChunks()
} else {
result.data.results.forEach((opResult: any) => {
if (opResult.operation === operation) {
opResult.chunkIds.forEach((chunkId: string) => {
updateChunk(chunkId, { enabled: operation === 'enable' })
})
}
})
}
logger.info(`Successfully ${operation}d ${result.data.successCount} chunks`)
}
setSelectedChunks(new Set())
} catch (err) {
logger.error(`Error ${operation}ing chunks:`, err)
} finally {
setIsBulkOperating(false)
}
}
const handleBulkEnable = () => {
const handleBulkEnable = async () => {
const chunksToEnable = displayChunks.filter(
(chunk) => selectedChunks.has(chunk.id) && !chunk.enabled
)
performBulkChunkOperation('enable', chunksToEnable)
await performBulkChunkOperation('enable', chunksToEnable)
}
const handleBulkDisable = () => {
const handleBulkDisable = async () => {
const chunksToDisable = displayChunks.filter(
(chunk) => selectedChunks.has(chunk.id) && chunk.enabled
)
performBulkChunkOperation('disable', chunksToDisable)
await performBulkChunkOperation('disable', chunksToDisable)
}
const handleBulkDelete = () => {
const handleBulkDelete = async () => {
const chunksToDelete = displayChunks.filter((chunk) => selectedChunks.has(chunk.id))
performBulkChunkOperation('delete', chunksToDelete)
await performBulkChunkOperation('delete', chunksToDelete)
}
const selectedChunksList = displayChunks.filter((chunk) => selectedChunks.has(chunk.id))

View File

@@ -2,6 +2,7 @@
import { useCallback, useEffect, useRef, useState } from 'react'
import { createLogger } from '@sim/logger'
import { useQueryClient } from '@tanstack/react-query'
import { format } from 'date-fns'
import {
AlertCircle,
@@ -61,12 +62,7 @@ import {
type TagDefinition,
useKnowledgeBaseTagDefinitions,
} from '@/hooks/kb/use-knowledge-base-tag-definitions'
import {
useBulkDocumentOperation,
useDeleteDocument,
useDeleteKnowledgeBase,
useUpdateDocument,
} from '@/hooks/queries/knowledge'
import { knowledgeKeys } from '@/hooks/queries/knowledge'
const logger = createLogger('KnowledgeBase')
@@ -411,17 +407,12 @@ export function KnowledgeBase({
id,
knowledgeBaseName: passedKnowledgeBaseName,
}: KnowledgeBaseProps) {
const queryClient = useQueryClient()
const params = useParams()
const workspaceId = params.workspaceId as string
const { removeKnowledgeBase } = useKnowledgeBasesList(workspaceId, { enabled: false })
const userPermissions = useUserPermissionsContext()
const { mutate: updateDocumentMutation } = useUpdateDocument()
const { mutate: deleteDocumentMutation } = useDeleteDocument()
const { mutate: deleteKnowledgeBaseMutation, isPending: isDeleting } =
useDeleteKnowledgeBase(workspaceId)
const { mutate: bulkDocumentMutation, isPending: isBulkOperating } = useBulkDocumentOperation()
const [searchQuery, setSearchQuery] = useState('')
const [showTagsModal, setShowTagsModal] = useState(false)
@@ -436,6 +427,8 @@ export function KnowledgeBase({
const [selectedDocuments, setSelectedDocuments] = useState<Set<string>>(new Set())
const [showDeleteDialog, setShowDeleteDialog] = useState(false)
const [showAddDocumentsModal, setShowAddDocumentsModal] = useState(false)
const [isDeleting, setIsDeleting] = useState(false)
const [isBulkOperating, setIsBulkOperating] = useState(false)
const [showDeleteDocumentModal, setShowDeleteDocumentModal] = useState(false)
const [documentToDelete, setDocumentToDelete] = useState<string | null>(null)
const [showBulkDeleteModal, setShowBulkDeleteModal] = useState(false)
@@ -557,7 +550,7 @@ export function KnowledgeBase({
/**
* Checks for documents with stale processing states and marks them as failed
*/
const checkForDeadProcesses = () => {
const checkForDeadProcesses = async () => {
const now = new Date()
const DEAD_PROCESS_THRESHOLD_MS = 600 * 1000 // 10 minutes
@@ -574,79 +567,116 @@ export function KnowledgeBase({
logger.warn(`Found ${staleDocuments.length} documents with dead processes`)
staleDocuments.forEach((doc) => {
updateDocumentMutation(
{
knowledgeBaseId: id,
documentId: doc.id,
updates: { markFailedDueToTimeout: true },
},
{
onSuccess: () => {
logger.info(`Successfully marked dead process as failed for document: ${doc.filename}`)
const markFailedPromises = staleDocuments.map(async (doc) => {
try {
const response = await fetch(`/api/knowledge/${id}/documents/${doc.id}`, {
method: 'PUT',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
markFailedDueToTimeout: true,
}),
})
if (!response.ok) {
const errorData = await response.json().catch(() => ({ error: 'Unknown error' }))
logger.error(`Failed to mark document ${doc.id} as failed: ${errorData.error}`)
return
}
)
const result = await response.json()
if (result.success) {
logger.info(`Successfully marked dead process as failed for document: ${doc.filename}`)
}
} catch (error) {
logger.error(`Error marking document ${doc.id} as failed:`, error)
}
})
await Promise.allSettled(markFailedPromises)
}
const handleToggleEnabled = (docId: string) => {
const handleToggleEnabled = async (docId: string) => {
const document = documents.find((doc) => doc.id === docId)
if (!document) return
const newEnabled = !document.enabled
// Optimistic update
updateDocument(docId, { enabled: newEnabled })
updateDocumentMutation(
{
knowledgeBaseId: id,
documentId: docId,
updates: { enabled: newEnabled },
},
{
onError: () => {
// Rollback on error
updateDocument(docId, { enabled: !newEnabled })
try {
const response = await fetch(`/api/knowledge/${id}/documents/${docId}`, {
method: 'PUT',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
enabled: newEnabled,
}),
})
if (!response.ok) {
throw new Error('Failed to update document')
}
)
const result = await response.json()
if (!result.success) {
updateDocument(docId, { enabled: !newEnabled })
}
} catch (err) {
updateDocument(docId, { enabled: !newEnabled })
logger.error('Error updating document:', err)
}
}
/**
* Handles retrying a failed document processing
*/
const handleRetryDocument = (docId: string) => {
// Optimistic update
updateDocument(docId, {
processingStatus: 'pending',
processingError: null,
processingStartedAt: null,
processingCompletedAt: null,
})
const handleRetryDocument = async (docId: string) => {
try {
updateDocument(docId, {
processingStatus: 'pending',
processingError: null,
processingStartedAt: null,
processingCompletedAt: null,
})
updateDocumentMutation(
{
knowledgeBaseId: id,
documentId: docId,
updates: { retryProcessing: true },
},
{
onSuccess: () => {
refreshDocuments()
logger.info(`Document retry initiated successfully for: ${docId}`)
},
onError: (err) => {
logger.error('Error retrying document:', err)
updateDocument(docId, {
processingStatus: 'failed',
processingError:
err instanceof Error ? err.message : 'Failed to retry document processing',
})
const response = await fetch(`/api/knowledge/${id}/documents/${docId}`, {
method: 'PUT',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
retryProcessing: true,
}),
})
if (!response.ok) {
throw new Error('Failed to retry document processing')
}
)
const result = await response.json()
if (!result.success) {
throw new Error(result.error || 'Failed to retry document processing')
}
await refreshDocuments()
logger.info(`Document retry initiated successfully for: ${docId}`)
} catch (err) {
logger.error('Error retrying document:', err)
const currentDoc = documents.find((doc) => doc.id === docId)
if (currentDoc) {
updateDocument(docId, {
processingStatus: 'failed',
processingError:
err instanceof Error ? err.message : 'Failed to retry document processing',
})
}
}
}
/**
@@ -664,32 +694,43 @@ export function KnowledgeBase({
const currentDoc = documents.find((doc) => doc.id === documentId)
const previousName = currentDoc?.filename
// Optimistic update
updateDocument(documentId, { filename: newName })
queryClient.setQueryData<DocumentData>(knowledgeKeys.document(id, documentId), (previous) =>
previous ? { ...previous, filename: newName } : previous
)
return new Promise<void>((resolve, reject) => {
updateDocumentMutation(
{
knowledgeBaseId: id,
documentId,
updates: { filename: newName },
try {
const response = await fetch(`/api/knowledge/${id}/documents/${documentId}`, {
method: 'PUT',
headers: {
'Content-Type': 'application/json',
},
{
onSuccess: () => {
logger.info(`Document renamed: ${documentId}`)
resolve()
},
onError: (err) => {
// Rollback on error
if (previousName !== undefined) {
updateDocument(documentId, { filename: previousName })
}
logger.error('Error renaming document:', err)
reject(err)
},
}
)
})
body: JSON.stringify({ filename: newName }),
})
if (!response.ok) {
const result = await response.json()
throw new Error(result.error || 'Failed to rename document')
}
const result = await response.json()
if (!result.success) {
throw new Error(result.error || 'Failed to rename document')
}
logger.info(`Document renamed: ${documentId}`)
} catch (err) {
if (previousName !== undefined) {
updateDocument(documentId, { filename: previousName })
queryClient.setQueryData<DocumentData>(
knowledgeKeys.document(id, documentId),
(previous) => (previous ? { ...previous, filename: previousName } : previous)
)
}
logger.error('Error renaming document:', err)
throw err
}
}
/**
@@ -703,26 +744,35 @@ export function KnowledgeBase({
/**
* Confirms and executes the deletion of a single document
*/
const confirmDeleteDocument = () => {
const confirmDeleteDocument = async () => {
if (!documentToDelete) return
deleteDocumentMutation(
{ knowledgeBaseId: id, documentId: documentToDelete },
{
onSuccess: () => {
refreshDocuments()
setSelectedDocuments((prev) => {
const newSet = new Set(prev)
newSet.delete(documentToDelete)
return newSet
})
},
onSettled: () => {
setShowDeleteDocumentModal(false)
setDocumentToDelete(null)
},
try {
const response = await fetch(`/api/knowledge/${id}/documents/${documentToDelete}`, {
method: 'DELETE',
})
if (!response.ok) {
throw new Error('Failed to delete document')
}
)
const result = await response.json()
if (result.success) {
refreshDocuments()
setSelectedDocuments((prev) => {
const newSet = new Set(prev)
newSet.delete(documentToDelete)
return newSet
})
}
} catch (err) {
logger.error('Error deleting document:', err)
} finally {
setShowDeleteDocumentModal(false)
setDocumentToDelete(null)
}
}
/**
@@ -768,18 +818,32 @@ export function KnowledgeBase({
/**
* Handles deleting the entire knowledge base
*/
const handleDeleteKnowledgeBase = () => {
const handleDeleteKnowledgeBase = async () => {
if (!knowledgeBase) return
deleteKnowledgeBaseMutation(
{ knowledgeBaseId: id },
{
onSuccess: () => {
removeKnowledgeBase(id)
router.push(`/workspace/${workspaceId}/knowledge`)
},
try {
setIsDeleting(true)
const response = await fetch(`/api/knowledge/${id}`, {
method: 'DELETE',
})
if (!response.ok) {
throw new Error('Failed to delete knowledge base')
}
)
const result = await response.json()
if (result.success) {
removeKnowledgeBase(id)
router.push(`/workspace/${workspaceId}/knowledge`)
} else {
throw new Error(result.error || 'Failed to delete knowledge base')
}
} catch (err) {
logger.error('Error deleting knowledge base:', err)
setIsDeleting(false)
}
}
/**
@@ -792,57 +856,93 @@ export function KnowledgeBase({
/**
* Handles bulk enabling of selected documents
*/
const handleBulkEnable = () => {
const handleBulkEnable = async () => {
const documentsToEnable = documents.filter(
(doc) => selectedDocuments.has(doc.id) && !doc.enabled
)
if (documentsToEnable.length === 0) return
bulkDocumentMutation(
{
knowledgeBaseId: id,
operation: 'enable',
documentIds: documentsToEnable.map((doc) => doc.id),
},
{
onSuccess: (result) => {
result.updatedDocuments?.forEach((updatedDoc) => {
updateDocument(updatedDoc.id, { enabled: updatedDoc.enabled })
})
logger.info(`Successfully enabled ${result.successCount} documents`)
setSelectedDocuments(new Set())
try {
setIsBulkOperating(true)
const response = await fetch(`/api/knowledge/${id}/documents`, {
method: 'PATCH',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
operation: 'enable',
documentIds: documentsToEnable.map((doc) => doc.id),
}),
})
if (!response.ok) {
throw new Error('Failed to enable documents')
}
)
const result = await response.json()
if (result.success) {
result.data.updatedDocuments.forEach((updatedDoc: { id: string; enabled: boolean }) => {
updateDocument(updatedDoc.id, { enabled: updatedDoc.enabled })
})
logger.info(`Successfully enabled ${result.data.successCount} documents`)
}
setSelectedDocuments(new Set())
} catch (err) {
logger.error('Error enabling documents:', err)
} finally {
setIsBulkOperating(false)
}
}
/**
* Handles bulk disabling of selected documents
*/
const handleBulkDisable = () => {
const handleBulkDisable = async () => {
const documentsToDisable = documents.filter(
(doc) => selectedDocuments.has(doc.id) && doc.enabled
)
if (documentsToDisable.length === 0) return
bulkDocumentMutation(
{
knowledgeBaseId: id,
operation: 'disable',
documentIds: documentsToDisable.map((doc) => doc.id),
},
{
onSuccess: (result) => {
result.updatedDocuments?.forEach((updatedDoc) => {
updateDocument(updatedDoc.id, { enabled: updatedDoc.enabled })
})
logger.info(`Successfully disabled ${result.successCount} documents`)
setSelectedDocuments(new Set())
try {
setIsBulkOperating(true)
const response = await fetch(`/api/knowledge/${id}/documents`, {
method: 'PATCH',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
operation: 'disable',
documentIds: documentsToDisable.map((doc) => doc.id),
}),
})
if (!response.ok) {
throw new Error('Failed to disable documents')
}
)
const result = await response.json()
if (result.success) {
result.data.updatedDocuments.forEach((updatedDoc: { id: string; enabled: boolean }) => {
updateDocument(updatedDoc.id, { enabled: updatedDoc.enabled })
})
logger.info(`Successfully disabled ${result.data.successCount} documents`)
}
setSelectedDocuments(new Set())
} catch (err) {
logger.error('Error disabling documents:', err)
} finally {
setIsBulkOperating(false)
}
}
/**
@@ -856,28 +956,44 @@ export function KnowledgeBase({
/**
* Confirms and executes the bulk deletion of selected documents
*/
const confirmBulkDelete = () => {
const confirmBulkDelete = async () => {
const documentsToDelete = documents.filter((doc) => selectedDocuments.has(doc.id))
if (documentsToDelete.length === 0) return
bulkDocumentMutation(
{
knowledgeBaseId: id,
operation: 'delete',
documentIds: documentsToDelete.map((doc) => doc.id),
},
{
onSuccess: (result) => {
logger.info(`Successfully deleted ${result.successCount} documents`)
refreshDocuments()
setSelectedDocuments(new Set())
},
onSettled: () => {
setShowBulkDeleteModal(false)
try {
setIsBulkOperating(true)
const response = await fetch(`/api/knowledge/${id}/documents`, {
method: 'PATCH',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
operation: 'delete',
documentIds: documentsToDelete.map((doc) => doc.id),
}),
})
if (!response.ok) {
throw new Error('Failed to delete documents')
}
)
const result = await response.json()
if (result.success) {
logger.info(`Successfully deleted ${result.data.successCount} documents`)
}
await refreshDocuments()
setSelectedDocuments(new Set())
} catch (err) {
logger.error('Error deleting documents:', err)
} finally {
setIsBulkOperating(false)
setShowBulkDeleteModal(false)
}
}
const selectedDocumentsList = documents.filter((doc) => selectedDocuments.has(doc.id))

View File

@@ -22,10 +22,10 @@ import {
type TagDefinition,
useKnowledgeBaseTagDefinitions,
} from '@/hooks/kb/use-knowledge-base-tag-definitions'
import { useCreateTagDefinition, useDeleteTagDefinition } from '@/hooks/queries/knowledge'
const logger = createLogger('BaseTagsModal')
/** Field type display labels */
const FIELD_TYPE_LABELS: Record<string, string> = {
text: 'Text',
number: 'Number',
@@ -45,6 +45,7 @@ interface DocumentListProps {
totalCount: number
}
/** Displays a list of documents affected by tag operations */
function DocumentList({ documents, totalCount }: DocumentListProps) {
const displayLimit = 5
const hasMore = totalCount > displayLimit
@@ -94,14 +95,13 @@ export function BaseTagsModal({ open, onOpenChange, knowledgeBaseId }: BaseTagsM
const { tagDefinitions: kbTagDefinitions, fetchTagDefinitions: refreshTagDefinitions } =
useKnowledgeBaseTagDefinitions(knowledgeBaseId)
const createTagMutation = useCreateTagDefinition()
const deleteTagMutation = useDeleteTagDefinition()
const [deleteTagDialogOpen, setDeleteTagDialogOpen] = useState(false)
const [selectedTag, setSelectedTag] = useState<TagDefinition | null>(null)
const [viewDocumentsDialogOpen, setViewDocumentsDialogOpen] = useState(false)
const [isDeletingTag, setIsDeletingTag] = useState(false)
const [tagUsageData, setTagUsageData] = useState<TagUsageData[]>([])
const [isCreatingTag, setIsCreatingTag] = useState(false)
const [isSavingTag, setIsSavingTag] = useState(false)
const [createTagForm, setCreateTagForm] = useState({
displayName: '',
fieldType: 'text',
@@ -177,12 +177,13 @@ export function BaseTagsModal({ open, onOpenChange, knowledgeBaseId }: BaseTagsM
}
const tagNameConflict =
isCreatingTag && !createTagMutation.isPending && hasTagNameConflict(createTagForm.displayName)
isCreatingTag && !isSavingTag && hasTagNameConflict(createTagForm.displayName)
const canSaveTag = () => {
return createTagForm.displayName.trim() && !hasTagNameConflict(createTagForm.displayName)
}
/** Get slot usage counts per field type */
const getSlotUsageByFieldType = (fieldType: string): { used: number; max: number } => {
const config = TAG_SLOT_CONFIG[fieldType as keyof typeof TAG_SLOT_CONFIG]
if (!config) return { used: 0, max: 0 }
@@ -190,11 +191,13 @@ export function BaseTagsModal({ open, onOpenChange, knowledgeBaseId }: BaseTagsM
return { used, max: config.maxSlots }
}
/** Check if a field type has available slots */
const hasAvailableSlots = (fieldType: string): boolean => {
const { used, max } = getSlotUsageByFieldType(fieldType)
return used < max
}
/** Field type options for Combobox */
const fieldTypeOptions: ComboboxOption[] = useMemo(() => {
return SUPPORTED_FIELD_TYPES.filter((type) => hasAvailableSlots(type)).map((type) => {
const { used, max } = getSlotUsageByFieldType(type)
@@ -208,17 +211,43 @@ export function BaseTagsModal({ open, onOpenChange, knowledgeBaseId }: BaseTagsM
const saveTagDefinition = async () => {
if (!canSaveTag()) return
setIsSavingTag(true)
try {
// Check if selected field type has available slots
if (!hasAvailableSlots(createTagForm.fieldType)) {
throw new Error(`No available slots for ${createTagForm.fieldType} type`)
}
await createTagMutation.mutateAsync({
knowledgeBaseId,
// Get the next available slot from the API
const slotResponse = await fetch(
`/api/knowledge/${knowledgeBaseId}/next-available-slot?fieldType=${createTagForm.fieldType}`
)
if (!slotResponse.ok) {
throw new Error('Failed to get available slot')
}
const slotResult = await slotResponse.json()
if (!slotResult.success || !slotResult.data?.nextAvailableSlot) {
throw new Error('No available tag slots for this field type')
}
const newTagDefinition = {
tagSlot: slotResult.data.nextAvailableSlot,
displayName: createTagForm.displayName.trim(),
fieldType: createTagForm.fieldType,
}
const response = await fetch(`/api/knowledge/${knowledgeBaseId}/tag-definitions`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify(newTagDefinition),
})
if (!response.ok) {
throw new Error('Failed to create tag definition')
}
await Promise.all([refreshTagDefinitions(), fetchTagUsage()])
setCreateTagForm({
@@ -228,17 +257,27 @@ export function BaseTagsModal({ open, onOpenChange, knowledgeBaseId }: BaseTagsM
setIsCreatingTag(false)
} catch (error) {
logger.error('Error creating tag definition:', error)
} finally {
setIsSavingTag(false)
}
}
const confirmDeleteTag = async () => {
if (!selectedTag) return
setIsDeletingTag(true)
try {
await deleteTagMutation.mutateAsync({
knowledgeBaseId,
tagDefinitionId: selectedTag.id,
})
const response = await fetch(
`/api/knowledge/${knowledgeBaseId}/tag-definitions/${selectedTag.id}`,
{
method: 'DELETE',
}
)
if (!response.ok) {
const errorText = await response.text()
throw new Error(`Failed to delete tag definition: ${response.status} ${errorText}`)
}
await Promise.all([refreshTagDefinitions(), fetchTagUsage()])
@@ -246,6 +285,8 @@ export function BaseTagsModal({ open, onOpenChange, knowledgeBaseId }: BaseTagsM
setSelectedTag(null)
} catch (error) {
logger.error('Error deleting tag definition:', error)
} finally {
setIsDeletingTag(false)
}
}
@@ -392,11 +433,11 @@ export function BaseTagsModal({ open, onOpenChange, knowledgeBaseId }: BaseTagsM
className='flex-1'
disabled={
!canSaveTag() ||
createTagMutation.isPending ||
isSavingTag ||
!hasAvailableSlots(createTagForm.fieldType)
}
>
{createTagMutation.isPending ? 'Creating...' : 'Create Tag'}
{isSavingTag ? 'Creating...' : 'Create Tag'}
</Button>
</div>
</div>
@@ -440,17 +481,13 @@ export function BaseTagsModal({ open, onOpenChange, knowledgeBaseId }: BaseTagsM
<ModalFooter>
<Button
variant='default'
disabled={deleteTagMutation.isPending}
disabled={isDeletingTag}
onClick={() => setDeleteTagDialogOpen(false)}
>
Cancel
</Button>
<Button
variant='destructive'
onClick={confirmDeleteTag}
disabled={deleteTagMutation.isPending}
>
{deleteTagMutation.isPending ? 'Deleting...' : 'Delete Tag'}
<Button variant='destructive' onClick={confirmDeleteTag} disabled={isDeletingTag}>
{isDeletingTag ? <>Deleting...</> : 'Delete Tag'}
</Button>
</ModalFooter>
</ModalContent>
@@ -462,7 +499,7 @@ export function BaseTagsModal({ open, onOpenChange, knowledgeBaseId }: BaseTagsM
<ModalHeader>Documents using "{selectedTag?.displayName}"</ModalHeader>
<ModalBody>
<div className='space-y-[8px]'>
<p className='text-[12px] text-[var(--text-secondary)]'>
<p className='text-[12px] text-[var(--text-tertiary)]'>
{selectedTagUsage?.documentCount || 0} document
{selectedTagUsage?.documentCount !== 1 ? 's are' : ' is'} currently using this tag
definition.
@@ -470,7 +507,7 @@ export function BaseTagsModal({ open, onOpenChange, knowledgeBaseId }: BaseTagsM
{selectedTagUsage?.documentCount === 0 ? (
<div className='rounded-[6px] border p-[16px] text-center'>
<p className='text-[12px] text-[var(--text-secondary)]'>
<p className='text-[12px] text-[var(--text-tertiary)]'>
This tag definition is not being used by any documents. You can safely delete it
to free up the tag slot.
</p>

View File

@@ -3,6 +3,7 @@
import { useEffect, useRef, useState } from 'react'
import { zodResolver } from '@hookform/resolvers/zod'
import { createLogger } from '@sim/logger'
import { useQueryClient } from '@tanstack/react-query'
import { Loader2, RotateCcw, X } from 'lucide-react'
import { useParams } from 'next/navigation'
import { useForm } from 'react-hook-form'
@@ -22,7 +23,7 @@ import { cn } from '@/lib/core/utils/cn'
import { formatFileSize, validateKnowledgeBaseFile } from '@/lib/uploads/utils/file-utils'
import { ACCEPT_ATTRIBUTE } from '@/lib/uploads/utils/validation'
import { useKnowledgeUpload } from '@/app/workspace/[workspaceId]/knowledge/hooks/use-knowledge-upload'
import { useCreateKnowledgeBase, useDeleteKnowledgeBase } from '@/hooks/queries/knowledge'
import { knowledgeKeys } from '@/hooks/queries/knowledge'
const logger = createLogger('CreateBaseModal')
@@ -81,11 +82,10 @@ interface SubmitStatus {
export function CreateBaseModal({ open, onOpenChange }: CreateBaseModalProps) {
const params = useParams()
const workspaceId = params.workspaceId as string
const createKnowledgeBaseMutation = useCreateKnowledgeBase(workspaceId)
const deleteKnowledgeBaseMutation = useDeleteKnowledgeBase(workspaceId)
const queryClient = useQueryClient()
const fileInputRef = useRef<HTMLInputElement>(null)
const [isSubmitting, setIsSubmitting] = useState(false)
const [submitStatus, setSubmitStatus] = useState<SubmitStatus | null>(null)
const [files, setFiles] = useState<FileWithPreview[]>([])
const [fileError, setFileError] = useState<string | null>(null)
@@ -245,14 +245,12 @@ export function CreateBaseModal({ open, onOpenChange }: CreateBaseModalProps) {
})
}
const isSubmitting =
createKnowledgeBaseMutation.isPending || deleteKnowledgeBaseMutation.isPending || isUploading
const onSubmit = async (data: FormValues) => {
setIsSubmitting(true)
setSubmitStatus(null)
try {
const newKnowledgeBase = await createKnowledgeBaseMutation.mutateAsync({
const knowledgeBasePayload = {
name: data.name,
description: data.description || undefined,
workspaceId: workspaceId,
@@ -261,8 +259,29 @@ export function CreateBaseModal({ open, onOpenChange }: CreateBaseModalProps) {
minSize: data.minChunkSize,
overlap: data.overlapSize,
},
}
const response = await fetch('/api/knowledge', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify(knowledgeBasePayload),
})
if (!response.ok) {
const errorData = await response.json()
throw new Error(errorData.error || 'Failed to create knowledge base')
}
const result = await response.json()
if (!result.success) {
throw new Error(result.error || 'Failed to create knowledge base')
}
const newKnowledgeBase = result.data
if (files.length > 0) {
try {
const uploadedFiles = await uploadFiles(files, newKnowledgeBase.id, {
@@ -274,11 +293,15 @@ export function CreateBaseModal({ open, onOpenChange }: CreateBaseModalProps) {
logger.info(`Successfully uploaded ${uploadedFiles.length} files`)
logger.info(`Started processing ${uploadedFiles.length} documents in the background`)
await queryClient.invalidateQueries({
queryKey: knowledgeKeys.list(workspaceId),
})
} catch (uploadError) {
logger.error('File upload failed, deleting knowledge base:', uploadError)
try {
await deleteKnowledgeBaseMutation.mutateAsync({
knowledgeBaseId: newKnowledgeBase.id,
await fetch(`/api/knowledge/${newKnowledgeBase.id}`, {
method: 'DELETE',
})
logger.info(`Deleted orphaned knowledge base: ${newKnowledgeBase.id}`)
} catch (deleteError) {
@@ -286,6 +309,10 @@ export function CreateBaseModal({ open, onOpenChange }: CreateBaseModalProps) {
}
throw uploadError
}
} else {
await queryClient.invalidateQueries({
queryKey: knowledgeKeys.list(workspaceId),
})
}
files.forEach((file) => URL.revokeObjectURL(file.preview))
@@ -298,6 +325,8 @@ export function CreateBaseModal({ open, onOpenChange }: CreateBaseModalProps) {
type: 'error',
message: error instanceof Error ? error.message : 'An unknown error occurred',
})
} finally {
setIsSubmitting(false)
}
}

View File

@@ -2,6 +2,7 @@
import { useEffect, useState } from 'react'
import { createLogger } from '@sim/logger'
import { useQueryClient } from '@tanstack/react-query'
import { AlertTriangle, ChevronDown, LibraryBig, MoreHorizontal } from 'lucide-react'
import Link from 'next/link'
import {
@@ -14,7 +15,7 @@ import {
} from '@/components/emcn'
import { Trash } from '@/components/emcn/icons/trash'
import { filterButtonClass } from '@/app/workspace/[workspaceId]/knowledge/components/constants'
import { useUpdateKnowledgeBase } from '@/hooks/queries/knowledge'
import { knowledgeKeys } from '@/hooks/queries/knowledge'
const logger = createLogger('KnowledgeHeader')
@@ -53,13 +54,14 @@ interface Workspace {
}
export function KnowledgeHeader({ breadcrumbs, options }: KnowledgeHeaderProps) {
const queryClient = useQueryClient()
const [isActionsPopoverOpen, setIsActionsPopoverOpen] = useState(false)
const [isWorkspacePopoverOpen, setIsWorkspacePopoverOpen] = useState(false)
const [workspaces, setWorkspaces] = useState<Workspace[]>([])
const [isLoadingWorkspaces, setIsLoadingWorkspaces] = useState(false)
const [isUpdatingWorkspace, setIsUpdatingWorkspace] = useState(false)
const updateKnowledgeBase = useUpdateKnowledgeBase()
// Fetch available workspaces
useEffect(() => {
if (!options?.knowledgeBaseId) return
@@ -74,6 +76,7 @@ export function KnowledgeHeader({ breadcrumbs, options }: KnowledgeHeaderProps)
const data = await response.json()
// Filter workspaces where user has write/admin permissions
const availableWorkspaces = data.workspaces
.filter((ws: any) => ws.permissions === 'write' || ws.permissions === 'admin')
.map((ws: any) => ({
@@ -94,27 +97,47 @@ export function KnowledgeHeader({ breadcrumbs, options }: KnowledgeHeaderProps)
}, [options?.knowledgeBaseId])
const handleWorkspaceChange = async (workspaceId: string | null) => {
if (updateKnowledgeBase.isPending || !options?.knowledgeBaseId) return
if (isUpdatingWorkspace || !options?.knowledgeBaseId) return
setIsWorkspacePopoverOpen(false)
try {
setIsUpdatingWorkspace(true)
setIsWorkspacePopoverOpen(false)
updateKnowledgeBase.mutate(
{
knowledgeBaseId: options.knowledgeBaseId,
updates: { workspaceId },
},
{
onSuccess: () => {
logger.info(
`Knowledge base workspace updated: ${options.knowledgeBaseId} -> ${workspaceId}`
)
options.onWorkspaceChange?.(workspaceId)
},
onError: (err) => {
logger.error('Error updating workspace:', err)
const response = await fetch(`/api/knowledge/${options.knowledgeBaseId}`, {
method: 'PUT',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
workspaceId,
}),
})
if (!response.ok) {
const result = await response.json()
throw new Error(result.error || 'Failed to update workspace')
}
)
const result = await response.json()
if (result.success) {
logger.info(
`Knowledge base workspace updated: ${options.knowledgeBaseId} -> ${workspaceId}`
)
await queryClient.invalidateQueries({
queryKey: knowledgeKeys.detail(options.knowledgeBaseId),
})
await options.onWorkspaceChange?.(workspaceId)
} else {
throw new Error(result.error || 'Failed to update workspace')
}
} catch (err) {
logger.error('Error updating workspace:', err)
} finally {
setIsUpdatingWorkspace(false)
}
}
const currentWorkspace = workspaces.find((ws) => ws.id === options?.currentWorkspaceId)
@@ -124,6 +147,7 @@ export function KnowledgeHeader({ breadcrumbs, options }: KnowledgeHeaderProps)
<div className={HEADER_STYLES.container}>
<div className={HEADER_STYLES.breadcrumbs}>
{breadcrumbs.map((breadcrumb, index) => {
// Use unique identifier when available, fallback to content-based key
const key = breadcrumb.id || `${breadcrumb.label}-${breadcrumb.href || index}`
return (
@@ -165,13 +189,13 @@ export function KnowledgeHeader({ breadcrumbs, options }: KnowledgeHeaderProps)
<PopoverTrigger asChild>
<Button
variant='outline'
disabled={isLoadingWorkspaces || updateKnowledgeBase.isPending}
disabled={isLoadingWorkspaces || isUpdatingWorkspace}
className={filterButtonClass}
>
<span className='truncate'>
{isLoadingWorkspaces
? 'Loading...'
: updateKnowledgeBase.isPending
: isUpdatingWorkspace
? 'Updating...'
: currentWorkspace?.name || 'No workspace'}
</span>

View File

@@ -32,7 +32,6 @@ import {
import { useUserPermissionsContext } from '@/app/workspace/[workspaceId]/providers/workspace-permissions-provider'
import { useContextMenu } from '@/app/workspace/[workspaceId]/w/components/sidebar/hooks'
import { useKnowledgeBasesList } from '@/hooks/kb/use-knowledge'
import { useDeleteKnowledgeBase, useUpdateKnowledgeBase } from '@/hooks/queries/knowledge'
import { useDebounce } from '@/hooks/use-debounce'
const logger = createLogger('Knowledge')
@@ -52,12 +51,10 @@ export function Knowledge() {
const params = useParams()
const workspaceId = params.workspaceId as string
const { knowledgeBases, isLoading, error } = useKnowledgeBasesList(workspaceId)
const { knowledgeBases, isLoading, error, removeKnowledgeBase, updateKnowledgeBase } =
useKnowledgeBasesList(workspaceId)
const userPermissions = useUserPermissionsContext()
const { mutateAsync: updateKnowledgeBaseMutation } = useUpdateKnowledgeBase(workspaceId)
const { mutateAsync: deleteKnowledgeBaseMutation } = useDeleteKnowledgeBase(workspaceId)
const [searchQuery, setSearchQuery] = useState('')
const debouncedSearchQuery = useDebounce(searchQuery, 300)
const [isCreateModalOpen, setIsCreateModalOpen] = useState(false)
@@ -115,13 +112,29 @@ export function Knowledge() {
*/
const handleUpdateKnowledgeBase = useCallback(
async (id: string, name: string, description: string) => {
await updateKnowledgeBaseMutation({
knowledgeBaseId: id,
updates: { name, description },
const response = await fetch(`/api/knowledge/${id}`, {
method: 'PUT',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({ name, description }),
})
logger.info(`Knowledge base updated: ${id}`)
if (!response.ok) {
const result = await response.json()
throw new Error(result.error || 'Failed to update knowledge base')
}
const result = await response.json()
if (result.success) {
logger.info(`Knowledge base updated: ${id}`)
updateKnowledgeBase(id, { name, description })
} else {
throw new Error(result.error || 'Failed to update knowledge base')
}
},
[updateKnowledgeBaseMutation]
[updateKnowledgeBase]
)
/**
@@ -129,10 +142,25 @@ export function Knowledge() {
*/
const handleDeleteKnowledgeBase = useCallback(
async (id: string) => {
await deleteKnowledgeBaseMutation({ knowledgeBaseId: id })
logger.info(`Knowledge base deleted: ${id}`)
const response = await fetch(`/api/knowledge/${id}`, {
method: 'DELETE',
})
if (!response.ok) {
const result = await response.json()
throw new Error(result.error || 'Failed to delete knowledge base')
}
const result = await response.json()
if (result.success) {
logger.info(`Knowledge base deleted: ${id}`)
removeKnowledgeBase(id)
} else {
throw new Error(result.error || 'Failed to delete knowledge base')
}
},
[deleteKnowledgeBaseMutation]
[removeKnowledgeBase]
)
/**

View File

@@ -1,22 +0,0 @@
import { PopoverSection } from '@/components/emcn'
/**
* Skeleton loading component for chat history dropdown
* Displays placeholder content while chats are being loaded
*/
export function ChatHistorySkeleton() {
return (
<>
<PopoverSection>
<div className='h-3 w-12 animate-pulse rounded bg-muted/40' />
</PopoverSection>
<div className='flex flex-col gap-0.5'>
{[1, 2, 3].map((i) => (
<div key={i} className='flex h-[25px] items-center px-[6px]'>
<div className='h-3 w-full animate-pulse rounded bg-muted/40' />
</div>
))}
</div>
</>
)
}

View File

@@ -1,79 +0,0 @@
import { Button } from '@/components/emcn'
type CheckpointConfirmationVariant = 'restore' | 'discard'
interface CheckpointConfirmationProps {
/** Confirmation variant - 'restore' for reverting, 'discard' for edit with checkpoint options */
variant: CheckpointConfirmationVariant
/** Whether an action is currently processing */
isProcessing: boolean
/** Callback when cancel is clicked */
onCancel: () => void
/** Callback when revert is clicked */
onRevert: () => void
/** Callback when continue is clicked (only for 'discard' variant) */
onContinue?: () => void
}
/**
* Inline confirmation for checkpoint operations
* Supports two variants:
* - 'restore': Simple revert confirmation with warning
* - 'discard': Edit with checkpoint options (revert or continue without revert)
*/
export function CheckpointConfirmation({
variant,
isProcessing,
onCancel,
onRevert,
onContinue,
}: CheckpointConfirmationProps) {
const isRestoreVariant = variant === 'restore'
return (
<div className='mt-[8px] rounded-[4px] border border-[var(--border)] bg-[var(--surface-4)] p-[10px]'>
<p className='mb-[8px] text-[12px] text-[var(--text-primary)]'>
{isRestoreVariant ? (
<>
Revert to checkpoint? This will restore your workflow to the state saved at this
checkpoint.{' '}
<span className='text-[var(--text-error)]'>This action cannot be undone.</span>
</>
) : (
'Continue from a previous message?'
)}
</p>
<div className='flex gap-[8px]'>
<Button
onClick={onCancel}
variant='active'
size='sm'
className='flex-1'
disabled={isProcessing}
>
Cancel
</Button>
<Button
onClick={onRevert}
variant='destructive'
size='sm'
className='flex-1'
disabled={isProcessing}
>
{isProcessing ? 'Reverting...' : 'Revert'}
</Button>
{!isRestoreVariant && onContinue && (
<Button
onClick={onContinue}
variant='tertiary'
size='sm'
className='flex-1'
disabled={isProcessing}
>
Continue
</Button>
)}
</div>
</div>
)
}

View File

@@ -1,6 +1,5 @@
export * from './checkpoint-confirmation'
export * from './file-display'
export { CopilotMarkdownRenderer } from './markdown-renderer'
export { default as CopilotMarkdownRenderer } from './markdown-renderer'
export * from './smooth-streaming'
export * from './thinking-block'
export * from './usage-limit-actions'

View File

@@ -1 +0,0 @@
export { default as CopilotMarkdownRenderer } from './markdown-renderer'

View File

@@ -1,17 +1,27 @@
import { memo, useEffect, useRef, useState } from 'react'
import { cn } from '@/lib/core/utils/cn'
import { CopilotMarkdownRenderer } from '../markdown-renderer'
import CopilotMarkdownRenderer from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/components/markdown-renderer'
/** Character animation delay in milliseconds */
/**
* Character animation delay in milliseconds
*/
const CHARACTER_DELAY = 3
/** Props for the StreamingIndicator component */
/**
* Props for the StreamingIndicator component
*/
interface StreamingIndicatorProps {
/** Optional class name for layout adjustments */
className?: string
}
/** Shows animated dots during message streaming when no content has arrived */
/**
* StreamingIndicator shows animated dots during message streaming
* Used as a standalone indicator when no content has arrived yet
*
* @param props - Component props
* @returns Animated loading indicator
*/
export const StreamingIndicator = memo(({ className }: StreamingIndicatorProps) => (
<div className={cn('flex h-[1.25rem] items-center text-muted-foreground', className)}>
<div className='flex space-x-0.5'>
@@ -24,7 +34,9 @@ export const StreamingIndicator = memo(({ className }: StreamingIndicatorProps)
StreamingIndicator.displayName = 'StreamingIndicator'
/** Props for the SmoothStreamingText component */
/**
* Props for the SmoothStreamingText component
*/
interface SmoothStreamingTextProps {
/** Content to display with streaming animation */
content: string
@@ -32,12 +44,20 @@ interface SmoothStreamingTextProps {
isStreaming: boolean
}
/** Displays text with character-by-character animation for smooth streaming */
/**
* SmoothStreamingText component displays text with character-by-character animation
* Creates a smooth streaming effect for AI responses
*
* @param props - Component props
* @returns Streaming text with smooth animation
*/
export const SmoothStreamingText = memo(
({ content, isStreaming }: SmoothStreamingTextProps) => {
// Initialize with full content when not streaming to avoid flash on page load
const [displayedContent, setDisplayedContent] = useState(() => (isStreaming ? '' : content))
const contentRef = useRef(content)
const timeoutRef = useRef<NodeJS.Timeout | null>(null)
// Initialize index based on streaming state
const indexRef = useRef(isStreaming ? 0 : content.length)
const isAnimatingRef = useRef(false)
@@ -75,6 +95,7 @@ export const SmoothStreamingText = memo(
}
}
} else {
// Streaming ended - show full content immediately
if (timeoutRef.current) {
clearTimeout(timeoutRef.current)
}
@@ -98,6 +119,7 @@ export const SmoothStreamingText = memo(
)
},
(prevProps, nextProps) => {
// Prevent re-renders during streaming unless content actually changed
return (
prevProps.content === nextProps.content && prevProps.isStreaming === nextProps.isStreaming
)

View File

@@ -3,45 +3,66 @@
import { memo, useEffect, useMemo, useRef, useState } from 'react'
import clsx from 'clsx'
import { ChevronUp } from 'lucide-react'
import { CopilotMarkdownRenderer } from '../markdown-renderer'
import CopilotMarkdownRenderer from './markdown-renderer'
/** Removes thinking tags (raw or escaped) and special tags from streamed content */
/**
* Removes thinking tags (raw or escaped) from streamed content.
*/
function stripThinkingTags(text: string): string {
return text
.replace(/<\/?thinking[^>]*>/gi, '')
.replace(/&lt;\/?thinking[^&]*&gt;/gi, '')
.replace(/<options>[\s\S]*?<\/options>/gi, '')
.replace(/<options>[\s\S]*$/gi, '')
.replace(/<plan>[\s\S]*?<\/plan>/gi, '')
.replace(/<plan>[\s\S]*$/gi, '')
.trim()
}
/** Interval for auto-scroll during streaming (ms) */
/**
* Max height for thinking content before internal scrolling kicks in
*/
const THINKING_MAX_HEIGHT = 150
/**
* Height threshold before gradient fade kicks in
*/
const GRADIENT_THRESHOLD = 100
/**
* Interval for auto-scroll during streaming (ms)
*/
const SCROLL_INTERVAL = 50
/** Timer update interval in milliseconds */
/**
* Timer update interval in milliseconds
*/
const TIMER_UPDATE_INTERVAL = 100
/** Thinking text streaming delay - faster than main text */
/**
* Thinking text streaming - much faster than main text
* Essentially instant with minimal delay
*/
const THINKING_DELAY = 0.5
const THINKING_CHARS_PER_FRAME = 3
/** Props for the SmoothThinkingText component */
/**
* Props for the SmoothThinkingText component
*/
interface SmoothThinkingTextProps {
content: string
isStreaming: boolean
}
/**
* Renders thinking content with fast streaming animation.
* SmoothThinkingText renders thinking content with fast streaming animation
* Uses gradient fade at top when content is tall enough
*/
const SmoothThinkingText = memo(
({ content, isStreaming }: SmoothThinkingTextProps) => {
// Initialize with full content when not streaming to avoid flash on page load
const [displayedContent, setDisplayedContent] = useState(() => (isStreaming ? '' : content))
const [showGradient, setShowGradient] = useState(false)
const contentRef = useRef(content)
const textRef = useRef<HTMLDivElement>(null)
const rafRef = useRef<number | null>(null)
// Initialize index based on streaming state
const indexRef = useRef(isStreaming ? 0 : content.length)
const lastFrameTimeRef = useRef<number>(0)
const isAnimatingRef = useRef(false)
@@ -67,6 +88,7 @@ const SmoothThinkingText = memo(
if (elapsed >= THINKING_DELAY) {
if (currentIndex < currentContent.length) {
// Reveal multiple characters per frame for faster streaming
const newIndex = Math.min(
currentIndex + THINKING_CHARS_PER_FRAME,
currentContent.length
@@ -88,6 +110,7 @@ const SmoothThinkingText = memo(
rafRef.current = requestAnimationFrame(animateText)
}
} else {
// Streaming ended - show full content immediately
if (rafRef.current) {
cancelAnimationFrame(rafRef.current)
}
@@ -104,10 +127,30 @@ const SmoothThinkingText = memo(
}
}, [content, isStreaming])
// Check if content height exceeds threshold for gradient
useEffect(() => {
if (textRef.current && isStreaming) {
const height = textRef.current.scrollHeight
setShowGradient(height > GRADIENT_THRESHOLD)
} else {
setShowGradient(false)
}
}, [displayedContent, isStreaming])
// Apply vertical gradient fade at the top only when content is tall enough
const gradientStyle =
isStreaming && showGradient
? {
maskImage: 'linear-gradient(to bottom, transparent 0%, black 30%, black 100%)',
WebkitMaskImage: 'linear-gradient(to bottom, transparent 0%, black 30%, black 100%)',
}
: undefined
return (
<div
ref={textRef}
className='[&_*]:!text-[var(--text-muted)] [&_*]:!text-[12px] [&_*]:!leading-[1.4] [&_p]:!m-0 [&_p]:!mb-1 [&_h1]:!text-[12px] [&_h1]:!font-semibold [&_h1]:!m-0 [&_h1]:!mb-1 [&_h2]:!text-[12px] [&_h2]:!font-semibold [&_h2]:!m-0 [&_h2]:!mb-1 [&_h3]:!text-[12px] [&_h3]:!font-semibold [&_h3]:!m-0 [&_h3]:!mb-1 [&_code]:!text-[11px] [&_ul]:!pl-5 [&_ul]:!my-1 [&_ol]:!pl-6 [&_ol]:!my-1 [&_li]:!my-0.5 [&_li]:!py-0 font-season text-[12px] text-[var(--text-muted)]'
style={gradientStyle}
>
<CopilotMarkdownRenderer content={displayedContent} />
</div>
@@ -122,7 +165,9 @@ const SmoothThinkingText = memo(
SmoothThinkingText.displayName = 'SmoothThinkingText'
/** Props for the ThinkingBlock component */
/**
* Props for the ThinkingBlock component
*/
interface ThinkingBlockProps {
/** Content of the thinking block */
content: string
@@ -137,8 +182,13 @@ interface ThinkingBlockProps {
}
/**
* Displays AI reasoning/thinking process with collapsible content and duration timer.
* Auto-expands during streaming and collapses when complete.
* ThinkingBlock component displays AI reasoning/thinking process
* Shows collapsible content with duration timer
* Auto-expands during streaming and collapses when complete
* Auto-collapses when a tool call or other content comes in after it
*
* @param props - Component props
* @returns Thinking block with expandable content and timer
*/
export function ThinkingBlock({
content,
@@ -147,6 +197,7 @@ export function ThinkingBlock({
label = 'Thought',
hasSpecialTags = false,
}: ThinkingBlockProps) {
// Strip thinking tags from content on render to handle persisted messages
const cleanContent = useMemo(() => stripThinkingTags(content || ''), [content])
const [isExpanded, setIsExpanded] = useState(false)
@@ -158,8 +209,12 @@ export function ThinkingBlock({
const lastScrollTopRef = useRef(0)
const programmaticScrollRef = useRef(false)
/** Auto-expands during streaming, auto-collapses when streaming ends or following content arrives */
/**
* Auto-expands block when streaming with content
* Auto-collapses when streaming ends OR when following content arrives
*/
useEffect(() => {
// Collapse if streaming ended, there's following content, or special tags arrived
if (!isStreaming || hasFollowingContent || hasSpecialTags) {
setIsExpanded(false)
userCollapsedRef.current = false
@@ -172,6 +227,7 @@ export function ThinkingBlock({
}
}, [isStreaming, cleanContent, hasFollowingContent, hasSpecialTags])
// Reset start time when streaming begins
useEffect(() => {
if (isStreaming && !hasFollowingContent) {
startTimeRef.current = Date.now()
@@ -180,7 +236,9 @@ export function ThinkingBlock({
}
}, [isStreaming, hasFollowingContent])
// Update duration timer during streaming (stop when following content arrives)
useEffect(() => {
// Stop timer if not streaming or if there's following content (thinking is done)
if (!isStreaming || hasFollowingContent) return
const interval = setInterval(() => {
@@ -190,6 +248,7 @@ export function ThinkingBlock({
return () => clearInterval(interval)
}, [isStreaming, hasFollowingContent])
// Handle scroll events to detect user scrolling away
useEffect(() => {
const container = scrollContainerRef.current
if (!container || !isExpanded) return
@@ -208,6 +267,7 @@ export function ThinkingBlock({
setUserHasScrolledAway(true)
}
// Re-stick if user scrolls back to bottom with intent
if (userHasScrolledAway && isNearBottom && delta > 10) {
setUserHasScrolledAway(false)
}
@@ -221,6 +281,7 @@ export function ThinkingBlock({
return () => container.removeEventListener('scroll', handleScroll)
}, [isExpanded, userHasScrolledAway])
// Smart auto-scroll: always scroll to bottom while streaming unless user scrolled away
useEffect(() => {
if (!isStreaming || !isExpanded || userHasScrolledAway) return
@@ -241,16 +302,20 @@ export function ThinkingBlock({
return () => window.clearInterval(intervalId)
}, [isStreaming, isExpanded, userHasScrolledAway])
/** Formats duration in milliseconds to seconds (minimum 1s) */
/**
* Formats duration in milliseconds to seconds
* Always shows seconds, rounded to nearest whole second, minimum 1s
*/
const formatDuration = (ms: number) => {
const seconds = Math.max(1, Math.round(ms / 1000))
return `${seconds}s`
}
const hasContent = cleanContent.length > 0
// Thinking is "done" when streaming ends OR when there's following content (like a tool call) OR when special tags appear
const isThinkingDone = !isStreaming || hasFollowingContent || hasSpecialTags
const durationText = `${label} for ${formatDuration(duration)}`
// Convert past tense label to present tense for streaming (e.g., "Thought" → "Thinking")
const getStreamingLabel = (lbl: string) => {
if (lbl === 'Thought') return 'Thinking'
if (lbl.endsWith('ed')) return `${lbl.slice(0, -2)}ing`
@@ -258,9 +323,11 @@ export function ThinkingBlock({
}
const streamingLabel = getStreamingLabel(label)
// During streaming: show header with shimmer effect + expanded content
if (!isThinkingDone) {
return (
<div>
{/* Define shimmer keyframes */}
<style>{`
@keyframes thinking-shimmer {
0% { background-position: 150% 0; }
@@ -329,6 +396,7 @@ export function ThinkingBlock({
)
}
// After done: show collapsible header with duration
return (
<div>
<button
@@ -358,6 +426,7 @@ export function ThinkingBlock({
isExpanded ? 'mt-1.5 max-h-[150px] opacity-100' : 'max-h-0 opacity-0'
)}
>
{/* Completed thinking text - dimmed with markdown */}
<div className='[&_*]:!text-[var(--text-muted)] [&_*]:!text-[12px] [&_*]:!leading-[1.4] [&_p]:!m-0 [&_p]:!mb-1 [&_h1]:!text-[12px] [&_h1]:!font-semibold [&_h1]:!m-0 [&_h1]:!mb-1 [&_h2]:!text-[12px] [&_h2]:!font-semibold [&_h2]:!m-0 [&_h2]:!mb-1 [&_h3]:!text-[12px] [&_h3]:!font-semibold [&_h3]:!m-0 [&_h3]:!mb-1 [&_code]:!text-[11px] [&_ul]:!pl-5 [&_ul]:!my-1 [&_ol]:!pl-6 [&_ol]:!my-1 [&_li]:!my-0.5 [&_li]:!py-0 font-season text-[12px] text-[var(--text-muted)]'>
<CopilotMarkdownRenderer content={cleanContent} />
</div>

View File

@@ -9,20 +9,18 @@ import {
ToolCall,
} from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components'
import {
CheckpointConfirmation,
FileAttachmentDisplay,
SmoothStreamingText,
StreamingIndicator,
ThinkingBlock,
UsageLimitActions,
} from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/components'
import { CopilotMarkdownRenderer } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/components/markdown-renderer'
import CopilotMarkdownRenderer from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/components/markdown-renderer'
import {
useCheckpointManagement,
useMessageEditing,
} from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/hooks'
import { UserInput } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/user-input'
import { buildMentionHighlightNodes } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/utils'
import type { CopilotMessage as CopilotMessageType } from '@/stores/panel'
import { useCopilotStore } from '@/stores/panel'
@@ -70,6 +68,7 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
const isUser = message.role === 'user'
const isAssistant = message.role === 'assistant'
// Store state
const {
messageCheckpoints: allMessageCheckpoints,
messages,
@@ -80,18 +79,23 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
isAborting,
} = useCopilotStore()
// Get checkpoints for this message if it's a user message
const messageCheckpoints = isUser ? allMessageCheckpoints[message.id] || [] : []
const hasCheckpoints = messageCheckpoints.length > 0 && messageCheckpoints.some((cp) => cp?.id)
// Check if this is the last user message (for showing abort button)
const isLastUserMessage = useMemo(() => {
if (!isUser) return false
const userMessages = messages.filter((m) => m.role === 'user')
return userMessages.length > 0 && userMessages[userMessages.length - 1]?.id === message.id
}, [isUser, messages, message.id])
// UI state
const [isHoveringMessage, setIsHoveringMessage] = useState(false)
const cancelEditRef = useRef<(() => void) | null>(null)
// Checkpoint management hook
const {
showRestoreConfirmation,
showCheckpointDiscardModal,
@@ -114,6 +118,7 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
() => cancelEditRef.current?.()
)
// Message editing hook
const {
isEditMode,
isExpanded,
@@ -142,20 +147,27 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
cancelEditRef.current = handleCancelEdit
// Get clean text content with double newline parsing
const cleanTextContent = useMemo(() => {
if (!message.content) return ''
// Parse out excessive newlines (more than 2 consecutive newlines)
return message.content.replace(/\n{3,}/g, '\n\n')
}, [message.content])
// Parse special tags from message content (options, plan)
// Parse during streaming to show options/plan as they stream in
const parsedTags = useMemo(() => {
if (isUser) return null
// Try message.content first
if (message.content) {
const parsed = parseSpecialTags(message.content)
if (parsed.options || parsed.plan) return parsed
}
if (message.contentBlocks && message.contentBlocks.length > 0) {
// During streaming, check content blocks for options/plan
if (isStreaming && message.contentBlocks && message.contentBlocks.length > 0) {
for (const block of message.contentBlocks) {
if (block.type === 'text' && block.content) {
const parsed = parseSpecialTags(block.content)
@@ -164,42 +176,23 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
}
}
return null
}, [message.content, message.contentBlocks, isUser])
const selectedOptionKey = useMemo(() => {
if (!parsedTags?.options || isStreaming) return null
const currentIndex = messages.findIndex((m) => m.id === message.id)
if (currentIndex === -1 || currentIndex >= messages.length - 1) return null
const nextMessage = messages[currentIndex + 1]
if (!nextMessage || nextMessage.role !== 'user') return null
const nextContent = nextMessage.content?.trim()
if (!nextContent) return null
for (const [key, option] of Object.entries(parsedTags.options)) {
const optionTitle = typeof option === 'string' ? option : option.title
if (nextContent === optionTitle) {
return key
}
}
return null
}, [parsedTags?.options, messages, message.id, isStreaming])
return message.content ? parseSpecialTags(message.content) : null
}, [message.content, message.contentBlocks, isUser, isStreaming])
// Get sendMessage from store for continuation actions
const sendMessage = useCopilotStore((s) => s.sendMessage)
// Handler for option selection
const handleOptionSelect = useCallback(
(_optionKey: string, optionText: string) => {
// Send the option text as a message
sendMessage(optionText)
},
[sendMessage]
)
const isActivelyStreaming = isLastMessage && isStreaming
// Memoize content blocks to avoid re-rendering unchanged blocks
// No entrance animations to prevent layout shift
const memoizedContentBlocks = useMemo(() => {
if (!message.contentBlocks || message.contentBlocks.length === 0) {
return null
@@ -209,21 +202,21 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
if (block.type === 'text') {
const isLastTextBlock =
index === message.contentBlocks!.length - 1 && block.type === 'text'
// Always strip special tags from display (they're rendered separately as options/plan)
const parsed = parseSpecialTags(block.content)
const cleanBlockContent = parsed.cleanContent.replace(/\n{3,}/g, '\n\n')
// Skip if no content after stripping tags
if (!cleanBlockContent.trim()) return null
const shouldUseSmoothing = isActivelyStreaming && isLastTextBlock
// Use smooth streaming for the last text block if we're streaming
const shouldUseSmoothing = isStreaming && isLastTextBlock
const blockKey = `text-${index}-${block.timestamp || index}`
return (
<div key={blockKey} className='w-full max-w-full'>
{shouldUseSmoothing ? (
<SmoothStreamingText
content={cleanBlockContent}
isStreaming={isActivelyStreaming}
/>
<SmoothStreamingText content={cleanBlockContent} isStreaming={isStreaming} />
) : (
<CopilotMarkdownRenderer content={cleanBlockContent} />
)}
@@ -231,7 +224,9 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
)
}
if (block.type === 'thinking') {
// Check if there are any blocks after this one (tool calls, text, etc.)
const hasFollowingContent = index < message.contentBlocks!.length - 1
// Check if special tags (options, plan) are present - should also close thinking
const hasSpecialTags = !!(parsedTags?.options || parsedTags?.plan)
const blockKey = `thinking-${index}-${block.timestamp || index}`
@@ -239,7 +234,7 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
<div key={blockKey} className='w-full'>
<ThinkingBlock
content={block.content}
isStreaming={isActivelyStreaming}
isStreaming={isStreaming}
hasFollowingContent={hasFollowingContent}
hasSpecialTags={hasSpecialTags}
/>
@@ -251,22 +246,18 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
return (
<div key={blockKey}>
<ToolCall
toolCallId={block.toolCall.id}
toolCall={block.toolCall}
isCurrentMessage={isLastMessage}
/>
<ToolCall toolCallId={block.toolCall.id} toolCall={block.toolCall} />
</div>
)
}
return null
})
}, [message.contentBlocks, isActivelyStreaming, parsedTags, isLastMessage])
}, [message.contentBlocks, isStreaming, parsedTags])
if (isUser) {
return (
<div
className={`w-full max-w-full flex-none overflow-hidden transition-opacity duration-200 [max-width:var(--panel-max-width)] ${isDimmed ? 'opacity-40' : 'opacity-100'}`}
className={`w-full max-w-full overflow-hidden transition-opacity duration-200 [max-width:var(--panel-max-width)] ${isDimmed ? 'opacity-40' : 'opacity-100'}`}
style={{ '--panel-max-width': `${panelWidth - 16}px` } as React.CSSProperties}
>
{isEditMode ? (
@@ -297,15 +288,42 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
initialContexts={message.contexts}
/>
{/* Inline checkpoint confirmation - shown below input in edit mode */}
{/* Inline Checkpoint Discard Confirmation - shown below input in edit mode */}
{showCheckpointDiscardModal && (
<CheckpointConfirmation
variant='discard'
isProcessing={isProcessingDiscard}
onCancel={handleCancelCheckpointDiscard}
onRevert={handleContinueAndRevert}
onContinue={handleContinueWithoutRevert}
/>
<div className='mt-[8px] rounded-[4px] border border-[var(--border)] bg-[var(--surface-4)] p-[10px]'>
<p className='mb-[8px] text-[12px] text-[var(--text-primary)]'>
Continue from a previous message?
</p>
<div className='flex gap-[8px]'>
<Button
onClick={handleCancelCheckpointDiscard}
variant='active'
size='sm'
className='flex-1'
disabled={isProcessingDiscard}
>
Cancel
</Button>
<Button
onClick={handleContinueAndRevert}
variant='destructive'
size='sm'
className='flex-1'
disabled={isProcessingDiscard}
>
{isProcessingDiscard ? 'Reverting...' : 'Revert'}
</Button>
<Button
onClick={handleContinueWithoutRevert}
variant='tertiary'
size='sm'
className='flex-1'
disabled={isProcessingDiscard}
>
Continue
</Button>
</div>
</div>
)}
</div>
) : (
@@ -330,15 +348,46 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
ref={messageContentRef}
className={`relative whitespace-pre-wrap break-words px-[2px] py-1 font-medium font-sans text-[var(--text-primary)] text-sm leading-[1.25rem] ${isSendingMessage && isLastUserMessage && isHoveringMessage ? 'pr-7' : ''} ${!isExpanded && needsExpansion ? 'max-h-[60px] overflow-hidden' : 'overflow-visible'}`}
>
{buildMentionHighlightNodes(
message.content || '',
message.contexts || [],
(token, key) => (
<span key={key} className='rounded-[4px] bg-[rgba(50,189,126,0.65)] py-[1px]'>
{token}
</span>
)
)}
{(() => {
const text = message.content || ''
const contexts: any[] = Array.isArray((message as any).contexts)
? ((message as any).contexts as any[])
: []
// Build tokens with their prefixes (@ for mentions, / for commands)
const tokens = contexts
.filter((c) => c?.kind !== 'current_workflow' && c?.label)
.map((c) => {
const prefix = c?.kind === 'slash_command' ? '/' : '@'
return `${prefix}${c.label}`
})
if (!tokens.length) return text
const escapeRegex = (s: string) => s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
const pattern = new RegExp(`(${tokens.map(escapeRegex).join('|')})`, 'g')
const nodes: React.ReactNode[] = []
let lastIndex = 0
let match: RegExpExecArray | null
while ((match = pattern.exec(text)) !== null) {
const i = match.index
const before = text.slice(lastIndex, i)
if (before) nodes.push(before)
const mention = match[0]
nodes.push(
<span
key={`mention-${i}-${lastIndex}`}
className='rounded-[4px] bg-[rgba(50,189,126,0.65)] py-[1px]'
>
{mention}
</span>
)
lastIndex = i + mention.length
}
const tail = text.slice(lastIndex)
if (tail) nodes.push(tail)
return nodes
})()}
</div>
{/* Gradient fade when truncated - applies to entire message box */}
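For reference, a standalone sketch of the highlighting step above, with hypothetical context data; it shows how tokens are prefixed, regex-escaped, and used to split the message into plain and highlighted segments.

// Hypothetical contexts; only the token-building and splitting logic mirrors the code above.
const escapeRegex = (s: string) => s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')

const contexts = [
  { kind: 'workflow', label: 'Email Triage' },
  { kind: 'slash_command', label: 'deploy' },
]
const tokens = contexts.map((c) => `${c.kind === 'slash_command' ? '/' : '@'}${c.label}`)
const pattern = new RegExp(`(${tokens.map(escapeRegex).join('|')})`, 'g')

// Splitting on a capturing group keeps the matched tokens in the result:
// 'Run /deploy on @Email Triage' -> ['Run ', '/deploy', ' on ', '@Email Triage']
const segments = 'Run /deploy on @Email Triage'.split(pattern).filter(Boolean)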
@@ -388,30 +437,65 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
</div>
)}
{/* Inline restore checkpoint confirmation */}
{/* Inline Restore Checkpoint Confirmation */}
{showRestoreConfirmation && (
<CheckpointConfirmation
variant='restore'
isProcessing={isReverting}
onCancel={handleCancelRevert}
onRevert={handleConfirmRevert}
/>
<div className='mt-[8px] rounded-[4px] border border-[var(--border)] bg-[var(--surface-4)] p-[10px]'>
<p className='mb-[8px] text-[12px] text-[var(--text-primary)]'>
Revert to checkpoint? This will restore your workflow to the state saved at this
checkpoint.{' '}
<span className='text-[var(--text-error)]'>This action cannot be undone.</span>
</p>
<div className='flex gap-[8px]'>
<Button
onClick={handleCancelRevert}
variant='active'
size='sm'
className='flex-1'
disabled={isReverting}
>
Cancel
</Button>
<Button
onClick={handleConfirmRevert}
variant='destructive'
size='sm'
className='flex-1'
disabled={isReverting}
>
{isReverting ? 'Reverting...' : 'Revert'}
</Button>
</div>
</div>
)}
</div>
)
}
// Check if there's any visible content in the blocks
const hasVisibleContent = useMemo(() => {
if (!message.contentBlocks || message.contentBlocks.length === 0) return false
return message.contentBlocks.some((block) => {
if (block.type === 'text') {
const parsed = parseSpecialTags(block.content)
return parsed.cleanContent.trim().length > 0
}
return block.type === 'thinking' || block.type === 'tool_call'
})
}, [message.contentBlocks])
if (isAssistant) {
return (
<div
className={`w-full max-w-full flex-none overflow-hidden [max-width:var(--panel-max-width)] ${isDimmed ? 'opacity-40' : 'opacity-100'}`}
className={`w-full max-w-full overflow-hidden [max-width:var(--panel-max-width)] ${isDimmed ? 'opacity-40' : 'opacity-100'}`}
style={{ '--panel-max-width': `${panelWidth - 16}px` } as React.CSSProperties}
>
<div className='max-w-full space-y-[4px] px-[2px] pb-[4px]'>
<div className='max-w-full space-y-1 px-[2px]'>
{/* Content blocks in chronological order */}
{memoizedContentBlocks || (isStreaming && <div className='min-h-0' />)}
{memoizedContentBlocks}
{isStreaming && <StreamingIndicator />}
{isStreaming && (
<StreamingIndicator className={!hasVisibleContent ? 'mt-1' : undefined} />
)}
{message.errorType === 'usage_limit' && (
<div className='flex gap-1.5'>
@@ -450,7 +534,6 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
isLastMessage && !isStreaming && parsedTags.optionsComplete === true
}
streaming={isStreaming || !parsedTags.optionsComplete}
selectedOptionKey={selectedOptionKey}
/>
)}
</div>
@@ -461,22 +544,50 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
return null
},
(prevProps, nextProps) => {
// Custom comparison function for better streaming performance
const prevMessage = prevProps.message
const nextMessage = nextProps.message
if (prevMessage.id !== nextMessage.id) return false
if (prevProps.isStreaming !== nextProps.isStreaming) return false
if (prevProps.isDimmed !== nextProps.isDimmed) return false
if (prevProps.panelWidth !== nextProps.panelWidth) return false
if (prevProps.checkpointCount !== nextProps.checkpointCount) return false
if (prevProps.isLastMessage !== nextProps.isLastMessage) return false
// If message IDs are different, always re-render
if (prevMessage.id !== nextMessage.id) {
return false
}
// If streaming state changed, re-render
if (prevProps.isStreaming !== nextProps.isStreaming) {
return false
}
// If dimmed state changed, re-render
if (prevProps.isDimmed !== nextProps.isDimmed) {
return false
}
// If panel width changed, re-render
if (prevProps.panelWidth !== nextProps.panelWidth) {
return false
}
// If checkpoint count changed, re-render
if (prevProps.checkpointCount !== nextProps.checkpointCount) {
return false
}
// If isLastMessage changed, re-render (for options visibility)
if (prevProps.isLastMessage !== nextProps.isLastMessage) {
return false
}
// For streaming messages, check if content actually changed
if (nextProps.isStreaming) {
const prevBlocks = prevMessage.contentBlocks || []
const nextBlocks = nextMessage.contentBlocks || []
if (prevBlocks.length !== nextBlocks.length) return false
if (prevBlocks.length !== nextBlocks.length) {
return false // Content blocks changed
}
// Helper: get last block content by type
const getLastBlockContent = (blocks: any[], type: 'text' | 'thinking'): string | null => {
for (let i = blocks.length - 1; i >= 0; i--) {
const block = blocks[i]
@@ -487,6 +598,7 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
return null
}
// Re-render if the last text block content changed
const prevLastTextContent = getLastBlockContent(prevBlocks as any[], 'text')
const nextLastTextContent = getLastBlockContent(nextBlocks as any[], 'text')
if (
@@ -497,6 +609,7 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
return false
}
// Re-render if the last thinking block content changed
const prevLastThinkingContent = getLastBlockContent(prevBlocks as any[], 'thinking')
const nextLastThinkingContent = getLastBlockContent(nextBlocks as any[], 'thinking')
if (
@@ -507,18 +620,24 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
return false
}
// Check if tool calls changed
const prevToolCalls = prevMessage.toolCalls || []
const nextToolCalls = nextMessage.toolCalls || []
if (prevToolCalls.length !== nextToolCalls.length) return false
if (prevToolCalls.length !== nextToolCalls.length) {
return false // Tool calls count changed
}
for (let i = 0; i < nextToolCalls.length; i++) {
if (prevToolCalls[i]?.state !== nextToolCalls[i]?.state) return false
if (prevToolCalls[i]?.state !== nextToolCalls[i]?.state) {
return false // Tool call state changed
}
}
return true
}
// For non-streaming messages, do a deeper comparison including tool call states
if (
prevMessage.content !== nextMessage.content ||
prevMessage.role !== nextMessage.role ||
@@ -528,12 +647,16 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
return false
}
// Check tool call states for non-streaming messages too
const prevToolCalls = prevMessage.toolCalls || []
const nextToolCalls = nextMessage.toolCalls || []
for (let i = 0; i < nextToolCalls.length; i++) {
if (prevToolCalls[i]?.state !== nextToolCalls[i]?.state) return false
if (prevToolCalls[i]?.state !== nextToolCalls[i]?.state) {
return false // Tool call state changed
}
}
// Check contentBlocks tool call states
const prevContentBlocks = prevMessage.contentBlocks || []
const nextContentBlocks = nextMessage.contentBlocks || []
for (let i = 0; i < nextContentBlocks.length; i++) {
@@ -544,7 +667,7 @@ const CopilotMessage: FC<CopilotMessageProps> = memo(
nextBlock?.type === 'tool_call' &&
prevBlock.toolCall?.state !== nextBlock.toolCall?.state
) {
return false
return false // ContentBlock tool call state changed
}
}
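Taken together, the comparator above boils down to the following pattern; this is a condensed sketch with simplified types, not the component's real prop surface.

import { memo } from 'react'

type Block = { type: 'text' | 'thinking' | 'tool_call'; content?: string }
type Msg = { id: string; content: string; contentBlocks?: Block[] }
type Props = { message: Msg; isStreaming: boolean; panelWidth: number }

// Content of the last block of a given type, or null if there is none.
const lastContent = (blocks: Block[], type: Block['type']) =>
  [...blocks].reverse().find((b) => b.type === type)?.content ?? null

// Return true to skip re-rendering: identity and UI props must match, and while
// streaming, the tail of the content must not have moved.
const areEqual = (prev: Props, next: Props) => {
  if (prev.message.id !== next.message.id) return false
  if (prev.isStreaming !== next.isStreaming || prev.panelWidth !== next.panelWidth) return false
  if (!next.isStreaming) return prev.message.content === next.message.content
  const p = prev.message.contentBlocks ?? []
  const n = next.message.contentBlocks ?? []
  if (p.length !== n.length) return false
  return (
    lastContent(p, 'text') === lastContent(n, 'text') &&
    lastContent(p, 'thinking') === lastContent(n, 'thinking')
  )
}

const MessageView = memo(function MessageView({ message }: Props) {
  return null // rendering omitted; this sketch is about the comparator
}, areEqual)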

View File

@@ -15,7 +15,6 @@ const logger = createLogger('useCheckpointManagement')
* @param messageCheckpoints - Checkpoints for this message
* @param onRevertModeChange - Callback for revert mode changes
* @param onEditModeChange - Callback for edit mode changes
* @param onCancelEdit - Callback when edit is cancelled
* @returns Checkpoint management utilities
*/
export function useCheckpointManagement(
@@ -38,13 +37,17 @@ export function useCheckpointManagement(
const { revertToCheckpoint, currentChat } = useCopilotStore()
/** Initiates checkpoint revert confirmation */
/**
* Handles initiating checkpoint revert
*/
const handleRevertToCheckpoint = useCallback(() => {
setShowRestoreConfirmation(true)
onRevertModeChange?.(true)
}, [onRevertModeChange])
/** Confirms and executes checkpoint revert */
/**
* Confirms checkpoint revert and updates state
*/
const handleConfirmRevert = useCallback(async () => {
if (messageCheckpoints.length > 0) {
const latestCheckpoint = messageCheckpoints[0]
@@ -113,13 +116,18 @@ export function useCheckpointManagement(
onRevertModeChange,
])
/** Cancels checkpoint revert */
/**
* Cancels checkpoint revert
*/
const handleCancelRevert = useCallback(() => {
setShowRestoreConfirmation(false)
onRevertModeChange?.(false)
}, [onRevertModeChange])
/** Reverts to checkpoint then proceeds with pending edit */
/**
* Handles "Continue and revert" action for checkpoint discard modal
* Reverts to checkpoint then proceeds with pending edit
*/
const handleContinueAndRevert = useCallback(async () => {
setIsProcessingDiscard(true)
try {
@@ -176,7 +184,9 @@ export function useCheckpointManagement(
}
}, [messageCheckpoints, revertToCheckpoint, message, messages, onEditModeChange, onCancelEdit])
/** Cancels checkpoint discard and clears pending edit */
/**
* Cancels checkpoint discard and clears pending edit
*/
const handleCancelCheckpointDiscard = useCallback(() => {
setShowCheckpointDiscardModal(false)
onEditModeChange?.(false)
@@ -184,11 +194,11 @@ export function useCheckpointManagement(
pendingEditRef.current = null
}, [onEditModeChange, onCancelEdit])
/** Continues with edit without reverting checkpoint */
/**
* Continues with edit WITHOUT reverting checkpoint
*/
const handleContinueWithoutRevert = useCallback(async () => {
setShowCheckpointDiscardModal(false)
onEditModeChange?.(false)
onCancelEdit?.()
if (pendingEditRef.current) {
const { message: msg, fileAttachments, contexts } = pendingEditRef.current
@@ -215,34 +225,43 @@ export function useCheckpointManagement(
}
}, [message, messages, onEditModeChange, onCancelEdit])
/** Handles keyboard events for confirmation dialogs */
/**
* Handles keyboard events for restore confirmation (Escape/Enter)
*/
useEffect(() => {
const isActive = showRestoreConfirmation || showCheckpointDiscardModal
if (!isActive) return
if (!showRestoreConfirmation) return
const handleKeyDown = (event: KeyboardEvent) => {
if (event.defaultPrevented) return
if (event.key === 'Escape') {
if (showRestoreConfirmation) handleCancelRevert()
else handleCancelCheckpointDiscard()
handleCancelRevert()
} else if (event.key === 'Enter') {
event.preventDefault()
if (showRestoreConfirmation) handleConfirmRevert()
else handleContinueAndRevert()
handleConfirmRevert()
}
}
document.addEventListener('keydown', handleKeyDown)
return () => document.removeEventListener('keydown', handleKeyDown)
}, [
showRestoreConfirmation,
showCheckpointDiscardModal,
handleCancelRevert,
handleConfirmRevert,
handleCancelCheckpointDiscard,
handleContinueAndRevert,
])
}, [showRestoreConfirmation, handleCancelRevert, handleConfirmRevert])
/**
* Handles keyboard events for checkpoint discard modal (Escape/Enter)
*/
useEffect(() => {
if (!showCheckpointDiscardModal) return
const handleCheckpointDiscardKeyDown = async (event: KeyboardEvent) => {
if (event.key === 'Escape') {
handleCancelCheckpointDiscard()
} else if (event.key === 'Enter') {
event.preventDefault()
await handleContinueAndRevert()
}
}
document.addEventListener('keydown', handleCheckpointDiscardKeyDown)
return () => document.removeEventListener('keydown', handleCheckpointDiscardKeyDown)
}, [showCheckpointDiscardModal, handleCancelCheckpointDiscard, handleContinueAndRevert])
return {
// State
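The two keydown effects above share the same Escape/Enter shape; a hypothetical helper (not part of this diff) that captures the pattern could look like this.

import { useEffect } from 'react'

// Hypothetical: binds Escape -> cancel and Enter -> confirm while `active` is true.
function useConfirmationKeys(
  active: boolean,
  onCancel: () => void,
  onConfirm: () => void | Promise<void>
) {
  useEffect(() => {
    if (!active) return
    const handleKeyDown = (event: KeyboardEvent) => {
      if (event.defaultPrevented) return
      if (event.key === 'Escape') {
        onCancel()
      } else if (event.key === 'Enter') {
        event.preventDefault()
        void onConfirm()
      }
    }
    document.addEventListener('keydown', handleKeyDown)
    return () => document.removeEventListener('keydown', handleKeyDown)
  }, [active, onCancel, onConfirm])
}

// Usage would mirror the two effects above:
// useConfirmationKeys(showRestoreConfirmation, handleCancelRevert, handleConfirmRevert)
// useConfirmationKeys(showCheckpointDiscardModal, handleCancelCheckpointDiscard, handleContinueAndRevert)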

View File

@@ -2,23 +2,24 @@
import { useCallback, useEffect, useRef, useState } from 'react'
import { createLogger } from '@sim/logger'
import type { ChatContext, CopilotMessage, MessageFileAttachment } from '@/stores/panel'
import type { CopilotMessage } from '@/stores/panel'
import { useCopilotStore } from '@/stores/panel'
const logger = createLogger('useMessageEditing')
/** Ref interface for UserInput component */
interface UserInputRef {
focus: () => void
}
/** Message truncation height in pixels */
/**
* Message truncation height in pixels
*/
const MESSAGE_TRUNCATION_HEIGHT = 60
/** Delay before attaching click-outside listener to avoid immediate trigger */
/**
* Delay before attaching click-outside listener to avoid immediate trigger
*/
const CLICK_OUTSIDE_DELAY = 100
/** Delay before aborting when editing during stream */
/**
* Delay before aborting when editing during stream
*/
const ABORT_DELAY = 100
interface UseMessageEditingProps {
@@ -31,8 +32,8 @@ interface UseMessageEditingProps {
setShowCheckpointDiscardModal: (show: boolean) => void
pendingEditRef: React.MutableRefObject<{
message: string
fileAttachments?: MessageFileAttachment[]
contexts?: ChatContext[]
fileAttachments?: any[]
contexts?: any[]
} | null>
/**
* When true, disables the internal document click-outside handler.
@@ -68,11 +69,13 @@ export function useMessageEditing(props: UseMessageEditingProps) {
const editContainerRef = useRef<HTMLDivElement>(null)
const messageContentRef = useRef<HTMLDivElement>(null)
const userInputRef = useRef<UserInputRef>(null)
const userInputRef = useRef<any>(null)
const { sendMessage, isSendingMessage, abortMessage, currentChat } = useCopilotStore()
/** Checks if message content needs expansion based on height */
/**
* Checks if message content needs expansion based on height
*/
useEffect(() => {
if (messageContentRef.current && message.role === 'user') {
const scrollHeight = messageContentRef.current.scrollHeight
@@ -80,7 +83,9 @@ export function useMessageEditing(props: UseMessageEditingProps) {
}
}, [message.content, message.role])
/** Enters edit mode */
/**
* Handles entering edit mode
*/
const handleEditMessage = useCallback(() => {
setIsEditMode(true)
setIsExpanded(false)
@@ -92,14 +97,18 @@ export function useMessageEditing(props: UseMessageEditingProps) {
}, 0)
}, [message.content, onEditModeChange])
/** Cancels edit mode */
/**
* Handles canceling edit mode
*/
const handleCancelEdit = useCallback(() => {
setIsEditMode(false)
setEditedContent(message.content)
onEditModeChange?.(false)
}, [message.content, onEditModeChange])
/** Handles message click to enter edit mode */
/**
* Handles clicking on message to enter edit mode
*/
const handleMessageClick = useCallback(() => {
if (needsExpansion && !isExpanded) {
setIsExpanded(true)
@@ -107,13 +116,12 @@ export function useMessageEditing(props: UseMessageEditingProps) {
handleEditMessage()
}, [needsExpansion, isExpanded, handleEditMessage])
/** Performs the edit operation - truncates messages after edited message and resends */
/**
* Performs the actual edit operation
* Truncates messages after edited message and resends with same ID
*/
const performEdit = useCallback(
async (
editedMessage: string,
fileAttachments?: MessageFileAttachment[],
contexts?: ChatContext[]
) => {
async (editedMessage: string, fileAttachments?: any[], contexts?: any[]) => {
const currentMessages = messages
const editIndex = currentMessages.findIndex((m) => m.id === message.id)
@@ -126,7 +134,7 @@ export function useMessageEditing(props: UseMessageEditingProps) {
...message,
content: editedMessage,
fileAttachments: fileAttachments || message.fileAttachments,
contexts: contexts || message.contexts,
contexts: contexts || (message as any).contexts,
}
useCopilotStore.setState({ messages: [...truncatedMessages, updatedMessage] })
@@ -145,7 +153,7 @@ export function useMessageEditing(props: UseMessageEditingProps) {
timestamp: m.timestamp,
...(m.contentBlocks && { contentBlocks: m.contentBlocks }),
...(m.fileAttachments && { fileAttachments: m.fileAttachments }),
...(m.contexts && { contexts: m.contexts }),
...((m as any).contexts && { contexts: (m as any).contexts }),
})),
}),
})
@@ -156,7 +164,7 @@ export function useMessageEditing(props: UseMessageEditingProps) {
await sendMessage(editedMessage, {
fileAttachments: fileAttachments || message.fileAttachments,
contexts: contexts || message.contexts,
contexts: contexts || (message as any).contexts,
messageId: message.id,
queueIfBusy: false,
})
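The edit flow in this hunk reduces to: truncate history at the edited message, write the updated copy into the store, and resend under the same message id. A condensed sketch using the store calls visible above, with the send options simplified to the two that matter here.

import type { CopilotMessage } from '@/stores/panel'
import { useCopilotStore } from '@/stores/panel'

async function editAndResend(
  messages: CopilotMessage[],
  message: CopilotMessage,
  editedMessage: string,
  sendMessage: (text: string, opts: { messageId: string; queueIfBusy: boolean }) => Promise<void>
) {
  const editIndex = messages.findIndex((m) => m.id === message.id)
  if (editIndex === -1) return
  // Drop everything after the edited message, then replace it in place.
  const truncated = messages.slice(0, editIndex)
  const updated = { ...message, content: editedMessage }
  useCopilotStore.setState({ messages: [...truncated, updated] })
  // Resend with the original id so the edited message keeps its position.
  await sendMessage(editedMessage, { messageId: message.id, queueIfBusy: false })
}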
@@ -165,13 +173,12 @@ export function useMessageEditing(props: UseMessageEditingProps) {
[messages, message, currentChat, sendMessage, onEditModeChange]
)
/** Submits edited message, checking for checkpoints first */
/**
* Handles submitting edited message
* Checks for checkpoints and shows confirmation if needed
*/
const handleSubmitEdit = useCallback(
async (
editedMessage: string,
fileAttachments?: MessageFileAttachment[],
contexts?: ChatContext[]
) => {
async (editedMessage: string, fileAttachments?: any[], contexts?: any[]) => {
if (!editedMessage.trim()) return
if (isSendingMessage) {
@@ -197,7 +204,9 @@ export function useMessageEditing(props: UseMessageEditingProps) {
]
)
/** Keyboard-only exit (Esc) */
/**
* Keyboard-only exit (Esc). Click-outside is optionally handled by parent.
*/
useEffect(() => {
if (!isEditMode) return
@@ -213,7 +222,9 @@ export function useMessageEditing(props: UseMessageEditingProps) {
}
}, [isEditMode, handleCancelEdit])
/** Optional document-level click-outside handler */
/**
* Optional document-level click-outside handler (disabled when parent manages it).
*/
useEffect(() => {
if (!isEditMode || disableDocumentClickOutside) return

View File

@@ -1,8 +1,7 @@
export * from './chat-history-skeleton'
export * from './copilot-message'
export * from './plan-mode-section'
export * from './queued-messages'
export * from './todo-list'
export * from './tool-call'
export * from './user-input'
export * from './welcome'
export * from './copilot-message/copilot-message'
export * from './plan-mode-section/plan-mode-section'
export * from './queued-messages/queued-messages'
export * from './todo-list/todo-list'
export * from './tool-call/tool-call'
export * from './user-input/user-input'
export * from './welcome/welcome'

View File

@@ -29,7 +29,7 @@ import { Check, GripHorizontal, Pencil, X } from 'lucide-react'
import { Button, Textarea } from '@/components/emcn'
import { Trash } from '@/components/emcn/icons/trash'
import { cn } from '@/lib/core/utils/cn'
import { CopilotMarkdownRenderer } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/components/markdown-renderer'
import CopilotMarkdownRenderer from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/components/markdown-renderer'
/**
* Shared border and background styles

View File

@@ -31,22 +31,21 @@ export function QueuedMessages() {
if (messageQueue.length === 0) return null
return (
<div className='mx-[14px] overflow-hidden rounded-t-[4px] border border-[var(--border)] border-b-0 bg-[var(--bg-secondary)]'>
<div className='mx-2 overflow-hidden rounded-t-lg border border-black/[0.08] border-b-0 bg-[var(--bg-secondary)] dark:border-white/[0.08]'>
{/* Header */}
<button
type='button'
onClick={() => setIsExpanded(!isExpanded)}
className='flex w-full items-center justify-between px-[10px] py-[6px] transition-colors hover:bg-[var(--surface-3)]'
className='flex w-full items-center justify-between px-2.5 py-1.5 transition-colors hover:bg-[var(--bg-tertiary)]'
>
<div className='flex items-center gap-[6px]'>
<div className='flex items-center gap-1.5'>
{isExpanded ? (
<ChevronDown className='h-[14px] w-[14px] text-[var(--text-tertiary)]' />
<ChevronDown className='h-3 w-3 text-[var(--text-tertiary)]' />
) : (
<ChevronRight className='h-[14px] w-[14px] text-[var(--text-tertiary)]' />
<ChevronRight className='h-3 w-3 text-[var(--text-tertiary)]' />
)}
<span className='font-medium text-[12px] text-[var(--text-primary)]'>Queued</span>
<span className='flex-shrink-0 font-medium text-[12px] text-[var(--text-tertiary)]'>
{messageQueue.length}
<span className='font-medium text-[var(--text-secondary)] text-xs'>
{messageQueue.length} Queued
</span>
</div>
</button>
@@ -57,30 +56,30 @@ export function QueuedMessages() {
{messageQueue.map((msg) => (
<div
key={msg.id}
className='group flex items-center gap-[8px] border-[var(--border)] border-t px-[10px] py-[6px] hover:bg-[var(--surface-3)]'
className='group flex items-center gap-2 border-black/[0.04] border-t px-2.5 py-1.5 hover:bg-[var(--bg-tertiary)] dark:border-white/[0.04]'
>
{/* Radio indicator */}
<div className='flex h-[14px] w-[14px] shrink-0 items-center justify-center'>
<div className='h-[10px] w-[10px] rounded-full border border-[var(--text-tertiary)]/50' />
<div className='flex h-3 w-3 shrink-0 items-center justify-center'>
<div className='h-2.5 w-2.5 rounded-full border border-[var(--text-tertiary)]/50' />
</div>
{/* Message content */}
<div className='min-w-0 flex-1'>
<p className='truncate text-[13px] text-[var(--text-primary)]'>{msg.content}</p>
<p className='truncate text-[var(--text-primary)] text-xs'>{msg.content}</p>
</div>
{/* Actions - always visible */}
<div className='flex shrink-0 items-center gap-[4px]'>
<div className='flex shrink-0 items-center gap-0.5'>
<button
type='button'
onClick={(e) => {
e.stopPropagation()
handleSendNow(msg.id)
}}
className='rounded p-[3px] text-[var(--text-tertiary)] transition-colors hover:bg-[var(--bg-quaternary)] hover:text-[var(--text-primary)]'
className='rounded p-0.5 text-[var(--text-tertiary)] transition-colors hover:bg-[var(--bg-quaternary)] hover:text-[var(--text-primary)]'
title='Send now (aborts current stream)'
>
<ArrowUp className='h-[14px] w-[14px]' />
<ArrowUp className='h-3 w-3' />
</button>
<button
type='button'
@@ -88,10 +87,10 @@ export function QueuedMessages() {
e.stopPropagation()
handleRemove(msg.id)
}}
className='rounded p-[3px] text-[var(--text-tertiary)] transition-colors hover:bg-red-500/10 hover:text-red-400'
className='rounded p-0.5 text-[var(--text-tertiary)] transition-colors hover:bg-red-500/10 hover:text-red-400'
title='Remove from queue'
>
<Trash2 className='h-[14px] w-[14px]' />
<Trash2 className='h-3 w-3' />
</button>
</div>
</div>

View File

@@ -15,7 +15,7 @@ import {
hasInterrupt as hasInterruptFromConfig,
isSpecialTool as isSpecialToolFromConfig,
} from '@/lib/copilot/tools/client/ui-config'
import { CopilotMarkdownRenderer } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/components/markdown-renderer'
import CopilotMarkdownRenderer from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/components/markdown-renderer'
import { SmoothStreamingText } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/components/smooth-streaming'
import { ThinkingBlock } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/components/thinking-block'
import { getDisplayValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/workflow-block/workflow-block'
@@ -26,74 +26,30 @@ import { CLASS_TOOL_METADATA } from '@/stores/panel/copilot/store'
import type { SubAgentContentBlock } from '@/stores/panel/copilot/types'
import { useWorkflowStore } from '@/stores/workflows/workflow/store'
/** Plan step can be a string or an object with title and optional plan content */
/**
* Parse special tags from content
*/
/**
* Plan step can be either a string or an object with title and plan
*/
type PlanStep = string | { title: string; plan?: string }
/** Option can be a string or an object with title and optional description */
/**
* Option can be either a string or an object with title and description
*/
type OptionItem = string | { title: string; description?: string }
/** Result of parsing special XML tags from message content */
interface ParsedTags {
/** Parsed plan steps, keyed by step number */
plan?: Record<string, PlanStep>
/** Whether the plan tag is complete (has closing tag) */
planComplete?: boolean
/** Parsed options, keyed by option number */
options?: Record<string, OptionItem>
/** Whether the options tag is complete (has closing tag) */
optionsComplete?: boolean
/** Content with special tags removed */
cleanContent: string
}
/**
* Extracts plan steps from plan_respond tool calls in subagent blocks.
* @param blocks - The subagent content blocks to search
* @returns Object containing steps in the format expected by PlanSteps component, and completion status
*/
function extractPlanFromBlocks(blocks: SubAgentContentBlock[] | undefined): {
steps: Record<string, PlanStep> | undefined
isComplete: boolean
} {
if (!blocks) return { steps: undefined, isComplete: false }
const planRespondBlock = blocks.find(
(b) => b.type === 'subagent_tool_call' && b.toolCall?.name === 'plan_respond'
)
if (!planRespondBlock?.toolCall) {
return { steps: undefined, isComplete: false }
}
const tc = planRespondBlock.toolCall as any
const args = tc.params || tc.parameters || tc.input || tc.arguments || tc.data?.arguments || {}
const stepsArray = args.steps
if (!Array.isArray(stepsArray) || stepsArray.length === 0) {
return { steps: undefined, isComplete: false }
}
const steps: Record<string, PlanStep> = {}
for (const step of stepsArray) {
if (step.number !== undefined && step.title) {
steps[String(step.number)] = step.title
}
}
const isComplete =
planRespondBlock.toolCall.state === ClientToolCallState.success ||
planRespondBlock.toolCall.state === ClientToolCallState.error
return {
steps: Object.keys(steps).length > 0 ? steps : undefined,
isComplete,
}
}
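An illustrative input/output pair for extractPlanFromBlocks, using a made-up plan_respond tool call; the block shape is trimmed to the fields the function reads, and the types come from this module's imports.

const exampleBlocks = [
  {
    type: 'subagent_tool_call',
    toolCall: {
      name: 'plan_respond',
      state: ClientToolCallState.success,
      params: { steps: [{ number: 1, title: 'Add trigger' }, { number: 2, title: 'Wire router' }] },
    },
  },
] as unknown as SubAgentContentBlock[]

// extractPlanFromBlocks(exampleBlocks)
// -> { steps: { '1': 'Add trigger', '2': 'Wire router' }, isComplete: true }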
/**
* Parses partial JSON for streaming options, extracting complete key-value pairs from incomplete JSON.
* @param jsonStr - The potentially incomplete JSON string
* @returns Parsed options record or null if no valid options found
* Try to parse partial JSON for streaming options.
* Attempts to extract complete key-value pairs from incomplete JSON.
*/
function parsePartialOptionsJson(jsonStr: string): Record<string, OptionItem> | null {
// Try parsing as-is first (might be complete)
@@ -104,9 +60,8 @@ function parsePartialOptionsJson(jsonStr: string): Record<string, OptionItem> |
}
// Try to extract complete key-value pairs from partial JSON
// Match patterns like "1": "some text" or "1": {"title": "text", "description": "..."}
// Match patterns like "1": "some text" or "1": {"title": "text"}
const result: Record<string, OptionItem> = {}
// Match complete string values: "key": "value"
const stringPattern = /"(\d+)":\s*"([^"]*?)"/g
let match
@@ -114,24 +69,18 @@ function parsePartialOptionsJson(jsonStr: string): Record<string, OptionItem> |
result[match[1]] = match[2]
}
// Match complete object values with title and optional description
// Pattern matches: "1": {"title": "...", "description": "..."} or "1": {"title": "..."}
const objectPattern =
/"(\d+)":\s*\{\s*"title":\s*"((?:[^"\\]|\\.)*)"\s*(?:,\s*"description":\s*"((?:[^"\\]|\\.)*)")?\s*\}/g
// Match complete object values: "key": {"title": "value"}
const objectPattern = /"(\d+)":\s*\{[^}]*"title":\s*"([^"]*)"[^}]*\}/g
while ((match = objectPattern.exec(jsonStr)) !== null) {
const key = match[1]
const title = match[2].replace(/\\"/g, '"').replace(/\\n/g, '\n')
const description = match[3]?.replace(/\\"/g, '"').replace(/\\n/g, '\n')
result[key] = description ? { title, description } : { title }
result[match[1]] = { title: match[2] }
}
return Object.keys(result).length > 0 ? result : null
}
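An example of what the fallback above recovers from a partial <options> payload whose last entry is still streaming in: JSON.parse fails on the whole string, so only the completed pairs are extracted.

const partial = '{"1": "Ship it as-is", "2": {"title": "Add tests"}, "3": {"ti'

// parsePartialOptionsJson(partial)
// -> { '1': 'Ship it as-is', '2': { title: 'Add tests' } }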
/**
* Parses partial JSON for streaming plan steps, extracting complete key-value pairs from incomplete JSON.
* @param jsonStr - The potentially incomplete JSON string
* @returns Parsed plan steps record or null if no valid steps found
* Try to parse partial JSON for streaming plan steps.
* Attempts to extract complete key-value pairs from incomplete JSON.
*/
function parsePartialPlanJson(jsonStr: string): Record<string, PlanStep> | null {
// Try parsing as-is first (might be complete)
@@ -163,10 +112,7 @@ function parsePartialPlanJson(jsonStr: string): Record<string, PlanStep> | null
}
/**
* Parses special XML tags (`<plan>` and `<options>`) from message content.
* Handles both complete and streaming/incomplete tags.
* @param content - The message content to parse
* @returns Parsed tags with plan, options, and clean content
* Parse <plan> and <options> tags from content
*/
export function parseSpecialTags(content: string): ParsedTags {
const result: ParsedTags = { cleanContent: content }
@@ -174,18 +120,12 @@ export function parseSpecialTags(content: string): ParsedTags {
// Parse <plan> tag - check for complete tag first
const planMatch = content.match(/<plan>([\s\S]*?)<\/plan>/i)
if (planMatch) {
// Always strip the tag from display, even if JSON is invalid
result.cleanContent = result.cleanContent.replace(planMatch[0], '').trim()
try {
result.plan = JSON.parse(planMatch[1])
result.planComplete = true
result.cleanContent = result.cleanContent.replace(planMatch[0], '').trim()
} catch {
// JSON.parse failed - use regex fallback to extract plan from malformed JSON
const fallbackPlan = parsePartialPlanJson(planMatch[1])
if (fallbackPlan) {
result.plan = fallbackPlan
result.planComplete = true
}
// Invalid JSON, ignore
}
} else {
// Check for streaming/incomplete plan tag
@@ -204,18 +144,12 @@ export function parseSpecialTags(content: string): ParsedTags {
// Parse <options> tag - check for complete tag first
const optionsMatch = content.match(/<options>([\s\S]*?)<\/options>/i)
if (optionsMatch) {
// Always strip the tag from display, even if JSON is invalid
result.cleanContent = result.cleanContent.replace(optionsMatch[0], '').trim()
try {
result.options = JSON.parse(optionsMatch[1])
result.optionsComplete = true
result.cleanContent = result.cleanContent.replace(optionsMatch[0], '').trim()
} catch {
// JSON.parse failed - use regex fallback to extract options from malformed JSON
const fallbackOptions = parsePartialOptionsJson(optionsMatch[1])
if (fallbackOptions) {
result.options = fallbackOptions
result.optionsComplete = true
}
// Invalid JSON, ignore
}
} else {
// Check for streaming/incomplete options tag
@@ -239,15 +173,15 @@ export function parseSpecialTags(content: string): ParsedTags {
}
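A round-trip example for parseSpecialTags: the tag body is parsed as JSON and the tag itself is stripped from the displayed text.

const input =
  'Here are two directions we could take.\n<options>{"1": "Keep polling", "2": "Switch to webhooks"}</options>'

// parseSpecialTags(input)
// -> {
//      cleanContent: 'Here are two directions we could take.',
//      options: { '1': 'Keep polling', '2': 'Switch to webhooks' },
//      optionsComplete: true,
//    }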
/**
* Renders workflow plan steps as a numbered to-do list.
* @param steps - Plan steps keyed by step number
* @param streaming - When true, uses smooth streaming animation for step titles
* PlanSteps component renders the workflow plan steps from the plan subagent
* Displays as a to-do list with checkmarks and strikethrough text
*/
function PlanSteps({
steps,
streaming = false,
}: {
steps: Record<string, PlanStep>
/** When true, uses smooth streaming animation for step titles */
streaming?: boolean
}) {
const sortedSteps = useMemo(() => {
@@ -268,7 +202,7 @@ function PlanSteps({
if (sortedSteps.length === 0) return null
return (
<div className='mt-0 overflow-hidden rounded-[6px] border border-[var(--border-1)] bg-[var(--surface-1)]'>
<div className='mt-1.5 overflow-hidden rounded-[6px] border border-[var(--border-1)] bg-[var(--surface-1)]'>
<div className='flex items-center gap-[8px] border-[var(--border-1)] border-b bg-[var(--surface-2)] p-[8px]'>
<LayoutList className='ml-[2px] h-3 w-3 flex-shrink-0 text-[var(--text-tertiary)]' />
<span className='font-medium text-[12px] text-[var(--text-primary)]'>To-dos</span>
@@ -276,7 +210,7 @@ function PlanSteps({
{sortedSteps.length}
</span>
</div>
<div className='flex flex-col gap-[6px] px-[10px] py-[6px]'>
<div className='flex flex-col gap-[6px] px-[10px] py-[8px]'>
{sortedSteps.map(([num, title], index) => {
const isLastStep = index === sortedSteps.length - 1
return (
@@ -300,8 +234,9 @@ function PlanSteps({
}
/**
* Renders selectable options from the agent with keyboard navigation and click selection.
* After selection, shows the chosen option highlighted and others struck through.
* OptionsSelector component renders selectable options from the agent
* Supports keyboard navigation (arrow up/down, enter) and click selection
* After selection, shows the chosen option highlighted and others struck through
*/
export function OptionsSelector({
options,
@@ -309,7 +244,6 @@ export function OptionsSelector({
disabled = false,
enableKeyboardNav = false,
streaming = false,
selectedOptionKey = null,
}: {
options: Record<string, OptionItem>
onSelect: (optionKey: string, optionText: string) => void
@@ -318,8 +252,6 @@ export function OptionsSelector({
enableKeyboardNav?: boolean
/** When true, looks enabled but interaction is disabled (for streaming state) */
streaming?: boolean
/** Pre-selected option key (for restoring selection from history) */
selectedOptionKey?: string | null
}) {
const isInteractionDisabled = disabled || streaming
const sortedOptions = useMemo(() => {
@@ -337,8 +269,8 @@ export function OptionsSelector({
})
}, [options])
const [hoveredIndex, setHoveredIndex] = useState(-1)
const [chosenKey, setChosenKey] = useState<string | null>(selectedOptionKey)
const [hoveredIndex, setHoveredIndex] = useState(0)
const [chosenKey, setChosenKey] = useState<string | null>(null)
const containerRef = useRef<HTMLDivElement>(null)
const isLocked = chosenKey !== null
@@ -348,8 +280,7 @@ export function OptionsSelector({
if (isInteractionDisabled || !enableKeyboardNav || isLocked) return
const handleKeyDown = (e: KeyboardEvent) => {
if (e.defaultPrevented) return
// Only handle if the container or document body is focused (not when typing in input)
const activeElement = document.activeElement
const isInputFocused =
activeElement?.tagName === 'INPUT' ||
@@ -360,14 +291,13 @@ export function OptionsSelector({
if (e.key === 'ArrowDown') {
e.preventDefault()
setHoveredIndex((prev) => (prev < 0 ? 0 : Math.min(prev + 1, sortedOptions.length - 1)))
setHoveredIndex((prev) => Math.min(prev + 1, sortedOptions.length - 1))
} else if (e.key === 'ArrowUp') {
e.preventDefault()
setHoveredIndex((prev) => (prev < 0 ? sortedOptions.length - 1 : Math.max(prev - 1, 0)))
setHoveredIndex((prev) => Math.max(prev - 1, 0))
} else if (e.key === 'Enter') {
e.preventDefault()
const indexToSelect = hoveredIndex < 0 ? 0 : hoveredIndex
const selected = sortedOptions[indexToSelect]
const selected = sortedOptions[hoveredIndex]
if (selected) {
setChosenKey(selected.key)
onSelect(selected.key, selected.title)
@@ -391,7 +321,7 @@ export function OptionsSelector({
if (sortedOptions.length === 0) return null
return (
<div ref={containerRef} className='flex flex-col gap-[4px] pt-[4px]'>
<div ref={containerRef} className='flex flex-col gap-0.5 pb-0.5'>
{sortedOptions.map((option, index) => {
const isHovered = index === hoveredIndex && !isLocked
const isChosen = option.key === chosenKey
@@ -409,9 +339,6 @@ export function OptionsSelector({
onMouseEnter={() => {
if (!isLocked && !streaming) setHoveredIndex(index)
}}
onMouseLeave={() => {
if (!isLocked && !streaming && sortedOptions.length === 1) setHoveredIndex(-1)
}}
className={clsx(
'group flex cursor-pointer items-start gap-2 rounded-[6px] p-1',
'hover:bg-[var(--surface-4)]',
@@ -447,31 +374,30 @@ export function OptionsSelector({
)
}
/** Props for the ToolCall component */
interface ToolCallProps {
/** Tool call data object */
toolCall?: CopilotToolCall
/** Tool call ID for store lookup */
toolCallId?: string
/** Callback when tool call state changes */
onStateChange?: (state: any) => void
/** Whether this tool call is from the current/latest message. Controls shimmer and action buttons. */
isCurrentMessage?: boolean
}
/** Props for the ShimmerOverlayText component */
/**
* Props for shimmer overlay text component.
*/
interface ShimmerOverlayTextProps {
/** Text content to display */
/** The text content to display */
text: string
/** Whether shimmer animation is active */
/** Whether the shimmer animation is active */
active?: boolean
/** Additional class names for the wrapper */
className?: string
/** Whether to use special gradient styling for important actions */
/** Whether to use special gradient styling (for important actions) */
isSpecial?: boolean
}
/** Action verbs at the start of tool display names, highlighted for visual hierarchy */
/**
* Action verbs that appear at the start of tool display names.
* These will be highlighted in a lighter color for better visual hierarchy.
*/
const ACTION_VERBS = [
'Analyzing',
'Analyzed',
@@ -579,8 +505,7 @@ const ACTION_VERBS = [
/**
* Splits text into action verb and remainder for two-tone rendering.
* @param text - The text to split
* @returns Tuple of [actionVerb, remainder] or [null, text] if no match
* Returns [actionVerb, remainder] or [null, text] if no match.
*/
function splitActionVerb(text: string): [string | null, string] {
for (const verb of ACTION_VERBS) {
@@ -600,9 +525,10 @@ function splitActionVerb(text: string): [string | null, string] {
}
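The body of splitActionVerb is elided by the hunk above; a minimal sketch of the split it performs, assuming the leading verb is matched against ACTION_VERBS and peeled off for two-tone rendering (the exact whitespace handling here is an assumption).

function splitActionVerbSketch(text: string): [string | null, string] {
  for (const verb of ACTION_VERBS) {
    if (text.startsWith(`${verb} `)) {
      return [verb, text.slice(verb.length)]
    }
  }
  return [null, text]
}

// splitActionVerbSketch('Analyzing workflow variables') -> ['Analyzing', ' workflow variables']
// splitActionVerbSketch('Custom step')                  -> [null, 'Custom step']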
/**
* Renders text with a shimmer overlay animation when active.
* Special tools use a gradient color; normal tools highlight action verbs.
* Uses CSS truncation to clamp to one line with ellipsis.
* Renders text with a subtle white shimmer overlay when active, creating a skeleton-like
* loading effect that passes over the existing words without replacing them.
* For special tool calls, uses a gradient color. For normal tools, highlights action verbs
* in a lighter color with the rest in default gray.
*/
const ShimmerOverlayText = memo(function ShimmerOverlayText({
text,
@@ -612,13 +538,10 @@ const ShimmerOverlayText = memo(function ShimmerOverlayText({
}: ShimmerOverlayTextProps) {
const [actionVerb, remainder] = splitActionVerb(text)
// Base classes for single-line truncation with ellipsis
const truncateClasses = 'block w-full overflow-hidden text-ellipsis whitespace-nowrap'
// Special tools: use tertiary-2 color for entire text with shimmer
if (isSpecial) {
return (
<span className={`relative ${truncateClasses} ${className || ''}`}>
<span className={`relative inline-block ${className || ''}`}>
<span className='text-[var(--brand-tertiary-2)]'>{text}</span>
{active ? (
<span
@@ -626,7 +549,7 @@ const ShimmerOverlayText = memo(function ShimmerOverlayText({
className='pointer-events-none absolute inset-0 select-none overflow-hidden'
>
<span
className='block overflow-hidden text-ellipsis whitespace-nowrap text-transparent'
className='block text-transparent'
style={{
backgroundImage:
'linear-gradient(90deg, rgba(51,196,129,0) 0%, rgba(255,255,255,0.6) 50%, rgba(51,196,129,0) 100%)',
@@ -657,7 +580,7 @@ const ShimmerOverlayText = memo(function ShimmerOverlayText({
// Light mode: primary (#2d2d2d) vs muted (#737373) for good contrast
// Dark mode: tertiary (#b3b3b3) vs muted (#787878) for good contrast
return (
<span className={`relative ${truncateClasses} ${className || ''}`}>
<span className={`relative inline-block ${className || ''}`}>
{actionVerb ? (
<>
<span className='text-[var(--text-primary)] dark:text-[var(--text-tertiary)]'>
@@ -674,7 +597,7 @@ const ShimmerOverlayText = memo(function ShimmerOverlayText({
className='pointer-events-none absolute inset-0 select-none overflow-hidden'
>
<span
className='block overflow-hidden text-ellipsis whitespace-nowrap text-transparent'
className='block text-transparent'
style={{
backgroundImage:
'linear-gradient(90deg, rgba(255,255,255,0) 0%, rgba(255,255,255,0.85) 50%, rgba(255,255,255,0) 100%)',
@@ -702,9 +625,8 @@ const ShimmerOverlayText = memo(function ShimmerOverlayText({
})
/**
* Gets the collapse header label for completed subagent tools.
* @param toolName - The tool name to get the label for
* @returns The completion label from UI config, defaults to 'Thought'
* Get the outer collapse header label for completed subagent tools.
* Uses the tool's UI config.
*/
function getSubagentCompletionLabel(toolName: string): string {
const labels = getSubagentLabelsFromConfig(toolName, false)
@@ -712,9 +634,8 @@ function getSubagentCompletionLabel(toolName: string): string {
}
/**
* Renders subagent blocks as thinking text within regular tool calls.
* @param blocks - The subagent content blocks to render
* @param isStreaming - Whether streaming animations should be shown (caller should pre-compute currentMessage check)
* SubAgentThinkingContent renders subagent blocks as simple thinking text (ThinkingBlock).
* Used for inline rendering within regular tool calls that have subagent content.
*/
function SubAgentThinkingContent({
blocks,
@@ -733,23 +654,14 @@ function SubAgentThinkingContent({
}
}
// Extract plan from plan_respond tool call (preferred) or fall back to <plan> tags
const { steps: planSteps, isComplete: planComplete } = extractPlanFromBlocks(blocks)
const allParsed = parseSpecialTags(allRawText)
// Prefer plan_respond tool data over <plan> tags
const hasPlan =
!!(planSteps && Object.keys(planSteps).length > 0) ||
!!(allParsed.plan && Object.keys(allParsed.plan).length > 0)
const planToRender = planSteps || allParsed.plan
const isPlanStreaming = planSteps ? !planComplete : isStreaming
if (!cleanText.trim() && !allParsed.plan) return null
if (!cleanText.trim() && !hasPlan) return null
const hasSpecialTags = hasPlan
const hasSpecialTags = !!(allParsed.plan && Object.keys(allParsed.plan).length > 0)
return (
<div className='space-y-[4px]'>
<div className='space-y-1.5'>
{cleanText.trim() && (
<ThinkingBlock
content={cleanText}
@@ -758,34 +670,39 @@ function SubAgentThinkingContent({
hasSpecialTags={hasSpecialTags}
/>
)}
{hasPlan && planToRender && <PlanSteps steps={planToRender} streaming={isPlanStreaming} />}
{allParsed.plan && Object.keys(allParsed.plan).length > 0 && (
<PlanSteps steps={allParsed.plan} streaming={isStreaming} />
)}
</div>
)
}
/** Subagents that collapse into summary headers when done streaming */
/**
* Subagents that should collapse when done streaming.
* Default behavior is to NOT collapse (stay expanded like edit, superagent, info, etc.).
* Only plan, debug, and research collapse into summary headers.
*/
const COLLAPSIBLE_SUBAGENTS = new Set(['plan', 'debug', 'research'])
/**
* Handles rendering of subagent content with streaming and collapse behavior.
* SubagentContentRenderer handles the rendering of subagent content.
* - During streaming: Shows content at top level
* - When done (not streaming): Most subagents stay expanded, only specific ones collapse
* - Exception: plan, debug, research, info subagents collapse into a header
*/
const SubagentContentRenderer = memo(function SubagentContentRenderer({
toolCall,
shouldCollapse,
isCurrentMessage = true,
}: {
toolCall: CopilotToolCall
shouldCollapse: boolean
/** Whether this is from the current/latest message. Controls shimmer animations. */
isCurrentMessage?: boolean
}) {
const [isExpanded, setIsExpanded] = useState(true)
const [duration, setDuration] = useState(0)
const startTimeRef = useRef<number>(Date.now())
const wasStreamingRef = useRef(false)
// Only show streaming animations for current message
const isStreaming = isCurrentMessage && !!toolCall.subAgentStreaming
const isStreaming = !!toolCall.subAgentStreaming
useEffect(() => {
if (isStreaming && !wasStreamingRef.current) {
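The body of this effect is cut off by the hunk; a plausible shape for the duration tracking it implies, using the state and refs declared just above (start the clock when streaming begins, snapshot the elapsed time when it ends). This is a sketch, not the actual implementation.

useEffect(() => {
  if (isStreaming && !wasStreamingRef.current) {
    startTimeRef.current = Date.now() // streaming just started
  }
  if (!isStreaming && wasStreamingRef.current) {
    setDuration(Date.now() - startTimeRef.current) // streaming just finished
  }
  wasStreamingRef.current = isStreaming
}, [isStreaming])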
@@ -827,19 +744,8 @@ const SubagentContentRenderer = memo(function SubagentContentRenderer({
}
const allParsed = parseSpecialTags(allRawText)
// Extract plan from plan_respond tool call (preferred) or fall back to <plan> tags
const { steps: planSteps, isComplete: planComplete } = extractPlanFromBlocks(
toolCall.subAgentBlocks
)
const hasPlan =
!!(planSteps && Object.keys(planSteps).length > 0) ||
!!(allParsed.plan && Object.keys(allParsed.plan).length > 0)
const planToRender = planSteps || allParsed.plan
const isPlanStreaming = planSteps ? !planComplete : isStreaming
const hasSpecialTags = !!(
hasPlan ||
(allParsed.plan && Object.keys(allParsed.plan).length > 0) ||
(allParsed.options && Object.keys(allParsed.options).length > 0)
)
@@ -851,6 +757,8 @@ const SubagentContentRenderer = memo(function SubagentContentRenderer({
const outerLabel = getSubagentCompletionLabel(toolCall.name)
const durationText = `${outerLabel} for ${formatDuration(duration)}`
const hasPlan = allParsed.plan && Object.keys(allParsed.plan).length > 0
const renderCollapsibleContent = () => (
<>
{segments.map((segment, index) => {
@@ -879,11 +787,7 @@ const SubagentContentRenderer = memo(function SubagentContentRenderer({
}
return (
<div key={`tool-${segment.block.toolCall.id || index}`}>
<ToolCall
toolCallId={segment.block.toolCall.id}
toolCall={segment.block.toolCall}
isCurrentMessage={isCurrentMessage}
/>
<ToolCall toolCallId={segment.block.toolCall.id} toolCall={segment.block.toolCall} />
</div>
)
}
@@ -894,9 +798,9 @@ const SubagentContentRenderer = memo(function SubagentContentRenderer({
if (isStreaming || !shouldCollapse) {
return (
<div className='w-full space-y-[4px]'>
<div className='w-full space-y-1.5'>
{renderCollapsibleContent()}
{hasPlan && planToRender && <PlanSteps steps={planToRender} streaming={isPlanStreaming} />}
{hasPlan && <PlanSteps steps={allParsed.plan!} streaming={isStreaming} />}
</div>
)
}
@@ -921,30 +825,30 @@ const SubagentContentRenderer = memo(function SubagentContentRenderer({
<div
className={clsx(
'overflow-hidden transition-all duration-150 ease-out',
isExpanded ? 'mt-1.5 max-h-[5000px] space-y-[4px] opacity-100' : 'max-h-0 opacity-0'
isExpanded ? 'mt-1.5 max-h-[5000px] opacity-100' : 'max-h-0 opacity-0'
)}
>
{renderCollapsibleContent()}
</div>
{hasPlan && planToRender && (
<div className='mt-[6px]'>
<PlanSteps steps={planToRender} />
</div>
)}
{/* Plan stays outside the collapsible */}
{hasPlan && <PlanSteps steps={allParsed.plan!} />}
</div>
)
})
/**
* Determines if a tool call should display with special gradient styling.
* Determines if a tool call is "special" and should display with gradient styling.
* Uses the tool's UI config.
*/
function isSpecialToolCall(toolCall: CopilotToolCall): boolean {
return isSpecialToolFromConfig(toolCall.name)
}
/**
* Displays a summary of workflow edits with added, edited, and deleted blocks.
* WorkflowEditSummary shows a full-width summary of workflow edits (like Cursor's diff).
* Displays: workflow name with stats (+N green, N orange, -N red)
* Expands inline on click to show individual blocks with their icons.
*/
const WorkflowEditSummary = memo(function WorkflowEditSummary({
toolCall,
@@ -1202,7 +1106,9 @@ const WorkflowEditSummary = memo(function WorkflowEditSummary({
)
})
/** Checks if a tool is server-side executed (not a client tool) */
/**
* Checks if a tool is an integration tool (server-side executed, not a client tool)
*/
function isIntegrationTool(toolName: string): boolean {
return !CLASS_TOOL_METADATA[toolName]
}
@@ -1348,7 +1254,9 @@ function getDisplayName(toolCall: CopilotToolCall): string {
return `${stateVerb} ${formattedName}`
}
/** Gets verb prefix based on tool call state */
/**
* Get verb prefix based on tool state
*/
function getStateVerb(state: string): string {
switch (state) {
case 'pending':
@@ -1367,7 +1275,8 @@ function getStateVerb(state: string): string {
}
/**
* Formats tool name for display (e.g., "google_calendar_list_events" -> "Google Calendar List Events")
* Format tool name for display
* e.g., "google_calendar_list_events" -> "Google Calendar List Events"
*/
function formatToolName(name: string): string {
const baseName = name.replace(/_v\d+$/, '')
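Only the first line of formatToolName is visible in this hunk; a minimal sketch of the transformation the doc comment describes (strip a trailing version suffix, then title-case the underscore-separated parts), with the rest of the body assumed.

function formatToolNameSketch(name: string): string {
  const baseName = name.replace(/_v\d+$/, '')
  return baseName
    .split('_')
    .map((part) => part.charAt(0).toUpperCase() + part.slice(1))
    .join(' ')
}

// formatToolNameSketch('google_calendar_list_events') -> 'Google Calendar List Events'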
@@ -1443,7 +1352,7 @@ function RunSkipButtons({
// Standardized buttons for all interrupt tools: Allow, (Always Allow for client tools only), Skip
return (
<div className='mt-[10px] flex gap-[6px]'>
<div className='mt-1.5 flex gap-[6px]'>
<Button onClick={onRun} disabled={isProcessing} variant='tertiary'>
{isProcessing ? 'Allowing...' : 'Allow'}
</Button>
@@ -1459,12 +1368,7 @@ function RunSkipButtons({
)
}
export function ToolCall({
toolCall: toolCallProp,
toolCallId,
onStateChange,
isCurrentMessage = true,
}: ToolCallProps) {
export function ToolCall({ toolCall: toolCallProp, toolCallId, onStateChange }: ToolCallProps) {
const [, forceUpdate] = useState({})
// Get live toolCall from store to ensure we have the latest state
const effectiveId = toolCallId || toolCallProp?.id
@@ -1478,7 +1382,9 @@ export function ToolCall({
const isExpandablePending =
toolCall?.state === 'pending' &&
(toolCall.name === 'make_api_request' || toolCall.name === 'set_global_workflow_variables')
(toolCall.name === 'make_api_request' ||
toolCall.name === 'set_global_workflow_variables' ||
toolCall.name === 'run_workflow')
const [expanded, setExpanded] = useState(isExpandablePending)
const [showRemoveAutoAllow, setShowRemoveAutoAllow] = useState(false)
@@ -1506,11 +1412,7 @@ export function ToolCall({
if (
toolCall.name === 'checkoff_todo' ||
toolCall.name === 'mark_todo_in_progress' ||
toolCall.name === 'tool_search_tool_regex' ||
toolCall.name === 'user_memory' ||
toolCall.name === 'edit_respond' ||
toolCall.name === 'debug_respond' ||
toolCall.name === 'plan_respond'
toolCall.name === 'tool_search_tool_regex'
)
return null
@@ -1553,7 +1455,6 @@ export function ToolCall({
<SubagentContentRenderer
toolCall={toolCall}
shouldCollapse={COLLAPSIBLE_SUBAGENTS.has(toolCall.name)}
isCurrentMessage={isCurrentMessage}
/>
)
}
@@ -1582,34 +1483,37 @@ export function ToolCall({
}
// Check if tool has params table config (meaning it's expandable)
const hasParamsTable = !!getToolUIConfig(toolCall.name)?.paramsTable
const isRunWorkflow = toolCall.name === 'run_workflow'
const isExpandableTool =
hasParamsTable ||
toolCall.name === 'make_api_request' ||
toolCall.name === 'set_global_workflow_variables'
toolCall.name === 'set_global_workflow_variables' ||
toolCall.name === 'run_workflow'
const showButtons = isCurrentMessage && shouldShowRunSkipButtons(toolCall)
const showButtons = shouldShowRunSkipButtons(toolCall)
// Check UI config for secondary action - only show for current message tool calls
// Check UI config for secondary action
const toolUIConfig = getToolUIConfig(toolCall.name)
const secondaryAction = toolUIConfig?.secondaryAction
const showSecondaryAction = secondaryAction?.showInStates.includes(
toolCall.state as ClientToolCallState
)
const isExecuting =
toolCall.state === (ClientToolCallState.executing as any) ||
toolCall.state === ('executing' as any)
// Legacy fallbacks for tools that haven't migrated to UI config
const showMoveToBackground =
isCurrentMessage &&
((showSecondaryAction && secondaryAction?.text === 'Move to Background') ||
(!secondaryAction && toolCall.name === 'run_workflow' && isExecuting))
showSecondaryAction && secondaryAction?.text === 'Move to Background'
? true
: !secondaryAction &&
toolCall.name === 'run_workflow' &&
(toolCall.state === (ClientToolCallState.executing as any) ||
toolCall.state === ('executing' as any))
const showWake =
isCurrentMessage &&
((showSecondaryAction && secondaryAction?.text === 'Wake') ||
(!secondaryAction && toolCall.name === 'sleep' && isExecuting))
showSecondaryAction && secondaryAction?.text === 'Wake'
? true
: !secondaryAction &&
toolCall.name === 'sleep' &&
(toolCall.state === (ClientToolCallState.executing as any) ||
toolCall.state === ('executing' as any))
const handleStateChange = (state: any) => {
forceUpdate({})
@@ -1623,8 +1527,6 @@ export function ToolCall({
toolCall.state === ClientToolCallState.pending ||
toolCall.state === ClientToolCallState.executing
const shouldShowShimmer = isCurrentMessage && isLoadingState
const isSpecial = isSpecialToolCall(toolCall)
const renderPendingDetails = () => {
@@ -1934,7 +1836,7 @@ export function ToolCall({
</span>
</div>
{/* Input entries */}
<div className='flex flex-col pt-[6px]'>
<div className='flex flex-col'>
{inputEntries.map(([key, value], index) => {
const isComplex = isComplexValue(value)
const displayValue = formatValueForDisplay(value)
@@ -1943,8 +1845,8 @@ export function ToolCall({
<div
key={key}
className={clsx(
'flex flex-col gap-[6px] px-[10px] pb-[6px]',
index > 0 && 'mt-[6px] border-[var(--border-1)] border-t pt-[6px]'
'flex flex-col gap-1.5 px-[10px] py-[8px]',
index > 0 && 'border-[var(--border-1)] border-t'
)}
>
{/* Input key */}
@@ -2036,14 +1938,14 @@ export function ToolCall({
<div className={isEnvVarsClickable ? 'cursor-pointer' : ''} onClick={handleEnvVarsClick}>
<ShimmerOverlayText
text={displayName}
active={shouldShowShimmer}
active={isLoadingState}
isSpecial={isSpecial}
className='font-[470] font-season text-[var(--text-secondary)] text-sm dark:text-[var(--text-muted)]'
/>
</div>
<div className='mt-[10px]'>{renderPendingDetails()}</div>
<div className='mt-1.5'>{renderPendingDetails()}</div>
{showRemoveAutoAllow && isAutoAllowed && (
<div className='mt-[10px]'>
<div className='mt-1.5'>
<Button
onClick={async () => {
await removeAutoAllowedTool(toolCall.name)
@@ -2068,7 +1970,7 @@ export function ToolCall({
{toolCall.subAgentBlocks && toolCall.subAgentBlocks.length > 0 && (
<SubAgentThinkingContent
blocks={toolCall.subAgentBlocks}
isStreaming={isCurrentMessage && toolCall.subAgentStreaming}
isStreaming={toolCall.subAgentStreaming}
/>
)}
</div>
@@ -2093,18 +1995,18 @@ export function ToolCall({
>
<ShimmerOverlayText
text={displayName}
active={shouldShowShimmer}
active={isLoadingState}
isSpecial={isSpecial}
className='font-[470] font-season text-[var(--text-secondary)] text-sm dark:text-[var(--text-muted)]'
/>
</div>
{code && (
<div className='mt-[10px]'>
<div className='mt-1.5'>
<Code.Viewer code={code} language='javascript' showGutter className='min-h-0' />
</div>
)}
{showRemoveAutoAllow && isAutoAllowed && (
<div className='mt-[10px]'>
<div className='mt-1.5'>
<Button
onClick={async () => {
await removeAutoAllowedTool(toolCall.name)
@@ -2129,14 +2031,14 @@ export function ToolCall({
{toolCall.subAgentBlocks && toolCall.subAgentBlocks.length > 0 && (
<SubAgentThinkingContent
blocks={toolCall.subAgentBlocks}
isStreaming={isCurrentMessage && toolCall.subAgentStreaming}
isStreaming={toolCall.subAgentStreaming}
/>
)}
</div>
)
}
const isToolNameClickable = (!isRunWorkflow && isExpandableTool) || isAutoAllowed
const isToolNameClickable = isExpandableTool || isAutoAllowed
const handleToolNameClick = () => {
if (isExpandableTool) {
@@ -2147,7 +2049,6 @@ export function ToolCall({
}
const isEditWorkflow = toolCall.name === 'edit_workflow'
const shouldShowDetails = isRunWorkflow || (isExpandableTool && expanded)
const hasOperations = Array.isArray(params.operations) && params.operations.length > 0
const hideTextForEditWorkflow = isEditWorkflow && hasOperations
@@ -2157,15 +2058,15 @@ export function ToolCall({
<div className={isToolNameClickable ? 'cursor-pointer' : ''} onClick={handleToolNameClick}>
<ShimmerOverlayText
text={displayName}
active={shouldShowShimmer}
active={isLoadingState}
isSpecial={isSpecial}
className='font-[470] font-season text-[var(--text-secondary)] text-sm dark:text-[var(--text-muted)]'
/>
</div>
)}
{shouldShowDetails && <div className='mt-[10px]'>{renderPendingDetails()}</div>}
{isExpandableTool && expanded && <div className='mt-1.5'>{renderPendingDetails()}</div>}
{showRemoveAutoAllow && isAutoAllowed && (
<div className='mt-[10px]'>
<div className='mt-1.5'>
<Button
onClick={async () => {
await removeAutoAllowedTool(toolCall.name)
@@ -2186,7 +2087,7 @@ export function ToolCall({
editedParams={editedParams}
/>
) : showMoveToBackground ? (
<div className='mt-[10px]'>
<div className='mt-1.5'>
<Button
onClick={async () => {
try {
@@ -2207,7 +2108,7 @@ export function ToolCall({
</Button>
</div>
) : showWake ? (
<div className='mt-[10px]'>
<div className='mt-1.5'>
<Button
onClick={async () => {
try {
@@ -2240,7 +2141,7 @@ export function ToolCall({
{toolCall.subAgentBlocks && toolCall.subAgentBlocks.length > 0 && (
<SubAgentThinkingContent
blocks={toolCall.subAgentBlocks}
isStreaming={isCurrentMessage && toolCall.subAgentStreaming}
isStreaming={toolCall.subAgentStreaming}
/>
)}
</div>

View File

@@ -1,127 +0,0 @@
'use client'
import { ArrowUp, Image, Loader2 } from 'lucide-react'
import { Badge, Button } from '@/components/emcn'
import { cn } from '@/lib/core/utils/cn'
import { ModeSelector } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/components/mode-selector/mode-selector'
import { ModelSelector } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/components/model-selector/model-selector'
interface BottomControlsProps {
mode: 'ask' | 'build' | 'plan'
onModeChange?: (mode: 'ask' | 'build' | 'plan') => void
selectedModel: string
onModelSelect: (model: string) => void
isNearTop: boolean
disabled: boolean
hideModeSelector: boolean
canSubmit: boolean
isLoading: boolean
isAborting: boolean
showAbortButton: boolean
onSubmit: () => void
onAbort: () => void
onFileSelect: () => void
}
/**
* Bottom controls section of the user input
* Contains mode selector, model selector, file attachment button, and submit/abort buttons
*/
export function BottomControls({
mode,
onModeChange,
selectedModel,
onModelSelect,
isNearTop,
disabled,
hideModeSelector,
canSubmit,
isLoading,
isAborting,
showAbortButton,
onSubmit,
onAbort,
onFileSelect,
}: BottomControlsProps) {
return (
<div className='flex items-center justify-between gap-2'>
{/* Left side: Mode Selector + Model Selector */}
<div className='flex min-w-0 flex-1 items-center gap-[8px]'>
{!hideModeSelector && (
<ModeSelector
mode={mode}
onModeChange={onModeChange}
isNearTop={isNearTop}
disabled={disabled}
/>
)}
<ModelSelector
selectedModel={selectedModel}
isNearTop={isNearTop}
onModelSelect={onModelSelect}
/>
</div>
{/* Right side: Attach Button + Send Button */}
<div className='flex flex-shrink-0 items-center gap-[10px]'>
<Badge
onClick={onFileSelect}
title='Attach file'
className={cn(
'cursor-pointer rounded-[6px] border-0 bg-transparent p-[0px] dark:bg-transparent',
disabled && 'cursor-not-allowed opacity-50'
)}
>
<Image className='!h-3.5 !w-3.5 scale-x-110' />
</Badge>
{showAbortButton ? (
<Button
onClick={onAbort}
disabled={isAborting}
className={cn(
'h-[20px] w-[20px] rounded-full border-0 p-0 transition-colors',
!isAborting
? 'bg-[var(--c-383838)] hover:bg-[var(--c-575757)] dark:bg-[var(--c-E0E0E0)] dark:hover:bg-[var(--c-CFCFCF)]'
: 'bg-[var(--c-383838)] dark:bg-[var(--c-E0E0E0)]'
)}
title='Stop generation'
>
{isAborting ? (
<Loader2 className='block h-[13px] w-[13px] animate-spin text-white dark:text-black' />
) : (
<svg
className='block h-[13px] w-[13px] fill-white dark:fill-black'
viewBox='0 0 24 24'
xmlns='http://www.w3.org/2000/svg'
>
<rect x='4' y='4' width='16' height='16' rx='3' ry='3' />
</svg>
)}
</Button>
) : (
<Button
onClick={onSubmit}
disabled={!canSubmit}
className={cn(
'h-[22px] w-[22px] rounded-full border-0 p-0 transition-colors',
canSubmit
? 'bg-[var(--c-383838)] hover:bg-[var(--c-575757)] dark:bg-[var(--c-E0E0E0)] dark:hover:bg-[var(--c-CFCFCF)]'
: 'bg-[var(--c-808080)] dark:bg-[var(--c-808080)]'
)}
>
{isLoading ? (
<Loader2 className='block h-3.5 w-3.5 animate-spin text-white dark:text-black' />
) : (
<ArrowUp
className='block h-3.5 w-3.5 text-white dark:text-black'
strokeWidth={2.25}
/>
)}
</Button>
)}
</div>
</div>
)
}

View File

@@ -1,7 +1,6 @@
export { AttachedFilesDisplay } from './attached-files-display'
export { BottomControls } from './bottom-controls'
export { ContextPills } from './context-pills'
export { type MentionFolderNav, MentionMenu } from './mention-menu'
export { ModeSelector } from './mode-selector'
export { ModelSelector } from './model-selector'
export { type SlashFolderNav, SlashMenu } from './slash-menu'
export { AttachedFilesDisplay } from './attached-files-display/attached-files-display'
export { ContextPills } from './context-pills/context-pills'
export { type MentionFolderNav, MentionMenu } from './mention-menu/mention-menu'
export { ModeSelector } from './mode-selector/mode-selector'
export { ModelSelector } from './model-selector/model-selector'
export { type SlashFolderNav, SlashMenu } from './slash-menu/slash-menu'

View File

@@ -1,6 +1,5 @@
import { useCallback, useEffect, useRef, useState } from 'react'
import {
escapeRegex,
filterOutContext,
isContextAlreadySelected,
} from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/utils'
@@ -23,6 +22,9 @@ interface UseContextManagementProps {
export function useContextManagement({ message, initialContexts }: UseContextManagementProps) {
const [selectedContexts, setSelectedContexts] = useState<ChatContext[]>(initialContexts ?? [])
const initializedRef = useRef(false)
const escapeRegex = useCallback((value: string) => {
return value.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
}, [])
// Initialize with initial contexts when they're first provided (for edit mode)
useEffect(() => {

View File

@@ -9,19 +9,19 @@ import {
useState,
} from 'react'
import { createLogger } from '@sim/logger'
import { AtSign } from 'lucide-react'
import { ArrowUp, AtSign, Image, Loader2 } from 'lucide-react'
import { useParams } from 'next/navigation'
import { createPortal } from 'react-dom'
import { Badge, Button, Textarea } from '@/components/emcn'
import { useSession } from '@/lib/auth/auth-client'
import type { CopilotModelId } from '@/lib/copilot/models'
import { cn } from '@/lib/core/utils/cn'
import {
AttachedFilesDisplay,
BottomControls,
ContextPills,
type MentionFolderNav,
MentionMenu,
ModelSelector,
ModeSelector,
type SlashFolderNav,
SlashMenu,
} from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/components'
@@ -44,10 +44,6 @@ import {
useTextareaAutoResize,
} from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/hooks'
import type { MessageFileAttachment } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/hooks/use-file-attachments'
import {
computeMentionHighlightRanges,
extractContextTokens,
} from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/utils'
import type { ChatContext } from '@/stores/panel'
import { useCopilotStore } from '@/stores/panel'
@@ -267,6 +263,7 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
if (q && q.length > 0) {
void mentionData.ensurePastChatsLoaded()
// workflows and workflow-blocks auto-load from stores
void mentionData.ensureKnowledgeLoaded()
void mentionData.ensureBlocksLoaded()
void mentionData.ensureTemplatesLoaded()
@@ -309,7 +306,7 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
size: f.size,
}))
onSubmit(trimmedMessage, fileAttachmentsForApi, contextManagement.selectedContexts)
onSubmit(trimmedMessage, fileAttachmentsForApi, contextManagement.selectedContexts as any)
const shouldClearInput = clearOnSubmit && !options.preserveInput && !overrideMessage
if (shouldClearInput) {
@@ -660,7 +657,7 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
const handleModelSelect = useCallback(
(model: string) => {
setSelectedModel(model as CopilotModelId)
setSelectedModel(model as any)
},
[setSelectedModel]
)
@@ -680,17 +677,15 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
return <span>{displayText}</span>
}
const tokens = extractContextTokens(contexts)
const ranges = computeMentionHighlightRanges(message, tokens)
const elements: React.ReactNode[] = []
const ranges = mentionTokensWithContext.computeMentionRanges()
if (ranges.length === 0) {
const displayText = message.endsWith('\n') ? `${message}\u200B` : message
return <span>{displayText}</span>
}
const elements: React.ReactNode[] = []
let lastIndex = 0
for (let i = 0; i < ranges.length; i++) {
const range = ranges[i]
@@ -699,12 +694,13 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
elements.push(<span key={`text-${i}-${lastIndex}-${range.start}`}>{before}</span>)
}
const mentionText = message.slice(range.start, range.end)
elements.push(
<span
key={`mention-${i}-${range.start}-${range.end}`}
className='rounded-[4px] bg-[rgba(50,189,126,0.65)] py-[1px]'
>
{range.token}
{mentionText}
</span>
)
lastIndex = range.end
@@ -717,7 +713,7 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
}
return elements.length > 0 ? elements : <span>{'\u00A0'}</span>
}, [message, contextManagement.selectedContexts])
}, [message, contextManagement.selectedContexts, mentionTokensWithContext])
return (
<div
@@ -859,22 +855,87 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
</div>
{/* Bottom Row: Mode Selector + Model Selector + Attach Button + Send Button */}
<BottomControls
mode={mode}
onModeChange={onModeChange}
selectedModel={selectedModel}
onModelSelect={handleModelSelect}
isNearTop={isNearTop}
disabled={disabled}
hideModeSelector={hideModeSelector}
canSubmit={canSubmit}
isLoading={isLoading}
isAborting={isAborting}
showAbortButton={Boolean(showAbortButton)}
onSubmit={() => void handleSubmit()}
onAbort={handleAbort}
onFileSelect={fileAttachments.handleFileSelect}
/>
<div className='flex items-center justify-between gap-2'>
{/* Left side: Mode Selector + Model Selector */}
<div className='flex min-w-0 flex-1 items-center gap-[8px]'>
{!hideModeSelector && (
<ModeSelector
mode={mode}
onModeChange={onModeChange}
isNearTop={isNearTop}
disabled={disabled}
/>
)}
<ModelSelector
selectedModel={selectedModel}
isNearTop={isNearTop}
onModelSelect={handleModelSelect}
/>
</div>
{/* Right side: Attach Button + Send Button */}
<div className='flex flex-shrink-0 items-center gap-[10px]'>
<Badge
onClick={fileAttachments.handleFileSelect}
title='Attach file'
className={cn(
'cursor-pointer rounded-[6px] border-0 bg-transparent p-[0px] dark:bg-transparent',
disabled && 'cursor-not-allowed opacity-50'
)}
>
<Image className='!h-3.5 !w-3.5 scale-x-110' />
</Badge>
{showAbortButton ? (
<Button
onClick={handleAbort}
disabled={isAborting}
className={cn(
'h-[20px] w-[20px] rounded-full border-0 p-0 transition-colors',
!isAborting
? 'bg-[var(--c-383838)] hover:bg-[var(--c-575757)] dark:bg-[var(--c-E0E0E0)] dark:hover:bg-[var(--c-CFCFCF)]'
: 'bg-[var(--c-383838)] dark:bg-[var(--c-E0E0E0)]'
)}
title='Stop generation'
>
{isAborting ? (
<Loader2 className='block h-[13px] w-[13px] animate-spin text-white dark:text-black' />
) : (
<svg
className='block h-[13px] w-[13px] fill-white dark:fill-black'
viewBox='0 0 24 24'
xmlns='http://www.w3.org/2000/svg'
>
<rect x='4' y='4' width='16' height='16' rx='3' ry='3' />
</svg>
)}
</Button>
) : (
<Button
onClick={() => {
void handleSubmit()
}}
disabled={!canSubmit}
className={cn(
'h-[22px] w-[22px] rounded-full border-0 p-0 transition-colors',
canSubmit
? 'bg-[var(--c-383838)] hover:bg-[var(--c-575757)] dark:bg-[var(--c-E0E0E0)] dark:hover:bg-[var(--c-CFCFCF)]'
: 'bg-[var(--c-808080)] dark:bg-[var(--c-808080)]'
)}
>
{isLoading ? (
<Loader2 className='block h-3.5 w-3.5 animate-spin text-white dark:text-black' />
) : (
<ArrowUp
className='block h-3.5 w-3.5 text-white dark:text-black'
strokeWidth={2.25}
/>
)}
</Button>
)}
</div>
</div>
{/* Hidden File Input - enabled during streaming so users can prepare images for the next message */}
<input

View File

@@ -1,4 +1,3 @@
import type { ReactNode } from 'react'
import {
FOLDER_CONFIGS,
type MentionFolderId,
@@ -6,102 +5,6 @@ import {
import type { MentionDataReturn } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/user-input/hooks/use-mention-data'
import type { ChatContext } from '@/stores/panel'
/**
* Escapes special regex characters in a string
* @param value - String to escape
* @returns Escaped string safe for use in RegExp
*/
export function escapeRegex(value: string): string {
return value.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
}
/**
* Extracts mention tokens from contexts for display/matching
* Filters out current_workflow contexts and builds prefixed labels
* @param contexts - Array of chat contexts
* @returns Array of prefixed token strings (e.g., "@workflow", "/web")
*/
export function extractContextTokens(contexts: ChatContext[]): string[] {
return contexts
.filter((c) => c.kind !== 'current_workflow' && c.label)
.map((c) => {
const prefix = c.kind === 'slash_command' ? '/' : '@'
return `${prefix}${c.label}`
})
}
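A brief usage sketch of extractContextTokens with hypothetical contexts (only kind and label matter here; the objects are not full ChatContext values):

const contexts = [
  { kind: 'workflow', label: 'Invoice Sync' },
  { kind: 'slash_command', label: 'web' },
  { kind: 'current_workflow', label: 'ignored' },
] as unknown as ChatContext[]

extractContextTokens(contexts) // ["@Invoice Sync", "/web"]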
/**
* Mention range for text highlighting
*/
export interface MentionHighlightRange {
start: number
end: number
token: string
}
/**
* Computes mention ranges in text for highlighting
* @param text - Text to search
* @param tokens - Prefixed tokens to find (e.g., "@workflow", "/web")
* @returns Array of ranges with start, end, and matched token
*/
export function computeMentionHighlightRanges(
text: string,
tokens: string[]
): MentionHighlightRange[] {
if (!tokens.length || !text) return []
const pattern = new RegExp(`(${tokens.map(escapeRegex).join('|')})`, 'g')
const ranges: MentionHighlightRange[] = []
let match: RegExpExecArray | null
while ((match = pattern.exec(text)) !== null) {
ranges.push({
start: match.index,
end: match.index + match[0].length,
token: match[0],
})
}
return ranges
}
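Continuing that hypothetical example, computeMentionHighlightRanges returns the start/end offsets of each token occurrence:

const text = 'Check @Invoice Sync and run /web for me'
computeMentionHighlightRanges(text, ['@Invoice Sync', '/web'])
// [
//   { start: 6,  end: 19, token: '@Invoice Sync' },
//   { start: 28, end: 32, token: '/web' },
// ]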
/**
* Builds React nodes with highlighted mention tokens
* @param text - Text to render
* @param contexts - Chat contexts to highlight
* @param createHighlightSpan - Function to create highlighted span element
* @returns Array of React nodes with highlighted mentions
*/
export function buildMentionHighlightNodes(
text: string,
contexts: ChatContext[],
createHighlightSpan: (token: string, key: string) => ReactNode
): ReactNode[] {
const tokens = extractContextTokens(contexts)
if (!tokens.length) return [text]
const ranges = computeMentionHighlightRanges(text, tokens)
if (!ranges.length) return [text]
const nodes: ReactNode[] = []
let lastIndex = 0
for (const range of ranges) {
if (range.start > lastIndex) {
nodes.push(text.slice(lastIndex, range.start))
}
nodes.push(createHighlightSpan(range.token, `mention-${range.start}-${range.end}`))
lastIndex = range.end
}
if (lastIndex < text.length) {
nodes.push(text.slice(lastIndex))
}
return nodes
}
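And a sketch of how buildMentionHighlightNodes could be consumed with the same hypothetical text and contexts (the span styling is illustrative, borrowed from the inline renderer earlier in this diff):

const nodes = buildMentionHighlightNodes(text, contexts, (token, key) => (
  <span key={key} className='rounded-[4px] bg-[rgba(50,189,126,0.65)] py-[1px]'>
    {token}
  </span>
))
// ['Check ', <span>@Invoice Sync</span>, ' and run ', <span>/web</span>, ' for me']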
/**
* Gets the data array for a folder ID from mentionData.
* Uses FOLDER_CONFIGS as the source of truth for key mapping.

View File

@@ -2,7 +2,9 @@
import { Button } from '@/components/emcn'
/** Props for the Welcome component */
/**
* Props for the CopilotWelcome component
*/
interface WelcomeProps {
/** Callback when a suggested question is clicked */
onQuestionClick?: (question: string) => void
@@ -10,7 +12,13 @@ interface WelcomeProps {
mode?: 'ask' | 'build' | 'plan'
}
/** Welcome screen displaying suggested questions based on current mode */
/**
* Welcome screen component for the copilot
* Displays suggested questions and capabilities based on current mode
*
* @param props - Component props
* @returns Welcome screen UI
*/
export function Welcome({ onQuestionClick, mode = 'ask' }: WelcomeProps) {
const capabilities =
mode === 'build'

View File

@@ -24,7 +24,6 @@ import {
import { Trash } from '@/components/emcn/icons/trash'
import { cn } from '@/lib/core/utils/cn'
import {
ChatHistorySkeleton,
CopilotMessage,
PlanModeSection,
QueuedMessages,
@@ -41,7 +40,6 @@ import {
useTodoManagement,
} from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/hooks'
import { useScrollManagement } from '@/app/workspace/[workspaceId]/w/[workflowId]/hooks'
import type { ChatContext } from '@/stores/panel'
import { useCopilotStore } from '@/stores/panel'
import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
@@ -76,12 +74,10 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
const copilotContainerRef = useRef<HTMLDivElement>(null)
const cancelEditCallbackRef = useRef<(() => void) | null>(null)
const [editingMessageId, setEditingMessageId] = useState<string | null>(null)
const [isEditingMessage, setIsEditingMessage] = useState(false)
const [revertingMessageId, setRevertingMessageId] = useState<string | null>(null)
const [isHistoryDropdownOpen, setIsHistoryDropdownOpen] = useState(false)
// Derived state - editing when there's an editingMessageId
const isEditingMessage = editingMessageId !== null
const { activeWorkflowId } = useWorkflowRegistry()
const {
@@ -110,9 +106,9 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
areChatsFresh,
workflowId: copilotWorkflowId,
setPlanTodos,
closePlanTodos,
clearPlanArtifact,
savePlanArtifact,
setSelectedModel,
loadAutoAllowedTools,
} = useCopilotStore()
@@ -130,7 +126,7 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
// Handle scroll management (80px stickiness for copilot)
const { scrollAreaRef, scrollToBottom } = useScrollManagement(messages, isSendingMessage, {
stickinessThreshold: 40,
stickinessThreshold: 80,
})
// Handle chat history grouping
@@ -150,10 +146,15 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
isSendingMessage,
showPlanTodos,
planTodos,
setPlanTodos,
})
/** Gets markdown content for design document section (available in all modes once created) */
/**
* Get markdown content for design document section
* Available in all modes once created
*/
const designDocumentContent = useMemo(() => {
// Use streaming content if available
if (streamingPlanContent) {
logger.info('[DesignDocument] Using streaming plan content', {
contentLength: streamingPlanContent.length,
@@ -164,7 +165,9 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
return ''
}, [streamingPlanContent])
/** Focuses the copilot input */
/**
* Helper function to focus the copilot input
*/
const focusInput = useCallback(() => {
userInputRef.current?.focus()
}, [])
@@ -178,14 +181,20 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
currentInputValue: inputValue,
})
/** Auto-scrolls to bottom when chat loads */
/**
* Auto-scroll to bottom when chat loads in
*/
useEffect(() => {
if (isInitialized && messages.length > 0) {
scrollToBottom()
}
}, [isInitialized, messages.length, scrollToBottom])
/** Cleanup on unmount - aborts active streaming. Uses refs to avoid stale closures */
/**
* Cleanup on component unmount (page refresh, navigation, etc.)
* Uses a ref to track sending state to avoid stale closure issues
* Note: Parent workflow.tsx also has useStreamCleanup for page-level cleanup
*/
const isSendingRef = useRef(isSendingMessage)
isSendingRef.current = isSendingMessage
const abortMessageRef = useRef(abortMessage)
@@ -193,15 +202,19 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
useEffect(() => {
return () => {
// Use refs to check current values, not stale closure values
if (isSendingRef.current) {
abortMessageRef.current()
logger.info('Aborted active message streaming due to component unmount')
}
}
// Empty deps - only run cleanup on actual unmount, not on re-renders
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [])
/** Cancels edit mode when clicking outside the current edit area */
/**
* Container-level click capture to cancel edit mode when clicking outside the current edit area
*/
const handleCopilotClickCapture = useCallback(
(event: ReactMouseEvent<HTMLDivElement>) => {
if (!isEditingMessage) return
@@ -230,7 +243,10 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
[isEditingMessage, editingMessageId]
)
/** Creates a new chat session and focuses the input */
/**
* Handles creating a new chat session
* Focuses the input after creation
*/
const handleStartNewChat = useCallback(() => {
createNewChat()
logger.info('Started new chat')
@@ -240,7 +256,10 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
}, 100)
}, [createNewChat])
/** Sets the input value and focuses the textarea */
/**
* Sets the input value and focuses the textarea
* @param value - The value to set in the input
*/
const handleSetInputValueAndFocus = useCallback(
(value: string) => {
setInputValue(value)
@@ -251,7 +270,7 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
[setInputValue]
)
/** Exposes imperative functions to parent */
// Expose functions to parent
useImperativeHandle(
ref,
() => ({
@@ -262,7 +281,10 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
[handleStartNewChat, handleSetInputValueAndFocus, focusInput]
)
/** Aborts current message streaming and collapses todos if shown */
/**
* Handles aborting the current message streaming
* Collapses todos if they are currently shown
*/
const handleAbort = useCallback(() => {
abortMessage()
if (showPlanTodos) {
@@ -270,20 +292,20 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
}
}, [abortMessage, showPlanTodos])
/** Closes the plan todos section and clears the todos */
const handleClosePlanTodos = useCallback(() => {
closePlanTodos()
setPlanTodos([])
}, [closePlanTodos, setPlanTodos])
/** Handles message submission to the copilot */
/**
* Handles message submission to the copilot
* @param query - The message text to send
* @param fileAttachments - Optional file attachments
* @param contexts - Optional context references
*/
const handleSubmit = useCallback(
async (query: string, fileAttachments?: MessageFileAttachment[], contexts?: ChatContext[]) => {
async (query: string, fileAttachments?: MessageFileAttachment[], contexts?: any[]) => {
// Allow submission even when isSendingMessage - store will queue the message
if (!query || !activeWorkflowId) return
if (showPlanTodos) {
setPlanTodos([])
const store = useCopilotStore.getState()
store.setPlanTodos([])
}
try {
@@ -297,25 +319,37 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
logger.error('Failed to send message:', error)
}
},
[activeWorkflowId, sendMessage, showPlanTodos, setPlanTodos]
[activeWorkflowId, sendMessage, showPlanTodos]
)
/** Handles message edit mode changes */
/**
* Handles message edit mode changes
* @param messageId - ID of the message being edited
* @param isEditing - Whether edit mode is active
*/
const handleEditModeChange = useCallback(
(messageId: string, isEditing: boolean, cancelCallback?: () => void) => {
setEditingMessageId(isEditing ? messageId : null)
setIsEditingMessage(isEditing)
cancelEditCallbackRef.current = isEditing ? cancelCallback || null : null
logger.info('Edit mode changed', { messageId, isEditing, willDimMessages: isEditing })
},
[]
)
/** Handles checkpoint revert mode changes */
/**
* Handles checkpoint revert mode changes
* @param messageId - ID of the message being reverted
* @param isReverting - Whether revert mode is active
*/
const handleRevertModeChange = useCallback((messageId: string, isReverting: boolean) => {
setRevertingMessageId(isReverting ? messageId : null)
}, [])
/** Handles chat deletion */
/**
* Handles chat deletion
* @param chatId - ID of the chat to delete
*/
const handleDeleteChat = useCallback(
async (chatId: string) => {
try {
@@ -327,15 +361,38 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
[deleteChat]
)
/** Handles history dropdown opening state, loads chats if needed (non-blocking) */
/**
* Handles history dropdown opening state
* Loads chats if needed when dropdown opens (non-blocking)
* @param open - Whether the dropdown is open
*/
const handleHistoryDropdownOpen = useCallback(
(open: boolean) => {
setIsHistoryDropdownOpen(open)
// Fire hook without awaiting - prevents blocking and state issues
handleHistoryDropdownOpenHook(open)
},
[handleHistoryDropdownOpenHook]
)
/**
* Skeleton loading component for chat history
*/
const ChatHistorySkeleton = () => (
<>
<PopoverSection>
<div className='h-3 w-12 animate-pulse rounded bg-muted/40' />
</PopoverSection>
<div className='flex flex-col gap-0.5'>
{[1, 2, 3].map((i) => (
<div key={i} className='flex h-[25px] items-center px-[6px]'>
<div className='h-3 w-full animate-pulse rounded bg-muted/40' />
</div>
))}
</div>
</>
)
return (
<>
<div
@@ -474,18 +531,21 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
className='h-full overflow-y-auto overflow-x-hidden px-[8px]'
>
<div
className={`w-full max-w-full space-y-[8px] overflow-hidden py-[8px] ${
className={`w-full max-w-full space-y-4 overflow-hidden py-[8px] ${
showPlanTodos && planTodos.length > 0 ? 'pb-14' : 'pb-10'
}`}
>
{messages.map((message, index) => {
// Determine if this message should be dimmed
let isDimmed = false
// Dim messages after the one being edited
if (editingMessageId) {
const editingIndex = messages.findIndex((m) => m.id === editingMessageId)
isDimmed = editingIndex !== -1 && index > editingIndex
}
// Also dim messages after the one showing restore confirmation
if (!isDimmed && revertingMessageId) {
const revertingIndex = messages.findIndex(
(m) => m.id === revertingMessageId
@@ -493,6 +553,7 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
isDimmed = revertingIndex !== -1 && index > revertingIndex
}
// Get checkpoint count for this message to force re-render when it changes
const checkpointCount = messageCheckpoints[message.id]?.length || 0
return (
@@ -527,7 +588,11 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
<TodoList
todos={planTodos}
collapsed={todosCollapsed}
onClose={handleClosePlanTodos}
onClose={() => {
const store = useCopilotStore.getState()
store.closePlanTodos?.()
useCopilotStore.setState({ planTodos: [] })
}}
/>
</div>
)}

View File

@@ -24,7 +24,9 @@ export function useChatHistory(props: UseChatHistoryProps) {
const { chats, activeWorkflowId, copilotWorkflowId, loadChats, areChatsFresh, isSendingMessage } =
props
/** Groups chats by time period (Today, Yesterday, This Week, etc.) */
/**
* Groups chats by time period (Today, Yesterday, This Week, etc.)
*/
const groupedChats = useMemo(() => {
if (!activeWorkflowId || copilotWorkflowId !== activeWorkflowId || chats.length === 0) {
return []
@@ -66,21 +68,18 @@ export function useChatHistory(props: UseChatHistoryProps) {
}
})
for (const groupName of Object.keys(groups)) {
groups[groupName].sort((a, b) => {
const dateA = new Date(a.updatedAt).getTime()
const dateB = new Date(b.updatedAt).getTime()
return dateB - dateA
})
}
return Object.entries(groups).filter(([, chats]) => chats.length > 0)
}, [chats, activeWorkflowId, copilotWorkflowId])
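The bucketing step itself falls outside the hunk above; a minimal sketch of how chats could be assigned to the named time periods, assuming simple local-day boundaries (ignoring DST edge cases; 'Older' is a placeholder bucket, not necessarily the repo's label):

function groupLabelForDate(updatedAt: string, now = new Date()): string {
  const startOfToday = new Date(now.getFullYear(), now.getMonth(), now.getDate())
  const dayMs = 24 * 60 * 60 * 1000
  const date = new Date(updatedAt)
  if (date >= startOfToday) return 'Today'
  if (date >= new Date(startOfToday.getTime() - dayMs)) return 'Yesterday'
  if (date >= new Date(startOfToday.getTime() - 7 * dayMs)) return 'This Week'
  return 'Older'
}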
/** Handles history dropdown opening and loads chats if needed (non-blocking) */
/**
* Handles history dropdown opening and loads chats if needed
* Does not await loading - fires in background to avoid blocking UI
*/
const handleHistoryDropdownOpen = useCallback(
(open: boolean) => {
// Only load if opening dropdown AND we don't have fresh chats AND not streaming
if (open && activeWorkflowId && !isSendingMessage && !areChatsFresh(activeWorkflowId)) {
// Fire in background, don't await - same pattern as old panel
loadChats(false).catch((error) => {
logger.error('Failed to load chat history:', error)
})

View File

@@ -38,7 +38,11 @@ export function useCopilotInitialization(props: UseCopilotInitializationProps) {
const lastWorkflowIdRef = useRef<string | null>(null)
const hasMountedRef = useRef(false)
/** Initialize on mount - loads chats if needed. Never loads during streaming */
/**
* Initialize on mount - only load chats if needed, don't force refresh
* This prevents unnecessary reloads when the component remounts (e.g., hot reload)
* Never loads during message streaming to prevent interrupting active conversations
*/
useEffect(() => {
if (activeWorkflowId && !hasMountedRef.current && !isSendingMessage) {
hasMountedRef.current = true
@@ -46,12 +50,19 @@ export function useCopilotInitialization(props: UseCopilotInitializationProps) {
lastWorkflowIdRef.current = null
setCopilotWorkflowId(activeWorkflowId)
// Use false to let the store decide if a reload is needed based on cache
loadChats(false)
}
}, [activeWorkflowId, setCopilotWorkflowId, loadChats, isSendingMessage])
/** Handles genuine workflow changes, preventing re-init on every render */
/**
* Initialize the component - only on mount and genuine workflow changes
* Prevents re-initialization on every render or tab switch
* Never reloads during message streaming to preserve active conversations
*/
useEffect(() => {
// Handle genuine workflow changes (not initial mount, not same workflow)
// Only reload if not currently streaming to avoid interrupting conversations
if (
activeWorkflowId &&
activeWorkflowId !== lastWorkflowIdRef.current &&
@@ -69,23 +80,7 @@ export function useCopilotInitialization(props: UseCopilotInitializationProps) {
loadChats(false)
}
if (
activeWorkflowId &&
!isLoadingChats &&
chatsLoadedForWorkflow !== null &&
chatsLoadedForWorkflow !== activeWorkflowId &&
!isSendingMessage
) {
logger.info('Chats loaded for wrong workflow, reloading', {
loaded: chatsLoadedForWorkflow,
active: activeWorkflowId,
})
setIsInitialized(false)
lastWorkflowIdRef.current = activeWorkflowId
setCopilotWorkflowId(activeWorkflowId)
loadChats(false)
}
// Mark as initialized when chats are loaded for the active workflow
if (
activeWorkflowId &&
!isLoadingChats &&
@@ -105,7 +100,9 @@ export function useCopilotInitialization(props: UseCopilotInitializationProps) {
isSendingMessage,
])
/** Load auto-allowed tools once on mount */
/**
* Load auto-allowed tools once on mount
*/
const hasLoadedAutoAllowedToolsRef = useRef(false)
useEffect(() => {
if (hasMountedRef.current && !hasLoadedAutoAllowedToolsRef.current) {

View File

@@ -6,6 +6,7 @@ interface UseTodoManagementProps {
isSendingMessage: boolean
showPlanTodos: boolean
planTodos: Array<{ id: string; content: string; completed?: boolean }>
setPlanTodos: (todos: any[]) => void
}
/**
@@ -15,12 +16,14 @@ interface UseTodoManagementProps {
* @returns Todo management utilities
*/
export function useTodoManagement(props: UseTodoManagementProps) {
const { isSendingMessage, showPlanTodos, planTodos } = props
const { isSendingMessage, showPlanTodos, planTodos, setPlanTodos } = props
const [todosCollapsed, setTodosCollapsed] = useState(false)
const wasSendingRef = useRef(false)
/** Auto-collapse todos when stream completes */
/**
* Auto-collapse todos when stream completes. Do not prune items.
*/
useEffect(() => {
if (wasSendingRef.current && !isSendingMessage && showPlanTodos) {
setTodosCollapsed(true)
@@ -28,7 +31,9 @@ export function useTodoManagement(props: UseTodoManagementProps) {
wasSendingRef.current = isSendingMessage
}, [isSendingMessage, showPlanTodos])
/** Reset collapsed state when todos first appear */
/**
* Reset collapsed state when todos first appear
*/
useEffect(() => {
if (showPlanTodos && planTodos.length > 0) {
if (isSendingMessage) {

View File

@@ -452,6 +452,39 @@ console.log(limits);`
</div>
)}
{/* <div>
<div className='mb-[6.5px] flex items-center justify-between'>
<Label className='block pl-[2px] font-medium text-[13px] text-[var(--text-primary)]'>
URL
</Label>
<Tooltip.Root>
<Tooltip.Trigger asChild>
<Button
variant='ghost'
onClick={() => handleCopy('endpoint', info.endpoint)}
aria-label='Copy endpoint'
className='!p-1.5 -my-1.5'
>
{copied.endpoint ? (
<Check className='h-3 w-3' />
) : (
<Clipboard className='h-3 w-3' />
)}
</Button>
</Tooltip.Trigger>
<Tooltip.Content>
<span>{copied.endpoint ? 'Copied' : 'Copy'}</span>
</Tooltip.Content>
</Tooltip.Root>
</div>
<Code.Viewer
code={info.endpoint}
language='javascript'
wrapText
className='!min-h-0 rounded-[4px] border border-[var(--border-1)]'
/>
</div> */}
<div>
<div className='mb-[6.5px] flex items-center justify-between'>
<Label className='block pl-[2px] font-medium text-[13px] text-[var(--text-primary)]'>

View File

@@ -1,260 +0,0 @@
'use client'
import { useCallback, useEffect, useMemo, useRef, useState } from 'react'
import {
Badge,
Button,
Input,
Label,
Modal,
ModalBody,
ModalContent,
ModalFooter,
ModalHeader,
Textarea,
} from '@/components/emcn'
import { normalizeInputFormatValue } from '@/lib/workflows/input-format'
import { isValidStartBlockType } from '@/lib/workflows/triggers/start-block-types'
import type { InputFormatField } from '@/lib/workflows/types'
import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
import { useSubBlockStore } from '@/stores/workflows/subblock/store'
import { useWorkflowStore } from '@/stores/workflows/workflow/store'
type NormalizedField = InputFormatField & { name: string }
interface ApiInfoModalProps {
open: boolean
onOpenChange: (open: boolean) => void
workflowId: string
}
export function ApiInfoModal({ open, onOpenChange, workflowId }: ApiInfoModalProps) {
const blocks = useWorkflowStore((state) => state.blocks)
const setValue = useSubBlockStore((state) => state.setValue)
const subBlockValues = useSubBlockStore((state) =>
workflowId ? (state.workflowValues[workflowId] ?? {}) : {}
)
const workflowMetadata = useWorkflowRegistry((state) =>
workflowId ? state.workflows[workflowId] : undefined
)
const updateWorkflow = useWorkflowRegistry((state) => state.updateWorkflow)
const [description, setDescription] = useState('')
const [paramDescriptions, setParamDescriptions] = useState<Record<string, string>>({})
const [isSaving, setIsSaving] = useState(false)
const [showUnsavedChangesAlert, setShowUnsavedChangesAlert] = useState(false)
const initialDescriptionRef = useRef('')
const initialParamDescriptionsRef = useRef<Record<string, string>>({})
const starterBlockId = useMemo(() => {
for (const [blockId, block] of Object.entries(blocks)) {
if (!block || typeof block !== 'object') continue
const blockType = (block as { type?: string }).type
if (blockType && isValidStartBlockType(blockType)) {
return blockId
}
}
return null
}, [blocks])
const inputFormat = useMemo((): NormalizedField[] => {
if (!starterBlockId) return []
const storeValue = subBlockValues[starterBlockId]?.inputFormat
const normalized = normalizeInputFormatValue(storeValue) as NormalizedField[]
if (normalized.length > 0) return normalized
const startBlock = blocks[starterBlockId]
const blockValue = startBlock?.subBlocks?.inputFormat?.value
return normalizeInputFormatValue(blockValue) as NormalizedField[]
}, [starterBlockId, subBlockValues, blocks])
useEffect(() => {
if (open) {
const normalizedDesc = workflowMetadata?.description?.toLowerCase().trim()
const isDefaultDescription =
!workflowMetadata?.description ||
workflowMetadata.description === workflowMetadata.name ||
normalizedDesc === 'new workflow' ||
normalizedDesc === 'your first workflow - start building here!'
const initialDescription = isDefaultDescription ? '' : workflowMetadata?.description || ''
setDescription(initialDescription)
initialDescriptionRef.current = initialDescription
const descriptions: Record<string, string> = {}
for (const field of inputFormat) {
if (field.description) {
descriptions[field.name] = field.description
}
}
setParamDescriptions(descriptions)
initialParamDescriptionsRef.current = { ...descriptions }
}
}, [open, workflowMetadata, inputFormat])
const hasChanges = useMemo(() => {
if (description.trim() !== initialDescriptionRef.current.trim()) return true
for (const field of inputFormat) {
const currentValue = (paramDescriptions[field.name] || '').trim()
const initialValue = (initialParamDescriptionsRef.current[field.name] || '').trim()
if (currentValue !== initialValue) return true
}
return false
}, [description, paramDescriptions, inputFormat])
const handleParamDescriptionChange = (fieldName: string, value: string) => {
setParamDescriptions((prev) => ({
...prev,
[fieldName]: value,
}))
}
const handleCloseAttempt = useCallback(() => {
if (hasChanges && !isSaving) {
setShowUnsavedChangesAlert(true)
} else {
onOpenChange(false)
}
}, [hasChanges, isSaving, onOpenChange])
const handleDiscardChanges = useCallback(() => {
setShowUnsavedChangesAlert(false)
setDescription(initialDescriptionRef.current)
setParamDescriptions({ ...initialParamDescriptionsRef.current })
onOpenChange(false)
}, [onOpenChange])
const handleSave = useCallback(async () => {
if (!workflowId) return
const activeWorkflowId = useWorkflowRegistry.getState().activeWorkflowId
if (activeWorkflowId !== workflowId) {
return
}
setIsSaving(true)
try {
if (description.trim() !== (workflowMetadata?.description || '')) {
updateWorkflow(workflowId, { description: description.trim() || 'New workflow' })
}
if (starterBlockId) {
const updatedValue = inputFormat.map((field) => ({
...field,
description: paramDescriptions[field.name]?.trim() || undefined,
}))
setValue(starterBlockId, 'inputFormat', updatedValue)
}
onOpenChange(false)
} finally {
setIsSaving(false)
}
}, [
workflowId,
description,
workflowMetadata,
updateWorkflow,
starterBlockId,
inputFormat,
paramDescriptions,
setValue,
onOpenChange,
])
return (
<>
<Modal open={open} onOpenChange={(openState) => !openState && handleCloseAttempt()}>
<ModalContent className='max-w-[480px]'>
<ModalHeader>
<span>Edit API Info</span>
</ModalHeader>
<ModalBody className='space-y-[12px]'>
<div>
<Label className='mb-[6.5px] block pl-[2px] font-medium text-[13px] text-[var(--text-primary)]'>
Description
</Label>
<Textarea
placeholder='Describe what this workflow API does...'
className='min-h-[80px] resize-none'
value={description}
onChange={(e) => setDescription(e.target.value)}
/>
</div>
{inputFormat.length > 0 && (
<div>
<Label className='mb-[6.5px] block pl-[2px] font-medium text-[13px] text-[var(--text-primary)]'>
Parameters ({inputFormat.length})
</Label>
<div className='flex flex-col gap-[8px]'>
{inputFormat.map((field) => (
<div
key={field.name}
className='overflow-hidden rounded-[4px] border border-[var(--border-1)]'
>
<div className='flex items-center justify-between bg-[var(--surface-4)] px-[10px] py-[5px]'>
<div className='flex min-w-0 flex-1 items-center gap-[8px]'>
<span className='block truncate font-medium text-[14px] text-[var(--text-tertiary)]'>
{field.name}
</span>
<Badge size='sm'>{field.type || 'string'}</Badge>
</div>
</div>
<div className='border-[var(--border-1)] border-t px-[10px] pt-[6px] pb-[10px]'>
<div className='flex flex-col gap-[6px]'>
<Label className='text-[13px]'>Description</Label>
<Input
value={paramDescriptions[field.name] || ''}
onChange={(e) =>
handleParamDescriptionChange(field.name, e.target.value)
}
placeholder={`Enter description for ${field.name}`}
/>
</div>
</div>
</div>
))}
</div>
</div>
)}
</ModalBody>
<ModalFooter>
<Button variant='default' onClick={handleCloseAttempt} disabled={isSaving}>
Cancel
</Button>
<Button variant='tertiary' onClick={handleSave} disabled={isSaving || !hasChanges}>
{isSaving ? 'Saving...' : 'Save'}
</Button>
</ModalFooter>
</ModalContent>
</Modal>
<Modal open={showUnsavedChangesAlert} onOpenChange={setShowUnsavedChangesAlert}>
<ModalContent className='max-w-[400px]'>
<ModalHeader>
<span>Unsaved Changes</span>
</ModalHeader>
<ModalBody>
<p className='text-[14px] text-[var(--text-secondary)]'>
You have unsaved changes. Are you sure you want to discard them?
</p>
</ModalBody>
<ModalFooter>
<Button variant='default' onClick={() => setShowUnsavedChangesAlert(false)}>
Keep Editing
</Button>
<Button variant='destructive' onClick={handleDiscardChanges}>
Discard Changes
</Button>
</ModalFooter>
</ModalContent>
</Modal>
</>
)
}

View File

@@ -283,7 +283,7 @@ export function GeneralDeploy({
<ModalContent size='sm'>
<ModalHeader>Promote to live</ModalHeader>
<ModalBody>
<p className='text-[12px] text-[var(--text-secondary)]'>
<p className='text-[12px] text-[var(--text-tertiary)]'>
Are you sure you want to promote{' '}
<span className='font-medium text-[var(--text-primary)]'>
{versionToPromoteInfo?.name || `v${versionToPromote}`}

Some files were not shown because too many files have changed in this diff.