v0.6.40: mothership tool loop, new skills, agiloft, STS, IAM integrations, jira forms endpoints

Waleed
2026-04-13 22:26:19 -07:00
committed by GitHub
732 changed files with 82643 additions and 23090 deletions


@@ -0,0 +1,25 @@
---
name: cleanup
description: Run all code quality skills in sequence — effects, memo, callbacks, state, React Query, and emcn design review
---
# Cleanup
Arguments:
- scope: what to review (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Steps
Run each of these skills in order on the specified scope, passing through the scope and fix arguments. After each skill completes, move to the next. Do not skip any.
1. `/you-might-not-need-an-effect $ARGUMENTS`
2. `/you-might-not-need-a-memo $ARGUMENTS`
3. `/you-might-not-need-a-callback $ARGUMENTS`
4. `/you-might-not-need-state $ARGUMENTS`
5. `/react-query-best-practices $ARGUMENTS`
6. `/emcn-design-review $ARGUMENTS`
After all skills have run, output a summary of what was found and fixed (or proposed) across all six passes.


@@ -0,0 +1,335 @@
---
name: emcn-design-review
description: Review UI code for alignment with the emcn design system — components, tokens, patterns, and conventions
---
# EMCN Design Review
Arguments:
- scope: what to review (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses **emcn**, a custom component library built on Radix UI primitives with CVA (class-variance-authority) variants and CSS variable design tokens. All UI must use emcn components and tokens — never raw HTML elements or hardcoded colors.
## Steps
1. Read the emcn barrel export at `apps/sim/components/emcn/components/index.ts` to know what's available
2. Read `apps/sim/app/_styles/globals.css` for the full set of CSS variable tokens
3. Analyze the specified scope against every rule below
4. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.
---
## Imports
- Import components from `@/components/emcn`, never from subpaths
- Import icons from `@/components/emcn/icons` or `lucide-react`
- Import `cn` from `@/lib/core/utils/cn` for conditional class merging
- Import app-specific wrappers (Select, VerifiedBadge) from `@/components/ui`
```tsx
// Good
import { Button, Modal, Badge } from '@/components/emcn'
// Bad
import { Button } from '@/components/emcn/components/button/button'
```
---
## Design Tokens (CSS Variables)
Never use raw color values. Always use CSS variable tokens via Tailwind arbitrary values: `text-[var(--text-primary)]`, not `text-gray-500` or `#333`. The CSS variable pattern is canonical (1,700+ uses) — do not use Tailwind semantic classes like `text-muted-foreground`.
### Text hierarchy
| Token | Use |
|-------|-----|
| `text-[var(--text-primary)]` | Main content text |
| `text-[var(--text-secondary)]` | Secondary/supporting text |
| `text-[var(--text-tertiary)]` | Tertiary text |
| `text-[var(--text-muted)]` | Disabled, placeholder text |
| `text-[var(--text-icon)]` | Icon tinting |
| `text-[var(--text-inverse)]` | Text on dark backgrounds |
| `text-[var(--text-error)]` | Error/warning messages |
### Surfaces (elevation)
| Token | Use |
|-------|-----|
| `bg-[var(--bg)]` | Page background |
| `bg-[var(--surface-2)]` through `bg-[var(--surface-7)]` | Increasing elevation |
| `bg-[var(--surface-hover)]` | Hover state backgrounds |
| `bg-[var(--surface-active)]` | Active/selected backgrounds |
### Borders
| Token | Use |
|-------|-----|
| `border-[var(--border)]` | Default borders |
| `border-[var(--border-1)]` | Stronger borders (inputs, cards) |
| `border-[var(--border-muted)]` | Subtle dividers |
### Status
| Token | Use |
|-------|-----|
| `--success` | Success states |
| `--error` | Error states |
| `--caution` | Warning states |
### Brand
| Token | Use |
|-------|-----|
| `--brand-secondary` | Brand color |
| `--brand-accent` | Accent/CTA color |
### Shadows
Use shadow tokens, never raw box-shadow values:
- `shadow-subtle`, `shadow-medium`, `shadow-overlay`
- `shadow-kbd`, `shadow-card`
### Z-Index
Use z-index tokens for layering:
- `z-[var(--z-dropdown)]` (100), `z-[var(--z-modal)]` (200), `z-[var(--z-popover)]` (300), `z-[var(--z-tooltip)]` (400), `z-[var(--z-toast)]` (500)
---
## Component Usage Rules
### Buttons
Available variants: `default`, `primary`, `destructive`, `ghost`, `outline`, `active`, `secondary`, `tertiary`, `subtle`, `ghost-secondary`, `3d`
| Action type | Variant | Frequency |
|-------------|---------|-----------|
| Toolbar, icon-only, utility actions | `ghost` | Most common (28%) |
| Primary action (create, save, submit) | `primary` | Very common (24%) |
| Cancel, close, secondary action | `default` | Common |
| Delete, remove, destructive action | `destructive` | Targeted use only |
| Active/selected state | `active` | Targeted use only |
| Toggle, mode switch | `outline` | Moderate |
Sizes: `sm` (compact, 32% of buttons) or `md` (default, used when no size specified). Never create custom button styles — use an existing variant.
Buttons without an explicit variant prop get `default` styling. This is acceptable for cancel/secondary actions.
### Modals (Dialogs)
Use `Modal` + subcomponents. Never build custom dialog overlays.
```tsx
<Modal open={open} onOpenChange={setOpen}>
<ModalContent size="sm">
<ModalHeader>Title</ModalHeader>
<ModalBody>Content</ModalBody>
<ModalFooter>
<Button variant="default" onClick={() => setOpen(false)}>Cancel</Button>
<Button variant="primary" onClick={handleSubmit}>Save</Button>
</ModalFooter>
</ModalContent>
</Modal>
```
Modal sizes by frequency: `sm` (440px, most common — confirmations and simple dialogs), `md` (500px, forms), `lg` (600px, content-heavy), `xl` (800px, rare), `full` (1200px, rare).
Footer buttons: Cancel on left (`variant="default"`), primary action on right. This pattern is followed 100% across the codebase.
### Delete/Remove Confirmations
Always use Modal with `size="sm"`. The established pattern:
```tsx
<Modal open={open} onOpenChange={setOpen}>
<ModalContent size="sm">
<ModalHeader>Delete {itemType}</ModalHeader>
<ModalBody>
<p>Description of consequences</p>
<p className="text-[var(--text-error)]">Warning about irreversibility</p>
</ModalBody>
<ModalFooter>
<Button variant="default" onClick={() => setOpen(false)}>Cancel</Button>
<Button variant="destructive" onClick={handleDelete} disabled={isDeleting}>
Delete
</Button>
</ModalFooter>
</ModalContent>
</Modal>
```
Rules:
- Title: "Delete {ItemType}" or "Remove {ItemType}" (use "Remove" for membership/association changes)
- Include consequence description
- Use `text-[var(--text-error)]` for warning text when the action is irreversible
- `variant="destructive"` for the action button (100% compliance)
- `variant="default"` for cancel (100% compliance)
- Cancel left, destructive right (100% compliance)
- For high-risk deletes (workspaces), require typing the name to confirm
- Include recovery info if soft-delete: "You can restore it from Recently Deleted in Settings"
### Toast Notifications
Use the imperative `toast` API from `@/components/emcn`. Never build custom notification UI.
```tsx
import { toast } from '@/components/emcn'
toast.success('Item saved')
toast.error('Something went wrong')
toast.success('Deleted', { action: { label: 'Undo', onClick: handleUndo } })
```
Variants: `default`, `success`, `error`. Auto-dismiss after 5s. Supports optional action buttons with callbacks.
### Badges
Use semantic color variants for status:
| Status | Variant | Usage |
|--------|---------|-------|
| Error, failed, disconnected | `red` | Most common (15 uses) |
| Metadata, roles, auth types, scopes | `gray-secondary` | Very common (12 uses) |
| Type annotations (TS types, field types) | `type` | Very common (12 uses) |
| Success, active, enabled, running | `green` | Common (7 uses) |
| Neutral, default, unknown | `gray` | Common (6 uses) |
| Outline, parameters, public | `outline` | Moderate (6 uses) |
| Warning, processing | `amber` | Moderate (5 uses) |
| Paused, warning | `orange` | Occasional |
| Info, queued | `blue` | Occasional |
| Data types (arrays) | `purple` | Occasional |
| Generic with border | `default` | Occasional |
Use `dot` prop for status indicators (19 instances in codebase). `icon` prop is available but rarely used.
### Tooltips
Use `Tooltip` from emcn with namespace pattern:
```tsx
<Tooltip.Root>
<Tooltip.Trigger asChild>
<Button variant="ghost">{icon}</Button>
</Tooltip.Trigger>
<Tooltip.Content>Helpful text</Tooltip.Content>
</Tooltip.Root>
```
Use tooltips for icon-only buttons and truncated text. Don't add tooltips to self-explanatory elements.
### Popovers
Use for filters, option menus, and nested navigation:
```tsx
<Popover open={open} onOpenChange={setOpen} size="sm">
<PopoverTrigger asChild>
<Button variant="ghost">Trigger</Button>
</PopoverTrigger>
<PopoverContent side="bottom" align="end" minWidth={160}>
<PopoverSection>Section Title</PopoverSection>
<PopoverItem active={isActive} onClick={handleClick}>
Item Label
</PopoverItem>
<PopoverDivider />
</PopoverContent>
</Popover>
```
### Dropdown Menus
Use for context menus and action menus:
```tsx
<DropdownMenu>
<DropdownMenuTrigger asChild>
<Button variant="ghost">
<MoreHorizontal className="h-[14px] w-[14px]" />
</Button>
</DropdownMenuTrigger>
<DropdownMenuContent align="end">
<DropdownMenuItem onClick={handleEdit}>Edit</DropdownMenuItem>
<DropdownMenuSeparator />
<DropdownMenuItem onClick={handleDelete} className="text-[var(--text-error)]">
Delete
</DropdownMenuItem>
</DropdownMenuContent>
</DropdownMenu>
```
Destructive items go last, after a separator, in error color.
### Forms
Use `FormField` wrapper for labeled inputs:
```tsx
<FormField label="Name" htmlFor="name" error={errors.name} optional>
<Input id="name" value={name} onChange={e => setName(e.target.value)} />
</FormField>
```
Rules:
- Use `Input` from emcn, never raw `<input>` (exception: hidden file inputs)
- Use `Textarea` from emcn, never raw `<textarea>`
- Use `FormField` for label + input + error layout
- Mark optional fields with `optional` prop
- Show errors inline below the input
- Use `Combobox` for searchable selects
- Use `TagInput` for multi-value inputs
### Loading States
Use `Skeleton` for content placeholders:
```tsx
<Skeleton className="h-5 w-[200px] rounded-md" />
```
Rules:
- Mirror the actual UI structure with skeletons
- Match exact dimensions of the final content
- Use `rounded-md` to match component radius
- Stack multiple skeletons for lists
### Icons
Standard sizing — `h-[14px] w-[14px]` is the dominant pattern (400+ uses):
```tsx
<Icon className="h-[14px] w-[14px] text-[var(--text-icon)]" />
```
Size scale by frequency:
1. `h-[14px] w-[14px]` — default for inline icons (most common)
2. `h-[16px] w-[16px]` — slightly larger inline icons
3. `h-3 w-3` (12px) — compact/tight spaces
4. `h-4 w-4` (16px) — Tailwind equivalent, also common
5. `h-3.5 w-3.5` — Tailwind equivalent of the 14px default
6. `h-5 w-5` (20px) — larger icons, section headers
Use `text-[var(--text-icon)]` for icon color (113+ uses in codebase).
---
## Styling Rules
1. **Use `cn()` for conditional classes**: `cn('base', condition && 'conditional')` — never template literal concatenation like `` `base ${condition ? 'active' : ''}` ``
2. **Inline styles**: Avoid. Exception: dynamic values that can't be expressed as Tailwind classes (e.g., `style={{ width: dynamicVar }}` or CSS variable references). Never use inline styles for colors or static values.
3. **Never hardcode colors**: Use CSS variable tokens. Never `text-gray-500`, `bg-red-100`, `#fff`, or `rgb()`. Always `text-[var(--text-*)]`, `bg-[var(--surface-*)]`, etc.
4. **Never use Tailwind semantic color classes**: Use `text-[var(--text-muted)]` not `text-muted-foreground`. The CSS variable pattern is canonical.
5. **Never use global styles**: Keep all styling local to components
6. **Hover states**: Use `hover-hover:` pseudo-class for hover-capable devices
7. **Transitions**: Use `transition-colors` for color changes, `transition-colors duration-100` for fast hover
8. **Border radius**: `rounded-lg` (large cards), `rounded-md` (medium), `rounded-sm` (small), `rounded-xs` (tiny)
9. **Typography**: Use semantic sizes — `text-small` (13px), `text-caption` (12px), `text-xs` (11px), `text-micro` (10px)
10. **Font weight**: Use `font-medium` for emphasis, avoid `font-bold` unless for headings
11. **Spacing**: Use Tailwind gap/padding utilities. Common patterns: `gap-2`, `gap-3`, `px-4 py-2.5`
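Rule 1 can be illustrated with a minimal sketch. The real `cn` in this codebase lives at `@/lib/core/utils/cn` and additionally resolves conflicting Tailwind classes; the toy `join` below only shows the conditional-merge shape, so treat it as an illustration rather than the actual utility:

```typescript
// Toy stand-in for cn(): drop falsy entries, join the rest with spaces.
// (The real cn also merges conflicting Tailwind classes.)
const join = (...parts: Array<string | false | null | undefined>): string =>
  parts.filter(Boolean).join(' ')

const isActive = true
const className = join(
  'rounded-md px-4 py-2.5',
  isActive && 'bg-[var(--surface-active)]',
  !isActive && 'bg-[var(--surface-hover)]'
)
// => 'rounded-md px-4 py-2.5 bg-[var(--surface-active)]'
```

Compare this with the template-literal form — `` `rounded-md ${isActive ? 'active' : ''}` `` — which leaves stray whitespace and is what the rule forbids.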
---
## Anti-patterns to flag
- Raw HTML `<button>` instead of Button component (exception: inside Radix primitives)
- Raw HTML `<input>` instead of Input component (exception: hidden file inputs, read-only checkboxes in markdown)
- Hardcoded Tailwind default colors (`text-gray-*`, `bg-red-*`, `text-blue-*`)
- Hex values in className (`bg-[#fff]`, `text-[#333]`)
- Tailwind semantic classes (`text-muted-foreground`) instead of CSS variables (`text-[var(--text-muted)]`)
- Custom modal/dialog implementations instead of `Modal`
- Custom toast/notification implementations instead of `toast`
- Inline styles for colors or static values (dynamic values are acceptable)
- Template literal className concatenation instead of `cn()`
- Wrong button variant for the action type
- Missing loading/skeleton states
- Missing error states on forms
- Importing from emcn subpaths instead of barrel export
- Using arbitrary z-index (`z-50`, `z-[9999]`) instead of z-index tokens
- Custom shadows instead of shadow tokens
- Icon sizes that don't follow the established scale (default to `h-[14px] w-[14px]`)


@@ -0,0 +1,54 @@
---
name: react-query-best-practices
description: Audit React Query usage for best practices — key factories, staleTime, mutations, and server state ownership
---
# React Query Best Practices
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/hooks/queries/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses React Query (TanStack Query) as the single source of truth for all server state. All query hooks live in `hooks/queries/`. Zustand is used only for client-only UI state. Server data must never be duplicated into useState or Zustand outside of mutation callbacks that coordinate cross-store state.
## References
Read these before analyzing:
1. https://tkdodo.eu/blog/practical-react-query — foundational defaults, custom hooks, avoiding local state copies
2. https://tkdodo.eu/blog/effective-react-query-keys — key factory pattern, hierarchical keys, fuzzy invalidation
3. https://tkdodo.eu/blog/react-query-as-a-state-manager — React Query IS your server state manager
## Rules to enforce
### Query key factories
- Every file in `hooks/queries/` must have a hierarchical key factory with an `all` root key
- Keys must include intermediate plural keys (`lists`, `details`) for prefix invalidation
- Key factories are colocated with their query hooks, not in a global keys file
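As a sketch of the required shape (the `workflows` entity name is hypothetical, chosen only for illustration):

```typescript
// Hierarchical key factory with an `all` root and intermediate plural keys,
// so invalidating workflowKeys.lists() matches every list query by prefix
// while leaving detail queries untouched.
const workflowKeys = {
  all: ['workflows'] as const,
  lists: () => [...workflowKeys.all, 'list'] as const,
  list: (filters: string) => [...workflowKeys.lists(), { filters }] as const,
  details: () => [...workflowKeys.all, 'detail'] as const,
  detail: (id: string) => [...workflowKeys.details(), id] as const,
}
```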
### Query hooks
- Every `queryFn` must forward `signal` for request cancellation
- Every query must have an explicit `staleTime` (default 0 is almost never correct)
- `keepPreviousData` / `placeholderData` only on variable-key queries (where params change), never on static keys
- Use `enabled` to prevent queries from running without required params
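A sketch of query options satisfying all three rules — the endpoint and names are hypothetical, and the object is shown as plain options rather than a full `useQuery` call so the shape stands on its own:

```typescript
// Hypothetical detail-query options: signal forwarded for cancellation,
// explicit staleTime, and `enabled` guarding the required param.
function workflowDetailOptions(id: string | undefined) {
  return {
    queryKey: ['workflows', 'detail', id] as const,
    queryFn: ({ signal }: { signal: AbortSignal }) =>
      fetch(`/api/workflows/${id}`, { signal }).then((r) => r.json()),
    staleTime: 30_000, // explicit; the default of 0 refetches on every mount/focus
    enabled: id !== undefined, // never fire the request without the required param
  }
}
```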
### Mutations
- Use `onSettled` (not `onSuccess`) for cache reconciliation — it fires on both success and error
- For optimistic updates: save previous data in `onMutate`, roll back in `onError`
- Use targeted invalidation (`entityKeys.lists()`) not broad (`entityKeys.all`) when possible
- Don't include mutation objects in `useCallback` deps — `.mutate()` is stable
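The optimistic-update contract can be sketched with an in-memory map standing in for the query cache. All names here are hypothetical; in real code these would be the `onMutate`/`onError` callbacks of `useMutation`, operating on the cache via `setQueryData`:

```typescript
type Todo = { id: string; done: boolean }

// In-memory stand-in for the React Query cache.
const cache = new Map<string, Todo[]>()
cache.set('todos', [{ id: 't1', done: false }])

// onMutate: snapshot the previous value, then apply the optimistic update.
// The returned object becomes the mutation context.
function onMutate(id: string) {
  const previous = cache.get('todos') ?? []
  cache.set('todos', previous.map((t) => (t.id === id ? { ...t, done: true } : t)))
  return { previous }
}

// onError: roll back to the snapshot saved in onMutate.
function onError(context: { previous: Todo[] }) {
  cache.set('todos', context.previous)
}
```

In a real mutation, `onSettled` would then invalidate the relevant keys so the cache reconciles with the server regardless of outcome.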
### Server state ownership
- Never copy query data into useState. Use query data directly in components.
- Never copy query data into Zustand stores (exception: mutation callbacks that coordinate cross-store state like temp ID replacement)
- The query cache is not a local state manager — `setQueryData` is for optimistic updates only
- Forms are the one deliberate exception: copy server data into local form state, with `staleTime: Infinity` on the backing query so background refetches don't clobber in-progress edits
## Steps
1. Read the references above to understand the guidelines
2. Analyze the specified scope against the rules listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.


@@ -0,0 +1,51 @@
---
name: you-might-not-need-a-callback
description: Analyze and fix useCallback anti-patterns in your code
---
# You Might Not Need a Callback
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## References
Read before analyzing:
1. https://react.dev/reference/react/useCallback — official docs on when useCallback is actually needed
## When useCallback IS needed
- Passing a callback to a child wrapped in `React.memo` (to preserve referential equality)
- The callback is a dependency of another hook (`useEffect`, `useMemo`)
- The callback is used in a custom hook that documents referential stability requirements
## Anti-patterns to detect
1. **useCallback on functions not passed as props or deps**: If the function is only called within the same component and isn't in any dependency array, useCallback adds overhead for no benefit. Just declare the function normally.
2. **useCallback with exhaustive deps that change every render**: If the dependency array includes values that change on every render, useCallback recalculates every time. The memoization is wasted. Either stabilize the deps (use refs) or remove the useCallback.
3. **useCallback on event handlers passed to native elements**: `<button onClick={handleClick}>` — native elements don't benefit from stable references. Only child components wrapped in React.memo do.
4. **useCallback wrapping a function that creates new objects/arrays**: If the callback returns `{ ...newObj }` or `[...newArr]`, memoizing the callback doesn't prevent the child from re-rendering due to new return values. The memoization is at the wrong level.
5. **useCallback with an empty dep array when deps are needed**: Stale closures — the callback captures outdated values. Either add proper deps or use refs for values that shouldn't trigger re-creation.
6. **Pairing useCallback with React.memo unnecessarily**: If the child component is cheap to render, neither useCallback nor React.memo adds value. Only optimize when you've measured a performance problem.
7. **useCallback in custom hooks that don't need stable references**: Not every hook return needs to be memoized. Only stabilize callbacks when consumers depend on referential equality.
## Codebase-specific notes
This codebase uses a ref pattern for stable callbacks in hooks:
```tsx
const idRef = useRef(id)
useEffect(() => { idRef.current = id }, [id])
const fetchData = useCallback(async () => {
// use idRef.current instead of id
}, []) // empty deps because refs are used
```
This pattern is correct — don't flag it as an anti-pattern.
## Steps
1. Read the reference above
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.


@@ -0,0 +1,33 @@
---
name: you-might-not-need-a-memo
description: Analyze and fix useMemo/React.memo anti-patterns in your code
---
# You Might Not Need a Memo
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## References
Read before analyzing:
1. https://overreacted.io/before-you-memo/ — two techniques to avoid memo entirely
## Anti-patterns to detect
1. **Wrapping a slow component in React.memo when state can be moved down**: If a component re-renders because of state it doesn't use, move that state into a smaller child component instead of memoizing. The slow component stops re-rendering without memo.
2. **Wrapping in React.memo when children can be lifted up**: If a parent owns state that changes frequently, extract the stateful part and pass the expensive subtree as `children`. Children passed as props don't re-render when the parent's state changes.
3. **useMemo on cheap computations**: Filtering or mapping a small array, string concatenation, simple arithmetic — these don't need memoization. Only memoize when you've measured a performance problem.
4. **useMemo with constantly-changing deps**: If the dependency array changes on every render, useMemo does nothing — it recalculates every time. Fix the deps or remove the memo.
5. **useMemo to create objects/arrays passed as props**: Instead of memoizing to prevent child re-renders, consider whether the child even needs referential stability. If the child doesn't use React.memo or pass it to a dep array, the memo is wasted.
6. **React.memo on components that always receive new props**: If the parent always passes new objects, arrays, or callbacks, React.memo's shallow comparison always fails. Fix the parent instead of memoizing the child.
7. **useMemo for derived state**: If you're computing a value from props or state, just compute it inline during render. React renders are fast. `const fullName = first + ' ' + last` doesn't need useMemo.
## Steps
1. Read the reference above to understand the two core techniques (move state down, lift content up)
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.


@@ -0,0 +1,38 @@
---
name: you-might-not-need-state
description: Analyze and fix unnecessary useState, derived state, and server-state-in-local-state anti-patterns
---
# You Might Not Need State
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses React Query for all server state and Zustand for client-only global state. useState should only be used for ephemeral UI concerns (open/closed, hover, local form input). Server data should never be copied into useState or Zustand — React Query is the single source of truth.
## References
Read these before analyzing:
1. https://react.dev/learn/choosing-the-state-structure — 5 principles for structuring state
2. https://tkdodo.eu/blog/dont-over-use-state — never store derived/computed values in state
3. https://tkdodo.eu/blog/putting-props-to-use-state — never mirror props into state via useEffect
## Anti-patterns to detect
1. **Derived state stored in useState**: If a value can be computed from props, other state, or query data, compute it inline during render instead of storing it in state.
2. **Server state copied into useState**: Never `useState` + `useEffect` to sync React Query data into local state. Use query data directly. The only exception is forms where users edit server data.
3. **Props mirrored into state**: Never `useState(prop)` + `useEffect(() => setState(prop))`. Use the prop directly, or use a key to reset component state.
4. **Chained useEffect state updates**: Never chain Effects that set state to trigger other Effects. Calculate all derived values in the event handler or inline during render.
5. **Storing objects when an ID suffices**: Store `selectedId` not a copy of the selected object. Derive the object: `items.find(i => i.id === selectedId)`.
6. **State that duplicates Zustand or React Query**: If the data already lives in a store or query cache, don't create a parallel useState.
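Anti-pattern 5 in code form (the data and names are illustrative; in a component, `selectedId` would live in `useState<string | null>`):

```typescript
type Item = { id: string; name: string }

const items: Item[] = [
  { id: 'a', name: 'Alpha' },
  { id: 'b', name: 'Beta' },
]

// Store only the ID in state...
const selectedId: string | null = 'b'

// ...and derive the object during render, so the selection can never go
// stale when the items list refreshes from the server.
const selected = items.find((i) => i.id === selectedId)
```

Storing a copy of the selected object instead would require a sync `useEffect` to keep it fresh, which is exactly anti-pattern 2.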
## Steps
1. Read the references above to understand the guidelines
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.


@@ -0,0 +1,25 @@
---
description: Run all code quality skills in sequence — effects, memo, callbacks, state, React Query, and emcn design review
argument-hint: [scope] [fix=true|false]
---
# Cleanup
Arguments:
- scope: what to review (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Steps
Run each of these skills in order on the specified scope, passing through the scope and fix arguments. After each skill completes, move to the next. Do not skip any.
1. `/you-might-not-need-an-effect $ARGUMENTS`
2. `/you-might-not-need-a-memo $ARGUMENTS`
3. `/you-might-not-need-a-callback $ARGUMENTS`
4. `/you-might-not-need-state $ARGUMENTS`
5. `/react-query-best-practices $ARGUMENTS`
6. `/emcn-design-review $ARGUMENTS`
After all skills have run, output a summary of what was found and fixed (or proposed) across all six passes.


@@ -0,0 +1,79 @@
---
description: Review UI code for alignment with the emcn design system — components, tokens, patterns, and conventions
argument-hint: [scope] [fix=true|false]
---
# EMCN Design Review
Arguments:
- scope: what to review (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses **emcn**, a custom component library built on Radix UI primitives with CVA variants and CSS variable design tokens. All UI must use emcn components and tokens.
## Steps
1. Read the emcn barrel export at `apps/sim/components/emcn/components/index.ts` to know what's available
2. Read `apps/sim/app/_styles/globals.css` for CSS variable tokens
3. Analyze the specified scope against every rule below
4. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.
---
## Imports
- Import from `@/components/emcn` barrel, never subpaths
- Icons from `@/components/emcn/icons` or `lucide-react`
- Use `cn` from `@/lib/core/utils/cn` for conditional classes
## Design Tokens
Use CSS variable pattern (`text-[var(--text-primary)]`), never Tailwind semantics (`text-muted-foreground`) or hardcoded colors (`text-gray-500`, `#333`).
**Text**: `--text-primary`, `--text-secondary`, `--text-tertiary`, `--text-muted`, `--text-icon`, `--text-inverse`, `--text-error`
**Surfaces**: `--bg`, `--surface-2` through `--surface-7`, `--surface-hover`, `--surface-active`
**Borders**: `--border`, `--border-1`, `--border-muted`
**Z-Index**: `--z-dropdown` (100), `--z-modal` (200), `--z-popover` (300), `--z-tooltip` (400), `--z-toast` (500)
**Shadows**: `shadow-subtle`, `shadow-medium`, `shadow-overlay`, `shadow-card`
## Buttons
| Action | Variant |
|--------|---------|
| Toolbar, icon-only | `ghost` (most common, 28%) |
| Create, save, submit | `primary` (24%) |
| Cancel, close | `default` |
| Delete, remove | `destructive` |
| Selected state | `active` |
| Toggle | `outline` |
## Delete/Remove Confirmations
Modal `size="sm"`, title "Delete/Remove {ItemType}", `variant="destructive"` action button, `variant="default"` cancel. Cancel left, action right (100% compliance). Use `text-[var(--text-error)]` for irreversible warnings.
## Toast
`toast.success()`, `toast.error()`, `toast()` from `@/components/emcn`. Never custom notification UI.
## Badges
`red`=error/failed, `gray-secondary`=metadata/roles, `type`=type annotations, `green`=success/active, `gray`=neutral, `amber`=processing, `orange`=paused, `blue`=info. Use `dot` prop for status indicators.
## Icons
Default: `h-[14px] w-[14px]` (400+ uses). Color: `text-[var(--text-icon)]`. Scale: 14px > 16px > 12px > 20px.
## Anti-patterns to flag
- Raw `<button>`/`<input>` instead of emcn components
- Hardcoded colors (`text-gray-*`, `#hex`, `rgb()`)
- Tailwind semantics (`text-muted-foreground`) instead of CSS variables
- Template literal className instead of `cn()`
- Inline styles for colors/static values (dynamic values OK)
- Importing from emcn subpaths instead of barrel
- Arbitrary z-index instead of tokens
- Wrong button variant for action type


@@ -0,0 +1,54 @@
---
description: Audit React Query usage for best practices — key factories, staleTime, mutations, and server state ownership
argument-hint: [scope] [fix=true|false]
---
# React Query Best Practices
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/hooks/queries/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses React Query (TanStack Query) as the single source of truth for all server state. All query hooks live in `hooks/queries/`. Zustand is used only for client-only UI state. Server data must never be duplicated into useState or Zustand outside of mutation callbacks that coordinate cross-store state.
## References
Read these before analyzing:
1. https://tkdodo.eu/blog/practical-react-query — foundational defaults, custom hooks, avoiding local state copies
2. https://tkdodo.eu/blog/effective-react-query-keys — key factory pattern, hierarchical keys, fuzzy invalidation
3. https://tkdodo.eu/blog/react-query-as-a-state-manager — React Query IS your server state manager
## Rules to enforce
### Query key factories
- Every file in `hooks/queries/` must have a hierarchical key factory with an `all` root key
- Keys must include intermediate plural keys (`lists`, `details`) for prefix invalidation
- Key factories are colocated with their query hooks, not in a global keys file
### Query hooks
- Every `queryFn` must forward `signal` for request cancellation
- Every query must have an explicit `staleTime` (default 0 is almost never correct)
- `keepPreviousData` / `placeholderData` only on variable-key queries (where params change), never on static keys
- Use `enabled` to prevent queries from running without required params
### Mutations
- Use `onSettled` (not `onSuccess`) for cache reconciliation — it fires on both success and error
- For optimistic updates: save previous data in `onMutate`, roll back in `onError`
- Use targeted invalidation (`entityKeys.lists()`) not broad (`entityKeys.all`) when possible
- Don't include mutation objects in `useCallback` deps — `.mutate()` is stable
### Server state ownership
- Never copy query data into useState. Use query data directly in components.
- Never copy query data into Zustand stores (exception: mutation callbacks that coordinate cross-store state like temp ID replacement)
- The query cache is not a local state manager — `setQueryData` is for optimistic updates only
- Forms are the one deliberate exception: copy server data into local form state, with `staleTime: Infinity` on the backing query so background refetches don't clobber in-progress edits
## Steps
1. Read the references above to understand the guidelines
2. Analyze the specified scope against the rules listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.


@@ -0,0 +1,35 @@
---
description: Analyze and fix useCallback anti-patterns in your code
argument-hint: [scope] [fix=true|false]
---
# You Might Not Need a Callback
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## References
Read before analyzing:
1. https://react.dev/reference/react/useCallback — official docs on when useCallback is actually needed
## Anti-patterns to detect
1. **useCallback on functions not passed as props or deps**: No benefit if only called within the same component.
2. **useCallback with deps that change every render**: Memoization is wasted.
3. **useCallback on handlers passed to native elements**: `<button onClick={fn}>` doesn't benefit from stable references.
4. **useCallback wrapping functions that return new objects/arrays**: Memoization at the wrong level.
5. **useCallback with empty deps when deps are needed**: Stale closures.
6. **Pairing useCallback + React.memo unnecessarily**: Only optimize when you've measured a problem.
7. **useCallback in hooks that don't need stable references**: Not every hook return needs memoization.
Note: This codebase uses a ref pattern for stable callbacks (`useRef` + empty deps). That pattern is correct — don't flag it.
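The ref pattern mentioned above can be sketched without React; in the real codebase the holder would be a `useRef` and the wrapper a `useCallback` with empty deps (the names below are illustrative):

```typescript
// Framework-free sketch of the "latest ref" stable-callback pattern:
// the wrapper's identity never changes, yet it always calls the newest fn.
function createStableCallback<A extends unknown[], R>(fn: (...args: A) => R) {
  const ref = { current: fn } // useRef(fn) in React
  const stable = (...args: A) => ref.current(...args)
  const update = (next: (...args: A) => R) => {
    ref.current = next // assigned on each render in React
  }
  return { stable, update }
}
```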
## Steps
1. Read the reference above
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.

View File

@@ -0,0 +1,33 @@
---
description: Analyze and fix useMemo/React.memo anti-patterns in your code
argument-hint: [scope] [fix=true|false]
---
# You Might Not Need a Memo
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## References
Read before analyzing:
1. https://overreacted.io/before-you-memo/ — two techniques to avoid memo entirely
## Anti-patterns to detect
1. **State can be moved down instead of memoizing**: Move state into a smaller child so the slow component stops re-rendering without memo.
2. **Children can be lifted up**: Extract stateful part, pass expensive subtree as `children` — children as props don't re-render when parent state changes.
3. **useMemo on cheap computations**: Small array filters, string concat, arithmetic don't need memoization.
4. **useMemo with constantly-changing deps**: If the deps change every render, useMemo recomputes every time and caches nothing.
5. **useMemo to stabilize props for non-memoized children**: If the child isn't wrapped in React.memo, stable references don't matter.
6. **React.memo on components that always receive new props**: Fix the parent instead.
7. **useMemo for derived state**: Just compute inline during render.
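For anti-patterns 3 and 7, the fix is usually to delete the hook and compute during render; a minimal sketch with illustrative data:

```typescript
type Item = { id: number; done: boolean }
const items: Item[] = [
  { id: 1, done: true },
  { id: 2, done: false },
]

// Before: const doneCount = useMemo(() => items.filter(i => i.done).length, [items])
// After: a cheap derivation, computed inline on every render
const doneCount = items.filter((i) => i.done).length
```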
## Steps
1. Read the reference above
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.

View File

@@ -0,0 +1,38 @@
---
description: Analyze and fix unnecessary useState, derived state, and server-state-in-local-state anti-patterns
argument-hint: [scope] [fix=true|false]
---
# You Might Not Need State
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses React Query for all server state and Zustand for client-only global state. useState should only be used for ephemeral UI concerns (open/closed, hover, local form input). Server data should never be copied into useState or Zustand — React Query is the single source of truth.
## References
Read these before analyzing:
1. https://react.dev/learn/choosing-the-state-structure — 5 principles for structuring state
2. https://tkdodo.eu/blog/dont-over-use-state — never store derived/computed values in state
3. https://tkdodo.eu/blog/putting-props-to-use-state — never mirror props into state via useEffect
## Anti-patterns to detect
1. **Derived state stored in useState**: If a value can be computed from props, other state, or query data, compute it inline during render instead of storing it in state.
2. **Server state copied into useState**: Never `useState` + `useEffect` to sync React Query data into local state. Use query data directly. The only exception is forms where users edit server data.
3. **Props mirrored into state**: Never `useState(prop)` + `useEffect(() => setState(prop))`. Use the prop directly, or use a key to reset component state.
4. **Chained useEffect state updates**: Never chain Effects that set state to trigger other Effects. Calculate all derived values in the event handler or inline during render.
5. **Storing objects when an ID suffices**: Store `selectedId` not a copy of the selected object. Derive the object: `items.find(i => i.id === selectedId)`.
6. **State that duplicates Zustand or React Query**: If the data already lives in a store or query cache, don't create a parallel useState.
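A minimal sketch of the fix for anti-pattern 5 (data shape is illustrative): keep only the id in state and derive the object each render:

```typescript
type Item = { id: number; name: string }
const items: Item[] = [
  { id: 1, name: 'alpha' },
  { id: 2, name: 'beta' },
]

// Before: useState<Item | null>(null) holding a copy of the selected object
// After: store only the id (e.g. in useState) and derive the object
const selectedId = 2
const selected = items.find((i) => i.id === selectedId)
```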
## Steps
1. Read the references above to understand the guidelines
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.

View File

@@ -0,0 +1,20 @@
# Cleanup
Arguments:
- scope: what to review (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Steps
Run each of these skills in order on the specified scope, passing through the scope and fix arguments. After each skill completes, move to the next. Do not skip any.
1. `/you-might-not-need-an-effect $ARGUMENTS`
2. `/you-might-not-need-a-memo $ARGUMENTS`
3. `/you-might-not-need-a-callback $ARGUMENTS`
4. `/you-might-not-need-state $ARGUMENTS`
5. `/react-query-best-practices $ARGUMENTS`
6. `/emcn-design-review $ARGUMENTS`
After all skills have run, output a summary of what was found and fixed (or proposed) across all six passes.

View File

@@ -0,0 +1,74 @@
# EMCN Design Review
Arguments:
- scope: what to review (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses **emcn**, a custom component library built on Radix UI primitives with CVA variants and CSS variable design tokens. All UI must use emcn components and tokens.
## Steps
1. Read the emcn barrel export at `apps/sim/components/emcn/components/index.ts` to know what's available
2. Read `apps/sim/app/_styles/globals.css` for CSS variable tokens
3. Analyze the specified scope against every rule below
4. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.
---
## Imports
- Import from `@/components/emcn` barrel, never subpaths
- Icons from `@/components/emcn/icons` or `lucide-react`
- Use `cn` from `@/lib/core/utils/cn` for conditional classes
## Design Tokens
Use CSS variable pattern (`text-[var(--text-primary)]`), never Tailwind semantics (`text-muted-foreground`) or hardcoded colors (`text-gray-500`, `#333`).
**Text**: `--text-primary`, `--text-secondary`, `--text-tertiary`, `--text-muted`, `--text-icon`, `--text-inverse`, `--text-error`
**Surfaces**: `--bg`, `--surface-2` through `--surface-7`, `--surface-hover`, `--surface-active`
**Borders**: `--border`, `--border-1`, `--border-muted`
**Z-Index**: `--z-dropdown` (100), `--z-modal` (200), `--z-popover` (300), `--z-tooltip` (400), `--z-toast` (500)
**Shadows**: `shadow-subtle`, `shadow-medium`, `shadow-overlay`, `shadow-card`
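Conditional token classes compose through `cn` rather than template literals; the one-line `cn` below is a stand-in assumed to mirror the real helper at `@/lib/core/utils/cn`:

```typescript
// Minimal stand-in for the project's `cn` helper (assumed to join truthy
// class strings; the real implementation lives at '@/lib/core/utils/cn').
const cn = (...parts: Array<string | false | null | undefined>) =>
  parts.filter(Boolean).join(' ')

const hasError = true
const className = cn(
  'text-[var(--text-primary)]', // token, never text-gray-500 or #333
  hasError && 'text-[var(--text-error)]' // conditional via cn, not a template literal
)
```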
## Buttons
| Action | Variant |
|--------|---------|
| Toolbar, icon-only | `ghost` (most common, 28%) |
| Create, save, submit | `primary` (24%) |
| Cancel, close | `default` |
| Delete, remove | `destructive` |
| Selected state | `active` |
| Toggle | `outline` |
## Delete/Remove Confirmations
Modal `size="sm"`, title "Delete/Remove {ItemType}", `variant="destructive"` action button, `variant="default"` cancel. Cancel left, action right (100% compliance). Use `text-[var(--text-error)]` for irreversible warnings.
## Toast
`toast.success()`, `toast.error()`, `toast()` from `@/components/emcn`. Never custom notification UI.
## Badges
`red`=error/failed, `gray-secondary`=metadata/roles, `type`=type annotations, `green`=success/active, `gray`=neutral, `amber`=processing, `orange`=paused, `blue`=info. Use `dot` prop for status indicators.
## Icons
Default: `h-[14px] w-[14px]` (400+ uses). Color: `text-[var(--text-icon)]`. Scale: 14px > 16px > 12px > 20px.
## Anti-patterns to flag
- Raw `<button>`/`<input>` instead of emcn components
- Hardcoded colors (`text-gray-*`, `#hex`, `rgb()`)
- Tailwind semantics (`text-muted-foreground`) instead of CSS variables
- Template literal className instead of `cn()`
- Inline styles for colors/static values (dynamic values OK)
- Importing from emcn subpaths instead of barrel
- Arbitrary z-index instead of tokens
- Wrong button variant for action type

View File

@@ -0,0 +1,49 @@
# React Query Best Practices
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/hooks/queries/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses React Query (TanStack Query) as the single source of truth for all server state. All query hooks live in `hooks/queries/`. Zustand is used only for client-only UI state. Server data must never be duplicated into useState or Zustand outside of mutation callbacks that coordinate cross-store state.
## References
Read these before analyzing:
1. https://tkdodo.eu/blog/practical-react-query — foundational defaults, custom hooks, avoiding local state copies
2. https://tkdodo.eu/blog/effective-react-query-keys — key factory pattern, hierarchical keys, fuzzy invalidation
3. https://tkdodo.eu/blog/react-query-as-a-state-manager — React Query IS your server state manager
## Rules to enforce
### Query key factories
- Every file in `hooks/queries/` must have a hierarchical key factory with an `all` root key
- Keys must include intermediate plural keys (`lists`, `details`) for prefix invalidation
- Key factories are colocated with their query hooks, not in a global keys file
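A key factory that satisfies these rules might look like the following sketch (the `todos` entity name is illustrative, not taken from this codebase):

```typescript
// Hypothetical key factory for a `todos` entity: an `all` root key plus
// intermediate plural keys (`lists`, `details`) so invalidation can match
// by prefix.
const todoKeys = {
  all: ['todos'] as const,
  lists: () => [...todoKeys.all, 'list'] as const,
  list: (filters: string) => [...todoKeys.lists(), { filters }] as const,
  details: () => [...todoKeys.all, 'detail'] as const,
  detail: (id: number) => [...todoKeys.details(), id] as const,
}
```

Invalidating with the `lists()` prefix then matches every list key without touching detail queries.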
### Query hooks
- Every `queryFn` must forward `signal` for request cancellation
- Every query must have an explicit `staleTime` (default 0 is almost never correct)
- `keepPreviousData` / `placeholderData` only on variable-key queries (where params change), never on static keys
- Use `enabled` to prevent queries from running without required params
### Mutations
- Use `onSettled` (not `onSuccess`) for cache reconciliation — it fires on both success and error
- For optimistic updates: save previous data in `onMutate`, roll back in `onError`
- Use targeted invalidation (`entityKeys.lists()`) not broad (`entityKeys.all`) when possible
- Don't include mutation objects in `useCallback` deps — `.mutate()` is stable
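The onMutate/onError/onSettled choreography can be sketched framework-free; here a plain `Map` stands in for the query cache, and all names are illustrative:

```typescript
// Sketch of optimistic-update bookkeeping. `cache` stands in for the query
// cache; the returned callbacks mirror onError and onSettled.
type Todo = { id: number; title: string }

const cache = new Map<string, Todo[]>()
cache.set('todos', [{ id: 1, title: 'old' }])

function optimisticRename(id: number, title: string) {
  // onMutate: snapshot the previous data, then write the optimistic value
  const previous = cache.get('todos') ?? []
  cache.set('todos', previous.map((t) => (t.id === id ? { ...t, title } : t)))
  return {
    // onError: restore the snapshot
    rollback: () => cache.set('todos', previous),
    // onSettled: this is where targeted invalidation (entityKeys.lists()) would go
    settle: () => undefined,
  }
}

const { rollback } = optimisticRename(1, 'new')
rollback() // the request failed, so onError rolls back to the snapshot
```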
### Server state ownership
- Never copy query data into useState. Use query data directly in components.
- Never copy query data into Zustand stores (exception: mutation callbacks that coordinate cross-store state like temp ID replacement)
- The query cache is not a local state manager — `setQueryData` is for optimistic updates only
- Forms are the one deliberate exception: copy server data into local form state with `staleTime: Infinity`
## Steps
1. Read the references above to understand the guidelines
2. Analyze the specified scope against the rules listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.

View File

@@ -0,0 +1,30 @@
# You Might Not Need a Callback
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## References
Read before analyzing:
1. https://react.dev/reference/react/useCallback — official docs on when useCallback is actually needed
## Anti-patterns to detect
1. **useCallback on functions not passed as props or deps**: No benefit if only called within the same component.
2. **useCallback with deps that change every render**: Memoization is wasted.
3. **useCallback on handlers passed to native elements**: `<button onClick={fn}>` doesn't benefit from stable references.
4. **useCallback wrapping functions that return new objects/arrays**: Memoization at the wrong level.
5. **useCallback with empty deps when deps are needed**: Stale closures.
6. **Pairing useCallback + React.memo unnecessarily**: Only optimize when you've measured a problem.
7. **useCallback in hooks that don't need stable references**: Not every hook return needs memoization.
Note: This codebase uses a ref pattern for stable callbacks (`useRef` + empty deps). That pattern is correct — don't flag it.
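The ref pattern mentioned above can be sketched without React; in the real codebase the holder would be a `useRef` and the wrapper a `useCallback` with empty deps (the names below are illustrative):

```typescript
// Framework-free sketch of the "latest ref" stable-callback pattern:
// the wrapper's identity never changes, yet it always calls the newest fn.
function createStableCallback<A extends unknown[], R>(fn: (...args: A) => R) {
  const ref = { current: fn } // useRef(fn) in React
  const stable = (...args: A) => ref.current(...args)
  const update = (next: (...args: A) => R) => {
    ref.current = next // assigned on each render in React
  }
  return { stable, update }
}
```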
## Steps
1. Read the reference above
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.

View File

@@ -0,0 +1,28 @@
# You Might Not Need a Memo
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## References
Read before analyzing:
1. https://overreacted.io/before-you-memo/ — two techniques to avoid memo entirely
## Anti-patterns to detect
1. **State can be moved down instead of memoizing**: Move state into a smaller child so the slow component stops re-rendering without memo.
2. **Children can be lifted up**: Extract stateful part, pass expensive subtree as `children` — children as props don't re-render when parent state changes.
3. **useMemo on cheap computations**: Small array filters, string concat, arithmetic don't need memoization.
4. **useMemo with constantly-changing deps**: If the deps change every render, useMemo recomputes every time and caches nothing.
5. **useMemo to stabilize props for non-memoized children**: If the child isn't wrapped in React.memo, stable references don't matter.
6. **React.memo on components that always receive new props**: Fix the parent instead.
7. **useMemo for derived state**: Just compute inline during render.
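For anti-patterns 3 and 7, the fix is usually to delete the hook and compute during render; a minimal sketch with illustrative data:

```typescript
type Item = { id: number; done: boolean }
const items: Item[] = [
  { id: 1, done: true },
  { id: 2, done: false },
]

// Before: const doneCount = useMemo(() => items.filter(i => i.done).length, [items])
// After: a cheap derivation, computed inline on every render
const doneCount = items.filter((i) => i.done).length
```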
## Steps
1. Read the reference above
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.

View File

@@ -0,0 +1,33 @@
# You Might Not Need State
Arguments:
- scope: what to analyze (default: your current changes). Examples: "diff to main", "PR #123", "src/components/", "whole codebase"
- fix: whether to apply fixes (default: true). Set to false to only propose changes.
User arguments: $ARGUMENTS
## Context
This codebase uses React Query for all server state and Zustand for client-only global state. useState should only be used for ephemeral UI concerns (open/closed, hover, local form input). Server data should never be copied into useState or Zustand — React Query is the single source of truth.
## References
Read these before analyzing:
1. https://react.dev/learn/choosing-the-state-structure — 5 principles for structuring state
2. https://tkdodo.eu/blog/dont-over-use-state — never store derived/computed values in state
3. https://tkdodo.eu/blog/putting-props-to-use-state — never mirror props into state via useEffect
## Anti-patterns to detect
1. **Derived state stored in useState**: If a value can be computed from props, other state, or query data, compute it inline during render instead of storing it in state.
2. **Server state copied into useState**: Never `useState` + `useEffect` to sync React Query data into local state. Use query data directly. The only exception is forms where users edit server data.
3. **Props mirrored into state**: Never `useState(prop)` + `useEffect(() => setState(prop))`. Use the prop directly, or use a key to reset component state.
4. **Chained useEffect state updates**: Never chain Effects that set state to trigger other Effects. Calculate all derived values in the event handler or inline during render.
5. **Storing objects when an ID suffices**: Store `selectedId` not a copy of the selected object. Derive the object: `items.find(i => i.id === selectedId)`.
6. **State that duplicates Zustand or React Query**: If the data already lives in a store or query cache, don't create a parallel useState.
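A minimal sketch of the fix for anti-pattern 5 (data shape is illustrative): keep only the id in state and derive the object each render:

```typescript
type Item = { id: number; name: string }
const items: Item[] = [
  { id: 1, name: 'alpha' },
  { id: 2, name: 'beta' },
]

// Before: useState<Item | null>(null) holding a copy of the selected object
// After: store only the id (e.g. in useState) and derive the object
const selectedId = 2
const selected = items.find((i) => i.id === selectedId)
```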
## Steps
1. Read the references above to understand the guidelines
2. Analyze the specified scope for the anti-patterns listed above
3. If fix=true, apply the fixes. If fix=false, propose the fixes without applying.

.github/CODEOWNERS
View File

@@ -0,0 +1,28 @@
# Copilot/Mothership chat streaming entrypoints and replay surfaces.
/apps/sim/app/api/copilot/chat/ @simstudioai/mothership
/apps/sim/app/api/copilot/confirm/ @simstudioai/mothership
/apps/sim/app/api/copilot/chats/ @simstudioai/mothership
/apps/sim/app/api/mothership/chat/ @simstudioai/mothership
/apps/sim/app/api/mothership/chats/ @simstudioai/mothership
/apps/sim/app/api/mothership/execute/ @simstudioai/mothership
/apps/sim/app/api/v1/copilot/chat/ @simstudioai/mothership
# Server-side stream orchestration, persistence, and protocol.
/apps/sim/lib/copilot/chat/ @simstudioai/mothership
/apps/sim/lib/copilot/async-runs/ @simstudioai/mothership
/apps/sim/lib/copilot/request/ @simstudioai/mothership
/apps/sim/lib/copilot/generated/ @simstudioai/mothership
/apps/sim/lib/copilot/constants.ts @simstudioai/mothership
/apps/sim/lib/core/utils/sse.ts @simstudioai/mothership
# Stream-time tool execution, confirmations, resource persistence, and handlers.
/apps/sim/lib/copilot/tool-executor/ @simstudioai/mothership
/apps/sim/lib/copilot/tools/ @simstudioai/mothership
/apps/sim/lib/copilot/persistence/ @simstudioai/mothership
/apps/sim/lib/copilot/resources/ @simstudioai/mothership
# Client-side stream consumption, hydration, and reconnect.
/apps/sim/app/workspace/*/home/hooks/index.ts @simstudioai/mothership
/apps/sim/app/workspace/*/home/hooks/use-chat.ts @simstudioai/mothership
/apps/sim/app/workspace/*/home/hooks/use-file-preview-sessions.ts @simstudioai/mothership
/apps/sim/hooks/queries/tasks.ts @simstudioai/mothership

View File

@@ -16,6 +16,7 @@ permissions:
jobs:
test-build:
name: Test and Build
if: github.ref != 'refs/heads/dev' || github.event_name == 'pull_request'
uses: ./.github/workflows/test-build.yml
secrets: inherit
@@ -45,11 +46,72 @@ jobs:
echo " Not a release commit"
fi
- # Build AMD64 images and push to ECR immediately (+ GHCR for main)
+ # Dev: build all 3 images for ECR only (no GHCR, no ARM64)
build-dev:
name: Build Dev ECR
needs: [detect-version]
if: github.event_name == 'push' && github.ref == 'refs/heads/dev'
runs-on: blacksmith-8vcpu-ubuntu-2404
permissions:
contents: read
id-token: write
strategy:
fail-fast: false
matrix:
include:
- dockerfile: ./docker/app.Dockerfile
ecr_repo_secret: ECR_APP
- dockerfile: ./docker/db.Dockerfile
ecr_repo_secret: ECR_MIGRATIONS
- dockerfile: ./docker/realtime.Dockerfile
ecr_repo_secret: ECR_REALTIME
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v4
with:
role-to-assume: ${{ secrets.DEV_AWS_ROLE_TO_ASSUME }}
aws-region: ${{ secrets.DEV_AWS_REGION }}
- name: Login to Amazon ECR
id: login-ecr
uses: aws-actions/amazon-ecr-login@v2
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Docker Buildx
uses: useblacksmith/setup-docker-builder@v1
- name: Resolve ECR repo name
id: ecr-repo
run: echo "name=$ECR_REPO" >> $GITHUB_OUTPUT
env:
ECR_REPO: ${{ matrix.ecr_repo_secret == 'ECR_APP' && secrets.ECR_APP || matrix.ecr_repo_secret == 'ECR_MIGRATIONS' && secrets.ECR_MIGRATIONS || matrix.ecr_repo_secret == 'ECR_REALTIME' && secrets.ECR_REALTIME || '' }}
- name: Build and push
uses: useblacksmith/build-push-action@v2
with:
context: .
file: ${{ matrix.dockerfile }}
platforms: linux/amd64
push: true
tags: ${{ steps.login-ecr.outputs.registry }}/${{ steps.ecr-repo.outputs.name }}:dev
provenance: false
sbom: false
# Main/staging: build AMD64 images and push to ECR + GHCR
build-amd64:
name: Build AMD64
- needs: [detect-version]
- if: github.event_name == 'push' && (github.ref == 'refs/heads/main' || github.ref == 'refs/heads/staging' || github.ref == 'refs/heads/dev')
+ needs: [test-build, detect-version]
+ if: >-
+   github.event_name == 'push' &&
+   (github.ref == 'refs/heads/main' || github.ref == 'refs/heads/staging')
runs-on: blacksmith-8vcpu-ubuntu-2404
permissions:
contents: read
@@ -75,8 +137,8 @@ jobs:
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v4
with:
- role-to-assume: ${{ github.ref == 'refs/heads/main' && secrets.AWS_ROLE_TO_ASSUME || github.ref == 'refs/heads/dev' && secrets.DEV_AWS_ROLE_TO_ASSUME || secrets.STAGING_AWS_ROLE_TO_ASSUME }}
- aws-region: ${{ github.ref == 'refs/heads/main' && secrets.AWS_REGION || github.ref == 'refs/heads/dev' && secrets.DEV_AWS_REGION || secrets.STAGING_AWS_REGION }}
+ role-to-assume: ${{ github.ref == 'refs/heads/main' && secrets.AWS_ROLE_TO_ASSUME || secrets.STAGING_AWS_ROLE_TO_ASSUME }}
+ aws-region: ${{ github.ref == 'refs/heads/main' && secrets.AWS_REGION || secrets.STAGING_AWS_REGION }}
- name: Login to Amazon ECR
id: login-ecr
@@ -99,33 +161,33 @@ jobs:
- name: Set up Docker Buildx
uses: useblacksmith/setup-docker-builder@v1
- name: Resolve ECR repo name
id: ecr-repo
run: echo "name=$ECR_REPO" >> $GITHUB_OUTPUT
env:
ECR_REPO: ${{ matrix.ecr_repo_secret == 'ECR_APP' && secrets.ECR_APP || matrix.ecr_repo_secret == 'ECR_MIGRATIONS' && secrets.ECR_MIGRATIONS || matrix.ecr_repo_secret == 'ECR_REALTIME' && secrets.ECR_REALTIME || '' }}
- name: Generate tags
id: meta
run: |
ECR_REGISTRY="${{ steps.login-ecr.outputs.registry }}"
- ECR_REPO="${{ secrets[matrix.ecr_repo_secret] }}"
+ ECR_REPO="${{ steps.ecr-repo.outputs.name }}"
GHCR_IMAGE="${{ matrix.ghcr_image }}"
# ECR tags (always build for ECR)
if [ "${{ github.ref }}" = "refs/heads/main" ]; then
ECR_TAG="latest"
- elif [ "${{ github.ref }}" = "refs/heads/dev" ]; then
-   ECR_TAG="dev"
else
ECR_TAG="staging"
fi
ECR_IMAGE="${ECR_REGISTRY}/${ECR_REPO}:${ECR_TAG}"
# Build tags list
TAGS="${ECR_IMAGE}"
# Add GHCR tags only for main branch
if [ "${{ github.ref }}" = "refs/heads/main" ]; then
GHCR_AMD64="${GHCR_IMAGE}:latest-amd64"
GHCR_SHA="${GHCR_IMAGE}:${{ github.sha }}-amd64"
TAGS="${TAGS},$GHCR_AMD64,$GHCR_SHA"
# Add version tag if this is a release commit
if [ "${{ needs.detect-version.outputs.is_release }}" = "true" ]; then
VERSION="${{ needs.detect-version.outputs.version }}"
GHCR_VERSION="${GHCR_IMAGE}:${VERSION}-amd64"
@@ -256,6 +318,14 @@ jobs:
docker manifest push "${IMAGE_BASE}:${VERSION}"
fi
# Run database migrations for dev
migrate-dev:
name: Migrate Dev DB
needs: [build-dev]
if: github.event_name == 'push' && github.ref == 'refs/heads/dev'
uses: ./.github/workflows/migrations.yml
secrets: inherit
# Check if docs changed
check-docs-changes:
name: Check Docs Changes

View File

@@ -38,5 +38,5 @@ jobs:
- name: Apply migrations
working-directory: ./packages/db
env:
- DATABASE_URL: ${{ github.ref == 'refs/heads/main' && secrets.DATABASE_URL || secrets.STAGING_DATABASE_URL }}
+ DATABASE_URL: ${{ github.ref == 'refs/heads/main' && secrets.DATABASE_URL || github.ref == 'refs/heads/dev' && secrets.DEV_DATABASE_URL || secrets.STAGING_DATABASE_URL }}
run: bunx drizzle-kit migrate --config=./drizzle.config.ts

View File

@@ -105,7 +105,7 @@ jobs:
- name: Run tests with coverage
env:
- NODE_OPTIONS: '--no-warnings'
+ NODE_OPTIONS: '--no-warnings --max-old-space-size=8192'
NEXT_PUBLIC_APP_URL: 'https://www.sim.ai'
DATABASE_URL: 'postgresql://postgres:postgres@localhost:5432/simstudio'
ENCRYPTION_KEY: '7cf672e460e430c1fba707575c2b0e2ad5a99dddf9b7b7e3b5646e630861db1c' # dummy key for CI only
@@ -127,7 +127,7 @@ jobs:
- name: Build application
env:
- NODE_OPTIONS: '--no-warnings'
+ NODE_OPTIONS: '--no-warnings --max-old-space-size=8192'
NEXT_PUBLIC_APP_URL: 'https://www.sim.ai'
DATABASE_URL: 'postgresql://postgres:postgres@localhost:5432/simstudio'
STRIPE_SECRET_KEY: 'dummy_key_for_ci_only'

View File

@@ -74,10 +74,6 @@ docker compose -f docker-compose.prod.yml up -d
Open [http://localhost:3000](http://localhost:3000)
- #### Background worker note
- The Docker Compose stack starts a dedicated worker container by default. If `REDIS_URL` is not configured, the worker will start, log that it is idle, and do no queue processing. This is expected. Queue-backed API, webhook, and schedule execution requires Redis; installs without Redis continue to use the inline execution path.
Sim also supports local models via [Ollama](https://ollama.ai) and [vLLM](https://docs.vllm.ai/) — see the [Docker self-hosting docs](https://docs.sim.ai/self-hosting/docker) for setup details.
### Self-hosted: Manual Setup
@@ -123,12 +119,10 @@ cd packages/db && bun run db:migrate
5. Start development servers:
```bash
- bun run dev:full # Starts Next.js app, realtime socket server, and the BullMQ worker
+ bun run dev:full # Starts Next.js app and realtime socket server
```
- If `REDIS_URL` is not configured, the worker will remain idle and execution continues inline.
- Or run separately: `bun run dev` (Next.js), `cd apps/sim && bun run dev:sockets` (realtime), and `cd apps/sim && bun run worker` (BullMQ worker).
+ Or run separately: `bun run dev` (Next.js) and `cd apps/sim && bun run dev:sockets` (realtime).
## Copilot API Keys

View File

@@ -4625,6 +4625,42 @@ export function DynamoDBIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function IAMIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 80 80' xmlns='http://www.w3.org/2000/svg'>
<defs>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id='iamGradient'>
<stop stopColor='#BD0816' offset='0%' />
<stop stopColor='#FF5252' offset='100%' />
</linearGradient>
</defs>
<rect fill='url(#iamGradient)' width='80' height='80' />
<path
d='M14,59 L66,59 L66,21 L14,21 L14,59 Z M68,20 L68,60 C68,60.552 67.553,61 67,61 L13,61 C12.447,61 12,60.552 12,60 L12,20 C12,19.448 12.447,19 13,19 L67,19 C67.553,19 68,19.448 68,20 L68,20 Z M44,48 L59,48 L59,46 L44,46 L44,48 Z M57,42 L62,42 L62,40 L57,40 L57,42 Z M44,42 L52,42 L52,40 L44,40 L44,42 Z M29,46 C29,45.449 28.552,45 28,45 C27.448,45 27,45.449 27,46 C27,46.551 27.448,47 28,47 C28.552,47 29,46.551 29,46 L29,46 Z M31,46 C31,47.302 30.161,48.401 29,48.816 L29,51 L27,51 L27,48.815 C25.839,48.401 25,47.302 25,46 C25,44.346 26.346,43 28,43 C29.654,43 31,44.346 31,46 L31,46 Z M19,53.993 L36.994,54 L36.996,50 L33,50 L33,48 L36.996,48 L36.998,45 L33,45 L33,43 L36.999,43 L37,40.007 L19.006,40 L19,53.993 Z M22,38.001 L34,38.006 L34,31 C34.001,28.697 31.197,26.677 28,26.675 L27.996,26.675 C24.804,26.675 22.004,28.696 22.002,31 L22,38.001 Z M17,54.992 L17.006,39 C17.006,38.734 17.111,38.48 17.299,38.292 C17.486,38.105 17.741,38 18.006,38 L20,38.001 L20.002,31 C20.004,27.512 23.59,24.675 27.996,24.675 L28,24.675 C32.412,24.677 36.001,27.515 36,31 L36,38.007 L38,38.008 C38.553,38.008 39,38.456 39,39.008 L38.994,55 C38.994,55.266 38.889,55.52 38.701,55.708 C38.514,55.895 38.259,56 37.994,56 L18,55.992 C17.447,55.992 17,55.544 17,54.992 L17,54.992 Z M60,36 L62,36 L62,34 L60,34 L60,36 Z M44,36 L55,36 L55,34 L44,34 L44,36 Z'
fill='#FFFFFF'
/>
</svg>
)
}
export function STSIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 80 80' xmlns='http://www.w3.org/2000/svg'>
<defs>
<linearGradient x1='0%' y1='100%' x2='100%' y2='0%' id='stsGradient'>
<stop stopColor='#BD0816' offset='0%' />
<stop stopColor='#FF5252' offset='100%' />
</linearGradient>
</defs>
<rect fill='url(#stsGradient)' width='80' height='80' />
<path
d='M14,59 L66,59 L66,21 L14,21 L14,59 Z M68,20 L68,60 C68,60.552 67.553,61 67,61 L13,61 C12.447,61 12,60.552 12,60 L12,20 C12,19.448 12.447,19 13,19 L67,19 C67.553,19 68,19.448 68,20 L68,20 Z M44,48 L59,48 L59,46 L44,46 L44,48 Z M57,42 L62,42 L62,40 L57,40 L57,42 Z M44,42 L52,42 L52,40 L44,40 L44,42 Z M29,46 C29,45.449 28.552,45 28,45 C27.448,45 27,45.449 27,46 C27,46.551 27.448,47 28,47 C28.552,47 29,46.551 29,46 L29,46 Z M31,46 C31,47.302 30.161,48.401 29,48.816 L29,51 L27,51 L27,48.815 C25.839,48.401 25,47.302 25,46 C25,44.346 26.346,43 28,43 C29.654,43 31,44.346 31,46 L31,46 Z M19,53.993 L36.994,54 L36.996,50 L33,50 L33,48 L36.996,48 L36.998,45 L33,45 L33,43 L36.999,43 L37,40.007 L19.006,40 L19,53.993 Z M22,38.001 L34,38.006 L34,31 C34.001,28.697 31.197,26.677 28,26.675 L27.996,26.675 C24.804,26.675 22.004,28.696 22.002,31 L22,38.001 Z M17,54.992 L17.006,39 C17.006,38.734 17.111,38.48 17.299,38.292 C17.486,38.105 17.741,38 18.006,38 L20,38.001 L20.002,31 C20.004,27.512 23.59,24.675 27.996,24.675 L28,24.675 C32.412,24.677 36.001,27.515 36,31 L36,38.007 L38,38.008 C38.553,38.008 39,38.456 39,39.008 L38.994,55 C38.994,55.266 38.889,55.52 38.701,55.708 C38.514,55.895 38.259,56 37.994,56 L18,55.992 C17.447,55.992 17,55.544 17,54.992 L17,54.992 Z M60,36 L62,36 L62,34 L60,34 L60,36 Z M44,36 L55,36 L55,34 L44,34 L44,36 Z'
fill='#FFFFFF'
/>
</svg>
)
}
export function SecretsManagerIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 80 80' xmlns='http://www.w3.org/2000/svg'>
@@ -4835,6 +4871,17 @@ export function WordpressIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function AgiloftIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 47.3 47.2' xmlns='http://www.w3.org/2000/svg'>
<path d='M47.3,21.4H0v-4.3l4.3-4.2h43V21.4z' fill='#263A5C' />
<path d='M47.3,8.6H8.6L17.2,0h30.1V8.6z' fill='#001028' />
<path d='M0,25.7h47.3V30L43,34.4H0V25.7z' fill='#4A6587' />
<path d='M0,38.7h38.8l-8.6,8.5H0V38.7z' fill='#6D8DAF' />
</svg>
)
}
export function AhrefsIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 1065 1300'>

View File

@@ -6,6 +6,7 @@ import type { ComponentType, SVGProps } from 'react'
import {
A2AIcon,
AgentMailIcon,
AgiloftIcon,
AhrefsIcon,
AirtableIcon,
AirweaveIcon,
@@ -88,6 +89,7 @@ import {
HubspotIcon,
HuggingFaceIcon,
HunterIOIcon,
IAMIcon,
ImageIcon,
IncidentioIcon,
InfisicalIcon,
@@ -162,6 +164,7 @@ import {
SmtpIcon,
SQSIcon,
SshIcon,
STSIcon,
STTIcon,
StagehandIcon,
StripeIcon,
@@ -197,6 +200,7 @@ type IconComponent = ComponentType<SVGProps<SVGSVGElement>>
export const blockTypeToIconMap: Record<string, IconComponent> = {
a2a: A2AIcon,
agentmail: AgentMailIcon,
agiloft: AgiloftIcon,
ahrefs: AhrefsIcon,
airtable: AirtableIcon,
airweave: AirweaveIcon,
@@ -276,6 +280,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
hubspot: HubspotIcon,
huggingface: HuggingFaceIcon,
hunter: HunterIOIcon,
iam: IAMIcon,
image_generator: ImageIcon,
imap: MailServerIcon,
incidentio: IncidentioIcon,
@@ -354,6 +359,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
ssh: SshIcon,
stagehand: StagehandIcon,
stripe: StripeIcon,
sts: STSIcon,
stt_v2: STTIcon,
supabase: SupabaseIcon,
tailscale: TailscaleIcon,

View File

@@ -319,17 +319,6 @@ By default, your usage is capped at the credits included in your plan. To allow
Max (individual) shares the same rate limits as team plans. Team plans (Pro or Max for Teams) use the Max-tier rate limits.
- ### Concurrent Execution Limits
- | Plan | Concurrent Executions |
- |------|----------------------|
- | **Free** | 5 |
- | **Pro** | 50 |
- | **Max / Team** | 200 |
- | **Enterprise** | 200 (customizable) |
- Concurrent execution limits control how many workflow executions can run simultaneously within a workspace. When the limit is reached, new executions are queued and admitted as running executions complete. Manual runs from the editor are not subject to these limits.
### File Storage
| Plan | Storage |
@@ -0,0 +1,332 @@
---
title: Agiloft
description: Manage records in Agiloft CLM
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="agiloft"
color="#263A5C"
/>
{/* MANUAL-CONTENT-START:intro */}
[Agiloft](https://www.agiloft.com/) is an enterprise contract lifecycle management (CLM) platform that helps organizations automate and manage contracts, agreements, and related business processes across any knowledge base.
With the Agiloft integration in Sim, you can:
- **Create records**: Add new records to any Agiloft table with custom field values
- **Read records**: Retrieve individual records by ID with optional field selection
- **Update records**: Modify existing record fields in any table
- **Delete records**: Remove records from your knowledge base
- **Search records**: Find records using Agiloft's query syntax with pagination support
- **Select records**: Query records using SQL WHERE clauses for advanced filtering
- **Saved searches**: List saved search definitions available for a table
- **Attach files**: Upload and attach files to record fields
- **Retrieve attachments**: Download attached files from record fields
- **Remove attachments**: Delete attached files from record fields by position
- **Attachment info**: Get metadata about all files attached to a record field
- **Lock records**: Check, acquire, or release locks on records for concurrent editing
In Sim, the Agiloft integration enables your agents to manage contracts and records programmatically as part of automated workflows. Agents can create and update records, search across tables, handle file attachments, and manage record locks — enabling intelligent contract lifecycle automation.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Integrate with Agiloft contract lifecycle management to create, read, update, delete, and search records. Supports file attachments, SQL-based selection, saved searches, and record locking across any table in your knowledge base.
## Tools
### `agiloft_attach_file`
Attach a file to a field in an Agiloft record.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `instanceUrl` | string | Yes | Agiloft instance URL \(e.g., https://mycompany.agiloft.com\) |
| `knowledgeBase` | string | Yes | Knowledge base name |
| `login` | string | Yes | Agiloft username |
| `password` | string | Yes | Agiloft password |
| `table` | string | Yes | Table name \(e.g., "contracts"\) |
| `recordId` | string | Yes | ID of the record to attach the file to |
| `fieldName` | string | Yes | Name of the attachment field |
| `file` | file | No | File to attach |
| `fileName` | string | No | Name to assign to the file \(defaults to original file name\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `recordId` | string | ID of the record the file was attached to |
| `fieldName` | string | Name of the field the file was attached to |
| `fileName` | string | Name of the attached file |
| `totalAttachments` | number | Total number of files attached in the field after the operation |
### `agiloft_attachment_info`
Get information about file attachments on a record field.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `instanceUrl` | string | Yes | Agiloft instance URL \(e.g., https://mycompany.agiloft.com\) |
| `knowledgeBase` | string | Yes | Knowledge base name |
| `login` | string | Yes | Agiloft username |
| `password` | string | Yes | Agiloft password |
| `table` | string | Yes | Table name \(e.g., "contracts"\) |
| `recordId` | string | Yes | ID of the record to check attachments on |
| `fieldName` | string | Yes | Name of the attachment field to inspect |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `attachments` | array | List of attachments with position, name, and size |
| ↳ `position` | number | Position index of the attachment in the field |
| ↳ `name` | string | File name of the attachment |
| ↳ `size` | number | File size in bytes |
| `totalCount` | number | Total number of attachments in the field |
### `agiloft_create_record`
Create a new record in an Agiloft table.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `instanceUrl` | string | Yes | Agiloft instance URL \(e.g., https://mycompany.agiloft.com\) |
| `knowledgeBase` | string | Yes | Knowledge base name |
| `login` | string | Yes | Agiloft username |
| `password` | string | Yes | Agiloft password |
| `table` | string | Yes | Table name \(e.g., "contracts", "contacts.employees"\) |
| `data` | string | Yes | Record field values as a JSON object \(e.g., \{"first_name": "John", "status": "Active"\}\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | ID of the created record |
| `fields` | json | Field values of the created record |
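As an illustrative sketch, a `data` payload for a contract record might look like the following. The field names here are hypothetical — valid names depend entirely on the schema of your knowledge base table:

```json
{
  "contract_title": "Master Services Agreement",
  "contract_type": "MSA",
  "status": "Draft",
  "contract_amount": 25000
}
```

The value is passed as a JSON string; Agiloft field names typically use the underlying database column names rather than display labels.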
### `agiloft_delete_record`
Delete a record from an Agiloft table.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `instanceUrl` | string | Yes | Agiloft instance URL \(e.g., https://mycompany.agiloft.com\) |
| `knowledgeBase` | string | Yes | Knowledge base name |
| `login` | string | Yes | Agiloft username |
| `password` | string | Yes | Agiloft password |
| `table` | string | Yes | Table name \(e.g., "contracts", "contacts.employees"\) |
| `recordId` | string | Yes | ID of the record to delete |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | ID of the deleted record |
| `deleted` | boolean | Whether the record was successfully deleted |
### `agiloft_lock_record`
Lock, unlock, or check the lock status of an Agiloft record.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `instanceUrl` | string | Yes | Agiloft instance URL \(e.g., https://mycompany.agiloft.com\) |
| `knowledgeBase` | string | Yes | Knowledge base name |
| `login` | string | Yes | Agiloft username |
| `password` | string | Yes | Agiloft password |
| `table` | string | Yes | Table name \(e.g., "contracts"\) |
| `recordId` | string | Yes | ID of the record to lock, unlock, or check |
| `lockAction` | string | Yes | Action to perform: "lock", "unlock", or "check" |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | Record ID |
| `lockStatus` | string | Lock status \(e.g., "LOCKED", "UNLOCKED"\) |
| `lockedBy` | string | Username of the user who locked the record |
| `lockExpiresInMinutes` | number | Minutes until the lock expires |
### `agiloft_read_record`
Read a record by ID from an Agiloft table.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `instanceUrl` | string | Yes | Agiloft instance URL \(e.g., https://mycompany.agiloft.com\) |
| `knowledgeBase` | string | Yes | Knowledge base name |
| `login` | string | Yes | Agiloft username |
| `password` | string | Yes | Agiloft password |
| `table` | string | Yes | Table name \(e.g., "contracts", "contacts.employees"\) |
| `recordId` | string | Yes | ID of the record to read |
| `fields` | string | No | Comma-separated list of field names to include in the response |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | ID of the record |
| `fields` | json | Field values of the record |
### `agiloft_remove_attachment`
Remove an attached file from a field in an Agiloft record.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `instanceUrl` | string | Yes | Agiloft instance URL \(e.g., https://mycompany.agiloft.com\) |
| `knowledgeBase` | string | Yes | Knowledge base name |
| `login` | string | Yes | Agiloft username |
| `password` | string | Yes | Agiloft password |
| `table` | string | Yes | Table name \(e.g., "contracts"\) |
| `recordId` | string | Yes | ID of the record containing the attachment |
| `fieldName` | string | Yes | Name of the attachment field |
| `position` | string | Yes | Position index of the file to remove \(starting from 0\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `recordId` | string | ID of the record |
| `fieldName` | string | Name of the attachment field |
| `remainingAttachments` | number | Number of attachments remaining in the field after removal |
### `agiloft_retrieve_attachment`
Download an attached file from an Agiloft record field.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `instanceUrl` | string | Yes | Agiloft instance URL \(e.g., https://mycompany.agiloft.com\) |
| `knowledgeBase` | string | Yes | Knowledge base name |
| `login` | string | Yes | Agiloft username |
| `password` | string | Yes | Agiloft password |
| `table` | string | Yes | Table name \(e.g., "contracts"\) |
| `recordId` | string | Yes | ID of the record containing the attachment |
| `fieldName` | string | Yes | Name of the attachment field |
| `position` | string | Yes | Position index of the file in the field \(starting from 0\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `file` | file | Downloaded attachment file |
### `agiloft_saved_search`
List saved searches defined for an Agiloft table.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `instanceUrl` | string | Yes | Agiloft instance URL \(e.g., https://mycompany.agiloft.com\) |
| `knowledgeBase` | string | Yes | Knowledge base name |
| `login` | string | Yes | Agiloft username |
| `password` | string | Yes | Agiloft password |
| `table` | string | Yes | Table name to list saved searches for \(e.g., "contracts"\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `searches` | array | List of saved searches for the table |
| ↳ `name` | string | Saved search name |
| ↳ `label` | string | Saved search display label |
| ↳ `id` | string | Saved search database identifier |
| ↳ `description` | string | Saved search description |
### `agiloft_search_records`
Search for records in an Agiloft table using a query.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `instanceUrl` | string | Yes | Agiloft instance URL \(e.g., https://mycompany.agiloft.com\) |
| `knowledgeBase` | string | Yes | Knowledge base name |
| `login` | string | Yes | Agiloft username |
| `password` | string | Yes | Agiloft password |
| `table` | string | Yes | Table name to search in \(e.g., "contracts", "contacts.employees"\) |
| `query` | string | Yes | Search query using Agiloft query syntax \(e.g., "status=\'Active\'" or "company_name~=\'Acme\'"\) |
| `fields` | string | No | Comma-separated list of field names to include in the results |
| `page` | string | No | Page number for paginated results \(starting from 0\) |
| `limit` | string | No | Maximum number of records to return per page |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `records` | json | Array of matching records with their field values |
| `totalCount` | number | Total number of matching records |
| `page` | number | Current page number |
| `limit` | number | Records per page |
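For illustration, a paginated search for active contracts could combine the parameters like this. The table and field names are examples and must match your knowledge base schema:

```json
{
  "table": "contracts",
  "query": "status='Active'",
  "fields": "id,contract_title,status",
  "page": "0",
  "limit": "25"
}
```

Note that `page` starts at 0, and `fields` restricts the columns returned for each matching record.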
### `agiloft_select_records`
Select record IDs matching a SQL WHERE clause from an Agiloft table.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `instanceUrl` | string | Yes | Agiloft instance URL \(e.g., https://mycompany.agiloft.com\) |
| `knowledgeBase` | string | Yes | Knowledge base name |
| `login` | string | Yes | Agiloft username |
| `password` | string | Yes | Agiloft password |
| `table` | string | Yes | Table name \(e.g., "contracts", "contacts.employees"\) |
| `where` | string | Yes | SQL WHERE clause using database column names \(e.g., "summary like \'%new%\'" or "assigned_person=\'John Doe\'"\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `recordIds` | array | Array of record IDs matching the query |
| `totalCount` | number | Total number of matching records |
### `agiloft_update_record`
Update an existing record in an Agiloft table.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `instanceUrl` | string | Yes | Agiloft instance URL \(e.g., https://mycompany.agiloft.com\) |
| `knowledgeBase` | string | Yes | Knowledge base name |
| `login` | string | Yes | Agiloft username |
| `password` | string | Yes | Agiloft password |
| `table` | string | Yes | Table name \(e.g., "contracts", "contacts.employees"\) |
| `recordId` | string | Yes | ID of the record to update |
| `data` | string | Yes | Updated field values as a JSON object \(e.g., \{"status": "Active", "priority": "High"\}\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | string | ID of the updated record |
| `fields` | json | Updated field values of the record |
@@ -0,0 +1,443 @@
---
title: AWS IAM
description: Manage AWS IAM users, roles, policies, and groups
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="iam"
color="linear-gradient(45deg, #BD0816 0%, #FF5252 100%)"
/>
{/* MANUAL-CONTENT-START:intro */}
[AWS Identity and Access Management (IAM)](https://aws.amazon.com/iam/) is a web service that helps you securely control access to AWS resources. IAM lets you manage permissions that control which AWS resources users, groups, and roles can access.
With AWS IAM, you can:
- **Manage users**: Create and manage IAM users, assign them individual security credentials, and grant them permissions to access AWS services and resources
- **Create roles**: Define IAM roles with specific permissions that can be assumed by users, services, or applications for temporary access
- **Attach policies**: Assign managed policies to users and roles to define what actions they can perform on which resources
- **Organize with groups**: Create IAM groups to manage permissions for collections of users, simplifying access management at scale
- **Control access keys**: Generate and manage programmatic access key pairs for API and CLI access to AWS services
In Sim, the AWS IAM integration allows your workflows to automate identity management tasks such as provisioning new users, assigning roles and permissions, managing group memberships, and rotating access keys. This is particularly useful for onboarding automation, security compliance workflows, access reviews, and incident response — enabling your agents to manage AWS access control programmatically.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Integrate AWS Identity and Access Management into your workflow. Create and manage users, roles, policies, groups, and access keys.
## Tools
### `iam_list_users`
List IAM users in your AWS account
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `pathPrefix` | string | No | Path prefix to filter users \(e.g., /division_abc/\) |
| `maxItems` | number | No | Maximum number of users to return \(1-1000, default 100\) |
| `marker` | string | No | Pagination marker from a previous request |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `users` | json | List of IAM users with userName, userId, arn, path, and dates |
| `isTruncated` | boolean | Whether there are more results available |
| `marker` | string | Pagination marker for the next page of results |
| `count` | number | Number of users returned |
### `iam_get_user`
Get detailed information about an IAM user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `userName` | string | Yes | The name of the IAM user to retrieve |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `userName` | string | The name of the user |
| `userId` | string | The unique ID of the user |
| `arn` | string | The ARN of the user |
| `path` | string | The path to the user |
| `createDate` | string | Date the user was created |
| `passwordLastUsed` | string | Date the password was last used |
| `permissionsBoundaryArn` | string | ARN of the permissions boundary policy |
| `tags` | json | Tags attached to the user \(key, value pairs\) |
### `iam_create_user`
Create a new IAM user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `userName` | string | Yes | Name for the new IAM user \(1-64 characters\) |
| `path` | string | No | Path for the user \(e.g., /division_abc/\), defaults to / |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Operation status message |
| `userName` | string | The name of the created user |
| `userId` | string | The unique ID of the created user |
| `arn` | string | The ARN of the created user |
| `path` | string | The path of the created user |
| `createDate` | string | Date the user was created |
### `iam_delete_user`
Delete an IAM user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `userName` | string | Yes | The name of the IAM user to delete |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Operation status message |
### `iam_list_roles`
List IAM roles in your AWS account
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `pathPrefix` | string | No | Path prefix to filter roles \(e.g., /application/\) |
| `maxItems` | number | No | Maximum number of roles to return \(1-1000, default 100\) |
| `marker` | string | No | Pagination marker from a previous request |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `roles` | json | List of IAM roles with roleName, roleId, arn, path, and dates |
| `isTruncated` | boolean | Whether there are more results available |
| `marker` | string | Pagination marker for the next page of results |
| `count` | number | Number of roles returned |
### `iam_get_role`
Get detailed information about an IAM role
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `roleName` | string | Yes | The name of the IAM role to retrieve |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `roleName` | string | The name of the role |
| `roleId` | string | The unique ID of the role |
| `arn` | string | The ARN of the role |
| `path` | string | The path to the role |
| `createDate` | string | Date the role was created |
| `description` | string | Description of the role |
| `maxSessionDuration` | number | Maximum session duration in seconds |
| `assumeRolePolicyDocument` | string | The trust policy document \(JSON\) |
| `roleLastUsedDate` | string | Date the role was last used |
| `roleLastUsedRegion` | string | AWS region where the role was last used |
### `iam_create_role`
Create a new IAM role with a trust policy
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `roleName` | string | Yes | Name for the new IAM role \(1-64 characters\) |
| `assumeRolePolicyDocument` | string | Yes | Trust policy JSON specifying who can assume this role |
| `description` | string | No | Description of the role |
| `path` | string | No | Path for the role \(e.g., /application/\), defaults to / |
| `maxSessionDuration` | number | No | Maximum session duration in seconds \(3600-43200, default 3600\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Operation status message |
| `roleName` | string | The name of the created role |
| `roleId` | string | The unique ID of the created role |
| `arn` | string | The ARN of the created role |
| `path` | string | The path of the created role |
| `createDate` | string | Date the role was created |
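The `assumeRolePolicyDocument` is a standard IAM trust policy in JSON. For example, a trust policy that allows AWS Lambda to assume the role looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

The `Principal` can instead name an AWS account, IAM user, or federated identity, depending on who should be able to assume the role.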
### `iam_delete_role`
Delete an IAM role
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `roleName` | string | Yes | The name of the IAM role to delete |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Operation status message |
### `iam_attach_user_policy`
Attach a managed policy to an IAM user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `userName` | string | Yes | The name of the IAM user |
| `policyArn` | string | Yes | The ARN of the managed policy to attach |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Operation status message |
### `iam_detach_user_policy`
Remove a managed policy from an IAM user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `userName` | string | Yes | The name of the IAM user |
| `policyArn` | string | Yes | The ARN of the managed policy to detach |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Operation status message |
### `iam_attach_role_policy`
Attach a managed policy to an IAM role
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `roleName` | string | Yes | The name of the IAM role |
| `policyArn` | string | Yes | The ARN of the managed policy to attach |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Operation status message |
### `iam_detach_role_policy`
Remove a managed policy from an IAM role
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `roleName` | string | Yes | The name of the IAM role |
| `policyArn` | string | Yes | The ARN of the managed policy to detach |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Operation status message |
### `iam_list_policies`
List managed IAM policies
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `scope` | string | No | Filter by scope: All, AWS \(AWS-managed\), or Local \(customer-managed\) |
| `onlyAttached` | boolean | No | If true, only return policies attached to an entity |
| `pathPrefix` | string | No | Path prefix to filter policies |
| `maxItems` | number | No | Maximum number of policies to return \(1-1000, default 100\) |
| `marker` | string | No | Pagination marker from a previous request |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `policies` | json | List of policies with policyName, arn, attachmentCount, and dates |
| `isTruncated` | boolean | Whether there are more results available |
| `marker` | string | Pagination marker for the next page of results |
| `count` | number | Number of policies returned |
### `iam_create_access_key`
Create a new access key pair for an IAM user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `userName` | string | No | The IAM user to create the key for \(defaults to current user\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Operation status message |
| `accessKeyId` | string | The new access key ID |
| `secretAccessKey` | string | The new secret access key \(only shown once\) |
| `userName` | string | The user the key was created for |
| `status` | string | Status of the access key \(Active\) |
| `createDate` | string | Date the key was created |
### `iam_delete_access_key`
Delete an access key pair for an IAM user
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `accessKeyIdToDelete` | string | Yes | The access key ID to delete |
| `userName` | string | No | The IAM user whose key to delete \(defaults to current user\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Operation status message |
### `iam_list_groups`
List IAM groups in your AWS account
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `pathPrefix` | string | No | Path prefix to filter groups |
| `maxItems` | number | No | Maximum number of groups to return \(1-1000, default 100\) |
| `marker` | string | No | Pagination marker from a previous request |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `groups` | json | List of IAM groups with groupName, groupId, arn, and path |
| `isTruncated` | boolean | Whether there are more results available |
| `marker` | string | Pagination marker for the next page of results |
| `count` | number | Number of groups returned |
### `iam_add_user_to_group`
Add an IAM user to a group
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `userName` | string | Yes | The name of the IAM user |
| `groupName` | string | Yes | The name of the IAM group |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Operation status message |
### `iam_remove_user_from_group`
Remove an IAM user from a group
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `userName` | string | Yes | The name of the IAM user |
| `groupName` | string | Yes | The name of the IAM group |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `message` | string | Operation status message |
@@ -117,7 +117,7 @@ Create a new service request in Jira Service Management
| `description` | string | No | Description for the service request |
| `raiseOnBehalfOf` | string | No | Account ID of customer to raise request on behalf of |
| `requestFieldValues` | json | No | Request field values as key-value pairs \(overrides summary/description if provided\) |
| `formAnswers` | json | No | Form answers for form-based request types \(e.g., \{"summary": \{"text": "Title"\}, "customfield_10010": \{"choices": \["10320"\]\}\}\) |
| `formAnswers` | json | No | Form answers using numeric form question IDs as keys \(e.g., \{"1": \{"text": "Title"\}, "4": \{"choices": \["5"\]\}\}\). Keys are question IDs from the Jira Form, not Jira field names. |
| `requestParticipants` | string | No | Comma-separated account IDs to add as request participants |
| `channel` | string | No | Channel the request originates from \(e.g., portal, email\) |
@@ -758,4 +758,235 @@ List forms (ProForma/JSM Forms) attached to a Jira issue with metadata (name, su
| ↳ `formTemplateId` | string | Source form template ID \(UUID\) |
| `total` | number | Total number of forms |
### `jsm_attach_form`
Attach a form template to an existing Jira issue or JSM request
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Jira domain \(e.g., yourcompany.atlassian.net\) |
| `cloudId` | string | No | Jira Cloud ID for the instance |
| `issueIdOrKey` | string | Yes | Issue ID or key to attach the form to \(e.g., "SD-123"\) |
| `formTemplateId` | string | Yes | Form template UUID \(from Get Form Templates\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | Timestamp of the operation |
| `issueIdOrKey` | string | Issue ID or key |
| `id` | string | Attached form instance ID \(UUID\) |
| `name` | string | Form name |
| `updated` | string | Last updated timestamp |
| `submitted` | boolean | Whether the form has been submitted |
| `lock` | boolean | Whether the form is locked |
| `internal` | boolean | Whether the form is internal only |
| `formTemplateId` | string | Form template ID |
### `jsm_save_form_answers`
Save answers to a form attached to a Jira issue or JSM request
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Jira domain \(e.g., yourcompany.atlassian.net\) |
| `cloudId` | string | No | Jira Cloud ID for the instance |
| `issueIdOrKey` | string | Yes | Issue ID or key \(e.g., "SD-123"\) |
| `formId` | string | Yes | Form instance UUID \(from Attach Form or Get Issue Forms\) |
| `answers` | json | Yes | Form answers using numeric question IDs as keys \(e.g., \{"1": \{"text": "Title"\}, "4": \{"choices": \["5"\]\}\}\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | Timestamp of the operation |
| `issueIdOrKey` | string | Issue ID or key |
| `formId` | string | Form instance UUID |
| `state` | json | Form state with status \(open, submitted, locked\) |
| `updated` | string | Last updated timestamp |
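The `answers` parameter keys each entry by its numeric question ID, with the value shape depending on the question type. A minimal sketch of building that payload — the helper name, question IDs, and answer kinds below are illustrative, not part of the tool's API:

```typescript
// Hypothetical helper showing the answers payload shape for
// jsm_save_form_answers. Question IDs ("1", "4") are examples only.
type FormAnswer =
  | { text: string }
  | { choices: string[] }
  | { date: string }

function buildAnswers(entries: Record<string, FormAnswer>): string {
  // The tool expects a JSON object keyed by numeric question IDs.
  return JSON.stringify(entries)
}

const payload = buildAnswers({
  '1': { text: 'Laptop replacement request' },
  '4': { choices: ['5'] },
})
```

Text questions take a `text` value, choice questions take an array of choice IDs as strings.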
### `jsm_submit_form`
Submit a form on a Jira issue or JSM request, locking it from further edits
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Jira domain \(e.g., yourcompany.atlassian.net\) |
| `cloudId` | string | No | Jira Cloud ID for the instance |
| `issueIdOrKey` | string | Yes | Issue ID or key \(e.g., "SD-123"\) |
| `formId` | string | Yes | Form instance UUID \(from Attach Form or Get Issue Forms\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | Timestamp of the operation |
| `issueIdOrKey` | string | Issue ID or key |
| `formId` | string | Form instance UUID |
| `status` | string | Form status after submission \(open, submitted, locked\) |
### `jsm_get_form`
Get a single form with full design, state, and answers from a Jira issue
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Jira domain \(e.g., yourcompany.atlassian.net\) |
| `cloudId` | string | No | Jira Cloud ID for the instance |
| `issueIdOrKey` | string | Yes | Issue ID or key \(e.g., "SD-123"\) |
| `formId` | string | Yes | Form instance UUID \(from Attach Form or Get Issue Forms\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | Timestamp of the operation |
| `issueIdOrKey` | string | Issue ID or key |
| `formId` | string | Form instance UUID |
| `design` | json | Full form design with questions, layout, conditions, sections, settings |
| `state` | json | Form state with answers map, status \(o=open, s=submitted, l=locked\), visibility \(i=internal, e=external\) |
| `updated` | string | Last updated timestamp |
### `jsm_get_form_answers`
Get simplified answers from a form attached to a Jira issue or JSM request
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Jira domain \(e.g., yourcompany.atlassian.net\) |
| `cloudId` | string | No | Jira Cloud ID for the instance |
| `issueIdOrKey` | string | Yes | Issue ID or key \(e.g., "SD-123"\) |
| `formId` | string | Yes | Form instance UUID \(from Attach Form or Get Issue Forms\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | Timestamp of the operation |
| `issueIdOrKey` | string | Issue ID or key |
| `formId` | string | Form instance UUID |
| `answers` | json | Simplified form answers as key-value pairs \(question label to answer text/choices\) |
### `jsm_reopen_form`
Reopen a submitted form on a Jira issue or JSM request, allowing further edits
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Jira domain \(e.g., yourcompany.atlassian.net\) |
| `cloudId` | string | No | Jira Cloud ID for the instance |
| `issueIdOrKey` | string | Yes | Issue ID or key \(e.g., "SD-123"\) |
| `formId` | string | Yes | Form instance UUID \(from Get Issue Forms\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | Timestamp of the operation |
| `issueIdOrKey` | string | Issue ID or key |
| `formId` | string | Form instance UUID |
| `status` | string | Form status after reopening \(open, submitted, locked\) |
### `jsm_delete_form`
Remove a form from a Jira issue or JSM request
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Jira domain \(e.g., yourcompany.atlassian.net\) |
| `cloudId` | string | No | Jira Cloud ID for the instance |
| `issueIdOrKey` | string | Yes | Issue ID or key \(e.g., "SD-123"\) |
| `formId` | string | Yes | Form instance UUID to delete |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | Timestamp of the operation |
| `issueIdOrKey` | string | Issue ID or key |
| `formId` | string | Deleted form instance UUID |
| `deleted` | boolean | Whether the form was successfully deleted |
### `jsm_externalise_form`
Make a form visible to customers on a Jira issue or JSM request
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Jira domain \(e.g., yourcompany.atlassian.net\) |
| `cloudId` | string | No | Jira Cloud ID for the instance |
| `issueIdOrKey` | string | Yes | Issue ID or key \(e.g., "SD-123"\) |
| `formId` | string | Yes | Form instance UUID |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | Timestamp of the operation |
| `issueIdOrKey` | string | Issue ID or key |
| `formId` | string | Form instance UUID |
| `visibility` | string | Form visibility after change \(internal or external\) |
### `jsm_internalise_form`
Make a form internal only (not visible to customers) on a Jira issue or JSM request
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Jira domain \(e.g., yourcompany.atlassian.net\) |
| `cloudId` | string | No | Jira Cloud ID for the instance |
| `issueIdOrKey` | string | Yes | Issue ID or key \(e.g., "SD-123"\) |
| `formId` | string | Yes | Form instance UUID |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | Timestamp of the operation |
| `issueIdOrKey` | string | Issue ID or key |
| `formId` | string | Form instance UUID |
| `visibility` | string | Form visibility after change \(internal or external\) |
### `jsm_copy_forms`
Copy forms from one Jira issue to another
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `domain` | string | Yes | Your Jira domain \(e.g., yourcompany.atlassian.net\) |
| `cloudId` | string | No | Jira Cloud ID for the instance |
| `sourceIssueIdOrKey` | string | Yes | Source issue ID or key to copy forms from \(e.g., "SD-123"\) |
| `targetIssueIdOrKey` | string | Yes | Target issue ID or key to copy forms to \(e.g., "SD-456"\) |
| `formIds` | json | No | Optional JSON array of form UUIDs to copy \(e.g., \["uuid1", "uuid2"\]\). If omitted, copies all forms. |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `ts` | string | Timestamp of the operation |
| `sourceIssueIdOrKey` | string | Source issue ID or key |
| `targetIssueIdOrKey` | string | Target issue ID or key |
| `copiedForms` | json | Array of successfully copied forms |
| `errors` | json | Array of errors encountered during copy |

View File

@@ -3,6 +3,7 @@
"index",
"a2a",
"agentmail",
"agiloft",
"ahrefs",
"airtable",
"airweave",
@@ -82,6 +83,7 @@
"hubspot",
"huggingface",
"hunter",
"iam",
"image_generator",
"imap",
"incidentio",
@@ -160,6 +162,7 @@
"ssh",
"stagehand",
"stripe",
"sts",
"stt",
"supabase",
"table",

View File

@@ -0,0 +1,128 @@
---
title: AWS STS
description: Connect to AWS Security Token Service
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="sts"
color="linear-gradient(45deg, #BD0816 0%, #FF5252 100%)"
/>
{/* MANUAL-CONTENT-START:intro */}
[AWS Security Token Service (STS)](https://docs.aws.amazon.com/STS/latest/APIReference/welcome.html) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users).
With AWS STS, you can:
- **Assume IAM roles**: Request temporary credentials to access AWS resources across accounts or with elevated permissions
- **Verify identity**: Determine the AWS account, ARN, and user ID associated with the calling credentials
- **Generate session tokens**: Obtain temporary credentials with optional MFA protection for enhanced security
- **Audit access keys**: Look up the AWS account that owns a given access key for security investigations
In Sim, the AWS STS integration allows your agents to manage temporary credentials as part of automated workflows. This is useful for cross-account access patterns, credential rotation, identity verification before sensitive operations, and security auditing. Agents can assume roles to interact with other AWS services, verify their own identity, or look up access key ownership without exposing long-lived credentials.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Integrate AWS STS into your workflow. Assume roles, get temporary credentials, verify caller identity, and look up access key information.
## Tools
### `sts_assume_role`
Assume an IAM role and receive temporary security credentials
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `roleArn` | string | Yes | ARN of the IAM role to assume |
| `roleSessionName` | string | Yes | Identifier for the assumed role session |
| `durationSeconds` | number | No | Duration of the session in seconds \(900-43200, default 3600\) |
| `externalId` | string | No | External ID for cross-account access |
| `serialNumber` | string | No | MFA device serial number or ARN |
| `tokenCode` | string | No | MFA token code \(6 digits\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `accessKeyId` | string | Temporary access key ID |
| `secretAccessKey` | string | Temporary secret access key |
| `sessionToken` | string | Temporary session token |
| `expiration` | string | Credential expiration timestamp |
| `assumedRoleArn` | string | ARN of the assumed role |
| `assumedRoleId` | string | Assumed role ID with session name |
| `packedPolicySize` | number | Percentage of allowed policy size used |
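The documented `durationSeconds` bounds (900–43200 seconds, default 3600) can be validated before invoking the tool. A minimal sketch — the helper name is hypothetical and the bounds are taken from the table above:

```typescript
// Validates the documented durationSeconds range for sts_assume_role:
// 900-43200 seconds, defaulting to 3600 when omitted.
function resolveAssumeRoleDuration(durationSeconds?: number): number {
  const DEFAULT = 3600
  const MIN = 900
  const MAX = 43200
  if (durationSeconds === undefined) return DEFAULT
  if (durationSeconds < MIN || durationSeconds > MAX) {
    throw new RangeError(`durationSeconds must be between ${MIN} and ${MAX}`)
  }
  return durationSeconds
}
```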
### `sts_get_caller_identity`
Get details about the IAM user or role whose credentials are used to call the API
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `account` | string | AWS account ID |
| `arn` | string | ARN of the calling entity |
| `userId` | string | Unique identifier of the calling entity |
### `sts_get_session_token`
Get temporary security credentials for an IAM user, optionally with MFA
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `durationSeconds` | number | No | Duration of the session in seconds \(900-129600, default 43200\) |
| `serialNumber` | string | No | MFA device serial number or ARN |
| `tokenCode` | string | No | MFA token code \(6 digits\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `accessKeyId` | string | Temporary access key ID |
| `secretAccessKey` | string | Temporary secret access key |
| `sessionToken` | string | Temporary session token |
| `expiration` | string | Credential expiration timestamp |
### `sts_get_access_key_info`
Get the AWS account ID associated with an access key
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `region` | string | Yes | AWS region \(e.g., us-east-1\) |
| `accessKeyId` | string | Yes | AWS access key ID |
| `secretAccessKey` | string | Yes | AWS secret access key |
| `targetAccessKeyId` | string | Yes | The access key ID to look up |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `account` | string | AWS account ID that owns the access key |

View File

@@ -36,6 +36,7 @@ Before connecting Trello in Sim, add your Sim app origin to the **Allowed Origin
Trello's authorization flow redirects back to Sim using a `return_url`. If your Sim origin is not whitelisted in Trello, Trello will block the redirect and the connection flow will fail before Sim can save the token.
{/* MANUAL-CONTENT-END */}
Integrate with Trello to list board lists, list cards, create cards, update cards, review activity, and add comments.

View File

@@ -2,8 +2,8 @@
import { type SVGProps, useEffect, useRef, useState } from 'react'
import { AnimatePresence, motion, useInView } from 'framer-motion'
import ReactMarkdown, { type Components } from 'react-markdown'
import remarkGfm from 'remark-gfm'
import { Streamdown } from 'streamdown'
import 'streamdown/styles.css'
import { ChevronDown } from '@/components/emcn'
import { Database, File, Library, Table } from '@/components/emcn/icons'
import {
@@ -557,8 +557,8 @@ The team agreed to prioritize the new onboarding flow. Key decisions:
Follow up with engineering on the timeline for the API v2 migration. Draft the proposal for the board meeting next week.`
const MD_COMPONENTS: Components = {
h1: ({ children }) => (
const MD_COMPONENTS = {
h1: ({ children }: { children?: React.ReactNode }) => (
<p
role='presentation'
className='mb-4 border-[#E5E5E5] border-b pb-2 font-semibold text-[#1C1C1C] text-[20px]'
@@ -566,17 +566,23 @@ const MD_COMPONENTS: Components = {
{children}
</p>
),
h2: ({ children }) => (
h2: ({ children }: { children?: React.ReactNode }) => (
<h2 className='mt-5 mb-3 border-[#E5E5E5] border-b pb-1.5 font-semibold text-[#1C1C1C] text-[16px]'>
{children}
</h2>
),
ul: ({ children }) => <ul className='mb-3 list-disc pl-6'>{children}</ul>,
ol: ({ children }) => <ol className='mb-3 list-decimal pl-6'>{children}</ol>,
li: ({ children }) => (
ul: ({ children }: { children?: React.ReactNode }) => (
<ul className='mb-3 list-disc pl-6'>{children}</ul>
),
ol: ({ children }: { children?: React.ReactNode }) => (
<ol className='mb-3 list-decimal pl-6'>{children}</ol>
),
li: ({ children }: { children?: React.ReactNode }) => (
<li className='mb-1 text-[#1C1C1C] text-[14px] leading-[1.6]'>{children}</li>
),
p: ({ children }) => <p className='mb-3 text-[#1C1C1C] text-[14px] leading-[1.6]'>{children}</p>,
p: ({ children }: { children?: React.ReactNode }) => (
<p className='mb-3 text-[#1C1C1C] text-[14px] leading-[1.6]'>{children}</p>
),
}
function MockFullFiles() {
@@ -618,9 +624,9 @@ function MockFullFiles() {
transition={{ duration: 0.4, delay: 0.5 }}
>
<div className='h-full overflow-auto p-6'>
<ReactMarkdown remarkPlugins={[remarkGfm]} components={MD_COMPONENTS}>
<Streamdown mode='static' components={MD_COMPONENTS}>
{source}
</ReactMarkdown>
</Streamdown>
</div>
</motion.div>
</div>

View File

@@ -7,7 +7,7 @@ import { getFormattedGitHubStars } from '@/app/(landing)/actions/github'
const logger = createLogger('github-stars')
const INITIAL_STARS = '27.6k'
const INITIAL_STARS = '27.7k'
/**
* Client component that displays GitHub stars count.

View File

@@ -1,6 +1,6 @@
'use client'
import { useCallback, useEffect, useRef, useState } from 'react'
import { useCallback, useEffect, useRef, useState, useSyncExternalStore } from 'react'
import Image from 'next/image'
import Link from 'next/link'
import { useSearchParams } from 'next/navigation'
@@ -38,6 +38,8 @@ const NAV_LINKS: NavLink[] = [
const LOGO_CELL = 'flex items-center pl-5 lg:pl-16 pr-5'
const LINK_CELL = 'flex items-center px-3.5'
const emptySubscribe = () => () => {}
interface NavbarProps {
logoOnly?: boolean
blogPosts?: NavBlogPost[]
@@ -51,6 +53,12 @@ export default function Navbar({ logoOnly = false, blogPosts = [] }: NavbarProps
const isBrowsingHome = searchParams.has('home')
const useHomeLinks = isAuthenticated || isBrowsingHome
const logoHref = useHomeLinks ? '/?home' : '/'
const mounted = useSyncExternalStore(
emptySubscribe,
() => true,
() => false
)
const shouldShow = mounted && !isSessionPending
const [activeDropdown, setActiveDropdown] = useState<DropdownId>(null)
const [mobileMenuOpen, setMobileMenuOpen] = useState(false)
const closeTimerRef = useRef<ReturnType<typeof setTimeout> | null>(null)
@@ -206,9 +214,11 @@ export default function Navbar({ logoOnly = false, blogPosts = [] }: NavbarProps
<div className='hidden flex-1 lg:block' />
<div
aria-hidden={!shouldShow || undefined}
inert={!shouldShow || undefined}
className={cn(
'hidden items-center gap-2 pr-16 pl-5 lg:flex',
isSessionPending && 'invisible'
'hidden items-center gap-2 pr-16 pl-5 transition-opacity duration-200 lg:flex',
shouldShow ? 'opacity-100' : 'pointer-events-none opacity-0'
)}
>
{isAuthenticated ? (
@@ -326,7 +336,12 @@ export default function Navbar({ logoOnly = false, blogPosts = [] }: NavbarProps
</ul>
<div
className={cn('mt-auto flex flex-col gap-2.5 p-5', isSessionPending && 'invisible')}
aria-hidden={!shouldShow || undefined}
inert={!shouldShow || undefined}
className={cn(
'mt-auto flex flex-col gap-2.5 p-5 transition-opacity duration-200',
shouldShow ? 'opacity-100' : 'pointer-events-none opacity-0'
)}
>
{isAuthenticated ? (
<Link
@@ -392,11 +407,7 @@ interface NavChevronProps {
open: boolean
}
/**
* Animated chevron matching the exact geometry of the emcn ChevronDown SVG.
* Each arm rotates around its midpoint so the center vertex travels up/down
* while the outer endpoints adjust — producing a Stripe-style morph.
*/
/** Matches the exact geometry of the emcn ChevronDown SVG — transform origins are intentional. */
function NavChevron({ open }: NavChevronProps) {
return (
<svg width='9' height='6' viewBox='0 0 10 6' fill='none' className='mt-[1.5px] flex-shrink-0'>

View File

@@ -28,7 +28,6 @@ const PRICING_TIERS: PricingTier[] = [
'5GB file storage',
'3 tables · 1,000 rows each',
'5 min execution limit',
'5 concurrent/workspace',
'7-day log retention',
'CLI/SDK/MCP Access',
],
@@ -46,7 +45,6 @@ const PRICING_TIERS: PricingTier[] = [
'50GB file storage',
'25 tables · 5,000 rows each',
'50 min execution · 150 runs/min',
'50 concurrent/workspace',
'Unlimited log retention',
'CLI/SDK/MCP Access',
],
@@ -64,7 +62,6 @@ const PRICING_TIERS: PricingTier[] = [
'500GB file storage',
'25 tables · 5,000 rows each',
'50 min execution · 300 runs/min',
'200 concurrent/workspace',
'Unlimited log retention',
'CLI/SDK/MCP Access',
],
@@ -81,7 +78,6 @@ const PRICING_TIERS: PricingTier[] = [
'Custom file storage',
'10,000 tables · 1M rows each',
'Custom execution limits',
'Custom concurrency limits',
'Unlimited log retention',
'SSO & SCIM · SOC2',
'Self hosting · Dedicated support',

View File

@@ -6,6 +6,7 @@ import type { ComponentType, SVGProps } from 'react'
import {
A2AIcon,
AgentMailIcon,
AgiloftIcon,
AhrefsIcon,
AirtableIcon,
AirweaveIcon,
@@ -88,6 +89,7 @@ import {
HubspotIcon,
HuggingFaceIcon,
HunterIOIcon,
IAMIcon,
ImageIcon,
IncidentioIcon,
InfisicalIcon,
@@ -162,6 +164,7 @@ import {
SmtpIcon,
SQSIcon,
SshIcon,
STSIcon,
STTIcon,
StagehandIcon,
StripeIcon,
@@ -197,6 +200,7 @@ type IconComponent = ComponentType<SVGProps<SVGSVGElement>>
export const blockTypeToIconMap: Record<string, IconComponent> = {
a2a: A2AIcon,
agentmail: AgentMailIcon,
agiloft: AgiloftIcon,
ahrefs: AhrefsIcon,
airtable: AirtableIcon,
airweave: AirweaveIcon,
@@ -276,6 +280,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
hubspot: HubspotIcon,
huggingface: HuggingFaceIcon,
hunter: HunterIOIcon,
iam: IAMIcon,
image_generator: ImageIcon,
imap: MailServerIcon,
incidentio: IncidentioIcon,
@@ -354,6 +359,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
ssh: SshIcon,
stagehand: StagehandIcon,
stripe: StripeIcon,
sts: STSIcon,
stt_v2: STTIcon,
supabase: SupabaseIcon,
tailscale: TailscaleIcon,

View File

@@ -208,6 +208,73 @@
"integrationTypes": ["email", "communication"],
"tags": ["messaging"]
},
{
"type": "agiloft",
"slug": "agiloft",
"name": "Agiloft",
"description": "Manage records in Agiloft CLM",
"longDescription": "Integrate with Agiloft contract lifecycle management to create, read, update, delete, and search records. Supports file attachments, SQL-based selection, saved searches, and record locking across any table in your knowledge base.",
"bgColor": "#263A5C",
"iconName": "AgiloftIcon",
"docsUrl": "https://docs.sim.ai/tools/agiloft",
"operations": [
{
"name": "Create Record",
"description": "Create a new record in an Agiloft table."
},
{
"name": "Read Record",
"description": "Read a record by ID from an Agiloft table."
},
{
"name": "Update Record",
"description": "Update an existing record in an Agiloft table."
},
{
"name": "Delete Record",
"description": "Delete a record from an Agiloft table."
},
{
"name": "Search Records",
"description": "Search for records in an Agiloft table using a query."
},
{
"name": "Select Records",
"description": "Select record IDs matching a SQL WHERE clause from an Agiloft table."
},
{
"name": "Saved Search",
"description": "List saved searches defined for an Agiloft table."
},
{
"name": "Attach File",
"description": "Attach a file to a field in an Agiloft record."
},
{
"name": "Retrieve Attachment",
"description": "Download an attached file from an Agiloft record field."
},
{
"name": "Remove Attachment",
"description": "Remove an attached file from a field in an Agiloft record."
},
{
"name": "Attachment Info",
"description": "Get information about file attachments on a record field."
},
{
"name": "Lock Record",
"description": "Lock, unlock, or check the lock status of an Agiloft record."
}
],
"operationCount": 12,
"triggers": [],
"triggerCount": 0,
"authType": "none",
"category": "tools",
"integrationTypes": ["productivity", "developer-tools"],
"tags": ["automation"]
},
{
"type": "ahrefs",
"slug": "ahrefs",
@@ -1316,6 +1383,97 @@
"integrationTypes": ["crm", "sales"],
"tags": ["sales-engagement", "enrichment"]
},
{
"type": "iam",
"slug": "aws-iam",
"name": "AWS IAM",
"description": "Manage AWS IAM users, roles, policies, and groups",
"longDescription": "Integrate AWS Identity and Access Management into your workflow. Create and manage users, roles, policies, groups, and access keys.",
"bgColor": "linear-gradient(45deg, #BD0816 0%, #FF5252 100%)",
"iconName": "IAMIcon",
"docsUrl": "https://docs.sim.ai/tools/iam",
"operations": [
{
"name": "List Users",
"description": "List IAM users in your AWS account"
},
{
"name": "Get User",
"description": "Get detailed information about an IAM user"
},
{
"name": "Create User",
"description": "Create a new IAM user"
},
{
"name": "Delete User",
"description": "Delete an IAM user"
},
{
"name": "List Roles",
"description": "List IAM roles in your AWS account"
},
{
"name": "Get Role",
"description": "Get detailed information about an IAM role"
},
{
"name": "Create Role",
"description": "Create a new IAM role with a trust policy"
},
{
"name": "Delete Role",
"description": "Delete an IAM role"
},
{
"name": "Attach User Policy",
"description": "Attach a managed policy to an IAM user"
},
{
"name": "Detach User Policy",
"description": "Remove a managed policy from an IAM user"
},
{
"name": "Attach Role Policy",
"description": "Attach a managed policy to an IAM role"
},
{
"name": "Detach Role Policy",
"description": "Remove a managed policy from an IAM role"
},
{
"name": "List Policies",
"description": "List managed IAM policies"
},
{
"name": "Create Access Key",
"description": "Create a new access key pair for an IAM user"
},
{
"name": "Delete Access Key",
"description": "Delete an access key pair for an IAM user"
},
{
"name": "List Groups",
"description": "List IAM groups in your AWS account"
},
{
"name": "Add User to Group",
"description": "Add an IAM user to a group"
},
{
"name": "Remove User from Group",
"description": "Remove an IAM user from a group"
}
],
"operationCount": 18,
"triggers": [],
"triggerCount": 0,
"authType": "none",
"category": "tools",
"integrationTypes": ["developer-tools", "security"],
"tags": ["cloud", "identity"]
},
{
"type": "secrets_manager",
"slug": "aws-secrets-manager",
@@ -1355,6 +1513,41 @@
"integrationTypes": ["developer-tools", "security"],
"tags": ["cloud", "secrets-management"]
},
{
"type": "sts",
"slug": "aws-sts",
"name": "AWS STS",
"description": "Connect to AWS Security Token Service",
"longDescription": "Integrate AWS STS into your workflow. Assume roles, get temporary credentials, verify caller identity, and look up access key information.",
"bgColor": "linear-gradient(45deg, #BD0816 0%, #FF5252 100%)",
"iconName": "STSIcon",
"docsUrl": "https://docs.sim.ai/tools/sts",
"operations": [
{
"name": "Assume Role",
"description": "Assume an IAM role and receive temporary security credentials"
},
{
"name": "Get Caller Identity",
"description": "Get details about the IAM user or role whose credentials are used to call the API"
},
{
"name": "Get Session Token",
"description": "Get temporary security credentials for an IAM user, optionally with MFA"
},
{
"name": "Get Access Key Info",
"description": "Get the AWS account ID associated with an access key"
}
],
"operationCount": 4,
"triggers": [],
"triggerCount": 0,
"authType": "none",
"category": "tools",
"integrationTypes": ["security", "developer-tools"],
"tags": ["cloud"]
},
{
"type": "textract_v2",
"slug": "aws-textract",
@@ -2374,7 +2567,7 @@
"authType": "none",
"category": "tools",
"integrationTypes": ["security", "analytics", "developer-tools"],
"tags": ["monitoring", "security"]
"tags": ["identity", "monitoring"]
},
{
"type": "cursor_v2",
@@ -6675,9 +6868,49 @@
{
"name": "Get Issue Forms",
"description": "List forms (ProForma/JSM Forms) attached to a Jira issue with metadata (name, submitted status, lock)"
},
{
"name": "Attach Form",
"description": "Attach a form template to an existing Jira issue or JSM request"
},
{
"name": "Save Form Answers",
"description": "Save answers to a form attached to a Jira issue or JSM request"
},
{
"name": "Submit Form",
"description": "Submit a form on a Jira issue or JSM request, locking it from further edits"
},
{
"name": "Get Form",
"description": "Get a single form with full design, state, and answers from a Jira issue"
},
{
"name": "Get Form Answers",
"description": "Get simplified answers from a form attached to a Jira issue or JSM request"
},
{
"name": "Reopen Form",
"description": "Reopen a submitted form on a Jira issue or JSM request, allowing further edits"
},
{
"name": "Delete Form",
"description": "Remove a form from a Jira issue or JSM request"
},
{
"name": "Externalise Form",
"description": "Make a form visible to customers on a Jira issue or JSM request"
},
{
"name": "Internalise Form",
"description": "Make a form internal only (not visible to customers) on a Jira issue or JSM request"
},
{
"name": "Copy Forms",
"description": "Copy forms from one Jira issue to another"
}
],
"operationCount": 24,
"operationCount": 34,
"triggers": [],
"triggerCount": 0,
"authType": "oauth",

View File

@@ -0,0 +1,144 @@
import { db } from '@sim/db'
import { user } from '@sim/db/schema'
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { env } from '@/lib/core/config/env'
const ENV_URLS: Record<string, string | undefined> = {
dev: env.MOTHERSHIP_DEV_URL,
staging: env.MOTHERSHIP_STAGING_URL,
prod: env.MOTHERSHIP_PROD_URL,
}
function getMothershipUrl(environment: string): string | null {
return ENV_URLS[environment] ?? null
}
async function isAdminRequestAuthorized() {
const session = await getSession()
if (!session?.user?.id) return false
const [currentUser] = await db
.select({ role: user.role })
.from(user)
.where(eq(user.id, session.user.id))
.limit(1)
return currentUser?.role === 'admin'
}
/**
* Proxy to the mothership admin API.
*
* Query params:
* env - "dev" | "staging" | "prod"
* endpoint - the admin endpoint path, e.g. "requests", "licenses", "traces"
*
* The request body (for POST) is forwarded as-is. Additional query params
* (e.g. requestId for GET /traces) are forwarded.
*/
export async function POST(req: NextRequest) {
if (!(await isAdminRequestAuthorized())) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const adminKey = env.MOTHERSHIP_API_ADMIN_KEY
if (!adminKey) {
return NextResponse.json({ error: 'MOTHERSHIP_API_ADMIN_KEY not configured' }, { status: 500 })
}
const { searchParams } = new URL(req.url)
const environment = searchParams.get('env') || 'dev'
const endpoint = searchParams.get('endpoint')
if (!endpoint) {
return NextResponse.json({ error: 'endpoint query param required' }, { status: 400 })
}
const baseUrl = getMothershipUrl(environment)
if (!baseUrl) {
return NextResponse.json(
{ error: `No URL configured for environment: ${environment}` },
{ status: 400 }
)
}
const targetUrl = `${baseUrl}/api/admin/${endpoint}`
try {
const body = await req.text()
const upstream = await fetch(targetUrl, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-api-key': adminKey,
},
...(body ? { body } : {}),
})
const data = await upstream.json()
return NextResponse.json(data, { status: upstream.status })
} catch (error) {
return NextResponse.json(
{
error: `Failed to reach mothership (${environment}): ${error instanceof Error ? error.message : 'Unknown error'}`,
},
{ status: 502 }
)
}
}
export async function GET(req: NextRequest) {
if (!(await isAdminRequestAuthorized())) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const adminKey = env.MOTHERSHIP_API_ADMIN_KEY
if (!adminKey) {
return NextResponse.json({ error: 'MOTHERSHIP_API_ADMIN_KEY not configured' }, { status: 500 })
}
const { searchParams } = new URL(req.url)
const environment = searchParams.get('env') || 'dev'
const endpoint = searchParams.get('endpoint')
if (!endpoint) {
return NextResponse.json({ error: 'endpoint query param required' }, { status: 400 })
}
const baseUrl = getMothershipUrl(environment)
if (!baseUrl) {
return NextResponse.json(
{ error: `No URL configured for environment: ${environment}` },
{ status: 400 }
)
}
const forwardParams = new URLSearchParams()
searchParams.forEach((value, key) => {
if (key !== 'env' && key !== 'endpoint') {
forwardParams.set(key, value)
}
})
const qs = forwardParams.toString()
const targetUrl = `${baseUrl}/api/admin/${endpoint}${qs ? `?${qs}` : ''}`
try {
const upstream = await fetch(targetUrl, {
method: 'GET',
headers: { 'x-api-key': adminKey },
})
const data = await upstream.json()
return NextResponse.json(data, { status: upstream.status })
} catch (error) {
return NextResponse.json(
{
error: `Failed to reach mothership (${environment}): ${error instanceof Error ? error.message : 'Unknown error'}`,
},
{ status: 502 }
)
}
}
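The proxy's URL construction can be sketched as a pure function: `env` selects the upstream base URL, `endpoint` selects the admin path, and every other query param is forwarded verbatim. The helper name below is illustrative:

```typescript
// Mirrors the proxy's forwarding logic: strip the routing params
// (env, endpoint) and pass everything else through to the upstream.
function buildMothershipUrl(baseUrl: string, params: URLSearchParams): string {
  const endpoint = params.get('endpoint') ?? ''
  const forward = new URLSearchParams()
  params.forEach((value, key) => {
    if (key !== 'env' && key !== 'endpoint') forward.set(key, value)
  })
  const qs = forward.toString()
  return `${baseUrl}/api/admin/${endpoint}${qs ? `?${qs}` : ''}`
}
```

For example, `?env=dev&endpoint=traces&requestId=abc` resolves to `{devBaseUrl}/api/admin/traces?requestId=abc`.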

View File

@@ -4,7 +4,7 @@ import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { recordUsage } from '@/lib/billing/core/usage-log'
import { checkAndBillOverageThreshold } from '@/lib/billing/threshold-billing'
import { checkInternalApiKey } from '@/lib/copilot/utils'
import { checkInternalApiKey } from '@/lib/copilot/request/http'
import { isBillingEnabled } from '@/lib/core/config/feature-flags'
import { generateRequestId } from '@/lib/core/utils/request'

View File

@@ -274,6 +274,50 @@ describe('Chat API Route', () => {
)
})
it('passes chat customizations and outputConfigs through in the API request shape', async () => {
mockGetSession.mockResolvedValue({
user: { id: 'user-id', email: 'user@example.com' },
})
const validData = {
workflowId: 'workflow-123',
identifier: 'test-chat',
title: 'Test Chat',
customizations: {
primaryColor: '#000000',
welcomeMessage: 'Hello',
imageUrl: 'https://example.com/icon.png',
},
outputConfigs: [{ blockId: 'agent-1', path: 'content' }],
}
mockLimit.mockResolvedValueOnce([])
mockCheckWorkflowAccessForChatCreation.mockResolvedValue({
hasAccess: true,
workflow: { userId: 'user-id', workspaceId: null, isDeployed: true },
})
const req = new NextRequest('http://localhost:3000/api/chat', {
method: 'POST',
body: JSON.stringify(validData),
})
const response = await POST(req)
expect(response.status).toBe(200)
expect(mockPerformChatDeploy).toHaveBeenCalledWith(
expect.objectContaining({
workflowId: 'workflow-123',
identifier: 'test-chat',
customizations: {
primaryColor: '#000000',
welcomeMessage: 'Hello',
imageUrl: 'https://example.com/icon.png',
},
outputConfigs: [{ blockId: 'agent-1', path: 'content' }],
})
)
})
it('should allow chat deployment when user has workspace admin permission', async () => {
mockGetSession.mockResolvedValue({
user: { id: 'user-id', email: 'user@example.com' },


@@ -1,8 +1,11 @@
+import { db } from '@sim/db'
+import { user } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
+import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkServerSideUsageLimits } from '@/lib/billing/calculations/usage-monitor'
-import { checkInternalApiKey } from '@/lib/copilot/utils'
+import { checkInternalApiKey } from '@/lib/copilot/request/http'
const logger = createLogger('CopilotApiKeysValidate')
@@ -34,6 +37,12 @@ export async function POST(req: NextRequest) {
const { userId } = validationResult.data
const [existingUser] = await db.select().from(user).where(eq(user.id, userId)).limit(1)
if (!existingUser) {
logger.warn('[API VALIDATION] userId does not exist', { userId })
return NextResponse.json({ error: 'User not found' }, { status: 403 })
}
logger.info('[API VALIDATION] Validating usage limit', { userId })
const { isExceeded, currentUsage, limit } = await checkServerSideUsageLimits(userId)


@@ -1,11 +1,14 @@
import { createLogger } from '@sim/logger'
import { NextResponse } from 'next/server'
import { getLatestRunForStream } from '@/lib/copilot/async-runs/repository'
-import { abortActiveStream, waitForPendingChatStream } from '@/lib/copilot/chat-streaming'
import { SIM_AGENT_API_URL } from '@/lib/copilot/constants'
-import { authenticateCopilotRequestSessionOnly } from '@/lib/copilot/request-helpers'
+import { authenticateCopilotRequestSessionOnly } from '@/lib/copilot/request/http'
+import { abortActiveStream, waitForPendingChatStream } from '@/lib/copilot/request/session'
import { env } from '@/lib/core/config/env'
const logger = createLogger('CopilotChatAbortAPI')
const GO_EXPLICIT_ABORT_TIMEOUT_MS = 3000
+const STREAM_ABORT_SETTLE_TIMEOUT_MS = 8000
export async function POST(request: Request) {
const { userId: authenticatedUserId, isAuthenticated } =
@@ -15,7 +18,12 @@ export async function POST(request: Request) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
-const body = await request.json().catch(() => ({}))
+const body = await request.json().catch((err) => {
+logger.warn('Abort request body parse failed; continuing with empty object', {
+error: err instanceof Error ? err.message : String(err),
+})
+return {}
+})
const streamId = typeof body.streamId === 'string' ? body.streamId : ''
let chatId = typeof body.chatId === 'string' ? body.chatId : ''
@@ -24,7 +32,13 @@ export async function POST(request: Request) {
}
if (!chatId) {
-const run = await getLatestRunForStream(streamId, authenticatedUserId).catch(() => null)
+const run = await getLatestRunForStream(streamId, authenticatedUserId).catch((err) => {
+logger.warn('getLatestRunForStream failed while resolving chatId for abort', {
+streamId,
+error: err instanceof Error ? err.message : String(err),
+})
+return null
+})
if (run?.chatId) {
chatId = run.chatId
}
@@ -36,7 +50,10 @@ export async function POST(request: Request) {
headers['x-api-key'] = env.COPILOT_API_KEY
}
const controller = new AbortController()
-const timeout = setTimeout(() => controller.abort(), GO_EXPLICIT_ABORT_TIMEOUT_MS)
+const timeout = setTimeout(
+() => controller.abort('timeout:go_explicit_abort_fetch'),
+GO_EXPLICIT_ABORT_TIMEOUT_MS
+)
const response = await fetch(`${SIM_AGENT_API_URL}/api/streams/explicit-abort`, {
method: 'POST',
headers,
@@ -50,15 +67,24 @@ export async function POST(request: Request) {
if (!response.ok) {
throw new Error(`Explicit abort marker request failed: ${response.status}`)
}
-} catch {
-// best effort: local abort should still proceed even if Go marker fails
+} catch (err) {
+logger.warn('Explicit abort marker request failed; proceeding with local abort', {
+streamId,
+error: err instanceof Error ? err.message : String(err),
+})
}
const aborted = await abortActiveStream(streamId)
if (chatId) {
-await waitForPendingChatStream(chatId, GO_EXPLICIT_ABORT_TIMEOUT_MS + 1000, streamId).catch(
-() => false
-)
+const settled = await waitForPendingChatStream(chatId, STREAM_ABORT_SETTLE_TIMEOUT_MS, streamId)
+if (!settled) {
+return NextResponse.json(
+{ error: 'Previous response is still shutting down', aborted, settled: false },
+{ status: 409 }
+)
+}
+return NextResponse.json({ aborted, settled: true })
}
return NextResponse.json({ aborted })
}
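The watchdog timeout above now aborts with an explicit reason string. A minimal illustration of the pattern (reason value copied from the route; `AbortSignal.reason` requires Node 17.2+):

```typescript
// abort(reason) surfaces the reason on signal.reason and as the rejection value
// of any fetch using this signal, which distinguishes this watchdog timeout from
// a caller-initiated abort when reading logs.
const controller = new AbortController()
controller.abort('timeout:go_explicit_abort_fetch')

console.log(controller.signal.aborted) // true
console.log(controller.signal.reason) // 'timeout:go_explicit_abort_fetch'
```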


@@ -36,11 +36,11 @@ vi.mock('drizzle-orm', () => ({
eq: vi.fn((field: unknown, value: unknown) => ({ field, value, type: 'eq' })),
}))
-vi.mock('@/lib/copilot/chat-lifecycle', () => ({
+vi.mock('@/lib/copilot/chat/lifecycle', () => ({
getAccessibleCopilotChat: mockGetAccessibleCopilotChat,
}))
-vi.mock('@/lib/copilot/task-events', () => ({
+vi.mock('@/lib/copilot/tasks', () => ({
taskPubSub: { publishStatusChanged: vi.fn() },
}))


@@ -5,8 +5,8 @@ import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
-import { getAccessibleCopilotChat } from '@/lib/copilot/chat-lifecycle'
-import { taskPubSub } from '@/lib/copilot/task-events'
+import { getAccessibleCopilotChat } from '@/lib/copilot/chat/lifecycle'
+import { taskPubSub } from '@/lib/copilot/tasks'
const logger = createLogger('DeleteChatAPI')


@@ -0,0 +1,175 @@
import { db } from '@sim/db'
import { copilotChats } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, desc, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getLatestRunForStream } from '@/lib/copilot/async-runs/repository'
import { getAccessibleCopilotChat } from '@/lib/copilot/chat/lifecycle'
import {
authenticateCopilotRequestSessionOnly,
createBadRequestResponse,
createInternalServerErrorResponse,
createUnauthorizedResponse,
} from '@/lib/copilot/request/http'
import { readFilePreviewSessions } from '@/lib/copilot/request/session'
import { readEvents } from '@/lib/copilot/request/session/buffer'
import { toStreamBatchEvent } from '@/lib/copilot/request/session/types'
import { authorizeWorkflowByWorkspacePermission } from '@/lib/workflows/utils'
import { assertActiveWorkspaceAccess } from '@/lib/workspaces/permissions/utils'
const logger = createLogger('CopilotChatAPI')
function transformChat(chat: {
id: string
title: string | null
model: string | null
messages: unknown
planArtifact?: unknown
config?: unknown
conversationId?: string | null
resources?: unknown
createdAt: Date | null
updatedAt: Date | null
}) {
return {
id: chat.id,
title: chat.title,
model: chat.model,
messages: Array.isArray(chat.messages) ? chat.messages : [],
messageCount: Array.isArray(chat.messages) ? chat.messages.length : 0,
planArtifact: chat.planArtifact || null,
config: chat.config || null,
...('conversationId' in chat ? { activeStreamId: chat.conversationId || null } : {}),
...('resources' in chat
? { resources: Array.isArray(chat.resources) ? chat.resources : [] }
: {}),
createdAt: chat.createdAt,
updatedAt: chat.updatedAt,
}
}
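The conditional spreads in `transformChat` above mean `activeStreamId` and `resources` only appear in the response shape when the query selected those columns. A standalone reproduction of the pattern (trimmed illustrative copy, not an import from this commit):

```typescript
// Standalone copy of the conditional-spread pattern in transformChat above:
// optional columns only surface in the API shape when present on the row.
type ChatRow = {
  id: string
  messages: unknown
  conversationId?: string | null
  resources?: unknown
}

function transformChatLike(chat: ChatRow) {
  return {
    id: chat.id,
    messages: Array.isArray(chat.messages) ? chat.messages : [],
    messageCount: Array.isArray(chat.messages) ? chat.messages.length : 0,
    ...('conversationId' in chat ? { activeStreamId: chat.conversationId || null } : {}),
    ...('resources' in chat ? { resources: Array.isArray(chat.resources) ? chat.resources : [] } : {}),
  }
}
```

A row without `conversationId` produces an object with no `activeStreamId` key at all, rather than an `undefined` value, so clients can key off property presence.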
export async function GET(req: NextRequest) {
try {
const { searchParams } = new URL(req.url)
const workflowId = searchParams.get('workflowId')
const workspaceId = searchParams.get('workspaceId')
const chatId = searchParams.get('chatId')
const { userId: authenticatedUserId, isAuthenticated } =
await authenticateCopilotRequestSessionOnly()
if (!isAuthenticated || !authenticatedUserId) {
return createUnauthorizedResponse()
}
if (chatId) {
const chat = await getAccessibleCopilotChat(chatId, authenticatedUserId)
if (!chat) {
return NextResponse.json({ success: false, error: 'Chat not found' }, { status: 404 })
}
let streamSnapshot: {
events: ReturnType<typeof toStreamBatchEvent>[]
previewSessions: Awaited<ReturnType<typeof readFilePreviewSessions>>
status: string
} | null = null
if (chat.conversationId) {
try {
const [events, previewSessions, run] = await Promise.all([
readEvents(chat.conversationId, '0'),
readFilePreviewSessions(chat.conversationId).catch((error) => {
logger.warn('Failed to read preview sessions for copilot chat', {
chatId,
conversationId: chat.conversationId,
error: error instanceof Error ? error.message : String(error),
})
return []
}),
getLatestRunForStream(chat.conversationId, authenticatedUserId).catch((error) => {
logger.warn('Failed to fetch latest run for copilot chat snapshot', {
chatId,
conversationId: chat.conversationId,
error: error instanceof Error ? error.message : String(error),
})
return null
}),
])
streamSnapshot = {
events: events.map(toStreamBatchEvent),
previewSessions,
status:
typeof run?.status === 'string'
? run.status
: events.length > 0
? 'active'
: 'unknown',
}
} catch (error) {
logger.warn('Failed to load copilot chat stream snapshot', {
chatId,
conversationId: chat.conversationId,
error: error instanceof Error ? error.message : String(error),
})
}
}
logger.info(`Retrieved chat ${chatId}`)
return NextResponse.json({
success: true,
chat: {
...transformChat(chat),
...(streamSnapshot ? { streamSnapshot } : {}),
},
})
}
if (!workflowId && !workspaceId) {
return createBadRequestResponse('workflowId, workspaceId, or chatId is required')
}
if (workspaceId) {
await assertActiveWorkspaceAccess(workspaceId, authenticatedUserId)
}
if (workflowId) {
const authorization = await authorizeWorkflowByWorkspacePermission({
workflowId,
userId: authenticatedUserId,
action: 'read',
})
if (!authorization.allowed) {
return createUnauthorizedResponse()
}
}
const scopeFilter = workflowId
? eq(copilotChats.workflowId, workflowId)
: eq(copilotChats.workspaceId, workspaceId!)
const chats = await db
.select({
id: copilotChats.id,
title: copilotChats.title,
model: copilotChats.model,
messages: copilotChats.messages,
planArtifact: copilotChats.planArtifact,
config: copilotChats.config,
createdAt: copilotChats.createdAt,
updatedAt: copilotChats.updatedAt,
})
.from(copilotChats)
.where(and(eq(copilotChats.userId, authenticatedUserId), scopeFilter))
.orderBy(desc(copilotChats.updatedAt))
const scope = workflowId ? `workflow ${workflowId}` : `workspace ${workspaceId}`
logger.info(`Retrieved ${chats.length} chats for ${scope}`)
return NextResponse.json({
success: true,
chats: chats.map(transformChat),
})
} catch (error) {
logger.error('Error fetching copilot chats:', error)
return createInternalServerErrorResponse('Failed to fetch chats')
}
}
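The snapshot status fallback above (prefer the persisted run status, else infer from buffered events) can be read as a small pure function; a hypothetical extraction:

```typescript
// Mirrors the nested ternary in the stream snapshot above: trust a persisted run
// status when it is a string, otherwise treat any buffered events as an active stream.
function resolveStreamStatus(runStatus: unknown, eventCount: number): string {
  if (typeof runStatus === 'string') return runStatus
  return eventCount > 0 ? 'active' : 'unknown'
}
```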


@@ -0,0 +1,65 @@
import { db } from '@sim/db'
import { copilotChats } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { getAccessibleCopilotChat } from '@/lib/copilot/chat/lifecycle'
import { taskPubSub } from '@/lib/copilot/tasks'
const logger = createLogger('RenameChatAPI')
const RenameChatSchema = z.object({
chatId: z.string().min(1),
title: z.string().min(1).max(200),
})
export async function PATCH(request: NextRequest) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ success: false, error: 'Unauthorized' }, { status: 401 })
}
const body = await request.json()
const { chatId, title } = RenameChatSchema.parse(body)
const chat = await getAccessibleCopilotChat(chatId, session.user.id)
if (!chat) {
return NextResponse.json({ success: false, error: 'Chat not found' }, { status: 404 })
}
const now = new Date()
const [updated] = await db
.update(copilotChats)
.set({ title, updatedAt: now, lastSeenAt: now })
.where(and(eq(copilotChats.id, chatId), eq(copilotChats.userId, session.user.id)))
.returning({ id: copilotChats.id, workspaceId: copilotChats.workspaceId })
if (!updated) {
return NextResponse.json({ success: false, error: 'Chat not found' }, { status: 404 })
}
logger.info('Chat renamed', { chatId, title })
if (updated.workspaceId) {
taskPubSub?.publishStatusChanged({
workspaceId: updated.workspaceId,
chatId,
type: 'renamed',
})
}
return NextResponse.json({ success: true })
} catch (error) {
if (error instanceof z.ZodError) {
return NextResponse.json(
{ success: false, error: 'Invalid request data', details: error.errors },
{ status: 400 }
)
}
logger.error('Error renaming chat:', error)
return NextResponse.json({ success: false, error: 'Failed to rename chat' }, { status: 500 })
}
}


@@ -10,8 +10,8 @@ import {
createInternalServerErrorResponse,
createNotFoundResponse,
createUnauthorizedResponse,
-} from '@/lib/copilot/request-helpers'
-import type { ChatResource, ResourceType } from '@/lib/copilot/resources'
+} from '@/lib/copilot/request/http'
+import type { ChatResource, ResourceType } from '@/lib/copilot/resources/persistence'
const logger = createLogger('CopilotChatResourcesAPI')
@@ -21,13 +21,14 @@ const VALID_RESOURCE_TYPES = new Set<ResourceType>([
'workflow',
'knowledgebase',
'folder',
+'log',
])
-const GENERIC_TITLES = new Set(['Table', 'File', 'Workflow', 'Knowledge Base', 'Folder'])
+const GENERIC_TITLES = new Set(['Table', 'File', 'Workflow', 'Knowledge Base', 'Folder', 'Log'])
const AddResourceSchema = z.object({
chatId: z.string(),
resource: z.object({
-type: z.enum(['table', 'file', 'workflow', 'knowledgebase', 'folder']),
+type: z.enum(['table', 'file', 'workflow', 'knowledgebase', 'folder', 'log']),
id: z.string(),
title: z.string(),
}),
@@ -35,7 +36,7 @@ const AddResourceSchema = z.object({
const RemoveResourceSchema = z.object({
chatId: z.string(),
-resourceType: z.enum(['table', 'file', 'workflow', 'knowledgebase', 'folder']),
+resourceType: z.enum(['table', 'file', 'workflow', 'knowledgebase', 'folder', 'log']),
resourceId: z.string(),
})
@@ -43,7 +44,7 @@ const ReorderResourcesSchema = z.object({
chatId: z.string(),
resources: z.array(
z.object({
-type: z.enum(['table', 'file', 'workflow', 'knowledgebase', 'folder']),
+type: z.enum(['table', 'file', 'workflow', 'knowledgebase', 'folder', 'log']),
id: z.string(),
title: z.string(),
})


@@ -1,804 +1,2 @@
import { db } from '@sim/db'
import { copilotChats } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, desc, eq, sql } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { createRunSegment } from '@/lib/copilot/async-runs/repository'
import { getAccessibleCopilotChat, resolveOrCreateChat } from '@/lib/copilot/chat-lifecycle'
import { buildCopilotRequestPayload } from '@/lib/copilot/chat-payload'
import {
acquirePendingChatStream,
createSSEStream,
releasePendingChatStream,
requestChatTitle,
SSE_RESPONSE_HEADERS,
} from '@/lib/copilot/chat-streaming'
import { COPILOT_REQUEST_MODES } from '@/lib/copilot/models'
import { orchestrateCopilotStream } from '@/lib/copilot/orchestrator'
import { getStreamMeta, readStreamEvents } from '@/lib/copilot/orchestrator/stream/buffer'
import type { OrchestratorResult } from '@/lib/copilot/orchestrator/types'
import { resolveActiveResourceContext } from '@/lib/copilot/process-contents'
import {
authenticateCopilotRequestSessionOnly,
createBadRequestResponse,
createInternalServerErrorResponse,
createRequestTracker,
createUnauthorizedResponse,
} from '@/lib/copilot/request-helpers'
import { generateId } from '@/lib/core/utils/uuid'
import { captureServerEvent } from '@/lib/posthog/server'
import {
authorizeWorkflowByWorkspacePermission,
resolveWorkflowIdForUser,
} from '@/lib/workflows/utils'
import {
assertActiveWorkspaceAccess,
getUserEntityPermissions,
} from '@/lib/workspaces/permissions/utils'
export const maxDuration = 3600
const logger = createLogger('CopilotChatAPI')
const FileAttachmentSchema = z.object({
id: z.string(),
key: z.string(),
filename: z.string(),
media_type: z.string(),
size: z.number(),
})
const ResourceAttachmentSchema = z.object({
type: z.enum(['workflow', 'table', 'file', 'knowledgebase']),
id: z.string().min(1),
title: z.string().optional(),
active: z.boolean().optional(),
})
const ChatMessageSchema = z.object({
message: z.string().min(1, 'Message is required'),
userMessageId: z.string().optional(),
chatId: z.string().optional(),
workflowId: z.string().optional(),
workspaceId: z.string().optional(),
workflowName: z.string().optional(),
model: z.string().optional().default('claude-opus-4-6'),
mode: z.enum(COPILOT_REQUEST_MODES).optional().default('agent'),
prefetch: z.boolean().optional(),
createNewChat: z.boolean().optional().default(false),
stream: z.boolean().optional().default(true),
implicitFeedback: z.string().optional(),
fileAttachments: z.array(FileAttachmentSchema).optional(),
resourceAttachments: z.array(ResourceAttachmentSchema).optional(),
provider: z.string().optional(),
contexts: z
.array(
z.object({
kind: z.enum([
'past_chat',
'workflow',
'current_workflow',
'blocks',
'logs',
'workflow_block',
'knowledge',
'templates',
'docs',
'table',
'file',
'folder',
]),
label: z.string(),
chatId: z.string().optional(),
workflowId: z.string().optional(),
knowledgeId: z.string().optional(),
blockId: z.string().optional(),
blockIds: z.array(z.string()).optional(),
templateId: z.string().optional(),
executionId: z.string().optional(),
tableId: z.string().optional(),
fileId: z.string().optional(),
folderId: z.string().optional(),
})
)
.optional(),
commands: z.array(z.string()).optional(),
userTimezone: z.string().optional(),
})
/**
* POST /api/copilot/chat
* Send messages to sim agent and handle chat persistence
*/
export async function POST(req: NextRequest) {
const tracker = createRequestTracker()
let actualChatId: string | undefined
let pendingChatStreamAcquired = false
let pendingChatStreamHandedOff = false
let pendingChatStreamID: string | undefined
try {
// Get session to access user information including name
const session = await getSession()
if (!session?.user?.id) {
return createUnauthorizedResponse()
}
const authenticatedUserId = session.user.id
const body = await req.json()
const {
message,
userMessageId,
chatId,
workflowId: providedWorkflowId,
workspaceId: requestedWorkspaceId,
workflowName,
model,
mode,
prefetch,
createNewChat,
stream,
implicitFeedback,
fileAttachments,
resourceAttachments,
provider,
contexts,
commands,
userTimezone,
} = ChatMessageSchema.parse(body)
const normalizedContexts = Array.isArray(contexts)
? contexts.map((ctx) => {
if (ctx.kind !== 'blocks') return ctx
if (Array.isArray(ctx.blockIds) && ctx.blockIds.length > 0) return ctx
if (ctx.blockId) {
return {
...ctx,
blockIds: [ctx.blockId],
}
}
return ctx
})
: contexts
// Copilot route always requires a workflow scope
const resolved = await resolveWorkflowIdForUser(
authenticatedUserId,
providedWorkflowId,
workflowName,
requestedWorkspaceId
)
if (!resolved) {
return createBadRequestResponse(
'No workflows found. Create a workflow first or provide a valid workflowId.'
)
}
const workflowId = resolved.workflowId
const workflowResolvedName = resolved.workflowName
// Resolve workspace from workflow so it can be sent as implicit context to the copilot.
let resolvedWorkspaceId: string | undefined
try {
const { getWorkflowById } = await import('@/lib/workflows/utils')
const wf = await getWorkflowById(workflowId)
resolvedWorkspaceId = wf?.workspaceId ?? undefined
} catch {
logger
.withMetadata({ requestId: tracker.requestId, messageId: userMessageId })
.warn('Failed to resolve workspaceId from workflow')
}
captureServerEvent(
authenticatedUserId,
'copilot_chat_sent',
{
workflow_id: workflowId,
workspace_id: resolvedWorkspaceId ?? '',
has_file_attachments: Array.isArray(fileAttachments) && fileAttachments.length > 0,
has_contexts: Array.isArray(contexts) && contexts.length > 0,
mode,
},
{
groups: resolvedWorkspaceId ? { workspace: resolvedWorkspaceId } : undefined,
setOnce: { first_copilot_use_at: new Date().toISOString() },
}
)
const userMessageIdToUse = userMessageId || generateId()
const reqLogger = logger.withMetadata({
requestId: tracker.requestId,
messageId: userMessageIdToUse,
})
try {
reqLogger.info('Received chat POST', {
workflowId,
hasContexts: Array.isArray(normalizedContexts),
contextsCount: Array.isArray(normalizedContexts) ? normalizedContexts.length : 0,
contextsPreview: Array.isArray(normalizedContexts)
? normalizedContexts.map((c: any) => ({
kind: c?.kind,
chatId: c?.chatId,
workflowId: c?.workflowId,
executionId: (c as any)?.executionId,
label: c?.label,
}))
: undefined,
})
} catch {}
let currentChat: any = null
let conversationHistory: any[] = []
actualChatId = chatId
const selectedModel = model || 'claude-opus-4-6'
if (chatId || createNewChat) {
const chatResult = await resolveOrCreateChat({
chatId,
userId: authenticatedUserId,
workflowId,
model: selectedModel,
})
currentChat = chatResult.chat
actualChatId = chatResult.chatId || chatId
conversationHistory = Array.isArray(chatResult.conversationHistory)
? chatResult.conversationHistory
: []
if (chatId && !currentChat) {
return createBadRequestResponse('Chat not found')
}
}
let agentContexts: Array<{ type: string; content: string }> = []
if (Array.isArray(normalizedContexts) && normalizedContexts.length > 0) {
try {
const { processContextsServer } = await import('@/lib/copilot/process-contents')
const processed = await processContextsServer(
normalizedContexts as any,
authenticatedUserId,
message,
resolvedWorkspaceId,
actualChatId
)
agentContexts = processed
reqLogger.info('Contexts processed for request', {
processedCount: agentContexts.length,
kinds: agentContexts.map((c) => c.type),
lengthPreview: agentContexts.map((c) => c.content?.length ?? 0),
})
if (
Array.isArray(normalizedContexts) &&
normalizedContexts.length > 0 &&
agentContexts.length === 0
) {
reqLogger.warn(
'Contexts provided but none processed. Check executionId for logs contexts.'
)
}
} catch (e) {
reqLogger.error('Failed to process contexts', e)
}
}
if (
Array.isArray(resourceAttachments) &&
resourceAttachments.length > 0 &&
resolvedWorkspaceId
) {
const results = await Promise.allSettled(
resourceAttachments.map(async (r) => {
const ctx = await resolveActiveResourceContext(
r.type,
r.id,
resolvedWorkspaceId!,
authenticatedUserId,
actualChatId
)
if (!ctx) return null
return {
...ctx,
tag: r.active ? '@active_tab' : '@open_tab',
}
})
)
for (const result of results) {
if (result.status === 'fulfilled' && result.value) {
agentContexts.push(result.value)
} else if (result.status === 'rejected') {
reqLogger.error('Failed to resolve resource attachment', result.reason)
}
}
}
const effectiveMode = mode === 'agent' ? 'build' : mode
const userPermission = resolvedWorkspaceId
? await getUserEntityPermissions(authenticatedUserId, 'workspace', resolvedWorkspaceId).catch(
() => null
)
: null
const requestPayload = await buildCopilotRequestPayload(
{
message,
workflowId: workflowId || '',
workflowName: workflowResolvedName,
workspaceId: resolvedWorkspaceId,
userId: authenticatedUserId,
userMessageId: userMessageIdToUse,
mode,
model: selectedModel,
provider,
contexts: agentContexts,
fileAttachments,
commands,
chatId: actualChatId,
prefetch,
implicitFeedback,
userPermission: userPermission ?? undefined,
userTimezone,
},
{
selectedModel,
}
)
try {
reqLogger.info('About to call Sim Agent', {
hasContext: agentContexts.length > 0,
contextCount: agentContexts.length,
hasFileAttachments: Array.isArray(requestPayload.fileAttachments),
messageLength: message.length,
mode: effectiveMode,
hasTools: Array.isArray(requestPayload.tools),
toolCount: Array.isArray(requestPayload.tools) ? requestPayload.tools.length : 0,
hasBaseTools: Array.isArray(requestPayload.baseTools),
baseToolCount: Array.isArray(requestPayload.baseTools)
? requestPayload.baseTools.length
: 0,
hasCredentials: !!requestPayload.credentials,
})
} catch {}
if (stream && actualChatId) {
const acquired = await acquirePendingChatStream(actualChatId, userMessageIdToUse)
if (!acquired) {
return NextResponse.json(
{
error:
'A response is already in progress for this chat. Wait for it to finish or use Stop.',
},
{ status: 409 }
)
}
pendingChatStreamAcquired = true
pendingChatStreamID = userMessageIdToUse
}
if (actualChatId) {
const userMsg = {
id: userMessageIdToUse,
role: 'user' as const,
content: message,
timestamp: new Date().toISOString(),
...(fileAttachments && fileAttachments.length > 0 && { fileAttachments }),
...(Array.isArray(normalizedContexts) &&
normalizedContexts.length > 0 && {
contexts: normalizedContexts,
}),
}
const [updated] = await db
.update(copilotChats)
.set({
messages: sql`${copilotChats.messages} || ${JSON.stringify([userMsg])}::jsonb`,
conversationId: userMessageIdToUse,
updatedAt: new Date(),
})
.where(eq(copilotChats.id, actualChatId))
.returning({ messages: copilotChats.messages })
if (updated) {
const freshMessages: any[] = Array.isArray(updated.messages) ? updated.messages : []
conversationHistory = freshMessages.filter((m: any) => m.id !== userMessageIdToUse)
}
}
if (stream) {
const executionId = generateId()
const runId = generateId()
const sseStream = createSSEStream({
requestPayload,
userId: authenticatedUserId,
streamId: userMessageIdToUse,
executionId,
runId,
chatId: actualChatId,
currentChat,
isNewChat: conversationHistory.length === 0,
message,
titleModel: selectedModel,
titleProvider: provider,
requestId: tracker.requestId,
workspaceId: resolvedWorkspaceId,
pendingChatStreamAlreadyRegistered: Boolean(actualChatId && stream),
orchestrateOptions: {
userId: authenticatedUserId,
workflowId,
chatId: actualChatId,
executionId,
runId,
goRoute: '/api/copilot',
autoExecuteTools: true,
interactive: true,
onComplete: async (result: OrchestratorResult) => {
if (!actualChatId) return
if (!result.success) return
const assistantMessage: Record<string, unknown> = {
id: generateId(),
role: 'assistant' as const,
content: result.content,
timestamp: new Date().toISOString(),
...(result.requestId ? { requestId: result.requestId } : {}),
}
if (result.toolCalls.length > 0) {
assistantMessage.toolCalls = result.toolCalls
}
if (result.contentBlocks.length > 0) {
assistantMessage.contentBlocks = result.contentBlocks.map((block) => {
const stored: Record<string, unknown> = { type: block.type }
if (block.content) stored.content = block.content
if (block.type === 'tool_call' && block.toolCall) {
const state =
block.toolCall.result?.success !== undefined
? block.toolCall.result.success
? 'success'
: 'error'
: block.toolCall.status
const isSubagentTool = !!block.calledBy
const isNonTerminal =
state === 'cancelled' || state === 'pending' || state === 'executing'
stored.toolCall = {
id: block.toolCall.id,
name: block.toolCall.name,
state,
...(isSubagentTool && isNonTerminal ? {} : { result: block.toolCall.result }),
...(isSubagentTool && isNonTerminal
? {}
: block.toolCall.params
? { params: block.toolCall.params }
: {}),
...(block.calledBy ? { calledBy: block.calledBy } : {}),
}
}
return stored
})
}
try {
const [row] = await db
.select({ messages: copilotChats.messages })
.from(copilotChats)
.where(eq(copilotChats.id, actualChatId))
.limit(1)
const msgs: any[] = Array.isArray(row?.messages) ? row.messages : []
const userIdx = msgs.findIndex((m: any) => m.id === userMessageIdToUse)
const alreadyHasResponse =
userIdx >= 0 &&
userIdx + 1 < msgs.length &&
(msgs[userIdx + 1] as any)?.role === 'assistant'
if (!alreadyHasResponse) {
await db
.update(copilotChats)
.set({
messages: sql`${copilotChats.messages} || ${JSON.stringify([assistantMessage])}::jsonb`,
conversationId: sql`CASE WHEN ${copilotChats.conversationId} = ${userMessageIdToUse} THEN NULL ELSE ${copilotChats.conversationId} END`,
updatedAt: new Date(),
})
.where(eq(copilotChats.id, actualChatId))
}
} catch (error) {
reqLogger.error('Failed to persist chat messages', {
chatId: actualChatId,
error: error instanceof Error ? error.message : 'Unknown error',
})
}
},
},
})
pendingChatStreamHandedOff = true
return new Response(sseStream, { headers: SSE_RESPONSE_HEADERS })
}
const nsExecutionId = generateId()
const nsRunId = generateId()
if (actualChatId) {
await createRunSegment({
id: nsRunId,
executionId: nsExecutionId,
chatId: actualChatId,
userId: authenticatedUserId,
workflowId,
streamId: userMessageIdToUse,
}).catch(() => {})
}
const nonStreamingResult = await orchestrateCopilotStream(requestPayload, {
userId: authenticatedUserId,
workflowId,
chatId: actualChatId,
executionId: nsExecutionId,
runId: nsRunId,
goRoute: '/api/copilot',
autoExecuteTools: true,
interactive: true,
})
const responseData = {
content: nonStreamingResult.content,
toolCalls: nonStreamingResult.toolCalls,
model: selectedModel,
provider: typeof requestPayload?.provider === 'string' ? requestPayload.provider : undefined,
}
reqLogger.info('Non-streaming response from orchestrator', {
hasContent: !!responseData.content,
contentLength: responseData.content?.length || 0,
model: responseData.model,
provider: responseData.provider,
toolCallsCount: responseData.toolCalls?.length || 0,
})
// Save messages if we have a chat
if (currentChat && responseData.content) {
const userMessage = {
id: userMessageIdToUse, // Consistent ID used for request and persistence
role: 'user',
content: message,
timestamp: new Date().toISOString(),
...(fileAttachments && fileAttachments.length > 0 && { fileAttachments }),
...(Array.isArray(normalizedContexts) &&
normalizedContexts.length > 0 && {
contexts: normalizedContexts,
}),
...(Array.isArray(normalizedContexts) &&
normalizedContexts.length > 0 && {
contentBlocks: [
{ type: 'contexts', contexts: normalizedContexts as any, timestamp: Date.now() },
],
}),
}
const assistantMessage = {
id: generateId(),
role: 'assistant',
content: responseData.content,
timestamp: new Date().toISOString(),
}
const updatedMessages = [...conversationHistory, userMessage, assistantMessage]
// Start title generation in parallel if this is first message (non-streaming)
if (actualChatId && !currentChat.title && conversationHistory.length === 0) {
reqLogger.info('Starting title generation for non-streaming response')
requestChatTitle({ message, model: selectedModel, provider, messageId: userMessageIdToUse })
.then(async (title) => {
if (title) {
await db
.update(copilotChats)
.set({
title,
updatedAt: new Date(),
})
.where(eq(copilotChats.id, actualChatId!))
reqLogger.info(`Generated and saved title: ${title}`)
}
})
.catch((error) => {
reqLogger.error('Title generation failed', error)
})
}
// Update chat in database immediately (without blocking for title)
await db
.update(copilotChats)
.set({
messages: updatedMessages,
updatedAt: new Date(),
})
.where(eq(copilotChats.id, actualChatId!))
}
reqLogger.info('Returning non-streaming response', {
duration: tracker.getDuration(),
chatId: actualChatId,
responseLength: responseData.content?.length || 0,
})
return NextResponse.json({
success: true,
response: responseData,
chatId: actualChatId,
metadata: {
requestId: tracker.requestId,
message,
duration: tracker.getDuration(),
},
})
} catch (error) {
if (
actualChatId &&
pendingChatStreamAcquired &&
!pendingChatStreamHandedOff &&
pendingChatStreamID
) {
await releasePendingChatStream(actualChatId, pendingChatStreamID).catch(() => {})
}
const duration = tracker.getDuration()
if (error instanceof z.ZodError) {
logger
.withMetadata({ requestId: tracker.requestId, messageId: pendingChatStreamID ?? undefined })
.error('Validation error', {
duration,
errors: error.errors,
})
return NextResponse.json(
{ error: 'Invalid request data', details: error.errors },
{ status: 400 }
)
}
logger
.withMetadata({ requestId: tracker.requestId, messageId: pendingChatStreamID ?? undefined })
.error('Error handling copilot chat', {
duration,
error: error instanceof Error ? error.message : 'Unknown error',
stack: error instanceof Error ? error.stack : undefined,
})
return NextResponse.json(
{ error: error instanceof Error ? error.message : 'Internal server error' },
{ status: 500 }
)
}
}
export async function GET(req: NextRequest) {
try {
const { searchParams } = new URL(req.url)
const workflowId = searchParams.get('workflowId')
const workspaceId = searchParams.get('workspaceId')
const chatId = searchParams.get('chatId')
const { userId: authenticatedUserId, isAuthenticated } =
await authenticateCopilotRequestSessionOnly()
if (!isAuthenticated || !authenticatedUserId) {
return createUnauthorizedResponse()
}
if (chatId) {
const chat = await getAccessibleCopilotChat(chatId, authenticatedUserId)
if (!chat) {
return NextResponse.json({ success: false, error: 'Chat not found' }, { status: 404 })
}
let streamSnapshot: {
events: Array<{ eventId: number; streamId: string; event: Record<string, unknown> }>
status: string
} | null = null
if (chat.conversationId) {
try {
const [meta, events] = await Promise.all([
getStreamMeta(chat.conversationId),
readStreamEvents(chat.conversationId, 0),
])
streamSnapshot = {
events: events || [],
status: meta?.status || 'unknown',
}
} catch (err) {
logger
.withMetadata({ messageId: chat.conversationId || undefined })
.warn('Failed to read stream snapshot for chat', {
chatId,
conversationId: chat.conversationId,
error: err instanceof Error ? err.message : String(err),
})
}
}
const transformedChat = {
id: chat.id,
title: chat.title,
model: chat.model,
messages: Array.isArray(chat.messages) ? chat.messages : [],
messageCount: Array.isArray(chat.messages) ? chat.messages.length : 0,
planArtifact: chat.planArtifact || null,
config: chat.config || null,
conversationId: chat.conversationId || null,
resources: Array.isArray(chat.resources) ? chat.resources : [],
createdAt: chat.createdAt,
updatedAt: chat.updatedAt,
...(streamSnapshot ? { streamSnapshot } : {}),
}
logger
.withMetadata({ messageId: chat.conversationId || undefined })
.info(`Retrieved chat ${chatId}`)
return NextResponse.json({ success: true, chat: transformedChat })
}
if (!workflowId && !workspaceId) {
return createBadRequestResponse('workflowId, workspaceId, or chatId is required')
}
if (workspaceId) {
await assertActiveWorkspaceAccess(workspaceId, authenticatedUserId)
}
if (workflowId) {
const authorization = await authorizeWorkflowByWorkspacePermission({
workflowId,
userId: authenticatedUserId,
action: 'read',
})
if (!authorization.allowed) {
return createUnauthorizedResponse()
}
}
const scopeFilter = workflowId
? eq(copilotChats.workflowId, workflowId)
: eq(copilotChats.workspaceId, workspaceId!)
const chats = await db
.select({
id: copilotChats.id,
title: copilotChats.title,
model: copilotChats.model,
messages: copilotChats.messages,
planArtifact: copilotChats.planArtifact,
config: copilotChats.config,
createdAt: copilotChats.createdAt,
updatedAt: copilotChats.updatedAt,
})
.from(copilotChats)
.where(and(eq(copilotChats.userId, authenticatedUserId), scopeFilter))
.orderBy(desc(copilotChats.updatedAt))
const transformedChats = chats.map((chat) => ({
id: chat.id,
title: chat.title,
model: chat.model,
messages: Array.isArray(chat.messages) ? chat.messages : [],
messageCount: Array.isArray(chat.messages) ? chat.messages.length : 0,
planArtifact: chat.planArtifact || null,
config: chat.config || null,
createdAt: chat.createdAt,
updatedAt: chat.updatedAt,
}))
const scope = workflowId ? `workflow ${workflowId}` : `workspace ${workspaceId}`
logger.info(`Retrieved ${transformedChats.length} chats for ${scope}`)
return NextResponse.json({
success: true,
chats: transformedChats,
})
} catch (error) {
logger.error('Error fetching copilot chats', error)
return createInternalServerErrorResponse('Failed to fetch chats')
}
}
export { handleUnifiedChatPost as POST, maxDuration } from '@/lib/copilot/chat/post'
export { GET } from './queries'
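The chat route above repeatedly guards its JSONB columns with `Array.isArray` before exposing `messages` and `messageCount`. A minimal sketch of that defensive transform — names here are illustrative, not part of the codebase, which inlines this logic directly:

```typescript
// Hypothetical helper mirroring the route's defensive chat transform.
interface RawChatRow {
  id: string
  title: string | null
  messages: unknown // JSONB column; may be malformed
}

function toChatSummary(row: RawChatRow) {
  // Treat anything that is not an array as an empty message list,
  // so a malformed column never breaks the response shape.
  const messages = Array.isArray(row.messages) ? row.messages : []
  return {
    id: row.id,
    title: row.title,
    messages,
    messageCount: messages.length,
  }
}
```

The same pattern covers `resources`, `planArtifact`, and `config` in the route, each falling back to an empty or null value rather than propagating bad data.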

View File

@@ -0,0 +1,144 @@
import { db } from '@sim/db'
import { copilotChats } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, sql } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { normalizeMessage, type PersistedMessage } from '@/lib/copilot/chat/persisted-message'
import { taskPubSub } from '@/lib/copilot/tasks'
const logger = createLogger('CopilotChatStopAPI')
const StoredToolCallSchema = z
.object({
id: z.string().optional(),
name: z.string().optional(),
state: z.string().optional(),
params: z.record(z.unknown()).optional(),
result: z
.object({
success: z.boolean(),
output: z.unknown().optional(),
error: z.string().optional(),
})
.optional(),
display: z
.object({
text: z.string().optional(),
title: z.string().optional(),
phaseLabel: z.string().optional(),
})
.optional(),
calledBy: z.string().optional(),
durationMs: z.number().optional(),
error: z.string().optional(),
})
.nullable()
const ContentBlockSchema = z.object({
type: z.string(),
lane: z.enum(['main', 'subagent']).optional(),
content: z.string().optional(),
channel: z.enum(['assistant', 'thinking']).optional(),
phase: z.enum(['call', 'args_delta', 'result']).optional(),
kind: z.enum(['subagent', 'structured_result', 'subagent_result']).optional(),
lifecycle: z.enum(['start', 'end']).optional(),
status: z.enum(['complete', 'error', 'cancelled']).optional(),
toolCall: StoredToolCallSchema.optional(),
})
const StopSchema = z.object({
chatId: z.string(),
streamId: z.string(),
content: z.string(),
contentBlocks: z.array(ContentBlockSchema).optional(),
})
/**
* POST /api/copilot/chat/stop
* Persists partial assistant content when the user stops a stream mid-response.
* Clears conversationId so the server-side onComplete won't duplicate the message.
* The chat stream lock is intentionally left alone here; it is released only once
* the aborted server stream actually unwinds.
*/
export async function POST(req: NextRequest) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { chatId, streamId, content, contentBlocks } = StopSchema.parse(await req.json())
const [row] = await db
.select({
workspaceId: copilotChats.workspaceId,
messages: copilotChats.messages,
})
.from(copilotChats)
.where(and(eq(copilotChats.id, chatId), eq(copilotChats.userId, session.user.id)))
.limit(1)
if (!row) {
return NextResponse.json({ success: true })
}
const messages: Record<string, unknown>[] = Array.isArray(row.messages) ? row.messages : []
const userIdx = messages.findIndex((message) => message.id === streamId)
const alreadyHasResponse =
userIdx >= 0 &&
userIdx + 1 < messages.length &&
(messages[userIdx + 1] as Record<string, unknown>)?.role === 'assistant'
const canAppendAssistant =
userIdx >= 0 && userIdx === messages.length - 1 && !alreadyHasResponse
const updateWhere = and(
eq(copilotChats.id, chatId),
eq(copilotChats.userId, session.user.id),
eq(copilotChats.conversationId, streamId)
)
const setClause: Record<string, unknown> = {
conversationId: null,
updatedAt: new Date(),
}
const hasContent = content.trim().length > 0
const hasBlocks = Array.isArray(contentBlocks) && contentBlocks.length > 0
if ((hasContent || hasBlocks) && canAppendAssistant) {
const normalized = normalizeMessage({
id: crypto.randomUUID(),
role: 'assistant',
content,
timestamp: new Date().toISOString(),
...(hasBlocks ? { contentBlocks } : {}),
})
const assistantMessage: PersistedMessage = normalized
setClause.messages = sql`${copilotChats.messages} || ${JSON.stringify([assistantMessage])}::jsonb`
}
const [updated] = await db
.update(copilotChats)
.set(setClause)
.where(updateWhere)
.returning({ workspaceId: copilotChats.workspaceId })
if (updated?.workspaceId) {
taskPubSub?.publishStatusChanged({
workspaceId: updated.workspaceId,
chatId,
type: 'completed',
})
}
return NextResponse.json({ success: true })
} catch (error) {
if (error instanceof z.ZodError) {
return NextResponse.json({ error: 'Invalid request' }, { status: 400 })
}
logger.error('Error stopping chat stream:', error)
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
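The stop handler only appends a partial assistant message when the stopped user message is the last entry and no assistant reply already follows it. That guard can be sketched as a pure function (names are illustrative; the route computes this inline):

```typescript
// Sketch of the append guard used by the stop endpoint: persist partial
// assistant content only when it would not duplicate an existing reply.
interface StoredMessage {
  id: string
  role: 'user' | 'assistant'
}

function canAppendAssistant(messages: StoredMessage[], streamId: string): boolean {
  // The stream id doubles as the originating user message id.
  const userIdx = messages.findIndex((m) => m.id === streamId)
  if (userIdx < 0) return false
  const alreadyHasResponse =
    userIdx + 1 < messages.length && messages[userIdx + 1].role === 'assistant'
  // Append only when the user message is the tail of the conversation.
  return userIdx === messages.length - 1 && !alreadyHasResponse
}
```

Combined with the `conversationId = streamId` condition in the update's WHERE clause, this keeps the stop path idempotent against a racing server-side completion.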

View File

@@ -4,25 +4,70 @@
import { NextRequest } from 'next/server'
import { beforeEach, describe, expect, it, vi } from 'vitest'
import {
MothershipStreamV1CompletionStatus,
MothershipStreamV1EventType,
} from '@/lib/copilot/generated/mothership-stream-v1'
const { getStreamMeta, readStreamEvents, authenticateCopilotRequestSessionOnly } = vi.hoisted(
() => ({
getStreamMeta: vi.fn(),
readStreamEvents: vi.fn(),
authenticateCopilotRequestSessionOnly: vi.fn(),
})
)
vi.mock('@/lib/copilot/orchestrator/stream/buffer', () => ({
getStreamMeta,
readStreamEvents,
const {
getLatestRunForStream,
readEvents,
readFilePreviewSessions,
checkForReplayGap,
authenticateCopilotRequestSessionOnly,
} = vi.hoisted(() => ({
getLatestRunForStream: vi.fn(),
readEvents: vi.fn(),
readFilePreviewSessions: vi.fn(),
checkForReplayGap: vi.fn(),
authenticateCopilotRequestSessionOnly: vi.fn(),
}))
vi.mock('@/lib/copilot/request-helpers', () => ({
vi.mock('@/lib/copilot/async-runs/repository', () => ({
getLatestRunForStream,
}))
vi.mock('@/lib/copilot/request/session', () => ({
readEvents,
readFilePreviewSessions,
checkForReplayGap,
createEvent: (event: Record<string, unknown>) => ({
stream: {
streamId: event.streamId,
cursor: event.cursor,
},
seq: event.seq,
trace: { requestId: event.requestId ?? '' },
type: event.type,
payload: event.payload,
}),
encodeSSEEnvelope: (event: Record<string, unknown>) =>
new TextEncoder().encode(`data: ${JSON.stringify(event)}\n\n`),
SSE_RESPONSE_HEADERS: {
'Content-Type': 'text/event-stream',
},
}))
vi.mock('@/lib/copilot/request/http', () => ({
authenticateCopilotRequestSessionOnly,
}))
import { GET } from '@/app/api/copilot/chat/stream/route'
import { GET } from './route'
async function readAllChunks(response: Response): Promise<string[]> {
const reader = response.body?.getReader()
expect(reader).toBeTruthy()
const chunks: string[] = []
while (true) {
const { done, value } = await reader!.read()
if (done) {
break
}
chunks.push(new TextDecoder().decode(value))
}
return chunks
}
describe('copilot chat stream replay route', () => {
beforeEach(() => {
@@ -31,29 +76,95 @@ describe('copilot chat stream replay route', () => {
userId: 'user-1',
isAuthenticated: true,
})
readStreamEvents.mockResolvedValue([])
readEvents.mockResolvedValue([])
readFilePreviewSessions.mockResolvedValue([])
checkForReplayGap.mockResolvedValue(null)
})
it('stops replay polling when stream meta becomes cancelled', async () => {
getStreamMeta
it('returns preview sessions in batch mode', async () => {
getLatestRunForStream.mockResolvedValue({
status: 'active',
executionId: 'exec-1',
id: 'run-1',
})
readFilePreviewSessions.mockResolvedValue([
{
schemaVersion: 1,
id: 'preview-1',
streamId: 'stream-1',
toolCallId: 'preview-1',
status: 'streaming',
fileName: 'draft.md',
previewText: 'hello',
previewVersion: 2,
updatedAt: '2026-04-10T00:00:00.000Z',
},
])
const response = await GET(
new NextRequest(
'http://localhost:3000/api/copilot/chat/stream?streamId=stream-1&after=0&batch=true'
)
)
expect(response.status).toBe(200)
await expect(response.json()).resolves.toMatchObject({
success: true,
previewSessions: [
expect.objectContaining({
id: 'preview-1',
previewText: 'hello',
previewVersion: 2,
}),
],
status: 'active',
})
})
it('stops replay polling when run becomes cancelled', async () => {
getLatestRunForStream
.mockResolvedValueOnce({
status: 'active',
userId: 'user-1',
executionId: 'exec-1',
id: 'run-1',
})
.mockResolvedValueOnce({
status: 'cancelled',
userId: 'user-1',
executionId: 'exec-1',
id: 'run-1',
})
const response = await GET(
new NextRequest('http://localhost:3000/api/copilot/chat/stream?streamId=stream-1')
new NextRequest('http://localhost:3000/api/copilot/chat/stream?streamId=stream-1&after=0')
)
const reader = response.body?.getReader()
expect(reader).toBeTruthy()
const chunks = await readAllChunks(response)
expect(chunks.join('')).toContain(
JSON.stringify({
status: MothershipStreamV1CompletionStatus.cancelled,
reason: 'terminal_status',
})
)
expect(getLatestRunForStream).toHaveBeenCalledTimes(2)
})
const first = await reader!.read()
expect(first.done).toBe(true)
expect(getStreamMeta).toHaveBeenCalledTimes(2)
it('emits structured terminal replay error when run metadata disappears', async () => {
getLatestRunForStream
.mockResolvedValueOnce({
status: 'active',
executionId: 'exec-1',
id: 'run-1',
})
.mockResolvedValueOnce(null)
const response = await GET(
new NextRequest('http://localhost:3000/api/copilot/chat/stream?streamId=stream-1&after=0')
)
const chunks = await readAllChunks(response)
const body = chunks.join('')
expect(body).toContain(`"type":"${MothershipStreamV1EventType.error}"`)
expect(body).toContain('"code":"resume_run_unavailable"')
expect(body).toContain(`"type":"${MothershipStreamV1EventType.complete}"`)
})
})
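The `readAllChunks` helper above collects raw SSE text; decoding it back into event payloads is the inverse of the mocked `encodeSSEEnvelope`. A minimal sketch, assuming the standard `data: <json>\n\n` framing and no multi-line data fields:

```typescript
// Sketch: split a raw SSE buffer into parsed JSON payloads.
// Assumes each frame is a single `data:` line; illustrative only.
function parseSSEFrames(raw: string): unknown[] {
  return raw
    .split('\n\n') // frames are separated by a blank line
    .filter((frame) => frame.startsWith('data: '))
    .map((frame) => JSON.parse(frame.slice('data: '.length)))
}
```

A helper like this lets the assertions match on structured fields instead of substring checks against the joined chunk text.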

View File

@@ -1,12 +1,20 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { getLatestRunForStream } from '@/lib/copilot/async-runs/repository'
import {
getStreamMeta,
readStreamEvents,
type StreamMeta,
} from '@/lib/copilot/orchestrator/stream/buffer'
import { authenticateCopilotRequestSessionOnly } from '@/lib/copilot/request-helpers'
import { SSE_HEADERS } from '@/lib/core/utils/sse'
MothershipStreamV1CompletionStatus,
MothershipStreamV1EventType,
} from '@/lib/copilot/generated/mothership-stream-v1'
import { authenticateCopilotRequestSessionOnly } from '@/lib/copilot/request/http'
import {
checkForReplayGap,
createEvent,
encodeSSEEnvelope,
readEvents,
readFilePreviewSessions,
SSE_RESPONSE_HEADERS,
} from '@/lib/copilot/request/session'
import { toStreamBatchEvent } from '@/lib/copilot/request/session/types'
export const maxDuration = 3600
@@ -14,8 +22,59 @@ const logger = createLogger('CopilotChatStreamAPI')
const POLL_INTERVAL_MS = 250
const MAX_STREAM_MS = 60 * 60 * 1000
function encodeEvent(event: Record<string, any>): Uint8Array {
return new TextEncoder().encode(`data: ${JSON.stringify(event)}\n\n`)
function isTerminalStatus(
status: string | null | undefined
): status is MothershipStreamV1CompletionStatus {
return (
status === MothershipStreamV1CompletionStatus.complete ||
status === MothershipStreamV1CompletionStatus.error ||
status === MothershipStreamV1CompletionStatus.cancelled
)
}
function buildResumeTerminalEnvelopes(options: {
streamId: string
afterCursor: string
status: MothershipStreamV1CompletionStatus
message?: string
code: string
reason?: string
}) {
const baseSeq = Number(options.afterCursor || '0')
const seq = Number.isFinite(baseSeq) ? baseSeq : 0
const envelopes: ReturnType<typeof createEvent>[] = []
if (options.status === MothershipStreamV1CompletionStatus.error) {
envelopes.push(
createEvent({
streamId: options.streamId,
cursor: String(seq + 1),
seq: seq + 1,
requestId: '',
type: MothershipStreamV1EventType.error,
payload: {
message: options.message || 'Stream recovery failed before completion.',
code: options.code,
},
})
)
}
envelopes.push(
createEvent({
streamId: options.streamId,
cursor: String(seq + envelopes.length + 1),
seq: seq + envelopes.length + 1,
requestId: '',
type: MothershipStreamV1EventType.complete,
payload: {
status: options.status,
...(options.reason ? { reason: options.reason } : {}),
},
})
)
return envelopes
}
export async function GET(request: NextRequest) {
@@ -28,58 +87,56 @@ export async function GET(request: NextRequest) {
const url = new URL(request.url)
const streamId = url.searchParams.get('streamId') || ''
const fromParam = url.searchParams.get('from') || '0'
const fromEventId = Number(fromParam || 0)
// If batch=true, return buffered events as JSON instead of SSE
const afterCursor = url.searchParams.get('after') || ''
const batchMode = url.searchParams.get('batch') === 'true'
const toParam = url.searchParams.get('to')
const toEventId = toParam ? Number(toParam) : undefined
const reqLogger = logger.withMetadata({ messageId: streamId || undefined })
reqLogger.info('[Resume] Received resume request', {
streamId: streamId || undefined,
fromEventId,
toEventId,
batchMode,
})
if (!streamId) {
return NextResponse.json({ error: 'streamId is required' }, { status: 400 })
}
const meta = (await getStreamMeta(streamId)) as StreamMeta | null
reqLogger.info('[Resume] Stream lookup', {
streamId,
fromEventId,
toEventId,
batchMode,
hasMeta: !!meta,
metaStatus: meta?.status,
const run = await getLatestRunForStream(streamId, authenticatedUserId).catch((err) => {
logger.warn('Failed to fetch latest run for stream', {
streamId,
error: err instanceof Error ? err.message : String(err),
})
return null
})
if (!meta) {
logger.info('[Resume] Stream lookup', {
streamId,
afterCursor,
batchMode,
hasRun: !!run,
runStatus: run?.status,
})
if (!run) {
return NextResponse.json({ error: 'Stream not found' }, { status: 404 })
}
if (meta.userId && meta.userId !== authenticatedUserId) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 403 })
}
// Batch mode: return all buffered events as JSON
if (batchMode) {
const events = await readStreamEvents(streamId, fromEventId)
const filteredEvents = toEventId ? events.filter((e) => e.eventId <= toEventId) : events
reqLogger.info('[Resume] Batch response', {
const afterSeq = afterCursor || '0'
const [events, previewSessions] = await Promise.all([
readEvents(streamId, afterSeq),
readFilePreviewSessions(streamId).catch((error) => {
logger.warn('Failed to read preview sessions for stream batch', {
streamId,
error: error instanceof Error ? error.message : String(error),
})
return []
}),
])
const batchEvents = events.map(toStreamBatchEvent)
logger.info('[Resume] Batch response', {
streamId,
fromEventId,
toEventId,
eventCount: filteredEvents.length,
afterCursor: afterSeq,
eventCount: batchEvents.length,
previewSessionCount: previewSessions.length,
runStatus: run.status,
})
return NextResponse.json({
success: true,
events: filteredEvents,
status: meta.status,
executionId: meta.executionId,
runId: meta.runId,
events: batchEvents,
previewSessions,
status: run.status,
})
}
@@ -87,9 +144,9 @@ export async function GET(request: NextRequest) {
const stream = new ReadableStream({
async start(controller) {
let lastEventId = Number.isFinite(fromEventId) ? fromEventId : 0
let latestMeta = meta
let cursor = afterCursor || '0'
let controllerClosed = false
let sawTerminalEvent = false
const closeController = () => {
if (controllerClosed) return
@@ -97,14 +154,14 @@ export async function GET(request: NextRequest) {
try {
controller.close()
} catch {
// Controller already closed by runtime/client - treat as normal.
// Controller already closed by runtime/client
}
}
const enqueueEvent = (payload: Record<string, any>) => {
const enqueueEvent = (payload: unknown) => {
if (controllerClosed) return false
try {
controller.enqueue(encodeEvent(payload))
controller.enqueue(encodeSSEEnvelope(payload))
return true
} catch {
controllerClosed = true
@@ -118,47 +175,96 @@ export async function GET(request: NextRequest) {
request.signal.addEventListener('abort', abortListener, { once: true })
const flushEvents = async () => {
const events = await readStreamEvents(streamId, lastEventId)
const events = await readEvents(streamId, cursor)
if (events.length > 0) {
reqLogger.info('[Resume] Flushing events', {
logger.info('[Resume] Flushing events', {
streamId,
fromEventId: lastEventId,
afterCursor: cursor,
eventCount: events.length,
})
}
for (const entry of events) {
lastEventId = entry.eventId
const payload = {
...entry.event,
eventId: entry.eventId,
streamId: entry.streamId,
executionId: latestMeta?.executionId,
runId: latestMeta?.runId,
for (const envelope of events) {
cursor = envelope.stream.cursor ?? String(envelope.seq)
if (envelope.type === MothershipStreamV1EventType.complete) {
sawTerminalEvent = true
}
if (!enqueueEvent(payload)) {
if (!enqueueEvent(envelope)) {
break
}
}
}
const emitTerminalIfMissing = (
status: MothershipStreamV1CompletionStatus,
options?: { message?: string; code: string; reason?: string }
) => {
if (controllerClosed || sawTerminalEvent) {
return
}
for (const envelope of buildResumeTerminalEnvelopes({
streamId,
afterCursor: cursor,
status,
message: options?.message,
code: options?.code ?? 'resume_terminal',
reason: options?.reason,
})) {
cursor = envelope.stream.cursor ?? String(envelope.seq)
if (envelope.type === MothershipStreamV1EventType.complete) {
sawTerminalEvent = true
}
if (!enqueueEvent(envelope)) {
break
}
}
}
try {
const gap = await checkForReplayGap(streamId, afterCursor)
if (gap) {
for (const envelope of gap.envelopes) {
enqueueEvent(envelope)
}
return
}
await flushEvents()
while (!controllerClosed && Date.now() - startTime < MAX_STREAM_MS) {
const currentMeta = await getStreamMeta(streamId)
if (!currentMeta) break
latestMeta = currentMeta
const currentRun = await getLatestRunForStream(streamId, authenticatedUserId).catch(
(err) => {
logger.warn('Failed to poll latest run for stream', {
streamId,
error: err instanceof Error ? err.message : String(err),
})
return null
}
)
if (!currentRun) {
emitTerminalIfMissing(MothershipStreamV1CompletionStatus.error, {
message: 'The stream could not be recovered because its run metadata is unavailable.',
code: 'resume_run_unavailable',
reason: 'run_unavailable',
})
break
}
await flushEvents()
if (controllerClosed) {
break
}
if (
currentMeta.status === 'complete' ||
currentMeta.status === 'error' ||
currentMeta.status === 'cancelled'
) {
if (isTerminalStatus(currentRun.status)) {
emitTerminalIfMissing(currentRun.status, {
message:
currentRun.status === MothershipStreamV1CompletionStatus.error
? typeof currentRun.error === 'string'
? currentRun.error
: 'The recovered stream ended with an error.'
: undefined,
code: 'resume_terminal_status',
reason: 'terminal_status',
})
break
}
@@ -169,12 +275,24 @@ export async function GET(request: NextRequest) {
await new Promise((resolve) => setTimeout(resolve, POLL_INTERVAL_MS))
}
if (!controllerClosed && Date.now() - startTime >= MAX_STREAM_MS) {
emitTerminalIfMissing(MothershipStreamV1CompletionStatus.error, {
message: 'The stream recovery timed out before completion.',
code: 'resume_timeout',
reason: 'timeout',
})
}
} catch (error) {
if (!controllerClosed && !request.signal.aborted) {
reqLogger.warn('Stream replay failed', {
logger.warn('Stream replay failed', {
streamId,
error: error instanceof Error ? error.message : String(error),
})
emitTerminalIfMissing(MothershipStreamV1CompletionStatus.error, {
message: 'The stream replay failed before completion.',
code: 'resume_internal',
reason: 'stream_replay_failed',
})
}
} finally {
request.signal.removeEventListener('abort', abortListener)
@@ -183,5 +301,5 @@ export async function GET(request: NextRequest) {
},
})
return new Response(stream, { headers: SSE_HEADERS })
return new Response(stream, { headers: SSE_RESPONSE_HEADERS })
}
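Two small pieces of the resume route's logic are worth isolating: the terminal-status predicate that stops replay polling, and the defensive sequence derivation that keeps a synthetic completion envelope ordered even when the cursor is not numeric. A sketch under those assumptions, with local stand-ins for the generated `MothershipStreamV1CompletionStatus` values:

```typescript
// Illustrative stand-in for the generated completion status enum.
type CompletionStatus = 'complete' | 'error' | 'cancelled'

// Replay polling stops as soon as the run reaches a terminal status.
function isTerminalStatus(status: string | null | undefined): status is CompletionStatus {
  return status === 'complete' || status === 'error' || status === 'cancelled'
}

// The synthetic terminal envelope's seq continues from the last cursor;
// a non-numeric or empty cursor falls back to 0 rather than NaN.
function nextSeq(afterCursor: string): number {
  const base = Number(afterCursor || '0')
  return (Number.isFinite(base) ? base : 0) + 1
}
```

This is why `buildResumeTerminalEnvelopes` can always emit a well-ordered `error` + `complete` pair: the error event (when present) takes `seq + 1` and the completion event follows at the next sequence number.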

View File

@@ -327,7 +327,35 @@ describe('Copilot Chat Update Messages API Route', () => {
})
expect(mockSet).toHaveBeenCalledWith({
messages,
messages: [
{
id: 'msg-1',
role: 'user',
content: 'Hello',
timestamp: '2024-01-01T10:00:00.000Z',
},
{
id: 'msg-2',
role: 'assistant',
content: 'Hi there!',
timestamp: '2024-01-01T10:01:00.000Z',
contentBlocks: [
{
type: 'text',
content: 'Here is the weather information',
},
{
type: 'tool',
phase: 'call',
toolCall: {
id: 'tool-1',
name: 'get_weather',
state: 'pending',
},
},
],
},
],
updatedAt: expect.any(Date),
})
})

View File

@@ -4,15 +4,16 @@ import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getAccessibleCopilotChat } from '@/lib/copilot/chat-lifecycle'
import { COPILOT_MODES } from '@/lib/copilot/models'
import { getAccessibleCopilotChat } from '@/lib/copilot/chat/lifecycle'
import { normalizeMessage, type PersistedMessage } from '@/lib/copilot/chat/persisted-message'
import { COPILOT_MODES } from '@/lib/copilot/constants'
import {
authenticateCopilotRequestSessionOnly,
createInternalServerErrorResponse,
createNotFoundResponse,
createRequestTracker,
createUnauthorizedResponse,
} from '@/lib/copilot/request-helpers'
} from '@/lib/copilot/request/http'
const logger = createLogger('CopilotChatUpdateAPI')
@@ -78,12 +79,15 @@ export async function POST(req: NextRequest) {
}
const { chatId, messages, planArtifact, config } = UpdateMessagesSchema.parse(body)
const normalizedMessages: PersistedMessage[] = messages.map((message) =>
normalizeMessage(message as Record<string, unknown>)
)
// Debug: Log what we're about to save
const lastMsgParsed = messages[messages.length - 1]
const lastMsgParsed = normalizedMessages[normalizedMessages.length - 1]
if (lastMsgParsed?.role === 'assistant') {
logger.info(`[${tracker.requestId}] Parsed messages to save`, {
messageCount: messages.length,
messageCount: normalizedMessages.length,
lastMsgId: lastMsgParsed.id,
lastMsgContentLength: lastMsgParsed.content?.length || 0,
lastMsgContentBlockCount: lastMsgParsed.contentBlocks?.length || 0,
@@ -99,8 +103,8 @@ export async function POST(req: NextRequest) {
}
// Update chat with new messages, plan artifact, and config
const updateData: Record<string, any> = {
messages: messages,
const updateData: Record<string, unknown> = {
messages: normalizedMessages,
updatedAt: new Date(),
}
@@ -116,14 +120,14 @@ export async function POST(req: NextRequest) {
logger.info(`[${tracker.requestId}] Successfully updated chat`, {
chatId,
newMessageCount: messages.length,
newMessageCount: normalizedMessages.length,
hasPlanArtifact: !!planArtifact,
hasConfig: !!config,
})
return NextResponse.json({
success: true,
messageCount: messages.length,
messageCount: normalizedMessages.length,
})
} catch (error) {
logger.error(`[${tracker.requestId}] Error updating chat messages:`, error)

View File

@@ -66,7 +66,7 @@ vi.mock('drizzle-orm', () => ({
sql: vi.fn(),
}))
vi.mock('@/lib/copilot/request-helpers', () => ({
vi.mock('@/lib/copilot/request/http', () => ({
authenticateCopilotRequestSessionOnly: mockAuthenticate,
createUnauthorizedResponse: mockCreateUnauthorizedResponse,
createInternalServerErrorResponse: mockCreateInternalServerErrorResponse,

View File

@@ -4,14 +4,14 @@ import { createLogger } from '@sim/logger'
import { and, desc, eq, isNull, or, sql } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { resolveOrCreateChat } from '@/lib/copilot/chat-lifecycle'
import { resolveOrCreateChat } from '@/lib/copilot/chat/lifecycle'
import {
authenticateCopilotRequestSessionOnly,
createBadRequestResponse,
createInternalServerErrorResponse,
createUnauthorizedResponse,
} from '@/lib/copilot/request-helpers'
import { taskPubSub } from '@/lib/copilot/task-events'
} from '@/lib/copilot/request/http'
import { taskPubSub } from '@/lib/copilot/tasks'
import { authorizeWorkflowByWorkspacePermission } from '@/lib/workflows/utils'
import { assertActiveWorkspaceAccess } from '@/lib/workspaces/permissions/utils'
@@ -37,7 +37,7 @@ export async function GET(_request: NextRequest) {
title: copilotChats.title,
workflowId: copilotChats.workflowId,
workspaceId: copilotChats.workspaceId,
conversationId: copilotChats.conversationId,
activeStreamId: copilotChats.conversationId,
updatedAt: copilotChats.updatedAt,
})
.from(copilotChats)

View File

@@ -43,7 +43,7 @@ vi.mock('@/lib/workflows/utils', () => ({
authorizeWorkflowByWorkspacePermission: mockAuthorize,
}))
vi.mock('@/lib/copilot/chat-lifecycle', () => ({
vi.mock('@/lib/copilot/chat/lifecycle', () => ({
getAccessibleCopilotChat: mockGetAccessibleCopilotChat,
}))

View File

@@ -4,14 +4,14 @@ import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getAccessibleCopilotChat } from '@/lib/copilot/chat-lifecycle'
import { getAccessibleCopilotChat } from '@/lib/copilot/chat/lifecycle'
import {
authenticateCopilotRequestSessionOnly,
createInternalServerErrorResponse,
createNotFoundResponse,
createRequestTracker,
createUnauthorizedResponse,
} from '@/lib/copilot/request-helpers'
} from '@/lib/copilot/request/http'
import { getInternalApiBaseUrl } from '@/lib/core/utils/urls'
import { authorizeWorkflowByWorkspacePermission } from '@/lib/workflows/utils'
import { isUuidV4 } from '@/executor/constants'

View File

@@ -62,7 +62,7 @@ vi.mock('drizzle-orm', () => ({
desc: vi.fn((field: unknown) => ({ field, type: 'desc' })),
}))
vi.mock('@/lib/copilot/chat-lifecycle', () => ({
vi.mock('@/lib/copilot/chat/lifecycle', () => ({
getAccessibleCopilotChat: mockGetAccessibleCopilotChat,
}))

View File

@@ -4,14 +4,14 @@ import { createLogger } from '@sim/logger'
import { and, desc, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getAccessibleCopilotChat } from '@/lib/copilot/chat-lifecycle'
import { getAccessibleCopilotChat } from '@/lib/copilot/chat/lifecycle'
import {
authenticateCopilotRequestSessionOnly,
createBadRequestResponse,
createInternalServerErrorResponse,
createRequestTracker,
createUnauthorizedResponse,
} from '@/lib/copilot/request-helpers'
} from '@/lib/copilot/request/http'
import { authorizeWorkflowByWorkspacePermission } from '@/lib/workflows/utils'
const logger = createLogger('WorkflowCheckpointsAPI')

View File

@@ -38,7 +38,7 @@ const {
publishToolConfirmation: vi.fn(),
}))
vi.mock('@/lib/copilot/request-helpers', () => ({
vi.mock('@/lib/copilot/request/http', () => ({
authenticateCopilotRequestSessionOnly,
createBadRequestResponse,
createInternalServerErrorResponse,
@@ -54,7 +54,7 @@ vi.mock('@/lib/copilot/async-runs/repository', () => ({
completeAsyncToolCall,
}))
vi.mock('@/lib/copilot/orchestrator/persistence', () => ({
vi.mock('@/lib/copilot/persistence/tool-confirm', () => ({
publishToolConfirmation,
}))
@@ -161,27 +161,33 @@ describe('Copilot Confirm API Route', () => {
)
})
it('uses upsertAsyncToolCall for non-terminal confirmations', async () => {
it('accepts primitive terminal confirmation data', async () => {
const response = await POST(
createMockPostRequest({
toolCallId: 'tool-call-123',
status: 'accepted',
status: 'success',
message: 'Tool executed successfully',
data: 'done',
})
)
expect(response.status).toBe(200)
expect(upsertAsyncToolCall).toHaveBeenCalledWith({
runId: 'run-1',
checkpointId: 'checkpoint-1',
expect(completeAsyncToolCall).toHaveBeenCalledWith({
toolCallId: 'tool-call-123',
toolName: 'client_tool',
args: { foo: 'bar' },
status: 'pending',
status: 'completed',
result: 'done',
error: null,
})
expect(completeAsyncToolCall).not.toHaveBeenCalled()
expect(publishToolConfirmation).toHaveBeenCalledWith(
expect.objectContaining({
toolCallId: 'tool-call-123',
status: 'success',
data: 'done',
})
)
})
it('publishes confirmation after a durable non-terminal update', async () => {
it('keeps background as a live pending detach confirmation', async () => {
const response = await POST(
createMockPostRequest({
toolCallId: 'tool-call-123',
@@ -190,14 +196,8 @@ describe('Copilot Confirm API Route', () => {
)
expect(response.status).toBe(200)
expect(upsertAsyncToolCall).toHaveBeenCalledWith({
runId: 'run-1',
checkpointId: 'checkpoint-1',
toolCallId: 'tool-call-123',
toolName: 'client_tool',
args: { foo: 'bar' },
status: 'pending',
})
expect(upsertAsyncToolCall).not.toHaveBeenCalled()
expect(completeAsyncToolCall).not.toHaveBeenCalled()
expect(publishToolConfirmation).toHaveBeenCalledWith(
expect.objectContaining({
toolCallId: 'tool-call-123',
@@ -206,6 +206,32 @@ describe('Copilot Confirm API Route', () => {
)
})
it('rejects unsupported accepted and rejected confirmation statuses', async () => {
const acceptedResponse = await POST(
createMockPostRequest({
toolCallId: 'tool-call-123',
status: 'accepted',
})
)
expect(acceptedResponse.status).toBe(400)
expect(await acceptedResponse.json()).toEqual({
error: 'Invalid request data: Invalid notification status',
})
const rejectedResponse = await POST(
createMockPostRequest({
toolCallId: 'tool-call-123',
status: 'rejected',
})
)
expect(rejectedResponse.status).toBe(400)
expect(await rejectedResponse.json()).toEqual({
error: 'Invalid request data: Invalid notification status',
})
})
it('returns 400 when the durable write fails before publish', async () => {
completeAsyncToolCall.mockRejectedValueOnce(new Error('db down'))


@@ -1,13 +1,19 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import {
ASYNC_TOOL_CONFIRMATION_STATUS,
ASYNC_TOOL_STATUS,
type AsyncCompletionData,
type AsyncConfirmationStatus,
} from '@/lib/copilot/async-runs/lifecycle'
import {
completeAsyncToolCall,
getAsyncToolCall,
getRunSegment,
upsertAsyncToolCall,
} from '@/lib/copilot/async-runs/repository'
import { publishToolConfirmation } from '@/lib/copilot/orchestrator/persistence'
import { publishToolConfirmation } from '@/lib/copilot/persistence/tool-confirm'
import {
authenticateCopilotRequestSessionOnly,
createBadRequestResponse,
@@ -15,44 +21,62 @@ import {
createNotFoundResponse,
createRequestTracker,
createUnauthorizedResponse,
type NotificationStatus,
} from '@/lib/copilot/request-helpers'
} from '@/lib/copilot/request/http'
const logger = createLogger('CopilotConfirmAPI')
// Schema for confirmation request
const ConfirmationSchema = z.object({
toolCallId: z.string().min(1, 'Tool call ID is required'),
status: z.enum(['success', 'error', 'accepted', 'rejected', 'background', 'cancelled'] as const, {
errorMap: () => ({ message: 'Invalid notification status' }),
}),
status: z.enum(
Object.values(ASYNC_TOOL_CONFIRMATION_STATUS) as [
AsyncConfirmationStatus,
...AsyncConfirmationStatus[],
],
{
errorMap: () => ({ message: 'Invalid notification status' }),
}
),
message: z.string().optional(),
data: z.record(z.unknown()).optional(),
data: z.unknown().optional(),
})
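The schema above derives its status enum from a shared constant map instead of a hand-written list. A minimal sketch of the same idea without zod, restating the constant as implied by the surrounding tests (success, error, background, cancelled; the real module may differ):

```typescript
// Illustrative restatement of ASYNC_TOOL_CONFIRMATION_STATUS.
const ASYNC_TOOL_CONFIRMATION_STATUS = {
  success: 'success',
  error: 'error',
  background: 'background',
  cancelled: 'cancelled',
} as const

type AsyncConfirmationStatus =
  (typeof ASYNC_TOOL_CONFIRMATION_STATUS)[keyof typeof ASYNC_TOOL_CONFIRMATION_STATUS]

// Object.values() types as string[], which is why zod's enum needs the
// non-empty tuple assertion [T, ...T[]]. The same values back a runtime guard:
const VALID_STATUSES = new Set<string>(Object.values(ASYNC_TOOL_CONFIRMATION_STATUS))

function isConfirmationStatus(value: string): value is AsyncConfirmationStatus {
  return VALID_STATUSES.has(value)
}

console.log(isConfirmationStatus('success')) // true
console.log(isConfirmationStatus('accepted')) // false — dropped from the schema
```

Deriving the enum from the constant keeps the API contract and the lifecycle module in sync, so removing a status (as `accepted`/`rejected` were here) only needs one edit.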
/**
* Persist the durable tool status, then publish a wakeup event.
* Persist terminal durable tool status, then publish a wakeup event.
*
* `background` remains a live detach signal in the current browser workflow
* runtime, so it should not rewrite the durable async row.
*/
async function updateToolCallStatus(
existing: NonNullable<Awaited<ReturnType<typeof getAsyncToolCall>>>,
status: NotificationStatus,
status: AsyncConfirmationStatus,
message?: string,
data?: Record<string, unknown>
data?: AsyncCompletionData
): Promise<boolean> {
const toolCallId = existing.toolCallId
if (status === ASYNC_TOOL_CONFIRMATION_STATUS.background) {
publishToolConfirmation({
toolCallId,
status,
message: message || undefined,
timestamp: new Date().toISOString(),
data,
})
return true
}
const durableStatus =
status === 'success'
? 'completed'
? ASYNC_TOOL_STATUS.completed
: status === 'cancelled'
? 'cancelled'
: status === 'error' || status === 'rejected'
? 'failed'
: 'pending'
? ASYNC_TOOL_STATUS.cancelled
: status === 'error'
? ASYNC_TOOL_STATUS.failed
: ASYNC_TOOL_STATUS.pending
try {
if (
durableStatus === 'completed' ||
durableStatus === 'failed' ||
durableStatus === 'cancelled'
durableStatus === ASYNC_TOOL_STATUS.completed ||
durableStatus === ASYNC_TOOL_STATUS.failed ||
durableStatus === ASYNC_TOOL_STATUS.cancelled
) {
await completeAsyncToolCall({
toolCallId,
@@ -70,12 +94,11 @@ async function updateToolCallStatus(
status: durableStatus,
})
}
const timestamp = new Date().toISOString()
publishToolConfirmation({
toolCallId,
status,
message: message || undefined,
timestamp,
timestamp: new Date().toISOString(),
data,
})
return true
@@ -91,7 +114,7 @@ async function updateToolCallStatus(
/**
* POST /api/copilot/confirm
* Update tool call status (Accept/Reject)
* Accept client tool completion or detach confirmations.
*/
export async function POST(req: NextRequest) {
const tracker = createRequestTracker()
@@ -107,13 +130,25 @@ export async function POST(req: NextRequest) {
const body = await req.json()
const { toolCallId, status, message, data } = ConfirmationSchema.parse(body)
const existing = await getAsyncToolCall(toolCallId).catch(() => null)
const existing = await getAsyncToolCall(toolCallId).catch((err) => {
logger.warn('Failed to fetch async tool call', {
toolCallId,
error: err instanceof Error ? err.message : String(err),
})
return null
})
if (!existing) {
return createNotFoundResponse('Tool call not found')
}
const run = await getRunSegment(existing.runId).catch(() => null)
const run = await getRunSegment(existing.runId).catch((err) => {
logger.warn('Failed to fetch run segment', {
runId: existing.runId,
error: err instanceof Error ? err.message : String(err),
})
return null
})
if (!run) {
return createNotFoundResponse('Tool call run not found')
}
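The confirmation-to-durable status mapping earlier in this route can be sketched in isolation. The constant object below restates ASYNC_TOOL_STATUS as implied by the diff and is an assumption about its shape:

```typescript
// Illustrative restatement of ASYNC_TOOL_STATUS (assumed shape).
const ASYNC_TOOL_STATUS = {
  pending: 'pending',
  completed: 'completed',
  failed: 'failed',
  cancelled: 'cancelled',
} as const

function toDurableStatus(status: string): string {
  return status === 'success'
    ? ASYNC_TOOL_STATUS.completed
    : status === 'cancelled'
      ? ASYNC_TOOL_STATUS.cancelled
      : status === 'error'
        ? ASYNC_TOOL_STATUS.failed
        : ASYNC_TOOL_STATUS.pending
}

console.log(toDurableStatus('success')) // completed
console.log(toDurableStatus('error')) // failed
```

Note that `background` falls through to `pending`, but the route returns before this mapping runs for `background`, so the durable row is never rewritten for a detach.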


@@ -1,5 +1,5 @@
import { type NextRequest, NextResponse } from 'next/server'
import { authenticateCopilotRequestSessionOnly } from '@/lib/copilot/request-helpers'
import { authenticateCopilotRequestSessionOnly } from '@/lib/copilot/request/http'
import { routeExecution } from '@/lib/copilot/tools/server/router'
/**


@@ -57,7 +57,7 @@ vi.mock('drizzle-orm', () => ({
eq: vi.fn((field: unknown, value: unknown) => ({ field, value, type: 'eq' })),
}))
vi.mock('@/lib/copilot/request-helpers', () => ({
vi.mock('@/lib/copilot/request/http', () => ({
authenticateCopilotRequestSessionOnly: mockAuthenticate,
createUnauthorizedResponse: mockCreateUnauthorizedResponse,
createBadRequestResponse: mockCreateBadRequestResponse,


@@ -10,7 +10,7 @@ import {
createInternalServerErrorResponse,
createRequestTracker,
createUnauthorizedResponse,
} from '@/lib/copilot/request-helpers'
} from '@/lib/copilot/request/http'
import { captureServerEvent } from '@/lib/posthog/server'
const logger = createLogger('CopilotFeedbackAPI')


@@ -1,8 +1,14 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { SIM_AGENT_API_URL } from '@/lib/copilot/constants'
import { authenticateCopilotRequestSessionOnly } from '@/lib/copilot/request-helpers'
import type { AvailableModel } from '@/lib/copilot/types'
import { authenticateCopilotRequestSessionOnly } from '@/lib/copilot/request/http'
interface AvailableModel {
id: string
friendlyName: string
provider: string
}
import { env } from '@/lib/core/config/env'
const logger = createLogger('CopilotModelsAPI')


@@ -23,7 +23,7 @@ const {
mockFetch: vi.fn(),
}))
vi.mock('@/lib/copilot/request-helpers', () => ({
vi.mock('@/lib/copilot/request/http', () => ({
authenticateCopilotRequestSessionOnly: mockAuthenticateCopilotRequestSessionOnly,
createUnauthorizedResponse: mockCreateUnauthorizedResponse,
createBadRequestResponse: mockCreateBadRequestResponse,


@@ -7,7 +7,7 @@ import {
createInternalServerErrorResponse,
createRequestTracker,
createUnauthorizedResponse,
} from '@/lib/copilot/request-helpers'
} from '@/lib/copilot/request/http'
import { env } from '@/lib/core/config/env'
const BodySchema = z.object({


@@ -4,7 +4,7 @@ import { z } from 'zod'
import {
authenticateCopilotRequestSessionOnly,
createUnauthorizedResponse,
} from '@/lib/copilot/request-helpers'
} from '@/lib/copilot/request/http'
import { env } from '@/lib/core/config/env'
const logger = createLogger('CopilotTrainingExamplesAPI')


@@ -4,7 +4,7 @@ import { z } from 'zod'
import {
authenticateCopilotRequestSessionOnly,
createUnauthorizedResponse,
} from '@/lib/copilot/request-helpers'
} from '@/lib/copilot/request/http'
import { env } from '@/lib/core/config/env'
const logger = createLogger('CopilotTrainingAPI')


@@ -114,8 +114,12 @@ export async function verifyFileAccess(
// Infer context from key if not explicitly provided
const inferredContext = context || inferContextFromKey(cloudKey)
// 0. Public contexts: profile pictures and OG images are publicly accessible
if (inferredContext === 'profile-pictures' || inferredContext === 'og-images') {
// 0. Public contexts: profile pictures, OG images, and workspace logos are publicly accessible
if (
inferredContext === 'profile-pictures' ||
inferredContext === 'og-images' ||
inferredContext === 'workspace-logos'
) {
logger.info('Public file access allowed', { cloudKey, context: inferredContext })
return true
}


@@ -75,6 +75,16 @@ vi.mock('@/lib/uploads/utils/file-utils', () => ({
vi.mock('@/lib/uploads/setup.server', () => ({}))
vi.mock('@/lib/execution/doc-vm', () => ({
generatePdfFromCode: vi.fn().mockResolvedValue(Buffer.from('%PDF-compiled')),
generateDocxFromCode: vi.fn().mockResolvedValue(Buffer.from('PK\x03\x04compiled')),
generatePptxFromCode: vi.fn().mockResolvedValue(Buffer.from('PK\x03\x04compiled')),
}))
vi.mock('@/lib/uploads/contexts/workspace/workspace-file-manager', () => ({
parseWorkspaceFileKey: vi.fn().mockReturnValue(undefined),
}))
vi.mock('@/app/api/files/utils', () => ({
FileNotFoundError,
createFileResponse: mockCreateFileResponse,


@@ -4,7 +4,11 @@ import { createLogger } from '@sim/logger'
import type { NextRequest } from 'next/server'
import { NextResponse } from 'next/server'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generatePptxFromCode } from '@/lib/execution/pptx-vm'
import {
generateDocxFromCode,
generatePdfFromCode,
generatePptxFromCode,
} from '@/lib/execution/doc-vm'
import { CopilotFiles, isUsingCloudStorage } from '@/lib/uploads'
import type { StorageContext } from '@/lib/uploads/config'
import { parseWorkspaceFileKey } from '@/lib/uploads/contexts/workspace/workspace-file-manager'
@@ -22,47 +26,73 @@ import {
const logger = createLogger('FilesServeAPI')
const ZIP_MAGIC = Buffer.from([0x50, 0x4b, 0x03, 0x04])
const PDF_MAGIC = Buffer.from([0x25, 0x50, 0x44, 0x46, 0x2d]) // %PDF-
const MAX_COMPILED_PPTX_CACHE = 10
const compiledPptxCache = new Map<string, Buffer>()
function compiledCacheSet(key: string, buffer: Buffer): void {
if (compiledPptxCache.size >= MAX_COMPILED_PPTX_CACHE) {
compiledPptxCache.delete(compiledPptxCache.keys().next().value as string)
}
compiledPptxCache.set(key, buffer)
interface CompilableFormat {
magic: Buffer
compile: (code: string, workspaceId: string) => Promise<Buffer>
contentType: string
}
async function compilePptxIfNeeded(
const COMPILABLE_FORMATS: Record<string, CompilableFormat> = {
'.pptx': {
magic: ZIP_MAGIC,
compile: generatePptxFromCode,
contentType: 'application/vnd.openxmlformats-officedocument.presentationml.presentation',
},
'.docx': {
magic: ZIP_MAGIC,
compile: generateDocxFromCode,
contentType: 'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
},
'.pdf': {
magic: PDF_MAGIC,
compile: generatePdfFromCode,
contentType: 'application/pdf',
},
}
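The format table above keys compilation on magic bytes: a buffer is only served as-is when its leading bytes match the format's signature, otherwise it is treated as source code to compile. A self-contained sketch of that check:

```typescript
// Magic-byte sniffing: compare the buffer's prefix to a known signature
// before trusting the file extension.
const ZIP_MAGIC = Buffer.from([0x50, 0x4b, 0x03, 0x04]) // "PK\x03\x04" (.pptx/.docx are ZIP containers)
const PDF_MAGIC = Buffer.from([0x25, 0x50, 0x44, 0x46, 0x2d]) // "%PDF-"

function hasMagic(buffer: Buffer, magic: Buffer): boolean {
  // Guard the length first: subarray on a short buffer would compare fewer bytes.
  return buffer.length >= magic.length && buffer.subarray(0, magic.length).equals(magic)
}

console.log(hasMagic(Buffer.from('PK\x03\x04rest-of-archive'), ZIP_MAGIC)) // true
console.log(hasMagic(Buffer.from('%PDF-1.7\n'), PDF_MAGIC)) // true
console.log(hasMagic(Buffer.from('const deck = buildDeck()'), ZIP_MAGIC)) // false
```

The explicit length guard mirrors the `buffer.length >= magicLen` check in `compileDocumentIfNeeded`, which keeps tiny source snippets from being misread as truncated binaries.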
const MAX_COMPILED_DOC_CACHE = 10
const compiledDocCache = new Map<string, Buffer>()
function compiledCacheSet(key: string, buffer: Buffer): void {
if (compiledDocCache.size >= MAX_COMPILED_DOC_CACHE) {
compiledDocCache.delete(compiledDocCache.keys().next().value as string)
}
compiledDocCache.set(key, buffer)
}
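The bounded cache above relies on `Map` preserving insertion order: `keys().next().value` is always the oldest entry. A minimal sketch of the eviction behavior (FIFO, not true LRU — reads do not refresh an entry's position):

```typescript
// Bounded Map cache evicting in insertion (FIFO) order.
const MAX_ENTRIES = 2
const cache = new Map<string, Buffer>()

function cacheSet(key: string, value: Buffer): void {
  if (cache.size >= MAX_ENTRIES) {
    // Map iterates in insertion order, so the first key is the oldest.
    cache.delete(cache.keys().next().value as string)
  }
  cache.set(key, value)
}

cacheSet('a', Buffer.from('1'))
cacheSet('b', Buffer.from('2'))
cacheSet('c', Buffer.from('3')) // evicts 'a'
console.log([...cache.keys()]) // [ 'b', 'c' ]
```

For ten compiled documents this simplicity is a reasonable trade; a hot entry that keeps getting recompiled would be the signal to move to a real LRU.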
async function compileDocumentIfNeeded(
buffer: Buffer,
filename: string,
workspaceId?: string,
raw?: boolean
): Promise<{ buffer: Buffer; contentType: string }> {
const isPptx = filename.toLowerCase().endsWith('.pptx')
if (raw || !isPptx || buffer.subarray(0, 4).equals(ZIP_MAGIC)) {
if (raw) return { buffer, contentType: getContentType(filename) }
const ext = filename.slice(filename.lastIndexOf('.')).toLowerCase()
const format = COMPILABLE_FORMATS[ext]
if (!format) return { buffer, contentType: getContentType(filename) }
const magicLen = format.magic.length
if (buffer.length >= magicLen && buffer.subarray(0, magicLen).equals(format.magic)) {
return { buffer, contentType: getContentType(filename) }
}
const code = buffer.toString('utf-8')
const cacheKey = createHash('sha256')
.update(ext)
.update(code)
.update(workspaceId ?? '')
.digest('hex')
const cached = compiledPptxCache.get(cacheKey)
const cached = compiledDocCache.get(cacheKey)
if (cached) {
return {
buffer: cached,
contentType: 'application/vnd.openxmlformats-officedocument.presentationml.presentation',
}
return { buffer: cached, contentType: format.contentType }
}
const compiled = await generatePptxFromCode(code, workspaceId || '')
const compiled = await format.compile(code, workspaceId || '')
compiledCacheSet(cacheKey, compiled)
return {
buffer: compiled,
contentType: 'application/vnd.openxmlformats-officedocument.presentationml.presentation',
}
return { buffer: compiled, contentType: format.contentType }
}
const STORAGE_KEY_PREFIX_RE = /^\d{13}-[a-z0-9]{7}-/
@@ -95,7 +125,9 @@ export async function GET(
const cloudKey = isCloudPath ? path.slice(1).join('/') : fullPath
const isPublicByKeyPrefix =
cloudKey.startsWith('profile-pictures/') || cloudKey.startsWith('og-images/')
cloudKey.startsWith('profile-pictures/') ||
cloudKey.startsWith('og-images/') ||
cloudKey.startsWith('workspace-logos/')
if (isPublicByKeyPrefix) {
const context = inferContextFromKey(cloudKey)
@@ -169,7 +201,7 @@ async function handleLocalFile(
const segment = filename.split('/').pop() || filename
const displayName = stripStorageKeyPrefix(segment)
const workspaceId = getWorkspaceIdForCompile(filename)
const { buffer: fileBuffer, contentType } = await compilePptxIfNeeded(
const { buffer: fileBuffer, contentType } = await compileDocumentIfNeeded(
rawBuffer,
displayName,
workspaceId,
@@ -226,7 +258,7 @@ async function handleCloudProxy(
const segment = cloudKey.split('/').pop() || 'download'
const displayName = stripStorageKeyPrefix(segment)
const workspaceId = getWorkspaceIdForCompile(cloudKey)
const { buffer: fileBuffer, contentType } = await compilePptxIfNeeded(
const { buffer: fileBuffer, contentType } = await compileDocumentIfNeeded(
rawBuffer,
displayName,
workspaceId,


@@ -2,7 +2,9 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { sanitizeFileName } from '@/executor/constants'
import '@/lib/uploads/core/setup.server'
import { AuditAction, AuditResourceType, recordAudit } from '@/lib/audit/log'
import { getSession } from '@/lib/auth'
import { captureServerEvent } from '@/lib/posthog/server'
import type { StorageContext } from '@/lib/uploads/config'
import { generateWorkspaceFileKey } from '@/lib/uploads/contexts/workspace/workspace-file-manager'
import { isImageFileType } from '@/lib/uploads/utils/file-utils'
@@ -64,7 +66,7 @@ export async function POST(request: NextRequest) {
// Context must be explicitly provided
if (!contextParam) {
throw new InvalidRequestError(
'Upload requires explicit context parameter (knowledge-base, workspace, execution, copilot, chat, or profile-pictures)'
'Upload requires explicit context parameter (knowledge-base, workspace, execution, copilot, chat, profile-pictures, or workspace-logos)'
)
}
@@ -282,13 +284,35 @@ export async function POST(request: NextRequest) {
continue
}
if (context === 'copilot' || context === 'chat' || context === 'profile-pictures') {
if (
context === 'copilot' ||
context === 'chat' ||
context === 'profile-pictures' ||
context === 'workspace-logos'
) {
if (context !== 'copilot' && !isImageFileType(file.type)) {
throw new InvalidRequestError(
`Only image files (JPEG, PNG, GIF, WebP, SVG) are allowed for ${context} uploads`
)
}
if (context === 'workspace-logos') {
if (!workspaceId) {
throw new InvalidRequestError('workspace-logos context requires workspaceId parameter')
}
const permission = await getUserEntityPermissions(
session.user.id,
'workspace',
workspaceId
)
if (permission !== 'admin') {
return NextResponse.json(
{ error: 'Admin access required for workspace logo uploads' },
{ status: 403 }
)
}
}
if (context === 'chat' && workspaceId) {
const permission = await getUserEntityPermissions(
session.user.id,
@@ -346,13 +370,40 @@ export async function POST(request: NextRequest) {
}
logger.info(`Successfully uploaded ${context} file: ${fileInfo.key}`)
if (context === 'workspace-logos' && workspaceId) {
recordAudit({
workspaceId,
actorId: session.user.id,
actorName: session.user.name,
actorEmail: session.user.email,
action: AuditAction.FILE_UPLOADED,
resourceType: AuditResourceType.WORKSPACE,
resourceId: workspaceId,
description: `Uploaded workspace logo "${originalName}"`,
metadata: {
fileName: originalName,
fileKey: fileInfo.key,
fileSize: buffer.length,
fileType: file.type,
},
request,
})
captureServerEvent(session.user.id, 'workspace_logo_uploaded', {
workspace_id: workspaceId,
file_name: originalName,
file_size: buffer.length,
})
}
uploadResults.push(uploadResult)
continue
}
// Unknown context
throw new InvalidRequestError(
`Unsupported context: ${context}. Use knowledge-base, workspace, execution, copilot, chat, or profile-pictures`
`Unsupported context: ${context}. Use knowledge-base, workspace, execution, copilot, chat, profile-pictures, or workspace-logos`
)
}


@@ -24,6 +24,27 @@ vi.mock('@/lib/auth/hybrid', () => ({
vi.mock('@/lib/execution/e2b', () => ({
executeInE2B: mockExecuteInE2B,
executeShellInE2B: vi.fn(),
}))
vi.mock('@/lib/copilot/request/tools/files', () => ({
FORMAT_TO_CONTENT_TYPE: {
json: 'application/json',
csv: 'text/csv',
txt: 'text/plain',
md: 'text/markdown',
html: 'text/html',
},
normalizeOutputWorkspaceFileName: vi.fn((p: string) => p.replace(/^files\//, '')),
resolveOutputFormat: vi.fn(() => 'json'),
}))
vi.mock('@/lib/uploads/contexts/workspace/workspace-file-manager', () => ({
uploadWorkspaceFile: vi.fn(),
}))
vi.mock('@/lib/workflows/utils', () => ({
getWorkflowById: vi.fn(),
}))
vi.mock('@/lib/core/config/feature-flags', () => ({
@@ -32,6 +53,7 @@ vi.mock('@/lib/core/config/feature-flags', () => ({
isProd: false,
isDev: false,
isTest: true,
isEmailVerificationEnabled: false,
}))
import { validateProxyUrl } from '@/lib/core/security/input-validation'


@@ -1,16 +1,24 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import {
FORMAT_TO_CONTENT_TYPE,
normalizeOutputWorkspaceFileName,
resolveOutputFormat,
} from '@/lib/copilot/request/tools/files'
import { isE2bEnabled } from '@/lib/core/config/feature-flags'
import { generateRequestId } from '@/lib/core/utils/request'
import { executeInE2B } from '@/lib/execution/e2b'
import { executeInE2B, executeShellInE2B } from '@/lib/execution/e2b'
import { executeInIsolatedVM } from '@/lib/execution/isolated-vm'
import { CodeLanguage, DEFAULT_CODE_LANGUAGE, isValidCodeLanguage } from '@/lib/execution/languages'
import { uploadWorkspaceFile } from '@/lib/uploads/contexts/workspace/workspace-file-manager'
import { getWorkflowById } from '@/lib/workflows/utils'
import { escapeRegExp, normalizeName, REFERENCE } from '@/executor/constants'
import { type OutputSchema, resolveBlockReference } from '@/executor/utils/block-reference'
import { formatLiteralForCode } from '@/executor/utils/code-formatting'
import {
createEnvVarPattern,
createReferencePattern,
createWorkflowVariablePattern,
} from '@/executor/utils/reference-validation'
export const dynamic = 'force-dynamic'
@@ -20,6 +28,8 @@ export const MAX_DURATION = 210
const logger = createLogger('FunctionExecuteAPI')
const TAG_PATTERN = createReferencePattern()
const E2B_JS_WRAPPER_LINES = 3
const E2B_PYTHON_WRAPPER_LINES = 1
@@ -486,11 +496,7 @@ function resolveTagVariables(
let resolvedCode = code
const undefinedLiteral = language === 'python' ? 'None' : 'undefined'
const tagPattern = new RegExp(
`${REFERENCE.START}([a-zA-Z_](?:[a-zA-Z0-9_${REFERENCE.PATH_DELIMITER}]*[a-zA-Z0-9_])?)${REFERENCE.END}`,
'g'
)
const tagMatches = resolvedCode.match(tagPattern) || []
const tagMatches = resolvedCode.match(TAG_PATTERN) || []
for (const match of tagMatches) {
const tagName = match.slice(REFERENCE.START.length, -REFERENCE.END.length).trim()
@@ -580,6 +586,107 @@ function cleanStdout(stdout: string): string {
return stdout
}
async function maybeExportSandboxFileToWorkspace(args: {
authUserId: string
workflowId?: string
workspaceId?: string
outputPath?: string
outputFormat?: string
outputMimeType?: string
outputSandboxPath?: string
exportedFileContent?: string
stdout: string
executionTime: number
}) {
const {
authUserId,
workflowId,
workspaceId,
outputPath,
outputFormat,
outputMimeType,
outputSandboxPath,
exportedFileContent,
stdout,
executionTime,
} = args
if (!outputSandboxPath) return null
if (!outputPath) {
return NextResponse.json(
{
success: false,
error:
'outputSandboxPath requires outputPath. Set outputPath to the destination workspace file, e.g. "files/result.csv".',
output: { result: null, stdout: cleanStdout(stdout), executionTime },
},
{ status: 400 }
)
}
const resolvedWorkspaceId =
workspaceId || (workflowId ? (await getWorkflowById(workflowId))?.workspaceId : undefined)
if (!resolvedWorkspaceId) {
return NextResponse.json(
{
success: false,
error: 'Workspace context required to save sandbox file to workspace',
output: { result: null, stdout: cleanStdout(stdout), executionTime },
},
{ status: 400 }
)
}
if (exportedFileContent === undefined) {
return NextResponse.json(
{
success: false,
error: `Sandbox file "${outputSandboxPath}" was not found or could not be read`,
output: { result: null, stdout: cleanStdout(stdout), executionTime },
},
{ status: 500 }
)
}
const fileName = normalizeOutputWorkspaceFileName(outputPath)
const TEXT_MIMES = new Set(Object.values(FORMAT_TO_CONTENT_TYPE))
const resolvedMimeType =
outputMimeType ||
FORMAT_TO_CONTENT_TYPE[resolveOutputFormat(fileName, outputFormat)] ||
'application/octet-stream'
const isBinary = !TEXT_MIMES.has(resolvedMimeType)
const fileBuffer = isBinary
? Buffer.from(exportedFileContent, 'base64')
: Buffer.from(exportedFileContent, 'utf-8')
const uploaded = await uploadWorkspaceFile(
resolvedWorkspaceId,
authUserId,
fileBuffer,
fileName,
resolvedMimeType
)
return NextResponse.json({
success: true,
output: {
result: {
message: `Sandbox file exported to files/${fileName}`,
fileId: uploaded.id,
fileName,
downloadUrl: uploaded.url,
sandboxPath: outputSandboxPath,
},
stdout: cleanStdout(stdout),
executionTime,
},
resources: [{ type: 'file', id: uploaded.id, title: fileName }],
})
}
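The export path above decodes the sandbox file as utf-8 when its resolved MIME type is one of the known text formats and as base64 otherwise. A sketch of that decision, with the MIME set taken from the FORMAT_TO_CONTENT_TYPE values mocked earlier in the diff:

```typescript
// Text MIME types arrive as utf-8 strings; everything else is base64-encoded.
const TEXT_MIMES = new Set([
  'application/json',
  'text/csv',
  'text/plain',
  'text/markdown',
  'text/html',
])

function decodeExportedContent(content: string, mimeType: string): Buffer {
  const isBinary = !TEXT_MIMES.has(mimeType)
  return isBinary ? Buffer.from(content, 'base64') : Buffer.from(content, 'utf-8')
}

console.log(decodeExportedContent('{"ok":true}', 'application/json').toString('utf-8')) // {"ok":true}
const binary = decodeExportedContent(Buffer.from('raw bytes').toString('base64'), 'image/png')
console.log(binary.toString('utf-8')) // raw bytes
```

Decoding by MIME type rather than by sniffing the payload keeps the contract explicit: the sandbox side must base64-encode anything it reports under a non-text content type.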
export async function POST(req: NextRequest) {
const requestId = generateRequestId()
const startTime = Date.now()
@@ -603,12 +710,17 @@ export async function POST(req: NextRequest) {
params = {},
timeout = DEFAULT_EXECUTION_TIMEOUT_MS,
language = DEFAULT_CODE_LANGUAGE,
outputPath,
outputFormat,
outputMimeType,
outputSandboxPath,
envVars = {},
blockData = {},
blockNameMapping = {},
blockOutputSchemas = {},
workflowVariables = {},
workflowId,
workspaceId,
isCustomTool = false,
_sandboxFiles,
} = body
@@ -626,18 +738,25 @@ export async function POST(req: NextRequest) {
const lang = isValidCodeLanguage(language) ? language : DEFAULT_CODE_LANGUAGE
const codeResolution = resolveCodeVariables(
code,
executionParams,
envVars,
blockData,
blockNameMapping,
blockOutputSchemas,
workflowVariables,
lang
)
resolvedCode = codeResolution.resolvedCode
const contextVariables = codeResolution.contextVariables
let contextVariables: Record<string, unknown> = {}
if (lang === CodeLanguage.Shell) {
// For shell, env vars are injected as OS env vars via shellEnvs.
// Replace {{VAR}} placeholders with $VAR so the shell can access them natively.
resolvedCode = code.replace(/\{\{([A-Za-z_][A-Za-z0-9_]*)\}\}/g, '$$$1')
} else {
const codeResolution = resolveCodeVariables(
code,
executionParams,
envVars,
blockData,
blockNameMapping,
blockOutputSchemas,
workflowVariables,
lang
)
resolvedCode = codeResolution.resolvedCode
contextVariables = codeResolution.contextVariables
}
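The shell branch above rewrites `{{VAR}}` placeholders into `$VAR` with the replacement string `'$$$1'`. In a `replace()` pattern string, `$$` is an escaped literal `$` and `$1` is the first capture group, so the three dollars emit one `$` followed by the captured name:

```typescript
// {{NAME}} -> $NAME, so the shell resolves the variable natively from its
// environment instead of having the value spliced into the command text.
const code = 'curl -H "Authorization: Bearer {{API_KEY}}" {{BASE_URL}}/health'
const resolved = code.replace(/\{\{([A-Za-z_][A-Za-z0-9_]*)\}\}/g, '$$$1')
console.log(resolved)
// curl -H "Authorization: Bearer $API_KEY" $BASE_URL/health
```

Leaving resolution to the shell (with values injected as OS env vars) also keeps secrets out of the command string itself, which matters for anything the sandbox might log.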
let jsImports = ''
let jsRemainingCode = resolvedCode
@@ -652,6 +771,83 @@ export async function POST(req: NextRequest) {
hasImports = jsImports.trim().length > 0 || hasRequireStatements
}
if (lang === CodeLanguage.Shell) {
if (!isE2bEnabled) {
throw new Error(
'Shell execution requires E2B to be enabled. Please contact your administrator to enable E2B.'
)
}
const shellEnvs: Record<string, string> = {}
for (const [k, v] of Object.entries(envVars)) {
shellEnvs[k] = String(v)
}
for (const [k, v] of Object.entries(contextVariables)) {
shellEnvs[k] = String(v)
}
logger.info(`[${requestId}] E2B shell execution`, {
enabled: isE2bEnabled,
hasApiKey: Boolean(process.env.E2B_API_KEY),
envVarCount: Object.keys(shellEnvs).length,
})
const execStart = Date.now()
const {
result: shellResult,
stdout: shellStdout,
sandboxId,
error: shellError,
exportedFileContent,
} = await executeShellInE2B({
code: resolvedCode,
envs: shellEnvs,
timeoutMs: timeout,
sandboxFiles: _sandboxFiles,
outputSandboxPath,
})
const executionTime = Date.now() - execStart
logger.info(`[${requestId}] E2B shell sandbox`, {
sandboxId,
stdoutPreview: shellStdout?.slice(0, 200),
error: shellError,
executionTime,
})
if (shellError) {
return NextResponse.json(
{
success: false,
error: shellError,
output: { result: null, stdout: cleanStdout(shellStdout), executionTime },
},
{ status: 500 }
)
}
if (outputSandboxPath) {
const fileExportResponse = await maybeExportSandboxFileToWorkspace({
authUserId: auth.userId,
workflowId,
workspaceId,
outputPath,
outputFormat,
outputMimeType,
outputSandboxPath,
exportedFileContent,
stdout: shellStdout,
executionTime,
})
if (fileExportResponse) return fileExportResponse
}
return NextResponse.json({
success: true,
output: { result: shellResult ?? null, stdout: cleanStdout(shellStdout), executionTime },
})
}
if (lang === CodeLanguage.Python && !isE2bEnabled) {
throw new Error(
'Python execution requires E2B to be enabled. Please contact your administrator to enable E2B, or use JavaScript instead.'
@@ -719,11 +915,13 @@ export async function POST(req: NextRequest) {
stdout: e2bStdout,
sandboxId,
error: e2bError,
exportedFileContent,
} = await executeInE2B({
code: codeForE2B,
language: CodeLanguage.JavaScript,
timeoutMs: timeout,
sandboxFiles: _sandboxFiles,
outputSandboxPath,
})
const executionTime = Date.now() - execStart
stdout += e2bStdout
@@ -752,6 +950,22 @@ export async function POST(req: NextRequest) {
)
}
if (outputSandboxPath) {
const fileExportResponse = await maybeExportSandboxFileToWorkspace({
authUserId: auth.userId,
workflowId,
workspaceId,
outputPath,
outputFormat,
outputMimeType,
outputSandboxPath,
exportedFileContent,
stdout,
executionTime,
})
if (fileExportResponse) return fileExportResponse
}
return NextResponse.json({
success: true,
output: { result: e2bResult ?? null, stdout: cleanStdout(stdout), executionTime },
@@ -783,11 +997,13 @@ export async function POST(req: NextRequest) {
stdout: e2bStdout,
sandboxId,
error: e2bError,
exportedFileContent,
} = await executeInE2B({
code: codeForE2B,
language: CodeLanguage.Python,
timeoutMs: timeout,
sandboxFiles: _sandboxFiles,
outputSandboxPath,
})
const executionTime = Date.now() - execStart
stdout += e2bStdout
@@ -816,6 +1032,22 @@ export async function POST(req: NextRequest) {
)
}
if (outputSandboxPath) {
const fileExportResponse = await maybeExportSandboxFileToWorkspace({
authUserId: auth.userId,
workflowId,
workspaceId,
outputPath,
outputFormat,
outputMimeType,
outputSandboxPath,
exportedFileContent,
stdout,
executionTime,
})
if (fileExportResponse) return fileExportResponse
}
return NextResponse.json({
success: true,
output: { result: e2bResult ?? null, stdout: cleanStdout(stdout), executionTime },


@@ -6,16 +6,16 @@ import { beforeEach, describe, expect, it, vi } from 'vitest'
const {
mockCheckHybridAuth,
mockGetDispatchJobRecord,
mockGetJobQueue,
mockVerifyWorkflowAccess,
mockGetWorkflowById,
mockGetJob,
} = vi.hoisted(() => ({
mockCheckHybridAuth: vi.fn(),
mockGetDispatchJobRecord: vi.fn(),
mockGetJobQueue: vi.fn(),
mockVerifyWorkflowAccess: vi.fn(),
mockGetWorkflowById: vi.fn(),
mockGetJob: vi.fn(),
}))
vi.mock('@sim/logger', () => ({
@@ -32,19 +32,9 @@ vi.mock('@/lib/auth/hybrid', () => ({
}))
vi.mock('@/lib/core/async-jobs', () => ({
JOB_STATUS: {
PENDING: 'pending',
PROCESSING: 'processing',
COMPLETED: 'completed',
FAILED: 'failed',
},
getJobQueue: mockGetJobQueue,
}))
vi.mock('@/lib/core/workspace-dispatch/store', () => ({
getDispatchJobRecord: mockGetDispatchJobRecord,
}))
vi.mock('@/lib/core/utils/request', () => ({
generateRequestId: vi.fn().mockReturnValue('request-1'),
}))
@@ -85,71 +75,51 @@ describe('GET /api/jobs/[jobId]', () => {
})
mockGetJobQueue.mockResolvedValue({
getJob: vi.fn().mockResolvedValue(null),
getJob: mockGetJob,
})
})
it('returns dispatcher-aware waiting status with metadata', async () => {
mockGetDispatchJobRecord.mockResolvedValue({
id: 'dispatch-1',
workspaceId: 'workspace-1',
lane: 'runtime',
queueName: 'workflow-execution',
bullmqJobName: 'workflow-execution',
bullmqPayload: {},
it('returns job status with metadata', async () => {
mockGetJob.mockResolvedValue({
id: 'job-1',
status: 'pending',
metadata: {
workflowId: 'workflow-1',
},
priority: 10,
status: 'waiting',
createdAt: 1000,
admittedAt: 2000,
})
const response = await GET(createMockRequest(), {
params: Promise.resolve({ jobId: 'dispatch-1' }),
params: Promise.resolve({ jobId: 'job-1' }),
})
const body = await response.json()
expect(response.status).toBe(200)
expect(body.status).toBe('waiting')
expect(body.metadata.queueName).toBe('workflow-execution')
expect(body.metadata.lane).toBe('runtime')
expect(body.metadata.workspaceId).toBe('workspace-1')
expect(body.status).toBe('pending')
expect(body.metadata.workflowId).toBe('workflow-1')
})
it('returns completed output from dispatch state', async () => {
mockGetDispatchJobRecord.mockResolvedValue({
id: 'dispatch-2',
workspaceId: 'workspace-1',
lane: 'interactive',
queueName: 'workflow-execution',
bullmqJobName: 'direct-workflow-execution',
bullmqPayload: {},
it('returns completed output from job', async () => {
mockGetJob.mockResolvedValue({
id: 'job-2',
status: 'completed',
metadata: {
workflowId: 'workflow-1',
},
priority: 1,
status: 'completed',
createdAt: 1000,
startedAt: 2000,
completedAt: 7000,
output: { success: true },
})
const response = await GET(createMockRequest(), {
params: Promise.resolve({ jobId: 'dispatch-2' }),
params: Promise.resolve({ jobId: 'job-2' }),
})
const body = await response.json()
expect(response.status).toBe(200)
expect(body.status).toBe('completed')
expect(body.output).toEqual({ success: true })
expect(body.metadata.duration).toBe(5000)
})
it('returns 404 when neither dispatch nor BullMQ job exists', async () => {
mockGetDispatchJobRecord.mockResolvedValue(null)
it('returns 404 when job does not exist', async () => {
mockGetJob.mockResolvedValue(null)
const response = await GET(createMockRequest(), {
params: Promise.resolve({ jobId: 'missing-job' }),


@@ -3,8 +3,6 @@ import { type NextRequest, NextResponse } from 'next/server'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { getJobQueue } from '@/lib/core/async-jobs'
import { generateRequestId } from '@/lib/core/utils/request'
import { presentDispatchOrJobStatus } from '@/lib/core/workspace-dispatch/status'
import { getDispatchJobRecord } from '@/lib/core/workspace-dispatch/store'
import { createErrorResponse } from '@/app/api/workflows/utils'
const logger = createLogger('TaskStatusAPI')
@@ -25,15 +23,14 @@ export async function GET(
const authenticatedUserId = authResult.userId
const dispatchJob = await getDispatchJobRecord(taskId)
const jobQueue = await getJobQueue()
const job = dispatchJob ? null : await jobQueue.getJob(taskId)
const job = await jobQueue.getJob(taskId)
if (!job && !dispatchJob) {
if (!job) {
return createErrorResponse('Task not found', 404)
}
const metadataToCheck = dispatchJob?.metadata ?? job?.metadata
const metadataToCheck = job.metadata
if (metadataToCheck?.workflowId) {
const { verifyWorkflowAccess } = await import('@/socket/middleware/permissions')
@@ -61,25 +58,22 @@ export async function GET(
return createErrorResponse('Access denied', 403)
}
const presented = presentDispatchOrJobStatus(dispatchJob, job)
const response: any = {
const response: Record<string, unknown> = {
success: true,
taskId,
status: presented.status,
metadata: presented.metadata,
status: job.status,
metadata: job.metadata,
}
if (presented.output !== undefined) response.output = presented.output
if (presented.error !== undefined) response.error = presented.error
if (presented.estimatedDuration !== undefined) {
response.estimatedDuration = presented.estimatedDuration
}
if (job.output !== undefined) response.output = job.output
if (job.error !== undefined) response.error = job.error
return NextResponse.json(response)
} catch (error: any) {
} catch (error: unknown) {
const errorMessage = error instanceof Error ? error.message : String(error)
logger.error(`[${requestId}] Error fetching task status:`, error)
if (error.message?.includes('not found') || error.status === 404) {
if (errorMessage?.includes('not found')) {
return createErrorResponse('Task not found', 404)
}

View File

@@ -17,14 +17,11 @@ import { eq, sql } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { validateOAuthAccessToken } from '@/lib/auth/oauth-token'
import { getHighestPrioritySubscription } from '@/lib/billing/core/subscription'
import { createRunSegment } from '@/lib/copilot/async-runs/repository'
import { ORCHESTRATION_TIMEOUT_MS, SIM_AGENT_API_URL } from '@/lib/copilot/constants'
import { orchestrateCopilotStream } from '@/lib/copilot/orchestrator'
import { orchestrateSubagentStream } from '@/lib/copilot/orchestrator/subagent'
import {
executeToolServerSide,
prepareExecutionContext,
} from '@/lib/copilot/orchestrator/tool-executor'
import { runHeadlessCopilotLifecycle } from '@/lib/copilot/request/lifecycle/headless'
import { orchestrateSubagentStream } from '@/lib/copilot/request/subagent'
import { ensureHandlersRegistered, executeTool } from '@/lib/copilot/tool-executor'
import { prepareExecutionContext } from '@/lib/copilot/tools/handlers/context'
import { DIRECT_TOOL_DEFS, SUBAGENT_TOOL_DEFS } from '@/lib/copilot/tools/mcp/definitions'
import { env } from '@/lib/core/config/env'
import { RateLimiter } from '@/lib/core/rate-limiter'
@@ -125,12 +122,10 @@ Sim is a workflow automation platform. Workflows are visual pipelines of connect
1. \`list_workspaces\` → know where to work
2. \`create_workflow(name, workspaceId)\` → get a workflowId
3. \`sim_build(request, workflowId)\` → plan and build in one pass
3. \`sim_workflow(request, workflowId)\` → plan and build in one pass
4. \`sim_test(request, workflowId)\` → verify it works
5. \`sim_deploy("deploy as api", workflowId)\` → make it accessible externally (optional)
For fine-grained control, use \`sim_plan\` → \`sim_edit\` instead of \`sim_build\`. Pass the plan object from sim_plan EXACTLY as-is to sim_edit's context.plan field.
### Working with Existing Workflows
When the user refers to a workflow by name or description ("the email one", "my Slack bot"):
@@ -148,8 +143,8 @@ When the user refers to a workflow by name or description ("the email one", "my
### Key Rules
- You can test workflows immediately after building — deployment is only needed for external access (API, chat, MCP).
- All copilot tools (build, plan, edit, deploy, test, debug) require workflowId.
- If the user reports errors → use \`sim_debug\` first, don't guess.
- All workflow-scoped copilot tools require \`workflowId\`.
- If the user reports errors, route through \`sim_workflow\` and ask it to reproduce, inspect logs, and fix the issue end to end.
- Variable syntax: \`<blockname.field>\` for block outputs, \`{{ENV_VAR}}\` for env vars.
`
@@ -645,7 +640,8 @@ async function handleDirectToolCall(
startTime: Date.now(),
}
const result = await executeToolServerSide(toolCall, execContext)
ensureHandlersRegistered()
const result = await executeTool(toolCall.name, toolCall.params || {}, execContext)
return {
content: [
@@ -672,7 +668,7 @@ async function handleDirectToolCall(
/**
* Build mode uses the main chat orchestrator with the 'fast' command instead of
* the subagent endpoint. In Go, 'build' is not a registered subagent — it's a mode
* the subagent endpoint. In Go, 'workflow' is not a registered subagent — it's a mode
* (ModeFast) on the main chat processor that bypasses subagent orchestration and
* executes all tools directly.
*/
@@ -728,25 +724,10 @@ async function handleBuildToolCall(
chatId,
}
const executionId = generateId()
const runId = generateId()
const messageId = requestPayload.messageId as string
await createRunSegment({
id: runId,
executionId,
chatId,
userId,
workflowId: resolved.workflowId,
streamId: messageId,
}).catch(() => {})
const result = await orchestrateCopilotStream(requestPayload, {
const result = await runHeadlessCopilotLifecycle(requestPayload, {
userId,
workflowId: resolved.workflowId,
chatId,
executionId,
runId,
goRoute: '/api/mcp',
autoExecuteTools: true,
timeout: ORCHESTRATION_TIMEOUT_MS,
@@ -785,7 +766,7 @@ async function handleSubagentToolCall(
userId: string,
abortSignal?: AbortSignal
): Promise<CallToolResult> {
if (toolDef.agentId === 'build') {
if (toolDef.agentId === 'workflow') {
return handleBuildToolCall(args, userId, abortSignal)
}

View File

@@ -0,0 +1 @@
export { POST } from '@/app/api/copilot/chat/abort/route'

View File

@@ -0,0 +1 @@
export { DELETE, PATCH, POST } from '@/app/api/copilot/chat/resources/route'

View File

@@ -1,417 +1,3 @@
import { db } from '@sim/db'
import { copilotChats } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq, sql } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { resolveOrCreateChat } from '@/lib/copilot/chat-lifecycle'
import { buildCopilotRequestPayload } from '@/lib/copilot/chat-payload'
import {
acquirePendingChatStream,
createSSEStream,
SSE_RESPONSE_HEADERS,
} from '@/lib/copilot/chat-streaming'
import type { OrchestratorResult } from '@/lib/copilot/orchestrator/types'
import { processContextsServer, resolveActiveResourceContext } from '@/lib/copilot/process-contents'
import { createRequestTracker, createUnauthorizedResponse } from '@/lib/copilot/request-helpers'
import { taskPubSub } from '@/lib/copilot/task-events'
import { generateWorkspaceContext } from '@/lib/copilot/workspace-context'
import { generateId } from '@/lib/core/utils/uuid'
import {
assertActiveWorkspaceAccess,
getUserEntityPermissions,
} from '@/lib/workspaces/permissions/utils'
export const maxDuration = 3600
const logger = createLogger('MothershipChatAPI')
const FileAttachmentSchema = z.object({
id: z.string(),
key: z.string(),
filename: z.string(),
media_type: z.string(),
size: z.number(),
})
const ResourceAttachmentSchema = z.object({
type: z.enum(['workflow', 'table', 'file', 'knowledgebase', 'folder']),
id: z.string().min(1),
title: z.string().optional(),
active: z.boolean().optional(),
})
const MothershipMessageSchema = z.object({
message: z.string().min(1, 'Message is required'),
workspaceId: z.string().min(1, 'workspaceId is required'),
userMessageId: z.string().optional(),
chatId: z.string().optional(),
createNewChat: z.boolean().optional().default(false),
fileAttachments: z.array(FileAttachmentSchema).optional(),
userTimezone: z.string().optional(),
resourceAttachments: z.array(ResourceAttachmentSchema).optional(),
contexts: z
.array(
z.object({
kind: z.enum([
'past_chat',
'workflow',
'current_workflow',
'blocks',
'logs',
'workflow_block',
'knowledge',
'templates',
'docs',
'table',
'file',
'folder',
]),
label: z.string(),
chatId: z.string().optional(),
workflowId: z.string().optional(),
knowledgeId: z.string().optional(),
blockId: z.string().optional(),
blockIds: z.array(z.string()).optional(),
templateId: z.string().optional(),
executionId: z.string().optional(),
tableId: z.string().optional(),
fileId: z.string().optional(),
folderId: z.string().optional(),
})
)
.optional(),
})
/**
* POST /api/mothership/chat
* Workspace-scoped chat — no workflowId, proxies to Go /api/mothership.
*/
export async function POST(req: NextRequest) {
const tracker = createRequestTracker()
let userMessageIdForLogs: string | undefined
try {
const session = await getSession()
if (!session?.user?.id) {
return createUnauthorizedResponse()
}
const authenticatedUserId = session.user.id
const body = await req.json()
const {
message,
workspaceId,
userMessageId: providedMessageId,
chatId,
createNewChat,
fileAttachments,
contexts,
resourceAttachments,
userTimezone,
} = MothershipMessageSchema.parse(body)
const userMessageId = providedMessageId || generateId()
userMessageIdForLogs = userMessageId
const reqLogger = logger.withMetadata({
requestId: tracker.requestId,
messageId: userMessageId,
})
reqLogger.info('Received mothership chat start request', {
workspaceId,
chatId,
createNewChat,
hasContexts: Array.isArray(contexts) && contexts.length > 0,
contextsCount: Array.isArray(contexts) ? contexts.length : 0,
hasResourceAttachments: Array.isArray(resourceAttachments) && resourceAttachments.length > 0,
resourceAttachmentCount: Array.isArray(resourceAttachments) ? resourceAttachments.length : 0,
hasFileAttachments: Array.isArray(fileAttachments) && fileAttachments.length > 0,
fileAttachmentCount: Array.isArray(fileAttachments) ? fileAttachments.length : 0,
})
try {
await assertActiveWorkspaceAccess(workspaceId, authenticatedUserId)
} catch {
return NextResponse.json({ error: 'Workspace not found or access denied' }, { status: 403 })
}
let currentChat: any = null
let conversationHistory: any[] = []
let actualChatId = chatId
if (chatId || createNewChat) {
const chatResult = await resolveOrCreateChat({
chatId,
userId: authenticatedUserId,
workspaceId,
model: 'claude-opus-4-6',
type: 'mothership',
})
currentChat = chatResult.chat
actualChatId = chatResult.chatId || chatId
conversationHistory = Array.isArray(chatResult.conversationHistory)
? chatResult.conversationHistory
: []
if (chatId && !currentChat) {
return NextResponse.json({ error: 'Chat not found' }, { status: 404 })
}
}
let agentContexts: Array<{ type: string; content: string }> = []
if (Array.isArray(contexts) && contexts.length > 0) {
try {
agentContexts = await processContextsServer(
contexts as any,
authenticatedUserId,
message,
workspaceId,
actualChatId
)
} catch (e) {
reqLogger.error('Failed to process contexts', e)
}
}
if (Array.isArray(resourceAttachments) && resourceAttachments.length > 0) {
const results = await Promise.allSettled(
resourceAttachments.map(async (r) => {
const ctx = await resolveActiveResourceContext(
r.type,
r.id,
workspaceId,
authenticatedUserId,
actualChatId
)
if (!ctx) return null
return {
...ctx,
tag: r.active ? '@active_tab' : '@open_tab',
}
})
)
for (const result of results) {
if (result.status === 'fulfilled' && result.value) {
agentContexts.push(result.value)
} else if (result.status === 'rejected') {
reqLogger.error('Failed to resolve resource attachment', result.reason)
}
}
}
if (actualChatId) {
const userMsg = {
id: userMessageId,
role: 'user' as const,
content: message,
timestamp: new Date().toISOString(),
...(fileAttachments &&
fileAttachments.length > 0 && {
fileAttachments: fileAttachments.map((f) => ({
id: f.id,
key: f.key,
filename: f.filename,
media_type: f.media_type,
size: f.size,
})),
}),
...(contexts &&
contexts.length > 0 && {
contexts: contexts.map((c) => ({
kind: c.kind,
label: c.label,
...(c.workflowId && { workflowId: c.workflowId }),
...(c.knowledgeId && { knowledgeId: c.knowledgeId }),
...(c.tableId && { tableId: c.tableId }),
...(c.fileId && { fileId: c.fileId }),
...(c.folderId && { folderId: c.folderId }),
})),
}),
}
const [updated] = await db
.update(copilotChats)
.set({
messages: sql`${copilotChats.messages} || ${JSON.stringify([userMsg])}::jsonb`,
conversationId: userMessageId,
updatedAt: new Date(),
})
.where(eq(copilotChats.id, actualChatId))
.returning({ messages: copilotChats.messages })
if (updated) {
const freshMessages: any[] = Array.isArray(updated.messages) ? updated.messages : []
conversationHistory = freshMessages.filter((m: any) => m.id !== userMessageId)
taskPubSub?.publishStatusChanged({ workspaceId, chatId: actualChatId, type: 'started' })
}
}
const [workspaceContext, userPermission] = await Promise.all([
generateWorkspaceContext(workspaceId, authenticatedUserId),
getUserEntityPermissions(authenticatedUserId, 'workspace', workspaceId).catch(() => null),
])
const requestPayload = await buildCopilotRequestPayload(
{
message,
workspaceId,
userId: authenticatedUserId,
userMessageId,
mode: 'agent',
model: '',
contexts: agentContexts,
fileAttachments,
chatId: actualChatId,
userPermission: userPermission ?? undefined,
workspaceContext,
userTimezone,
},
{ selectedModel: '' }
)
if (actualChatId) {
const acquired = await acquirePendingChatStream(actualChatId, userMessageId)
if (!acquired) {
return NextResponse.json(
{
error:
'A response is already in progress for this chat. Wait for it to finish or use Stop.',
},
{ status: 409 }
)
}
}
const executionId = generateId()
const runId = generateId()
const stream = createSSEStream({
requestPayload,
userId: authenticatedUserId,
streamId: userMessageId,
executionId,
runId,
chatId: actualChatId,
currentChat,
isNewChat: conversationHistory.length === 0,
message,
titleModel: 'claude-opus-4-6',
requestId: tracker.requestId,
workspaceId,
pendingChatStreamAlreadyRegistered: Boolean(actualChatId),
orchestrateOptions: {
userId: authenticatedUserId,
workspaceId,
chatId: actualChatId,
executionId,
runId,
goRoute: '/api/mothership',
autoExecuteTools: true,
interactive: true,
onComplete: async (result: OrchestratorResult) => {
if (!actualChatId) return
if (!result.success) return
const assistantMessage: Record<string, unknown> = {
id: generateId(),
role: 'assistant' as const,
content: result.content,
timestamp: new Date().toISOString(),
...(result.requestId ? { requestId: result.requestId } : {}),
}
if (result.toolCalls.length > 0) {
assistantMessage.toolCalls = result.toolCalls
}
if (result.contentBlocks.length > 0) {
assistantMessage.contentBlocks = result.contentBlocks.map((block) => {
const stored: Record<string, unknown> = { type: block.type }
if (block.content) stored.content = block.content
if (block.type === 'tool_call' && block.toolCall) {
const state =
block.toolCall.result?.success !== undefined
? block.toolCall.result.success
? 'success'
: 'error'
: block.toolCall.status
const isSubagentTool = !!block.calledBy
const isNonTerminal =
state === 'cancelled' || state === 'pending' || state === 'executing'
stored.toolCall = {
id: block.toolCall.id,
name: block.toolCall.name,
state,
...(isSubagentTool && isNonTerminal ? {} : { result: block.toolCall.result }),
...(isSubagentTool && isNonTerminal
? {}
: block.toolCall.params
? { params: block.toolCall.params }
: {}),
...(block.calledBy ? { calledBy: block.calledBy } : {}),
}
}
return stored
})
}
try {
const [row] = await db
.select({ messages: copilotChats.messages })
.from(copilotChats)
.where(eq(copilotChats.id, actualChatId))
.limit(1)
const msgs: any[] = Array.isArray(row?.messages) ? row.messages : []
const userIdx = msgs.findIndex((m: any) => m.id === userMessageId)
const alreadyHasResponse =
userIdx >= 0 &&
userIdx + 1 < msgs.length &&
(msgs[userIdx + 1] as any)?.role === 'assistant'
if (!alreadyHasResponse) {
await db
.update(copilotChats)
.set({
messages: sql`${copilotChats.messages} || ${JSON.stringify([assistantMessage])}::jsonb`,
conversationId: sql`CASE WHEN ${copilotChats.conversationId} = ${userMessageId} THEN NULL ELSE ${copilotChats.conversationId} END`,
updatedAt: new Date(),
})
.where(eq(copilotChats.id, actualChatId))
taskPubSub?.publishStatusChanged({
workspaceId,
chatId: actualChatId,
type: 'completed',
})
}
} catch (error) {
reqLogger.error('Failed to persist chat messages', {
chatId: actualChatId,
error: error instanceof Error ? error.message : 'Unknown error',
})
}
},
},
})
return new Response(stream, { headers: SSE_RESPONSE_HEADERS })
} catch (error) {
if (error instanceof z.ZodError) {
return NextResponse.json(
{ error: 'Invalid request data', details: error.errors },
{ status: 400 }
)
}
logger
.withMetadata({ requestId: tracker.requestId, messageId: userMessageIdForLogs })
.error('Error handling mothership chat', {
error: error instanceof Error ? error.message : 'Unknown error',
})
return NextResponse.json(
{ error: error instanceof Error ? error.message : 'Internal server error' },
{ status: 500 }
)
}
}
// Unified chat route surface.
export { handleUnifiedChatPost as POST, maxDuration } from '@/lib/copilot/chat/post'
export { GET } from '@/app/api/copilot/chat/queries'

View File

@@ -1,114 +1,2 @@
import { db } from '@sim/db'
import { copilotChats } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, sql } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { releasePendingChatStream } from '@/lib/copilot/chat-streaming'
import { taskPubSub } from '@/lib/copilot/task-events'
import { generateId } from '@/lib/core/utils/uuid'
const logger = createLogger('MothershipChatStopAPI')
const StoredToolCallSchema = z
.object({
id: z.string().optional(),
name: z.string().optional(),
state: z.string().optional(),
params: z.record(z.unknown()).optional(),
result: z
.object({
success: z.boolean(),
output: z.unknown().optional(),
error: z.string().optional(),
})
.optional(),
display: z
.object({
text: z.string().optional(),
})
.optional(),
calledBy: z.string().optional(),
})
.nullable()
const ContentBlockSchema = z.object({
type: z.string(),
content: z.string().optional(),
toolCall: StoredToolCallSchema.optional(),
})
const StopSchema = z.object({
chatId: z.string(),
streamId: z.string(),
content: z.string(),
contentBlocks: z.array(ContentBlockSchema).optional(),
})
/**
* POST /api/mothership/chat/stop
* Persists partial assistant content when the user stops a stream mid-response.
* Clears conversationId so the server-side onComplete won't duplicate the message.
*/
export async function POST(req: NextRequest) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { chatId, streamId, content, contentBlocks } = StopSchema.parse(await req.json())
await releasePendingChatStream(chatId, streamId)
const setClause: Record<string, unknown> = {
conversationId: null,
updatedAt: new Date(),
}
const hasContent = content.trim().length > 0
const hasBlocks = Array.isArray(contentBlocks) && contentBlocks.length > 0
if (hasContent || hasBlocks) {
const assistantMessage: Record<string, unknown> = {
id: generateId(),
role: 'assistant' as const,
content,
timestamp: new Date().toISOString(),
}
if (hasBlocks) {
assistantMessage.contentBlocks = contentBlocks
}
setClause.messages = sql`${copilotChats.messages} || ${JSON.stringify([assistantMessage])}::jsonb`
}
const [updated] = await db
.update(copilotChats)
.set(setClause)
.where(
and(
eq(copilotChats.id, chatId),
eq(copilotChats.userId, session.user.id),
eq(copilotChats.conversationId, streamId)
)
)
.returning({ workspaceId: copilotChats.workspaceId })
if (updated?.workspaceId) {
taskPubSub?.publishStatusChanged({
workspaceId: updated.workspaceId,
chatId,
type: 'completed',
})
}
return NextResponse.json({ success: true })
} catch (error) {
if (error instanceof z.ZodError) {
return NextResponse.json({ error: 'Invalid request' }, { status: 400 })
}
logger.error('Error stopping chat stream:', error)
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
// Unified stop route surface.
export { POST } from '@/app/api/copilot/chat/stop/route'

View File

@@ -0,0 +1 @@
export { GET, maxDuration } from '@/app/api/copilot/chat/stream/route'

View File

@@ -4,15 +4,19 @@ import { createLogger } from '@sim/logger'
import { and, eq, sql } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getAccessibleCopilotChat } from '@/lib/copilot/chat-lifecycle'
import { getStreamMeta, readStreamEvents } from '@/lib/copilot/orchestrator/stream/buffer'
import { getLatestRunForStream } from '@/lib/copilot/async-runs/repository'
import { getAccessibleCopilotChat } from '@/lib/copilot/chat/lifecycle'
import {
authenticateCopilotRequestSessionOnly,
createBadRequestResponse,
createInternalServerErrorResponse,
createUnauthorizedResponse,
} from '@/lib/copilot/request-helpers'
import { taskPubSub } from '@/lib/copilot/task-events'
} from '@/lib/copilot/request/http'
import type { FilePreviewSession } from '@/lib/copilot/request/session'
import { readEvents } from '@/lib/copilot/request/session/buffer'
import { readFilePreviewSessions } from '@/lib/copilot/request/session/file-preview-session'
import { type StreamBatchEvent, toStreamBatchEvent } from '@/lib/copilot/request/session/types'
import { taskPubSub } from '@/lib/copilot/tasks'
import { captureServerEvent } from '@/lib/posthog/server'
const logger = createLogger('MothershipChatAPI')
@@ -47,29 +51,45 @@ export async function GET(
}
let streamSnapshot: {
events: Array<{ eventId: number; streamId: string; event: Record<string, unknown> }>
events: StreamBatchEvent[]
previewSessions: FilePreviewSession[]
status: string
} | null = null
if (chat.conversationId) {
try {
const [meta, events] = await Promise.all([
getStreamMeta(chat.conversationId),
readStreamEvents(chat.conversationId, 0),
const [events, previewSessions] = await Promise.all([
readEvents(chat.conversationId, '0'),
readFilePreviewSessions(chat.conversationId).catch((error) => {
logger.warn('Failed to read preview sessions for mothership chat', {
chatId,
conversationId: chat.conversationId,
error: error instanceof Error ? error.message : String(error),
})
return []
}),
])
streamSnapshot = {
events: events || [],
status: meta?.status || 'unknown',
}
} catch (error) {
logger
.withMetadata({ messageId: chat.conversationId || undefined })
.warn('Failed to read stream snapshot for mothership chat', {
const run = await getLatestRunForStream(chat.conversationId, userId).catch((error) => {
logger.warn('Failed to fetch latest run for mothership chat snapshot', {
chatId,
conversationId: chat.conversationId,
error: error instanceof Error ? error.message : String(error),
})
return null
})
streamSnapshot = {
events: events.map(toStreamBatchEvent),
previewSessions,
status:
typeof run?.status === 'string' ? run.status : events.length > 0 ? 'active' : 'unknown',
}
} catch (error) {
logger.warn('Failed to read stream snapshot for mothership chat', {
chatId,
conversationId: chat.conversationId,
error: error instanceof Error ? error.message : String(error),
})
}
}

View File

@@ -0,0 +1,43 @@
import { db } from '@sim/db'
import { copilotChats } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, sql } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import {
authenticateCopilotRequestSessionOnly,
createBadRequestResponse,
createInternalServerErrorResponse,
createUnauthorizedResponse,
} from '@/lib/copilot/request/http'
const logger = createLogger('MarkTaskReadAPI')
const MarkReadSchema = z.object({
chatId: z.string().min(1),
})
export async function POST(request: NextRequest) {
try {
const { userId, isAuthenticated } = await authenticateCopilotRequestSessionOnly()
if (!isAuthenticated || !userId) {
return createUnauthorizedResponse()
}
const body = await request.json()
const { chatId } = MarkReadSchema.parse(body)
await db
.update(copilotChats)
.set({ lastSeenAt: sql`GREATEST(${copilotChats.updatedAt}, NOW())` })
.where(and(eq(copilotChats.id, chatId), eq(copilotChats.userId, userId)))
return NextResponse.json({ success: true })
} catch (error) {
if (error instanceof z.ZodError) {
return createBadRequestResponse('chatId is required')
}
logger.error('Error marking task as read:', error)
return createInternalServerErrorResponse('Failed to mark task as read')
}
}

View File

@@ -9,8 +9,8 @@ import {
createBadRequestResponse,
createInternalServerErrorResponse,
createUnauthorizedResponse,
} from '@/lib/copilot/request-helpers'
import { taskPubSub } from '@/lib/copilot/task-events'
} from '@/lib/copilot/request/http'
import { taskPubSub } from '@/lib/copilot/tasks'
import { captureServerEvent } from '@/lib/posthog/server'
import { assertActiveWorkspaceAccess } from '@/lib/workspaces/permissions/utils'
@@ -39,7 +39,7 @@ export async function GET(request: NextRequest) {
id: copilotChats.id,
title: copilotChats.title,
updatedAt: copilotChats.updatedAt,
conversationId: copilotChats.conversationId,
activeStreamId: copilotChats.conversationId,
lastSeenAt: copilotChats.lastSeenAt,
})
.from(copilotChats)

View File

@@ -7,7 +7,7 @@
* Auth is handled via session cookies (EventSource sends cookies automatically).
*/
import { taskPubSub } from '@/lib/copilot/task-events'
import { taskPubSub } from '@/lib/copilot/tasks'
import { createWorkspaceSSE } from '@/lib/events/sse-endpoint'
export const dynamic = 'force-dynamic'

View File

@@ -2,10 +2,10 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { createRunSegment } from '@/lib/copilot/async-runs/repository'
import { buildIntegrationToolSchemas } from '@/lib/copilot/chat-payload'
import { orchestrateCopilotStream } from '@/lib/copilot/orchestrator'
import { generateWorkspaceContext } from '@/lib/copilot/workspace-context'
import { buildIntegrationToolSchemas } from '@/lib/copilot/chat/payload'
import { generateWorkspaceContext } from '@/lib/copilot/chat/workspace-context'
import { runHeadlessCopilotLifecycle } from '@/lib/copilot/request/lifecycle/headless'
import { requestExplicitStreamAbort } from '@/lib/copilot/request/session/explicit-abort'
import { generateId } from '@/lib/core/utils/uuid'
import {
assertActiveWorkspaceAccess,
@@ -27,8 +27,16 @@ const ExecuteRequestSchema = z.object({
workspaceId: z.string().min(1, 'workspaceId is required'),
userId: z.string().min(1, 'userId is required'),
chatId: z.string().optional(),
messageId: z.string().optional(),
requestId: z.string().optional(),
workflowId: z.string().optional(),
executionId: z.string().optional(),
})
function isAbortError(error: unknown): boolean {
return error instanceof Error && error.name === 'AbortError'
}
/**
* POST /api/mothership/execute
*
@@ -38,6 +46,7 @@ const ExecuteRequestSchema = z.object({
*/
export async function POST(req: NextRequest) {
let messageId: string | undefined
let requestId: string | undefined
try {
const auth = await checkInternalAuth(req, { requireWorkflowId: false })
@@ -46,14 +55,29 @@ export async function POST(req: NextRequest) {
}
const body = await req.json()
const { messages, responseFormat, workspaceId, userId, chatId } =
ExecuteRequestSchema.parse(body)
const {
messages,
responseFormat,
workspaceId,
userId,
chatId,
messageId: providedMessageId,
requestId: providedRequestId,
workflowId,
executionId,
} = ExecuteRequestSchema.parse(body)
await assertActiveWorkspaceAccess(workspaceId, userId)
const effectiveChatId = chatId || generateId()
messageId = generateId()
const reqLogger = logger.withMetadata({ messageId })
messageId = providedMessageId || generateId()
requestId = providedRequestId || generateId()
const reqLogger = logger.withMetadata({
messageId,
requestId,
workflowId,
executionId,
})
const [workspaceContext, integrationTools, userPermission] = await Promise.all([
generateWorkspaceContext(workspaceId, userId),
buildIntegrationToolSchemas(userId, messageId),
@@ -73,61 +97,96 @@ export async function POST(req: NextRequest) {
...(userPermission ? { userPermission } : {}),
}
const executionId = generateId()
const runId = generateId()
let allowExplicitAbort = true
let explicitAbortRequest: Promise<void> | undefined
const onAbort = () => {
if (!allowExplicitAbort || explicitAbortRequest || !messageId) {
return
}
await createRunSegment({
id: runId,
executionId,
chatId: effectiveChatId,
userId,
workspaceId,
streamId: messageId,
}).catch(() => {})
const result = await orchestrateCopilotStream(requestPayload, {
userId,
workspaceId,
chatId: effectiveChatId,
executionId,
runId,
goRoute: '/api/mothership/execute',
autoExecuteTools: true,
interactive: false,
})
if (!result.success) {
reqLogger.error('Mothership execute failed', {
error: result.error,
errors: result.errors,
explicitAbortRequest = requestExplicitStreamAbort({
streamId: messageId,
userId,
chatId: effectiveChatId,
}).catch((error) => {
reqLogger.warn('Failed to send explicit abort for mothership execution', {
error: error instanceof Error ? error.message : String(error),
})
})
return NextResponse.json(
{
error: result.error || 'Mothership execution failed',
content: result.content || '',
},
{ status: 500 }
)
}
const clientToolNames = new Set(integrationTools.map((t) => t.name))
const clientToolCalls = (result.toolCalls || []).filter(
(tc: { name: string }) => clientToolNames.has(tc.name) || tc.name.startsWith('mcp-')
)
if (req.signal.aborted) {
onAbort()
} else {
req.signal.addEventListener('abort', onAbort, { once: true })
}
return NextResponse.json({
content: result.content,
model: 'mothership',
tokens: result.usage
? {
prompt: result.usage.prompt,
completion: result.usage.completion,
total: (result.usage.prompt || 0) + (result.usage.completion || 0),
try {
const result = await runHeadlessCopilotLifecycle(requestPayload, {
userId,
workspaceId,
chatId: effectiveChatId,
workflowId,
executionId,
simRequestId: requestId,
goRoute: '/api/mothership/execute',
autoExecuteTools: true,
interactive: false,
abortSignal: req.signal,
})
allowExplicitAbort = false
if (req.signal.aborted) {
reqLogger.info('Mothership execute aborted after lifecycle completion')
return NextResponse.json({ error: 'Mothership execution aborted' }, { status: 499 })
}
if (!result.success) {
logger.error(
messageId
? `Mothership execute failed [messageId:${messageId}]`
: 'Mothership execute failed',
{
requestId,
workflowId,
executionId,
error: result.error,
errors: result.errors,
}
: {},
cost: result.cost || undefined,
toolCalls: clientToolCalls,
})
)
return NextResponse.json(
{
error: result.error || 'Mothership execution failed',
content: result.content || '',
},
{ status: 500 }
)
}
const clientToolNames = new Set(integrationTools.map((t) => t.name))
const clientToolCalls = (result.toolCalls || []).filter(
(tc: { name: string }) => clientToolNames.has(tc.name) || tc.name.startsWith('mcp-')
)
return NextResponse.json({
content: result.content,
model: 'mothership',
tokens: result.usage
? {
prompt: result.usage.prompt,
completion: result.usage.completion,
total: (result.usage.prompt || 0) + (result.usage.completion || 0),
}
: {},
cost: result.cost || undefined,
toolCalls: clientToolCalls,
})
} finally {
allowExplicitAbort = false
req.signal.removeEventListener('abort', onAbort)
await explicitAbortRequest
}
} catch (error) {
if (error instanceof z.ZodError) {
return NextResponse.json(
@@ -136,9 +195,26 @@ export async function POST(req: NextRequest) {
)
}
logger.withMetadata({ messageId }).error('Mothership execute error', {
error: error instanceof Error ? error.message : 'Unknown error',
})
if (req.signal.aborted || isAbortError(error)) {
logger.info(
messageId
? `Mothership execute aborted [messageId:${messageId}]`
: 'Mothership execute aborted',
{
requestId,
}
)
return NextResponse.json({ error: 'Mothership execution aborted' }, { status: 499 })
}
logger.error(
messageId ? `Mothership execute error [messageId:${messageId}]` : 'Mothership execute error',
{
requestId,
error: error instanceof Error ? error.message : 'Unknown error',
}
)
return NextResponse.json(
{ error: error instanceof Error ? error.message : 'Internal server error' },

View File

@@ -1,13 +1,11 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { AuthType } from '@/lib/auth/hybrid'
import { getJobQueue, shouldUseBullMQ } from '@/lib/core/async-jobs'
import { createBullMQJobData } from '@/lib/core/bullmq'
import { getJobQueue } from '@/lib/core/async-jobs'
import { generateRequestId } from '@/lib/core/utils/request'
import { SSE_HEADERS } from '@/lib/core/utils/sse'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { generateId } from '@/lib/core/utils/uuid'
import { enqueueWorkspaceDispatch } from '@/lib/core/workspace-dispatch'
import { setExecutionMeta } from '@/lib/execution/event-buffer'
import { preprocessExecution } from '@/lib/execution/preprocessing'
import { PauseResumeManager } from '@/lib/workflows/executor/human-in-the-loop-manager'
@@ -227,26 +225,10 @@ export async function POST(
let jobId: string
try {
const useBullMQ = shouldUseBullMQ()
if (useBullMQ) {
jobId = await enqueueWorkspaceDispatch({
id: enqueueResult.resumeExecutionId,
workspaceId: workflow.workspaceId,
lane: 'runtime',
queueName: 'resume-execution',
bullmqJobName: 'resume-execution',
bullmqPayload: createBullMQJobData(resumePayload, {
workflowId,
userId,
}),
metadata: { workflowId, userId },
})
} else {
const jobQueue = await getJobQueue()
jobId = await jobQueue.enqueue('resume-execution', resumePayload, {
metadata: { workflowId, workspaceId: workflow.workspaceId, userId },
})
}
const jobQueue = await getJobQueue()
jobId = await jobQueue.enqueue('resume-execution', resumePayload, {
metadata: { workflowId, workspaceId: workflow.workspaceId, userId },
})
logger.info('Enqueued async resume execution', {
jobId,
resumeExecutionId: enqueueResult.resumeExecutionId,

View File

@@ -14,7 +14,6 @@ const {
mockDbReturning,
mockDbUpdate,
mockEnqueue,
mockEnqueueWorkspaceDispatch,
mockStartJob,
mockCompleteJob,
mockMarkJobFailed,
@@ -24,7 +23,6 @@ const {
const mockDbSet = vi.fn().mockReturnValue({ where: mockDbWhere })
const mockDbUpdate = vi.fn().mockReturnValue({ set: mockDbSet })
const mockEnqueue = vi.fn().mockResolvedValue('job-id-1')
const mockEnqueueWorkspaceDispatch = vi.fn().mockResolvedValue('job-id-1')
const mockStartJob = vi.fn().mockResolvedValue(undefined)
const mockCompleteJob = vi.fn().mockResolvedValue(undefined)
const mockMarkJobFailed = vi.fn().mockResolvedValue(undefined)
@@ -42,7 +40,6 @@ const {
mockDbReturning,
mockDbUpdate,
mockEnqueue,
mockEnqueueWorkspaceDispatch,
mockStartJob,
mockCompleteJob,
mockMarkJobFailed,
@@ -75,15 +72,6 @@ vi.mock('@/lib/core/async-jobs', () => ({
shouldExecuteInline: vi.fn().mockReturnValue(false),
}))
vi.mock('@/lib/core/bullmq', () => ({
isBullMQEnabled: vi.fn().mockReturnValue(true),
createBullMQJobData: vi.fn((payload: unknown) => ({ payload })),
}))
vi.mock('@/lib/core/workspace-dispatch', () => ({
enqueueWorkspaceDispatch: mockEnqueueWorkspaceDispatch,
}))
vi.mock('@/lib/workflows/utils', () => ({
getWorkflowById: vi.fn().mockResolvedValue({
id: 'workflow-1',
@@ -175,8 +163,6 @@ const SINGLE_JOB = [
cronExpression: '0 * * * *',
failedCount: 0,
lastQueuedAt: undefined,
sourceUserId: 'user-1',
sourceWorkspaceId: 'workspace-1',
sourceType: 'job',
},
]
@@ -250,56 +236,48 @@ describe('Scheduled Workflow Execution API Route', () => {
expect(data).toHaveProperty('executedCount', 2)
})
it('should queue mothership jobs to BullMQ when available', async () => {
it('should execute mothership jobs inline', async () => {
mockDbReturning.mockReturnValueOnce([]).mockReturnValueOnce(SINGLE_JOB)
const response = await GET(createMockRequest())
expect(response.status).toBe(200)
expect(mockEnqueueWorkspaceDispatch).toHaveBeenCalledWith(
expect(mockExecuteJobInline).toHaveBeenCalledWith(
expect.objectContaining({
workspaceId: 'workspace-1',
lane: 'runtime',
queueName: 'mothership-job-execution',
bullmqJobName: 'mothership-job-execution',
bullmqPayload: {
payload: {
scheduleId: 'job-1',
cronExpression: '0 * * * *',
failedCount: 0,
now: expect.any(String),
},
},
scheduleId: 'job-1',
cronExpression: '0 * * * *',
failedCount: 0,
now: expect.any(String),
})
)
expect(mockExecuteJobInline).not.toHaveBeenCalled()
})
it('should enqueue preassigned correlation metadata for schedules', async () => {
mockDbReturning.mockReturnValue(SINGLE_SCHEDULE)
it('should enqueue schedule with correlation metadata via job queue', async () => {
mockDbReturning.mockReturnValueOnce(SINGLE_SCHEDULE).mockReturnValueOnce([])
const response = await GET(createMockRequest())
expect(response.status).toBe(200)
expect(mockEnqueueWorkspaceDispatch).toHaveBeenCalledWith(
expect(mockEnqueue).toHaveBeenCalledWith(
'schedule-execution',
expect.objectContaining({
id: 'schedule-execution-1',
workspaceId: 'workspace-1',
lane: 'runtime',
queueName: 'schedule-execution',
bullmqJobName: 'schedule-execution',
metadata: {
scheduleId: 'schedule-1',
workflowId: 'workflow-1',
executionId: 'schedule-execution-1',
requestId: 'test-request-id',
}),
expect.objectContaining({
metadata: expect.objectContaining({
workflowId: 'workflow-1',
correlation: {
workspaceId: 'workspace-1',
correlation: expect.objectContaining({
executionId: 'schedule-execution-1',
requestId: 'test-request-id',
source: 'schedule',
workflowId: 'workflow-1',
scheduleId: 'schedule-1',
triggerType: 'schedule',
scheduledFor: '2025-01-01T00:00:00.000Z',
},
},
}),
}),
})
)
})

View File

@@ -4,11 +4,8 @@ import { and, eq, isNull, lt, lte, ne, not, or, sql } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { verifyCronAuth } from '@/lib/auth/internal'
import { getJobQueue, shouldExecuteInline } from '@/lib/core/async-jobs'
import { createBullMQJobData, isBullMQEnabled } from '@/lib/core/bullmq'
import { generateRequestId } from '@/lib/core/utils/request'
import { generateId } from '@/lib/core/utils/uuid'
import { enqueueWorkspaceDispatch } from '@/lib/core/workspace-dispatch'
import { getWorkflowById } from '@/lib/workflows/utils'
import {
executeJobInline,
executeScheduleJob,
@@ -76,8 +73,6 @@ export async function GET(request: NextRequest) {
cronExpression: workflowSchedule.cronExpression,
failedCount: workflowSchedule.failedCount,
lastQueuedAt: workflowSchedule.lastQueuedAt,
sourceWorkspaceId: workflowSchedule.sourceWorkspaceId,
sourceUserId: workflowSchedule.sourceUserId,
sourceType: workflowSchedule.sourceType,
})
@@ -88,6 +83,9 @@ export async function GET(request: NextRequest) {
const jobQueue = await getJobQueue()
const workflowUtils =
dueSchedules.length > 0 ? await import('@/lib/workflows/utils') : undefined
const schedulePromises = dueSchedules.map(async (schedule) => {
const queueTime = schedule.lastQueuedAt ?? queuedAt
const executionId = generateId()
@@ -117,42 +115,17 @@ export async function GET(request: NextRequest) {
try {
const resolvedWorkflow = schedule.workflowId
? await getWorkflowById(schedule.workflowId)
? await workflowUtils?.getWorkflowById(schedule.workflowId)
: null
const resolvedWorkspaceId = resolvedWorkflow?.workspaceId
let jobId: string
if (isBullMQEnabled()) {
if (!resolvedWorkspaceId) {
throw new Error(
`Missing workspace for scheduled workflow ${schedule.workflowId}; refusing to bypass workspace admission`
)
}
jobId = await enqueueWorkspaceDispatch({
id: executionId,
workspaceId: resolvedWorkspaceId,
lane: 'runtime',
queueName: 'schedule-execution',
bullmqJobName: 'schedule-execution',
bullmqPayload: createBullMQJobData(payload, {
workflowId: schedule.workflowId ?? undefined,
correlation,
}),
metadata: {
workflowId: schedule.workflowId ?? undefined,
correlation,
},
})
} else {
jobId = await jobQueue.enqueue('schedule-execution', payload, {
metadata: {
workflowId: schedule.workflowId ?? undefined,
workspaceId: resolvedWorkspaceId ?? undefined,
correlation,
},
})
}
const jobId = await jobQueue.enqueue('schedule-execution', payload, {
metadata: {
workflowId: schedule.workflowId ?? undefined,
workspaceId: resolvedWorkspaceId ?? undefined,
correlation,
},
})
logger.info(
`[${requestId}] Queued schedule execution task ${jobId} for workflow ${schedule.workflowId}`
)
@@ -204,7 +177,7 @@ export async function GET(request: NextRequest) {
}
})
// Mothership jobs use BullMQ when available, otherwise direct inline execution.
// Mothership jobs are executed inline directly.
const jobPromises = dueJobs.map(async (job) => {
const queueTime = job.lastQueuedAt ?? queuedAt
const payload = {
@@ -215,24 +188,7 @@ export async function GET(request: NextRequest) {
}
try {
if (isBullMQEnabled()) {
if (!job.sourceWorkspaceId || !job.sourceUserId) {
throw new Error(`Mothership job ${job.id} is missing workspace/user ownership`)
}
await enqueueWorkspaceDispatch({
workspaceId: job.sourceWorkspaceId!,
lane: 'runtime',
queueName: 'mothership-job-execution',
bullmqJobName: 'mothership-job-execution',
bullmqPayload: createBullMQJobData(payload),
metadata: {
userId: job.sourceUserId,
},
})
} else {
await executeJobInline(payload)
}
await executeJobInline(payload)
} catch (error) {
logger.error(`[${requestId}] Job execution failed for ${job.id}`, {
error: error instanceof Error ? error.message : String(error),

View File

@@ -3,7 +3,7 @@ import { templates } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { checkInternalApiKey } from '@/lib/copilot/utils'
import { checkInternalApiKey } from '@/lib/copilot/request/http'
import { generateRequestId } from '@/lib/core/utils/request'
import { sanitizeForCopilot } from '@/lib/workflows/sanitization/json-sanitizer'

View File

@@ -0,0 +1,144 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { validateUrlWithDNS } from '@/lib/core/security/input-validation.server'
import { generateRequestId } from '@/lib/core/utils/request'
import { FileInputSchema, type RawFileInput } from '@/lib/uploads/utils/file-schemas'
import { processFilesToUserFiles } from '@/lib/uploads/utils/file-utils'
import { downloadFileFromStorage } from '@/lib/uploads/utils/file-utils.server'
import { agiloftLogin, agiloftLogout, buildAttachFileUrl } from '@/tools/agiloft/utils'
export const dynamic = 'force-dynamic'
const logger = createLogger('AgiloftAttachAPI')
const AgiloftAttachSchema = z.object({
instanceUrl: z.string().min(1, 'Instance URL is required'),
knowledgeBase: z.string().min(1, 'Knowledge base is required'),
login: z.string().min(1, 'Login is required'),
password: z.string().min(1, 'Password is required'),
table: z.string().min(1, 'Table is required'),
recordId: z.string().min(1, 'Record ID is required'),
fieldName: z.string().min(1, 'Field name is required'),
file: FileInputSchema.optional().nullable(),
fileName: z.string().optional().nullable(),
})
export async function POST(request: NextRequest) {
const requestId = generateRequestId()
try {
const authResult = await checkInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success) {
logger.warn(`[${requestId}] Unauthorized Agiloft attach attempt: ${authResult.error}`)
return NextResponse.json(
{ success: false, error: authResult.error || 'Authentication required' },
{ status: 401 }
)
}
const body = await request.json()
const data = AgiloftAttachSchema.parse(body)
if (!data.file) {
return NextResponse.json({ success: false, error: 'File is required' }, { status: 400 })
}
const userFiles = processFilesToUserFiles([data.file as RawFileInput], requestId, logger)
if (userFiles.length === 0) {
return NextResponse.json({ success: false, error: 'Invalid file input' }, { status: 400 })
}
const userFile = userFiles[0]
logger.info(
`[${requestId}] Downloading file for Agiloft attach: ${userFile.name} (${userFile.size} bytes)`
)
const fileBuffer = await downloadFileFromStorage(userFile, requestId, logger)
const resolvedFileName = data.fileName || userFile.name || 'attachment'
const urlValidation = await validateUrlWithDNS(data.instanceUrl, 'instanceUrl')
if (!urlValidation.isValid) {
logger.warn(`[${requestId}] SSRF attempt blocked for Agiloft instance URL`, {
instanceUrl: data.instanceUrl,
})
return NextResponse.json(
{ success: false, error: urlValidation.error || 'Invalid instance URL' },
{ status: 400 }
)
}
const token = await agiloftLogin(data)
const base = data.instanceUrl.replace(/\/$/, '')
try {
const url = buildAttachFileUrl(base, data, resolvedFileName)
logger.info(`[${requestId}] Uploading file to Agiloft: ${resolvedFileName}`)
const agiloftResponse = await fetch(url, {
method: 'PUT',
headers: {
'Content-Type': userFile.type || 'application/octet-stream',
Authorization: `Bearer ${token}`,
},
body: new Uint8Array(fileBuffer),
})
if (!agiloftResponse.ok) {
const errorText = await agiloftResponse.text()
logger.error(
`[${requestId}] Agiloft attach error: ${agiloftResponse.status} - ${errorText}`
)
return NextResponse.json(
{ success: false, error: `Agiloft error: ${agiloftResponse.status} - ${errorText}` },
{ status: agiloftResponse.status }
)
}
let totalAttachments = 0
const responseText = await agiloftResponse.text()
try {
const responseData = JSON.parse(responseText)
const result = responseData.result ?? responseData
totalAttachments = typeof result === 'number' ? result : (result.count ?? result.total ?? 1)
} catch {
totalAttachments = Number(responseText) || 1
}
logger.info(
`[${requestId}] File attached successfully. Total attachments: ${totalAttachments}`
)
return NextResponse.json({
success: true,
output: {
recordId: data.recordId.trim(),
fieldName: data.fieldName.trim(),
fileName: resolvedFileName,
totalAttachments,
},
})
} finally {
await agiloftLogout(data.instanceUrl, data.knowledgeBase, token)
}
} catch (error) {
if (error instanceof z.ZodError) {
logger.warn(`[${requestId}] Invalid request data`, { errors: error.errors })
return NextResponse.json(
{ success: false, error: 'Invalid request data', details: error.errors },
{ status: 400 }
)
}
logger.error(`[${requestId}] Error attaching file to Agiloft:`, error)
return NextResponse.json(
{ success: false, error: error instanceof Error ? error.message : 'Internal server error' },
{ status: 500 }
)
}
}
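A minimal client-side sketch of the request body the new Agiloft attach route accepts, derived from `AgiloftAttachSchema` above. The `file` object's shape is an assumption — `FileInputSchema` is not shown in this diff — and all values here are placeholders, not real credentials or hosts.

```typescript
// Hypothetical payload for POST /api/tools/agiloft/attach (field names taken
// from AgiloftAttachSchema above; the `file` shape is assumed, since
// FileInputSchema is defined elsewhere and not shown in this commit).
const body = {
  instanceUrl: 'https://example.agiloft.com', // assumption: any reachable Agiloft host
  knowledgeBase: 'Demo',
  login: 'api-user',
  password: 'secret',
  table: 'contract',
  recordId: '42',
  fieldName: 'attached_files',
  // Assumed file shape — adjust to whatever FileInputSchema actually requires.
  file: { name: 'contract.pdf', url: '/files/contract.pdf', size: 1024, type: 'application/pdf' },
  fileName: 'contract.pdf',
}

// Every string field the schema marks .min(1) must be a non-empty string.
const required = [
  'instanceUrl',
  'knowledgeBase',
  'login',
  'password',
  'table',
  'recordId',
  'fieldName',
] as const
const missing = required.filter((k) => !(body as Record<string, unknown>)[k])
console.log(missing.length === 0 ? 'valid' : `missing: ${missing.join(', ')}`)
```

On the server side, a missing or empty field would surface as the route's 400 `ZodError` response rather than reaching the Agiloft login call.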

View File

@@ -3,6 +3,7 @@ import { type NextRequest, NextResponse } from 'next/server'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { validateAlphanumericId, validateJiraCloudId } from '@/lib/core/security/input-validation'
import { getConfluenceCloudId } from '@/tools/confluence/utils'
import { parseAtlassianErrorMessage } from '@/tools/jira/utils'
const logger = createLogger('ConfluenceAttachmentAPI')
@@ -53,15 +54,16 @@ export async function DELETE(request: NextRequest) {
})
if (!response.ok) {
const errorData = await response.json().catch(() => null)
const errorText = await response.text()
logger.error('Confluence API error response:', {
status: response.status,
statusText: response.statusText,
error: JSON.stringify(errorData, null, 2),
error: errorText,
})
const errorMessage =
errorData?.message || `Failed to delete Confluence attachment (${response.status})`
return NextResponse.json({ error: errorMessage }, { status: response.status })
return NextResponse.json(
{ error: parseAtlassianErrorMessage(response.status, response.statusText, errorText) },
{ status: response.status }
)
}
return NextResponse.json({ attachmentId, deleted: true })

View File

@@ -3,6 +3,7 @@ import { type NextRequest, NextResponse } from 'next/server'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { validateAlphanumericId, validateJiraCloudId } from '@/lib/core/security/input-validation'
import { getConfluenceCloudId } from '@/tools/confluence/utils'
import { parseAtlassianErrorMessage } from '@/tools/jira/utils'
const logger = createLogger('ConfluenceAttachmentsAPI')
@@ -64,15 +65,16 @@ export async function GET(request: NextRequest) {
})
if (!response.ok) {
const errorData = await response.json().catch(() => null)
const errorText = await response.text()
logger.error('Confluence API error response:', {
status: response.status,
statusText: response.statusText,
error: JSON.stringify(errorData, null, 2),
error: errorText,
})
const errorMessage =
errorData?.message || `Failed to list Confluence attachments (${response.status})`
return NextResponse.json({ error: errorMessage }, { status: response.status })
return NextResponse.json(
{ error: parseAtlassianErrorMessage(response.status, response.statusText, errorText) },
{ status: response.status }
)
}
const data = await response.json()

View File

@@ -4,6 +4,7 @@ import { z } from 'zod'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { validateAlphanumericId, validateJiraCloudId } from '@/lib/core/security/input-validation'
import { getConfluenceCloudId } from '@/tools/confluence/utils'
import { parseAtlassianErrorMessage } from '@/tools/jira/utils'
const logger = createLogger('ConfluenceBlogPostsAPI')
@@ -98,14 +99,16 @@ export async function GET(request: NextRequest) {
})
if (!response.ok) {
const errorData = await response.json().catch(() => null)
const errorText = await response.text()
logger.error('Confluence API error response:', {
status: response.status,
statusText: response.statusText,
error: JSON.stringify(errorData, null, 2),
error: errorText,
})
const errorMessage = errorData?.message || `Failed to list blog posts (${response.status})`
return NextResponse.json({ error: errorMessage }, { status: response.status })
return NextResponse.json(
{ error: parseAtlassianErrorMessage(response.status, response.statusText, errorText) },
{ status: response.status }
)
}
const data = await response.json()
@@ -197,14 +200,16 @@ export async function POST(request: NextRequest) {
})
if (!response.ok) {
const errorData = await response.json().catch(() => null)
const errorText = await response.text()
logger.error('Confluence API error response:', {
status: response.status,
statusText: response.statusText,
error: JSON.stringify(errorData, null, 2),
error: errorText,
})
const errorMessage = errorData?.message || `Failed to create blog post (${response.status})`
return NextResponse.json({ error: errorMessage }, { status: response.status })
return NextResponse.json(
{ error: parseAtlassianErrorMessage(response.status, response.statusText, errorText) },
{ status: response.status }
)
}
const data = await response.json()
@@ -253,14 +258,16 @@ export async function POST(request: NextRequest) {
})
if (!response.ok) {
const errorData = await response.json().catch(() => null)
const errorText = await response.text()
logger.error('Confluence API error response:', {
status: response.status,
statusText: response.statusText,
error: JSON.stringify(errorData, null, 2),
error: errorText,
})
const errorMessage = errorData?.message || `Failed to get blog post (${response.status})`
return NextResponse.json({ error: errorMessage }, { status: response.status })
return NextResponse.json(
{ error: parseAtlassianErrorMessage(response.status, response.statusText, errorText) },
{ status: response.status }
)
}
const data = await response.json()
@@ -326,7 +333,10 @@ export async function PUT(request: NextRequest) {
})
if (!currentResponse.ok) {
throw new Error(`Failed to fetch current blog post: ${currentResponse.status}`)
const errorText = await currentResponse.text()
throw new Error(
parseAtlassianErrorMessage(currentResponse.status, currentResponse.statusText, errorText)
)
}
const currentPost = await currentResponse.json()
@@ -362,14 +372,16 @@ export async function PUT(request: NextRequest) {
})
if (!response.ok) {
const errorData = await response.json().catch(() => null)
const errorText = await response.text()
logger.error('Confluence API error response:', {
status: response.status,
statusText: response.statusText,
error: JSON.stringify(errorData, null, 2),
error: errorText,
})
const errorMessage = errorData?.message || `Failed to update blog post (${response.status})`
return NextResponse.json({ error: errorMessage }, { status: response.status })
return NextResponse.json(
{ error: parseAtlassianErrorMessage(response.status, response.statusText, errorText) },
{ status: response.status }
)
}
const data = await response.json()
@@ -426,14 +438,16 @@ export async function DELETE(request: NextRequest) {
})
if (!response.ok) {
const errorData = await response.json().catch(() => null)
const errorText = await response.text()
logger.error('Confluence API error response:', {
status: response.status,
statusText: response.statusText,
error: JSON.stringify(errorData, null, 2),
error: errorText,
})
const errorMessage = errorData?.message || `Failed to delete blog post (${response.status})`
return NextResponse.json({ error: errorMessage }, { status: response.status })
return NextResponse.json(
{ error: parseAtlassianErrorMessage(response.status, response.statusText, errorText) },
{ status: response.status }
)
}
return NextResponse.json({ blogPostId, deleted: true })

View File

@@ -4,6 +4,7 @@ import { z } from 'zod'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { validateAlphanumericId, validateJiraCloudId } from '@/lib/core/security/input-validation'
import { getConfluenceCloudId } from '@/tools/confluence/utils'
import { parseAtlassianErrorMessage } from '@/tools/jira/utils'
const logger = createLogger('ConfluenceCommentAPI')
@@ -81,7 +82,10 @@ export async function PUT(request: NextRequest) {
})
if (!getResponse.ok) {
throw new Error(`Failed to fetch current comment: ${getResponse.status}`)
const errorText = await getResponse.text()
throw new Error(
parseAtlassianErrorMessage(getResponse.status, getResponse.statusText, errorText)
)
}
const currentComment = await getResponse.json()
@@ -111,15 +115,16 @@ export async function PUT(request: NextRequest) {
})
if (!response.ok) {
const errorData = await response.json().catch(() => null)
const errorText = await response.text()
logger.error('Confluence API error response:', {
status: response.status,
statusText: response.statusText,
error: JSON.stringify(errorData, null, 2),
error: errorText,
})
const errorMessage =
errorData?.message || `Failed to update Confluence comment (${response.status})`
return NextResponse.json({ error: errorMessage }, { status: response.status })
return NextResponse.json(
{ error: parseAtlassianErrorMessage(response.status, response.statusText, errorText) },
{ status: response.status }
)
}
const data = await response.json()
@@ -169,15 +174,16 @@ export async function DELETE(request: NextRequest) {
})
if (!response.ok) {
const errorData = await response.json().catch(() => null)
const errorText = await response.text()
logger.error('Confluence API error response:', {
status: response.status,
statusText: response.statusText,
error: JSON.stringify(errorData, null, 2),
error: errorText,
})
const errorMessage =
errorData?.message || `Failed to delete Confluence comment (${response.status})`
return NextResponse.json({ error: errorMessage }, { status: response.status })
return NextResponse.json(
{ error: parseAtlassianErrorMessage(response.status, response.statusText, errorText) },
{ status: response.status }
)
}
return NextResponse.json({ commentId, deleted: true })

View File

@@ -3,6 +3,7 @@ import { type NextRequest, NextResponse } from 'next/server'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { validateAlphanumericId, validateJiraCloudId } from '@/lib/core/security/input-validation'
import { getConfluenceCloudId } from '@/tools/confluence/utils'
import { parseAtlassianErrorMessage } from '@/tools/jira/utils'
const logger = createLogger('ConfluenceCommentsAPI')
@@ -69,15 +70,16 @@ export async function POST(request: NextRequest) {
})
if (!response.ok) {
const errorData = await response.json().catch(() => null)
const errorText = await response.text()
logger.error('Confluence API error response:', {
status: response.status,
statusText: response.statusText,
error: JSON.stringify(errorData, null, 2),
error: errorText,
})
const errorMessage =
errorData?.message || `Failed to create Confluence comment (${response.status})`
return NextResponse.json({ error: errorMessage }, { status: response.status })
return NextResponse.json(
{ error: parseAtlassianErrorMessage(response.status, response.statusText, errorText) },
{ status: response.status }
)
}
const data = await response.json()
@@ -149,15 +151,16 @@ export async function GET(request: NextRequest) {
})
if (!response.ok) {
const errorData = await response.json().catch(() => null)
const errorText = await response.text()
logger.error('Confluence API error response:', {
status: response.status,
statusText: response.statusText,
error: JSON.stringify(errorData, null, 2),
error: errorText,
})
const errorMessage =
errorData?.message || `Failed to list Confluence comments (${response.status})`
return NextResponse.json({ error: errorMessage }, { status: response.status })
return NextResponse.json(
{ error: parseAtlassianErrorMessage(response.status, response.statusText, errorText) },
{ status: response.status }
)
}
const data = await response.json()

View File

@@ -3,6 +3,7 @@ import { type NextRequest, NextResponse } from 'next/server'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { validateAlphanumericId, validateJiraCloudId } from '@/lib/core/security/input-validation'
import { getConfluenceCloudId } from '@/tools/confluence/utils'
import { parseAtlassianErrorMessage } from '@/tools/jira/utils'
const logger = createLogger('ConfluenceCreatePageAPI')
@@ -101,30 +102,16 @@ export async function POST(request: NextRequest) {
})
if (!response.ok) {
const errorData = await response.json().catch(() => null)
const errorText = await response.text()
logger.error('Confluence API error response:', {
status: response.status,
statusText: response.statusText,
error: JSON.stringify(errorData, null, 2),
error: errorText,
})
let errorMessage = `Failed to create Confluence page (${response.status})`
if (errorData?.message) {
errorMessage = errorData.message
} else if (errorData?.errors && Array.isArray(errorData.errors)) {
const firstError = errorData.errors[0]
if (firstError?.title) {
if (firstError.title.includes("'spaceId'") && firstError.title.includes('Long')) {
errorMessage =
'Invalid Space ID. Use the list spaces operation to find valid space IDs.'
} else {
errorMessage = firstError.title
}
} else {
errorMessage = JSON.stringify(errorData.errors)
}
let errorMessage = parseAtlassianErrorMessage(response.status, response.statusText, errorText)
if (errorMessage.includes("'spaceId'") && errorMessage.includes('Long')) {
errorMessage = 'Invalid Space ID. Use the list spaces operation to find valid space IDs.'
}
return NextResponse.json({ error: errorMessage }, { status: response.status })
}
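The Confluence routes above all delegate error formatting to `parseAtlassianErrorMessage` from `@/tools/jira/utils`. Its implementation is not part of this diff; the sketch below is a guess at its likely behavior based on how the call sites use it (status, status text, raw body text in, single message string out) and on common Atlassian error body shapes.

```typescript
// Hypothetical sketch of parseAtlassianErrorMessage — the real implementation
// lives in '@/tools/jira/utils' and is not shown in this commit. It prefers a
// structured message from the Atlassian error body and falls back to a
// generic status-based message when the body is not parseable JSON.
function parseAtlassianErrorMessage(status: number, statusText: string, bodyText: string): string {
  try {
    const parsed = JSON.parse(bodyText)
    // Common Atlassian error body shapes (assumed): { message }, { errors: [{ title }] },
    // and Jira-style { errorMessages: [...] }.
    if (typeof parsed?.message === 'string') return parsed.message
    if (Array.isArray(parsed?.errors) && typeof parsed.errors[0]?.title === 'string') {
      return parsed.errors[0].title
    }
    if (Array.isArray(parsed?.errorMessages) && typeof parsed.errorMessages[0] === 'string') {
      return parsed.errorMessages[0]
    }
  } catch {
    // Body was not JSON (e.g. an HTML error page); fall through.
  }
  return `Atlassian API error ${status}${statusText ? ` (${statusText})` : ''}`
}

console.log(parseAtlassianErrorMessage(400, 'Bad Request', '{"message":"Invalid spaceId"}'))
```

Centralizing this also explains the create-page route's simplification above: the special-case `spaceId` hint can now be layered on top of one canonical message instead of re-parsing the error body inline.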

Some files were not shown because too many files have changed in this diff.