Compare commits

...

20 Commits

Author SHA1 Message Date
Vikhyath Mondreti
6cd078b0fe v0.5.18: ui fixes, nextjs16, workspace notifications, admin APIs, loading improvements, new slack tools 2025-12-05 14:03:09 -08:00
Vikhyath Mondreti
fb4c9827f8 fix(custom-bot-slack): dependsOn incorrectly set for bot_token (#2214)
* fix(custom-bot-slack): dependsOn incorrectly set for bot_token

* fix other references to be compatible

* fix dependsOn for things depending on authMethod
2025-12-05 13:54:52 -08:00
Siddharth Ganesan
4fd5f0051f fix(copilot): validation (#2215)
* Fix validation error

* Fix lint
2025-12-05 13:46:50 -08:00
Waleed
002713ec4b feat(i18n): update translations (#2208)
Co-authored-by: icecrasher321 <icecrasher321@users.noreply.github.com>
2025-12-05 13:29:44 -08:00
Waleed
5d6c1f7b88 feat(tools): added more slack tools (#2212) 2025-12-05 13:22:35 -08:00
Siddharth Ganesan
7752beac01 fix(import): fix array errors on import/export (#2211)
* Fix import/export

* Remove copilot gdrive tools

* Null

* Fix lint

* Add copilot validation

* Fix validation
2025-12-05 13:07:01 -08:00
Emir Karabeg
7101dc58d4 improvement: loading, optimistic actions (#2193)
* improvement: loading, optimistic operations

* improvement: folders update

* fix usage indicator rounding + new tsconfig

* remove redundant checks

* fix hmr case for missing workflow loads

* add abstraction for zustand/react hybrid optimism

* remove comments

---------

Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
2025-12-05 13:01:12 -08:00
Siddharth Ganesan
58251e28e6 feat(copilot): superagent (#2201)
* Superagent poc

* Checkpoint broken

* tool call rag

* Fix

* Fixes

* Improvements

* Creds stuff

* Fix

* Fix tools

* Fix stream

* Prompt

* Update sheets descriptions

* Better

* Copilot components

* Delete stuff

* Remove db migration

* Fix migrations

* Fix things

* Copilot side superagent

* Build workflow from chat

* Combine superagent into copilot

* Render tools

* Function execution

* Max mode indicators

* Tool call confirmations

* Credential settings

* Remove betas

* Bump version

* Dropdown options in block metadata

* Copilot kb tools

* Fix lint

* Credentials modal

* Fix lint

* Cleanup

* Env var resolution in superagent tools

* Get id for workflow vars

* Fix insert into subflow

* Fix executor for while and do while loops

* Fix metadata for parallel

* Remove db migration

* Rebase

* Add migrations back

* Clean up code

* Fix executor logic issue

* Cleanup

* Diagram tool

* Fix tool names

* Comment out g3p

* Remove popup option

* Hide o3

* Remove db migration

* Fix merge conflicts

* Fix lint

* Fix tests

* Remove webhook change

* Remove cb change

* Fix lint

* Fix

* Fix lint

* Fix build

* comment out gemini

* Add gemini back

* Remove bad test

* Fix

* Fix test

* Fix

* Nuke bad test

* Fix lint

---------

Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: Waleed <walif6@gmail.com>
Co-authored-by: waleedlatif1 <waleedlatif1@users.noreply.github.com>
2025-12-04 21:26:18 -08:00
Vikhyath Mondreti
8ef9a45125 fix(env-vars): refactor for workspace/personal env vars to work with server side execution correctly (#2197)
* fix(env-var-resolution): new executor env var resolution changes

* add session user id

* cleanup code

* add doc update

* fix build

* fix client session pass through

* add type change

* fix env var with hitl

* fix types
2025-12-04 21:08:20 -08:00
Waleed
ca818a6503 feat(admin): added admin APIs for admin management (#2206) 2025-12-04 20:52:32 -08:00
Waleed
1b903f2db5 fix(images): updated helm charts with branding URL guidance, removed additional nextjs image optimizations (#2205) 2025-12-04 19:39:51 -08:00
Waleed
414a54c358 feat(i18n): update translations (#2204)
Co-authored-by: icecrasher321 <icecrasher321@users.noreply.github.com>
2025-12-04 19:03:03 -08:00
Vikhyath Mondreti
3b9f0f9ce2 feat(error-notifications): workspace-level configuration of slack, email, webhook notifications for workflow execution (#2157)
* feat(notification): slack, email, webhook notifications from logs

* retain search params for filters to link in notification

* add alerting rules

* update selector

* fix lint

* add limits on num of emails and notification triggers per workspace

* address greptile comments

* add search to combobox

* move notifications to react query

* fix lint

* fix email formatting

* add more alert types

* fix imports

* fix test route

* use emcn component for modal

* refactor: consolidate notification config fields into jsonb objects

* regen migration

* fix delete notif modal ui

* make them multiselect dropdowns

* update tag styling

* combobox font size with multiselect tags
2025-12-04 18:29:22 -08:00
Waleed
dcbdcb43aa chore(deps): upgrade to nextjs 16 (#2203)
* chore(deps): upgrade to nextjs 16

* upgraded fumadocs

* ensure vercel uses bun

* fix build

* fix build

* remove redundant vercel.json
2025-12-04 17:55:37 -08:00
Emir Karabeg
1642ed754b improvement: modal UI (#2202)
* fix: trigger-save delete modal

* improvement: old modal styling
2025-12-04 17:10:59 -08:00
Vikhyath Mondreti
d22b5783be fix(enterprise-plan): seats should be taken from metadata (#2200)
* fix(enterprise): seats need to be picked up from metadata not column

* fix env var access

* fix user avatar
2025-12-04 16:22:29 -08:00
Waleed
8e7d8c93e3 fix(profile-pics): remove sharp dependency for serving profile pics in settings (#2199) 2025-12-04 15:46:10 -08:00
Waleed
ca3eb5b5a5 fix(subscription): fixed text clipping on subscription panel (#2198) 2025-12-04 15:12:50 -08:00
Waleed
dc5a2b1ad1 fix(envvar): fix envvar dropdown positioning, remove dead code (#2196) 2025-12-04 14:35:25 -08:00
Waleed
3f84ed9b72 fix(settings): fix long description on wordpress integration (#2195) 2025-12-04 14:10:50 -08:00
228 changed files with 30186 additions and 7612 deletions

View File

@@ -1,4 +1,4 @@
import { findNeighbour } from 'fumadocs-core/server'
import { findNeighbour } from 'fumadocs-core/page-tree'
import defaultMdxComponents from 'fumadocs-ui/mdx'
import { DocsBody, DocsDescription, DocsPage, DocsTitle } from 'fumadocs-ui/page'
import { ChevronLeft, ChevronRight } from 'lucide-react'
@@ -186,9 +186,6 @@ export default async function Page(props: { params: Promise<{ slug?: string[]; l
footer: <TOCFooter />,
single: false,
}}
article={{
className: 'scroll-smooth max-sm:pb-16',
}}
tableOfContentPopover={{
style: 'clerk',
enabled: true,

View File

@@ -1,7 +1,7 @@
'use client'
import { type ReactNode, useEffect, useState } from 'react'
import type { PageTree } from 'fumadocs-core/server'
import type { Folder, Item, Separator } from 'fumadocs-core/page-tree'
import { ChevronRight } from 'lucide-react'
import Link from 'next/link'
import { usePathname } from 'next/navigation'
@@ -11,7 +11,7 @@ function isActive(url: string, pathname: string, nested = true): boolean {
return url === pathname || (nested && pathname.startsWith(`${url}/`))
}
export function SidebarItem({ item }: { item: PageTree.Item }) {
export function SidebarItem({ item }: { item: Item }) {
const pathname = usePathname()
const active = isActive(item.url, pathname, false)
@@ -33,15 +33,7 @@ export function SidebarItem({ item }: { item: PageTree.Item }) {
)
}
export function SidebarFolder({
item,
level,
children,
}: {
item: PageTree.Folder
level: number
children: ReactNode
}) {
export function SidebarFolder({ item, children }: { item: Folder; children: ReactNode }) {
const pathname = usePathname()
const hasActiveChild = checkHasActiveChild(item, pathname)
const [open, setOpen] = useState(hasActiveChild)
@@ -112,7 +104,7 @@ export function SidebarFolder({
)
}
export function SidebarSeparator({ item }: { item: PageTree.Separator }) {
export function SidebarSeparator({ item }: { item: Separator }) {
return (
<p className='mt-4 mb-1.5 px-2.5 font-semibold text-[10px] text-gray-500/80 uppercase tracking-wide dark:text-gray-500'>
{item.name}
@@ -120,7 +112,7 @@ export function SidebarSeparator({ item }: { item: PageTree.Separator }) {
)
}
function checkHasActiveChild(node: PageTree.Folder, pathname: string): boolean {
function checkHasActiveChild(node: Folder, pathname: string): boolean {
if (node.index && isActive(node.index.url, pathname)) {
return true
}

View File

@@ -696,8 +696,8 @@ export function GrafanaIcon(props: SVGProps<SVGSVGElement>) {
y2='5.356'
gradientUnits='userSpaceOnUse'
>
<stop stop-color='#FFF200' />
<stop offset='1' stop-color='#F15A29' />
<stop stopColor='#FFF200' />
<stop offset='1' stopColor='#F15A29' />
</linearGradient>
</defs>
</svg>
@@ -2757,111 +2757,19 @@ export function MicrosoftSharepointIcon(props: SVGProps<SVGSVGElement>) {
export function MicrosoftPlannerIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg
{...props}
xmlnsXlink='http://www.w3.org/1999/xlink'
viewBox='0 0 24 24'
fill='none'
xmlns='http://www.w3.org/2000/svg'
>
<g clipPath='url(#msplanner_clip0)'>
<path
d='M8.25809 15.7412C7.22488 16.7744 5.54971 16.7744 4.5165 15.7412L0.774909 11.9996C-0.258303 10.9664 -0.258303 9.29129 0.774908 8.25809L4.5165 4.51655C5.54971 3.48335 7.22488 3.48335 8.25809 4.51655L11.9997 8.2581C13.0329 9.29129 13.0329 10.9664 11.9997 11.9996L8.25809 15.7412Z'
fill='url(#msplanner_paint0_linear)'
/>
<path
d='M8.25809 15.7412C7.22488 16.7744 5.54971 16.7744 4.5165 15.7412L0.774909 11.9996C-0.258303 10.9664 -0.258303 9.29129 0.774908 8.25809L4.5165 4.51655C5.54971 3.48335 7.22488 3.48335 8.25809 4.51655L11.9997 8.2581C13.0329 9.29129 13.0329 10.9664 11.9997 11.9996L8.25809 15.7412Z'
fill='url(#msplanner_paint1_linear)'
/>
<path
d='M0.774857 11.9999C1.80809 13.0331 3.48331 13.0331 4.51655 11.9999L15.7417 0.774926C16.7749 -0.258304 18.4501 -0.258309 19.4834 0.774914L23.225 4.51655C24.2583 5.54977 24.2583 7.22496 23.225 8.25819L11.9999 19.4832C10.9667 20.5164 9.29146 20.5164 8.25822 19.4832L0.774857 11.9999Z'
fill='url(#msplanner_paint2_linear)'
/>
<path
d='M0.774857 11.9999C1.80809 13.0331 3.48331 13.0331 4.51655 11.9999L15.7417 0.774926C16.7749 -0.258304 18.4501 -0.258309 19.4834 0.774914L23.225 4.51655C24.2583 5.54977 24.2583 7.22496 23.225 8.25819L11.9999 19.4832C10.9667 20.5164 9.29146 20.5164 8.25822 19.4832L0.774857 11.9999Z'
fill='url(#msplanner_paint3_linear)'
/>
<path
d='M4.51642 15.7413C5.54966 16.7746 7.22487 16.7746 8.25812 15.7413L15.7415 8.25803C16.7748 7.2248 18.45 7.2248 19.4832 8.25803L23.2249 11.9997C24.2582 13.0329 24.2582 14.7081 23.2249 15.7413L15.7415 23.2246C14.7083 24.2579 13.033 24.2579 11.9998 23.2246L4.51642 15.7413Z'
fill='url(#msplanner_paint4_linear)'
/>
<path
d='M4.51642 15.7413C5.54966 16.7746 7.22487 16.7746 8.25812 15.7413L15.7415 8.25803C16.7748 7.2248 18.45 7.2248 19.4832 8.25803L23.2249 11.9997C24.2582 13.0329 24.2582 14.7081 23.2249 15.7413L15.7415 23.2246C14.7083 24.2579 13.033 24.2579 11.9998 23.2246L4.51642 15.7413Z'
fill='url(#msplanner_paint5_linear)'
/>
</g>
<defs>
<linearGradient
id='msplanner_paint0_linear'
x1='6.38724'
y1='3.74167'
x2='2.15779'
y2='12.777'
gradientUnits='userSpaceOnUse'
>
<stop stopColor='#8752E0' />
<stop offset='1' stopColor='#541278' />
</linearGradient>
<linearGradient
id='msplanner_paint1_linear'
x1='8.38032'
y1='11.0696'
x2='4.94062'
y2='7.69244'
gradientUnits='userSpaceOnUse'
>
<stop offset='0.12172' stopColor='#3D0D59' />
<stop offset='1' stopColor='#7034B0' stopOpacity='0' />
</linearGradient>
<linearGradient
id='msplanner_paint2_linear'
x1='18.3701'
y1='-3.33385e-05'
x2='9.85717'
y2='20.4192'
gradientUnits='userSpaceOnUse'
>
<stop stopColor='#DB45E0' />
<stop offset='1' stopColor='#6C0F71' />
</linearGradient>
<linearGradient
id='msplanner_paint3_linear'
x1='18.3701'
y1='-3.33385e-05'
x2='9.85717'
y2='20.4192'
gradientUnits='userSpaceOnUse'
>
<stop stopColor='#DB45E0' />
<stop offset='0.677403' stopColor='#A829AE' />
<stop offset='1' stopColor='#8F28B3' />
</linearGradient>
<linearGradient
id='msplanner_paint4_linear'
x1='18.0002'
y1='7.49958'
x2='14.0004'
y2='23.9988'
gradientUnits='userSpaceOnUse'
>
<stop stopColor='#3DCBFF' />
<stop offset='1' stopColor='#00479E' />
</linearGradient>
<linearGradient
id='msplanner_paint5_linear'
x1='18.2164'
y1='7.92626'
x2='10.5237'
y2='22.9363'
gradientUnits='userSpaceOnUse'
>
<stop stopColor='#3DCBFF' />
<stop offset='1' stopColor='#4A40D4' />
</linearGradient>
<clipPath id='msplanner_clip0'>
<rect width='24' height='24' fill='white' />
</clipPath>
</defs>
<svg {...props} viewBox='0 0 24 24' fill='none' xmlns='http://www.w3.org/2000/svg'>
<path
d='M8.25809 15.7412C7.22488 16.7744 5.54971 16.7744 4.5165 15.7412L0.774909 11.9996C-0.258303 10.9664 -0.258303 9.29129 0.774908 8.25809L4.5165 4.51655C5.54971 3.48335 7.22488 3.48335 8.25809 4.51655L11.9997 8.2581C13.0329 9.29129 13.0329 10.9664 11.9997 11.9996L8.25809 15.7412Z'
fill='#185ABD'
/>
<path
d='M0.774857 11.9999C1.80809 13.0331 3.48331 13.0331 4.51655 11.9999L15.7417 0.774926C16.7749 -0.258304 18.4501 -0.258309 19.4834 0.774914L23.225 4.51655C24.2583 5.54977 24.2583 7.22496 23.225 8.25819L11.9999 19.4832C10.9667 20.5164 9.29146 20.5164 8.25822 19.4832L0.774857 11.9999Z'
fill='#41A5EE'
/>
<path
d='M4.51642 15.7413C5.54966 16.7746 7.22487 16.7746 8.25812 15.7413L15.7415 8.25803C16.7748 7.2248 18.45 7.2248 19.4832 8.25803L23.2249 11.9997C24.2582 13.0329 24.2582 14.7081 23.2249 15.7413L15.7415 23.2246C14.7083 24.2579 13.033 24.2579 11.9998 23.2246L4.51642 15.7413Z'
fill='#2B7CD3'
/>
</svg>
)
}
@@ -3344,29 +3252,10 @@ export function TrelloIcon(props: SVGProps<SVGSVGElement>) {
export function AsanaIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg
{...props}
xmlns='http://www.w3.org/2000/svg'
width='24'
height='24'
viewBox='781.361 0 944.893 873.377'
>
<radialGradient
id='asana_radial_gradient'
cx='943.992'
cy='1221.416'
r='.663'
gradientTransform='matrix(944.8934 0 0 -873.3772 -890717.875 1067234.75)'
gradientUnits='userSpaceOnUse'
>
<stop offset='0' stopColor='#ffb900' />
<stop offset='.6' stopColor='#f95d8f' />
<stop offset='.999' stopColor='#f95353' />
</radialGradient>
<path
fill='url(#asana_radial_gradient)'
d='M1520.766 462.371c-113.508 0-205.508 92-205.508 205.488 0 113.499 92 205.518 205.508 205.518 113.489 0 205.488-92.019 205.488-205.518 0-113.488-91.999-205.488-205.488-205.488zm-533.907.01c-113.489.01-205.498 91.99-205.498 205.488 0 113.489 92.009 205.498 205.498 205.498 113.498 0 205.508-92.009 205.508-205.498 0-113.499-92.01-205.488-205.518-205.488h.01zm472.447-256.883c0 113.489-91.999 205.518-205.488 205.518-113.508 0-205.508-92.029-205.508-205.518S1140.31 0 1253.817 0c113.489 0 205.479 92.009 205.479 205.498h.01z'
/>
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 24 24' fill='none'>
<circle cx='18' cy='16' r='4' fill='#F06A6A' />
<circle cx='6' cy='16' r='4' fill='#F06A6A' />
<circle cx='12' cy='6.5' r='4' fill='#F06A6A' />
</svg>
)
}
@@ -3975,6 +3864,33 @@ export function DynamoDBIcon(props: SVGProps<SVGSVGElement>) {
)
}
export function McpIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg
{...props}
width='16'
height='16'
viewBox='0 0 16 16'
fill='none'
xmlns='http://www.w3.org/2000/svg'
>
<g clipPath='url(#mcp-clip)'>
<path
fillRule='evenodd'
clipRule='evenodd'
d='M14.5572 7.87503L14.5022 7.92903L8.69824 13.62C8.68102 13.6368 8.66728 13.6569 8.65781 13.679C8.64834 13.7011 8.64332 13.7248 8.64304 13.7489C8.64276 13.773 8.64723 13.7968 8.65619 13.8192C8.66514 13.8415 8.67842 13.8618 8.69524 13.879L8.69824 13.882L9.89024 15.052C9.99431 15.1536 10.0539 15.2923 10.056 15.4378C10.058 15.5832 10.0024 15.7235 9.90124 15.828L9.89124 15.838C9.78385 15.9428 9.63977 16.0014 9.48974 16.0014C9.33972 16.0014 9.19564 15.9428 9.08824 15.838L7.89624 14.67C7.77347 14.5507 7.67588 14.408 7.60923 14.2503C7.54259 14.0927 7.50825 13.9232 7.50825 13.752C7.50825 13.5808 7.54259 13.4114 7.60923 13.2537C7.67588 13.096 7.77347 12.9533 7.89624 12.834L13.7012 7.14203C14.0139 6.83733 14.1928 6.42097 14.1986 5.98444C14.2044 5.54792 14.0367 5.12694 13.7322 4.81403L13.7012 4.78203L13.6672 4.75003C13.3455 4.43669 12.9143 4.26118 12.4651 4.2608C12.016 4.26043 11.5845 4.43522 11.2622 4.74803L6.48124 9.43803H6.47924L6.41424 9.50303C6.30685 9.60778 6.16277 9.66642 6.01274 9.66642C5.86272 9.66642 5.71864 9.60778 5.61124 9.50303C5.50731 9.40128 5.44791 9.26252 5.44604 9.11709C5.44417 8.97166 5.49997 8.83141 5.60124 8.72703L5.61124 8.71703L10.4602 3.96003C11.1102 3.32403 11.1232 2.28203 10.4872 1.63103L10.4582 1.60103C10.1362 1.28736 9.70433 1.11183 9.25474 1.11183C8.80516 1.11183 8.37333 1.28736 8.05124 1.60103L1.63524 7.89603C1.5279 8.00048 1.38403 8.05893 1.23424 8.05893C1.08446 8.05893 0.940591 8.00048 0.833243 7.89603C0.729179 7.79442 0.669597 7.65573 0.667536 7.5103C0.665474 7.36487 0.7211 7.22454 0.822243 7.12003L0.833243 7.11003L7.25024 0.814026C7.78698 0.291743 8.50633 -0.000488281 9.25524 -0.000488281C10.0042 -0.000488281 10.7235 0.291743 11.2602 0.814026C11.8902 1.42703 12.1892 2.30403 12.0632 3.17403C12.9432 3.04903 13.8332 3.34003 14.4692 3.96103L14.5032 3.99403C14.7616 4.24525 14.9679 4.54492 15.1104 4.87591C15.2529 5.2069 15.3287 5.56272 15.3337 5.92304C15.3386 6.28337 15.2725 6.64113 15.1391 6.97589C15.0057 7.31064 14.8076 7.61584 14.5562 7.87403M12.8652 6.32103C12.9692 6.21928 13.0286 6.08052 13.0304 5.93509C13.0323 5.78966 12.9765 5.64941 12.8752 5.54503L12.8652 5.53503C12.7578 5.43027 12.6138 5.37164 12.4637 5.37164C12.3137 5.37164 12.1696 5.43027 12.0622 5.53503L7.31724 10.19C6.99515 10.5037 6.56333 10.6792 6.11374 10.6792C5.66416 10.6792 5.23233 10.5037 4.91024 10.19C4.7552 10.0391 4.63143 9.85901 4.54601 9.66018C4.46058 9.46135 4.41518 9.24763 4.4124 9.03124C4.40961 8.81486 4.4495 8.60004 4.52977 8.39908C4.61005 8.19812 4.72914 8.01494 4.88024 7.86003L4.91124 7.82903L9.65824 3.17403C9.76231 3.07242 9.82189 2.93373 9.82395 2.7883C9.82601 2.64287 9.77039 2.50254 9.66924 2.39803L9.65824 2.38803C9.55085 2.28327 9.40677 2.22464 9.25674 2.22464C9.10672 2.22464 8.96264 2.28327 8.85524 2.38803L4.10824 7.04203C3.84537 7.29765 3.63642 7.60338 3.49374 7.94115C3.35107 8.27892 3.27755 8.64186 3.27755 9.00853C3.27755 9.37519 3.35107 9.73814 3.49374 10.0759C3.63642 10.4137 3.84537 10.7194 4.10824 10.975C4.64515 11.4974 5.36467 11.7896 6.11374 11.7896C6.86282 11.7896 7.58234 11.4974 8.11924 10.975L12.8652 6.32103Z'
fill='currentColor'
/>
</g>
<defs>
<clipPath id='mcp-clip'>
<rect width='16' height='16' fill='white' />
</clipPath>
</defs>
</svg>
)
}
export function WordpressIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='0 0 25.925 25.925'>

View File

@@ -251,32 +251,78 @@ Rufen Sie Ausführungsdetails einschließlich des Workflow-Zustandsschnappschuss
</Tab>
</Tabs>
## Webhook-Abonnements
## Benachrichtigungen
Erhalten Sie Echtzeitbenachrichtigungen, wenn Workflow-Ausführungen abgeschlossen werden. Webhooks werden über die Sim-Benutzeroberfläche im Workflow-Editor konfiguriert.
Erhalten Sie Echtzeit-Benachrichtigungen, wenn Workflow-Ausführungen abgeschlossen sind, per Webhook, E-Mail oder Slack. Benachrichtigungen werden auf Workspace-Ebene von der Protokollseite aus konfiguriert.
### Konfiguration
Webhooks können für jeden Workflow über die Benutzeroberfläche des Workflow-Editors konfiguriert werden. Klicken Sie auf das Webhook-Symbol in der Kontrollleiste, um Ihre Webhook-Abonnements einzurichten.
Konfigurieren Sie Benachrichtigungen von der Protokollseite aus, indem Sie auf die Menütaste klicken und "Benachrichtigungen konfigurieren" auswählen.
<div className="mx-auto w-full overflow-hidden rounded-lg">
<Video src="configure-webhook.mp4" width={700} height={450} />
</div>
**Benachrichtigungskanäle:**
- **Webhook**: Senden Sie HTTP POST-Anfragen an Ihren Endpunkt
- **E-Mail**: Erhalten Sie E-Mail-Benachrichtigungen mit Ausführungsdetails
- **Slack**: Posten Sie Nachrichten in einen Slack-Kanal
**Verfügbare Konfigurationsoptionen:**
**Workflow-Auswahl:**
- Wählen Sie bestimmte Workflows zur Überwachung aus
- Oder wählen Sie "Alle Workflows", um aktuelle und zukünftige Workflows einzubeziehen
**Filteroptionen:**
- `levelFilter`: Zu empfangende Protokollebenen (`info`, `error`)
- `triggerFilter`: Zu empfangende Auslösertypen (`api`, `webhook`, `schedule`, `manual`, `chat`)
**Optionale Daten:**
- `includeFinalOutput`: Schließt die endgültige Ausgabe des Workflows ein
- `includeTraceSpans`: Schließt detaillierte Ausführungs-Trace-Spans ein
- `includeRateLimits`: Schließt Informationen zum Ratenlimit ein (Sync/Async-Limits und verbleibende)
- `includeUsageData`: Schließt Abrechnungszeitraum-Nutzung und -Limits ein
### Alarmregeln
Anstatt Benachrichtigungen für jede Ausführung zu erhalten, konfigurieren Sie Alarmregeln, um nur bei erkannten Problemen benachrichtigt zu werden:
**Aufeinanderfolgende Fehler**
- Alarm nach X aufeinanderfolgenden fehlgeschlagenen Ausführungen (z.B. 3 Fehler in Folge)
- Wird zurückgesetzt, wenn eine Ausführung erfolgreich ist
**Fehlerrate**
- Alarm, wenn die Fehlerrate X% in den letzten Y Stunden überschreitet
- Erfordert mindestens 5 Ausführungen im Zeitfenster
- Wird erst nach Ablauf des vollständigen Zeitfensters ausgelöst
**Latenz-Schwellenwert**
- Alarm, wenn eine Ausführung länger als X Sekunden dauert
- Nützlich zum Erkennen langsamer oder hängender Workflows
**Latenz-Spitze**
- Alarm, wenn die Ausführung X% langsamer als der Durchschnitt ist
- Vergleicht mit der durchschnittlichen Dauer über das konfigurierte Zeitfenster
- Erfordert mindestens 5 Ausführungen, um eine Baseline zu etablieren
**Kostenschwelle**
- Alarmierung, wenn eine einzelne Ausführung mehr als $X kostet
- Nützlich, um teure LLM-Aufrufe zu erkennen
**Keine Aktivität**
- Alarmierung, wenn innerhalb von X Stunden keine Ausführungen stattfinden
- Nützlich zur Überwachung geplanter Workflows, die regelmäßig ausgeführt werden sollten
**Fehlerzählung**
- Alarmierung, wenn die Fehleranzahl X innerhalb eines Zeitfensters überschreitet
- Erfasst die Gesamtfehler, nicht aufeinanderfolgende
Alle Alarmtypen beinhalten eine Abklingzeit von 1 Stunde, um Benachrichtigungsspam zu vermeiden.
### Webhook-Konfiguration
Für Webhooks stehen zusätzliche Optionen zur Verfügung:
- `url`: Ihre Webhook-Endpunkt-URL
- `secret`: Optionales Geheimnis für die HMAC-Signaturverifizierung
- `includeFinalOutput`: Die endgültige Ausgabe des Workflows in die Nutzlast einschließen
- `includeTraceSpans`: Detaillierte Ausführungs-Trace-Spans einschließen
- `includeRateLimits`: Informationen zum Ratelimit des Workflow-Besitzers einschließen
- `includeUsageData`: Nutzungs- und Abrechnungsdaten des Workflow-Besitzers einschließen
- `levelFilter`: Array von Log-Ebenen, die empfangen werden sollen (`info`, `error`)
- `triggerFilter`: Array von Auslösertypen, die empfangen werden sollen (`api`, `webhook`, `schedule`, `manual`, `chat`)
- `active`: Webhook-Abonnement aktivieren/deaktivieren
- `secret`: Optionales Geheimnis für HMAC-Signaturverifizierung
### Webhook-Nutzlast
### Payload-Struktur
Wenn eine Workflow-Ausführung abgeschlossen ist, sendet Sim eine POST-Anfrage an Ihre Webhook-URL:
Wenn eine Workflow-Ausführung abgeschlossen ist, sendet Sim die folgende Payload (über Webhook POST, E-Mail oder Slack):
```json
{
@@ -327,17 +373,17 @@ Wenn eine Workflow-Ausführung abgeschlossen ist, sendet Sim eine POST-Anfrage a
### Webhook-Header
Jede Webhook-Anfrage enthält diese Header:
Jede Webhook-Anfrage enthält diese Header (nur Webhook-Kanal):
- `sim-event`: Ereignistyp (immer `workflow.execution.completed`)
- `sim-timestamp`: Unix-Zeitstempel in Millisekunden
- `sim-delivery-id`: Eindeutige Lieferungs-ID für Idempotenz
- `sim-signature`: HMAC-SHA256-Signatur zur Verifizierung (falls Secret konfiguriert)
- `Idempotency-Key`: Identisch mit der Lieferungs-ID zur Erkennung von Duplikaten
- `sim-delivery-id`: Eindeutige Zustell-ID für Idempotenz
- `sim-signature`: HMAC-SHA256-Signatur zur Verifizierung (falls Geheimnis konfiguriert)
- `Idempotency-Key`: Gleich wie Zustell-ID zur Erkennung von Duplikaten
### Signaturverifizierung
Wenn Sie ein Webhook-Secret konfigurieren, überprüfen Sie die Signatur, um sicherzustellen, dass der Webhook von Sim stammt:
Wenn Sie ein Webhook-Geheimnis konfigurieren, überprüfen Sie die Signatur, um sicherzustellen, dass der Webhook von Sim stammt:
<Tabs items={['Node.js', 'Python']}>
<Tab value="Node.js">
@@ -414,7 +460,7 @@ Fehlgeschlagene Webhook-Zustellungen werden mit exponentiellem Backoff und Jitte
- Maximale Versuche: 5
- Wiederholungsverzögerungen: 5 Sekunden, 15 Sekunden, 1 Minute, 3 Minuten, 10 Minuten
- Jitter: Bis zu 10% zusätzliche Verzögerung, um Überlastungen zu vermeiden
- Jitter: Bis zu 10% zusätzliche Verzögerung, um Überlastung zu vermeiden
- Nur HTTP 5xx und 429 Antworten lösen Wiederholungen aus
- Zustellungen haben ein Timeout nach 30 Sekunden
@@ -424,15 +470,15 @@ Fehlgeschlagene Webhook-Zustellungen werden mit exponentiellem Backoff und Jitte
## Best Practices
1. **Polling-Strategie**: Verwenden Sie beim Abfragen von Logs die cursorbasierte Paginierung mit `order=asc` und `startDate`, um neue Logs effizient abzurufen.
1. **Polling-Strategie**: Verwende bei der Abfrage von Logs eine cursor-basierte Paginierung mit `order=asc` und `startDate`, um neue Logs effizient abzurufen.
2. **Webhook-Sicherheit**: Konfigurieren Sie immer ein Webhook-Secret und überprüfen Sie Signaturen, um sicherzustellen, dass Anfragen von Sim stammen.
2. **Webhook-Sicherheit**: Konfiguriere immer ein Webhook-Secret und überprüfe Signaturen, um sicherzustellen, dass Anfragen von Sim stammen.
3. **Idempotenz**: Verwenden Sie den `Idempotency-Key`Header, um doppelte Webhook-Zustellungen zu erkennen und zu behandeln.
3. **Idempotenz**: Verwende den `Idempotency-Key`Header, um doppelte Webhook-Zustellungen zu erkennen und zu behandeln.
4. **Datenschutz**: Standardmäßig werden `finalOutput` und `traceSpans` von den Antworten ausgeschlossen. Aktivieren Sie diese nur, wenn Sie die Daten benötigen und die Datenschutzauswirkungen verstehen.
4. **Datenschutz**: Standardmäßig werden `finalOutput` und `traceSpans` aus den Antworten ausgeschlossen. Aktiviere diese nur, wenn du die Daten benötigst und die Datenschutzauswirkungen verstehst.
5. **Rate-Limiting**: Implementieren Sie exponentielles Backoff, wenn Sie 429-Antworten erhalten. Überprüfen Sie den `Retry-After`Header für die empfohlene Wartezeit.
5. **Rate-Limiting**: Implementiere exponentielles Backoff, wenn du 429-Antworten erhältst. Überprüfe den `Retry-After`Header für die empfohlene Wartezeit.
## Rate-Limiting
@@ -443,7 +489,7 @@ Die API implementiert Rate-Limiting, um eine faire Nutzung zu gewährleisten:
- **Team-Plan**: 60 Anfragen pro Minute
- **Enterprise-Plan**: Individuelle Limits
Informationen zum Rate-Limit sind in den Antwort-Headern enthalten:
Rate-Limit-Informationen sind in den Antwort-Headern enthalten:
- `X-RateLimit-Limit`: Maximale Anfragen pro Zeitfenster
- `X-RateLimit-Remaining`: Verbleibende Anfragen im aktuellen Zeitfenster
- `X-RateLimit-Reset`: ISO-Zeitstempel, wann das Zeitfenster zurückgesetzt wird
@@ -495,7 +541,7 @@ async function pollLogs() {
setInterval(pollLogs, 30000);
```
## Beispiel: Verarbeitung von Webhooks
## Beispiel: Verarbeiten von Webhooks
```javascript
import express from 'express';

View File

@@ -147,4 +147,4 @@ Der Snapshot bietet:
- Erfahren Sie mehr über die [Kostenberechnung](/execution/costs), um die Preisgestaltung von Workflows zu verstehen
- Erkunden Sie die [externe API](/execution/api) für programmatischen Zugriff auf Protokolle
- Richten Sie [Webhook-Benachrichtigungen](/execution/api#webhook-subscriptions) für Echtzeit-Warnungen ein
- Richten Sie [Benachrichtigungen](/execution/api#notifications) für Echtzeit-Warnungen per Webhook, E-Mail oder Slack ein

View File

@@ -66,16 +66,16 @@ Um Umgebungsvariablen in Ihren Workflows zu referenzieren, verwenden Sie die `{{
height={350}
/>
## Variablen-Präzedenz
## Wie Variablen aufgelöst werden
Wenn Sie sowohl persönliche als auch Workspace-Variablen mit demselben Namen haben:
**Workspace-Variablen haben immer Vorrang** vor persönlichen Variablen, unabhängig davon, wer den Workflow ausführt.
1. **Workspace-Variablen haben Vorrang** vor persönlichen Variablen
2. Dies verhindert Namenskonflikte und gewährleistet ein konsistentes Verhalten in Team-Workflows
3. Wenn eine Workspace-Variable existiert, wird die persönliche Variable mit demselben Namen ignoriert
Wenn keine Workspace-Variable für einen Schlüssel existiert, werden persönliche Variablen verwendet:
- **Manuelle Ausführungen (UI)**: Ihre persönlichen Variablen
- **Automatisierte Ausführungen (API, Webhook, Zeitplan, bereitgestellter Chat)**: Persönliche Variablen des Workflow-Besitzers
<Callout type="warning">
Wählen Sie Variablennamen sorgfältig, um unbeabsichtigte Überschreibungen zu vermeiden. Erwägen Sie, persönliche Variablen mit Ihren Initialen oder Workspace-Variablen mit dem Projektnamen zu versehen.
<Callout type="info">
Persönliche Variablen eignen sich am besten zum Testen. Verwenden Sie Workspace-Variablen für Produktions-Workflows.
</Callout>
## Sicherheits-Best-Practices

View File

@@ -240,32 +240,78 @@ Retrieve execution details including the workflow state snapshot.
</Tab>
</Tabs>
## Webhook Subscriptions
## Notifications
Get real-time notifications when workflow executions complete. Webhooks are configured through the Sim UI in the workflow editor.
Get real-time notifications when workflow executions complete via webhook, email, or Slack. Notifications are configured at the workspace level from the Logs page.
### Configuration
Webhooks can be configured for each workflow through the workflow editor UI. Click the webhook icon in the control bar to set up your webhook subscriptions.
Configure notifications from the Logs page by clicking the menu button and selecting "Configure Notifications".
<div className="mx-auto w-full overflow-hidden rounded-lg">
<Video src="configure-webhook.mp4" width={700} height={450} />
</div>
**Notification Channels:**
- **Webhook**: Send HTTP POST requests to your endpoint
- **Email**: Receive email notifications with execution details
- **Slack**: Post messages to a Slack channel
**Available Configuration Options:**
**Workflow Selection:**
- Select specific workflows to monitor
- Or choose "All Workflows" to include current and future workflows
**Filtering Options:**
- `levelFilter`: Log levels to receive (`info`, `error`)
- `triggerFilter`: Trigger types to receive (`api`, `webhook`, `schedule`, `manual`, `chat`)
**Optional Data:**
- `includeFinalOutput`: Include the workflow's final output
- `includeTraceSpans`: Include detailed execution trace spans
- `includeRateLimits`: Include rate limit information (sync/async limits and remaining)
- `includeUsageData`: Include billing period usage and limits
### Alert Rules
Instead of receiving notifications for every execution, configure alert rules to be notified only when issues are detected:
**Consecutive Failures**
- Alert after X consecutive failed executions (e.g., 3 failures in a row)
- Resets when an execution succeeds
**Failure Rate**
- Alert when failure rate exceeds X% over the last Y hours
- Requires minimum 5 executions in the window
- Only triggers after the full time window has elapsed
**Latency Threshold**
- Alert when any execution takes longer than X seconds
- Useful for catching slow or hanging workflows
**Latency Spike**
- Alert when execution is X% slower than the average
- Compares against the average duration over the configured time window
- Requires minimum 5 executions to establish baseline
**Cost Threshold**
- Alert when a single execution costs more than $X
- Useful for catching expensive LLM calls
**No Activity**
- Alert when no executions occur within X hours
- Useful for monitoring scheduled workflows that should run regularly
**Error Count**
- Alert when error count exceeds X within a time window
- Tracks total errors, not consecutive
All alert types include a 1-hour cooldown to prevent notification spam.
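To make the failure-rate rule concrete, here is a minimal sketch of the check it describes — an illustration of the rule's arithmetic, not the actual implementation (the real rule also waits for the full window to elapse and applies the 1-hour cooldown, which this sketch omits):

```typescript
interface ExecutionRecord {
  success: boolean
  timestamp: number // epoch milliseconds
}

// Alert when the failure rate exceeds `thresholdPercent` over the last
// `windowHours`, with at least 5 executions in the window.
function failureRateExceeded(
  executions: ExecutionRecord[],
  thresholdPercent: number,
  windowHours: number,
  now: number = Date.now()
): boolean {
  const windowStart = now - windowHours * 60 * 60 * 1000
  const inWindow = executions.filter((e) => e.timestamp >= windowStart)
  if (inWindow.length < 5) return false // not enough executions to evaluate
  const failures = inWindow.filter((e) => !e.success).length
  return (failures / inWindow.length) * 100 > thresholdPercent
}
```

For example, 3 failures out of 6 executions in the window is a 50% rate, so a rule configured at 40% over 2 hours would fire; with only 4 executions in the window the rule does not evaluate at all.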
### Webhook Configuration
For webhooks, additional options are available:
- `url`: Your webhook endpoint URL
- `secret`: Optional secret for HMAC signature verification
- `includeFinalOutput`: Include the workflow's final output in the payload
- `includeTraceSpans`: Include detailed execution trace spans
- `includeRateLimits`: Include the workflow owner's rate limit information
- `includeUsageData`: Include the workflow owner's usage and billing data
- `levelFilter`: Array of log levels to receive (`info`, `error`)
- `triggerFilter`: Array of trigger types to receive (`api`, `webhook`, `schedule`, `manual`, `chat`)
- `active`: Enable/disable the webhook subscription
### Webhook Payload
### Payload Structure
When a workflow execution completes, Sim sends a POST request to your webhook URL:
When a workflow execution completes, Sim sends the following payload (via webhook POST, email, or Slack):
```json
{
@@ -316,7 +362,7 @@ When a workflow execution completes, Sim sends a POST request to your webhook UR
### Webhook Headers
Each webhook request includes these headers:
Each webhook request includes these headers (webhook channel only):
- `sim-event`: Event type (always `workflow.execution.completed`)
- `sim-timestamp`: Unix timestamp in milliseconds
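The `sim-signature` header can be checked with a standard HMAC comparison. A minimal sketch, assuming the signature is an HMAC-SHA256 hex digest of the raw request body computed with your configured webhook secret (see the signature verification section of the docs for the exact signing scheme):

```typescript
import { createHmac, timingSafeEqual } from 'crypto'

// Recompute the HMAC-SHA256 of the raw body and compare it to the
// `sim-signature` header in constant time.
function verifySimSignature(rawBody: string, signatureHeader: string, secret: string): boolean {
  const expected = createHmac('sha256', secret).update(rawBody).digest('hex')
  const a = Buffer.from(expected)
  const b = Buffer.from(signatureHeader)
  // timingSafeEqual throws if lengths differ, so guard first.
  return a.length === b.length && timingSafeEqual(a, b)
}
```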

View File

@@ -147,4 +147,4 @@ The snapshot provides:
- Learn about [Cost Calculation](/execution/costs) to understand workflow pricing
- Explore the [External API](/execution/api) for programmatic log access
- Set up [Webhook notifications](/execution/api#webhook-subscriptions) for real-time alerts
- Set up [Notifications](/execution/api#notifications) for real-time alerts via webhook, email, or Slack

View File

@@ -45,9 +45,9 @@ Create a new event in Google Calendar
| `summary` | string | Yes | Event title/summary |
| `description` | string | No | Event description |
| `location` | string | No | Event location |
| `startDateTime` | string | Yes | Start date and time \(RFC3339 format, e.g., 2025-06-03T10:00:00-08:00\) |
| `endDateTime` | string | Yes | End date and time \(RFC3339 format, e.g., 2025-06-03T11:00:00-08:00\) |
| `timeZone` | string | No | Time zone \(e.g., America/Los_Angeles\) |
| `startDateTime` | string | Yes | Start date and time. MUST include timezone offset \(e.g., 2025-06-03T10:00:00-08:00\) OR provide timeZone parameter |
| `endDateTime` | string | Yes | End date and time. MUST include timezone offset \(e.g., 2025-06-03T11:00:00-08:00\) OR provide timeZone parameter |
| `timeZone` | string | No | Time zone \(e.g., America/Los_Angeles\). Required if datetime does not include offset. Defaults to America/Los_Angeles if not provided. |
| `attendees` | array | No | Array of attendee email addresses |
| `sendUpdates` | string | No | How to send updates to attendees: all, externalOnly, or none |
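An illustrative parameter object for the datetime requirement above — either include an offset in both datetimes or pass `timeZone`; all values here are placeholders:

```typescript
// Option A: offsets included in the datetimes, so timeZone can be omitted.
const createEventParams = {
  summary: 'Team sync',
  startDateTime: '2025-06-03T10:00:00-08:00',
  endDateTime: '2025-06-03T11:00:00-08:00',
  attendees: ['alice@example.com', 'bob@example.com'],
  sendUpdates: 'all',
}

// Option B: offset-free datetimes with an explicit time zone.
const createEventParamsWithZone = {
  summary: 'Team sync',
  startDateTime: '2025-06-03T10:00:00',
  endDateTime: '2025-06-03T11:00:00',
  timeZone: 'America/Los_Angeles',
}
```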

View File

@@ -113,8 +113,8 @@ List files and folders in Google Drive
| --------- | ---- | -------- | ----------- |
| `folderSelector` | string | No | Select the folder to list files from |
| `folderId` | string | No | The ID of the folder to list files from \(internal use\) |
| `query` | string | No | A query to filter the files |
| `pageSize` | number | No | The number of files to return |
| `query` | string | No | Search term to filter files by name \(e.g. "budget" finds files with "budget" in the name\). Do NOT use Google Drive query syntax here - just provide a plain search term. |
| `pageSize` | number | No | The maximum number of files to return \(default: 100\) |
| `pageToken` | string | No | The page token to use for pagination |
#### Output

View File

@@ -91,8 +91,8 @@ Read data from a Google Sheets spreadsheet
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `spreadsheetId` | string | Yes | The ID of the spreadsheet to read from |
| `range` | string | No | The range of cells to read from |
| `spreadsheetId` | string | Yes | The ID of the spreadsheet \(found in the URL: docs.google.com/spreadsheets/d/\{SPREADSHEET_ID\}/edit\). |
| `range` | string | No | The A1 notation range to read \(e.g. "Sheet1!A1:D10", "A1:B5"\). Defaults to first sheet A1:Z1000 if not specified. |
#### Output
@@ -109,9 +109,9 @@ Write data to a Google Sheets spreadsheet
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `spreadsheetId` | string | Yes | The ID of the spreadsheet to write to |
| `range` | string | No | The range of cells to write to |
| `values` | array | Yes | The data to write to the spreadsheet |
| `spreadsheetId` | string | Yes | The ID of the spreadsheet |
| `range` | string | No | The A1 notation range to write to \(e.g. "Sheet1!A1:D10", "A1:B5"\) |
| `values` | array | Yes | The data to write as a 2D array \(e.g. \[\["Name", "Age"\], \["Alice", 30\], \["Bob", 25\]\]\) or array of objects. |
| `valueInputOption` | string | No | The format of the data to write |
| `includeValuesInResponse` | boolean | No | Whether to include the written values in the response |
@@ -134,8 +134,8 @@ Update data in a Google Sheets spreadsheet
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `spreadsheetId` | string | Yes | The ID of the spreadsheet to update |
| `range` | string | No | The range of cells to update |
| `values` | array | Yes | The data to update in the spreadsheet |
| `range` | string | No | The A1 notation range to update \(e.g. "Sheet1!A1:D10", "A1:B5"\) |
| `values` | array | Yes | The data to update as a 2D array \(e.g. \[\["Name", "Age"\], \["Alice", 30\]\]\) or array of objects. |
| `valueInputOption` | string | No | The format of the data to update |
| `includeValuesInResponse` | boolean | No | Whether to include the updated values in the response |
@@ -158,8 +158,8 @@ Append data to the end of a Google Sheets spreadsheet
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `spreadsheetId` | string | Yes | The ID of the spreadsheet to append to |
| `range` | string | No | The range of cells to append after |
| `values` | array | Yes | The data to append to the spreadsheet |
| `range` | string | No | The A1 notation range to append after \(e.g. "Sheet1", "Sheet1!A:D"\) |
| `values` | array | Yes | The data to append as a 2D array \(e.g. \[\["Alice", 30\], \["Bob", 25\]\]\) or array of objects. |
| `valueInputOption` | string | No | The format of the data to append |
| `insertDataOption` | string | No | How to insert the data \(OVERWRITE or INSERT_ROWS\) |
| `includeValuesInResponse` | boolean | No | Whether to include the appended values in the response |
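To make the `range` and `values` formats above concrete, two illustrative parameter objects (spreadsheet IDs and data are placeholders):

```typescript
// Write a header row plus two data rows to an explicit A1 range.
const writeParams = {
  spreadsheetId: '1AbC...xyz', // from docs.google.com/spreadsheets/d/{SPREADSHEET_ID}/edit
  range: 'Sheet1!A1:B3',
  values: [
    ['Name', 'Age'],
    ['Alice', 30],
    ['Bob', 25],
  ],
}

// Append one row after the existing data on a sheet.
const appendParams = {
  spreadsheetId: '1AbC...xyz',
  range: 'Sheet1',
  values: [['Carol', 41]],
  insertDataOption: 'INSERT_ROWS',
}
```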

View File

@@ -122,6 +122,82 @@ Read the latest messages from Slack channels. Retrieve conversation history with
| --------- | ---- | ----------- |
| `messages` | array | Array of message objects from the channel |
### `slack_list_channels`
List all channels in a Slack workspace. Returns public and private channels the bot has access to.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `authMethod` | string | No | Authentication method: oauth or bot_token |
| `botToken` | string | No | Bot token for Custom Bot |
| `includePrivate` | boolean | No | Include private channels the bot is a member of \(default: true\) |
| `excludeArchived` | boolean | No | Exclude archived channels \(default: true\) |
| `limit` | number | No | Maximum number of channels to return \(default: 100, max: 200\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `channels` | array | Array of channel objects from the workspace |
### `slack_list_members`
List all members (user IDs) in a Slack channel. Use with Get User Info to resolve IDs to names.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `authMethod` | string | No | Authentication method: oauth or bot_token |
| `botToken` | string | No | Bot token for Custom Bot |
| `channel` | string | Yes | Channel ID to list members from |
| `limit` | number | No | Maximum number of members to return \(default: 100, max: 200\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `members` | array | Array of user IDs who are members of the channel \(e.g., U1234567890\) |
### `slack_list_users`
List all users in a Slack workspace. Returns user profiles with names and avatars.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `authMethod` | string | No | Authentication method: oauth or bot_token |
| `botToken` | string | No | Bot token for Custom Bot |
| `includeDeleted` | boolean | No | Include deactivated/deleted users \(default: false\) |
| `limit` | number | No | Maximum number of users to return \(default: 100, max: 200\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `users` | array | Array of user objects from the workspace |
### `slack_get_user`
Get detailed information about a specific Slack user by their user ID.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `authMethod` | string | No | Authentication method: oauth or bot_token |
| `botToken` | string | No | Bot token for Custom Bot |
| `userId` | string | Yes | User ID to look up \(e.g., U1234567890\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `user` | object | Detailed user information |
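A hedged sketch of the pattern mentioned under `slack_list_members` — list member IDs, then resolve each one with `slack_get_user`. The `runTool` helper is hypothetical; only the tool names, parameters, and outputs come from the tables above, and the fields read off `user` follow Slack's user object and may differ:

```typescript
// Hypothetical invoker for the Slack tools documented above.
declare function runTool(name: string, params: Record<string, unknown>): Promise<any>

async function listChannelMemberNames(channelId: string): Promise<string[]> {
  // slack_list_members returns an array of user IDs (e.g. U1234567890).
  const { members } = await runTool('slack_list_members', { channel: channelId, limit: 200 })
  const names: string[] = []
  for (const userId of members as string[]) {
    // slack_get_user resolves a user ID to a detailed user object.
    const { user } = await runTool('slack_get_user', { userId })
    names.push(user?.real_name ?? user?.name ?? userId)
  }
  return names
}
```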
### `slack_download`
Download a file from Slack

View File

@@ -66,16 +66,16 @@ To reference environment variables in your workflows, use the `{{}}` notation. W
height={350}
/>
## Variable Precedence
## How Variables are Resolved
When you have both personal and workspace variables with the same name:
**Workspace variables always take precedence** over personal variables, regardless of who runs the workflow.
1. **Workspace variables take precedence** over personal variables
2. This prevents naming conflicts and ensures consistent behavior across team workflows
3. If a workspace variable exists, the personal variable with the same name is ignored
When no workspace variable exists for a key, personal variables are used:
- **Manual runs (UI)**: Your personal variables
- **Automated runs (API, webhook, schedule, deployed chat)**: Workflow owner's personal variables
<Callout type="warning">
Choose variable names carefully to avoid unintended overrides. Consider prefixing personal variables with your initials or workspace variables with the project name.
<Callout type="info">
Personal variables are best for testing. Use workspace variables for production workflows.
</Callout>
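The resolution order above can be summarized in a small sketch (names and signatures are illustrative, not the actual implementation):

```typescript
type EnvVars = Record<string, string>

// Workspace variables win; the personal set comes from the runner for manual
// (UI) runs and from the workflow owner for automated runs.
function resolveEnvVars(
  workspaceVars: EnvVars,
  runnerPersonalVars: EnvVars,
  ownerPersonalVars: EnvVars,
  trigger: 'manual' | 'api' | 'webhook' | 'schedule' | 'chat'
): EnvVars {
  const personal = trigger === 'manual' ? runnerPersonalVars : ownerPersonalVars
  // Spread personal first so workspace keys override duplicates.
  return { ...personal, ...workspaceVars }
}
```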
## Security Best Practices

View File

@@ -251,32 +251,78 @@ Recupera detalles de ejecución incluyendo la instantánea del estado del flujo
</Tab>
</Tabs>
## Suscripciones a webhooks
## Notificaciones
Recibe notificaciones en tiempo real cuando se completan las ejecuciones de flujos de trabajo. Los webhooks se configuran a través de la interfaz de usuario de Sim en el editor de flujos de trabajo.
Recibe notificaciones en tiempo real cuando se completan las ejecuciones de flujos de trabajo a través de webhook, correo electrónico o Slack. Las notificaciones se configuran a nivel de espacio de trabajo desde la página de Registros.
### Configuración
Los webhooks pueden configurarse para cada flujo de trabajo a través de la interfaz de usuario del editor de flujos de trabajo. Haz clic en el icono de webhook en la barra de control para configurar tus suscripciones a webhooks.
Configura las notificaciones desde la página de Registros haciendo clic en el botón de menú y seleccionando "Configurar notificaciones".
<div className="mx-auto w-full overflow-hidden rounded-lg">
<Video src="configure-webhook.mp4" width={700} height={450} />
</div>
**Canales de notificación:**
- **Webhook**: Envía solicitudes HTTP POST a tu punto de conexión
- **Correo electrónico**: Recibe notificaciones por correo con detalles de la ejecución
- **Slack**: Publica mensajes en un canal de Slack
**Opciones de configuración disponibles:**
- `url`: URL del punto final de tu webhook
**Selección de flujos de trabajo:**
- Selecciona flujos de trabajo específicos para monitorear
- O elige "Todos los flujos de trabajo" para incluir los flujos actuales y futuros
**Opciones de filtrado:**
- `levelFilter`: Niveles de registro a recibir (`info`, `error`)
- `triggerFilter`: Tipos de disparadores a recibir (`api`, `webhook`, `schedule`, `manual`, `chat`)
**Datos opcionales:**
- `includeFinalOutput`: Incluir la salida final del flujo de trabajo
- `includeTraceSpans`: Incluir trazas detalladas de la ejecución
- `includeRateLimits`: Incluir información de límites de tasa (límites sincrónicos/asincrónicos y restantes)
- `includeUsageData`: Incluir uso y límites del período de facturación
### Reglas de alerta
En lugar de recibir notificaciones por cada ejecución, configura reglas de alerta para ser notificado solo cuando se detecten problemas:
**Fallos consecutivos**
- Alerta después de X ejecuciones fallidas consecutivas (por ejemplo, 3 fallos seguidos)
- Se reinicia cuando una ejecución tiene éxito
**Tasa de fallos**
- Alerta cuando la tasa de fallos supera el X% durante las últimas Y horas
- Requiere un mínimo de 5 ejecuciones en la ventana de tiempo
- Solo se activa después de que haya transcurrido la ventana de tiempo completa
**Umbral de latencia**
- Alerta cuando cualquier ejecución tarda más de X segundos
- Útil para detectar flujos de trabajo lentos o bloqueados
**Pico de latencia**
- Alerta cuando la ejecución es X% más lenta que el promedio
- Compara con la duración promedio durante la ventana de tiempo configurada
- Requiere un mínimo de 5 ejecuciones para establecer una línea base
**Umbral de costo**
- Alerta cuando una sola ejecución cuesta más de $X
- Útil para detectar llamadas costosas a LLM
**Sin actividad**
- Alerta cuando no ocurren ejecuciones dentro de X horas
- Útil para monitorear flujos de trabajo programados que deberían ejecutarse regularmente
**Recuento de errores**
- Alerta cuando el recuento de errores excede X dentro de una ventana de tiempo
- Rastrea errores totales, no consecutivos
Todos los tipos de alertas incluyen un período de enfriamiento de 1 hora para evitar el spam de notificaciones.
### Configuración de webhook
Para webhooks, hay opciones adicionales disponibles:
- `url`: La URL de tu endpoint webhook
- `secret`: Secreto opcional para verificación de firma HMAC
- `includeFinalOutput`: Incluir la salida final del flujo de trabajo en la carga útil
- `includeTraceSpans`: Incluir intervalos de seguimiento de ejecución detallados
- `includeRateLimits`: Incluir información del límite de tasa del propietario del flujo de trabajo
- `includeUsageData`: Incluir datos de uso y facturación del propietario del flujo de trabajo
- `levelFilter`: Array de niveles de registro a recibir (`info`, `error`)
- `triggerFilter`: Array de tipos de disparadores a recibir (`api`, `webhook`, `schedule`, `manual`, `chat`)
- `active`: Habilitar/deshabilitar la suscripción al webhook
### Carga útil del webhook
### Estructura de carga útil
Cuando se completa la ejecución de un flujo de trabajo, Sim envía una solicitud POST a tu URL de webhook:
Cuando se completa la ejecución de un flujo de trabajo, Sim envía la siguiente carga útil (vía webhook POST, correo electrónico o Slack):
```json
{
@@ -325,9 +371,9 @@ Cuando se completa la ejecución de un flujo de trabajo, Sim envía una solicitu
}
```
### Cabeceras de webhook
### Encabezados de webhook
Cada solicitud de webhook incluye estas cabeceras:
Cada solicitud de webhook incluye estos encabezados (solo canal webhook):
- `sim-event`: Tipo de evento (siempre `workflow.execution.completed`)
- `sim-timestamp`: Marca de tiempo Unix en milisegundos
@@ -416,15 +462,15 @@ Las entregas de webhook fallidas se reintentan con retroceso exponencial y fluct
- Retrasos de reintento: 5 segundos, 15 segundos, 1 minuto, 3 minutos, 10 minutos
- Fluctuación: Hasta un 10% de retraso adicional para prevenir el efecto de manada
- Solo las respuestas HTTP 5xx y 429 activan reintentos
- Las entregas agotan el tiempo de espera después de 30 segundos
- Las entregas agotan el tiempo después de 30 segundos
<Callout type="info">
Las entregas de webhook se procesan de forma asíncrona y no afectan al rendimiento de ejecución del flujo de trabajo.
Las entregas de webhook se procesan de forma asíncrona y no afectan el rendimiento de ejecución del flujo de trabajo.
</Callout>
## Mejores prácticas
1. **Estrategia de sondeo**: Al sondear registros, utiliza paginación basada en cursor con `order=asc` y `startDate` para obtener nuevos registros de manera eficiente.
1. **Estrategia de sondeo**: Cuando consultes registros, utiliza paginación basada en cursores con `order=asc` y `startDate` para obtener nuevos registros de manera eficiente.
2. **Seguridad de webhook**: Siempre configura un secreto de webhook y verifica las firmas para asegurar que las solicitudes provienen de Sim.
@@ -432,23 +478,23 @@ Las entregas de webhook fallidas se reintentan con retroceso exponencial y fluct
4. **Privacidad**: Por defecto, `finalOutput` y `traceSpans` están excluidos de las respuestas. Habilítalos solo si necesitas los datos y comprendes las implicaciones de privacidad.
5. **Limitación de tasa**: Implementa retroceso exponencial cuando recibas respuestas 429. Consulta la cabecera `Retry-After` para conocer el tiempo de espera recomendado.
5. **Limitación de tasa**: Implementa retroceso exponencial cuando recibas respuestas 429. Verifica la cabecera `Retry-After` para conocer el tiempo de espera recomendado.
## Limitación de tasa
La API implementa limitación de tasa para garantizar un uso justo:
La API implementa limitación de tasa para asegurar un uso justo:
- **Plan gratuito**: 10 solicitudes por minuto
- **Plan Pro**: 30 solicitudes por minuto
- **Plan Team**: 60 solicitudes por minuto
- **Plan Enterprise**: Límites personalizados
La información del límite de tasa se incluye en los encabezados de respuesta:
- `X-RateLimit-Limit`: Máximo de solicitudes por ventana
La información de límite de tasa se incluye en las cabeceras de respuesta:
- `X-RateLimit-Limit`: Solicitudes máximas por ventana
- `X-RateLimit-Remaining`: Solicitudes restantes en la ventana actual
- `X-RateLimit-Reset`: Marca de tiempo ISO cuando se reinicia la ventana
- `X-RateLimit-Reset`: Marca de tiempo ISO cuando la ventana se reinicia
## Ejemplo: Sondeo para nuevos registros
## Ejemplo: Sondeo de nuevos registros
```javascript
let cursor = null;

View File

@@ -147,4 +147,4 @@ La instantánea proporciona:
- Aprende sobre [Cálculo de costos](/execution/costs) para entender los precios de los flujos de trabajo
- Explora la [API externa](/execution/api) para acceso programático a los registros
- Configura [notificaciones por Webhook](/execution/api#webhook-subscriptions) para alertas en tiempo real
- Configura [Notificaciones](/execution/api#notifications) para alertas en tiempo real vía webhook, correo electrónico o Slack

View File

@@ -66,16 +66,16 @@ Para hacer referencia a variables de entorno en tus flujos de trabajo, utiliza l
height={350}
/>
## Precedencia de variables
## Cómo se resuelven las variables
Cuando tienes variables personales y de espacio de trabajo con el mismo nombre:
**Las variables del espacio de trabajo siempre tienen prioridad** sobre las variables personales, independientemente de quién ejecute el flujo de trabajo.
1. **Las variables del espacio de trabajo tienen precedencia** sobre las variables personales
2. Esto previene conflictos de nombres y asegura un comportamiento consistente en los flujos de trabajo del equipo
3. Si existe una variable de espacio de trabajo, la variable personal con el mismo nombre será ignorada
Cuando no existe una variable de espacio de trabajo para una clave, se utilizan las variables personales:
- **Ejecuciones manuales (UI)**: Tus variables personales
- **Ejecuciones automatizadas (API, webhook, programación, chat implementado)**: Variables personales del propietario del flujo de trabajo
<Callout type="warning">
Elige los nombres de las variables cuidadosamente para evitar sobrescrituras no deseadas. Considera usar prefijos con tus iniciales para variables personales o con el nombre del proyecto para variables del espacio de trabajo.
<Callout type="info">
Las variables personales son mejores para pruebas. Usa variables de espacio de trabajo para flujos de trabajo de producción.
</Callout>
## Mejores prácticas de seguridad

View File

@@ -251,32 +251,78 @@ Récupérer les détails d'exécution, y compris l'instantané de l'état du wor
</Tab>
</Tabs>
## Abonnements aux webhooks
## Notifications
Recevez des notifications en temps réel lorsque les exécutions de workflow sont terminées. Les webhooks sont configurés via l'interface utilisateur Sim dans l'éditeur de workflow.
Recevez des notifications en temps réel lorsque les exécutions de flux de travail sont terminées via webhook, e-mail ou Slack. Les notifications sont configurées au niveau de l'espace de travail depuis la page Logs.
### Configuration
Les webhooks peuvent être configurés pour chaque workflow via l'interface utilisateur de l'éditeur de workflow. Cliquez sur l'icône webhook dans la barre de contrôle pour configurer vos abonnements aux webhooks.
Configurez les notifications depuis la page Logs en cliquant sur le bouton menu et en sélectionnant "Configurer les notifications".
<div className="mx-auto w-full overflow-hidden rounded-lg">
<Video src="configure-webhook.mp4" width={700} height={450} />
</div>
**Canaux de notification :**
- **Webhook** : envoi de requêtes HTTP POST à votre point de terminaison
- **E-mail** : réception de notifications par e-mail avec les détails d'exécution
- **Slack** : publication de messages dans un canal Slack
**Options de configuration disponibles :**
- `url` : URL de votre endpoint webhook
- `secret` : Secret optionnel pour la vérification de signature HMAC
- `includeFinalOutput` : Inclure la sortie finale du workflow dans la charge utile
- `includeTraceSpans` : Inclure les intervalles de trace d'exécution détaillés
- `includeRateLimits` : Inclure les informations de limite de débit du propriétaire du workflow
- `includeUsageData` : Inclure les données d'utilisation et de facturation du propriétaire du workflow
- `levelFilter` : Tableau des niveaux de journal à recevoir (`info`, `error`)
- `triggerFilter` : Tableau des types de déclencheurs à recevoir (`api`, `webhook`, `schedule`, `manual`, `chat`)
- `active` : Activer/désactiver l'abonnement webhook
**Sélection de flux de travail :**
- Sélectionnez des flux de travail spécifiques à surveiller
- Ou choisissez "Tous les flux de travail" pour inclure les flux actuels et futurs
### Charge utile du webhook
**Options de filtrage :**
- `levelFilter` : niveaux de journalisation à recevoir (`info`, `error`)
- `triggerFilter` : types de déclencheurs à recevoir (`api`, `webhook`, `schedule`, `manual`, `chat`)
Lorsqu'une exécution de workflow est terminée, Sim envoie une requête POST à votre URL webhook :
**Données optionnelles :**
- `includeFinalOutput` : inclure le résultat final du flux de travail
- `includeTraceSpans` : inclure les traces détaillées d'exécution
- `includeRateLimits` : inclure les informations de limite de débit (limites synchrones/asynchrones et restantes)
- `includeUsageData` : inclure l'utilisation et les limites de la période de facturation
### Règles d'alerte
Au lieu de recevoir des notifications pour chaque exécution, configurez des règles d'alerte pour être notifié uniquement lorsque des problèmes sont détectés :
**Échecs consécutifs**
- Alerte après X exécutions échouées consécutives (par exemple, 3 échecs d'affilée)
- Réinitialisation lorsqu'une exécution réussit
**Taux d'échec**
- Alerte lorsque le taux d'échec dépasse X % au cours des Y dernières heures
- Nécessite un minimum de 5 exécutions dans la fenêtre
- Ne se déclenche qu'après l'écoulement complet de la fenêtre temporelle
**Seuil de latence**
- Alerte lorsqu'une exécution prend plus de X secondes
- Utile pour détecter les flux de travail lents ou bloqués
**Pic de latence**
- Alerte lorsque l'exécution est X % plus lente que la moyenne
- Compare à la durée moyenne sur la fenêtre temporelle configurée
- Nécessite un minimum de 5 exécutions pour établir une référence
**Seuil de coût**
- Alerte lorsqu'une seule exécution coûte plus de X €
- Utile pour détecter les appels LLM coûteux
**Aucune activité**
- Alerte lorsqu'aucune exécution ne se produit pendant X heures
- Utile pour surveiller les workflows programmés qui devraient s'exécuter régulièrement
**Nombre d'erreurs**
- Alerte lorsque le nombre d'erreurs dépasse X dans une fenêtre temporelle
- Suit le total des erreurs, pas les erreurs consécutives
Tous les types d'alertes incluent un temps de récupération d'une heure pour éviter le spam de notifications.
### Configuration du webhook
Pour les webhooks, des options supplémentaires sont disponibles :
- `url` : l'URL de votre point de terminaison webhook
- `secret` : secret optionnel pour la vérification de signature HMAC
### Structure de la charge utile
Lorsqu'une exécution de workflow se termine, Sim envoie la charge utile suivante (via webhook POST, e-mail ou Slack) :
```json
{
@@ -325,15 +371,15 @@ Lorsqu'une exécution de workflow est terminée, Sim envoie une requête POST à
}
```
### En-têtes de webhook
### En-têtes webhook
Chaque requête webhook inclut ces en-têtes :
Chaque requête webhook inclut ces en-têtes (canal webhook uniquement) :
- `sim-event` : Type d'événement (toujours `workflow.execution.completed`)
- `sim-timestamp` : Horodatage Unix en millisecondes
- `sim-event` : type d'événement (toujours `workflow.execution.completed`)
- `sim-timestamp` : horodatage Unix en millisecondes
- `sim-delivery-id` : ID de livraison unique pour l'idempotence
- `sim-signature` : Signature HMAC-SHA256 pour vérification (si un secret est configuré)
- `Idempotency-Key` : Identique à l'ID de livraison pour la détection des doublons
- `sim-signature` : signature HMAC-SHA256 pour vérification (si un secret est configuré)
- `Idempotency-Key` : identique à l'ID de livraison pour la détection des doublons
### Vérification de signature
@@ -408,14 +454,14 @@ Si vous configurez un secret webhook, vérifiez la signature pour vous assurer q
</Tab>
</Tabs>
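For quick reference, a minimal sketch of the check described above, assuming the `sim-signature` header carries a hex-encoded HMAC-SHA256 digest of the raw request body computed with your webhook secret (the complete Node.js and Python examples in the tabs above are authoritative):
```typescript
import { createHmac, timingSafeEqual } from 'crypto'

// Minimal sketch: compare the received signature with a locally computed HMAC-SHA256 digest.
// Assumes a hex-encoded digest of the raw body; follow the full examples above for the exact encoding.
function verifySimSignature(rawBody: string, signature: string, secret: string): boolean {
  const expected = createHmac('sha256', secret).update(rawBody).digest('hex')
  const a = Buffer.from(expected)
  const b = Buffer.from(signature)
  // timingSafeEqual throws on length mismatch, so check lengths first.
  return a.length === b.length && timingSafeEqual(a, b)
}
```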
### Politique de réessai
### Politique de nouvelle tentative
Les livraisons de webhook échouées sont réessayées avec un backoff exponentiel et du jitter :
- Nombre maximum de tentatives : 5
- Délais de réessai : 5 secondes, 15 secondes, 1 minute, 3 minutes, 10 minutes
- Délais de nouvelle tentative : 5 secondes, 15 secondes, 1 minute, 3 minutes, 10 minutes
- Jitter : jusqu'à 10 % de délai supplémentaire pour éviter l'effet de horde
- Seules les réponses HTTP 5xx et 429 déclenchent des réessais
- Seules les réponses HTTP 5xx et 429 déclenchent de nouvelles tentatives
- Les livraisons expirent après 30 secondes
<Callout type="info">
@@ -424,15 +470,15 @@ Les livraisons de webhook échouées sont réessayées avec un backoff exponenti
## Bonnes pratiques
1. **Stratégie de polling** : lors de l'interrogation des logs, utilisez la pagination basée sur curseur avec `order=asc` et `startDate` pour récupérer efficacement les nouveaux logs.
1. **Stratégie de polling** : Lors du polling des logs, utilisez la pagination basée sur curseur avec `order=asc` et `startDate` pour récupérer efficacement les nouveaux logs.
2. **Sécurité des webhooks** : configurez toujours un secret webhook et vérifiez les signatures pour vous assurer que les requêtes proviennent de Sim.
2. **Sécurité des webhooks** : Configurez toujours un secret de webhook et vérifiez les signatures pour vous assurer que les requêtes proviennent de Sim.
3. **Idempotence** : utilisez l'en-tête `Idempotency-Key` pour détecter et gérer les livraisons de webhook en double.
3. **Idempotence** : Utilisez l'en-tête `Idempotency-Key` pour détecter et gérer les livraisons de webhook en double.
4. **Confidentialité** : par défaut, `finalOutput` et `traceSpans` sont exclus des réponses. Activez-les uniquement si vous avez besoin des données et comprenez les implications en matière de confidentialité.
4. **Confidentialité** : Par défaut, `finalOutput` et `traceSpans` sont exclus des réponses. Activez-les uniquement si vous avez besoin des données et comprenez les implications en matière de confidentialité.
5. **Limitation de débit** : implémentez un backoff exponentiel lorsque vous recevez des réponses 429. Vérifiez l'en-tête `Retry-After` pour connaître le temps d'attente recommandé.
5. **Limitation de débit** : Implémentez un backoff exponentiel lorsque vous recevez des réponses 429. Vérifiez l'en-tête `Retry-After` pour connaître le temps d'attente recommandé.
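To illustrate the idempotency recommendation above, a small sketch of a receiver that ignores redelivered events using the `Idempotency-Key` header. The `/sim-webhook` path and the in-memory set are illustrative only; a real deployment would persist seen delivery IDs.
```typescript
import express from 'express'

const app = express()
app.use(express.json())

// Illustration only: a production receiver would persist seen delivery IDs (Redis, database).
const seenDeliveries = new Set<string>()

// '/sim-webhook' is a placeholder path for this sketch.
app.post('/sim-webhook', (req, res) => {
  const deliveryId = req.header('Idempotency-Key') ?? ''
  if (deliveryId && seenDeliveries.has(deliveryId)) {
    // Redelivered event (e.g. after a retry): acknowledge it without processing again.
    res.status(200).send('duplicate ignored')
    return
  }
  if (deliveryId) seenDeliveries.add(deliveryId)
  // ...handle the execution payload here...
  res.status(200).send('ok')
})

app.listen(3000)
```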
## Limitation de débit
@@ -443,12 +489,12 @@ L'API implémente une limitation de débit pour garantir une utilisation équita
- **Plan Équipe** : 60 requêtes par minute
- **Plan Entreprise** : Limites personnalisées
Les informations de limite de débit sont incluses dans les en-têtes de réponse :
Les informations de limitation de débit sont incluses dans les en-têtes de réponse :
- `X-RateLimit-Limit` : Nombre maximum de requêtes par fenêtre
- `X-RateLimit-Remaining` : Requêtes restantes dans la fenêtre actuelle
- `X-RateLimit-Reset` : Horodatage ISO indiquant quand la fenêtre se réinitialise
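A compact sketch of the client behaviour recommended in the best practices above: back off exponentially on 429 responses, preferring the `Retry-After` header when the API provides one.
```typescript
// Sketch: retry on 429 with exponential backoff, honouring Retry-After when present.
async function fetchWithBackoff(url: string, init: RequestInit = {}, maxRetries = 5): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, init)
    if (res.status !== 429 || attempt >= maxRetries) return res
    const retryAfter = Number(res.headers.get('Retry-After'))
    const delayMs = Number.isFinite(retryAfter) && retryAfter > 0
      ? retryAfter * 1000   // server-suggested wait
      : 2 ** attempt * 1000 // fallback: 1s, 2s, 4s, ...
    await new Promise((resolve) => setTimeout(resolve, delayMs))
  }
}
```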
## Exemple : Interrogation pour nouveaux journaux
## Exemple : Polling pour nouveaux logs
```javascript
let cursor = null;

View File

@@ -147,4 +147,4 @@ L'instantané fournit :
- Découvrez le [Calcul des coûts](/execution/costs) pour comprendre la tarification des workflows
- Explorez l'[API externe](/execution/api) pour un accès programmatique aux journaux
- Configurez les [notifications Webhook](/execution/api#webhook-subscriptions) pour des alertes en temps réel
- Configurez les [Notifications](/execution/api#notifications) pour des alertes en temps réel par webhook, e-mail ou Slack

View File

@@ -66,16 +66,16 @@ Pour référencer des variables d'environnement dans vos workflows, utilisez la
height={350}
/>
## Priorité des variables
## Comment les variables sont résolues
Lorsque vous avez à la fois des variables personnelles et d'espace de travail portant le même nom :
**Les variables d'espace de travail ont toujours la priorité** sur les variables personnelles, quel que soit l'utilisateur qui exécute le flux de travail.
1. **Les variables d'espace de travail ont la priorité** sur les variables personnelles
2. Cela évite les conflits de nommage et assure un comportement cohérent dans les workflows d'équipe
3. Si une variable d'espace de travail existe, la variable personnelle portant le même nom est ignorée
Lorsqu'aucune variable d'espace de travail n'existe pour une clé, les variables personnelles sont utilisées :
- **Exécutions manuelles (UI)** : Vos variables personnelles
- **Exécutions automatisées (API, webhook, planification, chat déployé)** : Variables personnelles du propriétaire du flux de travail
<Callout type="warning">
Choisissez les noms de variables avec soin pour éviter les remplacements involontaires. Envisagez de préfixer les variables personnelles avec vos initiales ou les variables d'espace de travail avec le nom du projet.
<Callout type="info">
Les variables personnelles sont idéales pour les tests. Utilisez les variables d'espace de travail pour les flux de travail en production.
</Callout>
## Bonnes pratiques de sécurité

View File

@@ -251,32 +251,78 @@ SimダッシュボードのユーザーセッティングからAPIキーを生
</Tab>
</Tabs>
## Webhookサブスクリプション
## 通知
ワークフロー実行が完了したときにリアルタイム通知を受け取ります。WebhookはSim UIのワークフローエディタで設定されます。
ワークフロー実行が完了したときに、Webhook、メール、またはSlackを通じてリアルタイム通知を受け取ることができます。通知はログページからワークスペースレベルで設定されます。
### 設定
Webhookは、ワークフローエディタUIを通じて各ワークフローに設定できます。コントロールバーのWebhookアイコンをクリックして、Webhookサブスクリプションを設定します。
ログページからメニューボタンをクリックし、「通知を設定する」を選択して通知を設定します。
<div className="mx-auto w-full overflow-hidden rounded-lg">
<Video src="configure-webhook.mp4" width={700} height={450} />
</div>
**通知チャネル:**
- **Webhook**: エンドポイントにHTTP POSTリクエストを送信
- **メール**: 実行詳細を含むメール通知を受信
- **Slack**: Slackチャンネルにメッセージを投稿
**利用可能な設定オプション:**
- `url`: WebhookエンドポイントURL
- `secret`: HMAC署名検証用のオプションシークレット
- `includeFinalOutput`: ペイロードにワークフローの最終出力を含める
**ワークフロー選択:**
- 監視する特定のワークフローを選択
- または「すべてのワークフロー」を選択して現在および将来のワークフローを含める
**フィルタリングオプション:**
- `levelFilter`: 受信するログレベル (`info`, `error`)
- `triggerFilter`: 受信するトリガータイプ (`api`, `webhook`, `schedule`, `manual`, `chat`)
**オプションデータ:**
- `includeFinalOutput`: ワークフローの最終出力を含める
- `includeTraceSpans`: 詳細な実行トレーススパンを含める
- `includeRateLimits`: ワークフロー所有者のレート制限情報を含める
- `includeUsageData`: ワークフロー所有者の使用状況と請求データを含める
- `levelFilter`: 受信するログレベルの配列 (`info`, `error`)
- `triggerFilter`: 受信するトリガータイプの配列 (`api`, `webhook`, `schedule`, `manual`, `chat`)
- `active`: Webhookサブスクリプションの有効化/無効化
- `includeRateLimits`: レート制限情報(同期/非同期の制限と残り)を含める
- `includeUsageData`: 請求期間の使用状況と制限を含める
### Webhookペイロード
### アラートルール
ワークフロー実行が完了すると、SimはWebhook URLにPOSTリクエストを送信します:
すべての実行について通知を受け取る代わりに、問題が検出された場合にのみ通知されるようにアラートルールを設定できます
**連続失敗**
- X回連続して実行が失敗した後にアラート(例:3回連続の失敗)
- 実行が成功すると、リセットされます
**失敗率**
- 過去Y時間の失敗率がX%を超えた場合にアラート
- ウィンドウ内で最低5回の実行が必要
- 完全な時間ウィンドウが経過した後にのみトリガーされます
**レイテンシーしきい値**
- 実行がX秒以上かかった場合にアラート
- 遅いまたは停止しているワークフローを検出するのに役立ちます
**レイテンシースパイク**
- 実行が平均よりX%遅い場合にアラート
- 設定された時間ウィンドウでの平均所要時間と比較
- ベースラインを確立するために最低5回の実行が必要
**コスト閾値**
- 単一の実行コストが$Xを超えた場合にアラート
- 高価なLLM呼び出しを検出するのに役立つ
**アクティビティなし**
- X時間以内に実行がない場合にアラート
- 定期的に実行されるべきスケジュールされたワークフローの監視に役立つ
**エラー数**
- 時間枠内でエラー数がXを超えた場合にアラート
- 連続ではなく、総エラー数を追跡
すべてのアラートタイプには、通知スパムを防ぐための1時間のクールダウンが含まれています。
### Webhook設定
Webhookの場合、追加オプションが利用可能です:
- `url`:WebhookエンドポイントURL
- `secret`:HMAC署名検証用のオプションシークレット
### ペイロード構造
ワークフロー実行が完了すると、Simは以下のペイロードを送信します(webhook POST、メール、またはSlackを介して):
```json
{
@@ -327,17 +373,17 @@ Webhookは、ワークフローエディタUIを通じて各ワークフロー
### Webhookヘッダー
各Webhookリクエストには以下のヘッダーが含まれます:
各Webhookリクエストには以下のヘッダーが含まれます(Webhookチャンネルのみ):
- `sim-event`: イベントタイプ(常に `workflow.execution.completed`)
- `sim-timestamp`: ミリ秒単位のUnixタイムスタンプ
- `sim-delivery-id`: べき等性のための一意の配信ID
- `sim-signature`: 検証用のHMAC-SHA256署名(シークレットが設定されている場合)
- `Idempotency-Key`: 重複検出のための配信IDと同じ
- `sim-event`:イベントタイプ(常に`workflow.execution.completed`)
- `sim-timestamp`:ミリ秒単位のUnixタイムスタンプ
- `sim-delivery-id`:べき等性のための一意の配信ID
- `sim-signature`:検証用のHMAC-SHA256署名(シークレットが設定されている場合)
- `Idempotency-Key`:重複検出のための配信IDと同じ
### 署名検証
Webhookシークレットを設定した場合、署名を検証してWebhookがSimからのものであることを確認してください:
Webhookシークレットを設定した場合、署名を検証してWebhookがSimからのものであることを確認します:
<Tabs items={['Node.js', 'Python']}>
<Tab value="Node.js">
@@ -412,21 +458,21 @@ Webhookシークレットを設定した場合、署名を検証してWebhookが
失敗したWebhook配信は指数バックオフとジッターを使用して再試行されます:
- 最大試行回数: 5回
- リトライ間隔: 5秒、15秒、1分、3分、10分
- ジッター: 最大10%の追加遅延(サンダリングハード問題を防ぐため)
- 最大試行回数:5回
- リトライ遅延:5秒、15秒、1分、3分、10分
- ジッター:サンダリングハード問題を防ぐために最大10%の追加遅延
- HTTP 5xxと429レスポンスのみがリトライをトリガー
- 配信は30秒後にタイムアウト
<Callout type="info">
Webhook配信は非同期で処理され、ワークフロー実行のパフォーマンスに影響しません。
Webhook配信は非同期で処理され、ワークフロー実行のパフォーマンスに影響しません。
</Callout>
## ベストプラクティス
1. **ポーリング戦略**: ログをポーリングする場合、`order=asc`と`startDate`を使用したカーソルベースのページネーションを使用して、新しいログを効率的に取得してください。
1. **ポーリング戦略**: ログをポーリングする場合、`order=asc`と`startDate`を使用したカーソルベースのページネーションを用いて、新しいログを効率的に取得してください。
2. **Webhookセキュリティ**: 常にWebhookシークレットを設定し、署名を検証してリクエストがSimからのものであることを確認してください。
2. **Webhookセキュリティ**: 常にWebhookシークレットを設定し、署名を検証してリクエストがSimからのものであることを確認してください。
3. **べき等性**: `Idempotency-Key`ヘッダーを使用して、重複するWebhook配信を検出し処理してください。
@@ -445,7 +491,7 @@ APIは公平な使用を確保するためにレート制限を実装してい
レート制限情報はレスポンスヘッダーに含まれています:
- `X-RateLimit-Limit`: ウィンドウあたりの最大リクエスト数
- `X-RateLimit-Remaining`: 現在のウィンドウで残りのリクエスト数
- `X-RateLimit-Remaining`: 現在のウィンドウで残っているリクエスト数
- `X-RateLimit-Reset`: ウィンドウがリセットされるISOタイムスタンプ
## 例:新しいログのポーリング
@@ -495,7 +541,7 @@ async function pollLogs() {
setInterval(pollLogs, 30000);
```
## 例:ウェブフックの処理
## 例:Webhookの処理
```javascript
import express from 'express';

View File

@@ -145,6 +145,6 @@ Simは異なるワークフローとユースケースに対応する2つの補
## 次のステップ
- ワークフロー価格設定を理解するための[コスト計算](/execution/costs)について学ぶ
- ワークフロー価格設定を理解するための[コスト計算](/execution/costs)について学ぶ
- プログラムによるログアクセスのための[外部API](/execution/api)を探索する
- リアルタイムアラート用の[Webhookによる通知](/execution/api#webhook-subscriptions)を設定する
- webhook、メール、またはSlackによるリアルタイムアラートのための[通知](/execution/api#notifications)を設定する

View File

@@ -66,16 +66,16 @@ Simの環境変数は2つのレベルで機能します
height={350}
/>
## 変数の優先順位
## 変数の解決方法
同じ名前の個人変数とワークスペース変数がある場合:
**ワークスペース変数は常に優先されます**。誰がワークフローを実行するかに関わらず、個人変数よりも優先されます。
1. **ワークスペース変数が**個人変数よりも**優先されます**
2. これにより名前の競合を防ぎ、チームワークフロー全体で一貫した動作を確保します
3. ワークスペース変数が存在する場合、同じ名前の個人変数は無視されます
キーに対するワークスペース変数が存在しない場合、個人変数が使用されます:
- **手動実行(UI)**:あなたの個人変数
- **自動実行(API、ウェブフック、スケジュール、デプロイされたチャット)**:ワークフロー所有者の個人変数
<Callout type="warning">
意図しない上書きを避けるために、変数名は慎重に選んでください。個人変数にはイニシャルを、ワークスペース変数にはプロジェクト名を接頭辞として付けることを検討してください。
<Callout type="info">
個人変数はテストに最適です。本番環境のワークフローにはワークスペース変数を使用してください。
</Callout>
## セキュリティのベストプラクティス

View File

@@ -251,32 +251,78 @@ curl -H "x-api-key: YOUR_API_KEY" \
</Tab>
</Tabs>
## Webhook 订阅
## 通知
工作流执行完成时获取实时通知。Webhook 可通过 Sim UI 的工作流编辑器进行配置。
通过 webhook、电子邮件或 Slack 获取工作流执行完成时的实时通知。通知在工作区级别从日志页面进行配置。
### 配置
可以通过工作流编辑器 UI 为每个工作流配置 Webhook。点击控制栏中的 Webhook 图标以设置 Webhook 订阅
通过点击菜单按钮并选择“配置通知”,从日志页面配置通知。
<div className="mx-auto w-full overflow-hidden rounded-lg">
<Video src="configure-webhook.mp4" width={700} height={450} />
</div>
**通知渠道:**
- **Webhook**:向您的端点发送 HTTP POST 请求
- **电子邮件**:接收包含执行详情的电子邮件通知
- **Slack**:向 Slack 频道发送消息
**可用配置选项:**
- `url`:您的 Webhook 端点 URL
- `secret`:用于 HMAC 签名验证的可选密钥
- `includeFinalOutput`:在负载中包含工作流的最终输出
**工作流选择:**
- 选择特定的工作流进行监控
- 或选择“所有工作流”以包含当前和未来的工作流
**过滤选项:**
- `levelFilter`:接收的日志级别(`info`、`error`)
- `triggerFilter`:接收的触发类型(`api`、`webhook`、`schedule`、`manual`、`chat`)
**可选数据:**
- `includeFinalOutput`:包含工作流的最终输出
- `includeTraceSpans`:包含详细的执行跟踪跨度
- `includeRateLimits`:包含工作流所有者的速率限制信息
- `includeUsageData`:包含工作流所有者的使用和计费数据
- `levelFilter`:接收的日志级别数组(`info`, `error`)
- `triggerFilter`:接收的触发类型数组(`api`, `webhook`, `schedule`, `manual`, `chat`)
- `active`:启用/禁用 Webhook 订阅
- `includeRateLimits`:包含速率限制信息(同步/异步限制和剩余)
- `includeUsageData`:包含计费周期的使用情况和限制
### Webhook 负载
### 警报规则
当工作流执行完成时Sim 会向您的 Webhook URL 发送一个 POST 请求
与其为每次执行接收通知,不如配置警报规则,仅在检测到问题时收到通知:
**连续失败**
- 在 X 次连续失败执行后发出警报(例如,连续 3 次失败)
- 当执行成功时重置
**失败率**
- 当失败率在过去 Y 小时内超过 X% 时发出警报
- 需要窗口内至少 5 次执行
- 仅在整个时间窗口结束后触发
**延迟阈值**
- 当任何执行时间超过 X 秒时发出警报
- 用于捕捉缓慢或挂起的工作流
**延迟峰值**
- 当执行时间比平均值慢 X% 时发出警报
- 与配置时间窗口内的平均持续时间进行比较
- 需要至少 5 次执行以建立基线
**成本阈值**
- 当单次执行成本超过 $X 时发出警报
- 用于捕捉高成本的 LLM 调用
**无活动**
- 当 X 小时内没有执行发生时发出警报
- 用于监控应定期运行的计划工作流
**错误计数**
- 当错误计数在某个时间窗口内超过 X 时发出警报
- 跟踪总错误数,而非连续错误
所有警报类型都包括 1 小时的冷却时间,以防止通知过多。
### Webhook 配置
对于 webhooks,可用以下附加选项:
- `url`:您的 webhook 端点 URL
- `secret`:用于 HMAC 签名验证的可选密钥
### 负载结构
当工作流执行完成时,Sim 会发送以下负载(通过 webhook POST、电子邮件或 Slack):
```json
{
@@ -325,9 +371,9 @@ curl -H "x-api-key: YOUR_API_KEY" \
}
```
### Webhook 请求
### Webhook 头信息
每个 webhook 请求都包含以下请求头:
每个 webhook 请求都包含以下头信息(仅限 webhook 渠道):
- `sim-event`:事件类型(始终为 `workflow.execution.completed`)
- `sim-timestamp`:以毫秒为单位的 Unix 时间戳
@@ -410,7 +456,7 @@ curl -H "x-api-key: YOUR_API_KEY" \
### 重试策略
失败的 webhook 交付使用指数退避和抖动进行重试:
失败的 webhook 交付使用指数退避和抖动进行重试:
- 最大尝试次数:5
- 重试延迟:5 秒、15 秒、1 分钟、3 分钟、10 分钟
@@ -424,15 +470,15 @@ curl -H "x-api-key: YOUR_API_KEY" \
## 最佳实践
1. **轮询策略**:在轮询日志时,使用基于游标的分页与 `order=asc` 和 `startDate` 高效获取新日志。
1. **轮询策略**:在轮询日志时,使用基于游标的分页与 `order=asc` 和 `startDate` 高效获取新日志。
2. **Webhook 安全性**:始终配置 webhook 密钥并验证签名以确保请求来自 Sim。
2. **Webhook 安全性**:始终配置一个 webhook 密钥并验证签名以确保请求来自 Sim。
3. **幂等性**:使用 `Idempotency-Key` 请求头检测并处理重复的 webhook 交付。
3. **幂等性**:使用 `Idempotency-Key` 头检测并处理重复的 webhook 交付。
4. **隐私**:默认情况下,`finalOutput` 和 `traceSpans` 不包含在响应中。仅在需要这些数据并了解隐私影响时启用它们。
4. **隐私**:默认情况下,`finalOutput` 和 `traceSpans` 会从响应中排除。仅在需要这些数据并了解隐私影响时启用它们。
5. **速率限制**:当收到 429 响应时,实施指数退避。检查 `Retry-After` 请求头以获取推荐的等待时间。
5. **速率限制**:当收到 429 响应时,实施指数退避。检查 `Retry-After` 头以获取推荐的等待时间。
## 速率限制
@@ -443,7 +489,7 @@ API 实现了速率限制以确保公平使用:
- **团队计划**:每分钟 60 次请求
- **企业计划**:自定义限制
速率限制信息包含在响应头中:
速率限制信息包含在响应头中:
- `X-RateLimit-Limit`:每个窗口的最大请求数
- `X-RateLimit-Remaining`:当前窗口中剩余的请求数
- `X-RateLimit-Reset`:窗口重置时的 ISO 时间戳
@@ -495,7 +541,7 @@ async function pollLogs() {
setInterval(pollLogs, 30000);
```
## 示例:处理 Webhooks
## 示例:处理 Webhook
```javascript
import express from 'express';

View File

@@ -146,5 +146,5 @@ Sim 提供了两种互补的日志界面,以适应不同的工作流和使用
## 下一步
- 了解 [成本计算](/execution/costs) 以理解工作流定价
- 探索 [外部 API](/execution/api) 以进行编程日志访问
- 设置 [Webhook 通知](/execution/api#webhook-subscriptions) 以获取实时警报
- 探索 [外部 API](/execution/api) 以编程方式访问日志
- 设置 [通知](/execution/api#notifications) 以通过 webhook、电子邮件或 Slack 接收实时警报

View File

@@ -66,16 +66,16 @@ Sim 中的环境变量分为两个级别:
height={350}
/>
## 变量优先级
## 变量的解析方式
当您同时拥有名称相同的个人变量和工作区变量时:
**工作区变量始终优先于**个人变量,无论是谁运行工作流。
1. **工作区变量优先** 于个人变量
2. 这可以防止命名冲突,并确保团队工作流中的行为一致
3. 如果存在工作区变量,则会忽略具有相同名称的个人变量
当某个键没有工作区变量时,将使用个人变量:
- **手动运行UI**:使用您的个人变量
- **自动运行API、Webhook、计划任务、已部署的聊天**:使用工作流所有者的个人变量
<Callout type="warning">
请谨慎选择变量名称,以避免意外覆盖。建议为个人变量添加您的姓名首字母前缀,或为工作区变量添加项目名称前缀
<Callout type="info">
个人变量最适合用于测试。生产环境的工作流请使用工作区变量
</Callout>
## 安全最佳实践

View File

@@ -155,10 +155,10 @@ checksums:
content/17: 71f197abc03ccf0fbd23f1bad0bfd450
content/18: 2fa6f2c12d3ad71b64e1c1b7b96bf348
content/19: 17f927a05543fb337b72be5501332577
content/20: d9380cdcc96b7cbc253f7a916bbf8ea5
content/21: 485586a48c36c878a95c22a03e8f3676
content/22: c3d2fa5d359011e843e8771ea6d3491f
content/23: 1a2569897c84b009b770c6585d72d7d6
content/20: 7f832236ba279639c9f12b97e0a75615
content/21: 0a61b9c5fc3ae7648c648a07b1adae17
content/22: a04ec7cf757e7a63c1145257887e01d0
content/23: 9c0f1d6c3ef087ccb77a81267fbbf078
content/24: fc8a2d65e8fa2c33010d2b6fe10bbd68
content/25: 7beb08526a7baf4cc6d32dc67eb67376
content/26: 4f8bbb64b829f602156d0985fd48ca88
@@ -4488,7 +4488,7 @@ checksums:
content/29: 1a98ac22faecc543d8f2d4065023e774
content/30: fa0c9faaa2ffef89fb19f70e271912c3
content/31: c2cef2688104adaf6641092f43d4969a
content/32: ebdbef07196127ef2c7ba23491f62578
content/32: 16ae245dc265324548e4b7f443b57d71
0334b7b88d5f6e736984d25a5913ce74:
meta/title: 30c54e4dc4ce599b87d94be34a8617f5
content/0: c2b41859d63a751682f0d9aec488e581
@@ -4639,43 +4639,57 @@ checksums:
content/31: 620d49f6fa198176bf6e59685e246e2c
content/32: 736aa604a0310410baaf7723027d3cdb
content/33: 3304a33dfb626c6e2267c062e8956a9d
content/34: 3d833eb8f8e6bd1feebea1cb6c388309
content/35: f4e933e28b226f3b5b35d1401d4a8649
content/34: 1dcfad686ae5b282bc1666e422d3a05c
content/35: 852efbf14e8f30db0806cd41b90812ac
content/36: 95ec79465a3a0d67bf1eaf0ef0574b71
content/37: a995cdc6044f9b6745b7fc7e3a0d6d79
content/38: 4a6b924e56225096e27be99282e0924d
content/39: 9a32a035dfbb07c1daec8939fcec70fd
content/40: ffd9a3c7a121a6a22758a7efc9af55b9
content/41: 8bece06923c42ca045eaffc282b8a0e6
content/42: 68926c22cd421a38f0fb4e51eb68dca5
content/43: f29cfb9cfcc3e4eb4bd63c0c4aa78832
content/44: d27a1d4f71d599ebf7b0dd6f748a5e04
content/45: c378967adfff8419115144415d96f47c
content/46: 44383ae82f1d6e0e0bb70bfc4dc558d6
content/47: c681ed9371c4285112b6cf0c75a14d90
content/48: 7c872ba10b1a7bd9aa66f965b8aa35c0
content/49: c6ae2bcbff69a19c7bf2a50de79a6d45
content/50: cbfe1ade60934bcf4914255350760e4f
content/51: 42b378d3e7ee5b53893f4aed2df7371e
content/52: 3304a33dfb626c6e2267c062e8956a9d
content/53: 7d213c6776c3a2c9094cec87ff25b533
content/54: 4abd5155d278bf5a8e313c464cc232c7
content/55: 54d9518f5153dfc440daaeb4c30aa236
content/56: a577d5715cc0369536eed19cb5a4e6ad
content/57: b2a4a0c279f47d58a2456f25a1e1c6f9
content/58: 4f0ae0ea5cd3920a1f5a4a4cc42c3e10
content/59: 25545e546719f2dce0e3865aef7a5f1f
content/60: b3253e17dc54f4092bffb91d55ac5afa
content/61: c13873edfa750ce6caa155ef2f3f1883
content/62: 0bc07e36a42f7e25d5426b6a9deaac02
content/63: 017c829dd54fde20fa9fac7712342236
content/64: ceba97287cca68b1291142a38175d6d8
content/65: 02072ea1e956490139dbe8bbb0bc3e16
content/66: 44871834b9cb423a9978843c08623979
content/67: 0b22ed8a7e64005c666505c48e09f715
content/68: 494dcadaea5e62eddd599700511ecee5
content/69: 8332b16a0bf7a4c862f5104e9ffeb98d
content/70: 90e2f984a874a8f954ddfd127ec8178a
content/37: fc5f5a62b8d58cabbb3e26f1da4fb174
content/38: 361f583959e1599ed744b8bb076d7a63
content/39: 34ce28c18ded945e0ed7adc2fea7a767
content/40: cbfe3780725022693dbe8bca77166ebf
content/41: 6e6eca3194853de1383060f28e62428e
content/42: ab22182e07b366a5ad8aeaa4cd33c98b
content/43: 9757cdca1f88b3501ab320067ffc69f5
content/44: a9766f789c10b20e7616a89977913461
content/45: 93fcde5246eed5fbbda2c38499991caa
content/46: 74075c0a46913bfb13c5677706a6add1
content/47: ec9cca1ed40ff37ecbbd9079ff7e20b8
content/48: 568646d11f867050de7dc098c172cbdb
content/49: 602e3ba652a74c425b233faa001fd885
content/50: 0dd6137b756a3b6c9c68adc89bda4766
content/51: 827b9be3e2fbc0bff1157cbdfbd9cebd
content/52: e991745d67395e34fe24acbec325163c
content/53: 5712e5497ca7e787d96c18eec0c70715
content/54: 8265f9e2c583cdaa2428440263dd5456
content/55: be210009a4f28628db2583119b604dee
content/56: 68926c22cd421a38f0fb4e51eb68dca5
content/57: f29cfb9cfcc3e4eb4bd63c0c4aa78832
content/58: c47dd793baa3187e133d27545832e474
content/59: c378967adfff8419115144415d96f47c
content/60: 44383ae82f1d6e0e0bb70bfc4dc558d6
content/61: c681ed9371c4285112b6cf0c75a14d90
content/62: 7c872ba10b1a7bd9aa66f965b8aa35c0
content/63: c6ae2bcbff69a19c7bf2a50de79a6d45
content/64: cbfe1ade60934bcf4914255350760e4f
content/65: 42b378d3e7ee5b53893f4aed2df7371e
content/66: 3304a33dfb626c6e2267c062e8956a9d
content/67: 7d213c6776c3a2c9094cec87ff25b533
content/68: 4abd5155d278bf5a8e313c464cc232c7
content/69: 54d9518f5153dfc440daaeb4c30aa236
content/70: a577d5715cc0369536eed19cb5a4e6ad
content/71: b2a4a0c279f47d58a2456f25a1e1c6f9
content/72: 4f0ae0ea5cd3920a1f5a4a4cc42c3e10
content/73: 25545e546719f2dce0e3865aef7a5f1f
content/74: b3253e17dc54f4092bffb91d55ac5afa
content/75: c13873edfa750ce6caa155ef2f3f1883
content/76: 0bc07e36a42f7e25d5426b6a9deaac02
content/77: 017c829dd54fde20fa9fac7712342236
content/78: ceba97287cca68b1291142a38175d6d8
content/79: 02072ea1e956490139dbe8bbb0bc3e16
content/80: 44871834b9cb423a9978843c08623979
content/81: 0b22ed8a7e64005c666505c48e09f715
content/82: 494dcadaea5e62eddd599700511ecee5
content/83: 8332b16a0bf7a4c862f5104e9ffeb98d
content/84: 90e2f984a874a8f954ddfd127ec8178a
0e322683b6d10e9fa8c9a17ff15a5fb1:
meta/title: a912b3c7fb996fefccb182cf5c4a3fbc
content/0: e1f8d4b13687e7d73b5b5fbb4cb6142d

View File

@@ -2,9 +2,8 @@ import type { InferPageType } from 'fumadocs-core/source'
import type { source } from '@/lib/source'
export async function getLLMText(page: InferPageType<typeof source>) {
const processed = await page.data.getText('processed')
return `# ${page.data.title} (${page.url})
${page.data.description || ''}
${page.data.content || ''}`
${processed}`
}

View File

@@ -1,5 +1,5 @@
import { loader } from 'fumadocs-core/source'
import { docs } from '@/.source'
import { docs } from '@/.source/server'
import { i18n } from './i18n'
export const source = loader({

View File

@@ -16,10 +16,13 @@ const config = {
destination: '/introduction',
permanent: true,
},
]
},
async rewrites() {
return [
{
source: '/docs/:path*.mdx',
source: '/:path*.mdx',
destination: '/llms.mdx/:path*',
permanent: true,
},
]
},

View File

@@ -5,7 +5,7 @@
"license": "Apache-2.0",
"scripts": {
"dev": "next dev --port 3001",
"build": "NODE_OPTIONS='--max-old-space-size=8192' next build",
"build": "fumadocs-mdx && NODE_OPTIONS='--max-old-space-size=8192' next build",
"start": "next start",
"postinstall": "fumadocs-mdx",
"type-check": "tsc --noEmit"
@@ -14,14 +14,14 @@
"@tabler/icons-react": "^3.31.0",
"@vercel/og": "^0.6.5",
"clsx": "^2.1.1",
"fumadocs-core": "15.8.2",
"fumadocs-mdx": "11.10.1",
"fumadocs-ui": "15.8.2",
"fumadocs-core": "16.2.3",
"fumadocs-mdx": "14.1.0",
"fumadocs-ui": "16.2.3",
"lucide-react": "^0.511.0",
"next": "15.4.8",
"next": "16.0.7",
"next-themes": "^0.4.6",
"react": "19.1.0",
"react-dom": "19.1.0",
"react": "19.2.1",
"react-dom": "19.2.1",
"tailwind-merge": "^3.0.2"
},
"devDependencies": {

View File

@@ -4,9 +4,9 @@ import { type NextFetchEvent, type NextRequest, NextResponse } from 'next/server
import { i18n } from '@/lib/i18n'
const { rewrite: rewriteLLM } = rewritePath('/docs/*path', '/llms.mdx/*path')
const i18nMiddleware = createI18nMiddleware(i18n)
const i18nProxy = createI18nMiddleware(i18n)
export default function middleware(request: NextRequest, event: NextFetchEvent) {
export default function proxy(request: NextRequest, event: NextFetchEvent) {
if (isMarkdownPreferred(request)) {
const result = rewriteLLM(request.nextUrl.pathname)
@@ -15,7 +15,7 @@ export default function middleware(request: NextRequest, event: NextFetchEvent)
}
}
return i18nMiddleware(request, event)
return i18nProxy(request, event)
}
export const config = {

View File

@@ -2,6 +2,11 @@ import { defineConfig, defineDocs } from 'fumadocs-mdx/config'
export const docs = defineDocs({
dir: 'content/docs',
docs: {
postprocess: {
includeProcessedMarkdown: true,
},
},
})
export default defineConfig({

View File

@@ -13,10 +13,10 @@
"moduleResolution": "bundler",
"resolveJsonModule": true,
"isolatedModules": true,
"jsx": "preserve",
"jsx": "react-jsx",
"incremental": true,
"paths": {
"@/.source": ["./.source/index.ts"],
"@/.source/*": ["./.source/*"],
"@/*": ["./*"]
},
"plugins": [
@@ -31,7 +31,8 @@
"**/*.tsx",
".next/types/**/*.ts",
"content/docs/execution/index.mdx",
"content/docs/connections/index.mdx"
"content/docs/connections/index.mdx",
".next/dev/types/**/*.ts"
],
"exclude": ["node_modules"]
}

View File

@@ -24,3 +24,7 @@ API_ENCRYPTION_KEY=your_api_encryption_key # Use `openssl rand -hex 32` to gener
# OLLAMA_URL=http://localhost:11434 # URL for local Ollama server - uncomment if using local models
# VLLM_BASE_URL=http://localhost:8000 # Base URL for your self-hosted vLLM (OpenAI-compatible)
# VLLM_API_KEY= # Optional bearer token if your vLLM instance requires auth
# Admin API (Optional - for self-hosted GitOps)
# ADMIN_API_KEY= # Use `openssl rand -hex 32` to generate. Enables admin API for workflow export/import.
# Usage: curl -H "x-admin-key: your_key" https://your-instance/api/v1/admin/workspaces

View File

@@ -0,0 +1,40 @@
import { db } from '@sim/db'
import { account } from '@sim/db/schema'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { createLogger } from '@/lib/logs/console/logger'
const logger = createLogger('AuthAccountsAPI')
export async function GET(request: NextRequest) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { searchParams } = new URL(request.url)
const provider = searchParams.get('provider')
const whereConditions = [eq(account.userId, session.user.id)]
if (provider) {
whereConditions.push(eq(account.providerId, provider))
}
const accounts = await db
.select({
id: account.id,
accountId: account.accountId,
providerId: account.providerId,
})
.from(account)
.where(and(...whereConditions))
return NextResponse.json({ accounts })
} catch (error) {
logger.error('Failed to fetch accounts', { error })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
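// Illustrative usage (not part of the original change) — the mount path and provider value are
// assumptions for this sketch, not values shown in the diff:
//   GET /api/auth/accounts?provider=google-drive
//   -> 200 { "accounts": [{ "id": "...", "accountId": "...", "providerId": "google-drive" }] }
//   -> 401 { "error": "Unauthorized" } when no session is present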

View File

@@ -0,0 +1,150 @@
import { db } from '@sim/db'
import { settings } from '@sim/db/schema'
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { auth } from '@/lib/auth'
import { createLogger } from '@/lib/logs/console/logger'
const logger = createLogger('CopilotAutoAllowedToolsAPI')
/**
* GET - Fetch user's auto-allowed integration tools
*/
export async function GET(request: NextRequest) {
try {
const session = await auth.api.getSession({ headers: request.headers })
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const userId = session.user.id
const [userSettings] = await db
.select()
.from(settings)
.where(eq(settings.userId, userId))
.limit(1)
if (userSettings) {
const autoAllowedTools = (userSettings.copilotAutoAllowedTools as string[]) || []
return NextResponse.json({ autoAllowedTools })
}
// If no settings record exists, create one with empty array
await db.insert(settings).values({
id: userId,
userId,
copilotAutoAllowedTools: [],
})
return NextResponse.json({ autoAllowedTools: [] })
} catch (error) {
logger.error('Failed to fetch auto-allowed tools', { error })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
/**
* POST - Add a tool to the auto-allowed list
*/
export async function POST(request: NextRequest) {
try {
const session = await auth.api.getSession({ headers: request.headers })
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const userId = session.user.id
const body = await request.json()
if (!body.toolId || typeof body.toolId !== 'string') {
return NextResponse.json({ error: 'toolId must be a string' }, { status: 400 })
}
const toolId = body.toolId
// Get existing settings
const [existing] = await db.select().from(settings).where(eq(settings.userId, userId)).limit(1)
if (existing) {
const currentTools = (existing.copilotAutoAllowedTools as string[]) || []
// Add tool if not already present
if (!currentTools.includes(toolId)) {
const updatedTools = [...currentTools, toolId]
await db
.update(settings)
.set({
copilotAutoAllowedTools: updatedTools,
updatedAt: new Date(),
})
.where(eq(settings.userId, userId))
logger.info('Added tool to auto-allowed list', { userId, toolId })
return NextResponse.json({ success: true, autoAllowedTools: updatedTools })
}
return NextResponse.json({ success: true, autoAllowedTools: currentTools })
}
// Create new settings record with the tool
await db.insert(settings).values({
id: userId,
userId,
copilotAutoAllowedTools: [toolId],
})
logger.info('Created settings and added tool to auto-allowed list', { userId, toolId })
return NextResponse.json({ success: true, autoAllowedTools: [toolId] })
} catch (error) {
logger.error('Failed to add auto-allowed tool', { error })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
/**
* DELETE - Remove a tool from the auto-allowed list
*/
export async function DELETE(request: NextRequest) {
try {
const session = await auth.api.getSession({ headers: request.headers })
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const userId = session.user.id
const { searchParams } = new URL(request.url)
const toolId = searchParams.get('toolId')
if (!toolId) {
return NextResponse.json({ error: 'toolId query parameter is required' }, { status: 400 })
}
// Get existing settings
const [existing] = await db.select().from(settings).where(eq(settings.userId, userId)).limit(1)
if (existing) {
const currentTools = (existing.copilotAutoAllowedTools as string[]) || []
const updatedTools = currentTools.filter((t) => t !== toolId)
await db
.update(settings)
.set({
copilotAutoAllowedTools: updatedTools,
updatedAt: new Date(),
})
.where(eq(settings.userId, userId))
logger.info('Removed tool from auto-allowed list', { userId, toolId })
return NextResponse.json({ success: true, autoAllowedTools: updatedTools })
}
return NextResponse.json({ success: true, autoAllowedTools: [] })
} catch (error) {
logger.error('Failed to remove auto-allowed tool', { error })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
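// Illustrative round-trip (not part of the original change) — the route path and tool id are
// assumptions for this sketch, not values taken from the diff:
//   POST /api/copilot/auto-allowed-tools   { "toolId": "slack_send_message" }
//   -> 200 { "success": true, "autoAllowedTools": ["slack_send_message"] }
//   DELETE /api/copilot/auto-allowed-tools?toolId=slack_send_message
//   -> 200 { "success": true, "autoAllowedTools": [] }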

View File

@@ -1,634 +0,0 @@
/**
* Tests for copilot chat API route
*
* @vitest-environment node
*/
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
import {
createMockRequest,
mockAuth,
mockCryptoUuid,
setupCommonApiMocks,
} from '@/app/api/__test-utils__/utils'
describe('Copilot Chat API Route', () => {
const mockSelect = vi.fn()
const mockFrom = vi.fn()
const mockWhere = vi.fn()
const mockLimit = vi.fn()
const mockOrderBy = vi.fn()
const mockInsert = vi.fn()
const mockValues = vi.fn()
const mockReturning = vi.fn()
const mockUpdate = vi.fn()
const mockSet = vi.fn()
const mockExecuteProviderRequest = vi.fn()
const mockGetCopilotModel = vi.fn()
const mockGetRotatingApiKey = vi.fn()
beforeEach(() => {
vi.resetModules()
setupCommonApiMocks()
mockCryptoUuid()
mockSelect.mockReturnValue({ from: mockFrom })
mockFrom.mockReturnValue({ where: mockWhere })
mockWhere.mockReturnValue({
orderBy: mockOrderBy,
limit: mockLimit,
})
mockOrderBy.mockResolvedValue([])
mockLimit.mockResolvedValue([])
mockInsert.mockReturnValue({ values: mockValues })
mockValues.mockReturnValue({ returning: mockReturning })
mockUpdate.mockReturnValue({ set: mockSet })
mockSet.mockReturnValue({ where: mockWhere })
vi.doMock('@sim/db', () => ({
db: {
select: mockSelect,
insert: mockInsert,
update: mockUpdate,
},
}))
vi.doMock('@sim/db/schema', () => ({
copilotChats: {
id: 'id',
userId: 'userId',
messages: 'messages',
title: 'title',
model: 'model',
workflowId: 'workflowId',
createdAt: 'createdAt',
updatedAt: 'updatedAt',
},
}))
vi.doMock('drizzle-orm', () => ({
and: vi.fn((...conditions) => ({ conditions, type: 'and' })),
eq: vi.fn((field, value) => ({ field, value, type: 'eq' })),
desc: vi.fn((field) => ({ field, type: 'desc' })),
}))
mockGetCopilotModel.mockReturnValue({
provider: 'anthropic',
model: 'claude-3-haiku-20240307',
})
vi.doMock('@/lib/copilot/config', () => ({
getCopilotModel: mockGetCopilotModel,
}))
vi.doMock('@/lib/copilot/prompts', () => ({
TITLE_GENERATION_SYSTEM_PROMPT: 'Generate a title',
TITLE_GENERATION_USER_PROMPT: vi.fn((msg) => `Generate title for: ${msg}`),
}))
mockExecuteProviderRequest.mockResolvedValue({
content: 'Generated Title',
})
vi.doMock('@/providers', () => ({
executeProviderRequest: mockExecuteProviderRequest,
}))
mockGetRotatingApiKey.mockReturnValue('test-api-key')
vi.doMock('@/lib/core/config/api-keys', () => ({
getRotatingApiKey: mockGetRotatingApiKey,
}))
vi.doMock('@/lib/core/utils/request', () => ({
generateRequestId: vi.fn(() => 'test-request-id'),
}))
const mockEnvValues = {
SIM_AGENT_API_URL: 'http://localhost:8000',
COPILOT_API_KEY: 'test-sim-agent-key',
BETTER_AUTH_URL: 'http://localhost:3000',
NEXT_PUBLIC_APP_URL: 'http://localhost:3000',
NODE_ENV: 'test',
} as const
vi.doMock('@/lib/core/config/env', () => ({
env: mockEnvValues,
getEnv: (variable: string) => mockEnvValues[variable as keyof typeof mockEnvValues],
isTruthy: (value: string | boolean | number | undefined) =>
typeof value === 'string'
? value.toLowerCase() === 'true' || value === '1'
: Boolean(value),
isFalsy: (value: string | boolean | number | undefined) =>
typeof value === 'string'
? value.toLowerCase() === 'false' || value === '0'
: value === false,
}))
global.fetch = vi.fn()
})
afterEach(() => {
vi.clearAllMocks()
vi.restoreAllMocks()
})
describe('POST', () => {
it('should return 401 when user is not authenticated', async () => {
const authMocks = mockAuth()
authMocks.setUnauthenticated()
const req = createMockRequest('POST', {
message: 'Hello',
workflowId: 'workflow-123',
})
const { POST } = await import('@/app/api/copilot/chat/route')
const response = await POST(req)
expect(response.status).toBe(401)
const responseData = await response.json()
expect(responseData).toEqual({ error: 'Unauthorized' })
})
it('should return 400 for invalid request body', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
const req = createMockRequest('POST', {
// Missing required fields
})
const { POST } = await import('@/app/api/copilot/chat/route')
const response = await POST(req)
expect(response.status).toBe(400)
const responseData = await response.json()
expect(responseData.error).toBe('Invalid request data')
expect(responseData.details).toBeDefined()
})
it('should handle new chat creation and forward to sim agent', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
// Mock successful chat creation
const newChat = {
id: 'chat-123',
userId: 'user-123',
workflowId: 'workflow-123',
title: null,
model: 'claude-3-haiku-20240307',
messages: [],
}
mockReturning.mockResolvedValue([newChat])
// Mock successful sim agent response
const mockReadableStream = new ReadableStream({
start(controller) {
const encoder = new TextEncoder()
controller.enqueue(
encoder.encode('data: {"type": "assistant_message", "content": "Hello response"}\\n\\n')
)
controller.close()
},
})
;(global.fetch as any).mockResolvedValue({
ok: true,
body: mockReadableStream,
})
const req = createMockRequest('POST', {
message: 'Hello',
workflowId: 'workflow-123',
createNewChat: true,
stream: true,
})
const { POST } = await import('@/app/api/copilot/chat/route')
const response = await POST(req)
expect(response.status).toBe(200)
expect(mockInsert).toHaveBeenCalled()
expect(mockValues).toHaveBeenCalledWith({
userId: 'user-123',
workflowId: 'workflow-123',
title: null,
model: 'claude-3-haiku-20240307',
messages: [],
})
// Verify sim agent was called
expect(global.fetch).toHaveBeenCalledWith(
'http://localhost:8000/api/chat-completion-streaming',
expect.objectContaining({
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-api-key': 'test-sim-agent-key',
},
body: JSON.stringify({
message: 'Hello',
workflowId: 'workflow-123',
userId: 'user-123',
stream: true,
streamToolCalls: true,
model: 'claude-4.5-sonnet',
mode: 'agent',
messageId: 'mock-uuid-1234-5678',
version: '1.0.2',
chatId: 'chat-123',
}),
})
)
})
it('should load existing chat and include conversation history', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
// Mock existing chat with history
const existingChat = {
id: 'chat-123',
userId: 'user-123',
workflowId: 'workflow-123',
title: 'Existing Chat',
messages: [
{ role: 'user', content: 'Previous message' },
{ role: 'assistant', content: 'Previous response' },
],
}
// For POST route, the select query uses limit not orderBy
mockLimit.mockResolvedValue([existingChat])
// Mock sim agent response
const mockReadableStream = new ReadableStream({
start(controller) {
controller.close()
},
})
;(global.fetch as any).mockResolvedValue({
ok: true,
body: mockReadableStream,
})
const req = createMockRequest('POST', {
message: 'New message',
workflowId: 'workflow-123',
chatId: 'chat-123',
})
const { POST } = await import('@/app/api/copilot/chat/route')
const response = await POST(req)
expect(response.status).toBe(200)
// Verify conversation history was included
expect(global.fetch).toHaveBeenCalledWith(
'http://localhost:8000/api/chat-completion-streaming',
expect.objectContaining({
body: JSON.stringify({
message: 'New message',
workflowId: 'workflow-123',
userId: 'user-123',
stream: true,
streamToolCalls: true,
model: 'claude-4.5-sonnet',
mode: 'agent',
messageId: 'mock-uuid-1234-5678',
version: '1.0.2',
chatId: 'chat-123',
}),
})
)
})
it('should include implicit feedback in messages', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
const newChat = {
id: 'chat-123',
userId: 'user-123',
workflowId: 'workflow-123',
messages: [],
}
mockReturning.mockResolvedValue([newChat])
;(global.fetch as any).mockResolvedValue({
ok: true,
body: new ReadableStream({
start(controller) {
controller.close()
},
}),
})
const req = createMockRequest('POST', {
message: 'Hello',
workflowId: 'workflow-123',
createNewChat: true,
implicitFeedback: 'User seems confused about the workflow',
})
const { POST } = await import('@/app/api/copilot/chat/route')
await POST(req)
// Verify implicit feedback was included
expect(global.fetch).toHaveBeenCalledWith(
'http://localhost:8000/api/chat-completion-streaming',
expect.objectContaining({
body: JSON.stringify({
message: 'Hello',
workflowId: 'workflow-123',
userId: 'user-123',
stream: true,
streamToolCalls: true,
model: 'claude-4.5-sonnet',
mode: 'agent',
messageId: 'mock-uuid-1234-5678',
version: '1.0.2',
chatId: 'chat-123',
}),
})
)
})
it('should handle sim agent API errors', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockReturning.mockResolvedValue([{ id: 'chat-123', messages: [] }])
;(global.fetch as any).mockResolvedValue({
ok: false,
status: 500,
text: () => Promise.resolve('Internal server error'),
})
const req = createMockRequest('POST', {
message: 'Hello',
workflowId: 'workflow-123',
createNewChat: true,
})
const { POST } = await import('@/app/api/copilot/chat/route')
const response = await POST(req)
expect(response.status).toBe(500)
const responseData = await response.json()
expect(responseData.error).toContain('Sim agent API error')
})
it('should handle database errors during chat creation', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
// Mock database error
mockReturning.mockRejectedValue(new Error('Database connection failed'))
const req = createMockRequest('POST', {
message: 'Hello',
workflowId: 'workflow-123',
createNewChat: true,
})
const { POST } = await import('@/app/api/copilot/chat/route')
const response = await POST(req)
expect(response.status).toBe(500)
const responseData = await response.json()
expect(responseData.error).toBe('Database connection failed')
})
it('should use ask mode when specified', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockReturning.mockResolvedValue([{ id: 'chat-123', messages: [] }])
;(global.fetch as any).mockResolvedValue({
ok: true,
body: new ReadableStream({
start(controller) {
controller.close()
},
}),
})
const req = createMockRequest('POST', {
message: 'What is this workflow?',
workflowId: 'workflow-123',
createNewChat: true,
mode: 'ask',
})
const { POST } = await import('@/app/api/copilot/chat/route')
await POST(req)
expect(global.fetch).toHaveBeenCalledWith(
'http://localhost:8000/api/chat-completion-streaming',
expect.objectContaining({
body: JSON.stringify({
message: 'What is this workflow?',
workflowId: 'workflow-123',
userId: 'user-123',
stream: true,
streamToolCalls: true,
model: 'claude-4.5-sonnet',
mode: 'ask',
messageId: 'mock-uuid-1234-5678',
version: '1.0.2',
chatId: 'chat-123',
}),
})
)
})
})
describe('GET', () => {
it('should return 401 when user is not authenticated', async () => {
const authMocks = mockAuth()
authMocks.setUnauthenticated()
const req = new NextRequest('http://localhost:3000/api/copilot/chat?workflowId=workflow-123')
const { GET } = await import('@/app/api/copilot/chat/route')
const response = await GET(req)
expect(response.status).toBe(401)
const responseData = await response.json()
expect(responseData).toEqual({ error: 'Unauthorized' })
})
it('should return 400 when workflowId is missing', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
const req = new NextRequest('http://localhost:3000/api/copilot/chat')
const { GET } = await import('@/app/api/copilot/chat/route')
const response = await GET(req)
expect(response.status).toBe(400)
const responseData = await response.json()
expect(responseData.error).toBe('workflowId is required')
})
it('should return chats for authenticated user and workflow', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
// Mock database response (what comes from DB)
const mockDbChats = [
{
id: 'chat-1',
title: 'First Chat',
model: 'claude-3-haiku-20240307',
messages: [
{ role: 'user', content: 'Message 1' },
{ role: 'assistant', content: 'Response 1' },
{ role: 'user', content: 'Message 2' },
{ role: 'assistant', content: 'Response 2' },
],
createdAt: new Date('2024-01-01'),
updatedAt: new Date('2024-01-02'),
},
{
id: 'chat-2',
title: 'Second Chat',
model: 'claude-3-haiku-20240307',
messages: [
{ role: 'user', content: 'Message 1' },
{ role: 'assistant', content: 'Response 1' },
],
createdAt: new Date('2024-01-03'),
updatedAt: new Date('2024-01-04'),
},
]
// Expected transformed response (what the route returns)
const expectedChats = [
{
id: 'chat-1',
title: 'First Chat',
model: 'claude-3-haiku-20240307',
messages: [
{ role: 'user', content: 'Message 1' },
{ role: 'assistant', content: 'Response 1' },
{ role: 'user', content: 'Message 2' },
{ role: 'assistant', content: 'Response 2' },
],
messageCount: 4,
previewYaml: null,
createdAt: new Date('2024-01-01'),
updatedAt: new Date('2024-01-02'),
},
{
id: 'chat-2',
title: 'Second Chat',
model: 'claude-3-haiku-20240307',
messages: [
{ role: 'user', content: 'Message 1' },
{ role: 'assistant', content: 'Response 1' },
],
messageCount: 2,
previewYaml: null,
createdAt: new Date('2024-01-03'),
updatedAt: new Date('2024-01-04'),
},
]
mockOrderBy.mockResolvedValue(mockDbChats)
const req = new NextRequest('http://localhost:3000/api/copilot/chat?workflowId=workflow-123')
const { GET } = await import('@/app/api/copilot/chat/route')
const response = await GET(req)
expect(response.status).toBe(200)
const responseData = await response.json()
expect(responseData).toEqual({
success: true,
chats: [
{
id: 'chat-1',
title: 'First Chat',
model: 'claude-3-haiku-20240307',
messages: [
{ role: 'user', content: 'Message 1' },
{ role: 'assistant', content: 'Response 1' },
{ role: 'user', content: 'Message 2' },
{ role: 'assistant', content: 'Response 2' },
],
messageCount: 4,
previewYaml: null,
config: null,
planArtifact: null,
createdAt: '2024-01-01T00:00:00.000Z',
updatedAt: '2024-01-02T00:00:00.000Z',
},
{
id: 'chat-2',
title: 'Second Chat',
model: 'claude-3-haiku-20240307',
messages: [
{ role: 'user', content: 'Message 1' },
{ role: 'assistant', content: 'Response 1' },
],
messageCount: 2,
previewYaml: null,
config: null,
planArtifact: null,
createdAt: '2024-01-03T00:00:00.000Z',
updatedAt: '2024-01-04T00:00:00.000Z',
},
],
})
// Verify database query was made correctly
expect(mockSelect).toHaveBeenCalled()
expect(mockWhere).toHaveBeenCalled()
expect(mockOrderBy).toHaveBeenCalled()
})
it('should handle database errors when fetching chats', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
// Mock database error
mockOrderBy.mockRejectedValue(new Error('Database query failed'))
const req = new NextRequest('http://localhost:3000/api/copilot/chat?workflowId=workflow-123')
const { GET } = await import('@/app/api/copilot/chat/route')
const response = await GET(req)
expect(response.status).toBe(500)
const responseData = await response.json()
expect(responseData.error).toBe('Failed to fetch chats')
})
it('should return empty array when no chats found', async () => {
const authMocks = mockAuth()
authMocks.setAuthenticated()
mockOrderBy.mockResolvedValue([])
const req = new NextRequest('http://localhost:3000/api/copilot/chat?workflowId=workflow-123')
const { GET } = await import('@/app/api/copilot/chat/route')
const response = await GET(req)
expect(response.status).toBe(200)
const responseData = await response.json()
expect(responseData).toEqual({
success: true,
chats: [],
})
})
})
})

View File

@@ -14,11 +14,13 @@ import {
createRequestTracker,
createUnauthorizedResponse,
} from '@/lib/copilot/request-helpers'
import { getCredentialsServerTool } from '@/lib/copilot/tools/server/user/get-credentials'
import type { CopilotProviderConfig } from '@/lib/copilot/types'
import { env } from '@/lib/core/config/env'
import { createLogger } from '@/lib/logs/console/logger'
import { CopilotFiles } from '@/lib/uploads'
import { createFileContent } from '@/lib/uploads/utils/file-utils'
import { tools } from '@/tools/registry'
const logger = createLogger('CopilotChatAPI')
@@ -57,9 +59,10 @@ const ChatMessageSchema = z.object({
'claude-4.5-sonnet',
'claude-4.5-opus',
'claude-4.1-opus',
'gemini-3-pro',
])
.optional()
.default('claude-4.5-sonnet'),
.default('claude-4.5-opus'),
mode: z.enum(['ask', 'agent', 'plan']).optional().default('agent'),
prefetch: z.boolean().optional(),
createNewChat: z.boolean().optional().default(false),
@@ -313,6 +316,119 @@ export async function POST(req: NextRequest) {
const effectiveConversationId =
(currentChat?.conversationId as string | undefined) || conversationId
// For agent/build mode, fetch credentials and build tool definitions
let integrationTools: any[] = []
let baseTools: any[] = []
let credentials: {
oauth: Record<
string,
{ accessToken: string; accountId: string; name: string; expiresAt?: string }
>
apiKeys: string[]
metadata?: {
connectedOAuth: Array<{ provider: string; name: string; scopes?: string[] }>
configuredApiKeys: string[]
}
} | null = null
if (mode === 'agent') {
// Build base tools (executed locally, not deferred)
// Include function_execute for code execution capability
baseTools = [
{
name: 'function_execute',
description:
'Execute JavaScript code to perform calculations, data transformations, API calls, or any programmatic task. Code runs in a secure sandbox with fetch() available. Write plain statements (not wrapped in functions). Example: const res = await fetch(url); const data = await res.json(); return data;',
input_schema: {
type: 'object',
properties: {
code: {
type: 'string',
description:
'Raw JavaScript statements to execute. Code is auto-wrapped in async context. Use fetch() for HTTP requests. Write like: const res = await fetch(url); return await res.json();',
},
},
required: ['code'],
},
executeLocally: true,
},
]
// Fetch user credentials (OAuth + API keys)
try {
const rawCredentials = await getCredentialsServerTool.execute(
{},
{ userId: authenticatedUserId }
)
// Transform OAuth credentials to map format: { [provider]: { accessToken, accountId, ... } }
const oauthMap: Record<
string,
{ accessToken: string; accountId: string; name: string; expiresAt?: string }
> = {}
const connectedOAuth: Array<{ provider: string; name: string; scopes?: string[] }> = []
for (const cred of rawCredentials?.oauth?.connected?.credentials || []) {
if (cred.accessToken) {
oauthMap[cred.provider] = {
accessToken: cred.accessToken,
accountId: cred.id,
name: cred.name,
}
connectedOAuth.push({
provider: cred.provider,
name: cred.name,
})
}
}
credentials = {
oauth: oauthMap,
apiKeys: rawCredentials?.environment?.variableNames || [],
metadata: {
connectedOAuth,
configuredApiKeys: rawCredentials?.environment?.variableNames || [],
},
}
logger.info(`[${tracker.requestId}] Fetched credentials for build mode`, {
oauthProviders: Object.keys(oauthMap),
apiKeyCount: credentials.apiKeys.length,
})
} catch (error) {
logger.warn(`[${tracker.requestId}] Failed to fetch credentials`, {
error: error instanceof Error ? error.message : String(error),
})
}
// Build tool definitions (schemas only)
try {
const { createUserToolSchema } = await import('@/tools/params')
integrationTools = Object.entries(tools).map(([toolId, toolConfig]) => {
const userSchema = createUserToolSchema(toolConfig)
return {
name: toolId,
description: toolConfig.description || toolConfig.name || toolId,
input_schema: userSchema,
defer_loading: true, // Anthropic Advanced Tool Use
...(toolConfig.oauth?.required && {
oauth: {
required: true,
provider: toolConfig.oauth.provider,
},
}),
}
})
logger.info(`[${tracker.requestId}] Built tool definitions for build mode`, {
integrationToolCount: integrationTools.length,
})
} catch (error) {
logger.warn(`[${tracker.requestId}] Failed to build tool definitions`, {
error: error instanceof Error ? error.message : String(error),
})
}
}
const requestPayload = {
message: message, // Just send the current user message text
workflowId,
@@ -330,6 +446,10 @@ export async function POST(req: NextRequest) {
...(agentContexts.length > 0 && { context: agentContexts }),
...(actualChatId ? { chatId: actualChatId } : {}),
...(processedFileContents.length > 0 && { fileAttachments: processedFileContents }),
// For build/agent mode, include tools and credentials
...(integrationTools.length > 0 && { tools: integrationTools }),
...(baseTools.length > 0 && { baseTools }),
...(credentials && { credentials }),
}
try {
@@ -339,6 +459,12 @@ export async function POST(req: NextRequest) {
hasConversationId: !!effectiveConversationId,
hasFileAttachments: processedFileContents.length > 0,
messageLength: message.length,
mode,
hasTools: integrationTools.length > 0,
toolCount: integrationTools.length,
hasBaseTools: baseTools.length > 0,
baseToolCount: baseTools.length,
hasCredentials: !!credentials,
})
} catch {}

View File

@@ -0,0 +1,275 @@
import { db } from '@sim/db'
import { account, workflow } from '@sim/db/schema'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import {
createBadRequestResponse,
createInternalServerErrorResponse,
createRequestTracker,
createUnauthorizedResponse,
} from '@/lib/copilot/request-helpers'
import { generateRequestId } from '@/lib/core/utils/request'
import { getEffectiveDecryptedEnv } from '@/lib/environment/utils'
import { createLogger } from '@/lib/logs/console/logger'
import { refreshTokenIfNeeded } from '@/app/api/auth/oauth/utils'
import { executeTool } from '@/tools'
import { getTool } from '@/tools/utils'
const logger = createLogger('CopilotExecuteToolAPI')
const ExecuteToolSchema = z.object({
toolCallId: z.string(),
toolName: z.string(),
arguments: z.record(z.any()).optional().default({}),
workflowId: z.string().optional(),
})
/**
* Resolves all {{ENV_VAR}} references in a value recursively
* Works with strings, arrays, and objects
*/
function resolveEnvVarReferences(value: any, envVars: Record<string, string>): any {
if (typeof value === 'string') {
// Check for exact match: entire string is "{{VAR_NAME}}"
const exactMatch = /^\{\{([^}]+)\}\}$/.exec(value)
if (exactMatch) {
const envVarName = exactMatch[1].trim()
return envVars[envVarName] ?? value
}
// Check for embedded references: "prefix {{VAR}} suffix"
return value.replace(/\{\{([^}]+)\}\}/g, (match, varName) => {
const trimmedName = varName.trim()
return envVars[trimmedName] ?? match
})
}
if (Array.isArray(value)) {
return value.map((item) => resolveEnvVarReferences(item, envVars))
}
if (value !== null && typeof value === 'object') {
const resolved: Record<string, any> = {}
for (const [key, val] of Object.entries(value)) {
resolved[key] = resolveEnvVarReferences(val, envVars)
}
return resolved
}
return value
}
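// Illustration (added for clarity, using made-up values): how resolveEnvVarReferences behaves.
//   envVars = { SLACK_TOKEN: 'xoxb-123', REGION: 'us' }
//   resolveEnvVarReferences('{{SLACK_TOKEN}}', envVars)                    -> 'xoxb-123' (exact match)
//   resolveEnvVarReferences('https://api-{{REGION}}.example.com', envVars) -> 'https://api-us.example.com' (embedded)
//   resolveEnvVarReferences({ token: '{{MISSING}}' }, envVars)             -> { token: '{{MISSING}}' } (unknown refs kept)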
export async function POST(req: NextRequest) {
const tracker = createRequestTracker()
try {
const session = await getSession()
if (!session?.user?.id) {
return createUnauthorizedResponse()
}
const userId = session.user.id
const body = await req.json()
try {
const preview = JSON.stringify(body).slice(0, 300)
logger.debug(`[${tracker.requestId}] Incoming execute-tool request`, { preview })
} catch {}
const { toolCallId, toolName, arguments: toolArgs, workflowId } = ExecuteToolSchema.parse(body)
logger.info(`[${tracker.requestId}] Executing tool`, {
toolCallId,
toolName,
workflowId,
hasArgs: Object.keys(toolArgs).length > 0,
})
// Get tool config from registry
const toolConfig = getTool(toolName)
if (!toolConfig) {
// Find similar tool names to help debug
const { tools: allTools } = await import('@/tools/registry')
const allToolNames = Object.keys(allTools)
const prefix = toolName.split('_').slice(0, 2).join('_')
const similarTools = allToolNames
.filter((name) => name.startsWith(`${prefix.split('_')[0]}_`))
.slice(0, 10)
logger.warn(`[${tracker.requestId}] Tool not found in registry`, {
toolName,
prefix,
similarTools,
totalToolsInRegistry: allToolNames.length,
})
return NextResponse.json(
{
success: false,
error: `Tool not found: ${toolName}. Similar tools: ${similarTools.join(', ')}`,
toolCallId,
},
{ status: 404 }
)
}
// Get the workspaceId from the workflow (env vars are stored at workspace level)
let workspaceId: string | undefined
if (workflowId) {
const workflowResult = await db
.select({ workspaceId: workflow.workspaceId })
.from(workflow)
.where(eq(workflow.id, workflowId))
.limit(1)
workspaceId = workflowResult[0]?.workspaceId ?? undefined
}
// Get decrypted environment variables early so we can resolve all {{VAR}} references
const decryptedEnvVars = await getEffectiveDecryptedEnv(userId, workspaceId)
logger.info(`[${tracker.requestId}] Fetched environment variables`, {
workflowId,
workspaceId,
envVarCount: Object.keys(decryptedEnvVars).length,
envVarKeys: Object.keys(decryptedEnvVars),
})
// Build execution params starting with LLM-provided arguments
// Resolve all {{ENV_VAR}} references in the arguments
const executionParams: Record<string, any> = resolveEnvVarReferences(toolArgs, decryptedEnvVars)
logger.info(`[${tracker.requestId}] Resolved env var references in arguments`, {
toolName,
originalArgKeys: Object.keys(toolArgs),
resolvedArgKeys: Object.keys(executionParams),
})
// Resolve OAuth access token if required
if (toolConfig.oauth?.required && toolConfig.oauth.provider) {
const provider = toolConfig.oauth.provider
logger.info(`[${tracker.requestId}] Resolving OAuth token`, { provider })
try {
// Find the account for this provider and user
const accounts = await db
.select()
.from(account)
.where(and(eq(account.providerId, provider), eq(account.userId, userId)))
.limit(1)
if (accounts.length > 0) {
const acc = accounts[0]
const requestId = generateRequestId()
const { accessToken } = await refreshTokenIfNeeded(requestId, acc as any, acc.id)
if (accessToken) {
executionParams.accessToken = accessToken
logger.info(`[${tracker.requestId}] OAuth token resolved`, { provider })
} else {
logger.warn(`[${tracker.requestId}] No access token available`, { provider })
return NextResponse.json(
{
success: false,
error: `OAuth token not available for ${provider}. Please reconnect your account.`,
toolCallId,
},
{ status: 400 }
)
}
} else {
logger.warn(`[${tracker.requestId}] No account found for provider`, { provider })
return NextResponse.json(
{
success: false,
error: `No ${provider} account connected. Please connect your account first.`,
toolCallId,
},
{ status: 400 }
)
}
} catch (error) {
logger.error(`[${tracker.requestId}] Failed to resolve OAuth token`, {
provider,
error: error instanceof Error ? error.message : String(error),
})
return NextResponse.json(
{
success: false,
error: `Failed to get OAuth token for ${provider}`,
toolCallId,
},
{ status: 500 }
)
}
}
// Check if tool requires an API key that wasn't resolved via {{ENV_VAR}} reference
const needsApiKey = toolConfig.params?.apiKey?.required
if (needsApiKey && !executionParams.apiKey) {
logger.warn(`[${tracker.requestId}] No API key found for tool`, { toolName })
return NextResponse.json(
{
success: false,
error: `API key not provided for ${toolName}. Use {{YOUR_API_KEY_ENV_VAR}} to reference your environment variable.`,
toolCallId,
},
{ status: 400 }
)
}
// Add execution context
executionParams._context = {
workflowId,
userId,
}
// Special handling for function_execute - inject environment variables
if (toolName === 'function_execute') {
executionParams.envVars = decryptedEnvVars
executionParams.workflowVariables = {} // No workflow variables in copilot context
executionParams.blockData = {} // No block data in copilot context
executionParams.blockNameMapping = {} // No block mapping in copilot context
executionParams.language = executionParams.language || 'javascript'
executionParams.timeout = executionParams.timeout || 30000
logger.info(`[${tracker.requestId}] Injected env vars for function_execute`, {
envVarCount: Object.keys(decryptedEnvVars).length,
})
}
// Execute the tool
logger.info(`[${tracker.requestId}] Executing tool with resolved credentials`, {
toolName,
hasAccessToken: !!executionParams.accessToken,
hasApiKey: !!executionParams.apiKey,
})
const result = await executeTool(toolName, executionParams, true)
logger.info(`[${tracker.requestId}] Tool execution complete`, {
toolName,
success: result.success,
hasOutput: !!result.output,
})
return NextResponse.json({
success: true,
toolCallId,
result: {
success: result.success,
output: result.output,
error: result.error,
},
})
} catch (error) {
if (error instanceof z.ZodError) {
logger.debug(`[${tracker.requestId}] Zod validation error`, { issues: error.issues })
return createBadRequestResponse('Invalid request body for execute-tool')
}
logger.error(`[${tracker.requestId}] Failed to execute tool:`, error)
const errorMessage = error instanceof Error ? error.message : 'Failed to execute tool'
return createInternalServerErrorResponse(errorMessage)
}
}
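For illustration, a hypothetical request body for this execute-tool route whose arguments carry a {{ENV_VAR}} reference; resolveEnvVarReferences substitutes the decrypted value before the tool runs. Field names mirror the variables used in the handler above; the tool name, ids, and env var name are placeholders, not values from the source.

// Hypothetical execute-tool request body; tool name, ids, and env var name are placeholders.
const exampleBody = {
  toolCallId: 'call_001',
  toolName: 'slack_message_send',
  workflowId: 'your-workflow-id',
  toolArgs: {
    channel: '#alerts',
    text: 'Deployment finished',
    // Substituted server-side from workspace/personal env vars by resolveEnvVarReferences.
    apiKey: '{{SLACK_BOT_TOKEN}}',
  },
}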

View File

@@ -26,6 +26,7 @@ const DEFAULT_ENABLED_MODELS: Record<string, boolean> = {
'claude-4.5-sonnet': true,
'claude-4.5-opus': true,
// 'claude-4.1-opus': true,
'gemini-3-pro': true,
}
// GET - Fetch user's enabled models

View File

@@ -14,13 +14,9 @@ export const dynamic = 'force-dynamic'
/**
* POST - Refresh an MCP server connection (requires any workspace permission)
*/
export const POST = withMcpAuth('read')(
async (
request: NextRequest,
{ userId, workspaceId, requestId },
{ params }: { params: { id: string } }
) => {
const serverId = params.id
export const POST = withMcpAuth<{ id: string }>('read')(
async (request: NextRequest, { userId, workspaceId, requestId }, { params }) => {
const { id: serverId } = await params
try {
logger.info(

View File

@@ -15,13 +15,9 @@ export const dynamic = 'force-dynamic'
/**
* PATCH - Update an MCP server in the workspace (requires write or admin permission)
*/
export const PATCH = withMcpAuth('write')(
async (
request: NextRequest,
{ userId, workspaceId, requestId },
{ params }: { params: { id: string } }
) => {
const serverId = params.id
export const PATCH = withMcpAuth<{ id: string }>('write')(
async (request: NextRequest, { userId, workspaceId, requestId }, { params }) => {
const { id: serverId } = await params
try {
const body = getParsedBody(request) || (await request.json())

View File

@@ -0,0 +1,62 @@
import { nanoid } from 'nanoid'
import { type NextRequest, NextResponse } from 'next/server'
import { verifyCronAuth } from '@/lib/auth/internal'
import { acquireLock, releaseLock } from '@/lib/core/config/redis'
import { createLogger } from '@/lib/logs/console/logger'
import { pollInactivityAlerts } from '@/lib/notifications/inactivity-polling'
const logger = createLogger('InactivityAlertPoll')
export const maxDuration = 120
const LOCK_KEY = 'inactivity-alert-polling-lock'
const LOCK_TTL_SECONDS = 120
export async function GET(request: NextRequest) {
const requestId = nanoid()
logger.info(`Inactivity alert polling triggered (${requestId})`)
try {
const authError = verifyCronAuth(request, 'Inactivity alert polling')
if (authError) {
return authError
}
const locked = await acquireLock(LOCK_KEY, requestId, LOCK_TTL_SECONDS)
if (!locked) {
return NextResponse.json(
{
success: true,
message: 'Polling already in progress; skipped',
requestId,
status: 'skip',
},
{ status: 202 }
)
}
const results = await pollInactivityAlerts()
return NextResponse.json({
success: true,
message: 'Inactivity alert polling completed',
requestId,
status: 'completed',
...results,
})
} catch (error) {
logger.error(`Error during inactivity alert polling (${requestId}):`, error)
return NextResponse.json(
{
success: false,
message: 'Inactivity alert polling failed',
error: error instanceof Error ? error.message : 'Unknown error',
requestId,
},
{ status: 500 }
)
} finally {
await releaseLock(LOCK_KEY).catch(() => {})
}
}
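The LOCK_KEY/LOCK_TTL_SECONDS pair keeps overlapping cron invocations from double-polling. The real acquireLock/releaseLock helpers live in '@/lib/core/config/redis' and are not shown here; below is only an assumed sketch of the usual SET NX EX pattern such helpers follow.

// Assumed sketch only: the actual helpers are imported from '@/lib/core/config/redis'.
import { createClient } from 'redis'

const redis = createClient()
await redis.connect()

async function acquireLockSketch(key: string, holder: string, ttlSeconds: number): Promise<boolean> {
  // NX: set only if absent; EX: auto-expire so a crashed worker cannot hold the lock forever.
  const ok = await redis.set(key, holder, { NX: true, EX: ttlSeconds })
  return ok === 'OK'
}

async function releaseLockSketch(key: string): Promise<void> {
  await redis.del(key)
}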

View File

@@ -965,7 +965,7 @@ The system will substitute actual values when these placeholders are used, keepi
instruction:
'Extract the requested information from this page according to the schema',
schema: zodSchema,
})
} as any)
logger.info('Successfully extracted structured data as fallback', {
keys: structuredOutput ? Object.keys(structuredOutput) : [],

View File

@@ -0,0 +1,79 @@
/**
* Admin API Authentication
*
* Authenticates admin API requests using the ADMIN_API_KEY environment variable.
* Designed for self-hosted deployments where GitOps/scripted access is needed.
*
* Usage:
* curl -H "x-admin-key: your_admin_key" https://your-instance/api/v1/admin/...
*/
import { createHash, timingSafeEqual } from 'crypto'
import type { NextRequest } from 'next/server'
import { env } from '@/lib/core/config/env'
import { createLogger } from '@/lib/logs/console/logger'
const logger = createLogger('AdminAuth')
export interface AdminAuthSuccess {
authenticated: true
}
export interface AdminAuthFailure {
authenticated: false
error: string
notConfigured?: boolean
}
export type AdminAuthResult = AdminAuthSuccess | AdminAuthFailure
/**
* Authenticate an admin API request.
*
* @param request - The incoming Next.js request
* @returns Authentication result with success status and optional error
*/
export function authenticateAdminRequest(request: NextRequest): AdminAuthResult {
const adminKey = env.ADMIN_API_KEY
if (!adminKey) {
logger.warn('ADMIN_API_KEY environment variable is not set')
return {
authenticated: false,
error: 'Admin API is not configured. Set ADMIN_API_KEY environment variable.',
notConfigured: true,
}
}
const providedKey = request.headers.get('x-admin-key')
if (!providedKey) {
return {
authenticated: false,
error: 'Admin API key required. Provide x-admin-key header.',
}
}
if (!constantTimeCompare(providedKey, adminKey)) {
logger.warn('Invalid admin API key attempted', { keyPrefix: providedKey.slice(0, 8) })
return {
authenticated: false,
error: 'Invalid admin API key',
}
}
return { authenticated: true }
}
/**
* Constant-time string comparison.
*
* @param a - First string to compare
* @param b - Second string to compare
* @returns True if strings are equal, false otherwise
*/
function constantTimeCompare(a: string, b: string): boolean {
const aHash = createHash('sha256').update(a).digest()
const bHash = createHash('sha256').update(b).digest()
return timingSafeEqual(aHash, bHash)
}
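Hashing both keys before timingSafeEqual is deliberate: timingSafeEqual throws when its buffers differ in length, and SHA-256 digests are always 32 bytes, so the comparison stays constant-time for any provided key. A small standalone illustration (not part of the route code):

import { createHash, timingSafeEqual } from 'crypto'

// Raw strings of different lengths could not be passed to timingSafeEqual directly,
// but their digests are both 32 bytes, so the comparison always runs to completion.
const provided = createHash('sha256').update('short-key').digest()
const expected = createHash('sha256').update('a-much-longer-configured-key').digest()
console.log(timingSafeEqual(provided, expected)) // false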

View File

@@ -0,0 +1,79 @@
/**
* Admin API v1
*
* A RESTful API for administrative operations on Sim.
*
* Authentication:
* Set ADMIN_API_KEY environment variable and use x-admin-key header.
*
* Endpoints:
* GET /api/v1/admin/users - List all users
* GET /api/v1/admin/users/:id - Get user details
* GET /api/v1/admin/workspaces - List all workspaces
* GET /api/v1/admin/workspaces/:id - Get workspace details
* GET /api/v1/admin/workspaces/:id/workflows - List workspace workflows
* DELETE /api/v1/admin/workspaces/:id/workflows - Delete all workspace workflows
* GET /api/v1/admin/workspaces/:id/folders - List workspace folders
* GET /api/v1/admin/workspaces/:id/export - Export workspace (ZIP/JSON)
* POST /api/v1/admin/workspaces/:id/import - Import into workspace
* GET /api/v1/admin/workflows - List all workflows
* GET /api/v1/admin/workflows/:id - Get workflow details
* DELETE /api/v1/admin/workflows/:id - Delete workflow
* GET /api/v1/admin/workflows/:id/export - Export workflow (JSON)
* POST /api/v1/admin/workflows/import - Import single workflow
*/
export type { AdminAuthFailure, AdminAuthResult, AdminAuthSuccess } from '@/app/api/v1/admin/auth'
export { authenticateAdminRequest } from '@/app/api/v1/admin/auth'
export type { AdminRouteHandler, AdminRouteHandlerWithParams } from '@/app/api/v1/admin/middleware'
export { withAdminAuth, withAdminAuthParams } from '@/app/api/v1/admin/middleware'
export {
badRequestResponse,
errorResponse,
forbiddenResponse,
internalErrorResponse,
listResponse,
notConfiguredResponse,
notFoundResponse,
singleResponse,
unauthorizedResponse,
} from '@/app/api/v1/admin/responses'
export type {
AdminErrorResponse,
AdminFolder,
AdminListResponse,
AdminSingleResponse,
AdminUser,
AdminWorkflow,
AdminWorkflowDetail,
AdminWorkspace,
AdminWorkspaceDetail,
DbUser,
DbWorkflow,
DbWorkflowFolder,
DbWorkspace,
FolderExportPayload,
ImportResult,
PaginationMeta,
PaginationParams,
VariableType,
WorkflowExportPayload,
WorkflowExportState,
WorkflowImportRequest,
WorkflowVariable,
WorkspaceExportPayload,
WorkspaceImportRequest,
WorkspaceImportResponse,
} from '@/app/api/v1/admin/types'
export {
createPaginationMeta,
DEFAULT_LIMIT,
extractWorkflowMetadata,
MAX_LIMIT,
parsePaginationParams,
parseWorkflowVariables,
toAdminFolder,
toAdminUser,
toAdminWorkflow,
toAdminWorkspace,
} from '@/app/api/v1/admin/types'
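A minimal sketch of calling one of these endpoints from a script, assuming a self-hosted base URL (SIM_BASE_URL is an assumed variable name) and ADMIN_API_KEY exported in the environment:

// Hypothetical client script; SIM_BASE_URL is an assumption, the endpoint and header are from this API.
const BASE_URL = process.env.SIM_BASE_URL ?? 'http://localhost:3000'
const headers = { 'x-admin-key': process.env.ADMIN_API_KEY ?? '' }

const res = await fetch(`${BASE_URL}/api/v1/admin/users?limit=50&offset=0`, { headers })
const { data, pagination } = await res.json() // AdminListResponse<AdminUser>
console.log(`Fetched ${data.length} of ${pagination.total} users`)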

View File

@@ -0,0 +1,50 @@
import type { NextRequest, NextResponse } from 'next/server'
import { authenticateAdminRequest } from '@/app/api/v1/admin/auth'
import { notConfiguredResponse, unauthorizedResponse } from '@/app/api/v1/admin/responses'
export type AdminRouteHandler = (request: NextRequest) => Promise<NextResponse>
export type AdminRouteHandlerWithParams<TParams> = (
request: NextRequest,
context: { params: Promise<TParams> }
) => Promise<NextResponse>
/**
* Wrap a route handler with admin authentication.
* Returns early with an error response if authentication fails.
*/
export function withAdminAuth(handler: AdminRouteHandler): AdminRouteHandler {
return async (request: NextRequest) => {
const auth = authenticateAdminRequest(request)
if (!auth.authenticated) {
if (auth.notConfigured) {
return notConfiguredResponse()
}
return unauthorizedResponse(auth.error)
}
return handler(request)
}
}
/**
* Wrap a route handler with params with admin authentication.
* Returns early with an error response if authentication fails.
*/
export function withAdminAuthParams<TParams>(
handler: AdminRouteHandlerWithParams<TParams>
): AdminRouteHandlerWithParams<TParams> {
return async (request: NextRequest, context: { params: Promise<TParams> }) => {
const auth = authenticateAdminRequest(request)
if (!auth.authenticated) {
if (auth.notConfigured) {
return notConfiguredResponse()
}
return unauthorizedResponse(auth.error)
}
return handler(request, context)
}
}

View File

@@ -0,0 +1,82 @@
/**
* Admin API Response Helpers
*
* Consistent response formatting for all Admin API endpoints.
*/
import { NextResponse } from 'next/server'
import type {
AdminErrorResponse,
AdminListResponse,
AdminSingleResponse,
PaginationMeta,
} from '@/app/api/v1/admin/types'
/**
* Create a successful list response with pagination
*/
export function listResponse<T>(
data: T[],
pagination: PaginationMeta
): NextResponse<AdminListResponse<T>> {
return NextResponse.json({ data, pagination })
}
/**
* Create a successful single resource response
*/
export function singleResponse<T>(data: T): NextResponse<AdminSingleResponse<T>> {
return NextResponse.json({ data })
}
/**
* Create an error response
*/
export function errorResponse(
code: string,
message: string,
status: number,
details?: unknown
): NextResponse<AdminErrorResponse> {
const body: AdminErrorResponse = {
error: { code, message },
}
if (details !== undefined) {
body.error.details = details
}
return NextResponse.json(body, { status })
}
// =============================================================================
// Common Error Responses
// =============================================================================
export function unauthorizedResponse(message = 'Authentication required'): NextResponse {
return errorResponse('UNAUTHORIZED', message, 401)
}
export function forbiddenResponse(message = 'Access denied'): NextResponse {
return errorResponse('FORBIDDEN', message, 403)
}
export function notFoundResponse(resource: string): NextResponse {
return errorResponse('NOT_FOUND', `${resource} not found`, 404)
}
export function badRequestResponse(message: string, details?: unknown): NextResponse {
return errorResponse('BAD_REQUEST', message, 400, details)
}
export function internalErrorResponse(message = 'Internal server error'): NextResponse {
return errorResponse('INTERNAL_ERROR', message, 500)
}
export function notConfiguredResponse(): NextResponse {
return errorResponse(
'NOT_CONFIGURED',
'Admin API is not configured. Set ADMIN_API_KEY environment variable.',
503
)
}
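Every failure from these helpers shares the same envelope, so a client can branch on error.code regardless of which endpoint produced it. A hedged sketch of handling that envelope (URL and key handling are placeholders):

// Hedged client-side sketch; the URL is a placeholder, the envelope shape comes from errorResponse above.
const res = await fetch('http://localhost:3000/api/v1/admin/workflows/your-workflow-id', {
  headers: { 'x-admin-key': process.env.ADMIN_API_KEY ?? '' },
})
if (!res.ok) {
  const { error } = await res.json() // AdminErrorResponse
  // e.g. 404 -> { error: { code: 'NOT_FOUND', message: 'Workflow not found' } }
  console.error(`${error.code}: ${error.message}`)
}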

View File

@@ -0,0 +1,402 @@
/**
* Admin API Types
*
* This file defines the types for the Admin API endpoints.
* All responses follow a consistent structure for predictability.
*/
import type { user, workflow, workflowFolder, workspace } from '@sim/db/schema'
import type { InferSelectModel } from 'drizzle-orm'
import type { Edge } from 'reactflow'
import type { BlockState, Loop, Parallel } from '@/stores/workflows/workflow/types'
// =============================================================================
// Database Model Types (inferred from schema)
// =============================================================================
export type DbUser = InferSelectModel<typeof user>
export type DbWorkspace = InferSelectModel<typeof workspace>
export type DbWorkflow = InferSelectModel<typeof workflow>
export type DbWorkflowFolder = InferSelectModel<typeof workflowFolder>
// =============================================================================
// Pagination
// =============================================================================
export interface PaginationParams {
limit: number
offset: number
}
export interface PaginationMeta {
total: number
limit: number
offset: number
hasMore: boolean
}
export const DEFAULT_LIMIT = 50
export const MAX_LIMIT = 250
export function parsePaginationParams(url: URL): PaginationParams {
const limitParam = url.searchParams.get('limit')
const offsetParam = url.searchParams.get('offset')
let limit = limitParam ? Number.parseInt(limitParam, 10) : DEFAULT_LIMIT
let offset = offsetParam ? Number.parseInt(offsetParam, 10) : 0
if (Number.isNaN(limit) || limit < 1) limit = DEFAULT_LIMIT
if (limit > MAX_LIMIT) limit = MAX_LIMIT
if (Number.isNaN(offset) || offset < 0) offset = 0
return { limit, offset }
}
export function createPaginationMeta(total: number, limit: number, offset: number): PaginationMeta {
return {
total,
limit,
offset,
hasMore: offset + limit < total,
}
}
// =============================================================================
// API Response Types
// =============================================================================
export interface AdminListResponse<T> {
data: T[]
pagination: PaginationMeta
}
export interface AdminSingleResponse<T> {
data: T
}
export interface AdminErrorResponse {
error: {
code: string
message: string
details?: unknown
}
}
// =============================================================================
// User Types
// =============================================================================
export interface AdminUser {
id: string
name: string
email: string
emailVerified: boolean
image: string | null
createdAt: string
updatedAt: string
}
export function toAdminUser(dbUser: DbUser): AdminUser {
return {
id: dbUser.id,
name: dbUser.name,
email: dbUser.email,
emailVerified: dbUser.emailVerified,
image: dbUser.image,
createdAt: dbUser.createdAt.toISOString(),
updatedAt: dbUser.updatedAt.toISOString(),
}
}
// =============================================================================
// Workspace Types
// =============================================================================
export interface AdminWorkspace {
id: string
name: string
ownerId: string
createdAt: string
updatedAt: string
}
export interface AdminWorkspaceDetail extends AdminWorkspace {
workflowCount: number
folderCount: number
}
export function toAdminWorkspace(dbWorkspace: DbWorkspace): AdminWorkspace {
return {
id: dbWorkspace.id,
name: dbWorkspace.name,
ownerId: dbWorkspace.ownerId,
createdAt: dbWorkspace.createdAt.toISOString(),
updatedAt: dbWorkspace.updatedAt.toISOString(),
}
}
// =============================================================================
// Folder Types
// =============================================================================
export interface AdminFolder {
id: string
name: string
parentId: string | null
color: string | null
sortOrder: number
createdAt: string
updatedAt: string
}
export function toAdminFolder(dbFolder: DbWorkflowFolder): AdminFolder {
return {
id: dbFolder.id,
name: dbFolder.name,
parentId: dbFolder.parentId,
color: dbFolder.color,
sortOrder: dbFolder.sortOrder,
createdAt: dbFolder.createdAt.toISOString(),
updatedAt: dbFolder.updatedAt.toISOString(),
}
}
// =============================================================================
// Workflow Types
// =============================================================================
export interface AdminWorkflow {
id: string
name: string
description: string | null
color: string
workspaceId: string | null
folderId: string | null
isDeployed: boolean
deployedAt: string | null
runCount: number
lastRunAt: string | null
createdAt: string
updatedAt: string
}
export interface AdminWorkflowDetail extends AdminWorkflow {
blockCount: number
edgeCount: number
}
export function toAdminWorkflow(dbWorkflow: DbWorkflow): AdminWorkflow {
return {
id: dbWorkflow.id,
name: dbWorkflow.name,
description: dbWorkflow.description,
color: dbWorkflow.color,
workspaceId: dbWorkflow.workspaceId,
folderId: dbWorkflow.folderId,
isDeployed: dbWorkflow.isDeployed,
deployedAt: dbWorkflow.deployedAt?.toISOString() ?? null,
runCount: dbWorkflow.runCount,
lastRunAt: dbWorkflow.lastRunAt?.toISOString() ?? null,
createdAt: dbWorkflow.createdAt.toISOString(),
updatedAt: dbWorkflow.updatedAt.toISOString(),
}
}
// =============================================================================
// Workflow Variable Types
// =============================================================================
export type VariableType = 'string' | 'number' | 'boolean' | 'object' | 'array' | 'plain'
export interface WorkflowVariable {
id: string
name: string
type: VariableType
value: unknown
}
// =============================================================================
// Export/Import Types
// =============================================================================
export interface WorkflowExportState {
blocks: Record<string, BlockState>
edges: Edge[]
loops: Record<string, Loop>
parallels: Record<string, Parallel>
metadata?: {
name?: string
description?: string
color?: string
exportedAt?: string
}
variables?: WorkflowVariable[]
}
export interface WorkflowExportPayload {
version: '1.0'
exportedAt: string
workflow: {
id: string
name: string
description: string | null
color: string
workspaceId: string | null
folderId: string | null
}
state: WorkflowExportState
}
export interface FolderExportPayload {
id: string
name: string
parentId: string | null
}
export interface WorkspaceExportPayload {
version: '1.0'
exportedAt: string
workspace: {
id: string
name: string
}
workflows: Array<{
workflow: WorkflowExportPayload['workflow']
state: WorkflowExportState
}>
folders: FolderExportPayload[]
}
// =============================================================================
// Import Types
// =============================================================================
export interface WorkflowImportRequest {
workspaceId: string
folderId?: string
name?: string
workflow: WorkflowExportPayload | WorkflowExportState | string
}
export interface WorkspaceImportRequest {
workflows: Array<{
content: string | WorkflowExportPayload | WorkflowExportState
name?: string
folderPath?: string[]
}>
}
export interface ImportResult {
workflowId: string
name: string
success: boolean
error?: string
}
export interface WorkspaceImportResponse {
imported: number
failed: number
results: ImportResult[]
}
// =============================================================================
// Utility Functions
// =============================================================================
/**
* Parse workflow variables from database JSON format to array format.
* Handles both array and Record<string, Variable> formats.
*/
export function parseWorkflowVariables(
dbVariables: DbWorkflow['variables']
): WorkflowVariable[] | undefined {
if (!dbVariables) return undefined
try {
const varsObj = typeof dbVariables === 'string' ? JSON.parse(dbVariables) : dbVariables
if (Array.isArray(varsObj)) {
return varsObj.map((v) => ({
id: v.id,
name: v.name,
type: v.type,
value: v.value,
}))
}
if (typeof varsObj === 'object' && varsObj !== null) {
return Object.values(varsObj).map((v: unknown) => {
const variable = v as { id: string; name: string; type: VariableType; value: unknown }
return {
id: variable.id,
name: variable.name,
type: variable.type,
value: variable.value,
}
})
}
} catch {
// Ignore malformed variables JSON and fall through to undefined
}
return undefined
}
/**
* Extract workflow metadata from various export formats.
* Handles both full export payload and raw state formats.
*/
export function extractWorkflowMetadata(
workflowJson: unknown,
overrideName?: string
): { name: string; color: string; description: string } {
const defaults = {
name: overrideName || 'Imported Workflow',
color: '#3972F6',
description: 'Imported via Admin API',
}
if (!workflowJson || typeof workflowJson !== 'object') {
return defaults
}
const parsed = workflowJson as Record<string, unknown>
const name =
overrideName ||
getNestedString(parsed, 'workflow.name') ||
getNestedString(parsed, 'state.metadata.name') ||
getNestedString(parsed, 'metadata.name') ||
defaults.name
const color =
getNestedString(parsed, 'workflow.color') ||
getNestedString(parsed, 'state.metadata.color') ||
getNestedString(parsed, 'metadata.color') ||
defaults.color
const description =
getNestedString(parsed, 'workflow.description') ||
getNestedString(parsed, 'state.metadata.description') ||
getNestedString(parsed, 'metadata.description') ||
defaults.description
return { name, color, description }
}
/**
* Safely get a nested string value from an object.
*/
function getNestedString(obj: Record<string, unknown>, path: string): string | undefined {
const parts = path.split('.')
let current: unknown = obj
for (const part of parts) {
if (current === null || typeof current !== 'object') {
return undefined
}
current = (current as Record<string, unknown>)[part]
}
return typeof current === 'string' ? current : undefined
}
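parsePaginationParams clamps rather than rejects out-of-range values, so a bad query string still yields a usable page. A small illustration using the constants defined above:

// Illustrative only: shows how out-of-range query values are normalized by the helpers above.
const exampleUrl = new URL('https://example.com/api/v1/admin/users?limit=9999&offset=-5')
const { limit, offset } = parsePaginationParams(exampleUrl)
// limit === 250 (clamped to MAX_LIMIT), offset === 0 (negative values reset)
const meta = createPaginationMeta(600, limit, offset)
// meta.hasMore === true because 0 + 250 < 600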

View File

@@ -0,0 +1,46 @@
/**
* GET /api/v1/admin/users/[id]
*
* Get user details.
*
* Response: AdminSingleResponse<AdminUser>
*/
import { db } from '@sim/db'
import { user } from '@sim/db/schema'
import { eq } from 'drizzle-orm'
import { createLogger } from '@/lib/logs/console/logger'
import { withAdminAuthParams } from '@/app/api/v1/admin/middleware'
import {
internalErrorResponse,
notFoundResponse,
singleResponse,
} from '@/app/api/v1/admin/responses'
import { toAdminUser } from '@/app/api/v1/admin/types'
const logger = createLogger('AdminUserDetailAPI')
interface RouteParams {
id: string
}
export const GET = withAdminAuthParams<RouteParams>(async (request, context) => {
const { id: userId } = await context.params
try {
const [userData] = await db.select().from(user).where(eq(user.id, userId)).limit(1)
if (!userData) {
return notFoundResponse('User')
}
const data = toAdminUser(userData)
logger.info(`Admin API: Retrieved user ${userId}`)
return singleResponse(data)
} catch (error) {
logger.error('Admin API: Failed to get user', { error, userId })
return internalErrorResponse('Failed to get user')
}
})

View File

@@ -0,0 +1,49 @@
/**
* GET /api/v1/admin/users
*
* List all users with pagination.
*
* Query Parameters:
* - limit: number (default: 50, max: 250)
* - offset: number (default: 0)
*
* Response: AdminListResponse<AdminUser>
*/
import { db } from '@sim/db'
import { user } from '@sim/db/schema'
import { count } from 'drizzle-orm'
import { createLogger } from '@/lib/logs/console/logger'
import { withAdminAuth } from '@/app/api/v1/admin/middleware'
import { internalErrorResponse, listResponse } from '@/app/api/v1/admin/responses'
import {
type AdminUser,
createPaginationMeta,
parsePaginationParams,
toAdminUser,
} from '@/app/api/v1/admin/types'
const logger = createLogger('AdminUsersAPI')
export const GET = withAdminAuth(async (request) => {
const url = new URL(request.url)
const { limit, offset } = parsePaginationParams(url)
try {
const [countResult, users] = await Promise.all([
db.select({ total: count() }).from(user),
db.select().from(user).orderBy(user.name).limit(limit).offset(offset),
])
const total = countResult[0].total
const data: AdminUser[] = users.map(toAdminUser)
const pagination = createPaginationMeta(total, limit, offset)
logger.info(`Admin API: Listed ${data.length} users (total: ${total})`)
return listResponse(data, pagination)
} catch (error) {
logger.error('Admin API: Failed to list users', { error })
return internalErrorResponse('Failed to list users')
}
})

View File

@@ -0,0 +1,89 @@
/**
* GET /api/v1/admin/workflows/[id]/export
*
* Export a single workflow as JSON.
*
* Response: AdminSingleResponse<WorkflowExportPayload>
*/
import { db } from '@sim/db'
import { workflow } from '@sim/db/schema'
import { eq } from 'drizzle-orm'
import { createLogger } from '@/lib/logs/console/logger'
import { loadWorkflowFromNormalizedTables } from '@/lib/workflows/persistence/utils'
import { withAdminAuthParams } from '@/app/api/v1/admin/middleware'
import {
internalErrorResponse,
notFoundResponse,
singleResponse,
} from '@/app/api/v1/admin/responses'
import {
parseWorkflowVariables,
type WorkflowExportPayload,
type WorkflowExportState,
} from '@/app/api/v1/admin/types'
const logger = createLogger('AdminWorkflowExportAPI')
interface RouteParams {
id: string
}
export const GET = withAdminAuthParams<RouteParams>(async (request, context) => {
const { id: workflowId } = await context.params
try {
const [workflowData] = await db
.select()
.from(workflow)
.where(eq(workflow.id, workflowId))
.limit(1)
if (!workflowData) {
return notFoundResponse('Workflow')
}
const normalizedData = await loadWorkflowFromNormalizedTables(workflowId)
if (!normalizedData) {
return notFoundResponse('Workflow state')
}
const variables = parseWorkflowVariables(workflowData.variables)
const state: WorkflowExportState = {
blocks: normalizedData.blocks,
edges: normalizedData.edges,
loops: normalizedData.loops,
parallels: normalizedData.parallels,
metadata: {
name: workflowData.name,
description: workflowData.description ?? undefined,
color: workflowData.color,
exportedAt: new Date().toISOString(),
},
variables,
}
const exportPayload: WorkflowExportPayload = {
version: '1.0',
exportedAt: new Date().toISOString(),
workflow: {
id: workflowData.id,
name: workflowData.name,
description: workflowData.description,
color: workflowData.color,
workspaceId: workflowData.workspaceId,
folderId: workflowData.folderId,
},
state,
}
logger.info(`Admin API: Exported workflow ${workflowId}`)
return singleResponse(exportPayload)
} catch (error) {
logger.error('Admin API: Failed to export workflow', { error, workflowId })
return internalErrorResponse('Failed to export workflow')
}
})
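A hedged sketch of driving this endpoint from a script and saving the export payload to disk; base URL, key, and workflow id are placeholders:

// Hypothetical usage; all identifiers are placeholders.
import { writeFile } from 'node:fs/promises'

const BASE_URL = process.env.SIM_BASE_URL ?? 'http://localhost:3000'
const res = await fetch(`${BASE_URL}/api/v1/admin/workflows/your-workflow-id/export`, {
  headers: { 'x-admin-key': process.env.ADMIN_API_KEY ?? '' },
})
const { data } = await res.json() // AdminSingleResponse<WorkflowExportPayload>
await writeFile('workflow-export.json', JSON.stringify(data, null, 2))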

View File

@@ -0,0 +1,105 @@
/**
* GET /api/v1/admin/workflows/[id]
*
* Get workflow details including block and edge counts.
*
* Response: AdminSingleResponse<AdminWorkflowDetail>
*
* DELETE /api/v1/admin/workflows/[id]
*
* Delete a workflow and all its associated data.
*
* Response: { success: true, workflowId: string }
*/
import { db } from '@sim/db'
import { workflow, workflowBlocks, workflowEdges, workflowSchedule } from '@sim/db/schema'
import { count, eq } from 'drizzle-orm'
import { NextResponse } from 'next/server'
import { createLogger } from '@/lib/logs/console/logger'
import { withAdminAuthParams } from '@/app/api/v1/admin/middleware'
import {
internalErrorResponse,
notFoundResponse,
singleResponse,
} from '@/app/api/v1/admin/responses'
import { type AdminWorkflowDetail, toAdminWorkflow } from '@/app/api/v1/admin/types'
const logger = createLogger('AdminWorkflowDetailAPI')
interface RouteParams {
id: string
}
export const GET = withAdminAuthParams<RouteParams>(async (request, context) => {
const { id: workflowId } = await context.params
try {
const [workflowData] = await db
.select()
.from(workflow)
.where(eq(workflow.id, workflowId))
.limit(1)
if (!workflowData) {
return notFoundResponse('Workflow')
}
const [blockCountResult, edgeCountResult] = await Promise.all([
db
.select({ count: count() })
.from(workflowBlocks)
.where(eq(workflowBlocks.workflowId, workflowId)),
db
.select({ count: count() })
.from(workflowEdges)
.where(eq(workflowEdges.workflowId, workflowId)),
])
const data: AdminWorkflowDetail = {
...toAdminWorkflow(workflowData),
blockCount: blockCountResult[0].count,
edgeCount: edgeCountResult[0].count,
}
logger.info(`Admin API: Retrieved workflow ${workflowId}`)
return singleResponse(data)
} catch (error) {
logger.error('Admin API: Failed to get workflow', { error, workflowId })
return internalErrorResponse('Failed to get workflow')
}
})
export const DELETE = withAdminAuthParams<RouteParams>(async (request, context) => {
const { id: workflowId } = await context.params
try {
const [workflowData] = await db
.select({ id: workflow.id, name: workflow.name })
.from(workflow)
.where(eq(workflow.id, workflowId))
.limit(1)
if (!workflowData) {
return notFoundResponse('Workflow')
}
await db.transaction(async (tx) => {
await Promise.all([
tx.delete(workflowBlocks).where(eq(workflowBlocks.workflowId, workflowId)),
tx.delete(workflowEdges).where(eq(workflowEdges.workflowId, workflowId)),
tx.delete(workflowSchedule).where(eq(workflowSchedule.workflowId, workflowId)),
])
await tx.delete(workflow).where(eq(workflow.id, workflowId))
})
logger.info(`Admin API: Deleted workflow ${workflowId} (${workflowData.name})`)
return NextResponse.json({ success: true, workflowId })
} catch (error) {
logger.error('Admin API: Failed to delete workflow', { error, workflowId })
return internalErrorResponse('Failed to delete workflow')
}
})

View File

@@ -0,0 +1,153 @@
/**
* POST /api/v1/admin/workflows/import
*
* Import a single workflow into a workspace.
*
* Request Body:
* {
* workspaceId: string, // Required: target workspace
* folderId?: string, // Optional: target folder
* name?: string, // Optional: override workflow name
* workflow: object | string // The workflow JSON (from export or raw state)
* }
*
* Response: { workflowId: string, name: string, success: true }
*/
import { db } from '@sim/db'
import { workflow, workspace } from '@sim/db/schema'
import { eq } from 'drizzle-orm'
import { NextResponse } from 'next/server'
import { createLogger } from '@/lib/logs/console/logger'
import { saveWorkflowToNormalizedTables } from '@/lib/workflows/persistence/utils'
import { withAdminAuth } from '@/app/api/v1/admin/middleware'
import {
badRequestResponse,
internalErrorResponse,
notFoundResponse,
} from '@/app/api/v1/admin/responses'
import {
extractWorkflowMetadata,
type WorkflowImportRequest,
type WorkflowVariable,
} from '@/app/api/v1/admin/types'
import { parseWorkflowJson } from '@/stores/workflows/json/importer'
const logger = createLogger('AdminWorkflowImportAPI')
interface ImportSuccessResponse {
workflowId: string
name: string
success: true
}
export const POST = withAdminAuth(async (request) => {
try {
const body = (await request.json()) as WorkflowImportRequest
if (!body.workspaceId) {
return badRequestResponse('workspaceId is required')
}
if (!body.workflow) {
return badRequestResponse('workflow is required')
}
const { workspaceId, folderId, name: overrideName } = body
const [workspaceData] = await db
.select({ id: workspace.id, ownerId: workspace.ownerId })
.from(workspace)
.where(eq(workspace.id, workspaceId))
.limit(1)
if (!workspaceData) {
return notFoundResponse('Workspace')
}
const workflowContent =
typeof body.workflow === 'string' ? body.workflow : JSON.stringify(body.workflow)
const { data: workflowData, errors } = parseWorkflowJson(workflowContent)
if (!workflowData || errors.length > 0) {
return badRequestResponse(`Invalid workflow: ${errors.join(', ')}`)
}
const parsedWorkflow =
typeof body.workflow === 'string'
? (() => {
try {
return JSON.parse(body.workflow)
} catch {
return null
}
})()
: body.workflow
const {
name: workflowName,
color: workflowColor,
description: workflowDescription,
} = extractWorkflowMetadata(parsedWorkflow, overrideName)
const workflowId = crypto.randomUUID()
const now = new Date()
await db.insert(workflow).values({
id: workflowId,
userId: workspaceData.ownerId,
workspaceId,
folderId: folderId || null,
name: workflowName,
description: workflowDescription,
color: workflowColor,
lastSynced: now,
createdAt: now,
updatedAt: now,
isDeployed: false,
runCount: 0,
variables: {},
})
const saveResult = await saveWorkflowToNormalizedTables(workflowId, workflowData)
if (!saveResult.success) {
await db.delete(workflow).where(eq(workflow.id, workflowId))
return internalErrorResponse(`Failed to save workflow state: ${saveResult.error}`)
}
if (workflowData.variables && Array.isArray(workflowData.variables)) {
const variablesRecord: Record<string, WorkflowVariable> = {}
workflowData.variables.forEach((v) => {
const varId = v.id || crypto.randomUUID()
variablesRecord[varId] = {
id: varId,
name: v.name,
type: v.type || 'string',
value: v.value,
}
})
await db
.update(workflow)
.set({ variables: variablesRecord, updatedAt: new Date() })
.where(eq(workflow.id, workflowId))
}
logger.info(
`Admin API: Imported workflow ${workflowId} (${workflowName}) into workspace ${workspaceId}`
)
const response: ImportSuccessResponse = {
workflowId,
name: workflowName,
success: true,
}
return NextResponse.json(response)
} catch (error) {
logger.error('Admin API: Failed to import workflow', { error })
return internalErrorResponse('Failed to import workflow')
}
})
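A hedged example of the corresponding request, matching the WorkflowImportRequest shape documented at the top of this file; the ids and the export payload are placeholders:

// Hypothetical import call; exportPayload stands in for JSON obtained from the export endpoint.
const BASE_URL = process.env.SIM_BASE_URL ?? 'http://localhost:3000'
const res = await fetch(`${BASE_URL}/api/v1/admin/workflows/import`, {
  method: 'POST',
  headers: {
    'x-admin-key': process.env.ADMIN_API_KEY ?? '',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    workspaceId: 'your-workspace-id',
    name: 'Re-imported workflow', // optional override
    workflow: exportPayload, // WorkflowExportPayload, WorkflowExportState, or a JSON string
  }),
})
const { workflowId, name, success } = await res.json()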

View File

@@ -0,0 +1,49 @@
/**
* GET /api/v1/admin/workflows
*
* List all workflows across all workspaces with pagination.
*
* Query Parameters:
* - limit: number (default: 50, max: 250)
* - offset: number (default: 0)
*
* Response: AdminListResponse<AdminWorkflow>
*/
import { db } from '@sim/db'
import { workflow } from '@sim/db/schema'
import { count } from 'drizzle-orm'
import { createLogger } from '@/lib/logs/console/logger'
import { withAdminAuth } from '@/app/api/v1/admin/middleware'
import { internalErrorResponse, listResponse } from '@/app/api/v1/admin/responses'
import {
type AdminWorkflow,
createPaginationMeta,
parsePaginationParams,
toAdminWorkflow,
} from '@/app/api/v1/admin/types'
const logger = createLogger('AdminWorkflowsAPI')
export const GET = withAdminAuth(async (request) => {
const url = new URL(request.url)
const { limit, offset } = parsePaginationParams(url)
try {
const [countResult, workflows] = await Promise.all([
db.select({ total: count() }).from(workflow),
db.select().from(workflow).orderBy(workflow.name).limit(limit).offset(offset),
])
const total = countResult[0].total
const data: AdminWorkflow[] = workflows.map(toAdminWorkflow)
const pagination = createPaginationMeta(total, limit, offset)
logger.info(`Admin API: Listed ${data.length} workflows (total: ${total})`)
return listResponse(data, pagination)
} catch (error) {
logger.error('Admin API: Failed to list workflows', { error })
return internalErrorResponse('Failed to list workflows')
}
})

View File

@@ -0,0 +1,164 @@
/**
* GET /api/v1/admin/workspaces/[id]/export
*
* Export an entire workspace as a ZIP file or JSON.
*
* Query Parameters:
* - format: 'zip' (default) or 'json'
*
* Response:
* - ZIP file download (Content-Type: application/zip)
* - JSON: WorkspaceExportPayload
*/
import { db } from '@sim/db'
import { workflow, workflowFolder, workspace } from '@sim/db/schema'
import { eq } from 'drizzle-orm'
import { NextResponse } from 'next/server'
import { createLogger } from '@/lib/logs/console/logger'
import { exportWorkspaceToZip } from '@/lib/workflows/operations/import-export'
import { loadWorkflowFromNormalizedTables } from '@/lib/workflows/persistence/utils'
import { withAdminAuthParams } from '@/app/api/v1/admin/middleware'
import {
internalErrorResponse,
notFoundResponse,
singleResponse,
} from '@/app/api/v1/admin/responses'
import {
type FolderExportPayload,
parseWorkflowVariables,
type WorkflowExportState,
type WorkspaceExportPayload,
} from '@/app/api/v1/admin/types'
const logger = createLogger('AdminWorkspaceExportAPI')
interface RouteParams {
id: string
}
export const GET = withAdminAuthParams<RouteParams>(async (request, context) => {
const { id: workspaceId } = await context.params
const url = new URL(request.url)
const format = url.searchParams.get('format') || 'zip'
try {
const [workspaceData] = await db
.select({ id: workspace.id, name: workspace.name })
.from(workspace)
.where(eq(workspace.id, workspaceId))
.limit(1)
if (!workspaceData) {
return notFoundResponse('Workspace')
}
const workflows = await db.select().from(workflow).where(eq(workflow.workspaceId, workspaceId))
const folders = await db
.select()
.from(workflowFolder)
.where(eq(workflowFolder.workspaceId, workspaceId))
const workflowExports: Array<{
workflow: WorkspaceExportPayload['workflows'][number]['workflow']
state: WorkflowExportState
}> = []
for (const wf of workflows) {
try {
const normalizedData = await loadWorkflowFromNormalizedTables(wf.id)
if (!normalizedData) {
logger.warn(`Skipping workflow ${wf.id} - no normalized data found`)
continue
}
const variables = parseWorkflowVariables(wf.variables)
const state: WorkflowExportState = {
blocks: normalizedData.blocks,
edges: normalizedData.edges,
loops: normalizedData.loops,
parallels: normalizedData.parallels,
metadata: {
name: wf.name,
description: wf.description ?? undefined,
color: wf.color,
exportedAt: new Date().toISOString(),
},
variables,
}
workflowExports.push({
workflow: {
id: wf.id,
name: wf.name,
description: wf.description,
color: wf.color,
workspaceId: wf.workspaceId,
folderId: wf.folderId,
},
state,
})
} catch (error) {
logger.error(`Failed to load workflow ${wf.id}:`, { error })
}
}
const folderExports: FolderExportPayload[] = folders.map((f) => ({
id: f.id,
name: f.name,
parentId: f.parentId,
}))
logger.info(
`Admin API: Exporting workspace ${workspaceId} with ${workflowExports.length} workflows and ${folderExports.length} folders`
)
if (format === 'json') {
const exportPayload: WorkspaceExportPayload = {
version: '1.0',
exportedAt: new Date().toISOString(),
workspace: {
id: workspaceData.id,
name: workspaceData.name,
},
workflows: workflowExports,
folders: folderExports,
}
return singleResponse(exportPayload)
}
const zipWorkflows = workflowExports.map((wf) => ({
workflow: {
id: wf.workflow.id,
name: wf.workflow.name,
description: wf.workflow.description ?? undefined,
color: wf.workflow.color ?? undefined,
folderId: wf.workflow.folderId,
},
state: wf.state,
variables: wf.state.variables,
}))
const zipBlob = await exportWorkspaceToZip(workspaceData.name, zipWorkflows, folderExports)
const arrayBuffer = await zipBlob.arrayBuffer()
const sanitizedName = workspaceData.name.replace(/[^a-z0-9-_]/gi, '-')
const filename = `${sanitizedName}-${new Date().toISOString().split('T')[0]}.zip`
return new NextResponse(arrayBuffer, {
status: 200,
headers: {
'Content-Type': 'application/zip',
'Content-Disposition': `attachment; filename="${filename}"`,
'Content-Length': arrayBuffer.byteLength.toString(),
},
})
} catch (error) {
logger.error('Admin API: Failed to export workspace', { error, workspaceId })
return internalErrorResponse('Failed to export workspace')
}
})
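A hedged sketch of requesting the JSON variant; omitting format (or passing format=zip) returns the ZIP attachment instead. Base URL and workspace id are placeholders:

// Hypothetical usage; base URL and workspace id are placeholders.
const BASE_URL = process.env.SIM_BASE_URL ?? 'http://localhost:3000'
const res = await fetch(
  `${BASE_URL}/api/v1/admin/workspaces/your-workspace-id/export?format=json`,
  { headers: { 'x-admin-key': process.env.ADMIN_API_KEY ?? '' } }
)
const { data } = await res.json() // AdminSingleResponse<WorkspaceExportPayload>
console.log(`Exported ${data.workflows.length} workflows and ${data.folders.length} folders`)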

View File

@@ -0,0 +1,75 @@
/**
* GET /api/v1/admin/workspaces/[id]/folders
*
* List all folders in a workspace with pagination.
*
* Query Parameters:
* - limit: number (default: 50, max: 250)
* - offset: number (default: 0)
*
* Response: AdminListResponse<AdminFolder>
*/
import { db } from '@sim/db'
import { workflowFolder, workspace } from '@sim/db/schema'
import { count, eq } from 'drizzle-orm'
import { createLogger } from '@/lib/logs/console/logger'
import { withAdminAuthParams } from '@/app/api/v1/admin/middleware'
import { internalErrorResponse, listResponse, notFoundResponse } from '@/app/api/v1/admin/responses'
import {
type AdminFolder,
createPaginationMeta,
parsePaginationParams,
toAdminFolder,
} from '@/app/api/v1/admin/types'
const logger = createLogger('AdminWorkspaceFoldersAPI')
interface RouteParams {
id: string
}
export const GET = withAdminAuthParams<RouteParams>(async (request, context) => {
const { id: workspaceId } = await context.params
const url = new URL(request.url)
const { limit, offset } = parsePaginationParams(url)
try {
const [workspaceData] = await db
.select({ id: workspace.id })
.from(workspace)
.where(eq(workspace.id, workspaceId))
.limit(1)
if (!workspaceData) {
return notFoundResponse('Workspace')
}
const [countResult, folders] = await Promise.all([
db
.select({ total: count() })
.from(workflowFolder)
.where(eq(workflowFolder.workspaceId, workspaceId)),
db
.select()
.from(workflowFolder)
.where(eq(workflowFolder.workspaceId, workspaceId))
.orderBy(workflowFolder.sortOrder, workflowFolder.name)
.limit(limit)
.offset(offset),
])
const total = countResult[0].total
const data: AdminFolder[] = folders.map(toAdminFolder)
const pagination = createPaginationMeta(total, limit, offset)
logger.info(
`Admin API: Listed ${data.length} folders in workspace ${workspaceId} (total: ${total})`
)
return listResponse(data, pagination)
} catch (error) {
logger.error('Admin API: Failed to list workspace folders', { error, workspaceId })
return internalErrorResponse('Failed to list folders')
}
})

View File

@@ -0,0 +1,301 @@
/**
* POST /api/v1/admin/workspaces/[id]/import
*
* Import workflows into a workspace from a ZIP file or JSON.
*
* Content-Type:
* - application/zip or multipart/form-data (with 'file' field) for ZIP upload
* - application/json for JSON payload
*
* JSON Body:
* {
* workflows: Array<{
* content: string | object, // Workflow JSON
* name?: string, // Override name
* folderPath?: string[] // Folder path to create
* }>
* }
*
* Query Parameters:
* - createFolders: 'true' (default) or 'false'
* - rootFolderName: optional name for root import folder
*
* Response: WorkspaceImportResponse
*/
import { db } from '@sim/db'
import { workflow, workflowFolder, workspace } from '@sim/db/schema'
import { eq } from 'drizzle-orm'
import { NextResponse } from 'next/server'
import { createLogger } from '@/lib/logs/console/logger'
import {
extractWorkflowName,
extractWorkflowsFromZip,
} from '@/lib/workflows/operations/import-export'
import { saveWorkflowToNormalizedTables } from '@/lib/workflows/persistence/utils'
import { withAdminAuthParams } from '@/app/api/v1/admin/middleware'
import {
badRequestResponse,
internalErrorResponse,
notFoundResponse,
} from '@/app/api/v1/admin/responses'
import {
extractWorkflowMetadata,
type ImportResult,
type WorkflowVariable,
type WorkspaceImportRequest,
type WorkspaceImportResponse,
} from '@/app/api/v1/admin/types'
import { parseWorkflowJson } from '@/stores/workflows/json/importer'
const logger = createLogger('AdminWorkspaceImportAPI')
interface RouteParams {
id: string
}
interface ParsedWorkflow {
content: string
name: string
folderPath: string[]
}
export const POST = withAdminAuthParams<RouteParams>(async (request, context) => {
const { id: workspaceId } = await context.params
const url = new URL(request.url)
const createFolders = url.searchParams.get('createFolders') !== 'false'
const rootFolderName = url.searchParams.get('rootFolderName')
try {
const [workspaceData] = await db
.select({ id: workspace.id, ownerId: workspace.ownerId })
.from(workspace)
.where(eq(workspace.id, workspaceId))
.limit(1)
if (!workspaceData) {
return notFoundResponse('Workspace')
}
const contentType = request.headers.get('content-type') || ''
let workflowsToImport: ParsedWorkflow[] = []
if (contentType.includes('application/json')) {
const body = (await request.json()) as WorkspaceImportRequest
if (!body.workflows || !Array.isArray(body.workflows)) {
return badRequestResponse('Invalid JSON body. Expected { workflows: [...] }')
}
workflowsToImport = body.workflows.map((w) => ({
content: typeof w.content === 'string' ? w.content : JSON.stringify(w.content),
name: w.name || 'Imported Workflow',
folderPath: w.folderPath || [],
}))
} else if (
contentType.includes('application/zip') ||
contentType.includes('multipart/form-data')
) {
let zipBuffer: ArrayBuffer
if (contentType.includes('multipart/form-data')) {
const formData = await request.formData()
const file = formData.get('file') as File | null
if (!file) {
return badRequestResponse('No file provided in form data. Use field name "file".')
}
zipBuffer = await file.arrayBuffer()
} else {
zipBuffer = await request.arrayBuffer()
}
const blob = new Blob([zipBuffer], { type: 'application/zip' })
const file = new File([blob], 'import.zip', { type: 'application/zip' })
const { workflows } = await extractWorkflowsFromZip(file)
workflowsToImport = workflows
} else {
return badRequestResponse(
'Unsupported Content-Type. Use application/json or application/zip.'
)
}
if (workflowsToImport.length === 0) {
return badRequestResponse('No workflows found to import')
}
let rootFolderId: string | undefined
if (rootFolderName && createFolders) {
rootFolderId = crypto.randomUUID()
await db.insert(workflowFolder).values({
id: rootFolderId,
name: rootFolderName,
userId: workspaceData.ownerId,
workspaceId,
parentId: null,
createdAt: new Date(),
updatedAt: new Date(),
})
}
const folderMap = new Map<string, string>()
const results: ImportResult[] = []
for (const wf of workflowsToImport) {
const result = await importSingleWorkflow(
wf,
workspaceId,
workspaceData.ownerId,
createFolders,
rootFolderId,
folderMap
)
results.push(result)
if (result.success) {
logger.info(`Admin API: Imported workflow ${result.workflowId} (${result.name})`)
} else {
logger.warn(`Admin API: Failed to import workflow ${result.name}: ${result.error}`)
}
}
const imported = results.filter((r) => r.success).length
const failed = results.filter((r) => !r.success).length
logger.info(`Admin API: Import complete - ${imported} succeeded, ${failed} failed`)
const response: WorkspaceImportResponse = { imported, failed, results }
return NextResponse.json(response)
} catch (error) {
logger.error('Admin API: Failed to import into workspace', { error, workspaceId })
return internalErrorResponse('Failed to import workflows')
}
})
async function importSingleWorkflow(
wf: ParsedWorkflow,
workspaceId: string,
ownerId: string,
createFolders: boolean,
rootFolderId: string | undefined,
folderMap: Map<string, string>
): Promise<ImportResult> {
try {
const { data: workflowData, errors } = parseWorkflowJson(wf.content)
if (!workflowData || errors.length > 0) {
return {
workflowId: '',
name: wf.name,
success: false,
error: `Parse error: ${errors.join(', ')}`,
}
}
const workflowName = extractWorkflowName(wf.content, wf.name)
let targetFolderId: string | null = rootFolderId || null
if (createFolders && wf.folderPath.length > 0) {
let parentId = rootFolderId || null
for (let i = 0; i < wf.folderPath.length; i++) {
const pathSegment = wf.folderPath.slice(0, i + 1).join('/')
const fullPath = rootFolderId ? `root/${pathSegment}` : pathSegment
if (!folderMap.has(fullPath)) {
const folderId = crypto.randomUUID()
await db.insert(workflowFolder).values({
id: folderId,
name: wf.folderPath[i],
userId: ownerId,
workspaceId,
parentId,
createdAt: new Date(),
updatedAt: new Date(),
})
folderMap.set(fullPath, folderId)
parentId = folderId
} else {
parentId = folderMap.get(fullPath)!
}
}
const fullFolderPath = rootFolderId
? `root/${wf.folderPath.join('/')}`
: wf.folderPath.join('/')
targetFolderId = folderMap.get(fullFolderPath) || parentId
}
const parsedContent = (() => {
try {
return JSON.parse(wf.content)
} catch {
return null
}
})()
const { color: workflowColor } = extractWorkflowMetadata(parsedContent)
const workflowId = crypto.randomUUID()
const now = new Date()
await db.insert(workflow).values({
id: workflowId,
userId: ownerId,
workspaceId,
folderId: targetFolderId,
name: workflowName,
description: workflowData.metadata?.description || 'Imported via Admin API',
color: workflowColor,
lastSynced: now,
createdAt: now,
updatedAt: now,
isDeployed: false,
runCount: 0,
variables: {},
})
const saveResult = await saveWorkflowToNormalizedTables(workflowId, workflowData)
if (!saveResult.success) {
await db.delete(workflow).where(eq(workflow.id, workflowId))
return {
workflowId: '',
name: workflowName,
success: false,
error: `Failed to save state: ${saveResult.error}`,
}
}
if (workflowData.variables && Array.isArray(workflowData.variables)) {
const variablesRecord: Record<string, WorkflowVariable> = {}
workflowData.variables.forEach((v) => {
const varId = v.id || crypto.randomUUID()
variablesRecord[varId] = {
id: varId,
name: v.name,
type: v.type || 'string',
value: v.value,
}
})
await db
.update(workflow)
.set({ variables: variablesRecord, updatedAt: new Date() })
.where(eq(workflow.id, workflowId))
}
return {
workflowId,
name: workflowName,
success: true,
}
} catch (error) {
return {
workflowId: '',
name: wf.name,
success: false,
error: error instanceof Error ? error.message : 'Unknown error',
}
}
}
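A hedged sketch of re-importing a previously exported ZIP via multipart upload, with folder creation enabled and a named root folder; the file path, workspace id, and folder name are placeholders:

// Hypothetical multipart upload against the route above; identifiers are placeholders.
import { readFile } from 'node:fs/promises'

const BASE_URL = process.env.SIM_BASE_URL ?? 'http://localhost:3000'
const zipBytes = await readFile('workspace-export.zip')
const form = new FormData()
form.append('file', new Blob([zipBytes], { type: 'application/zip' }), 'workspace-export.zip')

const res = await fetch(
  `${BASE_URL}/api/v1/admin/workspaces/your-workspace-id/import?createFolders=true&rootFolderName=Imported`,
  { method: 'POST', headers: { 'x-admin-key': process.env.ADMIN_API_KEY ?? '' }, body: form }
)
const { imported, failed, results } = await res.json() // WorkspaceImportResponse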

View File

@@ -0,0 +1,62 @@
/**
* GET /api/v1/admin/workspaces/[id]
*
* Get workspace details including workflow and folder counts.
*
* Response: AdminSingleResponse<AdminWorkspaceDetail>
*/
import { db } from '@sim/db'
import { workflow, workflowFolder, workspace } from '@sim/db/schema'
import { count, eq } from 'drizzle-orm'
import { createLogger } from '@/lib/logs/console/logger'
import { withAdminAuthParams } from '@/app/api/v1/admin/middleware'
import {
internalErrorResponse,
notFoundResponse,
singleResponse,
} from '@/app/api/v1/admin/responses'
import { type AdminWorkspaceDetail, toAdminWorkspace } from '@/app/api/v1/admin/types'
const logger = createLogger('AdminWorkspaceDetailAPI')
interface RouteParams {
id: string
}
export const GET = withAdminAuthParams<RouteParams>(async (request, context) => {
const { id: workspaceId } = await context.params
try {
const [workspaceData] = await db
.select()
.from(workspace)
.where(eq(workspace.id, workspaceId))
.limit(1)
if (!workspaceData) {
return notFoundResponse('Workspace')
}
const [workflowCountResult, folderCountResult] = await Promise.all([
db.select({ count: count() }).from(workflow).where(eq(workflow.workspaceId, workspaceId)),
db
.select({ count: count() })
.from(workflowFolder)
.where(eq(workflowFolder.workspaceId, workspaceId)),
])
const data: AdminWorkspaceDetail = {
...toAdminWorkspace(workspaceData),
workflowCount: workflowCountResult[0].count,
folderCount: folderCountResult[0].count,
}
logger.info(`Admin API: Retrieved workspace ${workspaceId}`)
return singleResponse(data)
} catch (error) {
logger.error('Admin API: Failed to get workspace', { error, workspaceId })
return internalErrorResponse('Failed to get workspace')
}
})

View File

@@ -0,0 +1,129 @@
/**
* GET /api/v1/admin/workspaces/[id]/workflows
*
* List all workflows in a workspace with pagination.
*
* Query Parameters:
* - limit: number (default: 50, max: 250)
* - offset: number (default: 0)
*
* Response: AdminListResponse<AdminWorkflow>
*
* DELETE /api/v1/admin/workspaces/[id]/workflows
*
* Delete all workflows in a workspace (clean slate for reimport).
*
* Response: { success: true, deleted: number }
*/
import { db } from '@sim/db'
import {
workflow,
workflowBlocks,
workflowEdges,
workflowSchedule,
workspace,
} from '@sim/db/schema'
import { count, eq, inArray } from 'drizzle-orm'
import { NextResponse } from 'next/server'
import { createLogger } from '@/lib/logs/console/logger'
import { withAdminAuthParams } from '@/app/api/v1/admin/middleware'
import { internalErrorResponse, listResponse, notFoundResponse } from '@/app/api/v1/admin/responses'
import {
type AdminWorkflow,
createPaginationMeta,
parsePaginationParams,
toAdminWorkflow,
} from '@/app/api/v1/admin/types'
const logger = createLogger('AdminWorkspaceWorkflowsAPI')
interface RouteParams {
id: string
}
export const GET = withAdminAuthParams<RouteParams>(async (request, context) => {
const { id: workspaceId } = await context.params
const url = new URL(request.url)
const { limit, offset } = parsePaginationParams(url)
try {
const [workspaceData] = await db
.select({ id: workspace.id })
.from(workspace)
.where(eq(workspace.id, workspaceId))
.limit(1)
if (!workspaceData) {
return notFoundResponse('Workspace')
}
const [countResult, workflows] = await Promise.all([
db.select({ total: count() }).from(workflow).where(eq(workflow.workspaceId, workspaceId)),
db
.select()
.from(workflow)
.where(eq(workflow.workspaceId, workspaceId))
.orderBy(workflow.name)
.limit(limit)
.offset(offset),
])
const total = countResult[0].total
const data: AdminWorkflow[] = workflows.map(toAdminWorkflow)
const pagination = createPaginationMeta(total, limit, offset)
logger.info(
`Admin API: Listed ${data.length} workflows in workspace ${workspaceId} (total: ${total})`
)
return listResponse(data, pagination)
} catch (error) {
logger.error('Admin API: Failed to list workspace workflows', { error, workspaceId })
return internalErrorResponse('Failed to list workflows')
}
})
export const DELETE = withAdminAuthParams<RouteParams>(async (request, context) => {
const { id: workspaceId } = await context.params
try {
const [workspaceData] = await db
.select({ id: workspace.id })
.from(workspace)
.where(eq(workspace.id, workspaceId))
.limit(1)
if (!workspaceData) {
return notFoundResponse('Workspace')
}
const workflowsToDelete = await db
.select({ id: workflow.id })
.from(workflow)
.where(eq(workflow.workspaceId, workspaceId))
if (workflowsToDelete.length === 0) {
return NextResponse.json({ success: true, deleted: 0 })
}
const workflowIds = workflowsToDelete.map((w) => w.id)
await db.transaction(async (tx) => {
await Promise.all([
tx.delete(workflowBlocks).where(inArray(workflowBlocks.workflowId, workflowIds)),
tx.delete(workflowEdges).where(inArray(workflowEdges.workflowId, workflowIds)),
tx.delete(workflowSchedule).where(inArray(workflowSchedule.workflowId, workflowIds)),
])
await tx.delete(workflow).where(eq(workflow.workspaceId, workspaceId))
})
logger.info(`Admin API: Deleted ${workflowIds.length} workflows from workspace ${workspaceId}`)
return NextResponse.json({ success: true, deleted: workflowIds.length })
} catch (error) {
logger.error('Admin API: Failed to delete workspace workflows', { error, workspaceId })
return internalErrorResponse('Failed to delete workflows')
}
})

View File

@@ -0,0 +1,49 @@
/**
* GET /api/v1/admin/workspaces
*
* List all workspaces with pagination.
*
* Query Parameters:
* - limit: number (default: 50, max: 250)
* - offset: number (default: 0)
*
* Response: AdminListResponse<AdminWorkspace>
*/
import { db } from '@sim/db'
import { workspace } from '@sim/db/schema'
import { count } from 'drizzle-orm'
import { createLogger } from '@/lib/logs/console/logger'
import { withAdminAuth } from '@/app/api/v1/admin/middleware'
import { internalErrorResponse, listResponse } from '@/app/api/v1/admin/responses'
import {
type AdminWorkspace,
createPaginationMeta,
parsePaginationParams,
toAdminWorkspace,
} from '@/app/api/v1/admin/types'
const logger = createLogger('AdminWorkspacesAPI')
export const GET = withAdminAuth(async (request) => {
const url = new URL(request.url)
const { limit, offset } = parsePaginationParams(url)
try {
const [countResult, workspaces] = await Promise.all([
db.select({ total: count() }).from(workspace),
db.select().from(workspace).orderBy(workspace.name).limit(limit).offset(offset),
])
const total = countResult[0].total
const data: AdminWorkspace[] = workspaces.map(toAdminWorkspace)
const pagination = createPaginationMeta(total, limit, offset)
logger.info(`Admin API: Listed ${data.length} workspaces (total: ${total})`)
return listResponse(data, pagination)
} catch (error) {
logger.error('Admin API: Failed to list workspaces', { error })
return internalErrorResponse('Failed to list workspaces')
}
})

View File

@@ -34,7 +34,7 @@ const ExecuteWorkflowSchema = z.object({
stream: z.boolean().optional(),
useDraftState: z.boolean().optional(),
input: z.any().optional(),
// Optional workflow state override (for executing diff workflows)
isClientSession: z.boolean().optional(),
workflowStateOverride: z
.object({
blocks: z.record(z.any()),
@@ -92,16 +92,17 @@ export async function executeWorkflow(
workflowId,
workspaceId: workflow.workspaceId,
userId: actorUserId,
workflowUserId: workflow.userId,
triggerType,
useDraftState: false,
startTime: new Date().toISOString(),
isClientSession: false,
}
const snapshot = new ExecutionSnapshot(
metadata,
workflow,
input,
{},
workflow.variables || {},
streamConfig?.selectedOutputs || []
)
@@ -329,6 +330,7 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
stream: streamParam,
useDraftState,
input: validatedInput,
isClientSession = false,
workflowStateOverride,
} = validation.data
@@ -503,9 +505,12 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
workflowId,
workspaceId: workflow.workspaceId ?? undefined,
userId: actorUserId,
sessionUserId: isClientSession ? userId : undefined,
workflowUserId: workflow.userId,
triggerType,
useDraftState: shouldUseDraftState,
startTime: new Date().toISOString(),
isClientSession,
workflowStateOverride: effectiveWorkflowStateOverride,
}
@@ -513,7 +518,6 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
metadata,
workflow,
processedInput,
{},
workflow.variables || {},
selectedOutputs
)
@@ -769,9 +773,12 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
workflowId,
workspaceId: workflow.workspaceId ?? undefined,
userId: actorUserId,
sessionUserId: isClientSession ? userId : undefined,
workflowUserId: workflow.userId,
triggerType,
useDraftState: shouldUseDraftState,
startTime: new Date().toISOString(),
isClientSession,
workflowStateOverride: effectiveWorkflowStateOverride,
}
@@ -779,7 +786,6 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
metadata,
workflow,
processedInput,
{},
workflow.variables || {},
selectedOutputs
)
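
The hunks above add an `isClientSession` flag to `ExecuteWorkflowSchema` and thread it into the execution metadata, populating `sessionUserId` from the session user when the flag is set. A minimal client-side sketch of sending the flag, assuming a conventional execute route path that is not shown in this diff:

// Sketch only: the endpoint path is an assumption; the body fields come from
// ExecuteWorkflowSchema as shown in the hunk above.
async function executeAsClientSession(workflowId: string, input: unknown) {
  const res = await fetch(`/api/workflows/${workflowId}/execute`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // isClientSession: true marks the call as coming from a browser session,
    // so the server sets metadata.sessionUserId instead of leaving it undefined.
    body: JSON.stringify({ isClientSession: true, useDraftState: true, input }),
  })
  if (!res.ok) throw new Error(`Execution failed: ${res.status}`)
  return res.json()
}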

View File

@@ -1,221 +0,0 @@
import { db } from '@sim/db'
import { permissions, workflow, workflowLogWebhook } from '@sim/db/schema'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { encryptSecret } from '@/lib/core/security/encryption'
import { createLogger } from '@/lib/logs/console/logger'
const logger = createLogger('WorkflowLogWebhookUpdate')
type WebhookUpdatePayload = Pick<
typeof workflowLogWebhook.$inferInsert,
| 'url'
| 'includeFinalOutput'
| 'includeTraceSpans'
| 'includeRateLimits'
| 'includeUsageData'
| 'levelFilter'
| 'triggerFilter'
| 'secret'
| 'updatedAt'
>
const UpdateWebhookSchema = z.object({
url: z.string().url('Invalid webhook URL'),
secret: z.string().optional(),
includeFinalOutput: z.boolean(),
includeTraceSpans: z.boolean(),
includeRateLimits: z.boolean(),
includeUsageData: z.boolean(),
levelFilter: z.array(z.enum(['info', 'error'])),
triggerFilter: z.array(z.enum(['api', 'webhook', 'schedule', 'manual', 'chat'])),
})
export async function PUT(
request: NextRequest,
{ params }: { params: Promise<{ id: string; webhookId: string }> }
) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { id: workflowId, webhookId } = await params
const userId = session.user.id
// Check if user has access to the workflow
const hasAccess = await db
.select({ id: workflow.id })
.from(workflow)
.innerJoin(
permissions,
and(
eq(permissions.entityType, 'workspace'),
eq(permissions.entityId, workflow.workspaceId),
eq(permissions.userId, userId)
)
)
.where(eq(workflow.id, workflowId))
.limit(1)
if (hasAccess.length === 0) {
return NextResponse.json({ error: 'Workflow not found' }, { status: 404 })
}
// Check if webhook exists and belongs to this workflow
const existingWebhook = await db
.select()
.from(workflowLogWebhook)
.where(
and(eq(workflowLogWebhook.id, webhookId), eq(workflowLogWebhook.workflowId, workflowId))
)
.limit(1)
if (existingWebhook.length === 0) {
return NextResponse.json({ error: 'Webhook not found' }, { status: 404 })
}
const body = await request.json()
const validationResult = UpdateWebhookSchema.safeParse(body)
if (!validationResult.success) {
return NextResponse.json(
{ error: 'Invalid request', details: validationResult.error.errors },
{ status: 400 }
)
}
const data = validationResult.data
// Check for duplicate URL (excluding current webhook)
const duplicateWebhook = await db
.select({ id: workflowLogWebhook.id })
.from(workflowLogWebhook)
.where(
and(eq(workflowLogWebhook.workflowId, workflowId), eq(workflowLogWebhook.url, data.url))
)
.limit(1)
if (duplicateWebhook.length > 0 && duplicateWebhook[0].id !== webhookId) {
return NextResponse.json(
{ error: 'A webhook with this URL already exists for this workflow' },
{ status: 409 }
)
}
// Prepare update data
const updateData: WebhookUpdatePayload = {
url: data.url,
includeFinalOutput: data.includeFinalOutput,
includeTraceSpans: data.includeTraceSpans,
includeRateLimits: data.includeRateLimits,
includeUsageData: data.includeUsageData,
levelFilter: data.levelFilter,
triggerFilter: data.triggerFilter,
updatedAt: new Date(),
}
// Only update secret if provided
if (data.secret) {
const { encrypted } = await encryptSecret(data.secret)
updateData.secret = encrypted
}
const updatedWebhooks = await db
.update(workflowLogWebhook)
.set(updateData)
.where(eq(workflowLogWebhook.id, webhookId))
.returning()
if (updatedWebhooks.length === 0) {
return NextResponse.json({ error: 'Webhook not found' }, { status: 404 })
}
const updatedWebhook = updatedWebhooks[0]
logger.info('Webhook updated', {
webhookId,
workflowId,
userId,
})
return NextResponse.json({
data: {
id: updatedWebhook.id,
url: updatedWebhook.url,
includeFinalOutput: updatedWebhook.includeFinalOutput,
includeTraceSpans: updatedWebhook.includeTraceSpans,
includeRateLimits: updatedWebhook.includeRateLimits,
includeUsageData: updatedWebhook.includeUsageData,
levelFilter: updatedWebhook.levelFilter,
triggerFilter: updatedWebhook.triggerFilter,
active: updatedWebhook.active,
createdAt: updatedWebhook.createdAt.toISOString(),
updatedAt: updatedWebhook.updatedAt.toISOString(),
},
})
} catch (error) {
logger.error('Failed to update webhook', { error })
return NextResponse.json({ error: 'Failed to update webhook' }, { status: 500 })
}
}
export async function DELETE(
request: NextRequest,
{ params }: { params: Promise<{ id: string; webhookId: string }> }
) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { id: workflowId, webhookId } = await params
const userId = session.user.id
// Check if user has access to the workflow
const hasAccess = await db
.select({ id: workflow.id })
.from(workflow)
.innerJoin(
permissions,
and(
eq(permissions.entityType, 'workspace'),
eq(permissions.entityId, workflow.workspaceId),
eq(permissions.userId, userId)
)
)
.where(eq(workflow.id, workflowId))
.limit(1)
if (hasAccess.length === 0) {
return NextResponse.json({ error: 'Workflow not found' }, { status: 404 })
}
// Delete the webhook (will cascade delete deliveries)
const deletedWebhook = await db
.delete(workflowLogWebhook)
.where(
and(eq(workflowLogWebhook.id, webhookId), eq(workflowLogWebhook.workflowId, workflowId))
)
.returning()
if (deletedWebhook.length === 0) {
return NextResponse.json({ error: 'Webhook not found' }, { status: 404 })
}
logger.info('Webhook deleted', {
webhookId,
workflowId,
userId,
})
return NextResponse.json({ success: true })
} catch (error) {
logger.error('Failed to delete webhook', { error })
return NextResponse.json({ error: 'Failed to delete webhook' }, { status: 500 })
}
}

View File

@@ -1,248 +0,0 @@
import { db } from '@sim/db'
import { permissions, workflow, workflowLogWebhook } from '@sim/db/schema'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { v4 as uuidv4 } from 'uuid'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { encryptSecret } from '@/lib/core/security/encryption'
import { createLogger } from '@/lib/logs/console/logger'
const logger = createLogger('WorkflowLogWebhookAPI')
const CreateWebhookSchema = z.object({
url: z.string().url(),
secret: z.string().optional(),
includeFinalOutput: z.boolean().optional().default(false),
includeTraceSpans: z.boolean().optional().default(false),
includeRateLimits: z.boolean().optional().default(false),
includeUsageData: z.boolean().optional().default(false),
levelFilter: z
.array(z.enum(['info', 'error']))
.optional()
.default(['info', 'error']),
triggerFilter: z
.array(z.enum(['api', 'webhook', 'schedule', 'manual', 'chat']))
.optional()
.default(['api', 'webhook', 'schedule', 'manual', 'chat']),
active: z.boolean().optional().default(true),
})
export async function GET(request: NextRequest, { params }: { params: Promise<{ id: string }> }) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { id: workflowId } = await params
const userId = session.user.id
const hasAccess = await db
.select({ id: workflow.id })
.from(workflow)
.innerJoin(
permissions,
and(
eq(permissions.entityType, 'workspace'),
eq(permissions.entityId, workflow.workspaceId),
eq(permissions.userId, userId)
)
)
.where(eq(workflow.id, workflowId))
.limit(1)
if (hasAccess.length === 0) {
return NextResponse.json({ error: 'Workflow not found' }, { status: 404 })
}
const webhooks = await db
.select({
id: workflowLogWebhook.id,
url: workflowLogWebhook.url,
includeFinalOutput: workflowLogWebhook.includeFinalOutput,
includeTraceSpans: workflowLogWebhook.includeTraceSpans,
includeRateLimits: workflowLogWebhook.includeRateLimits,
includeUsageData: workflowLogWebhook.includeUsageData,
levelFilter: workflowLogWebhook.levelFilter,
triggerFilter: workflowLogWebhook.triggerFilter,
active: workflowLogWebhook.active,
createdAt: workflowLogWebhook.createdAt,
updatedAt: workflowLogWebhook.updatedAt,
})
.from(workflowLogWebhook)
.where(eq(workflowLogWebhook.workflowId, workflowId))
return NextResponse.json({ data: webhooks })
} catch (error) {
logger.error('Error fetching log webhooks', { error })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
export async function POST(request: NextRequest, { params }: { params: Promise<{ id: string }> }) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { id: workflowId } = await params
const userId = session.user.id
const hasAccess = await db
.select({ id: workflow.id })
.from(workflow)
.innerJoin(
permissions,
and(
eq(permissions.entityType, 'workspace'),
eq(permissions.entityId, workflow.workspaceId),
eq(permissions.userId, userId)
)
)
.where(eq(workflow.id, workflowId))
.limit(1)
if (hasAccess.length === 0) {
return NextResponse.json({ error: 'Workflow not found' }, { status: 404 })
}
const body = await request.json()
const validationResult = CreateWebhookSchema.safeParse(body)
if (!validationResult.success) {
return NextResponse.json(
{ error: 'Invalid request', details: validationResult.error.errors },
{ status: 400 }
)
}
const data = validationResult.data
// Check for duplicate URL
const existingWebhook = await db
.select({ id: workflowLogWebhook.id })
.from(workflowLogWebhook)
.where(
and(eq(workflowLogWebhook.workflowId, workflowId), eq(workflowLogWebhook.url, data.url))
)
.limit(1)
if (existingWebhook.length > 0) {
return NextResponse.json(
{ error: 'A webhook with this URL already exists for this workflow' },
{ status: 409 }
)
}
let encryptedSecret: string | null = null
if (data.secret) {
const { encrypted } = await encryptSecret(data.secret)
encryptedSecret = encrypted
}
const [webhook] = await db
.insert(workflowLogWebhook)
.values({
id: uuidv4(),
workflowId,
url: data.url,
secret: encryptedSecret,
includeFinalOutput: data.includeFinalOutput,
includeTraceSpans: data.includeTraceSpans,
includeRateLimits: data.includeRateLimits,
includeUsageData: data.includeUsageData,
levelFilter: data.levelFilter,
triggerFilter: data.triggerFilter,
active: data.active,
})
.returning()
logger.info('Created log webhook', {
workflowId,
webhookId: webhook.id,
url: data.url,
})
return NextResponse.json({
data: {
id: webhook.id,
url: webhook.url,
includeFinalOutput: webhook.includeFinalOutput,
includeTraceSpans: webhook.includeTraceSpans,
includeRateLimits: webhook.includeRateLimits,
includeUsageData: webhook.includeUsageData,
levelFilter: webhook.levelFilter,
triggerFilter: webhook.triggerFilter,
active: webhook.active,
createdAt: webhook.createdAt,
updatedAt: webhook.updatedAt,
},
})
} catch (error) {
logger.error('Error creating log webhook', { error })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
export async function DELETE(
request: NextRequest,
{ params }: { params: Promise<{ id: string }> }
) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { id: workflowId } = await params
const userId = session.user.id
const { searchParams } = new URL(request.url)
const webhookId = searchParams.get('webhookId')
if (!webhookId) {
return NextResponse.json({ error: 'webhookId is required' }, { status: 400 })
}
const hasAccess = await db
.select({ id: workflow.id })
.from(workflow)
.innerJoin(
permissions,
and(
eq(permissions.entityType, 'workspace'),
eq(permissions.entityId, workflow.workspaceId),
eq(permissions.userId, userId)
)
)
.where(eq(workflow.id, workflowId))
.limit(1)
if (hasAccess.length === 0) {
return NextResponse.json({ error: 'Workflow not found' }, { status: 404 })
}
const deleted = await db
.delete(workflowLogWebhook)
.where(
and(eq(workflowLogWebhook.id, webhookId), eq(workflowLogWebhook.workflowId, workflowId))
)
.returning({ id: workflowLogWebhook.id })
if (deleted.length === 0) {
return NextResponse.json({ error: 'Webhook not found' }, { status: 404 })
}
logger.info('Deleted log webhook', {
workflowId,
webhookId,
})
return NextResponse.json({ success: true })
} catch (error) {
logger.error('Error deleting log webhook', { error })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}

View File

@@ -1,232 +0,0 @@
import { createHmac } from 'crypto'
import { db } from '@sim/db'
import { permissions, workflow, workflowLogWebhook } from '@sim/db/schema'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { v4 as uuidv4 } from 'uuid'
import { getSession } from '@/lib/auth'
import { decryptSecret } from '@/lib/core/security/encryption'
import { createLogger } from '@/lib/logs/console/logger'
const logger = createLogger('WorkflowLogWebhookTestAPI')
function generateSignature(secret: string, timestamp: number, body: string): string {
const signatureBase = `${timestamp}.${body}`
const hmac = createHmac('sha256', secret)
hmac.update(signatureBase)
return hmac.digest('hex')
}
export async function POST(request: NextRequest, { params }: { params: Promise<{ id: string }> }) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { id: workflowId } = await params
const userId = session.user.id
const { searchParams } = new URL(request.url)
const webhookId = searchParams.get('webhookId')
if (!webhookId) {
return NextResponse.json({ error: 'webhookId is required' }, { status: 400 })
}
const hasAccess = await db
.select({ id: workflow.id })
.from(workflow)
.innerJoin(
permissions,
and(
eq(permissions.entityType, 'workspace'),
eq(permissions.entityId, workflow.workspaceId),
eq(permissions.userId, userId)
)
)
.where(eq(workflow.id, workflowId))
.limit(1)
if (hasAccess.length === 0) {
return NextResponse.json({ error: 'Workflow not found' }, { status: 404 })
}
const [webhook] = await db
.select()
.from(workflowLogWebhook)
.where(
and(eq(workflowLogWebhook.id, webhookId), eq(workflowLogWebhook.workflowId, workflowId))
)
.limit(1)
if (!webhook) {
return NextResponse.json({ error: 'Webhook not found' }, { status: 404 })
}
const timestamp = Date.now()
const eventId = `evt_test_${uuidv4()}`
const executionId = `exec_test_${uuidv4()}`
const logId = `log_test_${uuidv4()}`
const payload = {
id: eventId,
type: 'workflow.execution.completed',
timestamp,
data: {
workflowId,
executionId,
status: 'success',
level: 'info',
trigger: 'manual',
startedAt: new Date(timestamp - 5000).toISOString(),
endedAt: new Date(timestamp).toISOString(),
totalDurationMs: 5000,
cost: {
total: 0.00123,
tokens: { prompt: 100, completion: 50, total: 150 },
models: {
'gpt-4o': {
input: 0.001,
output: 0.00023,
total: 0.00123,
tokens: { prompt: 100, completion: 50, total: 150 },
},
},
},
files: null,
},
links: {
log: `/v1/logs/${logId}`,
execution: `/v1/logs/executions/${executionId}`,
},
}
if (webhook.includeFinalOutput) {
;(payload.data as any).finalOutput = {
message: 'This is a test webhook delivery',
test: true,
}
}
if (webhook.includeTraceSpans) {
;(payload.data as any).traceSpans = [
{
id: 'span_test_1',
name: 'Test Block',
type: 'block',
status: 'success',
startTime: new Date(timestamp - 5000).toISOString(),
endTime: new Date(timestamp).toISOString(),
duration: 5000,
},
]
}
if (webhook.includeRateLimits) {
;(payload.data as any).rateLimits = {
sync: {
limit: 150,
remaining: 45,
resetAt: new Date(timestamp + 60000).toISOString(),
},
async: {
limit: 1000,
remaining: 50,
resetAt: new Date(timestamp + 60000).toISOString(),
},
}
}
if (webhook.includeUsageData) {
;(payload.data as any).usage = {
currentPeriodCost: 2.45,
limit: 10,
plan: 'pro',
isExceeded: false,
}
}
const body = JSON.stringify(payload)
const deliveryId = `delivery_test_${uuidv4()}`
const headers: Record<string, string> = {
'Content-Type': 'application/json',
'sim-event': 'workflow.execution.completed',
'sim-timestamp': timestamp.toString(),
'sim-delivery-id': deliveryId,
'Idempotency-Key': deliveryId,
}
if (webhook.secret) {
const { decrypted } = await decryptSecret(webhook.secret)
const signature = generateSignature(decrypted, timestamp, body)
headers['sim-signature'] = `t=${timestamp},v1=${signature}`
}
logger.info(`Sending test webhook to ${webhook.url}`, { workflowId, webhookId })
const controller = new AbortController()
const timeoutId = setTimeout(() => controller.abort(), 10000)
try {
const response = await fetch(webhook.url, {
method: 'POST',
headers,
body,
signal: controller.signal,
})
clearTimeout(timeoutId)
const responseBody = await response.text().catch(() => '')
const truncatedBody = responseBody.slice(0, 500)
const result = {
success: response.ok,
status: response.status,
statusText: response.statusText,
headers: Object.fromEntries(response.headers.entries()),
body: truncatedBody,
timestamp: new Date().toISOString(),
}
logger.info(`Test webhook completed`, {
workflowId,
webhookId,
status: response.status,
success: response.ok,
})
return NextResponse.json({ data: result })
} catch (error: any) {
clearTimeout(timeoutId)
if (error.name === 'AbortError') {
logger.error(`Test webhook timed out`, { workflowId, webhookId })
return NextResponse.json({
data: {
success: false,
error: 'Request timeout after 10 seconds',
timestamp: new Date().toISOString(),
},
})
}
logger.error(`Test webhook failed`, {
workflowId,
webhookId,
error: error.message,
})
return NextResponse.json({
data: {
success: false,
error: error.message,
timestamp: new Date().toISOString(),
},
})
}
} catch (error) {
logger.error('Error testing webhook', { error })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}

View File

@@ -10,11 +10,10 @@ export async function GET(
{
params,
}: {
params: { id: string; executionId: string }
params: Promise<{ id: string; executionId: string }>
}
) {
const workflowId = params.id
const executionId = params.executionId
const { id: workflowId, executionId } = await params
const access = await validateWorkflowAccess(request, workflowId, false)
if (access.error) {

View File

@@ -15,10 +15,10 @@ export async function GET(
{
params,
}: {
params: { id: string }
params: Promise<{ id: string }>
}
) {
const workflowId = params.id
const { id: workflowId } = await params
const access = await validateWorkflowAccess(request, workflowId, false)
if (access.error) {

View File

@@ -0,0 +1,318 @@
import { db } from '@sim/db'
import { workflow, workspaceNotificationSubscription } from '@sim/db/schema'
import { and, eq, inArray } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { encryptSecret } from '@/lib/core/security/encryption'
import { createLogger } from '@/lib/logs/console/logger'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
import { MAX_EMAIL_RECIPIENTS, MAX_WORKFLOW_IDS } from '../constants'
const logger = createLogger('WorkspaceNotificationAPI')
const levelFilterSchema = z.array(z.enum(['info', 'error']))
const triggerFilterSchema = z.array(z.enum(['api', 'webhook', 'schedule', 'manual', 'chat']))
const alertRuleSchema = z.enum([
'consecutive_failures',
'failure_rate',
'latency_threshold',
'latency_spike',
'cost_threshold',
'no_activity',
'error_count',
])
const alertConfigSchema = z
.object({
rule: alertRuleSchema,
consecutiveFailures: z.number().int().min(1).max(100).optional(),
failureRatePercent: z.number().int().min(1).max(100).optional(),
windowHours: z.number().int().min(1).max(168).optional(),
durationThresholdMs: z.number().int().min(1000).max(3600000).optional(),
latencySpikePercent: z.number().int().min(10).max(1000).optional(),
costThresholdDollars: z.number().min(0.01).max(1000).optional(),
inactivityHours: z.number().int().min(1).max(168).optional(),
errorCountThreshold: z.number().int().min(1).max(1000).optional(),
})
.refine(
(data) => {
switch (data.rule) {
case 'consecutive_failures':
return data.consecutiveFailures !== undefined
case 'failure_rate':
return data.failureRatePercent !== undefined && data.windowHours !== undefined
case 'latency_threshold':
return data.durationThresholdMs !== undefined
case 'latency_spike':
return data.latencySpikePercent !== undefined && data.windowHours !== undefined
case 'cost_threshold':
return data.costThresholdDollars !== undefined
case 'no_activity':
return data.inactivityHours !== undefined
case 'error_count':
return data.errorCountThreshold !== undefined && data.windowHours !== undefined
default:
return false
}
},
{ message: 'Missing required fields for alert rule' }
)
.nullable()
const webhookConfigSchema = z.object({
url: z.string().url(),
secret: z.string().optional(),
})
const slackConfigSchema = z.object({
channelId: z.string(),
channelName: z.string(),
accountId: z.string(),
})
const updateNotificationSchema = z
.object({
workflowIds: z.array(z.string()).max(MAX_WORKFLOW_IDS).optional(),
allWorkflows: z.boolean().optional(),
levelFilter: levelFilterSchema.optional(),
triggerFilter: triggerFilterSchema.optional(),
includeFinalOutput: z.boolean().optional(),
includeTraceSpans: z.boolean().optional(),
includeRateLimits: z.boolean().optional(),
includeUsageData: z.boolean().optional(),
alertConfig: alertConfigSchema.optional(),
webhookConfig: webhookConfigSchema.optional(),
emailRecipients: z.array(z.string().email()).max(MAX_EMAIL_RECIPIENTS).optional(),
slackConfig: slackConfigSchema.optional(),
active: z.boolean().optional(),
})
.refine((data) => !(data.allWorkflows && data.workflowIds && data.workflowIds.length > 0), {
message: 'Cannot specify both allWorkflows and workflowIds',
})
type RouteParams = { params: Promise<{ id: string; notificationId: string }> }
async function checkWorkspaceWriteAccess(
userId: string,
workspaceId: string
): Promise<{ hasAccess: boolean; permission: string | null }> {
const permission = await getUserEntityPermissions(userId, 'workspace', workspaceId)
const hasAccess = permission === 'write' || permission === 'admin'
return { hasAccess, permission }
}
async function getSubscription(notificationId: string, workspaceId: string) {
const [subscription] = await db
.select()
.from(workspaceNotificationSubscription)
.where(
and(
eq(workspaceNotificationSubscription.id, notificationId),
eq(workspaceNotificationSubscription.workspaceId, workspaceId)
)
)
.limit(1)
return subscription
}
export async function GET(request: NextRequest, { params }: RouteParams) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { id: workspaceId, notificationId } = await params
const permission = await getUserEntityPermissions(session.user.id, 'workspace', workspaceId)
if (!permission) {
return NextResponse.json({ error: 'Workspace not found' }, { status: 404 })
}
const subscription = await getSubscription(notificationId, workspaceId)
if (!subscription) {
return NextResponse.json({ error: 'Notification not found' }, { status: 404 })
}
return NextResponse.json({
data: {
id: subscription.id,
notificationType: subscription.notificationType,
workflowIds: subscription.workflowIds,
allWorkflows: subscription.allWorkflows,
levelFilter: subscription.levelFilter,
triggerFilter: subscription.triggerFilter,
includeFinalOutput: subscription.includeFinalOutput,
includeTraceSpans: subscription.includeTraceSpans,
includeRateLimits: subscription.includeRateLimits,
includeUsageData: subscription.includeUsageData,
webhookConfig: subscription.webhookConfig,
emailRecipients: subscription.emailRecipients,
slackConfig: subscription.slackConfig,
alertConfig: subscription.alertConfig,
active: subscription.active,
createdAt: subscription.createdAt,
updatedAt: subscription.updatedAt,
},
})
} catch (error) {
logger.error('Error fetching notification', { error })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
export async function PUT(request: NextRequest, { params }: RouteParams) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { id: workspaceId, notificationId } = await params
const { hasAccess } = await checkWorkspaceWriteAccess(session.user.id, workspaceId)
if (!hasAccess) {
return NextResponse.json({ error: 'Insufficient permissions' }, { status: 403 })
}
const existingSubscription = await getSubscription(notificationId, workspaceId)
if (!existingSubscription) {
return NextResponse.json({ error: 'Notification not found' }, { status: 404 })
}
const body = await request.json()
const validationResult = updateNotificationSchema.safeParse(body)
if (!validationResult.success) {
return NextResponse.json(
{ error: 'Invalid request', details: validationResult.error.errors },
{ status: 400 }
)
}
const data = validationResult.data
if (data.workflowIds && data.workflowIds.length > 0) {
const workflowsInWorkspace = await db
.select({ id: workflow.id })
.from(workflow)
.where(and(eq(workflow.workspaceId, workspaceId), inArray(workflow.id, data.workflowIds)))
const validIds = new Set(workflowsInWorkspace.map((w) => w.id))
const invalidIds = data.workflowIds.filter((id) => !validIds.has(id))
if (invalidIds.length > 0) {
return NextResponse.json(
{ error: 'Some workflow IDs do not belong to this workspace', invalidIds },
{ status: 400 }
)
}
}
const updateData: Record<string, unknown> = { updatedAt: new Date() }
if (data.workflowIds !== undefined) updateData.workflowIds = data.workflowIds
if (data.allWorkflows !== undefined) updateData.allWorkflows = data.allWorkflows
if (data.levelFilter !== undefined) updateData.levelFilter = data.levelFilter
if (data.triggerFilter !== undefined) updateData.triggerFilter = data.triggerFilter
if (data.includeFinalOutput !== undefined)
updateData.includeFinalOutput = data.includeFinalOutput
if (data.includeTraceSpans !== undefined) updateData.includeTraceSpans = data.includeTraceSpans
if (data.includeRateLimits !== undefined) updateData.includeRateLimits = data.includeRateLimits
if (data.includeUsageData !== undefined) updateData.includeUsageData = data.includeUsageData
if (data.alertConfig !== undefined) updateData.alertConfig = data.alertConfig
if (data.emailRecipients !== undefined) updateData.emailRecipients = data.emailRecipients
if (data.slackConfig !== undefined) updateData.slackConfig = data.slackConfig
if (data.active !== undefined) updateData.active = data.active
// Handle webhookConfig with secret encryption
if (data.webhookConfig !== undefined) {
let webhookConfig = data.webhookConfig
if (webhookConfig?.secret) {
const { encrypted } = await encryptSecret(webhookConfig.secret)
webhookConfig = { ...webhookConfig, secret: encrypted }
}
updateData.webhookConfig = webhookConfig
}
const [subscription] = await db
.update(workspaceNotificationSubscription)
.set(updateData)
.where(eq(workspaceNotificationSubscription.id, notificationId))
.returning()
logger.info('Updated notification subscription', {
workspaceId,
subscriptionId: subscription.id,
})
return NextResponse.json({
data: {
id: subscription.id,
notificationType: subscription.notificationType,
workflowIds: subscription.workflowIds,
allWorkflows: subscription.allWorkflows,
levelFilter: subscription.levelFilter,
triggerFilter: subscription.triggerFilter,
includeFinalOutput: subscription.includeFinalOutput,
includeTraceSpans: subscription.includeTraceSpans,
includeRateLimits: subscription.includeRateLimits,
includeUsageData: subscription.includeUsageData,
webhookConfig: subscription.webhookConfig,
emailRecipients: subscription.emailRecipients,
slackConfig: subscription.slackConfig,
alertConfig: subscription.alertConfig,
active: subscription.active,
createdAt: subscription.createdAt,
updatedAt: subscription.updatedAt,
},
})
} catch (error) {
logger.error('Error updating notification', { error })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
export async function DELETE(request: NextRequest, { params }: RouteParams) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { id: workspaceId, notificationId } = await params
const { hasAccess } = await checkWorkspaceWriteAccess(session.user.id, workspaceId)
if (!hasAccess) {
return NextResponse.json({ error: 'Insufficient permissions' }, { status: 403 })
}
const deleted = await db
.delete(workspaceNotificationSubscription)
.where(
and(
eq(workspaceNotificationSubscription.id, notificationId),
eq(workspaceNotificationSubscription.workspaceId, workspaceId)
)
)
.returning({ id: workspaceNotificationSubscription.id })
if (deleted.length === 0) {
return NextResponse.json({ error: 'Notification not found' }, { status: 404 })
}
logger.info('Deleted notification subscription', {
workspaceId,
subscriptionId: notificationId,
})
return NextResponse.json({ success: true })
} catch (error) {
logger.error('Error deleting notification', { error })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
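
A hypothetical PUT call against the handler above. Every field in `updateNotificationSchema` is optional, so this is a partial update, and `allWorkflows` cannot be combined with a non-empty `workflowIds`. The route path is inferred from the `{ id, notificationId }` params shape and may not match the actual directory layout.

// Hypothetical client call: route path is an assumption.
async function muteNotification(workspaceId: string, notificationId: string) {
  const res = await fetch(
    `/api/workspaces/${workspaceId}/notifications/${notificationId}`,
    {
      method: 'PUT',
      headers: { 'Content-Type': 'application/json' },
      // Partial update: only the provided keys are written.
      body: JSON.stringify({ levelFilter: ['error'], active: false }),
    }
  )
  if (!res.ok) throw new Error(`Update failed: ${res.status}`)
  return res.json()
}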

View File

@@ -0,0 +1,319 @@
import { createHmac } from 'crypto'
import { db } from '@sim/db'
import { account, workspaceNotificationSubscription } from '@sim/db/schema'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { v4 as uuidv4 } from 'uuid'
import { getSession } from '@/lib/auth'
import { decryptSecret } from '@/lib/core/security/encryption'
import { createLogger } from '@/lib/logs/console/logger'
import { sendEmail } from '@/lib/messaging/email/mailer'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
const logger = createLogger('WorkspaceNotificationTestAPI')
type RouteParams = { params: Promise<{ id: string; notificationId: string }> }
interface WebhookConfig {
url: string
secret?: string
}
interface SlackConfig {
channelId: string
channelName: string
accountId: string
}
function generateSignature(secret: string, timestamp: number, body: string): string {
const signatureBase = `${timestamp}.${body}`
const hmac = createHmac('sha256', secret)
hmac.update(signatureBase)
return hmac.digest('hex')
}
function buildTestPayload(subscription: typeof workspaceNotificationSubscription.$inferSelect) {
const timestamp = Date.now()
const eventId = `evt_test_${uuidv4()}`
const executionId = `exec_test_${uuidv4()}`
const payload: Record<string, unknown> = {
id: eventId,
type: 'workflow.execution.completed',
timestamp,
data: {
workflowId: 'test-workflow-id',
workflowName: 'Test Workflow',
executionId,
status: 'success',
level: 'info',
trigger: 'manual',
startedAt: new Date(timestamp - 5000).toISOString(),
endedAt: new Date(timestamp).toISOString(),
totalDurationMs: 5000,
cost: {
total: 0.00123,
tokens: { prompt: 100, completion: 50, total: 150 },
},
},
links: {
log: `/workspace/logs`,
},
}
const data = payload.data as Record<string, unknown>
if (subscription.includeFinalOutput) {
data.finalOutput = { message: 'This is a test notification', test: true }
}
if (subscription.includeTraceSpans) {
data.traceSpans = [
{
id: 'span_test_1',
name: 'Test Block',
type: 'block',
status: 'success',
startTime: new Date(timestamp - 5000).toISOString(),
endTime: new Date(timestamp).toISOString(),
duration: 5000,
},
]
}
if (subscription.includeRateLimits) {
data.rateLimits = {
sync: { limit: 150, remaining: 45, resetAt: new Date(timestamp + 60000).toISOString() },
async: { limit: 1000, remaining: 50, resetAt: new Date(timestamp + 60000).toISOString() },
}
}
if (subscription.includeUsageData) {
data.usage = { currentPeriodCost: 2.45, limit: 10, plan: 'pro', isExceeded: false }
}
return { payload, timestamp }
}
async function testWebhook(subscription: typeof workspaceNotificationSubscription.$inferSelect) {
const webhookConfig = subscription.webhookConfig as WebhookConfig | null
if (!webhookConfig?.url) {
return { success: false, error: 'No webhook URL configured' }
}
const { payload, timestamp } = buildTestPayload(subscription)
const body = JSON.stringify(payload)
const deliveryId = `delivery_test_${uuidv4()}`
const headers: Record<string, string> = {
'Content-Type': 'application/json',
'sim-event': 'workflow.execution.completed',
'sim-timestamp': timestamp.toString(),
'sim-delivery-id': deliveryId,
'Idempotency-Key': deliveryId,
}
if (webhookConfig.secret) {
const { decrypted } = await decryptSecret(webhookConfig.secret)
const signature = generateSignature(decrypted, timestamp, body)
headers['sim-signature'] = `t=${timestamp},v1=${signature}`
}
const controller = new AbortController()
const timeoutId = setTimeout(() => controller.abort(), 10000)
try {
const response = await fetch(webhookConfig.url, {
method: 'POST',
headers,
body,
signal: controller.signal,
})
clearTimeout(timeoutId)
const responseBody = await response.text().catch(() => '')
return {
success: response.ok,
status: response.status,
statusText: response.statusText,
body: responseBody.slice(0, 500),
timestamp: new Date().toISOString(),
}
} catch (error: unknown) {
clearTimeout(timeoutId)
const err = error as Error & { name?: string }
if (err.name === 'AbortError') {
return { success: false, error: 'Request timeout after 10 seconds' }
}
return { success: false, error: err.message }
}
}
async function testEmail(subscription: typeof workspaceNotificationSubscription.$inferSelect) {
if (!subscription.emailRecipients || subscription.emailRecipients.length === 0) {
return { success: false, error: 'No email recipients configured' }
}
const { payload } = buildTestPayload(subscription)
const data = (payload as Record<string, unknown>).data as Record<string, unknown>
const result = await sendEmail({
to: subscription.emailRecipients,
subject: `[Test] Workflow Execution: ${data.workflowName}`,
text: `This is a test notification from Sim Studio.\n\nWorkflow: ${data.workflowName}\nStatus: ${data.status}\nDuration: ${data.totalDurationMs}ms\n\nThis notification is configured for workspace notifications.`,
html: `
<div style="font-family: sans-serif; max-width: 600px; margin: 0 auto;">
<h2 style="color: #7F2FFF;">Test Notification</h2>
<p>This is a test notification from Sim Studio.</p>
<table style="width: 100%; border-collapse: collapse; margin: 20px 0;">
<tr><td style="padding: 8px; border: 1px solid #eee;"><strong>Workflow</strong></td><td style="padding: 8px; border: 1px solid #eee;">${data.workflowName}</td></tr>
<tr><td style="padding: 8px; border: 1px solid #eee;"><strong>Status</strong></td><td style="padding: 8px; border: 1px solid #eee;">${data.status}</td></tr>
<tr><td style="padding: 8px; border: 1px solid #eee;"><strong>Duration</strong></td><td style="padding: 8px; border: 1px solid #eee;">${data.totalDurationMs}ms</td></tr>
</table>
<p style="color: #666; font-size: 12px;">This notification is configured for workspace notifications.</p>
</div>
`,
emailType: 'notifications',
})
return {
success: result.success,
message: result.message,
timestamp: new Date().toISOString(),
}
}
async function testSlack(
subscription: typeof workspaceNotificationSubscription.$inferSelect,
userId: string
) {
const slackConfig = subscription.slackConfig as SlackConfig | null
if (!slackConfig?.channelId || !slackConfig?.accountId) {
return { success: false, error: 'No Slack channel or account configured' }
}
const [slackAccount] = await db
.select({ accessToken: account.accessToken })
.from(account)
.where(and(eq(account.id, slackConfig.accountId), eq(account.userId, userId)))
.limit(1)
if (!slackAccount?.accessToken) {
return { success: false, error: 'Slack account not found or not connected' }
}
const { payload } = buildTestPayload(subscription)
const data = (payload as Record<string, unknown>).data as Record<string, unknown>
const slackPayload = {
channel: slackConfig.channelId,
blocks: [
{
type: 'header',
text: { type: 'plain_text', text: '🧪 Test Notification', emoji: true },
},
{
type: 'section',
fields: [
{ type: 'mrkdwn', text: `*Workflow:*\n${data.workflowName}` },
{ type: 'mrkdwn', text: `*Status:*\n✅ ${data.status}` },
{ type: 'mrkdwn', text: `*Duration:*\n${data.totalDurationMs}ms` },
{ type: 'mrkdwn', text: `*Trigger:*\n${data.trigger}` },
],
},
{
type: 'context',
elements: [
{
type: 'mrkdwn',
text: 'This is a test notification from Sim Studio workspace notifications.',
},
],
},
],
text: `Test notification: ${data.workflowName} - ${data.status}`,
}
try {
const response = await fetch('https://slack.com/api/chat.postMessage', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${slackAccount.accessToken}`,
},
body: JSON.stringify(slackPayload),
})
const result = await response.json()
return {
success: result.ok,
error: result.error,
channel: result.channel,
timestamp: new Date().toISOString(),
}
} catch (error: unknown) {
const err = error as Error
return { success: false, error: err.message }
}
}
export async function POST(request: NextRequest, { params }: RouteParams) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { id: workspaceId, notificationId } = await params
const permission = await getUserEntityPermissions(session.user.id, 'workspace', workspaceId)
if (permission !== 'write' && permission !== 'admin') {
return NextResponse.json({ error: 'Insufficient permissions' }, { status: 403 })
}
const [subscription] = await db
.select()
.from(workspaceNotificationSubscription)
.where(
and(
eq(workspaceNotificationSubscription.id, notificationId),
eq(workspaceNotificationSubscription.workspaceId, workspaceId)
)
)
.limit(1)
if (!subscription) {
return NextResponse.json({ error: 'Notification not found' }, { status: 404 })
}
let result: Record<string, unknown>
switch (subscription.notificationType) {
case 'webhook':
result = await testWebhook(subscription)
break
case 'email':
result = await testEmail(subscription)
break
case 'slack':
result = await testSlack(subscription, session.user.id)
break
default:
return NextResponse.json({ error: 'Unknown notification type' }, { status: 400 })
}
logger.info('Test notification sent', {
workspaceId,
subscriptionId: notificationId,
type: subscription.notificationType,
success: result.success,
})
return NextResponse.json({ data: result })
} catch (error) {
logger.error('Error testing notification', { error })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
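
For consumers of these deliveries, a receiver-side sketch that mirrors `generateSignature` above: recompute the HMAC-SHA256 over `${timestamp}.${rawBody}` and compare it to the `v1` value in the `sim-signature` header. The replay-tolerance window is an assumption, not something this diff specifies.

import { createHmac, timingSafeEqual } from 'crypto'

// Verify a `sim-signature` header of the form "t=<timestamp>,v1=<hex hmac>".
function verifySimSignature(
  rawBody: string,
  signatureHeader: string,
  secret: string,
  toleranceMs = 5 * 60 * 1000 // assumed replay window
): boolean {
  const parts = Object.fromEntries(
    signatureHeader.split(',').map((kv) => kv.split('=') as [string, string])
  )
  const timestamp = Number(parts.t)
  if (!Number.isFinite(timestamp) || Math.abs(Date.now() - timestamp) > toleranceMs) {
    return false
  }
  const expected = createHmac('sha256', secret)
    .update(`${timestamp}.${rawBody}`)
    .digest('hex')
  const given = Buffer.from(parts.v1 ?? '', 'hex')
  const wanted = Buffer.from(expected, 'hex')
  return given.length === wanted.length && timingSafeEqual(given, wanted)
}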

View File

@@ -0,0 +1,8 @@
/** Maximum email recipients per notification */
export const MAX_EMAIL_RECIPIENTS = 10
/** Maximum notifications per type per workspace */
export const MAX_NOTIFICATIONS_PER_TYPE = 10
/** Maximum workflow IDs per notification */
export const MAX_WORKFLOW_IDS = 1000

View File

@@ -0,0 +1,284 @@
import { db } from '@sim/db'
import { workflow, workspaceNotificationSubscription } from '@sim/db/schema'
import { and, eq, inArray } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { v4 as uuidv4 } from 'uuid'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { encryptSecret } from '@/lib/core/security/encryption'
import { createLogger } from '@/lib/logs/console/logger'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
import { MAX_EMAIL_RECIPIENTS, MAX_NOTIFICATIONS_PER_TYPE, MAX_WORKFLOW_IDS } from './constants'
const logger = createLogger('WorkspaceNotificationsAPI')
const notificationTypeSchema = z.enum(['webhook', 'email', 'slack'])
const levelFilterSchema = z.array(z.enum(['info', 'error']))
const triggerFilterSchema = z.array(z.enum(['api', 'webhook', 'schedule', 'manual', 'chat']))
const alertRuleSchema = z.enum([
'consecutive_failures',
'failure_rate',
'latency_threshold',
'latency_spike',
'cost_threshold',
'no_activity',
'error_count',
])
const alertConfigSchema = z
.object({
rule: alertRuleSchema,
consecutiveFailures: z.number().int().min(1).max(100).optional(),
failureRatePercent: z.number().int().min(1).max(100).optional(),
windowHours: z.number().int().min(1).max(168).optional(),
durationThresholdMs: z.number().int().min(1000).max(3600000).optional(),
latencySpikePercent: z.number().int().min(10).max(1000).optional(),
costThresholdDollars: z.number().min(0.01).max(1000).optional(),
inactivityHours: z.number().int().min(1).max(168).optional(),
errorCountThreshold: z.number().int().min(1).max(1000).optional(),
})
.refine(
(data) => {
switch (data.rule) {
case 'consecutive_failures':
return data.consecutiveFailures !== undefined
case 'failure_rate':
return data.failureRatePercent !== undefined && data.windowHours !== undefined
case 'latency_threshold':
return data.durationThresholdMs !== undefined
case 'latency_spike':
return data.latencySpikePercent !== undefined && data.windowHours !== undefined
case 'cost_threshold':
return data.costThresholdDollars !== undefined
case 'no_activity':
return data.inactivityHours !== undefined
case 'error_count':
return data.errorCountThreshold !== undefined && data.windowHours !== undefined
default:
return false
}
},
{ message: 'Missing required fields for alert rule' }
)
.nullable()
const webhookConfigSchema = z.object({
url: z.string().url(),
secret: z.string().optional(),
})
const slackConfigSchema = z.object({
channelId: z.string(),
channelName: z.string(),
accountId: z.string(),
})
const createNotificationSchema = z
.object({
notificationType: notificationTypeSchema,
workflowIds: z.array(z.string()).max(MAX_WORKFLOW_IDS).default([]),
allWorkflows: z.boolean().default(false),
levelFilter: levelFilterSchema.default(['info', 'error']),
triggerFilter: triggerFilterSchema.default(['api', 'webhook', 'schedule', 'manual', 'chat']),
includeFinalOutput: z.boolean().default(false),
includeTraceSpans: z.boolean().default(false),
includeRateLimits: z.boolean().default(false),
includeUsageData: z.boolean().default(false),
alertConfig: alertConfigSchema.optional(),
webhookConfig: webhookConfigSchema.optional(),
emailRecipients: z.array(z.string().email()).max(MAX_EMAIL_RECIPIENTS).optional(),
slackConfig: slackConfigSchema.optional(),
})
.refine(
(data) => {
if (data.notificationType === 'webhook') return !!data.webhookConfig?.url
if (data.notificationType === 'email')
return !!data.emailRecipients && data.emailRecipients.length > 0
if (data.notificationType === 'slack')
return !!data.slackConfig?.channelId && !!data.slackConfig?.accountId
return false
},
{ message: 'Missing required fields for notification type' }
)
.refine((data) => !(data.allWorkflows && data.workflowIds.length > 0), {
message: 'Cannot specify both allWorkflows and workflowIds',
})
async function checkWorkspaceWriteAccess(
userId: string,
workspaceId: string
): Promise<{ hasAccess: boolean; permission: string | null }> {
const permission = await getUserEntityPermissions(userId, 'workspace', workspaceId)
const hasAccess = permission === 'write' || permission === 'admin'
return { hasAccess, permission }
}
export async function GET(request: NextRequest, { params }: { params: Promise<{ id: string }> }) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { id: workspaceId } = await params
const permission = await getUserEntityPermissions(session.user.id, 'workspace', workspaceId)
if (!permission) {
return NextResponse.json({ error: 'Workspace not found' }, { status: 404 })
}
const subscriptions = await db
.select({
id: workspaceNotificationSubscription.id,
notificationType: workspaceNotificationSubscription.notificationType,
workflowIds: workspaceNotificationSubscription.workflowIds,
allWorkflows: workspaceNotificationSubscription.allWorkflows,
levelFilter: workspaceNotificationSubscription.levelFilter,
triggerFilter: workspaceNotificationSubscription.triggerFilter,
includeFinalOutput: workspaceNotificationSubscription.includeFinalOutput,
includeTraceSpans: workspaceNotificationSubscription.includeTraceSpans,
includeRateLimits: workspaceNotificationSubscription.includeRateLimits,
includeUsageData: workspaceNotificationSubscription.includeUsageData,
webhookConfig: workspaceNotificationSubscription.webhookConfig,
emailRecipients: workspaceNotificationSubscription.emailRecipients,
slackConfig: workspaceNotificationSubscription.slackConfig,
alertConfig: workspaceNotificationSubscription.alertConfig,
active: workspaceNotificationSubscription.active,
createdAt: workspaceNotificationSubscription.createdAt,
updatedAt: workspaceNotificationSubscription.updatedAt,
})
.from(workspaceNotificationSubscription)
.where(eq(workspaceNotificationSubscription.workspaceId, workspaceId))
.orderBy(workspaceNotificationSubscription.createdAt)
return NextResponse.json({ data: subscriptions })
} catch (error) {
logger.error('Error fetching notifications', { error })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
export async function POST(request: NextRequest, { params }: { params: Promise<{ id: string }> }) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { id: workspaceId } = await params
const { hasAccess } = await checkWorkspaceWriteAccess(session.user.id, workspaceId)
if (!hasAccess) {
return NextResponse.json({ error: 'Insufficient permissions' }, { status: 403 })
}
const body = await request.json()
const validationResult = createNotificationSchema.safeParse(body)
if (!validationResult.success) {
return NextResponse.json(
{ error: 'Invalid request', details: validationResult.error.errors },
{ status: 400 }
)
}
const data = validationResult.data
const existingCount = await db
.select({ id: workspaceNotificationSubscription.id })
.from(workspaceNotificationSubscription)
.where(
and(
eq(workspaceNotificationSubscription.workspaceId, workspaceId),
eq(workspaceNotificationSubscription.notificationType, data.notificationType)
)
)
if (existingCount.length >= MAX_NOTIFICATIONS_PER_TYPE) {
return NextResponse.json(
{
error: `Maximum ${MAX_NOTIFICATIONS_PER_TYPE} ${data.notificationType} notifications per workspace`,
},
{ status: 400 }
)
}
if (!data.allWorkflows && data.workflowIds.length > 0) {
const workflowsInWorkspace = await db
.select({ id: workflow.id })
.from(workflow)
.where(and(eq(workflow.workspaceId, workspaceId), inArray(workflow.id, data.workflowIds)))
const validIds = new Set(workflowsInWorkspace.map((w) => w.id))
const invalidIds = data.workflowIds.filter((id) => !validIds.has(id))
if (invalidIds.length > 0) {
return NextResponse.json(
{ error: 'Some workflow IDs do not belong to this workspace', invalidIds },
{ status: 400 }
)
}
}
// Encrypt webhook secret if provided
let webhookConfig = data.webhookConfig || null
if (webhookConfig?.secret) {
const { encrypted } = await encryptSecret(webhookConfig.secret)
webhookConfig = { ...webhookConfig, secret: encrypted }
}
const [subscription] = await db
.insert(workspaceNotificationSubscription)
.values({
id: uuidv4(),
workspaceId,
notificationType: data.notificationType,
workflowIds: data.workflowIds,
allWorkflows: data.allWorkflows,
levelFilter: data.levelFilter,
triggerFilter: data.triggerFilter,
includeFinalOutput: data.includeFinalOutput,
includeTraceSpans: data.includeTraceSpans,
includeRateLimits: data.includeRateLimits,
includeUsageData: data.includeUsageData,
alertConfig: data.alertConfig || null,
webhookConfig,
emailRecipients: data.emailRecipients || null,
slackConfig: data.slackConfig || null,
createdBy: session.user.id,
})
.returning()
logger.info('Created notification subscription', {
workspaceId,
subscriptionId: subscription.id,
type: data.notificationType,
})
return NextResponse.json({
data: {
id: subscription.id,
notificationType: subscription.notificationType,
workflowIds: subscription.workflowIds,
allWorkflows: subscription.allWorkflows,
levelFilter: subscription.levelFilter,
triggerFilter: subscription.triggerFilter,
includeFinalOutput: subscription.includeFinalOutput,
includeTraceSpans: subscription.includeTraceSpans,
includeRateLimits: subscription.includeRateLimits,
includeUsageData: subscription.includeUsageData,
webhookConfig: subscription.webhookConfig,
emailRecipients: subscription.emailRecipients,
slackConfig: subscription.slackConfig,
alertConfig: subscription.alertConfig,
active: subscription.active,
createdAt: subscription.createdAt,
updatedAt: subscription.updatedAt,
},
})
} catch (error) {
logger.error('Error creating notification', { error })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
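
A hypothetical request for the POST handler above, using field names taken from `createNotificationSchema`; the route path is assumed. The refinements require a matching config for the chosen `notificationType` and reject `allWorkflows` combined with explicit `workflowIds`.

// Hypothetical client call: route path is an assumption; body fields come
// from createNotificationSchema as defined above.
async function createWebhookErrorAlerts(workspaceId: string) {
  const res = await fetch(`/api/workspaces/${workspaceId}/notifications`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      notificationType: 'webhook',
      allWorkflows: true,
      levelFilter: ['error'],
      triggerFilter: ['api', 'webhook', 'schedule'],
      // The secret is encrypted server-side before it is stored.
      webhookConfig: { url: 'https://example.com/hooks/sim', secret: 'replace-with-a-long-random-secret' },
    }),
  })
  if (!res.ok) throw new Error(`Failed to create notification: ${res.status}`)
  return res.json()
}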

View File

@@ -37,6 +37,7 @@ export function ChatHeader({ chatConfig, starCount }: ChatHeaderProps) {
alt={`${chatConfig?.title || 'Chat'} logo`}
width={24}
height={24}
unoptimized
className='h-6 w-6 rounded-md object-cover'
/>
)}

View File

@@ -1,9 +1,15 @@
'use client'
import { useRef, useState } from 'react'
import { AlertCircle, Loader2, X } from 'lucide-react'
import { AlertCircle, Loader2 } from 'lucide-react'
import { Button, Textarea } from '@/components/emcn'
import { Modal, ModalContent, ModalTitle } from '@/components/emcn/components/modal/modal'
import {
Modal,
ModalBody,
ModalContent,
ModalFooter,
ModalHeader,
} from '@/components/emcn/components/modal/modal'
import { Label } from '@/components/ui/label'
import { createLogger } from '@/lib/logs/console/logger'
import type { ChunkData, DocumentData } from '@/stores/knowledge/store'
@@ -113,132 +119,107 @@ export function CreateChunkModal({
return (
<>
<Modal open={open} onOpenChange={handleCloseAttempt}>
<ModalContent
className='flex h-[74vh] flex-col gap-0 overflow-hidden p-0 sm:max-w-[600px]'
showClose={false}
>
{/* Modal Header */}
<div className='flex-shrink-0 px-6 py-5'>
<div className='flex items-center justify-between'>
<ModalTitle className='font-medium text-[14px] text-[var(--text-primary)] dark:text-[var(--text-primary)]'>
Create Chunk
</ModalTitle>
<Button variant='ghost' className='h-8 w-8 p-0' onClick={handleCloseAttempt}>
<X className='h-4 w-4' />
<span className='sr-only'>Close</span>
</Button>
</div>
</div>
<ModalContent className='h-[74vh] sm:max-w-[600px]'>
<ModalHeader>Create Chunk</ModalHeader>
{/* Modal Body */}
<div className='relative flex min-h-0 flex-1 flex-col overflow-hidden'>
<form className='flex min-h-0 flex-1 flex-col'>
{/* Scrollable Content */}
<div className='scrollbar-hide min-h-0 flex-1 overflow-y-auto pb-20'>
<div className='flex min-h-full flex-col px-6'>
<div className='flex flex-1 flex-col space-y-[12px] pt-0 pb-6'>
{/* Document Info Section */}
<div className='flex-shrink-0 space-y-[8px]'>
<div className='flex items-center gap-3 rounded-lg border bg-muted/30 p-4'>
<div className='min-w-0 flex-1'>
<p className='font-medium text-sm'>
{document?.filename || 'Unknown Document'}
</p>
<p className='text-muted-foreground text-xs'>
Adding chunk to this document
</p>
</div>
</div>
{/* Error Display */}
{error && (
<div className='flex items-center gap-2 rounded-md border border-red-200 bg-red-50 p-3'>
<AlertCircle className='h-4 w-4 text-red-600' />
<p className='text-red-800 text-sm'>{error}</p>
</div>
)}
</div>
{/* Content Input Section - Expands to fill space */}
<div className='flex min-h-0 flex-1 flex-col space-y-[8px]'>
<Label
htmlFor='content'
className='font-medium text-[13px] text-[var(--text-primary)] dark:text-[var(--text-primary)]'
>
Chunk Content
</Label>
<Textarea
id='content'
value={content}
onChange={(e) => setContent(e.target.value)}
placeholder='Enter the content for this chunk...'
className='min-h-0 flex-1 resize-none'
disabled={isCreating}
/>
</div>
{/* Fixed Footer with Actions */}
<div className='absolute inset-x-0 bottom-0 bg-[var(--surface-1)] dark:bg-[var(--surface-1)]'>
<div className='flex w-full items-center justify-between gap-[8px] px-6 py-4'>
<Button
variant='default'
onClick={handleCloseAttempt}
type='button'
disabled={isCreating}
>
Cancel
</Button>
<Button
variant='primary'
onClick={handleCreateChunk}
type='button'
disabled={!isFormValid || isCreating}
>
{isCreating ? (
<>
<Loader2 className='mr-2 h-4 w-4 animate-spin' />
Creating...
</>
) : (
'Create Chunk'
)}
</Button>
</div>
</div>
</form>
</div>
<form className='flex min-h-0 flex-1 flex-col'>
<ModalBody>
<div className='space-y-[12px]'>
{/* Document Info Section */}
<div className='flex items-center gap-3 rounded-lg border p-4'>
<div className='min-w-0 flex-1'>
<p className='font-medium text-[var(--text-primary)] text-sm'>
{document?.filename || 'Unknown Document'}
</p>
<p className='text-[var(--text-tertiary)] text-xs'>
Adding chunk to this document
</p>
</div>
</div>
{/* Error Display */}
{error && (
<div className='flex items-center gap-2 rounded-md border border-[var(--text-error)]/50 bg-[var(--text-error)]/10 p-3'>
<AlertCircle className='h-4 w-4 text-[var(--text-error)]' />
<p className='text-[var(--text-error)] text-sm'>{error}</p>
</div>
)}
{/* Content Input Section */}
<div className='space-y-[8px]'>
<Label
htmlFor='content'
className='mb-[6.5px] block pl-[2px] font-medium text-[13px] text-[var(--text-primary)]'
>
Chunk Content
</Label>
<Textarea
id='content'
value={content}
onChange={(e) => setContent(e.target.value)}
placeholder='Enter the content for this chunk...'
rows={10}
disabled={isCreating}
/>
</div>
</div>
</ModalBody>
<ModalFooter>
<Button
variant='default'
onClick={handleCloseAttempt}
type='button'
disabled={isCreating}
>
Cancel
</Button>
<Button
variant='primary'
onClick={handleCreateChunk}
type='button'
disabled={!isFormValid || isCreating}
>
{isCreating ? (
<>
<Loader2 className='mr-2 h-4 w-4 animate-spin' />
Creating...
</>
) : (
'Create Chunk'
)}
</Button>
</ModalFooter>
</form>
</ModalContent>
</Modal>
{/* Unsaved Changes Alert */}
<Modal open={showUnsavedChangesAlert} onOpenChange={setShowUnsavedChangesAlert}>
<ModalContent className='flex flex-col gap-0 p-0'>
{/* Modal Header */}
<div className='flex-shrink-0 px-6 py-5'>
<ModalTitle className='font-medium text-[14px] text-[var(--text-primary)] dark:text-[var(--text-primary)]'>
Discard changes?
</ModalTitle>
<p className='mt-2 text-[12px] text-[var(--text-secondary)] dark:text-[var(--text-secondary)]'>
You have unsaved changes. Are you sure you want to close without saving?
</p>
</div>
{/* Modal Footer */}
<div className='flex w-full items-center justify-between gap-[8px] px-6 py-4'>
<Button
variant='default'
onClick={() => setShowUnsavedChangesAlert(false)}
type='button'
>
Keep editing
</Button>
<Button variant='primary' onClick={handleConfirmDiscard} type='button'>
Discard changes
</Button>
</div>
</ModalContent>
<ModalContent className='w-[400px]'>
<ModalHeader>Discard Changes</ModalHeader>
<ModalBody>
<p className='text-[12px] text-[var(--text-tertiary)]'>
You have unsaved changes. Are you sure you want to close without saving?
</p>
</ModalBody>
<ModalFooter>
<Button
variant='default'
onClick={() => setShowUnsavedChangesAlert(false)}
type='button'
>
Keep Editing
</Button>
<Button
variant='primary'
onClick={handleConfirmDiscard}
type='button'
className='!bg-[var(--text-error)] !text-white hover:!bg-[var(--text-error)]/90'
>
Discard Changes
</Button>
</ModalFooter>
</ModalContent>
</Modal>
</>

View File

@@ -2,15 +2,7 @@
import { useState } from 'react'
import { Loader2 } from 'lucide-react'
import {
Button,
Modal,
ModalContent,
ModalDescription,
ModalFooter,
ModalHeader,
ModalTitle,
} from '@/components/emcn'
import { Button, Modal, ModalBody, ModalContent, ModalFooter, ModalHeader } from '@/components/emcn'
import { Trash } from '@/components/emcn/icons/trash'
import { createLogger } from '@/lib/logs/console/logger'
import type { ChunkData } from '@/stores/knowledge/store'
@@ -76,29 +68,23 @@ export function DeleteChunkModal({
return (
<Modal open={isOpen} onOpenChange={onClose}>
<ModalContent>
<ModalHeader>
<ModalTitle>Delete Chunk</ModalTitle>
<ModalDescription>
<ModalContent className='w-[400px]'>
<ModalHeader>Delete Chunk</ModalHeader>
<ModalBody>
<p className='text-[12px] text-[var(--text-tertiary)]'>
Are you sure you want to delete this chunk?{' '}
<span className='text-[var(--text-error)] dark:text-[var(--text-error)]'>
This action cannot be undone.
</span>
</ModalDescription>
</ModalHeader>
<span className='text-[var(--text-error)]'>This action cannot be undone.</span>
</p>
</ModalBody>
<ModalFooter>
<Button
variant='outline'
disabled={isDeleting}
onClick={onClose}
className='h-[32px] px-[12px]'
>
<Button variant='active' disabled={isDeleting} onClick={onClose}>
Cancel
</Button>
<Button
variant='primary'
onClick={handleDeleteChunk}
disabled={isDeleting}
className='h-[32px] bg-[var(--text-error)] px-[12px] text-[var(--white)] hover:bg-[var(--text-error)] hover:text-[var(--white)] dark:bg-[var(--text-error)] dark:text-[var(--white)] hover:dark:bg-[var(--text-error)] dark:hover:text-[var(--white)]'
className='!bg-[var(--text-error)] !text-white hover:!bg-[var(--text-error)]/90'
>
{isDeleting ? (
<>

View File

@@ -6,8 +6,10 @@ import {
Button,
Label,
Modal,
ModalBody,
ModalContent,
ModalTitle,
ModalFooter,
ModalHeader,
Textarea,
Tooltip,
} from '@/components/emcn'
@@ -169,179 +171,155 @@ export function EditChunkModal({
return (
<>
<Modal open={isOpen} onOpenChange={handleCloseAttempt}>
<ModalContent
className='flex h-[74vh] flex-col gap-0 overflow-hidden p-0 sm:max-w-[600px]'
showClose={false}
>
{/* Modal Header */}
<div className='flex-shrink-0 px-6 py-5'>
<div className='flex items-center justify-between'>
<div className='flex items-center gap-3'>
<ModalTitle className='font-medium text-[14px] text-[var(--text-primary)] dark:text-[var(--text-primary)]'>
Edit Chunk
</ModalTitle>
<ModalContent className='h-[74vh] sm:max-w-[600px]'>
<div className='flex items-center justify-between px-[16px] py-[10px]'>
<div className='flex items-center gap-3'>
<span className='font-medium text-[16px] text-[var(--text-primary)]'>Edit Chunk</span>
{/* Navigation Controls */}
<div className='flex items-center gap-1'>
<Tooltip.Root>
<Tooltip.Trigger
asChild
onFocus={(e) => e.preventDefault()}
onBlur={(e) => e.preventDefault()}
{/* Navigation Controls */}
<div className='flex items-center gap-1'>
<Tooltip.Root>
<Tooltip.Trigger
asChild
onFocus={(e) => e.preventDefault()}
onBlur={(e) => e.preventDefault()}
>
<Button
variant='ghost'
onClick={() => handleNavigate('prev')}
disabled={!canNavigatePrev || isNavigating || isSaving}
className='h-8 w-8 p-0'
>
<Button
variant='ghost'
onClick={() => handleNavigate('prev')}
disabled={!canNavigatePrev || isNavigating || isSaving}
className='h-8 w-8 p-0'
>
<ChevronUp className='h-4 w-4' />
</Button>
</Tooltip.Trigger>
<Tooltip.Content side='bottom'>
Previous chunk{' '}
{currentPage > 1 && currentChunkIndex === 0 ? '(previous page)' : ''}
</Tooltip.Content>
</Tooltip.Root>
<ChevronUp className='h-4 w-4' />
</Button>
</Tooltip.Trigger>
<Tooltip.Content side='bottom'>
Previous chunk{' '}
{currentPage > 1 && currentChunkIndex === 0 ? '(previous page)' : ''}
</Tooltip.Content>
</Tooltip.Root>
<Tooltip.Root>
<Tooltip.Trigger
asChild
onFocus={(e) => e.preventDefault()}
onBlur={(e) => e.preventDefault()}
<Tooltip.Root>
<Tooltip.Trigger
asChild
onFocus={(e) => e.preventDefault()}
onBlur={(e) => e.preventDefault()}
>
<Button
variant='ghost'
onClick={() => handleNavigate('next')}
disabled={!canNavigateNext || isNavigating || isSaving}
className='h-8 w-8 p-0'
>
<Button
variant='ghost'
onClick={() => handleNavigate('next')}
disabled={!canNavigateNext || isNavigating || isSaving}
className='h-8 w-8 p-0'
>
<ChevronDown className='h-4 w-4' />
</Button>
</Tooltip.Trigger>
<Tooltip.Content side='bottom'>
Next chunk{' '}
{currentPage < totalPages && currentChunkIndex === allChunks.length - 1
? '(next page)'
: ''}
</Tooltip.Content>
</Tooltip.Root>
</div>
<ChevronDown className='h-4 w-4' />
</Button>
</Tooltip.Trigger>
<Tooltip.Content side='bottom'>
Next chunk{' '}
{currentPage < totalPages && currentChunkIndex === allChunks.length - 1
? '(next page)'
: ''}
</Tooltip.Content>
</Tooltip.Root>
</div>
<Button variant='ghost' className='h-8 w-8 p-0' onClick={handleCloseAttempt}>
<X className='h-4 w-4' />
<span className='sr-only'>Close</span>
</Button>
</div>
<Button variant='ghost' className='h-[16px] w-[16px] p-0' onClick={handleCloseAttempt}>
<X className='h-[16px] w-[16px]' />
<span className='sr-only'>Close</span>
</Button>
</div>
{/* Modal Body */}
<div className='relative flex min-h-0 flex-1 flex-col overflow-hidden'>
<form className='flex min-h-0 flex-1 flex-col'>
{/* Scrollable Content */}
<div className='scrollbar-hide min-h-0 flex-1 overflow-y-auto pb-20'>
<div className='flex min-h-full flex-col px-6'>
<div className='flex flex-1 flex-col space-y-[12px] pt-0 pb-6'>
{/* Document Info Section */}
<div className='flex-shrink-0 space-y-[8px]'>
<div className='flex items-center gap-3 rounded-lg border bg-muted/30 p-4'>
<div className='min-w-0 flex-1'>
<p className='font-medium text-sm'>
{document?.filename || 'Unknown Document'}
</p>
<p className='text-muted-foreground text-xs'>
Editing chunk #{chunk.chunkIndex} Page {currentPage} of {totalPages}
</p>
</div>
</div>
{/* Error Display */}
{error && (
<div className='flex items-center gap-2 rounded-md border border-red-200 bg-red-50 p-3'>
<AlertCircle className='h-4 w-4 text-red-600' />
<p className='text-red-800 text-sm'>{error}</p>
</div>
)}
</div>
{/* Content Input Section - Expands to fill space */}
<div className='flex min-h-0 flex-1 flex-col space-y-[8px]'>
<Label
htmlFor='content'
className='font-medium text-[13px] text-[var(--text-primary)] dark:text-[var(--text-primary)]'
>
Chunk Content
</Label>
<Textarea
id='content'
value={editedContent}
onChange={(e) => setEditedContent(e.target.value)}
placeholder={
userPermissions.canEdit ? 'Enter chunk content...' : 'Read-only view'
}
className='min-h-0 flex-1 resize-none'
disabled={isSaving || isNavigating || !userPermissions.canEdit}
readOnly={!userPermissions.canEdit}
/>
</div>
<form className='flex min-h-0 flex-1 flex-col'>
<ModalBody>
<div className='space-y-[12px]'>
{/* Document Info Section */}
<div className='flex items-center gap-3 rounded-lg border p-4'>
<div className='min-w-0 flex-1'>
<p className='font-medium text-[var(--text-primary)] text-sm'>
{document?.filename || 'Unknown Document'}
</p>
<p className='text-[var(--text-tertiary)] text-xs'>
Editing chunk #{chunk.chunkIndex} Page {currentPage} of {totalPages}
</p>
</div>
</div>
</div>
{/* Fixed Footer with Actions */}
<div className='absolute inset-x-0 bottom-0 bg-[var(--surface-1)] dark:bg-[var(--surface-1)]'>
<div className='flex w-full items-center justify-between gap-[8px] px-6 py-4'>
<Button
variant='default'
onClick={handleCloseAttempt}
type='button'
disabled={isSaving || isNavigating}
{/* Error Display */}
{error && (
<div className='flex items-center gap-2 rounded-md border border-[var(--text-error)]/50 bg-[var(--text-error)]/10 p-3'>
<AlertCircle className='h-4 w-4 text-[var(--text-error)]' />
<p className='text-[var(--text-error)] text-sm'>{error}</p>
</div>
)}
{/* Content Input Section */}
<div className='space-y-[8px]'>
<Label
htmlFor='content'
className='mb-[6.5px] block pl-[2px] font-medium text-[13px] text-[var(--text-primary)]'
>
Cancel
</Button>
{userPermissions.canEdit && (
<Button
variant='primary'
onClick={handleSaveContent}
type='button'
disabled={!isFormValid || isSaving || !hasUnsavedChanges || isNavigating}
>
{isSaving ? (
<>
<Loader2 className='mr-2 h-4 w-4 animate-spin' />
Saving...
</>
) : (
'Save Changes'
)}
</Button>
)}
Chunk Content
</Label>
<Textarea
id='content'
value={editedContent}
onChange={(e) => setEditedContent(e.target.value)}
placeholder={
userPermissions.canEdit ? 'Enter chunk content...' : 'Read-only view'
}
rows={10}
disabled={isSaving || isNavigating || !userPermissions.canEdit}
readOnly={!userPermissions.canEdit}
/>
</div>
</div>
</form>
</div>
</ModalBody>
<ModalFooter>
<Button
variant='default'
onClick={handleCloseAttempt}
type='button'
disabled={isSaving || isNavigating}
>
Cancel
</Button>
{userPermissions.canEdit && (
<Button
variant='primary'
onClick={handleSaveContent}
type='button'
disabled={!isFormValid || isSaving || !hasUnsavedChanges || isNavigating}
>
{isSaving ? (
<>
<Loader2 className='mr-2 h-4 w-4 animate-spin' />
Saving...
</>
) : (
'Save Changes'
)}
</Button>
)}
</ModalFooter>
</form>
</ModalContent>
</Modal>
{/* Unsaved Changes Alert */}
<Modal open={showUnsavedChangesAlert} onOpenChange={setShowUnsavedChangesAlert}>
<ModalContent className='flex flex-col gap-0 p-0'>
{/* Modal Header */}
<div className='flex-shrink-0 px-6 py-5'>
<ModalTitle className='font-medium text-[14px] text-[var(--text-primary)] dark:text-[var(--text-primary)]'>
Unsaved Changes
</ModalTitle>
<p className='mt-2 text-[12px] text-[var(--text-secondary)] dark:text-[var(--text-secondary)]'>
<ModalContent className='w-[400px]'>
<ModalHeader>Unsaved Changes</ModalHeader>
<ModalBody>
<p className='text-[12px] text-[var(--text-tertiary)]'>
You have unsaved changes to this chunk content.
{pendingNavigation
? ' Do you want to discard your changes and navigate to the next chunk?'
: ' Are you sure you want to discard your changes and close the editor?'}
</p>
</div>
{/* Modal Footer */}
<div className='flex w-full items-center justify-between gap-[8px] px-6 py-4'>
</ModalBody>
<ModalFooter>
<Button
variant='default'
onClick={() => {
@@ -356,11 +334,11 @@ export function EditChunkModal({
variant='primary'
onClick={handleConfirmDiscard}
type='button'
className='bg-[var(--text-error)] hover:bg-[var(--text-error)] dark:bg-[var(--text-error)] dark:hover:bg-[var(--text-error)]'
className='!bg-[var(--text-error)] !text-white hover:!bg-[var(--text-error)]/90'
>
Discard Changes
</Button>
</div>
</ModalFooter>
</ModalContent>
</Modal>
</>

View File

@@ -19,11 +19,10 @@ import { useParams, useRouter } from 'next/navigation'
import {
Button,
Modal,
ModalBody,
ModalContent,
ModalDescription,
ModalFooter,
ModalHeader,
ModalTitle,
Tooltip,
} from '@/components/emcn'
import { Trash } from '@/components/emcn/icons/trash'
@@ -1143,31 +1142,29 @@ export function KnowledgeBase({
{/* Delete Confirmation Dialog */}
<Modal open={showDeleteDialog} onOpenChange={setShowDeleteDialog}>
<ModalContent>
<ModalHeader>
<ModalTitle>Delete Knowledge Base</ModalTitle>
<ModalDescription>
<ModalContent className='w-[400px]'>
<ModalHeader>Delete Knowledge Base</ModalHeader>
<ModalBody>
<p className='text-[12px] text-[var(--text-tertiary)]'>
Are you sure you want to delete "{knowledgeBaseName}"? This will permanently delete
the knowledge base and all {totalItems} document
{totalItems === 1 ? '' : 's'} within it.{' '}
<span className='text-[var(--text-error)] dark:text-[var(--text-error)]'>
This action cannot be undone.
</span>
</ModalDescription>
</ModalHeader>
<span className='text-[var(--text-error)]'>This action cannot be undone.</span>
</p>
</ModalBody>
<ModalFooter>
<Button
className='h-[32px] px-[12px]'
variant='outline'
variant='active'
onClick={() => setShowDeleteDialog(false)}
disabled={isDeleting}
>
Cancel
</Button>
<Button
className='h-[32px] bg-[var(--text-error)] px-[12px] text-[var(--white)] hover:bg-[var(--text-error)] hover:text-[var(--white)] dark:bg-[var(--text-error)] dark:text-[var(--white)] hover:dark:bg-[var(--text-error)] dark:hover:text-[var(--white)]'
variant='primary'
onClick={handleDeleteKnowledgeBase}
disabled={isDeleting}
className='!bg-[var(--text-error)] !text-white hover:!bg-[var(--text-error)]/90'
>
{isDeleting ? 'Deleting...' : 'Delete Knowledge Base'}
</Button>

View File

@@ -3,14 +3,7 @@
import { useRef, useState } from 'react'
import { AlertCircle, Check, Loader2, X } from 'lucide-react'
import { useParams } from 'next/navigation'
import {
Button,
Modal,
ModalContent,
ModalFooter,
ModalHeader,
ModalTitle,
} from '@/components/emcn'
import { Button, Modal, ModalBody, ModalContent, ModalFooter, ModalHeader } from '@/components/emcn'
import { Label } from '@/components/ui/label'
import { Progress } from '@/components/ui/progress'
import { createLogger } from '@/lib/logs/console/logger'
@@ -156,15 +149,14 @@ export function UploadModal({
return (
<Modal open={open} onOpenChange={handleClose}>
<ModalContent className='flex max-h-[95vh] flex-col overflow-hidden sm:max-w-[600px]'>
<ModalHeader>
<ModalTitle>Upload Documents</ModalTitle>
</ModalHeader>
<ModalContent className='max-h-[95vh] sm:max-w-[600px]'>
<ModalHeader>Upload Documents</ModalHeader>
<div className='flex-1 space-y-6 overflow-auto'>
{/* File Upload Section */}
<div className='space-y-3'>
<Label>Select Files</Label>
<ModalBody>
<div className='space-y-[12px]'>
<Label className='mb-[6.5px] block pl-[2px] font-medium text-[13px] text-[var(--text-primary)]'>
Select Files
</Label>
{files.length === 0 ? (
<div
@@ -172,10 +164,10 @@ export function UploadModal({
onDragLeave={handleDragLeave}
onDrop={handleDrop}
onClick={() => fileInputRef.current?.click()}
className={`relative flex cursor-pointer items-center justify-center rounded-lg border-2 border-dashed p-8 text-center transition-colors ${
className={`relative flex cursor-pointer items-center justify-center rounded-lg border-[1.5px] border-dashed p-8 text-center transition-colors ${
isDragging
? 'border-primary bg-primary/5'
: 'border-muted-foreground/25 hover:border-muted-foreground/40 hover:bg-muted/10'
? 'border-[var(--brand-primary-hex)] bg-[var(--brand-primary-hex)]/5'
: 'border-[var(--c-575757)] hover:border-[var(--text-secondary)]'
}`}
>
<input
@@ -187,10 +179,10 @@ export function UploadModal({
multiple
/>
<div className='space-y-2'>
<p className='font-medium text-sm'>
<p className='font-medium text-[var(--text-primary)] text-sm'>
{isDragging ? 'Drop files here!' : 'Drop files here or click to browse'}
</p>
<p className='text-muted-foreground text-xs'>
<p className='text-[var(--text-tertiary)] text-xs'>
Supports PDF, DOC, DOCX, TXT, CSV, XLS, XLSX, MD, PPT, PPTX, HTML, JSON, YAML,
YML (max 100MB each)
</p>
@@ -205,8 +197,8 @@ export function UploadModal({
onClick={() => fileInputRef.current?.click()}
className={`cursor-pointer rounded-md border border-dashed p-3 text-center transition-colors ${
isDragging
? 'border-primary bg-primary/5'
: 'border-muted-foreground/25 hover:border-muted-foreground/40'
? 'border-[var(--brand-primary-hex)] bg-[var(--brand-primary-hex)]/5'
: 'border-[var(--c-575757)] hover:border-[var(--text-secondary)]'
}`}
>
<input
@@ -217,7 +209,7 @@ export function UploadModal({
className='hidden'
multiple
/>
<p className='text-sm'>
<p className='text-[var(--text-primary)] text-sm'>
{isDragging ? 'Drop more files here!' : 'Drop more files or click to browse'}
</p>
</div>
@@ -238,12 +230,16 @@ export function UploadModal({
{isCurrentlyUploading && (
<Loader2 className='h-4 w-4 animate-spin text-[var(--brand-primary-hex)]' />
)}
{isCompleted && <Check className='h-4 w-4 text-green-500' />}
{isFailed && <X className='h-4 w-4 text-red-500' />}
<p className='truncate font-medium text-sm'>{file.name}</p>
{isCompleted && (
<Check className='h-4 w-4 text-[var(--text-success)]' />
)}
{isFailed && <X className='h-4 w-4 text-[var(--text-error)]' />}
<p className='truncate font-medium text-[var(--text-primary)] text-sm'>
{file.name}
</p>
</div>
<div className='flex items-center gap-2'>
<p className='text-muted-foreground text-xs'>
<p className='text-[var(--text-tertiary)] text-xs'>
{formatFileSize(file.size)}
</p>
{isCurrentlyUploading && (
@@ -253,7 +249,9 @@ export function UploadModal({
)}
</div>
{isFailed && fileStatus?.error && (
<p className='mt-1 text-red-500 text-xs'>{fileStatus.error}</p>
<p className='mt-1 text-[var(--text-error)] text-xs'>
{fileStatus.error}
</p>
)}
</div>
<Button
@@ -261,7 +259,7 @@ export function UploadModal({
variant='ghost'
onClick={() => removeFile(index)}
disabled={isUploading}
className='h-8 w-8 p-0 text-muted-foreground hover:text-destructive'
className='h-8 w-8 p-0 text-[var(--text-tertiary)] hover:text-[var(--text-error)]'
>
<X className='h-4 w-4' />
</Button>
@@ -275,36 +273,32 @@ export function UploadModal({
{/* Show upload error first, then file error only if no upload error */}
{uploadError && (
<div className='rounded-md border border-destructive/50 bg-destructive/10 px-3 py-2'>
<div className='rounded-md border border-[var(--text-error)]/50 bg-[var(--text-error)]/10 px-3 py-2'>
<div className='flex items-start gap-2'>
<AlertCircle className='mt-0.5 h-4 w-4 shrink-0 text-destructive' />
<div className='flex-1 text-destructive text-sm'>{uploadError.message}</div>
<AlertCircle className='mt-0.5 h-4 w-4 shrink-0 text-[var(--text-error)]' />
<div className='flex-1 text-[var(--text-error)] text-sm'>
{uploadError.message}
</div>
</div>
</div>
)}
{fileError && !uploadError && (
<div className='rounded-md border border-destructive/50 bg-destructive/10 px-3 py-2 text-destructive text-sm'>
<div className='rounded-md border border-[var(--text-error)]/50 bg-[var(--text-error)]/10 px-3 py-2 text-[var(--text-error)] text-sm'>
{fileError}
</div>
)}
</div>
</div>
</ModalBody>
<ModalFooter>
<Button
variant='outline'
onClick={handleClose}
disabled={isUploading}
className='h-[32px] px-[12px]'
>
<Button variant='default' onClick={handleClose} disabled={isUploading}>
Cancel
</Button>
<Button
variant='primary'
onClick={handleUpload}
disabled={files.length === 0 || isUploading}
className='h-[32px] px-[12px]'
>
{isUploading
? uploadProgress.stage === 'uploading'

View File

@@ -7,7 +7,13 @@ import { useParams } from 'next/navigation'
import { useForm } from 'react-hook-form'
import { z } from 'zod'
import { Button, Input, Label, Textarea } from '@/components/emcn'
import { Modal, ModalContent, ModalTitle } from '@/components/emcn/components/modal/modal'
import {
Modal,
ModalBody,
ModalContent,
ModalFooter,
ModalHeader,
} from '@/components/emcn/components/modal/modal'
import { Alert, AlertDescription, AlertTitle } from '@/components/ui/alert'
import { Progress } from '@/components/ui/progress'
import { createLogger } from '@/lib/logs/console/logger'
@@ -310,317 +316,334 @@ export function CreateModal({ open, onOpenChange, onKnowledgeBaseCreated }: Crea
return (
<Modal open={open} onOpenChange={handleClose}>
<ModalContent className='flex h-[78vh] max-h-[95vh] flex-col gap-0 overflow-hidden p-0 sm:max-w-[750px]'>
{/* Modal Header */}
<div className='flex-shrink-0 px-6 py-5'>
<ModalTitle className='font-medium text-[14px] text-[var(--text-primary)] dark:text-[var(--text-primary)]'>
Create Knowledge Base
</ModalTitle>
</div>
<ModalContent className='h-[78vh] max-h-[95vh] sm:max-w-[750px]'>
<ModalHeader>Create Knowledge Base</ModalHeader>
{/* Modal Body */}
<div className='relative flex min-h-0 flex-1 flex-col overflow-hidden'>
<form onSubmit={handleSubmit(onSubmit)} className='flex min-h-0 flex-1 flex-col'>
{/* Scrollable Content */}
<div
ref={scrollContainerRef}
className='scrollbar-hide min-h-0 flex-1 overflow-y-auto pb-20'
>
<div className='px-6'>
{/* Show upload error first, then submit error only if no upload error */}
{uploadError && (
<Alert variant='destructive'>
<AlertCircle className='h-4 w-4' />
<AlertTitle>Upload Error</AlertTitle>
<AlertDescription>{uploadError.message}</AlertDescription>
</Alert>
<form onSubmit={handleSubmit(onSubmit)} className='flex min-h-0 flex-1 flex-col'>
<ModalBody>
<div ref={scrollContainerRef} className='space-y-[12px]'>
{/* Show upload error first, then submit error only if no upload error */}
{uploadError && (
<Alert variant='destructive'>
<AlertCircle className='h-4 w-4' />
<AlertTitle>Upload Error</AlertTitle>
<AlertDescription>{uploadError.message}</AlertDescription>
</Alert>
)}
{submitStatus && submitStatus.type === 'error' && !uploadError && (
<Alert variant='destructive'>
<AlertCircle className='h-4 w-4' />
<AlertTitle>Error</AlertTitle>
<AlertDescription>{submitStatus.message}</AlertDescription>
</Alert>
)}
{/* Form Fields Section */}
<div className='space-y-[8px]'>
<Label
htmlFor='name'
className='mb-[6.5px] block pl-[2px] font-medium text-[13px] text-[var(--text-primary)]'
>
Name *
</Label>
<Input
id='name'
placeholder='Enter knowledge base name'
{...register('name')}
className={errors.name ? 'border-[var(--text-error)]' : ''}
/>
{errors.name && (
<p className='mt-1 text-[var(--text-error)] text-sm'>{errors.name.message}</p>
)}
</div>
{submitStatus && submitStatus.type === 'error' && !uploadError && (
<Alert variant='destructive'>
<AlertCircle className='h-4 w-4' />
<AlertTitle>Error</AlertTitle>
<AlertDescription>{submitStatus.message}</AlertDescription>
</Alert>
<div className='space-y-[8px]'>
<Label
htmlFor='description'
className='mb-[6.5px] block pl-[2px] font-medium text-[13px] text-[var(--text-primary)]'
>
Description
</Label>
<Textarea
id='description'
placeholder='Describe what this knowledge base contains (optional)'
rows={3}
{...register('description')}
className={errors.description ? 'border-[var(--text-error)]' : ''}
/>
{errors.description && (
<p className='mt-1 text-[var(--text-error)] text-sm'>
{errors.description.message}
</p>
)}
</div>
{/* Form Fields Section */}
<div className='space-y-[12px]'>
{/* Chunk Configuration Section */}
<div className='space-y-[12px] rounded-lg border p-5'>
<h3 className='font-medium text-[var(--text-primary)] text-sm'>
Chunking Configuration
</h3>
{/* Min and Max Chunk Size Row */}
<div className='grid grid-cols-2 gap-4'>
<div className='space-y-[8px]'>
<Label htmlFor='name'>Name *</Label>
<Label
htmlFor='minChunkSize'
className='mb-[6.5px] block pl-[2px] font-medium text-[13px] text-[var(--text-primary)]'
>
Min Chunk Size
</Label>
<Input
id='name'
placeholder='Enter knowledge base name'
{...register('name')}
className={errors.name ? 'border-red-500' : ''}
id='minChunkSize'
type='number'
placeholder='1'
{...register('minChunkSize', { valueAsNumber: true })}
className={errors.minChunkSize ? 'border-[var(--text-error)]' : ''}
autoComplete='off'
data-form-type='other'
name='min-chunk-size'
/>
{errors.name && (
<p className='mt-1 text-red-500 text-sm'>{errors.name.message}</p>
{errors.minChunkSize && (
<p className='mt-1 text-[var(--text-error)] text-xs'>
{errors.minChunkSize.message}
</p>
)}
</div>
<div className='space-y-[8px]'>
<Label htmlFor='description'>Description</Label>
<Textarea
id='description'
placeholder='Describe what this knowledge base contains (optional)'
rows={3}
{...register('description')}
className={errors.description ? 'border-red-500' : ''}
<Label
htmlFor='maxChunkSize'
className='mb-[6.5px] block pl-[2px] font-medium text-[13px] text-[var(--text-primary)]'
>
Max Chunk Size
</Label>
<Input
id='maxChunkSize'
type='number'
placeholder='1024'
{...register('maxChunkSize', { valueAsNumber: true })}
className={errors.maxChunkSize ? 'border-[var(--text-error)]' : ''}
autoComplete='off'
data-form-type='other'
name='max-chunk-size'
/>
{errors.description && (
<p className='mt-1 text-red-500 text-sm'>{errors.description.message}</p>
)}
</div>
{/* Chunk Configuration Section */}
<div className='space-y-[12px] rounded-lg border p-5'>
<h3 className='font-medium text-foreground text-sm'>Chunking Configuration</h3>
{/* Min and Max Chunk Size Row */}
<div className='grid grid-cols-2 gap-4'>
<div className='space-y-[8px]'>
<Label htmlFor='minChunkSize'>Min Chunk Size</Label>
<Input
id='minChunkSize'
type='number'
placeholder='1'
{...register('minChunkSize', { valueAsNumber: true })}
className={errors.minChunkSize ? 'border-red-500' : ''}
autoComplete='off'
data-form-type='other'
name='min-chunk-size'
/>
{errors.minChunkSize && (
<p className='mt-1 text-red-500 text-xs'>{errors.minChunkSize.message}</p>
)}
</div>
<div className='space-y-[8px]'>
<Label htmlFor='maxChunkSize'>Max Chunk Size</Label>
<Input
id='maxChunkSize'
type='number'
placeholder='1024'
{...register('maxChunkSize', { valueAsNumber: true })}
className={errors.maxChunkSize ? 'border-red-500' : ''}
autoComplete='off'
data-form-type='other'
name='max-chunk-size'
/>
{errors.maxChunkSize && (
<p className='mt-1 text-red-500 text-xs'>{errors.maxChunkSize.message}</p>
)}
</div>
</div>
{/* Overlap Size */}
<div className='space-y-[8px]'>
<Label htmlFor='overlapSize'>Overlap Size</Label>
<Input
id='overlapSize'
type='number'
placeholder='200'
{...register('overlapSize', { valueAsNumber: true })}
className={errors.overlapSize ? 'border-red-500' : ''}
autoComplete='off'
data-form-type='other'
name='overlap-size'
/>
{errors.overlapSize && (
<p className='mt-1 text-red-500 text-xs'>{errors.overlapSize.message}</p>
)}
</div>
<p className='text-muted-foreground text-xs'>
Configure how documents are split into chunks for processing. Smaller chunks
provide more precise retrieval but may lose context.
</p>
</div>
{/* File Upload Section */}
<div className='space-y-[12px]'>
<Label>Upload Documents</Label>
{files.length === 0 ? (
<div
ref={dropZoneRef}
onDragEnter={handleDragEnter}
onDragOver={handleDragOver}
onDragLeave={handleDragLeave}
onDrop={handleDrop}
onClick={() => fileInputRef.current?.click()}
className={`relative flex cursor-pointer items-center justify-center rounded-lg border-[1.5px] border-dashed py-8 text-center transition-all duration-200 ${
isDragging
? 'border-purple-300 bg-purple-50 shadow-sm'
: 'border-muted-foreground/25 hover:border-muted-foreground/40 hover:bg-muted/10'
}`}
>
<input
ref={fileInputRef}
type='file'
accept={ACCEPT_ATTRIBUTE}
onChange={handleFileChange}
className='hidden'
multiple
/>
<div className='flex flex-col items-center gap-3'>
<div className='space-y-1'>
<p
className={`font-medium text-sm transition-colors duration-200 ${
isDragging ? 'text-purple-700' : ''
}`}
>
{isDragging
? 'Drop files here!'
: 'Drop files here or click to browse'}
</p>
<p className='text-muted-foreground text-xs'>
Supports PDF, DOC, DOCX, TXT, CSV, XLS, XLSX, MD, PPT, PPTX, HTML,
JSON, YAML, YML (max 100MB each)
</p>
</div>
</div>
</div>
) : (
<div className='space-y-2'>
{/* Compact drop area at top of file list */}
<div
ref={dropZoneRef}
onDragEnter={handleDragEnter}
onDragOver={handleDragOver}
onDragLeave={handleDragLeave}
onDrop={handleDrop}
onClick={() => fileInputRef.current?.click()}
className={`cursor-pointer rounded-md border border-dashed p-3 text-center transition-all duration-200 ${
isDragging
? 'border-purple-300 bg-purple-50'
: 'border-muted-foreground/25 hover:border-muted-foreground/40 hover:bg-muted/10'
}`}
>
<input
ref={fileInputRef}
type='file'
accept={ACCEPT_ATTRIBUTE}
onChange={handleFileChange}
className='hidden'
multiple
/>
<div className='flex items-center justify-center gap-2'>
<div>
<p
className={`font-medium text-sm transition-colors duration-200 ${
isDragging ? 'text-purple-700' : ''
}`}
>
{isDragging
? 'Drop more files here!'
: 'Drop more files or click to browse'}
</p>
<p className='text-muted-foreground text-xs'>
PDF, DOC, DOCX, TXT, CSV, XLS, XLSX, MD, PPT, PPTX, HTML (max 100MB
each)
</p>
</div>
</div>
</div>
{/* File list */}
<div className='space-y-2'>
{files.map((file, index) => {
const fileStatus = uploadProgress.fileStatuses?.[index]
const isCurrentlyUploading = fileStatus?.status === 'uploading'
const isCompleted = fileStatus?.status === 'completed'
const isFailed = fileStatus?.status === 'failed'
return (
<div
key={index}
className='flex items-center gap-3 rounded-md border p-3'
>
{getFileIcon(file.type, file.name)}
<div className='min-w-0 flex-1'>
<div className='flex items-center gap-2'>
{isCurrentlyUploading && (
<Loader2 className='h-4 w-4 animate-spin text-[var(--brand-primary-hex)]' />
)}
{isCompleted && <Check className='h-4 w-4 text-green-500' />}
{isFailed && <X className='h-4 w-4 text-red-500' />}
<p className='truncate font-medium text-sm'>{file.name}</p>
</div>
<div className='flex items-center gap-2'>
<p className='text-muted-foreground text-xs'>
{formatFileSize(file.size)}
</p>
{isCurrentlyUploading && (
<div className='min-w-0 max-w-32 flex-1'>
<Progress
value={fileStatus?.progress || 0}
className='h-1'
/>
</div>
)}
</div>
{isFailed && fileStatus?.error && (
<p className='mt-1 text-red-500 text-xs'>{fileStatus.error}</p>
)}
</div>
<Button
type='button'
variant='ghost'
onClick={() => removeFile(index)}
disabled={isUploading}
className='h-8 w-8 p-0 text-muted-foreground hover:text-destructive'
>
<X className='h-4 w-4' />
</Button>
</div>
)
})}
</div>
</div>
)}
{fileError && (
<Alert variant='destructive'>
<AlertCircle className='h-4 w-4' />
<AlertTitle>Error</AlertTitle>
<AlertDescription>{fileError}</AlertDescription>
</Alert>
{errors.maxChunkSize && (
<p className='mt-1 text-[var(--text-error)] text-xs'>
{errors.maxChunkSize.message}
</p>
)}
</div>
</div>
</div>
</div>
{/* Fixed Footer with Actions */}
<div className='absolute inset-x-0 bottom-0 bg-[var(--surface-1)] dark:bg-[var(--surface-1)]'>
<div className='flex w-full items-center justify-between gap-[8px] px-6 py-4'>
<Button
variant='default'
onClick={() => handleClose(false)}
type='button'
disabled={isSubmitting}
>
Cancel
</Button>
<Button
variant='primary'
type='submit'
disabled={isSubmitting || !nameValue?.trim()}
>
{isSubmitting
? isUploading
? uploadProgress.stage === 'uploading'
? `Uploading ${uploadProgress.filesCompleted}/${uploadProgress.totalFiles}...`
: uploadProgress.stage === 'processing'
? 'Processing...'
: 'Creating...'
: 'Creating...'
: 'Create Knowledge Base'}
</Button>
{/* Overlap Size */}
<div className='space-y-[8px]'>
<Label
htmlFor='overlapSize'
className='mb-[6.5px] block pl-[2px] font-medium text-[13px] text-[var(--text-primary)]'
>
Overlap Size
</Label>
<Input
id='overlapSize'
type='number'
placeholder='200'
{...register('overlapSize', { valueAsNumber: true })}
className={errors.overlapSize ? 'border-[var(--text-error)]' : ''}
autoComplete='off'
data-form-type='other'
name='overlap-size'
/>
{errors.overlapSize && (
<p className='mt-1 text-[var(--text-error)] text-xs'>
{errors.overlapSize.message}
</p>
)}
</div>
<p className='text-[var(--text-tertiary)] text-xs'>
Configure how documents are split into chunks for processing. Smaller chunks
provide more precise retrieval but may lose context.
</p>
</div>
{/* File Upload Section */}
<div className='space-y-[12px]'>
<Label className='mb-[6.5px] block pl-[2px] font-medium text-[13px] text-[var(--text-primary)]'>
Upload Documents
</Label>
{files.length === 0 ? (
<div
ref={dropZoneRef}
onDragEnter={handleDragEnter}
onDragOver={handleDragOver}
onDragLeave={handleDragLeave}
onDrop={handleDrop}
onClick={() => fileInputRef.current?.click()}
className={`relative flex cursor-pointer items-center justify-center rounded-lg border-[1.5px] border-dashed py-8 text-center transition-all duration-200 ${
isDragging
? 'border-[var(--brand-primary-hex)] bg-[var(--brand-primary-hex)]/5'
: 'border-[var(--c-575757)] hover:border-[var(--text-secondary)]'
}`}
>
<input
ref={fileInputRef}
type='file'
accept={ACCEPT_ATTRIBUTE}
onChange={handleFileChange}
className='hidden'
multiple
/>
<div className='flex flex-col items-center gap-3'>
<div className='space-y-1'>
<p
className={`font-medium text-[var(--text-primary)] text-sm transition-colors duration-200 ${
isDragging ? 'text-[var(--brand-primary-hex)]' : ''
}`}
>
{isDragging ? 'Drop files here!' : 'Drop files here or click to browse'}
</p>
<p className='text-[var(--text-tertiary)] text-xs'>
Supports PDF, DOC, DOCX, TXT, CSV, XLS, XLSX, MD, PPT, PPTX, HTML, JSON,
YAML, YML (max 100MB each)
</p>
</div>
</div>
</div>
) : (
<div className='space-y-2'>
{/* Compact drop area at top of file list */}
<div
ref={dropZoneRef}
onDragEnter={handleDragEnter}
onDragOver={handleDragOver}
onDragLeave={handleDragLeave}
onDrop={handleDrop}
onClick={() => fileInputRef.current?.click()}
className={`cursor-pointer rounded-md border border-dashed p-3 text-center transition-all duration-200 ${
isDragging
? 'border-[var(--brand-primary-hex)] bg-[var(--brand-primary-hex)]/5'
: 'border-[var(--c-575757)] hover:border-[var(--text-secondary)]'
}`}
>
<input
ref={fileInputRef}
type='file'
accept={ACCEPT_ATTRIBUTE}
onChange={handleFileChange}
className='hidden'
multiple
/>
<div className='flex items-center justify-center gap-2'>
<div>
<p
className={`font-medium text-[var(--text-primary)] text-sm transition-colors duration-200 ${
isDragging ? 'text-[var(--brand-primary-hex)]' : ''
}`}
>
{isDragging
? 'Drop more files here!'
: 'Drop more files or click to browse'}
</p>
<p className='text-[var(--text-tertiary)] text-xs'>
PDF, DOC, DOCX, TXT, CSV, XLS, XLSX, MD, PPT, PPTX, HTML (max 100MB
each)
</p>
</div>
</div>
</div>
{/* File list */}
<div className='space-y-2'>
{files.map((file, index) => {
const fileStatus = uploadProgress.fileStatuses?.[index]
const isCurrentlyUploading = fileStatus?.status === 'uploading'
const isCompleted = fileStatus?.status === 'completed'
const isFailed = fileStatus?.status === 'failed'
return (
<div
key={index}
className='flex items-center gap-3 rounded-md border p-3'
>
{getFileIcon(file.type, file.name)}
<div className='min-w-0 flex-1'>
<div className='flex items-center gap-2'>
{isCurrentlyUploading && (
<Loader2 className='h-4 w-4 animate-spin text-[var(--brand-primary-hex)]' />
)}
{isCompleted && (
<Check className='h-4 w-4 text-[var(--text-success)]' />
)}
{isFailed && <X className='h-4 w-4 text-[var(--text-error)]' />}
<p className='truncate font-medium text-[var(--text-primary)] text-sm'>
{file.name}
</p>
</div>
<div className='flex items-center gap-2'>
<p className='text-[var(--text-tertiary)] text-xs'>
{formatFileSize(file.size)}
</p>
{isCurrentlyUploading && (
<div className='min-w-0 max-w-32 flex-1'>
<Progress value={fileStatus?.progress || 0} className='h-1' />
</div>
)}
</div>
{isFailed && fileStatus?.error && (
<p className='mt-1 text-[var(--text-error)] text-xs'>
{fileStatus.error}
</p>
)}
</div>
<Button
type='button'
variant='ghost'
onClick={() => removeFile(index)}
disabled={isUploading}
className='h-8 w-8 p-0 text-[var(--text-tertiary)] hover:text-[var(--text-error)]'
>
<X className='h-4 w-4' />
</Button>
</div>
)
})}
</div>
</div>
)}
{fileError && (
<Alert variant='destructive'>
<AlertCircle className='h-4 w-4' />
<AlertTitle>Error</AlertTitle>
<AlertDescription>{fileError}</AlertDescription>
</Alert>
)}
</div>
</div>
</form>
</div>
</ModalBody>
<ModalFooter>
<Button
variant='default'
onClick={() => handleClose(false)}
type='button'
disabled={isSubmitting}
>
Cancel
</Button>
<Button variant='primary' type='submit' disabled={isSubmitting || !nameValue?.trim()}>
{isSubmitting
? isUploading
? uploadProgress.stage === 'uploading'
? `Uploading ${uploadProgress.filesCompleted}/${uploadProgress.totalFiles}...`
: uploadProgress.stage === 'processing'
? 'Processing...'
: 'Creating...'
: 'Creating...'
: 'Create Knowledge Base'}
</Button>
</ModalFooter>
</form>
</ModalContent>
</Modal>
)

View File

@@ -1,11 +1,36 @@
import type { ReactNode } from 'react'
import { ArrowUp, Loader2, RefreshCw, Search } from 'lucide-react'
import { Button, Tooltip } from '@/components/emcn'
import { ArrowUp, Bell, Loader2, RefreshCw, Search } from 'lucide-react'
import {
Button,
Popover,
PopoverContent,
PopoverItem,
PopoverScrollArea,
PopoverTrigger,
Tooltip,
} from '@/components/emcn'
import { MoreHorizontal } from '@/components/emcn/icons'
import { Input } from '@/components/ui/input'
import { cn } from '@/lib/core/utils/cn'
import { soehne } from '@/app/_styles/fonts/soehne/soehne'
import Timeline from '@/app/workspace/[workspaceId]/logs/components/filters/components/timeline'
interface ControlsProps {
searchQuery?: string
setSearchQuery?: (v: string) => void
isRefetching: boolean
resetToNow: () => void
live: boolean
setLive: (v: (prev: boolean) => boolean) => void
viewMode: string
setViewMode: (mode: 'logs' | 'dashboard') => void
searchComponent?: ReactNode
showExport?: boolean
onExport?: () => void
canConfigureNotifications?: boolean
onConfigureNotifications?: () => void
}
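// Reviewer sketch (not part of this diff): how the two new props are wired from the logs
// page further down in this compare, where `userPermissions` comes from the
// workspace-permissions provider and the modal open state is local to the page:
//
//   <Controls
//     {...existingControlsProps}
//     canConfigureNotifications={userPermissions.canEdit}
//     onConfigureNotifications={() => setIsNotificationSettingsOpen(true)}
//   />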
export function Controls({
searchQuery,
setSearchQuery,
@@ -17,19 +42,9 @@ export function Controls({
setViewMode,
searchComponent,
onExport,
}: {
searchQuery?: string
setSearchQuery?: (v: string) => void
isRefetching: boolean
resetToNow: () => void
live: boolean
setLive: (v: (prev: boolean) => boolean) => void
viewMode: string
setViewMode: (mode: 'logs' | 'dashboard') => void
searchComponent?: ReactNode
showExport?: boolean
onExport?: () => void
}) {
canConfigureNotifications,
onConfigureNotifications,
}: ControlsProps) {
return (
<div
className={cn(
@@ -72,20 +87,29 @@ export function Controls({
<div className='ml-auto flex flex-shrink-0 items-center gap-3'>
{viewMode !== 'dashboard' && (
<Tooltip.Root>
<Tooltip.Trigger asChild>
<Button
variant='ghost'
onClick={onExport}
className='h-9 w-9 p-0 hover:bg-secondary'
aria-label='Export CSV'
>
<ArrowUp className='h-4 w-4' />
<span className='sr-only'>Export CSV</span>
<Popover>
<PopoverTrigger asChild>
<Button variant='ghost' className='h-9 w-9 p-0 hover:bg-secondary'>
<MoreHorizontal className='h-4 w-4' />
<span className='sr-only'>More options</span>
</Button>
</Tooltip.Trigger>
<Tooltip.Content>Export CSV</Tooltip.Content>
</Tooltip.Root>
</PopoverTrigger>
<PopoverContent align='end' sideOffset={4}>
<PopoverScrollArea>
<PopoverItem onClick={onExport}>
<ArrowUp className='h-3 w-3' />
<span>Export as CSV</span>
</PopoverItem>
<PopoverItem
onClick={canConfigureNotifications ? onConfigureNotifications : undefined}
disabled={!canConfigureNotifications}
>
<Bell className='h-3 w-3' />
<span>Configure Notifications</span>
</PopoverItem>
</PopoverScrollArea>
</PopoverContent>
</Popover>
)}
<Tooltip.Root>

View File

@@ -0,0 +1,2 @@
export { NotificationSettings } from './notification-settings'
export { WorkflowSelector } from './workflow-selector'

View File

@@ -0,0 +1,116 @@
'use client'
import { useCallback, useEffect, useState } from 'react'
import { Hash, Lock } from 'lucide-react'
import { Combobox, type ComboboxOption } from '@/components/emcn'
import { createLogger } from '@/lib/logs/console/logger'
const logger = createLogger('SlackChannelSelector')
interface SlackChannel {
id: string
name: string
isPrivate: boolean
}
interface SlackChannelSelectorProps {
accountId: string
value: string
onChange: (channelId: string, channelName: string) => void
disabled?: boolean
error?: string
}
/**
* Standalone Slack channel selector that fetches channels for a given account.
*/
export function SlackChannelSelector({
accountId,
value,
onChange,
disabled = false,
error,
}: SlackChannelSelectorProps) {
const [channels, setChannels] = useState<SlackChannel[]>([])
const [isLoading, setIsLoading] = useState(false)
const [fetchError, setFetchError] = useState<string | null>(null)
const fetchChannels = useCallback(async () => {
if (!accountId) {
setChannels([])
return
}
setIsLoading(true)
setFetchError(null)
try {
const response = await fetch('/api/tools/slack/channels', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ credential: accountId }),
})
if (!response.ok) {
const data = await response.json().catch(() => ({}))
throw new Error(data.error || 'Failed to fetch channels')
}
const data = await response.json()
setChannels(data.channels || [])
} catch (err) {
logger.error('Failed to fetch Slack channels', { error: err })
setFetchError(err instanceof Error ? err.message : 'Failed to fetch channels')
setChannels([])
} finally {
setIsLoading(false)
}
}, [accountId])
useEffect(() => {
fetchChannels()
}, [fetchChannels])
const options: ComboboxOption[] = channels.map((channel) => ({
label: channel.name,
value: channel.id,
icon: channel.isPrivate ? Lock : Hash,
}))
const selectedChannel = channels.find((c) => c.id === value)
if (!accountId) {
return (
<div className='rounded-[8px] border border-dashed p-3 text-center'>
<p className='text-muted-foreground text-sm'>Select a Slack account first</p>
</div>
)
}
const handleChange = (channelId: string) => {
const channel = channels.find((c) => c.id === channelId)
onChange(channelId, channel?.name || '')
}
return (
<div className='space-y-1'>
<Combobox
options={options}
value={value}
onChange={handleChange}
placeholder={
channels.length === 0 && !isLoading ? 'No channels available' : 'Select channel...'
}
disabled={disabled || channels.length === 0}
isLoading={isLoading}
error={fetchError}
/>
{selectedChannel && !fetchError && (
<p className='text-muted-foreground text-xs'>
{selectedChannel.isPrivate ? 'Private' : 'Public'} channel: #{selectedChannel.name}
</p>
)}
{error && <p className='text-red-400 text-xs'>{error}</p>}
</div>
)
}
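A minimal parent-form sketch for the new selector. The component and its props come from this file; the import path, the field component, and its state are assumptions, not part of this PR:

import { useState } from 'react'
import { SlackChannelSelector } from './slack-channel-selector'

export function NotificationChannelField({ accountId }: { accountId: string }) {
  // Track both the id (sent to the API) and the display name chosen in the combobox.
  const [channel, setChannel] = useState({ id: '', name: '' })

  return (
    <SlackChannelSelector
      accountId={accountId}
      value={channel.id}
      onChange={(channelId, channelName) => setChannel({ id: channelId, name: channelName })}
    />
  )
}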

View File

@@ -0,0 +1,183 @@
'use client'
import { useEffect, useMemo, useState } from 'react'
import { Layers, X } from 'lucide-react'
import { Button, Combobox, type ComboboxOption } from '@/components/emcn'
import { Label, Skeleton } from '@/components/ui'
interface WorkflowSelectorProps {
workspaceId: string
selectedIds: string[]
allWorkflows: boolean
onChange: (ids: string[], allWorkflows: boolean) => void
error?: string
}
const ALL_WORKFLOWS_VALUE = '__all_workflows__'
/**
* Multi-select workflow selector with "All Workflows" option.
*/
export function WorkflowSelector({
workspaceId,
selectedIds,
allWorkflows,
onChange,
error,
}: WorkflowSelectorProps) {
const [workflows, setWorkflows] = useState<Array<{ id: string; name: string }>>([])
const [isLoading, setIsLoading] = useState(true)
useEffect(() => {
const load = async () => {
try {
setIsLoading(true)
const response = await fetch(`/api/workflows?workspaceId=${workspaceId}`)
if (response.ok) {
const data = await response.json()
setWorkflows(data.data || [])
}
} catch {
setWorkflows([])
} finally {
setIsLoading(false)
}
}
load()
}, [workspaceId])
const options: ComboboxOption[] = useMemo(() => {
const workflowOptions = workflows.map((w) => ({
label: w.name,
value: w.id,
}))
return [
{
label: 'All Workflows',
value: ALL_WORKFLOWS_VALUE,
icon: Layers,
},
...workflowOptions,
]
}, [workflows])
const currentValues = useMemo(() => {
if (allWorkflows) {
return [ALL_WORKFLOWS_VALUE]
}
return selectedIds
}, [allWorkflows, selectedIds])
const handleMultiSelectChange = (values: string[]) => {
const hasAllWorkflows = values.includes(ALL_WORKFLOWS_VALUE)
const hadAllWorkflows = allWorkflows
if (hasAllWorkflows && !hadAllWorkflows) {
// User selected "All Workflows" - clear individual selections
onChange([], true)
} else if (!hasAllWorkflows && hadAllWorkflows) {
// User deselected "All Workflows" - switch to individual selection
onChange(
values.filter((v) => v !== ALL_WORKFLOWS_VALUE),
false
)
} else {
// Normal individual workflow selection/deselection
onChange(
values.filter((v) => v !== ALL_WORKFLOWS_VALUE),
false
)
}
}
const handleRemove = (e: React.MouseEvent, id: string) => {
e.preventDefault()
e.stopPropagation()
if (id === ALL_WORKFLOWS_VALUE) {
onChange([], false)
} else {
onChange(
selectedIds.filter((i) => i !== id),
false
)
}
}
const selectedWorkflows = useMemo(() => {
return workflows.filter((w) => selectedIds.includes(w.id))
}, [workflows, selectedIds])
// Render overlay content showing selected items as tags
const overlayContent = useMemo(() => {
if (allWorkflows) {
return (
<div className='flex items-center gap-1'>
<Button
variant='outline'
className='pointer-events-auto h-6 gap-1 rounded-[6px] px-2 text-[11px]'
onMouseDown={(e) => handleRemove(e, ALL_WORKFLOWS_VALUE)}
>
<Layers className='h-3 w-3' />
All Workflows
<X className='h-3 w-3' />
</Button>
</div>
)
}
if (selectedWorkflows.length === 0) {
return null
}
return (
<div className='flex items-center gap-1 overflow-hidden'>
{selectedWorkflows.slice(0, 2).map((w) => (
<Button
key={w.id}
variant='outline'
className='pointer-events-auto h-6 gap-1 rounded-[6px] px-2 text-[11px]'
onMouseDown={(e) => handleRemove(e, w.id)}
>
{w.name}
<X className='h-3 w-3' />
</Button>
))}
{selectedWorkflows.length > 2 && (
<span className='flex h-6 items-center rounded-[6px] border px-2 text-[11px]'>
+{selectedWorkflows.length - 2}
</span>
)}
</div>
)
}, [allWorkflows, selectedWorkflows, selectedIds])
if (isLoading) {
return (
<div className='space-y-2'>
<Label className='font-medium text-sm'>Workflows</Label>
<Skeleton className='h-9 w-full rounded-[4px]' />
</div>
)
}
return (
<div className='space-y-2'>
<Label className='font-medium text-sm'>Workflows</Label>
<Combobox
options={options}
multiSelect
multiSelectValues={currentValues}
onMultiSelectChange={handleMultiSelectChange}
placeholder='Select workflows...'
error={error}
overlayContent={overlayContent}
searchable
searchPlaceholder='Search workflows...'
/>
<p className='text-muted-foreground text-xs'>
Select which workflows should trigger this notification
</p>
</div>
)
}
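A usage sketch of the onChange contract: the component reports either explicit workflow ids or the all-workflows flag, never both. The parent field component and import path below are assumptions, not part of this PR:

import { useState } from 'react'
import { WorkflowSelector } from './workflow-selector'

export function NotificationWorkflowsField({ workspaceId }: { workspaceId: string }) {
  const [workflowIds, setWorkflowIds] = useState<string[]>([])
  const [allWorkflows, setAllWorkflows] = useState(false)

  return (
    <WorkflowSelector
      workspaceId={workspaceId}
      selectedIds={workflowIds}
      allWorkflows={allWorkflows}
      onChange={(ids, all) => {
        // Selecting "All Workflows" reports ([], true); individual picks report (ids, false).
        setWorkflowIds(ids)
        setAllWorkflows(all)
      }}
    />
  )
}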

View File

@@ -1,6 +1,6 @@
'use client'
import { useEffect, useMemo, useState } from 'react'
import { useEffect, useMemo, useRef, useState } from 'react'
import { Search, X } from 'lucide-react'
import { useParams } from 'next/navigation'
import { Button, Popover, PopoverAnchor, PopoverContent } from '@/components/emcn'
@@ -120,6 +120,17 @@ export function AutocompleteSearch({
getSuggestions: (input) => suggestionEngine.getSuggestions(input),
})
const lastExternalValue = useRef(value)
useEffect(() => {
// Only re-initialize if value changed externally (not from user typing)
if (value !== lastExternalValue.current) {
lastExternalValue.current = value
const parsed = parseQuery(value)
initializeFromQuery(parsed.textSearch, parsed.filters)
}
}, [value, initializeFromQuery])
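// Reviewer note: the lastExternalValue guard means only external changes to `value`
// (e.g. popstate restoring a search from the URL) re-run parseQuery/initializeFromQuery;
// keystrokes that originated in this input do not re-seed the parsed filter state.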
// Initial sync on mount
useEffect(() => {
if (value) {
const parsed = parseQuery(value)

View File

@@ -8,10 +8,12 @@ import { cn } from '@/lib/core/utils/cn'
import { getIntegrationMetadata } from '@/lib/logs/get-trigger-options'
import { parseQuery, queryToApiParams } from '@/lib/logs/query-parser'
import Controls from '@/app/workspace/[workspaceId]/logs/components/dashboard/controls'
import { NotificationSettings } from '@/app/workspace/[workspaceId]/logs/components/notification-settings/notification-settings'
import { AutocompleteSearch } from '@/app/workspace/[workspaceId]/logs/components/search/search'
import { Sidebar } from '@/app/workspace/[workspaceId]/logs/components/sidebar/sidebar'
import Dashboard from '@/app/workspace/[workspaceId]/logs/dashboard'
import { formatDate } from '@/app/workspace/[workspaceId]/logs/utils'
import { useUserPermissionsContext } from '@/app/workspace/[workspaceId]/providers/workspace-permissions-provider'
import { useFolders } from '@/hooks/queries/folders'
import { useLogDetail, useLogsList } from '@/hooks/queries/logs'
import { useDebounce } from '@/hooks/use-debounce'
@@ -58,7 +60,6 @@ export default function Logs() {
level,
workflowIds,
folderIds,
searchQuery: storeSearchQuery,
setSearchQuery: setStoreSearchQuery,
triggers,
viewMode,
@@ -77,14 +78,24 @@ export default function Logs() {
const scrollContainerRef = useRef<HTMLDivElement>(null)
const isInitialized = useRef<boolean>(false)
const [searchQuery, setSearchQuery] = useState(storeSearchQuery)
const [searchQuery, setSearchQuery] = useState('')
const debouncedSearchQuery = useDebounce(searchQuery, 300)
// Sync search query from URL on mount (client-side only)
useEffect(() => {
const urlSearch = new URLSearchParams(window.location.search).get('search') || ''
if (urlSearch && urlSearch !== searchQuery) {
setSearchQuery(urlSearch)
}
}, [])
const [, setAvailableWorkflows] = useState<string[]>([])
const [, setAvailableFolders] = useState<string[]>([])
const [isLive, setIsLive] = useState(false)
const isSearchOpenRef = useRef<boolean>(false)
const [isNotificationSettingsOpen, setIsNotificationSettingsOpen] = useState(false)
const userPermissions = useUserPermissionsContext()
const logFilters = useMemo(
() => ({
@@ -111,10 +122,6 @@ export default function Logs() {
return logsQuery.data.pages.flatMap((page) => page.logs)
}, [logsQuery.data?.pages])
useEffect(() => {
setSearchQuery(storeSearchQuery)
}, [storeSearchQuery])
const foldersQuery = useFolders(workspaceId)
const { getFolderTree } = useFolderStore()
@@ -166,10 +173,10 @@ export default function Logs() {
}, [workspaceId, getFolderTree, foldersQuery.data])
useEffect(() => {
if (isInitialized.current && debouncedSearchQuery !== storeSearchQuery) {
if (isInitialized.current) {
setStoreSearchQuery(debouncedSearchQuery)
}
}, [debouncedSearchQuery, storeSearchQuery])
}, [debouncedSearchQuery, setStoreSearchQuery])
const handleLogClick = (log: WorkflowLog) => {
setSelectedLog(log)
@@ -249,6 +256,8 @@ export default function Logs() {
useEffect(() => {
const handlePopState = () => {
initializeFromURL()
const params = new URLSearchParams(window.location.search)
setSearchQuery(params.get('search') || '')
}
window.addEventListener('popstate', handlePopState)
@@ -381,6 +390,8 @@ export default function Logs() {
}
showExport={true}
onExport={handleExport}
canConfigureNotifications={userPermissions.canEdit}
onConfigureNotifications={() => setIsNotificationSettingsOpen(true)}
/>
{/* Table container */}
@@ -599,6 +610,12 @@ export default function Logs() {
hasNext={selectedLogIndex < logs.length - 1}
hasPrev={selectedLogIndex > 0}
/>
<NotificationSettings
workspaceId={workspaceId}
open={isNotificationSettingsOpen}
onOpenChange={setIsNotificationSettingsOpen}
/>
</div>
)
}

View File

@@ -2,13 +2,10 @@
import { useEffect, useRef, useState } from 'react'
import { Loader2 } from 'lucide-react'
import useDrivePicker from 'react-google-drive-picker'
import { Button } from '@/components/emcn'
import { GoogleDriveIcon } from '@/components/icons'
import { Button, Code } from '@/components/emcn'
import { ClientToolCallState } from '@/lib/copilot/tools/client/base-tool'
import { getClientTool } from '@/lib/copilot/tools/client/manager'
import { getRegisteredTools } from '@/lib/copilot/tools/client/registry'
import { getEnv } from '@/lib/core/config/env'
import { CLASS_TOOL_METADATA, useCopilotStore } from '@/stores/panel/copilot/store'
import type { CopilotToolCall } from '@/stores/panel/copilot/types'
@@ -100,6 +97,10 @@ const ACTION_VERBS = [
'Create',
'Creating',
'Created',
'Generating',
'Generated',
'Rendering',
'Rendered',
] as const
/**
@@ -295,7 +296,43 @@ function getDisplayName(toolCall: CopilotToolCall): string {
const byState = def?.metadata?.displayNames?.[toolCall.state]
if (byState?.text) return byState.text
} catch {}
return toolCall.name
// For integration tools, format the tool name nicely
// e.g., "google_calendar_list_events" -> "Running Google Calendar List Events"
const stateVerb = getStateVerb(toolCall.state)
const formattedName = formatToolName(toolCall.name)
return `${stateVerb} ${formattedName}`
}
/**
* Get verb prefix based on tool state
*/
function getStateVerb(state: string): string {
switch (state) {
case 'pending':
case 'executing':
return 'Running'
case 'success':
return 'Ran'
case 'error':
return 'Failed'
case 'rejected':
case 'aborted':
return 'Skipped'
default:
return 'Running'
}
}
/**
* Format tool name for display
* e.g., "google_calendar_list_events" -> "Google Calendar List Events"
*/
function formatToolName(name: string): string {
return name
.split('_')
.map((word) => word.charAt(0).toUpperCase() + word.slice(1))
.join(' ')
}
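// Reviewer sketch of the resulting fallback labels (tool names below are hypothetical examples):
//   `${getStateVerb('executing')} ${formatToolName('google_calendar_list_events')}`
//     -> 'Running Google Calendar List Events'
//   `${getStateVerb('error')} ${formatToolName('slack_send_message')}`
//     -> 'Failed Slack Send Message'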
function RunSkipButtons({
@@ -310,7 +347,6 @@ function RunSkipButtons({
const [isProcessing, setIsProcessing] = useState(false)
const [buttonsHidden, setButtonsHidden] = useState(false)
const { setToolCallState } = useCopilotStore()
const [openPicker] = useDrivePicker()
const instance = getClientTool(toolCall.id)
const interruptDisplays = instance?.getInterruptDisplays?.()
@@ -327,107 +363,8 @@ function RunSkipButtons({
}
}
// const handleOpenDriveAccess = async () => {
// try {
// const providerId = 'google-drive'
// const credsRes = await fetch(`/api/auth/oauth/credentials?provider=${providerId}`)
// if (!credsRes.ok) return
// const credsData = await credsRes.json()
// const creds = Array.isArray(credsData.credentials) ? credsData.credentials : []
// if (creds.length === 0) return
// const defaultCred = creds.find((c: any) => c.isDefault) || creds[0]
// const tokenRes = await fetch('/api/auth/oauth/token', {
// method: 'POST',
// headers: { 'Content-Type': 'application/json' },
// body: JSON.stringify({ credentialId: defaultCred.id }),
// })
// if (!tokenRes.ok) return
// const { accessToken } = await tokenRes.json()
// if (!accessToken) return
// const clientId = getEnv('NEXT_PUBLIC_GOOGLE_CLIENT_ID') || ''
// const apiKey = getEnv('NEXT_PUBLIC_GOOGLE_API_KEY') || ''
// const projectNumber = getEnv('NEXT_PUBLIC_GOOGLE_PROJECT_NUMBER') || ''
// openPicker({
// clientId,
// developerKey: apiKey,
// viewId: 'DOCS',
// token: accessToken,
// showUploadView: true,
// showUploadFolders: true,
// supportDrives: true,
// multiselect: false,
// appId: projectNumber,
// setSelectFolderEnabled: false,
// callbackFunction: async (data) => {
// if (data.action === 'picked') {
// await onRun()
// }
// },
// })
// } catch {}
// }
if (buttonsHidden) return null
if (toolCall.name === 'gdrive_request_access' && toolCall.state === 'pending') {
return (
<div className='mt-[10px] flex gap-[6px]'>
<Button
onClick={async () => {
const instance = getClientTool(toolCall.id)
if (!instance) return
await instance.handleAccept?.({
openDrivePicker: async (accessToken: string) => {
try {
const clientId = getEnv('NEXT_PUBLIC_GOOGLE_CLIENT_ID') || ''
const apiKey = getEnv('NEXT_PUBLIC_GOOGLE_API_KEY') || ''
const projectNumber = getEnv('NEXT_PUBLIC_GOOGLE_PROJECT_NUMBER') || ''
return await new Promise<boolean>((resolve) => {
openPicker({
clientId,
developerKey: apiKey,
viewId: 'DOCS',
token: accessToken,
showUploadView: true,
showUploadFolders: true,
supportDrives: true,
multiselect: false,
appId: projectNumber,
setSelectFolderEnabled: false,
callbackFunction: async (data) => {
if (data.action === 'picked') resolve(true)
else if (data.action === 'cancel') resolve(false)
},
})
})
} catch {
return false
}
},
})
}}
variant='primary'
title='Grant Google Drive access'
>
<GoogleDriveIcon className='mr-0.5 h-4 w-4' />
Select
</Button>
<Button
onClick={async () => {
setButtonsHidden(true)
await handleSkip(toolCall, setToolCallState, onStateChange)
}}
variant='default'
>
Skip
</Button>
</div>
)
}
return (
<div className='mt-[12px] flex gap-[6px]'>
<Button onClick={onRun} disabled={isProcessing} variant='primary'>
@@ -479,12 +416,19 @@ export function ToolCall({ toolCall: toolCallProp, toolCallId, onStateChange }:
}
}, [params])
// Skip rendering tools that are not in the registry or are explicitly omitted
try {
if (toolCall.name === 'checkoff_todo' || toolCall.name === 'mark_todo_in_progress') return null
// Allow if tool id exists in CLASS_TOOL_METADATA (client tools)
if (!CLASS_TOOL_METADATA[toolCall.name]) return null
} catch {
// Skip rendering some internal tools
if (toolCall.name === 'checkoff_todo' || toolCall.name === 'mark_todo_in_progress') return null
// Get current mode from store to determine if we should render integration tools
const mode = useCopilotStore.getState().mode
// Allow rendering if:
// 1. Tool is in CLASS_TOOL_METADATA (client tools), OR
// 2. We're in build mode (integration tools are executed server-side)
const isClientTool = !!CLASS_TOOL_METADATA[toolCall.name]
const isIntegrationToolInBuildMode = mode === 'build' && !isClientTool
if (!isClientTool && !isIntegrationToolInBuildMode) {
return null
}
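// Reviewer note: the practical effect is that in build mode a server-executed integration
// tool (one with no CLASS_TOOL_METADATA entry, e.g. a calendar or Slack call) now renders
// a tool-call row instead of returning null; other modes keep the old client-tool-only gate.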
const isExpandableTool =
@@ -874,6 +818,34 @@ export function ToolCall({ toolCall: toolCallProp, toolCallId, onStateChange }:
)
}
// Special rendering for function_execute - show code block
if (toolCall.name === 'function_execute') {
const code = params.code || ''
return (
<div className='w-full'>
<ShimmerOverlayText
text={displayName}
active={isLoadingState}
isSpecial={false}
className='font-[470] font-season text-[#939393] text-sm dark:text-[#939393]'
/>
{code && (
<div className='mt-2'>
<Code.Viewer code={code} language='javascript' showGutter />
</div>
)}
{showButtons && (
<RunSkipButtons
toolCall={toolCall}
onStateChange={handleStateChange}
editedParams={editedParams}
/>
)}
</div>
)
}
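// Reviewer sketch of the input this branch handles (shape inferred from this diff, values
// hypothetical): a tool call like
//   { name: 'function_execute', state: 'pending', params: { code: 'return 1 + 1' } }
// renders the shimmer label plus a read-only Code.Viewer block, with Run/Skip buttons
// shown only when confirmation is required.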
return (
<div className='w-full'>
<div

View File

@@ -32,6 +32,13 @@ function getModelIconComponent(modelValue: string) {
return <IconComponent className='h-3.5 w-3.5' />
}
/**
* Checks if a model should display the MAX badge
*/
function isMaxModel(modelValue: string): boolean {
return modelValue === 'claude-4.5-sonnet' || modelValue === 'claude-4.5-opus'
}
/**
* Model selector dropdown for choosing AI model.
* Displays model icon and label.
@@ -132,6 +139,11 @@ export function ModelSelector({ selectedModel, isNearTop, onModelSelect }: Model
>
{getModelIconComponent(option.value)}
<span>{option.label}</span>
{isMaxModel(option.value) && (
<Badge variant='default' className='ml-auto px-[6px] py-[1px] text-[10px]'>
MAX
</Badge>
)}
</PopoverItem>
))}
</PopoverScrollArea>

View File

@@ -20,23 +20,24 @@ export const MENTION_OPTIONS = [
* Model configuration options
*/
export const MODEL_OPTIONS = [
// { value: 'claude-4-sonnet', label: 'Claude 4 Sonnet' },
{ value: 'claude-4.5-sonnet', label: 'Claude 4.5 Sonnet' },
{ value: 'claude-4.5-haiku', label: 'Claude 4.5 Haiku' },
{ value: 'claude-4.5-opus', label: 'Claude 4.5 Opus' },
{ value: 'claude-4.5-sonnet', label: 'Claude 4.5 Sonnet' },
// { value: 'claude-4-sonnet', label: 'Claude 4 Sonnet' },
{ value: 'claude-4.5-haiku', label: 'Claude 4.5 Haiku' },
// { value: 'claude-4.1-opus', label: 'Claude 4.1 Opus' },
{ value: 'gpt-5.1-codex', label: 'GPT 5.1 Codex' },
// { value: 'gpt-5-codex', label: 'GPT 5 Codex' },
{ value: 'gpt-5.1-medium', label: 'GPT 5.1 Medium' },
// { value: 'gpt-5-fast', label: 'GPT 5 Fast' },
// { value: 'gpt-5', label: 'GPT 5' },
// { value: 'gpt-5.1-fast', label: 'GPT 5.1 Fast' },
// { value: 'gpt-5.1', label: 'GPT 5.1' },
{ value: 'gpt-5.1-medium', label: 'GPT 5.1 Medium' },
// { value: 'gpt-5.1-high', label: 'GPT 5.1 High' },
// { value: 'gpt-5-codex', label: 'GPT 5 Codex' },
{ value: 'gpt-5.1-codex', label: 'GPT 5.1 Codex' },
// { value: 'gpt-5-high', label: 'GPT 5 High' },
// { value: 'gpt-4o', label: 'GPT 4o' },
// { value: 'gpt-4.1', label: 'GPT 4.1' },
{ value: 'o3', label: 'o3' },
// { value: 'o3', label: 'o3' },
{ value: 'gemini-3-pro', label: 'Gemini 3 Pro' },
] as const
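
The "as const" assertion above is mainly useful for deriving a literal union of the active model ids. A minimal illustrative sketch; the ModelOption/ModelId type names are invented here and are not part of this diff:

// Hypothetical type derivation from the options list (names invented for illustration).
type ModelOption = (typeof MODEL_OPTIONS)[number]
type ModelId = ModelOption['value'] // e.g. 'claude-4.5-sonnet' | 'gpt-5.1-codex' | 'gemini-3-pro' | ...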
/**

View File

@@ -59,6 +59,15 @@ interface UserInputProps {
panelWidth?: number
clearOnSubmit?: boolean
hasPlanArtifact?: boolean
/** Override workflowId from store (for use outside copilot context) */
workflowIdOverride?: string | null
/** Override selectedModel from store (for use outside copilot context) */
selectedModelOverride?: string
/** Override setSelectedModel from store (for use outside copilot context) */
onModelChangeOverride?: (model: string) => void
hideModeSelector?: boolean
/** Disable @mention functionality */
disableMentions?: boolean
}
interface UserInputRef {
@@ -90,6 +99,11 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
panelWidth = 308,
clearOnSubmit = true,
hasPlanArtifact = false,
workflowIdOverride,
selectedModelOverride,
onModelChangeOverride,
hideModeSelector = false,
disableMentions = false,
},
ref
) => {
@@ -98,8 +112,13 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
const params = useParams()
const workspaceId = params.workspaceId as string
// Store hooks
const { workflowId, selectedModel, setSelectedModel, contextUsage } = useCopilotStore()
const copilotStore = useCopilotStore()
const workflowId =
workflowIdOverride !== undefined ? workflowIdOverride : copilotStore.workflowId
const selectedModel =
selectedModelOverride !== undefined ? selectedModelOverride : copilotStore.selectedModel
const setSelectedModel = onModelChangeOverride || copilotStore.setSelectedModel
const contextUsage = copilotStore.contextUsage
// Internal state
const [internalMessage, setInternalMessage] = useState('')
@@ -459,6 +478,9 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
const newValue = e.target.value
setMessage(newValue)
// Skip mention menu logic if mentions are disabled
if (disableMentions) return
const caret = e.target.selectionStart ?? newValue.length
const active = mentionMenu.getActiveMentionQueryAtPosition(caret, newValue)
@@ -477,7 +499,7 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
mentionMenu.setSubmenuQueryStart(null)
}
},
[setMessage, mentionMenu]
[setMessage, mentionMenu, disableMentions]
)
const handleSelectAdjust = useCallback(() => {
@@ -608,32 +630,27 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
{/* Top Row: Context controls + Build Workflow button */}
<div className='mb-[6px] flex flex-wrap items-center justify-between gap-[6px]'>
<div className='flex flex-wrap items-center gap-[6px]'>
<Badge
variant='outline'
onClick={handleOpenMentionMenuWithAt}
title='Insert @'
className={cn(
'cursor-pointer rounded-[6px] p-[4.5px]',
(disabled || isLoading) && 'cursor-not-allowed'
)}
>
<AtSign className='h-3 w-3' strokeWidth={1.75} />
</Badge>
{!disableMentions && (
<>
<Badge
variant='outline'
onClick={handleOpenMentionMenuWithAt}
title='Insert @'
className={cn(
'cursor-pointer rounded-[6px] p-[4.5px]',
(disabled || isLoading) && 'cursor-not-allowed'
)}
>
<AtSign className='h-3 w-3' strokeWidth={1.75} />
</Badge>
{/* Context Usage Indicator */}
{/* {contextUsage && contextUsage.percentage > 0 && (
<ContextUsageIndicator
percentage={contextUsage.percentage}
size={18}
strokeWidth={2.5}
/>
)} */}
{/* Selected Context Pills */}
<ContextPills
contexts={contextManagement.selectedContexts}
onRemoveContext={contextManagement.removeContext}
/>
{/* Selected Context Pills */}
<ContextPills
contexts={contextManagement.selectedContexts}
onRemoveContext={contextManagement.removeContext}
/>
</>
)}
</div>
{hasPlanArtifact && (
@@ -690,7 +707,8 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
/>
{/* Mention Menu Portal */}
{mentionMenu.showMentionMenu &&
{!disableMentions &&
mentionMenu.showMentionMenu &&
createPortal(
<MentionMenu
mentionMenu={mentionMenu}
@@ -706,12 +724,14 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
<div className='flex items-center justify-between gap-2'>
{/* Left side: Mode Selector + Model Selector */}
<div className='flex min-w-0 flex-1 items-center gap-[8px]'>
<ModeSelector
mode={mode}
onModeChange={onModeChange}
isNearTop={isNearTop}
disabled={disabled}
/>
{!hideModeSelector && (
<ModeSelector
mode={mode}
onModeChange={onModeChange}
isNearTop={isNearTop}
disabled={disabled}
/>
)}
<ModelSelector
selectedModel={selectedModel}
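
Taken together, the new override props let this input be reused outside the copilot panel. A minimal, hypothetical usage sketch: only the prop names come from this diff, while the enclosing component, the import path, and the rest of UserInput's required props (message/submit wiring, mode handling) are omitted or invented for illustration:

import { useState } from 'react'
// import UserInput from '.../user-input'  // actual module path not shown in this diff

function StandaloneChatInput({ workflowId }: { workflowId: string | null }) {
  const [model, setModel] = useState('claude-4.5-sonnet')
  return (
    <UserInput
      workflowIdOverride={workflowId}   // bypasses useCopilotStore().workflowId
      selectedModelOverride={model}     // bypasses useCopilotStore().selectedModel
      onModelChangeOverride={setModel}  // bypasses useCopilotStore().setSelectedModel
      hideModeSelector                  // no build/chat ModeSelector outside the panel
      disableMentions                   // no @ badge, context pills, or mention menu
    />
  )
}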

View File

@@ -107,6 +107,8 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
setPlanTodos,
clearPlanArtifact,
savePlanArtifact,
setSelectedModel,
loadAutoAllowedTools,
} = useCopilotStore()
// Initialize copilot
@@ -117,6 +119,7 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
setCopilotWorkflowId,
loadChats,
fetchContextUsage,
loadAutoAllowedTools,
currentChat,
isSendingMessage,
})

View File

@@ -12,6 +12,7 @@ interface UseCopilotInitializationProps {
setCopilotWorkflowId: (workflowId: string | null) => Promise<void>
loadChats: (forceRefresh?: boolean) => Promise<void>
fetchContextUsage: () => Promise<void>
loadAutoAllowedTools: () => Promise<void>
currentChat: any
isSendingMessage: boolean
}
@@ -30,6 +31,7 @@ export function useCopilotInitialization(props: UseCopilotInitializationProps) {
setCopilotWorkflowId,
loadChats,
fetchContextUsage,
loadAutoAllowedTools,
currentChat,
isSendingMessage,
} = props
@@ -112,6 +114,19 @@ export function useCopilotInitialization(props: UseCopilotInitializationProps) {
}
}, [isInitialized, currentChat?.id, activeWorkflowId, fetchContextUsage])
/**
* Load auto-allowed tools once on mount
*/
const hasLoadedAutoAllowedToolsRef = useRef(false)
useEffect(() => {
if (hasMountedRef.current && !hasLoadedAutoAllowedToolsRef.current) {
hasLoadedAutoAllowedToolsRef.current = true
loadAutoAllowedTools().catch((err) => {
logger.warn('[Copilot] Failed to load auto-allowed tools', err)
})
}
}, [loadAutoAllowedTools])
return {
isInitialized,
}

View File

@@ -2,6 +2,8 @@ import { useCallback, useEffect, useMemo, useRef, useState } from 'react'
import { Badge } from '@/components/emcn'
import { Combobox, type ComboboxOption } from '@/components/emcn/components'
import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-value'
import type { SubBlockConfig } from '@/blocks/types'
import { getDependsOnFields } from '@/blocks/utils'
import { ResponseBlockHandler } from '@/executor/handlers/response/response-handler'
import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
import { useSubBlockStore } from '@/stores/workflows/subblock/store'
@@ -43,7 +45,7 @@ interface DropdownProps {
subBlockId: string
) => Promise<Array<{ label: string; id: string }>>
/** Field dependencies that trigger option refetch when changed */
dependsOn?: string[]
dependsOn?: SubBlockConfig['dependsOn']
}
/**
@@ -67,23 +69,25 @@ export function Dropdown({
placeholder = 'Select an option...',
multiSelect = false,
fetchOptions,
dependsOn = [],
dependsOn,
}: DropdownProps) {
const [storeValue, setStoreValue] = useSubBlockValue<string | string[]>(blockId, subBlockId) as [
string | string[] | null | undefined,
(value: string | string[]) => void,
]
const dependsOnFields = useMemo(() => getDependsOnFields(dependsOn), [dependsOn])
const activeWorkflowId = useWorkflowRegistry((s) => s.activeWorkflowId)
const dependencyValues = useSubBlockStore(
useCallback(
(state) => {
if (dependsOn.length === 0 || !activeWorkflowId) return []
if (dependsOnFields.length === 0 || !activeWorkflowId) return []
const workflowValues = state.workflowValues[activeWorkflowId] || {}
const blockValues = workflowValues[blockId] || {}
return dependsOn.map((depKey) => blockValues[depKey] ?? null)
return dependsOnFields.map((depKey) => blockValues[depKey] ?? null)
},
[dependsOn, activeWorkflowId, blockId]
[dependsOnFields, activeWorkflowId, blockId]
)
)
@@ -301,7 +305,7 @@ export function Dropdown({
* This ensures options are refetched with new dependency values (e.g., new credentials)
*/
useEffect(() => {
if (fetchOptions && dependsOn.length > 0) {
if (fetchOptions && dependsOnFields.length > 0) {
const currentDependencyValuesStr = JSON.stringify(dependencyValues)
const previousDependencyValuesStr = previousDependencyValuesRef.current
@@ -314,7 +318,7 @@ export function Dropdown({
previousDependencyValuesRef.current = currentDependencyValuesStr
}
}, [dependencyValues, fetchOptions, dependsOn.length])
}, [dependencyValues, fetchOptions, dependsOnFields.length])
/**
* Effect to fetch options when needed (on mount, when enabled, or when dependencies change)

View File

@@ -9,6 +9,7 @@ import { useDependsOnGate } from '@/app/workspace/[workspaceId]/w/[workflowId]/c
import { useForeignCredential } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-foreign-credential'
import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-value'
import type { SubBlockConfig } from '@/blocks/types'
import { isDependency } from '@/blocks/utils'
import { resolveSelectorForSubBlock, type SelectorResolution } from '@/hooks/selectors/resolution'
import { useCollaborativeWorkflow } from '@/hooks/use-collaborative-workflow'
import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
@@ -92,7 +93,7 @@ export function FileSelectorInput({
!selectorResolution.context.domain
const missingProject =
selectorResolution?.key === 'jira.issues' &&
subBlock.dependsOn?.includes('projectId') &&
isDependency(subBlock.dependsOn, 'projectId') &&
!selectorResolution.context.projectId
const missingPlan =
selectorResolution?.key === 'microsoft.planner' && !selectorResolution.context.planId
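
Both call sites above lean on helpers from '@/blocks/utils' whose implementations are not part of this diff. A hedged sketch of their assumed behavior, consistent with the DependsOnConfig shape handled by useDependsOnGate further down (a plain string array or an { all, any } object):

type DependsOnConfig = string[] | { all?: string[]; any?: string[] }

// Assumed: flattens a dependsOn config into a plain list of field keys.
export function getDependsOnFields(dependsOn?: DependsOnConfig): string[] {
  if (!dependsOn) return []
  if (Array.isArray(dependsOn)) return dependsOn
  return [...(dependsOn.all ?? []), ...(dependsOn.any ?? [])]
}

// Assumed: membership check used by the Jira/selector guards above.
export function isDependency(dependsOn: DependsOnConfig | undefined, field: string): boolean {
  return getDependsOnFields(dependsOn).includes(field)
}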

View File

@@ -1,14 +1,6 @@
import { useCallback, useEffect, useMemo, useRef, useState } from 'react'
import { useParams } from 'next/navigation'
import {
Button,
Modal,
ModalContent,
ModalDescription,
ModalFooter,
ModalHeader,
ModalTitle,
} from '@/components/emcn'
import { Button, Modal, ModalBody, ModalContent, ModalFooter, ModalHeader } from '@/components/emcn'
import { Trash } from '@/components/emcn/icons/trash'
import { Alert, AlertDescription } from '@/components/ui/alert'
import { cn } from '@/lib/core/utils/cn'
@@ -479,26 +471,23 @@ export function ScheduleSave({ blockId, isPreview = false, disabled = false }: S
)}
<Modal open={showDeleteDialog} onOpenChange={setShowDeleteDialog}>
<ModalContent>
<ModalHeader>
<ModalTitle>Delete schedule?</ModalTitle>
<ModalDescription>
<ModalContent className='w-[400px]'>
<ModalHeader>Delete Schedule</ModalHeader>
<ModalBody>
<p className='text-[12px] text-[var(--text-tertiary)]'>
Are you sure you want to delete this schedule configuration? This will stop the
workflow from running automatically.{' '}
<span className='text-[var(--text-error)]'>This action cannot be undone.</span>
</ModalDescription>
</ModalHeader>
</p>
</ModalBody>
<ModalFooter>
<Button
variant='outline'
onClick={() => setShowDeleteDialog(false)}
className='h-[32px] px-[12px]'
>
<Button variant='active' onClick={() => setShowDeleteDialog(false)}>
Cancel
</Button>
<Button
variant='primary'
onClick={handleDeleteConfirm}
className='h-[32px] bg-[var(--text-error)] px-[12px] text-[var(--white)] hover:bg-[var(--text-error)] hover:text-[var(--white)]'
className='!bg-[var(--text-error)] !text-white hover:!bg-[var(--text-error)]/90'
>
Delete
</Button>
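
Because the removed and added markup are interleaved above, the resulting structure is easier to read in isolation. A condensed sketch of the new composition as applied here (copy abridged; the same shape recurs in the trigger and panel modals below):

<Modal open={showDeleteDialog} onOpenChange={setShowDeleteDialog}>
  <ModalContent className='w-[400px]'>
    <ModalHeader>Delete Schedule</ModalHeader>
    <ModalBody>
      <p className='text-[12px] text-[var(--text-tertiary)]'>...confirmation copy...</p>
    </ModalBody>
    <ModalFooter>
      <Button variant='active' onClick={() => setShowDeleteDialog(false)}>Cancel</Button>
      <Button
        variant='primary'
        onClick={handleDeleteConfirm}
        className='!bg-[var(--text-error)] !text-white hover:!bg-[var(--text-error)]/90'
      >
        Delete
      </Button>
    </ModalFooter>
  </ModalContent>
</Modal>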

View File

@@ -79,33 +79,29 @@ export function ShortInput({
wandControlRef,
hideInternalWand = false,
}: ShortInputProps) {
// Local state for immediate UI updates during streaming
const [localContent, setLocalContent] = useState<string>('')
const [isFocused, setIsFocused] = useState(false)
const [copied, setCopied] = useState(false)
const persistSubBlockValueRef = useRef<(value: string) => void>(() => {})
// Always call the hook - hooks must be called unconditionally
const justPastedRef = useRef(false)
const webhookManagement = useWebhookManagement({
blockId,
triggerId: undefined,
isPreview,
})
// Wand functionality - always call the hook unconditionally
const wandHook = useWand({
wandConfig: config.wandConfig,
currentValue: localContent,
onStreamStart: () => {
// Clear the content when streaming starts
setLocalContent('')
},
onStreamChunk: (chunk) => {
// Update local content with each chunk as it arrives
setLocalContent((current) => current + chunk)
},
onGeneratedContent: (content) => {
// Final content update
setLocalContent(content)
if (!isPreview && !disabled && !readOnly) {
persistSubBlockValueRef.current(content)
@@ -123,23 +119,18 @@ export function ShortInput({
}
}, [setSubBlockValue])
// Check if wand is actually enabled
const isWandEnabled = config.wandConfig?.enabled ?? false
const inputRef = useRef<HTMLInputElement>(null)
const overlayRef = useRef<HTMLDivElement>(null)
// Get ReactFlow instance for zoom control
const reactFlowInstance = useReactFlow()
const accessiblePrefixes = useAccessibleReferencePrefixes(blockId)
// Check if this input is API key related - memoized to prevent recalculation
const isApiKeyField = useMemo(() => {
const normalizedId = config?.id?.replace(/\s+/g, '').toLowerCase() || ''
const normalizedTitle = config?.title?.replace(/\s+/g, '').toLowerCase() || ''
// Check for common API key naming patterns
const apiKeyPatterns = [
'apikey',
'api_key',
@@ -173,11 +164,23 @@ export function ShortInput({
event: 'change' | 'focus' | 'deleteAll'
}) => {
if (!isApiKeyField || isPreview || disabled || readOnly) return { show: false }
if (justPastedRef.current) {
return { show: false }
}
if (event === 'focus') {
if (value.length > 20 && !value.includes('{{')) {
return { show: false }
}
return { show: true, searchTerm: '' }
}
if (event === 'change') {
// For API key fields, show env vars while typing without requiring '{{'
const looksLikeRawApiKey =
value.length > 30 && !value.includes('{{') && !value.match(/^[A-Z_][A-Z0-9_]*$/i)
if (looksLikeRawApiKey) {
return { show: false }
}
return { show: true, searchTerm: value }
}
if (event === 'deleteAll') {
@@ -188,17 +191,13 @@ export function ShortInput({
[isApiKeyField, isPreview, disabled, readOnly]
)
// Use preview value when in preview mode, otherwise use store value or prop value
const baseValue = isPreview ? previewValue : propValue !== undefined ? propValue : undefined
// During streaming, use local content; otherwise use base value
// Only use webhook URL when useWebhookUrl flag is true
const effectiveValue =
useWebhookUrl && webhookManagement.webhookUrl ? webhookManagement.webhookUrl : baseValue
const value = wandHook?.isStreaming ? localContent : effectiveValue
// Sync local content with base value when not streaming
useEffect(() => {
if (!wandHook.isStreaming) {
const baseValueString = baseValue?.toString() ?? ''
@@ -208,108 +207,41 @@ export function ShortInput({
}
}, [baseValue, wandHook.isStreaming, localContent])
/**
* Scrolls the input to show the cursor position
* Uses canvas for efficient text width measurement instead of DOM manipulation
*/
const scrollToCursor = useCallback(() => {
if (!inputRef.current) return
// Use requestAnimationFrame to ensure DOM has updated
requestAnimationFrame(() => {
if (!inputRef.current) return
const cursorPos = inputRef.current.selectionStart ?? 0
const inputWidth = inputRef.current.offsetWidth
const scrollWidth = inputRef.current.scrollWidth
// Get approximate cursor position in pixels using canvas (more efficient)
const textBeforeCursor = inputRef.current.value.substring(0, cursorPos)
const computedStyle = window.getComputedStyle(inputRef.current)
// Use canvas context for text measurement (more efficient than creating DOM elements)
const canvas = document.createElement('canvas')
const context = canvas.getContext('2d')
if (context) {
context.font = computedStyle.font
const cursorPixelPos = context.measureText(textBeforeCursor).width
// Calculate optimal scroll position to center the cursor
const targetScroll = Math.max(0, cursorPixelPos - inputWidth / 2)
// Only scroll if cursor is not visible
if (
cursorPixelPos < inputRef.current.scrollLeft ||
cursorPixelPos > inputRef.current.scrollLeft + inputWidth
) {
inputRef.current.scrollLeft = Math.min(targetScroll, scrollWidth - inputWidth)
}
// Sync overlay scroll
if (overlayRef.current) {
overlayRef.current.scrollLeft = inputRef.current.scrollLeft
}
}
})
}, [])
// Sync scroll position between input and overlay
const handleScroll = useCallback((e: React.UIEvent<HTMLInputElement>) => {
if (overlayRef.current) {
overlayRef.current.scrollLeft = e.currentTarget.scrollLeft
}
}, [])
// Remove the auto-scroll effect that forces cursor position and replace with natural scrolling
useEffect(() => {
if (inputRef.current && overlayRef.current) {
overlayRef.current.scrollLeft = inputRef.current.scrollLeft
}
}, [value])
// Handle paste events to ensure long values are handled correctly
const handlePaste = useCallback((_e: React.ClipboardEvent<HTMLInputElement>) => {
// Let the paste happen normally
// Then ensure scroll positions are synced after the content is updated
justPastedRef.current = true
setTimeout(() => {
if (inputRef.current && overlayRef.current) {
overlayRef.current.scrollLeft = inputRef.current.scrollLeft
}
}, 0)
justPastedRef.current = false
}, 100)
}, [])
// Handle wheel events to control ReactFlow zoom
const handleWheel = useCallback(
(e: React.WheelEvent<HTMLInputElement>) => {
// Only handle zoom when Ctrl/Cmd key is pressed
if (e.ctrlKey || e.metaKey) {
e.preventDefault()
e.stopPropagation()
// Get current zoom level and viewport
const currentZoom = reactFlowInstance.getZoom()
const { x: viewportX, y: viewportY } = reactFlowInstance.getViewport()
// Calculate zoom factor based on wheel delta
// Use a smaller factor for smoother zooming that matches ReactFlow's native behavior
const delta = e.deltaY > 0 ? 1 : -1
// Using 0.98 instead of 0.95 makes the zoom much slower and more gradual
const zoomFactor = 0.96 ** delta
// Calculate new zoom level with min/max constraints
const newZoom = Math.min(Math.max(currentZoom * zoomFactor, 0.1), 1)
// Get the position of the cursor in the page
const { x: pointerX, y: pointerY } = reactFlowInstance.screenToFlowPosition({
x: e.clientX,
y: e.clientY,
})
// Calculate the new viewport position to keep the cursor position fixed
const newViewportX = viewportX + (pointerX * currentZoom - pointerX * newZoom)
const newViewportY = viewportY + (pointerY * currentZoom - pointerY * newZoom)
// Set the new viewport with the calculated position and zoom
reactFlowInstance.setViewport(
{
x: newViewportX,
@@ -322,8 +254,6 @@ export function ShortInput({
return false
}
// For regular scrolling (without Ctrl/Cmd), let the default behavior happen
// Don't interfere with normal scrolling
return true
},
[reactFlowInstance]
@@ -341,33 +271,6 @@ export function ShortInput({
}
}, [useWebhookUrl, webhookManagement?.webhookUrl, value])
// Value display logic - memoize to avoid unnecessary string operations
const displayValue = useMemo(
() =>
password && !isFocused
? '•'.repeat(value?.toString().length ?? 0)
: (value?.toString() ?? ''),
[password, isFocused, value]
)
// Memoize formatted text to avoid recalculation on every render
const formattedText = useMemo(() => {
const textValue = value?.toString() ?? ''
if (password && !isFocused) {
return '•'.repeat(textValue.length)
}
return formatDisplayText(textValue, {
accessiblePrefixes,
highlightAll: !accessiblePrefixes,
})
}, [value, password, isFocused, accessiblePrefixes])
// Memoize focus handler to prevent unnecessary re-renders
const handleFocus = useCallback(() => {
setIsFocused(true)
}, [])
// Memoize blur handler to prevent unnecessary re-renders
const handleBlur = useCallback(() => {
setIsFocused(false)
}, [])
@@ -422,7 +325,6 @@ export function ShortInput({
onDragOver,
onFocus,
}) => {
// Use controller's value for input, but apply local transformations
const actualValue = wandHook.isStreaming
? localContent
: useWebhookUrl && webhookManagement.webhookUrl

View File

@@ -2,10 +2,10 @@ import { useCallback, useEffect, useMemo, useRef, useState } from 'react'
import {
Button,
Modal,
ModalBody,
ModalContent,
ModalDescription,
ModalFooter,
ModalTitle,
ModalHeader,
} from '@/components/emcn/components'
import { Trash } from '@/components/emcn/icons/trash'
import { Alert, AlertDescription } from '@/components/ui/alert'
@@ -452,24 +452,23 @@ export function TriggerSave({
)}
<Modal open={showDeleteDialog} onOpenChange={setShowDeleteDialog}>
<ModalContent>
<ModalTitle>Delete trigger?</ModalTitle>
<ModalDescription>
Are you sure you want to delete this trigger configuration? This will remove the webhook
and stop all incoming triggers.{' '}
<span className='text-[var(--text-error)]'>This action cannot be undone.</span>
</ModalDescription>
<ModalContent className='w-[400px]'>
<ModalHeader>Delete Trigger</ModalHeader>
<ModalBody>
<p className='text-[12px] text-[var(--text-tertiary)]'>
Are you sure you want to delete this trigger configuration? This will remove the
webhook and stop all incoming triggers.{' '}
<span className='text-[var(--text-error)]'>This action cannot be undone.</span>
</p>
</ModalBody>
<ModalFooter>
<Button
variant='outline'
onClick={() => setShowDeleteDialog(false)}
className='h-[32px] px-[12px]'
>
<Button variant='active' onClick={() => setShowDeleteDialog(false)}>
Cancel
</Button>
<Button
variant='primary'
onClick={handleDeleteConfirm}
className='h-[32px] bg-[var(--text-error)] px-[12px] text-[var(--white)] hover:bg-[var(--text-error)] hover:text-[var(--white)]'
className='!bg-[var(--text-error)] !text-white hover:!bg-[var(--text-error)]/90'
>
Delete
</Button>

View File

@@ -5,10 +5,40 @@ import type { SubBlockConfig } from '@/blocks/types'
import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
import { useSubBlockStore } from '@/stores/workflows/subblock/store'
type DependsOnConfig = string[] | { all?: string[]; any?: string[] }
/**
* Parses dependsOn config and returns normalized all/any arrays
*/
function parseDependsOn(dependsOn: DependsOnConfig | undefined): {
allFields: string[]
anyFields: string[]
allDependsOnFields: string[]
} {
if (!dependsOn) {
return { allFields: [], anyFields: [], allDependsOnFields: [] }
}
if (Array.isArray(dependsOn)) {
// Simple array format: all fields required (AND logic)
return { allFields: dependsOn, anyFields: [], allDependsOnFields: dependsOn }
}
// Object format with all/any
const allFields = dependsOn.all || []
const anyFields = dependsOn.any || []
return {
allFields,
anyFields,
allDependsOnFields: [...allFields, ...anyFields],
}
}
/**
* Centralized dependsOn gating for sub-block components.
* - Computes dependency values from the active workflow/block
* - Returns a stable disabled flag to pass to inputs and to guard effects
* - Supports both AND (all) and OR (any) dependency logic
*/
export function useDependsOnGate(
blockId: string,
@@ -21,8 +51,14 @@ export function useDependsOnGate(
const activeWorkflowId = useWorkflowRegistry((s) => s.activeWorkflowId)
// Use only explicit dependsOn from block config. No inference.
const dependsOn: string[] = (subBlock.dependsOn as string[] | undefined) || []
// Parse dependsOn config to get all/any field lists
const { allFields, anyFields, allDependsOnFields } = useMemo(
() => parseDependsOn(subBlock.dependsOn),
[subBlock.dependsOn]
)
// For backward compatibility, expose flat list of all dependency fields
const dependsOn = allDependsOnFields
const normalizeDependencyValue = (rawValue: unknown): unknown => {
if (rawValue === null || rawValue === undefined) return null
@@ -47,33 +83,64 @@ export function useDependsOnGate(
return rawValue
}
const dependencyValues = useSubBlockStore((state) => {
if (dependsOn.length === 0) return [] as any[]
// Get values for all dependency fields (both all and any)
const dependencyValuesMap = useSubBlockStore((state) => {
if (allDependsOnFields.length === 0) return {} as Record<string, unknown>
// If previewContextValues are provided (e.g., tool parameters), use those first
if (previewContextValues) {
return dependsOn.map((depKey) => normalizeDependencyValue(previewContextValues[depKey]))
const map: Record<string, unknown> = {}
for (const key of allDependsOnFields) {
map[key] = normalizeDependencyValue(previewContextValues[key])
}
return map
}
if (!activeWorkflowId) {
const map: Record<string, unknown> = {}
for (const key of allDependsOnFields) {
map[key] = null
}
return map
}
if (!activeWorkflowId) return dependsOn.map(() => null)
const workflowValues = state.workflowValues[activeWorkflowId] || {}
const blockValues = (workflowValues as any)[blockId] || {}
return dependsOn.map((depKey) => normalizeDependencyValue((blockValues as any)[depKey]))
}) as any[]
const map: Record<string, unknown> = {}
for (const key of allDependsOnFields) {
map[key] = normalizeDependencyValue((blockValues as any)[key])
}
return map
})
// For backward compatibility, also provide array of values
const dependencyValues = useMemo(
() => allDependsOnFields.map((key) => dependencyValuesMap[key]),
[allDependsOnFields, dependencyValuesMap]
) as any[]
const isValueSatisfied = (value: unknown): boolean => {
if (value === null || value === undefined) return false
if (typeof value === 'string') return value.trim().length > 0
if (Array.isArray(value)) return value.length > 0
return value !== ''
}
const depsSatisfied = useMemo(() => {
if (dependsOn.length === 0) return true
return dependencyValues.every((value) => {
if (value === null || value === undefined) return false
if (typeof value === 'string') return value.trim().length > 0
if (Array.isArray(value)) return value.length > 0
return value !== ''
})
}, [dependencyValues, dependsOn])
// Check all fields (AND logic) - all must be satisfied
const allSatisfied =
allFields.length === 0 || allFields.every((key) => isValueSatisfied(dependencyValuesMap[key]))
// Check any fields (OR logic) - at least one must be satisfied
const anySatisfied =
anyFields.length === 0 || anyFields.some((key) => isValueSatisfied(dependencyValuesMap[key]))
return allSatisfied && anySatisfied
}, [allFields, anyFields, dependencyValuesMap])
// Block everything except the credential field itself until dependencies are set
const blocked =
!isPreview && dependsOn.length > 0 && !depsSatisfied && subBlock.type !== 'oauth-input'
!isPreview && allDependsOnFields.length > 0 && !depsSatisfied && subBlock.type !== 'oauth-input'
const finalDisabled = disabledProp || isPreview || blocked
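
To make the new AND/OR semantics concrete, here is a hypothetical sub-block config; the field names are invented for the example and only the dependsOn shape comes from this diff:

// Hypothetical config: a credential is always required, plus at least one of baseId / tableUrl.
const dependsOn: SubBlockConfig['dependsOn'] = {
  all: ['credential'],
  any: ['baseId', 'tableUrl'],
}

// With block values { credential: 'cred_123', baseId: '', tableUrl: 'https://example.com/t/1' }:
//   allSatisfied  -> true (credential is non-empty)
//   anySatisfied  -> true (tableUrl is non-empty; baseId may stay empty)
//   depsSatisfied -> true, so the field is not blocked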

View File

@@ -9,6 +9,7 @@ import { useDependsOnGate } from '@/app/workspace/[workspaceId]/w/[workflowId]/c
import { useForeignCredential } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-foreign-credential'
import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/editor/components/sub-block/hooks/use-sub-block-value'
import type { SubBlockConfig } from '@/blocks/types'
import { isDependency } from '@/blocks/utils'
import { resolveSelectorForSubBlock, type SelectorResolution } from '@/hooks/selectors/resolution'
import { useCollaborativeWorkflow } from '@/hooks/use-collaborative-workflow'
import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
@@ -92,7 +93,7 @@ export function FileSelectorInput({
!selectorResolution.context.domain
const missingProject =
selectorResolution?.key === 'jira.issues' &&
subBlock.dependsOn?.includes('projectId') &&
isDependency(subBlock.dependsOn, 'projectId') &&
!selectorResolution?.context.projectId
const missingPlan =
selectorResolution?.key === 'microsoft.planner' && !selectorResolution?.context.planId

View File

@@ -9,11 +9,10 @@ import {
Copy,
Layout,
Modal,
ModalBody,
ModalContent,
ModalDescription,
ModalFooter,
ModalHeader,
ModalTitle,
MoreHorizontal,
Play,
Popover,
@@ -532,28 +531,28 @@ export function Panel() {
{/* Delete Confirmation Modal */}
<Modal open={isDeleteModalOpen} onOpenChange={setIsDeleteModalOpen}>
<ModalContent>
<ModalHeader>
<ModalTitle>Delete workflow?</ModalTitle>
<ModalDescription>
<ModalContent className='w-[400px]'>
<ModalHeader>Delete Workflow</ModalHeader>
<ModalBody>
<p className='text-[12px] text-[var(--text-tertiary)]'>
Deleting this workflow will permanently remove all associated blocks, executions, and
configuration.{' '}
<span className='text-[var(--text-error)]'>This action cannot be undone.</span>
</ModalDescription>
</ModalHeader>
</p>
</ModalBody>
<ModalFooter>
<Button
className='h-[32px] px-[12px]'
variant='outline'
variant='active'
onClick={() => setIsDeleteModalOpen(false)}
disabled={isDeleting}
>
Cancel
</Button>
<Button
className='h-[32px] bg-[var(--text-error)] px-[12px] text-[var(--white)] hover:bg-[var(--text-error)] hover:text-[var(--white)]'
variant='primary'
onClick={handleDeleteWorkflow}
disabled={isDeleting}
className='!bg-[var(--text-error)] !text-white hover:!bg-[var(--text-error)]/90'
>
{isDeleting ? 'Deleting...' : 'Delete'}
</Button>

Some files were not shown because too many files have changed in this diff.