Compare commits

...

30 Commits

Author SHA1 Message Date
Waleed
be578e2ed7 v0.5.56: batch operations, access control and permission groups, billing fixes 2026-01-10 00:31:34 -08:00
Waleed
baa54b4c97 feat(docs): added circleback docs (#2762) 2026-01-10 00:30:49 -08:00
Waleed
a11d452d7b fix(build): fixed circular dependencies (#2761) 2026-01-10 00:10:20 -08:00
Waleed
6262503b89 feat(deployed-form): added deployed form input (#2679)
* feat(deployed-form): added deployed form input

* styling consolidation, finishing touches on form

* updated docs

* remove unused files with knip

* added more form fields

* consolidated more test utils

* remove unused/unneeded zustand stores, refactored stores for consistency

* improvement(files): uncolorized plan name

* feat(emcn): button-group

* feat(emcn): tag input, tooltip shortcut

* improvement(emcn): modal padding, api, chat, form

* fix: deleted migrations

* feat(form): added migrations

* fix(emcn): tag input

* fix: failing tests on build

* add supplementary hover and fix bg color in date picker

* fix: build errors

---------

Co-authored-by: Emir Karabeg <emirkarabeg@berkeley.edu>
2026-01-09 23:42:21 -08:00
Waleed
67440432bf fix(ops): fix subflow resizing on exit (#2760)
* fix(sockets): broadcast handles and enabled/disabled state

* made all ops batched, removed all individual ops

* fix subflow resizing on exit

* removed unused custom event

* fix failing tests, update testing

* fix test mock
2026-01-09 22:35:03 -08:00
Vikhyath Mondreti
47eb060311 feat(enterprise): permission groups, access control (#2736)
* feat(permission-groups): integration/model access controls for enterprise

* feat: enterprise gating for BYOK, SSO, credential sets with org admin/owner checks

* execution time enforcement of mcp and custom tools

* add admin routes to cleanup permission group data

* fix not being on enterprise checks

* separate out orgs from billing system

* update the docs

* add custom tool blockers based on perm configs

* add migrations

* fix

* address greptile comments

* regen migrations

* fix default model picking based on user config

* cleaned up UI
2026-01-09 20:16:22 -08:00
Adam Gough
fd76e98f0e improvement(wand): added more wands (#2756)
* added wand configs

* fixed greptile comments
2026-01-09 18:41:51 -08:00
Waleed
1dbd16115f feat(sidebar): context menu for nav items in sidebar, toolbar blocks, added missing docs for various blocks and triggers (#2754)
* feat(sidebar): context menu for nav items in sidebar

* added toolbar context menu, fixed incorrect access pattern in old context menus and added docs for missing blocks

* fixed links
2026-01-09 17:50:10 -08:00
Vikhyath Mondreti
38e827b61a fix(docs): new router (#2755)
* fix(docs): new router

* update image
2026-01-09 17:37:04 -08:00
Waleed
1f5e8a41f8 fix(tools): fixed workflow tool for agent to respect user provided params, inject at runtime like all other tools (#2750)
* fix(tools): fixed workflow tool for agent to respect user provided params, inject at runtime like all other tools

* ack comments

* remove redundant if-else

* added tests
2026-01-09 17:12:58 -08:00
Adam Gough
796f73ee01 improvement(google-drive) (#2752)
* expanded metadata fields for google drive

* added tag dropdown support

* fixed greptile

* added utils func

* removed comments

* updated docs

* greptile comments

* fixed output schema

* reverted back to base64 string
2026-01-09 16:56:07 -08:00
Waleed
d3d6012d5c fix(tools): updated memory block to throw better errors, removed deprecated posthog route, remove deprecated templates & console helpers (#2753)
* fix(tools): updated memory block to throw better errors, removed deprecated posthog route, remove deprecated templates & console helpers

* remove isDeployed in favor of deploymentStatus

* ack PR comments
2026-01-09 16:53:37 -08:00
Vikhyath Mondreti
860610b4c2 improvement(billing): team upgrade + session management (#2751)
* improvement(billing): team upgrade + session management

* remove comments

* session updates should be atomic

* make consistent for onSubscritionUpdate

* plan upgrade to refresh session

* fix var name

* remove dead code

* preserve params
2026-01-09 16:36:45 -08:00
Waleed
05bbf34265 improvement(canvas): add multi-block select, add batch handle, enabled, and edge operations (#2738)
* improvement(canvas): add multi-block select, add batch handle, enabled, and edge operations

* feat(i18n): update translations (#2732)

Co-authored-by: icecrasher321 <icecrasher321@users.noreply.github.com>

* don't allow flip handles for subflows

* ack PR comments

* more

* fix missing handler

* remove dead subflow-specific ops

* remove unused code

* fixed subflow ops

* keep edges on subflow actions intact

* fix subflow resizing

* fix remove from subflow bulk

* improvement(canvas): add multi-block select, add batch handle, enabled, and edge operations

* don't allow flip handles for subflows

* ack PR comments

* more

* fix missing handler

* remove dead subflow-specific ops

* remove unused code

* fixed subflow ops

* fix subflow resizing

* keep edges on subflow actions intact

* fixed copy from inside subflow

* types improvement, preview fixes

* fetch variable data in deploy modal

* moved remove from subflow one position to the right

* fix subflow issues

* address greptile comment

* fix test

* improvement(preview): ui/ux

* fix(preview): subflows

* added batch add edges

* removed recovery

* use consolidated consts for sockets operations

* more

---------

Co-authored-by: icecrasher321 <icecrasher321@users.noreply.github.com>
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
Co-authored-by: Emir Karabeg <emirkarabeg@berkeley.edu>
2026-01-09 14:48:23 -08:00
Waleed
753600ed60 feat(i18n): update translations (#2749)
Co-authored-by: icecrasher321 <icecrasher321@users.noreply.github.com>
2026-01-09 14:11:57 -08:00
Vikhyath Mondreti
4da43d937c improvement(docs): multiplier dropped to 1.4 (#2748) 2026-01-09 11:41:04 -08:00
Waleed
9502227fd4 fix(sso): add missing deps to db container for running script (#2746) 2026-01-09 09:42:13 -08:00
Waleed
f415e5edc4 v0.5.55: polling groups, bedrock provider, devcontainer fixes, workflow preview enhancements 2026-01-08 23:36:56 -08:00
Adam Gough
13981549d1 fix(grain): grain trigger update (#2739)
* grain trigger new requirements

* removed comment

* made it generic for all triggers

* fire only for specific trigger type

* removed comments
2026-01-08 23:10:11 -08:00
Waleed
554dcdf062 improvement(execution-snapshot): enhance workflow preview in logs and deploy modal (#2742)
* added larger live deployment preview

* edited subblock UI

* removed comments

* removed carrot

* updated styling to match existing subblocks

* enriched workflow preview

* fix connection in log preview

* cleanup

* ack PR comments

* more PR comments

* more

* cleanup

* use reactquery cache in deploy modal

* ack comments

* ack PR comment

---------

Co-authored-by: aadamgough <adam@sim.ai>
2026-01-08 23:04:54 -08:00
Adam Gough
6b28742b68 fix(linear): missing params (#2740)
* added missing params

* fixed linear bugs
2026-01-08 20:42:09 -08:00
Vikhyath Mondreti
e5c95093f6 improvement(autoconnect): click to add paths also autoconnect (#2737) 2026-01-08 18:16:15 -08:00
Waleed
b87af80bff feat(i18n): update translations (#2732)
Co-authored-by: icecrasher321 <icecrasher321@users.noreply.github.com>
2026-01-08 18:14:00 -08:00
Vikhyath Mondreti
c2180bf8a0 improvement(enterprise): feature flagging + runtime checks consolidation (#2730)
* improvement(enterprise): enterprise checks code consolidation

* update docs

* revert isHosted check

* add unique index to prevent multiple orgs per user

* address greptile comments

* ui bug
2026-01-08 13:53:22 -08:00
Waleed
fdac4314d2 fix(chat): update stream to respect all output select objects (#2729) 2026-01-08 11:54:07 -08:00
Waleed
a54fcbc094 improvement(auth): added ability to inject secrets to kubernetes, server-side ff to disable email registration (#2728)
* improvement(auth): added ability to inject secrets to kubernetes, server-side ff to disable email registration

* consolidated telemetry events

* comments cleanup

* ack PR comment

* refactor to use createEnvMock helper instead of local mocks
2026-01-08 11:09:35 -08:00
Waleed
05904a73b2 feat(i18n): update translations (#2721)
Co-authored-by: icecrasher321 <icecrasher321@users.noreply.github.com>
2026-01-08 10:30:53 -08:00
Lakshman Patel
1b22d2ce81 fix(devcontainer): use bunx for concurrently command (#2723) 2026-01-07 21:20:29 -08:00
Waleed
26dff7cffe feat(bedrock): added aws bedrock as a model provider (#2722) 2026-01-07 20:08:03 -08:00
Vikhyath Mondreti
020037728d feat(polling-groups): can invite multiple people to have their gmail/outlook inboxes connected to a workflow (#2695)
* progress on cred sets

* fix credential set system

* return data to render credential set in block preview

* progress

* invite flow

* simplify code

* fix ui

* fix tests

* fix types

* fix

* fix icon for outlook

* fix cred set name not showing up for owner

* fix rendering of credential set name

* fix outlook well known folder id resolution

* fix perms for creating cred set

* add to docs and simplify ui

* consolidate webhook code better

* fix tests

* fix credential collab logic issue

* fix ui

* fix lint
2026-01-07 17:49:40 -08:00
718 changed files with 70279 additions and 14227 deletions

View File

@@ -1,60 +1,57 @@
---
description: Testing patterns with Vitest
description: Testing patterns with Vitest and @sim/testing
globs: ["apps/sim/**/*.test.ts", "apps/sim/**/*.test.tsx"]
---
# Testing Patterns
Use Vitest. Test files live next to source: `feature.ts` → `feature.test.ts`
Use Vitest. Test files: `feature.ts` → `feature.test.ts`
## Structure
```typescript
/**
* Tests for [feature name]
*
* @vitest-environment node
*/
import { databaseMock, loggerMock } from '@sim/testing'
import { describe, expect, it, vi } from 'vitest'
// 1. Mocks BEFORE imports
vi.mock('@sim/db', () => ({ db: { select: vi.fn() } }))
vi.mock('@sim/db', () => databaseMock)
vi.mock('@sim/logger', () => loggerMock)
// 2. Imports AFTER mocks
import { describe, expect, it, vi, beforeEach, afterEach } from 'vitest'
import { createSession, loggerMock } from '@sim/testing'
import { myFunction } from '@/lib/feature'
describe('myFunction', () => {
beforeEach(() => vi.clearAllMocks())
it('should do something', () => {
expect(myFunction()).toBe(expected)
})
it.concurrent('runs in parallel', () => { ... })
it.concurrent('isolated tests run in parallel', () => { ... })
})
```
## @sim/testing Package
```typescript
// Factories - create test data
import { createBlock, createWorkflow, createSession } from '@sim/testing'
// Always prefer these over local mocks.
// Mocks - pre-configured mocks
import { loggerMock, databaseMock, fetchMock } from '@sim/testing'
// Builders - fluent API for complex objects
import { ExecutionBuilder, WorkflowBuilder } from '@sim/testing'
```
| Category | Utilities |
|----------|-----------|
| **Mocks** | `loggerMock`, `databaseMock`, `setupGlobalFetchMock()` |
| **Factories** | `createSession()`, `createWorkflowRecord()`, `createBlock()`, `createExecutorContext()` |
| **Builders** | `WorkflowBuilder`, `ExecutionContextBuilder` |
| **Assertions** | `expectWorkflowAccessGranted()`, `expectBlockExecuted()` |
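As a rough sketch of how these utilities compose (the exact return shapes of `createSession()` and `createWorkflowRecord()` are assumptions, not documented here):
```typescript
// Sketch only: assumes createSession() returns a session with user.id and
// that createWorkflowRecord() accepts field overrides; verify against @sim/testing.
import { createSession, createWorkflowRecord } from '@sim/testing'
import { describe, expect, it } from 'vitest'

describe('workflow factories', () => {
  it('builds a workflow owned by the session user', () => {
    const session = createSession()
    const workflow = createWorkflowRecord({ userId: session.user.id })
    expect(workflow.userId).toBe(session.user.id)
  })
})
```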
## Rules
1. `@vitest-environment node` directive at file top
2. **Mocks before imports** - `vi.mock()` calls must come first
3. Use `@sim/testing` factories over manual test data
4. `it.concurrent` for independent tests (faster)
2. `vi.mock()` calls before importing mocked modules
3. `@sim/testing` utilities over local mocks
4. `it.concurrent` for isolated tests (no shared mutable state)
5. `beforeEach(() => vi.clearAllMocks())` to reset state
6. Group related tests with nested `describe` blocks
7. Test file naming: `*.test.ts` (not `*.spec.ts`)
## Hoisted Mocks
For mutable mock references:
```typescript
const mockFn = vi.hoisted(() => vi.fn())
vi.mock('@/lib/module', () => ({ myFunction: mockFn }))
mockFn.mockResolvedValue({ data: 'test' })
```

View File

@@ -173,13 +173,13 @@ Use Vitest. Test files: `feature.ts` → `feature.test.ts`
/**
* @vitest-environment node
*/
// Mocks BEFORE imports
vi.mock('@sim/db', () => ({ db: { select: vi.fn() } }))
// Imports AFTER mocks
import { databaseMock, loggerMock } from '@sim/testing'
import { describe, expect, it, vi } from 'vitest'
import { createSession, loggerMock } from '@sim/testing'
vi.mock('@sim/db', () => databaseMock)
vi.mock('@sim/logger', () => loggerMock)
import { myFunction } from '@/lib/feature'
describe('feature', () => {
beforeEach(() => vi.clearAllMocks())
@@ -187,7 +187,7 @@ describe('feature', () => {
})
```
Use `@sim/testing` factories over manual test data.
Use `@sim/testing` mocks/factories over local test data. See `.cursor/rules/sim-testing.mdc` for details.
## Utils Rules

View File

@@ -4575,3 +4575,22 @@ export function FirefliesIcon(props: SVGProps<SVGSVGElement>) {
</svg>
)
}
export function BedrockIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='0 0 24 24' xmlns='http://www.w3.org/2000/svg'>
<defs>
<linearGradient id='bedrock_gradient' x1='80%' x2='20%' y1='20%' y2='80%'>
<stop offset='0%' stopColor='#6350FB' />
<stop offset='50%' stopColor='#3D8FFF' />
<stop offset='100%' stopColor='#9AD8F8' />
</linearGradient>
</defs>
<path
d='M13.05 15.513h3.08c.214 0 .389.177.389.394v1.82a1.704 1.704 0 011.296 1.661c0 .943-.755 1.708-1.685 1.708-.931 0-1.686-.765-1.686-1.708 0-.807.554-1.484 1.297-1.662v-1.425h-2.69v4.663a.395.395 0 01-.188.338l-2.69 1.641a.385.385 0 01-.405-.002l-4.926-3.086a.395.395 0 01-.185-.336V16.3L2.196 14.87A.395.395 0 012 14.555L2 14.528V9.406c0-.14.073-.27.192-.34l2.465-1.462V4.448c0-.129.062-.249.165-.322l.021-.014L9.77 1.058a.385.385 0 01.407 0l2.69 1.675a.395.395 0 01.185.336V7.6h3.856V5.683a1.704 1.704 0 01-1.296-1.662c0-.943.755-1.708 1.685-1.708.931 0 1.685.765 1.685 1.708 0 .807-.553 1.484-1.296 1.662v2.311a.391.391 0 01-.389.394h-4.245v1.806h6.624a1.69 1.69 0 011.64-1.313c.93 0 1.685.764 1.685 1.707 0 .943-.754 1.708-1.685 1.708a1.69 1.69 0 01-1.64-1.314H13.05v1.937h4.953l.915 1.18a1.66 1.66 0 01.84-.227c.931 0 1.685.764 1.685 1.707 0 .943-.754 1.708-1.685 1.708-.93 0-1.685-.765-1.685-1.708 0-.346.102-.668.276-.937l-.724-.935H13.05v1.806zM9.973 1.856L7.93 3.122V6.09h-.778V3.604L5.435 4.669v2.945l2.11 1.36L9.712 7.61V5.334h.778V7.83c0 .136-.07.263-.184.335L7.963 9.638v2.081l1.422 1.009-.446.646-1.406-.998-1.53 1.005-.423-.66 1.605-1.055v-1.99L5.038 8.29l-2.26 1.34v1.676l1.972-1.189.398.677-2.37 1.429V14.3l2.166 1.258 2.27-1.368.397.677-2.176 1.311V19.3l1.876 1.175 2.365-1.426.398.678-2.017 1.216 1.918 1.201 2.298-1.403v-5.78l-4.758 2.893-.4-.675 5.158-3.136V3.289L9.972 1.856zM16.13 18.47a.913.913 0 00-.908.92c0 .507.406.918.908.918a.913.913 0 00.907-.919.913.913 0 00-.907-.92zm3.63-3.81a.913.913 0 00-.908.92c0 .508.406.92.907.92a.913.913 0 00.908-.92.913.913 0 00-.908-.92zm1.555-4.99a.913.913 0 00-.908.92c0 .507.407.918.908.918a.913.913 0 00.907-.919.913.913 0 00-.907-.92zM17.296 3.1a.913.913 0 00-.907.92c0 .508.406.92.907.92a.913.913 0 00.908-.92.913.913 0 00-.908-.92z'
fill='url(#bedrock_gradient)'
fillRule='nonzero'
/>
</svg>
)
}

View File

@@ -0,0 +1,76 @@
---
title: Enterprise
description: Enterprise-Funktionen für Organisationen mit erweiterten
Sicherheits- und Compliance-Anforderungen
---
import { Callout } from 'fumadocs-ui/components/callout'
Sim Studio Enterprise bietet erweiterte Funktionen für Organisationen mit erhöhten Sicherheits-, Compliance- und Verwaltungsanforderungen.
---
## Bring Your Own Key (BYOK)
Verwenden Sie Ihre eigenen API-Schlüssel für KI-Modellanbieter anstelle der gehosteten Schlüssel von Sim Studio.
### Unterstützte Anbieter
| Anbieter | Verwendung |
|----------|-------|
| OpenAI | Knowledge Base-Embeddings, Agent-Block |
| Anthropic | Agent-Block |
| Google | Agent-Block |
| Mistral | Knowledge Base OCR |
### Einrichtung
1. Navigieren Sie zu **Einstellungen** → **BYOK** in Ihrem Workspace
2. Klicken Sie auf **Schlüssel hinzufügen** für Ihren Anbieter
3. Geben Sie Ihren API-Schlüssel ein und speichern Sie
<Callout type="warn">
BYOK-Schlüssel werden verschlüsselt gespeichert. Nur Organisationsadministratoren und -inhaber können Schlüssel verwalten.
</Callout>
Wenn konfiguriert, verwenden Workflows Ihren Schlüssel anstelle der gehosteten Schlüssel von Sim Studio. Bei Entfernung wechseln Workflows automatisch zu den gehosteten Schlüsseln zurück.
---
## Single Sign-On (SSO)
Enterprise-Authentifizierung mit SAML 2.0- und OIDC-Unterstützung für zentralisiertes Identitätsmanagement.
### Unterstützte Anbieter
- Okta
- Azure AD / Entra ID
- Google Workspace
- OneLogin
- Jeder SAML 2.0- oder OIDC-Anbieter
### Einrichtung
1. Navigieren Sie zu **Einstellungen** → **SSO** in Ihrem Workspace
2. Wählen Sie Ihren Identitätsanbieter
3. Konfigurieren Sie die Verbindung mithilfe der Metadaten Ihres IdP
4. Aktivieren Sie SSO für Ihre Organisation
<Callout type="info">
Sobald SSO aktiviert ist, authentifizieren sich Teammitglieder über Ihren Identitätsanbieter anstelle von E-Mail/Passwort.
</Callout>
---
## Self-Hosted
Für selbst gehostete Bereitstellungen können Enterprise-Funktionen über Umgebungsvariablen aktiviert werden:
| Variable | Beschreibung |
|----------|-------------|
| `SSO_ENABLED`, `NEXT_PUBLIC_SSO_ENABLED` | Single Sign-On mit SAML/OIDC |
| `CREDENTIAL_SETS_ENABLED`, `NEXT_PUBLIC_CREDENTIAL_SETS_ENABLED` | Polling-Gruppen für E-Mail-Trigger |
<Callout type="warn">
BYOK ist nur im gehosteten Sim Studio verfügbar. Selbst gehostete Deployments konfigurieren AI-Provider-Schlüssel direkt über Umgebungsvariablen.
</Callout>

View File

@@ -49,40 +49,40 @@ Die Modellaufschlüsselung zeigt:
<Tabs items={['Hosted Models', 'Bring Your Own API Key']}>
<Tab>
**Gehostete Modelle** - Sim stellt API-Schlüssel mit einem 2-fachen Preismultiplikator bereit:
**Hosted Models** - Sim bietet API-Schlüssel mit einem 1,4-fachen Preismultiplikator für Agent-Blöcke:
**OpenAI**
| Modell | Basispreis (Eingabe/Ausgabe) | Gehosteter Preis (Eingabe/Ausgabe) |
| Modell | Basispreis (Eingabe/Ausgabe) | Hosted-Preis (Eingabe/Ausgabe) |
|-------|---------------------------|----------------------------|
| GPT-5.1 | 1,25 $ / 10,00 $ | 2,50 $ / 20,00 $ |
| GPT-5 | 1,25 $ / 10,00 $ | 2,50 $ / 20,00 $ |
| GPT-5 Mini | 0,25 $ / 2,00 $ | 0,50 $ / 4,00 $ |
| GPT-5 Nano | 0,05 $ / 0,40 $ | 0,10 $ / 0,80 $ |
| GPT-4o | 2,50 $ / 10,00 $ | 5,00 $ / 20,00 $ |
| GPT-4.1 | 2,00 $ / 8,00 $ | 4,00 $ / 16,00 $ |
| GPT-4.1 Mini | 0,40 $ / 1,60 $ | 0,80 $ / 3,20 $ |
| GPT-4.1 Nano | 0,10 $ / 0,40 $ | 0,20 $ / 0,80 $ |
| o1 | 15,00 $ / 60,00 $ | 30,00 $ / 120,00 $ |
| o3 | 2,00 $ / 8,00 $ | 4,00 $ / 16,00 $ |
| o4 Mini | 1,10 $ / 4,40 $ | 2,20 $ / 8,80 $ |
| GPT-5.1 | $1.25 / $10.00 | $1.75 / $14.00 |
| GPT-5 | $1.25 / $10.00 | $1.75 / $14.00 |
| GPT-5 Mini | $0.25 / $2.00 | $0.35 / $2.80 |
| GPT-5 Nano | $0.05 / $0.40 | $0.07 / $0.56 |
| GPT-4o | $2.50 / $10.00 | $3.50 / $14.00 |
| GPT-4.1 | $2.00 / $8.00 | $2.80 / $11.20 |
| GPT-4.1 Mini | $0.40 / $1.60 | $0.56 / $2.24 |
| GPT-4.1 Nano | $0.10 / $0.40 | $0.14 / $0.56 |
| o1 | $15.00 / $60.00 | $21.00 / $84.00 |
| o3 | $2.00 / $8.00 | $2.80 / $11.20 |
| o4 Mini | $1.10 / $4.40 | $1.54 / $6.16 |
**Anthropic**
| Modell | Basispreis (Eingabe/Ausgabe) | Gehosteter Preis (Eingabe/Ausgabe) |
| Modell | Basispreis (Eingabe/Ausgabe) | Hosted-Preis (Eingabe/Ausgabe) |
|-------|---------------------------|----------------------------|
| Claude Opus 4.5 | 5,00 $ / 25,00 $ | 10,00 $ / 50,00 $ |
| Claude Opus 4.1 | 15,00 $ / 75,00 $ | 30,00 $ / 150,00 $ |
| Claude Sonnet 4.5 | 3,00 $ / 15,00 $ | 6,00 $ / 30,00 $ |
| Claude Sonnet 4.0 | 3,00 $ / 15,00 $ | 6,00 $ / 30,00 $ |
| Claude Haiku 4.5 | 1,00 $ / 5,00 $ | 2,00 $ / 10,00 $ |
| Claude Opus 4.5 | $5.00 / $25.00 | $7.00 / $35.00 |
| Claude Opus 4.1 | $15.00 / $75.00 | $21.00 / $105.00 |
| Claude Sonnet 4.5 | $3.00 / $15.00 | $4.20 / $21.00 |
| Claude Sonnet 4.0 | $3.00 / $15.00 | $4.20 / $21.00 |
| Claude Haiku 4.5 | $1.00 / $5.00 | $1.40 / $7.00 |
**Google**
| Modell | Basispreis (Eingabe/Ausgabe) | Gehosteter Preis (Eingabe/Ausgabe) |
| Modell | Basispreis (Eingabe/Ausgabe) | Hosted-Preis (Eingabe/Ausgabe) |
|-------|---------------------------|----------------------------|
| Gemini 3 Pro Preview | 2,00 $ / 12,00 $ | 4,00 $ / 24,00 $ |
| Gemini 2.5 Pro | 1,25 $ / 10,00 $ | 2,50 $ / 20,00 $ |
| Gemini 2.5 Flash | 0,30 $ / 2,50 $ | 0,60 $ / 5,00 $ |
| Gemini 3 Pro Preview | $2.00 / $12.00 | $2.80 / $16.80 |
| Gemini 2.5 Pro | $1.25 / $10.00 | $1.75 / $14.00 |
| Gemini 2.5 Flash | $0.30 / $2.50 | $0.42 / $3.50 |
*Der 2x-Multiplikator deckt Infrastruktur- und API-Verwaltungskosten ab.*
*Der 1,4-fache Multiplikator deckt Infrastruktur- und API-Verwaltungskosten ab.*
</Tab>
<Tab>

View File

@@ -17,7 +17,7 @@ MCP-Server gruppieren Ihre Workflow-Tools zusammen. Erstellen und verwalten Sie
<Video src="mcp/mcp-server.mp4" width={700} height={450} />
</div>
1. Navigieren Sie zu **Einstellungen → MCP-Server**
1. Navigieren Sie zu **Einstellungen → Bereitgestellte MCPs**
2. Klicken Sie auf **Server erstellen**
3. Geben Sie einen Namen und eine optionale Beschreibung ein
4. Kopieren Sie die Server-URL zur Verwendung in Ihren MCP-Clients
@@ -79,7 +79,7 @@ Füge deinen API-Key-Header (`X-API-Key`) für authentifizierten Zugriff hinzu,
## Server-Verwaltung
In der Server-Detailansicht unter **Einstellungen → MCP-Server** kannst du:
In der Server-Detailansicht unter **Einstellungen → Bereitgestellte MCPs** können Sie:
- **Tools anzeigen**: Alle Workflows sehen, die einem Server hinzugefügt wurden
- **URL kopieren**: Die Server-URL für MCP-Clients abrufen

View File

@@ -27,7 +27,7 @@ MCP-Server stellen Sammlungen von Tools bereit, die Ihre Agenten nutzen können.
</div>
1. Navigieren Sie zu Ihren Workspace-Einstellungen
2. Gehen Sie zum Abschnitt **MCP-Server**
2. Gehen Sie zum Abschnitt **Bereitgestellte MCPs**
3. Klicken Sie auf **MCP-Server hinzufügen**
4. Geben Sie die Server-Konfigurationsdetails ein
5. Speichern Sie die Konfiguration

View File

@@ -22,7 +22,7 @@ Verwende den Start-Block für alles, was aus dem Editor, deploy-to-API oder depl
<Cards>
<Card title="Start" href="/triggers/start">
Einheitlicher Einstiegspunkt, der Editor-Ausführungen, API-Bereitstellungen und Chat-Bereitstellungen unterstützt
Einheitlicher Einstiegspunkt, der Editor-Ausführungen, API-Deployments und Chat-Deployments unterstützt
</Card>
<Card title="Webhook" href="/triggers/webhook">
Externe Webhook-Payloads empfangen
@@ -33,6 +33,9 @@ Verwende den Start-Block für alles, was aus dem Editor, deploy-to-API oder depl
<Card title="RSS Feed" href="/triggers/rss">
RSS- und Atom-Feeds auf neue Inhalte überwachen
</Card>
<Card title="Email Polling Groups" href="#email-polling-groups">
Team-Gmail- und Outlook-Postfächer überwachen
</Card>
</Cards>
## Schneller Vergleich
@@ -43,6 +46,7 @@ Verwende den Start-Block für alles, was aus dem Editor, deploy-to-API oder depl
| **Schedule** | Timer, der im Schedule-Block verwaltet wird |
| **Webhook** | Bei eingehender HTTP-Anfrage |
| **RSS Feed** | Neues Element im Feed veröffentlicht |
| **Email Polling Groups** | Neue E-Mail in Team-Gmail- oder Outlook-Postfächern empfangen |
> Der Start-Block stellt immer `input`, `conversationId` und `files` Felder bereit. Füge benutzerdefinierte Felder zum Eingabeformat für zusätzliche strukturierte Daten hinzu.
@@ -65,3 +69,25 @@ Wenn du im Editor auf **Run** klickst, wählt Sim automatisch aus, welcher Trigg
Wenn dein Workflow mehrere Trigger hat, wird der Trigger mit der höchsten Priorität ausgeführt. Wenn du beispielsweise sowohl einen Start-Block als auch einen Webhook-Trigger hast, wird beim Klicken auf Run der Start-Block ausgeführt.
**Externe Auslöser mit Mock-Payloads**: Wenn externe Auslöser (Webhooks und Integrationen) manuell ausgeführt werden, generiert Sim automatisch Mock-Payloads basierend auf der erwarteten Datenstruktur des Auslösers. Dies stellt sicher, dass nachgelagerte Blöcke während des Testens Variablen korrekt auflösen können.
## E-Mail-Polling-Gruppen
Polling-Gruppen ermöglichen es Ihnen, die Gmail- oder Outlook-Postfächer mehrerer Teammitglieder mit einem einzigen Trigger zu überwachen. Erfordert einen Team- oder Enterprise-Plan.
**Erstellen einer Polling-Gruppe** (Admin/Owner)
1. Gehen Sie zu **Einstellungen → E-Mail-Polling**
2. Klicken Sie auf **Erstellen** und wählen Sie Gmail oder Outlook
3. Geben Sie einen Namen für die Gruppe ein
**Mitglieder einladen**
1. Klicken Sie auf **Mitglieder hinzufügen** bei Ihrer Polling-Gruppe
2. Geben Sie E-Mail-Adressen ein (durch Komma oder Zeilenumbruch getrennt oder ziehen Sie eine CSV-Datei per Drag & Drop)
3. Klicken Sie auf **Einladungen senden**
Eingeladene erhalten eine E-Mail mit einem Link, um ihr Konto zu verbinden. Sobald die Verbindung hergestellt ist, wird ihr Postfach automatisch in die Polling-Gruppe aufgenommen. Eingeladene müssen keine Mitglieder Ihrer Sim-Organisation sein.
**Verwendung in einem Workflow**
Wählen Sie beim Konfigurieren eines E-Mail-Triggers Ihre Polling-Gruppe aus dem Dropdown-Menü für Anmeldeinformationen anstelle eines einzelnen Kontos aus. Das System erstellt Webhooks für jedes Mitglied und leitet alle E-Mails durch Ihren Workflow.

View File

@@ -6,12 +6,12 @@ import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Image } from '@/components/ui/image'
The Router block uses AI to intelligently route workflows based on content analysis. Unlike Condition blocks that use simple rules, Routers understand context and intent.
The Router block uses AI to intelligently route workflows based on content analysis. Unlike Condition blocks that use simple rules, Routers understand context and intent. Each route you define creates a separate output port, allowing you to connect different paths to different downstream blocks.
<div className="flex justify-center">
<Image
src="/static/blocks/router.png"
alt="Router Block with Multiple Paths"
alt="Router Block with Multiple Route Ports"
width={500}
height={400}
className="my-6"
@@ -32,21 +32,23 @@ The Router block uses AI to intelligently route workflows based on content analy
## Configuration Options
### Content/Prompt
### Context
The content or prompt that the Router will analyze to make routing decisions. This can be:
The context that the Router will analyze to make routing decisions. This is the input data that gets evaluated against your route descriptions. It can be:
- A direct user query or input
- Output from a previous block
- A system-generated message
- Any text content that needs intelligent routing
### Target Blocks
### Routes
The possible destination blocks that the Router can select from. The Router will automatically detect connected blocks, but you can also:
Define the possible paths that the Router can take. Each route consists of:
- Customize the descriptions of target blocks to improve routing accuracy
- Specify routing criteria for each target block
- Exclude certain blocks from being considered as routing targets
- **Route Title**: A name for the route (e.g., "Sales", "Support", "Technical")
- **Route Description**: A clear description of when this route should be selected (e.g., "Route here when the query is about pricing, purchasing, or sales inquiries")
Each route you add creates a **separate output port** on the Router block. Connect each port to the appropriate downstream block for that route.
### Model Selection
@@ -66,8 +68,9 @@ Your API key for the selected LLM provider. This is securely stored and used for
## Outputs
- **`<router.prompt>`**: Summary of the routing prompt
- **`<router.selected_path>`**: Chosen destination block
- **`<router.context>`**: The context that was analyzed
- **`<router.selectedRoute>`**: The ID of the selected route
- **`<router.selected_path>`**: Details of the chosen destination block
- **`<router.tokens>`**: Token usage statistics
- **`<router.cost>`**: Estimated routing cost
- **`<router.model>`**: Model used for decision-making
@@ -75,26 +78,36 @@ Your API key for the selected LLM provider. This is securely stored and used for
## Example Use Cases
**Customer Support Triage** - Route tickets to specialized departments
```
Input (Ticket) → Router → Agent (Engineering) or Agent (Finance)
Input (Ticket) → Router
├── [Sales Route] → Agent (Sales Team)
├── [Technical Route] → Agent (Engineering)
└── [Billing Route] → Agent (Finance)
```
**Content Classification** - Classify and route user-generated content
```
Input (Feedback) → Router → Workflow (Product) or Workflow (Technical)
Input (Feedback) → Router
├── [Product Feedback] → Workflow (Product Team)
└── [Bug Report] → Workflow (Technical Team)
```
**Lead Qualification** - Route leads based on qualification criteria
```
Input (Lead) → Router → Agent (Enterprise Sales) or Workflow (Self-serve)
```
```
Input (Lead) → Router
├── [Enterprise] → Agent (Enterprise Sales)
└── [Self-serve] → Workflow (Automated Onboarding)
```
## Best Practices
- **Provide clear target descriptions**: Help the Router understand when to select each destination with specific, detailed descriptions
- **Use specific routing criteria**: Define clear conditions and examples for each path to improve accuracy
- **Implement fallback paths**: Connect a default destination for when no specific path is appropriate
- **Test with diverse inputs**: Ensure the Router handles various input types, edge cases, and unexpected content
- **Monitor routing performance**: Review routing decisions regularly and refine criteria based on actual usage patterns
- **Choose appropriate models**: Use models with strong reasoning capabilities for complex routing decisions
- **Write clear route descriptions**: Each route description should clearly explain when that route should be selected. Be specific about the criteria.
- **Make routes mutually exclusive**: When possible, ensure route descriptions don't overlap to prevent ambiguous routing decisions.
- **Include an error/fallback route**: Add a catch-all route for unexpected inputs that don't match other routes.
- **Use descriptive route titles**: Route titles appear in the workflow canvas, so make them meaningful for readability.
- **Test with diverse inputs**: Ensure the Router handles various input types, edge cases, and unexpected content.
- **Monitor routing performance**: Review routing decisions regularly and refine route descriptions based on actual usage patterns.
- **Choose appropriate models**: Use models with strong reasoning capabilities for complex routing decisions.

View File

@@ -0,0 +1,120 @@
---
title: Enterprise
description: Enterprise features for business organizations
---
import { Callout } from 'fumadocs-ui/components/callout'
Sim Studio Enterprise provides advanced features for organizations with enhanced security, compliance, and management requirements.
---
## Access Control
Define permission groups to control what features and integrations team members can use.
### Features
- **Allowed Model Providers** - Restrict which AI providers users can access (OpenAI, Anthropic, Google, etc.)
- **Allowed Blocks** - Control which workflow blocks are available
- **Platform Settings** - Hide Knowledge Base, disable MCP tools, or disable custom tools
### Setup
1. Navigate to **Settings** → **Access Control** in your workspace
2. Create a permission group with your desired restrictions
3. Add team members to the permission group
<Callout type="info">
Users not assigned to any permission group have full access. Permission restrictions are enforced both in the UI and at execution time.
</Callout>
---
## Bring Your Own Key (BYOK)
Use your own API keys for AI model providers instead of Sim Studio's hosted keys.
### Supported Providers
| Provider | Usage |
|----------|-------|
| OpenAI | Knowledge Base embeddings, Agent block |
| Anthropic | Agent block |
| Google | Agent block |
| Mistral | Knowledge Base OCR |
### Setup
1. Navigate to **Settings** → **BYOK** in your workspace
2. Click **Add Key** for your provider
3. Enter your API key and save
<Callout type="warn">
BYOK keys are encrypted at rest. Only organization admins and owners can manage keys.
</Callout>
When configured, workflows use your key instead of Sim Studio's hosted keys. If removed, workflows automatically fall back to hosted keys.
---
## Single Sign-On (SSO)
Enterprise authentication with SAML 2.0 and OIDC support for centralized identity management.
### Supported Providers
- Okta
- Azure AD / Entra ID
- Google Workspace
- OneLogin
- Any SAML 2.0 or OIDC provider
### Setup
1. Navigate to **Settings** → **SSO** in your workspace
2. Choose your identity provider
3. Configure the connection using your IdP's metadata
4. Enable SSO for your organization
<Callout type="info">
Once SSO is enabled, team members authenticate through your identity provider instead of email/password.
</Callout>
---
## Self-Hosted Configuration
For self-hosted deployments, enterprise features can be enabled via environment variables without requiring billing.
### Environment Variables
| Variable | Description |
|----------|-------------|
| `ORGANIZATIONS_ENABLED`, `NEXT_PUBLIC_ORGANIZATIONS_ENABLED` | Enable team/organization management |
| `ACCESS_CONTROL_ENABLED`, `NEXT_PUBLIC_ACCESS_CONTROL_ENABLED` | Permission groups for access restrictions |
| `SSO_ENABLED`, `NEXT_PUBLIC_SSO_ENABLED` | Single Sign-On with SAML/OIDC |
| `CREDENTIAL_SETS_ENABLED`, `NEXT_PUBLIC_CREDENTIAL_SETS_ENABLED` | Polling Groups for email triggers |
### Organization Management
When billing is disabled, use the Admin API to manage organizations:
```bash
# Create an organization
curl -X POST https://your-instance/api/v1/admin/organizations \
-H "x-admin-key: YOUR_ADMIN_API_KEY" \
-H "Content-Type: application/json" \
-d '{"name": "My Organization", "ownerId": "user-id-here"}'
# Add a member
curl -X POST https://your-instance/api/v1/admin/organizations/{orgId}/members \
-H "x-admin-key: YOUR_ADMIN_API_KEY" \
-H "Content-Type: application/json" \
-d '{"userId": "user-id-here", "role": "admin"}'
```
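For TypeScript callers, a rough equivalent of the curl commands above might look like the sketch below. The instance URL, admin key, and IDs are placeholders, and the assumption that the create response includes the new organization's `id` is not confirmed here:
```typescript
// Sketch of the Admin API calls above using fetch. Replace the placeholder
// instance URL, admin key, and IDs with your own values.
const BASE_URL = 'https://your-instance';
const ADMIN_KEY = process.env.ADMIN_API_KEY!;

async function adminPost(path: string, body: unknown) {
  const res = await fetch(`${BASE_URL}${path}`, {
    method: 'POST',
    headers: { 'x-admin-key': ADMIN_KEY, 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`Admin API request failed: ${res.status}`);
  return res.json();
}

// Create an organization, then add a member to it.
// Assumption: the create response includes the new organization's id.
const org = await adminPost('/api/v1/admin/organizations', {
  name: 'My Organization',
  ownerId: 'user-id-here',
});
await adminPost(`/api/v1/admin/organizations/${org.id}/members`, {
  userId: 'user-id-here',
  role: 'admin',
});
```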
### Notes
- Enabling `ACCESS_CONTROL_ENABLED` automatically enables organizations, as access control requires organization membership.
- BYOK is only available on hosted Sim Studio. Self-hosted deployments configure AI provider keys directly via environment variables.

View File

@@ -48,40 +48,40 @@ The model breakdown shows:
<Tabs items={['Hosted Models', 'Bring Your Own API Key']}>
<Tab>
**Hosted Models** - Sim provides API keys with a 2x pricing multiplier:
**Hosted Models** - Sim provides API keys with a 1.4x pricing multiplier for Agent blocks:
**OpenAI**
| Model | Base Price (Input/Output) | Hosted Price (Input/Output) |
|-------|---------------------------|----------------------------|
| GPT-5.1 | $1.25 / $10.00 | $2.50 / $20.00 |
| GPT-5 | $1.25 / $10.00 | $2.50 / $20.00 |
| GPT-5 Mini | $0.25 / $2.00 | $0.50 / $4.00 |
| GPT-5 Nano | $0.05 / $0.40 | $0.10 / $0.80 |
| GPT-4o | $2.50 / $10.00 | $5.00 / $20.00 |
| GPT-4.1 | $2.00 / $8.00 | $4.00 / $16.00 |
| GPT-4.1 Mini | $0.40 / $1.60 | $0.80 / $3.20 |
| GPT-4.1 Nano | $0.10 / $0.40 | $0.20 / $0.80 |
| o1 | $15.00 / $60.00 | $30.00 / $120.00 |
| o3 | $2.00 / $8.00 | $4.00 / $16.00 |
| o4 Mini | $1.10 / $4.40 | $2.20 / $8.80 |
| GPT-5.1 | $1.25 / $10.00 | $1.75 / $14.00 |
| GPT-5 | $1.25 / $10.00 | $1.75 / $14.00 |
| GPT-5 Mini | $0.25 / $2.00 | $0.35 / $2.80 |
| GPT-5 Nano | $0.05 / $0.40 | $0.07 / $0.56 |
| GPT-4o | $2.50 / $10.00 | $3.50 / $14.00 |
| GPT-4.1 | $2.00 / $8.00 | $2.80 / $11.20 |
| GPT-4.1 Mini | $0.40 / $1.60 | $0.56 / $2.24 |
| GPT-4.1 Nano | $0.10 / $0.40 | $0.14 / $0.56 |
| o1 | $15.00 / $60.00 | $21.00 / $84.00 |
| o3 | $2.00 / $8.00 | $2.80 / $11.20 |
| o4 Mini | $1.10 / $4.40 | $1.54 / $6.16 |
**Anthropic**
| Model | Base Price (Input/Output) | Hosted Price (Input/Output) |
|-------|---------------------------|----------------------------|
| Claude Opus 4.5 | $5.00 / $25.00 | $10.00 / $50.00 |
| Claude Opus 4.1 | $15.00 / $75.00 | $30.00 / $150.00 |
| Claude Sonnet 4.5 | $3.00 / $15.00 | $6.00 / $30.00 |
| Claude Sonnet 4.0 | $3.00 / $15.00 | $6.00 / $30.00 |
| Claude Haiku 4.5 | $1.00 / $5.00 | $2.00 / $10.00 |
| Claude Opus 4.5 | $5.00 / $25.00 | $7.00 / $35.00 |
| Claude Opus 4.1 | $15.00 / $75.00 | $21.00 / $105.00 |
| Claude Sonnet 4.5 | $3.00 / $15.00 | $4.20 / $21.00 |
| Claude Sonnet 4.0 | $3.00 / $15.00 | $4.20 / $21.00 |
| Claude Haiku 4.5 | $1.00 / $5.00 | $1.40 / $7.00 |
**Google**
| Model | Base Price (Input/Output) | Hosted Price (Input/Output) |
|-------|---------------------------|----------------------------|
| Gemini 3 Pro Preview | $2.00 / $12.00 | $4.00 / $24.00 |
| Gemini 2.5 Pro | $1.25 / $10.00 | $2.50 / $20.00 |
| Gemini 2.5 Flash | $0.30 / $2.50 | $0.60 / $5.00 |
| Gemini 3 Pro Preview | $2.00 / $12.00 | $2.80 / $16.80 |
| Gemini 2.5 Pro | $1.25 / $10.00 | $1.75 / $14.00 |
| Gemini 2.5 Flash | $0.30 / $2.50 | $0.42 / $3.50 |
*The 2x multiplier covers infrastructure and API management costs.*
*The 1.4x multiplier covers infrastructure and API management costs.*
</Tab>
<Tab>

View File

@@ -0,0 +1,136 @@
---
title: Form Deployment
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
Deploy your workflow as an embeddable form that users can fill out on your website or share via link. Form submissions trigger your workflow with the `form` trigger type.
## Overview
Form deployment turns your workflow's Input Format into a responsive form that can be:
- Shared via a direct link (e.g., `https://sim.ai/form/my-survey`)
- Embedded in any website using an iframe
When a user submits the form, it triggers your workflow with the form data.
<Callout type="info">
Forms derive their fields from your workflow's Start block Input Format. Each field becomes a form input with the appropriate type.
</Callout>
## Creating a Form
1. Open your workflow and click **Deploy**
2. Select the **Form** tab
3. Configure:
- **URL**: Unique identifier (e.g., `contact-form` → `sim.ai/form/contact-form`)
- **Title**: Form heading
- **Description**: Optional subtitle
- **Form Fields**: Customize labels and descriptions for each field
- **Authentication**: Public, password-protected, or email whitelist
- **Thank You Message**: Shown after submission
4. Click **Launch**
## Field Type Mapping
| Input Format Type | Form Field |
|------------------|------------|
| `string` | Text input |
| `number` | Number input |
| `boolean` | Toggle switch |
| `object` | JSON editor |
| `array` | JSON array editor |
| `files` | File upload |
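To make the mapping concrete, a `formData` payload covering these types might look like the sketch below. The field names are hypothetical; your form's fields come from the Start block's Input Format, and submission follows the API format shown later on this page:
```typescript
// Hypothetical Input Format fields: the keys below are examples only; your
// form's fields are whatever you define on the Start block.
const formData = {
  name: 'John Doe',        // string  -> text input
  quantity: 3,             // number  -> number input
  subscribe: true,         // boolean -> toggle switch
  address: { city: 'SF' }, // object  -> JSON editor
  tags: ['a', 'b'],        // array   -> JSON array editor
};

// Submitted the same way as the API Submission example below.
await fetch('https://sim.ai/api/form/your-identifier', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ formData }),
});
```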
## Access Control
| Mode | Description |
|------|-------------|
| **Public** | Anyone with the link can submit |
| **Password** | Users must enter a password |
| **Email Whitelist** | Only specified emails/domains can submit |
For email whitelist:
- Exact: `user@example.com`
- Domain: `@example.com` (all emails from domain)
## Embedding
### Direct Link
```
https://sim.ai/form/your-identifier
```
### Iframe
```html
<iframe
src="https://sim.ai/form/your-identifier"
width="100%"
height="600"
frameborder="0"
title="Form"
></iframe>
```
## API Submission
Submit forms programmatically:
<Tabs items={['cURL', 'TypeScript']}>
<Tab value="cURL">
```bash
curl -X POST https://sim.ai/api/form/your-identifier \
-H "Content-Type: application/json" \
-d '{
"formData": {
"name": "John Doe",
"email": "john@example.com"
}
}'
```
</Tab>
<Tab value="TypeScript">
```typescript
const response = await fetch('https://sim.ai/api/form/your-identifier', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
formData: {
name: 'John Doe',
email: 'john@example.com'
}
})
});
const result = await response.json();
// { success: true, data: { executionId: '...' } }
```
</Tab>
</Tabs>
### Protected Forms
For password-protected forms:
```bash
curl -X POST https://sim.ai/api/form/your-identifier \
-H "Content-Type: application/json" \
-d '{ "password": "secret", "formData": { "name": "John" } }'
```
For email-protected forms:
```bash
curl -X POST https://sim.ai/api/form/your-identifier \
-H "Content-Type: application/json" \
-d '{ "email": "allowed@example.com", "formData": { "name": "John" } }'
```
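For TypeScript callers, the same protected submissions can be sketched with `fetch`, mirroring the request bodies in the curl examples above (identifier and credentials are placeholders):
```typescript
// Password-protected form: include `password` alongside formData.
await fetch('https://sim.ai/api/form/your-identifier', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ password: 'secret', formData: { name: 'John' } }),
});

// Email-whitelisted form: include the allowed `email` instead.
await fetch('https://sim.ai/api/form/your-identifier', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ email: 'allowed@example.com', formData: { name: 'John' } }),
});
```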
## Troubleshooting
**"No input fields configured"** - Add Input Format fields to your Start block.
**Form not loading in iframe** - Check that your site's CSP allows iframes from `sim.ai`.
**Submissions failing** - Verify the identifier is correct and required fields are filled.

View File

@@ -1,3 +1,3 @@
{
"pages": ["index", "basics", "api", "logging", "costs"]
"pages": ["index", "basics", "api", "form", "logging", "costs"]
}

View File

@@ -15,6 +15,7 @@
"permissions",
"sdks",
"self-hosting",
"./enterprise/index",
"./keyboard-shortcuts/index"
],
"defaultOpen": false

View File

@@ -48,7 +48,7 @@ Integrate Google Drive into the workflow. Can create, upload, and list files.
### `google_drive_upload`
Upload a file to Google Drive
Upload a file to Google Drive with complete metadata returned
#### Input
@@ -65,11 +65,11 @@ Upload a file to Google Drive
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `file` | json | Uploaded file metadata including ID, name, and links |
| `file` | object | Complete uploaded file metadata from Google Drive |
### `google_drive_create_folder`
Create a new folder in Google Drive
Create a new folder in Google Drive with complete metadata returned
#### Input
@@ -83,11 +83,11 @@ Create a new folder in Google Drive
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `file` | json | Created folder metadata including ID, name, and parent information |
| `file` | object | Complete created folder metadata from Google Drive |
### `google_drive_download`
Download a file from Google Drive (exports Google Workspace files automatically)
Download a file from Google Drive with complete metadata (exports Google Workspace files automatically)
#### Input
@@ -96,16 +96,17 @@ Download a file from Google Drive (exports Google Workspace files automatically)
| `fileId` | string | Yes | The ID of the file to download |
| `mimeType` | string | No | The MIME type to export Google Workspace files to \(optional\) |
| `fileName` | string | No | Optional filename override |
| `includeRevisions` | boolean | No | Whether to include revision history in the metadata \(default: true\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `file` | file | Downloaded file stored in execution files |
| `file` | object | Downloaded file stored in execution files |
### `google_drive_list`
List files and folders in Google Drive
List files and folders in Google Drive with complete metadata
#### Input
@@ -121,7 +122,7 @@ List files and folders in Google Drive
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `files` | json | Array of file metadata objects from the specified folder |
| `files` | array | Array of file metadata objects from Google Drive |

View File

@@ -162,6 +162,7 @@ Create a webhook to receive recording events
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Grain API key \(Personal Access Token\) |
| `hookUrl` | string | Yes | Webhook endpoint URL \(must respond 2xx\) |
| `hookType` | string | Yes | Type of webhook: "recording_added" or "upload_status" |
| `filterBeforeDatetime` | string | No | Filter: recordings before this date |
| `filterAfterDatetime` | string | No | Filter: recordings after this date |
| `filterParticipantScope` | string | No | Filter: "internal" or "external" |
@@ -178,6 +179,7 @@ Create a webhook to receive recording events
| `id` | string | Hook UUID |
| `enabled` | boolean | Whether hook is active |
| `hook_url` | string | The webhook URL |
| `hook_type` | string | Type of hook: recording_added or upload_status |
| `filter` | object | Applied filters |
| `include` | object | Included fields |
| `inserted_at` | string | ISO8601 creation timestamp |

View File

@@ -851,24 +851,6 @@ List all status updates for a project in Linear
| --------- | ---- | ----------- |
| `updates` | array | Array of project updates |
### `linear_create_project_link`
Add an external link to a project in Linear
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `projectId` | string | Yes | Project ID to add link to |
| `url` | string | Yes | URL of the external link |
| `label` | string | No | Link label/title |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `link` | object | The created project link |
### `linear_list_notifications`
List notifications for the current user in Linear
@@ -1246,7 +1228,6 @@ Create a new project label in Linear
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `projectId` | string | Yes | The project for this label |
| `name` | string | Yes | Project label name |
| `color` | string | No | Label color \(hex code\) |
| `description` | string | No | Label description |
@@ -1424,12 +1405,12 @@ Create a new project status in Linear
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `projectId` | string | Yes | The project to create the status for |
| `name` | string | Yes | Project status name |
| `type` | string | Yes | Status type: "backlog", "planned", "started", "paused", "completed", or "canceled" |
| `color` | string | Yes | Status color \(hex code\) |
| `position` | number | Yes | Position in status list \(e.g. 0, 1, 2...\) |
| `description` | string | No | Status description |
| `indefinite` | boolean | No | Whether the status is indefinite |
| `position` | number | No | Position in status list |
#### Output

View File

@@ -79,30 +79,6 @@ Capture multiple events at once in PostHog. Use this for bulk event ingestion to
| `status` | string | Status message indicating whether the batch was captured successfully |
| `eventsProcessed` | number | Number of events processed in the batch |
### `posthog_list_events`
List events in PostHog. Note: This endpoint is deprecated but kept for backwards compatibility. For production use, prefer the Query endpoint with HogQL.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `personalApiKey` | string | Yes | PostHog Personal API Key \(for authenticated API access\) |
| `region` | string | No | PostHog region: us \(default\) or eu |
| `projectId` | string | Yes | PostHog Project ID |
| `limit` | number | No | Number of events to return \(default: 100, max: 100\) |
| `offset` | number | No | Number of events to skip for pagination |
| `event` | string | No | Filter by specific event name |
| `distinctId` | string | No | Filter by specific distinct_id |
| `before` | string | No | ISO 8601 timestamp - only return events before this time |
| `after` | string | No | ISO 8601 timestamp - only return events after this time |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `events` | array | List of events with their properties and metadata |
### `posthog_list_persons`
List persons (users) in PostHog. Returns user profiles with their properties and distinct IDs.

View File

@@ -53,6 +53,9 @@ Send a chat completion request to any supported LLM provider
| `vertexProject` | string | No | Google Cloud project ID for Vertex AI |
| `vertexLocation` | string | No | Google Cloud location for Vertex AI \(defaults to us-central1\) |
| `vertexCredential` | string | No | Google Cloud OAuth credential ID for Vertex AI |
| `bedrockAccessKeyId` | string | No | AWS Access Key ID for Bedrock |
| `bedrockSecretKey` | string | No | AWS Secret Access Key for Bedrock |
| `bedrockRegion` | string | No | AWS region for Bedrock \(defaults to us-east-1\) |
#### Output

View File

@@ -33,6 +33,9 @@ Use the Start block for everything originating from the editor, deploy-to-API, o
<Card title="RSS Feed" href="/triggers/rss">
Monitor RSS and Atom feeds for new content
</Card>
<Card title="Email Polling Groups" href="#email-polling-groups">
Monitor team Gmail and Outlook inboxes
</Card>
</Cards>
## Quick Comparison
@@ -43,6 +46,7 @@ Use the Start block for everything originating from the editor, deploy-to-API, o
| **Schedule** | Timer managed in schedule block |
| **Webhook** | On inbound HTTP request |
| **RSS Feed** | New item published to feed |
| **Email Polling Groups** | New email received in team Gmail or Outlook inboxes |
> The Start block always exposes `input`, `conversationId`, and `files` fields. Add custom fields to the input format for additional structured data.
@@ -66,3 +70,24 @@ If your workflow has multiple triggers, the highest priority trigger will be exe
**External triggers with mock payloads**: When external triggers (webhooks and integrations) are executed manually, Sim automatically generates mock payloads based on the trigger's expected data structure. This ensures downstream blocks can resolve variables correctly during testing.
## Email Polling Groups
Polling Groups let you monitor multiple team members' Gmail or Outlook inboxes with a single trigger. Requires a Team or Enterprise plan.
**Creating a Polling Group** (Admin/Owner)
1. Go to **Settings → Email Polling**
2. Click **Create** and choose Gmail or Outlook
3. Enter a name for the group
**Inviting Members**
1. Click **Add Members** on your polling group
2. Enter email addresses (comma or newline separated, or drag & drop a CSV)
3. Click **Send Invites**
Invitees receive an email with a link to connect their account. Once connected, their inbox is automatically included in the polling group. Invitees don't need to be members of your Sim organization.
**Using in a Workflow**
When configuring an email trigger, select your polling group from the credentials dropdown instead of an individual account. The system creates webhooks for each member and routes all emails through your workflow.

View File

@@ -44,7 +44,7 @@ Reference structured values downstream with expressions such as <code>&lt;start.
## How it behaves per entry point
<Tabs items={['Editor run', 'Deploy to API', 'Deploy to chat']}>
<Tabs items={['Editor run', 'Deploy to API', 'Deploy to chat', 'Deploy to form']}>
<Tab>
When you click <strong>Run</strong> in the editor, the Start block renders the Input Format as a form. Default values make it easy to retest without retyping data. Submitting the form triggers the workflow immediately and the values become available on <code>&lt;start.fieldName&gt;</code> (for example <code>&lt;start.sampleField&gt;</code>).
@@ -64,6 +64,13 @@ Reference structured values downstream with expressions such as <code>&lt;start.
If you launch chat with additional structured context (for example from an embed), it merges into the corresponding <code>&lt;start.fieldName&gt;</code> outputs, keeping downstream blocks consistent with API and manual runs.
</Tab>
<Tab>
Form deployments render the Input Format as a standalone, embeddable form page. Each field becomes a form input with appropriate UI controls—text inputs for strings, number inputs for numbers, toggle switches for booleans, and file upload zones for files.
When a user submits the form, values become available on <code>&lt;start.fieldName&gt;</code> just like other entry points. The workflow executes with trigger type <code>form</code>, and submitters see a customizable thank-you message upon completion.
Forms can be embedded via iframe or shared as direct links, making them ideal for surveys, contact forms, and data collection workflows.
</Tab>
</Tabs>
## Referencing Start data downstream

View File

@@ -0,0 +1,76 @@
---
title: Enterprise
description: Funciones enterprise para organizaciones con requisitos avanzados
de seguridad y cumplimiento
---
import { Callout } from 'fumadocs-ui/components/callout'
Sim Studio Enterprise proporciona funciones avanzadas para organizaciones con requisitos mejorados de seguridad, cumplimiento y gestión.
---
## Bring Your Own Key (BYOK)
Usa tus propias claves API para proveedores de modelos de IA en lugar de las claves alojadas de Sim Studio.
### Proveedores compatibles
| Proveedor | Uso |
|----------|-------|
| OpenAI | Embeddings de base de conocimiento, bloque Agent |
| Anthropic | Bloque Agent |
| Google | Bloque Agent |
| Mistral | OCR de base de conocimiento |
### Configuración
1. Navega a **Configuración** → **BYOK** en tu espacio de trabajo
2. Haz clic en **Añadir clave** para tu proveedor
3. Introduce tu clave API y guarda
<Callout type="warn">
Las claves BYOK están cifradas en reposo. Solo los administradores y propietarios de la organización pueden gestionar las claves.
</Callout>
Cuando está configurado, los flujos de trabajo usan tu clave en lugar de las claves alojadas de Sim Studio. Si se elimina, los flujos de trabajo vuelven automáticamente a las claves alojadas.
---
## Single Sign-On (SSO)
Autenticación enterprise con soporte SAML 2.0 y OIDC para gestión centralizada de identidades.
### Proveedores compatibles
- Okta
- Azure AD / Entra ID
- Google Workspace
- OneLogin
- Cualquier proveedor SAML 2.0 u OIDC
### Configuración
1. Navega a **Configuración** → **SSO** en tu espacio de trabajo
2. Elige tu proveedor de identidad
3. Configura la conexión usando los metadatos de tu IdP
4. Activa SSO para tu organización
<Callout type="info">
Una vez que SSO está activado, los miembros del equipo se autentican a través de tu proveedor de identidad en lugar de correo electrónico/contraseña.
</Callout>
---
## Self-Hosted
Para implementaciones self-hosted, las funciones enterprise se pueden activar mediante variables de entorno:
| Variable | Descripción |
|----------|-------------|
| `SSO_ENABLED`, `NEXT_PUBLIC_SSO_ENABLED` | Inicio de sesión único con SAML/OIDC |
| `CREDENTIAL_SETS_ENABLED`, `NEXT_PUBLIC_CREDENTIAL_SETS_ENABLED` | Grupos de sondeo para activadores de correo electrónico |
<Callout type="warn">
BYOK solo está disponible en Sim Studio alojado. Las implementaciones autoalojadas configuran las claves de proveedor de IA directamente a través de variables de entorno.
</Callout>

View File

@@ -49,40 +49,40 @@ El desglose del modelo muestra:
<Tabs items={['Modelos alojados', 'Trae tu propia clave API']}>
<Tab>
**Modelos alojados** - Sim proporciona claves API con un multiplicador de precio de 2x:
**Modelos alojados** - Sim proporciona claves API con un multiplicador de precios de 1.4x para bloques de agente:
**OpenAI**
| Modelo | Precio base (entrada/salida) | Precio alojado (entrada/salida) |
|-------|---------------------------|----------------------------|
| GPT-5.1 | $1.25 / $10.00 | $2.50 / $20.00 |
| GPT-5 | $1.25 / $10.00 | $2.50 / $20.00 |
| GPT-5 Mini | $0.25 / $2.00 | $0.50 / $4.00 |
| GPT-5 Nano | $0.05 / $0.40 | $0.10 / $0.80 |
| GPT-4o | $2.50 / $10.00 | $5.00 / $20.00 |
| GPT-4.1 | $2.00 / $8.00 | $4.00 / $16.00 |
| GPT-4.1 Mini | $0.40 / $1.60 | $0.80 / $3.20 |
| GPT-4.1 Nano | $0.10 / $0.40 | $0.20 / $0.80 |
| o1 | $15.00 / $60.00 | $30.00 / $120.00 |
| o3 | $2.00 / $8.00 | $4.00 / $16.00 |
| o4 Mini | $1.10 / $4.40 | $2.20 / $8.80 |
| GPT-5.1 | $1.25 / $10.00 | $1.75 / $14.00 |
| GPT-5 | $1.25 / $10.00 | $1.75 / $14.00 |
| GPT-5 Mini | $0.25 / $2.00 | $0.35 / $2.80 |
| GPT-5 Nano | $0.05 / $0.40 | $0.07 / $0.56 |
| GPT-4o | $2.50 / $10.00 | $3.50 / $14.00 |
| GPT-4.1 | $2.00 / $8.00 | $2.80 / $11.20 |
| GPT-4.1 Mini | $0.40 / $1.60 | $0.56 / $2.24 |
| GPT-4.1 Nano | $0.10 / $0.40 | $0.14 / $0.56 |
| o1 | $15.00 / $60.00 | $21.00 / $84.00 |
| o3 | $2.00 / $8.00 | $2.80 / $11.20 |
| o4 Mini | $1.10 / $4.40 | $1.54 / $6.16 |
**Anthropic**
| Modelo | Precio base (entrada/salida) | Precio alojado (entrada/salida) |
|-------|---------------------------|----------------------------|
| Claude Opus 4.5 | $5.00 / $25.00 | $10.00 / $50.00 |
| Claude Opus 4.1 | $15.00 / $75.00 | $30.00 / $150.00 |
| Claude Sonnet 4.5 | $3.00 / $15.00 | $6.00 / $30.00 |
| Claude Sonnet 4.0 | $3.00 / $15.00 | $6.00 / $30.00 |
| Claude Haiku 4.5 | $1.00 / $5.00 | $2.00 / $10.00 |
| Claude Opus 4.5 | $5.00 / $25.00 | $7.00 / $35.00 |
| Claude Opus 4.1 | $15.00 / $75.00 | $21.00 / $105.00 |
| Claude Sonnet 4.5 | $3.00 / $15.00 | $4.20 / $21.00 |
| Claude Sonnet 4.0 | $3.00 / $15.00 | $4.20 / $21.00 |
| Claude Haiku 4.5 | $1.00 / $5.00 | $1.40 / $7.00 |
**Google**
| Modelo | Precio base (entrada/salida) | Precio alojado (entrada/salida) |
|-------|---------------------------|----------------------------|
| Gemini 3 Pro Preview | $2.00 / $12.00 | $4.00 / $24.00 |
| Gemini 2.5 Pro | $1.25 / $10.00 | $2.50 / $20.00 |
| Gemini 2.5 Flash | $0.30 / $2.50 | $0.60 / $5.00 |
| Gemini 3 Pro Preview | $2.00 / $12.00 | $2.80 / $16.80 |
| Gemini 2.5 Pro | $1.25 / $10.00 | $1.75 / $14.00 |
| Gemini 2.5 Flash | $0.30 / $2.50 | $0.42 / $3.50 |
*El multiplicador 2x cubre los costos de infraestructura y gestión de API.*
*El multiplicador de 1.4x cubre los costos de infraestructura y gestión de API.*
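A modo de verificación, un ejemplo ilustrativo (no es código del producto) de cómo se obtiene el precio alojado a partir del precio base y el multiplicador de 1.4x:

```ts
// Ejemplo ilustrativo del multiplicador de 1.4x sobre el precio base por millón de tokens.
const hostedPrice = (basePrice: number) => Number((basePrice * 1.4).toFixed(2))

hostedPrice(1.25) // 1.75 (entrada de GPT-5.1)
hostedPrice(10.0) // 14 (salida de GPT-5.1)
```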
</Tab>
<Tab>

View File

@@ -17,7 +17,7 @@ Los servidores MCP agrupan tus herramientas de flujo de trabajo. Créalos y gest
<Video src="mcp/mcp-server.mp4" width={700} height={450} />
</div>
1. Navega a **Configuración → Servidores MCP**
1. Navega a **Configuración → MCP implementados**
2. Haz clic en **Crear servidor**
3. Introduce un nombre y una descripción opcional
4. Copia la URL del servidor para usarla en tus clientes MCP
@@ -79,7 +79,7 @@ Incluye tu encabezado de clave API (`X-API-Key`) para acceso autenticado al usar
## Gestión del servidor
Desde la vista de detalle del servidor en **Configuración → Servidores MCP**, puedes:
Desde la vista de detalles del servidor en **Configuración → MCP implementados**, puedes:
- **Ver herramientas**: consulta todos los flujos de trabajo añadidos a un servidor
- **Copiar URL**: obtén la URL del servidor para clientes MCP
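
Boceto hipotético (la URL, la ruta y la clave son marcadores de posición) de cómo un cliente HTTP podría adjuntar el encabezado `X-API-Key` al conectarse a la URL del servidor copiada:

```ts
// Boceto ilustrativo: adjuntar la clave API al llamar a la URL del servidor MCP.
// La URL y la clave son marcadores de posición; el transporte real depende de tu cliente MCP.
const serverUrl = 'https://sim.example.com/api/mcp/servers/<server-id>'

const response = await fetch(serverUrl, {
  headers: { 'X-API-Key': process.env.SIM_API_KEY ?? '' },
})
console.log(response.status)
```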

View File

@@ -26,8 +26,8 @@ Los servidores MCP proporcionan colecciones de herramientas que tus agentes pued
<Video src="mcp/settings-mcp-tools.mp4" width={700} height={450} />
</div>
1. Navega a los ajustes de tu espacio de trabajo
2. Ve a la sección **Servidores MCP**
1. Navega a la configuración de tu espacio de trabajo
2. Ve a la sección **MCP implementados**
3. Haz clic en **Añadir servidor MCP**
4. Introduce los detalles de configuración del servidor
5. Guarda la configuración

View File

@@ -22,7 +22,7 @@ Utiliza el bloque Start para todo lo que se origina desde el editor, despliegue
<Cards>
<Card title="Start" href="/triggers/start">
Punto de entrada unificado que admite ejecuciones del editor, despliegues de API y despliegues de chat
Punto de entrada unificado que admite ejecuciones en el editor, despliegues de API y despliegues de chat
</Card>
<Card title="Webhook" href="/triggers/webhook">
Recibe cargas útiles de webhooks externos
@@ -31,18 +31,22 @@ Utiliza el bloque Start para todo lo que se origina desde el editor, despliegue
Ejecución basada en cron o intervalos
</Card>
<Card title="RSS Feed" href="/triggers/rss">
Monitorea feeds RSS y Atom para nuevo contenido
Monitorea feeds RSS y Atom para detectar contenido nuevo
</Card>
<Card title="Email Polling Groups" href="#email-polling-groups">
Monitorea bandejas de entrada de Gmail y Outlook del equipo
</Card>
</Cards>
## Comparación rápida
| Disparador | Condición de inicio |
| Trigger | Condición de inicio |
|---------|-----------------|
| **Start** | Ejecuciones del editor, solicitudes de despliegue a API o mensajes de chat |
| **Start** | Ejecuciones en el editor, solicitudes de despliegue a API o mensajes de chat |
| **Schedule** | Temporizador gestionado en el bloque de programación |
| **Webhook** | Al recibir una solicitud HTTP entrante |
| **RSS Feed** | Nuevo elemento publicado en el feed |
| **Email Polling Groups** | Nuevo correo electrónico recibido en bandejas de entrada de Gmail o Outlook del equipo |
> El bloque Start siempre expone los campos `input`, `conversationId` y `files`. Añade campos personalizados al formato de entrada para datos estructurados adicionales.
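
Un ejemplo ilustrativo (la forma exacta del cuerpo puede variar) de una solicitud a un despliegue de API que combina los campos estándar con un campo personalizado:

```ts
// Boceto: cuerpo de solicitud hipotético para una ejecución desplegada vía API.
// `customerId` es un campo personalizado de ejemplo añadido al formato de entrada.
const body = {
  input: 'Resume el último ticket de soporte',
  conversationId: 'conv-123',
  files: [],
  customerId: 'acme-42',
}
```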
@@ -65,3 +69,25 @@ Cuando haces clic en **Ejecutar** en el editor, Sim selecciona automáticamente
Si tu flujo de trabajo tiene múltiples disparadores, se ejecutará el disparador de mayor prioridad. Por ejemplo, si tienes tanto un bloque Start como un disparador Webhook, al hacer clic en Ejecutar se ejecutará el bloque Start.
**Disparadores externos con cargas útiles simuladas**: Cuando los disparadores externos (webhooks e integraciones) se ejecutan manualmente, Sim genera automáticamente cargas útiles simuladas basadas en la estructura de datos esperada del disparador. Esto asegura que los bloques posteriores puedan resolver las variables correctamente durante las pruebas.
## Grupos de sondeo de correo electrónico
Los grupos de sondeo te permiten monitorear las bandejas de entrada de Gmail o Outlook de varios miembros del equipo con un solo activador. Requiere un plan Team o Enterprise.
**Crear un grupo de sondeo** (administrador/propietario)
1. Ve a **Configuración → Sondeo de correo electrónico**
2. Haz clic en **Crear** y elige Gmail u Outlook
3. Ingresa un nombre para el grupo
**Invitar miembros**
1. Haz clic en **Agregar miembros** en tu grupo de sondeo
2. Ingresa direcciones de correo electrónico (separadas por comas o saltos de línea, o arrastra y suelta un CSV)
3. Haz clic en **Enviar invitaciones**
Los invitados reciben un correo electrónico con un enlace para conectar su cuenta. Una vez conectada, su bandeja de entrada se incluye automáticamente en el grupo de sondeo. Los invitados no necesitan ser miembros de tu organización Sim.
**Usar en un flujo de trabajo**
Al configurar un activador de correo electrónico, selecciona tu grupo de sondeo del menú desplegable de credenciales en lugar de una cuenta individual. El sistema crea webhooks para cada miembro y enruta todos los correos electrónicos a través de tu flujo de trabajo.

View File

@@ -0,0 +1,76 @@
---
title: Entreprise
description: Fonctionnalités entreprise pour les organisations ayant des
exigences avancées en matière de sécurité et de conformité
---
import { Callout } from 'fumadocs-ui/components/callout'
Sim Studio Entreprise fournit des fonctionnalités avancées pour les organisations ayant des exigences renforcées en matière de sécurité, de conformité et de gestion.
---
## Apportez votre propre clé (BYOK)
Utilisez vos propres clés API pour les fournisseurs de modèles IA au lieu des clés hébergées par Sim Studio.
### Fournisseurs pris en charge
| Fournisseur | Utilisation |
|----------|-------|
| OpenAI | Embeddings de base de connaissances, bloc Agent |
| Anthropic | Bloc Agent |
| Google | Bloc Agent |
| Mistral | OCR de base de connaissances |
### Configuration
1. Accédez à **Paramètres** → **BYOK** dans votre espace de travail
2. Cliquez sur **Ajouter une clé** pour votre fournisseur
3. Saisissez votre clé API et enregistrez
<Callout type="warn">
Les clés BYOK sont chiffrées au repos. Seuls les administrateurs et propriétaires de l'organisation peuvent gérer les clés.
</Callout>
Une fois configurés, les workflows utilisent votre clé au lieu des clés hébergées par Sim Studio. Si elle est supprimée, les workflows basculent automatiquement vers les clés hébergées.
---
## Authentification unique (SSO)
Authentification entreprise avec prise en charge de SAML 2.0 et OIDC pour une gestion centralisée des identités.
### Fournisseurs pris en charge
- Okta
- Azure AD / Entra ID
- Google Workspace
- OneLogin
- Tout fournisseur SAML 2.0 ou OIDC
### Configuration
1. Accédez à **Paramètres** → **SSO** dans votre espace de travail
2. Choisissez votre fournisseur d'identité
3. Configurez la connexion en utilisant les métadonnées de votre IdP
4. Activez le SSO pour votre organisation
<Callout type="info">
Une fois le SSO activé, les membres de l'équipe s'authentifient via votre fournisseur d'identité au lieu d'utiliser un email/mot de passe.
</Callout>
---
## Auto-hébergé
Pour les déploiements auto-hébergés, les fonctionnalités entreprise peuvent être activées via des variables d'environnement :
| Variable | Description |
|----------|-------------|
| `SSO_ENABLED`, `NEXT_PUBLIC_SSO_ENABLED` | Authentification unique avec SAML/OIDC |
| `CREDENTIAL_SETS_ENABLED`, `NEXT_PUBLIC_CREDENTIAL_SETS_ENABLED` | Groupes de sondage pour les déclencheurs d'e-mail |
<Callout type="warn">
BYOK est uniquement disponible sur Sim Studio hébergé. Les déploiements auto-hébergés configurent les clés de fournisseur d'IA directement via les variables d'environnement.
</Callout>

View File

@@ -49,40 +49,40 @@ La répartition des modèles montre :
<Tabs items={['Modèles hébergés', 'Apportez votre propre clé API']}>
<Tab>
**Modèles hébergés** - Sim fournit des clés API avec un multiplicateur de prix de 2x :
**Modèles hébergés** - Sim fournit des clés API avec un multiplicateur de prix de 1,4x pour les blocs Agent :
**OpenAI**
| Modèle | Prix de base (entrée/sortie) | Prix hébergé (entrée/sortie) |
|-------|---------------------------|----------------------------|
| GPT-5.1 | 1,25 $ / 10,00 $ | 2,50 $ / 20,00 $ |
| GPT-5 | 1,25 $ / 10,00 $ | 2,50 $ / 20,00 $ |
| GPT-5 Mini | 0,25 $ / 2,00 $ | 0,50 $ / 4,00 $ |
| GPT-5 Nano | 0,05 $ / 0,40 $ | 0,10 $ / 0,80 $ |
| GPT-4o | 2,50 $ / 10,00 $ | 5,00 $ / 20,00 $ |
| GPT-4.1 | 2,00 $ / 8,00 $ | 4,00 $ / 16,00 $ |
| GPT-4.1 Mini | 0,40 $ / 1,60 $ | 0,80 $ / 3,20 $ |
| GPT-4.1 Nano | 0,10 $ / 0,40 $ | 0,20 $ / 0,80 $ |
| o1 | 15,00 $ / 60,00 $ | 30,00 $ / 120,00 $ |
| o3 | 2,00 $ / 8,00 $ | 4,00 $ / 16,00 $ |
| o4 Mini | 1,10 $ / 4,40 $ | 2,20 $ / 8,80 $ |
| GPT-5.1 | 1,25 $ / 10,00 $ | 1,75 $ / 14,00 $ |
| GPT-5 | 1,25 $ / 10,00 $ | 1,75 $ / 14,00 $ |
| GPT-5 Mini | 0,25 $ / 2,00 $ | 0,35 $ / 2,80 $ |
| GPT-5 Nano | 0,05 $ / 0,40 $ | 0,07 $ / 0,56 $ |
| GPT-4o | 2,50 $ / 10,00 $ | 3,50 $ / 14,00 $ |
| GPT-4.1 | 2,00 $ / 8,00 $ | 2,80 $ / 11,20 $ |
| GPT-4.1 Mini | 0,40 $ / 1,60 $ | 0,56 $ / 2,24 $ |
| GPT-4.1 Nano | 0,10 $ / 0,40 $ | 0,14 $ / 0,56 $ |
| o1 | 15,00 $ / 60,00 $ | 21,00 $ / 84,00 $ |
| o3 | 2,00 $ / 8,00 $ | 2,80 $ / 11,20 $ |
| o4 Mini | 1,10 $ / 4,40 $ | 1,54 $ / 6,16 $ |
**Anthropic**
| Modèle | Prix de base (entrée/sortie) | Prix hébergé (entrée/sortie) |
|-------|---------------------------|----------------------------|
| Claude Opus 4.5 | 5,00 $ / 25,00 $ | 10,00 $ / 50,00 $ |
| Claude Opus 4.1 | 15,00 $ / 75,00 $ | 30,00 $ / 150,00 $ |
| Claude Sonnet 4.5 | 3,00 $ / 15,00 $ | 6,00 $ / 30,00 $ |
| Claude Sonnet 4.0 | 3,00 $ / 15,00 $ | 6,00 $ / 30,00 $ |
| Claude Haiku 4.5 | 1,00 $ / 5,00 $ | 2,00 $ / 10,00 $ |
| Claude Opus 4.5 | 5,00 $ / 25,00 $ | 7,00 $ / 35,00 $ |
| Claude Opus 4.1 | 15,00 $ / 75,00 $ | 21,00 $ / 105,00 $ |
| Claude Sonnet 4.5 | 3,00 $ / 15,00 $ | 4,20 $ / 21,00 $ |
| Claude Sonnet 4.0 | 3,00 $ / 15,00 $ | 4,20 $ / 21,00 $ |
| Claude Haiku 4.5 | 1,00 $ / 5,00 $ | 1,40 $ / 7,00 $ |
**Google**
| Modèle | Prix de base (entrée/sortie) | Prix hébergé (entrée/sortie) |
|-------|---------------------------|----------------------------|
| Gemini 3 Pro Preview | 2,00 $ / 12,00 $ | 4,00 $ / 24,00 $ |
| Gemini 2.5 Pro | 1,25 $ / 10,00 $ | 2,50 $ / 20,00 $ |
| Gemini 2.5 Flash | 0,30 $ / 2,50 $ | 0,60 $ / 5,00 $ |
| Gemini 3 Pro Preview | 2,00 $ / 12,00 $ | 2,80 $ / 16,80 $ |
| Gemini 2.5 Pro | 1,25 $ / 10,00 $ | 1,75 $ / 14,00 $ |
| Gemini 2.5 Flash | 0,30 $ / 2,50 $ | 0,42 $ / 3,50 $ |
*Le multiplicateur 2x couvre les coûts d'infrastructure et de gestion des API.*
*Le multiplicateur de 1,4x couvre les coûts d'infrastructure et de gestion des API.*
</Tab>
<Tab>

View File

@@ -17,11 +17,11 @@ Les serveurs MCP regroupent vos outils de workflow. Créez-les et gérez-les dan
<Video src="mcp/mcp-server.mp4" width={700} height={450} />
</div>
1. Accédez à **Paramètres → Serveurs MCP**
1. Accédez à **Paramètres → MCP déployés**
2. Cliquez sur **Créer un serveur**
3. Saisissez un nom et une description facultative
4. Copiez l'URL du serveur pour l'utiliser dans vos clients MCP
5. Consultez et gérez tous les outils ajoutés au serveur
5. Affichez et gérez tous les outils ajoutés au serveur
## Ajouter un workflow en tant qu'outil
@@ -79,7 +79,7 @@ Incluez votre en-tête de clé API (`X-API-Key`) pour un accès authentifié lor
## Gestion du serveur
Depuis la vue détaillée du serveur dans **Paramètres → Serveurs MCP**, vous pouvez :
Depuis la vue détaillée du serveur dans **Paramètres → MCP déployés**, vous pouvez :
- **Voir les outils** : voir tous les workflows ajoutés à un serveur
- **Copier l'URL** : obtenir l'URL du serveur pour les clients MCP

View File

@@ -28,7 +28,7 @@ Les serveurs MCP fournissent des collections d'outils que vos agents peuvent uti
</div>
1. Accédez aux paramètres de votre espace de travail
2. Allez à la section **Serveurs MCP**
2. Allez dans la section **MCP déployés**
3. Cliquez sur **Ajouter un serveur MCP**
4. Saisissez les détails de configuration du serveur
5. Enregistrez la configuration

View File

@@ -22,7 +22,7 @@ Utilisez le bloc Démarrer pour tout ce qui provient de l'éditeur, du déploiem
<Cards>
<Card title="Start" href="/triggers/start">
Point d'entrée unifié qui prend en charge les exécutions de l'éditeur, les déploiements d'API et les déploiements de chat
Point d'entrée unifié qui prend en charge les exécutions dans l'éditeur, les déploiements API et les déploiements de chat
</Card>
<Card title="Webhook" href="/triggers/webhook">
Recevoir des charges utiles de webhook externes
@@ -31,18 +31,22 @@ Utilisez le bloc Démarrer pour tout ce qui provient de l'éditeur, du déploiem
Exécution basée sur cron ou intervalle
</Card>
<Card title="RSS Feed" href="/triggers/rss">
Surveiller les flux RSS et Atom pour du nouveau contenu
Surveiller les flux RSS et Atom pour détecter du nouveau contenu
</Card>
<Card title="Email Polling Groups" href="#email-polling-groups">
Surveiller les boîtes de réception Gmail et Outlook de l'équipe
</Card>
</Cards>
## Comparaison rapide
| Déclencheur | Condition de démarrage |
|---------|-----------------|
| **Start** | Exécutions de l'éditeur, requêtes de déploiement d'API ou messages de chat |
|-------------|------------------------|
| **Start** | Exécutions dans l'éditeur, requêtes de déploiement vers l'API ou messages de chat |
| **Schedule** | Minuteur géré dans le bloc de planification |
| **Webhook** | Sur requête HTTP entrante |
| **Webhook** | Lors d'une requête HTTP entrante |
| **RSS Feed** | Nouvel élément publié dans le flux |
| **Email Polling Groups** | Nouvel e-mail reçu dans les boîtes de réception Gmail ou Outlook de l'équipe |
> Le bloc Démarrer expose toujours les champs `input`, `conversationId` et `files`. Ajoutez des champs personnalisés au format d'entrée pour des données structurées supplémentaires.
@@ -65,3 +69,25 @@ Lorsque vous cliquez sur **Exécuter** dans l'éditeur, Sim sélectionne automat
Si votre flux de travail comporte plusieurs déclencheurs, le déclencheur de priorité la plus élevée sera exécuté. Par exemple, si vous avez à la fois un bloc Démarrer et un déclencheur Webhook, cliquer sur Exécuter exécutera le bloc Démarrer.
**Déclencheurs externes avec charges utiles simulées** : lorsque des déclencheurs externes (webhooks et intégrations) sont exécutés manuellement, Sim génère automatiquement des charges utiles simulées basées sur la structure de données attendue du déclencheur. Cela garantit que les blocs en aval peuvent résoudre correctement les variables pendant les tests.
## Groupes de surveillance d'e-mails
Les groupes de surveillance vous permettent de surveiller les boîtes de réception Gmail ou Outlook de plusieurs membres de l'équipe avec un seul déclencheur. Nécessite un forfait Team ou Enterprise.
**Créer un groupe de surveillance** (Admin/Propriétaire)
1. Accédez à **Paramètres → Surveillance d'e-mails**
2. Cliquez sur **Créer** et choisissez Gmail ou Outlook
3. Entrez un nom pour le groupe
**Inviter des membres**
1. Cliquez sur **Ajouter des membres** dans votre groupe de surveillance
2. Entrez les adresses e-mail (séparées par des virgules ou des sauts de ligne, ou glissez-déposez un fichier CSV)
3. Cliquez sur **Envoyer les invitations**
Les personnes invitées reçoivent un e-mail avec un lien pour connecter leur compte. Une fois connectée, leur boîte de réception est automatiquement incluse dans le groupe de surveillance. Les personnes invitées n'ont pas besoin d'être membres de votre organisation Sim.
**Utiliser dans un workflow**
Lors de la configuration d'un déclencheur d'e-mail, sélectionnez votre groupe de surveillance dans le menu déroulant des identifiants au lieu d'un compte individuel. Le système crée des webhooks pour chaque membre et achemine tous les e-mails via votre workflow.

View File

@@ -0,0 +1,75 @@
---
title: エンタープライズ
description: 高度なセキュリティとコンプライアンス要件を持つ組織向けのエンタープライズ機能
---
import { Callout } from 'fumadocs-ui/components/callout'
Sim Studio Enterpriseは、強化されたセキュリティ、コンプライアンス、管理要件を持つ組織向けの高度な機能を提供します。
---
## Bring Your Own Key (BYOK)
Sim Studioのホストキーの代わりに、AIモデルプロバイダー用の独自のAPIキーを使用できます。
### 対応プロバイダー
| プロバイダー | 用途 |
|----------|-------|
| OpenAI | ナレッジベースの埋め込み、エージェントブロック |
| Anthropic | エージェントブロック |
| Google | エージェントブロック |
| Mistral | ナレッジベースOCR |
### セットアップ
1. ワークスペースの**設定** → **BYOK**に移動します
2. プロバイダーの**キーを追加**をクリックします
3. APIキーを入力して保存します
<Callout type="warn">
BYOKキーは保存時に暗号化されます。組織の管理者とオーナーのみがキーを管理できます。
</Callout>
設定すると、ワークフローはSim Studioのホストキーの代わりに独自のキーを使用します。削除すると、ワークフローは自動的にホストキーにフォールバックします。
---
## シングルサインオン (SSO)
集中型IDマネジメントのためのSAML 2.0およびOIDCサポートを備えたエンタープライズ認証。
### 対応プロバイダー
- Okta
- Azure AD / Entra ID
- Google Workspace
- OneLogin
- SAML 2.0またはOIDCに対応する任意のプロバイダー
### セットアップ
1. ワークスペースの**設定** → **SSO**に移動します
2. IDプロバイダーを選択します
3. IdPのメタデータを使用して接続を設定します
4. 組織のSSOを有効にします
<Callout type="info">
SSOを有効にすると、チームメンバーはメール/パスワードの代わりにIDプロバイダーを通じて認証します。
</Callout>
---
## セルフホスト
セルフホストデプロイメントの場合、エンタープライズ機能は環境変数を介して有効にできます:
| 変数 | 説明 |
|----------|-------------|
| `SSO_ENABLED`、`NEXT_PUBLIC_SSO_ENABLED` | SAML/OIDCによるシングルサインオン |
| `CREDENTIAL_SETS_ENABLED`、`NEXT_PUBLIC_CREDENTIAL_SETS_ENABLED` | メールトリガー用のポーリンググループ |
<Callout type="warn">
BYOKはホスト型Sim Studioでのみ利用可能です。セルフホスト型デプロイメントでは、環境変数を介してAIプロバイダーキーを直接設定します。
</Callout>

View File

@@ -47,42 +47,42 @@ AIブロックを使用するワークフローでは、ログで詳細なコス
## 料金オプション
<Tabs items={['Hosted Models', 'Bring Your Own API Key']}>
<Tabs items={['ホステッドモデル', '独自のAPIキーを使用']}>
<Tab>
**ホステッドモデル** - Simは2倍の価格乗数APIキーを提供します
**ホステッドモデル** - Simは、エージェントブロック用に1.4倍の価格乗数を適用したAPIキーを提供します:
**OpenAI**
| モデル | 基本価格(入力/出力) | ホステッド価格(入力/出力) |
|-------|---------------------------|----------------------------|
| GPT-5.1 | $1.25 / $10.00 | $2.50 / $20.00 |
| GPT-5 | $1.25 / $10.00 | $2.50 / $20.00 |
| GPT-5 Mini | $0.25 / $2.00 | $0.50 / $4.00 |
| GPT-5 Nano | $0.05 / $0.40 | $0.10 / $0.80 |
| GPT-4o | $2.50 / $10.00 | $5.00 / $20.00 |
| GPT-4.1 | $2.00 / $8.00 | $4.00 / $16.00 |
| GPT-4.1 Mini | $0.40 / $1.60 | $0.80 / $3.20 |
| GPT-4.1 Nano | $0.10 / $0.40 | $0.20 / $0.80 |
| o1 | $15.00 / $60.00 | $30.00 / $120.00 |
| o3 | $2.00 / $8.00 | $4.00 / $16.00 |
| o4 Mini | $1.10 / $4.40 | $2.20 / $8.80 |
| GPT-5.1 | $1.25 / $10.00 | $1.75 / $14.00 |
| GPT-5 | $1.25 / $10.00 | $1.75 / $14.00 |
| GPT-5 Mini | $0.25 / $2.00 | $0.35 / $2.80 |
| GPT-5 Nano | $0.05 / $0.40 | $0.07 / $0.56 |
| GPT-4o | $2.50 / $10.00 | $3.50 / $14.00 |
| GPT-4.1 | $2.00 / $8.00 | $2.80 / $11.20 |
| GPT-4.1 Mini | $0.40 / $1.60 | $0.56 / $2.24 |
| GPT-4.1 Nano | $0.10 / $0.40 | $0.14 / $0.56 |
| o1 | $15.00 / $60.00 | $21.00 / $84.00 |
| o3 | $2.00 / $8.00 | $2.80 / $11.20 |
| o4 Mini | $1.10 / $4.40 | $1.54 / $6.16 |
**Anthropic**
| モデル | 基本価格(入力/出力) | ホステッド価格(入力/出力) |
|-------|---------------------------|----------------------------|
| Claude Opus 4.5 | $5.00 / $25.00 | $10.00 / $50.00 |
| Claude Opus 4.1 | $15.00 / $75.00 | $30.00 / $150.00 |
| Claude Sonnet 4.5 | $3.00 / $15.00 | $6.00 / $30.00 |
| Claude Sonnet 4.0 | $3.00 / $15.00 | $6.00 / $30.00 |
| Claude Haiku 4.5 | $1.00 / $5.00 | $2.00 / $10.00 |
| Claude Opus 4.5 | $5.00 / $25.00 | $7.00 / $35.00 |
| Claude Opus 4.1 | $15.00 / $75.00 | $21.00 / $105.00 |
| Claude Sonnet 4.5 | $3.00 / $15.00 | $4.20 / $21.00 |
| Claude Sonnet 4.0 | $3.00 / $15.00 | $4.20 / $21.00 |
| Claude Haiku 4.5 | $1.00 / $5.00 | $1.40 / $7.00 |
**Google**
| モデル | 基本価格(入力/出力) | ホステッド価格(入力/出力) |
|-------|---------------------------|----------------------------|
| Gemini 3 Pro Preview | $2.00 / $12.00 | $4.00 / $24.00 |
| Gemini 2.5 Pro | $1.25 / $10.00 | $2.50 / $20.00 |
| Gemini 2.5 Flash | $0.30 / $2.50 | $0.60 / $5.00 |
| Gemini 3 Pro Preview | $2.00 / $12.00 | $2.80 / $16.80 |
| Gemini 2.5 Pro | $1.25 / $10.00 | $1.75 / $14.00 |
| Gemini 2.5 Flash | $0.30 / $2.50 | $0.42 / $3.50 |
*2倍の乗数は、インフラストラクチャとAPI管理コストをカバーします。*
*1.4倍の乗数は、インフラストラクチャとAPI管理コストをカバーします。*
</Tab>
<Tab>

View File

@@ -16,11 +16,11 @@ MCPサーバーは、ワークフローツールをまとめてグループ化
<Video src="mcp/mcp-server.mp4" width={700} height={450} />
</div>
1. **設定 → MCPサーバー**に移動
2. **サーバーを作成**をクリック
3. 名前と説明(任意)を入力
4. MCPクライアントで使用するためにサーバーURLをコピー
5. サーバーに追加されたすべてのツールを表示・管理
1. **設定 → デプロイ済みMCP**に移動します
2. **サーバーを作成**をクリックします
3. 名前とオプションの説明を入力します
4. MCPクライアントで使用するためにサーバーURLをコピーします
5. サーバーに追加されたすべてのツールを表示および管理します
## ワークフローをツールとして追加
@@ -78,7 +78,7 @@ mcp-remoteまたは他のHTTPベースのMCPトランスポートを使用する
## サーバー管理
**設定 → MCPサーバー**のサーバー詳細ビューから、以下の操作が可能です:
**設定 → デプロイ済みMCP**のサーバー詳細ビューから、次のことができます:
- **ツールを表示**: サーバーに追加されたすべてのワークフローを確認
- **URLをコピー**: MCPクライアント用のサーバーURLを取得

View File

@@ -27,10 +27,10 @@ MCPサーバーはエージェントが使用できるツールのコレクシ
</div>
1. ワークスペース設定に移動します
2. **MCPサーバー**セクションに進みます
2. **デプロイ済みMCP**セクションに移動します
3. **MCPサーバーを追加**をクリックします
4. サーバー構成の詳細を入力します
5. 構成を保存します
4. サーバー設定の詳細を入力します
5. 設定を保存します
<Callout type="info">
エージェントブロックのツールバーから直接MCPサーバーを構成することもできます(クイックセットアップ)。

View File

@@ -22,16 +22,19 @@ import { Image } from '@/components/ui/image'
<Cards>
<Card title="Start" href="/triggers/start">
エディタ実行、APIデプロイメント、チャットデプロイメントをサポートする統合エントリーポイント
エディタ実行、APIデプロイ、チャットデプロイをサポートする統合エントリーポイント
</Card>
<Card title="Webhook" href="/triggers/webhook">
外部のwebhookペイロードを受信
外部Webhookペイロードを受信
</Card>
<Card title="Schedule" href="/triggers/schedule">
Cronまたは間隔ベースの実行
Cronまたはインターバルベースの実行
</Card>
<Card title="RSS Feed" href="/triggers/rss">
新しいコンテンツのRSSとAtomフィードを監視
RSSおよびAtomフィードの新しいコンテンツを監視
</Card>
<Card title="Email Polling Groups" href="#email-polling-groups">
チームのGmailおよびOutlook受信トレイを監視
</Card>
</Cards>
@@ -39,10 +42,11 @@ import { Image } from '@/components/ui/image'
| トリガー | 開始条件 |
|---------|-----------------|
| **Start** | エディタ実行、APIへのデプロイリクエスト、またはチャットメッセージ |
| **Start** | エディタ実行、deploy-to-APIリクエスト、またはチャットメッセージ |
| **Schedule** | スケジュールブロックで管理されるタイマー |
| **Webhook** | 受信HTTPリクエスト時 |
| **Webhook** | インバウンドHTTPリクエスト時 |
| **RSS Feed** | フィードに新しいアイテムが公開された時 |
| **Email Polling Groups** | チームのGmailまたはOutlook受信トレイに新しいメールが受信された時 |
> スタートブロックは常に `input`、`conversationId`、および `files` フィールドを公開します。追加の構造化データには入力フォーマットにカスタムフィールドを追加してください。
@@ -65,3 +69,25 @@ import { Image } from '@/components/ui/image'
ワークフローに複数のトリガーがある場合、最も優先度の高いトリガーが実行されます。例えば、スタートブロックとウェブフックトリガーの両方がある場合、実行をクリックするとスタートブロックが実行されます。
**モックペイロードを持つ外部トリガー**: 外部トリガー(ウェブフックと連携)が手動で実行される場合、Simはトリガーの予想されるデータ構造に基づいてモックペイロードを自動生成します。これにより、テスト中に下流のブロックが変数を正しく解決できるようになります。
## Email Polling Groups
Polling Groupsを使用すると、単一のトリガーで複数のチームメンバーのGmailまたはOutlook受信トレイを監視できます。TeamまたはEnterpriseプランが必要です。
**Polling Groupの作成**(管理者/オーナー)
1. **設定 → Email Polling**に移動
2. **作成**をクリックし、GmailまたはOutlookを選択
3. グループの名前を入力
**メンバーの招待**
1. Polling Groupの**メンバーを追加**をクリック
2. メールアドレスを入力(カンマまたは改行で区切る、またはCSVをドラッグ&ドロップ)
3. **招待を送信**をクリック
招待された人は、アカウントを接続するためのリンクが記載されたメールを受信します。接続されると、その受信トレイは自動的にPolling Groupに含まれます。招待された人は、Sim組織のメンバーである必要はありません。
**ワークフローでの使用**
メールトリガーを設定する際、個別のアカウントではなく、認証情報ドロップダウンからPolling Groupを選択します。システムは各メンバーのWebhookを作成し、すべてのメールをワークフローを通じてルーティングします。

View File

@@ -0,0 +1,75 @@
---
title: 企业版
description: 为具有高级安全性和合规性需求的组织提供企业级功能
---
import { Callout } from 'fumadocs-ui/components/callout'
Sim Studio 企业版为需要更高安全性、合规性和管理能力的组织提供高级功能。
---
## 自带密钥(BYOK)
使用您自己的 API 密钥对接 AI 模型服务商,而不是使用 Sim Studio 托管的密钥。
### 支持的服务商
| Provider | Usage |
|----------|-------|
| OpenAI | 知识库嵌入、Agent 模块 |
| Anthropic | Agent 模块 |
| Google | Agent 模块 |
| Mistral | 知识库 OCR |
### 配置方法
1. 在您的工作区进入 **设置** → **BYOK**
2. 为您的服务商点击 **添加密钥**
3. 输入您的 API 密钥并保存
<Callout type="warn">
BYOK 密钥静态加密存储。仅组织管理员和所有者可管理密钥。
</Callout>
配置后,工作流将使用您的密钥而非 Sim Studio 托管密钥。如移除,工作流会自动切换回托管密钥。
---
## 单点登录(SSO)
企业级身份认证,支持 SAML 2.0 和 OIDC,实现集中式身份管理。
### 支持的服务商
- Okta
- Azure AD / Entra ID
- Google Workspace
- OneLogin
- 任何 SAML 2.0 或 OIDC 服务商
### 配置方法
1. 在您的工作区进入 **设置** → **SSO**
2. 选择您的身份提供商
3. 使用 IdP 元数据配置连接
4. 为您的组织启用 SSO
<Callout type="info">
启用 SSO 后,团队成员将通过您的身份提供商进行身份验证,而不再使用邮箱/密码。
</Callout>
---
## 自主部署
对于自主部署场景,可通过环境变量启用企业功能:
| 变量 | 描述 |
|----------|-------------|
| `SSO_ENABLED`、`NEXT_PUBLIC_SSO_ENABLED` | 使用 SAML/OIDC 的单点登录 |
| `CREDENTIAL_SETS_ENABLED`、`NEXT_PUBLIC_CREDENTIAL_SETS_ENABLED` | 用于邮件触发器的轮询组 |
<Callout type="warn">
BYOK 仅适用于托管版 Sim Studio。自托管部署需通过环境变量直接配置 AI 提供商密钥。
</Callout>

View File

@@ -47,42 +47,42 @@ totalCost = baseExecutionCharge + modelCost
## 定价选项
<Tabs items={[ '托管模型', '自带 API 密钥' ]}>
<Tabs items={['托管模型', '自带 API Key']}>
<Tab>
**托管模型** - Sim 提供 API 密钥,价格为基础价格的 2 倍:
**托管模型** - Sim 为 Agent 模块提供 API Key,价格乘以 1.4 倍:
**OpenAI**
| 模型 | 基础价格(输入/输出) | 托管价格(输入/输出) |
|-------|---------------------------|----------------------------|
| GPT-5.1 | $1.25 / $10.00 | $2.50 / $20.00 |
| GPT-5 | $1.25 / $10.00 | $2.50 / $20.00 |
| GPT-5 Mini | $0.25 / $2.00 | $0.50 / $4.00 |
| GPT-5 Nano | $0.05 / $0.40 | $0.10 / $0.80 |
| GPT-4o | $2.50 / $10.00 | $5.00 / $20.00 |
| GPT-4.1 | $2.00 / $8.00 | $4.00 / $16.00 |
| GPT-4.1 Mini | $0.40 / $1.60 | $0.80 / $3.20 |
| GPT-4.1 Nano | $0.10 / $0.40 | $0.20 / $0.80 |
| o1 | $15.00 / $60.00 | $30.00 / $120.00 |
| o3 | $2.00 / $8.00 | $4.00 / $16.00 |
| o4 Mini | $1.10 / $4.40 | $2.20 / $8.80 |
| GPT-5.1 | $1.25 / $10.00 | $1.75 / $14.00 |
| GPT-5 | $1.25 / $10.00 | $1.75 / $14.00 |
| GPT-5 Mini | $0.25 / $2.00 | $0.35 / $2.80 |
| GPT-5 Nano | $0.05 / $0.40 | $0.07 / $0.56 |
| GPT-4o | $2.50 / $10.00 | $3.50 / $14.00 |
| GPT-4.1 | $2.00 / $8.00 | $2.80 / $11.20 |
| GPT-4.1 Mini | $0.40 / $1.60 | $0.56 / $2.24 |
| GPT-4.1 Nano | $0.10 / $0.40 | $0.14 / $0.56 |
| o1 | $15.00 / $60.00 | $21.00 / $84.00 |
| o3 | $2.00 / $8.00 | $2.80 / $11.20 |
| o4 Mini | $1.10 / $4.40 | $1.54 / $6.16 |
**Anthropic**
| 模型 | 基础价格(输入/输出) | 托管价格(输入/输出) |
|-------|---------------------------|----------------------------|
| Claude Opus 4.5 | $5.00 / $25.00 | $10.00 / $50.00 |
| Claude Opus 4.1 | $15.00 / $75.00 | $30.00 / $150.00 |
| Claude Sonnet 4.5 | $3.00 / $15.00 | $6.00 / $30.00 |
| Claude Sonnet 4.0 | $3.00 / $15.00 | $6.00 / $30.00 |
| Claude Haiku 4.5 | $1.00 / $5.00 | $2.00 / $10.00 |
| Claude Opus 4.5 | $5.00 / $25.00 | $7.00 / $35.00 |
| Claude Opus 4.1 | $15.00 / $75.00 | $21.00 / $105.00 |
| Claude Sonnet 4.5 | $3.00 / $15.00 | $4.20 / $21.00 |
| Claude Sonnet 4.0 | $3.00 / $15.00 | $4.20 / $21.00 |
| Claude Haiku 4.5 | $1.00 / $5.00 | $1.40 / $7.00 |
**Google**
| 模型 | 基础价格(输入/输出) | 托管价格(输入/输出) |
|-------|---------------------------|----------------------------|
| Gemini 3 Pro Preview | $2.00 / $12.00 | $4.00 / $24.00 |
| Gemini 2.5 Pro | $1.25 / $10.00 | $2.50 / $20.00 |
| Gemini 2.5 Flash | $0.30 / $2.50 | $0.60 / $5.00 |
| Gemini 3 Pro Preview | $2.00 / $12.00 | $2.80 / $16.80 |
| Gemini 2.5 Pro | $1.25 / $10.00 | $1.75 / $14.00 |
| Gemini 2.5 Flash | $0.30 / $2.50 | $0.42 / $3.50 |
*2 倍系数涵盖了基础设施和 API 管理成本。*
*1.4 倍系数涵盖了基础设施和 API 管理成本。*
</Tab>
<Tab>

View File

@@ -16,11 +16,11 @@ MCP 服务器用于将您的工作流工具进行分组。您可以在工作区
<Video src="mcp/mcp-server.mp4" width={700} height={450} />
</div>
1. 进入 **设置 → MCP 服务器**
1. 进入 **设置 → 已部署的 MCPs**
2. 点击 **创建服务器**
3. 输入名称和可选描述
4. 复制服务器 URL 以在您的 MCP 客户端中使用
5. 查看并管理已添加到服务器的所有工具
4. 复制服务器 URL 以在您的 MCP 客户端中使用
5. 查看并管理已添加到服务器的所有工具
## 添加工作流为工具
@@ -78,7 +78,7 @@ MCP 服务器用于将您的工作流工具进行分组。您可以在工作区
## 服务器管理
在 **设置 → MCP 服务器** 的服务器详情视图中,您可以:
在 **设置 → 已部署的 MCPs** 的服务器详情页,你可以:
- **查看工具**:查看添加到服务器的所有工作流
- **复制 URL**:获取 MCP 客户端的服务器 URL

View File

@@ -27,9 +27,9 @@ MCP 服务器提供工具集合,供您的代理使用。您可以在工作区
</div>
1. 进入您的工作区设置
2. 转到 **MCP 服务器** 部分
3. 点击 **添加 MCP 服务器**
4. 输入服务器配置详情
2. 前往 **Deployed MCPs** 部分
3. 点击 **Add MCP Server**
4. 输入服务器配置信息
5. 保存配置
<Callout type="info">

View File

@@ -21,17 +21,20 @@ import { Image } from '@/components/ui/image'
使用 Start 块处理从编辑器、部署到 API 或部署到聊天的所有操作。其他触发器可用于事件驱动的工作流:
<Cards>
<Card title="开始" href="/triggers/start">
支持编辑器运行、API 部署和聊天部署的统一入口
<Card title="Start" href="/triggers/start">
支持编辑器运行、API 部署和聊天部署的统一入口
</Card>
<Card title="Webhook" href="/triggers/webhook">
接收外部 webhook 负载
</Card>
<Card title="计划" href="/triggers/schedule">
基于 Cron 或间隔的执行
<Card title="Schedule" href="/triggers/schedule">
基于 cron 或间隔的执行
</Card>
<Card title="RSS " href="/triggers/rss">
监控 RSS 和 Atom 源的新内容
<Card title="RSS Feed" href="/triggers/rss">
监控 RSS 和 Atom 订阅源的新内容
</Card>
<Card title="Email Polling Groups" href="#email-polling-groups">
监控团队 Gmail 和 Outlook 收件箱
</Card>
</Cards>
@@ -39,10 +42,11 @@ import { Image } from '@/components/ui/image'
| 触发器 | 启动条件 |
|---------|-----------------|
| **开始** | 编辑器运行、部署到 API 请求或聊天消息 |
| **计划** | 在计划块中管理的时器 |
| **Start** | 编辑器运行、API 部署请求或聊天消息 |
| **Schedule** | 在 schedule 块中管理的计时器 |
| **Webhook** | 收到入站 HTTP 请求时 |
| **RSS ** | 源中发布了新项目 |
| **RSS Feed** | 订阅源中有新内容发布时 |
| **Email Polling Groups** | 团队 Gmail 或 Outlook 收件箱收到新邮件时 |
> Start 块始终公开 `input`、`conversationId` 和 `files` 字段。通过向输入格式添加自定义字段来增加结构化数据。
@@ -65,3 +69,25 @@ import { Image } from '@/components/ui/image'
如果您的工作流有多个触发器,将执行优先级最高的触发器。例如,如果您同时有 Start 块和 Webhook 触发器,点击运行将执行 Start 块。
**带有模拟负载的外部触发器**:当手动执行外部触发器(如 webhooks 和集成)时,Sim 会根据触发器的预期数据结构自动生成模拟负载。这确保了在测试过程中,下游模块可以正确解析变量。
## 邮件轮询组
轮询组可让你通过单一触发器监控多个团队成员的 Gmail 或 Outlook 收件箱。需要 Team 或 Enterprise 方案。
**创建轮询组**(管理员/所有者)
1. 前往 **设置 → 邮件轮询**
2. 点击 **创建**,选择 Gmail 或 Outlook
3. 输入组名
**邀请成员**
1. 在你的轮询组中点击 **添加成员**
2. 输入邮箱地址(用逗号或换行分隔,或拖拽 CSV 文件)
3. 点击 **发送邀请**
受邀者会收到一封带有连接账户链接的邮件。连接后,他们的收件箱会自动加入轮询组。受邀者无需成为你的 Sim 组织成员。
**在工作流中使用**
配置邮件触发器时,从凭据下拉菜单中选择你的轮询组,而不是单独账户。系统会为每位成员创建 webhook并将所有邮件通过你的工作流进行处理。

View File

@@ -4343,7 +4343,7 @@ checksums:
content/5: 6eee8c607e72b6c444d7b3ef07244f20
content/6: 747991e0e80e306dce1061ef7802db2a
content/7: 430153eacb29c66026cf71944df7be20
content/8: 5950966e19939b7a3a320d56ee4a674c
content/8: f9bdeac954d1d138c954c151db0403ec
content/9: 159cf7a6d62e64b0c5db27e73b8c1ff5
content/10: a723187777f9a848d4daa563e9dcbe17
content/11: b1c5f14e5290bcbbf5d590361ee7c053
@@ -4581,11 +4581,11 @@ checksums:
content/10: d19c8c67f52eb08b6a49c0969a9c8b86
content/11: 4024a36e0d9479ff3191fb9cd2b2e365
content/12: 0396a1e5d9548207f56e6b6cae85a542
content/13: 4bfdeac5ad21c75209dcdfde85aa52b0
content/14: 35df9a16b866dbe4bb9fc1d7aee42711
content/15: 135c044066cea8cc0e22f06d67754ec5
content/16: 6882b91e30548d7d331388c26cf2e948
content/17: 29aed7061148ae46fa6ec8bcbc857c3d
content/13: 68f90237f86be125224c56a2643904a3
content/14: e854781f0fbf6f397a3ac682e892a993
content/15: 2340c44af715fb8ca58f43151515aae1
content/16: fc7ae93bff492d80f4b6f16e762e05fa
content/17: 8a46692d5df3fed9f94d59dfc3fb7e0a
content/18: e0571c88ea5bcd4305a6f5772dcbed98
content/19: 83fc31418ff454a5e06b290e3708ef32
content/20: 4392b5939a6d5774fb080cad1ee1dbb8
@@ -5789,9 +5789,9 @@ checksums:
content/1: e71056df0f7b2eb3b2f271f21d0052cc
content/2: da2b445db16c149f56558a4ea876a5f0
content/3: cec18f48b2cd7974eb556880e6604f7f
content/4: b200402d6a01ab565fd56d113c530ef6
content/4: cff35e4208de8f6ef36a6eae79915fab
content/5: 4c3a5708af82c1ee42a12d14fd34e950
content/6: 64fbd5b16f4cff18ba976492a275c05e
content/6: 00a9f255e60b5979014694b0c2a3ba26
content/7: a28151eeb5ba3518b33809055b04f0f6
content/8: cffe5b901d78ebf2000d07dc7579533e
content/9: 73486253d24eeff7ac44dfd0c8868d87
@@ -5801,6 +5801,15 @@ checksums:
content/13: e5ca2445d3b69b062af5bf0a2988e760
content/14: 67e0b520d57e352689789eff5803ebbc
content/15: a1d7382600994068ca24dc03f46b7c73
content/16: 1895a0c773fddeb014c7aab468593b30
content/17: 5b478d664a0b1bc76f19516b2a6e2788
content/18: c97883b63e5e455cd2de51f0406f963f
content/19: 2ff6c01b8eebbdd653d864b105f53cde
content/20: 523b34e945343591d1df51a6ba6357dd
content/21: e6611cff00c91bd2327660aebf9418f4
content/22: 87e7e7df71f0883369e8abda30289c0f
content/23: b248d9eda347cfb122101a4e4b5eaa53
content/24: 2f003723d891d6c53c398b86c7397577
0bf172ef4ee9a2c94a2967d7d320b81b:
meta/title: 330265974a03ee22a09f42fa4ece25f6
meta/description: e3d54cbedf551315cf9e8749228c2d1c
@@ -50141,7 +50150,7 @@ checksums:
content/2: b082096b0c871b2a40418e479af6f158
content/3: 9c94aa34f44540b0632931a8244a6488
content/4: 14f33e16b5a98e4dbdda2a27aa0d7afb
content/5: d7b36732970b7649dd1aa1f1d0a34e74
content/5: 3ea8bad9314f442a69a87f313419ef1a
content/6: f554f833467a6dae5391372fc41dad53
content/7: 9cdb9189ecfcc4a6f567d3fd5fe342f0
content/8: 9a107692cb52c284c1cb022b516d700b
@@ -50158,7 +50167,7 @@ checksums:
content/19: a618fcff50c4856113428639359a922b
content/20: 5fd3a6d2dcd8aa18dbf0b784acaa271c
content/21: d118656dd565c4c22f3c0c3a7c7f3bee
content/22: f49b9be78f1e7a569e290acc1365d417
content/22: c161e7bcfba9cf6ef0ab8ef40ac0c17a
content/23: 0a70ebe6eb4c543c3810977ed46b69b0
content/24: ad8638a3473c909dbcb1e1d9f4f26381
content/25: 95343a9f81cd050d3713988c677c750f
@@ -50299,3 +50308,30 @@ checksums:
content/68: ba6b5020ed971cd7ffc7f0423650dfbf
content/69: b3f310d5ef115bea5a8b75bf25d7ea9a
content/70: 0362be478aa7ba4b6d1ebde0bd83e83a
f5bc5f89ed66818f4c485c554bf26eea:
meta/title: c70474271708e5b27392fde87462fa26
meta/description: 7b47db7fbb818c180b99354b912a72b3
content/0: 232be69c8f3053a40f695f9c9dcb3f2e
content/1: a4a62a6e782e18bd863546dfcf2aec1c
content/2: 51adf33450cab2ef392e93147386647c
content/3: ada515cf6e2e0f9d3f57f720f79699d3
content/4: d5e8b9f64d855675588845dc4124c491
content/5: 3acf1f0551f6097ca6159e66f5c8da1a
content/6: 6a6e277ded1a063ec2c2067abb519088
content/7: 6debcd334c3310480cbe6feab87f37b5
content/8: 0e3372052a2b3a1c43d853d6ed269d69
content/9: 90063613714128f4e61e9588e2d2c735
content/10: 182154179fe2a8b6b73fde0d04e0bf4c
content/11: 51adf33450cab2ef392e93147386647c
content/12: 73c3e8a5d36d6868fdb455fcb3d6074c
content/13: 30cd8f1d6197bce560a091ba19d0392a
content/14: 3acf1f0551f6097ca6159e66f5c8da1a
content/15: 997deef758698d207be9382c45301ad6
content/16: 6debcd334c3310480cbe6feab87f37b5
content/17: e26c8c2dffd70baef0253720c1511886
content/18: a99eba53979531f1c974cf653c346909
content/19: 51adf33450cab2ef392e93147386647c
content/20: ca3ec889fb218b8b130959ff04baa659
content/21: 306617201cf63b42f09bb72c9722e048
content/22: 4b48ba3f10b043f74b70edeb4ad87080
content/23: c8531bd570711abc1963d8b5dcf9deef

Binary file not shown.

Before: 33 KiB

After: 11 KiB

View File

@@ -0,0 +1,100 @@
'use client'
import { forwardRef, useState } from 'react'
import { ArrowRight, ChevronRight, Loader2 } from 'lucide-react'
import { Button, type ButtonProps as EmcnButtonProps } from '@/components/emcn'
import { cn } from '@/lib/core/utils/cn'
import { useBrandedButtonClass } from '@/hooks/use-branded-button-class'
export interface BrandedButtonProps extends Omit<EmcnButtonProps, 'variant' | 'size'> {
/** Shows loading spinner and disables button */
loading?: boolean
/** Text to show when loading (appends "..." automatically) */
loadingText?: string
/** Show arrow animation on hover (default: true) */
showArrow?: boolean
/** Make button full width (default: true) */
fullWidth?: boolean
}
/**
* Branded button for auth and status pages.
* Automatically detects whitelabel customization and applies appropriate styling.
*
* @example
* ```tsx
* // Primary branded button with arrow
* <BrandedButton onClick={handleSubmit}>Sign In</BrandedButton>
*
* // Loading state
* <BrandedButton loading loadingText="Signing in">Sign In</BrandedButton>
*
* // Without arrow animation
* <BrandedButton showArrow={false}>Continue</BrandedButton>
* ```
*/
export const BrandedButton = forwardRef<HTMLButtonElement, BrandedButtonProps>(
(
{
children,
loading = false,
loadingText,
showArrow = true,
fullWidth = true,
className,
disabled,
onMouseEnter,
onMouseLeave,
...props
},
ref
) => {
const buttonClass = useBrandedButtonClass()
const [isHovered, setIsHovered] = useState(false)
const handleMouseEnter = (e: React.MouseEvent<HTMLButtonElement>) => {
setIsHovered(true)
onMouseEnter?.(e)
}
const handleMouseLeave = (e: React.MouseEvent<HTMLButtonElement>) => {
setIsHovered(false)
onMouseLeave?.(e)
}
return (
<Button
ref={ref}
variant='branded'
size='branded'
disabled={disabled || loading}
onMouseEnter={handleMouseEnter}
onMouseLeave={handleMouseLeave}
className={cn(buttonClass, 'group', fullWidth && 'w-full', className)}
{...props}
>
{loading ? (
<span className='flex items-center gap-2'>
<Loader2 className='h-4 w-4 animate-spin' />
{loadingText ? `${loadingText}...` : children}
</span>
) : showArrow ? (
<span className='flex items-center gap-1'>
{children}
<span className='inline-flex transition-transform duration-200 group-hover:translate-x-0.5'>
{isHovered ? (
<ArrowRight className='h-4 w-4' aria-hidden='true' />
) : (
<ChevronRight className='h-4 w-4' aria-hidden='true' />
)}
</span>
</span>
) : (
children
)}
</Button>
)
}
)
BrandedButton.displayName = 'BrandedButton'

View File

@@ -34,7 +34,7 @@ export function SSOLoginButton({
}
const primaryBtnClasses = cn(
primaryClassName || 'auth-button-gradient',
primaryClassName || 'branded-button-gradient',
'flex w-full items-center justify-center gap-2 rounded-[10px] border font-medium text-[15px] text-white transition-all duration-200'
)

View File

@@ -0,0 +1,74 @@
'use client'
import type { ReactNode } from 'react'
import { inter } from '@/app/_styles/fonts/inter/inter'
import { soehne } from '@/app/_styles/fonts/soehne/soehne'
import AuthBackground from '@/app/(auth)/components/auth-background'
import Nav from '@/app/(landing)/components/nav/nav'
import { SupportFooter } from './support-footer'
export interface StatusPageLayoutProps {
/** Page title displayed prominently */
title: string
/** Description text below the title */
description: string | ReactNode
/** Content to render below the title/description (usually buttons) */
children?: ReactNode
/** Whether to show the support footer (default: true) */
showSupportFooter?: boolean
/** Whether to hide the nav bar (useful for embedded forms) */
hideNav?: boolean
}
/**
* Unified layout for status/error pages (404, form unavailable, chat error, etc.).
* Uses AuthBackground and Nav for consistent styling with auth pages.
*
* @example
* ```tsx
* <StatusPageLayout
* title="Page Not Found"
* description="The page you're looking for doesn't exist."
* >
* <BrandedButton onClick={() => router.push('/')}>Return to Home</BrandedButton>
* </StatusPageLayout>
* ```
*/
export function StatusPageLayout({
title,
description,
children,
showSupportFooter = true,
hideNav = false,
}: StatusPageLayoutProps) {
return (
<AuthBackground>
<main className='relative flex min-h-screen flex-col text-foreground'>
{!hideNav && <Nav hideAuthButtons={true} variant='auth' />}
<div className='relative z-30 flex flex-1 items-center justify-center px-4 pb-24'>
<div className='w-full max-w-lg px-4'>
<div className='flex flex-col items-center justify-center'>
<div className='space-y-1 text-center'>
<h1
className={`${soehne.className} font-medium text-[32px] text-black tracking-tight`}
>
{title}
</h1>
<p className={`${inter.className} font-[380] text-[16px] text-muted-foreground`}>
{description}
</p>
</div>
{children && (
<div className={`${inter.className} mt-8 w-full max-w-[410px] space-y-3`}>
{children}
</div>
)}
</div>
</div>
</div>
{showSupportFooter && <SupportFooter position='absolute' />}
</main>
</AuthBackground>
)
}

View File

@@ -0,0 +1,40 @@
'use client'
import { useBrandConfig } from '@/lib/branding/branding'
import { inter } from '@/app/_styles/fonts/inter/inter'
export interface SupportFooterProps {
/** Position style - 'fixed' for pages without AuthLayout, 'absolute' for pages with AuthLayout */
position?: 'fixed' | 'absolute'
}
/**
* Support footer component for auth and status pages.
* Displays a "Need help? Contact support" link using branded support email.
*
* @example
* ```tsx
* // Fixed position (for standalone pages)
* <SupportFooter />
*
* // Absolute position (for pages using AuthLayout)
* <SupportFooter position="absolute" />
* ```
*/
export function SupportFooter({ position = 'fixed' }: SupportFooterProps) {
const brandConfig = useBrandConfig()
return (
<div
className={`${inter.className} auth-text-muted right-0 bottom-0 left-0 z-50 pb-8 text-center font-[340] text-[13px] leading-relaxed ${position}`}
>
Need help?{' '}
<a
href={`mailto:${brandConfig.supportEmail}`}
className='auth-link underline-offset-4 transition hover:underline'
>
Contact support
</a>
</div>
)
}

View File

@@ -105,7 +105,7 @@ export default function LoginPage({
const [password, setPassword] = useState('')
const [passwordErrors, setPasswordErrors] = useState<string[]>([])
const [showValidationError, setShowValidationError] = useState(false)
const [buttonClass, setButtonClass] = useState('auth-button-gradient')
const [buttonClass, setButtonClass] = useState('branded-button-gradient')
const [isButtonHovered, setIsButtonHovered] = useState(false)
const [callbackUrl, setCallbackUrl] = useState('/workspace')
@@ -146,9 +146,9 @@ export default function LoginPage({
const brandAccent = computedStyle.getPropertyValue('--brand-accent-hex').trim()
if (brandAccent && brandAccent !== '#6f3dfa') {
setButtonClass('auth-button-custom')
setButtonClass('branded-button-custom')
} else {
setButtonClass('auth-button-gradient')
setButtonClass('branded-button-gradient')
}
}

View File

@@ -27,7 +27,7 @@ export function RequestResetForm({
statusMessage,
className,
}: RequestResetFormProps) {
const [buttonClass, setButtonClass] = useState('auth-button-gradient')
const [buttonClass, setButtonClass] = useState('branded-button-gradient')
const [isButtonHovered, setIsButtonHovered] = useState(false)
useEffect(() => {
@@ -36,9 +36,9 @@ export function RequestResetForm({
const brandAccent = computedStyle.getPropertyValue('--brand-accent-hex').trim()
if (brandAccent && brandAccent !== '#6f3dfa') {
setButtonClass('auth-button-custom')
setButtonClass('branded-button-custom')
} else {
setButtonClass('auth-button-gradient')
setButtonClass('branded-button-gradient')
}
}
@@ -138,7 +138,7 @@ export function SetNewPasswordForm({
const [validationMessage, setValidationMessage] = useState('')
const [showPassword, setShowPassword] = useState(false)
const [showConfirmPassword, setShowConfirmPassword] = useState(false)
const [buttonClass, setButtonClass] = useState('auth-button-gradient')
const [buttonClass, setButtonClass] = useState('branded-button-gradient')
const [isButtonHovered, setIsButtonHovered] = useState(false)
useEffect(() => {
@@ -147,9 +147,9 @@ export function SetNewPasswordForm({
const brandAccent = computedStyle.getPropertyValue('--brand-accent-hex').trim()
if (brandAccent && brandAccent !== '#6f3dfa') {
setButtonClass('auth-button-custom')
setButtonClass('branded-button-custom')
} else {
setButtonClass('auth-button-gradient')
setButtonClass('branded-button-gradient')
}
}

View File

@@ -95,7 +95,7 @@ function SignupFormContent({
const [showEmailValidationError, setShowEmailValidationError] = useState(false)
const [redirectUrl, setRedirectUrl] = useState('')
const [isInviteFlow, setIsInviteFlow] = useState(false)
const [buttonClass, setButtonClass] = useState('auth-button-gradient')
const [buttonClass, setButtonClass] = useState('branded-button-gradient')
const [isButtonHovered, setIsButtonHovered] = useState(false)
const [name, setName] = useState('')
@@ -109,11 +109,15 @@ function SignupFormContent({
setEmail(emailParam)
}
const redirectParam = searchParams.get('redirect')
// Check both 'redirect' and 'callbackUrl' params (login page uses callbackUrl)
const redirectParam = searchParams.get('redirect') || searchParams.get('callbackUrl')
if (redirectParam) {
setRedirectUrl(redirectParam)
if (redirectParam.startsWith('/invite/')) {
if (
redirectParam.startsWith('/invite/') ||
redirectParam.startsWith('/credential-account/')
) {
setIsInviteFlow(true)
}
}
@@ -128,9 +132,9 @@ function SignupFormContent({
const brandAccent = computedStyle.getPropertyValue('--brand-accent-hex').trim()
if (brandAccent && brandAccent !== '#6f3dfa') {
setButtonClass('auth-button-custom')
setButtonClass('branded-button-custom')
} else {
setButtonClass('auth-button-gradient')
setButtonClass('branded-button-gradient')
}
}

View File

@@ -57,7 +57,7 @@ export default function SSOForm() {
const [email, setEmail] = useState('')
const [emailErrors, setEmailErrors] = useState<string[]>([])
const [showEmailValidationError, setShowEmailValidationError] = useState(false)
const [buttonClass, setButtonClass] = useState('auth-button-gradient')
const [buttonClass, setButtonClass] = useState('branded-button-gradient')
const [callbackUrl, setCallbackUrl] = useState('/workspace')
useEffect(() => {
@@ -96,9 +96,9 @@ export default function SSOForm() {
const brandAccent = computedStyle.getPropertyValue('--brand-accent-hex').trim()
if (brandAccent && brandAccent !== '#6f3dfa') {
setButtonClass('auth-button-custom')
setButtonClass('branded-button-custom')
} else {
setButtonClass('auth-button-gradient')
setButtonClass('branded-button-gradient')
}
}

View File

@@ -58,7 +58,7 @@ function VerificationForm({
setCountdown(30)
}
const [buttonClass, setButtonClass] = useState('auth-button-gradient')
const [buttonClass, setButtonClass] = useState('branded-button-gradient')
useEffect(() => {
const checkCustomBrand = () => {
@@ -66,9 +66,9 @@ function VerificationForm({
const brandAccent = computedStyle.getPropertyValue('--brand-accent-hex').trim()
if (brandAccent && brandAccent !== '#6f3dfa') {
setButtonClass('auth-button-custom')
setButtonClass('branded-button-custom')
} else {
setButtonClass('auth-button-gradient')
setButtonClass('branded-button-gradient')
}
}

View File

@@ -767,7 +767,7 @@ export default function PrivacyPolicy() {
privacy@sim.ai
</Link>
</li>
<li>Mailing Address: Sim, 80 Langton St, San Francisco, CA 94133, USA</li>
<li>Mailing Address: Sim, 80 Langton St, San Francisco, CA 94103, USA</li>
</ul>
<p>We will respond to your request within a reasonable timeframe.</p>
</section>

View File

@@ -1,13 +0,0 @@
export default function Head() {
return (
<>
<link rel='canonical' href='https://sim.ai/studio' />
<link
rel='alternate'
type='application/rss+xml'
title='Sim Studio'
href='https://sim.ai/studio/rss.xml'
/>
</>
)
}

View File

@@ -2,6 +2,7 @@
import type React from 'react'
import { createContext, useCallback, useEffect, useMemo, useState } from 'react'
import { useQueryClient } from '@tanstack/react-query'
import posthog from 'posthog-js'
import { client } from '@/lib/auth/auth-client'
@@ -35,12 +36,15 @@ export function SessionProvider({ children }: { children: React.ReactNode }) {
const [data, setData] = useState<AppSession>(null)
const [isPending, setIsPending] = useState(true)
const [error, setError] = useState<Error | null>(null)
const queryClient = useQueryClient()
const loadSession = useCallback(async () => {
const loadSession = useCallback(async (bypassCache = false) => {
try {
setIsPending(true)
setError(null)
const res = await client.getSession()
const res = bypassCache
? await client.getSession({ query: { disableCookieCache: true } })
: await client.getSession()
setData(res?.data ?? null)
} catch (e) {
setError(e instanceof Error ? e : new Error('Failed to fetch session'))
@@ -50,8 +54,25 @@ export function SessionProvider({ children }: { children: React.ReactNode }) {
}, [])
useEffect(() => {
loadSession()
}, [loadSession])
// Check if user was redirected after plan upgrade
const params = new URLSearchParams(window.location.search)
const wasUpgraded = params.get('upgraded') === 'true'
if (wasUpgraded) {
params.delete('upgraded')
const newUrl = params.toString()
? `${window.location.pathname}?${params.toString()}`
: window.location.pathname
window.history.replaceState({}, '', newUrl)
}
loadSession(wasUpgraded).then(() => {
if (wasUpgraded) {
queryClient.invalidateQueries({ queryKey: ['organizations'] })
queryClient.invalidateQueries({ queryKey: ['subscription'] })
}
})
}, [loadSession, queryClient])
useEffect(() => {
if (isPending || typeof posthog.identify !== 'function') {

View File

@@ -22,12 +22,13 @@ export function ThemeProvider({ children, ...props }: ThemeProviderProps) {
pathname.startsWith('/changelog') ||
pathname.startsWith('/chat') ||
pathname.startsWith('/studio') ||
pathname.startsWith('/resume')
pathname.startsWith('/resume') ||
pathname.startsWith('/form')
return (
<NextThemesProvider
attribute='class'
defaultTheme='system'
defaultTheme='dark'
enableSystem
disableTransitionOnChange
storageKey='sim-theme'

View File

@@ -42,6 +42,40 @@
animation: dash-animation 1.5s linear infinite !important;
}
/**
* React Flow selection box styling
* Uses brand-secondary color for selection highlighting
*/
.react-flow__selection {
background: rgba(51, 180, 255, 0.08) !important;
border: 1px solid var(--brand-secondary) !important;
}
.react-flow__nodesselection-rect,
.react-flow__nodesselection {
background: transparent !important;
border: none !important;
pointer-events: none !important;
}
/**
* Selected node ring indicator
* Uses a pseudo-element overlay to match the original behavior (absolute inset-0 z-40)
*/
.react-flow__node.selected > div > div {
position: relative;
}
.react-flow__node.selected > div > div::after {
content: "";
position: absolute;
inset: 0;
z-index: 40;
border-radius: 8px;
box-shadow: 0 0 0 1.75px var(--brand-secondary);
pointer-events: none;
}
/**
* Color tokens - single source of truth for all colors
* Light mode: Warm theme
@@ -553,27 +587,25 @@ input[type="search"]::-ms-clear {
animation: placeholder-pulse 1.5s ease-in-out infinite;
}
.auth-button-gradient {
background: linear-gradient(to bottom, var(--brand-primary-hex), var(--brand-400)) !important;
border-color: var(--brand-400) !important;
box-shadow: inset 0 2px 4px 0 var(--brand-400) !important;
.branded-button-gradient {
background: linear-gradient(to bottom, #8357ff, #6f3dfa) !important;
border-color: #6f3dfa !important;
box-shadow: inset 0 2px 4px 0 #9b77ff !important;
}
.auth-button-gradient:hover {
background: linear-gradient(to bottom, var(--brand-primary-hex), var(--brand-400)) !important;
.branded-button-gradient:hover {
background: linear-gradient(to bottom, #8357ff, #6f3dfa) !important;
opacity: 0.9;
}
.auth-button-custom {
.branded-button-custom {
background: var(--brand-primary-hex) !important;
border-color: var(--brand-primary-hex) !important;
box-shadow: inset 0 2px 4px 0 rgba(0, 0, 0, 0.1) !important;
}
.auth-button-custom:hover {
.branded-button-custom:hover {
background: var(--brand-primary-hover-hex) !important;
border-color: var(--brand-primary-hover-hex) !important;
opacity: 1;
}
/**

View File

@@ -8,11 +8,18 @@ import { createMockLogger, createMockRequest } from '@/app/api/__test-utils__/ut
describe('OAuth Disconnect API Route', () => {
const mockGetSession = vi.fn()
const mockSelectChain = {
from: vi.fn().mockReturnThis(),
innerJoin: vi.fn().mockReturnThis(),
where: vi.fn().mockResolvedValue([]),
}
const mockDb = {
delete: vi.fn().mockReturnThis(),
where: vi.fn(),
select: vi.fn().mockReturnValue(mockSelectChain),
}
const mockLogger = createMockLogger()
const mockSyncAllWebhooksForCredentialSet = vi.fn().mockResolvedValue({})
const mockUUID = 'mock-uuid-12345678-90ab-cdef-1234-567890abcdef'
@@ -33,6 +40,13 @@ describe('OAuth Disconnect API Route', () => {
vi.doMock('@sim/db/schema', () => ({
account: { userId: 'userId', providerId: 'providerId' },
credentialSetMember: {
id: 'id',
credentialSetId: 'credentialSetId',
userId: 'userId',
status: 'status',
},
credentialSet: { id: 'id', providerId: 'providerId' },
}))
vi.doMock('drizzle-orm', () => ({
@@ -45,6 +59,14 @@ describe('OAuth Disconnect API Route', () => {
vi.doMock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue(mockLogger),
}))
vi.doMock('@/lib/core/utils/request', () => ({
generateRequestId: vi.fn().mockReturnValue('test-request-id'),
}))
vi.doMock('@/lib/webhooks/utils.server', () => ({
syncAllWebhooksForCredentialSet: mockSyncAllWebhooksForCredentialSet,
}))
})
afterEach(() => {

View File

@@ -1,11 +1,12 @@
import { db } from '@sim/db'
import { account } from '@sim/db/schema'
import { account, credentialSet, credentialSetMember } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, like, or } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { generateRequestId } from '@/lib/core/utils/request'
import { syncAllWebhooksForCredentialSet } from '@/lib/webhooks/utils.server'
export const dynamic = 'force-dynamic'
@@ -74,6 +75,49 @@ export async function POST(request: NextRequest) {
)
}
// Sync webhooks for all credential sets the user is a member of
// This removes webhooks that were using the disconnected credential
const userMemberships = await db
.select({
id: credentialSetMember.id,
credentialSetId: credentialSetMember.credentialSetId,
providerId: credentialSet.providerId,
})
.from(credentialSetMember)
.innerJoin(credentialSet, eq(credentialSetMember.credentialSetId, credentialSet.id))
.where(
and(
eq(credentialSetMember.userId, session.user.id),
eq(credentialSetMember.status, 'active')
)
)
for (const membership of userMemberships) {
// Only sync if the credential set matches this provider
// Credential sets store OAuth provider IDs like 'google-email' or 'outlook'
const matchesProvider =
membership.providerId === provider ||
membership.providerId === providerId ||
membership.providerId?.startsWith(`${provider}-`)
if (matchesProvider) {
try {
await syncAllWebhooksForCredentialSet(membership.credentialSetId, requestId)
logger.info(`[${requestId}] Synced webhooks after credential disconnect`, {
credentialSetId: membership.credentialSetId,
provider,
})
} catch (error) {
// Log but don't fail the disconnect - credential is already removed
logger.error(`[${requestId}] Failed to sync webhooks after credential disconnect`, {
credentialSetId: membership.credentialSetId,
provider,
error,
})
}
}
}
return NextResponse.json({ success: true }, { status: 200 })
} catch (error) {
logger.error(`[${requestId}] Error disconnecting OAuth provider`, error)

View File

@@ -138,7 +138,10 @@ describe('OAuth Token API Routes', () => {
const data = await response.json()
expect(response.status).toBe(400)
expect(data).toHaveProperty('error', 'Credential ID is required')
expect(data).toHaveProperty(
'error',
'Either credentialId or (credentialAccountUserId + providerId) is required'
)
expect(mockLogger.warn).toHaveBeenCalled()
})

View File

@@ -4,7 +4,7 @@ import { z } from 'zod'
import { authorizeCredentialUse } from '@/lib/auth/credential-access'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { getCredential, refreshTokenIfNeeded } from '@/app/api/auth/oauth/utils'
import { getCredential, getOAuthToken, refreshTokenIfNeeded } from '@/app/api/auth/oauth/utils'
export const dynamic = 'force-dynamic'
@@ -12,12 +12,17 @@ const logger = createLogger('OAuthTokenAPI')
const SALESFORCE_INSTANCE_URL_REGEX = /__sf_instance__:([^\s]+)/
const tokenRequestSchema = z.object({
credentialId: z
.string({ required_error: 'Credential ID is required' })
.min(1, 'Credential ID is required'),
workflowId: z.string().min(1, 'Workflow ID is required').nullish(),
})
const tokenRequestSchema = z
.object({
credentialId: z.string().min(1).optional(),
credentialAccountUserId: z.string().min(1).optional(),
providerId: z.string().min(1).optional(),
workflowId: z.string().min(1).nullish(),
})
.refine(
(data) => data.credentialId || (data.credentialAccountUserId && data.providerId),
'Either credentialId or (credentialAccountUserId + providerId) is required'
)
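// Illustrative usage (the literal values below are placeholders): the refined schema
// accepts either request shape.
//   tokenRequestSchema.parse({ credentialId: 'cred_123' })
//   tokenRequestSchema.parse({ credentialAccountUserId: 'user_456', providerId: 'google-email' })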
const tokenQuerySchema = z.object({
credentialId: z
@@ -58,9 +63,37 @@ export async function POST(request: NextRequest) {
)
}
const { credentialId, workflowId } = parseResult.data
const { credentialId, credentialAccountUserId, providerId, workflowId } = parseResult.data
if (credentialAccountUserId && providerId) {
logger.info(`[${requestId}] Fetching token by credentialAccountUserId + providerId`, {
credentialAccountUserId,
providerId,
})
try {
const accessToken = await getOAuthToken(credentialAccountUserId, providerId)
if (!accessToken) {
return NextResponse.json(
{
error: `No credential found for user ${credentialAccountUserId} and provider ${providerId}`,
},
{ status: 404 }
)
}
return NextResponse.json({ accessToken }, { status: 200 })
} catch (error) {
const message = error instanceof Error ? error.message : 'Failed to get OAuth token'
logger.warn(`[${requestId}] OAuth token error: ${message}`)
return NextResponse.json({ error: message }, { status: 403 })
}
}
if (!credentialId) {
return NextResponse.json({ error: 'Credential ID is required' }, { status: 400 })
}
// We already have workflowId from the parsed body; avoid forcing hybrid auth to re-read it
const authz = await authorizeCredentialUse(request, {
credentialId,
workflowId: workflowId ?? undefined,
@@ -70,7 +103,6 @@ export async function POST(request: NextRequest) {
return NextResponse.json({ error: authz.error || 'Unauthorized' }, { status: 403 })
}
// Fetch the credential as the owner to enforce ownership scoping
const credential = await getCredential(requestId, credentialId, authz.credentialOwnerUserId)
if (!credential) {
@@ -78,7 +110,6 @@ export async function POST(request: NextRequest) {
}
try {
// Refresh the token if needed
const { accessToken } = await refreshTokenIfNeeded(requestId, credential, credentialId)
let instanceUrl: string | undefined
@@ -145,7 +176,6 @@ export async function GET(request: NextRequest) {
return NextResponse.json({ error: 'User not authenticated' }, { status: 401 })
}
// Get the credential from the database
const credential = await getCredential(requestId, credentialId, auth.userId)
if (!credential) {
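Note (not part of the diff): the refined tokenRequestSchema above accepts either a credentialId or the pair credentialAccountUserId + providerId. A minimal caller sketch of the two accepted body shapes follows; the endpoint path is an assumption (file paths are not shown in this compare view), so treat it as illustrative only.

// Sketch only: the two request bodies that satisfy the .refine() check in tokenRequestSchema.
// The '/api/auth/oauth/token' path is an assumption, not confirmed by this diff.
async function fetchAccessToken(
  body:
    | { credentialId: string; workflowId?: string | null }
    | { credentialAccountUserId: string; providerId: string; workflowId?: string | null }
): Promise<string> {
  const res = await fetch('/api/auth/oauth/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  })
  if (!res.ok) {
    const { error } = await res.json()
    throw new Error(error ?? `Token request failed (${res.status})`)
  }
  const { accessToken } = await res.json()
  return accessToken
}

// Either shape passes validation:
// await fetchAccessToken({ credentialId: 'cred_123' })
// await fetchAccessToken({ credentialAccountUserId: 'user_456', providerId: 'google-email' })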

View File

@@ -1,7 +1,7 @@
import { db } from '@sim/db'
import { account, workflow } from '@sim/db/schema'
import { account, credentialSetMember, workflow } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, desc, eq } from 'drizzle-orm'
import { and, desc, eq, inArray } from 'drizzle-orm'
import { getSession } from '@/lib/auth'
import { refreshOAuthToken } from '@/lib/oauth'
@@ -105,10 +105,10 @@ export async function getOAuthToken(userId: string, providerId: string): Promise
refreshToken: account.refreshToken,
accessTokenExpiresAt: account.accessTokenExpiresAt,
idToken: account.idToken,
scope: account.scope,
})
.from(account)
.where(and(eq(account.userId, userId), eq(account.providerId, providerId)))
// Always use the most recently updated credential for this provider
.orderBy(desc(account.updatedAt))
.limit(1)
@@ -335,3 +335,108 @@ export async function refreshTokenIfNeeded(
throw error
}
}
export interface CredentialSetCredential {
userId: string
credentialId: string
accessToken: string
providerId: string
}
export async function getCredentialsForCredentialSet(
credentialSetId: string,
providerId: string
): Promise<CredentialSetCredential[]> {
logger.info(`Getting credentials for credential set ${credentialSetId}, provider ${providerId}`)
const members = await db
.select({ userId: credentialSetMember.userId })
.from(credentialSetMember)
.where(
and(
eq(credentialSetMember.credentialSetId, credentialSetId),
eq(credentialSetMember.status, 'active')
)
)
logger.info(`Found ${members.length} active members in credential set ${credentialSetId}`)
if (members.length === 0) {
logger.warn(`No active members found for credential set ${credentialSetId}`)
return []
}
const userIds = members.map((m) => m.userId)
logger.debug(`Member user IDs: ${userIds.join(', ')}`)
const credentials = await db
.select({
id: account.id,
userId: account.userId,
providerId: account.providerId,
accessToken: account.accessToken,
refreshToken: account.refreshToken,
accessTokenExpiresAt: account.accessTokenExpiresAt,
})
.from(account)
.where(and(inArray(account.userId, userIds), eq(account.providerId, providerId)))
logger.info(
`Found ${credentials.length} credentials with provider ${providerId} for ${members.length} members`
)
const results: CredentialSetCredential[] = []
for (const cred of credentials) {
const now = new Date()
const tokenExpiry = cred.accessTokenExpiresAt
const shouldRefresh =
!!cred.refreshToken && (!cred.accessToken || (tokenExpiry && tokenExpiry < now))
let accessToken = cred.accessToken
if (shouldRefresh && cred.refreshToken) {
try {
const refreshResult = await refreshOAuthToken(providerId, cred.refreshToken)
if (refreshResult) {
accessToken = refreshResult.accessToken
const updateData: Record<string, unknown> = {
accessToken: refreshResult.accessToken,
accessTokenExpiresAt: new Date(Date.now() + refreshResult.expiresIn * 1000),
updatedAt: new Date(),
}
if (refreshResult.refreshToken && refreshResult.refreshToken !== cred.refreshToken) {
updateData.refreshToken = refreshResult.refreshToken
}
await db.update(account).set(updateData).where(eq(account.id, cred.id))
logger.info(`Refreshed token for user ${cred.userId}, provider ${providerId}`)
}
} catch (error) {
logger.error(`Failed to refresh token for user ${cred.userId}, provider ${providerId}`, {
error: error instanceof Error ? error.message : String(error),
})
continue
}
}
if (accessToken) {
results.push({
userId: cred.userId,
credentialId: cred.id,
accessToken,
providerId,
})
}
}
logger.info(
`Found ${results.length} valid credentials for credential set ${credentialSetId}, provider ${providerId}`
)
return results
}
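Note (not part of the diff): a hedged sketch of how a polling or webhook-sync job might consume getCredentialsForCredentialSet. Only the helper's signature and return shape come from the hunk above; pollInbox is a hypothetical callback, and the import path assumes the export lives in the same oauth utils module shown here.

// Sketch only: fan out over the (already refreshed) credentials of an active credential set.
import { getCredentialsForCredentialSet } from '@/app/api/auth/oauth/utils'

async function pollCredentialSet(
  credentialSetId: string,
  providerId: string,
  pollInbox: (userId: string, accessToken: string) => Promise<void> // hypothetical callback
): Promise<void> {
  const credentials = await getCredentialsForCredentialSet(credentialSetId, providerId)
  for (const cred of credentials) {
    // Each entry carries a token that the helper refreshed above if it was missing or expired.
    await pollInbox(cred.userId, cred.accessToken)
  }
}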

View File

@@ -1,7 +1,8 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { auth } from '@/lib/auth'
import { auth, getSession } from '@/lib/auth'
import { hasSSOAccess } from '@/lib/billing'
import { env } from '@/lib/core/config/env'
import { REDACTED_MARKER } from '@/lib/core/security/redaction'
@@ -63,10 +64,22 @@ const ssoRegistrationSchema = z.discriminatedUnion('providerType', [
export async function POST(request: NextRequest) {
try {
// SSO plugin must be enabled in Better Auth
if (!env.SSO_ENABLED) {
return NextResponse.json({ error: 'SSO is not enabled' }, { status: 400 })
}
// Check plan access (enterprise) or env var override
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Authentication required' }, { status: 401 })
}
const hasAccess = await hasSSOAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json({ error: 'SSO requires an Enterprise plan' }, { status: 403 })
}
const rawBody = await request.json()
const parseResult = ssoRegistrationSchema.safeParse(rawBody)

View File

@@ -7,10 +7,11 @@ import type { NextRequest } from 'next/server'
import { z } from 'zod'
import { renderOTPEmail } from '@/components/emails'
import { getRedisClient } from '@/lib/core/config/redis'
import { addCorsHeaders } from '@/lib/core/security/deployment'
import { getStorageMethod } from '@/lib/core/storage'
import { generateRequestId } from '@/lib/core/utils/request'
import { sendEmail } from '@/lib/messaging/email/mailer'
import { addCorsHeaders, setChatAuthCookie } from '@/app/api/chat/utils'
import { setChatAuthCookie } from '@/app/api/chat/utils'
import { createErrorResponse, createSuccessResponse } from '@/app/api/workflows/utils'
const logger = createLogger('ChatOtpAPI')

View File

@@ -3,6 +3,7 @@
*
* @vitest-environment node
*/
import { loggerMock } from '@sim/testing'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
import { createMockRequest } from '@/app/api/__test-utils__/utils'
@@ -120,14 +121,8 @@ describe('Chat Identifier API Route', () => {
validateAuthToken: vi.fn().mockReturnValue(true),
}))
vi.doMock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue({
debug: vi.fn(),
info: vi.fn(),
warn: vi.fn(),
error: vi.fn(),
}),
}))
// Mock logger - use loggerMock from @sim/testing
vi.doMock('@sim/logger', () => loggerMock)
vi.doMock('@sim/db', () => {
const mockSelect = vi.fn().mockImplementation((fields) => {

View File

@@ -5,16 +5,12 @@ import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { addCorsHeaders, validateAuthToken } from '@/lib/core/security/deployment'
import { generateRequestId } from '@/lib/core/utils/request'
import { preprocessExecution } from '@/lib/execution/preprocessing'
import { LoggingSession } from '@/lib/logs/execution/logging-session'
import { ChatFiles } from '@/lib/uploads'
import {
addCorsHeaders,
setChatAuthCookie,
validateAuthToken,
validateChatAuth,
} from '@/app/api/chat/utils'
import { setChatAuthCookie, validateChatAuth } from '@/app/api/chat/utils'
import { createErrorResponse, createSuccessResponse } from '@/app/api/workflows/utils'
const logger = createLogger('ChatIdentifierAPI')
@@ -253,7 +249,7 @@ export async function POST(
userId: deployment.userId,
workspaceId,
isDeployed: workflowRecord?.isDeployed ?? false,
variables: workflowRecord?.variables || {},
variables: (workflowRecord?.variables as Record<string, unknown>) ?? undefined,
}
const stream = await createStreamingResponse({

View File

@@ -1,9 +1,10 @@
import { NextRequest } from 'next/server'
/**
* Tests for chat edit API route
*
* @vitest-environment node
*/
import { loggerMock } from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
vi.mock('@/lib/core/config/feature-flags', () => ({
@@ -50,14 +51,8 @@ describe('Chat Edit API Route', () => {
chat: { id: 'id', identifier: 'identifier', userId: 'userId' },
}))
vi.doMock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue({
info: vi.fn(),
error: vi.fn(),
warn: vi.fn(),
debug: vi.fn(),
}),
}))
// Mock logger - use loggerMock from @sim/testing
vi.doMock('@sim/logger', () => loggerMock)
vi.doMock('@/app/api/workflows/utils', () => ({
createSuccessResponse: mockCreateSuccessResponse.mockImplementation((data) => {

View File

@@ -212,6 +212,18 @@ export async function POST(request: NextRequest) {
logger.info(`Chat "${title}" deployed successfully at ${chatUrl}`)
try {
const { PlatformEvents } = await import('@/lib/core/telemetry')
PlatformEvents.chatDeployed({
chatId: id,
workflowId,
authType,
hasOutputConfigs: outputConfigs.length > 0,
})
} catch (_e) {
// Silently fail
}
return createSuccessResponse({
id,
chatUrl,

View File

@@ -1,3 +1,4 @@
import { databaseMock, loggerMock } from '@sim/testing'
import type { NextResponse } from 'next/server'
/**
* Tests for chat API utils
@@ -5,14 +6,9 @@ import type { NextResponse } from 'next/server'
* @vitest-environment node
*/
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
import { env } from '@/lib/core/config/env'
vi.mock('@sim/db', () => ({
db: {
select: vi.fn(),
update: vi.fn(),
},
}))
vi.mock('@sim/db', () => databaseMock)
vi.mock('@sim/logger', () => loggerMock)
vi.mock('@/lib/logs/execution/logging-session', () => ({
LoggingSession: vi.fn().mockImplementation(() => ({
@@ -52,19 +48,10 @@ vi.mock('@/lib/core/config/feature-flags', () => ({
describe('Chat API Utils', () => {
beforeEach(() => {
vi.doMock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue({
info: vi.fn(),
error: vi.fn(),
warn: vi.fn(),
debug: vi.fn(),
}),
}))
vi.stubGlobal('process', {
...process,
env: {
...env,
...process.env,
NODE_ENV: 'development',
},
})
@@ -75,8 +62,8 @@ describe('Chat API Utils', () => {
})
describe('Auth token utils', () => {
it('should validate auth tokens', async () => {
const { validateAuthToken } = await import('@/app/api/chat/utils')
it.concurrent('should validate auth tokens', async () => {
const { validateAuthToken } = await import('@/lib/core/security/deployment')
const chatId = 'test-chat-id'
const type = 'password'
@@ -92,8 +79,8 @@ describe('Chat API Utils', () => {
expect(isInvalidChat).toBe(false)
})
it('should reject expired tokens', async () => {
const { validateAuthToken } = await import('@/app/api/chat/utils')
it.concurrent('should reject expired tokens', async () => {
const { validateAuthToken } = await import('@/lib/core/security/deployment')
const chatId = 'test-chat-id'
const expiredToken = Buffer.from(
@@ -136,7 +123,7 @@ describe('Chat API Utils', () => {
describe('CORS handling', () => {
it('should add CORS headers for localhost in development', async () => {
const { addCorsHeaders } = await import('@/app/api/chat/utils')
const { addCorsHeaders } = await import('@/lib/core/security/deployment')
const mockRequest = {
headers: {
@@ -343,7 +330,7 @@ describe('Chat API Utils', () => {
})
describe('Execution Result Processing', () => {
it('should process logs regardless of overall success status', () => {
it.concurrent('should process logs regardless of overall success status', () => {
const executionResult = {
success: false,
output: {},
@@ -381,7 +368,7 @@ describe('Chat API Utils', () => {
expect(executionResult.logs[1].error).toBe('Agent 2 failed')
})
it('should handle ExecutionResult vs StreamingExecution types correctly', () => {
it.concurrent('should handle ExecutionResult vs StreamingExecution types correctly', () => {
const executionResult = {
success: true,
output: { content: 'test' },

View File

@@ -1,17 +1,25 @@
import { createHash } from 'crypto'
import { db } from '@sim/db'
import { chat, workflow } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import type { NextRequest, NextResponse } from 'next/server'
import { isDev } from '@/lib/core/config/feature-flags'
import {
isEmailAllowed,
setDeploymentAuthCookie,
validateAuthToken,
} from '@/lib/core/security/deployment'
import { decryptSecret } from '@/lib/core/security/encryption'
import { hasAdminPermission } from '@/lib/workspaces/permissions/utils'
const logger = createLogger('ChatAuthUtils')
function hashPassword(encryptedPassword: string): string {
return createHash('sha256').update(encryptedPassword).digest('hex').substring(0, 8)
export function setChatAuthCookie(
response: NextResponse,
chatId: string,
type: string,
encryptedPassword?: string | null
): void {
setDeploymentAuthCookie(response, 'chat', chatId, type, encryptedPassword)
}
/**
@@ -82,77 +90,6 @@ export async function checkChatAccess(
return { hasAccess: false }
}
function encryptAuthToken(chatId: string, type: string, encryptedPassword?: string | null): string {
const pwHash = encryptedPassword ? hashPassword(encryptedPassword) : ''
return Buffer.from(`${chatId}:${type}:${Date.now()}:${pwHash}`).toString('base64')
}
export function validateAuthToken(
token: string,
chatId: string,
encryptedPassword?: string | null
): boolean {
try {
const decoded = Buffer.from(token, 'base64').toString()
const parts = decoded.split(':')
const [storedId, _type, timestamp, storedPwHash] = parts
if (storedId !== chatId) {
return false
}
const createdAt = Number.parseInt(timestamp)
const now = Date.now()
const expireTime = 24 * 60 * 60 * 1000
if (now - createdAt > expireTime) {
return false
}
if (encryptedPassword) {
const currentPwHash = hashPassword(encryptedPassword)
if (storedPwHash !== currentPwHash) {
return false
}
}
return true
} catch (_e) {
return false
}
}
export function setChatAuthCookie(
response: NextResponse,
chatId: string,
type: string,
encryptedPassword?: string | null
): void {
const token = encryptAuthToken(chatId, type, encryptedPassword)
response.cookies.set({
name: `chat_auth_${chatId}`,
value: token,
httpOnly: true,
secure: !isDev,
sameSite: 'lax',
path: '/',
maxAge: 60 * 60 * 24,
})
}
export function addCorsHeaders(response: NextResponse, request: NextRequest) {
const origin = request.headers.get('origin') || ''
if (isDev && origin.includes('localhost')) {
response.headers.set('Access-Control-Allow-Origin', origin)
response.headers.set('Access-Control-Allow-Credentials', 'true')
response.headers.set('Access-Control-Allow-Methods', 'GET, POST, OPTIONS')
response.headers.set('Access-Control-Allow-Headers', 'Content-Type, X-Requested-With')
}
return response
}
export async function validateChatAuth(
requestId: string,
deployment: any,
@@ -231,12 +168,7 @@ export async function validateChatAuth(
const allowedEmails = deployment.allowedEmails || []
if (allowedEmails.includes(email)) {
return { authorized: false, error: 'otp_required' }
}
const domain = email.split('@')[1]
if (domain && allowedEmails.some((allowed: string) => allowed === `@${domain}`)) {
if (isEmailAllowed(email, allowedEmails)) {
return { authorized: false, error: 'otp_required' }
}
@@ -270,12 +202,7 @@ export async function validateChatAuth(
const allowedEmails = deployment.allowedEmails || []
if (allowedEmails.includes(email)) {
return { authorized: true }
}
const domain = email.split('@')[1]
if (domain && allowedEmails.some((allowed: string) => allowed === `@${domain}`)) {
if (isEmailAllowed(email, allowedEmails)) {
return { authorized: true }
}
@@ -296,12 +223,7 @@ export async function validateChatAuth(
const allowedEmails = deployment.allowedEmails || []
if (allowedEmails.includes(userEmail)) {
return { authorized: true }
}
const domain = userEmail.split('@')[1]
if (domain && allowedEmails.some((allowed: string) => allowed === `@${domain}`)) {
if (isEmailAllowed(userEmail, allowedEmails)) {
return { authorized: true }
}
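Note (not part of the diff): the three inline allow-list checks removed above are replaced by the shared isEmailAllowed helper from '@/lib/core/security/deployment'. The sketch below captures the semantics implied by the removed code (exact address match, or an '@domain' entry matching the email's domain); the real helper's implementation is not shown in this diff and may differ.

// Sketch only: behavior implied by the inline logic this refactor removes.
function isEmailAllowedSketch(email: string, allowedEmails: string[]): boolean {
  if (allowedEmails.includes(email)) return true
  const domain = email.split('@')[1]
  return !!domain && allowedEmails.some((allowed) => allowed === `@${domain}`)
}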

View File

@@ -1,50 +0,0 @@
/**
* @deprecated This route is not currently in use
* @remarks Kept for reference - may be removed in future cleanup
*/
import { db } from '@sim/db'
import { copilotChats } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
const logger = createLogger('UpdateChatTitleAPI')
const UpdateTitleSchema = z.object({
chatId: z.string(),
title: z.string(),
})
export async function POST(request: NextRequest) {
try {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ success: false, error: 'Unauthorized' }, { status: 401 })
}
const body = await request.json()
const parsed = UpdateTitleSchema.parse(body)
// Update the chat title
await db
.update(copilotChats)
.set({
title: parsed.title,
updatedAt: new Date(),
})
.where(eq(copilotChats.id, parsed.chatId))
logger.info('Chat title updated', { chatId: parsed.chatId, title: parsed.title })
return NextResponse.json({ success: true })
} catch (error) {
logger.error('Error updating chat title:', error)
return NextResponse.json(
{ success: false, error: 'Failed to update chat title' },
{ status: 500 }
)
}
}

View File

@@ -0,0 +1,156 @@
import { db } from '@sim/db'
import { credentialSet, credentialSetInvitation, member, organization, user } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getEmailSubject, renderPollingGroupInvitationEmail } from '@/components/emails'
import { getSession } from '@/lib/auth'
import { hasCredentialSetsAccess } from '@/lib/billing'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { sendEmail } from '@/lib/messaging/email/mailer'
const logger = createLogger('CredentialSetInviteResend')
async function getCredentialSetWithAccess(credentialSetId: string, userId: string) {
const [set] = await db
.select({
id: credentialSet.id,
organizationId: credentialSet.organizationId,
name: credentialSet.name,
providerId: credentialSet.providerId,
})
.from(credentialSet)
.where(eq(credentialSet.id, credentialSetId))
.limit(1)
if (!set) return null
const [membership] = await db
.select({ role: member.role })
.from(member)
.where(and(eq(member.userId, userId), eq(member.organizationId, set.organizationId)))
.limit(1)
if (!membership) return null
return { set, role: membership.role }
}
export async function POST(
req: NextRequest,
{ params }: { params: Promise<{ id: string; invitationId: string }> }
) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
// Check plan access (team/enterprise) or env var override
const hasAccess = await hasCredentialSetsAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json(
{ error: 'Credential sets require a Team or Enterprise plan' },
{ status: 403 }
)
}
const { id, invitationId } = await params
try {
const result = await getCredentialSetWithAccess(id, session.user.id)
if (!result) {
return NextResponse.json({ error: 'Credential set not found' }, { status: 404 })
}
if (result.role !== 'admin' && result.role !== 'owner') {
return NextResponse.json({ error: 'Admin or owner permissions required' }, { status: 403 })
}
const [invitation] = await db
.select()
.from(credentialSetInvitation)
.where(
and(
eq(credentialSetInvitation.id, invitationId),
eq(credentialSetInvitation.credentialSetId, id)
)
)
.limit(1)
if (!invitation) {
return NextResponse.json({ error: 'Invitation not found' }, { status: 404 })
}
if (invitation.status !== 'pending') {
return NextResponse.json({ error: 'Only pending invitations can be resent' }, { status: 400 })
}
// Update expiration
const newExpiresAt = new Date()
newExpiresAt.setDate(newExpiresAt.getDate() + 7)
await db
.update(credentialSetInvitation)
.set({ expiresAt: newExpiresAt })
.where(eq(credentialSetInvitation.id, invitationId))
const inviteUrl = `${getBaseUrl()}/credential-account/${invitation.token}`
// Send email if email address exists
if (invitation.email) {
try {
const [inviter] = await db
.select({ name: user.name })
.from(user)
.where(eq(user.id, session.user.id))
.limit(1)
const [org] = await db
.select({ name: organization.name })
.from(organization)
.where(eq(organization.id, result.set.organizationId))
.limit(1)
const provider = (result.set.providerId as 'google-email' | 'outlook') || 'google-email'
const emailHtml = await renderPollingGroupInvitationEmail({
inviterName: inviter?.name || 'A team member',
organizationName: org?.name || 'your organization',
pollingGroupName: result.set.name,
provider,
inviteLink: inviteUrl,
})
const emailResult = await sendEmail({
to: invitation.email,
subject: getEmailSubject('polling-group-invitation'),
html: emailHtml,
emailType: 'transactional',
})
if (!emailResult.success) {
logger.warn('Failed to resend invitation email', {
email: invitation.email,
error: emailResult.message,
})
return NextResponse.json({ error: 'Failed to send email' }, { status: 500 })
}
} catch (emailError) {
logger.error('Error sending invitation email', emailError)
return NextResponse.json({ error: 'Failed to send email' }, { status: 500 })
}
}
logger.info('Resent credential set invitation', {
credentialSetId: id,
invitationId,
userId: session.user.id,
})
return NextResponse.json({ success: true })
} catch (error) {
logger.error('Error resending invitation', error)
return NextResponse.json({ error: 'Failed to resend invitation' }, { status: 500 })
}
}

View File

@@ -0,0 +1,243 @@
import { db } from '@sim/db'
import { credentialSet, credentialSetInvitation, member, organization, user } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getEmailSubject, renderPollingGroupInvitationEmail } from '@/components/emails'
import { getSession } from '@/lib/auth'
import { hasCredentialSetsAccess } from '@/lib/billing'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { sendEmail } from '@/lib/messaging/email/mailer'
const logger = createLogger('CredentialSetInvite')
const createInviteSchema = z.object({
email: z.string().email().optional(),
})
async function getCredentialSetWithAccess(credentialSetId: string, userId: string) {
const [set] = await db
.select({
id: credentialSet.id,
organizationId: credentialSet.organizationId,
name: credentialSet.name,
providerId: credentialSet.providerId,
})
.from(credentialSet)
.where(eq(credentialSet.id, credentialSetId))
.limit(1)
if (!set) return null
const [membership] = await db
.select({ role: member.role })
.from(member)
.where(and(eq(member.userId, userId), eq(member.organizationId, set.organizationId)))
.limit(1)
if (!membership) return null
return { set, role: membership.role }
}
export async function GET(req: NextRequest, { params }: { params: Promise<{ id: string }> }) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
// Check plan access (team/enterprise) or env var override
const hasAccess = await hasCredentialSetsAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json(
{ error: 'Credential sets require a Team or Enterprise plan' },
{ status: 403 }
)
}
const { id } = await params
const result = await getCredentialSetWithAccess(id, session.user.id)
if (!result) {
return NextResponse.json({ error: 'Credential set not found' }, { status: 404 })
}
const invitations = await db
.select()
.from(credentialSetInvitation)
.where(eq(credentialSetInvitation.credentialSetId, id))
return NextResponse.json({ invitations })
}
export async function POST(req: NextRequest, { params }: { params: Promise<{ id: string }> }) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
// Check plan access (team/enterprise) or env var override
const hasAccess = await hasCredentialSetsAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json(
{ error: 'Credential sets require a Team or Enterprise plan' },
{ status: 403 }
)
}
const { id } = await params
try {
const result = await getCredentialSetWithAccess(id, session.user.id)
if (!result) {
return NextResponse.json({ error: 'Credential set not found' }, { status: 404 })
}
if (result.role !== 'admin' && result.role !== 'owner') {
return NextResponse.json({ error: 'Admin or owner permissions required' }, { status: 403 })
}
const body = await req.json()
const { email } = createInviteSchema.parse(body)
const token = crypto.randomUUID()
const expiresAt = new Date()
expiresAt.setDate(expiresAt.getDate() + 7)
const invitation = {
id: crypto.randomUUID(),
credentialSetId: id,
email: email || null,
token,
invitedBy: session.user.id,
status: 'pending' as const,
expiresAt,
createdAt: new Date(),
}
await db.insert(credentialSetInvitation).values(invitation)
const inviteUrl = `${getBaseUrl()}/credential-account/${token}`
// Send email if email address was provided
if (email) {
try {
// Get inviter name
const [inviter] = await db
.select({ name: user.name })
.from(user)
.where(eq(user.id, session.user.id))
.limit(1)
// Get organization name
const [org] = await db
.select({ name: organization.name })
.from(organization)
.where(eq(organization.id, result.set.organizationId))
.limit(1)
const provider = (result.set.providerId as 'google-email' | 'outlook') || 'google-email'
const emailHtml = await renderPollingGroupInvitationEmail({
inviterName: inviter?.name || 'A team member',
organizationName: org?.name || 'your organization',
pollingGroupName: result.set.name,
provider,
inviteLink: inviteUrl,
})
const emailResult = await sendEmail({
to: email,
subject: getEmailSubject('polling-group-invitation'),
html: emailHtml,
emailType: 'transactional',
})
if (!emailResult.success) {
logger.warn('Failed to send invitation email', {
email,
error: emailResult.message,
})
}
} catch (emailError) {
logger.error('Error sending invitation email', emailError)
// Don't fail the invitation creation if email fails
}
}
logger.info('Created credential set invitation', {
credentialSetId: id,
invitationId: invitation.id,
userId: session.user.id,
emailSent: !!email,
})
return NextResponse.json({
invitation: {
...invitation,
inviteUrl,
},
})
} catch (error) {
if (error instanceof z.ZodError) {
return NextResponse.json({ error: error.errors[0].message }, { status: 400 })
}
logger.error('Error creating invitation', error)
return NextResponse.json({ error: 'Failed to create invitation' }, { status: 500 })
}
}
export async function DELETE(req: NextRequest, { params }: { params: Promise<{ id: string }> }) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
// Check plan access (team/enterprise) or env var override
const hasAccess = await hasCredentialSetsAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json(
{ error: 'Credential sets require a Team or Enterprise plan' },
{ status: 403 }
)
}
const { id } = await params
const { searchParams } = new URL(req.url)
const invitationId = searchParams.get('invitationId')
if (!invitationId) {
return NextResponse.json({ error: 'invitationId is required' }, { status: 400 })
}
try {
const result = await getCredentialSetWithAccess(id, session.user.id)
if (!result) {
return NextResponse.json({ error: 'Credential set not found' }, { status: 404 })
}
if (result.role !== 'admin' && result.role !== 'owner') {
return NextResponse.json({ error: 'Admin or owner permissions required' }, { status: 403 })
}
await db
.update(credentialSetInvitation)
.set({ status: 'cancelled' })
.where(
and(
eq(credentialSetInvitation.id, invitationId),
eq(credentialSetInvitation.credentialSetId, id)
)
)
return NextResponse.json({ success: true })
} catch (error) {
logger.error('Error cancelling invitation', error)
return NextResponse.json({ error: 'Failed to cancel invitation' }, { status: 500 })
}
}

View File

@@ -0,0 +1,185 @@
import { db } from '@sim/db'
import { account, credentialSet, credentialSetMember, member, user } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, inArray } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { hasCredentialSetsAccess } from '@/lib/billing'
import { syncAllWebhooksForCredentialSet } from '@/lib/webhooks/utils.server'
const logger = createLogger('CredentialSetMembers')
async function getCredentialSetWithAccess(credentialSetId: string, userId: string) {
const [set] = await db
.select({
id: credentialSet.id,
organizationId: credentialSet.organizationId,
providerId: credentialSet.providerId,
})
.from(credentialSet)
.where(eq(credentialSet.id, credentialSetId))
.limit(1)
if (!set) return null
const [membership] = await db
.select({ role: member.role })
.from(member)
.where(and(eq(member.userId, userId), eq(member.organizationId, set.organizationId)))
.limit(1)
if (!membership) return null
return { set, role: membership.role }
}
export async function GET(req: NextRequest, { params }: { params: Promise<{ id: string }> }) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
// Check plan access (team/enterprise) or env var override
const hasAccess = await hasCredentialSetsAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json(
{ error: 'Credential sets require a Team or Enterprise plan' },
{ status: 403 }
)
}
const { id } = await params
const result = await getCredentialSetWithAccess(id, session.user.id)
if (!result) {
return NextResponse.json({ error: 'Credential set not found' }, { status: 404 })
}
const members = await db
.select({
id: credentialSetMember.id,
userId: credentialSetMember.userId,
status: credentialSetMember.status,
joinedAt: credentialSetMember.joinedAt,
createdAt: credentialSetMember.createdAt,
userName: user.name,
userEmail: user.email,
userImage: user.image,
})
.from(credentialSetMember)
.leftJoin(user, eq(credentialSetMember.userId, user.id))
.where(eq(credentialSetMember.credentialSetId, id))
// Get credentials for all active members filtered by the polling group's provider
const activeMembers = members.filter((m) => m.status === 'active')
const memberUserIds = activeMembers.map((m) => m.userId)
let credentials: { userId: string; providerId: string; accountId: string }[] = []
if (memberUserIds.length > 0 && result.set.providerId) {
credentials = await db
.select({
userId: account.userId,
providerId: account.providerId,
accountId: account.accountId,
})
.from(account)
.where(
and(inArray(account.userId, memberUserIds), eq(account.providerId, result.set.providerId))
)
}
// Group credentials by userId
const credentialsByUser = credentials.reduce(
(acc, cred) => {
if (!acc[cred.userId]) {
acc[cred.userId] = []
}
acc[cred.userId].push({
providerId: cred.providerId,
accountId: cred.accountId,
})
return acc
},
{} as Record<string, { providerId: string; accountId: string }[]>
)
// Attach credentials to members
const membersWithCredentials = members.map((m) => ({
...m,
credentials: credentialsByUser[m.userId] || [],
}))
return NextResponse.json({ members: membersWithCredentials })
}
export async function DELETE(req: NextRequest, { params }: { params: Promise<{ id: string }> }) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
// Check plan access (team/enterprise) or env var override
const hasAccess = await hasCredentialSetsAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json(
{ error: 'Credential sets require a Team or Enterprise plan' },
{ status: 403 }
)
}
const { id } = await params
const { searchParams } = new URL(req.url)
const memberId = searchParams.get('memberId')
if (!memberId) {
return NextResponse.json({ error: 'memberId is required' }, { status: 400 })
}
try {
const result = await getCredentialSetWithAccess(id, session.user.id)
if (!result) {
return NextResponse.json({ error: 'Credential set not found' }, { status: 404 })
}
if (result.role !== 'admin' && result.role !== 'owner') {
return NextResponse.json({ error: 'Admin or owner permissions required' }, { status: 403 })
}
const [memberToRemove] = await db
.select()
.from(credentialSetMember)
.where(and(eq(credentialSetMember.id, memberId), eq(credentialSetMember.credentialSetId, id)))
.limit(1)
if (!memberToRemove) {
return NextResponse.json({ error: 'Member not found' }, { status: 404 })
}
const requestId = crypto.randomUUID().slice(0, 8)
// Use transaction to ensure member deletion + webhook sync are atomic
await db.transaction(async (tx) => {
await tx.delete(credentialSetMember).where(eq(credentialSetMember.id, memberId))
const syncResult = await syncAllWebhooksForCredentialSet(id, requestId, tx)
logger.info('Synced webhooks after member removed', {
credentialSetId: id,
...syncResult,
})
})
logger.info('Removed member from credential set', {
credentialSetId: id,
memberId,
userId: session.user.id,
})
return NextResponse.json({ success: true })
} catch (error) {
logger.error('Error removing member from credential set', error)
return NextResponse.json({ error: 'Failed to remove member' }, { status: 500 })
}
}

View File

@@ -0,0 +1,183 @@
import { db } from '@sim/db'
import { credentialSet, credentialSetMember, member } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { hasCredentialSetsAccess } from '@/lib/billing'
const logger = createLogger('CredentialSet')
const updateCredentialSetSchema = z.object({
name: z.string().trim().min(1).max(100).optional(),
description: z.string().max(500).nullable().optional(),
})
async function getCredentialSetWithAccess(credentialSetId: string, userId: string) {
const [set] = await db
.select({
id: credentialSet.id,
organizationId: credentialSet.organizationId,
name: credentialSet.name,
description: credentialSet.description,
providerId: credentialSet.providerId,
createdBy: credentialSet.createdBy,
createdAt: credentialSet.createdAt,
updatedAt: credentialSet.updatedAt,
})
.from(credentialSet)
.where(eq(credentialSet.id, credentialSetId))
.limit(1)
if (!set) return null
const [membership] = await db
.select({ role: member.role })
.from(member)
.where(and(eq(member.userId, userId), eq(member.organizationId, set.organizationId)))
.limit(1)
if (!membership) return null
return { set, role: membership.role }
}
export async function GET(req: NextRequest, { params }: { params: Promise<{ id: string }> }) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
// Check plan access (team/enterprise) or env var override
const hasAccess = await hasCredentialSetsAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json(
{ error: 'Credential sets require a Team or Enterprise plan' },
{ status: 403 }
)
}
const { id } = await params
const result = await getCredentialSetWithAccess(id, session.user.id)
if (!result) {
return NextResponse.json({ error: 'Credential set not found' }, { status: 404 })
}
return NextResponse.json({ credentialSet: result.set })
}
export async function PUT(req: NextRequest, { params }: { params: Promise<{ id: string }> }) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
// Check plan access (team/enterprise) or env var override
const hasAccess = await hasCredentialSetsAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json(
{ error: 'Credential sets require a Team or Enterprise plan' },
{ status: 403 }
)
}
const { id } = await params
try {
const result = await getCredentialSetWithAccess(id, session.user.id)
if (!result) {
return NextResponse.json({ error: 'Credential set not found' }, { status: 404 })
}
if (result.role !== 'admin' && result.role !== 'owner') {
return NextResponse.json({ error: 'Admin or owner permissions required' }, { status: 403 })
}
const body = await req.json()
const updates = updateCredentialSetSchema.parse(body)
if (updates.name) {
const existingSet = await db
.select({ id: credentialSet.id })
.from(credentialSet)
.where(
and(
eq(credentialSet.organizationId, result.set.organizationId),
eq(credentialSet.name, updates.name)
)
)
.limit(1)
if (existingSet.length > 0 && existingSet[0].id !== id) {
return NextResponse.json(
{ error: 'A credential set with this name already exists' },
{ status: 409 }
)
}
}
await db
.update(credentialSet)
.set({
...updates,
updatedAt: new Date(),
})
.where(eq(credentialSet.id, id))
const [updated] = await db.select().from(credentialSet).where(eq(credentialSet.id, id)).limit(1)
return NextResponse.json({ credentialSet: updated })
} catch (error) {
if (error instanceof z.ZodError) {
return NextResponse.json({ error: error.errors[0].message }, { status: 400 })
}
logger.error('Error updating credential set', error)
return NextResponse.json({ error: 'Failed to update credential set' }, { status: 500 })
}
}
export async function DELETE(req: NextRequest, { params }: { params: Promise<{ id: string }> }) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
// Check plan access (team/enterprise) or env var override
const hasAccess = await hasCredentialSetsAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json(
{ error: 'Credential sets require a Team or Enterprise plan' },
{ status: 403 }
)
}
const { id } = await params
try {
const result = await getCredentialSetWithAccess(id, session.user.id)
if (!result) {
return NextResponse.json({ error: 'Credential set not found' }, { status: 404 })
}
if (result.role !== 'admin' && result.role !== 'owner') {
return NextResponse.json({ error: 'Admin or owner permissions required' }, { status: 403 })
}
await db.delete(credentialSetMember).where(eq(credentialSetMember.credentialSetId, id))
await db.delete(credentialSet).where(eq(credentialSet.id, id))
logger.info('Deleted credential set', { credentialSetId: id, userId: session.user.id })
return NextResponse.json({ success: true })
} catch (error) {
logger.error('Error deleting credential set', error)
return NextResponse.json({ error: 'Failed to delete credential set' }, { status: 500 })
}
}

View File

@@ -0,0 +1,53 @@
import { db } from '@sim/db'
import { credentialSet, credentialSetInvitation, organization, user } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, gt, isNull, or } from 'drizzle-orm'
import { NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
const logger = createLogger('CredentialSetInvitations')
export async function GET() {
const session = await getSession()
if (!session?.user?.id || !session?.user?.email) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
try {
const invitations = await db
.select({
invitationId: credentialSetInvitation.id,
token: credentialSetInvitation.token,
status: credentialSetInvitation.status,
expiresAt: credentialSetInvitation.expiresAt,
createdAt: credentialSetInvitation.createdAt,
credentialSetId: credentialSet.id,
credentialSetName: credentialSet.name,
providerId: credentialSet.providerId,
organizationId: organization.id,
organizationName: organization.name,
invitedByName: user.name,
invitedByEmail: user.email,
})
.from(credentialSetInvitation)
.innerJoin(credentialSet, eq(credentialSetInvitation.credentialSetId, credentialSet.id))
.innerJoin(organization, eq(credentialSet.organizationId, organization.id))
.leftJoin(user, eq(credentialSetInvitation.invitedBy, user.id))
.where(
and(
or(
eq(credentialSetInvitation.email, session.user.email),
isNull(credentialSetInvitation.email)
),
eq(credentialSetInvitation.status, 'pending'),
gt(credentialSetInvitation.expiresAt, new Date())
)
)
return NextResponse.json({ invitations })
} catch (error) {
logger.error('Error fetching credential set invitations', error)
return NextResponse.json({ error: 'Failed to fetch invitations' }, { status: 500 })
}
}

View File

@@ -0,0 +1,196 @@
import { db } from '@sim/db'
import {
credentialSet,
credentialSetInvitation,
credentialSetMember,
organization,
} from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { syncAllWebhooksForCredentialSet } from '@/lib/webhooks/utils.server'
const logger = createLogger('CredentialSetInviteToken')
export async function GET(req: NextRequest, { params }: { params: Promise<{ token: string }> }) {
const { token } = await params
const [invitation] = await db
.select({
id: credentialSetInvitation.id,
credentialSetId: credentialSetInvitation.credentialSetId,
email: credentialSetInvitation.email,
status: credentialSetInvitation.status,
expiresAt: credentialSetInvitation.expiresAt,
credentialSetName: credentialSet.name,
providerId: credentialSet.providerId,
organizationId: credentialSet.organizationId,
organizationName: organization.name,
})
.from(credentialSetInvitation)
.innerJoin(credentialSet, eq(credentialSetInvitation.credentialSetId, credentialSet.id))
.innerJoin(organization, eq(credentialSet.organizationId, organization.id))
.where(eq(credentialSetInvitation.token, token))
.limit(1)
if (!invitation) {
return NextResponse.json({ error: 'Invitation not found' }, { status: 404 })
}
if (invitation.status !== 'pending') {
return NextResponse.json({ error: 'Invitation is no longer valid' }, { status: 410 })
}
if (new Date() > invitation.expiresAt) {
await db
.update(credentialSetInvitation)
.set({ status: 'expired' })
.where(eq(credentialSetInvitation.id, invitation.id))
return NextResponse.json({ error: 'Invitation has expired' }, { status: 410 })
}
return NextResponse.json({
invitation: {
credentialSetName: invitation.credentialSetName,
organizationName: invitation.organizationName,
providerId: invitation.providerId,
email: invitation.email,
},
})
}
export async function POST(req: NextRequest, { params }: { params: Promise<{ token: string }> }) {
const { token } = await params
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Authentication required' }, { status: 401 })
}
try {
const [invitationData] = await db
.select({
id: credentialSetInvitation.id,
credentialSetId: credentialSetInvitation.credentialSetId,
email: credentialSetInvitation.email,
status: credentialSetInvitation.status,
expiresAt: credentialSetInvitation.expiresAt,
invitedBy: credentialSetInvitation.invitedBy,
providerId: credentialSet.providerId,
})
.from(credentialSetInvitation)
.innerJoin(credentialSet, eq(credentialSetInvitation.credentialSetId, credentialSet.id))
.where(eq(credentialSetInvitation.token, token))
.limit(1)
if (!invitationData) {
return NextResponse.json({ error: 'Invitation not found' }, { status: 404 })
}
const invitation = invitationData
if (invitation.status !== 'pending') {
return NextResponse.json({ error: 'Invitation is no longer valid' }, { status: 410 })
}
if (new Date() > invitation.expiresAt) {
await db
.update(credentialSetInvitation)
.set({ status: 'expired' })
.where(eq(credentialSetInvitation.id, invitation.id))
return NextResponse.json({ error: 'Invitation has expired' }, { status: 410 })
}
const existingMember = await db
.select()
.from(credentialSetMember)
.where(
and(
eq(credentialSetMember.credentialSetId, invitation.credentialSetId),
eq(credentialSetMember.userId, session.user.id)
)
)
.limit(1)
if (existingMember.length > 0) {
return NextResponse.json(
{ error: 'Already a member of this credential set' },
{ status: 409 }
)
}
const now = new Date()
const requestId = crypto.randomUUID().slice(0, 8)
// Use transaction to ensure membership + invitation update + webhook sync are atomic
await db.transaction(async (tx) => {
await tx.insert(credentialSetMember).values({
id: crypto.randomUUID(),
credentialSetId: invitation.credentialSetId,
userId: session.user.id,
status: 'active',
joinedAt: now,
invitedBy: invitation.invitedBy,
createdAt: now,
updatedAt: now,
})
await tx
.update(credentialSetInvitation)
.set({
status: 'accepted',
acceptedAt: now,
acceptedByUserId: session.user.id,
})
.where(eq(credentialSetInvitation.id, invitation.id))
// Clean up all other pending invitations for the same credential set and email
// This prevents duplicate invites from showing up after accepting one
if (invitation.email) {
await tx
.update(credentialSetInvitation)
.set({
status: 'accepted',
acceptedAt: now,
acceptedByUserId: session.user.id,
})
.where(
and(
eq(credentialSetInvitation.credentialSetId, invitation.credentialSetId),
eq(credentialSetInvitation.email, invitation.email),
eq(credentialSetInvitation.status, 'pending')
)
)
}
// Sync webhooks within the transaction
const syncResult = await syncAllWebhooksForCredentialSet(
invitation.credentialSetId,
requestId,
tx
)
logger.info('Synced webhooks after member joined', {
credentialSetId: invitation.credentialSetId,
...syncResult,
})
})
logger.info('Accepted credential set invitation', {
invitationId: invitation.id,
credentialSetId: invitation.credentialSetId,
userId: session.user.id,
})
return NextResponse.json({
success: true,
credentialSetId: invitation.credentialSetId,
providerId: invitation.providerId,
})
} catch (error) {
logger.error('Error accepting invitation', error)
return NextResponse.json({ error: 'Failed to accept invitation' }, { status: 500 })
}
}

View File

@@ -0,0 +1,115 @@
import { db } from '@sim/db'
import { credentialSet, credentialSetMember, organization } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { syncAllWebhooksForCredentialSet } from '@/lib/webhooks/utils.server'
const logger = createLogger('CredentialSetMemberships')
export async function GET() {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
try {
const memberships = await db
.select({
membershipId: credentialSetMember.id,
status: credentialSetMember.status,
joinedAt: credentialSetMember.joinedAt,
credentialSetId: credentialSet.id,
credentialSetName: credentialSet.name,
credentialSetDescription: credentialSet.description,
providerId: credentialSet.providerId,
organizationId: organization.id,
organizationName: organization.name,
})
.from(credentialSetMember)
.innerJoin(credentialSet, eq(credentialSetMember.credentialSetId, credentialSet.id))
.innerJoin(organization, eq(credentialSet.organizationId, organization.id))
.where(eq(credentialSetMember.userId, session.user.id))
return NextResponse.json({ memberships })
} catch (error) {
logger.error('Error fetching credential set memberships', error)
return NextResponse.json({ error: 'Failed to fetch memberships' }, { status: 500 })
}
}
/**
* Leave a credential set (self-revocation).
* Sets status to 'revoked' immediately (blocks execution), then syncs webhooks to clean up.
*/
export async function DELETE(req: NextRequest) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { searchParams } = new URL(req.url)
const credentialSetId = searchParams.get('credentialSetId')
if (!credentialSetId) {
return NextResponse.json({ error: 'credentialSetId is required' }, { status: 400 })
}
try {
const requestId = crypto.randomUUID().slice(0, 8)
// Use transaction to ensure revocation + webhook sync are atomic
await db.transaction(async (tx) => {
// Find and verify membership
const [membership] = await tx
.select()
.from(credentialSetMember)
.where(
and(
eq(credentialSetMember.credentialSetId, credentialSetId),
eq(credentialSetMember.userId, session.user.id)
)
)
.limit(1)
if (!membership) {
throw new Error('Not a member of this credential set')
}
if (membership.status === 'revoked') {
throw new Error('Already left this credential set')
}
// Set status to 'revoked' - this immediately blocks credential from being used
await tx
.update(credentialSetMember)
.set({
status: 'revoked',
updatedAt: new Date(),
})
.where(eq(credentialSetMember.id, membership.id))
// Sync webhooks to remove this user's credential webhooks
const syncResult = await syncAllWebhooksForCredentialSet(credentialSetId, requestId, tx)
logger.info('Synced webhooks after member left', {
credentialSetId,
userId: session.user.id,
...syncResult,
})
})
logger.info('User left credential set', {
credentialSetId,
userId: session.user.id,
})
return NextResponse.json({ success: true })
} catch (error) {
const message = error instanceof Error ? error.message : 'Failed to leave credential set'
logger.error('Error leaving credential set', error)
return NextResponse.json({ error: message }, { status: 500 })
}
}

View File

@@ -0,0 +1,176 @@
import { db } from '@sim/db'
import { credentialSet, credentialSetMember, member, organization, user } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, count, desc, eq } from 'drizzle-orm'
import { NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { hasCredentialSetsAccess } from '@/lib/billing'
const logger = createLogger('CredentialSets')
const createCredentialSetSchema = z.object({
organizationId: z.string().min(1),
name: z.string().trim().min(1).max(100),
description: z.string().max(500).optional(),
providerId: z.enum(['google-email', 'outlook']),
})
export async function GET(req: Request) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
// Check plan access (team/enterprise) or env var override
const hasAccess = await hasCredentialSetsAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json(
{ error: 'Credential sets require a Team or Enterprise plan' },
{ status: 403 }
)
}
const { searchParams } = new URL(req.url)
const organizationId = searchParams.get('organizationId')
if (!organizationId) {
return NextResponse.json({ error: 'organizationId is required' }, { status: 400 })
}
const membership = await db
.select({ id: member.id, role: member.role })
.from(member)
.where(and(eq(member.userId, session.user.id), eq(member.organizationId, organizationId)))
.limit(1)
if (membership.length === 0) {
return NextResponse.json({ error: 'Forbidden' }, { status: 403 })
}
const sets = await db
.select({
id: credentialSet.id,
name: credentialSet.name,
description: credentialSet.description,
providerId: credentialSet.providerId,
createdBy: credentialSet.createdBy,
createdAt: credentialSet.createdAt,
updatedAt: credentialSet.updatedAt,
creatorName: user.name,
creatorEmail: user.email,
})
.from(credentialSet)
.leftJoin(user, eq(credentialSet.createdBy, user.id))
.where(eq(credentialSet.organizationId, organizationId))
.orderBy(desc(credentialSet.createdAt))
const setsWithCounts = await Promise.all(
sets.map(async (set) => {
const [memberCount] = await db
.select({ count: count() })
.from(credentialSetMember)
.where(
and(
eq(credentialSetMember.credentialSetId, set.id),
eq(credentialSetMember.status, 'active')
)
)
return {
...set,
memberCount: memberCount?.count ?? 0,
}
})
)
return NextResponse.json({ credentialSets: setsWithCounts })
}
export async function POST(req: Request) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
// Check plan access (team/enterprise) or env var override
const hasAccess = await hasCredentialSetsAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json(
{ error: 'Credential sets require a Team or Enterprise plan' },
{ status: 403 }
)
}
try {
const body = await req.json()
const { organizationId, name, description, providerId } = createCredentialSetSchema.parse(body)
const membership = await db
.select({ id: member.id, role: member.role })
.from(member)
.where(and(eq(member.userId, session.user.id), eq(member.organizationId, organizationId)))
.limit(1)
const role = membership[0]?.role
if (membership.length === 0 || (role !== 'admin' && role !== 'owner')) {
return NextResponse.json(
{ error: 'Admin or owner permissions required to create credential sets' },
{ status: 403 }
)
}
const orgExists = await db
.select({ id: organization.id })
.from(organization)
.where(eq(organization.id, organizationId))
.limit(1)
if (orgExists.length === 0) {
return NextResponse.json({ error: 'Organization not found' }, { status: 404 })
}
const existingSet = await db
.select({ id: credentialSet.id })
.from(credentialSet)
.where(and(eq(credentialSet.organizationId, organizationId), eq(credentialSet.name, name)))
.limit(1)
if (existingSet.length > 0) {
return NextResponse.json(
{ error: 'A credential set with this name already exists' },
{ status: 409 }
)
}
const now = new Date()
const newCredentialSet = {
id: crypto.randomUUID(),
organizationId,
name,
description: description || null,
providerId,
createdBy: session.user.id,
createdAt: now,
updatedAt: now,
}
await db.insert(credentialSet).values(newCredentialSet)
logger.info('Created credential set', {
credentialSetId: newCredentialSet.id,
organizationId,
userId: session.user.id,
})
return NextResponse.json({ credentialSet: newCredentialSet }, { status: 201 })
} catch (error) {
if (error instanceof z.ZodError) {
return NextResponse.json({ error: error.errors[0].message }, { status: 400 })
}
logger.error('Error creating credential set', error)
return NextResponse.json({ error: 'Failed to create credential set' }, { status: 500 })
}
}
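Note (not part of the diff): a minimal sketch of a POST body that satisfies createCredentialSetSchema from the route above. The '/api/credential-sets' path is an assumption (this compare view omits file paths), and the field values are placeholders.

// Sketch only: a body accepted by createCredentialSetSchema.
const body = {
  organizationId: 'org_123',
  name: 'Support inboxes',
  description: 'Shared mailboxes polled for support requests',
  providerId: 'google-email' as const, // or 'outlook'
}

const res = await fetch('/api/credential-sets', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(body),
})
// Per the route above: 201 with { credentialSet } on success, 403 without Team/Enterprise access,
// 409 if the name already exists in the organization.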

View File

@@ -7,7 +7,7 @@ import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { decryptSecret, encryptSecret } from '@/lib/core/security/encryption'
import { generateRequestId } from '@/lib/core/utils/request'
import type { EnvironmentVariable } from '@/stores/settings/environment/types'
import type { EnvironmentVariable } from '@/stores/settings/environment'
const logger = createLogger('EnvironmentAPI')

View File

@@ -0,0 +1,414 @@
import { randomUUID } from 'crypto'
import { db } from '@sim/db'
import { form, workflow, workflowBlocks } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { addCorsHeaders, validateAuthToken } from '@/lib/core/security/deployment'
import { generateRequestId } from '@/lib/core/utils/request'
import { preprocessExecution } from '@/lib/execution/preprocessing'
import { LoggingSession } from '@/lib/logs/execution/logging-session'
import { createStreamingResponse } from '@/lib/workflows/streaming/streaming'
import { setFormAuthCookie, validateFormAuth } from '@/app/api/form/utils'
import { createErrorResponse, createSuccessResponse } from '@/app/api/workflows/utils'
const logger = createLogger('FormIdentifierAPI')
const formPostBodySchema = z.object({
formData: z.record(z.unknown()).optional(),
password: z.string().optional(),
email: z.string().email('Invalid email format').optional().or(z.literal('')),
})
export const dynamic = 'force-dynamic'
export const runtime = 'nodejs'
/**
* Get the input format schema from the workflow's start block
*/
async function getWorkflowInputSchema(workflowId: string): Promise<any[]> {
try {
const blocks = await db
.select()
.from(workflowBlocks)
.where(eq(workflowBlocks.workflowId, workflowId))
// Find the start block (starter or start_trigger type)
const startBlock = blocks.find(
(block) => block.type === 'starter' || block.type === 'start_trigger'
)
if (!startBlock) {
return []
}
// Extract inputFormat from subBlocks
const subBlocks = startBlock.subBlocks as Record<string, any> | null
if (!subBlocks?.inputFormat?.value) {
return []
}
return Array.isArray(subBlocks.inputFormat.value) ? subBlocks.inputFormat.value : []
} catch (error) {
logger.error('Error fetching workflow input schema:', error)
return []
}
}
export async function POST(
request: NextRequest,
{ params }: { params: Promise<{ identifier: string }> }
) {
const { identifier } = await params
const requestId = generateRequestId()
try {
logger.debug(`[${requestId}] Processing form submission for identifier: ${identifier}`)
let parsedBody
try {
const rawBody = await request.json()
const validation = formPostBodySchema.safeParse(rawBody)
if (!validation.success) {
const errorMessage = validation.error.errors
.map((err) => `${err.path.join('.')}: ${err.message}`)
.join(', ')
logger.warn(`[${requestId}] Validation error: ${errorMessage}`)
return addCorsHeaders(
createErrorResponse(`Invalid request body: ${errorMessage}`, 400),
request
)
}
parsedBody = validation.data
} catch (_error) {
return addCorsHeaders(createErrorResponse('Invalid request body', 400), request)
}
const deploymentResult = await db
.select({
id: form.id,
workflowId: form.workflowId,
userId: form.userId,
isActive: form.isActive,
authType: form.authType,
password: form.password,
allowedEmails: form.allowedEmails,
customizations: form.customizations,
})
.from(form)
.where(eq(form.identifier, identifier))
.limit(1)
if (deploymentResult.length === 0) {
logger.warn(`[${requestId}] Form not found for identifier: ${identifier}`)
return addCorsHeaders(createErrorResponse('Form not found', 404), request)
}
const deployment = deploymentResult[0]
if (!deployment.isActive) {
logger.warn(`[${requestId}] Form is not active: ${identifier}`)
const [workflowRecord] = await db
.select({ workspaceId: workflow.workspaceId })
.from(workflow)
.where(eq(workflow.id, deployment.workflowId))
.limit(1)
const workspaceId = workflowRecord?.workspaceId
if (!workspaceId) {
logger.warn(`[${requestId}] Cannot log: workflow ${deployment.workflowId} has no workspace`)
return addCorsHeaders(
createErrorResponse('This form is currently unavailable', 403),
request
)
}
const executionId = randomUUID()
const loggingSession = new LoggingSession(
deployment.workflowId,
executionId,
'form',
requestId
)
await loggingSession.safeStart({
userId: deployment.userId,
workspaceId,
variables: {},
})
await loggingSession.safeCompleteWithError({
error: {
message: 'This form is currently unavailable. The form has been disabled.',
stackTrace: undefined,
},
traceSpans: [],
})
return addCorsHeaders(createErrorResponse('This form is currently unavailable', 403), request)
}
const authResult = await validateFormAuth(requestId, deployment, request, parsedBody)
if (!authResult.authorized) {
return addCorsHeaders(
createErrorResponse(authResult.error || 'Authentication required', 401),
request
)
}
const { formData, password, email } = parsedBody
// If only authentication credentials are provided (no form data), just return authenticated
if ((password || email) && !formData) {
const response = addCorsHeaders(createSuccessResponse({ authenticated: true }), request)
setFormAuthCookie(response, deployment.id, deployment.authType, deployment.password)
return response
}
if (!formData || Object.keys(formData).length === 0) {
return addCorsHeaders(createErrorResponse('No form data provided', 400), request)
}
const executionId = randomUUID()
const loggingSession = new LoggingSession(deployment.workflowId, executionId, 'form', requestId)
const preprocessResult = await preprocessExecution({
workflowId: deployment.workflowId,
userId: deployment.userId,
triggerType: 'form',
executionId,
requestId,
checkRateLimit: true,
checkDeployment: true,
loggingSession,
})
if (!preprocessResult.success) {
logger.warn(`[${requestId}] Preprocessing failed: ${preprocessResult.error?.message}`)
return addCorsHeaders(
createErrorResponse(
preprocessResult.error?.message || 'Failed to process request',
preprocessResult.error?.statusCode || 500
),
request
)
}
const { actorUserId, workflowRecord } = preprocessResult
const workspaceOwnerId = actorUserId!
const workspaceId = workflowRecord?.workspaceId
if (!workspaceId) {
logger.error(`[${requestId}] Workflow ${deployment.workflowId} has no workspaceId`)
return addCorsHeaders(
createErrorResponse('Workflow has no associated workspace', 500),
request
)
}
try {
const workflowForExecution = {
id: deployment.workflowId,
userId: deployment.userId,
workspaceId,
isDeployed: workflowRecord?.isDeployed ?? false,
variables: (workflowRecord?.variables ?? {}) as Record<string, unknown>,
}
// Pass form data as the workflow input
const workflowInput = {
input: formData,
...formData, // Spread form fields at top level for convenience
}
// Execute workflow using streaming (for consistency with chat)
const stream = await createStreamingResponse({
requestId,
workflow: workflowForExecution,
input: workflowInput,
executingUserId: workspaceOwnerId,
streamConfig: {
selectedOutputs: [],
isSecureMode: true,
workflowTriggerType: 'api', // Use the 'api' trigger type since form submissions behave like API calls
},
executionId,
})
// For forms, we don't stream back; we wait for completion and return success
// Consume the stream to wait for completion
const reader = stream.getReader()
let lastOutput: any = null
try {
while (true) {
const { done, value } = await reader.read()
if (done) break
// Parse SSE data if present
const text = new TextDecoder().decode(value)
const lines = text.split('\n')
for (const line of lines) {
if (line.startsWith('data: ')) {
try {
const data = JSON.parse(line.slice(6))
if (data.type === 'complete' || data.output) {
lastOutput = data.output || data
}
} catch {
// Ignore parse errors
}
}
}
}
} finally {
reader.releaseLock()
}
logger.info(`[${requestId}] Form submission successful for ${identifier}`)
// Return success with customizations for thank you screen
const customizations = deployment.customizations as Record<string, any> | null
return addCorsHeaders(
createSuccessResponse({
success: true,
executionId,
thankYouTitle: customizations?.thankYouTitle || 'Thank you!',
thankYouMessage:
customizations?.thankYouMessage || 'Your response has been submitted successfully.',
}),
request
)
} catch (error: any) {
logger.error(`[${requestId}] Error processing form submission:`, error)
return addCorsHeaders(
createErrorResponse(error.message || 'Failed to process form submission', 500),
request
)
}
} catch (error: any) {
logger.error(`[${requestId}] Error processing form submission:`, error)
return addCorsHeaders(
createErrorResponse(error.message || 'Failed to process form submission', 500),
request
)
}
}
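// A hedged client-side sketch of a submission to the POST handler above. The
// '/api/form/<identifier>' path and the field names inside formData are assumptions for
// illustration; the handler only requires a JSON body matching formPostBodySchema
// ({ formData?, password?, email? }).
async function submitFormExample(identifier: string) {
  const res = await fetch(`/api/form/${identifier}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    credentials: 'include', // sends the form_auth_<formId> cookie when one has been set
    body: JSON.stringify({
      password: 'my-form-password', // only needed for password-protected forms
      formData: { email: 'user@example.com', feedback: 'Hello' },
    }),
  })
  // On success the handler responds via createSuccessResponse with
  // { success, executionId, thankYouTitle, thankYouMessage }.
  return res.json()
}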
export async function GET(
request: NextRequest,
{ params }: { params: Promise<{ identifier: string }> }
) {
const { identifier } = await params
const requestId = generateRequestId()
try {
logger.debug(`[${requestId}] Fetching form info for identifier: ${identifier}`)
const deploymentResult = await db
.select({
id: form.id,
title: form.title,
description: form.description,
customizations: form.customizations,
isActive: form.isActive,
workflowId: form.workflowId,
authType: form.authType,
password: form.password,
allowedEmails: form.allowedEmails,
showBranding: form.showBranding,
})
.from(form)
.where(eq(form.identifier, identifier))
.limit(1)
if (deploymentResult.length === 0) {
logger.warn(`[${requestId}] Form not found for identifier: ${identifier}`)
return addCorsHeaders(createErrorResponse('Form not found', 404), request)
}
const deployment = deploymentResult[0]
if (!deployment.isActive) {
logger.warn(`[${requestId}] Form is not active: ${identifier}`)
return addCorsHeaders(createErrorResponse('This form is currently unavailable', 403), request)
}
// Get the workflow's input schema
const inputSchema = await getWorkflowInputSchema(deployment.workflowId)
const cookieName = `form_auth_${deployment.id}`
const authCookie = request.cookies.get(cookieName)
// If authenticated (via cookie), return full form config
if (
deployment.authType !== 'public' &&
authCookie &&
validateAuthToken(authCookie.value, deployment.id, deployment.password)
) {
return addCorsHeaders(
createSuccessResponse({
id: deployment.id,
title: deployment.title,
description: deployment.description,
customizations: deployment.customizations,
authType: deployment.authType,
showBranding: deployment.showBranding,
inputSchema,
}),
request
)
}
// Check authentication requirement
const authResult = await validateFormAuth(requestId, deployment, request)
if (!authResult.authorized) {
// Return limited info for auth-required forms
logger.info(
`[${requestId}] Authentication required for form: ${identifier}, type: ${deployment.authType}`
)
return addCorsHeaders(
NextResponse.json(
{
success: false,
error: authResult.error || 'Authentication required',
authType: deployment.authType,
title: deployment.title,
customizations: {
primaryColor: (deployment.customizations as any)?.primaryColor,
logoUrl: (deployment.customizations as any)?.logoUrl,
},
},
{ status: 401 }
),
request
)
}
return addCorsHeaders(
createSuccessResponse({
id: deployment.id,
title: deployment.title,
description: deployment.description,
customizations: deployment.customizations,
authType: deployment.authType,
showBranding: deployment.showBranding,
inputSchema,
}),
request
)
} catch (error: any) {
logger.error(`[${requestId}] Error fetching form info:`, error)
return addCorsHeaders(
createErrorResponse(error.message || 'Failed to fetch form information', 500),
request
)
}
}
export async function OPTIONS(request: NextRequest) {
return addCorsHeaders(new NextResponse(null, { status: 204 }), request)
}
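For orientation, a rough sketch of the two payload shapes the GET handler above can produce. The interface names are invented for illustration, nullability is assumed, and the envelope added by createSuccessResponse is not shown.

interface FormInfoResponse {
  id: string
  title: string
  description: string | null
  customizations: Record<string, unknown> | null
  authType: 'public' | 'password' | 'email'
  showBranding: boolean
  inputSchema: unknown[] // from getWorkflowInputSchema
}

interface FormAuthRequiredResponse {
  success: false
  error: string // e.g. 'auth_required_password' or 'auth_required_email'
  authType: 'password' | 'email'
  title: string
  customizations: { primaryColor?: string; logoUrl?: string }
}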

View File

@@ -0,0 +1,233 @@
import { db } from '@sim/db'
import { form } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import type { NextRequest } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { encryptSecret } from '@/lib/core/security/encryption'
import { checkFormAccess, DEFAULT_FORM_CUSTOMIZATIONS } from '@/app/api/form/utils'
import { createErrorResponse, createSuccessResponse } from '@/app/api/workflows/utils'
const logger = createLogger('FormManageAPI')
const fieldConfigSchema = z.object({
name: z.string(),
type: z.string(),
label: z.string(),
description: z.string().optional(),
required: z.boolean().optional(),
})
const updateFormSchema = z.object({
identifier: z
.string()
.min(1, 'Identifier is required')
.max(100, 'Identifier must be 100 characters or less')
.regex(/^[a-z0-9-]+$/, 'Identifier can only contain lowercase letters, numbers, and hyphens')
.optional(),
title: z
.string()
.min(1, 'Title is required')
.max(200, 'Title must be 200 characters or less')
.optional(),
description: z.string().max(1000, 'Description must be 1000 characters or less').optional(),
customizations: z
.object({
primaryColor: z.string().optional(),
welcomeMessage: z
.string()
.max(500, 'Welcome message must be 500 characters or less')
.optional(),
thankYouTitle: z
.string()
.max(100, 'Thank you title must be 100 characters or less')
.optional(),
thankYouMessage: z
.string()
.max(500, 'Thank you message must be 500 characters or less')
.optional(),
logoUrl: z.string().url('Logo URL must be a valid URL').optional().or(z.literal('')),
fieldConfigs: z.array(fieldConfigSchema).optional(),
})
.optional(),
authType: z.enum(['public', 'password', 'email']).optional(),
password: z
.string()
.min(6, 'Password must be at least 6 characters')
.optional()
.or(z.literal('')),
allowedEmails: z.array(z.string()).optional(),
showBranding: z.boolean().optional(),
isActive: z.boolean().optional(),
})
export async function GET(request: NextRequest, { params }: { params: Promise<{ id: string }> }) {
try {
const session = await getSession()
if (!session) {
return createErrorResponse('Unauthorized', 401)
}
const { id } = await params
const { hasAccess, form: formRecord } = await checkFormAccess(id, session.user.id)
if (!hasAccess || !formRecord) {
return createErrorResponse('Form not found or access denied', 404)
}
const { password: _password, ...formWithoutPassword } = formRecord
return createSuccessResponse({
form: {
...formWithoutPassword,
hasPassword: !!formRecord.password,
},
})
} catch (error: any) {
logger.error('Error fetching form:', error)
return createErrorResponse(error.message || 'Failed to fetch form', 500)
}
}
export async function PATCH(request: NextRequest, { params }: { params: Promise<{ id: string }> }) {
try {
const session = await getSession()
if (!session) {
return createErrorResponse('Unauthorized', 401)
}
const { id } = await params
const { hasAccess, form: formRecord } = await checkFormAccess(id, session.user.id)
if (!hasAccess || !formRecord) {
return createErrorResponse('Form not found or access denied', 404)
}
const body = await request.json()
try {
const validatedData = updateFormSchema.parse(body)
const {
identifier,
title,
description,
customizations,
authType,
password,
allowedEmails,
showBranding,
isActive,
} = validatedData
if (identifier && identifier !== formRecord.identifier) {
const existingIdentifier = await db
.select()
.from(form)
.where(eq(form.identifier, identifier))
.limit(1)
if (existingIdentifier.length > 0) {
return createErrorResponse('Identifier already in use', 400)
}
}
if (authType === 'password' && !password && !formRecord.password) {
return createErrorResponse('Password is required when using password protection', 400)
}
if (
authType === 'email' &&
(!allowedEmails || allowedEmails.length === 0) &&
(!formRecord.allowedEmails || (formRecord.allowedEmails as string[]).length === 0)
) {
return createErrorResponse(
'At least one email or domain is required when using email access control',
400
)
}
const updateData: Record<string, any> = {
updatedAt: new Date(),
}
if (identifier !== undefined) updateData.identifier = identifier
if (title !== undefined) updateData.title = title
if (description !== undefined) updateData.description = description
if (showBranding !== undefined) updateData.showBranding = showBranding
if (isActive !== undefined) updateData.isActive = isActive
if (authType !== undefined) updateData.authType = authType
if (allowedEmails !== undefined) updateData.allowedEmails = allowedEmails
if (customizations !== undefined) {
const existingCustomizations = (formRecord.customizations as Record<string, any>) || {}
updateData.customizations = {
...DEFAULT_FORM_CUSTOMIZATIONS,
...existingCustomizations,
...customizations,
}
}
if (password) {
const { encrypted } = await encryptSecret(password)
updateData.password = encrypted
} else if (authType && authType !== 'password') {
updateData.password = null
}
await db.update(form).set(updateData).where(eq(form.id, id))
logger.info(`Form ${id} updated successfully`)
return createSuccessResponse({
message: 'Form updated successfully',
})
} catch (validationError) {
if (validationError instanceof z.ZodError) {
const errorMessage = validationError.errors[0]?.message || 'Invalid request data'
return createErrorResponse(errorMessage, 400, 'VALIDATION_ERROR')
}
throw validationError
}
} catch (error: any) {
logger.error('Error updating form:', error)
return createErrorResponse(error.message || 'Failed to update form', 500)
}
}
export async function DELETE(
request: NextRequest,
{ params }: { params: Promise<{ id: string }> }
) {
try {
const session = await getSession()
if (!session) {
return createErrorResponse('Unauthorized', 401)
}
const { id } = await params
const { hasAccess, form: formRecord } = await checkFormAccess(id, session.user.id)
if (!hasAccess || !formRecord) {
return createErrorResponse('Form not found or access denied', 404)
}
await db.update(form).set({ isActive: false, updatedAt: new Date() }).where(eq(form.id, id))
logger.info(`Form ${id} deleted (soft delete)`)
return createSuccessResponse({
message: 'Form deleted successfully',
})
} catch (error: any) {
logger.error('Error deleting form:', error)
return createErrorResponse(error.message || 'Failed to delete form', 500)
}
}
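For quick reference, a request body that would pass updateFormSchema above; every field is optional and the values are placeholders, not defaults taken from the codebase.

const examplePatchBody = {
  title: 'Customer feedback',
  isActive: true,
  authType: 'password' as const,
  password: 'at-least-6-chars', // re-encrypted server-side; omit it to keep the existing password
  customizations: {
    thankYouTitle: 'Thanks!',
    thankYouMessage: 'We received your response.',
  },
}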

View File

@@ -0,0 +1,214 @@
import { db } from '@sim/db'
import { form } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import type { NextRequest } from 'next/server'
import { v4 as uuidv4 } from 'uuid'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { isDev } from '@/lib/core/config/feature-flags'
import { encryptSecret } from '@/lib/core/security/encryption'
import { getEmailDomain } from '@/lib/core/utils/urls'
import { deployWorkflow } from '@/lib/workflows/persistence/utils'
import {
checkWorkflowAccessForFormCreation,
DEFAULT_FORM_CUSTOMIZATIONS,
} from '@/app/api/form/utils'
import { createErrorResponse, createSuccessResponse } from '@/app/api/workflows/utils'
const logger = createLogger('FormAPI')
const fieldConfigSchema = z.object({
name: z.string(),
type: z.string(),
label: z.string(),
description: z.string().optional(),
required: z.boolean().optional(),
})
const formSchema = z.object({
workflowId: z.string().min(1, 'Workflow ID is required'),
identifier: z
.string()
.min(1, 'Identifier is required')
.max(100, 'Identifier must be 100 characters or less')
.regex(/^[a-z0-9-]+$/, 'Identifier can only contain lowercase letters, numbers, and hyphens'),
title: z.string().min(1, 'Title is required').max(200, 'Title must be 200 characters or less'),
description: z.string().max(1000, 'Description must be 1000 characters or less').optional(),
customizations: z
.object({
primaryColor: z.string().optional(),
welcomeMessage: z
.string()
.max(500, 'Welcome message must be 500 characters or less')
.optional(),
thankYouTitle: z
.string()
.max(100, 'Thank you title must be 100 characters or less')
.optional(),
thankYouMessage: z
.string()
.max(500, 'Thank you message must be 500 characters or less')
.optional(),
logoUrl: z.string().url('Logo URL must be a valid URL').optional().or(z.literal('')),
fieldConfigs: z.array(fieldConfigSchema).optional(),
})
.optional(),
authType: z.enum(['public', 'password', 'email']).default('public'),
password: z
.string()
.min(6, 'Password must be at least 6 characters')
.optional()
.or(z.literal('')),
allowedEmails: z.array(z.string()).optional().default([]),
showBranding: z.boolean().optional().default(true),
})
export async function GET(request: NextRequest) {
try {
const session = await getSession()
if (!session) {
return createErrorResponse('Unauthorized', 401)
}
const deployments = await db.select().from(form).where(eq(form.userId, session.user.id))
return createSuccessResponse({ deployments })
} catch (error: any) {
logger.error('Error fetching form deployments:', error)
return createErrorResponse(error.message || 'Failed to fetch form deployments', 500)
}
}
export async function POST(request: NextRequest) {
try {
const session = await getSession()
if (!session) {
return createErrorResponse('Unauthorized', 401)
}
const body = await request.json()
try {
const validatedData = formSchema.parse(body)
const {
workflowId,
identifier,
title,
description = '',
customizations,
authType = 'public',
password,
allowedEmails = [],
showBranding = true,
} = validatedData
if (authType === 'password' && !password) {
return createErrorResponse('Password is required when using password protection', 400)
}
if (authType === 'email' && (!Array.isArray(allowedEmails) || allowedEmails.length === 0)) {
return createErrorResponse(
'At least one email or domain is required when using email access control',
400
)
}
const existingIdentifier = await db
.select()
.from(form)
.where(eq(form.identifier, identifier))
.limit(1)
if (existingIdentifier.length > 0) {
return createErrorResponse('Identifier already in use', 400)
}
const { hasAccess, workflow: workflowRecord } = await checkWorkflowAccessForFormCreation(
workflowId,
session.user.id
)
if (!hasAccess || !workflowRecord) {
return createErrorResponse('Workflow not found or access denied', 404)
}
const result = await deployWorkflow({
workflowId,
deployedBy: session.user.id,
})
if (!result.success) {
return createErrorResponse(result.error || 'Failed to deploy workflow', 500)
}
logger.info(
`${workflowRecord.isDeployed ? 'Redeployed' : 'Auto-deployed'} workflow ${workflowId} for form (v${result.version})`
)
let encryptedPassword = null
if (authType === 'password' && password) {
const { encrypted } = await encryptSecret(password)
encryptedPassword = encrypted
}
const id = uuidv4()
logger.info('Creating form deployment with values:', {
workflowId,
identifier,
title,
authType,
hasPassword: !!encryptedPassword,
emailCount: allowedEmails?.length || 0,
showBranding,
})
const mergedCustomizations = {
...DEFAULT_FORM_CUSTOMIZATIONS,
...(customizations || {}),
}
await db.insert(form).values({
id,
workflowId,
userId: session.user.id,
identifier,
title,
description: description || '',
customizations: mergedCustomizations,
isActive: true,
authType,
password: encryptedPassword,
allowedEmails: authType === 'email' ? allowedEmails : [],
showBranding,
createdAt: new Date(),
updatedAt: new Date(),
})
const baseDomain = getEmailDomain()
const protocol = isDev ? 'http' : 'https'
const formUrl = `${protocol}://${baseDomain}/form/${identifier}`
logger.info(`Form "${title}" deployed successfully at ${formUrl}`)
return createSuccessResponse({
id,
formUrl,
message: 'Form deployment created successfully',
})
} catch (validationError) {
if (validationError instanceof z.ZodError) {
const errorMessage = validationError.errors[0]?.message || 'Invalid request data'
return createErrorResponse(errorMessage, 400, 'VALIDATION_ERROR')
}
throw validationError
}
} catch (error: any) {
logger.error('Error creating form deployment:', error)
return createErrorResponse(error.message || 'Failed to create form deployment', 500)
}
}
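Similarly, a placeholder body that satisfies formSchema for creating a form deployment; the @domain entry in allowedEmails follows the domain-matching behavior exercised in the form utils tests further down.

const exampleCreateBody = {
  workflowId: 'wf_123', // placeholder id
  identifier: 'customer-feedback', // lowercase letters, numbers, and hyphens only
  title: 'Customer feedback',
  authType: 'email' as const,
  allowedEmails: ['user@example.com', '@company.com'], // exact emails or @domain entries
  showBranding: true,
}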

View File

@@ -0,0 +1,367 @@
import { databaseMock, loggerMock } from '@sim/testing'
import type { NextResponse } from 'next/server'
/**
* Tests for form API utils
*
* @vitest-environment node
*/
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
vi.mock('@sim/db', () => databaseMock)
vi.mock('@sim/logger', () => loggerMock)
const mockDecryptSecret = vi.fn()
vi.mock('@/lib/core/security/encryption', () => ({
decryptSecret: mockDecryptSecret,
}))
vi.mock('@/lib/core/config/feature-flags', () => ({
isDev: true,
isHosted: false,
isProd: false,
}))
vi.mock('@/lib/workspaces/permissions/utils', () => ({
hasAdminPermission: vi.fn(),
}))
describe('Form API Utils', () => {
afterEach(() => {
vi.clearAllMocks()
})
describe('Auth token utils', () => {
it.concurrent('should validate auth tokens', async () => {
const { validateAuthToken } = await import('@/lib/core/security/deployment')
const formId = 'test-form-id'
const type = 'password'
const token = Buffer.from(`${formId}:${type}:${Date.now()}`).toString('base64')
expect(typeof token).toBe('string')
expect(token.length).toBeGreaterThan(0)
const isValid = validateAuthToken(token, formId)
expect(isValid).toBe(true)
const isInvalidForm = validateAuthToken(token, 'wrong-form-id')
expect(isInvalidForm).toBe(false)
})
it.concurrent('should reject expired tokens', async () => {
const { validateAuthToken } = await import('@/lib/core/security/deployment')
const formId = 'test-form-id'
const expiredToken = Buffer.from(
`${formId}:password:${Date.now() - 25 * 60 * 60 * 1000}`
).toString('base64')
const isValid = validateAuthToken(expiredToken, formId)
expect(isValid).toBe(false)
})
it.concurrent('should validate tokens with password hash', async () => {
const { validateAuthToken } = await import('@/lib/core/security/deployment')
const crypto = await import('crypto')
const formId = 'test-form-id'
const encryptedPassword = 'encrypted-password-value'
const pwHash = crypto
.createHash('sha256')
.update(encryptedPassword)
.digest('hex')
.substring(0, 8)
const token = Buffer.from(`${formId}:password:${Date.now()}:${pwHash}`).toString('base64')
const isValid = validateAuthToken(token, formId, encryptedPassword)
expect(isValid).toBe(true)
const isInvalidPassword = validateAuthToken(token, formId, 'different-password')
expect(isInvalidPassword).toBe(false)
})
})
describe('Cookie handling', () => {
it('should set auth cookie correctly', async () => {
const { setFormAuthCookie } = await import('@/app/api/form/utils')
const mockSet = vi.fn()
const mockResponse = {
cookies: {
set: mockSet,
},
} as unknown as NextResponse
const formId = 'test-form-id'
const type = 'password'
setFormAuthCookie(mockResponse, formId, type)
expect(mockSet).toHaveBeenCalledWith({
name: `form_auth_${formId}`,
value: expect.any(String),
httpOnly: true,
secure: false, // Development mode
sameSite: 'lax',
path: '/',
maxAge: 60 * 60 * 24,
})
})
})
describe('CORS handling', () => {
it.concurrent('should add CORS headers for any origin', async () => {
const { addCorsHeaders } = await import('@/lib/core/security/deployment')
const mockRequest = {
headers: {
get: vi.fn().mockReturnValue('http://localhost:3000'),
},
} as any
const mockResponse = {
headers: {
set: vi.fn(),
},
} as unknown as NextResponse
addCorsHeaders(mockResponse, mockRequest)
expect(mockResponse.headers.set).toHaveBeenCalledWith(
'Access-Control-Allow-Origin',
'http://localhost:3000'
)
expect(mockResponse.headers.set).toHaveBeenCalledWith(
'Access-Control-Allow-Credentials',
'true'
)
expect(mockResponse.headers.set).toHaveBeenCalledWith(
'Access-Control-Allow-Methods',
'GET, POST, OPTIONS'
)
expect(mockResponse.headers.set).toHaveBeenCalledWith(
'Access-Control-Allow-Headers',
'Content-Type, X-Requested-With'
)
})
it.concurrent('should not set CORS headers when no origin', async () => {
const { addCorsHeaders } = await import('@/lib/core/security/deployment')
const mockRequest = {
headers: {
get: vi.fn().mockReturnValue(''),
},
} as any
const mockResponse = {
headers: {
set: vi.fn(),
},
} as unknown as NextResponse
addCorsHeaders(mockResponse, mockRequest)
expect(mockResponse.headers.set).not.toHaveBeenCalled()
})
})
describe('Form auth validation', () => {
beforeEach(async () => {
vi.clearAllMocks()
mockDecryptSecret.mockResolvedValue({ decrypted: 'correct-password' })
})
it('should allow access to public forms', async () => {
const { validateFormAuth } = await import('@/app/api/form/utils')
const deployment = {
id: 'form-id',
authType: 'public',
}
const mockRequest = {
cookies: {
get: vi.fn().mockReturnValue(null),
},
} as any
const result = await validateFormAuth('request-id', deployment, mockRequest)
expect(result.authorized).toBe(true)
})
it('should request password auth for GET requests', async () => {
const { validateFormAuth } = await import('@/app/api/form/utils')
const deployment = {
id: 'form-id',
authType: 'password',
}
const mockRequest = {
method: 'GET',
cookies: {
get: vi.fn().mockReturnValue(null),
},
} as any
const result = await validateFormAuth('request-id', deployment, mockRequest)
expect(result.authorized).toBe(false)
expect(result.error).toBe('auth_required_password')
})
it('should validate password for POST requests', async () => {
const { validateFormAuth } = await import('@/app/api/form/utils')
const { decryptSecret } = await import('@/lib/core/security/encryption')
const deployment = {
id: 'form-id',
authType: 'password',
password: 'encrypted-password',
}
const mockRequest = {
method: 'POST',
cookies: {
get: vi.fn().mockReturnValue(null),
},
} as any
const parsedBody = {
password: 'correct-password',
}
const result = await validateFormAuth('request-id', deployment, mockRequest, parsedBody)
expect(decryptSecret).toHaveBeenCalledWith('encrypted-password')
expect(result.authorized).toBe(true)
})
it('should reject incorrect password', async () => {
const { validateFormAuth } = await import('@/app/api/form/utils')
const deployment = {
id: 'form-id',
authType: 'password',
password: 'encrypted-password',
}
const mockRequest = {
method: 'POST',
cookies: {
get: vi.fn().mockReturnValue(null),
},
} as any
const parsedBody = {
password: 'wrong-password',
}
const result = await validateFormAuth('request-id', deployment, mockRequest, parsedBody)
expect(result.authorized).toBe(false)
expect(result.error).toBe('Invalid password')
})
it('should request email auth for email-protected forms', async () => {
const { validateFormAuth } = await import('@/app/api/form/utils')
const deployment = {
id: 'form-id',
authType: 'email',
allowedEmails: ['user@example.com', '@company.com'],
}
const mockRequest = {
method: 'GET',
cookies: {
get: vi.fn().mockReturnValue(null),
},
} as any
const result = await validateFormAuth('request-id', deployment, mockRequest)
expect(result.authorized).toBe(false)
expect(result.error).toBe('auth_required_email')
})
it('should check allowed emails for email auth', async () => {
const { validateFormAuth } = await import('@/app/api/form/utils')
const deployment = {
id: 'form-id',
authType: 'email',
allowedEmails: ['user@example.com', '@company.com'],
}
const mockRequest = {
method: 'POST',
cookies: {
get: vi.fn().mockReturnValue(null),
},
} as any
// Exact email match should authorize
const result1 = await validateFormAuth('request-id', deployment, mockRequest, {
email: 'user@example.com',
})
expect(result1.authorized).toBe(true)
// Domain match should authorize
const result2 = await validateFormAuth('request-id', deployment, mockRequest, {
email: 'other@company.com',
})
expect(result2.authorized).toBe(true)
// Unknown email should not authorize
const result3 = await validateFormAuth('request-id', deployment, mockRequest, {
email: 'user@unknown.com',
})
expect(result3.authorized).toBe(false)
expect(result3.error).toBe('Email not authorized for this form')
})
it('should require password when formData is present without password', async () => {
const { validateFormAuth } = await import('@/app/api/form/utils')
const deployment = {
id: 'form-id',
authType: 'password',
password: 'encrypted-password',
}
const mockRequest = {
method: 'POST',
cookies: {
get: vi.fn().mockReturnValue(null),
},
} as any
const parsedBody = {
formData: { field1: 'value1' },
// No password provided
}
const result = await validateFormAuth('request-id', deployment, mockRequest, parsedBody)
expect(result.authorized).toBe(false)
expect(result.error).toBe('auth_required_password')
})
})
describe('Default customizations', () => {
it.concurrent('should have correct default values', async () => {
const { DEFAULT_FORM_CUSTOMIZATIONS } = await import('@/app/api/form/utils')
expect(DEFAULT_FORM_CUSTOMIZATIONS).toEqual({
welcomeMessage: '',
thankYouTitle: 'Thank you!',
thankYouMessage: 'Your response has been submitted successfully.',
})
})
})
})
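Taken together, these assertions imply a token of the form base64(formId:type:timestamp[:pwHash]) with roughly a 24-hour lifetime, where pwHash is the first 8 hex characters of a SHA-256 over the encrypted password. Below is a sketch reconstructed from the tests, not the actual implementation in @/lib/core/security/deployment.

import { createHash } from 'crypto'

// Reconstruction from the test expectations above; the real generator may differ in details.
function buildFormAuthToken(formId: string, type: string, encryptedPassword?: string): string {
  const parts = [formId, type, Date.now().toString()]
  if (encryptedPassword) {
    // Ties the token to the current password (see the 'tokens with password hash' test)
    parts.push(createHash('sha256').update(encryptedPassword).digest('hex').substring(0, 8))
  }
  return Buffer.from(parts.join(':')).toString('base64')
}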

View File

@@ -0,0 +1,204 @@
import { db } from '@sim/db'
import { form, workflow } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import type { NextRequest, NextResponse } from 'next/server'
import {
isEmailAllowed,
setDeploymentAuthCookie,
validateAuthToken,
} from '@/lib/core/security/deployment'
import { decryptSecret } from '@/lib/core/security/encryption'
import { hasAdminPermission } from '@/lib/workspaces/permissions/utils'
const logger = createLogger('FormAuthUtils')
export function setFormAuthCookie(
response: NextResponse,
formId: string,
type: string,
encryptedPassword?: string | null
): void {
setDeploymentAuthCookie(response, 'form', formId, type, encryptedPassword)
}
/**
* Check whether a user has permission to create a form for a specific workflow.
* Access is granted if the user owns the workflow directly or has admin permission for the workflow's workspace.
*/
export async function checkWorkflowAccessForFormCreation(
workflowId: string,
userId: string
): Promise<{ hasAccess: boolean; workflow?: any }> {
const workflowData = await db.select().from(workflow).where(eq(workflow.id, workflowId)).limit(1)
if (workflowData.length === 0) {
return { hasAccess: false }
}
const workflowRecord = workflowData[0]
if (workflowRecord.userId === userId) {
return { hasAccess: true, workflow: workflowRecord }
}
if (workflowRecord.workspaceId) {
const hasAdmin = await hasAdminPermission(userId, workflowRecord.workspaceId)
if (hasAdmin) {
return { hasAccess: true, workflow: workflowRecord }
}
}
return { hasAccess: false }
}
/**
* Check whether a user has access to view, edit, or delete a specific form.
* Access is granted if the user owns the form directly or has admin permission for the associated workflow's workspace.
*/
export async function checkFormAccess(
formId: string,
userId: string
): Promise<{ hasAccess: boolean; form?: any }> {
const formData = await db
.select({
form: form,
workflowWorkspaceId: workflow.workspaceId,
})
.from(form)
.innerJoin(workflow, eq(form.workflowId, workflow.id))
.where(eq(form.id, formId))
.limit(1)
if (formData.length === 0) {
return { hasAccess: false }
}
const { form: formRecord, workflowWorkspaceId } = formData[0]
if (formRecord.userId === userId) {
return { hasAccess: true, form: formRecord }
}
if (workflowWorkspaceId) {
const hasAdmin = await hasAdminPermission(userId, workflowWorkspaceId)
if (hasAdmin) {
return { hasAccess: true, form: formRecord }
}
}
return { hasAccess: false }
}
export async function validateFormAuth(
requestId: string,
deployment: any,
request: NextRequest,
parsedBody?: any
): Promise<{ authorized: boolean; error?: string }> {
const authType = deployment.authType || 'public'
if (authType === 'public') {
return { authorized: true }
}
const cookieName = `form_auth_${deployment.id}`
const authCookie = request.cookies.get(cookieName)
if (authCookie && validateAuthToken(authCookie.value, deployment.id, deployment.password)) {
return { authorized: true }
}
if (authType === 'password') {
if (request.method === 'GET') {
return { authorized: false, error: 'auth_required_password' }
}
try {
if (!parsedBody) {
return { authorized: false, error: 'Password is required' }
}
const { password, formData } = parsedBody
if (formData && !password) {
return { authorized: false, error: 'auth_required_password' }
}
if (!password) {
return { authorized: false, error: 'Password is required' }
}
if (!deployment.password) {
logger.error(`[${requestId}] No password set for password-protected form: ${deployment.id}`)
return { authorized: false, error: 'Authentication configuration error' }
}
const { decrypted } = await decryptSecret(deployment.password)
if (password !== decrypted) {
return { authorized: false, error: 'Invalid password' }
}
return { authorized: true }
} catch (error) {
logger.error(`[${requestId}] Error validating password:`, error)
return { authorized: false, error: 'Authentication error' }
}
}
if (authType === 'email') {
if (request.method === 'GET') {
return { authorized: false, error: 'auth_required_email' }
}
try {
if (!parsedBody) {
return { authorized: false, error: 'Email is required' }
}
const { email, formData } = parsedBody
if (formData && !email) {
return { authorized: false, error: 'auth_required_email' }
}
if (!email) {
return { authorized: false, error: 'Email is required' }
}
const allowedEmails: string[] = deployment.allowedEmails || []
if (isEmailAllowed(email, allowedEmails)) {
return { authorized: true }
}
return { authorized: false, error: 'Email not authorized for this form' }
} catch (error) {
logger.error(`[${requestId}] Error validating email:`, error)
return { authorized: false, error: 'Authentication error' }
}
}
return { authorized: false, error: 'Unsupported authentication type' }
}
/**
* Form customizations interface
*/
export interface FormCustomizations {
primaryColor?: string
welcomeMessage?: string
thankYouTitle?: string
thankYouMessage?: string
logoUrl?: string
}
/**
* Default form customizations
* Note: primaryColor is intentionally left undefined so the thank-you screen falls back to its green default
*/
export const DEFAULT_FORM_CUSTOMIZATIONS: FormCustomizations = {
welcomeMessage: '',
thankYouTitle: 'Thank you!',
thankYouMessage: 'Your response has been submitted successfully.',
}

View File

@@ -0,0 +1,71 @@
import { db } from '@sim/db'
import { form } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import type { NextRequest } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { createErrorResponse, createSuccessResponse } from '@/app/api/workflows/utils'
const logger = createLogger('FormValidateAPI')
const validateQuerySchema = z.object({
identifier: z
.string()
.min(1, 'Identifier is required')
.regex(/^[a-z0-9-]+$/, 'Identifier can only contain lowercase letters, numbers, and hyphens')
.max(100, 'Identifier must be 100 characters or less'),
})
/**
* GET endpoint to validate form identifier availability
*/
export async function GET(request: NextRequest) {
try {
const session = await getSession()
if (!session?.user?.id) {
return createErrorResponse('Unauthorized', 401)
}
const { searchParams } = new URL(request.url)
const identifier = searchParams.get('identifier')
const validation = validateQuerySchema.safeParse({ identifier })
if (!validation.success) {
const errorMessage = validation.error.errors[0]?.message || 'Invalid identifier'
logger.warn(`Validation error: ${errorMessage}`)
if (identifier && !/^[a-z0-9-]+$/.test(identifier)) {
return createSuccessResponse({
available: false,
error: errorMessage,
})
}
return createErrorResponse(errorMessage, 400)
}
const { identifier: validatedIdentifier } = validation.data
const existingForm = await db
.select({ id: form.id })
.from(form)
.where(eq(form.identifier, validatedIdentifier))
.limit(1)
const isAvailable = existingForm.length === 0
logger.debug(
`Identifier "${validatedIdentifier}" availability check: ${isAvailable ? 'available' : 'taken'}`
)
return createSuccessResponse({
available: isAvailable,
error: isAvailable ? null : 'This identifier is already in use',
})
} catch (error: unknown) {
const message = error instanceof Error ? error.message : 'Failed to validate identifier'
logger.error('Error validating form identifier:', error)
return createErrorResponse(message, 500)
}
}
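A hedged sketch of calling this availability check from the client; the '/api/form/validate' path is inferred from the logger name and is an assumption, not something the diff confirms.

async function checkIdentifierAvailability(identifier: string) {
  // Hypothetical path; the handler answers { available: boolean, error: string | null }
  // via createSuccessResponse and requires an authenticated session.
  const res = await fetch(`/api/form/validate?identifier=${encodeURIComponent(identifier)}`, {
    credentials: 'include',
  })
  return res.json()
}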

View File

@@ -3,6 +3,7 @@
*
* @vitest-environment node
*/
import { loggerMock } from '@sim/testing'
import { NextRequest } from 'next/server'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
import { createMockRequest } from '@/app/api/__test-utils__/utils'
@@ -82,14 +83,7 @@ vi.mock('@/lib/execution/isolated-vm', () => ({
}),
}))
vi.mock('@sim/logger', () => ({
createLogger: vi.fn(() => ({
info: vi.fn(),
error: vi.fn(),
warn: vi.fn(),
debug: vi.fn(),
})),
}))
vi.mock('@sim/logger', () => loggerMock)
vi.mock('@/lib/execution/e2b', () => ({
executeInE2B: vi.fn(),

View File

@@ -21,7 +21,6 @@ export async function POST(req: NextRequest) {
const requestId = generateRequestId()
try {
// Get user session
const session = await getSession()
if (!session?.user?.email) {
logger.warn(`[${requestId}] Unauthorized help request attempt`)
@@ -30,20 +29,20 @@ export async function POST(req: NextRequest) {
const email = session.user.email
// Handle multipart form data
const formData = await req.formData()
// Extract form fields
const subject = formData.get('subject') as string
const message = formData.get('message') as string
const type = formData.get('type') as string
const workflowId = formData.get('workflowId') as string | null
const workspaceId = formData.get('workspaceId') as string
const userAgent = formData.get('userAgent') as string | null
logger.info(`[${requestId}] Processing help request`, {
type,
email: `${email.substring(0, 3)}***`, // Log partial email for privacy
})
// Validate the form data
const validationResult = helpFormSchema.safeParse({
subject,
message,
@@ -60,7 +59,6 @@ export async function POST(req: NextRequest) {
)
}
// Extract images
const images: { filename: string; content: Buffer; contentType: string }[] = []
for (const [key, value] of formData.entries()) {
@@ -81,10 +79,14 @@ export async function POST(req: NextRequest) {
logger.debug(`[${requestId}] Help request includes ${images.length} images`)
// Prepare email content
const userId = session.user.id
let emailText = `
Type: ${type}
From: ${email}
User ID: ${userId}
Workspace ID: ${workspaceId ?? 'N/A'}
Workflow ID: ${workflowId ?? 'N/A'}
Browser: ${userAgent ?? 'N/A'}
${message}
`
@@ -115,7 +117,6 @@ ${message}
logger.info(`[${requestId}] Help request email sent successfully`)
// Send confirmation email to the user
try {
const confirmationHtml = await renderHelpConfirmationEmail(
type as 'bug' | 'feedback' | 'feature_request' | 'other',

View File

@@ -198,15 +198,14 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
`[${requestId}] Starting controlled async processing of ${createdDocuments.length} documents`
)
// Track bulk document upload
try {
const { trackPlatformEvent } = await import('@/lib/core/telemetry')
trackPlatformEvent('platform.knowledge_base.documents_uploaded', {
'knowledge_base.id': knowledgeBaseId,
'documents.count': createdDocuments.length,
'documents.upload_type': 'bulk',
'processing.chunk_size': validatedData.processingOptions.chunkSize,
'processing.recipe': validatedData.processingOptions.recipe,
const { PlatformEvents } = await import('@/lib/core/telemetry')
PlatformEvents.knowledgeBaseDocumentsUploaded({
knowledgeBaseId,
documentsCount: createdDocuments.length,
uploadType: 'bulk',
chunkSize: validatedData.processingOptions.chunkSize,
recipe: validatedData.processingOptions.recipe,
})
} catch (_e) {
// Silently fail
@@ -262,15 +261,14 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
userId
)
// Track single document upload
try {
const { trackPlatformEvent } = await import('@/lib/core/telemetry')
trackPlatformEvent('platform.knowledge_base.documents_uploaded', {
'knowledge_base.id': knowledgeBaseId,
'documents.count': 1,
'documents.upload_type': 'single',
'document.mime_type': validatedData.mimeType,
'document.file_size': validatedData.fileSize,
const { PlatformEvents } = await import('@/lib/core/telemetry')
PlatformEvents.knowledgeBaseDocumentsUploaded({
knowledgeBaseId,
documentsCount: 1,
uploadType: 'single',
mimeType: validatedData.mimeType,
fileSize: validatedData.fileSize,
})
} catch (_e) {
// Silently fail

View File

@@ -2,6 +2,7 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { PlatformEvents } from '@/lib/core/telemetry'
import { generateRequestId } from '@/lib/core/utils/request'
import {
deleteKnowledgeBase,
@@ -183,6 +184,14 @@ export async function DELETE(
await deleteKnowledgeBase(id, requestId)
try {
PlatformEvents.knowledgeBaseDeleted({
knowledgeBaseId: id,
})
} catch {
// Telemetry should not fail the operation
}
logger.info(`[${requestId}] Knowledge base deleted: ${id} for user ${session.user.id}`)
return NextResponse.json({

View File

@@ -2,6 +2,7 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { PlatformEvents } from '@/lib/core/telemetry'
import { generateRequestId } from '@/lib/core/utils/request'
import { createKnowledgeBase, getKnowledgeBases } from '@/lib/knowledge/service'
@@ -94,6 +95,16 @@ export async function POST(req: NextRequest) {
const newKnowledgeBase = await createKnowledgeBase(createData, requestId)
try {
PlatformEvents.knowledgeBaseCreated({
knowledgeBaseId: newKnowledgeBase.id,
name: validatedData.name,
workspaceId: validatedData.workspaceId,
})
} catch {
// Telemetry should not fail the operation
}
logger.info(
`[${requestId}] Knowledge base created: ${newKnowledgeBase.id} for user ${session.user.id}`
)

View File

@@ -5,6 +5,7 @@
*
* @vitest-environment node
*/
import { createEnvMock } from '@sim/testing'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
import {
createMockRequest,
@@ -26,13 +27,7 @@ vi.mock('drizzle-orm', () => ({
mockKnowledgeSchemas()
vi.mock('@/lib/core/config/env', () => ({
env: {
OPENAI_API_KEY: 'test-api-key',
},
isTruthy: (value: string | boolean | number | undefined) =>
typeof value === 'string' ? value === 'true' || value === '1' : Boolean(value),
}))
vi.mock('@/lib/core/config/env', () => createEnvMock({ OPENAI_API_KEY: 'test-api-key' }))
vi.mock('@/lib/core/utils/request', () => ({
generateRequestId: vi.fn(() => 'test-request-id'),

View File

@@ -1,6 +1,7 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { PlatformEvents } from '@/lib/core/telemetry'
import { generateRequestId } from '@/lib/core/utils/request'
import { ALL_TAG_SLOTS } from '@/lib/knowledge/constants'
import { getDocumentTagDefinitions } from '@/lib/knowledge/tags/service'
@@ -294,6 +295,16 @@ export async function POST(request: NextRequest) {
const documentIds = results.map((result) => result.documentId)
const documentNameMap = await getDocumentNamesByIds(documentIds)
try {
PlatformEvents.knowledgeBaseSearched({
knowledgeBaseId: accessibleKbIds[0],
resultsCount: results.length,
workspaceId: workspaceId || undefined,
})
} catch {
// Telemetry should not fail the operation
}
return NextResponse.json({
success: true,
data: {

View File

@@ -4,17 +4,15 @@
*
* @vitest-environment node
*/
import { createEnvMock, createMockLogger } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
vi.mock('drizzle-orm')
vi.mock('@sim/logger', () => ({
createLogger: vi.fn(() => ({
info: vi.fn(),
debug: vi.fn(),
warn: vi.fn(),
error: vi.fn(),
})),
const loggerMock = vi.hoisted(() => ({
createLogger: () => createMockLogger(),
}))
vi.mock('drizzle-orm')
vi.mock('@sim/logger', () => loggerMock)
vi.mock('@sim/db')
vi.mock('@/lib/knowledge/documents/utils', () => ({
retryWithExponentialBackoff: (fn: any) => fn(),
@@ -30,12 +28,7 @@ vi.stubGlobal(
})
)
vi.mock('@/lib/core/config/env', () => ({
env: {},
getEnv: (key: string) => process.env[key],
isTruthy: (value: string | boolean | number | undefined) =>
typeof value === 'string' ? value === 'true' || value === '1' : Boolean(value),
}))
vi.mock('@/lib/core/config/env', () => createEnvMock())
import {
generateSearchEmbedding,

View File

@@ -6,6 +6,7 @@
* This file contains unit tests for the knowledge base utility functions,
* including access checks, document processing, and embedding generation.
*/
import { createEnvMock } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
vi.mock('drizzle-orm', () => ({
@@ -15,12 +16,7 @@ vi.mock('drizzle-orm', () => ({
sql: (strings: TemplateStringsArray, ...expr: any[]) => ({ strings, expr }),
}))
vi.mock('@/lib/core/config/env', () => ({
env: { OPENAI_API_KEY: 'test-key' },
getEnv: (key: string) => process.env[key],
isTruthy: (value: string | boolean | number | undefined) =>
typeof value === 'string' ? value === 'true' || value === '1' : Boolean(value),
}))
vi.mock('@/lib/core/config/env', () => createEnvMock({ OPENAI_API_KEY: 'test-key' }))
vi.mock('@/lib/knowledge/documents/utils', () => ({
retryWithExponentialBackoff: (fn: any) => fn(),

View File

@@ -140,12 +140,12 @@ export const POST = withMcpAuth('write')(
)
try {
const { trackPlatformEvent } = await import('@/lib/core/telemetry')
trackPlatformEvent('platform.mcp.server_added', {
'mcp.server_id': serverId,
'mcp.server_name': body.name,
'mcp.transport': body.transport,
'workspace.id': workspaceId,
const { PlatformEvents } = await import('@/lib/core/telemetry')
PlatformEvents.mcpServerAdded({
serverId,
serverName: body.name,
transport: body.transport,
workspaceId,
})
} catch (_e) {
// Silently fail

View File

@@ -194,12 +194,12 @@ export const POST = withMcpAuth('read')(
logger.info(`[${requestId}] Successfully executed tool ${toolName} on server ${serverId}`)
try {
const { trackPlatformEvent } = await import('@/lib/core/telemetry')
trackPlatformEvent('platform.mcp.tool_executed', {
'mcp.server_id': serverId,
'mcp.tool_name': toolName,
'mcp.execution_status': 'success',
'workspace.id': workspaceId,
const { PlatformEvents } = await import('@/lib/core/telemetry')
PlatformEvents.mcpToolExecuted({
serverId,
toolName,
status: 'success',
workspaceId,
})
} catch {
// Telemetry failure is non-critical

View File

@@ -15,8 +15,11 @@ import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getEmailSubject, renderInvitationEmail } from '@/components/emails'
import { getSession } from '@/lib/auth'
import { requireStripeClient } from '@/lib/billing/stripe-client'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { sendEmail } from '@/lib/messaging/email/mailer'
const logger = createLogger('OrganizationInvitation')
@@ -69,6 +72,102 @@ export async function GET(
}
}
// Resend invitation
export async function POST(
_request: NextRequest,
{ params }: { params: Promise<{ id: string; invitationId: string }> }
) {
const { id: organizationId, invitationId } = await params
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
try {
// Verify user is admin/owner
const memberEntry = await db
.select()
.from(member)
.where(and(eq(member.organizationId, organizationId), eq(member.userId, session.user.id)))
.limit(1)
if (memberEntry.length === 0 || !['owner', 'admin'].includes(memberEntry[0].role)) {
return NextResponse.json({ error: 'Forbidden - Admin access required' }, { status: 403 })
}
const orgInvitation = await db
.select()
.from(invitation)
.where(and(eq(invitation.id, invitationId), eq(invitation.organizationId, organizationId)))
.then((rows) => rows[0])
if (!orgInvitation) {
return NextResponse.json({ error: 'Invitation not found' }, { status: 404 })
}
if (orgInvitation.status !== 'pending') {
return NextResponse.json({ error: 'Can only resend pending invitations' }, { status: 400 })
}
const org = await db
.select({ name: organization.name })
.from(organization)
.where(eq(organization.id, organizationId))
.then((rows) => rows[0])
const inviter = await db
.select({ name: user.name })
.from(user)
.where(eq(user.id, session.user.id))
.limit(1)
// Update expiration date
const newExpiresAt = new Date(Date.now() + 7 * 24 * 60 * 60 * 1000) // 7 days
await db
.update(invitation)
.set({ expiresAt: newExpiresAt })
.where(eq(invitation.id, invitationId))
// Send email
const emailHtml = await renderInvitationEmail(
inviter[0]?.name || 'Someone',
org?.name || 'organization',
`${getBaseUrl()}/invite/${invitationId}`
)
const emailResult = await sendEmail({
to: orgInvitation.email,
subject: getEmailSubject('invitation'),
html: emailHtml,
emailType: 'transactional',
})
if (!emailResult.success) {
logger.error('Failed to resend invitation email', {
email: orgInvitation.email,
error: emailResult.message,
})
return NextResponse.json({ error: 'Failed to send invitation email' }, { status: 500 })
}
logger.info('Organization invitation resent', {
organizationId,
invitationId,
resentBy: session.user.id,
email: orgInvitation.email,
})
return NextResponse.json({
success: true,
message: 'Invitation resent successfully',
})
} catch (error) {
logger.error('Error resending organization invitation:', error)
return NextResponse.json({ error: 'Failed to resend invitation' }, { status: 500 })
}
}
export async function PUT(
req: NextRequest,
{ params }: { params: Promise<{ id: string; invitationId: string }> }

View File

@@ -0,0 +1,166 @@
import { db } from '@sim/db'
import { member, permissionGroup, permissionGroupMember } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, inArray } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { hasAccessControlAccess } from '@/lib/billing'
const logger = createLogger('PermissionGroupBulkMembers')
async function getPermissionGroupWithAccess(groupId: string, userId: string) {
const [group] = await db
.select({
id: permissionGroup.id,
organizationId: permissionGroup.organizationId,
})
.from(permissionGroup)
.where(eq(permissionGroup.id, groupId))
.limit(1)
if (!group) return null
const [membership] = await db
.select({ role: member.role })
.from(member)
.where(and(eq(member.userId, userId), eq(member.organizationId, group.organizationId)))
.limit(1)
if (!membership) return null
return { group, role: membership.role }
}
const bulkAddSchema = z.object({
userIds: z.array(z.string()).optional(),
addAllOrgMembers: z.boolean().optional(),
})
export async function POST(req: NextRequest, { params }: { params: Promise<{ id: string }> }) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { id } = await params
try {
const hasAccess = await hasAccessControlAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json(
{ error: 'Access Control is an Enterprise feature' },
{ status: 403 }
)
}
const result = await getPermissionGroupWithAccess(id, session.user.id)
if (!result) {
return NextResponse.json({ error: 'Permission group not found' }, { status: 404 })
}
if (result.role !== 'admin' && result.role !== 'owner') {
return NextResponse.json({ error: 'Admin or owner permissions required' }, { status: 403 })
}
const body = await req.json()
const { userIds, addAllOrgMembers } = bulkAddSchema.parse(body)
let targetUserIds: string[] = []
if (addAllOrgMembers) {
const orgMembers = await db
.select({ userId: member.userId })
.from(member)
.where(eq(member.organizationId, result.group.organizationId))
targetUserIds = orgMembers.map((m) => m.userId)
} else if (userIds && userIds.length > 0) {
const validMembers = await db
.select({ userId: member.userId })
.from(member)
.where(
and(
eq(member.organizationId, result.group.organizationId),
inArray(member.userId, userIds)
)
)
targetUserIds = validMembers.map((m) => m.userId)
}
if (targetUserIds.length === 0) {
return NextResponse.json({ added: 0, moved: 0 })
}
const existingMemberships = await db
.select({
id: permissionGroupMember.id,
userId: permissionGroupMember.userId,
permissionGroupId: permissionGroupMember.permissionGroupId,
})
.from(permissionGroupMember)
.where(inArray(permissionGroupMember.userId, targetUserIds))
const alreadyInThisGroup = new Set(
existingMemberships.filter((m) => m.permissionGroupId === id).map((m) => m.userId)
)
const usersToAdd = targetUserIds.filter((uid) => !alreadyInThisGroup.has(uid))
if (usersToAdd.length === 0) {
return NextResponse.json({ added: 0, moved: 0 })
}
const membershipsToDelete = existingMemberships.filter(
(m) => m.permissionGroupId !== id && usersToAdd.includes(m.userId)
)
const movedCount = membershipsToDelete.length
await db.transaction(async (tx) => {
if (membershipsToDelete.length > 0) {
await tx.delete(permissionGroupMember).where(
inArray(
permissionGroupMember.id,
membershipsToDelete.map((m) => m.id)
)
)
}
const newMembers = usersToAdd.map((userId) => ({
id: crypto.randomUUID(),
permissionGroupId: id,
userId,
assignedBy: session.user.id,
assignedAt: new Date(),
}))
await tx.insert(permissionGroupMember).values(newMembers)
})
logger.info('Bulk added members to permission group', {
permissionGroupId: id,
addedCount: usersToAdd.length,
movedCount,
assignedBy: session.user.id,
})
return NextResponse.json({ added: usersToAdd.length, moved: movedCount })
} catch (error) {
if (error instanceof z.ZodError) {
return NextResponse.json({ error: error.errors[0].message }, { status: 400 })
}
if (
error instanceof Error &&
error.message.includes('permission_group_member_user_id_unique')
) {
return NextResponse.json(
{ error: 'One or more users are already in a permission group' },
{ status: 409 }
)
}
logger.error('Error bulk adding members to permission group', error)
return NextResponse.json({ error: 'Failed to add members' }, { status: 500 })
}
}
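For reference, the two request shapes bulkAddSchema accepts and the counts the handler returns; the user ids are placeholders.

// Either add specific organization members...
const addSelected = { userIds: ['user_1', 'user_2'] }
// ...or sweep every org member into this group.
const addEveryone = { addAllOrgMembers: true }

// The handler responds with e.g. { added: 2, moved: 1 }, where `moved` counts users who
// were pulled out of a different permission group as part of the add.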

View File

@@ -0,0 +1,229 @@
import { db } from '@sim/db'
import { member, permissionGroup, permissionGroupMember, user } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { hasAccessControlAccess } from '@/lib/billing'
const logger = createLogger('PermissionGroupMembers')
async function getPermissionGroupWithAccess(groupId: string, userId: string) {
const [group] = await db
.select({
id: permissionGroup.id,
organizationId: permissionGroup.organizationId,
})
.from(permissionGroup)
.where(eq(permissionGroup.id, groupId))
.limit(1)
if (!group) return null
const [membership] = await db
.select({ role: member.role })
.from(member)
.where(and(eq(member.userId, userId), eq(member.organizationId, group.organizationId)))
.limit(1)
if (!membership) return null
return { group, role: membership.role }
}
export async function GET(req: NextRequest, { params }: { params: Promise<{ id: string }> }) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { id } = await params
const result = await getPermissionGroupWithAccess(id, session.user.id)
if (!result) {
return NextResponse.json({ error: 'Permission group not found' }, { status: 404 })
}
const members = await db
.select({
id: permissionGroupMember.id,
userId: permissionGroupMember.userId,
assignedAt: permissionGroupMember.assignedAt,
userName: user.name,
userEmail: user.email,
userImage: user.image,
})
.from(permissionGroupMember)
.leftJoin(user, eq(permissionGroupMember.userId, user.id))
.where(eq(permissionGroupMember.permissionGroupId, id))
return NextResponse.json({ members })
}
const addMemberSchema = z.object({
userId: z.string().min(1),
})
export async function POST(req: NextRequest, { params }: { params: Promise<{ id: string }> }) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { id } = await params
try {
const hasAccess = await hasAccessControlAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json(
{ error: 'Access Control is an Enterprise feature' },
{ status: 403 }
)
}
const result = await getPermissionGroupWithAccess(id, session.user.id)
if (!result) {
return NextResponse.json({ error: 'Permission group not found' }, { status: 404 })
}
if (result.role !== 'admin' && result.role !== 'owner') {
return NextResponse.json({ error: 'Admin or owner permissions required' }, { status: 403 })
}
const body = await req.json()
const { userId } = addMemberSchema.parse(body)
const [orgMember] = await db
.select({ id: member.id })
.from(member)
.where(and(eq(member.userId, userId), eq(member.organizationId, result.group.organizationId)))
.limit(1)
if (!orgMember) {
return NextResponse.json(
{ error: 'User is not a member of this organization' },
{ status: 400 }
)
}
const [existingMembership] = await db
.select({
id: permissionGroupMember.id,
permissionGroupId: permissionGroupMember.permissionGroupId,
})
.from(permissionGroupMember)
.where(eq(permissionGroupMember.userId, userId))
.limit(1)
if (existingMembership?.permissionGroupId === id) {
return NextResponse.json(
{ error: 'User is already in this permission group' },
{ status: 409 }
)
}
const newMember = await db.transaction(async (tx) => {
if (existingMembership) {
await tx
.delete(permissionGroupMember)
.where(eq(permissionGroupMember.id, existingMembership.id))
}
const memberData = {
id: crypto.randomUUID(),
permissionGroupId: id,
userId,
assignedBy: session.user.id,
assignedAt: new Date(),
}
await tx.insert(permissionGroupMember).values(memberData)
return memberData
})
logger.info('Added member to permission group', {
permissionGroupId: id,
userId,
assignedBy: session.user.id,
})
return NextResponse.json({ member: newMember }, { status: 201 })
} catch (error) {
if (error instanceof z.ZodError) {
return NextResponse.json({ error: error.errors[0].message }, { status: 400 })
}
if (
error instanceof Error &&
error.message.includes('permission_group_member_user_id_unique')
) {
return NextResponse.json({ error: 'User is already in a permission group' }, { status: 409 })
}
logger.error('Error adding member to permission group', error)
return NextResponse.json({ error: 'Failed to add member' }, { status: 500 })
}
}
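// Removes a membership (identified by memberId) from the group. Requires the
// caller to be an org admin or owner and the Access Control enterprise feature.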
export async function DELETE(req: NextRequest, { params }: { params: Promise<{ id: string }> }) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { id } = await params
const { searchParams } = new URL(req.url)
const memberId = searchParams.get('memberId')
if (!memberId) {
return NextResponse.json({ error: 'memberId is required' }, { status: 400 })
}
try {
const hasAccess = await hasAccessControlAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json(
{ error: 'Access Control is an Enterprise feature' },
{ status: 403 }
)
}
const result = await getPermissionGroupWithAccess(id, session.user.id)
if (!result) {
return NextResponse.json({ error: 'Permission group not found' }, { status: 404 })
}
if (result.role !== 'admin' && result.role !== 'owner') {
return NextResponse.json({ error: 'Admin or owner permissions required' }, { status: 403 })
}
const [memberToRemove] = await db
.select()
.from(permissionGroupMember)
.where(
and(eq(permissionGroupMember.id, memberId), eq(permissionGroupMember.permissionGroupId, id))
)
.limit(1)
if (!memberToRemove) {
return NextResponse.json({ error: 'Member not found' }, { status: 404 })
}
await db.delete(permissionGroupMember).where(eq(permissionGroupMember.id, memberId))
logger.info('Removed member from permission group', {
permissionGroupId: id,
memberId,
userId: session.user.id,
})
return NextResponse.json({ success: true })
} catch (error) {
logger.error('Error removing member from permission group', error)
return NextResponse.json({ error: 'Failed to remove member' }, { status: 500 })
}
}
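
A minimal client-side sketch of calling the single-member routes above, assuming the file is mounted at /api/permission-groups/{id}/members (the filename is not shown in this diff); the { userId } body, the ?memberId= query parameter, and the response shapes mirror the handlers.

// Hypothetical client helpers for the single-member routes in the file above.
// The route path is an assumption; POST expects { userId } and returns 201 with
// { member }, DELETE takes ?memberId= and returns { success: true }.
async function addMember(groupId: string, userId: string) {
  const res = await fetch(`/api/permission-groups/${groupId}/members`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ userId }),
  })
  const data = await res.json()
  if (!res.ok) throw new Error(data.error ?? 'Failed to add member')
  return data.member
}

async function removeMember(groupId: string, memberId: string) {
  const res = await fetch(
    `/api/permission-groups/${groupId}/members?memberId=${encodeURIComponent(memberId)}`,
    { method: 'DELETE' }
  )
  const data = await res.json()
  if (!res.ok) throw new Error(data.error ?? 'Failed to remove member')
  return data.success as boolean
}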

Some files were not shown because too many files have changed in this diff.