Compare commits

...

26 Commits

Author SHA1 Message Date
Waleed
f415e5edc4 v0.5.55: polling groups, bedrock provider, devcontainer fixes, workflow preview enhancements 2026-01-08 23:36:56 -08:00
Adam Gough
13981549d1 fix(grain): grain trigger update (#2739)
* grain trigger new requirements

* removed comment

* made it generic for all triggers

* fire only for specific trigger type

* removed comments
2026-01-08 23:10:11 -08:00
Waleed
554dcdf062 improvement(execution-snapshot): enhance workflow preview in logs and deploy modal (#2742)
* added larger live deployment preview

* edited subblock UI

* removed comments

* removed carrot

* updated styling to match existing subblocks

* enriched workflow preview

* fix connection in log preview

* cleanup

* ack PR comments

* more PR comments

* more

* cleanup

* use reactquery cache in deploy modal

* ack comments

* ack PR comment

---------

Co-authored-by: aadamgough <adam@sim.ai>
2026-01-08 23:04:54 -08:00
Adam Gough
6b28742b68 fix(linear): missing params (#2740)
* added missing params

* fixed linear bugs
2026-01-08 20:42:09 -08:00
Vikhyath Mondreti
e5c95093f6 improvement(autoconnect): click to add paths also autoconnect (#2737) 2026-01-08 18:16:15 -08:00
Waleed
b87af80bff feat(i18n): update translations (#2732)
Co-authored-by: icecrasher321 <icecrasher321@users.noreply.github.com>
2026-01-08 18:14:00 -08:00
Vikhyath Mondreti
c2180bf8a0 improvement(enterprise): feature flagging + runtime checks consolidation (#2730)
* improvement(enterprise): enterprise checks code consolidation

* update docs

* revert isHosted check

* add unique index to prevent multiple orgs per user

* address greptile comments

* ui bug
2026-01-08 13:53:22 -08:00
Waleed
fdac4314d2 fix(chat): update stream to respect all output select objects (#2729) 2026-01-08 11:54:07 -08:00
Waleed
a54fcbc094 improvement(auth): added ability to inject secrets to kubernetes, server-side ff to disable email registration (#2728)
* improvement(auth): added ability to inject secrets to kubernetes, server-side ff to disable email registration

* consolidated telemetry events

* comments cleanup

* ack PR comment

* refactor to use createEnvMock helper instead of local mocks
2026-01-08 11:09:35 -08:00
Waleed
05904a73b2 feat(i18n): update translations (#2721)
Co-authored-by: icecrasher321 <icecrasher321@users.noreply.github.com>
2026-01-08 10:30:53 -08:00
Lakshman Patel
1b22d2ce81 fix(devcontainer): use bunx for concurrently command (#2723) 2026-01-07 21:20:29 -08:00
Waleed
26dff7cffe feat(bedrock): added aws bedrock as a model provider (#2722) 2026-01-07 20:08:03 -08:00
Vikhyath Mondreti
020037728d feat(polling-groups): can invite multiple people to have their gmail/outlook inboxes connected to a workflow (#2695)
* progress on cred sets

* fix credential set system

* return data to render credential set in block preview

* progress

* invite flow

* simplify code

* fix ui

* fix tests

* fix types

* fix

* fix icon for outlook

* fix cred set name not showing up for owner

* fix rendering of credential set name

* fix outlook well known folder id resolution

* fix perms for creating cred set

* add to docs and simplify ui

* consolidate webhook code better

* fix tests

* fix credential collab logic issue

* fix ui

* fix lint
2026-01-07 17:49:40 -08:00
Waleed
13a6e6c3fa v0.5.54: seo, model blacklist, helm chart updates, fireflies integration, autoconnect improvements, billing fixes 2026-01-07 16:09:45 -08:00
Vikhyath Mondreti
cb12ceb82c fix(preproc-errors): should not charge base execution cost in this case (#2719)
* fix(preproc-errors): should not charge base execution cost in this case

* remove comment
2026-01-07 15:32:37 -08:00
Waleed
0f32310ba6 feat(i18n): update translations (#2717)
* feat(i18n): update translations

* fixed chinese docs

---------

Co-authored-by: waleedlatif1 <waleedlatif1@users.noreply.github.com>
2026-01-07 14:28:58 -08:00
Vikhyath Mondreti
730ddf5a66 ui improvements for deploy mcp (#2718) 2026-01-07 14:25:03 -08:00
Waleed
ef4bec2c37 improvement(context-menu): added awareness for chat and variables being open, fixed select calculation to match height calculation for selecting multiple blocks (#2715) 2026-01-07 13:40:53 -08:00
Waleed
2bd27f9a4d feat(fireflies): added fireflies tools and trigger (#2713)
* feat(fireflies): added fireflies tools and trigger

* finished fireflies

* added wandConfig to all timestamp subblocks on the platform

* added current time to timestamp wand generation

* fix file upload subblock styling, tested all fireflies ops

* removed dropdown for trigger for fireflies

* updated docs

* added fireflies to formatWebhookInput

* added more wandConfigs
2026-01-07 13:40:36 -08:00
Vikhyath Mondreti
3b4f7d6adb improvement(add-block): intuitive autoconnect + positioning (#2714)
* improvement(add-block): intuitive autoconnect + positioning

* cleanup code
2026-01-07 12:52:12 -08:00
Waleed
142c9a0428 fix(grain): add grain key to idempotency service (#2712)
* fix(grain): add grain key to idempotency service

* fixed dropdown issue for grain, webhook registration
2026-01-07 12:00:32 -08:00
Waleed
9dc02f3728 improvement(helm): added missing optional envvars to helm for whitelabeling (#2711) 2026-01-07 10:56:13 -08:00
Vikhyath Mondreti
833825f04a fix(deploy-check): race condition fixes (#2710) 2026-01-07 10:48:54 -08:00
Waleed
261becd129 feat(blacklist): added ability to blacklist models & providers (#2709)
* feat(blacklist): added ability to blacklist models & providers

* ack PR comments
2026-01-07 10:41:57 -08:00
Waleed
3ecf7a15eb feat(seo): updated out-of-date site metadata, removed unused static assets, updated emails (#2708)
* feat(seo): updated out-of-date site metadata, removed unused static assets, updated emails

* more

* more

* remove unused social photos
2026-01-07 09:38:40 -08:00
Waleed
1420bfb73c fix(resolver): add both new and old workflow blocks for backwards compatibility 2026-01-07 08:03:36 -08:00
339 changed files with 40073 additions and 2919 deletions

View File

@@ -4414,3 +4414,164 @@ export function JiraServiceManagementIcon(props: SVGProps<SVGSVGElement>) {
</svg>
)
}
export function FirefliesIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} xmlns='http://www.w3.org/2000/svg' viewBox='-6 -6 68 68'>
<defs>
<linearGradient
id='fireflies_g1'
gradientUnits='userSpaceOnUse'
x1='144.6644'
y1='-133.7781'
x2='54.3811'
y2='-38.9195'
gradientTransform='matrix(0.8571 0 0 -0.8571 -79.2389 -68.1736)'
>
<stop offset='0' stopColor='#E82A73' />
<stop offset='0.113' stopColor='#DE2D7A' />
<stop offset='0.3' stopColor='#C5388F' />
<stop offset='0.54' stopColor='#9B4AB0' />
<stop offset='0.818' stopColor='#6262DE' />
<stop offset='0.994' stopColor='#3B73FF' />
</linearGradient>
<linearGradient
id='fireflies_g2'
gradientUnits='userSpaceOnUse'
x1='145.1664'
y1='-133.3084'
x2='54.8831'
y2='-38.4499'
gradientTransform='matrix(0.8571 0 0 -0.8571 -79.2389 -68.1736)'
>
<stop offset='0' stopColor='#FF3C82' />
<stop offset='0.103' stopColor='#F53E88' />
<stop offset='0.274' stopColor='#DC4598' />
<stop offset='0.492' stopColor='#B251B2' />
<stop offset='0.745' stopColor='#7961D7' />
<stop offset='0.994' stopColor='#3B73FF' />
</linearGradient>
<linearGradient
id='fireflies_g3'
gradientUnits='userSpaceOnUse'
x1='144.7625'
y1='-123.2011'
x2='114.171'
y2='-12.3403'
gradientTransform='matrix(0.8571 0 0 -0.8571 -79.2389 -68.1736)'
>
<stop offset='0' stopColor='#E82A73' />
<stop offset='0.113' stopColor='#DE2D7A' />
<stop offset='0.3' stopColor='#C5388F' />
<stop offset='0.54' stopColor='#9B4AB0' />
<stop offset='0.818' stopColor='#6262DE' />
<stop offset='0.994' stopColor='#3B73FF' />
</linearGradient>
<linearGradient
id='fireflies_g4'
gradientUnits='userSpaceOnUse'
x1='134.8237'
y1='-132.3271'
x2='25.3098'
y2='-98.9636'
gradientTransform='matrix(0.8571 0 0 -0.8571 -79.2389 -68.1736)'
>
<stop offset='0' stopColor='#E82A73' />
<stop offset='0.113' stopColor='#DE2D7A' />
<stop offset='0.3' stopColor='#C5388F' />
<stop offset='0.54' stopColor='#9B4AB0' />
<stop offset='0.818' stopColor='#6262DE' />
<stop offset='0.994' stopColor='#3B73FF' />
</linearGradient>
<linearGradient
id='fireflies_g5'
gradientUnits='userSpaceOnUse'
x1='82.2078'
y1='-52.7908'
x2='112.8836'
y2='-123.0805'
gradientTransform='matrix(0.8571 0 0 -0.8571 -79.2389 -68.1736)'
>
<stop offset='0' stopColor='#E82A73' />
<stop offset='0.114' stopColor='#DE286E' />
<stop offset='0.303' stopColor='#C52361' />
<stop offset='0.544' stopColor='#9B1A4D' />
<stop offset='0.825' stopColor='#620F30' />
<stop offset='0.994' stopColor='#3D081E' />
</linearGradient>
<linearGradient
id='fireflies_g6'
gradientUnits='userSpaceOnUse'
x1='107.6542'
y1='-78.5296'
x2='138.33'
y2='-148.8194'
gradientTransform='matrix(0.8571 0 0 -0.8571 -79.2389 -68.1736)'
>
<stop offset='0' stopColor='#E82A73' />
<stop offset='0.114' stopColor='#DE286E' />
<stop offset='0.303' stopColor='#C52361' />
<stop offset='0.544' stopColor='#9B1A4D' />
<stop offset='0.825' stopColor='#620F30' />
<stop offset='0.994' stopColor='#3D081E' />
</linearGradient>
<linearGradient
id='fireflies_g7'
gradientUnits='userSpaceOnUse'
x1='70.8311'
y1='-99.3209'
x2='140.3046'
y2='-145.474'
gradientTransform='matrix(0.8571 0 0 -0.8571 -79.2389 -68.1736)'
>
<stop offset='0' stopColor='#E82A73' />
<stop offset='0.114' stopColor='#DE286E' />
<stop offset='0.303' stopColor='#C52361' />
<stop offset='0.544' stopColor='#9B1A4D' />
<stop offset='0.825' stopColor='#620F30' />
<stop offset='0.994' stopColor='#3D081E' />
</linearGradient>
<linearGradient
id='fireflies_g8'
gradientUnits='userSpaceOnUse'
x1='297.6904'
y1='-1360.8851'
x2='309.5946'
y2='-1454.8754'
gradientTransform='matrix(0.8571 0 0 -0.8571 -79.2389 -68.1736)'
>
<stop offset='0' stopColor='#E82A73' />
<stop offset='0.114' stopColor='#DE286E' />
<stop offset='0.303' stopColor='#C52361' />
<stop offset='0.544' stopColor='#9B1A4D' />
<stop offset='0.825' stopColor='#620F30' />
<stop offset='0.994' stopColor='#3D081E' />
</linearGradient>
</defs>
<g>
<path fill='url(#fireflies_g1)' d='M18.4,0H0v18.3h18.4V0z' />
<path fill='url(#fireflies_g2)' d='M40.2,22.1H21.8v18.3h18.4V22.1z' />
<path
fill='url(#fireflies_g3)'
d='M40.2,0H21.8v18.3H56v-2.6c0-4.2-1.7-8.1-4.6-11.1C48.4,1.7,44.4,0,40.2,0L40.2,0z'
/>
<path
fill='url(#fireflies_g4)'
d='M0,22.1v18.3c0,4.2,1.7,8.1,4.6,11.1c3,2.9,7,4.6,11.2,4.6h2.6V22.1H0z'
/>
<path fill='url(#fireflies_g5)' opacity='0.18' d='M0,0l18.4,18.3H0V0z' />
<path fill='url(#fireflies_g6)' opacity='0.18' d='M21.8,22.1l18.4,18.3H21.8V22.1z' />
<path
fill='url(#fireflies_g7)'
opacity='0.18'
d='M0,40.3c0,4.2,1.7,8.1,4.6,11.1c3,2.9,7,4.6,11.2,4.6h2.6V22.1L0,40.3z'
/>
<path
fill='url(#fireflies_g8)'
opacity='0.18'
d='M40.2,0c4.2,0,8.2,1.7,11.2,4.6c3,2.9,4.6,6.9,4.6,11.1v2.6H21.8L40.2,0z'
/>
</g>
</svg>
)
}

View File

@@ -28,6 +28,7 @@ import {
ExaAIIcon,
EyeIcon,
FirecrawlIcon,
FirefliesIcon,
GithubIcon,
GitLabIcon,
GmailIcon,
@@ -147,6 +148,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
exa: ExaAIIcon,
file: DocumentIcon,
firecrawl: FirecrawlIcon,
fireflies: FirefliesIcon,
github: GithubIcon,
gitlab: GitLabIcon,
gmail: GmailIcon,

View File

@@ -0,0 +1,76 @@
---
title: Enterprise
description: Enterprise-Funktionen für Organisationen mit erweiterten
Sicherheits- und Compliance-Anforderungen
---
import { Callout } from 'fumadocs-ui/components/callout'
Sim Studio Enterprise bietet erweiterte Funktionen für Organisationen mit erhöhten Sicherheits-, Compliance- und Verwaltungsanforderungen.
---
## Bring Your Own Key (BYOK)
Verwenden Sie Ihre eigenen API-Schlüssel für KI-Modellanbieter anstelle der gehosteten Schlüssel von Sim Studio.
### Unterstützte Anbieter
| Anbieter | Verwendung |
|----------|-------|
| OpenAI | Knowledge Base-Embeddings, Agent-Block |
| Anthropic | Agent-Block |
| Google | Agent-Block |
| Mistral | Knowledge Base OCR |
### Einrichtung
1. Navigieren Sie zu **Einstellungen** → **BYOK** in Ihrem Workspace
2. Klicken Sie auf **Schlüssel hinzufügen** für Ihren Anbieter
3. Geben Sie Ihren API-Schlüssel ein und speichern Sie
<Callout type="warn">
BYOK-Schlüssel werden verschlüsselt gespeichert. Nur Organisationsadministratoren und -inhaber können Schlüssel verwalten.
</Callout>
Wenn konfiguriert, verwenden Workflows Ihren Schlüssel anstelle der gehosteten Schlüssel von Sim Studio. Bei Entfernung wechseln Workflows automatisch zu den gehosteten Schlüsseln zurück.
---
## Single Sign-On (SSO)
Enterprise-Authentifizierung mit SAML 2.0- und OIDC-Unterstützung für zentralisiertes Identitätsmanagement.
### Unterstützte Anbieter
- Okta
- Azure AD / Entra ID
- Google Workspace
- OneLogin
- Jeder SAML 2.0- oder OIDC-Anbieter
### Einrichtung
1. Navigieren Sie zu **Einstellungen** → **SSO** in Ihrem Workspace
2. Wählen Sie Ihren Identitätsanbieter
3. Konfigurieren Sie die Verbindung mithilfe der Metadaten Ihres IdP
4. Aktivieren Sie SSO für Ihre Organisation
<Callout type="info">
Sobald SSO aktiviert ist, authentifizieren sich Teammitglieder über Ihren Identitätsanbieter anstelle von E-Mail/Passwort.
</Callout>
---
## Self-Hosted
Für selbst gehostete Bereitstellungen können Enterprise-Funktionen über Umgebungsvariablen aktiviert werden:
| Variable | Beschreibung |
|----------|-------------|
| `SSO_ENABLED`, `NEXT_PUBLIC_SSO_ENABLED` | Single Sign-On mit SAML/OIDC |
| `CREDENTIAL_SETS_ENABLED`, `NEXT_PUBLIC_CREDENTIAL_SETS_ENABLED` | Polling-Gruppen für E-Mail-Trigger |
<Callout type="warn">
BYOK ist nur im gehosteten Sim Studio verfügbar. Selbst gehostete Deployments konfigurieren AI-Provider-Schlüssel direkt über Umgebungsvariablen.
</Callout>

View File

@@ -17,7 +17,7 @@ MCP-Server gruppieren Ihre Workflow-Tools zusammen. Erstellen und verwalten Sie
<Video src="mcp/mcp-server.mp4" width={700} height={450} />
</div>
1. Navigieren Sie zu **Einstellungen → MCP-Server**
1. Navigieren Sie zu **Einstellungen → Bereitgestellte MCPs**
2. Klicken Sie auf **Server erstellen**
3. Geben Sie einen Namen und eine optionale Beschreibung ein
4. Kopieren Sie die Server-URL zur Verwendung in Ihren MCP-Clients
@@ -79,7 +79,7 @@ Füge deinen API-Key-Header (`X-API-Key`) für authentifizierten Zugriff hinzu,
## Server-Verwaltung
In der Server-Detailansicht unter **Einstellungen → MCP-Server** kannst du:
In der Server-Detailansicht unter **Einstellungen → Bereitgestellte MCPs** können Sie:
- **Tools anzeigen**: Alle Workflows sehen, die einem Server hinzugefügt wurden
- **URL kopieren**: Die Server-URL für MCP-Clients abrufen

View File

@@ -27,7 +27,7 @@ MCP-Server stellen Sammlungen von Tools bereit, die Ihre Agenten nutzen können.
</div>
1. Navigieren Sie zu Ihren Workspace-Einstellungen
2. Gehen Sie zum Abschnitt **MCP-Server**
2. Gehen Sie zum Abschnitt **Bereitgestellte MCPs**
3. Klicken Sie auf **MCP-Server hinzufügen**
4. Geben Sie die Server-Konfigurationsdetails ein
5. Speichern Sie die Konfiguration

View File

@@ -0,0 +1,233 @@
---
title: Fireflies
description: Interagieren Sie mit Fireflies.ai-Besprechungstranskripten und -aufzeichnungen
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="fireflies"
color="#100730"
/>
{/* MANUAL-CONTENT-START:intro */}
[Fireflies.ai](https://fireflies.ai/) ist eine Plattform für Besprechungstranskription und -intelligenz, die sich in Sim integriert und es Ihren Agenten ermöglicht, direkt mit Besprechungsaufzeichnungen, Transkripten und Erkenntnissen über No-Code-Automatisierungen zu arbeiten.
Die Fireflies-Integration in Sim bietet Tools für:
- **Besprechungstranskripte auflisten:** Rufen Sie mehrere Besprechungen und deren Zusammenfassungsinformationen für Ihr Team oder Konto ab.
- **Vollständige Transkriptdetails abrufen:** Greifen Sie auf detaillierte Transkripte zu, einschließlich Zusammenfassungen, Aktionspunkten, Themen und Teilnehmeranalysen für jede Besprechung.
- **Audio oder Video hochladen:** Laden Sie Audio-/Videodateien hoch oder geben Sie URLs zur Transkription an; optional können Sie Sprache, Titel und Teilnehmer festlegen und automatisierte Besprechungsnotizen erhalten.
- **Transkripte durchsuchen:** Finden Sie Besprechungen nach Stichwort, Teilnehmer, Moderator oder Zeitraum, um relevante Diskussionen schnell zu lokalisieren.
- **Transkripte löschen:** Entfernen Sie bestimmte Besprechungstranskripte aus Ihrem Fireflies-Workspace.
- **Soundbites (Bites) erstellen:** Extrahieren und markieren Sie wichtige Momente aus Transkripten als Audio- oder Videoclips.
- **Workflows bei Transkriptionsabschluss auslösen:** Aktivieren Sie Sim-Workflows mithilfe des bereitgestellten Webhook-Triggers automatisch, wenn eine Fireflies-Besprechungstranskription abgeschlossen ist; dies ermöglicht Echtzeit-Automatisierungen und Benachrichtigungen basierend auf neuen Besprechungsdaten.
Durch die Kombination dieser Funktionen können Sie Aktionen nach Besprechungen optimieren, strukturierte Erkenntnisse extrahieren, Benachrichtigungen automatisieren, Aufzeichnungen verwalten und benutzerdefinierte Workflows rund um die Anrufe Ihrer Organisation orchestrieren – alles sicher unter Verwendung Ihres API-Schlüssels und Ihrer Fireflies-Anmeldedaten.
{/* MANUAL-CONTENT-END */}
## Nutzungsanweisungen
Integrieren Sie Fireflies.ai in den Workflow. Verwalten Sie Besprechungstranskripte, fügen Sie Bots zu Live-Besprechungen hinzu, erstellen Sie Soundbites und mehr. Kann auch Workflows auslösen, wenn Transkriptionen abgeschlossen sind.
## Tools
### `fireflies_list_transcripts`
Meeting-Transkripte von Fireflies.ai mit optionaler Filterung auflisten
#### Eingabe
| Parameter | Typ | Erforderlich | Beschreibung |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Ja | Fireflies API-Schlüssel |
| `keyword` | string | Nein | Suchbegriff im Meeting-Titel oder Transkript |
| `fromDate` | string | Nein | Transkripte ab diesem Datum filtern \(ISO 8601-Format\) |
| `toDate` | string | Nein | Transkripte bis zu diesem Datum filtern \(ISO 8601-Format\) |
| `hostEmail` | string | Nein | Nach E-Mail-Adresse des Meeting-Hosts filtern |
| `participants` | string | Nein | Nach E-Mail-Adressen der Teilnehmer filtern \(durch Komma getrennt\) |
| `limit` | number | Nein | Maximale Anzahl der zurückzugebenden Transkripte \(max. 50\) |
| `skip` | number | Nein | Anzahl der zu überspringenden Transkripte für Paginierung |
#### Ausgabe
| Parameter | Typ | Beschreibung |
| --------- | ---- | ----------- |
| `transcripts` | array | Liste der Transkripte |
| `count` | number | Anzahl der zurückgegebenen Transkripte |
### `fireflies_get_transcript`
Ein einzelnes Transkript mit vollständigen Details einschließlich Zusammenfassung, Aktionspunkten und Analysen abrufen
#### Eingabe
| Parameter | Typ | Erforderlich | Beschreibung |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Ja | Fireflies API-Schlüssel |
| `transcriptId` | string | Ja | Die abzurufende Transkript-ID |
#### Ausgabe
| Parameter | Typ | Beschreibung |
| --------- | ---- | ----------- |
| `transcript` | object | Das Transkript mit vollständigen Details |
### `fireflies_get_user`
Ruft Benutzerinformationen von Fireflies.ai ab. Gibt den aktuellen Benutzer zurück, wenn keine ID angegeben ist.
#### Eingabe
| Parameter | Typ | Erforderlich | Beschreibung |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Ja | Fireflies-API-Schlüssel |
| `userId` | string | Nein | Abzurufende Benutzer-ID \(optional, Standardwert ist der Inhaber des API-Schlüssels\) |
#### Ausgabe
| Parameter | Typ | Beschreibung |
| --------- | ---- | ----------- |
| `user` | object | Benutzerinformationen |
### `fireflies_list_users`
Listet alle Benutzer in Ihrem Fireflies.ai-Team auf
#### Eingabe
| Parameter | Typ | Erforderlich | Beschreibung |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Ja | Fireflies-API-Schlüssel |
#### Ausgabe
| Parameter | Typ | Beschreibung |
| --------- | ---- | ----------- |
| `users` | array | Liste der Teammitglieder |
### `fireflies_upload_audio`
Lädt eine Audiodatei-URL zur Transkription zu Fireflies.ai hoch
#### Eingabe
| Parameter | Typ | Erforderlich | Beschreibung |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Ja | Fireflies-API-Schlüssel |
| `audioFile` | file | Nein | Audio-/Videodatei zur Transkription hochladen |
| `audioUrl` | string | Nein | Öffentliche HTTPS-URL der Audio-/Videodatei \(MP3, MP4, WAV, M4A, OGG\) |
| `title` | string | Nein | Titel für das Meeting/Transkript |
| `webhook` | string | Nein | Webhook-URL zur Benachrichtigung, wenn die Transkription abgeschlossen ist |
| `language` | string | Nein | Sprachcode für die Transkription \(z. B. „es“ für Spanisch, „de“ für Deutsch\) |
| `attendees` | string | Nein | Teilnehmer im JSON-Format: \[\{"displayName": "Name", "email": "email@example.com"\}\] |
| `clientReferenceId` | string | Nein | Benutzerdefinierte Referenz-ID zur Nachverfolgung |
#### Ausgabe
| Parameter | Typ | Beschreibung |
| --------- | ---- | ----------- |
| `success` | boolean | Ob der Upload erfolgreich war |
| `title` | string | Titel des hochgeladenen Meetings |
| `message` | string | Statusmeldung von Fireflies |
### `fireflies_delete_transcript`
Ein Transkript von Fireflies.ai löschen
#### Eingabe
| Parameter | Typ | Erforderlich | Beschreibung |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Ja | Fireflies API-Schlüssel |
| `transcriptId` | string | Ja | Die zu löschende Transkript-ID |
#### Ausgabe
| Parameter | Typ | Beschreibung |
| --------- | ---- | ----------- |
| `success` | boolean | Ob das Transkript erfolgreich gelöscht wurde |
### `fireflies_add_to_live_meeting`
Fügen Sie den Fireflies.ai-Bot zu einem laufenden Meeting hinzu, um aufzuzeichnen und zu transkribieren
#### Eingabe
| Parameter | Typ | Erforderlich | Beschreibung |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Ja | Fireflies API-Schlüssel |
| `meetingLink` | string | Ja | Gültige Meeting-URL \(Zoom, Google Meet, Microsoft Teams, etc.\) |
| `title` | string | Nein | Titel für das Meeting \(max. 256 Zeichen\) |
| `meetingPassword` | string | Nein | Passwort für das Meeting, falls erforderlich \(max. 32 Zeichen\) |
| `duration` | number | Nein | Meetingdauer in Minuten \(15-120, Standard: 60\) |
| `language` | string | Nein | Sprachcode für die Transkription \(z. B. "en", "es", "de"\) |
#### Ausgabe
| Parameter | Typ | Beschreibung |
| --------- | ---- | ----------- |
| `success` | boolean | Ob der Bot erfolgreich zum Meeting hinzugefügt wurde |
### `fireflies_create_bite`
Erstellen Sie einen Soundbite/Highlight aus einem bestimmten Zeitbereich in einem Transkript
#### Eingabe
| Parameter | Typ | Erforderlich | Beschreibung |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Ja | Fireflies API-Schlüssel |
| `transcriptId` | string | Ja | ID des Transkripts, aus dem der Bite erstellt werden soll |
| `startTime` | number | Ja | Startzeit des Bites in Sekunden |
| `endTime` | number | Ja | Endzeit des Bites in Sekunden |
| `name` | string | Nein | Name für den Bite \(max. 256 Zeichen\) |
| `mediaType` | string | Nein | Medientyp: "video" oder "audio" |
| `summary` | string | Nein | Zusammenfassung für den Bite \(max. 500 Zeichen\) |
#### Ausgabe
| Parameter | Typ | Beschreibung |
| --------- | ---- | ----------- |
| `bite` | object | Details des erstellten Bites |
### `fireflies_list_bites`
Soundbites/Highlights von Fireflies.ai auflisten
#### Eingabe
| Parameter | Typ | Erforderlich | Beschreibung |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Ja | Fireflies API-Schlüssel |
| `transcriptId` | string | Nein | Bites für ein bestimmtes Transkript filtern |
| `mine` | boolean | Nein | Nur Bites zurückgeben, die dem Besitzer des API-Schlüssels gehören \(Standard: true\) |
| `limit` | number | Nein | Maximale Anzahl der zurückzugebenden Bites \(max. 50\) |
| `skip` | number | Nein | Anzahl der zu überspringenden Bites für die Paginierung |
#### Ausgabe
| Parameter | Typ | Beschreibung |
| --------- | ---- | ----------- |
| `bites` | array | Liste der Bites/Soundbites |
### `fireflies_list_contacts`
Alle Kontakte aus Ihren Fireflies.ai-Meetings auflisten
#### Eingabe
| Parameter | Typ | Erforderlich | Beschreibung |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Ja | Fireflies-API-Schlüssel |
#### Ausgabe
| Parameter | Typ | Beschreibung |
| --------- | ---- | ----------- |
| `contacts` | array | Liste der Kontakte aus Meetings |
## Hinweise
- Kategorie: `tools`
- Typ: `fireflies`

View File

@@ -22,7 +22,7 @@ Verwende den Start-Block für alles, was aus dem Editor, deploy-to-API oder depl
<Cards>
<Card title="Start" href="/triggers/start">
Einheitlicher Einstiegspunkt, der Editor-Ausführungen, API-Bereitstellungen und Chat-Bereitstellungen unterstützt
Einheitlicher Einstiegspunkt, der Editor-Ausführungen, API-Deployments und Chat-Deployments unterstützt
</Card>
<Card title="Webhook" href="/triggers/webhook">
Externe Webhook-Payloads empfangen
@@ -33,6 +33,9 @@ Verwende den Start-Block für alles, was aus dem Editor, deploy-to-API oder depl
<Card title="RSS Feed" href="/triggers/rss">
RSS- und Atom-Feeds auf neue Inhalte überwachen
</Card>
<Card title="Email Polling Groups" href="#email-polling-groups">
Team-Gmail- und Outlook-Postfächer überwachen
</Card>
</Cards>
## Schneller Vergleich
@@ -43,6 +46,7 @@ Verwende den Start-Block für alles, was aus dem Editor, deploy-to-API oder depl
| **Schedule** | Timer, der im Schedule-Block verwaltet wird |
| **Webhook** | Bei eingehender HTTP-Anfrage |
| **RSS Feed** | Neues Element im Feed veröffentlicht |
| **Email Polling Groups** | Neue E-Mail in Team-Gmail- oder Outlook-Postfächern empfangen |
> Der Start-Block stellt immer `input`, `conversationId` und `files` Felder bereit. Füge benutzerdefinierte Felder zum Eingabeformat für zusätzliche strukturierte Daten hinzu.
@@ -65,3 +69,25 @@ Wenn du im Editor auf **Run** klickst, wählt Sim automatisch aus, welcher Trigg
Wenn dein Workflow mehrere Trigger hat, wird der Trigger mit der höchsten Priorität ausgeführt. Wenn du beispielsweise sowohl einen Start-Block als auch einen Webhook-Trigger hast, wird beim Klicken auf Run der Start-Block ausgeführt.
**Externe Auslöser mit Mock-Payloads**: Wenn externe Auslöser (Webhooks und Integrationen) manuell ausgeführt werden, generiert Sim automatisch Mock-Payloads basierend auf der erwarteten Datenstruktur des Auslösers. Dies stellt sicher, dass nachgelagerte Blöcke während des Testens Variablen korrekt auflösen können.
## E-Mail-Polling-Gruppen
Polling-Gruppen ermöglichen es Ihnen, die Gmail- oder Outlook-Postfächer mehrerer Teammitglieder mit einem einzigen Trigger zu überwachen. Erfordert einen Team- oder Enterprise-Plan.
**Erstellen einer Polling-Gruppe** (Admin/Owner)
1. Gehen Sie zu **Einstellungen → E-Mail-Polling**
2. Klicken Sie auf **Erstellen** und wählen Sie Gmail oder Outlook
3. Geben Sie einen Namen für die Gruppe ein
**Mitglieder einladen**
1. Klicken Sie auf **Mitglieder hinzufügen** bei Ihrer Polling-Gruppe
2. Geben Sie E-Mail-Adressen ein (durch Komma oder Zeilenumbruch getrennt oder ziehen Sie eine CSV-Datei per Drag & Drop)
3. Klicken Sie auf **Einladungen senden**
Eingeladene erhalten eine E-Mail mit einem Link, um ihr Konto zu verbinden. Sobald die Verbindung hergestellt ist, wird ihr Postfach automatisch in die Polling-Gruppe aufgenommen. Eingeladene müssen keine Mitglieder Ihrer Sim-Organisation sein.
**Verwendung in einem Workflow**
Wählen Sie beim Konfigurieren eines E-Mail-Triggers Ihre Polling-Gruppe aus dem Dropdown-Menü für Anmeldeinformationen anstelle eines einzelnen Kontos aus. Das System erstellt Webhooks für jedes Mitglied und leitet alle E-Mails durch Ihren Workflow.

View File

@@ -0,0 +1,75 @@
---
title: Enterprise
description: Enterprise features for organizations with advanced security and compliance requirements
---
import { Callout } from 'fumadocs-ui/components/callout'
Sim Studio Enterprise provides advanced features for organizations with enhanced security, compliance, and management requirements.
---
## Bring Your Own Key (BYOK)
Use your own API keys for AI model providers instead of Sim Studio's hosted keys.
### Supported Providers
| Provider | Usage |
|----------|-------|
| OpenAI | Knowledge Base embeddings, Agent block |
| Anthropic | Agent block |
| Google | Agent block |
| Mistral | Knowledge Base OCR |
### Setup
1. Navigate to **Settings** → **BYOK** in your workspace
2. Click **Add Key** for your provider
3. Enter your API key and save
<Callout type="warn">
BYOK keys are encrypted at rest. Only organization admins and owners can manage keys.
</Callout>
When configured, workflows use your key instead of Sim Studio's hosted keys. If removed, workflows automatically fall back to hosted keys.
---
## Single Sign-On (SSO)
Enterprise authentication with SAML 2.0 and OIDC support for centralized identity management.
### Supported Providers
- Okta
- Azure AD / Entra ID
- Google Workspace
- OneLogin
- Any SAML 2.0 or OIDC provider
### Setup
1. Navigate to **Settings** → **SSO** in your workspace
2. Choose your identity provider
3. Configure the connection using your IdP's metadata
4. Enable SSO for your organization
<Callout type="info">
Once SSO is enabled, team members authenticate through your identity provider instead of email/password.
</Callout>
---
## Self-Hosted
For self-hosted deployments, enterprise features can be enabled via environment variables:
| Variable | Description |
|----------|-------------|
| `SSO_ENABLED`, `NEXT_PUBLIC_SSO_ENABLED` | Single Sign-On with SAML/OIDC |
| `CREDENTIAL_SETS_ENABLED`, `NEXT_PUBLIC_CREDENTIAL_SETS_ENABLED` | Polling Groups for email triggers |
<Callout type="warn">
BYOK is only available on hosted Sim Studio. Self-hosted deployments configure AI provider keys directly via environment variables.
</Callout>
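As a rough illustration (not code from this PR), the flags in the table above might be read at runtime in a self-hosted deployment roughly as follows; only the variable names come from the table, the helper names are hypothetical.
```typescript
// Hypothetical sketch: gating enterprise features on the documented env vars.
const isTruthy = (value: string | undefined): boolean =>
  value === 'true' || value === '1'

export const enterpriseFlags = {
  // Server-side flags; the NEXT_PUBLIC_ variants would be read in client bundles.
  ssoEnabled: isTruthy(process.env.SSO_ENABLED),
  credentialSetsEnabled: isTruthy(process.env.CREDENTIAL_SETS_ENABLED),
}

// Example guard for an SSO-only code path.
export function assertSsoEnabled(): void {
  if (!enterpriseFlags.ssoEnabled) {
    throw new Error('SSO is not enabled on this deployment')
  }
}
```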

View File

@@ -16,7 +16,7 @@ MCP servers group your workflow tools together. Create and manage them in worksp
<Video src="mcp/mcp-server.mp4" width={700} height={450} />
</div>
1. Navigate to **Settings → MCP Servers**
1. Navigate to **Settings → Deployed MCPs**
2. Click **Create Server**
3. Enter a name and optional description
4. Copy the server URL for use in your MCP clients
@@ -78,7 +78,7 @@ Include your API key header (`X-API-Key`) for authenticated access when using mc
## Server Management
From the server detail view in **Settings → MCP Servers**, you can:
From the server detail view in **Settings → Deployed MCPs**, you can:
- **View tools**: See all workflows added to a server
- **Copy URL**: Get the server URL for MCP clients

View File

@@ -27,7 +27,7 @@ MCP servers provide collections of tools that your agents can use. Configure the
</div>
1. Navigate to your workspace settings
2. Go to the **MCP Servers** section
2. Go to the **Deployed MCPs** section
3. Click **Add MCP Server**
4. Enter the server configuration details
5. Save the configuration

View File

@@ -15,6 +15,7 @@
"permissions",
"sdks",
"self-hosting",
"./enterprise/index",
"./keyboard-shortcuts/index"
],
"defaultOpen": false

View File

@@ -0,0 +1,238 @@
---
title: Fireflies
description: Interact with Fireflies.ai meeting transcripts and recordings
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="fireflies"
color="#100730"
/>
{/* MANUAL-CONTENT-START:intro */}
[Fireflies.ai](https://fireflies.ai/) is a meeting transcription and intelligence platform that integrates with Sim, allowing your agents to work directly with meeting recordings, transcripts, and insights through no-code automations.
The Fireflies integration in Sim provides tools to:
- **List meeting transcripts:** Fetch multiple meetings and their summary information for your team or account.
- **Retrieve full transcript details:** Access detailed transcripts, including summaries, action items, topics, and participant analytics for any meeting.
- **Upload audio or video:** Upload audio/video files or provide URLs for transcription—optionally set language, title, attendees, and receive automated meeting notes.
- **Search transcripts:** Find meetings by keyword, participant, host, or timeframe to quickly locate relevant discussions.
- **Delete transcripts:** Remove specific meeting transcripts from your Fireflies workspace.
- **Create soundbites (Bites):** Extract and highlight key moments from transcripts as audio or video clips.
- **Trigger workflows on transcription completion:** Activate Sim workflows automatically when a Fireflies meeting transcription finishes using the provided webhook trigger—enabling real-time automations and notifications based on new meeting data.
By combining these capabilities, you can streamline post-meeting actions, extract structured insights, automate notifications, manage recordings, and orchestrate custom workflows around your organization's calls—all securely using your API key and Fireflies credentials.
{/* MANUAL-CONTENT-END */}
## Usage Instructions
Integrate Fireflies.ai into your workflow. Manage meeting transcripts, add the bot to live meetings, create soundbites, and more. It can also trigger workflows when transcriptions complete.
## Tools
### `fireflies_list_transcripts`
List meeting transcripts from Fireflies.ai with optional filtering
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Fireflies API key |
| `keyword` | string | No | Search keyword in meeting title or transcript |
| `fromDate` | string | No | Filter transcripts from this date \(ISO 8601 format\) |
| `toDate` | string | No | Filter transcripts until this date \(ISO 8601 format\) |
| `hostEmail` | string | No | Filter by meeting host email |
| `participants` | string | No | Filter by participant emails \(comma-separated\) |
| `limit` | number | No | Maximum number of transcripts to return \(max 50\) |
| `skip` | number | No | Number of transcripts to skip for pagination |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `transcripts` | array | List of transcripts |
| `count` | number | Number of transcripts returned |
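For reference, a hypothetical typed sketch of the input and output documented above; the interfaces mirror the tables, and the example values and environment variable name are placeholders, not taken from this PR.
```typescript
// Illustrative only: field names follow the fireflies_list_transcripts tables.
interface FirefliesListTranscriptsInput {
  apiKey: string          // Fireflies API key (required)
  keyword?: string        // search keyword in meeting title or transcript
  fromDate?: string       // ISO 8601
  toDate?: string         // ISO 8601
  hostEmail?: string
  participants?: string   // comma-separated emails
  limit?: number          // max 50
  skip?: number           // pagination offset
}

interface FirefliesListTranscriptsOutput {
  transcripts: unknown[]  // list of transcripts
  count: number           // number of transcripts returned
}

// Example input: the ten most recent transcripts mentioning "roadmap".
const listInput: FirefliesListTranscriptsInput = {
  apiKey: process.env.FIREFLIES_API_KEY ?? '',
  keyword: 'roadmap',
  limit: 10,
}
```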
### `fireflies_get_transcript`
Get a single transcript with full details including summary, action items, and analytics
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Fireflies API key |
| `transcriptId` | string | Yes | The transcript ID to retrieve |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `transcript` | object | The transcript with full details |
### `fireflies_get_user`
Get user information from Fireflies.ai. Returns current user if no ID specified.
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Fireflies API key |
| `userId` | string | No | User ID to retrieve \(optional, defaults to API key owner\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `user` | object | User information |
### `fireflies_list_users`
List all users within your Fireflies.ai team
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Fireflies API key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `users` | array | List of team users |
### `fireflies_upload_audio`
Upload an audio file URL to Fireflies.ai for transcription
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Fireflies API key |
| `audioFile` | file | No | Audio/video file to upload for transcription |
| `audioUrl` | string | No | Public HTTPS URL of the audio/video file \(MP3, MP4, WAV, M4A, OGG\) |
| `title` | string | No | Title for the meeting/transcript |
| `webhook` | string | No | Webhook URL to notify when transcription is complete |
| `language` | string | No | Language code for transcription \(e.g., "es" for Spanish, "de" for German\) |
| `attendees` | string | No | Attendees in JSON format: \[\{"displayName": "Name", "email": "email@example.com"\}\] |
| `clientReferenceId` | string | No | Custom reference ID for tracking |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Whether the upload was successful |
| `title` | string | Title of the uploaded meeting |
| `message` | string | Status message from Fireflies |
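A hypothetical input for `fireflies_upload_audio`, assembled from the table above; the `attendees` value uses the JSON-string format the table documents, and the URL, names, and reference ID are placeholders.
```typescript
// Illustrative only: attendees are passed as a JSON string per the input table.
const attendees = JSON.stringify([
  { displayName: 'Ada Lovelace', email: 'ada@example.com' },
])

const uploadInput = {
  apiKey: process.env.FIREFLIES_API_KEY ?? '',
  audioUrl: 'https://example.com/recordings/standup.mp3', // public HTTPS URL (MP3, MP4, WAV, M4A, OGG)
  title: 'Daily standup',
  language: 'en',
  attendees,
  clientReferenceId: 'standup-2026-01-08', // custom tracking ID
}
```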
### `fireflies_delete_transcript`
Delete a transcript from Fireflies.ai
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Fireflies API key |
| `transcriptId` | string | Yes | The transcript ID to delete |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Whether the transcript was successfully deleted |
### `fireflies_add_to_live_meeting`
Add the Fireflies.ai bot to an ongoing meeting to record and transcribe
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Fireflies API key |
| `meetingLink` | string | Yes | Valid meeting URL \(Zoom, Google Meet, Microsoft Teams, etc.\) |
| `title` | string | No | Title for the meeting \(max 256 characters\) |
| `meetingPassword` | string | No | Password for the meeting if required \(max 32 characters\) |
| `duration` | number | No | Meeting duration in minutes \(15-120, default: 60\) |
| `language` | string | No | Language code for transcription \(e.g., "en", "es", "de"\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Whether the bot was successfully added to the meeting |
### `fireflies_create_bite`
Create a soundbite/highlight from a specific time range in a transcript
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Fireflies API key |
| `transcriptId` | string | Yes | ID of the transcript to create the bite from |
| `startTime` | number | Yes | Start time of the bite in seconds |
| `endTime` | number | Yes | End time of the bite in seconds |
| `name` | string | No | Name for the bite \(max 256 characters\) |
| `mediaType` | string | No | Media type: "video" or "audio" |
| `summary` | string | No | Summary for the bite \(max 500 characters\) |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `bite` | object | Created bite details |
### `fireflies_list_bites`
List soundbites/highlights from Fireflies.ai
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Fireflies API key |
| `transcriptId` | string | No | Filter bites for a specific transcript |
| `mine` | boolean | No | Only return bites owned by the API key owner \(default: true\) |
| `limit` | number | No | Maximum number of bites to return \(max 50\) |
| `skip` | number | No | Number of bites to skip for pagination |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `bites` | array | List of bites/soundbites |
### `fireflies_list_contacts`
List all contacts from your Fireflies.ai meetings
#### Input
| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Fireflies API key |
#### Output
| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `contacts` | array | List of contacts from meetings |
## Notes
- Category: `tools`
- Type: `fireflies`

View File

@@ -23,6 +23,7 @@
"exa",
"file",
"firecrawl",
"fireflies",
"github",
"gitlab",
"gmail",

View File

@@ -33,6 +33,9 @@ Use the Start block for everything originating from the editor, deploy-to-API, o
<Card title="RSS Feed" href="/triggers/rss">
Monitor RSS and Atom feeds for new content
</Card>
<Card title="Email Polling Groups" href="#email-polling-groups">
Monitor team Gmail and Outlook inboxes
</Card>
</Cards>
## Quick Comparison
@@ -43,6 +46,7 @@ Use the Start block for everything originating from the editor, deploy-to-API, o
| **Schedule** | Timer managed in schedule block |
| **Webhook** | On inbound HTTP request |
| **RSS Feed** | New item published to feed |
| **Email Polling Groups** | New email received in team Gmail or Outlook inboxes |
> The Start block always exposes `input`, `conversationId`, and `files` fields. Add custom fields to the input format for additional structured data.
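As an illustration of the fields named above, a hypothetical TypeScript shape of what downstream blocks might receive from the Start block; `input`, `conversationId`, and `files` are the documented fields, while `customerId` and the file-metadata fields are assumed examples, not taken from this PR.
```typescript
// Hypothetical sketch of the Start block's output described above.
interface StartBlockOutput {
  input: {
    customerId?: string        // example custom field added via the input format
    [key: string]: unknown
  }
  conversationId: string
  files: Array<{ name: string; url: string }>  // file metadata shape is assumed
}
```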
@@ -66,3 +70,24 @@ If your workflow has multiple triggers, the highest priority trigger will be exe
**External triggers with mock payloads**: When external triggers (webhooks and integrations) are executed manually, Sim automatically generates mock payloads based on the trigger's expected data structure. This ensures downstream blocks can resolve variables correctly during testing.
## Email Polling Groups
Polling Groups let you monitor multiple team members' Gmail or Outlook inboxes with a single trigger. Requires a Team or Enterprise plan.
**Creating a Polling Group** (Admin/Owner)
1. Go to **Settings → Email Polling**
2. Click **Create** and choose Gmail or Outlook
3. Enter a name for the group
**Inviting Members**
1. Click **Add Members** on your polling group
2. Enter email addresses (comma or newline separated, or drag & drop a CSV)
3. Click **Send Invites**
Invitees receive an email with a link to connect their account. Once connected, their inbox is automatically included in the polling group. Invitees don't need to be members of your Sim organization.
**Using in a Workflow**
When configuring an email trigger, select your polling group from the credentials dropdown instead of an individual account. The system creates webhooks for each member and routes all emails through your workflow.

View File

@@ -0,0 +1,76 @@
---
title: Enterprise
description: Funciones enterprise para organizaciones con requisitos avanzados
de seguridad y cumplimiento
---
import { Callout } from 'fumadocs-ui/components/callout'
Sim Studio Enterprise proporciona funciones avanzadas para organizaciones con requisitos mejorados de seguridad, cumplimiento y gestión.
---
## Bring Your Own Key (BYOK)
Usa tus propias claves API para proveedores de modelos de IA en lugar de las claves alojadas de Sim Studio.
### Proveedores compatibles
| Proveedor | Uso |
|----------|-------|
| OpenAI | Embeddings de base de conocimiento, bloque Agent |
| Anthropic | Bloque Agent |
| Google | Bloque Agent |
| Mistral | OCR de base de conocimiento |
### Configuración
1. Navega a **Configuración** → **BYOK** en tu espacio de trabajo
2. Haz clic en **Añadir clave** para tu proveedor
3. Introduce tu clave API y guarda
<Callout type="warn">
Las claves BYOK están cifradas en reposo. Solo los administradores y propietarios de la organización pueden gestionar las claves.
</Callout>
Cuando está configurado, los flujos de trabajo usan tu clave en lugar de las claves alojadas de Sim Studio. Si se elimina, los flujos de trabajo vuelven automáticamente a las claves alojadas.
---
## Single Sign-On (SSO)
Autenticación enterprise con soporte SAML 2.0 y OIDC para gestión centralizada de identidades.
### Proveedores compatibles
- Okta
- Azure AD / Entra ID
- Google Workspace
- OneLogin
- Cualquier proveedor SAML 2.0 u OIDC
### Configuración
1. Navega a **Configuración** → **SSO** en tu espacio de trabajo
2. Elige tu proveedor de identidad
3. Configura la conexión usando los metadatos de tu IdP
4. Activa SSO para tu organización
<Callout type="info">
Una vez que SSO está activado, los miembros del equipo se autentican a través de tu proveedor de identidad en lugar de correo electrónico/contraseña.
</Callout>
---
## Self-Hosted
Para implementaciones self-hosted, las funciones enterprise se pueden activar mediante variables de entorno:
| Variable | Descripción |
|----------|-------------|
| `SSO_ENABLED`, `NEXT_PUBLIC_SSO_ENABLED` | Inicio de sesión único con SAML/OIDC |
| `CREDENTIAL_SETS_ENABLED`, `NEXT_PUBLIC_CREDENTIAL_SETS_ENABLED` | Grupos de sondeo para activadores de correo electrónico |
<Callout type="warn">
BYOK solo está disponible en Sim Studio alojado. Las implementaciones autoalojadas configuran las claves de proveedor de IA directamente a través de variables de entorno.
</Callout>

View File

@@ -17,7 +17,7 @@ Los servidores MCP agrupan tus herramientas de flujo de trabajo. Créalos y gest
<Video src="mcp/mcp-server.mp4" width={700} height={450} />
</div>
1. Navega a **Configuración → Servidores MCP**
1. Navega a **Configuración → MCP implementados**
2. Haz clic en **Crear servidor**
3. Introduce un nombre y una descripción opcional
4. Copia la URL del servidor para usarla en tus clientes MCP
@@ -79,7 +79,7 @@ Incluye tu encabezado de clave API (`X-API-Key`) para acceso autenticado al usar
## Gestión del servidor
Desde la vista de detalle del servidor en **Configuración → Servidores MCP**, puedes:
Desde la vista de detalles del servidor en **Configuración → MCP implementados**, puedes:
- **Ver herramientas**: consulta todos los flujos de trabajo añadidos a un servidor
- **Copiar URL**: obtén la URL del servidor para clientes MCP

View File

@@ -26,8 +26,8 @@ Los servidores MCP proporcionan colecciones de herramientas que tus agentes pued
<Video src="mcp/settings-mcp-tools.mp4" width={700} height={450} />
</div>
1. Navega a los ajustes de tu espacio de trabajo
2. Ve a la sección **Servidores MCP**
1. Navega a la configuración de tu espacio de trabajo
2. Ve a la sección **MCP implementados**
3. Haz clic en **Añadir servidor MCP**
4. Introduce los detalles de configuración del servidor
5. Guarda la configuración

View File

@@ -0,0 +1,233 @@
---
title: Fireflies
description: Interactúa con transcripciones y grabaciones de reuniones de Fireflies.ai
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="fireflies"
color="#100730"
/>
{/* MANUAL-CONTENT-START:intro */}
[Fireflies.ai](https://fireflies.ai/) es una plataforma de transcripción e inteligencia de reuniones que se integra con Sim, permitiendo que tus agentes trabajen directamente con grabaciones de reuniones, transcripciones e información mediante automatizaciones sin código.
La integración de Fireflies en Sim proporciona herramientas para:
- **Listar transcripciones de reuniones:** Obtén múltiples reuniones y su información resumida para tu equipo o cuenta.
- **Recuperar detalles completos de transcripciones:** Accede a transcripciones detalladas, incluyendo resúmenes, elementos de acción, temas y análisis de participantes para cualquier reunión.
- **Subir audio o vídeo:** Sube archivos de audio/vídeo o proporciona URLs para transcripción—opcionalmente establece idioma, título, asistentes y recibe notas de reunión automatizadas.
- **Buscar transcripciones:** Encuentra reuniones por palabra clave, participante, anfitrión o periodo de tiempo para localizar rápidamente discusiones relevantes.
- **Eliminar transcripciones:** Elimina transcripciones de reuniones específicas de tu espacio de trabajo de Fireflies.
- **Crear fragmentos destacados (Bites):** Extrae y resalta momentos clave de las transcripciones como clips de audio o vídeo.
- **Activar flujos de trabajo al completar la transcripción:** Activa flujos de trabajo de Sim automáticamente cuando finaliza una transcripción de reunión de Fireflies usando el webhook trigger proporcionado—habilitando automatizaciones en tiempo real y notificaciones basadas en nuevos datos de reuniones.
Al combinar estas capacidades, puedes optimizar acciones posteriores a las reuniones, extraer información estructurada, automatizar notificaciones, gestionar grabaciones y orquestar flujos de trabajo personalizados en torno a las llamadas de tu organización—todo de forma segura usando tu clave API y credenciales de Fireflies.
{/* MANUAL-CONTENT-END */}
## Instrucciones de uso
Integra Fireflies.ai en el flujo de trabajo. Gestiona transcripciones de reuniones, añade bot a reuniones en vivo, crea fragmentos destacados y más. También puede activar flujos de trabajo cuando se completan las transcripciones.
## Herramientas
### `fireflies_list_transcripts`
Lista las transcripciones de reuniones de Fireflies.ai con filtrado opcional
#### Entrada
| Parámetro | Tipo | Requerido | Descripción |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Sí | Clave API de Fireflies |
| `keyword` | string | No | Palabra clave de búsqueda en el título de la reunión o transcripción |
| `fromDate` | string | No | Filtra transcripciones desde esta fecha \(formato ISO 8601\) |
| `toDate` | string | No | Filtra transcripciones hasta esta fecha \(formato ISO 8601\) |
| `hostEmail` | string | No | Filtra por correo electrónico del anfitrión de la reunión |
| `participants` | string | No | Filtra por correos electrónicos de participantes \(separados por comas\) |
| `limit` | number | No | Número máximo de transcripciones a devolver \(máx. 50\) |
| `skip` | number | No | Número de transcripciones a omitir para paginación |
#### Salida
| Parámetro | Tipo | Descripción |
| --------- | ---- | ----------- |
| `transcripts` | array | Lista de transcripciones |
| `count` | number | Número de transcripciones devueltas |
### `fireflies_get_transcript`
Obtiene una única transcripción con detalles completos, incluyendo resumen, elementos de acción y análisis
#### Entrada
| Parámetro | Tipo | Requerido | Descripción |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Sí | Clave API de Fireflies |
| `transcriptId` | string | Sí | El ID de transcripción a recuperar |
#### Salida
| Parámetro | Tipo | Descripción |
| --------- | ---- | ----------- |
| `transcript` | object | La transcripción con detalles completos |
### `fireflies_get_user`
Obtener información del usuario de Fireflies.ai. Devuelve el usuario actual si no se especifica ningún ID.
#### Entrada
| Parámetro | Tipo | Requerido | Descripción |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Sí | Clave API de Fireflies |
| `userId` | string | No | ID de usuario a recuperar \(opcional, por defecto el propietario de la clave API\) |
#### Salida
| Parámetro | Tipo | Descripción |
| --------- | ---- | ----------- |
| `user` | object | Información del usuario |
### `fireflies_list_users`
Listar todos los usuarios dentro de tu equipo de Fireflies.ai
#### Entrada
| Parámetro | Tipo | Requerido | Descripción |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Sí | Clave API de Fireflies |
#### Salida
| Parámetro | Tipo | Descripción |
| --------- | ---- | ----------- |
| `users` | array | Lista de usuarios del equipo |
### `fireflies_upload_audio`
Subir una URL de archivo de audio a Fireflies.ai para transcripción
#### Entrada
| Parámetro | Tipo | Requerido | Descripción |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Sí | Clave API de Fireflies |
| `audioFile` | file | No | Archivo de audio/vídeo a subir para transcripción |
| `audioUrl` | string | No | URL HTTPS pública del archivo de audio/vídeo \(MP3, MP4, WAV, M4A, OGG\) |
| `title` | string | No | Título para la reunión/transcripción |
| `webhook` | string | No | URL de webhook para notificar cuando la transcripción esté completa |
| `language` | string | No | Código de idioma para la transcripción \(por ejemplo, "es" para español, "de" para alemán\) |
| `attendees` | string | No | Asistentes en formato JSON: \[\{"displayName": "Nombre", "email": "email@example.com"\}\] |
| `clientReferenceId` | string | No | ID de referencia personalizado para seguimiento |
#### Salida
| Parámetro | Tipo | Descripción |
| --------- | ---- | ----------- |
| `success` | boolean | Si la carga fue exitosa |
| `title` | string | Título de la reunión cargada |
| `message` | string | Mensaje de estado de Fireflies |
### `fireflies_delete_transcript`
Eliminar una transcripción de Fireflies.ai
#### Entrada
| Parámetro | Tipo | Requerido | Descripción |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Sí | Clave API de Fireflies |
| `transcriptId` | string | Sí | El ID de la transcripción a eliminar |
#### Salida
| Parámetro | Tipo | Descripción |
| --------- | ---- | ----------- |
| `success` | boolean | Si la transcripción fue eliminada exitosamente |
### `fireflies_add_to_live_meeting`
Agregar el bot de Fireflies.ai a una reunión en curso para grabar y transcribir
#### Entrada
| Parámetro | Tipo | Requerido | Descripción |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Sí | Clave API de Fireflies |
| `meetingLink` | string | Sí | URL de reunión válida \(Zoom, Google Meet, Microsoft Teams, etc.\) |
| `title` | string | No | Título para la reunión \(máximo 256 caracteres\) |
| `meetingPassword` | string | No | Contraseña para la reunión si es necesaria \(máximo 32 caracteres\) |
| `duration` | number | No | Duración de la reunión en minutos \(15-120, predeterminado: 60\) |
| `language` | string | No | Código de idioma para la transcripción \(p. ej., "en", "es", "de"\) |
#### Salida
| Parámetro | Tipo | Descripción |
| --------- | ---- | ----------- |
| `success` | boolean | Si el bot se agregó exitosamente a la reunión |
### `fireflies_create_bite`
Crear un fragmento destacado desde un rango de tiempo específico en una transcripción
#### Entrada
| Parámetro | Tipo | Requerido | Descripción |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Sí | Clave API de Fireflies |
| `transcriptId` | string | Sí | ID de la transcripción desde la cual crear el fragmento |
| `startTime` | number | Sí | Tiempo de inicio del fragmento en segundos |
| `endTime` | number | Sí | Tiempo de finalización del fragmento en segundos |
| `name` | string | No | Nombre para el fragmento \(máximo 256 caracteres\) |
| `mediaType` | string | No | Tipo de medio: "video" o "audio" |
| `summary` | string | No | Resumen para el fragmento \(máximo 500 caracteres\) |
#### Salida
| Parámetro | Tipo | Descripción |
| --------- | ---- | ----------- |
| `bite` | object | Detalles del fragmento creado |
### `fireflies_list_bites`
Listar fragmentos destacados de Fireflies.ai
#### Entrada
| Parámetro | Tipo | Requerido | Descripción |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Sí | Clave API de Fireflies |
| `transcriptId` | string | No | Filtrar fragmentos para una transcripción específica |
| `mine` | boolean | No | Devolver solo fragmentos propiedad del titular de la clave API \(predeterminado: true\) |
| `limit` | number | No | Número máximo de fragmentos a devolver \(máximo 50\) |
| `skip` | number | No | Número de fragmentos a omitir para paginación |
#### Salida
| Parámetro | Tipo | Descripción |
| --------- | ---- | ----------- |
| `bites` | array | Lista de fragmentos/clips de audio |
### `fireflies_list_contacts`
Lista todos los contactos de tus reuniones de Fireflies.ai
#### Entrada
| Parámetro | Tipo | Requerido | Descripción |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Sí | Clave API de Fireflies |
#### Salida
| Parámetro | Tipo | Descripción |
| --------- | ---- | ----------- |
| `contacts` | array | Lista de contactos de las reuniones |
## Notas
- Categoría: `tools`
- Tipo: `fireflies`

View File

@@ -22,7 +22,7 @@ Utiliza el bloque Start para todo lo que se origina desde el editor, despliegue
<Cards>
<Card title="Start" href="/triggers/start">
Punto de entrada unificado que admite ejecuciones del editor, despliegues de API y despliegues de chat
Punto de entrada unificado que admite ejecuciones en el editor, despliegues de API y despliegues de chat
</Card>
<Card title="Webhook" href="/triggers/webhook">
Recibe cargas útiles de webhooks externos
@@ -31,18 +31,22 @@ Utiliza el bloque Start para todo lo que se origina desde el editor, despliegue
Ejecución basada en cron o intervalos
</Card>
<Card title="RSS Feed" href="/triggers/rss">
Monitorea feeds RSS y Atom para nuevo contenido
Monitorea feeds RSS y Atom para detectar contenido nuevo
</Card>
<Card title="Email Polling Groups" href="#email-polling-groups">
Monitorea bandejas de entrada de Gmail y Outlook del equipo
</Card>
</Cards>
## Comparación rápida
| Disparador | Condición de inicio |
| Trigger | Condición de inicio |
|---------|-----------------|
| **Start** | Ejecuciones del editor, solicitudes de despliegue a API o mensajes de chat |
| **Start** | Ejecuciones en el editor, solicitudes de despliegue a API o mensajes de chat |
| **Schedule** | Temporizador gestionado en el bloque de programación |
| **Webhook** | Al recibir una solicitud HTTP entrante |
| **RSS Feed** | Nuevo elemento publicado en el feed |
| **Email Polling Groups** | Nuevo correo electrónico recibido en bandejas de entrada de Gmail o Outlook del equipo |
> El bloque Start siempre expone los campos `input`, `conversationId` y `files`. Añade campos personalizados al formato de entrada para datos estructurados adicionales.
@@ -65,3 +69,25 @@ Cuando haces clic en **Ejecutar** en el editor, Sim selecciona automáticamente
Si tu flujo de trabajo tiene múltiples disparadores, se ejecutará el disparador de mayor prioridad. Por ejemplo, si tienes tanto un bloque Start como un disparador Webhook, al hacer clic en Ejecutar se ejecutará el bloque Start.
**Disparadores externos con cargas útiles simuladas**: Cuando los disparadores externos (webhooks e integraciones) se ejecutan manualmente, Sim genera automáticamente cargas útiles simuladas basadas en la estructura de datos esperada del disparador. Esto asegura que los bloques posteriores puedan resolver las variables correctamente durante las pruebas.
## Grupos de sondeo de correo electrónico
Los grupos de sondeo te permiten monitorear las bandejas de entrada de Gmail o Outlook de varios miembros del equipo con un solo activador. Requiere un plan Team o Enterprise.
**Crear un grupo de sondeo** (administrador/propietario)
1. Ve a **Configuración → Sondeo de correo electrónico**
2. Haz clic en **Crear** y elige Gmail u Outlook
3. Ingresa un nombre para el grupo
**Invitar miembros**
1. Haz clic en **Agregar miembros** en tu grupo de sondeo
2. Ingresa direcciones de correo electrónico (separadas por comas o saltos de línea, o arrastra y suelta un CSV)
3. Haz clic en **Enviar invitaciones**
Los invitados reciben un correo electrónico con un enlace para conectar su cuenta. Una vez conectada, su bandeja de entrada se incluye automáticamente en el grupo de sondeo. Los invitados no necesitan ser miembros de tu organización Sim.
**Usar en un flujo de trabajo**
Al configurar un activador de correo electrónico, selecciona tu grupo de sondeo del menú desplegable de credenciales en lugar de una cuenta individual. El sistema crea webhooks para cada miembro y enruta todos los correos electrónicos a través de tu flujo de trabajo.

View File

@@ -0,0 +1,76 @@
---
title: Entreprise
description: Fonctionnalités entreprise pour les organisations ayant des
exigences avancées en matière de sécurité et de conformité
---
import { Callout } from 'fumadocs-ui/components/callout'
Sim Studio Entreprise fournit des fonctionnalités avancées pour les organisations ayant des exigences renforcées en matière de sécurité, de conformité et de gestion.
---
## Apportez votre propre clé (BYOK)
Utilisez vos propres clés API pour les fournisseurs de modèles IA au lieu des clés hébergées par Sim Studio.
### Fournisseurs pris en charge
| Fournisseur | Utilisation |
|----------|-------|
| OpenAI | Embeddings de base de connaissances, bloc Agent |
| Anthropic | Bloc Agent |
| Google | Bloc Agent |
| Mistral | OCR de base de connaissances |
### Configuration
1. Accédez à **Paramètres** → **BYOK** dans votre espace de travail
2. Cliquez sur **Ajouter une clé** pour votre fournisseur
3. Saisissez votre clé API et enregistrez
<Callout type="warn">
Les clés BYOK sont chiffrées au repos. Seuls les administrateurs et propriétaires de l'organisation peuvent gérer les clés.
</Callout>
Une fois configurés, les workflows utilisent votre clé au lieu des clés hébergées par Sim Studio. Si elle est supprimée, les workflows basculent automatiquement vers les clés hébergées.
---
## Authentification unique (SSO)
Authentification entreprise avec prise en charge de SAML 2.0 et OIDC pour une gestion centralisée des identités.
### Fournisseurs pris en charge
- Okta
- Azure AD / Entra ID
- Google Workspace
- OneLogin
- Tout fournisseur SAML 2.0 ou OIDC
### Configuration
1. Accédez à **Paramètres** → **SSO** dans votre espace de travail
2. Choisissez votre fournisseur d'identité
3. Configurez la connexion en utilisant les métadonnées de votre IdP
4. Activez le SSO pour votre organisation
<Callout type="info">
Une fois le SSO activé, les membres de l'équipe s'authentifient via votre fournisseur d'identité au lieu d'utiliser un email/mot de passe.
</Callout>
---
## Auto-hébergé
Pour les déploiements auto-hébergés, les fonctionnalités entreprise peuvent être activées via des variables d'environnement :
| Variable | Description |
|----------|-------------|
| `SSO_ENABLED`, `NEXT_PUBLIC_SSO_ENABLED` | Authentification unique avec SAML/OIDC |
| `CREDENTIAL_SETS_ENABLED`, `NEXT_PUBLIC_CREDENTIAL_SETS_ENABLED` | Groupes de surveillance pour les déclencheurs d'e-mail |
<Callout type="warn">
BYOK est uniquement disponible sur Sim Studio hébergé. Les déploiements auto-hébergés configurent les clés de fournisseur d'IA directement via les variables d'environnement.
</Callout>
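The table above only names the self-hosted flags; how they are consumed is deployment-specific. A minimal sketch, assuming the flags are plain string environment variables read at startup (the real code may go through the project's env helper instead):
```ts
// Hypothetical feature-flag reader for the two enterprise toggles listed above.
// SSO_ENABLED / CREDENTIAL_SETS_ENABLED are server-side; the NEXT_PUBLIC_* variants
// expose the same switches to the browser bundle.
const isTrue = (value: string | undefined): boolean => value === 'true'

export const enterpriseFlags = {
  sso: isTrue(process.env.SSO_ENABLED),                       // SAML/OIDC single sign-on
  pollingGroups: isTrue(process.env.CREDENTIAL_SETS_ENABLED), // credential sets for email triggers
}
```
On the hosted product these features are gated per plan instead, and BYOK keys are managed through the UI as described above.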

View File

@@ -17,11 +17,11 @@ Les serveurs MCP regroupent vos outils de workflow. Créez-les et gérez-les dan
<Video src="mcp/mcp-server.mp4" width={700} height={450} />
</div>
1. Accédez à **Paramètres → Serveurs MCP**
1. Accédez à **Paramètres → MCP déployés**
2. Cliquez sur **Créer un serveur**
3. Saisissez un nom et une description facultative
4. Copiez l'URL du serveur pour l'utiliser dans vos clients MCP
5. Consultez et gérez tous les outils ajoutés au serveur
5. Affichez et gérez tous les outils ajoutés au serveur
## Ajouter un workflow en tant qu'outil
@@ -79,7 +79,7 @@ Incluez votre en-tête de clé API (`X-API-Key`) pour un accès authentifié lor
## Gestion du serveur
Depuis la vue détaillée du serveur dans **Paramètres → Serveurs MCP**, vous pouvez :
Depuis la vue détaillée du serveur dans **Paramètres → MCP déployés**, vous pouvez :
- **Voir les outils** : voir tous les workflows ajoutés à un serveur
- **Copier l'URL** : obtenir l'URL du serveur pour les clients MCP

View File

@@ -28,7 +28,7 @@ Les serveurs MCP fournissent des collections d'outils que vos agents peuvent uti
</div>
1. Accédez aux paramètres de votre espace de travail
2. Allez à la section **Serveurs MCP**
2. Allez dans la section **MCP déployés**
3. Cliquez sur **Ajouter un serveur MCP**
4. Saisissez les détails de configuration du serveur
5. Enregistrez la configuration

View File

@@ -0,0 +1,234 @@
---
title: Fireflies
description: Interagissez avec les transcriptions et enregistrements de réunions
Fireflies.ai
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="fireflies"
color="#100730"
/>
{/* MANUAL-CONTENT-START:intro */}
[Fireflies.ai](https://fireflies.ai/) est une plateforme de transcription et d'intelligence de réunions qui s'intègre avec Sim, permettant à vos agents de travailler directement avec les enregistrements de réunions, les transcriptions et les informations via des automatisations sans code.
L'intégration Fireflies dans Sim fournit des outils pour :
- **Lister les transcriptions de réunions :** récupérez plusieurs réunions et leurs informations récapitulatives pour votre équipe ou compte.
- **Récupérer les détails complets de transcription :** accédez aux transcriptions détaillées, y compris les résumés, les éléments d'action, les sujets et les analyses des participants pour toute réunion.
- **Télécharger de l'audio ou de la vidéo :** téléchargez des fichiers audio/vidéo ou fournissez des URL pour la transcription—définissez éventuellement la langue, le titre, les participants et recevez des notes de réunion automatisées.
- **Rechercher des transcriptions :** trouvez des réunions par mot-clé, participant, hôte ou période pour localiser rapidement les discussions pertinentes.
- **Supprimer des transcriptions :** supprimez des transcriptions de réunions spécifiques de votre espace de travail Fireflies.
- **Créer des extraits sonores (Bites) :** extrayez et mettez en évidence les moments clés des transcriptions sous forme de clips audio ou vidéo.
- **Déclencher des workflows à la fin de la transcription :** activez automatiquement les workflows Sim lorsqu'une transcription de réunion Fireflies se termine en utilisant le déclencheur webhook fourni—permettant des automatisations en temps réel et des notifications basées sur les nouvelles données de réunion.
En combinant ces capacités, vous pouvez rationaliser les actions post-réunion, extraire des informations structurées, automatiser les notifications, gérer les enregistrements et orchestrer des workflows personnalisés autour des appels de votre organisation—le tout de manière sécurisée en utilisant votre clé API et vos identifiants Fireflies.
{/* MANUAL-CONTENT-END */}
## Instructions d'utilisation
Intégrez Fireflies.ai dans le workflow. Gérez les transcriptions de réunions, ajoutez un bot aux réunions en direct, créez des extraits sonores et plus encore. Peut également déclencher des workflows lorsque les transcriptions sont terminées.
## Outils
### `fireflies_list_transcripts`
Lister les transcriptions de réunions depuis Fireflies.ai avec filtrage optionnel
#### Entrée
| Paramètre | Type | Requis | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Oui | Clé API Fireflies |
| `keyword` | string | Non | Mot-clé de recherche dans le titre de la réunion ou la transcription |
| `fromDate` | string | Non | Filtrer les transcriptions à partir de cette date \(format ISO 8601\) |
| `toDate` | string | Non | Filtrer les transcriptions jusqu'à cette date \(format ISO 8601\) |
| `hostEmail` | string | Non | Filtrer par e-mail de l'organisateur de la réunion |
| `participants` | string | Non | Filtrer par e-mails des participants \(séparés par des virgules\) |
| `limit` | number | Non | Nombre maximum de transcriptions à retourner \(max 50\) |
| `skip` | number | Non | Nombre de transcriptions à ignorer pour la pagination |
#### Sortie
| Paramètre | Type | Description |
| --------- | ---- | ----------- |
| `transcripts` | array | Liste des transcriptions |
| `count` | number | Nombre de transcriptions retournées |
### `fireflies_get_transcript`
Obtenir une transcription unique avec tous les détails, y compris le résumé, les actions à effectuer et les analyses
#### Entrée
| Paramètre | Type | Requis | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Oui | Clé API Fireflies |
| `transcriptId` | string | Oui | L'identifiant de la transcription à récupérer |
#### Sortie
| Paramètre | Type | Description |
| --------- | ---- | ----------- |
| `transcript` | object | La transcription avec tous les détails |
### `fireflies_get_user`
Obtenir les informations utilisateur depuis Fireflies.ai. Renvoie l'utilisateur actuel si aucun ID n'est spécifié.
#### Entrée
| Paramètre | Type | Requis | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Oui | Clé API Fireflies |
| `userId` | string | Non | ID utilisateur à récupérer \(optionnel, par défaut le propriétaire de la clé API\) |
#### Sortie
| Paramètre | Type | Description |
| --------- | ---- | ----------- |
| `user` | object | Informations utilisateur |
### `fireflies_list_users`
Lister tous les utilisateurs de votre équipe Fireflies.ai
#### Entrée
| Paramètre | Type | Requis | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Oui | Clé API Fireflies |
#### Sortie
| Paramètre | Type | Description |
| --------- | ---- | ----------- |
| `users` | array | Liste des utilisateurs de l'équipe |
### `fireflies_upload_audio`
Télécharger une URL de fichier audio vers Fireflies.ai pour transcription
#### Entrée
| Paramètre | Type | Requis | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Oui | Clé API Fireflies |
| `audioFile` | file | Non | Fichier audio/vidéo à télécharger pour transcription |
| `audioUrl` | string | Non | URL HTTPS publique du fichier audio/vidéo \(MP3, MP4, WAV, M4A, OGG\) |
| `title` | string | Non | Titre de la réunion/transcription |
| `webhook` | string | Non | URL webhook pour notification lorsque la transcription est terminée |
| `language` | string | Non | Code de langue pour la transcription \(par ex., « es » pour l'espagnol, « de » pour l'allemand\) |
| `attendees` | string | Non | Participants au format JSON : \[\{"displayName": "Nom", "email": "email@exemple.com"\}\] |
| `clientReferenceId` | string | Non | ID de référence personnalisé pour le suivi |
#### Sortie
| Paramètre | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Indique si le téléversement a réussi |
| `title` | string | Titre de la réunion téléversée |
| `message` | string | Message de statut de Fireflies |
### `fireflies_delete_transcript`
Supprimer une transcription de Fireflies.ai
#### Entrée
| Paramètre | Type | Requis | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Oui | Clé API Fireflies |
| `transcriptId` | string | Oui | L'identifiant de la transcription à supprimer |
#### Sortie
| Paramètre | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Indique si la transcription a été supprimée avec succès |
### `fireflies_add_to_live_meeting`
Ajouter le bot Fireflies.ai à une réunion en cours pour enregistrer et transcrire
#### Entrée
| Paramètre | Type | Requis | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Oui | Clé API Fireflies |
| `meetingLink` | string | Oui | URL de réunion valide \(Zoom, Google Meet, Microsoft Teams, etc.\) |
| `title` | string | Non | Titre de la réunion \(256 caractères maximum\) |
| `meetingPassword` | string | Non | Mot de passe de la réunion si nécessaire \(32 caractères maximum\) |
| `duration` | number | Non | Durée de la réunion en minutes \(15-120, par défaut : 60\) |
| `language` | string | Non | Code de langue pour la transcription \(par exemple, "en", "es", "de"\) |
#### Sortie
| Paramètre | Type | Description |
| --------- | ---- | ----------- |
| `success` | boolean | Indique si le bot a été ajouté avec succès à la réunion |
### `fireflies_create_bite`
Créer un extrait sonore/moment fort à partir d'une plage horaire spécifique dans une transcription
#### Entrée
| Paramètre | Type | Requis | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Oui | Clé API Fireflies |
| `transcriptId` | string | Oui | ID de la transcription à partir de laquelle créer l'extrait |
| `startTime` | number | Oui | Heure de début de l'extrait en secondes |
| `endTime` | number | Oui | Heure de fin de l'extrait en secondes |
| `name` | string | Non | Nom de l'extrait \(256 caractères maximum\) |
| `mediaType` | string | Non | Type de média : « video » ou « audio » |
| `summary` | string | Non | Résumé de l'extrait \(500 caractères maximum\) |
#### Sortie
| Paramètre | Type | Description |
| --------- | ---- | ----------- |
| `bite` | object | Détails de l'extrait créé |
### `fireflies_list_bites`
Lister les extraits sonores/moments forts de Fireflies.ai
#### Entrée
| Paramètre | Type | Requis | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Oui | Clé API Fireflies |
| `transcriptId` | string | Non | Filtrer les extraits pour une transcription spécifique |
| `mine` | boolean | Non | Retourner uniquement les extraits appartenant au propriétaire de la clé API \(par défaut : true\) |
| `limit` | number | Non | Nombre maximum d'extraits à retourner \(50 maximum\) |
| `skip` | number | Non | Nombre d'extraits à ignorer pour la pagination |
#### Sortie
| Paramètre | Type | Description |
| --------- | ---- | ----------- |
| `bites` | array | Liste des extraits/extraits sonores |
### `fireflies_list_contacts`
Lister tous les contacts de vos réunions Fireflies.ai
#### Entrée
| Paramètre | Type | Requis | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Oui | Clé API Fireflies |
#### Sortie
| Paramètre | Type | Description |
| --------- | ---- | ----------- |
| `contacts` | array | Liste des contacts des réunions |
## Remarques
- Catégorie : `tools`
- Type : `fireflies`

View File

@@ -22,7 +22,7 @@ Utilisez le bloc Démarrer pour tout ce qui provient de l'éditeur, du déploiem
<Cards>
<Card title="Start" href="/triggers/start">
Point d'entrée unifié qui prend en charge les exécutions de l'éditeur, les déploiements d'API et les déploiements de chat
Point d'entrée unifié qui prend en charge les exécutions dans l'éditeur, les déploiements API et les déploiements de chat
</Card>
<Card title="Webhook" href="/triggers/webhook">
Recevoir des charges utiles de webhook externes
@@ -31,18 +31,22 @@ Utilisez le bloc Démarrer pour tout ce qui provient de l'éditeur, du déploiem
Exécution basée sur cron ou intervalle
</Card>
<Card title="RSS Feed" href="/triggers/rss">
Surveiller les flux RSS et Atom pour du nouveau contenu
Surveiller les flux RSS et Atom pour détecter du nouveau contenu
</Card>
<Card title="Email Polling Groups" href="#email-polling-groups">
Surveiller les boîtes de réception Gmail et Outlook de l'équipe
</Card>
</Cards>
## Comparaison rapide
| Déclencheur | Condition de démarrage |
|---------|-----------------|
| **Start** | Exécutions de l'éditeur, requêtes de déploiement d'API ou messages de chat |
|-------------|------------------------|
| **Start** | Exécutions dans l'éditeur, requêtes de déploiement vers l'API ou messages de chat |
| **Schedule** | Minuteur géré dans le bloc de planification |
| **Webhook** | Sur requête HTTP entrante |
| **Webhook** | Lors d'une requête HTTP entrante |
| **RSS Feed** | Nouvel élément publié dans le flux |
| **Email Polling Groups** | Nouvel e-mail reçu dans les boîtes de réception Gmail ou Outlook de l'équipe |
> Le bloc Démarrer expose toujours les champs `input`, `conversationId` et `files`. Ajoutez des champs personnalisés au format d'entrée pour des données structurées supplémentaires.
@@ -65,3 +69,25 @@ Lorsque vous cliquez sur **Exécuter** dans l'éditeur, Sim sélectionne automat
Si votre flux de travail comporte plusieurs déclencheurs, le déclencheur de priorité la plus élevée sera exécuté. Par exemple, si vous avez à la fois un bloc Démarrer et un déclencheur Webhook, cliquer sur Exécuter exécutera le bloc Démarrer.
**Déclencheurs externes avec charges utiles simulées** : lorsque des déclencheurs externes (webhooks et intégrations) sont exécutés manuellement, Sim génère automatiquement des charges utiles simulées basées sur la structure de données attendue du déclencheur. Cela garantit que les blocs en aval peuvent résoudre correctement les variables pendant les tests.
## Groupes de surveillance d'e-mails
Les groupes de surveillance vous permettent de surveiller les boîtes de réception Gmail ou Outlook de plusieurs membres de l'équipe avec un seul déclencheur. Nécessite un forfait Team ou Enterprise.
**Créer un groupe de surveillance** (Admin/Propriétaire)
1. Accédez à **Paramètres → Surveillance d'e-mails**
2. Cliquez sur **Créer** et choisissez Gmail ou Outlook
3. Entrez un nom pour le groupe
**Inviter des membres**
1. Cliquez sur **Ajouter des membres** dans votre groupe de surveillance
2. Entrez les adresses e-mail (séparées par des virgules ou des sauts de ligne, ou glissez-déposez un fichier CSV)
3. Cliquez sur **Envoyer les invitations**
Les personnes invitées reçoivent un e-mail avec un lien pour connecter leur compte. Une fois connectée, leur boîte de réception est automatiquement incluse dans le groupe de surveillance. Les personnes invitées n'ont pas besoin d'être membres de votre organisation Sim.
**Utiliser dans un workflow**
Lors de la configuration d'un déclencheur d'e-mail, sélectionnez votre groupe de surveillance dans le menu déroulant des identifiants au lieu d'un compte individuel. Le système crée des webhooks pour chaque membre et achemine tous les e-mails via votre workflow.

View File

@@ -0,0 +1,75 @@
---
title: エンタープライズ
description: 高度なセキュリティとコンプライアンス要件を持つ組織向けのエンタープライズ機能
---
import { Callout } from 'fumadocs-ui/components/callout'
Sim Studio Enterpriseは、強化されたセキュリティ、コンプライアンス、管理要件を持つ組織向けの高度な機能を提供します。
---
## Bring Your Own Key (BYOK)
Sim Studioのホストキーの代わりに、AIモデルプロバイダー用の独自のAPIキーを使用できます。
### 対応プロバイダー
| プロバイダー | 用途 |
|----------|-------|
| OpenAI | ナレッジベースの埋め込み、エージェントブロック |
| Anthropic | エージェントブロック |
| Google | エージェントブロック |
| Mistral | ナレッジベースOCR |
### セットアップ
1. ワークスペースの**設定** → **BYOK**に移動します
2. プロバイダーの**キーを追加**をクリックします
3. APIキーを入力して保存します
<Callout type="warn">
BYOKキーは保存時に暗号化されます。組織の管理者とオーナーのみがキーを管理できます。
</Callout>
設定すると、ワークフローはSim Studioのホストキーの代わりに独自のキーを使用します。削除すると、ワークフローは自動的にホストキーにフォールバックします。
---
## シングルサインオン (SSO)
集中型IDマネジメントのためのSAML 2.0およびOIDCサポートを備えたエンタープライズ認証。
### 対応プロバイダー
- Okta
- Azure AD / Entra ID
- Google Workspace
- OneLogin
- SAML 2.0またはOIDCに対応する任意のプロバイダー
### セットアップ
1. ワークスペースの**設定** → **SSO**に移動します
2. IDプロバイダーを選択します
3. IdPのメタデータを使用して接続を設定します
4. 組織のSSOを有効にします
<Callout type="info">
SSOを有効にすると、チームメンバーはメール/パスワードの代わりにIDプロバイダーを通じて認証します。
</Callout>
---
## セルフホスト
セルフホストデプロイメントの場合、エンタープライズ機能は環境変数を介して有効にできます:
| 変数 | 説明 |
|----------|-------------|
| `SSO_ENABLED`、`NEXT_PUBLIC_SSO_ENABLED` | SAML/OIDCによるシングルサインオン |
| `CREDENTIAL_SETS_ENABLED`、`NEXT_PUBLIC_CREDENTIAL_SETS_ENABLED` | メールトリガー用のポーリンググループ |
<Callout type="warn">
BYOKはホスト型Sim Studioでのみ利用可能です。セルフホスト型デプロイメントでは、環境変数を介してAIプロバイダーキーを直接設定します。
</Callout>

View File

@@ -16,11 +16,11 @@ MCPサーバーは、ワークフローツールをまとめてグループ化
<Video src="mcp/mcp-server.mp4" width={700} height={450} />
</div>
1. **設定 → MCPサーバー**に移動
2. **サーバーを作成**をクリック
3. 名前と説明(任意)を入力
4. MCPクライアントで使用するためにサーバーURLをコピー
5. サーバーに追加されたすべてのツールを表示・管理
1. **設定 → デプロイ済みMCP**に移動します
2. **サーバーを作成**をクリックします
3. 名前とオプションの説明を入力します
4. MCPクライアントで使用するためにサーバーURLをコピーします
5. サーバーに追加されたすべてのツールを表示および管理します
## ワークフローをツールとして追加
@@ -78,7 +78,7 @@ mcp-remoteまたは他のHTTPベースのMCPトランスポートを使用する
## サーバー管理
**設定 → MCPサーバー**のサーバー詳細ビューから、以下の操作が可能です:
**設定 → デプロイ済みMCP**のサーバー詳細ビューから、次のことができます:
- **ツールを表示**: サーバーに追加されたすべてのワークフローを確認
- **URLをコピー**: MCPクライアント用のサーバーURLを取得

View File

@@ -27,10 +27,10 @@ MCPサーバーはエージェントが使用できるツールのコレクシ
</div>
1. ワークスペース設定に移動します
2. **MCPサーバー**セクションに進みます
2. **デプロイ済みMCP**セクションに移動します
3. **MCPサーバーを追加**をクリックします
4. サーバー構成の詳細を入力します
5. 構成を保存します
4. サーバー設定の詳細を入力します
5. 設定を保存します
<Callout type="info">
エージェントブロックのツールバーから直接MCPサーバーを構成することもできます(クイックセットアップ)

View File

@@ -0,0 +1,233 @@
---
title: Fireflies
description: Fireflies.aiの会議文字起こしと録画を操作
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="fireflies"
color="#100730"
/>
{/* MANUAL-CONTENT-START:intro */}
[Fireflies.ai](https://fireflies.ai/)は、会議の文字起こしとインテリジェンスプラットフォームで、Simと統合されており、エージェントがノーコード自動化を通じて会議の録画、文字起こし、インサイトを直接操作できます。
SimのFireflies統合は以下のツールを提供します。
- **会議文字起こしの一覧表示:** チームまたはアカウントの複数の会議とその要約情報を取得します。
- **完全な文字起こし詳細の取得:** 任意の会議について、要約、アクションアイテム、トピック、参加者分析を含む詳細な文字起こしにアクセスします。
- **音声または動画のアップロード:** 音声/動画ファイルをアップロードするか、文字起こし用のURLを提供します。オプションで言語、タイトル、参加者を設定し、自動化された会議メモを受け取ることができます。
- **文字起こしの検索:** キーワード、参加者、ホスト、または期間で会議を検索し、関連する議論を素早く見つけます。
- **文字起こしの削除:** Firefliesワークスペースから特定の会議文字起こしを削除します。
- **サウンドバイト(Bites)の作成:** 文字起こしから重要な瞬間を音声または動画クリップとして抽出してハイライトします。
- **文字起こし完了時のワークフロートリガー:** Firefliesの会議文字起こしが完了したときに、提供されたWebhookトリガーを使用してSimワークフローを自動的に起動します。これにより、新しい会議データに基づくリアルタイムの自動化と通知が可能になります。
これらの機能を組み合わせることで、会議後のアクションを効率化し、構造化されたインサイトを抽出し、通知を自動化し、録画を管理し、組織の通話に関するカスタムワークフローを調整できます。すべてAPIキーとFirefliesの認証情報を使用して安全に実行されます。
{/* MANUAL-CONTENT-END */}
## 使用方法
Fireflies.aiをワークフローに統合します。会議の文字起こしを管理し、ライブ会議にボットを追加し、サウンドバイトを作成するなどの操作が可能です。文字起こしが完了したときにワークフローをトリガーすることもできます。
## ツール
### `fireflies_list_transcripts`
Fireflies.aiからミーティングの文字起こしをオプションのフィルタリング付きで一覧表示
#### 入力
| パラメータ | 型 | 必須 | 説明 |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | はい | Fireflies APIキー |
| `keyword` | string | いいえ | ミーティングタイトルまたは文字起こし内の検索キーワード |
| `fromDate` | string | いいえ | この日付以降の文字起こしをフィルタリング(ISO 8601形式) |
| `toDate` | string | いいえ | この日付までの文字起こしをフィルタリング(ISO 8601形式) |
| `hostEmail` | string | いいえ | ミーティングホストのメールアドレスでフィルタリング |
| `participants` | string | いいえ | 参加者のメールアドレスでフィルタリング(カンマ区切り) |
| `limit` | number | いいえ | 返す文字起こしの最大数(最大50) |
| `skip` | number | いいえ | ページネーションのためにスキップする文字起こしの数 |
#### 出力
| パラメータ | 型 | 説明 |
| --------- | ---- | ----------- |
| `transcripts` | array | 文字起こしのリスト |
| `count` | number | 返された文字起こしの数 |
### `fireflies_get_transcript`
要約、アクションアイテム、分析を含む完全な詳細情報を持つ単一の文字起こしを取得
#### 入力
| パラメータ | 型 | 必須 | 説明 |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | はい | Fireflies APIキー |
| `transcriptId` | string | はい | 取得する文字起こしID |
#### 出力
| パラメータ | 型 | 説明 |
| --------- | ---- | ----------- |
| `transcript` | object | 完全な詳細情報を持つ文字起こし |
### `fireflies_get_user`
Fireflies.aiからユーザー情報を取得します。IDが指定されていない場合は現在のユーザーを返します。
#### 入力
| パラメータ | 型 | 必須 | 説明 |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | はい | Fireflies APIキー |
| `userId` | string | いいえ | 取得するユーザーID(オプション、デフォルトはAPIキー所有者) |
#### 出力
| パラメータ | 型 | 説明 |
| --------- | ---- | ----------- |
| `user` | object | ユーザー情報 |
### `fireflies_list_users`
Fireflies.aiチーム内のすべてのユーザーを一覧表示します
#### 入力
| パラメータ | 型 | 必須 | 説明 |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | はい | Fireflies APIキー |
#### 出力
| パラメータ | 型 | 説明 |
| --------- | ---- | ----------- |
| `users` | array | チームユーザーのリスト |
### `fireflies_upload_audio`
音声ファイルのURLをFireflies.aiにアップロードして文字起こしを行います
#### 入力
| パラメータ | 型 | 必須 | 説明 |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | はい | Fireflies APIキー |
| `audioFile` | file | いいえ | 文字起こし用にアップロードする音声/動画ファイル |
| `audioUrl` | string | いいえ | 音声/動画ファイルの公開HTTPS URL(MP3、MP4、WAV、M4A、OGG) |
| `title` | string | いいえ | ミーティング/文字起こしのタイトル |
| `webhook` | string | いいえ | 文字起こし完了時に通知するWebhook URL |
| `language` | string | いいえ | 文字起こしの言語コード(スペイン語は「es」、ドイツ語は「de」) |
| `attendees` | string | いいえ | JSON形式の参加者\[\{"displayName": "名前", "email": "email@example.com"\}\] |
| `clientReferenceId` | string | いいえ | 追跡用のカスタム参照ID |
#### 出力
| パラメータ | 型 | 説明 |
| --------- | ---- | ----------- |
| `success` | boolean | アップロードが成功したかどうか |
| `title` | string | アップロードされたミーティングのタイトル |
| `message` | string | Firefliesからのステータスメッセージ |
### `fireflies_delete_transcript`
Fireflies.aiからトランスクリプトを削除する
#### 入力
| パラメータ | 型 | 必須 | 説明 |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | はい | Fireflies APIキー |
| `transcriptId` | string | はい | 削除するトランスクリプトID |
#### 出力
| パラメータ | 型 | 説明 |
| --------- | ---- | ----------- |
| `success` | boolean | トランスクリプトが正常に削除されたかどうか |
### `fireflies_add_to_live_meeting`
進行中のミーティングにFireflies.aiボットを追加して録音および文字起こしを行う
#### 入力
| パラメータ | 型 | 必須 | 説明 |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | はい | Fireflies APIキー |
| `meetingLink` | string | はい | 有効なミーティングURL(Zoom、Google Meet、Microsoft Teamsなど) |
| `title` | string | いいえ | ミーティングのタイトル(最大256文字) |
| `meetingPassword` | string | いいえ | 必要な場合のミーティングパスワード(最大32文字) |
| `duration` | number | いいえ | ミーティングの長さ(分単位、15-120、デフォルト:60) |
| `language` | string | いいえ | 文字起こしの言語コード(例:"en"、"es"、"de") |
#### 出力
| パラメータ | 型 | 説明 |
| --------- | ---- | ----------- |
| `success` | boolean | ボットがミーティングに正常に追加されたかどうか |
### `fireflies_create_bite`
トランスクリプトの特定の時間範囲からサウンドバイト/ハイライトを作成します
#### 入力
| パラメータ | 型 | 必須 | 説明 |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | はい | Fireflies APIキー |
| `transcriptId` | string | はい | バイトを作成するトランスクリプトのID |
| `startTime` | number | はい | バイトの開始時間(秒) |
| `endTime` | number | はい | バイトの終了時間(秒) |
| `name` | string | いいえ | バイトの名前(最大256文字) |
| `mediaType` | string | いいえ | メディアタイプ:「video」または「audio」 |
| `summary` | string | いいえ | バイトの概要(最大500文字) |
#### 出力
| パラメータ | 型 | 説明 |
| --------- | ---- | ----------- |
| `bite` | object | 作成されたバイトの詳細 |
### `fireflies_list_bites`
Fireflies.aiからサウンドバイト/ハイライトを一覧表示します
#### 入力
| パラメータ | 型 | 必須 | 説明 |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | はい | Fireflies APIキー |
| `transcriptId` | string | いいえ | 特定のトランスクリプトのバイトをフィルタリング |
| `mine` | boolean | いいえ | APIキー所有者が所有するバイトのみを返す(デフォルト:true) |
| `limit` | number | いいえ | 返すバイトの最大数(最大50) |
| `skip` | number | いいえ | ページネーションのためにスキップするバイトの数 |
#### 出力
| パラメータ | 型 | 説明 |
| --------- | ---- | ----------- |
| `bites` | array | バイト/サウンドバイトのリスト |
### `fireflies_list_contacts`
Fireflies.aiミーティングからすべての連絡先をリスト表示
#### 入力
| パラメータ | 型 | 必須 | 説明 |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | はい | Fireflies APIキー |
#### 出力
| パラメータ | 型 | 説明 |
| --------- | ---- | ----------- |
| `contacts` | array | ミーティングからの連絡先のリスト |
## 注記
- カテゴリ: `tools`
- タイプ: `fireflies`

View File

@@ -22,16 +22,19 @@ import { Image } from '@/components/ui/image'
<Cards>
<Card title="Start" href="/triggers/start">
エディタ実行、APIデプロイメント、チャットデプロイメントをサポートする統合エントリーポイント
エディタ実行、APIデプロイ、チャットデプロイをサポートする統合エントリーポイント
</Card>
<Card title="Webhook" href="/triggers/webhook">
外部のwebhookペイロードを受信
外部Webhookペイロードを受信
</Card>
<Card title="Schedule" href="/triggers/schedule">
Cronまたは間隔ベースの実行
Cronまたはインターバルベースの実行
</Card>
<Card title="RSS Feed" href="/triggers/rss">
新しいコンテンツのRSSとAtomフィードを監視
RSSおよびAtomフィードの新しいコンテンツを監視
</Card>
<Card title="Email Polling Groups" href="#email-polling-groups">
チームのGmailおよびOutlook受信トレイを監視
</Card>
</Cards>
@@ -39,10 +42,11 @@ import { Image } from '@/components/ui/image'
| トリガー | 開始条件 |
|---------|-----------------|
| **Start** | エディタ実行、APIへのデプロイリクエスト、またはチャットメッセージ |
| **Start** | エディタ実行、deploy-to-APIリクエスト、またはチャットメッセージ |
| **Schedule** | スケジュールブロックで管理されるタイマー |
| **Webhook** | 受信HTTPリクエスト時 |
| **Webhook** | インバウンドHTTPリクエスト時 |
| **RSS Feed** | フィードに新しいアイテムが公開された時 |
| **Email Polling Groups** | チームのGmailまたはOutlook受信トレイに新しいメールが受信された時 |
> スタートブロックは常に `input`、`conversationId`、および `files` フィールドを公開します。追加の構造化データには入力フォーマットにカスタムフィールドを追加してください。
@@ -65,3 +69,25 @@ import { Image } from '@/components/ui/image'
ワークフローに複数のトリガーがある場合、最も優先度の高いトリガーが実行されます。例えば、スタートブロックとウェブフックトリガーの両方がある場合、実行をクリックするとスタートブロックが実行されます。
**モックペイロードを持つ外部トリガー**: 外部トリガー(ウェブフックと連携)が手動で実行される場合、Simはトリガーの予想されるデータ構造に基づいてモックペイロードを自動生成します。これにより、テスト中に下流のブロックが変数を正しく解決できるようになります。
## Email Polling Groups
Polling Groupsを使用すると、単一のトリガーで複数のチームメンバーのGmailまたはOutlook受信トレイを監視できます。TeamまたはEnterpriseプランが必要です。
**Polling Groupの作成**(管理者/オーナー)
1. **設定 → Email Polling**に移動
2. **作成**をクリックし、GmailまたはOutlookを選択
3. グループの名前を入力
**メンバーの招待**
1. Polling Groupの**メンバーを追加**をクリック
2. メールアドレスを入力(カンマまたは改行で区切る、またはCSVをドラッグ&ドロップ)
3. **招待を送信**をクリック
招待された人は、アカウントを接続するためのリンクが記載されたメールを受信します。接続されると、その受信トレイは自動的にPolling Groupに含まれます。招待された人は、Sim組織のメンバーである必要はありません。
**ワークフローでの使用**
メールトリガーを設定する際、個別のアカウントではなく、認証情報ドロップダウンからPolling Groupを選択します。システムは各メンバーのWebhookを作成し、すべてのメールをワークフローを通じてルーティングします。

View File

@@ -0,0 +1,75 @@
---
title: 企业版
description: 为具有高级安全性和合规性需求的组织提供企业级功能
---
import { Callout } from 'fumadocs-ui/components/callout'
Sim Studio 企业版为需要更高安全性、合规性和管理能力的组织提供高级功能。
---
## 自带密钥(BYOK)
使用您自己的 API 密钥对接 AI 模型服务商,而不是使用 Sim Studio 托管的密钥。
### 支持的服务商
| 服务商 | 用途 |
|----------|-------|
| OpenAI | 知识库嵌入、Agent 模块 |
| Anthropic | Agent 模块 |
| Google | Agent 模块 |
| Mistral | 知识库 OCR |
### 配置方法
1. 在您的工作区进入 **设置** → **BYOK**
2. 为您的服务商点击 **添加密钥**
3. 输入您的 API 密钥并保存
<Callout type="warn">
BYOK 密钥静态加密存储。仅组织管理员和所有者可管理密钥。
</Callout>
配置后,工作流将使用您的密钥而非 Sim Studio 托管密钥。如移除,工作流会自动切换回托管密钥。
---
## 单点登录(SSO)
企业级身份认证,支持 SAML 2.0 和 OIDC,实现集中式身份管理。
### 支持的服务商
- Okta
- Azure AD / Entra ID
- Google Workspace
- OneLogin
- 任何 SAML 2.0 或 OIDC 服务商
### 配置方法
1. 在您的工作区进入 **设置** → **SSO**
2. 选择您的身份提供商
3. 使用 IdP 元数据配置连接
4. 为您的组织启用 SSO
<Callout type="info">
启用 SSO 后,团队成员将通过您的身份提供商进行身份验证,而不再使用邮箱/密码。
</Callout>
---
## 自主部署
对于自主部署场景,可通过环境变量启用企业功能:
| 变量 | 描述 |
|----------|-------------|
| `SSO_ENABLED`、`NEXT_PUBLIC_SSO_ENABLED` | 使用 SAML/OIDC 的单点登录 |
| `CREDENTIAL_SETS_ENABLED`、`NEXT_PUBLIC_CREDENTIAL_SETS_ENABLED` | 用于邮件触发器的轮询组 |
<Callout type="warn">
BYOK 仅适用于托管版 Sim Studio。自托管部署需通过环境变量直接配置 AI 提供商密钥。
</Callout>

View File

@@ -16,11 +16,11 @@ MCP 服务器用于将您的工作流工具进行分组。您可以在工作区
<Video src="mcp/mcp-server.mp4" width={700} height={450} />
</div>
1. 进入 **设置 → MCP 服务器**
1. 进入 **设置 → 已部署的 MCPs**
2. 点击 **创建服务器**
3. 输入名称和可选描述
4. 复制服务器 URL 以在您的 MCP 客户端中使用
5. 查看并管理已添加到服务器的所有工具
4. 复制服务器 URL 以在你的 MCP 客户端中使用
5. 查看并管理已添加到服务器的所有工具
## 添加工作流为工具
@@ -78,7 +78,7 @@ MCP 服务器用于将您的工作流工具进行分组。您可以在工作区
## 服务器管理
在 **设置 → MCP 服务器** 的服务器详情视图中,您可以:
在 **设置 → 已部署的 MCPs** 的服务器详情页,你可以:
- **查看工具**:查看添加到服务器的所有工作流
- **复制 URL**:获取 MCP 客户端的服务器 URL

View File

@@ -27,9 +27,9 @@ MCP 服务器提供工具集合,供您的代理使用。您可以在工作区
</div>
1. 进入您的工作区设置
2. 转到 **MCP 服务器** 部分
3. 点击 **添加 MCP 服务器**
4. 输入服务器配置详情
2. 前往 **Deployed MCPs** 部分
3. 点击 **Add MCP Server**
4. 输入服务器配置信息
5. 保存配置
<Callout type="info">

View File

@@ -0,0 +1,233 @@
---
title: Fireflies
description: 与 Fireflies.ai 会议转录和录音进行交互
---
import { BlockInfoCard } from "@/components/ui/block-info-card"
<BlockInfoCard
type="fireflies"
color="#100730"
/>
{/* MANUAL-CONTENT-START:intro */}
[Fireflies.ai](https://fireflies.ai/) 是一个会议转录与智能平台,可与 Sim 集成,让你的代理可以通过零代码自动化,直接处理会议录音、转录和洞察。
Fireflies 在 Sim 中的集成提供了以下工具:
- **列出会议转录:** 为你的团队或账户获取多个会议及其摘要信息。
- **获取完整转录详情:** 访问详细转录内容,包括摘要、行动项、主题和与会者分析。
- **上传音频或视频:** 上传音频/视频文件或提供 URL 进行转录——可选设置语言、标题、与会者,并自动获取会议笔记。
- **搜索转录:** 通过关键词、参与者、主持人或时间范围查找会议,快速定位相关讨论。
- **删除转录:** 从你的 Fireflies 工作区中移除特定会议转录。
- **创建音频片段(Bites):** 从转录中提取并高亮关键时刻,生成音频或视频片段。
- **转录完成时触发工作流:** 使用提供的 webhook 触发器,在 Fireflies 会议转录完成后自动激活 Sim 工作流,实现基于新会议数据的实时自动化和通知。
结合这些功能,你可以简化会后操作,提取结构化洞察,自动发送通知,管理录音,并围绕组织的通话编排自定义工作流——所有操作都可通过你的 API key 和 Fireflies 凭证安全完成。
{/* MANUAL-CONTENT-END */}
## 使用说明
将 Fireflies.ai 集成到工作流中。管理会议转录、为实时会议添加机器人、创建音频片段等。还可在转录完成时触发工作流。
## 工具
### `fireflies_list_transcripts`
列出来自 Fireflies.ai 的会议记录,并可选进行筛选
#### 输入
| 参数 | 类型 | 必填 | 说明 |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | 是 | Fireflies API key |
| `keyword` | string | 否 | 按会议标题或记录内容搜索关键词 |
| `fromDate` | string | 否 | 从此日期筛选记录ISO 8601 格式) |
| `toDate` | string | 否 | 筛选至此日期的记录ISO 8601 格式) |
| `hostEmail` | string | 否 | 按会议主持人邮箱筛选 |
| `participants` | string | 否 | 按参与者邮箱筛选(逗号分隔) |
| `limit` | number | 否 | 返回的最大记录数(最多 50 条) |
| `skip` | number | 否 | 分页时跳过的记录数 |
#### 输出
| 参数 | 类型 | 说明 |
| --------- | ---- | ----------- |
| `transcripts` | array | 记录列表 |
| `count` | number | 返回的记录数 |
### `fireflies_get_transcript`
获取单条会议记录,包含摘要、行动项和分析等完整信息
#### 输入
| 参数 | 类型 | 必填 | 说明 |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | 是 | Fireflies API key |
| `transcriptId` | string | 是 | 要获取的记录 ID |
#### 输出
| 参数 | 类型 | 说明 |
| --------- | ---- | ----------- |
| `transcript` | object | 包含完整信息的会议记录 |
### `fireflies_get_user`
从 Fireflies.ai 获取用户信息。如果未指定 ID,则返回当前用户信息。
#### 输入
| 参数 | 类型 | 必填 | 说明 |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | 是 | Fireflies API key |
| `userId` | string | 否 | 要检索的用户 ID可选默认为 API key 所有者) |
#### 输出
| 参数 | 类型 | 说明 |
| --------- | ---- | ----------- |
| `user` | object | 用户信息 |
### `fireflies_list_users`
列出你在 Fireflies.ai 团队中的所有用户
#### 输入
| 参数 | 类型 | 必填 | 说明 |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | 是 | Fireflies API key |
#### 输出
| 参数 | 类型 | 说明 |
| --------- | ---- | ----------- |
| `users` | array | 团队用户列表 |
### `fireflies_upload_audio`
上传音频文件 URL 到 Fireflies.ai 进行转录
#### 输入
| 参数 | 类型 | 必填 | 说明 |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | 是 | Fireflies API key |
| `audioFile` | file | 否 | 要上传用于转录的音频/视频文件 |
| `audioUrl` | string | 否 | 音频/视频文件的公开 HTTPS URL(MP3、MP4、WAV、M4A、OGG) |
| `title` | string | 否 | 会议/转录标题 |
| `webhook` | string | 否 | 转录完成后通知的 Webhook URL |
| `language` | string | 否 | 转录语言代码(如 "es" 表示西班牙语,"de" 表示德语) |
| `attendees` | string | 否 | 以 JSON 格式填写的与会者信息:\[\{"displayName": "Name", "email": "email@example.com"\}\] |
| `clientReferenceId` | string | 否 | 用于追踪的自定义参考 ID |
#### 输出
| 参数 | 类型 | 描述 |
| --------- | ---- | ----------- |
| `success` | boolean | 上传是否成功 |
| `title` | string | 上传会议的标题 |
| `message` | string | 来自 Fireflies 的状态信息 |
### `fireflies_delete_transcript`
从 Fireflies.ai 删除一份转录记录
#### 输入
| 参数 | 类型 | 必填 | 描述 |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | 是 | Fireflies API 密钥 |
| `transcriptId` | string | 是 | 要删除的转录 ID |
#### 输出
| 参数 | 类型 | 描述 |
| --------- | ---- | ----------- |
| `success` | boolean | 转录是否已成功删除 |
### `fireflies_add_to_live_meeting`
将 Fireflies.ai 机器人添加到正在进行的会议中进行录音和转录
#### 输入
| 参数 | 类型 | 必填 | 描述 |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | 是 | Fireflies API 密钥 |
| `meetingLink` | string | 是 | 有效的会议 URL如 Zoom、Google Meet、Microsoft Teams 等) |
| `title` | string | 否 | 会议标题(最多 256 个字符) |
| `meetingPassword` | string | 否 | 会议密码(如需要,最多 32 个字符) |
| `duration` | number | 否 | 会议时长(分钟,15-120,默认:60) |
| `language` | string | 否 | 转录语言代码(如 "en"、"es"、"de") |
#### 输出
| 参数 | 类型 | 描述 |
| --------- | ---- | ----------- |
| `success` | boolean | 机器人是否已成功添加到会议中 |
### `fireflies_create_bite`
从转录文本的指定时间范围创建一个音频片段/高光
#### 输入
| 参数 | 类型 | 必填 | 描述 |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | 是 | Fireflies API key |
| `transcriptId` | string | 是 | 要创建片段的转录文本 ID |
| `startTime` | number | 是 | 片段起始时间(秒) |
| `endTime` | number | 是 | 片段结束时间(秒) |
| `name` | string | 否 | 片段名称(最多 256 个字符) |
| `mediaType` | string | 否 | 媒体类型:"video" 或 "audio" |
| `summary` | string | 否 | 片段摘要(最多 500 个字符) |
#### 输出
| 参数 | 类型 | 描述 |
| --------- | ---- | ----------- |
| `bite` | object | 创建的片段详情 |
### `fireflies_list_bites`
列出 Fireflies.ai 的音频片段/高光
#### 输入
| 参数 | 类型 | 必填 | 描述 |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | 是 | Fireflies API key |
| `transcriptId` | string | 否 | 按指定转录文本筛选片段 |
| `mine` | boolean | 否 | 仅返回 API key 拥有者拥有的片段(默认:true) |
| `limit` | number | 否 | 返回的片段最大数量(最多 50 个) |
| `skip` | number | 否 | 分页时跳过的片段数量 |
#### 输出
| 参数 | 类型 | 描述 |
| --------- | ---- | ----------- |
| `bites` | array | bite/soundbite 列表 |
### `fireflies_list_contacts`
列出你在 Fireflies.ai 会议中的所有联系人
#### 输入
| 参数 | 类型 | 必填 | 描述 |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | 是 | Fireflies API key |
#### 输出
| 参数 | 类型 | 描述 |
| --------- | ---- | ----------- |
| `contacts` | array | 会议联系人列表 |
## 备注
- 分类:`tools`
- 类型:`fireflies`

View File

@@ -21,17 +21,20 @@ import { Image } from '@/components/ui/image'
使用 Start 块处理从编辑器、部署到 API 或部署到聊天的所有操作。其他触发器可用于事件驱动的工作流:
<Cards>
<Card title="开始" href="/triggers/start">
支持编辑器运行、API 部署和聊天部署的统一入口
<Card title="Start" href="/triggers/start">
支持编辑器运行、API 部署和聊天部署的统一入口
</Card>
<Card title="Webhook" href="/triggers/webhook">
接收外部 webhook 负载
</Card>
<Card title="计划" href="/triggers/schedule">
基于 Cron 或间隔的执行
<Card title="Schedule" href="/triggers/schedule">
基于 cron 或间隔的执行
</Card>
<Card title="RSS 源" href="/triggers/rss">
监控 RSS 和 Atom 源的新内容
<Card title="RSS Feed" href="/triggers/rss">
监控 RSS 和 Atom 订阅源的新内容
</Card>
<Card title="Email Polling Groups" href="#email-polling-groups">
监控团队 Gmail 和 Outlook 收件箱
</Card>
</Cards>
@@ -39,10 +42,11 @@ import { Image } from '@/components/ui/image'
| 触发器 | 启动条件 |
|---------|-----------------|
| **开始** | 编辑器运行、部署到 API 请求或聊天消息 |
| **计划** | 在计划块中管理的定时器 |
| **Start** | 编辑器运行、API 部署请求或聊天消息 |
| **Schedule** | 在 schedule 块中管理的定时器 |
| **Webhook** | 收到入站 HTTP 请求时 |
| **RSS 源** | 源中发布了新项目 |
| **RSS Feed** | 订阅源中有新内容发布时 |
| **Email Polling Groups** | 团队 Gmail 或 Outlook 收件箱收到新邮件时 |
> Start 块始终公开 `input`、`conversationId` 和 `files` 字段。通过向输入格式添加自定义字段来增加结构化数据。
@@ -65,3 +69,25 @@ import { Image } from '@/components/ui/image'
如果您的工作流有多个触发器,将执行优先级最高的触发器。例如,如果您同时有 Start 块和 Webhook 触发器,点击运行将执行 Start 块。
**带有模拟负载的外部触发器**:当手动执行外部触发器(如 webhooks 和集成)时,Sim 会根据触发器的预期数据结构自动生成模拟负载。这确保了在测试过程中,下游模块可以正确解析变量。
## 邮件轮询组
轮询组可让你通过单一触发器监控多个团队成员的 Gmail 或 Outlook 收件箱。需要 Team 或 Enterprise 方案。
**创建轮询组**(管理员/所有者)
1. 前往 **设置 → 邮件轮询**
2. 点击 **创建**,选择 Gmail 或 Outlook
3. 输入组名
**邀请成员**
1. 在你的轮询组中点击 **添加成员**
2. 输入邮箱地址(用逗号或换行分隔,或拖拽 CSV 文件)
3. 点击 **发送邀请**
受邀者会收到一封带有连接账户链接的邮件。连接后,他们的收件箱会自动加入轮询组。受邀者无需成为你的 Sim 组织成员。
**在工作流中使用**
配置邮件触发器时,从凭据下拉菜单中选择你的轮询组,而不是单独账户。系统会为每位成员创建 webhook并将所有邮件通过你的工作流进行处理。

View File

@@ -4343,7 +4343,7 @@ checksums:
content/5: 6eee8c607e72b6c444d7b3ef07244f20
content/6: 747991e0e80e306dce1061ef7802db2a
content/7: 430153eacb29c66026cf71944df7be20
content/8: 5950966e19939b7a3a320d56ee4a674c
content/8: f9bdeac954d1d138c954c151db0403ec
content/9: 159cf7a6d62e64b0c5db27e73b8c1ff5
content/10: a723187777f9a848d4daa563e9dcbe17
content/11: b1c5f14e5290bcbbf5d590361ee7c053
@@ -5789,9 +5789,9 @@ checksums:
content/1: e71056df0f7b2eb3b2f271f21d0052cc
content/2: da2b445db16c149f56558a4ea876a5f0
content/3: cec18f48b2cd7974eb556880e6604f7f
content/4: b200402d6a01ab565fd56d113c530ef6
content/4: cff35e4208de8f6ef36a6eae79915fab
content/5: 4c3a5708af82c1ee42a12d14fd34e950
content/6: 64fbd5b16f4cff18ba976492a275c05e
content/6: 00a9f255e60b5979014694b0c2a3ba26
content/7: a28151eeb5ba3518b33809055b04f0f6
content/8: cffe5b901d78ebf2000d07dc7579533e
content/9: 73486253d24eeff7ac44dfd0c8868d87
@@ -5801,6 +5801,15 @@ checksums:
content/13: e5ca2445d3b69b062af5bf0a2988e760
content/14: 67e0b520d57e352689789eff5803ebbc
content/15: a1d7382600994068ca24dc03f46b7c73
content/16: 1895a0c773fddeb014c7aab468593b30
content/17: 5b478d664a0b1bc76f19516b2a6e2788
content/18: c97883b63e5e455cd2de51f0406f963f
content/19: 2ff6c01b8eebbdd653d864b105f53cde
content/20: 523b34e945343591d1df51a6ba6357dd
content/21: e6611cff00c91bd2327660aebf9418f4
content/22: 87e7e7df71f0883369e8abda30289c0f
content/23: b248d9eda347cfb122101a4e4b5eaa53
content/24: 2f003723d891d6c53c398b86c7397577
0bf172ef4ee9a2c94a2967d7d320b81b:
meta/title: 330265974a03ee22a09f42fa4ece25f6
meta/description: e3d54cbedf551315cf9e8749228c2d1c
@@ -50141,7 +50150,7 @@ checksums:
content/2: b082096b0c871b2a40418e479af6f158
content/3: 9c94aa34f44540b0632931a8244a6488
content/4: 14f33e16b5a98e4dbdda2a27aa0d7afb
content/5: d7b36732970b7649dd1aa1f1d0a34e74
content/5: 3ea8bad9314f442a69a87f313419ef1a
content/6: f554f833467a6dae5391372fc41dad53
content/7: 9cdb9189ecfcc4a6f567d3fd5fe342f0
content/8: 9a107692cb52c284c1cb022b516d700b
@@ -50158,7 +50167,7 @@ checksums:
content/19: a618fcff50c4856113428639359a922b
content/20: 5fd3a6d2dcd8aa18dbf0b784acaa271c
content/21: d118656dd565c4c22f3c0c3a7c7f3bee
content/22: f49b9be78f1e7a569e290acc1365d417
content/22: c161e7bcfba9cf6ef0ab8ef40ac0c17a
content/23: 0a70ebe6eb4c543c3810977ed46b69b0
content/24: ad8638a3473c909dbcb1e1d9f4f26381
content/25: 95343a9f81cd050d3713988c677c750f
@@ -50225,3 +50234,104 @@ checksums:
content/23: 59da7694b8be001fec8b9f9d7b604faf
content/24: 8fb6954068c6687d44121e21a95cf1b6
content/25: 9e7b1a1a453340d20adf4cacbd532018
fa1c42261042a9cde3e5c1f691169876:
meta/title: 34a88e7137f1af4a641d20c686673cf4
meta/description: 8371b5fceeb140f5ac5a6facbb778a5f
content/0: 1b031fb0c62c46b177aeed5c3d3f8f80
content/1: 2ea4b5bc50001e7c494837ceb1370539
content/2: 6306e3afffcd2563b1792c558ca2655e
content/3: b6ba91252e179f4fb17da86e51b3df12
content/4: b02e7d685008724ca7b34d8f4b43007c
content/5: 02d371955e9386f261757123f4240365
content/6: 821e6394b0a953e2b0842b04ae8f3105
content/7: cdbaa3964c4e6a7ccf7a326161d056cf
content/8: 9c8aa3f09c9b2bd50ea4cdff3598ea4e
content/9: 6383f4ae36fb08c2899399ba021b19b1
content/10: 9697169c028783b065b30044f4c0fe26
content/11: 371d0e46b4bd2c23f559b8bc112f6955
content/12: a7186564ee9cb3e4e96cb03dbc84d710
content/13: bcadfc362b69078beee0088e5936c98b
content/14: 82e7c6eb98b5b33f22431aecdca80703
content/15: d18932457fd95c545b2c870a3daea47f
content/16: 588b8dfb6af5511044c19eb468ab865d
content/17: 371d0e46b4bd2c23f559b8bc112f6955
content/18: b0a7eeeb3feae67dd21196780ae0d5eb
content/19: bcadfc362b69078beee0088e5936c98b
content/20: c87b7e083a1ade7bfe4e5c7639bbc2b4
content/21: 9b64a33ba85db593c28ac57d74be12e0
content/22: 1cd336acd20989efc7a172f67cd3633c
content/23: 371d0e46b4bd2c23f559b8bc112f6955
content/24: a4f5290e9ed361bf4fdd835f5c6efd6c
content/25: bcadfc362b69078beee0088e5936c98b
content/26: ae0a66fda10f781f6eaf952d27025c2d
content/27: 73ee8f45410139ce0426d03776ce9a0b
content/28: b204098918149dc3d623810d2e0f10df
content/29: 371d0e46b4bd2c23f559b8bc112f6955
content/30: ad69ead28418cfe4b757c6b65ce6c985
content/31: bcadfc362b69078beee0088e5936c98b
content/32: 6d232c1820e52c643ca5074fa1ea0e0b
content/33: 383c61dc19eeb83f3b1b132087581c04
content/34: 0fb444a929514e2b5654af1f62087809
content/35: 371d0e46b4bd2c23f559b8bc112f6955
content/36: dfcfaf2aa85cde2e3282b507bc5c4b59
content/37: bcadfc362b69078beee0088e5936c98b
content/38: 4a8a54bbcff9102d58847d3dd8cf6f9d
content/39: fd73283c59dad77ce75095aece6f934b
content/40: 06668b83f4b8b37c426b0384d211a27f
content/41: 371d0e46b4bd2c23f559b8bc112f6955
content/42: 53db172d2fa498f7dca7cacdd3fdc67c
content/43: bcadfc362b69078beee0088e5936c98b
content/44: 56a281731309b62b67662e6a46a2a55b
content/45: 6accb29bf4f712a88304d74becba1aeb
content/46: a6cdfbfad60e27a6dd080833fc5c0cda
content/47: 371d0e46b4bd2c23f559b8bc112f6955
content/48: ce13160ba405b0b8d396b7aa98810b23
content/49: bcadfc362b69078beee0088e5936c98b
content/50: 65ff00bcedb0c69c3e4eec317cdfcb44
content/51: bfd77718128856a7549229a9dbe3c2d5
content/52: e322fc91f9a1546a0fdfbb137bdbdfc8
content/53: 371d0e46b4bd2c23f559b8bc112f6955
content/54: 631159b3a40d1eaf269229d34ae33eb8
content/55: bcadfc362b69078beee0088e5936c98b
content/56: b52f83ac6d343783d1a0c06d14a99368
content/57: cb2ca71e1732e20d0e100229914f3191
content/58: 825d7e4652afde24f405b6cc347f51d3
content/59: 371d0e46b4bd2c23f559b8bc112f6955
content/60: e715106d45f9a7021c4d1b76ae2277ad
content/61: bcadfc362b69078beee0088e5936c98b
content/62: 7e2dc302f6805a80dc63c8ab1dfb0955
content/63: 3373816242f7df96dcaf462f8913bdaf
content/64: 7f8c9d671cfc8a7ac34c2101de4e86cc
content/65: 371d0e46b4bd2c23f559b8bc112f6955
content/66: ad69ead28418cfe4b757c6b65ce6c985
content/67: bcadfc362b69078beee0088e5936c98b
content/68: ba6b5020ed971cd7ffc7f0423650dfbf
content/69: b3f310d5ef115bea5a8b75bf25d7ea9a
content/70: 0362be478aa7ba4b6d1ebde0bd83e83a
f5bc5f89ed66818f4c485c554bf26eea:
meta/title: c70474271708e5b27392fde87462fa26
meta/description: 7b47db7fbb818c180b99354b912a72b3
content/0: 232be69c8f3053a40f695f9c9dcb3f2e
content/1: a4a62a6e782e18bd863546dfcf2aec1c
content/2: 51adf33450cab2ef392e93147386647c
content/3: ada515cf6e2e0f9d3f57f720f79699d3
content/4: d5e8b9f64d855675588845dc4124c491
content/5: 3acf1f0551f6097ca6159e66f5c8da1a
content/6: 6a6e277ded1a063ec2c2067abb519088
content/7: 6debcd334c3310480cbe6feab87f37b5
content/8: 0e3372052a2b3a1c43d853d6ed269d69
content/9: 90063613714128f4e61e9588e2d2c735
content/10: 182154179fe2a8b6b73fde0d04e0bf4c
content/11: 51adf33450cab2ef392e93147386647c
content/12: 73c3e8a5d36d6868fdb455fcb3d6074c
content/13: 30cd8f1d6197bce560a091ba19d0392a
content/14: 3acf1f0551f6097ca6159e66f5c8da1a
content/15: 997deef758698d207be9382c45301ad6
content/16: 6debcd334c3310480cbe6feab87f37b5
content/17: e26c8c2dffd70baef0253720c1511886
content/18: a99eba53979531f1c974cf653c346909
content/19: 51adf33450cab2ef392e93147386647c
content/20: ca3ec889fb218b8b130959ff04baa659
content/21: 306617201cf63b42f09bb72c9722e048
content/22: 4b48ba3f10b043f74b70edeb4ad87080
content/23: c8531bd570711abc1963d8b5dcf9deef

View File

@@ -109,11 +109,15 @@ function SignupFormContent({
setEmail(emailParam)
}
const redirectParam = searchParams.get('redirect')
// Check both 'redirect' and 'callbackUrl' params (login page uses callbackUrl)
const redirectParam = searchParams.get('redirect') || searchParams.get('callbackUrl')
if (redirectParam) {
setRedirectUrl(redirectParam)
if (redirectParam.startsWith('/invite/')) {
if (
redirectParam.startsWith('/invite/') ||
redirectParam.startsWith('/credential-account/')
) {
setIsInviteFlow(true)
}
}

View File

@@ -42,17 +42,6 @@ export default function StructuredData() {
publisher: {
'@id': 'https://sim.ai/#organization',
},
potentialAction: [
{
'@type': 'SearchAction',
'@id': 'https://sim.ai/#searchaction',
target: {
'@type': 'EntryPoint',
urlTemplate: 'https://sim.ai/search?q={search_term_string}',
},
'query-input': 'required name=search_term_string',
},
],
inLanguage: 'en-US',
},
{
@@ -110,7 +99,7 @@ export default function StructuredData() {
name: 'Community Plan',
price: '0',
priceCurrency: 'USD',
priceValidUntil: '2025-12-31',
priceValidUntil: '2026-12-31',
itemCondition: 'https://schema.org/NewCondition',
availability: 'https://schema.org/InStock',
seller: {
@@ -134,7 +123,7 @@ export default function StructuredData() {
unitText: 'MONTH',
billingIncrement: 1,
},
priceValidUntil: '2025-12-31',
priceValidUntil: '2026-12-31',
itemCondition: 'https://schema.org/NewCondition',
availability: 'https://schema.org/InStock',
seller: {
@@ -154,7 +143,7 @@ export default function StructuredData() {
unitText: 'MONTH',
billingIncrement: 1,
},
priceValidUntil: '2025-12-31',
priceValidUntil: '2026-12-31',
itemCondition: 'https://schema.org/NewCondition',
availability: 'https://schema.org/InStock',
seller: {
@@ -184,8 +173,8 @@ export default function StructuredData() {
screenshot: [
{
'@type': 'ImageObject',
url: 'https://sim.ai/screenshots/workflow-builder.png',
caption: 'Sim workflow builder interface',
url: 'https://sim.ai/logo/426-240/primary/small.png',
caption: 'Sim AI agent workflow builder interface',
},
],
},
@@ -223,16 +212,9 @@ export default function StructuredData() {
}
return (
<>
<script
type='application/ld+json'
dangerouslySetInnerHTML={{ __html: JSON.stringify(structuredData) }}
/>
{/* LLM-friendly semantic HTML comments */}
{/* About: Sim is a visual workflow builder for AI agents and large language models (LLMs) */}
{/* Purpose: Enable users to create AI-powered automations without coding */}
{/* Features: Drag-and-drop interface, 100+ integrations, multi-model support */}
{/* Use cases: Email automation, chatbots, data analysis, content generation */}
</>
<script
type='application/ld+json'
dangerouslySetInnerHTML={{ __html: JSON.stringify(structuredData) }}
/>
)
}

View File

@@ -0,0 +1,42 @@
import { getBaseUrl } from '@/lib/core/utils/urls'
export async function GET() {
const baseUrl = getBaseUrl()
const expiresDate = new Date()
expiresDate.setFullYear(expiresDate.getFullYear() + 1)
const expires = expiresDate.toISOString()
const securityTxt = `# Security Policy for Sim
# https://securitytxt.org/
# RFC 9116: https://www.rfc-editor.org/rfc/rfc9116.html
# Required: Contact information for security reports
Contact: mailto:security@sim.ai
# Required: When this file expires (ISO 8601 format, within 1 year)
Expires: ${expires}
# Preferred languages for security reports
Preferred-Languages: en
# Canonical URL for this security.txt file
Canonical: ${baseUrl}/.well-known/security.txt
# Link to security policy page
Policy: ${baseUrl}/security
# Acknowledgments page for security researchers
# Acknowledgments: ${baseUrl}/security/thanks
# If you discover a security vulnerability, please report it responsibly.
# We appreciate your help in keeping Sim and our users secure.
`
return new Response(securityTxt, {
headers: {
'Content-Type': 'text/plain; charset=utf-8',
'Cache-Control': 'public, max-age=86400',
},
})
}
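The handler above is self-contained, so it can be exercised directly. A minimal test sketch, assuming the handler is exported from a `route.ts` next to the test and that `getBaseUrl` resolves in the test environment (file name and location are illustrative):
```ts
// Hypothetical vitest sketch for the security.txt route shown above.
import { describe, expect, it } from 'vitest'
import { GET } from './route' // assumed relative path to the handler above

describe('GET /.well-known/security.txt', () => {
  it('returns an RFC 9116 style document', async () => {
    const res = await GET()
    const body = await res.text()
    expect(res.headers.get('Content-Type')).toBe('text/plain; charset=utf-8')
    expect(body).toContain('Contact: mailto:security@sim.ai')
    expect(body).toMatch(/Expires: \d{4}-\d{2}-\d{2}T/) // set one year out by the handler
  })
})
```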

View File

@@ -8,11 +8,18 @@ import { createMockLogger, createMockRequest } from '@/app/api/__test-utils__/ut
describe('OAuth Disconnect API Route', () => {
const mockGetSession = vi.fn()
const mockSelectChain = {
from: vi.fn().mockReturnThis(),
innerJoin: vi.fn().mockReturnThis(),
where: vi.fn().mockResolvedValue([]),
}
const mockDb = {
delete: vi.fn().mockReturnThis(),
where: vi.fn(),
select: vi.fn().mockReturnValue(mockSelectChain),
}
const mockLogger = createMockLogger()
const mockSyncAllWebhooksForCredentialSet = vi.fn().mockResolvedValue({})
const mockUUID = 'mock-uuid-12345678-90ab-cdef-1234-567890abcdef'
@@ -33,6 +40,13 @@ describe('OAuth Disconnect API Route', () => {
vi.doMock('@sim/db/schema', () => ({
account: { userId: 'userId', providerId: 'providerId' },
credentialSetMember: {
id: 'id',
credentialSetId: 'credentialSetId',
userId: 'userId',
status: 'status',
},
credentialSet: { id: 'id', providerId: 'providerId' },
}))
vi.doMock('drizzle-orm', () => ({
@@ -45,6 +59,14 @@ describe('OAuth Disconnect API Route', () => {
vi.doMock('@sim/logger', () => ({
createLogger: vi.fn().mockReturnValue(mockLogger),
}))
vi.doMock('@/lib/core/utils/request', () => ({
generateRequestId: vi.fn().mockReturnValue('test-request-id'),
}))
vi.doMock('@/lib/webhooks/utils.server', () => ({
syncAllWebhooksForCredentialSet: mockSyncAllWebhooksForCredentialSet,
}))
})
afterEach(() => {

View File

@@ -1,11 +1,12 @@
import { db } from '@sim/db'
import { account } from '@sim/db/schema'
import { account, credentialSet, credentialSetMember } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, like, or } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { generateRequestId } from '@/lib/core/utils/request'
import { syncAllWebhooksForCredentialSet } from '@/lib/webhooks/utils.server'
export const dynamic = 'force-dynamic'
@@ -74,6 +75,49 @@ export async function POST(request: NextRequest) {
)
}
// Sync webhooks for all credential sets the user is a member of
// This removes webhooks that were using the disconnected credential
const userMemberships = await db
.select({
id: credentialSetMember.id,
credentialSetId: credentialSetMember.credentialSetId,
providerId: credentialSet.providerId,
})
.from(credentialSetMember)
.innerJoin(credentialSet, eq(credentialSetMember.credentialSetId, credentialSet.id))
.where(
and(
eq(credentialSetMember.userId, session.user.id),
eq(credentialSetMember.status, 'active')
)
)
for (const membership of userMemberships) {
// Only sync if the credential set matches this provider
// Credential sets store OAuth provider IDs like 'google-email' or 'outlook'
const matchesProvider =
membership.providerId === provider ||
membership.providerId === providerId ||
membership.providerId?.startsWith(`${provider}-`)
if (matchesProvider) {
try {
await syncAllWebhooksForCredentialSet(membership.credentialSetId, requestId)
logger.info(`[${requestId}] Synced webhooks after credential disconnect`, {
credentialSetId: membership.credentialSetId,
provider,
})
} catch (error) {
// Log but don't fail the disconnect - credential is already removed
logger.error(`[${requestId}] Failed to sync webhooks after credential disconnect`, {
credentialSetId: membership.credentialSetId,
provider,
error,
})
}
}
}
return NextResponse.json({ success: true }, { status: 200 })
} catch (error) {
logger.error(`[${requestId}] Error disconnecting OAuth provider`, error)

View File
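The `matchesProvider` check in the disconnect route above is worth spelling out: credential sets store OAuth provider IDs such as `google-email`, while the disconnect request may only carry the base `provider`. A small restatement of the same predicate with illustrative values (the standalone helper is made up for this sketch):
```ts
// Illustrative restatement of the provider-matching rule used above.
const matchesProvider = (
  setProviderId: string | null,
  provider: string,
  providerId: string
): boolean =>
  setProviderId === provider ||
  setProviderId === providerId ||
  Boolean(setProviderId?.startsWith(`${provider}-`))

console.log(matchesProvider('google-email', 'google', 'google-email')) // true: exact providerId match
console.log(matchesProvider('google-email', 'google', 'google'))       // true: `google-` prefix match
console.log(matchesProvider('outlook', 'google', 'google-email'))      // false: different provider
```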

@@ -138,7 +138,10 @@ describe('OAuth Token API Routes', () => {
const data = await response.json()
expect(response.status).toBe(400)
expect(data).toHaveProperty('error', 'Credential ID is required')
expect(data).toHaveProperty(
'error',
'Either credentialId or (credentialAccountUserId + providerId) is required'
)
expect(mockLogger.warn).toHaveBeenCalled()
})

View File

@@ -4,7 +4,7 @@ import { z } from 'zod'
import { authorizeCredentialUse } from '@/lib/auth/credential-access'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { getCredential, refreshTokenIfNeeded } from '@/app/api/auth/oauth/utils'
import { getCredential, getOAuthToken, refreshTokenIfNeeded } from '@/app/api/auth/oauth/utils'
export const dynamic = 'force-dynamic'
@@ -12,12 +12,17 @@ const logger = createLogger('OAuthTokenAPI')
const SALESFORCE_INSTANCE_URL_REGEX = /__sf_instance__:([^\s]+)/
const tokenRequestSchema = z.object({
credentialId: z
.string({ required_error: 'Credential ID is required' })
.min(1, 'Credential ID is required'),
workflowId: z.string().min(1, 'Workflow ID is required').nullish(),
})
const tokenRequestSchema = z
.object({
credentialId: z.string().min(1).optional(),
credentialAccountUserId: z.string().min(1).optional(),
providerId: z.string().min(1).optional(),
workflowId: z.string().min(1).nullish(),
})
.refine(
(data) => data.credentialId || (data.credentialAccountUserId && data.providerId),
'Either credentialId or (credentialAccountUserId + providerId) is required'
)
const tokenQuerySchema = z.object({
credentialId: z
@@ -58,9 +63,37 @@ export async function POST(request: NextRequest) {
)
}
const { credentialId, workflowId } = parseResult.data
const { credentialId, credentialAccountUserId, providerId, workflowId } = parseResult.data
if (credentialAccountUserId && providerId) {
logger.info(`[${requestId}] Fetching token by credentialAccountUserId + providerId`, {
credentialAccountUserId,
providerId,
})
try {
const accessToken = await getOAuthToken(credentialAccountUserId, providerId)
if (!accessToken) {
return NextResponse.json(
{
error: `No credential found for user ${credentialAccountUserId} and provider ${providerId}`,
},
{ status: 404 }
)
}
return NextResponse.json({ accessToken }, { status: 200 })
} catch (error) {
const message = error instanceof Error ? error.message : 'Failed to get OAuth token'
logger.warn(`[${requestId}] OAuth token error: ${message}`)
return NextResponse.json({ error: message }, { status: 403 })
}
}
if (!credentialId) {
return NextResponse.json({ error: 'Credential ID is required' }, { status: 400 })
}
// We already have workflowId from the parsed body; avoid forcing hybrid auth to re-read it
const authz = await authorizeCredentialUse(request, {
credentialId,
workflowId: workflowId ?? undefined,
@@ -70,7 +103,6 @@ export async function POST(request: NextRequest) {
return NextResponse.json({ error: authz.error || 'Unauthorized' }, { status: 403 })
}
// Fetch the credential as the owner to enforce ownership scoping
const credential = await getCredential(requestId, credentialId, authz.credentialOwnerUserId)
if (!credential) {
@@ -78,7 +110,6 @@ export async function POST(request: NextRequest) {
}
try {
// Refresh the token if needed
const { accessToken } = await refreshTokenIfNeeded(requestId, credential, credentialId)
let instanceUrl: string | undefined
@@ -145,7 +176,6 @@ export async function GET(request: NextRequest) {
return NextResponse.json({ error: 'User not authenticated' }, { status: 401 })
}
// Get the credential from the database
const credential = await getCredential(requestId, credentialId, auth.userId)
if (!credential) {

View File

@@ -1,7 +1,7 @@
import { db } from '@sim/db'
import { account, workflow } from '@sim/db/schema'
import { account, credentialSetMember, workflow } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, desc, eq } from 'drizzle-orm'
import { and, desc, eq, inArray } from 'drizzle-orm'
import { getSession } from '@/lib/auth'
import { refreshOAuthToken } from '@/lib/oauth'
@@ -105,10 +105,10 @@ export async function getOAuthToken(userId: string, providerId: string): Promise
refreshToken: account.refreshToken,
accessTokenExpiresAt: account.accessTokenExpiresAt,
idToken: account.idToken,
scope: account.scope,
})
.from(account)
.where(and(eq(account.userId, userId), eq(account.providerId, providerId)))
// Always use the most recently updated credential for this provider
.orderBy(desc(account.updatedAt))
.limit(1)
@@ -335,3 +335,108 @@ export async function refreshTokenIfNeeded(
throw error
}
}
export interface CredentialSetCredential {
userId: string
credentialId: string
accessToken: string
providerId: string
}
export async function getCredentialsForCredentialSet(
credentialSetId: string,
providerId: string
): Promise<CredentialSetCredential[]> {
logger.info(`Getting credentials for credential set ${credentialSetId}, provider ${providerId}`)
const members = await db
.select({ userId: credentialSetMember.userId })
.from(credentialSetMember)
.where(
and(
eq(credentialSetMember.credentialSetId, credentialSetId),
eq(credentialSetMember.status, 'active')
)
)
logger.info(`Found ${members.length} active members in credential set ${credentialSetId}`)
if (members.length === 0) {
logger.warn(`No active members found for credential set ${credentialSetId}`)
return []
}
const userIds = members.map((m) => m.userId)
logger.debug(`Member user IDs: ${userIds.join(', ')}`)
const credentials = await db
.select({
id: account.id,
userId: account.userId,
providerId: account.providerId,
accessToken: account.accessToken,
refreshToken: account.refreshToken,
accessTokenExpiresAt: account.accessTokenExpiresAt,
})
.from(account)
.where(and(inArray(account.userId, userIds), eq(account.providerId, providerId)))
logger.info(
`Found ${credentials.length} credentials with provider ${providerId} for ${members.length} members`
)
const results: CredentialSetCredential[] = []
for (const cred of credentials) {
const now = new Date()
const tokenExpiry = cred.accessTokenExpiresAt
const shouldRefresh =
!!cred.refreshToken && (!cred.accessToken || (tokenExpiry && tokenExpiry < now))
let accessToken = cred.accessToken
if (shouldRefresh && cred.refreshToken) {
try {
const refreshResult = await refreshOAuthToken(providerId, cred.refreshToken)
if (refreshResult) {
accessToken = refreshResult.accessToken
const updateData: Record<string, unknown> = {
accessToken: refreshResult.accessToken,
accessTokenExpiresAt: new Date(Date.now() + refreshResult.expiresIn * 1000),
updatedAt: new Date(),
}
if (refreshResult.refreshToken && refreshResult.refreshToken !== cred.refreshToken) {
updateData.refreshToken = refreshResult.refreshToken
}
await db.update(account).set(updateData).where(eq(account.id, cred.id))
logger.info(`Refreshed token for user ${cred.userId}, provider ${providerId}`)
}
} catch (error) {
logger.error(`Failed to refresh token for user ${cred.userId}, provider ${providerId}`, {
error: error instanceof Error ? error.message : String(error),
})
continue
}
}
if (accessToken) {
results.push({
userId: cred.userId,
credentialId: cred.id,
accessToken,
providerId,
})
}
}
logger.info(
`Found ${results.length} valid credentials for credential set ${credentialSetId}, provider ${providerId}`
)
return results
}
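A hedged sketch of how a caller might consume the helper, fanning out one poll per connected member credential; pollInbox is a hypothetical placeholder, not a function introduced by this change:
// Hypothetical usage only
const creds = await getCredentialsForCredentialSet(credentialSetId, 'google-email')
for (const { userId, credentialId, accessToken } of creds) {
  await pollInbox({ userId, credentialId, accessToken }) // pollInbox is a placeholder
}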

View File

@@ -1,7 +1,8 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { auth } from '@/lib/auth'
import { auth, getSession } from '@/lib/auth'
import { hasSSOAccess } from '@/lib/billing'
import { env } from '@/lib/core/config/env'
import { REDACTED_MARKER } from '@/lib/core/security/redaction'
@@ -63,10 +64,22 @@ const ssoRegistrationSchema = z.discriminatedUnion('providerType', [
export async function POST(request: NextRequest) {
try {
// SSO plugin must be enabled in Better Auth
if (!env.SSO_ENABLED) {
return NextResponse.json({ error: 'SSO is not enabled' }, { status: 400 })
}
// Check plan access (enterprise) or env var override
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Authentication required' }, { status: 401 })
}
const hasAccess = await hasSSOAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json({ error: 'SSO requires an Enterprise plan' }, { status: 403 })
}
const rawBody = await request.json()
const parseResult = ssoRegistrationSchema.safeParse(rawBody)

View File

@@ -212,6 +212,18 @@ export async function POST(request: NextRequest) {
logger.info(`Chat "${title}" deployed successfully at ${chatUrl}`)
try {
const { PlatformEvents } = await import('@/lib/core/telemetry')
PlatformEvents.chatDeployed({
chatId: id,
workflowId,
authType,
hasOutputConfigs: outputConfigs.length > 0,
})
} catch (_e) {
// Silently fail
}
return createSuccessResponse({
id,
chatUrl,

View File

@@ -0,0 +1,156 @@
import { db } from '@sim/db'
import { credentialSet, credentialSetInvitation, member, organization, user } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getEmailSubject, renderPollingGroupInvitationEmail } from '@/components/emails'
import { getSession } from '@/lib/auth'
import { hasCredentialSetsAccess } from '@/lib/billing'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { sendEmail } from '@/lib/messaging/email/mailer'
const logger = createLogger('CredentialSetInviteResend')
async function getCredentialSetWithAccess(credentialSetId: string, userId: string) {
const [set] = await db
.select({
id: credentialSet.id,
organizationId: credentialSet.organizationId,
name: credentialSet.name,
providerId: credentialSet.providerId,
})
.from(credentialSet)
.where(eq(credentialSet.id, credentialSetId))
.limit(1)
if (!set) return null
const [membership] = await db
.select({ role: member.role })
.from(member)
.where(and(eq(member.userId, userId), eq(member.organizationId, set.organizationId)))
.limit(1)
if (!membership) return null
return { set, role: membership.role }
}
export async function POST(
req: NextRequest,
{ params }: { params: Promise<{ id: string; invitationId: string }> }
) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
// Check plan access (team/enterprise) or env var override
const hasAccess = await hasCredentialSetsAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json(
{ error: 'Credential sets require a Team or Enterprise plan' },
{ status: 403 }
)
}
const { id, invitationId } = await params
try {
const result = await getCredentialSetWithAccess(id, session.user.id)
if (!result) {
return NextResponse.json({ error: 'Credential set not found' }, { status: 404 })
}
if (result.role !== 'admin' && result.role !== 'owner') {
return NextResponse.json({ error: 'Admin or owner permissions required' }, { status: 403 })
}
const [invitation] = await db
.select()
.from(credentialSetInvitation)
.where(
and(
eq(credentialSetInvitation.id, invitationId),
eq(credentialSetInvitation.credentialSetId, id)
)
)
.limit(1)
if (!invitation) {
return NextResponse.json({ error: 'Invitation not found' }, { status: 404 })
}
if (invitation.status !== 'pending') {
return NextResponse.json({ error: 'Only pending invitations can be resent' }, { status: 400 })
}
// Update expiration
const newExpiresAt = new Date()
newExpiresAt.setDate(newExpiresAt.getDate() + 7)
await db
.update(credentialSetInvitation)
.set({ expiresAt: newExpiresAt })
.where(eq(credentialSetInvitation.id, invitationId))
const inviteUrl = `${getBaseUrl()}/credential-account/${invitation.token}`
// Send email if email address exists
if (invitation.email) {
try {
const [inviter] = await db
.select({ name: user.name })
.from(user)
.where(eq(user.id, session.user.id))
.limit(1)
const [org] = await db
.select({ name: organization.name })
.from(organization)
.where(eq(organization.id, result.set.organizationId))
.limit(1)
const provider = (result.set.providerId as 'google-email' | 'outlook') || 'google-email'
const emailHtml = await renderPollingGroupInvitationEmail({
inviterName: inviter?.name || 'A team member',
organizationName: org?.name || 'your organization',
pollingGroupName: result.set.name,
provider,
inviteLink: inviteUrl,
})
const emailResult = await sendEmail({
to: invitation.email,
subject: getEmailSubject('polling-group-invitation'),
html: emailHtml,
emailType: 'transactional',
})
if (!emailResult.success) {
logger.warn('Failed to resend invitation email', {
email: invitation.email,
error: emailResult.message,
})
return NextResponse.json({ error: 'Failed to send email' }, { status: 500 })
}
} catch (emailError) {
logger.error('Error sending invitation email', emailError)
return NextResponse.json({ error: 'Failed to send email' }, { status: 500 })
}
}
logger.info('Resent credential set invitation', {
credentialSetId: id,
invitationId,
userId: session.user.id,
})
return NextResponse.json({ success: true })
} catch (error) {
logger.error('Error resending invitation', error)
return NextResponse.json({ error: 'Failed to resend invitation' }, { status: 500 })
}
}

View File

@@ -0,0 +1,243 @@
import { db } from '@sim/db'
import { credentialSet, credentialSetInvitation, member, organization, user } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getEmailSubject, renderPollingGroupInvitationEmail } from '@/components/emails'
import { getSession } from '@/lib/auth'
import { hasCredentialSetsAccess } from '@/lib/billing'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { sendEmail } from '@/lib/messaging/email/mailer'
const logger = createLogger('CredentialSetInvite')
const createInviteSchema = z.object({
email: z.string().email().optional(),
})
async function getCredentialSetWithAccess(credentialSetId: string, userId: string) {
const [set] = await db
.select({
id: credentialSet.id,
organizationId: credentialSet.organizationId,
name: credentialSet.name,
providerId: credentialSet.providerId,
})
.from(credentialSet)
.where(eq(credentialSet.id, credentialSetId))
.limit(1)
if (!set) return null
const [membership] = await db
.select({ role: member.role })
.from(member)
.where(and(eq(member.userId, userId), eq(member.organizationId, set.organizationId)))
.limit(1)
if (!membership) return null
return { set, role: membership.role }
}
export async function GET(req: NextRequest, { params }: { params: Promise<{ id: string }> }) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
// Check plan access (team/enterprise) or env var override
const hasAccess = await hasCredentialSetsAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json(
{ error: 'Credential sets require a Team or Enterprise plan' },
{ status: 403 }
)
}
const { id } = await params
const result = await getCredentialSetWithAccess(id, session.user.id)
if (!result) {
return NextResponse.json({ error: 'Credential set not found' }, { status: 404 })
}
const invitations = await db
.select()
.from(credentialSetInvitation)
.where(eq(credentialSetInvitation.credentialSetId, id))
return NextResponse.json({ invitations })
}
export async function POST(req: NextRequest, { params }: { params: Promise<{ id: string }> }) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
// Check plan access (team/enterprise) or env var override
const hasAccess = await hasCredentialSetsAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json(
{ error: 'Credential sets require a Team or Enterprise plan' },
{ status: 403 }
)
}
const { id } = await params
try {
const result = await getCredentialSetWithAccess(id, session.user.id)
if (!result) {
return NextResponse.json({ error: 'Credential set not found' }, { status: 404 })
}
if (result.role !== 'admin' && result.role !== 'owner') {
return NextResponse.json({ error: 'Admin or owner permissions required' }, { status: 403 })
}
const body = await req.json()
const { email } = createInviteSchema.parse(body)
const token = crypto.randomUUID()
const expiresAt = new Date()
expiresAt.setDate(expiresAt.getDate() + 7)
const invitation = {
id: crypto.randomUUID(),
credentialSetId: id,
email: email || null,
token,
invitedBy: session.user.id,
status: 'pending' as const,
expiresAt,
createdAt: new Date(),
}
await db.insert(credentialSetInvitation).values(invitation)
const inviteUrl = `${getBaseUrl()}/credential-account/${token}`
// Send email if email address was provided
if (email) {
try {
// Get inviter name
const [inviter] = await db
.select({ name: user.name })
.from(user)
.where(eq(user.id, session.user.id))
.limit(1)
// Get organization name
const [org] = await db
.select({ name: organization.name })
.from(organization)
.where(eq(organization.id, result.set.organizationId))
.limit(1)
const provider = (result.set.providerId as 'google-email' | 'outlook') || 'google-email'
const emailHtml = await renderPollingGroupInvitationEmail({
inviterName: inviter?.name || 'A team member',
organizationName: org?.name || 'your organization',
pollingGroupName: result.set.name,
provider,
inviteLink: inviteUrl,
})
const emailResult = await sendEmail({
to: email,
subject: getEmailSubject('polling-group-invitation'),
html: emailHtml,
emailType: 'transactional',
})
if (!emailResult.success) {
logger.warn('Failed to send invitation email', {
email,
error: emailResult.message,
})
}
} catch (emailError) {
logger.error('Error sending invitation email', emailError)
// Don't fail the invitation creation if email fails
}
}
logger.info('Created credential set invitation', {
credentialSetId: id,
invitationId: invitation.id,
userId: session.user.id,
emailSent: !!email,
})
return NextResponse.json({
invitation: {
...invitation,
inviteUrl,
},
})
} catch (error) {
if (error instanceof z.ZodError) {
return NextResponse.json({ error: error.errors[0].message }, { status: 400 })
}
logger.error('Error creating invitation', error)
return NextResponse.json({ error: 'Failed to create invitation' }, { status: 500 })
}
}
export async function DELETE(req: NextRequest, { params }: { params: Promise<{ id: string }> }) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
// Check plan access (team/enterprise) or env var override
const hasAccess = await hasCredentialSetsAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json(
{ error: 'Credential sets require a Team or Enterprise plan' },
{ status: 403 }
)
}
const { id } = await params
const { searchParams } = new URL(req.url)
const invitationId = searchParams.get('invitationId')
if (!invitationId) {
return NextResponse.json({ error: 'invitationId is required' }, { status: 400 })
}
try {
const result = await getCredentialSetWithAccess(id, session.user.id)
if (!result) {
return NextResponse.json({ error: 'Credential set not found' }, { status: 404 })
}
if (result.role !== 'admin' && result.role !== 'owner') {
return NextResponse.json({ error: 'Admin or owner permissions required' }, { status: 403 })
}
await db
.update(credentialSetInvitation)
.set({ status: 'cancelled' })
.where(
and(
eq(credentialSetInvitation.id, invitationId),
eq(credentialSetInvitation.credentialSetId, id)
)
)
return NextResponse.json({ success: true })
} catch (error) {
logger.error('Error cancelling invitation', error)
return NextResponse.json({ error: 'Failed to cancel invitation' }, { status: 500 })
}
}

View File

@@ -0,0 +1,185 @@
import { db } from '@sim/db'
import { account, credentialSet, credentialSetMember, member, user } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, inArray } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { hasCredentialSetsAccess } from '@/lib/billing'
import { syncAllWebhooksForCredentialSet } from '@/lib/webhooks/utils.server'
const logger = createLogger('CredentialSetMembers')
async function getCredentialSetWithAccess(credentialSetId: string, userId: string) {
const [set] = await db
.select({
id: credentialSet.id,
organizationId: credentialSet.organizationId,
providerId: credentialSet.providerId,
})
.from(credentialSet)
.where(eq(credentialSet.id, credentialSetId))
.limit(1)
if (!set) return null
const [membership] = await db
.select({ role: member.role })
.from(member)
.where(and(eq(member.userId, userId), eq(member.organizationId, set.organizationId)))
.limit(1)
if (!membership) return null
return { set, role: membership.role }
}
export async function GET(req: NextRequest, { params }: { params: Promise<{ id: string }> }) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
// Check plan access (team/enterprise) or env var override
const hasAccess = await hasCredentialSetsAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json(
{ error: 'Credential sets require a Team or Enterprise plan' },
{ status: 403 }
)
}
const { id } = await params
const result = await getCredentialSetWithAccess(id, session.user.id)
if (!result) {
return NextResponse.json({ error: 'Credential set not found' }, { status: 404 })
}
const members = await db
.select({
id: credentialSetMember.id,
userId: credentialSetMember.userId,
status: credentialSetMember.status,
joinedAt: credentialSetMember.joinedAt,
createdAt: credentialSetMember.createdAt,
userName: user.name,
userEmail: user.email,
userImage: user.image,
})
.from(credentialSetMember)
.leftJoin(user, eq(credentialSetMember.userId, user.id))
.where(eq(credentialSetMember.credentialSetId, id))
// Get credentials for all active members filtered by the polling group's provider
const activeMembers = members.filter((m) => m.status === 'active')
const memberUserIds = activeMembers.map((m) => m.userId)
let credentials: { userId: string; providerId: string; accountId: string }[] = []
if (memberUserIds.length > 0 && result.set.providerId) {
credentials = await db
.select({
userId: account.userId,
providerId: account.providerId,
accountId: account.accountId,
})
.from(account)
.where(
and(inArray(account.userId, memberUserIds), eq(account.providerId, result.set.providerId))
)
}
// Group credentials by userId
const credentialsByUser = credentials.reduce(
(acc, cred) => {
if (!acc[cred.userId]) {
acc[cred.userId] = []
}
acc[cred.userId].push({
providerId: cred.providerId,
accountId: cred.accountId,
})
return acc
},
{} as Record<string, { providerId: string; accountId: string }[]>
)
// Attach credentials to members
const membersWithCredentials = members.map((m) => ({
...m,
credentials: credentialsByUser[m.userId] || [],
}))
return NextResponse.json({ members: membersWithCredentials })
}
export async function DELETE(req: NextRequest, { params }: { params: Promise<{ id: string }> }) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
// Check plan access (team/enterprise) or env var override
const hasAccess = await hasCredentialSetsAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json(
{ error: 'Credential sets require a Team or Enterprise plan' },
{ status: 403 }
)
}
const { id } = await params
const { searchParams } = new URL(req.url)
const memberId = searchParams.get('memberId')
if (!memberId) {
return NextResponse.json({ error: 'memberId is required' }, { status: 400 })
}
try {
const result = await getCredentialSetWithAccess(id, session.user.id)
if (!result) {
return NextResponse.json({ error: 'Credential set not found' }, { status: 404 })
}
if (result.role !== 'admin' && result.role !== 'owner') {
return NextResponse.json({ error: 'Admin or owner permissions required' }, { status: 403 })
}
const [memberToRemove] = await db
.select()
.from(credentialSetMember)
.where(and(eq(credentialSetMember.id, memberId), eq(credentialSetMember.credentialSetId, id)))
.limit(1)
if (!memberToRemove) {
return NextResponse.json({ error: 'Member not found' }, { status: 404 })
}
const requestId = crypto.randomUUID().slice(0, 8)
// Use transaction to ensure member deletion + webhook sync are atomic
await db.transaction(async (tx) => {
await tx.delete(credentialSetMember).where(eq(credentialSetMember.id, memberId))
const syncResult = await syncAllWebhooksForCredentialSet(id, requestId, tx)
logger.info('Synced webhooks after member removed', {
credentialSetId: id,
...syncResult,
})
})
logger.info('Removed member from credential set', {
credentialSetId: id,
memberId,
userId: session.user.id,
})
return NextResponse.json({ success: true })
} catch (error) {
logger.error('Error removing member from credential set', error)
return NextResponse.json({ error: 'Failed to remove member' }, { status: 500 })
}
}

View File

@@ -0,0 +1,183 @@
import { db } from '@sim/db'
import { credentialSet, credentialSetMember, member } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { hasCredentialSetsAccess } from '@/lib/billing'
const logger = createLogger('CredentialSet')
const updateCredentialSetSchema = z.object({
name: z.string().trim().min(1).max(100).optional(),
description: z.string().max(500).nullable().optional(),
})
async function getCredentialSetWithAccess(credentialSetId: string, userId: string) {
const [set] = await db
.select({
id: credentialSet.id,
organizationId: credentialSet.organizationId,
name: credentialSet.name,
description: credentialSet.description,
providerId: credentialSet.providerId,
createdBy: credentialSet.createdBy,
createdAt: credentialSet.createdAt,
updatedAt: credentialSet.updatedAt,
})
.from(credentialSet)
.where(eq(credentialSet.id, credentialSetId))
.limit(1)
if (!set) return null
const [membership] = await db
.select({ role: member.role })
.from(member)
.where(and(eq(member.userId, userId), eq(member.organizationId, set.organizationId)))
.limit(1)
if (!membership) return null
return { set, role: membership.role }
}
export async function GET(req: NextRequest, { params }: { params: Promise<{ id: string }> }) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
// Check plan access (team/enterprise) or env var override
const hasAccess = await hasCredentialSetsAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json(
{ error: 'Credential sets require a Team or Enterprise plan' },
{ status: 403 }
)
}
const { id } = await params
const result = await getCredentialSetWithAccess(id, session.user.id)
if (!result) {
return NextResponse.json({ error: 'Credential set not found' }, { status: 404 })
}
return NextResponse.json({ credentialSet: result.set })
}
export async function PUT(req: NextRequest, { params }: { params: Promise<{ id: string }> }) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
// Check plan access (team/enterprise) or env var override
const hasAccess = await hasCredentialSetsAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json(
{ error: 'Credential sets require a Team or Enterprise plan' },
{ status: 403 }
)
}
const { id } = await params
try {
const result = await getCredentialSetWithAccess(id, session.user.id)
if (!result) {
return NextResponse.json({ error: 'Credential set not found' }, { status: 404 })
}
if (result.role !== 'admin' && result.role !== 'owner') {
return NextResponse.json({ error: 'Admin or owner permissions required' }, { status: 403 })
}
const body = await req.json()
const updates = updateCredentialSetSchema.parse(body)
if (updates.name) {
const existingSet = await db
.select({ id: credentialSet.id })
.from(credentialSet)
.where(
and(
eq(credentialSet.organizationId, result.set.organizationId),
eq(credentialSet.name, updates.name)
)
)
.limit(1)
if (existingSet.length > 0 && existingSet[0].id !== id) {
return NextResponse.json(
{ error: 'A credential set with this name already exists' },
{ status: 409 }
)
}
}
await db
.update(credentialSet)
.set({
...updates,
updatedAt: new Date(),
})
.where(eq(credentialSet.id, id))
const [updated] = await db.select().from(credentialSet).where(eq(credentialSet.id, id)).limit(1)
return NextResponse.json({ credentialSet: updated })
} catch (error) {
if (error instanceof z.ZodError) {
return NextResponse.json({ error: error.errors[0].message }, { status: 400 })
}
logger.error('Error updating credential set', error)
return NextResponse.json({ error: 'Failed to update credential set' }, { status: 500 })
}
}
export async function DELETE(req: NextRequest, { params }: { params: Promise<{ id: string }> }) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
// Check plan access (team/enterprise) or env var override
const hasAccess = await hasCredentialSetsAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json(
{ error: 'Credential sets require a Team or Enterprise plan' },
{ status: 403 }
)
}
const { id } = await params
try {
const result = await getCredentialSetWithAccess(id, session.user.id)
if (!result) {
return NextResponse.json({ error: 'Credential set not found' }, { status: 404 })
}
if (result.role !== 'admin' && result.role !== 'owner') {
return NextResponse.json({ error: 'Admin or owner permissions required' }, { status: 403 })
}
await db.delete(credentialSetMember).where(eq(credentialSetMember.credentialSetId, id))
await db.delete(credentialSet).where(eq(credentialSet.id, id))
logger.info('Deleted credential set', { credentialSetId: id, userId: session.user.id })
return NextResponse.json({ success: true })
} catch (error) {
logger.error('Error deleting credential set', error)
return NextResponse.json({ error: 'Failed to delete credential set' }, { status: 500 })
}
}

View File

@@ -0,0 +1,53 @@
import { db } from '@sim/db'
import { credentialSet, credentialSetInvitation, organization, user } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq, gt, isNull, or } from 'drizzle-orm'
import { NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
const logger = createLogger('CredentialSetInvitations')
export async function GET() {
const session = await getSession()
if (!session?.user?.id || !session?.user?.email) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
try {
const invitations = await db
.select({
invitationId: credentialSetInvitation.id,
token: credentialSetInvitation.token,
status: credentialSetInvitation.status,
expiresAt: credentialSetInvitation.expiresAt,
createdAt: credentialSetInvitation.createdAt,
credentialSetId: credentialSet.id,
credentialSetName: credentialSet.name,
providerId: credentialSet.providerId,
organizationId: organization.id,
organizationName: organization.name,
invitedByName: user.name,
invitedByEmail: user.email,
})
.from(credentialSetInvitation)
.innerJoin(credentialSet, eq(credentialSetInvitation.credentialSetId, credentialSet.id))
.innerJoin(organization, eq(credentialSet.organizationId, organization.id))
.leftJoin(user, eq(credentialSetInvitation.invitedBy, user.id))
.where(
and(
or(
eq(credentialSetInvitation.email, session.user.email),
isNull(credentialSetInvitation.email)
),
eq(credentialSetInvitation.status, 'pending'),
gt(credentialSetInvitation.expiresAt, new Date())
)
)
return NextResponse.json({ invitations })
} catch (error) {
logger.error('Error fetching credential set invitations', error)
return NextResponse.json({ error: 'Failed to fetch invitations' }, { status: 500 })
}
}

View File

@@ -0,0 +1,196 @@
import { db } from '@sim/db'
import {
credentialSet,
credentialSetInvitation,
credentialSetMember,
organization,
} from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { syncAllWebhooksForCredentialSet } from '@/lib/webhooks/utils.server'
const logger = createLogger('CredentialSetInviteToken')
export async function GET(req: NextRequest, { params }: { params: Promise<{ token: string }> }) {
const { token } = await params
const [invitation] = await db
.select({
id: credentialSetInvitation.id,
credentialSetId: credentialSetInvitation.credentialSetId,
email: credentialSetInvitation.email,
status: credentialSetInvitation.status,
expiresAt: credentialSetInvitation.expiresAt,
credentialSetName: credentialSet.name,
providerId: credentialSet.providerId,
organizationId: credentialSet.organizationId,
organizationName: organization.name,
})
.from(credentialSetInvitation)
.innerJoin(credentialSet, eq(credentialSetInvitation.credentialSetId, credentialSet.id))
.innerJoin(organization, eq(credentialSet.organizationId, organization.id))
.where(eq(credentialSetInvitation.token, token))
.limit(1)
if (!invitation) {
return NextResponse.json({ error: 'Invitation not found' }, { status: 404 })
}
if (invitation.status !== 'pending') {
return NextResponse.json({ error: 'Invitation is no longer valid' }, { status: 410 })
}
if (new Date() > invitation.expiresAt) {
await db
.update(credentialSetInvitation)
.set({ status: 'expired' })
.where(eq(credentialSetInvitation.id, invitation.id))
return NextResponse.json({ error: 'Invitation has expired' }, { status: 410 })
}
return NextResponse.json({
invitation: {
credentialSetName: invitation.credentialSetName,
organizationName: invitation.organizationName,
providerId: invitation.providerId,
email: invitation.email,
},
})
}
export async function POST(req: NextRequest, { params }: { params: Promise<{ token: string }> }) {
const { token } = await params
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Authentication required' }, { status: 401 })
}
try {
const [invitationData] = await db
.select({
id: credentialSetInvitation.id,
credentialSetId: credentialSetInvitation.credentialSetId,
email: credentialSetInvitation.email,
status: credentialSetInvitation.status,
expiresAt: credentialSetInvitation.expiresAt,
invitedBy: credentialSetInvitation.invitedBy,
providerId: credentialSet.providerId,
})
.from(credentialSetInvitation)
.innerJoin(credentialSet, eq(credentialSetInvitation.credentialSetId, credentialSet.id))
.where(eq(credentialSetInvitation.token, token))
.limit(1)
if (!invitationData) {
return NextResponse.json({ error: 'Invitation not found' }, { status: 404 })
}
const invitation = invitationData
if (invitation.status !== 'pending') {
return NextResponse.json({ error: 'Invitation is no longer valid' }, { status: 410 })
}
if (new Date() > invitation.expiresAt) {
await db
.update(credentialSetInvitation)
.set({ status: 'expired' })
.where(eq(credentialSetInvitation.id, invitation.id))
return NextResponse.json({ error: 'Invitation has expired' }, { status: 410 })
}
const existingMember = await db
.select()
.from(credentialSetMember)
.where(
and(
eq(credentialSetMember.credentialSetId, invitation.credentialSetId),
eq(credentialSetMember.userId, session.user.id)
)
)
.limit(1)
if (existingMember.length > 0) {
return NextResponse.json(
{ error: 'Already a member of this credential set' },
{ status: 409 }
)
}
const now = new Date()
const requestId = crypto.randomUUID().slice(0, 8)
// Use transaction to ensure membership + invitation update + webhook sync are atomic
await db.transaction(async (tx) => {
await tx.insert(credentialSetMember).values({
id: crypto.randomUUID(),
credentialSetId: invitation.credentialSetId,
userId: session.user.id,
status: 'active',
joinedAt: now,
invitedBy: invitation.invitedBy,
createdAt: now,
updatedAt: now,
})
await tx
.update(credentialSetInvitation)
.set({
status: 'accepted',
acceptedAt: now,
acceptedByUserId: session.user.id,
})
.where(eq(credentialSetInvitation.id, invitation.id))
// Clean up all other pending invitations for the same credential set and email
// This prevents duplicate invites from showing up after accepting one
if (invitation.email) {
await tx
.update(credentialSetInvitation)
.set({
status: 'accepted',
acceptedAt: now,
acceptedByUserId: session.user.id,
})
.where(
and(
eq(credentialSetInvitation.credentialSetId, invitation.credentialSetId),
eq(credentialSetInvitation.email, invitation.email),
eq(credentialSetInvitation.status, 'pending')
)
)
}
// Sync webhooks within the transaction
const syncResult = await syncAllWebhooksForCredentialSet(
invitation.credentialSetId,
requestId,
tx
)
logger.info('Synced webhooks after member joined', {
credentialSetId: invitation.credentialSetId,
...syncResult,
})
})
logger.info('Accepted credential set invitation', {
invitationId: invitation.id,
credentialSetId: invitation.credentialSetId,
userId: session.user.id,
})
return NextResponse.json({
success: true,
credentialSetId: invitation.credentialSetId,
providerId: invitation.providerId,
})
} catch (error) {
logger.error('Error accepting invitation', error)
return NextResponse.json({ error: 'Failed to accept invitation' }, { status: 500 })
}
}

View File

@@ -0,0 +1,115 @@
import { db } from '@sim/db'
import { credentialSet, credentialSetMember, organization } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { syncAllWebhooksForCredentialSet } from '@/lib/webhooks/utils.server'
const logger = createLogger('CredentialSetMemberships')
export async function GET() {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
try {
const memberships = await db
.select({
membershipId: credentialSetMember.id,
status: credentialSetMember.status,
joinedAt: credentialSetMember.joinedAt,
credentialSetId: credentialSet.id,
credentialSetName: credentialSet.name,
credentialSetDescription: credentialSet.description,
providerId: credentialSet.providerId,
organizationId: organization.id,
organizationName: organization.name,
})
.from(credentialSetMember)
.innerJoin(credentialSet, eq(credentialSetMember.credentialSetId, credentialSet.id))
.innerJoin(organization, eq(credentialSet.organizationId, organization.id))
.where(eq(credentialSetMember.userId, session.user.id))
return NextResponse.json({ memberships })
} catch (error) {
logger.error('Error fetching credential set memberships', error)
return NextResponse.json({ error: 'Failed to fetch memberships' }, { status: 500 })
}
}
/**
* Leave a credential set (self-revocation).
* Sets status to 'revoked' immediately (blocks execution), then syncs webhooks to clean up.
*/
export async function DELETE(req: NextRequest) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { searchParams } = new URL(req.url)
const credentialSetId = searchParams.get('credentialSetId')
if (!credentialSetId) {
return NextResponse.json({ error: 'credentialSetId is required' }, { status: 400 })
}
try {
const requestId = crypto.randomUUID().slice(0, 8)
// Use transaction to ensure revocation + webhook sync are atomic
await db.transaction(async (tx) => {
// Find and verify membership
const [membership] = await tx
.select()
.from(credentialSetMember)
.where(
and(
eq(credentialSetMember.credentialSetId, credentialSetId),
eq(credentialSetMember.userId, session.user.id)
)
)
.limit(1)
if (!membership) {
throw new Error('Not a member of this credential set')
}
if (membership.status === 'revoked') {
throw new Error('Already left this credential set')
}
// Set status to 'revoked' - this immediately blocks the credential from being used
await tx
.update(credentialSetMember)
.set({
status: 'revoked',
updatedAt: new Date(),
})
.where(eq(credentialSetMember.id, membership.id))
// Sync webhooks to remove this user's credential webhooks
const syncResult = await syncAllWebhooksForCredentialSet(credentialSetId, requestId, tx)
logger.info('Synced webhooks after member left', {
credentialSetId,
userId: session.user.id,
...syncResult,
})
})
logger.info('User left credential set', {
credentialSetId,
userId: session.user.id,
})
return NextResponse.json({ success: true })
} catch (error) {
const message = error instanceof Error ? error.message : 'Failed to leave credential set'
logger.error('Error leaving credential set', error)
return NextResponse.json({ error: message }, { status: 500 })
}
}

View File

@@ -0,0 +1,176 @@
import { db } from '@sim/db'
import { credentialSet, credentialSetMember, member, organization, user } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { and, count, desc, eq } from 'drizzle-orm'
import { NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { hasCredentialSetsAccess } from '@/lib/billing'
const logger = createLogger('CredentialSets')
const createCredentialSetSchema = z.object({
organizationId: z.string().min(1),
name: z.string().trim().min(1).max(100),
description: z.string().max(500).optional(),
providerId: z.enum(['google-email', 'outlook']),
})
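For reference, a body that would satisfy createCredentialSetSchema; the id and names are illustrative placeholders:
// Illustrative payload only
createCredentialSetSchema.parse({
  organizationId: 'org_123',
  name: 'Support inboxes',
  description: 'Gmail accounts polled by the support triage workflow',
  providerId: 'google-email', // or 'outlook'
})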
export async function GET(req: Request) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
// Check plan access (team/enterprise) or env var override
const hasAccess = await hasCredentialSetsAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json(
{ error: 'Credential sets require a Team or Enterprise plan' },
{ status: 403 }
)
}
const { searchParams } = new URL(req.url)
const organizationId = searchParams.get('organizationId')
if (!organizationId) {
return NextResponse.json({ error: 'organizationId is required' }, { status: 400 })
}
const membership = await db
.select({ id: member.id, role: member.role })
.from(member)
.where(and(eq(member.userId, session.user.id), eq(member.organizationId, organizationId)))
.limit(1)
if (membership.length === 0) {
return NextResponse.json({ error: 'Forbidden' }, { status: 403 })
}
const sets = await db
.select({
id: credentialSet.id,
name: credentialSet.name,
description: credentialSet.description,
providerId: credentialSet.providerId,
createdBy: credentialSet.createdBy,
createdAt: credentialSet.createdAt,
updatedAt: credentialSet.updatedAt,
creatorName: user.name,
creatorEmail: user.email,
})
.from(credentialSet)
.leftJoin(user, eq(credentialSet.createdBy, user.id))
.where(eq(credentialSet.organizationId, organizationId))
.orderBy(desc(credentialSet.createdAt))
const setsWithCounts = await Promise.all(
sets.map(async (set) => {
const [memberCount] = await db
.select({ count: count() })
.from(credentialSetMember)
.where(
and(
eq(credentialSetMember.credentialSetId, set.id),
eq(credentialSetMember.status, 'active')
)
)
return {
...set,
memberCount: memberCount?.count ?? 0,
}
})
)
return NextResponse.json({ credentialSets: setsWithCounts })
}
export async function POST(req: Request) {
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
// Check plan access (team/enterprise) or env var override
const hasAccess = await hasCredentialSetsAccess(session.user.id)
if (!hasAccess) {
return NextResponse.json(
{ error: 'Credential sets require a Team or Enterprise plan' },
{ status: 403 }
)
}
try {
const body = await req.json()
const { organizationId, name, description, providerId } = createCredentialSetSchema.parse(body)
const membership = await db
.select({ id: member.id, role: member.role })
.from(member)
.where(and(eq(member.userId, session.user.id), eq(member.organizationId, organizationId)))
.limit(1)
const role = membership[0]?.role
if (membership.length === 0 || (role !== 'admin' && role !== 'owner')) {
return NextResponse.json(
{ error: 'Admin or owner permissions required to create credential sets' },
{ status: 403 }
)
}
const orgExists = await db
.select({ id: organization.id })
.from(organization)
.where(eq(organization.id, organizationId))
.limit(1)
if (orgExists.length === 0) {
return NextResponse.json({ error: 'Organization not found' }, { status: 404 })
}
const existingSet = await db
.select({ id: credentialSet.id })
.from(credentialSet)
.where(and(eq(credentialSet.organizationId, organizationId), eq(credentialSet.name, name)))
.limit(1)
if (existingSet.length > 0) {
return NextResponse.json(
{ error: 'A credential set with this name already exists' },
{ status: 409 }
)
}
const now = new Date()
const newCredentialSet = {
id: crypto.randomUUID(),
organizationId,
name,
description: description || null,
providerId,
createdBy: session.user.id,
createdAt: now,
updatedAt: now,
}
await db.insert(credentialSet).values(newCredentialSet)
logger.info('Created credential set', {
credentialSetId: newCredentialSet.id,
organizationId,
userId: session.user.id,
})
return NextResponse.json({ credentialSet: newCredentialSet }, { status: 201 })
} catch (error) {
if (error instanceof z.ZodError) {
return NextResponse.json({ error: error.errors[0].message }, { status: 400 })
}
logger.error('Error creating credential set', error)
return NextResponse.json({ error: 'Failed to create credential set' }, { status: 500 })
}
}

View File

@@ -198,15 +198,14 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
`[${requestId}] Starting controlled async processing of ${createdDocuments.length} documents`
)
// Track bulk document upload
try {
const { trackPlatformEvent } = await import('@/lib/core/telemetry')
trackPlatformEvent('platform.knowledge_base.documents_uploaded', {
'knowledge_base.id': knowledgeBaseId,
'documents.count': createdDocuments.length,
'documents.upload_type': 'bulk',
'processing.chunk_size': validatedData.processingOptions.chunkSize,
'processing.recipe': validatedData.processingOptions.recipe,
const { PlatformEvents } = await import('@/lib/core/telemetry')
PlatformEvents.knowledgeBaseDocumentsUploaded({
knowledgeBaseId,
documentsCount: createdDocuments.length,
uploadType: 'bulk',
chunkSize: validatedData.processingOptions.chunkSize,
recipe: validatedData.processingOptions.recipe,
})
} catch (_e) {
// Silently fail
@@ -262,15 +261,14 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
userId
)
// Track single document upload
try {
const { trackPlatformEvent } = await import('@/lib/core/telemetry')
trackPlatformEvent('platform.knowledge_base.documents_uploaded', {
'knowledge_base.id': knowledgeBaseId,
'documents.count': 1,
'documents.upload_type': 'single',
'document.mime_type': validatedData.mimeType,
'document.file_size': validatedData.fileSize,
const { PlatformEvents } = await import('@/lib/core/telemetry')
PlatformEvents.knowledgeBaseDocumentsUploaded({
knowledgeBaseId,
documentsCount: 1,
uploadType: 'single',
mimeType: validatedData.mimeType,
fileSize: validatedData.fileSize,
})
} catch (_e) {
// Silently fail

View File

@@ -2,6 +2,7 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { PlatformEvents } from '@/lib/core/telemetry'
import { generateRequestId } from '@/lib/core/utils/request'
import {
deleteKnowledgeBase,
@@ -183,6 +184,14 @@ export async function DELETE(
await deleteKnowledgeBase(id, requestId)
try {
PlatformEvents.knowledgeBaseDeleted({
knowledgeBaseId: id,
})
} catch {
// Telemetry should not fail the operation
}
logger.info(`[${requestId}] Knowledge base deleted: ${id} for user ${session.user.id}`)
return NextResponse.json({

View File

@@ -2,6 +2,7 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { PlatformEvents } from '@/lib/core/telemetry'
import { generateRequestId } from '@/lib/core/utils/request'
import { createKnowledgeBase, getKnowledgeBases } from '@/lib/knowledge/service'
@@ -94,6 +95,16 @@ export async function POST(req: NextRequest) {
const newKnowledgeBase = await createKnowledgeBase(createData, requestId)
try {
PlatformEvents.knowledgeBaseCreated({
knowledgeBaseId: newKnowledgeBase.id,
name: validatedData.name,
workspaceId: validatedData.workspaceId,
})
} catch {
// Telemetry should not fail the operation
}
logger.info(
`[${requestId}] Knowledge base created: ${newKnowledgeBase.id} for user ${session.user.id}`
)

View File

@@ -5,6 +5,7 @@
*
* @vitest-environment node
*/
import { createEnvMock } from '@sim/testing'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
import {
createMockRequest,
@@ -26,13 +27,7 @@ vi.mock('drizzle-orm', () => ({
mockKnowledgeSchemas()
vi.mock('@/lib/core/config/env', () => ({
env: {
OPENAI_API_KEY: 'test-api-key',
},
isTruthy: (value: string | boolean | number | undefined) =>
typeof value === 'string' ? value === 'true' || value === '1' : Boolean(value),
}))
vi.mock('@/lib/core/config/env', () => createEnvMock({ OPENAI_API_KEY: 'test-api-key' }))
vi.mock('@/lib/core/utils/request', () => ({
generateRequestId: vi.fn(() => 'test-request-id'),

View File

@@ -1,6 +1,7 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { PlatformEvents } from '@/lib/core/telemetry'
import { generateRequestId } from '@/lib/core/utils/request'
import { ALL_TAG_SLOTS } from '@/lib/knowledge/constants'
import { getDocumentTagDefinitions } from '@/lib/knowledge/tags/service'
@@ -294,6 +295,16 @@ export async function POST(request: NextRequest) {
const documentIds = results.map((result) => result.documentId)
const documentNameMap = await getDocumentNamesByIds(documentIds)
try {
PlatformEvents.knowledgeBaseSearched({
knowledgeBaseId: accessibleKbIds[0],
resultsCount: results.length,
workspaceId: workspaceId || undefined,
})
} catch {
// Telemetry should not fail the operation
}
return NextResponse.json({
success: true,
data: {

View File

@@ -4,6 +4,7 @@
*
* @vitest-environment node
*/
import { createEnvMock } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
vi.mock('drizzle-orm')
@@ -30,12 +31,7 @@ vi.stubGlobal(
})
)
vi.mock('@/lib/core/config/env', () => ({
env: {},
getEnv: (key: string) => process.env[key],
isTruthy: (value: string | boolean | number | undefined) =>
typeof value === 'string' ? value === 'true' || value === '1' : Boolean(value),
}))
vi.mock('@/lib/core/config/env', () => createEnvMock())
import {
generateSearchEmbedding,

View File

@@ -6,6 +6,7 @@
* This file contains unit tests for the knowledge base utility functions,
* including access checks, document processing, and embedding generation.
*/
import { createEnvMock } from '@sim/testing'
import { beforeEach, describe, expect, it, vi } from 'vitest'
vi.mock('drizzle-orm', () => ({
@@ -15,12 +16,7 @@ vi.mock('drizzle-orm', () => ({
sql: (strings: TemplateStringsArray, ...expr: any[]) => ({ strings, expr }),
}))
vi.mock('@/lib/core/config/env', () => ({
env: { OPENAI_API_KEY: 'test-key' },
getEnv: (key: string) => process.env[key],
isTruthy: (value: string | boolean | number | undefined) =>
typeof value === 'string' ? value === 'true' || value === '1' : Boolean(value),
}))
vi.mock('@/lib/core/config/env', () => createEnvMock({ OPENAI_API_KEY: 'test-key' }))
vi.mock('@/lib/knowledge/documents/utils', () => ({
retryWithExponentialBackoff: (fn: any) => fn(),

View File

@@ -140,12 +140,12 @@ export const POST = withMcpAuth('write')(
)
try {
const { trackPlatformEvent } = await import('@/lib/core/telemetry')
trackPlatformEvent('platform.mcp.server_added', {
'mcp.server_id': serverId,
'mcp.server_name': body.name,
'mcp.transport': body.transport,
'workspace.id': workspaceId,
const { PlatformEvents } = await import('@/lib/core/telemetry')
PlatformEvents.mcpServerAdded({
serverId,
serverName: body.name,
transport: body.transport,
workspaceId,
})
} catch (_e) {
// Silently fail

View File

@@ -194,12 +194,12 @@ export const POST = withMcpAuth('read')(
logger.info(`[${requestId}] Successfully executed tool ${toolName} on server ${serverId}`)
try {
const { trackPlatformEvent } = await import('@/lib/core/telemetry')
trackPlatformEvent('platform.mcp.tool_executed', {
'mcp.server_id': serverId,
'mcp.tool_name': toolName,
'mcp.execution_status': 'success',
'workspace.id': workspaceId,
const { PlatformEvents } = await import('@/lib/core/telemetry')
PlatformEvents.mcpToolExecuted({
serverId,
toolName,
status: 'success',
workspaceId,
})
} catch {
// Telemetry failure is non-critical

View File

@@ -15,8 +15,11 @@ import { createLogger } from '@sim/logger'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getEmailSubject, renderInvitationEmail } from '@/components/emails'
import { getSession } from '@/lib/auth'
import { requireStripeClient } from '@/lib/billing/stripe-client'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { sendEmail } from '@/lib/messaging/email/mailer'
const logger = createLogger('OrganizationInvitation')
@@ -69,6 +72,102 @@ export async function GET(
}
}
// Resend invitation
export async function POST(
_request: NextRequest,
{ params }: { params: Promise<{ id: string; invitationId: string }> }
) {
const { id: organizationId, invitationId } = await params
const session = await getSession()
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
try {
// Verify user is admin/owner
const memberEntry = await db
.select()
.from(member)
.where(and(eq(member.organizationId, organizationId), eq(member.userId, session.user.id)))
.limit(1)
if (memberEntry.length === 0 || !['owner', 'admin'].includes(memberEntry[0].role)) {
return NextResponse.json({ error: 'Forbidden - Admin access required' }, { status: 403 })
}
const orgInvitation = await db
.select()
.from(invitation)
.where(and(eq(invitation.id, invitationId), eq(invitation.organizationId, organizationId)))
.then((rows) => rows[0])
if (!orgInvitation) {
return NextResponse.json({ error: 'Invitation not found' }, { status: 404 })
}
if (orgInvitation.status !== 'pending') {
return NextResponse.json({ error: 'Can only resend pending invitations' }, { status: 400 })
}
const org = await db
.select({ name: organization.name })
.from(organization)
.where(eq(organization.id, organizationId))
.then((rows) => rows[0])
const inviter = await db
.select({ name: user.name })
.from(user)
.where(eq(user.id, session.user.id))
.limit(1)
// Update expiration date
const newExpiresAt = new Date(Date.now() + 7 * 24 * 60 * 60 * 1000) // 7 days
await db
.update(invitation)
.set({ expiresAt: newExpiresAt })
.where(eq(invitation.id, invitationId))
// Send email
const emailHtml = await renderInvitationEmail(
inviter[0]?.name || 'Someone',
org?.name || 'organization',
`${getBaseUrl()}/invite/${invitationId}`
)
const emailResult = await sendEmail({
to: orgInvitation.email,
subject: getEmailSubject('invitation'),
html: emailHtml,
emailType: 'transactional',
})
if (!emailResult.success) {
logger.error('Failed to resend invitation email', {
email: orgInvitation.email,
error: emailResult.message,
})
return NextResponse.json({ error: 'Failed to send invitation email' }, { status: 500 })
}
logger.info('Organization invitation resent', {
organizationId,
invitationId,
resentBy: session.user.id,
email: orgInvitation.email,
})
return NextResponse.json({
success: true,
message: 'Invitation resent successfully',
})
} catch (error) {
logger.error('Error resending organization invitation:', error)
return NextResponse.json({ error: 'Failed to resend invitation' }, { status: 500 })
}
}
export async function PUT(
req: NextRequest,
{ params }: { params: Promise<{ id: string; invitationId: string }> }

View File

@@ -2,6 +2,7 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { env } from '@/lib/core/config/env'
import type { ModelsObject } from '@/providers/ollama/types'
import { filterBlacklistedModels, isProviderBlacklisted } from '@/providers/utils'
const logger = createLogger('OllamaModelsAPI')
const OLLAMA_HOST = env.OLLAMA_URL || 'http://localhost:11434'
@@ -9,7 +10,12 @@ const OLLAMA_HOST = env.OLLAMA_URL || 'http://localhost:11434'
/**
* Get available Ollama models
*/
export async function GET(request: NextRequest) {
export async function GET(_request: NextRequest) {
if (isProviderBlacklisted('ollama')) {
logger.info('Ollama provider is blacklisted, returning empty models')
return NextResponse.json({ models: [] })
}
try {
logger.info('Fetching Ollama models', {
host: OLLAMA_HOST,
@@ -31,10 +37,12 @@ export async function GET(request: NextRequest) {
}
const data = (await response.json()) as ModelsObject
const models = data.models.map((model) => model.name)
const allModels = data.models.map((model) => model.name)
const models = filterBlacklistedModels(allModels)
logger.info('Successfully fetched Ollama models', {
count: models.length,
filtered: allModels.length - models.length,
models,
})

View File

@@ -1,6 +1,6 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { filterBlacklistedModels } from '@/providers/utils'
import { filterBlacklistedModels, isProviderBlacklisted } from '@/providers/utils'
const logger = createLogger('OpenRouterModelsAPI')
@@ -30,6 +30,11 @@ export interface OpenRouterModelInfo {
}
export async function GET(_request: NextRequest) {
if (isProviderBlacklisted('openrouter')) {
logger.info('OpenRouter provider is blacklisted, returning empty models')
return NextResponse.json({ models: [], modelInfo: {} })
}
try {
const response = await fetch('https://openrouter.ai/api/v1/models', {
headers: { 'Content-Type': 'application/json' },

View File

@@ -41,6 +41,9 @@ export async function POST(request: NextRequest) {
vertexProject,
vertexLocation,
vertexCredential,
bedrockAccessKeyId,
bedrockSecretKey,
bedrockRegion,
responseFormat,
workflowId,
workspaceId,
@@ -67,6 +70,9 @@ export async function POST(request: NextRequest) {
hasVertexProject: !!vertexProject,
hasVertexLocation: !!vertexLocation,
hasVertexCredential: !!vertexCredential,
hasBedrockAccessKeyId: !!bedrockAccessKeyId,
hasBedrockSecretKey: !!bedrockSecretKey,
hasBedrockRegion: !!bedrockRegion,
hasResponseFormat: !!responseFormat,
workflowId,
stream: !!stream,
@@ -116,6 +122,9 @@ export async function POST(request: NextRequest) {
azureApiVersion,
vertexProject,
vertexLocation,
bedrockAccessKeyId,
bedrockSecretKey,
bedrockRegion,
responseFormat,
workflowId,
workspaceId,

View File

@@ -1,13 +1,19 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { env } from '@/lib/core/config/env'
import { filterBlacklistedModels, isProviderBlacklisted } from '@/providers/utils'
const logger = createLogger('VLLMModelsAPI')
/**
* Get available vLLM models
*/
export async function GET(request: NextRequest) {
export async function GET(_request: NextRequest) {
if (isProviderBlacklisted('vllm')) {
logger.info('vLLM provider is blacklisted, returning empty models')
return NextResponse.json({ models: [] })
}
const baseUrl = (env.VLLM_BASE_URL || '').replace(/\/$/, '')
if (!baseUrl) {
@@ -42,10 +48,12 @@ export async function GET(request: NextRequest) {
}
const data = (await response.json()) as { data: Array<{ id: string }> }
const models = data.data.map((model) => `vllm/${model.id}`)
const allModels = data.data.map((model) => `vllm/${model.id}`)
const models = filterBlacklistedModels(allModels)
logger.info('Successfully fetched vLLM models', {
count: models.length,
filtered: allModels.length - models.length,
models,
})

View File

@@ -168,18 +168,15 @@ export async function POST(request: NextRequest, { params }: { params: Promise<{
`[${requestId}] Successfully used template: ${id}, created workflow: ${newWorkflowId}`
)
// Track template usage
try {
const { trackPlatformEvent } = await import('@/lib/core/telemetry')
const { PlatformEvents } = await import('@/lib/core/telemetry')
const templateState = templateData.state as any
trackPlatformEvent('platform.template.used', {
'template.id': id,
'template.name': templateData.name,
'workflow.created_id': newWorkflowId,
'workflow.blocks_count': templateState?.blocks
? Object.keys(templateState.blocks).length
: 0,
'workspace.id': workspaceId,
PlatformEvents.templateUsed({
templateId: id,
templateName: templateData.name,
newWorkflowId,
blocksCount: templateState?.blocks ? Object.keys(templateState.blocks).length : 0,
workspaceId,
})
} catch (_e) {
// Silently fail

View File

@@ -0,0 +1,199 @@
/**
* Admin BYOK Keys API
*
* GET /api/v1/admin/byok
* List all BYOK keys with optional filtering.
*
* Query Parameters:
* - organizationId?: string - Filter by organization ID (finds all workspaces billed to this org)
* - workspaceId?: string - Filter by specific workspace ID
*
* Response: { data: AdminBYOKKey[], pagination: PaginationMeta }
*
* DELETE /api/v1/admin/byok
* Delete BYOK keys for an organization or workspace.
* Used when an enterprise plan churns to clean up BYOK keys.
*
 * Query Parameters:
 * - organizationId: string - Delete all BYOK keys for workspaces billed to this org
 * - workspaceId?: string - Delete keys for a specific workspace only (optional)
 * - reason?: string - Optional reason recorded in logs and echoed in the response (defaults to 'Enterprise plan churn cleanup')
*
* Response: { success: true, deletedCount: number, workspacesAffected: string[] }
*/
import { db } from '@sim/db'
import { user, workspace, workspaceBYOKKeys } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq, inArray, sql } from 'drizzle-orm'
import { withAdminAuth } from '@/app/api/v1/admin/middleware'
import {
badRequestResponse,
internalErrorResponse,
singleResponse,
} from '@/app/api/v1/admin/responses'
const logger = createLogger('AdminBYOKAPI')
export interface AdminBYOKKey {
id: string
workspaceId: string
workspaceName: string
organizationId: string
providerId: string
createdAt: string
createdByUserId: string | null
createdByEmail: string | null
}
export const GET = withAdminAuth(async (request) => {
const url = new URL(request.url)
const organizationId = url.searchParams.get('organizationId')
const workspaceId = url.searchParams.get('workspaceId')
try {
let workspaceIds: string[] = []
if (workspaceId) {
workspaceIds = [workspaceId]
} else if (organizationId) {
const workspaces = await db
.select({ id: workspace.id })
.from(workspace)
.where(eq(workspace.billedAccountUserId, organizationId))
workspaceIds = workspaces.map((w) => w.id)
}
const query = db
.select({
id: workspaceBYOKKeys.id,
workspaceId: workspaceBYOKKeys.workspaceId,
workspaceName: workspace.name,
organizationId: workspace.billedAccountUserId,
providerId: workspaceBYOKKeys.providerId,
createdAt: workspaceBYOKKeys.createdAt,
createdByUserId: workspaceBYOKKeys.createdBy,
createdByEmail: user.email,
})
.from(workspaceBYOKKeys)
.innerJoin(workspace, eq(workspaceBYOKKeys.workspaceId, workspace.id))
.leftJoin(user, eq(workspaceBYOKKeys.createdBy, user.id))
let keys
if (workspaceIds.length > 0) {
keys = await query.where(inArray(workspaceBYOKKeys.workspaceId, workspaceIds))
} else {
keys = await query
}
const formattedKeys: AdminBYOKKey[] = keys.map((k) => ({
id: k.id,
workspaceId: k.workspaceId,
workspaceName: k.workspaceName,
organizationId: k.organizationId,
providerId: k.providerId,
createdAt: k.createdAt.toISOString(),
createdByUserId: k.createdByUserId,
createdByEmail: k.createdByEmail,
}))
logger.info('Admin API: Listed BYOK keys', {
organizationId,
workspaceId,
count: formattedKeys.length,
})
return singleResponse({
data: formattedKeys,
pagination: {
total: formattedKeys.length,
limit: formattedKeys.length,
offset: 0,
hasMore: false,
},
})
} catch (error) {
logger.error('Admin API: Failed to list BYOK keys', { error, organizationId, workspaceId })
return internalErrorResponse('Failed to list BYOK keys')
}
})
export const DELETE = withAdminAuth(async (request) => {
const url = new URL(request.url)
const organizationId = url.searchParams.get('organizationId')
const workspaceId = url.searchParams.get('workspaceId')
const reason = url.searchParams.get('reason') || 'Enterprise plan churn cleanup'
if (!organizationId && !workspaceId) {
return badRequestResponse('Either organizationId or workspaceId is required')
}
try {
let workspaceIds: string[] = []
if (workspaceId) {
workspaceIds = [workspaceId]
} else if (organizationId) {
const workspaces = await db
.select({ id: workspace.id })
.from(workspace)
.where(eq(workspace.billedAccountUserId, organizationId))
workspaceIds = workspaces.map((w) => w.id)
}
if (workspaceIds.length === 0) {
logger.info('Admin API: No workspaces found for BYOK cleanup', {
organizationId,
workspaceId,
})
return singleResponse({
success: true,
deletedCount: 0,
workspacesAffected: [],
message: 'No workspaces found for the given organization/workspace ID',
})
}
const countResult = await db
.select({ count: sql<number>`count(*)` })
.from(workspaceBYOKKeys)
.where(inArray(workspaceBYOKKeys.workspaceId, workspaceIds))
const totalToDelete = Number(countResult[0]?.count ?? 0)
if (totalToDelete === 0) {
logger.info('Admin API: No BYOK keys to delete', {
organizationId,
workspaceId,
workspaceIds,
})
return singleResponse({
success: true,
deletedCount: 0,
workspacesAffected: [],
message: 'No BYOK keys found for the specified workspaces',
})
}
await db.delete(workspaceBYOKKeys).where(inArray(workspaceBYOKKeys.workspaceId, workspaceIds))
logger.info('Admin API: Deleted BYOK keys', {
organizationId,
workspaceId,
workspaceIds,
deletedCount: totalToDelete,
reason,
})
return singleResponse({
success: true,
deletedCount: totalToDelete,
workspacesAffected: workspaceIds,
reason,
})
} catch (error) {
logger.error('Admin API: Failed to delete BYOK keys', { error, organizationId, workspaceId })
return internalErrorResponse('Failed to delete BYOK keys')
}
})
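Given the endpoint documentation at the top of this file, a hedged usage sketch from an internal admin script might look like the following; the base URL and Authorization header are assumptions, since the credentials expected by withAdminAuth are not visible in this diff:

// Illustrative only: the base URL and auth header are assumptions, not taken from this diff.
async function adminByokExample() {
  const ADMIN_BASE = 'https://example.com/api/v1/admin'
  const headers = { Authorization: `Bearer ${process.env.ADMIN_API_KEY}` }

  // List BYOK keys for a single workspace
  const listRes = await fetch(`${ADMIN_BASE}/byok?workspaceId=ws_123`, { headers })
  const { data, pagination } = await listRes.json() // data: AdminBYOKKey[]

  // Delete every BYOK key billed to an organization (e.g. on enterprise churn)
  const delRes = await fetch(
    `${ADMIN_BASE}/byok?organizationId=org_456&reason=Enterprise%20plan%20churn`,
    { method: 'DELETE', headers }
  )
  const { deletedCount, workspacesAffected } = await delRes.json()
  return { data, pagination, deletedCount, workspacesAffected }
}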

View File

@@ -51,6 +51,10 @@
* GET /api/v1/admin/subscriptions - List all subscriptions
* GET /api/v1/admin/subscriptions/:id - Get subscription details
* DELETE /api/v1/admin/subscriptions/:id - Cancel subscription (?atPeriodEnd=true for scheduled)
*
* BYOK Keys:
* GET /api/v1/admin/byok - List BYOK keys (?organizationId=X or ?workspaceId=X)
* DELETE /api/v1/admin/byok - Delete BYOK keys for org/workspace
*/
export type { AdminAuthFailure, AdminAuthResult, AdminAuthSuccess } from '@/app/api/v1/admin/auth'

View File

@@ -59,6 +59,7 @@ interface RequestBody {
stream?: boolean
history?: ChatMessage[]
workflowId?: string
generationType?: string
}
function safeStringify(value: unknown): string {
@@ -158,7 +159,7 @@ export async function POST(req: NextRequest) {
try {
const body = (await req.json()) as RequestBody
const { prompt, systemPrompt, stream = false, history = [], workflowId } = body
const { prompt, systemPrompt, stream = false, history = [], workflowId, generationType } = body
if (!prompt) {
logger.warn(`[${requestId}] Invalid request: Missing prompt.`)
@@ -222,10 +223,26 @@ export async function POST(req: NextRequest) {
)
}
const finalSystemPrompt =
let finalSystemPrompt =
systemPrompt ||
'You are a helpful AI assistant. Generate content exactly as requested by the user.'
if (generationType === 'timestamp') {
const now = new Date()
const currentTimeContext = `\n\nCurrent date and time context for reference:
- Current UTC timestamp: ${now.toISOString()}
- Current Unix timestamp (seconds): ${Math.floor(now.getTime() / 1000)}
- Current Unix timestamp (milliseconds): ${now.getTime()}
- Current date (UTC): ${now.toISOString().split('T')[0]}
- Current year: ${now.getUTCFullYear()}
- Current month: ${now.getUTCMonth() + 1}
- Current day of month: ${now.getUTCDate()}
- Current day of week: ${['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'][now.getUTCDay()]}
Use this context to calculate relative dates like "yesterday", "last week", "beginning of this month", etc.`
finalSystemPrompt += currentTimeContext
}
const messages: ChatMessage[] = [{ role: 'system', content: finalSystemPrompt }]
messages.push(...history.filter((msg) => msg.role !== 'system'))
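generationType is an optional addition to RequestBody; when it is 'timestamp', the handler appends the current-date context above to the system prompt before building the message list. A sketch of a request exercising it, with the route path left as a placeholder because it is not visible in this hunk:

// The route path below is a placeholder; only the body shape comes from the RequestBody interface above.
async function generateTimestamp() {
  const res = await fetch('/api/<wand-generation-route>', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      prompt: 'Give me the Unix timestamp for the start of last week',
      generationType: 'timestamp', // triggers the current-time context injection above
      stream: false,
      history: [],
    }),
  })
  return res.json()
}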

View File

@@ -1,10 +1,11 @@
import { db } from '@sim/db'
import { webhook, workflow } from '@sim/db/schema'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { validateInteger } from '@/lib/core/security/input-validation'
import { PlatformEvents } from '@/lib/core/telemetry'
import { generateRequestId } from '@/lib/core/utils/request'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
@@ -184,16 +185,28 @@ export async function PATCH(request: NextRequest, { params }: { params: Promise<
hasFailedCountUpdate: failedCount !== undefined,
})
// Update the webhook
// Merge providerConfig to preserve credential-related fields
let finalProviderConfig = webhooks[0].webhook.providerConfig
if (providerConfig !== undefined) {
const existingConfig = (webhooks[0].webhook.providerConfig as Record<string, unknown>) || {}
finalProviderConfig = {
...resolvedProviderConfig,
credentialId: existingConfig.credentialId,
credentialSetId: existingConfig.credentialSetId,
userId: existingConfig.userId,
historyId: existingConfig.historyId,
lastCheckedTimestamp: existingConfig.lastCheckedTimestamp,
setupCompleted: existingConfig.setupCompleted,
externalId: existingConfig.externalId,
}
}
const updatedWebhook = await db
.update(webhook)
.set({
path: path !== undefined ? path : webhooks[0].webhook.path,
provider: provider !== undefined ? provider : webhooks[0].webhook.provider,
providerConfig:
providerConfig !== undefined
? resolvedProviderConfig
: webhooks[0].webhook.providerConfig,
providerConfig: finalProviderConfig,
isActive: isActive !== undefined ? isActive : webhooks[0].webhook.isActive,
failedCount: failedCount !== undefined ? failedCount : webhooks[0].webhook.failedCount,
updatedAt: new Date(),
@@ -276,13 +289,67 @@ export async function DELETE(
}
const foundWebhook = webhookData.webhook
const { cleanupExternalWebhook } = await import('@/lib/webhooks/provider-subscriptions')
await cleanupExternalWebhook(foundWebhook, webhookData.workflow, requestId)
await db.delete(webhook).where(eq(webhook.id, id))
const providerConfig = foundWebhook.providerConfig as Record<string, unknown> | null
const credentialSetId = providerConfig?.credentialSetId as string | undefined
const blockId = providerConfig?.blockId as string | undefined
if (credentialSetId && blockId) {
const allCredentialSetWebhooks = await db
.select()
.from(webhook)
.where(and(eq(webhook.workflowId, webhookData.workflow.id), eq(webhook.blockId, blockId)))
const webhooksToDelete = allCredentialSetWebhooks.filter((w) => {
const config = w.providerConfig as Record<string, unknown> | null
return config?.credentialSetId === credentialSetId
})
for (const w of webhooksToDelete) {
await cleanupExternalWebhook(w, webhookData.workflow, requestId)
}
const idsToDelete = webhooksToDelete.map((w) => w.id)
for (const wId of idsToDelete) {
await db.delete(webhook).where(eq(webhook.id, wId))
}
try {
for (const wId of idsToDelete) {
PlatformEvents.webhookDeleted({
webhookId: wId,
workflowId: webhookData.workflow.id,
})
}
} catch {
// Telemetry should not fail the operation
}
logger.info(
`[${requestId}] Successfully deleted ${idsToDelete.length} webhooks for credential set`,
{
credentialSetId,
blockId,
deletedIds: idsToDelete,
}
)
} else {
await cleanupExternalWebhook(foundWebhook, webhookData.workflow, requestId)
await db.delete(webhook).where(eq(webhook.id, id))
try {
PlatformEvents.webhookDeleted({
webhookId: id,
workflowId: webhookData.workflow.id,
})
} catch {
// Telemetry should not fail the operation
}
logger.info(`[${requestId}] Successfully deleted webhook: ${id}`)
}
logger.info(`[${requestId}] Successfully deleted webhook: ${id}`)
return NextResponse.json({ success: true }, { status: 200 })
} catch (error: any) {
logger.error(`[${requestId}] Error deleting webhook`, {

View File

@@ -5,6 +5,7 @@ import { and, desc, eq } from 'drizzle-orm'
import { nanoid } from 'nanoid'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { PlatformEvents } from '@/lib/core/telemetry'
import { generateRequestId } from '@/lib/core/utils/request'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
@@ -262,6 +263,157 @@ export async function POST(request: NextRequest) {
workflowRecord.workspaceId || undefined
)
// --- Credential Set Handling ---
// For credential sets, we fan out to create one webhook per credential at save time.
// This applies to all OAuth-based triggers, not just polling ones.
// Check for credentialSetId directly (the frontend may have already extracted it) or for a credential set value in the credential fields
const rawCredentialId = (resolvedProviderConfig?.credentialId ||
resolvedProviderConfig?.triggerCredentials) as string | undefined
const directCredentialSetId = resolvedProviderConfig?.credentialSetId as string | undefined
if (directCredentialSetId || rawCredentialId) {
const { isCredentialSetValue, extractCredentialSetId } = await import('@/executor/constants')
const credentialSetId =
directCredentialSetId ||
(rawCredentialId && isCredentialSetValue(rawCredentialId)
? extractCredentialSetId(rawCredentialId)
: null)
if (credentialSetId) {
logger.info(
`[${requestId}] Credential set detected for ${provider} trigger. Syncing webhooks for set ${credentialSetId}`
)
const { getProviderIdFromServiceId } = await import('@/lib/oauth')
const { syncWebhooksForCredentialSet, configureGmailPolling, configureOutlookPolling } =
await import('@/lib/webhooks/utils.server')
// Map provider to OAuth provider ID
const oauthProviderId = getProviderIdFromServiceId(provider)
const {
credentialId: _cId,
triggerCredentials: _tCred,
credentialSetId: _csId,
...baseProviderConfig
} = resolvedProviderConfig
try {
const syncResult = await syncWebhooksForCredentialSet({
workflowId,
blockId,
provider,
basePath: finalPath,
credentialSetId,
oauthProviderId,
providerConfig: baseProviderConfig,
requestId,
})
if (syncResult.webhooks.length === 0) {
logger.error(
`[${requestId}] No webhooks created for credential set - no valid credentials found`
)
return NextResponse.json(
{
error: `No valid credentials found in credential set for ${provider}`,
details: 'Please ensure team members have connected their accounts',
},
{ status: 400 }
)
}
// Configure each new webhook (for providers that need configuration)
const pollingProviders = ['gmail', 'outlook']
const needsConfiguration = pollingProviders.includes(provider)
if (needsConfiguration) {
const configureFunc =
provider === 'gmail' ? configureGmailPolling : configureOutlookPolling
const configureErrors: string[] = []
for (const wh of syncResult.webhooks) {
if (wh.isNew) {
// Fetch the webhook data for configuration
const webhookRows = await db
.select()
.from(webhook)
.where(eq(webhook.id, wh.id))
.limit(1)
if (webhookRows.length > 0) {
const success = await configureFunc(webhookRows[0], requestId)
if (!success) {
configureErrors.push(
`Failed to configure webhook for credential ${wh.credentialId}`
)
logger.warn(
`[${requestId}] Failed to configure ${provider} polling for webhook ${wh.id}`
)
}
}
}
}
if (
configureErrors.length > 0 &&
configureErrors.length === syncResult.webhooks.length
) {
// All configurations failed - roll back
logger.error(`[${requestId}] All webhook configurations failed, rolling back`)
for (const wh of syncResult.webhooks) {
await db.delete(webhook).where(eq(webhook.id, wh.id))
}
return NextResponse.json(
{
error: `Failed to configure ${provider} polling`,
details: 'Please check account permissions and try again',
},
{ status: 500 }
)
}
}
logger.info(
`[${requestId}] Successfully synced ${syncResult.webhooks.length} webhooks for credential set ${credentialSetId}`
)
// Return the first webhook as the "primary" for the UI
// The UI will query by credentialSetId to get all of them
const primaryWebhookRows = await db
.select()
.from(webhook)
.where(eq(webhook.id, syncResult.webhooks[0].id))
.limit(1)
return NextResponse.json(
{
webhook: primaryWebhookRows[0],
credentialSetInfo: {
credentialSetId,
totalWebhooks: syncResult.webhooks.length,
created: syncResult.created,
updated: syncResult.updated,
deleted: syncResult.deleted,
},
},
{ status: syncResult.created > 0 ? 201 : 200 }
)
} catch (err) {
logger.error(`[${requestId}] Error syncing webhooks for credential set`, err)
return NextResponse.json(
{
error: `Failed to configure ${provider} webhook`,
details: err instanceof Error ? err.message : 'Unknown error',
},
{ status: 500 }
)
}
}
}
// --- End Credential Set Handling ---
// Create external subscriptions before saving to DB to prevent orphaned records
let externalSubscriptionId: string | undefined
let externalSubscriptionCreated = false
@@ -422,6 +574,10 @@ export async function POST(request: NextRequest) {
blockId,
provider,
providerConfig: resolvedProviderConfig,
credentialSetId:
((resolvedProviderConfig as Record<string, unknown>)?.credentialSetId as
| string
| null) || null,
isActive: true,
updatedAt: new Date(),
})
@@ -445,6 +601,10 @@ export async function POST(request: NextRequest) {
path: finalPath,
provider,
providerConfig: resolvedProviderConfig,
credentialSetId:
((resolvedProviderConfig as Record<string, unknown>)?.credentialSetId as
| string
| null) || null,
isActive: true,
createdAt: new Date(),
updatedAt: new Date(),
@@ -584,7 +744,7 @@ export async function POST(request: NextRequest) {
if (savedWebhook && provider === 'grain') {
logger.info(`[${requestId}] Grain provider detected. Creating Grain webhook subscription.`)
try {
const grainHookId = await createGrainWebhookSubscription(
const grainResult = await createGrainWebhookSubscription(
request,
{
id: savedWebhook.id,
@@ -594,11 +754,12 @@ export async function POST(request: NextRequest) {
requestId
)
if (grainHookId) {
// Update the webhook record with the external Grain hook ID
if (grainResult) {
// Update the webhook record with the external Grain hook ID and event types for filtering
const updatedConfig = {
...(savedWebhook.providerConfig as Record<string, any>),
externalId: grainHookId,
externalId: grainResult.id,
eventTypes: grainResult.eventTypes,
}
await db
.update(webhook)
@@ -610,7 +771,8 @@ export async function POST(request: NextRequest) {
savedWebhook.providerConfig = updatedConfig
logger.info(`[${requestId}] Successfully created Grain webhook`, {
grainHookId,
grainHookId: grainResult.id,
eventTypes: grainResult.eventTypes,
webhookId: savedWebhook.id,
})
}
@@ -631,6 +793,19 @@ export async function POST(request: NextRequest) {
}
// --- End Grain specific logic ---
if (!targetWebhookId && savedWebhook) {
try {
PlatformEvents.webhookCreated({
webhookId: savedWebhook.id,
workflowId: workflowId,
provider: provider || 'generic',
workspaceId: workflowRecord.workspaceId || undefined,
})
} catch {
// Telemetry should not fail the operation
}
}
const status = targetWebhookId ? 200 : 201
return NextResponse.json({ webhook: savedWebhook }, { status })
} catch (error: any) {
@@ -1003,10 +1178,10 @@ async function createGrainWebhookSubscription(
request: NextRequest,
webhookData: any,
requestId: string
): Promise<string | undefined> {
): Promise<{ id: string; eventTypes: string[] } | undefined> {
try {
const { path, providerConfig } = webhookData
const { apiKey, includeHighlights, includeParticipants, includeAiSummary } =
const { apiKey, triggerId, includeHighlights, includeParticipants, includeAiSummary } =
providerConfig || {}
if (!apiKey) {
@@ -1018,12 +1193,53 @@ async function createGrainWebhookSubscription(
)
}
// Map trigger IDs to Grain API hook_type (only 2 options: recording_added, upload_status)
const hookTypeMap: Record<string, string> = {
grain_webhook: 'recording_added',
grain_recording_created: 'recording_added',
grain_recording_updated: 'recording_added',
grain_highlight_created: 'recording_added',
grain_highlight_updated: 'recording_added',
grain_story_created: 'recording_added',
grain_upload_status: 'upload_status',
}
const eventTypeMap: Record<string, string[]> = {
grain_webhook: [],
grain_recording_created: ['recording_added'],
grain_recording_updated: ['recording_updated'],
grain_highlight_created: ['highlight_created'],
grain_highlight_updated: ['highlight_updated'],
grain_story_created: ['story_created'],
grain_upload_status: ['upload_status'],
}
const hookType = hookTypeMap[triggerId] ?? 'recording_added'
const eventTypes = eventTypeMap[triggerId] ?? []
if (!hookTypeMap[triggerId]) {
logger.warn(
`[${requestId}] Unknown triggerId for Grain: ${triggerId}, defaulting to recording_added`,
{
webhookId: webhookData.id,
}
)
}
logger.info(`[${requestId}] Creating Grain webhook`, {
triggerId,
hookType,
eventTypes,
webhookId: webhookData.id,
})
const notificationUrl = `${getBaseUrl()}/api/webhooks/trigger/${path}`
const grainApiUrl = 'https://api.grain.com/_/public-api/v2/hooks/create'
const requestBody: Record<string, any> = {
hook_url: notificationUrl,
hook_type: hookType,
}
// Build include object based on configuration
@@ -1053,8 +1269,10 @@ async function createGrainWebhookSubscription(
const responseBody = await grainResponse.json()
if (!grainResponse.ok || responseBody.error) {
if (!grainResponse.ok || responseBody.error || responseBody.errors) {
logger.warn('[App] Grain response body:', responseBody)
const errorMessage =
responseBody.errors?.detail ||
responseBody.error?.message ||
responseBody.error ||
responseBody.message ||
@@ -1082,10 +1300,11 @@ async function createGrainWebhookSubscription(
`[${requestId}] Successfully created webhook in Grain for webhook ${webhookData.id}.`,
{
grainWebhookId: responseBody.id,
eventTypes,
}
)
return responseBody.id
return { id: responseBody.id, eventTypes }
} catch (error: any) {
logger.error(
`[${requestId}] Exception during Grain webhook creation for webhook ${webhookData.id}.`,
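The credential-set branch earlier in this route depends on syncWebhooksForCredentialSet from @/lib/webhooks/utils.server, whose return type is not part of this diff. Inferred from the fields the route reads (webhooks[].id, webhooks[].isNew, webhooks[].credentialId and the created/updated/deleted counters), its shape is presumably close to:

// Inferred from usage in this route; the actual type lives in @/lib/webhooks/utils.server and may differ.
interface CredentialSetSyncResult {
  webhooks: Array<{
    id: string // webhook row id
    credentialId: string // the individual credential this webhook is bound to
    isNew: boolean // true when this sync created the webhook rather than updating it
  }>
  created: number
  updated: number
  deleted: number
}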

View File

@@ -157,6 +157,112 @@ vi.mock('@/lib/workflows/persistence/utils', () => ({
blockExistsInDeployment: vi.fn().mockResolvedValue(true),
}))
vi.mock('@/lib/webhooks/processor', () => ({
findAllWebhooksForPath: vi.fn().mockImplementation(async (options: { path: string }) => {
// Filter webhooks by path from globalMockData
const matchingWebhooks = globalMockData.webhooks.filter(
(wh) => wh.path === options.path && wh.isActive
)
if (matchingWebhooks.length === 0) {
return []
}
// Return array of {webhook, workflow} objects
return matchingWebhooks.map((wh) => {
const matchingWorkflow = globalMockData.workflows.find((w) => w.id === wh.workflowId) || {
id: wh.workflowId || 'test-workflow-id',
userId: 'test-user-id',
workspaceId: 'test-workspace-id',
}
return {
webhook: wh,
workflow: matchingWorkflow,
}
})
}),
parseWebhookBody: vi.fn().mockImplementation(async (request: any) => {
try {
const cloned = request.clone()
const rawBody = await cloned.text()
const body = rawBody ? JSON.parse(rawBody) : {}
return { body, rawBody }
} catch {
return { body: {}, rawBody: '' }
}
}),
handleProviderChallenges: vi.fn().mockResolvedValue(null),
handleProviderReachabilityTest: vi.fn().mockReturnValue(null),
verifyProviderAuth: vi
.fn()
.mockImplementation(
async (
foundWebhook: any,
_foundWorkflow: any,
request: any,
_rawBody: string,
_requestId: string
) => {
// Implement generic webhook auth verification for tests
if (foundWebhook.provider === 'generic') {
const providerConfig = foundWebhook.providerConfig || {}
if (providerConfig.requireAuth) {
const configToken = providerConfig.token
const secretHeaderName = providerConfig.secretHeaderName
if (configToken) {
let isTokenValid = false
if (secretHeaderName) {
// Custom header auth
const headerValue = request.headers.get(secretHeaderName.toLowerCase())
if (headerValue === configToken) {
isTokenValid = true
}
} else {
// Bearer token auth
const authHeader = request.headers.get('authorization')
if (authHeader?.toLowerCase().startsWith('bearer ')) {
const token = authHeader.substring(7)
if (token === configToken) {
isTokenValid = true
}
}
}
if (!isTokenValid) {
const { NextResponse } = await import('next/server')
return new NextResponse('Unauthorized - Invalid authentication token', {
status: 401,
})
}
} else {
// Auth required but no token configured
const { NextResponse } = await import('next/server')
return new NextResponse('Unauthorized - Authentication required but not configured', {
status: 401,
})
}
}
}
return null
}
),
checkWebhookPreprocessing: vi.fn().mockResolvedValue(null),
formatProviderErrorResponse: vi.fn().mockImplementation((_webhook, error, status) => {
const { NextResponse } = require('next/server')
return NextResponse.json({ error }, { status })
}),
shouldSkipWebhookEvent: vi.fn().mockReturnValue(false),
handlePreDeploymentVerification: vi.fn().mockReturnValue(null),
queueWebhookExecution: vi.fn().mockImplementation(async () => {
// Call processWebhookMock so tests can verify it was called
processWebhookMock()
const { NextResponse } = await import('next/server')
return NextResponse.json({ message: 'Webhook processed' })
}),
}))
vi.mock('drizzle-orm/postgres-js', () => ({
drizzle: vi.fn().mockReturnValue({}),
}))
@@ -165,6 +271,10 @@ vi.mock('postgres', () => vi.fn().mockReturnValue({}))
vi.mock('@sim/logger', () => loggerMock)
vi.mock('@/lib/core/utils/request', () => ({
generateRequestId: vi.fn().mockReturnValue('test-request-id'),
}))
process.env.DATABASE_URL = 'postgresql://test:test@localhost:5432/test'
import { POST } from '@/app/api/webhooks/trigger/[path]/route'
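The mocked verifyProviderAuth above checks requireAuth, token, and secretHeaderName on a generic webhook's providerConfig. A providerConfig for custom-header auth, using placeholder values and only the fields the mock reads, would look roughly like:

// Placeholder values; the field names mirror what the mocked verifyProviderAuth reads.
const genericProviderConfig = {
  requireAuth: true,
  token: 'shared-secret-value',
  secretHeaderName: 'X-Webhook-Secret', // omit to fall back to Authorization: Bearer <token>
}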

View File

@@ -3,11 +3,14 @@ import { type NextRequest, NextResponse } from 'next/server'
import { generateRequestId } from '@/lib/core/utils/request'
import {
checkWebhookPreprocessing,
findWebhookAndWorkflow,
findAllWebhooksForPath,
formatProviderErrorResponse,
handlePreDeploymentVerification,
handleProviderChallenges,
handleProviderReachabilityTest,
parseWebhookBody,
queueWebhookExecution,
shouldSkipWebhookEvent,
verifyProviderAuth,
} from '@/lib/webhooks/processor'
import { blockExistsInDeployment } from '@/lib/workflows/persistence/utils'
@@ -22,19 +25,7 @@ export async function GET(request: NextRequest, { params }: { params: Promise<{
const requestId = generateRequestId()
const { path } = await params
// Handle Microsoft Graph subscription validation
const url = new URL(request.url)
const validationToken = url.searchParams.get('validationToken')
if (validationToken) {
logger.info(`[${requestId}] Microsoft Graph subscription validation for path: ${path}`)
return new NextResponse(validationToken, {
status: 200,
headers: { 'Content-Type': 'text/plain' },
})
}
// Handle other GET-based verifications if needed
// Handle provider-specific GET verifications (Microsoft Graph, WhatsApp, etc.)
const challengeResponse = await handleProviderChallenges({}, request, requestId, path)
if (challengeResponse) {
return challengeResponse
@@ -50,26 +41,10 @@ export async function POST(
const requestId = generateRequestId()
const { path } = await params
// Log ALL incoming webhook requests for debugging
logger.info(`[${requestId}] Incoming webhook request`, {
path,
method: request.method,
headers: Object.fromEntries(request.headers.entries()),
})
// Handle Microsoft Graph subscription validation (some environments send POST with validationToken)
try {
const url = new URL(request.url)
const validationToken = url.searchParams.get('validationToken')
if (validationToken) {
logger.info(`[${requestId}] Microsoft Graph subscription validation (POST) for path: ${path}`)
return new NextResponse(validationToken, {
status: 200,
headers: { 'Content-Type': 'text/plain' },
})
}
} catch {
// ignore URL parsing errors; proceed to normal handling
// Handle provider challenges before body parsing (Microsoft Graph validationToken, etc.)
const earlyChallenge = await handleProviderChallenges({}, request, requestId, path)
if (earlyChallenge) {
return earlyChallenge
}
const parseResult = await parseWebhookBody(request, requestId)
@@ -86,109 +61,118 @@ export async function POST(
return challengeResponse
}
const findResult = await findWebhookAndWorkflow({ requestId, path })
// Find all webhooks for this path (supports credential set fan-out where multiple webhooks share a path)
const webhooksForPath = await findAllWebhooksForPath({ requestId, path })
if (!findResult) {
if (webhooksForPath.length === 0) {
logger.warn(`[${requestId}] Webhook or workflow not found for path: ${path}`)
return new NextResponse('Not Found', { status: 404 })
}
const { webhook: foundWebhook, workflow: foundWorkflow } = findResult
// Process each webhook
// For credential sets with shared paths, each webhook represents a different credential
const responses: NextResponse[] = []
// Log HubSpot webhook details for debugging
if (foundWebhook.provider === 'hubspot') {
const events = Array.isArray(body) ? body : [body]
const firstEvent = events[0]
logger.info(`[${requestId}] HubSpot webhook received`, {
path,
subscriptionType: firstEvent?.subscriptionType,
objectId: firstEvent?.objectId,
portalId: firstEvent?.portalId,
webhookId: foundWebhook.id,
workflowId: foundWorkflow.id,
triggerId: foundWebhook.providerConfig?.triggerId,
eventCount: events.length,
})
}
const authError = await verifyProviderAuth(
foundWebhook,
foundWorkflow,
request,
rawBody,
requestId
)
if (authError) {
return authError
}
const reachabilityResponse = handleProviderReachabilityTest(foundWebhook, body, requestId)
if (reachabilityResponse) {
return reachabilityResponse
}
let preprocessError: NextResponse | null = null
try {
preprocessError = await checkWebhookPreprocessing(foundWorkflow, foundWebhook, requestId)
if (preprocessError) {
return preprocessError
}
} catch (error) {
logger.error(`[${requestId}] Unexpected error during webhook preprocessing`, {
error: error instanceof Error ? error.message : String(error),
stack: error instanceof Error ? error.stack : undefined,
webhookId: foundWebhook.id,
workflowId: foundWorkflow.id,
})
if (foundWebhook.provider === 'microsoft-teams') {
return NextResponse.json(
{
type: 'message',
text: 'An unexpected error occurred during preprocessing',
},
{ status: 500 }
)
}
return NextResponse.json(
{ error: 'An unexpected error occurred during preprocessing' },
{ status: 500 }
for (const { webhook: foundWebhook, workflow: foundWorkflow } of webhooksForPath) {
const authError = await verifyProviderAuth(
foundWebhook,
foundWorkflow,
request,
rawBody,
requestId
)
}
if (foundWebhook.blockId) {
const blockExists = await blockExistsInDeployment(foundWorkflow.id, foundWebhook.blockId)
if (!blockExists) {
logger.info(
`[${requestId}] Trigger block ${foundWebhook.blockId} not found in deployment for workflow ${foundWorkflow.id}`
)
return new NextResponse('Trigger block not found in deployment', { status: 404 })
if (authError) {
// For multi-webhook, log and continue to next webhook
if (webhooksForPath.length > 1) {
logger.warn(`[${requestId}] Auth failed for webhook ${foundWebhook.id}, continuing to next`)
continue
}
return authError
}
}
if (foundWebhook.provider === 'stripe') {
const providerConfig = (foundWebhook.providerConfig as Record<string, any>) || {}
const eventTypes = providerConfig.eventTypes
const reachabilityResponse = handleProviderReachabilityTest(foundWebhook, body, requestId)
if (reachabilityResponse) {
// Reachability test should return immediately for the first webhook
return reachabilityResponse
}
if (eventTypes && Array.isArray(eventTypes) && eventTypes.length > 0) {
const eventType = body?.type
let preprocessError: NextResponse | null = null
try {
preprocessError = await checkWebhookPreprocessing(foundWorkflow, foundWebhook, requestId)
if (preprocessError) {
if (webhooksForPath.length > 1) {
logger.warn(
`[${requestId}] Preprocessing failed for webhook ${foundWebhook.id}, continuing to next`
)
continue
}
return preprocessError
}
} catch (error) {
logger.error(`[${requestId}] Unexpected error during webhook preprocessing`, {
error: error instanceof Error ? error.message : String(error),
stack: error instanceof Error ? error.stack : undefined,
webhookId: foundWebhook.id,
workflowId: foundWorkflow.id,
})
if (webhooksForPath.length > 1) {
continue
}
return formatProviderErrorResponse(
foundWebhook,
'An unexpected error occurred during preprocessing',
500
)
}
if (foundWebhook.blockId) {
const blockExists = await blockExistsInDeployment(foundWorkflow.id, foundWebhook.blockId)
if (!blockExists) {
const preDeploymentResponse = handlePreDeploymentVerification(foundWebhook, requestId)
if (preDeploymentResponse) {
return preDeploymentResponse
}
if (eventType && !eventTypes.includes(eventType)) {
logger.info(
`[${requestId}] Stripe event type '${eventType}' not in allowed list, skipping execution`
`[${requestId}] Trigger block ${foundWebhook.blockId} not found in deployment for workflow ${foundWorkflow.id}`
)
return new NextResponse('Event type filtered', { status: 200 })
if (webhooksForPath.length > 1) {
continue
}
return new NextResponse('Trigger block not found in deployment', { status: 404 })
}
}
if (shouldSkipWebhookEvent(foundWebhook, body, requestId)) {
continue
}
const response = await queueWebhookExecution(foundWebhook, foundWorkflow, body, request, {
requestId,
path,
testMode: false,
executionTarget: 'deployed',
})
responses.push(response)
}
return queueWebhookExecution(foundWebhook, foundWorkflow, body, request, {
requestId,
path,
testMode: false,
executionTarget: 'deployed',
// Return the single response directly, or a combined summary when multiple webhooks were processed
if (responses.length === 0) {
return new NextResponse('No webhooks processed successfully', { status: 500 })
}
if (responses.length === 1) {
return responses[0]
}
// For multiple webhooks, return success if at least one succeeded
logger.info(
`[${requestId}] Processed ${responses.length} webhooks for path: ${path} (credential set fan-out)`
)
return NextResponse.json({
success: true,
webhooksProcessed: responses.length,
})
}
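After this refactor, a path backed by a single webhook still gets that webhook's own response, while a credential-set path that fans out to several webhooks gets the combined summary built at the end of the handler; its body, with an illustrative count, is:

// Combined body returned when more than one webhook was processed for the path (count shown is illustrative).
const combinedBody = { success: true, webhooksProcessed: 3 }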

View File

@@ -217,10 +217,8 @@ export async function DELETE(
logger.info(`[${requestId}] Workflow undeployed successfully: ${id}`)
try {
const { trackPlatformEvent } = await import('@/lib/core/telemetry')
trackPlatformEvent('platform.workflow.undeployed', {
'workflow.id': id,
})
const { PlatformEvents } = await import('@/lib/core/telemetry')
PlatformEvents.workflowUndeployed({ workflowId: id })
} catch (_e) {
// Silently fail
}

View File

@@ -2,6 +2,7 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { PlatformEvents } from '@/lib/core/telemetry'
import { generateRequestId } from '@/lib/core/utils/request'
import { duplicateWorkflow } from '@/lib/workflows/persistence/duplicate'
@@ -46,6 +47,16 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
requestId,
})
try {
PlatformEvents.workflowDuplicated({
sourceWorkflowId,
newWorkflowId: result.id,
workspaceId,
})
} catch {
// Telemetry should not fail the operation
}
const elapsed = Date.now() - startTime
logger.info(
`[${requestId}] Successfully duplicated workflow ${sourceWorkflowId} to ${result.id} in ${elapsed}ms`

View File

@@ -8,6 +8,7 @@ import { authenticateApiKeyFromHeader, updateApiKeyLastUsed } from '@/lib/api-ke
import { getSession } from '@/lib/auth'
import { verifyInternalToken } from '@/lib/auth/internal'
import { env } from '@/lib/core/config/env'
import { PlatformEvents } from '@/lib/core/telemetry'
import { generateRequestId } from '@/lib/core/utils/request'
import { loadWorkflowFromNormalizedTables } from '@/lib/workflows/persistence/utils'
import { getWorkflowAccessContext, getWorkflowById } from '@/lib/workflows/utils'
@@ -335,6 +336,15 @@ export async function DELETE(
await db.delete(workflow).where(eq(workflow.id, workflowId))
try {
PlatformEvents.workflowDeleted({
workflowId,
workspaceId: workflowData.workspaceId || undefined,
})
} catch {
// Telemetry should not fail the operation
}
const elapsed = Date.now() - startTime
logger.info(`[${requestId}] Successfully deleted workflow ${workflowId} in ${elapsed}ms`)

View File

@@ -317,6 +317,8 @@ interface WebhookMetadata {
providerConfig: Record<string, any>
}
const CREDENTIAL_SET_PREFIX = 'credentialSet:'
function buildWebhookMetadata(block: BlockState): WebhookMetadata | null {
const triggerId =
getSubBlockValue<string>(block, 'triggerId') ||
@@ -328,9 +330,17 @@ function buildWebhookMetadata(block: BlockState): WebhookMetadata | null {
const triggerDef = triggerId ? getTrigger(triggerId) : undefined
const provider = triggerDef?.provider || null
// Handle credential sets vs individual credentials
const isCredentialSet = triggerCredentials?.startsWith(CREDENTIAL_SET_PREFIX)
const credentialSetId = isCredentialSet
? triggerCredentials!.slice(CREDENTIAL_SET_PREFIX.length)
: undefined
const credentialId = isCredentialSet ? undefined : triggerCredentials
const providerConfig = {
...(typeof triggerConfig === 'object' ? triggerConfig : {}),
...(triggerCredentials ? { credentialId: triggerCredentials } : {}),
...(credentialId ? { credentialId } : {}),
...(credentialSetId ? { credentialSetId } : {}),
...(triggerId ? { triggerId } : {}),
}
@@ -347,6 +357,54 @@ async function upsertWebhookRecord(
webhookId: string,
metadata: WebhookMetadata
): Promise<void> {
const providerConfig = metadata.providerConfig as Record<string, unknown>
const credentialSetId = providerConfig?.credentialSetId as string | undefined
// For credential sets, delegate to the sync function which handles fan-out
if (credentialSetId && metadata.provider) {
const { syncWebhooksForCredentialSet } = await import('@/lib/webhooks/utils.server')
const { getProviderIdFromServiceId } = await import('@/lib/oauth')
const oauthProviderId = getProviderIdFromServiceId(metadata.provider)
const requestId = crypto.randomUUID().slice(0, 8)
// Extract base config (without credential-specific fields)
const {
credentialId: _cId,
credentialSetId: _csId,
userId: _uId,
...baseConfig
} = providerConfig
try {
await syncWebhooksForCredentialSet({
workflowId,
blockId: block.id,
provider: metadata.provider,
basePath: metadata.triggerPath,
credentialSetId,
oauthProviderId,
providerConfig: baseConfig as Record<string, any>,
requestId,
})
logger.info('Synced credential set webhooks during workflow save', {
workflowId,
blockId: block.id,
credentialSetId,
})
} catch (error) {
logger.error('Failed to sync credential set webhooks during workflow save', {
workflowId,
blockId: block.id,
credentialSetId,
error,
})
}
return
}
// For individual credentials, use the existing single webhook logic
const [existing] = await db.select().from(webhook).where(eq(webhook.id, webhookId)).limit(1)
if (existing) {
@@ -381,6 +439,7 @@ async function upsertWebhookRecord(
path: metadata.triggerPath,
provider: metadata.provider,
providerConfig: metadata.providerConfig,
credentialSetId: null,
isActive: true,
createdAt: new Date(),
updatedAt: new Date(),
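buildWebhookMetadata above distinguishes credential sets from individual credentials by the 'credentialSet:' prefix; the parsing it performs, shown here with a placeholder id:

// Mirrors the prefix handling in buildWebhookMetadata; the id below is a placeholder.
const CREDENTIAL_SET_PREFIX = 'credentialSet:'
const triggerCredentials = 'credentialSet:cs_abc123'

const isCredentialSet = triggerCredentials.startsWith(CREDENTIAL_SET_PREFIX)
const credentialSetId = isCredentialSet
  ? triggerCredentials.slice(CREDENTIAL_SET_PREFIX.length) // 'cs_abc123'
  : undefined
const credentialId = isCredentialSet ? undefined : triggerCredentials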

View File

@@ -119,12 +119,12 @@ export async function POST(req: NextRequest) {
logger.info(`[${requestId}] Creating workflow ${workflowId} for user ${session.user.id}`)
import('@/lib/core/telemetry')
.then(({ trackPlatformEvent }) => {
trackPlatformEvent('platform.workflow.created', {
'workflow.id': workflowId,
'workflow.name': name,
'workflow.has_workspace': !!workspaceId,
'workflow.has_folder': !!folderId,
.then(({ PlatformEvents }) => {
PlatformEvents.workflowCreated({
workflowId,
name,
workspaceId: workspaceId || undefined,
folderId: folderId || undefined,
})
})
.catch(() => {

View File

@@ -7,6 +7,7 @@ import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { createApiKey, getApiKeyDisplayFormat } from '@/lib/api-key/auth'
import { getSession } from '@/lib/auth'
import { PlatformEvents } from '@/lib/core/telemetry'
import { generateRequestId } from '@/lib/core/utils/request'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
@@ -147,6 +148,15 @@ export async function POST(request: NextRequest, { params }: { params: Promise<{
createdAt: apiKey.createdAt,
})
try {
PlatformEvents.apiKeyGenerated({
userId: userId,
keyName: name,
})
} catch {
// Telemetry should not fail the operation
}
logger.info(`[${requestId}] Created workspace API key: ${name} in workspace ${workspaceId}`)
return NextResponse.json({
@@ -198,6 +208,17 @@ export async function DELETE(
)
)
try {
for (const keyId of keys) {
PlatformEvents.apiKeyRevoked({
userId: userId,
keyId: keyId,
})
}
} catch {
// Telemetry should not fail the operation
}
logger.info(
`[${requestId}] Deleted ${deletedCount} workspace API keys from workspace ${workspaceId}`
)

View File

@@ -6,6 +6,8 @@ import { nanoid } from 'nanoid'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { isEnterpriseOrgAdminOrOwner } from '@/lib/billing/core/subscription'
import { isHosted } from '@/lib/core/config/feature-flags'
import { decryptSecret, encryptSecret } from '@/lib/core/security/encryption'
import { generateRequestId } from '@/lib/core/utils/request'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
@@ -56,6 +58,15 @@ export async function GET(request: NextRequest, { params }: { params: Promise<{
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
let byokEnabled = true
if (isHosted) {
byokEnabled = await isEnterpriseOrgAdminOrOwner(userId)
}
if (!byokEnabled) {
return NextResponse.json({ keys: [], byokEnabled: false })
}
const byokKeys = await db
.select({
id: workspaceBYOKKeys.id,
@@ -97,7 +108,7 @@ export async function GET(request: NextRequest, { params }: { params: Promise<{
})
)
return NextResponse.json({ keys: formattedKeys })
return NextResponse.json({ keys: formattedKeys, byokEnabled: true })
} catch (error: unknown) {
logger.error(`[${requestId}] BYOK keys GET error`, error)
return NextResponse.json(
@@ -120,6 +131,20 @@ export async function POST(request: NextRequest, { params }: { params: Promise<{
const userId = session.user.id
if (isHosted) {
const canManageBYOK = await isEnterpriseOrgAdminOrOwner(userId)
if (!canManageBYOK) {
logger.warn(`[${requestId}] User not authorized to manage BYOK keys`, { userId })
return NextResponse.json(
{
error:
'BYOK is an Enterprise-only feature. Only organization admins and owners can manage API keys.',
},
{ status: 403 }
)
}
}
const permission = await getUserEntityPermissions(userId, 'workspace', workspaceId)
if (permission !== 'admin') {
return NextResponse.json(
@@ -220,6 +245,20 @@ export async function DELETE(
const userId = session.user.id
if (isHosted) {
const canManageBYOK = await isEnterpriseOrgAdminOrOwner(userId)
if (!canManageBYOK) {
logger.warn(`[${requestId}] User not authorized to manage BYOK keys`, { userId })
return NextResponse.json(
{
error:
'BYOK is an Enterprise-only feature. Only organization admins and owners can manage API keys.',
},
{ status: 403 }
)
}
}
const permission = await getUserEntityPermissions(userId, 'workspace', workspaceId)
if (permission !== 'admin') {
return NextResponse.json(

View File

@@ -14,6 +14,7 @@ import { and, eq, inArray } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { WorkspaceInvitationEmail } from '@/components/emails'
import { getSession } from '@/lib/auth'
import { PlatformEvents } from '@/lib/core/telemetry'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { sendEmail } from '@/lib/messaging/email/mailer'
import { getFromEmailAddress } from '@/lib/messaging/email/utils'
@@ -81,7 +82,6 @@ export async function POST(req: NextRequest) {
return NextResponse.json({ error: 'Workspace ID and email are required' }, { status: 400 })
}
// Validate permission type
const validPermissions: PermissionType[] = ['admin', 'write', 'read']
if (!validPermissions.includes(permission)) {
return NextResponse.json(
@@ -90,7 +90,6 @@ export async function POST(req: NextRequest) {
)
}
// Check if user has admin permissions for this workspace
const userPermission = await db
.select()
.from(permissions)
@@ -111,7 +110,6 @@ export async function POST(req: NextRequest) {
)
}
// Get the workspace details for the email
const workspaceDetails = await db
.select()
.from(workspace)
@@ -122,8 +120,6 @@ export async function POST(req: NextRequest) {
return NextResponse.json({ error: 'Workspace not found' }, { status: 404 })
}
// Check if the user is already a member
// First find if a user with this email exists
const existingUser = await db
.select()
.from(user)
@@ -131,7 +127,6 @@ export async function POST(req: NextRequest) {
.then((rows) => rows[0])
if (existingUser) {
// Check if the user already has permissions for this workspace
const existingPermission = await db
.select()
.from(permissions)
@@ -155,7 +150,6 @@ export async function POST(req: NextRequest) {
}
}
// Check if there's already a pending invitation
const existingInvitation = await db
.select()
.from(workspaceInvitation)
@@ -178,12 +172,10 @@ export async function POST(req: NextRequest) {
)
}
// Generate a unique token and set expiry date (1 week from now)
const token = randomUUID()
const expiresAt = new Date()
expiresAt.setDate(expiresAt.getDate() + 7) // 7 days expiry
// Create the invitation
const invitationData = {
id: randomUUID(),
workspaceId,
@@ -198,10 +190,19 @@ export async function POST(req: NextRequest) {
updatedAt: new Date(),
}
// Create invitation
await db.insert(workspaceInvitation).values(invitationData)
// Send the invitation email
try {
PlatformEvents.workspaceMemberInvited({
workspaceId,
invitedBy: session.user.id,
inviteeEmail: email,
role: permission,
})
} catch {
// Telemetry should not fail the operation
}
await sendInvitationEmail({
to: email,
inviterName: session.user.name || session.user.email || 'A user',
@@ -217,7 +218,6 @@ export async function POST(req: NextRequest) {
}
}
// Helper function to send invitation email using the Resend API
async function sendInvitationEmail({
to,
inviterName,
@@ -233,7 +233,6 @@ async function sendInvitationEmail({
}) {
try {
const baseUrl = getBaseUrl()
// Use invitation ID in path, token in query parameter for security
const invitationLink = `${baseUrl}/invite/${invitationId}?token=${token}`
const emailHtml = await render(
@@ -263,6 +262,5 @@ async function sendInvitationEmail({
}
} catch (error) {
logger.error('Error sending invitation email:', error)
// Continue even if email fails - the invitation is still created
}
}

View File

@@ -5,6 +5,7 @@ import { and, desc, eq, isNull } from 'drizzle-orm'
import { NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { PlatformEvents } from '@/lib/core/telemetry'
import { buildDefaultWorkflowArtifacts } from '@/lib/workflows/defaults'
import { saveWorkflowToNormalizedTables } from '@/lib/workflows/persistence/utils'
@@ -22,7 +23,6 @@ export async function GET() {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
// Get all workspaces where the user has permissions
const userWorkspaces = await db
.select({
workspace: workspace,
@@ -34,19 +34,15 @@ export async function GET() {
.orderBy(desc(workspace.createdAt))
if (userWorkspaces.length === 0) {
// Create a default workspace for the user
const defaultWorkspace = await createDefaultWorkspace(session.user.id, session.user.name)
// Migrate existing workflows to the default workspace
await migrateExistingWorkflows(session.user.id, defaultWorkspace.id)
return NextResponse.json({ workspaces: [defaultWorkspace] })
}
// If user has workspaces but might have orphaned workflows, migrate them
await ensureWorkflowsHaveWorkspace(session.user.id, userWorkspaces[0].workspace.id)
// Format the response with permission information
const workspacesWithPermissions = userWorkspaces.map(
({ workspace: workspaceDetails, permissionType }) => ({
...workspaceDetails,
@@ -78,24 +74,19 @@ export async function POST(req: Request) {
}
}
// Helper function to create a default workspace
async function createDefaultWorkspace(userId: string, userName?: string | null) {
// Extract first name only by splitting on spaces and taking the first part
const firstName = userName?.split(' ')[0] || null
const workspaceName = firstName ? `${firstName}'s Workspace` : 'My Workspace'
return createWorkspace(userId, workspaceName)
}
// Helper function to create a workspace
async function createWorkspace(userId: string, name: string) {
const workspaceId = crypto.randomUUID()
const workflowId = crypto.randomUUID()
const now = new Date()
// Create the workspace and initial workflow in a transaction
try {
await db.transaction(async (tx) => {
// Create the workspace
await tx.insert(workspace).values({
id: workspaceId,
name,
@@ -135,8 +126,6 @@ async function createWorkspace(userId: string, name: string) {
variables: {},
})
// No blocks are inserted - empty canvas
logger.info(
`Created workspace ${workspaceId} with initial workflow ${workflowId} for user ${userId}`
)
@@ -153,7 +142,16 @@ async function createWorkspace(userId: string, name: string) {
throw error
}
// Return the workspace data directly instead of querying again
try {
PlatformEvents.workspaceCreated({
workspaceId,
userId,
name,
})
} catch {
// Telemetry should not fail the operation
}
return {
id: workspaceId,
name,
@@ -166,9 +164,7 @@ async function createWorkspace(userId: string, name: string) {
}
}
// Helper function to migrate existing workflows to a workspace
async function migrateExistingWorkflows(userId: string, workspaceId: string) {
// Find all workflows that have no workspace ID
const orphanedWorkflows = await db
.select({ id: workflow.id })
.from(workflow)
@@ -182,7 +178,6 @@ async function migrateExistingWorkflows(userId: string, workspaceId: string) {
`Migrating ${orphanedWorkflows.length} workflows to workspace ${workspaceId} for user ${userId}`
)
// Bulk update all orphaned workflows at once
await db
.update(workflow)
.set({
@@ -192,16 +187,13 @@ async function migrateExistingWorkflows(userId: string, workspaceId: string) {
.where(and(eq(workflow.userId, userId), isNull(workflow.workspaceId)))
}
// Helper function to ensure all workflows have a workspace
async function ensureWorkflowsHaveWorkspace(userId: string, defaultWorkspaceId: string) {
// First check if there are any orphaned workflows
const orphanedWorkflows = await db
.select()
.from(workflow)
.where(and(eq(workflow.userId, userId), isNull(workflow.workspaceId)))
if (orphanedWorkflows.length > 0) {
// Directly update any workflows that don't have a workspace ID in a single query
await db
.update(workflow)
.set({

View File

@@ -175,7 +175,6 @@ export default function ChatClient({ identifier }: { identifier: string }) {
const distanceFromBottom = scrollHeight - scrollTop - clientHeight
setShowScrollButton(distanceFromBottom > 100)
// Track if user is manually scrolling during streaming
if (isStreamingResponse && !isUserScrollingRef.current) {
setUserHasScrolled(true)
}
@@ -191,13 +190,10 @@ export default function ChatClient({ identifier }: { identifier: string }) {
return () => container.removeEventListener('scroll', handleScroll)
}, [handleScroll])
// Reset user scroll tracking when streaming starts
useEffect(() => {
if (isStreamingResponse) {
// Reset userHasScrolled when streaming starts
setUserHasScrolled(false)
// Give a small delay to distinguish between programmatic scroll and user scroll
isUserScrollingRef.current = true
setTimeout(() => {
isUserScrollingRef.current = false
@@ -215,7 +211,6 @@ export default function ChatClient({ identifier }: { identifier: string }) {
})
if (!response.ok) {
// Check if auth is required
if (response.status === 401) {
const errorData = await response.json()
@@ -236,7 +231,6 @@ export default function ChatClient({ identifier }: { identifier: string }) {
throw new Error(`Failed to load chat configuration: ${response.status}`)
}
// Reset auth required state when authentication is successful
setAuthRequired(null)
const data = await response.json()
@@ -260,7 +254,6 @@ export default function ChatClient({ identifier }: { identifier: string }) {
}
}
// Fetch chat config on mount and generate new conversation ID
useEffect(() => {
fetchChatConfig()
setConversationId(uuidv4())
@@ -285,7 +278,6 @@ export default function ChatClient({ identifier }: { identifier: string }) {
}, 800)
}
// Handle sending a message
const handleSendMessage = async (
messageParam?: string,
isVoiceInput = false,
@@ -308,7 +300,6 @@ export default function ChatClient({ identifier }: { identifier: string }) {
filesCount: files?.length,
})
// Reset userHasScrolled when sending a new message
setUserHasScrolled(false)
const userMessage: ChatMessage = {
@@ -325,24 +316,20 @@ export default function ChatClient({ identifier }: { identifier: string }) {
})),
}
// Add the user's message to the chat
setMessages((prev) => [...prev, userMessage])
setInputValue('')
setIsLoading(true)
// Scroll to show only the user's message and loading indicator
setTimeout(() => {
scrollToMessage(userMessage.id, true)
}, 100)
// Create abort controller for request cancellation
const abortController = new AbortController()
const timeoutId = setTimeout(() => {
abortController.abort()
}, CHAT_REQUEST_TIMEOUT_MS)
try {
// Send structured payload to maintain chat context
const payload: any = {
input:
typeof userMessage.content === 'string'
@@ -351,7 +338,6 @@ export default function ChatClient({ identifier }: { identifier: string }) {
conversationId,
}
// Add files if present (convert to base64 for JSON transmission)
if (files && files.length > 0) {
payload.files = await Promise.all(
files.map(async (file) => ({
@@ -379,7 +365,6 @@ export default function ChatClient({ identifier }: { identifier: string }) {
signal: abortController.signal,
})
// Clear timeout since request succeeded
clearTimeout(timeoutId)
if (!response.ok) {
@@ -392,7 +377,6 @@ export default function ChatClient({ identifier }: { identifier: string }) {
throw new Error('Response body is missing')
}
// Use the streaming hook with audio support
const shouldPlayAudio = isVoiceInput || isVoiceFirstMode
const audioHandler = shouldPlayAudio
? createAudioStreamHandler(
@@ -421,7 +405,6 @@ export default function ChatClient({ identifier }: { identifier: string }) {
}
)
} catch (error: any) {
// Clear timeout in case of error
clearTimeout(timeoutId)
if (error.name === 'AbortError') {
@@ -442,7 +425,6 @@ export default function ChatClient({ identifier }: { identifier: string }) {
}
}
// Stop audio when component unmounts or when streaming is stopped
useEffect(() => {
return () => {
stopAudio()
@@ -452,28 +434,23 @@ export default function ChatClient({ identifier }: { identifier: string }) {
}
}, [stopAudio])
// Voice interruption - stop audio when user starts speaking
const handleVoiceInterruption = useCallback(() => {
stopAudio()
// Stop any ongoing streaming response
if (isStreamingResponse) {
stopStreaming(setMessages)
}
}, [isStreamingResponse, stopStreaming, setMessages, stopAudio])
// Handle voice mode activation
const handleVoiceStart = useCallback(() => {
setIsVoiceFirstMode(true)
}, [])
// Handle exiting voice mode
const handleExitVoiceMode = useCallback(() => {
setIsVoiceFirstMode(false)
stopAudio() // Stop any playing audio when exiting
stopAudio()
}, [stopAudio])
// Handle voice transcript from voice-first interface
const handleVoiceTranscript = useCallback(
(transcript: string) => {
logger.info('Received voice transcript:', transcript)
@@ -482,14 +459,11 @@ export default function ChatClient({ identifier }: { identifier: string }) {
[handleSendMessage]
)
// If error, show error message using the extracted component
if (error) {
return <ChatErrorState error={error} starCount={starCount} />
}
// If authentication is required, use the extracted components
if (authRequired) {
// Get title and description from the URL params or use defaults
const title = new URLSearchParams(window.location.search).get('title') || 'chat'
const primaryColor =
new URLSearchParams(window.location.search).get('color') || 'var(--brand-primary-hover-hex)'
@@ -526,12 +500,10 @@ export default function ChatClient({ identifier }: { identifier: string }) {
}
}
// Loading state while fetching config using the extracted component
if (!chatConfig) {
return <ChatLoadingState />
}
// Voice-first mode interface
if (isVoiceFirstMode) {
return (
<VoiceInterface
@@ -551,7 +523,6 @@ export default function ChatClient({ identifier }: { identifier: string }) {
)
}
// Standard text-based chat interface
return (
<div className='fixed inset-0 z-[100] flex flex-col bg-white text-foreground'>
{/* Header component */}
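The hunks above trim comments from `handleSendMessage`, which pairs an `AbortController` with a `setTimeout` so a hung chat request gets cancelled instead of leaving the UI stuck in a loading state. A minimal, self-contained sketch of that pattern; the timeout value, the `/api/chat` endpoint, and the `fetchWithTimeout` helper name are illustrative rather than taken from this diff:

```typescript
// Sketch of the abort-on-timeout pattern used in handleSendMessage above.
// The timeout value, '/api/chat' endpoint, and fetchWithTimeout name are illustrative.
const CHAT_REQUEST_TIMEOUT_MS = 120_000

async function fetchWithTimeout(url: string, init: RequestInit = {}): Promise<Response> {
  const abortController = new AbortController()
  const timeoutId = setTimeout(() => abortController.abort(), CHAT_REQUEST_TIMEOUT_MS)
  try {
    // The signal cancels the in-flight request if the timer fires first.
    return await fetch(url, { ...init, signal: abortController.signal })
  } finally {
    // Always clear the timer, whether the request resolved, failed, or was aborted.
    clearTimeout(timeoutId)
  }
}

// Callers can tell a timeout apart from other failures by the error name,
// mirroring the AbortError check in the component's catch block.
fetchWithTimeout('/api/chat', { method: 'POST', body: JSON.stringify({ input: 'hi' }) })
  .then((res) => console.log('status', res.status))
  .catch((err) => {
    if (err.name === 'AbortError') console.log('request timed out')
    else console.error(err)
  })
```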

View File

@@ -0,0 +1,269 @@
'use client'
import { useCallback, useEffect, useState } from 'react'
import { Mail } from 'lucide-react'
import { useParams, useRouter } from 'next/navigation'
import { GmailIcon, OutlookIcon } from '@/components/icons'
import { client, useSession } from '@/lib/auth/auth-client'
import { getProviderDisplayName, isPollingProvider } from '@/lib/credential-sets/providers'
import { InviteLayout, InviteStatusCard } from '@/app/invite/components'
interface InvitationInfo {
credentialSetName: string
organizationName: string
providerId: string | null
email: string | null
}
type AcceptedState = 'connecting' | 'already-connected'
export default function CredentialAccountInvitePage() {
const params = useParams()
const router = useRouter()
const token = params.token as string
const { data: session, isPending: sessionLoading } = useSession()
const [invitation, setInvitation] = useState<InvitationInfo | null>(null)
const [loading, setLoading] = useState(true)
const [error, setError] = useState<string | null>(null)
const [accepting, setAccepting] = useState(false)
const [acceptedState, setAcceptedState] = useState<AcceptedState | null>(null)
useEffect(() => {
async function fetchInvitation() {
try {
const res = await fetch(`/api/credential-sets/invite/${token}`)
if (!res.ok) {
const data = await res.json()
setError(data.error || 'Failed to load invitation')
return
}
const data = await res.json()
setInvitation(data.invitation)
} catch {
setError('Failed to load invitation')
} finally {
setLoading(false)
}
}
fetchInvitation()
}, [token])
const handleAccept = useCallback(async () => {
if (!session?.user?.id) {
// Include invite_flow=true so the login page preserves callbackUrl when linking to signup
const callbackUrl = encodeURIComponent(`/credential-account/${token}`)
router.push(`/login?invite_flow=true&callbackUrl=${callbackUrl}`)
return
}
setAccepting(true)
try {
const res = await fetch(`/api/credential-sets/invite/${token}`, {
method: 'POST',
})
if (!res.ok) {
const data = await res.json()
setError(data.error || 'Failed to accept invitation')
return
}
const data = await res.json()
const credentialSetProviderId = data.providerId || invitation?.providerId
// Check if user already has this provider connected
let isAlreadyConnected = false
if (credentialSetProviderId && isPollingProvider(credentialSetProviderId)) {
try {
const connectionsRes = await fetch('/api/auth/oauth/connections')
if (connectionsRes.ok) {
const connectionsData = await connectionsRes.json()
const connections = connectionsData.connections || []
isAlreadyConnected = connections.some(
(conn: { provider: string; accounts?: { id: string }[] }) =>
conn.provider === credentialSetProviderId &&
conn.accounts &&
conn.accounts.length > 0
)
}
} catch {
// If we can't check connections, proceed with OAuth flow
}
}
if (isAlreadyConnected) {
// Already connected - redirect to workspace
setAcceptedState('already-connected')
setTimeout(() => {
router.push('/workspace')
}, 2000)
} else if (credentialSetProviderId && isPollingProvider(credentialSetProviderId)) {
// Not connected - start OAuth flow
setAcceptedState('connecting')
// Small delay to show success message before redirect
setTimeout(async () => {
try {
await client.oauth2.link({
providerId: credentialSetProviderId,
callbackURL: `${window.location.origin}/workspace`,
})
} catch (oauthError) {
                // The OAuth redirect will happen; this catch handles any pre-redirect errors
console.error('OAuth initiation error:', oauthError)
// If OAuth fails, redirect to workspace where they can connect manually
router.push('/workspace')
}
}, 1500)
} else {
// No provider specified - just redirect to workspace
router.push('/workspace')
}
} catch {
setError('Failed to accept invitation')
} finally {
setAccepting(false)
}
}, [session?.user?.id, token, router, invitation?.providerId])
const providerName = invitation?.providerId
? getProviderDisplayName(invitation.providerId)
: 'email'
const ProviderIcon =
invitation?.providerId === 'outlook'
? OutlookIcon
: invitation?.providerId === 'google-email'
? GmailIcon
: Mail
const providerWithIcon = (
<span className='inline-flex items-baseline gap-1'>
<ProviderIcon className='inline-block h-4 w-4 translate-y-[2px]' />
{providerName}
</span>
)
const getCallbackUrl = () => `/credential-account/${token}`
if (loading || sessionLoading) {
return (
<InviteLayout>
<InviteStatusCard type='loading' title='' description='Loading invitation...' />
</InviteLayout>
)
}
if (error) {
return (
<InviteLayout>
<InviteStatusCard
type='error'
title='Unable to load invitation'
description={error}
icon='error'
actions={[
{
label: 'Return to Home',
onClick: () => router.push('/'),
},
]}
/>
</InviteLayout>
)
}
if (acceptedState === 'already-connected') {
return (
<InviteLayout>
<InviteStatusCard
type='success'
title="You're all set!"
description={`You've joined ${invitation?.credentialSetName}. Your ${providerName} account is already connected. Redirecting to workspace...`}
icon='success'
/>
</InviteLayout>
)
}
if (acceptedState === 'connecting') {
return (
<InviteLayout>
<InviteStatusCard
type='loading'
title={`Connecting to ${providerName}...`}
description={`You've joined ${invitation?.credentialSetName}. You'll be redirected to connect your ${providerName} account.`}
/>
</InviteLayout>
)
}
// Not logged in
if (!session?.user) {
const callbackUrl = encodeURIComponent(getCallbackUrl())
return (
<InviteLayout>
<InviteStatusCard
type='login'
title='Join Email Polling Group'
description={`You've been invited to join ${invitation?.credentialSetName} by ${invitation?.organizationName}. Sign in or create an account to accept this invitation.`}
icon='mail'
actions={[
{
label: 'Sign in',
onClick: () => router.push(`/login?callbackUrl=${callbackUrl}&invite_flow=true`),
},
{
label: 'Create an account',
onClick: () =>
router.push(`/signup?callbackUrl=${callbackUrl}&invite_flow=true&new=true`),
variant: 'outline' as const,
},
{
label: 'Return to Home',
onClick: () => router.push('/'),
variant: 'ghost' as const,
},
]}
/>
</InviteLayout>
)
}
// Logged in - show invitation
return (
<InviteLayout>
<InviteStatusCard
type='invitation'
title='Join Email Polling Group'
description={
<>
You've been invited to join {invitation?.credentialSetName} by{' '}
{invitation?.organizationName}.
{invitation?.providerId && (
<> You'll be asked to connect your {providerWithIcon} account after accepting.</>
)}
</>
}
icon='mail'
actions={[
{
label: `Accept & Connect ${providerName}`,
onClick: handleAccept,
disabled: accepting,
loading: accepting,
},
{
label: 'Return to Home',
onClick: () => router.push('/'),
variant: 'ghost' as const,
},
]}
/>
</InviteLayout>
)
}
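This page leans on `isPollingProvider` and `getProviderDisplayName` from `@/lib/credential-sets/providers`, which are not part of this diff. A plausible sketch of their shape, inferred purely from how they are used above; only the `'google-email'` and `'outlook'` ids are confirmed by the icon logic, and the display-name strings are assumptions:

```typescript
// Hypothetical sketch of '@/lib/credential-sets/providers', inferred from usage on this page.
// Only the 'google-email' and 'outlook' ids appear in the diff; everything else is assumed.
const POLLING_PROVIDERS = ['google-email', 'outlook'] as const

export type PollingProviderId = (typeof POLLING_PROVIDERS)[number]

export function isPollingProvider(providerId: string): providerId is PollingProviderId {
  // True only for providers whose inboxes can be polled by a workflow trigger.
  return (POLLING_PROVIDERS as readonly string[]).includes(providerId)
}

export function getProviderDisplayName(providerId: string): string {
  // Human-readable label for the invite UI; falls back to the raw provider id.
  switch (providerId) {
    case 'google-email':
      return 'Gmail'
    case 'outlook':
      return 'Outlook'
    default:
      return providerId
  }
}
```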

View File

@@ -0,0 +1,175 @@
import { getBaseUrl } from '@/lib/core/utils/urls'
export async function GET() {
const baseUrl = getBaseUrl()
const llmsFullContent = `# Sim - AI Agent Workflow Builder
> Sim is an open-source AI agent workflow builder used by 60,000+ developers, from startups to Fortune 500 companies. Build and deploy agentic workflows with a visual drag-and-drop canvas. SOC2 and HIPAA compliant.
## Overview
Sim provides a visual interface for building AI agent workflows. Instead of writing code, users drag and drop blocks onto a canvas and connect them to create complex AI automations. Each block represents a step in the workflow - an LLM call, a tool invocation, an API request, or a code execution.
## Product Details
- **Product Name**: Sim
- **Category**: AI Development Tools / Workflow Automation
- **Deployment**: Cloud (SaaS) and Self-hosted options
- **Pricing**: Free tier, Pro ($20/month), Team ($40/month), Enterprise (custom)
- **Compliance**: SOC2 Type II, HIPAA compliant
## Core Concepts
### Workspace
A workspace is the top-level container in Sim. It holds workflows, data sources, credentials, and execution history. Users can create multiple workspaces for different projects or teams.
### Workflow
A workflow is a directed graph of blocks that defines an agentic process. Workflows can be triggered manually, on a schedule, or via webhooks. Each workflow has a unique ID and can be versioned.
### Block
A block is an individual step in a workflow. Types include:
- **Agent Block**: Executes an LLM call with system prompts and tools
- **Function Block**: Runs custom JavaScript/TypeScript code
- **API Block**: Makes HTTP requests to external services
- **Condition Block**: Branches workflow based on conditions
- **Loop Block**: Iterates over arrays or until conditions are met
- **Router Block**: Routes to different paths based on LLM classification
### Trigger
A trigger initiates workflow execution. Types include:
- **Manual**: User clicks "Run" button
- **Schedule**: Cron-based scheduling (e.g., every hour, daily at 9am)
- **Webhook**: HTTP endpoint that triggers on incoming requests
- **Event**: Triggered by external events (email received, Slack message, etc.)
### Execution
An execution is a single run of a workflow. It includes:
- Input parameters
- Block-by-block execution logs
- Output data
- Token usage and cost tracking
- Duration and performance metrics
## Capabilities
### LLM Orchestration
Sim supports all major LLM providers:
- OpenAI (GPT-5.2, GPT-5.1, GPT-5, GPT-4o, GPT-4.1)
- Anthropic (Claude Opus 4.5, Claude Opus 4.1, Claude Sonnet 4.5, Claude Haiku 4.5)
- Google (Gemini Pro 3, Gemini Pro 3 Preview, Gemini 2.5 Pro, Gemini 2.5 Flash)
- Mistral (Mistral Large, Mistral Medium)
- xAI (Grok)
- Perplexity
- Ollama or VLLM (self-hosted open-source models)
- Azure OpenAI
- Amazon Bedrock
### Integrations
100+ pre-built integrations including:
- **Communication**: Slack, Discord, Email (Gmail, Outlook), SMS (Twilio)
- **Productivity**: Notion, Airtable, Google Sheets, Google Docs
- **Development**: GitHub, GitLab, Jira, Linear
- **Data**: PostgreSQL, MySQL, MongoDB, Supabase, Pinecone
- **Storage**: AWS S3, Google Cloud Storage, Dropbox
- **CRM**: Salesforce, HubSpot, Pipedrive
### RAG (Retrieval-Augmented Generation)
Built-in support for:
- Document ingestion (PDF, DOCX, TXT, Markdown)
- Vector database integration (Pinecone, Weaviate, Qdrant)
- Semantic search and retrieval
- Chunking strategies (fixed size, semantic, recursive)
### Code Execution
- Sandboxed JavaScript/TypeScript execution
- Access to npm packages
- Persistent state across executions
- Error handling and retry logic
## Use Cases
### Customer Support Automation
- Classify incoming tickets by urgency and topic
- Generate draft responses using RAG over knowledge base
- Route to appropriate team members
- Auto-close resolved tickets
### Content Generation Pipeline
- Research topics using web search tools
- Generate outlines and drafts with LLMs
- Review and edit with human-in-the-loop
- Publish to CMS platforms
### Data Processing Workflows
- Extract data from documents (invoices, receipts, forms)
- Transform and validate data
- Load into databases or spreadsheets
- Generate reports and summaries
### Sales and Marketing Automation
- Enrich leads with company data
- Score leads based on fit criteria
- Generate personalized outreach emails
- Sync with CRM systems
## Technical Architecture
### Frontend
- Next.js 15 with App Router
- React Flow for canvas visualization
- Tailwind CSS for styling
- Zustand for state management
### Backend
- Node.js with TypeScript
- PostgreSQL for persistent storage
- Redis for caching and queues
- S3-compatible storage for files
### Execution Engine
- Isolated execution per workflow run
- Parallel block execution where possible
- Retry logic with exponential backoff
- Real-time streaming of outputs
## Getting Started
1. **Sign Up**: Create a free account at ${baseUrl}
2. **Create Workspace**: Set up your first workspace
3. **Build Workflow**: Drag blocks onto canvas and connect them
4. **Configure Blocks**: Set up LLM providers, tools, and integrations
5. **Test**: Run the workflow manually to verify
6. **Deploy**: Set up triggers for automated execution
## Links
- **Website**: ${baseUrl}
- **Documentation**: https://docs.sim.ai
- **API Reference**: https://docs.sim.ai/api
- **GitHub**: https://github.com/simstudioai/sim
- **Discord**: https://discord.gg/Hr4UWYEcTT
- **X/Twitter**: https://x.com/simdotai
- **LinkedIn**: https://linkedin.com/company/simstudioai
## Support
- **Email**: help@sim.ai
- **Security Issues**: security@sim.ai
- **Documentation**: https://docs.sim.ai
- **Community Discord**: https://discord.gg/Hr4UWYEcTT
## Legal
- **Terms of Service**: ${baseUrl}/terms
- **Privacy Policy**: ${baseUrl}/privacy
- **Security**: ${baseUrl}/.well-known/security.txt
`
return new Response(llmsFullContent, {
headers: {
'Content-Type': 'text/markdown; charset=utf-8',
'Cache-Control': 'public, max-age=86400, s-maxage=86400',
},
})
}
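A quick way to sanity-check this handler is to call the exported `GET` directly and inspect the response it builds. The `./route` import path below is hypothetical, since the file's location is not shown in this diff; the header values and body prefix are the ones set above:

```typescript
// Sketch only: the './route' import path is a guess; the logged values mirror the
// headers and body prefix produced by the GET handler above.
import { GET } from './route'

async function checkLlmsFullResponse(): Promise<void> {
  const res = await GET()
  console.log(res.status) // 200
  console.log(res.headers.get('Content-Type')) // 'text/markdown; charset=utf-8'
  console.log(res.headers.get('Cache-Control')) // 'public, max-age=86400, s-maxage=86400'
  const body = await res.text()
  console.log(body.startsWith('# Sim - AI Agent Workflow Builder')) // true
}

checkLlmsFullResponse().catch((err) => console.error(err))
```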

View File

@@ -1,51 +1,69 @@
export async function GET() {
const llmsContent = `# Sim - AI Agent Workflow Builder
Sim is an open-source AI agent workflow builder for production workflows. Developers at trail-blazing startups to Fortune 500 companies deploy agentic workflows on the Sim platform. 60,000+ developers already use Sim to build and ship AI automations with 100+ integrations. Sim is SOC2 and HIPAA compliant and is designed for secure, enterprise-grade AI automation.
import { getBaseUrl } from '@/lib/core/utils/urls'
Website: https://sim.ai
App: https://sim.ai/workspace
Docs: https://docs.sim.ai
GitHub: https://github.com/simstudioai/sim
Region: global
Primary language: en
export async function GET() {
const baseUrl = getBaseUrl()
const llmsContent = `# Sim
> Sim is an open-source AI agent workflow builder. 60,000+ developers, from startups to Fortune 500 companies, deploy agentic workflows on the Sim platform. SOC2 and HIPAA compliant.
Sim provides a visual drag-and-drop interface for building and deploying AI agent workflows. Connect to 100+ integrations and ship production-ready AI automations.
## Core Pages
- [Homepage](${baseUrl}): Main landing page with product overview and features
- [Templates](${baseUrl}/templates): Pre-built workflow templates to get started quickly
- [Changelog](${baseUrl}/changelog): Product updates and release notes
- [Sim Studio Blog](${baseUrl}/studio): Announcements, insights, and guides for AI workflows
## Documentation
- [Documentation](https://docs.sim.ai): Complete guides and API reference
- [Quickstart](https://docs.sim.ai/quickstart): Get started in 5 minutes
- [API Reference](https://docs.sim.ai/api): REST API documentation
## Key Concepts
- **Workspace**: Container for workflows, data sources, and executions
- **Workflow**: Directed graph of blocks defining an agentic process
- **Block**: Individual step (LLM call, tool call, HTTP request, code execution)
- **Trigger**: Event or schedule that initiates workflow execution
- **Execution**: A single run of a workflow with logs and outputs
## Capabilities
- Visual workflow builder for multi-step AI agents and tools
- Orchestration of LLM calls, tools, webhooks, and external APIs
- Scheduled and event-driven agent executions
- First-class support for retrieval-augmented generation (RAG)
- Multi-tenant, workspace-based access model
## Ideal Use Cases
- Visual workflow builder with drag-and-drop canvas
- Multi-model LLM orchestration (OpenAI, Anthropic, Google, Mistral, xAI)
- Retrieval-augmented generation (RAG) with vector databases
- 100+ integrations (Slack, Gmail, Notion, Airtable, databases)
- Scheduled and webhook-triggered executions
- Real-time collaboration and version control
## Use Cases
- AI agent workflow automation
- RAG agents and retrieval pipelines
- Chatbot and copilot workflows for SaaS products
- Document and email processing workflows
- Customer support, marketing, and growth automations
- Internal operations automations (ops, finance, legal, sales)
- RAG pipelines and document processing
- Chatbot and copilot workflows for SaaS
- Email and customer support automation
- Internal operations (sales, marketing, legal, finance)
## Key Entities
- Workspace: container for workflows, data sources, and executions
- Workflow: directed graph of blocks defining an agentic process
- Block: individual step (LLM call, tool call, HTTP request, code, etc.)
- Schedule: time-based trigger for running workflows
- Execution: a single run of a workflow
## Links
## Getting Started
- Quickstart: https://docs.sim.ai/quickstart
- Product overview: https://docs.sim.ai
- Source code: https://github.com/simstudioai/sim
- [GitHub Repository](https://github.com/simstudioai/sim): Open-source codebase
- [Discord Community](https://discord.gg/Hr4UWYEcTT): Get help and connect with users
- [X/Twitter](https://x.com/simdotai): Product updates and announcements
## Safety & Reliability
- SOC2 and HIPAA aligned security controls
- Audit-friendly execution logs and cost tracking
- Fine-grained control over external tools, APIs, and data sources
## Optional
- [Careers](${baseUrl}/careers): Join the Sim team
- [Terms of Service](${baseUrl}/terms): Legal terms
- [Privacy Policy](${baseUrl}/privacy): Data handling practices
`
return new Response(llmsContent, {
headers: {
'Content-Type': 'text/plain; charset=utf-8',
'Cache-Control': 'public, max-age=86400',
'Content-Type': 'text/markdown; charset=utf-8',
'Cache-Control': 'public, max-age=86400, s-maxage=86400',
},
})
}

View File

@@ -8,7 +8,7 @@ export default function manifest(): MetadataRoute.Manifest {
name: brand.name === 'Sim' ? 'Sim - AI Agent Workflow Builder' : brand.name,
short_name: brand.name,
description:
'Open-source AI agent workflow builder. 30,000+ developers build and deploy agentic workflows on Sim. Visual drag-and-drop interface for creating AI automations. SOC2 and HIPAA compliant.',
'Open-source AI agent workflow builder. 60,000+ developers build and deploy agentic workflows on Sim. Visual drag-and-drop interface for creating AI automations. SOC2 and HIPAA compliant.',
start_url: '/',
scope: '/',
display: 'standalone',

View File

@@ -29,23 +29,23 @@ export const metadata: Metadata = {
locale: 'en_US',
images: [
{
url: '/logo/primary/rounded.png',
width: 512,
height: 512,
url: '/logo/426-240/primary/small.png',
width: 2130,
height: 1200,
alt: 'Sim - AI Agent Workflow Builder',
type: 'image/png',
},
],
},
twitter: {
card: 'summary',
card: 'summary_large_image',
site: '@simdotai',
creator: '@simdotai',
title: 'Sim - AI Agent Workflow Builder | Open Source',
description:
'Open-source platform for agentic workflows. 60,000+ developers. Visual builder. 100+ integrations. SOC2 & HIPAA compliant.',
images: {
url: '/logo/primary/rounded.png',
url: '/logo/426-240/primary/small.png',
alt: 'Sim - AI Agent Workflow Builder',
},
},
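The hunk above only shows the fragments of the Open Graph and Twitter metadata that changed: the wider 2130x1200 card image and the switch to `summary_large_image`. For context, a sketch of how those exact fields sit inside a Next.js `Metadata` export; surrounding fields that are not part of the hunk (title, description, icons, and so on) are omitted:

```typescript
import type { Metadata } from 'next'

// Sketch only: limited to the fields visible in the hunk above.
export const metadata: Metadata = {
  openGraph: {
    locale: 'en_US',
    images: [
      {
        // The wide 2130x1200 card image replaces the old 512x512 rounded logo.
        url: '/logo/426-240/primary/small.png',
        width: 2130,
        height: 1200,
        alt: 'Sim - AI Agent Workflow Builder',
        type: 'image/png',
      },
    ],
  },
  twitter: {
    // 'summary_large_image' tells X/Twitter to render the wide image rather than a thumbnail.
    card: 'summary_large_image',
    site: '@simdotai',
    creator: '@simdotai',
    title: 'Sim - AI Agent Workflow Builder | Open Source',
    description:
      'Open-source platform for agentic workflows. 60,000+ developers. Visual builder. 100+ integrations. SOC2 & HIPAA compliant.',
    images: {
      url: '/logo/426-240/primary/small.png',
      alt: 'Sim - AI Agent Workflow Builder',
    },
  },
}
```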

apps/sim/app/robots.ts (new file, 126 lines)
View File

@@ -0,0 +1,126 @@
import type { MetadataRoute } from 'next'
import { getBaseUrl } from '@/lib/core/utils/urls'
export default function robots(): MetadataRoute.Robots {
const baseUrl = getBaseUrl()
const disallowedPaths = [
'/api/',
'/workspace/',
'/chat/',
'/playground/',
'/resume/',
'/invite/',
'/unsubscribe/',
'/w/',
'/_next/',
'/private/',
]
return {
rules: [
{
userAgent: '*',
allow: '/',
disallow: disallowedPaths,
},
{
userAgent: 'Googlebot',
allow: '/',
disallow: disallowedPaths,
},
{
userAgent: 'Bingbot',
allow: '/',
disallow: disallowedPaths,
},
{
userAgent: 'YandexBot',
allow: '/',
disallow: disallowedPaths,
},
{
userAgent: 'Baiduspider',
allow: '/',
disallow: disallowedPaths,
},
{
userAgent: 'GPTBot',
allow: '/',
disallow: disallowedPaths,
},
{
userAgent: 'ChatGPT-User',
allow: '/',
disallow: disallowedPaths,
},
{
userAgent: 'OAI-SearchBot',
allow: '/',
disallow: disallowedPaths,
},
{
userAgent: 'ClaudeBot',
allow: '/',
disallow: disallowedPaths,
},
{
userAgent: 'Claude-SearchBot',
allow: '/',
disallow: disallowedPaths,
},
{
userAgent: 'Google-Extended',
allow: '/',
disallow: disallowedPaths,
},
{
userAgent: 'PerplexityBot',
allow: '/',
disallow: disallowedPaths,
},
{
userAgent: 'Meta-ExternalAgent',
allow: '/',
disallow: disallowedPaths,
},
{
userAgent: 'FacebookBot',
allow: '/',
disallow: disallowedPaths,
},
{
userAgent: 'Applebot',
allow: '/',
disallow: disallowedPaths,
},
{
userAgent: 'Applebot-Extended',
allow: '/',
disallow: disallowedPaths,
},
{
userAgent: 'Amazonbot',
allow: '/',
disallow: disallowedPaths,
},
{
userAgent: 'Bytespider',
allow: '/',
disallow: disallowedPaths,
},
{
userAgent: 'CCBot',
allow: '/',
disallow: disallowedPaths,
},
{
userAgent: 'cohere-ai',
allow: '/',
disallow: disallowedPaths,
},
],
sitemap: `${baseUrl}/sitemap.xml`,
host: baseUrl,
}
}
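The file spells out an identical allow/disallow rule for each crawler, which keeps the generated robots.txt explicit but verbose. A functionally equivalent sketch that maps over the user-agent list instead; the agents and disallowed paths are taken from the file above, while the mapping approach is just an alternative, not the shipped implementation:

```typescript
import type { MetadataRoute } from 'next'
import { getBaseUrl } from '@/lib/core/utils/urls'

// Alternative sketch: same output as the robots.ts above, with the per-crawler
// rules generated from a list instead of written out one by one.
const BOT_USER_AGENTS = [
  'Googlebot', 'Bingbot', 'YandexBot', 'Baiduspider', 'GPTBot', 'ChatGPT-User',
  'OAI-SearchBot', 'ClaudeBot', 'Claude-SearchBot', 'Google-Extended',
  'PerplexityBot', 'Meta-ExternalAgent', 'FacebookBot', 'Applebot',
  'Applebot-Extended', 'Amazonbot', 'Bytespider', 'CCBot', 'cohere-ai',
]

export default function robots(): MetadataRoute.Robots {
  const baseUrl = getBaseUrl()
  const disallow = [
    '/api/', '/workspace/', '/chat/', '/playground/', '/resume/',
    '/invite/', '/unsubscribe/', '/w/', '/_next/', '/private/',
  ]
  return {
    // One rule per user agent, all sharing the same allow/disallow lists.
    rules: ['*', ...BOT_USER_AGENTS].map((userAgent) => ({
      userAgent,
      allow: '/',
      disallow,
    })),
    sitemap: `${baseUrl}/sitemap.xml`,
    host: baseUrl,
  }
}
```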

View File

@@ -0,0 +1,12 @@
import { getBaseUrl } from '@/lib/core/utils/urls'
export async function GET() {
const baseUrl = getBaseUrl()
return new Response(null, {
status: 301,
headers: {
Location: `${baseUrl}/.well-known/security.txt`,
},
})
}

View File

@@ -11,42 +11,34 @@ export default async function sitemap(): Promise<MetadataRoute.Sitemap> {
{
url: baseUrl,
lastModified: now,
priority: 1.0, // Homepage - highest priority
},
{
url: `${baseUrl}/studio`,
lastModified: now,
priority: 0.9, // Blog index - high value content
},
{
url: `${baseUrl}/studio/tags`,
lastModified: now,
priority: 0.7, // Tags page - discovery/navigation
},
{
url: `${baseUrl}/templates`,
lastModified: now,
priority: 0.8, // Templates - important discovery page
},
{
url: `${baseUrl}/changelog`,
lastModified: now,
priority: 0.8, // Changelog - important for users
},
{
url: `${baseUrl}/careers`,
lastModified: new Date('2024-10-06'),
priority: 0.6, // Careers - important but not core content
},
{
url: `${baseUrl}/terms`,
lastModified: new Date('2024-10-14'),
priority: 0.5, // Terms - utility page
},
{
url: `${baseUrl}/privacy`,
lastModified: new Date('2024-10-14'),
priority: 0.5, // Privacy - utility page
},
]
@@ -54,7 +46,6 @@ export default async function sitemap(): Promise<MetadataRoute.Sitemap> {
const blogPages: MetadataRoute.Sitemap = posts.map((p) => ({
url: p.canonical,
lastModified: new Date(p.updated ?? p.date),
priority: 0.9, // Blog posts - high value content
}))
return [...staticPages, ...blogPages]

View File

@@ -36,7 +36,7 @@ import { useSession } from '@/lib/auth/auth-client'
import { cn } from '@/lib/core/utils/cn'
import { getBaseUrl } from '@/lib/core/utils/urls'
import type { CredentialRequirement } from '@/lib/workflows/credentials/credential-extractor'
import { WorkflowPreview } from '@/app/workspace/[workspaceId]/w/components/workflow-preview/workflow-preview'
import { WorkflowPreview } from '@/app/workspace/[workspaceId]/w/components/preview'
import { getBlock } from '@/blocks/registry'
import { useStarTemplate, useTemplate } from '@/hooks/queries/templates'

View File

@@ -4,7 +4,7 @@ import { Star, User } from 'lucide-react'
import { useParams, useRouter } from 'next/navigation'
import { VerifiedBadge } from '@/components/ui/verified-badge'
import { cn } from '@/lib/core/utils/cn'
import { WorkflowPreview } from '@/app/workspace/[workspaceId]/w/components/workflow-preview/workflow-preview'
import { WorkflowPreview } from '@/app/workspace/[workspaceId]/w/components/preview'
import { getBlock } from '@/blocks/registry'
import { useStarTemplate } from '@/hooks/queries/templates'
import type { WorkflowState } from '@/stores/workflows/workflow/types'

View File

@@ -1,7 +1,7 @@
export { Dashboard } from './dashboard'
export { LogDetails } from './log-details'
export { ExecutionSnapshot } from './log-details/components/execution-snapshot'
export { FileCards } from './log-details/components/file-download'
export { FrozenCanvas } from './log-details/components/frozen-canvas'
export { TraceSpans } from './log-details/components/trace-spans'
export { LogRowContextMenu } from './log-row-context-menu'
export { LogsList } from './logs-list'

View File

@@ -0,0 +1,224 @@
'use client'
import { useEffect, useMemo, useState } from 'react'
import { AlertCircle, Loader2 } from 'lucide-react'
import { Modal, ModalBody, ModalContent, ModalHeader } from '@/components/emcn'
import { redactApiKeys } from '@/lib/core/security/redaction'
import { cn } from '@/lib/core/utils/cn'
import {
BlockDetailsSidebar,
WorkflowPreview,
} from '@/app/workspace/[workspaceId]/w/components/preview'
import { useExecutionSnapshot } from '@/hooks/queries/logs'
import type { WorkflowState } from '@/stores/workflows/workflow/types'
interface TraceSpan {
blockId?: string
input?: unknown
output?: unknown
status?: string
duration?: number
children?: TraceSpan[]
}
interface BlockExecutionData {
input: unknown
output: unknown
status: string
durationMs: number
}
interface MigratedWorkflowState extends WorkflowState {
_migrated: true
_note?: string
}
function isMigratedWorkflowState(state: WorkflowState): state is MigratedWorkflowState {
return (state as MigratedWorkflowState)._migrated === true
}
interface ExecutionSnapshotProps {
executionId: string
traceSpans?: TraceSpan[]
className?: string
height?: string | number
width?: string | number
isModal?: boolean
isOpen?: boolean
onClose?: () => void
}
export function ExecutionSnapshot({
executionId,
traceSpans,
className,
height = '100%',
width = '100%',
isModal = false,
isOpen = false,
onClose = () => {},
}: ExecutionSnapshotProps) {
const { data, isLoading, error } = useExecutionSnapshot(executionId)
const [pinnedBlockId, setPinnedBlockId] = useState<string | null>(null)
const blockExecutions = useMemo(() => {
if (!traceSpans || !Array.isArray(traceSpans)) return {}
const blockExecutionMap: Record<string, BlockExecutionData> = {}
const collectBlockSpans = (spans: TraceSpan[]): TraceSpan[] => {
const blockSpans: TraceSpan[] = []
for (const span of spans) {
if (span.blockId) {
blockSpans.push(span)
}
if (span.children && Array.isArray(span.children)) {
blockSpans.push(...collectBlockSpans(span.children))
}
}
return blockSpans
}
const allBlockSpans = collectBlockSpans(traceSpans)
for (const span of allBlockSpans) {
if (span.blockId && !blockExecutionMap[span.blockId]) {
blockExecutionMap[span.blockId] = {
input: redactApiKeys(span.input || {}),
output: redactApiKeys(span.output || {}),
status: span.status || 'unknown',
durationMs: span.duration || 0,
}
}
}
return blockExecutionMap
}, [traceSpans])
useEffect(() => {
setPinnedBlockId(null)
}, [executionId])
const workflowState = data?.workflowState as WorkflowState | undefined
const renderContent = () => {
if (isLoading) {
return (
<div
className={cn('flex items-center justify-center', className)}
style={{ height, width }}
>
<div className='flex items-center gap-[8px] text-[var(--text-secondary)]'>
<Loader2 className='h-[16px] w-[16px] animate-spin' />
<span className='text-[13px]'>Loading execution snapshot...</span>
</div>
</div>
)
}
if (error) {
return (
<div
className={cn('flex items-center justify-center', className)}
style={{ height, width }}
>
<div className='flex items-center gap-[8px] text-[var(--text-error)]'>
<AlertCircle className='h-[16px] w-[16px]' />
<span className='text-[13px]'>Failed to load execution snapshot: {error.message}</span>
</div>
</div>
)
}
if (!data || !workflowState) {
return (
<div
className={cn('flex items-center justify-center', className)}
style={{ height, width }}
>
<div className='flex items-center gap-[8px] text-[var(--text-secondary)]'>
<Loader2 className='h-[16px] w-[16px] animate-spin' />
<span className='text-[13px]'>Loading execution snapshot...</span>
</div>
</div>
)
}
if (isMigratedWorkflowState(workflowState)) {
return (
<div
className={cn('flex flex-col items-center justify-center gap-[16px] p-[32px]', className)}
style={{ height, width }}
>
<div className='flex items-center gap-[12px] text-[var(--text-warning)]'>
<AlertCircle className='h-[20px] w-[20px]' />
<span className='font-medium text-[15px]'>Logged State Not Found</span>
</div>
<div className='max-w-md text-center text-[13px] text-[var(--text-secondary)]'>
This log was migrated from the old logging system. The workflow state at execution time
is not available.
</div>
<div className='text-[12px] text-[var(--text-tertiary)]'>Note: {workflowState._note}</div>
</div>
)
}
return (
<div
style={{ height, width }}
className={cn(
'flex overflow-hidden rounded-[4px] border border-[var(--border)]',
className
)}
>
<div className='h-full flex-1'>
<WorkflowPreview
workflowState={workflowState}
showSubBlocks={true}
isPannable={true}
defaultPosition={{ x: 0, y: 0 }}
defaultZoom={0.8}
onNodeClick={(blockId) => {
setPinnedBlockId((prev) => (prev === blockId ? null : blockId))
}}
cursorStyle='pointer'
executedBlocks={blockExecutions}
/>
</div>
{pinnedBlockId && workflowState.blocks[pinnedBlockId] && (
<BlockDetailsSidebar
block={workflowState.blocks[pinnedBlockId]}
executionData={blockExecutions[pinnedBlockId]}
allBlockExecutions={blockExecutions}
workflowBlocks={workflowState.blocks}
isExecutionMode
/>
)}
</div>
)
}
if (isModal) {
return (
<Modal
open={isOpen}
onOpenChange={(open) => {
if (!open) {
setPinnedBlockId(null)
onClose()
}
}}
>
<ModalContent size='full' className='flex h-[90vh] flex-col'>
<ModalHeader>Workflow State</ModalHeader>
<ModalBody className='!p-0 min-h-0 flex-1'>{renderContent()}</ModalBody>
</ModalContent>
</Modal>
)
}
return renderContent()
}
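`ExecutionSnapshot` pulls its data through a `useExecutionSnapshot` query hook from `@/hooks/queries/logs`, which is not included in this diff. A plausible sketch of that hook, assuming React Query and the `/api/logs/execution/:id` endpoint that the removed `FrozenCanvas` component further down in this diff fetched directly; the actual hook may differ:

```typescript
// Hypothetical sketch of useExecutionSnapshot from '@/hooks/queries/logs'.
// The endpoint path comes from the removed FrozenCanvas component below; the
// React Query wiring here is an assumption, not the shipped hook.
import { useQuery } from '@tanstack/react-query'
import type { WorkflowState } from '@/stores/workflows/workflow/types'

interface ExecutionSnapshotData {
  executionId: string
  workflowId: string
  workflowState: WorkflowState
}

export function useExecutionSnapshot(executionId: string) {
  return useQuery<ExecutionSnapshotData>({
    queryKey: ['execution-snapshot', executionId],
    queryFn: async () => {
      const response = await fetch(`/api/logs/execution/${executionId}`)
      if (!response.ok) {
        throw new Error(`Failed to fetch execution snapshot: ${response.statusText}`)
      }
      return response.json()
    },
    // Snapshots are immutable once written, so they can be cached aggressively.
    staleTime: Number.POSITIVE_INFINITY,
    enabled: Boolean(executionId),
  })
}
```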

View File

@@ -0,0 +1 @@
export { ExecutionSnapshot } from './execution-snapshot'

View File

@@ -1,657 +0,0 @@
'use client'
import { useEffect, useState } from 'react'
import { createLogger } from '@sim/logger'
import {
AlertCircle,
ChevronDown,
ChevronLeft,
ChevronRight,
ChevronUp,
Clock,
DollarSign,
Hash,
Loader2,
Maximize2,
X,
Zap,
} from 'lucide-react'
import { Badge, Modal, ModalBody, ModalContent, ModalHeader } from '@/components/emcn'
import { Card, CardContent, CardHeader, CardTitle } from '@/components/ui/card'
import { redactApiKeys } from '@/lib/core/security/redaction'
import { cn } from '@/lib/core/utils/cn'
import { WorkflowPreview } from '@/app/workspace/[workspaceId]/w/components/workflow-preview/workflow-preview'
import type { WorkflowState } from '@/stores/workflows/workflow/types'
const logger = createLogger('FrozenCanvas')
function ExpandableDataSection({ title, data }: { title: string; data: any }) {
const [isExpanded, setIsExpanded] = useState(false)
const [isModalOpen, setIsModalOpen] = useState(false)
const jsonString = JSON.stringify(data, null, 2)
const isLargeData = jsonString.length > 500 || jsonString.split('\n').length > 10
return (
<>
<div>
<div className='mb-[6px] flex items-center justify-between'>
<h4 className='font-medium text-[13px] text-[var(--text-primary)]'>{title}</h4>
<div className='flex items-center gap-[4px]'>
{isLargeData && (
<button
onClick={() => setIsModalOpen(true)}
className='rounded-[4px] p-[4px] text-[var(--text-secondary)] transition-colors hover:bg-[var(--surface-3)] hover:text-[var(--text-primary)]'
title='Expand in modal'
type='button'
>
<Maximize2 className='h-[14px] w-[14px]' />
</button>
)}
<button
onClick={() => setIsExpanded(!isExpanded)}
className='rounded-[4px] p-[4px] text-[var(--text-secondary)] transition-colors hover:bg-[var(--surface-3)] hover:text-[var(--text-primary)]'
type='button'
>
{isExpanded ? (
<ChevronUp className='h-[14px] w-[14px]' />
) : (
<ChevronDown className='h-[14px] w-[14px]' />
)}
</button>
</div>
</div>
<div
className={cn(
'overflow-y-auto rounded-[4px] border border-[var(--border)] bg-[var(--surface-3)] p-[12px] font-mono text-[12px] transition-all duration-200',
isExpanded ? 'max-h-96' : 'max-h-32'
)}
>
<pre className='whitespace-pre-wrap break-words text-[var(--text-primary)]'>
{jsonString}
</pre>
</div>
</div>
{isModalOpen && (
<div className='fixed inset-0 z-[200] flex items-center justify-center bg-black/50'>
<div className='mx-[16px] flex h-[80vh] w-full max-w-4xl flex-col overflow-hidden rounded-[8px] border border-[var(--border)] bg-[var(--surface-1)] shadow-lg'>
<div className='flex items-center justify-between border-[var(--border)] border-b p-[16px]'>
<h3 className='font-medium text-[15px] text-[var(--text-primary)]'>{title}</h3>
<button
onClick={() => setIsModalOpen(false)}
className='rounded-[4px] p-[4px] text-[var(--text-secondary)] transition-colors hover:bg-[var(--surface-3)] hover:text-[var(--text-primary)]'
type='button'
>
<X className='h-[16px] w-[16px]' />
</button>
</div>
<div className='flex-1 overflow-auto p-[16px]'>
<pre className='whitespace-pre-wrap break-words font-mono text-[13px] text-[var(--text-primary)]'>
{jsonString}
</pre>
</div>
</div>
</div>
)}
</>
)
}
function formatExecutionData(executionData: any) {
const {
inputData,
outputData,
cost,
tokens,
durationMs,
status,
blockName,
blockType,
errorMessage,
errorStackTrace,
} = executionData
return {
blockName: blockName || 'Unknown Block',
blockType: blockType || 'unknown',
status,
duration: durationMs ? `${durationMs}ms` : 'N/A',
input: redactApiKeys(inputData || {}),
output: redactApiKeys(outputData || {}),
errorMessage,
errorStackTrace,
cost: cost
? {
input: cost.input || 0,
output: cost.output || 0,
total: cost.total || 0,
}
: null,
tokens: tokens
? {
input: tokens.input || tokens.prompt || 0,
output: tokens.output || tokens.completion || 0,
total: tokens.total || 0,
}
: null,
}
}
function getCurrentIterationData(blockExecutionData: any) {
if (blockExecutionData.iterations && Array.isArray(blockExecutionData.iterations)) {
const currentIndex = blockExecutionData.currentIteration ?? 0
return {
executionData: blockExecutionData.iterations[currentIndex],
currentIteration: currentIndex,
totalIterations: blockExecutionData.totalIterations ?? blockExecutionData.iterations.length,
hasMultipleIterations: blockExecutionData.iterations.length > 1,
}
}
return {
executionData: blockExecutionData,
currentIteration: 0,
totalIterations: 1,
hasMultipleIterations: false,
}
}
function PinnedLogs({
executionData,
blockId,
workflowState,
onClose,
}: {
executionData: any | null
blockId: string
workflowState: any
onClose: () => void
}) {
const [currentIterationIndex, setCurrentIterationIndex] = useState(0)
useEffect(() => {
setCurrentIterationIndex(0)
}, [executionData])
if (!executionData) {
const blockInfo = workflowState?.blocks?.[blockId]
const formatted = {
blockName: blockInfo?.name || 'Unknown Block',
blockType: blockInfo?.type || 'unknown',
status: 'not_executed',
}
return (
<Card className='fixed top-[16px] right-[16px] z-[100] max-h-[calc(100vh-8rem)] w-96 overflow-y-auto rounded-[8px] border border-[var(--border)] bg-[var(--surface-1)] shadow-lg'>
<CardHeader className='pb-[12px]'>
<div className='flex items-center justify-between'>
<CardTitle className='flex items-center gap-[8px] text-[15px] text-[var(--text-primary)]'>
<Zap className='h-[16px] w-[16px]' />
{formatted.blockName}
</CardTitle>
<button
onClick={onClose}
className='rounded-[4px] p-[4px] text-[var(--text-secondary)] transition-colors hover:bg-[var(--surface-3)] hover:text-[var(--text-primary)]'
type='button'
>
<X className='h-[16px] w-[16px]' />
</button>
</div>
<div className='flex items-center gap-[8px]'>
<Badge variant='gray-secondary'>{formatted.blockType}</Badge>
<Badge variant='outline'>not executed</Badge>
</div>
</CardHeader>
<CardContent className='space-y-[16px]'>
<div className='rounded-[4px] border border-[var(--border)] bg-[var(--surface-3)] p-[16px] text-center'>
<div className='text-[13px] text-[var(--text-secondary)]'>
This block was not executed because the workflow failed before reaching it.
</div>
</div>
</CardContent>
</Card>
)
}
const iterationInfo = getCurrentIterationData({
...executionData,
currentIteration: currentIterationIndex,
})
const formatted = formatExecutionData(iterationInfo.executionData)
const totalIterations = executionData.iterations?.length || 1
const goToPreviousIteration = () => {
if (currentIterationIndex > 0) {
setCurrentIterationIndex(currentIterationIndex - 1)
}
}
const goToNextIteration = () => {
if (currentIterationIndex < totalIterations - 1) {
setCurrentIterationIndex(currentIterationIndex + 1)
}
}
return (
<Card className='fixed top-[16px] right-[16px] z-[100] max-h-[calc(100vh-8rem)] w-96 overflow-y-auto rounded-[8px] border border-[var(--border)] bg-[var(--surface-1)] shadow-lg'>
<CardHeader className='pb-[12px]'>
<div className='flex items-center justify-between'>
<CardTitle className='flex items-center gap-[8px] text-[15px] text-[var(--text-primary)]'>
<Zap className='h-[16px] w-[16px]' />
{formatted.blockName}
</CardTitle>
<button
onClick={onClose}
className='rounded-[4px] p-[4px] text-[var(--text-secondary)] transition-colors hover:bg-[var(--surface-3)] hover:text-[var(--text-primary)]'
type='button'
>
<X className='h-[16px] w-[16px]' />
</button>
</div>
<div className='flex items-center justify-between'>
<div className='flex items-center gap-[8px]'>
<Badge variant={formatted.status === 'success' ? 'default' : 'red'}>
{formatted.blockType}
</Badge>
<Badge variant='outline'>{formatted.status}</Badge>
</div>
{iterationInfo.hasMultipleIterations && (
<div className='flex items-center gap-[4px]'>
<button
onClick={goToPreviousIteration}
disabled={currentIterationIndex === 0}
className='rounded-[4px] p-[4px] text-[var(--text-secondary)] transition-colors hover:bg-[var(--surface-3)] hover:text-[var(--text-primary)] disabled:cursor-not-allowed disabled:opacity-50'
type='button'
>
<ChevronLeft className='h-[14px] w-[14px]' />
</button>
<span className='px-[8px] text-[12px] text-[var(--text-tertiary)]'>
{iterationInfo.totalIterations !== undefined
? `${currentIterationIndex + 1} / ${iterationInfo.totalIterations}`
: `${currentIterationIndex + 1}`}
</span>
<button
onClick={goToNextIteration}
disabled={currentIterationIndex === totalIterations - 1}
className='rounded-[4px] p-[4px] text-[var(--text-secondary)] transition-colors hover:bg-[var(--surface-3)] hover:text-[var(--text-primary)] disabled:cursor-not-allowed disabled:opacity-50'
type='button'
>
<ChevronRight className='h-[14px] w-[14px]' />
</button>
</div>
)}
</div>
</CardHeader>
<CardContent className='space-y-[16px]'>
<div className='grid grid-cols-2 gap-[12px]'>
<div className='flex items-center gap-[8px]'>
<Clock className='h-[14px] w-[14px] text-[var(--text-secondary)]' />
<span className='text-[13px] text-[var(--text-primary)]'>{formatted.duration}</span>
</div>
{formatted.cost && formatted.cost.total > 0 && (
<div className='flex items-center gap-[8px]'>
<DollarSign className='h-[14px] w-[14px] text-[var(--text-secondary)]' />
<span className='text-[13px] text-[var(--text-primary)]'>
${formatted.cost.total.toFixed(5)}
</span>
</div>
)}
{formatted.tokens && formatted.tokens.total > 0 && (
<div className='flex items-center gap-[8px]'>
<Hash className='h-[14px] w-[14px] text-[var(--text-secondary)]' />
<span className='text-[13px] text-[var(--text-primary)]'>
{formatted.tokens.total} tokens
</span>
</div>
)}
</div>
<ExpandableDataSection title='Input' data={formatted.input} />
<ExpandableDataSection title='Output' data={formatted.output} />
{formatted.cost && formatted.cost.total > 0 && (
<div>
<h4 className='mb-[6px] font-medium text-[13px] text-[var(--text-primary)]'>
Cost Breakdown
</h4>
<div className='space-y-[4px] rounded-[4px] border border-[var(--border)] bg-[var(--surface-3)] p-[12px] text-[13px]'>
<div className='flex justify-between text-[var(--text-primary)]'>
<span>Input:</span>
<span>${formatted.cost.input.toFixed(5)}</span>
</div>
<div className='flex justify-between text-[var(--text-primary)]'>
<span>Output:</span>
<span>${formatted.cost.output.toFixed(5)}</span>
</div>
<div className='flex justify-between border-[var(--border)] border-t pt-[4px] font-medium text-[var(--text-primary)]'>
<span>Total:</span>
<span>${formatted.cost.total.toFixed(5)}</span>
</div>
</div>
</div>
)}
{formatted.tokens && formatted.tokens.total > 0 && (
<div>
<h4 className='mb-[6px] font-medium text-[13px] text-[var(--text-primary)]'>
Token Usage
</h4>
<div className='space-y-[4px] rounded-[4px] border border-[var(--border)] bg-[var(--surface-3)] p-[12px] text-[13px]'>
<div className='flex justify-between text-[var(--text-primary)]'>
<span>Input:</span>
<span>{formatted.tokens.input}</span>
</div>
<div className='flex justify-between text-[var(--text-primary)]'>
<span>Output:</span>
<span>{formatted.tokens.output}</span>
</div>
<div className='flex justify-between border-[var(--border)] border-t pt-[4px] font-medium text-[var(--text-primary)]'>
<span>Total:</span>
<span>{formatted.tokens.total}</span>
</div>
</div>
</div>
)}
</CardContent>
</Card>
)
}
interface FrozenCanvasData {
executionId: string
workflowId: string
workflowState: WorkflowState
executionMetadata: {
trigger: string
startedAt: string
endedAt?: string
totalDurationMs?: number
cost: {
total: number | null
input: number | null
output: number | null
}
totalTokens: number | null
}
}
interface FrozenCanvasProps {
executionId: string
traceSpans?: any[]
className?: string
height?: string | number
width?: string | number
isModal?: boolean
isOpen?: boolean
onClose?: () => void
}
export function FrozenCanvas({
executionId,
traceSpans,
className,
height = '100%',
width = '100%',
isModal = false,
isOpen = false,
onClose,
}: FrozenCanvasProps) {
const [data, setData] = useState<FrozenCanvasData | null>(null)
const [blockExecutions, setBlockExecutions] = useState<Record<string, any>>({})
const [loading, setLoading] = useState(true)
const [error, setError] = useState<string | null>(null)
const [pinnedBlockId, setPinnedBlockId] = useState<string | null>(null)
// Process traceSpans to create blockExecutions map
useEffect(() => {
if (traceSpans && Array.isArray(traceSpans)) {
const blockExecutionMap: Record<string, any> = {}
logger.debug('Processing trace spans for frozen canvas:', { traceSpans })
// Recursively collect all spans with blockId from the trace spans tree
const collectBlockSpans = (spans: any[]): any[] => {
const blockSpans: any[] = []
for (const span of spans) {
// If this span has a blockId, it's a block execution
if (span.blockId) {
blockSpans.push(span)
}
// Recursively check children
if (span.children && Array.isArray(span.children)) {
blockSpans.push(...collectBlockSpans(span.children))
}
}
return blockSpans
}
const allBlockSpans = collectBlockSpans(traceSpans)
logger.debug('Collected all block spans:', allBlockSpans)
// Group spans by blockId
const traceSpansByBlockId = allBlockSpans.reduce((acc: any, span: any) => {
if (span.blockId) {
if (!acc[span.blockId]) {
acc[span.blockId] = []
}
acc[span.blockId].push(span)
}
return acc
}, {})
logger.debug('Grouped trace spans by blockId:', traceSpansByBlockId)
for (const [blockId, spans] of Object.entries(traceSpansByBlockId)) {
const spanArray = spans as any[]
const iterations = spanArray.map((span: any) => {
// Extract error information from span output if status is error
let errorMessage = null
let errorStackTrace = null
if (span.status === 'error' && span.output) {
// Error information can be in different formats in the output
if (typeof span.output === 'string') {
errorMessage = span.output
} else if (span.output.error) {
errorMessage = span.output.error
errorStackTrace = span.output.stackTrace || span.output.stack
} else if (span.output.message) {
errorMessage = span.output.message
errorStackTrace = span.output.stackTrace || span.output.stack
} else {
// Fallback: stringify the entire output for error cases
errorMessage = JSON.stringify(span.output)
}
}
return {
id: span.id,
blockId: span.blockId,
blockName: span.name,
blockType: span.type,
status: span.status,
startedAt: span.startTime,
endedAt: span.endTime,
durationMs: span.duration,
inputData: span.input,
outputData: span.output,
errorMessage,
errorStackTrace,
cost: span.cost || {
input: null,
output: null,
total: null,
},
tokens: span.tokens || {
input: null,
output: null,
total: null,
},
modelUsed: span.model || null,
metadata: {},
}
})
blockExecutionMap[blockId] = {
iterations,
currentIteration: 0,
totalIterations: iterations.length,
}
}
setBlockExecutions(blockExecutionMap)
}
}, [traceSpans])
useEffect(() => {
const fetchData = async () => {
try {
setLoading(true)
setError(null)
const response = await fetch(`/api/logs/execution/${executionId}`)
if (!response.ok) {
throw new Error(`Failed to fetch frozen canvas data: ${response.statusText}`)
}
const result = await response.json()
setData(result)
logger.debug(`Loaded frozen canvas data for execution: ${executionId}`)
} catch (err) {
const errorMessage = err instanceof Error ? err.message : 'Unknown error'
logger.error('Failed to fetch frozen canvas data:', err)
setError(errorMessage)
} finally {
setLoading(false)
}
}
fetchData()
}, [executionId])
const renderContent = () => {
if (loading) {
return (
<div
className={cn('flex items-center justify-center', className)}
style={{ height, width }}
>
<div className='flex items-center gap-[8px] text-[var(--text-secondary)]'>
<Loader2 className='h-[16px] w-[16px] animate-spin' />
<span className='text-[13px]'>Loading frozen canvas...</span>
</div>
</div>
)
}
if (error) {
return (
<div
className={cn('flex items-center justify-center', className)}
style={{ height, width }}
>
<div className='flex items-center gap-[8px] text-[var(--text-error)]'>
<AlertCircle className='h-[16px] w-[16px]' />
<span className='text-[13px]'>Failed to load frozen canvas: {error}</span>
</div>
</div>
)
}
if (!data) {
return (
<div
className={cn('flex items-center justify-center', className)}
style={{ height, width }}
>
<div className='text-[13px] text-[var(--text-secondary)]'>No data available</div>
</div>
)
}
const isMigratedLog = (data.workflowState as any)?._migrated === true
if (isMigratedLog) {
return (
<div
className={cn('flex flex-col items-center justify-center gap-[16px] p-[32px]', className)}
style={{ height, width }}
>
<div className='flex items-center gap-[12px] text-[var(--text-warning)]'>
<AlertCircle className='h-[20px] w-[20px]' />
<span className='font-medium text-[15px]'>Logged State Not Found</span>
</div>
<div className='max-w-md text-center text-[13px] text-[var(--text-secondary)]'>
This log was migrated from the old logging system. The workflow state at execution time
is not available.
</div>
<div className='text-[12px] text-[var(--text-tertiary)]'>
Note: {(data.workflowState as any)?._note}
</div>
</div>
)
}
return (
<>
<div
style={{ height, width }}
className={cn('frozen-canvas-mode h-full w-full', className)}
>
<WorkflowPreview
workflowState={data.workflowState}
showSubBlocks={true}
isPannable={true}
defaultPosition={{ x: 0, y: 0 }}
defaultZoom={0.8}
onNodeClick={(blockId) => {
setPinnedBlockId(blockId)
}}
/>
</div>
{pinnedBlockId && (
<PinnedLogs
executionData={blockExecutions[pinnedBlockId] || null}
blockId={pinnedBlockId}
workflowState={data.workflowState}
onClose={() => setPinnedBlockId(null)}
/>
)}
</>
)
}
if (isModal) {
return (
<Modal open={isOpen} onOpenChange={onClose}>
<ModalContent size='xl' className='flex h-[90vh] flex-col'>
<ModalHeader>Workflow State</ModalHeader>
<ModalBody className='min-h-0 flex-1'>
<div className='flex h-full flex-col'>
<div className='min-h-0 flex-1 overflow-hidden rounded-[4px] border border-[var(--border)]'>
{renderContent()}
</div>
</div>
</ModalBody>
</ModalContent>
</Modal>
)
}
return renderContent()
}

View File

@@ -1 +0,0 @@
export { FrozenCanvas } from './frozen-canvas'

Some files were not shown because too many files have changed in this diff.