Compare commits


45 Commits

Author SHA1 Message Date
Waleed
377b84e18c v0.4.6: kb improvements, posthog fixes 2025-10-05 21:48:32 -07:00
Waleed
223ecda80e fix(posthog): add rewrites for posthog reverse proxy routes unconditionally, remove unused POSTHOG_ENABLED envvar (#1548) 2025-10-05 21:27:03 -07:00
Waleed
7dde01e74b fix(kb): force kb uploads to use serve route (#1547) 2025-10-05 17:50:21 -07:00
Vikhyath Mondreti
b768ca845e v0.4.5: copilot updates, kb improvements, payment failure fix 2025-10-04 16:37:41 -07:00
Waleed
86ed32ea10 feat(kb): added json/yaml parser+chunker, added dedicated csv chunker (#1539)
* feat(kb): added json/yaml parser+chunker, added dedicated csv chunker

* ack PR comments

* improved kb upload
2025-10-04 14:59:21 -07:00
Vikhyath Mondreti
0e838940f1 fix(copilot): targeted auto-layout for copilot edits + custom tool persistence (#1546)
* fix autolayout and custom tools persistence

* fix

* fix preserving positions within subflow

* more fixes

* fix resizing

* consolidate constants
2025-10-04 14:52:37 -07:00
Siddharth Ganesan
7cc9a23f99 fix(copilot): tool renaming 2025-10-04 11:52:20 -07:00
Vikhyath Mondreti
c42d2a32f3 feat(copilot): fix context / json parsing edge cases (#1542)
* Add get ops examples

* input format incorrectly created by copilot should not crash workflow

* fix tool edits triggering overall delta

* fix(db): add more options for SSL connection, add envvar for base64 db cert (#1533)

* fix trigger additions

* fix nested outputs for triggers

* add condition subblock sanitization

* fix custom tools json

* Model selector

* fix response format sanitization

* remove dead code

* fix export sanitization

* Update migration

* fix import race cond

* Copilot settings

* fix response format

* stop loops/parallels copilot generation from breaking diff view

* fix lint

* Apply suggestion from @greptile-apps[bot]

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix tests

* fix lint

---------

Co-authored-by: Siddharth Ganesan <siddharthganesan@gmail.com>
Co-authored-by: Waleed <walif6@gmail.com>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
2025-10-03 19:08:57 -07:00
Vikhyath Mondreti
4da355d269 fix(billing-blocked): block platform usage if payment fails for regular subs as well (#1541) 2025-10-03 12:17:53 -07:00
Waleed
2175fd1106 v0.4.4: database config updates 2025-10-02 20:08:09 -07:00
Waleed
10692b5e5a fix(db): remove overly complex db connection logic (#1538) 2025-10-02 19:54:32 -07:00
Waleed
62298bf094 fix(db): added database config to drizzle.config in app container (#1536) 2025-10-02 18:09:27 -07:00
Waleed
5f1518ffd9 fix(db): added SSL config to migrations container (#1535) 2025-10-02 18:04:31 -07:00
Waleed
cae0e85826 v0.4.3: posthog, docs updates, search modal improvements 2025-10-02 17:02:48 -07:00
Waleed
fa9c97816b fix(db): add more options for SSL connection, add envvar for base64 db cert (#1533) 2025-10-02 15:53:45 -07:00
Vikhyath Mondreti
4bc37db547 feat(copilot): JSON sanitization logic + operations sequence diff correctness (#1521)
* add state sending capability

* progress

* add ability to add title and description to workflow state

* progress in language

* fix

* cleanup code

* fix type issue

* fix subflow deletion case

* Workflow console tool

* fix lint

---------

Co-authored-by: Siddharth Ganesan <siddharthganesan@gmail.com>
2025-10-02 15:11:03 -07:00
Waleed
15138629cb improvement(performance): remove writes to workflow updated_at on position updates for blocks, edges, & subflows (#1531)
* improvement(performance): remove writes to workflow updated_at on position updates for blocks, edges, & subflows

* update query pattern for logs routes
2025-10-02 11:53:50 -07:00
Waleed
ace83ebcae feat(cmdk): added knowledgebases to the cmdk modal (#1530) 2025-10-01 21:21:42 -07:00
Waleed
b33ae5bff9 fix(fumadocs): fixed client-side export on fumadocs (#1529) 2025-10-01 20:52:20 -07:00
Waleed
dc6052578d fix(kb): removed filename constraint from knowledgebase doc names (#1527) 2025-10-01 20:39:56 -07:00
Waleed
4adbae03e7 chore(deps): update fumadocs (#1525) 2025-10-01 20:28:12 -07:00
Vikhyath Mondreti
3509ce8ce4 fix(autolayout): type issue if workflow deployed + remove dead state code (#1524)
* fix(autolayout): type issue if workflow deployed

* remove dead code hasActiveWebhook field
2025-10-01 20:18:29 -07:00
Waleed
7aae108b87 feat(posthog): added posthog for analytics (#1523)
* feat(posthog): added posthog for analytics

* added envvars to env.ts
2025-10-01 20:12:26 -07:00
Waleed
980a6d8347 improvement(db): enforce SSL everywhere where a DB connection is established (#1522)
* improvement(db): enforce SSL everywhere where a DB connection is established

* remove extraneous comment
2025-10-01 19:09:08 -07:00
Vikhyath Mondreti
745eaff622 v0.4.2: autolayout improvements, variable resolution, CI/CD, deployed chat, router block fixes 2025-10-01 17:27:35 -07:00
Vikhyath Mondreti
35d857ef2e fix(trigger): inject project id env var in correctly (#1520) 2025-10-01 17:16:28 -07:00
Waleed
6e63eafb79 improvement(db): remove vercel, remove railway, remove crons, improve DB connection config (#1519)
* improvement(db): remove vercel, remove railway, remove crons, improve DB connection config

* remove NEXT_PUBLIC_VERCEL_URL

* remove db url fallbacks

* remove railway & more vercel stuff

---------

Co-authored-by: waleed <waleed>
2025-10-01 16:37:13 -07:00
Waleed
896f7bb0a0 fix(ci): update trigger.dev ci to only push to staging on merge to staging & for prod as well (#1518) 2025-10-01 13:22:04 -07:00
Waleed
97f69a24e1 fix(redirects): update middleware to allow access to /chat regardless of auth status (#1516) 2025-10-01 10:46:18 -07:00
Waleed
1a2c4040aa improvement(trigger): increase maxDuration for background tasks to 10 min to match sync API executions (#1504)
* improvement(trigger): increase maxDuration for background tasks to 10 min to match sync API executions

* add trigger proj id
2025-10-01 10:40:18 -07:00
Vikhyath Mondreti
4ad9be0836 fix(router): use getBaseUrl() helper (#1515)
* fix(router): use getBaseUrl() helper

* add existence check
2025-10-01 10:39:57 -07:00
Vikhyath Mondreti
0bf2bce368 improvement(var-resolution): resolve variables with block name check and consolidate code (#1469)
* improvement(var-resolution): resolve variables with block name check and consolidate code

* fix tests

* fix type error

* fix var highlighting in kb tags

* fix kb tags
2025-09-30 19:20:35 -07:00
Vikhyath Mondreti
0d881ecc00 fix(deployed-version-check): check deployed version existence pre-queuing (#1508)
* fix(deployed-version-check): check deployed version existence pre-queuing

* fix tests

* fix edge case
2025-09-30 19:20:21 -07:00
Siddharth Ganesan
7e6a5dc7e2 Fix/remove trigger promotion (#1513)
* Revert trigger promotion

* Move trigger

* Fix ci
2025-09-30 18:29:28 -07:00
Siddharth Ganesan
c1a3500bde fix(ci): capture correct deployment version output (#1512)
* Capture correct deployment version output

* Add trigger access token to each step

* Use correct arn
2025-09-30 16:36:19 -07:00
Siddharth Ganesan
561b6f2778 fix(ci): fix trigger version capture 2025-09-30 16:20:25 -07:00
Siddharth Ganesan
cdfee16b8a Fix trigger ci creds (#1510) 2025-09-30 14:03:38 -07:00
Siddharth Ganesan
9f6cb1becf fix(ci): trigger permissions 2025-09-30 13:53:02 -07:00
Siddharth Ganesan
dca8745c44 fix(ci): add skip promotion to trigger ci 2025-09-30 13:37:07 -07:00
Vikhyath Mondreti
c35c8d1f31 improvement(autolayout): use live block heights / widths for autolayout to prevent overlaps (#1505)
* improvement(autolayout): use live block heights / widths for autolayout to prevent overlaps

* improve layering algo for multiple trigger setting

* remove console logs

* add type annotation
2025-09-30 13:24:19 -07:00
Siddharth Ganesan
87c00cec6d improvement(ci): trigger.dev pushes (#1506)
* Fix trigger workflow ci

* Update trigger location
2025-09-30 13:22:24 -07:00
Vikhyath Mondreti
17edf0405b improvement(triggers): uuid, autolayout, copilot context (#1503)
* make trigger select uuid consistent with sidebar selection

* add trigger allowed flag for core triggers

* fix autolayout with new triggers
2025-09-30 11:31:54 -07:00
Siddharth Ganesan
79461840c3 fix(migrations): make sso migration idempotent 2025-09-30 11:04:44 -07:00
Siddharth Ganesan
e76fc8c2da Remove migrations ci (#1501) 2025-09-30 10:43:41 -07:00
Waleed
e9150a53e3 feat(i18n): update translations (#1496) 2025-09-30 09:44:50 -07:00
161 changed files with 14266 additions and 3057 deletions

View File

@@ -10,7 +10,6 @@ services:
environment:
- NODE_ENV=development
- DATABASE_URL=postgresql://postgres:postgres@db:5432/simstudio
- POSTGRES_URL=postgresql://postgres:postgres@db:5432/simstudio
- BETTER_AUTH_URL=http://localhost:3000
- NEXT_PUBLIC_APP_URL=http://localhost:3000
- BUN_INSTALL_CACHE_DIR=/home/bun/.bun/cache

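The hunk above drops the redundant `POSTGRES_URL` entry so the dev stack is configured through `DATABASE_URL` alone. A quick sanity check after pulling this change — the app service name is not visible in the hunk, so `simstudio` below is a guess; substitute the actual service from the compose file:

```sh
# Recreate the stack and inspect the app container's environment.
docker compose up -d --build
docker compose exec simstudio env | grep -E '^(DATABASE_URL|POSTGRES_URL)='
# Expected after this change: only DATABASE_URL is printed.
```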
View File

@@ -16,43 +16,200 @@ jobs:
uses: ./.github/workflows/test-build.yml
secrets: inherit
# Build and push images (ECR for staging, ECR + GHCR for main)
build-images:
name: Build Images
# Build AMD64 images and push to ECR immediately (+ GHCR for main)
build-amd64:
name: Build AMD64
needs: test-build
if: github.event_name == 'push' && (github.ref == 'refs/heads/main' || github.ref == 'refs/heads/staging')
uses: ./.github/workflows/images.yml
secrets: inherit
runs-on: blacksmith-4vcpu-ubuntu-2404
permissions:
contents: read
packages: write
id-token: write
strategy:
fail-fast: false
matrix:
include:
- dockerfile: ./docker/app.Dockerfile
ghcr_image: ghcr.io/simstudioai/simstudio
ecr_repo_secret: ECR_APP
- dockerfile: ./docker/db.Dockerfile
ghcr_image: ghcr.io/simstudioai/migrations
ecr_repo_secret: ECR_MIGRATIONS
- dockerfile: ./docker/realtime.Dockerfile
ghcr_image: ghcr.io/simstudioai/realtime
ecr_repo_secret: ECR_REALTIME
steps:
- name: Checkout code
uses: actions/checkout@v4
# Deploy Trigger.dev (after builds complete)
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v4
with:
role-to-assume: ${{ github.ref == 'refs/heads/main' && secrets.AWS_ROLE_TO_ASSUME || secrets.STAGING_AWS_ROLE_TO_ASSUME }}
aws-region: ${{ github.ref == 'refs/heads/main' && secrets.AWS_REGION || secrets.STAGING_AWS_REGION }}
- name: Login to Amazon ECR
id: login-ecr
uses: aws-actions/amazon-ecr-login@v2
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
if: github.ref == 'refs/heads/main'
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Set up Docker Buildx
uses: useblacksmith/setup-docker-builder@v1
- name: Generate tags
id: meta
run: |
ECR_REGISTRY="${{ steps.login-ecr.outputs.registry }}"
ECR_REPO="${{ secrets[matrix.ecr_repo_secret] }}"
GHCR_IMAGE="${{ matrix.ghcr_image }}"
# ECR tags (always build for ECR)
if [ "${{ github.ref }}" = "refs/heads/main" ]; then
ECR_TAG="latest"
else
ECR_TAG="staging"
fi
ECR_IMAGE="${ECR_REGISTRY}/${ECR_REPO}:${ECR_TAG}"
# Build tags list
TAGS="${ECR_IMAGE}"
# Add GHCR tags only for main branch
if [ "${{ github.ref }}" = "refs/heads/main" ]; then
GHCR_AMD64="${GHCR_IMAGE}:latest-amd64"
GHCR_SHA="${GHCR_IMAGE}:${{ github.sha }}-amd64"
TAGS="${TAGS},$GHCR_AMD64,$GHCR_SHA"
fi
echo "tags=${TAGS}" >> $GITHUB_OUTPUT
- name: Build and push images
uses: useblacksmith/build-push-action@v2
with:
context: .
file: ${{ matrix.dockerfile }}
platforms: linux/amd64
push: true
tags: ${{ steps.meta.outputs.tags }}
provenance: false
sbom: false
# Build ARM64 images for GHCR (main branch only, runs in parallel)
build-ghcr-arm64:
name: Build ARM64 (GHCR Only)
needs: test-build
runs-on: linux-arm64-8-core
if: github.event_name == 'push' && github.ref == 'refs/heads/main'
permissions:
contents: read
packages: write
strategy:
fail-fast: false
matrix:
include:
- dockerfile: ./docker/app.Dockerfile
image: ghcr.io/simstudioai/simstudio
- dockerfile: ./docker/db.Dockerfile
image: ghcr.io/simstudioai/migrations
- dockerfile: ./docker/realtime.Dockerfile
image: ghcr.io/simstudioai/realtime
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Login to GHCR
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Set up Docker Buildx
uses: useblacksmith/setup-docker-builder@v1
- name: Generate ARM64 tags
id: meta
run: |
IMAGE="${{ matrix.image }}"
echo "tags=${IMAGE}:latest-arm64,${IMAGE}:${{ github.sha }}-arm64" >> $GITHUB_OUTPUT
- name: Build and push ARM64 to GHCR
uses: useblacksmith/build-push-action@v2
with:
context: .
file: ${{ matrix.dockerfile }}
platforms: linux/arm64
push: true
tags: ${{ steps.meta.outputs.tags }}
provenance: false
sbom: false
# Create GHCR multi-arch manifests (only for main, after both builds)
create-ghcr-manifests:
name: Create GHCR Manifests
runs-on: blacksmith-4vcpu-ubuntu-2404
needs: [build-amd64, build-ghcr-arm64]
if: github.event_name == 'push' && github.ref == 'refs/heads/main'
permissions:
packages: write
strategy:
matrix:
include:
- image: ghcr.io/simstudioai/simstudio
- image: ghcr.io/simstudioai/migrations
- image: ghcr.io/simstudioai/realtime
steps:
- name: Login to GHCR
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Create and push manifests
run: |
IMAGE_BASE="${{ matrix.image }}"
# Create latest manifest
docker manifest create "${IMAGE_BASE}:latest" \
"${IMAGE_BASE}:latest-amd64" \
"${IMAGE_BASE}:latest-arm64"
docker manifest push "${IMAGE_BASE}:latest"
# Create SHA manifest
docker manifest create "${IMAGE_BASE}:${{ github.sha }}" \
"${IMAGE_BASE}:${{ github.sha }}-amd64" \
"${IMAGE_BASE}:${{ github.sha }}-arm64"
docker manifest push "${IMAGE_BASE}:${{ github.sha }}"
# Deploy Trigger.dev (after ECR images are pushed, runs in parallel with process-docs)
trigger-deploy:
name: Deploy Trigger.dev
needs: build-images
needs: build-amd64
if: github.event_name == 'push' && (github.ref == 'refs/heads/main' || github.ref == 'refs/heads/staging')
uses: ./.github/workflows/trigger-deploy.yml
secrets: inherit
# Run database migrations (depends on build completion and trigger deployment)
migrations:
name: Apply Database Migrations
needs: [build-images, trigger-deploy]
if: |
always() &&
github.event_name == 'push' &&
(github.ref == 'refs/heads/main' || github.ref == 'refs/heads/staging') &&
needs.build-images.result == 'success' &&
needs.trigger-deploy.result == 'success'
uses: ./.github/workflows/migrations.yml
secrets: inherit
# Process docs embeddings if needed
# Process docs embeddings (after ECR images are pushed, runs in parallel with trigger-deploy)
process-docs:
name: Process Docs
needs: migrations
needs: build-amd64
if: github.event_name == 'push' && (github.ref == 'refs/heads/main' || github.ref == 'refs/heads/staging')
uses: ./.github/workflows/docs-embeddings.yml
secrets: inherit

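The reworked pipeline splits image builds into an AMD64 job (pushing to ECR, plus `-amd64` tags on GHCR for main) and a parallel ARM64 job, then stitches the per-arch tags into multi-arch manifests. The result can be verified from any machine with the standard Docker CLI — these commands are not part of the workflow, just a way to inspect its output:

```sh
# List the platforms behind the multi-arch tag; expect one amd64 and one arm64 entry.
docker manifest inspect ghcr.io/simstudioai/simstudio:latest

# A plain pull resolves the manifest to the host's architecture automatically.
docker pull ghcr.io/simstudioai/simstudio:latest
```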
View File

@@ -32,4 +32,4 @@ jobs:
env:
DATABASE_URL: ${{ github.ref == 'refs/heads/main' && secrets.DATABASE_URL || secrets.STAGING_DATABASE_URL }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
run: bun run scripts/process-docs-embeddings.ts --clear
run: bun run scripts/process-docs.ts --clear

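This hunk renames the embeddings script to `scripts/process-docs.ts`. Going by the env block above, a local run would look roughly like the following — a sketch only, since the diff does not show the script's working directory or any variables it reads beyond these two (the credential values are placeholders):

```sh
# Mirror the CI step's environment; --clear is the same flag the workflow passes.
DATABASE_URL="postgres://user:pass@host:5432/simstudio" \
OPENAI_API_KEY="sk-..." \
bun run scripts/process-docs.ts --clear
```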
View File

@@ -13,6 +13,7 @@ jobs:
cancel-in-progress: false
env:
TRIGGER_ACCESS_TOKEN: ${{ secrets.TRIGGER_ACCESS_TOKEN }}
TRIGGER_PROJECT_ID: ${{ secrets.TRIGGER_PROJECT_ID }}
steps:
- name: Checkout code
@@ -39,4 +40,5 @@ jobs:
- name: Deploy to Trigger.dev (Production)
if: github.ref == 'refs/heads/main'
working-directory: ./apps/sim
run: npx --yes trigger.dev@4.0.4 deploy

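The trigger-deploy hunk adds `TRIGGER_PROJECT_ID` to the job environment alongside the existing access token. Reproducing the production step locally would look something like this (a sketch assuming valid Trigger.dev credentials; both values below are placeholders):

```sh
# Same invocation as the CI step, run from the sim app directory.
cd apps/sim
TRIGGER_ACCESS_TOKEN="tr_pat_..." \
TRIGGER_PROJECT_ID="proj_..." \
npx --yes trigger.dev@4.0.4 deploy
```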
View File

@@ -1,7 +1,7 @@
import type { ReactNode } from 'react'
import { defineI18nUI } from 'fumadocs-ui/i18n'
import { DocsLayout } from 'fumadocs-ui/layouts/docs'
import { RootProvider } from 'fumadocs-ui/provider'
import { RootProvider } from 'fumadocs-ui/provider/next'
import { ExternalLink, GithubIcon } from 'lucide-react'
import { Inter } from 'next/font/google'
import Image from 'next/image'

View File

@@ -175,56 +175,30 @@ Verwenden Sie einen `Memory`Block mit einer konsistenten `id` (zum Beispiel `cha
- Lesen Sie den Gesprächsverlauf für den Kontext
- Hängen Sie die Antwort des Agenten nach dessen Ausführung an
```yaml
# 1) Add latest user message
- Memory (operation: add)
id: chat
role: user
content: {{input}}
# 2) Load conversation history
- Memory (operation: get)
id: chat
# 3) Run the agent with prior messages available
- Agent
System Prompt: ...
User Prompt: |
Use the conversation so far:
{{memory_get.memories}}
Current user message: {{input}}
# 4) Store the agent reply
- Memory (operation: add)
id: chat
role: assistant
content: {{agent.content}}
```
Siehe die `Memory`Block-Referenz für Details: [/tools/memory](/tools/memory).
Siehe die [`Memory`](/tools/memory) Blockreferenz für Details.
## Eingaben und Ausgaben
<Tabs items={['Configuration', 'Variables', 'Results']}>
<Tabs items={['Konfiguration', 'Variablen', 'Ergebnisse']}>
<Tab>
<ul className="list-disc space-y-2 pl-6">
<li>
<strong>System Prompt</strong>: Anweisungen, die das Verhalten und die Rolle des Agenten definieren
<strong>System-Prompt</strong>: Anweisungen, die das Verhalten und die Rolle des Agenten definieren
</li>
<li>
<strong>User Prompt</strong>: Eingabetext oder -daten zur Verarbeitung
<strong>Benutzer-Prompt</strong>: Eingabetext oder zu verarbeitende Daten
</li>
<li>
<strong>Model</strong>: KI-Modellauswahl (OpenAI, Anthropic, Google, etc.)
<strong>Modell</strong>: KI-Modellauswahl (OpenAI, Anthropic, Google, usw.)
</li>
<li>
<strong>Temperature</strong>: Steuerung der Antwort-Zufälligkeit (0-2)
<strong>Temperatur</strong>: Steuerung der Zufälligkeit der Antwort (0-2)
</li>
<li>
<strong>Tools</strong>: Array verfügbarer Tools für Funktionsaufrufe
</li>
<li>
<strong>Response Format</strong>: JSON-Schema für strukturierte Ausgabe
<strong>Antwortformat</strong>: JSON-Schema für strukturierte Ausgabe
</li>
</ul>
</Tab>
@@ -261,7 +235,7 @@ Siehe die `Memory`Block-Referenz für Details: [/tools/memory](/tools/memory).
## Beispielanwendungsfälle
### Automatisierung des Kundendienstes
### Automatisierung des Kundenservice
<div className="mb-4 rounded-md border p-4">
<h4 className="font-medium">Szenario: Bearbeitung von Kundenanfragen mit Datenbankzugriff</h4>
@@ -269,9 +243,9 @@ Siehe die `Memory`Block-Referenz für Details: [/tools/memory](/tools/memory).
<li>Benutzer reicht ein Support-Ticket über den API-Block ein</li>
<li>Agent prüft Bestellungen/Abonnements in Postgres und durchsucht die Wissensdatenbank nach Anleitungen</li>
<li>Falls eine Eskalation erforderlich ist, erstellt der Agent ein Linear-Ticket mit relevantem Kontext</li>
<li>Agent verfasst eine klare E-Mail-Antwort</li>
<li>Agent erstellt eine klare E-Mail-Antwort</li>
<li>Gmail sendet die Antwort an den Kunden</li>
<li>Konversation wird im Speicher gesichert, um den Verlauf für zukünftige Nachrichten zu erhalten</li>
<li>Konversation wird im Memory gespeichert, um den Verlauf für zukünftige Nachrichten beizubehalten</li>
</ol>
</div>
@@ -287,20 +261,20 @@ Siehe die `Memory`Block-Referenz für Details: [/tools/memory](/tools/memory).
</ol>
</div>
### Werkzeuggestützter Rechercheassistent
### Werkzeuggestützter Forschungsassistent
<div className="mb-4 rounded-md border p-4">
<h4 className="font-medium">Szenario: Rechercheassistent mit Websuche und Dokumentenzugriff</h4>
<h4 className="font-medium">Szenario: Forschungsassistent mit Websuche und Dokumentenzugriff</h4>
<ol className="list-decimal pl-5 text-sm">
<li>Benutzeranfrage über Eingabe erhalten</li>
<li>Agent durchsucht das Web mit dem Google Search-Tool</li>
<li>Agent durchsucht das Web mit dem Google-Suchwerkzeug</li>
<li>Agent greift auf Notion-Datenbank für interne Dokumente zu</li>
<li>Agent erstellt umfassenden Recherchebericht</li>
<li>Agent erstellt umfassenden Forschungsbericht</li>
</ol>
</div>
## Best Practices
## Bewährte Praktiken
- **Sei spezifisch in System-Prompts**: Definiere die Rolle, den Tonfall und die Einschränkungen des Agenten klar. Je spezifischer deine Anweisungen sind, desto besser kann der Agent seinen vorgesehenen Zweck erfüllen.
- **Sei spezifisch in System-Prompts**: Definiere die Rolle, den Ton und die Einschränkungen des Agenten klar. Je spezifischer deine Anweisungen sind, desto besser kann der Agent seinen vorgesehenen Zweck erfüllen.
- **Wähle die richtige Temperatureinstellung**: Verwende niedrigere Temperatureinstellungen (0-0,3), wenn Genauigkeit wichtig ist, oder erhöhe die Temperatur (0,7-2,0) für kreativere oder abwechslungsreichere Antworten
- **Nutze Tools effektiv**: Integriere Tools, die den Zweck des Agenten ergänzen und seine Fähigkeiten erweitern. Sei selektiv bei der Auswahl der Tools, um den Agenten nicht zu überfordern. Für Aufgaben mit wenig Überschneidung verwende einen anderen Agent-Block für die besten Ergebnisse.
- **Nutze Werkzeuge effektiv**: Integriere Werkzeuge, die den Zweck des Agenten ergänzen und seine Fähigkeiten verbessern. Sei selektiv bei der Auswahl der Werkzeuge, um den Agenten nicht zu überfordern. Für Aufgaben mit wenig Überschneidung verwende einen anderen Agent-Block für die besten Ergebnisse.

View File

@@ -24,15 +24,7 @@ Der API-Trigger stellt Ihren Workflow als sicheren HTTP-Endpunkt bereit. Senden
Fügen Sie für jeden Parameter ein Feld **Eingabeformat** hinzu. Die Ausgabeschlüssel zur Laufzeit spiegeln das Schema wider und sind auch unter `<api.input>` verfügbar.
```yaml
- type: string
name: userId
value: demo-user # optional manual test value
- type: number
name: maxTokens
```
Manuelle Ausführungen im Editor verwenden die Spalte `value`, damit Sie testen können, ohne eine Anfrage zu senden. Während der Ausführung füllt der Resolver sowohl `<api.userId>` als auch `<api.input.userId>`.
Manuelle Ausführungen im Editor verwenden die Spalte `value`, damit Sie testen können, ohne eine Anfrage zu senden. Während der Ausführung füllt der Resolver sowohl `<api.userId>` als auch `<api.input.userId>` aus.
## Anfrage-Beispiel
@@ -56,5 +48,5 @@ Erfolgreiche Antworten geben das serialisierte Ausführungsergebnis vom Executor
Wenn kein Eingabeformat definiert ist, stellt der Executor das rohe JSON nur unter `<api.input>` bereit.
<Callout type="warning">
Ein Workflow kann nur einen API-Trigger enthalten. Veröffentlichen Sie eine neue Bereitstellung nach Änderungen, damit der Endpunkt aktuell bleibt.
Ein Workflow kann nur einen API-Trigger enthalten. Veröffentlichen Sie nach Änderungen eine neue Bereitstellung, damit der Endpunkt aktuell bleibt.
</Callout>

View File

@@ -10,7 +10,6 @@ type: object
required:
- type
- name
- inputs
- connections
properties:
type:
@@ -22,21 +21,23 @@ properties:
description: Display name for this loop block
inputs:
type: object
required:
- loopType
description: Optional. If omitted, defaults will be applied.
properties:
loopType:
type: string
enum: [for, forEach]
description: Type of loop to execute
default: for
iterations:
type: number
description: Number of iterations (for 'for' loops)
default: 5
minimum: 1
maximum: 1000
collection:
type: string
description: Collection to iterate over (for 'forEach' loops)
default: ""
maxConcurrency:
type: number
description: Maximum concurrent executions
@@ -45,13 +46,10 @@ properties:
maximum: 10
connections:
type: object
required:
- loop
properties:
# Nested format (recommended)
loop:
type: object
required:
- start
properties:
start:
type: string
@@ -59,26 +57,37 @@ properties:
end:
type: string
description: Target block ID for loop completion (optional)
# Direct handle format (alternative)
loop-start-source:
type: string | string[]
description: Target block ID to execute inside the loop (direct format)
loop-end-source:
type: string | string[]
description: Target block ID for loop completion (direct format, optional)
error:
type: string
description: Target block ID for error handling
note: Use either the nested 'loop' format OR the direct 'loop-start-source' format, not both
```
## Verbindungskonfiguration
Loop-Blöcke verwenden ein spezielles Verbindungsformat mit einem `loop`Abschnitt:
Loop-Blöcke unterstützen zwei Verbindungsformate:
### Direktes Handle-Format (Alternative)
```yaml
connections:
loop:
start: <string> # Target block ID to execute inside the loop
end: <string> # Target block ID after loop completion (optional)
loop-start-source: <string> # Target block ID to execute inside the loop
loop-end-source: <string> # Target block ID after loop completion (optional)
error: <string> # Target block ID for error handling (optional)
```
Beide Formate funktionieren identisch. Verwenden Sie das Format, das Ihnen besser gefällt.
## Konfiguration von untergeordneten Blöcken
Blöcke innerhalb einer Schleife müssen ihre `parentId` auf die Loop-Block-ID setzen:
Blöcke innerhalb einer Schleife müssen ihre `parentId` auf die Loop-Block-ID gesetzt haben. Die Eigenschaft `extent` wird automatisch auf `'parent'` gesetzt und muss nicht angegeben werden:
```yaml
loop-1:
@@ -106,7 +115,7 @@ process-item:
## Beispiele
### For-Schleife (feste Iterationen)
### For-Schleife (feste Anzahl von Iterationen)
```yaml
countdown-loop:
@@ -227,7 +236,7 @@ store-analysis:
};
```
### Schleife mit paralleler Verarbeitung
### Schleife für parallele Verarbeitung
```yaml
parallel-processing-loop:
@@ -261,9 +270,62 @@ process-task:
success: task-completed
```
### Beispiel für direktes Handle-Format
Dieselbe Schleife kann mit dem direkten Handle-Format geschrieben werden:
```yaml
my-loop:
type: loop
name: "Process Items"
inputs:
loopType: forEach
collection: <start.items>
connections:
loop-start-source: process-item # Direct handle format
loop-end-source: final-results # Direct handle format
error: handle-error
process-item:
type: agent
name: "Process Item"
parentId: my-loop
inputs:
systemPrompt: "Process this item"
userPrompt: <loop.currentItem>
model: gpt-4o
apiKey: '{{OPENAI_API_KEY}}'
```
### Minimales Schleifenbeispiel (mit Standardwerten)
Sie können den Abschnitt `inputs` vollständig weglassen, dann werden Standardwerte angewendet:
```yaml
simple-loop:
type: loop
name: "Simple Loop"
# No inputs section - defaults to loopType: 'for', iterations: 5
connections:
loop-start-source: process-step
loop-end-source: complete
process-step:
type: agent
name: "Process Step"
parentId: simple-loop
inputs:
systemPrompt: "Execute step"
userPrompt: "Step <loop.index>"
model: gpt-4o
apiKey: '{{OPENAI_API_KEY}}'
```
Diese Schleife führt standardmäßig 5 Iterationen aus.
## Schleifenvariablen
Innerhalb von untergeordneten Schleifenblöcken sind diese speziellen Variablen verfügbar:
Innerhalb von Schleifenunterblöcken sind diese speziellen Variablen verfügbar:
```yaml
# Available in all child blocks of the loop
@@ -290,6 +352,6 @@ final-processor:
- Verwenden Sie forEach für die Verarbeitung von Sammlungen, for-Schleifen für feste Iterationen
- Erwägen Sie die Verwendung von maxConcurrency für I/O-gebundene Operationen
- Integrieren Sie Fehlerbehandlung für eine robuste Schleifenausführung
- Verwenden Sie aussagekräftige Namen für Schleifen-Unterblöcke
- Verwenden Sie aussagekräftige Namen für Schleifenunterblöcke
- Testen Sie zuerst mit kleinen Sammlungen
- Überwachen Sie die Ausführungszeit bei großen Sammlungen
- Überwachen Sie die Ausführungszeit für große Sammlungen

View File

@@ -175,33 +175,7 @@ Utiliza un bloque `Memory` con un `id` consistente (por ejemplo, `chat`) para pe
- Lee el historial de conversación para contexto
- Añade la respuesta del agente después de que se ejecute
```yaml
# 1) Add latest user message
- Memory (operation: add)
id: chat
role: user
content: {{input}}
# 2) Load conversation history
- Memory (operation: get)
id: chat
# 3) Run the agent with prior messages available
- Agent
System Prompt: ...
User Prompt: |
Use the conversation so far:
{{memory_get.memories}}
Current user message: {{input}}
# 4) Store the agent reply
- Memory (operation: add)
id: chat
role: assistant
content: {{agent.content}}
```
Consulta la referencia del bloque `Memory` para más detalles: [/tools/memory](/tools/memory).
Consulta la referencia del bloque [`Memory`](/tools/memory) para más detalles.
## Entradas y salidas
@@ -212,7 +186,7 @@ Consulta la referencia del bloque `Memory` para más detalles: [/tools/memory](/
<strong>Prompt del sistema</strong>: Instrucciones que definen el comportamiento y rol del agente
</li>
<li>
<strong>Prompt del usuario</strong>: Texto de entrada o datos a procesar
<strong>Prompt del usuario</strong>: Texto o datos de entrada para procesar
</li>
<li>
<strong>Modelo</strong>: Selección del modelo de IA (OpenAI, Anthropic, Google, etc.)
@@ -231,13 +205,13 @@ Consulta la referencia del bloque `Memory` para más detalles: [/tools/memory](/
<Tab>
<ul className="list-disc space-y-2 pl-6">
<li>
<strong>agent.content</strong>: Texto de respuesta o datos estructurados del agente
<strong>agent.content</strong>: Texto de respuesta del agente o datos estructurados
</li>
<li>
<strong>agent.tokens</strong>: Objeto de estadísticas de uso de tokens
<strong>agent.tokens</strong>: Objeto con estadísticas de uso de tokens
</li>
<li>
<strong>agent.tool_calls</strong>: Array de detalles de ejecución de herramientas
<strong>agent.tool_calls</strong>: Array con detalles de ejecución de herramientas
</li>
<li>
<strong>agent.cost</strong>: Costo estimado de la llamada a la API (si está disponible)
@@ -247,7 +221,7 @@ Consulta la referencia del bloque `Memory` para más detalles: [/tools/memory](/
<Tab>
<ul className="list-disc space-y-2 pl-6">
<li>
<strong>Contenido</strong>: Salida de respuesta principal del agente
<strong>Contenido</strong>: Salida principal de respuesta del agente
</li>
<li>
<strong>Metadatos</strong>: Estadísticas de uso y detalles de ejecución
@@ -267,15 +241,15 @@ Consulta la referencia del bloque `Memory` para más detalles: [/tools/memory](/
<h4 className="font-medium">Escenario: Gestionar consultas de clientes con acceso a base de datos</h4>
<ol className="list-decimal pl-5 text-sm">
<li>El usuario envía un ticket de soporte a través del bloque API</li>
<li>El agente verifica pedidos/suscripciones en Postgres y busca en la base de conocimientos para obtener orientación</li>
<li>El agente verifica pedidos/suscripciones en Postgres y busca en la base de conocimientos</li>
<li>Si se necesita escalamiento, el agente crea una incidencia en Linear con el contexto relevante</li>
<li>El agente redacta una respuesta clara por correo electrónico</li>
<li>Gmail envía la respuesta al cliente</li>
<li>La conversación se guarda en Memoria para mantener el historial para mensajes futuros</li>
<li>La conversación se guarda en Memory para mantener el historial para mensajes futuros</li>
</ol>
</div>
### Análisis de contenido multi-modelo
### Análisis de contenido con múltiples modelos
<div className="mb-4 rounded-md border p-4">
<h4 className="font-medium">Escenario: Analizar contenido con diferentes modelos de IA</h4>
@@ -287,13 +261,13 @@ Consulta la referencia del bloque `Memory` para más detalles: [/tools/memory](/
</ol>
</div>
### Asistente de investigación con herramientas
### Asistente de investigación potenciado por herramientas
<div className="mb-4 rounded-md border p-4">
<h4 className="font-medium">Escenario: Asistente de investigación con búsqueda web y acceso a documentos</h4>
<ol className="list-decimal pl-5 text-sm">
<li>Consulta del usuario recibida a través de entrada</li>
<li>El agente busca en la web usando la herramienta de Google Search</li>
<li>El agente busca en la web utilizando la herramienta de Google Search</li>
<li>El agente accede a la base de datos de Notion para documentos internos</li>
<li>El agente compila un informe de investigación completo</li>
</ol>
@@ -301,6 +275,6 @@ Consulta la referencia del bloque `Memory` para más detalles: [/tools/memory](/
## Mejores prácticas
- **Sé específico en los prompts del sistema**: Define claramente el rol, tono y limitaciones del agente. Cuanto más específicas sean tus instrucciones, mejor podrá el agente cumplir con su propósito previsto.
- **Sé específico en los prompts del sistema**: Define claramente el rol del agente, el tono y las limitaciones. Cuanto más específicas sean tus instrucciones, mejor podrá el agente cumplir con su propósito previsto.
- **Elige la configuración de temperatura adecuada**: Usa configuraciones de temperatura más bajas (0-0.3) cuando la precisión es importante, o aumenta la temperatura (0.7-2.0) para respuestas más creativas o variadas
- **Aprovecha las herramientas de manera efectiva**: Integra herramientas que complementen el propósito del agente y mejoren sus capacidades. Sé selectivo sobre qué herramientas proporcionas para evitar sobrecargar al agente. Para tareas con poca superposición, usa otro bloque de Agente para obtener los mejores resultados.
- **Aprovecha las herramientas de manera efectiva**: Integra herramientas que complementen el propósito del agente y mejoren sus capacidades. Sé selectivo sobre qué herramientas proporcionas para evitar sobrecargar al agente. Para tareas con poco solapamiento, usa otro bloque de Agente para obtener los mejores resultados.

View File

@@ -24,14 +24,6 @@ El disparador de API expone tu flujo de trabajo como un punto de conexión HTTP
Añade un campo de **Formato de entrada** para cada parámetro. Las claves de salida en tiempo de ejecución reflejan el esquema y también están disponibles bajo `<api.input>`.
```yaml
- type: string
name: userId
value: demo-user # optional manual test value
- type: number
name: maxTokens
```
Las ejecuciones manuales en el editor utilizan la columna `value` para que puedas realizar pruebas sin enviar una solicitud. Durante la ejecución, el resolutor completa tanto `<api.userId>` como `<api.input.userId>`.
## Ejemplo de solicitud
@@ -44,17 +36,17 @@ curl -X POST \
-d '{"userId":"demo-user","maxTokens":1024}'
```
Las respuestas exitosas devuelven el resultado de ejecución serializado del Ejecutor. Los errores muestran fallos de validación, autenticación o del flujo de trabajo.
Las respuestas exitosas devuelven el resultado de ejecución serializado del Ejecutor. Los errores muestran fallos de validación, autenticación o flujo de trabajo.
## Referencia de salida
| Referencia | Descripción |
|-----------|-------------|
| `<api.field>` | Campo definido en el formato de entrada |
| `<api.field>` | Campo definido en el Formato de Entrada |
| `<api.input>` | Cuerpo completo estructurado de la solicitud |
Si no se define un formato de entrada, el ejecutor expone el JSON sin procesar solo en `<api.input>`.
Si no se define un Formato de Entrada, el ejecutor expone el JSON sin procesar solo en `<api.input>`.
<Callout type="warning">
Un flujo de trabajo puede contener solo un disparador de API. Publica una nueva implementación después de realizar cambios para que el punto de conexión se mantenga actualizado.
Un flujo de trabajo puede contener solo un Disparador de API. Publica una nueva implementación después de realizar cambios para que el punto de conexión se mantenga actualizado.
</Callout>

View File

@@ -10,7 +10,6 @@ type: object
required:
- type
- name
- inputs
- connections
properties:
type:
@@ -22,21 +21,23 @@ properties:
description: Display name for this loop block
inputs:
type: object
required:
- loopType
description: Optional. If omitted, defaults will be applied.
properties:
loopType:
type: string
enum: [for, forEach]
description: Type of loop to execute
default: for
iterations:
type: number
description: Number of iterations (for 'for' loops)
default: 5
minimum: 1
maximum: 1000
collection:
type: string
description: Collection to iterate over (for 'forEach' loops)
default: ""
maxConcurrency:
type: number
description: Maximum concurrent executions
@@ -45,13 +46,10 @@ properties:
maximum: 10
connections:
type: object
required:
- loop
properties:
# Nested format (recommended)
loop:
type: object
required:
- start
properties:
start:
type: string
@@ -59,26 +57,37 @@ properties:
end:
type: string
description: Target block ID for loop completion (optional)
# Direct handle format (alternative)
loop-start-source:
type: string | string[]
description: Target block ID to execute inside the loop (direct format)
loop-end-source:
type: string | string[]
description: Target block ID for loop completion (direct format, optional)
error:
type: string
description: Target block ID for error handling
note: Use either the nested 'loop' format OR the direct 'loop-start-source' format, not both
```
## Configuración de conexión
Los bloques Loop utilizan un formato de conexión especial con una sección `loop`:
Los bloques de bucle admiten dos formatos de conexión:
### Formato de manejador directo (alternativo)
```yaml
connections:
loop:
start: <string> # Target block ID to execute inside the loop
end: <string> # Target block ID after loop completion (optional)
loop-start-source: <string> # Target block ID to execute inside the loop
loop-end-source: <string> # Target block ID after loop completion (optional)
error: <string> # Target block ID for error handling (optional)
```
Ambos formatos funcionan de manera idéntica. Usa el que prefieras.
## Configuración de bloques secundarios
Los bloques dentro de un bucle deben tener su `parentId` configurado con el ID del bloque loop:
Los bloques dentro de un bucle deben tener su `parentId` configurado con el ID del bloque de bucle. La propiedad `extent` se establece automáticamente como `'parent'` y no necesita ser especificada:
```yaml
loop-1:
@@ -261,6 +270,59 @@ process-task:
success: task-completed
```
### Ejemplo de formato de manejador directo
El mismo bucle puede escribirse usando el formato de manejador directo:
```yaml
my-loop:
type: loop
name: "Process Items"
inputs:
loopType: forEach
collection: <start.items>
connections:
loop-start-source: process-item # Direct handle format
loop-end-source: final-results # Direct handle format
error: handle-error
process-item:
type: agent
name: "Process Item"
parentId: my-loop
inputs:
systemPrompt: "Process this item"
userPrompt: <loop.currentItem>
model: gpt-4o
apiKey: '{{OPENAI_API_KEY}}'
```
### Ejemplo de bucle mínimo (usando valores predeterminados)
Puedes omitir completamente la sección `inputs`, y se aplicarán los valores predeterminados:
```yaml
simple-loop:
type: loop
name: "Simple Loop"
# No inputs section - defaults to loopType: 'for', iterations: 5
connections:
loop-start-source: process-step
loop-end-source: complete
process-step:
type: agent
name: "Process Step"
parentId: simple-loop
inputs:
systemPrompt: "Execute step"
userPrompt: "Step <loop.index>"
model: gpt-4o
apiKey: '{{OPENAI_API_KEY}}'
```
Este bucle ejecutará 5 iteraciones por defecto.
## Variables de bucle
Dentro de los bloques secundarios del bucle, estas variables especiales están disponibles:
@@ -286,10 +348,10 @@ final-processor:
## Mejores prácticas
- Establece límites razonables de iteración para evitar tiempos de ejecución prolongados
- Establece límites de iteración razonables para evitar tiempos de ejecución largos
- Usa forEach para procesar colecciones, bucles for para iteraciones fijas
- Considera usar maxConcurrency para operaciones limitadas por E/S
- Incluye manejo de errores para una ejecución robusta de bucles
- Incluye manejo de errores para una ejecución robusta del bucle
- Usa nombres descriptivos para los bloques secundarios del bucle
- Prueba primero con colecciones pequeñas
- Monitorea el tiempo de ejecución para colecciones grandes

View File

@@ -175,33 +175,7 @@ Utilisez un bloc `Memory` avec un `id` cohérent (par exemple, `chat`) pour cons
- Lisez l'historique de conversation pour le contexte
- Ajoutez la réponse de l'Agent après son exécution
```yaml
# 1) Add latest user message
- Memory (operation: add)
id: chat
role: user
content: {{input}}
# 2) Load conversation history
- Memory (operation: get)
id: chat
# 3) Run the agent with prior messages available
- Agent
System Prompt: ...
User Prompt: |
Use the conversation so far:
{{memory_get.memories}}
Current user message: {{input}}
# 4) Store the agent reply
- Memory (operation: add)
id: chat
role: assistant
content: {{agent.content}}
```
Consultez la référence du bloc `Memory` pour plus de détails : [/tools/memory](/tools/memory).
Voir la référence du bloc [`Memory`](/tools/memory) pour plus de détails.
## Entrées et sorties
@@ -209,51 +183,51 @@ Consultez la référence du bloc `Memory` pour plus de détails : [/tools/memory
<Tab>
<ul className="list-disc space-y-2 pl-6">
<li>
<strong>Prompt système</strong> : Instructions définissant le comportement et le rôle de l'agent
<strong>Prompt système</strong> : instructions définissant le comportement et le rôle de l'agent
</li>
<li>
<strong>Prompt utilisateur</strong> : Texte d'entrée ou données à traiter
<strong>Prompt utilisateur</strong> : texte ou données d'entrée à traiter
</li>
<li>
<strong>Modèle</strong> : Sélection du modèle d'IA (OpenAI, Anthropic, Google, etc.)
<strong>Modèle</strong> : sélection du modèle d'IA (OpenAI, Anthropic, Google, etc.)
</li>
<li>
<strong>Température</strong> : Contrôle de l'aléatoire des réponses (0-2)
<strong>Température</strong> : contrôle de l'aléatoire des réponses (0-2)
</li>
<li>
<strong>Outils</strong> : Tableau d'outils disponibles pour l'appel de fonctions
<strong>Outils</strong> : tableau des outils disponibles pour l'appel de fonctions
</li>
<li>
<strong>Format de réponse</strong> : Schéma JSON pour une sortie structurée
<strong>Format de réponse</strong> : schéma JSON pour une sortie structurée
</li>
</ul>
</Tab>
<Tab>
<ul className="list-disc space-y-2 pl-6">
<li>
<strong>agent.content</strong> : Texte de réponse de l'agent ou données structurées
<strong>agent.content</strong> : texte de réponse de l'agent ou données structurées
</li>
<li>
<strong>agent.tokens</strong> : Objet de statistiques d'utilisation des tokens
<strong>agent.tokens</strong> : objet de statistiques d'utilisation des tokens
</li>
<li>
<strong>agent.tool_calls</strong> : Tableau des détails d'exécution des outils
<strong>agent.tool_calls</strong> : tableau des détails d'exécution des outils
</li>
<li>
<strong>agent.cost</strong> : Coût estimé de l'appel API (si disponible)
<strong>agent.cost</strong> : coût estimé de l'appel API (si disponible)
</li>
</ul>
</Tab>
<Tab>
<ul className="list-disc space-y-2 pl-6">
<li>
<strong>Contenu</strong> : Sortie de réponse principale de l'agent
<strong>Contenu</strong> : sortie de réponse principale de l'agent
</li>
<li>
<strong>Métadonnées</strong> : Statistiques d'utilisation et détails d'exécution
<strong>Métadonnées</strong> : statistiques d'utilisation et détails d'exécution
</li>
<li>
<strong>Accès</strong> : Disponible dans les blocs après l'agent
<strong>Accès</strong> : disponible dans les blocs après l'agent
</li>
</ul>
</Tab>
@@ -268,7 +242,7 @@ Consultez la référence du bloc `Memory` pour plus de détails : [/tools/memory
<ol className="list-decimal pl-5 text-sm">
<li>L'utilisateur soumet un ticket de support via le bloc API</li>
<li>L'agent vérifie les commandes/abonnements dans Postgres et recherche des conseils dans la base de connaissances</li>
<li>Si une escalade est nécessaire, l'agent crée un ticket Linear avec le contexte pertinent</li>
<li>Si une escalade est nécessaire, l'agent crée un problème Linear avec le contexte pertinent</li>
<li>L'agent rédige une réponse par e-mail claire</li>
<li>Gmail envoie la réponse au client</li>
<li>La conversation est enregistrée dans Memory pour conserver l'historique des messages futurs</li>
@@ -278,7 +252,7 @@ Consultez la référence du bloc `Memory` pour plus de détails : [/tools/memory
### Analyse de contenu multi-modèles
<div className="mb-4 rounded-md border p-4">
<h4 className="font-medium">Scénario : analyser du contenu avec différents modèles d'IA</h4>
<h4 className="font-medium">Scénario : analyser le contenu avec différents modèles d'IA</h4>
<ol className="list-decimal pl-5 text-sm">
<li>Le bloc de fonction traite le document téléchargé</li>
<li>L'agent avec GPT-4o effectue une analyse technique</li>

View File

@@ -24,14 +24,6 @@ Le déclencheur d'API expose votre flux de travail en tant que point de terminai
Ajoutez un champ **Format d'entrée** pour chaque paramètre. Les clés de sortie d'exécution reflètent le schéma et sont également disponibles sous `<api.input>`.
```yaml
- type: string
name: userId
value: demo-user # optional manual test value
- type: number
name: maxTokens
```
Les exécutions manuelles dans l'éditeur utilisent la colonne `value` pour que vous puissiez tester sans envoyer de requête. Pendant l'exécution, le résolveur remplit à la fois `<api.userId>` et `<api.input.userId>`.
## Exemple de requête
@@ -44,17 +36,17 @@ curl -X POST \
-d '{"userId":"demo-user","maxTokens":1024}'
```
Les réponses réussies renvoient le résultat d'exécution sérialisé de l'Exécuteur. Les erreurs révèlent des problèmes de validation, d'authentification ou d'échec du flux de travail.
Les réponses réussies renvoient le résultat d'exécution sérialisé de l'exécuteur. Les erreurs révèlent des problèmes de validation, d'authentification ou d'échec du workflow.
## Référence de sortie
## Référence des sorties
| Référence | Description |
|-----------|-------------|
| `<api.field>` | Champ défini dans le Format d'entrée |
| `<api.input>` | Corps de la requête structuré complet |
| `<api.field>` | Champ défini dans le format d'entrée |
| `<api.input>` | Corps de requête structuré complet |
Si aucun Format d'entrée n'est défini, l'exécuteur expose le JSON brut uniquement à `<api.input>`.
Si aucun format d'entrée n'est défini, l'exécuteur expose uniquement le JSON brut à `<api.input>`.
<Callout type="warning">
Un flux de travail ne peut contenir qu'un seul déclencheur d'API. Publiez un nouveau déploiement après les modifications pour que le point de terminaison reste à jour.
Un workflow ne peut contenir qu'un seul déclencheur API. Publiez un nouveau déploiement après les modifications pour que le point de terminaison reste à jour.
</Callout>

View File

@@ -10,7 +10,6 @@ type: object
required:
- type
- name
- inputs
- connections
properties:
type:
@@ -22,21 +21,23 @@ properties:
description: Display name for this loop block
inputs:
type: object
required:
- loopType
description: Optional. If omitted, defaults will be applied.
properties:
loopType:
type: string
enum: [for, forEach]
description: Type of loop to execute
default: for
iterations:
type: number
description: Number of iterations (for 'for' loops)
default: 5
minimum: 1
maximum: 1000
collection:
type: string
description: Collection to iterate over (for 'forEach' loops)
default: ""
maxConcurrency:
type: number
description: Maximum concurrent executions
@@ -45,13 +46,10 @@ properties:
maximum: 10
connections:
type: object
required:
- loop
properties:
# Nested format (recommended)
loop:
type: object
required:
- start
properties:
start:
type: string
@@ -59,26 +57,37 @@ properties:
end:
type: string
description: Target block ID for loop completion (optional)
# Direct handle format (alternative)
loop-start-source:
type: string | string[]
description: Target block ID to execute inside the loop (direct format)
loop-end-source:
type: string | string[]
description: Target block ID for loop completion (direct format, optional)
error:
type: string
description: Target block ID for error handling
note: Use either the nested 'loop' format OR the direct 'loop-start-source' format, not both
```
## Configuration de connexion
Les blocs Loop utilisent un format de connexion spécial avec une section `loop` :
Les blocs de boucle prennent en charge deux formats de connexion :
### Format de gestion directe (alternative)
```yaml
connections:
loop:
start: <string> # Target block ID to execute inside the loop
end: <string> # Target block ID after loop completion (optional)
loop-start-source: <string> # Target block ID to execute inside the loop
loop-end-source: <string> # Target block ID after loop completion (optional)
error: <string> # Target block ID for error handling (optional)
```
Les deux formats fonctionnent de manière identique. Utilisez celui que vous préférez.
## Configuration des blocs enfants
Les blocs à l'intérieur d'une boucle doivent avoir leur `parentId` défini sur l'ID du bloc de boucle :
Les blocs à l'intérieur d'une boucle doivent avoir leur `parentId` défini sur l'ID du bloc de boucle. La propriété `extent` est automatiquement définie sur `'parent'` et n'a pas besoin d'être spécifiée :
```yaml
loop-1:
@@ -261,9 +270,62 @@ process-task:
success: task-completed
```
### Exemple de format de gestion directe
La même boucle peut être écrite en utilisant le format de gestion directe :
```yaml
my-loop:
type: loop
name: "Process Items"
inputs:
loopType: forEach
collection: <start.items>
connections:
loop-start-source: process-item # Direct handle format
loop-end-source: final-results # Direct handle format
error: handle-error
process-item:
type: agent
name: "Process Item"
parentId: my-loop
inputs:
systemPrompt: "Process this item"
userPrompt: <loop.currentItem>
model: gpt-4o
apiKey: '{{OPENAI_API_KEY}}'
```
### Exemple de boucle minimale (utilisant les valeurs par défaut)
Vous pouvez omettre entièrement la section `inputs`, et les valeurs par défaut seront appliquées :
```yaml
simple-loop:
type: loop
name: "Simple Loop"
# No inputs section - defaults to loopType: 'for', iterations: 5
connections:
loop-start-source: process-step
loop-end-source: complete
process-step:
type: agent
name: "Process Step"
parentId: simple-loop
inputs:
systemPrompt: "Execute step"
userPrompt: "Step <loop.index>"
model: gpt-4o
apiKey: '{{OPENAI_API_KEY}}'
```
Cette boucle exécutera 5 itérations par défaut.
## Variables de boucle
À l'intérieur des blocs enfants de la boucle, ces variables spéciales sont disponibles :
À l'intérieur des blocs enfants de boucle, ces variables spéciales sont disponibles :
```yaml
# Available in all child blocks of the loop

View File

@@ -172,33 +172,7 @@ When responding to questions about investments, include risk disclaimers.
- コンテキストのために会話履歴を読み取る
- エージェントの実行後に返信を追加
```yaml
# 1) Add latest user message
- Memory (operation: add)
id: chat
role: user
content: {{input}}
# 2) Load conversation history
- Memory (operation: get)
id: chat
# 3) Run the agent with prior messages available
- Agent
System Prompt: ...
User Prompt: |
Use the conversation so far:
{{memory_get.memories}}
Current user message: {{input}}
# 4) Store the agent reply
- Memory (operation: add)
id: chat
role: assistant
content: {{agent.content}}
```
詳細については`Memory`ブロックリファレンスを参照してください: [/tools/memory](/tools/memory)。
詳細については[`Memory`](/tools/memory)ブロックリファレンスを参照してください。
## 入力と出力
@@ -209,7 +183,7 @@ When responding to questions about investments, include risk disclaimers.
<strong>システムプロンプト</strong>: エージェントの動作と役割を定義する指示
</li>
<li>
<strong>ユーザープロンプト</strong>: 処理する入力テキストまたはデータ
<strong>ユーザープロンプト</strong>: 処理するテキストまたはデータ入力
</li>
<li>
<strong>モデル</strong>: AIモデルの選択(OpenAI、Anthropic、Google など)
@@ -221,14 +195,14 @@ When responding to questions about investments, include risk disclaimers.
<strong>ツール</strong>: 関数呼び出し用の利用可能なツールの配列
</li>
<li>
<strong>レスポンス形式</strong>: 構造化出力用のJSONスキーマ
<strong>レスポンス形式</strong>: 構造化された出力のためのJSONスキーマ
</li>
</ul>
</Tab>
<Tab>
<ul className="list-disc space-y-2 pl-6">
<li>
<strong>agent.content</strong>: エージェントのレスポンステキストまたは構造化データ
<strong>agent.content</strong>: エージェントの応答テキストまたは構造化データ
</li>
<li>
<strong>agent.tokens</strong>: トークン使用統計オブジェクト
@@ -237,14 +211,14 @@ When responding to questions about investments, include risk disclaimers.
<strong>agent.tool_calls</strong>: ツール実行詳細の配列
</li>
<li>
<strong>agent.cost</strong>: 推定APIコールコスト(利用可能な場合)
<strong>agent.cost</strong>: 推定APIコール費用(利用可能な場合)
</li>
</ul>
</Tab>
<Tab>
<ul className="list-disc space-y-2 pl-6">
<li>
<strong>コンテンツ</strong>: エージェントからの主要なレスポンス出力
<strong>コンテンツ</strong>: エージェントからの主要な応答出力
</li>
<li>
<strong>メタデータ</strong>: 使用統計と実行詳細
@@ -261,43 +235,43 @@ When responding to questions about investments, include risk disclaimers.
### カスタマーサポートの自動化
<div className="mb-4 rounded-md border p-4">
<h4 className="font-medium">シナリオ:データベースアクセスによる顧客問い合わせ対応</h4>
<h4 className="font-medium">シナリオ: データベースアクセスによる顧客問い合わせ対応</h4>
<ol className="list-decimal pl-5 text-sm">
<li>ユーザーがAPIブロックを通じてサポートチケットを送信</li>
<li>ユーザーがAPIブロック経由でサポートチケットを送信</li>
<li>エージェントがPostgresで注文/サブスクリプションを確認し、ナレッジベースでガイダンスを検索</li>
<li>エスカレーションが必要な場合、エージェントは関連コンテキストを含むLinearの課題を作成</li>
<li>エージェントが明確なメール返信を作成</li>
<li>エージェントが明確な返信メールを作成</li>
<li>Gmailが顧客に返信を送信</li>
<li>将来のメッセージのために会話履歴を維持するため、会話がメモリに保存される</li>
<li>将来のメッセージのために履歴を維持するため、会話がメモリに保存される</li>
</ol>
</div>
### マルチモデルコンテンツ分析
<div className="mb-4 rounded-md border p-4">
<h4 className="font-medium">シナリオ:異なるAIモデルでコンテンツを分析</h4>
<h4 className="font-medium">シナリオ: 異なるAIモデルでコンテンツを分析</h4>
<ol className="list-decimal pl-5 text-sm">
<li>ファンクションブロックがアップロードされた文書を処理</li>
<li>関数ブロックがアップロードされた文書を処理</li>
<li>GPT-4oを搭載したエージェントが技術的分析を実行</li>
<li>Claudeを搭載したエージェントが感情トーンを分析</li>
<li>ファンクションブロックが最終レポート用に結果を統合</li>
<li>Claudeを搭載したエージェントが感情トーンを分析</li>
<li>関数ブロックが最終レポート用に結果を統合</li>
</ol>
</div>
### ツール活用型リサーチアシスタント
### ツール搭載型リサーチアシスタント
<div className="mb-4 rounded-md border p-4">
<h4 className="font-medium">シナリオ:ウェブ検索と文書アクセス機能を持つリサーチアシスタント</h4>
<h4 className="font-medium">シナリオ:ウェブ検索とドキュメントアクセス機能を持つリサーチアシスタント</h4>
<ol className="list-decimal pl-5 text-sm">
<li>入力を通じてユーザークエリを受信</li>
<li>入力からユーザークエリを受信</li>
<li>エージェントがGoogle検索ツールを使用してウェブを検索</li>
<li>エージェントが社内文書用のNotionデータベースにアクセス</li>
<li>エージェントが包括的な調査レポートをまとめる</li>
<li>エージェントが包括的な調査レポートを作成</li>
</ol>
</div>
## ベストプラクティス
- **システムプロンプト具体的に**: エージェントの役割、トーン、制限を明確に定義してください。指示が具体的であればあるほど、エージェントは目的を果たすことができます。
- **適切な温度設定を選択**: 精度が重要な場合は低い温度設定(0〜0.3)を使用し、よりクリエイティブまたは多様な応答には温度を上げる(0.7〜2.0)
- **ツールを効果的に活用**: エージェントの目的を補完し、その能力を強化するツールを統合してください。エージェントに負担をかけないよう、提供するツールを選択的にしてください。重複の少いタスクには、最良の結果を得るために別のエージェントブロックを使用してください。
- **システムプロンプト具体的に指示する**エージェントの役割、トーン、制限を明確に定義してください。指示が具体的であればあるほど、エージェントは目的を果たすことができます。
- **適切な温度設定を選択する**精度が重要な場合は低い温度設定0〜0.3)を使用し、よりクリエイティブまたは多様な応答を得るには温度を上げる0.7〜2.0
- **ツールを効果的に活用する**エージェントの目的を補完し、その能力を強化するツールを統合してください。エージェントに負担をかけないよう、提供するツールを選択的にしてください。重複の少いタスクには、最良の結果を得るために別のエージェントブロックを使用してください。

View File

@@ -24,15 +24,7 @@ APIトリガーは、ワークフローを安全なHTTPエンドポイントと
各パラメータに**入力フォーマット**フィールドを追加します。実行時の出力キーはスキーマを反映し、`<api.input>`でも利用できます。
```yaml
- type: string
name: userId
value: demo-user # optional manual test value
- type: number
name: maxTokens
```
エディタでの手動実行では、リクエストを送信せずにテストできるように`value`列を使用します。実行中、リゾルバは`<api.userId>`と`<api.input.userId>`の両方に値を設定します。
エディタでの手動実行は `value` 列を使用するため、リクエストを送信せずにテストできます。実行中、リゾルバーは `<api.userId>` と `<api.input.userId>` の両方に値を設定します。
## リクエスト例
@@ -44,7 +36,7 @@ curl -X POST \
-d '{"userId":"demo-user","maxTokens":1024}'
```
成功したレスポンスはエグゼキュータからシリアル化された実行結果を返します。エラーは検証、認証、またはワークフローの失敗を表示します。
成功したレスポンスはエグゼキュータからシリアル化された実行結果を返します。エラーは検証、認証、またはワークフローの失敗を表示します。
## 出力リファレンス
@@ -53,8 +45,8 @@ curl -X POST \
| `<api.field>` | 入力フォーマットで定義されたフィールド |
| `<api.input>` | 構造化されたリクエスト本文全体 |
入力フォーマットが定義されていない場合、エグゼキュータは生のJSONを`<api.input>`のみ公開します。
入力フォーマットが定義されていない場合、エグゼキュータは生のJSONを `<api.input>` のみ公開します。
<Callout type="warning">
ワークフローには1つのAPIトリガーしか含めることができません。変更後は新しいデプロイメントを公開して、エンドポイントを最新の状態に保ってください。
ワークフローには1つのAPIトリガーのみ含めることができます。変更後は新しいデプロイメントを公開して、エンドポイントを最新の状態に保ってください。
</Callout>

View File

@@ -10,7 +10,6 @@ type: object
required:
- type
- name
- inputs
- connections
properties:
type:
@@ -22,21 +21,23 @@ properties:
description: Display name for this loop block
inputs:
type: object
required:
- loopType
description: Optional. If omitted, defaults will be applied.
properties:
loopType:
type: string
enum: [for, forEach]
description: Type of loop to execute
default: for
iterations:
type: number
description: Number of iterations (for 'for' loops)
default: 5
minimum: 1
maximum: 1000
collection:
type: string
description: Collection to iterate over (for 'forEach' loops)
default: ""
maxConcurrency:
type: number
description: Maximum concurrent executions
@@ -45,13 +46,10 @@ properties:
maximum: 10
connections:
type: object
required:
- loop
properties:
# Nested format (recommended)
loop:
type: object
required:
- start
properties:
start:
type: string
@@ -59,26 +57,37 @@ properties:
end:
type: string
description: Target block ID for loop completion (optional)
# Direct handle format (alternative)
loop-start-source:
type: string | string[]
description: Target block ID to execute inside the loop (direct format)
loop-end-source:
type: string | string[]
description: Target block ID for loop completion (direct format, optional)
error:
type: string
description: Target block ID for error handling
note: Use either the nested 'loop' format OR the direct 'loop-start-source' format, not both
```
## 接続設定
ループブロックは `loop` セクションを持つ特別な接続形式を使用します:
ループブロックは2つの接続形式をサポートしています:
### Direct handle format (alternative)
```yaml
connections:
loop-start-source: <string> # Target block ID to execute inside the loop
loop-end-source: <string> # Target block ID after loop completion (optional)
error: <string> # Target block ID for error handling (optional)
```
Both formats work identically; use whichever you prefer.
## Child Block Configuration
Blocks inside a loop must set their `parentId` to the loop block's ID. The `extent` property is set to `'parent'` automatically, so you don't need to specify it:
```yaml
loop-1:
@@ -261,6 +270,59 @@ process-task:
success: task-completed
```
### Direct handle format example
The same loop can also be written using the direct handle format:
```yaml
my-loop:
type: loop
name: "Process Items"
inputs:
loopType: forEach
collection: <start.items>
connections:
loop-start-source: process-item # Direct handle format
loop-end-source: final-results # Direct handle format
error: handle-error
process-item:
type: agent
name: "Process Item"
parentId: my-loop
inputs:
systemPrompt: "Process this item"
userPrompt: <loop.currentItem>
model: gpt-4o
apiKey: '{{OPENAI_API_KEY}}'
```
### Minimal loop example (using defaults)
You can omit the `inputs` section entirely and the default values will be applied:
```yaml
simple-loop:
type: loop
name: "Simple Loop"
# No inputs section - defaults to loopType: 'for', iterations: 5
connections:
loop-start-source: process-step
loop-end-source: complete
process-step:
type: agent
name: "Process Step"
parentId: simple-loop
inputs:
systemPrompt: "Execute step"
userPrompt: "Step <loop.index>"
model: gpt-4o
apiKey: '{{OPENAI_API_KEY}}'
```
This loop runs 5 iterations by default.
## Loop Variables
Inside a loop's child blocks, the following special variables are available:
@@ -274,7 +336,7 @@ process-task:
## Output References
After a loop completes, you can reference its aggregated results:
```yaml
# In blocks after the loop
@@ -286,10 +348,10 @@ final-processor:
## Best Practices
- Set reasonable iteration limits to avoid long execution times
- Use forEach for processing collections and for loops for fixed iteration counts
- Consider maxConcurrency for I/O-heavy operations
- Include error handling for robust loop execution
- Use descriptive names for loop child blocks
- Test with small collections first
- Monitor execution times for large collections

View File

@@ -172,44 +172,18 @@ The Agent block supports multiple LLM providers through a unified inference interface. Available models
- Read the conversation history for context
- Append the agent's reply after it runs
```yaml
# 1) Add latest user message
- Memory (operation: add)
id: chat
role: user
content: {{input}}
# 2) Load conversation history
- Memory (operation: get)
id: chat
# 3) Run the agent with prior messages available
- Agent
System Prompt: ...
User Prompt: |
Use the conversation so far:
{{memory_get.memories}}
Current user message: {{input}}
# 4) Store the agent reply
- Memory (operation: add)
id: chat
role: assistant
content: {{agent.content}}
```
For details, see the [`Memory`](/tools/memory) block reference.
## Inputs and Outputs
<Tabs items={['Configuration', 'Variables', 'Results']}>
<Tab>
<ul className="list-disc space-y-2 pl-6">
<li>
<strong>System Prompt</strong>: Instructions that define the agent's behavior and role
</li>
<li>
<strong>User Prompt</strong>: The input text or data to process
</li>
<li>
<strong>Model</strong>: AI model selection (OpenAI, Anthropic, Google, etc.)
@@ -218,7 +192,7 @@ The Agent block supports multiple LLM providers through a unified inference interface. Available models
<strong>Temperature</strong>: Response randomness control (0-2)
</li>
<li>
<strong>Tools</strong>: Array of available tools for function calling
</li>
<li>
<strong>Response Format</strong>: JSON Schema for structured output
@@ -234,10 +208,10 @@ The Agent block supports multiple LLM providers through a unified inference interface. Available models
<strong>agent.tokens</strong>: Token usage statistics object
</li>
<li>
<strong>agent.tool_calls</strong>: Array of tool execution details
</li>
<li>
<strong>agent.cost</strong>: Estimated cost of the API call (if available)
</li>
</ul>
</Tab>
@@ -247,7 +221,7 @@ The Agent block supports multiple LLM providers through a unified inference interface. Available models
<strong>Content</strong>: The agent's primary response output
</li>
<li>
<strong>Metadata</strong>: Usage statistics and execution details
</li>
<li>
<strong>Access</strong>: Available in blocks after the agent
@@ -261,9 +235,9 @@ The Agent block supports multiple LLM providers through a unified inference interface. Available models
### Customer Support Automation
<div className="mb-4 rounded-md border p-4">
<h4 className="font-medium">场景:通过数据库访问处理客户询</h4>
<h4 className="font-medium">场景:通过数据库访问处理客户询</h4>
<ol className="list-decimal pl-5 text-sm">
<li>A user submits a support ticket through the API block</li>
<li>The agent checks orders/subscriptions in Postgres and searches the knowledge base for guidance</li>
<li>If escalation is needed, the agent creates a Linear issue with the relevant context</li>
<li>The agent drafts a clear email reply</li>
@@ -277,10 +251,10 @@ The Agent block supports multiple LLM providers through a unified inference interface. Available models
<div className="mb-4 rounded-md border p-4">
<h4 className="font-medium">Scenario: Analyze content with different AI models</h4>
<ol className="list-decimal pl-5 text-sm">
<li>A Function block processes the uploaded document</li>
<li>An agent using GPT-4o performs technical analysis</li>
<li>An agent using Claude analyzes sentiment and tone</li>
<li>A Function block combines the results into a final report</li>
</ol>
</div>
@@ -298,6 +272,6 @@ The Agent block supports multiple LLM providers through a unified inference interface. Available models
## Best Practices
- **Be specific in system prompts**: Clearly define the agent's role, tone, and constraints. The more specific your instructions, the better the agent can fulfill its intended purpose.
- **Choose an appropriate temperature**: Use a lower temperature (0-0.3) when accuracy matters, and raise it (0.7-2.0) when you want more creative or varied responses.
- **Use tools effectively**: Integrate tools that complement the agent's purpose and enhance its capabilities. Provide tools selectively to avoid overwhelming the agent. For tasks with little overlap, use a separate Agent block for the best results.

View File

@@ -24,15 +24,7 @@ The API trigger exposes your workflow as a secure HTTP endpoint. Send JSON
Add an **Input Format** field for each parameter. At runtime, output keys mirror the schema and are also available under `<api.input>`.
```yaml
- type: string
name: userId
value: demo-user # optional manual test value
- type: number
name: maxTokens
```
Manual runs in the editor use the `value` column, so you can test without sending a request. During execution, the resolver populates `<api.userId>` and `<api.input.userId>`.
## Example Request
@@ -44,17 +36,17 @@ curl -X POST \
-d '{"userId":"demo-user","maxTokens":1024}'
```
A successful response returns the serialized execution result from the executor. Errors surface validation, authentication, or workflow failure details.
## Output Reference
| Reference | Description |
|-----------|-------------|
| `<api.field>` | Field defined in the input format |
| `<api.input>` | The entire structured request body |
If no input format is defined, the executor exposes the raw JSON only at `<api.input>`.
<Callout type="warning">
A workflow can contain only one API trigger. Publish a new deployment after changes to keep the endpoint up to date.
</Callout>

View File

@@ -10,7 +10,6 @@ type: object
required:
- type
- name
- connections
properties:
type:
@@ -22,21 +21,23 @@ properties:
description: Display name for this loop block
inputs:
type: object
description: Optional. If omitted, defaults will be applied.
properties:
loopType:
type: string
enum: [for, forEach]
description: Type of loop to execute
default: for
iterations:
type: number
description: Number of iterations (for 'for' loops)
default: 5
minimum: 1
maximum: 1000
collection:
type: string
description: Collection to iterate over (for 'forEach' loops)
default: ""
maxConcurrency:
type: number
description: Maximum concurrent executions
@@ -45,13 +46,10 @@ properties:
maximum: 10
connections:
type: object
properties:
# Nested format (recommended)
loop:
type: object
properties:
start:
type: string
@@ -59,26 +57,37 @@ properties:
end:
type: string
description: Target block ID for loop completion (optional)
# Direct handle format (alternative)
loop-start-source:
type: string | string[]
description: Target block ID to execute inside the loop (direct format)
loop-end-source:
type: string | string[]
description: Target block ID for loop completion (direct format, optional)
error:
type: string
description: Target block ID for error handling
note: Use either the nested 'loop' format OR the direct 'loop-start-source' format, not both
```
## Connection Configuration
Loop blocks support two connection formats:
### Direct handle format (alternative)
```yaml
connections:
loop-start-source: <string> # Target block ID to execute inside the loop
loop-end-source: <string> # Target block ID after loop completion (optional)
error: <string> # Target block ID for error handling (optional)
```
Both formats are functionally identical; use whichever you prefer.
## Child Block Configuration
Blocks inside a loop must set their `parentId` to the loop block's ID. The `extent` property is set to `'parent'` automatically, so there's no need to specify it manually:
```yaml
loop-1:
@@ -261,9 +270,62 @@ process-task:
success: task-completed
```
### Direct handle format example
The same loop can be written using the direct handle format:
```yaml
my-loop:
type: loop
name: "Process Items"
inputs:
loopType: forEach
collection: <start.items>
connections:
loop-start-source: process-item # Direct handle format
loop-end-source: final-results # Direct handle format
error: handle-error
process-item:
type: agent
name: "Process Item"
parentId: my-loop
inputs:
systemPrompt: "Process this item"
userPrompt: <loop.currentItem>
model: gpt-4o
apiKey: '{{OPENAI_API_KEY}}'
```
### Minimal loop example (using defaults)
You can omit the `inputs` section entirely and the default values will be applied:
```yaml
simple-loop:
type: loop
name: "Simple Loop"
# No inputs section - defaults to loopType: 'for', iterations: 5
connections:
loop-start-source: process-step
loop-end-source: complete
process-step:
type: agent
name: "Process Step"
parentId: simple-loop
inputs:
systemPrompt: "Execute step"
userPrompt: "Step <loop.index>"
model: gpt-4o
apiKey: '{{OPENAI_API_KEY}}'
```
This loop runs 5 iterations by default.
## Loop Variables
Within loop child blocks, the following special variables are available:
```yaml
# Available in all child blocks of the loop
@@ -274,7 +336,7 @@ process-task:
## Output References
After the loop completes, you can reference its aggregated results:
```yaml
# In blocks after the loop
@@ -286,10 +348,10 @@ final-processor:
## Best Practices
- Set reasonable iteration limits to avoid long execution times
- Use forEach for processing collections and for loops for fixed iteration counts
- Consider maxConcurrency for I/O-intensive operations
- Include error handling for robust loop execution
- Use descriptive names for loop child blocks
- Test with small collections first
- Monitor execution times for large collections

View File

@@ -3378,19 +3378,18 @@ checksums:
content/38: 1b693e51b5b8e31dd088c602663daab4
content/39: 5003cde407a705d39b969eff5bdab18a
content/40: 624199f0ed2378588024cfe6055d6b6b
content/41: a2c636da376e80aa3427ce26b2dce0fd
content/42: d72903dda50a36b12ec050a06ef23a1b
content/43: 19984dc55d279f1ae3226edf4b62aaa3
content/44: 9c2f91f89a914bf4661512275e461104
content/45: adc97756961688b2f4cc69b773c961c9
content/46: 7b0c309be79b5e1ab30e58a98ea0a778
content/47: ac70442527be4edcc6b0936e2e5dc8c1
content/48: a298d382850ddaa0b53e19975b9d12d2
content/49: 27535bb1de08548a7389708045c10714
content/50: 6f366fdb6389a03bfc4d83c12fa4099d
content/51: b2a4a0c279f47d58a2456f25a1e1c6f9
content/52: 17af9269613458de7f8e36a81b2a6d30
fa2a1ea3b95cd7608e0a7d78834b7d49:
meta/title: d8df37d5e95512e955c43661de8a40d0
meta/description: d25527b81409cb3d42d9841e8ed318d4
@@ -3567,30 +3566,39 @@ checksums:
meta/title: 27e1d8e6df8b8d3ee07124342bcc5599
meta/description: 5a19804a907fe2a0c7ddc8a933e7e147
content/0: 07f0ef1d9ef5ee2993ab113d95797f37
content/1: 200b847a5b848c11a507cecfcb381e02
content/2: dd7f8a45778d4dddd9bda78c19f046a4
content/3: b1870986cdef32b6cf3c79a4cd56a8b0
content/4: 5e7c060bf001ead8fb4005385509e857
content/5: 1de0d605f73842c3464e7fb2e09fb92c
content/6: 1a8e292ce7cc3adb2fe38cf2f5668b43
content/7: bacf5914637cc0c1e00dfac72f60cf1f
content/8: 336fdb536f9f5654d4d69a5adb1cf071
content/9: d7c2f6c70070e594bffd0af3d20bbccb
content/10: 33b9b1e9744318597da4b925b0995be2
content/11: caa663af7342e02001aca78c23695b22
content/12: 1fa44e09185c753fec303e2c73e44eaf
content/13: d794cf2ea75f4aa8e73069f41fe8bc45
content/14: f9553f38263ad53c261995083622bdde
content/15: e4625b8a75814b2fcfe3c643a47e22cc
content/16: 20ee0fd34f3baab1099a2f8fb06b13cf
content/17: 73a9d04015d0016a994cf1e8fe8d5c12
content/18: 9a71a905db9dd5d43bdd769f006caf14
content/19: b4017a890213e9ac0afd6b2cfc1bdefc
content/20: 479fd4d587cd0a1b8d27dd440e019215
content/21: db263bbe8b5984777eb738e9e4c3ec71
content/22: 1128c613d71aad35f668367ba2065a01
content/23: 12be239d6ea36a71b022996f56d66901
content/24: aa2240ef8ced8d9b67f7ab50665caae5
content/25: 5cce1d6a21fae7252b8670a47a2fae9e
content/26: 18c31983f32539861fd5b4e8dd943169
content/27: 55300ae3e3c3213c4ad82c1cf21c89b2
content/28: 8a8aa301371bd07b15c6f568a8e7826f
content/29: a98cce6db23d9a86ac51179100f32529
content/30: b5a605662dbb6fc20ad37fdb436f0581
content/31: 2b204164f64dcf034baa6e5367679735
content/32: b2a4a0c279f47d58a2456f25a1e1c6f9
content/33: 15ebde5d554a3ec6000f71cf32b16859
132869ed8674995bace940b1cefc4241:
meta/title: a753d6bd11bc5876c739b95c6d174914
meta/description: 71efdaceb123c4d6b6ee19c085cd9f0f
@@ -3958,12 +3966,11 @@ checksums:
content/3: b3c762557a1a308f3531ef1f19701807
content/4: bf29da79344f37eeadd4c176aa19b8ff
content/5: ae52879ebefa5664a6b7bf8ce5dd57ab
content/6: ce487c9bc7a730e7d9da4a87b8eaa0a6
content/7: e73f4b831f5b77c71d7d86c83abcbf11
content/8: 07e064793f3e0bbcb02c4dc6083b6daa
content/9: a702b191c3f94458bee880d33853e0cb
content/10: ce110ab5da3ff96f8cbf96ce3376fc51
content/11: 83f9b3ab46b0501c8eb3989bec3f4f1b
content/12: e00be80effb71b0acb014f9aa53dfbe1
content/13: 847a381137856ded9faa5994fbc489fb

View File

@@ -15,9 +15,9 @@
"@vercel/analytics": "1.5.0",
"@vercel/og": "^0.6.5",
"clsx": "^2.1.1",
"fumadocs-core": "^15.7.5",
"fumadocs-mdx": "^11.5.6",
"fumadocs-ui": "^15.7.5",
"fumadocs-core": "15.8.2",
"fumadocs-mdx": "11.10.1",
"fumadocs-ui": "15.8.2",
"lucide-react": "^0.511.0",
"next": "15.4.1",
"next-themes": "^0.4.6",

View File

@@ -233,7 +233,7 @@ describe('Copilot Chat API Route', () => {
model: 'claude-4.5-sonnet',
mode: 'agent',
messageId: 'mock-uuid-1234-5678',
version: '1.0.1',
chatId: 'chat-123',
}),
})
@@ -303,7 +303,7 @@ describe('Copilot Chat API Route', () => {
model: 'claude-4.5-sonnet',
mode: 'agent',
messageId: 'mock-uuid-1234-5678',
version: '1.0.1',
chatId: 'chat-123',
}),
})
@@ -361,7 +361,7 @@ describe('Copilot Chat API Route', () => {
model: 'claude-4.5-sonnet',
mode: 'agent',
messageId: 'mock-uuid-1234-5678',
version: '1.0.1',
chatId: 'chat-123',
}),
})
@@ -453,7 +453,7 @@ describe('Copilot Chat API Route', () => {
model: 'claude-4.5-sonnet',
mode: 'ask',
messageId: 'mock-uuid-1234-5678',
version: '1.0.1',
chatId: 'chat-123',
}),
})

View File

@@ -237,7 +237,6 @@ describe('Copilot Checkpoints Revert API Route', () => {
parallels: {},
isDeployed: true,
deploymentStatuses: { production: 'deployed' },
hasActiveWebhook: false,
},
}
@@ -287,7 +286,6 @@ describe('Copilot Checkpoints Revert API Route', () => {
parallels: {},
isDeployed: true,
deploymentStatuses: { production: 'deployed' },
hasActiveWebhook: false,
lastSaved: 1640995200000,
},
},
@@ -309,7 +307,6 @@ describe('Copilot Checkpoints Revert API Route', () => {
parallels: {},
isDeployed: true,
deploymentStatuses: { production: 'deployed' },
hasActiveWebhook: false,
lastSaved: 1640995200000,
}),
}
@@ -445,7 +442,6 @@ describe('Copilot Checkpoints Revert API Route', () => {
parallels: {},
isDeployed: false,
deploymentStatuses: {},
hasActiveWebhook: false,
lastSaved: 1640995200000,
})
})
@@ -722,7 +718,6 @@ describe('Copilot Checkpoints Revert API Route', () => {
production: 'deployed',
staging: 'pending',
},
hasActiveWebhook: true,
deployedAt: '2024-01-01T10:00:00.000Z',
},
}
@@ -769,7 +764,6 @@ describe('Copilot Checkpoints Revert API Route', () => {
production: 'deployed',
staging: 'pending',
},
hasActiveWebhook: true,
deployedAt: '2024-01-01T10:00:00.000Z',
lastSaved: 1640995200000,
})

View File

@@ -73,7 +73,6 @@ export async function POST(request: NextRequest) {
parallels: checkpointState?.parallels || {},
isDeployed: checkpointState?.isDeployed || false,
deploymentStatuses: checkpointState?.deploymentStatuses || {},
hasActiveWebhook: checkpointState?.hasActiveWebhook || false,
lastSaved: Date.now(),
// Only include deployedAt if it's a valid date string that can be converted
...(checkpointState?.deployedAt &&

View File

@@ -0,0 +1,59 @@
import { type NextRequest, NextResponse } from 'next/server'
import { env } from '@/lib/env'
import { createLogger } from '@/lib/logs/console/logger'
const logger = createLogger('CopilotTrainingExamplesAPI')
export const runtime = 'nodejs'
export const dynamic = 'force-dynamic'
export async function POST(request: NextRequest) {
const baseUrl = env.AGENT_INDEXER_URL
if (!baseUrl) {
logger.error('Missing AGENT_INDEXER_URL environment variable')
return NextResponse.json({ error: 'Missing AGENT_INDEXER_URL env' }, { status: 500 })
}
const apiKey = env.AGENT_INDEXER_API_KEY
if (!apiKey) {
logger.error('Missing AGENT_INDEXER_API_KEY environment variable')
return NextResponse.json({ error: 'Missing AGENT_INDEXER_API_KEY env' }, { status: 500 })
}
try {
const body = await request.json()
logger.info('Sending workflow example to agent indexer', {
hasJsonField: typeof body?.json === 'string',
})
const upstream = await fetch(`${baseUrl}/examples/add`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-api-key': apiKey,
},
body: JSON.stringify(body),
})
if (!upstream.ok) {
const errorText = await upstream.text()
logger.error('Agent indexer rejected the example', {
status: upstream.status,
error: errorText,
})
return NextResponse.json({ error: errorText }, { status: upstream.status })
}
const data = await upstream.json()
logger.info('Successfully sent workflow example to agent indexer')
return NextResponse.json(data, {
headers: { 'content-type': 'application/json' },
})
} catch (err) {
const errorMessage = err instanceof Error ? err.message : 'Failed to add example'
logger.error('Failed to send workflow example', { error: err })
return NextResponse.json({ error: errorMessage }, { status: 502 })
}
}
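A minimal client-side sketch of how a caller might hit this proxy. The `/api/copilot/training-examples` path and the `submitTrainingExample` helper are assumptions (the diff doesn't show the route's file path), while the `{ json }` body shape matches what the handler inspects before forwarding:
```ts
// Hypothetical caller for the proxy above; the route path is an assumption.
async function submitTrainingExample(workflowJson: string): Promise<void> {
  const res = await fetch('/api/copilot/training-examples', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // The handler checks that body.json is a string before forwarding to the indexer
    body: JSON.stringify({ json: workflowJson }),
  })
  if (!res.ok) {
    const { error } = await res.json().catch(() => ({ error: res.statusText }))
    throw new Error(`Failed to add training example: ${error}`)
  }
}
```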

View File

@@ -0,0 +1,131 @@
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { auth } from '@/lib/auth'
import { createLogger } from '@/lib/logs/console/logger'
import { db } from '@/../../packages/db'
import { settings } from '@/../../packages/db/schema'
const logger = createLogger('CopilotUserModelsAPI')
const DEFAULT_ENABLED_MODELS: Record<string, boolean> = {
'gpt-4o': false,
'gpt-4.1': false,
'gpt-5-fast': false,
'gpt-5': true,
'gpt-5-medium': true,
'gpt-5-high': false,
o3: true,
'claude-4-sonnet': true,
'claude-4.5-sonnet': true,
'claude-4.1-opus': true,
}
// GET - Fetch user's enabled models
export async function GET(request: NextRequest) {
try {
const session = await auth.api.getSession({ headers: request.headers })
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const userId = session.user.id
// Try to fetch existing settings record
const [userSettings] = await db
.select()
.from(settings)
.where(eq(settings.userId, userId))
.limit(1)
if (userSettings) {
const userModelsMap = (userSettings.copilotEnabledModels as Record<string, boolean>) || {}
// Merge: start with defaults, then override with user's existing preferences
const mergedModels = { ...DEFAULT_ENABLED_MODELS }
for (const [modelId, enabled] of Object.entries(userModelsMap)) {
mergedModels[modelId] = enabled
}
// If we added any new models, update the database
const hasNewModels = Object.keys(DEFAULT_ENABLED_MODELS).some(
(key) => !(key in userModelsMap)
)
if (hasNewModels) {
await db
.update(settings)
.set({
copilotEnabledModels: mergedModels,
updatedAt: new Date(),
})
.where(eq(settings.userId, userId))
}
return NextResponse.json({
enabledModels: mergedModels,
})
}
// If no settings record exists, create one with empty object (client will use defaults)
const [created] = await db
.insert(settings)
.values({
id: userId,
userId,
copilotEnabledModels: {},
})
.returning()
return NextResponse.json({
enabledModels: DEFAULT_ENABLED_MODELS,
})
} catch (error) {
logger.error('Failed to fetch user models', { error })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
// PUT - Update user's enabled models
export async function PUT(request: NextRequest) {
try {
const session = await auth.api.getSession({ headers: request.headers })
if (!session?.user?.id) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const userId = session.user.id
const body = await request.json()
if (!body.enabledModels || typeof body.enabledModels !== 'object') {
return NextResponse.json({ error: 'enabledModels must be an object' }, { status: 400 })
}
// Check if settings record exists
const [existing] = await db.select().from(settings).where(eq(settings.userId, userId)).limit(1)
if (existing) {
// Update existing record
await db
.update(settings)
.set({
copilotEnabledModels: body.enabledModels,
updatedAt: new Date(),
})
.where(eq(settings.userId, userId))
} else {
// Create new settings record
await db.insert(settings).values({
id: userId,
userId,
copilotEnabledModels: body.enabledModels,
})
}
return NextResponse.json({ success: true })
} catch (error) {
logger.error('Failed to update user models', { error })
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
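For illustration, a hedged sketch of a client-side toggle built on the PUT handler above; the `/api/copilot/user-models` path matches the GET call made from the copilot input component later in this diff, and the body shape mirrors the `enabledModels` object the handler validates:
```ts
// Flip one model's flag and persist the full map, which the PUT handler expects.
async function setModelEnabled(
  current: Record<string, boolean>,
  modelId: string,
  enabled: boolean
): Promise<Record<string, boolean>> {
  const enabledModels = { ...current, [modelId]: enabled }
  const res = await fetch('/api/copilot/user-models', {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ enabledModels }), // non-object bodies are rejected with 400
  })
  if (!res.ok) throw new Error('Failed to update enabled models')
  return enabledModels
}
```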

View File

@@ -265,9 +265,8 @@ async function handleS3PresignedUrl(
)
}
// For chat images and profile pictures, use direct URLs since they need to be accessible by external services
const finalPath =
uploadType === 'chat' || uploadType === 'profile-pictures'
? `https://${config.bucket}.s3.${config.region}.amazonaws.com/${uniqueKey}`
: `/api/files/serve/s3/${encodeURIComponent(uniqueKey)}`

View File

@@ -97,7 +97,13 @@ export async function GET(request: NextRequest) {
const baseQuery = db
.select(selectColumns)
.from(workflowExecutionLogs)
.innerJoin(
workflow,
and(
eq(workflowExecutionLogs.workflowId, workflow.id),
eq(workflow.workspaceId, params.workspaceId)
)
)
.innerJoin(
permissions,
and(
@@ -107,8 +113,8 @@ export async function GET(request: NextRequest) {
)
)
// Build additional conditions for the query
let conditions: SQL | undefined
// Filter by level
if (params.level && params.level !== 'all') {
@@ -180,7 +186,13 @@ export async function GET(request: NextRequest) {
const countQuery = db
.select({ count: sql<number>`count(*)` })
.from(workflowExecutionLogs)
.innerJoin(
workflow,
and(
eq(workflowExecutionLogs.workflowId, workflow.id),
eq(workflow.workspaceId, params.workspaceId)
)
)
.innerJoin(
permissions,
and(

View File

@@ -76,6 +76,8 @@ export async function GET() {
telemetryEnabled: userSettings.telemetryEnabled,
emailPreferences: userSettings.emailPreferences ?? {},
billingUsageNotificationsEnabled: userSettings.billingUsageNotificationsEnabled ?? true,
showFloatingControls: userSettings.showFloatingControls ?? true,
showTrainingControls: userSettings.showTrainingControls ?? false,
},
},
{ status: 200 }

View File

@@ -124,7 +124,13 @@ export async function GET(request: NextRequest) {
workflowDescription: workflow.description,
})
.from(workflowExecutionLogs)
.innerJoin(
workflow,
and(
eq(workflowExecutionLogs.workflowId, workflow.id),
eq(workflow.workspaceId, params.workspaceId)
)
)
.innerJoin(
permissions,
and(

View File

@@ -133,6 +133,7 @@ describe('Webhook Trigger API Route', () => {
parallels: {},
isFromNormalizedTables: true,
}),
blockExistsInDeployment: vi.fn().mockResolvedValue(true),
}))
hasProcessedMessageMock.mockResolvedValue(false)

View File

@@ -10,6 +10,7 @@ import {
queueWebhookExecution,
verifyProviderAuth,
} from '@/lib/webhooks/processor'
import { blockExistsInDeployment } from '@/lib/workflows/db-helpers'
const logger = createLogger('WebhookTriggerAPI')
@@ -62,6 +63,16 @@ export async function POST(
return usageLimitError
}
if (foundWebhook.blockId) {
const blockExists = await blockExistsInDeployment(foundWorkflow.id, foundWebhook.blockId)
if (!blockExists) {
logger.warn(
`[${requestId}] Trigger block ${foundWebhook.blockId} not found in deployment for workflow ${foundWorkflow.id}`
)
return new NextResponse('Trigger block not deployed', { status: 404 })
}
}
return queueWebhookExecution(foundWebhook, foundWorkflow, body, request, {
requestId,
path,

View File

@@ -8,7 +8,10 @@ import { createLogger } from '@/lib/logs/console/logger'
import { getUserEntityPermissions } from '@/lib/permissions/utils'
import { generateRequestId } from '@/lib/utils'
import { applyAutoLayout } from '@/lib/workflows/autolayout'
import {
loadWorkflowFromNormalizedTables,
type NormalizedWorkflowData,
} from '@/lib/workflows/db-helpers'
export const dynamic = 'force-dynamic'
@@ -36,10 +39,14 @@ const AutoLayoutRequestSchema = z.object({
})
.optional()
.default({}),
// Optional: if provided, use these blocks instead of loading from DB
// This allows using blocks with live measurements from the UI
blocks: z.record(z.any()).optional(),
edges: z.array(z.any()).optional(),
loops: z.record(z.any()).optional(),
parallels: z.record(z.any()).optional(),
})
type AutoLayoutRequest = z.infer<typeof AutoLayoutRequestSchema>
/**
* POST /api/workflows/[id]/autolayout
* Apply autolayout to an existing workflow
@@ -108,8 +115,23 @@ export async function POST(request: NextRequest, { params }: { params: Promise<{
return NextResponse.json({ error: 'Access denied' }, { status: 403 })
}
// Use provided blocks/edges if available (with live measurements from UI),
// otherwise load from database
let currentWorkflowData: NormalizedWorkflowData | null
if (layoutOptions.blocks && layoutOptions.edges) {
logger.info(`[${requestId}] Using provided blocks with live measurements`)
currentWorkflowData = {
blocks: layoutOptions.blocks,
edges: layoutOptions.edges,
loops: layoutOptions.loops || {},
parallels: layoutOptions.parallels || {},
isFromNormalizedTables: false,
}
} else {
logger.info(`[${requestId}] Loading blocks from database`)
currentWorkflowData = await loadWorkflowFromNormalizedTables(workflowId)
}
if (!currentWorkflowData) {
logger.error(`[${requestId}] Could not load workflow ${workflowId} for autolayout`)
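A sketch of how the UI side might exercise this new path, assuming hypothetical `measuredBlocks`/`measuredEdges` values captured from the canvas; the endpoint path comes from the route comment above, and the payload fields follow the request schema:
```ts
// Hypothetical caller: supplying blocks/edges makes the route skip the DB load above.
async function requestAutoLayout(
  workflowId: string,
  measuredBlocks: Record<string, unknown>,
  measuredEdges: unknown[]
): Promise<Response> {
  return fetch(`/api/workflows/${workflowId}/autolayout`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ blocks: measuredBlocks, edges: measuredEdges }),
  })
}
```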

View File

@@ -76,7 +76,6 @@ export async function POST(
isDeployed: true,
deployedAt: new Date(),
deploymentStatuses: deployedState.deploymentStatuses || {},
hasActiveWebhook: deployedState.hasActiveWebhook || false,
})
if (!saveResult.success) {

View File

@@ -133,7 +133,6 @@ export async function GET(request: NextRequest, { params }: { params: Promise<{
state: {
// Default values for expected properties
deploymentStatuses: {},
hasActiveWebhook: false,
// Data from normalized tables
blocks: normalizedData.blocks,
edges: normalizedData.edges,

View File

@@ -7,6 +7,7 @@ import { getSession } from '@/lib/auth'
import { createLogger } from '@/lib/logs/console/logger'
import { getUserEntityPermissions } from '@/lib/permissions/utils'
import { generateRequestId } from '@/lib/utils'
import { extractAndPersistCustomTools } from '@/lib/workflows/custom-tools-persistence'
import { saveWorkflowToNormalizedTables } from '@/lib/workflows/db-helpers'
import { sanitizeAgentToolsInBlocks } from '@/lib/workflows/validation'
@@ -89,13 +90,6 @@ const ParallelSchema = z.object({
parallelType: z.enum(['count', 'collection']).optional(),
})
const DeploymentStatusSchema = z.object({
id: z.string(),
status: z.enum(['deploying', 'deployed', 'failed', 'stopping', 'stopped']),
deployedAt: z.date().optional(),
error: z.string().optional(),
})
const WorkflowStateSchema = z.object({
blocks: z.record(BlockStateSchema),
edges: z.array(EdgeSchema),
@@ -103,9 +97,7 @@ const WorkflowStateSchema = z.object({
parallels: z.record(ParallelSchema).optional(),
lastSaved: z.number().optional(),
isDeployed: z.boolean().optional(),
deploymentStatuses: z.record(DeploymentStatusSchema).optional(),
hasActiveWebhook: z.boolean().optional(),
deployedAt: z.coerce.date().optional(),
})
/**
@@ -204,8 +196,6 @@ export async function PUT(request: NextRequest, { params }: { params: Promise<{
lastSaved: state.lastSaved || Date.now(),
isDeployed: state.isDeployed || false,
deployedAt: state.deployedAt,
deploymentStatuses: state.deploymentStatuses || {},
hasActiveWebhook: state.hasActiveWebhook || false,
}
const saveResult = await saveWorkflowToNormalizedTables(workflowId, workflowState as any)
@@ -218,6 +208,21 @@ export async function PUT(request: NextRequest, { params }: { params: Promise<{
)
}
// Extract and persist custom tools to database
try {
const { saved, errors } = await extractAndPersistCustomTools(workflowState, userId)
if (saved > 0) {
logger.info(`[${requestId}] Persisted ${saved} custom tool(s) to database`, { workflowId })
}
if (errors.length > 0) {
logger.warn(`[${requestId}] Some custom tools failed to persist`, { errors, workflowId })
}
} catch (error) {
logger.error(`[${requestId}] Failed to persist custom tools`, { error, workflowId })
}
// Update workflow's lastSynced timestamp
await db
.update(workflow)

View File

@@ -89,7 +89,6 @@ export async function GET(request: NextRequest) {
// Use normalized table data - construct state from normalized tables
workflowState = {
deploymentStatuses: {},
hasActiveWebhook: false,
blocks: normalizedData.blocks,
edges: normalizedData.edges,
loops: normalizedData.loops,

View File

@@ -312,7 +312,7 @@ export function EditChunkModal({
<Button
onClick={handleSaveContent}
disabled={!isFormValid || isSaving || !hasUnsavedChanges || isNavigating}
className='bg-[var(--brand-primary-hex)] font-[480] text-white shadow-[0_0_0_0_var(--brand-primary-hex)] transition-all duration-200 hover:bg-[var(--brand-primary-hover-hex)] hover:shadow-[0_0_0_4px_rgba(127,47,255,0.15)]'
>
{isSaving ? (
<>

View File

@@ -64,7 +64,7 @@ export function UploadModal({
return `File "${file.name}" is too large. Maximum size is 100MB.`
}
if (!ACCEPTED_FILE_TYPES.includes(file.type)) {
return `File "${file.name}" has an unsupported format. Please use PDF, DOC, DOCX, TXT, CSV, XLS, XLSX, MD, PPT, PPTX, or HTML files.`
return `File "${file.name}" has an unsupported format. Please use PDF, DOC, DOCX, TXT, CSV, XLS, XLSX, MD, PPT, PPTX, HTML, JSON, YAML, or YML files.`
}
return null
}
@@ -193,8 +193,8 @@ export function UploadModal({
{isDragging ? 'Drop files here!' : 'Drop files here or click to browse'}
</p>
<p className='text-muted-foreground text-xs'>
Supports PDF, DOC, DOCX, TXT, CSV, XLS, XLSX, MD, PPT, PPTX, HTML, JSON, YAML,
YML (max 100MB each)
</p>
</div>
</div>

View File

@@ -158,7 +158,7 @@ export function CreateModal({ open, onOpenChange, onKnowledgeBaseCreated }: Crea
// Check file type
if (!ACCEPTED_FILE_TYPES.includes(file.type)) {
setFileError(
`File ${file.name} has an unsupported format. Please use PDF, DOC, DOCX, TXT, CSV, XLS, XLSX, MD, PPT, PPTX, HTML, JSON, YAML, or YML.`
)
hasError = true
continue
@@ -501,8 +501,8 @@ export function CreateModal({ open, onOpenChange, onKnowledgeBaseCreated }: Crea
: 'Drop files here or click to browse'}
</p>
<p className='text-muted-foreground text-xs'>
Supports PDF, DOC, DOCX, TXT, CSV, XLS, XLSX, MD, PPT, PPTX, HTML,
JSON, YAML, YML (max 100MB each)
</p>
</div>
</div>

View File

@@ -84,17 +84,200 @@ class ProcessingError extends KnowledgeUploadError {
}
const UPLOAD_CONFIG = {
MAX_PARALLEL_UPLOADS: 3, // Limit simultaneous transfers to avoid saturating the client connection
MAX_RETRIES: 3,
RETRY_DELAY_MS: 2000,
RETRY_BACKOFF: 2,
CHUNK_SIZE: 8 * 1024 * 1024, // 8MB stays well above the S3 5MB minimum part size while reducing part count
DIRECT_UPLOAD_THRESHOLD: 4 * 1024 * 1024,
LARGE_FILE_THRESHOLD: 50 * 1024 * 1024,
BASE_TIMEOUT_MS: 2 * 60 * 1000, // baseline timeout budget per transfer
TIMEOUT_PER_MB_MS: 1500,
MAX_TIMEOUT_MS: 10 * 60 * 1000,
MULTIPART_PART_CONCURRENCY: 3,
MULTIPART_MAX_RETRIES: 3,
BATCH_REQUEST_SIZE: 50,
} as const
const calculateUploadTimeoutMs = (fileSize: number) => {
const sizeInMb = fileSize / (1024 * 1024)
const dynamicBudget = UPLOAD_CONFIG.BASE_TIMEOUT_MS + sizeInMb * UPLOAD_CONFIG.TIMEOUT_PER_MB_MS
return Math.min(dynamicBudget, UPLOAD_CONFIG.MAX_TIMEOUT_MS)
}
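// e.g. a 100MB file gets min(120_000 + 100 * 1_500, 600_000) = 270_000 ms, i.e. a 4.5 minute budget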
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms))
const getHighResTime = () =>
typeof performance !== 'undefined' && typeof performance.now === 'function'
? performance.now()
: Date.now()
const formatMegabytes = (bytes: number) => Number((bytes / (1024 * 1024)).toFixed(2))
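// (bytes * 8) / durationMs is bits per millisecond (= kbit/s); multiplying by 0.001 yields Mbit/s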
const calculateThroughputMbps = (bytes: number, durationMs: number) => {
if (!bytes || !durationMs) return 0
return Number((((bytes * 8) / durationMs) * 0.001).toFixed(2))
}
const formatDurationSeconds = (durationMs: number) => Number((durationMs / 1000).toFixed(2))
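// Minimal promise pool: up to 'limit' workers pull the next shared index until items run out, so results keep input order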
const runWithConcurrency = async <T, R>(
items: T[],
limit: number,
worker: (item: T, index: number) => Promise<R>
): Promise<Array<PromiseSettledResult<R>>> => {
const results: Array<PromiseSettledResult<R>> = Array(items.length)
if (items.length === 0) {
return results
}
const concurrency = Math.max(1, Math.min(limit, items.length))
let nextIndex = 0
const runners = Array.from({ length: concurrency }, async () => {
while (true) {
const currentIndex = nextIndex++
if (currentIndex >= items.length) {
break
}
try {
const value = await worker(items[currentIndex], currentIndex)
results[currentIndex] = { status: 'fulfilled', value }
} catch (error) {
results[currentIndex] = { status: 'rejected', reason: error }
}
}
})
await Promise.all(runners)
return results
}
const getErrorName = (error: unknown) =>
typeof error === 'object' && error !== null && 'name' in error ? String((error as any).name) : ''
const getErrorMessage = (error: unknown) =>
error instanceof Error ? error.message : typeof error === 'string' ? error : 'Unknown error'
const isAbortError = (error: unknown) => getErrorName(error) === 'AbortError'
const isNetworkError = (error: unknown) => {
if (!(error instanceof Error)) {
return false
}
const message = error.message.toLowerCase()
return (
message.includes('network') ||
message.includes('fetch') ||
message.includes('connection') ||
message.includes('timeout') ||
message.includes('timed out') ||
message.includes('econnreset')
)
}
interface PresignedFileInfo {
path: string
key: string
name: string
size: number
type: string
}
interface PresignedUploadInfo {
fileName: string
presignedUrl: string
fileInfo: PresignedFileInfo
uploadHeaders?: Record<string, string>
directUploadSupported: boolean
presignedUrls?: any
}
const normalizePresignedData = (data: any, context: string): PresignedUploadInfo => {
const presignedUrl = data?.presignedUrl || data?.uploadUrl
const fileInfo = data?.fileInfo
if (!presignedUrl || !fileInfo?.path) {
throw new PresignedUrlError(`Invalid presigned response for ${context}`, data)
}
return {
fileName: data.fileName || fileInfo.name || context,
presignedUrl,
fileInfo: {
path: fileInfo.path,
key: fileInfo.key,
name: fileInfo.name || context,
size: fileInfo.size || data.fileSize || 0,
type: fileInfo.type || data.contentType || '',
},
uploadHeaders: data.uploadHeaders || undefined,
directUploadSupported: data.directUploadSupported !== false,
presignedUrls: data.presignedUrls,
}
}
const getPresignedData = async (
file: File,
timeoutMs: number,
controller?: AbortController
): Promise<PresignedUploadInfo> => {
const localController = controller ?? new AbortController()
const timeoutId = setTimeout(() => localController.abort(), timeoutMs)
const startTime = getHighResTime()
try {
const presignedResponse = await fetch('/api/files/presigned?type=knowledge-base', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
fileName: file.name,
contentType: file.type,
fileSize: file.size,
}),
signal: localController.signal,
})
if (!presignedResponse.ok) {
let errorDetails: any = null
try {
errorDetails = await presignedResponse.json()
} catch {
// Ignore JSON parsing errors
}
logger.error('Presigned URL request failed', {
status: presignedResponse.status,
fileSize: file.size,
})
throw new PresignedUrlError(
`Failed to get presigned URL for ${file.name}: ${presignedResponse.status} ${presignedResponse.statusText}`,
errorDetails
)
}
const presignedData = await presignedResponse.json()
const durationMs = getHighResTime() - startTime
logger.info('Fetched presigned URL', {
fileName: file.name,
sizeMB: formatMegabytes(file.size),
durationMs: formatDurationSeconds(durationMs),
})
return normalizePresignedData(presignedData, file.name)
} finally {
clearTimeout(timeoutId)
if (!controller) {
localController.abort()
}
}
}
export function useKnowledgeUpload(options: UseKnowledgeUploadOptions = {}) {
const [isUploading, setIsUploading] = useState(false)
const [uploadProgress, setUploadProgress] = useState<UploadProgress>({
@@ -154,85 +337,51 @@ export function useKnowledgeUpload(options: UseKnowledgeUploadOptions = {}) {
const uploadSingleFileWithRetry = async (
file: File,
retryCount = 0,
fileIndex?: number,
presignedOverride?: PresignedUploadInfo
): Promise<UploadedFile> => {
const timeoutMs = calculateUploadTimeoutMs(file.size)
let presignedData: PresignedUploadInfo | undefined
const attempt = retryCount + 1
logger.info('Upload attempt started', {
fileName: file.name,
attempt,
sizeMB: formatMegabytes(file.size),
timeoutMs: formatDurationSeconds(timeoutMs),
})
try {
// Create abort controller for timeout
const controller = new AbortController()
const timeoutId = setTimeout(() => controller.abort(), timeoutMs)
try {
presignedData = presignedOverride ?? (await getPresignedData(file, timeoutMs, controller))
if (presignedData.directUploadSupported) {
// Use presigned URLs for all uploads when cloud storage is available
// Check if file needs multipart upload for large files
if (file.size > UPLOAD_CONFIG.LARGE_FILE_THRESHOLD) {
return await uploadFileInChunks(file, presignedData, timeoutMs, fileIndex)
}
return await uploadFileDirectly(file, presignedData, timeoutMs, controller, fileIndex)
}
// Fallback to traditional upload through API route
// This is only used when cloud storage is not configured
// Must check file size due to Vercel's 4.5MB limit
if (file.size > UPLOAD_CONFIG.DIRECT_UPLOAD_THRESHOLD) {
throw new DirectUploadError(
`File ${file.name} is too large (${(file.size / 1024 / 1024).toFixed(2)}MB) for upload. Cloud storage must be configured for files over 4MB.`,
{ fileSize: file.size, limit: UPLOAD_CONFIG.DIRECT_UPLOAD_THRESHOLD }
)
}
logger.warn(`Using API upload fallback for ${file.name} - cloud storage not configured`)
return await uploadFileThroughAPI(file, timeoutMs)
} finally {
clearTimeout(timeoutId)
}
} catch (error) {
const isTimeout = isAbortError(error)
const isNetwork = isNetworkError(error)
// Retry logic
if (retryCount < UPLOAD_CONFIG.MAX_RETRIES) {
const delay = UPLOAD_CONFIG.RETRY_DELAY_MS * UPLOAD_CONFIG.RETRY_BACKOFF ** retryCount // Exponential backoff (2s, 4s, 8s)
if (isTimeout || isNetwork) {
logger.warn(
`Upload failed (${isTimeout ? 'timeout' : 'network'}), retrying in ${delay / 1000}s...`,
@@ -244,7 +393,6 @@ export function useKnowledgeUpload(options: UseKnowledgeUploadOptions = {}) {
)
}
// Reset progress to 0 before retry to indicate restart
if (fileIndex !== undefined) {
setUploadProgress((prev) => ({
...prev,
@@ -254,8 +402,14 @@ export function useKnowledgeUpload(options: UseKnowledgeUploadOptions = {}) {
}))
}
await sleep(delay)
const shouldReusePresigned = (isTimeout || isNetwork) && presignedData
return uploadSingleFileWithRetry(
file,
retryCount + 1,
fileIndex,
shouldReusePresigned ? presignedData : undefined
)
}
logger.error('Upload failed after retries', {
@@ -272,12 +426,15 @@ export function useKnowledgeUpload(options: UseKnowledgeUploadOptions = {}) {
*/
const uploadFileDirectly = async (
file: File,
presignedData: PresignedUploadInfo,
timeoutMs: number,
outerController: AbortController,
fileIndex?: number
): Promise<UploadedFile> => {
return new Promise((resolve, reject) => {
const xhr = new XMLHttpRequest()
let isCompleted = false
const startTime = getHighResTime()
const timeoutId = setTimeout(() => {
if (!isCompleted) {
@@ -285,7 +442,18 @@ export function useKnowledgeUpload(options: UseKnowledgeUploadOptions = {}) {
xhr.abort()
reject(new Error('Upload timeout'))
}
}, timeoutMs)
const abortHandler = () => {
if (!isCompleted) {
isCompleted = true
clearTimeout(timeoutId)
xhr.abort()
reject(new DirectUploadError(`Upload aborted for ${file.name}`, {}))
}
}
outerController.signal.addEventListener('abort', abortHandler)
// Track upload progress
xhr.upload.addEventListener('progress', (event) => {
@@ -310,10 +478,19 @@ export function useKnowledgeUpload(options: UseKnowledgeUploadOptions = {}) {
if (!isCompleted) {
isCompleted = true
clearTimeout(timeoutId)
outerController.signal.removeEventListener('abort', abortHandler)
const durationMs = getHighResTime() - startTime
if (xhr.status >= 200 && xhr.status < 300) {
const fullFileUrl = presignedData.fileInfo.path.startsWith('http')
? presignedData.fileInfo.path
: `${window.location.origin}${presignedData.fileInfo.path}`
logger.info('Direct upload completed', {
fileName: file.name,
sizeMB: formatMegabytes(file.size),
durationMs: formatDurationSeconds(durationMs),
throughputMbps: calculateThroughputMbps(file.size, durationMs),
status: xhr.status,
})
resolve(createUploadedFile(file.name, fullFileUrl, file.size, file.type, file))
} else {
logger.error('S3 PUT request failed', {
@@ -336,17 +513,18 @@ export function useKnowledgeUpload(options: UseKnowledgeUploadOptions = {}) {
if (!isCompleted) {
isCompleted = true
clearTimeout(timeoutId)
outerController.signal.removeEventListener('abort', abortHandler)
const durationMs = getHighResTime() - startTime
logger.error('Direct upload network error', {
fileName: file.name,
sizeMB: formatMegabytes(file.size),
durationMs: formatDurationSeconds(durationMs),
})
reject(new DirectUploadError(`Network error uploading ${file.name}`, {}))
}
})
xhr.addEventListener('abort', abortHandler)
// Start the upload
xhr.open('PUT', presignedData.presignedUrl)
@@ -366,10 +544,16 @@ export function useKnowledgeUpload(options: UseKnowledgeUploadOptions = {}) {
/**
* Upload large file in chunks (multipart upload)
*/
const uploadFileInChunks = async (
file: File,
presignedData: PresignedUploadInfo,
timeoutMs: number,
fileIndex?: number
): Promise<UploadedFile> => {
logger.info(
`Uploading large file ${file.name} (${(file.size / 1024 / 1024).toFixed(2)}MB) using multipart upload`
)
const startTime = getHighResTime()
try {
// Step 1: Initiate multipart upload
@@ -420,37 +604,76 @@ export function useKnowledgeUpload(options: UseKnowledgeUploadOptions = {}) {
// Step 4: Upload parts in parallel (batch them to avoid overwhelming the browser)
const uploadedParts: Array<{ ETag: string; PartNumber: number }> = []
const controller = new AbortController()
const multipartTimeoutId = setTimeout(() => controller.abort(), timeoutMs)
try {
const uploadPart = async ({ partNumber, url }: any) => {
const start = (partNumber - 1) * chunkSize
const end = Math.min(start + chunkSize, file.size)
const chunk = file.slice(start, end)
for (let attempt = 0; attempt <= UPLOAD_CONFIG.MULTIPART_MAX_RETRIES; attempt++) {
try {
const partResponse = await fetch(url, {
method: 'PUT',
body: chunk,
signal: controller.signal,
headers: {
'Content-Type': file.type,
},
})
if (!partResponse.ok) {
throw new Error(`Failed to upload part ${partNumber}: ${partResponse.statusText}`)
}
const etag = partResponse.headers.get('ETag') || ''
logger.info(`Uploaded part ${partNumber}/${numParts}`)
if (fileIndex !== undefined) {
const partProgress = Math.min(100, Math.round((partNumber / numParts) * 100))
setUploadProgress((prev) => ({
...prev,
fileStatuses: prev.fileStatuses?.map((fs, idx) =>
idx === fileIndex ? { ...fs, progress: partProgress } : fs
),
}))
}
return { ETag: etag.replace(/"/g, ''), PartNumber: partNumber }
} catch (partError) {
if (attempt >= UPLOAD_CONFIG.MULTIPART_MAX_RETRIES) {
throw partError
}
const delay = UPLOAD_CONFIG.RETRY_DELAY_MS * UPLOAD_CONFIG.RETRY_BACKOFF ** attempt
logger.warn(
`Part ${partNumber} failed (attempt ${attempt + 1}), retrying in ${Math.round(delay / 1000)}s`
)
await sleep(delay)
}
}
throw new Error(`Retries exhausted for part ${partNumber}`)
}
const partResults = await runWithConcurrency(
presignedUrls,
UPLOAD_CONFIG.MULTIPART_PART_CONCURRENCY,
uploadPart
)
partResults.forEach((result) => {
if (result?.status === 'fulfilled') {
uploadedParts.push(result.value)
} else if (result?.status === 'rejected') {
throw result.reason
}
})
} finally {
clearTimeout(multipartTimeoutId)
}
// Step 5: Complete multipart upload
@@ -471,23 +694,37 @@ export function useKnowledgeUpload(options: UseKnowledgeUploadOptions = {}) {
const { path } = await completeResponse.json()
logger.info(`Completed multipart upload for ${file.name}`)
const durationMs = getHighResTime() - startTime
logger.info('Multipart upload metrics', {
fileName: file.name,
sizeMB: formatMegabytes(file.size),
parts: uploadedParts.length,
durationMs: formatDurationSeconds(durationMs),
throughputMbps: calculateThroughputMbps(file.size, durationMs),
})
const fullFileUrl = path.startsWith('http') ? path : `${window.location.origin}${path}`
return createUploadedFile(file.name, fullFileUrl, file.size, file.type, file)
} catch (error) {
logger.error(`Multipart upload failed for ${file.name}:`, error)
const durationMs = getHighResTime() - startTime
logger.warn('Falling back to direct upload after multipart failure', {
fileName: file.name,
sizeMB: formatMegabytes(file.size),
durationMs: formatDurationSeconds(durationMs),
})
// Fall back to direct upload if multipart fails
return uploadFileDirectly(file, presignedData, timeoutMs, new AbortController(), fileIndex)
}
}
/**
* Fallback upload through API
*/
const uploadFileThroughAPI = async (file: File, timeoutMs: number): Promise<UploadedFile> => {
const controller = new AbortController()
const timeoutId = setTimeout(() => controller.abort(), timeoutMs)
try {
const formData = new FormData()
@@ -560,19 +797,20 @@ export function useKnowledgeUpload(options: UseKnowledgeUploadOptions = {}) {
logger.info(`Starting batch upload of ${files.length} files`)
try {
const batches = []
// Create all batches
for (
let batchStart = 0;
batchStart < files.length;
batchStart += UPLOAD_CONFIG.BATCH_REQUEST_SIZE
) {
const batchFiles = files.slice(batchStart, batchStart + UPLOAD_CONFIG.BATCH_REQUEST_SIZE)
const batchIndexOffset = batchStart
batches.push({ batchFiles, batchIndexOffset })
}
logger.info(`Starting parallel processing of ${batches.length} batches`)
// Step 1: Get ALL presigned URLs in parallel
const presignedPromises = batches.map(async ({ batchFiles }, batchIndex) => {
logger.info(
`Getting presigned URLs for batch ${batchIndex + 1}/${batches.length} (${batchFiles.length} files)`
@@ -605,9 +843,8 @@ export function useKnowledgeUpload(options: UseKnowledgeUploadOptions = {}) {
const allPresignedData = await Promise.all(presignedPromises)
logger.info(`Got all presigned URLs, starting uploads`)
// Step 2: Upload all files with global concurrency control
const allUploads = allPresignedData.flatMap(({ batchFiles, presignedData, batchIndex }) => {
const batchIndexOffset = batchIndex * UPLOAD_CONFIG.BATCH_REQUEST_SIZE
return batchFiles.map((file, batchFileIndex) => {
const fileIndex = batchIndexOffset + batchFileIndex
@@ -617,16 +854,14 @@ export function useKnowledgeUpload(options: UseKnowledgeUploadOptions = {}) {
})
})
// Process all uploads with concurrency control
const uploadResults = await runWithConcurrency(
allUploads,
UPLOAD_CONFIG.MAX_PARALLEL_UPLOADS,
async ({ file, presigned, fileIndex }) => {
if (!presigned) {
throw new Error(`No presigned data for file ${file.name}`)
}
setUploadProgress((prev) => ({
...prev,
fileStatuses: prev.fileStatuses?.map((fs, idx) =>
@@ -635,10 +870,8 @@ export function useKnowledgeUpload(options: UseKnowledgeUploadOptions = {}) {
}))
try {
const result = await uploadSingleFileWithRetry(file, 0, fileIndex, presigned)
setUploadProgress((prev) => ({
...prev,
filesCompleted: prev.filesCompleted + 1,
@@ -649,7 +882,6 @@ export function useKnowledgeUpload(options: UseKnowledgeUploadOptions = {}) {
return result
} catch (error) {
setUploadProgress((prev) => ({
...prev,
fileStatuses: prev.fileStatuses?.map((fs, idx) =>
@@ -657,30 +889,27 @@ export function useKnowledgeUpload(options: UseKnowledgeUploadOptions = {}) {
? {
...fs,
status: 'failed' as const,
error: getErrorMessage(error),
}
: fs
),
}))
throw error
}
})
)
uploadResults.forEach((result, idx) => {
if (result?.status === 'fulfilled') {
results.push(result.value)
} else if (result?.status === 'rejected') {
failedFiles.push({
file: allUploads[idx].file,
error:
result.reason instanceof Error ? result.reason : new Error(String(result.reason)),
})
}
})
if (failedFiles.length > 0) {
logger.error(`Failed to upload ${failedFiles.length} files`)

View File

@@ -86,7 +86,6 @@ export function DiffControls() {
lastSaved: rawState.lastSaved || Date.now(),
isDeployed: rawState.isDeployed || false,
deploymentStatuses: rawState.deploymentStatuses || {},
hasActiveWebhook: rawState.hasActiveWebhook || false,
// Only include deployedAt if it's a valid date, never include null/undefined
...(rawState.deployedAt && rawState.deployedAt instanceof Date
? { deployedAt: rawState.deployedAt }

View File

@@ -3,6 +3,7 @@
import {
forwardRef,
type KeyboardEvent,
useCallback,
useEffect,
useImperativeHandle,
useRef,
@@ -41,7 +42,6 @@ import {
DropdownMenuContent,
DropdownMenuItem,
DropdownMenuTrigger,
Switch,
Textarea,
Tooltip,
TooltipContent,
@@ -49,6 +49,7 @@ import {
TooltipTrigger,
} from '@/components/ui'
import { useSession } from '@/lib/auth-client'
import { isHosted } from '@/lib/environment'
import { createLogger } from '@/lib/logs/console/logger'
import { cn } from '@/lib/utils'
import { useCopilotStore } from '@/stores/copilot/store'
@@ -92,6 +93,7 @@ interface UserInputProps {
onModeChange?: (mode: 'ask' | 'agent') => void
value?: string // Controlled value from outside
onChange?: (value: string) => void // Callback when value changes
panelWidth?: number // Panel width to adjust truncation
}
interface UserInputRef {
@@ -112,6 +114,7 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
onModeChange,
value: controlledValue,
onChange: onControlledChange,
panelWidth = 308,
},
ref
) => {
@@ -179,7 +182,7 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
const [isLoadingLogs, setIsLoadingLogs] = useState(false)
const { data: session } = useSession()
const { currentChat, workflowId } = useCopilotStore()
const { currentChat, workflowId, enabledModels, setEnabledModels } = useCopilotStore()
const params = useParams()
const workspaceId = params.workspaceId as string
// Track per-chat preference for auto-adding workflow context
@@ -224,6 +227,30 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
}
}, [workflowId])
// Fetch enabled models when dropdown is opened for the first time
const fetchEnabledModelsOnce = useCallback(async () => {
if (!isHosted) return
if (enabledModels !== null) return // Already loaded
try {
const res = await fetch('/api/copilot/user-models')
if (!res.ok) {
logger.error('Failed to fetch enabled models')
return
}
const data = await res.json()
const modelsMap = data.enabledModels || {}
// Convert to array for store (API already merged with defaults)
const enabledArray = Object.entries(modelsMap)
.filter(([_, enabled]) => enabled)
.map(([modelId]) => modelId)
setEnabledModels(enabledArray)
} catch (error) {
logger.error('Error fetching enabled models', { error })
}
}, [enabledModels, setEnabledModels])
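The `enabledModels !== null` guard makes this a one-shot fetch: the store value doubles as a "loaded" flag, so reopening the dropdown never refetches. A minimal sketch of the store slice this component and the settings pane share (zustand assumed; `currentChat`, `selectedModel`, and the rest of the real store are omitted):

```ts
import { create } from 'zustand'

// Only enabledModels/setEnabledModels are grounded in the call sites above;
// treating null as "not yet fetched" follows the guard in the component.
interface CopilotModelSlice {
  enabledModels: string[] | null
  setEnabledModels: (models: string[]) => void
}

export const useCopilotStore = create<CopilotModelSlice>((set) => ({
  enabledModels: null,
  setEnabledModels: (models) => set({ enabledModels: models }),
}))
```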
// Track the last chat ID we've seen to detect chat changes
const [lastChatId, setLastChatId] = useState<string | undefined>(undefined)
// Track if we just sent a message to avoid re-adding context after submit
@@ -1780,7 +1807,7 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
const { selectedModel, agentPrefetch, setSelectedModel, setAgentPrefetch } = useCopilotStore()
// Model configurations with their display names
const modelOptions = [
const allModelOptions = [
{ value: 'gpt-5-fast', label: 'gpt-5-fast' },
{ value: 'gpt-5', label: 'gpt-5' },
{ value: 'gpt-5-medium', label: 'gpt-5-medium' },
@@ -1793,23 +1820,36 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
{ value: 'claude-4.1-opus', label: 'claude-4.1-opus' },
] as const
// Filter models based on user preferences (only for hosted)
const modelOptions =
isHosted && enabledModels !== null
? allModelOptions.filter((model) => enabledModels.includes(model.value))
: allModelOptions
const getCollapsedModeLabel = () => {
const model = modelOptions.find((m) => m.value === selectedModel)
return model ? model.label : 'Claude 4.5 Sonnet'
return model ? model.label : 'claude-4.5-sonnet'
}
const getModelIcon = () => {
const colorClass = !agentPrefetch
? 'text-[var(--brand-primary-hover-hex)]'
: 'text-muted-foreground'
// Only Brain and BrainCircuit models show purple when agentPrefetch is false
const isBrainModel = [
'gpt-5',
'gpt-5-medium',
'claude-4-sonnet',
'claude-4.5-sonnet',
].includes(selectedModel)
const isBrainCircuitModel = ['gpt-5-high', 'o3', 'claude-4.1-opus'].includes(selectedModel)
const colorClass =
(isBrainModel || isBrainCircuitModel) && !agentPrefetch
? 'text-[var(--brand-primary-hover-hex)]'
: 'text-muted-foreground'
// Match the dropdown icon logic exactly
if (['gpt-5-high', 'o3', 'claude-4.1-opus'].includes(selectedModel)) {
if (isBrainCircuitModel) {
return <BrainCircuit className={`h-3 w-3 ${colorClass}`} />
}
if (
['gpt-5', 'gpt-5-medium', 'claude-4-sonnet', 'claude-4.5-sonnet'].includes(selectedModel)
) {
if (isBrainModel) {
return <Brain className={`h-3 w-3 ${colorClass}`} />
}
if (['gpt-4o', 'gpt-4.1', 'gpt-5-fast'].includes(selectedModel)) {
@@ -3068,7 +3108,7 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
variant='ghost'
size='sm'
disabled={!onModeChange}
className='flex h-6 items-center gap-1.5 rounded-full border px-2 py-1 font-medium text-xs'
className='flex h-6 items-center gap-1.5 rounded-full border px-2 py-1 font-medium text-xs focus-visible:ring-0 focus-visible:ring-offset-0'
>
{getModeIcon()}
<span>{getModeText()}</span>
@@ -3134,191 +3174,183 @@ const UserInput = forwardRef<UserInputRef, UserInputProps>(
</TooltipProvider>
</DropdownMenuContent>
</DropdownMenu>
{
<DropdownMenu>
<DropdownMenuTrigger asChild>
<Button
variant='ghost'
size='sm'
className={cn(
'flex h-6 items-center gap-1.5 rounded-full border px-2 py-1 font-medium text-xs',
!agentPrefetch
? 'border-[var(--brand-primary-hover-hex)] text-[var(--brand-primary-hover-hex)] hover:bg-[color-mix(in_srgb,var(--brand-primary-hover-hex)_8%,transparent)] hover:text-[var(--brand-primary-hover-hex)]'
: 'border-border text-foreground'
)}
title='Choose mode'
>
{getModelIcon()}
<span>
{getCollapsedModeLabel()}
{!agentPrefetch &&
!['gpt-4o', 'gpt-4.1', 'gpt-5-fast'].includes(selectedModel) && (
<span className='ml-1 font-semibold'>MAX</span>
)}
</span>
</Button>
</DropdownMenuTrigger>
<DropdownMenuContent align='start' side='top' className='max-h-[400px] p-0'>
<TooltipProvider delayDuration={100} skipDelayDuration={0}>
<div className='w-[220px]'>
<div className='p-2 pb-0'>
<div className='mb-2 flex items-center justify-between'>
<div className='flex items-center gap-1.5'>
<span className='font-medium text-xs'>MAX mode</span>
<Tooltip>
<TooltipTrigger asChild>
<button
type='button'
className='h-3.5 w-3.5 rounded text-muted-foreground transition-colors hover:text-foreground'
aria-label='MAX mode info'
>
<Info className='h-3.5 w-3.5' />
</button>
</TooltipTrigger>
<TooltipContent
side='right'
sideOffset={6}
align='center'
className='max-w-[220px] border bg-popover p-2 text-[11px] text-popover-foreground leading-snug shadow-md'
>
Significantly increases depth of reasoning
<br />
<span className='text-[10px] text-muted-foreground italic'>
Only available for advanced models
</span>
</TooltipContent>
</Tooltip>
</div>
<Switch
checked={!agentPrefetch}
disabled={['gpt-4o', 'gpt-4.1', 'gpt-5-fast'].includes(selectedModel)}
title={
['gpt-4o', 'gpt-4.1', 'gpt-5-fast'].includes(selectedModel)
? 'MAX mode is only available for advanced models'
: undefined
}
onCheckedChange={(checked) => {
if (['gpt-4o', 'gpt-4.1', 'gpt-5-fast'].includes(selectedModel))
return
setAgentPrefetch(!checked)
}}
/>
</div>
<div className='my-1.5 flex justify-center'>
<div className='h-px w-[100%] bg-border' />
</div>
</div>
<div className='max-h-[280px] overflow-y-auto px-2 pb-2'>
<div>
<div className='mb-1'>
<span className='font-medium text-xs'>Model</span>
</div>
<div className='space-y-2'>
{/* Helper function to get icon for a model */}
{(() => {
const getModelIcon = (modelValue: string) => {
if (
['gpt-5-high', 'o3', 'claude-4.1-opus'].includes(modelValue)
) {
return (
<BrainCircuit className='h-3 w-3 text-muted-foreground' />
)
}
if (
[
'gpt-5',
'gpt-5-medium',
'claude-4-sonnet',
'claude-4.5-sonnet',
].includes(modelValue)
) {
return <Brain className='h-3 w-3 text-muted-foreground' />
}
if (['gpt-4o', 'gpt-4.1', 'gpt-5-fast'].includes(modelValue)) {
return <Zap className='h-3 w-3 text-muted-foreground' />
}
return <div className='h-3 w-3' />
}
{(() => {
const isBrainModel = [
'gpt-5',
'gpt-5-medium',
'claude-4-sonnet',
'claude-4.5-sonnet',
].includes(selectedModel)
const isBrainCircuitModel = ['gpt-5-high', 'o3', 'claude-4.1-opus'].includes(
selectedModel
)
const showPurple = (isBrainModel || isBrainCircuitModel) && !agentPrefetch
const renderModelOption = (
option: (typeof modelOptions)[number]
) => (
<DropdownMenuItem
key={option.value}
onSelect={() => {
setSelectedModel(option.value)
// Automatically turn off max mode for fast models (Zap icon)
if (
['gpt-4o', 'gpt-4.1', 'gpt-5-fast'].includes(
option.value
) &&
!agentPrefetch
) {
setAgentPrefetch(true)
}
}}
className={cn(
'flex h-7 items-center gap-1.5 px-2 py-1 text-left text-xs',
selectedModel === option.value ? 'bg-muted/50' : ''
)}
>
{getModelIcon(option.value)}
<span>{option.label}</span>
</DropdownMenuItem>
)
return (
<DropdownMenu
onOpenChange={(open) => {
if (open) {
fetchEnabledModelsOnce()
}
}}
>
<DropdownMenuTrigger asChild>
<Button
variant='ghost'
size='sm'
className={cn(
'flex h-6 items-center gap-1.5 rounded-full border px-2 py-1 font-medium text-xs focus-visible:ring-0 focus-visible:ring-offset-0',
showPurple
? 'border-[var(--brand-primary-hover-hex)] text-[var(--brand-primary-hover-hex)] hover:bg-[color-mix(in_srgb,var(--brand-primary-hover-hex)_8%,transparent)] hover:text-[var(--brand-primary-hover-hex)]'
: 'border-border text-foreground'
)}
title='Choose model'
>
{getModelIcon()}
<span className={cn(panelWidth < 360 ? 'max-w-[72px] truncate' : '')}>
{getCollapsedModeLabel()}
{agentPrefetch &&
!['gpt-4o', 'gpt-4.1', 'gpt-5-fast'].includes(selectedModel) && (
<span className='ml-1 font-semibold'>Lite</span>
)}
</span>
</Button>
</DropdownMenuTrigger>
<DropdownMenuContent align='start' side='top' className='max-h-[400px] p-0'>
<TooltipProvider delayDuration={100} skipDelayDuration={0}>
<div className='w-[220px]'>
<div className='max-h-[280px] overflow-y-auto p-2'>
<div>
<div className='mb-1'>
<span className='font-medium text-xs'>Model</span>
</div>
<div className='space-y-2'>
{/* Helper function to get icon for a model */}
{(() => {
const getModelIcon = (modelValue: string) => {
if (
['gpt-5-high', 'o3', 'claude-4.1-opus'].includes(modelValue)
) {
return (
<BrainCircuit className='h-3 w-3 text-muted-foreground' />
)
}
if (
[
'gpt-5',
'gpt-5-medium',
'claude-4-sonnet',
'claude-4.5-sonnet',
].includes(modelValue)
) {
return <Brain className='h-3 w-3 text-muted-foreground' />
}
if (['gpt-4o', 'gpt-4.1', 'gpt-5-fast'].includes(modelValue)) {
return <Zap className='h-3 w-3 text-muted-foreground' />
}
return <div className='h-3 w-3' />
}
return (
<>
{/* OpenAI Models */}
<div>
<div className='px-2 py-1 font-medium text-[10px] text-muted-foreground uppercase'>
OpenAI
</div>
<div className='space-y-0.5'>
{modelOptions
.filter((option) =>
[
'gpt-5-fast',
'gpt-5',
'gpt-5-medium',
'gpt-5-high',
'gpt-4o',
'gpt-4.1',
'o3',
].includes(option.value)
)
.map(renderModelOption)}
</div>
</div>
const renderModelOption = (
option: (typeof modelOptions)[number]
) => (
<DropdownMenuItem
key={option.value}
onSelect={() => {
setSelectedModel(option.value)
// Automatically turn off Lite mode for fast models (Zap icon)
if (
['gpt-4o', 'gpt-4.1', 'gpt-5-fast'].includes(
option.value
) &&
agentPrefetch
) {
setAgentPrefetch(false)
}
}}
className={cn(
'flex h-7 items-center gap-1.5 px-2 py-1 text-left text-xs',
selectedModel === option.value ? 'bg-muted/50' : ''
)}
>
{getModelIcon(option.value)}
<span>{option.label}</span>
</DropdownMenuItem>
)
{/* Anthropic Models */}
<div>
<div className='px-2 py-1 font-medium text-[10px] text-muted-foreground uppercase'>
Anthropic
return (
<>
{/* OpenAI Models */}
<div>
<div className='px-2 py-1 font-medium text-[10px] text-muted-foreground uppercase'>
OpenAI
</div>
<div className='space-y-0.5'>
{modelOptions
.filter((option) =>
[
'gpt-5-fast',
'gpt-5',
'gpt-5-medium',
'gpt-5-high',
'gpt-4o',
'gpt-4.1',
'o3',
].includes(option.value)
)
.map(renderModelOption)}
</div>
</div>
<div className='space-y-0.5'>
{modelOptions
.filter((option) =>
[
'claude-4-sonnet',
'claude-4.5-sonnet',
'claude-4.1-opus',
].includes(option.value)
)
.map(renderModelOption)}
{/* Anthropic Models */}
<div>
<div className='px-2 py-1 font-medium text-[10px] text-muted-foreground uppercase'>
Anthropic
</div>
<div className='space-y-0.5'>
{modelOptions
.filter((option) =>
[
'claude-4-sonnet',
'claude-4.5-sonnet',
'claude-4.1-opus',
].includes(option.value)
)
.map(renderModelOption)}
</div>
</div>
</div>
</>
)
})()}
{/* More Models Button (only for hosted) */}
{isHosted && (
<div className='mt-1 border-t pt-1'>
<button
type='button'
onClick={() => {
// Dispatch event to open settings modal on copilot tab
window.dispatchEvent(
new CustomEvent('open-settings', {
detail: { tab: 'copilot' },
})
)
}}
className='w-full rounded-sm px-2 py-1.5 text-left text-muted-foreground text-xs transition-colors hover:bg-muted/50'
>
More Models...
</button>
</div>
)}
</>
)
})()}
</div>
</div>
</div>
</div>
</div>
</TooltipProvider>
</DropdownMenuContent>
</DropdownMenu>
}
</TooltipProvider>
</DropdownMenuContent>
</DropdownMenu>
)
})()}
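The "More Models..." entry talks to the settings modal through a window-level `CustomEvent` rather than shared state. A hypothetical receiving side, for illustration — only the event name `open-settings` and the `{ tab }` detail are grounded in the dispatch above:

```tsx
import { useEffect } from 'react'

// openSettingsModal stands in for whatever opener the settings host exposes.
function useOpenSettingsEvent(openSettingsModal: (tab: string) => void) {
  useEffect(() => {
    const handler = (event: Event) => {
      const { tab } = (event as CustomEvent<{ tab?: string }>).detail ?? {}
      openSettingsModal(tab ?? 'general')
    }
    window.addEventListener('open-settings', handler)
    return () => window.removeEventListener('open-settings', handler)
  }, [openSettingsModal])
}
```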
<Button
variant='ghost'
size='icon'

View File

@@ -440,6 +440,7 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(({ panelWidth }, ref
onModeChange={setMode}
value={inputValue}
onChange={setInputValue}
panelWidth={panelWidth}
/>
)}
</>

View File

@@ -30,6 +30,7 @@ import { Textarea } from '@/components/ui/textarea'
import { cn } from '@/lib/utils'
import { sanitizeForCopilot } from '@/lib/workflows/json-sanitizer'
import { formatEditSequence } from '@/lib/workflows/training/compute-edit-sequence'
import { useCurrentWorkflow } from '@/app/workspace/[workspaceId]/w/[workflowId]/hooks/use-current-workflow'
import { useCopilotTrainingStore } from '@/stores/copilot-training/store'
/**
@@ -52,6 +53,8 @@ export function TrainingModal() {
markDatasetSent,
} = useCopilotTrainingStore()
const currentWorkflow = useCurrentWorkflow()
const [localPrompt, setLocalPrompt] = useState(currentPrompt)
const [localTitle, setLocalTitle] = useState(currentTitle)
const [copiedId, setCopiedId] = useState<string | null>(null)
@@ -63,6 +66,11 @@ export function TrainingModal() {
const [sendingSelected, setSendingSelected] = useState(false)
const [sentDatasets, setSentDatasets] = useState<Set<string>>(new Set())
const [failedDatasets, setFailedDatasets] = useState<Set<string>>(new Set())
const [sendingLiveWorkflow, setSendingLiveWorkflow] = useState(false)
const [liveWorkflowSent, setLiveWorkflowSent] = useState(false)
const [liveWorkflowFailed, setLiveWorkflowFailed] = useState(false)
const [liveWorkflowTitle, setLiveWorkflowTitle] = useState('')
const [liveWorkflowDescription, setLiveWorkflowDescription] = useState('')
const handleStart = () => {
if (localTitle.trim() && localPrompt.trim()) {
@@ -285,6 +293,46 @@ export function TrainingModal() {
}
}
const handleSendLiveWorkflow = async () => {
if (!liveWorkflowTitle.trim() || !liveWorkflowDescription.trim()) {
return
}
setLiveWorkflowSent(false)
setLiveWorkflowFailed(false)
setSendingLiveWorkflow(true)
try {
const sanitizedWorkflow = sanitizeForCopilot(currentWorkflow.workflowState)
const response = await fetch('/api/copilot/training/examples', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
json: JSON.stringify(sanitizedWorkflow),
source_path: liveWorkflowTitle,
summary: liveWorkflowDescription,
}),
})
if (!response.ok) {
const error = await response.json()
throw new Error(error.error || 'Failed to send live workflow')
}
setLiveWorkflowSent(true)
setLiveWorkflowTitle('')
setLiveWorkflowDescription('')
setTimeout(() => setLiveWorkflowSent(false), 5000)
} catch (error) {
console.error('Failed to send live workflow:', error)
setLiveWorkflowFailed(true)
setTimeout(() => setLiveWorkflowFailed(false), 5000)
} finally {
setSendingLiveWorkflow(false)
}
}
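For reference, the payload this handler posts, as a type sketch inferred from the body above (the route's actual schema is not part of this diff):

```ts
// Field names are grounded in the fetch body; types and comments are assumptions.
interface TrainingExampleRequest {
  json: string // sanitizeForCopilot(currentWorkflow.workflowState), stringified
  source_path: string // short title identifying the workflow
  summary: string // free-text description of what the workflow does
}
```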
return (
<Dialog open={showModal} onOpenChange={toggleModal}>
<DialogContent className='max-w-3xl'>
@@ -335,24 +383,24 @@ export function TrainingModal() {
)}
<Tabs defaultValue={isTraining ? 'datasets' : 'new'} className='mt-4'>
<TabsList className='grid w-full grid-cols-2'>
<TabsList className='grid w-full grid-cols-3'>
<TabsTrigger value='new' disabled={isTraining}>
New Session
</TabsTrigger>
<TabsTrigger value='datasets'>Datasets ({datasets.length})</TabsTrigger>
<TabsTrigger value='live'>Send Live State</TabsTrigger>
</TabsList>
{/* New Training Session Tab */}
<TabsContent value='new' className='space-y-4'>
{startSnapshot && (
<div className='rounded-lg border bg-muted/50 p-3'>
<p className='font-medium text-muted-foreground text-sm'>Current Workflow State</p>
<p className='text-sm'>
{Object.keys(startSnapshot.blocks).length} blocks, {startSnapshot.edges.length}{' '}
edges
</p>
</div>
)}
<div className='rounded-lg border bg-muted/50 p-3'>
<p className='mb-2 font-medium text-muted-foreground text-sm'>
Current Workflow State
</p>
<p className='text-sm'>
{currentWorkflow.getBlockCount()} blocks, {currentWorkflow.getEdgeCount()} edges
</p>
</div>
<div className='space-y-2'>
<Label htmlFor='title'>Title</Label>
@@ -628,6 +676,94 @@ export function TrainingModal() {
</>
)}
</TabsContent>
{/* Send Live State Tab */}
<TabsContent value='live' className='space-y-4'>
<div className='rounded-lg border bg-muted/50 p-3'>
<p className='mb-2 font-medium text-muted-foreground text-sm'>
Current Workflow State
</p>
<p className='text-sm'>
{currentWorkflow.getBlockCount()} blocks, {currentWorkflow.getEdgeCount()} edges
</p>
</div>
<div className='space-y-2'>
<Label htmlFor='live-title'>Title</Label>
<Input
id='live-title'
placeholder='e.g., Customer Onboarding Workflow'
value={liveWorkflowTitle}
onChange={(e) => setLiveWorkflowTitle(e.target.value)}
/>
<p className='text-muted-foreground text-xs'>
A short title identifying this workflow
</p>
</div>
<div className='space-y-2'>
<Label htmlFor='live-description'>Description</Label>
<Textarea
id='live-description'
placeholder='Describe what this workflow does...'
value={liveWorkflowDescription}
onChange={(e) => setLiveWorkflowDescription(e.target.value)}
rows={3}
/>
<p className='text-muted-foreground text-xs'>
Explain the purpose and functionality of this workflow
</p>
</div>
<Button
onClick={handleSendLiveWorkflow}
disabled={
!liveWorkflowTitle.trim() ||
!liveWorkflowDescription.trim() ||
sendingLiveWorkflow ||
currentWorkflow.getBlockCount() === 0
}
className='w-full'
>
{sendingLiveWorkflow ? (
<>
<div className='mr-2 h-4 w-4 animate-spin rounded-full border-2 border-current border-t-transparent' />
Sending...
</>
) : liveWorkflowSent ? (
<>
<CheckCircle2 className='mr-2 h-4 w-4' />
Sent Successfully
</>
) : liveWorkflowFailed ? (
<>
<XCircle className='mr-2 h-4 w-4' />
Failed - Try Again
</>
) : (
<>
<Send className='mr-2 h-4 w-4' />
Send Live Workflow State
</>
)}
</Button>
{liveWorkflowSent && (
<div className='rounded-lg border bg-green-50 p-3 dark:bg-green-950/30'>
<p className='text-green-700 text-sm dark:text-green-300'>
Workflow state sent successfully!
</p>
</div>
)}
{liveWorkflowFailed && (
<div className='rounded-lg border bg-red-50 p-3 dark:bg-red-950/30'>
<p className='text-red-700 text-sm dark:text-red-300'>
Failed to send workflow state. Please try again.
</p>
</div>
)}
</TabsContent>
</Tabs>
</DialogContent>
</Dialog>

View File

@@ -10,6 +10,7 @@ import { checkTagTrigger, TagDropdown } from '@/components/ui/tag-dropdown'
import { createLogger } from '@/lib/logs/console/logger'
import { cn } from '@/lib/utils'
import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/workflow-block/components/sub-block/hooks/use-sub-block-value'
import { useAccessibleReferencePrefixes } from '@/app/workspace/[workspaceId]/w/[workflowId]/hooks/use-accessible-reference-prefixes'
import type { SubBlockConfig } from '@/blocks/types'
import { useTagSelection } from '@/hooks/use-tag-selection'
@@ -60,6 +61,7 @@ export function ComboBox({
const [highlightedIndex, setHighlightedIndex] = useState(-1)
const emitTagSelection = useTagSelection(blockId, subBlockId)
const accessiblePrefixes = useAccessibleReferencePrefixes(blockId)
const inputRef = useRef<HTMLInputElement>(null)
const overlayRef = useRef<HTMLDivElement>(null)
@@ -432,7 +434,10 @@ export function ComboBox({
style={{ right: '42px' }}
>
<div className='w-full truncate text-foreground' style={{ scrollbarWidth: 'none' }}>
{formatDisplayText(displayValue)}
{formatDisplayText(displayValue, {
accessiblePrefixes,
highlightAll: !accessiblePrefixes,
})}
</div>
</div>
{/* Chevron button */}

View File

@@ -5,6 +5,7 @@ import { Label } from '@/components/ui/label'
import { checkTagTrigger, TagDropdown } from '@/components/ui/tag-dropdown'
import { cn } from '@/lib/utils'
import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/workflow-block/components/sub-block/hooks/use-sub-block-value'
import { useAccessibleReferencePrefixes } from '@/app/workspace/[workspaceId]/w/[workflowId]/hooks/use-accessible-reference-prefixes'
import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
interface InputFormatField {
@@ -152,6 +153,8 @@ export function InputMapping({
setMapping(updated)
}
const accessiblePrefixes = useAccessibleReferencePrefixes(blockId)
if (!selectedWorkflowId) {
return (
<div className='flex flex-col items-center justify-center rounded-lg border border-border/50 bg-muted/30 p-8 text-center'>
@@ -213,6 +216,7 @@ export function InputMapping({
blockId={blockId}
subBlockId={subBlockId}
disabled={isPreview || disabled}
accessiblePrefixes={accessiblePrefixes}
/>
)
})}
@@ -229,6 +233,7 @@ function InputMappingField({
blockId,
subBlockId,
disabled,
accessiblePrefixes,
}: {
fieldName: string
fieldType?: string
@@ -237,6 +242,7 @@ function InputMappingField({
blockId: string
subBlockId: string
disabled: boolean
accessiblePrefixes: Set<string> | undefined
}) {
const [showTags, setShowTags] = useState(false)
const [cursorPosition, setCursorPosition] = useState(0)
@@ -318,7 +324,10 @@ function InputMappingField({
className='w-full whitespace-pre'
style={{ scrollbarWidth: 'none', minWidth: 'fit-content' }}
>
{formatDisplayText(value)}
{formatDisplayText(value, {
accessiblePrefixes,
highlightAll: !accessiblePrefixes,
})}
</div>
</div>

View File

@@ -7,6 +7,7 @@ import { formatDisplayText } from '@/components/ui/formatted-text'
import { Input } from '@/components/ui/input'
import { Label } from '@/components/ui/label'
import { checkTagTrigger, TagDropdown } from '@/components/ui/tag-dropdown'
import { useAccessibleReferencePrefixes } from '@/app/workspace/[workspaceId]/w/[workflowId]/hooks/use-accessible-reference-prefixes'
import type { SubBlockConfig } from '@/blocks/types'
import { useKnowledgeBaseTagDefinitions } from '@/hooks/use-knowledge-base-tag-definitions'
import { useTagSelection } from '@/hooks/use-tag-selection'
@@ -55,6 +56,9 @@ export function KnowledgeTagFilters({
// Use KB tag definitions hook to get available tags
const { tagDefinitions, isLoading } = useKnowledgeBaseTagDefinitions(knowledgeBaseId)
// Get accessible prefixes for variable highlighting
const accessiblePrefixes = useAccessibleReferencePrefixes(blockId)
// State for managing tag dropdown
const [activeTagDropdown, setActiveTagDropdown] = useState<{
rowIndex: number
@@ -314,7 +318,12 @@ export function KnowledgeTagFilters({
className='w-full border-0 text-transparent caret-foreground placeholder:text-muted-foreground/50 focus-visible:ring-0 focus-visible:ring-offset-0'
/>
<div className='pointer-events-none absolute inset-0 flex items-center overflow-hidden bg-transparent px-3 text-sm'>
<div className='whitespace-pre'>{formatDisplayText(cellValue)}</div>
<div className='whitespace-pre'>
{formatDisplayText(cellValue, {
accessiblePrefixes,
highlightAll: !accessiblePrefixes,
})}
</div>
</div>
</div>
</td>

View File

@@ -11,6 +11,7 @@ import { createLogger } from '@/lib/logs/console/logger'
import { cn } from '@/lib/utils'
import { WandPromptBar } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/wand-prompt-bar/wand-prompt-bar'
import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/workflow-block/components/sub-block/hooks/use-sub-block-value'
import { useAccessibleReferencePrefixes } from '@/app/workspace/[workspaceId]/w/[workflowId]/hooks/use-accessible-reference-prefixes'
import { useWand } from '@/app/workspace/[workspaceId]/w/[workflowId]/hooks/use-wand'
import type { SubBlockConfig } from '@/blocks/types'
import { useTagSelection } from '@/hooks/use-tag-selection'
@@ -92,6 +93,7 @@ export function LongInput({
const overlayRef = useRef<HTMLDivElement>(null)
const [activeSourceBlockId, setActiveSourceBlockId] = useState<string | null>(null)
const containerRef = useRef<HTMLDivElement>(null)
const accessiblePrefixes = useAccessibleReferencePrefixes(blockId)
// Use preview value when in preview mode, otherwise use store value or prop value
const baseValue = isPreview ? previewValue : propValue !== undefined ? propValue : storeValue
@@ -405,7 +407,10 @@ export function LongInput({
height: `${height}px`,
}}
>
{formatDisplayText(value?.toString() ?? '')}
{formatDisplayText(value?.toString() ?? '', {
accessiblePrefixes,
highlightAll: !accessiblePrefixes,
})}
</div>
{/* Wand Button */}

View File

@@ -11,6 +11,7 @@ import { createLogger } from '@/lib/logs/console/logger'
import { cn } from '@/lib/utils'
import { WandPromptBar } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/wand-prompt-bar/wand-prompt-bar'
import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/workflow-block/components/sub-block/hooks/use-sub-block-value'
import { useAccessibleReferencePrefixes } from '@/app/workspace/[workspaceId]/w/[workflowId]/hooks/use-accessible-reference-prefixes'
import { useWand } from '@/app/workspace/[workspaceId]/w/[workflowId]/hooks/use-wand'
import type { SubBlockConfig } from '@/blocks/types'
import { useTagSelection } from '@/hooks/use-tag-selection'
@@ -345,6 +346,8 @@ export function ShortInput({
}
}
const accessiblePrefixes = useAccessibleReferencePrefixes(blockId)
return (
<>
<WandPromptBar
@@ -417,7 +420,10 @@ export function ShortInput({
>
{password && !isFocused
? '•'.repeat(value?.toString().length ?? 0)
: formatDisplayText(value?.toString() ?? '')}
: formatDisplayText(value?.toString() ?? '', {
accessiblePrefixes,
highlightAll: !accessiblePrefixes,
})}
</div>
</div>

View File

@@ -22,6 +22,7 @@ import { checkTagTrigger, TagDropdown } from '@/components/ui/tag-dropdown'
import { Textarea } from '@/components/ui/textarea'
import { cn } from '@/lib/utils'
import { useSubBlockValue } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/workflow-block/components/sub-block/hooks/use-sub-block-value'
import { useAccessibleReferencePrefixes } from '@/app/workspace/[workspaceId]/w/[workflowId]/hooks/use-accessible-reference-prefixes'
interface Field {
id: string
@@ -80,6 +81,7 @@ export function FieldFormat({
const [cursorPosition, setCursorPosition] = useState(0)
const [activeFieldId, setActiveFieldId] = useState<string | null>(null)
const [activeSourceBlockId, setActiveSourceBlockId] = useState<string | null>(null)
const accessiblePrefixes = useAccessibleReferencePrefixes(blockId)
// Use preview value when in preview mode, otherwise use store value
const value = isPreview ? previewValue : storeValue
@@ -471,7 +473,10 @@ export function FieldFormat({
style={{ scrollbarWidth: 'none', minWidth: 'fit-content' }}
>
{formatDisplayText(
(localValues[field.id] ?? field.value ?? '')?.toString()
(localValues[field.id] ?? field.value ?? '')?.toString(),
accessiblePrefixes
? { accessiblePrefixes }
: { highlightAll: true }
)}
</div>
</div>

View File

@@ -24,6 +24,7 @@ import {
} from '@/components/ui/select'
import { createLogger } from '@/lib/logs/console/logger'
import type { McpTransport } from '@/lib/mcp/types'
import { useAccessibleReferencePrefixes } from '@/app/workspace/[workspaceId]/w/[workflowId]/hooks/use-accessible-reference-prefixes'
import { useMcpServerTest } from '@/hooks/use-mcp-server-test'
import { useMcpServersStore } from '@/stores/mcp-servers/store'
@@ -33,6 +34,7 @@ interface McpServerModalProps {
open: boolean
onOpenChange: (open: boolean) => void
onServerCreated?: () => void
blockId: string
}
interface McpServerFormData {
@@ -42,7 +44,12 @@ interface McpServerFormData {
headers?: Record<string, string>
}
export function McpServerModal({ open, onOpenChange, onServerCreated }: McpServerModalProps) {
export function McpServerModal({
open,
onOpenChange,
onServerCreated,
blockId,
}: McpServerModalProps) {
const params = useParams()
const workspaceId = params.workspaceId as string
const [formData, setFormData] = useState<McpServerFormData>({
@@ -262,6 +269,8 @@ export function McpServerModal({ open, onOpenChange, onServerCreated }: McpServe
workspaceId,
])
const accessiblePrefixes = useAccessibleReferencePrefixes(blockId)
return (
<Dialog open={open} onOpenChange={onOpenChange}>
<DialogContent className='sm:max-w-[600px]'>
@@ -337,7 +346,10 @@ export function McpServerModal({ open, onOpenChange, onServerCreated }: McpServe
className='whitespace-nowrap'
style={{ transform: `translateX(-${urlScrollLeft}px)` }}
>
{formatDisplayText(formData.url || '')}
{formatDisplayText(formData.url || '', {
accessiblePrefixes,
highlightAll: !accessiblePrefixes,
})}
</div>
</div>
</div>
@@ -389,7 +401,10 @@ export function McpServerModal({ open, onOpenChange, onServerCreated }: McpServe
transform: `translateX(-${headerScrollLeft[`key-${index}`] || 0}px)`,
}}
>
{formatDisplayText(key || '')}
{formatDisplayText(key || '', {
accessiblePrefixes,
highlightAll: !accessiblePrefixes,
})}
</div>
</div>
</div>
@@ -417,7 +432,10 @@ export function McpServerModal({ open, onOpenChange, onServerCreated }: McpServe
transform: `translateX(-${headerScrollLeft[`value-${index}`] || 0}px)`,
}}
>
{formatDisplayText(value || '')}
{formatDisplayText(value || '', {
accessiblePrefixes,
highlightAll: !accessiblePrefixes,
})}
</div>
</div>
</div>

View File

@@ -1977,6 +1977,7 @@ export function ToolInput({
// Refresh MCP tools when a new server is created
refreshTools(true)
}}
blockId={blockId}
/>
</div>
)

View File

@@ -148,6 +148,7 @@ export function WorkflowBlock({ id, data }: NodeProps<WorkflowBlockProps>) {
)
const storeIsWide = useWorkflowStore((state) => state.blocks[id]?.isWide ?? false)
const storeBlockHeight = useWorkflowStore((state) => state.blocks[id]?.height ?? 0)
const storeBlockLayout = useWorkflowStore((state) => state.blocks[id]?.layout)
const storeBlockAdvancedMode = useWorkflowStore(
(state) => state.blocks[id]?.advancedMode ?? false
)
@@ -168,6 +169,10 @@ export function WorkflowBlock({ id, data }: NodeProps<WorkflowBlockProps>) {
? (currentWorkflow.blocks[id]?.height ?? 0)
: storeBlockHeight
const blockWidth = currentWorkflow.isDiffMode
? (currentWorkflow.blocks[id]?.layout?.measuredWidth ?? 0)
: (storeBlockLayout?.measuredWidth ?? 0)
// Get per-block webhook status by checking if webhook is configured
const activeWorkflowId = useWorkflowRegistry((state) => state.activeWorkflowId)
@@ -240,7 +245,7 @@ export function WorkflowBlock({ id, data }: NodeProps<WorkflowBlockProps>) {
}, [id, collaborativeSetSubblockValue])
// Workflow store actions
const updateBlockHeight = useWorkflowStore((state) => state.updateBlockHeight)
const updateBlockLayoutMetrics = useWorkflowStore((state) => state.updateBlockLayoutMetrics)
// Execution store
const isActiveBlock = useExecutionStore((state) => state.activeBlockIds.has(id))
@@ -419,9 +424,9 @@ export function WorkflowBlock({ id, data }: NodeProps<WorkflowBlockProps>) {
if (!contentRef.current) return
let rafId: number
const debouncedUpdate = debounce((height: number) => {
if (height !== blockHeight) {
updateBlockHeight(id, height)
const debouncedUpdate = debounce((dimensions: { width: number; height: number }) => {
if (dimensions.height !== blockHeight || dimensions.width !== blockWidth) {
updateBlockLayoutMetrics(id, dimensions)
updateNodeInternals(id)
}
}, 100)
@@ -435,9 +440,10 @@ export function WorkflowBlock({ id, data }: NodeProps<WorkflowBlockProps>) {
// Schedule the update on the next animation frame
rafId = requestAnimationFrame(() => {
for (const entry of entries) {
const height =
entry.borderBoxSize[0]?.blockSize ?? entry.target.getBoundingClientRect().height
debouncedUpdate(height)
const rect = entry.target.getBoundingClientRect()
const height = entry.borderBoxSize[0]?.blockSize ?? rect.height
const width = entry.borderBoxSize[0]?.inlineSize ?? rect.width
debouncedUpdate({ width, height })
}
})
})
@@ -450,7 +456,7 @@ export function WorkflowBlock({ id, data }: NodeProps<WorkflowBlockProps>) {
cancelAnimationFrame(rafId)
}
}
}, [id, blockHeight, updateBlockHeight, updateNodeInternals, lastUpdate])
}, [id, blockHeight, blockWidth, updateBlockLayoutMetrics, updateNodeInternals, lastUpdate])
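With the rename from `updateBlockHeight` to `updateBlockLayoutMetrics`, the ResizeObserver now reports both dimensions and `blockWidth` reads back `layout.measuredWidth`. A minimal sketch of the store shape this implies (zustand assumed; the real store carries many more fields and actions):

```ts
import { create } from 'zustand'

// Field names follow the reads above (layout.measuredWidth / measuredHeight);
// the merge semantics are an assumption.
interface BlockLayout {
  measuredWidth?: number
  measuredHeight?: number
}

interface WorkflowState {
  blocks: Record<string, { layout?: BlockLayout; height?: number; isWide?: boolean }>
  updateBlockLayoutMetrics: (id: string, dims: { width: number; height: number }) => void
}

export const useWorkflowStore = create<WorkflowState>((set) => ({
  blocks: {},
  updateBlockLayoutMetrics: (id, { width, height }) =>
    set((state) => ({
      blocks: {
        ...state.blocks,
        [id]: {
          ...state.blocks[id],
          layout: {
            ...state.blocks[id]?.layout,
            measuredWidth: width,
            measuredHeight: height,
          },
        },
      },
    })),
}))
```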
// SubBlock layout management
function groupSubBlocks(subBlocks: SubBlockConfig[], blockId: string) {

View File

@@ -0,0 +1,64 @@
import { useMemo } from 'react'
import { shallow } from 'zustand/shallow'
import { BlockPathCalculator } from '@/lib/block-path-calculator'
import { SYSTEM_REFERENCE_PREFIXES } from '@/lib/workflows/references'
import { normalizeBlockName } from '@/stores/workflows/utils'
import { useWorkflowStore } from '@/stores/workflows/workflow/store'
import type { Loop, Parallel } from '@/stores/workflows/workflow/types'
export function useAccessibleReferencePrefixes(blockId?: string | null): Set<string> | undefined {
const { blocks, edges, loops, parallels } = useWorkflowStore(
(state) => ({
blocks: state.blocks,
edges: state.edges,
loops: state.loops || {},
parallels: state.parallels || {},
}),
shallow
)
return useMemo(() => {
if (!blockId) {
return undefined
}
const graphEdges = edges.map((edge) => ({ source: edge.source, target: edge.target }))
const ancestorIds = BlockPathCalculator.findAllPathNodes(graphEdges, blockId)
const accessibleIds = new Set<string>(ancestorIds)
accessibleIds.add(blockId)
const starterBlock = Object.values(blocks).find((block) => block.type === 'starter')
if (starterBlock) {
accessibleIds.add(starterBlock.id)
}
const loopValues = Object.values(loops as Record<string, Loop>)
loopValues.forEach((loop) => {
if (!loop?.nodes) return
if (loop.nodes.includes(blockId)) {
loop.nodes.forEach((nodeId) => accessibleIds.add(nodeId))
}
})
const parallelValues = Object.values(parallels as Record<string, Parallel>)
parallelValues.forEach((parallel) => {
if (!parallel?.nodes) return
if (parallel.nodes.includes(blockId)) {
parallel.nodes.forEach((nodeId) => accessibleIds.add(nodeId))
}
})
const prefixes = new Set<string>()
accessibleIds.forEach((id) => {
prefixes.add(normalizeBlockName(id))
const block = blocks[id]
if (block?.name) {
prefixes.add(normalizeBlockName(block.name))
}
})
SYSTEM_REFERENCE_PREFIXES.forEach((prefix) => prefixes.add(prefix))
return prefixes
}, [blockId, blocks, edges, loops, parallels])
}
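The hook returns `undefined` when no block id is available, which is why every call site falls back to `highlightAll: !accessiblePrefixes`. A sketch of the decision `formatDisplayText` presumably makes with these options (the option names come from the call sites; the reference parsing and lower-casing are assumptions mirroring `normalizeBlockName`):

```ts
interface FormatOptions {
  accessiblePrefixes?: Set<string>
  highlightAll?: boolean
}

// Decide whether a "<block.field>" style reference should be highlighted.
function shouldHighlight(reference: string, opts: FormatOptions): boolean {
  if (opts.highlightAll) return true // no graph info: highlight everything
  const match = /^<([^.>\s]+)/.exec(reference)
  if (!match) return false
  return opts.accessiblePrefixes?.has(match[1].toLowerCase()) ?? false
}
```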

View File

@@ -19,7 +19,6 @@ export interface CurrentWorkflow {
deployedAt?: Date
deploymentStatuses?: Record<string, DeploymentStatus>
needsRedeployment?: boolean
hasActiveWebhook?: boolean
// Mode information
isDiffMode: boolean
@@ -66,7 +65,6 @@ export function useCurrentWorkflow(): CurrentWorkflow {
deployedAt: activeWorkflow.deployedAt,
deploymentStatuses: activeWorkflow.deploymentStatuses,
needsRedeployment: activeWorkflow.needsRedeployment,
hasActiveWebhook: activeWorkflow.hasActiveWebhook,
// Mode information - update to reflect ready state
isDiffMode: shouldUseDiff,

View File

@@ -98,18 +98,12 @@ const getBlockDimensions = (
}
}
if (block.type === 'workflowBlock') {
const nodeWidth = block.data?.width || block.width
const nodeHeight = block.data?.height || block.height
if (nodeWidth && nodeHeight) {
return { width: nodeWidth, height: nodeHeight }
}
}
return {
width: block.isWide ? 450 : block.data?.width || block.width || 350,
height: Math.max(block.height || block.data?.height || 150, 100),
width: block.layout?.measuredWidth || (block.isWide ? 450 : block.data?.width || 350),
height: Math.max(
block.layout?.measuredHeight || block.height || block.data?.height || 150,
100
),
}
}

View File

@@ -78,13 +78,19 @@ export async function applyAutoLayoutToWorkflow(
},
}
// Call the autolayout API route which has access to the server-side API key
// Call the autolayout API route, sending blocks with live measurements
const response = await fetch(`/api/workflows/${workflowId}/autolayout`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify(layoutOptions),
body: JSON.stringify({
...layoutOptions,
blocks,
edges,
loops,
parallels,
}),
})
if (!response.ok) {
@@ -198,16 +204,19 @@ export async function applyAutoLayoutAndUpdateStore(
useWorkflowStore.getState().updateLastSaved()
// Clean up the workflow state for API validation
// Destructure out UI-only fields that shouldn't be persisted
const { deploymentStatuses, needsRedeployment, dragStartPosition, ...stateToSave } =
newWorkflowState
const cleanedWorkflowState = {
...newWorkflowState,
...stateToSave,
// Convert null dates to undefined (since they're optional)
deployedAt: newWorkflowState.deployedAt ? new Date(newWorkflowState.deployedAt) : undefined,
deployedAt: stateToSave.deployedAt ? new Date(stateToSave.deployedAt) : undefined,
// Ensure other optional fields are properly handled
loops: newWorkflowState.loops || {},
parallels: newWorkflowState.parallels || {},
deploymentStatuses: newWorkflowState.deploymentStatuses || {},
loops: stateToSave.loops || {},
parallels: stateToSave.parallels || {},
// Sanitize edges: remove null/empty handle fields to satisfy schema (optional strings)
edges: (newWorkflowState.edges || []).map((edge: any) => {
edges: (stateToSave.edges || []).map((edge: any) => {
const { sourceHandle, targetHandle, ...rest } = edge || {}
const sanitized: any = { ...rest }
if (typeof sourceHandle === 'string' && sourceHandle.length > 0) {
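The hunk is truncated mid-sanitizer; completing the pattern for illustration, with the `targetHandle` branch assumed to mirror the `sourceHandle` branch shown:

```ts
// Drop null/empty handle fields so the schema's optional-string validation passes.
function sanitizeEdge(edge: any) {
  const { sourceHandle, targetHandle, ...rest } = edge || {}
  const sanitized: any = { ...rest }
  if (typeof sourceHandle === 'string' && sourceHandle.length > 0) {
    sanitized.sourceHandle = sourceHandle
  }
  if (typeof targetHandle === 'string' && targetHandle.length > 0) {
    sanitized.targetHandle = targetHandle // assumed symmetric with sourceHandle
  }
  return sanitized
}
```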

View File

@@ -781,7 +781,7 @@ const WorkflowContent = React.memo(() => {
// Create the trigger block at the center of the viewport
const centerPosition = project({ x: window.innerWidth / 2, y: window.innerHeight / 2 })
const id = `${triggerId}_${Date.now()}`
const id = crypto.randomUUID()
// Add the trigger block with trigger mode if specified
addBlock(

View File

@@ -19,10 +19,6 @@ import { Dialog, DialogOverlay, DialogPortal, DialogTitle } from '@/components/u
import { Input } from '@/components/ui/input'
import { useBrandConfig } from '@/lib/branding/branding'
import { cn } from '@/lib/utils'
import {
TemplateCard,
TemplateCardSkeleton,
} from '@/app/workspace/[workspaceId]/templates/components/template-card'
import { getKeyboardShortcutText } from '@/app/workspace/[workspaceId]/w/hooks/use-keyboard-shortcuts'
import { getAllBlocks } from '@/blocks'
import { type NavigationSection, useSearchNavigation } from './hooks/use-search-navigation'
@@ -30,28 +26,12 @@ import { type NavigationSection, useSearchNavigation } from './hooks/use-search-
interface SearchModalProps {
open: boolean
onOpenChange: (open: boolean) => void
templates?: TemplateData[]
workflows?: WorkflowItem[]
workspaces?: WorkspaceItem[]
loading?: boolean
knowledgeBases?: KnowledgeBaseItem[]
isOnWorkflowPage?: boolean
}
interface TemplateData {
id: string
title: string
description: string
author: string
usageCount: string
stars: number
icon: string
iconColor: string
state?: {
blocks?: Record<string, { type: string; name?: string }>
}
isStarred?: boolean
}
interface WorkflowItem {
id: string
name: string
@@ -93,6 +73,14 @@ interface PageItem {
shortcut?: string
}
interface KnowledgeBaseItem {
id: string
name: string
description?: string
href: string
isCurrent?: boolean
}
interface DocItem {
id: string
name: string
@@ -104,10 +92,9 @@ interface DocItem {
export function SearchModal({
open,
onOpenChange,
templates = [],
workflows = [],
workspaces = [],
loading = false,
knowledgeBases = [],
isOnWorkflowPage = false,
}: SearchModalProps) {
const [searchQuery, setSearchQuery] = useState('')
@@ -116,14 +103,6 @@ export function SearchModal({
const workspaceId = params.workspaceId as string
const brand = useBrandConfig()
// Local state for templates to handle star changes
const [localTemplates, setLocalTemplates] = useState<TemplateData[]>(templates)
// Update local templates when props change
useEffect(() => {
setLocalTemplates(templates)
}, [templates])
// Get all available blocks - only when on workflow page
const blocks = useMemo(() => {
if (!isOnWorkflowPage) return []
@@ -131,10 +110,7 @@ export function SearchModal({
const allBlocks = getAllBlocks()
const regularBlocks = allBlocks
.filter(
(block) =>
block.type !== 'starter' &&
!block.hideFromToolbar &&
(block.category === 'blocks' || block.category === 'triggers')
(block) => block.type !== 'starter' && !block.hideFromToolbar && block.category === 'blocks'
)
.map(
(block): BlockItem => ({
@@ -171,6 +147,30 @@ export function SearchModal({
return [...regularBlocks, ...specialBlocks].sort((a, b) => a.name.localeCompare(b.name))
}, [isOnWorkflowPage])
// Get all available triggers - only when on workflow page
const triggers = useMemo(() => {
if (!isOnWorkflowPage) return []
const allBlocks = getAllBlocks()
return allBlocks
.filter(
(block) =>
block.type !== 'starter' && !block.hideFromToolbar && block.category === 'triggers'
)
.map(
(block): BlockItem => ({
id: block.type,
name: block.name,
description: block.description || '',
longDescription: block.longDescription,
icon: block.icon,
bgColor: block.bgColor || '#6B7280',
type: block.type,
})
)
.sort((a, b) => a.name.localeCompare(b.name))
}, [isOnWorkflowPage])
// Get all available tools - only when on workflow page
const tools = useMemo(() => {
if (!isOnWorkflowPage) return []
@@ -252,24 +252,18 @@ export function SearchModal({
return blocks.filter((block) => block.name.toLowerCase().includes(query))
}, [blocks, searchQuery])
const filteredTriggers = useMemo(() => {
if (!searchQuery.trim()) return triggers
const query = searchQuery.toLowerCase()
return triggers.filter((trigger) => trigger.name.toLowerCase().includes(query))
}, [triggers, searchQuery])
const filteredTools = useMemo(() => {
if (!searchQuery.trim()) return tools
const query = searchQuery.toLowerCase()
return tools.filter((tool) => tool.name.toLowerCase().includes(query))
}, [tools, searchQuery])
const filteredTemplates = useMemo(() => {
if (!searchQuery.trim()) return localTemplates.slice(0, 8)
const query = searchQuery.toLowerCase()
return localTemplates
.filter(
(template) =>
template.title.toLowerCase().includes(query) ||
template.description.toLowerCase().includes(query)
)
.slice(0, 8)
}, [localTemplates, searchQuery])
const filteredWorkflows = useMemo(() => {
if (!searchQuery.trim()) return workflows
const query = searchQuery.toLowerCase()
@@ -282,6 +276,14 @@ export function SearchModal({
return workspaces.filter((workspace) => workspace.name.toLowerCase().includes(query))
}, [workspaces, searchQuery])
const filteredKnowledgeBases = useMemo(() => {
if (!searchQuery.trim()) return knowledgeBases
const query = searchQuery.toLowerCase()
return knowledgeBases.filter(
(kb) => kb.name.toLowerCase().includes(query) || kb.description?.toLowerCase().includes(query)
)
}, [knowledgeBases, searchQuery])
const filteredPages = useMemo(() => {
if (!searchQuery.trim()) return pages
const query = searchQuery.toLowerCase()
@@ -308,6 +310,16 @@ export function SearchModal({
})
}
if (filteredTriggers.length > 0) {
sections.push({
id: 'triggers',
name: 'Triggers',
type: 'grid',
items: filteredTriggers,
gridCols: filteredTriggers.length, // Single row - all items in one row
})
}
if (filteredTools.length > 0) {
sections.push({
id: 'tools',
@@ -318,20 +330,11 @@ export function SearchModal({
})
}
if (filteredTemplates.length > 0) {
sections.push({
id: 'templates',
name: 'Templates',
type: 'grid',
items: filteredTemplates,
gridCols: filteredTemplates.length, // Single row - all templates in one row
})
}
// Combine all list items into one section
const listItems = [
...filteredWorkspaces.map((item) => ({ type: 'workspace', data: item })),
...filteredWorkflows.map((item) => ({ type: 'workflow', data: item })),
...filteredKnowledgeBases.map((item) => ({ type: 'knowledgebase', data: item })),
...filteredPages.map((item) => ({ type: 'page', data: item })),
...filteredDocs.map((item) => ({ type: 'doc', data: item })),
]
@@ -348,10 +351,11 @@ export function SearchModal({
return sections
}, [
filteredBlocks,
filteredTriggers,
filteredTools,
filteredTemplates,
filteredWorkspaces,
filteredWorkflows,
filteredKnowledgeBases,
filteredPages,
filteredDocs,
])
@@ -463,23 +467,6 @@ export function SearchModal({
return () => window.removeEventListener('keydown', handleKeyDown)
}, [open, handlePageClick, workspaceId])
// Handle template usage callback (closes modal after template is used)
const handleTemplateUsed = useCallback(() => {
onOpenChange(false)
}, [onOpenChange])
// Handle star change callback from template card
const handleStarChange = useCallback(
(templateId: string, isStarred: boolean, newStarCount: number) => {
setLocalTemplates((prevTemplates) =>
prevTemplates.map((template) =>
template.id === templateId ? { ...template, isStarred, stars: newStarCount } : template
)
)
},
[]
)
// Handle item selection based on current item
const handleItemSelection = useCallback(() => {
const current = getCurrentItem()
@@ -487,11 +474,8 @@ export function SearchModal({
const { section, item } = current
if (section.id === 'blocks' || section.id === 'tools') {
if (section.id === 'blocks' || section.id === 'triggers' || section.id === 'tools') {
handleBlockClick(item.type)
} else if (section.id === 'templates') {
// Templates don't have direct selection, but we close the modal
onOpenChange(false)
} else if (section.id === 'list') {
switch (item.type) {
case 'workspace':
@@ -508,6 +492,13 @@ export function SearchModal({
handleNavigationClick(item.data.href)
}
break
case 'knowledgebase':
if (item.data.isCurrent) {
onOpenChange(false)
} else {
handleNavigationClick(item.data.href)
}
break
case 'page':
handlePageClick(item.data.href)
break
@@ -570,15 +561,6 @@ export function SearchModal({
[getCurrentItem]
)
// Render skeleton cards for loading state
const renderSkeletonCards = () => {
return Array.from({ length: 8 }).map((_, index) => (
<div key={`skeleton-${index}`} className='w-80 flex-shrink-0'>
<TemplateCardSkeleton />
</div>
))
}
return (
<Dialog open={open} onOpenChange={onOpenChange}>
<DialogPortal>
@@ -654,6 +636,52 @@ export function SearchModal({
</div>
)}
{/* Triggers Section */}
{filteredTriggers.length > 0 && (
<div>
<h3 className='mb-3 ml-6 font-normal font-sans text-muted-foreground text-sm leading-none tracking-normal'>
Triggers
</h3>
<div
ref={(el) => {
if (el) scrollRefs.current.set('triggers', el)
}}
className='scrollbar-none flex gap-2 overflow-x-auto px-6 pb-1'
style={{ scrollbarWidth: 'none', msOverflowStyle: 'none' }}
>
{filteredTriggers.map((trigger, index) => (
<button
key={trigger.id}
onClick={() => handleBlockClick(trigger.type)}
data-nav-item={`triggers-${index}`}
className={`flex h-auto w-[180px] flex-shrink-0 cursor-pointer flex-col items-start gap-2 rounded-[8px] border p-3 transition-all duration-200 ${
isItemSelected('triggers', index)
? 'border-border bg-secondary/80'
: 'border-border/40 bg-background/60 hover:border-border hover:bg-secondary/80'
}`}
>
<div className='flex items-center gap-2'>
<div
className='flex h-5 w-5 items-center justify-center rounded-[4px]'
style={{ backgroundColor: trigger.bgColor }}
>
<trigger.icon className='!h-3.5 !w-3.5 text-white' />
</div>
<span className='font-medium font-sans text-foreground text-sm leading-none tracking-normal'>
{trigger.name}
</span>
</div>
{(trigger.longDescription || trigger.description) && (
<p className='line-clamp-2 text-left text-muted-foreground text-xs'>
{trigger.longDescription || trigger.description}
</p>
)}
</button>
))}
</div>
</div>
)}
{/* Tools Section */}
{filteredTools.length > 0 && (
<div>
@@ -700,49 +728,6 @@ export function SearchModal({
</div>
)}
{/* Templates Section */}
{(loading || filteredTemplates.length > 0) && (
<div>
<h3 className='mb-3 ml-6 font-normal font-sans text-muted-foreground text-sm leading-none tracking-normal'>
Templates
</h3>
<div
ref={(el) => {
if (el) scrollRefs.current.set('templates', el)
}}
className='scrollbar-none flex gap-4 overflow-x-auto pr-6 pb-1 pl-6'
style={{ scrollbarWidth: 'none', msOverflowStyle: 'none' }}
>
{loading
? renderSkeletonCards()
: filteredTemplates.map((template, index) => (
<div
key={template.id}
data-nav-item={`templates-${index}`}
className={`w-80 flex-shrink-0 rounded-lg transition-all duration-200 ${
isItemSelected('templates', index) ? 'opacity-75' : 'opacity-100'
}`}
>
<TemplateCard
id={template.id}
title={template.title}
description={template.description}
author={template.author}
usageCount={template.usageCount}
stars={template.stars}
icon={template.icon}
iconColor={template.iconColor}
state={template.state}
isStarred={template.isStarred}
onTemplateUsed={handleTemplateUsed}
onStarChange={handleStarChange}
/>
</div>
))}
</div>
</div>
)}
{/* List sections (Workspaces, Workflows, Knowledge Bases, Pages, Docs) */}
{navigationSections.find((s) => s.id === 'list') && (
<div
@@ -826,6 +811,43 @@ export function SearchModal({
</div>
)}
{/* Knowledge Bases */}
{filteredKnowledgeBases.length > 0 && (
<div className='mb-6'>
<h3 className='mb-3 ml-6 font-normal font-sans text-muted-foreground text-sm leading-none tracking-normal'>
Knowledge Bases
</h3>
<div className='space-y-1 px-6'>
{filteredKnowledgeBases.map((kb, kbIndex) => {
const globalIndex =
filteredWorkspaces.length + filteredWorkflows.length + kbIndex
return (
<button
key={kb.id}
onClick={() =>
kb.isCurrent ? onOpenChange(false) : handleNavigationClick(kb.href)
}
data-nav-item={`list-${globalIndex}`}
className={`flex h-10 w-full items-center gap-3 rounded-[8px] px-3 py-2 transition-colors focus:outline-none ${
isItemSelected('list', globalIndex)
? 'bg-accent text-accent-foreground'
: 'hover:bg-accent/60 focus:bg-accent/60'
}`}
>
<div className='flex h-5 w-5 items-center justify-center'>
<LibraryBig className='h-4 w-4 text-muted-foreground' />
</div>
<span className='flex-1 text-left font-normal font-sans text-muted-foreground text-sm leading-none tracking-normal'>
{kb.name}
{kb.isCurrent && ' (current)'}
</span>
</button>
)
})}
</div>
</div>
)}
{/* Pages */}
{filteredPages.length > 0 && (
<div className='mb-6'>
@@ -835,7 +857,10 @@ export function SearchModal({
<div className='space-y-1 px-6'>
{filteredPages.map((page, pageIndex) => {
const globalIndex =
filteredWorkspaces.length + filteredWorkflows.length + pageIndex
filteredWorkspaces.length +
filteredWorkflows.length +
filteredKnowledgeBases.length +
pageIndex
return (
<button
key={page.id}
@@ -872,6 +897,7 @@ export function SearchModal({
const globalIndex =
filteredWorkspaces.length +
filteredWorkflows.length +
filteredKnowledgeBases.length +
filteredPages.length +
docIndex
return (
@@ -902,14 +928,14 @@ export function SearchModal({
{/* Empty state */}
{searchQuery &&
!loading &&
filteredWorkflows.length === 0 &&
filteredWorkspaces.length === 0 &&
filteredKnowledgeBases.length === 0 &&
filteredPages.length === 0 &&
filteredDocs.length === 0 &&
filteredBlocks.length === 0 &&
filteredTools.length === 0 &&
filteredTemplates.length === 0 && (
filteredTriggers.length === 0 &&
filteredTools.length === 0 && (
<div className='ml-6 py-12 text-center'>
<p className='text-muted-foreground'>No results found for "{searchQuery}"</p>
</div>
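Keyboard navigation treats the whole list area as one sequence, so each subsection's `data-nav-item` index is offset by the lengths of the subsections rendered before it; inserting Knowledge Bases is why the Pages and Docs offsets above each gain a `filteredKnowledgeBases.length` term. The same arithmetic as a standalone helper (parameter names are illustrative):

```ts
// Mirrors the globalIndex computations in the render code above.
function globalListIndex(
  counts: { workspaces: number; workflows: number; knowledgeBases: number; pages: number },
  section: 'workspace' | 'workflow' | 'knowledgebase' | 'page' | 'doc',
  localIndex: number
): number {
  const offsets = {
    workspace: 0,
    workflow: counts.workspaces,
    knowledgebase: counts.workspaces + counts.workflows,
    page: counts.workspaces + counts.workflows + counts.knowledgeBases,
    doc: counts.workspaces + counts.workflows + counts.knowledgeBases + counts.pages,
  } as const
  return offsets[section] + localIndex
}
```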

View File

@@ -155,60 +155,30 @@ export function CreateMenu({ onCreateWorkflow, isCreatingWorkflow = false }: Cre
workspaceId,
})
// Load the imported workflow state into stores immediately (optimistic update)
const { useWorkflowStore } = await import('@/stores/workflows/workflow/store')
const { useSubBlockStore } = await import('@/stores/workflows/subblock/store')
// Set the workflow as active in the registry to prevent reload
useWorkflowRegistry.setState({ activeWorkflowId: newWorkflowId })
// Set the workflow state immediately
useWorkflowStore.setState({
blocks: workflowData.blocks || {},
edges: workflowData.edges || [],
loops: workflowData.loops || {},
parallels: workflowData.parallels || {},
lastSaved: Date.now(),
})
// Initialize subblock store with the imported blocks
useSubBlockStore.getState().initializeFromWorkflow(newWorkflowId, workflowData.blocks || {})
// Also set subblock values if they exist in the imported data
const subBlockStore = useSubBlockStore.getState()
Object.entries(workflowData.blocks).forEach(([blockId, block]: [string, any]) => {
if (block.subBlocks) {
Object.entries(block.subBlocks).forEach(([subBlockId, subBlock]: [string, any]) => {
if (subBlock.value !== null && subBlock.value !== undefined) {
subBlockStore.setValue(blockId, subBlockId, subBlock.value)
}
})
}
})
// Navigate to the new workflow after setting state
router.push(`/workspace/${workspaceId}/w/${newWorkflowId}`)
logger.info('Workflow imported successfully from JSON')
// Save to database in the background (fire and forget)
fetch(`/api/workflows/${newWorkflowId}/state`, {
// Save workflow state to database first
const response = await fetch(`/api/workflows/${newWorkflowId}/state`, {
method: 'PUT',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify(workflowData),
})
.then((response) => {
if (!response.ok) {
logger.error('Failed to persist imported workflow to database')
} else {
logger.info('Imported workflow persisted to database')
}
})
.catch((error) => {
logger.error('Failed to persist imported workflow:', error)
})
if (!response.ok) {
logger.error('Failed to persist imported workflow to database')
throw new Error('Failed to save workflow')
}
logger.info('Imported workflow persisted to database')
// Pre-load the workflow state before navigating
const { setActiveWorkflow } = useWorkflowRegistry.getState()
await setActiveWorkflow(newWorkflowId)
// Navigate to the new workflow (replace to avoid history entry)
router.replace(`/workspace/${workspaceId}/w/${newWorkflowId}`)
logger.info('Workflow imported successfully from JSON')
} catch (error) {
logger.error('Failed to import workflow:', { error })
} finally {
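Worth noting: the old path set store state optimistically, navigated with `router.push`, and fired the PUT in the background, so a page load could race a save that had not landed yet. The new sequence serializes persist → `setActiveWorkflow` → `router.replace`, and a failed save now throws instead of navigating to a workflow the database never stored.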

View File

@@ -1,5 +1,5 @@
import { useCallback, useEffect, useState } from 'react'
import { Check, Copy, Plus, Search } from 'lucide-react'
import { useCallback, useEffect, useRef, useState } from 'react'
import { Brain, BrainCircuit, Check, Copy, Plus, Zap } from 'lucide-react'
import {
AlertDialog,
AlertDialogAction,
@@ -10,11 +10,12 @@ import {
AlertDialogHeader,
AlertDialogTitle,
Button,
Input,
Label,
Skeleton,
Switch,
} from '@/components/ui'
import { isHosted } from '@/lib/environment'
import { createLogger } from '@/lib/logs/console/logger'
import { useCopilotStore } from '@/stores/copilot/store'
const logger = createLogger('CopilotSettings')
@@ -23,26 +24,78 @@ interface CopilotKey {
displayKey: string
}
interface ModelOption {
value: string
label: string
icon: 'brain' | 'brainCircuit' | 'zap'
}
const OPENAI_MODELS: ModelOption[] = [
// Zap models first
{ value: 'gpt-4o', label: 'gpt-4o', icon: 'zap' },
{ value: 'gpt-4.1', label: 'gpt-4.1', icon: 'zap' },
{ value: 'gpt-5-fast', label: 'gpt-5-fast', icon: 'zap' },
// Brain models
{ value: 'gpt-5', label: 'gpt-5', icon: 'brain' },
{ value: 'gpt-5-medium', label: 'gpt-5-medium', icon: 'brain' },
// BrainCircuit models
{ value: 'gpt-5-high', label: 'gpt-5-high', icon: 'brainCircuit' },
{ value: 'o3', label: 'o3', icon: 'brainCircuit' },
]
const ANTHROPIC_MODELS: ModelOption[] = [
// Brain models
{ value: 'claude-4-sonnet', label: 'claude-4-sonnet', icon: 'brain' },
{ value: 'claude-4.5-sonnet', label: 'claude-4.5-sonnet', icon: 'brain' },
// BrainCircuit models
{ value: 'claude-4.1-opus', label: 'claude-4.1-opus', icon: 'brainCircuit' },
]
const ALL_MODELS: ModelOption[] = [...OPENAI_MODELS, ...ANTHROPIC_MODELS]
// Default enabled/disabled state for all models
const DEFAULT_ENABLED_MODELS: Record<string, boolean> = {
'gpt-4o': false,
'gpt-4.1': false,
'gpt-5-fast': false,
'gpt-5': true,
'gpt-5-medium': true,
'gpt-5-high': false,
o3: true,
'claude-4-sonnet': true,
'claude-4.5-sonnet': true,
'claude-4.1-opus': true,
}
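Both fetch paths note that the API response is "already merged with defaults". A plausible server-side merge using this table as the base (an assumption — the `/api/copilot/user-models` route is not part of this diff):

```ts
// User overrides win; defaults fill in any model the user has never toggled.
function mergeWithDefaults(
  userPrefs: Record<string, boolean> | null | undefined
): Record<string, boolean> {
  return { ...DEFAULT_ENABLED_MODELS, ...(userPrefs ?? {}) }
}
```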
const getModelIcon = (iconType: 'brain' | 'brainCircuit' | 'zap') => {
switch (iconType) {
case 'brainCircuit':
return <BrainCircuit className='h-3.5 w-3.5 text-muted-foreground' />
case 'brain':
return <Brain className='h-3.5 w-3.5 text-muted-foreground' />
case 'zap':
return <Zap className='h-3.5 w-3.5 text-muted-foreground' />
}
}
export function Copilot() {
const [keys, setKeys] = useState<CopilotKey[]>([])
const [isLoading, setIsLoading] = useState(true)
const [searchTerm, setSearchTerm] = useState('')
const [enabledModelsMap, setEnabledModelsMap] = useState<Record<string, boolean>>({})
const [isModelsLoading, setIsModelsLoading] = useState(true)
const hasFetchedModels = useRef(false)
const { setEnabledModels: setStoreEnabledModels } = useCopilotStore()
// Create flow state
const [showNewKeyDialog, setShowNewKeyDialog] = useState(false)
const [newKey, setNewKey] = useState<string | null>(null)
const [isCreatingKey] = useState(false)
const [newKeyCopySuccess, setNewKeyCopySuccess] = useState(false)
// Delete flow state
const [deleteKey, setDeleteKey] = useState<CopilotKey | null>(null)
const [showDeleteDialog, setShowDeleteDialog] = useState(false)
// Filter keys based on search term (by masked display value)
const filteredKeys = keys.filter((key) =>
key.displayKey.toLowerCase().includes(searchTerm.toLowerCase())
)
const fetchKeys = useCallback(async () => {
try {
setIsLoading(true)
@@ -58,9 +111,41 @@ export function Copilot() {
}
}, [])
const fetchEnabledModels = useCallback(async () => {
if (hasFetchedModels.current) return
hasFetchedModels.current = true
try {
setIsModelsLoading(true)
const res = await fetch('/api/copilot/user-models')
if (!res.ok) throw new Error(`Failed to fetch: ${res.status}`)
const data = await res.json()
const modelsMap = data.enabledModels || DEFAULT_ENABLED_MODELS
setEnabledModelsMap(modelsMap)
// Convert to array for store (API already merged with defaults)
const enabledArray = Object.entries(modelsMap)
.filter(([_, enabled]) => enabled)
.map(([modelId]) => modelId)
setStoreEnabledModels(enabledArray)
} catch (error) {
logger.error('Failed to fetch enabled models', { error })
setEnabledModelsMap(DEFAULT_ENABLED_MODELS)
setStoreEnabledModels(
Object.keys(DEFAULT_ENABLED_MODELS).filter((key) => DEFAULT_ENABLED_MODELS[key])
)
} finally {
setIsModelsLoading(false)
}
}, [setStoreEnabledModels])
useEffect(() => {
fetchKeys()
}, [fetchKeys])
if (isHosted) {
fetchKeys()
}
fetchEnabledModels()
}, [])
const onGenerate = async () => {
try {
@@ -102,63 +187,97 @@ export function Copilot() {
}
}
const onCopy = async (value: string, keyId?: string) => {
const onCopy = async (value: string) => {
try {
await navigator.clipboard.writeText(value)
if (!keyId) {
setNewKeyCopySuccess(true)
setTimeout(() => setNewKeyCopySuccess(false), 1500)
}
setNewKeyCopySuccess(true)
setTimeout(() => setNewKeyCopySuccess(false), 1500)
} catch (error) {
logger.error('Copy failed', { error })
}
}
const toggleModel = async (modelValue: string, enabled: boolean) => {
const newModelsMap = { ...enabledModelsMap, [modelValue]: enabled }
setEnabledModelsMap(newModelsMap)
// Convert to array for store
const enabledArray = Object.entries(newModelsMap)
.filter(([_, isEnabled]) => isEnabled)
.map(([modelId]) => modelId)
setStoreEnabledModels(enabledArray)
try {
const res = await fetch('/api/copilot/user-models', {
method: 'PUT',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ enabledModels: newModelsMap }),
})
if (!res.ok) {
throw new Error('Failed to update models')
}
} catch (error) {
logger.error('Failed to update enabled models', { error })
// Revert on error
setEnabledModelsMap(enabledModelsMap)
const revertedArray = Object.entries(enabledModelsMap)
.filter(([_, isEnabled]) => isEnabled)
.map(([modelId]) => modelId)
setStoreEnabledModels(revertedArray)
}
}
const enabledCount = Object.values(enabledModelsMap).filter(Boolean).length
const totalCount = ALL_MODELS.length
return (
<div className='relative flex h-full flex-col'>
{/* Fixed Header */}
<div className='px-6 pt-4 pb-2'>
{/* Search Input */}
{isLoading ? (
<Skeleton className='h-9 w-56 rounded-lg' />
) : (
<div className='flex h-9 w-56 items-center gap-2 rounded-lg border bg-transparent pr-2 pl-3'>
<Search className='h-4 w-4 flex-shrink-0 text-muted-foreground' strokeWidth={2} />
<Input
placeholder='Search API keys...'
value={searchTerm}
onChange={(e) => setSearchTerm(e.target.value)}
className='flex-1 border-0 bg-transparent px-0 font-[380] font-sans text-base text-foreground leading-none placeholder:text-muted-foreground focus-visible:ring-0 focus-visible:ring-offset-0'
/>
</div>
)}
</div>
{/* Sticky Header with API Keys (only for hosted) */}
{isHosted && (
<div className='sticky top-0 z-10 border-b bg-background px-6 py-4'>
<div className='space-y-3'>
{/* API Keys Header */}
<div className='flex items-center justify-between'>
<div>
<h3 className='font-semibold text-foreground text-sm'>API Keys</h3>
<p className='text-muted-foreground text-xs'>
Generate keys for programmatic access
</p>
</div>
<Button
onClick={onGenerate}
variant='ghost'
size='sm'
className='h-8 rounded-[8px] border bg-background px-3 shadow-xs hover:bg-muted focus:outline-none focus-visible:ring-0 focus-visible:ring-offset-0'
disabled={isLoading}
>
<Plus className='h-3.5 w-3.5 stroke-[2px]' />
Create
</Button>
</div>
{/* Scrollable Content */}
<div className='scrollbar-thin scrollbar-thumb-muted scrollbar-track-transparent min-h-0 flex-1 overflow-y-auto px-6'>
<div className='h-full space-y-2 py-2'>
{isLoading ? (
{/* API Keys List */}
<div className='space-y-2'>
<CopilotKeySkeleton />
<CopilotKeySkeleton />
<CopilotKeySkeleton />
</div>
) : keys.length === 0 ? (
<div className='flex h-full items-center justify-center text-muted-foreground text-sm'>
Click "Generate Key" below to get started
</div>
) : (
<div className='space-y-2'>
{filteredKeys.map((k) => (
<div key={k.id} className='flex flex-col gap-2'>
<Label className='font-normal text-muted-foreground text-xs uppercase'>
Copilot API Key
</Label>
<div className='flex items-center justify-between gap-4'>
<div className='flex items-center gap-3'>
<div className='flex h-8 items-center rounded-[8px] bg-muted px-3'>
<code className='font-mono text-foreground text-xs'>{k.displayKey}</code>
</div>
{isLoading ? (
<>
<CopilotKeySkeleton />
<CopilotKeySkeleton />
</>
) : keys.length === 0 ? (
<div className='py-3 text-center text-muted-foreground text-xs'>
No API keys yet
</div>
) : (
keys.map((k) => (
<div
key={k.id}
className='flex items-center justify-between gap-4 rounded-lg border bg-muted/30 px-3 py-2'
>
<div className='flex min-w-0 items-center gap-3'>
<code className='truncate font-mono text-foreground text-xs'>
{k.displayKey}
</code>
</div>
<Button
@@ -168,44 +287,103 @@ export function Copilot() {
setDeleteKey(k)
setShowDeleteDialog(true)
}}
className='h-8 text-muted-foreground hover:text-foreground'
className='h-7 flex-shrink-0 text-muted-foreground text-xs hover:text-foreground'
>
Delete
</Button>
</div>
</div>
))}
{/* Show message when search has no results but there are keys */}
{searchTerm.trim() && filteredKeys.length === 0 && keys.length > 0 && (
<div className='py-8 text-center text-muted-foreground text-sm'>
No API keys found matching "{searchTerm}"
</div>
))
)}
</div>
)}
</div>
</div>
</div>
)}
{/* Footer */}
<div className='bg-background'>
<div className='flex w-full items-center justify-between px-6 py-4'>
{isLoading ? (
<>
<Skeleton className='h-9 w-[117px] rounded-[8px]' />
<div className='w-[108px]' />
</>
{/* Scrollable Content - Models Section */}
<div className='scrollbar-thin scrollbar-thumb-muted scrollbar-track-transparent flex-1 overflow-y-auto px-6 py-4'>
<div className='space-y-3'>
{/* Models Header */}
<div>
<h3 className='font-semibold text-foreground text-sm'>Models</h3>
<div className='text-muted-foreground text-xs'>
{isModelsLoading ? (
<Skeleton className='mt-0.5 h-3 w-32' />
) : (
<span>
{enabledCount} of {totalCount} enabled
</span>
)}
</div>
</div>
{/* Models List */}
{isModelsLoading ? (
<div className='space-y-2'>
{[1, 2, 3, 4, 5].map((i) => (
<div key={i} className='flex items-center justify-between py-1.5'>
<Skeleton className='h-4 w-32' />
<Skeleton className='h-5 w-9 rounded-full' />
</div>
))}
</div>
) : (
<>
<Button
onClick={onGenerate}
variant='ghost'
className='h-9 rounded-[8px] border bg-background px-3 shadow-xs hover:bg-muted focus:outline-none focus-visible:ring-0 focus-visible:ring-offset-0'
disabled={isLoading}
>
<Plus className='h-4 w-4 stroke-[2px]' />
Create Key
</Button>
</>
<div className='space-y-4'>
{/* OpenAI Models */}
<div>
<div className='mb-2 px-2 font-medium text-[10px] text-muted-foreground uppercase'>
OpenAI
</div>
<div className='space-y-1'>
{OPENAI_MODELS.map((model) => {
const isEnabled = enabledModelsMap[model.value] ?? false
return (
<div
key={model.value}
className='-mx-2 flex items-center justify-between rounded px-2 py-1.5 hover:bg-muted/50'
>
<div className='flex items-center gap-2'>
{getModelIcon(model.icon)}
<span className='text-foreground text-sm'>{model.label}</span>
</div>
<Switch
checked={isEnabled}
onCheckedChange={(checked) => toggleModel(model.value, checked)}
className='scale-90'
/>
</div>
)
})}
</div>
</div>
{/* Anthropic Models */}
<div>
<div className='mb-2 px-2 font-medium text-[10px] text-muted-foreground uppercase'>
Anthropic
</div>
<div className='space-y-1'>
{ANTHROPIC_MODELS.map((model) => {
const isEnabled = enabledModelsMap[model.value] ?? false
return (
<div
key={model.value}
className='-mx-2 flex items-center justify-between rounded px-2 py-1.5 hover:bg-muted/50'
>
<div className='flex items-center gap-2'>
{getModelIcon(model.icon)}
<span className='text-foreground text-sm'>{model.label}</span>
</div>
<Switch
checked={isEnabled}
onCheckedChange={(checked) => toggleModel(model.value, checked)}
className='scale-90'
/>
</div>
)
})}
</div>
</div>
</div>
)}
</div>
</div>
@@ -292,15 +470,9 @@ export function Copilot() {
function CopilotKeySkeleton() {
return (
<div className='flex flex-col gap-2'>
<Skeleton className='h-4 w-32' />
<div className='flex items-center justify-between gap-4'>
<div className='flex items-center gap-3'>
<Skeleton className='h-8 w-20 rounded-[8px]' />
<Skeleton className='h-4 w-24' />
</div>
<Skeleton className='h-8 w-16' />
</div>
<div className='flex items-center justify-between gap-4 rounded-lg border bg-muted/30 px-3 py-2'>
<Skeleton className='h-4 w-48' />
<Skeleton className='h-7 w-14' />
</div>
)
}
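
The `toggleModel` handler above applies an optimistic update: flip the local state first, persist with a PUT, and roll back to the previous snapshot if the request fails. A minimal standalone sketch of the same pattern (the endpoint is the one used above; `persistEnabledModels` and `toggleWithRollback` are illustrative names, not from the codebase):

```ts
// Minimal sketch of the optimistic-update pattern used by toggleModel.
type ModelMap = Record<string, boolean>

async function persistEnabledModels(models: ModelMap): Promise<void> {
  const res = await fetch('/api/copilot/user-models', {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ enabledModels: models }),
  })
  if (!res.ok) throw new Error(`Failed to update models: ${res.status}`)
}

async function toggleWithRollback(
  current: ModelMap,
  setState: (next: ModelMap) => void,
  model: string,
  enabled: boolean
): Promise<void> {
  const next = { ...current, [model]: enabled }
  setState(next) // optimistic: the UI reflects the toggle immediately
  try {
    await persistEnabledModels(next)
  } catch {
    setState(current) // revert to the captured snapshot on failure
  }
}
```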

View File

@@ -18,6 +18,8 @@ import { getEnv, isTruthy } from '@/lib/env'
import { isHosted } from '@/lib/environment'
import { cn } from '@/lib/utils'
import { useOrganizationStore } from '@/stores/organization'
import { useGeneralStore } from '@/stores/settings/general/store'
import { useSubscriptionStore } from '@/stores/subscription/store'
const isBillingEnabled = isTruthy(getEnv('NEXT_PUBLIC_BILLING_ENABLED'))
@@ -94,7 +96,7 @@ const allNavigationItems: NavigationItem[] = [
},
{
id: 'copilot',
label: 'Copilot Keys',
label: 'Copilot',
icon: Bot,
},
{
@@ -161,9 +163,6 @@ export function SettingsNavigation({
}, [userId, isHosted])
const navigationItems = allNavigationItems.filter((item) => {
if (item.id === 'copilot' && !isHosted) {
return false
}
if (item.hideWhenBillingDisabled && !isBillingEnabled) {
return false
}
@@ -200,6 +199,21 @@ export function SettingsNavigation({
{navigationItems.map((item) => (
<div key={item.id} className='mb-1'>
<button
onMouseEnter={() => {
switch (item.id) {
case 'general':
useGeneralStore.getState().loadSettings()
break
case 'subscription':
useSubscriptionStore.getState().loadData()
break
case 'team':
useOrganizationStore.getState().loadData()
break
default:
break
}
}}
onClick={() => onSectionChange(item.id)}
className={cn(
'group flex h-9 w-full cursor-pointer items-center rounded-[8px] px-2 py-2 font-medium font-sans text-sm transition-colors',
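
Prefetching on `onMouseEnter` warms each section's store before the user clicks; because zustand exposes actions via `getState()`, the handler can trigger loads without subscribing the navigation component to those stores. A hedged sketch of the idea with a generic store (the store shape here is an assumption):

```ts
// Sketch of hover-prefetch via zustand's getState(); the store fields are illustrative.
import { create } from 'zustand'

interface SettingsState {
  loaded: boolean
  loadSettings: () => Promise<void>
}

const useSettingsStore = create<SettingsState>((set, get) => ({
  loaded: false,
  loadSettings: async () => {
    if (get().loaded) return // idempotent: safe to fire on every hover
    const res = await fetch('/api/users/me/settings')
    if (res.ok) set({ loaded: true })
  },
}))

// In JSX: <button onMouseEnter={() => useSettingsStore.getState().loadSettings()} />
// Calling getState() avoids a React subscription, so hovering never re-renders the nav.
```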

View File

@@ -21,6 +21,7 @@ import {
getVisiblePlans,
} from '@/app/workspace/[workspaceId]/w/components/sidebar/components/settings-modal/components/subscription/subscription-permissions'
import { useOrganizationStore } from '@/stores/organization'
import { useGeneralStore } from '@/stores/settings/general/store'
import { useSubscriptionStore } from '@/stores/subscription/store'
const CONSTANTS = {
@@ -531,32 +532,14 @@ export function Subscription({ onOpenChange }: SubscriptionProps) {
}
function BillingUsageNotificationsToggle() {
const [enabled, setEnabled] = useState<boolean | null>(null)
const isLoading = useGeneralStore((s) => s.isBillingUsageNotificationsLoading)
const enabled = useGeneralStore((s) => s.isBillingUsageNotificationsEnabled)
const setEnabled = useGeneralStore((s) => s.setBillingUsageNotificationsEnabled)
const loadSettings = useGeneralStore((s) => s.loadSettings)
useEffect(() => {
let isMounted = true
const load = async () => {
const res = await fetch('/api/users/me/settings')
const json = await res.json()
const current = json?.data?.billingUsageNotificationsEnabled
if (isMounted) setEnabled(current !== false)
}
load()
return () => {
isMounted = false
}
}, [])
const update = async (next: boolean) => {
setEnabled(next)
await fetch('/api/users/me/settings', {
method: 'PATCH',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ billingUsageNotificationsEnabled: next }),
})
}
if (enabled === null) return null
void loadSettings()
}, [loadSettings])
return (
<div className='mt-4 flex items-center justify-between'>
@@ -564,7 +547,13 @@ function BillingUsageNotificationsToggle() {
<span className='font-medium text-sm'>Usage notifications</span>
<span className='text-muted-foreground text-xs'>Email me when I reach 80% usage</span>
</div>
<Switch checked={enabled} onCheckedChange={(v: boolean) => update(v)} />
<Switch
checked={!!enabled}
disabled={isLoading}
onCheckedChange={(v: boolean) => {
void setEnabled(v)
}}
/>
</div>
)
}
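
The toggle now reads and writes through `useGeneralStore` instead of owning its own fetch/PATCH lifecycle, so any other consumer of the setting stays in sync. A minimal sketch of what such a store slice might look like, reconstructed from the removed inline logic above (the selector names are from the diff; internals like the initial loading value are assumptions):

```ts
// Hypothetical slice matching the selectors used above.
import { create } from 'zustand'

interface GeneralState {
  isBillingUsageNotificationsLoading: boolean
  isBillingUsageNotificationsEnabled: boolean
  loadSettings: () => Promise<void>
  setBillingUsageNotificationsEnabled: (next: boolean) => Promise<void>
}

export const useGeneralStore = create<GeneralState>((set) => ({
  isBillingUsageNotificationsLoading: true, // assumed default
  isBillingUsageNotificationsEnabled: true,
  loadSettings: async () => {
    const res = await fetch('/api/users/me/settings')
    const json = await res.json()
    set({
      // same default-on semantics as the removed inline fetch (current !== false)
      isBillingUsageNotificationsEnabled: json?.data?.billingUsageNotificationsEnabled !== false,
      isBillingUsageNotificationsLoading: false,
    })
  },
  setBillingUsageNotificationsEnabled: async (next) => {
    set({ isBillingUsageNotificationsEnabled: next })
    await fetch('/api/users/me/settings', {
      method: 'PATCH',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ billingUsageNotificationsEnabled: next }),
    })
  },
}))
```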

View File

@@ -3,7 +3,6 @@
import { useEffect, useRef, useState } from 'react'
import { Dialog, DialogContent, DialogHeader, DialogTitle } from '@/components/ui'
import { getEnv, isTruthy } from '@/lib/env'
import { isHosted } from '@/lib/environment'
import { createLogger } from '@/lib/logs/console/logger'
import {
Account,
@@ -181,7 +180,7 @@ export function SettingsModal({ open, onOpenChange }: SettingsModalProps) {
<SSO />
</div>
)}
{isHosted && activeSection === 'copilot' && (
{activeSection === 'copilot' && (
<div className='h-full'>
<Copilot />
</div>

View File

@@ -32,6 +32,7 @@ import {
getKeyboardShortcutText,
useGlobalShortcuts,
} from '@/app/workspace/[workspaceId]/w/hooks/use-keyboard-shortcuts'
import { useKnowledgeBasesList } from '@/hooks/use-knowledge'
import { useSubscriptionStore } from '@/stores/subscription/store'
import { useWorkflowDiffStore } from '@/stores/workflow-diff/store'
import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
@@ -115,6 +116,9 @@ export function Sidebar() {
const [templates, setTemplates] = useState<TemplateData[]>([])
const [isTemplatesLoading, setIsTemplatesLoading] = useState(false)
// Knowledge bases for search modal
const { knowledgeBases } = useKnowledgeBasesList(workspaceId)
// Refs
const workflowScrollAreaRef = useRef<HTMLDivElement | null>(null)
const workspaceIdRef = useRef<string>(workspaceId)
@@ -726,6 +730,17 @@ export function Sidebar() {
}))
}, [workspaces, workspaceId])
// Prepare knowledge bases for search modal
const searchKnowledgeBases = useMemo(() => {
return knowledgeBases.map((kb) => ({
id: kb.id,
name: kb.name,
description: kb.description,
href: `/workspace/${workspaceId}/knowledge/${kb.id}`,
isCurrent: knowledgeBaseId === kb.id,
}))
}, [knowledgeBases, workspaceId, knowledgeBaseId])
// Create workflow handler
const handleCreateWorkflow = async (folderId?: string): Promise<string> => {
if (isCreatingWorkflow) {
@@ -1035,10 +1050,9 @@ export function Sidebar() {
<SearchModal
open={showSearchModal}
onOpenChange={setShowSearchModal}
templates={templates}
workflows={searchWorkflows}
workspaces={searchWorkspaces}
loading={isTemplatesLoading}
knowledgeBases={searchKnowledgeBases}
isOnWorkflowPage={isOnWorkflowPage}
/>
</>

View File

@@ -3,6 +3,7 @@ import type { BlockConfig } from '@/blocks/types'
export const ApiTriggerBlock: BlockConfig = {
type: 'api_trigger',
triggerAllowed: true,
name: 'API',
description: 'Expose as HTTP API endpoint',
longDescription:

View File

@@ -7,6 +7,7 @@ const ChatTriggerIcon = (props: SVGProps<SVGSVGElement>) => createElement(Messag
export const ChatTriggerBlock: BlockConfig = {
type: 'chat_trigger',
triggerAllowed: true,
name: 'Chat',
description: 'Start workflow from a chat deployment',
longDescription: 'Chat trigger to run the workflow via deployed chat interfaces.',

View File

@@ -7,6 +7,7 @@ const InputTriggerIcon = (props: SVGProps<SVGSVGElement>) => createElement(FormI
export const InputTriggerBlock: BlockConfig = {
type: 'input_trigger',
triggerAllowed: true,
name: 'Input Form',
description: 'Start workflow manually with a defined input schema',
longDescription:

View File

@@ -7,6 +7,7 @@ const ManualTriggerIcon = (props: SVGProps<SVGSVGElement>) => createElement(Play
export const ManualTriggerBlock: BlockConfig = {
type: 'manual_trigger',
triggerAllowed: true,
name: 'Manual',
description: 'Start workflow manually from the editor',
longDescription:

View File

@@ -7,6 +7,7 @@ const ScheduleIcon = (props: SVGProps<SVGSVGElement>) => createElement(Clock, pr
export const ScheduleBlock: BlockConfig = {
type: 'schedule',
triggerAllowed: true,
name: 'Schedule',
description: 'Trigger workflow execution on a schedule',
longDescription:
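
Each trigger block in the five diffs above gains an explicit `name` alongside its `type`, presumably so UI surfaces can label triggers without deriving a display name from the type string. A hedged sketch of just the fields these diffs touch (the block type here is hypothetical; real configs define many more properties):

```ts
import type { BlockConfig } from '@/blocks/types'

// Only the fields visible in these diffs; real block configs carry more.
type TriggerLabelFields = Pick<BlockConfig, 'type' | 'triggerAllowed' | 'name' | 'description'>

const exampleTrigger: TriggerLabelFields = {
  type: 'example_trigger', // hypothetical type, for illustration only
  triggerAllowed: true,
  name: 'Example', // the newly added human-readable label
  description: 'Start workflow from an example trigger',
}
```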

View File

@@ -1,28 +1,50 @@
'use client'
import type { ReactNode } from 'react'
import { normalizeBlockName } from '@/stores/workflows/utils'
export interface HighlightContext {
accessiblePrefixes?: Set<string>
highlightAll?: boolean
}
const SYSTEM_PREFIXES = new Set(['start', 'loop', 'parallel', 'variable'])
/**
* Formats text by highlighting block references (<...>) and environment variables ({{...}})
* Used in code editor, long inputs, and short inputs for consistent syntax highlighting
*
* @param text The text to format
*/
export function formatDisplayText(text: string): ReactNode[] {
export function formatDisplayText(text: string, context?: HighlightContext): ReactNode[] {
if (!text) return []
const shouldHighlightPart = (part: string): boolean => {
if (!part.startsWith('<') || !part.endsWith('>')) {
return false
}
if (context?.highlightAll) {
return true
}
const inner = part.slice(1, -1)
const [prefix] = inner.split('.')
const normalizedPrefix = normalizeBlockName(prefix)
if (SYSTEM_PREFIXES.has(normalizedPrefix)) {
return true
}
if (context?.accessiblePrefixes?.has(normalizedPrefix)) {
return true
}
return false
}
const parts = text.split(/(<[^>]+>|\{\{[^}]+\}\})/g)
return parts.map((part, index) => {
if (part.startsWith('<') && part.endsWith('>')) {
return (
<span key={index} className='text-blue-500'>
{part}
</span>
)
}
if (part.match(/^\{\{[^}]+\}\}$/)) {
if (shouldHighlightPart(part) || part.match(/^\{\{[^}]+\}\}$/)) {
return (
<span key={index} className='text-blue-500'>
{part}

View File

@@ -1,13 +1,12 @@
import type React from 'react'
import { useCallback, useEffect, useMemo, useRef, useState } from 'react'
import { ChevronRight } from 'lucide-react'
import { BlockPathCalculator } from '@/lib/block-path-calculator'
import { shallow } from 'zustand/shallow'
import { extractFieldsFromSchema, parseResponseFormatSafely } from '@/lib/response-format'
import { cn } from '@/lib/utils'
import { getBlockOutputPaths, getBlockOutputType } from '@/lib/workflows/block-outputs'
import { useAccessibleReferencePrefixes } from '@/app/workspace/[workspaceId]/w/[workflowId]/hooks/use-accessible-reference-prefixes'
import { getBlock } from '@/blocks'
import type { BlockConfig } from '@/blocks/types'
import { Serializer } from '@/serializer'
import { useVariablesStore } from '@/stores/panel/variables/store'
import type { Variable } from '@/stores/panel/variables/types'
import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
@@ -25,6 +24,15 @@ interface BlockTagGroup {
distance: number
}
interface NestedBlockTagGroup extends BlockTagGroup {
nestedTags: Array<{
key: string
display: string
fullTag?: string
children?: Array<{ key: string; display: string; fullTag: string }>
}>
}
interface TagDropdownProps {
visible: boolean
onSelect: (newValue: string) => void
@@ -70,6 +78,18 @@ const normalizeVariableName = (variableName: string): string => {
return variableName.replace(/\s+/g, '')
}
const ensureRootTag = (tags: string[], rootTag: string): string[] => {
if (!rootTag) {
return tags
}
if (tags.includes(rootTag)) {
return tags
}
return [rootTag, ...tags]
}
const getSubBlockValue = (blockId: string, property: string): any => {
return useSubBlockStore.getState().getValue(blockId, property)
}
@@ -300,12 +320,27 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
const [parentHovered, setParentHovered] = useState<string | null>(null)
const [submenuHovered, setSubmenuHovered] = useState(false)
const blocks = useWorkflowStore((state) => state.blocks)
const loops = useWorkflowStore((state) => state.loops)
const parallels = useWorkflowStore((state) => state.parallels)
const edges = useWorkflowStore((state) => state.edges)
const { blocks, edges, loops, parallels } = useWorkflowStore(
(state) => ({
blocks: state.blocks,
edges: state.edges,
loops: state.loops || {},
parallels: state.parallels || {},
}),
shallow
)
const workflowId = useWorkflowRegistry((state) => state.activeWorkflowId)
const rawAccessiblePrefixes = useAccessibleReferencePrefixes(blockId)
const combinedAccessiblePrefixes = useMemo(() => {
if (!rawAccessiblePrefixes) return new Set<string>()
const normalized = new Set<string>(rawAccessiblePrefixes)
normalized.add(normalizeBlockName(blockId))
return normalized
}, [rawAccessiblePrefixes, blockId])
// Subscribe to live subblock values for the active workflow to react to input format changes
const workflowSubBlockValues = useSubBlockStore((state) =>
workflowId ? (state.workflowValues[workflowId] ?? {}) : {}
@@ -325,7 +360,6 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
)
const getVariablesByWorkflowId = useVariablesStore((state) => state.getVariablesByWorkflowId)
const variables = useVariablesStore((state) => state.variables)
const workflowVariables = workflowId ? getVariablesByWorkflowId(workflowId) : []
const searchTerm = useMemo(() => {
@@ -336,8 +370,12 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
const {
tags,
variableInfoMap = {},
blockTagGroups = [],
variableInfoMap,
blockTagGroups: computedBlockTagGroups,
}: {
tags: string[]
variableInfoMap: Record<string, { type: string; id: string }>
blockTagGroups: BlockTagGroup[]
} = useMemo(() => {
if (activeSourceBlockId) {
const sourceBlock = blocks[activeSourceBlockId]
@@ -481,6 +519,12 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
}
}
blockTags = ensureRootTag(blockTags, normalizedBlockName)
const shouldShowRootTag = sourceBlock.type === 'generic_webhook'
if (!shouldShowRootTag) {
blockTags = blockTags.filter((tag) => tag !== normalizedBlockName)
}
const blockTagGroups: BlockTagGroup[] = [
{
blockName,
@@ -507,18 +551,7 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
}
}
const serializer = new Serializer()
const serializedWorkflow = serializer.serializeWorkflow(blocks, edges, loops, parallels)
const accessibleBlockIds = BlockPathCalculator.findAllPathNodes(
serializedWorkflow.connections,
blockId
)
const starterBlock = Object.values(blocks).find((block) => block.type === 'starter')
if (starterBlock && !accessibleBlockIds.includes(starterBlock.id)) {
accessibleBlockIds.push(starterBlock.id)
}
const blockDistances: Record<string, number> = {}
if (starterBlock) {
@@ -623,6 +656,10 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
const blockTagGroups: BlockTagGroup[] = []
const allBlockTags: string[] = []
// Use the combinedAccessiblePrefixes to iterate through accessible blocks
const accessibleBlockIds = combinedAccessiblePrefixes
? Array.from(combinedAccessiblePrefixes)
: []
for (const accessibleBlockId of accessibleBlockIds) {
const accessibleBlock = blocks[accessibleBlockId]
if (!accessibleBlock) continue
@@ -648,7 +685,8 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
const normalizedBlockName = normalizeBlockName(blockName)
const outputPaths = generateOutputPaths(mockConfig.outputs)
const blockTags = outputPaths.map((path) => `${normalizedBlockName}.${path}`)
let blockTags = outputPaths.map((path) => `${normalizedBlockName}.${path}`)
blockTags = ensureRootTag(blockTags, normalizedBlockName)
blockTagGroups.push({
blockName,
@@ -750,6 +788,12 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
}
}
blockTags = ensureRootTag(blockTags, normalizedBlockName)
const shouldShowRootTag = accessibleBlock.type === 'generic_webhook'
if (!shouldShowRootTag) {
blockTags = blockTags.filter((tag) => tag !== normalizedBlockName)
}
blockTagGroups.push({
blockName,
blockId: accessibleBlockId,
@@ -781,51 +825,54 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
}
return {
tags: [...variableTags, ...contextualTags, ...allBlockTags],
tags: [...allBlockTags, ...variableTags, ...contextualTags],
variableInfoMap,
blockTagGroups: finalBlockTagGroups,
}
}, [
activeSourceBlockId,
combinedAccessiblePrefixes,
blockId,
blocks,
edges,
getMergedSubBlocks,
loops,
parallels,
blockId,
activeSourceBlockId,
workflowVariables,
workflowSubBlockValues,
getMergedSubBlocks,
workflowId,
])
const filteredTags = useMemo(() => {
if (!searchTerm) return tags
return tags.filter((tag: string) => tag.toLowerCase().includes(searchTerm))
return tags.filter((tag) => tag.toLowerCase().includes(searchTerm))
}, [tags, searchTerm])
const { variableTags, filteredBlockTagGroups } = useMemo(() => {
const varTags: string[] = []
filteredTags.forEach((tag) => {
filteredTags.forEach((tag: string) => {
if (tag.startsWith(TAG_PREFIXES.VARIABLE)) {
varTags.push(tag)
}
})
const filteredBlockTagGroups = blockTagGroups
.map((group) => ({
const filteredBlockTagGroups = computedBlockTagGroups
.map((group: BlockTagGroup) => ({
...group,
tags: group.tags.filter((tag) => !searchTerm || tag.toLowerCase().includes(searchTerm)),
tags: group.tags.filter(
(tag: string) => !searchTerm || tag.toLowerCase().includes(searchTerm)
),
}))
.filter((group) => group.tags.length > 0)
.filter((group: BlockTagGroup) => group.tags.length > 0)
return {
variableTags: varTags,
filteredBlockTagGroups,
}
}, [filteredTags, blockTagGroups, searchTerm])
}, [filteredTags, computedBlockTagGroups, searchTerm])
const nestedBlockTagGroups = useMemo(() => {
return filteredBlockTagGroups.map((group) => {
const nestedBlockTagGroups: NestedBlockTagGroup[] = useMemo(() => {
return filteredBlockTagGroups.map((group: BlockTagGroup) => {
const nestedTags: Array<{
key: string
display: string
@@ -839,7 +886,7 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
> = {}
const directTags: Array<{ key: string; display: string; fullTag: string }> = []
group.tags.forEach((tag) => {
group.tags.forEach((tag: string) => {
const tagParts = tag.split('.')
if (tagParts.length >= 3) {
const parent = tagParts[1]
@@ -899,8 +946,8 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
visualTags.push(...variableTags)
nestedBlockTagGroups.forEach((group) => {
group.nestedTags.forEach((nestedTag) => {
nestedBlockTagGroups.forEach((group: NestedBlockTagGroup) => {
group.nestedTags.forEach((nestedTag: any) => {
if (nestedTag.children && nestedTag.children.length > 0) {
const firstChild = nestedTag.children[0]
if (firstChild.fullTag) {
@@ -952,8 +999,8 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
if (tag.startsWith(TAG_PREFIXES.VARIABLE)) {
const variableName = tag.substring(TAG_PREFIXES.VARIABLE.length)
const variableObj = Object.values(variables).find(
(v) => v.name.replace(/\s+/g, '') === variableName
const variableObj = workflowVariables.find(
(v: Variable) => v.name.replace(/\s+/g, '') === variableName
)
if (variableObj) {
@@ -985,7 +1032,7 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
onSelect(newValue)
onClose?.()
},
[inputValue, cursorPosition, variables, onSelect, onClose]
[inputValue, cursorPosition, workflowVariables, onSelect, onClose]
)
useEffect(() => setSelectedIndex(0), [searchTerm])
@@ -1030,7 +1077,7 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
if (selectedIndex < 0 || selectedIndex >= orderedTags.length) return null
const selectedTag = orderedTags[selectedIndex]
for (let gi = 0; gi < nestedBlockTagGroups.length; gi++) {
const group = nestedBlockTagGroups[gi]
const group = nestedBlockTagGroups[gi]!
for (let ni = 0; ni < group.nestedTags.length; ni++) {
const nestedTag = group.nestedTags[ni]
if (nestedTag.children && nestedTag.children.length > 0) {
@@ -1051,16 +1098,16 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
return
}
const currentGroup = nestedBlockTagGroups.find((group) => {
const currentGroup = nestedBlockTagGroups.find((group: NestedBlockTagGroup) => {
return group.nestedTags.some(
(tag, index) =>
(tag: any, index: number) =>
`${group.blockId}-${tag.key}` === currentHovered.tag &&
index === currentHovered.index
)
})
const currentNestedTag = currentGroup?.nestedTags.find(
(tag, index) =>
(tag: any, index: number) =>
`${currentGroup.blockId}-${tag.key}` === currentHovered.tag &&
index === currentHovered.index
)
@@ -1089,8 +1136,8 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
e.preventDefault()
e.stopPropagation()
if (submenuIndex >= 0 && submenuIndex < children.length) {
const selectedChild = children[submenuIndex]
handleTagSelect(selectedChild.fullTag, currentGroup)
const selectedChild = children[submenuIndex] as any
handleTagSelect(selectedChild.fullTag, currentGroup as BlockTagGroup | undefined)
}
break
case 'Escape':
@@ -1324,7 +1371,7 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
{nestedBlockTagGroups.length > 0 && (
<>
{variableTags.length > 0 && <div className='my-0' />}
{nestedBlockTagGroups.map((group) => {
{nestedBlockTagGroups.map((group: NestedBlockTagGroup) => {
const blockConfig = getBlock(group.blockType)
let blockColor = blockConfig?.bgColor || BLOCK_COLORS.DEFAULT
@@ -1340,7 +1387,7 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
{group.blockName}
</div>
<div>
{group.nestedTags.map((nestedTag, index) => {
{group.nestedTags.map((nestedTag: any, index: number) => {
const tagIndex = nestedTag.fullTag
? (tagIndexMap.get(nestedTag.fullTag) ?? -1)
: -1
@@ -1505,7 +1552,7 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
}}
>
<div className='py-1'>
{nestedTag.children!.map((child, childIndex) => {
{nestedTag.children!.map((child: any, childIndex: number) => {
const isKeyboardSelected =
inSubmenu && submenuIndex === childIndex
const isSelected = isKeyboardSelected

View File

@@ -382,7 +382,6 @@ export function SocketProvider({ children, user }: SocketProviderProps) {
isDeployed: workflowState.isDeployed ?? false,
deployedAt: workflowState.deployedAt,
deploymentStatuses: workflowState.deploymentStatuses || {},
hasActiveWebhook: workflowState.hasActiveWebhook ?? false,
})
// Replace subblock store values for this workflow

View File

@@ -1,6 +1,6 @@
import { getEnv } from '@/lib/env'
import { createLogger } from '@/lib/logs/console/logger'
import { createMcpToolId } from '@/lib/mcp/utils'
import { getBaseUrl } from '@/lib/urls/utils'
import { getAllBlocks } from '@/blocks'
import type { BlockOutput } from '@/blocks/types'
import { BlockType } from '@/executor/consts'
@@ -261,8 +261,7 @@ export class AgentBlockHandler implements BlockHandler {
}
}
const appUrl = getEnv('NEXT_PUBLIC_APP_URL')
const url = new URL(`${appUrl}/api/mcp/tools/discover`)
const url = new URL('/api/mcp/tools/discover', getBaseUrl())
url.searchParams.set('serverId', serverId)
if (context.workspaceId) {
url.searchParams.set('workspaceId', context.workspaceId)
@@ -316,7 +315,7 @@ export class AgentBlockHandler implements BlockHandler {
}
}
const execResponse = await fetch(`${appUrl}/api/mcp/tools/execute`, {
const execResponse = await fetch(`${getBaseUrl()}/api/mcp/tools/execute`, {
method: 'POST',
headers,
body: JSON.stringify({
@@ -640,7 +639,7 @@ export class AgentBlockHandler implements BlockHandler {
) {
logger.info('Using HTTP provider request (browser environment)')
const url = new URL('/api/providers', getEnv('NEXT_PUBLIC_APP_URL') || '')
const url = new URL('/api/providers', getBaseUrl())
const response = await fetch(url.toString(), {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
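
Switching to `new URL(path, getBaseUrl())` centralizes the base-URL fallback in one helper and delegates joining to the URL parser, which tolerates trailing slashes on the base and leading slashes on the path, unlike template-string concatenation. For example (the host is a stand-in):

```ts
// Why new URL(path, base) beats string concatenation:
new URL('/api/mcp/tools/discover', 'https://app.example.com/').toString()
// -> 'https://app.example.com/api/mcp/tools/discover' (no double slash)

// Naive concatenation with the same inputs produces a malformed path:
'https://app.example.com/' + '/api/mcp/tools/discover'
// -> 'https://app.example.com//api/mcp/tools/discover'
```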

View File

@@ -1,5 +1,5 @@
import { env } from '@/lib/env'
import { createLogger } from '@/lib/logs/console/logger'
import { getBaseUrl } from '@/lib/urls/utils'
import { generateRouterPrompt } from '@/blocks/blocks/router'
import type { BlockOutput } from '@/blocks/types'
import { BlockType } from '@/executor/consts'
@@ -40,8 +40,7 @@ export class RouterBlockHandler implements BlockHandler {
const providerId = getProviderFromModel(routerConfig.model)
try {
const baseUrl = env.NEXT_PUBLIC_APP_URL || ''
const url = new URL('/api/providers', baseUrl)
const url = new URL('/api/providers', getBaseUrl())
// Create the provider request with proper message formatting
const messages = [{ role: 'user', content: routerConfig.prompt }]

View File

@@ -1356,7 +1356,7 @@ describe('InputResolver', () => {
expect(result.code).toBe('return "Agent response"')
})
it('should reject references to unconnected blocks', () => {
it('should leave references to unconnected blocks as strings', () => {
// Create a new block that is added to the workflow but not connected to isolated-block
workflowWithConnections.blocks.push({
id: 'test-block',
@@ -1402,9 +1402,9 @@ describe('InputResolver', () => {
enabled: true,
}
expect(() => connectionResolver.resolveInputs(testBlock, contextWithConnections)).toThrow(
/Block "isolated-block" is not connected to this block/
)
// Should not throw - inaccessible references remain as strings
const result = connectionResolver.resolveInputs(testBlock, contextWithConnections)
expect(result.code).toBe('return <isolated-block.content>') // Reference remains as-is
})
it('should always allow references to starter block', () => {
@@ -1546,7 +1546,7 @@ describe('InputResolver', () => {
expect(otherResult).toBe('content: Hello World')
})
it('should provide helpful error messages for unconnected blocks', () => {
it('should not throw for unconnected blocks and leave references as strings', () => {
// Create a test block in the workflow first
workflowWithConnections.blocks.push({
id: 'test-block-2',
@@ -1592,9 +1592,9 @@ describe('InputResolver', () => {
enabled: true,
}
expect(() => connectionResolver.resolveInputs(testBlock, contextWithConnections)).toThrow(
/Available connected blocks:.*Agent Block.*Start/
)
// Should not throw - references to nonexistent blocks remain as strings
const result = connectionResolver.resolveInputs(testBlock, contextWithConnections)
expect(result.code).toBe('return <nonexistent.value>') // Reference remains as-is
})
it('should work with block names and normalized names', () => {
@@ -1725,7 +1725,7 @@ describe('InputResolver', () => {
extendedResolver.resolveInputs(block1, extendedContext)
}).not.toThrow()
// Should fail for indirect connection
// Should not fail for indirect connection - reference remains as string
expect(() => {
// Add the response block to the workflow so it can be validated properly
extendedWorkflow.blocks.push({
@@ -1748,8 +1748,9 @@ describe('InputResolver', () => {
outputs: {},
enabled: true,
}
extendedResolver.resolveInputs(block2, extendedContext)
}).toThrow(/Block "agent-1" is not connected to this block/)
const result = extendedResolver.resolveInputs(block2, extendedContext)
expect(result.test).toBe('<agent-1.content>') // Reference remains as-is since agent-1 is not accessible
}).not.toThrow()
})
it('should handle blocks in same loop referencing each other', () => {

View File

@@ -1,11 +1,13 @@
import { BlockPathCalculator } from '@/lib/block-path-calculator'
import { createLogger } from '@/lib/logs/console/logger'
import { VariableManager } from '@/lib/variables/variable-manager'
import { extractReferencePrefixes, SYSTEM_REFERENCE_PREFIXES } from '@/lib/workflows/references'
import { TRIGGER_REFERENCE_ALIAS_MAP } from '@/lib/workflows/triggers'
import { getBlock } from '@/blocks/index'
import type { LoopManager } from '@/executor/loops/loops'
import type { ExecutionContext } from '@/executor/types'
import type { SerializedBlock, SerializedWorkflow } from '@/serializer/types'
import { normalizeBlockName } from '@/stores/workflows/utils'
const logger = createLogger('InputResolver')
@@ -461,64 +463,40 @@ export class InputResolver {
return value
}
const blockMatches = value.match(/<([^>]+)>/g)
if (!blockMatches) return value
const blockMatches = extractReferencePrefixes(value)
if (blockMatches.length === 0) return value
// Filter out patterns that are clearly not variable references (e.g., comparison operators)
const validBlockMatches = blockMatches.filter((match) => this.isValidVariableReference(match))
// If no valid matches found after filtering, return original value
if (validBlockMatches.length === 0) {
return value
}
// If we're in an API block body, check each valid match to see if it looks like XML rather than a reference
if (
currentBlock.metadata?.id === 'api' &&
validBlockMatches.some((match) => {
const innerContent = match.slice(1, -1)
// Patterns that suggest this is XML, not a block reference:
return (
innerContent.includes(':') || // namespaces like soap:Envelope
innerContent.includes('=') || // attributes like xmlns="http://..."
innerContent.includes(' ') || // any space indicates attributes
innerContent.includes('/') || // self-closing tags
!innerContent.includes('.')
) // block refs always have dots
})
) {
return value // Likely XML content, return unchanged
}
const accessiblePrefixes = this.getAccessiblePrefixes(currentBlock)
let resolvedValue = value
// Check if we're in a template literal for function blocks
const isInTemplateLiteral =
currentBlock.metadata?.id === 'function' &&
value.includes('${') &&
value.includes('}') &&
value.includes('`')
for (const match of validBlockMatches) {
// Skip variables - they've already been processed
if (match.startsWith('<variable.')) {
for (const match of blockMatches) {
const { raw, prefix } = match
if (!accessiblePrefixes.has(prefix)) {
continue
}
const path = match.slice(1, -1)
const [blockRef, ...pathParts] = path.split('.')
if (raw.startsWith('<variable.')) {
continue
}
const path = raw.slice(1, -1)
const [blockRefToken, ...pathParts] = path.split('.')
const blockRef = blockRefToken.trim()
// Skip XML-like tags (but allow block names with spaces)
if (blockRef.includes(':')) {
continue
}
// System references (start, loop, parallel, variable) are handled as special cases
const isSystemReference = ['start', 'loop', 'parallel', 'variable'].includes(
blockRef.toLowerCase()
)
// Check if we're in a template literal context
const isInTemplateLiteral =
currentBlock.metadata?.id === 'function' &&
value.includes('${') &&
value.includes('}') &&
value.includes('`')
// System references and regular block references are both processed
// System references (start, loop, parallel, variable) and regular block references are both processed
// Accessibility validation happens later in validateBlockReference
// Special case for trigger block references (start, api, chat, manual)
@@ -657,7 +635,7 @@ export class InputResolver {
}
}
resolvedValue = resolvedValue.replace(match, formattedValue)
resolvedValue = resolvedValue.replace(raw, formattedValue)
continue
}
}
@@ -678,7 +656,7 @@ export class InputResolver {
)
if (formattedValue !== null) {
resolvedValue = resolvedValue.replace(match, formattedValue)
resolvedValue = resolvedValue.replace(raw, formattedValue)
continue
}
}
@@ -699,7 +677,7 @@ export class InputResolver {
)
if (formattedValue !== null) {
resolvedValue = resolvedValue.replace(match, formattedValue)
resolvedValue = resolvedValue.replace(raw, formattedValue)
continue
}
}
@@ -723,7 +701,7 @@ export class InputResolver {
const isInActivePath = context.activeExecutionPath.has(sourceBlock.id)
if (!isInActivePath) {
resolvedValue = resolvedValue.replace(match, '')
resolvedValue = resolvedValue.replace(raw, '')
continue
}
@@ -753,14 +731,14 @@ export class InputResolver {
const isInLoop = this.loopsByBlockId.has(sourceBlock.id)
if (isInLoop) {
resolvedValue = resolvedValue.replace(match, '')
resolvedValue = resolvedValue.replace(raw, '')
continue
}
// If the block hasn't been executed and isn't in the active path,
// it means it's in an inactive branch - return empty string
if (!context.activeExecutionPath.has(sourceBlock.id)) {
resolvedValue = resolvedValue.replace(match, '')
resolvedValue = resolvedValue.replace(raw, '')
continue
}
@@ -861,7 +839,7 @@ export class InputResolver {
: String(replacementValue)
}
resolvedValue = resolvedValue.replace(match, formattedValue)
resolvedValue = resolvedValue.replace(raw, formattedValue)
}
return resolvedValue
@@ -1362,6 +1340,18 @@ export class InputResolver {
if (!sourceBlock) {
const normalizedRef = this.normalizeBlockName(blockRef)
sourceBlock = this.blockByNormalizedName.get(normalizedRef)
if (!sourceBlock) {
for (const candidate of this.workflow.blocks) {
const candidateName = candidate.metadata?.name
if (!candidateName) continue
const normalizedName = this.normalizeBlockName(candidateName)
if (normalizedName === normalizedRef) {
sourceBlock = candidate
break
}
}
}
}
if (!sourceBlock) {
@@ -2035,4 +2025,21 @@ export class InputResolver {
getContainingParallelId(blockId: string): string | undefined {
return this.parallelsByBlockId.get(blockId)
}
private getAccessiblePrefixes(block: SerializedBlock): Set<string> {
const prefixes = new Set<string>()
const accessibleBlocks = this.getAccessibleBlocks(block.id)
accessibleBlocks.forEach((blockId) => {
prefixes.add(normalizeBlockName(blockId))
const sourceBlock = this.blockById.get(blockId)
if (sourceBlock?.metadata?.name) {
prefixes.add(normalizeBlockName(sourceBlock.metadata.name))
}
})
SYSTEM_REFERENCE_PREFIXES.forEach((prefix) => prefixes.add(prefix))
return prefixes
}
}
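
Judging from its use above, `extractReferencePrefixes` replaces the old raw regex scan and returns each `<...>` token together with its normalized leading path segment, which `getAccessiblePrefixes` is then checked against. A plausible sketch, inferred only from the call sites in this diff (the real helper in `@/lib/workflows/references` may normalize differently):

```ts
// Hypothetical reconstruction of extractReferencePrefixes from its call sites.
interface ReferenceMatch {
  raw: string // the full token, e.g. '<Agent 1.content>'
  prefix: string // the normalized first path segment, e.g. 'agent1'
}

function extractReferencePrefixes(value: string): ReferenceMatch[] {
  const matches = value.match(/<([^>]+)>/g)
  if (!matches) return []
  return matches.map((raw) => {
    const [first] = raw.slice(1, -1).split('.')
    // Assumed normalization: lowercase and strip whitespace, mirroring
    // how block names appear to be normalized elsewhere in the codebase.
    return { raw, prefix: first.trim().toLowerCase().replace(/\s+/g, '') }
  })
}

extractReferencePrefixes('return <Agent 1.content> + <variable.count>')
// -> [{ raw: '<Agent 1.content>', prefix: 'agent1' },
//     { raw: '<variable.count>', prefix: 'variable' }]
```

With this shape, a reference whose prefix is not accessible is simply skipped, which is why the updated tests above expect unresolvable references to survive as literal strings instead of throwing.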

View File

@@ -2,6 +2,7 @@ import { useCallback, useEffect, useRef } from 'react'
import type { Edge } from 'reactflow'
import { useSession } from '@/lib/auth-client'
import { createLogger } from '@/lib/logs/console/logger'
import { getBlockOutputs } from '@/lib/workflows/block-outputs'
import { getBlock } from '@/blocks'
import { resolveOutputType } from '@/blocks/utils'
import { useSocket } from '@/contexts/socket-context'
@@ -479,7 +480,6 @@ export function useCollaborativeWorkflow() {
isDeployed: workflowData.state.isDeployed || false,
deployedAt: workflowData.state.deployedAt,
lastSaved: workflowData.state.lastSaved || Date.now(),
hasActiveWebhook: workflowData.state.hasActiveWebhook || false,
deploymentStatuses: workflowData.state.deploymentStatuses || {},
})
@@ -762,7 +762,11 @@ export function useCollaborativeWorkflow() {
})
}
const outputs = resolveOutputType(blockConfig.outputs)
// Get outputs based on trigger mode
const isTriggerMode = triggerMode || false
const outputs = isTriggerMode
? getBlockOutputs(type, subBlocks, isTriggerMode)
: resolveOutputType(blockConfig.outputs)
const completeBlockData = {
id,
@@ -776,7 +780,7 @@ export function useCollaborativeWorkflow() {
horizontalHandles: true,
isWide: false,
advancedMode: false,
triggerMode: triggerMode || false,
triggerMode: isTriggerMode,
height: 0, // Default height, will be set by the UI
parentId,
extent,

View File

@@ -5,7 +5,34 @@
* It respects the user's telemetry preferences stored in localStorage.
*
*/
import { env } from './lib/env'
import posthog from 'posthog-js'
import { env, getEnv, isTruthy } from './lib/env'
// Initialize PostHog only if explicitly enabled
if (isTruthy(getEnv('NEXT_PUBLIC_POSTHOG_ENABLED')) && getEnv('NEXT_PUBLIC_POSTHOG_KEY')) {
posthog.init(getEnv('NEXT_PUBLIC_POSTHOG_KEY')!, {
api_host: '/ingest',
ui_host: 'https://us.posthog.com',
person_profiles: 'identified_only',
capture_pageview: true,
capture_pageleave: true,
capture_performance: true,
session_recording: {
maskAllInputs: false,
maskInputOptions: {
password: true,
email: false,
},
recordCrossOriginIframes: false,
recordHeaders: true,
recordBody: true,
},
autocapture: true,
capture_dead_clicks: true,
persistence: 'localStorage+cookie',
enable_heatmaps: true,
})
}
if (typeof window !== 'undefined') {
const TELEMETRY_STATUS_KEY = 'simstudio-telemetry-status'
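
Setting `api_host: '/ingest'` implies the app proxies PostHog through its own origin, and per the changelog entry the Next.js rewrites for those routes are now registered unconditionally. A sketch of what such rewrites typically look like, following PostHog's documented reverse-proxy pattern (the exact rules in this repo may differ):

```ts
// Sketch of a Next.js reverse proxy matching api_host: '/ingest'.
// Destinations follow PostHog's US-cloud proxy docs; this repo's rules may vary.
const nextConfig = {
  async rewrites() {
    return [
      {
        source: '/ingest/static/:path*',
        destination: 'https://us-assets.i.posthog.com/static/:path*',
      },
      {
        source: '/ingest/:path*',
        destination: 'https://us.i.posthog.com/:path*',
      },
    ]
  },
}

export default nextConfig
```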

View File

@@ -14,19 +14,7 @@ import { isBillingEnabled } from '@/lib/environment'
import { SessionContext, type SessionHookResult } from '@/lib/session/session-context'
export function getBaseURL() {
let baseURL
if (env.VERCEL_ENV === 'preview') {
baseURL = `https://${getEnv('NEXT_PUBLIC_VERCEL_URL')}`
} else if (env.VERCEL_ENV === 'development') {
baseURL = `https://${getEnv('NEXT_PUBLIC_VERCEL_URL')}`
} else if (env.VERCEL_ENV === 'production') {
baseURL = env.BETTER_AUTH_URL || getEnv('NEXT_PUBLIC_APP_URL')
} else if (env.NODE_ENV === 'development') {
baseURL = getEnv('NEXT_PUBLIC_APP_URL') || env.BETTER_AUTH_URL || 'http://localhost:3000'
}
return baseURL
return getEnv('NEXT_PUBLIC_APP_URL') || 'http://localhost:3000'
}
export const client = createAuthClient({

View File

@@ -63,7 +63,6 @@ export const auth = betterAuth({
baseURL: getBaseURL(),
trustedOrigins: [
env.NEXT_PUBLIC_APP_URL,
...(env.NEXT_PUBLIC_VERCEL_URL ? [`https://${env.NEXT_PUBLIC_VERCEL_URL}`] : []),
...(env.NEXT_PUBLIC_SOCKET_URL ? [env.NEXT_PUBLIC_SOCKET_URL] : []),
].filter(Boolean),
database: drizzleAdapter(db, {

View File

@@ -137,15 +137,29 @@ export async function handleInvoicePaymentSucceeded(event: Stripe.Event) {
/**
* Handle invoice payment failed webhook
* This is triggered when a user's payment fails for a usage billing invoice
* This is triggered when a user's payment fails for any invoice (subscription or overage)
*/
export async function handleInvoicePaymentFailed(event: Stripe.Event) {
try {
const invoice = event.data.object as Stripe.Invoice
// Check if this is an overage billing invoice
if (invoice.metadata?.type !== 'overage_billing') {
logger.info('Ignoring non-overage billing invoice payment failure', { invoiceId: invoice.id })
const isOverageInvoice = invoice.metadata?.type === 'overage_billing'
let stripeSubscriptionId: string | undefined
if (isOverageInvoice) {
// Overage invoices store subscription ID in metadata
stripeSubscriptionId = invoice.metadata?.subscriptionId as string | undefined
} else {
// Regular subscription invoices have it in parent.subscription_details
const subscription = invoice.parent?.subscription_details?.subscription
stripeSubscriptionId = typeof subscription === 'string' ? subscription : subscription?.id
}
if (!stripeSubscriptionId) {
logger.info('No subscription found on invoice; skipping payment failed handler', {
invoiceId: invoice.id,
isOverageInvoice,
})
return
}
@@ -154,7 +168,7 @@ export async function handleInvoicePaymentFailed(event: Stripe.Event) {
const billingPeriod = invoice.metadata?.billingPeriod || 'unknown'
const attemptCount = invoice.attempt_count || 1
logger.warn('Overage billing invoice payment failed', {
logger.warn('Invoice payment failed', {
invoiceId: invoice.id,
customerId,
failedAmount,
@@ -162,47 +176,59 @@ export async function handleInvoicePaymentFailed(event: Stripe.Event) {
attemptCount,
customerEmail: invoice.customer_email,
hostedInvoiceUrl: invoice.hosted_invoice_url,
isOverageInvoice,
invoiceType: isOverageInvoice ? 'overage' : 'subscription',
})
// Implement dunning management logic here
// For example: suspend service after multiple failures, notify admins, etc.
// Block users after first payment failure
if (attemptCount >= 1) {
logger.error('Multiple payment failures for overage billing', {
logger.error('Payment failure - blocking users', {
invoiceId: invoice.id,
customerId,
attemptCount,
isOverageInvoice,
stripeSubscriptionId,
})
// Block all users under this customer (org members or individual)
// Overage invoices are manual invoices without parent.subscription_details
// We store the subscription ID in metadata when creating them
const stripeSubscriptionId = invoice.metadata?.subscriptionId as string | undefined
if (stripeSubscriptionId) {
const records = await db
.select()
.from(subscriptionTable)
.where(eq(subscriptionTable.stripeSubscriptionId, stripeSubscriptionId))
.limit(1)
if (records.length > 0) {
const sub = records[0]
if (sub.plan === 'team' || sub.plan === 'enterprise') {
const members = await db
.select({ userId: member.userId })
.from(member)
.where(eq(member.organizationId, sub.referenceId))
for (const m of members) {
await db
.update(userStats)
.set({ billingBlocked: true })
.where(eq(userStats.userId, m.userId))
}
} else {
const records = await db
.select()
.from(subscriptionTable)
.where(eq(subscriptionTable.stripeSubscriptionId, stripeSubscriptionId))
.limit(1)
if (records.length > 0) {
const sub = records[0]
if (sub.plan === 'team' || sub.plan === 'enterprise') {
const members = await db
.select({ userId: member.userId })
.from(member)
.where(eq(member.organizationId, sub.referenceId))
for (const m of members) {
await db
.update(userStats)
.set({ billingBlocked: true })
.where(eq(userStats.userId, sub.referenceId))
.where(eq(userStats.userId, m.userId))
}
logger.info('Blocked team/enterprise members due to payment failure', {
organizationId: sub.referenceId,
memberCount: members.length,
isOverageInvoice,
})
} else {
await db
.update(userStats)
.set({ billingBlocked: true })
.where(eq(userStats.userId, sub.referenceId))
logger.info('Blocked user due to payment failure', {
userId: sub.referenceId,
isOverageInvoice,
})
}
} else {
logger.warn('Subscription not found in database for failed payment', {
stripeSubscriptionId,
invoiceId: invoice.id,
})
}
}
} catch (error) {
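
The handler now derives the subscription ID from either place Stripe can put it: manually created overage invoices carry it in `metadata`, while regular subscription invoices expose it under `parent.subscription_details`, where it may arrive as a bare ID or an expanded object. Distilled from the code above:

```ts
import type Stripe from 'stripe'

// Distilled from the handler above: resolve the subscription ID for either invoice type.
// (invoice.parent requires a recent stripe-node API version; older typings lack it.)
function resolveSubscriptionId(invoice: Stripe.Invoice): string | undefined {
  if (invoice.metadata?.type === 'overage_billing') {
    // Overage invoices are created manually, so the subscription ID lives in metadata.
    return invoice.metadata?.subscriptionId as string | undefined
  }
  // Regular subscription invoices reference it via parent.subscription_details,
  // where Stripe may return either a string ID or an expanded subscription object.
  const sub = invoice.parent?.subscription_details?.subscription
  return typeof sub === 'string' ? sub : sub?.id
}
```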

View File

@@ -1,10 +1,17 @@
import fs from 'fs/promises'
import path from 'path'
import { generateEmbeddings } from '@/lib/embeddings/utils'
import { isDev } from '@/lib/environment'
import { TextChunker } from '@/lib/knowledge/documents/chunker'
import type { DocChunk, DocsChunkerOptions, HeaderInfo } from '@/lib/knowledge/documents/types'
import { createLogger } from '@/lib/logs/console/logger'
import { TextChunker } from './text-chunker'
import type { DocChunk, DocsChunkerOptions } from './types'
interface HeaderInfo {
level: number
text: string
slug?: string
anchor?: string
position?: number
}
interface Frontmatter {
title?: string
@@ -29,7 +36,7 @@ export class DocsChunker {
overlap: options.overlap ?? 50,
})
// Use localhost docs in development, production docs otherwise
this.baseUrl = options.baseUrl ?? (isDev ? 'http://localhost:3001' : 'https://docs.sim.ai')
this.baseUrl = options.baseUrl ?? 'https://docs.sim.ai'
}
/**
@@ -108,9 +115,7 @@ export class DocsChunker {
metadata: {
startIndex: chunkStart,
endIndex: chunkEnd,
hasFrontmatter: i === 0 && content.startsWith('---'),
documentTitle: frontmatter.title,
documentDescription: frontmatter.description,
title: frontmatter.title,
},
}
@@ -200,7 +205,7 @@ export class DocsChunker {
let relevantHeader: HeaderInfo | null = null
for (const header of headers) {
if (header.position <= position) {
if (header.position !== undefined && header.position <= position) {
relevantHeader = header
} else {
break
@@ -285,53 +290,6 @@ export class DocsChunker {
return { data, content: markdownContent }
}
/**
* Split content by headers to respect document structure
*/
private splitByHeaders(
content: string
): Array<{ header: string | null; content: string; level: number }> {
const lines = content.split('\n')
const sections: Array<{ header: string | null; content: string; level: number }> = []
let currentHeader: string | null = null
let currentLevel = 0
let currentContent: string[] = []
for (const line of lines) {
const headerMatch = line.match(/^(#{1,3})\s+(.+)$/) // Only split on H1-H3, not H4-H6
if (headerMatch) {
// Save previous section
if (currentContent.length > 0) {
sections.push({
header: currentHeader,
content: currentContent.join('\n').trim(),
level: currentLevel,
})
}
// Start new section
currentHeader = line
currentLevel = headerMatch[1].length
currentContent = []
} else {
currentContent.push(line)
}
}
// Add final section
if (currentContent.length > 0) {
sections.push({
header: currentHeader,
content: currentContent.join('\n').trim(),
level: currentLevel,
})
}
return sections.filter((section) => section.content.trim().length > 0)
}
/**
* Estimate token count (rough approximation)
*/
@@ -340,175 +298,6 @@ export class DocsChunker {
return Math.ceil(text.length / 4)
}
/**
* Merge small adjacent chunks to reach target size
*/
private mergeSmallChunks(chunks: string[]): string[] {
const merged: string[] = []
let currentChunk = ''
for (const chunk of chunks) {
const currentTokens = this.estimateTokens(currentChunk)
const chunkTokens = this.estimateTokens(chunk)
// If adding this chunk would exceed target size, save current and start new
if (currentTokens > 0 && currentTokens + chunkTokens > 500) {
if (currentChunk.trim()) {
merged.push(currentChunk.trim())
}
currentChunk = chunk
} else {
// Merge with current chunk
currentChunk = currentChunk ? `${currentChunk}\n\n${chunk}` : chunk
}
}
// Add final chunk
if (currentChunk.trim()) {
merged.push(currentChunk.trim())
}
return merged
}
/**
* Chunk a section while preserving tables and structure
*/
private async chunkSection(section: {
header: string | null
content: string
level: number
}): Promise<string[]> {
const content = section.content
const header = section.header
// Check if content contains tables
const hasTable = this.containsTable(content)
if (hasTable) {
// Split by tables and handle each part
return this.splitContentWithTables(content, header)
}
// Regular chunking for text-only content
const chunks = await this.textChunker.chunk(content)
return chunks.map((chunk, index) => {
// Add header to first chunk only
if (index === 0 && header) {
return `${header}\n\n${chunk.text}`.trim()
}
return chunk.text
})
}
/**
* Check if content contains markdown tables
*/
private containsTable(content: string): boolean {
const lines = content.split('\n')
return lines.some((line, index) => {
if (line.includes('|') && line.split('|').length >= 3) {
const nextLine = lines[index + 1]
return nextLine?.includes('|') && nextLine.includes('-')
}
return false
})
}
/**
* Split content that contains tables, keeping tables intact
*/
private splitContentWithTables(content: string, header: string | null): string[] {
const lines = content.split('\n')
const chunks: string[] = []
let currentChunk: string[] = []
let inTable = false
let tableLines: string[] = []
for (let i = 0; i < lines.length; i++) {
const line = lines[i]
// Detect table start
if (line.includes('|') && line.split('|').length >= 3 && !inTable) {
const nextLine = lines[i + 1]
if (nextLine?.includes('|') && nextLine.includes('-')) {
inTable = true
// Save current chunk if it has content
if (currentChunk.length > 0 && currentChunk.join('\n').trim().length > 50) {
const chunkText = currentChunk.join('\n').trim()
const withHeader =
chunks.length === 0 && header ? `${header}\n\n${chunkText}` : chunkText
chunks.push(withHeader)
currentChunk = []
}
tableLines = [line]
continue
}
}
if (inTable) {
tableLines.push(line)
// Detect table end
if (!line.includes('|') || line.trim() === '') {
inTable = false
// Save table as its own chunk
const tableText = tableLines
.filter((l) => l.trim())
.join('\n')
.trim()
if (tableText.length > 0) {
const withHeader =
chunks.length === 0 && header ? `${header}\n\n${tableText}` : tableText
chunks.push(withHeader)
}
tableLines = []
// Start new chunk if current line has content
if (line.trim() !== '') {
currentChunk = [line]
}
}
} else {
currentChunk.push(line)
// If chunk is getting large, save it
if (this.estimateTokens(currentChunk.join('\n')) > 250) {
const chunkText = currentChunk.join('\n').trim()
if (chunkText.length > 50) {
const withHeader =
chunks.length === 0 && header ? `${header}\n\n${chunkText}` : chunkText
chunks.push(withHeader)
}
currentChunk = []
}
}
}
// Handle remaining content
if (inTable && tableLines.length > 0) {
const tableText = tableLines
.filter((l) => l.trim())
.join('\n')
.trim()
if (tableText.length > 0) {
const withHeader = chunks.length === 0 && header ? `${header}\n\n${tableText}` : tableText
chunks.push(withHeader)
}
} else if (currentChunk.length > 0) {
const chunkText = currentChunk.join('\n').trim()
if (chunkText.length > 50) {
const withHeader = chunks.length === 0 && header ? `${header}\n\n${chunkText}` : chunkText
chunks.push(withHeader)
}
}
return chunks.filter((chunk) => chunk.trim().length > 50)
}
/**
* Detect table boundaries in markdown content to avoid splitting them
*/
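
With the header-splitting and table-aware helpers deleted, the retained `estimateTokens` is the only sizing logic left in this file; it keeps the common chars/4 heuristic rather than a real tokenizer:

```ts
// The retained token estimator: a rough chars/4 heuristic, not a real tokenizer.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4)
}

estimateTokens('Hello, world!') // -> 4 (13 chars / 4, rounded up)
```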

View File

@@ -0,0 +1,5 @@
export { DocsChunker } from './docs-chunker'
export { JsonYamlChunker } from './json-yaml-chunker'
export { StructuredDataChunker } from './structured-data-chunker'
export { TextChunker } from './text-chunker'
export * from './types'
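
The new barrel file lets callers pull any chunker from one path. A hedged usage sketch combining it with the structured-data detection shown below (the `@/lib/...` alias path and the `TextChunker` option names are assumptions based on the constructors visible in these diffs):

```ts
// Usage sketch for the new barrel exports; the module path is an assumption.
import { JsonYamlChunker, TextChunker } from '@/lib/knowledge/documents/chunkers'

const raw = '{"users": [{"id": 1}, {"id": 2}]}'

// isStructuredData tries JSON.parse, then a YAML load, to route structured
// files to the structure-aware chunker instead of plain text splitting.
const chunker = JsonYamlChunker.isStructuredData(raw)
  ? new JsonYamlChunker({ chunkSize: 2000 })
  : new TextChunker({ chunkSize: 300, overlap: 50 })

const chunks = await chunker.chunk(raw)
```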

View File

@@ -0,0 +1,317 @@
import { estimateTokenCount } from '@/lib/tokenization/estimators'
import type { Chunk, ChunkerOptions } from './types'
function getTokenCount(text: string): number {
const estimate = estimateTokenCount(text)
return estimate.count
}
/**
* Configuration for JSON/YAML chunking
*/
const JSON_YAML_CHUNKING_CONFIG = {
TARGET_CHUNK_SIZE: 2000, // Target tokens per chunk
MIN_CHUNK_SIZE: 100, // Minimum tokens per chunk
MAX_CHUNK_SIZE: 3000, // Maximum tokens per chunk
MAX_DEPTH_FOR_SPLITTING: 5, // Maximum depth to traverse for splitting
}
export class JsonYamlChunker {
private chunkSize: number
private minChunkSize: number
constructor(options: ChunkerOptions = {}) {
this.chunkSize = options.chunkSize || JSON_YAML_CHUNKING_CONFIG.TARGET_CHUNK_SIZE
this.minChunkSize = options.minChunkSize || JSON_YAML_CHUNKING_CONFIG.MIN_CHUNK_SIZE
}
/**
* Check if content is structured JSON/YAML data
*/
static isStructuredData(content: string): boolean {
try {
JSON.parse(content)
return true
} catch {
try {
const yaml = require('js-yaml')
yaml.load(content)
return true
} catch {
return false
}
}
}
/**
* Chunk JSON/YAML content intelligently based on structure
*/
async chunk(content: string): Promise<Chunk[]> {
try {
return this.chunkStructuredData(JSON.parse(content))
} catch {
try {
// Mirror isStructuredData: accept YAML as well before falling back to
// plain text chunking, so YAML documents are also chunked structurally
const data = require('js-yaml').load(content)
if (data !== null && typeof data === 'object') {
return this.chunkStructuredData(data)
}
} catch {}
return this.chunkAsText(content)
}
}
/**
* Chunk structured data based on its structure
*/
private chunkStructuredData(data: any, path: string[] = []): Chunk[] {
const chunks: Chunk[] = []
if (Array.isArray(data)) {
return this.chunkArray(data, path)
}
if (typeof data === 'object' && data !== null) {
return this.chunkObject(data, path)
}
const content = JSON.stringify(data, null, 2)
const tokenCount = getTokenCount(content)
if (tokenCount >= this.minChunkSize) {
chunks.push({
text: content,
tokenCount,
metadata: {
startIndex: 0,
endIndex: content.length,
},
})
}
return chunks
}
/**
* Chunk an array intelligently
*/
private chunkArray(arr: any[], path: string[]): Chunk[] {
const chunks: Chunk[] = []
let currentBatch: any[] = []
let currentTokens = 0
const contextHeader = path.length > 0 ? `// ${path.join('.')}\n` : ''
for (let i = 0; i < arr.length; i++) {
const item = arr[i]
const itemStr = JSON.stringify(item, null, 2)
const itemTokens = getTokenCount(itemStr)
if (itemTokens > this.chunkSize) {
// Save current batch if it has items
if (currentBatch.length > 0) {
const batchContent = contextHeader + JSON.stringify(currentBatch, null, 2)
chunks.push({
text: batchContent,
tokenCount: getTokenCount(batchContent),
metadata: {
startIndex: i - currentBatch.length,
endIndex: i - 1,
},
})
currentBatch = []
currentTokens = 0
}
if (typeof item === 'object' && item !== null) {
const subChunks = this.chunkStructuredData(item, [...path, `[${i}]`])
chunks.push(...subChunks)
} else {
chunks.push({
text: contextHeader + itemStr,
tokenCount: itemTokens,
metadata: {
startIndex: i,
endIndex: i,
},
})
}
} else if (currentTokens + itemTokens > this.chunkSize && currentBatch.length > 0) {
const batchContent = contextHeader + JSON.stringify(currentBatch, null, 2)
chunks.push({
text: batchContent,
tokenCount: currentTokens,
metadata: {
startIndex: i - currentBatch.length,
endIndex: i - 1,
},
})
currentBatch = [item]
currentTokens = itemTokens
} else {
currentBatch.push(item)
currentTokens += itemTokens
}
}
if (currentBatch.length > 0) {
const batchContent = contextHeader + JSON.stringify(currentBatch, null, 2)
chunks.push({
text: batchContent,
tokenCount: currentTokens,
metadata: {
startIndex: arr.length - currentBatch.length,
endIndex: arr.length - 1,
},
})
}
return chunks
}
/**
* Chunk an object intelligently
*/
private chunkObject(obj: Record<string, any>, path: string[]): Chunk[] {
const chunks: Chunk[] = []
const entries = Object.entries(obj)
const fullContent = JSON.stringify(obj, null, 2)
const fullTokens = getTokenCount(fullContent)
if (fullTokens <= this.chunkSize) {
chunks.push({
text: fullContent,
tokenCount: fullTokens,
metadata: {
startIndex: 0,
endIndex: fullContent.length,
},
})
return chunks
}
let currentObj: Record<string, any> = {}
let currentTokens = 0
for (const [key, value] of entries) {
const valueStr = JSON.stringify({ [key]: value }, null, 2)
const valueTokens = getTokenCount(valueStr)
if (valueTokens > this.chunkSize) {
// Save current object if it has properties
if (Object.keys(currentObj).length > 0) {
const objContent = JSON.stringify(currentObj, null, 2)
chunks.push({
text: objContent,
tokenCount: currentTokens,
metadata: {
startIndex: 0,
endIndex: objContent.length,
},
})
currentObj = {}
currentTokens = 0
}
if (typeof value === 'object' && value !== null) {
const subChunks = this.chunkStructuredData(value, [...path, key])
chunks.push(...subChunks)
} else {
chunks.push({
text: valueStr,
tokenCount: valueTokens,
metadata: {
startIndex: 0,
endIndex: valueStr.length,
},
})
}
} else if (
currentTokens + valueTokens > this.chunkSize &&
Object.keys(currentObj).length > 0
) {
const objContent = JSON.stringify(currentObj, null, 2)
chunks.push({
text: objContent,
tokenCount: currentTokens,
metadata: {
startIndex: 0,
endIndex: objContent.length,
},
})
currentObj = { [key]: value }
currentTokens = valueTokens
} else {
currentObj[key] = value
currentTokens += valueTokens
}
}
if (Object.keys(currentObj).length > 0) {
const objContent = JSON.stringify(currentObj, null, 2)
chunks.push({
text: objContent,
tokenCount: currentTokens,
metadata: {
startIndex: 0,
endIndex: objContent.length,
},
})
}
return chunks
}
/**
* Fall back to text chunking if JSON parsing fails.
*/
private async chunkAsText(content: string): Promise<Chunk[]> {
const chunks: Chunk[] = []
const lines = content.split('\n')
let currentChunk = ''
let currentTokens = 0
let startIndex = 0
for (const line of lines) {
const lineTokens = getTokenCount(line)
if (currentTokens + lineTokens > this.chunkSize && currentChunk) {
chunks.push({
text: currentChunk,
tokenCount: currentTokens,
metadata: {
startIndex,
endIndex: startIndex + currentChunk.length,
},
})
startIndex += currentChunk.length + 1
currentChunk = line
currentTokens = lineTokens
} else {
currentChunk = currentChunk ? `${currentChunk}\n${line}` : line
currentTokens += lineTokens
}
}
if (currentChunk && currentTokens >= this.minChunkSize) {
chunks.push({
text: currentChunk,
tokenCount: currentTokens,
metadata: {
startIndex,
endIndex: startIndex + currentChunk.length,
},
})
}
return chunks
}
/**
* Static convenience method for chunking JSON/YAML content with caller-supplied options.
*/
static async chunkJsonYaml(content: string, options: ChunkerOptions = {}): Promise<Chunk[]> {
const chunker = new JsonYamlChunker(options)
return chunker.chunk(content)
}
}
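
A short usage sketch for the chunker above (the payload is hypothetical; everything else comes from this file):

const payload = JSON.stringify({
  users: Array.from({ length: 1000 }, (_, i) => ({ id: i, name: `user-${i}` })),
  settings: { theme: 'dark', locale: 'en-US' },
})
const chunks = await JsonYamlChunker.chunkJsonYaml(payload, { chunkSize: 2000 })
// The large `users` array is batched into roughly 2000-token chunks under a
// `// users` context header, while the small `settings` object is emitted
// whole; metadata carries item indices for array batches and character
// offsets for object chunks.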

View File

@@ -0,0 +1,220 @@
import { createLogger } from '@/lib/logs/console/logger'
import type { Chunk, StructuredDataOptions } from './types'
const logger = createLogger('StructuredDataChunker')
// Configuration for structured data chunking (CSV, XLSX, etc.)
const STRUCTURED_CHUNKING_CONFIG = {
// Target 2000-3000 tokens per chunk for better semantic meaning
TARGET_CHUNK_SIZE: 2500,
MIN_CHUNK_SIZE: 500,
MAX_CHUNK_SIZE: 4000,
// For spreadsheets, group rows together
ROWS_PER_CHUNK: 100, // Start with 100 rows per chunk
MIN_ROWS_PER_CHUNK: 20,
MAX_ROWS_PER_CHUNK: 500,
// For better embeddings quality
INCLUDE_HEADERS_IN_EACH_CHUNK: true,
MAX_HEADER_SIZE: 200, // tokens
}
/**
* Smart chunker for structured data (CSV, XLSX) that preserves semantic meaning
*/
export class StructuredDataChunker {
/**
* Chunk structured data intelligently based on rows and semantic boundaries
*/
static async chunkStructuredData(
content: string,
options: StructuredDataOptions = {}
): Promise<Chunk[]> {
const chunks: Chunk[] = []
const lines = content.split('\n').filter((line) => line.trim())
if (lines.length === 0) {
return chunks
}
// Detect headers (first line or provided)
const headerLine = options.headers?.join('\t') || lines[0]
const dataStartIndex = options.headers ? 0 : 1
// Calculate optimal rows per chunk based on content
const estimatedTokensPerRow = StructuredDataChunker.estimateTokensPerRow(
lines.slice(dataStartIndex, Math.min(10, lines.length))
)
const optimalRowsPerChunk =
StructuredDataChunker.calculateOptimalRowsPerChunk(estimatedTokensPerRow)
logger.info(
`Structured data chunking: ${lines.length} rows, ~${estimatedTokensPerRow} tokens/row, ${optimalRowsPerChunk} rows/chunk`
)
let currentChunkRows: string[] = []
let currentTokenEstimate = 0
const headerTokens = StructuredDataChunker.estimateTokens(headerLine)
let chunkStartRow = dataStartIndex
for (let i = dataStartIndex; i < lines.length; i++) {
const row = lines[i]
const rowTokens = StructuredDataChunker.estimateTokens(row)
// Check if adding this row would exceed our target
const projectedTokens =
currentTokenEstimate +
rowTokens +
(STRUCTURED_CHUNKING_CONFIG.INCLUDE_HEADERS_IN_EACH_CHUNK ? headerTokens : 0)
const shouldCreateChunk =
(projectedTokens > STRUCTURED_CHUNKING_CONFIG.TARGET_CHUNK_SIZE &&
currentChunkRows.length >= STRUCTURED_CHUNKING_CONFIG.MIN_ROWS_PER_CHUNK) ||
currentChunkRows.length >= optimalRowsPerChunk
if (shouldCreateChunk && currentChunkRows.length > 0) {
// Create chunk with current rows
const chunkContent = StructuredDataChunker.formatChunk(
headerLine,
currentChunkRows,
options.sheetName
)
chunks.push(StructuredDataChunker.createChunk(chunkContent, chunkStartRow, i - 1))
// Reset for next chunk
currentChunkRows = []
currentTokenEstimate = 0
chunkStartRow = i
}
currentChunkRows.push(row)
currentTokenEstimate += rowTokens
}
// Add remaining rows as final chunk
if (currentChunkRows.length > 0) {
const chunkContent = StructuredDataChunker.formatChunk(
headerLine,
currentChunkRows,
options.sheetName
)
chunks.push(StructuredDataChunker.createChunk(chunkContent, chunkStartRow, lines.length - 1))
}
logger.info(`Created ${chunks.length} chunks from ${lines.length} rows of structured data`)
return chunks
}
/**
* Format a chunk with headers and context
*/
private static formatChunk(headerLine: string, rows: string[], sheetName?: string): string {
let content = ''
// Add sheet name context if available
if (sheetName) {
content += `=== ${sheetName} ===\n\n`
}
// Add headers for context
if (STRUCTURED_CHUNKING_CONFIG.INCLUDE_HEADERS_IN_EACH_CHUNK) {
content += `Headers: ${headerLine}\n`
content += `${'-'.repeat(Math.min(80, headerLine.length))}\n`
}
// Add data rows
content += rows.join('\n')
// Add row count for context
content += `\n\n[${rows.length} rows of data]`
return content
}
/**
* Create a chunk object with actual row indices
*/
private static createChunk(content: string, startRow: number, endRow: number): Chunk {
const tokenCount = StructuredDataChunker.estimateTokens(content)
return {
text: content,
tokenCount,
metadata: {
startIndex: startRow,
endIndex: endRow,
},
}
}
/**
* Estimate tokens in text (rough approximation)
*/
private static estimateTokens(text: string): number {
// Rough estimate: 1 token per 4 characters for English text
// For structured data with numbers, it's closer to 1 token per 3 characters
return Math.ceil(text.length / 3)
}
/**
* Estimate average tokens per row from sample
*/
private static estimateTokensPerRow(sampleRows: string[]): number {
if (sampleRows.length === 0) return 50 // default estimate
const totalTokens = sampleRows.reduce(
(sum, row) => sum + StructuredDataChunker.estimateTokens(row),
0
)
return Math.ceil(totalTokens / sampleRows.length)
}
/**
* Calculate optimal rows per chunk based on token estimates
*/
private static calculateOptimalRowsPerChunk(tokensPerRow: number): number {
const optimal = Math.floor(STRUCTURED_CHUNKING_CONFIG.TARGET_CHUNK_SIZE / tokensPerRow)
return Math.min(
Math.max(optimal, STRUCTURED_CHUNKING_CONFIG.MIN_ROWS_PER_CHUNK),
STRUCTURED_CHUNKING_CONFIG.MAX_ROWS_PER_CHUNK
)
}
/**
* Check if content appears to be structured data
*/
static isStructuredData(content: string, mimeType?: string): boolean {
// Check mime type first
if (mimeType) {
const structuredMimeTypes = [
'text/csv',
'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
'application/vnd.ms-excel',
'text/tab-separated-values',
]
if (structuredMimeTypes.includes(mimeType)) {
return true
}
}
// Check content structure
const lines = content.split('\n').slice(0, 10) // Check first 10 lines
if (lines.length < 2) return false
// Check for consistent delimiters (comma, tab, pipe)
const delimiters = [',', '\t', '|']
for (const delimiter of delimiters) {
const counts = lines.map(
(line) => (line.match(new RegExp(`\\${delimiter}`, 'g')) || []).length
)
const avgCount = counts.reduce((a, b) => a + b, 0) / counts.length
// If most lines have similar delimiter counts, it's likely structured
if (avgCount > 2 && counts.every((c) => Math.abs(c - avgCount) <= 2)) {
return true
}
}
return false
}
}
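
A quick sketch of how the mime-type check and the delimiter heuristic combine, on a hypothetical four-column CSV:

const csv = [
  'id,name,amount,currency',
  '1,Alice,9.99,USD',
  '2,Bob,19.99,EUR',
].join('\n')
StructuredDataChunker.isStructuredData(csv, 'text/csv') // true (mime short-circuit)
StructuredDataChunker.isStructuredData(csv) // true: every line has 3 commas, so the average exceeds 2
// Note that a three-column file (two commas per line) would fail the content
// heuristic, which requires an average of more than two delimiters per line.
const chunks = await StructuredDataChunker.chunkStructuredData(csv, { sheetName: 'Orders' })
// Each chunk repeats the header line for context and records its row range
// in metadata.startIndex / metadata.endIndex.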

View File

@@ -1,28 +1,4 @@
export interface ChunkMetadata {
startIndex: number
endIndex: number
tokenCount: number
}
export interface TextChunk {
text: string
metadata: ChunkMetadata
}
export interface ChunkerOptions {
chunkSize?: number
minChunkSize?: number
overlap?: number
}
export interface Chunk {
text: string
tokenCount: number
metadata: {
startIndex: number
endIndex: number
}
}
import type { Chunk, ChunkerOptions } from './types'
/**
* Lightweight text chunker optimized for RAG applications

View File

@@ -0,0 +1,53 @@
export interface ChunkMetadata {
startIndex: number
endIndex: number
tokenCount: number
}
export interface TextChunk {
text: string
metadata: ChunkMetadata
}
export interface ChunkerOptions {
chunkSize?: number
minChunkSize?: number
overlap?: number
}
export interface Chunk {
text: string
tokenCount: number
metadata: {
startIndex: number
endIndex: number
}
}
export interface StructuredDataOptions {
headers?: string[]
totalRows?: number
sheetName?: string
}
export interface DocChunk {
text: string
tokenCount: number
sourceDocument: string
headerLink: string
headerText: string
headerLevel: number
embedding: number[]
embeddingModel: string
metadata: {
sourceUrl?: string
headers?: string[]
title?: string
startIndex: number
endIndex: number
}
}
export interface DocsChunkerOptions extends ChunkerOptions {
baseUrl?: string
}

View File

@@ -268,7 +268,6 @@ async function processWorkflowFromDb(
logger.info('Processed sanitized workflow context', {
workflowId,
blocks: Object.keys(sanitizedState.blocks || {}).length,
edges: sanitizedState.edges.length,
})
// Use the provided kind for the type
return { type: kind, tag, content }

View File

@@ -262,6 +262,14 @@ const ExecutionEntry = z.object({
totalTokens: z.number().nullable(),
blockExecutions: z.array(z.any()), // can be detailed per need
output: z.any().optional(),
errorMessage: z.string().optional(),
errorBlock: z
.object({
blockId: z.string().optional(),
blockName: z.string().optional(),
blockType: z.string().optional(),
})
.optional(),
})
export const ToolResultSchemas = {

View File

@@ -0,0 +1,37 @@
import { Loader2, MinusCircle, XCircle, Zap } from 'lucide-react'
import {
BaseClientTool,
type BaseClientToolMetadata,
ClientToolCallState,
} from '@/lib/copilot/tools/client/base-tool'
export class GetOperationsExamplesClientTool extends BaseClientTool {
static readonly id = 'get_operations_examples'
constructor(toolCallId: string) {
super(toolCallId, GetOperationsExamplesClientTool.id, GetOperationsExamplesClientTool.metadata)
}
static readonly metadata: BaseClientToolMetadata = {
displayNames: {
[ClientToolCallState.generating]: { text: 'Designing workflow component', icon: Loader2 },
[ClientToolCallState.pending]: { text: 'Designing workflow component', icon: Loader2 },
[ClientToolCallState.executing]: { text: 'Designing workflow component', icon: Loader2 },
[ClientToolCallState.success]: { text: 'Designed workflow component', icon: Zap },
[ClientToolCallState.error]: { text: 'Failed to design workflow component', icon: XCircle },
[ClientToolCallState.aborted]: {
text: 'Aborted designing workflow component',
icon: MinusCircle,
},
[ClientToolCallState.rejected]: {
text: 'Skipped designing workflow component',
icon: MinusCircle,
},
},
interrupt: undefined,
}
async execute(): Promise<void> {
return
}
}

View File

@@ -20,11 +20,11 @@ export class PlanClientTool extends BaseClientTool {
static readonly metadata: BaseClientToolMetadata = {
displayNames: {
[ClientToolCallState.generating]: { text: 'Crafting an approach', icon: Loader2 },
[ClientToolCallState.pending]: { text: 'Crafting an approach', icon: Loader2 },
[ClientToolCallState.executing]: { text: 'Crafting an approach', icon: Loader2 },
[ClientToolCallState.success]: { text: 'Crafted an approach', icon: ListTodo },
[ClientToolCallState.error]: { text: 'Failed to craft an approach', icon: X },
[ClientToolCallState.generating]: { text: 'Planning', icon: Loader2 },
[ClientToolCallState.pending]: { text: 'Planning', icon: Loader2 },
[ClientToolCallState.executing]: { text: 'Planning an approach', icon: Loader2 },
[ClientToolCallState.success]: { text: 'Finished planning', icon: ListTodo },
[ClientToolCallState.error]: { text: 'Failed to plan', icon: X },
[ClientToolCallState.aborted]: { text: 'Aborted planning', icon: XCircle },
[ClientToolCallState.rejected]: { text: 'Skipped planning approach', icon: XCircle },
},

View File

@@ -98,7 +98,35 @@ export class EditWorkflowClientTool extends BaseClientTool {
// Prepare currentUserWorkflow JSON from stores to preserve block IDs
let currentUserWorkflow = args?.currentUserWorkflow
if (!currentUserWorkflow) {
const diffStoreState = useWorkflowDiffStore.getState()
let usedDiffWorkflow = false
if (!currentUserWorkflow && diffStoreState.isDiffReady && diffStoreState.diffWorkflow) {
try {
const diffWorkflow = diffStoreState.diffWorkflow
const normalizedDiffWorkflow = {
...diffWorkflow,
blocks: diffWorkflow.blocks || {},
edges: diffWorkflow.edges || [],
loops: diffWorkflow.loops || {},
parallels: diffWorkflow.parallels || {},
}
currentUserWorkflow = JSON.stringify(normalizedDiffWorkflow)
usedDiffWorkflow = true
logger.info('Using diff workflow state as base for edit_workflow operations', {
toolCallId: this.toolCallId,
blocksCount: Object.keys(normalizedDiffWorkflow.blocks).length,
edgesCount: normalizedDiffWorkflow.edges.length,
})
} catch (e) {
logger.warn(
'Failed to serialize diff workflow state; falling back to active workflow',
e as any
)
}
}
if (!currentUserWorkflow && !usedDiffWorkflow) {
try {
const workflowStore = useWorkflowStore.getState()
const fullState = workflowStore.getWorkflowState()

View File

@@ -77,13 +77,13 @@ export interface CopilotBlockMetadata {
name: string
description: string
bestPractices?: string
commonParameters: CopilotSubblockMetadata[]
inputs?: Record<string, any>
inputSchema: CopilotSubblockMetadata[]
inputDefinitions?: Record<string, any>
triggerAllowed?: boolean
authType?: 'OAuth' | 'API Key' | 'Bot Token'
tools: CopilotToolMetadata[]
triggers: CopilotTriggerMetadata[]
operationParameters: Record<string, CopilotSubblockMetadata[]>
operationInputSchema: Record<string, CopilotSubblockMetadata[]>
operations?: Record<
string,
{
@@ -92,7 +92,7 @@ export interface CopilotBlockMetadata {
description?: string
inputs?: Record<string, any>
outputs?: Record<string, any>
parameters?: CopilotSubblockMetadata[]
inputSchema?: CopilotSubblockMetadata[]
}
>
yamlDocumentation?: string
@@ -125,11 +125,11 @@ export const getBlocksMetadataServerTool: BaseServerTool<
id: specialBlock.id,
name: specialBlock.name,
description: specialBlock.description || '',
commonParameters: commonParameters,
inputs: specialBlock.inputs || {},
inputSchema: commonParameters,
inputDefinitions: specialBlock.inputs || {},
tools: [],
triggers: [],
operationParameters,
operationInputSchema: operationParameters,
}
;(metadata as any).subBlocks = undefined
} else {
@@ -192,7 +192,7 @@ export const getBlocksMetadataServerTool: BaseServerTool<
description: toolCfg?.description || undefined,
inputs: { ...filteredToolParams, ...(operationInputs[opId] || {}) },
outputs: toolOutputs,
parameters: operationParameters[opId] || [],
inputSchema: operationParameters[opId] || [],
}
}
@@ -201,13 +201,13 @@ export const getBlocksMetadataServerTool: BaseServerTool<
name: blockConfig.name || blockId,
description: blockConfig.longDescription || blockConfig.description || '',
bestPractices: blockConfig.bestPractices,
commonParameters: commonParameters,
inputs: blockInputs,
inputSchema: commonParameters,
inputDefinitions: blockInputs,
triggerAllowed: !!blockConfig.triggerAllowed,
authType: resolveAuthType(blockConfig.authMode),
tools,
triggers,
operationParameters,
operationInputSchema: operationParameters,
operations,
}
}
@@ -420,7 +420,7 @@ function splitParametersByOperation(
operationParameters[key].push(processed)
}
} else {
// Override description from blockInputs if available (by id or canonicalParamId)
// Override description from inputDefinitions if available (by id or canonicalParamId)
if (blockInputsForDescriptions) {
const candidates = [sb.id, sb.canonicalParamId].filter(Boolean)
for (const key of candidates) {

View File

@@ -4,6 +4,8 @@ import { workflow as workflowTable } from '@sim/db/schema'
import { eq } from 'drizzle-orm'
import type { BaseServerTool } from '@/lib/copilot/tools/server/base-tool'
import { createLogger } from '@/lib/logs/console/logger'
import { getBlockOutputs } from '@/lib/workflows/block-outputs'
import { extractAndPersistCustomTools } from '@/lib/workflows/custom-tools-persistence'
import { loadWorkflowFromNormalizedTables } from '@/lib/workflows/db-helpers'
import { validateWorkflowState } from '@/lib/workflows/validation'
import { getAllBlocks } from '@/blocks/registry'
@@ -11,7 +13,7 @@ import { resolveOutputType } from '@/blocks/utils'
import { generateLoopBlocks, generateParallelBlocks } from '@/stores/workflows/workflow/utils'
interface EditWorkflowOperation {
operation_type: 'add' | 'edit' | 'delete'
operation_type: 'add' | 'edit' | 'delete' | 'insert_into_subflow' | 'extract_from_subflow'
block_id: string
params?: Record<string, any>
}
@@ -22,6 +24,293 @@ interface EditWorkflowParams {
currentUserWorkflow?: string
}
/**
* Topologically sort insert operations to ensure parents are created before children
* Returns sorted array where parent inserts always come before child inserts
*/
function topologicalSortInserts(
inserts: EditWorkflowOperation[],
adds: EditWorkflowOperation[]
): EditWorkflowOperation[] {
if (inserts.length === 0) return []
// Build a map of blockId -> operation for quick lookup
const insertMap = new Map<string, EditWorkflowOperation>()
inserts.forEach((op) => insertMap.set(op.block_id, op))
// Build dependency graph: block -> blocks that depend on it
const dependents = new Map<string, Set<string>>()
const dependencies = new Map<string, Set<string>>()
inserts.forEach((op) => {
const blockId = op.block_id
const parentId = op.params?.subflowId
dependencies.set(blockId, new Set())
if (parentId && insertMap.has(parentId)) {
// A dependency edge is needed only when the parent is inserted in this same
// batch; parents created via plain `add` operations are applied before any
// insert runs, so they already exist and their children can go in any order
dependencies.get(blockId)!.add(parentId)
if (!dependents.has(parentId)) {
dependents.set(parentId, new Set())
}
dependents.get(parentId)!.add(blockId)
}
})
// Topological sort using Kahn's algorithm
const sorted: EditWorkflowOperation[] = []
const queue: string[] = []
// Start with nodes that have no dependencies (or depend only on added blocks)
inserts.forEach((op) => {
const deps = dependencies.get(op.block_id)!
if (deps.size === 0) {
queue.push(op.block_id)
}
})
while (queue.length > 0) {
const blockId = queue.shift()!
const op = insertMap.get(blockId)
if (op) {
sorted.push(op)
}
// Remove this node from dependencies of others
const children = dependents.get(blockId)
if (children) {
children.forEach((childId) => {
const childDeps = dependencies.get(childId)!
childDeps.delete(blockId)
if (childDeps.size === 0) {
queue.push(childId)
}
})
}
}
// If sorted length doesn't match input, there's a cycle (shouldn't happen with valid operations)
// Just append remaining operations
if (sorted.length < inserts.length) {
inserts.forEach((op) => {
if (!sorted.includes(op)) {
sorted.push(op)
}
})
}
return sorted
}
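
To make the ordering concrete, a hypothetical batch where a loop is inserted into a parallel and a new agent is inserted into that loop in the same request (ids and params are illustrative):

const ops: EditWorkflowOperation[] = [
  { operation_type: 'insert_into_subflow', block_id: 'agent-1', params: { subflowId: 'loop-1', type: 'agent', name: 'Agent' } },
  { operation_type: 'insert_into_subflow', block_id: 'loop-1', params: { subflowId: 'parallel-1', type: 'loop', name: 'Loop' } },
]
const sorted = topologicalSortInserts(ops, [])
// sorted yields the loop-1 insert before agent-1: agent-1 depends on loop-1,
// while loop-1 has no dependency because parallel-1 is not in this batch.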
/**
* Helper to create a block state from operation params
*/
function createBlockFromParams(blockId: string, params: any, parentId?: string): any {
const blockConfig = getAllBlocks().find((b) => b.type === params.type)
// Determine outputs based on trigger mode
const triggerMode = params.triggerMode || false
let outputs: Record<string, any>
if (params.outputs) {
outputs = params.outputs
} else if (blockConfig) {
const subBlocks: Record<string, any> = {}
if (params.inputs) {
Object.entries(params.inputs).forEach(([key, value]) => {
subBlocks[key] = { id: key, type: 'short-input', value: value }
})
}
outputs = triggerMode
? getBlockOutputs(params.type, subBlocks, triggerMode)
: resolveOutputType(blockConfig.outputs)
} else {
outputs = {}
}
const blockState: any = {
id: blockId,
type: params.type,
name: params.name,
position: { x: 0, y: 0 },
enabled: params.enabled !== undefined ? params.enabled : true,
horizontalHandles: true,
isWide: false,
advancedMode: params.advancedMode || false,
height: 0,
triggerMode: triggerMode,
subBlocks: {},
outputs: outputs,
data: parentId ? { parentId, extent: 'parent' as const } : {},
}
// Add inputs as subBlocks
if (params.inputs) {
Object.entries(params.inputs).forEach(([key, value]) => {
let sanitizedValue = value
// Special handling for inputFormat - ensure it's an array
if (key === 'inputFormat' && value !== null && value !== undefined) {
if (!Array.isArray(value)) {
// Invalid format, default to empty array
sanitizedValue = []
}
}
// Special handling for tools - normalize to restore sanitized fields
if (key === 'tools' && Array.isArray(value)) {
sanitizedValue = normalizeTools(value)
}
// Special handling for responseFormat - normalize to ensure consistent format
if (key === 'responseFormat' && value) {
sanitizedValue = normalizeResponseFormat(value)
}
blockState.subBlocks[key] = {
id: key,
type: 'short-input',
value: sanitizedValue,
}
})
}
// Set up subBlocks from block configuration
if (blockConfig) {
blockConfig.subBlocks.forEach((subBlock) => {
if (!blockState.subBlocks[subBlock.id]) {
blockState.subBlocks[subBlock.id] = {
id: subBlock.id,
type: subBlock.type,
value: null,
}
}
})
}
return blockState
}
/**
* Normalize tools array by adding back fields that were sanitized for training
*/
function normalizeTools(tools: any[]): any[] {
return tools.map((tool) => {
if (tool.type === 'custom-tool') {
// Reconstruct sanitized custom tool fields
const normalized: any = {
...tool,
params: tool.params || {},
isExpanded: tool.isExpanded ?? true,
}
// Ensure schema has proper structure
if (normalized.schema?.function) {
normalized.schema = {
type: 'function',
function: {
name: tool.title, // Derive name from title
description: normalized.schema.function.description,
parameters: normalized.schema.function.parameters,
},
}
}
return normalized
}
// For other tool types, just ensure isExpanded exists
return {
...tool,
isExpanded: tool.isExpanded ?? true,
}
})
}
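
For example, a sanitized custom tool (hypothetical fields) is rehydrated like this:

normalizeTools([
  {
    type: 'custom-tool',
    title: 'Lookup Order',
    schema: { function: { description: 'Find an order', parameters: { type: 'object' } } },
  },
])
// → adds params: {} and isExpanded: true, and rebuilds the schema as
// { type: 'function', function: { name: 'Lookup Order', description, parameters } },
// deriving function.name from the tool title.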
/**
* Normalize responseFormat to ensure consistent storage
* Handles both string (JSON) and object formats
* Returns pretty-printed JSON for better UI readability
*/
function normalizeResponseFormat(value: any): string {
try {
let obj = value
// If it's already a string, parse it first
if (typeof value === 'string') {
const trimmed = value.trim()
if (!trimmed) {
return ''
}
obj = JSON.parse(trimmed)
}
// If it's an object, stringify it with consistent formatting
if (obj && typeof obj === 'object') {
// Sort keys recursively for consistent comparison
const sortKeys = (item: any): any => {
if (Array.isArray(item)) {
return item.map(sortKeys)
}
if (item !== null && typeof item === 'object') {
return Object.keys(item)
.sort()
.reduce((result: any, key: string) => {
result[key] = sortKeys(item[key])
return result
}, {})
}
return item
}
// Return pretty-printed with 2-space indentation for UI readability
// The sanitizer will normalize it to minified format for comparison
return JSON.stringify(sortKeys(obj), null, 2)
}
return String(value)
} catch (error) {
// If parsing fails, return the original value as string
return String(value)
}
}
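
For instance, both of these hypothetical inputs normalize to the same pretty-printed, key-sorted string:

normalizeResponseFormat('{"type":"object","properties":{"b":{},"a":{}}}')
normalizeResponseFormat({ properties: { a: {}, b: {} }, type: 'object' })
// Both return:
// {
//   "properties": {
//     "a": {},
//     "b": {}
//   },
//   "type": "object"
// }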
/**
* Helper to add connections as edges for a block
*/
function addConnectionsAsEdges(
modifiedState: any,
blockId: string,
connections: Record<string, any>
): void {
Object.entries(connections).forEach(([sourceHandle, targets]) => {
const targetArray = Array.isArray(targets) ? targets : [targets]
targetArray.forEach((targetId: string) => {
modifiedState.edges.push({
id: crypto.randomUUID(),
source: blockId,
sourceHandle,
target: targetId,
targetHandle: 'target',
type: 'default',
})
})
})
}
/**
* Apply operations directly to the workflow JSON state
*/
@@ -34,24 +323,49 @@ function applyOperationsToWorkflowState(
// Log initial state
const logger = createLogger('EditWorkflowServerTool')
logger.debug('Initial blocks before operations:', {
blockCount: Object.keys(modifiedState.blocks || {}).length,
blockTypes: Object.entries(modifiedState.blocks || {}).map(([id, block]: [string, any]) => ({
id,
type: block.type,
hasType: block.type !== undefined,
})),
logger.info('Applying operations to workflow:', {
totalOperations: operations.length,
operationTypes: operations.reduce((acc: any, op) => {
acc[op.operation_type] = (acc[op.operation_type] || 0) + 1
return acc
}, {}),
initialBlockCount: Object.keys(modifiedState.blocks || {}).length,
})
// Reorder operations: delete -> add -> edit to ensure consistent application semantics
// Reorder operations: delete -> extract -> add -> insert -> edit
const deletes = operations.filter((op) => op.operation_type === 'delete')
const extracts = operations.filter((op) => op.operation_type === 'extract_from_subflow')
const adds = operations.filter((op) => op.operation_type === 'add')
const inserts = operations.filter((op) => op.operation_type === 'insert_into_subflow')
const edits = operations.filter((op) => op.operation_type === 'edit')
const orderedOperations: EditWorkflowOperation[] = [...deletes, ...adds, ...edits]
// Sort insert operations to ensure parents are inserted before children
// This handles cases where a loop/parallel is being added along with its children
const sortedInserts = topologicalSortInserts(inserts, adds)
const orderedOperations: EditWorkflowOperation[] = [
...deletes,
...extracts,
...adds,
...sortedInserts,
...edits,
]
logger.info('Operations after reordering:', {
order: orderedOperations.map(
(op) =>
`${op.operation_type}:${op.block_id}${op.params?.subflowId ? `(parent:${op.params.subflowId})` : ''}`
),
})
for (const operation of orderedOperations) {
const { operation_type, block_id, params } = operation
logger.debug(`Executing operation: ${operation_type} for block ${block_id}`, {
params: params ? Object.keys(params) : [],
currentBlockCount: Object.keys(modifiedState.blocks).length,
})
switch (operation_type) {
case 'delete': {
if (modifiedState.blocks[block_id]) {
@@ -95,16 +409,53 @@ function applyOperationsToWorkflowState(
if (params?.inputs) {
if (!block.subBlocks) block.subBlocks = {}
Object.entries(params.inputs).forEach(([key, value]) => {
let sanitizedValue = value
// Special handling for inputFormat - ensure it's an array
if (key === 'inputFormat' && value !== null && value !== undefined) {
if (!Array.isArray(value)) {
// Invalid format, default to empty array
sanitizedValue = []
}
}
// Special handling for tools - normalize to restore sanitized fields
if (key === 'tools' && Array.isArray(value)) {
sanitizedValue = normalizeTools(value)
}
// Special handling for responseFormat - normalize to ensure consistent format
if (key === 'responseFormat' && value) {
sanitizedValue = normalizeResponseFormat(value)
}
if (!block.subBlocks[key]) {
block.subBlocks[key] = {
id: key,
type: 'short-input',
value: value,
value: sanitizedValue,
}
} else {
block.subBlocks[key].value = value
block.subBlocks[key].value = sanitizedValue
}
})
// Update loop/parallel configuration in block.data
if (block.type === 'loop') {
block.data = block.data || {}
if (params.inputs.loopType !== undefined) block.data.loopType = params.inputs.loopType
if (params.inputs.iterations !== undefined)
block.data.count = params.inputs.iterations
if (params.inputs.collection !== undefined)
block.data.collection = params.inputs.collection
} else if (block.type === 'parallel') {
block.data = block.data || {}
if (params.inputs.parallelType !== undefined)
block.data.parallelType = params.inputs.parallelType
if (params.inputs.count !== undefined) block.data.count = params.inputs.count
if (params.inputs.collection !== undefined)
block.data.collection = params.inputs.collection
}
}
// Update basic properties
@@ -123,6 +474,50 @@ function applyOperationsToWorkflowState(
}
}
// Handle advanced mode toggle
if (typeof params?.advancedMode === 'boolean') {
block.advancedMode = params.advancedMode
}
// Handle nested nodes update (for loops/parallels)
if (params?.nestedNodes) {
// Remove all existing child blocks
const existingChildren = Object.keys(modifiedState.blocks).filter(
(id) => modifiedState.blocks[id].data?.parentId === block_id
)
existingChildren.forEach((childId) => delete modifiedState.blocks[childId])
// Remove edges to/from removed children
modifiedState.edges = modifiedState.edges.filter(
(edge: any) =>
!existingChildren.includes(edge.source) && !existingChildren.includes(edge.target)
)
// Add new nested blocks
Object.entries(params.nestedNodes).forEach(([childId, childBlock]: [string, any]) => {
const childBlockState = createBlockFromParams(childId, childBlock, block_id)
modifiedState.blocks[childId] = childBlockState
// Add connections for child block
if (childBlock.connections) {
addConnectionsAsEdges(modifiedState, childId, childBlock.connections)
}
})
// Update loop/parallel configuration based on type
if (block.type === 'loop') {
block.data = block.data || {}
if (params.inputs?.loopType) block.data.loopType = params.inputs.loopType
if (params.inputs?.iterations) block.data.count = params.inputs.iterations
if (params.inputs?.collection) block.data.collection = params.inputs.collection
} else if (block.type === 'parallel') {
block.data = block.data || {}
if (params.inputs?.parallelType) block.data.parallelType = params.inputs.parallelType
if (params.inputs?.count) block.data.count = params.inputs.count
if (params.inputs?.collection) block.data.collection = params.inputs.collection
}
}
// Handle connections update (convert to edges)
if (params?.connections) {
// Remove existing edges from this block
@@ -191,82 +586,174 @@ function applyOperationsToWorkflowState(
case 'add': {
if (params?.type && params?.name) {
// Get block configuration
const blockConfig = getAllBlocks().find((block) => block.type === params.type)
// Create new block with proper structure
const newBlock: any = {
id: block_id,
type: params.type,
name: params.name,
position: { x: 0, y: 0 }, // Default position
enabled: true,
horizontalHandles: true,
isWide: false,
advancedMode: false,
height: 0,
triggerMode: false,
subBlocks: {},
outputs: blockConfig ? resolveOutputType(blockConfig.outputs) : {},
data: {},
}
const newBlock = createBlockFromParams(block_id, params)
// Add inputs as subBlocks
if (params.inputs) {
Object.entries(params.inputs).forEach(([key, value]) => {
newBlock.subBlocks[key] = {
id: key,
type: 'short-input',
value: value,
// Set loop/parallel data on parent block BEFORE adding to blocks
if (params.nestedNodes) {
if (params.type === 'loop') {
newBlock.data = {
...newBlock.data,
loopType: params.inputs?.loopType || 'for',
...(params.inputs?.collection && { collection: params.inputs.collection }),
...(params.inputs?.iterations && { count: params.inputs.iterations }),
}
})
}
// Set up subBlocks from block configuration
if (blockConfig) {
blockConfig.subBlocks.forEach((subBlock) => {
if (!newBlock.subBlocks[subBlock.id]) {
newBlock.subBlocks[subBlock.id] = {
id: subBlock.id,
type: subBlock.type,
value: null,
}
} else if (params.type === 'parallel') {
newBlock.data = {
...newBlock.data,
parallelType: params.inputs?.parallelType || 'count',
...(params.inputs?.collection && { collection: params.inputs.collection }),
...(params.inputs?.count && { count: params.inputs.count }),
}
})
}
}
// Add parent block FIRST before adding children
// This ensures children can reference valid parentId
modifiedState.blocks[block_id] = newBlock
// Handle nested nodes (for loops/parallels created from scratch)
if (params.nestedNodes) {
Object.entries(params.nestedNodes).forEach(([childId, childBlock]: [string, any]) => {
const childBlockState = createBlockFromParams(childId, childBlock, block_id)
modifiedState.blocks[childId] = childBlockState
if (childBlock.connections) {
addConnectionsAsEdges(modifiedState, childId, childBlock.connections)
}
})
}
// Add connections as edges
if (params.connections) {
Object.entries(params.connections).forEach(([sourceHandle, targets]) => {
const addEdge = (targetBlock: string, targetHandle?: string) => {
modifiedState.edges.push({
id: crypto.randomUUID(),
source: block_id,
sourceHandle: sourceHandle,
target: targetBlock,
targetHandle: targetHandle || 'target',
type: 'default',
})
addConnectionsAsEdges(modifiedState, block_id, params.connections)
}
}
break
}
case 'insert_into_subflow': {
const subflowId = params?.subflowId
if (!subflowId || !params?.type || !params?.name) {
logger.error('Missing required params for insert_into_subflow', { block_id, params })
break
}
const subflowBlock = modifiedState.blocks[subflowId]
if (!subflowBlock) {
logger.error('Subflow block not found - parent must be created first', {
subflowId,
block_id,
existingBlocks: Object.keys(modifiedState.blocks),
operationType: 'insert_into_subflow',
})
// This is a critical error - the operation ordering is wrong
// Skip this operation but don't break the entire workflow
break
}
if (subflowBlock.type !== 'loop' && subflowBlock.type !== 'parallel') {
logger.error('Subflow block has invalid type', {
subflowId,
type: subflowBlock.type,
block_id,
})
break
}
// Check if block already exists (moving into subflow) or is new
const existingBlock = modifiedState.blocks[block_id]
if (existingBlock) {
// Moving existing block into subflow - just update parent
existingBlock.data = {
...existingBlock.data,
parentId: subflowId,
extent: 'parent' as const,
}
// Update inputs if provided
if (params.inputs) {
Object.entries(params.inputs).forEach(([key, value]) => {
let sanitizedValue = value
if (key === 'inputFormat' && value !== null && value !== undefined) {
if (!Array.isArray(value)) {
sanitizedValue = []
}
}
if (typeof targets === 'string') {
addEdge(targets)
} else if (Array.isArray(targets)) {
targets.forEach((target: any) => {
if (typeof target === 'string') {
addEdge(target)
} else if (target?.block) {
addEdge(target.block, target.handle)
}
})
} else if (typeof targets === 'object' && (targets as any)?.block) {
addEdge((targets as any).block, (targets as any).handle)
// Special handling for tools - normalize to restore sanitized fields
if (key === 'tools' && Array.isArray(value)) {
sanitizedValue = normalizeTools(value)
}
// Special handling for responseFormat - normalize to ensure consistent format
if (key === 'responseFormat' && value) {
sanitizedValue = normalizeResponseFormat(value)
}
if (!existingBlock.subBlocks[key]) {
existingBlock.subBlocks[key] = {
id: key,
type: 'short-input',
value: sanitizedValue,
}
} else {
existingBlock.subBlocks[key].value = sanitizedValue
}
})
}
} else {
// Create new block as child of subflow
const newBlock = createBlockFromParams(block_id, params, subflowId)
modifiedState.blocks[block_id] = newBlock
}
// Add/update connections as edges
if (params.connections) {
// Remove existing edges from this block
modifiedState.edges = modifiedState.edges.filter((edge: any) => edge.source !== block_id)
// Add new connections
addConnectionsAsEdges(modifiedState, block_id, params.connections)
}
break
}
case 'extract_from_subflow': {
const subflowId = params?.subflowId
if (!subflowId) {
logger.warn('Missing subflowId for extract_from_subflow', { block_id })
break
}
const block = modifiedState.blocks[block_id]
if (!block) {
logger.warn('Block not found for extraction', { block_id })
break
}
// Verify it's actually a child of this subflow
if (block.data?.parentId !== subflowId) {
logger.warn('Block is not a child of specified subflow', {
block_id,
actualParent: block.data?.parentId,
specifiedParent: subflowId,
})
}
// Remove parent relationship
if (block.data) {
block.data.parentId = undefined
block.data.extent = undefined
}
// Note: We keep the block and its edges, just remove parent relationship
// The block becomes a root-level block
break
}
}
@@ -359,7 +846,7 @@ async function getCurrentWorkflowStateFromDb(
export const editWorkflowServerTool: BaseServerTool<EditWorkflowParams, any> = {
name: 'edit_workflow',
async execute(params: EditWorkflowParams): Promise<any> {
async execute(params: EditWorkflowParams, context?: { userId: string }): Promise<any> {
const logger = createLogger('EditWorkflowServerTool')
const { operations, workflowId, currentUserWorkflow } = params
if (!operations || operations.length === 0) throw new Error('operations are required')
@@ -405,6 +892,29 @@ export const editWorkflowServerTool: BaseServerTool<EditWorkflowParams, any> = {
})
}
// Extract and persist custom tools to database
if (context?.userId) {
try {
const finalWorkflowState = validation.sanitizedState || modifiedWorkflowState
const { saved, errors } = await extractAndPersistCustomTools(
finalWorkflowState,
context.userId
)
if (saved > 0) {
logger.info(`Persisted ${saved} custom tool(s) to database`, { workflowId })
}
if (errors.length > 0) {
logger.warn('Some custom tools failed to persist', { errors, workflowId })
}
} catch (error) {
logger.error('Failed to persist custom tools', { error, workflowId })
}
} else {
logger.warn('No userId in context - skipping custom tools persistence', { workflowId })
}
logger.info('edit_workflow successfully applied operations', {
operationCount: operations.length,
blocksCount: Object.keys(modifiedWorkflowState.blocks).length,

View File

@@ -43,6 +43,12 @@ interface ExecutionEntry {
totalTokens: number | null
blockExecutions: BlockExecution[]
output?: any
errorMessage?: string
errorBlock?: {
blockId?: string
blockName?: string
blockType?: string
}
}
function extractBlockExecutionsFromTraceSpans(traceSpans: any[]): BlockExecution[] {
@@ -74,6 +80,140 @@ function extractBlockExecutionsFromTraceSpans(traceSpans: any[]): BlockExecution
return blockExecutions
}
function normalizeErrorMessage(errorValue: unknown): string | undefined {
if (!errorValue) return undefined
if (typeof errorValue === 'string') return errorValue
if (errorValue instanceof Error) return errorValue.message
if (typeof errorValue === 'object') {
try {
return JSON.stringify(errorValue)
} catch {}
}
try {
return String(errorValue)
} catch {
return undefined
}
}
function extractErrorFromExecutionData(executionData: any): ExecutionEntry['errorBlock'] & {
message?: string
} {
if (!executionData) return {}
const errorDetails = executionData.errorDetails
if (errorDetails) {
const message = normalizeErrorMessage(errorDetails.error || errorDetails.message)
if (message) {
return {
message,
blockId: errorDetails.blockId,
blockName: errorDetails.blockName,
blockType: errorDetails.blockType,
}
}
}
const finalOutputError = normalizeErrorMessage(executionData.finalOutput?.error)
if (finalOutputError) {
return {
message: finalOutputError,
blockName: 'Workflow',
}
}
const genericError = normalizeErrorMessage(executionData.error)
if (genericError) {
return {
message: genericError,
blockName: 'Workflow',
}
}
return {}
}
function extractErrorFromTraceSpans(traceSpans: any[]): ExecutionEntry['errorBlock'] & {
message?: string
} {
if (!Array.isArray(traceSpans) || traceSpans.length === 0) return {}
const queue = [...traceSpans]
while (queue.length > 0) {
const span = queue.shift()
if (!span || typeof span !== 'object') continue
const message =
normalizeErrorMessage(span.output?.error) ||
normalizeErrorMessage(span.error) ||
normalizeErrorMessage(span.output?.message) ||
normalizeErrorMessage(span.message)
const status = span.status
if (status === 'error' || message) {
return {
message,
blockId: span.blockId,
blockName: span.blockName || span.name || (span.blockId ? undefined : 'Workflow'),
blockType: span.blockType || span.type,
}
}
if (Array.isArray(span.children)) {
queue.push(...span.children)
}
}
return {}
}
function deriveExecutionErrorSummary(params: {
blockExecutions: BlockExecution[]
traceSpans: any[]
executionData: any
}): { message?: string; block?: ExecutionEntry['errorBlock'] } {
const { blockExecutions, traceSpans, executionData } = params
const blockError = blockExecutions.find((block) => block.status === 'error' && block.errorMessage)
if (blockError) {
return {
message: blockError.errorMessage,
block: {
blockId: blockError.blockId,
blockName: blockError.blockName,
blockType: blockError.blockType,
},
}
}
const executionDataError = extractErrorFromExecutionData(executionData)
if (executionDataError.message) {
return {
message: executionDataError.message,
block: {
blockId: executionDataError.blockId,
blockName:
executionDataError.blockName || (executionDataError.blockId ? undefined : 'Workflow'),
blockType: executionDataError.blockType,
},
}
}
const traceError = extractErrorFromTraceSpans(traceSpans)
if (traceError.message) {
return {
message: traceError.message,
block: {
blockId: traceError.blockId,
blockName: traceError.blockName || (traceError.blockId ? undefined : 'Workflow'),
blockType: traceError.blockType,
},
}
}
return {}
}
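
A sketch of the precedence with hypothetical inputs: an errored block execution wins over execution-data and trace-span errors.

const { message, block } = deriveExecutionErrorSummary({
  blockExecutions: [
    {
      blockId: 'b1',
      blockName: 'Agent 1',
      blockType: 'agent',
      status: 'error',
      errorMessage: 'Rate limited',
    } as any,
  ],
  traceSpans: [{ status: 'error', error: 'span-level failure' }],
  executionData: { error: 'generic failure' },
})
// message === 'Rate limited' and block identifies Agent 1; only when no block
// reports an error does the summary fall back to executionData, then to spans.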
export const getWorkflowConsoleServerTool: BaseServerTool<GetWorkflowConsoleArgs, any> = {
name: 'get_workflow_console',
async execute(rawArgs: GetWorkflowConsoleArgs): Promise<any> {
@@ -108,7 +248,8 @@ export const getWorkflowConsoleServerTool: BaseServerTool<GetWorkflowConsoleArgs
.limit(limit)
const formattedEntries: ExecutionEntry[] = executionLogs.map((log) => {
const traceSpans = (log.executionData as any)?.traceSpans || []
const executionData = log.executionData as any
const traceSpans = executionData?.traceSpans || []
const blockExecutions = includeDetails ? extractBlockExecutionsFromTraceSpans(traceSpans) : []
let finalOutput: any
@@ -125,6 +266,12 @@ export const getWorkflowConsoleServerTool: BaseServerTool<GetWorkflowConsoleArgs
if (outputBlock) finalOutput = outputBlock.outputData
}
const { message: errorMessage, block: errorBlock } = deriveExecutionErrorSummary({
blockExecutions,
traceSpans,
executionData,
})
return {
id: log.id,
executionId: log.executionId,
@@ -137,6 +284,8 @@ export const getWorkflowConsoleServerTool: BaseServerTool<GetWorkflowConsoleArgs
totalTokens: (log.cost as any)?.tokens?.total ?? null,
blockExecutions,
output: finalOutput,
errorMessage: errorMessage,
errorBlock: errorBlock,
}
})

View File

@@ -114,7 +114,8 @@ export async function generateEmbeddings(
logger.info(`Using ${config.useAzure ? 'Azure OpenAI' : 'OpenAI'} for embeddings generation`)
const batchSize = 100
// Reduced batch size to prevent API timeouts and improve reliability
const batchSize = 50 // Reduced from 100 to prevent issues with large documents
const allEmbeddings: number[][] = []
for (let i = 0; i < texts.length; i += batchSize) {
@@ -125,6 +126,11 @@ export async function generateEmbeddings(
logger.info(
`Generated embeddings for batch ${Math.floor(i / batchSize) + 1}/${Math.ceil(texts.length / batchSize)}`
)
// Add small delay between batches to avoid rate limiting
if (i + batchSize < texts.length) {
await new Promise((resolve) => setTimeout(resolve, 100))
}
}
return allEmbeddings
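
For a sense of the pacing this adds: 1,000 texts now produce ceil(1000 / 50) = 20 batches with 19 inter-batch delays, i.e. roughly 1.9 seconds of deliberate spacing on top of the embedding API calls themselves.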

View File

@@ -34,9 +34,7 @@ export const env = createEnv({
AGENT_INDEXER_URL: z.string().url().optional(), // URL for agent training data indexer
AGENT_INDEXER_API_KEY: z.string().min(1).optional(), // API key for agent indexer authentication
// Database & Storage
POSTGRES_URL: z.string().url().optional(), // Alternative PostgreSQL connection string
REDIS_URL: z.string().url().optional(), // Redis connection string for caching/sessions
// Payment & Billing
@@ -99,10 +97,10 @@ export const env = createEnv({
// Infrastructure & Deployment
NEXT_RUNTIME: z.string().optional(), // Next.js runtime environment
VERCEL_ENV: z.string().optional(), // Vercel deployment environment
DOCKER_BUILD: z.boolean().optional(), // Flag indicating Docker build environment
// Background Jobs & Scheduling
TRIGGER_PROJECT_ID: z.string().optional(), // Trigger.dev project ID
TRIGGER_SECRET_KEY: z.string().min(1).optional(), // Trigger.dev secret key for background jobs
TRIGGER_DEV_ENABLED: z.boolean().optional(), // Toggle to enable/disable Trigger.dev for async jobs
CRON_SECRET: z.string().optional(), // Secret for authenticating cron job requests
@@ -243,7 +241,6 @@ export const env = createEnv({
client: {
// Core Application URLs - Required for frontend functionality
NEXT_PUBLIC_APP_URL: z.string().url(), // Base URL of the application (e.g., https://app.sim.ai)
NEXT_PUBLIC_VERCEL_URL: z.string().optional(), // Vercel deployment URL for preview/production
// Client-side Services
NEXT_PUBLIC_SOCKET_URL: z.string().url().optional(), // WebSocket server URL for real-time features
@@ -260,6 +257,8 @@ export const env = createEnv({
// Analytics & Tracking
NEXT_PUBLIC_GOOGLE_API_KEY: z.string().optional(), // Google API key for client-side API calls
NEXT_PUBLIC_GOOGLE_PROJECT_NUMBER: z.string().optional(), // Google project number for Drive picker
NEXT_PUBLIC_POSTHOG_ENABLED: z.boolean().optional(), // Enable PostHog analytics (client-side)
NEXT_PUBLIC_POSTHOG_KEY: z.string().optional(), // PostHog project API key
// UI Branding & Whitelabeling
NEXT_PUBLIC_BRAND_NAME: z.string().optional(), // Custom brand name (defaults to "Sim")
@@ -295,7 +294,6 @@ export const env = createEnv({
experimental__runtimeEnv: {
NEXT_PUBLIC_APP_URL: process.env.NEXT_PUBLIC_APP_URL,
NEXT_PUBLIC_VERCEL_URL: process.env.NEXT_PUBLIC_VERCEL_URL,
NEXT_PUBLIC_BLOB_BASE_URL: process.env.NEXT_PUBLIC_BLOB_BASE_URL,
NEXT_PUBLIC_BILLING_ENABLED: process.env.NEXT_PUBLIC_BILLING_ENABLED,
NEXT_PUBLIC_GOOGLE_CLIENT_ID: process.env.NEXT_PUBLIC_GOOGLE_CLIENT_ID,
@@ -320,6 +318,8 @@ export const env = createEnv({
NEXT_PUBLIC_EMAIL_PASSWORD_SIGNUP_ENABLED: process.env.NEXT_PUBLIC_EMAIL_PASSWORD_SIGNUP_ENABLED,
NEXT_PUBLIC_E2B_ENABLED: process.env.NEXT_PUBLIC_E2B_ENABLED,
NEXT_PUBLIC_COPILOT_TRAINING_ENABLED: process.env.NEXT_PUBLIC_COPILOT_TRAINING_ENABLED,
NEXT_PUBLIC_POSTHOG_ENABLED: process.env.NEXT_PUBLIC_POSTHOG_ENABLED,
NEXT_PUBLIC_POSTHOG_KEY: process.env.NEXT_PUBLIC_POSTHOG_KEY,
NODE_ENV: process.env.NODE_ENV,
NEXT_TELEMETRY_DISABLED: process.env.NEXT_TELEMETRY_DISABLED,
},

Some files were not shown because too many files have changed in this diff.