* make creator profile required
* fix grafana tag dropdown / outputs mismatch
* fix grafana annotations to make dashboard id required
* fix fal ai
* fix fal ai
* fix zep
* fix(tools): fix perplexity & parallel ai tag dropdown inaccuracies
* fixed stt, tts and added output conditions to conditionally display tag dropdown values based on other subblock values
* updated exa to match latest API
* feat(folders): add the ability to create a folder within a folder in popover (#2287)
* fix(agent): filter out empty params to ensure LLM can set tool params at runtime (#2288)
* fix(mcp): added backfill effect to add missing descriptions for mcp tools (#2290)
* fix(redis): cleanup access pattern across callsites (#2289)
* fix(redis): cleanup access pattern across callsites
* swap redis command to be non blocking
* improvement(log-details): polling, trace spans (#2292)
---------
Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: Emir Karabeg <78010029+emir-karabeg@users.noreply.github.com>
* fix(custom-bot-slack): dependsOn incorrectly set for bot_token
* fix other references to be compatible
* fix dependsOn for things depending on authMethod
* feat(tools): added rds tools/block
* feat(tools): added rds, dynamodb, background color gradient
* changed conditions for WHERE condition to be json conditions instead of raw string
* fix(team-plans): track departed member usage so value not lost
* reset usage to 0 when they leave team
* prep merge with staging
* regen migrations
* fix org invite + ws selection
---------
Co-authored-by: Waleed <walif6@gmail.com>
* feat(agent): added workflow, kb, and function as a tool for agent block
* fix keyboard nav and keyboard selection in tool-inp
* ack PR comments
* remove custom tool changes
* fixed kb tools for agent
* cleanup
* feat(tools): added speech to text with openai whisper, elevenlabs, and deepgram
* added new file icons, implemented ffmpeg
* updated docs
* revert environment
* added blacksmith optimizations to workflows and dockerfiles to enhance performance. please review before pushing to production
* remove cache from and cache to directives from docker based actions, per blacksmith docs
---------
Co-authored-by: Connor Mulholland <connormul@Connors-MacBook-Pro.local>
* fix(subflows): add loops/parallels to accessible list of blocks in the tag dropdown when contained within a subflow
* remove currentIteration in loop
* improvement(docs): remove copy page from mobile view on docs
* bring title and pagenav lower on mobile
* added cursor pointer to clickable components in docs
* fix(triggers): dedup + not surfacing deployment status log
* fix ms teams
* change to microsoftteams
* Revert "change to microsoftteams"
This reverts commit 217f808641.
* fix
* fix
* fix provider name
* fix oauth for msteams
* feat(billing): add notif for first failed payment, added upgrade email from free, updated providers that supported granular tool control to support them, fixed envvar popover, fixed redirect to wrong workspace after oauth connect
* fix build
* ack PR comments
* fix(custom-tools): updates to existing tools
* don't reorder custom tools in modal based on edit time
* restructure custom tools to persist copilot generated tools
* fix tests
* fix(templates): view current ui
* update UI to be less cluttered
* make state management for creating user profile smoother
* fix autoselect logic
* fix lint
* improvement(docs): added new platform ss
* rename approval to human in the loop
* cleanup
* remove yml
* removed other languages large sections
* fix icons
* Add helm for copilot
* Remove otel and log level
* Change repo name
* improvement(helm): enhance copilot chart with HA support and validation
* refactor(helm): consolidate copilot secrets and fix postgres volume mount
* feat(tools): added 10 new github triggers
* feat(tools): added 48 new github tools, 12 triggers
* fix(logging): make logging safe start an upsert to prevent insertions of duplicate execution id records, remove layout from github block
* feat(schedules): move schedule configuration out of modals into subblocks
* added more timezones
* added simple in-memory rate limiting to update schedule, validation on numeric values for date and time, fix update schedule behavior
* fix failing tests, ack PR comments
* surface better errors
* improvement(variables): add error context for duplicate variable names, only check for collision when focus is lost
* disallow empty variable names, performance optimizations
* safety guard against empty variable names
* feat(triggers): make triggers use existing subblock system, need to still fix webhook URL on multiselect and add script in text subblock for google form
* minimize added subblocks, cleanup code, make triggers first-class subblock users
* remove multi select dropdown and add props to existing dropdown instead
* cleanup dropdown
* add socket op to delete external webhook connections on block delete
* establish external webhook before creating webhook DB record, surface better errors for ones that require external connections
* fix copy button in short-input
* revert environment.ts, cleanup
* add triggers registry, update copilot tool to reflect new trigger setup
* update trigger-save subblock
* clean
* cleanup
* remove unused subblock store op, update search modal to reflect list of triggers
* add init from workflow to subblock store to populate new subblock format from old triggers
* fix mapping of old names to new ones
* added debug logging
* remove all extraneous debug logging and added mapping for triggerConfig field names that were changed
* fix trigger config for triggers w/ multiple triggers
* edge cases for effectiveTriggerId
* cleaned up
* fix dropdown multiselect
* fix multiselect
* updated short-input copy button
* duplicate blocks in trigger mode
* ack PR comments
* feat(cost): added hidden cost breakdown component to settings > subscription, start collecting current period copilot cost and last period copilot cost
* don't rerender envvars when switching between workflows in the same workspace
* feat(envvars): use cache for envvar dropdown key names, prevent autofill & suggestions in the settings
* add the same prevention for autocomplete and suggestions to sso and webhook
* fix(kb): fix mistral parse and kb uploads, include userId in internal auth
* update updated_at for kb when adding a new doc via knowledge block
* update tests
* Add variables block
* Add wait block
* While loop v1
* While loop v1
* Do while loops
* Copilot user input rerender fix
* Fix while and dowhile
* Vars block dropdown
* While loop docs
* Remove vars block coloring
* Fix lint
* Link docs to wait
* Fix build fail
* remove extraneous text from careers app
* feat(kb): added sort order to kb
* updated styles of workspace selector and delete button to match theme of rest of knowledgebase
* added google forms scope and google drive scope
* added back file scope
---------
Co-authored-by: Adam Gough <adamgough@Mac.attlocal.net>
Co-authored-by: Adam Gough <adamgough@Mac-530.lan>
* fix(dashboard): add additional context for paginated logs in dashboard, add empty state when selected cell has no data
* apps/sim
* renaming
* remove relative import
* feat(supabase): added vector search tool and updated docs
* exclude generic webhook from docs gen
* change items to pages in meta.json for tools directory in the docs
* feat(webhooks): added optional input format to webhooks, added support for file uploads
* feat(webhooks): added input format component to generic webhook trigger, added file support
* consolidated execution files utils, extended presigned URL duration for async tasks
* fix(input-format): allow value field to be cleared
* don't let value field be detected as deployment change
* fix zep icon in docs
* exclude collapsed state
* improvement(response-copilot): make it use builder mode over editor mode to prevent json formatting issues
* change placeholder text
* fix conversion between builder and editor mode
* fix(chat-subs): always use next public app url env
* use getBaseUrl everywhere
* move remaining uses
* fix test
* change auth.ts and make getBaseUrl() call not top level for emails
* change remaining uses
* revert csp
* cleanup
* fix
* feat(mistral): added mistral as a provider, updated model prices
* remove the ability for a block to reference its own outputs
* fixed order of responses for guardrails block
* fix(webhooks): use next public app url instead of request origin for webhook registration
* ack PR comments
* ci: pin Bun to v1.2.22 to avoid Bun 1.3 breaking changes
* feat(deployed-chat): updated chat panel UI, deployed chat and API can now accept files
* added nested tag dropdown for files
* added duplicate file validation to chat panel
* update docs & SDKs
* fixed build
* rm extraneous comments
* ack PR comments, cut multiple DB roundtrips for permissions & api key checks in api/workflows
* allow read-only users to access deployment info, but not take actions
* add downloadable file to logs for files passed in via API
* protect files/serve route that is only used client-side
---------
Co-authored-by: waleed <waleed>
* feat(billing): bill by threshold to prevent cancellation edge case
* fix org billing
* fix idempotency key issue
* small optimization for team checks
* remove console log
* remove unused type
* fix error handling
* feat(chat-stream): updated workflow id execute route to support streaming via API
* enable streaming via api
* added only text stream option
* cleanup deployed preview component
* updated selectedOutputIds to selectedOutput
* updated TS and Python SDKs with async, rate limits, usage, and streaming API routes
* stream non-streaming blocks when streaming is specified
* fix(chat-panel): add onBlockComplete handler to chat panel to stream back blocks as they complete
* update docs
* cleanup
* ack PR comments
* updated next config
* removed getAssetUrl in favor of local assets
* resolve merge conflicts
* remove extra logic to create sensitive result
* simplify internal auth
* remove vercel blob from CSP + next config
* fix: enable database connection pooling in production
* debug: add diagnostic endpoints to test NODE_ENV and database pooling
* test: add connection testing endpoint to diagnose production delay
* reduce number of concurrent connections
* add state sending capability
* progress
* add ability to add title and description to workflow state
* progress in language
* fix
* cleanup code
* fix type issue
* fix subflow deletion case
* Workflow console tool
* fix lint
---------
Co-authored-by: Siddharth Ganesan <siddharthganesan@gmail.com>
* improvement(performance): remove writes to workflow updated_at on position updates for blocks, edges, & subflows
* update query pattern for logs routes
* improvement(var-resolution): resolve variables with block name check and consolidate code
* fix tests
* fix type error
* fix var highlighting in kb tags
* fix kb tags
* improvement(autolayout): use live block heights / widths for autolayout to prevent overlaps
* improve layering algo for multiple trigger setting
* remove console logs
* add type annotation
* feat(permissions): allow admin workspace users to deploy workflows in workspaces they don't own
* fixed failing test
* added additional routes
* remove overly complex, unnecessary test and fixed docs formatting
* follow DRY
* first push pre testing
* tools working
* progress
* bun run lint
* added doc
* changed google client ID and secret back
* cleaned up oauth
* removed comment
* removed any and added manual content
---------
Co-authored-by: Adam Gough <adamgough@Mac.attlocal.net>
* improvement(usage): bar execution if limits cannot be determined, init user stats record on user creation instead of in stripe plugin
* upsert user stats record in execution logger
* added add list items
(cherry picked from commit df6ea35d5bb975c03c7ec0c787bd915f34890ac0)
* bun run lint
* minor changes
---------
Co-authored-by: Adam Gough <adamgough@Mac.attlocal.net>
Co-authored-by: Adam Gough <adamgough@Adams-MacBook-Pro.local>
* update infra and remove railway
* feat(signup): added back to login functionality from OTP page
* remove placeholders from docker commands, simplified login flow
* Revert "update infra and remove railway"
This reverts commit abfa2f8d51.
* improvement(code-structure): move db into separate package
* make db separate package
* remake bun lock
* update imports to not maintain two separate ones
* fix CI for tests by adding dummy url
* vercel build fix attempt
* update bun lock
* regenerate bun lock
* fix mocks
* remove db commands from apps/sim package json
* update infra and remove railway
* improvement(landing): insert prompt into copilot panel from landing, open panel on entry
* Revert "update infra and remove railway"
This reverts commit abfa2f8d51.
* fixes
* remove debug logs
* go back to old env
* update infra and remove railway
* feat(tools): add generic mail sending block/tools, updated docs script
* Revert "update infra and remove railway"
This reverts commit abfa2f8d51.
* remove message id
* updated type
* fix(migrations): downgrade nextjs
* fix(bun): pin bun version in db migrations
* Revert "fix(migrations): downgrade nextjs"
This reverts commit 27b544f22d.
* fix(stripe): use latest version to fix event mismatch issues
* fix enterprise handling
* cleanup
* update better auth version
* fix overage order of ops
* upgrade better auth version
* fix image typing
* change image type to string | undefined
* update infra and remove railway
* feat(webhooks): add idempotency service for all triggers/webhooks
* Revert "update infra and remove railway"
This reverts commit abfa2f8d51.
* cleanup
* ack PR comments
* update infra and remove railway
* feat(logs): added intelligent search to logs
* Revert "update infra and remove railway"
This reverts commit abfa2f8d51.
* cleanup
* cleanup
* update infra and remove railway
* feat(api-keys): add workspace-level api keys
* encrypt api keys
* Revert "update infra and remove railway"
This reverts commit b23258a5a1.
* reran migrations
* tested workspace keys
* consolidated code
* more consolidation
* cleanup
* consolidate, remove unused code
* add dummy key for ci
* continue with regular path for self-hosted folks that don't have key set
* fix tests
* fix test
* remove tests
* removed ci additions
* update infra and remove railway
* fix(file-upload): fix nextjs file upload issue with pdf-parse
* Revert "update infra and remove railway"
This reverts commit b23258a5a1.
* update infra and remove railway
* fix(kb): exclude deleted docs from queries
* Revert "update infra and remove railway"
This reverts commit b23258a5a1.
* update infra and remove railway
* overhaul docs
* added a lot more videos/examples to docs
* Revert "update infra and remove railway"
This reverts commit b23258a5a1.
* remove unused lines
* update start block docs
* update agent docs
* update API block docs
* update function block docs
* update response block docs
* update parallel and router docs
* update workflow docs
* update connections docs
* fix(sheets): fixed google sheets update (#1311)
Google Sheets update was sending an invalid shape to the Google Sheets API. This PR fixes it by mirroring the logic in append.ts
* fix(serializer): Required-field validation now respects sub-block visibility (#1313)
* audit content
* audit content
---------
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
Co-authored-by: Adam Gough <77861281+aadamgough@users.noreply.github.com>
Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
* update infra and remove railway
* fix(webhooks-ui): made spacing more clear, added copy button for webhook URL & fixed race condition for mcp tools/server fetching in the mcp block
* Revert "update infra and remove railway"
This reverts commit 5a8876209d.
* remove extraneous comments
* ack PR comments
* update infra and remove railway
* feat(mcp): add mcp support
* consolidate mcp utils
* UI improvements, more MCP stuff
* cleanup placeholders
* reran migrations
* general improvements
* fix server side mcp exec
* more improvements, fixed search in environment settings tab
* persist subblock values for mcp block
* style fixes
* update all text-primary to text-muted-foreground for visibility in dark mode
* Revert "update infra and remove railway"
This reverts commit dbf2b153b8f96808e7bb7e5f86f7e8624e3c12dd.
* make MCP servers workspace-scoped
* cleanup & remove unused dep
* consolidated utils, DRY
* added tests
* better error messages, confirmed that permissions works correctly
* additional improvements
* remove extraneous comments
* reran migrations
* lint
* style changes
* fix: prevent config mutation in MCP client URL retry logic
Fixed an issue where the MCP client was mutating the shared configuration
object's URL during retry attempts. This could cause configuration corruption
if the same config object was reused elsewhere.
* resolve PR comments
* ack PR comments
* update infra and remove railway
* feat(account): add profile pictures
* Revert "update infra and remove railway"
This reverts commit e3f0c49456.
* ack PR comments, use brandConfig logo URL as default
* update infra and remove railway
* fix(input-format): restore tag dropdown in input-format component
* Revert "update infra and remove railway"
This reverts commit 7ade5fb2ef.
* style improvements
* update infra and remove railway
* fix(notifications): increase precision on billing calculations
* Revert "update infra and remove railway"
This reverts commit d17603e844.
* cleanup
* fix(code-subblock): added validation to not parse non-variables as variables in the code subblock
* fix wand prompt bar styling
* fix error message for available connected blocks to only show connected available blocks, not block ID's
* ui
* fix(styling): fix unreadable text in dark mode
* fix styling inconsistencies in kb
* refetch permissions on invite modal open
---------
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
* fix(ui): fix dark mode styling for switch, fix trigger modal UI
* auto-submit OTP when characters are entered
* trim leading and trailing whitespace from name on signup, throw more informative error messages on reset pass
* add parallel ai, postgres, mysql, slight modifications to dark mode styling
* bun install frozen lockfile
* new deps
* improve security, add wand to short input and update wand config
* fix(billing): team usage tracking cleanup, shared pool of limits for team
* address greptile comments
* fix lint
* remove usage of deprecated cols
* update periodStart and periodEnd correctly
* fix lint
* fix type issue
* fix(billing): cleaned up billing, still more work to do on UI and population of data and consolidation
* fix upgrade
* cleanup
* progress
* works
* Remove 78th migration to prepare for merge with staging
* fix migration conflict
* remove useless test file
* fix
* Fix undefined seat pricing display and handle cancelled subscription seat updates
* cleanup code
* cleanup to use helpers for pulling pricing limits
* cleanup more things
* cleanup
* restore environment ts file
* remove unused files
* fix(team-management): fix team management UI, consolidate components
* use session data instead of subscription data in settings navigation
* remove unused code
* fix UI for enterprise plans
* added enterprise plan support
* progress
* billing state machine
* split overage and base into separate invoices
* fix badge logic
---------
Co-authored-by: waleedlatif1 <walif6@gmail.com>
* feat(integrations): added parallel ai block/tool and corresponding docs
* add postgres block
* added mysql block
* enrich docs for Postgres and MySQL
* make password fields user only for mysql and postgres
* fixed build
* ack greptile comments
* fix PR comments
* remove search_id from parallel ai
* fix parallel ai params
* fix(billing): vercel cron not processing billing periods
* fix(billing): cleanup unused POST and fix bug with billing timing check
* make subscriptions table source of truth for dates
* update org routes
* make everything dependent on stripe webhook
---------
Co-authored-by: Waleed Latif <walif6@gmail.com>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: Adam Gough <77861281+aadamgough@users.noreply.github.com>
Co-authored-by: Adam Gough <adamgough@Mac.attlocal.net>
* feat(native-bg-tasks): support webhooks and async workflow executions without trigger
* fix tests
* fix env var defaults and revert async workflow execution to always use trigger
* fix UI for hiding async
* hide entire toggle
* telegram webhook fix
* changed payloads
* test
* test
* test
* test
* fix github dropdown
* test
* reverted github changes
* fixed github var
* test
* bun run lint
* test
* test
* test
* test
* test
* test
* test
* test
* test
* test
* test
* test
* test
* test
* test
* test
* test
* test
* test
* test
* test
* test
* test
* test
* test
* test push
* test
* bun run lint
* edited airtable payload and webhook deletion
* Revert bun.lock and package.json to upstream/staging
* cleaned up
* test
* test
* resolving more comments
* resolved comments, updated trigger
* cleaned up, resolved comments
* test
* test
* lint
---------
Co-authored-by: Adam Gough <adamgough@Mac.attlocal.net>
* feat(templates): added in the ability to keep/remove templates when deleting workspace
* code cleanup in sidebar
* add the ability to edit existing templates
* updated template modal
* fix build
* revert bun.lock
* add template logic to workflow deletion as well
* add ability to delete templates
* add owner/admin enforcement to modify or delete templates
* fix: clear Docker build cache to use correct Next.js version
- Changed GitHub Actions cache scope from build-v2 to build-v3
- This should force a fresh build without cached Next.js 15.5.0 layers
- Reverted to ^15.3.2 version format that worked on main branch
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* run install
---------
Co-authored-by: Claude <noreply@anthropic.com>
* fix(gpt-5): fixed verbosity and reasoning params
* fixed dropdown
* default values for verbosity and reasoning effort
* cleanup
* use default value in dropdown
* feat(azure-openai): allow usage of azure-openai for knowledgebase uploads
* feat(azure-openai): added azure-openai for kb and wand
* added embeddings utils, added the ability to use mistral through Azure
* fix(oauth): gdrive picker race condition, token route cleanup
* fix test
* feat(mailer): consolidated all emailing to mailer service, added support for Azure ACS (#1054)
* feat(mailer): consolidated all emailing to mailer service, added support for Azure ACS
* fix batch invitation email template
* cleanup
* improvement(emails): add help template instead of doing it inline
* remove fallback version
---------
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
* feat(mailer): consolidated all emailing to mailer service, added support for Azure ACS
* fix batch invitation email template
* cleanup
* improvement(emails): add help template instead of doing it inline
* improvement(serializer): filter out advanced mode fields when executing in basic mode, persist the values but don't include them in serialized block for execution
* fix serializer exclusion logic
* added logic to remove blocks from subflows
* refactored logic into just subflow-node
* bun run lint
* added subflow test
* added a safety check for data.parentId
* added state update logic
* bun run lint
* removed old logic
* removed any
* added tests
* added type safety
* removed test script
* type safety
---------
Co-authored-by: Adam Gough <adamgough@Mac.attlocal.net>
Co-authored-by: waleedlatif1 <walif6@gmail.com>
* improvement(redirects): move redirects to middleware, push to login if no session and workspace if session exists
* remove telemetry consent dialog
* remove migrations
* rerun migrations
* improvement(credentials-sharing-security): cleanup and reuse helper to determine credential access
* few more routes
* fix google sheets block
* fix test mocks
* fix calendar route
* fix(chunks): instantaneous search + server side searching instead of client-side
* add knowledge tags component to sidebar, replace old knowledge tags UI
* add types, remove extraneous comments
* added knowledge-base level tag definitions viewer, ability to create/delete slots in sidebar and respective routes
* ui
* fix stale tag issue
* use logger
* fix for variable format + trig
* fixed slack variable
* microsoft teams working
* fixed outlook, plus added other minor documentation changes and fixed subblock
* removed discord webhook logic
* added airtable logic
* bun run lint
* test
* test again
* test again 2
* test again 3
* test again 4
* test again 4
* test again 4
* bun run lint
* test 5
* test 6
* test 7
* test 7
* test 7
* test 7
* test 7
* test 7
* test 8
* test 9
* test 9
* test 9
* test 10
* test 10
* bun run lint, plus github fixed
* removed some debug statements #935
* testing resolver removing
* testing trig
---------
Co-authored-by: Adam Gough <adamgough@Adams-MacBook-Pro.local>
Co-authored-by: Adam Gough <adamgough@Mac.attlocal.net>
* feat(usage-indicator): added ability to see current usage
* feat(billing): added billing enabled flag for usage indicator, enforcement of billing usage
---------
Co-authored-by: waleedlatif1 <walif6@gmail.com>
* standardized response format for transformError
* removed transformError, moved error handling to executeTool for all different error formats
* remove isInternalRoute, make it implicit in executeTool
* removed directExecution, everything on the server nothing on the client
* fix supabase
* fix(tag-dropdown): fix values for parallel & loop blocks (#929)
* fix(search-modal): add parallel and loop blocks to search modal
* reordered tool params
* update docs
* fix: same child workflow executing in parallel with workflow block
* fixed run button prematurely showing completion before child workflows completed
* prevent child workflows from touching the activeBlocks & layer logic in the parent executor
* surface child workflow errors to main workflow
* ack PR comments
This directory contains configuration files for VS Code Dev Containers and GitHub Codespaces. Dev containers provide a consistent, isolated development environment for this project.
## Contents
- `devcontainer.json` - The main configuration file that defines the development container settings
- `Dockerfile` - Defines the container image and development environment
- `docker-compose.yml` - Sets up the application and database containers
- `post-create.sh` - Script that runs when the container is created
- `.bashrc` - Custom shell configuration with helpful aliases
## Usage
### Prerequisites
- Visual Studio Code with the Dev Containers extension
- Docker Desktop (Windows/macOS), Docker Engine (Linux), or Podman Desktop
### Getting Started
1. Open this project in VS Code
2. Click "Reopen in Container" when prompted (or press `F1` → "Dev Containers: Reopen in Container")
3. Wait for the container to build and initialize
4. The post-creation script will automatically:
   - Install dependencies
   - Set up environment variables
   - Run database migrations
   - Configure helpful aliases
5. Start the application with `sim-start` (alias for `bun run dev`)
## Development Commands
### Running Services
You have two options for running the development environment:
**Option 1: Run everything together (recommended for most development)**
```bash
sim-start # Runs both app and socket server using concurrently
```
**Option 2: Run services separately (useful for debugging individual services)**
- In the **app** container terminal: `sim-app` (starts the Next.js app on port 3000)
- In the **realtime** container terminal: `sim-sockets` (starts the socket server on port 3002)
### Other Commands
The development environment includes these helpful aliases:
- `sim-start` - Start the development server
- `sim-migrate` - Push schema changes to the database
- `sim-generate` - Generate new migrations
- `sim-rebuild` - Build and start the production version
- `build` - Build the application
- `pgc` - Connect to the PostgreSQL database
- `check-db` - List all databases
### Using GitHub Codespaces
This project is also configured for GitHub Codespaces. To use it:
1. Go to the GitHub repository
2. Click the "Code" button
3. Select the "Codespaces" tab
4. Click "Create codespace on main"
This will start a new Codespace with the development environment already set up.
## Customization
You can customize the development environment by:
- Modifying `devcontainer.json` to add VS Code extensions or settings
- Updating the `Dockerfile` to install additional packages
- Editing `docker-compose.yml` to add services or change configuration
- Modifying `.bashrc` to add custom aliases or configurations
## Troubleshooting
If you encounter issues:
- **Build errors**: Rebuild the container (`F1` → "Dev Containers: Rebuild Container") and check the Docker logs
- **Container runtime issues**: Verify Docker Desktop or Podman Desktop is running and all prerequisites are installed
- **Port conflicts**: Ensure ports 3000, 3002, and 5432 are available
For more information, see the [VS Code Remote Development documentation](https://code.visualstudio.com/docs/remote/containers).
## Technical Details
Services:
- **App container** (8GB memory limit) - Main Next.js application
- **Realtime container** (4GB memory limit) - Socket.io server for real-time features
- **Database** - PostgreSQL with pgvector extension
- **Migrations** - Runs automatically on container creation
You can develop with services running together or independently.
### Personalization
**Project commands** (`sim-start`, `sim-app`, etc.) are automatically available via `/workspace/.devcontainer/sim-commands.sh`.
**Personal shell customization** (aliases, prompts, etc.) should use VS Code's dotfiles feature:
1. Create a dotfiles repository (e.g., `github.com/youruser/dotfiles`)
2. Add your `.bashrc`, `.zshrc`, or other configs
3. Configure in VS Code Settings:
```json
{
"dotfiles.repository": "youruser/dotfiles",
"dotfiles.installCommand": "install.sh"
}
```
This separates project-specific commands from personal preferences, following VS Code best practices.
In addition, you will need to update the registries.
Your tool should export a constant with a naming convention of `{toolName}Tool`. The tool ID should follow the format `{provider}_{tool_name}`. For example:
```typescript:/apps/sim/tools/pinecone/fetch.ts
import { ToolConfig, ToolResponse } from '@/tools/types'
import { PineconeParams, PineconeResponse } from '@/tools/pinecone/types'
```
if [ -n "$(git status --porcelain content/docs)" ]; then
  echo "changes=true" >> $GITHUB_OUTPUT
else
  echo "changes=false" >> $GITHUB_OUTPUT
fi
- name: Create Pull Request with translations
  if: steps.changes.outputs.changes == 'true'
  uses: peter-evans/create-pull-request@v5
  with:
    token: ${{ secrets.GH_PAT }}
    commit-message: "feat(i18n): update translations"
    title: "feat(i18n): update translations"
    body: |
      ## Summary
      Automated translation updates triggered by changes to documentation.
      This PR was automatically created after content changes were made, updating translations for all supported languages using the Lingo.dev AI translation engine.
      - [x] Self-reviewed my changes (automated process)
      - [ ] Tests added/updated and passing
      - [x] No new warnings introduced
      - [x] I confirm that I have read and agree to the terms outlined in the [Contributor License Agreement (CLA)](./CONTRIBUTING.md#contributor-license-agreement-cla)
      ## Screenshots/Videos
      <!-- Translation changes are text-based - no visual changes expected -->
      <!-- Reviewers should check the documentation site renders correctly for all languages -->
See the [README](https://github.com/simstudioai/sim/tree/main/packages/python-sdk) or the [docs](https://docs.sim.ai/sdks/python) for more information.
See the [README](https://github.com/simstudioai/sim/tree/main/packages/ts-sdk) or the [docs](https://docs.sim.ai/sdks/typescript) for more information.
**You are tasked with implementing solutions that follow best practices. You MUST be accurate, elegant, and efficient as an expert programmer.**
---
# Role
You are a professional software engineer. All code you write MUST follow best practices, ensuring accuracy, quality, readability, and cleanliness. You MUST make FOCUSED EDITS that are EFFICIENT and ELEGANT.
## Logs
ENSURE that you use `logger.info`, `logger.warn`, and `logger.error` instead of `console.log` whenever you want to display logs.
## Comments
You must use TSDoc for comments. Do not use `====` comment banners to separate sections. Do not leave any comments that are not TSDoc.
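For example, a minimal sketch that satisfies both the logging and comment rules (the logger import path and `createLogger` helper are assumptions; use the project's actual logger module):
```typescript
// Assumed logger module; swap in the project's real logger import.
import { createLogger } from '@/lib/logs/console-logger'

const logger = createLogger('PayloadParser')

/**
 * Parses a raw JSON payload, logging a warning instead of using console.log on failure.
 *
 * @param raw - The raw JSON string to parse
 * @returns The parsed object, or null when parsing fails
 */
export function parsePayload(raw: string): Record<string, unknown> | null {
  try {
    return JSON.parse(raw) as Record<string, unknown>
  } catch (error) {
    logger.warn('Failed to parse payload', { error })
    return null
  }
}
```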
## Global Styles
You should not update the global styles unless it is absolutely necessary. Keep all styling local to components and files.
## Bun
Use `bun` and `bunx`, not `npm` and `npx`.
## Code Quality
- Write clean, maintainable code that follows the project's existing patterns
- Prefer composition over inheritance
- Keep functions small and focused on a single responsibility
- Use meaningful variable and function names
- Handle errors gracefully and provide useful error messages
- Write type-safe code with proper TypeScript types
## Testing
- Write tests for new functionality when appropriate
- Ensure existing tests pass before completing work
- Follow the project's testing conventions
## Performance
- Consider performance implications of your code
- Avoid unnecessary re-renders in React components
If you already have Ollama running on your host machine (outside Docker), you need to configure the `OLLAMA_URL` to use `host.docker.internal` instead of `localhost`:
```bash
# Docker Desktop (macOS/Windows)
OLLAMA_URL=http://host.docker.internal:11434 docker compose -f docker-compose.prod.yml up -d
# Linux (add extra_hosts or use host IP)
docker compose -f docker-compose.prod.yml up -d # Then set OLLAMA_URL to your host's IP
```
**Why?** When running inside Docker, `localhost` refers to the container itself, not your host machine. `host.docker.internal` is a special DNS name that resolves to the host.
For Linux users, you can either:
- Use your host machine's actual IP address (e.g., `http://192.168.1.100:11434`)
- Add `extra_hosts: ["host.docker.internal:host-gateway"]` to the simstudio service in your compose file
#### Using vLLM
Sim also supports [vLLM](https://docs.vllm.ai/) for self-hosted models with OpenAI-compatible API:
```bash
# Set these environment variables
VLLM_BASE_URL=http://your-vllm-server:8000
VLLM_API_KEY=your_optional_api_key # Only if your vLLM instance requires auth
```
When running with Docker, use `host.docker.internal` if vLLM is on your host machine (same as Ollama above).
### Self-hosted: Dev Containers
1. Open VS Code with the [Remote - Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers)
2. Open the project and click "Reopen in Container" when prompted
3. Run `bun run dev:full` in the terminal or use the `sim-start` alias
- This starts both the main application and the realtime socket server
### Self-hosted: Manual Setup
**Requirements:**
- [Bun](https://bun.sh/) runtime
- [Node.js](https://nodejs.org/) v20+ (required for sandboxed code execution)
- PostgreSQL 12+ with [pgvector extension](https://github.com/pgvector/pgvector) (required for AI embeddings)
**Note:** Sim uses vector embeddings for AI features like knowledge bases and semantic search, which requires the `pgvector` PostgreSQL extension.
default: 'Sim Documentation - Visual Workflow Builder for AI Applications',
template: '%s',
},
description:
  'Comprehensive documentation for Sim - the visual workflow builder for AI applications. Create powerful AI agents, automation workflows, and data processing pipelines by connecting blocks on a canvas—no coding required.',
title: 'Sim Documentation - Visual Workflow Builder for AI Applications',
description:
  'Comprehensive documentation for Sim - the visual workflow builder for AI applications. Create powerful AI agents, automation workflows, and data processing pipelines.',
Sim is a visual workflow builder for AI applications. Create powerful AI agents, automation workflows, and data processing pipelines by connecting blocks on a canvas—no coding required.
## Documentation Overview
This file provides an overview of our documentation. For full content of all pages, see ${baseUrl}/llms-full.txt
'Comprehensive documentation for Sim visual workflow builder for AI applications. Create powerful AI agents, automation workflows, and data processing pipelines.',
'Visual workflow builder for AI applications. Create powerful AI agents, automation workflows, and data processing pipelines by connecting blocks on a canvas—no coding required.',
url: baseUrl,
author: {
  '@type': 'Organization',
  name: 'Sim Team',
},
offers: {
  '@type': 'Offer',
  category: 'Developer Tools',
},
featureList: [
  'Visual workflow builder with drag-and-drop interface',
description: Create powerful AI agents using any LLM provider
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Step, Steps } from 'fumadocs-ui/components/steps'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { ThemeImage } from '@/components/ui/theme-image'
The Agent block serves as the interface between your workflow and Large Language Models (LLMs). It executes inference requests against various AI providers, processes natural language inputs according to defined instructions, and generates structured or unstructured outputs for downstream consumption.
<ThemeImage
lightSrc="/static/light/agent-light.png"
darkSrc="/static/dark/agent-dark.png"
alt="Agent Block Configuration"
width={350}
height={175}
/>
## Overview
The Agent block enables you to:
<Steps>
<Step>
<strong>Process natural language</strong>: Analyze user input and generate contextual responses
</Step>
<Step>
<strong>Execute AI-powered tasks</strong>: Perform content analysis, generation, and decision-making
</Step>
<Step>
<strong>Call external tools</strong>: Access APIs, databases, and services during processing
</Step>
<Step>
<strong>Generate structured output</strong>: Return JSON data that matches your schema requirements
</Step>
</Steps>
## Configuration Options
### System Prompt
The system prompt establishes the agent's operational parameters and behavioral constraints. This configuration defines the agent's role, response methodology, and processing boundaries for all incoming requests.
```markdown
You are a helpful assistant that specializes in financial analysis.
Always provide clear explanations and cite sources when possible.
When responding to questions about investments, include risk disclaimers.
```
### User Prompt
The user prompt represents the primary input data for inference processing. This parameter accepts natural language text or structured data that the agent will analyze and respond to. Input sources include:
- **Static Configuration**: Direct text input specified in the block configuration
- **Dynamic Input**: Data passed from upstream blocks through connection interfaces
- **Runtime Generation**: Programmatically generated content during workflow execution
### Model Selection
The Agent block supports multiple LLM providers through a unified inference interface. Available models include:
The temperature range (0-1 or 0-2) varies depending on the selected model.
</p>
### API Key
Your API key for the selected LLM provider. This is securely stored and used for authentication.
### Tools
Tools extend the agent's capabilities through external API integrations and service connections. The tool system enables function calling, allowing the agent to execute operations beyond text generation.
**Tool Integration Process**:
1. Access the Tools configuration section within the Agent block
2. Select from 60+ pre-built integrations or define custom functions
3. Configure authentication parameters and operational constraints
The Response Format parameter enforces structured output generation through JSON Schema validation. This ensures consistent, machine-readable responses that conform to predefined data structures:
```json
{
"name": "user_analysis",
"schema": {
"type": "object",
"properties": {
"sentiment": {
"type": "string",
"enum": ["positive", "negative", "neutral"]
},
"confidence": {
"type": "number",
"minimum": 0,
"maximum": 1
}
},
"required": ["sentiment", "confidence"]
}
}
```
This configuration constrains the model's output to comply with the specified schema, preventing free-form text responses and ensuring structured data generation.
### Accessing Results
After an agent completes, you can access its outputs:
- **`<agent.content>`**: The agent's response text or structured data
<li>Agent with GPT-4o performs technical analysis</li>
<li>Agent with Claude analyzes sentiment and tone</li>
<li>Function block combines results for final report</li>
</ol>
</div>
### Tool-Powered Research Assistant
<div className="mb-4 rounded-md border p-4">
<h4 className="font-medium">Scenario: Research assistant with web search and document access</h4>
<ol className="list-decimal pl-5 text-sm">
<li>User query received via input</li>
<li>Agent searches web using Google Search tool</li>
<li>Agent accesses Notion database for internal docs</li>
<li>Agent compiles comprehensive research report</li>
</ol>
</div>
## Best Practices
- **Be specific in system prompts**: Clearly define the agent's role, tone, and limitations. The more specific your instructions are, the better the agent will be able to fulfill its intended purpose.
- **Choose the right temperature setting**: Use lower temperature settings (0-0.3) when accuracy is important, or increase temperature (0.7-2.0) for more creative or varied responses
- **Leverage tools effectively**: Integrate tools that complement the agent's purpose and enhance its capabilities. Be selective about which tools you provide to avoid overwhelming the agent. For tasks with little overlap, use another Agent block for the best results.
description: Connect to external services through API endpoints
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Step, Steps } from 'fumadocs-ui/components/steps'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { ThemeImage } from '@/components/ui/theme-image'
The API block enables you to connect your workflow to external services through HTTP requests. It supports various methods like GET, POST, PUT, DELETE, and PATCH, allowing you to interact with virtually any API endpoint.
<ThemeImage
lightSrc="/static/light/api-light.png"
darkSrc="/static/dark/api-dark.png"
alt="API Block"
width={350}
height={175}
/>
## Overview
The API block enables you to:
<Steps>
<Step>
<strong>Connect to external services</strong>: Make HTTP requests to REST APIs and web services
</Step>
<Step>
<strong>Send and receive data</strong>: Process responses and transform data from external sources
</Step>
<Step>
<strong>Integrate third-party platforms</strong>: Connect with services like Stripe, Slack, or custom APIs
</Step>
<Step>
<strong>Handle authentication</strong>: Support various auth methods including Bearer tokens and API keys
</Step>
</Steps>
## Configuration Options
### URL
The endpoint URL for the API request. This can be:
- A static URL entered directly in the block
- A dynamic URL connected from another block's output
- A URL with path parameters
### Method
Select the HTTP method for your request:
- **GET**: Retrieve data from the server
- **POST**: Send data to the server to create a resource
- **PUT**: Update an existing resource on the server
- **DELETE**: Remove a resource from the server
- **PATCH**: Partially update an existing resource
### Query Parameters
Define key-value pairs that will be appended to the URL as query parameters. For example:
```
Key: apiKey
Value: your_api_key_here
Key: limit
Value: 10
```
These would be added to the URL as `?apiKey=your_api_key_here&limit=10`.
### Headers
Configure HTTP headers for your request. Common headers include:
```
Key: Content-Type
Value: application/json
Key: Authorization
Value: Bearer your_token_here
```
### Request Body
For methods that support a request body (POST, PUT, PATCH), you can define the data to send. The body can be:
- JSON data entered directly in the block
- Data connected from another block's output
- Dynamically generated during workflow execution
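For example, a POST body for a hypothetical endpoint might combine static JSON with a referenced block output (the `<agent.content>` reference and field names are illustrative):
```json
{
  "summary": "<agent.content>",
  "source": "sim-workflow",
  "notify": true
}
```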
### Accessing Results
After an API request completes, you can access its outputs:
- **`<api.data>`**: The response body data from the API
- **`<api.status>`**: HTTP status code (200, 404, 500, etc.)
- **`<api.headers>`**: Response headers from the server
- **`<api.error>`**: Error details if the request failed
## Advanced Features
### Dynamic URL Construction
Build URLs dynamically using variables from previous blocks:
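For example (a sketch; the block name is illustrative, using the same `<block.output>` reference syntax shown above):
```
https://api.example.com/users/<agent.content>/orders
```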
description: Create conditional logic and branching in your workflows
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Step, Steps } from 'fumadocs-ui/components/steps'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Accordion, Accordions } from 'fumadocs-ui/components/accordion'
import { ThemeImage } from '@/components/ui/theme-image'
The Condition block allows you to branch your workflow execution path based on boolean expressions. It evaluates conditions and routes the workflow accordingly, enabling you to create dynamic, responsive workflows with different execution paths.
<ThemeImage
lightSrc="/static/light/condition-light.png"
darkSrc="/static/dark/condition-dark.png"
alt="Condition Block"
width={350}
height={175}
/>
<Callout>
Condition blocks enable deterministic decision-making without requiring an LLM, making them ideal
for straightforward branching logic.
</Callout>
## Overview
The Condition block enables you to:
<Steps>
<Step>
<strong>Create branching logic</strong>: Route workflows based on boolean expressions
</Step>
<Step>
<strong>Make data-driven decisions</strong>: Evaluate conditions using previous block outputs
</Step>
<Step>
<strong>Handle multiple scenarios</strong>: Define multiple conditions with different paths
</Step>
<Step>
<strong>Provide deterministic routing</strong>: Make decisions without requiring an LLM
</Step>
</Steps>
## How It Works
The Condition block operates through a sequential evaluation process:
1. **Evaluate Expression** - Processes the JavaScript/TypeScript boolean expression using current workflow data
2. **Determine Result** - Returns true or false based on the expression evaluation
3. **Route Workflow** - Directs execution to the appropriate destination block based on the result
4. **Provide Context** - Generates metadata about the decision for debugging and monitoring
## Configuration Options
### Conditions
Define one or more conditions that will be evaluated. Each condition includes:
- **Expression**: A JavaScript/TypeScript expression that evaluates to true or false
- **Path**: The destination block to route to if the condition is true
- **Description**: Optional explanation of what the condition checks
You can create multiple conditions that are evaluated in order, with the first matching condition determining the execution path.
### Condition Expression Format
Conditions use JavaScript syntax and can reference input values from previous blocks.
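For example, a condition might check a structured output from an upstream Agent block (a sketch; the block name and fields are illustrative, using the `<block.output>` reference syntax shown elsewhere in these docs):
```javascript
// True when the upstream agent reported negative sentiment with high confidence
<agent.content>.sentiment === "negative" && <agent.content>.confidence > 0.8
```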
- **Order conditions correctly**: Place more specific conditions before general ones to ensure specific logic takes precedence over fallbacks
- **Include a default condition**: Add a catch-all condition (`true`) as the last condition to handle unmatched cases and prevent workflow execution from getting stuck
- **Keep expressions simple**: Use clear, straightforward boolean expressions for better readability and easier debugging
- **Document your conditions**: Add descriptions to explain the purpose of each condition for better team collaboration and maintenance
- **Test edge cases**: Verify conditions handle boundary values correctly by testing with values at the edges of your condition ranges
description: Assess content quality using customizable evaluation metrics
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Step, Steps } from 'fumadocs-ui/components/steps'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { ThemeImage } from '@/components/ui/theme-image'
import { Video } from '@/components/ui/video'
The Evaluator block uses AI to score and assess content quality based on metrics you define. Perfect for quality control, A/B testing, and ensuring your AI outputs meet specific standards.
<ThemeImage
lightSrc="/static/light/evaluator-light.png"
darkSrc="/static/dark/evaluator-dark.png"
alt="Evaluator Block Configuration"
width={350}
height={175}
/>
## What You Can Evaluate
**AI-Generated Content**: Score chatbot responses, generated articles, or marketing copy
**User Input**: Evaluate customer feedback, survey responses, or form submissions
**Content Quality**: Assess clarity, accuracy, relevance, and tone
**Performance Metrics**: Track improvements over time with consistent scoring
**A/B Testing**: Compare different approaches with objective metrics
## Configuration Options
### Evaluation Metrics
Define custom metrics to evaluate content against. Each metric includes:
- **Name**: A short identifier for the metric
- **Description**: A detailed explanation of what the metric measures
- **Range**: The numeric range for scoring (e.g., 1-5, 0-10)
Example metrics:
```
Accuracy (1-5): How factually accurate is the content?
Clarity (1-5): How clear and understandable is the content?
Relevance (1-5): How relevant is the content to the original query?
```
### Content
The content to be evaluated. This can be:
- Directly provided in the block configuration
- Connected from another block's output (typically an Agent block)
description: Execute custom JavaScript or TypeScript code in your workflows
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Step, Steps } from 'fumadocs-ui/components/steps'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { ThemeImage } from '@/components/ui/theme-image'
The Function block lets you run custom JavaScript or TypeScript code in your workflow. Use it to transform data, perform calculations, or implement custom logic that isn't available in other blocks.
<ThemeImage
lightSrc="/static/light/function-light.png"
darkSrc="/static/dark/function-dark.png"
alt="Function Block with Code Editor"
width={350}
height={175}
/>
## Overview
The Function block enables you to:
<Steps>
<Step>
<strong>Transform data</strong>: Convert formats, parse text, manipulate arrays and objects
</Step>
<Step>
<strong>Perform calculations</strong>: Math operations, statistics, financial calculations
</Step>
<Step>
<strong>Implement custom logic</strong>: Complex conditionals, loops, and algorithms
</Step>
<Step>
<strong>Process external data</strong>: Parse responses, format requests, handle authentication
</Step>
</Steps>
## How It Works
The Function block runs your code in a secure, isolated environment:
1. **Receive Input**: Access data from previous blocks via the `input` object
2. **Execute Code**: Run your JavaScript/TypeScript code
3. **Return Results**: Use `return` to pass data to the next block
4. **Handle Errors**: Built-in error handling and logging
## Configuration Options
### Code Editor
Write your JavaScript/TypeScript code in a full-featured editor with:
- Syntax highlighting and error checking
- Line numbers and bracket matching
- Support for modern JavaScript features
- Native support for `fetch`
### Accessing Input Data
Use the `input` object to access data from previous blocks:
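A short sketch (block names are illustrative; the `input.<blockName>` pattern mirrors the aggregation example shown later for Parallel blocks):
```javascript
// Read outputs from upstream blocks via the input object
const users = input.api.data        // response body from an upstream API block
const summary = input.agent.content // text from an upstream Agent block

// Whatever you return becomes this block's output
return {
  count: Array.isArray(users) ? users.length : 0,
  summary,
}
```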
description: The building components of your AI workflows
---
import { Card, Cards } from 'fumadocs-ui/components/card'
import { Step, Steps } from 'fumadocs-ui/components/steps'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { BlockTypes } from '@/components/ui/block-types'
import { Video } from '@/components/ui/video'
Blocks are the building components you connect together to create AI workflows. Think of them as specialized modules that each handle a specific task—from chatting with AI models to making API calls or processing data.
description: Create iterative workflows with loops that execute blocks repeatedly
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Step, Steps } from 'fumadocs-ui/components/steps'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { ThemeImage } from '@/components/ui/theme-image'
The Loop block is a container block in Sim that allows you to execute a group of blocks repeatedly. Loops enable iterative processing in your workflows.
<ThemeImage
lightSrc="/static/light/loop-light.png"
darkSrc="/static/dark/loop-dark.png"
alt="Loop Block"
width={500}
height={300}
/>
<Callout type="info">
Loop blocks are container nodes that can hold other blocks inside them. The blocks inside a loop will execute multiple times based on your configuration.
</Callout>
## Overview
The Loop block enables you to:
<Steps>
<Step>
<strong>Iterate over collections</strong>: Process arrays or objects one item at a time
</Step>
<Step>
<strong>Repeat operations</strong>: Execute blocks a fixed number of times
</Step>
</Steps>
## Configuration Options
### Loop Type
Choose between two types of loops:
<Tabs items={['For Loop', 'ForEach Loop']}>
<Tab>
A numeric loop that executes a fixed number of times. Use this when you need to repeat an operation a specific number of times.
```
Example: Run 5 times
- Iteration 1
- Iteration 2
- Iteration 3
- Iteration 4
- Iteration 5
```
</Tab>
<Tab>
A collection-based loop that iterates over each item in an array or object. Use this when you need to process a collection of items.
```
Example: Process ["apple", "banana", "orange"]
- Iteration 1: Process "apple"
- Iteration 2: Process "banana"
- Iteration 3: Process "orange"
```
</Tab>
</Tabs>
## How to Use Loops
### Creating a Loop
1. Drag a Loop block from the toolbar onto your canvas
2. Configure the loop type and parameters
3. Drag other blocks inside the loop container
4. Connect the blocks as needed
### Accessing Results
After a loop completes, you can access aggregated results:
- **`<loop.results>`**: Array of results from all loop iterations
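For instance, a Function block placed after the loop can work with the aggregated results (a sketch mirroring the Parallel aggregation example later on this page):
```javascript
// In a Function block after the loop
const allResults = input.loop.results;
// Returns: [resultFromIteration1, resultFromIteration2, ...]
return { total: allResults.length }
```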
## Example Use Cases
### Processing API Results
<div className="mb-4 rounded-md border p-4">
<h4 className="font-medium">Scenario: Process multiple customer records</h4>
Execute a fixed number of parallel instances. Use this when you need to run the same operation multiple times concurrently.
```
Example: Run 5 parallel instances
- Instance 1 ┐
- Instance 2 ├─ All execute simultaneously
- Instance 3 │
- Instance 4 │
- Instance 5 ┘
```
</Tab>
<Tab>
Distribute a collection across parallel instances. Each instance processes one item from the collection simultaneously.
```
Example: Process ["task1", "task2", "task3"] in parallel
- Instance 1: Process "task1" ┐
- Instance 2: Process "task2" ├─ All execute simultaneously
- Instance 3: Process "task3" ┘
```
</Tab>
</Tabs>
## How to Use Parallel Blocks
### Creating a Parallel Block
1. Drag a Parallel block from the toolbar onto your canvas
2. Configure the parallel type and parameters
3. Drag a single block inside the parallel container
4. Connect the block as needed
### Accessing Results
After a parallel block completes, you can access aggregated results:
- **`<parallel.results>`**: Array of results from all parallel instances
## Example Use Cases
### Batch API Processing
<div className="mb-4 rounded-md border p-4">
<h4 className="font-medium">Scenario: Process multiple API calls simultaneously</h4>
<ol className="list-decimal pl-5 text-sm">
<li>Parallel block with collection of API endpoints</li>
<li>Inside parallel: API block calls each endpoint</li>
<li>After parallel: Process all responses together</li>
</ol>
</div>
### Multi-Model AI Processing
<div className="mb-4 rounded-md border p-4">
<h4 className="font-medium">Scenario: Get responses from multiple AI models</h4>
<ol className="list-decimal pl-5 text-sm">
<li>Count-based parallel set to 3 instances</li>
<li>Inside parallel: Agent configured with different model per instance</li>
<li>After parallel: Compare and select best response</li>
</ol>
</div>
## Advanced Features
### Result Aggregation
Results from all parallel instances are automatically collected:
```javascript
// In a Function block after the parallel
const allResults = input.parallel.results;
// Returns: [result1, result2, result3, ...]
```
### Instance Isolation
Each parallel instance runs independently:
- Separate variable scopes
- No shared state between instances
- Failures in one instance don't affect others
### Limitations
<Callout type="warning">
Container blocks (Loops and Parallels) cannot be nested inside each other. This means:
- You cannot place a Loop block inside a Parallel block
- You cannot place another Parallel block inside a Parallel block
- You cannot place any container block inside another container block
</Callout>
<Callout type="warning">
Parallel blocks can only contain a single block. You cannot have multiple blocks connected to each other inside a parallel - only the first block would execute in that case.
</Callout>
<Callout type="info">
While parallel execution is faster, be mindful of:
- API rate limits when making concurrent requests
- Memory usage with large datasets
- Maximum of 20 concurrent instances to prevent resource exhaustion
</Callout>
## Parallel vs Loop
Understanding when to use each:
| Feature | Parallel | Loop |
|---------|----------|------|
| Execution | Concurrent | Sequential |
| Speed | Faster for independent operations | Slower but ordered |
| Order | No guaranteed order | Maintains order |
| Use case | Independent operations | Dependent operations |
description: Send a structured response back to API calls
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Step, Steps } from 'fumadocs-ui/components/steps'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { ThemeImage } from '@/components/ui/theme-image'
The Response block is the final step in your workflow that formats and returns data to whoever called your workflow. It's like the "return" statement for your entire workflow—it packages up results and sends them back.
<ThemeImage
lightSrc="/static/light/response-light.png"
darkSrc="/static/dark/response-dark.png"
alt="Response Block Configuration"
width={350}
height={175}
/>
<Callout type="info">
Response blocks are terminal blocks - they end the workflow execution and cannot connect to other blocks.
</Callout>
## When You Need Response Blocks
**API Endpoints**: When your workflow is called via API, Response blocks format the return data
**Webhooks**: Return confirmation or data back to the calling system
**Testing**: See formatted results when testing your workflow
**Data Export**: Structure data for external systems or reports
## Two Ways to Build Responses
### Builder Mode (Recommended)
Visual interface for building response structure:
- Drag and drop fields
- Reference workflow variables easily
- Visual preview of response structure
### Editor Mode (Advanced)
Write JSON directly:
- Full control over response format
- Support for complex nested structures
- Use `<variable.name>` syntax for dynamic values
## Configuration Options
### Response Data
The response data is the main content that will be sent back to the API caller. This should be formatted as JSON and can include:
- Static values
- Dynamic references to workflow variables using the `<variable.name>` syntax
- Nested objects and arrays
- Any valid JSON structure
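For example, a response body can mix static fields with dynamic references; the variable names below (`userId`, `searchResults`) are placeholders for whatever variables your workflow defines:

```json
{
  "success": true,
  "user": { "id": "<variable.userId>" },
  "results": "<variable.searchResults>"
}
```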
### Status Code
Set the HTTP status code for the response (defaults to 200). Common status codes include 200 (success), 201 (created), 400 (bad request), 401 (unauthorized), 404 (not found), and 500 (internal server error).
description: Route workflow execution based on specific conditions or logic
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Step, Steps } from 'fumadocs-ui/components/steps'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Accordion, Accordions } from 'fumadocs-ui/components/accordion'
import { ThemeImage } from '@/components/ui/theme-image'
import { Video } from '@/components/ui/video'
The Router block uses AI to intelligently decide which path your workflow should take next. Unlike Condition blocks that use simple rules, Router blocks can understand context and make smart routing decisions based on content analysis.
<ThemeImage
lightSrc="/static/light/router-light.png"
darkSrc="/static/dark/router-dark.png"
alt="Router Block with Multiple Paths"
width={350}
height={175}
/>
## Overview
The Router block enables you to:
<Steps>
<Step>
<strong>Intelligent content routing</strong>: Use AI to understand intent and context
</Step>
<Step>
<strong>Dynamic path selection</strong>: Route workflows based on unstructured content analysis
</Step>
<Step>
<strong>Context-aware decisions</strong>: Make smart routing choices beyond simple rules
</Step>
<Step>
<strong>Multi-path management</strong>: Handle complex workflows with multiple potential destinations
</Step>
</Steps>
## Router vs Condition Blocks
<Accordions>
<Accordion title="When to Use Router">
- AI-powered content analysis needed
- Unstructured or varying content types
- Intent-based routing (e.g., "route support tickets to departments")
- Context-aware decision making required
</Accordion>
<Accordion title="When to Use Condition">
- Simple, rule-based decisions
- Structured data or numeric comparisons
- Fast, deterministic routing needed
- Boolean logic sufficient
</Accordion>
</Accordions>
## How It Works
The Router block:
<Steps>
<Step>
<strong>Analyze content</strong>: Uses an LLM to understand input content and context
</Step>
<Step>
<strong>Evaluate targets</strong>: Compares content against available destination blocks
</Step>
<Step>
<strong>Select destination</strong>: Identifies the most appropriate path based on intent
</Step>
<Step>
<strong>Route execution</strong>: Directs workflow to the selected block
</Step>
</Steps>
## Configuration Options
### Content/Prompt
The content or prompt that the Router will analyze to make routing decisions. This can be:
- A direct user query or input
- Output from a previous block
- A system-generated message
### Target Blocks
The possible destination blocks that the Router can select from. The Router will automatically detect connected blocks, but you can also:
- Customize the descriptions of target blocks to improve routing accuracy
- Specify routing criteria for each target block
- Exclude certain blocks from being considered as routing targets
description: Execute other workflows as reusable components within your current workflow
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Step, Steps } from 'fumadocs-ui/components/steps'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { ThemeImage } from '@/components/ui/theme-image'
The Workflow block allows you to execute other workflows as reusable components within your current workflow. This powerful feature enables modular design, code reuse, and the creation of complex nested workflows that can be composed from smaller, focused workflows.
<ThemeImage
lightSrc="/static/light/workflow-light.png"
darkSrc="/static/dark/workflow-dark.png"
alt="Workflow Block"
width={300}
height={175}
/>
<Callout type="info">
Workflow blocks enable modular design by allowing you to compose complex workflows from smaller, reusable components.
</Callout>
## Overview
The Workflow block serves as a bridge between workflows, enabling you to:
<Steps>
<Step>
<strong>Reuse existing workflows</strong>: Execute previously created workflows as components within new workflows
</Step>
<Step>
<strong>Create modular designs</strong>: Break down complex processes into smaller, manageable workflows
</Step>
<Step>
<strong>Maintain separation of concerns</strong>: Keep different business logic isolated in separate workflows
</Step>
<Step>
<strong>Enable team collaboration</strong>: Share and reuse workflows across different projects and team members
</Step>
</Steps>
## How It Works
The Workflow block:
1. Takes a reference to another workflow in your workspace
2. Passes input data from the current workflow to the child workflow
3. Executes the child workflow in an isolated context
4. Returns the results back to the parent workflow for further processing
## Configuration Options
### Workflow Selection
Choose which workflow to execute from a dropdown list of available workflows in your workspace. The list includes:
- All workflows you have access to in the current workspace
- Workflows shared with you by other team members
- Both enabled and disabled workflows (though only enabled workflows can be executed)
### Input Data
Define the data to pass to the child workflow:
- **Single Variable Input**: Select a variable or block output to pass to the child workflow
- **Variable References**: Use `<variable.name>` to reference workflow variables
- **Block References**: Use `<blockName.field>` to reference outputs from previous blocks
- **Automatic Mapping**: The selected data is automatically available as `start.input` in the child workflow
- **Optional**: The input field is optional - child workflows can run without input data
- **Type Preservation**: Variable types (strings, numbers, objects, etc.) are preserved when passed to the child workflow
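As a rough sketch, a parent workflow might pass one block's output to the child, which then reads it via `start.input`; the block name `agent1` below is hypothetical:

```
Parent workflow:  Workflow block "Input" = <agent1.content>
Child workflow:   reference the value as <start.input>
```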
### Accessing Results
After a workflow executes, you can access its outputs:
- **`<workflow.response>`**: The complete output from the child workflow
- **`<workflow.name>`**: The name of the executed child workflow
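Downstream blocks can then reference the child's output through these tags. A minimal sketch for a Condition block, assuming the Workflow block is named `workflow1` and the child returns an object with a `status` field:

```javascript
// Condition block expression (sketch)
<workflow1.response.status> === "completed"
```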
import { Callout } from 'fumadocs-ui/components/callout'
import { Step, Steps } from 'fumadocs-ui/components/steps'
## How Connections Work
Connections are the pathways that allow data to flow between blocks in your workflow. When you connect two blocks in Sim, you're establishing a data flow relationship that defines how information passes from one block to another.
<Callout type="info">
Each connection represents a directed relationship where data flows from a source block's output
to a destination block's input.
</Callout>
### Creating Connections
<Steps>
<Step>
<strong>Select Source Block</strong>: Click on the output port of the block you want to connect
from
</Step>
<Step>
<strong>Draw Connection</strong>: Drag to the input port of the destination block
</Step>
<Step>
<strong>Confirm Connection</strong>: Release to create the connection
</Step>
<Step>
<strong>Configure (Optional)</strong>: Some connections may require additional configuration
</Step>
</Steps>
### Connection Flow
The flow of data through connections follows these principles:
1. **Directional Flow**: Data always flows from outputs to inputs
2. **Execution Order**: Blocks execute in order based on their connections
3. **Data Transformation**: Data may be transformed as it passes between blocks
4. **Conditional Paths**: Some blocks (like Router and Condition) can direct flow to different paths
### Connection Visualization
Connections are visually represented in the workflow editor:
- **Solid Lines**: Active connections that will pass data
- **Animated Flow**: During execution, data flow is visualized along connections
- **Color Coding**: Different connection types may have different colors
- **Connection Tags**: Visual indicators showing what data is available
### Managing Connections
You can manage your connections in several ways:
- **Delete**: Click on a connection and press Delete or use the context menu
- **Reroute**: Drag a connection to change its path
- **Inspect**: Click on a connection to see details about the data being passed
- **Disable**: Temporarily disable a connection without deleting it
<Callout type="warning">
Deleting a connection will immediately stop data flow between the blocks. Make sure this is
intended before removing connections.
</Callout>
## Connection Compatibility
Not all blocks can be connected to each other. Compatibility depends on:
1. **Data Type Compatibility**: The output type must be compatible with the input type
2. **Block Restrictions**: Some blocks may have restrictions on what they can connect to
3. **Workflow Logic**: Connections must make logical sense in the context of your workflow
The editor will indicate when connections are invalid or incompatible.
description: Understanding the data structure of different block outputs
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
When you connect blocks, the output data structure from the source block determines what values are available in the destination block. Each block type produces a specific output structure that you can reference in downstream blocks.
<Callout type="info">
Understanding these data structures is essential for effectively using connection tags and
accessing the right data in your workflows.
</Callout>
## Block Output Structures
Different block types produce different output structures. Here's what you can expect from each block type:
- **selectedPath**: Information about the selected path
- **blockId**: ID of the selected destination block
- **blockType**: Type of the selected block
- **blockTitle**: Title of the selected block
</Tab>
</Tabs>
## Custom Output Structures
Some blocks may produce custom output structures based on their configuration:
1. **Agent Blocks with Response Format**: When using a response format in an Agent block, the output structure will match the defined schema instead of the standard structure.
2. **Function Blocks**: The `result` field can contain any data structure returned by your function code.
3. **API Blocks**: The `data` field will contain whatever the API returns, which could be any valid JSON structure.
<Callout type="warning">
Always check the actual output structure of your blocks during development to ensure you're
referencing the correct fields in your connections.
</Callout>
## Nested Data Structures
Many block outputs contain nested data structures. You can access these using dot notation in connection tags:
```
<blockId.path.to.nested.data>
```
For example:
- `<agent1.tokens.total>` - Access the total tokens from an Agent block
- `<api1.data.results[0].id>` - Access the ID of the first result from an API response
- `<function1.result.calculations.total>` - Access a nested field in a Function block's result
import { Callout } from 'fumadocs-ui/components/callout'
import { Card, Cards } from 'fumadocs-ui/components/card'
import { ConnectIcon } from '@/components/icons'
import { Video } from '@/components/ui/video'
Connections are the pathways that allow data to flow between blocks in your workflow. They define how information is passed from one block to another, enabling you to create sophisticated, multi-step processes.
<Callout type="info">
Properly configured connections are essential for creating effective workflows. They determine how
data moves through your system and how blocks interact with each other.
</Callout>
description: Using connection tags to reference data between blocks
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Video } from '@/components/ui/video'
Connection tags are visual representations of the data available from connected blocks. They provide an easy way to reference outputs from previous blocks in your workflow.
Connection tags are interactive elements that appear when blocks are connected. They represent the data that can flow from one block to another and allow you to:
- Visualize available data from source blocks
- Reference specific data fields in destination blocks
- Create dynamic data flows between blocks
<Callout type="info">
Connection tags make it easy to see what data is available from previous blocks and use it in your
current block without having to remember complex data structures.
</Callout>
## Using Connection Tags
There are two primary ways to use connection tags in your workflows:
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Image } from '@/components/ui/image'
The Agent block connects your workflow to Large Language Models (LLMs). It processes natural-language input, calls external tools, and generates structured or unstructured output.
<div className="flex justify-center">
<Image
src="/static/blocks/agent.png"
alt="Agent-Block-Konfiguration"
width={500}
height={400}
className="my-6"
/>
</div>
## Configuration Options
### System Prompt
The system prompt sets the agent's operating parameters and behavioral constraints. It defines the agent's role, response methodology, and processing boundaries for all incoming requests.
```markdown
You are a helpful assistant that specializes in financial analysis.
Always provide clear explanations and cite sources when possible.
When responding to questions about investments, include risk disclaimers.
```
### User Prompt
The user prompt is the primary input for inference. This parameter accepts natural-language text or structured data that the agent analyzes and responds to. Input sources include:
- **Static configuration**: Text entered directly in the block configuration
- **Dynamic input**: Data passed from upstream blocks via connections
- **Runtime generation**: Content generated programmatically during workflow execution
### Model Selection
The Agent block supports multiple LLM providers through a unified inference interface. Available models include:
- **Local models**: Ollama or vLLM-compatible models
### Temperature
Controls the randomness and creativity of responses:
- **Low (0-0.3)**: Deterministic and focused. Best for factual tasks and accuracy.
- **Medium (0.3-0.7)**: Balanced creativity and focus. Good for general use.
- **High (0.7-2.0)**: Creative and varied. Ideal for brainstorming and content generation.
### API Key
Your API key for the selected LLM provider. It is stored securely and used for authentication.
### Tools
Extend the agent's capabilities with external integrations. Choose from over 60 built-in tools or define custom functions.
**Available categories:**
- **Communication**: Gmail, Slack, Telegram, WhatsApp, Microsoft Teams
- **Data sources**: Notion, Google Sheets, Airtable, Supabase, Pinecone
- **Web services**: Firecrawl, Google Search, Exa AI, browser automation
- **Auto**: The model decides from context when to use tools
- **Required**: The tool must be called on every request
- **None**: The tool is available but not suggested to the model
### Response Format
The response format parameter enforces structured output through JSON Schema validation. This guarantees consistent, machine-readable responses that conform to a predefined data structure:
```json
{
"type": "object",
"properties": {
"sentiment": {
"type": "string",
"enum": ["positive", "neutral", "negative"]
},
"summary": {
"type": "string",
"description": "Brief summary of the content"
}
},
"required": ["sentiment", "summary"]
}
```
This configuration constrains the model's output to the specified schema, preventing free-form responses and ensuring structured data generation.
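With the schema above, the agent returns a JSON object of this shape (the values below are illustrative):

```json
{
  "sentiment": "positive",
  "summary": "The customer is satisfied with the onboarding experience."
}
```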
### Accessing Results
After an agent completes, you can access its outputs:
- **response**: The agent's response text or structured data
- **toolExecutions**: Details of the tools the agent used during execution
- **estimatedCost**: Estimated cost of the API call (if available)
## Advanced Features
### Memory + Agent: Conversation History
Use a Memory block with a consistent memoryId (for example, conversationHistory) to store messages across runs and include that history in the agent's prompt.
- Add the user's message before the agent runs
- Read the conversation history for context
- Append the agent's response after it runs
See the [`Memory`](/tools/memory) block reference for details.
## Outputs
- **`<agent.content>`**: The agent's response text
- **`<agent.tokens>`**: Token usage statistics
- **`<agent.tool_calls>`**: Tool execution details
- **`<agent.cost>`**: Estimated cost of the API call
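Downstream blocks reference these outputs with connection tags. A minimal sketch, assuming the block is named `agent1`:

```javascript
// In a downstream Function block (sketch)
const reply = <agent1.content>;            // response text
const tokensUsed = <agent1.tokens.total>;  // token usage
return { reply, tokensUsed };
```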
## Example Use Cases
**Customer support automation** - Handle inquiries with database and tool access
**Multi-model content analysis** - Analyze content with different AI models
```
Function (Process) → Agent (GPT-4o Technical) → Agent (Claude Sentiment) → Function (Report)
```
**Tool-powered research assistant** - Research topics with web search and document access
```
Input → Agent (Google Search, Notion) → Function (Compile Report)
```
## Best Practices
- **Be specific in system prompts**: Clearly define the agent's role, tone, and constraints. The more specific your instructions, the better the agent can fulfill its intended purpose.
- **Choose the right temperature**: Use lower temperatures (0-0.3) when accuracy matters, or raise the temperature (0.7-2.0) for more creative or varied responses
- **Use tools effectively**: Integrate tools that complement the agent's purpose and extend its capabilities. Be selective so you don't overwhelm the agent; for tasks with little overlap, use a separate Agent block for best results.
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Image } from '@/components/ui/image'
The API block connects your workflow to external services through HTTP requests. It supports the GET, POST, PUT, DELETE, and PATCH methods for interacting with REST APIs.
<div className="flex justify-center">
<Image
src="/static/blocks/api.png"
alt="API-Block"
width={500}
height={400}
className="my-6"
/>
</div>
## Configuration Options
### URL
The endpoint URL for the API request. This can be:
- A static URL entered directly in the block
- A dynamic URL connected to the output of another block
- A URL with path parameters
### Method
Choose the HTTP method for your request:
- **GET**: Retrieve data from the server
- **POST**: Send data to the server to create a resource
- **PUT**: Update an existing resource on the server
- **DELETE**: Remove a resource from the server
- **PATCH**: Partially update an existing resource
### Query Parameters
Define key-value pairs that are appended to the URL as query parameters. For example:
```
Key: apiKey
Value: your_api_key_here
Key: limit
Value: 10
```
These are appended to the URL as `?apiKey=your_api_key_here&limit=10`.
### Headers
Configure HTTP headers for your request. Common headers include:
```
Key: Content-Type
Value: application/json
Key: Authorization
Value: Bearer your_token_here
```
### Request Body
For methods that support a request body (POST, PUT, PATCH), you can define the data to send. The body can be:
- JSON data entered directly in the block
- Data connected to the output of another block
- Generated dynamically during workflow execution
### Accessing Results
After an API request completes, you can access the following outputs:
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Image } from '@/components/ui/image'
The Condition block branches workflow execution based on boolean expressions. Evaluate conditions against previous block outputs and route to different paths without using an LLM.
<div className="flex justify-center">
<Image
src="/static/blocks/condition.png"
alt="Bedingungsblock"
width={500}
height={400}
className="my-6"
/>
</div>
## Configuration Options
### Conditions
Define one or more conditions to evaluate. Each condition includes:
- **Expression**: A JavaScript/TypeScript expression that evaluates to true or false
- **Path**: The target block to route to when the condition is true
- **Description**: An optional explanation of what the condition checks
You can create multiple conditions that are evaluated in order; the first matching condition determines the execution path.
### Condition Expression Format
Conditions use JavaScript syntax and can reference input values from previous blocks, as in the sketch below.
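A minimal sketch of two ordered conditions, assuming an upstream block named `agent1` that returns a numeric `score`:

```javascript
// Condition 1: high-confidence path
<agent1.content.score> >= 80

// Condition 2: everything else
<agent1.content.score> < 80
```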
**User onboarding flow** - Personalize onboarding based on the user type
```
Function (Process) → Condition (account_type === 'enterprise') → Advanced or Simple
```
## Best Practices
- **Order conditions correctly**: Place more specific conditions before general ones so that specific logic takes precedence over fallbacks
- **Use the else branch when needed**: If no conditions match and the else branch is not connected, that workflow branch simply ends. Connect the else branch if you need a fallback path for unmatched cases
- **Keep expressions simple**: Use clear, straightforward boolean expressions for better readability and easier debugging
- **Document your conditions**: Add descriptions that explain the purpose of each condition for better team collaboration and maintenance
- **Test edge cases**: Verify that conditions handle boundary values correctly by testing with values at the limits of your condition ranges
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Image } from '@/components/ui/image'
The Evaluator block uses AI to score content quality against custom metrics. Perfect for quality control, A/B testing, and making sure AI outputs meet specific standards.
<div className="flex justify-center">
<Image
src="/static/blocks/evaluator.png"
alt="Evaluator-Block-Konfiguration"
width={500}
height={400}
className="my-6"
/>
</div>
## Configuration Options
### Evaluation Metrics
Define the custom metrics against which content is scored. Each metric includes:
- **Name**: A short label for the metric
- **Description**: A detailed explanation of what the metric measures
- **Range**: The numeric scoring range (e.g., 1-5, 0-10)
Example metrics:
```
Accuracy (1-5): How factually accurate is the content?
Clarity (1-5): How clear and understandable is the content?
Relevance (1-5): How relevant is the content to the original query?
```
### Content
The content to evaluate. This can be:
- Provided directly in the block configuration
- Connected to the output of another block (typically an Agent block)
- Generated dynamically during workflow execution
### Model Selection
Choose an AI model to perform the evaluation:
- **Use specific metric descriptions**: Clearly define what each metric measures to get more accurate evaluations
- **Choose appropriate ranges**: Pick scoring ranges that offer enough granularity without becoming overly complex
- **Connect to Agent blocks**: Use Evaluator blocks to score Agent block outputs and build feedback loops
- **Use consistent metrics**: For comparative analysis, keep metrics consistent across similar evaluations
- **Combine multiple metrics**: Use several metrics to get a well-rounded assessment
The Function block runs custom JavaScript or TypeScript code in your workflows. Transform data, perform calculations, or implement custom logic.
<div className="flex justify-center">
<Image
src="/static/blocks/function.png"
alt="Funktionsblock mit Code-Editor"
width={500}
height={400}
className="my-6"
/>
</div>
## Outputs
- **`<function.result>`**: The value returned by your function
- **`<function.stdout>`**: console.log() output from your code
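A minimal sketch of a Function block body; the upstream block name `api1` and its `data.items` field are assumptions for illustration:

```javascript
// Sum order amounts from a previous API block
const items = <api1.data.items> || [];
const total = items.reduce((sum, item) => sum + item.amount, 0);
console.log(`Processed ${items.length} items`); // surfaced as <function.stdout>
return { total };                               // surfaced as <function.result>
```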
## Example Use Cases
**Data processing pipeline** - Transform API responses into structured data
```
API (Fetch) → Function (Process & Validate) → Function (Calculate Metrics) → Response
```
**Business logic implementation** - Calculate loyalty points and tiers
```
Agent (Get History) → Function (Calculate Score) → Function (Determine Tier) → Condition (Route)
```
**Data validation and cleanup** - Validate and sanitize user input
```
Input → Function (Validate & Sanitize) → API (Save to Database)
```
### Example: Loyalty Points Calculator
```javascript title="loyalty-calculator.js"
// Process customer data and calculate loyalty score
```
import { Callout } from 'fumadocs-ui/components/callout'
import { Image } from '@/components/ui/image'
import { Video } from '@/components/ui/video'
The Guardrails block validates and protects your AI workflows by checking content against multiple validation types. Ensure data quality, prevent hallucinations, detect personally identifiable information (PII), and enforce format requirements before content flows through your workflow.
<div className="flex justify-center">
<Image
src="/static/blocks/guardrails.png"
alt="Guardrails-Block"
width={500}
height={400}
className="my-6"
/>
</div>
## Validation Types
### JSON Validation
Checks that the content is well-formed JSON. Perfect for making sure structured LLM output can be parsed safely.
**Use cases:**
- Validating JSON responses from Agent blocks before parsing
- Making sure API payloads are correctly formatted
- Verifying the integrity of structured data
**Output:**
- `passed`: `true` for valid JSON, otherwise `false`
### Regex Validation
Checks whether the content matches a given regular expression pattern.
**Use cases:**
- Validating email addresses
- Checking phone number formats
- Verifying URLs or custom identifiers
- Enforcing specific text patterns
**Configuration:**
- **Regex pattern**: The regular expression to check against (e.g., `^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$` for emails)
**Output:**
- `passed`: `true` if the content matches the pattern, otherwise `false`
- `error`: Error message when validation fails
### Hallucination Detection
Uses retrieval-augmented generation (RAG) with LLM scoring to detect when AI-generated content contradicts, or is not grounded in, your knowledge base.
**How it works:**
1. Query your knowledge base for relevant context
2. Send both the AI output and the retrieved context to an LLM
**Prevent hallucinations** - Validate customer support responses against the knowledge base
```
Agent (Response) → Guardrails (Check KB) → Condition (Score ≥ 3) → Send or Flag
```
**Block PII in user input** - Sanitize user-submitted content
```
Input → Guardrails (Detect PII) → Condition (No PII) → Process or Reject
```
## Best Practices
- **Chain with Condition blocks**: Use `<guardrails.passed>` to branch workflow logic based on validation results (see the sketch after this list)
- **Validate JSON before parsing**: Always validate the JSON structure before attempting to parse LLM output
- **Choose appropriate PII types**: Select only the PII entity types relevant to your use case for better performance
- **Set sensible confidence thresholds**: For hallucination detection, tune the threshold to your accuracy requirements (higher = stricter)
- **Use strong models for hallucination detection**: GPT-4o or Claude 3.7 Sonnet produce more accurate confidence scores
- **Mask PII for logging**: Use "Mask" mode when you need to log or store content that might contain PII
- **Test regex patterns**: Validate your regex patterns thoroughly before using them in production
- **Monitor validation failures**: Track `<guardrails.error>` messages to identify common validation issues
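A minimal sketch of that chaining, assuming the Guardrails block is named `guardrails1`:

```javascript
// Condition block expression (sketch)
<guardrails1.passed> === true
```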
<Callout type="info">
Guardrails validation runs synchronously in your workflow. For hallucination detection, choose faster models (such as GPT-4o-mini) when latency is critical.
</Callout>
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Image } from '@/components/ui/image'
import { Video } from '@/components/ui/video'
The Human in the Loop block pauses workflow execution and waits for human input before continuing. Use it to add approval gates, collect feedback, or gather additional input at critical decision points.
<div className="flex justify-center">
<Image
src="/static/blocks/hitl-1.png"
alt="Human in the Loop Block Konfiguration"
width={500}
height={400}
className="my-6"
/>
</div>
When execution reaches this block, the workflow pauses indefinitely until a human responds through the approval portal, the API, or a webhook.
<div className="flex justify-center">
<Image
src="/static/blocks/hitl-2.png"
alt="Human in the Loop Genehmigungsportal"
width={700}
height={500}
className="my-6"
/>
</div>
## Configuration Options
### Paused Output
Defines the data shown to the approver. This is the context displayed in the approval portal so they can make an informed decision.
Use the visual builder or the JSON editor to structure the data. Reference workflow variables with the `<blockName.output>` syntax.
```json
{
"customerName": "<agent1.content.name>",
"proposedAction": "<router1.selectedPath>",
"confidenceScore": "<evaluator1.score>",
"generatedEmail": "<agent2.content>"
}
```
### Notification
Configures how approvers are notified when an approval is required. Supported channels include:
Include the approval URL (`<blockId.url>`) in your notification messages so approvers can reach the portal.
### Resume Input
Defines the fields approvers fill in when they respond. This data becomes available to subsequent blocks after the workflow resumes.
```json
{
"approved": {
"type": "boolean",
"description": "Approve or reject this request"
},
"comments": {
"type": "string",
"description": "Optional feedback or explanation"
}
}
```
Access resume data in downstream blocks with `<blockId.resumeInput.fieldName>`.
<Tabs items={['Approval Portal', 'REST API', 'Webhook']}>
<Tab>
### Approval Portal
Each block generates a unique portal URL (`<blockId.url>`) with a visual interface that shows all paused output data and the resume-input form fields. It is mobile-friendly and secure.
Share this URL in notifications so approvers can review and respond to requests.
</Tab>
<Tab>
### REST API
Resume workflows programmatically:
```bash
POST /api/workflows/{workflowId}/executions/{executionId}/resume/{blockId}
{
"approved": true,
"comments": "Looks good to proceed"
}
```
Build custom approval UIs or integrate with existing systems.
</Tab>
<Tab>
### Webhook
Add a webhook tool in the notification section to send approval requests to external systems. Integrate with ticketing systems such as Jira or ServiceNow.
</Tab>
</Tabs>
## Common Use Cases
**Content approval** - Review AI-generated content before publishing
```
Agent → Human in the Loop → API (Publish)
```
**Multi-stage approvals** - Chain multiple approval steps for important decisions
```
Agent → Human in the Loop (Manager) → Human in the Loop (Director) → Execute
```
**Data validation** - Review extracted data before processing
```
Agent (Extract) → Human in the Loop (Validate) → Function (Process)
```
**Quality control** - Review AI output before it is sent to customers
```
Agent (Generate) → Human in the Loop (QA) → Gmail (Send)
```
## Block Outputs
**`url`** - Unique URL of the approval portal
**`resumeInput.*`** - Every field defined in the resume input becomes available after the workflow resumes
Access them via `<blockId.resumeInput.fieldName>`.
## Example
**Paused output:**
```json
{
"title": "<agent1.content.title>",
"body": "<agent1.content.body>",
"qualityScore": "<evaluator1.score>"
}
```
**Resume input:**
```json
{
"approved": { "type": "boolean" },
"feedback": { "type": "string" }
}
```
**Downstream usage:**
```javascript
// Condition block
<approval1.resumeInput.approved> === true
```
The example below shows an approval portal as seen by an approver after the workflow has paused. Approvers can review the data and provide input as part of resuming the workflow. The approval portal can be accessed directly via its unique URL, `<blockId.url>`.
import { Card, Cards } from 'fumadocs-ui/components/card'
import { Step, Steps } from 'fumadocs-ui/components/steps'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Video } from '@/components/ui/video'
Blocks are the building components you connect together to create AI workflows. Think of them as specialized modules that each handle a specific task, from chatting with AI models to making API calls or processing data.
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Image } from '@/components/ui/image'
The Loop block is a container that executes blocks repeatedly. Iterate over collections, repeat operations a fixed number of times, or keep running while a condition holds.
<Callout type="info">
Loop blocks are container nodes that hold other blocks. The blocks inside the loop run multiple times based on your configuration.
</Callout>
<Tabs items={['For Loop', 'ForEach Loop', 'While Loop', 'Do While Loop']}>
<Tab>
**For loop (iterations)** - A numeric loop that runs a fixed number of times:
<div className="flex justify-center">
<Image
src="/static/blocks/loop-1.png"
alt="For-Schleife mit Iterationen"
width={500}
height={400}
className="my-6"
/>
</div>
Use this when you need to repeat an operation a specific number of times.
```
Example: Run 5 times
- Iteration 1
- Iteration 2
- Iteration 3
- Iteration 4
- Iteration 5
```
</Tab>
<Tab>
**ForEach loop (collection)** - A collection-based loop that iterates over each item in an array or object:
<div className="flex justify-center">
<Image
src="/static/blocks/loop-2.png"
alt="ForEach-Schleife mit Sammlung"
width={500}
height={400}
className="my-6"
/>
</div>
Use this when you need to process a collection of items.
```
Example: Process ["apple", "banana", "orange"]
- Iteration 1: Process "apple"
- Iteration 2: Process "banana"
- Iteration 3: Process "orange"
```
</Tab>
<Tab>
**While loop (condition-based)** - Runs as long as a condition evaluates to true:
<div className="flex justify-center">
<Image
src="/static/blocks/loop-3.png"
alt="While-Schleife mit Bedingung"
width={500}
height={400}
className="my-6"
/>
</div>
Use this when you need a loop that runs until a specific condition is met. The condition is checked **before** each iteration.
```
Example: While {"<variable.i>"} < 10
- Check condition → Execute if true
- Inside loop: Increment {"<variable.i>"}
- Inside loop: Variables assigns i = {"<variable.i>"} + 1
- Check condition → Execute if true
- Check condition → Exit if false
```
</Tab>
<Tab>
**Do-While loop (condition-based)** - Runs at least once, then continues as long as a condition is true:
<div className="flex justify-center">
<Image
src="/static/blocks/loop-4.png"
alt="Do-While-Schleife mit Bedingung"
width={500}
height={400}
className="my-6"
/>
</div>
Use this when you need to run an operation at least once and then keep looping until a condition is met. The condition is checked **after** each iteration.
```
Example: Do-while {"<variable.i>"} < 10
- Execute blocks
- Inside loop: Increment {"<variable.i>"}
- Inside loop: Variables assigns i = {"<variable.i>"} + 1
- Check condition → Continue if true
- Check condition → Exit if false
```
</Tab>
</Tabs>
## How to Use Loops
### Creating a Loop
1. Drag a Loop block from the toolbar onto your canvas
2. Configure the loop type and parameters
3. Drag other blocks inside the loop container
4. Connect the blocks as needed
### Accessing Results
After a loop completes, you can access aggregated results:
- **loop.results**: Array of results from all loop iterations
## Example Use Cases
**Processing API results** - A ForEach loop processes customer records from an API
```javascript
// Example: ForEach loop over API results
const customers = await api.getCustomers();
loop.forEach(customers, (customer) => {
// Process each customer
if (customer.status === 'active') {
sendEmail(customer.email, 'Special offer');
}
});
```
**Iterative content generation** - A For loop generates multiple content variations
```javascript
// Example: For loop for content generation
const variations = [];
loop.for(5, (i) => {
// Generate 5 different variations
const content = ai.generateContent({
prompt: `Variation ${i+1} for a product description`,
temperature: 0.7 + (i * 0.1)
});
variations.push(content);
});
```
**Counter with a While loop** - A While loop processes items with a counter
```javascript
// Example: While loop with a counter
let counter = 0;
let processedItems = 0;
loop.while(() => counter < items.length, () => {
if (items[counter].isValid) {
processItem(items[counter]);
processedItems++;
}
counter++;
});
console.log(`${processedItems} valid items processed`);
```
## Advanced Features
### Limitations
<Callout type="warning">
Container blocks (Loops and Parallels) cannot be nested inside each other. This means:
- You cannot place a Loop block inside another Loop block
- You cannot place a Parallel block inside a Loop block
- You cannot place any container block inside another container block
If you need multi-dimensional iteration, consider restructuring your workflow to use sequential loops or to process data in stages.
</Callout>
<Callout type="info">
Loops run sequentially, not in parallel. If you need concurrent execution, use the Parallel block instead.
</Callout>
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Image } from '@/components/ui/image'
The Parallel block is a container that runs multiple instances at the same time so workflows finish faster. Process items simultaneously instead of sequentially.
<Callout type="info">
Parallel blocks are container nodes that execute their contents multiple times concurrently, unlike loops, which run sequentially.
</Callout>
## Configuration Options
### Parallel Type
Choose between two types of parallel execution:
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Image } from '@/components/ui/image'
The Response block formats and sends structured HTTP responses back to API callers. Use it to return workflow results with the appropriate status codes and headers.
<div className="flex justify-center">
<Image
src="/static/blocks/response.png"
alt="Konfiguration des Antwort-Blocks"
width={500}
height={400}
className="my-6"
/>
</div>
<Callout type="info">
Response-Blöcke sind terminale Blöcke - sie beenden die Workflow-Ausführung und können nicht mit anderen Blöcken verbunden werden.
</Callout>
## Konfigurationsoptionen
### Antwortdaten
Die Antwortdaten sind der Hauptinhalt, der an den API-Aufrufer zurückgesendet wird. Diese sollten als JSON formatiert sein und können Folgendes enthalten:
- Statische Werte
- Dynamische Verweise auf Workflow-Variablen mit der `<variable.name>` Syntax
- Verschachtelte Objekte und Arrays
- Jede gültige JSON-Struktur
### Statuscode
Legen Sie den HTTP-Statuscode für die Antwort fest (standardmäßig 200):
Antwortblöcke sind endgültig - sie beenden die Workflow-Ausführung und senden die HTTP-Antwort an den API-Aufrufer. Es stehen keine Ausgaben für nachgelagerte Blöcke zur Verfügung.
## Variablenreferenzen
Verwenden Sie die `<variable.name>` Syntax, um Workflow-Variablen dynamisch in Ihre Antwort einzufügen:
```json
{
"user": {
"id": "<variable.userId>",
"name": "<variable.userName>",
"email": "<variable.userEmail>"
},
"query": "<variable.searchQuery>",
"results": "<variable.searchResults>",
"totalFound": "<variable.resultCount>",
"processingTime": "<variable.executionTime>ms"
}
```
<Callout type="warning">
Variablennamen sind Groß- und Kleinschreibung sensitiv und müssen exakt mit den in Ihrem Workflow verfügbaren Variablen übereinstimmen.
</Callout>
## Best Practices
- **Verwenden Sie aussagekräftige Statuscodes**: Wählen Sie passende HTTP-Statuscodes, die das Ergebnis des Workflows genau widerspiegeln
- **Strukturieren Sie Ihre Antworten einheitlich**: Behalten Sie eine konsistente JSON-Struktur über alle Ihre API-Endpunkte bei, um eine bessere Entwicklererfahrung zu gewährleisten
- **Fügen Sie relevante Metadaten hinzu**: Fügen Sie Zeitstempel und Versionsinformationen hinzu, um bei der Fehlerbehebung und Überwachung zu helfen
- **Behandeln Sie Fehler elegant**: Verwenden Sie bedingte Logik in Ihrem Workflow, um angemessene Fehlerantworten mit aussagekräftigen Meldungen zu setzen
- **Validieren Sie Variablenreferenzen**: Stellen Sie sicher, dass alle referenzierten Variablen existieren und die erwarteten Datentypen enthalten, bevor der Antwortblock ausgeführt wird
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Image } from '@/components/ui/image'
The Router block uses AI to route workflows intelligently based on content analysis. Unlike Condition blocks, which follow simple rules, routers understand context and intent.
<div className="flex justify-center">
<Image
src="/static/blocks/router.png"
alt="Router-Block mit mehreren Pfaden"
width={500}
height={400}
className="my-6"
/>
</div>
## Router vs. Condition
**Use a Router when:**
- You need AI-powered content analysis
- You are working with unstructured or varying content
- You need intent-based routing (e.g., "route support tickets to departments")
- **Provide clear target descriptions**: Help the router understand when to pick each destination with specific, detailed descriptions
- **Use specific routing criteria**: Define clear conditions and examples for each path to improve accuracy
- **Implement fallback paths**: Connect a default destination for cases where no specific path fits
- **Test with varied inputs**: Make sure the router handles different input types, edge cases, and unexpected content
- **Monitor routing performance**: Review routing decisions regularly and refine criteria based on actual usage patterns
- **Choose appropriate models**: Use models with strong reasoning capabilities for complex routing decisions