mirror of https://github.com/simstudioai/sim.git
synced 2026-01-09 23:17:59 -05:00
feat(copilot-v1): Copilot v1 (#662)
* Fix docs agent
* Doc agent fixes
* Refactor copilot
* Lint
* Update yaml editor
* Lint
* Fix block tool
* Lint
* Get block metadata tool
* Lint
* Yaml changes
* Lint
* Fixes?
* Lint
* Better yaml language
* Lint
* UPdate
* Lint
* Fix condition blocks
* lint
* Fix start block
* Fix starter block stuff
* Lint
* Fix yaml ui
* Lint
* get yaml tool
* Lint
* hi
* Lint
* Agnet
* Copilot UI
* Lint
* Better workflow builder
* Lint
* REHYDRATION
* Lint
* Auto layout
* Lint
* Fixes
* Lint
* Chatbar sizing
* Lint
* Initial chat fixes
* Lint
* Dropdown overflow
* Lint
* UI text
* Lint
* Sample question pills
* Lint
* Ui button
* Fix dropdown appearance
* Lint
* Modal fixes
* UI Updates
* Lint
* Initial ask vs agent mode
* Lint
* Ask vs agent
* Lint
* Ask udpate
* Ui fixes
* Lint
* User message width
* Chat leak fix
* Lint
* Agent ui
* Checkpointing
* Lint
* Checkpoints
* Lint
* Tweaks
* Sample questions
* Lint
* Modal full screen mode
* Lint
* Prompt updates
* Cleaning
* Lint
* Prompt update
* Streaming v1
* Lint
* Tool call inline
* Lint
* Checkpoint
* Lint
* Fix lint
* Sizing
* Lint
* Copilot ui tool call fixes
* Remove output from tool call ui
* Updates
* Lint
* Checkpoitn
* Loading icon
* Lint
* Pulse
* Lint
* Modal fixes
* Sidebar padding
* Checkpoint
* checkpoint
* feat(platform): new UI and templates (#639) (#693)
* improvement: control bar
* improvement: debug flow
* improvement: control bar hovers and skeleton loading
* improvement: completed control bar
* improvement: panel tab selector complete
* refactor: deleted notifications and history dropdown
* improvement: chat UI complete
* fix: tab change on control bar run
* improvement: finshed console (audio display not working)
* fix: text wrapping in console content
* improvement: audio UI
* improvement: image display
* feat: add input to console
* improvement: code input and showing input on errors
* feat: download chat and console
* improvement: expandable panel and console visibility
* improvement: empty state UI
* improvement: finished variables
* fix: image in console entry
* improvement: sidebar and templates ui
* feat: uploading and fetching templates
* improvement: sidebar and control bar
* improvement: templates
* feat: templates done
* fix(sockets): remove package-lock
* fix: sidebar scroll going over sidebar height (#709)
* Checkpoint
* Fix build error
* Checkpoitn
* Docs updates
* Checkpoint
* Streaming vs non streaming
* Clean up yaml save
* Fix revert checkpoitn yaml
* Doc fixes
* Small docs fix
* Clean up old yaml docs
* Doc updates
* Hide copilot
* Revert width
* Db migration fixes
* Add snapshot
* Remove space from mdx
* Add spaces
* Lint
* Address greptile comments
* lint fix
* Hide copilot

---------

Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: Waleed Latif <walif6@gmail.com>
Co-authored-by: Emir Karabeg <78010029+emir-karabeg@users.noreply.github.com>
Co-authored-by: Siddharth Sim <sidstudio@SiddharthsMBP2.attlocal.net>
parent 31d909bb82
commit 5158a00b54
@@ -172,4 +172,4 @@ After a loop completes, you can access aggregated results:

- **Set reasonable limits**: Keep iteration counts reasonable to avoid long execution times
- **Use ForEach for collections**: When processing arrays or objects, use ForEach instead of For loops
- **Handle errors gracefully**: Consider adding error handling inside loops for robust workflows
@@ -207,4 +207,4 @@ Understanding when to use each:

- **Independent operations only**: Ensure operations don't depend on each other
- **Handle rate limits**: Add delays or throttling for API-heavy workflows
- **Error handling**: Each instance should handle its own errors gracefully
@@ -182,4 +182,5 @@ headers:

- **Structure your responses consistently**: Maintain a consistent JSON structure across all your API endpoints for better developer experience
- **Include relevant metadata**: Add timestamps and version information to help with debugging and monitoring
- **Handle errors gracefully**: Use conditional logic in your workflow to set appropriate error responses with descriptive messages
- **Validate variable references**: Ensure all referenced variables exist and contain the expected data types before the Response block executes
@@ -256,4 +256,4 @@ return {

- **Document dependencies**: Clearly document which workflows depend on others and maintain dependency maps
- **Test independently**: Ensure child workflows can be tested and validated independently from parent workflows
- **Monitor performance**: Be aware that nested workflows can impact overall execution time and resource usage
- **Use semantic naming**: Give workflows descriptive names that clearly indicate their purpose and functionality
@@ -14,6 +14,7 @@

    "execution",
    "---Advanced---",
    "./variables/index",
    "yaml",
    "---SDKs---",
    "./sdks/python",
    "./sdks/typescript"
@@ -64,3 +64,14 @@ Tools typically return structured data that can be processed by subsequent blocks

- Status information

Refer to each tool's specific documentation to understand its exact output format.

## YAML Configuration

For detailed YAML workflow configuration and syntax, see the [YAML Workflow Reference](/yaml) documentation. This includes comprehensive guides for:

- **Block Reference Syntax**: How to connect and reference data between blocks
- **Tool Configuration**: Using tools in both standalone blocks and agent configurations
- **Environment Variables**: Secure handling of API keys and credentials
- **Complete Examples**: Real-world workflow patterns and configurations

For specific tool parameters and configuration options, refer to each tool's individual documentation page.

238 apps/docs/content/docs/yaml/block-reference.mdx Normal file
@@ -0,0 +1,238 @@
---
title: Block Reference Syntax
description: How to reference data between blocks in YAML workflows
---

import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'

Block references are the foundation of data flow in Sim Studio workflows. Understanding how to correctly reference outputs from one block as inputs to another is essential for building functional workflows.

## Basic Reference Rules

### 1. Use Block Names, Not Block IDs

<Tabs items={['Correct', 'Incorrect']}>
  <Tab>
    ```yaml
    # Block definition
    email-sender:
      type: agent
      name: "Email Generator"
      # ... configuration

    # Reference the block
    next-block:
      inputs:
        userPrompt: "Process this: <emailgenerator.content>"
    ```
  </Tab>
  <Tab>
    ```yaml
    # Block definition
    email-sender:
      type: agent
      name: "Email Generator"
      # ... configuration

    # ❌ Don't reference by block ID
    next-block:
      inputs:
        userPrompt: "Process this: <email-sender.content>"
    ```
  </Tab>
</Tabs>

### 2. Convert Names to Reference Format

To create a block reference:

1. **Take the block name**: "Email Generator"
2. **Convert to lowercase**: "email generator"
3. **Remove spaces and special characters**: "emailgenerator"
4. **Add property**: `<emailgenerator.content>`
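
The conversion is mechanical enough to express in a couple of lines. A minimal sketch, assuming only the lowercase-and-strip rule above; the helper name `toReferenceName` is illustrative, not part of Sim Studio's API:

```javascript
// Hypothetical helper illustrating the naming rule above —
// not part of Sim Studio's API.
function toReferenceName(blockName) {
  // Lowercase, then drop spaces and special characters
  return blockName.toLowerCase().replace(/[^a-z0-9]/g, '')
}

toReferenceName('Email Generator')  // => 'emailgenerator'
toReferenceName('Data Processor 2') // => 'dataprocessor2'
// Reference format: `<${toReferenceName('Email Generator')}.content>`
```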

### 3. Use Correct Properties

Different block types expose different properties:

- **Agent blocks**: `.content` (the AI response)
- **Function blocks**: `.output` (the return value)
- **API blocks**: `.output` (the response data)
- **Tool blocks**: `.output` (the tool result)

## Reference Examples

### Common Block References

```yaml
# Agent block outputs
<agentname.content>     # Primary AI response
<agentname.tokens>      # Token usage information
<agentname.cost>        # Estimated cost
<agentname.tool_calls>  # Tool execution details

# Function block outputs
<functionname.output>   # Function return value
<functionname.error>    # Error information (if any)

# API block outputs
<apiname.output>        # Response data
<apiname.status>        # HTTP status code
<apiname.headers>       # Response headers

# Tool block outputs
<toolname.output>       # Tool execution result
```

### Multi-Word Block Names

```yaml
# Block name: "Data Processor 2"
<dataprocessor2.output>

# Block name: "Email Validation Service"
<emailvalidationservice.output>

# Block name: "Customer Info Agent"
<customerinfoagent.content>
```

## Special Reference Cases

### Starter Block

<Callout type="warning">
  The starter block is always referenced as `<start.input>` regardless of its actual name.
</Callout>

```yaml
# Starter block definition
my-custom-start:
  type: starter
  name: "Custom Workflow Start"
  # ... configuration

# Always reference as 'start'
agent-1:
  inputs:
    userPrompt: <start.input>                  # ✅ Correct
    # userPrompt: <customworkflowstart.input>  # ❌ Wrong
```

### Loop Variables

Inside loop blocks, special variables are available:

```yaml
# Available in loop child blocks
<loop.index>        # Current iteration (0-based)
<loop.currentItem>  # Current item being processed (forEach loops)
<loop.items>        # Full collection (forEach loops)
```

### Parallel Variables

Inside parallel blocks, special variables are available:

```yaml
# Available in parallel child blocks
<parallel.index>        # Instance number (0-based)
<parallel.currentItem>  # Item for this instance
<parallel.items>        # Full collection
```

## Complex Reference Examples

### Nested Data Access

When referencing complex objects, use dot notation:

```yaml
# If an agent returns structured data
data-analyzer:
  type: agent
  name: "Data Analyzer"
  inputs:
    responseFormat: |
      {
        "schema": {
          "type": "object",
          "properties": {
            "analysis": {"type": "object"},
            "summary": {"type": "string"},
            "metrics": {"type": "object"}
          }
        }
      }

# Reference nested properties
next-step:
  inputs:
    userPrompt: |
      Summary: <dataanalyzer.analysis.summary>
      Score: <dataanalyzer.metrics.score>
      Full data: <dataanalyzer.content>
```

### Multiple References in Text

```yaml
email-composer:
  type: agent
  inputs:
    userPrompt: |
      Create an email with the following information:

      Customer: <customeragent.content>
      Order Details: <orderprocessor.output>
      Support Ticket: <ticketanalyzer.content>

      Original request: <start.input>
```

### References in Code Blocks

When using references in function blocks, they're replaced as JavaScript values:

```yaml
data-processor:
  type: function
  inputs:
    code: |
      // References are replaced with actual values
      const customerData = <customeragent.content>;
      const orderInfo = <orderprocessor.output>;
      const originalInput = <start.input>;

      // Process the data
      return {
        customer: customerData.name,
        orderId: orderInfo.id,
        processed: true
      };
```

## Reference Validation

Sim Studio validates all references when importing YAML:

### Valid References
- Block exists in the workflow
- Property is appropriate for block type
- No circular dependencies
- Proper syntax formatting

### Common Errors
- **Block not found**: Referenced block doesn't exist
- **Wrong property**: Using `.content` on a function block
- **Typos**: Misspelled block names or properties
- **Circular references**: Block references itself directly or indirectly
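
A checker along these lines can surface the first two error classes before import. This is a minimal sketch under stated assumptions: blocks are a map of block ID to `{ name, type }`, the property lists mirror the tables above, and the `start` special case and circular-dependency detection are omitted. None of these names come from Sim Studio's codebase:

```javascript
// Hypothetical reference checker — a sketch of the checks above,
// not Sim Studio's actual implementation.
const PROPERTY_RULES = {
  agent: ['content', 'tokens', 'cost', 'tool_calls'],
  function: ['output', 'error'],
  api: ['output', 'status', 'headers', 'error'],
}

function validateReference(ref, blocks) {
  // ref looks like 'emailgenerator.content'; nested paths (a.b.c)
  // are ignored here for brevity
  const [name, property] = ref.split('.')
  const block = Object.values(blocks).find(
    (b) => b.name.toLowerCase().replace(/[^a-z0-9]/g, '') === name
  )
  if (!block) return `Block not found: ${name}`
  const allowed = PROPERTY_RULES[block.type]
  if (allowed && !allowed.includes(property)) {
    return `Wrong property: .${property} is not valid on a ${block.type} block`
  }
  return null // reference passes these checks
}

// validateReference('emailgenerator.content', {
//   'email-sender': { name: 'Email Generator', type: 'agent' },
// }) // => null
```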

## Best Practices

1. **Use descriptive block names**: Makes references more readable
2. **Be consistent**: Use the same naming convention throughout
3. **Check references**: Ensure all referenced blocks exist
4. **Avoid deep nesting**: Keep reference chains manageable
5. **Document complex flows**: Add comments to explain reference relationships

218 apps/docs/content/docs/yaml/blocks/agent.mdx Normal file
@@ -0,0 +1,218 @@
---
title: Agent Block YAML Schema
description: YAML configuration reference for Agent blocks
---

## Schema Definition

```yaml
type: object
required:
  - type
  - name
properties:
  type:
    type: string
    enum: [agent]
    description: Block type identifier
  name:
    type: string
    description: Display name for this agent block
  inputs:
    type: object
    properties:
      systemPrompt:
        type: string
        description: Instructions that define the agent's role and behavior
      userPrompt:
        type: string
        description: Input content to process (can reference other blocks)
      model:
        type: string
        description: AI model identifier (e.g., gpt-4o, gemini-2.5-pro, deepseek-chat)
      temperature:
        type: number
        minimum: 0
        maximum: 2
        description: Response creativity level (varies by model)
      apiKey:
        type: string
        description: API key for the model provider (use {{ENV_VAR}} format)
      azureEndpoint:
        type: string
        description: Azure OpenAI endpoint URL (required for Azure models)
      azureApiVersion:
        type: string
        description: Azure API version (required for Azure models)
      memories:
        type: string
        description: Memory context from memory blocks
      tools:
        type: array
        description: List of external tools the agent can use
        items:
          type: object
          required: [type, title, toolId, operation, usageControl]
          properties:
            type:
              type: string
              description: Tool type identifier
            title:
              type: string
              description: Human-readable display name
            toolId:
              type: string
              description: Internal tool identifier
            operation:
              type: string
              description: Tool operation/method name
            usageControl:
              type: string
              enum: [auto, required, none]
              description: When AI can use the tool
            params:
              type: object
              description: Tool-specific configuration parameters
            isExpanded:
              type: boolean
              description: UI state
              default: false
      responseFormat:
        type: object
        description: JSON Schema to enforce structured output
    required:
      - model
      - apiKey
  connections:
    type: object
    properties:
      success:
        type: string
        description: Target block ID for successful execution
      error:
        type: string
        description: Target block ID for error handling
```

## Tool Configuration

Tools are defined as an array where each tool has this structure:

```yaml
tools:
  - type: <string>          # Tool type identifier (exa, gmail, slack, etc.)
    title: <string>         # Human-readable display name
    toolId: <string>        # Internal tool identifier
    operation: <string>     # Tool operation/method name
    usageControl: <string>  # When AI can use it (auto | required | none)
    params: <object>        # Tool-specific configuration parameters
    isExpanded: <boolean>   # UI state (optional, default: false)
```

## Connection Configuration

Connections define where the workflow goes based on execution results:

```yaml
connections:
  success: <string>  # Target block ID for successful execution
  error: <string>    # Target block ID for error handling (optional)
```

## Examples

### Basic Agent

```yaml
content-agent:
  type: agent
  name: "Content Analyzer 1"
  inputs:
    systemPrompt: "You are a helpful content analyzer. Be concise and clear."
    userPrompt: <start.input>
    model: gpt-4o
    temperature: 0.3
    apiKey: '{{OPENAI_API_KEY}}'
  connections:
    success: summary-block

summary-block:
  type: agent
  name: "Summary Generator"
  inputs:
    systemPrompt: "Create a brief summary of the analysis."
    userPrompt: "Analyze this: <contentanalyzer1.content>"
    model: gpt-4o
    apiKey: '{{OPENAI_API_KEY}}'
  connections:
    success: final-step
```

### Agent with Tools

```yaml
research-agent:
  type: agent
  name: "Research Assistant"
  inputs:
    systemPrompt: "Research the topic and provide detailed information."
    userPrompt: <start.input>
    model: gpt-4o
    apiKey: '{{OPENAI_API_KEY}}'
    tools:
      - type: exa
        title: "Web Search"
        toolId: exa_search
        operation: exa_search
        usageControl: auto
        params:
          apiKey: '{{EXA_API_KEY}}'
  connections:
    success: summary-block
```

### Structured Output

```yaml
data-extractor:
  type: agent
  name: "Extract Contact Info"
  inputs:
    systemPrompt: "Extract contact information from the text."
    userPrompt: <start.input>
    model: gpt-4o
    apiKey: '{{OPENAI_API_KEY}}'
    responseFormat: |
      {
        "name": "contact_extraction",
        "schema": {
          "type": "object",
          "properties": {
            "name": {"type": "string"},
            "email": {"type": "string"},
            "phone": {"type": "string"}
          },
          "required": ["name"]
        },
        "strict": true
      }
  connections:
    success: save-contact
```

### Azure OpenAI

```yaml
azure-agent:
  type: agent
  name: "Azure AI Assistant"
  inputs:
    systemPrompt: "You are a helpful assistant."
    userPrompt: <start.input>
    model: gpt-4o
    apiKey: '{{AZURE_OPENAI_API_KEY}}'
    azureEndpoint: '{{AZURE_OPENAI_ENDPOINT}}'
    azureApiVersion: "2024-07-01-preview"
  connections:
    success: response-block
```

179 apps/docs/content/docs/yaml/blocks/api.mdx Normal file
@@ -0,0 +1,179 @@
---
title: API Block YAML Schema
description: YAML configuration reference for API blocks
---

## Schema Definition

```yaml
type: object
required:
  - type
  - name
  - inputs
properties:
  type:
    type: string
    enum: [api]
    description: Block type identifier
  name:
    type: string
    description: Display name for this API block
  inputs:
    type: object
    required:
      - url
      - method
    properties:
      url:
        type: string
        description: The endpoint URL to send the request to
      method:
        type: string
        enum: [GET, POST, PUT, DELETE, PATCH]
        description: HTTP method for the request
        default: GET
      queryParams:
        type: array
        description: Query parameters as key-value pairs
        items:
          type: object
          properties:
            key:
              type: string
              description: Parameter name
            value:
              type: string
              description: Parameter value
      headers:
        type: array
        description: HTTP headers as key-value pairs
        items:
          type: object
          properties:
            key:
              type: string
              description: Header name
            value:
              type: string
              description: Header value
      body:
        type: string
        description: Request body for POST/PUT/PATCH methods
      timeout:
        type: number
        description: Request timeout in milliseconds
        default: 30000
        minimum: 1000
        maximum: 300000
  connections:
    type: object
    properties:
      success:
        type: string
        description: Target block ID for successful requests
      error:
        type: string
        description: Target block ID for error handling
```

## Connection Configuration

Connections define where the workflow goes based on request results:

```yaml
connections:
  success: <string>  # Target block ID for successful requests
  error: <string>    # Target block ID for error handling (optional)
```

## Examples

### Simple GET Request

```yaml
user-api:
  type: api
  name: "Fetch User Data"
  inputs:
    url: "https://api.example.com/users/123"
    method: GET
    headers:
      - key: "Authorization"
        value: "Bearer {{API_TOKEN}}"
      - key: "Content-Type"
        value: "application/json"
  connections:
    success: process-user-data
    error: handle-api-error
```

### POST Request with Body

```yaml
create-ticket:
  type: api
  name: "Create Support Ticket"
  inputs:
    url: "https://api.support.com/tickets"
    method: POST
    headers:
      - key: "Authorization"
        value: "Bearer {{SUPPORT_API_KEY}}"
      - key: "Content-Type"
        value: "application/json"
    body: |
      {
        "title": "<agent.title>",
        "description": "<agent.description>",
        "priority": "high"
      }
  connections:
    success: ticket-created
    error: ticket-error
```

### Dynamic URL with Query Parameters

```yaml
search-api:
  type: api
  name: "Search Products"
  inputs:
    url: "https://api.store.com/products"
    method: GET
    queryParams:
      - key: "q"
        value: <start.searchTerm>
      - key: "limit"
        value: "10"
      - key: "category"
        value: <filter.category>
    headers:
      - key: "Authorization"
        value: "Bearer {{STORE_API_KEY}}"
  connections:
    success: display-results
```

## Output References

After an API block executes, you can reference its outputs:

```yaml
# In subsequent blocks
next-block:
  inputs:
    data: <api-block-name.output>      # Response data
    status: <api-block-name.status>    # HTTP status code
    headers: <api-block-name.headers>  # Response headers
    error: <api-block-name.error>      # Error details (if any)
```

## Best Practices

- Use environment variables for API keys: `{{API_KEY_NAME}}`
- Include error handling with error connections
- Set appropriate timeouts for your use case
- Validate response status codes in subsequent blocks (see the sketch below)
- Use meaningful block names for easier reference
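
As a sketch of the last two practices, a condition block placed after the "Fetch User Data" example above could branch on the status output before the data is used; the `status-gate` block ID is invented, while the two targets come from that example:

```yaml
# Hypothetical status gate for the "Fetch User Data" example above
status-gate:
  type: condition
  name: "Status Gate"
  inputs:
    conditions:
      if: <fetchuserdata.status> === 200
      else: true
  connections:
    conditions:
      if: process-user-data
      else: handle-api-error
```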

165 apps/docs/content/docs/yaml/blocks/condition.mdx Normal file
@@ -0,0 +1,165 @@
---
title: Condition Block YAML Schema
description: YAML configuration reference for Condition blocks
---

## Schema Definition

```yaml
type: object
required:
  - type
  - name
  - inputs
  - connections
properties:
  type:
    type: string
    enum: [condition]
    description: Block type identifier
  name:
    type: string
    description: Display name for this condition block
  inputs:
    type: object
    required:
      - conditions
    properties:
      conditions:
        type: object
        description: Conditional expressions and their logic
        properties:
          if:
            type: string
            description: Primary condition expression (boolean)
          else-if:
            type: string
            description: Secondary condition expression (optional)
          else-if-2:
            type: string
            description: Third condition expression (optional)
          else-if-3:
            type: string
            description: Fourth condition expression (optional)
          # Additional else-if-N conditions can be added as needed
          else:
            type: boolean
            description: Default fallback condition (optional)
            default: true
  connections:
    type: object
    required:
      - conditions
    properties:
      conditions:
        type: object
        description: Target blocks for each condition outcome
        properties:
          if:
            type: string
            description: Target block ID when 'if' condition is true
          else-if:
            type: string
            description: Target block ID when 'else-if' condition is true
          else-if-2:
            type: string
            description: Target block ID when 'else-if-2' condition is true
          else-if-3:
            type: string
            description: Target block ID when 'else-if-3' condition is true
          # Additional else-if-N connections can be added as needed
          else:
            type: string
            description: Target block ID when no conditions match
```

## Connection Configuration

Unlike other blocks, conditions use branching connections based on condition outcomes:

```yaml
connections:
  conditions:
    if: <string>         # Target block ID when primary condition is true
    else-if: <string>    # Target block ID when secondary condition is true (optional)
    else-if-2: <string>  # Target block ID when third condition is true (optional)
    else-if-3: <string>  # Target block ID when fourth condition is true (optional)
    # Additional else-if-N connections can be added as needed
    else: <string>       # Target block ID when no conditions match (optional)
```

## Examples

### Simple If-Else

```yaml
status-check:
  type: condition
  name: "Status Check"
  inputs:
    conditions:
      if: <start.status> === "approved"
      else: true
  connections:
    conditions:
      if: send-approval-email
      else: send-rejection-email
```

### Multiple Conditions

```yaml
user-routing:
  type: condition
  name: "User Type Router"
  inputs:
    conditions:
      if: <start.user_type> === "admin"
      else-if: <start.user_type> === "premium"
      else-if-2: <start.user_type> === "basic"
      else: true
  connections:
    conditions:
      if: admin-dashboard
      else-if: premium-features
      else-if-2: basic-features
      else: registration-flow
```

### Numeric Comparisons

```yaml
score-evaluation:
  type: condition
  name: "Score Evaluation"
  inputs:
    conditions:
      if: <agent.score> >= 90
      else-if: <agent.score> >= 70
      else-if-2: <agent.score> >= 50
      else: true
  connections:
    conditions:
      if: excellent-response
      else-if: good-response
      else-if-2: average-response
      else: poor-response
```

### Complex Logic

```yaml
eligibility-check:
  type: condition
  name: "Eligibility Check"
  inputs:
    conditions:
      if: <start.age> >= 18 && <start.verified> === true
      else-if: <start.age> >= 16 && <start.parent_consent> === true
      else: true
  connections:
    conditions:
      if: full-access
      else-if: limited-access
      else: access-denied
```

255 apps/docs/content/docs/yaml/blocks/evaluator.mdx Normal file
@@ -0,0 +1,255 @@
---
title: Evaluator Block YAML Schema
description: YAML configuration reference for Evaluator blocks
---

## Schema Definition

```yaml
type: object
required:
  - type
  - name
  - inputs
properties:
  type:
    type: string
    enum: [evaluator]
    description: Block type identifier
  name:
    type: string
    description: Display name for this evaluator block
  inputs:
    type: object
    required:
      - content
      - metrics
      - model
      - apiKey
    properties:
      content:
        type: string
        description: Content to evaluate (can reference other blocks)
      metrics:
        type: array
        description: Evaluation criteria and scoring ranges
        items:
          type: object
          properties:
            name:
              type: string
              description: Metric identifier
            description:
              type: string
              description: Detailed explanation of what the metric measures
            range:
              type: object
              properties:
                min:
                  type: number
                  description: Minimum score value
                max:
                  type: number
                  description: Maximum score value
              required: [min, max]
              description: Scoring range with numeric bounds
      model:
        type: string
        description: AI model identifier (e.g., gpt-4o, claude-3-5-sonnet-20241022)
      apiKey:
        type: string
        description: API key for the model provider (use {{ENV_VAR}} format)
      temperature:
        type: number
        minimum: 0
        maximum: 2
        description: Model temperature for evaluation
        default: 0.3
      azureEndpoint:
        type: string
        description: Azure OpenAI endpoint URL (required for Azure models)
      azureApiVersion:
        type: string
        description: Azure API version (required for Azure models)
  connections:
    type: object
    properties:
      success:
        type: string
        description: Target block ID for successful evaluation
      error:
        type: string
        description: Target block ID for error handling
```

## Connection Configuration

Connections define where the workflow goes based on evaluation results:

```yaml
connections:
  success: <string>  # Target block ID for successful evaluation
  error: <string>    # Target block ID for error handling (optional)
```

## Examples

### Content Quality Evaluation

```yaml
content-evaluator:
  type: evaluator
  name: "Content Quality Evaluator"
  inputs:
    content: <content-generator.content>
    metrics:
      - name: "accuracy"
        description: "How factually accurate is the content?"
        range:
          min: 1
          max: 5
      - name: "clarity"
        description: "How clear and understandable is the content?"
        range:
          min: 1
          max: 5
      - name: "relevance"
        description: "How relevant is the content to the original query?"
        range:
          min: 1
          max: 5
      - name: "completeness"
        description: "How complete and comprehensive is the content?"
        range:
          min: 1
          max: 5
    model: gpt-4o
    temperature: 0.2
    apiKey: '{{OPENAI_API_KEY}}'
  connections:
    success: quality-report
    error: evaluation-error
```

### Customer Response Evaluation

```yaml
response-evaluator:
  type: evaluator
  name: "Customer Response Evaluator"
  inputs:
    content: <customer-agent.content>
    metrics:
      - name: "helpfulness"
        description: "How helpful is the response in addressing the customer's needs?"
        range:
          min: 1
          max: 10
      - name: "tone"
        description: "How appropriate and professional is the tone?"
        range:
          min: 1
          max: 10
      - name: "completeness"
        description: "Does the response fully address all aspects of the inquiry?"
        range:
          min: 1
          max: 10
    model: claude-3-5-sonnet-20241022
    apiKey: '{{ANTHROPIC_API_KEY}}'
  connections:
    success: response-processor
```

### A/B Testing Evaluation

```yaml
ab-test-evaluator:
  type: evaluator
  name: "A/B Test Evaluator"
  inputs:
    content: |
      Version A: <version-a.content>
      Version B: <version-b.content>

      Compare these two versions for the following criteria.
    metrics:
      - name: "engagement"
        description: "Which version is more likely to engage users?"
        range: "A, B, or Tie"
      - name: "clarity"
        description: "Which version communicates more clearly?"
        range: "A, B, or Tie"
      - name: "persuasiveness"
        description: "Which version is more persuasive?"
        range: "A, B, or Tie"
    model: gpt-4o
    temperature: 0.1
    apiKey: '{{OPENAI_API_KEY}}'
  connections:
    success: test-results
```

### Multi-Dimensional Content Scoring

```yaml
comprehensive-evaluator:
  type: evaluator
  name: "Comprehensive Content Evaluator"
  inputs:
    content: <ai-writer.content>
    metrics:
      - name: "technical_accuracy"
        description: "How technically accurate and correct is the information?"
        range:
          min: 0
          max: 100
      - name: "readability"
        description: "How easy is the content to read and understand?"
        range:
          min: 0
          max: 100
      - name: "seo_optimization"
        description: "How well optimized is the content for search engines?"
        range:
          min: 0
          max: 100
      - name: "user_engagement"
        description: "How likely is this content to engage and retain readers?"
        range:
          min: 0
          max: 100
      - name: "brand_alignment"
        description: "How well does the content align with brand voice and values?"
        range:
          min: 0
          max: 100
    model: gpt-4o
    temperature: 0.3
    apiKey: '{{OPENAI_API_KEY}}'
  connections:
    success: content-optimization
```

## Output References

After an evaluator block executes, you can reference its outputs:

```yaml
# In subsequent blocks
next-block:
  inputs:
    evaluation: <evaluator-name.content>  # Evaluation summary
    scores: <evaluator-name.scores>       # Individual metric scores
    overall: <evaluator-name.overall>     # Overall assessment
```
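
One way to act on these outputs is a downstream condition block that gates on a metric score. This is a sketch, not a documented pattern: the block IDs are invented, and the nested `.scores.accuracy` access assumes the scores object is keyed by metric name:

```yaml
# Hypothetical quality gate on the "Content Quality Evaluator" above —
# block IDs and the `.scores.accuracy` shape are assumptions.
quality-gate:
  type: condition
  name: "Quality Gate"
  inputs:
    conditions:
      if: <contentqualityevaluator.scores.accuracy> >= 4
      else: true
  connections:
    conditions:
      if: publish-content
      else: revise-content
```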

## Best Practices

- Define clear, specific evaluation criteria
- Use appropriate scoring ranges for your use case
- Choose models with strong reasoning capabilities
- Use lower temperature for consistent scoring
- Include detailed metric descriptions
- Test with diverse content types
- Consider multiple evaluators for complex assessments

162 apps/docs/content/docs/yaml/blocks/function.mdx Normal file
@@ -0,0 +1,162 @@
---
title: Function Block YAML Schema
description: YAML configuration reference for Function blocks
---

## Schema Definition

```yaml
type: object
required:
  - type
  - name
  - inputs
properties:
  type:
    type: string
    enum: [function]
    description: Block type identifier
  name:
    type: string
    description: Display name for this function block
  inputs:
    type: object
    required:
      - code
    properties:
      code:
        type: string
        description: JavaScript/TypeScript code to execute (multiline string)
      timeout:
        type: number
        description: Maximum execution time in milliseconds
        default: 30000
        minimum: 1000
        maximum: 300000
  connections:
    type: object
    properties:
      success:
        type: string
        description: Target block ID for successful execution
      error:
        type: string
        description: Target block ID for error handling
```

## Connection Configuration

Connections define where the workflow goes based on execution results:

```yaml
connections:
  success: <string>  # Target block ID for successful execution
  error: <string>    # Target block ID for error handling (optional)
```

## Examples

### Simple Validation

```yaml
input-validator:
  type: function
  name: "Input Validator"
  inputs:
    code: |-
      // Check if input number is greater than 5
      const inputValue = parseInt(<start.input>, 10);

      if (inputValue > 5) {
        return {
          valid: true,
          value: inputValue,
          message: "Input is valid"
        };
      } else {
        return {
          valid: false,
          value: inputValue,
          message: "Input must be greater than 5"
        };
      }
  connections:
    success: next-step
    error: handle-error
```

### Data Processing

```yaml
data-processor:
  type: function
  name: "Data Transformer"
  inputs:
    code: |
      // Transform the input data
      const rawData = <start.input>;

      // Process and clean the data
      const processed = rawData
        .filter(item => item.status === 'active')
        .map(item => ({
          id: item.id,
          name: item.name.trim(),
          date: new Date(item.created).toISOString()
        }));

      return processed;
  connections:
    success: api-save
    error: error-handler
```

### API Integration

```yaml
api-formatter:
  type: function
  name: "Format API Request"
  inputs:
    code: |
      // Prepare data for API submission
      const userData = <agent.content>;

      const apiPayload = {
        timestamp: new Date().toISOString(),
        data: userData,
        source: "workflow-automation",
        version: "1.0"
      };

      return apiPayload;
  connections:
    success: api-call
```

### Calculations

```yaml
calculator:
  type: function
  name: "Calculate Results"
  inputs:
    code: |
      // Perform calculations on input data
      const numbers = <start.input>;

      const sum = numbers.reduce((a, b) => a + b, 0);
      const average = sum / numbers.length;
      const max = Math.max(...numbers);
      const min = Math.min(...numbers);

      return {
        sum,
        average,
        max,
        min,
        count: numbers.length
      };
  connections:
    success: results-display
```

151 apps/docs/content/docs/yaml/blocks/index.mdx Normal file
@@ -0,0 +1,151 @@
---
title: Block Schemas
description: Complete YAML schema reference for all Sim Studio blocks
---

import { Card, Cards } from "fumadocs-ui/components/card";

This section contains the complete YAML schema definitions for all available block types in Sim Studio. Each block type has specific configuration requirements and output formats.

## Core Blocks

These are the essential building blocks for creating workflows:

<Cards>
  <Card title="Starter Block" href="/yaml/blocks/starter">
    Workflow entry point supporting manual triggers, webhooks, and schedules
  </Card>
  <Card title="Agent Block" href="/yaml/blocks/agent">
    AI-powered processing with LLM integration and tool support
  </Card>
  <Card title="Function Block" href="/yaml/blocks/function">
    Custom JavaScript/TypeScript code execution environment
  </Card>
  <Card title="Response Block" href="/yaml/blocks/response">
    Format and return final workflow results
  </Card>
</Cards>

## Logic & Control Flow

Blocks for implementing conditional logic and control flow:

<Cards>
  <Card title="Condition Block" href="/yaml/blocks/condition">
    Conditional branching based on boolean expressions
  </Card>
  <Card title="Router Block" href="/yaml/blocks/router">
    AI-powered intelligent routing to multiple paths
  </Card>
  <Card title="Loop Block" href="/yaml/blocks/loop">
    Iterative processing with for and forEach loops
  </Card>
  <Card title="Parallel Block" href="/yaml/blocks/parallel">
    Concurrent execution across multiple instances
  </Card>
</Cards>

## Integration Blocks

Blocks for connecting to external services and systems:

<Cards>
  <Card title="API Block" href="/yaml/blocks/api">
    HTTP requests to external REST APIs
  </Card>
  <Card title="Webhook Block" href="/yaml/blocks/webhook">
    Webhook triggers for external integrations
  </Card>
</Cards>

## Advanced Blocks

Specialized blocks for complex workflow patterns:

<Cards>
  <Card title="Evaluator Block" href="/yaml/blocks/evaluator">
    Validate outputs against defined criteria and metrics
  </Card>
  <Card title="Workflow Block" href="/yaml/blocks/workflow">
    Execute other workflows as reusable components
  </Card>
</Cards>

## Common Schema Elements

All blocks share these common elements:

### Basic Structure

```yaml
block-id:
  type: <block-type>
  name: <display-name>
  inputs:
    # Block-specific configuration
  connections:
    # Connection definitions
```

### Connection Types

- **success**: Target block for successful execution
- **error**: Target block for error handling (optional)
- **conditions**: Multiple paths for conditional blocks
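
A condensed sketch of how these connection types appear in block definitions, combining the patterns from the individual schema pages (the block IDs are placeholders):

```yaml
# success / error — most block types
fetch-data:
  type: api
  connections:
    success: process-data
    error: handle-error

# conditions — condition blocks branch instead
check-result:
  type: condition
  connections:
    conditions:
      if: approved-path
      else: rejected-path
```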

### Environment Variables

Use double curly braces for environment variables:

```yaml
inputs:
  apiKey: '{{API_KEY_NAME}}'
  endpoint: '{{SERVICE_ENDPOINT}}'
```

### Block References

Reference other block outputs using the block name in lowercase:

```yaml
inputs:
  userPrompt: <blockname.content>
  data: <functionblock.output>
  originalInput: <start.input>
```

## Validation Rules

All YAML blocks are validated against their schemas:

1. **Required fields**: Must be present
2. **Type validation**: Values must match expected types
3. **Enum validation**: String values must be from allowed lists
4. **Range validation**: Numbers must be within specified ranges
5. **Pattern validation**: Strings must match regex patterns (where applicable)
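
For example, a block like the following hypothetical snippet would be rejected on several of these rules; the comments note which check fails (the timeout bounds are taken from the API block schema):

```yaml
bad-block:
  type: agnet        # enum validation: "agnet" is not an allowed block type
  inputs:            # required fields: `name` is missing from this block
    timeout: 500     # range validation: below the API block's 1000 ms minimum
    method: FETCH    # enum validation: not one of GET/POST/PUT/DELETE/PATCH
```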

## Quick Reference

### Block Types and Properties

| Block Type | Primary Output | Common Use Cases |
|------------|----------------|------------------|
| starter | `.input` | Workflow entry point |
| agent | `.content` | AI processing, text generation |
| function | `.output` | Data transformation, calculations |
| api | `.output` | External service integration |
| condition | N/A (branching) | Conditional logic |
| router | N/A (branching) | Intelligent routing |
| response | N/A (terminal) | Final output formatting |
| loop | `.results` | Iterative processing |
| parallel | `.results` | Concurrent processing |
| webhook | `.payload` | External triggers |
| evaluator | `.score` | Output validation, quality assessment |
| workflow | `.output` | Sub-workflow execution, modularity |

### Required vs Optional

- **Always required**: `type`, `name`
- **Usually required**: `inputs`, `connections`
- **Context dependent**: Specific input fields vary by block type
- **Always optional**: `error` connections, UI-specific fields

305 apps/docs/content/docs/yaml/blocks/loop.mdx Normal file
@@ -0,0 +1,305 @@
---
title: Loop Block YAML Schema
description: YAML configuration reference for Loop blocks
---

## Schema Definition

```yaml
type: object
required:
  - type
  - name
  - inputs
  - connections
properties:
  type:
    type: string
    enum: [loop]
    description: Block type identifier
  name:
    type: string
    description: Display name for this loop block
  inputs:
    type: object
    required:
      - loopType
    properties:
      loopType:
        type: string
        enum: [for, forEach]
        description: Type of loop to execute
      iterations:
        type: number
        description: Number of iterations (for 'for' loops)
        minimum: 1
        maximum: 1000
      collection:
        type: string
        description: Collection to iterate over (for 'forEach' loops)
      maxConcurrency:
        type: number
        description: Maximum concurrent executions
        default: 1
        minimum: 1
        maximum: 10
  connections:
    type: object
    required:
      - loop
    properties:
      loop:
        type: object
        required:
          - start
        properties:
          start:
            type: string
            description: Target block ID to execute inside the loop
          end:
            type: string
            description: Target block ID for loop completion (optional)
      success:
        type: string
        description: Target block ID after loop completion (alternative format)
      error:
        type: string
        description: Target block ID for error handling
```

## Connection Configuration

Loop blocks use a special connection format with a `loop` section:

```yaml
connections:
  loop:
    start: <string>  # Target block ID to execute inside the loop
    end: <string>    # Target block ID after loop completion (optional)
    error: <string>  # Target block ID for error handling (optional)
```

Alternative format (legacy):

```yaml
connections:
  success: <string>  # Target block ID after loop completion
  error: <string>    # Target block ID for error handling (optional)
```

## Child Block Configuration

Blocks inside a loop must have their `parentId` set to the loop block ID:

```yaml
loop-1:
  type: loop
  name: "Process Items"
  inputs:
    loopType: forEach
    collection: <start.items>
  connections:
    loop:
      start: process-item
      end: final-results

# Child block inside the loop
process-item:
  type: agent
  name: "Process Item"
  parentId: loop-1  # References the loop block
  inputs:
    systemPrompt: "Process this item"
    userPrompt: <loop.currentItem>
    model: gpt-4o
    apiKey: '{{OPENAI_API_KEY}}'
```

## Examples

### For Loop (Fixed Iterations)

```yaml
countdown-loop:
  type: loop
  name: "Countdown Loop"
  inputs:
    loopType: for
    iterations: 5
  connections:
    loop:
      start: countdown-agent
      end: countdown-complete

countdown-agent:
  type: agent
  name: "Countdown Agent"
  parentId: countdown-loop
  inputs:
    systemPrompt: "Generate a countdown message"
    userPrompt: "Count down from 5. Current number: <loop.index>"
    model: gpt-4o
    apiKey: '{{OPENAI_API_KEY}}'
```

### ForEach Loop (Collection Processing)

```yaml
email-processor-loop:
  type: loop
  name: "Email Processor Loop"
  inputs:
    loopType: forEach
    collection: <start.emails>
  connections:
    loop:
      start: process-single-email
      end: all-emails-processed

process-single-email:
  type: agent
  name: "Process Single Email"
  parentId: email-processor-loop
  inputs:
    systemPrompt: "Classify and respond to this email"
    userPrompt: "Email content: <loop.currentItem>"
    model: gpt-4o
    apiKey: '{{OPENAI_API_KEY}}'
```

### Complex Loop with Multiple Child Blocks

```yaml
data-analysis-loop:
  type: loop
  name: "Data Analysis Loop"
  inputs:
    loopType: forEach
    collection: <data-fetcher.records>
    maxConcurrency: 3
  connections:
    loop:
      start: validate-record
      end: generate-report
      error: handle-loop-error

validate-record:
  type: function
  name: "Validate Record"
  parentId: data-analysis-loop
  inputs:
    code: |
      const record = <loop.currentItem>;
      const index = <loop.index>;

      // Validate the record
      if (!record.id || !record.data) {
        throw new Error(`Invalid record at index ${index}`);
      }

      return {
        valid: true,
        recordId: record.id,
        processedAt: new Date().toISOString()
      };
  connections:
    success: analyze-record
    error: record-error

analyze-record:
  type: agent
  name: "Analyze Record"
  parentId: data-analysis-loop
  inputs:
    systemPrompt: "Analyze this data record and extract insights"
    userPrompt: |
      Record ID: <validaterecord.recordId>
      Data: <loop.currentItem.data>
      Position in collection: <loop.index>
    model: gpt-4o
    apiKey: '{{OPENAI_API_KEY}}'
  connections:
    success: store-analysis

store-analysis:
  type: function
  name: "Store Analysis"
  parentId: data-analysis-loop
  inputs:
    code: |
      const analysis = <analyzerecord.content>;
      const recordId = <validaterecord.recordId>;

      // Store analysis result
      return {
        recordId,
        analysis,
        completedAt: new Date().toISOString()
      };
```

### Concurrent Processing Loop

```yaml
parallel-processing-loop:
  type: loop
  name: "Parallel Processing Loop"
  inputs:
    loopType: forEach
    collection: <start.tasks>
    maxConcurrency: 5
  connections:
    loop:
      start: process-task
      end: aggregate-results

process-task:
  type: api
  name: "Process Task"
  parentId: parallel-processing-loop
  inputs:
    url: "https://api.example.com/process"
    method: POST
    headers:
      - key: "Authorization"
        value: "Bearer {{API_TOKEN}}"
    body: |
      {
        "taskId": "<loop.currentItem.id>",
        "data": "<loop.currentItem.data>"
      }
  connections:
    success: task-completed
```

## Loop Variables

Inside loop child blocks, these special variables are available:

```yaml
# Available in all child blocks of the loop
<loop.index>        # Current iteration number (0-based)
<loop.currentItem>  # Current item being processed (forEach loops)
<loop.items>        # Full collection (forEach loops)
```

## Output References

After a loop completes, you can reference its aggregated results:

```yaml
# In blocks after the loop
final-processor:
  inputs:
    all-results: <loop-name.results>  # Array of all iteration results
    total-count: <loop-name.count>    # Number of iterations completed
```

## Best Practices

- Set reasonable iteration limits to avoid long execution times
- Use forEach for collection processing, for loops for fixed iterations
- Consider using maxConcurrency for I/O bound operations
- Include error handling for robust loop execution
- Use descriptive names for loop child blocks
- Test with small collections first
- Monitor execution time for large collections

17 apps/docs/content/docs/yaml/blocks/meta.json Normal file
@@ -0,0 +1,17 @@
{
  "title": "Block Schemas",
  "pages": [
    "starter",
    "agent",
    "function",
    "api",
    "condition",
    "router",
    "evaluator",
    "response",
    "loop",
    "parallel",
    "webhook",
    "workflow"
  ]
}

322 apps/docs/content/docs/yaml/blocks/parallel.mdx Normal file
@@ -0,0 +1,322 @@
---
title: Parallel Block YAML Schema
description: YAML configuration reference for Parallel blocks
---

## Schema Definition

```yaml
type: object
required:
  - type
  - name
  - inputs
  - connections
properties:
  type:
    type: string
    enum: [parallel]
    description: Block type identifier
  name:
    type: string
    description: Display name for this parallel block
  inputs:
    type: object
    required:
      - parallelType
    properties:
      parallelType:
        type: string
        enum: [count, collection]
        description: Type of parallel execution
      count:
        type: number
        description: Number of parallel instances (for 'count' type)
        minimum: 1
        maximum: 100
      collection:
        type: string
        description: Collection to distribute across instances (for 'collection' type)
      maxConcurrency:
        type: number
        description: Maximum concurrent executions
        default: 10
        minimum: 1
        maximum: 50
  connections:
    type: object
    required:
      - parallel
    properties:
      parallel:
        type: object
        required:
          - start
        properties:
          start:
            type: string
            description: Target block ID to execute inside each parallel instance
          end:
            type: string
            description: Target block ID after all parallel instances complete (optional)
      success:
        type: string
        description: Target block ID after all instances complete (alternative format)
      error:
        type: string
        description: Target block ID for error handling
```

## Connection Configuration

Parallel blocks use a special connection format with a `parallel` section:

```yaml
connections:
  parallel:
    start: <string>                     # Target block ID to execute inside each parallel instance
    end: <string>                       # Target block ID after all instances complete (optional)
    error: <string>                     # Target block ID for error handling (optional)
```

Alternative format (legacy):

```yaml
connections:
  success: <string>                     # Target block ID after all instances complete
  error: <string>                       # Target block ID for error handling (optional)
```

## Child Block Configuration

Blocks inside a parallel block must have their `parentId` set to the parallel block ID:

```yaml
parallel-1:
  type: parallel
  name: "Process Items"
  inputs:
    parallelType: collection
    collection: <start.items>
  connections:
    parallel:
      start: process-item
      end: aggregate-results

# Child block inside the parallel
process-item:
  type: agent
  name: "Process Item"
  parentId: parallel-1                  # References the parallel block
  inputs:
    systemPrompt: "Process this item"
    userPrompt: <parallel.currentItem>
    model: gpt-4o
    apiKey: '{{OPENAI_API_KEY}}'
```

## Examples

### Count-Based Parallel Processing

```yaml
worker-parallel:
  type: parallel
  name: "Worker Parallel"
  inputs:
    parallelType: count
    count: 5
    maxConcurrency: 3
  connections:
    parallel:
      start: worker-task
      end: collect-worker-results

worker-task:
  type: api
  name: "Worker Task"
  parentId: worker-parallel
  inputs:
    url: "https://api.worker.com/process"
    method: POST
    headers:
      - key: "Authorization"
        value: "Bearer {{WORKER_API_KEY}}"
    body: |
      {
        "instanceId": <parallel.index>,
        "timestamp": "{{new Date().toISOString()}}"
      }
  connections:
    success: worker-complete
```

### Collection-Based Parallel Processing

```yaml
api-parallel:
  type: parallel
  name: "API Parallel"
  inputs:
    parallelType: collection
    collection: <start.apiEndpoints>
    maxConcurrency: 10
  connections:
    parallel:
      start: call-api
      end: merge-api-results

call-api:
  type: api
  name: "Call API"
  parentId: api-parallel
  inputs:
    url: <parallel.currentItem.endpoint>
    method: <parallel.currentItem.method>
    headers:
      - key: "Authorization"
        value: "Bearer {{API_TOKEN}}"
  connections:
    success: api-complete
```

### Complex Parallel Processing Pipeline

```yaml
data-processing-parallel:
  type: parallel
  name: "Data Processing Parallel"
  inputs:
    parallelType: collection
    collection: <data-loader.records>
    maxConcurrency: 8
  connections:
    parallel:
      start: validate-data
      end: final-aggregation
      error: parallel-error-handler

validate-data:
  type: function
  name: "Validate Data"
  parentId: data-processing-parallel
  inputs:
    code: |
      const record = <parallel.currentItem>;
      const index = <parallel.index>;

      // Validate record structure
      if (!record.id || !record.content) {
        throw new Error(`Invalid record at index ${index}`);
      }

      return {
        valid: true,
        recordId: record.id,
        validatedAt: new Date().toISOString()
      };
  connections:
    success: process-data
    error: validation-error

process-data:
  type: agent
  name: "Process Data"
  parentId: data-processing-parallel
  inputs:
    systemPrompt: "Process and analyze this data record"
    userPrompt: |
      Record ID: <validatedata.recordId>
      Content: <parallel.currentItem.content>
      Instance: <parallel.index>
    model: gpt-4o
    temperature: 0.3
    apiKey: '{{OPENAI_API_KEY}}'
  connections:
    success: store-result

store-result:
  type: function
  name: "Store Result"
  parentId: data-processing-parallel
  inputs:
    code: |
      const processed = <processdata.content>;
      const recordId = <validatedata.recordId>;

      return {
        recordId,
        processed,
        completedAt: new Date().toISOString(),
        instanceIndex: <parallel.index>
      };
```

### Concurrent AI Analysis

```yaml
multi-model-parallel:
  type: parallel
  name: "Multi-Model Analysis"
  inputs:
    parallelType: collection
    collection: |
      [
        {"model": "gpt-4o", "focus": "technical accuracy"},
        {"model": "claude-3-5-sonnet-20241022", "focus": "creative quality"},
        {"model": "gemini-2.0-flash-exp", "focus": "factual verification"}
      ]
    maxConcurrency: 3
  connections:
    parallel:
      start: analyze-content
      end: combine-analyses

analyze-content:
  type: agent
  name: "Analyze Content"
  parentId: multi-model-parallel
  inputs:
    systemPrompt: |
      You are analyzing content with a focus on <parallel.currentItem.focus>.
      Provide detailed analysis from this perspective.
    userPrompt: |
      Content to analyze: <start.content>
      Analysis focus: <parallel.currentItem.focus>
    model: <parallel.currentItem.model>
    apiKey: '{{OPENAI_API_KEY}}'
  connections:
    success: analysis-complete
```

## Parallel Variables

Inside parallel child blocks, these special variables are available:

```yaml
# Available in all child blocks of the parallel
<parallel.index>        # Instance number (0-based)
<parallel.currentItem>  # Item for this instance (collection type)
<parallel.items>        # Full collection (collection type)
```

## Output References

After a parallel block completes, you can reference its aggregated results:

```yaml
# In blocks after the parallel
final-processor:
  inputs:
    all-results: <parallel-name.results>   # Array of all instance results
    total-count: <parallel-name.count>     # Number of instances completed
```

## Best Practices

- Use appropriate maxConcurrency to avoid overwhelming APIs (see the sketch after this list)
- Ensure operations are independent and don't rely on each other
- Include error handling for robust parallel execution
- Test with small collections first
- Monitor rate limits for external APIs
- Use collection type for distributing work, count type for fixed instances
- Consider memory usage with large collections
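
To make the `maxConcurrency` semantics above concrete, here is a minimal TypeScript sketch of bounded concurrency over a collection. The function name and shape are illustrative assumptions, not Sim Studio's internal scheduler:

```typescript
// A minimal sketch of bounded concurrency, assuming a generic async worker.
// Illustrative only -- this is not Sim Studio's internal scheduler.
async function mapWithConcurrency<T, R>(
  items: T[],
  maxConcurrency: number,
  worker: (item: T, index: number) => Promise<R>
): Promise<R[]> {
  const results: R[] = new Array(items.length)
  let next = 0

  // Each lane claims the next unprocessed index, so at most
  // `maxConcurrency` workers are in flight at any moment.
  async function lane(): Promise<void> {
    while (next < items.length) {
      const index = next++
      results[index] = await worker(items[index], index)
    }
  }

  const laneCount = Math.max(1, Math.min(maxConcurrency, items.length))
  await Promise.all(Array.from({ length: laneCount }, () => lane()))
  return results
}
```
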
140 apps/docs/content/docs/yaml/blocks/response.mdx Normal file
@@ -0,0 +1,140 @@
---
title: Response Block YAML Schema
description: YAML configuration reference for Response blocks
---

## Schema Definition

```yaml
type: object
required:
  - type
  - name
properties:
  type:
    type: string
    enum: [response]
    description: Block type identifier
  name:
    type: string
    description: Display name for this response block
  inputs:
    type: object
    properties:
      dataMode:
        type: string
        enum: [structured, json]
        description: Mode for defining response data structure
        default: structured
      builderData:
        type: object
        description: Structured response data (when dataMode is 'structured')
      data:
        type: object
        description: JSON response data (when dataMode is 'json')
      status:
        type: number
        description: HTTP status code
        default: 200
        minimum: 100
        maximum: 599
      headers:
        type: array
        description: Response headers as key-value pairs
        items:
          type: object
          properties:
            key:
              type: string
              description: Header name
            value:
              type: string
              description: Header value
```

## Connection Configuration

Response blocks are terminal blocks (no outgoing connections) and define the final output:

```yaml
# No connections object needed - Response blocks are always terminal
```

## Examples

### Simple Response

```yaml
simple-response:
  type: response
  name: "Simple Response"
  inputs:
    data:
      message: "Hello World"
      timestamp: <function.timestamp>
    status: 200
```

### Success Response

```yaml
success-response:
  type: response
  name: "Success Response"
  inputs:
    data:
      success: true
      user:
        id: <agent.user_id>
        name: <agent.user_name>
        email: <agent.user_email>
      created_at: <function.timestamp>
    status: 201
    headers:
      - key: "Location"
        value: "/api/users/<agent.user_id>"
      - key: "X-Created-By"
        value: "workflow-engine"
```

### Error Response

```yaml
error-response:
  type: response
  name: "Error Response"
  inputs:
    data:
      error: true
      message: <agent.error_message>
      code: "VALIDATION_FAILED"
      details: <function.validation_errors>
    status: 400
    headers:
      - key: "X-Error-Code"
        value: "VALIDATION_FAILED"
```

### Paginated Response

```yaml
paginated-response:
  type: response
  name: "Paginated Response"
  inputs:
    data:
      data: <agent.results>
      pagination:
        page: <start.page>
        per_page: <start.per_page>
        total: <function.total_count>
        total_pages: <function.total_pages>
    status: 200
    headers:
      - key: "X-Total-Count"
        value: <function.total_count>
      - key: "Cache-Control"
        value: "public, max-age=300"
      - key: "Content-Type"
        value: "application/json"
```
200 apps/docs/content/docs/yaml/blocks/router.mdx Normal file
@@ -0,0 +1,200 @@
---
title: Router Block YAML Schema
description: YAML configuration reference for Router blocks
---

## Schema Definition

```yaml
type: object
required:
  - type
  - name
  - inputs
properties:
  type:
    type: string
    enum: [router]
    description: Block type identifier
  name:
    type: string
    description: Display name for this router block
  inputs:
    type: object
    required:
      - prompt
      - model
      - apiKey
    properties:
      prompt:
        type: string
        description: Instructions for routing decisions and criteria
      model:
        type: string
        description: AI model identifier (e.g., gpt-4o, gemini-2.5-pro, deepseek-chat)
      apiKey:
        type: string
        description: API key for the model provider (use {{ENV_VAR}} format)
      temperature:
        type: number
        minimum: 0
        maximum: 2
        description: Model temperature for routing decisions
        default: 0.3
      azureEndpoint:
        type: string
        description: Azure OpenAI endpoint URL (required for Azure models)
      azureApiVersion:
        type: string
        description: Azure API version (required for Azure models)
  connections:
    type: object
    description: Multiple connection paths for different routing outcomes
    properties:
      success:
        type: array
        items:
          type: string
        description: Array of target block IDs for routing destinations
```

## Connection Configuration

Router blocks use a success array containing all possible routing destinations:

```yaml
connections:
  success:
    - <string>                          # Target block ID option 1
    - <string>                          # Target block ID option 2
    - <string>                          # Target block ID option 3
    # Additional target block IDs as needed
```

## Examples

### Content Type Router

```yaml
content-router:
  type: router
  name: "Content Type Router"
  inputs:
    prompt: |
      Route this content based on its type:
      - If it's a question, route to question-handler
      - If it's a complaint, route to complaint-handler
      - If it's feedback, route to feedback-handler
      - If it's a request, route to request-handler

      Content: <start.input>
    model: gpt-4o
    apiKey: '{{OPENAI_API_KEY}}'
  connections:
    success:
      - question-handler
      - complaint-handler
      - feedback-handler
      - request-handler
```

### Priority Router

```yaml
priority-router:
  type: router
  name: "Priority Router"
  inputs:
    prompt: |
      Analyze the urgency and route accordingly:
      - urgent-queue: High priority, needs immediate attention
      - standard-queue: Normal priority, standard processing
      - low-queue: Low priority, can be delayed

      Email content: <email-analyzer.content>

      Route based on urgency indicators, deadlines, and tone.
    model: gpt-4o
    temperature: 0.2
    apiKey: '{{OPENAI_API_KEY}}'
  connections:
    success:
      - urgent-queue
      - standard-queue
      - low-queue
```

### Department Router

```yaml
department-router:
  type: router
  name: "Department Router"
  inputs:
    prompt: |
      Route this customer inquiry to the appropriate department:

      - sales-team: Sales questions, pricing, demos
      - support-team: Technical issues, bug reports, how-to questions
      - billing-team: Payment issues, subscription changes, invoices
      - general-team: General inquiries, feedback, other topics

      Customer message: <start.input>
      Customer type: <customer-analyzer.type>
    model: claude-3-5-sonnet-20241022
    apiKey: '{{ANTHROPIC_API_KEY}}'
  connections:
    success:
      - sales-team
      - support-team
      - billing-team
      - general-team
```

## Advanced Configuration

### Multiple Models Router

```yaml
model-selector-router:
  type: router
  name: "Model Selection Router"
  inputs:
    prompt: |
      Based on the task complexity, route to the appropriate model:
      - simple-gpt35: Simple questions, basic tasks
      - advanced-gpt4: Complex analysis, detailed reasoning
      - specialized-claude: Creative writing, nuanced analysis

      Task: <start.task>
      Complexity indicators: <analyzer.complexity>
    model: gpt-4o-mini
    temperature: 0.1
    apiKey: '{{OPENAI_API_KEY}}'
  connections:
    success:
      - simple-gpt35
      - advanced-gpt4
      - specialized-claude
```

## Output References

Router blocks don't produce direct outputs; they control the workflow path:

```yaml
# Router decisions affect which subsequent blocks execute
# Access the routed block's outputs normally:
final-step:
  inputs:
    routed-result: <routed-block-name.content>
```

## Best Practices

- Provide clear routing criteria in the prompt
- Use specific, descriptive target block names
- Include examples of content for each routing path
- Use lower temperature values for consistent routing
- Test with diverse input types to ensure accurate routing
- Consider fallback paths for edge cases
183 apps/docs/content/docs/yaml/blocks/starter.mdx Normal file
@@ -0,0 +1,183 @@
---
title: Starter Block YAML Schema
description: YAML configuration reference for Starter blocks
---

## Schema Definition

```yaml
type: object
required:
  - type
  - name
properties:
  type:
    type: string
    enum: [starter]
    description: Block type identifier
  name:
    type: string
    description: Display name for this starter block
  inputs:
    type: object
    properties:
      startWorkflow:
        type: string
        enum: [manual, webhook, schedule]
        description: How the workflow should be triggered
        default: manual
      inputFormat:
        type: array
        description: Expected input structure for API calls (manual workflows)
        items:
          type: object
          properties:
            name:
              type: string
              description: Field name
            type:
              type: string
              enum: [string, number, boolean, object, array]
              description: Field type
      scheduleType:
        type: string
        enum: [hourly, daily, weekly, monthly]
        description: Schedule frequency (schedule workflows only)
      hourlyMinute:
        type: number
        minimum: 0
        maximum: 59
        description: Minute of the hour to run (hourly schedules)
      dailyTime:
        type: string
        pattern: "^([01]?[0-9]|2[0-3]):[0-5][0-9]$"
        description: Time of day to run in HH:MM format (daily schedules)
      weeklyDay:
        type: string
        enum: [MON, TUE, WED, THU, FRI, SAT, SUN]
        description: Day of week to run (weekly schedules)
      weeklyTime:
        type: string
        pattern: "^([01]?[0-9]|2[0-3]):[0-5][0-9]$"
        description: Time of day to run in HH:MM format (weekly schedules)
      monthlyDay:
        type: number
        minimum: 1
        maximum: 28
        description: Day of month to run (monthly schedules)
      monthlyTime:
        type: string
        pattern: "^([01]?[0-9]|2[0-3]):[0-5][0-9]$"
        description: Time of day to run in HH:MM format (monthly schedules)
      timezone:
        type: string
        description: Timezone for scheduled workflows
        default: UTC
      webhookProvider:
        type: string
        enum: [slack, gmail, airtable, telegram, generic, whatsapp, github, discord, stripe]
        description: Provider for webhook integration (webhook workflows only)
      webhookConfig:
        type: object
        description: Provider-specific webhook configuration
  connections:
    type: object
    properties:
      success:
        type: string
        description: Target block ID to execute when workflow starts
```
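
The `dailyTime`, `weeklyTime`, and `monthlyTime` fields share the HH:MM pattern shown in the schema above. A quick client-side sanity check can reuse that regex; this helper is a sketch, not part of Sim Studio's API:

```typescript
// Reuses the HH:MM pattern from the schema above to pre-validate schedule
// times before saving a workflow. Illustrative helper, not a Sim Studio API.
const HH_MM = /^([01]?[0-9]|2[0-3]):[0-5][0-9]$/

function isValidScheduleTime(value: string): boolean {
  return HH_MM.test(value)
}

isValidScheduleTime('09:00') // true
isValidScheduleTime('24:00') // false -- hours run 00-23
```
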
## Connection Configuration

The starter block only has a success connection since it's the entry point:

```yaml
connections:
  success: <string>                     # Target block ID to execute when workflow starts
```

## Examples

### Manual Start

```yaml
start:
  type: starter
  name: Start
  inputs:
    startWorkflow: manual
  connections:
    success: next-block
```

### Manual Start with Input Format

```yaml
start:
  type: starter
  name: Start
  inputs:
    startWorkflow: manual
    inputFormat:
      - name: query
        type: string
      - name: email
        type: string
      - name: age
        type: number
      - name: isActive
        type: boolean
      - name: preferences
        type: object
      - name: tags
        type: array
  connections:
    success: agent-1
```

### Daily Schedule

```yaml
start:
  type: starter
  name: Start
  inputs:
    startWorkflow: schedule
    scheduleType: daily
    dailyTime: "09:00"
    timezone: "America/New_York"
  connections:
    success: daily-task
```

### Weekly Schedule

```yaml
start:
  type: starter
  name: Start
  inputs:
    startWorkflow: schedule
    scheduleType: weekly
    weeklyDay: MON
    weeklyTime: "08:30"
    timezone: UTC
  connections:
    success: weekly-report
```

### Webhook Trigger

```yaml
start:
  type: starter
  name: Start
  inputs:
    startWorkflow: webhook
    webhookProvider: slack
    webhookConfig:
      # Provider-specific configuration
  connections:
    success: process-webhook
```
278 apps/docs/content/docs/yaml/blocks/webhook.mdx Normal file
@@ -0,0 +1,278 @@
---
title: Webhook Block YAML Schema
description: YAML configuration reference for Webhook blocks
---

## Schema Definition

```yaml
type: object
required:
  - type
  - name
properties:
  type:
    type: string
    enum: [webhook]
    description: Block type identifier
  name:
    type: string
    description: Display name for this webhook block
  inputs:
    type: object
    properties:
      webhookConfig:
        type: object
        description: Webhook configuration settings
        properties:
          enabled:
            type: boolean
            description: Whether the webhook is active
            default: true
          secret:
            type: string
            description: Secret key for webhook verification
          headers:
            type: array
            description: Expected headers for validation
            items:
              type: object
              properties:
                key:
                  type: string
                  description: Header name
                value:
                  type: string
                  description: Expected header value
          methods:
            type: array
            description: Allowed HTTP methods
            items:
              type: string
              enum: [GET, POST, PUT, DELETE, PATCH]
            default: [POST]
      responseConfig:
        type: object
        description: Response configuration for the webhook
        properties:
          status:
            type: number
            description: HTTP status code to return
            default: 200
            minimum: 100
            maximum: 599
          headers:
            type: array
            description: Response headers
            items:
              type: object
              properties:
                key:
                  type: string
                  description: Header name
                value:
                  type: string
                  description: Header value
          body:
            type: string
            description: Response body content
  connections:
    type: object
    properties:
      success:
        type: string
        description: Target block ID for successful webhook processing
      error:
        type: string
        description: Target block ID for error handling
```

## Connection Configuration

Connections define where the workflow goes based on webhook processing:

```yaml
connections:
  success: <string>                     # Target block ID for successful processing
  error: <string>                       # Target block ID for error handling (optional)
```

## Examples

### Basic Webhook Trigger

```yaml
github-webhook:
  type: webhook
  name: "GitHub Webhook"
  inputs:
    webhookConfig:
      enabled: true
      secret: "{{GITHUB_WEBHOOK_SECRET}}"
      methods: [POST]
      headers:
        - key: "X-GitHub-Event"
          value: "push"
    responseConfig:
      status: 200
      body: |
        {
          "message": "Webhook received successfully",
          "timestamp": "{{new Date().toISOString()}}"
        }
  connections:
    success: process-github-event
    error: webhook-error-handler
```

### Slack Event Webhook

```yaml
slack-events:
  type: webhook
  name: "Slack Events"
  inputs:
    webhookConfig:
      enabled: true
      secret: "{{SLACK_SIGNING_SECRET}}"
      methods: [POST]
      headers:
        - key: "Content-Type"
          value: "application/json"
    responseConfig:
      status: 200
      headers:
        - key: "Content-Type"
          value: "application/json"
      body: |
        {
          "challenge": "<webhook.challenge>"
        }
  connections:
    success: handle-slack-event
```

### Payment Webhook (Stripe)

```yaml
stripe-webhook:
  type: webhook
  name: "Stripe Payment Webhook"
  inputs:
    webhookConfig:
      enabled: true
      secret: "{{STRIPE_WEBHOOK_SECRET}}"
      methods: [POST]
      headers:
        - key: "Stripe-Signature"
          value: "*"
    responseConfig:
      status: 200
      headers:
        - key: "Content-Type"
          value: "application/json"
      body: |
        {
          "received": true
        }
  connections:
    success: process-payment-event
    error: payment-webhook-error
```

### Generic API Webhook

```yaml
api-webhook:
  type: webhook
  name: "API Webhook"
  inputs:
    webhookConfig:
      enabled: true
      methods: [POST, PUT]
      headers:
        - key: "Authorization"
          value: "Bearer {{WEBHOOK_API_KEY}}"
        - key: "Content-Type"
          value: "application/json"
    responseConfig:
      status: 202
      headers:
        - key: "Content-Type"
          value: "application/json"
        - key: "X-Processed-By"
          value: "Sim Studio"
      body: |
        {
          "status": "accepted",
          "id": "{{Math.random().toString(36).substr(2, 9)}}",
          "received_at": "{{new Date().toISOString()}}"
        }
  connections:
    success: process-webhook-data
```

### Multi-Method Webhook

```yaml
crud-webhook:
  type: webhook
  name: "CRUD Webhook"
  inputs:
    webhookConfig:
      enabled: true
      methods: [GET, POST, PUT, DELETE]
      headers:
        - key: "X-API-Key"
          value: "{{CRUD_API_KEY}}"
    responseConfig:
      status: 200
      headers:
        - key: "Content-Type"
          value: "application/json"
      body: |
        {
          "method": "<webhook.method>",
          "processed": true,
          "timestamp": "{{new Date().toISOString()}}"
        }
  connections:
    success: route-by-method
```

## Webhook Variables

Inside webhook-triggered workflows, these special variables are available:

```yaml
# Available in blocks after the webhook
<webhook.payload>     # Full request payload/body
<webhook.headers>     # Request headers
<webhook.method>      # HTTP method used
<webhook.query>       # Query parameters
<webhook.path>        # Request path
<webhook.challenge>   # Challenge parameter (for verification)
```

## Output References

After a webhook processes a request, you can reference its data:

```yaml
# In subsequent blocks
process-webhook:
  inputs:
    payload: <webhook-name.payload>     # Request payload
    headers: <webhook-name.headers>     # Request headers
    method: <webhook-name.method>       # HTTP method
```

## Security Best Practices

- Always use webhook secrets for verification (see the sketch after this list)
- Validate expected headers and methods
- Implement proper error handling
- Use HTTPS endpoints in production
- Monitor webhook activity and failures
- Set appropriate response timeouts
- Validate payload structure before processing
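
To illustrate the first point, here is a minimal signature check in TypeScript. The header format is an assumption (GitHub-style `sha256=<hex>`); match it to your provider's scheme:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto'

// Minimal sketch of secret-based verification, assuming a GitHub-style
// hex-encoded HMAC-SHA256 signature header. Adapt the header name and
// format to your provider (Stripe, Slack, etc. each use their own scheme).
function verifySignature(rawBody: string, signatureHeader: string, secret: string): boolean {
  const expected = `sha256=${createHmac('sha256', secret).update(rawBody).digest('hex')}`
  const a = Buffer.from(expected)
  const b = Buffer.from(signatureHeader)
  // timingSafeEqual throws on length mismatch, so compare lengths first.
  return a.length === b.length && timingSafeEqual(a, b)
}
```
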
299 apps/docs/content/docs/yaml/blocks/workflow.mdx Normal file
@@ -0,0 +1,299 @@
---
title: Workflow Block YAML Schema
description: YAML configuration reference for Workflow blocks
---

## Schema Definition

```yaml
type: object
required:
  - type
  - name
  - inputs
properties:
  type:
    type: string
    enum: [workflow]
    description: Block type identifier
  name:
    type: string
    description: Display name for this workflow block
  inputs:
    type: object
    required:
      - workflowId
    properties:
      workflowId:
        type: string
        description: ID of the workflow to execute
      inputMapping:
        type: object
        description: Map current workflow data to sub-workflow inputs
        additionalProperties:
          type: string
          description: Input value or reference to parent workflow data
      environmentVariables:
        type: object
        description: Environment variables to pass to sub-workflow
        additionalProperties:
          type: string
          description: Environment variable value
      timeout:
        type: number
        description: Maximum execution time in milliseconds
        default: 300000
        minimum: 1000
        maximum: 1800000
  connections:
    type: object
    properties:
      success:
        type: string
        description: Target block ID for successful workflow completion
      error:
        type: string
        description: Target block ID for error handling
```

## Connection Configuration

Connections define where the workflow goes based on sub-workflow results:

```yaml
connections:
  success: <string>                     # Target block ID for successful completion
  error: <string>                       # Target block ID for error handling (optional)
```

## Examples

### Simple Workflow Execution

```yaml
data-processor:
  type: workflow
  name: "Data Processing Workflow"
  inputs:
    workflowId: "data-processing-v2"
    inputMapping:
      rawData: <start.input>
      userId: <user-validator.userId>
    environmentVariables:
      PROCESSING_MODE: "production"
      LOG_LEVEL: "info"
  connections:
    success: process-results
    error: workflow-error-handler
```

### Content Generation Pipeline

```yaml
content-generator:
  type: workflow
  name: "Content Generation Pipeline"
  inputs:
    workflowId: "content-generation-v3"
    inputMapping:
      topic: <start.topic>
      style: <style-analyzer.recommendedStyle>
      targetAudience: <audience-detector.audience>
      brandGuidelines: <brand-config.guidelines>
    environmentVariables:
      CONTENT_API_KEY: "{{CONTENT_API_KEY}}"
      QUALITY_THRESHOLD: "high"
    timeout: 120000
  connections:
    success: review-content
    error: content-generation-failed
```

### Multi-Step Analysis Workflow

```yaml
analysis-workflow:
  type: workflow
  name: "Analysis Workflow"
  inputs:
    workflowId: "comprehensive-analysis"
    inputMapping:
      document: <document-processor.content>
      analysisType: "comprehensive"
      includeMetrics: true
      outputFormat: "structured"
    environmentVariables:
      ANALYSIS_MODEL: "gpt-4o"
      OPENAI_API_KEY: "{{OPENAI_API_KEY}}"
      CLAUDE_API_KEY: "{{CLAUDE_API_KEY}}"
  connections:
    success: compile-analysis-report
    error: analysis-workflow-error
```

### Conditional Workflow Execution

```yaml
customer-workflow-router:
  type: condition
  name: "Customer Workflow Router"
  inputs:
    conditions:
      if: <customer-type.type> === "enterprise"
      else-if: <customer-type.type> === "premium"
      else: true
  connections:
    conditions:
      if: enterprise-workflow
      else-if: premium-workflow
      else: standard-workflow

enterprise-workflow:
  type: workflow
  name: "Enterprise Customer Workflow"
  inputs:
    workflowId: "enterprise-customer-processing"
    inputMapping:
      customerData: <customer-data.profile>
      accountManager: <account-assignment.manager>
      tier: "enterprise"
    environmentVariables:
      PRIORITY_LEVEL: "high"
      SLA_REQUIREMENTS: "strict"
  connections:
    success: enterprise-complete

premium-workflow:
  type: workflow
  name: "Premium Customer Workflow"
  inputs:
    workflowId: "premium-customer-processing"
    inputMapping:
      customerData: <customer-data.profile>
      supportLevel: "premium"
    environmentVariables:
      PRIORITY_LEVEL: "medium"
  connections:
    success: premium-complete

standard-workflow:
  type: workflow
  name: "Standard Customer Workflow"
  inputs:
    workflowId: "standard-customer-processing"
    inputMapping:
      customerData: <customer-data.profile>
    environmentVariables:
      PRIORITY_LEVEL: "standard"
  connections:
    success: standard-complete
```

### Parallel Workflow Execution

```yaml
parallel-workflows:
  type: parallel
  name: "Parallel Workflow Processing"
  inputs:
    parallelType: collection
    collection: |
      [
        {"workflowId": "sentiment-analysis", "focus": "sentiment"},
        {"workflowId": "topic-extraction", "focus": "topics"},
        {"workflowId": "entity-recognition", "focus": "entities"}
      ]
  connections:
    success: merge-workflow-results

execute-analysis-workflow:
  type: workflow
  name: "Execute Analysis Workflow"
  parentId: parallel-workflows
  inputs:
    workflowId: <parallel.currentItem.workflowId>
    inputMapping:
      content: <start.content>
      analysisType: <parallel.currentItem.focus>
    environmentVariables:
      ANALYSIS_API_KEY: "{{ANALYSIS_API_KEY}}"
  connections:
    success: workflow-complete
```

### Error Handling Workflow

```yaml
main-workflow:
  type: workflow
  name: "Main Processing Workflow"
  inputs:
    workflowId: "main-processing-v1"
    inputMapping:
      data: <start.input>
    timeout: 180000
  connections:
    success: main-complete
    error: error-recovery-workflow

error-recovery-workflow:
  type: workflow
  name: "Error Recovery Workflow"
  inputs:
    workflowId: "error-recovery-v1"
    inputMapping:
      originalInput: <start.input>
      errorDetails: <main-workflow.error>
      failureTimestamp: "{{new Date().toISOString()}}"
    environmentVariables:
      RECOVERY_MODE: "automatic"
      FALLBACK_ENABLED: "true"
  connections:
    success: recovery-complete
    error: manual-intervention-required
```

## Input Mapping

Map data from the parent workflow to the sub-workflow:

```yaml
inputMapping:
  # Static values
  mode: "production"
  version: "1.0"

  # References to parent workflow data
  userData: <user-processor.profile>
  settings: <config-loader.settings>

  # Complex object mapping
  requestData:
    id: <start.requestId>
    timestamp: "{{new Date().toISOString()}}"
    source: "parent-workflow"
```

## Output References

After a workflow block completes, you can reference its outputs:

```yaml
# In subsequent blocks
next-block:
  inputs:
    workflowResult: <workflow-name.output>    # Sub-workflow output
    executionTime: <workflow-name.duration>   # Execution duration
    status: <workflow-name.status>            # Execution status
```

## Best Practices

- Use descriptive workflow IDs for clarity
- Map only necessary data to sub-workflows
- Set appropriate timeouts for workflow complexity (see the sketch after this list)
- Include error handling for robust execution
- Pass environment variables securely
- Test sub-workflows independently first
- Monitor nested workflow performance
- Use versioned workflow IDs for stability
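
As a rough mental model for the `timeout` input, a millisecond budget can be enforced by racing the sub-task against a timer. This sketch is illustrative only and makes no claim about Sim Studio's actual executor:

```typescript
// Illustrative only: how a millisecond budget like the schema's `timeout`
// field can bound an async task. This is not Sim Studio's executor.
async function withTimeout<T>(task: Promise<T>, timeoutMs: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined
  const deadline = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${timeoutMs}ms`)), timeoutMs)
  })
  try {
    return await Promise.race([task, deadline])
  } finally {
    clearTimeout(timer) // Always clear the timer so it cannot keep the process alive
  }
}
```
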
273 apps/docs/content/docs/yaml/examples.mdx Normal file
@@ -0,0 +1,273 @@
---
title: YAML Workflow Examples
description: Examples of complete YAML workflows
---

import { Tab, Tabs } from 'fumadocs-ui/components/tabs'

## Multi-Agent Chain Workflow

A workflow where multiple AI agents process information sequentially:

```yaml
version: '1.0'
blocks:
  start:
    type: starter
    name: Start
    inputs:
      startWorkflow: manual
    connections:
      success: agent-1-initiator

  agent-1-initiator:
    type: agent
    name: Agent 1 Initiator
    inputs:
      systemPrompt: You are the first agent in a chain. Your role is to analyze the input and create an initial response that will be passed to the next agent.
      userPrompt: |-
        Welcome! I'm the first agent in our chain.

        Input to process: <start.input>

        Please create an initial analysis or greeting that the next agent can build upon. Be creative and set a positive tone for the chain!
      model: gpt-4o
      temperature: 0.7
      apiKey: '{{OPENAI_API_KEY}}'
    connections:
      success: agent-2-enhancer

  agent-2-enhancer:
    type: agent
    name: Agent 2 Enhancer
    inputs:
      systemPrompt: You are the second agent in a chain. Take the output from Agent 1 and enhance it with additional insights or improvements.
      userPrompt: |-
        I'm the second agent! Here's what Agent 1 provided:

        <agent1initiator.content>

        Now I'll enhance this with additional details, insights, or improvements. Let me build upon their work!
      model: gpt-4o
      temperature: 0.7
      apiKey: '{{OPENAI_API_KEY}}'
    connections:
      success: agent-3-refiner

  agent-3-refiner:
    type: agent
    name: Agent 3 Refiner
    inputs:
      systemPrompt: You are the third agent in a chain. Take the enhanced output from Agent 2 and refine it further, adding structure or organization.
      userPrompt: |-
        I'm the third agent in our chain! Here's the enhanced work from Agent 2:

        <agent2enhancer.content>

        My job is to refine and organize this content. I'll add structure, clarity, and polish to make it even better!
      model: gpt-4o
      temperature: 0.6
      apiKey: '{{OPENAI_API_KEY}}'
    connections:
      success: agent-4-finalizer

  agent-4-finalizer:
    type: agent
    name: Agent 4 Finalizer
    inputs:
      systemPrompt: You are the final agent in a chain of 4. Create a comprehensive summary and conclusion based on all the previous agents' work.
      userPrompt: |-
        I'm the final agent! Here's the refined work from Agent 3:

        <agent3refiner.content>

        As the last agent in our chain, I'll create a final, polished summary that brings together all the work from our team of 4 agents. Let me conclude this beautifully!
      model: gpt-4o
      temperature: 0.5
      apiKey: '{{OPENAI_API_KEY}}'
```

## Router-Based Conditional Workflow

A workflow that uses routing logic to send data to different agents based on conditions:

```yaml
version: '1.0'
blocks:
  start:
    type: starter
    name: Start
    inputs:
      startWorkflow: manual
    connections:
      success: router-1

  router-1:
    type: router
    name: Router 1
    inputs:
      prompt: go to agent-2 if <start.input> is greater than 10. else agent-1 if greater than 5. else agent-3
      model: gpt-4o
      apiKey: '{{OPENAI_API_KEY}}'
    connections:
      success:
        - agent-1
        - agent-2
        - agent-3

  agent-1:
    type: agent
    name: Agent 1
    inputs:
      systemPrompt: say 1
      model: gpt-4o
      apiKey: '{{OPENAI_API_KEY}}'

  agent-2:
    type: agent
    name: Agent 2
    inputs:
      systemPrompt: say 2
      model: gpt-4o
      apiKey: '{{OPENAI_API_KEY}}'

  agent-3:
    type: agent
    name: Agent 3
    inputs:
      systemPrompt: say 3
      model: gpt-4o
      apiKey: '{{OPENAI_API_KEY}}'
```

## Web Search with Structured Output

A workflow that searches the web using tools and returns structured data:

```yaml
version: '1.0'
blocks:
  59eb07c1-1411-4b28-a274-fa78f55daf72:
    type: starter
    name: Start
    inputs:
      startWorkflow: manual
    connections:
      success: d77c2c98-56c4-432d-9338-9bac54a2d42f
  d77c2c98-56c4-432d-9338-9bac54a2d42f:
    type: agent
    name: Agent 1
    inputs:
      systemPrompt: look up the user input. use structured output
      userPrompt: <start.input>
      model: claude-sonnet-4-0
      apiKey: '{{ANTHROPIC_API_KEY}}'
      tools:
        - type: exa
          title: Exa
          params:
            type: auto
            apiKey: '{{EXA_API_KEY}}'
            numResults: ''
          toolId: exa_search
          operation: exa_search
          isExpanded: true
          usageControl: auto
      responseFormat: |-
        {
          "name": "output_schema",
          "description": "Defines the structure for an output object.",
          "strict": true,
          "schema": {
            "type": "object",
            "properties": {
              "output": {
                "type": "string",
                "description": "The output value"
              }
            },
            "additionalProperties": false,
            "required": ["output"]
          }
        }
```
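
Because the `responseFormat` above pins the agent's output to a `{ "output": string }` object, downstream code can validate it with a small type guard. This is a sketch under that assumption, not a Sim Studio API:

```typescript
// Narrows an unknown value to the { output: string } shape declared in the
// responseFormat above. A sketch under that assumption, not a Sim Studio API.
interface AgentOutput {
  output: string
}

function isAgentOutput(value: unknown): value is AgentOutput {
  return (
    typeof value === 'object' &&
    value !== null &&
    typeof (value as Record<string, unknown>).output === 'string'
  )
}
```
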
## Loop Processing with Collection

A workflow that processes each item in a collection using a loop:

```yaml
version: '1.0'
blocks:
  start:
    type: starter
    name: Start
    inputs:
      startWorkflow: manual
    connections:
      success: food-analysis-loop
  food-analysis-loop:
    type: loop
    name: Food Analysis Loop
    inputs:
      count: 5
      loopType: forEach
      collection: '["apple", "banana", "carrot"]'
    connections:
      loop:
        start: calorie-agent
  calorie-agent:
    type: agent
    name: Calorie Analyzer
    inputs:
      systemPrompt: Return the number of calories in the food
      userPrompt: <loop.currentItem>
      model: claude-sonnet-4-0
      apiKey: '{{ANTHROPIC_API_KEY}}'
    parentId: food-analysis-loop
```

## Email Classification and Response

A workflow that classifies emails and generates appropriate responses:

```yaml
version: '1.0'
blocks:
  start:
    type: starter
    name: Start
    inputs:
      startWorkflow: manual
    connections:
      success: email-classifier

  email-classifier:
    type: agent
    name: Email Classifier
    inputs:
      systemPrompt: Classify emails into categories and extract key information.
      userPrompt: |
        Classify this email: <start.input>

        Categories: support, billing, sales, feedback
        Extract: urgency level, customer sentiment, main request
      model: gpt-4o
      apiKey: '{{OPENAI_API_KEY}}'
    connections:
      success: response-generator

  response-generator:
    type: agent
    name: Response Generator
    inputs:
      systemPrompt: Generate appropriate responses based on email classification.
      userPrompt: |
        Email classification: <emailclassifier.content>
        Original email: <start.input>

        Generate a professional, helpful response addressing the customer's needs.
      model: gpt-4o
      temperature: 0.7
      apiKey: '{{OPENAI_API_KEY}}'
```
159 apps/docs/content/docs/yaml/index.mdx Normal file
@@ -0,0 +1,159 @@
---
title: YAML Workflow Reference
description: Complete guide to writing YAML workflows in Sim Studio
---

import { Card, Cards } from "fumadocs-ui/components/card";
import { Step, Steps } from "fumadocs-ui/components/steps";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";

YAML workflows provide a powerful way to define, version, and share workflow configurations in Sim Studio. This reference guide covers the complete YAML syntax, block schemas, and best practices for creating robust workflows.

## Quick Start

Every Sim Studio workflow follows this basic structure:

```yaml
version: '1.0'
blocks:
  start:
    type: starter
    name: Start
    inputs:
      startWorkflow: manual
    connections:
      success: agent-1

  agent-1:
    type: agent
    name: "AI Assistant"
    inputs:
      systemPrompt: "You are a helpful assistant."
      userPrompt: 'Hi'
      model: gpt-4o
      apiKey: '{{OPENAI_API_KEY}}'
```

## Core Concepts

<Steps>
  <Step>
    <strong>Version Declaration</strong>: Must be exactly `version: '1.0'` (with quotes)
  </Step>
  <Step>
    <strong>Blocks Structure</strong>: All workflow blocks are defined under the `blocks` key
  </Step>
  <Step>
    <strong>Block References</strong>: Use block names in lowercase with spaces removed (e.g., `<aiassistant.content>`)
  </Step>
  <Step>
    <strong>Environment Variables</strong>: Reference with double curly braces `{{VARIABLE_NAME}}`
  </Step>
</Steps>

## Block Types

Sim Studio supports several core block types, each with specific YAML schemas:

<Cards>
  <Card title="Starter Block" href="/yaml/blocks/starter">
    Workflow entry point with support for manual, webhook, and scheduled triggers
  </Card>
  <Card title="Agent Block" href="/yaml/blocks/agent">
    AI-powered processing with support for tools and structured output
  </Card>
  <Card title="Function Block" href="/yaml/blocks/function">
    Custom JavaScript/TypeScript code execution
  </Card>
  <Card title="API Block" href="/yaml/blocks/api">
    HTTP requests to external services
  </Card>
  <Card title="Condition Block" href="/yaml/blocks/condition">
    Conditional branching based on boolean expressions
  </Card>
  <Card title="Router Block" href="/yaml/blocks/router">
    AI-powered intelligent routing to multiple paths
  </Card>
  <Card title="Loop Block" href="/yaml/blocks/loop">
    Iterative processing with for and forEach loops
  </Card>
  <Card title="Parallel Block" href="/yaml/blocks/parallel">
    Concurrent execution across multiple instances
  </Card>
  <Card title="Webhook Block" href="/yaml/blocks/webhook">
    Webhook triggers for external integrations
  </Card>
  <Card title="Evaluator Block" href="/yaml/blocks/evaluator">
    Validate outputs against defined criteria and metrics
  </Card>
  <Card title="Workflow Block" href="/yaml/blocks/workflow">
    Execute other workflows as reusable components
  </Card>
  <Card title="Response Block" href="/yaml/blocks/response">
    Final workflow output formatting
  </Card>
</Cards>

## Block Reference Syntax

The most critical aspect of YAML workflows is understanding how to reference data between blocks:

### Basic Rules

1. **Use the block name** (not the block ID) converted to lowercase with spaces removed (see the sketch after these rules)
2. **Add the appropriate property** (`.content` for agents, `.output` for tools)
3. **When using chat, reference the starter block** as `<start.input>`
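
Rule 1 is mechanical enough to express as a one-line helper, shown here purely for illustration (Sim Studio performs this normalization internally):

```typescript
// "Email Processor" -> "emailprocessor", matching rule 1 above.
// Illustrative helper; Sim Studio applies this normalization for you.
function toReferenceName(blockName: string): string {
  return blockName.toLowerCase().replace(/\s+/g, '')
}

toReferenceName('Email Agent') // "emailagent" -> reference as <emailagent.content>
```
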
### Examples

```yaml
# Block definitions
email-processor:
  type: agent
  name: "Email Agent"
  # ... configuration

data-formatter:
  type: function
  name: "Data Agent"
  # ... configuration

# Referencing their outputs
next-block:
  type: agent
  name: "Next Step"
  inputs:
    userPrompt: |
      Process this email: <emailagent.content>
      Use this formatted data: <dataagent.output>
      Original input: <start.input>
```

### Special Cases

- **Loop Variables**: `<loop.index>`, `<loop.currentItem>`, `<loop.items>`
- **Parallel Variables**: `<parallel.index>`, `<parallel.currentItem>`

## Environment Variables

Use environment variables for sensitive data like API keys:

```yaml
inputs:
  apiKey: '{{OPENAI_API_KEY}}'
  database: '{{DATABASE_URL}}'
  token: '{{SLACK_BOT_TOKEN}}'
```

## Best Practices

- **Keep block names human-readable**: "Email Processor" for UI display
- **Reference environment variables**: Never hardcode API keys
- **Structure for readability**: Group related blocks logically
- **Test incrementally**: Build workflows step by step

## Next Steps

- [Block Reference Syntax](/yaml/block-reference) - Detailed reference rules
- [Complete Block Schemas](/yaml/blocks) - All available block types
- [Workflow Examples](/yaml/examples) - Real-world workflow patterns
4 apps/docs/content/docs/yaml/meta.json Normal file
@@ -0,0 +1,4 @@
{
  "title": "YAML Reference",
  "pages": ["index", "block-reference", "blocks", "examples"]
}
138 apps/sim/app/api/copilot/checkpoints/[id]/revert/route.ts Normal file
@@ -0,0 +1,138 @@
import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { createLogger } from '@/lib/logs/console-logger'
import { db } from '@/db'
import { copilotCheckpoints, workflow as workflowTable } from '@/db/schema'

const logger = createLogger('RevertCheckpointAPI')

/**
 * POST /api/copilot/checkpoints/[id]/revert
 * Revert workflow to a specific checkpoint
 */
export async function POST(request: NextRequest, { params }: { params: Promise<{ id: string }> }) {
  const requestId = crypto.randomUUID().slice(0, 8)
  const checkpointId = (await params).id

  try {
    const session = await getSession()
    if (!session?.user?.id) {
      return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
    }

    logger.info(`[${requestId}] Reverting to checkpoint: ${checkpointId}`, {
      userId: session.user.id,
    })

    // Get the checkpoint
    const checkpoint = await db
      .select()
      .from(copilotCheckpoints)
      .where(
        and(eq(copilotCheckpoints.id, checkpointId), eq(copilotCheckpoints.userId, session.user.id))
      )
      .limit(1)

    if (!checkpoint.length) {
      return NextResponse.json({ error: 'Checkpoint not found' }, { status: 404 })
    }

    const checkpointData = checkpoint[0]
    const { workflowId, yaml: yamlContent } = checkpointData

    logger.info(`[${requestId}] Processing checkpoint revert`, {
      workflowId,
      yamlLength: yamlContent.length,
    })

    // Use the consolidated YAML endpoint instead of duplicating the processing logic
    const yamlEndpointUrl = `${process.env.NEXT_PUBLIC_BASE_URL || 'http://localhost:3000'}/api/workflows/${workflowId}/yaml`

    const yamlResponse = await fetch(yamlEndpointUrl, {
      method: 'PUT',
      headers: {
        'Content-Type': 'application/json',
        // Forward auth cookies from the original request
        Cookie: request.headers.get('Cookie') || '',
      },
      body: JSON.stringify({
        yamlContent,
        description: `Reverted to checkpoint from ${new Date(checkpointData.createdAt).toLocaleString()}`,
        source: 'checkpoint_revert',
        applyAutoLayout: true,
        createCheckpoint: false, // Don't create a checkpoint when reverting to one
      }),
    })

    if (!yamlResponse.ok) {
      const errorData = await yamlResponse.json()
      logger.error(`[${requestId}] Consolidated YAML endpoint failed:`, errorData)
      return NextResponse.json(
        {
          success: false,
          error: 'Failed to revert checkpoint via YAML endpoint',
          details: errorData.errors || [errorData.error || 'Unknown error'],
        },
        { status: yamlResponse.status }
      )
    }

    const yamlResult = await yamlResponse.json()

    if (!yamlResult.success) {
      logger.error(`[${requestId}] YAML endpoint returned failure:`, yamlResult)
      return NextResponse.json(
        {
          success: false,
          error: 'Failed to process checkpoint YAML',
          details: yamlResult.errors || ['Unknown error'],
        },
        { status: 400 }
      )
    }

    // Update workflow's lastSynced timestamp
    await db
      .update(workflowTable)
      .set({
        lastSynced: new Date(),
        updatedAt: new Date(),
      })
      .where(eq(workflowTable.id, workflowId))

    // Notify the socket server to tell clients to rehydrate stores from database
    try {
      const socketUrl = process.env.SOCKET_URL || 'http://localhost:3002'
      await fetch(`${socketUrl}/api/copilot-workflow-edit`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          workflowId,
          description: `Reverted to checkpoint from ${new Date(checkpointData.createdAt).toLocaleString()}`,
        }),
      })
      logger.info(`[${requestId}] Notified socket server of checkpoint revert`)
    } catch (socketError) {
      logger.warn(`[${requestId}] Failed to notify socket server:`, socketError)
    }

    logger.info(`[${requestId}] Successfully reverted to checkpoint`)

    return NextResponse.json({
      success: true,
      message: `Successfully reverted to checkpoint from ${new Date(checkpointData.createdAt).toLocaleString()}`,
      summary: yamlResult.summary || `Restored workflow from checkpoint.`,
      warnings: yamlResult.warnings || [],
      data: yamlResult.data,
    })
  } catch (error) {
    logger.error(`[${requestId}] Error reverting checkpoint:`, error)
    return NextResponse.json(
      {
        error: `Failed to revert checkpoint: ${error instanceof Error ? error.message : 'Unknown error'}`,
      },
      { status: 500 }
    )
  }
}
64 apps/sim/app/api/copilot/checkpoints/route.ts Normal file
@@ -0,0 +1,64 @@
import { and, desc, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { createLogger } from '@/lib/logs/console-logger'
import { db } from '@/db'
import { copilotCheckpoints } from '@/db/schema'

const logger = createLogger('CopilotCheckpointsAPI')

/**
 * GET /api/copilot/checkpoints
 * List checkpoints for a specific chat
 */
export async function GET(request: NextRequest) {
  const requestId = crypto.randomUUID()

  try {
    const session = await getSession()
    if (!session?.user?.id) {
      return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
    }

    const { searchParams } = new URL(request.url)
    const chatId = searchParams.get('chatId')
    const limit = Number(searchParams.get('limit')) || 10
    const offset = Number(searchParams.get('offset')) || 0

    if (!chatId) {
      return NextResponse.json({ error: 'chatId is required' }, { status: 400 })
    }

    logger.info(`[${requestId}] Listing checkpoints for chat: ${chatId}`, {
      userId: session.user.id,
      limit,
      offset,
    })

    const checkpoints = await db
      .select()
      .from(copilotCheckpoints)
      .where(
        and(eq(copilotCheckpoints.userId, session.user.id), eq(copilotCheckpoints.chatId, chatId))
      )
      .orderBy(desc(copilotCheckpoints.createdAt))
      .limit(limit)
      .offset(offset)

    // Format timestamps to ISO strings for consistent timezone handling
    const formattedCheckpoints = checkpoints.map((checkpoint) => ({
      id: checkpoint.id,
      userId: checkpoint.userId,
      workflowId: checkpoint.workflowId,
      chatId: checkpoint.chatId,
      yaml: checkpoint.yaml,
      createdAt: checkpoint.createdAt.toISOString(),
      updatedAt: checkpoint.updatedAt.toISOString(),
    }))

    return NextResponse.json({ checkpoints: formattedCheckpoints })
  } catch (error) {
    logger.error(`[${requestId}] Error listing checkpoints:`, error)
    return NextResponse.json({ error: 'Failed to list checkpoints' }, { status: 500 })
  }
}
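A minimal sketch of calling this listing endpoint from a client. The route, query parameters (chatId, limit, offset), and the `checkpoints` response key come from the handler above; the same-origin base URL and session cookie are assumptions.

```ts
// List the most recent checkpoints for a chat (defaults mirror the handler: limit 10, offset 0).
async function listCheckpoints(chatId: string, limit = 10, offset = 0) {
  const params = new URLSearchParams({ chatId, limit: String(limit), offset: String(offset) })
  const res = await fetch(`/api/copilot/checkpoints?${params.toString()}`)
  if (!res.ok) throw new Error(`Failed to list checkpoints: ${res.status}`)
  const { checkpoints } = await res.json()
  return checkpoints // each entry carries ISO createdAt/updatedAt strings
}
```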
@@ -1,281 +0,0 @@
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import {
  type CopilotChat,
  type CopilotMessage,
  createChat,
  generateChatTitle,
  generateDocsResponse,
  getChat,
  updateChat,
} from '@/lib/copilot/service'
import { createLogger } from '@/lib/logs/console-logger'

const logger = createLogger('CopilotDocsAPI')

// Schema for docs queries
const DocsQuerySchema = z.object({
  query: z.string().min(1, 'Query is required'),
  topK: z.number().min(1).max(20).default(5),
  provider: z.string().optional(),
  model: z.string().optional(),
  stream: z.boolean().optional().default(false),
  chatId: z.string().optional(),
  workflowId: z.string().optional(),
  createNewChat: z.boolean().optional().default(false),
})

/**
 * POST /api/copilot/docs
 * Ask questions about documentation using RAG
 */
export async function POST(req: NextRequest) {
  const requestId = crypto.randomUUID()

  try {
    const session = await getSession()
    if (!session?.user?.id) {
      return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
    }

    const body = await req.json()
    const { query, topK, provider, model, stream, chatId, workflowId, createNewChat } =
      DocsQuerySchema.parse(body)

    logger.info(`[${requestId}] Docs RAG query: "${query}"`, {
      provider,
      model,
      topK,
      chatId,
      workflowId,
      createNewChat,
      userId: session.user.id,
    })

    // Handle chat context
    let currentChat: CopilotChat | null = null
    let conversationHistory: CopilotMessage[] = []

    if (chatId) {
      // Load existing chat
      currentChat = await getChat(chatId, session.user.id)
      if (currentChat) {
        conversationHistory = currentChat.messages
      }
    } else if (createNewChat && workflowId) {
      // Create new chat
      currentChat = await createChat(session.user.id, workflowId)
    }

    // Generate docs response
    const result = await generateDocsResponse(query, conversationHistory, {
      topK,
      provider,
      model,
      stream,
      workflowId,
      requestId,
    })

    if (stream && result.response instanceof ReadableStream) {
      // Handle streaming response with docs sources
      logger.info(`[${requestId}] Returning streaming docs response`)

      const encoder = new TextEncoder()

      return new Response(
        new ReadableStream({
          async start(controller) {
            const reader = (result.response as ReadableStream).getReader()
            let accumulatedResponse = ''

            try {
              // Send initial metadata including sources
              const metadata = {
                type: 'metadata',
                chatId: currentChat?.id,
                sources: result.sources,
                citations: result.sources.map((source, index) => ({
                  id: index + 1,
                  title: source.title,
                  url: source.url,
                })),
                metadata: {
                  requestId,
                  chunksFound: result.sources.length,
                  query,
                  topSimilarity: result.sources[0]?.similarity,
                  provider,
                  model,
                },
              }
              controller.enqueue(encoder.encode(`data: ${JSON.stringify(metadata)}\n\n`))

              while (true) {
                const { done, value } = await reader.read()
                if (done) break

                const chunk = new TextDecoder().decode(value)
                // Clean up any object serialization artifacts in streaming content
                const cleanedChunk = chunk.replace(/\[object Object\],?/g, '')
                accumulatedResponse += cleanedChunk

                const contentChunk = {
                  type: 'content',
                  content: cleanedChunk,
                }
                controller.enqueue(encoder.encode(`data: ${JSON.stringify(contentChunk)}\n\n`))
              }

              // Send completion marker first to unblock the user
              controller.enqueue(encoder.encode(`data: {"type":"done"}\n\n`))

              // Save conversation to database asynchronously (non-blocking)
              if (currentChat) {
                // Fire-and-forget database save to avoid blocking stream completion
                Promise.resolve()
                  .then(async () => {
                    try {
                      const userMessage: CopilotMessage = {
                        id: crypto.randomUUID(),
                        role: 'user',
                        content: query,
                        timestamp: new Date().toISOString(),
                      }

                      const assistantMessage: CopilotMessage = {
                        id: crypto.randomUUID(),
                        role: 'assistant',
                        content: accumulatedResponse,
                        timestamp: new Date().toISOString(),
                        citations: result.sources.map((source, index) => ({
                          id: index + 1,
                          title: source.title,
                          url: source.url,
                        })),
                      }

                      const updatedMessages = [
                        ...conversationHistory,
                        userMessage,
                        assistantMessage,
                      ]

                      // Generate title if this is the first message
                      let updatedTitle = currentChat.title ?? undefined
                      if (!updatedTitle && conversationHistory.length === 0) {
                        updatedTitle = await generateChatTitle(query)
                      }

                      // Update the chat in database
                      await updateChat(currentChat.id, session.user.id, {
                        title: updatedTitle,
                        messages: updatedMessages,
                      })

                      logger.info(
                        `[${requestId}] Updated chat ${currentChat.id} with new docs messages`
                      )
                    } catch (dbError) {
                      logger.error(`[${requestId}] Failed to save chat to database:`, dbError)
                      // Database errors don't affect the user's streaming experience
                    }
                  })
                  .catch((error) => {
                    logger.error(`[${requestId}] Unexpected error in async database save:`, error)
                  })
              }
            } catch (error) {
              logger.error(`[${requestId}] Docs streaming error:`, error)
              try {
                const errorChunk = {
                  type: 'error',
                  error: 'Streaming failed',
                }
                controller.enqueue(encoder.encode(`data: ${JSON.stringify(errorChunk)}\n\n`))
              } catch (enqueueError) {
                logger.error(`[${requestId}] Failed to enqueue error response:`, enqueueError)
              }
            } finally {
              controller.close()
            }
          },
        }),
        {
          headers: {
            'Content-Type': 'text/event-stream',
            'Cache-Control': 'no-cache',
            Connection: 'keep-alive',
          },
        }
      )
    }

    // Handle non-streaming response
    logger.info(`[${requestId}] Docs RAG response generated successfully`)

    // Save conversation to database if we have a chat
    if (currentChat) {
      const userMessage: CopilotMessage = {
        id: crypto.randomUUID(),
        role: 'user',
        content: query,
        timestamp: new Date().toISOString(),
      }

      const assistantMessage: CopilotMessage = {
        id: crypto.randomUUID(),
        role: 'assistant',
        content: typeof result.response === 'string' ? result.response : '[Streaming Response]',
        timestamp: new Date().toISOString(),
        citations: result.sources.map((source, index) => ({
          id: index + 1,
          title: source.title,
          url: source.url,
        })),
      }

      const updatedMessages = [...conversationHistory, userMessage, assistantMessage]

      // Generate title if this is the first message
      let updatedTitle = currentChat.title ?? undefined
      if (!updatedTitle && conversationHistory.length === 0) {
        updatedTitle = await generateChatTitle(query)
      }

      // Update the chat in database
      await updateChat(currentChat.id, session.user.id, {
        title: updatedTitle,
        messages: updatedMessages,
      })

      logger.info(`[${requestId}] Updated chat ${currentChat.id} with new docs messages`)
    }

    return NextResponse.json({
      success: true,
      response: result.response,
      sources: result.sources,
      chatId: currentChat?.id,
      metadata: {
        requestId,
        chunksFound: result.sources.length,
        query,
        topSimilarity: result.sources[0]?.similarity,
        provider,
        model,
      },
    })
  } catch (error) {
    if (error instanceof z.ZodError) {
      return NextResponse.json(
        { error: 'Invalid request data', details: error.errors },
        { status: 400 }
      )
    }

    logger.error(`[${requestId}] Copilot docs error:`, error)
    return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
  }
}
@@ -25,6 +25,7 @@ const SendMessageSchema = z.object({
   message: z.string().min(1, 'Message is required'),
   chatId: z.string().optional(),
   workflowId: z.string().optional(),
+  mode: z.enum(['ask', 'agent']).optional().default('ask'),
   createNewChat: z.boolean().optional().default(false),
   stream: z.boolean().optional().default(false),
 })
@@ -90,7 +91,8 @@ export async function POST(req: NextRequest) {
 
   try {
     const body = await req.json()
-    const { message, chatId, workflowId, createNewChat, stream } = SendMessageSchema.parse(body)
+    const { message, chatId, workflowId, mode, createNewChat, stream } =
+      SendMessageSchema.parse(body)
 
     const session = await getSession()
     if (!session?.user?.id) {
@@ -100,6 +102,7 @@ export async function POST(req: NextRequest) {
     logger.info(`[${requestId}] Copilot message: "${message}"`, {
       chatId,
       workflowId,
+      mode,
       createNewChat,
       stream,
       userId: session.user.id,
@@ -110,6 +113,7 @@ export async function POST(req: NextRequest) {
       message,
       chatId,
       workflowId,
+      mode,
       createNewChat,
       stream,
       userId: session.user.id,

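The new `mode` field defaults to 'ask', so callers only need to set it when opting into agent mode. A hedged sketch of a request body using the schema above; the endpoint path is not shown in these hunks and is an assumption, as is the workflow ID.

```ts
// Sketch of a send-message payload; field names mirror SendMessageSchema.
const payload = {
  message: 'Add an API block after the starter block',
  workflowId: 'workflow-123', // hypothetical ID
  mode: 'agent' as const, // 'ask' (default) or 'agent'
  createNewChat: true,
  stream: true,
}
await fetch('/api/copilot/chat', {
  // assumed route
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(payload),
})
```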
@@ -36,7 +36,7 @@ export async function POST(
 ): Promise<NextResponse<DocsSearchSuccessResponse | DocsSearchErrorResponse>> {
   try {
     const requestBody: DocsSearchRequest = await request.json()
-    const { query, topK = 5 } = requestBody
+    const { query, topK = 10 } = requestBody
 
     if (!query) {
       const errorResponse: DocsSearchErrorResponse = {

apps/sim/app/api/tools/edit-workflow/route.ts (Normal file, 412 lines)
@@ -0,0 +1,412 @@
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { autoLayoutWorkflow } from '@/lib/autolayout/service'
import { createLogger } from '@/lib/logs/console-logger'
import {
  loadWorkflowFromNormalizedTables,
  saveWorkflowToNormalizedTables,
} from '@/lib/workflows/db-helpers'
import { generateWorkflowYaml } from '@/lib/workflows/yaml-generator'
import { getUserId } from '@/app/api/auth/oauth/utils'
import { getBlock } from '@/blocks'
import { db } from '@/db'
import { copilotCheckpoints, workflow as workflowTable } from '@/db/schema'
import { generateLoopBlocks, generateParallelBlocks } from '@/stores/workflows/workflow/utils'
import { convertYamlToWorkflow, parseWorkflowYaml } from '@/stores/workflows/yaml/importer'

const logger = createLogger('EditWorkflowAPI')

export async function POST(request: NextRequest) {
  const requestId = crypto.randomUUID().slice(0, 8)

  try {
    const body = await request.json()
    const { yamlContent, workflowId, description, chatId } = body

    if (!yamlContent) {
      return NextResponse.json(
        { success: false, error: 'yamlContent is required' },
        { status: 400 }
      )
    }

    if (!workflowId) {
      return NextResponse.json({ success: false, error: 'workflowId is required' }, { status: 400 })
    }

    logger.info(`[${requestId}] Processing workflow edit request`, {
      workflowId,
      yamlLength: yamlContent.length,
      hasDescription: !!description,
      hasChatId: !!chatId,
    })

    // Log the full YAML content for debugging
    logger.info(`[${requestId}] Full YAML content from copilot:`)
    logger.info('='.repeat(80))
    logger.info(yamlContent)
    logger.info('='.repeat(80))

    // Get the user ID for checkpoint creation
    const userId = await getUserId(requestId, workflowId)
    if (!userId) {
      return NextResponse.json({ success: false, error: 'User not found' }, { status: 404 })
    }

    // Create checkpoint before making changes (only if chatId is provided)
    if (chatId) {
      try {
        logger.info(`[${requestId}] Creating checkpoint before workflow edit`)

        // Get current workflow state
        const currentWorkflowData = await loadWorkflowFromNormalizedTables(workflowId)

        if (currentWorkflowData) {
          // Generate YAML from current state
          const currentYaml = generateWorkflowYaml(currentWorkflowData)

          // Create checkpoint
          await db.insert(copilotCheckpoints).values({
            userId,
            workflowId,
            chatId,
            yaml: currentYaml,
          })

          logger.info(`[${requestId}] Checkpoint created successfully`)
        } else {
          logger.warn(`[${requestId}] Could not load current workflow state for checkpoint`)
        }
      } catch (checkpointError) {
        logger.error(`[${requestId}] Failed to create checkpoint:`, checkpointError)
        // Continue with workflow edit even if checkpoint fails
      }
    }

    // Parse YAML content server-side
    const { data: yamlWorkflow, errors: parseErrors } = parseWorkflowYaml(yamlContent)

    if (!yamlWorkflow || parseErrors.length > 0) {
      logger.error('[edit-workflow] YAML parsing failed', { parseErrors })
      return NextResponse.json({
        success: true,
        data: {
          success: false,
          message: 'Failed to parse YAML workflow',
          errors: parseErrors,
          warnings: [],
        },
      })
    }

    // Convert YAML to workflow format
    const { blocks, edges, errors: convertErrors, warnings } = convertYamlToWorkflow(yamlWorkflow)

    if (convertErrors.length > 0) {
      logger.error('[edit-workflow] YAML conversion failed', { convertErrors })
      return NextResponse.json({
        success: true,
        data: {
          success: false,
          message: 'Failed to convert YAML to workflow',
          errors: convertErrors,
          warnings,
        },
      })
    }

    // Create workflow state (same format as applyWorkflowDiff)
    const newWorkflowState: any = {
      blocks: {} as Record<string, any>,
      edges: [] as any[],
      loops: {} as Record<string, any>,
      parallels: {} as Record<string, any>,
      lastSaved: Date.now(),
      isDeployed: false,
      deployedAt: undefined,
      deploymentStatuses: {} as Record<string, any>,
      hasActiveSchedule: false,
      hasActiveWebhook: false,
    }

    // Process blocks and assign new IDs (complete replacement)
    const blockIdMapping = new Map<string, string>()

    for (const block of blocks) {
      const newId = crypto.randomUUID()
      blockIdMapping.set(block.id, newId)

      // Get block configuration to set proper defaults
      const blockConfig = getBlock(block.type)
      const subBlocks: Record<string, any> = {}
      const outputs: Record<string, any> = {}

      // Set up subBlocks from block configuration
      if (blockConfig?.subBlocks) {
        blockConfig.subBlocks.forEach((subBlock) => {
          subBlocks[subBlock.id] = {
            id: subBlock.id,
            type: subBlock.type,
            value: null,
          }
        })
      }

      // Set up outputs from block configuration
      if (blockConfig?.outputs) {
        if (Array.isArray(blockConfig.outputs)) {
          blockConfig.outputs.forEach((output) => {
            outputs[output.id] = { type: output.type }
          })
        } else if (typeof blockConfig.outputs === 'object') {
          Object.assign(outputs, blockConfig.outputs)
        }
      }

      newWorkflowState.blocks[newId] = {
        id: newId,
        type: block.type,
        name: block.name,
        position: block.position,
        subBlocks,
        outputs,
        enabled: true,
        horizontalHandles: true,
        isWide: false,
        height: 0,
        data: block.data || {},
      }

      // Set input values as subblock values with block reference mapping
      if (block.inputs && typeof block.inputs === 'object') {
        Object.entries(block.inputs).forEach(([key, value]) => {
          if (newWorkflowState.blocks[newId].subBlocks[key]) {
            // Update block references in values to use new mapped IDs
            let processedValue = value
            if (typeof value === 'string' && value.includes('<') && value.includes('>')) {
              // Update block references to use new mapped IDs
              const blockMatches = value.match(/<([^>]+)>/g)
              if (blockMatches) {
                for (const match of blockMatches) {
                  const path = match.slice(1, -1)
                  const [blockRef] = path.split('.')

                  // Skip system references (start, loop, parallel, variable)
                  if (['start', 'loop', 'parallel', 'variable'].includes(blockRef.toLowerCase())) {
                    continue
                  }

                  // Check if this references an old block ID that needs mapping
                  const newMappedId = blockIdMapping.get(blockRef)
                  if (newMappedId) {
                    logger.info(
                      `[${requestId}] Updating block reference: ${blockRef} -> ${newMappedId}`
                    )
                    processedValue = processedValue.replace(
                      new RegExp(`<${blockRef}\\.`, 'g'),
                      `<${newMappedId}.`
                    )
                    processedValue = processedValue.replace(
                      new RegExp(`<${blockRef}>`, 'g'),
                      `<${newMappedId}>`
                    )
                  }
                }
              }
            }
            newWorkflowState.blocks[newId].subBlocks[key].value = processedValue
          }
        })
      }
    }

    // Update parent-child relationships with mapped IDs
    logger.info(`[${requestId}] Block ID mapping:`, Object.fromEntries(blockIdMapping))
    for (const [newId, blockData] of Object.entries(newWorkflowState.blocks)) {
      const block = blockData as any
      if (block.data?.parentId) {
        logger.info(
          `[${requestId}] Found child block ${block.name} with parentId: ${block.data.parentId}`
        )
        const mappedParentId = blockIdMapping.get(block.data.parentId)
        if (mappedParentId) {
          logger.info(
            `[${requestId}] Updating parent reference: ${block.data.parentId} -> ${mappedParentId}`
          )
          block.data.parentId = mappedParentId
          // Ensure extent is set for child blocks
          if (!block.data.extent) {
            block.data.extent = 'parent'
          }
        } else {
          logger.error(
            `[${requestId}] ❌ Parent block not found for mapping: ${block.data.parentId}`
          )
          logger.error(`[${requestId}] Available mappings:`, Array.from(blockIdMapping.keys()))
          // Remove invalid parent reference
          block.data.parentId = undefined
          block.data.extent = undefined
        }
      }
    }

    // Process edges with mapped IDs
    for (const edge of edges) {
      const sourceId = blockIdMapping.get(edge.source)
      const targetId = blockIdMapping.get(edge.target)

      if (sourceId && targetId) {
        newWorkflowState.edges.push({
          id: crypto.randomUUID(),
          source: sourceId,
          target: targetId,
          sourceHandle: edge.sourceHandle,
          targetHandle: edge.targetHandle,
          type: edge.type || 'default',
        })
      }
    }

    // Generate loop and parallel configurations from the imported blocks
    const loops = generateLoopBlocks(newWorkflowState.blocks)
    const parallels = generateParallelBlocks(newWorkflowState.blocks)

    // Update workflow state with generated configurations
    newWorkflowState.loops = loops
    newWorkflowState.parallels = parallels

    logger.info(`[${requestId}] Generated loop and parallel configurations`, {
      loopsCount: Object.keys(loops).length,
      parallelsCount: Object.keys(parallels).length,
      loopIds: Object.keys(loops),
      parallelIds: Object.keys(parallels),
    })

    // Apply intelligent autolayout to optimize block positions
    try {
      logger.info(
        `[${requestId}] Applying autolayout to ${Object.keys(newWorkflowState.blocks).length} blocks`
      )

      const layoutedBlocks = await autoLayoutWorkflow(
        newWorkflowState.blocks,
        newWorkflowState.edges,
        {
          strategy: 'smart',
          direction: 'auto',
          spacing: {
            horizontal: 400,
            vertical: 200,
            layer: 600,
          },
          alignment: 'center',
          padding: {
            x: 200,
            y: 200,
          },
        }
      )

      // Update workflow state with optimized positions
      newWorkflowState.blocks = layoutedBlocks

      logger.info(`[${requestId}] Autolayout completed successfully`)
    } catch (layoutError) {
      // Log the error but don't fail the entire workflow save
      logger.warn(`[${requestId}] Autolayout failed, using original positions:`, layoutError)
    }

    // Save directly to database using the same function as the workflow state API
    const saveResult = await saveWorkflowToNormalizedTables(workflowId, newWorkflowState)

    if (!saveResult.success) {
      logger.error('[edit-workflow] Failed to save workflow state:', saveResult.error)
      return NextResponse.json({
        success: true,
        data: {
          success: false,
          message: `Database save failed: ${saveResult.error || 'Unknown error'}`,
          errors: [saveResult.error || 'Database save failed'],
          warnings,
        },
      })
    }

    // Update workflow's lastSynced timestamp
    await db
      .update(workflowTable)
      .set({
        lastSynced: new Date(),
        updatedAt: new Date(),
        state: saveResult.jsonBlob, // Also update JSON blob for backward compatibility
      })
      .where(eq(workflowTable.id, workflowId))

    // Notify the socket server to tell clients to rehydrate stores from database
    try {
      const socketUrl = process.env.SOCKET_URL || 'http://localhost:3002'
      await fetch(`${socketUrl}/api/copilot-workflow-edit`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          workflowId,
          description: description || 'Copilot edited workflow',
        }),
      })
      logger.info('[edit-workflow] Notified socket server to rehydrate client stores from database')
    } catch (socketError) {
      // Don't fail the main request if socket notification fails
      logger.warn('[edit-workflow] Failed to notify socket server:', socketError)
    }

    // Calculate summary with loop/parallel information
    const loopBlocksCount = Object.values(newWorkflowState.blocks).filter(
      (b: any) => b.type === 'loop'
    ).length
    const parallelBlocksCount = Object.values(newWorkflowState.blocks).filter(
      (b: any) => b.type === 'parallel'
    ).length

    let summaryDetails = `Successfully created workflow with ${blocks.length} blocks and ${edges.length} connections.`

    if (loopBlocksCount > 0 || parallelBlocksCount > 0) {
      summaryDetails += ` Generated ${Object.keys(loops).length} loop configurations and ${Object.keys(parallels).length} parallel configurations.`
    }

    const result = {
      success: true,
      errors: [],
      warnings,
      summary: summaryDetails,
    }

    logger.info('[edit-workflow] Import result', {
      success: result.success,
      errorCount: result.errors.length,
      warningCount: result.warnings.length,
      summary: result.summary,
    })

    return NextResponse.json({
      success: true,
      data: {
        success: result.success,
        message: result.success
          ? `Workflow updated successfully${description ? `: ${description}` : ''}`
          : 'Failed to update workflow',
        summary: result.summary,
        errors: result.errors,
        warnings: result.warnings,
      },
    })
  } catch (error) {
    logger.error('[edit-workflow] Error:', error)
    return NextResponse.json(
      {
        success: false,
        error: `Failed to edit workflow: ${error instanceof Error ? error.message : 'Unknown error'}`,
      },
      { status: 500 }
    )
  }
}
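A minimal sketch of invoking this edit endpoint. The body fields match the handler's destructuring above; the workflow/chat IDs and the YAML snippet are illustrative placeholders.

```ts
// Apply a copilot-generated YAML edit to an existing workflow.
const res = await fetch('/api/tools/edit-workflow', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    workflowId: 'workflow-123', // hypothetical ID
    yamlContent: 'blocks:\n  start:\n    type: starter\n', // illustrative YAML
    description: 'Add starter block',
    chatId: 'chat-456', // optional; presence triggers checkpoint creation
  }),
})
// Note the envelope: HTTP-level success with a nested per-operation status.
const { data } = await res.json() // data.success, data.summary, data.errors, data.warnings
```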
apps/sim/app/api/tools/get-all-blocks/route.ts (Normal file, 66 lines)
@@ -0,0 +1,66 @@
import { type NextRequest, NextResponse } from 'next/server'
import { createLogger } from '@/lib/logs/console-logger'
import { registry as blockRegistry } from '@/blocks/registry'

const logger = createLogger('GetAllBlocksAPI')

export async function POST(request: NextRequest) {
  try {
    const body = await request.json()
    const { includeDetails = false, filterCategory } = body

    logger.info('Getting all blocks and tools', { includeDetails, filterCategory })

    // Create mapping of block_id -> [tool_ids]
    const blockToToolsMapping: Record<string, string[]> = {}

    // Process blocks - filter out hidden blocks and map to their tools
    Object.entries(blockRegistry)
      .filter(([blockType, blockConfig]) => {
        // Filter out hidden blocks
        if (blockConfig.hideFromToolbar) return false

        // Apply category filter if specified
        if (filterCategory && blockConfig.category !== filterCategory) return false

        return true
      })
      .forEach(([blockType, blockConfig]) => {
        // Get the tools for this block
        const blockTools = blockConfig.tools?.access || []
        blockToToolsMapping[blockType] = blockTools
      })

    const totalBlocks = Object.keys(blockRegistry).length
    const includedBlocks = Object.keys(blockToToolsMapping).length
    const filteredBlocksCount = totalBlocks - includedBlocks

    // Log block to tools mapping for debugging
    const blockToolsInfo = Object.entries(blockToToolsMapping)
      .map(([blockType, tools]) => `${blockType}: [${tools.join(', ')}]`)
      .sort()

    logger.info(`Successfully mapped ${includedBlocks} blocks to their tools`, {
      totalBlocks,
      includedBlocks,
      filteredBlocks: filteredBlocksCount,
      filterCategory,
      blockToolsMapping: blockToolsInfo,
      outputMapping: blockToToolsMapping,
    })

    return NextResponse.json({
      success: true,
      data: blockToToolsMapping,
    })
  } catch (error) {
    logger.error('Get all blocks failed', error)
    return NextResponse.json(
      {
        success: false,
        error: `Failed to get blocks and tools: ${error instanceof Error ? error.message : 'Unknown error'}`,
      },
      { status: 500 }
    )
  }
}
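A hedged sketch of consuming this endpoint; the category value passed in the filter is illustrative, not a known registry category.

```ts
// Fetch the block -> tool-IDs mapping, optionally filtered by category.
const res = await fetch('/api/tools/get-all-blocks', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ filterCategory: 'tools' }), // category value is illustrative
})
const { data } = await res.json() // e.g. { agent: ['tool_a', 'tool_b'], ... } (shape per handler)
```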
apps/sim/app/api/tools/get-blocks-metadata/route.ts (Normal file, 239 lines)
@@ -0,0 +1,239 @@
import { existsSync, readFileSync } from 'fs'
import { join } from 'path'
import { type NextRequest, NextResponse } from 'next/server'
import { createLogger } from '@/lib/logs/console-logger'
import { registry as blockRegistry } from '@/blocks/registry'
import { tools as toolsRegistry } from '@/tools/registry'

const logger = createLogger('GetBlockMetadataAPI')

// Core blocks that have documentation with YAML schemas
const CORE_BLOCKS_WITH_DOCS = [
  'agent',
  'function',
  'api',
  'condition',
  'loop',
  'parallel',
  'response',
  'router',
  'evaluator',
  'webhook',
]

// Mapping for blocks that have different doc file names
const DOCS_FILE_MAPPING: Record<string, string> = {
  webhook: 'webhook_trigger',
}

// Helper function to read YAML schema from dedicated YAML documentation files
function getYamlSchemaFromDocs(blockType: string): string | null {
  try {
    const docFileName = DOCS_FILE_MAPPING[blockType] || blockType
    // Read from the new YAML documentation structure
    const yamlDocsPath = join(
      process.cwd(),
      '..',
      'docs/content/docs/yaml/blocks',
      `${docFileName}.mdx`
    )

    if (!existsSync(yamlDocsPath)) {
      logger.warn(`YAML schema file not found for ${blockType} at ${yamlDocsPath}`)
      return null
    }

    const content = readFileSync(yamlDocsPath, 'utf-8')

    // Remove the frontmatter and return the content after the title
    const contentWithoutFrontmatter = content.replace(/^---[\s\S]*?---\s*/, '')
    return contentWithoutFrontmatter.trim()
  } catch (error) {
    logger.warn(`Failed to read YAML schema for ${blockType}:`, error)
    return null
  }
}

export async function POST(request: NextRequest) {
  try {
    const body = await request.json()
    const { blockIds } = body

    if (!blockIds || !Array.isArray(blockIds)) {
      return NextResponse.json(
        {
          success: false,
          error: 'blockIds must be an array of block IDs',
        },
        { status: 400 }
      )
    }

    logger.info('Getting block metadata', {
      blockIds,
      blockCount: blockIds.length,
      requestedBlocks: blockIds.join(', '),
    })

    // Create result object
    const result: Record<string, any> = {}

    for (const blockId of blockIds) {
      const blockConfig = blockRegistry[blockId]

      if (!blockConfig) {
        logger.warn(`Block not found: ${blockId}`)
        continue
      }

      // Always include code schemas from block configuration
      const codeSchemas = {
        inputs: blockConfig.inputs,
        outputs: blockConfig.outputs,
        subBlocks: blockConfig.subBlocks,
      }

      // Check if this is a core block with YAML documentation
      if (CORE_BLOCKS_WITH_DOCS.includes(blockId)) {
        // For core blocks, return both YAML schema from documentation AND code schemas
        const yamlSchema = getYamlSchemaFromDocs(blockId)

        if (yamlSchema) {
          result[blockId] = {
            type: 'block',
            description: blockConfig.description || '',
            longDescription: blockConfig.longDescription,
            category: blockConfig.category || '',
            yamlSchema: yamlSchema,
            docsLink: blockConfig.docsLink,
            // Include actual schemas from code
            codeSchemas: codeSchemas,
          }
        } else {
          // Fallback to regular metadata if YAML schema not found
          result[blockId] = {
            type: 'block',
            description: blockConfig.description || '',
            longDescription: blockConfig.longDescription,
            category: blockConfig.category || '',
            inputs: blockConfig.inputs,
            outputs: blockConfig.outputs,
            subBlocks: blockConfig.subBlocks,
            // Include actual schemas from code
            codeSchemas: codeSchemas,
          }
        }
      } else {
        // For tool blocks, return tool schema information AND code schemas
        const blockTools = blockConfig.tools?.access || []
        const toolSchemas: Record<string, any> = {}

        for (const toolId of blockTools) {
          const toolConfig = toolsRegistry[toolId]
          if (toolConfig) {
            toolSchemas[toolId] = {
              id: toolConfig.id,
              name: toolConfig.name,
              description: toolConfig.description || '',
              version: toolConfig.version,
              params: toolConfig.params,
              request: toolConfig.request
                ? {
                    method: toolConfig.request.method,
                    url: toolConfig.request.url,
                    headers:
                      typeof toolConfig.request.headers === 'function'
                        ? 'function'
                        : toolConfig.request.headers,
                    isInternalRoute: toolConfig.request.isInternalRoute,
                  }
                : undefined,
            }
          } else {
            logger.warn(`Tool not found: ${toolId} for block: ${blockId}`)
            toolSchemas[toolId] = {
              id: toolId,
              description: 'Tool not found',
            }
          }
        }

        result[blockId] = {
          type: 'tool',
          description: blockConfig.description || '',
          longDescription: blockConfig.longDescription,
          category: blockConfig.category || '',
          inputs: blockConfig.inputs,
          outputs: blockConfig.outputs,
          subBlocks: blockConfig.subBlocks,
          toolSchemas: toolSchemas,
          // Include actual schemas from code
          codeSchemas: codeSchemas,
        }
      }
    }

    const processedBlocks = Object.keys(result).length
    const requestedBlocks = blockIds.length
    const notFoundBlocks = requestedBlocks - processedBlocks

    // Log detailed output for debugging
    Object.entries(result).forEach(([blockId, blockData]) => {
      if (blockData.type === 'block' && blockData.yamlSchema) {
        logger.info(`Retrieved YAML schema + code schemas for core block: ${blockId}`, {
          blockId,
          type: blockData.type,
          description: blockData.description,
          yamlSchemaLength: blockData.yamlSchema.length,
          yamlSchemaPreview: `${blockData.yamlSchema.substring(0, 200)}...`,
          hasCodeSchemas: !!blockData.codeSchemas,
          codeSubBlocksCount: blockData.codeSchemas?.subBlocks?.length || 0,
        })
      } else if (blockData.type === 'tool' && blockData.toolSchemas) {
        const toolIds = Object.keys(blockData.toolSchemas)
        logger.info(`Retrieved tool schemas + code schemas for tool block: ${blockId}`, {
          blockId,
          type: blockData.type,
          description: blockData.description,
          toolCount: toolIds.length,
          toolIds: toolIds,
          hasCodeSchemas: !!blockData.codeSchemas,
          codeSubBlocksCount: blockData.codeSchemas?.subBlocks?.length || 0,
        })
      } else {
        logger.info(`Retrieved metadata + code schemas for block: ${blockId}`, {
          blockId,
          type: blockData.type,
          description: blockData.description,
          hasInputs: !!blockData.inputs,
          hasOutputs: !!blockData.outputs,
          hasSubBlocks: !!blockData.subBlocks,
          hasCodeSchemas: !!blockData.codeSchemas,
          codeSubBlocksCount: blockData.codeSchemas?.subBlocks?.length || 0,
        })
      }
    })

    logger.info(`Successfully processed ${processedBlocks} block metadata`, {
      requestedBlocks,
      processedBlocks,
      notFoundBlocks,
      coreBlocks: blockIds.filter((id) => CORE_BLOCKS_WITH_DOCS.includes(id)),
      toolBlocks: blockIds.filter((id) => !CORE_BLOCKS_WITH_DOCS.includes(id)),
    })

    return NextResponse.json({
      success: true,
      data: result,
    })
  } catch (error) {
    logger.error('Get block metadata failed', error)
    return NextResponse.json(
      {
        success: false,
        error: `Failed to get block metadata: ${error instanceof Error ? error.message : 'Unknown error'}`,
      },
      { status: 500 }
    )
  }
}
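A hedged sketch of requesting metadata for a mix of block kinds. 'agent' and 'loop' are core blocks per CORE_BLOCKS_WITH_DOCS above; 'gmail' stands in for a hypothetical tool block that may or may not exist in the registry.

```ts
// Request metadata for several blocks in one call.
const res = await fetch('/api/tools/get-blocks-metadata', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ blockIds: ['agent', 'loop', 'gmail'] }),
})
const { data } = await res.json()
// Core blocks carry a yamlSchema pulled from the docs; every entry carries codeSchemas.
// Unknown block IDs are silently skipped by the handler, so check for presence.
```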
@@ -21,7 +21,7 @@ export async function POST(request: NextRequest) {
     )
   }
 
-  logger.info('Fetching workflow for YAML generation', { workflowId })
+  logger.info('Fetching user workflow', { workflowId })
 
   // Fetch workflow from database
   const [workflowRecord] = await db
@@ -190,9 +190,9 @@ export async function POST(request: NextRequest) {
     }
   }
 
-  logger.info('Successfully generated workflow YAML', {
+  logger.info('Successfully fetched user workflow YAML', {
     workflowId,
-    blockCount: response.blockCount,
+    blockCount: response.summary.blockCount,
     yamlLength: yaml.length,
   })
 
apps/sim/app/api/tools/get-yaml-structure/route.ts (Normal file, 25 lines)
@@ -0,0 +1,25 @@
import { type NextRequest, NextResponse } from 'next/server'
import { YAML_WORKFLOW_PROMPT } from '../../../../lib/copilot/prompts'

export async function POST(request: NextRequest) {
  try {
    console.log('[get-yaml-structure] API endpoint called')

    return NextResponse.json({
      success: true,
      data: {
        guide: YAML_WORKFLOW_PROMPT,
        message: 'Complete YAML workflow syntax guide with examples and best practices',
      },
    })
  } catch (error) {
    console.error('[get-yaml-structure] Error:', error)
    return NextResponse.json(
      {
        success: false,
        error: 'Failed to get YAML structure guide',
      },
      { status: 500 }
    )
  }
}
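This endpoint takes no parameters and returns the static syntax guide. A minimal sketch, assuming a same-origin call:

```ts
// Fetch the YAML workflow syntax guide (the YAML_WORKFLOW_PROMPT text).
const res = await fetch('/api/tools/get-yaml-structure', { method: 'POST' })
const { data } = await res.json()
console.log(data.guide)
```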
apps/sim/app/api/workflows/[id]/autolayout/route.ts (Normal file, 223 lines)
@@ -0,0 +1,223 @@
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { autoLayoutWorkflow } from '@/lib/autolayout/service'
import { createLogger } from '@/lib/logs/console-logger'
import { getUserEntityPermissions } from '@/lib/permissions/utils'
import {
  loadWorkflowFromNormalizedTables,
  saveWorkflowToNormalizedTables,
} from '@/lib/workflows/db-helpers'
import { db } from '@/db'
import { workflow as workflowTable } from '@/db/schema'

const logger = createLogger('AutoLayoutAPI')

const AutoLayoutRequestSchema = z.object({
  strategy: z
    .enum(['smart', 'hierarchical', 'layered', 'force-directed'])
    .optional()
    .default('smart'),
  direction: z.enum(['horizontal', 'vertical', 'auto']).optional().default('auto'),
  spacing: z
    .object({
      horizontal: z.number().min(100).max(1000).optional().default(400),
      vertical: z.number().min(50).max(500).optional().default(200),
      layer: z.number().min(200).max(1200).optional().default(600),
    })
    .optional()
    .default({}),
  alignment: z.enum(['start', 'center', 'end']).optional().default('center'),
  padding: z
    .object({
      x: z.number().min(50).max(500).optional().default(200),
      y: z.number().min(50).max(500).optional().default(200),
    })
    .optional()
    .default({}),
})

type AutoLayoutRequest = z.infer<typeof AutoLayoutRequestSchema>

/**
 * POST /api/workflows/[id]/autolayout
 * Apply autolayout to an existing workflow
 */
export async function POST(request: NextRequest, { params }: { params: Promise<{ id: string }> }) {
  const requestId = crypto.randomUUID().slice(0, 8)
  const startTime = Date.now()
  const { id: workflowId } = await params

  try {
    // Get the session
    const session = await getSession()
    if (!session?.user?.id) {
      logger.warn(`[${requestId}] Unauthorized autolayout attempt for workflow ${workflowId}`)
      return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
    }

    const userId = session.user.id

    // Parse request body
    const body = await request.json()
    const layoutOptions = AutoLayoutRequestSchema.parse(body)

    logger.info(`[${requestId}] Processing autolayout request for workflow ${workflowId}`, {
      strategy: layoutOptions.strategy,
      direction: layoutOptions.direction,
      userId,
    })

    // Fetch the workflow to check ownership/access
    const workflowData = await db
      .select()
      .from(workflowTable)
      .where(eq(workflowTable.id, workflowId))
      .then((rows) => rows[0])

    if (!workflowData) {
      logger.warn(`[${requestId}] Workflow ${workflowId} not found for autolayout`)
      return NextResponse.json({ error: 'Workflow not found' }, { status: 404 })
    }

    // Check if user has permission to update this workflow
    let canUpdate = false

    // Case 1: User owns the workflow
    if (workflowData.userId === userId) {
      canUpdate = true
    }

    // Case 2: Workflow belongs to a workspace and user has write or admin permission
    if (!canUpdate && workflowData.workspaceId) {
      const userPermission = await getUserEntityPermissions(
        userId,
        'workspace',
        workflowData.workspaceId
      )
      if (userPermission === 'write' || userPermission === 'admin') {
        canUpdate = true
      }
    }

    if (!canUpdate) {
      logger.warn(
        `[${requestId}] User ${userId} denied permission to autolayout workflow ${workflowId}`
      )
      return NextResponse.json({ error: 'Access denied' }, { status: 403 })
    }

    // Load current workflow state
    const currentWorkflowData = await loadWorkflowFromNormalizedTables(workflowId)

    if (!currentWorkflowData) {
      logger.error(`[${requestId}] Could not load workflow ${workflowId} for autolayout`)
      return NextResponse.json({ error: 'Could not load workflow data' }, { status: 500 })
    }

    // Apply autolayout
    logger.info(
      `[${requestId}] Applying autolayout to ${Object.keys(currentWorkflowData.blocks).length} blocks`
    )

    const layoutedBlocks = await autoLayoutWorkflow(
      currentWorkflowData.blocks,
      currentWorkflowData.edges,
      {
        strategy: layoutOptions.strategy,
        direction: layoutOptions.direction,
        spacing: {
          horizontal: layoutOptions.spacing?.horizontal || 400,
          vertical: layoutOptions.spacing?.vertical || 200,
          layer: layoutOptions.spacing?.layer || 600,
        },
        alignment: layoutOptions.alignment,
        padding: {
          x: layoutOptions.padding?.x || 200,
          y: layoutOptions.padding?.y || 200,
        },
      }
    )

    // Create updated workflow state
    const updatedWorkflowState = {
      ...currentWorkflowData,
      blocks: layoutedBlocks,
      lastSaved: Date.now(),
    }

    // Save to database
    const saveResult = await saveWorkflowToNormalizedTables(workflowId, updatedWorkflowState)

    if (!saveResult.success) {
      logger.error(`[${requestId}] Failed to save autolayout results:`, saveResult.error)
      return NextResponse.json(
        { error: 'Failed to save autolayout results', details: saveResult.error },
        { status: 500 }
      )
    }

    // Update workflow's lastSynced timestamp
    await db
      .update(workflowTable)
      .set({
        lastSynced: new Date(),
        updatedAt: new Date(),
        state: saveResult.jsonBlob,
      })
      .where(eq(workflowTable.id, workflowId))

    // Notify the socket server to tell clients about the autolayout update
    try {
      const socketUrl = process.env.SOCKET_URL || 'http://localhost:3002'
      await fetch(`${socketUrl}/api/workflow-updated`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ workflowId }),
      })
      logger.info(`[${requestId}] Notified socket server of autolayout update`)
    } catch (socketError) {
      logger.warn(`[${requestId}] Failed to notify socket server:`, socketError)
    }

    const elapsed = Date.now() - startTime
    const blockCount = Object.keys(layoutedBlocks).length

    logger.info(`[${requestId}] Autolayout completed successfully in ${elapsed}ms`, {
      blockCount,
      strategy: layoutOptions.strategy,
      workflowId,
    })

    return NextResponse.json({
      success: true,
      message: `Autolayout applied successfully to ${blockCount} blocks`,
      data: {
        strategy: layoutOptions.strategy,
        direction: layoutOptions.direction,
        blockCount,
        elapsed: `${elapsed}ms`,
      },
    })
  } catch (error) {
    const elapsed = Date.now() - startTime

    if (error instanceof z.ZodError) {
      logger.warn(`[${requestId}] Invalid autolayout request data`, { errors: error.errors })
      return NextResponse.json(
        { error: 'Invalid request data', details: error.errors },
        { status: 400 }
      )
    }

    logger.error(`[${requestId}] Autolayout failed after ${elapsed}ms:`, error)
    return NextResponse.json(
      {
        error: 'Autolayout failed',
        details: error instanceof Error ? error.message : 'Unknown error',
      },
      { status: 500 }
    )
  }
}
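A minimal sketch of triggering autolayout. All body fields are optional; Zod fills the defaults shown in AutoLayoutRequestSchema above. The workflow ID is a placeholder, and a same-origin session is assumed.

```ts
// Trigger a smart autolayout with custom spacing.
const workflowId = 'workflow-123' // hypothetical ID
const res = await fetch(`/api/workflows/${workflowId}/autolayout`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    strategy: 'smart',
    direction: 'auto',
    spacing: { horizontal: 400, vertical: 200 },
  }),
})
const result = await res.json() // { success, message, data: { blockCount, elapsed, ... } }
```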
@@ -31,7 +31,7 @@ const BlockDataSchema = z.object({
 const SubBlockStateSchema = z.object({
   id: z.string(),
   type: z.string(),
-  value: z.union([z.string(), z.number(), z.array(z.array(z.string())), z.null()]),
+  value: z.any(),
 })
 
 const BlockOutputSchema = z.any()
 
apps/sim/app/api/workflows/[id]/yaml/route.ts (Normal file, 538 lines)
@@ -0,0 +1,538 @@
|
||||
import { eq } from 'drizzle-orm'
|
||||
import { type NextRequest, NextResponse } from 'next/server'
|
||||
import { z } from 'zod'
|
||||
import { autoLayoutWorkflow } from '@/lib/autolayout/service'
|
||||
import { createLogger } from '@/lib/logs/console-logger'
|
||||
import { getUserEntityPermissions } from '@/lib/permissions/utils'
|
||||
import {
|
||||
loadWorkflowFromNormalizedTables,
|
||||
saveWorkflowToNormalizedTables,
|
||||
} from '@/lib/workflows/db-helpers'
|
||||
import { generateWorkflowYaml } from '@/lib/workflows/yaml-generator'
|
||||
import { getUserId as getOAuthUserId } from '@/app/api/auth/oauth/utils'
|
||||
import { getBlock } from '@/blocks'
|
||||
import { resolveOutputType } from '@/blocks/utils'
|
||||
import { db } from '@/db'
|
||||
import { copilotCheckpoints, workflow as workflowTable } from '@/db/schema'
|
||||
import { generateLoopBlocks, generateParallelBlocks } from '@/stores/workflows/workflow/utils'
|
||||
import { convertYamlToWorkflow, parseWorkflowYaml } from '@/stores/workflows/yaml/importer'
|
||||
|
||||
const logger = createLogger('WorkflowYamlAPI')
|
||||
|
||||
// Request schema for YAML workflow operations
|
||||
const YamlWorkflowRequestSchema = z.object({
|
||||
yamlContent: z.string().min(1, 'YAML content is required'),
|
||||
description: z.string().optional(),
|
||||
chatId: z.string().optional(), // For copilot checkpoints
|
||||
source: z.enum(['copilot', 'import', 'editor']).default('editor'),
|
||||
applyAutoLayout: z.boolean().default(true),
|
||||
createCheckpoint: z.boolean().default(false),
|
||||
})
|
||||
|
||||
type YamlWorkflowRequest = z.infer<typeof YamlWorkflowRequestSchema>
|
||||
|
||||
/**
|
||||
* Helper function to create a checkpoint before workflow changes
|
||||
*/
|
||||
async function createWorkflowCheckpoint(
|
||||
userId: string,
|
||||
workflowId: string,
|
||||
chatId: string,
|
||||
requestId: string
|
||||
): Promise<boolean> {
|
||||
try {
|
||||
logger.info(`[${requestId}] Creating checkpoint before workflow edit`)
|
||||
|
||||
// Get current workflow state
|
||||
const currentWorkflowData = await loadWorkflowFromNormalizedTables(workflowId)
|
||||
|
||||
if (currentWorkflowData) {
|
||||
// Generate YAML from current state
|
||||
const currentYaml = generateWorkflowYaml(currentWorkflowData)
|
||||
|
||||
// Create checkpoint
|
||||
await db.insert(copilotCheckpoints).values({
|
||||
userId,
|
||||
workflowId,
|
||||
chatId,
|
||||
yaml: currentYaml,
|
||||
})
|
||||
|
||||
logger.info(`[${requestId}] Checkpoint created successfully`)
|
||||
return true
|
||||
}
|
||||
logger.warn(`[${requestId}] Could not load current workflow state for checkpoint`)
|
||||
return false
|
||||
} catch (error) {
|
||||
logger.error(`[${requestId}] Failed to create checkpoint:`, error)
|
||||
return false
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Helper function to get user ID with proper authentication for both tool calls and direct requests
|
||||
*/
|
||||
async function getUserId(requestId: string, workflowId: string): Promise<string | null> {
|
||||
// Use the OAuth utils function that handles both session and workflow-based auth
|
||||
const userId = await getOAuthUserId(requestId, workflowId)
|
||||
|
||||
if (!userId) {
|
||||
logger.warn(`[${requestId}] Could not determine user ID for workflow ${workflowId}`)
|
||||
return null
|
||||
}
|
||||
|
||||
// For additional security, verify the user has permission to access this workflow
|
||||
const workflowData = await db
|
||||
.select()
|
||||
.from(workflowTable)
|
||||
.where(eq(workflowTable.id, workflowId))
|
||||
.then((rows) => rows[0])
|
||||
|
||||
if (!workflowData) {
|
||||
logger.warn(`[${requestId}] Workflow ${workflowId} not found`)
|
||||
return null
|
||||
}
|
||||
|
||||
// Check if user has permission to update this workflow
|
||||
let canUpdate = false
|
||||
|
||||
// Case 1: User owns the workflow
|
||||
if (workflowData.userId === userId) {
|
||||
canUpdate = true
|
||||
}
|
||||
|
||||
// Case 2: Workflow belongs to a workspace and user has write or admin permission
|
||||
if (!canUpdate && workflowData.workspaceId) {
|
||||
try {
|
||||
const userPermission = await getUserEntityPermissions(
|
||||
userId,
|
||||
'workspace',
|
||||
workflowData.workspaceId
|
||||
)
|
||||
if (userPermission === 'write' || userPermission === 'admin') {
|
||||
canUpdate = true
|
||||
}
|
||||
} catch (error) {
|
||||
logger.warn(`[${requestId}] Error checking workspace permissions:`, error)
|
||||
}
|
||||
}
|
||||
|
||||
if (!canUpdate) {
|
||||
logger.warn(`[${requestId}] User ${userId} denied permission to update workflow ${workflowId}`)
|
||||
return null
|
||||
}
|
||||
|
||||
return userId
|
||||
}
|
||||
|
||||
/**
|
||||
* Helper function to update block references in values with new mapped IDs
|
||||
*/
|
||||
function updateBlockReferences(
|
||||
value: any,
|
||||
blockIdMapping: Map<string, string>,
|
||||
requestId: string
|
||||
): any {
|
||||
if (typeof value === 'string' && value.includes('<') && value.includes('>')) {
|
||||
let processedValue = value
|
||||
const blockMatches = value.match(/<([^>]+)>/g)
|
||||
|
||||
if (blockMatches) {
|
||||
for (const match of blockMatches) {
|
||||
const path = match.slice(1, -1)
|
||||
const [blockRef] = path.split('.')
|
||||
|
||||
// Skip system references (start, loop, parallel, variable)
|
||||
if (['start', 'loop', 'parallel', 'variable'].includes(blockRef.toLowerCase())) {
|
||||
continue
|
||||
}
|
||||
|
||||
// Check if this references an old block ID that needs mapping
|
||||
const newMappedId = blockIdMapping.get(blockRef)
|
||||
if (newMappedId) {
|
||||
logger.info(`[${requestId}] Updating block reference: ${blockRef} -> ${newMappedId}`)
|
||||
processedValue = processedValue.replace(
|
||||
new RegExp(`<${blockRef}\\.`, 'g'),
|
||||
`<${newMappedId}.`
|
||||
)
|
||||
processedValue = processedValue.replace(
|
||||
new RegExp(`<${blockRef}>`, 'g'),
|
||||
`<${newMappedId}>`
|
||||
)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return processedValue
|
||||
}
|
||||
|
||||
// Handle arrays
|
||||
if (Array.isArray(value)) {
|
||||
return value.map((item) => updateBlockReferences(item, blockIdMapping, requestId))
|
||||
}
|
||||
|
||||
// Handle objects
|
||||
if (value !== null && typeof value === 'object') {
|
||||
const result = { ...value }
|
||||
for (const key in result) {
|
||||
result[key] = updateBlockReferences(result[key], blockIdMapping, requestId)
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
||||
return value
|
||||
}
|
||||

/**
 * PUT /api/workflows/[id]/yaml
 * Consolidated YAML workflow saving endpoint
 * Handles copilot edits, imports, and text editor saves
 */
export async function PUT(request: NextRequest, { params }: { params: Promise<{ id: string }> }) {
  const requestId = crypto.randomUUID().slice(0, 8)
  const startTime = Date.now()
  const { id: workflowId } = await params

  try {
    // Parse and validate request
    const body = await request.json()
    const { yamlContent, description, chatId, source, applyAutoLayout, createCheckpoint } =
      YamlWorkflowRequestSchema.parse(body)

    logger.info(`[${requestId}] Processing ${source} YAML workflow save`, {
      workflowId,
      yamlLength: yamlContent.length,
      hasDescription: !!description,
      hasChatId: !!chatId,
      applyAutoLayout,
      createCheckpoint,
    })

    // Get and validate user
    const userId = await getUserId(requestId, workflowId)
    if (!userId) {
      return NextResponse.json({ error: 'Unauthorized or workflow not found' }, { status: 403 })
    }

    // Create checkpoint if requested (typically for copilot)
    if (createCheckpoint && chatId) {
      await createWorkflowCheckpoint(userId, workflowId, chatId, requestId)
    }

    // Parse YAML content
    const { data: yamlWorkflow, errors: parseErrors } = parseWorkflowYaml(yamlContent)

    if (!yamlWorkflow || parseErrors.length > 0) {
      logger.error(`[${requestId}] YAML parsing failed`, { parseErrors })
      return NextResponse.json({
        success: false,
        message: 'Failed to parse YAML workflow',
        errors: parseErrors,
        warnings: [],
      })
    }

    // Convert YAML to workflow format
    const { blocks, edges, errors: convertErrors, warnings } = convertYamlToWorkflow(yamlWorkflow)

    if (convertErrors.length > 0) {
      logger.error(`[${requestId}] YAML conversion failed`, { convertErrors })
      return NextResponse.json({
        success: false,
        message: 'Failed to convert YAML to workflow',
        errors: convertErrors,
        warnings,
      })
    }

    // Create workflow state
    const newWorkflowState: any = {
      blocks: {} as Record<string, any>,
      edges: [] as any[],
      loops: {} as Record<string, any>,
      parallels: {} as Record<string, any>,
      lastSaved: Date.now(),
      isDeployed: false,
      deployedAt: undefined,
      deploymentStatuses: {} as Record<string, any>,
      hasActiveSchedule: false,
      hasActiveWebhook: false,
    }

    // Process blocks with proper configuration setup and assign new IDs
    const blockIdMapping = new Map<string, string>()

    for (const block of blocks) {
      const newId = crypto.randomUUID()
      blockIdMapping.set(block.id, newId)

      // Get block configuration for proper setup
      const blockConfig = getBlock(block.type)

      if (!blockConfig && (block.type === 'loop' || block.type === 'parallel')) {
        // Handle loop/parallel blocks (they don't have regular block configs)
        newWorkflowState.blocks[newId] = {
          id: newId,
          type: block.type,
          name: block.name,
          position: block.position,
          subBlocks: {},
          outputs: {},
          enabled: true,
          horizontalHandles: true,
          isWide: false,
          height: 0,
          data: block.data || {},
        }
        logger.debug(`[${requestId}] Processed loop/parallel block: ${block.id} -> ${newId}`)
      } else if (blockConfig) {
        // Handle regular blocks with proper configuration
        const subBlocks: Record<string, any> = {}

        // Set up subBlocks from block configuration
        blockConfig.subBlocks.forEach((subBlock) => {
          subBlocks[subBlock.id] = {
            id: subBlock.id,
            type: subBlock.type,
            value: null,
          }
        })

        // Also ensure we have subBlocks for any YAML inputs that might not be in the config
        // This handles cases where hidden fields or dynamic configurations exist
        Object.keys(block.inputs).forEach((inputKey) => {
          if (!subBlocks[inputKey]) {
            subBlocks[inputKey] = {
              id: inputKey,
              type: 'short-input', // Default type for dynamic inputs
              value: null,
            }
          }
        })

        // Set up outputs from block configuration
        const outputs = resolveOutputType(blockConfig.outputs)

        newWorkflowState.blocks[newId] = {
          id: newId,
          type: block.type,
          name: block.name,
          position: block.position,
          subBlocks,
          outputs,
          enabled: true,
          horizontalHandles: true,
          isWide: false,
          height: 0,
          data: block.data || {},
        }

        logger.debug(`[${requestId}] Processed regular block: ${block.id} -> ${newId}`)
      } else {
        logger.warn(`[${requestId}] Unknown block type: ${block.type}`)
      }
    }

    // Set input values as subblock values with block reference mapping
    for (const block of blocks) {
      const newId = blockIdMapping.get(block.id)
      if (!newId || !newWorkflowState.blocks[newId]) continue

      if (block.inputs && typeof block.inputs === 'object') {
        Object.entries(block.inputs).forEach(([key, value]) => {
          if (newWorkflowState.blocks[newId].subBlocks[key]) {
            // Update block references in values to use new mapped IDs
            const processedValue = updateBlockReferences(value, blockIdMapping, requestId)
            newWorkflowState.blocks[newId].subBlocks[key].value = processedValue
          }
        })
      }
    }

    // Update parent-child relationships with mapped IDs
    logger.info(`[${requestId}] Block ID mapping:`, Object.fromEntries(blockIdMapping))
    for (const [newId, blockData] of Object.entries(newWorkflowState.blocks)) {
      const block = blockData as any
      if (block.data?.parentId) {
        logger.info(
          `[${requestId}] Found child block ${block.name} with parentId: ${block.data.parentId}`
        )
        const mappedParentId = blockIdMapping.get(block.data.parentId)
        if (mappedParentId) {
          logger.info(
            `[${requestId}] Updating parent reference: ${block.data.parentId} -> ${mappedParentId}`
          )
          block.data.parentId = mappedParentId
          // Ensure extent is set for child blocks
          if (!block.data.extent) {
            block.data.extent = 'parent'
          }
        } else {
          logger.error(
            `[${requestId}] ❌ Parent block not found for mapping: ${block.data.parentId}`
          )
          logger.error(`[${requestId}] Available mappings:`, Array.from(blockIdMapping.keys()))
          // Remove invalid parent reference
          block.data.parentId = undefined
          block.data.extent = undefined
        }
      }
    }

    // Process edges with mapped IDs and handles
    for (const edge of edges) {
      const sourceId = blockIdMapping.get(edge.source)
      const targetId = blockIdMapping.get(edge.target)

      if (sourceId && targetId) {
        const newEdgeId = crypto.randomUUID()
        newWorkflowState.edges.push({
          id: newEdgeId,
          source: sourceId,
          target: targetId,
          sourceHandle: edge.sourceHandle,
          targetHandle: edge.targetHandle,
          type: edge.type || 'default',
        })
      } else {
        logger.warn(
          `[${requestId}] Skipping edge - missing blocks: ${edge.source} -> ${edge.target}`
        )
      }
    }

    // Generate loop and parallel configurations
    const loops = generateLoopBlocks(newWorkflowState.blocks)
    const parallels = generateParallelBlocks(newWorkflowState.blocks)
    newWorkflowState.loops = loops
    newWorkflowState.parallels = parallels

    logger.info(`[${requestId}] Generated workflow state`, {
      blocksCount: Object.keys(newWorkflowState.blocks).length,
      edgesCount: newWorkflowState.edges.length,
      loopsCount: Object.keys(loops).length,
      parallelsCount: Object.keys(parallels).length,
    })

    // Apply intelligent autolayout if requested
    if (applyAutoLayout) {
      try {
        logger.info(`[${requestId}] Applying autolayout`)

        const layoutedBlocks = await autoLayoutWorkflow(
          newWorkflowState.blocks,
          newWorkflowState.edges,
          {
            strategy: 'smart',
            direction: 'auto',
            spacing: {
              horizontal: 400,
              vertical: 200,
              layer: 600,
            },
            alignment: 'center',
            padding: {
              x: 200,
              y: 200,
            },
          }
        )

        newWorkflowState.blocks = layoutedBlocks
        logger.info(`[${requestId}] Autolayout completed successfully`)
      } catch (layoutError) {
        logger.warn(`[${requestId}] Autolayout failed, using original positions:`, layoutError)
      }
    }

    // Save to database
    const saveResult = await saveWorkflowToNormalizedTables(workflowId, newWorkflowState)

    if (!saveResult.success) {
      logger.error(`[${requestId}] Failed to save workflow state:`, saveResult.error)
      return NextResponse.json({
        success: false,
        message: `Database save failed: ${saveResult.error || 'Unknown error'}`,
        errors: [saveResult.error || 'Database save failed'],
        warnings,
      })
    }

    // Update workflow's lastSynced timestamp
    await db
      .update(workflowTable)
      .set({
        lastSynced: new Date(),
        updatedAt: new Date(),
        state: saveResult.jsonBlob,
      })
      .where(eq(workflowTable.id, workflowId))

    // Notify socket server for real-time collaboration (for copilot and editor)
    if (source === 'copilot' || source === 'editor') {
      try {
        const socketUrl = process.env.SOCKET_URL || 'http://localhost:3002'
        await fetch(`${socketUrl}/api/copilot-workflow-edit`, {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({
            workflowId,
            description: description || `${source} edited workflow`,
          }),
        })
        logger.info(`[${requestId}] Notified socket server`)
      } catch (socketError) {
        logger.warn(`[${requestId}] Failed to notify socket server:`, socketError)
      }
    }

    const elapsed = Date.now() - startTime
    const totalBlocksInWorkflow = Object.keys(newWorkflowState.blocks).length
    const summary = `Successfully saved workflow with ${totalBlocksInWorkflow} blocks and ${newWorkflowState.edges.length} connections.`

    logger.info(`[${requestId}] YAML workflow save completed in ${elapsed}ms`, {
      success: true,
      blocksCount: totalBlocksInWorkflow,
      edgesCount: newWorkflowState.edges.length,
    })

    return NextResponse.json({
      success: true,
      message: description ? `Workflow updated: ${description}` : 'Workflow updated successfully',
      summary,
      data: {
        blocksCount: totalBlocksInWorkflow,
        edgesCount: newWorkflowState.edges.length,
        loopsCount: Object.keys(loops).length,
        parallelsCount: Object.keys(parallels).length,
      },
      errors: [],
      warnings,
    })
  } catch (error) {
    const elapsed = Date.now() - startTime
    logger.error(`[${requestId}] YAML workflow save failed in ${elapsed}ms:`, error)

    if (error instanceof z.ZodError) {
      return NextResponse.json(
        {
          success: false,
          message: 'Invalid request data',
          errors: error.errors.map((e) => `${e.path.join('.')}: ${e.message}`),
          warnings: [],
        },
        { status: 400 }
      )
    }

    return NextResponse.json(
      {
        success: false,
        message: `Failed to save YAML workflow: ${error instanceof Error ? error.message : 'Unknown error'}`,
        errors: [error instanceof Error ? error.message : 'Unknown error'],
        warnings: [],
      },
      { status: 500 }
    )
  }
}
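
// Illustrative request sketch (not part of the commit): how a client might call the
// endpoint above. The payload fields mirror YamlWorkflowRequestSchema as consumed by
// the handler; the concrete values here are hypothetical:
//
//   const res = await fetch(`/api/workflows/${workflowId}/yaml`, {
//     method: 'PUT',
//     headers: { 'Content-Type': 'application/json' },
//     body: JSON.stringify({
//       yamlContent,              // the serialized workflow YAML
//       source: 'editor',         // the handler branches on 'copilot' | 'editor'
//       applyAutoLayout: true,    // runs the 'smart' auto-layout pass before saving
//       createCheckpoint: false,  // only takes effect together with a chatId
//     }),
//   })
//   const { success, summary, errors, warnings } = await res.json()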
@@ -3,7 +3,9 @@
import { memo, useMemo, useState } from 'react'
import { Check, Copy } from 'lucide-react'
import { Button } from '@/components/ui/button'
import { ToolCallCompletion, ToolCallExecution } from '@/components/ui/tool-call'
import { Tooltip, TooltipContent, TooltipProvider, TooltipTrigger } from '@/components/ui/tooltip'
import { parseMessageContent, stripToolCallIndicators } from '@/lib/tool-call-parser'
import MarkdownRenderer from './components/markdown-renderer'

export interface ChatMessage {
@@ -31,6 +33,22 @@ export const ClientChatMessage = memo(
    return typeof message.content === 'object' && message.content !== null
  }, [message.content])

  // Parse message content to separate text and tool calls (only for assistant messages)
  const parsedContent = useMemo(() => {
    if (message.type === 'assistant' && typeof message.content === 'string') {
      return parseMessageContent(message.content)
    }
    return null
  }, [message.type, message.content])

  // Get clean text content without tool call indicators
  const cleanTextContent = useMemo(() => {
    if (message.type === 'assistant' && typeof message.content === 'string') {
      return stripToolCallIndicators(message.content)
    }
    return message.content
  }, [message.type, message.content])

  // For user messages (on the right)
  if (message.type === 'user') {
    return (
@@ -56,18 +74,58 @@
  return (
    <div className='px-4 pt-5 pb-2' data-message-id={message.id}>
      <div className='mx-auto max-w-3xl'>
        <div className='flex flex-col'>
          <div>
            <div className='break-words text-base'>
              {isJsonObject ? (
                <pre className='text-gray-800 dark:text-gray-100'>
                  {JSON.stringify(message.content, null, 2)}
                </pre>
              ) : (
                <EnhancedMarkdownRenderer content={message.content as string} />
              )}
          <div className='flex flex-col space-y-3'>
            {/* Inline content rendering - tool calls and text in order */}
            {parsedContent?.inlineContent && parsedContent.inlineContent.length > 0 ? (
              <div className='space-y-2'>
                {parsedContent.inlineContent.map((item, index) => {
                  if (item.type === 'tool_call' && item.toolCall) {
                    const toolCall = item.toolCall
                    return (
                      <div key={`${toolCall.id}-${index}`}>
                        {toolCall.state === 'detecting' && (
                          <div className='flex items-center gap-2 rounded-lg border border-blue-200 bg-blue-50 px-3 py-2 text-sm dark:border-blue-800 dark:bg-blue-950'>
                            <div className='h-4 w-4 animate-spin rounded-full border-2 border-blue-600 border-t-transparent dark:border-blue-400' />
                            <span className='text-blue-800 dark:text-blue-200'>
                              Detecting {toolCall.displayName || toolCall.name}...
                            </span>
                          </div>
                        )}
                        {toolCall.state === 'executing' && (
                          <ToolCallExecution toolCall={toolCall} isCompact={true} />
                        )}
                        {(toolCall.state === 'completed' || toolCall.state === 'error') && (
                          <ToolCallCompletion toolCall={toolCall} isCompact={true} />
                        )}
                      </div>
                    )
                  }
                  if (item.type === 'text' && item.content.trim()) {
                    return (
                      <div key={`text-${index}`}>
                        <div className='break-words text-base'>
                          <EnhancedMarkdownRenderer content={item.content} />
                        </div>
                      </div>
                    )
                  }
                  return null
                })}
              </div>
            </div>
            ) : (
              /* Fallback for empty content or no inline content */
              <div>
                <div className='break-words text-base'>
                  {isJsonObject ? (
                    <pre className='text-gray-800 dark:text-gray-100'>
                      {JSON.stringify(cleanTextContent, null, 2)}
                    </pre>
                  ) : (
                    <EnhancedMarkdownRenderer content={cleanTextContent as string} />
                  )}
                </div>
              </div>
            )}
            {message.type === 'assistant' && !isJsonObject && !message.isInitialMessage && (
              <div className='flex items-center justify-start space-x-2'>
                {/* Copy Button - Only show when not streaming */}
@@ -80,7 +138,11 @@
                  size='sm'
                  className='flex items-center gap-1.5 px-2 py-1'
                  onClick={() => {
                    navigator.clipboard.writeText(message.content as string)
                    const contentToCopy =
                      typeof cleanTextContent === 'string'
                        ? cleanTextContent
                        : JSON.stringify(cleanTextContent, null, 2)
                    navigator.clipboard.writeText(contentToCopy)
                    setIsCopied(true)
                    setTimeout(() => setIsCopied(false), 2000)
                  }}
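
// Sketch of the parsed-content shape the renderers above (and the copilot message
// component later in this commit) rely on, inferred from how parseMessageContent is
// used. The authoritative types live in '@/lib/tool-call-parser'; treat this as an
// assumption, not the committed API:
//
//   interface ParsedMessageContent {
//     inlineContent: Array<{
//       type: 'text' | 'tool_call'
//       content: string
//       toolCall?: {
//         id: string
//         name: string
//         displayName?: string
//         state: 'detecting' | 'executing' | 'completed' | 'error'
//       }
//     }>
//   }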
@@ -46,6 +46,7 @@ import {
  useKeyboardShortcuts,
} from '../../../hooks/use-keyboard-shortcuts'
import { useWorkflowExecution } from '../../hooks/use-workflow-execution'
import { WorkflowTextEditorModal } from '../workflow-text-editor/workflow-text-editor-modal'
import { DeploymentControls } from './components/deployment-controls/deployment-controls'
import { ExportControls } from './components/export-controls/export-controls'
import { TemplateModal } from './components/template-modal/template-modal'
@@ -508,6 +509,36 @@ export function ControlBar({ hasValidationErrors = false }: ControlBarProps) {
    )
  }

  /**
   * Render YAML editor button
   */
  const renderYamlEditorButton = () => {
    const canEdit = userPermissions.canEdit
    const isDisabled = isExecuting || isDebugging || !canEdit

    const getTooltipText = () => {
      if (!canEdit) return 'Admin permission required to edit YAML'
      if (isDebugging) return 'Cannot edit YAML while debugging'
      if (isExecuting) return 'Cannot edit YAML while workflow is running'
      return 'Edit workflow as YAML/JSON'
    }

    return (
      <Tooltip>
        <TooltipTrigger asChild>
          <WorkflowTextEditorModal
            disabled={isDisabled}
            className={cn(
              'h-12 w-12 rounded-[11px] border bg-card text-card-foreground shadow-xs',
              isDisabled ? 'cursor-not-allowed opacity-50' : 'cursor-pointer hover:bg-secondary'
            )}
          />
        </TooltipTrigger>
        <TooltipContent>{getTooltipText()}</TooltipContent>
      </Tooltip>
    )
  }

  /**
   * Render auto-layout button
   */
@@ -943,6 +974,7 @@ export function ControlBar({ hasValidationErrors = false }: ControlBarProps) {
      {renderDisconnectionNotice()}
      {renderToggleButton()}
      {isExpanded && <ExportControls />}
      {isExpanded && renderYamlEditorButton()}
      {isExpanded && renderAutoLayoutButton()}
      {isExpanded && renderDuplicateButton()}
      {renderDeleteButton()}
@@ -0,0 +1,156 @@
'use client'

import { useEffect } from 'react'
import { formatDistanceToNow } from 'date-fns'
import { AlertCircle, History, RotateCcw } from 'lucide-react'
import { Button } from '@/components/ui/button'
import { ScrollArea } from '@/components/ui/scroll-area'
import { Separator } from '@/components/ui/separator'
import { useCopilotStore } from '@/stores/copilot/store'

export function CheckpointPanel() {
  const {
    currentChat,
    checkpoints,
    isLoadingCheckpoints,
    isRevertingCheckpoint,
    checkpointError,
    loadCheckpoints,
    revertToCheckpoint: revertToCheckpointAction,
    clearCheckpointError,
  } = useCopilotStore()

  // Load checkpoints when chat changes
  useEffect(() => {
    if (currentChat?.id) {
      loadCheckpoints(currentChat.id)
    }
  }, [currentChat?.id, loadCheckpoints])

  if (!currentChat) {
    return (
      <div className='p-4 text-center text-muted-foreground'>
        <History className='mx-auto mb-2 h-8 w-8' />
        <p>No chat selected</p>
      </div>
    )
  }

  if (isLoadingCheckpoints) {
    return (
      <div className='p-4 text-center text-muted-foreground'>
        <div className='mx-auto mb-2 h-6 w-6 animate-spin rounded-full border-2 border-gray-300 border-t-gray-600' />
        <p>Loading checkpoints...</p>
      </div>
    )
  }

  if (checkpointError) {
    return (
      <div className='p-4'>
        <div className='mb-3 flex items-center gap-2 text-red-600'>
          <AlertCircle className='h-4 w-4' />
          <span className='font-medium text-sm'>Error loading checkpoints</span>
        </div>
        <p className='mb-3 text-muted-foreground text-xs'>{checkpointError}</p>
        <Button
          size='sm'
          variant='outline'
          onClick={() => {
            clearCheckpointError()
            loadCheckpoints(currentChat.id)
          }}
        >
          Retry
        </Button>
      </div>
    )
  }

  if (checkpoints.length === 0) {
    return (
      <div className='p-4 text-center text-muted-foreground'>
        <History className='mx-auto mb-2 h-8 w-8' />
        <p className='text-sm'>No checkpoints yet</p>
        <p className='mt-1 text-xs'>
          Checkpoints are created automatically when the agent edits your workflow
        </p>
      </div>
    )
  }

  const handleRevert = async (checkpointId: string) => {
    if (
      window.confirm(
        'Are you sure you want to revert to this checkpoint? This will replace your current workflow.'
      )
    ) {
      await revertToCheckpointAction(checkpointId)
    }
  }

  return (
    <div className='flex h-full flex-col'>
      <div className='border-b p-4'>
        <div className='flex items-center gap-2'>
          <History className='h-4 w-4' />
          <h3 className='font-medium text-sm'>Workflow Checkpoints</h3>
        </div>
        <p className='mt-1 text-muted-foreground text-xs'>
          Restore your workflow to a previous state
        </p>
      </div>

      <ScrollArea className='flex-1'>
        <div className='p-2'>
          {checkpoints.map((checkpoint, index) => (
            <div key={checkpoint.id} className='mb-2'>
              <div className='rounded-lg border bg-card p-3 transition-colors hover:bg-accent/50'>
                <div className='flex items-start justify-between gap-2'>
                  <div className='min-w-0 flex-1'>
                    <div className='mb-1 flex items-center gap-2'>
                      <div className='h-2 w-2 rounded-full bg-purple-500' />
                      <span className='font-medium text-muted-foreground text-xs'>
                        Checkpoint {checkpoints.length - index}
                      </span>
                    </div>
                    <p className='text-muted-foreground text-xs'>
                      {formatDistanceToNow(new Date(checkpoint.createdAt), { addSuffix: true })}
                    </p>
                    <p className='mt-1 text-muted-foreground text-xs'>
                      {new Date(checkpoint.createdAt).toLocaleDateString()} at{' '}
                      {new Date(checkpoint.createdAt).toLocaleTimeString([], {
                        hour: '2-digit',
                        minute: '2-digit',
                      })}
                    </p>
                  </div>
                  <Button
                    size='sm'
                    variant='ghost'
                    className='h-6 px-2 text-xs'
                    onClick={() => handleRevert(checkpoint.id)}
                    disabled={isRevertingCheckpoint}
                  >
                    <RotateCcw className='mr-1 h-3 w-3' />
                    Revert
                  </Button>
                </div>
              </div>
              {index < checkpoints.length - 1 && <Separator className='mt-2' />}
            </div>
          ))}
        </div>
      </ScrollArea>

      {isRevertingCheckpoint && (
        <div className='border-t bg-muted/30 p-3'>
          <div className='flex items-center gap-2 text-muted-foreground text-sm'>
            <div className='h-4 w-4 animate-spin rounded-full border-2 border-gray-300 border-t-gray-600' />
            Reverting workflow...
          </div>
        </div>
      )}
    </div>
  )
}
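
// Sketch of the copilot store slice the panel above relies on, inferred from usage.
// The actual definition lives in '@/stores/copilot/store'; the field types here are
// assumptions for illustration, not the committed shape:
//
//   interface CheckpointSlice {
//     currentChat: CopilotChat | null
//     checkpoints: Array<{ id: string; createdAt: string }>
//     isLoadingCheckpoints: boolean
//     isRevertingCheckpoint: boolean
//     checkpointError: string | null
//     loadCheckpoints: (chatId: string) => Promise<void>
//     revertToCheckpoint: (checkpointId: string) => Promise<void>
//     clearCheckpointError: () => void
//   }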
@@ -1,10 +1,10 @@
'use client'

import { type KeyboardEvent, useEffect, useRef } from 'react'
import { useEffect, useRef, useState } from 'react'
import {
  ArrowUp,
  Bot,
  ChevronDown,
  History,
  MessageSquarePlus,
  MoreHorizontal,
  Trash2,
@@ -17,126 +17,34 @@ import {
  DropdownMenuItem,
  DropdownMenuTrigger,
} from '@/components/ui/dropdown-menu'
import { Input } from '@/components/ui/input'
import type { CopilotChat } from '@/lib/copilot-api'
import type { CopilotChat } from '@/lib/copilot/api'
import { createLogger } from '@/lib/logs/console-logger'
import type { CopilotMessage } from '@/stores/copilot/types'
import { CheckpointPanel } from '../checkpoint-panel'
import { ProfessionalInput } from '../professional-input/professional-input'
import { ProfessionalMessage } from '../professional-message/professional-message'
import { CopilotWelcome } from '../welcome/welcome'

const logger = createLogger('CopilotModal')

interface Message {
  id: string
  content: string
  type: 'user' | 'assistant'
  timestamp: Date
  citations?: Array<{
    id: number
    title: string
    url: string
  }>
}

interface CopilotModalMessage {
  message: Message
}

// Modal-specific message component
function ModalCopilotMessage({ message }: CopilotModalMessage) {
  const renderMarkdown = (text: string) => {
    let processedText = text

    // Process markdown links: [text](url)
    processedText = processedText.replace(
      /\[([^\]]+)\]\(([^)]+)\)/g,
      '<a href="$2" target="_blank" rel="noopener noreferrer" class="text-blue-600 hover:text-blue-800 font-semibold underline transition-colors">$1</a>'
    )

    // Handle code blocks
    processedText = processedText.replace(
      /```(\w+)?\n([\s\S]*?)\n```/g,
      '<pre class="bg-muted rounded-md p-3 my-2 overflow-x-auto"><code class="text-sm">$2</code></pre>'
    )

    // Handle inline code
    processedText = processedText.replace(
      /`([^`]+)`/g,
      '<code class="bg-muted px-1 rounded text-sm">$1</code>'
    )

    // Handle headers
    processedText = processedText.replace(
      /^### (.*$)/gm,
      '<h3 class="text-lg font-semibold mt-4 mb-2">$1</h3>'
    )
    processedText = processedText.replace(
      /^## (.*$)/gm,
      '<h2 class="text-xl font-semibold mt-4 mb-2">$1</h2>'
    )
    processedText = processedText.replace(
      /^# (.*$)/gm,
      '<h1 class="text-2xl font-bold mt-4 mb-2">$1</h1>'
    )

    // Handle bold
    processedText = processedText.replace(/\*\*(.*?)\*\*/g, '<strong>$1</strong>')

    // Handle lists
    processedText = processedText.replace(/^- (.*$)/gm, '<li class="ml-4">• $1</li>')

    // Handle line breaks (reduce spacing)
    processedText = processedText.replace(/\n\n+/g, '</p><p class="mt-2">')
    processedText = processedText.replace(/\n/g, '<br>')

    return processedText
  }

  // For user messages (on the right)
  if (message.type === 'user') {
    return (
      <div className='px-4 py-5'>
        <div className='mx-auto max-w-3xl'>
          <div className='flex justify-end'>
            <div className='max-w-[80%] rounded-3xl bg-[#F4F4F4] px-4 py-3 shadow-sm dark:bg-primary/10'>
              <div className='whitespace-pre-wrap break-words text-[#0D0D0D] text-base leading-relaxed dark:text-white'>
                {message.content}
              </div>
            </div>
          </div>
        </div>
      </div>
    )
  }

  // For assistant messages (on the left)
  return (
    <div className='px-4 py-5'>
      <div className='mx-auto max-w-3xl'>
        <div className='flex'>
          <div className='max-w-[80%]'>
            <div
              className='prose prose-sm dark:prose-invert max-w-none whitespace-pre-wrap break-words text-base leading-normal'
              dangerouslySetInnerHTML={{ __html: renderMarkdown(message.content) }}
            />
          </div>
        </div>
      </div>
    </div>
  )
}

interface CopilotModalProps {
  open: boolean
  onOpenChange: (open: boolean) => void
  copilotMessage: string
  setCopilotMessage: (message: string) => void
  messages: Message[]
  messages: CopilotMessage[]
  onSendMessage: (message: string) => Promise<void>
  isLoading: boolean
  isLoadingChats: boolean
  // Chat management props
  chats: CopilotChat[]
  currentChat: CopilotChat | null
  onSelectChat: (chat: CopilotChat) => void
  onStartNewChat: () => void
  onDeleteChat: (chatId: string) => void
  // Mode props
  mode: 'ask' | 'agent'
  onModeChange: (mode: 'ask' | 'agent') => void
}

export function CopilotModal({
@@ -147,15 +55,22 @@ export function CopilotModal({
  messages,
  onSendMessage,
  isLoading,
  isLoadingChats,
  chats,
  currentChat,
  onSelectChat,
  onStartNewChat,
  onDeleteChat,
  mode,
  onModeChange,
}: CopilotModalProps) {
  const messagesEndRef = useRef<HTMLDivElement>(null)
  const messagesContainerRef = useRef<HTMLDivElement>(null)
  const inputRef = useRef<HTMLInputElement>(null)
  const [isDropdownOpen, setIsDropdownOpen] = useState(false)
  const [showCheckpoints, setShowCheckpoints] = useState(false)

  // Fixed sidebar width for copilot modal positioning
  const sidebarWidth = 240 // w-60 (sidebar width from staging)

  // Auto-scroll to bottom when new messages are added
  useEffect(() => {
@@ -164,42 +79,13 @@
    }
  }, [messages])

  // Focus input when modal opens
  useEffect(() => {
    if (open && inputRef.current) {
      inputRef.current.focus()
    }
  }, [open])

  // Handle send message
  const handleSendMessage = async () => {
    if (!copilotMessage.trim() || isLoading) return

    try {
      await onSendMessage(copilotMessage.trim())
      setCopilotMessage('')

      // Ensure input stays focused
      if (inputRef.current) {
        inputRef.current.focus()
      }
    } catch (error) {
      logger.error('Failed to send message', error)
    }
  }

  // Handle key press
  const handleKeyPress = (e: KeyboardEvent<HTMLInputElement>) => {
    if (e.key === 'Enter' && !e.shiftKey) {
      e.preventDefault()
      handleSendMessage()
    }
  }

  if (!open) return null

  return (
    <div className='fixed inset-0 z-[100] flex flex-col bg-background'>
    <div
      className='fixed inset-y-0 right-0 z-[100] flex flex-col bg-background'
      style={{ left: `${sidebarWidth}px` }}
    >
      <style jsx>{`
        @keyframes growShrink {
          0%,
@@ -215,159 +101,220 @@
        }
      `}</style>

      {/* Header with chat title, management, and close button */}
      <div className='flex items-center justify-between border-b px-4 py-3'>
        <div className='flex flex-1 items-center gap-2'>
          {/* Chat Title Dropdown */}
          <DropdownMenu>
            <DropdownMenuTrigger asChild>
              <Button variant='ghost' className='h-8 max-w-[300px] flex-1 justify-start px-3'>
                <span className='truncate'>{currentChat?.title || 'New Chat'}</span>
                <ChevronDown className='ml-2 h-4 w-4 shrink-0' />
              </Button>
            </DropdownMenuTrigger>
            <DropdownMenuContent align='start' className='z-[110] w-64' sideOffset={8}>
              {chats.map((chat) => (
                <div key={chat.id} className='flex items-center'>
                  <DropdownMenuItem
                    onClick={() => onSelectChat(chat)}
                    className='flex-1 cursor-pointer'
                  >
                    <div className='min-w-0 flex-1'>
                      <div className='truncate font-medium text-sm'>
                        {chat.title || 'Untitled Chat'}
                      </div>
                      <div className='text-muted-foreground text-xs'>
                        {chat.messageCount} messages •{' '}
                        {new Date(chat.updatedAt).toLocaleDateString()}
                      </div>
                    </div>
                  </DropdownMenuItem>
                  <DropdownMenu>
                    <DropdownMenuTrigger asChild>
                      <Button variant='ghost' size='sm' className='h-8 w-8 shrink-0 p-0'>
                        <MoreHorizontal className='h-4 w-4' />
                      </Button>
                    </DropdownMenuTrigger>
                    <DropdownMenuContent align='end' className='z-[120]'>
                      <DropdownMenuItem
                        onClick={() => onDeleteChat(chat.id)}
                        className='cursor-pointer text-destructive'
                      >
                        <Trash2 className='mr-2 h-4 w-4' />
                        Delete
                      </DropdownMenuItem>
                    </DropdownMenuContent>
                  </DropdownMenu>
                </div>
              ))}
            </DropdownMenuContent>
          </DropdownMenu>

          {/* New Chat Button */}
      {/* Show loading state with centered pulsing agent icon */}
      {isLoadingChats || isLoading ? (
        <div className='flex h-full items-center justify-center'>
          <div className='flex items-center justify-center'>
            <Bot className='h-16 w-16 animate-pulse text-muted-foreground' />
          </div>
        </div>
      ) : (
        <>
          {/* Close button in top right corner */}
          <Button
            variant='ghost'
            size='sm'
            onClick={onStartNewChat}
            className='h-8 w-8 p-0'
            title='New Chat'
            size='icon'
            className='absolute top-3 right-4 z-10 h-8 w-8 rounded-md hover:bg-accent/50'
            onClick={() => onOpenChange(false)}
          >
            <MessageSquarePlus className='h-4 w-4' />
            <X className='h-4 w-4' />
            <span className='sr-only'>Close</span>
          </Button>
        </div>

        <Button
          variant='ghost'
          size='icon'
          className='h-8 w-8 rounded-md hover:bg-accent/50'
          onClick={() => onOpenChange(false)}
        >
          <X className='h-4 w-4' />
          <span className='sr-only'>Close</span>
        </Button>
      </div>
          {/* Header with chat title and management */}
          <div className='border-b py-3'>
            <div className='mx-auto flex w-full max-w-3xl items-center justify-between px-4'>
              {/* Chat Title Dropdown */}
              <DropdownMenu open={isDropdownOpen} onOpenChange={setIsDropdownOpen}>
                <DropdownMenuTrigger asChild>
                  <Button
                    variant='ghost'
                    className='h-8 max-w-[300px] justify-start px-3 hover:bg-accent/50'
                  >
                    <span className='truncate'>{currentChat?.title || 'New Chat'}</span>
                    <ChevronDown className='ml-2 h-4 w-4 shrink-0' />
                  </Button>
                </DropdownMenuTrigger>
                <DropdownMenuContent
                  align='start'
                  className='z-[110] w-72 border-border/50 bg-background/95 shadow-lg backdrop-blur-sm'
                  sideOffset={8}
                  onMouseLeave={() => setIsDropdownOpen(false)}
                >
                  {isLoadingChats ? (
                    <div className='px-4 py-3 text-muted-foreground text-sm'>Loading chats...</div>
                  ) : chats.length === 0 ? (
                    <div className='px-4 py-3 text-muted-foreground text-sm'>No chats yet</div>
                  ) : (
                    // Sort chats by updated date (most recent first) for display
                    [...chats]
                      .sort(
                        (a, b) => new Date(b.updatedAt).getTime() - new Date(a.updatedAt).getTime()
                      )
                      .map((chat) => (
                        <div key={chat.id} className='group flex items-center gap-2 px-2 py-1'>
                          <DropdownMenuItem asChild>
                            <div
                              onClick={() => {
                                onSelectChat(chat)
                                setIsDropdownOpen(false)
                              }}
                              className={`min-w-0 flex-1 cursor-pointer rounded-lg px-3 py-2.5 transition-all ${
                                currentChat?.id === chat.id
                                  ? 'bg-accent/80 text-accent-foreground'
                                  : 'hover:bg-accent/40'
                              }`}
                            >
                              <div className='min-w-0'>
                                <div className='truncate font-medium text-sm leading-tight'>
                                  {chat.title || 'Untitled Chat'}
                                </div>
                                <div className='mt-0.5 truncate text-muted-foreground text-xs'>
                                  {new Date(chat.updatedAt).toLocaleDateString()} at{' '}
                                  {new Date(chat.updatedAt).toLocaleTimeString([], {
                                    hour: '2-digit',
                                    minute: '2-digit',
                                  })}{' '}
                                  • {chat.messageCount}
                                </div>
                              </div>
                            </div>
                          </DropdownMenuItem>
                          <DropdownMenu>
                            <DropdownMenuTrigger asChild>
                              <Button
                                variant='ghost'
                                size='sm'
                                className='h-7 w-7 shrink-0 p-0 hover:bg-accent/60'
                              >
                                <MoreHorizontal className='h-3.5 w-3.5' />
                              </Button>
                            </DropdownMenuTrigger>
                            <DropdownMenuContent
                              align='end'
                              className='z-[120] border-border/50 bg-background/95 shadow-lg backdrop-blur-sm'
                            >
                              <DropdownMenuItem
                                onClick={() => onDeleteChat(chat.id)}
                                className='cursor-pointer text-destructive hover:bg-destructive/10 hover:text-destructive focus:bg-destructive/10 focus:text-destructive'
                              >
                                <Trash2 className='mr-2 h-3.5 w-3.5' />
                                Delete
                              </DropdownMenuItem>
                            </DropdownMenuContent>
                          </DropdownMenu>
                        </div>
                      ))
                  )}
                </DropdownMenuContent>
              </DropdownMenu>

      {/* Messages container */}
      <div ref={messagesContainerRef} className='flex-1 overflow-y-auto'>
        <div className='mx-auto max-w-3xl'>
          {messages.length === 0 ? (
            <div className='flex h-full flex-col items-center justify-center px-4 py-10'>
              <div className='space-y-4 text-center'>
                <Bot className='mx-auto h-12 w-12 text-muted-foreground' />
                <div className='space-y-2'>
                  <h3 className='font-medium text-lg'>Welcome to Documentation Copilot</h3>
                  <p className='text-muted-foreground text-sm'>
                    Ask me anything about Sim Studio features, workflows, tools, or how to get
                    started.
                  </p>
                </div>
                <div className='mx-auto max-w-xs space-y-2 text-left'>
                  <div className='text-muted-foreground text-xs'>Try asking:</div>
                  <div className='space-y-1'>
                    <div className='rounded bg-muted/50 px-2 py-1 text-xs'>
                      "How do I create a workflow?"
                    </div>
                    <div className='rounded bg-muted/50 px-2 py-1 text-xs'>
                      "What tools are available?"
                    </div>
                    <div className='rounded bg-muted/50 px-2 py-1 text-xs'>
                      "How do I deploy my workflow?"
                    </div>
                  </div>
                </div>
              {/* Right side action buttons */}
              <div className='flex items-center gap-2'>
                {/* Checkpoint Toggle Button */}
                <Button
                  variant='ghost'
                  size='sm'
                  onClick={() => setShowCheckpoints(!showCheckpoints)}
                  className={`h-8 w-8 p-0 ${
                    showCheckpoints
                      ? 'bg-[#802FFF]/20 text-[#802FFF] hover:bg-[#802FFF]/30'
                      : 'hover:bg-accent/50'
                  }`}
                  title='View Checkpoints'
                >
                  <History className='h-4 w-4' />
                </Button>

                {/* New Chat Button */}
                <Button
                  variant='ghost'
                  size='sm'
                  onClick={onStartNewChat}
                  className='h-8 w-8 p-0'
                  title='New Chat'
                >
                  <MessageSquarePlus className='h-4 w-4' />
                </Button>
              </div>
            </div>
          </div>

          {/* Messages container or Checkpoint Panel */}
          {showCheckpoints ? (
            <div className='flex-1 overflow-hidden'>
              <CheckpointPanel />
            </div>
          ) : (
            messages.map((message) => <ModalCopilotMessage key={message.id} message={message} />)
          )}

          {/* Loading indicator (shows only when loading) */}
          {isLoading && (
            <div className='px-4 py-5'>
          <div ref={messagesContainerRef} className='flex-1 overflow-y-auto'>
            <div className='mx-auto max-w-3xl'>
              <div className='flex'>
                <div className='max-w-[80%]'>
                  <div className='flex h-6 items-center'>
                    <div className='loading-dot h-3 w-3 rounded-full bg-black dark:bg-black' />
                  </div>
                </div>
              </div>
              {messages.length === 0 ? (
                <CopilotWelcome onQuestionClick={onSendMessage} mode={mode} />
              ) : (
                messages.map((message) => (
                  <ProfessionalMessage
                    key={message.id}
                    message={message}
                    isStreaming={isLoading && message.id === messages[messages.length - 1]?.id}
                  />
                ))
              )}

              <div ref={messagesEndRef} className='h-1' />
            </div>
          </div>
          )}

          <div ref={messagesEndRef} className='h-1' />
        </div>
      </div>
          {/* Mode Selector and Input */}
          {!showCheckpoints && (
            <>
              {/* Mode Selector */}
              <div className='pt-6'>
                <div className='mx-auto max-w-3xl px-4'>
                  <div className='flex items-center gap-1 rounded-md border bg-muted/30 p-0.5'>
                    <Button
                      variant='ghost'
                      size='sm'
                      onClick={() => onModeChange('ask')}
                      className={`h-6 flex-1 font-medium text-xs ${
                        mode === 'ask'
                          ? 'bg-[#802FFF]/20 text-[#802FFF] hover:bg-[#802FFF]/30'
                          : 'hover:bg-muted/50'
                      }`}
                      title='Ask questions and get answers. Cannot edit workflows.'
                    >
                      Ask
                    </Button>
                    <Button
                      variant='ghost'
                      size='sm'
                      onClick={() => onModeChange('agent')}
                      className={`h-6 flex-1 font-medium text-xs ${
                        mode === 'agent'
                          ? 'bg-[#802FFF]/20 text-[#802FFF] hover:bg-[#802FFF]/30'
                          : 'hover:bg-muted/50'
                      }`}
                      title='Full agent with workflow editing capabilities.'
                    >
                      Agent
                    </Button>
                  </div>
                </div>
              </div>

              {/* Input area (fixed at bottom) */}
              <div className='bg-background p-4'>
                <div className='mx-auto max-w-3xl'>
                  <div className='relative rounded-2xl border bg-background shadow-sm'>
                    <Input
                      ref={inputRef}
                      value={copilotMessage}
                      onChange={(e) => setCopilotMessage(e.target.value)}
                      onKeyDown={handleKeyPress}
                      placeholder='Ask about Sim Studio documentation...'
                      className='min-h-[50px] flex-1 rounded-2xl border-0 bg-transparent py-7 pr-16 pl-6 text-base focus-visible:ring-0 focus-visible:ring-offset-0'
                      disabled={isLoading}
                    />
                    <Button
                      onClick={handleSendMessage}
                      size='icon'
                      disabled={!copilotMessage.trim() || isLoading}
                      className='-translate-y-1/2 absolute top-1/2 right-3 h-10 w-10 rounded-xl bg-black p-0 text-white hover:bg-gray-800 dark:bg-primary dark:hover:bg-primary/80'
                    >
                      <ArrowUp className='h-4 w-4 dark:text-black' />
                    </Button>
                  </div>

                  <div className='mt-2 text-center text-muted-foreground text-xs'>
                    <p>Ask questions about Sim Studio documentation and features</p>
                  </div>
                </div>
              </div>
              {/* Input area */}
              <ProfessionalInput
                onSubmit={async (message) => {
                  await onSendMessage(message)
                  setCopilotMessage('')
                }}
                disabled={false}
                isLoading={isLoading}
              />
            </>
          )}
        </>
      )}
    </div>
  )
}
@@ -0,0 +1,98 @@
'use client'

import { type FC, type KeyboardEvent, useRef, useState } from 'react'
import { ArrowUp, Loader2 } from 'lucide-react'
import { Button } from '@/components/ui/button'
import { Textarea } from '@/components/ui/textarea'
import { cn } from '@/lib/utils'

interface ProfessionalInputProps {
  onSubmit: (message: string) => void
  disabled?: boolean
  isLoading?: boolean
  placeholder?: string
  className?: string
}

const ProfessionalInput: FC<ProfessionalInputProps> = ({
  onSubmit,
  disabled = false,
  isLoading = false,
  placeholder = 'How can I help you today?',
  className,
}) => {
  const [message, setMessage] = useState('')
  const textareaRef = useRef<HTMLTextAreaElement>(null)

  const handleSubmit = () => {
    const trimmedMessage = message.trim()
    if (!trimmedMessage || disabled || isLoading) return

    onSubmit(trimmedMessage)
    setMessage('')

    // Reset textarea height
    if (textareaRef.current) {
      textareaRef.current.style.height = 'auto'
    }
  }

  const handleKeyDown = (e: KeyboardEvent<HTMLTextAreaElement>) => {
    if (e.key === 'Enter' && !e.shiftKey) {
      e.preventDefault()
      handleSubmit()
    }
  }

  const handleInputChange = (e: React.ChangeEvent<HTMLTextAreaElement>) => {
    setMessage(e.target.value)

    // Auto-resize textarea
    if (textareaRef.current) {
      textareaRef.current.style.height = 'auto'
      textareaRef.current.style.height = `${Math.min(textareaRef.current.scrollHeight, 120)}px`
    }
  }

  const canSubmit = message.trim().length > 0 && !disabled && !isLoading

  return (
    <div className={cn('w-full max-w-full overflow-hidden bg-background p-4', className)}>
      <div className='mx-auto w-full max-w-3xl'>
        <div className='relative w-full max-w-full'>
          <div className='relative flex w-full max-w-full items-end rounded-2xl border border-border bg-background shadow-sm transition-all focus-within:border-primary focus-within:ring-1 focus-within:ring-primary'>
            <Textarea
              ref={textareaRef}
              value={message}
              onChange={handleInputChange}
              onKeyDown={handleKeyDown}
              placeholder={placeholder}
              disabled={disabled || isLoading}
              className='max-h-[120px] min-h-[50px] w-full max-w-full resize-none border-0 bg-transparent px-4 py-3 pr-12 text-sm placeholder:text-muted-foreground focus-visible:ring-0 focus-visible:ring-offset-0'
              rows={1}
            />
            <Button
              onClick={handleSubmit}
              disabled={!canSubmit}
              size='icon'
              className={cn(
                'absolute right-2 bottom-2 h-8 w-8 rounded-xl transition-all',
                canSubmit
                  ? 'bg-[#802FFF] text-white shadow-sm hover:bg-[#7028E6]'
                  : 'cursor-not-allowed bg-muted text-muted-foreground'
              )}
            >
              {isLoading ? (
                <Loader2 className='h-4 w-4 animate-spin' />
              ) : (
                <ArrowUp className='h-4 w-4' />
              )}
            </Button>
          </div>
        </div>
      </div>
    </div>
  )
}

export { ProfessionalInput }
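
// Usage sketch (illustrative, not part of the commit): only onSubmit is required; the
// other props fall back to the defaults declared above. sendToCopilot is a hypothetical
// handler standing in for whatever the parent wires up:
//
//   <ProfessionalInput
//     onSubmit={(message) => sendToCopilot(message)}
//     isLoading={isLoading}
//     placeholder='Ask the copilot anything...'
//   />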
@@ -0,0 +1,403 @@
'use client'

import { type FC, memo, useMemo } from 'react'
import { Bot, Copy, User } from 'lucide-react'
import { useTheme } from 'next-themes'
import ReactMarkdown from 'react-markdown'
import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter'
import { oneDark, oneLight } from 'react-syntax-highlighter/dist/esm/styles/prism'
import remarkGfm from 'remark-gfm'
import { Button } from '@/components/ui/button'
import { ToolCallCompletion, ToolCallExecution } from '@/components/ui/tool-call'
import { parseMessageContent, stripToolCallIndicators } from '@/lib/tool-call-parser'
import type { CopilotMessage } from '@/stores/copilot/types'

interface ProfessionalMessageProps {
  message: CopilotMessage
  isStreaming?: boolean
}

const ProfessionalMessage: FC<ProfessionalMessageProps> = memo(({ message, isStreaming }) => {
  const { theme } = useTheme()
  const isUser = message.role === 'user'
  const isAssistant = message.role === 'assistant'

  const handleCopyContent = () => {
    // Copy clean text content without tool call indicators
    const contentToCopy = isAssistant ? stripToolCallIndicators(message.content) : message.content
    navigator.clipboard.writeText(contentToCopy)
  }

  const formatTimestamp = (timestamp: string) => {
    return new Date(timestamp).toLocaleTimeString([], { hour: '2-digit', minute: '2-digit' })
  }

  // Parse message content to separate text and tool calls
  const parsedContent = useMemo(() => {
    if (isAssistant && message.content) {
      return parseMessageContent(message.content)
    }
    return null
  }, [isAssistant, message.content])

  // Get clean text content without tool call indicators
  const cleanTextContent = useMemo(() => {
    if (isAssistant && message.content) {
      return stripToolCallIndicators(message.content)
    }
    return message.content
  }, [isAssistant, message.content])

  // Custom components for react-markdown
  const markdownComponents = {
    code: ({ inline, className, children, ...props }: any) => {
      const match = /language-(\w+)/.exec(className || '')
      const language = match ? match[1] : ''

      if (!inline && language) {
        return (
          <div className='group relative my-3 w-full max-w-full overflow-hidden rounded-lg border bg-muted/30'>
            <div
              className='w-full max-w-full overflow-x-auto'
              style={{ maxWidth: '100%', width: '100%' }}
            >
              <div style={{ maxWidth: '100%', overflow: 'hidden' }}>
                <SyntaxHighlighter
                  style={theme === 'dark' ? oneDark : oneLight}
                  language={language}
                  PreTag='div'
                  className='!m-0 !bg-transparent !max-w-full !w-full'
                  showLineNumbers={language !== 'bash' && language !== 'shell'}
                  wrapLines={true}
                  wrapLongLines={true}
                  customStyle={{
                    margin: '0 !important',
                    padding: '1rem',
                    fontSize: '0.875rem',
                    maxWidth: '100% !important',
                    width: '100% !important',
                    overflow: 'auto',
                    whiteSpace: 'pre-wrap',
                    wordBreak: 'break-all',
                  }}
                  codeTagProps={{
                    style: {
                      maxWidth: '100%',
                      overflow: 'hidden',
                      wordBreak: 'break-all',
                      whiteSpace: 'pre-wrap',
                    },
                  }}
                  {...props}
                >
                  {String(children).replace(/\n$/, '')}
                </SyntaxHighlighter>
              </div>
            </div>
            <Button
              variant='ghost'
              size='sm'
              className='absolute top-2 right-2 h-8 w-8 p-0 opacity-0 transition-opacity group-hover:opacity-100'
              onClick={() => navigator.clipboard.writeText(String(children))}
            >
              <Copy className='h-3 w-3' />
            </Button>
          </div>
        )
      }

      return (
        <code
          className='break-all rounded border bg-muted/80 px-1.5 py-0.5 font-mono text-sm'
          {...props}
        >
          {children}
        </code>
      )
    },
    pre: ({ children }: any) => (
      <div className='my-3 w-full max-w-full overflow-x-auto rounded-lg border bg-muted/30'>
        {children}
      </div>
    ),
    h1: ({ children }: any) => (
      <h1 className='mt-6 mb-3 break-words border-b pb-2 font-bold text-foreground text-xl'>
        {children}
      </h1>
    ),
    h2: ({ children }: any) => (
      <h2 className='mt-5 mb-2 break-words font-semibold text-foreground text-lg'>{children}</h2>
    ),
    h3: ({ children }: any) => (
      <h3 className='mt-4 mb-2 break-words font-semibold text-base text-foreground'>{children}</h3>
    ),
    p: ({ children }: any) => (
      <p className='mb-3 break-words text-foreground text-sm leading-relaxed last:mb-0'>
        {children}
      </p>
    ),
    a: ({ href, children }: any) => (
      <a
        href={href}
        target='_blank'
        rel='noopener noreferrer'
        className='break-all font-medium text-blue-600 underline decoration-blue-600/30 underline-offset-2 transition-colors hover:text-blue-800 hover:decoration-blue-600/60 dark:text-blue-400 dark:hover:text-blue-300'
      >
        {children}
      </a>
    ),
    ul: ({ children }: any) => (
      <ul className='mb-3 ml-4 list-outside list-disc space-y-1 break-words'>{children}</ul>
    ),
    ol: ({ children }: any) => (
      <ol className='mb-3 ml-4 list-outside list-decimal space-y-1 break-words'>{children}</ol>
    ),
    li: ({ children }: any) => (
      <li className='break-words text-foreground text-sm leading-relaxed'>{children}</li>
    ),
    blockquote: ({ children }: any) => (
      <blockquote className='my-3 break-words rounded-r-lg border-muted-foreground/20 border-l-4 bg-muted/30 py-2 pl-4 text-muted-foreground italic'>
        {children}
      </blockquote>
    ),
    table: ({ children }: any) => (
      <div className='my-3 w-full max-w-full overflow-x-auto rounded-lg border'>
        <table className='w-full text-sm'>{children}</table>
      </div>
    ),
    th: ({ children }: any) => (
      <th className='break-words border-b bg-muted/50 px-3 py-2 text-left font-semibold'>
        {children}
      </th>
    ),
    td: ({ children }: any) => (
      <td className='break-words border-muted/30 border-b px-3 py-2'>{children}</td>
    ),
  }

  if (isUser) {
    return (
      <div className='group flex w-full max-w-full justify-end overflow-hidden px-4 py-3'>
        <div className='flex max-w-[85%] items-start gap-3'>
          <div className='flex flex-col items-end space-y-1'>
            <div className='max-w-full overflow-hidden rounded-2xl rounded-tr-md bg-primary px-4 py-3 text-primary-foreground shadow-sm'>
              <div className='overflow-hidden whitespace-pre-wrap break-words text-sm leading-relaxed'>
                {message.content}
              </div>
            </div>
            <div className='flex items-center gap-2 opacity-0 transition-opacity group-hover:opacity-100'>
              <span className='text-muted-foreground text-xs'>
                {formatTimestamp(message.timestamp)}
              </span>
              <Button
                variant='ghost'
                size='sm'
                onClick={handleCopyContent}
                className='h-6 w-6 p-0 text-muted-foreground hover:text-foreground'
              >
                <Copy className='h-3 w-3' />
              </Button>
            </div>
          </div>
          <div className='flex h-8 w-8 items-center justify-center rounded-full bg-primary text-primary-foreground shadow-sm'>
            <User className='h-4 w-4' />
          </div>
        </div>
      </div>
    )
  }

  if (isAssistant) {
    return (
      <>
        <style>{`
          .message-container .prose pre {
            max-width: 100% !important;
            overflow-x: auto !important;
          }
          .message-container .prose code {
            max-width: 100% !important;
            word-break: break-all !important;
            white-space: pre-wrap !important;
          }
          .message-container div[class*="language-"] {
            max-width: 100% !important;
            overflow: hidden !important;
          }
          .message-container div[class*="language-"] > div {
            max-width: 100% !important;
            overflow-x: auto !important;
          }
          .message-container div[class*="language-"] pre {
            max-width: 100% !important;
            overflow-x: auto !important;
            white-space: pre-wrap !important;
            word-break: break-all !important;
          }
          .aggressive-pulse {
            animation: aggressivePulse 1s ease-in-out infinite;
          }
          @keyframes aggressivePulse {
            0%, 100% {
              opacity: 1;
              transform: scale(1);
            }
            50% {
              opacity: 0.4;
              transform: scale(0.95);
            }
          }
        `}</style>
        <div className='message-container group flex w-full max-w-full justify-start overflow-hidden px-4 py-3'>
          <div className='flex w-full max-w-[85%] flex-col'>
            {/* Main message content with icon */}
            <div className='mb-3 flex items-end gap-3'>
              {/* Bot icon aligned with bottom of message bubble */}
              <div className='flex h-8 w-8 flex-shrink-0 items-center justify-center rounded-full bg-gradient-to-br from-blue-500 to-purple-600 text-white shadow-sm'>
                <Bot className={`h-4 w-4 ${isStreaming ? 'aggressive-pulse' : ''}`} />
              </div>

              {/* Message content */}
              <div className='flex min-w-0 flex-1 flex-col items-start space-y-2'>
                {/* Inline content rendering - tool calls and text in order */}
                {parsedContent?.inlineContent && parsedContent.inlineContent.length > 0 ? (
                  <div className='w-full max-w-full space-y-2'>
                    {parsedContent.inlineContent.map((item, index) => {
                      if (item.type === 'tool_call' && item.toolCall) {
                        const toolCall = item.toolCall
                        return (
                          <div key={`${toolCall.id}-${index}`}>
                            {toolCall.state === 'detecting' && (
                              <div className='flex items-center gap-2 rounded-lg border border-blue-200 bg-blue-50 px-3 py-2 text-sm dark:border-blue-800 dark:bg-blue-950'>
                                <div className='h-4 w-4 animate-spin rounded-full border-2 border-blue-600 border-t-transparent dark:border-blue-400' />
                                <span className='text-blue-800 dark:text-blue-200'>
                                  Detecting {toolCall.displayName || toolCall.name}...
                                </span>
                              </div>
                            )}
                            {toolCall.state === 'executing' && (
                              <ToolCallExecution toolCall={toolCall} isCompact={false} />
                            )}
                            {(toolCall.state === 'completed' || toolCall.state === 'error') && (
                              <ToolCallCompletion toolCall={toolCall} isCompact={false} />
                            )}
                          </div>
                        )
                      }
                      if (item.type === 'text' && item.content.trim()) {
                        return (
                          <div
                            key={`text-${index}`}
                            className='w-full max-w-full overflow-hidden rounded-2xl rounded-tl-md border bg-muted/50 px-4 py-3 shadow-sm'
                          >
                            <div
                              className='prose prose-sm dark:prose-invert w-full max-w-none overflow-hidden'
                              style={{
                                maxWidth: '100%',
                                width: '100%',
                                overflow: 'hidden',
                                wordBreak: 'break-word',
                              }}
                            >
                              <ReactMarkdown
                                remarkPlugins={[remarkGfm]}
                                components={markdownComponents}
                              >
                                {item.content}
                              </ReactMarkdown>
                            </div>
                          </div>
                        )
                      }
                      return null
                    })}
                  </div>
                ) : (
                  /* Fallback for empty content or streaming */
                  <div className='w-full max-w-full overflow-hidden rounded-2xl rounded-tl-md border bg-muted/50 px-4 py-3 shadow-sm'>
                    {cleanTextContent ? (
                      <div
                        className='prose prose-sm dark:prose-invert w-full max-w-none overflow-hidden'
                        style={{
                          maxWidth: '100%',
                          width: '100%',
                          overflow: 'hidden',
                          wordBreak: 'break-word',
                        }}
                      >
                        <ReactMarkdown remarkPlugins={[remarkGfm]} components={markdownComponents}>
                          {cleanTextContent}
                        </ReactMarkdown>
                      </div>
                    ) : isStreaming ? (
                      <div className='flex items-center gap-2 py-1 text-muted-foreground'>
                        <div className='flex space-x-1'>
                          <div
                            className='h-2 w-2 animate-bounce rounded-full bg-current'
                            style={{ animationDelay: '0ms' }}
                          />
                          <div
                            className='h-2 w-2 animate-bounce rounded-full bg-current'
                            style={{ animationDelay: '150ms' }}
                          />
                          <div
                            className='h-2 w-2 animate-bounce rounded-full bg-current'
                            style={{ animationDelay: '300ms' }}
                          />
                        </div>
                        <span className='text-sm'>Thinking...</span>
                      </div>
                    ) : null}
                  </div>
                )}
              </div>
            </div>

            {/* Timestamp and actions - separate from main content */}
            <div className='ml-11 flex items-center gap-2 opacity-0 transition-opacity group-hover:opacity-100'>
              <span className='text-muted-foreground text-xs'>
                {formatTimestamp(message.timestamp)}
              </span>
              {cleanTextContent && (
                <Button
                  variant='ghost'
                  size='sm'
                  onClick={handleCopyContent}
                  className='h-6 w-6 p-0 text-muted-foreground hover:text-foreground'
                >
                  <Copy className='h-3 w-3' />
                </Button>
              )}
            </div>

            {/* Citations if available */}
            {message.citations && message.citations.length > 0 && (
              <div className='mt-2 ml-11 max-w-full space-y-1'>
                <div className='font-medium text-muted-foreground text-xs'>Sources:</div>
                <div className='flex flex-wrap gap-1'>
                  {message.citations.map((citation) => (
                    <a
                      key={citation.id}
                      href={citation.url}
                      target='_blank'
                      rel='noopener noreferrer'
                      className='inline-flex max-w-full items-center break-all rounded-md border bg-muted/50 px-2 py-1 text-muted-foreground text-xs transition-colors hover:bg-muted hover:text-foreground'
                    >
                      {citation.title}
                    </a>
                  ))}
                </div>
              </div>
            )}
          </div>
        </div>
      </>
    )
  }

  return null
})

ProfessionalMessage.displayName = 'ProfessionalMessage'

export { ProfessionalMessage }
@@ -0,0 +1,58 @@
'use client'

import { Bot } from 'lucide-react'

interface CopilotWelcomeProps {
onQuestionClick?: (question: string) => void
mode?: 'ask' | 'agent'
}

export function CopilotWelcome({ onQuestionClick, mode = 'ask' }: CopilotWelcomeProps) {
const askQuestions = [
'How do I create a workflow?',
'What tools are available?',
'What does my workflow do?',
]

const agentQuestions = [
'Help me build a workflow',
'I want to edit my workflow',
'Build me a small sample workflow',
]

const exampleQuestions = mode === 'ask' ? askQuestions : agentQuestions

const handleQuestionClick = (question: string) => {
onQuestionClick?.(question)
}

return (
<div className='flex h-full flex-col items-center justify-center px-4 py-10'>
<div className='space-y-6 text-center'>
<Bot className='mx-auto h-12 w-12 text-muted-foreground' />
<div className='space-y-2'>
<h3 className='font-medium text-lg'>How can I help you today?</h3>
<p className='text-muted-foreground text-sm'>
{mode === 'ask'
? 'Ask me anything about your workflows, available tools, or how to get started.'
: 'I can help you build, edit, and create workflows. What would you like to do?'}
</p>
</div>
<div className='mx-auto max-w-sm space-y-3'>
<div className='font-medium text-muted-foreground text-xs'>Try asking:</div>
<div className='flex flex-wrap justify-center gap-2'>
{exampleQuestions.map((question, index) => (
<button
key={index}
className='inline-flex cursor-pointer items-center rounded-full bg-muted/60 px-3 py-1.5 font-medium text-muted-foreground text-xs transition-all hover:scale-105 hover:bg-muted hover:text-foreground active:scale-95'
onClick={() => handleQuestionClick(question)}
>
{question}
</button>
))}
</div>
</div>
</div>
</div>
)
}
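A minimal usage sketch for the new welcome component (the `EmptyState` wrapper and `onSubmit` prop here are illustrative, not part of this diff): clicking a sample pill forwards the question to whatever submit handler the host provides, which is how `copilot.tsx` below wires it into `handleSubmit`.

```tsx
import { CopilotWelcome } from './components/welcome/welcome'

// Hypothetical host: a pill click submits the question as a chat message.
function EmptyState({ onSubmit }: { onSubmit: (query: string) => Promise<void> }) {
  return (
    <CopilotWelcome
      mode='agent' // 'ask' shows Q&A pills; 'agent' shows build/edit pills
      onQuestionClick={(question) => {
        // onQuestionClick is optional, so the component also renders standalone
        void onSubmit(question)
      }}
    />
  )
}
```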
@@ -1,16 +1,7 @@
'use client'

import { forwardRef, useCallback, useEffect, useImperativeHandle, useRef } from 'react'
import {
Bot,
ChevronDown,
Loader2,
MessageSquarePlus,
MoreHorizontal,
Send,
Trash2,
User,
} from 'lucide-react'
import { forwardRef, useCallback, useEffect, useImperativeHandle, useRef, useState } from 'react'
import { Bot, ChevronDown, History, MessageSquarePlus, MoreHorizontal, Trash2 } from 'lucide-react'
import { Button } from '@/components/ui/button'
import {
DropdownMenu,
@@ -18,13 +9,15 @@ import {
DropdownMenuItem,
DropdownMenuTrigger,
} from '@/components/ui/dropdown-menu'
import { Input } from '@/components/ui/input'
import { ScrollArea } from '@/components/ui/scroll-area'
import { createLogger } from '@/lib/logs/console-logger'
import { useCopilotStore } from '@/stores/copilot/store'
import type { CopilotMessage } from '@/stores/copilot/types'
import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
import { CheckpointPanel } from './components/checkpoint-panel'
import { CopilotModal } from './components/copilot-modal/copilot-modal'
import { ProfessionalInput } from './components/professional-input/professional-input'
import { ProfessionalMessage } from './components/professional-message/professional-message'
import { CopilotWelcome } from './components/welcome/welcome'

const logger = createLogger('Copilot')

@@ -52,8 +45,9 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(
},
ref
) => {
const inputRef = useRef<HTMLInputElement>(null)
const scrollAreaRef = useRef<HTMLDivElement>(null)
const [isDropdownOpen, setIsDropdownOpen] = useState(false)
const [showCheckpoints, setShowCheckpoints] = useState(false)

const { activeWorkflowId } = useWorkflowRegistry()

@@ -67,13 +61,16 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(
isSendingMessage,
error,
workflowId,
mode,
setWorkflowId,
validateCurrentChat,
selectChat,
createNewChat,
deleteChat,
sendMessage,
clearMessages,
clearError,
setMode,
} = useCopilotStore()

// Sync workflow ID with store
@@ -83,6 +80,14 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(
}
}, [activeWorkflowId, workflowId, setWorkflowId])

// Safety check: Clear any chat that doesn't belong to current workflow
useEffect(() => {
if (activeWorkflowId && workflowId === activeWorkflowId) {
// Validate that current chat belongs to this workflow
validateCurrentChat()
}
}, [currentChat, chats, activeWorkflowId, workflowId, validateCurrentChat])

// Auto-scroll to bottom when new messages are added
useEffect(() => {
if (scrollAreaRef.current) {
@@ -126,17 +131,9 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(

// Handle message submission
const handleSubmit = useCallback(
async (e?: React.FormEvent, message?: string) => {
e?.preventDefault()

const query = message || inputRef.current?.value?.trim() || ''
async (query: string) => {
if (!query || isSendingMessage || !activeWorkflowId) return

// Clear input if using the form input
if (!message && inputRef.current) {
inputRef.current.value = ''
}

try {
await sendMessage(query, { stream: true })
logger.info('Sent message:', query)
@@ -147,253 +144,236 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(
[isSendingMessage, activeWorkflowId, sendMessage]
)

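The hunk above narrows `handleSubmit` from a form-event handler to a plain `(query: string)` function, so the welcome pills, the fullscreen modal, and `ProfessionalInput` can all share one code path. A sketch of the resulting call sites (names taken from this diff):

```typescript
// Welcome pill click: the question string is submitted directly.
await handleSubmit('Build me a small sample workflow')

// Modal sends reduce to the same function; no synthetic form event needed.
const handleModalSendMessage = (message: string) => handleSubmit(message)
```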
// Format timestamp for display
const formatTimestamp = (timestamp: string) => {
return new Date(timestamp).toLocaleTimeString([], { hour: '2-digit', minute: '2-digit' })
}

// Function to render content with basic markdown (including direct links from LLM)
const renderMarkdownContent = (content: string) => {
if (!content) return content

let processedContent = content

// Process markdown links: [text](url)
processedContent = processedContent.replace(
/\[([^\]]+)\]\(([^)]+)\)/g,
'<a href="$2" target="_blank" rel="noopener noreferrer" class="text-blue-600 hover:text-blue-800 font-semibold underline transition-colors">$1</a>'
)

// Basic markdown processing
processedContent = processedContent
.replace(
/```(\w+)?\n([\s\S]*?)```/g,
'<pre class="bg-muted p-3 rounded-lg overflow-x-auto my-3 text-sm"><code>$2</code></pre>'
)
.replace(
/`([^`]+)`/g,
'<code class="bg-muted px-1.5 py-0.5 rounded text-sm font-mono">$1</code>'
)
.replace(/\*\*(.*?)\*\*/g, '<strong class="font-semibold">$1</strong>')
.replace(/\*(.*?)\*/g, '<em class="italic">$1</em>')
.replace(/^### (.*$)/gm, '<h3 class="font-semibold text-base mt-4 mb-2">$1</h3>')
.replace(/^## (.*$)/gm, '<h2 class="font-semibold text-lg mt-4 mb-2">$1</h2>')
.replace(/^# (.*$)/gm, '<h1 class="font-bold text-xl mt-4 mb-3">$1</h1>')
.replace(/^\* (.*$)/gm, '<li class="ml-4">• $1</li>')
.replace(/^- (.*$)/gm, '<li class="ml-4">• $1</li>')
.replace(/\n\n+/g, '</p><p class="mt-2">')
.replace(/\n/g, '<br>')

// Wrap in paragraph tags if needed
if (
!processedContent.includes('<p>') &&
!processedContent.includes('<h1>') &&
!processedContent.includes('<h2>') &&
!processedContent.includes('<h3>')
) {
processedContent = `<p>${processedContent}</p>`
}

return processedContent
}

// Render individual message
const renderMessage = (message: CopilotMessage) => {
return (
<div key={message.id} className='group flex gap-3 p-4 hover:bg-muted/30'>
<div
className={`flex h-8 w-8 items-center justify-center rounded-full ${
message.role === 'user' ? 'bg-muted' : 'bg-primary'
}`}
>
{message.role === 'user' ? (
<User className='h-4 w-4 text-muted-foreground' />
) : (
<Bot className='h-4 w-4 text-primary-foreground' />
)}
</div>
<div className='min-w-0 flex-1'>
<div className='mb-3 flex items-center gap-2'>
<span className='font-medium text-sm'>
{message.role === 'user' ? 'You' : 'Copilot'}
</span>
<span className='text-muted-foreground text-xs'>
{formatTimestamp(message.timestamp)}
</span>
</div>

{/* Enhanced content rendering with markdown links */}
<div className='prose prose-sm dark:prose-invert max-w-none'>
<div
className='text-foreground text-sm leading-normal'
dangerouslySetInnerHTML={{
__html: renderMarkdownContent(message.content),
}}
/>
</div>

{/* Streaming cursor */}
{!message.content && (
<div className='flex items-center gap-2 text-muted-foreground'>
<Loader2 className='h-4 w-4 animate-spin' />
<span className='text-sm'>Thinking...</span>
</div>
)}
</div>
</div>
)
}

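For reference, the legacy `renderMarkdownContent` above (deleted in this diff in favor of ReactMarkdown with `remarkGfm`) is a chain of regex substitutions fed into `dangerouslySetInnerHTML`. A rough before/after, illustrating why the component-based renderer is the safer choice:

```typescript
// Legacy pipeline, approximately:
const input = '## Title\n**bold** and [docs](https://example.com)'
// -> '<h2 class="font-semibold text-lg mt-4 mb-2">Title</h2><br>' +
//    '<strong class="font-semibold">bold</strong> and ' +
//    '<a href="https://example.com" target="_blank" rel="noopener noreferrer" ...>docs</a>'
// Regex plus innerHTML cannot sanitize arbitrary LLM output; ReactMarkdown escapes it by construction.
```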
// Convert messages for modal (role -> type)
const modalMessages = messages.map((msg) => ({
id: msg.id,
content: msg.content,
type: msg.role as 'user' | 'assistant',
timestamp: new Date(msg.timestamp),
citations: msg.citations,
}))

// Handle modal message sending
const handleModalSendMessage = useCallback(
async (message: string) => {
await handleSubmit(undefined, message)
await handleSubmit(message)
},
[handleSubmit]
)

return (
<>
<div className='flex h-full flex-col'>
{/* Header with Chat Title and Management */}
<div className='border-b p-4'>
<div className='flex items-center justify-between'>
{/* Chat Title Dropdown */}
<DropdownMenu>
<DropdownMenuTrigger asChild>
<Button variant='ghost' className='h-8 min-w-0 flex-1 justify-start px-3'>
<span className='truncate'>{currentChat?.title || 'New Chat'}</span>
<ChevronDown className='ml-2 h-4 w-4 shrink-0' />
</Button>
</DropdownMenuTrigger>
<DropdownMenuContent align='start' className='z-[110] w-64' sideOffset={8}>
{chats.map((chat) => (
<div key={chat.id} className='flex items-center'>
<DropdownMenuItem
onClick={() => selectChat(chat)}
className='flex-1 cursor-pointer'
>
<div className='min-w-0 flex-1'>
<div className='truncate font-medium text-sm'>
{chat.title || 'Untitled Chat'}
</div>
<div className='text-muted-foreground text-xs'>
{chat.messageCount} messages •{' '}
{new Date(chat.updatedAt).toLocaleDateString()}
</div>
</div>
</DropdownMenuItem>
<DropdownMenu>
<DropdownMenuTrigger asChild>
<Button variant='ghost' size='sm' className='h-8 w-8 shrink-0 p-0'>
<MoreHorizontal className='h-4 w-4' />
</Button>
</DropdownMenuTrigger>
<DropdownMenuContent align='end' className='z-[120]'>
<DropdownMenuItem
onClick={() => handleDeleteChat(chat.id)}
className='cursor-pointer text-destructive'
>
<Trash2 className='mr-2 h-4 w-4' />
Delete
</DropdownMenuItem>
</DropdownMenuContent>
</DropdownMenu>
</div>
))}
</DropdownMenuContent>
</DropdownMenu>

{/* New Chat Button */}
<Button
variant='ghost'
size='sm'
onClick={handleStartNewChat}
className='h-8 w-8 p-0'
title='New Chat'
>
<MessageSquarePlus className='h-4 w-4' />
</Button>
<div
className='flex h-full max-w-full flex-col overflow-hidden'
style={{ width: `${panelWidth}px`, maxWidth: `${panelWidth}px` }}
>
{/* Show loading state with centered pulsing agent icon */}
{isLoadingChats || isLoading ? (
<div className='flex h-full items-center justify-center'>
<div className='flex items-center justify-center'>
<Bot className='h-16 w-16 animate-pulse text-muted-foreground' />
</div>
</div>
) : (
<>
{/* Header with Chat Title and Management */}
<div className='border-b p-4'>
<div className='flex items-center justify-between'>
{/* Chat Title Dropdown */}
<DropdownMenu open={isDropdownOpen} onOpenChange={setIsDropdownOpen}>
<DropdownMenuTrigger asChild>
<Button
variant='ghost'
className='h-8 min-w-0 flex-1 justify-start px-3 hover:bg-accent/50'
>
<span className='truncate'>
{/* Only show chat title if we have verified workflow match */}
{currentChat &&
workflowId === activeWorkflowId &&
chats.some((chat) => chat.id === currentChat.id)
? currentChat.title || 'New Chat'
: 'New Chat'}
</span>
<ChevronDown className='ml-2 h-4 w-4 shrink-0' />
</Button>
</DropdownMenuTrigger>
<DropdownMenuContent
align='start'
className='z-[110] w-72 border-border/50 bg-background/95 shadow-lg backdrop-blur-sm'
sideOffset={8}
onMouseLeave={() => setIsDropdownOpen(false)}
>
{isLoadingChats ? (
<div className='px-4 py-3 text-muted-foreground text-sm'>
Loading chats...
</div>
) : chats.length === 0 ? (
<div className='px-4 py-3 text-muted-foreground text-sm'>No chats yet</div>
) : (
// Sort chats by updated date (most recent first) for display
[...chats]
.sort(
(a, b) =>
new Date(b.updatedAt).getTime() - new Date(a.updatedAt).getTime()
)
.map((chat) => (
<div key={chat.id} className='group flex items-center gap-2 px-2 py-1'>
<DropdownMenuItem asChild>
<div
onClick={() => {
selectChat(chat)
setIsDropdownOpen(false)
}}
className={`min-w-0 flex-1 cursor-pointer rounded-lg px-3 py-2.5 transition-all ${
currentChat?.id === chat.id
? 'bg-accent/80 text-accent-foreground'
: 'hover:bg-accent/40'
}`}
>
<div className='min-w-0'>
<div className='truncate font-medium text-sm leading-tight'>
{chat.title || 'Untitled Chat'}
</div>
<div className='mt-0.5 truncate text-muted-foreground text-xs'>
{new Date(chat.updatedAt).toLocaleDateString()} at{' '}
{new Date(chat.updatedAt).toLocaleTimeString([], {
hour: '2-digit',
minute: '2-digit',
})}{' '}
• {chat.messageCount}
</div>
</div>
</div>
</DropdownMenuItem>
<DropdownMenu>
<DropdownMenuTrigger asChild>
<Button
variant='ghost'
size='sm'
className='h-7 w-7 shrink-0 p-0 hover:bg-accent/60'
>
<MoreHorizontal className='h-3.5 w-3.5' />
</Button>
</DropdownMenuTrigger>
<DropdownMenuContent
align='end'
className='z-[120] border-border/50 bg-background/95 shadow-lg backdrop-blur-sm'
>
<DropdownMenuItem
onClick={() => handleDeleteChat(chat.id)}
className='cursor-pointer text-destructive hover:bg-destructive/10 hover:text-destructive focus:bg-destructive/10 focus:text-destructive'
>
<Trash2 className='mr-2 h-3.5 w-3.5' />
Delete
</DropdownMenuItem>
</DropdownMenuContent>
</DropdownMenu>
</div>
))
)}
</DropdownMenuContent>
</DropdownMenu>

{/* Error display */}
{error && (
<div className='mt-2 rounded-md bg-destructive/10 p-2 text-destructive text-sm'>
{error}
<Button
variant='ghost'
size='sm'
onClick={clearError}
className='ml-2 h-auto p-1 text-destructive'
>
Dismiss
</Button>
</div>
)}
</div>
{/* Checkpoint Toggle Button */}
<Button
variant='ghost'
size='sm'
onClick={() => setShowCheckpoints(!showCheckpoints)}
className={`h-8 w-8 p-0 ${
showCheckpoints
? 'bg-[#802FFF]/20 text-[#802FFF] hover:bg-[#802FFF]/30'
: 'hover:bg-accent/50'
}`}
title='View Checkpoints'
>
<History className='h-4 w-4' />
</Button>

{/* Messages area */}
<ScrollArea ref={scrollAreaRef} className='flex-1'>
{messages.length === 0 ? (
<div className='flex h-full flex-col items-center justify-center px-4 py-10'>
<div className='space-y-4 text-center'>
<Bot className='mx-auto h-12 w-12 text-muted-foreground' />
<div className='space-y-2'>
<h3 className='font-medium text-lg'>Welcome to Documentation Copilot</h3>
<p className='text-muted-foreground text-sm'>
Ask me anything about Sim Studio features, workflows, tools, or how to get
started.
</p>
{/* New Chat Button */}
<Button
variant='ghost'
size='sm'
onClick={handleStartNewChat}
className='h-8 w-8 p-0'
title='New Chat'
>
<MessageSquarePlus className='h-4 w-4' />
</Button>
</div>

{/* Error display */}
{error && (
<div className='mt-2 rounded-md bg-destructive/10 p-2 text-destructive text-sm'>
{error}
<Button
variant='ghost'
size='sm'
onClick={clearError}
className='ml-2 h-auto p-1 text-destructive'
>
Dismiss
</Button>
</div>
<div className='mx-auto max-w-xs space-y-2 text-left'>
<div className='text-muted-foreground text-xs'>Try asking:</div>
<div className='space-y-1'>
<div className='rounded bg-muted/50 px-2 py-1 text-xs'>
"How do I create a workflow?"
</div>
<div className='rounded bg-muted/50 px-2 py-1 text-xs'>
"What tools are available?"
</div>
<div className='rounded bg-muted/50 px-2 py-1 text-xs'>
"How do I deploy my workflow?"
</div>
)}
</div>

{/* Messages area or Checkpoint Panel */}
{showCheckpoints ? (
<CheckpointPanel />
) : (
<ScrollArea ref={scrollAreaRef} className='max-w-full flex-1 overflow-hidden'>
{messages.length === 0 ? (
<CopilotWelcome onQuestionClick={handleSubmit} mode={mode} />
) : (
messages.map((message) => (
<ProfessionalMessage
key={message.id}
message={message}
isStreaming={
isSendingMessage && message.id === messages[messages.length - 1]?.id
}
/>
))
)}
</ScrollArea>
)}

{/* Mode Selector and Input */}
{!showCheckpoints && (
<>
{/* Mode Selector */}
<div className='border-t px-4 pt-2 pb-1'>
<div className='flex items-center gap-1 rounded-md border bg-muted/30 p-0.5'>
<Button
variant='ghost'
size='sm'
onClick={() => setMode('ask')}
className={`h-6 flex-1 font-medium text-xs ${
mode === 'ask'
? 'bg-[#802FFF]/20 text-[#802FFF] hover:bg-[#802FFF]/30'
: 'hover:bg-muted/50'
}`}
title='Ask questions and get answers. Cannot edit workflows.'
>
Ask
</Button>
<Button
variant='ghost'
size='sm'
onClick={() => setMode('agent')}
className={`h-6 flex-1 font-medium text-xs ${
mode === 'agent'
? 'bg-[#802FFF]/20 text-[#802FFF] hover:bg-[#802FFF]/30'
: 'hover:bg-muted/50'
}`}
title='Full agent with workflow editing capabilities.'
>
Agent
</Button>
</div>
</div>
</div>
</div>
) : (
messages.map(renderMessage)
)}
</ScrollArea>

{/* Input area */}
<div className='border-t p-4'>
<form onSubmit={handleSubmit} className='flex gap-2'>
<Input
ref={inputRef}
placeholder='Ask about Sim Studio documentation...'
disabled={isSendingMessage}
className='flex-1'
autoComplete='off'
/>
<Button type='submit' size='icon' disabled={isSendingMessage} className='h-10 w-10'>
{isSendingMessage ? (
<Loader2 className='h-4 w-4 animate-spin' />
) : (
<Send className='h-4 w-4' />
)}
</Button>
</form>
</div>
{/* Input area */}
<ProfessionalInput
onSubmit={handleSubmit}
disabled={!activeWorkflowId}
isLoading={isSendingMessage}
/>
</>
)}
</>
)}
</div>

{/* Fullscreen Modal */}
@@ -402,14 +382,17 @@ export const Copilot = forwardRef<CopilotRef, CopilotProps>(
onOpenChange={(open) => onFullscreenToggle?.(open)}
copilotMessage={fullscreenInput}
setCopilotMessage={(message) => onFullscreenInputChange?.(message)}
messages={modalMessages}
messages={messages}
onSendMessage={handleModalSendMessage}
isLoading={isSendingMessage}
isLoadingChats={isLoadingChats}
chats={chats}
currentChat={currentChat}
onSelectChat={selectChat}
onStartNewChat={handleStartNewChat}
onDeleteChat={handleDeleteChat}
mode={mode}
onModeChange={setMode}
/>
</>
)

@@ -1,6 +1,6 @@
'use client'

import { useCallback, useEffect, useState } from 'react'
import { useCallback, useEffect, useRef, useState } from 'react'
import { ArrowDownToLine, CircleSlash, X } from 'lucide-react'
import { Tooltip, TooltipContent, TooltipTrigger } from '@/components/ui/tooltip'
import { useChatStore } from '@/stores/panel/chat/store'
@@ -10,15 +10,18 @@ import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
import { Chat } from './components/chat/chat'
import { ChatModal } from './components/chat/components/chat-modal/chat-modal'
import { Console } from './components/console/console'
import { Copilot } from './components/copilot/copilot'
import { Variables } from './components/variables/variables'

export function Panel() {
const [chatMessage, setChatMessage] = useState<string>('')
const [copilotMessage, setCopilotMessage] = useState<string>('')
const [isChatModalOpen, setIsChatModalOpen] = useState(false)
const [isCopilotModalOpen, setIsCopilotModalOpen] = useState(false)
const [isResizing, setIsResizing] = useState(false)
const [resizeStartX, setResizeStartX] = useState(0)
const [resizeStartWidth, setResizeStartWidth] = useState(0)
const copilotRef = useRef<{ clearMessages: () => void; startNewChat: () => void }>(null)

const isOpen = usePanelStore((state) => state.isOpen)
const togglePanel = usePanelStore((state) => state.togglePanel)
@@ -33,8 +36,13 @@ export function Panel() {
const exportChatCSV = useChatStore((state) => state.exportChatCSV)
const { activeWorkflowId } = useWorkflowRegistry()

const handleTabClick = (tab: 'chat' | 'console' | 'variables') => {
setActiveTab(tab)
const handleTabClick = (tab: 'chat' | 'console' | 'variables' | 'copilot') => {
// Redirect copilot tab clicks to console since copilot is hidden
if (tab === 'copilot') {
setActiveTab('console')
} else {
setActiveTab(tab)
}
if (!isOpen) {
togglePanel()
}
@@ -107,6 +115,15 @@ export function Panel() {
>
Console
</button>
{/* Temporarily hiding copilot tab */}
{/* <button
onClick={() => handleTabClick('copilot')}
className={`panel-tab-base inline-flex flex-1 cursor-pointer items-center justify-center rounded-[10px] border border-transparent py-1 font-[450] text-sm outline-none transition-colors duration-200 ${
isOpen && activeTab === 'copilot' ? 'panel-tab-active' : 'panel-tab-inactive'
}`}
>
Copilot
</button> */}
<button
onClick={() => handleTabClick('variables')}
className={`panel-tab-base inline-flex flex-1 cursor-pointer items-center justify-center rounded-[10px] border border-transparent py-1 font-[450] text-sm outline-none transition-colors duration-200 ${
@@ -161,15 +178,19 @@ export function Panel() {
<TooltipContent side='bottom'>Export chat data</TooltipContent>
</Tooltip>
)}
{(activeTab === 'console' || activeTab === 'chat') && (
{(activeTab === 'console' || activeTab === 'chat' || activeTab === 'copilot') && (
<Tooltip>
<TooltipTrigger asChild>
<button
onClick={() =>
activeTab === 'console'
? clearConsole(activeWorkflowId)
: clearChat(activeWorkflowId)
}
onClick={() => {
if (activeTab === 'console') {
clearConsole(activeWorkflowId)
} else if (activeTab === 'chat') {
clearChat(activeWorkflowId)
} else if (activeTab === 'copilot') {
copilotRef.current?.clearMessages()
}
}}
className='font-medium text-md leading-normal transition-all hover:brightness-75 dark:hover:brightness-125'
style={{ color: 'var(--base-muted-foreground)' }}
>
@@ -199,6 +220,15 @@ export function Panel() {
/>
) : activeTab === 'console' ? (
<Console panelWidth={panelWidth} />
) : activeTab === 'copilot' ? (
<Copilot
ref={copilotRef}
panelWidth={panelWidth}
isFullscreen={isCopilotModalOpen}
onFullscreenToggle={setIsCopilotModalOpen}
fullscreenInput={copilotMessage}
onFullscreenInputChange={setCopilotMessage}
/>
) : (
<Variables />
)}

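Since the copilot tab is hidden for now, `handleTabClick` silently falls back to the console tab rather than rendering an empty panel. A condensed sketch of that routing (the helper name is illustrative, not in the diff):

```typescript
type PanelTab = 'chat' | 'console' | 'variables' | 'copilot'

// Hidden tabs degrade to a visible one instead of showing an empty panel.
function resolveTab(requested: PanelTab, copilotHidden = true): PanelTab {
  return requested === 'copilot' && copilotHidden ? 'console' : requested
}

resolveTab('copilot')   // => 'console' while the tab is commented out
resolveTab('variables') // => 'variables'
```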
@@ -48,7 +48,31 @@ export function Table({
},
]
}
return value as TableRow[]

// Validate and fix each row to ensure proper structure
const validatedRows = value.map((row) => {
// Ensure row has an id
if (!row.id) {
row.id = crypto.randomUUID()
}

// Ensure row has cells object with proper structure
if (!row.cells || typeof row.cells !== 'object') {
console.warn('Fixing malformed table row:', row)
row.cells = Object.fromEntries(columns.map((col) => [col, '']))
} else {
// Ensure all required columns exist in cells
columns.forEach((col) => {
if (!(col in row.cells)) {
row.cells[col] = ''
}
})
}

return row
})

return validatedRows as TableRow[]
}, [value, columns])

// Add state for managing dropdowns
@@ -86,14 +110,21 @@ export function Table({
const handleCellChange = (rowIndex: number, column: string, value: string) => {
if (isPreview || disabled) return

const updatedRows = [...rows].map((row, idx) =>
idx === rowIndex
? {
...row,
cells: { ...row.cells, [column]: value },
}
: row
)
const updatedRows = [...rows].map((row, idx) => {
if (idx === rowIndex) {
// Ensure the row has a proper cells object
if (!row.cells || typeof row.cells !== 'object') {
console.warn('Fixing malformed row cells during cell change:', row)
row.cells = Object.fromEntries(columns.map((col) => [col, '']))
}

return {
...row,
cells: { ...row.cells, [column]: value },
}
}
return row
})

if (rowIndex === rows.length - 1 && value !== '') {
updatedRows.push({
@@ -129,6 +160,16 @@ export function Table({
)

const renderCell = (row: TableRow, rowIndex: number, column: string, cellIndex: number) => {
// Defensive programming: ensure row.cells exists and has the expected structure
if (!row.cells || typeof row.cells !== 'object') {
console.warn('Table row has malformed cells data:', row)
// Create a fallback cells object
row = {
...row,
cells: Object.fromEntries(columns.map((col) => [col, ''])),
}
}

const cellValue = row.cells[column] || ''
const cellKey = `${rowIndex}-${column}`

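All three defensive blocks above enforce the same invariant: every row carries an `id` and a `cells` record with one entry per column. A standalone sketch of that normalization (the `TableRow` shape is inferred from the surrounding component):

```typescript
interface TableRow {
  id: string
  cells: Record<string, string>
}

// Normalize a possibly malformed row so renderCell/handleCellChange never
// dereference a missing cells object or column key.
function normalizeRow(row: Partial<TableRow>, columns: string[]): TableRow {
  const cells = row.cells && typeof row.cells === 'object' ? { ...row.cells } : {}
  for (const col of columns) {
    if (!(col in cells)) cells[col] = ''
  }
  return { id: row.id ?? crypto.randomUUID(), cells }
}
```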
@@ -1,12 +1,10 @@
import { createLogger } from '@/lib/logs/console-logger'
import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
import { useSubBlockStore } from '@/stores/workflows/subblock/store'
import { useWorkflowStore } from '@/stores/workflows/workflow/store'
import { importWorkflowFromYaml } from '@/stores/workflows/yaml/importer'
import type { EditorFormat } from './workflow-text-editor'

const logger = createLogger('WorkflowApplier')

export type EditorFormat = 'json' | 'yaml'

export interface ApplyResult {
success: boolean
errors: string[]
@@ -15,13 +13,15 @@ export interface ApplyResult {
}

/**
* Apply workflow changes by using the existing importer for YAML
* or direct state replacement for JSON
* Apply workflow changes by using the new consolidated YAML endpoint
* for YAML format or direct state replacement for JSON
*/
export async function applyWorkflowDiff(
content: string,
format: EditorFormat
): Promise<ApplyResult> {
console.log('🔥 applyWorkflowDiff called!', { format, contentLength: content.length })

try {
const { activeWorkflowId } = useWorkflowRegistry.getState()

@@ -34,155 +34,161 @@ export async function applyWorkflowDiff(
}
}

logger.info('Starting applyWorkflowDiff', {
format,
activeWorkflowId,
contentLength: content.length,
})

if (format === 'yaml') {
// Use the existing YAML importer which handles ID mapping and complete state replacement
const workflowActions = {
addBlock: () => {}, // Not used in this path
addEdge: () => {}, // Not used in this path
applyAutoLayout: () => {
// Trigger auto layout after import
window.dispatchEvent(new CustomEvent('trigger-auto-layout'))
},
setSubBlockValue: () => {}, // Not used in this path
getExistingBlocks: () => useWorkflowStore.getState().blocks,
}
console.log('🔥 Processing YAML format!')

const result = await importWorkflowFromYaml(content, workflowActions)
logger.info('Processing YAML format - calling consolidated YAML endpoint')

return {
success: result.success,
errors: result.errors,
warnings: result.warnings,
appliedOperations: result.success ? 1 : 0, // One complete import operation
}
}
// Handle JSON format - complete state replacement
let parsedData: any
try {
parsedData = JSON.parse(content)
} catch (error) {
return {
success: false,
errors: [`Invalid JSON: ${error instanceof Error ? error.message : 'Parse error'}`],
warnings: [],
appliedOperations: 0,
}
}

// Validate JSON structure
if (!parsedData.state || !parsedData.state.blocks) {
return {
success: false,
errors: ['Invalid JSON structure: missing state.blocks'],
warnings: [],
appliedOperations: 0,
}
}

// Extract workflow state and subblock values
const newWorkflowState = {
blocks: parsedData.state.blocks,
edges: parsedData.state.edges || [],
loops: parsedData.state.loops || {},
parallels: parsedData.state.parallels || {},
lastSaved: Date.now(),
isDeployed: parsedData.state.isDeployed || false,
deployedAt: parsedData.state.deployedAt,
deploymentStatuses: parsedData.state.deploymentStatuses || {},
hasActiveWebhook: parsedData.state.hasActiveWebhook || false,
}

// Atomically update local state with rollback on failure
const previousWorkflowState = useWorkflowStore.getState()
const previousSubBlockState = useSubBlockStore.getState()

try {
// Update workflow state first
useWorkflowStore.setState(newWorkflowState)

// Update subblock values if provided
if (parsedData.subBlockValues) {
useSubBlockStore.setState((state: any) => ({
workflowValues: {
...state.workflowValues,
[activeWorkflowId]: parsedData.subBlockValues,
try {
// Use the new consolidated YAML endpoint
const response = await fetch(`/api/workflows/${activeWorkflowId}/yaml`, {
method: 'PUT',
headers: {
'Content-Type': 'application/json',
},
}))
}
} catch (error) {
// Rollback state changes on any failure
logger.error('State update failed, rolling back:', error)
useWorkflowStore.setState(previousWorkflowState)
useSubBlockStore.setState(previousSubBlockState)
body: JSON.stringify({
yamlContent: content,
source: 'editor',
applyAutoLayout: true,
createCheckpoint: false,
}),
})

return {
success: false,
errors: [
`State update failed: ${error instanceof Error ? error.message : 'Unknown error'}`,
],
warnings: [],
appliedOperations: 0,
}
}
if (!response.ok) {
const errorData = await response.json()
logger.error('Failed to save YAML workflow:', errorData)
return {
success: false,
errors: [errorData.message || `HTTP ${response.status}: ${response.statusText}`],
warnings: [],
appliedOperations: 0,
}
}

// Update workflow metadata if provided
if (parsedData.workflow) {
const { updateWorkflow } = useWorkflowRegistry.getState()
const metadata = parsedData.workflow
const result = await response.json()

updateWorkflow(activeWorkflowId, {
name: metadata.name,
description: metadata.description,
color: metadata.color,
})
}
logger.info('YAML workflow save completed', {
success: result.success,
errors: result.errors || [],
warnings: result.warnings || [],
})

// Save to database
try {
const response = await fetch(`/api/workflows/${activeWorkflowId}/state`, {
method: 'PUT',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify(newWorkflowState),
})
// Trigger auto layout after successful save
window.dispatchEvent(new CustomEvent('trigger-auto-layout'))

if (!response.ok) {
const errorData = await response.json()
logger.error('Failed to save workflow state:', errorData.error)
// Calculate applied operations (blocks + edges)
const appliedOperations = (result.data?.blocksCount || 0) + (result.data?.edgesCount || 0)

return {
success: result.success,
errors: result.errors || [],
warnings: result.warnings || [],
appliedOperations,
}
} catch (error) {
logger.error('YAML processing failed:', error)
return {
success: false,
errors: [`Database save failed: ${errorData.error || 'Unknown error'}`],
errors: [
`YAML processing failed: ${error instanceof Error ? error.message : 'Unknown error'}`,
],
warnings: [],
appliedOperations: 0,
}
}
} catch (error) {
logger.error('Failed to save workflow state:', error)
return {
success: false,
errors: [
`Failed to save workflow state: ${error instanceof Error ? error.message : 'Unknown error'}`,
],
warnings: [],
appliedOperations: 0,
}

if (format === 'json') {
logger.info('Processing JSON format - direct state replacement')

try {
const workflowState = JSON.parse(content)

// Validate that this looks like a workflow state
if (!workflowState.blocks || !workflowState.edges) {
return {
success: false,
errors: ['Invalid workflow state: missing blocks or edges'],
warnings: [],
appliedOperations: 0,
}
}

// Use the existing workflow state endpoint for JSON
const response = await fetch(`/api/workflows/${activeWorkflowId}/state`, {
method: 'PUT',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
...workflowState,
lastSaved: Date.now(),
}),
})

if (!response.ok) {
const errorData = await response.json()
logger.error('Failed to save JSON workflow state:', errorData)
return {
success: false,
errors: [errorData.error || `HTTP ${response.status}: ${response.statusText}`],
warnings: [],
appliedOperations: 0,
}
}

const result = await response.json()

logger.info('JSON workflow state save completed', {
success: result.success,
blocksCount: result.blocksCount,
edgesCount: result.edgesCount,
})

// Trigger auto layout after successful save
window.dispatchEvent(new CustomEvent('trigger-auto-layout'))

// Calculate applied operations
const appliedOperations = (result.blocksCount || 0) + (result.edgesCount || 0)

return {
success: true,
errors: [],
warnings: [],
appliedOperations,
}
} catch (error) {
logger.error('JSON processing failed:', error)
return {
success: false,
errors: [
`JSON processing failed: ${error instanceof Error ? error.message : 'Unknown error'}`,
],
warnings: [],
appliedOperations: 0,
}
}
}

// Trigger auto layout
window.dispatchEvent(new CustomEvent('trigger-auto-layout'))

return {
success: true,
errors: [],
warnings: [],
appliedOperations: 1, // One complete state replacement
}
} catch (error) {
logger.error('Failed to apply workflow changes:', error)
return {
success: false,
errors: [`Apply failed: ${error instanceof Error ? error.message : 'Unknown error'}`],
errors: [`Unsupported format: ${format}`],
warnings: [],
appliedOperations: 0,
}
} catch (error) {
logger.error('applyWorkflowDiff failed:', error)
return {
success: false,
errors: [
`Failed to apply workflow changes: ${error instanceof Error ? error.message : 'Unknown error'}`,
],
warnings: [],
appliedOperations: 0,
}

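Both callers in this diff (the text editor above and the importer below) now funnel through the same endpoint. A minimal client sketch, assuming the request/response shape shown here (`success`/`errors`/`warnings`, plus optional `data.blocksCount` and `data.edgesCount`):

```typescript
async function saveWorkflowYaml(workflowId: string, yamlContent: string) {
  const response = await fetch(`/api/workflows/${workflowId}/yaml`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      yamlContent,
      source: 'editor', // this diff uses 'editor' and 'import'
      applyAutoLayout: true,
      createCheckpoint: false,
    }),
  })
  if (!response.ok) {
    const err = await response.json()
    throw new Error(err.message ?? `HTTP ${response.status}`)
  }
  return response.json() // { success, errors, warnings, data? }
}
```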
@@ -61,6 +61,12 @@ export function WorkflowTextEditorModal({
// Handle save operation
const handleSave = useCallback(
async (content: string, contentFormat: EditorFormat) => {
console.log('🔥 WorkflowTextEditorModal.handleSave called!', {
contentFormat,
contentLength: content.length,
activeWorkflowId,
})

if (!activeWorkflowId) {
return { success: false, errors: ['No active workflow'] }
}
@@ -68,9 +74,13 @@ export function WorkflowTextEditorModal({
try {
logger.info('Applying workflow changes from text editor', { format: contentFormat })

console.log('🔥 About to call applyWorkflowDiff!', { contentFormat })

// Apply changes using the simplified approach
const applyResult = await applyWorkflowDiff(content, contentFormat)

console.log('🔥 applyWorkflowDiff returned!', { success: applyResult.success })

if (applyResult.success) {
logger.info('Successfully applied workflow changes', {
appliedOperations: applyResult.appliedOperations,

@@ -327,6 +327,10 @@ const WorkflowContent = React.memo(() => {
const handleAutoLayoutEvent = () => {
if (cleanup) cleanup()

// Call auto layout directly without debounce for copilot events
handleAutoLayout()

// Also set up debounced version as backup
cleanup = debouncedAutoLayout()
}

@@ -3,9 +3,8 @@
import { forwardRef, useImperativeHandle, useRef, useState } from 'react'
import { useParams, useRouter } from 'next/navigation'
import { createLogger } from '@/lib/logs/console-logger'
import { useCollaborativeWorkflow } from '@/hooks/use-collaborative-workflow'
import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
import { importWorkflowFromYaml, parseWorkflowYaml } from '@/stores/workflows/yaml/importer'
import { parseWorkflowYaml } from '@/stores/workflows/yaml/importer'

const logger = createLogger('ImportControls')

@@ -36,8 +35,6 @@ export const ImportControls = forwardRef<ImportControlsRef, ImportControlsProps>

// Stores and hooks
const { createWorkflow } = useWorkflowRegistry()
const { collaborativeAddBlock, collaborativeAddEdge, collaborativeSetSubblockValue } =
useCollaborativeWorkflow()

// Expose methods to parent component
useImperativeHandle(ref, () => ({
@@ -107,27 +104,32 @@ export const ImportControls = forwardRef<ImportControlsRef, ImportControlsProps>
workspaceId,
})

// Import the YAML into the new workflow BEFORE navigation (creates complete state and saves directly to DB)
// This avoids timing issues with workflow reload during navigation
const result = await importWorkflowFromYaml(
content,
{
addBlock: collaborativeAddBlock,
addEdge: collaborativeAddEdge,
applyAutoLayout: () => {
// Do nothing - auto layout should not run during import
},
setSubBlockValue: (blockId: string, subBlockId: string, value: unknown) => {
// Use the collaborative function - the same one called when users type into fields
collaborativeSetSubblockValue(blockId, subBlockId, value)
},
getExistingBlocks: () => {
// For a new workflow, we'll get the starter block from the server
return {}
},
// Use the new consolidated YAML endpoint to import the workflow
const response = await fetch(`/api/workflows/${newWorkflowId}/yaml`, {
method: 'PUT',
headers: {
'Content-Type': 'application/json',
},
newWorkflowId
) // Pass the new workflow ID to import into
body: JSON.stringify({
yamlContent: content,
description: 'Workflow imported from YAML',
source: 'import',
applyAutoLayout: true,
createCheckpoint: false,
}),
})

if (!response.ok) {
const errorData = await response.json()
setImportResult({
success: false,
errors: [errorData.message || `HTTP ${response.status}: ${response.statusText}`],
warnings: errorData.warnings || [],
})
return
}

const result = await response.json()

// Navigate to the new workflow AFTER import is complete
if (result.success) {
@@ -135,7 +137,12 @@ export const ImportControls = forwardRef<ImportControlsRef, ImportControlsProps>
router.push(`/workspace/${workspaceId}/w/${newWorkflowId}`)
}

setImportResult(result)
setImportResult({
success: result.success,
errors: result.errors || [],
warnings: result.warnings || [],
summary: result.summary,
})

if (result.success) {
setYamlContent('')

apps/sim/components/ui/tool-call.tsx (new file, 302 lines)
@@ -0,0 +1,302 @@
'use client'

import { useState } from 'react'
import { CheckCircle, ChevronDown, ChevronRight, Loader2, Settings, XCircle } from 'lucide-react'
import { Badge } from '@/components/ui/badge'
import { Button } from '@/components/ui/button'
import { Collapsible, CollapsibleContent, CollapsibleTrigger } from '@/components/ui/collapsible'
import { getToolDisplayName } from '@/lib/tool-call-parser'
import { cn } from '@/lib/utils'
import type { ToolCallGroup, ToolCallState } from '@/types/tool-call'

interface ToolCallProps {
toolCall: ToolCallState
isCompact?: boolean
}

interface ToolCallGroupProps {
group: ToolCallGroup
isCompact?: boolean
}

interface ToolCallIndicatorProps {
type: 'status' | 'thinking' | 'execution'
content: string
toolNames?: string[]
}

// Detection State Component
export function ToolCallDetection({ content }: { content: string }) {
return (
<div className='flex min-w-0 items-center gap-2 rounded-lg border border-blue-200 bg-blue-50 px-3 py-2 text-sm dark:border-blue-800 dark:bg-blue-950'>
<Loader2 className='h-4 w-4 shrink-0 animate-spin text-blue-600 dark:text-blue-400' />
<span className='min-w-0 truncate text-blue-800 dark:text-blue-200'>{content}</span>
</div>
)
}

// Execution State Component
export function ToolCallExecution({ toolCall, isCompact = false }: ToolCallProps) {
const [isExpanded, setIsExpanded] = useState(!isCompact)

return (
<div className='min-w-0 rounded-lg border border-amber-200 bg-amber-50 dark:border-amber-800 dark:bg-amber-950'>
<Collapsible open={isExpanded} onOpenChange={setIsExpanded}>
<CollapsibleTrigger asChild>
<Button
variant='ghost'
className='w-full min-w-0 justify-between px-3 py-4 hover:bg-amber-100 dark:hover:bg-amber-900'
>
<div className='flex min-w-0 items-center gap-2 overflow-hidden'>
<Settings className='h-4 w-4 shrink-0 animate-pulse text-amber-600 dark:text-amber-400' />
<span className='min-w-0 truncate font-mono text-amber-800 text-xs dark:text-amber-200'>
{toolCall.displayName || toolCall.name}
</span>
{toolCall.progress && (
<Badge
variant='outline'
className='shrink-0 text-amber-700 text-xs dark:text-amber-300'
>
{toolCall.progress}
</Badge>
)}
</div>
{isExpanded ? (
<ChevronDown className='h-4 w-4 shrink-0 text-amber-600 dark:text-amber-400' />
) : (
<ChevronRight className='h-4 w-4 shrink-0 text-amber-600 dark:text-amber-400' />
)}
</Button>
</CollapsibleTrigger>
<CollapsibleContent className='min-w-0 max-w-full px-3 pb-3'>
<div className='min-w-0 max-w-full space-y-2'>
<div className='flex items-center gap-2 text-amber-700 text-xs dark:text-amber-300'>
<Loader2 className='h-3 w-3 shrink-0 animate-spin' />
<span>Executing...</span>
</div>
{toolCall.parameters && Object.keys(toolCall.parameters).length > 0 && (
<div className='min-w-0 max-w-full rounded bg-amber-100 p-2 dark:bg-amber-900'>
<div className='mb-1 font-medium text-amber-800 text-xs dark:text-amber-200'>
Parameters:
</div>
<div className='min-w-0 max-w-full break-all font-mono text-amber-700 text-xs dark:text-amber-300'>
{JSON.stringify(toolCall.parameters, null, 2)}
</div>
</div>
)}
</div>
</CollapsibleContent>
</Collapsible>
</div>
)
}

// Completion State Component
export function ToolCallCompletion({ toolCall, isCompact = false }: ToolCallProps) {
const [isExpanded, setIsExpanded] = useState(false)
const isSuccess = toolCall.state === 'completed'
const isError = toolCall.state === 'error'

const formatDuration = (duration?: number) => {
if (!duration) return ''
return duration < 1000 ? `${duration}ms` : `${(duration / 1000).toFixed(1)}s`
}

return (
<div
className={cn(
'min-w-0 rounded-lg border',
isSuccess && 'border-green-200 bg-green-50 dark:border-green-800 dark:bg-green-950',
isError && 'border-red-200 bg-red-50 dark:border-red-800 dark:bg-red-950'
)}
>
<Collapsible open={isExpanded} onOpenChange={setIsExpanded}>
<CollapsibleTrigger asChild>
<Button
variant='ghost'
className={cn(
'w-full min-w-0 justify-between px-3 py-4',
isSuccess && 'hover:bg-green-100 dark:hover:bg-green-900',
isError && 'hover:bg-red-100 dark:hover:bg-red-900'
)}
>
<div className='flex min-w-0 items-center gap-2 overflow-hidden'>
{isSuccess && (
<CheckCircle className='h-4 w-4 shrink-0 text-green-600 dark:text-green-400' />
)}
{isError && <XCircle className='h-4 w-4 shrink-0 text-red-600 dark:text-red-400' />}
<span
className={cn(
'min-w-0 truncate font-mono text-xs',
isSuccess && 'text-green-800 dark:text-green-200',
isError && 'text-red-800 dark:text-red-200'
)}
>
{getToolDisplayName(toolCall.name, true)}
</span>
{toolCall.duration && (
<Badge
variant='outline'
className={cn(
'shrink-0 text-xs',
isSuccess && 'text-green-700 dark:text-green-300',
isError && 'text-red-700 dark:text-red-300'
)}
style={{ fontSize: '0.625rem' }}
>
{formatDuration(toolCall.duration)}
</Badge>
)}
</div>
<div className='flex shrink-0 items-center'>
{isExpanded ? (
<ChevronDown
className={cn(
'h-4 w-4',
isSuccess && 'text-green-600 dark:text-green-400',
isError && 'text-red-600 dark:text-red-400'
)}
/>
) : (
<ChevronRight
className={cn(
'h-4 w-4',
isSuccess && 'text-green-600 dark:text-green-400',
isError && 'text-red-600 dark:text-red-400'
)}
/>
)}
</div>
</Button>
</CollapsibleTrigger>
<CollapsibleContent className='min-w-0 max-w-full px-3 pb-3'>
<div className='min-w-0 max-w-full space-y-2'>
{toolCall.parameters && Object.keys(toolCall.parameters).length > 0 && (
<div
className={cn(
'min-w-0 max-w-full rounded p-2',
isSuccess && 'bg-green-100 dark:bg-green-900',
isError && 'bg-red-100 dark:bg-red-900'
)}
>
<div
className={cn(
'mb-1 font-medium text-xs',
isSuccess && 'text-green-800 dark:text-green-200',
isError && 'text-red-800 dark:text-red-200'
)}
>
Parameters:
</div>
<div
className={cn(
'min-w-0 max-w-full break-all font-mono text-xs',
isSuccess && 'text-green-700 dark:text-green-300',
isError && 'text-red-700 dark:text-red-300'
)}
>
{JSON.stringify(toolCall.parameters, null, 2)}
</div>
</div>
)}

{toolCall.error && (
<div className='min-w-0 max-w-full rounded bg-red-100 p-2 dark:bg-red-900'>
<div className='mb-1 font-medium text-red-800 text-xs dark:text-red-200'>
Error:
</div>
<div className='min-w-0 max-w-full break-all font-mono text-red-700 text-xs dark:text-red-300'>
{toolCall.error}
</div>
</div>
)}
</div>
</CollapsibleContent>
</Collapsible>
</div>
)
}

// Group Component for Multiple Tool Calls
|
||||
export function ToolCallGroupComponent({ group, isCompact = false }: ToolCallGroupProps) {
|
||||
const [isExpanded, setIsExpanded] = useState(true)
|
||||
|
||||
const completedCount = group.toolCalls.filter((t) => t.state === 'completed').length
|
||||
const totalCount = group.toolCalls.length
|
||||
const isAllCompleted = completedCount === totalCount
|
||||
const hasErrors = group.toolCalls.some((t) => t.state === 'error')
|
||||
|
||||
return (
|
||||
<div className='min-w-0 space-y-2'>
|
||||
{group.summary && (
|
||||
<div className='flex min-w-0 items-center gap-2 rounded-lg border border-blue-200 bg-blue-50 px-3 py-2 text-sm dark:border-blue-800 dark:bg-blue-950'>
|
||||
<Settings className='h-4 w-4 shrink-0 text-blue-600 dark:text-blue-400' />
|
||||
<span className='min-w-0 truncate text-blue-800 dark:text-blue-200'>{group.summary}</span>
|
||||
{!isAllCompleted && (
|
||||
<Badge variant='outline' className='shrink-0 text-blue-700 text-xs dark:text-blue-300'>
|
||||
{completedCount}/{totalCount}
|
||||
</Badge>
|
||||
)}
|
||||
</div>
|
||||
)}
|
||||
|
||||
<Collapsible open={isExpanded} onOpenChange={setIsExpanded}>
|
||||
<CollapsibleTrigger asChild>
|
||||
<Button
|
||||
variant='ghost'
|
||||
className='w-full min-w-0 justify-between px-3 py-3 text-sm hover:bg-muted'
|
||||
>
|
||||
<div className='flex min-w-0 items-center gap-2 overflow-hidden'>
|
||||
<span className='min-w-0 truncate text-muted-foreground'>
|
||||
{isAllCompleted ? 'Completed' : 'In Progress'} ({completedCount}/{totalCount})
|
||||
</span>
|
||||
{hasErrors && (
|
||||
<Badge variant='destructive' className='shrink-0 text-xs'>
|
||||
Errors
|
||||
</Badge>
|
||||
)}
|
||||
</div>
|
||||
{isExpanded ? (
|
||||
<ChevronDown className='h-4 w-4 shrink-0 text-muted-foreground' />
|
||||
) : (
|
||||
<ChevronRight className='h-4 w-4 shrink-0 text-muted-foreground' />
|
||||
)}
|
||||
</Button>
|
||||
</CollapsibleTrigger>
|
||||
<CollapsibleContent className='min-w-0 max-w-full space-y-2'>
|
||||
{group.toolCalls.map((toolCall) => (
|
||||
<div key={toolCall.id} className='min-w-0 max-w-full'>
|
||||
{toolCall.state === 'executing' && (
|
||||
<ToolCallExecution toolCall={toolCall} isCompact={isCompact} />
|
||||
)}
|
||||
{(toolCall.state === 'completed' || toolCall.state === 'error') && (
|
||||
<ToolCallCompletion toolCall={toolCall} isCompact={isCompact} />
|
||||
)}
|
||||
</div>
|
||||
))}
|
||||
</CollapsibleContent>
|
||||
</Collapsible>
|
||||
</div>
|
||||
)
|
||||
}
|
||||
|
||||
// Status Indicator Component
|
||||
export function ToolCallIndicator({ type, content, toolNames }: ToolCallIndicatorProps) {
|
||||
if (type === 'status' && toolNames) {
|
||||
return (
|
||||
<div className='flex min-w-0 items-center gap-2 rounded-lg border border-blue-200 bg-blue-50 px-3 py-2 text-sm dark:border-blue-800 dark:bg-blue-950'>
|
||||
<Loader2 className='h-4 w-4 shrink-0 animate-spin text-blue-600 dark:text-blue-400' />
|
||||
<span className='min-w-0 truncate text-blue-800 dark:text-blue-200'>
|
||||
🔄 {toolNames.join(' • ')}
|
||||
</span>
|
||||
</div>
|
||||
)
|
||||
}
|
||||
|
||||
return (
|
||||
<div className='flex min-w-0 items-center gap-2 rounded-lg border border-blue-200 bg-blue-50 px-3 py-2 text-sm dark:border-blue-800 dark:bg-blue-950'>
|
||||
<Loader2 className='h-4 w-4 shrink-0 animate-spin text-blue-600 dark:text-blue-400' />
|
||||
<span className='min-w-0 truncate text-blue-800 dark:text-blue-200'>{content}</span>
|
||||
</div>
|
||||
)
|
||||
}
|
||||
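
For reference, a minimal sketch of rendering the group component above. The `group` literal is hypothetical, with its shape inferred from the props the component reads, and `CopilotMessageDemo` is an invented wrapper rather than part of the codebase:

// Hypothetical usage - the group shape mirrors what ToolCallGroupComponent reads above
const demoGroup = {
  summary: 'Editing workflow blocks',
  toolCalls: [
    { id: 'tc-1', state: 'completed', parameters: { blockId: 'agent-1' } },
    { id: 'tc-2', state: 'executing', parameters: { blockId: 'api-2' } },
  ],
} as any // cast because the real ToolCallGroup type is defined elsewhere in the file

export function CopilotMessageDemo() {
  return <ToolCallGroupComponent group={demoGroup} isCompact />
}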
@@ -306,6 +306,94 @@ export function SocketProvider({ children, user }: SocketProviderProps) {
      eventHandlers.current.workflowReverted?.(data)
    })

    // Workflow update events (external changes like LLM edits)
    socketInstance.on('workflow-updated', (data) => {
      logger.info(`Workflow ${data.workflowId} has been updated externally - requesting sync`)
      // Request fresh workflow state to sync with external changes
      if (data.workflowId === urlWorkflowId) {
        socketInstance.emit('request-sync', { workflowId: data.workflowId })
      }
    })

    // Copilot workflow edit events (database has been updated, rehydrate stores)
    socketInstance.on('copilot-workflow-edit', async (data) => {
      logger.info(
        `Copilot edited workflow ${data.workflowId} - rehydrating stores from database`
      )

      if (data.workflowId === urlWorkflowId) {
        try {
          // Fetch fresh workflow state directly from API
          const response = await fetch(`/api/workflows/${data.workflowId}`)
          if (response.ok) {
            const responseData = await response.json()
            const workflowData = responseData.data

            if (workflowData?.state) {
              logger.info('Rehydrating stores with fresh workflow state from database')

              // Import stores dynamically to avoid import issues
              Promise.all([
                import('@/stores/workflows/workflow/store'),
                import('@/stores/workflows/subblock/store'),
              ])
                .then(([{ useWorkflowStore }, { useSubBlockStore }]) => {
                  const workflowState = workflowData.state

                  // Extract subblock values from blocks
                  const subblockValues: Record<string, Record<string, any>> = {}
                  Object.entries(workflowState.blocks || {}).forEach(([blockId, block]) => {
                    const blockState = block as any
                    subblockValues[blockId] = {}
                    Object.entries(blockState.subBlocks || {}).forEach(
                      ([subblockId, subblock]) => {
                        subblockValues[blockId][subblockId] = (subblock as any).value
                      }
                    )
                  })

                  // Update workflow store with fresh state from database
                  const newWorkflowState = {
                    blocks: workflowState.blocks || {},
                    edges: workflowState.edges || [],
                    loops: workflowState.loops || {},
                    parallels: workflowState.parallels || {},
                    lastSaved: workflowState.lastSaved || Date.now(),
                    isDeployed: workflowState.isDeployed || false,
                    deployedAt: workflowState.deployedAt,
                    deploymentStatuses: workflowState.deploymentStatuses || {},
                    hasActiveSchedule: workflowState.hasActiveSchedule || false,
                    hasActiveWebhook: workflowState.hasActiveWebhook || false,
                  }

                  useWorkflowStore.setState(newWorkflowState)

                  // Update subblock store with fresh values
                  useSubBlockStore.setState((state: any) => ({
                    workflowValues: {
                      ...state.workflowValues,
                      [data.workflowId]: subblockValues,
                    },
                  }))

                  // Note: Auto-layout is already handled by the copilot backend before saving
                  // No need to trigger additional auto-layout here to avoid ID conflicts

                  logger.info('Successfully rehydrated stores from database after copilot edit')
                })
                .catch((error) => {
                  logger.error('Failed to import stores for copilot rehydration:', error)
                })
            }
          } else {
            logger.error('Failed to fetch fresh workflow state:', response.statusText)
          }
        } catch (error) {
          logger.error('Failed to rehydrate stores after copilot edit:', error)
        }
      }
    })

    // Operation confirmation events
    socketInstance.on('operation-confirmed', (data) => {
      logger.debug('Operation confirmed', { operationId: data.operationId })
@@ -356,9 +444,71 @@ export function SocketProvider({ children, user }: SocketProviderProps) {
      logger.debug('Operation confirmed:', data)
    })

    socketInstance.on('workflow-state', (state) => {
      // logger.info('Received workflow state from server:', state)
      // This will be used to sync initial state when joining a workflow
    socketInstance.on('workflow-state', (workflowData) => {
      logger.info('Received workflow state from server:', workflowData)

      // Update local stores with the fresh workflow state (same logic as YAML editor)
      if (workflowData?.state && workflowData.id === urlWorkflowId) {
        logger.info('Updating local stores with fresh workflow state from server')

        try {
          // Import stores dynamically to avoid import issues
          Promise.all([
            import('@/stores/workflows/workflow/store'),
            import('@/stores/workflows/subblock/store'),
            import('@/stores/workflows/registry/store'),
          ])
            .then(([{ useWorkflowStore }, { useSubBlockStore }, { useWorkflowRegistry }]) => {
              const workflowState = workflowData.state

              // Extract subblock values from blocks before updating workflow store
              const subblockValues: Record<string, Record<string, any>> = {}
              Object.entries(workflowState.blocks || {}).forEach(([blockId, block]) => {
                const blockState = block as any
                subblockValues[blockId] = {}
                Object.entries(blockState.subBlocks || {}).forEach(([subblockId, subblock]) => {
                  subblockValues[blockId][subblockId] = (subblock as any).value
                })
              })

              // Update workflow store with new state
              const newWorkflowState = {
                blocks: workflowState.blocks || {},
                edges: workflowState.edges || [],
                loops: workflowState.loops || {},
                parallels: workflowState.parallels || {},
                lastSaved: workflowState.lastSaved || Date.now(),
                isDeployed: workflowState.isDeployed || false,
                deployedAt: workflowState.deployedAt,
                deploymentStatuses: workflowState.deploymentStatuses || {},
                hasActiveSchedule: workflowState.hasActiveSchedule || false,
                hasActiveWebhook: workflowState.hasActiveWebhook || false,
              }

              useWorkflowStore.setState(newWorkflowState)

              // Update subblock store with fresh values
              useSubBlockStore.setState((state: any) => ({
                workflowValues: {
                  ...state.workflowValues,
                  [workflowData.id]: subblockValues,
                },
              }))

              // Note: Auto layout is not triggered here because:
              // 1. For copilot edits: positions are already optimized by the backend
              // 2. For other syncs: the existing positions should be preserved
              // This prevents ID conflicts and unnecessary position updates

              logger.info('Successfully updated local stores with fresh workflow state')
            })
            .catch((error) => {
              logger.error('Failed to import stores for workflow state update:', error)
            })
        } catch (error) {
          logger.error('Failed to update local stores with workflow state:', error)
        }
      }
    })

    setSocket(socketInstance)
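
The two rehydration handlers above share one transformation: flattening each block's `subBlocks` map into a blockId -> subblockId -> value record before pushing it into the subblock store. A standalone sketch of just that step, with types loosened to `any` as in the handlers themselves:

function extractSubBlockValues(blocks: Record<string, any>): Record<string, Record<string, any>> {
  const subblockValues: Record<string, Record<string, any>> = {}
  Object.entries(blocks || {}).forEach(([blockId, block]) => {
    subblockValues[blockId] = {}
    Object.entries((block as any).subBlocks || {}).forEach(([subblockId, subblock]) => {
      // Only the current value is carried over; positions and metadata stay in the workflow store
      subblockValues[blockId][subblockId] = (subblock as any).value
    })
  })
  return subblockValues
}

// e.g. { 'block-1': { subBlocks: { prompt: { value: 'Hi' } } } } becomes { 'block-1': { prompt: 'Hi' } }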

apps/sim/db/migrations/0058_clean_shiva.sql (new file, 20 lines)
@@ -0,0 +1,20 @@
CREATE TABLE "copilot_checkpoints" (
  "id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
  "user_id" text NOT NULL,
  "workflow_id" text NOT NULL,
  "chat_id" uuid NOT NULL,
  "yaml" text NOT NULL,
  "created_at" timestamp DEFAULT now() NOT NULL,
  "updated_at" timestamp DEFAULT now() NOT NULL
);
--> statement-breakpoint
ALTER TABLE "copilot_checkpoints" ADD CONSTRAINT "copilot_checkpoints_user_id_user_id_fk" FOREIGN KEY ("user_id") REFERENCES "public"."user"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "copilot_checkpoints" ADD CONSTRAINT "copilot_checkpoints_workflow_id_workflow_id_fk" FOREIGN KEY ("workflow_id") REFERENCES "public"."workflow"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "copilot_checkpoints" ADD CONSTRAINT "copilot_checkpoints_chat_id_copilot_chats_id_fk" FOREIGN KEY ("chat_id") REFERENCES "public"."copilot_chats"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
CREATE INDEX "copilot_checkpoints_user_id_idx" ON "copilot_checkpoints" USING btree ("user_id");--> statement-breakpoint
CREATE INDEX "copilot_checkpoints_workflow_id_idx" ON "copilot_checkpoints" USING btree ("workflow_id");--> statement-breakpoint
CREATE INDEX "copilot_checkpoints_chat_id_idx" ON "copilot_checkpoints" USING btree ("chat_id");--> statement-breakpoint
CREATE INDEX "copilot_checkpoints_user_workflow_idx" ON "copilot_checkpoints" USING btree ("user_id","workflow_id");--> statement-breakpoint
CREATE INDEX "copilot_checkpoints_workflow_chat_idx" ON "copilot_checkpoints" USING btree ("workflow_id","chat_id");--> statement-breakpoint
CREATE INDEX "copilot_checkpoints_created_at_idx" ON "copilot_checkpoints" USING btree ("created_at");--> statement-breakpoint
CREATE INDEX "copilot_checkpoints_chat_created_at_idx" ON "copilot_checkpoints" USING btree ("chat_id","created_at");

apps/sim/db/migrations/meta/0058_snapshot.json (new file, 5865 lines; diff suppressed because it is too large)
@@ -400,6 +400,13 @@
      "when": 1752980338632,
      "tag": "0057_charming_star_brand",
      "breakpoints": true
    },
    {
      "idx": 58,
      "version": "7",
      "when": 1753211027120,
      "tag": "0058_clean_shiva",
      "breakpoints": true
    }
  ]
}
@@ -1039,6 +1039,48 @@ export const copilotChats = pgTable(
  })
)

export const copilotCheckpoints = pgTable(
  'copilot_checkpoints',
  {
    id: uuid('id').primaryKey().defaultRandom(),
    userId: text('user_id')
      .notNull()
      .references(() => user.id, { onDelete: 'cascade' }),
    workflowId: text('workflow_id')
      .notNull()
      .references(() => workflow.id, { onDelete: 'cascade' }),
    chatId: uuid('chat_id')
      .notNull()
      .references(() => copilotChats.id, { onDelete: 'cascade' }),
    yaml: text('yaml').notNull(),
    createdAt: timestamp('created_at').notNull().defaultNow(),
    updatedAt: timestamp('updated_at').notNull().defaultNow(),
  },
  (table) => ({
    // Primary access patterns
    userIdIdx: index('copilot_checkpoints_user_id_idx').on(table.userId),
    workflowIdIdx: index('copilot_checkpoints_workflow_id_idx').on(table.workflowId),
    chatIdIdx: index('copilot_checkpoints_chat_id_idx').on(table.chatId),

    // Combined indexes for common queries
    userWorkflowIdx: index('copilot_checkpoints_user_workflow_idx').on(
      table.userId,
      table.workflowId
    ),
    workflowChatIdx: index('copilot_checkpoints_workflow_chat_idx').on(
      table.workflowId,
      table.chatId
    ),

    // Ordering indexes
    createdAtIdx: index('copilot_checkpoints_created_at_idx').on(table.createdAt),
    chatCreatedAtIdx: index('copilot_checkpoints_chat_created_at_idx').on(
      table.chatId,
      table.createdAt
    ),
  })
)

export const templates = pgTable(
  'templates',
  {
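
For context, the composite `chat_id, created_at` index above is the one a latest-first checkpoint listing would hit. A minimal drizzle-orm sketch of that query; the `db` parameter stands in for the app's configured drizzle instance, which is an assumption rather than a verified import:

import { desc, eq } from 'drizzle-orm'

// Latest-first checkpoints for one chat, served by copilot_checkpoints_chat_created_at_idx
async function listCheckpoints(db: any, chatId: string) {
  return db
    .select()
    .from(copilotCheckpoints)
    .where(eq(copilotCheckpoints.chatId, chatId))
    .orderBy(desc(copilotCheckpoints.createdAt))
}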

apps/sim/lib/autolayout/algorithms/hierarchical.ts (new file, 427 lines)
@@ -0,0 +1,427 @@
import type { LayoutEdge, LayoutNode, LayoutOptions, LayoutResult } from '../types'

interface LayerNode {
  node: LayoutNode
  layer: number
  position: number
}

/**
 * Hierarchical layout algorithm optimized for workflow visualization
 * Creates clear layers based on workflow flow with minimal edge crossings
 */
export function calculateHierarchicalLayout(
  nodes: LayoutNode[],
  edges: LayoutEdge[],
  options: LayoutOptions
): LayoutResult {
  const TIMEOUT_MS = 3000 // 3 second timeout for hierarchical layout
  const startTime = Date.now()

  const checkTimeout = () => {
    if (Date.now() - startTime > TIMEOUT_MS) {
      throw new Error('Hierarchical layout timeout')
    }
  }

  const { direction, spacing, alignment, padding } = options

  try {
    // Step 1: Determine layout direction
    checkTimeout()
    const isHorizontal =
      direction === 'horizontal' ||
      (direction === 'auto' && shouldUseHorizontalLayout(nodes, edges))

    // Step 2: Build adjacency lists
    checkTimeout()
    const { incomingEdges, outgoingEdges } = buildAdjacencyLists(edges)

    // Step 3: Assign nodes to layers using longest path layering
    checkTimeout()
    const layeredNodes = assignLayers(nodes, incomingEdges, outgoingEdges)

    // Step 4: Order nodes within layers to minimize crossings
    checkTimeout()
    const orderedLayers = minimizeCrossings(layeredNodes, edges, incomingEdges, outgoingEdges)

    // Step 5: Calculate positions
    checkTimeout()
    const positionedNodes = calculatePositions(
      orderedLayers,
      nodes,
      isHorizontal,
      spacing,
      alignment,
      padding
    )

    // Step 6: Calculate metadata
    const metadata = calculateLayoutMetadata(positionedNodes, edges, orderedLayers.length)

    return {
      nodes: positionedNodes,
      metadata: {
        ...metadata,
        strategy: 'hierarchical',
      },
    }
  } catch (error) {
    if (error instanceof Error && error.message.includes('timeout')) {
      // Fallback to simple linear layout
      return createSimpleLinearLayout(nodes, options)
    }
    throw error
  }
}

/**
 * Simple fallback layout when hierarchical layout times out
 */
function createSimpleLinearLayout(nodes: LayoutNode[], options: LayoutOptions): LayoutResult {
  const positions: Array<{ id: string; position: { x: number; y: number } }> = []
  const spacing = options.spacing.horizontal || 400
  const startX = options.padding?.x || 200
  const startY = options.padding?.y || 200

  nodes.forEach((node, index) => {
    positions.push({
      id: node.id,
      position: {
        x: startX + index * spacing,
        y: startY,
      },
    })
  })

  return {
    nodes: positions,
    metadata: {
      strategy: 'simple-linear',
      totalWidth: nodes.length * spacing,
      totalHeight: 200,
      layerCount: 1,
      stats: {
        crossings: 0,
        totalEdgeLength: 0,
        nodeOverlaps: 0,
      },
    },
  }
}

function shouldUseHorizontalLayout(nodes: LayoutNode[], edges: LayoutEdge[]): boolean {
  // Analyze edge directions and node handle preferences
  let horizontalPreference = 0

  nodes.forEach((node) => {
    if (node.horizontalHandles) horizontalPreference++
    else horizontalPreference--
  })

  // Analyze workflow complexity - simpler workflows work better horizontally
  const complexity = edges.length / Math.max(nodes.length, 1)
  if (complexity < 1.5) horizontalPreference += 2

  return horizontalPreference > 0
}

function buildAdjacencyLists(edges: LayoutEdge[]) {
  const incomingEdges = new Map<string, string[]>()
  const outgoingEdges = new Map<string, string[]>()

  edges.forEach((edge) => {
    if (!outgoingEdges.has(edge.source)) {
      outgoingEdges.set(edge.source, [])
    }
    if (!incomingEdges.has(edge.target)) {
      incomingEdges.set(edge.target, [])
    }

    outgoingEdges.get(edge.source)!.push(edge.target)
    incomingEdges.get(edge.target)!.push(edge.source)
  })

  return { incomingEdges, outgoingEdges }
}

function assignLayers(
  nodes: LayoutNode[],
  incomingEdges: Map<string, string[]>,
  outgoingEdges: Map<string, string[]>
): Map<number, LayoutNode[]> {
  const nodeToLayer = new Map<string, number>()
  const layers = new Map<number, LayoutNode[]>()

  // Find root nodes (no incoming edges)
  const rootNodes = nodes.filter(
    (node) => !incomingEdges.has(node.id) || incomingEdges.get(node.id)!.length === 0
  )

  // If no root nodes, pick nodes with highest category priority
  if (rootNodes.length === 0) {
    const triggerNodes = nodes.filter((node) => node.category === 'trigger')
    rootNodes.push(...(triggerNodes.length > 0 ? triggerNodes : [nodes[0]]))
  }

  // Assign layer 0 to root nodes
  rootNodes.forEach((node) => {
    nodeToLayer.set(node.id, 0)
  })

  // Use longest path layering algorithm with iteration limit
  let maxLayer = 0
  let changed = true
  let iterations = 0
  const maxIterations = nodes.length * 2 // Prevent infinite loops

  while (changed && iterations < maxIterations) {
    changed = false
    iterations++

    nodes.forEach((node) => {
      const predecessors = incomingEdges.get(node.id) || []

      if (predecessors.length > 0) {
        const maxPredecessorLayer = Math.max(
          ...predecessors.map((predId) => nodeToLayer.get(predId) || 0)
        )
        const currentLayer = nodeToLayer.get(node.id) || 0
        const newLayer = maxPredecessorLayer + 1

        if (newLayer > currentLayer) {
          nodeToLayer.set(node.id, newLayer)
          maxLayer = Math.max(maxLayer, newLayer)
          changed = true
        }
      }
    })
  }

  if (iterations >= maxIterations) {
    console.warn('Layer assignment reached maximum iterations, may have cycles in graph')
  }

  // Group nodes by layer
  nodes.forEach((node) => {
    const layer = nodeToLayer.get(node.id) || 0
    if (!layers.has(layer)) {
      layers.set(layer, [])
    }
    layers.get(layer)!.push(node)
  })

  return layers
}

function minimizeCrossings(
  layeredNodes: Map<number, LayoutNode[]>,
  edges: LayoutEdge[],
  incomingEdges: Map<string, string[]>,
  outgoingEdges: Map<string, string[]>
): LayoutNode[][] {
  const layers: LayoutNode[][] = []
  const maxLayer = Math.max(...layeredNodes.keys())

  // Initialize layers
  for (let i = 0; i <= maxLayer; i++) {
    layers[i] = layeredNodes.get(i) || []
  }

  // Apply barycenter heuristic for crossing minimization with limited iterations
  const maxIterations = Math.min(4, Math.max(1, Math.ceil(maxLayer / 2)))
  for (let iteration = 0; iteration < maxIterations; iteration++) {
    if (iteration % 2 === 0) {
      // Forward pass
      for (let layer = 1; layer <= maxLayer; layer++) {
        sortLayerByBarycenter(layers[layer], layers[layer - 1], incomingEdges, true)
      }
    } else {
      // Backward pass
      for (let layer = maxLayer - 1; layer >= 0; layer--) {
        sortLayerByBarycenter(layers[layer], layers[layer + 1], outgoingEdges, false)
      }
    }
  }

  return layers
}

function sortLayerByBarycenter(
  currentLayer: LayoutNode[],
  adjacentLayer: LayoutNode[],
  edgeMap: Map<string, string[]>,
  useIncoming: boolean
) {
  const barycenters: Array<{ node: LayoutNode; barycenter: number }> = []

  currentLayer.forEach((node) => {
    const connectedNodes = edgeMap.get(node.id) || []
    let barycenter = 0

    if (connectedNodes.length > 0) {
      const positions = connectedNodes
        .map((connectedId) => adjacentLayer.findIndex((n) => n.id === connectedId))
        .filter((pos) => pos !== -1)

      if (positions.length > 0) {
        barycenter = positions.reduce((sum, pos) => sum + pos, 0) / positions.length
      }
    }

    barycenters.push({ node, barycenter })
  })

  // Sort by barycenter, then by category priority, then by name for stability
  barycenters.sort((a, b) => {
    if (Math.abs(a.barycenter - b.barycenter) < 0.1) {
      const priorityA = getCategoryPriority(a.node.category)
      const priorityB = getCategoryPriority(b.node.category)
      if (priorityA !== priorityB) return priorityA - priorityB
      return a.node.name.localeCompare(b.node.name)
    }
    return a.barycenter - b.barycenter
  })

  // Update layer order
  currentLayer.splice(0, currentLayer.length, ...barycenters.map((item) => item.node))
}

function getCategoryPriority(category: LayoutNode['category']): number {
  const priorities = {
    trigger: 0,
    processing: 1,
    logic: 2,
    container: 3,
    output: 4,
  }
  // ?? (not ||) so that trigger's priority of 0 is preserved instead of falling through to 10
  return priorities[category] ?? 10
}

function calculatePositions(
  layers: LayoutNode[][],
  allNodes: LayoutNode[],
  isHorizontal: boolean,
  spacing: LayoutOptions['spacing'],
  alignment: LayoutOptions['alignment'],
  padding: LayoutOptions['padding']
): Array<{ id: string; position: { x: number; y: number } }> {
  const positions: Array<{ id: string; position: { x: number; y: number } }> = []

  if (isHorizontal) {
    // Horizontal layout (left-to-right)
    let currentX = padding.x

    layers.forEach((layer, layerIndex) => {
      // Calculate layer width (max node width in this layer)
      const layerWidth = Math.max(...layer.map((node) => node.width), 0)

      // Calculate total layer height
      const totalHeight =
        layer.reduce((sum, node) => sum + node.height, 0) + (layer.length - 1) * spacing.vertical

      // Starting Y position based on alignment
      let startY: number
      switch (alignment) {
        case 'start':
          startY = padding.y
          break
        case 'end':
          startY = -totalHeight + padding.y
          break
        default:
          startY = -totalHeight / 2 + padding.y
          break
      }

      let currentY = startY

      layer.forEach((node) => {
        positions.push({
          id: node.id,
          position: { x: currentX, y: currentY },
        })
        currentY += node.height + spacing.vertical
      })

      currentX += layerWidth + spacing.layer
    })
  } else {
    // Vertical layout (top-to-bottom)
    let currentY = padding.y

    layers.forEach((layer, layerIndex) => {
      // Calculate layer height (max node height in this layer)
      const layerHeight = Math.max(...layer.map((node) => node.height), 0)

      // Calculate total layer width
      const totalWidth =
        layer.reduce((sum, node) => sum + node.width, 0) + (layer.length - 1) * spacing.horizontal

      // Starting X position based on alignment
      let startX: number
      switch (alignment) {
        case 'start':
          startX = padding.x
          break
        case 'end':
          startX = -totalWidth + padding.x
          break
        default:
          startX = -totalWidth / 2 + padding.x
          break
      }

      let currentX = startX

      layer.forEach((node) => {
        positions.push({
          id: node.id,
          position: { x: currentX, y: currentY },
        })
        currentX += node.width + spacing.horizontal
      })

      currentY += layerHeight + spacing.layer
    })
  }

  return positions
}

function calculateLayoutMetadata(
  positions: Array<{ id: string; position: { x: number; y: number } }>,
  edges: LayoutEdge[],
  layerCount: number
) {
  const nodeMap = new Map(positions.map((p) => [p.id, p.position]))

  // Calculate bounding box
  const xs = positions.map((p) => p.position.x)
  const ys = positions.map((p) => p.position.y)
  const totalWidth = Math.max(...xs) - Math.min(...xs)
  const totalHeight = Math.max(...ys) - Math.min(...ys)

  // Calculate total edge length
  let totalEdgeLength = 0
  edges.forEach((edge) => {
    const sourcePos = nodeMap.get(edge.source)
    const targetPos = nodeMap.get(edge.target)
    if (sourcePos && targetPos) {
      const dx = targetPos.x - sourcePos.x
      const dy = targetPos.y - sourcePos.y
      totalEdgeLength += Math.sqrt(dx * dx + dy * dy)
    }
  })

  return {
    totalWidth,
    totalHeight,
    layerCount,
    stats: {
      crossings: 0, // TODO: Implement crossing calculation
      totalEdgeLength,
      nodeOverlaps: 0, // No overlaps in hierarchical layout
    },
  }
}
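
A small usage sketch of the algorithm above, illustrative only: the node fields mirror the LayoutNode construction in service.ts further down, and the concrete dimensions and ids are demo values:

import { calculateHierarchicalLayout } from '@/lib/autolayout/algorithms/hierarchical'

const demoNodes = [
  { id: 'start', type: 'starter', name: 'Start', width: 350, height: 100, position: { x: 0, y: 0 }, category: 'trigger' as const, isContainer: false, horizontalHandles: true, isWide: false },
  { id: 'agent', type: 'agent', name: 'Agent', width: 350, height: 100, position: { x: 0, y: 0 }, category: 'processing' as const, isContainer: false, horizontalHandles: true, isWide: false },
]
const demoEdges = [{ id: 'e1', source: 'start', target: 'agent' }]

const result = calculateHierarchicalLayout(demoNodes, demoEdges, {
  strategy: 'hierarchical',
  direction: 'auto',
  spacing: { horizontal: 400, vertical: 200, layer: 600 },
  alignment: 'center',
  padding: { x: 200, y: 200 },
})
// 'start' lands in layer 0 and 'agent' in layer 1, offset on x by the layer width plus spacing.layer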

apps/sim/lib/autolayout/algorithms/smart.ts (new file, 587 lines)
@@ -0,0 +1,587 @@
import type { LayoutEdge, LayoutNode, LayoutOptions, LayoutResult } from '../types'
import { calculateHierarchicalLayout } from './hierarchical'

interface WorkflowAnalysis {
  nodeCount: number
  edgeCount: number
  maxDepth: number
  branchingFactor: number
  hasParallelPaths: boolean
  hasLoops: boolean
  complexity: 'simple' | 'medium' | 'complex'
  recommendedStrategy: 'hierarchical' | 'layered' | 'force-directed'
  recommendedDirection: 'horizontal' | 'vertical'
}

/**
 * Smart layout algorithm that analyzes the workflow and chooses the optimal layout strategy
 */
export function calculateSmartLayout(
  nodes: LayoutNode[],
  edges: LayoutEdge[],
  options: LayoutOptions
): LayoutResult {
  const TIMEOUT_MS = 5000 // 5 second timeout
  const startTime = Date.now()

  const checkTimeout = () => {
    if (Date.now() - startTime > TIMEOUT_MS) {
      throw new Error('Layout calculation timeout - falling back to simple positioning')
    }
  }

  try {
    // Step 1: Analyze the workflow structure
    checkTimeout()
    const analysis = analyzeWorkflow(nodes, edges)

    // Step 2: Choose optimal strategy and direction
    checkTimeout()
    const optimizedOptions: LayoutOptions = {
      ...options,
      strategy: analysis.recommendedStrategy,
      direction: options.direction === 'auto' ? analysis.recommendedDirection : options.direction,
      spacing: optimizeSpacing(analysis, options.spacing),
    }

    // Step 3: Apply the chosen strategy
    checkTimeout()
    let result: LayoutResult

    switch (analysis.recommendedStrategy) {
      case 'hierarchical':
        result = calculateHierarchicalLayout(nodes, edges, optimizedOptions)
        break
      case 'layered':
        result = calculateLayeredLayout(nodes, edges, optimizedOptions)
        break
      case 'force-directed':
        result = calculateForceDirectedLayout(nodes, edges, optimizedOptions)
        break
      default:
        result = calculateHierarchicalLayout(nodes, edges, optimizedOptions)
    }

    // Step 4: Apply post-processing optimizations
    checkTimeout()
    result = applyPostProcessingOptimizations(result, nodes, edges, analysis)

    // Step 5: Update metadata
    result.metadata.strategy = 'smart'

    return result
  } catch (error) {
    if (error instanceof Error && error.message.includes('timeout')) {
      // Fallback to simple grid layout on timeout
      return createFallbackLayout(nodes, edges, options)
    }
    throw error
  }
}

/**
 * Fallback layout for when smart layout times out or fails
 */
function createFallbackLayout(
  nodes: LayoutNode[],
  edges: LayoutEdge[],
  options: LayoutOptions
): LayoutResult {
  const positions: Array<{ id: string; position: { x: number; y: number } }> = []

  // Simple grid layout
  const cols = Math.ceil(Math.sqrt(nodes.length))
  const spacing = 400

  nodes.forEach((node, index) => {
    const row = Math.floor(index / cols)
    const col = index % cols

    positions.push({
      id: node.id,
      position: {
        x: col * spacing + (options.padding?.x || 200),
        y: row * spacing + (options.padding?.y || 200),
      },
    })
  })

  const maxX = Math.max(...positions.map((p) => p.position.x))
  const maxY = Math.max(...positions.map((p) => p.position.y))

  return {
    nodes: positions,
    metadata: {
      strategy: 'fallback-grid',
      totalWidth: maxX + 400,
      totalHeight: maxY + 200,
      layerCount: Math.ceil(nodes.length / cols),
      stats: {
        crossings: 0,
        totalEdgeLength: 0,
        nodeOverlaps: 0,
      },
    },
  }
}

function analyzeWorkflow(nodes: LayoutNode[], edges: LayoutEdge[]): WorkflowAnalysis {
  const nodeCount = nodes.length
  const edgeCount = edges.length

  // Build adjacency lists for analysis
  const outgoing = new Map<string, string[]>()
  const incoming = new Map<string, string[]>()

  edges.forEach((edge) => {
    if (!outgoing.has(edge.source)) outgoing.set(edge.source, [])
    if (!incoming.has(edge.target)) incoming.set(edge.target, [])
    outgoing.get(edge.source)!.push(edge.target)
    incoming.get(edge.target)!.push(edge.source)
  })

  // Calculate max depth using DFS with cycle detection
  const maxDepth = calculateMaxDepth(nodes, edges, outgoing)

  // Calculate average branching factor
  const branchingFactors = Array.from(outgoing.values()).map((targets) => targets.length)
  const avgBranchingFactor =
    branchingFactors.length > 0
      ? branchingFactors.reduce((a, b) => a + b, 0) / branchingFactors.length
      : 0

  // Detect parallel paths
  const hasParallelPaths = detectParallelPaths(nodes, edges, outgoing)

  // Detect loops/cycles (simplified check)
  const hasLoops = nodes.some((node) => node.type === 'loop' || node.isContainer)

  // Determine complexity
  let complexity: WorkflowAnalysis['complexity']
  if (nodeCount <= 5 && edgeCount <= 6 && maxDepth <= 3) {
    complexity = 'simple'
  } else if (nodeCount <= 15 && edgeCount <= 20 && maxDepth <= 6) {
    complexity = 'medium'
  } else {
    complexity = 'complex'
  }

  // Choose optimal strategy based on analysis
  let recommendedStrategy: WorkflowAnalysis['recommendedStrategy']
  let recommendedDirection: WorkflowAnalysis['recommendedDirection']

  if (complexity === 'simple' && !hasParallelPaths) {
    recommendedStrategy = 'hierarchical'
    recommendedDirection = avgBranchingFactor < 1.5 ? 'horizontal' : 'vertical'
  } else if (hasParallelPaths || avgBranchingFactor > 2) {
    recommendedStrategy = 'layered'
    recommendedDirection = 'vertical'
  } else if (complexity === 'complex' && !hasLoops) {
    recommendedStrategy = 'force-directed'
    recommendedDirection = 'horizontal'
  } else {
    recommendedStrategy = 'hierarchical'
    recommendedDirection = maxDepth > 4 ? 'vertical' : 'horizontal'
  }

  // Consider user preferences from nodes
  const horizontalPreference = nodes.filter((n) => n.horizontalHandles).length
  const verticalPreference = nodes.length - horizontalPreference

  if (horizontalPreference > verticalPreference * 1.5) {
    recommendedDirection = 'horizontal'
  } else if (verticalPreference > horizontalPreference * 1.5) {
    recommendedDirection = 'vertical'
  }

  return {
    nodeCount,
    edgeCount,
    maxDepth,
    branchingFactor: avgBranchingFactor,
    hasParallelPaths,
    hasLoops,
    complexity,
    recommendedStrategy,
    recommendedDirection,
  }
}

function calculateMaxDepth(
  nodes: LayoutNode[],
  edges: LayoutEdge[],
  outgoing: Map<string, string[]>
): number {
  const visited = new Set<string>()
  const visiting = new Set<string>() // Track nodes currently being visited to detect cycles
  let maxDepth = 0

  // Find root nodes
  const roots = nodes.filter(
    (node) => !edges.some((edge) => edge.target === node.id) || node.category === 'trigger'
  )

  if (roots.length === 0 && nodes.length > 0) {
    roots.push(nodes[0])
  }

  // DFS to find maximum depth with cycle detection
  function dfs(nodeId: string, depth: number): number {
    if (visiting.has(nodeId)) {
      // Cycle detected, return current depth to avoid infinite loop
      return depth
    }
    if (visited.has(nodeId)) return depth

    visiting.add(nodeId)
    visited.add(nodeId)

    let localMaxDepth = depth
    const children = outgoing.get(nodeId) || []

    for (const childId of children) {
      const childDepth = dfs(childId, depth + 1)
      localMaxDepth = Math.max(localMaxDepth, childDepth)
    }

    visiting.delete(nodeId)
    return localMaxDepth
  }

  for (const root of roots) {
    const rootDepth = dfs(root.id, 0)
    maxDepth = Math.max(maxDepth, rootDepth)
  }

  return maxDepth
}

function detectParallelPaths(
  nodes: LayoutNode[],
  edges: LayoutEdge[],
  outgoing: Map<string, string[]>
): boolean {
  // Quick check - look for nodes that have multiple outgoing edges
  const nodesWithMultipleOutputs = Array.from(outgoing.entries()).filter(
    ([_, targets]) => targets.length > 1
  )

  // If there are too many to check efficiently, just return true (assume parallel paths exist)
  if (nodesWithMultipleOutputs.length > 10) {
    return true
  }

  for (const [nodeId, targets] of nodesWithMultipleOutputs) {
    // Quick heuristic - if targets have different types, they're likely parallel paths
    const targetNodes = targets
      .map((targetId) => nodes.find((n) => n.id === targetId))
      .filter((n): n is LayoutNode => n !== undefined)
    const uniqueCategories = new Set(targetNodes.map((n) => n.category))

    if (uniqueCategories.size > 1) {
      return true // Different categories suggest parallel processing paths
    }

    // Only do expensive convergence check for simple cases
    if (targets.length <= 3) {
      if (hasConvergingPaths(targets, outgoing, new Set())) {
        return true
      }
    }
  }
  return false
}

function hasConvergingPaths(
  startNodes: string[],
  outgoing: Map<string, string[]>,
  visited: Set<string>
): boolean {
  // Early exit for simple cases
  if (startNodes.length <= 1) return false

  const MAX_DEPTH = 10 // Limit traversal depth to prevent infinite loops
  const paths = new Map<string, Set<string>>()

  // Trace each path with depth limit
  startNodes.forEach((startNode, index) => {
    const pathNodes = new Set<string>()
    const queue: Array<{ node: string; depth: number }> = [{ node: startNode, depth: 0 }]
    const pathVisited = new Set<string>()

    while (queue.length > 0) {
      const { node: current, depth } = queue.shift()!

      if (pathVisited.has(current) || depth > MAX_DEPTH) continue
      pathVisited.add(current)
      pathNodes.add(current)

      const children = outgoing.get(current) || []
      children.forEach((childId) => {
        if (!pathVisited.has(childId)) {
          queue.push({ node: childId, depth: depth + 1 })
        }
      })
    }

    paths.set(`path-${index}`, pathNodes)
  })

  // Optimized convergence check - early exit on first intersection
  const pathSets = Array.from(paths.values())
  for (let i = 0; i < pathSets.length; i++) {
    for (let j = i + 1; j < pathSets.length; j++) {
      // Quick check - any common node?
      for (const node of pathSets[i]) {
        if (pathSets[j].has(node)) {
          return true
        }
      }
    }
  }

  return false
}

function optimizeSpacing(
  analysis: WorkflowAnalysis,
  baseSpacing: LayoutOptions['spacing']
): LayoutOptions['spacing'] {
  const { complexity, nodeCount, branchingFactor } = analysis

  let multiplier = 1

  // Adjust spacing based on complexity
  switch (complexity) {
    case 'simple':
      multiplier = 0.8
      break
    case 'medium':
      multiplier = 1.0
      break
    case 'complex':
      multiplier = 1.2
      break
  }

  // Adjust for node count
  if (nodeCount > 20) multiplier *= 1.1
  if (nodeCount > 50) multiplier *= 1.2

  // Adjust for branching factor
  if (branchingFactor > 3) multiplier *= 1.15

  return {
    horizontal: Math.round(baseSpacing.horizontal * multiplier),
    vertical: Math.round(baseSpacing.vertical * multiplier),
    layer: Math.round(baseSpacing.layer * multiplier),
  }
}

// Simplified layered layout for medium complexity workflows
function calculateLayeredLayout(
  nodes: LayoutNode[],
  edges: LayoutEdge[],
  options: LayoutOptions
): LayoutResult {
  // For now, delegate to hierarchical layout with adjusted spacing
  const adjustedOptions = {
    ...options,
    spacing: {
      ...options.spacing,
      vertical: options.spacing.vertical * 1.2,
      layer: options.spacing.layer * 0.9,
    },
  }

  return calculateHierarchicalLayout(nodes, edges, adjustedOptions)
}

// Simplified force-directed layout for complex workflows
function calculateForceDirectedLayout(
  nodes: LayoutNode[],
  edges: LayoutEdge[],
  options: LayoutOptions
): LayoutResult {
  // For now, use a simplified force-directed approach
  const positions: Array<{ id: string; position: { x: number; y: number } }> = []

  // Start with hierarchical layout as base
  const hierarchicalResult = calculateHierarchicalLayout(nodes, edges, options)

  // Apply some force-directed adjustments
  const nodePositions = new Map(hierarchicalResult.nodes.map((n) => [n.id, n.position]))

  // Simple force simulation (simplified)
  const iterations = 10
  const edgeLength = options.spacing.layer

  for (let iter = 0; iter < iterations; iter++) {
    const forces = new Map<string, { x: number; y: number }>()

    // Initialize forces
    nodes.forEach((node) => {
      forces.set(node.id, { x: 0, y: 0 })
    })

    // Attractive forces along edges
    edges.forEach((edge) => {
      const sourcePos = nodePositions.get(edge.source)
      const targetPos = nodePositions.get(edge.target)

      if (sourcePos && targetPos) {
        const dx = targetPos.x - sourcePos.x
        const dy = targetPos.y - sourcePos.y
        const distance = Math.sqrt(dx * dx + dy * dy)

        if (distance > 0) {
          const force = (distance - edgeLength) * 0.1
          const fx = (dx / distance) * force
          const fy = (dy / distance) * force

          const sourceForce = forces.get(edge.source)!
          const targetForce = forces.get(edge.target)!

          sourceForce.x += fx
          sourceForce.y += fy
          targetForce.x -= fx
          targetForce.y -= fy
        }
      }
    })

    // Apply forces with damping
    nodes.forEach((node) => {
      const pos = nodePositions.get(node.id)!
      const force = forces.get(node.id)!

      pos.x += force.x * 0.5
      pos.y += force.y * 0.5
    })
  }

  // Convert back to result format
  nodePositions.forEach((position, id) => {
    positions.push({ id, position })
  })

  return {
    nodes: positions,
    metadata: {
      ...hierarchicalResult.metadata,
      strategy: 'force-directed',
    },
  }
}

function applyPostProcessingOptimizations(
  result: LayoutResult,
  nodes: LayoutNode[],
  edges: LayoutEdge[],
  analysis: WorkflowAnalysis
): LayoutResult {
  // Apply alignment improvements
  result = improveAlignment(result, nodes, analysis)

  // Apply spacing optimizations
  result = optimizeNodeSpacing(result, nodes, edges)

  // Apply aesthetic improvements
  result = improveAesthetics(result, nodes, edges)

  return result
}

function improveAlignment(
  result: LayoutResult,
  nodes: LayoutNode[],
  analysis: WorkflowAnalysis
): LayoutResult {
  // For simple workflows, ensure better alignment of key nodes
  if (analysis.complexity === 'simple') {
    const nodeMap = new Map(nodes.map((n) => [n.id, n]))
    const positions = new Map(result.nodes.map((n) => [n.id, n.position]))

    // Align trigger nodes
    const triggerNodes = result.nodes.filter((n) => {
      const node = nodeMap.get(n.id)
      return node?.category === 'trigger'
    })

    if (triggerNodes.length > 1) {
      const avgY = triggerNodes.reduce((sum, n) => sum + n.position.y, 0) / triggerNodes.length
      triggerNodes.forEach((n) => {
        n.position.y = avgY
      })
    }
  }

  return result
}

function optimizeNodeSpacing(
  result: LayoutResult,
  nodes: LayoutNode[],
  edges: LayoutEdge[]
): LayoutResult {
  // Ensure minimum spacing between nodes
  const minSpacing = 50
  const nodeMap = new Map(nodes.map((n) => [n.id, n]))

  result.nodes.forEach((nodeA, i) => {
    result.nodes.forEach((nodeB, j) => {
      if (i >= j) return

      const nodeAData = nodeMap.get(nodeA.id)
      const nodeBData = nodeMap.get(nodeB.id)

      if (!nodeAData || !nodeBData) return

      const dx = nodeB.position.x - nodeA.position.x
      const dy = nodeB.position.y - nodeA.position.y
      const distance = Math.sqrt(dx * dx + dy * dy)

      const requiredDistance = (nodeAData.width + nodeBData.width) / 2 + minSpacing

      if (distance > 0 && distance < requiredDistance) {
        const adjustmentFactor = (requiredDistance - distance) / distance / 2
        const adjustX = dx * adjustmentFactor
        const adjustY = dy * adjustmentFactor

        nodeA.position.x -= adjustX
        nodeA.position.y -= adjustY
        nodeB.position.x += adjustX
        nodeB.position.y += adjustY
      }
    })
  })

  return result
}

function improveAesthetics(
  result: LayoutResult,
  nodes: LayoutNode[],
  edges: LayoutEdge[]
): LayoutResult {
  // Center the layout around origin
  const positions = result.nodes.map((n) => n.position)
  const minX = Math.min(...positions.map((p) => p.x))
  const minY = Math.min(...positions.map((p) => p.y))
  const maxX = Math.max(...positions.map((p) => p.x))
  const maxY = Math.max(...positions.map((p) => p.y))

  const centerX = (minX + maxX) / 2
  const centerY = (minY + maxY) / 2

  result.nodes.forEach((node) => {
    node.position.x -= centerX
    node.position.y -= centerY
  })

  // Update metadata
  result.metadata.totalWidth = maxX - minX
  result.metadata.totalHeight = maxY - minY

  return result
}
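
Both algorithm files rest on the same layering idea: a node's layer is the length of the longest path reaching it, relaxed to a fixed point with a bounded loop so cycles cannot hang it. A self-contained illustration on a diamond graph, independent of the workflow types:

function longestPathLayers(nodes: string[], edges: Array<[string, string]>): Map<string, number> {
  const layer = new Map(nodes.map((n) => [n, 0]))
  // Relax edges until nothing changes; bounded like assignLayers above to survive cycles
  for (let i = 0; i < nodes.length; i++) {
    let changed = false
    for (const [source, target] of edges) {
      const candidate = (layer.get(source) ?? 0) + 1
      if (candidate > (layer.get(target) ?? 0)) {
        layer.set(target, candidate)
        changed = true
      }
    }
    if (!changed) break
  }
  return layer
}

// longestPathLayers(['A', 'B', 'C', 'D'], [['A', 'B'], ['A', 'C'], ['B', 'D'], ['C', 'D']])
// yields A:0, B:1, C:1, D:2, i.e. D sits one layer past its deepest predecessor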

apps/sim/lib/autolayout/service.ts (new file, 544 lines)
@@ -0,0 +1,544 @@
|
||||
import { createLogger } from '@/lib/logs/console-logger'
|
||||
import { calculateHierarchicalLayout } from './algorithms/hierarchical'
|
||||
import { calculateSmartLayout } from './algorithms/smart'
|
||||
import type { LayoutEdge, LayoutNode, LayoutOptions, LayoutResult, WorkflowGraph } from './types'
|
||||
import { BLOCK_CATEGORIES, BLOCK_DIMENSIONS } from './types'
|
||||
|
||||
const logger = createLogger('AutoLayoutService')
|
||||
|
||||
/**
|
||||
* Main autolayout service for workflow blocks
|
||||
*/
|
||||
export class AutoLayoutService {
|
||||
private static instance: AutoLayoutService
|
||||
|
||||
static getInstance(): AutoLayoutService {
|
||||
if (!AutoLayoutService.instance) {
|
||||
AutoLayoutService.instance = new AutoLayoutService()
|
||||
}
|
||||
return AutoLayoutService.instance
|
||||
}
|
||||
|
||||
/**
|
||||
* Calculate optimal layout for workflow blocks, including nested blocks
|
||||
*/
|
||||
async calculateLayout(
|
||||
workflowGraph: WorkflowGraph,
|
||||
options: Partial<LayoutOptions> = {}
|
||||
): Promise<LayoutResult> {
|
||||
const startTime = Date.now()
|
||||
|
||||
try {
|
||||
// Merge with default options
|
||||
const layoutOptions: LayoutOptions = {
|
||||
strategy: 'smart',
|
||||
direction: 'auto',
|
||||
spacing: {
|
||||
horizontal: 400,
|
||||
vertical: 200,
|
||||
layer: 600,
|
||||
},
|
||||
alignment: 'center',
|
||||
padding: {
|
||||
x: 200,
|
||||
y: 200,
|
||||
},
|
||||
...options,
|
||||
}
|
||||
|
||||
logger.info('Calculating layout with nested block support', {
|
||||
nodeCount: workflowGraph.nodes.length,
|
||||
edgeCount: workflowGraph.edges.length,
|
||||
strategy: layoutOptions.strategy,
|
||||
direction: layoutOptions.direction,
|
||||
})
|
||||
|
||||
// Validate input
|
||||
this.validateWorkflowGraph(workflowGraph)
|
||||
|
||||
// Calculate layout based on strategy
|
||||
let result: LayoutResult
|
||||
|
||||
switch (layoutOptions.strategy) {
|
||||
case 'hierarchical':
|
||||
result = calculateHierarchicalLayout(
|
||||
workflowGraph.nodes,
|
||||
workflowGraph.edges,
|
||||
layoutOptions
|
||||
)
|
||||
break
|
||||
case 'smart':
|
||||
result = calculateSmartLayout(workflowGraph.nodes, workflowGraph.edges, layoutOptions)
|
||||
break
|
||||
default:
|
||||
logger.warn(`Unknown layout strategy: ${layoutOptions.strategy}, falling back to smart`)
|
||||
result = calculateSmartLayout(workflowGraph.nodes, workflowGraph.edges, layoutOptions)
|
||||
}
|
||||
|
||||
const elapsed = Date.now() - startTime
|
||||
logger.info('Layout calculation completed', {
|
||||
strategy: result.metadata.strategy,
|
||||
nodeCount: result.nodes.length,
|
||||
totalWidth: result.metadata.totalWidth,
|
||||
totalHeight: result.metadata.totalHeight,
|
||||
layerCount: result.metadata.layerCount,
|
||||
elapsed: `${elapsed}ms`,
|
||||
})
|
||||
|
||||
return result
|
||||
} catch (error) {
|
||||
const elapsed = Date.now() - startTime
|
||||
logger.error('Layout calculation failed', {
|
||||
error: error instanceof Error ? error.message : 'Unknown error',
|
||||
elapsed: `${elapsed}ms`,
|
||||
nodeCount: workflowGraph.nodes.length,
|
||||
edgeCount: workflowGraph.edges.length,
|
||||
})
|
||||
throw error
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Convert workflow store blocks and edges to layout format with nested block support
|
||||
*/
|
||||
convertWorkflowToGraph(blocks: Record<string, any>, edges: any[]): WorkflowGraph {
|
||||
try {
|
||||
// Convert all blocks to layout nodes
|
||||
const allNodes: LayoutNode[] = Object.values(blocks).map((block) => {
|
||||
const category = BLOCK_CATEGORIES[block.type] || 'processing'
|
||||
const isContainer = block.type === 'loop' || block.type === 'parallel'
|
||||
|
||||
// Determine dimensions
|
||||
let dimensions = BLOCK_DIMENSIONS.default
|
||||
if (isContainer) {
|
||||
dimensions = BLOCK_DIMENSIONS.container
|
||||
} else if (block.isWide) {
|
||||
dimensions = BLOCK_DIMENSIONS.wide
|
||||
}
|
||||
|
||||
// Use actual height if available
|
||||
if (block.height && block.height > 0) {
|
||||
dimensions = { ...dimensions, height: block.height }
|
||||
}
|
||||
|
||||
return {
|
||||
id: block.id,
|
||||
type: block.type,
|
||||
name: block.name || `${block.type} Block`,
|
||||
width: dimensions.width,
|
||||
height: dimensions.height,
|
||||
position: block.position,
|
||||
category,
|
||||
isContainer,
|
||||
parentId: block.data?.parentId || block.parentId,
|
||||
horizontalHandles: block.horizontalHandles ?? true,
|
||||
isWide: block.isWide ?? false,
|
||||
}
|
||||
})
|
||||
|
||||
// Convert edges to layout format
|
||||
const layoutEdges: LayoutEdge[] = edges.map((edge) => ({
|
||||
id: edge.id,
|
||||
source: edge.source,
|
||||
target: edge.target,
|
||||
sourceHandle: edge.sourceHandle,
|
||||
targetHandle: edge.targetHandle,
|
||||
type: edge.type,
|
||||
}))
|
||||
|
||||
// For the main graph, only include top-level nodes
|
||||
const topLevelNodes = allNodes.filter((node) => !node.parentId)
|
||||
const topLevelEdges = layoutEdges.filter((edge) => {
|
||||
const sourceIsTopLevel = topLevelNodes.some((n) => n.id === edge.source)
|
||||
const targetIsTopLevel = topLevelNodes.some((n) => n.id === edge.target)
|
||||
return sourceIsTopLevel && targetIsTopLevel
|
||||
})
|
||||
|
||||
logger.info('Converted workflow to graph with nested support', {
|
||||
totalNodes: allNodes.length,
|
||||
topLevelNodes: topLevelNodes.length,
|
||||
edges: layoutEdges.length,
|
||||
topLevelEdges: topLevelEdges.length,
|
||||
categories: this.countByCategory(topLevelNodes),
|
||||
})
|
||||
|
||||
return {
|
||||
nodes: topLevelNodes,
|
||||
edges: topLevelEdges,
|
||||
}
|
||||
} catch (error) {
|
||||
logger.error('Failed to convert workflow to graph', {
|
||||
error: error instanceof Error ? error.message : 'Unknown error',
|
||||
blockCount: Object.keys(blocks).length,
|
||||
edgeCount: edges.length,
|
||||
})
|
||||
throw error
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Convert layout result back to workflow store format with nested block support
|
||||
*/
|
||||
convertResultToWorkflow(
|
||||
result: LayoutResult,
|
||||
originalBlocks: Record<string, any>,
|
||||
allBlocks: Record<string, any>,
|
||||
allEdges: any[]
|
||||
): Record<string, any> {
|
||||
const updatedBlocks = { ...originalBlocks }
|
||||
|
||||
// Apply top-level layout results
|
||||
result.nodes.forEach(({ id, position }) => {
|
||||
if (updatedBlocks[id]) {
|
||||
updatedBlocks[id] = {
|
||||
...updatedBlocks[id],
|
||||
position: {
|
||||
x: Math.round(position.x),
|
||||
y: Math.round(position.y),
|
||||
},
|
||||
}
|
||||
}
|
||||
})
|
||||
|
||||
// Handle nested blocks inside containers
|
||||
const containerBlocks = result.nodes.filter(
|
||||
(node) =>
|
||||
updatedBlocks[node.id]?.type === 'loop' || updatedBlocks[node.id]?.type === 'parallel'
|
||||
)
|
||||
|
||||
containerBlocks.forEach((containerNode) => {
|
||||
const containerId = containerNode.id
|
||||
|
||||
// Get child blocks for this container
|
||||
const childBlocks = Object.fromEntries(
|
||||
Object.entries(allBlocks).filter(
|
||||
([_, block]) => block.data?.parentId === containerId || block.parentId === containerId
|
||||
)
|
||||
)
|
||||
|
||||
if (Object.keys(childBlocks).length === 0) return
|
||||
|
||||
// Get edges between child blocks
|
||||
const childEdges = allEdges.filter(
|
||||
(edge) => childBlocks[edge.source] && childBlocks[edge.target]
|
||||
)
|
||||
|
||||
// Layout child blocks with container-specific options
|
||||
const childGraph = this.createChildGraph(childBlocks, childEdges)
|
||||
|
||||
if (childGraph.nodes.length > 0) {
|
||||
try {
|
||||
const childLayoutOptions: LayoutOptions = {
|
||||
strategy: 'smart',
|
||||
direction: 'auto',
|
||||
spacing: {
|
||||
horizontal: 300,
|
||||
vertical: 150,
|
||||
layer: 400,
|
||||
},
|
||||
alignment: 'center',
|
||||
padding: {
|
||||
x: 50,
|
||||
y: 80,
|
||||
},
|
||||
}
|
||||
|
||||
const childResult = this.calculateChildLayout(childGraph, childLayoutOptions)
|
||||
|
||||
// Apply child positions relative to container
|
||||
childResult.nodes.forEach(({ id, position }) => {
|
||||
if (updatedBlocks[id]) {
|
||||
updatedBlocks[id] = {
|
||||
...updatedBlocks[id],
|
||||
position: {
|
||||
x: Math.round(position.x),
|
||||
y: Math.round(position.y),
|
||||
},
|
||||
}
|
||||
}
|
||||
})
|
||||
|
||||
// Update container dimensions to fit children
|
||||
const containerDimensions = this.calculateContainerDimensions(
|
||||
childResult.nodes,
|
||||
childBlocks
|
||||
)
|
||||
|
||||
if (updatedBlocks[containerId]) {
|
||||
updatedBlocks[containerId] = {
|
||||
...updatedBlocks[containerId],
|
||||
data: {
|
||||
...updatedBlocks[containerId].data,
|
||||
width: containerDimensions.width,
|
||||
height: containerDimensions.height,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
logger.info('Laid out child blocks for container', {
|
||||
containerId,
|
||||
childCount: childResult.nodes.length,
|
||||
containerWidth: containerDimensions.width,
|
||||
containerHeight: containerDimensions.height,
|
||||
})
|
||||
} catch (error) {
|
||||
logger.warn('Failed to layout child blocks for container', {
|
||||
containerId,
|
||||
error: error instanceof Error ? error.message : 'Unknown error',
|
||||
childCount: Object.keys(childBlocks).length,
|
||||
})
|
||||
}
|
||||
}
|
||||
})
|
||||
|
||||
logger.info('Converted layout result to workflow format with nested blocks', {
|
||||
updatedNodes: result.nodes.length,
|
||||
totalBlocks: Object.keys(updatedBlocks).length,
|
||||
containerBlocks: containerBlocks.length,
|
||||
})
|
||||
|
||||
return updatedBlocks
|
||||
}

  /**
   * Create a graph for child blocks inside a container
   */
  private createChildGraph(childBlocks: Record<string, any>, childEdges: any[]): WorkflowGraph {
    const nodes: LayoutNode[] = Object.values(childBlocks).map((block) => {
      const category = BLOCK_CATEGORIES[block.type] || 'processing'
      const isContainer = block.type === 'loop' || block.type === 'parallel'

      let dimensions = BLOCK_DIMENSIONS.default
      if (isContainer) {
        dimensions = BLOCK_DIMENSIONS.container
      } else if (block.isWide) {
        dimensions = BLOCK_DIMENSIONS.wide
      }

      if (block.height && block.height > 0) {
        dimensions = { ...dimensions, height: block.height }
      }

      return {
        id: block.id,
        type: block.type,
        name: block.name || `${block.type} Block`,
        width: dimensions.width,
        height: dimensions.height,
        position: block.position,
        category,
        isContainer,
        parentId: block.data?.parentId || block.parentId,
        horizontalHandles: block.horizontalHandles ?? true,
        isWide: block.isWide ?? false,
      }
    })

    const edges: LayoutEdge[] = childEdges.map((edge) => ({
      id: edge.id,
      source: edge.source,
      target: edge.target,
      sourceHandle: edge.sourceHandle,
      targetHandle: edge.targetHandle,
      type: edge.type,
    }))

    return { nodes, edges }
  }

  /**
   * Calculate layout for child blocks using simplified algorithms
   */
  private calculateChildLayout(childGraph: WorkflowGraph, options: LayoutOptions): LayoutResult {
    // Use hierarchical layout for child blocks as it's simpler and more predictable
    return calculateHierarchicalLayout(childGraph.nodes, childGraph.edges, options)
  }

  /**
   * Calculate optimal container dimensions based on child blocks
   */
  private calculateContainerDimensions(
    childPositions: Array<{ id: string; position: { x: number; y: number } }>,
    childBlocks: Record<string, any>
  ): { width: number; height: number } {
    const minWidth = 500
    const minHeight = 300
    const padding = 100

    if (childPositions.length === 0) {
      return { width: minWidth, height: minHeight }
    }

    let maxX = 0
    let maxY = 0

    childPositions.forEach(({ id, position }) => {
      const block = childBlocks[id]
      if (!block) return

      let blockWidth = BLOCK_DIMENSIONS.default.width
      let blockHeight = BLOCK_DIMENSIONS.default.height

      if (block.isWide) {
        blockWidth = BLOCK_DIMENSIONS.wide.width
      }
      if (block.height && block.height > 0) {
        blockHeight = block.height
      }

      maxX = Math.max(maxX, position.x + blockWidth)
      maxY = Math.max(maxY, position.y + blockHeight)
    })

    return {
      width: Math.max(minWidth, maxX + padding),
      height: Math.max(minHeight, maxY + padding),
    }
  }
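
  // Worked example of the sizing math above (numbers illustrative, using the
  // BLOCK_DIMENSIONS defaults): one wide child at { x: 400, y: 120 } gives
  // maxX = 400 + 480 = 880 and maxY = 120 + 120 = 240, so the container becomes
  // max(500, 880 + 100) x max(300, 240 + 100) = 980 x 340.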

  /**
   * Get default layout options optimized for different scenarios
   */
  getDefaultOptions(scenario: 'simple' | 'complex' | 'presentation' = 'simple'): LayoutOptions {
    const baseOptions: LayoutOptions = {
      strategy: 'smart',
      direction: 'auto',
      spacing: {
        horizontal: 400,
        vertical: 200,
        layer: 600,
      },
      alignment: 'center',
      padding: {
        x: 200,
        y: 200,
      },
    }

    switch (scenario) {
      case 'simple':
        return {
          ...baseOptions,
          spacing: {
            horizontal: 350,
            vertical: 150,
            layer: 500,
          },
        }
      case 'complex':
        return {
          ...baseOptions,
          spacing: {
            horizontal: 450,
            vertical: 250,
            layer: 700,
          },
          padding: {
            x: 300,
            y: 300,
          },
        }
      case 'presentation':
        return {
          ...baseOptions,
          spacing: {
            horizontal: 500,
            vertical: 300,
            layer: 800,
          },
          padding: {
            x: 400,
            y: 400,
          },
          alignment: 'center',
        }
      default:
        return baseOptions
    }
  }
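
  // Usage sketch (hypothetical call site; `graph` is assumed to be a validated
  // WorkflowGraph):
  //   const options = autoLayoutService.getDefaultOptions('presentation')
  //   const result = await autoLayoutService.calculateLayout(graph, options)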

  private validateWorkflowGraph(graph: WorkflowGraph): void {
    if (!graph.nodes || graph.nodes.length === 0) {
      throw new Error('Workflow graph must contain at least one node')
    }

    if (!graph.edges) {
      throw new Error('Workflow graph must have edges array (can be empty)')
    }

    // Validate node structure
    graph.nodes.forEach((node, index) => {
      if (!node.id) {
        throw new Error(`Node at index ${index} is missing id`)
      }
      if (!node.type) {
        throw new Error(`Node ${node.id} is missing type`)
      }
      if (typeof node.width !== 'number' || node.width <= 0) {
        throw new Error(`Node ${node.id} has invalid width`)
      }
      if (typeof node.height !== 'number' || node.height <= 0) {
        throw new Error(`Node ${node.id} has invalid height`)
      }
    })

    // Validate edge structure
    graph.edges.forEach((edge, index) => {
      if (!edge.id) {
        throw new Error(`Edge at index ${index} is missing id`)
      }
      if (!edge.source) {
        throw new Error(`Edge ${edge.id} is missing source`)
      }
      if (!edge.target) {
        throw new Error(`Edge ${edge.id} is missing target`)
      }

      // Check if source and target nodes exist
      const sourceExists = graph.nodes.some((n) => n.id === edge.source)
      const targetExists = graph.nodes.some((n) => n.id === edge.target)

      if (!sourceExists) {
        throw new Error(`Edge ${edge.id} references non-existent source node: ${edge.source}`)
      }
      if (!targetExists) {
        throw new Error(`Edge ${edge.id} references non-existent target node: ${edge.target}`)
      }
    })
  }

  private countByCategory(nodes: LayoutNode[]): Record<string, number> {
    const counts: Record<string, number> = {}
    nodes.forEach((node) => {
      counts[node.category] = (counts[node.category] || 0) + 1
    })
    return counts
  }
}

// Export singleton instance
export const autoLayoutService = AutoLayoutService.getInstance()

// Export utility functions
export function createWorkflowGraph(blocks: Record<string, any>, edges: any[]): WorkflowGraph {
  return autoLayoutService.convertWorkflowToGraph(blocks, edges)
}

export function applyLayoutToWorkflow(
  result: LayoutResult,
  originalBlocks: Record<string, any>,
  allBlocks: Record<string, any>,
  allEdges: any[]
): Record<string, any> {
  return autoLayoutService.convertResultToWorkflow(result, originalBlocks, allBlocks, allEdges)
}

export async function autoLayoutWorkflow(
  blocks: Record<string, any>,
  edges: any[],
  options: Partial<LayoutOptions> = {}
): Promise<Record<string, any>> {
  const graph = createWorkflowGraph(blocks, edges)
  const result = await autoLayoutService.calculateLayout(graph, options)
  return applyLayoutToWorkflow(result, blocks, blocks, edges)
}
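
// Usage sketch (illustrative; the block/edge records are assumed to match the
// workflow store's shape):
//   const laidOut = await autoLayoutWorkflow(blocks, edges, { strategy: 'hierarchical' })
//   // `laidOut` mirrors `blocks`, with rounded positions and containers resized to fit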

apps/sim/lib/autolayout/types.ts (new file, 101 lines)
@@ -0,0 +1,101 @@
export interface LayoutNode {
  id: string
  type: string
  name: string
  // Physical dimensions
  width: number
  height: number
  // Current position (if any)
  position?: { x: number; y: number }
  // Metadata for layout decisions
  category: 'trigger' | 'processing' | 'logic' | 'output' | 'container'
  isContainer: boolean
  parentId?: string
  // Handle configuration
  horizontalHandles?: boolean
  isWide?: boolean
}

export interface LayoutEdge {
  id: string
  source: string
  target: string
  sourceHandle?: string
  targetHandle?: string
  type?: string
}

export interface LayoutOptions {
  strategy: 'hierarchical' | 'force-directed' | 'layered' | 'smart'
  direction: 'horizontal' | 'vertical' | 'auto'
  spacing: {
    horizontal: number
    vertical: number
    layer: number
  }
  alignment: 'start' | 'center' | 'end'
  padding: {
    x: number
    y: number
  }
  constraints?: {
    maxWidth?: number
    maxHeight?: number
    preserveUserPositions?: boolean
  }
}

export interface LayoutResult {
  nodes: Array<{
    id: string
    position: { x: number; y: number }
  }>
  metadata: {
    strategy: string
    totalWidth: number
    totalHeight: number
    layerCount: number
    stats: {
      crossings: number
      totalEdgeLength: number
      nodeOverlaps: number
    }
  }
}

export interface WorkflowGraph {
  nodes: LayoutNode[]
  edges: LayoutEdge[]
}

// Block category mapping for better layout decisions
export const BLOCK_CATEGORIES: Record<string, LayoutNode['category']> = {
  // Triggers
  starter: 'trigger',
  schedule: 'trigger',
  webhook: 'trigger',

  // Processing
  agent: 'processing',
  api: 'processing',
  function: 'processing',

  // Logic
  condition: 'logic',
  router: 'logic',
  evaluator: 'logic',

  // Output
  response: 'output',

  // Containers
  loop: 'container',
  parallel: 'container',
}

// Default dimensions for different block types
export const BLOCK_DIMENSIONS: Record<string, { width: number; height: number }> = {
  default: { width: 320, height: 120 },
  wide: { width: 480, height: 120 },
  container: { width: 500, height: 300 },
}
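
// Example of a node literal these types admit (values illustrative):
//   const node: LayoutNode = {
//     id: 'agent-1',
//     type: 'agent',
//     name: 'Agent Block',
//     width: BLOCK_DIMENSIONS.default.width,
//     height: BLOCK_DIMENSIONS.default.height,
//     category: BLOCK_CATEGORIES.agent,
//     isContainer: false,
//   }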

(deleted file, 447 lines)
@@ -1,447 +0,0 @@
import { createLogger } from '@/lib/logs/console-logger'

const logger = createLogger('CopilotAPI')

/**
 * Message interface for copilot conversations
 */
export interface CopilotMessage {
  id: string
  role: 'user' | 'assistant' | 'system'
  content: string
  timestamp: string
  citations?: Array<{
    id: number
    title: string
    url: string
    similarity?: number
  }>
}

/**
 * Chat interface for copilot conversations
 */
export interface CopilotChat {
  id: string
  title: string | null
  model: string
  messages: CopilotMessage[]
  messageCount: number
  createdAt: Date
  updatedAt: Date
}

/**
 * Request interface for sending messages
 */
export interface SendMessageRequest {
  message: string
  chatId?: string
  workflowId?: string
  createNewChat?: boolean
  stream?: boolean
}

/**
 * Request interface for docs queries
 */
export interface DocsQueryRequest {
  query: string
  topK?: number
  provider?: string
  model?: string
  stream?: boolean
  chatId?: string
  workflowId?: string
  createNewChat?: boolean
}

/**
 * Create a new copilot chat
 */
export async function createChat(
  workflowId: string,
  options: {
    title?: string
    initialMessage?: string
  } = {}
): Promise<{
  success: boolean
  chat?: CopilotChat
  error?: string
}> {
  try {
    const response = await fetch('/api/copilot', {
      method: 'PUT',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        workflowId,
        ...options,
      }),
    })

    const data = await response.json()

    if (!response.ok) {
      throw new Error(data.error || 'Failed to create chat')
    }

    return {
      success: true,
      chat: data.chat,
    }
  } catch (error) {
    logger.error('Failed to create chat:', error)
    return {
      success: false,
      error: error instanceof Error ? error.message : 'Unknown error',
    }
  }
}

/**
 * List chats for a specific workflow
 */
export async function listChats(
  workflowId: string,
  options: {
    limit?: number
    offset?: number
  } = {}
): Promise<{
  success: boolean
  chats: CopilotChat[]
  error?: string
}> {
  try {
    const params = new URLSearchParams({
      workflowId,
      limit: (options.limit || 50).toString(),
      offset: (options.offset || 0).toString(),
    })

    const response = await fetch(`/api/copilot?${params}`)
    const data = await response.json()

    if (!response.ok) {
      throw new Error(data.error || 'Failed to list chats')
    }

    return {
      success: true,
      chats: data.chats || [],
    }
  } catch (error) {
    logger.error('Failed to list chats:', error)
    return {
      success: false,
      chats: [],
      error: error instanceof Error ? error.message : 'Unknown error',
    }
  }
}

/**
 * Get a specific chat with full message history
 */
export async function getChat(chatId: string): Promise<{
  success: boolean
  chat?: CopilotChat
  error?: string
}> {
  try {
    const response = await fetch(`/api/copilot?chatId=${chatId}`)
    const data = await response.json()

    if (!response.ok) {
      throw new Error(data.error || 'Failed to get chat')
    }

    return {
      success: true,
      chat: data.chat,
    }
  } catch (error) {
    logger.error('Failed to get chat:', error)
    return {
      success: false,
      error: error instanceof Error ? error.message : 'Unknown error',
    }
  }
}

/**
 * Update a chat with new messages
 */
export async function updateChatMessages(
  chatId: string,
  messages: CopilotMessage[]
): Promise<{
  success: boolean
  chat?: CopilotChat
  error?: string
}> {
  try {
    const response = await fetch(`/api/copilot`, {
      method: 'PATCH',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        chatId,
        messages,
      }),
    })

    const data = await response.json()

    if (!response.ok) {
      throw new Error(data.error || 'Failed to update chat')
    }

    return {
      success: true,
      chat: data.chat,
    }
  } catch (error) {
    logger.error('Failed to update chat messages:', error)
    return {
      success: false,
      error: error instanceof Error ? error.message : 'Unknown error',
    }
  }
}

/**
 * Delete a chat
 */
export async function deleteChat(chatId: string): Promise<{
  success: boolean
  error?: string
}> {
  try {
    const response = await fetch(`/api/copilot?chatId=${chatId}`, {
      method: 'DELETE',
    })

    const data = await response.json()

    if (!response.ok) {
      throw new Error(data.error || 'Failed to delete chat')
    }

    return { success: true }
  } catch (error) {
    logger.error('Failed to delete chat:', error)
    return {
      success: false,
      error: error instanceof Error ? error.message : 'Unknown error',
    }
  }
}

/**
 * Send a message using the unified copilot API
 */
export async function sendMessage(request: SendMessageRequest): Promise<{
  success: boolean
  response?: string
  chatId?: string
  citations?: Array<{
    id: number
    title: string
    url: string
    similarity?: number
  }>
  error?: string
}> {
  try {
    const response = await fetch('/api/copilot', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(request),
    })

    const data = await response.json()

    if (!response.ok) {
      throw new Error(data.error || 'Failed to send message')
    }

    return {
      success: true,
      response: data.response,
      chatId: data.chatId,
      citations: data.citations,
    }
  } catch (error) {
    logger.error('Failed to send message:', error)
    return {
      success: false,
      error: error instanceof Error ? error.message : 'Unknown error',
    }
  }
}

/**
 * Send a streaming message using the unified copilot API
 */
export async function sendStreamingMessage(request: SendMessageRequest): Promise<{
  success: boolean
  stream?: ReadableStream
  chatId?: string
  error?: string
}> {
  try {
    logger.debug('Sending streaming message request:', {
      message: request.message,
      stream: true,
      hasWorkflowId: !!request.workflowId,
    })

    const response = await fetch('/api/copilot', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ ...request, stream: true }),
    })

    logger.debug('Fetch response received:', {
      ok: response.ok,
      status: response.status,
      statusText: response.statusText,
      hasBody: !!response.body,
      contentType: response.headers.get('content-type'),
    })

    if (!response.ok) {
      let errorMessage = 'Failed to send streaming message'
      try {
        const errorData = await response.json()
        logger.error('Error response:', errorData)
        errorMessage = errorData.error || errorMessage
      } catch {
        // Response is not JSON, use status text or default message
        logger.error('Non-JSON error response:', response.statusText)
        errorMessage = response.statusText || errorMessage
      }
      throw new Error(errorMessage)
    }

    if (!response.body) {
      logger.error('No response body received')
      throw new Error('No response body received')
    }

    logger.debug('Successfully received stream')
    return {
      success: true,
      stream: response.body,
    }
  } catch (error) {
    logger.error('Failed to send streaming message:', error)
    return {
      success: false,
      error: error instanceof Error ? error.message : 'Unknown error',
    }
  }
}

/**
 * Send a message using the docs RAG API with chat context
 */
export async function sendDocsMessage(request: DocsQueryRequest): Promise<{
  success: boolean
  response?: string
  chatId?: string
  sources?: Array<{
    title: string
    document: string
    link: string
    similarity: number
  }>
  error?: string
}> {
  try {
    const response = await fetch('/api/copilot/docs', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(request),
    })

    const data = await response.json()

    if (!response.ok) {
      throw new Error(data.error || 'Failed to send message')
    }

    return {
      success: true,
      response: data.response,
      chatId: data.chatId,
      sources: data.sources,
    }
  } catch (error) {
    logger.error('Failed to send docs message:', error)
    return {
      success: false,
      error: error instanceof Error ? error.message : 'Unknown error',
    }
  }
}

/**
 * Send a streaming docs message
 */
export async function sendStreamingDocsMessage(request: DocsQueryRequest): Promise<{
  success: boolean
  stream?: ReadableStream
  chatId?: string
  error?: string
}> {
  try {
    logger.debug('sendStreamingDocsMessage called with:', request)

    const response = await fetch('/api/copilot/docs', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ ...request, stream: true }),
    })

    logger.debug('Fetch response received:', {
      status: response.status,
      statusText: response.statusText,
      headers: Object.fromEntries(response.headers.entries()),
      ok: response.ok,
      hasBody: !!response.body,
    })

    if (!response.ok) {
      let errorMessage = 'Failed to send streaming docs message'
      try {
        const errorData = await response.json()
        logger.error('API error response:', errorData)
        errorMessage = errorData.error || errorMessage
      } catch {
        // Response is not JSON, use status text or default message
        logger.error('Non-JSON error response:', response.statusText)
        errorMessage = response.statusText || errorMessage
      }
      throw new Error(errorMessage)
    }

    if (!response.body) {
      logger.error('No response body received')
      throw new Error('No response body received')
    }

    logger.debug('Returning successful result with stream')
    return {
      success: true,
      stream: response.body,
    }
  } catch (error) {
    logger.error('Failed to send streaming docs message:', error)
    return {
      success: false,
      error: error instanceof Error ? error.message : 'Unknown error',
    }
  }
}

apps/sim/lib/copilot/api.ts (new file, 486 lines)
@@ -0,0 +1,486 @@
import { createLogger } from '@/lib/logs/console-logger'

const logger = createLogger('CopilotAPI')

/**
 * Citation interface for documentation references
 */
export interface Citation {
  id: number
  title: string
  url: string
  similarity?: number
}

/**
 * Checkpoint interface for copilot workflow checkpoints
 */
export interface CopilotCheckpoint {
  id: string
  userId: string
  workflowId: string
  chatId: string
  yaml: string
  createdAt: Date
  updatedAt: Date
}

/**
 * Message interface for copilot conversations
 */
export interface CopilotMessage {
  id: string
  role: 'user' | 'assistant' | 'system'
  content: string
  timestamp: string
  citations?: Citation[]
}

/**
 * Chat interface for copilot conversations
 */
export interface CopilotChat {
  id: string
  title: string | null
  model: string
  messages: CopilotMessage[]
  messageCount: number
  createdAt: Date
  updatedAt: Date
}

/**
 * Request interface for sending messages
 */
export interface SendMessageRequest {
  message: string
  chatId?: string
  workflowId?: string
  mode?: 'ask' | 'agent'
  createNewChat?: boolean
  stream?: boolean
}

/**
 * Request interface for docs queries
 */
export interface DocsQueryRequest {
  query: string
  topK?: number
  provider?: string
  model?: string
  stream?: boolean
  chatId?: string
  workflowId?: string
  createNewChat?: boolean
}

/**
 * Options for creating a new chat
 */
export interface CreateChatOptions {
  title?: string
  initialMessage?: string
}

/**
 * Options for listing chats
 */
export interface ListChatsOptions {
  limit?: number
  offset?: number
}

/**
 * Options for listing checkpoints
 */
export interface ListCheckpointsOptions {
  limit?: number
  offset?: number
}

/**
 * API response interface
 */
export interface ApiResponse<T = any> {
  success: boolean
  error?: string
  data?: T
}

/**
 * Chat response interface
 */
export interface ChatResponse extends ApiResponse<CopilotChat> {
  chat?: CopilotChat
}

/**
 * Chats list response interface
 */
export interface ChatsListResponse extends ApiResponse<CopilotChat[]> {
  chats: CopilotChat[]
}

/**
 * Message response interface
 */
export interface MessageResponse extends ApiResponse {
  response?: string
  chatId?: string
  citations?: Citation[]
}

/**
 * Streaming response interface
 */
export interface StreamingResponse extends ApiResponse {
  stream?: ReadableStream
  chatId?: string
}

/**
 * Docs response interface
 */
export interface DocsResponse extends ApiResponse {
  response?: string
  chatId?: string
  sources?: Array<{
    title: string
    document: string
    link: string
    similarity: number
  }>
}

/**
 * Checkpoints response interface
 */
export interface CheckpointsResponse extends ApiResponse<CopilotCheckpoint[]> {
  checkpoints: CopilotCheckpoint[]
}

/**
 * Helper function to handle API errors
 */
async function handleApiError(response: Response, defaultMessage: string): Promise<string> {
  try {
    const errorData = await response.json()
    return errorData.error || defaultMessage
  } catch {
    return response.statusText || defaultMessage
  }
}

/**
 * Helper function to make API requests with consistent error handling
 */
async function makeApiRequest<T>(
  url: string,
  options: RequestInit,
  defaultErrorMessage: string
): Promise<ApiResponse<T>> {
  try {
    const response = await fetch(url, options)
    const data = await response.json()

    if (!response.ok) {
      throw new Error(data.error || defaultErrorMessage)
    }

    return {
      success: true,
      data,
    }
  } catch (error) {
    const errorMessage = error instanceof Error ? error.message : 'Unknown error'
    logger.error(`API request failed: ${defaultErrorMessage}`, error)
    return {
      success: false,
      error: errorMessage,
    }
  }
}

/**
 * Create a new copilot chat
 */
export async function createChat(
  workflowId: string,
  options: CreateChatOptions = {}
): Promise<ChatResponse> {
  const result = await makeApiRequest<any>(
    '/api/copilot',
    {
      method: 'PUT',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        workflowId,
        ...options,
      }),
    },
    'Failed to create chat'
  )

  return {
    success: result.success,
    chat: result.data?.chat,
    error: result.error,
  }
}

/**
 * List chats for a specific workflow
 */
export async function listChats(
  workflowId: string,
  options: ListChatsOptions = {}
): Promise<ChatsListResponse> {
  const params = new URLSearchParams({
    workflowId,
    limit: (options.limit || 50).toString(),
    offset: (options.offset || 0).toString(),
  })

  const result = await makeApiRequest<any>(`/api/copilot?${params}`, {}, 'Failed to list chats')

  return {
    success: result.success,
    chats: result.data?.chats || [],
    error: result.error,
  }
}

/**
 * Get a specific chat with full message history
 */
export async function getChat(chatId: string): Promise<ChatResponse> {
  const result = await makeApiRequest<any>(
    `/api/copilot?chatId=${chatId}`,
    {},
    'Failed to get chat'
  )

  return {
    success: result.success,
    chat: result.data?.chat,
    error: result.error,
  }
}

/**
 * Update a chat with new messages
 */
export async function updateChatMessages(
  chatId: string,
  messages: CopilotMessage[]
): Promise<ChatResponse> {
  const result = await makeApiRequest<any>(
    '/api/copilot',
    {
      method: 'PATCH',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        chatId,
        messages,
      }),
    },
    'Failed to update chat'
  )

  return {
    success: result.success,
    chat: result.data?.chat,
    error: result.error,
  }
}

/**
 * Delete a chat
 */
export async function deleteChat(chatId: string): Promise<ApiResponse> {
  const result = await makeApiRequest<any>(
    `/api/copilot?chatId=${chatId}`,
    { method: 'DELETE' },
    'Failed to delete chat'
  )

  return {
    success: result.success,
    error: result.error,
  }
}

/**
 * Send a message using the unified copilot API
 */
export async function sendMessage(request: SendMessageRequest): Promise<MessageResponse> {
  const result = await makeApiRequest<any>(
    '/api/copilot',
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(request),
    },
    'Failed to send message'
  )

  return {
    success: result.success,
    response: result.data?.response,
    chatId: result.data?.chatId,
    citations: result.data?.citations,
    error: result.error,
  }
}

/**
 * Send a streaming message using the unified copilot API
 */
export async function sendStreamingMessage(
  request: SendMessageRequest
): Promise<StreamingResponse> {
  try {
    const response = await fetch('/api/copilot', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ ...request, stream: true }),
    })

    if (!response.ok) {
      const errorMessage = await handleApiError(response, 'Failed to send streaming message')
      throw new Error(errorMessage)
    }

    if (!response.body) {
      throw new Error('No response body received')
    }

    return {
      success: true,
      stream: response.body,
    }
  } catch (error) {
    logger.error('Failed to send streaming message:', error)
    return {
      success: false,
      error: error instanceof Error ? error.message : 'Unknown error',
    }
  }
}
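
// Consumption sketch (illustrative; `handleChunk` is a hypothetical callback):
//   const { success, stream } = await sendStreamingMessage({ message: 'Build me a workflow' })
//   if (success && stream) {
//     const reader = stream.getReader()
//     const decoder = new TextDecoder()
//     for (let chunk = await reader.read(); !chunk.done; chunk = await reader.read()) {
//       handleChunk(decoder.decode(chunk.value))
//     }
//   }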

/**
 * Send a documentation query using the main copilot API
 */
export async function sendDocsMessage(request: DocsQueryRequest): Promise<DocsResponse> {
  const message = `Please search the documentation and answer this question: ${request.query}`

  const result = await makeApiRequest<any>(
    '/api/copilot',
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        message,
        chatId: request.chatId,
        workflowId: request.workflowId,
        createNewChat: request.createNewChat,
        stream: request.stream,
      }),
    },
    'Failed to send docs message'
  )

  return {
    success: result.success,
    response: result.data?.response,
    chatId: result.data?.chatId,
    sources: [], // Main agent embeds citations directly in response
    error: result.error,
  }
}

/**
 * Send a streaming documentation query using the main copilot API
 */
export async function sendStreamingDocsMessage(
  request: DocsQueryRequest
): Promise<StreamingResponse> {
  try {
    const message = `Please search the documentation and answer this question: ${request.query}`

    const response = await fetch('/api/copilot', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        message,
        chatId: request.chatId,
        workflowId: request.workflowId,
        createNewChat: request.createNewChat,
        stream: true,
      }),
    })

    if (!response.ok) {
      const errorMessage = await handleApiError(response, 'Failed to send streaming docs message')
      throw new Error(errorMessage)
    }

    if (!response.body) {
      throw new Error('No response body received')
    }

    return {
      success: true,
      stream: response.body,
    }
  } catch (error) {
    logger.error('Failed to send streaming docs message:', error)
    return {
      success: false,
      error: error instanceof Error ? error.message : 'Unknown error',
    }
  }
}

/**
 * List checkpoints for a specific chat
 */
export async function listCheckpoints(
  chatId: string,
  options: ListCheckpointsOptions = {}
): Promise<CheckpointsResponse> {
  const params = new URLSearchParams({
    chatId,
    limit: (options.limit || 10).toString(),
    offset: (options.offset || 0).toString(),
  })

  const result = await makeApiRequest<any>(
    `/api/copilot/checkpoints?${params}`,
    {},
    'Failed to list checkpoints'
  )

  return {
    success: result.success,
    checkpoints: result.data?.checkpoints || [],
    error: result.error,
  }
}

/**
 * Revert workflow to a specific checkpoint
 */
export async function revertToCheckpoint(checkpointId: string): Promise<ApiResponse> {
  const result = await makeApiRequest<any>(
    `/api/copilot/checkpoints/${checkpointId}/revert`,
    { method: 'POST' },
    'Failed to revert to checkpoint'
  )

  return {
    success: result.success,
    error: result.error,
  }
}
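
// Usage sketch (illustrative): restore the most recent checkpoint for a chat.
//   const { checkpoints } = await listCheckpoints(chatId, { limit: 1 })
//   if (checkpoints[0]) {
//     await revertToCheckpoint(checkpoints[0].id)
//   }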

@@ -1,13 +1,14 @@
 import { createLogger } from '@/lib/logs/console-logger'
 import { getProviderDefaultModel } from '@/providers/models'
 import type { ProviderId } from '@/providers/types'
+import { AGENT_MODE_SYSTEM_PROMPT } from './prompts'

 const logger = createLogger('CopilotConfig')

 /**
  * Valid provider IDs for validation
  */
-const VALID_PROVIDER_IDS: ProviderId[] = [
+const VALID_PROVIDER_IDS: readonly ProviderId[] = [
   'openai',
   'azure-openai',
   'anthropic',
@@ -17,7 +18,61 @@ const VALID_PROVIDER_IDS: ProviderId[] = [
   'cerebras',
   'groq',
   'ollama',
-]
+] as const
+
+/**
+ * Configuration validation constraints
+ */
+const VALIDATION_CONSTRAINTS = {
+  temperature: { min: 0, max: 2 },
+  maxTokens: { min: 1, max: 100000 },
+  maxSources: { min: 1, max: 20 },
+  similarityThreshold: { min: 0, max: 1 },
+  maxConversationHistory: { min: 1, max: 50 },
+} as const
+
+/**
+ * Copilot model types
+ */
+export type CopilotModelType = 'chat' | 'rag' | 'title'
+
+/**
+ * Configuration validation result
+ */
+export interface ValidationResult {
+  isValid: boolean
+  errors: string[]
+}
+
+/**
+ * Copilot configuration interface
+ */
+export interface CopilotConfig {
+  // Chat LLM configuration
+  chat: {
+    defaultProvider: ProviderId
+    defaultModel: string
+    temperature: number
+    maxTokens: number
+    systemPrompt: string
+  }
+  // RAG (documentation search) LLM configuration
+  rag: {
+    defaultProvider: ProviderId
+    defaultModel: string
+    temperature: number
+    maxTokens: number
+    embeddingModel: string
+    maxSources: number
+    similarityThreshold: number
+  }
+  // General configuration
+  general: {
+    streamingEnabled: boolean
+    maxConversationHistory: number
+    titleGenerationModel: string
+  }
+}

 /**
  * Validate and return a ProviderId if valid, otherwise return null
@@ -54,33 +109,11 @@ function parseIntEnv(value: string | undefined, name: string): number | null {
 }

 /**
- * Copilot configuration interface
+ * Safely parse a boolean from environment variable
  */
-export interface CopilotConfig {
-  // Chat LLM configuration
-  chat: {
-    defaultProvider: ProviderId
-    defaultModel: string
-    temperature: number
-    maxTokens: number
-    systemPrompt: string
-  }
-  // RAG (documentation search) LLM configuration
-  rag: {
-    defaultProvider: ProviderId
-    defaultModel: string
-    temperature: number
-    maxTokens: number
-    embeddingModel: string
-    maxSources: number
-    similarityThreshold: number
-  }
-  // General configuration
-  general: {
-    streamingEnabled: boolean
-    maxConversationHistory: number
-    titleGenerationModel: string // Lighter model for generating chat titles
-  }
+function parseBooleanEnv(value: string | undefined): boolean | null {
+  if (!value) return null
+  return value.toLowerCase() === 'true'
 }
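
// Behavior sketch for the new helper above (values illustrative):
//   parseBooleanEnv(undefined) // → null, so no override is applied
//   parseBooleanEnv('TRUE')    // → true (case-insensitive match)
//   parseBooleanEnv('yes')     // → false; only the string 'true' parses as truthy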

 /**
@@ -90,77 +123,14 @@ export interface CopilotConfig {
 export const DEFAULT_COPILOT_CONFIG: CopilotConfig = {
   chat: {
     defaultProvider: 'anthropic',
-    defaultModel: 'claude-3-7-sonnet-latest',
+    defaultModel: 'claude-sonnet-4-0',
     temperature: 0.1,
     maxTokens: 4000,
-    systemPrompt: `You are a helpful AI assistant for Sim Studio, a powerful workflow automation platform. You can help users with questions about:
-
-- Creating and managing workflows
-- Using different tools and blocks
-- Understanding features and capabilities
-- Troubleshooting issues
-- Best practices
-
-IMPORTANT DISTINCTION - Two types of information:
-1. **USER'S SPECIFIC WORKFLOW**: Use "Get User's Specific Workflow" tool when users ask about "my workflow", "this workflow", "what I have built", or "my current blocks"
-2. **GENERAL SIM STUDIO CAPABILITIES**: Use documentation search for general questions about what's possible, how features work, or "what blocks are available"
-
-WHEN TO USE WORKFLOW TOOL:
-- "What does my workflow do?"
-- "What blocks do I have?"
-- "How is my workflow configured?"
-- "Show me my current setup"
-- "What's in this workflow?"
-- "How do I add [X] to my workflow?" - ALWAYS get their workflow first to give specific advice
-- "How can I improve my workflow?"
-- "What's missing from my workflow?"
-- "How do I connect [X] in my workflow?"
-
-WHEN TO SEARCH DOCUMENTATION:
-- "What blocks are available in Sim Studio?"
-- "How do I use the Gmail block?"
-- "What features does Sim Studio have?"
-- "How do I create a workflow?"
-
-WHEN NOT TO SEARCH:
-- Simple greetings or casual conversation
-- General programming questions unrelated to Sim Studio
-- Thank you messages or small talk
-
-DOCUMENTATION SEARCH REQUIREMENT:
-Whenever you use the "Search Documentation" tool, you MUST:
-1. Include citations for ALL facts and information from the search results
-2. Link to relevant documentation pages using the exact URLs provided
-3. Never provide documentation-based information without proper citations
-4. Acknowledge the sources that helped answer the user's question
-
-CITATION FORMAT:
-MANDATORY: Whenever you use the documentation search tool, you MUST include citations in your response:
-- Include direct links using markdown format: [link text](URL)
-- Use descriptive link text (e.g., "workflow documentation" not "here")
-- Place links naturally in context, not clustered at the end
-- Cite ALL sources that contributed to your answer - don't cherry-pick
-- When mentioning specific features, tools, or concepts from docs, ALWAYS link to the relevant documentation
-- Add citations immediately after stating facts or information from the documentation
-- IMPORTANT: Only cite each source ONCE per response - do not repeat the same URL multiple times
-
-WORKFLOW-SPECIFIC GUIDANCE:
-When users ask "How do I..." questions about their workflow:
-1. **ALWAYS get their workflow first** using the workflow tool
-2. **Analyze their current setup** - what blocks they have, how they're connected
-3. **Give specific, actionable steps** based on their actual configuration
-4. **Reference their actual block names** and current values
-5. **Provide concrete next steps** they can take immediately
-
-Example approach:
-- User: "How do I add error handling to my workflow?"
-- You: [Get their workflow] → "I can see your workflow has a Starter block connected to an Agent block, then an API block. Here's how to add error handling specifically for your setup: 1) Add a Condition block after your API block to check if the response was successful, 2) Connect the 'false' path to a new Agent block that handles the error..."
-
-IMPORTANT: Always be clear about whether you're talking about the user's specific workflow or general Sim Studio capabilities. When showing workflow data, explicitly state "In your current workflow..." or "Your workflow contains..." Be actionable and specific - don't give generic advice when you can see their actual setup.`,
+    systemPrompt: AGENT_MODE_SYSTEM_PROMPT,
   },
   rag: {
     defaultProvider: 'anthropic',
-    defaultModel: 'claude-3-7-sonnet-latest',
+    defaultModel: 'claude-sonnet-4-0',
     temperature: 0.1,
     maxTokens: 2000,
     embeddingModel: 'text-embedding-3-small',
@@ -170,94 +140,108 @@ IMPORTANT: Always be clear about whether you're talking about the user's specifi
   general: {
     streamingEnabled: true,
     maxConversationHistory: 10,
-    titleGenerationModel: 'claude-3-haiku-20240307', // Faster model for titles
+    titleGenerationModel: 'claude-3-haiku-20240307',
   },
 }

+/**
+ * Apply environment variable overrides to configuration
+ */
+function applyEnvironmentOverrides(config: CopilotConfig): void {
+  // Chat configuration overrides
+  const chatProvider = validateProviderId(process.env.COPILOT_CHAT_PROVIDER)
+  if (chatProvider) {
+    config.chat.defaultProvider = chatProvider
+  } else if (process.env.COPILOT_CHAT_PROVIDER) {
+    logger.warn(
+      `Invalid COPILOT_CHAT_PROVIDER: ${process.env.COPILOT_CHAT_PROVIDER}. Valid providers: ${VALID_PROVIDER_IDS.join(', ')}`
+    )
+  }
+
+  if (process.env.COPILOT_CHAT_MODEL) {
+    config.chat.defaultModel = process.env.COPILOT_CHAT_MODEL
+  }
+
+  const chatTemperature = parseFloatEnv(
+    process.env.COPILOT_CHAT_TEMPERATURE,
+    'COPILOT_CHAT_TEMPERATURE'
+  )
+  if (chatTemperature !== null) {
+    config.chat.temperature = chatTemperature
+  }
+
+  const chatMaxTokens = parseIntEnv(process.env.COPILOT_CHAT_MAX_TOKENS, 'COPILOT_CHAT_MAX_TOKENS')
+  if (chatMaxTokens !== null) {
+    config.chat.maxTokens = chatMaxTokens
+  }
+
+  // RAG configuration overrides
+  const ragProvider = validateProviderId(process.env.COPILOT_RAG_PROVIDER)
+  if (ragProvider) {
+    config.rag.defaultProvider = ragProvider
+  } else if (process.env.COPILOT_RAG_PROVIDER) {
+    logger.warn(
+      `Invalid COPILOT_RAG_PROVIDER: ${process.env.COPILOT_RAG_PROVIDER}. Valid providers: ${VALID_PROVIDER_IDS.join(', ')}`
+    )
+  }
+
+  if (process.env.COPILOT_RAG_MODEL) {
+    config.rag.defaultModel = process.env.COPILOT_RAG_MODEL
+  }
+
+  const ragTemperature = parseFloatEnv(
+    process.env.COPILOT_RAG_TEMPERATURE,
+    'COPILOT_RAG_TEMPERATURE'
+  )
+  if (ragTemperature !== null) {
+    config.rag.temperature = ragTemperature
+  }
+
+  const ragMaxTokens = parseIntEnv(process.env.COPILOT_RAG_MAX_TOKENS, 'COPILOT_RAG_MAX_TOKENS')
+  if (ragMaxTokens !== null) {
+    config.rag.maxTokens = ragMaxTokens
+  }
+
+  const ragMaxSources = parseIntEnv(process.env.COPILOT_RAG_MAX_SOURCES, 'COPILOT_RAG_MAX_SOURCES')
+  if (ragMaxSources !== null) {
+    config.rag.maxSources = ragMaxSources
+  }
+
+  const ragSimilarityThreshold = parseFloatEnv(
+    process.env.COPILOT_RAG_SIMILARITY_THRESHOLD,
+    'COPILOT_RAG_SIMILARITY_THRESHOLD'
+  )
+  if (ragSimilarityThreshold !== null) {
+    config.rag.similarityThreshold = ragSimilarityThreshold
+  }
+
+  // General configuration overrides
+  const streamingEnabled = parseBooleanEnv(process.env.COPILOT_STREAMING_ENABLED)
+  if (streamingEnabled !== null) {
+    config.general.streamingEnabled = streamingEnabled
+  }
+
+  const maxConversationHistory = parseIntEnv(
+    process.env.COPILOT_MAX_CONVERSATION_HISTORY,
+    'COPILOT_MAX_CONVERSATION_HISTORY'
+  )
+  if (maxConversationHistory !== null) {
+    config.general.maxConversationHistory = maxConversationHistory
+  }
+
+  if (process.env.COPILOT_TITLE_GENERATION_MODEL) {
+    config.general.titleGenerationModel = process.env.COPILOT_TITLE_GENERATION_MODEL
+  }
+}

 /**
  * Get copilot configuration with environment variable overrides
  */
 export function getCopilotConfig(): CopilotConfig {
-  const config = { ...DEFAULT_COPILOT_CONFIG }
+  const config = structuredClone(DEFAULT_COPILOT_CONFIG)

   // Allow environment variable overrides
   try {
-    // Chat configuration overrides
-    const chatProvider = validateProviderId(process.env.COPILOT_CHAT_PROVIDER)
-    if (chatProvider) {
-      config.chat.defaultProvider = chatProvider
-    } else if (process.env.COPILOT_CHAT_PROVIDER) {
-      logger.warn(
-        `Invalid COPILOT_CHAT_PROVIDER: ${process.env.COPILOT_CHAT_PROVIDER}. Valid providers: ${VALID_PROVIDER_IDS.join(', ')}`
-      )
-    }
-    if (process.env.COPILOT_CHAT_MODEL) {
-      config.chat.defaultModel = process.env.COPILOT_CHAT_MODEL
-    }
-    const chatTemperature = parseFloatEnv(
-      process.env.COPILOT_CHAT_TEMPERATURE,
-      'COPILOT_CHAT_TEMPERATURE'
-    )
-    if (chatTemperature !== null) {
-      config.chat.temperature = chatTemperature
-    }
-    const chatMaxTokens = parseIntEnv(
-      process.env.COPILOT_CHAT_MAX_TOKENS,
-      'COPILOT_CHAT_MAX_TOKENS'
-    )
-    if (chatMaxTokens !== null) {
-      config.chat.maxTokens = chatMaxTokens
-    }
-
-    // RAG configuration overrides
-    const ragProvider = validateProviderId(process.env.COPILOT_RAG_PROVIDER)
-    if (ragProvider) {
-      config.rag.defaultProvider = ragProvider
-    } else if (process.env.COPILOT_RAG_PROVIDER) {
-      logger.warn(
-        `Invalid COPILOT_RAG_PROVIDER: ${process.env.COPILOT_RAG_PROVIDER}. Valid providers: ${VALID_PROVIDER_IDS.join(', ')}`
-      )
-    }
-    if (process.env.COPILOT_RAG_MODEL) {
-      config.rag.defaultModel = process.env.COPILOT_RAG_MODEL
-    }
-    const ragTemperature = parseFloatEnv(
-      process.env.COPILOT_RAG_TEMPERATURE,
-      'COPILOT_RAG_TEMPERATURE'
-    )
-    if (ragTemperature !== null) {
-      config.rag.temperature = ragTemperature
-    }
-    const ragMaxTokens = parseIntEnv(process.env.COPILOT_RAG_MAX_TOKENS, 'COPILOT_RAG_MAX_TOKENS')
-    if (ragMaxTokens !== null) {
-      config.rag.maxTokens = ragMaxTokens
-    }
-    const ragMaxSources = parseIntEnv(
-      process.env.COPILOT_RAG_MAX_SOURCES,
-      'COPILOT_RAG_MAX_SOURCES'
-    )
-    if (ragMaxSources !== null) {
-      config.rag.maxSources = ragMaxSources
-    }
-    const ragSimilarityThreshold = parseFloatEnv(
-      process.env.COPILOT_RAG_SIMILARITY_THRESHOLD,
-      'COPILOT_RAG_SIMILARITY_THRESHOLD'
-    )
-    if (ragSimilarityThreshold !== null) {
-      config.rag.similarityThreshold = ragSimilarityThreshold
-    }
-
-    // General configuration overrides
-    if (process.env.COPILOT_STREAMING_ENABLED) {
-      config.general.streamingEnabled = process.env.COPILOT_STREAMING_ENABLED === 'true'
-    }
-    const maxConversationHistory = parseIntEnv(
-      process.env.COPILOT_MAX_CONVERSATION_HISTORY,
-      'COPILOT_MAX_CONVERSATION_HISTORY'
-    )
-    if (maxConversationHistory !== null) {
-      config.general.maxConversationHistory = maxConversationHistory
-    }
+    applyEnvironmentOverrides(config)

     logger.info('Copilot configuration loaded', {
       chatProvider: config.chat.defaultProvider,
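
// Why structuredClone replaces the spread copy (sketch; effects illustrative):
// a spread copies only the top level, so nested objects stay shared with the defaults.
//   const shallow = { ...DEFAULT_COPILOT_CONFIG }
//   shallow.chat.temperature = 0.9 // also mutates DEFAULT_COPILOT_CONFIG.chat
//   const deep = structuredClone(DEFAULT_COPILOT_CONFIG)
//   deep.chat.temperature = 0.9 // defaults stay untouched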
@@ -276,7 +260,7 @@ export function getCopilotConfig(): CopilotConfig {
 /**
  * Get the model to use for a specific copilot function
  */
-export function getCopilotModel(type: 'chat' | 'rag' | 'title'): {
+export function getCopilotModel(type: CopilotModelType): {
   provider: ProviderId
   model: string
 } {
@@ -295,7 +279,7 @@ export function getCopilotModel(type: 'chat' | 'rag' | 'title'): {
     }
     case 'title':
       return {
-        provider: config.chat.defaultProvider, // Use same provider as chat
+        provider: config.chat.defaultProvider,
         model: config.general.titleGenerationModel,
       }
     default:
@@ -303,13 +287,24 @@ export function getCopilotModel(type: 'chat' | 'rag' | 'title'): {
   }
 }

+/**
+ * Validate a numeric value against constraints
+ */
+function validateNumericValue(
+  value: number,
+  constraint: { min: number; max: number },
+  name: string
+): string | null {
+  if (value < constraint.min || value > constraint.max) {
+    return `${name} must be between ${constraint.min} and ${constraint.max}`
+  }
+  return null
+}
+
 /**
  * Validate that a provider/model combination is available
  */
-export function validateCopilotConfig(config: CopilotConfig): {
-  isValid: boolean
-  errors: string[]
-} {
+export function validateCopilotConfig(config: CopilotConfig): ValidationResult {
   const errors: string[] = []

   // Validate chat provider/model
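
// Behavior sketch for the new helper (illustrative): out-of-range values produce
// the constraint message, in-range values return null.
//   validateNumericValue(2.5, { min: 0, max: 2 }, 'Chat temperature')
//   // → 'Chat temperature must be between 0 and 2'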
@@ -332,27 +327,50 @@ export function validateCopilotConfig(config: CopilotConfig): {
     errors.push(`Invalid RAG provider: ${config.rag.defaultProvider}`)
   }

-  // Validate configuration values
-  if (config.chat.temperature < 0 || config.chat.temperature > 2) {
-    errors.push('Chat temperature must be between 0 and 2')
-  }
-  if (config.rag.temperature < 0 || config.rag.temperature > 2) {
-    errors.push('RAG temperature must be between 0 and 2')
-  }
-  if (config.chat.maxTokens < 1 || config.chat.maxTokens > 100000) {
-    errors.push('Chat maxTokens must be between 1 and 100000')
-  }
-  if (config.rag.maxTokens < 1 || config.rag.maxTokens > 100000) {
-    errors.push('RAG maxTokens must be between 1 and 100000')
-  }
-  if (config.rag.maxSources < 1 || config.rag.maxSources > 20) {
-    errors.push('RAG maxSources must be between 1 and 20')
-  }
-  if (config.rag.similarityThreshold < 0 || config.rag.similarityThreshold > 1) {
-    errors.push('RAG similarityThreshold must be between 0 and 1')
-  }
-  if (config.general.maxConversationHistory < 1 || config.general.maxConversationHistory > 50) {
-    errors.push('General maxConversationHistory must be between 1 and 50')
+  // Validate configuration values using constraints
+  const validationChecks = [
+    {
+      value: config.chat.temperature,
+      constraint: VALIDATION_CONSTRAINTS.temperature,
+      name: 'Chat temperature',
+    },
+    {
+      value: config.rag.temperature,
+      constraint: VALIDATION_CONSTRAINTS.temperature,
+      name: 'RAG temperature',
+    },
+    {
+      value: config.chat.maxTokens,
+      constraint: VALIDATION_CONSTRAINTS.maxTokens,
+      name: 'Chat maxTokens',
+    },
+    {
+      value: config.rag.maxTokens,
+      constraint: VALIDATION_CONSTRAINTS.maxTokens,
+      name: 'RAG maxTokens',
+    },
+    {
+      value: config.rag.maxSources,
+      constraint: VALIDATION_CONSTRAINTS.maxSources,
+      name: 'RAG maxSources',
+    },
+    {
+      value: config.rag.similarityThreshold,
+      constraint: VALIDATION_CONSTRAINTS.similarityThreshold,
+      name: 'RAG similarityThreshold',
+    },
+    {
+      value: config.general.maxConversationHistory,
+      constraint: VALIDATION_CONSTRAINTS.maxConversationHistory,
+      name: 'General maxConversationHistory',
+    },
+  ]
+
+  for (const check of validationChecks) {
+    const error = validateNumericValue(check.value, check.constraint, check.name)
+    if (error) {
+      errors.push(error)
+    }
   }

   return {

apps/sim/lib/copilot/prompts.ts (new file, 662 lines)
@@ -0,0 +1,662 @@
/**
 * Copilot system prompts and templates
 * Centralized location for all LLM prompts used by the copilot system
 */

/**
 * Base introduction content shared by both modes
 */
const BASE_INTRODUCTION = `You are a helpful AI assistant for Sim Studio, a powerful workflow automation platform.`

/**
 * Ask mode capabilities description
 */
const ASK_MODE_CAPABILITIES = `You can help users with questions about:

- Understanding workflow features and capabilities
- Analyzing existing workflows
- Explaining how tools and blocks work
- Troubleshooting workflow issues
- Best practices and recommendations
- Documentation search and guidance
- Providing detailed guidance on how to build workflows
- Explaining workflow structure and block configurations

You specialize in analysis, education, and providing thorough guidance to help users understand and work with Sim Studio workflows.

IMPORTANT: You can provide comprehensive guidance, explanations, and step-by-step instructions, but you cannot actually build, modify, or edit workflows for users. Your role is to educate and guide users so they can make the changes themselves.`

/**
 * Agent mode capabilities description
 */
const AGENT_MODE_CAPABILITIES = `⚠️ **CRITICAL WORKFLOW EDITING RULE**: Before ANY workflow edit, you MUST call these four tools: Get User's Workflow → Get All Blocks → Get Block Metadata → Get YAML Structure. NO EXCEPTIONS.

You can help users with questions about:

- Creating and managing workflows
- Using different tools and blocks
- Understanding features and capabilities
- Troubleshooting issues
- Best practices
- Modifying and editing existing workflows
- Building new workflows from scratch

You have FULL workflow editing capabilities and can modify users' workflows directly.`

/**
 * Tool usage guidelines shared by both modes
 */
const TOOL_USAGE_GUIDELINES = `
TOOL SELECTION STRATEGY:
Choose tools based on the specific information you need to answer the user's question effectively:

**"Get User's Specific Workflow"** - Helpful when:
- User references their existing workflow ("my workflow", "this workflow")
- Need to understand current setup before making suggestions
- User asks about their current blocks or configuration
- Planning modifications or additions to existing workflows

**"Get All Blocks and Tools"** - Useful when:
- Exploring available options for new workflows
- User asks "what blocks should I use for..."
- Need to recommend specific blocks for a task
- General workflow planning and architecture discussions

**"Search Documentation"** - Good for:
- Specific tool/block feature questions
- How-to guides and detailed explanations
- Feature capabilities and best practices
- General Sim Studio information

**CONTEXT-DRIVEN APPROACH:**
Consider what the user is actually asking:

- **"What does my workflow do?"** → Get their specific workflow
- **"How do I build a workflow for X?"** → Get all blocks to explore options
- **"How does the Gmail block work?"** → Search documentation for details
- **"Add email to my workflow"** → Get their workflow first, then possibly get block metadata
- **"What automation options do I have?"** → Get all blocks to show possibilities

**FLEXIBLE DECISION MAKING:**
You don't need to follow rigid patterns. Use the tools that make sense for the specific question and context. Sometimes one tool is sufficient, sometimes you'll need multiple tools to provide a complete answer.`

/**
 * Workflow building process (Agent mode only)
 */
const WORKFLOW_BUILDING_PROCESS = `
WORKFLOW BUILDING GUIDELINES:
When working with workflows, use these tools strategically based on what information you need:

**CRITICAL REQUIREMENT - WORKFLOW EDITING PREREQUISITES:**
You are STRICTLY FORBIDDEN from calling "Edit Workflow" until you have completed ALL four prerequisite tools:

1. **"Get User's Specific Workflow"** - REQUIRED to understand current state
2. **"Get All Blocks and Tools"** - REQUIRED to know available options
3. **"Get Block Metadata"** - REQUIRED for any blocks you plan to use
4. **"Get YAML Workflow Structure Guide"** - REQUIRED for proper syntax

⚠️ **ENFORCEMENT RULE**: You CANNOT and MUST NOT call "Edit Workflow" without first calling all four tools above. This is non-negotiable and must be followed in every workflow editing scenario.

**TOOL USAGE GUIDELINES:**

**"Get User's Specific Workflow"** - Use when:
- User mentions "my workflow", "this workflow", or "current workflow"
- Making modifications to existing workflows
- Need to understand current state before suggesting changes
- User asks about what they currently have built
- MANDATORY before any workflow edits

**"Get All Blocks and Tools"** - Use when:
- Planning new workflows and need to explore available options
- User asks "what blocks should I use for..."
- Need to recommend specific blocks for a task
- Building workflows from scratch
- MANDATORY before any workflow edits

**"Get Block Metadata"** - Use when:
- Need detailed configuration options for specific blocks
- Understanding input/output schemas
- Configuring block parameters correctly
- The "Get Block Metadata" tool accepts ONLY block IDs, not tool IDs (e.g., "starter", "agent", "gmail")
- MANDATORY before any workflow edits

**"Get YAML Workflow Structure Guide"** - Use when:
- Need proper YAML syntax and formatting rules
- Building or editing complex workflows
- Ensuring correct workflow structure
- MANDATORY before any workflow edits

**"Edit Workflow"** - ⚠️ RESTRICTED ACCESS:
- FORBIDDEN until ALL four prerequisite tools have been called
- You MUST have called: Get User's Workflow, Get All Blocks, Get Block Metadata, Get YAML Structure
- NO EXCEPTIONS: Every workflow edit requires these four tools first
- Only use after: user approval + complete prerequisite tool execution

**FLEXIBLE APPROACH:**
You don't need to call every tool for every request. Use your judgment:

- **Simple questions**: If user asks about a specific block, you might only need "Get Block Metadata"
- **Quick edits**: For minor modifications, you still MUST call all four prerequisite tools before "Edit Workflow"
- **Complex builds**: For new workflows, you'll likely need multiple tools to gather information
- **Exploration**: If user is exploring options, "Get All Blocks and Tools" might be sufficient

**COMMON PATTERNS:**

*New Workflow Creation:*
- Typically: Get All Blocks → Get Block Metadata (for chosen blocks) → Get YAML Guide → Edit Workflow

*Existing Workflow Modification:*
- Typically: Get User's Workflow → (optionally Get Block Metadata for new blocks) → Edit Workflow
|
||||
|
||||
*Information/Analysis:*
|
||||
- Might only need: Get User's Workflow or Get Block Metadata
|
||||
|
||||
Use the minimum tools necessary to provide accurate, helpful responses while ensuring you have enough information to complete the task successfully.`
|
||||
|
||||
/**
|
||||
* Ask mode workflow guidance - focused on providing detailed educational guidance
|
||||
*/
|
||||
const ASK_MODE_WORKFLOW_GUIDANCE = `
|
||||
WORKFLOW GUIDANCE AND EDUCATION:
|
||||
When users ask about building, modifying, or improving workflows, provide comprehensive educational guidance:
|
||||
|
||||
1. **ANALYZE THEIR CURRENT STATE**: First understand what they currently have by examining their workflow
|
||||
2. **EXPLAIN THE APPROACH**: Break down exactly what they need to do step-by-step
|
||||
3. **RECOMMEND SPECIFIC BLOCKS**: Tell them which blocks to use and why
|
||||
4. **PROVIDE CONFIGURATION DETAILS**: Explain how to configure each block with specific parameter examples
|
||||
5. **SHOW CONNECTIONS**: Explain how blocks should be connected and data should flow
|
||||
6. **INCLUDE YAML EXAMPLES**: Provide concrete YAML examples they can reference
|
||||
7. **EXPLAIN THE LOGIC**: Help them understand the reasoning behind the workflow design
|
||||
|
||||
For example, if a user asks "How do I add email automation to my workflow?":
|
||||
- First examine their current workflow to understand the context
|
||||
- Explain they'll need a trigger (like a condition block) and an email block
|
||||
- Show them how to configure the Gmail block with specific parameters
|
||||
- Provide a YAML example of how it should look
|
||||
- Explain how to connect it to their existing blocks
|
||||
- Describe the data flow and what variables they can use
|
||||
|
||||
Be educational and thorough - your goal is to make users confident in building workflows themselves through clear, detailed guidance.`
|
||||
|
||||
/**
|
||||
* Documentation search guidelines
|
||||
*/
|
||||
const DOCUMENTATION_SEARCH_GUIDELINES = `
|
||||
WHEN TO SEARCH DOCUMENTATION:
|
||||
- "How do I use the Gmail block?"
|
||||
- "What does the Agent block do?"
|
||||
- "How do I configure API authentication?"
|
||||
- "What features does Sim Studio have?"
|
||||
- "How do I create a workflow?"
|
||||
- Any specific tool/block information or how-to questions
|
||||
|
||||
WHEN NOT TO SEARCH:
|
||||
- Simple greetings or casual conversation
|
||||
- General programming questions unrelated to Sim Studio
|
||||
- Thank you messages or small talk`
|
||||
|
||||
/**
|
||||
* Citation requirements
|
||||
*/
|
||||
const CITATION_REQUIREMENTS = `
|
||||
CITATION REQUIREMENTS:
|
||||
When you use the "Search Documentation" tool:
|
||||
|
||||
1. **MANDATORY CITATIONS**: You MUST include citations for ALL facts and information from the search results
|
||||
2. **Citation Format**: Use markdown links with descriptive text: [workflow documentation](URL)
|
||||
3. **Source URLs**: Use the exact URLs provided in the tool results
|
||||
4. **Link Placement**: Place citations immediately after stating facts from documentation
|
||||
5. **Complete Coverage**: Cite ALL relevant sources that contributed to your answer
|
||||
6. **No Repetition**: Only cite each source ONCE per response
|
||||
7. **Natural Integration**: Place links naturally in context, not clustered at the end
|
||||
|
||||
**Tool Result Processing**:
|
||||
- The search tool returns an array of documentation chunks with content, title, and URL
|
||||
- Use the \`content\` field for information and \`url\` field for citations
|
||||
- Include the \`title\` in your link text when appropriate
|
||||
- Reference multiple sources when they provide complementary information`
|
||||
|
||||
/**
|
||||
* Workflow analysis guidelines
|
||||
*/
|
||||
const WORKFLOW_ANALYSIS_GUIDELINES = `
|
||||
WORKFLOW-SPECIFIC GUIDANCE:
|
||||
When users ask questions about their specific workflow, consider getting their current setup to provide more targeted advice:
|
||||
|
||||
**PERSONALIZED RESPONSES:**
|
||||
- If you have access to their workflow data, reference their actual blocks and configuration
|
||||
- Provide specific steps based on their current setup rather than generic advice
|
||||
- Use their actual block names when giving instructions
|
||||
|
||||
**CLEAR COMMUNICATION:**
|
||||
- Be explicit about whether you're giving general advice or specific guidance for their workflow
|
||||
- When discussing their workflow, use phrases like "In your current workflow..." or "Based on your setup..."
|
||||
- Distinguish between what they currently have and what they could add
|
||||
|
||||
**EXAMPLE APPROACH:**
|
||||
- User: "How do I add error handling to my workflow?"
|
||||
- Consider getting their workflow to see: what blocks they have, how they're connected, where error handling would fit
|
||||
- Then provide specific guidance: "I can see your workflow has a Starter block connected to an Agent block, then an API block. Here's how to add error handling specifically for your setup..."
|
||||
|
||||
**BALANCED GUIDANCE:**
|
||||
- For quick questions, you might provide general guidance without needing their specific workflow
|
||||
- For complex modifications, understanding their current setup is usually helpful
|
||||
- Use your judgment on when specific workflow information would be valuable`
|
||||
|
||||
/**
|
||||
* Ask mode system prompt - focused on analysis and guidance
|
||||
*/
|
||||
export const ASK_MODE_SYSTEM_PROMPT = `${BASE_INTRODUCTION}
|
||||
|
||||
${ASK_MODE_CAPABILITIES}
|
||||
|
||||
${TOOL_USAGE_GUIDELINES}
|
||||
|
||||
${ASK_MODE_WORKFLOW_GUIDANCE}
|
||||
|
||||
${DOCUMENTATION_SEARCH_GUIDELINES}
|
||||
|
||||
${CITATION_REQUIREMENTS}
|
||||
|
||||
${WORKFLOW_ANALYSIS_GUIDELINES}`
|
||||
|
||||
/**
|
||||
* Streaming response guidelines for agent mode
|
||||
*/
|
||||
const STREAMING_RESPONSE_GUIDELINES = `
|
||||
STREAMING COMMUNICATION STYLE:
|
||||
You should communicate your thought process naturally as you work, but avoid repeating information:
|
||||
|
||||
**Response Flow:**
|
||||
1. **Initial explanation** - Briefly state what you plan to do
|
||||
2. **After tool execution** - Build upon what you learned, don't repeat previous statements
|
||||
3. **Progressive disclosure** - Each response segment should add new information
|
||||
4. **Avoid redundancy** - Don't restate what you've already told the user
|
||||
|
||||
**Communication Examples:**
|
||||
- Initial: "I'll start by examining your current workflow..."
|
||||
- After tools: "Based on what I found, you have a Starter and Agent block. Now let me..."
|
||||
- NOT: "I can see you have a workflow" (repeated information)
|
||||
|
||||
**Key Guidelines:**
|
||||
- Stream your reasoning before tool calls
|
||||
- Continue naturally after tools complete with new insights
|
||||
- Reference previous findings briefly, then move forward
|
||||
- Each segment should progress the conversation`
|
||||
|
||||
/**
|
||||
* Agent mode system prompt - full workflow editing capabilities
|
||||
*/
|
||||
export const AGENT_MODE_SYSTEM_PROMPT = `${BASE_INTRODUCTION}
|
||||
|
||||
${AGENT_MODE_CAPABILITIES}
|
||||
|
||||
${TOOL_USAGE_GUIDELINES}
|
||||
|
||||
${WORKFLOW_BUILDING_PROCESS}
|
||||
|
||||
${STREAMING_RESPONSE_GUIDELINES}
|
||||
|
||||
${DOCUMENTATION_SEARCH_GUIDELINES}
|
||||
|
||||
${CITATION_REQUIREMENTS}
|
||||
|
||||
${WORKFLOW_ANALYSIS_GUIDELINES}`
|
||||
|
||||
/**
|
||||
* Main chat system prompt for backwards compatibility
|
||||
* @deprecated Use ASK_MODE_SYSTEM_PROMPT or AGENT_MODE_SYSTEM_PROMPT instead
|
||||
*/
|
||||
export const MAIN_CHAT_SYSTEM_PROMPT = AGENT_MODE_SYSTEM_PROMPT
|
||||
|
||||
/**
|
||||
* Validate that the system prompts are properly constructed
|
||||
* This helps catch any issues with template literal construction
|
||||
*/
|
||||
export function validateSystemPrompts(): {
|
||||
askMode: { valid: boolean; issues: string[] }
|
||||
agentMode: { valid: boolean; issues: string[] }
|
||||
} {
|
||||
const askIssues: string[] = []
|
||||
const agentIssues: string[] = []
|
||||
|
||||
// Check Ask mode prompt
|
||||
if (!ASK_MODE_SYSTEM_PROMPT || ASK_MODE_SYSTEM_PROMPT.length < 500) {
|
||||
askIssues.push('Prompt too short or undefined')
|
||||
}
|
||||
if (!ASK_MODE_SYSTEM_PROMPT.includes('analysis, education, and providing thorough guidance')) {
|
||||
askIssues.push('Missing educational focus description')
|
||||
}
|
||||
if (!ASK_MODE_SYSTEM_PROMPT.includes('WORKFLOW GUIDANCE AND EDUCATION')) {
|
||||
askIssues.push('Missing workflow guidance section')
|
||||
}
|
||||
if (ASK_MODE_SYSTEM_PROMPT.includes('AGENT mode')) {
|
||||
askIssues.push('Should not reference AGENT mode')
|
||||
}
|
||||
if (ASK_MODE_SYSTEM_PROMPT.includes('switch to')) {
|
||||
askIssues.push('Should not suggest switching modes')
|
||||
}
|
||||
if (ASK_MODE_SYSTEM_PROMPT.includes('WORKFLOW BUILDING PROCESS')) {
|
||||
askIssues.push('Should not contain workflow building process (Agent only)')
|
||||
}
|
||||
if (ASK_MODE_SYSTEM_PROMPT.includes('Edit Workflow')) {
|
||||
askIssues.push('Should not reference edit workflow capability')
|
||||
}
|
||||
|
||||
// Check Agent mode prompt
|
||||
if (!AGENT_MODE_SYSTEM_PROMPT || AGENT_MODE_SYSTEM_PROMPT.length < 1000) {
|
||||
agentIssues.push('Prompt too short or undefined')
|
||||
}
|
||||
if (!AGENT_MODE_SYSTEM_PROMPT.includes('WORKFLOW BUILDING PROCESS')) {
|
||||
agentIssues.push('Missing workflow building process')
|
||||
}
|
||||
if (!AGENT_MODE_SYSTEM_PROMPT.includes('Edit Workflow')) {
|
||||
agentIssues.push('Missing edit workflow capability')
|
||||
}
|
||||
if (!AGENT_MODE_SYSTEM_PROMPT.includes('CRITICAL REQUIREMENT')) {
|
||||
agentIssues.push('Missing critical workflow editing requirements')
|
||||
}
|
||||
|
||||
return {
|
||||
askMode: { valid: askIssues.length === 0, issues: askIssues },
|
||||
agentMode: { valid: agentIssues.length === 0, issues: agentIssues },
|
||||
}
|
||||
}
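// Usage sketch (illustrative, not part of the commit): surfacing validation
// issues at startup, mirroring the module-load check in the copilot service.
//
//   const { askMode, agentMode } = validateSystemPrompts()
//   if (!askMode.valid) console.warn('Ask prompt issues:', askMode.issues)
//   if (!agentMode.valid) console.warn('Agent prompt issues:', agentMode.issues)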
/**
 * System prompt for generating chat titles
 * Used when creating concise titles for new conversations
 */
export const TITLE_GENERATION_SYSTEM_PROMPT = `You are a helpful assistant that generates concise, descriptive titles for chat conversations. Create a title that captures the main topic or question being discussed. Keep it under 50 characters and make it specific and clear.`

/**
 * User prompt template for title generation
 */
export const TITLE_GENERATION_USER_PROMPT = (userMessage: string) =>
  `Generate a concise title for a conversation that starts with this user message: "${userMessage}"\n\nReturn only the title text, nothing else.`
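// Usage sketch (illustrative, not part of the commit): the fixed system prompt
// pairs with the templated user prompt for a single title-generation call.
//
//   const context = TITLE_GENERATION_USER_PROMPT('How do I send Slack alerts?')
//   // context embeds the user message and asks for only the title text back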
/**
 * YAML Workflow Reference Guide
 * Comprehensive guide for LLMs on how to write end-to-end YAML workflows correctly
 */
export const YAML_WORKFLOW_PROMPT = `# Comprehensive Guide to Writing End-to-End YAML Workflows in Sim Studio

## Fundamental Structure

Every Sim Studio workflow must follow this exact structure:

\`\`\`yaml
version: '1.0'
blocks:
  block-id:
    type: block-type
    name: "Block Name"
    inputs:
      key: value
    connections:
      success: next-block-id
\`\`\`

### Critical Requirements:
- **Version Declaration**: Must be exactly \`version: '1.0'\` (with quotes)
- **Single Starter Block**: Every workflow needs exactly one starter block
- **Human-Readable Block IDs**: Use descriptive IDs like \`start\`, \`email-sender\`, \`data-processor\`, \`agent-1\`
- **Consistent Indentation**: Use 2-space indentation throughout
- **Block References**: ⚠️ **CRITICAL** - References use the block **NAME** (not ID), converted to lowercase with spaces removed

## Complete End-to-End Workflow Examples

**IMPORTANT**: For complete, up-to-date YAML workflow examples, refer to the documentation at:
- **YAML Workflow Examples**: \`/yaml/examples\` - Contains real-world workflow patterns including:
  - Multi-Agent Chain Workflows
  - Router-Based Conditional Workflows
  - Web Search with Structured Output
  - Loop Processing with Collections
  - Email Classification and Response
  - And more practical examples

- **Block Schema Documentation**: \`/yaml/blocks\` - Contains detailed schemas for all block types including:
  - Loop blocks with proper \`connections.loop.start\` syntax
  - Parallel blocks with proper \`connections.parallel.start\` syntax
  - Agent blocks with tools configuration
  - All other block types with complete parameter references

**CRITICAL**: Always use the "Get All Blocks and Tools" and "Get Block Metadata" tools to get the latest examples and schemas when building workflows. The documentation contains the most current syntax and examples.

## The Starter Block

The starter block is the entry point for every workflow and has special properties:

### Manual Start Configuration
\`\`\`yaml
start:
  type: starter
  name: Start
  inputs:
    startWorkflow: manual
  connections:
    success: next-block
\`\`\`

### Manual Start with Input Format Configuration
For API workflows that need structured input validation and processing:
\`\`\`yaml
start:
  type: starter
  name: Start
  inputs:
    startWorkflow: manual
    inputFormat:
      - name: query
        type: string
      - name: email
        type: string
      - name: age
        type: number
      - name: isActive
        type: boolean
      - name: preferences
        type: object
      - name: tags
        type: array
  connections:
    success: agent-1
\`\`\`

### Chat Start Configuration
\`\`\`yaml
start:
  type: starter
  name: Start
  inputs:
    startWorkflow: chat
  connections:
    success: chat-handler
\`\`\`

**Key Points:**
- Reference Pattern: Always use \`<start.input>\` to reference starter input
- Manual workflows can accept any JSON input structure via API calls
- **Input Format**: Use the \`inputFormat\` array to define the expected input structure for API calls
- **Input Format Fields**: Each field requires \`name\` (string) and \`type\` ('string', 'number', 'boolean', 'object', 'array')
- **Input Format Benefits**: Provides type validation, structured data access, and better API documentation

## Block References and Data Flow

### Reference Naming Convention
**CRITICAL**: To reference another block's output, use the block **name** (NOT the block ID) converted to lowercase with spaces removed:

\`\`\`yaml
# Block references use the BLOCK NAME converted to lowercase, spaces removed
<blockname.content>  # For agent blocks
<blockname.output>   # For tool blocks (API, Gmail, etc.)
<start.input>        # For starter block input (special case)
<loop.index>         # For loop iteration index
<loop.item>          # For current loop item

# Environment variables
{{OPENAI_API_KEY}}
{{CUSTOM_VARIABLE}}
\`\`\`

**Examples of Correct Block References:**
- Block name: "Email Sender" → Reference: \`<emailsender.output>\`
- Block name: "Data Processor" → Reference: \`<dataprocessor.content>\`
- Block name: "Gmail Notification" → Reference: \`<gmailnotification.output>\`
- Block name: "Agent 1" → Reference: \`<agent1.content>\`
- Block name: "Start" → Reference: \`<start.input>\` (special case)

**Block Reference Rules:**
1. Take the block's **name** field (not the block ID)
2. Convert to lowercase
3. Remove all spaces and special characters
4. Use dot notation with \`.content\` (agents) or \`.output\` (tools)

### Data Flow Example
\`\`\`yaml
email-classifier:
  type: agent
  name: Email Classifier
  inputs:
    userPrompt: |
      Classify this email: <start.input>
      Categories: support, billing, sales, feedback

response-generator:
  type: agent
  name: Response Generator
  inputs:
    userPrompt: |
      Classification: <emailclassifier.content>
      Original: <start.input>
\`\`\`

## Common Block Types and Patterns

### Agent Blocks
- Use for AI model interactions
- Reference previous outputs with \`<blockname.content>\`
- Set appropriate temperature for creativity vs consistency
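For example, a minimal agent block might look like this (illustrative sketch - the model and parameter names shown are assumptions, so confirm the exact schema with "Get Block Metadata"):

\`\`\`yaml
summarizer:
  type: agent
  name: Summarizer
  inputs:
    systemPrompt: You are a concise summarizer.
    userPrompt: 'Summarize this: <start.input>'
    apiKey: '{{OPENAI_API_KEY}}'
  connections:
    success: next-block
\`\`\`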
### Router Blocks
- Use for conditional logic and branching
- Multiple success connections as array
- Clear routing instructions in prompt
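A rough sketch of the shape (illustrative - the \`prompt\` field name is an assumption; verify the router schema with "Get Block Metadata" before use):

\`\`\`yaml
ticket-router:
  type: router
  name: Ticket Router
  inputs:
    prompt: 'Route billing issues to billing-handler, everything else to support-handler: <start.input>'
  connections:
    success:
      - billing-handler
      - support-handler
\`\`\`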
### Tool Blocks
- Gmail, Slack, API calls, etc.
- Reference outputs with \`<blockname.output>\`
- Use environment variables for sensitive data
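For instance, an API-style tool block might be configured along these lines (illustrative only - the block type and parameter names here are assumptions; check the block's schema):

\`\`\`yaml
status-checker:
  type: api
  name: Status Checker
  inputs:
    url: https://api.example.com/status
    method: GET
    apiKey: '{{SERVICE_API_KEY}}'
  connections:
    success: notifier
\`\`\`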
### Function Blocks
- Custom JavaScript code execution
- Access inputs via \`inputs\` parameter
- Return results via \`return\` statement
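A minimal function block might look like this (illustrative sketch - the \`code\` field name is an assumption; verify with "Get Block Metadata"):

\`\`\`yaml
timestamper:
  type: function
  name: Timestamper
  inputs:
    code: |
      // Return a result object for downstream blocks
      return { receivedAt: new Date().toISOString() }
  connections:
    success: next-block
\`\`\`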
### Loop Blocks
- Iterate over collections or fixed counts
- Use \`<loop.index>\` and \`<loop.item>\` references
- Child blocks have \`parentId\` set to loop ID
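Putting those pieces together, a collection loop might be sketched like this (illustrative - the \`loopType\` and \`collection\` input names are assumptions; see \`/yaml/blocks\` for the authoritative loop schema):

\`\`\`yaml
item-loop:
  type: loop
  name: Item Loop
  inputs:
    loopType: forEach
    collection: <start.input>
  connections:
    loop:
      start: item-processor
      end: summary

item-processor:
  type: agent
  name: Item Processor
  parentId: item-loop
  inputs:
    userPrompt: 'Process item <loop.index>: <loop.item>'
\`\`\`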
## Best Practices

### Human-Readable Block IDs
✅ **Good:**
\`\`\`yaml
email-analyzer:
  type: agent
  name: Email Analyzer

customer-notifier:
  type: gmail
  name: Customer Notification
\`\`\`

❌ **Bad:**
\`\`\`yaml
29bec199-99bb-4e5a-870a-bab01f2cece6:
  type: agent
  name: Email Analyzer
\`\`\`

### Clear Block References
✅ **Good:**
\`\`\`yaml
userPrompt: |
  Process this data: <emailanalyzer.content>

  User input: <start.input>
\`\`\`

❌ **Bad:**
\`\`\`yaml
userPrompt: Process this data: <emailanalyzer.content.result>
\`\`\`

### Simple Starter Block Configuration
✅ **Good:**
\`\`\`yaml
start:
  type: starter
  name: Start
  inputs:
    startWorkflow: manual
  connections:
    success: next-block
\`\`\`

### Environment Variables for Secrets
✅ **Good:**
\`\`\`yaml
apiKey: '{{OPENAI_API_KEY}}'
token: '{{SLACK_BOT_TOKEN}}'
\`\`\`

❌ **Bad:**
\`\`\`yaml
apiKey: 'sk-1234567890abcdef'
\`\`\`

## Common Patterns

### Sequential Processing Chain
\`\`\`yaml
start → data-processor → analyzer → formatter → output-sender
\`\`\`

### Conditional Branching
\`\`\`yaml
start → classifier → router → [path-a, path-b, path-c]
\`\`\`

### Loop Processing ⚠️ SPECIAL SYNTAX
\`\`\`yaml
loop-block:
  type: loop
  connections:
    loop:
      start: child-block-id  # Block to execute inside loop
      end: next-block-id     # Block to run after loop completes
\`\`\`

### Parallel Processing ⚠️ SPECIAL SYNTAX
\`\`\`yaml
parallel-block:
  type: parallel
  connections:
    parallel:
      start: child-block-id  # Block to execute in each parallel instance
      end: next-block-id     # Block to run after all instances complete
\`\`\`

### Error Handling with Fallbacks
\`\`\`yaml
start → primary-processor → backup-processor (if primary fails)
\`\`\`

### Multi-Step Approval Process
\`\`\`yaml
start → reviewer → approver → implementer → notifier
\`\`\`

Remember: Always use human-readable block IDs, clear data flow patterns, and descriptive names for maintainable workflows!`
@@ -1,14 +1,42 @@
-import { and, desc, eq } from 'drizzle-orm'
+import { and, desc, eq, sql } from 'drizzle-orm'
import { createLogger } from '@/lib/logs/console-logger'
import { getRotatingApiKey } from '@/lib/utils'
import { generateEmbeddings } from '@/app/api/knowledge/utils'
import { db } from '@/db'
-import { copilotChats } from '@/db/schema'
+import { copilotChats, docsEmbeddings } from '@/db/schema'
import { executeProviderRequest } from '@/providers'
import type { ProviderToolConfig } from '@/providers/types'
import { getApiKey } from '@/providers/utils'
import { getCopilotConfig, getCopilotModel } from './config'
import {
  AGENT_MODE_SYSTEM_PROMPT,
  ASK_MODE_SYSTEM_PROMPT,
  TITLE_GENERATION_SYSTEM_PROMPT,
  TITLE_GENERATION_USER_PROMPT,
  validateSystemPrompts,
} from './prompts'

const logger = createLogger('CopilotService')

// Validate system prompts on module load
const promptValidation = validateSystemPrompts()
if (!promptValidation.askMode.valid) {
  logger.error('Ask mode system prompt validation failed:', promptValidation.askMode.issues)
}
if (!promptValidation.agentMode.valid) {
  logger.error('Agent mode system prompt validation failed:', promptValidation.agentMode.issues)
}

/**
 * Citation information for documentation references
 */
export interface Citation {
  id: number
  title: string
  url: string
  similarity?: number
}

/**
 * Message interface for copilot conversations
 */
@@ -17,12 +45,7 @@ export interface CopilotMessage {
  role: 'user' | 'assistant' | 'system'
  content: string
  timestamp: string
- citations?: Array<{
-   id: number
-   title: string
-   url: string
-   similarity?: number
- }>
+ citations?: Citation[]
}

/**
@@ -38,6 +61,17 @@ export interface CopilotChat {
  updatedAt: Date
}

/**
 * Options for generating chat responses
 */
export interface GenerateChatResponseOptions {
  stream?: boolean
  workflowId?: string
  requestId?: string
  mode?: 'ask' | 'agent'
  chatId?: string
}

/**
 * Request interface for sending messages
 */
@@ -45,6 +79,7 @@ export interface SendMessageRequest {
  message: string
  chatId?: string
  workflowId?: string
+ mode?: 'ask' | 'agent'
  createNewChat?: boolean
  stream?: boolean
  userId: string
@@ -56,40 +91,241 @@
export interface SendMessageResponse {
  content: string
  chatId?: string
- citations?: Array<{
-   id: number
-   title: string
-   url: string
-   similarity?: number
- }>
+ citations?: Citation[]
  metadata?: Record<string, any>
}

/**
 * Documentation search result
 */
export interface DocumentationSearchResult {
  id: number
  title: string
  url: string
  content: string
  similarity: number
}

/**
 * Options for creating a new chat
 */
export interface CreateChatOptions {
  title?: string
  initialMessage?: string
}

/**
 * Options for updating a chat
 */
export interface UpdateChatOptions {
  title?: string
  messages?: CopilotMessage[]
}

/**
 * Options for listing chats
 */
export interface ListChatsOptions {
  limit?: number
  offset?: number
}

/**
 * Options for documentation search
 */
export interface SearchDocumentationOptions {
  topK?: number
  threshold?: number
}

/**
 * Get API key for the given provider
 */
function getProviderApiKey(provider: string, model: string): string {
  if (provider === 'openai' || provider === 'anthropic') {
    return getRotatingApiKey(provider)
  }
  return getApiKey(provider, model)
}

/**
 * Build conversation messages for LLM
 */
function buildConversationMessages(
  message: string,
  conversationHistory: CopilotMessage[],
  maxHistory: number
): Array<{ role: 'user' | 'assistant' | 'system'; content: string }> {
  const messages = []

  // Add conversation history (limited by config)
  const recentHistory = conversationHistory.slice(-maxHistory)

  for (const msg of recentHistory) {
    messages.push({
      role: msg.role as 'user' | 'assistant' | 'system',
      content: msg.content,
    })
  }

  // Add current user message
  messages.push({
    role: 'user' as const,
    content: message,
  })

  return messages
}
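// Usage sketch (illustrative, not part of the commit): the current message is
// always appended after at most `maxHistory` prior turns, oldest dropped first.
//
//   const msgs = buildConversationMessages('hi', history, 10)
//   // msgs.length <= 11, and msgs[msgs.length - 1].content === 'hi'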
/**
 * Get available tools for the given mode
 */
function getAvailableTools(mode: 'ask' | 'agent'): ProviderToolConfig[] {
  const allTools: ProviderToolConfig[] = [
    {
      id: 'docs_search_internal',
      name: 'Search Documentation',
      description:
        'Search Sim Studio documentation for information about features, tools, workflows, and functionality',
      params: {},
      parameters: {
        type: 'object',
        properties: {
          query: {
            type: 'string',
            description: 'The search query to find relevant documentation',
          },
          topK: {
            type: 'number',
            description: 'Number of results to return (default: 10, max: 10)',
            default: 10,
          },
        },
        required: ['query'],
      },
    },
    {
      id: 'get_user_workflow',
      name: "Get User's Specific Workflow",
      description:
        "Get the user's current workflow - this shows ONLY the blocks they have actually built and configured in their specific workflow, not general Sim Studio capabilities.",
      params: {},
      parameters: {
        type: 'object',
        properties: {
          includeMetadata: {
            type: 'boolean',
            description:
              'Whether to include additional metadata about the workflow (default: false)',
            default: false,
          },
        },
        required: [],
      },
    },
    {
      id: 'get_blocks_and_tools',
      name: 'Get All Blocks and Tools',
      description:
        'Get a comprehensive list of all available blocks and tools in Sim Studio with their descriptions, categories, and capabilities.',
      params: {},
      parameters: {
        type: 'object',
        properties: {
          includeDetails: {
            type: 'boolean',
            description:
              'Whether to include detailed information like inputs, outputs, and sub-blocks (default: false)',
            default: false,
          },
          filterCategory: {
            type: 'string',
            description: 'Optional category filter for blocks (e.g., "tools", "blocks", "ai")',
          },
        },
        required: [],
      },
    },
    {
      id: 'get_blocks_metadata',
      name: 'Get Block Metadata',
      description:
        'Get detailed metadata including descriptions, schemas, inputs, outputs, and subblocks for specific blocks and their associated tools.',
      params: {},
      parameters: {
        type: 'object',
        properties: {
          blockIds: {
            type: 'array',
            items: {
              type: 'string',
            },
            description: 'Array of block IDs to get metadata for',
          },
        },
        required: ['blockIds'],
      },
    },
    {
      id: 'get_yaml_structure',
      name: 'Get YAML Workflow Structure Guide',
      description:
        'Get comprehensive YAML workflow syntax guide and examples to understand how to structure Sim Studio workflows.',
      params: {},
      parameters: {
        type: 'object',
        properties: {},
        required: [],
      },
    },
    {
      id: 'edit_workflow',
      name: 'Edit Workflow',
      description:
        'Save/edit the current workflow by providing YAML content. This performs the same action as saving in the YAML code editor.',
      params: {},
      parameters: {
        type: 'object',
        properties: {
          yamlContent: {
            type: 'string',
            description: 'The complete YAML workflow content to save',
          },
          description: {
            type: 'string',
            description: 'Optional description of the changes being made',
          },
        },
        required: ['yamlContent'],
      },
    },
  ]

  // Filter tools based on mode
  return mode === 'ask' ? allTools.filter((tool) => tool.id !== 'edit_workflow') : allTools
}
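// Sketch of the mode gating (illustrative): ask mode exposes every tool except
// edit_workflow, so the read-only prompt can never trigger a workflow write.
//
//   getAvailableTools('agent').length - getAvailableTools('ask').length === 1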
/**
 * Validate system prompt for the given mode
 */
function validateSystemPrompt(mode: 'ask' | 'agent', systemPrompt: string): void {
  if (!systemPrompt || systemPrompt.length < 100) {
    throw new Error(`System prompt not properly configured for mode: ${mode}`)
  }
}

/**
 * Generate a chat title using LLM
 */
export async function generateChatTitle(userMessage: string): Promise<string> {
  try {
    const { provider, model } = getCopilotModel('title')
-   let apiKey: string
-   try {
-     // Use rotating key directly for hosted providers
-     if (provider === 'openai' || provider === 'anthropic') {
-       const { getRotatingApiKey } = await import('@/lib/utils')
-       apiKey = getRotatingApiKey(provider)
-     } else {
-       apiKey = getApiKey(provider, model)
-     }
-   } catch (error) {
-     logger.error(`Failed to get API key for title generation (${provider} ${model}):`, error)
-     return 'New Chat' // Fallback if API key is not available
-   }
+   const apiKey = getProviderApiKey(provider, model)

    const response = await executeProviderRequest(provider, {
      model,
-     systemPrompt:
-       'You are a helpful assistant that generates concise, descriptive titles for chat conversations. Create a title that captures the main topic or question being discussed. Keep it under 50 characters and make it specific and clear.',
-     context: `Generate a concise title for a conversation that starts with this user message: "${userMessage}"\n\nReturn only the title text, nothing else.`,
+     systemPrompt: TITLE_GENERATION_SYSTEM_PROMPT,
+     context: TITLE_GENERATION_USER_PROMPT(userMessage),
      temperature: 0.3,
      maxTokens: 50,
      apiKey,
@@ -112,29 +348,12 @@ export async function generateChatTitle(userMessage: string): Promise<string> {
 */
export async function searchDocumentation(
  query: string,
- options: {
-   topK?: number
-   threshold?: number
- } = {}
-): Promise<
- Array<{
-   id: number
-   title: string
-   url: string
-   content: string
-   similarity: number
- }>
-> {
- const { generateEmbeddings } = await import('@/app/api/knowledge/utils')
- const { docsEmbeddings } = await import('@/db/schema')
- const { sql } = await import('drizzle-orm')
-
+ options: SearchDocumentationOptions = {}
+): Promise<DocumentationSearchResult[]> {
  const config = getCopilotConfig()
  const { topK = config.rag.maxSources, threshold = config.rag.similarityThreshold } = options

  try {
    logger.info('Documentation search requested', { query, topK, threshold })

    // Generate embedding for the query
    const embeddings = await generateEmbeddings([query])
    const queryEmbedding = embeddings[0]
@@ -162,12 +381,6 @@ export async function searchDocumentation(
    // Filter by similarity threshold
    const filteredResults = results.filter((result) => result.similarity >= threshold)

-   logger.info(`Found ${filteredResults.length} relevant documentation chunks`, {
-     totalResults: results.length,
-     afterFiltering: filteredResults.length,
-     threshold,
-   })
-
    return filteredResults.map((result, index) => ({
      id: index + 1,
      title: String(result.headerText || 'Untitled Section'),
@@ -181,290 +394,49 @@ export async function searchDocumentation(
  }
}

/**
 * Generate documentation-based response using RAG
 */
export async function generateDocsResponse(
  query: string,
  conversationHistory: CopilotMessage[] = [],
  options: {
    stream?: boolean
    topK?: number
    provider?: string
    model?: string
    workflowId?: string
    requestId?: string
  } = {}
): Promise<{
  response: string | ReadableStream
  sources: Array<{
    id: number
    title: string
    url: string
    similarity: number
  }>
}> {
  const config = getCopilotConfig()
  const { provider, model } = getCopilotModel('rag')
  const {
    stream = config.general.streamingEnabled,
    topK = config.rag.maxSources,
    provider: overrideProvider,
    model: overrideModel,
  } = options

  const selectedProvider = overrideProvider || provider
  const selectedModel = overrideModel || model

  try {
    let apiKey: string
    try {
      // Use rotating key directly for hosted providers
      if (selectedProvider === 'openai' || selectedProvider === 'anthropic') {
        const { getRotatingApiKey } = await import('@/lib/utils')
        apiKey = getRotatingApiKey(selectedProvider)
      } else {
        apiKey = getApiKey(selectedProvider, selectedModel)
      }
    } catch (error) {
      logger.error(
        `Failed to get API key for docs response (${selectedProvider} ${selectedModel}):`,
        error
      )
      throw new Error(
        `API key not configured for ${selectedProvider}. Please set up API keys for this provider or use a different one.`
      )
    }

    // Search documentation
    const searchResults = await searchDocumentation(query, { topK })

    if (searchResults.length === 0) {
      const fallbackResponse =
        "I couldn't find any relevant documentation for your question. Please try rephrasing your query or check if you're asking about a feature that exists in Sim Studio."
      return {
        response: fallbackResponse,
        sources: [],
      }
    }

    // Format search results as context with numbered sources
    const context = searchResults
      .map((result, index) => {
        return `[${index + 1}] ${result.title}
Document: ${result.title}
URL: ${result.url}
Content: ${result.content}`
      })
      .join('\n\n')

    // Build conversation context if we have history
    let conversationContext = ''
    if (conversationHistory.length > 0) {
      conversationContext = '\n\nConversation History:\n'
      conversationHistory.slice(-config.general.maxConversationHistory).forEach((msg) => {
        const role = msg.role === 'user' ? 'Human' : 'Assistant'
        conversationContext += `${role}: ${msg.content}\n`
      })
      conversationContext += '\n'
    }

    const systemPrompt = `You are a helpful assistant that answers questions about Sim Studio documentation. You are having a conversation with the user, so refer to the conversation history when relevant.

MANDATORY CITATION REQUIREMENT: You MUST include citations for ALL information derived from the provided sources.

Citation Guidelines:
- ALWAYS cite sources when mentioning specific features, concepts, or instructions from the documentation
- Use direct links with markdown format: [link text](URL)
- Use the exact URLs provided in the source context
- Make link text descriptive (e.g., "workflow documentation" not "here")
- Place citations immediately after stating facts from the documentation
- Cite ALL relevant sources that contributed to your answer - do not omit any
- When multiple sources cover the same topic, cite the most comprehensive or relevant one
- Place links naturally in context, not clustered at the end
- IMPORTANT: Only cite each source ONCE per response - avoid repeating the same URL multiple times

Content Guidelines:
- Answer the user's question accurately using the provided documentation
- Consider the conversation history and refer to previous messages when relevant
- Format your response in clean, readable markdown
- Use bullet points, code blocks, and headers where appropriate
- If the question cannot be answered from the context, say so clearly
- Be conversational but precise
- NEVER include object representations like "[object Object]" - always use proper text
- When mentioning tool names, use their actual names from the documentation

Each source in the context below includes a URL that you can reference directly.`

    const userPrompt = `${conversationContext}Current Question: ${query}

Documentation Context:
${context}`

    logger.info(
      `Generating docs response using provider: ${selectedProvider}, model: ${selectedModel}`
    )

    const response = await executeProviderRequest(selectedProvider, {
      model: selectedModel,
      systemPrompt,
      context: userPrompt,
      temperature: config.rag.temperature,
      maxTokens: config.rag.maxTokens,
      apiKey,
      stream,
    })

    // Format sources for response
    const sources = searchResults.map((result) => ({
      id: result.id,
      title: result.title,
      url: result.url,
      similarity: Math.round(result.similarity * 100) / 100,
    }))

    // Handle different response types
    if (response instanceof ReadableStream) {
      return { response, sources }
    }

    if ('stream' in response && 'execution' in response) {
      // Handle StreamingExecution for providers like Anthropic
      if (stream) {
        return { response: response.stream, sources }
      }
      throw new Error('Unexpected streaming execution response when non-streaming was requested')
    }

    // At this point, we have a ProviderResponse
    const content = response.content || 'Sorry, I could not generate a response.'

    // Clean up any object serialization artifacts
    const cleanedContent = content
      .replace(/\[object Object\],?/g, '') // Remove [object Object] artifacts
      .replace(/\s+/g, ' ') // Normalize whitespace
      .trim()

    return {
      response: cleanedContent,
      sources,
    }
  } catch (error) {
    logger.error('Failed to generate docs response:', error)
    throw new Error(
      `Failed to generate docs response: ${error instanceof Error ? error.message : 'Unknown error'}`
    )
  }
}

/**
 * Generate chat response using LLM with optional documentation search
 */
export async function generateChatResponse(
  message: string,
  conversationHistory: CopilotMessage[] = [],
- options: {
-   stream?: boolean
-   workflowId?: string
-   requestId?: string
- } = {}
+ options: GenerateChatResponseOptions = {}
): Promise<string | ReadableStream> {
  const config = getCopilotConfig()
  const { provider, model } = getCopilotModel('chat')
- const { stream = config.general.streamingEnabled } = options
+ const { stream = config.general.streamingEnabled, mode = 'ask' } = options

  try {
-   let apiKey: string
-   try {
-     // Use rotating key directly for hosted providers
-     if (provider === 'openai' || provider === 'anthropic') {
-       const { getRotatingApiKey } = await import('@/lib/utils')
-       apiKey = getRotatingApiKey(provider)
-     } else {
-       apiKey = getApiKey(provider, model)
-     }
-   } catch (error) {
-     logger.error(`Failed to get API key for chat (${provider} ${model}):`, error)
-     throw new Error(
-       `API key not configured for ${provider}. Please set up API keys for this provider or use a different one.`
-     )
-   }
+   const apiKey = getProviderApiKey(provider, model)

    // Build conversation context
-   const messages = []
+   const messages = buildConversationMessages(
+     message,
+     conversationHistory,
+     config.general.maxConversationHistory
+   )

-   // Add conversation history (limited by config)
-   const historyLimit = config.general.maxConversationHistory
-   const recentHistory = conversationHistory.slice(-historyLimit)
+   // Get available tools for the mode
+   const tools = getAvailableTools(mode)

-   for (const msg of recentHistory) {
-     messages.push({
-       role: msg.role as 'user' | 'assistant' | 'system',
-       content: msg.content,
-     })
-   }
+   // Get the appropriate system prompt for the mode
+   const systemPrompt = mode === 'ask' ? ASK_MODE_SYSTEM_PROMPT : AGENT_MODE_SYSTEM_PROMPT

-   // Add current user message
-   messages.push({
-     role: 'user' as const,
-     content: message,
-   })

-   // Define the tools available to the LLM
-   const tools: ProviderToolConfig[] = [
-     {
-       id: 'docs_search_internal',
-       name: 'Search Documentation',
-       description:
-         'Search Sim Studio documentation for information about features, tools, workflows, and functionality',
-       params: {},
-       parameters: {
-         type: 'object',
-         properties: {
-           query: {
-             type: 'string',
-             description: 'The search query to find relevant documentation',
-           },
-           topK: {
-             type: 'number',
-             description: 'Number of results to return (default: 5, max: 10)',
-             default: 5,
-           },
-         },
-         required: ['query'],
-       },
-     },
-     {
-       id: 'get_user_workflow',
-       name: "Get User's Specific Workflow",
-       description:
-         'Get the user\'s current workflow - this shows ONLY the blocks they have actually built and configured in their specific workflow, not general Sim Studio capabilities. Use this when the user asks about "my workflow", "this workflow", wants to know what blocks they currently have, OR when they ask "How do I..." questions about their workflow so you can give specific, actionable advice based on their actual setup.',
-       params: {},
-       parameters: {
-         type: 'object',
-         properties: {
-           includeMetadata: {
-             type: 'boolean',
-             description:
-               'Whether to include additional metadata about the workflow (default: false)',
-             default: false,
-           },
-         },
-         required: [],
-       },
-     },
-   ]
+   // Validate system prompt
+   validateSystemPrompt(mode, systemPrompt)

    const response = await executeProviderRequest(provider, {
      model,
-     systemPrompt: config.chat.systemPrompt,
+     systemPrompt,
      messages,
      tools,
      temperature: config.chat.temperature,
      maxTokens: config.chat.maxTokens,
      apiKey,
      stream,
+     streamToolCalls: true, // Enable tool call streaming for copilot
+     workflowId: options.workflowId,
+     chatId: options.chatId,
    })

    // Handle StreamingExecution (from providers with tool calls)
@@ -474,7 +446,6 @@ export async function generateChatResponse(
    'stream' in response &&
    'execution' in response
  ) {
-   logger.info('Detected StreamingExecution from provider')
    return (response as any).stream
  }

@@ -516,12 +487,8 @@
export async function createChat(
  userId: string,
  workflowId: string,
- options: {
-   title?: string
-   initialMessage?: string
- } = {}
+ options: CreateChatOptions = {}
): Promise<CopilotChat> {
  const config = getCopilotConfig()
  const { provider, model } = getCopilotModel('chat')
  const { title, initialMessage } = options
@@ -544,7 +511,7 @@ export async function createChat(
    .values({
      userId,
      workflowId,
-     title: title || null, // Will be generated later if null
+     title: title || null,
      model,
      messages: initialMessages,
    })
@@ -554,8 +521,6 @@
    throw new Error('Failed to create chat')
  }

- logger.info(`Created chat ${newChat.id} for user ${userId}`)
-
  return {
    id: newChat.id,
    title: newChat.title,
@@ -609,10 +574,7 @@ export async function getChat(chatId: string, userId: string): Promise<CopilotCh
export async function listChats(
  userId: string,
  workflowId: string,
- options: {
-   limit?: number
-   offset?: number
- } = {}
+ options: ListChatsOptions = {}
): Promise<CopilotChat[]> {
  const { limit = 50, offset = 0 } = options

@@ -621,7 +583,7 @@ export async function listChats(
    .select()
    .from(copilotChats)
    .where(and(eq(copilotChats.userId, userId), eq(copilotChats.workflowId, workflowId)))
-   .orderBy(desc(copilotChats.updatedAt))
+   .orderBy(desc(copilotChats.createdAt))
    .limit(limit)
    .offset(offset)

@@ -646,10 +608,7 @@ export async function listChats(
export async function updateChat(
  chatId: string,
  userId: string,
- updates: {
-   title?: string
-   messages?: CopilotMessage[]
- }
+ updates: UpdateChatOptions
): Promise<CopilotChat | null> {
  try {
    // Verify the chat exists and belongs to the user
@@ -716,7 +675,7 @@ export async function sendMessage(request: SendMessageRequest): Promise<{
  response: string | ReadableStream | any
  chatId?: string
}> {
- const { message, chatId, workflowId, createNewChat, stream, userId } = request
+ const { message, chatId, workflowId, mode, createNewChat, stream, userId } = request

  try {
    // Handle chat context
@@ -738,12 +697,11 @@ export async function sendMessage(request: SendMessageRequest): Promise<{
    const response = await generateChatResponse(message, conversationHistory, {
      stream,
      workflowId,
+     mode,
+     chatId: currentChat?.id,
    })

-   // No need to extract citations - LLM generates direct markdown links
-
    // For non-streaming responses, save immediately
    // For streaming responses, save will be handled by the API layer after stream completes
    if (currentChat && typeof response === 'string') {
      const userMessage: CopilotMessage = {
        id: crypto.randomUUID(),
@@ -782,3 +740,23 @@ export async function sendMessage(request: SendMessageRequest): Promise<{
    throw error
  }
}

// Update existing chat messages (for streaming responses)
export async function updateChatMessages(
  chatId: string,
  messages: CopilotMessage[]
): Promise<void> {
  try {
    await db
      .update(copilotChats)
      .set({
        messages,
        updatedAt: new Date(),
      })
      .where(eq(copilotChats.id, chatId))
      .execute()
  } catch (error) {
    logger.error('Failed to update chat messages:', error)
    throw error
  }
}
@@ -1,29 +1,68 @@
|
||||
import { createLogger } from '@/lib/logs/console-logger'
|
||||
import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
|
||||
import { useWorkflowYamlStore } from '@/stores/workflows/yaml/store'
|
||||
import { searchDocumentation } from './service'
|
||||
|
||||
const logger = createLogger('CopilotTools')
|
||||
|
||||
// Interface for copilot tool execution results
|
||||
/**
|
||||
* Interface for copilot tool execution results
|
||||
*/
|
||||
export interface CopilotToolResult {
|
||||
success: boolean
|
||||
data?: any
|
||||
error?: string
|
||||
}
|
||||
|
||||
// Interface for copilot tool definitions
|
||||
/**
|
||||
* Interface for copilot tool parameters
|
||||
*/
|
||||
export interface CopilotToolParameters {
|
||||
type: 'object'
|
||||
properties: Record<string, any>
|
||||
required: string[]
|
||||
}
|
||||
|
||||
/**
|
||||
* Interface for copilot tool definitions
|
||||
*/
|
||||
export interface CopilotTool {
|
||||
id: string
|
||||
name: string
|
||||
description: string
|
||||
parameters: {
|
||||
type: 'object'
|
||||
properties: Record<string, any>
|
||||
required: string[]
|
||||
}
|
||||
parameters: CopilotToolParameters
|
||||
execute: (args: Record<string, any>) => Promise<CopilotToolResult>
|
||||
}
|
||||
|
||||
// Documentation search tool for copilot
|
||||
/**
|
||||
* Interface for documentation search arguments
|
||||
*/
|
||||
interface DocsSearchArgs {
|
||||
query: string
|
||||
topK?: number
|
||||
}
|
||||
|
||||
/**
|
||||
* Interface for workflow metadata
|
||||
*/
|
||||
interface WorkflowMetadata {
  workflowId: string
  name: string
  description: string | undefined
  workspaceId: string
}

/**
 * Interface for user workflow data
 */
interface UserWorkflowData {
  yaml: string
  metadata?: WorkflowMetadata
}

/**
 * Documentation search tool for copilot
 */
const docsSearchTool: CopilotTool = {
  id: 'docs_search_internal',
  name: 'Search Documentation',
@@ -38,22 +77,17 @@ const docsSearchTool: CopilotTool = {
      },
      topK: {
        type: 'number',
        description: 'Number of results to return (default: 5, max: 10)',
        default: 5,
        description: 'Number of results to return (default: 10, max: 10)',
        default: 10,
      },
    },
    required: ['query'],
  },
  execute: async (args: Record<string, any>): Promise<CopilotToolResult> => {
    try {
      const { query, topK = 5 } = args

      logger.info('Executing documentation search', { query, topK })

      const { query, topK = 10 } = args
      const results = await searchDocumentation(query, { topK })

      logger.info(`Found ${results.length} documentation results`, { query })

      return {
        success: true,
        data: {
@@ -72,7 +106,9 @@ const docsSearchTool: CopilotTool = {
  },
}

// Get user workflow as YAML tool for copilot
/**
 * Get user workflow as YAML tool for copilot
 */
const getUserWorkflowTool: CopilotTool = {
  id: 'get_user_workflow',
  name: 'Get User Workflow',
@@ -85,12 +121,6 @@ const getUserWorkflowTool: CopilotTool = {
  },
  execute: async (args: Record<string, any>): Promise<CopilotToolResult> => {
    try {
      logger.info('Executing get user workflow')

      // Import the workflow YAML store dynamically to avoid import issues
      const { useWorkflowYamlStore } = await import('@/stores/workflows/yaml/store')
      const { useWorkflowRegistry } = await import('@/stores/workflows/registry/store')

      // Get the current workflow YAML using the same logic as export
      const yamlContent = useWorkflowYamlStore.getState().getYaml()

@@ -99,24 +129,24 @@ const getUserWorkflowTool: CopilotTool = {
      const activeWorkflowId = registry.activeWorkflowId
      const activeWorkflow = activeWorkflowId ? registry.workflows[activeWorkflowId] : null

      let metadata
      if (activeWorkflow) {
      let metadata: WorkflowMetadata | undefined
      if (activeWorkflow && activeWorkflowId) {
        metadata = {
          workflowId: activeWorkflowId,
          name: activeWorkflow.name,
          name: activeWorkflow.name || 'Untitled Workflow',
          description: activeWorkflow.description,
          workspaceId: activeWorkflow.workspaceId,
          workspaceId: activeWorkflow.workspaceId || '',
        }
      }

      logger.info('Successfully retrieved user workflow YAML')
      const data: UserWorkflowData = {
        yaml: yamlContent,
        metadata,
      }

      return {
        success: true,
        data: {
          yaml: yamlContent,
          metadata: metadata,
        },
        data,
      }
    } catch (error) {
      logger.error('Get user workflow failed', error)
@@ -128,18 +158,24 @@ const getUserWorkflowTool: CopilotTool = {
  },
}

// Copilot tools registry
/**
 * Copilot tools registry
 */
const copilotTools: Record<string, CopilotTool> = {
  docs_search_internal: docsSearchTool,
  get_user_workflow: getUserWorkflowTool,
}

// Get a copilot tool by ID
/**
 * Get a copilot tool by ID
 */
export function getCopilotTool(toolId: string): CopilotTool | undefined {
  return copilotTools[toolId]
}

// Execute a copilot tool
/**
 * Execute a copilot tool
 */
export async function executeCopilotTool(
  toolId: string,
  args: Record<string, any>
@@ -155,9 +191,7 @@ export async function executeCopilotTool(
  }

  try {
    logger.info(`Executing copilot tool: ${toolId}`, { args })
    const result = await tool.execute(args)
    logger.info(`Copilot tool execution completed: ${toolId}`, { success: result.success })
    return result
  } catch (error) {
    logger.error(`Copilot tool execution failed: ${toolId}`, error)
@@ -168,7 +202,9 @@ export async function executeCopilotTool(
  }
}

// Get all available copilot tools (for tool definitions in LLM requests)
/**
 * Get all available copilot tools (for tool definitions in LLM requests)
 */
export function getAllCopilotTools(): CopilotTool[] {
  return Object.values(copilotTools)
}
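A minimal usage sketch for the registry above. The import path and the exact `CopilotToolResult` shape are assumptions, not shown in this diff:

```ts
// Hypothetical call site; assumes the registry above is exported from
// '@/lib/copilot/tools' (the real module path is not visible here).
import { executeCopilotTool, getAllCopilotTools } from '@/lib/copilot/tools'

async function searchDocs(question: string) {
  // Tool definitions that would be attached to an LLM request
  const toolDefs = getAllCopilotTools().map((t) => ({ id: t.id, name: t.name }))

  // Direct execution path, e.g. after the LLM selects docs_search_internal
  const result = await executeCopilotTool('docs_search_internal', {
    query: question,
    topK: 10,
  })
  return result.success ? result.data : toolDefs
}
```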
414 apps/sim/lib/tool-call-parser.ts Normal file
@@ -0,0 +1,414 @@
import type {
  InlineContent,
  ParsedMessageContent,
  ToolCallIndicator,
  ToolCallState,
} from '@/types/tool-call'

// Tool ID to display name mapping for better UX
const TOOL_DISPLAY_NAMES: Record<string, string> = {
  docs_search_internal: 'Searching documentation',
  get_user_workflow: 'Analyzing your workflow',
  get_blocks_and_tools: 'Getting context',
  get_blocks_metadata: 'Getting context',
  get_yaml_structure: 'Designing an approach',
  edit_workflow: 'Building your workflow',
}

// Past tense versions for completed tool calls
const TOOL_PAST_TENSE_NAMES: Record<string, string> = {
  docs_search_internal: 'Searched documentation',
  get_user_workflow: 'Analyzed your workflow',
  get_blocks_and_tools: 'Understood context',
  get_blocks_metadata: 'Understood context',
  get_yaml_structure: 'Designed an approach',
  edit_workflow: 'Built your workflow',
}

// Regex patterns to detect structured tool call events
const TOOL_CALL_PATTERNS = {
  // Matches structured tool call events: __TOOL_CALL_EVENT__{"type":"..."}__TOOL_CALL_EVENT__
  toolCallEvent: /__TOOL_CALL_EVENT__(.*?)__TOOL_CALL_EVENT__/g,
  // Fallback patterns for legacy emoji indicators (if needed)
  statusIndicator: /🔄\s+([^🔄\n]+)/gu,
  thinkingPattern: /(\.\.\.|…|💭|🤔)/g,
  functionCall: /(\w+)\s*\(\s*([^)]*)\s*\)/g,
  completionIndicator: /✅|☑️|✓|Done|Complete/g,
  errorIndicator: /❌|⚠️|Error|Failed/g,
}
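As a quick illustration of the framing, here is what a framed event looks like inside a streamed message and how `toolCallEvent` recovers the JSON payload (a sketch; the payload mirrors the event shapes used later in this file):

```ts
const sample =
  'Let me check the docs.\n' +
  '__TOOL_CALL_EVENT__{"type":"tool_call_detected","toolCall":{"id":"t1","name":"docs_search_internal"}}__TOOL_CALL_EVENT__\n'

for (const match of sample.matchAll(TOOL_CALL_PATTERNS.toolCallEvent)) {
  const event = JSON.parse(match[1]) // the capture group holds the raw JSON between markers
  console.log(event.type, event.toolCall.id) // 'tool_call_detected' 't1'
}
```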
/**
 * Extract tool names from a status message
 */
export function extractToolNames(statusMessage: string): string[] {
  // Remove the 🔄 indicator and split on • or bullet points
  const cleanMessage = statusMessage.replace(/🔄\s*/, '').trim()

  // Split on common separators
  const toolNames = cleanMessage
    .split(/[•·|,&]/)
    .map((name) => name.trim())
    .filter((name) => name.length > 0)

  return toolNames
}

/**
 * Get display name for a tool
 */
export function getToolDisplayName(toolId: string, isCompleted = false): string {
  if (isCompleted) {
    return TOOL_PAST_TENSE_NAMES[toolId] || TOOL_DISPLAY_NAMES[toolId] || toolId.replace(/_/g, ' ')
  }
  return TOOL_DISPLAY_NAMES[toolId] || toolId.replace(/_/g, ' ')
}

/**
 * Parse structured tool call events from the stream and maintain state transitions
 */
export function parseToolCallEvents(
  content: string,
  existingToolCalls: ToolCallState[] = []
): ToolCallState[] {
  const toolCallsMap = new Map<string, ToolCallState>()

  // Start with existing tool calls
  existingToolCalls.forEach((tc) => {
    toolCallsMap.set(tc.id, { ...tc })
  })

  const matches = content.matchAll(TOOL_CALL_PATTERNS.toolCallEvent)

  for (const match of matches) {
    try {
      const eventData = JSON.parse(match[1])

      switch (eventData.type) {
        case 'tool_call_detected':
          if (!toolCallsMap.has(eventData.toolCall.id)) {
            toolCallsMap.set(eventData.toolCall.id, {
              ...eventData.toolCall,
              startTime: Date.now(),
            })
          }
          break

        case 'tool_calls_start':
          eventData.toolCalls.forEach((toolCall: any) => {
            if (!toolCallsMap.has(toolCall.id)) {
              toolCallsMap.set(toolCall.id, {
                ...toolCall,
                startTime: Date.now(),
              })
            } else {
              // Update existing tool call to executing state
              const existing = toolCallsMap.get(toolCall.id)!
              toolCallsMap.set(toolCall.id, {
                ...existing,
                state: 'executing',
                parameters: toolCall.parameters || existing.parameters,
              })
            }
          })
          break

        case 'tool_call_complete': {
          const completedToolCall = eventData.toolCall
          if (toolCallsMap.has(completedToolCall.id)) {
            // Update existing tool call to completed state
            const existing = toolCallsMap.get(completedToolCall.id)!
            toolCallsMap.set(completedToolCall.id, {
              ...existing,
              state: completedToolCall.state,
              endTime: completedToolCall.endTime,
              duration: completedToolCall.duration,
              result: completedToolCall.result,
              error: completedToolCall.error,
            })
          } else {
            // Create new completed tool call if it doesn't exist
            toolCallsMap.set(completedToolCall.id, completedToolCall)
          }
          break
        }
      }
    } catch (error) {
      console.warn('Failed to parse tool call event:', error)
    }
  }

  return Array.from(toolCallsMap.values())
}

/**
 * Parse a tool call status message and extract tool information (fallback for legacy)
 */
export function parseToolCallStatus(content: string): ToolCallIndicator | null {
  // First check for structured events
  const structuredEvents = parseToolCallEvents(content)
  if (structuredEvents.length > 0) {
    return {
      type: 'status',
      content: content,
      toolNames: structuredEvents.map((e) => e.displayName || e.name),
    }
  }

  // Fallback to legacy emoji parsing
  const statusMatch = content.match(TOOL_CALL_PATTERNS.statusIndicator)

  if (statusMatch) {
    const statusText = statusMatch[0]
    const toolNames = extractToolNames(statusText)

    return {
      type: 'status',
      content: statusText,
      toolNames,
    }
  }

  // Check for thinking patterns
  if (TOOL_CALL_PATTERNS.thinkingPattern.test(content)) {
    return {
      type: 'thinking',
      content: content.trim(),
    }
  }

  // Check for function call patterns
  const functionMatch = content.match(TOOL_CALL_PATTERNS.functionCall)
  if (functionMatch) {
    return {
      type: 'execution',
      content: content.trim(),
    }
  }

  return null
}

/**
 * Create a tool call state from detected information
 */
export function createToolCallState(
  name: string,
  parameters?: Record<string, any>,
  state: ToolCallState['state'] = 'detecting'
): ToolCallState {
  return {
    id: `${name}-${Date.now()}-${Math.random().toString(36).slice(2)}`,
    name,
    displayName: getToolDisplayName(name),
    parameters,
    state,
    startTime: Date.now(),
  }
}

/**
 * Parse message content and maintain inline positioning of tool calls
 */
export function parseMessageContent(
  content: string,
  existingToolCalls: ToolCallState[] = []
): ParsedMessageContent {
  // Get all tool call events with state transitions
  const toolCallEvents = parseToolCallEvents(content, existingToolCalls)
  const toolCallsMap = new Map<string, ToolCallState>()

  toolCallEvents.forEach((tc) => {
    toolCallsMap.set(tc.id, tc)
  })

  // Parse content maintaining inline positioning and deduplicating tool calls
  const inlineContent: InlineContent[] = []
  const toolCallPositions = new Map<string, number>() // Track where each tool call first appears
  let currentTextBuffer = ''

  // Split content into segments, preserving tool call markers inline
  const segments = content.split(/(__TOOL_CALL_EVENT__.*?__TOOL_CALL_EVENT__)/)

  for (let i = 0; i < segments.length; i++) {
    const segment = segments[i]

    if (segment.match(/__TOOL_CALL_EVENT__.*?__TOOL_CALL_EVENT__/)) {
      // This is a tool call event

      try {
        const eventMatch = segment.match(/__TOOL_CALL_EVENT__(.*?)__TOOL_CALL_EVENT__/)
        if (eventMatch) {
          const eventData = JSON.parse(eventMatch[1])
          let toolCallId: string | undefined
          let toolCall: ToolCallState | undefined

          switch (eventData.type) {
            case 'tool_call_detected': {
              const id = eventData.toolCall?.id
              if (id) {
                toolCallId = id
                toolCall = toolCallsMap.get(id)
              }
              break
            }
            case 'tool_calls_start': {
              // For multiple tool calls, use the first one
              const id = eventData.toolCalls?.[0]?.id
              if (id) {
                toolCallId = id
                toolCall = toolCallsMap.get(id)
              }
              break
            }
            case 'tool_call_complete': {
              const id = eventData.toolCall?.id
              if (id) {
                toolCallId = id
                toolCall = toolCallsMap.get(id)
              }
              break
            }
          }

          if (toolCallId && toolCall) {
            if (toolCallPositions.has(toolCallId)) {
              // Update existing tool call in place
              const existingIndex = toolCallPositions.get(toolCallId)!
              if (
                inlineContent[existingIndex] &&
                inlineContent[existingIndex].type === 'tool_call'
              ) {
                inlineContent[existingIndex].toolCall = toolCall
              }
            } else {
              // First time seeing this tool call - add accumulated text first
              if (currentTextBuffer.trim()) {
                inlineContent.push({
                  type: 'text',
                  content: currentTextBuffer.trim(),
                })
                currentTextBuffer = ''
              }

              // Add new tool call and remember its position
              const newIndex = inlineContent.length
              inlineContent.push({
                type: 'tool_call',
                content: segment,
                toolCall,
              })
              toolCallPositions.set(toolCallId, newIndex)
            }
          } else {
            // If parsing fails or no tool call found, treat as text
            currentTextBuffer += segment
          }
        }
      } catch (error) {
        // If parsing fails, treat as text
        currentTextBuffer += segment
      }
    } else {
      // Regular text content
      currentTextBuffer += segment
    }
  }

  // Add any remaining text
  if (currentTextBuffer.trim()) {
    inlineContent.push({
      type: 'text',
      content: currentTextBuffer.trim(),
    })
  }

  // Create clean text content for fallback
  const cleanTextContent = content.replace(TOOL_CALL_PATTERNS.toolCallEvent, '').trim()

  return {
    textContent: cleanTextContent,
    toolCalls: Array.from(toolCallsMap.values()),
    toolGroups: [], // No grouping for inline display
    inlineContent,
  }
}

/**
 * Update tool call states based on new content
 */
export function updateToolCallStates(
  existingToolCalls: ToolCallState[],
  newContent: string
): ToolCallState[] {
  const updatedToolCalls = [...existingToolCalls]

  // Look for completion or error indicators
  if (TOOL_CALL_PATTERNS.completionIndicator.test(newContent)) {
    // Mark executing tools as completed
    updatedToolCalls.forEach((toolCall) => {
      if (toolCall.state === 'executing') {
        toolCall.state = 'completed'
        toolCall.endTime = Date.now()
        toolCall.duration = toolCall.endTime - (toolCall.startTime || 0)
      }
    })
  } else if (TOOL_CALL_PATTERNS.errorIndicator.test(newContent)) {
    // Mark executing tools as error
    updatedToolCalls.forEach((toolCall) => {
      if (toolCall.state === 'executing') {
        toolCall.state = 'error'
        toolCall.endTime = Date.now()
        toolCall.duration = toolCall.endTime - (toolCall.startTime || 0)
        toolCall.error = 'Tool execution failed'
      }
    })
  }

  return updatedToolCalls
}

/**
 * Check if content contains tool call indicators
 */
export function hasToolCallIndicators(content: string): boolean {
  return (
    TOOL_CALL_PATTERNS.toolCallEvent.test(content) ||
    TOOL_CALL_PATTERNS.statusIndicator.test(content) ||
    TOOL_CALL_PATTERNS.functionCall.test(content) ||
    TOOL_CALL_PATTERNS.thinkingPattern.test(content)
  )
}

/**
 * Remove tool call indicators from content, leaving only text
 */
export function stripToolCallIndicators(content: string): string {
  return content
    .replace(TOOL_CALL_PATTERNS.toolCallEvent, '')
    .replace(TOOL_CALL_PATTERNS.statusIndicator, '')
    .replace(/\n\s*\n/g, '\n')
    .trim()
}

/**
 * Parse streaming content incrementally
 */
export function parseStreamingContent(
  accumulatedContent: string,
  newChunk: string,
  existingToolCalls: ToolCallState[] = []
): {
  parsedContent: ParsedMessageContent
  updatedToolCalls: ToolCallState[]
} {
  const fullContent = accumulatedContent + newChunk
  const parsedContent = parseMessageContent(fullContent, existingToolCalls)

  // The parseMessageContent now handles state transitions, so we use its tool calls
  const updatedToolCalls = parsedContent.toolCalls

  return {
    parsedContent,
    updatedToolCalls,
  }
}
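A sketch of how a streaming consumer might drive this parser; `render` is a hypothetical UI callback, not part of this module:

```ts
let accumulated = ''
let toolCalls: ToolCallState[] = []

function onStreamChunk(chunk: string, render: (parsed: ParsedMessageContent) => void) {
  const { parsedContent, updatedToolCalls } = parseStreamingContent(
    accumulated,
    chunk,
    toolCalls
  )
  accumulated += chunk
  toolCalls = updatedToolCalls
  render(parsedContent) // inlineContent interleaves text segments and tool_call entries
}
```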
@@ -4,6 +4,11 @@ import { createLogger } from '@/lib/logs/console-logger'
import { getBlock } from '@/blocks'
import type { SubBlockConfig } from '@/blocks/types'
import type { BlockState, WorkflowState } from '@/stores/workflows/workflow/types'
import {
  type ConnectionsFormat,
  cleanConditionInputs,
  generateBlockConnections,
} from '@/stores/workflows/yaml/parsing-utils'

const logger = createLogger('WorkflowYamlGenerator')

@@ -11,18 +16,7 @@ interface YamlBlock {
  type: string
  name: string
  inputs?: Record<string, any>
  connections?: {
    incoming?: Array<{
      source: string
      sourceHandle?: string
      targetHandle?: string
    }>
    outgoing?: Array<{
      target: string
      sourceHandle?: string
      targetHandle?: string
    }>
  }
  connections?: ConnectionsFormat
  parentId?: string // Add parentId for nested blocks
}

@@ -96,12 +90,14 @@ function extractBlockInputs(
  blockConfig.subBlocks.forEach((subBlockConfig: SubBlockConfig) => {
    const subBlockId = subBlockConfig.id

    // Skip hidden or conditional fields that aren't active
    if (subBlockConfig.hidden) return

    // Get value from provided values or fallback to block state
    const value = blockSubBlockValues[subBlockId] ?? blockState.subBlocks[subBlockId]?.value

    // Skip hidden fields ONLY if they have no value (don't skip configured hidden fields)
    if (subBlockConfig.hidden && (value === undefined || value === null || value === '')) {
      return
    }

    // Include value if it exists and isn't empty
    if (value !== undefined && value !== null && value !== '') {
      // Handle different input types appropriately
@@ -129,6 +125,18 @@
        }
        break

      case 'input-format':
        // Clean up input format to only include essential fields
        if (Array.isArray(value) && value.length > 0) {
          inputs[subBlockId] = value
            .map((field: any) => ({
              name: field.name,
              type: field.type,
            }))
            .filter((field: any) => field.name && field.type)
        }
        break

      case 'switch':
        // Boolean values
        inputs[subBlockId] = Boolean(value)
@@ -219,9 +227,14 @@ export function generateWorkflowYaml(

  // Process each block
  Object.entries(workflowState.blocks).forEach(([blockId, blockState]) => {
    const inputs = extractBlockInputs(blockState, blockId, subBlockValues)
    const incoming = findIncomingConnections(blockId, workflowState.edges)
    const outgoing = findOutgoingConnections(blockId, workflowState.edges)
    const rawInputs = extractBlockInputs(blockState, blockId, subBlockValues)

    // Clean up condition inputs to use semantic format
    const inputs =
      blockState.type === 'condition' ? cleanConditionInputs(blockId, rawInputs) : rawInputs

    // Use shared utility to generate connections in new format
    const connections = generateBlockConnections(blockId, workflowState.edges)

    const yamlBlock: YamlBlock = {
      type: blockState.type,
@@ -233,17 +246,10 @@
      yamlBlock.inputs = inputs
    }

    // Only include connections if they exist
    if (incoming.length > 0 || outgoing.length > 0) {
      yamlBlock.connections = {}

      if (incoming.length > 0) {
        yamlBlock.connections.incoming = incoming
      }

      if (outgoing.length > 0) {
        yamlBlock.connections.outgoing = outgoing
      }
    // Only include connections if they exist (check if any connection type has content)
    const hasConnections = Object.keys(connections).length > 0
    if (hasConnections) {
      yamlBlock.connections = connections
    }

    // Include parent-child relationship for nested blocks
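For orientation, a sketch of the refactored call path with a placeholder block id; the exact `ConnectionsFormat` shape is defined in `parsing-utils` and is assumed here:

```ts
// The shared helper now owns the connection format, so condition blocks and
// regular blocks serialize the same way (block id and edges are placeholders).
const connections = generateBlockConnections('condition-1', workflowState.edges)
if (Object.keys(connections).length > 0) {
  yamlBlock.connections = connections // per ConnectionsFormat, not incoming/outgoing arrays
}
```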
@@ -33,6 +33,8 @@
    "@better-auth/stripe": "^1.2.9",
    "@browserbasehq/stagehand": "^2.0.0",
    "@cerebras/cerebras_cloud_sdk": "^1.23.0",
    "@chatscope/chat-ui-kit-react": "2.1.1",
    "@chatscope/chat-ui-kit-styles": "1.4.0",
    "@hookform/resolvers": "^4.1.3",
    "@opentelemetry/api": "^1.9.0",
    "@opentelemetry/exporter-collector": "^0.25.0",
@@ -66,6 +68,7 @@
    "@react-email/components": "^0.0.34",
    "@sentry/nextjs": "^9.15.0",
    "@trigger.dev/sdk": "3.3.17",
    "@types/react-syntax-highlighter": "15.5.13",
    "@types/three": "0.177.0",
    "@vercel/og": "^0.6.5",
    "@vercel/speed-insights": "^1.2.0",
@@ -109,8 +112,11 @@
    "react-hook-form": "^7.54.2",
    "react-markdown": "^10.1.0",
    "react-simple-code-editor": "^0.14.1",
    "react-syntax-highlighter": "15.6.1",
    "reactflow": "^11.11.4",
    "recharts": "2.15.3",
    "rehype-highlight": "7.0.2",
    "remark-gfm": "4.0.1",
    "resend": "^4.1.2",
    "socket.io": "^4.8.1",
    "stripe": "^17.7.0",
@@ -261,6 +261,9 @@ ${fieldDescriptions}
  }
}

// Check if we should stream tool calls (default: false for chat, true for copilot)
const shouldStreamToolCalls = request.streamToolCalls ?? false

// EARLY STREAMING: if caller requested streaming and there are no tools to execute,
// we can directly stream the completion.
if (request.stream && (!anthropicTools || anthropicTools.length === 0)) {
@@ -328,6 +331,864 @@ ${fieldDescriptions}
  return streamingResult as StreamingExecution
}

// STREAMING WITH INCREMENTAL PARSING: Handle both text and tool calls in real-time
if (request.stream && shouldStreamToolCalls) {
  logger.info('Using incremental streaming parser for Anthropic request', {
    hasTools: !!(anthropicTools && anthropicTools.length > 0),
  })

  // Start execution timer for the entire provider execution
  const providerStartTime = Date.now()
  const providerStartTimeISO = new Date(providerStartTime).toISOString()

  // Create a streaming request
  const streamResponse: any = await anthropic.messages.create({
    ...payload,
    stream: true,
  })

  // State for incremental parsing
  let currentBlockType: 'text' | 'tool_use' | null = null
  let toolCallBuffer: any = null
  const toolCalls: any[] = []
  let streamedContent = ''

  // Token usage tracking
  const tokenUsage = {
    prompt: 0,
    completion: 0,
    total: 0,
  }

  // Create an incremental parsing stream
  const incrementalParsingStream = new ReadableStream({
    async start(controller) {
      try {
        for await (const chunk of streamResponse) {
          // Handle different chunk types
          if (chunk.type === 'content_block_start') {
            currentBlockType = chunk.content_block?.type

            if (currentBlockType === 'tool_use') {
              // Start buffering a tool call
              toolCallBuffer = {
                id: chunk.content_block.id,
                name: chunk.content_block.name,
                input: {},
              }
              logger.info(`Starting tool call: ${chunk.content_block.name}`)

              // Emit tool call detection event
              const toolDetectionEvent = {
                type: 'tool_call_detected',
                toolCall: {
                  id: chunk.content_block.id,
                  name: chunk.content_block.name,
                  displayName: getToolDisplayName(chunk.content_block.name),
                  state: 'detecting',
                },
              }
              controller.enqueue(
                new TextEncoder().encode(
                  `\n__TOOL_CALL_EVENT__${JSON.stringify(toolDetectionEvent)}__TOOL_CALL_EVENT__\n`
                )
              )
            }
          } else if (chunk.type === 'content_block_delta') {
            if (currentBlockType === 'text' && chunk.delta?.text) {
              // Stream text content immediately to user
              const textContent = chunk.delta.text
              streamedContent += textContent
              controller.enqueue(new TextEncoder().encode(textContent))
            } else if (currentBlockType === 'tool_use' && chunk.delta?.partial_json) {
              // Buffer tool call parameters
              if (toolCallBuffer) {
                try {
                  // Attempt to parse the accumulated JSON
                  const partialInput = chunk.delta.partial_json
                  // This is partial JSON, we'll parse it when the block is complete
                  toolCallBuffer.partialInput =
                    (toolCallBuffer.partialInput || '') + partialInput
                } catch (error) {
                  // Ignore parsing errors for partial JSON
                }
              }
            }
          } else if (chunk.type === 'content_block_stop') {
            if (currentBlockType === 'tool_use' && toolCallBuffer) {
              try {
                // Parse the complete tool call input
                toolCallBuffer.input = JSON.parse(toolCallBuffer.partialInput || '{}')
                toolCalls.push(toolCallBuffer)

                // Queue tool call for execution
                pendingToolCalls.push(toolCallBuffer)

                logger.info(`Completed tool call buffer for: ${toolCallBuffer.name}`)
              } catch (error) {
                logger.error('Error parsing tool call input:', { error, toolCallBuffer })
              }
              toolCallBuffer = null
            }
            currentBlockType = null
          } else if (chunk.type === 'message_start') {
            // Track usage data if available
            if (chunk.message?.usage) {
              tokenUsage.prompt = chunk.message.usage.input_tokens || 0
            }
          } else if (chunk.type === 'message_delta') {
            // Update token counts as they become available
            if (chunk.usage) {
              tokenUsage.completion = chunk.usage.output_tokens || 0
              tokenUsage.total = tokenUsage.prompt + tokenUsage.completion
            }
          } else if (chunk.type === 'message_stop') {
            // Stream is complete - execute any pending tool calls
            logger.info('Initial stream completed', {
              streamedContentLength: streamedContent.length,
              toolCallsCount: toolCalls.length,
              pendingToolCallsCount: pendingToolCalls.length,
            })

            if (pendingToolCalls.length > 0) {
              // Send structured tool call indicators instead of text
              const toolCallEvent = {
                type: 'tool_calls_start',
                toolCalls: pendingToolCalls.map((tc) => ({
                  id: tc.id,
                  name: tc.name,
                  displayName: getToolDisplayName(tc.name),
                  parameters: tc.input,
                  state: 'executing',
                })),
              }
              controller.enqueue(
                new TextEncoder().encode(
                  `\n__TOOL_CALL_EVENT__${JSON.stringify(toolCallEvent)}__TOOL_CALL_EVENT__\n`
                )
              )

              // Execute tools and continue conversation
              await executeToolsAndContinue(pendingToolCalls, controller)
            }

            controller.close()
            break
          }
        }
      } catch (error) {
        logger.error('Error in incremental streaming:', { error })
        controller.error(error)
      }
    },
  })

// Track conversation state for multi-turn tool execution
const conversationMessages = [...messages]
const pendingToolCalls: any[] = []
const completedToolCalls: any[] = []

// Tool ID to readable name mapping for better UX
const toolDisplayNames: Record<string, string> = {
  // Actual copilot tool IDs
  docs_search_internal: 'Searching documentation',
  get_user_workflow: 'Analyzing your workflow',
  get_blocks_and_tools: 'Getting context',
  get_blocks_metadata: 'Getting context',
  get_yaml_structure: 'Designing an approach',
  edit_workflow: 'Building your workflow',
}

// Helper function to get display name for tool
const getToolDisplayName = (toolId: string): string => {
  return toolDisplayNames[toolId] || `Executing ${toolId}`
}

// Helper function to group tools by their display names
const groupToolsByDisplayName = (toolCalls: any[]): string[] => {
  const displayNameSet = new Set<string>()
  toolCalls.forEach((tc) => {
    displayNameSet.add(getToolDisplayName(tc.name))
  })
  return Array.from(displayNameSet)
}

// Helper function to execute tools and continue conversation
const executeToolsAndContinue = async (
  toolCalls: any[],
  controller: ReadableStreamDefaultController
) => {
  try {
    logger.info(`Executing ${toolCalls.length} tool calls`, {
      toolNames: toolCalls.map((tc) => tc.name),
    })

    // Execute all tools in parallel
    const toolResults = await Promise.all(
      toolCalls.map(async (toolCall) => {
        const tool = request.tools?.find((t: any) => t.id === toolCall.name)
        if (!tool) {
          logger.warn(`Tool not found: ${toolCall.name}`)
          return null
        }

        const toolCallStartTime = Date.now()
        const mergedArgs = {
          ...tool.params,
          ...toolCall.input,
          ...(request.workflowId
            ? {
                _context: {
                  workflowId: request.workflowId,
                  ...(request.chatId ? { chatId: request.chatId } : {}),
                },
              }
            : {}),
          ...(request.environmentVariables ? { envVars: request.environmentVariables } : {}),
        }

        const result = await executeTool(toolCall.name, mergedArgs, true)
        const toolCallEndTime = Date.now()

        if (result.success) {
          completedToolCalls.push({
            name: toolCall.name,
            arguments: toolCall.input,
            startTime: new Date(toolCallStartTime).toISOString(),
            endTime: new Date(toolCallEndTime).toISOString(),
            duration: toolCallEndTime - toolCallStartTime,
            result: result.output,
          })
        }

        // Emit tool completion event
        const toolCompletionEvent = {
          type: 'tool_call_complete',
          toolCall: {
            id: toolCall.id,
            name: toolCall.name,
            displayName: getToolDisplayName(toolCall.name),
            parameters: toolCall.input,
            state: result.success ? 'completed' : 'error',
            startTime: toolCallStartTime,
            endTime: toolCallEndTime,
            duration: toolCallEndTime - toolCallStartTime,
            result: result.success ? result.output : null,
            error: result.success ? null : 'Tool execution failed',
          },
        }
        controller.enqueue(
          new TextEncoder().encode(
            `\n__TOOL_CALL_EVENT__${JSON.stringify(toolCompletionEvent)}__TOOL_CALL_EVENT__\n`
          )
        )

        return {
          toolCall,
          result: result.success ? result.output : null,
          success: result.success,
        }
      })
    )

    // Add tool calls and results to conversation
    conversationMessages.push({
      role: 'assistant',
      content: toolCalls.map((tc) => ({
        type: 'tool_use',
        id: tc.id,
        name: tc.name,
        input: tc.input,
      })) as any,
    })

    conversationMessages.push({
      role: 'user',
      content: toolResults
        .filter((tr) => tr?.success)
        .map((tr) => ({
          type: 'tool_result',
          tool_use_id: tr!.toolCall.id,
          content: JSON.stringify(tr!.result),
        })) as any,
    })

    // Add subtle completion indicator before continuing
    const completionMessage = `\n`
    controller.enqueue(new TextEncoder().encode(completionMessage))

    // Continue the conversation with tool results
    const nextStreamResponse = await anthropic.messages.create({
      ...payload,
      messages: conversationMessages,
      stream: true,
    })

    // Parse the continuation stream
    await parseContinuationStream(nextStreamResponse, controller)
  } catch (error) {
    logger.error('Error executing tools and continuing conversation:', { error })
    // Continue streaming even if tools fail
  }
}

// Helper function to parse continuation streams (for tool result responses)
const parseContinuationStream = async (
  streamResponse: any,
  controller: ReadableStreamDefaultController
) => {
  let currentBlockType: 'text' | 'tool_use' | null = null
  let toolCallBuffer: any = null
  const newToolCalls: any[] = []

  for await (const chunk of streamResponse) {
    if (chunk.type === 'content_block_start') {
      currentBlockType = chunk.content_block?.type

      if (currentBlockType === 'tool_use') {
        toolCallBuffer = {
          id: chunk.content_block.id,
          name: chunk.content_block.name,
          input: {},
        }
      }
    } else if (chunk.type === 'content_block_delta') {
      if (currentBlockType === 'text' && chunk.delta?.text) {
        // Stream continuation text immediately
        const textContent = chunk.delta.text
        controller.enqueue(new TextEncoder().encode(textContent))
      } else if (currentBlockType === 'tool_use' && chunk.delta?.partial_json) {
        if (toolCallBuffer) {
          toolCallBuffer.partialInput =
            (toolCallBuffer.partialInput || '') + chunk.delta.partial_json
        }
      }
    } else if (chunk.type === 'content_block_stop') {
      if (currentBlockType === 'tool_use' && toolCallBuffer) {
        try {
          toolCallBuffer.input = JSON.parse(toolCallBuffer.partialInput || '{}')
          newToolCalls.push(toolCallBuffer)
        } catch (error) {
          logger.error('Error parsing continuation tool call:', { error })
        }
        toolCallBuffer = null
      }
      currentBlockType = null
    } else if (chunk.type === 'message_stop') {
      // If there are more tool calls, emit structured events and execute them
      if (newToolCalls.length > 0) {
        // Send structured tool call indicators for subsequent calls
        const toolCallEvent = {
          type: 'tool_calls_start',
          toolCalls: newToolCalls.map((tc) => ({
            id: tc.id,
            name: tc.name,
            displayName: getToolDisplayName(tc.name),
            parameters: tc.input,
            state: 'executing',
          })),
        }
        controller.enqueue(
          new TextEncoder().encode(
            `\n__TOOL_CALL_EVENT__${JSON.stringify(toolCallEvent)}__TOOL_CALL_EVENT__\n`
          )
        )

        await executeToolsAndContinue(newToolCalls, controller)
      }
      break
    }
  }
}

// Create the streaming result
const streamingResult = {
  stream: incrementalParsingStream,
  execution: {
    success: true,
    output: {
      content: '', // Will be filled by streaming content
      model: request.model,
      tokens: tokenUsage,
      toolCalls:
        toolCalls.length > 0 ? { list: toolCalls, count: toolCalls.length } : undefined,
      providerTiming: {
        startTime: providerStartTimeISO,
        endTime: new Date().toISOString(),
        duration: Date.now() - providerStartTime,
        timeSegments: [
          {
            type: 'model',
            name: 'Incremental streaming with tools',
            startTime: providerStartTime,
            endTime: Date.now(),
            duration: Date.now() - providerStartTime,
          },
        ],
      },
      cost: {
        total: 0.0, // Will be updated as tokens are counted
        input: 0.0,
        output: 0.0,
      },
    },
    logs: [],
    metadata: {
      startTime: providerStartTimeISO,
      endTime: new Date().toISOString(),
      duration: Date.now() - providerStartTime,
    },
    isStreaming: true,
  },
}

// Return the streaming execution object
return streamingResult as StreamingExecution
}

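A sketch of how a consumer reads this stream; plain text chunks render directly, while framed `__TOOL_CALL_EVENT__` payloads feed the tool call parser (reader wiring is assumed, not part of this diff):

```ts
async function consume(execution: { stream: ReadableStream<Uint8Array> }) {
  const reader = execution.stream.getReader()
  const decoder = new TextDecoder()
  let accumulated = ''
  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    accumulated += decoder.decode(value, { stream: true })
    // hand `accumulated` to parseMessageContent from tool-call-parser.ts
  }
  return accumulated
}
```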
// NON-STREAMING WITH FINAL RESPONSE: Execute all tools silently and return only final response
if (request.stream && !shouldStreamToolCalls) {
  logger.info('Using non-streaming mode for Anthropic request (tool calls executed silently)')

  // Start execution timer for the entire provider execution
  const providerStartTime = Date.now()
  const providerStartTimeISO = new Date(providerStartTime).toISOString()

  try {
    // Make the initial API request
    const initialCallTime = Date.now()

    // Track the original tool_choice for forced tool tracking
    const originalToolChoice = payload.tool_choice

    // Track forced tools and their usage
    const forcedTools = preparedTools?.forcedTools || []
    let usedForcedTools: string[] = []

    let currentResponse = await anthropic.messages.create(payload)
    const firstResponseTime = Date.now() - initialCallTime

    let content = ''

    // Extract text content from the message
    if (Array.isArray(currentResponse.content)) {
      content = currentResponse.content
        .filter((item) => item.type === 'text')
        .map((item) => item.text)
        .join('\n')
    }

    const tokens = {
      prompt: currentResponse.usage?.input_tokens || 0,
      completion: currentResponse.usage?.output_tokens || 0,
      total:
        (currentResponse.usage?.input_tokens || 0) +
        (currentResponse.usage?.output_tokens || 0),
    }

    const toolCalls = []
    const toolResults = []
    const currentMessages = [...messages]
    let iterationCount = 0
    const MAX_ITERATIONS = 10 // Prevent infinite loops

    // Track if a forced tool has been used
    let hasUsedForcedTool = false

    // Track time spent in model vs tools
    let modelTime = firstResponseTime
    let toolsTime = 0

    // Track each model and tool call segment with timestamps
    const timeSegments: TimeSegment[] = [
      {
        type: 'model',
        name: 'Initial response',
        startTime: initialCallTime,
        endTime: initialCallTime + firstResponseTime,
        duration: firstResponseTime,
      },
    ]

    // Helper function to check for forced tool usage in Anthropic responses
    const checkForForcedToolUsage = (response: any, toolChoice: any) => {
      if (
        typeof toolChoice === 'object' &&
        toolChoice !== null &&
        Array.isArray(response.content)
      ) {
        const toolUses = response.content.filter((item: any) => item.type === 'tool_use')

        if (toolUses.length > 0) {
          // Convert Anthropic tool_use format to a format trackForcedToolUsage can understand
          const adaptedToolCalls = toolUses.map((tool: any) => ({
            name: tool.name,
          }))

          // Convert Anthropic tool_choice format to match OpenAI format for tracking
          const adaptedToolChoice =
            toolChoice.type === 'tool' ? { function: { name: toolChoice.name } } : toolChoice

          const result = trackForcedToolUsage(
            adaptedToolCalls,
            adaptedToolChoice,
            logger,
            'anthropic',
            forcedTools,
            usedForcedTools
          )
          // Make the behavior consistent with the initial check
          hasUsedForcedTool = result.hasUsedForcedTool
          usedForcedTools = result.usedForcedTools
          return result
        }
      }
      return null
    }

    // Check if a forced tool was used in the first response
    checkForForcedToolUsage(currentResponse, originalToolChoice)

    try {
      while (iterationCount < MAX_ITERATIONS) {
        // Check for tool calls
        const toolUses = currentResponse.content.filter((item) => item.type === 'tool_use')
        if (!toolUses || toolUses.length === 0) {
          break
        }

        // Track time for tool calls in this batch
        const toolsStartTime = Date.now()

        // Process each tool call
        for (const toolUse of toolUses) {
          try {
            const toolName = toolUse.name
            const toolArgs = toolUse.input as Record<string, any>

            // Get the tool from the tools registry
            const tool = request.tools?.find((t) => t.id === toolName)
            if (!tool) continue

            // Execute the tool
            const toolCallStartTime = Date.now()

            // Only merge actual tool parameters for logging
            const toolParams = {
              ...tool.params,
              ...toolArgs,
            }

            // Add system parameters for execution
            const executionParams = {
              ...toolParams,
              ...(request.workflowId
                ? {
                    _context: {
                      workflowId: request.workflowId,
                      ...(request.chatId ? { chatId: request.chatId } : {}),
                    },
                  }
                : {}),
              ...(request.environmentVariables
                ? { envVars: request.environmentVariables }
                : {}),
            }

            const result = await executeTool(toolName, executionParams, true)
            const toolCallEndTime = Date.now()
            const toolCallDuration = toolCallEndTime - toolCallStartTime

            // Add to time segments for both success and failure
            timeSegments.push({
              type: 'tool',
              name: toolName,
              startTime: toolCallStartTime,
              endTime: toolCallEndTime,
              duration: toolCallDuration,
            })

            // Prepare result content for the LLM
            let resultContent: any
            if (result.success) {
              toolResults.push(result.output)
              resultContent = result.output
            } else {
              // Include error information so LLM can respond appropriately
              resultContent = {
                error: true,
                message: result.error || 'Tool execution failed',
                tool: toolName,
              }
            }

            toolCalls.push({
              name: toolName,
              arguments: toolParams,
              startTime: new Date(toolCallStartTime).toISOString(),
              endTime: new Date(toolCallEndTime).toISOString(),
              duration: toolCallDuration,
              result: resultContent,
              success: result.success,
            })

            // Add the tool call and result to messages (both success and failure)
            const toolUseId = generateToolUseId(toolName)

            currentMessages.push({
              role: 'assistant',
              content: [
                {
                  type: 'tool_use',
                  id: toolUseId,
                  name: toolName,
                  input: toolArgs,
                } as any,
              ],
            })

            currentMessages.push({
              role: 'user',
              content: [
                {
                  type: 'tool_result',
                  tool_use_id: toolUseId,
                  content: JSON.stringify(resultContent),
                } as any,
              ],
            })
          } catch (error) {
            logger.error('Error processing tool call:', { error })
          }
        }

        // Calculate tool call time for this iteration
        const thisToolsTime = Date.now() - toolsStartTime
        toolsTime += thisToolsTime

        // Make the next request with updated messages
        const nextPayload = {
          ...payload,
          messages: currentMessages,
        }

        // Update tool_choice based on which forced tools have been used
        if (
          typeof originalToolChoice === 'object' &&
          hasUsedForcedTool &&
          forcedTools.length > 0
        ) {
          // If we have remaining forced tools, get the next one to force
          const remainingTools = forcedTools.filter((tool) => !usedForcedTools.includes(tool))

          if (remainingTools.length > 0) {
            // Force the next tool - use Anthropic format
            nextPayload.tool_choice = {
              type: 'tool',
              name: remainingTools[0],
            }
            logger.info(`Forcing next tool: ${remainingTools[0]}`)
          } else {
            // All forced tools have been used, switch to auto by removing tool_choice
            nextPayload.tool_choice = undefined
            logger.info('All forced tools have been used, removing tool_choice parameter')
          }
        } else if (hasUsedForcedTool && typeof originalToolChoice === 'object') {
          // Handle the case of a single forced tool that was used
          nextPayload.tool_choice = undefined
          logger.info(
            'Removing tool_choice parameter for subsequent requests after forced tool was used'
          )
        }

        // Time the next model call
        const nextModelStartTime = Date.now()

        // Make the next request
        currentResponse = await anthropic.messages.create(nextPayload)

        // Check if any forced tools were used in this response
        checkForForcedToolUsage(currentResponse, nextPayload.tool_choice)

        const nextModelEndTime = Date.now()
        const thisModelTime = nextModelEndTime - nextModelStartTime

        // Add to time segments
        timeSegments.push({
          type: 'model',
          name: `Model response (iteration ${iterationCount + 1})`,
          startTime: nextModelStartTime,
          endTime: nextModelEndTime,
          duration: thisModelTime,
        })

        // Add to model time
        modelTime += thisModelTime

        // Update content if we have a text response
        const textContent = currentResponse.content
          .filter((item) => item.type === 'text')
          .map((item) => item.text)
          .join('\n')

        if (textContent) {
          content = textContent
        }

        // Update token counts
        if (currentResponse.usage) {
          tokens.prompt += currentResponse.usage.input_tokens || 0
          tokens.completion += currentResponse.usage.output_tokens || 0
          tokens.total +=
            (currentResponse.usage.input_tokens || 0) +
            (currentResponse.usage.output_tokens || 0)
        }

        iterationCount++
      }
    } catch (error) {
      logger.error('Error in Anthropic request:', { error })
      throw error
    }

    // If the content looks like it contains JSON, extract just the JSON part
    if (content.includes('{') && content.includes('}')) {
      try {
        const jsonMatch = content.match(/\{[\s\S]*\}/m)
        if (jsonMatch) {
          content = jsonMatch[0]
        }
      } catch (e) {
        logger.error('Error extracting JSON from response:', { error: e })
      }
    }

    // Calculate overall timing
    const providerEndTime = Date.now()
    const providerEndTimeISO = new Date(providerEndTime).toISOString()
    const totalDuration = providerEndTime - providerStartTime

    // For non-streaming mode with tools, we stream only the final response
    if (iterationCount > 0) {
      logger.info(
        'Using streaming for final Anthropic response after tool calls (non-streaming mode)'
      )

      // When streaming after tool calls with forced tools, make sure tool_choice is removed
      // This prevents the API from trying to force tool usage again in the final streaming response
      const streamingPayload = {
        ...payload,
        messages: currentMessages,
        // For Anthropic, omit tool_choice entirely rather than setting it to 'none'
        stream: true,
      }

      // Remove the tool_choice parameter as Anthropic doesn't accept 'none' as a string value
      streamingPayload.tool_choice = undefined

      const streamResponse: any = await anthropic.messages.create(streamingPayload)

      // Create a StreamingExecution response with all collected data
      const streamingResult = {
        stream: createReadableStreamFromAnthropicStream(streamResponse),
        execution: {
          success: true,
          output: {
            content: '', // Will be filled by the callback
            model: request.model || 'claude-3-7-sonnet-20250219',
            tokens: {
              prompt: tokens.prompt,
              completion: tokens.completion,
              total: tokens.total,
            },
            toolCalls:
              toolCalls.length > 0
                ? {
                    list: toolCalls,
                    count: toolCalls.length,
                  }
                : undefined,
            providerTiming: {
              startTime: providerStartTimeISO,
              endTime: new Date().toISOString(),
              duration: Date.now() - providerStartTime,
              modelTime: modelTime,
              toolsTime: toolsTime,
              firstResponseTime: firstResponseTime,
              iterations: iterationCount + 1,
              timeSegments: timeSegments,
            },
            cost: {
              total: (tokens.total || 0) * 0.0001, // Estimate cost based on tokens
              input: (tokens.prompt || 0) * 0.0001,
              output: (tokens.completion || 0) * 0.0001,
            },
          },
          logs: [], // No block logs at provider level
          metadata: {
            startTime: providerStartTimeISO,
            endTime: new Date().toISOString(),
            duration: Date.now() - providerStartTime,
          },
          isStreaming: true,
        },
      }

      return streamingResult as StreamingExecution
    }

    // If no tool calls were made, return a direct response
    return {
      content,
      model: request.model || 'claude-3-7-sonnet-20250219',
      tokens,
      toolCalls:
        toolCalls.length > 0
          ? toolCalls.map((tc) => ({
              name: tc.name,
              arguments: tc.arguments as Record<string, any>,
              startTime: tc.startTime,
              endTime: tc.endTime,
              duration: tc.duration,
              result: tc.result,
            }))
          : undefined,
      toolResults: toolResults.length > 0 ? toolResults : undefined,
      timing: {
        startTime: providerStartTimeISO,
        endTime: providerEndTimeISO,
        duration: totalDuration,
        modelTime: modelTime,
        toolsTime: toolsTime,
        firstResponseTime: firstResponseTime,
        iterations: iterationCount + 1,
        timeSegments: timeSegments,
      },
    }
  } catch (error) {
    // Include timing information even for errors
    const providerEndTime = Date.now()
    const providerEndTimeISO = new Date(providerEndTime).toISOString()
    const totalDuration = providerEndTime - providerStartTime

    logger.error('Error in Anthropic request:', {
      error,
      duration: totalDuration,
    })

    // Create a new error with timing information
    const enhancedError = new Error(error instanceof Error ? error.message : String(error))
    // @ts-ignore - Adding timing property to the error
    enhancedError.timing = {
      startTime: providerStartTimeISO,
      endTime: providerEndTimeISO,
      duration: totalDuration,
    }

    throw enhancedError
  }
}

// Start execution timer for the entire provider execution
const providerStartTime = Date.now()
const providerStartTimeISO = new Date(providerStartTime).toISOString()
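To make the forced-tool bookkeeping above concrete, a small sketch of how `tool_choice` advances across iterations (tool names are placeholders):

```ts
const forced = ['get_user_workflow', 'edit_workflow']
const used = ['get_user_workflow'] // as reported by trackForcedToolUsage
const remaining = forced.filter((t) => !used.includes(t))

// The next iteration forces the next unused tool, Anthropic-style; once the
// list is exhausted, tool_choice is dropped so the model can answer freely.
const nextToolChoice =
  remaining.length > 0 ? { type: 'tool' as const, name: remaining[0] } : undefined
```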
@@ -459,7 +1320,14 @@ ${fieldDescriptions}
// Add system parameters for execution
const executionParams = {
  ...toolParams,
  ...(request.workflowId ? { _context: { workflowId: request.workflowId } } : {}),
  ...(request.workflowId
    ? {
        _context: {
          workflowId: request.workflowId,
          ...(request.chatId ? { chatId: request.chatId } : {}),
        },
      }
    : {}),
  ...(request.environmentVariables ? { envVars: request.environmentVariables } : {}),
}

@@ -383,7 +383,14 @@ export const azureOpenAIProvider: ProviderConfig = {
// Add system parameters for execution
const executionParams = {
  ...toolParams,
  ...(request.workflowId ? { _context: { workflowId: request.workflowId } } : {}),
  ...(request.workflowId
    ? {
        _context: {
          workflowId: request.workflowId,
          ...(request.chatId ? { chatId: request.chatId } : {}),
        },
      }
    : {}),
  ...(request.environmentVariables ? { envVars: request.environmentVariables } : {}),
}

@@ -291,7 +291,14 @@ export const cerebrasProvider: ProviderConfig = {
// Add system parameters for execution
const executionParams = {
  ...toolParams,
  ...(request.workflowId ? { _context: { workflowId: request.workflowId } } : {}),
  ...(request.workflowId
    ? {
        _context: {
          workflowId: request.workflowId,
          ...(request.chatId ? { chatId: request.chatId } : {}),
        },
      }
    : {}),
  ...(request.environmentVariables ? { envVars: request.environmentVariables } : {}),
}

@@ -293,7 +293,14 @@ export const deepseekProvider: ProviderConfig = {
// Add system parameters for execution
const executionParams = {
  ...toolParams,
  ...(request.workflowId ? { _context: { workflowId: request.workflowId } } : {}),
  ...(request.workflowId
    ? {
        _context: {
          workflowId: request.workflowId,
          ...(request.chatId ? { chatId: request.chatId } : {}),
        },
      }
    : {}),
  ...(request.environmentVariables ? { envVars: request.environmentVariables } : {}),
}

@@ -262,7 +262,14 @@ export const groqProvider: ProviderConfig = {
// Add system parameters for execution
const executionParams = {
  ...toolParams,
  ...(request.workflowId ? { _context: { workflowId: request.workflowId } } : {}),
  ...(request.workflowId
    ? {
        _context: {
          workflowId: request.workflowId,
          ...(request.chatId ? { chatId: request.chatId } : {}),
        },
      }
    : {}),
  ...(request.environmentVariables ? { envVars: request.environmentVariables } : {}),
}

@@ -194,7 +194,14 @@ export const ollamaProvider: ProviderConfig = {
// Add system parameters for execution
const executionParams = {
  ...toolParams,
  ...(request.workflowId ? { _context: { workflowId: request.workflowId } } : {}),
  ...(request.workflowId
    ? {
        _context: {
          workflowId: request.workflowId,
          ...(request.chatId ? { chatId: request.chatId } : {}),
        },
      }
    : {}),
  ...(request.environmentVariables ? { envVars: request.environmentVariables } : {}),
}

@@ -147,7 +147,9 @@ export interface ProviderRequest {
  }
  local_execution?: boolean
  workflowId?: string // Optional workflow ID for authentication context
  chatId?: string // Optional chat ID for checkpoint context
  stream?: boolean
  streamToolCalls?: boolean // Whether to stream tool call responses back to user (default: false)
  environmentVariables?: Record<string, string> // Environment variables for tool execution
  // Azure OpenAI specific parameters
  azureEndpoint?: string

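A sketch of the two request shapes these flags distinguish (IDs are placeholders; other required ProviderRequest fields are elided):

```ts
// Copilot: stream text AND surface __TOOL_CALL_EVENT__ frames to the UI
const copilotRequest: Partial<ProviderRequest> = {
  stream: true,
  streamToolCalls: true,
  workflowId: 'wf_123',
  chatId: 'chat_456', // flows into _context for checkpointing
}

// Regular chat: tools run silently and only the final response streams
const chatRequest: Partial<ProviderRequest> = { stream: true, streamToolCalls: false }
```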
@@ -900,7 +900,7 @@ export function supportsToolUsageControl(provider: string): boolean {
export function prepareToolExecution(
  tool: { params?: Record<string, any> },
  llmArgs: Record<string, any>,
  request: { workflowId?: string; environmentVariables?: Record<string, any> }
  request: { workflowId?: string; chatId?: string; environmentVariables?: Record<string, any> }
): {
  toolParams: Record<string, any>
  executionParams: Record<string, any>
@@ -914,7 +914,14 @@
  // Add system parameters for execution
  const executionParams = {
    ...toolParams,
    ...(request.workflowId ? { _context: { workflowId: request.workflowId } } : {}),
    ...(request.workflowId
      ? {
          _context: {
            workflowId: request.workflowId,
            ...(request.chatId ? { chatId: request.chatId } : {}),
          },
        }
      : {}),
    ...(request.environmentVariables ? { envVars: request.environmentVariables } : {}),
  }

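A sketch of the merged output for a request carrying both IDs (all values are placeholders):

```ts
const { executionParams } = prepareToolExecution(
  { params: { topK: 10 } },
  { query: 'loops' },
  { workflowId: 'wf_123', chatId: 'chat_456', environmentVariables: { API_KEY: '...' } }
)
// executionParams ->
// { topK: 10, query: 'loops',
//   _context: { workflowId: 'wf_123', chatId: 'chat_456' },
//   envVars: { API_KEY: '...' } }
```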
@@ -329,7 +329,14 @@ export const xAIProvider: ProviderConfig = {
// Add system parameters for execution
const executionParams = {
  ...toolParams,
  ...(request.workflowId ? { _context: { workflowId: request.workflowId } } : {}),
  ...(request.workflowId
    ? {
        _context: {
          workflowId: request.workflowId,
          ...(request.chatId ? { chatId: request.chatId } : {}),
        },
      }
    : {}),
  ...(request.environmentVariables ? { envVars: request.environmentVariables } : {}),
}

@@ -130,6 +130,54 @@ export class RoomManager {
    logger.info(`Notified ${room.users.size} users about workflow revert: ${workflowId}`)
  }

  handleWorkflowUpdate(workflowId: string) {
    logger.info(`Handling workflow update notification for ${workflowId}`)

    const room = this.workflowRooms.get(workflowId)
    if (!room) {
      logger.debug(`No active room found for updated workflow ${workflowId}`)
      return
    }

    const timestamp = Date.now()

    // Notify all clients in the workflow room that the workflow has been updated
    // This will trigger them to refresh their local state
    this.io.to(workflowId).emit('workflow-updated', {
      workflowId,
      message: 'Workflow has been updated externally',
      timestamp,
    })

    room.lastModified = timestamp

    logger.info(`Notified ${room.users.size} users about workflow update: ${workflowId}`)
  }

  handleCopilotWorkflowEdit(workflowId: string, description?: string) {
    logger.info(`Handling copilot workflow edit notification for ${workflowId}`)

    const room = this.workflowRooms.get(workflowId)
    if (!room) {
      logger.debug(`No active room found for copilot workflow edit ${workflowId}`)
      return
    }

    const timestamp = Date.now()

    // Emit special event for copilot edits that tells clients to rehydrate from database
    this.io.to(workflowId).emit('copilot-workflow-edit', {
      workflowId,
      description,
      message: 'Copilot has edited the workflow - rehydrating from database',
      timestamp,
    })

    room.lastModified = timestamp

    logger.info(`Notified ${room.users.size} users about copilot workflow edit: ${workflowId}`)
  }

  async validateWorkflowConsistency(
    workflowId: string
  ): Promise<{ valid: boolean; issues: string[] }> {
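On the receiving end, a browser client subscribes to these room events. A minimal sketch, assuming a socket.io-client socket that has already joined the workflow's room; the event names and payload fields come from the emits above, while the two callbacks are placeholders:

import { io } from 'socket.io-client'

// Illustrative only: the URL and both callbacks are assumptions.
const socket = io('/')

socket.on('workflow-updated', ({ workflowId, timestamp }) => {
  // External update: refetch local state for this workflow
  refreshWorkflowState(workflowId, timestamp)
})

socket.on('copilot-workflow-edit', ({ workflowId, description }) => {
  // Copilot edit: rehydrate the full workflow from the database
  rehydrateFromDatabase(workflowId, description)
})

declare function refreshWorkflowState(workflowId: string, timestamp: number): void
declare function rehydrateFromDatabase(workflowId: string, description?: string): void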
@@ -50,6 +50,48 @@ export function createHttpHandler(roomManager: RoomManager, logger: Logger) {
      return
    }

    // Handle workflow update notifications from the main API
    if (req.method === 'POST' && req.url === '/api/workflow-updated') {
      let body = ''
      req.on('data', (chunk) => {
        body += chunk.toString()
      })
      req.on('end', () => {
        try {
          const { workflowId } = JSON.parse(body)
          roomManager.handleWorkflowUpdate(workflowId)
          res.writeHead(200, { 'Content-Type': 'application/json' })
          res.end(JSON.stringify({ success: true }))
        } catch (error) {
          logger.error('Error handling workflow update notification:', error)
          res.writeHead(500, { 'Content-Type': 'application/json' })
          res.end(JSON.stringify({ error: 'Failed to process update notification' }))
        }
      })
      return
    }

    // Handle copilot workflow edit notifications from the main API
    if (req.method === 'POST' && req.url === '/api/copilot-workflow-edit') {
      let body = ''
      req.on('data', (chunk) => {
        body += chunk.toString()
      })
      req.on('end', () => {
        try {
          const { workflowId, description } = JSON.parse(body)
          roomManager.handleCopilotWorkflowEdit(workflowId, description)
          res.writeHead(200, { 'Content-Type': 'application/json' })
          res.end(JSON.stringify({ success: true }))
        } catch (error) {
          logger.error('Error handling copilot workflow edit notification:', error)
          res.writeHead(500, { 'Content-Type': 'application/json' })
          res.end(JSON.stringify({ error: 'Failed to process copilot edit notification' }))
        }
      })
      return
    }

    // Handle workflow revert notifications from the main API
    if (req.method === 'POST' && req.url === '/api/workflow-reverted') {
      let body = ''
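From the main API's side, notifying the socket server is then a plain HTTP POST against these routes. A sketch under assumptions — the env var name and fallback port are illustrative, not confirmed by this diff:

// Illustrative notifier - SOCKET_SERVER_URL is an assumed env var name.
async function notifyCopilotWorkflowEdit(workflowId: string, description?: string) {
  const base = process.env.SOCKET_SERVER_URL ?? 'http://localhost:3002'
  const res = await fetch(`${base}/api/copilot-workflow-edit`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ workflowId, description }),
  })
  if (!res.ok) {
    // Non-fatal: clients will still converge on the next full load
    console.warn(`Socket notification failed with status ${res.status}`)
  }
}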
@@ -7,10 +7,12 @@ import {
  deleteChat as deleteApiChat,
  getChat,
  listChats,
  listCheckpoints,
  revertToCheckpoint,
  sendStreamingDocsMessage,
  sendStreamingMessage,
  updateChatMessages,
} from '@/lib/copilot-api'
} from '@/lib/copilot/api'
import { createLogger } from '@/lib/logs/console-logger'
import type { CopilotStore } from './types'
@@ -20,18 +22,68 @@ const logger = createLogger('CopilotStore')
 * Initial state for the copilot store
 */
const initialState = {
  mode: 'ask' as const,
  currentChat: null,
  chats: [],
  messages: [],
  checkpoints: [],
  isLoading: false,
  isLoadingChats: false,
  isLoadingCheckpoints: false,
  isSendingMessage: false,
  isSaving: false,
  isRevertingCheckpoint: false,
  error: null,
  saveError: null,
  checkpointError: null,
  workflowId: null,
}

/**
 * Helper function to create a new user message
 */
function createUserMessage(content: string): CopilotMessage {
  return {
    id: crypto.randomUUID(),
    role: 'user',
    content,
    timestamp: new Date().toISOString(),
  }
}

/**
 * Helper function to create a streaming placeholder message
 */
function createStreamingMessage(): CopilotMessage {
  return {
    id: crypto.randomUUID(),
    role: 'assistant',
    content: '',
    timestamp: new Date().toISOString(),
  }
}

/**
 * Helper function to create an error message
 */
function createErrorMessage(messageId: string, content: string): CopilotMessage {
  return {
    id: messageId,
    role: 'assistant',
    content,
    timestamp: new Date().toISOString(),
  }
}

/**
 * Helper function to handle errors in async operations
 */
function handleStoreError(error: unknown, fallbackMessage: string): string {
  const errorMessage = error instanceof Error ? error.message : fallbackMessage
  logger.error(fallbackMessage, error)
  return errorMessage
}

/**
 * Copilot store using the new unified API
 */
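For orientation, a typical consumer of this store might look like the following sketch; only useCopilotStore and its actions come from this diff, the helper function and import path are assumptions:

// Illustrative usage, called from UI event handlers.
import { useCopilotStore } from '@/stores/copilot/store' // assumed path

async function submitPrompt(text: string) {
  const { mode, isSendingMessage, setMode, sendMessage } = useCopilotStore.getState()

  if (isSendingMessage || !text.trim()) return

  // Agent mode lets the copilot edit the workflow; ask mode is read-only Q&A
  if (mode !== 'agent') setMode('agent')

  await sendMessage(text, { stream: true })
}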
@@ -40,10 +92,20 @@ export const useCopilotStore = create<CopilotStore>()(
  (set, get) => ({
    ...initialState,

    // Set chat mode
    setMode: (mode) => {
      const previousMode = get().mode
      set({ mode })
      logger.info(`Copilot mode changed from ${previousMode} to ${mode}`)
    },

    // Set current workflow ID
    setWorkflowId: (workflowId: string | null) => {
      const currentWorkflowId = get().workflowId
      if (currentWorkflowId !== workflowId) {
        logger.info(`Workflow ID changed from ${currentWorkflowId} to ${workflowId}`)

        // Clear all state to prevent cross-workflow data leaks
        set({
          workflowId,
          currentChat: null,
@@ -52,15 +114,44 @@ export const useCopilotStore = create<CopilotStore>()(
          error: null,
          saveError: null,
          isSaving: false,
          isLoading: false,
          isLoadingChats: false,
        })

        // Load chats for the new workflow
        if (workflowId) {
          get().loadChats()
          get()
            .loadChats()
            .catch((error) => {
              logger.error('Failed to load chats after workflow change:', error)
            })
        }
      }
    },

    // Validate current chat belongs to current workflow
    validateCurrentChat: () => {
      const { currentChat, chats, workflowId } = get()

      if (!currentChat || !workflowId) {
        return true
      }

      // Check if current chat exists in the current workflow's chat list
      const chatBelongsToWorkflow = chats.some((chat) => chat.id === currentChat.id)

      if (!chatBelongsToWorkflow) {
        logger.warn(`Current chat ${currentChat.id} does not belong to workflow ${workflowId}`)
        set({
          currentChat: null,
          messages: [],
        })
        return false
      }

      return true
    },

    // Load chats for current workflow
    loadChats: async () => {
      const { workflowId } = get()
@@ -79,22 +170,26 @@ export const useCopilotStore = create<CopilotStore>()(
          chats: result.chats,
          isLoadingChats: false,
        })
        logger.info(`Loaded ${result.chats.length} chats for workflow ${workflowId}`)

        // If no current chat and we have chats, optionally select the most recent one
        // Auto-select the most recent chat if no current chat is selected and chats exist
        const { currentChat } = get()
        if (!currentChat && result.chats.length > 0) {
          // Auto-select most recent chat
          await get().selectChat(result.chats[0])
        }
          // Sort by updatedAt descending to get the most recent chat
          const sortedChats = [...result.chats].sort(
            (a, b) => new Date(b.updatedAt).getTime() - new Date(a.updatedAt).getTime()
          )
          const mostRecentChat = sortedChats[0]

          logger.info(`Loaded ${result.chats.length} chats for workflow ${workflowId}`)
          logger.info(`Auto-selecting most recent chat: ${mostRecentChat.title || 'Untitled'}`)
          await get().selectChat(mostRecentChat)
        }
      } else {
        throw new Error(result.error || 'Failed to load chats')
      }
    } catch (error) {
      logger.error('Failed to load chats:', error)
      set({
        error: error instanceof Error ? error.message : 'Failed to load chats',
        error: handleStoreError(error, 'Failed to load chats'),
        isLoadingChats: false,
      })
    }
@@ -102,12 +197,27 @@ export const useCopilotStore = create<CopilotStore>()(

    // Select a specific chat
    selectChat: async (chat: CopilotChat) => {
      const { workflowId } = get()

      if (!workflowId) {
        logger.error('Cannot select chat: no workflow ID set')
        return
      }

      set({ isLoading: true, error: null })

      try {
        const result = await getChat(chat.id)

        if (result.success && result.chat) {
          // Verify workflow hasn't changed during selection
          const currentWorkflow = get().workflowId
          if (currentWorkflow !== workflowId) {
            logger.warn('Workflow changed during chat selection')
            set({ isLoading: false })
            return
          }

          set({
            currentChat: result.chat,
            messages: result.chat.messages,
@@ -119,9 +229,8 @@ export const useCopilotStore = create<CopilotStore>()(
          throw new Error(result.error || 'Failed to load chat')
        }
      } catch (error) {
        logger.error('Failed to select chat:', error)
        set({
          error: error instanceof Error ? error.message : 'Failed to load chat',
          error: handleStoreError(error, 'Failed to load chat'),
          isLoading: false,
        })
      }
@@ -147,17 +256,18 @@ export const useCopilotStore = create<CopilotStore>()(
            isLoading: false,
          })

          // Reload chats to include the new one
          await get().loadChats()
          // Add the new chat to the chats list
          set((state) => ({
            chats: [result.chat!, ...state.chats],
          }))

          logger.info(`Created new chat: ${result.chat.id}`)
        } else {
          throw new Error(result.error || 'Failed to create chat')
        }
      } catch (error) {
        logger.error('Failed to create new chat:', error)
        set({
          error: error instanceof Error ? error.message : 'Failed to create chat',
          error: handleStoreError(error, 'Failed to create chat'),
          isLoading: false,
        })
      }
@@ -176,12 +286,27 @@ export const useCopilotStore = create<CopilotStore>()(
          chats: state.chats.filter((chat) => chat.id !== chatId),
        }))

        // If this was the current chat, clear it
        // If this was the current chat, clear it and select another one
        if (currentChat?.id === chatId) {
          set({
            currentChat: null,
            messages: [],
          })
          // Get the updated chats list (after removal) in a single atomic operation
          const { chats: updatedChats } = get()
          const remainingChats = updatedChats.filter((chat) => chat.id !== chatId)

          if (remainingChats.length > 0) {
            const sortedByCreation = [...remainingChats].sort(
              (a, b) => new Date(b.createdAt).getTime() - new Date(a.createdAt).getTime()
            )
            set({
              currentChat: null,
              messages: [],
            })
            await get().selectChat(sortedByCreation[0])
          } else {
            set({
              currentChat: null,
              messages: [],
            })
          }
        }

        logger.info(`Deleted chat: ${chatId}`)
@@ -189,99 +314,57 @@ export const useCopilotStore = create<CopilotStore>()(
          throw new Error(result.error || 'Failed to delete chat')
        }
      } catch (error) {
        logger.error('Failed to delete chat:', error)
        set({
          error: error instanceof Error ? error.message : 'Failed to delete chat',
          error: handleStoreError(error, 'Failed to delete chat'),
        })
      }
    },

    // Send a regular message
    sendMessage: async (message: string, options = {}) => {
      const { workflowId, currentChat } = get()
      const { workflowId, currentChat, mode } = get()
      const { stream = true } = options

      console.log('[CopilotStore] sendMessage called:', {
        message,
        workflowId,
        hasCurrentChat: !!currentChat,
        stream,
      })

      if (!workflowId) {
        console.warn('[CopilotStore] No workflow ID set')
        logger.warn('Cannot send message: no workflow ID set')
        return
      }

      set({ isSendingMessage: true, error: null })

      // Add user message immediately
      const userMessage: CopilotMessage = {
        id: crypto.randomUUID(),
        role: 'user',
        content: message,
        timestamp: new Date().toISOString(),
      }

      // Add placeholder for streaming response
      const streamingMessage: CopilotMessage = {
        id: crypto.randomUUID(),
        role: 'assistant',
        content: '',
        timestamp: new Date().toISOString(),
      }

      console.log('[CopilotStore] Adding messages to state:', {
        userMessageId: userMessage.id,
        streamingMessageId: streamingMessage.id,
      })
      const userMessage = createUserMessage(message)
      const streamingMessage = createStreamingMessage()

      set((state) => ({
        messages: [...state.messages, userMessage, streamingMessage],
      }))

      try {
        console.log('[CopilotStore] Requesting streaming response')
        const result = await sendStreamingMessage({
          message,
          chatId: currentChat?.id,
          workflowId,
          mode,
          createNewChat: !currentChat,
          stream,
        })

        console.log('[CopilotStore] Streaming result:', {
          success: result.success,
          hasStream: !!result.stream,
          error: result.error,
        })

        if (result.success && result.stream) {
          console.log('[CopilotStore] Starting stream processing')
          await get().handleStreamingResponse(result.stream, streamingMessage.id)
          console.log('[CopilotStore] Stream processing completed')
        } else {
          console.error('[CopilotStore] Stream request failed:', result.error)
          throw new Error(result.error || 'Failed to send message')
        }
      } catch (error) {
        logger.error('Failed to send message:', error)

        // Replace streaming message with error
        const errorMessage: CopilotMessage = {
          id: streamingMessage.id,
          role: 'assistant',
          content:
            'Sorry, I encountered an error while processing your message. Please try again.',
          timestamp: new Date().toISOString(),
        }
        const errorMessage = createErrorMessage(
          streamingMessage.id,
          'Sorry, I encountered an error while processing your message. Please try again.'
        )

        set((state) => ({
          messages: state.messages.map((msg) =>
            msg.id === streamingMessage.id ? errorMessage : msg
          ),
          error: error instanceof Error ? error.message : 'Failed to send message',
          error: handleStoreError(error, 'Failed to send message'),
          isSendingMessage: false,
        }))
      }
@@ -290,7 +373,7 @@ export const useCopilotStore = create<CopilotStore>()(
    // Send a docs RAG message
    sendDocsMessage: async (query: string, options = {}) => {
      const { workflowId, currentChat } = get()
      const { stream = true, topK = 5 } = options
      const { stream = true, topK = 10 } = options

      if (!workflowId) {
        logger.warn('Cannot send docs message: no workflow ID set')
@@ -299,21 +382,8 @@ export const useCopilotStore = create<CopilotStore>()(

      set({ isSendingMessage: true, error: null })

      // Add user message immediately
      const userMessage: CopilotMessage = {
        id: crypto.randomUUID(),
        role: 'user',
        content: query,
        timestamp: new Date().toISOString(),
      }

      // Add placeholder for streaming response
      const streamingMessage: CopilotMessage = {
        id: crypto.randomUUID(),
        role: 'assistant',
        content: '',
        timestamp: new Date().toISOString(),
      }
      const userMessage = createUserMessage(query)
      const streamingMessage = createStreamingMessage()

      set((state) => ({
        messages: [...state.messages, userMessage, streamingMessage],
@@ -335,39 +405,27 @@ export const useCopilotStore = create<CopilotStore>()(
          throw new Error(result.error || 'Failed to send docs message')
        }
      } catch (error) {
        logger.error('Failed to send docs message:', error)

        // Replace streaming message with error
        const errorMessage: CopilotMessage = {
          id: streamingMessage.id,
          role: 'assistant',
          content:
            'Sorry, I encountered an error while searching the documentation. Please try again.',
          timestamp: new Date().toISOString(),
        }
        const errorMessage = createErrorMessage(
          streamingMessage.id,
          'Sorry, I encountered an error while searching the documentation. Please try again.'
        )

        set((state) => ({
          messages: state.messages.map((msg) =>
            msg.id === streamingMessage.id ? errorMessage : msg
          ),
          error: error instanceof Error ? error.message : 'Failed to send docs message',
          error: handleStoreError(error, 'Failed to send docs message'),
          isSendingMessage: false,
        }))
      }
    },

    // Handle streaming response (shared by both message types)
    // Handle streaming response
    handleStreamingResponse: async (stream: ReadableStream, messageId: string) => {
      console.log('[CopilotStore] handleStreamingResponse started:', {
        messageId,
        hasStream: !!stream,
      })

      const reader = stream.getReader()
      const decoder = new TextDecoder()
      let accumulatedContent = ''
      let newChatId: string | undefined
      // Citations no longer needed - LLM generates direct markdown links
      let streamComplete = false

      try {
@@ -385,42 +443,23 @@ export const useCopilotStore = create<CopilotStore>()(
                const data = JSON.parse(line.slice(6))

                if (data.type === 'metadata') {
                  // Get chatId from metadata
                  if (data.chatId) {
                    newChatId = data.chatId
                  }
                  // Citations no longer needed - LLM generates direct markdown links
                } else if (data.type === 'content') {
                  console.log('[CopilotStore] Received content chunk:', data.content)
                  accumulatedContent += data.content
                  console.log(
                    '[CopilotStore] Accumulated content length:',
                    accumulatedContent.length
                  )

                  // Update the streaming message
                  set((state) => ({
                    messages: state.messages.map((msg) =>
                      msg.id === messageId
                        ? {
                            ...msg,
                            content: accumulatedContent,
                          }
                        : msg
                      msg.id === messageId ? { ...msg, content: accumulatedContent } : msg
                    ),
                  }))
                  console.log('[CopilotStore] Updated message state with content')
                } else if (data.type === 'done' || data.type === 'complete') {
                  console.log('[CopilotStore] Received completion marker:', data.type)
                } else if (data.type === 'complete') {
                  // Final update
                  set((state) => ({
                    messages: state.messages.map((msg) =>
                      msg.id === messageId
                        ? {
                            ...msg,
                            content: accumulatedContent,
                          }
                        : msg
                      msg.id === messageId ? { ...msg, content: accumulatedContent } : msg
                    ),
                    isSendingMessage: false,
                  }))
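The SSE protocol this loop consumes can be summarized as a small discriminated union. A sketch inferred from the handlers above; field optionality is an assumption where the diff does not show it, and no such type is exported by this change:

// Inferred from the stream handler; illustrative only.
type CopilotStreamEvent =
  | { type: 'metadata'; chatId?: string } // sent first; carries the new chat id
  | { type: 'content'; content: string }  // incremental markdown tokens
  | { type: 'complete' }                  // terminal marker; triggers save + chat reload
  | { type: 'error'; error?: string }     // aborts the stream

// Each event arrives as an SSE line: `data: ${JSON.stringify(event)}`
function parseEvent(line: string): CopilotStreamEvent | null {
  if (!line.startsWith('data: ')) return null
  try {
    return JSON.parse(line.slice(6)) as CopilotStreamEvent
  } catch {
    return null // malformed lines are logged and skipped upstream
  }
}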
@@ -428,73 +467,71 @@ export const useCopilotStore = create<CopilotStore>()(
                  // Save chat to database after streaming completes
                  const chatIdToSave = newChatId || get().currentChat?.id
                  if (chatIdToSave) {
                    console.log('[CopilotStore] Saving chat to database:', chatIdToSave)
                    try {
                      await get().saveChatMessages(chatIdToSave)
                    } catch (saveError) {
                      // Save error is already handled in saveChatMessages and reflected in store state
                      // Don't break the streaming flow - user gets the message but knows save failed
                      logger.warn(`Chat save failed after streaming completed: ${saveError}`)
                      logger.warn(`Chat save failed after streaming: ${saveError}`)
                    }
                  }

                  // Handle new chat creation
                  if (newChatId && !get().currentChat) {
                    console.log('[CopilotStore] Reloading chats for new chat:', newChatId)
                    // Reload chats to get the updated list
                    await get().loadChats()
                    await get().handleNewChatCreation(newChatId)
                  }

                  streamComplete = true
                  console.log('[CopilotStore] Stream marked as complete')
                  break
                } else if (data.type === 'error') {
                  console.error('[CopilotStore] Received error from stream:', data.error)
                  throw new Error(data.error || 'Streaming error')
                }
              } catch (parseError) {
                console.warn(
                  '[CopilotStore] Failed to parse SSE data:',
                  parseError,
                  'Line:',
                  line
                )
                logger.warn('Failed to parse SSE data:', parseError)
              }
            } else if (line.trim()) {
              console.log('[CopilotStore] Non-SSE line (ignored):', line)
            }
          }
        }

        console.log('[CopilotStore] Stream processing completed successfully')
        logger.info(`Completed streaming response, content length: ${accumulatedContent.length}`)
      } catch (error) {
        console.error('[CopilotStore] Error handling streaming response:', error)
        logger.error('Error handling streaming response:', error)
        throw error
      }
    },

    // Clear current messages
    clearMessages: () => {
      set({
        currentChat: null,
        messages: [],
        error: null,
      })
    // Handle new chat creation after streaming
    handleNewChatCreation: async (newChatId: string) => {
      try {
        const chatResult = await getChat(newChatId)
        if (chatResult.success && chatResult.chat) {
          // Set the new chat as current
          set({
            currentChat: chatResult.chat,
          })

          // Add to chats list if not already there (atomic check and update)
          set((state) => {
            const chatExists = state.chats.some((chat) => chat.id === newChatId)
            if (!chatExists) {
              return {
                chats: [chatResult.chat!, ...state.chats],
              }
            }
            return state
          })
        }
      } catch (error) {
        logger.error('Failed to fetch new chat after creation:', error)
        // Fallback: reload all chats
        await get().loadChats()
      }
    },

    // Save chat messages to database
    saveChatMessages: async (chatId: string) => {
      const { messages } = get()

      set({ isSaving: true, saveError: null })

      try {
        logger.info(`Saving ${messages.length} messages for chat ${chatId}`)

        // Let the API handle title generation if needed
        const result = await updateChatMessages(chatId, messages)

        if (result.success && result.chat) {
@@ -506,12 +543,26 @@ export const useCopilotStore = create<CopilotStore>()(
            saveError: null,
          })

          logger.info(
            `Successfully saved chat ${chatId} with ${result.chat.messages.length} messages`
          )
          // Update the chat in the chats list (atomic check, update, or add)
          set((state) => {
            const chatExists = state.chats.some((chat) => chat.id === result.chat!.id)

            if (!chatExists) {
              // Chat doesn't exist, add it to the beginning
              return {
                chats: [result.chat!, ...state.chats],
              }
            }
            // Chat exists, update it
            const updatedChats = state.chats.map((chat) =>
              chat.id === result.chat!.id ? result.chat! : chat
            )
            return { chats: updatedChats }
          })

          logger.info(`Successfully saved chat ${chatId}`)
        } else {
          const errorMessage = result.error || 'Failed to save chat'
          logger.error(`Failed to save chat ${chatId}:`, errorMessage)
          set({
            isSaving: false,
            saveError: errorMessage,
@@ -519,8 +570,7 @@ export const useCopilotStore = create<CopilotStore>()(
          throw new Error(errorMessage)
        }
      } catch (error) {
        const errorMessage = error instanceof Error ? error.message : 'Unknown error saving chat'
        logger.error(`Error saving chat ${chatId}:`, error)
        const errorMessage = handleStoreError(error, 'Error saving chat')
        set({
          isSaving: false,
          saveError: errorMessage,
@@ -529,6 +579,61 @@ export const useCopilotStore = create<CopilotStore>()(
      }
    },

    // Load checkpoints for current chat
    loadCheckpoints: async (chatId: string) => {
      set({ isLoadingCheckpoints: true, checkpointError: null })

      try {
        const result = await listCheckpoints(chatId)

        if (result.success) {
          set({
            checkpoints: result.checkpoints,
            isLoadingCheckpoints: false,
          })
          logger.info(`Loaded ${result.checkpoints.length} checkpoints for chat ${chatId}`)
        } else {
          throw new Error(result.error || 'Failed to load checkpoints')
        }
      } catch (error) {
        set({
          checkpointError: handleStoreError(error, 'Failed to load checkpoints'),
          isLoadingCheckpoints: false,
        })
      }
    },

    // Revert to a specific checkpoint
    revertToCheckpoint: async (checkpointId: string) => {
      set({ isRevertingCheckpoint: true, checkpointError: null })

      try {
        const result = await revertToCheckpoint(checkpointId)

        if (result.success) {
          set({ isRevertingCheckpoint: false })
          logger.info(`Successfully reverted to checkpoint ${checkpointId}`)
        } else {
          throw new Error(result.error || 'Failed to revert to checkpoint')
        }
      } catch (error) {
        set({
          checkpointError: handleStoreError(error, 'Failed to revert to checkpoint'),
          isRevertingCheckpoint: false,
        })
      }
    },

    // Clear current messages
    clearMessages: () => {
      set({
        currentChat: null,
        messages: [],
        error: null,
      })
      logger.info('Cleared current chat and messages')
    },

    // Clear error state
    clearError: () => {
      set({ error: null })
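Putting the checkpoint actions together, a revert flow driven from the UI might look like this sketch; only the store actions come from this diff, and treating checkpoints[0] as the latest is an assumption about the API's sort order:

// Illustrative checkpoint flow.
import { useCopilotStore } from '@/stores/copilot/store' // assumed path

async function revertChatToLatestCheckpoint(chatId: string) {
  const store = useCopilotStore.getState()

  await store.loadCheckpoints(chatId)
  const { checkpoints, checkpointError } = useCopilotStore.getState()
  if (checkpointError || checkpoints.length === 0) return

  // Checkpoints store the workflow YAML captured at each copilot edit;
  // assuming the list is returned newest-first.
  const latest = checkpoints[0]
  await store.revertToCheckpoint(latest.id)
  // The socket server then broadcasts a revert so other clients rehydrate.
}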
@@ -539,6 +644,11 @@ export const useCopilotStore = create<CopilotStore>()(
      set({ saveError: null })
    },

    // Clear checkpoint error state
    clearCheckpointError: () => {
      set({ checkpointError: null })
    },

    // Retry saving chat messages
    retrySave: async (chatId: string) => {
      await get().saveChatMessages(chatId)
@@ -1,19 +1,42 @@
/**
 * Message interface for copilot conversations
 * Citation interface for documentation references
 */
export interface Citation {
  id: number
  title: string
  url: string
  similarity?: number
}

/**
 * Copilot message structure
 */
export interface CopilotMessage {
  id: string
  role: 'user' | 'assistant' | 'system'
  content: string
  timestamp: string
  citations?: Array<{
    id: number
    title: string
    url: string
    similarity?: number
  }>
  citations?: Citation[]
}

/**
 * Copilot checkpoint structure
 */
export interface CopilotCheckpoint {
  id: string
  userId: string
  workflowId: string
  chatId: string
  yaml: string
  createdAt: Date
  updatedAt: Date
}

/**
 * Chat mode types
 */
export type CopilotMode = 'ask' | 'agent'

/**
 * Chat interface for copilot conversations
 */
@@ -27,60 +50,94 @@ export interface CopilotChat {
  updatedAt: Date
}

/**
 * Options for creating a new chat
 */
export interface CreateChatOptions {
  title?: string
  initialMessage?: string
}

/**
 * Options for sending messages
 */
export interface SendMessageOptions {
  stream?: boolean
}

/**
 * Options for sending docs messages
 */
export interface SendDocsMessageOptions {
  stream?: boolean
  topK?: number
}

/**
 * Copilot store state
 */
export interface CopilotState {
  // Current active chat
  // Current mode
  mode: CopilotMode

  // Chat management
  currentChat: CopilotChat | null

  // List of available chats for current workflow
  chats: CopilotChat[]

  // Current messages (from active chat)
  messages: CopilotMessage[]
  workflowId: string | null

  // Checkpoint management
  checkpoints: CopilotCheckpoint[]

  // Loading states
  isLoading: boolean
  isLoadingChats: boolean
  isLoadingCheckpoints: boolean
  isSendingMessage: boolean

  // Error state
  error: string | null

  // Save operation error (separate from general errors)
  saveError: string | null
  isSaving: boolean
  isRevertingCheckpoint: boolean

  // Current workflow ID (for chat context)
  workflowId: string | null
  // Error states
  error: string | null
  saveError: string | null
  checkpointError: string | null
}

/**
 * Copilot store actions
 */
export interface CopilotActions {
  // Mode management
  setMode: (mode: CopilotMode) => void

  // Chat management
  setWorkflowId: (workflowId: string | null) => void
  validateCurrentChat: () => boolean
  loadChats: () => Promise<void>
  selectChat: (chat: CopilotChat) => Promise<void>
  createNewChat: (options?: { title?: string; initialMessage?: string }) => Promise<void>
  createNewChat: (options?: CreateChatOptions) => Promise<void>
  deleteChat: (chatId: string) => Promise<void>

  // Message handling
  sendMessage: (message: string, options?: { stream?: boolean }) => Promise<void>
  sendDocsMessage: (query: string, options?: { stream?: boolean; topK?: number }) => Promise<void>
  sendMessage: (message: string, options?: SendMessageOptions) => Promise<void>
  sendDocsMessage: (query: string, options?: SendDocsMessageOptions) => Promise<void>
  saveChatMessages: (chatId: string) => Promise<void>

  // Checkpoint management
  loadCheckpoints: (chatId: string) => Promise<void>
  revertToCheckpoint: (checkpointId: string) => Promise<void>

  // Utility actions
  clearMessages: () => void
  clearError: () => void
  clearSaveError: () => void
  clearCheckpointError: () => void
  retrySave: (chatId: string) => Promise<void>
  reset: () => void

  // Internal helper (not exposed publicly)
  // Internal helpers (not exposed publicly)
  handleStreamingResponse: (stream: ReadableStream, messageId: string) => Promise<void>
  handleNewChatCreation: (newChatId: string) => Promise<void>
}

/**
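As a quick illustration of the refactored types, constructing an assistant message that carries a documentation citation looks like this; the values (title, URL, similarity) are made up:

// Example values only; the interfaces are the ones defined above.
const citation: Citation = {
  id: 1,
  title: 'Loops',
  url: 'https://example.com/docs/loops', // placeholder URL
  similarity: 0.92,
}

const assistantMessage: CopilotMessage = {
  id: crypto.randomUUID(),
  role: 'assistant',
  content: 'ForEach loops iterate over collections; see the Loops docs.',
  timestamp: new Date().toISOString(),
  citations: [citation],
}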
@@ -27,6 +27,42 @@ export const useSubBlockStore = create<SubBlockStore>()(
      const activeWorkflowId = useWorkflowRegistry.getState().activeWorkflowId
      if (!activeWorkflowId) return

      // Validate and fix table data if needed
      let validatedValue = value
      if (Array.isArray(value)) {
        // Check if this looks like table data (array of objects with cells)
        const isTableData =
          value.length > 0 &&
          value.some((item) => item && typeof item === 'object' && 'cells' in item)

        if (isTableData) {
          console.log('Validating table data for subblock:', { blockId, subBlockId })
          validatedValue = value.map((row: any) => {
            // Ensure each row has proper structure
            if (!row || typeof row !== 'object') {
              console.warn('Fixing malformed table row:', row)
              return {
                id: crypto.randomUUID(),
                cells: { Key: '', Value: '' },
              }
            }

            // Ensure row has an id
            if (!row.id) {
              row.id = crypto.randomUUID()
            }

            // Ensure row has cells object
            if (!row.cells || typeof row.cells !== 'object') {
              console.warn('Fixing malformed table row cells:', row)
              row.cells = { Key: '', Value: '' }
            }

            return row
          })
        }
      }

      set((state) => ({
        workflowValues: {
          ...state.workflowValues,
@@ -34,7 +70,7 @@ export const useSubBlockStore = create<SubBlockStore>()(
          ...state.workflowValues[activeWorkflowId],
          [blockId]: {
            ...state.workflowValues[activeWorkflowId]?.[blockId],
            [subBlockId]: value,
            [subBlockId]: validatedValue,
          },
        },
      },
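The shape the validation above normalizes toward can be written down explicitly. A sketch; the interface name is illustrative, since the store itself types rows as `any`:

// Illustrative type for the normalized table rows; not declared in the diff.
interface TableRow {
  id: string // filled in with crypto.randomUUID() when missing
  cells: Record<string, string> // defaults to { Key: '', Value: '' } when malformed
}

// A well-formed value for a table subblock:
const rows: TableRow[] = [
  { id: crypto.randomUUID(), cells: { Key: 'apiVersion', Value: '2024-02-01' } },
  { id: crypto.randomUUID(), cells: { Key: 'region', Value: 'eastus' } },
]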
@@ -1,11 +1,15 @@
import { load as yamlParse } from 'js-yaml'
import type { Edge } from 'reactflow'
import { v4 as uuidv4 } from 'uuid'
import { createLogger } from '@/lib/logs/console-logger'
import { getBlock } from '@/blocks'
import { resolveOutputType } from '@/blocks/utils'
import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
import { useSubBlockStore } from '@/stores/workflows/subblock/store'
import { useWorkflowStore } from '@/stores/workflows/workflow/store'
import {
  type ConnectionsFormat,
  expandConditionInputs,
  type ImportedEdge,
  parseBlockConnections,
  validateBlockReferences,
  validateBlockStructure,
} from './parsing-utils'

const logger = createLogger('WorkflowYamlImporter')
@@ -13,18 +17,7 @@ interface YamlBlock {
  type: string
  name: string
  inputs?: Record<string, any>
  connections?: {
    incoming?: Array<{
      source: string
      sourceHandle?: string
      targetHandle?: string
    }>
    outgoing?: Array<{
      target: string
      sourceHandle?: string
      targetHandle?: string
    }>
  }
  connections?: ConnectionsFormat
  parentId?: string // Add parentId for nested blocks
}
@@ -44,15 +37,6 @@ interface ImportedBlock {
  extent?: 'parent'
}

interface ImportedEdge {
  id: string
  source: string
  target: string
  sourceHandle: string
  targetHandle: string
  type: string
}

interface ImportResult {
  blocks: ImportedBlock[]
  edges: ImportedEdge[]
@@ -132,45 +116,6 @@ export function parseWorkflowYaml(yamlContent: string): {
  }
}

/**
 * Validate that block references in connections exist
 */
function validateBlockReferences(yamlWorkflow: YamlWorkflow): string[] {
  const errors: string[] = []
  const blockIds = new Set(Object.keys(yamlWorkflow.blocks))

  Object.entries(yamlWorkflow.blocks).forEach(([blockId, block]) => {
    // Check incoming connection references
    if (block.connections?.incoming) {
      block.connections.incoming.forEach((connection) => {
        if (!blockIds.has(connection.source)) {
          errors.push(
            `Block '${blockId}' references non-existent source block '${connection.source}'`
          )
        }
      })
    }

    // Check outgoing connection references
    if (block.connections?.outgoing) {
      block.connections.outgoing.forEach((connection) => {
        if (!blockIds.has(connection.target)) {
          errors.push(
            `Block '${blockId}' references non-existent target block '${connection.target}'`
          )
        }
      })
    }

    // Check parent references
    if (block.parentId && !blockIds.has(block.parentId)) {
      errors.push(`Block '${blockId}' references non-existent parent block '${block.parentId}'`)
    }
  })

  return errors
}

/**
 * Validate that block types exist and are valid
 */
@@ -179,6 +124,14 @@ function validateBlockTypes(yamlWorkflow: YamlWorkflow): { errors: string[]; war
  const warnings: string[] = []

  Object.entries(yamlWorkflow.blocks).forEach(([blockId, block]) => {
    // Use shared structure validation
    const { errors: structureErrors, warnings: structureWarnings } = validateBlockStructure(
      blockId,
      block
    )
    errors.push(...structureErrors)
    warnings.push(...structureWarnings)

    // Check if block type exists
    const blockConfig = getBlock(block.type)
@@ -346,7 +299,7 @@ export function convertYamlToWorkflow(yamlWorkflow: YamlWorkflow): ImportResult
  const edges: ImportedEdge[] = []

  // Validate block references
  const referenceErrors = validateBlockReferences(yamlWorkflow)
  const referenceErrors = validateBlockReferences(yamlWorkflow.blocks)
  errors.push(...referenceErrors)

  // Validate block types
@@ -365,11 +318,17 @@ export function convertYamlToWorkflow(yamlWorkflow: YamlWorkflow): ImportResult
  Object.entries(yamlWorkflow.blocks).forEach(([blockId, yamlBlock]) => {
    const position = positions[blockId] || { x: 100, y: 100 }

    // Expand condition inputs from clean format to internal format
    const processedInputs =
      yamlBlock.type === 'condition'
        ? expandConditionInputs(blockId, yamlBlock.inputs || {})
        : yamlBlock.inputs || {}

    const importedBlock: ImportedBlock = {
      id: blockId,
      type: yamlBlock.type,
      name: yamlBlock.name,
      inputs: yamlBlock.inputs || {},
      inputs: processedInputs,
      position,
    }
@@ -402,24 +361,17 @@ export function convertYamlToWorkflow(yamlWorkflow: YamlWorkflow): ImportResult
    blocks.push(importedBlock)
  })

  // Convert edges from connections
  // Convert edges from connections using shared parser
  Object.entries(yamlWorkflow.blocks).forEach(([blockId, yamlBlock]) => {
    if (yamlBlock.connections?.outgoing) {
      yamlBlock.connections.outgoing.forEach((connection) => {
        const edgeId = `${blockId}-${connection.target}-${Date.now()}`
    const {
      edges: blockEdges,
      errors: connectionErrors,
      warnings: connectionWarnings,
    } = parseBlockConnections(blockId, yamlBlock.connections, yamlBlock.type)

        const edge: ImportedEdge = {
          id: edgeId,
          source: blockId,
          target: connection.target,
          sourceHandle: connection.sourceHandle || 'source',
          targetHandle: connection.targetHandle || 'target',
          type: 'workflowEdge',
        }

        edges.push(edge)
      })
    }
    edges.push(...blockEdges)
    errors.push(...connectionErrors)
    warnings.push(...connectionWarnings)
  })

  // Sort blocks to ensure parents are created before children
@@ -429,332 +381,49 @@ export function convertYamlToWorkflow(yamlWorkflow: YamlWorkflow): ImportResult
}

/**
 * Import workflow from YAML by creating complete state upfront (no UI simulation)
 * Create smart ID mapping that preserves existing block IDs and generates new ones for new blocks
 */
export async function importWorkflowFromYaml(
  yamlContent: string,
  workflowActions: {
    addBlock: (
      id: string,
      type: string,
      name: string,
      position: { x: number; y: number },
      data?: Record<string, any>,
      parentId?: string,
      extent?: 'parent'
    ) => void
    addEdge: (edge: Edge) => void
    applyAutoLayout: () => void
    setSubBlockValue: (blockId: string, subBlockId: string, value: any) => void
    getExistingBlocks: () => Record<string, any>
  },
  targetWorkflowId?: string
): Promise<{ success: boolean; errors: string[]; warnings: string[]; summary?: string }> {
  try {
    // Parse YAML
    const { data: yamlWorkflow, errors: parseErrors } = parseWorkflowYaml(yamlContent)
function createSmartIdMapping(
  yamlBlocks: ImportedBlock[],
  existingBlocks: Record<string, any>,
  activeWorkflowId: string,
  forceNewIds = false
): Map<string, string> {
  const yamlIdToActualId = new Map<string, string>()
  const existingBlockIds = new Set(Object.keys(existingBlocks))

    if (!yamlWorkflow || parseErrors.length > 0) {
      return { success: false, errors: parseErrors, warnings: [] }
    }
  logger.info('Creating smart ID mapping', {
    activeWorkflowId,
    yamlBlockCount: yamlBlocks.length,
    existingBlockCount: Object.keys(existingBlocks).length,
    existingBlockIds: Array.from(existingBlockIds),
    yamlBlockIds: yamlBlocks.map((b) => b.id),
    forceNewIds,
  })

    // Convert to importable format
    const { blocks, edges, errors, warnings } = convertYamlToWorkflow(yamlWorkflow)

    if (errors.length > 0) {
      return { success: false, errors, warnings }
    }

    // Get the existing workflow state (to preserve starter blocks if they exist)
    let existingBlocks: Record<string, any> = {}

    if (targetWorkflowId) {
      // For target workflow, fetch from API
      try {
        const response = await fetch(`/api/workflows/${targetWorkflowId}`)
        if (response.ok) {
          const workflowData = await response.json()
          existingBlocks = workflowData.data?.state?.blocks || {}
        }
      } catch (error) {
        logger.warn(`Failed to fetch existing blocks for workflow ${targetWorkflowId}:`, error)
      }
  for (const block of yamlBlocks) {
    if (forceNewIds || !existingBlockIds.has(block.id)) {
      // Force new ID or block ID doesn't exist in current workflow - generate new UUID
      const newId = uuidv4()
      yamlIdToActualId.set(block.id, newId)
      logger.info(
        `🆕 Mapping new block: ${block.id} -> ${newId} (${forceNewIds ? 'forced new ID' : `not found in workflow ${activeWorkflowId}`})`
      )
    } else {
      // For active workflow, use from store
      existingBlocks = workflowActions.getExistingBlocks()
    }

    const existingStarterBlocks = Object.values(existingBlocks).filter(
      (block: any) => block.type === 'starter'
    )

    // Get stores and current workflow info

    // Get current workflow state
    const currentWorkflowState = useWorkflowStore.getState()
    const activeWorkflowId = targetWorkflowId || useWorkflowRegistry.getState().activeWorkflowId

    if (!activeWorkflowId) {
      return { success: false, errors: ['No active workflow'], warnings: [] }
    }

    // Build complete blocks object
    const completeBlocks: Record<string, any> = {}
    const completeSubBlockValues: Record<string, Record<string, any>> = {}
    const yamlIdToActualId = new Map<string, string>()

    // Handle starter block
    let starterBlockId: string | null = null
    const starterBlock = blocks.find((block) => block.type === 'starter')

    if (starterBlock) {
      if (existingStarterBlocks.length > 0) {
        // Use existing starter block
        const existingStarter = existingStarterBlocks[0] as any
        starterBlockId = existingStarter.id
        yamlIdToActualId.set(starterBlock.id, existingStarter.id)

        // Keep existing starter but update its inputs
        completeBlocks[existingStarter.id] = {
          ...existingStarter,
          // Update name if provided in YAML
          name: starterBlock.name !== 'Start' ? starterBlock.name : existingStarter.name,
        }

        // Set starter block values
        completeSubBlockValues[existingStarter.id] = {
          ...(currentWorkflowState.blocks[existingStarter.id]?.subBlocks
            ? Object.fromEntries(
                Object.entries(currentWorkflowState.blocks[existingStarter.id].subBlocks).map(
                  ([key, subBlock]: [string, any]) => [key, subBlock.value]
                )
              )
            : {}),
          ...starterBlock.inputs, // Override with YAML values
        }
      } else {
        // Create new starter block
        starterBlockId = crypto.randomUUID()
        yamlIdToActualId.set(starterBlock.id, starterBlockId)

        // Create complete starter block from block config
        const blockConfig = getBlock('starter')
        if (blockConfig) {
          const subBlocks: Record<string, any> = {}
          blockConfig.subBlocks.forEach((subBlock) => {
            subBlocks[subBlock.id] = {
              id: subBlock.id,
              type: subBlock.type,
              value: null,
            }
          })

          completeBlocks[starterBlockId] = {
            id: starterBlockId,
            type: 'starter',
            name: starterBlock.name,
            position: starterBlock.position,
            subBlocks,
            outputs: resolveOutputType(blockConfig.outputs),
            enabled: true,
            horizontalHandles: true,
            isWide: false,
            height: 0,
            data: starterBlock.data || {},
          }

          // Set starter block values
          completeSubBlockValues[starterBlockId] = { ...starterBlock.inputs }
        }
      }
    }

    // Create all other blocks
    // Note: blocks are now sorted to ensure parents come before children,
    // but we still need the two-phase approach because we're generating new UUIDs
    let blocksProcessed = 0
    for (const block of blocks) {
      if (block.type === 'starter') {
        continue // Already handled above
      }

      const blockId = crypto.randomUUID()
      yamlIdToActualId.set(block.id, blockId)

      // Create complete block from block config
      const blockConfig = getBlock(block.type)

      if (!blockConfig && (block.type === 'loop' || block.type === 'parallel')) {
        // Handle loop/parallel blocks
        completeBlocks[blockId] = {
          id: blockId,
          type: block.type,
          name: block.name,
          position: block.position,
          subBlocks: {},
          outputs: {},
          enabled: true,
          horizontalHandles: true,
          isWide: false,
          height: 0,
          data: block.data || {}, // Configuration is already in block.data from convertYamlToWorkflow
        }

        // Loop/parallel blocks don't use subBlocks, their config is in data
        // No need to set completeSubBlockValues since they don't have subBlocks
        blocksProcessed++
      } else if (blockConfig) {
        // Handle regular blocks
        const subBlocks: Record<string, any> = {}
        blockConfig.subBlocks.forEach((subBlock) => {
          subBlocks[subBlock.id] = {
            id: subBlock.id,
            type: subBlock.type,
            value: null,
          }
        })

        completeBlocks[blockId] = {
          id: blockId,
          type: block.type,
          name: block.name,
          position: block.position,
          subBlocks,
          outputs: resolveOutputType(blockConfig.outputs),
          enabled: true,
          horizontalHandles: true,
          isWide: false,
          height: 0,
          data: block.data || {}, // This already includes parentId and extent from convertYamlToWorkflow
        }

        // Set block input values
        completeSubBlockValues[blockId] = { ...block.inputs }
        blocksProcessed++
      } else {
        logger.warn(`No block config found for type: ${block.type} (block: ${block.id})`)
      }
    }

    // Update parent-child relationships with mapped IDs
    // This two-phase approach is necessary because:
    // 1. We generate new UUIDs for all blocks (can't reuse YAML IDs)
    // 2. Parent references in YAML use the original IDs, need to map to new UUIDs
    // 3. All blocks must exist before we can map their parent references
    for (const [blockId, blockData] of Object.entries(completeBlocks)) {
      if (blockData.data?.parentId) {
        const mappedParentId = yamlIdToActualId.get(blockData.data.parentId)
        if (mappedParentId) {
          blockData.data.parentId = mappedParentId
        } else {
          logger.warn(`Parent block not found for mapping: ${blockData.data.parentId}`)
          // Remove invalid parent reference
          blockData.data.parentId = undefined
          blockData.data.extent = undefined
        }
      }
    }

    // Create complete edges using the ID mapping
    const completeEdges: any[] = []
    for (const edge of edges) {
      const sourceId = yamlIdToActualId.get(edge.source)
      const targetId = yamlIdToActualId.get(edge.target)

      if (sourceId && targetId) {
        completeEdges.push({
          ...edge,
          source: sourceId,
          target: targetId,
        })
      } else {
        logger.warn(`Skipping edge - missing blocks: ${edge.source} -> ${edge.target}`)
      }
    }

    // Create complete workflow state with values already set in subBlocks

    // Merge subblock values directly into block subBlocks
    for (const [blockId, blockData] of Object.entries(completeBlocks)) {
      const blockValues = completeSubBlockValues[blockId] || {}

      // Update subBlock values in place
      for (const [subBlockId, subBlockData] of Object.entries(blockData.subBlocks || {})) {
        if (blockValues[subBlockId] !== undefined && blockValues[subBlockId] !== null) {
          ;(subBlockData as any).value = blockValues[subBlockId]
        }
      }
    }

    // Create final workflow state
    const completeWorkflowState = {
      blocks: completeBlocks,
      edges: completeEdges,
      loops: {},
      parallels: {},
      lastSaved: Date.now(),
      isDeployed: false,
      deployedAt: undefined,
      deploymentStatuses: {},
      hasActiveWebhook: false,
    }

    // Save directly to database via API
    const response = await fetch(`/api/workflows/${activeWorkflowId}/state`, {
      method: 'PUT',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(completeWorkflowState),
    })

    if (!response.ok) {
      const errorData = await response.json()
      logger.error('Failed to save workflow state:', errorData.error)
      return {
        success: false,
        errors: [`Database save failed: ${errorData.error || 'Unknown error'}`],
        warnings,
      }
    }

    const saveResponse = await response.json()

    // Update local state for immediate UI display (only if importing into active workflow)
    if (!targetWorkflowId) {
      useWorkflowStore.setState(completeWorkflowState)

      // Set subblock values in local store
      useSubBlockStore.setState((state: any) => ({
        workflowValues: {
          ...state.workflowValues,
          [activeWorkflowId]: completeSubBlockValues,
        },
      }))
    }

    // Apply auto layout
    workflowActions.applyAutoLayout()

    const totalBlocksCreated =
      Object.keys(completeBlocks).length - (existingStarterBlocks.length > 0 ? 1 : 0)

    return {
      success: true,
      errors: [],
      warnings,
      summary: `Imported ${totalBlocksCreated} new blocks and ${completeEdges.length} connections. ${
        existingStarterBlocks.length > 0
          ? 'Updated existing starter block.'
          : 'Created new starter block.'
      }`,
    }
  } catch (error) {
    logger.error('YAML import failed:', error)
    return {
      success: false,
      errors: [`Import failed: ${error instanceof Error ? error.message : 'Unknown error'}`],
      warnings: [],
      // Block ID exists in current workflow - preserve it
      yamlIdToActualId.set(block.id, block.id)
      logger.info(
        `✅ Preserving existing block ID: ${block.id} (exists in workflow ${activeWorkflowId})`
      )
    }
  }

  logger.info('Smart ID mapping completed', {
    mappings: Array.from(yamlIdToActualId.entries()),
    preservedCount: Array.from(yamlIdToActualId.entries()).filter(([old, new_]) => old === new_)
      .length,
    newCount: Array.from(yamlIdToActualId.entries()).filter(([old, new_]) => old !== new_).length,
  })

  return yamlIdToActualId
}
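How the resulting map gets consumed downstream can be sketched briefly; only createSmartIdMapping and the Imported* types come from this diff, the wrapper function is hypothetical:

// Hypothetical wrapper showing how an edit flow might apply the mapping.
function remapImportedEdges(
  importedBlocks: ImportedBlock[],
  importedEdges: ImportedEdge[],
  existingBlocks: Record<string, any>,
  activeWorkflowId: string
): ImportedEdge[] {
  const idMap = createSmartIdMapping(importedBlocks, existingBlocks, activeWorkflowId)
  // Remap every imported edge onto the actual (preserved or newly generated) block IDs
  return importedEdges.map((edge) => ({
    ...edge,
    source: idMap.get(edge.source) ?? edge.source,
    target: idMap.get(edge.target) ?? edge.target,
  }))
}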
apps/sim/stores/workflows/yaml/parsing-utils.ts (new file, 653 lines)
@@ -0,0 +1,653 @@
|
||||
import { v4 as uuidv4 } from 'uuid'
|
||||
import { createLogger } from '@/lib/logs/console-logger'
|
||||
|
||||
const logger = createLogger('YamlParsingUtils')
|
||||
|
||||
export interface ImportedEdge {
|
||||
id: string
|
||||
source: string
|
||||
target: string
|
||||
sourceHandle: string
|
||||
targetHandle: string
|
||||
type: string
|
||||
}
|
||||
|
||||
export interface ParsedConnections {
|
||||
edges: ImportedEdge[]
|
||||
errors: string[]
|
||||
warnings: string[]
|
||||
}
|
||||
|
||||
export interface ConnectionsFormat {
|
||||
// New format - grouped by handle type
|
||||
success?: string | string[]
|
||||
error?: string | string[]
|
||||
conditions?: Record<string, string | string[]>
|
||||
loop?: {
|
||||
start?: string | string[]
|
||||
end?: string | string[]
|
||||
}
|
||||
parallel?: {
|
||||
start?: string | string[]
|
||||
end?: string | string[]
|
||||
}
|
||||
// Legacy format support
|
||||
incoming?: Array<{
|
||||
source: string
|
||||
sourceHandle?: string
|
||||
targetHandle?: string
|
||||
}>
|
||||
outgoing?: Array<{
|
||||
target: string
|
||||
sourceHandle?: string
|
||||
targetHandle?: string
|
||||
}>
|
||||
}
|
||||
|
||||
/**
|
||||
* Parse block connections from both new grouped format and legacy format
|
||||
*/
|
||||
export function parseBlockConnections(
|
||||
blockId: string,
|
||||
connections: ConnectionsFormat | undefined,
|
||||
blockType?: string
|
||||
): ParsedConnections {
|
||||
const edges: ImportedEdge[] = []
|
||||
const errors: string[] = []
|
||||
const warnings: string[] = []
|
||||
|
||||
if (!connections) {
|
||||
return { edges, errors, warnings }
|
||||
}
|
||||
|
||||
// Handle new grouped format
|
||||
if (hasNewFormat(connections)) {
|
||||
parseNewFormatConnections(blockId, connections, edges, errors, warnings, blockType)
|
||||
}
|
||||
|
||||
// Handle legacy format (for backwards compatibility)
|
||||
if (connections.outgoing) {
|
||||
parseLegacyOutgoingConnections(blockId, connections.outgoing, edges, errors, warnings)
|
||||
}
|
||||
|
||||
return { edges, errors, warnings }
|
||||
}
|
||||
|
||||
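To make the two accepted formats concrete, here is a small usage example of the parser; the block IDs are invented, and the exact sourceHandle strings produced for conditions follow parseNewFormatConnections, which is not shown in this hunk:

// Example input in the new grouped format; block IDs are made up.
const grouped: ConnectionsFormat = {
  success: 'send-email',
  error: ['log-error', 'notify-admin'],
  conditions: { if: 'approve', else: 'reject' },
}

const { edges, errors, warnings } = parseBlockConnections('review-step', grouped, 'condition')
// Each target becomes an ImportedEdge from 'review-step', e.g. the error
// targets map to edges with sourceHandle 'error', and condition targets to
// 'condition-…' handles (mirroring the grouping in generateBlockConnections below).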
/**
 * Generate connections in the new grouped format from edges
 */
export function generateBlockConnections(
  blockId: string,
  edges: ImportedEdge[] | any[]
): ConnectionsFormat {
  const connections: ConnectionsFormat = {}

  const outgoingEdges = edges.filter((edge) => edge.source === blockId)

  if (outgoingEdges.length === 0) {
    return connections
  }

  // Group edges by source handle type
  const successTargets: string[] = []
  const errorTargets: string[] = []
  const conditionTargets: Record<string, string[]> = {}
  const loopTargets: { start: string[]; end: string[] } = { start: [], end: [] }
  const parallelTargets: { start: string[]; end: string[] } = { start: [], end: [] }

  // Track condition ordering for clean sequential else-if naming
  const rawConditionIds: string[] = []

  for (const edge of outgoingEdges) {
    const handle = edge.sourceHandle ?? 'source'

    if (handle === 'source') {
      successTargets.push(edge.target)
    } else if (handle === 'error') {
      errorTargets.push(edge.target)
    } else if (handle.startsWith('condition-')) {
      const rawConditionId = extractConditionId(handle)
      rawConditionIds.push(rawConditionId)

      if (!conditionTargets[rawConditionId]) {
        conditionTargets[rawConditionId] = []
      }
      conditionTargets[rawConditionId].push(edge.target)
    } else if (handle === 'loop-start-source') {
      loopTargets.start.push(edge.target)
    } else if (handle === 'loop-end-source') {
      loopTargets.end.push(edge.target)
    } else if (handle === 'parallel-start-source') {
      parallelTargets.start.push(edge.target)
    } else if (handle === 'parallel-end-source') {
      parallelTargets.end.push(edge.target)
    }
  }

  // Create clean condition mapping for timestamp-based else-if IDs
  const cleanConditionTargets: Record<string, string[]> = {}
  let elseIfCount = 0

  Object.entries(conditionTargets).forEach(([rawId, targets]) => {
    let cleanId = rawId

    // Simple check: if this is exactly 'else', keep it as 'else'
    if (rawId === 'else') {
      cleanId = 'else'
    }
    // Convert timestamp-based else-if IDs to clean sequential format
    else if (rawId.startsWith('else-if-') && /else-if-\d+$/.test(rawId)) {
      elseIfCount++
      if (elseIfCount === 1) {
        cleanId = 'else-if'
      } else {
        cleanId = `else-if-${elseIfCount}`
      }
    }

    cleanConditionTargets[cleanId] = targets
  })

  // After processing all conditions, check if we need to convert the last else-if to else
  // If we have more than the expected number of else-if conditions, the last one should be else
  const conditionKeys = Object.keys(cleanConditionTargets)
  const hasElse = conditionKeys.includes('else')
  const elseIfKeys = conditionKeys.filter((key) => key.startsWith('else-if'))

  if (!hasElse && elseIfKeys.length > 0) {
    // Find the highest numbered else-if and convert it to else
    const highestElseIf = elseIfKeys.sort((a, b) => {
      const aNum = a === 'else-if' ? 1 : Number.parseInt(a.replace('else-if-', ''))
      const bNum = b === 'else-if' ? 1 : Number.parseInt(b.replace('else-if-', ''))
      return bNum - aNum
    })[0]

    // Move the targets from the highest else-if to else
    cleanConditionTargets.else = cleanConditionTargets[highestElseIf]
    delete cleanConditionTargets[highestElseIf]
  }

  // Add to connections object (use single values for single targets, arrays for multiple)
  if (successTargets.length > 0) {
    connections.success = successTargets.length === 1 ? successTargets[0] : successTargets
  }

  if (errorTargets.length > 0) {
    connections.error = errorTargets.length === 1 ? errorTargets[0] : errorTargets
  }

  if (Object.keys(cleanConditionTargets).length > 0) {
    connections.conditions = {}

    // Sort condition keys to maintain consistent order: if, else-if, else-if-2, ..., else
    const sortedConditionKeys = Object.keys(cleanConditionTargets).sort((a, b) => {
      // Define the order priority
      const getOrder = (key: string): number => {
        if (key === 'if') return 0
        if (key === 'else-if') return 1
        if (key.startsWith('else-if-')) {
          const num = Number.parseInt(key.replace('else-if-', ''), 10)
          return 1 + num // else-if-2 = 3, else-if-3 = 4, etc.
        }
        if (key === 'else') return 1000 // Always last
        return 500 // Other conditions in the middle
      }

      return getOrder(a) - getOrder(b)
    })

    // Build the connections object in the correct order
    for (const conditionId of sortedConditionKeys) {
      const targets = cleanConditionTargets[conditionId]
      connections.conditions[conditionId] = targets.length === 1 ? targets[0] : targets
    }
  }

  if (loopTargets.start.length > 0 || loopTargets.end.length > 0) {
    connections.loop = {}
    if (loopTargets.start.length > 0) {
      connections.loop.start =
        loopTargets.start.length === 1 ? loopTargets.start[0] : loopTargets.start
    }
    if (loopTargets.end.length > 0) {
      connections.loop.end = loopTargets.end.length === 1 ? loopTargets.end[0] : loopTargets.end
    }
  }

  if (parallelTargets.start.length > 0 || parallelTargets.end.length > 0) {
    connections.parallel = {}
    if (parallelTargets.start.length > 0) {
      connections.parallel.start =
        parallelTargets.start.length === 1 ? parallelTargets.start[0] : parallelTargets.start
    }
    if (parallelTargets.end.length > 0) {
      connections.parallel.end =
        parallelTargets.end.length === 1 ? parallelTargets.end[0] : parallelTargets.end
    }
  }

  return connections
}
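A round-trip sketch for the generator above (edge IDs and block names are made up; handles follow the internal `condition-<blockId>-<conditionId>` convention used elsewhere in this file):

```typescript
// Two edges out of a condition block: the 'if' branch and the auto-added 'else'.
const edges = [
  { id: 'e1', source: 'cond-1', target: 'agent-1', sourceHandle: 'condition-cond-1-if' },
  { id: 'e2', source: 'cond-1', target: 'response-1', sourceHandle: 'condition-cond-1-else' },
]

// Yields { conditions: { if: 'agent-1', else: 'response-1' } }. Single targets
// collapse to strings; multiple targets stay arrays.
const grouped = generateBlockConnections('cond-1', edges)
```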
/**
 * Validate block structure (type, name, inputs)
 */
export function validateBlockStructure(
  blockId: string,
  block: any
): { errors: string[]; warnings: string[] } {
  const errors: string[] = []
  const warnings: string[] = []

  if (!block || typeof block !== 'object') {
    errors.push(`Invalid block definition for '${blockId}': must be an object`)
    return { errors, warnings }
  }

  if (!block.type || typeof block.type !== 'string') {
    errors.push(`Invalid block '${blockId}': missing or invalid 'type' field`)
  }

  if (!block.name || typeof block.name !== 'string') {
    errors.push(`Invalid block '${blockId}': missing or invalid 'name' field`)
  }

  if (block.inputs && typeof block.inputs !== 'object') {
    errors.push(`Invalid block '${blockId}': 'inputs' must be an object`)
  }

  return { errors, warnings }
}
/**
 * Clean up condition inputs to remove UI state and use semantic format
 * Preserves actual condition IDs that match connections
 */
export function cleanConditionInputs(
  blockId: string,
  inputs: Record<string, any>
): Record<string, any> {
  const cleanInputs = { ...inputs }

  // Handle condition blocks specially
  if (cleanInputs.conditions) {
    try {
      // Parse the JSON string conditions
      const conditions =
        typeof cleanInputs.conditions === 'string'
          ? JSON.parse(cleanInputs.conditions)
          : cleanInputs.conditions

      if (Array.isArray(conditions)) {
        // Convert to clean format, preserving actual IDs for connection mapping
        const tempConditions: Array<{ key: string; value: string }> = []

        // Track else-if count for clean numbering
        let elseIfCount = 0

        conditions.forEach((condition: any) => {
          if (condition.title && condition.value !== undefined) {
            // Create clean semantic keys instead of preserving timestamps
            let key = condition.title
            if (condition.title === 'else if') {
              elseIfCount++
              if (elseIfCount === 1) {
                key = 'else-if'
              } else {
                key = `else-if-${elseIfCount}`
              }
            }

            if (condition.value?.trim()) {
              tempConditions.push({ key, value: condition.value.trim() })
            }
          }
        })

        // Sort conditions to maintain consistent order: if, else-if, else-if-2, ..., else
        tempConditions.sort((a, b) => {
          const getOrder = (key: string): number => {
            if (key === 'if') return 0
            if (key === 'else-if') return 1
            if (key.startsWith('else-if-')) {
              const num = Number.parseInt(key.replace('else-if-', ''), 10)
              return 1 + num // else-if-2 = 3, else-if-3 = 4, etc.
            }
            if (key === 'else') return 1000 // Always last
            return 500 // Other conditions in the middle
          }

          return getOrder(a.key) - getOrder(b.key)
        })

        // Build the final ordered object
        const cleanConditions: Record<string, string> = {}
        tempConditions.forEach(({ key, value }) => {
          cleanConditions[key] = value
        })

        // Replace the verbose format with clean format
        if (Object.keys(cleanConditions).length > 0) {
          cleanInputs.conditions = cleanConditions
        } else {
          cleanInputs.conditions = undefined
        }
      }
    } catch (error) {
      // If parsing fails, leave as-is with a warning
      logger.warn(`Failed to clean condition inputs for block ${blockId}:`, error)
    }
  }

  return cleanInputs
}
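A before/after sketch of what this cleanup does to a condition block's inputs (IDs, expressions, and extra UI fields are invented for illustration):

```typescript
// UI state as stored by the editor: a JSON string with timestamped IDs
// plus cursor/search fields that should not leak into exported YAML.
const uiInputs = {
  conditions: JSON.stringify([
    { id: 'cond-1-if', title: 'if', value: '<start.score> > 0.5', cursorPosition: 0 },
    { id: 'cond-1-else-if-1752111795510', title: 'else if', value: '<start.score> > 0.1' },
  ]),
}

// After cleanup:
// { conditions: { 'if': '<start.score> > 0.5', 'else-if': '<start.score> > 0.1' } }
const exported = cleanConditionInputs('cond-1', uiInputs)
```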
/**
 * Convert clean condition inputs back to internal format for import
 */
export function expandConditionInputs(
  blockId: string,
  inputs: Record<string, any>
): Record<string, any> {
  const expandedInputs = { ...inputs }

  // Handle clean condition format
  if (
    expandedInputs.conditions &&
    typeof expandedInputs.conditions === 'object' &&
    !Array.isArray(expandedInputs.conditions)
  ) {
    const conditionsObj = expandedInputs.conditions as Record<string, string>
    const conditionsArray: any[] = []

    Object.entries(conditionsObj).forEach(([key, value]) => {
      const conditionId = `${blockId}-${key}`

      // Determine display title from key
      let title = key
      if (key.startsWith('else-if')) {
        title = 'else if'
      }

      conditionsArray.push({
        id: conditionId,
        title: title,
        value: value || '',
        showTags: false,
        showEnvVars: false,
        searchTerm: '',
        cursorPosition: 0,
        activeSourceBlockId: null,
      })
    })

    // Add default else if not present and no existing else key
    const hasElse = Object.keys(conditionsObj).some((key) => key === 'else')
    if (!hasElse) {
      conditionsArray.push({
        id: `${blockId}-else`,
        title: 'else',
        value: '',
        showTags: false,
        showEnvVars: false,
        searchTerm: '',
        cursorPosition: 0,
        activeSourceBlockId: null,
      })
    }

    expandedInputs.conditions = JSON.stringify(conditionsArray)
  }

  return expandedInputs
}
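And the inverse direction, expanding the clean map back into the editor's internal array (again with invented values; note the auto-added empty `else` entry):

```typescript
const imported = expandConditionInputs('cond-1', {
  conditions: { if: '<start.score> > 0.5' },
})

// imported.conditions is now a JSON string of UI condition objects:
// [{ id: 'cond-1-if', title: 'if', value: '<start.score> > 0.5', ... },
//  { id: 'cond-1-else', title: 'else', value: '', ... }]
```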
/**
 * Validate that block references in connections exist
 */
export function validateBlockReferences(blocks: Record<string, any>): string[] {
  const errors: string[] = []
  const blockIds = new Set(Object.keys(blocks))

  Object.entries(blocks).forEach(([blockId, block]) => {
    if (!block.connections) return

    const { edges } = parseBlockConnections(blockId, block.connections, block.type)

    edges.forEach((edge) => {
      if (!blockIds.has(edge.target)) {
        errors.push(`Block '${blockId}' references non-existent target block '${edge.target}'`)
      }
    })

    // Check parent references
    if (block.parentId && !blockIds.has(block.parentId)) {
      errors.push(`Block '${blockId}' references non-existent parent block '${block.parentId}'`)
    }
  })

  return errors
}
// Helper functions

function hasNewFormat(connections: ConnectionsFormat): boolean {
  return !!(
    connections.success ||
    connections.error ||
    connections.conditions ||
    connections.loop ||
    connections.parallel
  )
}

function parseNewFormatConnections(
  blockId: string,
  connections: ConnectionsFormat,
  edges: ImportedEdge[],
  errors: string[],
  warnings: string[],
  blockType?: string
) {
  // Parse success connections
  if (connections.success) {
    const targets = Array.isArray(connections.success) ? connections.success : [connections.success]
    targets.forEach((target) => {
      if (typeof target === 'string') {
        edges.push(createEdge(blockId, target, 'source', 'target'))
      } else {
        errors.push(`Invalid success target in block '${blockId}': must be a string`)
      }
    })
  }

  // Parse error connections
  if (connections.error) {
    const targets = Array.isArray(connections.error) ? connections.error : [connections.error]
    targets.forEach((target) => {
      if (typeof target === 'string') {
        edges.push(createEdge(blockId, target, 'error', 'target'))
      } else {
        errors.push(`Invalid error target in block '${blockId}': must be a string`)
      }
    })
  }

  // Parse condition connections
  if (connections.conditions) {
    if (typeof connections.conditions !== 'object') {
      errors.push(`Invalid conditions in block '${blockId}': must be an object`)
    } else {
      Object.entries(connections.conditions).forEach(([conditionId, targets]) => {
        const targetArray = Array.isArray(targets) ? targets : [targets]
        targetArray.forEach((target) => {
          if (typeof target === 'string') {
            // Create condition handle based on block type and condition ID
            const sourceHandle = createConditionHandle(blockId, conditionId, blockType)
            edges.push(createEdge(blockId, target, sourceHandle, 'target'))
          } else {
            errors.push(
              `Invalid condition target for '${conditionId}' in block '${blockId}': must be a string`
            )
          }
        })
      })
    }
  }

  // Parse loop connections
  if (connections.loop) {
    if (typeof connections.loop !== 'object') {
      errors.push(`Invalid loop connections in block '${blockId}': must be an object`)
    } else {
      if (connections.loop.start) {
        const targets = Array.isArray(connections.loop.start)
          ? connections.loop.start
          : [connections.loop.start]
        targets.forEach((target) => {
          if (typeof target === 'string') {
            edges.push(createEdge(blockId, target, 'loop-start-source', 'target'))
          } else {
            errors.push(`Invalid loop start target in block '${blockId}': must be a string`)
          }
        })
      }

      if (connections.loop.end) {
        const targets = Array.isArray(connections.loop.end)
          ? connections.loop.end
          : [connections.loop.end]
        targets.forEach((target) => {
          if (typeof target === 'string') {
            edges.push(createEdge(blockId, target, 'loop-end-source', 'target'))
          } else {
            errors.push(`Invalid loop end target in block '${blockId}': must be a string`)
          }
        })
      }
    }
  }

  // Parse parallel connections
  if (connections.parallel) {
    if (typeof connections.parallel !== 'object') {
      errors.push(`Invalid parallel connections in block '${blockId}': must be an object`)
    } else {
      if (connections.parallel.start) {
        const targets = Array.isArray(connections.parallel.start)
          ? connections.parallel.start
          : [connections.parallel.start]
        targets.forEach((target) => {
          if (typeof target === 'string') {
            edges.push(createEdge(blockId, target, 'parallel-start-source', 'target'))
          } else {
            errors.push(`Invalid parallel start target in block '${blockId}': must be a string`)
          }
        })
      }

      if (connections.parallel.end) {
        const targets = Array.isArray(connections.parallel.end)
          ? connections.parallel.end
          : [connections.parallel.end]
        targets.forEach((target) => {
          if (typeof target === 'string') {
            edges.push(createEdge(blockId, target, 'parallel-end-source', 'target'))
          } else {
            errors.push(`Invalid parallel end target in block '${blockId}': must be a string`)
          }
        })
      }
    }
  }
}
function parseLegacyOutgoingConnections(
  blockId: string,
  outgoing: Array<{ target: string; sourceHandle?: string; targetHandle?: string }>,
  edges: ImportedEdge[],
  errors: string[],
  warnings: string[]
) {
  warnings.push(
    `Block '${blockId}' uses legacy connection format - consider upgrading to the new grouped format`
  )

  outgoing.forEach((connection) => {
    if (!connection.target) {
      errors.push(`Missing target in outgoing connection for block '${blockId}'`)
      return
    }

    edges.push(
      createEdge(
        blockId,
        connection.target,
        connection.sourceHandle || 'source',
        connection.targetHandle || 'target'
      )
    )
  })
}

function createEdge(
  source: string,
  target: string,
  sourceHandle: string,
  targetHandle: string
): ImportedEdge {
  return {
    id: uuidv4(),
    source,
    target,
    sourceHandle,
    targetHandle,
    type: 'workflowEdge',
  }
}

function createConditionHandle(blockId: string, conditionId: string, blockType?: string): string {
  // For condition blocks, create the handle format that the system expects
  if (blockType === 'condition') {
    // Map semantic condition IDs to the internal format the system expects
    const actualConditionId = `${blockId}-${conditionId}`
    return `condition-${actualConditionId}`
  }
  // For other blocks that might have conditions, use a more explicit format
  return `condition-${blockId}-${conditionId}`
}

function extractConditionId(sourceHandle: string): string {
  // Extract condition ID from handle like "condition-blockId-semantic-key"
  // Example: "condition-e23e6318-bcdc-4572-a76b-5015e3950121-else-if-1752111795510"

  if (!sourceHandle.startsWith('condition-')) {
    return sourceHandle
  }

  // Remove "condition-" prefix
  const withoutPrefix = sourceHandle.substring('condition-'.length)

  // Special case: check if this ends with "-else" (the auto-added else condition)
  if (withoutPrefix.endsWith('-else')) {
    return 'else'
  }

  // Find the first UUID pattern (36 characters with 4 hyphens in specific positions)
  // UUID format: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
  const uuidRegex = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}-(.+)$/i
  const match = withoutPrefix.match(uuidRegex)

  if (match) {
    // Extract everything after the UUID - return raw ID for further processing
    return match[1]
  }

  // Fallback for legacy format or simpler cases
  const parts = sourceHandle.split('-')
  if (parts.length >= 2) {
    return parts[parts.length - 1]
  }

  return sourceHandle
}
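To make the handle convention concrete, a few sample inputs for `extractConditionId` and what they resolve to (the UUID is illustrative):

```typescript
// Auto-added else branch: the '-else' suffix short-circuits everything else.
extractConditionId('condition-cond-1-else') // -> 'else'

// UUID-prefixed handle with a timestamped else-if suffix: the UUID is stripped
// and the raw ID is returned for generateBlockConnections to renumber.
extractConditionId(
  'condition-e23e6318-bcdc-4572-a76b-5015e3950121-else-if-1752111795510'
) // -> 'else-if-1752111795510'

// Fallback for non-UUID block IDs: the last dash-separated segment.
extractConditionId('condition-cond-1-if') // -> 'if'
```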
88
apps/sim/tools/blocks/edit-workflow.ts
Normal file
@@ -0,0 +1,88 @@
import type { ToolConfig, ToolResponse } from '../types'

interface EditWorkflowParams {
  yamlContent: string
  description?: string
  _context?: {
    workflowId: string
    chatId?: string
  }
}

interface EditWorkflowResponse extends ToolResponse {
  output: {
    success: boolean
    message: string
    summary?: string
    errors: string[]
    warnings: string[]
    data?: {
      blocksCount: number
      edgesCount: number
      loopsCount: number
      parallelsCount: number
    }
  }
}

export const editWorkflowTool: ToolConfig<EditWorkflowParams, EditWorkflowResponse> = {
  id: 'edit_workflow',
  name: 'Edit Workflow',
  description:
    'Save/edit the current workflow by providing YAML content. This performs the same action as saving in the YAML code editor. Only call this after getting blocks info, metadata, and YAML structure guide.',
  version: '1.0.0',

  params: {
    yamlContent: {
      type: 'string',
      required: true,
      description: 'The complete YAML workflow content to save',
    },
    description: {
      type: 'string',
      required: false,
      description: 'Optional description of the changes being made',
    },
  },

  request: {
    url: (params) => `/api/workflows/${params._context?.workflowId}/yaml`,
    method: 'PUT',
    headers: () => ({
      'Content-Type': 'application/json',
    }),
    body: (params) => ({
      yamlContent: params.yamlContent,
      description: params.description,
      chatId: params._context?.chatId,
      source: 'copilot',
      applyAutoLayout: true,
      createCheckpoint: true, // Always create checkpoints for copilot edits
    }),
    isInternalRoute: true,
  },

  transformResponse: async (response: Response): Promise<EditWorkflowResponse> => {
    if (!response.ok) {
      throw new Error(`Edit workflow failed: ${response.status} ${response.statusText}`)
    }

    const data = await response.json()

    if (!data.success) {
      throw new Error(data.message || 'Failed to edit workflow')
    }

    return {
      success: true,
      output: data,
    }
  },

  transformError: (error: any): string => {
    if (error instanceof Error) {
      return `Failed to edit workflow: ${error.message}`
    }
    return 'An unexpected error occurred while editing the workflow'
  },
}
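A sketch of how the copilot side might invoke this tool config; `executeTool` is a stand-in for whatever runner resolves `ToolConfig` requests in this codebase, and the YAML body, workflow ID, and chat ID are placeholders:

```typescript
// Hypothetical runner call. The _context is injected by the copilot session,
// so the model only supplies the YAML and an optional change summary.
const result = await executeTool(editWorkflowTool, {
  yamlContent: '# full workflow YAML goes here',
  description: 'Add a starter block',
  _context: { workflowId: 'wf_123', chatId: 'chat_456' },
})

// On success, result.output.data reports blocksCount, edgesCount, etc.
```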
78
apps/sim/tools/blocks/get-all.ts
Normal file
@@ -0,0 +1,78 @@
import type { ToolConfig, ToolResponse } from '../types'

interface GetAllBlocksParams {
  includeDetails?: boolean
  filterCategory?: string
}

interface GetAllBlocksResult {
  blockToToolsMapping: Record<string, string[]>
}

interface GetAllBlocksResponse extends ToolResponse {
  output: GetAllBlocksResult
}

export const getAllBlocksTool: ToolConfig<GetAllBlocksParams, GetAllBlocksResponse> = {
  id: 'get_blocks_and_tools',
  name: 'Get All Blocks and Tools',
  description:
    'Get a comprehensive list of all available blocks and tools in Sim Studio with their descriptions, categories, and capabilities',
  version: '1.0.0',

  params: {
    includeDetails: {
      type: 'boolean',
      required: false,
      description:
        'Whether to include detailed information like inputs, outputs, and sub-blocks (default: false)',
    },
    filterCategory: {
      type: 'string',
      required: false,
      description: 'Optional category filter for blocks (e.g., "tools", "blocks", "ai")',
    },
  },

  request: {
    url: '/api/tools/get-all-blocks',
    method: 'POST',
    headers: () => ({
      'Content-Type': 'application/json',
    }),
    body: (params) => ({
      includeDetails: params.includeDetails || false,
      filterCategory: params.filterCategory,
    }),
    isInternalRoute: true,
  },

  transformResponse: async (
    response: Response,
    params?: GetAllBlocksParams
  ): Promise<GetAllBlocksResponse> => {
    if (!response.ok) {
      throw new Error(`Get all blocks failed: ${response.status} ${response.statusText}`)
    }

    const data = await response.json()

    if (!data.success) {
      throw new Error(data.error || 'Failed to get blocks and tools')
    }

    return {
      success: true,
      output: {
        blockToToolsMapping: data.data,
      },
    }
  },

  transformError: (error: any): string => {
    if (error instanceof Error) {
      return `Failed to get blocks and tools: ${error.message}`
    }
    return 'An unexpected error occurred while getting blocks and tools'
  },
}
104
apps/sim/tools/blocks/get-metadata.ts
Normal file
@@ -0,0 +1,104 @@
import type { ToolConfig, ToolResponse } from '../types'

interface GetBlockMetadataParams {
  blockIds: string[]
}

interface BlockMetadataInfo {
  type: 'block' | 'tool'
  description: string
  longDescription?: string
  category: string
  docsLink?: string
  // For core blocks with YAML documentation
  yamlSchema?: string
  // For tool blocks or fallback
  inputs?: Record<string, any>
  outputs?: Record<string, any>
  subBlocks?: any[]
  toolSchemas?: Record<
    string,
    {
      id: string
      name: string
      description: string
      version?: string
      params?: Record<string, any>
      request?: {
        method: string
        url: string
        headers?: any
        isInternalRoute?: boolean
      }
    }
  >
  // Actual schemas from block code configuration
  codeSchemas?: {
    inputs?: Record<string, any>
    outputs?: Record<string, any>
    subBlocks?: any[]
  }
}

interface GetBlockMetadataResult {
  [blockId: string]: BlockMetadataInfo
}

interface GetBlockMetadataResponse extends ToolResponse {
  output: GetBlockMetadataResult
}

export const getBlockMetadataTool: ToolConfig<GetBlockMetadataParams, GetBlockMetadataResponse> = {
  id: 'get_blocks_metadata',
  name: 'Get Block Metadata',
  description:
    'Get detailed metadata for specific blocks. Returns both documentation (YAML schemas) and actual code schemas (inputs, outputs, subBlocks). For core blocks (agent, function, api, etc.), includes YAML schema documentation from docs. For tool blocks, includes tool schema information with parameters and API details. All blocks include precise code schemas from their configuration.',
  version: '1.0.0',

  params: {
    blockIds: {
      type: 'array',
      required: true,
      description: 'Array of block IDs to get descriptions for',
    },
  },

  request: {
    url: '/api/tools/get-blocks-metadata',
    method: 'POST',
    headers: () => ({
      'Content-Type': 'application/json',
    }),
    body: (params) => ({
      blockIds: params.blockIds,
    }),
    isInternalRoute: true,
  },

  transformResponse: async (
    response: Response,
    params?: GetBlockMetadataParams
  ): Promise<GetBlockMetadataResponse> => {
    if (!response.ok) {
      throw new Error(`Get block metadata failed: ${response.status} ${response.statusText}`)
    }

    const data = await response.json()

    if (!data.success) {
      throw new Error(data.error || 'Failed to get block metadata')
    }

    return {
      success: true,
      output: data.data,
    }
  },

  transformError: (error: any): string => {
    if (error instanceof Error) {
      return `Failed to get block metadata: ${error.message}`
    }
    return 'An unexpected error occurred while getting block metadata'
  },
}
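Same pattern for metadata lookups; again `executeTool` and the block IDs are assumptions for illustration:

```typescript
const meta = await executeTool(getBlockMetadataTool, {
  blockIds: ['agent', 'condition'],
})

// meta.output.agent.yamlSchema   -> docs-sourced YAML schema for core blocks
// meta.output.agent.codeSchemas  -> precise inputs/outputs/subBlocks from code
```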
56
apps/sim/tools/blocks/get-yaml-structure.ts
Normal file
@@ -0,0 +1,56 @@
import type { ToolConfig, ToolResponse } from '../types'

type GetYamlStructureParams = Record<string, never>

interface GetYamlStructureResult {
  guide: string
  message: string
}

interface GetYamlStructureResponse extends ToolResponse {
  output: GetYamlStructureResult
}

export const getYamlStructureTool: ToolConfig<GetYamlStructureParams, GetYamlStructureResponse> = {
  id: 'get_yaml_structure',
  name: 'Get YAML Workflow Structure Guide',
  description:
    'Get comprehensive YAML workflow syntax guide and examples to understand how to structure Sim Studio workflows',
  version: '1.0.0',

  params: {},

  request: {
    url: '/api/tools/get-yaml-structure',
    method: 'POST',
    headers: () => ({
      'Content-Type': 'application/json',
    }),
    body: () => ({}),
    isInternalRoute: true,
  },

  transformResponse: async (response: Response): Promise<GetYamlStructureResponse> => {
    if (!response.ok) {
      throw new Error(`Get YAML structure failed: ${response.status} ${response.statusText}`)
    }

    const data = await response.json()

    if (!data.success) {
      throw new Error(data.error || 'Failed to get YAML structure guide')
    }

    return {
      success: true,
      output: data.data,
    }
  },

  transformError: (error: any): string => {
    if (error instanceof Error) {
      return `Failed to get YAML structure guide: ${error.message}`
    }
    return 'An unexpected error occurred while getting YAML structure guide'
  },
}
@@ -39,7 +39,7 @@ export const docsSearchTool: ToolConfig<DocsSearchParams, DocsSearchResponse> =
     topK: {
       type: 'number',
       required: false,
-      description: 'Number of results to return (default: 5, max: 20)',
+      description: 'Number of results to return (default: 10, max: 20)',
     },
   },
 
@@ -51,7 +51,7 @@ export const docsSearchTool: ToolConfig<DocsSearchParams, DocsSearchResponse> =
     }),
     body: (params) => {
       // Validate and clamp topK parameter
-      let topK = params.topK || 5
+      let topK = params.topK || 10
       if (topK > 20) topK = 20
       if (topK < 1) topK = 1
 
@@ -2,6 +2,10 @@ import { createLogger } from '@/lib/logs/console-logger'
 import { getBaseUrl } from '@/lib/urls/utils'
 import { useCustomToolsStore } from '@/stores/custom-tools/store'
 import { useEnvironmentStore } from '@/stores/settings/environment/store'
+import { editWorkflowTool } from '@/tools/blocks/edit-workflow'
+import { getAllBlocksTool } from '@/tools/blocks/get-all'
+import { getBlockMetadataTool } from '@/tools/blocks/get-metadata'
+import { getYamlStructureTool } from '@/tools/blocks/get-yaml-structure'
 import { docsSearchTool } from '@/tools/docs/search'
 import { tools } from '@/tools/registry'
 import type { TableRow, ToolConfig, ToolResponse } from '@/tools/types'
@@ -13,6 +17,15 @@ const logger = createLogger('ToolsUtils')
 const internalTools: Record<string, ToolConfig> = {
   docs_search_internal: docsSearchTool,
   get_user_workflow: getUserWorkflowTool,
+  get_blocks_and_tools: getAllBlocksTool,
+  get_blocks_metadata: getBlockMetadataTool,
+  get_yaml_structure: getYamlStructureTool,
+  edit_workflow: editWorkflowTool,
 }
 
+// Export the list of internal tool IDs for filtering purposes
+export const getInternalToolIds = (): Set<string> => {
+  return new Set(Object.keys(internalTools))
+}
+
 /**
@@ -167,7 +180,7 @@ export function validateToolRequest(
   // Note: user-only parameters are not checked here as they're optional
   for (const [paramName, paramConfig] of Object.entries(tool.params)) {
     if (
-      paramConfig.visibility === 'user-or-llm' &&
+      (paramConfig as any).visibility === 'user-or-llm' &&
       paramConfig.required &&
       !(paramName in params)
     ) {
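One intended consumer of `getInternalToolIds` is filtering, sketched here under the assumption that `tools` is the public registry imported above:

```typescript
// Hide copilot-internal tools from any user-facing tool list.
const internalIds = getInternalToolIds()
const userVisibleTools = Object.entries(tools).filter(([id]) => !internalIds.has(id))
```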
41
apps/sim/types/tool-call.ts
Normal file
@@ -0,0 +1,41 @@
export interface ToolCallState {
  id: string
  name: string
  displayName?: string
  parameters?: Record<string, any>
  state: 'detecting' | 'executing' | 'completed' | 'error'
  startTime?: number
  endTime?: number
  duration?: number
  result?: any
  error?: string
  progress?: string
}

export interface ToolCallGroup {
  id: string
  toolCalls: ToolCallState[]
  status: 'pending' | 'in_progress' | 'completed' | 'error'
  startTime?: number
  endTime?: number
  summary?: string
}

export interface InlineContent {
  type: 'text' | 'tool_call'
  content: string
  toolCall?: ToolCallState
}

export interface ParsedMessageContent {
  textContent: string
  toolCalls: ToolCallState[]
  toolGroups: ToolCallGroup[]
  inlineContent?: InlineContent[]
}

export interface ToolCallIndicator {
  type: 'status' | 'thinking' | 'execution'
  content: string
  toolNames?: string[]
}
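For orientation, a sketch of how these types compose during a streamed copilot turn (all values invented):

```typescript
const call: ToolCallState = {
  id: 'tc_1',
  name: 'get_blocks_and_tools',
  displayName: 'Getting blocks and tools',
  state: 'executing',
  startTime: Date.now(),
}

// The parsed assistant message interleaves text with inline tool calls so the
// UI can render the call indicator exactly where it occurred in the stream.
const parsed: ParsedMessageContent = {
  textContent: 'Let me look up the available blocks first.',
  toolCalls: [call],
  toolGroups: [{ id: 'g_1', toolCalls: [call], status: 'in_progress' }],
  inlineContent: [
    { type: 'text', content: 'Let me look up the available blocks first.' },
    { type: 'tool_call', content: '', toolCall: call },
  ],
}
```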
105
bun.lock
@@ -8,6 +8,7 @@
|
||||
"@t3-oss/env-nextjs": "0.13.4",
|
||||
"@vercel/analytics": "1.5.0",
|
||||
"geist": "^1.4.2",
|
||||
"react-colorful": "5.6.1",
|
||||
"remark-gfm": "4.0.1",
|
||||
"socket.io-client": "4.8.1",
|
||||
},
|
||||
@@ -64,6 +65,8 @@
|
||||
"@better-auth/stripe": "^1.2.9",
|
||||
"@browserbasehq/stagehand": "^2.0.0",
|
||||
"@cerebras/cerebras_cloud_sdk": "^1.23.0",
|
||||
"@chatscope/chat-ui-kit-react": "2.1.1",
|
||||
"@chatscope/chat-ui-kit-styles": "1.4.0",
|
||||
"@hookform/resolvers": "^4.1.3",
|
||||
"@opentelemetry/api": "^1.9.0",
|
||||
"@opentelemetry/exporter-collector": "^0.25.0",
|
||||
@@ -97,6 +100,7 @@
|
||||
"@react-email/components": "^0.0.34",
|
||||
"@sentry/nextjs": "^9.15.0",
|
||||
"@trigger.dev/sdk": "3.3.17",
|
||||
"@types/react-syntax-highlighter": "15.5.13",
|
||||
"@types/three": "0.177.0",
|
||||
"@vercel/og": "^0.6.5",
|
||||
"@vercel/speed-insights": "^1.2.0",
|
||||
@@ -140,8 +144,11 @@
|
||||
"react-hook-form": "^7.54.2",
|
||||
"react-markdown": "^10.1.0",
|
||||
"react-simple-code-editor": "^0.14.1",
|
||||
"react-syntax-highlighter": "15.6.1",
|
||||
"reactflow": "^11.11.4",
|
||||
"recharts": "2.15.3",
|
||||
"rehype-highlight": "7.0.2",
|
||||
"remark-gfm": "4.0.1",
|
||||
"resend": "^4.1.2",
|
||||
"socket.io": "^4.8.1",
|
||||
"stripe": "^17.7.0",
|
||||
@@ -441,6 +448,10 @@
|
||||
|
||||
"@cerebras/cerebras_cloud_sdk": ["@cerebras/cerebras_cloud_sdk@1.35.0", "", { "dependencies": { "@types/node": "^18.11.18", "@types/node-fetch": "^2.6.4", "abort-controller": "^3.0.0", "agentkeepalive": "^4.2.1", "form-data-encoder": "1.7.2", "formdata-node": "^4.3.2", "node-fetch": "^2.6.7" } }, "sha512-bQ6KYHmcvudHJ1aLzqkeETn3Y071/8/zpcZho6g4pKZ+VluHvLmIG0buhrwF9qJY5WSLmXR/s4pruxVRmfV7yQ=="],
|
||||
|
||||
"@chatscope/chat-ui-kit-react": ["@chatscope/chat-ui-kit-react@2.1.1", "", { "dependencies": { "@chatscope/chat-ui-kit-styles": "^1.2.0", "@fortawesome/fontawesome-free": "^6.5.2", "@fortawesome/fontawesome-svg-core": "^6.5.2", "@fortawesome/free-solid-svg-icons": "^6.5.2", "@fortawesome/react-fontawesome": "^0.2.2", "classnames": "^2.2.6", "prop-types": "^15.7.2" }, "peerDependencies": { "react": "^16.12.0 || ^17.0.0 || ^18.2.0 || ^19.0.0", "react-dom": "^16.12.0 || ^17.0.0 || ^18.2.0 || ^19.0.0" } }, "sha512-rCtE9abdmAbBDkAAUYBC1TDTBMZHquqFIZhADptAfHcJ8z8W3XH/z/ZuwBSJXtzi6h1mwCNc3tBmm1A2NLGhNg=="],
|
||||
|
||||
"@chatscope/chat-ui-kit-styles": ["@chatscope/chat-ui-kit-styles@1.4.0", "", {}, "sha512-016mBJD3DESw7Nh+lkKcPd22xG92ghA0VpIXIbjQtmXhC7Ve6wRazTy8z1Ahut+Tbv179+JxrftuMngsj/yV8Q=="],
|
||||
|
||||
"@csstools/color-helpers": ["@csstools/color-helpers@5.0.2", "", {}, "sha512-JqWH1vsgdGcw2RR6VliXXdA0/59LttzlU8UlRT/iUUsEeWfYq8I+K0yhihEUTTHLRm1EXvpsCx3083EU15ecsA=="],
|
||||
|
||||
"@csstools/css-calc": ["@csstools/css-calc@2.1.4", "", { "peerDependencies": { "@csstools/css-parser-algorithms": "^3.0.5", "@csstools/css-tokenizer": "^3.0.4" } }, "sha512-3N8oaj+0juUw/1H3YwmDDJXCgTB1gKU6Hc/bB502u9zR0q2vd786XJH9QfrKIEgFlZmhZiq6epXl4rHqhzsIgQ=="],
|
||||
@@ -523,6 +534,16 @@
|
||||
|
||||
"@formatjs/intl-localematcher": ["@formatjs/intl-localematcher@0.6.1", "", { "dependencies": { "tslib": "^2.8.0" } }, "sha512-ePEgLgVCqi2BBFnTMWPfIghu6FkbZnnBVhO2sSxvLfrdFw7wCHAHiDoM2h4NRgjbaY7+B7HgOLZGkK187pZTZg=="],
|
||||
|
||||
"@fortawesome/fontawesome-common-types": ["@fortawesome/fontawesome-common-types@6.7.2", "", {}, "sha512-Zs+YeHUC5fkt7Mg1l6XTniei3k4bwG/yo3iFUtZWd/pMx9g3fdvkSK9E0FOC+++phXOka78uJcYb8JaFkW52Xg=="],
|
||||
|
||||
"@fortawesome/fontawesome-free": ["@fortawesome/fontawesome-free@6.7.2", "", {}, "sha512-JUOtgFW6k9u4Y+xeIaEiLr3+cjoUPiAuLXoyKOJSia6Duzb7pq+A76P9ZdPDoAoxHdHzq6gE9/jKBGXlZT8FbA=="],
|
||||
|
||||
"@fortawesome/fontawesome-svg-core": ["@fortawesome/fontawesome-svg-core@6.7.2", "", { "dependencies": { "@fortawesome/fontawesome-common-types": "6.7.2" } }, "sha512-yxtOBWDrdi5DD5o1pmVdq3WMCvnobT0LU6R8RyyVXPvFRd2o79/0NCuQoCjNTeZz9EzA9xS3JxNWfv54RIHFEA=="],
|
||||
|
||||
"@fortawesome/free-solid-svg-icons": ["@fortawesome/free-solid-svg-icons@6.7.2", "", { "dependencies": { "@fortawesome/fontawesome-common-types": "6.7.2" } }, "sha512-GsBrnOzU8uj0LECDfD5zomZJIjrPhIlWU82AHwa2s40FKH+kcxQaBvBo3Z4TxyZHIyX8XTDxsyA33/Vx9eFuQA=="],
|
||||
|
||||
"@fortawesome/react-fontawesome": ["@fortawesome/react-fontawesome@0.2.2", "", { "dependencies": { "prop-types": "^15.8.1" }, "peerDependencies": { "@fortawesome/fontawesome-svg-core": "~1 || ~6", "react": ">=16.3" } }, "sha512-EnkrprPNqI6SXJl//m29hpaNzOp1bruISWaOiRtkMi/xSvHJlzc2j2JAYS7egxt/EbjSNV/k6Xy0AQI6vB2+1g=="],
|
||||
|
||||
"@google-cloud/precise-date": ["@google-cloud/precise-date@4.0.0", "", {}, "sha512-1TUx3KdaU3cN7nfCdNf+UVqA/PSX29Cjcox3fZZBtINlRrXVTmUkQnCKv2MbBUbCopbK4olAT1IHl76uZyCiVA=="],
|
||||
|
||||
"@google/genai": ["@google/genai@0.8.0", "", { "dependencies": { "google-auth-library": "^9.14.2", "ws": "^8.18.0" } }, "sha512-Zs+OGyZKyMbFofGJTR9/jTQSv8kITh735N3tEuIZj4VlMQXTC0soCFahysJ9NaeenRlD7xGb6fyqmX+FwrpU6Q=="],
|
||||
@@ -1375,6 +1396,8 @@
|
||||
|
||||
"@types/react-dom": ["@types/react-dom@19.1.6", "", { "peerDependencies": { "@types/react": "^19.0.0" } }, "sha512-4hOiT/dwO8Ko0gV1m/TJZYk3y0KBnY9vzDh7W+DH17b2HFSOGgdj33dhihPeuy3l0q23+4e+hoXHV6hCC4dCXw=="],
|
||||
|
||||
"@types/react-syntax-highlighter": ["@types/react-syntax-highlighter@15.5.13", "", { "dependencies": { "@types/react": "*" } }, "sha512-uLGJ87j6Sz8UaBAooU0T6lWJ0dBmjZgN1PZTrj05TNql2/XpC6+4HhMT5syIdFUUt+FASfCeLLv4kBygNU+8qA=="],
|
||||
|
||||
"@types/shimmer": ["@types/shimmer@1.2.0", "", {}, "sha512-UE7oxhQLLd9gub6JKIAhDq06T0F6FnztwMNRvYgjeQSBeMc1ZG/tA47EwfduvkuQS8apbkM/lpLpWsaCeYsXVg=="],
|
||||
|
||||
"@types/stats.js": ["@types/stats.js@0.17.4", "", {}, "sha512-jIBvWWShCvlBqBNIZt0KAshWpvSjhkwkEu4ZUcASoAvhmrgAUI2t1dXrjSL4xXVLB4FznPrIsX3nKXFl/Dt4vA=="],
|
||||
@@ -1593,13 +1616,13 @@
|
||||
|
||||
"change-case": ["change-case@4.1.2", "", { "dependencies": { "camel-case": "^4.1.2", "capital-case": "^1.0.4", "constant-case": "^3.0.4", "dot-case": "^3.0.4", "header-case": "^2.0.4", "no-case": "^3.0.4", "param-case": "^3.0.4", "pascal-case": "^3.1.2", "path-case": "^3.0.4", "sentence-case": "^3.0.4", "snake-case": "^3.0.4", "tslib": "^2.0.3" } }, "sha512-bSxY2ws9OtviILG1EiY5K7NNxkqg/JnRnFxLtKQ96JaviiIxi7djMrSd0ECT9AC+lttClmYwKw53BWpOMblo7A=="],
|
||||
|
||||
"character-entities": ["character-entities@2.0.2", "", {}, "sha512-shx7oQ0Awen/BRIdkjkvz54PnEEI/EjwXDSIZp86/KKdbafHh1Df/RYGBhn4hbe2+uKC9FnT5UCEdyPz3ai9hQ=="],
|
||||
"character-entities": ["character-entities@1.2.4", "", {}, "sha512-iBMyeEHxfVnIakwOuDXpVkc54HijNgCyQB2w0VfGQThle6NXn50zU6V/u+LDhxHcDUPojn6Kpga3PTAD8W1bQw=="],
|
||||
|
||||
"character-entities-html4": ["character-entities-html4@2.1.0", "", {}, "sha512-1v7fgQRj6hnSwFpq1Eu0ynr/CDEw0rXo2B61qXrLNdHZmPKgb7fqS1a2JwF0rISo9q77jDI8VMEHoApn8qDoZA=="],
|
||||
|
||||
"character-entities-legacy": ["character-entities-legacy@3.0.0", "", {}, "sha512-RpPp0asT/6ufRm//AJVwpViZbGM/MkjQFxJccQRHmISF/22NBtsHqAWmL+/pmkPWoIUJdWyeVleTl1wydHATVQ=="],
|
||||
"character-entities-legacy": ["character-entities-legacy@1.1.4", "", {}, "sha512-3Xnr+7ZFS1uxeiUDvV02wQ+QDbc55o97tIV5zHScSPJpcLm/r0DFPcoY3tYRp+VZukxuMeKgXYmsXQHO05zQeA=="],
|
||||
|
||||
"character-reference-invalid": ["character-reference-invalid@2.0.1", "", {}, "sha512-iBZ4F4wRbyORVsu0jPV7gXkOsGYjGHPmAyv+HiHG8gi5PtC9KI2j1+v8/tlibRvjoWX027ypmG/n0HtO5t7unw=="],
|
||||
"character-reference-invalid": ["character-reference-invalid@1.1.4", "", {}, "sha512-mKKUkUbhPpQlCOfIuZkvSEgktjPFIsZKRRbC6KWVEMvlzblj3i3asQv5ODsrwt0N3pHAEvjP8KTQPHkp0+6jOg=="],
|
||||
|
||||
"chardet": ["chardet@0.7.0", "", {}, "sha512-mT8iDcrh03qDGRRmoA2hmBJnxpllMR+0/0qlzjqZES6NdiWDcZkCNAk4rPFZ9Q85r27unkiNNg8ZOiwZXBHwcA=="],
|
||||
|
||||
@@ -1617,6 +1640,8 @@
|
||||
|
||||
"classcat": ["classcat@5.0.5", "", {}, "sha512-JhZUT7JFcQy/EzW605k/ktHtncoo9vnyW/2GspNYwFlN1C/WmjuV/xtS04e9SOkL2sTdw0VAZ2UGCcQ9lR6p6w=="],
|
||||
|
||||
"classnames": ["classnames@2.5.1", "", {}, "sha512-saHYOzhIQs6wy2sVxTM6bUDsQO4F50V9RQ22qBpEdCW+I+/Wmke2HOl6lS6dTpdxVhb88/I6+Hs+438c3lfUow=="],
|
||||
|
||||
"cli-cursor": ["cli-cursor@3.1.0", "", { "dependencies": { "restore-cursor": "^3.1.0" } }, "sha512-I/zHAwsKf9FqGoXM4WWRACob9+SNukZTd94DWF57E4toouRulbCxcUh6RKUEOQlYTHJnzkPMySvPNaaSLNfLZw=="],
|
||||
|
||||
"cli-spinners": ["cli-spinners@2.9.2", "", {}, "sha512-ywqV+5MmyL4E7ybXgKys4DugZbX0FC6LnwrhjuykIjnK9k8OQacQ7axGKnjDXWNhns0xot3bZI5h55H8yo9cJg=="],
|
||||
@@ -1941,6 +1966,8 @@
|
||||
|
||||
"fastq": ["fastq@1.19.1", "", { "dependencies": { "reusify": "^1.0.4" } }, "sha512-GwLTyxkCXjXbxqIhTsMI2Nui8huMPtnxg7krajPJAjnEG/iiOS7i+zCtWGZR9G0NBKbXKh6X9m9UIsYX/N6vvQ=="],
|
||||
|
||||
"fault": ["fault@1.0.4", "", { "dependencies": { "format": "^0.2.0" } }, "sha512-CJ0HCB5tL5fYTEA7ToAq5+kTwd++Borf1/bifxd9iT70QcXr4MRrO3Llf8Ifs70q+SJcGHFtnIE/Nw6giCtECA=="],
|
||||
|
||||
"fdir": ["fdir@6.4.6", "", { "peerDependencies": { "picomatch": "^3 || ^4" }, "optionalPeers": ["picomatch"] }, "sha512-hiFoqpyZcfNm1yc4u8oWCf9A2c4D3QjCrks3zmoVKVxpQRzmPNar1hUJcBG2RQHvEVGDN+Jm81ZheVLAQMK6+w=="],
|
||||
|
||||
"fetch-blob": ["fetch-blob@3.2.0", "", { "dependencies": { "node-domexception": "^1.0.0", "web-streams-polyfill": "^3.0.3" } }, "sha512-7yAQpD2UMJzLi1Dqv7qFYnPbaPx7ZfFK6PiIxQ4PfkGPyNyl2Ugx+a/umUonmKqjhM4DnfbMvdX6otXq83soQQ=="],
|
||||
@@ -1961,6 +1988,8 @@
|
||||
|
||||
"form-data-encoder": ["form-data-encoder@1.7.2", "", {}, "sha512-qfqtYan3rxrnCk1VYaA4H+Ms9xdpPqvLZa6xmMgFvhO32x7/3J/ExcTd6qpxM0vH2GdMI+poehyBZvqfMTto8A=="],
|
||||
|
||||
"format": ["format@0.2.2", "", {}, "sha512-wzsgA6WOq+09wrU1tsJ09udeR/YZRaeArL9e1wPbFg3GG2yDnC2ldKpxs4xunpFF9DgqCqOIra3bc1HWrJ37Ww=="],
|
||||
|
||||
"formdata-node": ["formdata-node@4.4.1", "", { "dependencies": { "node-domexception": "1.0.0", "web-streams-polyfill": "4.0.0-beta.3" } }, "sha512-0iirZp3uVDjVGt9p49aTaqjk84TrglENEDuqfdlZQ1roC9CWlPk6Avf8EEnZNcAqPonwkG35x4n3ww/1THYAeQ=="],
|
||||
|
||||
"formdata-polyfill": ["formdata-polyfill@4.0.10", "", { "dependencies": { "fetch-blob": "^3.1.2" } }, "sha512-buewHzMvYL29jdeQTVILecSaZKnt/RJWjoZCF5OW60Z67/GmSLBkOFM7qh1PI3zFNtJbaZL5eQu1vLfazOwj4g=="],
|
||||
@@ -2043,6 +2072,10 @@
|
||||
|
||||
"hasown": ["hasown@2.0.2", "", { "dependencies": { "function-bind": "^1.1.2" } }, "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ=="],
|
||||
|
||||
"hast-util-is-element": ["hast-util-is-element@3.0.0", "", { "dependencies": { "@types/hast": "^3.0.0" } }, "sha512-Val9mnv2IWpLbNPqc/pUem+a7Ipj2aHacCwgNfTiK0vJKl0LF+4Ba4+v1oPHFpf3bLYmreq0/l3Gud9S5OH42g=="],
|
||||
|
||||
"hast-util-parse-selector": ["hast-util-parse-selector@2.2.5", "", {}, "sha512-7j6mrk/qqkSehsM92wQjdIgWM2/BW61u/53G6xmC8i1OmEdKLHbk419QKQUjz6LglWsfqoiHmyMRkP1BGjecNQ=="],
|
||||
|
||||
"hast-util-to-estree": ["hast-util-to-estree@3.1.3", "", { "dependencies": { "@types/estree": "^1.0.0", "@types/estree-jsx": "^1.0.0", "@types/hast": "^3.0.0", "comma-separated-tokens": "^2.0.0", "devlop": "^1.0.0", "estree-util-attach-comments": "^3.0.0", "estree-util-is-identifier-name": "^3.0.0", "hast-util-whitespace": "^3.0.0", "mdast-util-mdx-expression": "^2.0.0", "mdast-util-mdx-jsx": "^3.0.0", "mdast-util-mdxjs-esm": "^2.0.0", "property-information": "^7.0.0", "space-separated-tokens": "^2.0.0", "style-to-js": "^1.0.0", "unist-util-position": "^5.0.0", "zwitch": "^2.0.0" } }, "sha512-48+B/rJWAp0jamNbAAf9M7Uf//UVqAoMmgXhBdxTDJLGKY+LRnZ99qcG+Qjl5HfMpYNzS5v4EAwVEF34LeAj7w=="],
|
||||
|
||||
"hast-util-to-html": ["hast-util-to-html@9.0.5", "", { "dependencies": { "@types/hast": "^3.0.0", "@types/unist": "^3.0.0", "ccount": "^2.0.0", "comma-separated-tokens": "^2.0.0", "hast-util-whitespace": "^3.0.0", "html-void-elements": "^3.0.0", "mdast-util-to-hast": "^13.0.0", "property-information": "^7.0.0", "space-separated-tokens": "^2.0.0", "stringify-entities": "^4.0.0", "zwitch": "^2.0.4" } }, "sha512-OguPdidb+fbHQSU4Q4ZiLKnzWo8Wwsf5bZfbvu7//a9oTYoqD/fWpe96NuHkoS9h0ccGOTe0C4NGXdtS0iObOw=="],
|
||||
@@ -2051,8 +2084,12 @@
|
||||
|
||||
"hast-util-to-string": ["hast-util-to-string@3.0.1", "", { "dependencies": { "@types/hast": "^3.0.0" } }, "sha512-XelQVTDWvqcl3axRfI0xSeoVKzyIFPwsAGSLIsKdJKQMXDYJS4WYrBNF/8J7RdhIcFI2BOHgAifggsvsxp/3+A=="],
|
||||
|
||||
"hast-util-to-text": ["hast-util-to-text@4.0.2", "", { "dependencies": { "@types/hast": "^3.0.0", "@types/unist": "^3.0.0", "hast-util-is-element": "^3.0.0", "unist-util-find-after": "^5.0.0" } }, "sha512-KK6y/BN8lbaq654j7JgBydev7wuNMcID54lkRav1P0CaE1e47P72AWWPiGKXTJU271ooYzcvTAn/Zt0REnvc7A=="],
|
||||
|
||||
"hast-util-whitespace": ["hast-util-whitespace@3.0.0", "", { "dependencies": { "@types/hast": "^3.0.0" } }, "sha512-88JUN06ipLwsnv+dVn+OIYOvAuvBMy/Qoi6O7mQHxdPXpjy+Cd6xRkWwux7DKO+4sYILtLBRIKgsdpS2gQc7qw=="],
|
||||
|
||||
"hastscript": ["hastscript@6.0.0", "", { "dependencies": { "@types/hast": "^2.0.0", "comma-separated-tokens": "^1.0.0", "hast-util-parse-selector": "^2.0.0", "property-information": "^5.0.0", "space-separated-tokens": "^1.0.0" } }, "sha512-nDM6bvd7lIqDUiYEiu5Sl/+6ReP0BMk/2f4U/Rooccxkj0P5nm+acM5PrGJ/t5I8qPGiqZSE6hVAwZEdZIvP4w=="],
|
||||
|
||||
"header-case": ["header-case@2.0.4", "", { "dependencies": { "capital-case": "^1.0.4", "tslib": "^2.0.3" } }, "sha512-H/vuk5TEEVZwrR0lp2zed9OCo1uAILMlx0JEMgC26rzyJJ3N1v6XkwHHXJQdR2doSjcGPM6OKPYoJgf0plJ11Q=="],
|
||||
|
||||
"help-me": ["help-me@5.0.0", "", {}, "sha512-7xgomUX6ADmcYzFik0HzAxh/73YlKR9bmFzf51CZwR+b6YtzU2m0u49hQCqV6SvlqIqsaxovfwdvbnsw3b/zpg=="],
|
||||
@@ -2061,6 +2098,10 @@
|
||||
|
||||
"hexer": ["hexer@1.5.0", "", { "dependencies": { "ansi-color": "^0.2.1", "minimist": "^1.1.0", "process": "^0.10.0", "xtend": "^4.0.0" }, "bin": { "hexer": "./cli.js" } }, "sha512-dyrPC8KzBzUJ19QTIo1gXNqIISRXQ0NwteW6OeQHRN4ZuZeHkdODfj0zHBdOlHbRY8GqbqK57C9oWSvQZizFsg=="],
|
||||
|
||||
"highlight.js": ["highlight.js@10.7.3", "", {}, "sha512-tzcUFauisWKNHaRkN4Wjl/ZA07gENAjFl3J/c480dprkGTg5EQstgaNFqBfUqCq54kZRIEcreTsAgF/m2quD7A=="],
|
||||
|
||||
"highlightjs-vue": ["highlightjs-vue@1.0.0", "", {}, "sha512-PDEfEF102G23vHmPhLyPboFCD+BkMGu+GuJe2d9/eH4FsCwvgBpnc9n0pGE+ffKdph38s6foEZiEjdgHdzp+IA=="],
|
||||
|
||||
"hoist-non-react-statics": ["hoist-non-react-statics@3.3.2", "", { "dependencies": { "react-is": "^16.7.0" } }, "sha512-/gGivxi8JPKWNm/W0jSmzcMPpfpPLc3dY/6GxhX2hQ9iGj3aDfklV4ET7NjKpSinLpJ5vafa9iiGIEZg10SfBw=="],
|
||||
|
||||
"html-encoding-sniffer": ["html-encoding-sniffer@4.0.0", "", { "dependencies": { "whatwg-encoding": "^3.1.1" } }, "sha512-Y22oTqIU4uuPgEemfz7NDJz6OeKf12Lsu+QC+s3BVpda64lTiMYCyGwg5ki4vFxkMwQdeZDl2adZoqUgdFuTgQ=="],
|
||||
@@ -2113,9 +2154,9 @@
|
||||
|
||||
"ioredis": ["ioredis@5.6.1", "", { "dependencies": { "@ioredis/commands": "^1.1.1", "cluster-key-slot": "^1.1.0", "debug": "^4.3.4", "denque": "^2.1.0", "lodash.defaults": "^4.2.0", "lodash.isarguments": "^3.1.0", "redis-errors": "^1.2.0", "redis-parser": "^3.0.0", "standard-as-callback": "^2.1.0" } }, "sha512-UxC0Yv1Y4WRJiGQxQkP0hfdL0/5/6YvdfOOClRgJ0qppSarkhneSa6UvkMkms0AkdGimSH3Ikqm+6mkMmX7vGA=="],
|
||||
|
||||
"is-alphabetical": ["is-alphabetical@2.0.1", "", {}, "sha512-FWyyY60MeTNyeSRpkM2Iry0G9hpr7/9kD40mD/cGQEuilcZYS4okz8SN2Q6rLCJ8gbCt6fN+rC+6tMGS99LaxQ=="],
|
||||
"is-alphabetical": ["is-alphabetical@1.0.4", "", {}, "sha512-DwzsA04LQ10FHTZuL0/grVDk4rFoVH1pjAToYwBrHSxcrBIGQuXrQMtD5U1b0U2XVgKZCTLLP8u2Qxqhy3l2Vg=="],
|
||||
|
||||
"is-alphanumerical": ["is-alphanumerical@2.0.1", "", { "dependencies": { "is-alphabetical": "^2.0.0", "is-decimal": "^2.0.0" } }, "sha512-hmbYhX/9MUMF5uh7tOXyK/n0ZvWpad5caBA17GsC6vyuCqaWliRG5K1qS9inmUhEMaOBIW7/whAnSwveW/LtZw=="],
|
||||
"is-alphanumerical": ["is-alphanumerical@1.0.4", "", { "dependencies": { "is-alphabetical": "^1.0.0", "is-decimal": "^1.0.0" } }, "sha512-UzoZUr+XfVz3t3v4KyGEniVL9BDRoQtY7tOyrRybkVNjDFWyo1yhXNGrrBTQxp3ib9BLAWs7k2YKBQsFRkZG9A=="],
|
||||
|
||||
"is-arrayish": ["is-arrayish@0.3.2", "", {}, "sha512-eVRqCvVlZbuw3GrM63ovNSNAeA1K16kaR/LRY/92w0zxQ5/1YzwblUX652i4Xs9RwAGjW9d9y6X88t8OaAJfWQ=="],
|
||||
|
||||
@@ -2123,7 +2164,7 @@
|
||||
|
||||
"is-core-module": ["is-core-module@2.16.1", "", { "dependencies": { "hasown": "^2.0.2" } }, "sha512-UfoeMA6fIJ8wTYFEUjelnaGI67v6+N7qXJEvQuIGa99l4xsCruSYOVSQ0uPANn4dAzm8lkYPaKLrrijLq7x23w=="],
|
||||
|
||||
"is-decimal": ["is-decimal@2.0.1", "", {}, "sha512-AAB9hiomQs5DXWcRB1rqsxGUstbRroFOPPVAomNk/3XHR5JyEZChOyTWe2oayKnsSsr/kcGqF+z6yuH6HHpN0A=="],
|
||||
"is-decimal": ["is-decimal@1.0.4", "", {}, "sha512-RGdriMmQQvZ2aqaQq3awNA6dCGtKpiDFcOzrTWrDAT2MiWrKQVPmxLGHl7Y2nNu6led0kEyoX0enY0qXYsv9zw=="],
|
||||
|
||||
"is-extglob": ["is-extglob@2.1.1", "", {}, "sha512-SbKbANkN603Vi4jEZv49LeVJMn4yGwsbzZworEoyEiutsN3nJYdbO36zfhGJ6QEDpOZIFkDtnq5JRxmvl3jsoQ=="],
|
||||
|
||||
@@ -2131,7 +2172,7 @@
|
||||
|
||||
"is-glob": ["is-glob@4.0.3", "", { "dependencies": { "is-extglob": "^2.1.1" } }, "sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg=="],
|
||||
|
||||
"is-hexadecimal": ["is-hexadecimal@2.0.1", "", {}, "sha512-DgZQp241c8oO6cA1SbTEWiXeoxV42vlcJxgH+B3hi1AiqqKruZR3ZGF8In3fj4+/y/7rHvlOZLZtgJ/4ttYGZg=="],
|
||||
"is-hexadecimal": ["is-hexadecimal@1.0.4", "", {}, "sha512-gyPJuv83bHMpocVYoqof5VDiZveEoGoFL8m3BXNb2VW8Xs+rz9kqO8LOQ5DH6EsuvilT1ApazU0pyl+ytbPtlw=="],
|
||||
|
||||
"is-interactive": ["is-interactive@2.0.0", "", {}, "sha512-qP1vozQRI+BMOPcjFzrjXuQvdak2pHNUMZoeG2eRbiSqyvbEf/wQtEOTOX1guk6E3t36RkaqiSt8A/6YElNxLQ=="],
|
||||
|
||||
@@ -2281,6 +2322,8 @@
|
||||
|
||||
"lower-case": ["lower-case@2.0.2", "", { "dependencies": { "tslib": "^2.0.3" } }, "sha512-7fm3l3NAF9WfN6W3JOmf5drwpVqX78JtoGJ3A6W0a6ZnldM41w2fV5D490psKFTpMds8TJse/eHLFFsNHHjHgg=="],
|
||||
|
||||
"lowlight": ["lowlight@1.20.0", "", { "dependencies": { "fault": "^1.0.0", "highlight.js": "~10.7.0" } }, "sha512-8Ktj+prEb1RoCPkEOrPMYUN/nCggB7qAWe3a7OpMjWQkh3l2RD5wKRQ+o8Q8YuI9RG/xs95waaI/E6ym/7NsTw=="],
|
||||
|
||||
"lru-cache": ["lru-cache@11.1.0", "", {}, "sha512-QIXZUBJUx+2zHUdQujWejBkcD9+cs94tLn0+YL8UrCh+D5sCXZ4c7LaEH48pNwRY3MLDgqUFyhlCyjJPf1WP0A=="],
|
||||
|
||||
"lucide-react": ["lucide-react@0.511.0", "", { "peerDependencies": { "react": "^16.5.1 || ^17.0.0 || ^18.0.0 || ^19.0.0" } }, "sha512-VK5a2ydJ7xm8GvBeKLS9mu1pVK6ucef9780JVUjw6bAjJL/QXnd4Y0p7SPeOUMC27YhzNCZvm5d/QX0Tp3rc0w=="],
|
||||
@@ -2531,7 +2574,7 @@
|
||||
|
||||
"parse-css-color": ["parse-css-color@0.2.1", "", { "dependencies": { "color-name": "^1.1.4", "hex-rgb": "^4.1.0" } }, "sha512-bwS/GGIFV3b6KS4uwpzCFj4w297Yl3uqnSgIPsoQkx7GMLROXfMnWvxfNkL0oh8HVhZA4hvJoEoEIqonfJ3BWg=="],
|
||||
|
||||
"parse-entities": ["parse-entities@4.0.2", "", { "dependencies": { "@types/unist": "^2.0.0", "character-entities-legacy": "^3.0.0", "character-reference-invalid": "^2.0.0", "decode-named-character-reference": "^1.0.0", "is-alphanumerical": "^2.0.0", "is-decimal": "^2.0.0", "is-hexadecimal": "^2.0.0" } }, "sha512-GG2AQYWoLgL877gQIKeRPGO1xF9+eG1ujIb5soS5gPvLQ1y2o8FL90w2QWNdf9I361Mpp7726c+lj3U0qK1uGw=="],
|
||||
"parse-entities": ["parse-entities@2.0.0", "", { "dependencies": { "character-entities": "^1.0.0", "character-entities-legacy": "^1.0.0", "character-reference-invalid": "^1.0.0", "is-alphanumerical": "^1.0.0", "is-decimal": "^1.0.0", "is-hexadecimal": "^1.0.0" } }, "sha512-kkywGpCcRYhqQIchaWqZ875wzpS/bMKhz5HnN3p7wveJTkTtyAB/AlnS0f8DFSqYW1T82t6yEAkEcB+A1I3MbQ=="],
|
||||
|
||||
"parse-json": ["parse-json@5.2.0", "", { "dependencies": { "@babel/code-frame": "^7.0.0", "error-ex": "^1.3.1", "json-parse-even-better-errors": "^2.3.0", "lines-and-columns": "^1.1.6" } }, "sha512-ayCKvm/phCGxOkYRSCM82iDwct8/EonSEgCSxWxD7ve6jHggsFl4fZVQBPRNgQoKiuV/odhFrGzQXZwbifC8Rg=="],
|
||||
|
||||
@@ -2697,6 +2740,8 @@
|
||||
|
||||
"react-style-singleton": ["react-style-singleton@2.2.3", "", { "dependencies": { "get-nonce": "^1.0.0", "tslib": "^2.0.0" }, "peerDependencies": { "@types/react": "*", "react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0 || ^19.0.0-rc" }, "optionalPeers": ["@types/react"] }, "sha512-b6jSvxvVnyptAiLjbkWLE/lOnR4lfTtDAl+eUC7RZy+QQWc6wRzIV2CE6xBuMmDxc2qIihtDCZD5NPOFl7fRBQ=="],
|
||||
|
||||
"react-syntax-highlighter": ["react-syntax-highlighter@15.6.1", "", { "dependencies": { "@babel/runtime": "^7.3.1", "highlight.js": "^10.4.1", "highlightjs-vue": "^1.0.0", "lowlight": "^1.17.0", "prismjs": "^1.27.0", "refractor": "^3.6.0" }, "peerDependencies": { "react": ">= 0.14.0" } }, "sha512-OqJ2/vL7lEeV5zTJyG7kmARppUjiB9h9udl4qHQjjgEos66z00Ia0OckwYfRxCSFrW8RJIBnsBwQsHZbVPspqg=="],
|
||||
|
||||
"react-transition-group": ["react-transition-group@4.4.5", "", { "dependencies": { "@babel/runtime": "^7.5.5", "dom-helpers": "^5.0.1", "loose-envify": "^1.4.0", "prop-types": "^15.6.2" }, "peerDependencies": { "react": ">=16.6.0", "react-dom": ">=16.6.0" } }, "sha512-pZcd1MCJoiKiBR2NRxeCRg13uCXbydPnmB4EOeRrY7480qNWO8IIgQG6zlDkm6uRMsURXPuKq0GWtiM59a5Q6g=="],
|
||||
|
||||
"reactflow": ["reactflow@11.11.4", "", { "dependencies": { "@reactflow/background": "11.3.14", "@reactflow/controls": "11.2.14", "@reactflow/core": "11.11.4", "@reactflow/minimap": "11.7.14", "@reactflow/node-resizer": "2.2.14", "@reactflow/node-toolbar": "1.3.14" }, "peerDependencies": { "react": ">=17", "react-dom": ">=17" } }, "sha512-70FOtJkUWH3BAOsN+LU9lCrKoKbtOPnz2uq0CV2PLdNSwxTXOhCbsZr50GmZ+Rtw3jx8Uv7/vBFtCGixLfd4Og=="],
|
||||
@@ -2727,12 +2772,16 @@
|
||||
|
||||
"redis-parser": ["redis-parser@3.0.0", "", { "dependencies": { "redis-errors": "^1.0.0" } }, "sha512-DJnGAeenTdpMEH6uAJRK/uiyEIH9WVsUmoLwzudwGJUwZPp80PDBWPHXSAGNPwNvIXAbe7MSUB1zQFugFml66A=="],
|
||||
|
||||
"refractor": ["refractor@3.6.0", "", { "dependencies": { "hastscript": "^6.0.0", "parse-entities": "^2.0.0", "prismjs": "~1.27.0" } }, "sha512-MY9W41IOWxxk31o+YvFCNyNzdkc9M20NoZK5vq6jkv4I/uh2zkWcfudj0Q1fovjUQJrNewS9NMzeTtqPf+n5EA=="],
|
||||
|
||||
"regex": ["regex@6.0.1", "", { "dependencies": { "regex-utilities": "^2.3.0" } }, "sha512-uorlqlzAKjKQZ5P+kTJr3eeJGSVroLKoHmquUj4zHWuR+hEyNqlXsSKlYYF5F4NI6nl7tWCs0apKJ0lmfsXAPA=="],
|
||||
|
||||
"regex-recursion": ["regex-recursion@6.0.2", "", { "dependencies": { "regex-utilities": "^2.3.0" } }, "sha512-0YCaSCq2VRIebiaUviZNs0cBz1kg5kVS2UKUfNIx8YVs1cN3AV7NTctO5FOKBA+UT2BPJIWZauYHPqJODG50cg=="],
|
||||
|
||||
"regex-utilities": ["regex-utilities@2.3.0", "", {}, "sha512-8VhliFJAWRaUiVvREIiW2NXXTmHs4vMNnSzuJVhscgmGav3g9VDxLrQndI3dZZVVdp0ZO/5v0xmX516/7M9cng=="],
|
||||
|
||||
"rehype-highlight": ["rehype-highlight@7.0.2", "", { "dependencies": { "@types/hast": "^3.0.0", "hast-util-to-text": "^4.0.0", "lowlight": "^3.0.0", "unist-util-visit": "^5.0.0", "vfile": "^6.0.0" } }, "sha512-k158pK7wdC2qL3M5NcZROZ2tR/l7zOzjxXd5VGdcfIyoijjQqpHd3JKtYSBDpDZ38UI2WJWuFAtkMDxmx5kstA=="],
|
||||
|
||||
"rehype-recma": ["rehype-recma@1.0.0", "", { "dependencies": { "@types/estree": "^1.0.0", "@types/hast": "^3.0.0", "hast-util-to-estree": "^3.0.0" } }, "sha512-lqA4rGUf1JmacCNWWZx0Wv1dHqMwxzsDWYMTowuplHF3xH0N/MmrZ/G3BDZnzAkRmxDadujCjaKM2hqYdCBOGw=="],
|
||||
|
||||
"remark": ["remark@15.0.1", "", { "dependencies": { "@types/mdast": "^4.0.0", "remark-parse": "^11.0.0", "remark-stringify": "^11.0.0", "unified": "^11.0.0" } }, "sha512-Eht5w30ruCXgFmxVUSlNWQ9iiimq07URKeFS3hNc8cUWy1llX4KDWfyEDZRycMc+znsN9Ux5/tJ/BFdgdOwA3A=="],
|
||||
@@ -3045,6 +3094,8 @@
|
||||
|
||||
"unified": ["unified@11.0.5", "", { "dependencies": { "@types/unist": "^3.0.0", "bail": "^2.0.0", "devlop": "^1.0.0", "extend": "^3.0.0", "is-plain-obj": "^4.0.0", "trough": "^2.0.0", "vfile": "^6.0.0" } }, "sha512-xKvGhPWw3k84Qjh8bI3ZeJjqnyadK+GEFtazSfZv/rKeTkTjOJho6mFqh2SM96iIcZokxiOpg78GazTSg8+KHA=="],
|
||||
|
||||
"unist-util-find-after": ["unist-util-find-after@5.0.0", "", { "dependencies": { "@types/unist": "^3.0.0", "unist-util-is": "^6.0.0" } }, "sha512-amQa0Ep2m6hE2g72AugUItjbuM8X8cGQnFoHk0pGfrFeT9GZhzN5SW8nRsiGKK7Aif4CrACPENkA6P/Lw6fHGQ=="],
|
||||
|
||||
"unist-util-is": ["unist-util-is@6.0.0", "", { "dependencies": { "@types/unist": "^3.0.0" } }, "sha512-2qCTHimwdxLfz+YzdGfkqNlH0tLi9xjTnHddPmJwtIG9MGsdbutfTc4P+haPD7l7Cjxf/WZj+we5qfVPvvxfYw=="],
|
||||
|
||||
"unist-util-position": ["unist-util-position@5.0.0", "", { "dependencies": { "@types/unist": "^3.0.0" } }, "sha512-fucsC7HjXvkB5R3kTCO7kUjRdrS0BJt3M/FPxmHMBOm8JQi2BsHAHFsy27E0EolP8rp0NzXsJ+jNPyDWvOJZPA=="],
|
||||
@@ -3531,6 +3582,8 @@
|
||||
|
||||
"cosmiconfig/yaml": ["yaml@1.10.2", "", {}, "sha512-r3vXyErRCYJ7wg28yvBY5VSoAF8ZvlcW9/BwUzEtUsjvX/DKs24dIkuwjtuprwJJHsbyUbLApepYTR1BN4uHrg=="],
|
||||
|
||||
"decode-named-character-reference/character-entities": ["character-entities@2.0.2", "", {}, "sha512-shx7oQ0Awen/BRIdkjkvz54PnEEI/EjwXDSIZp86/KKdbafHh1Df/RYGBhn4hbe2+uKC9FnT5UCEdyPz3ai9hQ=="],
|
||||
|
||||
"dom-serializer/entities": ["entities@4.5.0", "", {}, "sha512-V0hjH4dGPh9Ao5p0MoRY6BVqtwCjhz6vI5LT8AJ55H+4g9/4vbHx1I54fS0XuclLhDHArPQCiMjDxjaL8fPxhw=="],
"engine.io/debug": ["debug@4.3.7", "", { "dependencies": { "ms": "^2.1.3" } }, "sha512-Er2nc/H7RrMXZBFCEim6TCmMk02Z8vLC2Rbi1KEBggpo0fS6l0S1nnapwmIi3yW/+GOJap1Krg4w0Hg80oCqgQ=="],
@@ -3567,6 +3620,14 @@
"groq-sdk/node-fetch": ["node-fetch@2.7.0", "", { "dependencies": { "whatwg-url": "^5.0.0" }, "peerDependencies": { "encoding": "^0.1.0" }, "optionalPeers": ["encoding"] }, "sha512-c4FRfUm/dbcWZ7U+1Wq0AwCyFL+3nt2bEw05wfxSz+DWpWsitgmSgYmy2dQdWyKC1694ELPqMs/YzUSNozLt8A=="],
"hastscript/@types/hast": ["@types/hast@2.3.10", "", { "dependencies": { "@types/unist": "^2" } }, "sha512-McWspRw8xx8J9HurkVBfYj0xKoE25tOFlHGdx4MJ5xORQrMGZNqJhVQWaIbm6Oyla5kYOXtDiopzKRJzEOkwJw=="],
"hastscript/comma-separated-tokens": ["comma-separated-tokens@1.0.8", "", {}, "sha512-GHuDRO12Sypu2cV70d1dkA2EUmXHgntrzbpvOB+Qy+49ypNfGgFQIC2fhhXbnyrJRynDCAARsT7Ou0M6hirpfw=="],
"hastscript/property-information": ["property-information@5.6.0", "", { "dependencies": { "xtend": "^4.0.0" } }, "sha512-YUHSPk+A30YPv+0Qf8i9Mbfe/C0hdPXk1s1jPVToV8pk8BQtpw10ct89Eo7OWkutrwqvT0eicAxlOg3dOAu8JA=="],
"hastscript/space-separated-tokens": ["space-separated-tokens@1.1.5", "", {}, "sha512-q/JSVd1Lptzhf5bkYm4ob4iWPjx0KiRe3sRFBNrVqbJkFaBm5vbbowy1mymoPNLRa52+oadOhJ+K49wsSeSjTA=="],
"hoist-non-react-statics/react-is": ["react-is@16.13.1", "", {}, "sha512-24e6ynE2H+OKt4kqsOvNd8kBpV65zoxbA4BVsEOB3ARVWQki/DHzaUoC5KuON/BiccDaCCTZBuOcfZs70kR8bQ=="],
"htmlparser2/entities": ["entities@4.5.0", "", {}, "sha512-V0hjH4dGPh9Ao5p0MoRY6BVqtwCjhz6vI5LT8AJ55H+4g9/4vbHx1I54fS0XuclLhDHArPQCiMjDxjaL8fPxhw=="],
@@ -3609,6 +3670,8 @@
"mdast-util-find-and-replace/escape-string-regexp": ["escape-string-regexp@5.0.0", "", {}, "sha512-/veY75JbMK4j1yjvuUxuVsiS/hr/4iHs9FTT6cgTexxdE0Ly/glccBAkloH/DofkjRbZU3bnoj38mOmhkZ0lHw=="],
"mdast-util-mdx-jsx/parse-entities": ["parse-entities@4.0.2", "", { "dependencies": { "@types/unist": "^2.0.0", "character-entities-legacy": "^3.0.0", "character-reference-invalid": "^2.0.0", "decode-named-character-reference": "^1.0.0", "is-alphanumerical": "^2.0.0", "is-decimal": "^2.0.0", "is-hexadecimal": "^2.0.0" } }, "sha512-GG2AQYWoLgL877gQIKeRPGO1xF9+eG1ujIb5soS5gPvLQ1y2o8FL90w2QWNdf9I361Mpp7726c+lj3U0qK1uGw=="],
"micromatch/picomatch": ["picomatch@2.3.1", "", {}, "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA=="],
"next/postcss": ["postcss@8.4.31", "", { "dependencies": { "nanoid": "^3.3.6", "picocolors": "^1.0.0", "source-map-js": "^1.0.2" } }, "sha512-PS08Iboia9mts/2ygV3eLpY5ghnUcfLV/EXTOW1E2qYxJKGGBUtNjN76FYHnMs36RmARn41bC0AZmn+rR0OVpQ=="],
@@ -3639,8 +3702,6 @@
"ora/strip-ansi": ["strip-ansi@7.1.0", "", { "dependencies": { "ansi-regex": "^6.0.1" } }, "sha512-iq6eVVI64nQQTRYq2KtEg2d2uU7LElhTJwsH4YzIHZshxlgZms/wIc4VoDQTlG/IvVIrBKG06CrZnp0qv7hkcQ=="],
"parse-entities/@types/unist": ["@types/unist@2.0.11", "", {}, "sha512-CmBKiL6NNo/OqgmMn95Fk9Whlp2mtvIv+KNpQKN2F4SjvrEesubTRWGYSg+BnWZOnlCaSTU1sMpsBOzgbYhnsA=="],
"pdf-parse/debug": ["debug@3.2.7", "", { "dependencies": { "ms": "^2.1.1" } }, "sha512-CFjzYYAi4ThfiQvizrFQevTTXHtnCqWfe7x1AhgEscTz6ZbLbfoLRLPugTQyBth6f8ZERVUSyWHFD/7Wu4t1XQ=="],
"playwright/fsevents": ["fsevents@2.3.2", "", { "os": "darwin" }, "sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA=="],
@@ -3659,6 +3720,10 @@
"react-email/commander": ["commander@13.1.0", "", {}, "sha512-/rFeCpNJQbhSZjGVwO9RFV3xPqbnERS8MmIQzCtD/zl6gpJuV/bMLuN92oG3F7d8oDEHHRrujSXNUr8fpjntKw=="],
"refractor/prismjs": ["prismjs@1.27.0", "", {}, "sha512-t13BGPUlFDR7wRB5kQDG4jjl7XeuH6jbJGt11JHPL96qwsEHNX2+68tFXqc1/k+/jALsbSWJKUOT/hcYAZ5LkA=="],
"rehype-highlight/lowlight": ["lowlight@3.3.0", "", { "dependencies": { "@types/hast": "^3.0.0", "devlop": "^1.0.0", "highlight.js": "~11.11.0" } }, "sha512-0JNhgFoPvP6U6lE/UdVsSq99tn6DhjjpAj5MxG49ewd2mOBVtwWYIT8ClyABhq198aXXODMU6Ox8DrGy/CpTZQ=="],
"resend/@react-email/render": ["@react-email/render@1.1.2", "", { "dependencies": { "html-to-text": "^9.0.5", "prettier": "^3.5.3", "react-promise-suspense": "^0.3.4" }, "peerDependencies": { "react": "^18.0 || ^19.0 || ^19.0.0-rc", "react-dom": "^18.0 || ^19.0 || ^19.0.0-rc" } }, "sha512-RnRehYN3v9gVlNMehHPHhyp2RQo7+pSkHDtXPvg3s0GbzM9SQMW4Qrf8GRNvtpLC4gsI+Wt0VatNRUFqjvevbw=="],
"restore-cursor/onetime": ["onetime@5.1.2", "", { "dependencies": { "mimic-fn": "^2.1.0" } }, "sha512-kbpaSSGJTWdAY5KPVeMOKXSrPtr8C8C7wodJbcsd51jRnmD+GZu8Y0VoU6Dm5Z4vWr0Ig/1NKuWRKf7j5aaYSg=="],
@@ -3695,6 +3760,8 @@
"string-width-cjs/emoji-regex": ["emoji-regex@8.0.0", "", {}, "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A=="],
"stringify-entities/character-entities-legacy": ["character-entities-legacy@3.0.0", "", {}, "sha512-RpPp0asT/6ufRm//AJVwpViZbGM/MkjQFxJccQRHmISF/22NBtsHqAWmL+/pmkPWoIUJdWyeVleTl1wydHATVQ=="],
"sucrase/commander": ["commander@4.1.1", "", {}, "sha512-NOKm8xhkzAjzFx8B2v5OAHT+u5pRQc2UCa2Vq9jYL/31o2wi9mxBA7LIFs3sV5VSC49z6pEhfbMULvShKj26WA=="],
"sucrase/glob": ["glob@10.4.5", "", { "dependencies": { "foreground-child": "^3.1.0", "jackspeak": "^3.1.2", "minimatch": "^9.0.4", "minipass": "^7.1.2", "package-json-from-dist": "^1.0.0", "path-scurry": "^1.11.1" }, "bin": { "glob": "dist/esm/bin.mjs" } }, "sha512-7Bv8RF0k6xjo7d4A/PxYLbUCfb6c+Vpd2/mB2yRDlew7Jb5hEXiCD9ibfO7wpk8i4sevK6DFny9h7EYbM3/sHg=="],
@@ -3939,6 +4006,8 @@
"groq-sdk/node-fetch/whatwg-url": ["whatwg-url@5.0.0", "", { "dependencies": { "tr46": "~0.0.3", "webidl-conversions": "^3.0.0" } }, "sha512-saE57nupxk6v3HY35+jzBwYa0rKSy0XR8JSxZPwgLr7ys0IBzhGviA1/TUGJLmSVqs8pb9AnvICXEuOHLprYTw=="],
"hastscript/@types/hast/@types/unist": ["@types/unist@2.0.11", "", {}, "sha512-CmBKiL6NNo/OqgmMn95Fk9Whlp2mtvIv+KNpQKN2F4SjvrEesubTRWGYSg+BnWZOnlCaSTU1sMpsBOzgbYhnsA=="],
"inquirer/ora/is-interactive": ["is-interactive@1.0.0", "", {}, "sha512-2HvIEKRoqS62guEC+qBjpvRubdX910WCMuJTZ+I9yvqKU2/12eSL549HMwtabb4oupdj2sMP50k+XJfB/8JE6w=="],
"inquirer/ora/is-unicode-supported": ["is-unicode-supported@0.1.0", "", {}, "sha512-knxG2q4UC3u8stRGyAVJCOdxFmv5DZiRcdlIaAQXAbSfJya+OhopNotLQrstBhququ4ZpuKbDc/8S6mgXgPFPw=="],
@@ -3973,6 +4042,18 @@
"log-update/wrap-ansi/string-width": ["string-width@5.1.2", "", { "dependencies": { "eastasianwidth": "^0.2.0", "emoji-regex": "^9.2.2", "strip-ansi": "^7.0.1" } }, "sha512-HnLOCR3vjcY8beoNLtcjZ5/nxn2afmME6lhrDrebokqMap+XbeW8n9TXpPDOqdGK5qcI3oT0GKTW6wC7EMiVqA=="],
"mdast-util-mdx-jsx/parse-entities/@types/unist": ["@types/unist@2.0.11", "", {}, "sha512-CmBKiL6NNo/OqgmMn95Fk9Whlp2mtvIv+KNpQKN2F4SjvrEesubTRWGYSg+BnWZOnlCaSTU1sMpsBOzgbYhnsA=="],
"mdast-util-mdx-jsx/parse-entities/character-entities-legacy": ["character-entities-legacy@3.0.0", "", {}, "sha512-RpPp0asT/6ufRm//AJVwpViZbGM/MkjQFxJccQRHmISF/22NBtsHqAWmL+/pmkPWoIUJdWyeVleTl1wydHATVQ=="],
"mdast-util-mdx-jsx/parse-entities/character-reference-invalid": ["character-reference-invalid@2.0.1", "", {}, "sha512-iBZ4F4wRbyORVsu0jPV7gXkOsGYjGHPmAyv+HiHG8gi5PtC9KI2j1+v8/tlibRvjoWX027ypmG/n0HtO5t7unw=="],
"mdast-util-mdx-jsx/parse-entities/is-alphanumerical": ["is-alphanumerical@2.0.1", "", { "dependencies": { "is-alphabetical": "^2.0.0", "is-decimal": "^2.0.0" } }, "sha512-hmbYhX/9MUMF5uh7tOXyK/n0ZvWpad5caBA17GsC6vyuCqaWliRG5K1qS9inmUhEMaOBIW7/whAnSwveW/LtZw=="],
"mdast-util-mdx-jsx/parse-entities/is-decimal": ["is-decimal@2.0.1", "", {}, "sha512-AAB9hiomQs5DXWcRB1rqsxGUstbRroFOPPVAomNk/3XHR5JyEZChOyTWe2oayKnsSsr/kcGqF+z6yuH6HHpN0A=="],
"mdast-util-mdx-jsx/parse-entities/is-hexadecimal": ["is-hexadecimal@2.0.1", "", {}, "sha512-DgZQp241c8oO6cA1SbTEWiXeoxV42vlcJxgH+B3hi1AiqqKruZR3ZGF8In3fj4+/y/7rHvlOZLZtgJ/4ttYGZg=="],
"next-runtime-env/next/@next/env": ["@next/env@14.2.30", "", {}, "sha512-KBiBKrDY6kxTQWGzKjQB7QirL3PiiOkV7KW98leHFjtVRKtft76Ra5qSA/SL75xT44dp6hOcqiiJ6iievLOYug=="],
"next-runtime-env/next/@next/swc-darwin-arm64": ["@next/swc-darwin-arm64@14.2.30", "", { "os": "darwin", "cpu": "arm64" }, "sha512-EAqfOTb3bTGh9+ewpO/jC59uACadRHM6TSA9DdxJB/6gxOpyV+zrbqeXiFTDy9uV6bmipFDkfpAskeaDcO+7/g=="],
@@ -4011,6 +4092,8 @@
"ora/strip-ansi/ansi-regex": ["ansi-regex@6.1.0", "", {}, "sha512-7HSX4QQb4CspciLpVFwyRe79O3xsIZDDLER21kERQ71oaPodF8jL725AgJMFAYbooIqolJoRLuM81SpeUkpkvA=="],
"rehype-highlight/lowlight/highlight.js": ["highlight.js@11.11.1", "", {}, "sha512-Xwwo44whKBVCYoliBQwaPvtd/2tYFkRQtXDWj1nackaV2JPXx3L0+Jvd8/qCJ2p+ML0/XVkJ2q+Mr+UVdpJK5w=="],
"restore-cursor/onetime/mimic-fn": ["mimic-fn@2.1.0", "", {}, "sha512-OqbOk5oEQeAZ8WXWydlu9HJjz9WVdEIvamMCcXmuqUYjTknH/sqsWvhQ3vgwKFRR1HpjvNBKQ37nbJgYzGqGcg=="],
"sim/tailwindcss/chokidar": ["chokidar@3.6.0", "", { "dependencies": { "anymatch": "~3.1.2", "braces": "~3.0.2", "glob-parent": "~5.1.2", "is-binary-path": "~2.1.0", "is-glob": "~4.0.1", "normalize-path": "~3.0.0", "readdirp": "~3.6.0" }, "optionalDependencies": { "fsevents": "~2.3.2" } }, "sha512-7VT13fmjotKpGipCW9JEQAusEPE+Ei8nl6/g4FBAmIm0GOOLMua9NDDo/DWp0ZAxCr3cPq5ZpBqmPAQgDda2Pw=="],
@@ -4117,6 +4200,8 @@
"log-update/wrap-ansi/string-width/emoji-regex": ["emoji-regex@9.2.2", "", {}, "sha512-L18DaJsXSUk2+42pv8mLs5jJT2hqFkFE4j21wOmgbUqsZ2hL72NsUU785g9RXgo3s0ZNgVl42TiHp3ZtOv/Vyg=="],
"mdast-util-mdx-jsx/parse-entities/is-alphanumerical/is-alphabetical": ["is-alphabetical@2.0.1", "", {}, "sha512-FWyyY60MeTNyeSRpkM2Iry0G9hpr7/9kD40mD/cGQEuilcZYS4okz8SN2Q6rLCJ8gbCt6fN+rC+6tMGS99LaxQ=="],
"openai/node-fetch/whatwg-url/tr46": ["tr46@0.0.3", "", {}, "sha512-N3WMsuqV66lT30CrXNbEjx4GEwlow3v6rr4mCcv6prnfwhS01rkgyFdjPNBYd9br7LpXV1+Emh01fHnq2Gdgrw=="],
"openai/node-fetch/whatwg-url/webidl-conversions": ["webidl-conversions@3.0.1", "", {}, "sha512-2JAn3z8AR6rjK8Sm8orRC0h/bcl/DqL7tRPdGZ4I1CjdF+EaMLmYxBHyXuKL849eucPFhvBoxMsflfOb8kxaeQ=="],
@@ -32,6 +32,7 @@
"@t3-oss/env-nextjs": "0.13.4",
"@vercel/analytics": "1.5.0",
"geist": "^1.4.2",
"react-colorful": "5.6.1",
"remark-gfm": "4.0.1",
"socket.io-client": "4.8.1"
},