Compare commits

...

2 Commits

Author SHA1 Message Date
Vikhyath Mondreti
78b5ae7b3d v0.2.7: fix + feat (#615)
* feat(logging): add additional logs for proxy routes

* fix(blocks): workflow handler not working outside gui (#609)

* fix: key to call api internally for workflow block

* feat: use jwt for internal auth to avoid a static key

* chore: formatter

* fix(sidebar): added loop & parallel subblocks to sidebar search

* merged improvement/connection into staging (#604)

* merged improvement/connection into staging

* fix: merge conflicts and improved block path calculation

* fix: removed migration

* fix: removed duplicate call

* fix: resolver and merge conflicts

* fix: knowledge base folder

* fix: settings modal

* fix: typeform block

* fix: parallel handler

* fix: stores index

* fix: tests

* fix: tag-dropdown

* improvement: start block input and tag dropdown

* fix block id resolution + missing bracket

* fix lint

* fix test

* works

* fix

* fix lint

* Revert "fix lint"

This reverts commit 433e2f9cfc.

---------

Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: Vikhyath Mondreti <vikhyathmondreti@Vikhyaths-MacBook-Air.local>

* fix(autopan): migration missing (#614)

* add autopan migration

* fix lint

* fix linter

* fix tests

---------

Co-authored-by: Vikhyath Mondreti <vikhyathmondreti@vikhyaths-air.lan>

---------

Co-authored-by: Waleed Latif <walif6@gmail.com>
Co-authored-by: Aditya Tripathi <aditya@climactic.co>
Co-authored-by: Emir Karabeg <78010029+emir-karabeg@users.noreply.github.com>
Co-authored-by: Vikhyath Mondreti <vikhyathmondreti@Vikhyaths-MacBook-Air.local>
Co-authored-by: Vikhyath Mondreti <vikhyathmondreti@vikhyaths-air.lan>
2025-07-04 13:48:17 -07:00
Vikhyath Mondreti
016cd6750c v0.2.6: fix + feat + improvement (#612)
* feat(function): added more granular error logs for function execution for easier debugging (#593)

* added more granular error logs for function execution

* added tests

* fixed syntax error reporting

* feat(models): added temp controls for gpt-4.1 family of models (#594)

* improvement(knowledge-upload): create and upload document to KB (#579)

* improvement: added knowledge upload

* improvement: added greptile comments (#579)

* improvement: changed to text to doc (#579)

* improvement: removed comment (#579)

* added input validation, tested persistence of KB selector

* update docs

---------

Co-authored-by: Adam Gough <adamgough@Mac.attlocal.net>
Co-authored-by: Waleed Latif <walif6@gmail.com>

* fix(remove workflow.state usage): no more usage of deprecated state column in any routes (#586)

* fix(remove workflow.state usage): no more usage of deprecated state col in routes

* fix lint

* fix chat route to only use deployed state

* fix lint

* better typing

* remove useless logs

* fix lint

* restore workflow handler file

* removed all other usages of deprecated 'state' column from workflows table, updated tests

---------

Co-authored-by: Vikhyath Mondreti <vikhyathmondreti@Vikhyaths-MacBook-Air.local>
Co-authored-by: Waleed Latif <walif6@gmail.com>

* fix(doc-selector-kb): enable doc selector when kb is selected (#596)

Co-authored-by: Vikhyath Mondreti <vikhyathmondreti@vikhyaths-air.lan>

* fix(unload): remove beforeunload warning since we communicate via wss (#597)

* fix(executor): fix dependency resolution, allow blocks with multiple inputs to execute (#598)

* feat(billing): added migrations for usage-based billing (#601)

* feat(billing): added migrations for usage-based billing

* lint

* lint

* feat(logging): add new schemas + types for new logging system (#599)

* feat(logging): add new schemas + types for logging

* fix lint

* update migration

* fix lint

* Remove migration 48 to avoid conflict with staging

* fixed merge conflict

* fix lint

---------

Co-authored-by: Vikhyath Mondreti <vikhyathmondreti@Vikhyaths-Air.attlocal.net>

* fix(createWorkflow): cleanup create workflow to prevent re-renders (#607)

* fix(createWorkflow): no more client-side id, duplicate schedule calls

* fix lint

* more cleanup

* fix lint

* fix spamming of create button causing issues

* fix lint

* add more colors + default workflow name changed

* Update apps/sim/stores/workflows/registry/utils.ts

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

---------

Co-authored-by: Vikhyath Mondreti <vikhyathmondreti@Vikhyaths-Air.attlocal.net>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix(telegram): added markdown text rendering (#611)

* fix: added proper markdown

* fix: reverted route.ts file

---------

Co-authored-by: Adam Gough <adamgough@Adams-MacBook-Pro.local>

* fix(kb-upload): fix and consolidate KB file uploads logic (#610)

* fix(kb-upload): fix and consolidate logic

* fix lint

* consolidated presigned routes, fixed temp id kb store issue, added nav to next/prev chunk on edit chunk modal

* fix ci test

---------

Co-authored-by: Vikhyath Mondreti <vikhyathmondreti@vikhyaths-air.lan>
Co-authored-by: Waleed Latif <walif6@gmail.com>

---------

Co-authored-by: Waleed Latif <walif6@gmail.com>
Co-authored-by: Adam Gough <77861281+aadamgough@users.noreply.github.com>
Co-authored-by: Adam Gough <adamgough@Mac.attlocal.net>
Co-authored-by: Vikhyath Mondreti <vikhyathmondreti@Vikhyaths-MacBook-Air.local>
Co-authored-by: Vikhyath Mondreti <vikhyathmondreti@vikhyaths-air.lan>
Co-authored-by: Vikhyath Mondreti <vikhyathmondreti@Vikhyaths-Air.attlocal.net>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: Adam Gough <adamgough@Adams-MacBook-Pro.local>
2025-07-03 12:53:14 -07:00
174 changed files with 9336 additions and 3748 deletions

View File

@@ -66,17 +66,17 @@ Define the data to pass to the child workflow:
- **Single Variable Input**: Select a variable or block output to pass to the child workflow
- **Variable References**: Use `<variable.name>` to reference workflow variables
- **Block References**: Use `<blockName.response.field>` to reference outputs from previous blocks
- **Automatic Mapping**: The selected data is automatically available as `start.response.input` in the child workflow
- **Block References**: Use `<blockName.field>` to reference outputs from previous blocks
- **Automatic Mapping**: The selected data is automatically available as `start.input` in the child workflow
- **Optional**: The input field is optional - child workflows can run without input data
- **Type Preservation**: Variable types (strings, numbers, objects, etc.) are preserved when passed to the child workflow
### Examples of Input References
- `<variable.customerData>` - Pass a workflow variable
- `<dataProcessor.response.result>` - Pass the result from a previous block
- `<start.response.input>` - Pass the original workflow input
- `<apiCall.response.data.user>` - Pass a specific field from an API response
- `<dataProcessor.result>` - Pass the result from a previous block
- `<start.input>` - Pass the original workflow input
- `<apiCall.data.user>` - Pass a specific field from an API response
### Execution Context
@@ -109,7 +109,7 @@ To prevent infinite recursion and ensure system stability, the Workflow block in
<strong>Workflow ID</strong>: The identifier of the workflow to execute
</li>
<li>
<strong>Input Variable</strong>: Variable or block reference to pass to the child workflow (e.g., `<variable.name>` or `<block.response.field>`)
<strong>Input Variable</strong>: Variable or block reference to pass to the child workflow (e.g., `<variable.name>` or `<block.field>`)
</li>
</ul>
</Tab>
@@ -150,23 +150,23 @@ blocks:
- type: workflow
name: "Setup Customer Account"
workflowId: "account-setup-workflow"
input: "<Validate Customer Data.response.result>"
input: "<Validate Customer Data.result>"
- type: workflow
name: "Send Welcome Email"
workflowId: "welcome-email-workflow"
input: "<Setup Customer Account.response.result.accountDetails>"
input: "<Setup Customer Account.result.accountDetails>"
```
### Child Workflow: Customer Validation
```yaml
# Reusable customer validation workflow
# Access the input data using: start.response.input
# Access the input data using: start.input
blocks:
- type: function
name: "Validate Email"
code: |
const customerData = start.response.input;
const customerData = start.input;
const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
return emailRegex.test(customerData.email);
@@ -174,7 +174,7 @@ blocks:
name: "Check Credit Score"
url: "https://api.creditcheck.com/score"
method: "POST"
body: "<start.response.input>"
body: "<start.input>"
```
### Variable Reference Examples
@@ -184,13 +184,13 @@ blocks:
input: "<variable.customerInfo>"
# Using block outputs
input: "<dataProcessor.response.cleanedData>"
input: "<dataProcessor.cleanedData>"
# Using nested object properties
input: "<apiCall.response.data.user.profile>"
input: "<apiCall.data.user.profile>"
# Using array elements (if supported by the resolver)
input: "<listProcessor.response.items[0]>"
input: "<listProcessor.items[0]>"
```
## Access Control and Permissions

View File

@@ -93,7 +93,7 @@ export const sampleWorkflowState = {
webhookPath: { id: 'webhookPath', type: 'short-input', value: '' },
},
outputs: {
response: { type: { input: 'any' } },
input: 'any',
},
enabled: true,
horizontalHandles: true,
@@ -111,7 +111,7 @@ export const sampleWorkflowState = {
type: 'long-input',
value: 'You are a helpful assistant',
},
context: { id: 'context', type: 'short-input', value: '<start.response.input>' },
context: { id: 'context', type: 'short-input', value: '<start.input>' },
model: { id: 'model', type: 'dropdown', value: 'gpt-4o' },
apiKey: { id: 'apiKey', type: 'short-input', value: '{{OPENAI_API_KEY}}' },
},
@@ -138,6 +138,7 @@ export const sampleWorkflowState = {
},
],
loops: {},
parallels: {},
lastSaved: Date.now(),
isDeployed: false,
}
@@ -764,6 +765,20 @@ export function createStorageProviderMocks(options: StorageProviderMockOptions =
bucket: 'test-s3-bucket',
region: 'us-east-1',
},
S3_KB_CONFIG: {
bucket: 'test-s3-kb-bucket',
region: 'us-east-1',
},
BLOB_CONFIG: {
accountName: 'testaccount',
accountKey: 'testkey',
containerName: 'test-container',
},
BLOB_KB_CONFIG: {
accountName: 'testaccount',
accountKey: 'testkey',
containerName: 'test-kb-container',
},
}))
vi.doMock('@aws-sdk/client-s3', () => ({
@@ -806,6 +821,11 @@ export function createStorageProviderMocks(options: StorageProviderMockOptions =
accountKey: 'testkey',
containerName: 'test-container',
},
BLOB_KB_CONFIG: {
accountName: 'testaccount',
accountKey: 'testkey',
containerName: 'test-kb-container',
},
}))
vi.doMock('@azure/storage-blob', () => ({

View File

@@ -241,7 +241,7 @@ describe('Chat Subdomain API Route', () => {
})
describe('POST endpoint', () => {
it('should handle authentication requests without messages', async () => {
it('should handle authentication requests without input', async () => {
const req = createMockRequest('POST', { password: 'test-password' })
const params = Promise.resolve({ subdomain: 'password-protected-chat' })
@@ -257,7 +257,7 @@ describe('Chat Subdomain API Route', () => {
expect(mockSetChatAuthCookie).toHaveBeenCalled()
})
it('should return 400 for requests without message', async () => {
it('should return 400 for requests without input', async () => {
const req = createMockRequest('POST', {})
const params = Promise.resolve({ subdomain: 'test-chat' })
@@ -269,7 +269,7 @@ describe('Chat Subdomain API Route', () => {
const data = await response.json()
expect(data).toHaveProperty('error')
expect(data).toHaveProperty('message', 'No message provided')
expect(data).toHaveProperty('message', 'No input provided')
})
it('should return 401 for unauthorized access', async () => {
@@ -279,7 +279,7 @@ describe('Chat Subdomain API Route', () => {
error: 'Authentication required',
}))
const req = createMockRequest('POST', { message: 'Hello' })
const req = createMockRequest('POST', { input: 'Hello' })
const params = Promise.resolve({ subdomain: 'protected-chat' })
const { POST } = await import('./route')
@@ -342,7 +342,7 @@ describe('Chat Subdomain API Route', () => {
}
})
const req = createMockRequest('POST', { message: 'Hello' })
const req = createMockRequest('POST', { input: 'Hello' })
const params = Promise.resolve({ subdomain: 'test-chat' })
const { POST } = await import('./route')
@@ -357,7 +357,7 @@ describe('Chat Subdomain API Route', () => {
})
it('should return streaming response for valid chat messages', async () => {
const req = createMockRequest('POST', { message: 'Hello world', conversationId: 'conv-123' })
const req = createMockRequest('POST', { input: 'Hello world', conversationId: 'conv-123' })
const params = Promise.resolve({ subdomain: 'test-chat' })
const { POST } = await import('./route')
@@ -374,7 +374,7 @@ describe('Chat Subdomain API Route', () => {
})
it('should handle streaming response body correctly', async () => {
const req = createMockRequest('POST', { message: 'Hello world' })
const req = createMockRequest('POST', { input: 'Hello world' })
const params = Promise.resolve({ subdomain: 'test-chat' })
const { POST } = await import('./route')
@@ -404,7 +404,7 @@ describe('Chat Subdomain API Route', () => {
throw new Error('Execution failed')
})
const req = createMockRequest('POST', { message: 'Trigger error' })
const req = createMockRequest('POST', { input: 'Trigger error' })
const params = Promise.resolve({ subdomain: 'test-chat' })
const { POST } = await import('./route')
@@ -444,7 +444,7 @@ describe('Chat Subdomain API Route', () => {
it('should pass conversationId to executeWorkflowForChat when provided', async () => {
const req = createMockRequest('POST', {
message: 'Hello world',
input: 'Hello world',
conversationId: 'test-conversation-123',
})
const params = Promise.resolve({ subdomain: 'test-chat' })
@@ -461,7 +461,7 @@ describe('Chat Subdomain API Route', () => {
})
it('should handle missing conversationId gracefully', async () => {
const req = createMockRequest('POST', { message: 'Hello world' })
const req = createMockRequest('POST', { input: 'Hello world' })
const params = Promise.resolve({ subdomain: 'test-chat' })
const { POST } = await import('./route')

View File

@@ -72,11 +72,11 @@ export async function POST(
}
// Use the already parsed body
const { message, password, email, conversationId } = parsedBody
const { input, password, email, conversationId } = parsedBody
// If this is an authentication request (has password or email but no message),
// If this is an authentication request (has password or email but no input),
// set auth cookie and return success
if ((password || email) && !message) {
if ((password || email) && !input) {
const response = addCorsHeaders(createSuccessResponse({ authenticated: true }), request)
// Set authentication cookie
@@ -86,8 +86,8 @@ export async function POST(
}
// For chat messages, create regular response
if (!message) {
return addCorsHeaders(createErrorResponse('No message provided', 400), request)
if (!input) {
return addCorsHeaders(createErrorResponse('No input provided', 400), request)
}
// Get the workflow for this chat
@@ -105,8 +105,8 @@ export async function POST(
}
try {
// Execute workflow with structured input (message + conversationId for context)
const result = await executeWorkflowForChat(deployment.id, message, conversationId)
// Execute workflow with structured input (input + conversationId for context)
const result = await executeWorkflowForChat(deployment.id, input, conversationId)
// The result is always a ReadableStream that we can pipe to the client
const streamResponse = new NextResponse(result, {
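
With this rename, clients must send `input` instead of `message`. A minimal sketch of the updated request, assuming a subdomain-scoped chat endpoint (the URL and IDs below are illustrative, not taken from the diff):

```ts
// Illustrative client call; the endpoint path and values are assumptions.
const res = await fetch('https://<subdomain>.example.com/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    input: 'Hello world', // previously sent as `message`
    conversationId: 'conv-123', // optional; keeps chat context across turns
  }),
})
// A request with no `input` now returns 400 with 'No input provided' (see diff above).
```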

View File

@@ -128,10 +128,10 @@ export async function validateChatAuth(
return { authorized: false, error: 'Password is required' }
}
const { password, message } = parsedBody
const { password, input } = parsedBody
// If this is a chat message, not an auth attempt
if (message && !password) {
if (input && !password) {
return { authorized: false, error: 'auth_required_password' }
}
@@ -170,10 +170,10 @@ export async function validateChatAuth(
return { authorized: false, error: 'Email is required' }
}
const { email, message } = parsedBody
const { email, input } = parsedBody
// If this is a chat message, not an auth attempt
if (message && !email) {
if (input && !email) {
return { authorized: false, error: 'auth_required_email' }
}
@@ -211,17 +211,17 @@ export async function validateChatAuth(
/**
* Executes a workflow for a chat request and returns the formatted output.
*
* When workflows reference <start.response.input>, they receive a structured JSON
* containing both the message and conversationId for maintaining chat context.
* When workflows reference <start.input>, they receive the input directly.
* The conversationId is available at <start.conversationId> for maintaining chat context.
*
* @param chatId - Chat deployment identifier
* @param message - User's chat message
* @param input - User's chat input
* @param conversationId - Optional ID for maintaining conversation context
* @returns Workflow execution result formatted for the chat interface
*/
export async function executeWorkflowForChat(
chatId: string,
message: string,
input: string,
conversationId?: string
): Promise<any> {
const requestId = crypto.randomUUID().slice(0, 8)
@@ -445,7 +445,7 @@ export async function executeWorkflowForChat(
workflow: serializedWorkflow,
currentBlockStates: processedBlockStates,
envVarValues: decryptedEnvVars,
workflowInput: { input: message, conversationId },
workflowInput: { input: input, conversationId },
workflowVariables,
contextExtensions: {
stream: true,
@@ -463,8 +463,8 @@ export async function executeWorkflowForChat(
if (result && 'success' in result) {
result.logs?.forEach((log: BlockLog) => {
if (streamedContent.has(log.blockId)) {
if (log.output?.response) {
log.output.response.content = streamedContent.get(log.blockId)
if (log.output) {
log.output.content = streamedContent.get(log.blockId)
}
}
})
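
Per the updated docstring, the executor now receives the chat input unwrapped. A sketch of the resulting references inside a workflow (values are illustrative):

```ts
// Shape handed to the executor, taken from the diff above:
const workflowInput = { input: 'Hello world', conversationId: 'conv-123' }

// Which a workflow then reads as:
//   <start.input>          -> 'Hello world'
//   <start.conversationId> -> 'conv-123'
// Previously both were nested under <start.response.input> as one structured JSON object.
```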

View File

@@ -239,7 +239,7 @@ Example Scenario:
User Prompt: "Fetch user data from an API. Use the User ID passed in as 'userId' and an API Key stored as the 'SERVICE_API_KEY' environment variable."
Generated Code:
const userId = <block.response.content>; // Correct: Accessing input parameter without quotes
const userId = <block.content>; // Correct: Accessing input parameter without quotes
const apiKey = {{SERVICE_API_KEY}}; // Correct: Accessing environment variable without quotes
const url = \`https://api.example.com/users/\${userId}\`;
@@ -273,7 +273,7 @@ Do not include import/require statements unless absolutely necessary and they ar
Do not include markdown formatting or explanations.
Output only the raw TypeScript code. Use modern TypeScript features where appropriate. Do not use semicolons.
Example:
const userId = <block.response.content> as string
const userId = <block.content> as string
const apiKey = {{SERVICE_API_KEY}}
const response = await fetch(\`https://api.example.com/users/\${userId}\`, { headers: { Authorization: \`Bearer \${apiKey}\` } })
if (!response.ok) {

View File

@@ -39,8 +39,9 @@ describe('/api/files/presigned', () => {
const response = await POST(request)
const data = await response.json()
expect(response.status).toBe(400)
expect(response.status).toBe(500) // Changed from 400 to 500 (StorageConfigError)
expect(data.error).toBe('Direct uploads are only available when cloud storage is enabled')
expect(data.code).toBe('STORAGE_CONFIG_ERROR')
expect(data.directUploadSupported).toBe(false)
})
@@ -64,7 +65,8 @@ describe('/api/files/presigned', () => {
const data = await response.json()
expect(response.status).toBe(400)
expect(data.error).toBe('Missing fileName or contentType')
expect(data.error).toBe('fileName is required and cannot be empty')
expect(data.code).toBe('VALIDATION_ERROR')
})
it('should return error when contentType is missing', async () => {
@@ -87,7 +89,59 @@ describe('/api/files/presigned', () => {
const data = await response.json()
expect(response.status).toBe(400)
expect(data.error).toBe('Missing fileName or contentType')
expect(data.error).toBe('contentType is required and cannot be empty')
expect(data.code).toBe('VALIDATION_ERROR')
})
it('should return error when fileSize is invalid', async () => {
setupFileApiMocks({
cloudEnabled: true,
storageProvider: 's3',
})
const { POST } = await import('./route')
const request = new NextRequest('http://localhost:3000/api/files/presigned', {
method: 'POST',
body: JSON.stringify({
fileName: 'test.txt',
contentType: 'text/plain',
fileSize: 0,
}),
})
const response = await POST(request)
const data = await response.json()
expect(response.status).toBe(400)
expect(data.error).toBe('fileSize must be a positive number')
expect(data.code).toBe('VALIDATION_ERROR')
})
it('should return error when file size exceeds limit', async () => {
setupFileApiMocks({
cloudEnabled: true,
storageProvider: 's3',
})
const { POST } = await import('./route')
const largeFileSize = 150 * 1024 * 1024 // 150MB (exceeds 100MB limit)
const request = new NextRequest('http://localhost:3000/api/files/presigned', {
method: 'POST',
body: JSON.stringify({
fileName: 'large-file.txt',
contentType: 'text/plain',
fileSize: largeFileSize,
}),
})
const response = await POST(request)
const data = await response.json()
expect(response.status).toBe(400)
expect(data.error).toContain('exceeds maximum allowed size')
expect(data.code).toBe('VALIDATION_ERROR')
})
it('should generate S3 presigned URL successfully', async () => {
@@ -122,6 +176,34 @@ describe('/api/files/presigned', () => {
expect(data.directUploadSupported).toBe(true)
})
it('should generate knowledge-base S3 presigned URL with kb prefix', async () => {
setupFileApiMocks({
cloudEnabled: true,
storageProvider: 's3',
})
const { POST } = await import('./route')
const request = new NextRequest(
'http://localhost:3000/api/files/presigned?type=knowledge-base',
{
method: 'POST',
body: JSON.stringify({
fileName: 'knowledge-doc.pdf',
contentType: 'application/pdf',
fileSize: 2048,
}),
}
)
const response = await POST(request)
const data = await response.json()
expect(response.status).toBe(200)
expect(data.fileInfo.key).toMatch(/^kb\/.*knowledge-doc\.pdf$/)
expect(data.directUploadSupported).toBe(true)
})
it('should generate Azure Blob presigned URL successfully', async () => {
setupFileApiMocks({
cloudEnabled: true,
@@ -182,8 +264,9 @@ describe('/api/files/presigned', () => {
const response = await POST(request)
const data = await response.json()
expect(response.status).toBe(400)
expect(data.error).toBe('Unknown storage provider')
expect(response.status).toBe(500) // Changed from 400 to 500 (StorageConfigError)
expect(data.error).toBe('Unknown storage provider: unknown') // Updated error message
expect(data.code).toBe('STORAGE_CONFIG_ERROR')
expect(data.directUploadSupported).toBe(false)
})
@@ -225,8 +308,10 @@ describe('/api/files/presigned', () => {
const data = await response.json()
expect(response.status).toBe(500)
expect(data.error).toBe('Error')
expect(data.message).toBe('S3 service unavailable')
expect(data.error).toBe(
'Failed to generate S3 presigned URL - check AWS credentials and permissions'
) // Updated error message
expect(data.code).toBe('STORAGE_CONFIG_ERROR')
})
it('should handle Azure Blob errors gracefully', async () => {
@@ -269,8 +354,8 @@ describe('/api/files/presigned', () => {
const data = await response.json()
expect(response.status).toBe(500)
expect(data.error).toBe('Error')
expect(data.message).toBe('Azure service unavailable')
expect(data.error).toBe('Failed to generate Azure Blob presigned URL') // Updated error message
expect(data.code).toBe('STORAGE_CONFIG_ERROR')
})
it('should handle malformed JSON gracefully', async () => {
@@ -289,9 +374,9 @@ describe('/api/files/presigned', () => {
const response = await POST(request)
const data = await response.json()
expect(response.status).toBe(500)
expect(data.error).toBe('SyntaxError')
expect(data.message).toContain('Unexpected token')
expect(response.status).toBe(400) // Changed from 500 to 400 (ValidationError)
expect(data.error).toBe('Invalid JSON in request body') // Updated error message
expect(data.code).toBe('VALIDATION_ERROR')
})
})

View File

@@ -6,7 +6,7 @@ import { createLogger } from '@/lib/logs/console-logger'
import { getStorageProvider, isUsingCloudStorage } from '@/lib/uploads'
import { getBlobServiceClient } from '@/lib/uploads/blob/blob-client'
import { getS3Client, sanitizeFilenameForMetadata } from '@/lib/uploads/s3/s3-client'
import { BLOB_CONFIG, S3_CONFIG } from '@/lib/uploads/setup'
import { BLOB_CONFIG, BLOB_KB_CONFIG, S3_CONFIG, S3_KB_CONFIG } from '@/lib/uploads/setup'
import { createErrorResponse, createOptionsResponse } from '../utils'
const logger = createLogger('PresignedUploadAPI')
@@ -17,124 +17,148 @@ interface PresignedUrlRequest {
fileSize: number
}
type UploadType = 'general' | 'knowledge-base'
class PresignedUrlError extends Error {
constructor(
message: string,
public code: string,
public statusCode = 400
) {
super(message)
this.name = 'PresignedUrlError'
}
}
class StorageConfigError extends PresignedUrlError {
constructor(message: string) {
super(message, 'STORAGE_CONFIG_ERROR', 500)
}
}
class ValidationError extends PresignedUrlError {
constructor(message: string) {
super(message, 'VALIDATION_ERROR', 400)
}
}
export async function POST(request: NextRequest) {
try {
// Parse the request body
const data: PresignedUrlRequest = await request.json()
const { fileName, contentType, fileSize } = data
if (!fileName || !contentType) {
return NextResponse.json({ error: 'Missing fileName or contentType' }, { status: 400 })
let data: PresignedUrlRequest
try {
data = await request.json()
} catch {
throw new ValidationError('Invalid JSON in request body')
}
// Only proceed if cloud storage is enabled
const { fileName, contentType, fileSize } = data
if (!fileName?.trim()) {
throw new ValidationError('fileName is required and cannot be empty')
}
if (!contentType?.trim()) {
throw new ValidationError('contentType is required and cannot be empty')
}
if (!fileSize || fileSize <= 0) {
throw new ValidationError('fileSize must be a positive number')
}
const MAX_FILE_SIZE = 100 * 1024 * 1024
if (fileSize > MAX_FILE_SIZE) {
throw new ValidationError(
`File size (${fileSize} bytes) exceeds maximum allowed size (${MAX_FILE_SIZE} bytes)`
)
}
const uploadTypeParam = request.nextUrl.searchParams.get('type')
const uploadType: UploadType =
uploadTypeParam === 'knowledge-base' ? 'knowledge-base' : 'general'
if (!isUsingCloudStorage()) {
return NextResponse.json(
{
error: 'Direct uploads are only available when cloud storage is enabled',
directUploadSupported: false,
},
{ status: 400 }
throw new StorageConfigError(
'Direct uploads are only available when cloud storage is enabled'
)
}
const storageProvider = getStorageProvider()
logger.info(`Generating ${uploadType} presigned URL for ${fileName} using ${storageProvider}`)
switch (storageProvider) {
case 's3':
return await handleS3PresignedUrl(fileName, contentType, fileSize)
return await handleS3PresignedUrl(fileName, contentType, fileSize, uploadType)
case 'blob':
return await handleBlobPresignedUrl(fileName, contentType, fileSize)
return await handleBlobPresignedUrl(fileName, contentType, fileSize, uploadType)
default:
return NextResponse.json(
{
error: 'Unknown storage provider',
directUploadSupported: false,
},
{ status: 400 }
)
throw new StorageConfigError(`Unknown storage provider: ${storageProvider}`)
}
} catch (error) {
logger.error('Error generating presigned URL:', error)
if (error instanceof PresignedUrlError) {
return NextResponse.json(
{
error: error.message,
code: error.code,
directUploadSupported: false,
},
{ status: error.statusCode }
)
}
return createErrorResponse(
error instanceof Error ? error : new Error('Failed to generate presigned URL')
)
}
}
async function handleS3PresignedUrl(fileName: string, contentType: string, fileSize: number) {
// Create a unique key for the file
const safeFileName = fileName.replace(/\s+/g, '-')
const uniqueKey = `${Date.now()}-${uuidv4()}-${safeFileName}`
// Sanitize the original filename for S3 metadata to prevent header errors
const sanitizedOriginalName = sanitizeFilenameForMetadata(fileName)
// Create the S3 command
const command = new PutObjectCommand({
Bucket: S3_CONFIG.bucket,
Key: uniqueKey,
ContentType: contentType,
Metadata: {
originalName: sanitizedOriginalName,
uploadedAt: new Date().toISOString(),
},
})
// Generate the presigned URL
const presignedUrl = await getSignedUrl(getS3Client(), command, { expiresIn: 3600 })
// Create a path for API to serve the file
const servePath = `/api/files/serve/s3/${encodeURIComponent(uniqueKey)}`
logger.info(`Generated presigned URL for ${fileName} (${uniqueKey})`)
return NextResponse.json({
presignedUrl,
fileInfo: {
path: servePath,
key: uniqueKey,
name: fileName,
size: fileSize,
type: contentType,
},
directUploadSupported: true,
})
}
async function handleBlobPresignedUrl(fileName: string, contentType: string, fileSize: number) {
// Create a unique key for the file
const safeFileName = fileName.replace(/\s+/g, '-')
const uniqueKey = `${Date.now()}-${uuidv4()}-${safeFileName}`
async function handleS3PresignedUrl(
fileName: string,
contentType: string,
fileSize: number,
uploadType: UploadType
) {
try {
const blobServiceClient = getBlobServiceClient()
const containerClient = blobServiceClient.getContainerClient(BLOB_CONFIG.containerName)
const blockBlobClient = containerClient.getBlockBlobClient(uniqueKey)
const config = uploadType === 'knowledge-base' ? S3_KB_CONFIG : S3_CONFIG
// Generate SAS token for upload (write permission)
const { BlobSASPermissions, generateBlobSASQueryParameters, StorageSharedKeyCredential } =
await import('@azure/storage-blob')
const sasOptions = {
containerName: BLOB_CONFIG.containerName,
blobName: uniqueKey,
permissions: BlobSASPermissions.parse('w'), // Write permission for upload
startsOn: new Date(),
expiresOn: new Date(Date.now() + 3600 * 1000), // 1 hour expiration
if (!config.bucket || !config.region) {
throw new StorageConfigError(`S3 configuration missing for ${uploadType} uploads`)
}
const sasToken = generateBlobSASQueryParameters(
sasOptions,
new StorageSharedKeyCredential(BLOB_CONFIG.accountName, BLOB_CONFIG.accountKey || '')
).toString()
const safeFileName = fileName.replace(/\s+/g, '-').replace(/[^a-zA-Z0-9.-]/g, '_')
const prefix = uploadType === 'knowledge-base' ? 'kb/' : ''
const uniqueKey = `${prefix}${Date.now()}-${uuidv4()}-${safeFileName}`
const presignedUrl = `${blockBlobClient.url}?${sasToken}`
const sanitizedOriginalName = sanitizeFilenameForMetadata(fileName)
// Create a path for API to serve the file
const servePath = `/api/files/serve/blob/${encodeURIComponent(uniqueKey)}`
const metadata: Record<string, string> = {
originalName: sanitizedOriginalName,
uploadedAt: new Date().toISOString(),
}
logger.info(`Generated presigned URL for ${fileName} (${uniqueKey})`)
if (uploadType === 'knowledge-base') {
metadata.purpose = 'knowledge-base'
}
const command = new PutObjectCommand({
Bucket: config.bucket,
Key: uniqueKey,
ContentType: contentType,
Metadata: metadata,
})
let presignedUrl: string
try {
presignedUrl = await getSignedUrl(getS3Client(), command, { expiresIn: 3600 })
} catch (s3Error) {
logger.error('Failed to generate S3 presigned URL:', s3Error)
throw new StorageConfigError(
'Failed to generate S3 presigned URL - check AWS credentials and permissions'
)
}
const servePath = `/api/files/serve/s3/${encodeURIComponent(uniqueKey)}`
logger.info(`Generated ${uploadType} S3 presigned URL for ${fileName} (${uniqueKey})`)
return NextResponse.json({
presignedUrl,
@@ -146,22 +170,103 @@ async function handleBlobPresignedUrl(fileName: string, contentType: string, fil
type: contentType,
},
directUploadSupported: true,
uploadHeaders: {
'x-ms-blob-type': 'BlockBlob',
'x-ms-blob-content-type': contentType,
'x-ms-meta-originalname': encodeURIComponent(fileName),
'x-ms-meta-uploadedat': new Date().toISOString(),
},
})
} catch (error) {
logger.error('Error generating Blob presigned URL:', error)
return createErrorResponse(
error instanceof Error ? error : new Error('Failed to generate Blob presigned URL')
)
if (error instanceof PresignedUrlError) {
throw error
}
logger.error('Error in S3 presigned URL generation:', error)
throw new StorageConfigError('Failed to generate S3 presigned URL')
}
}
async function handleBlobPresignedUrl(
fileName: string,
contentType: string,
fileSize: number,
uploadType: UploadType
) {
try {
const config = uploadType === 'knowledge-base' ? BLOB_KB_CONFIG : BLOB_CONFIG
if (
!config.accountName ||
!config.containerName ||
(!config.accountKey && !config.connectionString)
) {
throw new StorageConfigError(`Azure Blob configuration missing for ${uploadType} uploads`)
}
const safeFileName = fileName.replace(/\s+/g, '-').replace(/[^a-zA-Z0-9.-]/g, '_')
const prefix = uploadType === 'knowledge-base' ? 'kb/' : ''
const uniqueKey = `${prefix}${Date.now()}-${uuidv4()}-${safeFileName}`
const blobServiceClient = getBlobServiceClient()
const containerClient = blobServiceClient.getContainerClient(config.containerName)
const blockBlobClient = containerClient.getBlockBlobClient(uniqueKey)
const { BlobSASPermissions, generateBlobSASQueryParameters, StorageSharedKeyCredential } =
await import('@azure/storage-blob')
const sasOptions = {
containerName: config.containerName,
blobName: uniqueKey,
permissions: BlobSASPermissions.parse('w'), // Write permission for upload
startsOn: new Date(),
expiresOn: new Date(Date.now() + 3600 * 1000), // 1 hour expiration
}
let sasToken: string
try {
sasToken = generateBlobSASQueryParameters(
sasOptions,
new StorageSharedKeyCredential(config.accountName, config.accountKey || '')
).toString()
} catch (blobError) {
logger.error('Failed to generate Azure Blob SAS token:', blobError)
throw new StorageConfigError(
'Failed to generate Azure Blob SAS token - check Azure credentials and permissions'
)
}
const presignedUrl = `${blockBlobClient.url}?${sasToken}`
const servePath = `/api/files/serve/blob/${encodeURIComponent(uniqueKey)}`
logger.info(`Generated ${uploadType} Azure Blob presigned URL for ${fileName} (${uniqueKey})`)
const uploadHeaders: Record<string, string> = {
'x-ms-blob-type': 'BlockBlob',
'x-ms-blob-content-type': contentType,
'x-ms-meta-originalname': encodeURIComponent(fileName),
'x-ms-meta-uploadedat': new Date().toISOString(),
}
if (uploadType === 'knowledge-base') {
uploadHeaders['x-ms-meta-purpose'] = 'knowledge-base'
}
return NextResponse.json({
presignedUrl,
fileInfo: {
path: servePath,
key: uniqueKey,
name: fileName,
size: fileSize,
type: contentType,
},
directUploadSupported: true,
uploadHeaders,
})
} catch (error) {
if (error instanceof PresignedUrlError) {
throw error
}
logger.error('Error in Azure Blob presigned URL generation:', error)
throw new StorageConfigError('Failed to generate Azure Blob presigned URL')
}
}
// Handle preflight requests
export async function OPTIONS() {
return createOptionsResponse()
}
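
Taken together, the reworked route supports a two-step direct upload. A minimal client sketch, assuming cloud storage is enabled (the endpoint, validation limits, and response fields come from the diff; the rest is illustrative):

```ts
// A sample file for the sketch (assumed; any File/Blob works).
const file = new File(['...'], 'knowledge-doc.pdf', { type: 'application/pdf' })

// 1. Request a presigned URL; `?type=knowledge-base` selects the KB bucket/container
//    and prefixes the object key with 'kb/'.
const presignRes = await fetch('/api/files/presigned?type=knowledge-base', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    fileName: file.name,
    contentType: file.type,
    fileSize: file.size, // must be positive and at most 100MB, else VALIDATION_ERROR
  }),
})
const { presignedUrl, fileInfo, uploadHeaders } = await presignRes.json()

// 2. Upload straight to storage. `uploadHeaders` is only returned for Azure Blob
//    (x-ms-blob-type etc.); for S3 the Content-Type header alone suffices.
await fetch(presignedUrl, {
  method: 'PUT',
  headers: { 'Content-Type': file.type, ...(uploadHeaders ?? {}) },
  body: file,
})

// fileInfo.key now matches /^kb\/.../ for knowledge-base uploads, and
// fileInfo.path points at /api/files/serve/... for later retrieval.
```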

View File

@@ -1,7 +1,8 @@
import { readFile } from 'fs/promises'
import type { NextRequest, NextResponse } from 'next/server'
import { createLogger } from '@/lib/logs/console-logger'
import { downloadFile, isUsingCloudStorage } from '@/lib/uploads'
import { downloadFile, getStorageProvider, isUsingCloudStorage } from '@/lib/uploads'
import { BLOB_KB_CONFIG, S3_KB_CONFIG } from '@/lib/uploads/setup'
import '@/lib/uploads/setup.server'
import {
@@ -16,6 +17,19 @@ export const dynamic = 'force-dynamic'
const logger = createLogger('FilesServeAPI')
async function streamToBuffer(readableStream: NodeJS.ReadableStream): Promise<Buffer> {
return new Promise((resolve, reject) => {
const chunks: Buffer[] = []
readableStream.on('data', (data) => {
chunks.push(data instanceof Buffer ? data : Buffer.from(data))
})
readableStream.on('end', () => {
resolve(Buffer.concat(chunks))
})
readableStream.on('error', reject)
})
}
/**
* Main API route handler for serving files
*/
@@ -85,12 +99,65 @@ async function handleLocalFile(filename: string): Promise<NextResponse> {
}
}
async function downloadKBFile(cloudKey: string): Promise<Buffer> {
const storageProvider = getStorageProvider()
if (storageProvider === 'blob') {
logger.info(`Downloading KB file from Azure Blob Storage: ${cloudKey}`)
// Use KB-specific blob configuration
const { getBlobServiceClient } = await import('@/lib/uploads/blob/blob-client')
const blobServiceClient = getBlobServiceClient()
const containerClient = blobServiceClient.getContainerClient(BLOB_KB_CONFIG.containerName)
const blockBlobClient = containerClient.getBlockBlobClient(cloudKey)
const downloadBlockBlobResponse = await blockBlobClient.download()
if (!downloadBlockBlobResponse.readableStreamBody) {
throw new Error('Failed to get readable stream from blob download')
}
// Convert stream to buffer
return await streamToBuffer(downloadBlockBlobResponse.readableStreamBody)
}
if (storageProvider === 's3') {
logger.info(`Downloading KB file from S3: ${cloudKey}`)
// Use KB-specific S3 configuration
const { getS3Client } = await import('@/lib/uploads/s3/s3-client')
const { GetObjectCommand } = await import('@aws-sdk/client-s3')
const s3Client = getS3Client()
const command = new GetObjectCommand({
Bucket: S3_KB_CONFIG.bucket,
Key: cloudKey,
})
const response = await s3Client.send(command)
if (!response.Body) {
throw new Error('No body in S3 response')
}
// Convert stream to buffer using the same method as the regular S3 client
const stream = response.Body as any
return new Promise<Buffer>((resolve, reject) => {
const chunks: Buffer[] = []
stream.on('data', (chunk: Buffer) => chunks.push(chunk))
stream.on('end', () => resolve(Buffer.concat(chunks)))
stream.on('error', reject)
})
}
throw new Error(`Unsupported storage provider for KB files: ${storageProvider}`)
}
/**
* Proxy cloud file through our server
*/
async function handleCloudProxy(cloudKey: string): Promise<NextResponse> {
try {
const fileBuffer = await downloadFile(cloudKey)
// Check if this is a KB file (starts with 'kb/')
const isKBFile = cloudKey.startsWith('kb/')
const fileBuffer = isKBFile ? await downloadKBFile(cloudKey) : await downloadFile(cloudKey)
// Extract the original filename from the key (last part after last /)
const originalFilename = cloudKey.split('/').pop() || 'download'

View File

@@ -137,24 +137,22 @@ export async function POST(request: NextRequest) {
const safeExecutionData = {
success: executionData.success,
output: {
response: {
// Sanitize content to remove non-ASCII characters that would cause ByteString errors
content: executionData.output?.response?.content
? String(executionData.output.response.content).replace(/[\u0080-\uFFFF]/g, '')
: '',
model: executionData.output?.response?.model,
tokens: executionData.output?.response?.tokens || {
prompt: 0,
completion: 0,
total: 0,
},
// Sanitize any potential Unicode characters in tool calls
toolCalls: executionData.output?.response?.toolCalls
? sanitizeToolCalls(executionData.output.response.toolCalls)
: undefined,
providerTiming: executionData.output?.response?.providerTiming,
cost: executionData.output?.response?.cost,
// Sanitize content to remove non-ASCII characters that would cause ByteString errors
content: executionData.output?.content
? String(executionData.output.content).replace(/[\u0080-\uFFFF]/g, '')
: '',
model: executionData.output?.model,
tokens: executionData.output?.tokens || {
prompt: 0,
completion: 0,
total: 0,
},
// Sanitize any potential Unicode characters in tool calls
toolCalls: executionData.output?.toolCalls
? sanitizeToolCalls(executionData.output.toolCalls)
: undefined,
providerTiming: executionData.output?.providerTiming,
cost: executionData.output?.cost,
},
error: executionData.error,
logs: [], // Strip logs from header to avoid encoding issues
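
The net effect for consumers: the `response` wrapper is gone from execution output. A before/after sketch of the access pattern:

```ts
// Before: executionData.output?.response?.content
// After this change:
const content = executionData.output?.content
  ? String(executionData.output.content).replace(/[\u0080-\uFFFF]/g, '') // strip non-ASCII so the header stays a valid ByteString
  : ''
```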

View File

@@ -46,11 +46,19 @@ const formatResponse = (responseData: any, status = 200) => {
*/
const createErrorResponse = (error: any, status = 500, additionalData = {}) => {
const errorMessage = error instanceof Error ? error.message : String(error)
const errorStack = error instanceof Error ? error.stack : undefined
logger.error('Creating error response', {
errorMessage,
status,
stack: process.env.NODE_ENV === 'development' ? errorStack : undefined,
})
return formatResponse(
{
success: false,
error: errorMessage,
stack: process.env.NODE_ENV === 'development' ? errorStack : undefined,
...additionalData,
},
status
@@ -67,6 +75,7 @@ export async function GET(request: Request) {
const requestId = crypto.randomUUID().slice(0, 8)
if (!targetUrl) {
logger.error(`[${requestId}] Missing 'url' parameter`)
return createErrorResponse("Missing 'url' parameter", 400)
}
@@ -126,6 +135,10 @@ export async function GET(request: Request) {
: response.statusText || `HTTP error ${response.status}`
: undefined
if (!response.ok) {
logger.error(`[${requestId}] External API error: ${response.status} ${response.statusText}`)
}
// Return the proxied response
return formatResponse({
success: response.ok,
@@ -139,6 +152,7 @@ export async function GET(request: Request) {
logger.error(`[${requestId}] Proxy GET request failed`, {
url: targetUrl,
error: error instanceof Error ? error.message : String(error),
stack: error instanceof Error ? error.stack : undefined,
})
return createErrorResponse(error)
@@ -151,22 +165,40 @@ export async function POST(request: Request) {
const startTimeISO = startTime.toISOString()
try {
const { toolId, params } = await request.json()
// Parse request body
let requestBody
try {
requestBody = await request.json()
} catch (parseError) {
logger.error(`[${requestId}] Failed to parse request body`, {
error: parseError instanceof Error ? parseError.message : String(parseError),
})
throw new Error('Invalid JSON in request body')
}
logger.debug(`[${requestId}] Proxy request for tool`, {
toolId,
hasParams: !!params && Object.keys(params).length > 0,
})
const { toolId, params } = requestBody
if (!toolId) {
logger.error(`[${requestId}] Missing toolId in request`)
throw new Error('Missing toolId in request')
}
logger.info(`[${requestId}] Processing tool: ${toolId}`)
// Get tool
const tool = getTool(toolId)
if (!tool) {
logger.error(`[${requestId}] Tool not found: ${toolId}`)
throw new Error(`Tool not found: ${toolId}`)
}
// Validate the tool and its parameters
try {
validateToolRequest(toolId, tool, params)
} catch (error) {
logger.warn(`[${requestId}] Tool validation failed`, {
toolId,
error: error instanceof Error ? error.message : String(error),
} catch (validationError) {
logger.warn(`[${requestId}] Tool validation failed for ${toolId}`, {
error: validationError instanceof Error ? validationError.message : String(validationError),
})
// Add timing information even to error responses
@@ -174,23 +206,18 @@ export async function POST(request: Request) {
const endTimeISO = endTime.toISOString()
const duration = endTime.getTime() - startTime.getTime()
return createErrorResponse(error, 400, {
return createErrorResponse(validationError, 400, {
startTime: startTimeISO,
endTime: endTimeISO,
duration,
})
}
if (!tool) {
logger.error(`[${requestId}] Tool not found`, { toolId })
throw new Error(`Tool not found: ${toolId}`)
}
// Use executeTool with skipProxy=true to prevent recursive proxy calls, and skipPostProcess=true to prevent duplicate post-processing
// Execute tool
const result = await executeTool(toolId, params, true, true)
if (!result.success) {
logger.warn(`[${requestId}] Tool execution failed`, {
toolId,
logger.warn(`[${requestId}] Tool execution failed for ${toolId}`, {
error: result.error || 'Unknown error',
})
@@ -217,9 +244,13 @@ export async function POST(request: Request) {
}
// Fallback
throw new Error('Tool returned an error')
} catch (e) {
if (e instanceof Error) {
throw e
} catch (transformError) {
logger.error(`[${requestId}] Error transformation failed for ${toolId}`, {
error:
transformError instanceof Error ? transformError.message : String(transformError),
})
if (transformError instanceof Error) {
throw transformError
}
throw new Error('Tool returned an error')
}
@@ -246,12 +277,7 @@ export async function POST(request: Request) {
},
}
logger.info(`[${requestId}] Tool executed successfully`, {
toolId,
duration,
startTime: startTimeISO,
endTime: endTimeISO,
})
logger.info(`[${requestId}] Tool executed successfully: ${toolId} (${duration}ms)`)
// Return the response with CORS headers
return formatResponse(responseWithTimingData)
@@ -259,6 +285,7 @@ export async function POST(request: Request) {
logger.error(`[${requestId}] Proxy request failed`, {
error: error instanceof Error ? error.message : String(error),
stack: error instanceof Error ? error.stack : undefined,
name: error instanceof Error ? error.name : undefined,
})
// Add timing information even to error responses

View File

@@ -14,6 +14,7 @@ const SettingsSchema = z.object({
debugMode: z.boolean().optional(),
autoConnect: z.boolean().optional(),
autoFillEnvVars: z.boolean().optional(),
autoPan: z.boolean().optional(),
telemetryEnabled: z.boolean().optional(),
telemetryNotifiedUser: z.boolean().optional(),
emailPreferences: z
@@ -32,6 +33,7 @@ const defaultSettings = {
debugMode: false,
autoConnect: true,
autoFillEnvVars: true,
autoPan: true,
telemetryEnabled: true,
telemetryNotifiedUser: false,
emailPreferences: {},
@@ -65,6 +67,7 @@ export async function GET() {
debugMode: userSettings.debugMode,
autoConnect: userSettings.autoConnect,
autoFillEnvVars: userSettings.autoFillEnvVars,
autoPan: userSettings.autoPan,
telemetryEnabled: userSettings.telemetryEnabled,
telemetryNotifiedUser: userSettings.telemetryNotifiedUser,
emailPreferences: userSettings.emailPreferences ?? {},

View File

@@ -31,6 +31,27 @@ describe('Workflow Deployment API Route', () => {
}),
}))
// Mock serializer
vi.doMock('@/serializer', () => ({
serializeWorkflow: vi.fn().mockReturnValue({
version: '1.0',
blocks: [
{
id: 'block-1',
metadata: { id: 'starter', name: 'Start' },
position: { x: 100, y: 100 },
config: { tool: 'starter', params: {} },
inputs: {},
outputs: {},
enabled: true,
},
],
connections: [],
loops: {},
parallels: {},
}),
}))
vi.doMock('@/lib/workflows/db-helpers', () => ({
loadWorkflowFromNormalizedTables: vi.fn().mockResolvedValue({
blocks: {
@@ -75,6 +96,80 @@ describe('Workflow Deployment API Route', () => {
})
}),
}))
// Mock the database schema module
vi.doMock('@/db/schema', () => ({
workflow: {},
apiKey: {},
workflowBlocks: {},
workflowEdges: {},
workflowSubflows: {},
}))
// Mock drizzle-orm operators
vi.doMock('drizzle-orm', () => ({
eq: vi.fn((field, value) => ({ field, value, type: 'eq' })),
and: vi.fn((...conditions) => ({ conditions, type: 'and' })),
}))
// Mock the database module with proper chainable query builder
let selectCallCount = 0
vi.doMock('@/db', () => ({
db: {
select: vi.fn().mockImplementation(() => {
selectCallCount++
return {
from: vi.fn().mockImplementation(() => ({
where: vi.fn().mockImplementation(() => ({
limit: vi.fn().mockImplementation(() => {
// First call: workflow lookup (should return workflow)
if (selectCallCount === 1) {
return Promise.resolve([{ userId: 'user-id', id: 'workflow-id' }])
}
// Second call: blocks lookup
if (selectCallCount === 2) {
return Promise.resolve([
{
id: 'block-1',
type: 'starter',
name: 'Start',
positionX: '100',
positionY: '100',
enabled: true,
subBlocks: {},
data: {},
},
])
}
// Third call: edges lookup
if (selectCallCount === 3) {
return Promise.resolve([])
}
// Fourth call: subflows lookup
if (selectCallCount === 4) {
return Promise.resolve([])
}
// Fifth call: API key lookup (should return empty for new key test)
if (selectCallCount === 5) {
return Promise.resolve([])
}
// Default: empty array
return Promise.resolve([])
}),
})),
})),
}
}),
insert: vi.fn().mockImplementation(() => ({
values: vi.fn().mockResolvedValue([{ id: 'mock-api-key-id' }]),
})),
update: vi.fn().mockImplementation(() => ({
set: vi.fn().mockImplementation(() => ({
where: vi.fn().mockResolvedValue([]),
})),
})),
},
}))
})
afterEach(() => {
@@ -126,16 +221,7 @@ describe('Workflow Deployment API Route', () => {
* This should generate a new API key
*/
it('should create new API key when deploying workflow for user with no API key', async () => {
const mockInsert = vi.fn().mockReturnValue({
values: vi.fn().mockReturnValue(undefined),
})
const mockUpdate = vi.fn().mockReturnValue({
set: vi.fn().mockReturnValue({
where: vi.fn().mockResolvedValue([{ id: 'workflow-id' }]),
}),
})
// Override the global mock for this specific test
vi.doMock('@/db', () => ({
db: {
select: vi
@@ -143,11 +229,7 @@ describe('Workflow Deployment API Route', () => {
.mockReturnValueOnce({
from: vi.fn().mockReturnValue({
where: vi.fn().mockReturnValue({
limit: vi.fn().mockResolvedValue([
{
userId: 'user-id',
},
]),
limit: vi.fn().mockResolvedValue([{ userId: 'user-id', id: 'workflow-id' }]),
}),
}),
})
@@ -184,8 +266,14 @@ describe('Workflow Deployment API Route', () => {
}),
}),
}),
insert: mockInsert,
update: mockUpdate,
insert: vi.fn().mockImplementation(() => ({
values: vi.fn().mockResolvedValue([{ id: 'mock-api-key-id' }]),
})),
update: vi.fn().mockImplementation(() => ({
set: vi.fn().mockImplementation(() => ({
where: vi.fn().mockResolvedValue([]),
})),
})),
},
}))
@@ -204,9 +292,6 @@ describe('Workflow Deployment API Route', () => {
expect(data).toHaveProperty('apiKey', 'sim_testkeygenerated12345')
expect(data).toHaveProperty('isDeployed', true)
expect(data).toHaveProperty('deployedAt')
expect(mockInsert).toHaveBeenCalled()
expect(mockUpdate).toHaveBeenCalled()
})
/**
@@ -214,14 +299,7 @@ describe('Workflow Deployment API Route', () => {
* This should use the existing API key
*/
it('should use existing API key when deploying workflow', async () => {
const mockInsert = vi.fn()
const mockUpdate = vi.fn().mockReturnValue({
set: vi.fn().mockReturnValue({
where: vi.fn().mockResolvedValue([{ id: 'workflow-id' }]),
}),
})
// Override the global mock for this specific test
vi.doMock('@/db', () => ({
db: {
select: vi
@@ -229,11 +307,7 @@ describe('Workflow Deployment API Route', () => {
.mockReturnValueOnce({
from: vi.fn().mockReturnValue({
where: vi.fn().mockReturnValue({
limit: vi.fn().mockResolvedValue([
{
userId: 'user-id',
},
]),
limit: vi.fn().mockResolvedValue([{ userId: 'user-id', id: 'workflow-id' }]),
}),
}),
})
@@ -266,16 +340,18 @@ describe('Workflow Deployment API Route', () => {
.mockReturnValueOnce({
from: vi.fn().mockReturnValue({
where: vi.fn().mockReturnValue({
limit: vi.fn().mockResolvedValue([
{
key: 'sim_existingtestapikey12345',
},
]), // Existing API key
limit: vi.fn().mockResolvedValue([{ key: 'sim_existingtestapikey12345' }]), // Existing API key
}),
}),
}),
insert: mockInsert,
update: mockUpdate,
insert: vi.fn().mockImplementation(() => ({
values: vi.fn().mockResolvedValue([{ id: 'mock-api-key-id' }]),
})),
update: vi.fn().mockImplementation(() => ({
set: vi.fn().mockImplementation(() => ({
where: vi.fn().mockResolvedValue([]),
})),
})),
},
}))
@@ -293,9 +369,6 @@ describe('Workflow Deployment API Route', () => {
expect(data).toHaveProperty('apiKey', 'sim_existingtestapikey12345')
expect(data).toHaveProperty('isDeployed', true)
expect(mockInsert).not.toHaveBeenCalled()
expect(mockUpdate).toHaveBeenCalled()
})
/**

View File

@@ -139,7 +139,7 @@ export async function POST(request: NextRequest, { params }: { params: Promise<{
return createErrorResponse(validation.error.message, validation.error.status)
}
// Get the workflow to find the user (removed deprecated state column)
// Get the workflow to find the user
const workflowData = await db
.select({
userId: workflow.userId,

View File

@@ -246,10 +246,7 @@ describe('Workflow Execution API Route', () => {
expect.anything(), // serializedWorkflow
expect.anything(), // processedBlockStates
expect.anything(), // decryptedEnvVars
expect.objectContaining({
// processedInput
input: requestBody,
}),
requestBody, // processedInput (direct input, not wrapped)
expect.anything() // workflowVariables
)
})
@@ -285,10 +282,7 @@ describe('Workflow Execution API Route', () => {
expect.anything(), // serializedWorkflow
expect.anything(), // processedBlockStates
expect.anything(), // decryptedEnvVars
expect.objectContaining({
// processedInput
input: structuredInput,
}),
structuredInput, // processedInput (direct input, not wrapped)
expect.anything() // workflowVariables
)
})

View File

@@ -77,19 +77,12 @@ async function executeWorkflow(workflow: any, requestId: string, input?: any) {
input ? JSON.stringify(input, null, 2) : 'No input provided'
)
// Validate and structure input for maximum compatibility
let processedInput = input
if (input && typeof input === 'object') {
// Ensure input is properly structured for the starter block
if (input.input === undefined) {
// If input is not already nested, structure it properly
processedInput = { input: input }
logger.info(
`[${requestId}] Restructured input for workflow:`,
JSON.stringify(processedInput, null, 2)
)
}
}
// Use input directly for API workflows
const processedInput = input
logger.info(
`[${requestId}] Using input directly for workflow:`,
JSON.stringify(processedInput, null, 2)
)
try {
runningExecutions.add(executionKey)
@@ -381,13 +374,13 @@ export async function POST(request: NextRequest, { params }: { params: Promise<{
logger.info(`[${requestId}] No request body provided`)
}
// Don't double-nest the input if it's already structured
// Pass the raw body directly as input for API workflows
const hasContent = Object.keys(body).length > 0
const input = hasContent ? { input: body } : {}
const input = hasContent ? body : {}
logger.info(`[${requestId}] Input passed to workflow:`, JSON.stringify(input, null, 2))
// Execute workflow with the structured input
// Execute workflow with the raw input
const result = await executeWorkflow(validation.workflow, requestId, input)
// Check if the workflow execution contains a response block output
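
Because the body is no longer wrapped, whatever JSON an API caller posts becomes `<start.input>` directly. An illustrative call (the workflow ID, auth header name, and key are placeholders, not from the diff):

```ts
// Hypothetical API invocation of a deployed workflow.
const res = await fetch('/api/workflows/my-workflow-id/execute', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-API-Key': 'sim_...', // placeholder; the auth header name is an assumption
  },
  body: JSON.stringify({ customerId: 42, plan: 'pro' }),
})
// Inside the workflow:
//   <start.input>      -> { customerId: 42, plan: 'pro' }  (no extra { input: ... } wrapper)
//   <start.input.plan> -> 'pro'
```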

View File

@@ -2,6 +2,7 @@ import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { verifyInternalToken } from '@/lib/auth/internal'
import { createLogger } from '@/lib/logs/console-logger'
import { getUserEntityPermissions, hasAdminPermission } from '@/lib/permissions/utils'
import { loadWorkflowFromNormalizedTables } from '@/lib/workflows/db-helpers'
@@ -28,14 +29,29 @@ export async function GET(request: NextRequest, { params }: { params: Promise<{
const { id: workflowId } = await params
try {
// Get the session
const session = await getSession()
if (!session?.user?.id) {
logger.warn(`[${requestId}] Unauthorized access attempt for workflow ${workflowId}`)
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
// Check for internal JWT token for server-side calls
const authHeader = request.headers.get('authorization')
let isInternalCall = false
if (authHeader?.startsWith('Bearer ')) {
const token = authHeader.split(' ')[1]
isInternalCall = await verifyInternalToken(token)
}
const userId = session.user.id
let userId: string | null = null
if (isInternalCall) {
// For internal calls, we'll skip user-specific access checks
logger.info(`[${requestId}] Internal API call for workflow ${workflowId}`)
} else {
// Get the session for regular user calls
const session = await getSession()
if (!session?.user?.id) {
logger.warn(`[${requestId}] Unauthorized access attempt for workflow ${workflowId}`)
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
userId = session.user.id
}
// Fetch the workflow
const workflowData = await db
@@ -52,26 +68,31 @@ export async function GET(request: NextRequest, { params }: { params: Promise<{
// Check if user has access to this workflow
let hasAccess = false
// Case 1: User owns the workflow
if (workflowData.userId === userId) {
if (isInternalCall) {
// Internal calls have full access
hasAccess = true
}
// Case 2: Workflow belongs to a workspace the user has permissions for
if (!hasAccess && workflowData.workspaceId) {
const userPermission = await getUserEntityPermissions(
userId,
'workspace',
workflowData.workspaceId
)
if (userPermission !== null) {
} else {
// Case 1: User owns the workflow
if (workflowData.userId === userId) {
hasAccess = true
}
}
if (!hasAccess) {
logger.warn(`[${requestId}] User ${userId} denied access to workflow ${workflowId}`)
return NextResponse.json({ error: 'Access denied' }, { status: 403 })
// Case 2: Workflow belongs to a workspace the user has permissions for
if (!hasAccess && workflowData.workspaceId && userId) {
const userPermission = await getUserEntityPermissions(
userId,
'workspace',
workflowData.workspaceId
)
if (userPermission !== null) {
hasAccess = true
}
}
if (!hasAccess) {
logger.warn(`[${requestId}] User ${userId} denied access to workflow ${workflowId}`)
return NextResponse.json({ error: 'Access denied' }, { status: 403 })
}
}
// Try to load from normalized tables first
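For context, a rough sketch of what the `verifyInternalToken` helper imported above could look like, assuming a short-lived HS256 JWT signed with a shared server secret via the `jose` library. The real implementation in `@/lib/auth/internal`, the secret's env var name, and the claim shape are all assumptions here:

// Hypothetical mint/verify pair; the secret name and claims are assumptions.
import { SignJWT, jwtVerify } from 'jose'

const secret = new TextEncoder().encode(process.env.INTERNAL_API_SECRET ?? '')

export async function generateInternalToken(): Promise<string> {
  return new SignJWT({ type: 'internal' })
    .setProtectedHeader({ alg: 'HS256' })
    .setIssuedAt()
    .setExpirationTime('5m') // short-lived, minted per server-side call
    .sign(secret)
}

export async function verifyInternalToken(token: string): Promise<boolean> {
  try {
    const { payload } = await jwtVerify(token, secret)
    return payload.type === 'internal'
  } catch {
    return false // expired, malformed, or signed with a different secret
  }
}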

View File

@@ -104,7 +104,7 @@ async function createWorkspace(userId: string, name: string) {
updatedAt: now,
})
// Create "Workflow 1" for the workspace with start block
// Create initial workflow for the workspace with start block
const starterId = crypto.randomUUID()
const initialState = {
blocks: {
@@ -170,7 +170,7 @@ async function createWorkspace(userId: string, name: string) {
userId,
workspaceId,
folderId: null,
name: 'Workflow 1',
name: 'default-agent',
description: 'Your first workflow - start building here!',
state: initialState,
color: '#3972F6',

View File

@@ -297,7 +297,7 @@ export default function ChatClient({ subdomain }: { subdomain: string }) {
try {
// Send structured payload to maintain chat context
const payload = {
message:
input:
typeof userMessage.content === 'string'
? userMessage.content
: JSON.stringify(userMessage.content),

View File

@@ -1,7 +1,7 @@
'use client'
import { useEffect, useState } from 'react'
import { AlertCircle, Loader2, X } from 'lucide-react'
import { AlertCircle, ChevronDown, ChevronUp, Loader2, X } from 'lucide-react'
import {
AlertDialog,
AlertDialogAction,
@@ -16,6 +16,7 @@ import { Button } from '@/components/ui/button'
import { Dialog, DialogContent, DialogHeader, DialogTitle } from '@/components/ui/dialog'
import { Label } from '@/components/ui/label'
import { Textarea } from '@/components/ui/textarea'
import { Tooltip, TooltipContent, TooltipTrigger } from '@/components/ui/tooltip'
import { createLogger } from '@/lib/logs/console-logger'
import type { ChunkData, DocumentData } from '@/stores/knowledge/store'
@@ -28,6 +29,12 @@ interface EditChunkModalProps {
isOpen: boolean
onClose: () => void
onChunkUpdate?: (updatedChunk: ChunkData) => void
// New props for navigation
allChunks?: ChunkData[]
currentPage?: number
totalPages?: number
onNavigateToChunk?: (chunk: ChunkData) => void
onNavigateToPage?: (page: number, selectChunk: 'first' | 'last') => Promise<void>
}
export function EditChunkModal({
@@ -37,11 +44,18 @@ export function EditChunkModal({
isOpen,
onClose,
onChunkUpdate,
allChunks = [],
currentPage = 1,
totalPages = 1,
onNavigateToChunk,
onNavigateToPage,
}: EditChunkModalProps) {
const [editedContent, setEditedContent] = useState(chunk?.content || '')
const [isSaving, setIsSaving] = useState(false)
const [isNavigating, setIsNavigating] = useState(false)
const [error, setError] = useState<string | null>(null)
const [showUnsavedChangesAlert, setShowUnsavedChangesAlert] = useState(false)
const [pendingNavigation, setPendingNavigation] = useState<(() => void) | null>(null)
// Check if there are unsaved changes
const hasUnsavedChanges = editedContent !== (chunk?.content || '')
@@ -53,6 +67,13 @@ export function EditChunkModal({
}
}, [chunk?.id, chunk?.content])
// Find current chunk index in the current page
const currentChunkIndex = chunk ? allChunks.findIndex((c) => c.id === chunk.id) : -1
// Calculate navigation availability
const canNavigatePrev = currentChunkIndex > 0 || currentPage > 1
const canNavigateNext = currentChunkIndex < allChunks.length - 1 || currentPage < totalPages
const handleSaveContent = async () => {
if (!chunk || !document) return
@@ -82,7 +103,6 @@ export function EditChunkModal({
if (result.success && onChunkUpdate) {
onChunkUpdate(result.data)
onClose()
}
} catch (err) {
logger.error('Error updating chunk:', err)
@@ -92,8 +112,51 @@ export function EditChunkModal({
}
}
const navigateToChunk = async (direction: 'prev' | 'next') => {
if (!chunk || isNavigating) return
try {
setIsNavigating(true)
if (direction === 'prev') {
if (currentChunkIndex > 0) {
// Navigate to previous chunk in current page
const prevChunk = allChunks[currentChunkIndex - 1]
onNavigateToChunk?.(prevChunk)
} else if (currentPage > 1) {
// Load previous page and navigate to last chunk
await onNavigateToPage?.(currentPage - 1, 'last')
}
} else {
if (currentChunkIndex < allChunks.length - 1) {
// Navigate to next chunk in current page
const nextChunk = allChunks[currentChunkIndex + 1]
onNavigateToChunk?.(nextChunk)
} else if (currentPage < totalPages) {
// Load next page and navigate to first chunk
await onNavigateToPage?.(currentPage + 1, 'first')
}
}
} catch (err) {
logger.error(`Error navigating ${direction}:`, err)
setError(`Failed to navigate to ${direction === 'prev' ? 'previous' : 'next'} chunk`)
} finally {
setIsNavigating(false)
}
}
const handleNavigate = (direction: 'prev' | 'next') => {
if (hasUnsavedChanges) {
setPendingNavigation(() => () => navigateToChunk(direction))
setShowUnsavedChangesAlert(true)
} else {
void navigateToChunk(direction)
}
}
const handleCloseAttempt = () => {
if (hasUnsavedChanges && !isSaving) {
setPendingNavigation(null)
setShowUnsavedChangesAlert(true)
} else {
onClose()
@@ -102,7 +165,12 @@ export function EditChunkModal({
const handleConfirmDiscard = () => {
setShowUnsavedChangesAlert(false)
onClose()
if (pendingNavigation) {
void pendingNavigation()
setPendingNavigation(null)
} else {
onClose()
}
}
const isFormValid = editedContent.trim().length > 0 && editedContent.trim().length <= 10000
@@ -118,7 +186,59 @@ export function EditChunkModal({
>
<DialogHeader className='flex-shrink-0 border-b px-6 py-4'>
<div className='flex items-center justify-between'>
<DialogTitle className='font-medium text-lg'>Edit Chunk</DialogTitle>
<div className='flex items-center gap-3'>
<DialogTitle className='font-medium text-lg'>Edit Chunk</DialogTitle>
{/* Navigation Controls */}
<div className='flex items-center gap-1'>
<Tooltip>
<TooltipTrigger
asChild
onFocus={(e) => e.preventDefault()}
onBlur={(e) => e.preventDefault()}
>
<Button
variant='ghost'
size='sm'
onClick={() => handleNavigate('prev')}
disabled={!canNavigatePrev || isNavigating || isSaving}
className='h-8 w-8 p-0'
>
<ChevronUp className='h-4 w-4' />
</Button>
</TooltipTrigger>
<TooltipContent side='bottom'>
Previous chunk{' '}
{currentPage > 1 && currentChunkIndex === 0 ? '(previous page)' : ''}
</TooltipContent>
</Tooltip>
<Tooltip>
<TooltipTrigger
asChild
onFocus={(e) => e.preventDefault()}
onBlur={(e) => e.preventDefault()}
>
<Button
variant='ghost'
size='sm'
onClick={() => handleNavigate('next')}
disabled={!canNavigateNext || isNavigating || isSaving}
className='h-8 w-8 p-0'
>
<ChevronDown className='h-4 w-4' />
</Button>
</TooltipTrigger>
<TooltipContent side='bottom'>
Next chunk{' '}
{currentPage < totalPages && currentChunkIndex === allChunks.length - 1
? '(next page)'
: ''}
</TooltipContent>
</Tooltip>
</div>
</div>
<Button
variant='ghost'
size='icon'
@@ -142,7 +262,7 @@ export function EditChunkModal({
{document?.filename || 'Unknown Document'}
</p>
<p className='text-muted-foreground text-xs'>
Editing chunk #{chunk.chunkIndex}
Editing chunk #{chunk.chunkIndex} · Page {currentPage} of {totalPages}
</p>
</div>
</div>
@@ -167,7 +287,7 @@ export function EditChunkModal({
onChange={(e) => setEditedContent(e.target.value)}
placeholder='Enter chunk content...'
className='flex-1 resize-none'
disabled={isSaving}
disabled={isSaving || isNavigating}
/>
</div>
</div>
@@ -176,12 +296,16 @@ export function EditChunkModal({
{/* Footer */}
<div className='mt-auto border-t px-6 pt-4 pb-6'>
<div className='flex justify-between'>
<Button variant='outline' onClick={handleCloseAttempt} disabled={isSaving}>
<Button
variant='outline'
onClick={handleCloseAttempt}
disabled={isSaving || isNavigating}
>
Cancel
</Button>
<Button
onClick={handleSaveContent}
disabled={!isFormValid || isSaving || !hasUnsavedChanges}
disabled={!isFormValid || isSaving || !hasUnsavedChanges || isNavigating}
className='bg-[#701FFC] font-[480] text-primary-foreground shadow-[0_0_0_0_#701FFC] transition-all duration-200 hover:bg-[#6518E6] hover:shadow-[0_0_0_4px_rgba(127,47,255,0.15)]'
>
{isSaving ? (
@@ -205,12 +329,19 @@ export function EditChunkModal({
<AlertDialogHeader>
<AlertDialogTitle>Unsaved Changes</AlertDialogTitle>
<AlertDialogDescription>
You have unsaved changes to this chunk content. Are you sure you want to discard your
changes and close the editor?
You have unsaved changes to this chunk content.
{pendingNavigation
? ' Do you want to discard your changes and navigate to another chunk?'
: ' Are you sure you want to discard your changes and close the editor?'}
</AlertDialogDescription>
</AlertDialogHeader>
<AlertDialogFooter>
<AlertDialogCancel onClick={() => setShowUnsavedChangesAlert(false)}>
<AlertDialogCancel
onClick={() => {
setShowUnsavedChangesAlert(false)
setPendingNavigation(null)
}}
>
Keep Editing
</AlertDialogCancel>
<AlertDialogAction

View File

@@ -767,6 +767,30 @@ export function Document({
updateChunk(updatedChunk.id, updatedChunk)
setSelectedChunk(updatedChunk)
}}
allChunks={chunks}
currentPage={currentPage}
totalPages={totalPages}
onNavigateToChunk={(chunk: ChunkData) => {
setSelectedChunk(chunk)
}}
onNavigateToPage={async (page: number, selectChunk: 'first' | 'last') => {
await goToPage(page)
const checkAndSelectChunk = () => {
if (!isLoadingChunks && chunks.length > 0) {
if (selectChunk === 'first') {
setSelectedChunk(chunks[0])
} else {
setSelectedChunk(chunks[chunks.length - 1])
}
} else {
// Retry after a short delay if chunks aren't loaded yet
setTimeout(checkAndSelectChunk, 100)
}
}
setTimeout(checkAndSelectChunk, 0)
}}
/>
{/* Create Chunk Modal */}

View File

@@ -36,16 +36,11 @@ import { useKnowledgeBase, useKnowledgeBaseDocuments } from '@/hooks/use-knowled
import { type DocumentData, useKnowledgeStore } from '@/stores/knowledge/store'
import { useSidebarStore } from '@/stores/sidebar/store'
import { KnowledgeHeader } from '../components/knowledge-header/knowledge-header'
import { useKnowledgeUpload } from '../hooks/use-knowledge-upload'
import { KnowledgeBaseLoading } from './components/knowledge-base-loading/knowledge-base-loading'
const logger = createLogger('KnowledgeBase')
interface ProcessedDocumentResponse {
documentId: string
filename: string
status: string
}
interface KnowledgeBaseProps {
id: string
knowledgeBaseName?: string
@@ -145,17 +140,32 @@ export function KnowledgeBase({
const [showDeleteDialog, setShowDeleteDialog] = useState(false)
const [isDeleting, setIsDeleting] = useState(false)
const [isBulkOperating, setIsBulkOperating] = useState(false)
const [isUploading, setIsUploading] = useState(false)
const [uploadError, setUploadError] = useState<{
message: string
timestamp: number
} | null>(null)
const [uploadProgress, setUploadProgress] = useState<{
stage: 'idle' | 'uploading' | 'processing' | 'completing'
filesCompleted: number
totalFiles: number
currentFile?: string
}>({ stage: 'idle', filesCompleted: 0, totalFiles: 0 })
const { isUploading, uploadProgress, uploadError, uploadFiles, clearError } = useKnowledgeUpload({
onUploadComplete: async (uploadedFiles) => {
const pendingDocuments: DocumentData[] = uploadedFiles.map((file, index) => ({
id: `temp-${Date.now()}-${index}`,
knowledgeBaseId: id,
filename: file.filename,
fileUrl: file.fileUrl,
fileSize: file.fileSize,
mimeType: file.mimeType,
chunkCount: 0,
tokenCount: 0,
characterCount: 0,
processingStatus: 'pending' as const,
processingStartedAt: null,
processingCompletedAt: null,
processingError: null,
enabled: true,
uploadedAt: new Date().toISOString(),
}))
useKnowledgeStore.getState().addPendingDocuments(id, pendingDocuments)
await refreshDocuments()
},
})
const router = useRouter()
const fileInputRef = useRef<HTMLInputElement>(null)
@@ -240,11 +250,11 @@ export function KnowledgeBase({
useEffect(() => {
if (uploadError) {
const timer = setTimeout(() => {
setUploadError(null)
clearError()
}, 8000)
return () => clearTimeout(timer)
}
}, [uploadError])
}, [uploadError, clearError])
// Filter documents based on search query
const filteredDocuments = documents.filter((doc) =>
@@ -448,153 +458,18 @@ export function KnowledgeBase({
const files = e.target.files
if (!files || files.length === 0) return
interface UploadedFile {
filename: string
fileUrl: string
fileSize: number
mimeType: string
}
try {
setIsUploading(true)
setUploadError(null)
setUploadProgress({ stage: 'uploading', filesCompleted: 0, totalFiles: files.length })
// Upload all files and start processing
const uploadedFiles: UploadedFile[] = []
const fileArray = Array.from(files)
for (const [index, file] of fileArray.entries()) {
setUploadProgress((prev) => ({ ...prev, currentFile: file.name, filesCompleted: index }))
const formData = new FormData()
formData.append('file', file)
const uploadResponse = await fetch('/api/files/upload', {
method: 'POST',
body: formData,
})
if (!uploadResponse.ok) {
const errorData = await uploadResponse.json()
throw new Error(`Failed to upload ${file.name}: ${errorData.error || 'Unknown error'}`)
}
const uploadResult = await uploadResponse.json()
// Validate upload result structure
if (!uploadResult.path) {
throw new Error(`Invalid upload response for ${file.name}: missing file path`)
}
uploadedFiles.push({
filename: file.name,
fileUrl: uploadResult.path.startsWith('http')
? uploadResult.path
: `${window.location.origin}${uploadResult.path}`,
fileSize: file.size,
mimeType: file.type,
})
}
setUploadProgress((prev) => ({
...prev,
stage: 'processing',
filesCompleted: fileArray.length,
}))
// Start async document processing
const processResponse = await fetch(`/api/knowledge/${id}/documents`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
documents: uploadedFiles,
processingOptions: {
chunkSize: knowledgeBase?.chunkingConfig?.maxSize || 1024,
minCharactersPerChunk: knowledgeBase?.chunkingConfig?.minSize || 100,
chunkOverlap: knowledgeBase?.chunkingConfig?.overlap || 200,
recipe: 'default',
lang: 'en',
},
bulk: true,
}),
const chunkingConfig = knowledgeBase?.chunkingConfig
await uploadFiles(Array.from(files), id, {
chunkSize: chunkingConfig?.maxSize || 1024,
minCharactersPerChunk: chunkingConfig?.minSize || 100,
chunkOverlap: chunkingConfig?.overlap || 200,
recipe: 'default',
})
if (!processResponse.ok) {
const errorData = await processResponse.json()
throw new Error(
`Failed to start document processing: ${errorData.error || 'Unknown error'}`
)
}
const processResult = await processResponse.json()
// Validate process result structure
if (!processResult.success) {
throw new Error(`Document processing failed: ${processResult.error || 'Unknown error'}`)
}
if (!processResult.data || !processResult.data.documentsCreated) {
throw new Error('Invalid processing response: missing document data')
}
// Create pending document objects and add them to the store immediately
const pendingDocuments: DocumentData[] = processResult.data.documentsCreated.map(
(doc: ProcessedDocumentResponse, index: number) => {
if (!doc.documentId || !doc.filename) {
logger.error(`Invalid document data received:`, doc)
throw new Error(
`Invalid document data for ${uploadedFiles[index]?.filename || 'unknown file'}`
)
}
return {
id: doc.documentId,
knowledgeBaseId: id,
filename: doc.filename,
fileUrl: uploadedFiles[index].fileUrl,
fileSize: uploadedFiles[index].fileSize,
mimeType: uploadedFiles[index].mimeType,
chunkCount: 0,
tokenCount: 0,
characterCount: 0,
processingStatus: 'pending' as const,
processingStartedAt: null,
processingCompletedAt: null,
processingError: null,
enabled: true,
uploadedAt: new Date().toISOString(),
}
}
)
// Add pending documents to store for immediate UI update
useKnowledgeStore.getState().addPendingDocuments(id, pendingDocuments)
logger.info(`Successfully started processing ${uploadedFiles.length} documents`)
setUploadProgress((prev) => ({ ...prev, stage: 'completing' }))
// Trigger a refresh to ensure documents are properly loaded
await refreshDocuments()
setUploadProgress({ stage: 'idle', filesCompleted: 0, totalFiles: 0 })
} catch (err) {
logger.error('Error uploading documents:', err)
const errorMessage =
err instanceof Error ? err.message : 'Unknown error occurred during upload'
setUploadError({
message: errorMessage,
timestamp: Date.now(),
})
// Show user-friendly error message in console for debugging
console.error('Document upload failed:', errorMessage)
} catch (error) {
logger.error('Error uploading files:', error)
// Error handling is managed by the upload hook
} finally {
setIsUploading(false)
setUploadProgress({ stage: 'idle', filesCompleted: 0, totalFiles: 0 })
// Reset the file input
if (fileInputRef.current) {
fileInputRef.current.value = ''
@@ -995,7 +870,7 @@ export function KnowledgeBase({
</tr>
))
) : (
filteredDocuments.map((doc, index) => {
filteredDocuments.map((doc) => {
const isSelected = selectedDocuments.has(doc.id)
const statusDisplay = getStatusDisplay(doc)
// const processingTime = getProcessingTime(doc)
@@ -1254,7 +1129,7 @@ export function KnowledgeBase({
</p>
</div>
<button
onClick={() => setUploadError(null)}
onClick={() => clearError()}
className='flex-shrink-0 rounded-sm opacity-70 hover:opacity-100 focus:outline-none focus:ring-2 focus:ring-ring'
>
<X className='h-4 w-4' />

View File

@@ -13,8 +13,8 @@ import { Label } from '@/components/ui/label'
import { Textarea } from '@/components/ui/textarea'
import { createLogger } from '@/lib/logs/console-logger'
import { getDocumentIcon } from '@/app/workspace/[workspaceId]/knowledge/components/icons/document-icons'
import type { DocumentData, KnowledgeBaseData } from '@/stores/knowledge/store'
import { useKnowledgeStore } from '@/stores/knowledge/store'
import type { KnowledgeBaseData } from '@/stores/knowledge/store'
import { useKnowledgeUpload } from '../../hooks/use-knowledge-upload'
const logger = createLogger('CreateModal')
@@ -29,12 +29,6 @@ const ACCEPTED_FILE_TYPES = [
'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
]
interface ProcessedDocumentResponse {
documentId: string
filename: string
status: string
}
interface FileWithPreview extends File {
preview: string
}
@@ -89,6 +83,12 @@ export function CreateModal({ open, onOpenChange, onKnowledgeBaseCreated }: Crea
const scrollContainerRef = useRef<HTMLDivElement>(null)
const dropZoneRef = useRef<HTMLDivElement>(null)
const { uploadFiles } = useKnowledgeUpload({
onUploadComplete: (uploadedFiles) => {
logger.info(`Successfully uploaded ${uploadedFiles.length} files`)
},
})
// Cleanup file preview URLs when component unmounts to prevent memory leaks
useEffect(() => {
return () => {
@@ -235,19 +235,6 @@ export function CreateModal({ open, onOpenChange, onKnowledgeBaseCreated }: Crea
return `${Number.parseFloat((bytes / k ** i).toFixed(1))} ${sizes[i]}`
}
// Helper function to create uploadedFiles array from file uploads
const createUploadedFile = (
filename: string,
fileUrl: string,
fileSize: number,
mimeType: string
) => ({
filename,
fileUrl: fileUrl.startsWith('http') ? fileUrl : `${window.location.origin}${fileUrl}`,
fileSize,
mimeType,
})
const onSubmit = async (data: FormValues) => {
setIsSubmitting(true)
setSubmitStatus(null)
@@ -285,138 +272,14 @@ export function CreateModal({ open, onOpenChange, onKnowledgeBaseCreated }: Crea
const newKnowledgeBase = result.data
// If files are uploaded, upload them and start processing
if (files.length > 0) {
// First, upload all files to get their URLs
interface UploadedFile {
filename: string
fileUrl: string
fileSize: number
mimeType: string
}
const uploadedFiles: UploadedFile[] = []
for (const file of files) {
try {
const presignedResponse = await fetch('/api/files/presigned', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
fileName: file.name,
contentType: file.type,
fileSize: file.size,
}),
})
const presignedData = await presignedResponse.json()
if (presignedResponse.ok && presignedData.directUploadSupported) {
const uploadHeaders: Record<string, string> = {
'Content-Type': file.type,
}
// Add Azure-specific headers if provided
if (presignedData.uploadHeaders) {
Object.assign(uploadHeaders, presignedData.uploadHeaders)
}
const uploadResponse = await fetch(presignedData.presignedUrl, {
method: 'PUT',
headers: uploadHeaders, // Use the merged headers
body: file,
})
if (!uploadResponse.ok) {
throw new Error(
`Direct upload failed: ${uploadResponse.status} ${uploadResponse.statusText}`
)
}
uploadedFiles.push(
createUploadedFile(file.name, presignedData.fileInfo.path, file.size, file.type)
)
} else {
const formData = new FormData()
formData.append('file', file)
const uploadResponse = await fetch('/api/files/upload', {
method: 'POST',
body: formData,
})
if (!uploadResponse.ok) {
const errorData = await uploadResponse.json()
throw new Error(
`Failed to upload ${file.name}: ${errorData.error || 'Unknown error'}`
)
}
const uploadResult = await uploadResponse.json()
uploadedFiles.push(
createUploadedFile(file.name, uploadResult.path, file.size, file.type)
)
}
} catch (error) {
throw new Error(
`Failed to upload ${file.name}: ${error instanceof Error ? error.message : 'Unknown error'}`
)
}
}
// Start async document processing
const processResponse = await fetch(`/api/knowledge/${newKnowledgeBase.id}/documents`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
documents: uploadedFiles,
processingOptions: {
chunkSize: data.maxChunkSize,
minCharactersPerChunk: data.minChunkSize,
chunkOverlap: data.overlapSize,
recipe: 'default',
lang: 'en',
},
bulk: true,
}),
const uploadedFiles = await uploadFiles(files, newKnowledgeBase.id, {
chunkSize: data.maxChunkSize,
minCharactersPerChunk: data.minChunkSize,
chunkOverlap: data.overlapSize,
recipe: 'default',
})
if (!processResponse.ok) {
throw new Error('Failed to start document processing')
}
const processResult = await processResponse.json()
// Create pending document objects and add them to the store immediately
if (processResult.success && processResult.data.documentsCreated) {
const pendingDocuments: DocumentData[] = processResult.data.documentsCreated.map(
(doc: ProcessedDocumentResponse, index: number) => ({
id: doc.documentId,
knowledgeBaseId: newKnowledgeBase.id,
filename: doc.filename,
fileUrl: uploadedFiles[index].fileUrl,
fileSize: uploadedFiles[index].fileSize,
mimeType: uploadedFiles[index].mimeType,
chunkCount: 0,
tokenCount: 0,
characterCount: 0,
processingStatus: 'pending' as const,
processingStartedAt: null,
processingCompletedAt: null,
processingError: null,
enabled: true,
uploadedAt: new Date().toISOString(),
})
)
// Add pending documents to store for immediate UI update
useKnowledgeStore.getState().addPendingDocuments(newKnowledgeBase.id, pendingDocuments)
}
// Update the knowledge base object with the correct document count
newKnowledgeBase.docCount = uploadedFiles.length

View File

@@ -0,0 +1,352 @@
import { useState } from 'react'
import { createLogger } from '@/lib/logs/console-logger'
const logger = createLogger('KnowledgeUpload')
export interface UploadedFile {
filename: string
fileUrl: string
fileSize: number
mimeType: string
}
export interface UploadProgress {
stage: 'idle' | 'uploading' | 'processing' | 'completing'
filesCompleted: number
totalFiles: number
currentFile?: string
}
export interface UploadError {
message: string
timestamp: number
code?: string
details?: any
}
export interface ProcessingOptions {
chunkSize?: number
minCharactersPerChunk?: number
chunkOverlap?: number
recipe?: string
}
export interface UseKnowledgeUploadOptions {
onUploadComplete?: (uploadedFiles: UploadedFile[]) => void
onError?: (error: UploadError) => void
}
class KnowledgeUploadError extends Error {
constructor(
message: string,
public code: string,
public details?: any
) {
super(message)
this.name = 'KnowledgeUploadError'
}
}
class PresignedUrlError extends KnowledgeUploadError {
constructor(message: string, details?: any) {
super(message, 'PRESIGNED_URL_ERROR', details)
}
}
class DirectUploadError extends KnowledgeUploadError {
constructor(message: string, details?: any) {
super(message, 'DIRECT_UPLOAD_ERROR', details)
}
}
class ProcessingError extends KnowledgeUploadError {
constructor(message: string, details?: any) {
super(message, 'PROCESSING_ERROR', details)
}
}
export function useKnowledgeUpload(options: UseKnowledgeUploadOptions = {}) {
const [isUploading, setIsUploading] = useState(false)
const [uploadProgress, setUploadProgress] = useState<UploadProgress>({
stage: 'idle',
filesCompleted: 0,
totalFiles: 0,
})
const [uploadError, setUploadError] = useState<UploadError | null>(null)
const createUploadedFile = (
filename: string,
fileUrl: string,
fileSize: number,
mimeType: string
): UploadedFile => ({
filename,
fileUrl,
fileSize,
mimeType,
})
const createErrorFromException = (error: unknown, defaultMessage: string): UploadError => {
if (error instanceof KnowledgeUploadError) {
return {
message: error.message,
code: error.code,
details: error.details,
timestamp: Date.now(),
}
}
if (error instanceof Error) {
return {
message: error.message,
timestamp: Date.now(),
}
}
return {
message: defaultMessage,
timestamp: Date.now(),
}
}
const uploadFiles = async (
files: File[],
knowledgeBaseId: string,
processingOptions: ProcessingOptions = {}
): Promise<UploadedFile[]> => {
if (files.length === 0) {
throw new KnowledgeUploadError('No files provided for upload', 'NO_FILES')
}
if (!knowledgeBaseId?.trim()) {
throw new KnowledgeUploadError('Knowledge base ID is required', 'INVALID_KB_ID')
}
try {
setIsUploading(true)
setUploadError(null)
setUploadProgress({ stage: 'uploading', filesCompleted: 0, totalFiles: files.length })
const uploadedFiles: UploadedFile[] = []
// Upload all files using presigned URLs
for (const [index, file] of files.entries()) {
setUploadProgress((prev) => ({
...prev,
currentFile: file.name,
filesCompleted: index,
}))
try {
// Get presigned URL
const presignedResponse = await fetch('/api/files/presigned?type=knowledge-base', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
fileName: file.name,
contentType: file.type,
fileSize: file.size,
}),
})
if (!presignedResponse.ok) {
let errorDetails: any = null
try {
errorDetails = await presignedResponse.json()
} catch {
// Ignore JSON parsing errors
}
throw new PresignedUrlError(
`Failed to get presigned URL for ${file.name}: ${presignedResponse.status} ${presignedResponse.statusText}`,
errorDetails
)
}
const presignedData = await presignedResponse.json()
if (presignedData.directUploadSupported) {
// Use presigned URL for direct upload
const uploadHeaders: Record<string, string> = {
'Content-Type': file.type,
}
// Add Azure-specific headers if provided
if (presignedData.uploadHeaders) {
Object.assign(uploadHeaders, presignedData.uploadHeaders)
}
const uploadResponse = await fetch(presignedData.presignedUrl, {
method: 'PUT',
headers: uploadHeaders,
body: file,
})
if (!uploadResponse.ok) {
throw new DirectUploadError(
`Direct upload failed for ${file.name}: ${uploadResponse.status} ${uploadResponse.statusText}`,
{ uploadResponse: uploadResponse.statusText }
)
}
// Convert relative path to full URL for schema validation
const fullFileUrl = presignedData.fileInfo.path.startsWith('http')
? presignedData.fileInfo.path
: `${window.location.origin}${presignedData.fileInfo.path}`
uploadedFiles.push(createUploadedFile(file.name, fullFileUrl, file.size, file.type))
} else {
// Fallback to traditional upload through API route
const formData = new FormData()
formData.append('file', file)
const uploadResponse = await fetch('/api/files/upload', {
method: 'POST',
body: formData,
})
if (!uploadResponse.ok) {
let errorData: any = null
try {
errorData = await uploadResponse.json()
} catch {
// Ignore JSON parsing errors
}
throw new DirectUploadError(
`Failed to upload ${file.name}: ${errorData?.error || 'Unknown error'}`,
errorData
)
}
const uploadResult = await uploadResponse.json()
// Validate upload result structure
if (!uploadResult.path) {
throw new DirectUploadError(
`Invalid upload response for ${file.name}: missing file path`,
uploadResult
)
}
uploadedFiles.push(
createUploadedFile(
file.name,
uploadResult.path.startsWith('http')
? uploadResult.path
: `${window.location.origin}${uploadResult.path}`,
file.size,
file.type
)
)
}
} catch (fileError) {
logger.error(`Error uploading file ${file.name}:`, fileError)
throw fileError // Re-throw to be caught by outer try-catch
}
}
setUploadProgress((prev) => ({ ...prev, stage: 'processing' }))
// Start async document processing
const processPayload = {
documents: uploadedFiles,
processingOptions: {
chunkSize: processingOptions.chunkSize || 1024,
minCharactersPerChunk: processingOptions.minCharactersPerChunk || 100,
chunkOverlap: processingOptions.chunkOverlap || 200,
recipe: processingOptions.recipe || 'default',
lang: 'en',
},
bulk: true,
}
const processResponse = await fetch(`/api/knowledge/${knowledgeBaseId}/documents`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify(processPayload),
})
if (!processResponse.ok) {
let errorData: any = null
try {
errorData = await processResponse.json()
} catch {
// Ignore JSON parsing errors
}
logger.error('Document processing failed:', {
status: processResponse.status,
error: errorData,
uploadedFiles: uploadedFiles.map((f) => ({
filename: f.filename,
fileUrl: f.fileUrl,
fileSize: f.fileSize,
mimeType: f.mimeType,
})),
})
throw new ProcessingError(
`Failed to start document processing: ${errorData?.error || errorData?.message || 'Unknown error'}`,
errorData
)
}
const processResult = await processResponse.json()
// Validate process result structure
if (!processResult.success) {
throw new ProcessingError(
`Document processing failed: ${processResult.error || 'Unknown error'}`,
processResult
)
}
if (!processResult.data || !processResult.data.documentsCreated) {
throw new ProcessingError(
'Invalid processing response: missing document data',
processResult
)
}
setUploadProgress((prev) => ({ ...prev, stage: 'completing' }))
logger.info(`Successfully started processing ${uploadedFiles.length} documents`)
// Call success callback
options.onUploadComplete?.(uploadedFiles)
return uploadedFiles
} catch (err) {
logger.error('Error uploading documents:', err)
const error = createErrorFromException(err, 'Unknown error occurred during upload')
setUploadError(error)
options.onError?.(error)
// Show user-friendly error message in console for debugging
console.error('Document upload failed:', error.message)
throw err
} finally {
setIsUploading(false)
setUploadProgress({ stage: 'idle', filesCompleted: 0, totalFiles: 0 })
}
}
const clearError = () => {
setUploadError(null)
}
return {
isUploading,
uploadProgress,
uploadError,
uploadFiles,
clearError,
}
}
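A minimal consumer sketch for the hook above; the component, IDs, and chunking values below are placeholders, not code from this change:

// Hypothetical consumer of useKnowledgeUpload; all names below are placeholders.
import { type ChangeEvent } from 'react'
import { useKnowledgeUpload } from '../hooks/use-knowledge-upload'

export function UploadButton({ knowledgeBaseId }: { knowledgeBaseId: string }) {
  const { isUploading, uploadProgress, uploadFiles } = useKnowledgeUpload({
    onUploadComplete: (files) => console.log(`Queued ${files.length} documents`),
  })

  const onChange = async (e: ChangeEvent<HTMLInputElement>) => {
    if (!e.target.files?.length) return
    try {
      await uploadFiles(Array.from(e.target.files), knowledgeBaseId, {
        chunkSize: 1024,
        chunkOverlap: 200,
      })
    } catch {
      // uploadError state already carries the structured error for the UI
    }
  }

  return (
    <label>
      {isUploading ? `Uploading ${uploadProgress.currentFile ?? ''}...` : 'Upload documents'}
      <input type='file' multiple hidden onChange={onChange} disabled={isUploading} />
    </label>
  )
}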

View File

@@ -140,12 +140,20 @@ export function Chat({ panelWidth, chatMessage, setChatMessage }: ChatProps) {
result.logs?.filter((log) => !messageIdMap.has(log.blockId)) || []
if (nonStreamingLogs.length > 0) {
const outputsToRender = selectedOutputs.filter((outputId) =>
nonStreamingLogs.some((log) => log.blockId === outputId.split('.')[0])
)
const outputsToRender = selectedOutputs.filter((outputId) => {
// Extract block ID correctly - handle both formats:
// - "blockId" (direct block ID)
// - "blockId_response.result" (block ID with path)
const blockIdForOutput = outputId.includes('_')
? outputId.split('_')[0]
: outputId.split('.')[0]
return nonStreamingLogs.some((log) => log.blockId === blockIdForOutput)
})
for (const outputId of outputsToRender) {
const blockIdForOutput = outputId.split('.')[0]
const blockIdForOutput = outputId.includes('_')
? outputId.split('_')[0]
: outputId.split('.')[0]
const path = outputId.substring(blockIdForOutput.length + 1)
const log = nonStreamingLogs.find((l) => l.blockId === blockIdForOutput)
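The same block-ID extraction appears twice in this hunk; a hypothetical helper (not part of the change) makes the rule explicit:

// Hypothetical helper capturing the rule used twice above:
// "blockId" stays as-is, "blockId_response.result" yields "blockId",
// and the legacy "blockId.path" form also yields "blockId".
function getBlockIdFromOutputId(outputId: string): string {
  return outputId.includes('_') ? outputId.split('_')[0] : outputId.split('.')[0]
}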

View File

@@ -53,13 +53,41 @@ export function OutputSelect({
const addOutput = (path: string, outputObj: any, prefix = '') => {
const fullPath = prefix ? `${prefix}.${path}` : path
if (typeof outputObj === 'object' && outputObj !== null) {
// For objects, recursively add each property
// If not an object or is null, treat as leaf node
if (typeof outputObj !== 'object' || outputObj === null) {
const output = {
id: `${block.id}_${fullPath}`,
label: `${blockName}.${fullPath}`,
blockId: block.id,
blockName: block.name || `Block ${block.id}`,
blockType: block.type,
path: fullPath,
}
outputs.push(output)
return
}
// If has 'type' property, treat as schema definition (leaf node)
if ('type' in outputObj && typeof outputObj.type === 'string') {
const output = {
id: `${block.id}_${fullPath}`,
label: `${blockName}.${fullPath}`,
blockId: block.id,
blockName: block.name || `Block ${block.id}`,
blockType: block.type,
path: fullPath,
}
outputs.push(output)
return
}
// For objects without type, recursively add each property
if (!Array.isArray(outputObj)) {
Object.entries(outputObj).forEach(([key, value]) => {
addOutput(key, value, fullPath)
})
} else {
// Add leaf node as output option
// For arrays, treat as leaf node
outputs.push({
id: `${block.id}_${fullPath}`,
label: `${blockName}.${fullPath}`,
@@ -71,10 +99,10 @@ export function OutputSelect({
}
}
// Start with the response object
if (block.outputs.response) {
addOutput('response', block.outputs.response)
}
// Process all output properties directly (flattened structure)
Object.entries(block.outputs).forEach(([key, value]) => {
addOutput(key, value)
})
}
})
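To illustrate the traversal rules above, here is a hypothetical flattened outputs object and the paths it would emit, assuming a block with id 'agent1':

// Hypothetical outputs shape; comments show which rule fires for each entry.
const outputs = {
  content: { type: 'string' },                        // schema leaf  -> "content"
  tokens: { prompt: 'number', completion: 'number' }, // plain object -> recurse
  toolCalls: ['search'],                              // array        -> leaf -> "toolCalls"
}
// Emitted option ids: 'agent1_content', 'agent1_tokens.prompt',
// 'agent1_tokens.completion', 'agent1_toolCalls'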

View File

@@ -145,11 +145,13 @@ export const Toolbar = React.memo(() => {
{blocks.map((block) => (
<ToolbarBlock key={block.type} config={block} disabled={!userPermissions.canEdit} />
))}
{activeTab === 'blocks' && !searchQuery && (
<>
<LoopToolbarItem disabled={!userPermissions.canEdit} />
<ParallelToolbarItem disabled={!userPermissions.canEdit} />
</>
{((activeTab === 'blocks' && !searchQuery) ||
(searchQuery && 'loop'.includes(searchQuery.toLowerCase()))) && (
<LoopToolbarItem disabled={!userPermissions.canEdit} />
)}
{((activeTab === 'blocks' && !searchQuery) ||
(searchQuery && 'parallel'.includes(searchQuery.toLowerCase()))) && (
<ParallelToolbarItem disabled={!userPermissions.canEdit} />
)}
</div>
</div>
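The duplicated visibility condition above could be read as a single predicate; a hypothetical sketch:

// Hypothetical predicate for the rule used by both toolbar items above:
// show on the blocks tab with no query, or when the query matches the item name.
const showSpecialItem = (name: string, activeTab: string, searchQuery: string): boolean =>
  (activeTab === 'blocks' && !searchQuery) ||
  (!!searchQuery && name.includes(searchQuery.toLowerCase()))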

View File

@@ -4,10 +4,11 @@ import {
type ConnectedBlock,
useBlockConnections,
} from '@/app/workspace/[workspaceId]/w/[workflowId]/hooks/use-block-connections'
import { useSubBlockStore } from '@/stores/workflows/subblock/store'
import { getBlock } from '@/blocks'
interface ConnectionBlocksProps {
blockId: string
horizontalHandles: boolean
setIsConnecting: (isConnecting: boolean) => void
isDisabled?: boolean
}
@@ -20,6 +21,7 @@ interface ResponseField {
export function ConnectionBlocks({
blockId,
horizontalHandles,
setIsConnecting,
isDisabled = false,
}: ConnectionBlocksProps) {
@@ -39,6 +41,10 @@ export function ConnectionBlocks({
e.stopPropagation() // Prevent parent drag handlers from firing
setIsConnecting(true)
// If no specific field is provided, use all available output types
const outputType = field ? field.name : connection.outputType
e.dataTransfer.setData(
'application/json',
JSON.stringify({
@@ -46,9 +52,13 @@ export function ConnectionBlocks({
connectionData: {
id: connection.id,
name: connection.name,
outputType: field ? field.name : connection.outputType,
outputType: outputType,
sourceBlockId: connection.id,
fieldType: field?.type,
// Include all available output types for reference
allOutputTypes: Array.isArray(connection.outputType)
? connection.outputType
: [connection.outputType],
},
})
)
@@ -59,147 +69,59 @@ export function ConnectionBlocks({
setIsConnecting(false)
}
// Helper function to extract fields from JSON Schema
const extractFieldsFromSchema = (connection: ConnectedBlock): ResponseField[] => {
// Handle legacy format with fields array
if (connection.responseFormat?.fields) {
return connection.responseFormat.fields
}
// Handle new JSON Schema format
const schema = connection.responseFormat?.schema || connection.responseFormat
// Safely check if schema and properties exist
if (
!schema ||
typeof schema !== 'object' ||
!('properties' in schema) ||
typeof schema.properties !== 'object'
) {
return []
}
return Object.entries(schema.properties).map(([name, prop]: [string, any]) => ({
name,
type: Array.isArray(prop) ? 'array' : prop.type || 'string',
description: prop.description,
}))
}
// Extract fields from starter block input format
const extractFieldsFromStarterInput = (connection: ConnectedBlock): ResponseField[] => {
// Only process for starter blocks
if (connection.type !== 'starter') return []
try {
// Get input format from subblock store
const inputFormat = useSubBlockStore.getState().getValue(connection.id, 'inputFormat')
// Make sure we have a valid input format
if (!inputFormat || !Array.isArray(inputFormat) || inputFormat.length === 0) {
return [{ name: 'input', type: 'any' }]
}
// Check if any fields have been configured with names
const hasConfiguredFields = inputFormat.some(
(field: any) => field.name && field.name.trim() !== ''
)
// If no fields have been configured, return the default input field
if (!hasConfiguredFields) {
return [{ name: 'input', type: 'any' }]
}
// Map input fields to response fields
return inputFormat.map((field: any) => ({
name: `input.${field.name}`,
type: field.type || 'string',
description: field.description,
}))
} catch (e) {
console.error('Error extracting fields from starter input format:', e)
return [{ name: 'input', type: 'any' }]
}
}
// Deduplicate connections by ID
const connectionMap = incomingConnections.reduce(
(acc, connection) => {
acc[connection.id] = connection
return acc
},
{} as Record<string, ConnectedBlock>
)
// Sort connections by name
const sortedConnections = Object.values(connectionMap).sort((a, b) =>
a.name.localeCompare(b.name)
)
// Use connections in distance order (already sorted and deduplicated by the hook)
const sortedConnections = incomingConnections
// Helper function to render a connection card
const renderConnectionCard = (connection: ConnectedBlock, field?: ResponseField) => {
const displayName = connection.name.replace(/\s+/g, '').toLowerCase()
const renderConnectionCard = (connection: ConnectedBlock) => {
// Get block configuration for icon and color
const blockConfig = getBlock(connection.type)
const displayName = connection.name // Use the actual block name instead of transforming it
const Icon = blockConfig?.icon
const bgColor = blockConfig?.bgColor || '#6B7280' // Fallback to gray
return (
<Card
key={`${field ? field.name : connection.id}`}
key={`${connection.id}-${connection.name}`}
draggable={!isDisabled}
onDragStart={(e) => handleDragStart(e, connection, field)}
onDragStart={(e) => handleDragStart(e, connection)}
onDragEnd={handleDragEnd}
className={cn(
'group flex w-max items-center rounded-lg border bg-card p-2 shadow-sm transition-colors',
'group flex w-max items-center gap-2 rounded-lg border bg-card p-2 shadow-sm transition-colors',
!isDisabled
? 'cursor-grab hover:bg-accent/50 active:cursor-grabbing'
: 'cursor-not-allowed opacity-60'
)}
>
{/* Block icon with color */}
{Icon && (
<div
className='flex h-5 w-5 flex-shrink-0 items-center justify-center rounded'
style={{ backgroundColor: bgColor }}
>
<Icon className='h-3 w-3 text-white' />
</div>
)}
<div className='text-sm'>
<span className='font-medium leading-none'>{displayName}</span>
<span className='text-muted-foreground'>
{field
? `.${field.name}`
: typeof connection.outputType === 'string'
? `.${connection.outputType}`
: ''}
</span>
</div>
</Card>
)
}
return (
<div className='absolute top-0 right-full flex max-h-[400px] flex-col items-end space-y-2 overflow-y-auto pr-5'>
{sortedConnections.map((connection, index) => {
// Special handling for starter blocks with input format
if (connection.type === 'starter') {
const starterFields = extractFieldsFromStarterInput(connection)
// Generate all connection cards - one per block, not per output field
const connectionCards: React.ReactNode[] = []
if (starterFields.length > 0) {
return (
<div key={connection.id} className='space-y-2'>
{starterFields.map((field) => renderConnectionCard(connection, field))}
</div>
)
}
}
sortedConnections.forEach((connection) => {
connectionCards.push(renderConnectionCard(connection))
})
// Regular connection handling
return (
<div key={`${connection.id}-${index}`} className='space-y-2'>
{Array.isArray(connection.outputType)
? // Handle array of field names
connection.outputType.map((fieldName) => {
// Try to find field in response format
const fields = extractFieldsFromSchema(connection)
const field = fields.find((f) => f.name === fieldName) || {
name: fieldName,
type: 'string',
}
// Position and layout based on handle orientation - reverse of ports
// When ports are horizontal: connection blocks on top, aligned to left, closest blocks on bottom row
// When ports are vertical (default): connection blocks on left, stack vertically, aligned to right
const containerClasses = horizontalHandles
? 'absolute bottom-full left-0 flex max-w-[600px] flex-wrap-reverse gap-2 pb-3'
: 'absolute top-0 right-full flex max-h-[400px] max-w-[200px] flex-col items-end gap-2 overflow-y-auto pr-3'
return renderConnectionCard(connection, field)
})
: renderConnectionCard(connection)}
</div>
)
})}
</div>
)
return <div className={containerClasses}>{connectionCards}</div>
}

View File

@@ -26,10 +26,10 @@ interface ScheduleConfigProps {
export function ScheduleConfig({
blockId,
subBlockId,
subBlockId: _subBlockId,
isConnecting,
isPreview = false,
previewValue,
previewValue: _previewValue,
disabled = false,
}: ScheduleConfigProps) {
const [error, setError] = useState<string | null>(null)
@@ -56,13 +56,7 @@ export function ScheduleConfig({
// Get the startWorkflow value to determine if scheduling is enabled
// and expose the setter so we can update it
const [startWorkflow, setStartWorkflow] = useSubBlockValue(blockId, 'startWorkflow')
const isScheduleEnabled = startWorkflow === 'schedule'
const [storeValue, setStoreValue] = useSubBlockValue(blockId, subBlockId)
// Use preview value when in preview mode, otherwise use store value
const value = isPreview ? previewValue : storeValue
const [_startWorkflow, setStartWorkflow] = useSubBlockValue(blockId, 'startWorkflow')
// Function to check if schedule exists in the database
const checkSchedule = async () => {
@@ -110,10 +104,17 @@ export function ScheduleConfig({
// Check for schedule on mount and when relevant dependencies change
useEffect(() => {
// Always check for schedules regardless of the UI setting
// This ensures we detect schedules even when the UI is set to manual
checkSchedule()
}, [workflowId, scheduleType, isModalOpen, refreshCounter])
// Only check for schedules when workflowId changes or modal opens
// Avoid checking on every scheduleType change to prevent excessive API calls
if (workflowId && (isModalOpen || refreshCounter > 0)) {
checkSchedule()
}
// Cleanup function to reset loading state
return () => {
setIsLoading(false)
}
}, [workflowId, isModalOpen, refreshCounter])
// Format the schedule information for display
const getScheduleInfo = () => {

View File

@@ -1,5 +1,6 @@
import { useEffect, useRef, useState } from 'react'
import { BookOpen, Code, Info, RectangleHorizontal, RectangleVertical } from 'lucide-react'
import { useParams } from 'next/navigation'
import { Handle, type NodeProps, Position, useUpdateNodeInternals } from 'reactflow'
import { Badge } from '@/components/ui/badge'
import { Button } from '@/components/ui/button'
@@ -83,6 +84,11 @@ export function WorkflowBlock({ id, data }: NodeProps<WorkflowBlockProps>) {
const isActiveBlock = useExecutionStore((state) => state.activeBlockIds.has(id))
const isActive = dataIsActive || isActiveBlock
// Get the current workflow ID from URL params instead of global state
// This prevents race conditions when switching workflows rapidly
const params = useParams()
const currentWorkflowId = params.workflowId as string
const reactivateSchedule = async (scheduleId: string) => {
try {
const response = await fetch(`/api/schedules/${scheduleId}`, {
@@ -94,7 +100,10 @@ export function WorkflowBlock({ id, data }: NodeProps<WorkflowBlockProps>) {
})
if (response.ok) {
fetchScheduleInfo()
// Use the current workflow ID from params instead of global state
if (currentWorkflowId) {
fetchScheduleInfo(currentWorkflowId)
}
} else {
console.error('Failed to reactivate schedule')
}
@@ -103,11 +112,11 @@ export function WorkflowBlock({ id, data }: NodeProps<WorkflowBlockProps>) {
}
}
const fetchScheduleInfo = async () => {
const fetchScheduleInfo = async (workflowId: string) => {
if (!workflowId) return
try {
setIsLoadingScheduleInfo(true)
const workflowId = useWorkflowRegistry.getState().activeWorkflowId
if (!workflowId) return
const response = await fetch(`/api/schedules?workflowId=${workflowId}&mode=schedule`, {
cache: 'no-store',
@@ -176,12 +185,18 @@ export function WorkflowBlock({ id, data }: NodeProps<WorkflowBlockProps>) {
}
useEffect(() => {
if (type === 'starter') {
fetchScheduleInfo()
if (type === 'starter' && currentWorkflowId) {
fetchScheduleInfo(currentWorkflowId)
} else {
setScheduleInfo(null)
setIsLoadingScheduleInfo(false) // Reset loading state when not a starter block
}
}, [type])
// Cleanup function to reset loading state when component unmounts or workflow changes
return () => {
setIsLoadingScheduleInfo(false)
}
}, [type, currentWorkflowId])
// Get webhook information for the tooltip
useEffect(() => {
@@ -436,6 +451,7 @@ export function WorkflowBlock({ id, data }: NodeProps<WorkflowBlockProps>) {
blockId={id}
setIsConnecting={setIsConnecting}
isDisabled={!userPermissions.canEdit}
horizontalHandles={horizontalHandles}
/>
{/* Input Handle - Don't show for starter blocks */}
@@ -683,7 +699,7 @@ export function WorkflowBlock({ id, data }: NodeProps<WorkflowBlockProps>) {
{Object.entries(config.outputs).map(([key, value]) => (
<div key={key} className='mb-1'>
<span className='text-muted-foreground'>{key}</span>{' '}
{typeof value.type === 'object' ? (
{typeof value === 'object' ? (
<div className='mt-1 pl-3'>
{Object.entries(value.type).map(([typeKey, typeValue]) => (
<div key={typeKey} className='flex items-start'>
@@ -697,7 +713,7 @@ export function WorkflowBlock({ id, data }: NodeProps<WorkflowBlockProps>) {
))}
</div>
) : (
<span className='text-green-500'>{value.type as string}</span>
<span className='text-green-500'>{value as string}</span>
)}
</div>
))}

View File

@@ -1,4 +1,5 @@
import { shallow } from 'zustand/shallow'
import { BlockPathCalculator } from '@/lib/block-path-calculator'
import { createLogger } from '@/lib/logs/console-logger'
import { useSubBlockStore } from '@/stores/workflows/subblock/store'
import { useWorkflowStore } from '@/stores/workflows/workflow/store'
@@ -53,63 +54,6 @@ function extractFieldsFromSchema(schema: any): Field[] {
}))
}
/**
* Finds all blocks along paths leading to the target block
* This is a reverse traversal from the target node to find all ancestors
* along connected paths
* @param edges - List of all edges in the graph
* @param targetNodeId - ID of the target block we're finding connections for
* @returns Array of unique ancestor node IDs
*/
function findAllPathNodes(edges: any[], targetNodeId: string): string[] {
// We'll use a reverse topological sort approach by tracking "distance" from target
const nodeDistances = new Map<string, number>()
const visited = new Set<string>()
const queue: [string, number][] = [[targetNodeId, 0]] // [nodeId, distance]
const pathNodes = new Set<string>()
// Build a reverse adjacency list for faster traversal
const reverseAdjList: Record<string, string[]> = {}
for (const edge of edges) {
if (!reverseAdjList[edge.target]) {
reverseAdjList[edge.target] = []
}
reverseAdjList[edge.target].push(edge.source)
}
// BFS to find all ancestors and their shortest distance from target
while (queue.length > 0) {
const [currentNodeId, distance] = queue.shift()!
if (visited.has(currentNodeId)) {
// If we've seen this node before, update its distance if this path is shorter
const currentDistance = nodeDistances.get(currentNodeId) || Number.POSITIVE_INFINITY
if (distance < currentDistance) {
nodeDistances.set(currentNodeId, distance)
}
continue
}
visited.add(currentNodeId)
nodeDistances.set(currentNodeId, distance)
// Don't add the target node itself to the results
if (currentNodeId !== targetNodeId) {
pathNodes.add(currentNodeId)
}
// Get all incoming edges from the reverse adjacency list
const incomingNodeIds = reverseAdjList[currentNodeId] || []
// Add all source nodes to the queue with incremented distance
for (const sourceId of incomingNodeIds) {
queue.push([sourceId, distance + 1])
}
}
return Array.from(pathNodes)
}
export function useBlockConnections(blockId: string) {
const { edges, blocks } = useWorkflowStore(
(state) => ({
@@ -120,7 +64,7 @@ export function useBlockConnections(blockId: string) {
)
// Find all blocks along paths leading to this block
const allPathNodeIds = findAllPathNodes(edges, blockId)
const allPathNodeIds = BlockPathCalculator.findAllPathNodes(edges, blockId)
// Map each path node to a ConnectedBlock structure
const allPathConnections = allPathNodeIds
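Judging from the call site above, the shared helper keeps the removed function's signature. A usage sketch with a hypothetical edge list:

// Usage sketch; the signature is inferred from the call site and the removed
// local implementation, and the edges below are hypothetical.
import { BlockPathCalculator } from '@/lib/block-path-calculator'

const edges = [
  { source: 'start', target: 'agent' },
  { source: 'agent', target: 'condition' },
  { source: 'condition', target: 'api' },
]

// Ancestors of 'api' along connected paths: ['condition', 'agent', 'start']
const ancestors = BlockPathCalculator.findAllPathNodes(edges, 'api')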

View File

@@ -82,9 +82,9 @@ export function useWorkflowExecution() {
}
// If this was a streaming response and we have the final content, update it
if (streamContent && result.output?.response && typeof streamContent === 'string') {
if (streamContent && result.output && typeof streamContent === 'string') {
// Update the content with the final streaming content
enrichedResult.output.response.content = streamContent
enrichedResult.output.content = streamContent
// Also update any block logs to include the content where appropriate
if (enrichedResult.logs) {
@@ -97,10 +97,9 @@ export function useWorkflowExecution() {
if (
isStreamingBlock &&
(log.blockType === 'agent' || log.blockType === 'router') &&
log.output?.response
) {
log.output.response.content = streamContent
}
log.output
)
log.output.content = streamContent
}
}
}
@@ -122,7 +121,7 @@ export function useWorkflowExecution() {
return executionId
} catch (error) {
logger.error('Error persisting logs:', { error })
logger.error('Error persisting logs:', error)
return executionId
}
}
@@ -215,8 +214,8 @@ export function useWorkflowExecution() {
result.logs?.forEach((log: BlockLog) => {
if (streamedContent.has(log.blockId)) {
const content = streamedContent.get(log.blockId) || ''
if (log.output?.response) {
log.output.response.content = content
if (log.output) {
log.output.content = content
}
useConsoleStore.getState().updateConsole(log.blockId, content)
}
@@ -225,9 +224,9 @@ export function useWorkflowExecution() {
controller.enqueue(
encoder.encode(`data: ${JSON.stringify({ event: 'final', data: result })}\n\n`)
)
persistLogs(executionId, result).catch((err) => {
logger.error('Error persisting logs:', { error: err })
})
persistLogs(executionId, result).catch((err) =>
logger.error('Error persisting logs:', err)
)
}
} catch (error: any) {
controller.error(error)
@@ -437,7 +436,7 @@ export function useWorkflowExecution() {
const errorResult: ExecutionResult = {
success: false,
output: { response: {} },
output: {},
error: errorMessage,
logs: [],
}
@@ -560,7 +559,7 @@ export function useWorkflowExecution() {
// Create error result
const errorResult = {
success: false,
output: { response: {} },
output: {},
error: errorMessage,
logs: debugContext.blockLogs,
}
@@ -647,7 +646,7 @@ export function useWorkflowExecution() {
let currentResult: ExecutionResult = {
success: true,
output: { response: {} },
output: {},
logs: debugContext.blockLogs,
}
@@ -743,7 +742,7 @@ export function useWorkflowExecution() {
// Create error result
const errorResult = {
success: false,
output: { response: {} },
output: {},
error: errorMessage,
logs: debugContext.blockLogs,
}

View File

@@ -15,9 +15,14 @@ import { useFolderStore } from '@/stores/folders/store'
interface CreateMenuProps {
onCreateWorkflow: (folderId?: string) => void
isCollapsed?: boolean
isCreatingWorkflow?: boolean
}
export function CreateMenu({ onCreateWorkflow, isCollapsed }: CreateMenuProps) {
export function CreateMenu({
onCreateWorkflow,
isCollapsed,
isCreatingWorkflow = false,
}: CreateMenuProps) {
const [showFolderDialog, setShowFolderDialog] = useState(false)
const [folderName, setFolderName] = useState('')
const [isCreating, setIsCreating] = useState(false)
@@ -73,6 +78,7 @@ export function CreateMenu({ onCreateWorkflow, isCollapsed }: CreateMenuProps) {
onClick={handleCreateWorkflow}
onMouseEnter={() => setIsHoverOpen(true)}
onMouseLeave={() => setIsHoverOpen(false)}
disabled={isCreatingWorkflow}
>
<Plus
className={cn(
@@ -101,11 +107,17 @@ export function CreateMenu({ onCreateWorkflow, isCollapsed }: CreateMenuProps) {
onCloseAutoFocus={(e) => e.preventDefault()}
>
<button
className='flex w-full cursor-default select-none items-center gap-2 rounded-sm px-2 py-1.5 text-sm outline-none transition-colors hover:bg-accent hover:text-accent-foreground'
className={cn(
'flex w-full cursor-default select-none items-center gap-2 rounded-sm px-2 py-1.5 text-sm outline-none transition-colors',
isCreatingWorkflow
? 'cursor-not-allowed opacity-50'
: 'hover:bg-accent hover:text-accent-foreground'
)}
onClick={handleCreateWorkflow}
disabled={isCreatingWorkflow}
>
<File className='h-4 w-4' />
New Workflow
{isCreatingWorkflow ? 'Creating...' : 'New Workflow'}
</button>
<button
className='flex w-full cursor-default select-none items-center gap-2 rounded-sm px-2 py-1.5 text-sm outline-none transition-colors hover:bg-accent hover:text-accent-foreground'

View File

@@ -19,6 +19,7 @@ const TOOLTIPS = {
debugMode: 'Enable visual debugging information during execution.',
autoConnect: 'Automatically connect nodes.',
autoFillEnvVars: 'Automatically fill API keys.',
autoPan: 'Automatically pan to active blocks during workflow execution.',
}
export function General() {
@@ -30,11 +31,13 @@ export function General() {
const isAutoConnectEnabled = useGeneralStore((state) => state.isAutoConnectEnabled)
const isDebugModeEnabled = useGeneralStore((state) => state.isDebugModeEnabled)
const isAutoFillEnvVarsEnabled = useGeneralStore((state) => state.isAutoFillEnvVarsEnabled)
const isAutoPanEnabled = useGeneralStore((state) => state.isAutoPanEnabled)
const setTheme = useGeneralStore((state) => state.setTheme)
const toggleAutoConnect = useGeneralStore((state) => state.toggleAutoConnect)
const toggleDebugMode = useGeneralStore((state) => state.toggleDebugMode)
const toggleAutoFillEnvVars = useGeneralStore((state) => state.toggleAutoFillEnvVars)
const toggleAutoPan = useGeneralStore((state) => state.toggleAutoPan)
const loadSettings = useGeneralStore((state) => state.loadSettings)
useEffect(() => {
@@ -66,6 +69,12 @@ export function General() {
}
}
const handleAutoPanChange = (checked: boolean) => {
if (checked !== isAutoPanEnabled) {
toggleAutoPan()
}
}
const handleRetry = () => {
setRetryCount((prev) => prev + 1)
}
@@ -200,6 +209,35 @@ export function General() {
disabled={isLoading}
/>
</div>
<div className='flex items-center justify-between py-1'>
<div className='flex items-center gap-2'>
<Label htmlFor='auto-pan' className='font-medium'>
Auto-pan during execution
</Label>
<Tooltip>
<TooltipTrigger asChild>
<Button
variant='ghost'
size='sm'
className='h-7 p-1 text-gray-500'
aria-label='Learn more about auto-pan feature'
disabled={isLoading}
>
<Info className='h-5 w-5' />
</Button>
</TooltipTrigger>
<TooltipContent side='top' className='max-w-[300px] p-3'>
<p className='text-sm'>{TOOLTIPS.autoPan}</p>
</TooltipContent>
</Tooltip>
</div>
<Switch
id='auto-pan'
checked={isAutoPanEnabled}
onCheckedChange={handleAutoPanChange}
disabled={isLoading}
/>
</div>
</>
)}
</div>

View File

@@ -41,6 +41,9 @@ export function Sidebar() {
const { isPending: sessionLoading } = useSession()
const userPermissions = useUserPermissionsContext()
const isLoading = workflowsLoading || sessionLoading
// Add state to prevent multiple simultaneous workflow creations
const [isCreatingWorkflow, setIsCreatingWorkflow] = useState(false)
const router = useRouter()
const params = useParams()
const workspaceId = params.workspaceId as string
@@ -108,7 +111,14 @@ export function Sidebar() {
// Create workflow handler
const handleCreateWorkflow = async (folderId?: string) => {
// Prevent multiple simultaneous workflow creations
if (isCreatingWorkflow) {
logger.info('Workflow creation already in progress, ignoring request')
return
}
try {
setIsCreatingWorkflow(true)
const id = await createWorkflow({
workspaceId: workspaceId || undefined,
folderId: folderId || undefined,
@@ -116,6 +126,8 @@ export function Sidebar() {
router.push(`/workspace/${workspaceId}/w/${id}`)
} catch (error) {
logger.error('Error creating workflow:', error)
} finally {
setIsCreatingWorkflow(false)
}
}
@@ -173,7 +185,11 @@ export function Sidebar() {
{isLoading ? <Skeleton className='h-4 w-16' /> : 'Workflows'}
</h2>
{!isCollapsed && !isLoading && (
<CreateMenu onCreateWorkflow={handleCreateWorkflow} isCollapsed={false} />
<CreateMenu
onCreateWorkflow={handleCreateWorkflow}
isCollapsed={false}
isCreatingWorkflow={isCreatingWorkflow}
/>
)}
</div>
<FolderTree

View File

@@ -332,25 +332,9 @@ export const AgentBlock: BlockConfig<AgentResponse> = {
tools: { type: 'json', required: false },
},
outputs: {
response: {
type: {
content: 'string',
model: 'string',
tokens: 'any',
toolCalls: 'any',
},
dependsOn: {
subBlockId: 'responseFormat',
condition: {
whenEmpty: {
content: 'string',
model: 'string',
tokens: 'any',
toolCalls: 'any',
},
whenFilled: 'json',
},
},
},
content: 'string',
model: 'string',
tokens: 'any',
toolCalls: 'any',
},
}
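
This is the recurring change in the block definitions that follow: the outputs map loses its `response.type` wrapper (and, where present, the conditional `dependsOn` branch), leaving a flat field-to-type map. Tag references shorten accordingly; block names below are illustrative:

    // Before: <agent1.response.content>   <start.response.input>
    // After:  <agent1.content>            <start.input>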

View File

@@ -179,12 +179,8 @@ export const AirtableBlock: BlockConfig<AirtableResponse> = {
},
// Output structure depends on the operation, covered by AirtableResponse union type
outputs: {
response: {
type: {
records: 'json', // Optional: for list, create, updateMultiple
record: 'json', // Optional: for get, update single
metadata: 'json', // Required: present in all responses
},
},
records: 'json', // Optional: for list, create, updateMultiple
record: 'json', // Optional: for get, update single
metadata: 'json', // Required: present in all responses
},
}

View File

@@ -62,12 +62,8 @@ export const ApiBlock: BlockConfig<RequestResponse> = {
params: { type: 'json', required: false },
},
outputs: {
response: {
type: {
data: 'any',
status: 'number',
headers: 'json',
},
},
data: 'any',
status: 'number',
headers: 'json',
},
}

View File

@@ -112,13 +112,9 @@ export const AutoblocksBlock: BlockConfig<AutoblocksResponse> = {
environment: { type: 'string', required: true },
},
outputs: {
response: {
type: {
promptId: 'string',
version: 'string',
renderedPrompt: 'string',
templates: 'json',
},
},
promptId: 'string',
version: 'string',
renderedPrompt: 'string',
templates: 'json',
},
}

View File

@@ -76,13 +76,9 @@ export const BrowserUseBlock: BlockConfig<BrowserUseResponse> = {
save_browser_data: { type: 'boolean', required: false },
},
outputs: {
response: {
type: {
id: 'string',
success: 'boolean',
output: 'any',
steps: 'json',
},
},
id: 'string',
success: 'boolean',
output: 'any',
steps: 'json',
},
}

View File

@@ -50,10 +50,6 @@ Plain Text: Best for populating a table in free-form style.
data: { type: 'json', required: true },
},
outputs: {
response: {
type: {
data: 'any',
},
},
data: 'any',
},
}

View File

@@ -37,13 +37,9 @@ export const ConditionBlock: BlockConfig<ConditionBlockOutput> = {
},
inputs: {},
outputs: {
response: {
type: {
content: 'string',
conditionResult: 'boolean',
selectedPath: 'json',
selectedConditionId: 'string',
},
},
content: 'string',
conditionResult: 'boolean',
selectedPath: 'json',
selectedConditionId: 'string',
},
}

View File

@@ -109,14 +109,10 @@ export const ConfluenceBlock: BlockConfig<ConfluenceResponse> = {
content: { type: 'string', required: false },
},
outputs: {
response: {
type: {
ts: 'string',
pageId: 'string',
content: 'string',
title: 'string',
success: 'boolean',
},
},
ts: 'string',
pageId: 'string',
content: 'string',
title: 'string',
success: 'boolean',
},
}

View File

@@ -149,11 +149,7 @@ export const DiscordBlock: BlockConfig<DiscordResponse> = {
userId: { type: 'string', required: false },
},
outputs: {
response: {
type: {
message: 'string',
data: 'any',
},
},
message: 'string',
data: 'any',
},
}

View File

@@ -39,11 +39,7 @@ export const ElevenLabsBlock: BlockConfig<ElevenLabsBlockResponse> = {
},
outputs: {
response: {
type: {
audioUrl: 'string',
},
},
audioUrl: 'string',
},
subBlocks: [

View File

@@ -307,25 +307,9 @@ export const EvaluatorBlock: BlockConfig<EvaluatorResponse> = {
content: { type: 'string' as ParamType, required: true },
},
outputs: {
response: {
type: {
content: 'string',
model: 'string',
tokens: 'any',
cost: 'any',
},
dependsOn: {
subBlockId: 'metrics',
condition: {
whenEmpty: {
content: 'string',
model: 'string',
tokens: 'any',
cost: 'any',
},
whenFilled: 'json',
},
},
},
},
content: 'string',
model: 'string',
tokens: 'any',
cost: 'any',
} as any,
}

View File

@@ -190,16 +190,12 @@ export const ExaBlock: BlockConfig<ExaResponse> = {
url: { type: 'string', required: false },
},
outputs: {
response: {
type: {
// Search output
results: 'json',
// Find Similar Links output
similarLinks: 'json',
// Answer output
answer: 'string',
citations: 'json',
},
},
// Search output
results: 'json',
// Find Similar Links output
similarLinks: 'json',
// Answer output
answer: 'string',
citations: 'json',
},
}

View File

@@ -130,11 +130,7 @@ export const FileBlock: BlockConfig<FileParserOutput> = {
file: { type: 'json', required: false },
},
outputs: {
response: {
type: {
files: 'json',
combinedContent: 'string',
},
},
files: 'json',
combinedContent: 'string',
},
}

View File

@@ -90,16 +90,12 @@ export const FirecrawlBlock: BlockConfig<FirecrawlResponse> = {
scrapeOptions: { type: 'json', required: false },
},
outputs: {
response: {
type: {
// Scrape output
markdown: 'string',
html: 'any',
metadata: 'json',
// Search output
data: 'json',
warning: 'any',
},
},
// Scrape output
markdown: 'string',
html: 'any',
metadata: 'json',
// Search output
data: 'json',
warning: 'any',
},
}

View File

@@ -27,11 +27,7 @@ export const FunctionBlock: BlockConfig<CodeExecutionOutput> = {
timeout: { type: 'number', required: false },
},
outputs: {
response: {
type: {
result: 'any',
stdout: 'string',
},
},
result: 'any',
stdout: 'string',
},
}

View File

@@ -167,11 +167,7 @@ export const GitHubBlock: BlockConfig<GitHubResponse> = {
branch: { type: 'string', required: false },
},
outputs: {
response: {
type: {
content: 'string',
metadata: 'json',
},
},
content: 'string',
metadata: 'json',
},
}

View File

@@ -179,11 +179,7 @@ export const GmailBlock: BlockConfig<GmailToolResponse> = {
maxResults: { type: 'number', required: false },
},
outputs: {
response: {
type: {
content: 'string',
metadata: 'json',
},
},
content: 'string',
metadata: 'json',
},
}

View File

@@ -87,11 +87,7 @@ export const GoogleSearchBlock: BlockConfig<GoogleSearchResponse> = {
},
outputs: {
response: {
type: {
items: 'json',
searchInformation: 'json',
} as any,
},
items: 'json',
searchInformation: 'json',
},
}

View File

@@ -284,11 +284,7 @@ export const GoogleCalendarBlock: BlockConfig<GoogleCalendarResponse> = {
sendUpdates: { type: 'string', required: false },
},
outputs: {
response: {
type: {
content: 'string',
metadata: 'json',
},
},
content: 'string',
metadata: 'json',
},
}

View File

@@ -181,12 +181,8 @@ export const GoogleDocsBlock: BlockConfig<GoogleDocsResponse> = {
content: { type: 'string', required: false },
},
outputs: {
response: {
type: {
content: 'string',
metadata: 'json',
updatedContent: 'boolean',
},
},
content: 'string',
metadata: 'json',
updatedContent: 'boolean',
},
}

View File

@@ -265,11 +265,7 @@ export const GoogleDriveBlock: BlockConfig<GoogleDriveResponse> = {
pageSize: { type: 'number', required: false },
},
outputs: {
response: {
type: {
file: 'json',
files: 'json',
},
},
file: 'json',
files: 'json',
},
}

View File

@@ -211,16 +211,12 @@ export const GoogleSheetsBlock: BlockConfig<GoogleSheetsResponse> = {
insertDataOption: { type: 'string', required: false },
},
outputs: {
response: {
type: {
data: 'json',
metadata: 'json',
updatedRange: 'string',
updatedRows: 'number',
updatedColumns: 'number',
updatedCells: 'number',
tableRange: 'string',
},
},
data: 'json',
metadata: 'json',
updatedRange: 'string',
updatedRows: 'number',
updatedColumns: 'number',
updatedCells: 'number',
tableRange: 'string',
},
}

View File

@@ -82,17 +82,13 @@ export const GuestyBlock: BlockConfig<GuestyReservationResponse | GuestyGuestRes
phoneNumber: { type: 'string', required: false },
},
outputs: {
response: {
type: {
id: 'string',
guest: 'json',
checkIn: 'string',
checkOut: 'string',
status: 'string',
listing: 'json',
money: 'json',
guests: 'json',
},
},
id: 'string',
guest: 'json',
checkIn: 'string',
checkOut: 'string',
status: 'string',
listing: 'json',
money: 'json',
guests: 'json',
},
}

View File

@@ -114,12 +114,8 @@ export const HuggingFaceBlock: BlockConfig<HuggingFaceChatResponse> = {
apiKey: { type: 'string', required: true },
},
outputs: {
response: {
type: {
content: 'string',
model: 'string',
usage: 'json',
},
},
content: 'string',
model: 'string',
usage: 'json',
},
}

View File

@@ -153,12 +153,8 @@ export const ImageGeneratorBlock: BlockConfig<DalleResponse> = {
apiKey: { type: 'string', required: true },
},
outputs: {
response: {
type: {
content: 'string',
image: 'string',
metadata: 'json',
},
},
content: 'string',
image: 'string',
metadata: 'json',
},
}

View File

@@ -51,10 +51,6 @@ export const JinaBlock: BlockConfig<ReadUrlResponse> = {
apiKey: { type: 'string', required: true },
},
outputs: {
response: {
type: {
content: 'string',
},
},
content: 'string',
},
}

View File

@@ -187,17 +187,13 @@ export const JiraBlock: BlockConfig<JiraResponse> = {
issueType: { type: 'string', required: false },
},
outputs: {
response: {
type: {
ts: 'string',
issueKey: 'string',
summary: 'string',
description: 'string',
created: 'string',
updated: 'string',
success: 'boolean',
url: 'string',
},
},
ts: 'string',
issueKey: 'string',
summary: 'string',
description: 'string',
created: 'string',
updated: 'string',
success: 'boolean',
url: 'string',
},
}

View File

@@ -38,13 +38,9 @@ export const KnowledgeBlock: BlockConfig = {
content: { type: 'string', required: false },
},
outputs: {
response: {
type: {
results: 'json',
query: 'string',
totalResults: 'number',
},
},
results: 'json',
query: 'string',
totalResults: 'number',
},
subBlocks: [
{

View File

@@ -99,11 +99,7 @@ export const LinearBlock: BlockConfig<LinearResponse> = {
description: { type: 'string', required: false },
},
outputs: {
response: {
type: {
issues: 'json',
issue: 'json',
},
},
issues: 'json',
issue: 'json',
},
}

View File

@@ -63,11 +63,7 @@ export const LinkupBlock: BlockConfig<LinkupSearchToolResponse> = {
},
outputs: {
response: {
type: {
answer: 'string',
sources: 'json',
},
},
answer: 'string',
sources: 'json',
},
}

View File

@@ -290,12 +290,8 @@ export const Mem0Block: BlockConfig<Mem0Response> = {
limit: { type: 'number', required: false },
},
outputs: {
response: {
type: {
ids: 'any',
memories: 'any',
searchResults: 'any',
},
},
ids: 'any',
memories: 'any',
searchResults: 'any',
},
}

View File

@@ -105,12 +105,8 @@ export const MemoryBlock: BlockConfig = {
content: { type: 'string', required: false },
},
outputs: {
response: {
type: {
memories: 'any',
id: 'string',
},
},
memories: 'any',
id: 'string',
},
subBlocks: [
{

View File

@@ -199,17 +199,13 @@ export const MicrosoftExcelBlock: BlockConfig<MicrosoftExcelResponse> = {
valueInputOption: { type: 'string', required: false },
},
outputs: {
response: {
type: {
data: 'json',
metadata: 'json',
updatedRange: 'string',
updatedRows: 'number',
updatedColumns: 'number',
updatedCells: 'number',
index: 'number',
values: 'json',
},
},
data: 'json',
metadata: 'json',
updatedRange: 'string',
updatedRows: 'number',
updatedColumns: 'number',
updatedCells: 'number',
index: 'number',
values: 'json',
},
}

View File

@@ -169,12 +169,8 @@ export const MicrosoftTeamsBlock: BlockConfig<MicrosoftTeamsResponse> = {
content: { type: 'string', required: true },
},
outputs: {
response: {
type: {
content: 'string',
metadata: 'json',
updatedContent: 'boolean',
},
},
content: 'string',
metadata: 'json',
updatedContent: 'boolean',
},
}

View File

@@ -202,11 +202,7 @@ export const MistralParseBlock: BlockConfig<MistralParserOutput> = {
// imageMinSize: { type: 'string', required: false },
},
outputs: {
response: {
type: {
content: 'string',
metadata: 'json',
},
},
content: 'string',
metadata: 'json',
},
}

View File

@@ -174,11 +174,7 @@ export const NotionBlock: BlockConfig<NotionResponse> = {
properties: { type: 'string', required: false },
},
outputs: {
response: {
type: {
content: 'string',
metadata: 'any',
},
},
content: 'string',
metadata: 'any',
},
}

View File

@@ -49,12 +49,8 @@ export const OpenAIBlock: BlockConfig = {
apiKey: { type: 'string', required: true },
},
outputs: {
response: {
type: {
embeddings: 'json',
model: 'string',
usage: 'json',
},
},
embeddings: 'json',
model: 'string',
usage: 'json',
},
}

View File

@@ -140,11 +140,7 @@ export const OutlookBlock: BlockConfig<
maxResults: { type: 'number', required: false },
},
outputs: {
response: {
type: {
message: 'string',
results: 'json',
},
},
message: 'string',
results: 'json',
},
}

View File

@@ -106,12 +106,8 @@ export const PerplexityBlock: BlockConfig<PerplexityChatResponse> = {
apiKey: { type: 'string', required: true },
},
outputs: {
response: {
type: {
content: 'string',
model: 'string',
usage: 'json',
},
},
content: 'string',
model: 'string',
usage: 'json',
},
}

View File

@@ -268,15 +268,11 @@ export const PineconeBlock: BlockConfig<PineconeResponse> = {
},
outputs: {
response: {
type: {
matches: 'any',
upsertedCount: 'any',
data: 'any',
model: 'any',
vector_type: 'any',
usage: 'any',
},
},
matches: 'any',
upsertedCount: 'any',
data: 'any',
model: 'any',
vector_type: 'any',
usage: 'any',
},
}

View File

@@ -181,13 +181,9 @@ export const RedditBlock: BlockConfig<
commentLimit: { type: 'number', required: false },
},
outputs: {
response: {
type: {
subreddit: 'string',
posts: 'json',
post: 'json',
comments: 'json',
},
},
subreddit: 'string',
posts: 'json',
post: 'json',
comments: 'json',
},
}

View File

@@ -92,12 +92,8 @@ export const ResponseBlock: BlockConfig<ResponseBlockOutput> = {
},
},
outputs: {
response: {
type: {
data: 'json',
status: 'number',
headers: 'json',
},
},
data: 'json',
status: 'number',
headers: 'json',
},
}

View File

@@ -180,14 +180,10 @@ export const RouterBlock: BlockConfig<RouterResponse> = {
apiKey: { type: 'string', required: true },
},
outputs: {
response: {
type: {
content: 'string',
model: 'string',
tokens: 'any',
cost: 'any',
selectedPath: 'json',
},
},
content: 'string',
model: 'string',
tokens: 'any',
cost: 'any',
selectedPath: 'json',
},
}

View File

@@ -96,11 +96,7 @@ export const S3Block: BlockConfig<S3Response> = {
s3Uri: { type: 'string', required: true },
},
outputs: {
response: {
type: {
url: 'string',
metadata: 'json',
},
},
url: 'string',
metadata: 'json',
},
}

View File

@@ -69,10 +69,6 @@ export const SerperBlock: BlockConfig<SearchResponse> = {
type: { type: 'string', required: false },
},
outputs: {
response: {
type: {
searchResults: 'json',
},
},
searchResults: 'json',
},
}

View File

@@ -138,11 +138,7 @@ export const SlackBlock: BlockConfig<SlackResponse> = {
text: { type: 'string', required: true },
},
outputs: {
response: {
type: {
ts: 'string',
channel: 'string',
},
},
ts: 'string',
channel: 'string',
},
}

View File

@@ -64,10 +64,6 @@ export const StagehandBlock: BlockConfig<StagehandExtractResponse> = {
apiKey: { type: 'string', required: true },
},
outputs: {
response: {
type: {
data: 'json',
},
},
data: 'json',
},
}

View File

@@ -83,11 +83,7 @@ export const StagehandAgentBlock: BlockConfig<StagehandAgentResponse> = {
outputSchema: { type: 'json', required: false },
},
outputs: {
response: {
type: {
agentResult: 'json',
structuredOutput: 'any',
},
},
agentResult: 'json',
structuredOutput: 'any',
},
}

View File

@@ -1,14 +1,7 @@
import { StartIcon } from '@/components/icons'
import type { ToolResponse } from '@/tools/types'
import type { BlockConfig } from '../types'
interface StarterBlockOutput extends ToolResponse {
output: {
input: any
}
}
export const StarterBlock: BlockConfig<StarterBlockOutput> = {
export const StarterBlock: BlockConfig = {
type: 'starter',
name: 'Starter',
description: 'Start workflow',
@@ -189,11 +182,5 @@ export const StarterBlock: BlockConfig<StarterBlockOutput> = {
inputs: {
input: { type: 'json', required: false },
},
outputs: {
response: {
type: {
input: 'any',
},
},
},
outputs: {},
}
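
With `outputs: {}`, the starter block no longer advertises a response wrapper. Per the tag-dropdown logic later in this diff, a block with no declared outputs is referenced by its bare normalized name, and the workflow input resolves directly under it (consistent with the updated WorkflowBlock description):

    <start>          // bare reference to the starter block
    <start.input>    // input passed into the workflow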

View File

@@ -109,11 +109,7 @@ export const SupabaseBlock: BlockConfig<SupabaseResponse> = {
data: { type: 'string', required: false, requiredForToolCall: true },
},
outputs: {
response: {
type: {
message: 'string',
results: 'json',
},
},
message: 'string',
results: 'json',
},
}

View File

@@ -98,15 +98,11 @@ export const TavilyBlock: BlockConfig<TavilyResponse> = {
extract_depth: { type: 'string', required: false },
},
outputs: {
response: {
type: {
results: 'json',
answer: 'any',
query: 'string',
content: 'string',
title: 'string',
url: 'string',
},
},
results: 'json',
answer: 'any',
query: 'string',
content: 'string',
title: 'string',
url: 'string',
},
}

View File

@@ -55,11 +55,7 @@ export const TelegramBlock: BlockConfig<TelegramMessageResponse> = {
text: { type: 'string', required: true },
},
outputs: {
response: {
type: {
ok: 'boolean',
result: 'json',
},
},
ok: 'boolean',
result: 'json',
},
}

View File

@@ -36,11 +36,7 @@ export const ThinkingBlock: BlockConfig<ThinkingToolResponse> = {
},
outputs: {
response: {
type: {
acknowledgedThought: 'string',
},
},
acknowledgedThought: 'string',
},
tools: {

View File

@@ -93,12 +93,8 @@ export const TranslateBlock: BlockConfig = {
systemPrompt: { type: 'string', required: true },
},
outputs: {
response: {
type: {
content: 'string',
model: 'string',
tokens: 'any',
},
},
content: 'string',
model: 'string',
tokens: 'any',
},
}

View File

@@ -62,13 +62,9 @@ export const TwilioSMSBlock: BlockConfig<TwilioSMSBlockOutput> = {
fromNumber: { type: 'string', required: true },
},
outputs: {
response: {
type: {
success: 'boolean',
messageId: 'any',
status: 'any',
error: 'any',
},
},
success: 'boolean',
messageId: 'any',
status: 'any',
error: 'any',
},
}

View File

@@ -215,23 +215,8 @@ export const TypeformBlock: BlockConfig<TypeformResponse> = {
inline: { type: 'boolean', required: false },
},
outputs: {
response: {
type: {
total_items: 'number',
page_count: 'number',
items: 'json',
},
dependsOn: {
subBlockId: 'operation',
condition: {
whenEmpty: {
total_items: 'number',
page_count: 'number',
items: 'json',
},
whenFilled: 'json',
},
},
},
total_items: 'number',
page_count: 'number',
items: 'json',
},
}

View File

@@ -53,12 +53,8 @@ export const VisionBlock: BlockConfig<VisionResponse> = {
prompt: { type: 'string', required: false },
},
outputs: {
response: {
type: {
content: 'string',
model: 'any',
tokens: 'any',
},
},
content: 'string',
model: 'any',
tokens: 'any',
},
}

View File

@@ -64,12 +64,8 @@ export const WhatsAppBlock: BlockConfig<WhatsAppBlockOutput> = {
accessToken: { type: 'string', required: true },
},
outputs: {
response: {
type: {
success: 'boolean',
messageId: 'any',
error: 'any',
},
},
success: 'boolean',
messageId: 'any',
error: 'any',
},
}

View File

@@ -55,7 +55,7 @@ export const WorkflowBlock: BlockConfig = {
title: 'Input Variable (Optional)',
type: 'short-input',
placeholder: 'Select a variable to pass to the child workflow',
description: 'This variable will be available as start.response.input in the child workflow',
description: 'This variable will be available as start.input in the child workflow',
},
],
tools: {
@@ -74,13 +74,9 @@ export const WorkflowBlock: BlockConfig = {
},
},
outputs: {
response: {
type: {
success: 'boolean',
childWorkflowName: 'string',
result: 'json',
error: 'string',
},
},
success: 'boolean',
childWorkflowName: 'string',
result: 'json',
error: 'string',
},
}

View File

@@ -211,17 +211,13 @@ export const XBlock: BlockConfig<XResponse> = {
includeRecentTweets: { type: 'boolean', required: false },
},
outputs: {
response: {
type: {
tweet: 'json',
replies: 'any',
context: 'any',
tweets: 'json',
includes: 'any',
meta: 'json',
user: 'json',
recentTweets: 'any',
},
},
tweet: 'json',
replies: 'any',
context: 'any',
tweets: 'json',
includes: 'any',
meta: 'json',
user: 'json',
recentTweets: 'any',
},
}

View File

@@ -46,11 +46,7 @@ export const YouTubeBlock: BlockConfig<YouTubeSearchResponse> = {
maxResults: { type: 'number', required: false },
},
outputs: {
response: {
type: {
items: 'json',
totalResults: 'number',
},
},
items: 'json',
totalResults: 'number',
},
}

View File

@@ -157,20 +157,10 @@ export interface BlockConfig<T extends ToolResponse = ToolResponse> {
}
}
inputs: Record<string, ParamConfig>
outputs: {
response: {
type: ToolOutputToValueType<ExtractToolOutput<T>>
dependsOn?: {
subBlockId: string
condition: {
whenEmpty: ToolOutputToValueType<ExtractToolOutput<T>>
whenFilled: 'json'
}
}
visualization?: {
type: 'image'
url: string
}
outputs: ToolOutputToValueType<ExtractToolOutput<T>> & {
visualization?: {
type: 'image'
url: string
}
}
hideFromToolbar?: boolean
@@ -179,11 +169,4 @@ export interface BlockConfig<T extends ToolResponse = ToolResponse> {
// Output configuration rules
export interface OutputConfig {
type: BlockOutput
dependsOn?: {
subBlockId: string
condition: {
whenEmpty: BlockOutput
whenFilled: BlockOutput
}
}
}
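
Under the revised interface, a block's outputs are a flat map from field name to value type, with an optional visualization entry. A self-contained sketch using a stand-in type (MinimalBlockConfig is hypothetical; the real BlockConfig also carries toolbar, subBlocks, tools, and inputs metadata):

    type OutputType = 'string' | 'number' | 'boolean' | 'json' | 'any'

    interface MinimalBlockConfig {
      type: string
      name: string
      outputs: Record<string, OutputType>
    }

    // Mirrors the flattened GitHub/Gmail-style declarations above.
    const exampleBlock: MinimalBlockConfig = {
      type: 'example',
      name: 'Example',
      outputs: {
        content: 'string',
        metadata: 'json',
      },
    }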

View File

@@ -1,48 +1,13 @@
import type { BlockOutput, OutputConfig } from '@/blocks/types'
import type { SubBlockState } from '@/stores/workflows/workflow/types'
interface CodeLine {
id: string
content: string
}
function isEmptyValue(value: SubBlockState['value']): boolean {
if (value === null || value === undefined) return true
if (typeof value === 'string') return value.trim() === ''
if (typeof value === 'number') return false
if (Array.isArray(value)) {
// Handle code editor's array of lines format
if (value.length === 0) return true
if (isCodeEditorValue(value)) {
return value.every((line: any) => !line.content.trim())
}
return value.length === 0
}
return false
}
function isCodeEditorValue(value: any[]): value is CodeLine[] {
return value.length > 0 && 'id' in value[0] && 'content' in value[0]
}
import type { BlockOutput } from '@/blocks/types'
export function resolveOutputType(
outputs: Record<string, OutputConfig>,
subBlocks: Record<string, SubBlockState>
outputs: Record<string, string | BlockOutput>
): Record<string, BlockOutput> {
const resolvedOutputs: Record<string, BlockOutput> = {}
for (const [key, outputConfig] of Object.entries(outputs)) {
// If no dependencies, use the type directly
if (!outputConfig.dependsOn) {
resolvedOutputs[key] = outputConfig.type
continue
}
// Handle dependent output types
const subBlock = subBlocks[outputConfig.dependsOn.subBlockId]
resolvedOutputs[key] = isEmptyValue(subBlock?.value)
? outputConfig.dependsOn.condition.whenEmpty
: outputConfig.dependsOn.condition.whenFilled
for (const [key, outputType] of Object.entries(outputs)) {
// Since dependsOn has been removed, just use the type directly
resolvedOutputs[key] = outputType as BlockOutput
}
return resolvedOutputs
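
With `dependsOn` gone, the resolver no longer inspects sub-block state and reduces to a direct mapping. A usage sketch, feeding it the flattened Agent outputs from earlier in this diff:

    const resolved = resolveOutputType({
      content: 'string',
      model: 'string',
      tokens: 'any',
      toolCalls: 'any',
    })
    // => { content: 'string', model: 'string', tokens: 'any', toolCalls: 'any' }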

View File

@@ -274,7 +274,7 @@ describe('TagDropdown Search and Filtering', () => {
'loop.index',
'loop.currentItem',
'parallel.index',
'block.response.data',
'block.data',
]
const searchTerm = 'user'
@@ -288,7 +288,7 @@ describe('TagDropdown Search and Filtering', () => {
'variable.userName',
'loop.index',
'parallel.currentItem',
'block.response.data',
'block.data',
'variable.userAge',
'loop.currentItem',
]
@@ -313,7 +313,7 @@ describe('TagDropdown Search and Filtering', () => {
expect(variableTags).toEqual(['variable.userName', 'variable.userAge'])
expect(loopTags).toEqual(['loop.index', 'loop.currentItem'])
expect(parallelTags).toEqual(['parallel.currentItem'])
expect(blockTags).toEqual(['block.response.data'])
expect(blockTags).toEqual(['block.data'])
})
})
@@ -358,22 +358,6 @@ describe('checkTagTrigger helper function', () => {
})
describe('extractFieldsFromSchema helper function logic', () => {
test('should extract fields from legacy format with fields array', () => {
const responseFormat = {
fields: [
{ name: 'name', type: 'string', description: 'User name' },
{ name: 'age', type: 'number', description: 'User age' },
],
}
const fields = extractFieldsFromSchema(responseFormat)
expect(fields).toEqual([
{ name: 'name', type: 'string', description: 'User name' },
{ name: 'age', type: 'number', description: 'User age' },
])
})
test('should extract fields from JSON Schema format', () => {
const responseFormat = {
schema: {
@@ -450,6 +434,26 @@ describe('extractFieldsFromSchema helper function logic', () => {
{ name: 'age', type: 'number', description: undefined },
])
})
test('should handle flattened response format (new format)', () => {
const responseFormat = {
schema: {
properties: {
name: { type: 'string', description: 'User name' },
age: { type: 'number', description: 'User age' },
status: { type: 'boolean', description: 'Active status' },
},
},
}
const fields = extractFieldsFromSchema(responseFormat)
expect(fields).toEqual([
{ name: 'name', type: 'string', description: 'User name' },
{ name: 'age', type: 'number', description: 'User age' },
{ name: 'status', type: 'boolean', description: 'Active status' },
])
})
})
describe('TagDropdown Tag Ordering', () => {
@@ -457,7 +461,7 @@ describe('TagDropdown Tag Ordering', () => {
const variableTags = ['variable.userName', 'variable.userAge']
const loopTags = ['loop.index', 'loop.currentItem']
const parallelTags = ['parallel.index']
const blockTags = ['block.response.data']
const blockTags = ['block.data']
const orderedTags = [...variableTags, ...loopTags, ...parallelTags, ...blockTags]
@@ -467,12 +471,12 @@ describe('TagDropdown Tag Ordering', () => {
'loop.index',
'loop.currentItem',
'parallel.index',
'block.response.data',
'block.data',
])
})
test('should create tag index map correctly', () => {
const orderedTags = ['variable.userName', 'loop.index', 'block.response.data']
const orderedTags = ['variable.userName', 'loop.index', 'block.data']
const tagIndexMap = new Map<string, number>()
orderedTags.forEach((tag, index) => {
@@ -481,7 +485,7 @@ describe('TagDropdown Tag Ordering', () => {
expect(tagIndexMap.get('variable.userName')).toBe(0)
expect(tagIndexMap.get('loop.index')).toBe(1)
expect(tagIndexMap.get('block.response.data')).toBe(2)
expect(tagIndexMap.get('block.data')).toBe(2)
expect(tagIndexMap.get('nonexistent')).toBeUndefined()
})
})
@@ -491,39 +495,39 @@ describe('TagDropdown Tag Selection Logic', () => {
const testCases = [
{
description: 'should remove existing closing bracket from incomplete tag',
inputValue: 'Hello <start.response.>',
cursorPosition: 21, // cursor after the dot
tag: 'start.response.input',
expectedResult: 'Hello <start.response.input>',
inputValue: 'Hello <start.>',
cursorPosition: 13, // cursor after the dot
tag: 'start.input',
expectedResult: 'Hello <start.input>',
},
{
description: 'should remove existing closing bracket when replacing tag content',
inputValue: 'Hello <start.response.input>',
cursorPosition: 22, // cursor after 'response.'
tag: 'start.response.data',
expectedResult: 'Hello <start.response.data>',
inputValue: 'Hello <start.input>',
cursorPosition: 12, // cursor after 'start.'
tag: 'start.data',
expectedResult: 'Hello <start.data>',
},
{
description: 'should preserve content after closing bracket',
inputValue: 'Hello <start.response.> world',
cursorPosition: 21,
tag: 'start.response.input',
expectedResult: 'Hello <start.response.input> world',
inputValue: 'Hello <start.> world',
cursorPosition: 13,
tag: 'start.input',
expectedResult: 'Hello <start.input> world',
},
{
description:
'should not affect closing bracket if text between contains invalid characters',
inputValue: 'Hello <start.response.input> and <other>',
cursorPosition: 22,
tag: 'start.response.data',
expectedResult: 'Hello <start.response.data> and <other>',
inputValue: 'Hello <start.input> and <other>',
cursorPosition: 12,
tag: 'start.data',
expectedResult: 'Hello <start.data> and <other>',
},
{
description: 'should handle case with no existing closing bracket',
inputValue: 'Hello <start.response',
cursorPosition: 21,
tag: 'start.response.input',
expectedResult: 'Hello <start.response.input>',
inputValue: 'Hello <start',
cursorPosition: 12,
tag: 'start.input',
expectedResult: 'Hello <start.input>',
},
]
@@ -556,25 +560,25 @@ describe('TagDropdown Tag Selection Logic', () => {
// Valid tag-like text
expect(regex.test('')).toBe(true) // empty string
expect(regex.test('input')).toBe(true)
expect(regex.test('response.data')).toBe(true)
expect(regex.test('content.data')).toBe(true)
expect(regex.test('user_name')).toBe(true)
expect(regex.test('item123')).toBe(true)
expect(regex.test('response.data.item_1')).toBe(true)
expect(regex.test('content.data.item_1')).toBe(true)
// Invalid tag-like text (should not remove closing bracket)
expect(regex.test('input> and more')).toBe(false)
expect(regex.test('response data')).toBe(false) // space
expect(regex.test('content data')).toBe(false) // space
expect(regex.test('user-name')).toBe(false) // hyphen
expect(regex.test('data[')).toBe(false) // bracket
expect(regex.test('response.data!')).toBe(false) // exclamation
expect(regex.test('content.data!')).toBe(false) // exclamation
})
test('should find correct position of last open bracket', () => {
const testCases = [
{ input: 'Hello <start.response', expected: 6 },
{ input: 'Hello <var> and <start.response', expected: 16 },
{ input: 'Hello <start', expected: 6 },
{ input: 'Hello <var> and <start', expected: 16 },
{ input: 'No brackets here', expected: -1 },
{ input: '<start.response', expected: 0 },
{ input: '<start', expected: 0 },
{ input: 'Multiple < < < <last', expected: 15 },
]
@@ -587,7 +591,7 @@ describe('TagDropdown Tag Selection Logic', () => {
test('should find correct position of next closing bracket', () => {
const testCases = [
{ input: 'input>', expected: 5 },
{ input: 'response.data> more text', expected: 13 },
{ input: 'content.data> more text', expected: 12 },
{ input: 'no closing bracket', expected: -1 },
{ input: '>', expected: 0 },
{ input: 'multiple > > > >last', expected: 9 },

View File

@@ -1,33 +1,67 @@
import type React from 'react'
import { useCallback, useEffect, useMemo, useState } from 'react'
import { BlockPathCalculator } from '@/lib/block-path-calculator'
import { createLogger } from '@/lib/logs/console-logger'
import { cn } from '@/lib/utils'
import {
type ConnectedBlock,
useBlockConnections,
} from '@/app/workspace/[workspaceId]/w/[workflowId]/hooks/use-block-connections'
import { getBlock } from '@/blocks'
import { Serializer } from '@/serializer'
import { useVariablesStore } from '@/stores/panel/variables/store'
import type { Variable } from '@/stores/panel/variables/types'
import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
import { useSubBlockStore } from '@/stores/workflows/subblock/store'
import { useWorkflowStore } from '@/stores/workflows/workflow/store'
const logger = createLogger('TagDropdown')
// Type definitions for component data structures
interface BlockTagGroup {
blockName: string
blockId: string
blockType: string
tags: string[]
distance: number
}
interface Field {
name: string
type: string
description?: string
}
interface Metric {
name: string
description: string
range: {
min: number
max: number
// Helper function to extract fields from JSON Schema
export function extractFieldsFromSchema(schema: any): Field[] {
if (!schema || typeof schema !== 'object') {
return []
}
// Handle legacy format with fields array
if (Array.isArray(schema.fields)) {
return schema.fields
}
// Handle new JSON Schema format
const schemaObj = schema.schema || schema
if (!schemaObj || !schemaObj.properties || typeof schemaObj.properties !== 'object') {
return []
}
// Extract fields from schema properties
return Object.entries(schemaObj.properties).map(([name, prop]: [string, any]) => {
// Handle array format like ['string', 'array']
if (Array.isArray(prop)) {
return {
name,
type: prop.includes('array') ? 'array' : prop[0] || 'string',
description: undefined,
}
}
// Handle object format like { type: 'string', description: '...' }
return {
name,
type: prop.type || 'string',
description: prop.description,
}
})
}
interface TagDropdownProps {
@@ -42,32 +76,42 @@ interface TagDropdownProps {
style?: React.CSSProperties
}
// Add a helper function to extract fields from JSON Schema
export const extractFieldsFromSchema = (responseFormat: any): Field[] => {
if (!responseFormat) return []
// Check if tag trigger '<' should show dropdown
export const checkTagTrigger = (text: string, cursorPosition: number): { show: boolean } => {
if (cursorPosition >= 1) {
const textBeforeCursor = text.slice(0, cursorPosition)
const lastOpenBracket = textBeforeCursor.lastIndexOf('<')
const lastCloseBracket = textBeforeCursor.lastIndexOf('>')
// Handle legacy format with fields array
if (Array.isArray(responseFormat.fields)) {
return responseFormat.fields
// Show if we have an unclosed '<' that's not part of a completed tag
if (lastOpenBracket !== -1 && (lastCloseBracket === -1 || lastCloseBracket < lastOpenBracket)) {
return { show: true }
}
}
return { show: false }
}
// Generate output paths from block configuration outputs
const generateOutputPaths = (outputs: Record<string, any>, prefix = ''): string[] => {
const paths: string[] = []
for (const [key, value] of Object.entries(outputs)) {
const currentPath = prefix ? `${prefix}.${key}` : key
if (typeof value === 'string') {
// Simple type like 'string', 'number', 'json', 'any'
paths.push(currentPath)
} else if (typeof value === 'object' && value !== null) {
// Nested object - recurse
const subPaths = generateOutputPaths(value, currentPath)
paths.push(...subPaths)
} else {
// Fallback - add the path
paths.push(currentPath)
}
}
// Handle new JSON Schema format
const schema = responseFormat.schema || responseFormat
if (
!schema ||
typeof schema !== 'object' ||
!('properties' in schema) ||
typeof schema.properties !== 'object' ||
schema.properties === null
) {
return []
}
return Object.entries(schema.properties).map(([name, prop]: [string, any]) => ({
name,
type: Array.isArray(prop) ? 'array' : prop.type || 'string',
description: prop.description,
}))
return paths
}
export const TagDropdown: React.FC<TagDropdownProps> = ({
@@ -81,90 +125,129 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
onClose,
style,
}) => {
// Component state
const [selectedIndex, setSelectedIndex] = useState(0)
// Get available tags from workflow state
// Store hooks for workflow data
const blocks = useWorkflowStore((state) => state.blocks)
const loops = useWorkflowStore((state) => state.loops)
const parallels = useWorkflowStore((state) => state.parallels)
const _edges = useWorkflowStore((state) => state.edges)
const edges = useWorkflowStore((state) => state.edges)
const workflowId = useWorkflowRegistry((state) => state.activeWorkflowId)
// Get variables from variables store
// Store hooks for variables
const getVariablesByWorkflowId = useVariablesStore((state) => state.getVariablesByWorkflowId)
const loadVariables = useVariablesStore((state) => state.loadVariables)
const variables = useVariablesStore((state) => state.variables)
const workflowVariables = workflowId ? getVariablesByWorkflowId(workflowId) : []
// Get all connected blocks using useBlockConnections
const { incomingConnections } = useBlockConnections(blockId)
// Load variables when workflowId changes
// Load variables when workflow changes
useEffect(() => {
if (workflowId) {
loadVariables(workflowId)
}
}, [workflowId, loadVariables])
// Extract search term from input
// Extract current search term from input
const searchTerm = useMemo(() => {
const textBeforeCursor = inputValue.slice(0, cursorPosition)
const match = textBeforeCursor.match(/<([^>]*)$/)
return match ? match[1].toLowerCase() : ''
}, [inputValue, cursorPosition])
// Get source block and compute tags
const { tags, variableInfoMap = {} } = useMemo(() => {
// Helper function to get output paths
const getOutputPaths = (obj: any, prefix = '', isStarterBlock = false): string[] => {
if (typeof obj !== 'object' || obj === null) {
return prefix ? [prefix] : []
// Generate all available tags using BlockPathCalculator and clean block outputs
const {
tags,
variableInfoMap = {},
blockTagGroups = [],
} = useMemo(() => {
// Handle active source block (drag & drop from specific block)
if (activeSourceBlockId) {
const sourceBlock = blocks[activeSourceBlockId]
if (!sourceBlock) {
return { tags: [], variableInfoMap: {}, blockTagGroups: [] }
}
// Special handling for starter block with input format
if (isStarterBlock && prefix === 'response') {
try {
// Check if there's an input format defined
const inputFormatValue = useSubBlockStore
.getState()
.getValue(activeSourceBlockId || blockId, 'inputFormat')
if (inputFormatValue && Array.isArray(inputFormatValue) && inputFormatValue.length > 0) {
// Check if any fields have been configured with names
const hasConfiguredFields = inputFormatValue.some(
(field: any) => field.name && field.name.trim() !== ''
)
// If no fields have been configured, return the default input path
if (!hasConfiguredFields) {
return ['response.input']
}
// Return fields from input format
return inputFormatValue.map((field: any) => `response.input.${field.name}`)
}
} catch (e) {
logger.error('Error parsing input format:', { e })
}
return ['response.input']
const blockConfig = getBlock(sourceBlock.type)
if (!blockConfig) {
return { tags: [], variableInfoMap: {}, blockTagGroups: [] }
}
if ('type' in obj && typeof obj.type === 'string') {
return [prefix]
const blockName = sourceBlock.name || sourceBlock.type
const normalizedBlockName = blockName.replace(/\s+/g, '').toLowerCase()
// Handle blocks with no outputs (like starter) - show as just <blockname>
let blockTags: string[]
if (Object.keys(blockConfig.outputs).length === 0) {
blockTags = [normalizedBlockName]
} else {
const outputPaths = generateOutputPaths(blockConfig.outputs)
blockTags = outputPaths.map((path) => `${normalizedBlockName}.${path}`)
}
return Object.entries(obj).flatMap(([key, value]) => {
const newPrefix = prefix ? `${prefix}.${key}` : key
return getOutputPaths(value, newPrefix, isStarterBlock)
})
const blockTagGroups: BlockTagGroup[] = [
{
blockName,
blockId: activeSourceBlockId,
blockType: sourceBlock.type,
tags: blockTags,
distance: 0,
},
]
return {
tags: blockTags,
variableInfoMap: {},
blockTagGroups,
}
}
// Variables as tags - format as variable.{variableName}
// Create serialized workflow for BlockPathCalculator
const serializer = new Serializer()
const serializedWorkflow = serializer.serializeWorkflow(blocks, edges, loops, parallels)
// Find accessible blocks using BlockPathCalculator
const accessibleBlockIds = BlockPathCalculator.findAllPathNodes(
serializedWorkflow.connections,
blockId
)
// Always include starter block
const starterBlock = Object.values(blocks).find((block) => block.type === 'starter')
if (starterBlock && !accessibleBlockIds.includes(starterBlock.id)) {
accessibleBlockIds.push(starterBlock.id)
}
// Calculate distances from starter block for ordering
const blockDistances: Record<string, number> = {}
if (starterBlock) {
const adjList: Record<string, string[]> = {}
for (const edge of edges) {
if (!adjList[edge.source]) adjList[edge.source] = []
adjList[edge.source].push(edge.target)
}
const visited = new Set<string>()
const queue: [string, number][] = [[starterBlock.id, 0]]
while (queue.length > 0) {
const [currentNodeId, distance] = queue.shift()!
if (visited.has(currentNodeId)) continue
visited.add(currentNodeId)
blockDistances[currentNodeId] = distance
const outgoingNodeIds = adjList[currentNodeId] || []
for (const targetId of outgoingNodeIds) {
queue.push([targetId, distance + 1])
}
}
}
// Create variable tags
const variableTags = workflowVariables.map(
(variable: Variable) => `variable.${variable.name.replace(/\s+/g, '')}`
)
// Create a map of variable tags to their type information
const variableInfoMap = workflowVariables.reduce(
(acc, variable) => {
const tagName = `variable.${variable.name.replace(/\s+/g, '')}`
@@ -177,225 +260,73 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
{} as Record<string, { type: string; id: string }>
)
// Loop tags - Add if this block is in a loop
// Generate loop tags if current block is in a loop
const loopTags: string[] = []
// Check if the current block is part of a loop
const containingLoop = Object.entries(loops).find(([_, loop]) => loop.nodes.includes(blockId))
if (containingLoop) {
const [_loopId, loop] = containingLoop
const loopType = loop.loopType || 'for'
// Add loop.index for all loop types
loopTags.push('loop.index')
// Add forEach specific properties
if (loopType === 'forEach') {
// Add loop.currentItem and loop.items
loopTags.push('loop.currentItem')
loopTags.push('loop.items')
}
}
// Parallel tags - Add if this block is in a parallel
// Generate parallel tags if current block is in parallel
const parallelTags: string[] = []
// Check if the current block is part of a parallel
const containingParallel = Object.entries(parallels || {}).find(([_, parallel]) =>
parallel.nodes.includes(blockId)
)
if (containingParallel) {
// Add parallel.index for all parallel blocks
parallelTags.push('parallel.index')
// Add parallel.currentItem and parallel.items
parallelTags.push('parallel.currentItem')
parallelTags.push('parallel.items')
}
// If we have an active source block ID from a drop, use that specific block only
if (activeSourceBlockId) {
const sourceBlock = blocks[activeSourceBlockId]
if (!sourceBlock) return { tags: [...variableTags] }
// Create block tag groups from accessible blocks
const blockTagGroups: BlockTagGroup[] = []
const allBlockTags: string[] = []
const blockName = sourceBlock.name || sourceBlock.type
for (const accessibleBlockId of accessibleBlockIds) {
const accessibleBlock = blocks[accessibleBlockId]
if (!accessibleBlock) continue
const blockConfig = getBlock(accessibleBlock.type)
if (!blockConfig) continue
const blockName = accessibleBlock.name || accessibleBlock.type
const normalizedBlockName = blockName.replace(/\s+/g, '').toLowerCase()
// First check for evaluator metrics
if (sourceBlock.type === 'evaluator') {
try {
const metricsValue = useSubBlockStore
.getState()
.getValue(activeSourceBlockId, 'metrics') as unknown as Metric[]
if (Array.isArray(metricsValue)) {
return {
tags: [
...variableTags,
...metricsValue.map(
(metric) => `${normalizedBlockName}.response.${metric.name.toLowerCase()}`
),
],
}
}
} catch (e) {
logger.error('Error parsing metrics:', { e })
}
// Handle blocks with no outputs (like starter) - show as just <blockname>
let blockTags: string[]
if (Object.keys(blockConfig.outputs).length === 0) {
blockTags = [normalizedBlockName]
} else {
const outputPaths = generateOutputPaths(blockConfig.outputs)
blockTags = outputPaths.map((path) => `${normalizedBlockName}.${path}`)
}
// Then check for response format
try {
const responseFormatValue = useSubBlockStore
.getState()
.getValue(activeSourceBlockId, 'responseFormat')
if (responseFormatValue) {
const responseFormat =
typeof responseFormatValue === 'string'
? JSON.parse(responseFormatValue)
: responseFormatValue
blockTagGroups.push({
blockName,
blockId: accessibleBlockId,
blockType: accessibleBlock.type,
tags: blockTags,
distance: blockDistances[accessibleBlockId] || 0,
})
if (responseFormat) {
const fields = extractFieldsFromSchema(responseFormat)
if (fields.length > 0) {
return {
tags: [
...variableTags,
...fields.map((field: Field) => `${normalizedBlockName}.response.${field.name}`),
],
}
}
}
}
} catch (e) {
logger.error('Error parsing response format:', { e })
}
// Fall back to default outputs if no response format
const outputPaths = getOutputPaths(sourceBlock.outputs, '', sourceBlock.type === 'starter')
return {
tags: [...variableTags, ...outputPaths.map((path) => `${normalizedBlockName}.${path}`)],
}
allBlockTags.push(...blockTags)
}
// Find parallel and loop blocks connected via end-source handles
const endSourceConnections: ConnectedBlock[] = []
// Sort block groups by distance (closest first)
blockTagGroups.sort((a, b) => a.distance - b.distance)
// Get all edges that connect to this block
const incomingEdges = useWorkflowStore
.getState()
.edges.filter((edge) => edge.target === blockId)
for (const edge of incomingEdges) {
const sourceBlock = blocks[edge.source]
if (!sourceBlock) continue
// Check if this is a parallel-end-source or loop-end-source connection
if (edge.sourceHandle === 'parallel-end-source' && sourceBlock.type === 'parallel') {
const blockName = sourceBlock.name || sourceBlock.type
const normalizedBlockName = blockName.replace(/\s+/g, '').toLowerCase()
// Add the parallel block as a referenceable block with its aggregated results
endSourceConnections.push({
id: sourceBlock.id,
type: sourceBlock.type,
outputType: ['response'],
name: blockName,
responseFormat: {
fields: [
{
name: 'completed',
type: 'boolean',
description: 'Whether all executions completed',
},
{
name: 'results',
type: 'array',
description: 'Aggregated results from all parallel executions',
},
{ name: 'message', type: 'string', description: 'Status message' },
],
},
})
} else if (edge.sourceHandle === 'loop-end-source' && sourceBlock.type === 'loop') {
const blockName = sourceBlock.name || sourceBlock.type
const normalizedBlockName = blockName.replace(/\s+/g, '').toLowerCase()
// Add the loop block as a referenceable block with its aggregated results
endSourceConnections.push({
id: sourceBlock.id,
type: sourceBlock.type,
outputType: ['response'],
name: blockName,
responseFormat: {
fields: [
{
name: 'completed',
type: 'boolean',
description: 'Whether all iterations completed',
},
{
name: 'results',
type: 'array',
description: 'Aggregated results from all loop iterations',
},
{ name: 'message', type: 'string', description: 'Status message' },
],
},
})
}
return {
tags: [...variableTags, ...loopTags, ...parallelTags, ...allBlockTags],
variableInfoMap,
blockTagGroups,
}
// Use all incoming connections plus end-source connections
const allConnections = [...incomingConnections, ...endSourceConnections]
const sourceTags = allConnections.flatMap((connection: ConnectedBlock) => {
const blockName = connection.name || connection.type
const normalizedBlockName = blockName.replace(/\s+/g, '').toLowerCase()
// Extract fields from response format
if (connection.responseFormat) {
const fields = extractFieldsFromSchema(connection.responseFormat)
if (fields.length > 0) {
return fields.map((field: Field) => `${normalizedBlockName}.response.${field.name}`)
}
}
// For evaluator blocks, use metrics
if (connection.type === 'evaluator') {
try {
const metricsValue = useSubBlockStore
.getState()
.getValue(connection.id, 'metrics') as unknown as Metric[]
if (Array.isArray(metricsValue)) {
return metricsValue.map(
(metric) => `${normalizedBlockName}.response.${metric.name.toLowerCase()}`
)
}
} catch (e) {
logger.error('Error parsing metrics:', { e })
return []
}
}
// Fall back to default outputs if no response format
const sourceBlock = blocks[connection.id]
if (!sourceBlock) return []
const outputPaths = getOutputPaths(sourceBlock.outputs, '', sourceBlock.type === 'starter')
return outputPaths.map((path) => `${normalizedBlockName}.${path}`)
})
return { tags: [...variableTags, ...loopTags, ...parallelTags, ...sourceTags], variableInfoMap }
}, [
blocks,
incomingConnections,
blockId,
activeSourceBlockId,
workflowVariables,
loops,
parallels,
])
}, [blocks, edges, loops, parallels, blockId, activeSourceBlockId, workflowVariables])
// Filter tags based on search term
const filteredTags = useMemo(() => {
@@ -403,12 +334,11 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
return tags.filter((tag: string) => tag.toLowerCase().includes(searchTerm))
}, [tags, searchTerm])
// Group tags into variables, loops, and blocks
const { variableTags, loopTags, parallelTags, blockTags } = useMemo(() => {
// Group filtered tags by category
const { variableTags, loopTags, parallelTags, filteredBlockTagGroups } = useMemo(() => {
const varTags: string[] = []
const loopTags: string[] = []
const parTags: string[] = []
const blkTags: string[] = []
filteredTags.forEach((tag) => {
if (tag.startsWith('variable.')) {
@@ -417,20 +347,32 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
loopTags.push(tag)
} else if (tag.startsWith('parallel.')) {
parTags.push(tag)
} else {
blkTags.push(tag)
}
})
return { variableTags: varTags, loopTags: loopTags, parallelTags: parTags, blockTags: blkTags }
}, [filteredTags])
// Filter block tag groups based on search term
const filteredBlockTagGroups = blockTagGroups
.map((group) => ({
...group,
tags: group.tags.filter((tag) => !searchTerm || tag.toLowerCase().includes(searchTerm)),
}))
.filter((group) => group.tags.length > 0)
// Create ordered tags array that matches the display order for keyboard navigation
return {
variableTags: varTags,
loopTags: loopTags,
parallelTags: parTags,
filteredBlockTagGroups,
}
}, [filteredTags, blockTagGroups, searchTerm])
// Create ordered tags for keyboard navigation
const orderedTags = useMemo(() => {
return [...variableTags, ...loopTags, ...parallelTags, ...blockTags]
}, [variableTags, loopTags, parallelTags, blockTags])
const allBlockTags = filteredBlockTagGroups.flatMap((group) => group.tags)
return [...variableTags, ...loopTags, ...parallelTags, ...allBlockTags]
}, [variableTags, loopTags, parallelTags, filteredBlockTagGroups])
// Create a map for efficient tag index lookups
// Create efficient tag index lookup map
const tagIndexMap = useMemo(() => {
const map = new Map<string, number>()
orderedTags.forEach((tag, index) => {
@@ -439,19 +381,7 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
return map
}, [orderedTags])
// Reset selection when filtered results change
useEffect(() => {
setSelectedIndex(0)
}, [searchTerm])
// Ensure selectedIndex stays within bounds when orderedTags changes
useEffect(() => {
if (selectedIndex >= orderedTags.length) {
setSelectedIndex(Math.max(0, orderedTags.length - 1))
}
}, [orderedTags.length, selectedIndex])
// Handle tag selection
// Handle tag selection and text replacement
const handleTagSelect = useCallback(
(tag: string) => {
const textBeforeCursor = inputValue.slice(0, cursorPosition)
@@ -461,34 +391,26 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
const lastOpenBracket = textBeforeCursor.lastIndexOf('<')
if (lastOpenBracket === -1) return
// Process the tag if it's a variable tag
// Process variable tags to maintain compatibility
let processedTag = tag
if (tag.startsWith('variable.')) {
// Get the variable name from the tag (after 'variable.')
const variableName = tag.substring('variable.'.length)
// Find the variable in the store by name
const variableObj = Object.values(variables).find(
(v) => v.name.replace(/\s+/g, '') === variableName
)
// We still use the full tag format internally to maintain compatibility
if (variableObj) {
processedTag = tag
}
}
// Check if there's a closing bracket in textAfterCursor that belongs to the current tag
// Find the first '>' in textAfterCursor (if any)
// Handle existing closing bracket
const nextCloseBracket = textAfterCursor.indexOf('>')
let remainingTextAfterCursor = textAfterCursor
// If there's a '>' right after the cursor or with only whitespace/tag content in between,
// it's likely part of the existing tag being edited, so we should skip it
if (nextCloseBracket !== -1) {
const textBetween = textAfterCursor.slice(0, nextCloseBracket)
// If the text between cursor and '>' contains only tag-like characters (letters, dots, numbers)
// then it's likely part of the current tag being edited
// If text between cursor and '>' contains only tag-like characters, skip it
if (/^[a-zA-Z0-9._]*$/.test(textBetween)) {
remainingTextAfterCursor = textAfterCursor.slice(nextCloseBracket + 1)
}
@@ -502,7 +424,17 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
[inputValue, cursorPosition, variables, onSelect, onClose]
)
// Add and remove keyboard event listener
// Reset selection when search results change
useEffect(() => setSelectedIndex(0), [searchTerm])
// Keep selection within bounds when tags change
useEffect(() => {
if (selectedIndex >= orderedTags.length) {
setSelectedIndex(Math.max(0, orderedTags.length - 1))
}
}, [orderedTags.length, selectedIndex])
// Handle keyboard navigation
useEffect(() => {
if (visible) {
const handleKeyboardEvent = (e: KeyboardEvent) => {
@@ -539,7 +471,7 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
}
}, [visible, selectedIndex, orderedTags, handleTagSelect, onClose])
// Don't render if not visible or no tags
// Early return if dropdown should not be visible
if (!visible || tags.length === 0 || orderedTags.length === 0) return null
return (
@@ -555,6 +487,7 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
<div className='px-3 py-2 text-muted-foreground text-sm'>No matching tags found</div>
) : (
<>
{/* Variables section */}
{variableTags.length > 0 && (
<>
<div className='px-2 pt-2.5 pb-0.5 font-medium text-muted-foreground text-xs'>
@@ -578,8 +511,8 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
)}
onMouseEnter={() => setSelectedIndex(tagIndex >= 0 ? tagIndex : 0)}
onMouseDown={(e) => {
e.preventDefault() // Prevent input blur
e.stopPropagation() // Prevent event bubbling
e.preventDefault()
e.stopPropagation()
handleTagSelect(tag)
}}
onClick={(e) => {
@@ -609,6 +542,7 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
</>
)}
{/* Loop section */}
{loopTags.length > 0 && (
<>
{variableTags.length > 0 && <div className='my-0' />}
@@ -620,10 +554,10 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
const tagIndex = tagIndexMap.get(tag) ?? -1
const loopProperty = tag.split('.')[1]
// Choose appropriate icon/label based on type
// Choose appropriate icon and description based on loop property
let tagIcon = 'L'
let tagDescription = ''
const bgColor = '#8857E6' // Purple for loop variables
const bgColor = '#8857E6'
if (loopProperty === 'currentItem') {
tagIcon = 'i'
@@ -649,8 +583,8 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
)}
onMouseEnter={() => setSelectedIndex(tagIndex >= 0 ? tagIndex : 0)}
onMouseDown={(e) => {
e.preventDefault() // Prevent input blur
e.stopPropagation() // Prevent event bubbling
e.preventDefault()
e.stopPropagation()
handleTagSelect(tag)
}}
onClick={(e) => {
@@ -676,6 +610,7 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
</>
)}
{/* Parallel section */}
{parallelTags.length > 0 && (
<>
{loopTags.length > 0 && <div className='my-0' />}
@@ -687,10 +622,10 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
const tagIndex = tagIndexMap.get(tag) ?? -1
const parallelProperty = tag.split('.')[1]
// Choose appropriate icon/label based on type
// Choose appropriate icon and description based on parallel property
let tagIcon = 'P'
let tagDescription = ''
const bgColor = '#FF5757' // Red for parallel variables
const bgColor = '#FF5757'
if (parallelProperty === 'currentItem') {
tagIcon = 'i'
@@ -716,8 +651,8 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
)}
onMouseEnter={() => setSelectedIndex(tagIndex >= 0 ? tagIndex : 0)}
onMouseDown={(e) => {
e.preventDefault() // Prevent input blur
e.stopPropagation() // Prevent event bubbling
e.preventDefault()
e.stopPropagation()
handleTagSelect(tag)
}}
onClick={(e) => {
@@ -743,68 +678,72 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
</>
)}
{blockTags.length > 0 && (
{/* Block sections */}
{filteredBlockTagGroups.length > 0 && (
<>
{(variableTags.length > 0 || loopTags.length > 0 || parallelTags.length > 0) && (
<div className='my-0' />
)}
<div className='px-2 pt-2.5 pb-0.5 font-medium text-muted-foreground text-xs'>
Blocks
</div>
<div className='-mx-1 -px-1'>
{blockTags.map((tag: string) => {
const tagIndex = tagIndexMap.get(tag) ?? -1
{filteredBlockTagGroups.map((group) => {
// Get block color from configuration
const blockConfig = getBlock(group.blockType)
const blockColor = blockConfig?.bgColor || '#2F55FF'
// Get block name from tag (first part before the dot)
const blockName = tag.split('.')[0]
return (
<div key={group.blockId}>
<div className='border-t px-2 pt-1.5 pb-0.5 font-medium text-muted-foreground text-xs first:border-t-0'>
{group.blockName}
</div>
<div>
{group.tags.map((tag: string) => {
const tagIndex = tagIndexMap.get(tag) ?? -1
// Extract path after block name (e.g., "field" from "blockname.field")
// For root reference blocks, show the block name instead of empty path
const tagParts = tag.split('.')
const path = tagParts.slice(1).join('.')
const displayText = path || group.blockName
// Get block type from blocks
const blockType = Object.values(blocks).find(
(block) =>
(block.name || block.type || '').replace(/\s+/g, '').toLowerCase() ===
blockName
)?.type
// Get block color from block config
const blockConfig = blockType ? getBlock(blockType) : null
const blockColor = blockConfig?.bgColor || '#2F55FF' // Default to blue if not found
return (
<button
key={tag}
className={cn(
'flex w-full items-center gap-2 px-3 py-1.5 text-left text-sm',
'hover:bg-accent hover:text-accent-foreground',
'focus:bg-accent focus:text-accent-foreground focus:outline-none',
tagIndex === selectedIndex &&
tagIndex >= 0 &&
'bg-accent text-accent-foreground'
)}
onMouseEnter={() => setSelectedIndex(tagIndex >= 0 ? tagIndex : 0)}
onMouseDown={(e) => {
e.preventDefault() // Prevent input blur
e.stopPropagation() // Prevent event bubbling
handleTagSelect(tag)
}}
onClick={(e) => {
e.preventDefault()
e.stopPropagation()
handleTagSelect(tag)
}}
>
<div
className='flex h-5 w-5 items-center justify-center rounded'
style={{ backgroundColor: blockColor }}
>
<span className='h-3 w-3 font-bold text-white text-xs'>
{blockName.charAt(0).toUpperCase()}
</span>
</div>
<span className='flex-1 truncate'>{tag}</span>
</button>
)
})}
</div>
return (
<button
key={tag}
className={cn(
'flex w-full items-center gap-2 px-3 py-1.5 text-left text-sm',
'hover:bg-accent hover:text-accent-foreground',
'focus:bg-accent focus:text-accent-foreground focus:outline-none',
tagIndex === selectedIndex &&
tagIndex >= 0 &&
'bg-accent text-accent-foreground'
)}
onMouseEnter={() => setSelectedIndex(tagIndex >= 0 ? tagIndex : 0)}
onMouseDown={(e) => {
e.preventDefault()
e.stopPropagation()
handleTagSelect(tag)
}}
onClick={(e) => {
e.preventDefault()
e.stopPropagation()
handleTagSelect(tag)
}}
>
<div
className='flex h-5 w-5 flex-shrink-0 items-center justify-center rounded'
style={{ backgroundColor: blockColor }}
>
<span className='h-3 w-3 font-bold text-white text-xs'>
{group.blockName.charAt(0).toUpperCase()}
</span>
</div>
<span className='max-w-[calc(100%-32px)] truncate'>
{displayText}
</span>
</button>
)
})}
</div>
</div>
)
})}
</>
)}
</>
@@ -813,18 +752,3 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
</div>
)
}
// Helper function to check for '<' trigger
export const checkTagTrigger = (text: string, cursorPosition: number): { show: boolean } => {
if (cursorPosition >= 1) {
const textBeforeCursor = text.slice(0, cursorPosition)
const lastOpenBracket = textBeforeCursor.lastIndexOf('<')
const lastCloseBracket = textBeforeCursor.lastIndexOf('>')
// Show if we have an unclosed '<' that's not part of a completed tag
if (lastOpenBracket !== -1 && (lastCloseBracket === -1 || lastCloseBracket < lastOpenBracket)) {
return { show: true }
}
}
return { show: false }
}
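
The rewritten dropdown derives tags from static block configs instead of per-connection response formats. A sketch of the two pure helpers in action, using the flattened Agent outputs from earlier in this diff (generateOutputPaths is module-internal; the block name is illustrative):

    generateOutputPaths({ content: 'string', model: 'string', tokens: 'any', toolCalls: 'any' })
    // => ['content', 'model', 'tokens', 'toolCalls']
    // Prefixed with the normalized block name these become the tags:
    //    ['agent1.content', 'agent1.model', 'agent1.tokens', 'agent1.toolCalls']

    checkTagTrigger('Hello <agent1.', 14)          // => { show: true }  (unclosed '<')
    checkTagTrigger('Hello <agent1.content>', 22)  // => { show: false } (tag already closed)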

Some files were not shown because too many files have changed in this diff.