Compare commits


46 Commits

Author SHA1 Message Date
Waleed
2bb68335ee v0.5.79: longer MCP tools timeout, optimize loop/parallel regeneration, enrich.so integration 2026-01-31 21:57:56 -08:00
Waleed
8528fbe2d2 v0.5.78: billing fixes, mcp timeout increase, reactquery migrations, updated tool param visibilities, DSPy and Google Maps integrations 2026-01-31 13:48:22 -08:00
Waleed
31fdd2be13 v0.5.77: room manager redis migration, tool outputs, ui fixes 2026-01-30 14:57:17 -08:00
Waleed
028bc652c2 v0.5.76: posthog improvements, readme updates 2026-01-29 00:13:19 -08:00
Waleed
c6bf5cd58c v0.5.75: search modal overhaul, helm chart updates, run from block, terminal and visual debugging improvements 2026-01-28 22:54:13 -08:00
Vikhyath Mondreti
11dc18a80d v0.5.74: autolayout improvements, clerk integration, auth enforcements 2026-01-27 20:37:39 -08:00
Waleed
ab4e9dc72f v0.5.73: ci, helm updates, kb, ui fixes, note block enhancements 2026-01-26 22:04:35 -08:00
Vikhyath Mondreti
1c58c35bd8 v0.5.72: azure connection string, supabase improvement, multitrigger resolution, docs quick reference 2026-01-25 23:42:27 -08:00
Waleed
d63a5cb504 v0.5.71: ux, ci improvements, docs updates 2026-01-25 03:08:08 -08:00
Waleed
8bd5d41723 v0.5.70: router fix, anthropic agent response format adherence 2026-01-24 20:57:02 -08:00
Waleed
c12931bc50 v0.5.69: kb upgrades, blog, copilot improvements, auth consolidation (#2973)
* fix(subflows): tag dropdown + resolution logic (#2949)

* fix(subflows): tag dropdown + resolution logic

* fixes;

* revert parallel change

* chore(deps): bump posthog-js to 1.334.1 (#2948)

* fix(idempotency): add conflict target to atomicallyClaimDb query + remove redundant db namespace tracking (#2950)

* fix(idempotency): add conflict target to atomicallyClaimDb query

* delete needs to account for namespace

* simplify namespace filtering logic

* fix cleanup

* consistent target

* improvement(kb): add document filtering, select all, and React Query migration (#2951)

* improvement(kb): add document filtering, select all, and React Query migration

* test(kb): update tests for enabledFilter and removed userId params

* fix(kb): remove non-null assertion, add explicit guard

* improvement(logs): trace span, details (#2952)

* improvement(action-bar): ordering

* improvement(logs): details, trace span

* feat(blog): v0.5 release post (#2953)

* feat(blog): v0.5 post

* improvement(blog): simplify title and remove code block header

- Simplified blog title from Introducing Sim Studio v0.5 to Introducing Sim v0.5
- Removed language label header and copy button from code blocks for cleaner appearance

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* ack PR comments

* small styling improvements

* created system to create post-specific components

* updated component

* cache invalidation

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>

* feat(admin): add credits endpoint to issue credits to users (#2954)

* feat(admin): add credits endpoint to issue credits to users

* fix(admin): use existing credit functions and handle enterprise seats

* fix(admin): reject NaN and Infinity in amount validation

* styling

* fix(admin): validate userId and email are strings

* improvement(copilot): fast mode, subagent tool responses and allow preferences (#2955)

* Improvements

* Fix actions mapping

* Remove console logs

* fix(billing): handle missing userStats and prevent crashes (#2956)

* fix(billing): handle missing userStats and prevent crashes

* fix(billing): correct import path for getFilledPillColor

* fix(billing): add Number.isFinite check to lastPeriodCost

* fix(logs): refresh logic to refresh logs details (#2958)

* fix(security): add authentication and input validation to API routes (#2959)

* fix(security): add authentication and input validation to API routes

* moved utils

* remove extraneous comments

* removed unused dep

* improvement(helm): add internal ingress support and same-host path consolidation (#2960)

* improvement(helm): add internal ingress support and same-host path consolidation

* improvement(helm): clean up ingress template comments

Simplify verbose inline Helm comments and section dividers to match the
minimal style used in services.yaml.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix(helm): add missing copilot path consolidation for realtime host

When copilot.host equals realtime.host but differs from app.host,
copilot paths were not being routed. Added logic to consolidate
copilot paths into the realtime rule for this scenario.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* improvement(helm): follow ingress best practices

- Remove orphan comments that appeared when services were disabled
- Add documentation about path ordering requirements
- Paths rendered in order: realtime, copilot, app (specific before catch-all)
- Clean template output matching industry Helm chart standards

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>

* feat(blog): enterprise post (#2961)

* feat(blog): enterprise post

* added more images, styling

* more content

* updated v0-5 post

* remove unused transition

---------

Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>

* fix(envvars): resolution standardized (#2957)

* fix(envvars): resolution standardized

* remove comments

* address bugbot

* fix highlighting for env vars

* remove comments

* address greptile

* address bugbot

* fix(copilot): mask credentials fix (#2963)

* Fix copilot masking

* Clean up

* Lint

* improvement(webhooks): remove dead code (#2965)

* fix(webhooks): subscription recreation path

* improvement(webhooks): remove dead code

* fix tests

* address bugbot comments

* fix restoration edge case

* fix more edge cases

* address bugbot comments

* fix gmail polling

* add warnings for UI indication for credential sets

* fix(preview): subblock values (#2969)

* fix(child-workflow): nested spans handoff (#2966)

* fix(child-workflow): nested spans handoff

* remove overly defensive programming

* update type check

* type more code

* remove more dead code

* address bugbot comments

* fix(security): restrict API key access on internal-only routes (#2964)

* fix(security): restrict API key access on internal-only routes

* test(security): update function execute tests for checkInternalAuth

* updated agent handler

* move session check higher in checkSessionOrInternalAuth

* extracted duplicate code into helper for resolving user from jwt

* fix(copilot): update copilot chat title (#2968)

* fix(hitl): fix condition blocks after hitl (#2967)

* fix(notes): ghost edges (#2970)

* fix(notes): ghost edges

* fix deployed state fallback

* fallback

* remove UI level checks

* annotation missing from autoconnect source check

* improvement(docs): loop and parallel var reference syntax (#2975)

* fix(blog): slash actions description (#2976)

* improvement(docs): loop and parallel var reference syntax

* fix(blog): slash actions description

* fix(auth): copilot routes (#2977)

* Fix copilot auth

* Fix

* Fix

* Fix

* fix(copilot): fix edit summary for loops/parallels (#2978)

* fix(integrations): hide from tool bar (#2544)

* fix(landing): ui (#2979)

* fix(edge-validation): race condition on collaborative add (#2980)

* fix(variables): boolean type support and input improvements (#2981)

* fix(variables): boolean type support and input improvements

* fix formatting

---------

Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: Emir Karabeg <78010029+emir-karabeg@users.noreply.github.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Siddharth Ganesan <33737564+Sg312@users.noreply.github.com>
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
2026-01-24 14:29:53 -08:00
Waleed
e9c4251c1c v0.5.68: router block reasoning, executor improvements, variable resolution consolidation, helm updates (#2946)
* improvement(workflow-item): stabilize avatar layout and fix name truncation (#2939)

* improvement(workflow-item): stabilize avatar layout and fix name truncation

* fix(avatars): revert overflow bg to hardcoded color for contrast

* fix(executor): stop parallel execution when block errors (#2940)

* improvement(helm): add per-deployment extraVolumes support (#2942)

* fix(gmail): expose messageId field in read email block (#2943)

* fix(resolver): consolidate reference resolution  (#2941)

* fix(resolver): consolidate code to resolve references

* fix edge cases

* use already formatted error

* fix multi index

* fix backwards compat reachability

* handle backwards compatibility accurately

* use shared constant correctly

* feat(router): expose reasoning output in router v2 block (#2945)

* fix(copilot): always allow, credential masking (#2947)

* Fix always allow, credential validation

* Credential masking

* Autoload

* fix(executor): handle condition dead-end branches in loops (#2944)

---------

Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: Siddharth Ganesan <33737564+Sg312@users.noreply.github.com>
2026-01-22 13:48:15 -08:00
Waleed
cc2be33d6b v0.5.67: loading, password reset, ui improvements, helm updates (#2928)
* fix(zustand): updated to useShallow from deprecated createWithEqualityFn (#2919)

* fix(logger): use direct env access for webpack inlining (#2920)

* fix(notifications): text overflow with line-clamp (#2921)

* chore(helm): add env vars for Vertex AI, orgs, and telemetry (#2922)

* fix(auth): improve reset password flow and consolidate brand detection (#2924)

* fix(auth): improve reset password flow and consolidate brand detection

* fix(auth): set errorHandled for EMAIL_NOT_VERIFIED to prevent duplicate error

* fix(auth): clear success message on login errors

* chore(auth): fix import order per lint

* fix(action-bar): duplicate subflows with children (#2923)

* fix(action-bar): duplicate subflows with children

* fix(action-bar): add validateTriggerPaste for subflow duplicate

* fix(resolver): agent response format, input formats, root level (#2925)

* fix(resolvers): agent response format, input formats, root level

* fix response block initial seeding

* fix tests

* fix(messages-input): fix cursor alignment and auto-resize with overlay (#2926)

* fix(messages-input): fix cursor alignment and auto-resize with overlay

* fixed remaining zustand warnings

* fix(stores): remove dead code causing log spam on startup (#2927)

* fix(stores): remove dead code causing log spam on startup

* fix(stores): replace custom tools zustand store with react query cache

* improvement(ui): use BrandedButton and BrandedLink components (#2930)

- Refactor auth forms to use BrandedButton component
- Add BrandedLink component for changelog page
- Reduce code duplication in login, signup, reset-password forms
- Update star count default value

* fix(custom-tools): remove unsafe title fallback in getCustomTool (#2929)

* fix(custom-tools): remove unsafe title fallback in getCustomTool

* fix(custom-tools): restore title fallback in getCustomTool lookup

Custom tools are referenced by title (custom_${title}), not database ID.
The title fallback is required for client-side tool resolution to work.

* fix(null-bodies): empty bodies handling (#2931)

* fix(null-statuses): empty bodies handling

* address bugbot comment

* fix(token-refresh): microsoft, notion, x, linear (#2933)

* fix(microsoft): proactive refresh needed

* fix(x): missing token refresh flag

* notion and linear missing flag too

* address bugbot comment

* fix(auth): handle EMAIL_NOT_VERIFIED in onError callback (#2932)

* fix(auth): handle EMAIL_NOT_VERIFIED in onError callback

* refactor(auth): extract redirectToVerify helper to reduce duplication

* fix(workflow-selector): use dedicated selector for workflow dropdown (#2934)

* feat(workflow-block): preview (#2935)

* improvement(copilot): tool configs to show nested props (#2936)

* fix(auth): add genericOAuth providers to trustedProviders (#2937)

---------

Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: Emir Karabeg <78010029+emir-karabeg@users.noreply.github.com>
2026-01-21 22:53:25 -08:00
Vikhyath Mondreti
45371e521e v0.5.66: external http requests fix, ring highlighting 2026-01-21 02:55:39 -08:00
Waleed
0ce0f98aa5 v0.5.65: gemini updates, textract integration, ui updates (#2909)
* fix(google): wrap primitive tool responses for Gemini API compatibility (#2900)

* fix(canonical): copilot path + update parent (#2901)

* fix(rss): add top-level title, link, pubDate fields to RSS trigger output (#2902)

* fix(rss): add top-level title, link, pubDate fields to RSS trigger output

* fix(imap): add top-level fields to IMAP trigger output

* improvement(browseruse): add profile id param (#2903)

* improvement(browseruse): add profile id param

* make request a stub since we have directExec

* improvement(executor): upgraded abort controller to handle aborts for loops and parallels (#2880)

* improvement(executor): upgraded abort controller to handle aborts for loops and parallels

* comments

* improvement(files): update execution for passing base64 strings (#2906)

* progress

* improvement(execution): update execution for passing base64 strings

* fix types

* cleanup comments

* path security vuln

* reject promise correctly

* fix redirect case

* remove proxy routes

* fix tests

* use ipaddr

* feat(tools): added textract, added v2 for mistral, updated tag dropdown (#2904)

* feat(tools): added textract

* cleanup

* ack pr comments

* reorder

* removed upload for textract async version

* fix additional fields dropdown in editor, update parser to leave validation to be done on the server

* added mistral v2, files v2, and finalized textract

* updated the rest of the old file patterns, updated mistral outputs for v2

* updated tag dropdown to parse non-operation fields as well

* updated extension finder

* cleanup

* added description for inputs to workflow

* use helper for internal route check

* fix tag dropdown merge conflict change

* remove duplicate code

---------

Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>

* fix(ui): change add inputs button to match output selector (#2907)

* fix(canvas): removed invite to workspace from canvas popover (#2908)

* fix(canvas): removed invite to workspace

* removed unused props

* fix(copilot): legacy tool display names (#2911)

* fix(a2a): canonical merge  (#2912)

* fix canonical merge

* fix empty array case

* fix(change-detection): copilot diffs have extra field (#2913)

* improvement(logs): improved logs ui bugs, added subflow disable UI (#2910)

* improvement(logs): improved logs ui bugs, added subflow disable UI

* added duplicate to action bar for subflows

* feat(broadcast): email v0.5 (#2905)

---------

Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
Co-authored-by: Emir Karabeg <78010029+emir-karabeg@users.noreply.github.com>
2026-01-20 23:54:55 -08:00
Waleed
dff1c9d083 v0.5.64: unsubscribe, search improvements, metrics, additional SSO configuration 2026-01-20 00:34:11 -08:00
Vikhyath Mondreti
b09f683072 v0.5.63: ui and performance improvements, more google tools 2026-01-18 15:22:42 -08:00
Vikhyath Mondreti
a8bb0db660 v0.5.62: webhook bug fixes, seeding default subblock values, block selection fixes 2026-01-16 20:27:06 -08:00
Waleed
af82820a28 v0.5.61: webhook improvements, workflow controls, react query for deployment status, chat fixes, reducto and pulse OCR, linear fixes 2026-01-16 18:06:23 -08:00
Waleed
4372841797 v0.5.60: invitation flow improvements, chat fixes, a2a improvements, additional copilot actions 2026-01-15 00:02:18 -08:00
Waleed
5e8c843241 v0.5.59: a2a support, documentation 2026-01-13 13:21:21 -08:00
Waleed
7bf3d73ee6 v0.5.58: export folders, new tools, permissions groups enhancements 2026-01-13 00:56:59 -08:00
Vikhyath Mondreti
7ffc11a738 v0.5.57: subagents, context menu improvements, bug fixes 2026-01-11 11:38:40 -08:00
Waleed
be578e2ed7 v0.5.56: batch operations, access control and permission groups, billing fixes 2026-01-10 00:31:34 -08:00
Waleed
f415e5edc4 v0.5.55: polling groups, bedrock provider, devcontainer fixes, workflow preview enhancements 2026-01-08 23:36:56 -08:00
Waleed
13a6e6c3fa v0.5.54: seo, model blacklist, helm chart updates, fireflies integration, autoconnect improvements, billing fixes 2026-01-07 16:09:45 -08:00
Waleed
f5ab7f21ae v0.5.53: hotkey improvements, added redis fallback, fixes for workflow tool 2026-01-06 23:34:52 -08:00
Waleed
bfb6fffe38 v0.5.52: new port-based router block, combobox expression and variable support 2026-01-06 16:14:10 -08:00
Waleed
4fbec0a43f v0.5.51: triggers, kb, condition block improvements, supabase and grain integration updates 2026-01-06 14:26:46 -08:00
Waleed
585f5e365b v0.5.50: import improvements, ui upgrades, kb styling and performance improvements 2026-01-05 00:35:55 -08:00
Waleed
3792bdd252 v0.5.49: hitl improvements, new email styles, imap trigger, logs context menu (#2672)
* feat(logs-context-menu): consolidated logs utils and types, added logs record context menu (#2659)

* feat(email): welcome email; improvement(emails): ui/ux (#2658)

* feat(email): welcome email; improvement(emails): ui/ux

* improvement(emails): links, accounts, preview

* refactor(emails): file structure and wrapper components

* added envvar for personal emails sent, added isHosted gate

* fixed failing tests, added env mock

* fix: removed comment

---------

Co-authored-by: waleed <walif6@gmail.com>

* fix(logging): hitl + trigger dev crash protection (#2664)

* hitl gaps

* deal with trigger worker crashes

* cleanup import structure

* feat(imap): added support for imap trigger (#2663)

* feat(tools): added support for imap trigger

* feat(imap): added parity, tested

* ack PR comments

* final cleanup

* feat(i18n): update translations (#2665)

Co-authored-by: waleedlatif1 <waleedlatif1@users.noreply.github.com>

* fix(grain): updated grain trigger to auto-establish trigger (#2666)

Co-authored-by: aadamgough <adam@sim.ai>

* feat(admin): routes to manage deployments (#2667)

* feat(admin): routes to manage deployments

* fix naming of deployed by

* feat(time-picker): added timepicker emcn component, added to playground, added searchable prop for dropdown, added more timezones for schedule, updated license and notice date (#2668)

* feat(time-picker): added timepicker emcn component, added to playground, added searchable prop for dropdown, added more timezones for schedule, updated license and notice date

* removed unused params, cleaned up redundant utils

* improvement(invite): aligned styling (#2669)

* improvement(invite): aligned with rest of app

* fix(invite): error handling

* fix: addressed comments

---------

Co-authored-by: Emir Karabeg <78010029+emir-karabeg@users.noreply.github.com>
Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
Co-authored-by: waleedlatif1 <waleedlatif1@users.noreply.github.com>
Co-authored-by: Adam Gough <77861281+aadamgough@users.noreply.github.com>
Co-authored-by: aadamgough <adam@sim.ai>
2026-01-03 13:19:18 -08:00
Waleed
eb5d1f3e5b v0.5.48: copy-paste workflow blocks, docs updates, mcp tool fixes 2025-12-31 18:00:04 -08:00
Waleed
54ab82c8dd v0.5.47: deploy workflow as mcp, kb chunks tokenizer, UI improvements, jira service management tools 2025-12-30 23:18:58 -08:00
Waleed
f895bf469b v0.5.46: build improvements, greptile, light mode improvements 2025-12-29 02:17:52 -08:00
Waleed
dd3209af06 v0.5.45: light mode fixes, realtime usage indicator, docker build improvements 2025-12-27 19:57:42 -08:00
Waleed
b6ba3b50a7 v0.5.44: keyboard shortcuts, autolayout, light mode, byok, testing improvements 2025-12-26 21:25:19 -08:00
Waleed
b304233062 v0.5.43: export logs, circleback, grain, vertex, code hygiene, schedule improvements 2025-12-23 19:19:18 -08:00
Vikhyath Mondreti
57e4b49bd6 v0.5.42: fix memory migration 2025-12-23 01:24:54 -08:00
Vikhyath Mondreti
e12dd204ed v0.5.41: memory fixes, copilot improvements, knowledgebase improvements, LLM providers standardization 2025-12-23 00:15:18 -08:00
Vikhyath Mondreti
3d9d9cbc54 v0.5.40: supabase ops to allow non-public schemas, jira uuid 2025-12-21 22:28:05 -08:00
Waleed
0f4ec962ad v0.5.39: notion, workflow variables fixes 2025-12-20 20:44:00 -08:00
Waleed
4827866f9a v0.5.38: snap to grid, copilot ux improvements, billing line items 2025-12-20 17:24:38 -08:00
Waleed
3e697d9ed9 v0.5.37: redaction utils consolidation, logs updates, autoconnect improvements, additional kb tag types 2025-12-19 22:31:55 -08:00
Martin Yankov
4431a1a484 fix(helm): add custom egress rules to realtime network policy (#2481)
The realtime service network policy was missing the custom egress rules section
that allows configuration of additional egress rules via values.yaml. This caused
the realtime pods to be unable to connect to external databases (e.g., PostgreSQL
on port 5432) when using external database configurations.

The app network policy already had this section, but the realtime network policy
was missing it, creating an inconsistency and preventing the realtime service
from accessing external databases configured via networkPolicy.egress values.

This fix adds the same custom egress rules template section to the realtime
network policy, matching the app network policy behavior and allowing users to
configure database connectivity via values.yaml.
2025-12-19 18:59:08 -08:00
Waleed
4d1a9a3f22 v0.5.36: hitl improvements, opengraph, slack fixes, one-click unsubscribe, auth checks, new db indexes 2025-12-19 01:27:49 -08:00
Vikhyath Mondreti
eb07a080fb v0.5.35: helm updates, copilot improvements, 404 for docs, salesforce fixes, subflow resize clamping 2025-12-18 16:23:19 -08:00
370 changed files with 3669 additions and 21559 deletions

View File

@@ -183,109 +183,6 @@ export const {ServiceName}Block: BlockConfig = {
}
```
## File Input Handling
When your block accepts file uploads, use the basic/advanced mode pattern with `normalizeFileInput`.
### Basic/Advanced File Pattern
```typescript
// Basic mode: Visual file upload
{
id: 'uploadFile',
title: 'File',
type: 'file-upload',
canonicalParamId: 'file', // Both map to 'file' param
placeholder: 'Upload file',
mode: 'basic',
multiple: false,
required: true,
condition: { field: 'operation', value: 'upload' },
},
// Advanced mode: Reference from other blocks
{
id: 'fileRef',
title: 'File',
type: 'short-input',
canonicalParamId: 'file', // Both map to 'file' param
placeholder: 'Reference file (e.g., {{file_block.output}})',
mode: 'advanced',
required: true,
condition: { field: 'operation', value: 'upload' },
},
```
**Critical constraints:**
- `canonicalParamId` must NOT match any subblock's `id` in the same block
- Values are stored under subblock `id`, not `canonicalParamId`
### Normalizing File Input in tools.config
Use `normalizeFileInput` to handle all input variants:
```typescript
import { normalizeFileInput } from '@/blocks/utils'
tools: {
access: ['service_upload'],
config: {
tool: (params) => {
// Check all field IDs: uploadFile (basic), fileRef (advanced), fileContent (legacy)
const normalizedFile = normalizeFileInput(
params.uploadFile || params.fileRef || params.fileContent,
{ single: true }
)
if (normalizedFile) {
params.file = normalizedFile
}
return `service_${params.operation}`
},
},
}
```
**Why this pattern?**
- Values come through as `params.uploadFile` or `params.fileRef` (the subblock IDs)
- `canonicalParamId` only controls UI/schema mapping, not runtime values
- `normalizeFileInput` handles JSON strings from advanced mode template resolution
### File Input Types in `inputs`
Use `type: 'json'` for file inputs:
```typescript
inputs: {
uploadFile: { type: 'json', description: 'Uploaded file (UserFile)' },
fileRef: { type: 'json', description: 'File reference from previous block' },
// Legacy field for backwards compatibility
fileContent: { type: 'string', description: 'Legacy: base64 encoded content' },
}
```
### Multiple Files
For multiple file uploads:
```typescript
{
id: 'attachments',
title: 'Attachments',
type: 'file-upload',
multiple: true, // Allow multiple files
maxSize: 25, // Max size in MB per file
acceptedTypes: 'image/*,application/pdf,.doc,.docx',
}
// In tools.config:
const normalizedFiles = normalizeFileInput(
params.attachments || params.attachmentRefs,
// No { single: true } - returns array
)
if (normalizedFiles) {
params.files = normalizedFiles
}
```
## Condition Syntax
Controls when a field is shown based on other field values.
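A minimal sketch of the single-field form used throughout this guide; the `destinationFolder` field is hypothetical:
```typescript
// Sketch only: 'destinationFolder' is an illustrative subblock. It renders
// only while the 'operation' field is set to 'upload', matching the
// condition shape used in the file-upload examples above.
const destinationFolderSubBlock = {
  id: 'destinationFolder',
  title: 'Destination Folder',
  type: 'short-input',
  mode: 'basic',
  condition: { field: 'operation', value: 'upload' },
}
```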

View File

@@ -457,230 +457,7 @@ You can usually find this in the service's brand/press kit page, or copy it from
Paste the SVG code here and I'll convert it to a React component.
```
## File Handling
When your integration handles file uploads or downloads, follow these patterns to work with `UserFile` objects consistently.
### What is a UserFile?
A `UserFile` is the standard file representation in Sim:
```typescript
interface UserFile {
id: string // Unique identifier
name: string // Original filename
url: string // Presigned URL for download
size: number // File size in bytes
type: string // MIME type (e.g., 'application/pdf')
base64?: string // Optional base64 content (if small file)
key?: string // Internal storage key
context?: object // Storage context metadata
}
```
### File Input Pattern (Uploads)
For tools that accept file uploads, **always route through an internal API endpoint** rather than calling external APIs directly. This ensures proper file content retrieval.
#### 1. Block SubBlocks for File Input
Use the basic/advanced mode pattern:
```typescript
// Basic mode: File upload UI
{
id: 'uploadFile',
title: 'File',
type: 'file-upload',
canonicalParamId: 'file', // Maps to 'file' param
placeholder: 'Upload file',
mode: 'basic',
multiple: false,
required: true,
condition: { field: 'operation', value: 'upload' },
},
// Advanced mode: Reference from previous block
{
id: 'fileRef',
title: 'File',
type: 'short-input',
canonicalParamId: 'file', // Same canonical param
placeholder: 'Reference file (e.g., {{file_block.output}})',
mode: 'advanced',
required: true,
condition: { field: 'operation', value: 'upload' },
},
```
**Critical:** `canonicalParamId` must NOT match any subblock `id`.
#### 2. Normalize File Input in Block Config
In `tools.config.tool`, use `normalizeFileInput` to handle all input variants:
```typescript
import { normalizeFileInput } from '@/blocks/utils'
tools: {
config: {
tool: (params) => {
// Normalize file from basic (uploadFile), advanced (fileRef), or legacy (fileContent)
const normalizedFile = normalizeFileInput(
params.uploadFile || params.fileRef || params.fileContent,
{ single: true }
)
if (normalizedFile) {
params.file = normalizedFile
}
return `{service}_${params.operation}`
},
},
}
```
#### 3. Create Internal API Route
Create `apps/sim/app/api/tools/{service}/{action}/route.ts`:
```typescript
import { createLogger } from '@sim/logger'
import { NextResponse, type NextRequest } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { FileInputSchema, type RawFileInput } from '@/lib/uploads/utils/file-schemas'
import { processFilesToUserFiles } from '@/lib/uploads/utils/file-utils'
import { downloadFileFromStorage } from '@/lib/uploads/utils/file-utils.server'
const logger = createLogger('{Service}UploadAPI')
const RequestSchema = z.object({
accessToken: z.string(),
file: FileInputSchema.optional().nullable(),
// Legacy field for backwards compatibility
fileContent: z.string().optional().nullable(),
// ... other params
})
export async function POST(request: NextRequest) {
const requestId = generateRequestId()
const authResult = await checkInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success) {
return NextResponse.json({ success: false, error: 'Unauthorized' }, { status: 401 })
}
const body = await request.json()
const data = RequestSchema.parse(body)
let fileBuffer: Buffer
let fileName: string
// Prefer UserFile input, fall back to legacy base64
if (data.file) {
const userFiles = processFilesToUserFiles([data.file as RawFileInput], requestId, logger)
if (userFiles.length === 0) {
return NextResponse.json({ success: false, error: 'Invalid file' }, { status: 400 })
}
const userFile = userFiles[0]
fileBuffer = await downloadFileFromStorage(userFile, requestId, logger)
fileName = userFile.name
} else if (data.fileContent) {
// Legacy: base64 string (backwards compatibility)
fileBuffer = Buffer.from(data.fileContent, 'base64')
fileName = 'file'
} else {
return NextResponse.json({ success: false, error: 'File required' }, { status: 400 })
}
// Now call external API with fileBuffer
const response = await fetch('https://api.{service}.com/upload', {
method: 'POST',
headers: { Authorization: `Bearer ${data.accessToken}` },
body: new Uint8Array(fileBuffer), // Convert Buffer for fetch
})
// ... handle response
}
```
#### 4. Update Tool to Use Internal Route
```typescript
export const {service}UploadTool: ToolConfig<Params, Response> = {
id: '{service}_upload',
// ...
params: {
file: { type: 'file', required: false, visibility: 'user-or-llm' },
fileContent: { type: 'string', required: false, visibility: 'hidden' }, // Legacy
},
request: {
url: '/api/tools/{service}/upload', // Internal route
method: 'POST',
body: (params) => ({
accessToken: params.accessToken,
file: params.file,
fileContent: params.fileContent,
}),
},
}
```
### File Output Pattern (Downloads)
For tools that return files, use `FileToolProcessor` to store files and return `UserFile` objects.
#### In Tool transformResponse
```typescript
import { FileToolProcessor } from '@/executor/utils/file-tool-processor'
transformResponse: async (response, context) => {
const data = await response.json()
// Process file outputs to UserFile objects
const fileProcessor = new FileToolProcessor(context)
const file = await fileProcessor.processFileData({
data: data.content, // base64 or buffer
mimeType: data.mimeType,
filename: data.filename,
})
return {
success: true,
output: { file },
}
}
```
#### In API Route (for complex file handling)
```typescript
// Return file data that FileToolProcessor can handle
return NextResponse.json({
success: true,
output: {
file: {
data: base64Content,
mimeType: 'application/pdf',
filename: 'document.pdf',
},
},
})
```
### Key Helpers Reference
| Helper | Location | Purpose |
|--------|----------|---------|
| `normalizeFileInput` | `@/blocks/utils` | Normalize file params in block config |
| `processFilesToUserFiles` | `@/lib/uploads/utils/file-utils` | Convert raw inputs to UserFile[] |
| `downloadFileFromStorage` | `@/lib/uploads/utils/file-utils.server` | Get file Buffer from UserFile |
| `FileToolProcessor` | `@/executor/utils/file-tool-processor` | Process tool output files |
| `isUserFile` | `@/lib/core/utils/user-file` | Type guard for UserFile objects |
| `FileInputSchema` | `@/lib/uploads/utils/file-schemas` | Zod schema for file validation |
### Common Gotchas
## Common Gotchas
1. **OAuth serviceId must match** - The `serviceId` in oauth-input must match the OAuth provider configuration
2. **Tool IDs are snake_case** - `stripe_create_payment`, not `stripeCreatePayment`
@@ -688,5 +465,3 @@ return NextResponse.json({
4. **Alphabetical ordering** - Keep imports and registry entries alphabetically sorted
5. **Required can be conditional** - Use `required: { field: 'op', value: 'create' }` instead of always true
6. **DependsOn clears options** - When a dependency changes, selector options are refetched
7. **Never pass Buffer directly to fetch** - Convert to `new Uint8Array(buffer)` for TypeScript compatibility (see the sketch after this list)
8. **Always handle legacy file params** - Keep hidden `fileContent` params for backwards compatibility
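A minimal sketch of items 5 and 7, with a hypothetical upload endpoint:
```typescript
// Item 5: `required` can depend on another field instead of being always true.
const fileSubBlock = {
  id: 'uploadFile',
  title: 'File',
  type: 'file-upload',
  required: { field: 'op', value: 'create' }, // only required for the 'create' operation
}

// Item 7: convert a Node Buffer to Uint8Array before handing it to fetch.
const fileBuffer = Buffer.from('JVBERi0xLjQK', 'base64')
await fetch('https://api.example.com/upload', {
  method: 'POST',
  body: new Uint8Array(fileBuffer), // passing `fileBuffer` directly fails the BodyInit type check
})
```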

View File

@@ -195,52 +195,6 @@ import { {service}WebhookTrigger } from '@/triggers/{service}'
{service}_webhook: {service}WebhookTrigger,
```
## File Handling
When integrations handle file uploads/downloads, use `UserFile` objects consistently.
### File Input (Uploads)
1. **Block subBlocks:** Use basic/advanced mode pattern with `canonicalParamId`
2. **Normalize in block config:** Use `normalizeFileInput` from `@/blocks/utils`
3. **Internal API route:** Create route that uses `downloadFileFromStorage` to get file content
4. **Tool routes to internal API:** Don't call external APIs directly with files
```typescript
// In block tools.config:
import { normalizeFileInput } from '@/blocks/utils'
const normalizedFile = normalizeFileInput(
params.uploadFile || params.fileRef || params.fileContent,
{ single: true }
)
if (normalizedFile) params.file = normalizedFile
```
### File Output (Downloads)
Use `FileToolProcessor` in tool `transformResponse` to store files:
```typescript
import { FileToolProcessor } from '@/executor/utils/file-tool-processor'
const processor = new FileToolProcessor(context)
const file = await processor.processFileData({
data: base64Content,
mimeType: 'application/pdf',
filename: 'doc.pdf',
})
```
### Key Helpers
| Helper | Location | Purpose |
|--------|----------|---------|
| `normalizeFileInput` | `@/blocks/utils` | Normalize file params in block config |
| `processFilesToUserFiles` | `@/lib/uploads/utils/file-utils` | Convert raw inputs to UserFile[] |
| `downloadFileFromStorage` | `@/lib/uploads/utils/file-utils.server` | Get Buffer from UserFile |
| `FileToolProcessor` | `@/executor/utils/file-tool-processor` | Process tool output files |
## Checklist
- [ ] Look up API docs for the service
@@ -253,5 +207,3 @@ const file = await processor.processFileData({
- [ ] Register block in `blocks/registry.ts`
- [ ] (Optional) Create triggers in `triggers/{service}/`
- [ ] (Optional) Register triggers in `triggers/registry.ts`
- [ ] (If file uploads) Create internal API route with `downloadFileFromStorage`
- [ ] (If file uploads) Use `normalizeFileInput` in block config

View File

@@ -193,52 +193,6 @@ import { {service}WebhookTrigger } from '@/triggers/{service}'
{service}_webhook: {service}WebhookTrigger,
```
## File Handling
When integrations handle file uploads/downloads, use `UserFile` objects consistently.
### File Input (Uploads)
1. **Block subBlocks:** Use basic/advanced mode pattern with `canonicalParamId`
2. **Normalize in block config:** Use `normalizeFileInput` from `@/blocks/utils`
3. **Internal API route:** Create route that uses `downloadFileFromStorage` to get file content
4. **Tool routes to internal API:** Don't call external APIs directly with files
```typescript
// In block tools.config:
import { normalizeFileInput } from '@/blocks/utils'
const normalizedFile = normalizeFileInput(
params.uploadFile || params.fileRef || params.fileContent,
{ single: true }
)
if (normalizedFile) params.file = normalizedFile
```
### File Output (Downloads)
Use `FileToolProcessor` in tool `transformResponse` to store files:
```typescript
import { FileToolProcessor } from '@/executor/utils/file-tool-processor'
const processor = new FileToolProcessor(context)
const file = await processor.processFileData({
data: base64Content,
mimeType: 'application/pdf',
filename: 'doc.pdf',
})
```
### Key Helpers
| Helper | Location | Purpose |
|--------|----------|---------|
| `normalizeFileInput` | `@/blocks/utils` | Normalize file params in block config |
| `processFilesToUserFiles` | `@/lib/uploads/utils/file-utils` | Convert raw inputs to UserFile[] |
| `downloadFileFromStorage` | `@/lib/uploads/utils/file-utils.server` | Get Buffer from UserFile |
| `FileToolProcessor` | `@/executor/utils/file-tool-processor` | Process tool output files |
## Checklist
- [ ] Look up API docs for the service
@@ -251,5 +205,3 @@ const file = await processor.processFileData({
- [ ] Register block in `blocks/registry.ts`
- [ ] (Optional) Create triggers in `triggers/{service}/`
- [ ] (Optional) Register triggers in `triggers/registry.ts`
- [ ] (If file uploads) Create internal API route with `downloadFileFromStorage`
- [ ] (If file uploads) Use `normalizeFileInput` in block config

View File

@@ -265,23 +265,6 @@ Register in `blocks/registry.ts` (alphabetically).
**dependsOn:** `['field']` or `{ all: ['a'], any: ['b', 'c'] }`
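A minimal sketch of both `dependsOn` shapes; the field names and types here are illustrative, not taken from a real block:
```typescript
// Sketch only: hypothetical selector fields showing the two dependsOn forms.
const subBlocks = [
  {
    id: 'channelId',
    title: 'Channel',
    type: 'dropdown',
    dependsOn: ['credential'], // options are refetched when 'credential' changes
  },
  {
    id: 'messageId',
    title: 'Message',
    type: 'dropdown',
    // requires 'credential', plus at least one of 'teamId' or 'channelId'
    dependsOn: { all: ['credential'], any: ['teamId', 'channelId'] },
  },
]
```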
**File Input Pattern (basic/advanced mode):**
```typescript
// Basic: file-upload UI
{ id: 'uploadFile', type: 'file-upload', canonicalParamId: 'file', mode: 'basic' },
// Advanced: reference from other blocks
{ id: 'fileRef', type: 'short-input', canonicalParamId: 'file', mode: 'advanced' },
```
In `tools.config.tool`, normalize with:
```typescript
import { normalizeFileInput } from '@/blocks/utils'
const file = normalizeFileInput(params.uploadFile || params.fileRef, { single: true })
if (file) params.file = file
```
For file uploads, create an internal API route (`/api/tools/{service}/upload`) that uses `downloadFileFromStorage` to get file content from `UserFile` objects.
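A condensed sketch of such a route, assuming the same helpers as the fuller upload example above; the external endpoint is a placeholder:
```typescript
import { createLogger } from '@sim/logger'
import { NextResponse, type NextRequest } from 'next/server'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { processFilesToUserFiles } from '@/lib/uploads/utils/file-utils'
import { downloadFileFromStorage } from '@/lib/uploads/utils/file-utils.server'

const logger = createLogger('ExampleUploadAPI') // hypothetical logger name

export async function POST(request: NextRequest) {
  const requestId = generateRequestId()
  const auth = await checkInternalAuth(request, { requireWorkflowId: false })
  if (!auth.success) {
    return NextResponse.json({ success: false, error: 'Unauthorized' }, { status: 401 })
  }
  const { accessToken, file } = await request.json()
  const userFiles = processFilesToUserFiles([file], requestId, logger)
  if (userFiles.length === 0) {
    return NextResponse.json({ success: false, error: 'Invalid file' }, { status: 400 })
  }
  // Resolve the UserFile reference to raw bytes before calling the external API.
  const fileBuffer = await downloadFileFromStorage(userFiles[0], requestId, logger)
  const response = await fetch('https://api.example.com/upload', { // placeholder endpoint
    method: 'POST',
    headers: { Authorization: `Bearer ${accessToken}` },
    body: new Uint8Array(fileBuffer), // never pass the Buffer itself to fetch
  })
  return NextResponse.json({ success: response.ok })
}
```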
### 3. Icon (`components/icons.tsx`)
```typescript
@@ -310,5 +293,3 @@ Register in `triggers/registry.ts`.
- [ ] Create block in `blocks/blocks/{service}.ts`
- [ ] Register block in `blocks/registry.ts`
- [ ] (Optional) Create and register triggers
- [ ] (If file uploads) Create internal API route with `downloadFileFromStorage`
- [ ] (If file uploads) Use `normalizeFileInput` in block config

View File

@@ -1,168 +0,0 @@
---
title: Passing Files
---
import { Callout } from 'fumadocs-ui/components/callout'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
Sim makes it easy to work with files throughout your workflows. Blocks can receive files, process them, and pass them to other blocks seamlessly.
## File Objects
When blocks output files (like Gmail attachments, generated images, or parsed documents), they return a standardized file object:
```json
{
"name": "report.pdf",
"url": "https://...",
"base64": "JVBERi0xLjQK...",
"type": "application/pdf",
"size": 245678
}
```
You can access any of these properties when referencing files from previous blocks.
## The File Block
The **File block** is the universal entry point for files in your workflows. It accepts files from any source and outputs standardized file objects that work with all integrations.
**Inputs:**
- **Uploaded files** - Drag and drop or select files directly
- **External URLs** - Any publicly accessible file URL
- **Files from other blocks** - Pass files from Gmail attachments, Slack downloads, etc.
**Outputs:**
- A list of `UserFile` objects with consistent structure (`name`, `url`, `base64`, `type`, `size`)
- `combinedContent` - Extracted text content from all files (for documents)
**Example usage:**
```
// Get all files from the File block
<file.files>
// Get the first file
<file.files[0]>
// Get combined text content from parsed documents
<file.combinedContent>
```
The File block automatically:
- Detects file types from URLs and extensions
- Extracts text from PDFs, CSVs, and documents
- Generates base64 encoding for binary files
- Creates presigned URLs for secure access
Use the File block when you need to normalize files from different sources before passing them to other blocks like Vision, STT, or email integrations.
## Passing Files Between Blocks
Reference files from previous blocks using the tag dropdown. Click in any file input field and type `<` to see available outputs.
**Common patterns:**
```
// Single file from a block
<gmail.attachments[0]>
// Pass the whole file object
<file_parser.files[0]>
// Access specific properties
<gmail.attachments[0].name>
<gmail.attachments[0].base64>
```
Most blocks accept the full file object and extract what they need automatically. You don't need to manually extract `base64` or `url` in most cases.
## Triggering Workflows with Files
When calling a workflow via API that expects file input, include files in your request:
<Tabs items={['Base64', 'URL']}>
<Tab value="Base64">
```bash
curl -X POST "https://sim.ai/api/workflows/YOUR_WORKFLOW_ID/execute" \
-H "Content-Type: application/json" \
-H "x-api-key: YOUR_API_KEY" \
-d '{
"document": {
"name": "report.pdf",
"base64": "JVBERi0xLjQK...",
"type": "application/pdf"
}
}'
```
</Tab>
<Tab value="URL">
```bash
curl -X POST "https://sim.ai/api/workflows/YOUR_WORKFLOW_ID/execute" \
-H "Content-Type: application/json" \
-H "x-api-key: YOUR_API_KEY" \
-d '{
"document": {
"name": "report.pdf",
"url": "https://example.com/report.pdf",
"type": "application/pdf"
}
}'
```
</Tab>
</Tabs>
The workflow's Start block should have an input field configured to receive the file parameter.
## Receiving Files in API Responses
When a workflow outputs files, they're included in the response:
```json
{
"success": true,
"output": {
"generatedFile": {
"name": "output.png",
"url": "https://...",
"base64": "iVBORw0KGgo...",
"type": "image/png",
"size": 34567
}
}
}
```
Use `url` for direct downloads or `base64` for inline processing.
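A minimal consumer sketch, assuming the response shape above and that `SIM_API_KEY` is set in the environment:
```typescript
// Sketch only: executes a workflow and saves the returned file both ways.
import { writeFile } from 'node:fs/promises'

const res = await fetch('https://sim.ai/api/workflows/YOUR_WORKFLOW_ID/execute', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json', 'x-api-key': process.env.SIM_API_KEY! },
  body: JSON.stringify({}), // workflow input, if any
})
const { output } = await res.json()
const file = output.generatedFile

// Inline processing: decode the base64 payload directly.
await writeFile(file.name, Buffer.from(file.base64, 'base64'))

// Direct download: fetch the presigned URL instead.
const download = await fetch(file.url)
await writeFile(`downloaded-${file.name}`, Buffer.from(await download.arrayBuffer()))
```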
## Blocks That Work with Files
**File inputs:**
- **File** - Parse documents, images, and text files
- **Vision** - Analyze images with AI models
- **Mistral Parser** - Extract text from PDFs
**File outputs:**
- **Gmail** - Email attachments
- **Slack** - Downloaded files
- **TTS** - Generated audio files
- **Video Generator** - Generated videos
- **Image Generator** - Generated images
**File storage:**
- **Supabase** - Upload/download from storage
- **S3** - AWS S3 operations
- **Google Drive** - Drive file operations
- **Dropbox** - Dropbox file operations
<Callout type="info">
Files are automatically available to downstream blocks. The execution engine handles all file transfer and format conversion.
</Callout>
## Best Practices
1. **Use file objects directly** - Pass the full file object rather than extracting individual properties. Blocks handle the conversion automatically.
2. **Check file types** - Ensure the file type matches what the receiving block expects. The Vision block needs images, the File block handles documents.
3. **Consider file size** - Large files increase execution time. For very large files, consider using storage blocks (S3, Supabase) for intermediate storage.

View File

@@ -1,3 +1,3 @@
{
"pages": ["index", "basics", "files", "api", "logging", "costs"]
"pages": ["index", "basics", "api", "logging", "costs"]
}

View File

@@ -180,11 +180,6 @@ A quick lookup for everyday actions in the Sim workflow editor. For keyboard sho
<td>Right-click → **Enable/Disable**</td>
<td><ActionImage src="/static/quick-reference/disable-block.png" alt="Disable block" /></td>
</tr>
<tr>
<td>Lock/Unlock a block</td>
<td>Hover block → Click lock icon (Admin only)</td>
<td><ActionImage src="/static/quick-reference/lock-block.png" alt="Lock block" /></td>
</tr>
<tr>
<td>Toggle handle orientation</td>
<td>Right-click → **Toggle Handles**</td>

View File

@@ -11,7 +11,7 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
/>
{/* MANUAL-CONTENT-START:intro */}
The [Pulse](https://www.runpulse.com) tool enables seamless extraction of text and structured content from a wide variety of documents—including PDFs, images, and Office files—using state-of-the-art OCR (Optical Character Recognition) powered by Pulse. Designed for automated agentic workflows, Pulse Parser makes it easy to unlock valuable information trapped in unstructured documents and integrate the extracted content directly into your workflow.
The [Pulse](https://www.pulseapi.com/) tool enables seamless extraction of text and structured content from a wide variety of documents—including PDFs, images, and Office files—using state-of-the-art OCR (Optical Character Recognition) powered by Pulse. Designed for automated agentic workflows, Pulse Parser makes it easy to unlock valuable information trapped in unstructured documents and integrate the extracted content directly into your workflow.
With Pulse, you can:

Binary file not shown.


View File

@@ -1,6 +1,6 @@
import { redirect } from 'next/navigation'
import { getEnv, isTruthy } from '@/lib/core/config/env'
import SSOForm from '@/ee/sso/components/sso-form'
import SSOForm from '@/app/(auth)/sso/sso-form'
export const dynamic = 'force-dynamic'

View File

@@ -16,7 +16,7 @@ import {
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { getBrandConfig } from '@/lib/branding/branding'
import { acquireLock, getRedisClient, releaseLock } from '@/lib/core/config/redis'
import { validateUrlWithDNS } from '@/lib/core/security/input-validation.server'
import { validateExternalUrl } from '@/lib/core/security/input-validation'
import { SSE_HEADERS } from '@/lib/core/utils/sse'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { markExecutionCancelled } from '@/lib/execution/cancellation'
@@ -1119,7 +1119,7 @@ async function handlePushNotificationSet(
)
}
const urlValidation = await validateUrlWithDNS(
const urlValidation = validateExternalUrl(
params.pushNotificationConfig.url,
'Push notification URL'
)

View File

@@ -8,7 +8,6 @@ import { verifyCronAuth } from '@/lib/auth/internal'
const logger = createLogger('CleanupStaleExecutions')
const STALE_THRESHOLD_MINUTES = 30
const MAX_INT32 = 2_147_483_647
export async function GET(request: NextRequest) {
try {
@@ -46,14 +45,13 @@ export async function GET(request: NextRequest) {
try {
const staleDurationMs = Date.now() - new Date(execution.startedAt).getTime()
const staleDurationMinutes = Math.round(staleDurationMs / 60000)
const totalDurationMs = Math.min(staleDurationMs, MAX_INT32)
await db
.update(workflowExecutionLogs)
.set({
status: 'failed',
endedAt: new Date(),
totalDurationMs,
totalDurationMs: staleDurationMs,
executionData: sql`jsonb_set(
COALESCE(execution_data, '{}'::jsonb),
ARRAY['error'],

View File

@@ -6,11 +6,7 @@ import { createLogger } from '@sim/logger'
import binaryExtensionsList from 'binary-extensions'
import { type NextRequest, NextResponse } from 'next/server'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import {
secureFetchWithPinnedIP,
validateUrlWithDNS,
} from '@/lib/core/security/input-validation.server'
import { sanitizeUrlForLog } from '@/lib/core/utils/logging'
import { secureFetchWithPinnedIP, validateUrlWithDNS } from '@/lib/core/security/input-validation'
import { isSupportedFileType, parseFile } from '@/lib/file-parsers'
import { isUsingCloudStorage, type StorageContext, StorageService } from '@/lib/uploads'
import { uploadExecutionFile } from '@/lib/uploads/contexts/execution'
@@ -23,7 +19,6 @@ import {
getMimeTypeFromExtension,
getViewerUrl,
inferContextFromKey,
isInternalFileUrl,
} from '@/lib/uploads/utils/file-utils'
import { getUserEntityPermissions } from '@/lib/workspaces/permissions/utils'
import { verifyFileAccess } from '@/app/api/files/authorization'
@@ -220,7 +215,7 @@ async function parseFileSingle(
}
}
if (isInternalFileUrl(filePath)) {
if (filePath.includes('/api/files/serve/')) {
return handleCloudFile(filePath, fileType, undefined, userId, executionContext)
}
@@ -251,7 +246,7 @@ function validateFilePath(filePath: string): { isValid: boolean; error?: string
return { isValid: false, error: 'Invalid path: tilde character not allowed' }
}
if (filePath.startsWith('/') && !isInternalFileUrl(filePath)) {
if (filePath.startsWith('/') && !filePath.startsWith('/api/files/serve/')) {
return { isValid: false, error: 'Path outside allowed directory' }
}
@@ -425,7 +420,7 @@ async function handleExternalUrl(
return parseResult
} catch (error) {
logger.error(`Error handling external URL ${sanitizeUrlForLog(url)}:`, error)
logger.error(`Error handling external URL ${url}:`, error)
return {
success: false,
error: `Error fetching URL: ${(error as Error).message}`,

View File

@@ -284,7 +284,7 @@ async function handleToolsCall(
content: [
{ type: 'text', text: JSON.stringify(executeResult.output || executeResult, null, 2) },
],
isError: executeResult.success === false,
isError: !executeResult.success,
}
return NextResponse.json(createResponse(id, result))

View File

@@ -20,7 +20,6 @@ import { z } from 'zod'
import { getEmailSubject, renderInvitationEmail } from '@/components/emails'
import { getSession } from '@/lib/auth'
import { hasAccessControlAccess } from '@/lib/billing'
import { syncUsageLimitsFromSubscription } from '@/lib/billing/core/usage'
import { requireStripeClient } from '@/lib/billing/stripe-client'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { sendEmail } from '@/lib/messaging/email/mailer'
@@ -502,18 +501,6 @@ export async function PUT(
}
}
if (status === 'accepted') {
try {
await syncUsageLimitsFromSubscription(session.user.id)
} catch (syncError) {
logger.error('Failed to sync usage limits after joining org', {
userId: session.user.id,
organizationId,
error: syncError,
})
}
}
logger.info(`Organization invitation ${status}`, {
organizationId,
invitationId,

View File

@@ -29,7 +29,7 @@ import { hasWorkspaceAdminAccess } from '@/lib/workspaces/permissions/utils'
import {
InvitationsNotAllowedError,
validateInvitationsAllowed,
} from '@/ee/access-control/utils/permission-check'
} from '@/executor/utils/permission-check'
const logger = createLogger('OrganizationInvitations')

View File

@@ -4,7 +4,6 @@ import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { createA2AClient, extractTextContent, isTerminalState } from '@/lib/a2a/utils'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { validateUrlWithDNS } from '@/lib/core/security/input-validation.server'
import { generateRequestId } from '@/lib/core/utils/request'
export const dynamic = 'force-dynamic'
@@ -96,14 +95,6 @@ export async function POST(request: NextRequest) {
if (validatedData.files && validatedData.files.length > 0) {
for (const file of validatedData.files) {
if (file.type === 'url') {
const urlValidation = await validateUrlWithDNS(file.data, 'fileUrl')
if (!urlValidation.isValid) {
return NextResponse.json(
{ success: false, error: urlValidation.error },
{ status: 400 }
)
}
const filePart: FilePart = {
kind: 'file',
file: {

View File

@@ -3,7 +3,7 @@ import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { createA2AClient } from '@/lib/a2a/utils'
import { checkHybridAuth } from '@/lib/auth/hybrid'
import { validateUrlWithDNS } from '@/lib/core/security/input-validation.server'
import { validateExternalUrl } from '@/lib/core/security/input-validation'
import { generateRequestId } from '@/lib/core/utils/request'
export const dynamic = 'force-dynamic'
@@ -40,7 +40,7 @@ export async function POST(request: NextRequest) {
const body = await request.json()
const validatedData = A2ASetPushNotificationSchema.parse(body)
const urlValidation = await validateUrlWithDNS(validatedData.webhookUrl, 'Webhook URL')
const urlValidation = validateExternalUrl(validatedData.webhookUrl, 'Webhook URL')
if (!urlValidation.isValid) {
logger.warn(`[${requestId}] Invalid webhook URL`, { error: urlValidation.error })
return NextResponse.json(

View File

@@ -92,9 +92,6 @@ export async function POST(request: NextRequest) {
formData.append('comment', comment)
}
// Add minorEdit field as required by Confluence API
formData.append('minorEdit', 'false')
const response = await fetch(url, {
method: 'POST',
headers: {

View File

@@ -4,7 +4,6 @@ import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { validateNumericId } from '@/lib/core/security/input-validation'
import { generateRequestId } from '@/lib/core/utils/request'
import { RawFileInputArraySchema } from '@/lib/uploads/utils/file-schemas'
import { processFilesToUserFiles } from '@/lib/uploads/utils/file-utils'
import { downloadFileFromStorage } from '@/lib/uploads/utils/file-utils.server'
@@ -16,7 +15,7 @@ const DiscordSendMessageSchema = z.object({
botToken: z.string().min(1, 'Bot token is required'),
channelId: z.string().min(1, 'Channel ID is required'),
content: z.string().optional().nullable(),
files: RawFileInputArraySchema.optional().nullable(),
files: z.array(z.any()).optional().nullable(),
})
export async function POST(request: NextRequest) {
@@ -102,12 +101,6 @@ export async function POST(request: NextRequest) {
logger.info(`[${requestId}] Processing ${validatedData.files.length} file(s)`)
const userFiles = processFilesToUserFiles(validatedData.files, requestId, logger)
const filesOutput: Array<{
name: string
mimeType: string
data: string
size: number
}> = []
if (userFiles.length === 0) {
logger.warn(`[${requestId}] No valid files to upload, falling back to text-only`)
@@ -144,12 +137,6 @@ export async function POST(request: NextRequest) {
logger.info(`[${requestId}] Downloading file ${i}: ${userFile.name}`)
const buffer = await downloadFileFromStorage(userFile, requestId, logger)
filesOutput.push({
name: userFile.name,
mimeType: userFile.type || 'application/octet-stream',
data: buffer.toString('base64'),
size: buffer.length,
})
const blob = new Blob([new Uint8Array(buffer)], { type: userFile.type })
formData.append(`files[${i}]`, blob, userFile.name)
@@ -186,7 +173,6 @@ export async function POST(request: NextRequest) {
message: data.content,
data: data,
fileCount: userFiles.length,
files: filesOutput,
},
})
} catch (error) {

View File

@@ -1,141 +0,0 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { FileInputSchema } from '@/lib/uploads/utils/file-schemas'
import { processFilesToUserFiles, type RawFileInput } from '@/lib/uploads/utils/file-utils'
import { downloadFileFromStorage } from '@/lib/uploads/utils/file-utils.server'
export const dynamic = 'force-dynamic'
const logger = createLogger('DropboxUploadAPI')
/**
* Escapes non-ASCII characters in JSON string for HTTP header safety.
* Dropbox API requires characters 0x7F and all non-ASCII to be escaped as \uXXXX.
*/
function httpHeaderSafeJson(value: object): string {
return JSON.stringify(value).replace(/[\u007f-\uffff]/g, (c) => {
return '\\u' + ('0000' + c.charCodeAt(0).toString(16)).slice(-4)
})
}
const DropboxUploadSchema = z.object({
accessToken: z.string().min(1, 'Access token is required'),
path: z.string().min(1, 'Destination path is required'),
file: FileInputSchema.optional().nullable(),
// Legacy field for backwards compatibility
fileContent: z.string().optional().nullable(),
fileName: z.string().optional().nullable(),
mode: z.enum(['add', 'overwrite']).optional().nullable(),
autorename: z.boolean().optional().nullable(),
mute: z.boolean().optional().nullable(),
})
export async function POST(request: NextRequest) {
const requestId = generateRequestId()
try {
const authResult = await checkInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success) {
logger.warn(`[${requestId}] Unauthorized Dropbox upload attempt: ${authResult.error}`)
return NextResponse.json(
{ success: false, error: authResult.error || 'Authentication required' },
{ status: 401 }
)
}
logger.info(`[${requestId}] Authenticated Dropbox upload request via ${authResult.authType}`)
const body = await request.json()
const validatedData = DropboxUploadSchema.parse(body)
let fileBuffer: Buffer
let fileName: string
// Prefer UserFile input, fall back to legacy base64 string
if (validatedData.file) {
// Process UserFile input
const userFiles = processFilesToUserFiles(
[validatedData.file as RawFileInput],
requestId,
logger
)
if (userFiles.length === 0) {
return NextResponse.json({ success: false, error: 'Invalid file input' }, { status: 400 })
}
const userFile = userFiles[0]
logger.info(`[${requestId}] Downloading file: ${userFile.name} (${userFile.size} bytes)`)
fileBuffer = await downloadFileFromStorage(userFile, requestId, logger)
fileName = userFile.name
} else if (validatedData.fileContent) {
// Legacy: base64 string input (backwards compatibility)
logger.info(`[${requestId}] Using legacy base64 content input`)
fileBuffer = Buffer.from(validatedData.fileContent, 'base64')
fileName = validatedData.fileName || 'file'
} else {
return NextResponse.json({ success: false, error: 'File is required' }, { status: 400 })
}
// Determine final path
let finalPath = validatedData.path
if (finalPath.endsWith('/')) {
finalPath = `${finalPath}${fileName}`
}
logger.info(`[${requestId}] Uploading to Dropbox: ${finalPath} (${fileBuffer.length} bytes)`)
const dropboxApiArg = {
path: finalPath,
mode: validatedData.mode || 'add',
autorename: validatedData.autorename ?? true,
mute: validatedData.mute ?? false,
}
const response = await fetch('https://content.dropboxapi.com/2/files/upload', {
method: 'POST',
headers: {
Authorization: `Bearer ${validatedData.accessToken}`,
'Content-Type': 'application/octet-stream',
'Dropbox-API-Arg': httpHeaderSafeJson(dropboxApiArg),
},
body: new Uint8Array(fileBuffer),
})
const data = await response.json()
if (!response.ok) {
const errorMessage = data.error_summary || data.error?.message || 'Failed to upload file'
logger.error(`[${requestId}] Dropbox API error:`, { status: response.status, data })
return NextResponse.json({ success: false, error: errorMessage }, { status: response.status })
}
logger.info(`[${requestId}] File uploaded successfully to ${data.path_display}`)
return NextResponse.json({
success: true,
output: {
file: data,
},
})
} catch (error) {
if (error instanceof z.ZodError) {
logger.warn(`[${requestId}] Validation error:`, error.errors)
return NextResponse.json(
{ success: false, error: error.errors[0]?.message || 'Validation failed' },
{ status: 400 }
)
}
logger.error(`[${requestId}] Unexpected error:`, error)
return NextResponse.json(
{ success: false, error: error instanceof Error ? error.message : 'Unknown error' },
{ status: 500 }
)
}
}
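For reference, a standalone check of the header-escaping helper above, run on a hypothetical non-ASCII path. HTTP header values must stay within ASCII, which is why the Dropbox-API-Arg payload is escaped before it is sent:

    function httpHeaderSafeJson(value: object): string {
      return JSON.stringify(value).replace(/[\u007f-\uffff]/g, (c) => {
        return '\\u' + ('0000' + c.charCodeAt(0).toString(16)).slice(-4)
      })
    }

    // Hypothetical path containing "Ü" (U+00DC)
    const arg = httpHeaderSafeJson({ path: '/Reports/Überblick.pdf', mode: 'add' })
    console.log(arg) // {"path":"/Reports/\u00dcberblick.pdf","mode":"add"}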

View File

@@ -1,195 +0,0 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import {
secureFetchWithPinnedIP,
validateUrlWithDNS,
} from '@/lib/core/security/input-validation.server'
import { generateRequestId } from '@/lib/core/utils/request'
export const dynamic = 'force-dynamic'
const logger = createLogger('GitHubLatestCommitAPI')
interface GitHubErrorResponse {
message?: string
}
interface GitHubCommitResponse {
sha: string
html_url: string
commit: {
message: string
author: { name: string; email: string; date: string }
committer: { name: string; email: string; date: string }
}
author?: { login: string; avatar_url: string; html_url: string }
committer?: { login: string; avatar_url: string; html_url: string }
stats?: { additions: number; deletions: number; total: number }
files?: Array<{
filename: string
status: string
additions: number
deletions: number
changes: number
patch?: string
raw_url?: string
blob_url?: string
}>
}
const GitHubLatestCommitSchema = z.object({
owner: z.string().min(1, 'Owner is required'),
repo: z.string().min(1, 'Repo is required'),
branch: z.string().optional().nullable(),
apiKey: z.string().min(1, 'API key is required'),
})
export async function POST(request: NextRequest) {
const requestId = generateRequestId()
try {
const authResult = await checkInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success) {
logger.warn(`[${requestId}] Unauthorized GitHub latest commit attempt: ${authResult.error}`)
return NextResponse.json(
{
success: false,
error: authResult.error || 'Authentication required',
},
{ status: 401 }
)
}
const body = await request.json()
const validatedData = GitHubLatestCommitSchema.parse(body)
const { owner, repo, branch, apiKey } = validatedData
const baseUrl = `https://api.github.com/repos/${owner}/${repo}`
const commitUrl = branch ? `${baseUrl}/commits/${branch}` : `${baseUrl}/commits/HEAD`
logger.info(`[${requestId}] Fetching latest commit from GitHub`, { owner, repo, branch })
const urlValidation = await validateUrlWithDNS(commitUrl, 'commitUrl')
if (!urlValidation.isValid) {
return NextResponse.json({ success: false, error: urlValidation.error }, { status: 400 })
}
const response = await secureFetchWithPinnedIP(commitUrl, urlValidation.resolvedIP!, {
method: 'GET',
headers: {
Accept: 'application/vnd.github.v3+json',
Authorization: `Bearer ${apiKey}`,
'X-GitHub-Api-Version': '2022-11-28',
},
})
if (!response.ok) {
const errorData = (await response.json().catch(() => ({}))) as GitHubErrorResponse
logger.error(`[${requestId}] GitHub API error`, {
status: response.status,
error: errorData,
})
return NextResponse.json(
{ success: false, error: errorData.message || `GitHub API error: ${response.status}` },
{ status: 400 }
)
}
const data = (await response.json()) as GitHubCommitResponse
const content = `Latest commit: "${data.commit.message}" by ${data.commit.author.name} on ${data.commit.author.date}. SHA: ${data.sha}`
const files = data.files || []
const fileDetailsWithContent = []
for (const file of files) {
const fileDetail: Record<string, any> = {
filename: file.filename,
additions: file.additions,
deletions: file.deletions,
changes: file.changes,
status: file.status,
raw_url: file.raw_url,
blob_url: file.blob_url,
patch: file.patch,
content: undefined,
}
if (file.status !== 'removed' && file.raw_url) {
try {
const rawUrlValidation = await validateUrlWithDNS(file.raw_url, 'rawUrl')
if (rawUrlValidation.isValid) {
const contentResponse = await secureFetchWithPinnedIP(
file.raw_url,
rawUrlValidation.resolvedIP!,
{
headers: {
Authorization: `Bearer ${apiKey}`,
'X-GitHub-Api-Version': '2022-11-28',
},
}
)
if (contentResponse.ok) {
fileDetail.content = await contentResponse.text()
}
}
} catch (error) {
logger.warn(`[${requestId}] Failed to fetch content for ${file.filename}:`, error)
}
}
fileDetailsWithContent.push(fileDetail)
}
logger.info(`[${requestId}] Latest commit fetched successfully`, {
sha: data.sha,
fileCount: files.length,
})
return NextResponse.json({
success: true,
output: {
content,
metadata: {
sha: data.sha,
html_url: data.html_url,
commit_message: data.commit.message,
author: {
name: data.commit.author.name,
login: data.author?.login || 'Unknown',
avatar_url: data.author?.avatar_url || '',
html_url: data.author?.html_url || '',
},
committer: {
name: data.commit.committer.name,
login: data.committer?.login || 'Unknown',
avatar_url: data.committer?.avatar_url || '',
html_url: data.committer?.html_url || '',
},
stats: data.stats
? {
additions: data.stats.additions,
deletions: data.stats.deletions,
total: data.stats.total,
}
: undefined,
files: fileDetailsWithContent.length > 0 ? fileDetailsWithContent : undefined,
},
},
})
} catch (error) {
logger.error(`[${requestId}] Error fetching GitHub latest commit:`, error)
return NextResponse.json(
{
success: false,
error: error instanceof Error ? error.message : 'Unknown error occurred',
},
{ status: 500 }
)
}
}
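For illustration, a minimal sketch of the same latest-commit lookup using a plain fetch instead of the internal validateUrlWithDNS/secureFetchWithPinnedIP helpers; owner, repo, and token are placeholders:

    async function latestCommitSha(
      owner: string,
      repo: string,
      token: string,
      branch?: string
    ): Promise<string> {
      // Same ref resolution as the route: explicit branch if given, otherwise HEAD.
      const ref = branch || 'HEAD'
      const res = await fetch(`https://api.github.com/repos/${owner}/${repo}/commits/${ref}`, {
        headers: {
          Accept: 'application/vnd.github.v3+json',
          Authorization: `Bearer ${token}`,
          'X-GitHub-Api-Version': '2022-11-28',
        },
      })
      if (!res.ok) throw new Error(`GitHub API error: ${res.status}`)
      const data = (await res.json()) as { sha: string; commit: { message: string } }
      return data.sha
    }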

View File

@@ -3,7 +3,6 @@ import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { RawFileInputArraySchema } from '@/lib/uploads/utils/file-schemas'
import { processFilesToUserFiles } from '@/lib/uploads/utils/file-utils'
import { downloadFileFromStorage } from '@/lib/uploads/utils/file-utils.server'
import {
@@ -29,7 +28,7 @@ const GmailDraftSchema = z.object({
replyToMessageId: z.string().optional().nullable(),
cc: z.string().optional().nullable(),
bcc: z.string().optional().nullable(),
attachments: RawFileInputArraySchema.optional().nullable(),
attachments: z.array(z.any()).optional().nullable(),
})
export async function POST(request: NextRequest) {
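The hunk above swaps the attachments field between RawFileInputArraySchema and z.array(z.any()). As a quick reference, this is what the looser form accepts and rejects (plain zod, nothing project-specific):

    import { z } from 'zod'

    const attachments = z.array(z.any()).optional().nullable()

    attachments.parse(undefined)           // ok: field omitted entirely
    attachments.parse(null)                // ok: explicit null
    attachments.parse([{ name: 'a.pdf' }]) // ok: any element shape is allowed
    // attachments.parse('a.pdf')          // throws: still must be an array when present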

View File

@@ -3,7 +3,6 @@ import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { RawFileInputArraySchema } from '@/lib/uploads/utils/file-schemas'
import { processFilesToUserFiles } from '@/lib/uploads/utils/file-utils'
import { downloadFileFromStorage } from '@/lib/uploads/utils/file-utils.server'
import {
@@ -29,7 +28,7 @@ const GmailSendSchema = z.object({
replyToMessageId: z.string().optional().nullable(),
cc: z.string().optional().nullable(),
bcc: z.string().optional().nullable(),
attachments: RawFileInputArraySchema.optional().nullable(),
attachments: z.array(z.any()).optional().nullable(),
})
export async function POST(request: NextRequest) {

View File

@@ -1,252 +0,0 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import {
secureFetchWithPinnedIP,
validateUrlWithDNS,
} from '@/lib/core/security/input-validation.server'
import { generateRequestId } from '@/lib/core/utils/request'
import type { GoogleDriveFile, GoogleDriveRevision } from '@/tools/google_drive/types'
import {
ALL_FILE_FIELDS,
ALL_REVISION_FIELDS,
DEFAULT_EXPORT_FORMATS,
GOOGLE_WORKSPACE_MIME_TYPES,
} from '@/tools/google_drive/utils'
export const dynamic = 'force-dynamic'
const logger = createLogger('GoogleDriveDownloadAPI')
/** Google API error response structure */
interface GoogleApiErrorResponse {
error?: {
message?: string
code?: number
status?: string
}
}
/** Google Drive revisions list response */
interface GoogleDriveRevisionsResponse {
revisions?: GoogleDriveRevision[]
nextPageToken?: string
}
const GoogleDriveDownloadSchema = z.object({
accessToken: z.string().min(1, 'Access token is required'),
fileId: z.string().min(1, 'File ID is required'),
mimeType: z.string().optional().nullable(),
fileName: z.string().optional().nullable(),
includeRevisions: z.boolean().optional().default(true),
})
export async function POST(request: NextRequest) {
const requestId = generateRequestId()
try {
const authResult = await checkInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success) {
logger.warn(`[${requestId}] Unauthorized Google Drive download attempt: ${authResult.error}`)
return NextResponse.json(
{
success: false,
error: authResult.error || 'Authentication required',
},
{ status: 401 }
)
}
const body = await request.json()
const validatedData = GoogleDriveDownloadSchema.parse(body)
const {
accessToken,
fileId,
mimeType: exportMimeType,
fileName,
includeRevisions,
} = validatedData
const authHeader = `Bearer ${accessToken}`
logger.info(`[${requestId}] Getting file metadata from Google Drive`, { fileId })
const metadataUrl = `https://www.googleapis.com/drive/v3/files/${fileId}?fields=${ALL_FILE_FIELDS}&supportsAllDrives=true`
const metadataUrlValidation = await validateUrlWithDNS(metadataUrl, 'metadataUrl')
if (!metadataUrlValidation.isValid) {
return NextResponse.json(
{ success: false, error: metadataUrlValidation.error },
{ status: 400 }
)
}
const metadataResponse = await secureFetchWithPinnedIP(
metadataUrl,
metadataUrlValidation.resolvedIP!,
{
headers: { Authorization: authHeader },
}
)
if (!metadataResponse.ok) {
const errorDetails = (await metadataResponse
.json()
.catch(() => ({}))) as GoogleApiErrorResponse
logger.error(`[${requestId}] Failed to get file metadata`, {
status: metadataResponse.status,
error: errorDetails,
})
return NextResponse.json(
{ success: false, error: errorDetails.error?.message || 'Failed to get file metadata' },
{ status: 400 }
)
}
const metadata = (await metadataResponse.json()) as GoogleDriveFile
const fileMimeType = metadata.mimeType
let fileBuffer: Buffer
let finalMimeType = fileMimeType
if (GOOGLE_WORKSPACE_MIME_TYPES.includes(fileMimeType)) {
const exportFormat = exportMimeType || DEFAULT_EXPORT_FORMATS[fileMimeType] || 'text/plain'
finalMimeType = exportFormat
logger.info(`[${requestId}] Exporting Google Workspace file`, {
fileId,
mimeType: fileMimeType,
exportFormat,
})
const exportUrl = `https://www.googleapis.com/drive/v3/files/${fileId}/export?mimeType=${encodeURIComponent(exportFormat)}&supportsAllDrives=true`
const exportUrlValidation = await validateUrlWithDNS(exportUrl, 'exportUrl')
if (!exportUrlValidation.isValid) {
return NextResponse.json(
{ success: false, error: exportUrlValidation.error },
{ status: 400 }
)
}
const exportResponse = await secureFetchWithPinnedIP(
exportUrl,
exportUrlValidation.resolvedIP!,
{ headers: { Authorization: authHeader } }
)
if (!exportResponse.ok) {
const exportError = (await exportResponse
.json()
.catch(() => ({}))) as GoogleApiErrorResponse
logger.error(`[${requestId}] Failed to export file`, {
status: exportResponse.status,
error: exportError,
})
return NextResponse.json(
{
success: false,
error: exportError.error?.message || 'Failed to export Google Workspace file',
},
{ status: 400 }
)
}
const arrayBuffer = await exportResponse.arrayBuffer()
fileBuffer = Buffer.from(arrayBuffer)
} else {
logger.info(`[${requestId}] Downloading regular file`, { fileId, mimeType: fileMimeType })
const downloadUrl = `https://www.googleapis.com/drive/v3/files/${fileId}?alt=media&supportsAllDrives=true`
const downloadUrlValidation = await validateUrlWithDNS(downloadUrl, 'downloadUrl')
if (!downloadUrlValidation.isValid) {
return NextResponse.json(
{ success: false, error: downloadUrlValidation.error },
{ status: 400 }
)
}
const downloadResponse = await secureFetchWithPinnedIP(
downloadUrl,
downloadUrlValidation.resolvedIP!,
{ headers: { Authorization: authHeader } }
)
if (!downloadResponse.ok) {
const downloadError = (await downloadResponse
.json()
.catch(() => ({}))) as GoogleApiErrorResponse
logger.error(`[${requestId}] Failed to download file`, {
status: downloadResponse.status,
error: downloadError,
})
return NextResponse.json(
{ success: false, error: downloadError.error?.message || 'Failed to download file' },
{ status: 400 }
)
}
const arrayBuffer = await downloadResponse.arrayBuffer()
fileBuffer = Buffer.from(arrayBuffer)
}
const canReadRevisions = metadata.capabilities?.canReadRevisions === true
if (includeRevisions && canReadRevisions) {
try {
const revisionsUrl = `https://www.googleapis.com/drive/v3/files/${fileId}/revisions?fields=revisions(${ALL_REVISION_FIELDS})&pageSize=100`
const revisionsUrlValidation = await validateUrlWithDNS(revisionsUrl, 'revisionsUrl')
if (revisionsUrlValidation.isValid) {
const revisionsResponse = await secureFetchWithPinnedIP(
revisionsUrl,
revisionsUrlValidation.resolvedIP!,
{ headers: { Authorization: authHeader } }
)
if (revisionsResponse.ok) {
const revisionsData = (await revisionsResponse.json()) as GoogleDriveRevisionsResponse
metadata.revisions = revisionsData.revisions
logger.info(`[${requestId}] Fetched file revisions`, {
fileId,
revisionCount: metadata.revisions?.length || 0,
})
}
}
} catch (error) {
logger.warn(`[${requestId}] Error fetching revisions, continuing without them`, { error })
}
}
const resolvedName = fileName || metadata.name || 'download'
logger.info(`[${requestId}] File downloaded successfully`, {
fileId,
name: resolvedName,
size: fileBuffer.length,
mimeType: finalMimeType,
})
const base64Data = fileBuffer.toString('base64')
return NextResponse.json({
success: true,
output: {
file: {
name: resolvedName,
mimeType: finalMimeType,
data: base64Data,
size: fileBuffer.length,
},
metadata,
},
})
} catch (error) {
logger.error(`[${requestId}] Error downloading Google Drive file:`, error)
return NextResponse.json(
{
success: false,
error: error instanceof Error ? error.message : 'Unknown error occurred',
},
{ status: 500 }
)
}
}
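The deleted route branches on whether the file is a Google Workspace document (which must be exported) or a regular binary (which is downloaded directly). A compressed sketch of that branching follows; the MIME-type list and export formats below are illustrative stand-ins, since the real mappings live in '@/tools/google_drive/utils' and are not shown in this diff:

    const WORKSPACE_MIME_TYPES = [
      'application/vnd.google-apps.document',
      'application/vnd.google-apps.spreadsheet',
      'application/vnd.google-apps.presentation',
    ]

    const EXPORT_FORMATS: Record<string, string> = {
      'application/vnd.google-apps.document': 'application/pdf',
      'application/vnd.google-apps.spreadsheet': 'text/csv',
      'application/vnd.google-apps.presentation': 'application/pdf',
    }

    // Workspace files go through /export with a target MIME type;
    // binary files are fetched directly with ?alt=media.
    function driveContentUrl(fileId: string, mimeType: string, requestedExport?: string): string {
      if (WORKSPACE_MIME_TYPES.includes(mimeType)) {
        const exportFormat = requestedExport || EXPORT_FORMATS[mimeType] || 'text/plain'
        return `https://www.googleapis.com/drive/v3/files/${fileId}/export?mimeType=${encodeURIComponent(exportFormat)}&supportsAllDrives=true`
      }
      return `https://www.googleapis.com/drive/v3/files/${fileId}?alt=media&supportsAllDrives=true`
    }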

View File

@@ -3,7 +3,6 @@ import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { RawFileInputSchema } from '@/lib/uploads/utils/file-schemas'
import { processSingleFileToUserFile } from '@/lib/uploads/utils/file-utils'
import { downloadFileFromStorage } from '@/lib/uploads/utils/file-utils.server'
import {
@@ -21,7 +20,7 @@ const GOOGLE_DRIVE_API_BASE = 'https://www.googleapis.com/upload/drive/v3/files'
const GoogleDriveUploadSchema = z.object({
accessToken: z.string().min(1, 'Access token is required'),
fileName: z.string().min(1, 'File name is required'),
file: RawFileInputSchema.optional().nullable(),
file: z.any().optional().nullable(),
mimeType: z.string().optional().nullable(),
folderId: z.string().optional().nullable(),
})

View File

@@ -1,131 +0,0 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import {
secureFetchWithPinnedIP,
validateUrlWithDNS,
} from '@/lib/core/security/input-validation.server'
import { generateRequestId } from '@/lib/core/utils/request'
import { enhanceGoogleVaultError } from '@/tools/google_vault/utils'
export const dynamic = 'force-dynamic'
const logger = createLogger('GoogleVaultDownloadExportFileAPI')
const GoogleVaultDownloadExportFileSchema = z.object({
accessToken: z.string().min(1, 'Access token is required'),
bucketName: z.string().min(1, 'Bucket name is required'),
objectName: z.string().min(1, 'Object name is required'),
fileName: z.string().optional().nullable(),
})
export async function POST(request: NextRequest) {
const requestId = generateRequestId()
try {
const authResult = await checkInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success) {
logger.warn(`[${requestId}] Unauthorized Google Vault download attempt: ${authResult.error}`)
return NextResponse.json(
{
success: false,
error: authResult.error || 'Authentication required',
},
{ status: 401 }
)
}
const body = await request.json()
const validatedData = GoogleVaultDownloadExportFileSchema.parse(body)
const { accessToken, bucketName, objectName, fileName } = validatedData
const bucket = encodeURIComponent(bucketName)
const object = encodeURIComponent(objectName)
const downloadUrl = `https://storage.googleapis.com/storage/v1/b/${bucket}/o/${object}?alt=media`
logger.info(`[${requestId}] Downloading file from Google Vault`, { bucketName, objectName })
const urlValidation = await validateUrlWithDNS(downloadUrl, 'downloadUrl')
if (!urlValidation.isValid) {
return NextResponse.json(
{ success: false, error: enhanceGoogleVaultError(urlValidation.error || 'Invalid URL') },
{ status: 400 }
)
}
const downloadResponse = await secureFetchWithPinnedIP(downloadUrl, urlValidation.resolvedIP!, {
method: 'GET',
headers: {
Authorization: `Bearer ${accessToken}`,
},
})
if (!downloadResponse.ok) {
const errorText = await downloadResponse.text().catch(() => '')
const errorMessage = `Failed to download file: ${errorText || downloadResponse.statusText}`
logger.error(`[${requestId}] Failed to download Vault export file`, {
status: downloadResponse.status,
error: errorText,
})
return NextResponse.json(
{ success: false, error: enhanceGoogleVaultError(errorMessage) },
{ status: 400 }
)
}
const contentType = downloadResponse.headers.get('content-type') || 'application/octet-stream'
const disposition = downloadResponse.headers.get('content-disposition') || ''
const match = disposition.match(/filename\*=UTF-8''([^;]+)|filename="([^"]+)"/)
let resolvedName = fileName
if (!resolvedName) {
if (match?.[1]) {
try {
resolvedName = decodeURIComponent(match[1])
} catch {
resolvedName = match[1]
}
} else if (match?.[2]) {
resolvedName = match[2]
} else if (objectName) {
const parts = objectName.split('/')
resolvedName = parts[parts.length - 1] || 'vault-export.bin'
} else {
resolvedName = 'vault-export.bin'
}
}
const arrayBuffer = await downloadResponse.arrayBuffer()
const buffer = Buffer.from(arrayBuffer)
logger.info(`[${requestId}] Vault export file downloaded successfully`, {
name: resolvedName,
size: buffer.length,
mimeType: contentType,
})
return NextResponse.json({
success: true,
output: {
file: {
name: resolvedName,
mimeType: contentType,
data: buffer.toString('base64'),
size: buffer.length,
},
},
})
} catch (error) {
logger.error(`[${requestId}] Error downloading Google Vault export file:`, error)
return NextResponse.json(
{
success: false,
error: error instanceof Error ? error.message : 'Unknown error occurred',
},
{ status: 500 }
)
}
}
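A quick check of the Content-Disposition filename parsing used above, on the two header shapes it handles (RFC 5987 encoded and plain quoted); the sample values are made up:

    const pattern = /filename\*=UTF-8''([^;]+)|filename="([^"]+)"/

    const encoded = "attachment; filename*=UTF-8''vault%20export%20%231.zip"
    const quoted = 'attachment; filename="vault-export.zip"'

    const m1 = encoded.match(pattern)
    if (m1?.[1]) console.log(decodeURIComponent(m1[1])) // "vault export #1.zip"

    const m2 = quoted.match(pattern)
    if (m2?.[2]) console.log(m2[2]) // "vault-export.zip"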

View File

@@ -1,10 +1,7 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import {
secureFetchWithPinnedIP,
validateUrlWithDNS,
} from '@/lib/core/security/input-validation.server'
import { validateImageUrl } from '@/lib/core/security/input-validation'
import { generateRequestId } from '@/lib/core/utils/request'
const logger = createLogger('ImageProxyAPI')
@@ -29,7 +26,7 @@ export async function GET(request: NextRequest) {
return new NextResponse('Missing URL parameter', { status: 400 })
}
const urlValidation = await validateUrlWithDNS(imageUrl, 'imageUrl')
const urlValidation = validateImageUrl(imageUrl)
if (!urlValidation.isValid) {
logger.warn(`[${requestId}] Blocked image proxy request`, {
url: imageUrl.substring(0, 100),
@@ -41,8 +38,7 @@ export async function GET(request: NextRequest) {
logger.info(`[${requestId}] Proxying image request for: ${imageUrl}`)
try {
const imageResponse = await secureFetchWithPinnedIP(imageUrl, urlValidation.resolvedIP!, {
method: 'GET',
const imageResponse = await fetch(imageUrl, {
headers: {
'User-Agent':
'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36',
@@ -68,14 +64,14 @@ export async function GET(request: NextRequest) {
const contentType = imageResponse.headers.get('content-type') || 'image/jpeg'
const imageArrayBuffer = await imageResponse.arrayBuffer()
const imageBlob = await imageResponse.blob()
if (imageArrayBuffer.byteLength === 0) {
logger.error(`[${requestId}] Empty image received`)
if (imageBlob.size === 0) {
logger.error(`[${requestId}] Empty image blob received`)
return new NextResponse('Empty image received', { status: 404 })
}
return new NextResponse(imageArrayBuffer, {
return new NextResponse(imageBlob, {
headers: {
'Content-Type': contentType,
'Access-Control-Allow-Origin': '*',
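This hunk trades the DNS-validated, IP-pinned fetch for a synchronous validateImageUrl check plus a plain fetch. The validator itself is not part of this diff; the sketch below is only a guess at the kind of checks such a function performs, not the project's implementation:

    // Hypothetical sketch; validateImageUrl in '@/lib/core/security/input-validation' may differ.
    function validateImageUrlSketch(raw: string): { isValid: boolean; error?: string } {
      let url: URL
      try {
        url = new URL(raw)
      } catch {
        return { isValid: false, error: 'Malformed URL' }
      }
      if (url.protocol !== 'https:' && url.protocol !== 'http:') {
        return { isValid: false, error: 'Only http(s) URLs are allowed' }
      }
      // Block obvious loopback/metadata hosts; a DNS-based check (as the removed
      // validateUrlWithDNS did) would also catch names that resolve to private IPs.
      const host = url.hostname.toLowerCase()
      if (host === 'localhost' || host === '127.0.0.1' || host === '169.254.169.254') {
        return { isValid: false, error: 'Blocked host' }
      }
      return { isValid: true }
    }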

View File

@@ -1,121 +0,0 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { RawFileInputArraySchema } from '@/lib/uploads/utils/file-schemas'
import { processFilesToUserFiles } from '@/lib/uploads/utils/file-utils'
import { downloadFileFromStorage } from '@/lib/uploads/utils/file-utils.server'
import { getJiraCloudId } from '@/tools/jira/utils'
const logger = createLogger('JiraAddAttachmentAPI')
export const dynamic = 'force-dynamic'
const JiraAddAttachmentSchema = z.object({
accessToken: z.string().min(1, 'Access token is required'),
domain: z.string().min(1, 'Domain is required'),
issueKey: z.string().min(1, 'Issue key is required'),
files: RawFileInputArraySchema,
cloudId: z.string().optional().nullable(),
})
export async function POST(request: NextRequest) {
const requestId = `jira-attach-${Date.now()}`
try {
const authResult = await checkInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success) {
return NextResponse.json(
{ success: false, error: authResult.error || 'Unauthorized' },
{ status: 401 }
)
}
const body = await request.json()
const validatedData = JiraAddAttachmentSchema.parse(body)
const userFiles = processFilesToUserFiles(validatedData.files, requestId, logger)
if (userFiles.length === 0) {
return NextResponse.json(
{ success: false, error: 'No valid files provided for upload' },
{ status: 400 }
)
}
const cloudId =
validatedData.cloudId ||
(await getJiraCloudId(validatedData.domain, validatedData.accessToken))
const formData = new FormData()
const filesOutput: Array<{ name: string; mimeType: string; data: string; size: number }> = []
for (const file of userFiles) {
const buffer = await downloadFileFromStorage(file, requestId, logger)
filesOutput.push({
name: file.name,
mimeType: file.type || 'application/octet-stream',
data: buffer.toString('base64'),
size: buffer.length,
})
const blob = new Blob([new Uint8Array(buffer)], {
type: file.type || 'application/octet-stream',
})
formData.append('file', blob, file.name)
}
const url = `https://api.atlassian.com/ex/jira/${cloudId}/rest/api/3/issue/${validatedData.issueKey}/attachments`
const response = await fetch(url, {
method: 'POST',
headers: {
Authorization: `Bearer ${validatedData.accessToken}`,
'X-Atlassian-Token': 'no-check',
},
body: formData,
})
if (!response.ok) {
const errorText = await response.text()
logger.error(`[${requestId}] Jira attachment upload failed`, {
status: response.status,
statusText: response.statusText,
error: errorText,
})
return NextResponse.json(
{
success: false,
error: `Failed to upload attachments: ${response.statusText}`,
},
{ status: response.status }
)
}
const attachments = await response.json()
const attachmentIds = Array.isArray(attachments)
? attachments.map((attachment) => attachment.id).filter(Boolean)
: []
return NextResponse.json({
success: true,
output: {
ts: new Date().toISOString(),
issueKey: validatedData.issueKey,
attachmentIds,
files: filesOutput,
},
})
} catch (error) {
if (error instanceof z.ZodError) {
return NextResponse.json(
{ success: false, error: 'Invalid request data', details: error.errors },
{ status: 400 }
)
}
logger.error(`[${requestId}] Jira attachment upload error`, error)
return NextResponse.json(
{ success: false, error: error instanceof Error ? error.message : 'Internal server error' },
{ status: 500 }
)
}
}
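A minimal sketch of the multipart upload performed by the deleted route, using in-memory buffers instead of files pulled from storage; cloudId, issueKey, and the token are placeholders:

    async function attachToJiraIssue(
      cloudId: string,
      issueKey: string,
      accessToken: string,
      files: Array<{ name: string; type: string; buffer: Buffer }>
    ) {
      const formData = new FormData()
      for (const f of files) {
        const blob = new Blob([new Uint8Array(f.buffer)], { type: f.type || 'application/octet-stream' })
        formData.append('file', blob, f.name)
      }
      const res = await fetch(
        `https://api.atlassian.com/ex/jira/${cloudId}/rest/api/3/issue/${issueKey}/attachments`,
        {
          method: 'POST',
          headers: {
            Authorization: `Bearer ${accessToken}`,
            // Required by Jira to bypass XSRF checks on multipart attachment uploads.
            'X-Atlassian-Token': 'no-check',
          },
          body: formData,
        }
      )
      if (!res.ok) throw new Error(`Failed to upload attachments: ${res.statusText}`)
      return (await res.json()) as Array<{ id: string }>
    }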

View File

@@ -2,11 +2,9 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { secureFetchWithValidation } from '@/lib/core/security/input-validation.server'
import { generateRequestId } from '@/lib/core/utils/request'
import { RawFileInputArraySchema } from '@/lib/uploads/utils/file-schemas'
import { uploadFilesForTeamsMessage } from '@/tools/microsoft_teams/server-utils'
import type { GraphApiErrorResponse, GraphChatMessage } from '@/tools/microsoft_teams/types'
import { processFilesToUserFiles } from '@/lib/uploads/utils/file-utils'
import { downloadFileFromStorage } from '@/lib/uploads/utils/file-utils.server'
import { resolveMentionsForChannel, type TeamsMention } from '@/tools/microsoft_teams/utils'
export const dynamic = 'force-dynamic'
@@ -18,7 +16,7 @@ const TeamsWriteChannelSchema = z.object({
teamId: z.string().min(1, 'Team ID is required'),
channelId: z.string().min(1, 'Channel ID is required'),
content: z.string().min(1, 'Message content is required'),
files: RawFileInputArraySchema.optional().nullable(),
files: z.array(z.any()).optional().nullable(),
})
export async function POST(request: NextRequest) {
@@ -55,12 +53,93 @@ export async function POST(request: NextRequest) {
fileCount: validatedData.files?.length || 0,
})
const { attachments, filesOutput } = await uploadFilesForTeamsMessage({
rawFiles: validatedData.files || [],
accessToken: validatedData.accessToken,
requestId,
logger,
})
const attachments: any[] = []
if (validatedData.files && validatedData.files.length > 0) {
const rawFiles = validatedData.files
logger.info(`[${requestId}] Processing ${rawFiles.length} file(s) for upload to OneDrive`)
const userFiles = processFilesToUserFiles(rawFiles, requestId, logger)
for (const file of userFiles) {
try {
logger.info(`[${requestId}] Uploading file to Teams: ${file.name} (${file.size} bytes)`)
const buffer = await downloadFileFromStorage(file, requestId, logger)
const uploadUrl =
'https://graph.microsoft.com/v1.0/me/drive/root:/TeamsAttachments/' +
encodeURIComponent(file.name) +
':/content'
logger.info(`[${requestId}] Uploading to Teams: ${uploadUrl}`)
const uploadResponse = await fetch(uploadUrl, {
method: 'PUT',
headers: {
Authorization: `Bearer ${validatedData.accessToken}`,
'Content-Type': file.type || 'application/octet-stream',
},
body: new Uint8Array(buffer),
})
if (!uploadResponse.ok) {
const errorData = await uploadResponse.json().catch(() => ({}))
logger.error(`[${requestId}] Teams upload failed:`, errorData)
throw new Error(
`Failed to upload file to Teams: ${errorData.error?.message || 'Unknown error'}`
)
}
const uploadedFile = await uploadResponse.json()
logger.info(`[${requestId}] File uploaded to Teams successfully`, {
id: uploadedFile.id,
webUrl: uploadedFile.webUrl,
})
const fileDetailsUrl = `https://graph.microsoft.com/v1.0/me/drive/items/${uploadedFile.id}?$select=id,name,webDavUrl,eTag,size`
const fileDetailsResponse = await fetch(fileDetailsUrl, {
headers: {
Authorization: `Bearer ${validatedData.accessToken}`,
},
})
if (!fileDetailsResponse.ok) {
const errorData = await fileDetailsResponse.json().catch(() => ({}))
logger.error(`[${requestId}] Failed to get file details:`, errorData)
throw new Error(
`Failed to get file details: ${errorData.error?.message || 'Unknown error'}`
)
}
const fileDetails = await fileDetailsResponse.json()
logger.info(`[${requestId}] Got file details`, {
webDavUrl: fileDetails.webDavUrl,
eTag: fileDetails.eTag,
})
const attachmentId = fileDetails.eTag?.match(/\{([a-f0-9-]+)\}/i)?.[1] || fileDetails.id
attachments.push({
id: attachmentId,
contentType: 'reference',
contentUrl: fileDetails.webDavUrl,
name: file.name,
})
logger.info(`[${requestId}] Created attachment reference for ${file.name}`)
} catch (error) {
logger.error(`[${requestId}] Failed to process file ${file.name}:`, error)
throw new Error(
`Failed to process file "${file.name}": ${error instanceof Error ? error.message : 'Unknown error'}`
)
}
}
logger.info(
`[${requestId}] All ${attachments.length} file(s) uploaded and attachment references created`
)
}
let messageContent = validatedData.content
let contentType: 'text' | 'html' = 'text'
@@ -118,21 +197,17 @@ export async function POST(request: NextRequest) {
const teamsUrl = `https://graph.microsoft.com/v1.0/teams/${encodeURIComponent(validatedData.teamId)}/channels/${encodeURIComponent(validatedData.channelId)}/messages`
const teamsResponse = await secureFetchWithValidation(
teamsUrl,
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${validatedData.accessToken}`,
},
body: JSON.stringify(messageBody),
const teamsResponse = await fetch(teamsUrl, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${validatedData.accessToken}`,
},
'teamsUrl'
)
body: JSON.stringify(messageBody),
})
if (!teamsResponse.ok) {
const errorData = (await teamsResponse.json().catch(() => ({}))) as GraphApiErrorResponse
const errorData = await teamsResponse.json().catch(() => ({}))
logger.error(`[${requestId}] Microsoft Teams API error:`, errorData)
return NextResponse.json(
{
@@ -143,7 +218,7 @@ export async function POST(request: NextRequest) {
)
}
const responseData = (await teamsResponse.json()) as GraphChatMessage
const responseData = await teamsResponse.json()
logger.info(`[${requestId}] Teams channel message sent successfully`, {
messageId: responseData.id,
attachmentCount: attachments.length,
@@ -162,7 +237,6 @@ export async function POST(request: NextRequest) {
url: responseData.webUrl || '',
attachmentCount: attachments.length,
},
files: filesOutput,
},
})
} catch (error) {
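For reference, a standalone check of the eTag-to-attachment-id extraction used above, plus the shape of the reference attachment it feeds into; the GUID and SharePoint URL are placeholders:

    // Graph returns eTags like '"{GUID},2"'; the Teams attachment id must be the bare GUID.
    const eTag = '"{6F2AAE7B-5A7E-4D5B-9C3B-111111111111},2"'
    const attachmentId = eTag.match(/\{([a-f0-9-]+)\}/i)?.[1]
    console.log(attachmentId) // 6F2AAE7B-5A7E-4D5B-9C3B-111111111111

    // Shape of the reference attachment appended to the message body:
    const attachment = {
      id: attachmentId,
      contentType: 'reference',
      contentUrl: 'https://contoso-my.sharepoint.com/personal/user/Documents/TeamsAttachments/report.pdf', // webDavUrl from Graph
      name: 'report.pdf',
    }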

View File

@@ -2,11 +2,9 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { secureFetchWithValidation } from '@/lib/core/security/input-validation.server'
import { generateRequestId } from '@/lib/core/utils/request'
import { RawFileInputArraySchema } from '@/lib/uploads/utils/file-schemas'
import { uploadFilesForTeamsMessage } from '@/tools/microsoft_teams/server-utils'
import type { GraphApiErrorResponse, GraphChatMessage } from '@/tools/microsoft_teams/types'
import { processFilesToUserFiles } from '@/lib/uploads/utils/file-utils'
import { downloadFileFromStorage } from '@/lib/uploads/utils/file-utils.server'
import { resolveMentionsForChat, type TeamsMention } from '@/tools/microsoft_teams/utils'
export const dynamic = 'force-dynamic'
@@ -17,7 +15,7 @@ const TeamsWriteChatSchema = z.object({
accessToken: z.string().min(1, 'Access token is required'),
chatId: z.string().min(1, 'Chat ID is required'),
content: z.string().min(1, 'Message content is required'),
files: RawFileInputArraySchema.optional().nullable(),
files: z.array(z.any()).optional().nullable(),
})
export async function POST(request: NextRequest) {
@@ -53,12 +51,93 @@ export async function POST(request: NextRequest) {
fileCount: validatedData.files?.length || 0,
})
const { attachments, filesOutput } = await uploadFilesForTeamsMessage({
rawFiles: validatedData.files || [],
accessToken: validatedData.accessToken,
requestId,
logger,
})
const attachments: any[] = []
if (validatedData.files && validatedData.files.length > 0) {
const rawFiles = validatedData.files
logger.info(`[${requestId}] Processing ${rawFiles.length} file(s) for upload to Teams`)
const userFiles = processFilesToUserFiles(rawFiles, requestId, logger)
for (const file of userFiles) {
try {
logger.info(`[${requestId}] Uploading file to Teams: ${file.name} (${file.size} bytes)`)
const buffer = await downloadFileFromStorage(file, requestId, logger)
const uploadUrl =
'https://graph.microsoft.com/v1.0/me/drive/root:/TeamsAttachments/' +
encodeURIComponent(file.name) +
':/content'
logger.info(`[${requestId}] Uploading to Teams: ${uploadUrl}`)
const uploadResponse = await fetch(uploadUrl, {
method: 'PUT',
headers: {
Authorization: `Bearer ${validatedData.accessToken}`,
'Content-Type': file.type || 'application/octet-stream',
},
body: new Uint8Array(buffer),
})
if (!uploadResponse.ok) {
const errorData = await uploadResponse.json().catch(() => ({}))
logger.error(`[${requestId}] Teams upload failed:`, errorData)
throw new Error(
`Failed to upload file to Teams: ${errorData.error?.message || 'Unknown error'}`
)
}
const uploadedFile = await uploadResponse.json()
logger.info(`[${requestId}] File uploaded to Teams successfully`, {
id: uploadedFile.id,
webUrl: uploadedFile.webUrl,
})
const fileDetailsUrl = `https://graph.microsoft.com/v1.0/me/drive/items/${uploadedFile.id}?$select=id,name,webDavUrl,eTag,size`
const fileDetailsResponse = await fetch(fileDetailsUrl, {
headers: {
Authorization: `Bearer ${validatedData.accessToken}`,
},
})
if (!fileDetailsResponse.ok) {
const errorData = await fileDetailsResponse.json().catch(() => ({}))
logger.error(`[${requestId}] Failed to get file details:`, errorData)
throw new Error(
`Failed to get file details: ${errorData.error?.message || 'Unknown error'}`
)
}
const fileDetails = await fileDetailsResponse.json()
logger.info(`[${requestId}] Got file details`, {
webDavUrl: fileDetails.webDavUrl,
eTag: fileDetails.eTag,
})
const attachmentId = fileDetails.eTag?.match(/\{([a-f0-9-]+)\}/i)?.[1] || fileDetails.id
attachments.push({
id: attachmentId,
contentType: 'reference',
contentUrl: fileDetails.webDavUrl,
name: file.name,
})
logger.info(`[${requestId}] Created attachment reference for ${file.name}`)
} catch (error) {
logger.error(`[${requestId}] Failed to process file ${file.name}:`, error)
throw new Error(
`Failed to process file "${file.name}": ${error instanceof Error ? error.message : 'Unknown error'}`
)
}
}
logger.info(
`[${requestId}] All ${attachments.length} file(s) uploaded and attachment references created`
)
}
let messageContent = validatedData.content
let contentType: 'text' | 'html' = 'text'
@@ -115,21 +194,17 @@ export async function POST(request: NextRequest) {
const teamsUrl = `https://graph.microsoft.com/v1.0/chats/${encodeURIComponent(validatedData.chatId)}/messages`
const teamsResponse = await secureFetchWithValidation(
teamsUrl,
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${validatedData.accessToken}`,
},
body: JSON.stringify(messageBody),
const teamsResponse = await fetch(teamsUrl, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${validatedData.accessToken}`,
},
'teamsUrl'
)
body: JSON.stringify(messageBody),
})
if (!teamsResponse.ok) {
const errorData = (await teamsResponse.json().catch(() => ({}))) as GraphApiErrorResponse
const errorData = await teamsResponse.json().catch(() => ({}))
logger.error(`[${requestId}] Microsoft Teams API error:`, errorData)
return NextResponse.json(
{
@@ -140,7 +215,7 @@ export async function POST(request: NextRequest) {
)
}
const responseData = (await teamsResponse.json()) as GraphChatMessage
const responseData = await teamsResponse.json()
logger.info(`[${requestId}] Teams message sent successfully`, {
messageId: responseData.id,
attachmentCount: attachments.length,
@@ -158,7 +233,6 @@ export async function POST(request: NextRequest) {
url: responseData.webUrl || '',
attachmentCount: attachments.length,
},
files: filesOutput,
},
})
} catch (error) {

View File

@@ -2,17 +2,15 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import {
secureFetchWithPinnedIP,
validateUrlWithDNS,
} from '@/lib/core/security/input-validation.server'
import { generateRequestId } from '@/lib/core/utils/request'
import { FileInputSchema } from '@/lib/uploads/utils/file-schemas'
import { isInternalFileUrl, processSingleFileToUserFile } from '@/lib/uploads/utils/file-utils'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { StorageService } from '@/lib/uploads'
import {
downloadFileFromStorage,
resolveInternalFileUrl,
} from '@/lib/uploads/utils/file-utils.server'
extractStorageKey,
inferContextFromKey,
isInternalFileUrl,
} from '@/lib/uploads/utils/file-utils'
import { verifyFileAccess } from '@/app/api/files/authorization'
export const dynamic = 'force-dynamic'
@@ -20,9 +18,7 @@ const logger = createLogger('MistralParseAPI')
const MistralParseSchema = z.object({
apiKey: z.string().min(1, 'API key is required'),
filePath: z.string().min(1, 'File path is required').optional(),
fileData: FileInputSchema.optional(),
file: FileInputSchema.optional(),
filePath: z.string().min(1, 'File path is required'),
resultType: z.string().optional(),
pages: z.array(z.number()).optional(),
includeImageBase64: z.boolean().optional(),
@@ -53,140 +49,66 @@ export async function POST(request: NextRequest) {
const body = await request.json()
const validatedData = MistralParseSchema.parse(body)
const fileData = validatedData.file || validatedData.fileData
const filePath = typeof fileData === 'string' ? fileData : validatedData.filePath
if (!fileData && (!filePath || filePath.trim() === '')) {
return NextResponse.json(
{
success: false,
error: 'File input is required',
},
{ status: 400 }
)
}
logger.info(`[${requestId}] Mistral parse request`, {
hasFileData: Boolean(fileData),
filePath,
isWorkspaceFile: filePath ? isInternalFileUrl(filePath) : false,
filePath: validatedData.filePath,
isWorkspaceFile: isInternalFileUrl(validatedData.filePath),
userId,
})
const mistralBody: any = {
model: 'mistral-ocr-latest',
let fileUrl = validatedData.filePath
if (isInternalFileUrl(validatedData.filePath)) {
try {
const storageKey = extractStorageKey(validatedData.filePath)
const context = inferContextFromKey(storageKey)
const hasAccess = await verifyFileAccess(
storageKey,
userId,
undefined, // customConfig
context, // context
false // isLocal
)
if (!hasAccess) {
logger.warn(`[${requestId}] Unauthorized presigned URL generation attempt`, {
userId,
key: storageKey,
context,
})
return NextResponse.json(
{
success: false,
error: 'File not found',
},
{ status: 404 }
)
}
fileUrl = await StorageService.generatePresignedDownloadUrl(storageKey, context, 5 * 60)
logger.info(`[${requestId}] Generated presigned URL for ${context} file`)
} catch (error) {
logger.error(`[${requestId}] Failed to generate presigned URL:`, error)
return NextResponse.json(
{
success: false,
error: 'Failed to generate file access URL',
},
{ status: 500 }
)
}
} else if (validatedData.filePath?.startsWith('/')) {
const baseUrl = getBaseUrl()
fileUrl = `${baseUrl}${validatedData.filePath}`
}
if (fileData && typeof fileData === 'object') {
const rawFile = fileData
let userFile
try {
userFile = processSingleFileToUserFile(rawFile, requestId, logger)
} catch (error) {
return NextResponse.json(
{
success: false,
error: error instanceof Error ? error.message : 'Failed to process file',
},
{ status: 400 }
)
}
let mimeType = userFile.type
if (!mimeType || mimeType === 'application/octet-stream') {
const filename = userFile.name?.toLowerCase() || ''
if (filename.endsWith('.pdf')) {
mimeType = 'application/pdf'
} else if (filename.endsWith('.png')) {
mimeType = 'image/png'
} else if (filename.endsWith('.jpg') || filename.endsWith('.jpeg')) {
mimeType = 'image/jpeg'
} else if (filename.endsWith('.gif')) {
mimeType = 'image/gif'
} else if (filename.endsWith('.webp')) {
mimeType = 'image/webp'
} else {
mimeType = 'application/pdf'
}
}
let base64 = userFile.base64
if (!base64) {
const buffer = await downloadFileFromStorage(userFile, requestId, logger)
base64 = buffer.toString('base64')
}
const base64Payload = base64.startsWith('data:')
? base64
: `data:${mimeType};base64,${base64}`
// Mistral API uses different document types for images vs documents
const isImage = mimeType.startsWith('image/')
if (isImage) {
mistralBody.document = {
type: 'image_url',
image_url: base64Payload,
}
} else {
mistralBody.document = {
type: 'document_url',
document_url: base64Payload,
}
}
} else if (filePath) {
let fileUrl = filePath
const isInternalFilePath = isInternalFileUrl(filePath)
if (isInternalFilePath) {
const resolution = await resolveInternalFileUrl(filePath, userId, requestId, logger)
if (resolution.error) {
return NextResponse.json(
{
success: false,
error: resolution.error.message,
},
{ status: resolution.error.status }
)
}
fileUrl = resolution.fileUrl || fileUrl
} else if (filePath.startsWith('/')) {
logger.warn(`[${requestId}] Invalid internal path`, {
userId,
path: filePath.substring(0, 50),
})
return NextResponse.json(
{
success: false,
error: 'Invalid file path. Only uploaded files are supported for internal paths.',
},
{ status: 400 }
)
} else {
const urlValidation = await validateUrlWithDNS(fileUrl, 'filePath')
if (!urlValidation.isValid) {
return NextResponse.json(
{
success: false,
error: urlValidation.error,
},
{ status: 400 }
)
}
}
const imageExtensions = ['.png', '.jpg', '.jpeg', '.gif', '.webp', '.avif']
const pathname = new URL(fileUrl).pathname.toLowerCase()
const isImageUrl = imageExtensions.some((ext) => pathname.endsWith(ext))
if (isImageUrl) {
mistralBody.document = {
type: 'image_url',
image_url: fileUrl,
}
} else {
mistralBody.document = {
type: 'document_url',
document_url: fileUrl,
}
}
const mistralBody: any = {
model: 'mistral-ocr-latest',
document: {
type: 'document_url',
document_url: fileUrl,
},
}
if (validatedData.pages) {
@@ -202,34 +124,15 @@ export async function POST(request: NextRequest) {
mistralBody.image_min_size = validatedData.imageMinSize
}
const mistralEndpoint = 'https://api.mistral.ai/v1/ocr'
const mistralValidation = await validateUrlWithDNS(mistralEndpoint, 'Mistral API URL')
if (!mistralValidation.isValid) {
logger.error(`[${requestId}] Mistral API URL validation failed`, {
error: mistralValidation.error,
})
return NextResponse.json(
{
success: false,
error: 'Failed to reach Mistral API',
},
{ status: 502 }
)
}
const mistralResponse = await secureFetchWithPinnedIP(
mistralEndpoint,
mistralValidation.resolvedIP!,
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
Accept: 'application/json',
Authorization: `Bearer ${validatedData.apiKey}`,
},
body: JSON.stringify(mistralBody),
}
)
const mistralResponse = await fetch('https://api.mistral.ai/v1/ocr', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Accept: 'application/json',
Authorization: `Bearer ${validatedData.apiKey}`,
},
body: JSON.stringify(mistralBody),
})
if (!mistralResponse.ok) {
const errorText = await mistralResponse.text()
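A compact sketch of the document-payload branching visible in the fuller variant above: raw base64 content is wrapped in a data: URL, image MIME types use image_url, everything else uses document_url. The sample base64 string is a placeholder:

    type MistralDocument =
      | { type: 'document_url'; document_url: string }
      | { type: 'image_url'; image_url: string }

    // url: an externally reachable file URL, or a data: URL built from base64 content.
    function buildOcrDocument(url: string, mimeType: string): MistralDocument {
      return mimeType.startsWith('image/')
        ? { type: 'image_url', image_url: url }
        : { type: 'document_url', document_url: url }
    }

    const base64Pdf = 'JVBERi0xLjcK' // placeholder base64 content
    const requestBody = {
      model: 'mistral-ocr-latest',
      document: buildOcrDocument(`data:application/pdf;base64,${base64Pdf}`, 'application/pdf'),
    }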

View File

@@ -1,177 +0,0 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import {
secureFetchWithPinnedIP,
validateUrlWithDNS,
} from '@/lib/core/security/input-validation.server'
import { generateRequestId } from '@/lib/core/utils/request'
export const dynamic = 'force-dynamic'
/** Microsoft Graph API error response structure */
interface GraphApiError {
error?: {
code?: string
message?: string
}
}
/** Microsoft Graph API drive item metadata response */
interface DriveItemMetadata {
id?: string
name?: string
folder?: Record<string, unknown>
file?: {
mimeType?: string
}
}
const logger = createLogger('OneDriveDownloadAPI')
const OneDriveDownloadSchema = z.object({
accessToken: z.string().min(1, 'Access token is required'),
fileId: z.string().min(1, 'File ID is required'),
fileName: z.string().optional().nullable(),
})
export async function POST(request: NextRequest) {
const requestId = generateRequestId()
try {
const authResult = await checkInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success) {
logger.warn(`[${requestId}] Unauthorized OneDrive download attempt: ${authResult.error}`)
return NextResponse.json(
{
success: false,
error: authResult.error || 'Authentication required',
},
{ status: 401 }
)
}
const body = await request.json()
const validatedData = OneDriveDownloadSchema.parse(body)
const { accessToken, fileId, fileName } = validatedData
const authHeader = `Bearer ${accessToken}`
logger.info(`[${requestId}] Getting file metadata from OneDrive`, { fileId })
const metadataUrl = `https://graph.microsoft.com/v1.0/me/drive/items/${fileId}`
const metadataUrlValidation = await validateUrlWithDNS(metadataUrl, 'metadataUrl')
if (!metadataUrlValidation.isValid) {
return NextResponse.json(
{ success: false, error: metadataUrlValidation.error },
{ status: 400 }
)
}
const metadataResponse = await secureFetchWithPinnedIP(
metadataUrl,
metadataUrlValidation.resolvedIP!,
{
headers: { Authorization: authHeader },
}
)
if (!metadataResponse.ok) {
const errorDetails = (await metadataResponse.json().catch(() => ({}))) as GraphApiError
logger.error(`[${requestId}] Failed to get file metadata`, {
status: metadataResponse.status,
error: errorDetails,
})
return NextResponse.json(
{ success: false, error: errorDetails.error?.message || 'Failed to get file metadata' },
{ status: 400 }
)
}
const metadata = (await metadataResponse.json()) as DriveItemMetadata
if (metadata.folder && !metadata.file) {
logger.error(`[${requestId}] Attempted to download a folder`, {
itemId: metadata.id,
itemName: metadata.name,
})
return NextResponse.json(
{
success: false,
error: `Cannot download folder "${metadata.name}". Please select a file instead.`,
},
{ status: 400 }
)
}
const mimeType = metadata.file?.mimeType || 'application/octet-stream'
logger.info(`[${requestId}] Downloading file from OneDrive`, { fileId, mimeType })
const downloadUrl = `https://graph.microsoft.com/v1.0/me/drive/items/${fileId}/content`
const downloadUrlValidation = await validateUrlWithDNS(downloadUrl, 'downloadUrl')
if (!downloadUrlValidation.isValid) {
return NextResponse.json(
{ success: false, error: downloadUrlValidation.error },
{ status: 400 }
)
}
const downloadResponse = await secureFetchWithPinnedIP(
downloadUrl,
downloadUrlValidation.resolvedIP!,
{
headers: { Authorization: authHeader },
}
)
if (!downloadResponse.ok) {
const downloadError = (await downloadResponse.json().catch(() => ({}))) as GraphApiError
logger.error(`[${requestId}] Failed to download file`, {
status: downloadResponse.status,
error: downloadError,
})
return NextResponse.json(
{ success: false, error: downloadError.error?.message || 'Failed to download file' },
{ status: 400 }
)
}
const arrayBuffer = await downloadResponse.arrayBuffer()
const fileBuffer = Buffer.from(arrayBuffer)
const resolvedName = fileName || metadata.name || 'download'
logger.info(`[${requestId}] File downloaded successfully`, {
fileId,
name: resolvedName,
size: fileBuffer.length,
mimeType,
})
const base64Data = fileBuffer.toString('base64')
return NextResponse.json({
success: true,
output: {
file: {
name: resolvedName,
mimeType,
data: base64Data,
size: fileBuffer.length,
},
},
})
} catch (error) {
logger.error(`[${requestId}] Error downloading OneDrive file:`, error)
return NextResponse.json(
{
success: false,
error: error instanceof Error ? error.message : 'Unknown error occurred',
},
{ status: 500 }
)
}
}

View File

@@ -4,9 +4,7 @@ import * as XLSX from 'xlsx'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { validateMicrosoftGraphId } from '@/lib/core/security/input-validation'
import { secureFetchWithValidation } from '@/lib/core/security/input-validation.server'
import { generateRequestId } from '@/lib/core/utils/request'
import { RawFileInputSchema } from '@/lib/uploads/utils/file-schemas'
import {
getExtensionFromMimeType,
processSingleFileToUserFile,
@@ -31,33 +29,12 @@ const ExcelValuesSchema = z.union([
const OneDriveUploadSchema = z.object({
accessToken: z.string().min(1, 'Access token is required'),
fileName: z.string().min(1, 'File name is required'),
file: RawFileInputSchema.optional(),
file: z.any().optional(),
folderId: z.string().optional().nullable(),
mimeType: z.string().nullish(),
values: ExcelValuesSchema.optional().nullable(),
conflictBehavior: z.enum(['fail', 'replace', 'rename']).optional().nullable(),
})
/** Microsoft Graph DriveItem response */
interface OneDriveFileData {
id: string
name: string
size: number
webUrl: string
createdDateTime: string
lastModifiedDateTime: string
file?: { mimeType: string }
parentReference?: { id: string; path: string }
'@microsoft.graph.downloadUrl'?: string
}
/** Microsoft Graph Excel range response */
interface ExcelRangeData {
address?: string
addressLocal?: string
values?: unknown[][]
}
export async function POST(request: NextRequest) {
const requestId = generateRequestId()
@@ -111,9 +88,25 @@ export async function POST(request: NextRequest) {
)
}
let fileToProcess
if (Array.isArray(rawFile)) {
if (rawFile.length === 0) {
return NextResponse.json(
{
success: false,
error: 'No file provided',
},
{ status: 400 }
)
}
fileToProcess = rawFile[0]
} else {
fileToProcess = rawFile
}
let userFile
try {
userFile = processSingleFileToUserFile(rawFile, requestId, logger)
userFile = processSingleFileToUserFile(fileToProcess, requestId, logger)
} catch (error) {
return NextResponse.json(
{
@@ -186,23 +179,14 @@ export async function POST(request: NextRequest) {
uploadUrl = `${MICROSOFT_GRAPH_BASE}/me/drive/root:/${encodeURIComponent(fileName)}:/content`
}
// Add conflict behavior if specified (defaults to replace by Microsoft Graph API)
if (validatedData.conflictBehavior) {
uploadUrl += `?@microsoft.graph.conflictBehavior=${validatedData.conflictBehavior}`
}
const uploadResponse = await secureFetchWithValidation(
uploadUrl,
{
method: 'PUT',
headers: {
Authorization: `Bearer ${validatedData.accessToken}`,
'Content-Type': mimeType,
},
body: fileBuffer,
const uploadResponse = await fetch(uploadUrl, {
method: 'PUT',
headers: {
Authorization: `Bearer ${validatedData.accessToken}`,
'Content-Type': mimeType,
},
'uploadUrl'
)
body: new Uint8Array(fileBuffer),
})
if (!uploadResponse.ok) {
const errorText = await uploadResponse.text()
@@ -216,7 +200,7 @@ export async function POST(request: NextRequest) {
)
}
const fileData = (await uploadResponse.json()) as OneDriveFileData
const fileData = await uploadResponse.json()
let excelWriteResult: any | undefined
const shouldWriteExcelContent =
@@ -225,11 +209,8 @@ export async function POST(request: NextRequest) {
if (shouldWriteExcelContent) {
try {
let workbookSessionId: string | undefined
const sessionUrl = `${MICROSOFT_GRAPH_BASE}/me/drive/items/${encodeURIComponent(
fileData.id
)}/workbook/createSession`
const sessionResp = await secureFetchWithValidation(
sessionUrl,
const sessionResp = await fetch(
`${MICROSOFT_GRAPH_BASE}/me/drive/items/${encodeURIComponent(fileData.id)}/workbook/createSession`,
{
method: 'POST',
headers: {
@@ -237,12 +218,11 @@ export async function POST(request: NextRequest) {
'Content-Type': 'application/json',
},
body: JSON.stringify({ persistChanges: true }),
},
'sessionUrl'
}
)
if (sessionResp.ok) {
const sessionData = (await sessionResp.json()) as { id?: string }
const sessionData = await sessionResp.json()
workbookSessionId = sessionData?.id
}
@@ -251,19 +231,14 @@ export async function POST(request: NextRequest) {
const listUrl = `${MICROSOFT_GRAPH_BASE}/me/drive/items/${encodeURIComponent(
fileData.id
)}/workbook/worksheets?$select=name&$orderby=position&$top=1`
const listResp = await secureFetchWithValidation(
listUrl,
{
method: 'GET',
headers: {
Authorization: `Bearer ${validatedData.accessToken}`,
...(workbookSessionId ? { 'workbook-session-id': workbookSessionId } : {}),
},
const listResp = await fetch(listUrl, {
headers: {
Authorization: `Bearer ${validatedData.accessToken}`,
...(workbookSessionId ? { 'workbook-session-id': workbookSessionId } : {}),
},
'listUrl'
)
})
if (listResp.ok) {
const listData = (await listResp.json()) as { value?: Array<{ name?: string }> }
const listData = await listResp.json()
const firstSheetName = listData?.value?.[0]?.name
if (firstSheetName) {
sheetName = firstSheetName
@@ -322,19 +297,15 @@ export async function POST(request: NextRequest) {
)}')/range(address='${encodeURIComponent(computedRangeAddress)}')`
)
const excelWriteResponse = await secureFetchWithValidation(
url.toString(),
{
method: 'PATCH',
headers: {
Authorization: `Bearer ${validatedData.accessToken}`,
'Content-Type': 'application/json',
...(workbookSessionId ? { 'workbook-session-id': workbookSessionId } : {}),
},
body: JSON.stringify({ values: processedValues }),
const excelWriteResponse = await fetch(url.toString(), {
method: 'PATCH',
headers: {
Authorization: `Bearer ${validatedData.accessToken}`,
'Content-Type': 'application/json',
...(workbookSessionId ? { 'workbook-session-id': workbookSessionId } : {}),
},
'excelWriteUrl'
)
body: JSON.stringify({ values: processedValues }),
})
if (!excelWriteResponse || !excelWriteResponse.ok) {
const errorText = excelWriteResponse ? await excelWriteResponse.text() : 'no response'
@@ -349,7 +320,7 @@ export async function POST(request: NextRequest) {
details: errorText,
}
} else {
const writeData = (await excelWriteResponse.json()) as ExcelRangeData
const writeData = await excelWriteResponse.json()
const addr = writeData.address || writeData.addressLocal
const v = writeData.values || []
excelWriteResult = {
@@ -357,25 +328,21 @@ export async function POST(request: NextRequest) {
updatedRange: addr,
updatedRows: Array.isArray(v) ? v.length : undefined,
updatedColumns: Array.isArray(v) && v[0] ? v[0].length : undefined,
updatedCells: Array.isArray(v) && v[0] ? v.length * v[0].length : undefined,
updatedCells: Array.isArray(v) && v[0] ? v.length * (v[0] as any[]).length : undefined,
}
}
if (workbookSessionId) {
try {
const closeUrl = `${MICROSOFT_GRAPH_BASE}/me/drive/items/${encodeURIComponent(
fileData.id
)}/workbook/closeSession`
const closeResp = await secureFetchWithValidation(
closeUrl,
const closeResp = await fetch(
`${MICROSOFT_GRAPH_BASE}/me/drive/items/${encodeURIComponent(fileData.id)}/workbook/closeSession`,
{
method: 'POST',
headers: {
Authorization: `Bearer ${validatedData.accessToken}`,
'workbook-session-id': workbookSessionId,
},
},
'closeSessionUrl'
}
)
if (!closeResp.ok) {
const closeText = await closeResp.text()
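A minimal sketch of the workbook-session round trip shown in the hunks above (create a session, PATCH the range, close the session), written against plain fetch and the Graph URLs the route uses; the token, item id, and values are placeholders:

    const GRAPH = 'https://graph.microsoft.com/v1.0'

    async function writeRangeWithSession(
      token: string,
      itemId: string,
      sheetName: string,
      rangeAddress: string,
      values: (string | number)[][]
    ): Promise<boolean> {
      const auth = { Authorization: `Bearer ${token}` }

      // 1. Open a persisted workbook session so edits are committed to the file.
      const sessionResp = await fetch(
        `${GRAPH}/me/drive/items/${encodeURIComponent(itemId)}/workbook/createSession`,
        {
          method: 'POST',
          headers: { ...auth, 'Content-Type': 'application/json' },
          body: JSON.stringify({ persistChanges: true }),
        }
      )
      const sessionId = sessionResp.ok ? ((await sessionResp.json()) as { id?: string }).id : undefined
      const sessionHeader: Record<string, string> = sessionId ? { 'workbook-session-id': sessionId } : {}

      // 2. PATCH the target range with the 2D values array.
      const rangeUrl = `${GRAPH}/me/drive/items/${encodeURIComponent(itemId)}/workbook/worksheets('${encodeURIComponent(sheetName)}')/range(address='${encodeURIComponent(rangeAddress)}')`
      const writeResp = await fetch(rangeUrl, {
        method: 'PATCH',
        headers: { ...auth, 'Content-Type': 'application/json', ...sessionHeader },
        body: JSON.stringify({ values }),
      })

      // 3. Close the session whether or not the write succeeded.
      if (sessionId) {
        await fetch(`${GRAPH}/me/drive/items/${encodeURIComponent(itemId)}/workbook/closeSession`, {
          method: 'POST',
          headers: { ...auth, 'workbook-session-id': sessionId },
        })
      }
      return writeResp.ok
    }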

View File

@@ -3,7 +3,6 @@ import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { RawFileInputArraySchema } from '@/lib/uploads/utils/file-schemas'
import { processFilesToUserFiles } from '@/lib/uploads/utils/file-utils'
import { downloadFileFromStorage } from '@/lib/uploads/utils/file-utils.server'
@@ -19,7 +18,7 @@ const OutlookDraftSchema = z.object({
contentType: z.enum(['text', 'html']).optional().nullable(),
cc: z.string().optional().nullable(),
bcc: z.string().optional().nullable(),
attachments: RawFileInputArraySchema.optional().nullable(),
attachments: z.array(z.any()).optional().nullable(),
})
export async function POST(request: NextRequest) {

View File

@@ -3,7 +3,6 @@ import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { RawFileInputArraySchema } from '@/lib/uploads/utils/file-schemas'
import { processFilesToUserFiles } from '@/lib/uploads/utils/file-utils'
import { downloadFileFromStorage } from '@/lib/uploads/utils/file-utils.server'
@@ -21,7 +20,7 @@ const OutlookSendSchema = z.object({
bcc: z.string().optional().nullable(),
replyToMessageId: z.string().optional().nullable(),
conversationId: z.string().optional().nullable(),
attachments: RawFileInputArraySchema.optional().nullable(),
attachments: z.array(z.any()).optional().nullable(),
})
export async function POST(request: NextRequest) {
@@ -96,14 +95,14 @@ export async function POST(request: NextRequest) {
if (attachments.length > 0) {
const totalSize = attachments.reduce((sum, file) => sum + file.size, 0)
const maxSize = 3 * 1024 * 1024 // 3MB - Microsoft Graph API limit for inline attachments
const maxSize = 4 * 1024 * 1024 // 4MB
if (totalSize > maxSize) {
const sizeMB = (totalSize / (1024 * 1024)).toFixed(2)
return NextResponse.json(
{
success: false,
error: `Total attachment size (${sizeMB}MB) exceeds Microsoft Graph API limit of 3MB per request`,
error: `Total attachment size (${sizeMB}MB) exceeds Outlook's limit of 4MB per request`,
},
{ status: 400 }
)
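The size cap in this hunk exists because inline Outlook attachments travel as base64 contentBytes on the message itself rather than as a separate upload. A minimal sketch of the Microsoft Graph fileAttachment shape that such a route would build per downloaded file; the helper name and variables are illustrative, and larger files would need a Graph upload session instead.

// Sketch: map a downloaded file buffer to a Graph inline fileAttachment.
function toGraphAttachment(name: string, mimeType: string, buffer: Buffer) {
  return {
    '@odata.type': '#microsoft.graph.fileAttachment',
    name,
    contentType: mimeType || 'application/octet-stream',
    contentBytes: buffer.toString('base64'),
  }
}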

View File

@@ -1,165 +0,0 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import {
secureFetchWithPinnedIP,
validateUrlWithDNS,
} from '@/lib/core/security/input-validation.server'
import { generateRequestId } from '@/lib/core/utils/request'
import { getFileExtension, getMimeTypeFromExtension } from '@/lib/uploads/utils/file-utils'
export const dynamic = 'force-dynamic'
const logger = createLogger('PipedriveGetFilesAPI')
interface PipedriveFile {
id?: number
name?: string
url?: string
}
interface PipedriveApiResponse {
success: boolean
data?: PipedriveFile[]
error?: string
}
const PipedriveGetFilesSchema = z.object({
accessToken: z.string().min(1, 'Access token is required'),
deal_id: z.string().optional().nullable(),
person_id: z.string().optional().nullable(),
org_id: z.string().optional().nullable(),
limit: z.string().optional().nullable(),
downloadFiles: z.boolean().optional().default(false),
})
export async function POST(request: NextRequest) {
const requestId = generateRequestId()
try {
const authResult = await checkInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success) {
logger.warn(`[${requestId}] Unauthorized Pipedrive get files attempt: ${authResult.error}`)
return NextResponse.json(
{
success: false,
error: authResult.error || 'Authentication required',
},
{ status: 401 }
)
}
const body = await request.json()
const validatedData = PipedriveGetFilesSchema.parse(body)
const { accessToken, deal_id, person_id, org_id, limit, downloadFiles } = validatedData
const baseUrl = 'https://api.pipedrive.com/v1/files'
const queryParams = new URLSearchParams()
if (deal_id) queryParams.append('deal_id', deal_id)
if (person_id) queryParams.append('person_id', person_id)
if (org_id) queryParams.append('org_id', org_id)
if (limit) queryParams.append('limit', limit)
const queryString = queryParams.toString()
const apiUrl = queryString ? `${baseUrl}?${queryString}` : baseUrl
logger.info(`[${requestId}] Fetching files from Pipedrive`, { deal_id, person_id, org_id })
const urlValidation = await validateUrlWithDNS(apiUrl, 'apiUrl')
if (!urlValidation.isValid) {
return NextResponse.json({ success: false, error: urlValidation.error }, { status: 400 })
}
const response = await secureFetchWithPinnedIP(apiUrl, urlValidation.resolvedIP!, {
method: 'GET',
headers: {
Authorization: `Bearer ${accessToken}`,
Accept: 'application/json',
},
})
const data = (await response.json()) as PipedriveApiResponse
if (!data.success) {
logger.error(`[${requestId}] Pipedrive API request failed`, { data })
return NextResponse.json(
{ success: false, error: data.error || 'Failed to fetch files from Pipedrive' },
{ status: 400 }
)
}
const files = data.data || []
const downloadedFiles: Array<{
name: string
mimeType: string
data: string
size: number
}> = []
if (downloadFiles) {
for (const file of files) {
if (!file?.url) continue
try {
const fileUrlValidation = await validateUrlWithDNS(file.url, 'fileUrl')
if (!fileUrlValidation.isValid) continue
const downloadResponse = await secureFetchWithPinnedIP(
file.url,
fileUrlValidation.resolvedIP!,
{
method: 'GET',
headers: { Authorization: `Bearer ${accessToken}` },
}
)
if (!downloadResponse.ok) continue
const arrayBuffer = await downloadResponse.arrayBuffer()
const buffer = Buffer.from(arrayBuffer)
const extension = getFileExtension(file.name || '')
const mimeType =
downloadResponse.headers.get('content-type') || getMimeTypeFromExtension(extension)
const fileName = file.name || `pipedrive-file-${file.id || Date.now()}`
downloadedFiles.push({
name: fileName,
mimeType,
data: buffer.toString('base64'),
size: buffer.length,
})
} catch (error) {
logger.warn(`[${requestId}] Failed to download file ${file.id}:`, error)
}
}
}
logger.info(`[${requestId}] Pipedrive files fetched successfully`, {
fileCount: files.length,
downloadedCount: downloadedFiles.length,
})
return NextResponse.json({
success: true,
output: {
files,
downloadedFiles: downloadedFiles.length > 0 ? downloadedFiles : undefined,
total_items: files.length,
success: true,
},
})
} catch (error) {
logger.error(`[${requestId}] Error fetching Pipedrive files:`, error)
return NextResponse.json(
{
success: false,
error: error instanceof Error ? error.message : 'Unknown error occurred',
},
{ status: 500 }
)
}
}
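The Pipedrive route above guards every outbound URL with validateUrlWithDNS before fetching through secureFetchWithPinnedIP, so a hostname that later resolves to an internal address cannot be used for SSRF. A condensed sketch of how a caller composes the two helpers, with signatures taken from their usage above; the wrapper name safeDownload is hypothetical.

// Sketch: validate and resolve the URL once, then fetch against the pinned IP
// so a DNS rebind between check and request cannot redirect the call internally.
async function safeDownload(url: string, accessToken: string): Promise<Buffer | null> {
  const validation = await validateUrlWithDNS(url, 'fileUrl')
  if (!validation.isValid) return null
  const resp = await secureFetchWithPinnedIP(url, validation.resolvedIP!, {
    method: 'GET',
    headers: { Authorization: `Bearer ${accessToken}` },
  })
  if (!resp.ok) return null
  return Buffer.from(await resp.arrayBuffer())
}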

View File

@@ -2,14 +2,15 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import {
secureFetchWithPinnedIP,
validateUrlWithDNS,
} from '@/lib/core/security/input-validation.server'
import { generateRequestId } from '@/lib/core/utils/request'
import { RawFileInputSchema } from '@/lib/uploads/utils/file-schemas'
import { isInternalFileUrl } from '@/lib/uploads/utils/file-utils'
import { resolveFileInputToUrl } from '@/lib/uploads/utils/file-utils.server'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { StorageService } from '@/lib/uploads'
import {
extractStorageKey,
inferContextFromKey,
isInternalFileUrl,
} from '@/lib/uploads/utils/file-utils'
import { verifyFileAccess } from '@/app/api/files/authorization'
export const dynamic = 'force-dynamic'
@@ -17,8 +18,7 @@ const logger = createLogger('PulseParseAPI')
const PulseParseSchema = z.object({
apiKey: z.string().min(1, 'API key is required'),
filePath: z.string().optional(),
file: RawFileInputSchema.optional(),
filePath: z.string().min(1, 'File path is required'),
pages: z.string().optional(),
extractFigure: z.boolean().optional(),
figureDescription: z.boolean().optional(),
@@ -51,30 +51,50 @@ export async function POST(request: NextRequest) {
const validatedData = PulseParseSchema.parse(body)
logger.info(`[${requestId}] Pulse parse request`, {
fileName: validatedData.file?.name,
filePath: validatedData.filePath,
isWorkspaceFile: validatedData.filePath ? isInternalFileUrl(validatedData.filePath) : false,
isWorkspaceFile: isInternalFileUrl(validatedData.filePath),
userId,
})
const resolution = await resolveFileInputToUrl({
file: validatedData.file,
filePath: validatedData.filePath,
userId,
requestId,
logger,
})
let fileUrl = validatedData.filePath
if (resolution.error) {
return NextResponse.json(
{ success: false, error: resolution.error.message },
{ status: resolution.error.status }
)
}
if (isInternalFileUrl(validatedData.filePath)) {
try {
const storageKey = extractStorageKey(validatedData.filePath)
const context = inferContextFromKey(storageKey)
const fileUrl = resolution.fileUrl
if (!fileUrl) {
return NextResponse.json({ success: false, error: 'File input is required' }, { status: 400 })
const hasAccess = await verifyFileAccess(storageKey, userId, undefined, context, false)
if (!hasAccess) {
logger.warn(`[${requestId}] Unauthorized presigned URL generation attempt`, {
userId,
key: storageKey,
context,
})
return NextResponse.json(
{
success: false,
error: 'File not found',
},
{ status: 404 }
)
}
fileUrl = await StorageService.generatePresignedDownloadUrl(storageKey, context, 5 * 60)
logger.info(`[${requestId}] Generated presigned URL for ${context} file`)
} catch (error) {
logger.error(`[${requestId}] Failed to generate presigned URL:`, error)
return NextResponse.json(
{
success: false,
error: 'Failed to generate file access URL',
},
{ status: 500 }
)
}
} else if (validatedData.filePath?.startsWith('/')) {
const baseUrl = getBaseUrl()
fileUrl = `${baseUrl}${validatedData.filePath}`
}
const formData = new FormData()
@@ -99,36 +119,13 @@ export async function POST(request: NextRequest) {
formData.append('chunk_size', String(validatedData.chunkSize))
}
const pulseEndpoint = 'https://api.runpulse.com/extract'
const pulseValidation = await validateUrlWithDNS(pulseEndpoint, 'Pulse API URL')
if (!pulseValidation.isValid) {
logger.error(`[${requestId}] Pulse API URL validation failed`, {
error: pulseValidation.error,
})
return NextResponse.json(
{
success: false,
error: 'Failed to reach Pulse API',
},
{ status: 502 }
)
}
const pulsePayload = new Response(formData)
const contentType = pulsePayload.headers.get('content-type') || 'multipart/form-data'
const bodyBuffer = Buffer.from(await pulsePayload.arrayBuffer())
const pulseResponse = await secureFetchWithPinnedIP(
pulseEndpoint,
pulseValidation.resolvedIP!,
{
method: 'POST',
headers: {
'x-api-key': validatedData.apiKey,
'Content-Type': contentType,
},
body: bodyBuffer,
}
)
const pulseResponse = await fetch('https://api.runpulse.com/extract', {
method: 'POST',
headers: {
'x-api-key': validatedData.apiKey,
},
body: formData,
})
if (!pulseResponse.ok) {
const errorText = await pulseResponse.text()
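One detail worth calling out in the Pulse hunk above: FormData cannot be passed to the pinned-IP fetch as a plain Buffer without losing the multipart boundary, so the route wraps it in a Response to recover both the content-type header (which carries the boundary) and the serialized body. A standalone sketch of that trick; variable names are illustrative.

// Sketch: serialize FormData to a Buffer while keeping the generated multipart
// boundary, so the Content-Type header still matches the encoded body.
async function formDataToBuffer(formData: FormData): Promise<{ body: Buffer; contentType: string }> {
  const wrapped = new Response(formData)
  const contentType = wrapped.headers.get('content-type') || 'multipart/form-data'
  const body = Buffer.from(await wrapped.arrayBuffer())
  return { body, contentType }
}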

View File

@@ -2,14 +2,15 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import {
secureFetchWithPinnedIP,
validateUrlWithDNS,
} from '@/lib/core/security/input-validation.server'
import { generateRequestId } from '@/lib/core/utils/request'
import { RawFileInputSchema } from '@/lib/uploads/utils/file-schemas'
import { isInternalFileUrl } from '@/lib/uploads/utils/file-utils'
import { resolveFileInputToUrl } from '@/lib/uploads/utils/file-utils.server'
import { getBaseUrl } from '@/lib/core/utils/urls'
import { StorageService } from '@/lib/uploads'
import {
extractStorageKey,
inferContextFromKey,
isInternalFileUrl,
} from '@/lib/uploads/utils/file-utils'
import { verifyFileAccess } from '@/app/api/files/authorization'
export const dynamic = 'force-dynamic'
@@ -17,8 +18,7 @@ const logger = createLogger('ReductoParseAPI')
const ReductoParseSchema = z.object({
apiKey: z.string().min(1, 'API key is required'),
filePath: z.string().optional(),
file: RawFileInputSchema.optional(),
filePath: z.string().min(1, 'File path is required'),
pages: z.array(z.number()).optional(),
tableOutputFormat: z.enum(['html', 'md']).optional(),
})
@@ -47,30 +47,56 @@ export async function POST(request: NextRequest) {
const validatedData = ReductoParseSchema.parse(body)
logger.info(`[${requestId}] Reducto parse request`, {
fileName: validatedData.file?.name,
filePath: validatedData.filePath,
isWorkspaceFile: validatedData.filePath ? isInternalFileUrl(validatedData.filePath) : false,
isWorkspaceFile: isInternalFileUrl(validatedData.filePath),
userId,
})
const resolution = await resolveFileInputToUrl({
file: validatedData.file,
filePath: validatedData.filePath,
userId,
requestId,
logger,
})
let fileUrl = validatedData.filePath
if (resolution.error) {
return NextResponse.json(
{ success: false, error: resolution.error.message },
{ status: resolution.error.status }
)
}
if (isInternalFileUrl(validatedData.filePath)) {
try {
const storageKey = extractStorageKey(validatedData.filePath)
const context = inferContextFromKey(storageKey)
const fileUrl = resolution.fileUrl
if (!fileUrl) {
return NextResponse.json({ success: false, error: 'File input is required' }, { status: 400 })
const hasAccess = await verifyFileAccess(
storageKey,
userId,
undefined, // customConfig
context, // context
false // isLocal
)
if (!hasAccess) {
logger.warn(`[${requestId}] Unauthorized presigned URL generation attempt`, {
userId,
key: storageKey,
context,
})
return NextResponse.json(
{
success: false,
error: 'File not found',
},
{ status: 404 }
)
}
fileUrl = await StorageService.generatePresignedDownloadUrl(storageKey, context, 5 * 60)
logger.info(`[${requestId}] Generated presigned URL for ${context} file`)
} catch (error) {
logger.error(`[${requestId}] Failed to generate presigned URL:`, error)
return NextResponse.json(
{
success: false,
error: 'Failed to generate file access URL',
},
{ status: 500 }
)
}
} else if (validatedData.filePath?.startsWith('/')) {
const baseUrl = getBaseUrl()
fileUrl = `${baseUrl}${validatedData.filePath}`
}
const reductoBody: Record<string, unknown> = {
@@ -78,13 +104,8 @@ export async function POST(request: NextRequest) {
}
if (validatedData.pages && validatedData.pages.length > 0) {
// Reducto API expects page_range as an object with start/end, not an array
const pages = validatedData.pages
reductoBody.settings = {
page_range: {
start: Math.min(...pages),
end: Math.max(...pages),
},
page_range: validatedData.pages,
}
}
@@ -94,34 +115,15 @@ export async function POST(request: NextRequest) {
}
}
const reductoEndpoint = 'https://platform.reducto.ai/parse'
const reductoValidation = await validateUrlWithDNS(reductoEndpoint, 'Reducto API URL')
if (!reductoValidation.isValid) {
logger.error(`[${requestId}] Reducto API URL validation failed`, {
error: reductoValidation.error,
})
return NextResponse.json(
{
success: false,
error: 'Failed to reach Reducto API',
},
{ status: 502 }
)
}
const reductoResponse = await secureFetchWithPinnedIP(
reductoEndpoint,
reductoValidation.resolvedIP!,
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
Accept: 'application/json',
Authorization: `Bearer ${validatedData.apiKey}`,
},
body: JSON.stringify(reductoBody),
}
)
const reductoResponse = await fetch('https://platform.reducto.ai/parse', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Accept: 'application/json',
Authorization: `Bearer ${validatedData.apiKey}`,
},
body: JSON.stringify(reductoBody),
})
if (!reductoResponse.ok) {
const errorText = await reductoResponse.text()
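The comment in the Reducto hunk notes that the API expects page_range as a start/end object rather than a list of page numbers, which is why the selected pages are collapsed with Math.min/Math.max. A small sketch of that mapping; note the assumption that the user's selection is treated as one contiguous range.

// Sketch: collapse a list of page numbers into the {start, end} object
// the Reducto settings expect, per the comment in the route above.
function toPageRange(pages: number[]): { start: number; end: number } | undefined {
  if (pages.length === 0) return undefined
  return { start: Math.min(...pages), end: Math.max(...pages) }
}

// e.g. toPageRange([2, 3, 5]) -> { start: 2, end: 5 } (intervening pages are included)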

View File

@@ -4,7 +4,6 @@ import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { RawFileInputSchema } from '@/lib/uploads/utils/file-schemas'
import { processSingleFileToUserFile } from '@/lib/uploads/utils/file-utils'
import { downloadFileFromStorage } from '@/lib/uploads/utils/file-utils.server'
@@ -18,7 +17,7 @@ const S3PutObjectSchema = z.object({
region: z.string().min(1, 'Region is required'),
bucketName: z.string().min(1, 'Bucket name is required'),
objectKey: z.string().min(1, 'Object key is required'),
file: RawFileInputSchema.optional().nullable(),
file: z.any().optional().nullable(),
content: z.string().optional().nullable(),
contentType: z.string().optional().nullable(),
acl: z.string().optional().nullable(),

View File

@@ -1,188 +0,0 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { RawFileInputArraySchema } from '@/lib/uploads/utils/file-schemas'
import { processFilesToUserFiles } from '@/lib/uploads/utils/file-utils'
import { downloadFileFromStorage } from '@/lib/uploads/utils/file-utils.server'
export const dynamic = 'force-dynamic'
const logger = createLogger('SendGridSendMailAPI')
const SendGridSendMailSchema = z.object({
apiKey: z.string().min(1, 'API key is required'),
from: z.string().min(1, 'From email is required'),
fromName: z.string().optional().nullable(),
to: z.string().min(1, 'To email is required'),
toName: z.string().optional().nullable(),
subject: z.string().optional().nullable(),
content: z.string().optional().nullable(),
contentType: z.string().optional().nullable(),
cc: z.string().optional().nullable(),
bcc: z.string().optional().nullable(),
replyTo: z.string().optional().nullable(),
replyToName: z.string().optional().nullable(),
templateId: z.string().optional().nullable(),
dynamicTemplateData: z.any().optional().nullable(),
attachments: RawFileInputArraySchema.optional().nullable(),
})
export async function POST(request: NextRequest) {
const requestId = generateRequestId()
try {
const authResult = await checkInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success) {
logger.warn(`[${requestId}] Unauthorized SendGrid send attempt: ${authResult.error}`)
return NextResponse.json(
{ success: false, error: authResult.error || 'Authentication required' },
{ status: 401 }
)
}
logger.info(`[${requestId}] Authenticated SendGrid send request via ${authResult.authType}`)
const body = await request.json()
const validatedData = SendGridSendMailSchema.parse(body)
logger.info(`[${requestId}] Sending SendGrid email`, {
to: validatedData.to,
subject: validatedData.subject || '(template)',
hasAttachments: !!(validatedData.attachments && validatedData.attachments.length > 0),
attachmentCount: validatedData.attachments?.length || 0,
})
// Build personalizations
const personalizations: Record<string, unknown> = {
to: [
{ email: validatedData.to, ...(validatedData.toName && { name: validatedData.toName }) },
],
}
if (validatedData.cc) {
personalizations.cc = [{ email: validatedData.cc }]
}
if (validatedData.bcc) {
personalizations.bcc = [{ email: validatedData.bcc }]
}
if (validatedData.templateId && validatedData.dynamicTemplateData) {
personalizations.dynamic_template_data =
typeof validatedData.dynamicTemplateData === 'string'
? JSON.parse(validatedData.dynamicTemplateData)
: validatedData.dynamicTemplateData
}
// Build mail body
const mailBody: Record<string, unknown> = {
personalizations: [personalizations],
from: {
email: validatedData.from,
...(validatedData.fromName && { name: validatedData.fromName }),
},
subject: validatedData.subject,
}
if (validatedData.templateId) {
mailBody.template_id = validatedData.templateId
} else {
mailBody.content = [
{
type: validatedData.contentType || 'text/plain',
value: validatedData.content,
},
]
}
if (validatedData.replyTo) {
mailBody.reply_to = {
email: validatedData.replyTo,
...(validatedData.replyToName && { name: validatedData.replyToName }),
}
}
// Process attachments from UserFile objects
if (validatedData.attachments && validatedData.attachments.length > 0) {
const rawAttachments = validatedData.attachments
logger.info(`[${requestId}] Processing ${rawAttachments.length} attachment(s)`)
const userFiles = processFilesToUserFiles(rawAttachments, requestId, logger)
if (userFiles.length > 0) {
const sendGridAttachments = await Promise.all(
userFiles.map(async (file) => {
try {
logger.info(
`[${requestId}] Downloading attachment: ${file.name} (${file.size} bytes)`
)
const buffer = await downloadFileFromStorage(file, requestId, logger)
return {
content: buffer.toString('base64'),
filename: file.name,
type: file.type || 'application/octet-stream',
disposition: 'attachment',
}
} catch (error) {
logger.error(`[${requestId}] Failed to download attachment ${file.name}:`, error)
throw new Error(
`Failed to download attachment "${file.name}": ${error instanceof Error ? error.message : 'Unknown error'}`
)
}
})
)
mailBody.attachments = sendGridAttachments
}
}
// Send to SendGrid
const response = await fetch('https://api.sendgrid.com/v3/mail/send', {
method: 'POST',
headers: {
Authorization: `Bearer ${validatedData.apiKey}`,
'Content-Type': 'application/json',
},
body: JSON.stringify(mailBody),
})
if (!response.ok) {
const errorData = await response.json().catch(() => ({}))
const errorMessage =
errorData.errors?.[0]?.message || errorData.message || 'Failed to send email'
logger.error(`[${requestId}] SendGrid API error:`, { status: response.status, errorData })
return NextResponse.json({ success: false, error: errorMessage }, { status: response.status })
}
const messageId = response.headers.get('X-Message-Id')
logger.info(`[${requestId}] Email sent successfully`, { messageId })
return NextResponse.json({
success: true,
output: {
success: true,
messageId: messageId || undefined,
to: validatedData.to,
subject: validatedData.subject || '',
},
})
} catch (error) {
if (error instanceof z.ZodError) {
logger.warn(`[${requestId}] Validation error:`, error.errors)
return NextResponse.json(
{ success: false, error: error.errors[0]?.message || 'Validation failed' },
{ status: 400 }
)
}
logger.error(`[${requestId}] Unexpected error:`, error)
return NextResponse.json(
{ success: false, error: error instanceof Error ? error.message : 'Unknown error' },
{ status: 500 }
)
}
}
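For orientation, the JSON this removed SendGrid route ultimately posts to https://api.sendgrid.com/v3/mail/send looks roughly like the sketch below for a plain-text email with one attachment; all values are placeholders, and the template branch would set template_id plus dynamic_template_data instead of content.

// Sketch: shape of the mail/send body built above (placeholder values only).
const exampleMailBody = {
  personalizations: [
    { to: [{ email: 'recipient@example.com' }], cc: [{ email: 'cc@example.com' }] },
  ],
  from: { email: 'sender@example.com', name: 'Sender' },
  subject: 'Report',
  content: [{ type: 'text/plain', value: 'See attached.' }],
  attachments: [
    {
      content: '<base64-encoded bytes>',
      filename: 'report.pdf',
      type: 'application/pdf',
      disposition: 'attachment',
    },
  ],
}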

View File

@@ -4,7 +4,6 @@ import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { getFileExtension, getMimeTypeFromExtension } from '@/lib/uploads/utils/file-utils'
import { createSftpConnection, getSftp, isPathSafe, sanitizePath } from '@/app/api/tools/sftp/utils'
export const dynamic = 'force-dynamic'
@@ -112,8 +111,6 @@ export async function POST(request: NextRequest) {
const buffer = Buffer.concat(chunks)
const fileName = path.basename(remotePath)
const extension = getFileExtension(fileName)
const mimeType = getMimeTypeFromExtension(extension)
let content: string
if (params.encoding === 'base64') {
@@ -127,12 +124,6 @@ export async function POST(request: NextRequest) {
return NextResponse.json({
success: true,
fileName,
file: {
name: fileName,
mimeType,
data: buffer.toString('base64'),
size: buffer.length,
},
content,
size: buffer.length,
encoding: params.encoding,

View File

@@ -3,7 +3,6 @@ import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { RawFileInputArraySchema } from '@/lib/uploads/utils/file-schemas'
import { processFilesToUserFiles } from '@/lib/uploads/utils/file-utils'
import { downloadFileFromStorage } from '@/lib/uploads/utils/file-utils.server'
import {
@@ -27,7 +26,14 @@ const UploadSchema = z.object({
privateKey: z.string().nullish(),
passphrase: z.string().nullish(),
remotePath: z.string().min(1, 'Remote path is required'),
files: RawFileInputArraySchema.optional().nullable(),
files: z
.union([z.array(z.any()), z.string(), z.number(), z.null(), z.undefined()])
.transform((val) => {
if (Array.isArray(val)) return val
if (val === null || val === undefined || val === '') return undefined
return undefined
})
.nullish(),
fileContent: z.string().nullish(),
fileName: z.string().nullish(),
overwrite: z.boolean().default(true),
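The new files field above accepts arrays but also tolerates stray scalar inputs (empty strings, numbers) that upstream blocks may pass through, normalizing anything that is not an array to undefined. A tiny usage sketch of that behavior, assuming the zod import already present in the route; the constant name is illustrative.

// Sketch: how the union + transform normalizes non-array inputs to undefined.
const filesField = z
  .union([z.array(z.any()), z.string(), z.number(), z.null(), z.undefined()])
  .transform((val) => (Array.isArray(val) ? val : undefined))
  .nullish()

filesField.parse([{ name: 'a.txt' }]) // -> [{ name: 'a.txt' }]
filesField.parse('')                  // -> undefined
filesField.parse(42)                  // -> undefined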

View File

@@ -2,12 +2,9 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { secureFetchWithValidation } from '@/lib/core/security/input-validation.server'
import { generateRequestId } from '@/lib/core/utils/request'
import { RawFileInputArraySchema } from '@/lib/uploads/utils/file-schemas'
import { processFilesToUserFiles } from '@/lib/uploads/utils/file-utils'
import { downloadFileFromStorage } from '@/lib/uploads/utils/file-utils.server'
import type { MicrosoftGraphDriveItem } from '@/tools/onedrive/types'
export const dynamic = 'force-dynamic'
@@ -19,7 +16,7 @@ const SharepointUploadSchema = z.object({
driveId: z.string().optional().nullable(),
folderPath: z.string().optional().nullable(),
fileName: z.string().optional().nullable(),
files: RawFileInputArraySchema.optional().nullable(),
files: z.array(z.any()).optional().nullable(),
})
export async function POST(request: NextRequest) {
@@ -82,23 +79,18 @@ export async function POST(request: NextRequest) {
let effectiveDriveId = validatedData.driveId
if (!effectiveDriveId) {
logger.info(`[${requestId}] No driveId provided, fetching default drive for site`)
const driveUrl = `https://graph.microsoft.com/v1.0/sites/${validatedData.siteId}/drive`
const driveResponse = await secureFetchWithValidation(
driveUrl,
const driveResponse = await fetch(
`https://graph.microsoft.com/v1.0/sites/${validatedData.siteId}/drive`,
{
method: 'GET',
headers: {
Authorization: `Bearer ${validatedData.accessToken}`,
Accept: 'application/json',
},
},
'driveUrl'
}
)
if (!driveResponse.ok) {
const errorData = (await driveResponse.json().catch(() => ({}))) as {
error?: { message?: string }
}
const errorData = await driveResponse.json().catch(() => ({}))
logger.error(`[${requestId}] Failed to get default drive:`, errorData)
return NextResponse.json(
{
@@ -109,7 +101,7 @@ export async function POST(request: NextRequest) {
)
}
const driveData = (await driveResponse.json()) as { id: string }
const driveData = await driveResponse.json()
effectiveDriveId = driveData.id
logger.info(`[${requestId}] Using default drive: ${effectiveDriveId}`)
}
@@ -153,87 +145,34 @@ export async function POST(request: NextRequest) {
logger.info(`[${requestId}] Uploading to: ${uploadUrl}`)
const uploadResponse = await secureFetchWithValidation(
uploadUrl,
{
method: 'PUT',
headers: {
Authorization: `Bearer ${validatedData.accessToken}`,
'Content-Type': userFile.type || 'application/octet-stream',
},
body: buffer,
const uploadResponse = await fetch(uploadUrl, {
method: 'PUT',
headers: {
Authorization: `Bearer ${validatedData.accessToken}`,
'Content-Type': userFile.type || 'application/octet-stream',
},
'uploadUrl'
)
body: new Uint8Array(buffer),
})
if (!uploadResponse.ok) {
const errorData = await uploadResponse.json().catch(() => ({}))
logger.error(`[${requestId}] Failed to upload file ${fileName}:`, errorData)
if (uploadResponse.status === 409) {
// File exists - retry with conflict behavior set to replace
logger.warn(`[${requestId}] File ${fileName} already exists, retrying with replace`)
const replaceUrl = `${uploadUrl}?@microsoft.graph.conflictBehavior=replace`
const replaceResponse = await secureFetchWithValidation(
replaceUrl,
{
method: 'PUT',
headers: {
Authorization: `Bearer ${validatedData.accessToken}`,
'Content-Type': userFile.type || 'application/octet-stream',
},
body: buffer,
},
'replaceUrl'
)
if (!replaceResponse.ok) {
const replaceErrorData = (await replaceResponse.json().catch(() => ({}))) as {
error?: { message?: string }
}
logger.error(`[${requestId}] Failed to replace file ${fileName}:`, replaceErrorData)
return NextResponse.json(
{
success: false,
error: replaceErrorData.error?.message || `Failed to replace file: ${fileName}`,
},
{ status: replaceResponse.status }
)
}
const replaceData = (await replaceResponse.json()) as {
id: string
name: string
webUrl: string
size: number
createdDateTime: string
lastModifiedDateTime: string
}
logger.info(`[${requestId}] File replaced successfully: ${fileName}`)
uploadedFiles.push({
id: replaceData.id,
name: replaceData.name,
webUrl: replaceData.webUrl,
size: replaceData.size,
createdDateTime: replaceData.createdDateTime,
lastModifiedDateTime: replaceData.lastModifiedDateTime,
})
logger.warn(`[${requestId}] File ${fileName} already exists, attempting to replace`)
continue
}
return NextResponse.json(
{
success: false,
error:
(errorData as { error?: { message?: string } }).error?.message ||
`Failed to upload file: ${fileName}`,
error: errorData.error?.message || `Failed to upload file: ${fileName}`,
},
{ status: uploadResponse.status }
)
}
const uploadData = (await uploadResponse.json()) as MicrosoftGraphDriveItem
const uploadData = await uploadResponse.json()
logger.info(`[${requestId}] File uploaded successfully: ${fileName}`)
uploadedFiles.push({
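The replaced branch in this SharePoint hunk retries a 409 by re-issuing the PUT with @microsoft.graph.conflictBehavior=replace appended to the upload URL, so an existing file is overwritten instead of failing the run. A minimal sketch of that retry, assuming the same uploadUrl, buffer, and token as the surrounding code.

// Sketch: on 409 (item already exists), retry the simple upload with
// conflictBehavior=replace so the existing drive item is overwritten.
async function uploadWithReplaceOn409(
  uploadUrl: string,
  accessToken: string,
  buffer: Buffer,
  contentType: string
): Promise<Response> {
  const doPut = (url: string) =>
    fetch(url, {
      method: 'PUT',
      headers: { Authorization: `Bearer ${accessToken}`, 'Content-Type': contentType },
      body: new Uint8Array(buffer),
    })
  const first = await doPut(uploadUrl)
  if (first.status !== 409) return first
  return doPut(`${uploadUrl}?@microsoft.graph.conflictBehavior=replace`)
}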

View File

@@ -1,170 +0,0 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import {
secureFetchWithPinnedIP,
validateUrlWithDNS,
} from '@/lib/core/security/input-validation.server'
import { generateRequestId } from '@/lib/core/utils/request'
export const dynamic = 'force-dynamic'
const logger = createLogger('SlackDownloadAPI')
const SlackDownloadSchema = z.object({
accessToken: z.string().min(1, 'Access token is required'),
fileId: z.string().min(1, 'File ID is required'),
fileName: z.string().optional().nullable(),
})
export async function POST(request: NextRequest) {
const requestId = generateRequestId()
try {
const authResult = await checkInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success) {
logger.warn(`[${requestId}] Unauthorized Slack download attempt: ${authResult.error}`)
return NextResponse.json(
{
success: false,
error: authResult.error || 'Authentication required',
},
{ status: 401 }
)
}
logger.info(`[${requestId}] Authenticated Slack download request via ${authResult.authType}`, {
userId: authResult.userId,
})
const body = await request.json()
const validatedData = SlackDownloadSchema.parse(body)
const { accessToken, fileId, fileName } = validatedData
logger.info(`[${requestId}] Getting file info from Slack`, { fileId })
const infoResponse = await fetch(`https://slack.com/api/files.info?file=${fileId}`, {
method: 'GET',
headers: {
Authorization: `Bearer ${accessToken}`,
},
})
if (!infoResponse.ok) {
const errorDetails = await infoResponse.json().catch(() => ({}))
logger.error(`[${requestId}] Failed to get file info from Slack`, {
status: infoResponse.status,
statusText: infoResponse.statusText,
error: errorDetails,
})
return NextResponse.json(
{
success: false,
error: errorDetails.error || 'Failed to get file info',
},
{ status: 400 }
)
}
const data = await infoResponse.json()
if (!data.ok) {
logger.error(`[${requestId}] Slack API returned error`, { error: data.error })
return NextResponse.json(
{
success: false,
error: data.error || 'Slack API error',
},
{ status: 400 }
)
}
const file = data.file
const resolvedFileName = fileName || file.name || 'download'
const mimeType = file.mimetype || 'application/octet-stream'
const urlPrivate = file.url_private
if (!urlPrivate) {
return NextResponse.json(
{
success: false,
error: 'File does not have a download URL',
},
{ status: 400 }
)
}
const urlValidation = await validateUrlWithDNS(urlPrivate, 'urlPrivate')
if (!urlValidation.isValid) {
return NextResponse.json(
{
success: false,
error: urlValidation.error,
},
{ status: 400 }
)
}
logger.info(`[${requestId}] Downloading file from Slack`, {
fileId,
fileName: resolvedFileName,
mimeType,
})
const downloadResponse = await secureFetchWithPinnedIP(urlPrivate, urlValidation.resolvedIP!, {
headers: {
Authorization: `Bearer ${accessToken}`,
},
})
if (!downloadResponse.ok) {
logger.error(`[${requestId}] Failed to download file content`, {
status: downloadResponse.status,
statusText: downloadResponse.statusText,
})
return NextResponse.json(
{
success: false,
error: 'Failed to download file content',
},
{ status: 400 }
)
}
const arrayBuffer = await downloadResponse.arrayBuffer()
const fileBuffer = Buffer.from(arrayBuffer)
logger.info(`[${requestId}] File downloaded successfully`, {
fileId,
name: resolvedFileName,
size: fileBuffer.length,
mimeType,
})
const base64Data = fileBuffer.toString('base64')
return NextResponse.json({
success: true,
output: {
file: {
name: resolvedFileName,
mimeType,
data: base64Data,
size: fileBuffer.length,
},
},
})
} catch (error) {
logger.error(`[${requestId}] Error downloading Slack file:`, error)
return NextResponse.json(
{
success: false,
error: error instanceof Error ? error.message : 'Unknown error occurred',
},
{ status: 500 }
)
}
}

View File

@@ -3,7 +3,6 @@ import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { RawFileInputArraySchema } from '@/lib/uploads/utils/file-schemas'
import { sendSlackMessage } from '../utils'
export const dynamic = 'force-dynamic'
@@ -17,7 +16,7 @@ const SlackSendMessageSchema = z
userId: z.string().optional().nullable(),
text: z.string().min(1, 'Message text is required'),
thread_ts: z.string().optional().nullable(),
files: RawFileInputArraySchema.optional().nullable(),
files: z.array(z.any()).optional().nullable(),
})
.refine((data) => data.channel || data.userId, {
message: 'Either channel or userId is required',

View File

@@ -1,8 +1,6 @@
import type { Logger } from '@sim/logger'
import { secureFetchWithValidation } from '@/lib/core/security/input-validation.server'
import { processFilesToUserFiles } from '@/lib/uploads/utils/file-utils'
import { downloadFileFromStorage } from '@/lib/uploads/utils/file-utils.server'
import type { ToolFileData } from '@/tools/types'
/**
* Sends a message to a Slack channel using chat.postMessage
@@ -72,10 +70,9 @@ export async function uploadFilesToSlack(
accessToken: string,
requestId: string,
logger: Logger
): Promise<{ fileIds: string[]; files: ToolFileData[] }> {
): Promise<string[]> {
const userFiles = processFilesToUserFiles(files, requestId, logger)
const uploadedFileIds: string[] = []
const uploadedFiles: ToolFileData[] = []
for (const userFile of userFiles) {
logger.info(`[${requestId}] Uploading file: ${userFile.name}`)
@@ -103,14 +100,10 @@ export async function uploadFilesToSlack(
logger.info(`[${requestId}] Got upload URL for ${userFile.name}, file_id: ${urlData.file_id}`)
const uploadResponse = await secureFetchWithValidation(
urlData.upload_url,
{
method: 'POST',
body: buffer,
},
'uploadUrl'
)
const uploadResponse = await fetch(urlData.upload_url, {
method: 'POST',
body: new Uint8Array(buffer),
})
if (!uploadResponse.ok) {
logger.error(`[${requestId}] Failed to upload file data: ${uploadResponse.status}`)
@@ -119,16 +112,9 @@ export async function uploadFilesToSlack(
logger.info(`[${requestId}] File data uploaded successfully`)
uploadedFileIds.push(urlData.file_id)
// Only add to uploadedFiles after successful upload to keep arrays in sync
uploadedFiles.push({
name: userFile.name,
mimeType: userFile.type || 'application/octet-stream',
data: buffer.toString('base64'),
size: buffer.length,
})
}
return { fileIds: uploadedFileIds, files: uploadedFiles }
return uploadedFileIds
}
/**
@@ -138,8 +124,7 @@ export async function completeSlackFileUpload(
uploadedFileIds: string[],
channel: string,
text: string,
accessToken: string,
threadTs?: string | null
accessToken: string
): Promise<{ ok: boolean; files?: any[]; error?: string }> {
const response = await fetch('https://slack.com/api/files.completeUploadExternal', {
method: 'POST',
@@ -151,7 +136,6 @@ export async function completeSlackFileUpload(
files: uploadedFileIds.map((id) => ({ id })),
channel_id: channel,
initial_comment: text,
...(threadTs && { thread_ts: threadTs }),
}),
})
@@ -233,13 +217,7 @@ export async function sendSlackMessage(
logger: Logger
): Promise<{
success: boolean
output?: {
message: any
ts: string
channel: string
fileCount?: number
files?: ToolFileData[]
}
output?: { message: any; ts: string; channel: string; fileCount?: number }
error?: string
}> {
const { accessToken, text, threadTs, files } = params
@@ -271,15 +249,10 @@ export async function sendSlackMessage(
// Process files
logger.info(`[${requestId}] Processing ${files.length} file(s)`)
const { fileIds, files: uploadedFiles } = await uploadFilesToSlack(
files,
accessToken,
requestId,
logger
)
const uploadedFileIds = await uploadFilesToSlack(files, accessToken, requestId, logger)
// No valid files uploaded - send text-only
if (fileIds.length === 0) {
if (uploadedFileIds.length === 0) {
logger.warn(`[${requestId}] No valid files to upload, sending text-only message`)
const data = await postSlackMessage(accessToken, channel, text, threadTs)
@@ -291,8 +264,8 @@ export async function sendSlackMessage(
return { success: true, output: formatMessageSuccessResponse(data, text) }
}
// Complete file upload with thread support
const completeData = await completeSlackFileUpload(fileIds, channel, text, accessToken, threadTs)
// Complete file upload
const completeData = await completeSlackFileUpload(uploadedFileIds, channel, text, accessToken)
if (!completeData.ok) {
logger.error(`[${requestId}] Failed to complete upload:`, completeData.error)
@@ -309,8 +282,7 @@ export async function sendSlackMessage(
message: fileMessage,
ts: fileMessage.ts,
channel,
fileCount: fileIds.length,
files: uploadedFiles,
fileCount: uploadedFileIds.length,
},
}
}
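For context, the Slack helpers above implement the external upload flow: request a one-time URL with files.getUploadURLExternal, POST the raw bytes to it, then attach the uploaded file to a channel (optionally a thread) with files.completeUploadExternal. A condensed happy-path sketch of the three steps; error handling is trimmed and the helper name is illustrative.

// Sketch: Slack external file upload in three steps (happy path only).
async function uploadOneFileToSlack(
  accessToken: string,
  channel: string,
  name: string,
  buffer: Buffer
): Promise<void> {
  // 1. Ask Slack for a one-time upload URL and file id.
  const urlResp = await fetch(
    `https://slack.com/api/files.getUploadURLExternal?filename=${encodeURIComponent(name)}&length=${buffer.length}`,
    { method: 'GET', headers: { Authorization: `Bearer ${accessToken}` } }
  )
  const urlData = (await urlResp.json()) as { ok: boolean; upload_url: string; file_id: string }
  if (!urlData.ok) throw new Error('getUploadURLExternal failed')

  // 2. POST the file bytes to the returned upload URL.
  await fetch(urlData.upload_url, { method: 'POST', body: new Uint8Array(buffer) })

  // 3. Attach the uploaded file to a channel (thread_ts could be added here for threading).
  await fetch('https://slack.com/api/files.completeUploadExternal', {
    method: 'POST',
    headers: { Authorization: `Bearer ${accessToken}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({ files: [{ id: urlData.file_id }], channel_id: channel }),
  })
}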

View File

@@ -4,7 +4,6 @@ import nodemailer from 'nodemailer'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { RawFileInputArraySchema } from '@/lib/uploads/utils/file-schemas'
import { processFilesToUserFiles } from '@/lib/uploads/utils/file-utils'
import { downloadFileFromStorage } from '@/lib/uploads/utils/file-utils.server'
@@ -29,7 +28,7 @@ const SmtpSendSchema = z.object({
cc: z.string().optional().nullable(),
bcc: z.string().optional().nullable(),
replyTo: z.string().optional().nullable(),
attachments: RawFileInputArraySchema.optional().nullable(),
attachments: z.array(z.any()).optional().nullable(),
})
export async function POST(request: NextRequest) {
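Downstream of this schema, the route hands the attachments to nodemailer after downloading each one to a Buffer. A minimal sketch of that mapping and send call using nodemailer's standard sendMail options; the transport configuration values are placeholders (the real route takes them from the validated request), and the helper name is illustrative.

// Sketch: send via nodemailer with downloaded attachments (placeholder config values).
import nodemailer from 'nodemailer'

async function sendWithAttachments(
  files: Array<{ name: string; buffer: Buffer; type?: string }>
): Promise<void> {
  const transporter = nodemailer.createTransport({
    host: 'smtp.example.com', // assumption: host/port/credentials come from the request in the real route
    port: 587,
    secure: false,
    auth: { user: 'user@example.com', pass: 'app-password' },
  })
  await transporter.sendMail({
    from: 'user@example.com',
    to: 'recipient@example.com',
    subject: 'Report',
    text: 'See attached.',
    attachments: files.map((f) => ({
      filename: f.name,
      content: f.buffer,
      contentType: f.type || 'application/octet-stream',
    })),
  })
}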

View File

@@ -5,7 +5,6 @@ import { type NextRequest, NextResponse } from 'next/server'
import type { Client, SFTPWrapper } from 'ssh2'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { getFileExtension, getMimeTypeFromExtension } from '@/lib/uploads/utils/file-utils'
import { createSSHConnection, sanitizePath } from '@/app/api/tools/ssh/utils'
const logger = createLogger('SSHDownloadFileAPI')
@@ -80,16 +79,6 @@ export async function POST(request: NextRequest) {
})
})
// Check file size limit (50MB to prevent memory exhaustion)
const maxSize = 50 * 1024 * 1024
if (stats.size > maxSize) {
const sizeMB = (stats.size / (1024 * 1024)).toFixed(2)
return NextResponse.json(
{ error: `File size (${sizeMB}MB) exceeds download limit of 50MB` },
{ status: 400 }
)
}
// Read file content
const content = await new Promise<Buffer>((resolve, reject) => {
const chunks: Buffer[] = []
@@ -107,8 +96,6 @@ export async function POST(request: NextRequest) {
})
const fileName = path.basename(remotePath)
const extension = getFileExtension(fileName)
const mimeType = getMimeTypeFromExtension(extension)
// Encode content as base64 for binary safety
const base64Content = content.toString('base64')
@@ -117,12 +104,6 @@ export async function POST(request: NextRequest) {
return NextResponse.json({
downloaded: true,
file: {
name: fileName,
mimeType,
data: base64Content,
size: stats.size,
},
content: base64Content,
fileName: fileName,
remotePath: remotePath,

View File

@@ -3,7 +3,6 @@ import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { env } from '@/lib/core/config/env'
import { validateUrlWithDNS } from '@/lib/core/security/input-validation.server'
import { isSensitiveKey, REDACTED_MARKER } from '@/lib/core/security/redaction'
import { ensureZodObject, normalizeUrl } from '@/app/api/tools/stagehand/utils'
@@ -124,10 +123,6 @@ export async function POST(request: NextRequest) {
const variablesObject = processVariables(params.variables)
const startUrl = normalizeUrl(rawStartUrl)
const urlValidation = await validateUrlWithDNS(startUrl, 'startUrl')
if (!urlValidation.isValid) {
return NextResponse.json({ error: urlValidation.error }, { status: 400 })
}
logger.info('Starting Stagehand agent process', {
rawStartUrl,

View File

@@ -3,7 +3,6 @@ import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { env } from '@/lib/core/config/env'
import { validateUrlWithDNS } from '@/lib/core/security/input-validation.server'
import { ensureZodObject, normalizeUrl } from '@/app/api/tools/stagehand/utils'
const logger = createLogger('StagehandExtractAPI')
@@ -52,10 +51,6 @@ export async function POST(request: NextRequest) {
const params = validationResult.data
const { url: rawUrl, instruction, selector, provider, apiKey, schema } = params
const url = normalizeUrl(rawUrl)
const urlValidation = await validateUrlWithDNS(url, 'url')
if (!urlValidation.isValid) {
return NextResponse.json({ error: urlValidation.error }, { status: 400 })
}
logger.info('Starting Stagehand extraction process', {
rawUrl,

View File

@@ -2,15 +2,7 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { extractAudioFromVideo, isVideoFile } from '@/lib/audio/extractor'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import {
secureFetchWithPinnedIP,
validateUrlWithDNS,
} from '@/lib/core/security/input-validation.server'
import { getMimeTypeFromExtension, isInternalFileUrl } from '@/lib/uploads/utils/file-utils'
import {
downloadFileFromStorage,
resolveInternalFileUrl,
} from '@/lib/uploads/utils/file-utils.server'
import { downloadFileFromStorage } from '@/lib/uploads/utils/file-utils.server'
import type { UserFile } from '@/executor/types'
import type { TranscriptSegment } from '@/tools/stt/types'
@@ -53,7 +45,6 @@ export async function POST(request: NextRequest) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const userId = authResult.userId
const body: SttRequestBody = await request.json()
const {
provider,
@@ -81,25 +72,13 @@ export async function POST(request: NextRequest) {
let audioMimeType: string
if (body.audioFile) {
if (Array.isArray(body.audioFile) && body.audioFile.length !== 1) {
return NextResponse.json({ error: 'audioFile must be a single file' }, { status: 400 })
}
const file = Array.isArray(body.audioFile) ? body.audioFile[0] : body.audioFile
logger.info(`[${requestId}] Processing uploaded file: ${file.name}`)
audioBuffer = await downloadFileFromStorage(file, requestId, logger)
audioFileName = file.name
// file.type may be missing if the file came from a block that doesn't preserve it
// Infer from filename extension as fallback
const ext = file.name.split('.').pop()?.toLowerCase() || ''
audioMimeType = file.type || getMimeTypeFromExtension(ext)
audioMimeType = file.type
} else if (body.audioFileReference) {
if (Array.isArray(body.audioFileReference) && body.audioFileReference.length !== 1) {
return NextResponse.json(
{ error: 'audioFileReference must be a single file' },
{ status: 400 }
)
}
const file = Array.isArray(body.audioFileReference)
? body.audioFileReference[0]
: body.audioFileReference
@@ -107,54 +86,18 @@ export async function POST(request: NextRequest) {
audioBuffer = await downloadFileFromStorage(file, requestId, logger)
audioFileName = file.name
const ext = file.name.split('.').pop()?.toLowerCase() || ''
audioMimeType = file.type || getMimeTypeFromExtension(ext)
audioMimeType = file.type
} else if (body.audioUrl) {
logger.info(`[${requestId}] Downloading from URL: ${body.audioUrl}`)
let audioUrl = body.audioUrl.trim()
if (audioUrl.startsWith('/') && !isInternalFileUrl(audioUrl)) {
return NextResponse.json(
{
error: 'Invalid file path. Only uploaded files are supported for internal paths.',
},
{ status: 400 }
)
}
if (isInternalFileUrl(audioUrl)) {
if (!userId) {
return NextResponse.json(
{ error: 'Authentication required for internal file access' },
{ status: 401 }
)
}
const resolution = await resolveInternalFileUrl(audioUrl, userId, requestId, logger)
if (resolution.error) {
return NextResponse.json(
{ error: resolution.error.message },
{ status: resolution.error.status }
)
}
audioUrl = resolution.fileUrl || audioUrl
}
const urlValidation = await validateUrlWithDNS(audioUrl, 'audioUrl')
if (!urlValidation.isValid) {
return NextResponse.json({ error: urlValidation.error }, { status: 400 })
}
const response = await secureFetchWithPinnedIP(audioUrl, urlValidation.resolvedIP!, {
method: 'GET',
})
const response = await fetch(body.audioUrl)
if (!response.ok) {
throw new Error(`Failed to download audio from URL: ${response.statusText}`)
}
const arrayBuffer = await response.arrayBuffer()
audioBuffer = Buffer.from(arrayBuffer)
audioFileName = audioUrl.split('/').pop() || 'audio_file'
audioFileName = body.audioUrl.split('/').pop() || 'audio_file'
audioMimeType = response.headers.get('content-type') || 'audio/mpeg'
} else {
return NextResponse.json(
@@ -206,9 +149,7 @@ export async function POST(request: NextRequest) {
translateToEnglish,
model,
body.prompt,
body.temperature,
audioMimeType,
audioFileName
body.temperature
)
transcript = result.transcript
segments = result.segments
@@ -221,8 +162,7 @@ export async function POST(request: NextRequest) {
language,
timestamps,
diarization,
model,
audioMimeType
model
)
transcript = result.transcript
segments = result.segments
@@ -312,9 +252,7 @@ async function transcribeWithWhisper(
translate?: boolean,
model?: string,
prompt?: string,
temperature?: number,
mimeType?: string,
fileName?: string
temperature?: number
): Promise<{
transcript: string
segments?: TranscriptSegment[]
@@ -323,11 +261,8 @@ async function transcribeWithWhisper(
}> {
const formData = new FormData()
// Use actual MIME type and filename if provided
const actualMimeType = mimeType || 'audio/mpeg'
const actualFileName = fileName || 'audio.mp3'
const blob = new Blob([new Uint8Array(audioBuffer)], { type: actualMimeType })
formData.append('file', blob, actualFileName)
const blob = new Blob([new Uint8Array(audioBuffer)], { type: 'audio/mpeg' })
formData.append('file', blob, 'audio.mp3')
formData.append('model', model || 'whisper-1')
if (language && language !== 'auto') {
@@ -344,11 +279,10 @@ async function transcribeWithWhisper(
formData.append('response_format', 'verbose_json')
// OpenAI API uses array notation for timestamp_granularities
if (timestamps === 'word') {
formData.append('timestamp_granularities[]', 'word')
formData.append('timestamp_granularities', 'word')
} else if (timestamps === 'sentence') {
formData.append('timestamp_granularities[]', 'segment')
formData.append('timestamp_granularities', 'segment')
}
const endpoint = translate ? 'translations' : 'transcriptions'
@@ -391,8 +325,7 @@ async function transcribeWithDeepgram(
language?: string,
timestamps?: 'none' | 'sentence' | 'word',
diarization?: boolean,
model?: string,
mimeType?: string
model?: string
): Promise<{
transcript: string
segments?: TranscriptSegment[]
@@ -424,7 +357,7 @@ async function transcribeWithDeepgram(
method: 'POST',
headers: {
Authorization: `Token ${apiKey}`,
'Content-Type': mimeType || 'audio/mpeg',
'Content-Type': 'audio/mpeg',
},
body: new Uint8Array(audioBuffer),
})
@@ -580,8 +513,7 @@ async function transcribeWithAssemblyAI(
audio_url: upload_url,
}
// AssemblyAI supports 'best', 'slam-1', or 'universal' for speech_model
if (model === 'best' || model === 'slam-1' || model === 'universal') {
if (model === 'best' || model === 'nano') {
transcriptRequest.speech_model = model
}
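The Whisper hunk above hinges on two details: the audio bytes must be appended as a file part with a real filename and MIME type so the API can detect the format, and verbose_json timestamps use the array-style field name timestamp_granularities[]. A condensed sketch of building that FormData; the function and variable names are illustrative.

// Sketch: Whisper transcription request body with word-level timestamps.
function buildWhisperForm(audio: Buffer, fileName: string, mimeType: string): FormData {
  const form = new FormData()
  // The file part carries the filename and MIME type so format detection works.
  form.append('file', new Blob([new Uint8Array(audio)], { type: mimeType }), fileName)
  form.append('model', 'whisper-1')
  form.append('response_format', 'verbose_json')
  // OpenAI expects array notation for this field.
  form.append('timestamp_granularities[]', 'word')
  return form
}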

View File

@@ -3,7 +3,6 @@ import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { FileInputSchema } from '@/lib/uploads/utils/file-schemas'
import { processSingleFileToUserFile } from '@/lib/uploads/utils/file-utils'
import { downloadFileFromStorage } from '@/lib/uploads/utils/file-utils.server'
@@ -17,7 +16,7 @@ const SupabaseStorageUploadSchema = z.object({
bucket: z.string().min(1, 'Bucket name is required'),
fileName: z.string().min(1, 'File name is required'),
path: z.string().optional().nullable(),
fileData: FileInputSchema,
fileData: z.any(),
contentType: z.string().optional().nullable(),
upsert: z.boolean().optional().default(false),
})

View File

@@ -3,7 +3,6 @@ import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { RawFileInputArraySchema } from '@/lib/uploads/utils/file-schemas'
import { processFilesToUserFiles } from '@/lib/uploads/utils/file-utils'
import { downloadFileFromStorage } from '@/lib/uploads/utils/file-utils.server'
import { convertMarkdownToHTML } from '@/tools/telegram/utils'
@@ -15,7 +14,7 @@ const logger = createLogger('TelegramSendDocumentAPI')
const TelegramSendDocumentSchema = z.object({
botToken: z.string().min(1, 'Bot token is required'),
chatId: z.string().min(1, 'Chat ID is required'),
files: RawFileInputArraySchema.optional().nullable(),
files: z.array(z.any()).optional().nullable(),
caption: z.string().optional().nullable(),
})
@@ -94,14 +93,6 @@ export async function POST(request: NextRequest) {
logger.info(`[${requestId}] Uploading document: ${userFile.name}`)
const buffer = await downloadFileFromStorage(userFile, requestId, logger)
const filesOutput = [
{
name: userFile.name,
mimeType: userFile.type || 'application/octet-stream',
data: buffer.toString('base64'),
size: buffer.length,
},
]
logger.info(`[${requestId}] Downloaded file: ${buffer.length} bytes`)
@@ -144,7 +135,6 @@ export async function POST(request: NextRequest) {
output: {
message: 'Document sent successfully',
data: data.result,
files: filesOutput,
},
})
} catch (error) {

View File

@@ -3,18 +3,19 @@ import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { validateAwsRegion, validateS3BucketName } from '@/lib/core/security/input-validation'
import {
secureFetchWithPinnedIP,
validateUrlWithDNS,
} from '@/lib/core/security/input-validation.server'
validateAwsRegion,
validateExternalUrl,
validateS3BucketName,
} from '@/lib/core/security/input-validation'
import { generateRequestId } from '@/lib/core/utils/request'
import { RawFileInputSchema } from '@/lib/uploads/utils/file-schemas'
import { isInternalFileUrl, processSingleFileToUserFile } from '@/lib/uploads/utils/file-utils'
import { StorageService } from '@/lib/uploads'
import {
downloadFileFromStorage,
resolveInternalFileUrl,
} from '@/lib/uploads/utils/file-utils.server'
extractStorageKey,
inferContextFromKey,
isInternalFileUrl,
} from '@/lib/uploads/utils/file-utils'
import { verifyFileAccess } from '@/app/api/files/authorization'
export const dynamic = 'force-dynamic'
export const maxDuration = 300 // 5 minutes for large multi-page PDF processing
@@ -34,7 +35,6 @@ const TextractParseSchema = z
region: z.string().min(1, 'AWS region is required'),
processingMode: z.enum(['sync', 'async']).optional().default('sync'),
filePath: z.string().optional(),
file: RawFileInputSchema.optional(),
s3Uri: z.string().optional(),
featureTypes: z
.array(z.enum(['TABLES', 'FORMS', 'QUERIES', 'SIGNATURES', 'LAYOUT']))
@@ -50,20 +50,6 @@ const TextractParseSchema = z
path: ['region'],
})
}
if (data.processingMode === 'async' && !data.s3Uri) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
message: 'S3 URI is required for multi-page processing (s3://bucket/key)',
path: ['s3Uri'],
})
}
if (data.processingMode !== 'async' && !data.file && !data.filePath) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
message: 'File input is required for single-page processing',
path: ['filePath'],
})
}
})
function getSignatureKey(
@@ -125,14 +111,7 @@ function signAwsRequest(
}
async function fetchDocumentBytes(url: string): Promise<{ bytes: string; contentType: string }> {
const urlValidation = await validateUrlWithDNS(url, 'Document URL')
if (!urlValidation.isValid) {
throw new Error(urlValidation.error || 'Invalid document URL')
}
const response = await secureFetchWithPinnedIP(url, urlValidation.resolvedIP!, {
method: 'GET',
})
const response = await fetch(url)
if (!response.ok) {
throw new Error(`Failed to fetch document: ${response.statusText}`)
}
@@ -339,8 +318,8 @@ export async function POST(request: NextRequest) {
logger.info(`[${requestId}] Textract parse request`, {
processingMode,
hasFile: Boolean(validatedData.file),
hasS3Uri: Boolean(validatedData.s3Uri),
filePath: validatedData.filePath?.substring(0, 50),
s3Uri: validatedData.s3Uri?.substring(0, 50),
featureTypes,
userId,
})
@@ -435,89 +414,90 @@ export async function POST(request: NextRequest) {
})
}
let bytes = ''
let contentType = 'application/octet-stream'
let isPdf = false
if (validatedData.file) {
let userFile
try {
userFile = processSingleFileToUserFile(validatedData.file, requestId, logger)
} catch (error) {
return NextResponse.json(
{
success: false,
error: error instanceof Error ? error.message : 'Failed to process file',
},
{ status: 400 }
)
}
const buffer = await downloadFileFromStorage(userFile, requestId, logger)
bytes = buffer.toString('base64')
contentType = userFile.type || 'application/octet-stream'
isPdf = contentType.includes('pdf') || userFile.name?.toLowerCase().endsWith('.pdf')
} else if (validatedData.filePath) {
let fileUrl = validatedData.filePath
const isInternalFilePath = isInternalFileUrl(fileUrl)
if (isInternalFilePath) {
const resolution = await resolveInternalFileUrl(fileUrl, userId, requestId, logger)
if (resolution.error) {
return NextResponse.json(
{
success: false,
error: resolution.error.message,
},
{ status: resolution.error.status }
)
}
fileUrl = resolution.fileUrl || fileUrl
} else if (fileUrl.startsWith('/')) {
logger.warn(`[${requestId}] Invalid internal path`, {
userId,
path: fileUrl.substring(0, 50),
})
return NextResponse.json(
{
success: false,
error: 'Invalid file path. Only uploaded files are supported for internal paths.',
},
{ status: 400 }
)
} else {
const urlValidation = await validateUrlWithDNS(fileUrl, 'Document URL')
if (!urlValidation.isValid) {
logger.warn(`[${requestId}] SSRF attempt blocked`, {
userId,
url: fileUrl.substring(0, 100),
error: urlValidation.error,
})
return NextResponse.json(
{
success: false,
error: urlValidation.error,
},
{ status: 400 }
)
}
}
const fetched = await fetchDocumentBytes(fileUrl)
bytes = fetched.bytes
contentType = fetched.contentType
isPdf = contentType.includes('pdf') || fileUrl.toLowerCase().endsWith('.pdf')
} else {
if (!validatedData.filePath) {
return NextResponse.json(
{
success: false,
error: 'File input is required for single-page processing',
error: 'File path is required for single-page processing',
},
{ status: 400 }
)
}
let fileUrl = validatedData.filePath
const isInternalFilePath = validatedData.filePath && isInternalFileUrl(validatedData.filePath)
if (isInternalFilePath) {
try {
const storageKey = extractStorageKey(validatedData.filePath)
const context = inferContextFromKey(storageKey)
const hasAccess = await verifyFileAccess(storageKey, userId, undefined, context, false)
if (!hasAccess) {
logger.warn(`[${requestId}] Unauthorized presigned URL generation attempt`, {
userId,
key: storageKey,
context,
})
return NextResponse.json(
{
success: false,
error: 'File not found',
},
{ status: 404 }
)
}
fileUrl = await StorageService.generatePresignedDownloadUrl(storageKey, context, 5 * 60)
logger.info(`[${requestId}] Generated presigned URL for ${context} file`)
} catch (error) {
logger.error(`[${requestId}] Failed to generate presigned URL:`, error)
return NextResponse.json(
{
success: false,
error: 'Failed to generate file access URL',
},
{ status: 500 }
)
}
} else if (validatedData.filePath?.startsWith('/')) {
// Reject arbitrary absolute paths that don't contain /api/files/serve/
logger.warn(`[${requestId}] Invalid internal path`, {
userId,
path: validatedData.filePath.substring(0, 50),
})
return NextResponse.json(
{
success: false,
error: 'Invalid file path. Only uploaded files are supported for internal paths.',
},
{ status: 400 }
)
} else {
const urlValidation = validateExternalUrl(fileUrl, 'Document URL')
if (!urlValidation.isValid) {
logger.warn(`[${requestId}] SSRF attempt blocked`, {
userId,
url: fileUrl.substring(0, 100),
error: urlValidation.error,
})
return NextResponse.json(
{
success: false,
error: urlValidation.error,
},
{ status: 400 }
)
}
}
const { bytes, contentType } = await fetchDocumentBytes(fileUrl)
// Track if this is a PDF for better error messaging
const isPdf = contentType.includes('pdf') || fileUrl.toLowerCase().endsWith('.pdf')
const uri = '/'
let textractBody: Record<string, unknown>
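Because this route calls Textract over HTTPS directly rather than through the AWS SDK, it signs requests itself; the getSignatureKey helper referenced above derives the SigV4 signing key from a chain of HMACs. A standalone sketch of that derivation using node:crypto, following AWS's documented algorithm rather than the route's exact function signature.

// Sketch: AWS Signature Version 4 signing-key derivation (HMAC-SHA256 chain).
import { createHmac } from 'node:crypto'

function hmac(key: Buffer | string, data: string): Buffer {
  return createHmac('sha256', key).update(data, 'utf8').digest()
}

function deriveSigningKey(secretKey: string, dateStamp: string, region: string, service: string): Buffer {
  const kDate = hmac(`AWS4${secretKey}`, dateStamp) // dateStamp like '20260131'
  const kRegion = hmac(kDate, region)               // e.g. 'us-east-1'
  const kService = hmac(kRegion, service)           // 'textract'
  return hmac(kService, 'aws4_request')
}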

View File

@@ -1,250 +0,0 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import {
secureFetchWithPinnedIP,
validateUrlWithDNS,
} from '@/lib/core/security/input-validation.server'
import { generateRequestId } from '@/lib/core/utils/request'
import { getExtensionFromMimeType } from '@/lib/uploads/utils/file-utils'
export const dynamic = 'force-dynamic'
const logger = createLogger('TwilioGetRecordingAPI')
interface TwilioRecordingResponse {
sid?: string
call_sid?: string
duration?: string
status?: string
channels?: number
source?: string
price?: string
price_unit?: string
uri?: string
error_code?: number
message?: string
error_message?: string
}
interface TwilioErrorResponse {
message?: string
}
interface TwilioTranscription {
transcription_text?: string
status?: string
price?: string
price_unit?: string
}
interface TwilioTranscriptionsResponse {
transcriptions?: TwilioTranscription[]
}
const TwilioGetRecordingSchema = z.object({
accountSid: z.string().min(1, 'Account SID is required'),
authToken: z.string().min(1, 'Auth token is required'),
recordingSid: z.string().min(1, 'Recording SID is required'),
})
export async function POST(request: NextRequest) {
const requestId = generateRequestId()
try {
const authResult = await checkInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success) {
logger.warn(`[${requestId}] Unauthorized Twilio get recording attempt: ${authResult.error}`)
return NextResponse.json(
{
success: false,
error: authResult.error || 'Authentication required',
},
{ status: 401 }
)
}
const body = await request.json()
const validatedData = TwilioGetRecordingSchema.parse(body)
const { accountSid, authToken, recordingSid } = validatedData
if (!accountSid.startsWith('AC')) {
return NextResponse.json(
{
success: false,
error: `Invalid Account SID format. Account SID must start with "AC" (you provided: ${accountSid.substring(0, 2)}...)`,
},
{ status: 400 }
)
}
const twilioAuth = Buffer.from(`${accountSid}:${authToken}`).toString('base64')
logger.info(`[${requestId}] Getting recording info from Twilio`, { recordingSid })
const infoUrl = `https://api.twilio.com/2010-04-01/Accounts/${accountSid}/Recordings/${recordingSid}.json`
const infoUrlValidation = await validateUrlWithDNS(infoUrl, 'infoUrl')
if (!infoUrlValidation.isValid) {
return NextResponse.json({ success: false, error: infoUrlValidation.error }, { status: 400 })
}
const infoResponse = await secureFetchWithPinnedIP(infoUrl, infoUrlValidation.resolvedIP!, {
method: 'GET',
headers: { Authorization: `Basic ${twilioAuth}` },
})
if (!infoResponse.ok) {
const errorData = (await infoResponse.json().catch(() => ({}))) as TwilioErrorResponse
logger.error(`[${requestId}] Twilio API error`, {
status: infoResponse.status,
error: errorData,
})
return NextResponse.json(
{ success: false, error: errorData.message || `Twilio API error: ${infoResponse.status}` },
{ status: 400 }
)
}
const data = (await infoResponse.json()) as TwilioRecordingResponse
if (data.error_code) {
return NextResponse.json({
success: false,
output: {
success: false,
error: data.message || data.error_message || 'Failed to retrieve recording',
},
error: data.message || data.error_message || 'Failed to retrieve recording',
})
}
const baseUrl = 'https://api.twilio.com'
const mediaUrl = data.uri ? `${baseUrl}${data.uri.replace('.json', '')}` : undefined
let transcriptionText: string | undefined
let transcriptionStatus: string | undefined
let transcriptionPrice: string | undefined
let transcriptionPriceUnit: string | undefined
let file:
| {
name: string
mimeType: string
data: string
size: number
}
| undefined
try {
const transcriptionUrl = `https://api.twilio.com/2010-04-01/Accounts/${accountSid}/Transcriptions.json?RecordingSid=${data.sid}`
logger.info(`[${requestId}] Checking for transcriptions`)
const transcriptionUrlValidation = await validateUrlWithDNS(
transcriptionUrl,
'transcriptionUrl'
)
if (transcriptionUrlValidation.isValid) {
const transcriptionResponse = await secureFetchWithPinnedIP(
transcriptionUrl,
transcriptionUrlValidation.resolvedIP!,
{
method: 'GET',
headers: { Authorization: `Basic ${twilioAuth}` },
}
)
if (transcriptionResponse.ok) {
const transcriptionData =
(await transcriptionResponse.json()) as TwilioTranscriptionsResponse
if (transcriptionData.transcriptions && transcriptionData.transcriptions.length > 0) {
const transcription = transcriptionData.transcriptions[0]
transcriptionText = transcription.transcription_text
transcriptionStatus = transcription.status
transcriptionPrice = transcription.price
transcriptionPriceUnit = transcription.price_unit
logger.info(`[${requestId}] Transcription found`, {
status: transcriptionStatus,
textLength: transcriptionText?.length,
})
}
}
}
} catch (error) {
logger.warn(`[${requestId}] Failed to fetch transcription:`, error)
}
if (mediaUrl) {
try {
const mediaUrlValidation = await validateUrlWithDNS(mediaUrl, 'mediaUrl')
if (mediaUrlValidation.isValid) {
const mediaResponse = await secureFetchWithPinnedIP(
mediaUrl,
mediaUrlValidation.resolvedIP!,
{
method: 'GET',
headers: { Authorization: `Basic ${twilioAuth}` },
}
)
if (mediaResponse.ok) {
const contentType =
mediaResponse.headers.get('content-type') || 'application/octet-stream'
const extension = getExtensionFromMimeType(contentType) || 'dat'
const arrayBuffer = await mediaResponse.arrayBuffer()
const buffer = Buffer.from(arrayBuffer)
const fileName = `${data.sid || recordingSid}.${extension}`
file = {
name: fileName,
mimeType: contentType,
data: buffer.toString('base64'),
size: buffer.length,
}
}
}
} catch (error) {
logger.warn(`[${requestId}] Failed to download recording media:`, error)
}
}
logger.info(`[${requestId}] Twilio recording fetched successfully`, {
recordingSid: data.sid,
hasFile: !!file,
hasTranscription: !!transcriptionText,
})
return NextResponse.json({
success: true,
output: {
success: true,
recordingSid: data.sid,
callSid: data.call_sid,
duration: data.duration ? Number.parseInt(data.duration, 10) : undefined,
status: data.status,
channels: data.channels,
source: data.source,
mediaUrl,
file,
price: data.price,
priceUnit: data.price_unit,
uri: data.uri,
transcriptionText,
transcriptionStatus,
transcriptionPrice,
transcriptionPriceUnit,
},
})
} catch (error) {
logger.error(`[${requestId}] Error fetching Twilio recording:`, error)
return NextResponse.json(
{
success: false,
error: error instanceof Error ? error.message : 'Unknown error occurred',
},
{ status: 500 }
)
}
}

View File

@@ -1,20 +1,10 @@
import { GoogleGenAI } from '@google/genai'
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import {
secureFetchWithPinnedIP,
validateUrlWithDNS,
} from '@/lib/core/security/input-validation.server'
import { generateRequestId } from '@/lib/core/utils/request'
import { RawFileInputSchema } from '@/lib/uploads/utils/file-schemas'
import { isInternalFileUrl, processSingleFileToUserFile } from '@/lib/uploads/utils/file-utils'
import {
downloadFileFromStorage,
resolveInternalFileUrl,
} from '@/lib/uploads/utils/file-utils.server'
import { convertUsageMetadata, extractTextContent } from '@/providers/google/utils'
import { processSingleFileToUserFile } from '@/lib/uploads/utils/file-utils'
import { downloadFileFromStorage } from '@/lib/uploads/utils/file-utils.server'
export const dynamic = 'force-dynamic'
@@ -23,8 +13,8 @@ const logger = createLogger('VisionAnalyzeAPI')
const VisionAnalyzeSchema = z.object({
apiKey: z.string().min(1, 'API key is required'),
imageUrl: z.string().optional().nullable(),
imageFile: RawFileInputSchema.optional().nullable(),
model: z.string().optional().default('gpt-5.2'),
imageFile: z.any().optional().nullable(),
model: z.string().optional().default('gpt-4o'),
prompt: z.string().optional().nullable(),
})
@@ -49,7 +39,6 @@ export async function POST(request: NextRequest) {
userId: authResult.userId,
})
const userId = authResult.userId
const body = await request.json()
const validatedData = VisionAnalyzeSchema.parse(body)
@@ -88,72 +77,18 @@ export async function POST(request: NextRequest) {
)
}
let base64 = userFile.base64
let bufferLength = 0
if (!base64) {
const buffer = await downloadFileFromStorage(userFile, requestId, logger)
base64 = buffer.toString('base64')
bufferLength = buffer.length
}
const buffer = await downloadFileFromStorage(userFile, requestId, logger)
const base64 = buffer.toString('base64')
const mimeType = userFile.type || 'image/jpeg'
imageSource = `data:${mimeType};base64,${base64}`
if (bufferLength > 0) {
logger.info(`[${requestId}] Converted image to base64 (${bufferLength} bytes)`)
}
}
let imageUrlValidation: Awaited<ReturnType<typeof validateUrlWithDNS>> | null = null
if (imageSource && !imageSource.startsWith('data:')) {
if (imageSource.startsWith('/') && !isInternalFileUrl(imageSource)) {
return NextResponse.json(
{
success: false,
error: 'Invalid file path. Only uploaded files are supported for internal paths.',
},
{ status: 400 }
)
}
if (isInternalFileUrl(imageSource)) {
if (!userId) {
return NextResponse.json(
{
success: false,
error: 'Authentication required for internal file access',
},
{ status: 401 }
)
}
const resolution = await resolveInternalFileUrl(imageSource, userId, requestId, logger)
if (resolution.error) {
return NextResponse.json(
{
success: false,
error: resolution.error.message,
},
{ status: resolution.error.status }
)
}
imageSource = resolution.fileUrl || imageSource
}
imageUrlValidation = await validateUrlWithDNS(imageSource, 'imageUrl')
if (!imageUrlValidation.isValid) {
return NextResponse.json(
{
success: false,
error: imageUrlValidation.error,
},
{ status: 400 }
)
}
logger.info(`[${requestId}] Converted image to base64 (${buffer.length} bytes)`)
}
const defaultPrompt = 'Please analyze this image and describe what you see in detail.'
const prompt = validatedData.prompt || defaultPrompt
const isClaude = validatedData.model.startsWith('claude-')
const isGemini = validatedData.model.startsWith('gemini-')
const isClaude = validatedData.model.startsWith('claude-3')
const apiUrl = isClaude
? 'https://api.anthropic.com/v1/messages'
: 'https://api.openai.com/v1/chat/completions'
@@ -171,72 +106,6 @@ export async function POST(request: NextRequest) {
let requestBody: any
if (isGemini) {
let base64Payload = imageSource
if (!base64Payload.startsWith('data:')) {
const urlValidation =
imageUrlValidation || (await validateUrlWithDNS(base64Payload, 'imageUrl'))
if (!urlValidation.isValid) {
return NextResponse.json({ success: false, error: urlValidation.error }, { status: 400 })
}
const response = await secureFetchWithPinnedIP(base64Payload, urlValidation.resolvedIP!, {
method: 'GET',
})
if (!response.ok) {
return NextResponse.json(
{ success: false, error: 'Failed to fetch image for Gemini' },
{ status: 400 }
)
}
const contentType =
response.headers.get('content-type') || validatedData.imageFile?.type || 'image/jpeg'
const arrayBuffer = await response.arrayBuffer()
const base64 = Buffer.from(arrayBuffer).toString('base64')
base64Payload = `data:${contentType};base64,${base64}`
}
const base64Marker = ';base64,'
const markerIndex = base64Payload.indexOf(base64Marker)
if (!base64Payload.startsWith('data:') || markerIndex === -1) {
return NextResponse.json(
{ success: false, error: 'Invalid base64 image format' },
{ status: 400 }
)
}
const rawMimeType = base64Payload.slice('data:'.length, markerIndex)
const mediaType = rawMimeType.split(';')[0] || 'image/jpeg'
const base64Data = base64Payload.slice(markerIndex + base64Marker.length)
if (!base64Data) {
return NextResponse.json(
{ success: false, error: 'Invalid base64 image format' },
{ status: 400 }
)
}
const ai = new GoogleGenAI({ apiKey: validatedData.apiKey })
const geminiResponse = await ai.models.generateContent({
model: validatedData.model,
contents: [
{
role: 'user',
parts: [{ text: prompt }, { inlineData: { mimeType: mediaType, data: base64Data } }],
},
],
})
const content = extractTextContent(geminiResponse.candidates?.[0])
const usage = convertUsageMetadata(geminiResponse.usageMetadata)
return NextResponse.json({
success: true,
output: {
content,
model: validatedData.model,
tokens: usage.totalTokenCount || undefined,
},
})
}
if (isClaude) {
if (imageSource.startsWith('data:')) {
const base64Match = imageSource.match(/^data:([^;]+);base64,(.+)$/)
@@ -303,7 +172,7 @@ export async function POST(request: NextRequest) {
],
},
],
max_completion_tokens: 1000,
max_tokens: 1000,
}
}
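The Gemini branch removed above normalizes any remote image into a data URL and then splits it into a MIME type and a raw base64 payload. A small worked example of that split, with a hypothetical input value:

// Hypothetical input; shows the same ';base64,' split performed above.
const payload = 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUg...'
const marker = ';base64,'
const markerIndex = payload.indexOf(marker)                                // 14
const mediaType = payload.slice('data:'.length, markerIndex).split(';')[0] // 'image/png'
const base64Data = payload.slice(markerIndex + marker.length)              // 'iVBORw0KGgoAAAANSUhEUg...'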

View File

@@ -3,7 +3,6 @@ import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { RawFileInputSchema } from '@/lib/uploads/utils/file-schemas'
import {
getFileExtension,
getMimeTypeFromExtension,
@@ -20,7 +19,7 @@ const WORDPRESS_COM_API_BASE = 'https://public-api.wordpress.com/wp/v2/sites'
const WordPressUploadSchema = z.object({
accessToken: z.string().min(1, 'Access token is required'),
siteId: z.string().min(1, 'Site ID is required'),
file: RawFileInputSchema.optional().nullable(),
file: z.any().optional().nullable(),
filename: z.string().optional().nullable(),
title: z.string().optional().nullable(),
caption: z.string().optional().nullable(),

View File

@@ -1,216 +0,0 @@
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import {
secureFetchWithPinnedIP,
validateUrlWithDNS,
} from '@/lib/core/security/input-validation.server'
import { generateRequestId } from '@/lib/core/utils/request'
import { getExtensionFromMimeType } from '@/lib/uploads/utils/file-utils'
export const dynamic = 'force-dynamic'
const logger = createLogger('ZoomGetRecordingsAPI')
interface ZoomRecordingFile {
id?: string
meeting_id?: string
recording_start?: string
recording_end?: string
file_type?: string
file_extension?: string
file_size?: number
play_url?: string
download_url?: string
status?: string
recording_type?: string
}
interface ZoomRecordingsResponse {
uuid?: string
id?: string | number
account_id?: string
host_id?: string
topic?: string
type?: number
start_time?: string
duration?: number
total_size?: number
recording_count?: number
share_url?: string
recording_files?: ZoomRecordingFile[]
}
interface ZoomErrorResponse {
message?: string
code?: number
}
const ZoomGetRecordingsSchema = z.object({
accessToken: z.string().min(1, 'Access token is required'),
meetingId: z.string().min(1, 'Meeting ID is required'),
includeFolderItems: z.boolean().optional(),
ttl: z.number().optional(),
downloadFiles: z.boolean().optional().default(false),
})
export async function POST(request: NextRequest) {
const requestId = generateRequestId()
try {
const authResult = await checkInternalAuth(request, { requireWorkflowId: false })
if (!authResult.success) {
logger.warn(`[${requestId}] Unauthorized Zoom get recordings attempt: ${authResult.error}`)
return NextResponse.json(
{
success: false,
error: authResult.error || 'Authentication required',
},
{ status: 401 }
)
}
const body = await request.json()
const validatedData = ZoomGetRecordingsSchema.parse(body)
const { accessToken, meetingId, includeFolderItems, ttl, downloadFiles } = validatedData
const baseUrl = `https://api.zoom.us/v2/meetings/${encodeURIComponent(meetingId)}/recordings`
const queryParams = new URLSearchParams()
if (includeFolderItems != null) {
queryParams.append('include_folder_items', String(includeFolderItems))
}
if (ttl) {
queryParams.append('ttl', String(ttl))
}
const queryString = queryParams.toString()
const apiUrl = queryString ? `${baseUrl}?${queryString}` : baseUrl
logger.info(`[${requestId}] Fetching recordings from Zoom`, { meetingId })
const urlValidation = await validateUrlWithDNS(apiUrl, 'apiUrl')
if (!urlValidation.isValid) {
return NextResponse.json({ success: false, error: urlValidation.error }, { status: 400 })
}
const response = await secureFetchWithPinnedIP(apiUrl, urlValidation.resolvedIP!, {
method: 'GET',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${accessToken}`,
},
})
if (!response.ok) {
const errorData = (await response.json().catch(() => ({}))) as ZoomErrorResponse
logger.error(`[${requestId}] Zoom API error`, {
status: response.status,
error: errorData,
})
return NextResponse.json(
{ success: false, error: errorData.message || `Zoom API error: ${response.status}` },
{ status: 400 }
)
}
const data = (await response.json()) as ZoomRecordingsResponse
const files: Array<{
name: string
mimeType: string
data: string
size: number
}> = []
if (downloadFiles && Array.isArray(data.recording_files)) {
for (const file of data.recording_files) {
if (!file?.download_url) continue
try {
const fileUrlValidation = await validateUrlWithDNS(file.download_url, 'downloadUrl')
if (!fileUrlValidation.isValid) continue
const downloadResponse = await secureFetchWithPinnedIP(
file.download_url,
fileUrlValidation.resolvedIP!,
{
method: 'GET',
headers: { Authorization: `Bearer ${accessToken}` },
}
)
if (!downloadResponse.ok) continue
const contentType =
downloadResponse.headers.get('content-type') || 'application/octet-stream'
const arrayBuffer = await downloadResponse.arrayBuffer()
const buffer = Buffer.from(arrayBuffer)
const extension =
file.file_extension?.toString().toLowerCase() ||
getExtensionFromMimeType(contentType) ||
'dat'
const fileName = `zoom-recording-${file.id || file.recording_start || Date.now()}.${extension}`
files.push({
name: fileName,
mimeType: contentType,
data: buffer.toString('base64'),
size: buffer.length,
})
} catch (error) {
logger.warn(`[${requestId}] Failed to download recording file:`, error)
}
}
}
logger.info(`[${requestId}] Zoom recordings fetched successfully`, {
recordingCount: data.recording_files?.length || 0,
downloadedCount: files.length,
})
return NextResponse.json({
success: true,
output: {
recording: {
uuid: data.uuid,
id: data.id,
account_id: data.account_id,
host_id: data.host_id,
topic: data.topic,
type: data.type,
start_time: data.start_time,
duration: data.duration,
total_size: data.total_size,
recording_count: data.recording_count,
share_url: data.share_url,
recording_files: (data.recording_files || []).map((file: ZoomRecordingFile) => ({
id: file.id,
meeting_id: file.meeting_id,
recording_start: file.recording_start,
recording_end: file.recording_end,
file_type: file.file_type,
file_extension: file.file_extension,
file_size: file.file_size,
play_url: file.play_url,
download_url: file.download_url,
status: file.status,
recording_type: file.recording_type,
})),
},
files: files.length > 0 ? files : undefined,
},
})
} catch (error) {
logger.error(`[${requestId}] Error fetching Zoom recordings:`, error)
return NextResponse.json(
{
success: false,
error: error instanceof Error ? error.message : 'Unknown error occurred',
},
{ status: 500 }
)
}
}

View File

@@ -5,7 +5,6 @@ import { and, eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { z } from 'zod'
import { getSession } from '@/lib/auth'
import { hasActiveSubscription } from '@/lib/billing'
const logger = createLogger('SubscriptionTransferAPI')
@@ -89,14 +88,6 @@ export async function POST(request: NextRequest, { params }: { params: Promise<{
)
}
// Check if org already has an active subscription (prevent duplicates)
if (await hasActiveSubscription(organizationId)) {
return NextResponse.json(
{ error: 'Organization already has an active subscription' },
{ status: 409 }
)
}
await db
.update(subscription)
.set({ referenceId: organizationId })

View File

@@ -203,10 +203,6 @@ export const PATCH = withAdminAuthParams<RouteParams>(async (request, context) =
}
updateData.billingBlocked = body.billingBlocked
// Clear the reason when unblocking
if (body.billingBlocked === false) {
updateData.billingBlockedReason = null
}
updated.push('billingBlocked')
}

View File

@@ -1,4 +1,6 @@
import { db, workflow as workflowTable } from '@sim/db'
import { createLogger } from '@sim/logger'
import { eq } from 'drizzle-orm'
import { type NextRequest, NextResponse } from 'next/server'
import { v4 as uuidv4 } from 'uuid'
import { z } from 'zod'
@@ -6,7 +8,6 @@ import { checkHybridAuth } from '@/lib/auth/hybrid'
import { generateRequestId } from '@/lib/core/utils/request'
import { SSE_HEADERS } from '@/lib/core/utils/sse'
import { markExecutionCancelled } from '@/lib/execution/cancellation'
import { preprocessExecution } from '@/lib/execution/preprocessing'
import { LoggingSession } from '@/lib/logs/execution/logging-session'
import { executeWorkflowCore } from '@/lib/workflows/executor/execution-core'
import { createSSECallbacks } from '@/lib/workflows/executor/execution-events'
@@ -74,31 +75,12 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
const { startBlockId, sourceSnapshot, input } = validation.data
const executionId = uuidv4()
// Run preprocessing checks (billing, rate limits, usage limits)
const preprocessResult = await preprocessExecution({
workflowId,
userId,
triggerType: 'manual',
executionId,
requestId,
checkRateLimit: false, // Manual executions are not rate limited
checkDeployment: false, // Run-from-block doesn't require deployment
})
const [workflowRecord] = await db
.select({ workspaceId: workflowTable.workspaceId, userId: workflowTable.userId })
.from(workflowTable)
.where(eq(workflowTable.id, workflowId))
.limit(1)
if (!preprocessResult.success) {
const { error } = preprocessResult
logger.warn(`[${requestId}] Preprocessing failed for run-from-block`, {
workflowId,
error: error?.message,
statusCode: error?.statusCode,
})
return NextResponse.json(
{ error: error?.message || 'Execution blocked' },
{ status: error?.statusCode || 500 }
)
}
const workflowRecord = preprocessResult.workflowRecord
if (!workflowRecord?.workspaceId) {
return NextResponse.json({ error: 'Workflow not found or has no workspace' }, { status: 404 })
}
@@ -110,7 +92,6 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
workflowId,
startBlockId,
executedBlocksCount: sourceSnapshot.executedBlocks.length,
billingActorUserId: preprocessResult.actorUserId,
})
const loggingSession = new LoggingSession(workflowId, executionId, 'manual', requestId)

View File

@@ -567,7 +567,6 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
blockId: string,
blockName: string,
blockType: string,
executionOrder: number,
iterationContext?: IterationContext
) => {
logger.info(`[${requestId}] 🔷 onBlockStart called:`, { blockId, blockName, blockType })
@@ -580,7 +579,6 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
blockId,
blockName,
blockType,
executionOrder,
...(iterationContext && {
iterationCurrent: iterationContext.iterationCurrent,
iterationTotal: iterationContext.iterationTotal,
@@ -619,7 +617,6 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
error: callbackData.output.error,
durationMs: callbackData.executionTime || 0,
startedAt: callbackData.startedAt,
executionOrder: callbackData.executionOrder,
endedAt: callbackData.endedAt,
...(iterationContext && {
iterationCurrent: iterationContext.iterationCurrent,
@@ -647,7 +644,6 @@ export async function POST(req: NextRequest, { params }: { params: Promise<{ id:
output: callbackData.output,
durationMs: callbackData.executionTime || 0,
startedAt: callbackData.startedAt,
executionOrder: callbackData.executionOrder,
endedAt: callbackData.endedAt,
...(iterationContext && {
iterationCurrent: iterationContext.iterationCurrent,

View File

@@ -102,7 +102,7 @@ describe('Workspace Invitations API Route', () => {
inArray: vi.fn().mockImplementation((field, values) => ({ type: 'inArray', field, values })),
}))
vi.doMock('@/ee/access-control/utils/permission-check', () => ({
vi.doMock('@/executor/utils/permission-check', () => ({
validateInvitationsAllowed: vi.fn().mockResolvedValue(undefined),
InvitationsNotAllowedError: class InvitationsNotAllowedError extends Error {
constructor() {

View File

@@ -21,7 +21,7 @@ import { getFromEmailAddress } from '@/lib/messaging/email/utils'
import {
InvitationsNotAllowedError,
validateInvitationsAllowed,
} from '@/ee/access-control/utils/permission-check'
} from '@/executor/utils/permission-check'
export const dynamic = 'force-dynamic'
@@ -38,6 +38,7 @@ export async function GET(req: NextRequest) {
}
try {
// Get all workspaces where the user has permissions
const userWorkspaces = await db
.select({ id: workspace.id })
.from(workspace)
@@ -54,8 +55,10 @@ export async function GET(req: NextRequest) {
return NextResponse.json({ invitations: [] })
}
// Get all workspaceIds where the user is a member
const workspaceIds = userWorkspaces.map((w) => w.id)
// Find all invitations for those workspaces
const invitations = await db
.select()
.from(workspaceInvitation)

View File

@@ -14,11 +14,11 @@ import {
ChatMessageContainer,
EmailAuth,
PasswordAuth,
SSOAuth,
VoiceInterface,
} from '@/app/chat/components'
import { CHAT_ERROR_MESSAGES, CHAT_REQUEST_TIMEOUT_MS } from '@/app/chat/constants'
import { useAudioStreaming, useChatStreaming } from '@/app/chat/hooks'
import SSOAuth from '@/ee/sso/components/sso-auth'
const logger = createLogger('ChatClient')

View File

@@ -1,5 +1,6 @@
export { default as EmailAuth } from './auth/email/email-auth'
export { default as PasswordAuth } from './auth/password/password-auth'
export { default as SSOAuth } from './auth/sso/sso-auth'
export { ChatErrorState } from './error-state/error-state'
export { ChatHeader } from './header/header'
export { ChatInput } from './input/input'

View File

@@ -1,7 +1,7 @@
import { redirect } from 'next/navigation'
import { getSession } from '@/lib/auth'
import { verifyWorkspaceMembership } from '@/app/api/workflows/utils'
import { getUserPermissionConfig } from '@/ee/access-control/utils/permission-check'
import { getUserPermissionConfig } from '@/executor/utils/permission-check'
import { Knowledge } from './knowledge'
interface KnowledgePageProps {
@@ -23,6 +23,7 @@ export default async function KnowledgePage({ params }: KnowledgePageProps) {
redirect('/')
}
// Check permission group restrictions
const permissionConfig = await getUserPermissionConfig(session.user.id)
if (permissionConfig?.hideKnowledgeBaseTab) {
redirect(`/workspace/${workspaceId}`)

View File

@@ -104,12 +104,14 @@ function FileCard({ file, isExecutionFile = false, workspaceId }: FileCardProps)
}
return (
<div className='flex flex-col gap-[4px] rounded-[6px] bg-[var(--surface-1)] px-[8px] py-[6px]'>
<div className='flex min-w-0 items-center justify-between gap-[8px]'>
<span className='min-w-0 flex-1 truncate font-medium text-[12px] text-[var(--text-secondary)]'>
{file.name}
</span>
<span className='flex-shrink-0 font-medium text-[12px] text-[var(--text-tertiary)]'>
<div className='flex flex-col gap-[8px] rounded-[6px] bg-[var(--surface-1)] px-[10px] py-[8px]'>
<div className='flex items-center justify-between'>
<div className='flex items-center gap-[8px]'>
<span className='truncate font-medium text-[12px] text-[var(--text-secondary)]'>
{file.name}
</span>
</div>
<span className='font-medium text-[12px] text-[var(--text-tertiary)]'>
{formatFileSize(file.size)}
</span>
</div>
@@ -140,18 +142,20 @@ export function FileCards({ files, isExecutionFile = false, workspaceId }: FileC
}
return (
<div className='mt-[4px] flex flex-col gap-[6px] rounded-[6px] border border-[var(--border)] bg-[var(--surface-2)] px-[10px] py-[8px] dark:bg-transparent'>
<div className='flex w-full flex-col gap-[6px] rounded-[6px] bg-[var(--surface-2)] px-[10px] py-[8px]'>
<span className='font-medium text-[12px] text-[var(--text-tertiary)]'>
Files ({files.length})
</span>
{files.map((file, index) => (
<FileCard
key={file.id || `file-${index}`}
file={file}
isExecutionFile={isExecutionFile}
workspaceId={workspaceId}
/>
))}
<div className='flex flex-col gap-[8px]'>
{files.map((file, index) => (
<FileCard
key={file.id || `file-${index}`}
file={file}
isExecutionFile={isExecutionFile}
workspaceId={workspaceId}
/>
))}
</div>
</div>
)
}

View File

@@ -18,7 +18,6 @@ import {
import { ScrollArea } from '@/components/ui/scroll-area'
import { BASE_EXECUTION_CHARGE } from '@/lib/billing/constants'
import { cn } from '@/lib/core/utils/cn'
import { formatDuration } from '@/lib/core/utils/formatting'
import { filterHiddenOutputKeys } from '@/lib/logs/execution/trace-spans/trace-spans'
import {
ExecutionSnapshot,
@@ -454,7 +453,7 @@ export const LogDetails = memo(function LogDetails({
Duration
</span>
<span className='font-medium text-[13px] text-[var(--text-secondary)]'>
{formatDuration(log.duration, { precision: 2 }) || '—'}
{log.duration || '—'}
</span>
</div>

View File

@@ -6,11 +6,11 @@ import Link from 'next/link'
import { List, type RowComponentProps, useListRef } from 'react-window'
import { Badge, buttonVariants } from '@/components/emcn'
import { cn } from '@/lib/core/utils/cn'
import { formatDuration } from '@/lib/core/utils/formatting'
import {
DELETED_WORKFLOW_COLOR,
DELETED_WORKFLOW_LABEL,
formatDate,
formatDuration,
getDisplayStatus,
LOG_COLUMNS,
StatusBadge,
@@ -113,7 +113,7 @@ const LogRow = memo(
<div className={`${LOG_COLUMNS.duration.width} ${LOG_COLUMNS.duration.minWidth}`}>
<Badge variant='default' className='rounded-[6px] px-[9px] py-[2px] text-[12px]'>
{formatDuration(log.duration, { precision: 2 }) || '—'}
{formatDuration(log.duration) || '—'}
</Badge>
</div>
</div>

View File

@@ -1,7 +1,6 @@
import React from 'react'
import { format } from 'date-fns'
import { Badge } from '@/components/emcn'
import { formatDuration } from '@/lib/core/utils/formatting'
import { getIntegrationMetadata } from '@/lib/logs/get-trigger-options'
import { getBlock } from '@/blocks/registry'
import { CORE_TRIGGER_TYPES } from '@/stores/logs/filters/types'
@@ -363,14 +362,47 @@ export function mapToExecutionLogAlt(log: RawLogResponse): ExecutionLog {
}
}
/**
 * Format a duration string for display in the logs UI.
 * Durations under 1 second are shown as milliseconds (e.g., "500ms");
 * durations of 1 second or more are shown as seconds (e.g., "1.23s").
 * @param duration - Duration string (e.g., "500ms") or null
 * @returns Formatted duration string or null
 */
export function formatDuration(duration: string | null): string | null {
if (!duration) return null
// Extract numeric value from duration string (e.g., "500ms" -> 500)
const ms = Number.parseInt(duration.replace(/[^0-9]/g, ''), 10)
if (!Number.isFinite(ms)) return duration
if (ms < 1000) {
return `${ms}ms`
}
// Convert to seconds with up to 2 decimal places
const seconds = ms / 1000
return `${seconds.toFixed(2).replace(/\.?0+$/, '')}s`
}
/**
 * Format a latency value for display in the dashboard UI.
 * Latencies under 1 second are shown as milliseconds (e.g., "500ms");
 * latencies of 1 second or more are shown as seconds (e.g., "1.23s").
 * @param ms - Latency in milliseconds (number)
 * @returns Formatted latency string
 */
export function formatLatency(ms: number): string {
if (!Number.isFinite(ms) || ms <= 0) return '—'
return formatDuration(ms, { precision: 2 }) ?? '—'
if (ms < 1000) {
return `${Math.round(ms)}ms`
}
// Convert to seconds with up to 2 decimal places
const seconds = ms / 1000
return `${seconds.toFixed(2).replace(/\.?0+$/, '')}s`
}
export const formatDate = (dateString: string) => {

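A few worked examples for the helpers above (outputs derived from the implementations shown; illustrative only):

// formatDuration (string input, from the logs UI helper above)
formatDuration(null)       // null
formatDuration('500ms')    // '500ms'  (under 1 second stays in milliseconds)
formatDuration('1234ms')   // '1.23s'  (1234 / 1000 = 1.234, rounded to two decimals)
formatDuration('2000ms')   // '2s'     (trailing '.00' is stripped)
// formatLatency (numeric milliseconds, from the dashboard helper above)
formatLatency(0)           // '—'
formatLatency(750)         // '750ms'
formatLatency(2500)        // '2.5s'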
View File

@@ -6,7 +6,7 @@ import { getSession } from '@/lib/auth'
import { verifyWorkspaceMembership } from '@/app/api/workflows/utils'
import type { Template as WorkspaceTemplate } from '@/app/workspace/[workspaceId]/templates/templates'
import Templates from '@/app/workspace/[workspaceId]/templates/templates'
import { getUserPermissionConfig } from '@/ee/access-control/utils/permission-check'
import { getUserPermissionConfig } from '@/executor/utils/permission-check'
interface TemplatesPageProps {
params: Promise<{

View File

@@ -1,5 +1,5 @@
import { memo, useCallback } from 'react'
import { ArrowLeftRight, ArrowUpDown, Circle, CircleOff, Lock, LogOut, Unlock } from 'lucide-react'
import { ArrowLeftRight, ArrowUpDown, Circle, CircleOff, LogOut } from 'lucide-react'
import { Button, Copy, PlayOutline, Tooltip, Trash2 } from '@/components/emcn'
import { cn } from '@/lib/core/utils/cn'
import { isInputDefinitionTrigger } from '@/lib/workflows/triggers/input-definition-triggers'
@@ -49,7 +49,6 @@ export const ActionBar = memo(
collaborativeBatchRemoveBlocks,
collaborativeBatchToggleBlockEnabled,
collaborativeBatchToggleBlockHandles,
collaborativeBatchToggleLocked,
} = useCollaborativeWorkflow()
const { setPendingSelection } = useWorkflowRegistry()
const { handleRunFromBlock } = useWorkflowExecution()
@@ -85,28 +84,16 @@ export const ActionBar = memo(
)
}, [blockId, addNotification, collaborativeBatchAddBlocks, setPendingSelection])
const {
isEnabled,
horizontalHandles,
parentId,
parentType,
isLocked,
isParentLocked,
isParentDisabled,
} = useWorkflowStore(
const { isEnabled, horizontalHandles, parentId, parentType } = useWorkflowStore(
useCallback(
(state) => {
const block = state.blocks[blockId]
const parentId = block?.data?.parentId
const parentBlock = parentId ? state.blocks[parentId] : undefined
return {
isEnabled: block?.enabled ?? true,
horizontalHandles: block?.horizontalHandles ?? false,
parentId,
parentType: parentBlock?.type,
isLocked: block?.locked ?? false,
isParentLocked: parentBlock?.locked ?? false,
isParentDisabled: parentBlock ? !parentBlock.enabled : false,
parentType: parentId ? state.blocks[parentId]?.type : undefined,
}
},
[blockId]
@@ -174,27 +161,25 @@ export const ActionBar = memo(
{!isNoteBlock && !isInsideSubflow && (
<Tooltip.Root>
<Tooltip.Trigger asChild>
<span className='inline-flex'>
<Button
variant='ghost'
onClick={(e) => {
e.stopPropagation()
if (canRunFromBlock && !disabled) {
handleRunFromBlockClick()
}
}}
className={ACTION_BUTTON_STYLES}
disabled={disabled || !canRunFromBlock}
>
<PlayOutline className={ICON_SIZE} />
</Button>
</span>
<Button
variant='ghost'
onClick={(e) => {
e.stopPropagation()
if (canRunFromBlock && !disabled) {
handleRunFromBlockClick()
}
}}
className={ACTION_BUTTON_STYLES}
disabled={disabled || !canRunFromBlock}
>
<PlayOutline className={ICON_SIZE} />
</Button>
</Tooltip.Trigger>
<Tooltip.Content side='top'>
{(() => {
if (disabled) return getTooltipMessage('Run from block')
if (isExecuting) return 'Execution in progress'
if (!dependenciesSatisfied) return 'Run previous blocks first'
if (!dependenciesSatisfied) return 'Run upstream blocks first'
return 'Run from block'
})()}
</Tooltip.Content>
@@ -208,54 +193,18 @@ export const ActionBar = memo(
variant='ghost'
onClick={(e) => {
e.stopPropagation()
// Can't enable if parent is disabled (must enable parent first)
const cantEnable = !isEnabled && isParentDisabled
if (!disabled && !isLocked && !isParentLocked && !cantEnable) {
if (!disabled) {
collaborativeBatchToggleBlockEnabled([blockId])
}
}}
className={ACTION_BUTTON_STYLES}
disabled={
disabled || isLocked || isParentLocked || (!isEnabled && isParentDisabled)
}
disabled={disabled}
>
{isEnabled ? <Circle className={ICON_SIZE} /> : <CircleOff className={ICON_SIZE} />}
</Button>
</Tooltip.Trigger>
<Tooltip.Content side='top'>
{isLocked || isParentLocked
? 'Block is locked'
: !isEnabled && isParentDisabled
? 'Parent container is disabled'
: getTooltipMessage(isEnabled ? 'Disable Block' : 'Enable Block')}
</Tooltip.Content>
</Tooltip.Root>
)}
{userPermissions.canAdmin && (
<Tooltip.Root>
<Tooltip.Trigger asChild>
<Button
variant='ghost'
onClick={(e) => {
e.stopPropagation()
// Can't unlock a block if its parent container is locked
if (!disabled && !(isLocked && isParentLocked)) {
collaborativeBatchToggleLocked([blockId])
}
}}
className={ACTION_BUTTON_STYLES}
disabled={disabled || (isLocked && isParentLocked)}
>
{isLocked ? <Unlock className={ICON_SIZE} /> : <Lock className={ICON_SIZE} />}
</Button>
</Tooltip.Trigger>
<Tooltip.Content side='top'>
{isLocked && isParentLocked
? 'Parent container is locked'
: isLocked
? 'Unlock Block'
: 'Lock Block'}
{getTooltipMessage(isEnabled ? 'Disable Block' : 'Enable Block')}
</Tooltip.Content>
</Tooltip.Root>
)}
@@ -267,21 +216,17 @@ export const ActionBar = memo(
variant='ghost'
onClick={(e) => {
e.stopPropagation()
if (!disabled && !isLocked && !isParentLocked) {
if (!disabled) {
handleDuplicateBlock()
}
}}
className={ACTION_BUTTON_STYLES}
disabled={disabled || isLocked || isParentLocked}
disabled={disabled}
>
<Copy className={ICON_SIZE} />
</Button>
</Tooltip.Trigger>
<Tooltip.Content side='top'>
{isLocked || isParentLocked
? 'Block is locked'
: getTooltipMessage('Duplicate Block')}
</Tooltip.Content>
<Tooltip.Content side='top'>{getTooltipMessage('Duplicate Block')}</Tooltip.Content>
</Tooltip.Root>
)}
@@ -292,12 +237,12 @@ export const ActionBar = memo(
variant='ghost'
onClick={(e) => {
e.stopPropagation()
if (!disabled && !isLocked && !isParentLocked) {
if (!disabled) {
collaborativeBatchToggleBlockHandles([blockId])
}
}}
className={ACTION_BUTTON_STYLES}
disabled={disabled || isLocked || isParentLocked}
disabled={disabled}
>
{horizontalHandles ? (
<ArrowLeftRight className={ICON_SIZE} />
@@ -307,9 +252,7 @@ export const ActionBar = memo(
</Button>
</Tooltip.Trigger>
<Tooltip.Content side='top'>
{isLocked || isParentLocked
? 'Block is locked'
: getTooltipMessage(horizontalHandles ? 'Vertical Ports' : 'Horizontal Ports')}
{getTooltipMessage(horizontalHandles ? 'Vertical Ports' : 'Horizontal Ports')}
</Tooltip.Content>
</Tooltip.Root>
)}
@@ -321,23 +264,19 @@ export const ActionBar = memo(
variant='ghost'
onClick={(e) => {
e.stopPropagation()
if (!disabled && userPermissions.canEdit && !isLocked && !isParentLocked) {
if (!disabled && userPermissions.canEdit) {
window.dispatchEvent(
new CustomEvent('remove-from-subflow', { detail: { blockIds: [blockId] } })
)
}
}}
className={ACTION_BUTTON_STYLES}
disabled={disabled || !userPermissions.canEdit || isLocked || isParentLocked}
disabled={disabled || !userPermissions.canEdit}
>
<LogOut className={ICON_SIZE} />
</Button>
</Tooltip.Trigger>
<Tooltip.Content side='top'>
{isLocked || isParentLocked
? 'Block is locked'
: getTooltipMessage('Remove from Subflow')}
</Tooltip.Content>
<Tooltip.Content side='top'>{getTooltipMessage('Remove from Subflow')}</Tooltip.Content>
</Tooltip.Root>
)}
@@ -347,19 +286,17 @@ export const ActionBar = memo(
variant='ghost'
onClick={(e) => {
e.stopPropagation()
if (!disabled && !isLocked && !isParentLocked) {
if (!disabled) {
collaborativeBatchRemoveBlocks([blockId])
}
}}
className={ACTION_BUTTON_STYLES}
disabled={disabled || isLocked || isParentLocked}
disabled={disabled}
>
<Trash2 className={ICON_SIZE} />
</Button>
</Tooltip.Trigger>
<Tooltip.Content side='top'>
{isLocked || isParentLocked ? 'Block is locked' : getTooltipMessage('Delete Block')}
</Tooltip.Content>
<Tooltip.Content side='top'>{getTooltipMessage('Delete Block')}</Tooltip.Content>
</Tooltip.Root>
</div>
)

View File

@@ -20,9 +20,6 @@ export interface BlockInfo {
horizontalHandles: boolean
parentId?: string
parentType?: string
locked?: boolean
isParentLocked?: boolean
isParentDisabled?: boolean
}
/**
@@ -49,17 +46,10 @@ export interface BlockMenuProps {
showRemoveFromSubflow?: boolean
/** Whether run from block is available (has snapshot, was executed, not inside subflow) */
canRunFromBlock?: boolean
/** Whether to disable edit actions (user can't edit OR blocks are locked) */
disableEdit?: boolean
/** Whether the user has edit permission (ignoring locked state) */
userCanEdit?: boolean
isExecuting?: boolean
/** Whether the selected block is a trigger (has no incoming edges) */
isPositionalTrigger?: boolean
/** Callback to toggle locked state of selected blocks */
onToggleLocked?: () => void
/** Whether the user has admin permissions */
canAdmin?: boolean
}
/**
@@ -88,22 +78,13 @@ export function BlockMenu({
showRemoveFromSubflow = false,
canRunFromBlock = false,
disableEdit = false,
userCanEdit = true,
isExecuting = false,
isPositionalTrigger = false,
onToggleLocked,
canAdmin = false,
}: BlockMenuProps) {
const isSingleBlock = selectedBlocks.length === 1
const allEnabled = selectedBlocks.every((b) => b.enabled)
const allDisabled = selectedBlocks.every((b) => !b.enabled)
const allLocked = selectedBlocks.every((b) => b.locked)
const allUnlocked = selectedBlocks.every((b) => !b.locked)
// Can't unlock blocks that have locked parents
const hasBlockWithLockedParent = selectedBlocks.some((b) => b.locked && b.isParentLocked)
// Can't enable blocks that have disabled parents
const hasBlockWithDisabledParent = selectedBlocks.some((b) => !b.enabled && b.isParentDisabled)
const hasSingletonBlock = selectedBlocks.some(
(b) =>
@@ -127,12 +108,6 @@ export function BlockMenu({
return 'Toggle Enabled'
}
const getToggleLockedLabel = () => {
if (allLocked) return 'Unlock'
if (allUnlocked) return 'Lock'
return 'Toggle Lock'
}
return (
<Popover
open={isOpen}
@@ -164,7 +139,7 @@ export function BlockMenu({
</PopoverItem>
<PopoverItem
className='group'
disabled={!userCanEdit || !hasClipboard}
disabled={disableEdit || !hasClipboard}
onClick={() => {
onPaste()
onClose()
@@ -189,15 +164,13 @@ export function BlockMenu({
{!allNoteBlocks && <PopoverDivider />}
{!allNoteBlocks && (
<PopoverItem
disabled={disableEdit || hasBlockWithDisabledParent}
disabled={disableEdit}
onClick={() => {
if (!disableEdit && !hasBlockWithDisabledParent) {
onToggleEnabled()
onClose()
}
onToggleEnabled()
onClose()
}}
>
{hasBlockWithDisabledParent ? 'Parent is disabled' : getToggleEnabledLabel()}
{getToggleEnabledLabel()}
</PopoverItem>
)}
{!allNoteBlocks && !isSubflow && (
@@ -222,19 +195,6 @@ export function BlockMenu({
Remove from Subflow
</PopoverItem>
)}
{canAdmin && onToggleLocked && (
<PopoverItem
disabled={hasBlockWithLockedParent}
onClick={() => {
if (!hasBlockWithLockedParent) {
onToggleLocked()
onClose()
}
}}
>
{hasBlockWithLockedParent ? 'Parent is locked' : getToggleLockedLabel()}
</PopoverItem>
)}
{/* Single block actions */}
{isSingleBlock && <PopoverDivider />}

View File

@@ -34,8 +34,6 @@ export interface CanvasMenuProps {
canUndo?: boolean
canRedo?: boolean
isInvitationsDisabled?: boolean
/** Whether the workflow has locked blocks (disables auto-layout) */
hasLockedBlocks?: boolean
}
/**
@@ -62,7 +60,6 @@ export function CanvasMenu({
disableEdit = false,
canUndo = false,
canRedo = false,
hasLockedBlocks = false,
}: CanvasMenuProps) {
return (
<Popover
@@ -132,12 +129,11 @@ export function CanvasMenu({
</PopoverItem>
<PopoverItem
className='group'
disabled={disableEdit || hasLockedBlocks}
disabled={disableEdit}
onClick={() => {
onAutoLayout()
onClose()
}}
title={hasLockedBlocks ? 'Unlock blocks to use auto-layout' : undefined}
>
<span>Auto-layout</span>
<span className='ml-auto opacity-70 group-hover:opacity-100'>L</span>

View File

@@ -807,7 +807,7 @@ export function Chat() {
const newReservedFields: StartInputFormatField[] = missingStartReservedFields.map(
(fieldName) => {
const defaultType = fieldName === 'files' ? 'file[]' : 'string'
const defaultType = fieldName === 'files' ? 'files' : 'string'
return {
id: crypto.randomUUID(),

View File

@@ -1,7 +1,6 @@
import { memo, useCallback, useMemo } from 'react'
import ReactMarkdown from 'react-markdown'
import type { NodeProps } from 'reactflow'
import remarkBreaks from 'remark-breaks'
import remarkGfm from 'remark-gfm'
import { cn } from '@/lib/core/utils/cn'
import { BLOCK_DIMENSIONS } from '@/lib/workflows/blocks/block-dimensions'
@@ -306,7 +305,7 @@ function getEmbedInfo(url: string): EmbedInfo | null {
const NoteMarkdown = memo(function NoteMarkdown({ content }: { content: string }) {
return (
<ReactMarkdown
remarkPlugins={[remarkGfm, remarkBreaks]}
remarkPlugins={[remarkGfm]}
components={{
p: ({ children }: any) => (
<p className='mb-1 break-words text-[var(--text-primary)] text-sm leading-[1.25rem] last:mb-0'>

View File

@@ -3,7 +3,6 @@
import { memo, useEffect, useMemo, useRef, useState } from 'react'
import clsx from 'clsx'
import { ChevronUp } from 'lucide-react'
import { formatDuration } from '@/lib/core/utils/formatting'
import { CopilotMarkdownRenderer } from '../markdown-renderer'
/** Removes thinking tags (raw or escaped) and special tags from streamed content */
@@ -242,11 +241,15 @@ export function ThinkingBlock({
return () => window.clearInterval(intervalId)
}, [isStreaming, isExpanded, userHasScrolledAway])
/** Formats duration in milliseconds to seconds (minimum 1s) */
const formatDuration = (ms: number) => {
const seconds = Math.max(1, Math.round(ms / 1000))
return `${seconds}s`
}
const hasContent = cleanContent.length > 0
const isThinkingDone = !isStreaming || hasFollowingContent || hasSpecialTags
// Round to nearest second (minimum 1s) to match original behavior
const roundedMs = Math.max(1000, Math.round(duration / 1000) * 1000)
const durationText = `${label} for ${formatDuration(roundedMs)}`
const durationText = `${label} for ${formatDuration(duration)}`
const getStreamingLabel = (lbl: string) => {
if (lbl === 'Thought') return 'Thinking'

View File

@@ -15,7 +15,6 @@ import {
hasInterrupt as hasInterruptFromConfig,
isSpecialTool as isSpecialToolFromConfig,
} from '@/lib/copilot/tools/client/ui-config'
import { formatDuration } from '@/lib/core/utils/formatting'
import { CopilotMarkdownRenderer } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/components/markdown-renderer'
import { SmoothStreamingText } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/components/smooth-streaming'
import { ThinkingBlock } from '@/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/copilot/components/copilot-message/components/thinking-block'
@@ -849,10 +848,13 @@ const SubagentContentRenderer = memo(function SubagentContentRenderer({
(allParsed.options && Object.keys(allParsed.options).length > 0)
)
const formatDuration = (ms: number) => {
const seconds = Math.max(1, Math.round(ms / 1000))
return `${seconds}s`
}
const outerLabel = getSubagentCompletionLabel(toolCall.name)
// Round to nearest second (minimum 1s) to match original behavior
const roundedMs = Math.max(1000, Math.round(duration / 1000) * 1000)
const durationText = `${outerLabel} for ${formatDuration(roundedMs)}`
const durationText = `${outerLabel} for ${formatDuration(duration)}`
const renderCollapsibleContent = () => (
<>

View File

@@ -179,7 +179,7 @@ export function A2aDeploy({
newFields.push({
id: crypto.randomUUID(),
name: 'files',
type: 'file[]',
type: 'files',
value: '',
collapsed: false,
})

View File

@@ -45,7 +45,7 @@ export function CredentialSelector({
previewValue,
}: CredentialSelectorProps) {
const [showOAuthModal, setShowOAuthModal] = useState(false)
const [editingValue, setEditingValue] = useState('')
const [inputValue, setInputValue] = useState('')
const [isEditing, setIsEditing] = useState(false)
const { activeWorkflowId } = useWorkflowRegistry()
const [storeValue, setStoreValue] = useSubBlockValue<string | null>(blockId, subBlock.id)
@@ -128,7 +128,11 @@ export function CredentialSelector({
return ''
}, [selectedCredentialSet, isForeignCredentialSet, selectedCredential, isForeign])
const displayValue = isEditing ? editingValue : resolvedLabel
useEffect(() => {
if (!isEditing) {
setInputValue(resolvedLabel)
}
}, [resolvedLabel, isEditing])
const invalidSelection =
!isPreview &&
@@ -291,7 +295,7 @@ export function CredentialSelector({
const selectedCredentialProvider = selectedCredential?.provider ?? provider
const overlayContent = useMemo(() => {
if (!displayValue) return null
if (!inputValue) return null
if (isCredentialSetSelected && selectedCredentialSet) {
return (
@@ -299,7 +303,7 @@ export function CredentialSelector({
<div className='mr-2 flex-shrink-0 opacity-90'>
<Users className='h-3 w-3' />
</div>
<span className='truncate'>{displayValue}</span>
<span className='truncate'>{inputValue}</span>
</div>
)
}
@@ -309,12 +313,12 @@ export function CredentialSelector({
<div className='mr-2 flex-shrink-0 opacity-90'>
{getProviderIcon(selectedCredentialProvider)}
</div>
<span className='truncate'>{displayValue}</span>
<span className='truncate'>{inputValue}</span>
</div>
)
}, [
getProviderIcon,
displayValue,
inputValue,
selectedCredentialProvider,
isCredentialSetSelected,
selectedCredentialSet,
@@ -331,6 +335,7 @@ export function CredentialSelector({
const credentialSetId = value.slice(CREDENTIAL_SET.PREFIX.length)
const matchedSet = credentialSets.find((cs) => cs.id === credentialSetId)
if (matchedSet) {
setInputValue(matchedSet.name)
handleCredentialSetSelect(credentialSetId)
return
}
@@ -338,12 +343,13 @@ export function CredentialSelector({
const matchedCred = credentials.find((c) => c.id === value)
if (matchedCred) {
setInputValue(matchedCred.name)
handleSelect(value)
return
}
setIsEditing(true)
setEditingValue(value)
setInputValue(value)
},
[credentials, credentialSets, handleAddCredential, handleSelect, handleCredentialSetSelect]
)
@@ -353,7 +359,7 @@ export function CredentialSelector({
<Combobox
options={comboboxOptions}
groups={comboboxGroups}
value={displayValue}
value={inputValue}
selectedValue={rawSelectedId}
onChange={handleComboboxChange}
onOpenChange={handleOpenChange}

View File

@@ -368,7 +368,6 @@ export function FileUpload({
const uploadedFile: UploadedFile = {
name: selectedFile.name,
path: selectedFile.path,
key: selectedFile.key,
size: selectedFile.size,
type: selectedFile.type,
}

View File

@@ -26,7 +26,7 @@ import { useAccessibleReferencePrefixes } from '@/app/workspace/[workspaceId]/w/
interface Field {
id: string
name: string
type?: 'string' | 'number' | 'boolean' | 'object' | 'array' | 'file[]'
type?: 'string' | 'number' | 'boolean' | 'object' | 'array' | 'files'
value?: string
description?: string
collapsed?: boolean
@@ -57,7 +57,7 @@ const TYPE_OPTIONS: ComboboxOption[] = [
{ label: 'Boolean', value: 'boolean' },
{ label: 'Object', value: 'object' },
{ label: 'Array', value: 'array' },
{ label: 'Files', value: 'file[]' },
{ label: 'Files', value: 'files' },
]
/**
@@ -448,7 +448,7 @@ export function FieldFormat({
)
}
if (field.type === 'file[]') {
if (field.type === 'files') {
const lineCount = fieldValue.split('\n').length
const gutterWidth = calculateGutterWidth(lineCount)

View File

@@ -225,7 +225,7 @@ const getOutputTypeForPath = (
const chatModeTypes: Record<string, string> = {
input: 'string',
conversationId: 'string',
files: 'file[]',
files: 'files',
}
return chatModeTypes[outputPath] || 'any'
}
@@ -908,10 +908,8 @@ const PopoverContextCapture: React.FC<{
* When in nested folders, goes back one level at a time.
* At the root folder level, closes the folder.
*/
const TagDropdownBackButton: React.FC<{ setSelectedIndex: (index: number) => void }> = ({
setSelectedIndex,
}) => {
const { isInFolder, closeFolder, size, isKeyboardNav, setKeyboardNav } = usePopoverContext()
const TagDropdownBackButton: React.FC = () => {
const { isInFolder, closeFolder, colorScheme, size } = usePopoverContext()
const nestedNav = useNestedNavigation()
if (!isInFolder) return null
@@ -924,31 +922,28 @@ const TagDropdownBackButton: React.FC<{ setSelectedIndex: (index: number) => voi
closeFolder()
}
const handleMouseEnter = () => {
if (isKeyboardNav) return
setKeyboardNav(false)
setSelectedIndex(-1)
}
return (
<PopoverItem
onMouseDown={(e) => {
e.preventDefault()
e.stopPropagation()
handleBackClick(e)
}}
onMouseEnter={handleMouseEnter}
<div
className={cn(
'flex min-w-0 cursor-pointer items-center gap-[8px] rounded-[6px] px-[6px] font-base',
size === 'sm' ? 'h-[22px] text-[11px]' : 'h-[26px] text-[13px]',
colorScheme === 'inverted'
? 'text-white hover:bg-[#363636] hover:text-white dark:text-[var(--text-primary)] dark:hover:bg-[var(--surface-5)]'
: 'text-[var(--text-primary)] hover:bg-[var(--border-1)]'
)}
role='button'
onClick={handleBackClick}
>
<svg
className={cn('shrink-0', size === 'sm' ? 'h-3 w-3' : 'h-3.5 w-3.5')}
className={size === 'sm' ? 'h-3 w-3' : 'h-3.5 w-3.5'}
fill='none'
viewBox='0 0 24 24'
stroke='currentColor'
>
<path strokeLinecap='round' strokeLinejoin='round' strokeWidth={2} d='M15 19l-7-7 7-7' />
</svg>
<span className='shrink-0'>Back</span>
</PopoverItem>
<span>Back</span>
</div>
)
}
@@ -1568,11 +1563,16 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
blockTagGroups.sort((a, b) => a.distance - b.distance)
finalBlockTagGroups.push(...blockTagGroups)
const groupTags = finalBlockTagGroups.flatMap((group) => group.tags)
const tags = [...groupTags, ...variableTags]
const contextualTags: string[] = []
if (loopBlockGroup) {
contextualTags.push(...loopBlockGroup.tags)
}
if (parallelBlockGroup) {
contextualTags.push(...parallelBlockGroup.tags)
}
return {
tags,
tags: [...allBlockTags, ...variableTags, ...contextualTags],
variableInfoMap,
blockTagGroups: finalBlockTagGroups,
}
@@ -1746,7 +1746,7 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
mergedSubBlocks
)
if (fieldType === 'file' || fieldType === 'file[]' || fieldType === 'array') {
if (fieldType === 'files' || fieldType === 'file[]' || fieldType === 'array') {
const blockName = parts[0]
const remainingPath = parts.slice(2).join('.')
processedTag = `${blockName}.${arrayFieldName}[0].${remainingPath}`
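As a hypothetical walk-through of the rewrite above, a tag whose field resolves to a files / file[] / array type is rewritten to index the first element (names below are assumptions; in the real code arrayFieldName comes from the block's subblock config):

// Hypothetical example of the array-field rewrite above.
const parts = 'start.files.name'.split('.')       // ['start', 'files', 'name']
const blockName = parts[0]                        // 'start'
const arrayFieldName = 'files'                    // assumed; resolved from subblock config in the real code
const remainingPath = parts.slice(2).join('.')    // 'name'
const processedTag = `${blockName}.${arrayFieldName}[0].${remainingPath}`  // 'start.files[0].name'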
@@ -1961,8 +1961,8 @@ export const TagDropdown: React.FC<TagDropdownProps> = ({
onOpenAutoFocus={(e) => e.preventDefault()}
onCloseAutoFocus={(e) => e.preventDefault()}
>
<TagDropdownBackButton />
<PopoverScrollArea ref={scrollAreaRef}>
<TagDropdownBackButton setSelectedIndex={setSelectedIndex} />
{flatTagList.length === 0 ? (
<div className='px-[6px] py-[8px] text-[12px] text-[var(--white)]/60'>
No matching tags found

View File

@@ -9,9 +9,7 @@ import {
ChevronUp,
ExternalLink,
Loader2,
Lock,
Pencil,
Unlock,
} from 'lucide-react'
import { useParams } from 'next/navigation'
import { useShallow } from 'zustand/react/shallow'
@@ -48,17 +46,10 @@ import { useCollaborativeWorkflow } from '@/hooks/use-collaborative-workflow'
import { usePanelEditorStore } from '@/stores/panel'
import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
import { useSubBlockStore } from '@/stores/workflows/subblock/store'
import { useWorkflowStore } from '@/stores/workflows/workflow/store'
/** Stable empty object to avoid creating new references */
const EMPTY_SUBBLOCK_VALUES = {} as Record<string, any>
/** Shared style for dashed divider lines */
const DASHED_DIVIDER_STYLE = {
backgroundImage:
'repeating-linear-gradient(to right, var(--border) 0px, var(--border) 6px, transparent 6px, transparent 12px)',
} as const
/**
* Icon component for rendering block icons.
*
@@ -78,45 +69,51 @@ const IconComponent = ({ icon: Icon, className }: { icon: any; className?: strin
* @returns Editor panel content
*/
export function Editor() {
const { currentBlockId, connectionsHeight, toggleConnectionsCollapsed, registerRenameCallback } =
usePanelEditorStore(
useShallow((state) => ({
currentBlockId: state.currentBlockId,
connectionsHeight: state.connectionsHeight,
toggleConnectionsCollapsed: state.toggleConnectionsCollapsed,
registerRenameCallback: state.registerRenameCallback,
}))
)
const {
currentBlockId,
connectionsHeight,
toggleConnectionsCollapsed,
shouldFocusRename,
setShouldFocusRename,
} = usePanelEditorStore(
useShallow((state) => ({
currentBlockId: state.currentBlockId,
connectionsHeight: state.connectionsHeight,
toggleConnectionsCollapsed: state.toggleConnectionsCollapsed,
shouldFocusRename: state.shouldFocusRename,
setShouldFocusRename: state.setShouldFocusRename,
}))
)
const currentWorkflow = useCurrentWorkflow()
const currentBlock = currentBlockId ? currentWorkflow.getBlockById(currentBlockId) : null
const blockConfig = currentBlock ? getBlock(currentBlock.type) : null
const title = currentBlock?.name || 'Editor'
// Check if selected block is a subflow (loop or parallel)
const isSubflow =
currentBlock && (currentBlock.type === 'loop' || currentBlock.type === 'parallel')
// Get subflow display properties from configs
const subflowConfig = isSubflow ? (currentBlock.type === 'loop' ? LoopTool : ParallelTool) : null
// Check if selected block is a workflow block
const isWorkflowBlock =
currentBlock && (currentBlock.type === 'workflow' || currentBlock.type === 'workflow_input')
// Get workspace ID from params
const params = useParams()
const workspaceId = params.workspaceId as string
// Refs for resize functionality
const subBlocksRef = useRef<HTMLDivElement>(null)
// Get user permissions
const userPermissions = useUserPermissionsContext()
// Check if block is locked (or inside a locked container) and compute edit permission
// Locked blocks cannot be edited by anyone (admins can only lock/unlock)
const blocks = useWorkflowStore((state) => state.blocks)
const parentId = currentBlock?.data?.parentId as string | undefined
const isParentLocked = parentId ? (blocks[parentId]?.locked ?? false) : false
const isLocked = (currentBlock?.locked ?? false) || isParentLocked
const canEditBlock = userPermissions.canEdit && !isLocked
// Get active workflow ID
const activeWorkflowId = useWorkflowRegistry((state) => state.activeWorkflowId)
// Get block properties (advanced/trigger modes)
const { advancedMode, triggerMode } = useEditorBlockProperties(
currentBlockId,
currentWorkflow.isSnapshotView
@@ -148,17 +145,22 @@ export function Editor() {
[subBlocksForCanonical]
)
const canonicalModeOverrides = currentBlock?.data?.canonicalModes
const advancedValuesPresent = useMemo(
() => hasAdvancedValues(subBlocksForCanonical, blockSubBlockValues, canonicalIndex),
[subBlocksForCanonical, blockSubBlockValues, canonicalIndex]
const advancedValuesPresent = hasAdvancedValues(
subBlocksForCanonical,
blockSubBlockValues,
canonicalIndex
)
const displayAdvancedOptions = canEditBlock ? advancedMode : advancedMode || advancedValuesPresent
const displayAdvancedOptions = userPermissions.canEdit
? advancedMode
: advancedMode || advancedValuesPresent
const hasAdvancedOnlyFields = useMemo(() => {
for (const subBlock of subBlocksForCanonical) {
// Must be standalone advanced (mode: 'advanced' without canonicalParamId)
if (subBlock.mode !== 'advanced') continue
if (canonicalIndex.canonicalIdBySubBlockId[subBlock.id]) continue
// Check condition - skip if condition not met for current values
if (
subBlock.condition &&
!evaluateSubBlockCondition(subBlock.condition, blockSubBlockValues)
@@ -171,6 +173,7 @@ export function Editor() {
return false
}, [subBlocksForCanonical, canonicalIndex.canonicalIdBySubBlockId, blockSubBlockValues])
// Get subblock layout using custom hook
const { subBlocks, stateToUse: subBlockState } = useEditorSubblockLayout(
blockConfig || ({} as any),
currentBlockId || '',
@@ -203,105 +206,95 @@ export function Editor() {
return { regularSubBlocks: regular, advancedOnlySubBlocks: advancedOnly }
}, [subBlocks, canonicalIndex.canonicalIdBySubBlockId])
// Get block connections
const { incomingConnections, hasIncomingConnections } = useBlockConnections(currentBlockId || '')
// Connections resize hook
const { handleMouseDown: handleConnectionsResizeMouseDown, isResizing } = useConnectionsResize({
subBlocksRef,
})
// Collaborative actions
const {
collaborativeSetBlockCanonicalMode,
collaborativeUpdateBlockName,
collaborativeToggleBlockAdvancedMode,
collaborativeBatchToggleLocked,
} = useCollaborativeWorkflow()
// Advanced mode toggle handler
const handleToggleAdvancedMode = useCallback(() => {
if (!currentBlockId || !canEditBlock) return
if (!currentBlockId || !userPermissions.canEdit) return
collaborativeToggleBlockAdvancedMode(currentBlockId)
}, [currentBlockId, canEditBlock, collaborativeToggleBlockAdvancedMode])
}, [currentBlockId, userPermissions.canEdit, collaborativeToggleBlockAdvancedMode])
// Rename state
const [isRenaming, setIsRenaming] = useState(false)
const [editedName, setEditedName] = useState('')
const renamingBlockIdRef = useRef<string | null>(null)
const nameInputRef = useRef<HTMLInputElement>(null)
/**
* Ref callback that auto-selects the input text when mounted.
*/
const nameInputRefCallback = useCallback((element: HTMLInputElement | null) => {
if (element) {
element.select()
}
}, [])
/**
* Starts the rename process for the current block.
* Reads from stores directly to avoid stale closures when called via registered callback.
* Captures the block ID in a ref to ensure the correct block is renamed even if selection changes.
* Handles starting the rename process.
*/
const handleStartRename = useCallback(() => {
const blockId = usePanelEditorStore.getState().currentBlockId
if (!blockId) return
const blocks = useWorkflowStore.getState().blocks
const block = blocks[blockId]
if (!block) return
const parentId = block.data?.parentId as string | undefined
const isParentLocked = parentId ? (blocks[parentId]?.locked ?? false) : false
const isLocked = (block.locked ?? false) || isParentLocked
if (!userPermissions.canEdit || isLocked) return
renamingBlockIdRef.current = blockId
setEditedName(block.name || '')
if (!userPermissions.canEdit || !currentBlock) return
setEditedName(currentBlock.name || '')
setIsRenaming(true)
}, [userPermissions.canEdit])
}, [userPermissions.canEdit, currentBlock])
/**
* Saves the renamed block using the captured block ID from when rename started.
* Handles saving the renamed block.
*/
const handleSaveRename = useCallback(() => {
const blockIdToRename = renamingBlockIdRef.current
if (!blockIdToRename || !isRenaming) return
const blocks = useWorkflowStore.getState().blocks
const blockToRename = blocks[blockIdToRename]
if (!currentBlockId || !isRenaming) return
const trimmedName = editedName.trim()
if (trimmedName && blockToRename && trimmedName !== blockToRename.name) {
const result = collaborativeUpdateBlockName(blockIdToRename, trimmedName)
if (trimmedName && trimmedName !== currentBlock?.name) {
const result = collaborativeUpdateBlockName(currentBlockId, trimmedName)
if (!result.success) {
// Keep rename mode open on error so user can correct the name
return
}
}
renamingBlockIdRef.current = null
setIsRenaming(false)
}, [isRenaming, editedName, collaborativeUpdateBlockName])
}, [currentBlockId, isRenaming, editedName, currentBlock?.name, collaborativeUpdateBlockName])
/**
* Handles canceling the rename process.
*/
const handleCancelRename = useCallback(() => {
renamingBlockIdRef.current = null
setIsRenaming(false)
setEditedName('')
}, [])
// Focus input when entering rename mode
useEffect(() => {
registerRenameCallback(handleStartRename)
return () => registerRenameCallback(null)
}, [registerRenameCallback, handleStartRename])
if (isRenaming && nameInputRef.current) {
nameInputRef.current.select()
}
}, [isRenaming])
// Trigger rename mode when signaled from context menu
useEffect(() => {
if (shouldFocusRename && currentBlock) {
handleStartRename()
setShouldFocusRename(false)
}
}, [shouldFocusRename, currentBlock, handleStartRename, setShouldFocusRename])
/**
* Handles opening documentation link in a new secure tab.
*/
const handleOpenDocs = useCallback(() => {
const handleOpenDocs = () => {
const docsLink = isSubflow ? subflowConfig?.docsLink : blockConfig?.docsLink
window.open(docsLink || 'https://docs.sim.ai/quick-reference', '_blank', 'noopener,noreferrer')
}, [isSubflow, subflowConfig?.docsLink, blockConfig?.docsLink])
if (docsLink) {
window.open(docsLink, '_blank', 'noopener,noreferrer')
}
}
// Get child workflow ID for workflow blocks
const childWorkflowId = isWorkflowBlock ? blockSubBlockValues?.workflowId : null
// Fetch child workflow state for preview (only for workflow blocks with a selected workflow)
const { data: childWorkflowState, isLoading: isLoadingChildWorkflow } =
useWorkflowState(childWorkflowId)
@@ -314,6 +307,7 @@ export function Editor() {
}
}, [childWorkflowId, workspaceId])
// Determine if connections are at minimum height (collapsed state)
const isConnectionsAtMinHeight = connectionsHeight <= 35
return (
@@ -334,7 +328,7 @@ export function Editor() {
)}
{isRenaming ? (
<input
ref={nameInputRefCallback}
ref={nameInputRef}
type='text'
value={editedName}
onChange={(e) => setEditedName(e.target.value)}
@@ -364,36 +358,6 @@ export function Editor() {
)}
</div>
<div className='flex shrink-0 items-center gap-[8px]'>
{/* Locked indicator - clickable to unlock if user has admin permissions, block is locked, and parent is not locked */}
{isLocked && currentBlock && (
<Tooltip.Root>
<Tooltip.Trigger asChild>
{userPermissions.canAdmin && currentBlock.locked && !isParentLocked ? (
<Button
variant='ghost'
className='p-0'
onClick={() => collaborativeBatchToggleLocked([currentBlockId!])}
aria-label='Unlock block'
>
<Unlock className='h-[14px] w-[14px] text-[var(--text-secondary)]' />
</Button>
) : (
<div className='flex items-center justify-center'>
<Lock className='h-[14px] w-[14px] text-[var(--text-secondary)]' />
</div>
)}
</Tooltip.Trigger>
<Tooltip.Content side='top'>
<p>
{isParentLocked
? 'Parent container is locked'
: userPermissions.canAdmin && currentBlock.locked
? 'Unlock block'
: 'Block is locked'}
</p>
</Tooltip.Content>
</Tooltip.Root>
)}
{/* Rename button */}
{currentBlock && (
<Tooltip.Root>
@@ -402,7 +366,7 @@ export function Editor() {
variant='ghost'
className='p-0'
onClick={isRenaming ? handleSaveRename : handleStartRename}
disabled={!canEditBlock}
disabled={!userPermissions.canEdit}
aria-label={isRenaming ? 'Save name' : 'Rename block'}
>
{isRenaming ? (
@@ -435,21 +399,23 @@ export function Editor() {
</Tooltip.Content>
</Tooltip.Root>
)} */}
<Tooltip.Root>
<Tooltip.Trigger asChild>
<Button
variant='ghost'
className='p-0'
onClick={handleOpenDocs}
aria-label='Open documentation'
>
<BookOpen className='h-[14px] w-[14px]' />
</Button>
</Tooltip.Trigger>
<Tooltip.Content side='top'>
<p>Open docs</p>
</Tooltip.Content>
</Tooltip.Root>
{currentBlock && (isSubflow ? subflowConfig?.docsLink : blockConfig?.docsLink) && (
<Tooltip.Root>
<Tooltip.Trigger asChild>
<Button
variant='ghost'
className='p-0'
onClick={handleOpenDocs}
aria-label='Open documentation'
>
<BookOpen className='h-[14px] w-[14px]' />
</Button>
</Tooltip.Trigger>
<Tooltip.Content side='top'>
<p>Open docs</p>
</Tooltip.Content>
</Tooltip.Root>
)}
</div>
</div>
@@ -468,7 +434,7 @@ export function Editor() {
incomingConnections={incomingConnections}
handleConnectionsResizeMouseDown={handleConnectionsResizeMouseDown}
toggleConnectionsCollapsed={toggleConnectionsCollapsed}
userCanEdit={canEditBlock}
userCanEdit={userPermissions.canEdit}
isConnectionsAtMinHeight={isConnectionsAtMinHeight}
/>
) : (
@@ -529,7 +495,13 @@ export function Editor() {
</div>
</div>
<div className='subblock-divider px-[2px] pt-[16px] pb-[13px]'>
<div className='h-[1.25px]' style={DASHED_DIVIDER_STYLE} />
<div
className='h-[1.25px]'
style={{
backgroundImage:
'repeating-linear-gradient(to right, var(--border) 0px, var(--border) 6px, transparent 6px, transparent 12px)',
}}
/>
</div>
</>
)}
@@ -570,14 +542,14 @@ export function Editor() {
config={subBlock}
isPreview={false}
subBlockValues={subBlockState}
disabled={!canEditBlock}
disabled={!userPermissions.canEdit}
fieldDiffStatus={undefined}
allowExpandInPreview={false}
canonicalToggle={
isCanonicalSwap && canonicalMode && canonicalId
? {
mode: canonicalMode,
disabled: !canEditBlock,
disabled: !userPermissions.canEdit,
onToggle: () => {
if (!currentBlockId) return
const nextMode =
@@ -594,16 +566,28 @@ export function Editor() {
/>
{showDivider && (
<div className='subblock-divider px-[2px] pt-[16px] pb-[13px]'>
<div className='h-[1.25px]' style={DASHED_DIVIDER_STYLE} />
<div
className='h-[1.25px]'
style={{
backgroundImage:
'repeating-linear-gradient(to right, var(--border) 0px, var(--border) 6px, transparent 6px, transparent 12px)',
}}
/>
</div>
)}
</div>
)
})}
{hasAdvancedOnlyFields && canEditBlock && (
{hasAdvancedOnlyFields && userPermissions.canEdit && (
<div className='flex items-center gap-[10px] px-[2px] pt-[14px] pb-[12px]'>
<div className='h-[1.25px] flex-1' style={DASHED_DIVIDER_STYLE} />
<div
className='h-[1.25px] flex-1'
style={{
backgroundImage:
'repeating-linear-gradient(to right, var(--border) 0px, var(--border) 6px, transparent 6px, transparent 12px)',
}}
/>
<button
type='button'
onClick={handleToggleAdvancedMode}
@@ -616,7 +600,13 @@ export function Editor() {
className={`h-[14px] w-[14px] transition-transform duration-200 ${displayAdvancedOptions ? 'rotate-180' : ''}`}
/>
</button>
<div className='h-[1.25px] flex-1' style={DASHED_DIVIDER_STYLE} />
<div
className='h-[1.25px] flex-1'
style={{
backgroundImage:
'repeating-linear-gradient(to right, var(--border) 0px, var(--border) 6px, transparent 6px, transparent 12px)',
}}
/>
</div>
)}
@@ -634,13 +624,19 @@ export function Editor() {
config={subBlock}
isPreview={false}
subBlockValues={subBlockState}
disabled={!canEditBlock}
disabled={!userPermissions.canEdit}
fieldDiffStatus={undefined}
allowExpandInPreview={false}
/>
{index < advancedOnlySubBlocks.length - 1 && (
<div className='subblock-divider px-[2px] pt-[16px] pb-[13px]'>
<div className='h-[1.25px]' style={DASHED_DIVIDER_STYLE} />
<div
className='h-[1.25px]'
style={{
backgroundImage:
'repeating-linear-gradient(to right, var(--border) 0px, var(--border) 6px, transparent 6px, transparent 12px)',
}}
/>
</div>
)}
</div>


@@ -45,13 +45,11 @@ import { useWorkflowExecution } from '@/app/workspace/[workspaceId]/w/[workflowI
import { useDeleteWorkflow, useImportWorkflow } from '@/app/workspace/[workspaceId]/w/hooks'
import { usePermissionConfig } from '@/hooks/use-permission-config'
import { useChatStore } from '@/stores/chat/store'
import { useNotificationStore } from '@/stores/notifications/store'
import type { PanelTab } from '@/stores/panel'
import { usePanelStore, useVariablesStore as usePanelVariablesStore } from '@/stores/panel'
import { useVariablesStore } from '@/stores/variables/store'
import { getWorkflowWithValues } from '@/stores/workflows'
import { useWorkflowRegistry } from '@/stores/workflows/registry/store'
import { useWorkflowStore } from '@/stores/workflows/workflow/store'
const logger = createLogger('Panel')
/**
@@ -121,11 +119,6 @@ export const Panel = memo(function Panel() {
hydration.phase === 'state-loading'
const { handleAutoLayout: autoLayoutWithFitView } = useAutoLayout(activeWorkflowId || null)
// Check for locked blocks (disables auto-layout)
const hasLockedBlocks = useWorkflowStore((state) =>
Object.values(state.blocks).some((block) => block.locked)
)
// Delete workflow hook
const { isDeleting, handleDeleteWorkflow } = useDeleteWorkflow({
workspaceId,
@@ -237,24 +230,11 @@ export const Panel = memo(function Panel() {
setIsAutoLayouting(true)
try {
const result = await autoLayoutWithFitView()
if (!result.success && result.error) {
useNotificationStore.getState().addNotification({
level: 'info',
message: result.error,
workflowId: activeWorkflowId || undefined,
})
}
await autoLayoutWithFitView()
} finally {
setIsAutoLayouting(false)
}
}, [
isExecuting,
userPermissions.canEdit,
isAutoLayouting,
autoLayoutWithFitView,
activeWorkflowId,
])
}, [isExecuting, userPermissions.canEdit, isAutoLayouting, autoLayoutWithFitView])
/**
* Handles exporting workflow as JSON
@@ -424,10 +404,7 @@ export const Panel = memo(function Panel() {
<PopoverContent align='start' side='bottom' sideOffset={8}>
<PopoverItem
onClick={handleAutoLayout}
disabled={
isExecuting || !userPermissions.canEdit || isAutoLayouting || hasLockedBlocks
}
title={hasLockedBlocks ? 'Unlock blocks to use auto-layout' : undefined}
disabled={isExecuting || !userPermissions.canEdit || isAutoLayouting}
>
<Layout className='h-3 w-3' animate={isAutoLayouting} variant='clockwise' />
<span>Auto layout</span>


@@ -80,7 +80,6 @@ export const SubflowNodeComponent = memo(({ data, id, selected }: NodeProps<Subf
: undefined
const isEnabled = currentBlock?.enabled ?? true
const isLocked = currentBlock?.locked ?? false
const isPreview = data?.isPreview || false
// Focus state
@@ -201,10 +200,7 @@ export const SubflowNodeComponent = memo(({ data, id, selected }: NodeProps<Subf
{blockName}
</span>
</div>
<div className='flex items-center gap-1'>
{!isEnabled && <Badge variant='gray-secondary'>disabled</Badge>}
{isLocked && <Badge variant='gray-secondary'>locked</Badge>}
</div>
{!isEnabled && <Badge variant='gray-secondary'>disabled</Badge>}
</div>
{!isPreview && (


@@ -105,9 +105,11 @@ export function useTerminalFilters() {
})
}
// Sort by executionOrder (monotonically increasing integer from server)
// Apply sorting by timestamp
result = [...result].sort((a, b) => {
const comparison = a.executionOrder - b.executionOrder
const timeA = new Date(a.timestamp).getTime()
const timeB = new Date(b.timestamp).getTime()
const comparison = timeA - timeB
return sortConfig.direction === 'asc' ? comparison : -comparison
})
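
For reference, a minimal standalone sketch of the timestamp comparator used above (assuming each entry exposes an ISO-8601 `timestamp` string and `direction` is either 'asc' or 'desc'; the helper name is illustrative, not from the diff):

function compareByTimestamp(
  a: { timestamp: string },
  b: { timestamp: string },
  direction: 'asc' | 'desc'
): number {
  // Convert both ISO timestamps to epoch milliseconds and take their difference
  const comparison = new Date(a.timestamp).getTime() - new Date(b.timestamp).getTime()
  // 'asc' keeps oldest entries first; 'desc' negates the result to put newest first
  return direction === 'asc' ? comparison : -comparison
}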


@@ -24,7 +24,6 @@ import {
Tooltip,
} from '@/components/emcn'
import { getEnv, isTruthy } from '@/lib/core/config/env'
import { formatDuration } from '@/lib/core/utils/formatting'
import { useRegisterGlobalCommands } from '@/app/workspace/[workspaceId]/providers/global-commands-provider'
import { createCommands } from '@/app/workspace/[workspaceId]/utils/commands-utils'
import {
@@ -44,6 +43,7 @@ import {
type EntryNode,
type ExecutionGroup,
flattenBlockEntriesOnly,
formatDuration,
getBlockColor,
getBlockIcon,
groupEntriesByExecution,
@@ -128,7 +128,7 @@ const BlockRow = memo(function BlockRow({
<StatusDisplay
isRunning={isRunning}
isCanceled={isCanceled}
formattedDuration={formatDuration(entry.durationMs, { precision: 2 }) ?? '-'}
formattedDuration={formatDuration(entry.durationMs)}
/>
</span>
</div>
@@ -201,7 +201,7 @@ const IterationNodeRow = memo(function IterationNodeRow({
<StatusDisplay
isRunning={hasRunningChild}
isCanceled={hasCanceledChild}
formattedDuration={formatDuration(entry.durationMs, { precision: 2 }) ?? '-'}
formattedDuration={formatDuration(entry.durationMs)}
/>
</span>
</div>
@@ -314,7 +314,7 @@ const SubflowNodeRow = memo(function SubflowNodeRow({
<StatusDisplay
isRunning={hasRunningDescendant}
isCanceled={hasCanceledDescendant}
formattedDuration={formatDuration(entry.durationMs, { precision: 2 }) ?? '-'}
formattedDuration={formatDuration(entry.durationMs)}
/>
</span>
</div>


@@ -53,6 +53,17 @@ export function getBlockColor(blockType: string): string {
return '#6b7280'
}
/**
* Formats duration from milliseconds to readable format
*/
export function formatDuration(ms?: number): string {
if (ms === undefined || ms === null) return '-'
if (ms < 1000) {
return `${Math.round(ms)}ms`
}
return `${(ms / 1000).toFixed(2)}s`
}
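/*
 * Illustrative expectations for formatDuration above (example values, not part of the diff):
 *   formatDuration(undefined) // '-'
 *   formatDuration(850)       // '850ms'  (sub-second values are rounded to whole milliseconds)
 *   formatDuration(2345)      // '2.35s'  (values >= 1s are shown in seconds with two decimals)
 */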
/**
* Determines if a keyboard event originated from a text-editable element
*/
@@ -184,9 +195,13 @@ function buildEntryTree(entries: ConsoleEntry[]): EntryNode[] {
group.blocks.push(entry)
}
// Sort blocks within each iteration by executionOrder ascending (oldest first, top-down)
// Sort blocks within each iteration by start time ascending (oldest first, top-down)
for (const group of iterationGroupsMap.values()) {
group.blocks.sort((a, b) => a.executionOrder - b.executionOrder)
group.blocks.sort((a, b) => {
const aStart = new Date(a.startedAt || a.timestamp).getTime()
const bStart = new Date(b.startedAt || b.timestamp).getTime()
return aStart - bStart
})
}
// Group iterations by iterationType to create subflow parents
@@ -221,8 +236,6 @@ function buildEntryTree(entries: ConsoleEntry[]): EntryNode[] {
const totalDuration = allBlocks.reduce((sum, b) => sum + (b.durationMs || 0), 0)
// Create synthetic subflow parent entry
// Use the minimum executionOrder from all child blocks for proper ordering
const subflowExecutionOrder = Math.min(...allBlocks.map((b) => b.executionOrder))
const syntheticSubflow: ConsoleEntry = {
id: `subflow-${iterationType}-${firstIteration.blocks[0]?.executionId || 'unknown'}`,
timestamp: new Date(subflowStartMs).toISOString(),
@@ -232,7 +245,6 @@ function buildEntryTree(entries: ConsoleEntry[]): EntryNode[] {
blockType: iterationType,
executionId: firstIteration.blocks[0]?.executionId,
startedAt: new Date(subflowStartMs).toISOString(),
executionOrder: subflowExecutionOrder,
endedAt: new Date(subflowEndMs).toISOString(),
durationMs: totalDuration,
success: !allBlocks.some((b) => b.error),
@@ -250,8 +262,6 @@ function buildEntryTree(entries: ConsoleEntry[]): EntryNode[] {
)
const iterDuration = iterBlocks.reduce((sum, b) => sum + (b.durationMs || 0), 0)
// Use the minimum executionOrder from blocks in this iteration
const iterExecutionOrder = Math.min(...iterBlocks.map((b) => b.executionOrder))
const syntheticIteration: ConsoleEntry = {
id: `iteration-${iterationType}-${iterGroup.iterationCurrent}-${iterBlocks[0]?.executionId || 'unknown'}`,
timestamp: new Date(iterStartMs).toISOString(),
@@ -261,7 +271,6 @@ function buildEntryTree(entries: ConsoleEntry[]): EntryNode[] {
blockType: iterationType,
executionId: iterBlocks[0]?.executionId,
startedAt: new Date(iterStartMs).toISOString(),
executionOrder: iterExecutionOrder,
endedAt: new Date(iterEndMs).toISOString(),
durationMs: iterDuration,
success: !iterBlocks.some((b) => b.error),
@@ -302,9 +311,14 @@ function buildEntryTree(entries: ConsoleEntry[]): EntryNode[] {
nodeType: 'block' as const,
}))
// Combine all nodes and sort by executionOrder ascending (oldest first, top-down)
// Combine all nodes and sort by start time ascending (oldest first, top-down)
const allNodes = [...subflowNodes, ...regularNodes]
allNodes.sort((a, b) => a.entry.executionOrder - b.entry.executionOrder)
allNodes.sort((a, b) => {
const aStart = new Date(a.entry.startedAt || a.entry.timestamp).getTime()
const bStart = new Date(b.entry.startedAt || b.entry.timestamp).getTime()
return aStart - bStart
})
return allNodes
}


@@ -30,7 +30,6 @@ import {
Textarea,
} from '@/components/emcn'
import { cn } from '@/lib/core/utils/cn'
import { formatDuration } from '@/lib/core/utils/formatting'
import { sanitizeForCopilot } from '@/lib/workflows/sanitization/json-sanitizer'
import { formatEditSequence } from '@/lib/workflows/training/compute-edit-sequence'
import { useCurrentWorkflow } from '@/app/workspace/[workspaceId]/w/[workflowId]/hooks/use-current-workflow'
@@ -576,9 +575,7 @@ export function TrainingModal() {
<span className='text-[var(--text-muted)]'>Duration:</span>{' '}
<span className='text-[var(--text-secondary)]'>
{dataset.metadata?.duration
? formatDuration(dataset.metadata.duration, {
precision: 1,
})
? `${(dataset.metadata.duration / 1000).toFixed(1)}s`
: 'N/A'}
</span>
</div>


@@ -18,8 +18,6 @@ export interface UseBlockStateReturn {
diffStatus: DiffStatus
/** Whether this is a deleted block in diff mode */
isDeletedBlock: boolean
/** Whether the block is locked */
isLocked: boolean
}
/**
@@ -42,11 +40,6 @@ export function useBlockState(
? (data.blockState?.enabled ?? true)
: (currentBlock?.enabled ?? true)
// Determine if block is locked
const isLocked = data.isPreview
? (data.blockState?.locked ?? false)
: (currentBlock?.locked ?? false)
// Get diff status
const diffStatus: DiffStatus =
currentWorkflow.isDiffMode && currentBlock && hasDiffStatus(currentBlock)
@@ -75,6 +68,5 @@ export function useBlockState(
isActive,
diffStatus,
isDeletedBlock: isDeletedBlock ?? false,
isLocked,
}
}


@@ -672,7 +672,6 @@ export const WorkflowBlock = memo(function WorkflowBlock({
currentWorkflow,
activeWorkflowId,
isEnabled,
isLocked,
handleClick,
hasRing,
ringStyles,
@@ -1101,7 +1100,7 @@ export const WorkflowBlock = memo(function WorkflowBlock({
{name}
</span>
</div>
<div className='relative z-10 flex flex-shrink-0 items-center gap-1'>
<div className='relative z-10 flex flex-shrink-0 items-center gap-2'>
{isWorkflowSelector &&
childWorkflowId &&
typeof childIsDeployed === 'boolean' &&
@@ -1134,7 +1133,6 @@ export const WorkflowBlock = memo(function WorkflowBlock({
</Tooltip.Root>
)}
{!isEnabled && <Badge variant='gray-secondary'>disabled</Badge>}
{isLocked && <Badge variant='gray-secondary'>locked</Badge>}
{type === 'schedule' && shouldShowScheduleBadge && scheduleInfo?.isDisabled && (
<Tooltip.Root>


@@ -188,7 +188,7 @@ export function useBlockOutputFields({
baseOutputs = {
input: { type: 'string', description: 'User message' },
conversationId: { type: 'string', description: 'Conversation ID' },
files: { type: 'file[]', description: 'Uploaded files' },
files: { type: 'files', description: 'Uploaded files' },
}
} else {
const inputFormatValue = mergedSubBlocks?.inputFormat?.value


@@ -47,7 +47,6 @@ export function useBlockVisual({
isActive: isExecuting,
diffStatus,
isDeletedBlock,
isLocked,
} = useBlockState(blockId, currentWorkflow, data)
const currentBlockId = usePanelEditorStore((state) => state.currentBlockId)
@@ -104,7 +103,6 @@ export function useBlockVisual({
currentWorkflow,
activeWorkflowId,
isEnabled,
isLocked,
handleClick,
hasRing,
ringStyles,


@@ -31,8 +31,7 @@ export function useCanvasContextMenu({ blocks, getNodes, setNodes }: UseCanvasCo
nodes.map((n) => {
const block = blocks[n.id]
const parentId = block?.data?.parentId
const parentBlock = parentId ? blocks[parentId] : undefined
const parentType = parentBlock?.type
const parentType = parentId ? blocks[parentId]?.type : undefined
return {
id: n.id,
type: block?.type || '',
@@ -40,9 +39,6 @@ export function useCanvasContextMenu({ blocks, getNodes, setNodes }: UseCanvasCo
horizontalHandles: block?.horizontalHandles ?? false,
parentId,
parentType,
locked: block?.locked ?? false,
isParentLocked: parentBlock?.locked ?? false,
isParentDisabled: parentBlock ? !parentBlock.enabled : false,
}
}),
[blocks]

Some files were not shown because too many files have changed in this diff.