Compare commits


15 Commits

Author SHA1 Message Date
Waleed 70c36cb7aa v0.5.105: slack remove reaction, nested subflow locks fix, servicenow pagination, memory improvements 2026-03-04 22:38:26 -08:00
Waleed f1ec5fe824 v0.5.104: memory improvements, nested subflows, careers page redirect, brandfetch, google meet 2026-03-03 23:45:29 -08:00
Waleed e07e3c34cc v0.5.103: memory util instrumentation, API docs, amplitude, google pagespeed insights, pagerduty 2026-03-01 23:27:02 -08:00
Waleed 0d2e6ff31d v0.5.102: new integrations, new tools, ci speedups, memory leak instrumentation 2026-02-28 12:48:10 -08:00
Waleed 4fd0989264 v0.5.101: circular dependency mitigation, confluence enhancements, google tasks and bigquery integrations, workflow lock 2026-02-26 15:04:53 -08:00
Waleed 67f8a687f6 v0.5.100: multiple credentials, 40% speedup, gong, attio, audit log improvements 2026-02-25 00:28:25 -08:00
Waleed af592349d3 v0.5.99: local dev improvements, live workflow logs in terminal 2026-02-23 00:24:49 -08:00
Waleed 0d86ea01f0 v0.5.98: change detection improvements, rate limit and code execution fixes, removed retired models, hex integration 2026-02-21 18:07:40 -08:00
Waleed 115f04e989 v0.5.97: oidc discovery for copilot mcp 2026-02-21 02:06:25 -08:00
Waleed 34d92fae89 v0.5.96: sim oauth provider, slack ephemeral message tool and blockkit support 2026-02-20 18:22:20 -08:00
Waleed 67aa4bb332 v0.5.95: gemini 3.1 pro, cloudflare, dataverse, revenuecat, redis, upstash, algolia tools; isolated-vm robustness improvements, tables backend (#3271)
* feat(tools): advanced fields for youtube, vercel; added cloudflare and dataverse tools (#3257)

* refactor(vercel): mark optional fields as advanced mode

Move optional/power-user fields behind the advanced toggle:
- List Deployments: project filter, target, state
- Create Deployment: project ID override, redeploy from, target
- List Projects: search
- Create/Update Project: framework, build/output/install commands
- Env Vars: variable type
- Webhooks: project IDs filter
- Checks: path, details URL
- Team Members: role filter
- All operations: team ID scope

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* style(youtube): mark optional params as advanced mode

Hide pagination, sort order, and filter fields behind the advanced
toggle for a cleaner default UX across all YouTube operations.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* added advanced fields for vercel and youtube, added cloudflare and dataverse block

* added desc for dataverse

* add more tools

* ack comment

* more

* ops

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* feat(tables): added tables (#2867)

* updates

* required

* trashy table viewer

* updates

* updates

* filtering ui

* updates

* updates

* updates

* one input mode

* format

* fix lints

* improved errors

* updates

* updates

* changes

* doc strings

* breaking down file

* update comments with ai

* updates

* comments

* changes

* revert

* updates

* dedupe

* updates

* updates

* updates

* refactoring

* renames & refactors

* refactoring

* updates

* undo

* update db

* wand

* updates

* fix comments

* fixes

* simplify comments

* updates

* renames

* better comments

* validation

* updates

* updates

* updates

* fix sorting

* fix appearance

* updating prompt to make it user sort

* rm

* updates

* rename

* comments

* clean comments

* simplification

* updates

* updates

* refactor

* reduced type confusion

* undo

* rename

* undo changes

* undo

* simplify

* updates

* updates

* revert

* updates

* db updates

* type fix

* fix

* fix error handling

* updates

* docs

* docs

* updates

* rename

* dedupe

* revert

* uncook

* updates

* fix

* fix

* fix

* fix

* prepare merge

* readd migrations

* add back missed code

* migrate enrichment logic to general abstraction

* address bugbot concerns

* adhere to size limits for tables

* remove conflicting migration

* add back migrations

* fix tables auth

* fix permissive auth

* fix lint

* reran migrations

* migrate to use tanstack query for all server state

* update table-selector

* update names

* added tables to permission groups, updated subblock types

---------

Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
Co-authored-by: waleed <walif6@gmail.com>

* fix(snapshot): changed insert to upsert when concurrent identical child workflows are running (#3259)

* fix(snapshot): changed insert to upsert when concurrent identical child workflows are running

* fixed ci tests failing
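The insert-versus-upsert race in #3259 can be sketched in miniature. This is an illustrative in-memory model (the key and value names are hypothetical, not Sim's schema): two concurrent identical child workflows write the same snapshot key, so a plain insert fails on the duplicate while an upsert treats the identical write as a harmless overwrite.

```typescript
// In-memory stand-in for a table with a unique constraint on the snapshot key.
const snapshots = new Map<string, string>()

// Plain insert: throws on a duplicate key, like a unique-constraint violation.
function insertSnapshot(key: string, value: string): void {
  if (snapshots.has(key)) throw new Error(`duplicate snapshot key: ${key}`)
  snapshots.set(key, value)
}

// Upsert: equivalent in spirit to INSERT ... ON CONFLICT DO UPDATE;
// an identical concurrent write simply wins without erroring.
function upsertSnapshot(key: string, value: string): void {
  snapshots.set(key, value)
}

insertSnapshot('wf-123', 'state-a')

// Second identical writer arrives concurrently.
let insertFailed = false
try {
  insertSnapshot('wf-123', 'state-a')
} catch {
  insertFailed = true
}

// The same second write expressed as an upsert succeeds quietly.
upsertSnapshot('wf-123', 'state-a')
```

In a real database the same idea is expressed with `ON CONFLICT DO UPDATE` (or `DO NOTHING`), which is what makes the concurrent identical writes safe.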

* fix(workflows): disallow duplicate workflow names at the same folder level (#3260)

* feat(tools): added redis, upstash, algolia, and revenuecat (#3261)

* feat(tools): added redis, upstash, algolia, and revenuecat

* ack comment

* feat(models): add gemini-3.1-pro-preview and update gemini-3-pro thinking levels (#3263)

* fix(audit-log): lazily resolve actor name/email when missing (#3262)

* fix(blocks): move type coercions from tools.config.tool to tools.config.params (#3264)

* fix(blocks): move type coercions from tools.config.tool to tools.config.params

Number() coercions in tools.config.tool ran at serialization time before
variable resolution, destroying dynamic references like <block.result.count>
by converting them to NaN/null. Moved all coercions to tools.config.params
which runs at execution time after variables are resolved.

Fixed in 15 blocks: exa, arxiv, sentry, incidentio, wikipedia, ahrefs,
posthog, elasticsearch, dropbox, hunter, lemlist, spotify, youtube, grafana,
parallel. Also added mode: 'advanced' to optional exa fields.

Closes #3258
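The timing bug above can be shown with a minimal sketch. The helper names and the `<block.result.count>` placeholder are illustrative, not Sim's actual API; the point is only the ordering of coercion versus variable resolution.

```typescript
type Params = Record<string, unknown>

// A dynamic reference, still a raw placeholder string at serialization time.
const REF = '<block.result.count>'

// Type coercion: Number() on an unresolved placeholder yields NaN,
// silently destroying the reference.
function coerceParams(params: Params): Params {
  return { ...params, maxResults: Number(params.maxResults) }
}

// Variable resolution at execution time: placeholders become real values.
function resolveVariables(params: Params): Params {
  const out: Params = {}
  for (const [key, value] of Object.entries(params)) {
    out[key] = value === REF ? '42' : value
  }
  return out
}

const raw: Params = { maxResults: REF }

// Buggy order (coerce at serialization, then resolve): reference is already NaN.
const buggy = resolveVariables(coerceParams(raw)).maxResults

// Fixed order (resolve first, then coerce at execution): reference survives.
const fixed = coerceParams(resolveVariables(raw)).maxResults
```

Here `buggy` ends up as `NaN` while `fixed` is `42`, which is why the coercions had to move from `tools.config.tool` (serialization time) to `tools.config.params` (execution time).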

* fix(blocks): address PR review — move remaining param mutations from tool() to params()

- Moved field mappings from tool() to params() in grafana, posthog,
  lemlist, spotify, dropbox (same dynamic reference bug)
- Fixed parallel.ts excerpts/full_content boolean logic
- Fixed parallel.ts search_queries empty case (must set undefined)
- Fixed elasticsearch.ts timeout not included when already ends with 's'
- Restored dropbox.ts tool() switch for proper default fallback

* fix(blocks): restore field renames to tool() for serialization-time validation

Field renames (e.g. personalApiKey→apiKey) must be in tool() because
validateRequiredFieldsBeforeExecution calls selectToolId()→tool() then
checks renamed field names on params. Only type coercions (Number(),
boolean) stay in params() to avoid destroying dynamic variable references.

* improvement(resolver): resolved empty sentinel so unexecuted valid refs are not passed through to text inputs (#3266)

* fix(blocks): add required constraint for serviceDeskId in JSM block (#3268)

* fix(blocks): add required constraint for serviceDeskId in JSM block

* fix(blocks): rename custom field values to request field values in JSM create request

* fix(trigger): add isolated-vm support to trigger.dev container builds (#3269)

Scheduled workflow executions running in trigger.dev containers were
failing to spawn isolated-vm workers because the native module wasn't
available in the container. This caused loop condition evaluation to
silently fail and exit after one iteration.

- Add isolated-vm to build.external and additionalPackages in trigger config
- Include isolated-vm-worker.cjs via additionalFiles for child process spawning
- Add fallback path resolution for worker file in trigger.dev environment
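The three bullets above map onto trigger.dev v3's build configuration roughly as follows. This is a hedged sketch, not the repo's actual file: the project ref and worker path are placeholders, and the extension imports are assumed from trigger.dev's documented `@trigger.dev/build` extensions.

```typescript
import { defineConfig } from '@trigger.dev/sdk/v3'
import { additionalFiles, additionalPackages } from '@trigger.dev/build/extensions/core'

export default defineConfig({
  project: 'proj_placeholder', // hypothetical project ref
  build: {
    // Keep the native module out of the bundle so its .node binary
    // is loaded from node_modules at runtime instead of being inlined.
    external: ['isolated-vm'],
    extensions: [
      // Install the package (with its native binary) inside the container image.
      additionalPackages({ packages: ['isolated-vm'] }),
      // Ship the worker script so child processes can be spawned from it.
      additionalFiles({ files: ['./isolated-vm-worker.cjs'] }),
    ],
  },
})
```

Without all three pieces, the worker spawn fails inside the container, which is why the loop condition evaluation was exiting after one iteration.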

* fix(tables): hide tables from sidebar and block registry (#3270)

* fix(tables): hide tables from sidebar and block registry

* fix(trigger): add isolated-vm support to trigger.dev container builds (#3269)

Scheduled workflow executions running in trigger.dev containers were
failing to spawn isolated-vm workers because the native module wasn't
available in the container. This caused loop condition evaluation to
silently fail and exit after one iteration.

- Add isolated-vm to build.external and additionalPackages in trigger config
- Include isolated-vm-worker.cjs via additionalFiles for child process spawning
- Add fallback path resolution for worker file in trigger.dev environment

* lint

* fix(trigger): update node version to align with main app (#3272)

* fix(build): fix corrupted sticky disk cache on blacksmith (#3273)

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Lakee Sivaraya <71339072+lakeesiv@users.noreply.github.com>
Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai>
Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com>
2026-02-20 13:43:07 -08:00
Waleed 15ace5e63f v0.5.94: vercel integration, folder insertion, migrated tracking redirects to rewrites 2026-02-18 16:53:34 -08:00
Waleed fdca73679d v0.5.93: NextJS config changes, MCP and Blocks whitelisting, copilot keyboard shortcuts, audit logs 2026-02-18 12:10:05 -08:00
Waleed da46a387c9 v0.5.92: shortlinks, copilot scrolling stickiness, pagination 2026-02-17 15:13:21 -08:00
Waleed b7e377ec4b v0.5.91: docs i18n, turborepo upgrade 2026-02-16 00:36:05 -08:00
17 changed files with 1 addition and 791 deletions


@@ -32,7 +32,3 @@ API_ENCRYPTION_KEY=your_api_encryption_key # Use `openssl rand -hex 32` to gener
 # Admin API (Optional - for self-hosted GitOps)
 # ADMIN_API_KEY= # Use `openssl rand -hex 32` to generate. Enables admin API for workflow export/import.
 # Usage: curl -H "x-admin-key: your_key" https://your-instance/api/v1/admin/workspaces
-# Execute Command (Optional - self-hosted only)
-# EXECUTE_COMMAND_ENABLED=true # Enable shell command execution in workflows
-# NEXT_PUBLIC_EXECUTE_COMMAND_ENABLED=true # Show Execute Command block in the UI


@@ -366,7 +366,7 @@ function resolveWorkflowVariables(
 const variableName = match[1].trim()
 const foundVariable = Object.entries(workflowVariables).find(
-([_, variable]) => normalizeName(variable.name || '') === normalizeName(variableName)
+([_, variable]) => normalizeName(variable.name || '') === variableName
 )
 if (!foundVariable) {


@@ -1,424 +0,0 @@
import { exec } from 'child_process'
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { isExecuteCommandEnabled } from '@/lib/core/config/feature-flags'
import { generateRequestId } from '@/lib/core/utils/request'
import { DEFAULT_EXECUTION_TIMEOUT_MS } from '@/lib/execution/constants'
import { normalizeName, REFERENCE, SPECIAL_REFERENCE_PREFIXES } from '@/executor/constants'
import { type OutputSchema, resolveBlockReference } from '@/executor/utils/block-reference'
import {
createEnvVarPattern,
createWorkflowVariablePattern,
} from '@/executor/utils/reference-validation'
export const dynamic = 'force-dynamic'
export const runtime = 'nodejs'
export const MAX_DURATION = 210
const logger = createLogger('ExecuteCommandAPI')
const MAX_BUFFER = 10 * 1024 * 1024 // 10MB
const SAFE_ENV_KEYS = ['PATH', 'HOME', 'SHELL', 'USER', 'LOGNAME', 'LANG', 'TERM', 'TZ'] as const
/**
* Returns a minimal base environment for child processes.
* Only includes POSIX essentials — never server secrets like DATABASE_URL, AUTH_SECRET, etc.
*/
function getSafeBaseEnv(): NodeJS.ProcessEnv {
const env: NodeJS.ProcessEnv = { NODE_ENV: process.env.NODE_ENV }
for (const key of SAFE_ENV_KEYS) {
if (process.env[key]) {
env[key] = process.env[key]
}
}
return env
}
const BLOCKED_ENV_KEYS = new Set([
...SAFE_ENV_KEYS,
'NODE_ENV',
'NODE_OPTIONS',
'LD_PRELOAD',
'LD_LIBRARY_PATH',
'LD_AUDIT',
'DYLD_INSERT_LIBRARIES',
'DYLD_LIBRARY_PATH',
'DYLD_FRAMEWORK_PATH',
'BASH_ENV',
'ENV',
'GLIBC_TUNABLES',
'NODE_PATH',
'PYTHONPATH',
'PERL5LIB',
'PERL5OPT',
'RUBYLIB',
'JAVA_TOOL_OPTIONS',
])
/**
* Filters user-supplied env vars to prevent overriding safe base env
* and injecting process-influencing variables.
*/
function filterUserEnv(env?: Record<string, string>): Record<string, string> {
if (!env) return {}
const filtered: Record<string, string> = {}
for (const [key, value] of Object.entries(env)) {
if (!BLOCKED_ENV_KEYS.has(key)) {
filtered[key] = value
}
}
return filtered
}
interface Replacement {
index: number
length: number
value: string
}
/**
* Collects workflow variable (<variable.name>) replacements from the original command string
*/
function collectWorkflowVariableReplacements(
command: string,
workflowVariables: Record<string, any>
): Replacement[] {
const regex = createWorkflowVariablePattern()
const replacements: Replacement[] = []
let match: RegExpExecArray | null
while ((match = regex.exec(command)) !== null) {
const variableName = match[1].trim()
const foundVariable = Object.entries(workflowVariables).find(
([_, variable]) => normalizeName(variable.name || '') === normalizeName(variableName)
)
if (!foundVariable) {
const availableVars = Object.values(workflowVariables)
.map((v: any) => v.name)
.filter(Boolean)
throw new Error(
`Variable "${variableName}" doesn't exist.` +
(availableVars.length > 0 ? ` Available: ${availableVars.join(', ')}` : '')
)
}
const variable = foundVariable[1]
let value = variable.value
if (typeof value === 'object' && value !== null) {
value = JSON.stringify(value)
} else {
value = String(value ?? '')
}
replacements.push({ index: match.index, length: match[0].length, value })
}
return replacements
}
/**
* Collects environment variable ({{ENV_VAR}}) replacements from the original command string
*/
function collectEnvVarReplacements(
command: string,
envVars: Record<string, string>
): Replacement[] {
const regex = createEnvVarPattern()
const replacements: Replacement[] = []
let match: RegExpExecArray | null
while ((match = regex.exec(command)) !== null) {
const varName = match[1].trim()
if (!(varName in envVars)) {
continue
}
replacements.push({ index: match.index, length: match[0].length, value: envVars[varName] })
}
return replacements
}
/**
* Collects block reference tag (<blockName.field>) replacements from the original command string
*/
function collectTagReplacements(
command: string,
blockData: Record<string, unknown>,
blockNameMapping: Record<string, string>,
blockOutputSchemas: Record<string, OutputSchema>
): Replacement[] {
const tagPattern = new RegExp(
`${REFERENCE.START}(\\S[^${REFERENCE.START}${REFERENCE.END}]*)${REFERENCE.END}`,
'g'
)
const replacements: Replacement[] = []
let match: RegExpExecArray | null
while ((match = tagPattern.exec(command)) !== null) {
const tagName = match[1].trim()
const pathParts = tagName.split(REFERENCE.PATH_DELIMITER)
const blockName = pathParts[0]
const fieldPath = pathParts.slice(1)
const result = resolveBlockReference(blockName, fieldPath, {
blockNameMapping,
blockData,
blockOutputSchemas,
})
if (!result) {
const isSpecialPrefix = SPECIAL_REFERENCE_PREFIXES.some((prefix) => blockName === prefix)
if (!isSpecialPrefix) {
throw new Error(
`Block reference "<${tagName}>" could not be resolved. Check that the block name and field path are correct.`
)
}
continue
}
let stringValue: string
if (result.value === undefined || result.value === null) {
stringValue = ''
} else if (typeof result.value === 'object') {
stringValue = JSON.stringify(result.value)
} else {
stringValue = String(result.value)
}
replacements.push({ index: match.index, length: match[0].length, value: stringValue })
}
return replacements
}
/**
* Resolves all variable references in a command string in a single pass.
* All three patterns are matched against the ORIGINAL command to prevent
* cascading resolution (e.g. a workflow variable value containing {{ENV_VAR}}
* would NOT be further resolved).
*/
function resolveCommandVariables(
command: string,
envVars: Record<string, string>,
blockData: Record<string, unknown>,
blockNameMapping: Record<string, string>,
blockOutputSchemas: Record<string, OutputSchema>,
workflowVariables: Record<string, unknown>
): string {
const allReplacements = [
...collectWorkflowVariableReplacements(command, workflowVariables),
...collectEnvVarReplacements(command, envVars),
...collectTagReplacements(command, blockData, blockNameMapping, blockOutputSchemas),
]
allReplacements.sort((a, b) => a.index - b.index)
for (let i = 1; i < allReplacements.length; i++) {
const prev = allReplacements[i - 1]
const curr = allReplacements[i]
if (curr.index < prev.index + prev.length) {
throw new Error(
`Overlapping variable references detected at positions ${prev.index} and ${curr.index}`
)
}
}
let resolved = command
for (let i = allReplacements.length - 1; i >= 0; i--) {
const { index, length, value } = allReplacements[i]
resolved = resolved.slice(0, index) + value + resolved.slice(index + length)
}
return resolved
}
interface CommandResult {
stdout: string
stderr: string
exitCode: number
timedOut: boolean
maxBufferExceeded: boolean
}
/**
* Execute a shell command and return stdout, stderr, exitCode.
* Distinguishes between a process that exited with non-zero (normal) and one that was killed (timeout).
*/
function executeCommand(
command: string,
options: { timeout: number; cwd?: string; env?: Record<string, string> }
): Promise<CommandResult> {
return new Promise((resolve) => {
const childProcess = exec(
command,
{
timeout: options.timeout,
cwd: options.cwd || undefined,
maxBuffer: MAX_BUFFER,
env: { ...getSafeBaseEnv(), ...filterUserEnv(options.env) },
},
(error, stdout, stderr) => {
if (error) {
const killed = error.killed ?? false
const isMaxBuffer = /maxBuffer/i.test(error.message ?? '')
const exitCode = typeof error.code === 'number' ? error.code : 1
resolve({
stdout: stdout.trimEnd(),
stderr: stderr.trimEnd(),
exitCode,
timedOut: killed && !isMaxBuffer,
maxBufferExceeded: isMaxBuffer,
})
return
}
resolve({
stdout: stdout.trimEnd(),
stderr: stderr.trimEnd(),
exitCode: 0,
timedOut: false,
maxBufferExceeded: false,
})
}
)
childProcess.on('error', (err) => {
resolve({
stdout: '',
stderr: err.message,
exitCode: 1,
timedOut: false,
maxBufferExceeded: false,
})
})
})
}
export async function POST(req: NextRequest) {
const requestId = generateRequestId()
try {
if (!isExecuteCommandEnabled) {
logger.warn(`[${requestId}] Execute Command is disabled`)
return NextResponse.json(
{
success: false,
error:
'Execute Command is not enabled. Set EXECUTE_COMMAND_ENABLED=true in your environment to use this feature. Only available for self-hosted deployments.',
},
{ status: 403 }
)
}
const auth = await checkInternalAuth(req)
if (!auth.success || !auth.userId) {
logger.warn(`[${requestId}] Unauthorized execute command attempt`)
return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
}
const body = await req.json()
const {
command,
workingDirectory,
envVars = {},
blockData = {},
blockNameMapping = {},
blockOutputSchemas = {},
workflowVariables = {},
workflowId,
} = body
const MAX_ALLOWED_TIMEOUT_MS = (MAX_DURATION - 10) * 1000
const parsedTimeout = Number(body.timeout)
const timeout = Math.min(
parsedTimeout > 0 ? parsedTimeout : DEFAULT_EXECUTION_TIMEOUT_MS,
MAX_ALLOWED_TIMEOUT_MS
)
if (!command || typeof command !== 'string') {
return NextResponse.json(
{ success: false, error: 'Command is required and must be a string' },
{ status: 400 }
)
}
logger.info(`[${requestId}] Execute command request`, {
commandLength: command.length,
timeout,
workingDirectory: workingDirectory || '(default)',
workflowId,
})
const resolvedCommand = resolveCommandVariables(
command,
envVars,
blockData,
blockNameMapping,
blockOutputSchemas,
workflowVariables
)
const result = await executeCommand(resolvedCommand, {
timeout,
cwd: workingDirectory,
env: envVars,
})
logger.info(`[${requestId}] Command completed`, {
exitCode: result.exitCode,
timedOut: result.timedOut,
stdoutLength: result.stdout.length,
stderrLength: result.stderr.length,
workflowId,
})
if (result.timedOut) {
return NextResponse.json({
success: false,
output: {
stdout: result.stdout,
stderr: result.stderr,
exitCode: result.exitCode,
},
error: `Command timed out after ${timeout}ms`,
})
}
if (result.maxBufferExceeded) {
return NextResponse.json({
success: false,
output: {
stdout: result.stdout,
stderr: result.stderr,
exitCode: result.exitCode,
},
error: `Command output exceeded maximum buffer size of ${MAX_BUFFER / 1024 / 1024}MB`,
})
}
return NextResponse.json({
success: true,
output: {
stdout: result.stdout,
stderr: result.stderr,
exitCode: result.exitCode,
},
})
} catch (error: unknown) {
const message = error instanceof Error ? error.message : 'Unknown error'
logger.error(`[${requestId}] Execute command failed`, { error: message })
return NextResponse.json(
{
success: false,
output: { stdout: '', stderr: message, exitCode: 1 },
error: message,
},
{ status: 500 }
)
}
}


@@ -1,88 +0,0 @@
import { TerminalIcon } from '@/components/icons'
import { isTruthy } from '@/lib/core/config/env'
import { DEFAULT_EXECUTION_TIMEOUT_MS } from '@/lib/execution/constants'
import type { BlockConfig } from '@/blocks/types'
import type { ExecuteCommandOutput } from '@/tools/execute-command/types'
export const ExecuteCommandBlock: BlockConfig<ExecuteCommandOutput> = {
type: 'execute_command',
name: 'Execute Command',
description: 'Run shell commands',
hideFromToolbar: !isTruthy(process.env.NEXT_PUBLIC_EXECUTE_COMMAND_ENABLED),
longDescription:
'Execute shell commands on the host machine. Only available for self-hosted deployments with EXECUTE_COMMAND_ENABLED=true. Commands run in the default shell of the host OS.',
bestPractices: `
- Commands execute in the default shell of the host machine (bash, zsh, cmd, PowerShell).
- Chain multiple commands with && to run them sequentially.
- Use <blockName.output> syntax to reference outputs from other blocks.
- Use {{ENV_VAR}} syntax to reference environment variables.
- The working directory defaults to the server process directory if not specified. Variable references are not supported in the Working Directory field — use a literal path.
- A non-zero exit code is returned as data (exitCode > 0), not treated as a workflow error. Use a Condition block to branch on exitCode if needed.
- Variable values from other blocks are interpolated directly into the command string. Avoid passing untrusted user input as block references to prevent shell injection.
`,
docsLink: 'https://docs.sim.ai/blocks/execute-command',
category: 'blocks',
bgColor: '#1E1E1E',
icon: TerminalIcon,
subBlocks: [
{
id: 'command',
title: 'Command',
type: 'long-input',
required: true,
placeholder: 'echo "Hello, World!"',
wandConfig: {
enabled: true,
prompt: `You are an expert shell scripting assistant.
Generate ONLY the raw shell command(s) based on the user's request. Never wrap in markdown formatting.
The command runs in the default shell of the host OS (bash, zsh, cmd, or PowerShell).
- Reference outputs from previous blocks using angle bracket syntax: <blockName.output>
- Reference environment variables using double curly brace syntax: {{ENV_VAR_NAME}}
- Chain multiple commands with && to run them sequentially.
- Use pipes (|) to chain command output.
Current command context: {context}
IMPORTANT FORMATTING RULES:
1. Output ONLY the shell command(s). No explanations, no markdown, no comments.
2. Use <blockName.field> to reference block outputs. Do NOT wrap in quotes.
3. Use {{VAR_NAME}} to reference environment variables. Do NOT wrap in quotes.
4. Write portable commands when possible (prefer POSIX-compatible syntax).
5. For multi-step operations, chain with && or use subshells.`,
placeholder: 'Describe the command you want to run...',
},
},
{
id: 'workingDirectory',
title: 'Working Directory',
type: 'short-input',
required: false,
placeholder: '/path/to/directory',
},
{
id: 'timeout',
title: 'Timeout (ms)',
type: 'short-input',
required: false,
placeholder: String(DEFAULT_EXECUTION_TIMEOUT_MS),
},
],
tools: {
access: ['execute_command_run'],
},
inputs: {
command: { type: 'string', description: 'Shell command to execute' },
workingDirectory: { type: 'string', description: 'Working directory for the command' },
timeout: { type: 'number', description: 'Execution timeout in milliseconds' },
},
outputs: {
stdout: { type: 'string', description: 'Standard output from the command' },
stderr: { type: 'string', description: 'Standard error output from the command' },
exitCode: { type: 'number', description: 'Exit code of the command (0 = success)' },
error: {
type: 'string',
description: 'Error message if the command timed out or exceeded buffer limits',
},
},
}


@@ -39,7 +39,6 @@ import { ElevenLabsBlock } from '@/blocks/blocks/elevenlabs'
 import { EnrichBlock } from '@/blocks/blocks/enrich'
 import { EvaluatorBlock } from '@/blocks/blocks/evaluator'
 import { ExaBlock } from '@/blocks/blocks/exa'
-import { ExecuteCommandBlock } from '@/blocks/blocks/execute-command'
 import { FileBlock, FileV2Block, FileV3Block } from '@/blocks/blocks/file'
 import { FirecrawlBlock } from '@/blocks/blocks/firecrawl'
 import { FirefliesBlock, FirefliesV2Block } from '@/blocks/blocks/fireflies'
@@ -236,7 +235,6 @@ export const registry: Record<string, BlockConfig> = {
 elevenlabs: ElevenLabsBlock,
 enrich: EnrichBlock,
 evaluator: EvaluatorBlock,
-execute_command: ExecuteCommandBlock,
 exa: ExaBlock,
 file: FileBlock,
 file_v2: FileV2Block,

@@ -367,34 +367,6 @@ export function CodeIcon(props: SVGProps<SVGSVGElement>) {
 )
 }
-export function TerminalIcon(props: SVGProps<SVGSVGElement>) {
-return (
-<svg
-{...props}
-width='24'
-height='24'
-viewBox='0 0 24 24'
-fill='none'
-xmlns='http://www.w3.org/2000/svg'
->
-<path
-d='M4 17L10 11L4 5'
-stroke='currentColor'
-strokeWidth='2'
-strokeLinecap='round'
-strokeLinejoin='round'
-/>
-<path
-d='M12 19H20'
-stroke='currentColor'
-strokeWidth='2'
-strokeLinecap='round'
-strokeLinejoin='round'
-/>
-</svg>
-)
-}
 export function ChartBarIcon(props: SVGProps<SVGSVGElement>) {
 return (
 <svg


@@ -24,7 +24,6 @@ export enum BlockType {
 TRIGGER = 'trigger',
 FUNCTION = 'function',
-EXECUTE_COMMAND = 'execute_command',
 AGENT = 'agent',
 API = 'api',
 EVALUATOR = 'evaluator',


@@ -1,55 +0,0 @@
import { DEFAULT_EXECUTION_TIMEOUT_MS } from '@/lib/execution/constants'
import { BlockType } from '@/executor/constants'
import type { BlockHandler, ExecutionContext } from '@/executor/types'
import { collectBlockData } from '@/executor/utils/block-data'
import type { SerializedBlock } from '@/serializer/types'
import { executeTool } from '@/tools'
/**
* Handler for Execute Command blocks that run shell commands on the host machine.
*/
export class ExecuteCommandBlockHandler implements BlockHandler {
canHandle(block: SerializedBlock): boolean {
return block.metadata?.id === BlockType.EXECUTE_COMMAND
}
async execute(
ctx: ExecutionContext,
block: SerializedBlock,
inputs: Record<string, any>
): Promise<any> {
const { blockData, blockNameMapping, blockOutputSchemas } = collectBlockData(ctx)
const result = await executeTool(
'execute_command_run',
{
command: inputs.command,
timeout: inputs.timeout || DEFAULT_EXECUTION_TIMEOUT_MS,
workingDirectory: inputs.workingDirectory,
envVars: ctx.environmentVariables || {},
workflowVariables: ctx.workflowVariables || {},
blockData,
blockNameMapping,
blockOutputSchemas,
_context: {
workflowId: ctx.workflowId,
workspaceId: ctx.workspaceId,
userId: ctx.userId,
isDeployedContext: ctx.isDeployedContext,
enforceCredentialAccess: ctx.enforceCredentialAccess,
},
},
false,
ctx
)
if (!result.success) {
return {
...(result.output ?? { stdout: '', stderr: '', exitCode: 1 }),
error: result.error || 'Command execution failed',
}
}
return { ...result.output, error: null }
}
}


@@ -2,7 +2,6 @@ import { AgentBlockHandler } from '@/executor/handlers/agent/agent-handler'
 import { ApiBlockHandler } from '@/executor/handlers/api/api-handler'
 import { ConditionBlockHandler } from '@/executor/handlers/condition/condition-handler'
 import { EvaluatorBlockHandler } from '@/executor/handlers/evaluator/evaluator-handler'
-import { ExecuteCommandBlockHandler } from '@/executor/handlers/execute-command/execute-command-handler'
 import { FunctionBlockHandler } from '@/executor/handlers/function/function-handler'
 import { GenericBlockHandler } from '@/executor/handlers/generic/generic-handler'
 import { HumanInTheLoopBlockHandler } from '@/executor/handlers/human-in-the-loop/human-in-the-loop-handler'
@@ -18,7 +17,6 @@ export {
 ApiBlockHandler,
 ConditionBlockHandler,
 EvaluatorBlockHandler,
-ExecuteCommandBlockHandler,
 FunctionBlockHandler,
 GenericBlockHandler,
 ResponseBlockHandler,


@@ -9,7 +9,6 @@ import { AgentBlockHandler } from '@/executor/handlers/agent/agent-handler'
 import { ApiBlockHandler } from '@/executor/handlers/api/api-handler'
 import { ConditionBlockHandler } from '@/executor/handlers/condition/condition-handler'
 import { EvaluatorBlockHandler } from '@/executor/handlers/evaluator/evaluator-handler'
-import { ExecuteCommandBlockHandler } from '@/executor/handlers/execute-command/execute-command-handler'
 import { FunctionBlockHandler } from '@/executor/handlers/function/function-handler'
 import { GenericBlockHandler } from '@/executor/handlers/generic/generic-handler'
 import { HumanInTheLoopBlockHandler } from '@/executor/handlers/human-in-the-loop/human-in-the-loop-handler'
@@ -31,7 +30,6 @@ export function createBlockHandlers(): BlockHandler[] {
 return [
 new TriggerBlockHandler(),
 new FunctionBlockHandler(),
-new ExecuteCommandBlockHandler(),
 new ApiBlockHandler(),
 new ConditionBlockHandler(),
 new RouterBlockHandler(),


@@ -288,9 +288,6 @@ export const env = createEnv({
 E2B_ENABLED: z.string().optional(), // Enable E2B remote code execution
 E2B_API_KEY: z.string().optional(), // E2B API key for sandbox creation
-// Execute Command - for self-hosted deployments
-EXECUTE_COMMAND_ENABLED: z.string().optional(), // Enable shell command execution (self-hosted only)
 // Credential Sets (Email Polling) - for self-hosted deployments
 CREDENTIAL_SETS_ENABLED: z.boolean().optional(), // Enable credential sets on self-hosted (bypasses plan requirements)
@@ -369,7 +366,6 @@ export const env = createEnv({
 NEXT_PUBLIC_SUPPORT_EMAIL: z.string().email().optional(), // Custom support email
 NEXT_PUBLIC_E2B_ENABLED: z.string().optional(),
-NEXT_PUBLIC_EXECUTE_COMMAND_ENABLED: z.string().optional(),
 NEXT_PUBLIC_COPILOT_TRAINING_ENABLED: z.string().optional(),
 NEXT_PUBLIC_ENABLE_PLAYGROUND: z.string().optional(), // Enable component playground at /playground
 NEXT_PUBLIC_DOCUMENTATION_URL: z.string().url().optional(), // Custom documentation URL
@@ -424,7 +420,6 @@ export const env = createEnv({
 NEXT_PUBLIC_DISABLE_PUBLIC_API: process.env.NEXT_PUBLIC_DISABLE_PUBLIC_API,
 NEXT_PUBLIC_EMAIL_PASSWORD_SIGNUP_ENABLED: process.env.NEXT_PUBLIC_EMAIL_PASSWORD_SIGNUP_ENABLED,
 NEXT_PUBLIC_E2B_ENABLED: process.env.NEXT_PUBLIC_E2B_ENABLED,
-NEXT_PUBLIC_EXECUTE_COMMAND_ENABLED: process.env.NEXT_PUBLIC_EXECUTE_COMMAND_ENABLED,
 NEXT_PUBLIC_COPILOT_TRAINING_ENABLED: process.env.NEXT_PUBLIC_COPILOT_TRAINING_ENABLED,
 NEXT_PUBLIC_ENABLE_PLAYGROUND: process.env.NEXT_PUBLIC_ENABLE_PLAYGROUND,
 NEXT_PUBLIC_POSTHOG_ENABLED: process.env.NEXT_PUBLIC_POSTHOG_ENABLED,


@@ -105,31 +105,6 @@ export const isOrganizationsEnabled =
 */
 export const isE2bEnabled = isTruthy(env.E2B_ENABLED)
-/**
- * Is Execute Command enabled for running shell commands on the host machine.
- * Only available for self-hosted deployments. Blocked on hosted environments.
- */
-export const isExecuteCommandEnabled = isTruthy(env.EXECUTE_COMMAND_ENABLED) && !isHosted
-if (isTruthy(env.EXECUTE_COMMAND_ENABLED)) {
-import('@sim/logger')
-.then(({ createLogger }) => {
-const logger = createLogger('FeatureFlags')
-if (isHosted) {
-logger.error(
-'EXECUTE_COMMAND_ENABLED is set but ignored on hosted environment. Shell command execution is not available on hosted instances for security.'
-)
-} else {
-logger.warn(
-'EXECUTE_COMMAND_ENABLED is enabled. Workflows can execute arbitrary shell commands on the host machine. Only use this in trusted environments.'
-)
-}
-})
-.catch(() => {
-// Fallback during config compilation when logger is unavailable
-})
-}
/**
* Are invitations disabled globally
* When true, workspace invitations are disabled for all users


@@ -1,122 +0,0 @@
import { DEFAULT_EXECUTION_TIMEOUT_MS } from '@/lib/execution/constants'
import type { ExecuteCommandInput, ExecuteCommandOutput } from '@/tools/execute-command/types'
import type { ToolConfig } from '@/tools/types'
export const executeCommandRunTool: ToolConfig<ExecuteCommandInput, ExecuteCommandOutput> = {
id: 'execute_command_run',
name: 'Execute Command',
description:
'Execute a shell command on the host machine. Only available for self-hosted deployments with EXECUTE_COMMAND_ENABLED=true.',
version: '1.0.0',
params: {
command: {
type: 'string',
required: true,
visibility: 'user-or-llm',
description: 'The shell command to execute on the host machine',
},
timeout: {
type: 'number',
required: false,
visibility: 'user-or-llm',
description: 'Execution timeout in milliseconds',
default: DEFAULT_EXECUTION_TIMEOUT_MS,
},
workingDirectory: {
type: 'string',
required: false,
visibility: 'user-or-llm',
description: 'Working directory for the command. Defaults to the server process cwd.',
},
envVars: {
type: 'object',
required: false,
visibility: 'hidden',
description: 'Environment variables to make available during execution',
default: {},
},
blockData: {
type: 'object',
required: false,
visibility: 'hidden',
description: 'Block output data for variable resolution',
default: {},
},
blockNameMapping: {
type: 'object',
required: false,
visibility: 'hidden',
description: 'Mapping of block names to block IDs',
default: {},
},
blockOutputSchemas: {
type: 'object',
required: false,
visibility: 'hidden',
description: 'Mapping of block IDs to their output schemas for validation',
default: {},
},
workflowVariables: {
type: 'object',
required: false,
visibility: 'hidden',
description: 'Workflow variables for <variable.name> resolution',
default: {},
},
},
request: {
url: '/api/tools/execute-command/run',
method: 'POST',
headers: () => ({
'Content-Type': 'application/json',
}),
body: (params: ExecuteCommandInput) => ({
command: params.command,
timeout: params.timeout || DEFAULT_EXECUTION_TIMEOUT_MS,
workingDirectory: params.workingDirectory,
envVars: params.envVars || {},
workflowVariables: params.workflowVariables || {},
blockData: params.blockData || {},
blockNameMapping: params.blockNameMapping || {},
blockOutputSchemas: params.blockOutputSchemas || {},
workflowId: params._context?.workflowId,
}),
},
transformResponse: async (response: Response): Promise<ExecuteCommandOutput> => {
const result = await response.json()
if (!result.success) {
return {
success: false,
output: {
stdout: result.output?.stdout || '',
stderr: result.output?.stderr || '',
exitCode: result.output?.exitCode ?? 1,
},
error: result.error,
}
}
return {
success: true,
output: {
stdout: result.output.stdout,
stderr: result.output.stderr,
exitCode: result.output.exitCode,
},
}
},
outputs: {
stdout: { type: 'string', description: 'Standard output from the command' },
stderr: { type: 'string', description: 'Standard error output from the command' },
exitCode: { type: 'number', description: 'Exit code of the command (0 = success)' },
error: {
type: 'string',
description: 'Error message if the command timed out or exceeded buffer limits',
},
},
}
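One detail of the deleted `transformResponse` worth noting is its failure-path normalization: the string fields default via `||`, but `exitCode` uses `??`, so an explicitly reported exit code of `0` survives and only `null`/`undefined` fall back to `1`. A self-contained sketch of just that branch (the backend payload shape is assumed to mirror `ExecuteCommandOutput`, not confirmed by this diff):

```typescript
// Standalone sketch of the failure-path normalization from the deleted
// transformResponse above; the input payload shape is an assumption.
interface CommandOutput {
  stdout: string
  stderr: string
  exitCode: number
}

function normalizeFailure(result: {
  output?: Partial<CommandOutput>
  error?: string
}): { success: false; output: CommandOutput; error?: string } {
  return {
    success: false,
    output: {
      stdout: result.output?.stdout || '',
      stderr: result.output?.stderr || '',
      // `??` preserves an explicit exit code of 0; `||` would replace it.
      exitCode: result.output?.exitCode ?? 1,
    },
    error: result.error,
  }
}
```

The mixed operators are deliberate: for stdout/stderr an empty string is an acceptable default either way, while for `exitCode` the value `0` is meaningful and must not be coerced away.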

@@ -1,2 +0,0 @@
export { executeCommandRunTool } from '@/tools/execute-command/execute'
export type { ExecuteCommandInput, ExecuteCommandOutput } from '@/tools/execute-command/types'

@@ -1,24 +0,0 @@
import type { ToolResponse } from '@/tools/types'
export interface ExecuteCommandInput {
command: string
timeout?: number
workingDirectory?: string
envVars?: Record<string, string>
workflowVariables?: Record<string, unknown>
blockData?: Record<string, unknown>
blockNameMapping?: Record<string, string>
blockOutputSchemas?: Record<string, Record<string, unknown>>
_context?: {
workflowId?: string
userId?: string
}
}
export interface ExecuteCommandOutput extends ToolResponse {
output: {
stdout: string
stderr: string
exitCode: number
}
}

@@ -433,7 +433,6 @@ import {
exaResearchTool,
exaSearchTool,
} from '@/tools/exa'
import { executeCommandRunTool } from '@/tools/execute-command'
import { fileParserV2Tool, fileParserV3Tool, fileParseTool } from '@/tools/file'
import {
firecrawlAgentTool,
@@ -2338,7 +2337,6 @@ export const tools: Record<string, ToolConfig> = {
webhook_request: webhookRequestTool,
huggingface_chat: huggingfaceChatTool,
llm_chat: llmChatTool,
execute_command_run: executeCommandRunTool,
function_execute: functionExecuteTool,
gamma_generate: gammaGenerateTool,
gamma_generate_from_template: gammaGenerateFromTemplateTool,
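The registry hunk above drops the `execute_command_run` entry from the string-keyed `tools` map. A minimal sketch of how lookups against such a map behave once an entry is removed (the reduced `MiniToolConfig` type and `getTool` helper are illustrative, not the repo's actual API):

```typescript
// Illustrative string-keyed tool registry; fields are trimmed to the
// minimum needed here and do not reflect the repo's full ToolConfig type.
interface MiniToolConfig {
  id: string
  name: string
}

const tools: Record<string, MiniToolConfig> = {
  function_execute: { id: 'function_execute', name: 'Function Execute' },
  gamma_generate: { id: 'gamma_generate', name: 'Gamma Generate' },
}

function getTool(id: string): MiniToolConfig {
  const tool = tools[id]
  if (!tool) {
    // Removed ids (e.g. execute_command_run after this change) fail fast.
    throw new Error(`Tool not found: ${id}`)
  }
  return tool
}
```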

@@ -206,10 +206,6 @@ app:
DISABLE_PUBLIC_API: "" # Set to "true" to disable public API toggle globally
NEXT_PUBLIC_DISABLE_PUBLIC_API: "" # Set to "true" to hide public API toggle in UI
# Execute Command (Self-Hosted Only - runs shell commands on host machine)
EXECUTE_COMMAND_ENABLED: "" # Set to "true" to enable shell command execution in workflows (self-hosted only)
NEXT_PUBLIC_EXECUTE_COMMAND_ENABLED: "" # Set to "true" to show Execute Command block in UI
# SSO Configuration (Enterprise Single Sign-On)
# Set to "true" AFTER running the SSO registration script
SSO_ENABLED: "" # Enable SSO authentication ("true" to enable)