mirror of
https://github.com/simstudioai/sim.git
synced 2026-04-06 03:00:16 -04:00
* fix: prevent auth bypass via user-controlled context query param in file serve — The /api/files/serve endpoint trusted a user-supplied `context` query parameter to skip authentication. An attacker could append `?context=profile-pictures` to any file URL and download files without auth. The public access gate now checks the key prefix instead of the query param, and `og-images/` is added to `inferContextFromKey`.
* fix: use randomized heredoc delimiter in SSH execute-script route — Prevents accidental heredoc termination if script content contains the delimiter string on its own line.
* fix: escape workingDirectory in SSH execute-command route — Use escapeShellArg() with single quotes for the workingDirectory parameter, consistent with all other SSH routes (execute-script, create-directory, delete-file, move-rename).
* fix: harden chat/form deployment auth (OTP brute-force, CSPRNG, HMAC tokens) — Add brute-force protection to OTP verification with attempt tracking (CWE-307); replace Math.random() with crypto.randomInt() for OTP generation (CWE-338); replace unsigned Base64 auth tokens with HMAC-SHA256 signed tokens (CWE-327); use the shared isEmailAllowed utility in the OTP route instead of an inline duplicate; simplify the Redis OTP update to a single KEEPTTL call.
* fix: harden SSRF protections and input validation across API routes — Add DNS-based SSRF validation for MCP server URLs, secure OIDC discovery with IP-pinned fetch, strengthen OTP/chat/form input validation, sanitize 1Password vault parameters, and tighten deployment security checks.
* lint
* fix(file-serve): remove user-controlled context param from authenticated path — The `?context` query param was still being passed to `handleCloudProxy` in the authenticated code path, allowing any logged-in user to spoof context as `profile-pictures` and bypass ownership checks in `verifyFileAccess`. Now always use `inferContextFromKey` on the server-controlled key prefix.
* fix: handle legacy OTP format in decodeOTPValue for deploy-time compat — Add a guard for OTP values without a colon separator (pre-deploy format) to avoid a misparse that would lock out users with in-flight OTPs.
* fix(mcp): distinguish DNS resolution failures from SSRF policy blocks — DNS lookup failures now throw McpDnsResolutionError (502) instead of McpSsrfError (403), so transient DNS hiccups surface as retryable upstream errors rather than confusing permission rejections.
* fix: make OTP attempt counting atomic to prevent TOCTOU race — Redis path: use a Lua script for atomic read-increment-conditional-delete. DB path: use optimistic locking (UPDATE WHERE value = currentValue) with a re-read fallback on conflict. Prevents concurrent wrong guesses from each counting as a single attempt.
* fix: check attempt count before OTP comparison to prevent bypass — Reject OTPs that have already reached max failed attempts before comparing the code, closing a race window where a correct guess could bypass brute-force protection.
* fix: validate OIDC discovered endpoints against SSRF — The discovery URL itself was SSRF-validated, but endpoint URLs returned in the discovery document (tokenEndpoint, userInfoEndpoint, jwksEndpoint) were stored without validation. A malicious OIDC issuer on a public IP could return internal-network URLs in the discovery response.
* fix: remove duplicate OIDC endpoint SSRF validation block
* fix: validate OIDC discovered endpoints and pin DNS for 1Password Connect — SSRF-validate all endpoint URLs returned by OIDC discovery documents before storing them (authorization, token, userinfo, jwks endpoints); pin DNS resolution in 1Password Connect requests using secureFetchWithPinnedIP to prevent TOCTOU DNS rebinding attacks.
* lint
* fix: replace KEEPTTL with TTL+EX for Redis <6.0 compat, add DB retry loop — The Lua script now reads the TTL and uses SET ... EX instead of KEEPTTL; DB optimistic locking now retries up to 3 times on conflict.
* fix: address review feedback on OTP atomicity and 1Password fetch — Replace Redis KEEPTTL with TTL+SET EX for Redis <6.0 compatibility; add a retry loop to the DB optimistic-lock path so concurrent OTP attempts are actually counted instead of silently dropped; remove the unreachable fallback fetch in 1Password Connect and make validateConnectServerUrl return a non-nullable string.
* fix: treat Lua nil return as locked when OTP key is missing — When the Redis key is deleted/expired between getOTP and incrementOTPAttempts, the Lua script returns nil. Handle this as 'locked' instead of silently treating it as 'incremented'.
* fix: handle Lua nil as locked OTP and add SSRF check to MCP env resolution — Treat a Redis Lua nil return (expired/deleted key) as 'locked' instead of a successful increment; add validateMcpServerSsrf to the MCP service's resolveConfigEnvVars so env-var URLs are SSRF-validated after resolution at execution time.
* fix: narrow resolvedIP type guard instead of non-null assertion — Replace urlValidation.resolvedIP! with proper type narrowing by adding !urlValidation.resolvedIP to the guard clause, so TypeScript can infer the string type without a fragile assertion.
* fix: bind auth tokens to deployment password for immediate revocation — Include a SHA-256 hash of the encrypted password in the HMAC-signed token payload. Changing the deployment password now immediately invalidates all existing auth cookies, restoring the pre-HMAC behavior.
* fix: bind auth tokens to deployment password and remove resolvedIP non-null assertion — Include the SHA-256 hash of encryptedPassword in the HMAC token payload so changing a deployment's password immediately invalidates all sessions; pass encryptedPassword through setChatAuthCookie/setFormAuthCookie and validateAuthToken at all call sites; replace the non-null assertion on resolvedIP with a proper narrowing guard.
* fix: update test assertions for new encryptedPassword parameter — Tests now expect the encryptedPassword arg passed to validateAuthToken and setDeploymentAuthCookie after the password-binding change.
* fix: format long lines in chat/form test assertions
* fix: pass encryptedPassword through OTP route cookie generation — Select chat.password in the PUT handler DB query and pass it to setChatAuthCookie so OTP-issued tokens include the correct password slot for subsequent validation.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
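The commit series above describes HMAC-SHA256 signed deployment auth tokens that embed a SHA-256 hash of the encrypted password, so rotating the password invalidates every outstanding token. A minimal sketch of that scheme using Node's built-in `crypto` (the function names and payload shape here are illustrative assumptions, not the repository's actual implementation):

```typescript
import { createHash, createHmac, timingSafeEqual } from 'crypto'

// Hypothetical sketch: sign a token whose payload carries a hash of the
// encrypted password ("password slot"). Changing the password changes the
// slot, so previously issued tokens stop validating immediately.
function signAuthToken(deploymentId: string, encryptedPassword: string, secret: string): string {
  const passwordSlot = createHash('sha256').update(encryptedPassword).digest('hex')
  const payload = Buffer.from(JSON.stringify({ deploymentId, passwordSlot })).toString('base64url')
  const signature = createHmac('sha256', secret).update(payload).digest('base64url')
  return `${payload}.${signature}`
}

function validateAuthToken(token: string, encryptedPassword: string, secret: string): boolean {
  const [payload, signature] = token.split('.')
  if (!payload || !signature) return false
  // Verify the HMAC first, in constant time, before trusting the payload.
  const expected = createHmac('sha256', secret).update(payload).digest('base64url')
  const a = Buffer.from(signature)
  const b = Buffer.from(expected)
  if (a.length !== b.length || !timingSafeEqual(a, b)) return false
  // Then check the password slot against the current encrypted password.
  const decoded = JSON.parse(Buffer.from(payload, 'base64url').toString()) as {
    deploymentId: string
    passwordSlot: string
  }
  return decoded.passwordSlot === createHash('sha256').update(encryptedPassword).digest('hex')
}
```

Verifying the signature before parsing the payload keeps untrusted input out of `JSON.parse`, and `timingSafeEqual` avoids leaking the comparison result through timing.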
422 lines
12 KiB
TypeScript
import dns from 'dns/promises'
import http from 'http'
import https from 'https'
import type { LookupFunction } from 'net'
import { createLogger } from '@sim/logger'
import * as ipaddr from 'ipaddr.js'
import { type ValidationResult, validateExternalUrl } from '@/lib/core/security/input-validation'

const logger = createLogger('InputValidation')

/**
 * Result type for async URL validation with resolved IP
 */
export interface AsyncValidationResult extends ValidationResult {
  resolvedIP?: string
  originalHostname?: string
}

/**
 * Checks if an IP address is private or reserved (not routable on the public internet).
 * Uses ipaddr.js for robust handling of all IP formats, including:
 * - Octal notation (0177.0.0.1)
 * - Hex notation (0x7f000001)
 * - IPv4-mapped IPv6 (::ffff:127.0.0.1)
 * - Various edge cases that regex patterns miss
 */
export function isPrivateOrReservedIP(ip: string): boolean {
  try {
    if (!ipaddr.isValid(ip)) {
      return true
    }

    const addr = ipaddr.process(ip)
    const range = addr.range()

    return range !== 'unicast'
  } catch {
    return true
  }
}

/**
 * Validates a URL and resolves its DNS to prevent SSRF via DNS rebinding.
 *
 * This function:
 * 1. Performs basic URL validation (protocol, format)
 * 2. Resolves the hostname to an IP address
 * 3. Validates that the resolved IP is not private/reserved
 * 4. Returns the resolved IP for use in the actual request
 *
 * @param url - The URL to validate
 * @param paramName - Name of the parameter for error messages
 * @param options - Set allowHttp to permit plain-http URLs
 * @returns AsyncValidationResult with resolved IP for DNS pinning
 */
export async function validateUrlWithDNS(
  url: string | null | undefined,
  paramName = 'url',
  options: { allowHttp?: boolean } = {}
): Promise<AsyncValidationResult> {
  const basicValidation = validateExternalUrl(url, paramName, options)
  if (!basicValidation.isValid) {
    return basicValidation
  }

  const parsedUrl = new URL(url!)
  const hostname = parsedUrl.hostname

  const hostnameLower = hostname.toLowerCase()
  const cleanHostname =
    hostnameLower.startsWith('[') && hostnameLower.endsWith(']')
      ? hostnameLower.slice(1, -1)
      : hostnameLower

  let isLocalhost = cleanHostname === 'localhost'
  if (ipaddr.isValid(cleanHostname)) {
    const processedIP = ipaddr.process(cleanHostname).toString()
    if (processedIP === '127.0.0.1' || processedIP === '::1') {
      isLocalhost = true
    }
  }

  try {
    const { address } = await dns.lookup(cleanHostname, { verbatim: true })

    const resolvedIsLoopback =
      ipaddr.isValid(address) &&
      (() => {
        const ip = ipaddr.process(address).toString()
        return ip === '127.0.0.1' || ip === '::1'
      })()

    if (
      isPrivateOrReservedIP(address) &&
      !(isLocalhost && resolvedIsLoopback && !options.allowHttp)
    ) {
      logger.warn('URL resolves to blocked IP address', {
        paramName,
        hostname,
        resolvedIP: address,
      })
      return {
        isValid: false,
        error: `${paramName} resolves to a blocked IP address`,
      }
    }

    return {
      isValid: true,
      resolvedIP: address,
      originalHostname: hostname,
    }
  } catch (error) {
    logger.warn('DNS lookup failed for URL', {
      paramName,
      hostname,
      error: error instanceof Error ? error.message : String(error),
    })
    return {
      isValid: false,
      error: `${paramName} hostname could not be resolved`,
    }
  }
}

/**
 * Validates a database hostname by resolving DNS and checking the resolved IP
 * against private/reserved ranges to prevent SSRF via database connections.
 *
 * Unlike validateHostname (which enforces strict RFC hostname format), this
 * function is permissive about hostname format to avoid breaking legitimate
 * database hostnames (e.g. underscores in Docker/K8s service names). It only
 * blocks localhost and private/reserved IPs.
 *
 * @param host - The database hostname to validate
 * @param paramName - Name of the parameter for error messages
 * @returns AsyncValidationResult with resolved IP
 */
export async function validateDatabaseHost(
  host: string | null | undefined,
  paramName = 'host'
): Promise<AsyncValidationResult> {
  if (!host) {
    return { isValid: false, error: `${paramName} is required` }
  }

  const lowerHost = host.toLowerCase()

  if (lowerHost === 'localhost') {
    return { isValid: false, error: `${paramName} cannot be localhost` }
  }

  if (ipaddr.isValid(lowerHost) && isPrivateOrReservedIP(lowerHost)) {
    return { isValid: false, error: `${paramName} cannot be a private IP address` }
  }

  try {
    const { address } = await dns.lookup(host, { verbatim: true })

    if (isPrivateOrReservedIP(address)) {
      logger.warn('Database host resolves to blocked IP address', {
        paramName,
        hostname: host,
        resolvedIP: address,
      })
      return {
        isValid: false,
        error: `${paramName} resolves to a blocked IP address`,
      }
    }

    return {
      isValid: true,
      resolvedIP: address,
      originalHostname: host,
    }
  } catch (error) {
    logger.warn('DNS lookup failed for database host', {
      paramName,
      hostname: host,
      error: error instanceof Error ? error.message : String(error),
    })
    return {
      isValid: false,
      error: `${paramName} hostname could not be resolved`,
    }
  }
}

export interface SecureFetchOptions {
  method?: string
  headers?: Record<string, string>
  body?: string | Buffer | Uint8Array
  timeout?: number
  maxRedirects?: number
  maxResponseBytes?: number
}

export class SecureFetchHeaders {
  private headers: Map<string, string>

  constructor(headers: Record<string, string>) {
    this.headers = new Map(Object.entries(headers).map(([k, v]) => [k.toLowerCase(), v]))
  }

  get(name: string): string | null {
    return this.headers.get(name.toLowerCase()) ?? null
  }

  toRecord(): Record<string, string> {
    const record: Record<string, string> = {}
    for (const [key, value] of this.headers) {
      record[key] = value
    }
    return record
  }

  [Symbol.iterator]() {
    return this.headers.entries()
  }
}

export interface SecureFetchResponse {
  ok: boolean
  status: number
  statusText: string
  headers: SecureFetchHeaders
  text: () => Promise<string>
  json: () => Promise<unknown>
  arrayBuffer: () => Promise<ArrayBuffer>
}

const DEFAULT_MAX_REDIRECTS = 5

function isRedirectStatus(status: number): boolean {
  return status >= 300 && status < 400 && status !== 304
}

function resolveRedirectUrl(baseUrl: string, location: string): string {
  try {
    return new URL(location, baseUrl).toString()
  } catch {
    throw new Error(`Invalid redirect location: ${location}`)
  }
}

/**
 * Performs a fetch with IP pinning to prevent DNS rebinding attacks.
 * Uses the pre-resolved IP address while preserving the original hostname for TLS SNI.
 * Follows redirects securely by validating each redirect target.
 */
export async function secureFetchWithPinnedIP(
  url: string,
  resolvedIP: string,
  options: SecureFetchOptions & { allowHttp?: boolean } = {},
  redirectCount = 0
): Promise<SecureFetchResponse> {
  const maxRedirects = options.maxRedirects ?? DEFAULT_MAX_REDIRECTS
  const maxResponseBytes = options.maxResponseBytes

  return new Promise((resolve, reject) => {
    const parsed = new URL(url)
    const isHttps = parsed.protocol === 'https:'
    const defaultPort = isHttps ? 443 : 80
    const port = parsed.port ? Number.parseInt(parsed.port, 10) : defaultPort

    const isIPv6 = resolvedIP.includes(':')
    const family = isIPv6 ? 6 : 4

    const lookup: LookupFunction = (_hostname, options, callback) => {
      if (options.all) {
        callback(null, [{ address: resolvedIP, family }])
      } else {
        callback(null, resolvedIP, family)
      }
    }

    const agentOptions: http.AgentOptions = { lookup }

    const agent = isHttps ? new https.Agent(agentOptions) : new http.Agent(agentOptions)

    const { 'accept-encoding': _, ...sanitizedHeaders } = options.headers ?? {}

    const requestOptions: http.RequestOptions = {
      hostname: parsed.hostname,
      port,
      path: parsed.pathname + parsed.search,
      method: options.method || 'GET',
      headers: sanitizedHeaders,
      agent,
      timeout: options.timeout || 300000,
    }

    const protocol = isHttps ? https : http
    const req = protocol.request(requestOptions, (res) => {
      const statusCode = res.statusCode || 0
      const location = res.headers.location

      if (isRedirectStatus(statusCode) && location && redirectCount < maxRedirects) {
        res.resume()
        const redirectUrl = resolveRedirectUrl(url, location)

        validateUrlWithDNS(redirectUrl, 'redirectUrl', { allowHttp: options.allowHttp })
          .then((validation) => {
            if (!validation.isValid) {
              reject(new Error(`Redirect blocked: ${validation.error}`))
              return
            }
            return secureFetchWithPinnedIP(
              redirectUrl,
              validation.resolvedIP!,
              options,
              redirectCount + 1
            )
          })
          .then((response) => {
            if (response) resolve(response)
          })
          .catch(reject)
        return
      }

      if (isRedirectStatus(statusCode) && location && redirectCount >= maxRedirects) {
        res.resume()
        reject(new Error(`Too many redirects (max: ${maxRedirects})`))
        return
      }

      const chunks: Buffer[] = []
      let totalBytes = 0
      let responseTerminated = false

      res.on('data', (chunk: Buffer) => {
        if (responseTerminated) return

        totalBytes += chunk.length
        if (
          typeof maxResponseBytes === 'number' &&
          maxResponseBytes > 0 &&
          totalBytes > maxResponseBytes
        ) {
          responseTerminated = true
          res.destroy(new Error(`Response exceeded maximum size of ${maxResponseBytes} bytes`))
          return
        }

        chunks.push(chunk)
      })

      res.on('error', (error) => {
        reject(error)
      })

      res.on('end', () => {
        if (responseTerminated) return
        const bodyBuffer = Buffer.concat(chunks)
        const body = bodyBuffer.toString('utf-8')
        const headersRecord: Record<string, string> = {}
        for (const [key, value] of Object.entries(res.headers)) {
          if (typeof value === 'string') {
            headersRecord[key.toLowerCase()] = value
          } else if (Array.isArray(value)) {
            headersRecord[key.toLowerCase()] = value.join(', ')
          }
        }

        resolve({
          ok: statusCode >= 200 && statusCode < 300,
          status: statusCode,
          statusText: res.statusMessage || '',
          headers: new SecureFetchHeaders(headersRecord),
          text: async () => body,
          json: async () => JSON.parse(body),
          arrayBuffer: async () =>
            bodyBuffer.buffer.slice(
              bodyBuffer.byteOffset,
              bodyBuffer.byteOffset + bodyBuffer.byteLength
            ),
        })
      })
    })

    req.on('error', (error) => {
      reject(error)
    })

    req.on('timeout', () => {
      req.destroy()
      reject(new Error(`Request timed out after ${requestOptions.timeout}ms`))
    })

    if (options.body) {
      req.write(options.body)
    }

    req.end()
  })
}

/**
 * Validates a URL and performs a secure fetch with DNS pinning in one call.
 * Combines validateUrlWithDNS and secureFetchWithPinnedIP for convenience.
 *
 * @param url - The URL to fetch
 * @param options - Fetch options (method, headers, body, etc.)
 * @param paramName - Name of the parameter for error messages (default: 'url')
 * @returns SecureFetchResponse
 * @throws Error if URL validation fails
 */
export async function secureFetchWithValidation(
  url: string,
  options: SecureFetchOptions & { allowHttp?: boolean } = {},
  paramName = 'url'
): Promise<SecureFetchResponse> {
  const validation = await validateUrlWithDNS(url, paramName, {
    allowHttp: options.allowHttp,
  })
  if (!validation.isValid) {
    throw new Error(validation.error)
  }
  return secureFetchWithPinnedIP(url, validation.resolvedIP!, options)
}
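The pinning trick at the heart of `secureFetchWithPinnedIP` can be illustrated in isolation: the custom `lookup` handed to the HTTP agent ignores the hostname entirely and always returns the pre-resolved IP, so the socket connects to the validated address while TLS SNI and the Host header still carry the original hostname. A minimal self-contained sketch (the factory name and example IPs are illustrative, not part of this module):

```typescript
import type { LookupFunction } from 'net'

// Hypothetical helper mirroring the inline lookup above: whatever hostname
// the agent asks about, always answer with the pinned, pre-validated IP.
// This closes the TOCTOU window where a second DNS resolution could return
// a different (e.g. internal) address than the one that passed validation.
function makePinnedLookup(pinnedIP: string): LookupFunction {
  const family = pinnedIP.includes(':') ? 6 : 4
  return (_hostname, options, callback) => {
    if (options.all) {
      // `all: true` callers expect an array of address records
      callback(null, [{ address: pinnedIP, family }])
    } else {
      callback(null, pinnedIP, family)
    }
  }
}
```

Passing the result as `{ lookup: makePinnedLookup(ip) }` in `http.AgentOptions` is exactly the shape the agent expects, which is why the main function can reuse one agent per request without touching global DNS behavior.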