mirror of
https://github.com/simstudioai/sim.git
synced 2026-04-06 03:00:16 -04:00
* fix: prevent auth bypass via user-controlled context query param in file serve. The /api/files/serve endpoint trusted a user-supplied `context` query parameter to skip authentication. An attacker could append `?context=profile-pictures` to any file URL and download files without auth. Now the public access gate checks the key prefix instead of the query param, and `og-images/` is added to `inferContextFromKey`.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: use randomized heredoc delimiter in SSH execute-script route. Prevents accidental heredoc termination if the script content contains the delimiter string on its own line.
* fix: escape workingDirectory in SSH execute-command route. Use escapeShellArg() with single quotes for the workingDirectory parameter, consistent with all other SSH routes (execute-script, create-directory, delete-file, move-rename).
* fix: harden chat/form deployment auth (OTP brute-force, CSPRNG, HMAC tokens)
  - Add brute-force protection to OTP verification with attempt tracking (CWE-307)
  - Replace Math.random() with crypto.randomInt() for OTP generation (CWE-338)
  - Replace unsigned Base64 auth tokens with HMAC-SHA256 signed tokens (CWE-327)
  - Use the shared isEmailAllowed utility in the OTP route instead of an inline duplicate
  - Simplify the Redis OTP update to a single KEEPTTL call
* fix: harden SSRF protections and input validation across API routes. Add DNS-based SSRF validation for MCP server URLs, secure OIDC discovery with IP-pinned fetch, strengthen OTP/chat/form input validation, sanitize 1Password vault parameters, and tighten deployment security checks.
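The Math.random()-to-crypto.randomInt() change (CWE-338) can be sketched as follows. This is an illustrative minimal version, not the repo's actual helper; the function name and digit count are assumptions:

```typescript
import { randomInt } from 'crypto'

// Math.random() is not cryptographically secure, so a 6-digit OTP derived
// from it can be predicted. crypto.randomInt() draws from the OS CSPRNG
// and avoids modulo bias by rejection sampling internally.
function generateOtp(digits = 6): string {
  // randomInt(max) returns a uniform integer in [0, max); pad to keep leading zeros.
  return randomInt(10 ** digits).toString().padStart(digits, '0')
}
```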
* lint
* fix(file-serve): remove user-controlled context param from authenticated path. The `?context` query param was still being passed to `handleCloudProxy` in the authenticated code path, allowing any logged-in user to spoof the context as `profile-pictures` and bypass ownership checks in `verifyFileAccess`. Now always use `inferContextFromKey` on the server-controlled key prefix.
* fix: handle legacy OTP format in decodeOTPValue for deploy-time compat. Add a guard for OTP values without a colon separator (pre-deploy format) to avoid a misparse that would lock out users with in-flight OTPs.
* fix(mcp): distinguish DNS resolution failures from SSRF policy blocks. DNS lookup failures now throw McpDnsResolutionError (502) instead of McpSsrfError (403), so transient DNS hiccups surface as retryable upstream errors rather than confusing permission rejections.
* fix: make OTP attempt counting atomic to prevent TOCTOU race. Redis path: use a Lua script for an atomic read-increment-conditional-delete. DB path: use optimistic locking (UPDATE WHERE value = currentValue) with a re-read fallback on conflict. Prevents concurrent wrong guesses from collapsing into a single counted attempt.
* fix: check attempt count before OTP comparison to prevent bypass. Reject OTPs that have already reached max failed attempts before comparing the code, closing a race window where a correct guess could bypass brute-force protection.
* fix: validate OIDC discovered endpoints against SSRF. The discovery URL itself was SSRF-validated, but endpoint URLs returned in the discovery document (tokenEndpoint, userInfoEndpoint, jwksEndpoint) were stored without validation. A malicious OIDC issuer on a public IP could return internal network URLs in the discovery response.
* fix: remove duplicate OIDC endpoint SSRF validation block
* fix: validate OIDC discovered endpoints and pin DNS for 1Password Connect
  - SSRF-validate all endpoint URLs returned by OIDC discovery documents before storing them (authorization, token, userinfo, jwks endpoints)
  - Pin DNS resolution in 1Password Connect requests using secureFetchWithPinnedIP to prevent TOCTOU DNS rebinding attacks
* lint
* fix: replace KEEPTTL with TTL+EX for Redis <6.0 compat, add DB retry loop
  - Lua script now reads TTL and uses SET...EX instead of KEEPTTL
  - DB optimistic locking now retries up to 3 times on conflict
* fix: address review feedback on OTP atomicity and 1Password fetch
  - Replace Redis KEEPTTL with TTL+SET EX for Redis <6.0 compatibility
  - Add a retry loop to the DB optimistic lock path so concurrent OTP attempts are actually counted instead of silently dropped
  - Remove the unreachable fallback fetch in 1Password Connect; make validateConnectServerUrl return a non-nullable string
* fix: treat Lua nil return as locked when OTP key is missing. When the Redis key is deleted/expired between getOTP and incrementOTPAttempts, the Lua script returns nil. Handle this as 'locked' instead of silently treating it as 'incremented'.
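The OIDC-endpoint SSRF checks above can be approximated with a small host filter. This is only a sketch of the simplest policy (reject non-HTTP schemes and IP-literal hosts in private ranges); the repo's real validation also resolves DNS and pins the resolved IP for the actual fetch, which this sketch deliberately omits, and the function name is made up:

```typescript
import { isIP } from 'net'

// Private/special IPv4 ranges an SSRF guard must reject when they appear
// as literal hosts in a discovered endpoint URL.
const PRIVATE_V4 = [
  /^10\./, /^127\./, /^169\.254\./, /^192\.168\./,
  /^172\.(1[6-9]|2\d|3[01])\./, /^0\./,
]

function isForbiddenUrl(raw: string): boolean {
  let url: URL
  try {
    url = new URL(raw)
  } catch {
    return true // unparseable: reject
  }
  if (url.protocol !== 'http:' && url.protocol !== 'https:') return true
  const host = url.hostname.replace(/^\[|\]$/g, '') // strip IPv6 brackets
  if (isIP(host) === 4) return PRIVATE_V4.some((re) => re.test(host))
  if (isIP(host) === 6) return host === '::1' || host.startsWith('fd') || host.startsWith('fe80')
  // Plain hostnames need DNS resolution (with IP pinning) before judging;
  // here we only catch the obvious localhost case.
  return host === 'localhost'
}
```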
* fix: handle Lua nil as locked OTP and add SSRF check to MCP env resolution
  - Treat a Redis Lua nil return (expired/deleted key) as 'locked' instead of silently treating it as a successful increment
  - Add validateMcpServerSsrf to the MCP service resolveConfigEnvVars so env-var URLs are SSRF-validated after resolution at execution time
* fix: narrow resolvedIP type guard instead of non-null assertion. Replace urlValidation.resolvedIP! with proper type narrowing by adding !urlValidation.resolvedIP to the guard clause, so TypeScript can infer the string type without a fragile assertion.
* fix: bind auth tokens to deployment password for immediate revocation. Include a SHA-256 hash of the encrypted password in the HMAC-signed token payload. Changing the deployment password now immediately invalidates all existing auth cookies, restoring the pre-HMAC behavior.
* fix: bind auth tokens to deployment password and remove resolvedIP non-null assertion
  - Include a SHA-256 hash of encryptedPassword in the HMAC token payload so changing a deployment's password immediately invalidates all sessions
  - Pass encryptedPassword through setChatAuthCookie/setFormAuthCookie and validateAuthToken at all call sites
  - Replace the non-null assertion on resolvedIP with a proper narrowing guard
* fix: update test assertions for new encryptedPassword parameter. Tests now expect the encryptedPassword arg passed to validateAuthToken and setDeploymentAuthCookie after the password-binding change.
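The password-bound HMAC token scheme described above can be sketched in a few lines. All names and the payload shape below are illustrative assumptions, not the repo's actual routes:

```typescript
import { createHash, createHmac, timingSafeEqual } from 'crypto'

// Sign a token whose payload embeds a SHA-256 hash of the encrypted password.
// Rotating the password changes that hash, so every outstanding token fails
// validation immediately (CWE-327 fix plus instant revocation).
function signToken(deploymentId: string, encryptedPassword: string, secret: string): string {
  const passwordSlot = createHash('sha256').update(encryptedPassword).digest('hex')
  const payload = `${deploymentId}:${passwordSlot}:${Date.now()}`
  const mac = createHmac('sha256', secret).update(payload).digest('hex')
  return `${Buffer.from(payload).toString('base64url')}.${mac}`
}

function validateToken(
  token: string,
  deploymentId: string,
  encryptedPassword: string,
  secret: string
): boolean {
  const [b64, mac] = token.split('.')
  if (!b64 || !mac) return false
  const payload = Buffer.from(b64, 'base64url').toString()
  const expectedMac = createHmac('sha256', secret).update(payload).digest('hex')
  if (mac.length !== expectedMac.length) return false
  // Constant-time compare to avoid leaking MAC bytes through timing.
  if (!timingSafeEqual(Buffer.from(mac), Buffer.from(expectedMac))) return false
  const [id, passwordSlot] = payload.split(':')
  return id === deploymentId && passwordSlot === createHash('sha256').update(encryptedPassword).digest('hex')
}
```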
* fix: format long lines in chat/form test assertions
* fix: pass encryptedPassword through OTP route cookie generation. Select chat.password in the PUT handler DB query and pass it to setChatAuthCookie so OTP-issued tokens include the correct password slot for subsequent validation.

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
314 lines
8.8 KiB
TypeScript
import { createHash } from 'crypto'
import { readFile } from 'fs/promises'
import { createLogger } from '@sim/logger'
import type { NextRequest } from 'next/server'
import { NextResponse } from 'next/server'
import { checkSessionOrInternalAuth } from '@/lib/auth/hybrid'
import { generatePptxFromCode } from '@/lib/execution/pptx-vm'
import { CopilotFiles, isUsingCloudStorage } from '@/lib/uploads'
import type { StorageContext } from '@/lib/uploads/config'
import { parseWorkspaceFileKey } from '@/lib/uploads/contexts/workspace/workspace-file-manager'
import { downloadFile } from '@/lib/uploads/core/storage-service'
import { inferContextFromKey } from '@/lib/uploads/utils/file-utils'
import { verifyFileAccess } from '@/app/api/files/authorization'
import {
  createErrorResponse,
  createFileResponse,
  FileNotFoundError,
  findLocalFile,
  getContentType,
} from '@/app/api/files/utils'

const logger = createLogger('FilesServeAPI')

const ZIP_MAGIC = Buffer.from([0x50, 0x4b, 0x03, 0x04])

const MAX_COMPILED_PPTX_CACHE = 10
const compiledPptxCache = new Map<string, Buffer>()

function compiledCacheSet(key: string, buffer: Buffer): void {
  if (compiledPptxCache.size >= MAX_COMPILED_PPTX_CACHE) {
    compiledPptxCache.delete(compiledPptxCache.keys().next().value as string)
  }
  compiledPptxCache.set(key, buffer)
}
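For reference, the eviction in compiledCacheSet relies on a JavaScript Map preserving insertion order: deleting the first key evicts the oldest-inserted entry. This makes it a FIFO cache rather than an LRU, since get() does not refresh an entry's position. A standalone sketch of the same idea (MAX and the value type are arbitrary here):

```typescript
// Bounded Map with insertion-order (FIFO) eviction, mirroring compiledCacheSet.
const cache = new Map<string, number>()
const MAX = 2

function put(key: string, value: number): void {
  if (cache.size >= MAX) {
    // Map.keys() yields keys in insertion order, so this evicts the oldest entry.
    cache.delete(cache.keys().next().value as string)
  }
  cache.set(key, value)
}

put('a', 1)
put('b', 2)
put('c', 3) // evicts 'a'
```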
async function compilePptxIfNeeded(
  buffer: Buffer,
  filename: string,
  workspaceId?: string,
  raw?: boolean
): Promise<{ buffer: Buffer; contentType: string }> {
  const isPptx = filename.toLowerCase().endsWith('.pptx')
  if (raw || !isPptx || buffer.subarray(0, 4).equals(ZIP_MAGIC)) {
    return { buffer, contentType: getContentType(filename) }
  }

  const code = buffer.toString('utf-8')
  const cacheKey = createHash('sha256')
    .update(code)
    .update(workspaceId ?? '')
    .digest('hex')
  const cached = compiledPptxCache.get(cacheKey)
  if (cached) {
    return {
      buffer: cached,
      contentType: 'application/vnd.openxmlformats-officedocument.presentationml.presentation',
    }
  }

  const compiled = await generatePptxFromCode(code, workspaceId || '')
  compiledCacheSet(cacheKey, compiled)
  return {
    buffer: compiled,
    contentType: 'application/vnd.openxmlformats-officedocument.presentationml.presentation',
  }
}

const STORAGE_KEY_PREFIX_RE = /^\d{13}-[a-z0-9]{7}-/

function stripStorageKeyPrefix(segment: string): string {
  return STORAGE_KEY_PREFIX_RE.test(segment) ? segment.replace(STORAGE_KEY_PREFIX_RE, '') : segment
}
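The STORAGE_KEY_PREFIX_RE pattern above strips upload-time prefixes shaped like a 13-digit millisecond timestamp, a 7-character lowercase alphanumeric id, and a dash before the original filename. A self-contained copy for illustration (the sample key in the test is made up):

```typescript
// Same shape as STORAGE_KEY_PREFIX_RE above: <13-digit ms timestamp>-<7-char id>-<name>.
const PREFIX_RE = /^\d{13}-[a-z0-9]{7}-/

function strip(segment: string): string {
  // Only strip when the whole prefix matches; otherwise return the name unchanged.
  return PREFIX_RE.test(segment) ? segment.replace(PREFIX_RE, '') : segment
}
```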
function getWorkspaceIdForCompile(key: string): string | undefined {
  return parseWorkspaceFileKey(key) ?? undefined
}

export async function GET(
  request: NextRequest,
  { params }: { params: Promise<{ path: string[] }> }
) {
  try {
    const { path } = await params

    if (!path || path.length === 0) {
      throw new FileNotFoundError('No file path provided')
    }

    logger.info('File serve request:', { path })

    const fullPath = path.join('/')
    const isS3Path = path[0] === 's3'
    const isBlobPath = path[0] === 'blob'
    const isCloudPath = isS3Path || isBlobPath
    const cloudKey = isCloudPath ? path.slice(1).join('/') : fullPath

    const isPublicByKeyPrefix =
      cloudKey.startsWith('profile-pictures/') || cloudKey.startsWith('og-images/')

    if (isPublicByKeyPrefix) {
      const context = inferContextFromKey(cloudKey)
      logger.info(`Serving public ${context}:`, { cloudKey })
      if (isUsingCloudStorage() || isCloudPath) {
        return await handleCloudProxyPublic(cloudKey, context)
      }
      return await handleLocalFilePublic(fullPath)
    }

    const raw = request.nextUrl.searchParams.get('raw') === '1'

    const authResult = await checkSessionOrInternalAuth(request, { requireWorkflowId: false })

    if (!authResult.success || !authResult.userId) {
      logger.warn('Unauthorized file access attempt', {
        path,
        error: authResult.error || 'Missing userId',
      })
      return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
    }

    const userId = authResult.userId

    if (isUsingCloudStorage()) {
      return await handleCloudProxy(cloudKey, userId, raw)
    }

    return await handleLocalFile(cloudKey, userId, raw)
  } catch (error) {
    logger.error('Error serving file:', error)

    if (error instanceof FileNotFoundError) {
      return createErrorResponse(error)
    }

    return createErrorResponse(error instanceof Error ? error : new Error('Failed to serve file'))
  }
}
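The public-access decision in GET above is the core of the auth-bypass fix from the commit message: only the server-derived key prefix matters, never a query parameter. Reduced to a pure function (the function name is illustrative):

```typescript
// Simplified model of the gate in GET above: public access is decided solely
// from the server-controlled key prefix; a user-supplied ?context= parameter
// has no influence.
function isPublicKey(cloudKey: string): boolean {
  return cloudKey.startsWith('profile-pictures/') || cloudKey.startsWith('og-images/')
}
```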
async function handleLocalFile(
  filename: string,
  userId: string,
  raw: boolean
): Promise<NextResponse> {
  try {
    const contextParam: StorageContext | undefined = inferContextFromKey(filename) as
      | StorageContext
      | undefined

    const hasAccess = await verifyFileAccess(
      filename,
      userId,
      undefined, // customConfig
      contextParam, // context
      true // isLocal
    )

    if (!hasAccess) {
      logger.warn('Unauthorized local file access attempt', { userId, filename })
      throw new FileNotFoundError(`File not found: ${filename}`)
    }

    const filePath = await findLocalFile(filename)

    if (!filePath) {
      throw new FileNotFoundError(`File not found: ${filename}`)
    }

    const rawBuffer = await readFile(filePath)
    const segment = filename.split('/').pop() || filename
    const displayName = stripStorageKeyPrefix(segment)
    const workspaceId = getWorkspaceIdForCompile(filename)
    const { buffer: fileBuffer, contentType } = await compilePptxIfNeeded(
      rawBuffer,
      displayName,
      workspaceId,
      raw
    )

    logger.info('Local file served', { userId, filename, size: fileBuffer.length })

    return createFileResponse({
      buffer: fileBuffer,
      contentType,
      filename: displayName,
      cacheControl: contextParam === 'workspace' ? 'private, no-cache, must-revalidate' : undefined,
    })
  } catch (error) {
    logger.error('Error reading local file:', error)
    throw error
  }
}
async function handleCloudProxy(
  cloudKey: string,
  userId: string,
  raw = false
): Promise<NextResponse> {
  try {
    const context = inferContextFromKey(cloudKey)
    logger.info(`Inferred context: ${context} from key pattern: ${cloudKey}`)

    const hasAccess = await verifyFileAccess(
      cloudKey,
      userId,
      undefined, // customConfig
      context, // context
      false // isLocal
    )

    if (!hasAccess) {
      logger.warn('Unauthorized cloud file access attempt', { userId, key: cloudKey, context })
      throw new FileNotFoundError(`File not found: ${cloudKey}`)
    }

    let rawBuffer: Buffer

    if (context === 'copilot') {
      rawBuffer = await CopilotFiles.downloadCopilotFile(cloudKey)
    } else {
      rawBuffer = await downloadFile({
        key: cloudKey,
        context,
      })
    }

    const segment = cloudKey.split('/').pop() || 'download'
    const displayName = stripStorageKeyPrefix(segment)
    const workspaceId = getWorkspaceIdForCompile(cloudKey)
    const { buffer: fileBuffer, contentType } = await compilePptxIfNeeded(
      rawBuffer,
      displayName,
      workspaceId,
      raw
    )

    logger.info('Cloud file served', {
      userId,
      key: cloudKey,
      size: fileBuffer.length,
      context,
    })

    return createFileResponse({
      buffer: fileBuffer,
      contentType,
      filename: displayName,
      cacheControl: context === 'workspace' ? 'private, no-cache, must-revalidate' : undefined,
    })
  } catch (error) {
    logger.error('Error downloading from cloud storage:', error)
    throw error
  }
}
async function handleCloudProxyPublic(
  cloudKey: string,
  context: StorageContext
): Promise<NextResponse> {
  try {
    let fileBuffer: Buffer

    if (context === 'copilot') {
      fileBuffer = await CopilotFiles.downloadCopilotFile(cloudKey)
    } else {
      fileBuffer = await downloadFile({
        key: cloudKey,
        context,
      })
    }

    const filename = cloudKey.split('/').pop() || 'download'
    const contentType = getContentType(filename)

    logger.info('Public cloud file served', {
      key: cloudKey,
      size: fileBuffer.length,
      context,
    })

    return createFileResponse({
      buffer: fileBuffer,
      contentType,
      filename,
    })
  } catch (error) {
    logger.error('Error serving public cloud file:', error)
    throw error
  }
}
async function handleLocalFilePublic(filename: string): Promise<NextResponse> {
  try {
    const filePath = await findLocalFile(filename)

    if (!filePath) {
      throw new FileNotFoundError(`File not found: ${filename}`)
    }

    const fileBuffer = await readFile(filePath)
    const contentType = getContentType(filename)

    logger.info('Public local file served', { filename, size: fileBuffer.length })

    return createFileResponse({
      buffer: fileBuffer,
      contentType,
      filename,
    })
  } catch (error) {
    logger.error('Error reading public local file:', error)
    throw error
  }
}