Mirror of https://github.com/simstudioai/sim.git
Synced 2026-04-06 03:00:16 -04:00
* Fix lint
* improvement(sidebar): loading
* fix(sidebar): use client-generated UUIDs for stable optimistic updates (#3439)
* fix(sidebar): use client-generated UUIDs for stable optimistic updates
* fix(folders): use zod schema validation for folder create API
  Replace inline UUID regex with zod schema validation for consistency with other API routes. Update test expectations accordingly.
* fix(sidebar): add client UUID to single workflow duplicate hook
  The useDuplicateWorkflow hook was missing newId: crypto.randomUUID(), causing the same temp-ID-swap issue for single workflow duplication from the context menu.
* fix(folders): avoid unnecessary Set re-creation in replaceOptimisticEntry
  Only create new expandedFolders/selectedFolders Sets when tempId differs from data.id. In the common happy path (client-generated UUIDs), this avoids unnecessary Zustand state reference changes and re-renders.
* Mothership block logs
* Fix mothership block logs
* improvement(knowledge): make connector-synced document chunks readonly (#3440)
* improvement(knowledge): make connector-synced document chunks readonly
* fix(knowledge): enforce connector chunk readonly on server side
* fix(knowledge): disable toggle and delete actions for connector-synced chunks
* Job execution logs
* Job logs
* fix(connectors): remove unverifiable requiredScopes for Linear connector
* fix(connectors): remove legacy requiredScopes from Jira and Confluence connectors
  Jira and Confluence OAuth tokens don't return legacy scope names like read:jira-work or read:confluence-content.all, causing the 'Update access' banner to always appear. Set requiredScopes to empty array like Linear.
* feat(tasks): add rename to task context menu (#3442)
* Revert "fix(connectors): remove legacy requiredScopes from Jira and Confluence connectors"
  This reverts commit a0be3ff414.
* fix(connectors): restore Linear connector requiredScopes
  Linear OAuth does return scopes in the token response. The previous fix of emptying requiredScopes was based on an incorrect assumption. Restoring requiredScopes: ['read'], which should work correctly.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(knowledge): pass workspaceId to useOAuthCredentials in connector card
  The ConnectorCard was calling useOAuthCredentials(providerId) without a workspaceId, causing the credentials API to return an empty array. This meant the credential lookup always failed, getMissingRequiredScopes received undefined, and the "Update access" banner always appeared.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Fix oauth link callback from mothership task
* feat(connectors): add Fireflies connector and API key auth support (#3448)
* feat(connectors): add Fireflies connector and API key auth support
  Extend the connector system to support both OAuth and API key authentication via a discriminated union (`ConnectorAuthConfig`). Add Fireflies as the first API key connector, syncing meeting transcripts via the Fireflies GraphQL API. Schema changes:
  - Make `credentialId` nullable (null for API key connectors)
  - Add `encryptedApiKey` column (AES-256-GCM encrypted, null for OAuth)
  This eliminates the `'_apikey_'` sentinel and inline `sourceConfig._encryptedApiKey` patterns, giving each auth mode its own clean column.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(fireflies): allow 0 for maxTranscripts (means unlimited)
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  ---------
  Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* Add context
* fix(fireflies): correct types from live API validation (#3450)
* fix(fireflies): correct types from live API validation
  - speakers.id is number, not string (API returns 0, 1, 2...)
  - summary.action_items is a single string, not string[]
  - Update formatTranscriptContent to handle action_items as string
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(fireflies): correct tool types from live API validation
  - FirefliesSpeaker.id: string -> number
  - FirefliesSentence.speaker_id: string -> number
  - FirefliesSpeakerAnalytics.speaker_id: string -> number
  - FirefliesSummary.action_items: string[] -> string
  - FirefliesSummary.outline: string[] -> string
  - FirefliesSummary.shorthand_bullet: string[] -> string
  - FirefliesSummary.bullet_gist: string[] -> string
  - FirefliesSummary.topics_discussed: string[] -> string
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  ---------
  Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(knowledge): add connector tools and expand document metadata (#3452)
* feat(knowledge): add connector tools and expand document metadata
* fix(knowledge): address PR review feedback on new tools
* fix(knowledge): remove unused params from get_document transform
* refactor, improvement
* fix: correct knowledge block canonical pair pattern and subblock migration
  - Rename manualDocumentId to documentId (advanced subblock ID should match canonicalParamId, consistent with airtable/gmail patterns)
  - Fix documentSelector.dependsOn to reference knowledgeBaseSelector (basic depends on basic, not advanced)
  - Remove unnecessary documentId migration (ID unchanged from main)
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* lint
* fix: resolve post-merge test and lint failures
  - airtable: sync tableSelector condition with tableId (add getSchema)
  - backfillCanonicalModes test: add documentId mode to prevent false backfill
  - schedule PUT test: use invalid action string now that disable is valid
  - schedule execute tests: add ne mock, sourceType field, use mockReturnValueOnce for two db.update calls
  - knowledge tools: fix biome formatting (single-line arrow functions)
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Fixes
* Fixes
* Clean vfs
* Fix
* Fix lint
* fix(connectors): add rate limiting, concurrency controls, and bug fixes (#3457)
* fix(connectors): add rate limiting, concurrency controls, and bug fixes across knowledge connectors
  - Add Retry-After header support to fetchWithRetry for all 18 connectors
  - Batch concurrent API calls (concurrency 5) in Dropbox, Google Docs, Google Drive, OneDrive, SharePoint
  - Batch concurrent API calls (concurrency 3) in Notion to match 3 req/s limit
  - Cache GitHub tree in syncContext to avoid re-fetching on every pagination page
  - Batch GitHub blob fetches with concurrency 5
  - Fix GitHub base64 decoding: atob() → Buffer.from() for UTF-8 safety
  - Fix HubSpot OAuth scope: 'tickets' → 'crm.objects.tickets.read' (v3 API)
  - Fix HubSpot syncContext key: totalFetched → totalDocsFetched for consistency
  - Add jitter to nextSyncAt (10% of interval, capped at 5min) to prevent thundering herd
  - Fix Date consistency in connector DELETE route
* fix(connectors): address PR review feedback on retry and SharePoint batching
  - Remove 120s cap on Retry-After — pass all values through to retry loop
  - Add maxDelayMs guard: if Retry-After exceeds maxDelayMs, throw immediately instead of hammering with shorter intervals (addresses validate timeout concern)
  - Add early exit in SharePoint batch loop when maxFiles limit is reached to avoid unnecessary API calls
* fix(connectors): cap Retry-After at maxDelayMs instead of aborting
  Match Google Cloud SDK behavior: when Retry-After exceeds maxDelayMs, cap the wait to maxDelayMs and log a warning, rather than throwing immediately. This ensures retries are bounded in duration while still respecting server guidance within the configured limit.
* fix(connectors): add early-exit guard to Dropbox, Google Docs, OneDrive batch loops
  Match the SharePoint fix — skip remaining batches once maxFiles limit is reached to avoid unnecessary API calls.
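The capped Retry-After behavior the retry commits above settled on can be sketched as follows. This is a minimal illustration, not the repo's actual fetchWithRetry; the function names and signatures are assumptions.

```typescript
// Hypothetical sketch of "cap Retry-After at maxDelayMs instead of aborting".
// Retry-After may be delta-seconds or an HTTP-date (RFC 9110).
function parseRetryAfterMs(header: string | null, now: Date = new Date()): number | null {
  if (!header) return null;
  const seconds = Number(header);
  if (Number.isFinite(seconds)) return Math.max(0, seconds * 1000);
  const date = Date.parse(header); // HTTP-date form
  return Number.isNaN(date) ? null : Math.max(0, date - now.getTime());
}

function nextRetryDelayMs(
  retryAfterHeader: string | null,
  backoffMs: number,
  maxDelayMs: number
): number {
  const serverHint = parseRetryAfterMs(retryAfterHeader);
  // Respect server guidance, but cap at maxDelayMs rather than throwing,
  // so total retry duration stays bounded (the Google Cloud SDK behavior
  // the commit above references).
  return Math.min(serverHint ?? backoffMs, maxDelayMs);
}
```

A header of `Retry-After: 600` with a 120s cap would thus wait 120s rather than 10 minutes or failing outright.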
* improvement(turbo): align turborepo config with best practices (#3458)
* improvement(turbo): align turborepo config with best practices
* fix(turbo): address PR review feedback
* fix(turbo): add lint:check task for read-only lint+format CI checks
  lint:check previously delegated to format:check which only checked formatting. Now it runs biome check (no --write) which enforces both lint rules and formatting without mutating files.
* upgrade turbo
* improvement(perf): apply react and js performance optimizations across codebase (#3459)
* improvement(perf): apply react and js performance optimizations across codebase
  - Parallelize independent DB queries with Promise.all in API routes
  - Defer PostHog and OneDollarStats via dynamic import() to reduce bundle size
  - Use functional setState in countdown timers to prevent stale closures
  - Replace O(n*m) .filter().find() with Set-based O(n) lookups in undo-redo
  - Use .toSorted() instead of .sort() for immutable state operations
  - Use lazy initializers for useState(new Set()) across 20 components
  - Remove useMemo wrapping trivially cheap expressions (typeof, ternary, template strings)
  - Add passive: true to scroll event listener
* fix(perf): address PR review feedback
  - Extract IIFE Set patterns to named consts for readability in use-undo-redo
  - Hoist Set construction above loops in BATCH_UPDATE_PARENT cases
  - Add .catch() error handler to PostHog dynamic import
  - Convert session-provider posthog import to dynamic import() to complete bundle split
* fix(analytics): add .catch() to onedollarstats dynamic import
* improvement(resource): tables, files
* improvement(resources): all outer page structure complete
* refactor(queries): comprehensive TanStack Query best practices audit (#3460)
* refactor: comprehensive TanStack Query best practices audit and migration
  - Add AbortSignal forwarding to all 41 queryFn implementations for proper request cancellation
  - Migrate manual fetch patterns to useMutation hooks (useResetPassword, useRedeemReferralCode, usePurchaseCredits, useImportWorkflow, useOpenBillingPortal, useAllowedMcpDomains)
  - Migrate standalone hooks to TanStack Query (use-next-available-slot, use-mcp-server-test, use-webhook-management, use-referral-attribution)
  - Fix query key factories: add missing `all` keys, replace inline keys with factory methods
  - Fix optimistic mutations: use onSettled instead of onSuccess for cache reconciliation
  - Replace overly broad cache invalidations with targeted key invalidation
  - Remove keepPreviousData from static-key queries where it provides no benefit
  - Add staleTime to queries missing explicit cache duration
  - Fix `any` type in UpdateSettingParams with proper GeneralSettings typing
  - Remove dead code: loadingWebhooks/checkedWebhooks from subblock store, unused helper functions
  - Update settings components (general, debug, referral-code, credit-balance, subscription, mcp) to use mutation state instead of manual useState for loading/error/success
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: remove unstable mutation object from useCallback deps
  openBillingPortal mutation object is not referentially stable, but .mutate() is stable in TanStack Query v5. Remove from deps to prevent unnecessary handleBadgeClick recreations.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: add missing byWorkflows invalidation to useUpdateTemplate
  The onSettled handler was missing the byWorkflows() invalidation that was dropped during the onSuccess→onSettled migration. Without this, the deploy modal (useTemplateByWorkflow) would show stale data after a template update.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* docs: add TanStack Query best practices to CLAUDE.md and cursor rules
  Add comprehensive React Query best practices covering:
  - Hierarchical query key factories with intermediate plural keys
  - AbortSignal forwarding in all queryFn implementations
  - Targeted cache invalidation over broad .all invalidation
  - onSettled for optimistic mutation cache reconciliation
  - keepPreviousData only on variable-key queries
  - No manual fetch in components rule
  - Stable mutation references in useCallback deps
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: address PR review feedback
  - Fix syncedRef regression in use-webhook-management: only set syncedRef.current=true when webhook is found, so re-sync works after webhook creation (e.g., post-deploy)
  - Remove redundant detail(id) invalidation from useUpdateTemplate onSettled since onSuccess already populates cache via setQueryData
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: address second round of PR review feedback
  - Reset syncedRef when blockId changes in use-webhook-management so component reuse with a different block syncs the new webhook
  - Add response.ok check in postAttribution so non-2xx responses throw and trigger TanStack Query retry logic
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: use lists() prefix invalidation in useCreateWorkspaceCredential
  Use workspaceCredentialKeys.lists() instead of .list(workspaceId) so filtered list queries are also invalidated on credential creation, matching the pattern used by update and delete mutations.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: address third round of PR review feedback
  - Add nullish coalescing fallback for bonusAmount in referral-code to prevent rendering "undefined" when server omits the field
  - Reset syncedRef when queryEnabled becomes false so webhook data re-syncs when the query is re-enabled without component remount
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: address fourth round of PR review feedback
  - Add AbortSignal to testMcpServerConnection for consistency
  - Wrap handleTestConnection in try/catch for mutateAsync error handling
  - Replace broad subscriptionKeys.all with targeted users()/usage() invalidation
  - Add intermediate users() key to subscription key factory for prefix matching
  - Add comment documenting syncedRef null-webhook behavior
  - Fix api-keys.ts silent error swallowing on non-ok responses
  - Move deployments.ts cache invalidation from onSuccess to onSettled
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: achieve full TanStack Query best practices compliance
  - Add intermediate plural keys to api-keys, deployments, and schedules key factories for prefix-based invalidation support
  - Change copilot-keys from refetchQueries to invalidateQueries
  - Add signal parameter to organization.ts fetch functions (better-auth client does not support AbortSignal, documented accordingly)
  - Move useCreateMcpServer invalidation from onSuccess to onSettled
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  ---------
  Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* ran lint
* Fix tables row count
* Update mothership to match copilot in logs
* improvement(resource): layout
* fix(knowledge): compute KB tokenCount from documents instead of stale column (#3463)
  The knowledge_base.token_count column was initialized to 0 and never updated. Replace with COALESCE(SUM(document.token_count), 0) in all read queries, which already JOIN on documents with GROUP BY.
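The hierarchical query key factory with intermediate plural keys, referenced throughout the TanStack Query audit above, might look like the following sketch. The factory name and leaf keys are illustrative assumptions, not the repo's actual code.

```typescript
// Hypothetical key factory: each level extends its parent, so invalidating
// a prefix (e.g. users()) also matches every more-specific key under it.
const subscriptionKeys = {
  all: ['subscription'] as const,
  users: () => [...subscriptionKeys.all, 'users'] as const,
  user: (userId: string) => [...subscriptionKeys.users(), userId] as const,
  usage: () => [...subscriptionKeys.all, 'usage'] as const,
};

// Targeted invalidation hits all per-user queries without touching 'usage':
// queryClient.invalidateQueries({ queryKey: subscriptionKeys.users() })
```

TanStack Query matches query keys by array prefix, which is why the intermediate `users()` key enables targeted invalidation that broad `.all` invalidation does not.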
* improvement(resources): layout and items
* feat(knowledge): add v1 knowledge base API, Obsidian/Evernote connectors, and docs (#3465)
* feat(knowledge): add v1 knowledge base API, Obsidian/Evernote connectors, and docs
  - Add v1 REST API for knowledge bases (CRUD, document management, vector search)
  - Add Obsidian and Evernote knowledge base connectors
  - Add file type validation to v1 file and document upload endpoints
  - Update OpenAPI spec with knowledge base endpoints and schemas
  - Add connectors documentation page
  - Apply query hook formatting improvements
* fix(knowledge): address PR review feedback
  - Remove validateFileType from v1/files route (general file upload, not document-only)
  - Reject tag filters when searching multiple KBs (tag defs are KB-specific)
  - Cache tag definitions to avoid duplicate getDocumentTagDefinitions call
  - Fix Obsidian connector silent empty results when syncContext is undefined
* improvement(connectors): add syncContext to getDocument, clean up caching
  - Update docs to say 20+ connectors
  - Add syncContext param to ConnectorConfig.getDocument interface
  - Use syncContext in Evernote getDocument to cache tag/notebook maps
  - Replace index-based cache check with Map keyed by KB ID in search route
* fix(knowledge): address second round of PR review feedback
  - Fix Zod .default('text') overriding tag definition's actual fieldType
  - Fix encodeURIComponent breaking multi-level folder paths in Obsidian
  - Use 413 instead of 400 for file-too-large in document upload
  - Add knowledge-bases to API reference docs navigation
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(knowledge): prevent cross-workspace KB access in search
  Filter accessible KBs by matching workspaceId from the request, preventing users from querying KBs in other workspaces they have access to but didn't specify.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(knowledge): audit resourceId, SSRF protection, recursion depth limit
  - Fix recordAudit using knowledgeBaseId instead of newDocument.id
  - Add SSRF validation to Obsidian connector (reject private/loopback URLs)
  - Add max recursion depth (20) to listVaultFiles
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(obsidian): remove SSRF check that blocks localhost usage
  The Obsidian connector is designed to connect to the Local REST API plugin running on localhost (127.0.0.1:27124). The SSRF check was incorrectly blocking this primary use case.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  ---------
  Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* improvement(resources): segmented API
* fix(execution): ensure background tasks await post-execution DB status updates (#3466)
  The fire-and-forget IIFE in execution-core.ts for post-execution logging could be abandoned when trigger.dev tasks exit, leaving executions permanently stuck in "running" status. Store the promise on LoggingSession so background tasks can optionally await it before returning.
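The stored-promise pattern in the execution fix above can be sketched like this. The class and method names mirror the commit's wording but the implementation details are assumptions, not the repo's actual LoggingSession.

```typescript
// Instead of a fire-and-forget IIFE, the post-execution work is kept as a
// promise that the host task can await before exiting, so the process never
// quits with the execution still marked "running".
class LoggingSession {
  private completionPromise: Promise<void> | null = null;

  complete(work: () => Promise<void>): void {
    // Kick off the status update without blocking the caller...
    this.completionPromise = work().catch((err) => {
      console.error('post-execution logging failed', err);
    });
  }

  // ...but expose it so background tasks can optionally await it.
  waitForCompletion(): Promise<void> {
    return this.completionPromise ?? Promise.resolve();
  }
}
```

A trigger.dev-style task would call `await session.waitForCompletion()` as its last step before returning.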
* improvement(resource): sorting and icons
* fix(resource): sorting
* improvement(settings): fix mcp modal, add option to edit JSON and add Sim as an MCP client (#3467)
* improvement(settings): fix mcp modal, add option to edit JSON and add Sim as an MCP client
* added docs link in sidebar
* ack comments
* ack comments
* fixed error msg
* feat(mothership): billing (#3464)
* Billing update
* more billing improvements
* credits UI
* credit purchase safety
* progress
* ui improvements
* fix cancel sub
* fix types
* fix daily refresh for teams
* make max features differentiated
* address bugbot comments
* address greptile comments
* revert isHosted
* address more comments
* fix org refresh bar
* fix ui rounding
* fix minor rounding
* fix upgrade issue for legacy plans
* fix formatPlanName
* fix email display names
* fix legacy team reference bugs
* referral bonus in credits
* fix org upgrade bug
* improve logs
* respect toggle for paid users
* fix landing page pro features and usage limit checks
* fixed query and usage
* add unit test
* address more comments
* enterprise guard
* fix limits bug
* pass period start/end for overage
* fix(sidebar): restore drag-and-drop for workflows and folders (#3470)
* fix(sidebar): restore drag-and-drop for workflows and folders
  Made-with: Cursor
* update docs, unrelated
* improvement(tables): consolidation
* feat(schedules): add schedule creator modal for standalone jobs
  Add modal to create standalone scheduled jobs from the Schedules page. Includes POST API endpoint, useCreateSchedule mutation hook, and full modal with schedule type selection, timezone, lifecycle, and live preview.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat(schedules): add edit support with context menu for standalone jobs
* style(schedules): apply linter formatting
* improvement: tables, favicon
* feat(files): inline file viewer with text editing (#3475)
* feat(files): add inline file viewer with text editing and create file modal
  Add file preview/edit functionality to the workspace files page. Text files (md, json, txt, yaml, etc.) open in an editable textarea with Cmd/Ctrl+S save. PDFs render in an iframe. New file button creates empty .md files via a modal. Uses ResourceHeader breadcrumbs and ResourceOptionsBar for save/download/delete.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* improvement(files): add UX polish, PR review fixes, and context menu
  - Add unsaved changes guard modal (matching credentials manager pattern)
  - Add delete confirmation modal for both viewer and context menu
  - Add save status feedback (Save → Saving... → Saved)
  - Add right-click context menu with Open, Download, Delete actions
  - Add 50MB file size limit on content update API
  - Add storage quota check before content updates
  - Add response.ok guard on download to prevent corrupt files
  - Add skeleton loading for pending file selection (prevents flicker)
  - Fix updateContent in handleSave dependency array
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(files): propagate save errors and remove redundant sizeDiff
  - Remove try/catch in TextEditor.handleSave so errors propagate to parent, which correctly shows save failure status
  - Remove redundant inner sizeDiff declaration that shadowed outer scope
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(files): remove unused textareaRef
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(files): move Cmd+S to parent, add save error feedback, hide save for non-text files
  - Move Cmd+S keyboard handler from TextEditor to Files so it goes through the parent handleSave with proper status management
  - Add 'error' save status with red "Save failed" label that auto-resets
  - Only show Save button for text-editable file types (md, txt, json, etc.)
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* improvement(files): add save tooltip, deduplicate text-editable extensions
  - Add Tooltip on Save button showing Cmd+S / Ctrl+S shortcut
  - Export TEXT_EDITABLE_EXTENSIONS from file-viewer and reuse in files.tsx instead of duplicating the list inline
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor: extract isMacPlatform to shared utility
  Move isMacPlatform() from global-commands-provider.tsx to lib/core/utils/platform.ts so it can be reused by files.tsx tooltip without duplication.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor(files): deduplicate delete modal, use shared formatFileSize
  - Extract DeleteConfirmModal component to eliminate duplicate modal markup between viewer and list modes
  - Replace local formatFileSize with shared utility from file-utils.ts
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(files): fix a11y label lint error and remove mutation object from useCallback deps
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(files): add isDirty guard on handleSave, return proper HTTP status codes
  Prevents "Saving → Saved" flash when pressing Cmd+S with no changes. Returns 404 for file-not-found and 402 for quota-exceeded instead of 500.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(files): reset isDirty/saveStatus on delete and discard, remove deprecated navigator.platform
  - Clear isDirty and saveStatus when deleting the currently-viewed file to prevent spurious beforeunload prompts
  - Reset saveStatus on discard to prevent stale "Save failed" when opening another file
  - Remove deprecated navigator.platform, userAgent fallback covers all cases
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(files): prevent concurrent saves on rapid Cmd+S, add YAML MIME types
  - Add saveStatus === 'saving' guard to handleSave to prevent duplicate concurrent PUT requests from rapid keyboard shortcuts
  - Add yaml/yml MIME type mappings to getMimeTypeFromExtension
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* refactor(files): reuse shared extension constants, parallelize cancelQueries
  - Replace hand-rolled SUPPORTED_EXTENSIONS with composition from existing SUPPORTED_DOCUMENT/AUDIO/VIDEO_EXTENSIONS in validation.ts
  - Parallelize sequential cancelQueries calls in delete mutation onMutate
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(files): guard handleCreate against duplicate calls while pending
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(files): show upload progress on the Upload button, not New file
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(files): use ref-based guard for create pending state to avoid stale closure
  The uploadFile.isPending check was stale because the mutation object is excluded from useCallback deps (per codebase convention). Using a ref ensures the guard works correctly across rapid Enter key presses.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* cleanup(files): use shared icon import, remove no-op props, wrap handler in useCallback
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  ---------
  Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* improvement: tables, dropdown
* improvement(docs): align sidebar method badges and polish API reference styling (#3484)
* improvement(docs): align sidebar method badges and polish API reference styling
* fix(docs): revert className prop on DocsPage for CI compatibility
* fix(docs): restore oneOf schema for delete rows and use rem units in CSS
* fix(docs): replace :has() selectors with direct className for reliable prod layout
  The API docs layout was intermittently narrow in production because CSS :has(.api-page-header) selectors are unreliable in Tailwind v4 production builds. Apply className="openapi-page" directly to DocsPage and replace all 64 :has() selectors with .openapi-page class targeting.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(docs): bypass TypeScript check for className prop on DocsPage
  Use spread with type assertion to pass className to DocsPage, working around a CI type resolution issue where the prop exists at runtime but is not recognized by TypeScript in the Vercel build environment.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(docs): use inline style tag for grid layout, revert CSS to :has() selectors
  The className prop on DocsPage doesn't exist in the fumadocs-ui version resolved on Vercel, so .openapi-page was never applied and all 64 CSS rules broke. Revert to :has(.api-page-header) selectors for styling and use an inline <style> tag for the critical grid-column layout override, which is SSR'd and doesn't depend on any CSS selector matching.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(docs): add pill styling to footer navigation method badges
  The footer nav badges (POST, GET, etc.) had color from data-method rules but lacked the structural pill styling (padding, border-radius, font-size).
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
  ---------
  Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(docs): use named grid lines instead of numeric column indices (#3487)
  Root cause: the fumadocs grid template has 3 columns in production but 5 columns in local dev. Our CSS used `grid-column: 3 / span 2` which targeted the wrong column in the 3-column grid, placing content in the near-zero-width TOC column instead of the main content column. Fix: use `grid-column: main-start / toc-end` which uses CSS named grid lines from grid-template-areas, working regardless of column count.
  Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* improvement(resource): layout
* improvement: icon, resource header options
* improvement: icons
* fix(files): icon
* feat(tables): column operations, row ordering, V1 API (#3488)
* feat(tables): add column operations, row ordering, V1 columns API, and OpenAPI spec
  Adds column rename/delete/type change/constraint updates to the tables module, row ordering via position column, UI metadata schema, V1 public API for column operations with rate limiting and audit logging, and OpenAPI documentation.
  Key changes:
  - Service-layer column operations with validation (name pattern, type compatibility, unique/required constraints)
  - Position column on user_table_rows with composite index for efficient ordering
  - V1 /api/v1/tables/{tableId}/columns endpoint (POST/PATCH/DELETE) with rate limiting and audit
  - Shared Zod schemas extracted to table/utils.ts using COLUMN_TYPES constant
  - Targeted React Query invalidation (row vs schema mutations) with consistent onSettled usage
  - OpenAPI 3.1.0 spec for columns endpoint with code samples
  - Position field added to all row response mappings for consistency
  - Sort fallback to position ordering when buildSortClause returns null
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): use specific error prefixes instead of broad "Cannot" match
  Prevents internal TypeErrors (e.g. "Cannot read properties of undefined") from leaking as 400 responses. Now matches only domain-specific errors: "Cannot delete the last column" and "Cannot set column".
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): reject Infinity and NaN in number type compatibility check
  Number.isFinite rejects Infinity, -Infinity, and NaN, preventing non-finite values from passing column type validation.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(tables): invalidate table list on row create/delete for stale rowCount
  Row create and delete mutations now invalidate the table list cache since it includes a computed rowCount. Row updates (which don't change count) continue to only invalidate row queries.
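The Number.isFinite-based type compatibility check described above can be sketched as follows; the function name and the null-handling are illustrative assumptions, not the repo's service code.

```typescript
// Hypothetical number-column compatibility check. Number.isFinite returns
// false for Infinity, -Infinity, and NaN, so non-finite values cannot pass
// column type validation. (Allowing null here is an assumption for the sketch.)
function isCompatibleNumberValue(value: unknown): boolean {
  if (value === null) return true; // nullable cells remain allowed
  return typeof value === 'number' && Number.isFinite(value);
}
```

Note that the global `isFinite('5')` would coerce and return true, which is exactly the leak `Number.isFinite` avoids.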
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix(tables): add column name length check, deduplicate name gen, reset pagination on clear - Add MAX_COLUMN_NAME_LENGTH validation to addTableColumn (was missing, renameColumn already had it) - Extract generateColumnName helper to eliminate triplicated logic across handleAddColumn, handleInsertColumnLeft, handleInsertColumnRight - Reset pagination to page 0 when clearing sort/filter to prevent showing empty pages after narrowing filters are removed Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: hoist tableId above try block in V1 columns route, add detail invalidation to invalidateRowCount - V1 columns route: `tableId` was declared inside `try` but referenced in `catch` logger.error, causing undefined in error logs. Hoisted `await params` above try in all three handlers (POST, PATCH, DELETE). - invalidateRowCount: added `tableKeys.detail(tableId)` invalidation since the single-table GET response includes `rowCount`, which becomes stale after row create/delete without this. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: add position to all row mutation responses, remove dead filter code - Add `position` field to POST (single + batch) and PATCH row responses across both internal and V1 routes, matching GET responses and OpenAPI spec. - Remove unused `filterConfig`, `handleFilterToggle`, `handleFilterClear`, and `activeFilters` — dead code left over from merge conflict resolution. `handleFilterApply` (the one actually wired to JSX) is preserved. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: invalidateTableSchema now also invalidates table list cache Column add/rename/delete/update mutations now invalidate tableKeys.list() since the list endpoint returns schema.columns for each table. Without this, the sidebar table list would show stale column schemas until staleTime expires. 
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: replace window.prompt/confirm with emcn Modal dialogs Replace non-standard browser dialogs with proper emcn Modal components to match the existing codebase pattern (e.g. delete table confirmation). - Column rename: Modal with Input field + Enter key support - Column delete: Modal with destructive confirmation Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> --------- Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> * update schedule creation ui and run lint * improvement: logs * improvement(tables): multi-select and efficiencies * Table tools * improvement(folder-selection): folder deselection + selection order should match visual * fix(selections): more nested folder inaccuracies * Tool updates * Store tool call results * fix(landing): wire agent input to mothership * feat(mothership): resource viewer * fix tests * fix(streaming): smoother streaming with throttled rendering, ResizeObserver scroll, and batched updates (#3471) * fix(streaming): smoother streaming with throttled rendering, ResizeObserver scroll, and batched updates - Add useThrottledValue hook (100ms trailing-edge throttle) to gate DOM re-renders during streaming across all chat surfaces - Replace 100ms setInterval scroll polling with ResizeObserver-based auto-scroll, programmatic scroll timestamp tracking, and nested [data-scrollable] region handling - Extract processContentBuffer from inline content handler for cleaner code organization in copilot SSE handlers - Add RAF-based update batching (50ms max interval) to floating chat and home chat streaming paths - Add useProgressiveList hook for progressive rendering of long conversation histories via requestAnimationFrame Made-with: Cursor * ack PR comments * fix search modal * more comments * ack comments * count * ack comments * ack comment * improvement(mothership): worklfow resource * Fix tool call persistence in chat * Tool results * Fix error status * File uploads to 
mothership * feat(templates): landing page templates workflow states * improvement(mothership): chat stability * improvement(mothership): chat history and stability * improvement(tables): click-to-select navigation, inline rename, column resize (#3496) * improvement(tables): click-to-select navigation, inline rename, column resize * fix(tables): address PR review comments - Add doneRef guard to useInlineRename preventing Enter+blur double-fire - Fix PATCH error handler: return 500 for non-validation errors, fix unreachable logger.error - Stop click propagation on breadcrumb rename input * fix(tables): add rows-affected check in renameTable service Prevents silent no-op when tableId doesn't match any record. * fix(tables): useMemo deps + placeholder memo initialCharacter check - Use primitive editingId/editValue in useMemo deps instead of whole useInlineRename object (which creates a new ref every render) - Add initialCharacter comparison to placeholderPropsAreEqual, matching the existing pattern in dataRowPropsAreEqual * fix(tables): address round 2 review comments - Mirror name validation (regex + max length) in PatchTableSchema so validateTableName failures return 400 instead of 500 - Add .returning() + rows-affected check to renameWorkspaceFile, matching the renameTable pattern - Check response.ok before parsing JSON in useRenameWorkspaceFile, matching the useRenameTable pattern * refactor(tables): reuse InlineRenameInput in BreadcrumbSegment Replace duplicated inline input markup with the shared component. Eliminates redundant useRef, useEffect, and input boilerplate. * fix(tables): set doneRef in cancelRename to prevent blur-triggered save Escape → cancelRename → input unmounts → blur → submitRename would save instead of canceling. Now cancelRename sets doneRef like submitRename does, blocking the subsequent blur handler. 
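The doneRef guard described above (Escape → cancelRename → input unmounts → blur → submitRename saving anyway) can be illustrated framework-free. A sketch — the names mirror the commit messages, but the real hook keeps this state in React refs:

```typescript
// Framework-free sketch of the doneRef guard: once submit or cancel has
// run, the blur handler that fires as the input unmounts becomes a
// no-op, preventing both Enter+blur double-saves and Escape+blur saves.
function createRenameController(onSave: (value: string) => void) {
  let done = false; // stands in for doneRef.current
  return {
    submit(value: string) {
      if (done) return; // blur after Enter: already handled
      done = true;
      onSave(value);
    },
    cancel() {
      done = true; // block the blur-triggered submit that follows Escape
    },
  };
}
```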
* fix(tables): pointercancel cleanup + typed FileConflictError - Add pointercancel handler to column resize to prevent listener leaks when system interrupts the pointer (touch-action override, etc.) - Replace stringly-typed error.message.includes('already exists') with FileConflictError class for refactor-safe 409 status detection * fix(tables): stable useCallback dep + rename shadowed variable - Use listRename.startRename (stable ref) instead of whole listRename object in handleContextMenuRename deps - Rename inner 'target' to 'origin' in arrow-key handler to avoid shadowing the outer HTMLElement 'target' * fix(tables): move class below imports, stable submitRename, clear editingCell - Move FileConflictError below import statements (import-first convention) - Make submitRename a stable useCallback([]) by reading editingId and editValue through refs (matches existing onSaveRef pattern) - Add setEditingCell(null) to handleEmptyRowClick for symmetry with handleCellClick * feat(tables): persist column widths in table metadata Column widths now survive navigation and page reloads. On resize-end, widths are debounced (500ms) and saved to the table's metadata field via a new PUT /api/table/[tableId]/metadata endpoint. On load, widths are seeded from the server once via React Query. * fix type checking for file viewer * fix(tables): address review feedback — 4 fixes 1. headerRename.onSave now uses the fileId parameter directly instead of the selectedFile closure, preventing rename-wrong-file race 2. updateMetadataMutation uses ref pattern matching mutateRef/createRef 3. Type-to-enter filters non-numeric chars for number columns, non-date chars for date columns 4. 
renameValue only passed to actively-renaming ColumnHeaderMenu, preserving React.memo for other columns * fix(tables): position-based gap rows, insert above/below, consistency fixes - Fix gap row insert shifting: only shift rows when target position is occupied, preventing unnecessary displacement of rows below - Switch to position-based indexing throughout (positionMap, maxPosition) instead of array-index for correct sparse position handling - Add insert row above/below to context menu - Use CellContent for pending values in PositionGapRows (matching PlaceholderRows) - Add belowHeader selection overlay logic to PositionGapRows - Remove unnecessary 500ms debounce on column width persistence Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix cells nav w keyboard * added preview panel for html, markdown rendering, completed table --------- Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> * fix(tables): one small tables ting (#3497) * feat(exa-hosted-key): Restore exa hosted key (#3499) Co-authored-by: Theodore Li <theo@sim.ai> * improvement(ui): consistent styling * styling alignment * improvements(tables): styling improvements * improve resizer for file preview for html files * updated document icon * fix(credentials): exclude regular login methods from credential sync * update docs * upgrade turbo * improvement: tables, chat * Fix table column delete * small table rename bug, files updates not persisting * Table batch ops * fix(credentials): block usage at execution layer without perms + fix invites * feat(hosted-key-services) Add hosted key for multiple services (#3461) * feat(hosted keys): Implement serper hosted key * Handle required fields correctly for hosted keys * Add rate limiting (3 tries, exponential backoff) * Add custom pricing, switch to exa as first hosted key * Add telemetry * Consolidate byok type definitions * Add warning comment if default calculation is used * Record usage to user stats table * Fix unit tests, use cost property * 
Include more metadata in cost output * Fix disabled tests * Fix spacing * Fix lint * Move knowledge cost restructuring away from generic block handler * Migrate knowledge unit tests * Lint * Fix broken tests * Add user based hosted key throttling * Refactor hosted key handling. Add optimistic handling of throttling for custom throttle rules. * Remove research as hosted key. Recommend BYOK if throttling occurs * Make adding api keys adjustable via env vars * Remove vestigial fields from research * Make billing actor id required for throttling * Switch to round robin for api key distribution * Add helper method for adding hosted key cost * Strip leading double underscores to avoid breaking change * Lint fix * Remove falsy check in favor of explicit null check * Add more detailed metrics for different throttling types * Fix _costDollars field * Handle hosted agent tool calls * Fail loudly if cost field isn't found * Remove any type * Fix type error * Fix lint * Fix usage log double logging data * Fix test * Add browseruse hosted key * Add firecrawl and serper hosted keys * feat(hosted key): Add exa hosted key (#3221) * feat(hosted keys): Implement serper hosted key * Handle required fields correctly for hosted keys * Add rate limiting (3 tries, exponential backoff) * Add custom pricing, switch to exa as first hosted key * Add telemetry * Consolidate byok type definitions * Add warning comment if default calculation is used * Record usage to user stats table * Fix unit tests, use cost property * Include more metadata in cost output * Fix disabled tests * Fix spacing * Fix lint * Move knowledge cost restructuring away from generic block handler * Migrate knowledge unit tests * Lint * Fix broken tests * Add user based hosted key throttling * Refactor hosted key handling. Add optimistic handling of throttling for custom throttle rules. * Remove research as hosted key. 
Recommend BYOK if throttling occurs * Make adding api keys adjustable via env vars * Remove vestigial fields from research * Make billing actor id required for throttling * Switch to round robin for api key distribution * Add helper method for adding hosted key cost * Strip leading double underscores to avoid breaking change * Lint fix * Remove falsy check in favor of explicit null check * Add more detailed metrics for different throttling types * Fix _costDollars field * Handle hosted agent tool calls * Fail loudly if cost field isn't found * Remove any type * Fix type error * Fix lint * Fix usage log double logging data * Fix test --------- Co-authored-by: Theodore Li <teddy@zenobiapay.com> * Fail fast on cost data not being found * Add hosted key for google services * Add hosting configuration and pricing logic for ElevenLabs TTS tools * Add linkup hosted key * Add jina hosted key * Add hugging face hosted key * Add perplexity hosting * Add broader metrics for throttling * Add skill for adding hosted key * Lint, remove vestigial hosted keys not implemented * Revert agent changes * fail fast * Fix build issue * Fix build issues * Fix type error * Remove byok types that aren't implemented * Address feedback * Use default model when model id isn't provided * Fix cost default issues * Remove firecrawl error suppression * Restore original behavior for hugging face * Add mistral hosted key * Remove hugging face hosted key * Fix pricing mismatch in mistral and perplexity * Add hosted keys for parallel and brand fetch * Add brandfetch hosted key * Update types * Change byok name to parallel_ai * Add telemetry on unknown models --------- Co-authored-by: Theodore Li <theo@sim.ai> * improvement(settings): SSR prefetch, code splitting, dedicated skeletons * fix: bust browser cache for workspace file downloads The downloadFile function was using a plain fetch() that honored the aggressive cache headers, causing newly created files to download empty. 
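The "switch to round robin for api key distribution" in the hosted-key commits above reduces to a wrapping cursor over the key pool. An illustrative sketch, with all names assumed:

```typescript
// Hypothetical sketch of round-robin hosted-key distribution: each call
// hands out the next key in the pool, wrapping around, so request load
// spreads evenly across the provider API keys.
function createKeyPool(keys: string[]) {
  let cursor = 0;
  return {
    next(): string {
      if (keys.length === 0) throw new Error("no hosted keys configured");
      const key = keys[cursor];
      cursor = (cursor + 1) % keys.length; // wrap back to the first key
      return key;
    },
  };
}
```

Compared with always using the first key, round-robin keeps any single key from hitting provider-side rate limits first.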
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix(settings): use emcn Skeleton in extracted skeleton files * fix(settings): extract shared response mappers to prevent server/client shape drift Addresses PR review feedback — prefetch.ts duplicated response mapping logic from client hooks. Extracted mapGeneralSettingsResponse and mapUserProfileResponse as shared functions used by both client fetch and server prefetch. * update byok page * fix(settings): include theme sync in client-side prefetch queryFn Hover-based prefetchGeneralSettings now calls syncThemeToNextThemes, matching the useGeneralSettings hook behavior so theme updates aren't missed when prefetch refreshes stale cache. * fix(byok): use EMCN Input for search field instead of ui Input Replace @/components/ui Input with the already-imported EmcnInput for design-system consistency. * fix(byok): use ui Input for search bar to match other settings pages * fix(settings): use emcn Input for file input in general settings * improvement(settings): add search bar to skeleton loading states Skeletons now include the search bar (and action button where applicable) so the layout matches the final component 1:1. Eliminates layout shift when the dynamic chunk loads — search bar area is already reserved by the skeleton. 
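The shared-mapper extraction noted above (`mapGeneralSettingsResponse` used by both the client hook and the server prefetch) can be as simple as one function producing the cache entry for both callers, so the shapes cannot drift. A hypothetical sketch — the field names and defaults are invented for illustration:

```typescript
// Hypothetical response shape as returned by the settings API.
interface GeneralSettingsRaw {
  theme?: string;
  telemetry_enabled?: boolean; // snake_case wire format (assumed)
}

// Hypothetical client-side shape cached by React Query.
interface GeneralSettings {
  theme: string;
  telemetryEnabled: boolean;
}

// One mapper imported by both the client fetch and the SSR prefetch,
// so server- and client-populated cache entries are identical.
function mapGeneralSettingsResponse(raw: GeneralSettingsRaw): GeneralSettings {
  return {
    theme: raw.theme ?? "system",
    telemetryEnabled: raw.telemetry_enabled ?? true,
  };
}
```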
* fix(settings): align skeleton layouts with actual component structures - Fix list item gap from 12px to 8px across all skeletons (API keys, custom tools, credentials, MCP) - Add OAuth icon placeholder to credential skeleton - Fix credential button group gap from 8px to 4px - Remove incorrect gap-[4px] from credential-sets text column - Rebuild debug skeleton to match real layout (description + input/button row) - Add scrollable wrapper to BYOK skeleton with more representative item count * chore: lint fixes * improvement(sidebar): match workspace switcher popover width to sidebar Use Radix UI's built-in --radix-popover-trigger-width CSS variable instead of hardcoded 160px so the popover matches the trigger width and responds to sidebar resizing. * revert hardcoded ff * fix: copilot, improvement: tables, mothership * feat: inline chunk editor and table batch ops with undo/redo (#3504) * feat: inline chunk editor and table batch operations with undo/redo Replace modal-based chunk editing/creation with inline editor following the files tab pattern (state-based view toggle with ResourceHeader). Add batch update API endpoint, undo/redo support, and Popover-based context menus for tables. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: remove icons from table context menu PopoverItems Icons were incorrectly carried over from the DropdownMenu migration. PopoverItems in this codebase use text-only labels. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: restore DropdownMenu for table context menu The table-level context menu was incorrectly migrated to Popover during conflict resolution. Only the row-level context menu uses Popover; the table context menu should remain DropdownMenu with icons, matching the base branch. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: bound cross-page chunk navigation polling to max 50 retries Prevent indefinite polling if page data never loads during chunk navigation across page boundaries. 
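The duplicate-rowId validation added above was implemented as a zod `.refine()` on the updates array; the rule it enforces is just this predicate, shown dependency-free with an assumed update shape:

```typescript
// Assumed shape of one entry in a batch row update request.
interface RowUpdate {
  rowId: string;
  data: Record<string, unknown>;
}

// Duplicate rowIds would let one batch entry silently clobber another,
// so the API rejects such requests up front (mirroring the positions
// uniqueness check on batch insert).
function hasUniqueRowIds(updates: RowUpdate[]): boolean {
  return new Set(updates.map((u) => u.rowId)).size === updates.length;
}
```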
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: navigate to last page after chunk creation for multi-page documents After creating a chunk, navigate to the last page (where new chunks append) before selecting it. This prevents the editor from showing "Loading chunk..." when the new chunk is not on the current page. The loading state breadcrumb remains as an escape hatch for edge cases. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: add duplicate rowId validation to BatchUpdateByIdsSchema Adds a .refine() check to reject duplicate rowIds in batch update requests, consistent with the positions uniqueness check on batch insert. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: address PR review comments - Fix disableEdit logic: use || instead of && so connector doc chunks cannot be edited from context menu (row click still opens viewer) - Add uniqueness validation for rowIds in BatchUpdateByIdsSchema - Fix inconsistent bg token: bg-background → bg-[var(--bg)] in Pagination Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: remove duplicate rowId uniqueness refine on BatchUpdateByIdsSchema The refine was applied both on the inner updates array and the outer object. Keep only the inner array refine which is cleaner. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: address additional PR review comments - Fix stale rowId after create-row redo: patch undo stack with new row ID using patchUndoRowId so subsequent undo targets the correct row - Fix text color tokens in Pagination: use CSS variable references (text-[var(--text-body)], text-[var(--text-secondary)]) instead of Tailwind semantic tokens for consistency with the rest of the file Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: remove dead code and fix type errors in table context menu Remove unused `onAddData` prop and `isEmptyCell` variable from row context menu (introduced in PR but never wired to JSX). 
Fix type errors in optimistic update spreads by removing unnecessary `as Record<string, unknown>` casts that lost the RowData type. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: prevent false "Saved" status on invalid content and mark fire-and-forget goToPage calls ChunkEditor.handleSave now throws on empty/oversized content instead of silently returning, so the parent's catch block correctly sets saveStatus to 'error'. Also added explicit `void` to unawaited goToPage(1) calls in filter handlers to signal intentional fire-and-forget. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: handle stale totalPages in handleChunkCreated for new-page edge case When creating a chunk that spills onto a new page, totalPages in the closure is stale. Now polls displayChunksRef for the new chunk, and if not found, checks totalPagesRef for an updated page count and navigates to the new last page before continuing to poll. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> --------- Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> * Streaming fix -- need to test more * Make mothership block use long input instead of prompt input * improvement(billing): isAnnual metadata + docs updates (#3506) * improvement(billing): on demand toggling and infinite limits * store stripe metadata to distinguish annual vs monthly * update docs * address bugbot * Add piping * feat(clean-hosted-keys) Remove eleven labs, browseruse. 
Tweak firecrawl and mistral key impl (#3503) * Remove eleven labs, browseruse, and firecrawl * Remove creditsUsed output * Add back mistral hosting for mistral blocks * Add back firecrawl since they queue up concurrent requests * Fix price calculation, remove agent since it's super long running and will clog up queue * Define hosting per tool * Remove redundant token finding --------- Co-authored-by: Theodore Li <theo@sim.ai> * Update vfs to handle hosted keys * improvement(tables): fix cell editing flash, batch API docs, and UI polish (#3507) * fix: show text cursor in chunk editor and ensure textarea fills container Add cursor-text to the editor wrapper so the whole area shows a text cursor. Click on empty space focuses the textarea. Changed textarea from h-full/w-full to flex-1/min-h-0 so it properly fills the flex container. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * improvement(tables): fix cell editing flash, add batch API docs, and UI polish Fix stale-data flash when saving inline cell edits by using TanStack Query's isPending+variables pattern instead of manual cache writes. Also adds OpenAPI docs for batch table endpoints, DatePicker support in row modal, duplicate row in context menu, and styling improvements. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: remove dead resolveColumnFromEvent callback Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: unify paste undo into single create-rows action Batch-created rows from paste now push one `create-rows` undo entry instead of N individual `create-row` entries, so a single Ctrl+Z reverses the entire paste operation. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: validate dates in inline editor and displayToStorage InlineDateEditor now validates computed values via Date.parse before saving, preventing invalid strings like "hello" from being sent to the server. displayToStorage now rejects out-of-range month/day values (e.g. 
13/32) instead of producing invalid YYYY-MM-DD strings. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: accept ISO date format in inline date editor Fall back to raw draft input when displayToStorage returns null, so valid ISO dates like "2024-03-15" pasted or typed directly are accepted instead of silently discarded. Date.parse still validates the final value. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: add ISO date support to displayToStorage and fix picker Escape displayToStorage now recognizes YYYY-MM-DD input directly, so ISO dates typed or pasted work correctly for both saving and picker sync. DatePicker Escape now refocuses the input instead of saving, so the user can press Escape again to cancel or Enter to confirm — matching the expected cancel behavior. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: remove dead paste boundary check The totalR guard in handlePaste could never trigger since totalR included pasteRows.length, making targetRow always < totalR. Remove the unused variable and simplify the selection focus calc. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * update openapi --------- Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> * fix dysfunctional unique operation in tables * feat(autosave): files and chunk editor autosave with debounce + refetch (#3508) * feat(files): debounced autosave while editing * address review comments * more comments * fix: unique constraint check crash and copilot table initial rows - Fix TypeError in updateColumnConstraints: db.execute() returns a plain array with postgres-js, not { rows: [...] }. The .rows.length access always crashed, making "Set unique" completely broken. - Add initialRowCount: 20 to copilot table creation so tables created via chat have the same empty rows as tables created from the UI. 
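The `displayToStorage` hardening described above — rejecting out-of-range values like 13/32 while accepting raw ISO input — could look like the following. The function name comes from the commits; the implementation is a sketch:

```typescript
// Sketch: accept MM/DD/YYYY display input or raw YYYY-MM-DD ISO input,
// reject out-of-range month/day values, and return a YYYY-MM-DD storage
// string or null for anything invalid.
function displayToStorage(input: string): string | null {
  const iso = /^(\d{4})-(\d{2})-(\d{2})$/.exec(input);
  const us = /^(\d{1,2})\/(\d{1,2})\/(\d{4})$/.exec(input);
  const parts = iso
    ? { year: +iso[1], month: +iso[2], day: +iso[3] }
    : us
      ? { year: +us[3], month: +us[1], day: +us[2] }
      : null;
  if (!parts) return null;
  const { year, month, day } = parts;
  // Round-trip through Date.UTC to reject out-of-range values (e.g.
  // 13/32), which a bare Date constructor would silently roll over.
  const d = new Date(Date.UTC(year, month - 1, day));
  if (
    d.getUTCFullYear() !== year ||
    d.getUTCMonth() !== month - 1 ||
    d.getUTCDate() !== day
  ) {
    return null;
  }
  const pad = (n: number) => String(n).padStart(2, "0");
  return `${year}-${pad(month)}-${pad(day)}`;
}
```

The round-trip check is the key move: JS dates accept month 13 or day 32 and roll them forward, so comparing the parsed fields against the constructed date catches exactly the invalid inputs.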
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * Fix signaling * revert: remove initialRowCount from copilot table creation Copilot populates its own data after creating a table, so pre-creating 20 empty rows causes data to start at position 21 with empty rows above. initialRowCount only makes sense for the manual UI creation flow. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * improvement: chat, workspace header * chat metadata * Fix schema mismatch (#3510) Co-authored-by: Theodore Li <theo@sim.ai> * Fixes * fix: manual table creation starts with 1 row, 1 column Manual tables now create with a single 'name' column and 1 row instead of 2 columns and 20 rows. Copilot tables remain at 0 rows, 0 columns. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: horizontal scroll in embedded table by replacing overflow-hidden with overflow-clip Cell content spans used Tailwind's `truncate` (overflow: hidden), creating scroll containers that consumed trackpad wheel events on macOS without propagating to the actual scroll ancestor. Replaced with overflow-clip which clips identically but doesn't create a scroll container. Also moved focus target from outer container to the scroll div for correctness. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * Fix tool call ordering * Fix tests * feat: add task multi-select, context menu, and subscription UI updates Add shift-click range selection, cmd/ctrl-click toggle, and right-click context menu for tasks in sidebar matching workflow/folder patterns. Update subscription settings tab UI. 
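The task multi-select semantics above (shift-click range from anchor, cmd/ctrl-click toggle, plain click replace) can be sketched as a pure function over the visible order. Names and shapes are assumptions:

```typescript
// Sketch of multi-select click handling matching the described
// workflow/folder pattern: shift selects the visual range from the
// anchor, meta toggles one item, a plain click replaces the selection.
function nextSelection(
  visibleIds: string[],
  selected: Set<string>,
  anchor: string | null,
  clicked: string,
  mods: { shift: boolean; meta: boolean }
): Set<string> {
  if (mods.shift && anchor !== null) {
    const a = visibleIds.indexOf(anchor);
    const b = visibleIds.indexOf(clicked);
    const [lo, hi] = a < b ? [a, b] : [b, a];
    return new Set(visibleIds.slice(lo, hi + 1)); // range in visual order
  }
  if (mods.meta) {
    const next = new Set(selected);
    if (next.has(clicked)) next.delete(clicked);
    else next.add(clicked);
    return next;
  }
  return new Set([clicked]); // plain click: single selection
}
```

Computing the range over `visibleIds` (not insertion order) is what makes "selection order should match visual" hold for nested folders too.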
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix(credentials): autosync behaviour cross workspace (#3511) * fix(credentials): autosync behaviour cross workspace * address comments * fix(api-key-reminder) Add reminder on hosted keys that api key isn't needed (#3512) * Add reminder on hosted keys that api key isn't needed * Fix test case --------- Co-authored-by: Theodore Li <theo@sim.ai> * improvement: sidebar, chat * Usage limit * Plan prompt * fix(sidebar): workspace header collapse * fix(sidebar): task navigation * Subagent tool call persistence * Don't drop subagent text * improvement(ux): streaming * improvement: thinking * fix(random): optimized kb connector sync engine, rerenders in tables, files, editors, chat (#3513) * optimized kb connector sync engine, rerenders in tables, files, editors, chat * refactor(sidebar): rename onTaskClick to onMultiSelectClick for clarity Made-with: Cursor * ack comments, add docsFailed * feat(email-footer) Add "sent with sim ai" for free users (#3515) * Add "sent with sim ai" for free users * Only add prompt injection on free tier * Add try catch around billing info fetch --------- Co-authored-by: Theodore Li <theo@sim.ai> * improvement: modals * ran migrations * fix(mothership): fix hardcoded workflow color, tables drag line overflowing * feat(mothership): file attachment indicators, persistence, and chat input improvements - Show image thumbnails and file-icon cards above user messages in mothership chat - Persist file attachment metadata (key, filename, media_type, size) in DB with user messages - Restore attachments from history via /api/files/serve/ URLs so they survive refresh/navigation - Unify all chat file inputs to use shared CHAT_ACCEPT_ATTRIBUTE constant - Fix file thumbnail overflow: use flex-wrap instead of hidden horizontal scroll - Compact attachment cards in floating workflow chat messages Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * improvement: search modal * improvement(usage): free 
plan to 1000 credits (#3516) * improvement(billing): free plan to five dollars * fix comment * remove per month terminology from marketing * generate migration * remove migration * add migration back * feat(workspace): add workspace color changing, consolidate update hooks, fix popover dismiss - Add workspace color change via context menu, reusing workflow ColorGrid UI - Consolidate useUpdateWorkspaceName + useUpdateWorkspaceColor into useUpdateWorkspace - Fix popover hover submenu dismiss by using DismissableLayerBranch with pointerEvents - Remove passthrough wrapper for export, reuse Workspace type for capturedWorkspaceRef - Reorder log columns: workflow first, merge date+time into single column Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * Update oauth cred tool * fix(diff-controls): fixed positioning for copilot diff controls * fix(font): added back old font for emcn code editor * improvement: panel, special tags * improvement: chat * improvement: loading and file dropping * feat(templates): create home templates * fix(uploads): resolve .md file upload rejection and deduplicate file type utilities Browsers report empty or application/octet-stream MIME types for .md files, causing copilot uploads to be rejected. Added resolveFileType() utility that falls back to extension-based MIME resolution at both client and server boundaries. Consolidated duplicate MIME mappings into module-level constants, removed duplicate isImageFileType from copilot module, and replaced hardcoded ALLOWED_EXTENSIONS with composition from shared validation constants. Also switched file attachment previews to use shared getDocumentIcon utility. 
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix(home): prevent initial view from being scrollable Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * autofill fixes * added back integrations page, reverted secrets page back to old UI * Fix workspace dropdown getting cut off when sidebar is collapsed * fix(mothership): lint (#3517) * fix(mothership): lint * fix typing * fix tests * fix stale query * fix plan display name * Feat/add mothership manual workflow runs (#3520) * Add run and open workflow buttons in workflow preview * Send log request message after manual workflow run * Make edges in embedded workflow non-editable * Change chat to pass in log as additional context * Revert "Change chat to pass in log as additional context" This reverts commit e957dffb2f. * Revert "Send log request message after manual workflow run" This reverts commit 0fb92751f0. * Move run and workflow icons to tab bar * Simplify boolean condition --------- Co-authored-by: Theodore Li <theo@sim.ai> * feat(resource-tab-scroll): Allow vertical scrolling to scroll resource tab * fix(remove-speed-hosted-key) Remove maps speed limit hosted key, it's deprecated (#3521) Co-authored-by: Theodore Li <theo@sim.ai> * improvement: home, sidebar * fix(download-file): render correct file download link for mothership (#3522) * fix(download-file): render correct file download link for mothership * Fix unnecessary call * Use simple strip instead of db lookup and moving behavior * Make regex strip more strict --------- Co-authored-by: Theodore Li <theo@sim.ai> * improvement: schedules, auto-scroll * fix(settings): navigate back to origin page instead of always going home Use sessionStorage to store the return URL when entering settings, and use router.replace for tab switches so history doesn't accumulate. 
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix(schedules): release lastQueuedAt lock on all exit paths to prevent stuck schedules Multiple error/early-return paths in executeScheduleJob and executeJobInline were exiting without clearing lastQueuedAt, causing the dueFilter to permanently skip those schedules — resulting in stale "X hours ago" display for nextRunAt. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * feat(mothership): inline rename for resource tabs + workspace_file rename tool - Add double-click inline rename on file and table resource tabs - Wire useInlineRename + useRenameWorkspaceFile/useRenameTable mutations - Add rename operation to workspace_file copilot tool (schema, server, router) - Add knowledge base resource support (type, extraction, rendering, actions) - Accept optional className on InlineRenameInput for context-specific sizing Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * revert: remove inline rename UI from resource tabs Keep the workspace_file rename tool for the mothership agent. Only the UI-side inline rename (double-click tabs) is removed. 
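The `lastQueuedAt` fix described above is essentially "release the lock in `finally`" so no exit path can leave a schedule permanently skipped by the dueFilter. A synchronous sketch with an assumed store shape:

```typescript
// Assumed store shape, not the real schema.
interface ScheduleStore {
  clearLastQueuedAt(scheduleId: string): void;
}

// Sketch of the fix: the lock release lives in `finally`, so success,
// early returns inside execute, and thrown errors all clear
// lastQueuedAt. Previously only the happy path cleared it, leaving
// failed schedules stuck with a stale "X hours ago" nextRunAt display.
function runScheduleJob(
  scheduleId: string,
  store: ScheduleStore,
  execute: () => void
): void {
  try {
    execute();
  } finally {
    store.clearLastQueuedAt(scheduleId);
  }
}
```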
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * feat(mothership): knowledge base resource extraction + Resource/ResourceTable refactor - Extract KB resources from knowledge subagent respond format (knowledge_bases array) - Add knowledge_base tool to RESOURCE_TOOL_NAMES and TOOL_UI_METADATA - Extract ResourceTable as independently composable memoized component - Move contentOverride/overlay to Resource shell level (not table primitive) - Remove redundant disableHeaderSort and loadingRows props - Rename internal sort state for clarity (sort → internalSort, sortOverride → externalSort) - Export ResourceTable and ResourceTableProps from barrel Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix(logs) Run workflows client side in mothership to transmit logs (#3529) * Run workflows client side in mothership to transmit logs * Initialize set as constant, prevent duplicate execution * Fix lint --------- Co-authored-by: Theodore Li <theo@sim.ai> * fix(import) fix missing file * fix(resource): Hide resources that have been deleted (#3528) * Hide resources that have been deleted * Handle table, workflow not found * Add animation to prevent flash when previous resource was deleted * Fix animation playing on every switch * Run workflows client side in mothership to transmit logs * Fix race condition for animation * Use shared workflow tool util file --------- Co-authored-by: Theodore Li <theo@sim.ai> * fix: chat scrollbar on sidebar collapse/open * edit existing workflow should bring up artifact * fix(agent) subagent and main agent text being merged without spacing * feat(mothership): remove resource-level delete tools from copilot Remove delete operations for workflows, folders, tables, and files from the mothership copilot to prevent destructive actions via AI. Row-level and column-level deletes are preserved. 
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: stop sidebar from auto-collapsing when resource panel appears (#3540) The sidebar was forcibly collapsed whenever a resource (e.g. workflow) first appeared in the resource panel during a task. This was disruptive on larger screens where users want to keep both the sidebar and resource panel visible simultaneously. Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> * fix(mothership): insert copilot-created workflows at top of list (#3537) * feat(mothership): remove resource-level delete tools from copilot Remove delete operations for workflows, folders, tables, and files from the mothership copilot to prevent destructive actions via AI. Row-level and column-level deletes are preserved. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix(mothership): insert copilot-created workflows at top of list * fix(mothership): server-side top-insertion sort order and deduplicate registry logic * fix(mothership): include folder sort orders when computing top-insertion position * fix(mothership): use getNextWorkflowColor instead of hardcoded color --------- Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> * fix(stop) Add stop of mothership-ran workflows, persist stop messages (#3538) * Connect play stop workflow in embedded view to workflow * Fix stop not actually stopping workflow * Fix ui not showing stopped by user * Lint fix * Plumb cancellation through system * Stopping mothership chat stops workflow * Remove extra fluff * Persist blocks on cancellation * Add root level stopped by user --------- Co-authored-by: Theodore Li <theo@sim.ai> * fix(autolayout): targeted autolayout heuristic restored (#3536) * fix(autolayout): targeted autolayout heuristic restored * fix autolayout boundary cases * more fixes * address comments * on conflict updates * address more comments * fix relative position scope * fix type omission * address bugbot comment * Credential tags * Credential id field * 
feat(mothership): server-persisted unread task indicators via SSE (#3549) * feat(mothership): server-persisted unread task indicators via SSE Replace fragile client-side polling + timer-based green flash with server-persisted lastSeenAt semantics, real-time SSE push via Redis pub/sub, and dot overlay UI on the Blimp icon. - Add lastSeenAt column to copilotChats for server-persisted read state - Add Redis/local pub/sub singleton for task status events (started, completed, created, deleted, renamed) - Add SSE endpoint (GET /api/mothership/events) with heartbeat and workspace-scoped filtering - Add mark-read endpoint (POST /api/mothership/chats/read) - Publish SSE events from chat, rename, delete, and auto-title handlers - Add useTaskEvents hook for client-side SSE subscription - Add useMarkTaskRead mutation with optimistic update - Replace timer logic in sidebar with TaskStatus state machine (running/unread/idle) and dot overlay using brand color variables - Mark tasks read on mount and stream completion in home page - Fix security: add userId check to delete WHERE clause - Fix: bump updatedAt on stream completion - Fix: set lastSeenAt on rename to prevent false-positive unread Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: address PR review feedback - Return 404 when delete finds no matching chat (was silent no-op) - Move log after ownership check so it only fires on actual deletion - Publish completed SSE event from stop route so sidebar dot clears on abort Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: backfill last_seen_at in migration to prevent false unread dots Existing rows would have last_seen_at = NULL after migration, causing all past completed tasks to show as unread. Backfill sets last_seen_at to updated_at for all existing rows. 
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: timestamp mismatch on task creation + wasSendingRef leak across navigation - Pass updatedAt explicitly alongside lastSeenAt on chat creation so both use the same JS timestamp (DB defaultNow() ran later, causing updatedAt > lastSeenAt → false unread) - Reset wasSendingRef when chatId changes to prevent a stale true from task A triggering a redundant markRead on task B Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: mark-read fires for inline-created chats + encode workspaceId in SSE URL Expose resolvedChatId from useChat so home.tsx can mark-read even when chatId prop stays undefined after replaceState URL update. Also URL-encode workspaceId in EventSource URL as a defensive measure. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: auto-focus home input on initial view + fix sidebar task click handling Auto-focus the textarea when the initial home view renders. Also fix sidebar task click to always call onMultiSelectClick so selection state stays consistent. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: auto-title sets lastSeenAt + move started event inside DB guard Auto-title now sets both updatedAt and lastSeenAt (matching the rename route pattern) to prevent false-positive unread dots. Also move the 'started' SSE event inside the if(updated) guard so it only fires when the DB update actually matched a row. 
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * modified tasks multi select to be just like workflows * fix * refactor: extract generic pub/sub and SSE factories + fixes - Extract createPubSubChannel factory (lib/events/pubsub.ts) to eliminate duplicated Redis/EventEmitter boilerplate between task and MCP pub/sub - Extract createWorkspaceSSE factory (lib/events/sse-endpoint.ts) to share auth, heartbeat, and cleanup logic across SSE endpoints - Fix auto-title race suppressing unread status by removing updatedAt/lastSeenAt from title-only DB update - Fix wheel event listener leak in ResourceTabs (RefCallback cleanup was silently discarded) - Fix getFullSelection() missing taskIds (inconsistent with hasAnySelection) - Deduplicate SSE_RESPONSE_HEADERS to spread from shared SSE_HEADERS - Hoist isSttAvailable to module-level constant to avoid per-render IIFE Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> --------- Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> * feat(logs): add workflow trigger type for sub-workflow executions (#3554) * feat(logs): add workflow trigger type for sub-workflow executions * fix(logs): align workflow filter color with blue-secondary badge variant * feat(tab) allow user to control resource tabs * Make resources persist to backend * Use colored squares for workflows * Add click and drag functionality to resource * Fix expanding panel logic * Reduce duplication, reading resource also opens up resource panel * Move resource dropdown to own file * Handle renamed resources * Clicking already open tab should just switch to tab --------- Co-authored-by: Theodore Li <theo@sim.ai> * Fix new resource tab button not appearing on tasks * improvement(ui): dropdown menus, icons, globals * improvement: notifications, terminal, globals * reverted task logic * feat(context) pass resource tab as context (#3555) * feat(context) add currenttly open resource file to context for agent * Simplify resource resolution * Skip initialize vfs 
* Restore ff * Add back try catch * Remove redundant code * Remove json serialization/deserialization loop --------- Co-authored-by: Theodore Li <theo@sim.ai> * Feat(references) add at-sign to reference sim resources (#3560) * feat(chat) add at sign * Address bugbot issues * Remove extra chatcontext defs * Add table and file to schema * Add icon to chip for files --------- Co-authored-by: Theodore Li <theo@sim.ai> * improvement(refactor): move to soft deletion of resources + reliability improvements (#3561) * improvement(deletion): migrate to soft deletion of resources * progress * scoping fixes * round of fixes * deduplicated name on workflow import * fix tests * add migration * cleanup dead code * address bugbot comments * optimize query * feat(sim-mailer): email inbox for mothership with chat history and plan gating (#3558) * feat(sim-mailer): email inbox for mothership with chat history and plan gating * revert hardcoded ff * fix(inbox): address PR review comments - plan enforcement, idempotency, webhook auth - Enforce Max plan at API layer: hasInboxAccess() now checks subscription tier (>= 25k credits or enterprise) - Add idempotency guard to executeInboxTask() to prevent duplicate emails on Trigger.dev retries - Add AGENTMAIL_WEBHOOK_SECRET env var for webhook signature verification (Bearer token) * improvement(inbox): harden security and efficiency from code audit - Use crypto.timingSafeEqual for webhook secret comparison (prevents timing attacks) - Atomic claim in executor: WHERE status='received' prevents duplicate processing on retries - Parallelize hasInboxAccess + getUserEntityPermissions in all API routes (reduces latency) - Truncate email body at webhook insertion (50k char limit, prevents unbounded DB storage) - Harden escapeAttr with angle bracket and single quote escaping - Rename use-inbox.ts to inbox.ts (matches hooks/queries/ naming convention) * fix(inbox): replace Bearer token auth with proper Svix HMAC-SHA256 webhook verification - Use 
per-workspace webhook secret from DB instead of global env var - Verify AgentMail/Svix signatures: HMAC-SHA256 over svix-id.timestamp.body - Timing-safe comparison via crypto.timingSafeEqual - Replay protection via timestamp tolerance (5 min window) - Join mothershipInboxWebhook in workspace lookup (zero additional DB calls) - Remove dead AGENTMAIL_WEBHOOK_SECRET env var - Select only needed workspace columns in webhook handler * fix(inbox): require webhook secret — reject requests when secret is missing Previously, if the webhook secret was missing from the DB (corrupted state), the handler would skip verification entirely and process the request unauthenticated. Now all three conditions are hard requirements: secret must exist in DB, Svix headers must be present, and signature must verify. * fix(inbox): address second round of PR review comments - Exclude rejected tasks from rate limit count to prevent DoS via spam - Strip raw HTML from LLM output before marked.parse to prevent XSS in emails - Track responseSent flag to prevent duplicate emails when DB update fails after send * fix(inbox): address third round of PR review comments - Use dynamic isHosted from feature-flags instead of hardcoded true - Atomic JSON append for chat message persistence (eliminates read-modify-write race) - Handle cutIndex === 0 in stripQuotedReply (body starts with quote) - Clean up orphan mothershipInboxWebhook row on enableInbox rollback - Validate status query parameter against enum in tasks API * fix(inbox): validate cursor param, preserve code blocks in HTML stripping - Validate cursor date before using in query (return 400 for invalid) - Split on fenced code blocks before stripping HTML tags to preserve code examples in email responses * fix(inbox): return 500 on webhook server errors to enable Svix retries * fix(inbox): remove isHosted guard from hasInboxAccess — feature flag is sufficient * fix(inbox): prevent double-enable from deleting webhook secret row * fix(inbox): 
null-safe stripThinkingTags, encode URL params, surface remove-sender errors - Guard against null result.content in stripThinkingTags - Use encodeURIComponent on all AgentMail API path parameters - Surface handleRemoveSender errors to the user instead of swallowing * improvement(inbox): remove unused types, narrow SELECT queries, fix optimistic ID collision Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix(inbox): add keyboard accessibility to clickable task rows * fix(inbox): use Svix library for webhook verification, fix responseSent flag, prevent inbox enumeration - Replace manual HMAC-SHA256 verification with official Svix library per AgentMail docs - Fix responseSent flag: only set true when email delivery actually succeeds - Return consistent 401 for unknown inbox and bad signature to prevent enumeration - Make AgentMailInbox.organization_id optional to match API docs * chore(db): rebase inbox migration onto feat/mothership-copilot (0172 → 0173) Sync schema with target branch and regenerate migration as 0173 to avoid conflicts with 0172_silky_magma on feat/mothership-copilot. * fix(db): rebase inbox migration to 0173 after feat/mothership-copilot divergence Target branch added 0172_silky_magma, so our inbox migration is now 0173_youthful_stryfe. * fix(db): regenerate inbox migration after rebase on feat/mothership-copilot * fix(inbox): case-insensitive email match and sanitize javascript: URIs in email HTML - Use lower() in isSenderAllowed SQL to match workspace members regardless of email case stored by auth provider - Strip javascript:, vbscript:, and data: URIs from marked HTML output to prevent XSS in outbound email responses * fix(inbox): case-insensitive email match in resolveUserId Consistent with the isSenderAllowed fix — uses lower() so mixed-case stored emails match correctly, preventing silent fallback to workspace owner. 
--------- Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> * Kb args * refactor(resource): remove logs-specific escape hatches from Resource abstraction Logs now composes ResourceHeader + ResourceOptionsBar + ResourceTable directly instead of using Resource with contentOverride/overlay escape hatches. Removes contentOverride, onLoadMore, hasMore, isLoadingMore from ResourceProps. Adds ColumnOption to barrel export and fixes table.tsx internal import. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix(sim-mailer): download email attachments and pass to LLM as multimodal content Attachments were only passed as metadata text in the email body. Now downloads actual file bytes from AgentMail, converts via createFileContent (same path as interactive chat), and sends as fileAttachments to the orchestrator. Also parallelizes attachment fetching with workspace context loading, and downloads multiple attachments concurrently via Promise.allSettled. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * feat(connector): add Gmail knowledge base connector with thread-based sync and filtering Syncs email threads from Gmail into knowledge bases with configurable filters: label scoping, date range presets, promotions/social exclusion, Gmail search syntax support, and max thread caps to keep KB size manageable. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * feat(connector): add Outlook knowledge base connector with conversation grouping and filtering Syncs email conversations from Outlook/Office 365 via Microsoft Graph API. Groups messages by conversationId into single documents. Configurable filters: folder selection, date range presets, Focused Inbox, KQL search syntax, and max conversation caps. 
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * cleanup resource definition * feat(connectors): add 8 knowledge base connectors — Zendesk, Intercom, ServiceNow, Google Sheets, Microsoft Teams, Discord, Google Calendar, Reddit Each connector syncs documents into knowledge bases with configurable filtering: - Zendesk: Help Center articles + support tickets with status/locale filters - Intercom: Articles + conversations with state filtering - ServiceNow: KB articles + incidents with state/priority/category filters - Google Sheets: Spreadsheet tabs as LLM-friendly row-by-row documents - Microsoft Teams: Channel messages (Slack-like pattern) via Graph API - Discord: Channel messages with bot token auth - Google Calendar: Events with date range presets and attendee metadata - Reddit: Subreddit posts with top comments, sort/time filters All connectors validated against official API docs with bug fixes applied. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix(inbox): fetch real attachment binary from presigned URL and persist for chat display The AgentMail attachment endpoint returns JSON metadata with a download_url, not raw binary. We were base64-encoding the JSON text and sending it to the LLM, causing provider rejection. Now we parse the metadata, fetch the actual file from the presigned URL, upload it to copilot storage, and persist it on the chat message so images render inline with previews. 
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * added agentmail domain for mailer * added docs for sim mailer * fix(resource) handle resource deletion (#3568) * Add handle dragging tab to input chat * Add back delete tools * Handle deletions properly with resources view * Fix lint * Add permissions checking * Skip resource_added event when resource is deleted * Pass workflow id as context --------- Co-authored-by: Theodore Li <theo@sim.ai> * update docs styling, add delete confirmation on inbox * Fix fast edit route * updated docs styling, added FAQs, updated content * upgrade turbo * fix(knowledge) use consistent empty state for documents page Replace the centered "No documents yet" text with the standard Resource table empty state (column headers + create row), matching all other resource pages. Move "Upload documents" from header action to table create row as "New documents". Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix(notifications): polish modal styling, credential display, and trigger filters (#3571) * fix(notifications): polish modal styling, credential display, and trigger filters - Show credential display name instead of raw account ID in Slack account selector - Fix label styling to use default Label component (text-primary) for consistency - Fix modal body spacing with proper top padding after tab bar - Replace list-card skeleton with form-field skeleton matching actual layout - Replace custom "Select a Slack account first" box with disabled Combobox (dependsOn pattern) - Use proper Label component in WorkflowSelector with consistent gap spacing - Add overflow badge pattern (slice + +N) to level and trigger filter badges - Use dynamic trigger options from getTriggerOptions() instead of hardcoded CORE_TRIGGER_TYPES - Relax API validation to accept integration trigger types (z.string instead of z.enum) - Deduplicate account rows from credential leftJoin in accounts API - Extract getTriggerOptions() to 
module-level constants to avoid per-render calls Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix(notifications): address PR review feedback - Restore accountId in displayName fallback chain (credentialDisplayName || accountId || providerId) - Add .default([]) to triggerFilter in create schema to preserve backward compatibility - Treat empty triggerFilter as "match all" in notification matching logic - Remove unreachable overflow badge for levelFilter (only 2 possible values) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> --------- Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix(settings): add spacing to Sim Keys toggle and replace Sim Mailer icon with Send Add 24px top margin to the "Allow personal Sim keys" toggle so it doesn't sit right below the empty state. Replace the Mail envelope icon for Sim Mailer with a new Send (paper plane) icon matching the emcn icon style. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * standardize back buttons in settings * feat(restore) Add restore endpoints and ui (#3570) * Add restore endpoints and ui * Derive toast from notification * Auth user if workspaceid not found * Fix recently deleted ui * Add restore error toast * Fix deleted at timestamp mismatch --------- Co-authored-by: Theodore Li <theo@sim.ai> * fix type errors * Lint * improvements: ui/ux around mothership * reactquery best practices, UI alignment in restore * clamp logs panel * subagent thinking text * fix build, speedup tests by up to 40% * Fix fast edit * Add download file shortcut on mothership file view * fix: SVG file support in mothership chat and file serving - Send SVGs as document/text-xml to Claude instead of unsupported image/svg+xml, so the mothership can actually read SVG content - Serve SVGs inline with proper content type and CSP sandbox so chat previews render correctly - Add SVG preview support in file viewer (sandboxed iframe) - Derive 
IMAGE_MIME_TYPES from MIME_TYPE_MAPPING to reduce duplication - Add missing webp to contentTypeMap, SAFE_INLINE_TYPES, binaryExtensions - Consolidate PREVIEWABLE_EXTENSIONS into preview-panel exports Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * fix: replace image/* wildcard with explicit supported types in file picker The image/* accept attribute allowed users to select BMP, TIFF, HEIC, and other image types that are rejected server-side. Replace with the exact set of supported image MIME types and extensions to match the copilot upload validation. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> * Context tags * Fix lint * improvement: chat and terminal --------- Co-authored-by: Emir Karabeg <emirkarabeg@berkeley.edu> Co-authored-by: Waleed <walif6@gmail.com> Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> Co-authored-by: Theodore Li <teddy@zenobiapay.com> Co-authored-by: Vikhyath Mondreti <vikhyathvikku@gmail.com> Co-authored-by: Vikhyath Mondreti <vikhyath@simstudio.ai> Co-authored-by: Theodore Li <theodoreqili@gmail.com> Co-authored-by: Theodore Li <theo@sim.ai>
973 lines
29 KiB
TypeScript
import { createLogger } from '@sim/logger'
import { type NextRequest, NextResponse } from 'next/server'
import { checkInternalAuth } from '@/lib/auth/hybrid'
import { isE2bEnabled } from '@/lib/core/config/feature-flags'
import { generateRequestId } from '@/lib/core/utils/request'
import { executeInE2B } from '@/lib/execution/e2b'
import { executeInIsolatedVM } from '@/lib/execution/isolated-vm'
import { CodeLanguage, DEFAULT_CODE_LANGUAGE, isValidCodeLanguage } from '@/lib/execution/languages'
import { escapeRegExp, normalizeName, REFERENCE } from '@/executor/constants'
import { type OutputSchema, resolveBlockReference } from '@/executor/utils/block-reference'
import { formatLiteralForCode } from '@/executor/utils/code-formatting'
import {
  createEnvVarPattern,
  createWorkflowVariablePattern,
} from '@/executor/utils/reference-validation'

export const dynamic = 'force-dynamic'
export const runtime = 'nodejs'

export const MAX_DURATION = 210

const logger = createLogger('FunctionExecuteAPI')

const E2B_JS_WRAPPER_LINES = 3
const E2B_PYTHON_WRAPPER_LINES = 1

type TypeScriptModule = typeof import('typescript')

let typescriptModulePromise: Promise<TypeScriptModule> | null = null
async function loadTypeScriptModule(): Promise<TypeScriptModule> {
  if (!typescriptModulePromise) {
    typescriptModulePromise = import('typescript').then((mod) => {
      const tsModule = (mod?.default ?? mod) as TypeScriptModule
      return tsModule
    })
  }

  return typescriptModulePromise
}
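The memoized-promise pattern above ensures the `typescript` package is dynamically imported at most once, even if several requests race to load it. A minimal stand-alone sketch of that pattern (`heavyInit`, `loadHeavyModule`, and `loadCount` are illustrative stand-ins, not part of this route):

```typescript
// Sketch of the memoized dynamic-import pattern used by loadTypeScriptModule.
let loadCount = 0
let modulePromise: Promise<{ version: string }> | null = null

async function heavyInit(): Promise<{ version: string }> {
  // Stand-in for the real `import('typescript')` call.
  loadCount += 1
  return { version: '5.x' }
}

function loadHeavyModule(): Promise<{ version: string }> {
  // The first caller kicks off the load; every later caller reuses the same
  // promise, so the underlying import runs at most once under concurrency.
  if (!modulePromise) {
    modulePromise = heavyInit()
  }
  return modulePromise
}
```

Because the *promise* is cached (not the resolved value), concurrent callers that arrive before the import settles still share the single in-flight load.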
async function extractJavaScriptImports(
  code: string
): Promise<{ imports: string; remainingCode: string; importLineCount: number }> {
  try {
    const tsModule = await loadTypeScriptModule()

    const sourceFile = tsModule.createSourceFile(
      'user-code.js',
      code,
      tsModule.ScriptTarget.Latest,
      true,
      tsModule.ScriptKind.JS
    )

    const importSegments: Array<{ text: string; start: number; end: number }> = []

    sourceFile.statements.forEach((statement) => {
      if (
        tsModule.isImportDeclaration(statement) ||
        tsModule.isImportEqualsDeclaration(statement)
      ) {
        importSegments.push({
          text: statement.getFullText(sourceFile).trim(),
          start: statement.getFullStart(),
          end: statement.getEnd(),
        })
      }
    })

    if (importSegments.length === 0) {
      return { imports: '', remainingCode: code, importLineCount: 0 }
    }

    importSegments.sort((a, b) => a.start - b.start)

    const imports = importSegments.map((segment) => segment.text).join('\n')

    let cursor = 0
    const parts: string[] = []
    let importLineCount = 0

    for (const segment of importSegments) {
      if (segment.start > cursor) {
        parts.push(code.slice(cursor, segment.start))
      }

      const removedSegment = code.slice(segment.start, segment.end)
      importLineCount += removedSegment.split('\n').length - 1

      const newlinePlaceholder = removedSegment.replace(/[^\n]/g, '')
      parts.push(newlinePlaceholder)

      cursor = segment.end
    }

    if (cursor < code.length) {
      parts.push(code.slice(cursor))
    }

    const remainingCode = parts.join('')

    return { imports, remainingCode, importLineCount: Math.max(importLineCount, 0) }
  } catch (error) {
    logger.error('Failed to extract JavaScript imports', { error })
    return { imports: '', remainingCode: code, importLineCount: 0 }
  }
}
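`extractJavaScriptImports` replaces each removed import with only its newline characters (`removedSegment.replace(/[^\n]/g, '')`), so line numbers in the remaining code stay aligned with the user's original source and later stack traces map cleanly. A simplified, regex-based sketch of that placeholder trick — the route itself walks the TypeScript AST, and `stripImports` here is a hypothetical stand-in that only handles single-line `import … from '…'` statements:

```typescript
// Replace each import with the newlines it contained, preserving line numbers.
function stripImports(code: string): { imports: string; remainingCode: string } {
  const importRegex = /^import .*?from ['"][^'"]+['"];?$/gm
  const imports: string[] = []
  const remainingCode = code.replace(importRegex, (segment) => {
    imports.push(segment.trim())
    // Keep only the newline characters from the removed segment, mirroring
    // removedSegment.replace(/[^\n]/g, '') in the route above.
    return segment.replace(/[^\n]/g, '')
  })
  return { imports: imports.join('\n'), remainingCode }
}
```

The payoff is that a statement originally on line N of the user's code is still on line N of `remainingCode` after extraction.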
/**
 * Enhanced error information interface
 */
interface EnhancedError {
  message: string
  line?: number
  column?: number
  stack?: string
  name: string
  originalError: any
  lineContent?: string
}
/**
 * Extract enhanced error information from VM execution errors
 */
function extractEnhancedError(
  error: any,
  userCodeStartLine: number,
  userCode?: string
): EnhancedError {
  const enhanced: EnhancedError = {
    message: error.message || 'Unknown error',
    name: error.name || 'Error',
    originalError: error,
  }

  if (error.stack) {
    enhanced.stack = error.stack

    const stackLines: string[] = error.stack.split('\n')

    for (const line of stackLines) {
      let match = line.match(/user-function\.js:(\d+)(?::(\d+))?/)

      if (!match) {
        match = line.match(/at\s+user-function\.js:(\d+):(\d+)/)
      }

      if (match) {
        const stackLine = Number.parseInt(match[1], 10)
        const stackColumn = match[2] ? Number.parseInt(match[2], 10) : undefined

        const adjustedLine = stackLine - userCodeStartLine + 1

        const isWrapperSyntaxError =
          stackLine > userCodeStartLine &&
          error.name === 'SyntaxError' &&
          (error.message.includes('Unexpected token') ||
            error.message.includes('Unexpected end of input'))

        if (isWrapperSyntaxError && userCode) {
          const codeLines = userCode.split('\n')
          const lastUserLine = codeLines.length
          enhanced.line = lastUserLine
          enhanced.column = codeLines[lastUserLine - 1]?.length || 0
          enhanced.lineContent = codeLines[lastUserLine - 1]?.trim()
          break
        }

        if (adjustedLine > 0) {
          enhanced.line = adjustedLine
          enhanced.column = stackColumn

          if (userCode) {
            const codeLines = userCode.split('\n')
            if (adjustedLine <= codeLines.length) {
              enhanced.lineContent = codeLines[adjustedLine - 1]?.trim()
            }
          }

          break
        }

        if (stackLine <= userCodeStartLine) {
          enhanced.line = stackLine
          enhanced.column = stackColumn
          break
        }
      }
    }

    const cleanedStackLines: string[] = stackLines
      .filter(
        (line: string) =>
          line.includes('user-function.js') ||
          (!line.includes('vm.js') && !line.includes('internal/'))
      )
      .map((line: string) => line.replace(/\s+at\s+/, ' at '))

    if (cleanedStackLines.length > 0) {
      enhanced.stack = cleanedStackLines.join('\n')
    }
  }

  return enhanced
}
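The stack-trace mapping above has two building blocks: a regex that pulls the line/column V8 reports against the synthetic `user-function.js` file name, and the offset `adjustedLine = stackLine - userCodeStartLine + 1`. Isolated as stand-alone helpers (hypothetical names, same regex and arithmetic as the route):

```typescript
// Pull the line/column that V8 reports against the synthetic file name.
function parseVmFrame(frameLine: string): { line: number; column?: number } | null {
  const match = frameLine.match(/user-function\.js:(\d+)(?::(\d+))?/)
  if (!match) return null
  return {
    line: Number.parseInt(match[1], 10),
    column: match[2] ? Number.parseInt(match[2], 10) : undefined,
  }
}

// If user code begins at wrapped line `userCodeStartLine`, that line is user line 1.
function toUserLine(stackLine: number, userCodeStartLine: number): number {
  return stackLine - userCodeStartLine + 1
}
```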
/**
 * Parse and format E2B error message
 * Removes E2B-specific line references and adds correct user line numbers
 */
function formatE2BError(
  errorMessage: string,
  errorOutput: string,
  language: CodeLanguage,
  userCode: string,
  prologueLineCount: number
): { formattedError: string; cleanedOutput: string } {
  const wrapperLines =
    language === CodeLanguage.Python ? E2B_PYTHON_WRAPPER_LINES : E2B_JS_WRAPPER_LINES
  const totalOffset = prologueLineCount + wrapperLines

  let userLine: number | undefined
  let cleanErrorType = ''
  let cleanErrorMsg = ''

  if (language === CodeLanguage.Python) {
    const cellMatch = errorOutput.match(/Cell In\[\d+\], line (\d+)/)
    if (cellMatch) {
      const originalLine = Number.parseInt(cellMatch[1], 10)
      userLine = originalLine - totalOffset
    }

    cleanErrorMsg = errorMessage
      .replace(/\s*\(detected at line \d+\)/g, '')
      .replace(/\s*\([^)]+\.py, line \d+\)/g, '')
      .trim()
  } else if (language === CodeLanguage.JavaScript) {
    const firstLineEnd = errorMessage.indexOf('\n')
    const firstLine = firstLineEnd > 0 ? errorMessage.substring(0, firstLineEnd) : errorMessage

    const jsErrorMatch = firstLine.match(/^(\w+Error):\s*[^:]+:\s*([^(]+)\.\s*\((\d+):(\d+)\)/)
    if (jsErrorMatch) {
      cleanErrorType = jsErrorMatch[1]
      cleanErrorMsg = jsErrorMatch[2].trim()
      const originalLine = Number.parseInt(jsErrorMatch[3], 10)
      userLine = originalLine - totalOffset
    } else {
      const arrowMatch = errorMessage.match(/^>\s*(\d+)\s*\|/m)
      if (arrowMatch) {
        const originalLine = Number.parseInt(arrowMatch[1], 10)
        userLine = originalLine - totalOffset
      }
      const errorMatch = firstLine.match(/^(\w+Error):\s*(.+)/)
      if (errorMatch) {
        cleanErrorType = errorMatch[1]
        cleanErrorMsg = errorMatch[2]
          .replace(/^[^:]+:\s*/, '') // Remove file path
          .replace(/\s*\(\d+:\d+\)\s*$/, '') // Remove line:col at end
          .trim()
      } else {
        cleanErrorMsg = firstLine
      }
    }
  }

  const finalErrorMsg =
    cleanErrorType && cleanErrorMsg
      ? `${cleanErrorType}: ${cleanErrorMsg}`
      : cleanErrorMsg || errorMessage

  let formattedError = finalErrorMsg
  if (userLine && userLine > 0) {
    const codeLines = userCode.split('\n')
    // Clamp userLine to the actual user code range
    const actualUserLine = Math.min(userLine, codeLines.length)
    if (actualUserLine > 0 && actualUserLine <= codeLines.length) {
      const lineContent = codeLines[actualUserLine - 1]?.trim()
      if (lineContent) {
        formattedError = `Line ${actualUserLine}: \`${lineContent}\` - ${finalErrorMsg}`
      } else {
        formattedError = `Line ${actualUserLine} - ${finalErrorMsg}`
      }
    }
  }

  const cleanedOutput = finalErrorMsg

  return { formattedError, cleanedOutput }
}
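E2B reports line numbers against the wrapped script (a prologue plus a per-language wrapper), so mapping back to the user's code is a subtraction. A sketch of just that offset math, using the same wrapper sizes as the `E2B_JS_WRAPPER_LINES` / `E2B_PYTHON_WRAPPER_LINES` constants above (`e2bLineToUserLine` is a hypothetical helper, not the route's code):

```typescript
// The sandbox prepends a prologue plus a language-specific wrapper before the
// user's code; subtract both to recover the user-facing line number.
const JS_WRAPPER_LINES = 3
const PY_WRAPPER_LINES = 1

function e2bLineToUserLine(
  reportedLine: number,
  prologueLineCount: number,
  language: 'javascript' | 'python'
): number {
  const wrapperLines = language === 'python' ? PY_WRAPPER_LINES : JS_WRAPPER_LINES
  return reportedLine - (prologueLineCount + wrapperLines)
}
```

Note the caller still has to clamp the result to the user code's actual length, as `formatE2BError` does with `Math.min(userLine, codeLines.length)`.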
/**
 * Create a detailed error message for users
 */
function createUserFriendlyErrorMessage(
  enhanced: EnhancedError,
  requestId: string,
  userCode?: string
): string {
  let errorMessage = enhanced.message

  if (enhanced.line !== undefined) {
    let lineInfo = `Line ${enhanced.line}`

    // Add the actual line content if available
    if (enhanced.lineContent) {
      lineInfo += `: \`${enhanced.lineContent}\``
    }

    errorMessage = `${lineInfo} - ${errorMessage}`
  } else {
    if (enhanced.stack) {
      const stackMatch = enhanced.stack.match(/user-function\.js:(\d+)(?::(\d+))?/)
      if (stackMatch) {
        const line = Number.parseInt(stackMatch[1], 10)
        let lineInfo = `Line ${line}`

        if (userCode) {
          const codeLines = userCode.split('\n')
          if (line <= codeLines.length) {
            const lineContent = codeLines[line - 1]?.trim()
            if (lineContent) {
              lineInfo += `: \`${lineContent}\``
            }
          }
        }

        errorMessage = `${lineInfo} - ${errorMessage}`
      }
    }
  }

  if (enhanced.name !== 'Error') {
    const errorTypePrefix =
      enhanced.name === 'SyntaxError'
        ? 'Syntax Error'
        : enhanced.name === 'TypeError'
          ? 'Type Error'
          : enhanced.name === 'ReferenceError'
            ? 'Reference Error'
            : enhanced.name

    if (!errorMessage.toLowerCase().includes(errorTypePrefix.toLowerCase())) {
      errorMessage = `${errorTypePrefix}: ${errorMessage}`
    }
  }

  return errorMessage
}
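The error-name prefixing at the end of `createUserFriendlyErrorMessage` can be sketched on its own: expand the JS error class name into a friendlier label, but skip the prefix when the message already contains it. A hypothetical `prefixWithErrorType` helper with the same expand-then-dedupe behavior:

```typescript
// Expand a JS error class name into a readable label and prepend it,
// unless the message already mentions that label (case-insensitive).
function prefixWithErrorType(name: string, message: string): string {
  if (name === 'Error') return message
  const label =
    name === 'SyntaxError'
      ? 'Syntax Error'
      : name === 'TypeError'
        ? 'Type Error'
        : name === 'ReferenceError'
          ? 'Reference Error'
          : name
  if (message.toLowerCase().includes(label.toLowerCase())) return message
  return `${label}: ${message}`
}
```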
function resolveWorkflowVariables(
|
|
code: string,
|
|
  workflowVariables: Record<string, any>,
  contextVariables: Record<string, any>
): string {
  let resolvedCode = code

  const regex = createWorkflowVariablePattern()
  let match: RegExpExecArray | null
  const replacements: Array<{
    match: string
    index: number
    variableName: string
    variableValue: unknown
  }> = []

  while ((match = regex.exec(code)) !== null) {
    const variableName = match[1].trim()

    const foundVariable = Object.entries(workflowVariables).find(
      ([_, variable]) => normalizeName(variable.name || '') === variableName
    )

    if (!foundVariable) {
      const availableVars = Object.values(workflowVariables)
        .map((v) => v.name)
        .filter(Boolean)
      throw new Error(
        `Variable "${variableName}" doesn't exist.` +
          (availableVars.length > 0 ? ` Available: ${availableVars.join(', ')}` : '')
      )
    }

    const variable = foundVariable[1]
    let variableValue: unknown = variable.value

    if (variable.value !== undefined && variable.value !== null) {
      const type = variable.type === 'string' ? 'plain' : variable.type

      if (type === 'number') {
        variableValue = Number(variableValue)
      } else if (type === 'boolean') {
        if (typeof variableValue !== 'boolean') {
          const normalized = String(variableValue).toLowerCase().trim()
          variableValue = normalized === 'true'
        }
      } else if (type === 'json' && typeof variableValue === 'string') {
        try {
          variableValue = JSON.parse(variableValue)
        } catch {
          // Keep as-is if not valid JSON
        }
      }
    }

    replacements.push({
      match: match[0],
      index: match.index,
      variableName,
      variableValue,
    })
  }

  // Apply replacements from last to first so earlier match indices stay valid
  for (let i = replacements.length - 1; i >= 0; i--) {
    const { match: matchStr, index, variableName, variableValue } = replacements[i]

    const safeVarName = `__variable_${variableName.replace(/[^a-zA-Z0-9_]/g, '_')}`
    contextVariables[safeVarName] = variableValue
    resolvedCode =
      resolvedCode.slice(0, index) + safeVarName + resolvedCode.slice(index + matchStr.length)
  }

  return resolvedCode
}

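// Illustrative sketch (reference syntax depends on createWorkflowVariablePattern):
// for a workflow variable named "my var" with value 42, a matched reference in
// the user code is rewritten to the safe identifier __variable_my_var, and
// contextVariables.__variable_my_var is set to 42 so the executed code can read it.
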
function resolveEnvironmentVariables(
  code: string,
  params: Record<string, any>,
  envVars: Record<string, string>,
  contextVariables: Record<string, any>
): string {
  let resolvedCode = code

  const regex = createEnvVarPattern()
  let match: RegExpExecArray | null
  const replacements: Array<{ match: string; index: number; varName: string; varValue: string }> =
    []

  // Environment variables override params with the same key
  const resolverVars: Record<string, string> = {}
  Object.entries(params).forEach(([key, value]) => {
    if (value !== undefined && value !== null) {
      resolverVars[key] = String(value)
    }
  })
  Object.entries(envVars).forEach(([key, value]) => {
    if (value !== undefined && value !== null) {
      resolverVars[key] = value
    }
  })

  while ((match = regex.exec(code)) !== null) {
    const varName = match[1].trim()

    if (!(varName in resolverVars)) {
      continue
    }

    replacements.push({
      match: match[0],
      index: match.index,
      varName,
      varValue: resolverVars[varName],
    })
  }

  // Apply replacements from last to first so earlier match indices stay valid
  for (let i = replacements.length - 1; i >= 0; i--) {
    const { match: matchStr, index, varName, varValue } = replacements[i]

    const safeVarName = `__var_${varName.replace(/[^a-zA-Z0-9_]/g, '_')}`
    contextVariables[safeVarName] = varValue
    resolvedCode =
      resolvedCode.slice(0, index) + safeVarName + resolvedCode.slice(index + matchStr.length)
  }

  return resolvedCode
}

function resolveTagVariables(
  code: string,
  blockData: Record<string, unknown>,
  blockNameMapping: Record<string, string>,
  blockOutputSchemas: Record<string, OutputSchema>,
  contextVariables: Record<string, unknown>,
  language = 'javascript'
): string {
  let resolvedCode = code
  const undefinedLiteral = language === 'python' ? 'None' : 'undefined'

  const tagPattern = new RegExp(
    `${REFERENCE.START}([a-zA-Z_](?:[a-zA-Z0-9_${REFERENCE.PATH_DELIMITER}]*[a-zA-Z0-9_])?)${REFERENCE.END}`,
    'g'
  )
  const tagMatches = resolvedCode.match(tagPattern) || []

  for (const match of tagMatches) {
    const tagName = match.slice(REFERENCE.START.length, -REFERENCE.END.length).trim()
    const pathParts = tagName.split(REFERENCE.PATH_DELIMITER)
    const blockName = pathParts[0]
    const fieldPath = pathParts.slice(1)

    const result = resolveBlockReference(blockName, fieldPath, {
      blockNameMapping,
      blockData,
      blockOutputSchemas,
    })

    if (!result) {
      continue
    }

    let tagValue = result.value

    if (tagValue === undefined) {
      resolvedCode = resolvedCode.replace(new RegExp(escapeRegExp(match), 'g'), undefinedLiteral)
      continue
    }

    if (typeof tagValue === 'string') {
      const trimmed = tagValue.trimStart()
      if (trimmed.startsWith('{') || trimmed.startsWith('[')) {
        try {
          tagValue = JSON.parse(tagValue)
        } catch {
          // Keep as string if not valid JSON
        }
      }
    }

    const safeVarName = `__tag_${tagName.replace(/_/g, '_1').replace(/\./g, '_0')}`
    contextVariables[safeVarName] = tagValue
    resolvedCode = resolvedCode.replace(new RegExp(escapeRegExp(match), 'g'), safeVarName)
  }

  return resolvedCode
}

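// Illustrative sketch: the safe-name encoding above is collision-free because
// literal underscores are escaped first ('_' -> '_1') before path delimiters
// are encoded ('.' -> '_0'). For example, a tag path "my_block.out" becomes
// "my_1block.out" and then "my_1block_0out", yielding __tag_my_1block_0out.
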
/**
 * Resolves workflow variables, environment variables, and block-reference tags in code
 * @param code - Code containing variable references
 * @param params - Parameters that may contain variable values
 * @param envVars - Environment variables from the workflow
 * @param blockData - Output data from previously executed blocks
 * @param blockNameMapping - Mapping used to resolve block names in references
 * @param blockOutputSchemas - Output schemas for referenced blocks
 * @param workflowVariables - Workflow-level variables
 * @param language - Target language ('javascript' or 'python')
 * @returns Resolved code and the context variables it references
 */
function resolveCodeVariables(
  code: string,
  params: Record<string, unknown>,
  envVars: Record<string, string> = {},
  blockData: Record<string, unknown> = {},
  blockNameMapping: Record<string, string> = {},
  blockOutputSchemas: Record<string, OutputSchema> = {},
  workflowVariables: Record<string, unknown> = {},
  language = 'javascript'
): { resolvedCode: string; contextVariables: Record<string, unknown> } {
  let resolvedCode = code
  const contextVariables: Record<string, unknown> = {}

  resolvedCode = resolveWorkflowVariables(resolvedCode, workflowVariables, contextVariables)
  resolvedCode = resolveEnvironmentVariables(resolvedCode, params, envVars, contextVariables)
  resolvedCode = resolveTagVariables(
    resolvedCode,
    blockData,
    blockNameMapping,
    blockOutputSchemas,
    contextVariables,
    language
  )

  return { resolvedCode, contextVariables }
}

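// Illustrative usage (the concrete reference syntax depends on the patterns
// returned by createEnvVarPattern / createWorkflowVariablePattern):
//   const { resolvedCode, contextVariables } =
//     resolveCodeVariables(code, params, { MY_KEY: 'abc' })
//   // Any matched MY_KEY reference in `code` is rewritten to the safe
//   // identifier __var_MY_KEY, with contextVariables.__var_MY_KEY === 'abc'.
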
/**
 * Removes one trailing newline from stdout.
 * This handles the common case where print() or console.log() adds a trailing \n
 * that users don't expect to see in the output.
 */
function cleanStdout(stdout: string): string {
  if (stdout.endsWith('\n')) {
    return stdout.slice(0, -1)
  }
  return stdout
}

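// Illustrative examples:
//   cleanStdout('done\n')     // -> 'done'  (one trailing newline removed)
//   cleanStdout('a\n\n')      // -> 'a\n'   (only the last newline is stripped)
//   cleanStdout('no-newline') // -> 'no-newline' (unchanged)
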
export async function POST(req: NextRequest) {
  const requestId = generateRequestId()
  const startTime = Date.now()
  let stdout = ''
  let userCodeStartLine = 3 // Default value for error reporting
  let resolvedCode = '' // Store resolved code for error reporting

  try {
    const auth = await checkInternalAuth(req)
    if (!auth.success || !auth.userId) {
      logger.warn(`[${requestId}] Unauthorized function execution attempt`)
      return NextResponse.json({ error: auth.error || 'Unauthorized' }, { status: 401 })
    }

    const body = await req.json()

    const { DEFAULT_EXECUTION_TIMEOUT_MS } = await import('@/lib/execution/constants')

    const {
      code,
      params = {},
      timeout = DEFAULT_EXECUTION_TIMEOUT_MS,
      language = DEFAULT_CODE_LANGUAGE,
      envVars = {},
      blockData = {},
      blockNameMapping = {},
      blockOutputSchemas = {},
      workflowVariables = {},
      workflowId,
      isCustomTool = false,
      _sandboxFiles,
    } = body

    const executionParams = { ...params }
    executionParams._context = undefined

    logger.info(`[${requestId}] Function execution request`, {
      hasCode: !!code,
      paramsCount: Object.keys(executionParams).length,
      timeout,
      workflowId,
      isCustomTool,
    })

    const lang = isValidCodeLanguage(language) ? language : DEFAULT_CODE_LANGUAGE

    const codeResolution = resolveCodeVariables(
      code,
      executionParams,
      envVars,
      blockData,
      blockNameMapping,
      blockOutputSchemas,
      workflowVariables,
      lang
    )
    resolvedCode = codeResolution.resolvedCode
    const contextVariables = codeResolution.contextVariables

    let jsImports = ''
    let jsRemainingCode = resolvedCode
    let hasImports = false

    if (lang === CodeLanguage.JavaScript) {
      const extractionResult = await extractJavaScriptImports(resolvedCode)
      jsImports = extractionResult.imports
      jsRemainingCode = extractionResult.remainingCode

      const hasRequireStatements = /require\s*\(\s*['"`]/.test(resolvedCode)
      hasImports = jsImports.trim().length > 0 || hasRequireStatements
    }

    if (lang === CodeLanguage.Python && !isE2bEnabled) {
      throw new Error(
        'Python execution requires E2B to be enabled. Please contact your administrator to enable E2B, or use JavaScript instead.'
      )
    }

    if (lang === CodeLanguage.JavaScript && hasImports && !isE2bEnabled) {
      throw new Error(
        'JavaScript code with import statements requires E2B to be enabled. Please remove the import statements, or contact your administrator to enable E2B.'
      )
    }

    const useE2B =
      isE2bEnabled &&
      !isCustomTool &&
      (lang === CodeLanguage.Python || (lang === CodeLanguage.JavaScript && hasImports))

    if (useE2B) {
      logger.info(`[${requestId}] E2B status`, {
        enabled: isE2bEnabled,
        hasApiKey: Boolean(process.env.E2B_API_KEY),
        language: lang,
      })
      let prologue = ''

      if (lang === CodeLanguage.JavaScript) {
        let prologueLineCount = 0

        const imports = jsImports
        const remainingCode = jsRemainingCode

        const importSection: string = imports ? `${imports}\n` : ''
        const importLineCount = imports ? imports.split('\n').length : 0

        const codeBody = remainingCode
        resolvedCode = importSection ? `${imports}\n\n${codeBody}` : codeBody

        prologue += `const params = JSON.parse(${JSON.stringify(JSON.stringify(executionParams))});\n`
        prologueLineCount++
        prologue += `const environmentVariables = JSON.parse(${JSON.stringify(JSON.stringify(envVars))});\n`
        prologueLineCount++
        for (const [k, v] of Object.entries(contextVariables)) {
          prologue += `const ${k} = ${formatLiteralForCode(v, 'javascript')};\n`
          prologueLineCount++
        }

        const wrapped = [
          ';(async () => {',
          '  try {',
          '    const __sim_result = await (async () => {',
          `      ${codeBody.split('\n').join('\n      ')}`,
          '    })();',
          "    console.log('__SIM_RESULT__=' + JSON.stringify(__sim_result));",
          '  } catch (error) {',
          '    console.log(String((error && (error.stack || error.message)) || error));',
          '    throw error;',
          '  }',
          '})();',
        ].join('\n')
        const codeForE2B = importSection + prologue + wrapped

        const execStart = Date.now()
        const {
          result: e2bResult,
          stdout: e2bStdout,
          sandboxId,
          error: e2bError,
        } = await executeInE2B({
          code: codeForE2B,
          language: CodeLanguage.JavaScript,
          timeoutMs: timeout,
          sandboxFiles: _sandboxFiles,
        })
        const executionTime = Date.now() - execStart
        stdout += e2bStdout

        logger.info(`[${requestId}] E2B JS sandbox`, {
          sandboxId,
          stdoutPreview: e2bStdout?.slice(0, 200),
          error: e2bError,
        })

        if (e2bError) {
          const { formattedError, cleanedOutput } = formatE2BError(
            e2bError,
            e2bStdout,
            lang,
            resolvedCode,
            prologueLineCount + importLineCount
          )
          return NextResponse.json(
            {
              success: false,
              error: formattedError,
              output: { result: null, stdout: cleanedOutput, executionTime },
            },
            { status: 500 }
          )
        }

        return NextResponse.json({
          success: true,
          output: { result: e2bResult ?? null, stdout: cleanStdout(stdout), executionTime },
        })
      }

      let prologueLineCount = 0
      prologue += 'import json\n'
      prologueLineCount++
      prologue += `params = json.loads(${JSON.stringify(JSON.stringify(executionParams))})\n`
      prologueLineCount++
      prologue += `environmentVariables = json.loads(${JSON.stringify(JSON.stringify(envVars))})\n`
      prologueLineCount++
      for (const [k, v] of Object.entries(contextVariables)) {
        prologue += `${k} = ${formatLiteralForCode(v, 'python')}\n`
        prologueLineCount++
      }
      const wrapped = [
        'def __sim_main__():',
        ...resolvedCode.split('\n').map((l) => `  ${l}`),
        '__sim_result__ = __sim_main__()',
        "print('__SIM_RESULT__=' + json.dumps(__sim_result__))",
      ].join('\n')
      const codeForE2B = prologue + wrapped

      const execStart = Date.now()
      const {
        result: e2bResult,
        stdout: e2bStdout,
        sandboxId,
        error: e2bError,
      } = await executeInE2B({
        code: codeForE2B,
        language: CodeLanguage.Python,
        timeoutMs: timeout,
        sandboxFiles: _sandboxFiles,
      })
      const executionTime = Date.now() - execStart
      stdout += e2bStdout

      logger.info(`[${requestId}] E2B Py sandbox`, {
        sandboxId,
        stdoutPreview: e2bStdout?.slice(0, 200),
        error: e2bError,
      })

      if (e2bError) {
        const { formattedError, cleanedOutput } = formatE2BError(
          e2bError,
          e2bStdout,
          lang,
          resolvedCode,
          prologueLineCount
        )
        return NextResponse.json(
          {
            success: false,
            error: formattedError,
            output: { result: null, stdout: cleanedOutput, executionTime },
          },
          { status: 500 }
        )
      }

      return NextResponse.json({
        success: true,
        output: { result: e2bResult ?? null, stdout: cleanStdout(stdout), executionTime },
      })
    }

    const executionMethod = 'isolated-vm'

    const wrapperLines = ['(async () => {', '  try {']
    if (isCustomTool) {
      Object.keys(executionParams).forEach((key) => {
        wrapperLines.push(`    const ${key} = params.${key};`)
      })
    }
    userCodeStartLine = wrapperLines.length + 1

    let codeToExecute = resolvedCode
    let prependedLineCount = 0
    if (isCustomTool) {
      // Expose each param as a top-level const so custom tools can reference
      // params directly by name
      const paramKeys = Object.keys(executionParams)
      const paramDestructuring = paramKeys.map((key) => `const ${key} = params.${key};`).join('\n')
      codeToExecute = `${paramDestructuring}\n${resolvedCode}`
      prependedLineCount = paramKeys.length
    }

    const isolatedResult = await executeInIsolatedVM({
      code: codeToExecute,
      params: executionParams,
      envVars,
      contextVariables,
      timeoutMs: timeout,
      requestId,
      ownerKey: `user:${auth.userId}`,
      ownerWeight: 1,
    })

    const executionTime = Date.now() - startTime

    if (isolatedResult.error) {
      logger.error(`[${requestId}] Function execution failed in isolated-vm`, {
        error: isolatedResult.error,
        executionTime,
      })

      const ivmError = isolatedResult.error
      let adjustedLine = ivmError.line
      let adjustedLineContent = ivmError.lineContent
      if (prependedLineCount > 0 && ivmError.line !== undefined) {
        // Shift reported line numbers back past the prepended param declarations
        adjustedLine = Math.max(1, ivmError.line - prependedLineCount)
        const codeLines = resolvedCode.split('\n')
        if (adjustedLine <= codeLines.length) {
          adjustedLineContent = codeLines[adjustedLine - 1]?.trim()
        }
      }
      const enhancedError: EnhancedError = {
        message: ivmError.message,
        name: ivmError.name,
        stack: ivmError.stack,
        originalError: ivmError,
        line: adjustedLine,
        column: ivmError.column,
        lineContent: adjustedLineContent,
      }

      const userFriendlyErrorMessage = createUserFriendlyErrorMessage(
        enhancedError,
        requestId,
        resolvedCode
      )

      logger.error(`[${requestId}] Enhanced error details`, {
        originalMessage: ivmError.message,
        enhancedMessage: userFriendlyErrorMessage,
        line: enhancedError.line,
        column: enhancedError.column,
        lineContent: enhancedError.lineContent,
        errorType: enhancedError.name,
      })

      return NextResponse.json(
        {
          success: false,
          error: userFriendlyErrorMessage,
          output: {
            result: null,
            stdout: cleanStdout(isolatedResult.stdout),
            executionTime,
          },
          debug: {
            line: enhancedError.line,
            column: enhancedError.column,
            errorType: enhancedError.name,
            lineContent: enhancedError.lineContent,
            stack: enhancedError.stack,
          },
        },
        { status: 500 }
      )
    }

    stdout = isolatedResult.stdout
    logger.info(`[${requestId}] Function executed successfully using ${executionMethod}`, {
      executionTime,
    })

    return NextResponse.json({
      success: true,
      output: { result: isolatedResult.result, stdout: cleanStdout(stdout), executionTime },
    })
  } catch (error: any) {
    const executionTime = Date.now() - startTime
    logger.error(`[${requestId}] Function execution failed`, {
      error: error.message || 'Unknown error',
      stack: error.stack,
      executionTime,
    })

    const enhancedError = extractEnhancedError(error, userCodeStartLine, resolvedCode)
    const userFriendlyErrorMessage = createUserFriendlyErrorMessage(
      enhancedError,
      requestId,
      resolvedCode
    )

    logger.error(`[${requestId}] Enhanced error details`, {
      originalMessage: error.message,
      enhancedMessage: userFriendlyErrorMessage,
      line: enhancedError.line,
      column: enhancedError.column,
      lineContent: enhancedError.lineContent,
      errorType: enhancedError.name,
      userCodeStartLine,
    })

    const errorResponse = {
      success: false,
      error: userFriendlyErrorMessage,
      output: {
        result: null,
        stdout: cleanStdout(stdout),
        executionTime,
      },
      debug: {
        line: enhancedError.line,
        column: enhancedError.column,
        errorType: enhancedError.name,
        lineContent: enhancedError.lineContent,
        stack: enhancedError.stack,
      },
    }

    return NextResponse.json(errorResponse, { status: 500 })
  }
}